<div dir="ltr"><div>Hi John,</div><div><br></div><div>Let's assume that your network deployment (for the IPsec tunnel) is as below.<br><br>Note: the values shown are the MTUs of the connected interfaces.<br><br>[appliance1]1500mtu-----1500mtu[openwrt-router1]1500mtu-------[internet]-----1500mtu[openwrt-router2]1500mtu-----1500mtu[appliance2]1500mtu<br><br>In this case the xfrm-based IPsec tunnel is between router1 and router2.<br><br>Below are some points to consider in order to understand what is happening, and why the iptables mangle rule for TCPMSS is used for "MSS clamping" in each direction of the TCP connection.<br><br><br>1. You mentioned that the TCP traffic between the appliances flowing via the IPsec tunnel uses a large packet size.<br><br>a) This means that the MSS negotiated between appliance1 and appliance2 will always be set to 1460 bytes (1500 - 40 = 1460).<br><br>- Note: the 40 bytes are the 20-byte TCP header plus the 20-byte IP header.<br><br>2. Another point to note is that hosts/gateways/routers all have PMTUD (path MTU discovery) enabled by default, which means every TCP/UDP connection "initiated" from them will have the DF bit set.<br><br>- To disable PMTUD, set "/proc/sys/net/ipv4/ip_no_pmtu_disc" to 1: "echo 1 > /proc/sys/net/ipv4/ip_no_pmtu_disc"<br>- This ensures that TCP/UDP packets are NOT sent with the DF bit set.<br><br>3. With reference to the IPsec tunnel (using xfrm interfaces) established on each router (router1/router2): once the tunnel is established, there is invariably an IPsec-SA MTU set for the outbound SA, derived from the MTU of the outbound interface (the WAN interface, 1500 in this case); its value depends on the encryption algorithm used (say AES256, for example) and the WAN interface MTU.<br><br>a) I am not entirely sure where exactly the IPsec-SA MTU set for a tunnel (with a specific algorithm) can be inspected, but from my recollection, for the AES256 algorithm the IPsec-SA MTU would be approximately 1422 (1500 - <all the encryption/ESP overhead applied>) for all outbound ESP packets.<br><br>b) So if appliance1 were a host following the PMTUD standards, when it sends a TCP/UDP packet (with the DF bit set) of, say, 1500 bytes, and this arrives at router1, matches the IPsec tunnel policy, and needs to be forwarded through the tunnel to router2, then before encryption a check is done against the IPsec-SA MTU for the tunnel, which would be 1422.
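The rough arithmetic behind that figure can be sketched as below. This is only an estimate under assumed algorithms (AES-256-CBC with a truncated HMAC-SHA1 ICV); the exact overhead varies per cipher, and real kernels also reserve room for CBC block-alignment padding, so the effective SA MTU can come out a bit lower (hence values like 1422):

```shell
# Rough IPsec-SA MTU estimate for ESP in tunnel mode
# (assumption: AES-256-CBC + HMAC-SHA1; not taken from any live SA)
WAN_MTU=1500
OUTER_IP=20   # outer IPv4 header added in tunnel mode
ESP_HDR=8     # SPI (4 bytes) + sequence number (4 bytes)
IV=16         # AES-CBC initialisation vector
TRAILER=2     # pad-length + next-header bytes (plus up to 15 bytes block padding)
ICV=12        # truncated HMAC-SHA1 integrity check value
SA_MTU=$((WAN_MTU - OUTER_IP - ESP_HDR - IV - TRAILER - ICV))
echo "approximate IPsec-SA MTU: $SA_MTU"   # prints 1442 before block padding
```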
<br><br>- So in this case router1 would send an ICMP unreachable message (type 3 / code 4, "fragmentation needed", carrying an MTU value of 1422) back TO appliance1.<br><br>- If appliance1 were following the standards, the ICMP packet-too-big message would trigger it to "re-negotiate" the TCP connection with a reduced MSS of 1422 - 40 = 1382 bytes.<br><br>- The same process is expected to happen at the other end, where the appliance2 TCP host is connected.<br><br>- This ensures that the TCP data connection uses a maximum packet size of 1382 bytes of TCP payload + 40 = 1422 bytes, avoiding fragmentation at the IPsec tunnel in the outbound direction.<br><br>Note: for UDP connections, if appliance1 were following standards, the ICMP packet-too-big message would cause appliance1 itself to fragment the large packet into two fragments, which after reassembly at router1 would be no more than 1422 bytes. This also ensures that there is NO fragmentation at the IPsec tunnel in the outbound direction.<br><br><br>4. In your case, both appliances are misbehaving: they are not following the standards, they ignore the PMTU ICMP messages, AND of course they send traffic with the DF bit set. So:<br><br>a) You have correctly applied one of the solutions to avoid fragmentation for TCP connections: MSS clamping in both directions, applied during the TCP handshake negotiation (the TCP control connection):<br><br>iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -o xfrm0 -j TCPMSS --set-mss 1240<br>iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -i xfrm0 -j TCPMSS --set-mss 1240<br><br>b) I believe you have applied the above only on router1. You could also apply the same on router2, if you can.<br><br>c) What this MSS clamping does is rewrite the MSS value in the outgoing TCP SYN packet from appliance1 to appliance2 to 1240. This informs appliance2 that appliance1 is capable of processing TCP data packets with a maximum segment size of 1240 ONLY, so appliance2 will always send TCP packets of at most 1240 + 40 = 1280 bytes.<br>- The same happens in the other direction, so appliance1 likewise always sends TCP data packets with a maximum size of 1280 bytes (1240 + 40).<br><br>Note: MSS clamping is generally applied in POSTROUTING, but if the above works in FORWARD, do continue with it. (Note that "-i" cannot be used in the POSTROUTING chain, so the inbound rule would have to go in PREROUTING instead.)<br><br>#iptables -t mangle -A POSTROUTING -p tcp --tcp-flags SYN,RST SYN -o xfrm0 -j TCPMSS --set-mss 1240<br>#iptables -t mangle -A PREROUTING -p tcp --tcp-flags SYN,RST SYN -i xfrm0 -j TCPMSS --set-mss 1240<br><br>d) Some things to check after applying the MSS clamping:<br><br>- Capture the TCP session packets flowing between appliance1 and router1's LAN interface, and<br>- check whether the MSS value is actually negotiated down to 1240, or whether both appliances continue to set their MSS to 1460, ignoring the clamping.<br><br>Note: with this clamping value, an MSS of 1240 means the IP/TCP packets generated by appliance1/appliance2 will be 1240 + 40 = 1280 bytes, which is below both the WAN interface MTU of 1500 and the IPsec-SA MTU of about 1422 (if it is set/used at all, with the AES256 algorithm). This should ideally result in NO fragmentation at all.<br><br>e) BUT if, in spite of the MSS clamping, the appliances continue to send TCP packets with an MSS of 1460 AND the DF bit set, then there will be fragmentation: at least post-IPsec ESP fragmentation in the outbound direction on each of the IPsec peer routers.<br><br><br>5. Another question to consider: what about the UDP traffic generated by each of the appliances?
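One aside on point 4 before moving on (this is my addition, not from your setup): the TCPMSS target can also derive the clamp value from the route's path MTU instead of a hard-coded 1240, and the capture check in 4(d) only needs to look at SYN packets, since those carry the MSS option. The interface name "br-lan" below is an assumption (the usual OpenWrt LAN bridge); adjust it to your deployment:

```shell
# Variant of the clamping rule: derive the MSS from the discovered PMTU
# (only meaningful in the -o direction, where an outbound route exists)
iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -o xfrm0 \
    -j TCPMSS --clamp-mss-to-pmtu

# Check for point 4(d): capture SYN packets on the LAN side and read the
# negotiated MSS from tcpdump's verbose output ("br-lan" is assumed)
tcpdump -ni br-lan -v 'tcp[tcpflags] & tcp-syn != 0'
```

If the captured SYNs still show "mss 1460" toward the appliances despite the rule, the clamping is not taking effect on that path.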
Are they generating large, unfragmented packets of 1500 bytes each, AND always with the DF bit set?<br>- In that case there is no clamping that can be done, except to apply the final alternative solution, which applies to both TCP and UDP traffic: clearing the DF bit in all of the TCP/UDP packets generated by the appliances.<br><br>a) This can most probably be done by disabling PMTU discovery on both appliances, as below:<br><br>"echo 1 > /proc/sys/net/ipv4/ip_no_pmtu_disc"<br>- This ensures that TCP/UDP packets are NOT sent with the DF bit set.<br><br>b) The above should be possible on Linux/Unix systems, if that is what the appliances are running.<br><br><br>6. Now, coming to another important point you asked about: how to "clear the DF bit" of the inbound plain TCP/UDP traffic/connections before they are encrypted into the IPsec tunnel.<br><br>a) As such you cannot do this on router1, but you should be able to on the two appliances, as mentioned in point 5 above.<br><br>b) FYI, in IPsec tunnels, per the RFC standard, implementations handle the below "during encryption with ESP":<br><br>i) Copy the DF bit (if set) from the inner IP header (of the plain TCP/UDP packet) to the outer IP header of the ESP packet generated by the router/gateway.<br><br>ii) If the DF bit is set in the inner IP header of the plain TCP/UDP packet before encryption, the DF bit could also be "cleared" in the outer header, if implemented in the IPsec engine locally on the router/gateway.<br><br>iii) If the DF bit is NOT set in the inner IP header of the plain TCP/UDP packet before encryption, a "set DF bit" setting can be applied to the outer IP header of the ESP packet.<br><br>- Generally it is the "copy DF bit from inner IP header to outer IP header of the ESP packet" behaviour that is always implemented, as a MUST (per RFC requirements).<br><br>- BUT this does not mean that it will prevent/clear the DF bit of the incoming plain TCP/UDP packets coming from the appliances before encryption.<br><br><br>c) So FYI, since you are using XFRM interfaces with strongSwan IKEv2 and specifically swanctl.conf, you may try the below setting to "clear the DF bit in the outer IP header of the ESP packets":<br><br>connections.<conn>.children.<child>.copy_df (since 5.7.0; "yes" by default)<br><br>- Whether to copy the DF bit to the outer IPv4 header in tunnel mode.<br><br>Set this as:<br><br>connections.<conn>.children.<child>.copy_df = no<br><br><br>Hope the above info helps somewhat.<br><br>thanks & regards<br>Rajiv<br></div><div><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Dec 15, 2021 at 7:35 AM Noel Kuntze <noel.kuntze+strongswan-users-ml@thermi.consulting> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hello John,<br>
<br>
I am not aware of if the kernel tracks the assigned TCP MSS of the connections it knows of.<br>
Conntrack does not have that information. So it's a good question why exactly that happens.<br>
<br>
Can you double check if there is not maybe something like a local proxy running that could<br>
be the cause of that? Also, what is the currently set MTU on the interface?<br>
Does it coincide with the MSS (taking the TCP overhead into account)?<br>
<br>
I agree that it is likely extremely fragile. A good way would be a userspace proxy, like squid.<br>
Squid knows about conntrack, so can transparently proxy connections, even without tproxy (speaking from memories).<br>
<br>
Kind regards<br>
Noel<br>
<br>
<br>
Am 03.12.21 um 15:35 schrieb John Marrett:<br>
> I am working on a VPN solution connecting some appliances on two<br>
> different networks. I’m using an x86 openwrt router with strongswan<br>
> 5.9.2 and kernel 5.4.154. The systems I am connecting exhibit<br>
> non-compliant TCP MSS behaviour. They are, for unknown reasons,<br>
> ignoring the MSS from their peers and sending oversized packets. They<br>
> also ignore ICMP unreachable messages indicating path MTU, I have<br>
> confirmed that the ICMP unreachable messages are not blocked and they<br>
> have been captured directly on the system sending the problematic<br>
> traffic. I do not have control over the appliances and need to solve<br>
> the issues at the network level.<br>
> <br>
> I'm using a modern IKEv2 / XFRM based configuration for this VPN. I<br>
> would like to ignore the DF bit and fragment traffic passing through<br>
> the VPN tunnel. This fragmentation could occur before or after<br>
> encapsulation, it's not significant to me.<br>
> <br>
> If I was using a GRE tunnel I could use the ignore-df configuration<br>
> [1], however there doesn't appear to be an equivalent with an xfrm<br>
> interface.<br>
> <br>
> I have managed to "solve" my problem, though I do not understand the<br>
> solution or how it works. If I create the following iptables rule to<br>
> adjust the MSS on traffic traversing the xfrm interface:<br>
> <br>
> iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -o xfrm0<br>
> -j TCPMSS --set-mss 1240<br>
> iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -i xfrm0<br>
> -j TCPMSS --set-mss 1240<br>
> <br>
> Then, in addition to the expected modification of the mss field, my<br>
> TCP traffic will be fragmented, ignoring the DF bit.<br>
> <br>
> Here's an excerpt of traffic in ingress to the router:<br>
> <br>
> 09:23:56.103022 IP 10.1.34.10.5060 > 10.1.61.20.25578: Flags [P.], seq<br>
> 883:1906, ack 1760, win 260, length 1023<br>
> 09:23:56.119864 IP 10.1.61.20.25578 > 10.1.34.10.5060: Flags [.], ack<br>
> 1906, win 501, length 0<br>
> 09:24:01.448960 IP 10.1.34.10.5060 > 10.1.61.20.25578: Flags [P.], seq<br>
> 1906:3271, ack 1760, win 260, length 1365<br>
> 09:24:01.467771 IP 10.1.61.20.25578 > 10.1.34.10.5060: Flags [.], ack<br>
> 3148, win 501, length 0<br>
> 09:24:01.467810 IP 10.1.61.20.25578 > 10.1.34.10.5060: Flags [.], ack<br>
> 3271, win 501, length 0<br>
> <br>
> And egress on the xfrm interface (In addition to being sent over a VPN<br>
> connect the traffic is also being NATed by the VPN router):<br>
> <br>
> 09:23:56.103150 IP 10.2.30.1.5060 > 10.2.2.6.25578: Flags [P.], seq<br>
> 881:1902, ack 1750, win 260, length 1021<br>
> 09:23:56.119828 IP 10.2.2.6.25578 > 10.2.30.1.5060: Flags [.], ack<br>
> 1902, win 501, length 0<br>
> 09:24:01.449067 IP 10.2.30.1.5060 > 10.2.2.6.25578: Flags [.], seq<br>
> 1902:3142, ack 1750, win 260, length 1240<br>
> 09:24:01.449135 IP 10.2.30.1.5060 > 10.2.2.6.25578: Flags [P.], seq<br>
> 3142:3265, ack 1750, win 260, length 123<br>
> 09:24:01.467724 IP 10.2.2.6.25578 > 10.2.30.1.5060: Flags [.], ack<br>
> 3142, win 501, length 0<br>
> 09:24:01.467725 IP 10.2.2.6.25578 > 10.2.30.1.5060: Flags [.], ack<br>
> 3265, win 501, length 0<br>
> <br>
> The packet with length 1365 has been split into a packet of 1240 bytes<br>
> and a second of 123.<br>
> <br>
> Without these rules I see the expected behaviour, the packets are<br>
> dropped and ICMP unreachable messages are sent indicating the path<br>
> MTU.<br>
> <br>
> Is anyone able to explain why, in addition to adjusting the MSS, this<br>
> mangle configuration is allowing fragmentation ignoring the DF bit?<br>
> While the solution is working as I need it to, I'm concerned that it<br>
> may be extremely fragile.<br>
> <br>
> Is there a better way to solve this problem?<br>
> <br>
> Thanks in advance for any help you can offer,<br>
> <br>
> -JohnF<br>
> <br>
> [1] <a href="https://man7.org/linux/man-pages/man8/ip-tunnel.8.html" rel="noreferrer" target="_blank">https://man7.org/linux/man-pages/man8/ip-tunnel.8.html</a><br>
</blockquote></div></div>