[strongSwan] IPv6 routing cycle
Andrej Podzimek
andrej at podzimek.org
Tue Nov 19 17:44:47 CET 2013
Hello,
my question might not be directly related to strongSwan... I'm facing a routing problem when I try to direct IPv6 traffic through a default gateway over IPSec. A road warrior (RW) and a server both run strongSwan 5.1.1 on Arch Linux. The RW is on an IPv4-only network behind NAT, and the goal is to give the RW full IPv6 access via the server, i.e., not only access to the server's own network.
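For illustration, the general shape of such a road-warrior setup (connection name, hostname and everything else here are placeholders, not my actual configuration) would be something like:

```shell
# /etc/ipsec.conf on the RW -- illustrative sketch, not the real config
conn rw-to-server
    left=%defaultroute
    leftsourceip=%config          # request a virtual (IPv6) address from the server
    right=server.example.org      # placeholder for the server's public IPv4 address
    rightsubnet=::/0              # tunnel *all* IPv6 traffic through the server
    auto=start
```

The key point is `rightsubnet=::/0`, i.e. the remote traffic selector covers all of IPv6, not just the server's subnet.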
The server and the RW can ping6 one another and the tunnel between them seems to work fine.
Unfortunately, the RW cannot use the server as a default gateway. Although the RW obtains a publicly routable IPv6 address from the right range, it can only ping6 the server's own IPv6 addresses; no other machines are reachable from the RW over IPv6. Conversely, other machines cannot ping6 the RW either, although IPv6 otherwise works fine for them.
The tcpdump output on the server (tcpdump -i any icmp6) is surprising: When I try to ping6 the RW through the server from outside, it seems that the server doesn't figure out that the packets need to be forwarded through the tunnel and resends them to itself until the hop limit is reached. A huge number of these cyclic retransmissions appears on tcpdump. Furthermore, I can see ICMPv6 redirects sent by the server that redirect the ping packets to the *same* IPv6 address over and over (both "from" and "to" are the same -- it's the address of the RW).
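In case it matters, these are the sysctls I would expect to be relevant on the server (forwarding must be on for all interfaces involved; disabling the acceptance of redirects is just a debugging precaution, and the interface name is a placeholder):

```shell
# /etc/sysctl.d/ipv6-forward.conf -- sketch of what I believe is relevant
net.ipv6.conf.all.forwarding = 1
net.ipv6.conf.eth0.accept_redirects = 0   # eth0 is a placeholder interface name
```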
Packets are silently swallowed in the other direction: Packets sent from the RW to a remote machine do *not* appear on the server's tcpdump at all, so for some reason they don't make it through the IPSec tunnel.
Otherwise routing works normally on the server: there is a LAN behind it with an Ethernet subnet, a WiFi AP subnet and an OpenVPN subnet, and clients on all of these subnets can use IPv6 just fine. So the IPSec problem is probably not just a glitch with disabled forwarding or the like. :-)
Additionally, the IPSec tunnel also carries private IPv4 addresses. Unlike IPv6, IPv4 works flawlessly there -- the RW can ping4 and access machines on the server's networks (including the server itself) and vice versa. (However, for IPv4, the server is not configured as the RW's default gateway.)
I set the RW's IPv6 default gateway to the server's 'src' IPv6 address (obtained from routing table 220 on the server). The RW can ping6 that address and the server can ping6 the RW back, so I thought the server could act as a default gateway for the RW. The routing table 220 entries created by strongSwan look OK on both the RW and the server.
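For completeness, this is roughly what I inspected (iproute2 commands; the gateway address below is a documentation-prefix placeholder, not my real address):

```shell
# On the server: routes and IPsec state installed by strongSwan
ip -6 route show table 220      # per-peer routes strongSwan maintains
ip -6 rule show                 # policy rule directing lookups to table 220
ip xfrm policy                  # IPsec policies covering the RW's address

# On the RW: default route via the server's 'src' address (placeholder shown)
ip -6 route add default via 2001:db8::1
```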
As for other possible causes of the problem, the server doesn't route "back through the same interface" in this case, because the IPSec-encapsulated IPv6 packets emerge from the IPv4-only eth0 interface, whereas IPv6 packets to/from remote machines use 6to4 tunnels -- separate virtual interfaces distinct from eth0.
As for routing, the RW's IPv{4,6} addresses have prefixes distinct from all the other networks interconnected by the server, so all traffic to/from the IPSec tunnels should simply be routed -- there should be no need for bridging, farp, or a hypothetical NDP equivalent of farp...
One more thing: I don't use {left,right}firewall, because I have my own (quite lengthy) iptables configuration that lets the IPSec traffic through. Disabling iptables completely did not help; the issue remained exactly the same -- packets disappearing in one direction and bouncing back wildly in the other.
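For the record, by "lets the IPSec traffic through" I mean rules of roughly this shape (a simplified excerpt, not my real ruleset; IKE and ESP arrive over IPv4 here since the RW sits on an IPv4-only network, while the decapsulated inner traffic is IPv6):

```shell
# IKE and NAT-T (the RW is behind NAT, so UDP 4500 carries the ESP) -- IPv4 side
iptables  -A INPUT -p udp -m multiport --dports 500,4500 -j ACCEPT

# Forward the decapsulated IPv6 traffic that matches an IPsec policy
ip6tables -A FORWARD -m policy --dir in  --pol ipsec -j ACCEPT
ip6tables -A FORWARD -m policy --dir out --pol ipsec -j ACCEPT
```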
Well, I'm running out of ideas and badly need some advice. :-)
Cheers,
Andrej