[strongSwan] High availability configuration

unite unite at openmailbox.org
Sun Feb 22 13:57:13 CET 2015


On 2015-02-21 20:52, Noel Kuntze wrote:
> 
> Hello Aleksey,
> 
> Currently, strongSwan only supports high availability in an
> active-active cluster. However, you can abuse it and make it
> active-passive by simply not using a multicast MAC address in the
> CLUSTERIP rule on the devices. That way, the SAs will be
> synchronized, but traffic will only be forwarded to one member of
> the cluster. Failover of the IP needs to be done by a cluster
> executive, and the new MAC address of the IP needs to be propagated
> either by the kernel or by the cluster executive. After the IP is
> assigned to the formerly passive, now active member, it will
> process the traffic.
> 
> In an active-active configuration, the multicast MAC address
> ensures that the traffic is always received by both nodes. A hash
> function over the layer-three address decides which host processes
> it. However, be aware that I had problems with multicast MAC
> addresses on some newer Juniper switches: they do not seem to
> handle those addresses or forward the traffic correctly.
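> 
> To illustrate, a typical two-node CLUSTERIP setup uses the same
> rule on both nodes, with only --local-node differing (the
> interface, IP and MAC below are placeholders, not from your setup):
> 
> ```
> # On node 1; use --local-node 2 on node 2. The multicast cluster
> # MAC makes the switch deliver the traffic to both nodes.
> iptables -A INPUT -i eth0 -d 192.0.2.10/32 -j CLUSTERIP --new \
>     --hashmode sourceip --clustermac 01:00:5e:00:01:01 \
>     --total-nodes 2 --local-node 1
> ```
> 
> For the active-passive variant described above, you would use a
> non-multicast MAC instead, assuming your kernel accepts one.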
> 
> Why do you want to use a CLUSTERIP rule two times on the same node?
> That doesn't make any sense.
> 
> The documentation only mentions IKEv2, but I think that IKEv1
> works, too.
> 
> Currently, it is not possible to make a tunnel depend on the state
> of another tunnel. However, you can leverage marks in Netfilter to
> choose which policies match the traffic. For this, see the mark
> match module and target in the iptables-extensions man page, and
> mark, mark_in and mark_out in the ipsec.conf and swanctl.conf man
> pages, as well as my email to this list on 2014-11-25[1]. Using
> this, you can have two active tunnels to the same subnets and use
> marks to select the tunnel you want to use. Failover can be handled
> by strongSwan through short DPD timeouts and retransmission values.
> Re-establishment of the tunnels can be done by it, too, through use
> of dpdaction. I do not know if this plays nice with auto=route.
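> 
> As a rough sketch, two such redundant, mark-based connections might
> look like this in ipsec.conf (the peers, subnets, mark values and
> timers are made up for illustration):
> 
> ```
> conn tunnel-primary
>     right=203.0.113.1
>     leftsubnet=10.0.1.0/24
>     rightsubnet=10.0.2.0/24
>     mark=1
>     dpddelay=10s
>     dpdaction=restart
>     auto=start
> 
> conn tunnel-backup
>     right=203.0.113.2
>     leftsubnet=10.0.1.0/24
>     rightsubnet=10.0.2.0/24
>     mark=2
>     dpddelay=10s
>     dpdaction=restart
>     auto=start
> ```
> 
> Traffic is then steered into one tunnel or the other by setting the
> corresponding mark with an iptables MARK target.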
> 
> A cluster executive that would support most needed features would be
> corosync[2].
> 
> [1] 
> https://lists.strongswan.org/pipermail/users/2014-November/006942.html
> [2] https://corosync.github.io/corosync/
> 
> Mit freundlichen Grüßen/Regards,
> Noel Kuntze
> 
> GPG Key ID: 0x63EC6658
> Fingerprint: 23CA BB60 2146 05E7 7278 6592 3839 298F 63EC 6658
> 
> On 20.02.2015 at 16:04, unite wrote:
>> Hi guys!
>> 
>> I have a couple of questions regarding strongSwan HA configuration.
>> 
>> I have the following topology:
>> I have two Debian wheezy nodes running strongSwan 5.2.1 installed 
>> from backports and a 3.16 kernel, also from wheezy backports. Here 
>> is part of the "ipsec statusall" output:
>> 
>> ipsec statusall
>> Status of IKE charon daemon (strongSwan 5.2.1, Linux 
>> 3.16.0-0.bpo.4-amd64, x86_64)
>> 
>> My two nodes receive routes from two ISPs via BGP. Both nodes run 
>> quagga, and the ISP's router is configured with two neighbors in my 
>> AS. The addressing between my external interfaces and the first 
>> ISP's gateway (with which the BGP sessions are held) is, for 
>> example: 192.168.1.1/24 - ISP gateway, 192.168.1.2 - my cluster 
>> node1, 192.168.1.3 - node2. Addressing for the second ISP is, let's 
>> assume, in the 192.168.2.0/24 net. My AS is bound, for example, to 
>> the net 1.1.1.0/24, and I have vlan 50 which contains these 
>> addresses.
>> 
>> So, for maximum reachability, I would like to configure strongSwan 
>> to use a source IP from my AS net, for example 1.1.1.50, so all 
>> tunnels would be initiated from this IP and, even if one ISP fails, 
>> my tunnels are still reachable. Is it possible to configure 
>> strongSwan in HA mode with such a configuration? I see two ways it 
>> might theoretically work:
>> 
>> 1) In an active/standby configuration, where for example all BGP 
>> traffic is handled by node1 - the ISP gateways forward traffic to 
>> node1 of my cluster, which receives VPN packets for destination 
>> 1.1.1.50, decrypts them and so on. In this setup all traffic is 
>> received by node1. If I want high availability, does the 
>> configuration differ from the simple one? Will the SAs be 
>> synchronized and will failover work correctly when only one node 
>> receives 100% of the traffic and no multicast is used (all traffic 
>> is received by node1; both nodes have the 1.1.1.50 address, so it 
>> won't also be forwarded to node2)?
>> 
>> 2) In an active/active configuration, where I configure my nodes to 
>> announce a virtual next-hop address to the ISP routers. This way 
>> both nodes receive connections in round-robin fashion; however, 
>> multicast still won't be used. Will this solution work correctly, 
>> will the SAs be correctly synchronized, and so on?
>> 
>> Also, for both cases (if they can work at all), I believe I need to 
>> make a somewhat unusual CLUSTERIP rule. If the address could be 
>> reached directly from the ISP, the CLUSTERIP rule would look like 
>> this:
>> ifconfig vlan50:0 1.1.1.50/24 up
>> iptables -A INPUT -i vlan50 -d 1.1.1.50/32 -j CLUSTERIP --new 
>> --total-nodes 2 --local-node 1
>> 
>> but assuming I have this configuration, I guess I need to change 
>> the incoming interface to the one on which packets from the ISPs 
>> are received, while the address 1.1.1.50 still belongs to vlan50. 
>> For example:
>> ifconfig vlan50:0 1.1.1.50/24 up
>> iptables -A INPUT -i eth0 -d 1.1.1.50/32 -j CLUSTERIP --new 
>> --total-nodes 2 --local-node 1
>> iptables -A INPUT -i eth1 -d 1.1.1.50/32 -j CLUSTERIP --new 
>> --total-nodes 2 --local-node 1
>> 
>> Am I right? Won't it cause any problems?
>> Also, should I patch the 3.16 kernel anyway, or is the needed 
>> CLUSTERIP+strongSwan patch already included in it?
>> 
>> 
>> Also, did I understand correctly that strongSwan can only use HA 
>> mode if IKEv2 is used for the tunnel?
>> 
>> Another question: is there a way to have redundant tunnels? What I 
>> mean is: I have two tunnels to two different peers, though they 
>> link the same subnets. The tunnel is built with one peer, but if it 
>> becomes unavailable the other tunnel should automatically be 
>> brought up. Is this possible using strongSwan utilities?
>> 
>> Thanks in advance.
>> 
> 
> 
> _______________________________________________
> Users mailing list
> Users at lists.strongswan.org
> https://lists.strongswan.org/mailman/listinfo/users

So... If I use the active/passive config without a multicast address, 
should my tunnel source address and the addresses on the VPN-linked 
subnets be present on the currently passive node? Or can I maintain 
these addresses using, for example, VRRP, so they exist only on the 
active node and are brought up on the passive one only in case of 
failure?
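
If VRRP is acceptable, I imagine keeping the tunnel source address on 
only the active node with a keepalived configuration roughly like 
this (the interface, VRID and priority are just examples based on my 
setup):

```
vrrp_instance VI_TUNNEL {
    state MASTER            # BACKUP on the passive node
    interface vlan50
    virtual_router_id 50
    priority 150            # use a lower priority on the passive node
    advert_int 1
    virtual_ipaddress {
        1.1.1.50/24
    }
}
```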

In the active/active config, I've written two CLUSTERIP rules because 
I'm not sure how to make it run correctly: eth0 points to ISP1 (the 
192.168.1.0/24 subnet), eth1 points to ISP2 (the 192.168.2.0/24 
subnet), and the tunnel source IP resides on the VLAN interface, for 
example vlan50:0 (1.1.1.50); the subnet for vlan50 is 1.1.1.0/24. I'm 
quite new to the iptables CLUSTERIP module. Is the input interface 
stated in the iptables rule strictly bound to the subnet on that 
interface? Or can it safely be omitted, so the rule is written 
without any input interface statement, just using the destination IP? 
Or should I create the CLUSTERIP rule with vlan50 as the input 
interface, since the corresponding subnet resides there?
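
In other words, could the rule simply be written like this, without 
any -i match at all (purely illustrative)?

```
iptables -A INPUT -d 1.1.1.50/32 -j CLUSTERIP --new \
    --hashmode sourceip --clustermac 01:00:5e:11:22:33 \
    --total-nodes 2 --local-node 1
```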

Also, assuming that routing is implemented using BGP, can I set up 
cluster IPs only on the external interfaces in the ISP-facing 
networks, and just create an interface alias for the tunnel source on 
the VLAN interface? I guess my explanation is quite unclear, so I'll 
try to describe it in a little more detail (using only one ISP in the 
example).

So:
Remote-Host(100.100.100.100)-----Internet--- ISP-Gateway(192.168.1.1)

The ISP gateway is in the same subnet as my two nodes:


NODE1 eth0(192.168.1.3)--------ISP-Gateway (192.168.1.1) ------------ 
NODE2 eth0(192.168.1.4)

The cluster IP for my two nodes will be 192.168.1.2 using CLUSTERIP 
(so traffic should be received by both nodes via multicast). Both 
node1 and node2 have the IP 1.1.1.50, which is the tunnel source for 
all of my tunnels, set up just as an alias interface without using 
CLUSTERIP (or should it also be a cluster IP?). So, for example, if 
we trace a packet from the host 100.100.100.100 to my 1.1.1.50 
address at the ISP-Gateway-to-my-cluster stage, the packet will hit 
the CLUSTERIP MAC (01:00:5e:11:22:33) on NODE1's interface eth0:0 
with a destination of 1.1.1.50 (having a source IP of 100.100.100.100 
and the ISP gateway's interface MAC as the source MAC). It will then 
be processed via interface vlan50:0 (1.1.1.50), which holds the 
tunnel source IP, and be further decrypted and passed through. At the 
same time, node2 should receive the same traffic via multicast but 
shouldn't process it. If another remote host initiates a second 
tunnel from the IP 200.200.200.200, the process should be the same, 
but the traffic would be processed by node2. Am I right? Would such a 
scheme work as expected?

Still, do I need to patch the 3.16 kernel to use the HA plugin? I 
tried setting up HA without patching the kernel and failed. As I've 
said, I installed strongSwan 5.2.1 from wheezy-backports. Is the 
repository version of strongSwan built with the HA plugin, or should 
I rebuild it manually with the plugin enabled? It is also quite 
possible that I have some configuration issues (I configured it 
following how-tos and active/active configuration examples). I'll 
provide my config files tomorrow so someone can hopefully point me to 
the source of the problem.
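
If I understand correctly, whether the packaged daemon loads the 
plugin can be checked with something like:

```
# The running daemon lists its plugins in the status output;
# "ha" should appear among them if the plugin is built and loaded.
ipsec statusall | grep -i "loaded plugins"
```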

Thanks for your help. If something is still unclear, tell me and I'll 
try to explain in more detail.

-- 
With kind regards,
Aleksey

