[strongSwan] FW: Traffic dropped when IKE initiation happen between two nodes simultaneously.

Emeric POUPON emeric.poupon at stormshield.eu
Mon May 4 15:55:28 CEST 2015


Hello,

Any ideas on this subject?

Regards,

----- Original Message -----
From: "Emeric POUPON" <emeric.poupon at stormshield.eu>
To: "Krishna G, Suhas (Nokia - IN/Bangalore)" <suhas.krishna_g at nokia.com>
Cc: users at lists.strongswan.org
Sent: Tuesday, April 14, 2015 15:27:28
Subject: Re: [strongSwan] FW: Traffic dropped when IKE initiation happen between two nodes simultaneously.

Hello,

Yes, you're right, this is not really the same issue, even if it has the same effect (blackholes, since the local and remote GWs lose synchronization).
In my case I still have some questions pending:
1) why do I manage to establish several IKE SAs with a remote GW despite the uniqueids option being set to yes?
It looks as if we initiate an IKE SA without checking whether another IKE SA has already been initiated using the same peer configuration.

2) of all the IKE_AUTH messages received on the responder side, only two lead to an IKE SA being destroyed (seen as duplicated), and no notification is sent

Maybe a member of the dev team has an idea on this?

As a workaround I have set uniqueids to no, but I lose the benefit of the INITIAL_CONTACT mechanism.
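
For reference, the workaround is just this in ipsec.conf (a minimal sketch; the conn sections are left unchanged):

config setup
        uniqueids=no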

Regards,

Emeric

----- Original Message -----
From: "Krishna G, Suhas (Nokia - IN/Bangalore)" <suhas.krishna_g at nokia.com>
To: "ext Emeric POUPON" <emeric.poupon at stormshield.eu>
Cc: users at lists.strongswan.org
Sent: Friday, April 10, 2015 11:19:22
Subject: RE: [strongSwan] FW: Traffic dropped when IKE initiation happen between two nodes simultaneously.

Hi Emeric,

Sorry for the late response. I was facing this problem with the old strongSwan versions 4.3.6 and 4.4. The behavior in the old strongSwan is that SPD entries are linked to SAD entries using reqids, and the reqids differ between SAs, as per my understanding. So no matter how many SAs there were, only the one with the same reqid as the SPD entry would be used for traffic. The problem here is that, on one end, the policy may refer to one set of SAs, while on the other end the policy may refer to a different set (note that both sets of SAs are present on both nodes). Hence traffic was being dropped because of this mismatch. However, I did not encounter this with the new strongSwan 5.2.2 version. I haven't had time to completely understand your issue, but I don't think it is related to mine. Anyway, I will get back to you if I find something.
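
As a rough sketch of the mismatch I mean (the reqid numbers here are made up for illustration):

node1: policy bound to reqid 1 -> only the SA pair with reqid 1 is used for this traffic
node2: policy bound to reqid 2 -> only the SA pair with reqid 2 is used for this traffic

Both SA pairs exist in the SAD of both nodes, but each node's policy points at a different pair, so what one end sends is not matched by the other end's policy and gets dropped.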

Regards
Suhas 

-----Original Message-----
From: ext Emeric POUPON [mailto:emeric.poupon at stormshield.eu] 
Sent: Wednesday, April 08, 2015 8:18 PM
To: Krishna G, Suhas (Nokia - IN/Bangalore)
Cc: users at lists.strongswan.org
Subject: Re: [strongSwan] FW: Traffic dropped when IKE initiation happen between two nodes simultaneously.

Hello,

I think I may have found a clue. It sounds like there is a problem with INITIAL_CONTACT.
Here is a "reduced" test case:

uniqueids=yes
strongSwan 5.2.2
FreeBSD 9.3

172.16.X.0/24 - 172.16.255.254 GW1 10.0.0.1 <----> 10.0.0.2 GW2 172.17.255.254 - 172.17.X.0/24

test:  10.0.0.1...10.0.0.2  IKEv2, dpddelay=30s
test:   local:  [172.18.0.1] uses pre-shared key authentication
test:   remote: [172.18.0.2] uses pre-shared key authentication
test:   child:  172.16.0.0/29 === 172.17.0.0/16 TUNNEL, dpdaction=hold
test2:   child:  172.16.0.8/29 === 172.17.0.0/16 TUNNEL, dpdaction=hold
test3:   child:  172.16.0.16/29 === 172.17.0.0/16 TUNNEL, dpdaction=hold
test4:   child:  172.16.0.24/29 === 172.17.0.0/16 TUNNEL, dpdaction=hold
test5:   child:  172.16.0.32/29 === 172.17.0.0/16 TUNNEL, dpdaction=hold
test6:   child:  172.16.0.40/29 === 172.17.0.0/16 TUNNEL, dpdaction=hold
test7:   child:  172.16.0.48/29 === 172.17.0.0/16 TUNNEL, dpdaction=hold
test8:   child:  172.16.0.56/29 === 172.17.0.0/16 TUNNEL, dpdaction=hold
test9:   child:  172.16.0.64/29 === 172.17.0.0/16 TUNNEL, dpdaction=hold

I send a lot of traffic in only one direction, from 172.16.X.0/24 to 172.17.X.0/24.
This triggers all the SPs installed in the kernel at the same time.

The kernel sends strongSwan a lot of ACQUIRE messages with different reqids.
strongSwan immediately queues 9 acquire jobs, one for each message received:
Apr  8 15:59:38 03[KNL] received an SADB_ACQUIRE
Apr  8 15:59:38 03[KNL] creating acquire job for policy 10.0.0.1/32 === 10.0.0.2/32 with reqid {1}
...
Apr  8 15:59:38 08[KNL] received an SADB_ACQUIRE
Apr  8 15:59:38 08[KNL] creating acquire job for policy 10.0.0.1/32 === 10.0.0.2/32 with reqid {2}
...

Then later I see things like this:
Apr  8 15:59:38 08[ENC] <test|4> generating IKE_AUTH request 1 [ IDi N(INIT_CONTACT) IDr AUTH N(ESP_TFC_PAD_N) SA TSi TSr N(EAP_ONLY) ]
...
Apr  8 15:59:38 10[ENC] <test|5> generating IKE_AUTH request 1 [ IDi N(INIT_CONTACT) IDr AUTH N(ESP_TFC_PAD_N) SA TSi TSr N(EAP_ONLY) ]
...
Apr  8 15:59:38 12[ENC] <test|1> generating IKE_AUTH request 1 [ IDi N(INIT_CONTACT) IDr AUTH N(ESP_TFC_PAD_N) SA TSi TSr N(EAP_ONLY) ]
...
(same thing for the 9 IKE SA)

On the responder side, strongSwan looks unhappy with that, but only for two IKE SAs:
Apr  8 15:58:21 10[IKE] <test|1> destroying duplicate IKE_SA for peer '10.0.0.1', received INITIAL_CONTACT
...
Apr  8 15:58:21 16[IKE] <test|2> destroying duplicate IKE_SA for peer '10.0.0.1', received INITIAL_CONTACT
...
(don't pay attention to the date/time, the clocks are not synchronized)

"ipsec statusall" shows the initiator gateway has 9 IKE SA, and the responder gateway has only 7 IKE SA remaining.
Some of the CHILD SA on the responder side are then destroyed, but I don't get any DELETE payload on initiator side, turning some CHILD SAs into blackholes.

Surprisingly, I don't find any INIT_CONTACT notify payloads when uniqueids is set to no (and therefore the issue does not occur in that case).
And why are only two IKE SAs deleted on the responder side?

Any help, thoughts, or acceptable workarounds on this subject would be greatly appreciated.

Best Regards,

Emeric


----- Original Message -----
From: "Emeric POUPON" <emeric.poupon at stormshield.eu>
To: "Krishna G, Suhas (NSN - IN/Bangalore)" <suhas.krishna_g at nsn.com>
Cc: users at lists.strongswan.org
Sent: Tuesday, April 7, 2015 13:10:46
Subject: Re: [strongSwan] FW: Traffic dropped when IKE initiation happen between two nodes simultaneously.

Hello,

I may have a similar, or at least related, issue here.
Here is the configuration:

172.16.X.0/24 - 172.16.255.254 GW1 10.0.0.1 <----> 10.0.0.2 GW2 172.17.255.254 - 172.17.X.0/24

The goal is to connect the subnets 172.16.X.0/24 and 172.17.X.0/24.
X ranges from 0 to 200; we have 200 connections defined in each configuration
(172.16.0.0/24 <=> 172.17.0.0/24, 172.16.1.0/24 <=> 172.17.1.0/24, ...).
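
One of these connections looks roughly like this (a minimal sketch showing only the addressing; the conn name and all other options are placeholders):

conn net0
        left=10.0.0.1
        right=10.0.0.2
        leftsubnet=172.16.0.0/24
        rightsubnet=172.17.0.0/24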

I send traffic from both sides and from all subnets "simultaneously".

If "uniqueids" is set to yes, we end up with very questionable situations:
- I get at least 4 or 5 IKE SA on each GW.
  -> each IKE SA is responsible for some CHILD SA. They do not seem to be duplicated.
  -> this is strange since they are supposed to be unique
- the number of ESTABLISHED IKE SA on each GW is not always the same at a same time (!)
  -> the CHILD SA negotiated from the extra remaining IKE SA are not usable, therefore traffic is getting dropped until DPD detects the IKE SA is dead

Setting uniqueids to "never" seems to correct the problem, but I am not sure about the side effects (still working on this).

What do you think?

Best Regards,
Emeric

----- Original Message -----
From: "Krishna G, Suhas (NSN - IN/Bangalore)" <suhas.krishna_g at nsn.com>
To: "ext Noel Kuntze" <noel at familie-kuntze.de>, users at lists.strongswan.org
Sent: Friday, February 13, 2015 08:14:16
Subject: Re: [strongSwan] FW: Traffic dropped when IKE initiation happen between two nodes simultaneously.

Hi, 
Can anyone please comment on this? I tested this with the new strongSwan version 5.2 and noticed the same behavior.
Regards 
Suhas 
-----Original Message----- 
From: users-bounces at lists.strongswan.org [ mailto:users-bounces at lists.strongswan.org ] On Behalf Of ext Krishna G, Suhas (NSN - IN/Bangalore) 
Sent: Monday, February 09, 2015 2:24 PM 
To: ext Noel Kuntze; users at lists.strongswan.org 
Subject: Re: [strongSwan] FW: Traffic dropped when IKE initiation happen between two nodes simultaneously. 
Hi Noel, 
One more observation with respect to the query below. With "uniqueids=yes", as you mentioned, only one pair of SAs is formed, but I found a limitation with respect to this. I configured IPsec (same setup as in my previous mail) between two nodes (peer1-------peer2) with the uniqueids=yes option in the ipsec.conf file on both ends. As expected, no SAs are duplicated when I run "ipsec up" multiple times from one end; old SAs are replaced by new ones. But when I run "ipsec up" for the first time from the other end, even though SAs are already established, another set of SAs is initiated, so two sets of SAs get established. No further SAs are duplicated after that, even after running "ipsec up" multiple times from either end. So there can exist at most two sets of SAs between the same endpoints with the "uniqueids=yes" option: the first set initiated from peer1 and the second set initiated by peer2. I therefore think there is no actual check for an SA already being established with the peer, only a check against the SAs initiated from one's own end, which is a drawback. Is this the same behavior in the new version of strongSwan?
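
To spell out the sequence (using the conn name r2~v2 from my configuration quoted below):

peer1# ipsec up r2~v2    # first set of SAs established
peer1# ipsec up r2~v2    # old set replaced, still only one set
peer2# ipsec up r2~v2    # a second set appears alongside the first
peer2# ipsec up r2~v2    # no further duplicates, from either end
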
Regards 
Suhas 
-----Original Message----- 
From: users-bounces at lists.strongswan.org [ mailto:users-bounces at lists.strongswan.org ] On Behalf Of ext Krishna G, Suhas (NSN - IN/Bangalore) 
Sent: Friday, February 06, 2015 12:12 PM 
To: ext Noel Kuntze; users at lists.strongswan.org 
Subject: Re: [strongSwan] FW: Traffic dropped when IKE initiation happen between two nodes simultaneously. 
Hi Noel, 
I forgot to ask one thing: does any configuration in the strongswan.conf file affect this? How exactly does the uniqueids mechanism work?
My strongswan.conf file for reference, just in case:
# strongswan.conf 
charon {
        reuse_ikesa=no
        install_routes=no
        block_threshold=50
        cookie_threshold=100
}
-----Original Message----- 
From: ext Noel Kuntze [ mailto:noel at familie-kuntze.de ] 
Sent: Thursday, February 05, 2015 12:07 AM 
To: Krishna G, Suhas (NSN - IN/Bangalore); users at lists.strongswan.org 
Subject: Re: [strongSwan] FW: Traffic dropped when IKE initiation happen between two nodes simultaneously. 
Hello Krishna, 
Yes, that is relevant. I have a net-to-net setup here with the newest strongSwan version and PSK authentication, which does not show this bad behaviour.
You might want to try a newer version. 
Kind regards/Regards,
Noel Kuntze 
GPG Key ID: 0x63EC6658 
Fingerprint: 23CA BB60 2146 05E7 7278 6592 3839 298F 63EC 6658 
On 04.02.2015 at 11:07, Krishna G, Suhas (NSN - IN/Bangalore) wrote:
> Hi Noel, 
> 
> Thanks for the quick response. I tested with the combination of changes you suggested but am still facing the same issue. I found a thread relating to this: http://permalink.gmane.org/gmane.network.vpn.strongswan.devel/671 
> Is this of any relevance? Does charon not check for duplicate SAs and delete them? The duplicate SAs persist even after rekeying.
> 
> Regards 
> Suhas Krishna 
> 
> -----Original Message----- 
> From: users-bounces at lists.strongswan.org [ mailto:users-bounces at lists.strongswan.org ] On Behalf Of ext Noel Kuntze 
> Sent: Wednesday, February 04, 2015 1:23 AM 
> To: users at lists.strongswan.org 
> Subject: Re: [strongSwan] FW: Traffic dropped when IKE initiation happen between two nodes simultaneously. 
> 
> 
> Hello Krishna,
> 
> You set "uniqueids=no". That causes that behaviour. 
> Use "uniqueids=yes", "uniqueids=keep" or "uniqueids=replace". 
> 
> Kind regards/Regards,
> Noel Kuntze 
> 
> GPG Key ID: 0x63EC6658 
> Fingerprint: 23CA BB60 2146 05E7 7278 6592 3839 298F 63EC 6658 
> 
> On 03.02.2015 at 11:05, Krishna G, Suhas (NSN - IN/Bangalore) wrote:
> > Hi, 
> 
> > I am testing a simple scenario using ikev2. The setup is as follows: 
> 
> > (Traffic generator2)30.0.0.1-------(30.0.0.2)node2(20.0.0.1)----------(20.0.0.2)node1(40.0.0.1)------------40.0.0.2(Traffic generator1) 
> > (interfaces: 30.0.0.2 is eth2 on node2, the 20.0.0.x link is vlan201 on eth3 of both nodes, 40.0.0.1 is eth2 on node1)
> 
> > Node1: 
> > # ipsec.conf 
> 
> > config setup
> >         charonstart=yes
> >         plutostart=no
> >         uniqueids=no
> >         charondebug="knl 0,enc 0,net 0"
> >
> > conn %default
> >         auto=route
> >         keyexchange=ikev2
> >         reauth=no
> >
> > conn r2~v2
> >         rekeymargin=150
> >         rekeyfuzz=100%
> >         left=20.0.0.2
> >         right=20.0.0.1
> >         leftsubnet=40.0.0.2/32
> >         rightsubnet=30.0.0.1/32
> >         authby=secret
> >         leftid=20.0.0.2
> >         rightid=%any
> >         ike=aes128-sha1-modp1024!
> >         esp=aes128-sha1!
> >         type=tunnel
> >         ikelifetime=2000s
> >         keylife=1500s
> >         mobike=no
> >         auto=route
> >         reauth=no
> 
> > addresses configured: 
> > 1. vlan201 at eth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP 
> > link/ether 00:30:64:26:2f:5f brd ff:ff:ff:ff:ff:ff 
> > inet 20.0.0.2/24 brd 20.0.0.255 scope global vlan201 
> > inet6 fe80::30:6400:a26:2f5f/64 scope link 
> > valid_lft forever preferred_lft forever 
> 
> > 2. eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000 
> > link/ether 00:30:64:26:2f:5e brd ff:ff:ff:ff:ff:ff 
> > inet 40.0.0.1/24 brd 40.0.0.255 scope global eth2 
> > inet6 fe80::30:6400:426:2f5e/64 scope link 
> > valid_lft forever preferred_lft forever 
> 
> 
> > routes: 
> > 40.0.0.0/24 dev eth2 proto kernel scope link src 40.0.0.1 
> > 20.0.0.0/24 dev vlan201 proto kernel scope link src 20.0.0.2 
> > 30.0.0.0/24 via 20.0.0.1 dev vlan201 proto gated 
> 
> 
> 
> > Node2 : 
> > # ipsec.conf 
> 
> > config setup
> >         charonstart=yes
> >         plutostart=no
> >         uniqueids=no
> >         charondebug="knl 0,enc 0,net 0"
> >
> > conn %default
> >         auto=route
> >         keyexchange=ikev2
> >         reauth=no
> >
> > conn r2~v2
> >         rekeymargin=150
> >         rekeyfuzz=100%
> >         left=20.0.0.1
> >         right=20.0.0.2
> >         leftsubnet=30.0.0.1/32
> >         rightsubnet=40.0.0.2/32
> >         authby=secret
> >         leftid=20.0.0.1
> >         rightid=%any
> >         ike=aes128-sha1-modp1024!
> >         esp=aes128-sha1!
> >         type=tunnel
> >         ikelifetime=2000s
> >         keylife=1500s
> >         dpdaction=clear
> >         dpddelay=20
> >         mobike=no
> >         auto=route
> >         reauth=no
> 
> 
> > addresses configured: 
> > 1. vlan201 at eth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP 
> > link/ether 00:30:64:26:32:02 brd ff:ff:ff:ff:ff:ff 
> > inet 20.0.0.1/24 brd 20.0.0.255 scope global vlan201 
> > inet6 fe80::30:6400:a26:3202/64 scope link 
> > valid_lft forever preferred_lft forever 
> 
> 
> > 2. eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000 
> > link/ether 00:30:64:26:32:01 brd ff:ff:ff:ff:ff:ff 
> > inet 30.0.0.2/24 brd 30.0.0.255 scope global eth2 
> > inet6 fe80::30:6400:426:3201/64 scope link 
> > valid_lft forever preferred_lft forever 
> 
> 
> > routes: 
> > 40.0.0.0/24 via 20.0.0.2 dev vlan201 proto gated 
> > 20.0.0.0/24 dev vlan201 proto kernel scope link src 20.0.0.1 
> > 30.0.0.0/24 dev eth2 proto kernel scope link src 30.0.0.2 
> 
> 
> > In my setup, I am pumping traffic from both ends simultaneously. I see that IKE initiations happen simultaneously from both ends and two pairs of SAs are formed instead of one, as shown below:
> 
> > 20.0.0.2 20.0.0.1 
> > esp mode=tunnel spi=3303990082(0xc4eee342) reqid=1(0x00000001) 
> > E: aes-cbc 2d2d6603 aa9bc830 1c3ee36a d964b1f1 
> > A: hmac-sha1 3889f511 69cd3c4e 6f416739 e5c685cc 3f316067 
> > seq=0x00000000 replay=64 flags=0x00000000 state=mature 
> > created: Jan 23 20:22:13 2015 current: Jan 23 20:22:37 2015 
> > diff: 24(s) hard: 300(s) soft: 268(s) 
> > last: Jan 23 20:22:13 2015 hard: 0(s) soft: 0(s) 
> > current: 285648670(bytes) hard: 0(bytes) soft: 0(bytes) 
> > allocated: 283945 hard: 0 soft: 0 
> > sadb_seq=1 pid=24064 refcnt=0 
> > 20.0.0.1 20.0.0.2 
> > esp mode=tunnel spi=3422609051(0xcc00de9b) reqid=1(0x00000001) 
> > E: aes-cbc 37be21d3 79d00867 968bcc4e 21c3a5c8 
> > A: hmac-sha1 f46a45e7 c3b90b4e 20e3e68e 782a8b48 5d2d7758 
> > seq=0x00000000 replay=64 flags=0x00000000 state=mature 
> > created: Jan 23 20:22:13 2015 current: Jan 23 20:22:37 2015 
> > diff: 24(s) hard: 300(s) soft: 265(s) 
> > last: hard: 0(s) soft: 0(s) 
> > current: 0(bytes) hard: 0(bytes) soft: 0(bytes) 
> > allocated: 0 hard: 0 soft: 0 
> > sadb_seq=2 pid=24064 refcnt=0 
> > 20.0.0.2 20.0.0.1 
> > esp mode=tunnel spi=3272081281(0xc307ff81) reqid=2(0x00000002) 
> > E: aes-cbc 6c9cbd30 0aa302bb 9741ca7f 231ce550 
> > A: hmac-sha1 9c21160b a03990f5 a07d2c29 a18d8b7f 02c020a7 
> > seq=0x00000000 replay=64 flags=0x00000000 state=mature 
> > created: Jan 23 20:22:13 2015 current: Jan 23 20:22:37 2015 
> > diff: 24(s) hard: 300(s) soft: 264(s) 
> > last: Jan 23 20:22:13 2015 hard: 0(s) soft: 0(s) 
> > current: 20120(bytes) hard: 0(bytes) soft: 0(bytes) 
> > allocated: 20 hard: 0 soft: 0 
> > sadb_seq=3 pid=24064 refcnt=0 
> > 20.0.0.1 20.0.0.2 
> > esp mode=tunnel spi=3466205953(0xce9a1b01) reqid=2(0x00000002) 
> > E: aes-cbc 465a0a5f 454ffbcc d4a63bf7 f3f102e5 
> > A: hmac-sha1 36cefc1d 6c9729fe 4a142a0d 66033097 4b6e9d3a 
> > seq=0x00000000 replay=64 flags=0x00000000 state=mature 
> > created: Jan 23 20:22:13 2015 current: Jan 23 20:22:37 2015 
> > diff: 24(s) hard: 300(s) soft: 261(s) 
> > last: Jan 23 20:22:13 2015 hard: 0(s) soft: 0(s) 
> > current: 285656718(bytes) hard: 0(bytes) soft: 0(bytes) 
> > allocated: 283953 hard: 0 soft: 0 
> > sadb_seq=0 pid=24064 refcnt=0 
> 
> 
> > Due to this, there is a 100% traffic drop seen at both ends. I referred to a similar query posted at https://lists.strongswan.org/pipermail/users/2012-October/003765.html but no conclusion was drawn from it.
> 
> > According to my investigation, the two nodes are using different sets of SAs for communication, resulting in the problem (note in the dump above that 0xc4eee342 belongs to the reqid=1 pair while 0xce9a1b01 belongs to the reqid=2 pair). A tcpdump of the packets flowing is as below:
> 
> > 20:23:48.400585 IP 20.0.0.2 > 20.0.0.1: ESP(spi=0xc4eee342,seq=0x11556a), length 1044 
> > 20:23:48.400629 IP 20.0.0.1 > 20.0.0.2: ESP(spi=0xce9a1b01,seq=0x115573), length 1044 
> > 20:23:48.400669 IP 20.0.0.2 > 20.0.0.1: ESP(spi=0xc4eee342,seq=0x11556b), length 1044 
> > 20:23:48.400713 IP 20.0.0.1 > 20.0.0.2: ESP(spi=0xce9a1b01,seq=0x115574), length 1044 
> > 20:23:48.400752 IP 20.0.0.2 > 20.0.0.1: ESP(spi=0xc4eee342,seq=0x11556c), length 1044 
> > 20:23:48.400796 IP 20.0.0.1 > 20.0.0.2: ESP(spi=0xce9a1b01,seq=0x115575), length 1044 
> > 20:23:48.400836 IP 20.0.0.2 > 20.0.0.1: ESP(spi=0xc4eee342,seq=0x11556d), length 1044 
> > 20:23:48.400881 IP 20.0.0.1 > 20.0.0.2: ESP(spi=0xce9a1b01,seq=0x115576), length 1044 
> > 20:23:48.400919 IP 20.0.0.2 > 20.0.0.1: ESP(spi=0xc4eee342,seq=0x11556e), length 1044 
> > 20:23:48.400963 IP 20.0.0.1 > 20.0.0.2: ESP(spi=0xce9a1b01,seq=0x115577), length 1044 
> > 20:23:48.401003 IP 20.0.0.2 > 20.0.0.1: ESP(spi=0xc4eee342,seq=0x11556f), length 1044 
> > 20:23:48.401047 IP 20.0.0.1 > 20.0.0.2: ESP(spi=0xce9a1b01,seq=0x115578), length 1044 
> 
> 
> > Is there any fix for this issue? The scenario of simultaneous IKE initiations happening for the first time, when the tunnel is being established, is something which I feel is not addressed.
> 
> 
> > Regards 
> > Suhas Krishna 
> 
> 
> 

_______________________________________________
Users mailing list
Users at lists.strongswan.org
https://lists.strongswan.org/mailman/listinfo/users

