[strongSwan] tunnels keep going down
Catscrash
catscrash at catscrash.de
Wed Oct 9 11:23:19 CEST 2019
Hi,
since upgrading from Debian 7 to Debian 9, and therefore from strongSwan
4.5.2 to 5.5.1, I now have recurring issues with several tunnels that
had been very stable before.
After a restart, a tunnel is perfectly fine:
============
sudo ipsec status s2s_xxx
Routed Connections:
s2s_xxx{28}: ROUTED, TUNNEL, reqid 28
s2s_xxx{28}: yy.yy.yy.yy/29 === zz.zz.zz.zz/32
Security Associations (33 up, 0 connecting):
s2s_xxx[26]: ESTABLISHED 12 minutes ago,
OWNPUBIP[OWNPUBIP]...CUSTPUBIP[CUSTPUBIP]
s2s_xxx{87}: INSTALLED, TUNNEL, reqid 28, ESP SPIs: cdf13e25_i
cc85f09e_o
s2s_xxx{87}: yy.yy.yy.yy/29 === zz.zz.zz.zz/32
============
but at some point it goes down: the CHILD_SA disappears and only the IKE_SA remains
============
sudo ipsec status s2s_xxx
Security Associations (33 up, 0 connecting):
s2s_xxx[173]: ESTABLISHED 18 hours ago,
OWNPUBIP[OWNPUBIP]...CUSTPUBIP[CUSTPUBIP]
============
The tunnel is configured like this:
============
conn s2s_xxx
    type=tunnel
    left=OWNPUBIP
    leftsubnet=yy.yy.yy.yy/29
    leftfirewall=yes
    leftid=OWNPUBIP
    right=CUSTPUBIP
    rightsubnet=zz.zz.zz.zz/32
    rightid=CUSTPUBIP
    auto=start
    compress=no
    # Phase 1
    keyexchange=ikev1
    authby=secret
    ike=aes256-sha1-modp1536!
    ikelifetime=24h
    # Phase 2
    keylife=1h
    pfs=yes
    auth=esp
    esp=aes256-sha1-modp1536!
============
I tried setting dpdaction=restart and closeaction=restart, but no luck.
I also tried switching to auto=route, which seems to help for some
tunnels: they still go down, but they are rebuilt once traffic is
detected. Unfortunately, for some tunnels this doesn't work; the
traffic doesn't seem to be detected and the tunnel isn't rebuilt.
It also feels like a workaround rather than a solution.
The charon log says:
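For reference, the variant I tried looked roughly like this (the dpddelay value below is just an illustrative example, not necessarily the exact one from my config; note that the log later shows "DPD not supported by peer, disabled", so the DPD options may have no effect here):
============
conn s2s_xxx
    # everything else as in the config above, plus:
    auto=route              # install a trap policy; matching traffic triggers negotiation
    dpdaction=restart       # restart the SA when the peer stops answering DPD probes
    dpddelay=30s            # example value: interval between DPD probes
    closeaction=restart     # restart if the peer closes the CHILD_SA
============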
============
Oct 8 16:27:06 15[ENC] <s2s_xxx|55> generating QUICK_MODE request
4092633500 [ HASH SA No KE ID ID ]
Oct 8 16:27:06 15[NET] <s2s_xxx|55> sending packet: from OWNPUBIP[500]
to CUSTPUBIP[500] (380 bytes)
Oct 8 16:27:06 12[CFG] <173> selected peer config "s2s_xxx"
Oct 8 16:27:06 12[IKE] <s2s_xxx|55> detected reauth of existing IKE_SA,
adopting 1 children and 0 virtual IPs
Oct 8 16:27:06 12[IKE] <s2s_xxx|173> IKE_SA s2s_xxx[173] established
between OWNPUBIP[OWNPUBIP]...CUSTPUBIP[CUSTPUBIP]
Oct 8 16:27:06 12[IKE] <s2s_xxx|173> scheduling reauthentication in 85625s
Oct 8 16:27:06 12[IKE] <s2s_xxx|173> maximum IKE_SA lifetime 86165s
Oct 8 16:27:06 12[IKE] <s2s_xxx|173> DPD not supported by peer, disabled
Oct 8 16:27:06 12[ENC] <s2s_xxx|173> generating ID_PROT response 0 [ ID
HASH ]
Oct 8 16:27:06 12[NET] <s2s_xxx|173> sending packet: from OWNPUBIP[500]
to CUSTPUBIP[500] (76 bytes)
Oct 8 16:27:06 05[NET] <s2s_xxx|173> received packet: from
CUSTPUBIP[500] to OWNPUBIP[500] (92 bytes)
Oct 8 16:27:06 05[ENC] <s2s_xxx|173> parsed INFORMATIONAL_V1 request
1269941818 [ HASH D ]
Oct 8 16:27:06 05[IKE] <s2s_xxx|173> received DELETE for different
IKE_SA, ignored
Oct 8 16:27:10 11[IKE] <s2s_xxx|55> sending retransmit 1 of request
message ID 4092633500, seq 20
Oct 8 16:27:10 11[NET] <s2s_xxx|55> sending packet: from OWNPUBIP[500]
to CUSTPUBIP[500] (380 bytes)
Oct 8 16:27:16 13[IKE] <s2s_xxx|55> deleting IKE_SA s2s_xxx[55] between
OWNPUBIP[OWNPUBIP]...CUSTPUBIP[CUSTPUBIP]
Oct 8 16:27:16 13[IKE] <s2s_xxx|55> sending DELETE for IKE_SA s2s_xxx[55]
Oct 8 16:27:16 13[ENC] <s2s_xxx|55> generating INFORMATIONAL_V1 request
3002067408 [ HASH D ]
Oct 8 16:27:16 13[NET] <s2s_xxx|55> sending packet: from OWNPUBIP[500]
to CUSTPUBIP[500] (92 bytes)
Oct 8 16:39:09 09[IKE] <s2s_xxx|173> closing expired CHILD_SA
s2s_xxx{669} with SPIs cf2e897f_i 12f701a0_o and TS yy.yy.yy.yy/29 ===
zz.zz.zz.zz/32
Oct 8 16:39:09 09[IKE] <s2s_xxx|173> sending DELETE for ESP CHILD_SA
with SPI cf2e897f
Oct 8 16:39:09 09[ENC] <s2s_xxx|173> generating INFORMATIONAL_V1
request 1912553271 [ HASH D ]
Oct 8 16:39:09 09[NET] <s2s_xxx|173> sending packet: from OWNPUBIP[500]
to CUSTPUBIP[500] (76 bytes)
============
After that, the tunnel stays down until manual intervention.
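The manual intervention is essentially re-initiating the connection by hand, e.g.:
============
sudo ipsec up s2s_xxx
============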
This doesn't happen with just this one connection, but with several of
them. If someone has any idea, I'd be very grateful.
Thank you!
Best regards