[strongSwan] New IPsec tunnels bring down active tunnels
Eric.Zaluzec at vertiv.com
Fri Jan 10 17:33:28 CET 2020
Using strongSwan 5.7.1 on an embedded Linux system, I have an IPsec network in which multiple devices (workers) create tunnels to a single device (master). I'm having trouble with a scenario where, when a new worker comes online and establishes a tunnel to the master, the previously connected tunnels go down. With dpdaction=restart, the previously connected tunnels are re-established, but then the new worker's tunnel goes down. This creates a loop in which tunnels continuously go down and come back up; they are never stable and all up at once. Running `ipsec update` or `ipsec reload` on the master does not break this down/up loop.
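To make the loop concrete, this is roughly what I run on the master while it is happening (a sketch; output details will vary):

    # Watch the IKE/CHILD SAs flap
    watch -n 2 ipsec status

    # Neither of these breaks the loop
    ipsec update
    ipsec reload

    # Follow charon's log output via systemd
    journalctl -u strongswan -f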
I'm running strongSwan as a systemd service. I've found that `systemctl restart strongswan` on the master stops the continuous down/up cycling of tunnels. I could script the master to run `systemctl restart strongswan` whenever a new worker establishes a tunnel (roughly as sketched below); however, if one of the workers happens to reboot, then after the reboot its tunnel to the master will once again cause all existing tunnels to the master to drop. Writing a service on the master to detect when workers reboot and restart strongswan is not really feasible.
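For what it's worth, here is a rough sketch of the kind of restart script I mean (illustration only; counting ESTABLISHED lines in the `ipsec status` output is just one way to notice a newly connected worker):

    #!/bin/sh
    # Workaround sketch: restart strongswan on the master once whenever
    # a new worker establishes an IKE_SA. Counting ESTABLISHED lines in
    # `ipsec status` output is an approximation for illustration only.
    last=$(ipsec status | grep -c ESTABLISHED)
    while true; do
        sleep 10
        cur=$(ipsec status | grep -c ESTABLISHED)
        if [ "$cur" -gt "$last" ]; then
            systemctl restart strongswan
            # Give all tunnels time to come back up before re-sampling,
            # so the restart itself does not trigger another restart.
            sleep 60
            cur=$(ipsec status | grep -c ESTABLISHED)
        fi
        last=$cur
    done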
Has anyone run into a similar issue and can suggest a good course of action? I'm not sure whether I'm missing something in my ipsec configuration. Any suggestions or feedback would be greatly appreciated!
Here is what my ipsec.conf files look like on the master device and a worker device:
# Master ipsec.conf
config setup
    strictcrlpolicy=no
    charondebug="ike 4, knl 4, cfg 2"
    uniqueids=no

conn %default
    rekey=no
    ike=aes256-sha256-modp2048
    esp=aes256-sha256-modp2048
    auto=start
    dpddelay=30
    dpdtimeout=120
    dpdaction=restart

conn tunnel10.207.15.85-10.207.15.70
    keyexchange=ikev2
    left=10.207.15.85
    leftsubnet=
    leftcert=peerCert.pem
    right=10.207.15.70
    rightsubnet=
    leftid="C=US, O=Vertiv, CN=peer"
    rightid="C=US, O=Vertiv, CN=peer"

conn tunnel10.96.0.1-10.207.15.70
    keyexchange=ikev2
    left=10.207.15.85
    leftsubnet=10.96.0.1
    leftcert=peerCert.pem
    right=10.207.15.70
    rightsubnet=
    leftid="C=US, O=Vertiv, CN=peer"
    rightid="C=US, O=Vertiv, CN=peer"

conn tunnel10.207.15.85-10.207.15.23
    keyexchange=ikev2
    left=10.207.15.85
    leftsubnet=
    leftcert=peerCert.pem
    right=10.207.15.23
    rightsubnet=
    leftid="C=US, O=Vertiv, CN=peer"
    rightid="C=US, O=Vertiv, CN=peer"

conn tunnel10.96.0.1-10.207.15.23
    keyexchange=ikev2
    left=10.207.15.85
    leftsubnet=10.96.0.1
    leftcert=peerCert.pem
    right=10.207.15.23
    rightsubnet=
    leftid="C=US, O=Vertiv, CN=peer"
    rightid="C=US, O=Vertiv, CN=peer"
# Worker ipsec.conf
config setup
    strictcrlpolicy=no
    charondebug="ike 4, knl 4, cfg 2"
    uniqueids=no

conn %default
    rekey=no
    ike=aes256-sha256-modp2048
    esp=aes256-sha256-modp2048
    auto=start
    dpddelay=30
    dpdtimeout=120
    dpdaction=restart

conn tunnel10.207.15.85-10.207.15.70
    keyexchange=ikev2
    right=10.207.15.85
    rightsubnet=
    left=10.207.15.70
    leftsubnet=
    leftcert=peerCert.pem
    leftid="C=US, O=Vertiv, CN=peer"
    rightid="C=US, O=Vertiv, CN=peer"

conn tunnel10.96.0.1-10.207.15.70
    keyexchange=ikev2
    right=10.207.15.85
    rightsubnet=10.96.0.1
    left=10.207.15.70
    leftsubnet=
    leftcert=peerCert.pem
    leftid="C=US, O=Vertiv, CN=peer"
    rightid="C=US, O=Vertiv, CN=peer"
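For completeness, this is how I check the state of an individual tunnel on either end while reproducing the problem (a sketch; conn names as in the ipsec.conf files above):

    # On the master: show IKE and CHILD SA state for one worker's conn
    ipsec statusall tunnel10.207.15.85-10.207.15.70

    # On the worker: bring the same conn up manually for testing
    ipsec up tunnel10.207.15.85-10.207.15.70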