<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
<p>Hi Noel,</p>
<p>may be it sounds a bit strange :-) but this is kind of
errors/misconfigurations prevention. Existing connection must not
be dropped if somebody else trying to connect with same
credentials.</p>
<p>===<br>
</p>
<p>In such a configuration this leads to another problem: if a
client's Internet connection flaps, it will only be able to reconnect
after ~3 minutes, because of the behaviour described earlier.</p>
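<p>If I'm counting right, the ~3 minutes is simply charon's general
retransmission schedule applied to the DPD request, assuming the stock
strongswan.conf defaults (retransmit_timeout = 4.0, retransmit_base = 1.8,
retransmit_tries = 5); a quick sketch of the arithmetic:</p>

```python
# Sketch of charon's retransmission timing, assuming the default
# strongswan.conf values retransmit_timeout = 4.0 s, retransmit_base = 1.8,
# retransmit_tries = 5. The wait before retransmit k is timeout * base**k;
# after the final wait charon gives up and tears down the IKE_SA.
timeout, base, tries = 4.0, 1.8, 5
waits = [timeout * base ** k for k in range(tries + 1)]
print(round(sum(waits)))  # total seconds from the DPD request to "giving up"
```

<p>which comes out to ~165 s, matching the 21:49:22 &#8594; 21:52:07 interval
in the log quoted below.</p>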
<p>While it's possible to manage conn.dpd_delay, is it also possible
to manage the delay pattern of the DPD messages and/or their quantity?
If the admin (e.g. me) knows and understands what they are doing, this
could solve the issue by shortening the time to detect a dead peer.<br>
</p>
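<p>As far as I understand, DPD retransmissions follow charon's global
retransmission settings in strongswan.conf rather than any per-connection
knob, so the pattern could be shortened roughly like this (the values below
are hypothetical examples, not recommendations, and note they affect all IKE
retransmissions, not only DPD):</p>

```conf
# /etc/strongswan.conf -- sketch only; example values, not recommendations
charon {
    # first retransmit after 2 s instead of the default 4 s
    retransmit_timeout = 2.0
    # gentler exponential backoff between retransmits (default 1.8)
    retransmit_base = 1.5
    # give up after 3 retransmits instead of the default 5
    retransmit_tries = 3
}
```

<p>With these numbers a dead peer would be declared after roughly
2&#183;(1 + 1.5 + 1.5&#178; + 1.5&#179;) &#8776; 16 s instead of ~165 s.</p>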
<p>Thanks.<br>
</p>
<div class="moz-cite-prefix">On 12.06.2020 11:18, Noel Kuntze wrote:<br>
</div>
<blockquote type="cite"
cite="mid:5f5f2f96-363a-23df-78e1-d71c35a58108@thermi.consulting">
<pre class="moz-quote-pre" wrap="">Hi Volodymyr,
I'd configure your RADIUS server to use DAE and allow new connections, thus simply disconnecting existing clients with the same account when a client authenticates as that account.
Kind regards
Noel
Am 11.06.20 um 22:41 schrieb Volodymyr Litovka:
</pre>
<blockquote type="cite">
<pre class="moz-quote-pre" wrap="">Colleagues, hi,
as always, the most magical things happen with those who claim the best security solutions in the Universe :-\
I faced a very strange behaviour when using IPsec between Strongswan (server) and Cisco (client). On the Cisco side I'm using this tunnel configuration:
</pre>
<blockquote type="cite">
<pre class="moz-quote-pre" wrap="">interface Tunnel1
ip address negotiated
ip mtu 1400
ip tcp adjust-mss 1360
tunnel source GigabitEthernet1
tunnel mode ipsec ipv4
tunnel destination y.y.y.y
tunnel protection ipsec profile NEW-tun
</pre>
</blockquote>
<pre class="moz-quote-pre" wrap="">and things are OK to some degree: when I shut down the tunnel, Cisco IOS clears all its internal state (both IKE SA and IKE session), while sending the peer only a request to close the Child SA:
</pre>
<blockquote type="cite">
<pre class="moz-quote-pre" wrap="">Jun 11 21:48:58 newton charon-systemd[3040]: received DELETE for ESP CHILD_SA with SPI fb709251
</pre>
</blockquote>
<pre class="moz-quote-pre" wrap="">causing Strongswan to delete the Child SA but keep the IKE SA active:
</pre>
<blockquote type="cite">
<pre class="moz-quote-pre" wrap=""><a class="moz-txt-link-abbreviated" href="mailto:root@newton:/etc/strongswan.d#">root@newton:/etc/strongswan.d#</a> swanctl --list-sas
ikev2-eap-mschapv2: #1, ESTABLISHED, IKEv2, 865999d54ba73a0c_i 63a1831a36835a29_r*
local 'newton.sq' @ y.y.y.y[4500]
remote '192.168.1.161' @ x.x.x.x[4500] EAP: '<a class="moz-txt-link-abbreviated" href="mailto:doka.ua@gmail.com">doka.ua@gmail.com</a>' [172.29.24.2]
AES_GCM_16-256/PRF_HMAC_SHA2_256/MODP_2048
established 246s ago, rekeying in 10188s
active: IKE_DPD
</pre>
</blockquote>
<pre class="moz-quote-pre" wrap="">I'm using RADIUS to authenticate users and to manage simultaneous use of sessions with the same ID. Thus, the problem is that Strongswan doesn't send the Accounting-Stop record until DPD finally detects the peer as dead, and during this period the connection looks active, preventing reconnection.
Even after I reduced dpd_delay to 10s, full cleanup happens about 3 minutes after the Child SA was closed:
</pre>
<blockquote type="cite">
<pre class="moz-quote-pre" wrap="">21:48:58 newton: 09[IKE] <ikev2-eap-mschapv2|1> received DELETE for ESP CHILD_SA with SPI fb709251
21:48:58 newton: 09[IKE] <ikev2-eap-mschapv2|1> closing CHILD_SA carlo{1} with SPIs cfd87900_i [...]
21:48:58 newton: 09[IKE] <ikev2-eap-mschapv2|1> sending DELETE for ESP CHILD_SA with SPI cfd87900
21:48:58 newton: 09[CHD] <ikev2-eap-mschapv2|1> CHILD_SA carlo{1} state change: INSTALLED => DELETING
21:48:58 newton: 09[IKE] <ikev2-eap-mschapv2|1> CHILD_SA closed
21:48:58 newton: 09[CHD] <ikev2-eap-mschapv2|1> CHILD_SA carlo{1} state change: DELETING => DESTROYING
21:49:22 newton: 16[IKE] <ikev2-eap-mschapv2|1> sending DPD request
21:49:22 newton: 16[IKE] <ikev2-eap-mschapv2|1> queueing IKE_DPD task
21:49:22 newton: 16[IKE] <ikev2-eap-mschapv2|1> activating IKE_DPD task
21:49:26 newton: 05[IKE] <ikev2-eap-mschapv2|1> retransmit 1 of request with message ID 8
21:49:34 newton: 09[IKE] <ikev2-eap-mschapv2|1> retransmit 2 of request with message ID 8
21:49:47 newton: 12[IKE] <ikev2-eap-mschapv2|1> retransmit 3 of request with message ID 8
21:50:10 newton: 05[IKE] <ikev2-eap-mschapv2|1> retransmit 4 of request with message ID 8
21:50:52 newton: 07[IKE] <ikev2-eap-mschapv2|1> retransmit 5 of request with message ID 8
21:52:07 newton: 06[IKE] <ikev2-eap-mschapv2|1> giving up after 5 retransmits
21:52:07 newton: 06[CFG] <ikev2-eap-mschapv2|1> sending RADIUS Accounting-Request to server '127.0.0.1'
21:52:08 newton: 06[CFG] <ikev2-eap-mschapv2|1> received RADIUS Accounting-Response from server '127.0.0.1'
21:52:08 newton: 06[IKE] <ikev2-eap-mschapv2|1> IKE_SA ikev2-eap-mschapv2[1] state change: ESTABLISHED => DESTROYING
</pre>
</blockquote>
<pre class="moz-quote-pre" wrap="">So, the question is: are there ways to be more aggressive in detecting closed connections, e.g.:
- is it possible to destroy the IKE SA if there are no Child SAs anymore?
- or, maybe, to change the parameters of DPD message retransmission - the number of messages, a fixed delay between messages, something else?
Any other ways to work around this problem?
Thank you.
--
Volodymyr Litovka
"Vision without Execution is Hallucination." -- Thomas Edison
</pre>
</blockquote>
<pre class="moz-quote-pre" wrap="">
</pre>
</blockquote>
<pre class="moz-signature" cols="72">--
Volodymyr Litovka
"Vision without Execution is Hallucination." -- Thomas Edison</pre>
</body>
</html>