[strongSwan] weird cisco behaviour - how to work around?
Volodymyr Litovka
doka.ua@gmx.com
Fri Jun 12 12:21:35 CEST 2020
Hi Noel,
under "prevention" I meant disabling of simultaneous use of same id.
And thanks for pointing me to the charon.retransmit* parameters - yes, this
is exactly what I was looking for.
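Just for the archives, a minimal strongswan.conf sketch of those knobs (the
values are purely illustrative, not recommendations - they would shorten the
give-up time to roughly 2 + 3 + 4.5 + 6.75 ≈ 16 seconds instead of the
default ~165):

    charon {
        # give up after 3 retransmits instead of the default 5
        retransmit_tries = 3
        # send the first retransmit after 2s instead of the default 4s
        retransmit_timeout = 2.0
        # back off by a factor of 1.5 instead of the default 1.8
        retransmit_base = 1.5
    }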
Thank you!
On 12.06.2020 12:59, Noel Kuntze wrote:
> Hi Volodymyr,
>
> I disagree. That "prevention" enables a better user experience during network or software failures on the client side, as described before and as you noted yourself.
>
> In IKEv2, every packet exchange is used for dead peer detection, so you need to change the IKEv2 timeout in strongswan.conf to tweak that.
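>
> For reference, the relevant knobs are charon.retransmit_tries (default 5), charon.retransmit_timeout (default 4.0) and charon.retransmit_base (default 1.8): the delay before each retransmit grows by a factor of retransmit_base, starting at retransmit_timeout seconds. With the defaults that is 4 + 7.2 + 13 + 23.3 + 42 + 75.6 ≈ 165 seconds until the peer is declared dead - the ~3 minutes you observed.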
>
> Kind regards
>
> Noel
>
> On 12.06.20 at 11:55, Volodymyr Litovka wrote:
>> Hi Noel,
>>
>> maybe it sounds a bit strange :-) but this is a kind of error/misconfiguration prevention. An existing connection must not be dropped if somebody else tries to connect with the same credentials.
>>
>> ===
>>
>> In such a configuration this leads to another problem - if a client's Internet connection flaps, it will only be able to reconnect after ~3 minutes because of the behaviour described earlier.
>>
>> While it's possible to manage conn.dpd_delay (see the sketch below), is it possible to manage the delay pattern of DPD messages and/or their quantity? If the admin (e.g. me) knows and understands what he is doing, this could solve the issue by shortening the time to detect a dead peer.
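>>
>> For reference, dpd_delay itself is set per connection in swanctl.conf; a minimal sketch (connection name as in my setup, everything else omitted):
>>
>>     connections {
>>         ikev2-eap-mschapv2 {
>>             # trigger a DPD exchange after 10s without traffic from the peer
>>             dpd_delay = 10s
>>         }
>>     }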
>>
>> Thanks.
>>
>> On 12.06.2020 11:18, Noel Kuntze wrote:
>>> Hi Volodymyr,
>>>
>>> I'd configure your RADIUS server to use DAE (RFC 5176) and allow new connections, simply disconnecting any existing client using the same account when a new client authenticates as that account.
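>>>
>>> If it helps: strongSwan's eap-radius plugin can act as the DAE endpoint (charon.plugins.eap-radius.dae.enable = yes, port 3799 by default), and a Disconnect-Request can be tested with FreeRADIUS's radclient - a sketch, with address and shared secret as placeholders:
>>>
>>>     echo 'User-Name = "doka.ua@gmail.com"' | \
>>>         radclient 192.0.2.1:3799 disconnect testing123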
>>>
>>> Kind regards
>>>
>>> Noel
>>>
>>> On 11.06.20 at 22:41, Volodymyr Litovka wrote:
>>>> Colleagues, hi,
>>>>
>>>> as always, the most magical things happen with those who claim the best security solutions in the Universe :-\
>>>>
>>>> I've faced a very strange behaviour when using IPsec between strongSwan (server) and Cisco (client). On the Cisco side I'm using this tunnel configuration:
>>>>
>>>>> interface Tunnel1
>>>>> ip address negotiated
>>>>> ip mtu 1400
>>>>> ip tcp adjust-mss 1360
>>>>> tunnel source GigabitEthernet1
>>>>> tunnel mode ipsec ipv4
>>>>> tunnel destination y.y.y.y
>>>>> tunnel protection ipsec profile NEW-tun
>>>>>
>>>> and things are OK to some degree: when I shut down the tunnel, Cisco IOS clears all internal state (both the IKE SA and the IKE session), while sending the peer only a request to close the Child SA:
>>>>
>>>>> Jun 11 21:48:58 newton charon-systemd[3040]: received DELETE for ESP CHILD_SA with SPI fb709251
>>>> causing strongSwan to delete the Child SA but keep the IKE SA active:
>>>>
>>>>> root@newton:/etc/strongswan.d# swanctl --list-sas
>>>>> ikev2-eap-mschapv2: #1, ESTABLISHED, IKEv2, 865999d54ba73a0c_i 63a1831a36835a29_r*
>>>>> local 'newton.sq' @ y.y.y.y[4500]
>>>>> remote '192.168.1.161' @ x.x.x.x[4500] EAP: 'doka.ua@gmail.com' [172.29.24.2]
>>>>> AES_GCM_16-256/PRF_HMAC_SHA2_256/MODP_2048
>>>>> established 246s ago, rekeying in 10188s
>>>>> active: IKE_DPD
>>>>>
>>>> I'm using RADIUS to authenticate users and to manage simultaneous use of sessions with the same ID. Thus the problem is that strongSwan doesn't send the Accounting-Stop record until DPD finally finds the peer dead, and during this period the connection looks active, preventing reconnection.
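>>>>
>>>> For reference: with FreeRADIUS, such a limit is typically a Simultaneous-Use check item in the users file - a sketch, assuming a one-session-per-account policy:
>>>>
>>>>     # allow at most one concurrent session per account
>>>>     DEFAULT  Simultaneous-Use := 1
>>>>              Fall-Through = Yes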
>>>>
>>>> Even after I reduced dpd_delay to 10s, full cleanup happens only about 3 minutes after the Child SA was closed:
>>>>
>>>>> 21:48:58 newton: 09[IKE] <ikev2-eap-mschapv2|1> received DELETE for ESP CHILD_SA with SPI fb709251
>>>>> 21:48:58 newton: 09[IKE] <ikev2-eap-mschapv2|1> closing CHILD_SA carlo{1} with SPIs cfd87900_i [...]
>>>>> 21:48:58 newton: 09[IKE] <ikev2-eap-mschapv2|1> sending DELETE for ESP CHILD_SA with SPI cfd87900
>>>>> 21:48:58 newton: 09[CHD] <ikev2-eap-mschapv2|1> CHILD_SA carlo{1} state change: INSTALLED => DELETING
>>>>> 21:48:58 newton: 09[IKE] <ikev2-eap-mschapv2|1> CHILD_SA closed
>>>>> 21:48:58 newton: 09[CHD] <ikev2-eap-mschapv2|1> CHILD_SA carlo{1} state change: DELETING => DESTROYING
>>>>> 21:49:22 newton: 16[IKE] <ikev2-eap-mschapv2|1> sending DPD request
>>>>> 21:49:22 newton: 16[IKE] <ikev2-eap-mschapv2|1> queueing IKE_DPD task
>>>>> 21:49:22 newton: 16[IKE] <ikev2-eap-mschapv2|1> activating IKE_DPD task
>>>>> 21:49:26 newton: 05[IKE] <ikev2-eap-mschapv2|1> retransmit 1 of request with message ID 8
>>>>> 21:49:34 newton: 09[IKE] <ikev2-eap-mschapv2|1> retransmit 2 of request with message ID 8
>>>>> 21:49:47 newton: 12[IKE] <ikev2-eap-mschapv2|1> retransmit 3 of request with message ID 8
>>>>> 21:50:10 newton: 05[IKE] <ikev2-eap-mschapv2|1> retransmit 4 of request with message ID 8
>>>>> 21:50:52 newton: 07[IKE] <ikev2-eap-mschapv2|1> retransmit 5 of request with message ID 8
>>>>> 21:52:07 newton: 06[IKE] <ikev2-eap-mschapv2|1> giving up after 5 retransmits
>>>>> 21:52:07 newton: 06[CFG] <ikev2-eap-mschapv2|1> sending RADIUS Accounting-Request to server '127.0.0.1'
>>>>> 21:52:08 newton: 06[CFG] <ikev2-eap-mschapv2|1> received RADIUS Accounting-Response from server '127.0.0.1'
>>>>> 21:52:08 newton: 06[IKE] <ikev2-eap-mschapv2|1> IKE_SA ikev2-eap-mschapv2[1] state change: ESTABLISHED => DESTROYING
>>>> So the question is: are there ways to be more aggressive in detecting closed connections, e.g.:
>>>> - is it possible to destroy the IKE SA if there are no Child SAs anymore?
>>>> - or, maybe, change the parameters of DPD message retransmission - the number of messages, a fixed delay between messages, something else?
>>>>
>>>> Any other ways to work around this problem?
>>>>
>>>> Thank you.
>>>>
>>>> --
>>>> Volodymyr Litovka
>>>> "Vision without Execution is Hallucination." -- Thomas Edison
>>>>
>> --
>> Volodymyr Litovka
>> "Vision without Execution is Hallucination." -- Thomas Edison
>>
--
Volodymyr Litovka
"Vision without Execution is Hallucination." -- Thomas Edison