[strongSwan] Strongswan internal DNS-resolution
Dusan Ilic
dusan at comhem.se
Thu Jul 27 13:15:56 CEST 2017
I'm also confused by what you said: "I took a "quick" look at the code[1]
and it seems the DNS names are only resolved once; the result replaces the
original destination.
So it has nothing to do with caching. Just with a disadvantageous design
decision."
In another source I've read that new DNS lookups are made for hostnames
on every new connection attempt, yet I still need to run ipsec update
manually. Preferably, strongSwan should make a new DNS lookup instead of
just logging "no IKE config found for 94.254.123.x...85.24.241.x, sending
NO_PROPOSAL_CHOSEN". If it did, it would see that the hostname now points
to the right IP.
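Since running ipsec update manually does make the daemon pick up the new
address, one stopgap (my own workaround sketch, not something from this
thread) is to re-run it periodically so charon re-reads the config and
re-resolves the peer hostnames. The schedule and path are assumptions:

```
# Hypothetical crontab fragment: periodically re-read the IPsec config so
# charon re-resolves dynamic DNS names. Every 5 minutes matches the 5-minute
# DNS TTL mentioned later in the thread; adjust to taste.
*/5 * * * *  root  /usr/sbin/ipsec update
```

This only papers over the underlying behaviour; it does not fix the
single-resolution design discussed above.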
However, that's another side of the problem. The question still remains:
with MOBIKE, why doesn't the %-prefix allow the other initiator to
reconnect from a new IP?
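For context, the setup being discussed looks roughly like the sketch below.
Hostnames, subnets, and the PSK are placeholders; the subnets and connection
name are taken from the logs further down in the thread, and the exact
option values are my assumptions, not a verified working config:

```
# /etc/ipsec.conf -- sketch of the site-to-site setup described here
conn wesafe
    left=%left.dyndns.example       # %-prefix implies leftallowany=yes
    leftid=@left.dyndns.example     # @ keeps the ID from resolving to an IP
    leftsubnet=192.168.1.0/24
    right=%right.dyndns.example
    rightid=@right.dyndns.example
    rightsubnet=10.1.1.0/26
    authby=secret
    mobike=yes
    keyingtries=%forever
    auto=route

# /etc/ipsec.secrets -- identities prefixed with @ are not DNS-resolved
@left.dyndns.example @right.dyndns.example : PSK "placeholder"
```

With this shape, either side can initiate (auto=route on both ends), and
the %-prefix is what is expected to let a peer come back from a new IP.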
Den 2017-07-27 kl. 13:08, skrev Dusan Ilic:
> Something is definitely weird with the %-prefix. If I try right=%any on
> one side after the remote endpoint has changed IP, it can connect, but if
> I try right=%hostname it logs:
> 13[ENC] parsed IKE_SA_INIT request 0 [ SA KE No N(NATD_S_IP)
> N(NATD_D_IP) N((16430)) N((16431)) N(REDIR_SUP) ]
> 13[IKE] no IKE config found for 94.254.123.x...85.24.241.x, sending
> NO_PROPOSAL_CHOSEN
>
> However, with %any only one side can be the initiator, and that's a no-go
> for site-to-site. Both sides have auto=route, and that's how I would
> like it, so either side can initiate a connection if needed.
>
> I understand that the issue is that the hostname still points to the old
> IP, not matching 85.24.241.x, but I thought the % prefix would allow a new
> connection from a new IP? MOBIKE is even enabled, which should allow
> the initiator to change its public IP...
> How can I solve this issue?
>
>
> Den 2017-07-25 kl. 19:12, skrev Dusan Ilic:
>> I'm having a hard time grasping why this doesn't work.
>>
>> When one side changes public IP (forced with a DHCP release and renew),
>> I can see in the log that strongSwan soon tries to reconnect
>> from the new IP. Meanwhile it activates the DPD and MOBIKE tasks. The
>> remote endpoint returns "no proposal chosen" and logs the following:
>>
>> 13[ENC] parsed IKE_SA_INIT request 0 [ SA KE No N(NATD_S_IP)
>> N(NATD_D_IP) N((16430)) N((16431)) N(REDIR_SUP) ]
>> 13[IKE] no IKE config found for 94.254.123.x...85.24.241.x, sending
>> NO_PROPOSAL_CHOSEN
>>
>> Now, in the configs on both sides I have the left/right parameters
>> prefixed with %, for example %host.dyndns.com, and in ipsec.secrets I
>> have @host.dyndns.com (so it doesn't resolve to an IP). Also,
>> left/rightid are the same, prefixed with @.
>>
>> In my understanding this should just work and allow a reconnect from
>> the same peer from another IP, as long as the IDs are the same, so
>> why doesn't it?
>>
>>
>> Den 2017-07-23 kl. 14:45, skrev Dusan Ilic:
>>> One step closer: after adding @ in front of rightid and in
>>> ipsec.secrets on the remote endpoint, I can connect again after
>>> running ipsec update; before, I had to run ipsec restart for the
>>> connection to be established again. However, that's not optimal; it
>>> should be able to take care of this on its own without manual
>>> intervention. As already said, a Fortigate router's IPsec
>>> implementation seems to take care of this automatically...
>>>
>>> I can see that when the local endpoint changes public IP, ipsec
>>> statusall shows "Tasks active: IKE_DPD" for a while; then it takes
>>> down the SAs and starts trying to reconnect. However, looking
>>> at the remote endpoint, it still shows the connection as up. When I
>>> run ipsec down and afterwards ipsec update on the remote endpoint,
>>> the connection comes up (without ipsec update it reports
>>> "no shared key found"). Before that, the local endpoint reports the
>>> same error as before, "no proposal chosen", from the remote endpoint.
>>>
>>> Something is not behaving as it should here, judging by how the
>>> @ and % prefixes are supposed to work.
>>>
>>> Den 2017-07-23 kl. 00:55, skrev Dusan Ilic:
>>>> I meant that this is the new IP (after running "ipsec update"), but
>>>> ipsec.secrets still refers to the old IP, and therefore it says no
>>>> shared key found for the new IP.
>>>>
>>>>
>>>> Den 2017-07-23 kl. 00:51, skrev Dusan Ilic:
>>>>> initiating IKE_SA to 85.24.241.x
>>>>> generating IKE_SA_INIT request 0 [ SA KE No N(NATD_S_IP)
>>>>> N(NATD_D_IP) ]
>>>>> sending packet: from 94.254.123.x[500] to 85.24.241.x[500]
>>>>> received packet: from 85.24.241.x[500] to 94.254.123.x[500]
>>>>> parsed IKE_SA_INIT response 0 [ SA KE No N(NATD_S_IP) N(NATD_D_IP)
>>>>> CERTREQ N(MULT_AUTH) ]
>>>>> received 1 cert requests for an unknown ca
>>>>> authentication of 'local.hostname' (myself) with pre-shared key
>>>>> no shared key found for 'local.hostname' - '85.24.241.x'
>>>>>
>>>>> 85.24.241.x is old IP of the other peer.
>>>>>
>>>>>
>>>>> Den 2017-07-23 kl. 00:47, skrev Dusan Ilic:
>>>>>> I think the problem is that ipsec.secrets is also resolved to the
>>>>>> old IP, so the PSKs don't match.
>>>>>>
>>>>>> Jul 23 00:44:25 GW pluto[7661]: loading secrets from
>>>>>> "/etc/ipsec.secrets"
>>>>>> Jul 23 00:44:25 GW pluto[7661]: loaded PSK secret for 85.24.241.x
>>>>>>
>>>>>> How can I use the % feature in ipsec.secrets?
>>>>>>
>>>>>>
>>>>>> Den 2017-07-22 kl. 21:32, skrev Noel Kuntze:
>>>>>>>
>>>>>>> On 22.07.2017 19:57, Dusan Ilic wrote:
>>>>>>>> Okay, the remote endpoint doesn't know that the other side has
>>>>>>>> changed its IP. However, according to the below, a connection
>>>>>>>> should still be possible if the end with the new IP initiates it.
>>>>>>>>
>>>>>>>> "The right|leftallowany parameter helps to handle
>>>>>>>> the case where both peers possess dynamic IP addresses that are
>>>>>>>> usually resolved using DynDNS or a similar service.
>>>>>>>>
>>>>>>>> The configuration
>>>>>>>>
>>>>>>>> right=peer.foo.bar
>>>>>>>> rightallowany=yes
>>>>>>>>
>>>>>>>> can be used by the initiator to start up a connection to a peer
>>>>>>>> by resolving peer.foo.bar into the
>>>>>>>> currently allocated IP address.
>>>>>>>> Thanks to the rightallowany flag the connection behaves later on
>>>>>>>> as
>>>>>>>>
>>>>>>>> right=%any
>>>>>>>>
>>>>>>>> so that the peer can rekey the connection as an initiator when his
>>>>>>>> IP address changes. An alternative notation is
>>>>>>>>
>>>>>>>> right=%peer.foo.bar
>>>>>>>>
>>>>>>>> which will implicitly set rightallowany=yes
>>>>>>>> "
>>>>>>>>
>>>>>>>> However, strongSwan on the side that has changed IP is
>>>>>>>> obviously aware of its new IP without restarting (how?), so why
>>>>>>>> does it give the following output when trying to initiate the
>>>>>>>> connection?
>>>>>>>>
>>>>>>>> initiating IKE_SA to 94.254.123.x
>>>>>>>> generating IKE_SA_INIT request 0 [ SA KE No N(NATD_S_IP)
>>>>>>>> N(NATD_D_IP) N(FRAG_SUP) N(HASH_ALG) N(REDIR_SUP) ]
>>>>>>>> sending packet: from 85.24.244.x[500] to 94.254.123.x[500]
>>>>>>>> (464 bytes)
>>>>>>>> received packet: from 94.254.123.x[500] to 85.24.244.x[500]
>>>>>>>> (36 bytes)
>>>>>>>> parsed IKE_SA_INIT response 0 [ N(NO_PROP) ]
>>>>>>>> received NO_PROPOSAL_CHOSEN notify error
>>>>>>>> establishing connection 'wesafe' failed
>>>>>>> The remote peer sends that error. What does it log?
>>>>>>>
>>>>>>>> Why does it report "no proposal chosen" until I restart the
>>>>>>>> remote endpoint?
>>>>>>>> If I have understood it right, the % in front of the
>>>>>>>> hostname should allow a connection attempt from whatever the IP
>>>>>>>> may be?
>>>>>>>>
>>>>>>>> ---- Noel Kuntze skrev ----
>>>>>>>>
>>>>>>>> Seems like it.
>>>>>>>>
>>>>>>>> On 22.07.2017 11:17, Dusan Ilic wrote:
>>>>>>>>> Hi Noel,
>>>>>>>>>
>>>>>>>>> So, are you saying that there is no way to make strongSwan
>>>>>>>>> aware that a domain name has changed its IP address without
>>>>>>>>> restarting it manually?
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> Den 2017-07-22 kl. 01:49, skrev Noel Kuntze:
>>>>>>>>>> Hi Dusan,
>>>>>>>>>>
>>>>>>>>>> I took a "quick" look at the code[1] and it seems the DNS
>>>>>>>>>> names are only resolved once the result replaces the original
>>>>>>>>>> destination.
>>>>>>>>>> So it has nothing to do with caching. Just with a
>>>>>>>>>> disadvantageous design decision.
>>>>>>>>>>
>>>>>>>>>> Kind regards
>>>>>>>>>>
>>>>>>>>>> Noel
>>>>>>>>>>
>>>>>>>>>> [1]
>>>>>>>>>> https://github.com/strongswan/strongswan/blob/master/src/libcharon/sa/ike_sa.c#L1470
>>>>>>>>>>
>>>>>>>>>> On 21.07.2017 00:19, Dusan Ilic wrote:
>>>>>>>>>>> Okay, so I just did a forced release/renew on the same
>>>>>>>>>>> endpoint. Dynamic DNS picked up the new IP shortly after
>>>>>>>>>>> (TTL 5 min), and after 10 minutes or so one endpoint
>>>>>>>>>>> reconnected again (a Fortigate; I have two endpoints), but
>>>>>>>>>>> the troubling endpoint (also strongSwan) hasn't connected yet.
>>>>>>>>>>>
>>>>>>>>>>> When logging in to the remote endpoint and pinging the
>>>>>>>>>>> domain name, it resolves to the new IP, but below is the
>>>>>>>>>>> output from both sides of the tunnel when trying to manually
>>>>>>>>>>> run the ipsec up command.
>>>>>>>>>>>
>>>>>>>>>>> On the remote endpoint:
>>>>>>>>>>>
>>>>>>>>>>> First off, running ipsec statusall shows the connection as
>>>>>>>>>>> if the tunnel were still up. Maybe that's the problem:
>>>>>>>>>>> strongSwan thinks it's up even though it isn't?
>>>>>>>>>>>
>>>>>>>>>>> ESTABLISHED 46 minutes ago,
>>>>>>>>>>> 94.254.123.x[local.host.name]...85.24.241.x[85.24.241.x]
>>>>>>>>>>> IKE SPIs: 1dffaab2cafa2f48_i 15e867fa149370f0_r*, pre-shared
>>>>>>>>>>> key reauthentication in 22 hours
>>>>>>>>>>> IKE proposal: AES_CBC_128/HMAC_SHA1_96/PRF_HMAC_SHA1/MODP_2048
>>>>>>>>>>> INSTALLED, TUNNEL, ESP SPIs: cf0473a2_i c45028cb_o
>>>>>>>>>>> AES_CBC_128/HMAC_SHA1_96, 8275 bytes_i (2616s ago), 81235
>>>>>>>>>>> bytes_o (8s ago), rekeying in 7 hours
>>>>>>>>>>> 192.168.1.0/24 === 10.1.1.0/26
>>>>>>>>>>>
>>>>>>>>>>> Command ipsec up connection
>>>>>>>>>>>
>>>>>>>>>>> initiating IKE_SA to 85.24.241.x
>>>>>>>>>>> generating IKE_SA_INIT request 0 [ SA KE No N(NATD_S_IP)
>>>>>>>>>>> N(NATD_D_IP) ]
>>>>>>>>>>> sending packet: from 94.254.123.x[500] to 85.24.241.x[500]
>>>>>>>>>>> retransmit 1 of request with message ID 0
>>>>>>>>>>> sending packet: from 94.254.123.x[500] to 85.24.241.x[500]
>>>>>>>>>>>
>>>>>>>>>>> On the local endpoint (with new IP):
>>>>>>>>>>>
>>>>>>>>>>> initiating IKE_SA to 94.254.123.x
>>>>>>>>>>> generating IKE_SA_INIT request 0 [ SA KE No N(NATD_S_IP)
>>>>>>>>>>> N(NATD_D_IP) N(FRAG_SUP) N(HASH_ALG) N(REDIR_SUP) ]
>>>>>>>>>>> sending packet: from 85.24.244.x[500] to 94.254.123.x[500]
>>>>>>>>>>> (464 bytes)
>>>>>>>>>>> received packet: from 94.254.123.x[500] to 85.24.244.x[500]
>>>>>>>>>>> (36 bytes)
>>>>>>>>>>> parsed IKE_SA_INIT response 0 [ N(NO_PROP) ]
>>>>>>>>>>> received NO_PROPOSAL_CHOSEN notify error
>>>>>>>>>>> establishing connection 'wesafe' failed
>>>>>>>>>>>
>>>>>>>>>>> And when restarting Strongswan on the remote endpoint, it
>>>>>>>>>>> connects again...
>>>>>>>>>>>
>>>>>>>>>>> Den 2017-07-20 kl. 12:00, skrev Dusan Ilic:
>>>>>>>>>>>> Hi,
>>>>>>>>>>>>
>>>>>>>>>>>> I have some issues with a site-to-site tunnel between two
>>>>>>>>>>>> dynamic endpoints. One side almost never changes IP address
>>>>>>>>>>>> (it uses DHCP, however); the other side changes more
>>>>>>>>>>>> frequently. Both endpoints' IP addresses use dynamic
>>>>>>>>>>>> DNS and have a corresponding domain name associated at all
>>>>>>>>>>>> times.
>>>>>>>>>>>>
>>>>>>>>>>>> Today one side changed IP, and the new IP has been updated
>>>>>>>>>>>> in public DNS. I understand DNS propagation and caching,
>>>>>>>>>>>> but I don't seem to understand how strongSwan handles and
>>>>>>>>>>>> acts upon it.
>>>>>>>>>>>>
>>>>>>>>>>>> For example, I have set keyingtries to %forever on both
>>>>>>>>>>>> sides, so that they continuously try to reconnect when the
>>>>>>>>>>>> connection is lost. I have also changed the global
>>>>>>>>>>>> initiation parameter from the default 0 to 60 s, so that it
>>>>>>>>>>>> retries unsuccessful connection attempts.
>>>>>>>>>>>> Now the other side is still trying to reconnect to the old
>>>>>>>>>>>> IP, even though pinging the hostname from that endpoint
>>>>>>>>>>>> resolves to the new, correct IP. It seems like strongSwan
>>>>>>>>>>>> is caching the old DNS result somehow?
>>>>>>>>>>>> Finally I tried restarting strongSwan, and then it picked
>>>>>>>>>>>> up the new IP.
>>>>>>>>>>>>
>>>>>>>>>>>> I would like a system that solves this by itself, so I
>>>>>>>>>>>> don't need to intervene manually each and every time one of
>>>>>>>>>>>> the endpoints gets a new IP. How can this best be achieved?
>>>>>>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
More information about the Users mailing list