[strongSwan] Under high load, two IKE initiators send the IKE_SA_INIT requests with the same SPI

Chinmaya Dwibedy ckdwibedy at yahoo.com
Mon May 26 11:45:20 CEST 2014


Hi All,
I have modified the strongSwan (5.0.4) code to add a new DH implementation
using the Octeon Core Crypto Library APIs. Using the load-tester plugin of
strongSwan (5.0.4), I was able to establish 250k IPsec tunnels at 850+ tunnels
per second. However, I found that some of the tunnels (approximately 150-200)
were not coming up. Upon debugging, I found that two IKE initiators sometimes
send IKE_SA_INIT requests with the same SPI set in the IKE header, and hence
generate two IPsec SAs with the same SPI. As a result, the IKE stack at the
responder end refuses to install the second IPsec SA: for a given destination
address/protocol combination, unique SPI values must be used.
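
To illustrate the race outside of strongSwan, here is a minimal standalone
sketch (the thread and iteration counts are my own assumptions, nothing from
the actual code base) where several threads allocate "SPIs" with the same
non-atomic pre-increment pattern; any shortfall in the final counter means
at least two threads received the same value:

#include <pthread.h>
#include <stdio.h>
#include <stdint.h>

#define THREADS 8
#define ALLOCS  100000

static uint32_t spi_counter = 0;

static void *worker(void *arg)
{
	uint32_t *out = arg;
	int i;

	for (i = 0; i < ALLOCS; i++)
	{
		/* same read-modify-write pattern as "*spi = ++this->spi";
		 * two threads can read the same old value and both hand
		 * out the same "unique" SPI */
		out[i] = ++spi_counter;
	}
	return NULL;
}

int main(void)
{
	static uint32_t results[THREADS][ALLOCS];
	pthread_t threads[THREADS];
	int t;

	for (t = 0; t < THREADS; t++)
	{
		pthread_create(&threads[t], NULL, worker, results[t]);
	}
	for (t = 0; t < THREADS; t++)
	{
		pthread_join(threads[t], NULL);
	}
	/* with an atomic increment, the counter would equal the number
	 * of allocations; anything less means lost updates, i.e.
	 * duplicate values were handed out */
	printf("allocations: %d, counter: %u, lost: %d\n",
		   THREADS * ALLOCS, spi_counter,
		   THREADS * ALLOCS - (int)spi_counter);
	return 0;
}

(compile with gcc -pthread)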
But with a reduced setup rate, i.e. 200+ tunnels per second, all 250k tunnels
always come up. I think we need to use a GCC atomic built-in in the get_spi()
function (defined in src/libcharon/plugins/load_tester/load_tester_ipsec.c) so
that the increment is atomic. Note that __sync_add_and_fetch() matches the
semantics of the original ++this->spi (it returns the incremented value),
whereas __sync_fetch_and_add() returns the old value and would hand out 0 as
the first SPI:
METHOD(kernel_ipsec_t, get_spi, status_t,
	private_load_tester_ipsec_t *this, host_t *src, host_t *dst,
	u_int8_t protocol, u_int32_t reqid, u_int32_t *spi)
{
	/* the original non-atomic increment is racy when multiple
	 * threads allocate SPIs concurrently:
	 * *spi = ++this->spi; */

	/* atomically increment and return the new value, so concurrent
	 * callers can never receive the same SPI */
	*spi = (u_int32_t)__sync_add_and_fetch(&this->spi, 1);
	return SUCCESS;
}
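
For what it's worth, libstrongswan already provides an atomic increment
helper, ref_get(), which maps to __sync_add_and_fetch() when GCC atomic
builtins are available. A sketch of the same fix using it (assuming the spi
member is changed to refcount_t; the header location may differ between
versions):

#include <utils.h>	/* refcount_t / ref_get() */

METHOD(kernel_ipsec_t, get_spi, status_t,
	private_load_tester_ipsec_t *this, host_t *src, host_t *dst,
	u_int8_t protocol, u_int32_t reqid, u_int32_t *spi)
{
	/* ref_get() atomically increments and returns the new value */
	*spi = ref_get(&this->spi);
	return SUCCESS;
}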
 
Can anyone please validate and confirm whether my understanding
is correct or not? Thanks in advance for your support.
Note that we are using two multi-core MIPS64 processors with 16 cnMIPS64 v2
cores each (one acts as the IKE initiator and the other as the IKE responder).
We are running strongSwan on both systems. Both systems have 1 Gbps Ethernet
cards, which are connected to a 1 Gbps L2 switch. Wind River Linux runs on all
16 cores.
Regards,
Chinmaya