[strongSwan] Need to know the Cause behind the enhanced IKE tunnel setup rate

Chinmaya Dwibedy ckdwibedy at yahoo.com
Mon Apr 7 11:08:12 CEST 2014

Hi,

Can anyone please respond to this email? Thanks in advance for your support.

On Friday, April 4, 2014 4:23 PM, Chinmaya Dwibedy <ckdwibedy at yahoo.com> wrote:
Hi Martin/All,

I was able to achieve an IKE setup rate of 400+ with 250k IPsec tunnels (encryption algorithm: AES, DH group 1, integrity algorithm: SHA1). There is no packet loss at either end (checked via #netstat -s -udp and confirmed). The changes below enhanced the setup rate.

1)    Configured 64 threads at both ends and pinned 4 threads to each core using the pthread_setaffinity_np() API. I think this improves the locality of memory access, balances the load and achieves parallelism. Note that Wind River Linux (SMP) runs on all 16 cores (multi-core 64-bit MIPS processor).
2)    At both ends, configured the following in strongswan.conf:
processor {
        priority_threads {
            high = 1
            medium = 16
        }
}
a)    Reserved 16 threads for IKE_SA_INIT processing, i.e., an IKE_SA_INIT packet arriving at the UDP socket is assigned to an idle receiver thread (one of these 16), which pre-parses the message and then queues a job with the scheduler for further processing.
b)    Reserved 16 threads for dispatching from sockets.

3)    At the IKE initiator end, reduced the number of sender threads from 10 to 5 to avoid busily initiating IKE connections and to reduce lock contention.
Please note that if I do not use pthread_setaffinity_np() to distribute the threads across cores, the tunnel setup rate is 250+ at most and there is packet loss at both ends.

Can you please let me know the technical reasons behind this increase in the setup rate (from 250+ to 400+)? Thanks in advance for your response.

