[strongSwan] Need to know the cause behind the enhanced IKE tunnel setup rate
    Chinmaya Dwibedy 
    ckdwibedy at yahoo.com
       
    Fri Apr  4 12:53:36 CEST 2014
    
    
  
Hi Martin/All,
I was able to achieve an IKE setup rate of 400+ with 250k IPsec tunnels (encryption algorithm: AES, DH group 1, integrity algorithm: SHA1). There is no packet loss at either end (checked via #netstat -s -udp and confirmed). The changes below enhanced the setup rate.
1)	Configured 64 threads at both ends and pinned 4 threads to each core using the pthread_setaffinity_np() API. I believe this improves the locality of memory accesses, balances the load, and achieves parallelism. Note that Wind River Linux (SMP) runs on all 16 cores (multi-core 64-bit MIPS processor).
2)	At both ends, configured the following in strongswan.conf:
processor {
    priority_threads {
        high = 1
        medium = 16
        critical = 16
    }
}
a)	Reserved 16 threads for IKE_SA_INIT processing, i.e., an IKE_SA_INIT packet arriving at the UDP socket is assigned to an idle receiver thread (one of these 16 threads), which pre-parses the message and then queues a job with the scheduler for further processing.
b)	Reserved 16 threads for dispatching from sockets.
3)	At the IKE initiator end, reduced the sender threads from 10 to 5 to avoid overly aggressive IKE connection initiation and lock contention.
Please note that if I do not use pthread_setaffinity_np() to distribute the threads across cores, the tunnel setup rate peaks at around 250, and there is packet loss at both ends.
Could you please let me know the technical reasons behind this increase in setup rate (from 250+ to 400+)? Thanks in advance for your response.
Regards,
Chinmaya