[strongSwan] Locking contention or delay in the Charon process under high load

Chinmaya Dwibedy ckdwibedy at yahoo.com
Mon Apr 14 11:30:30 CEST 2014

Hi Martin,
Using the pthread_setaffinity_np() API to pin threads to
different cores, I find the tunnel setup rate to be 400+ per second (maximum)
without any packet loss at either end. Without setting processor affinity,
only one core gets used (at 100%) and the setup rate was found to be 250
(max). I think pinning helps improve the locality of memory access, balances
the load, and achieves parallelism. Please correct me if I am wrong. Note
that Wind River Linux (SMP) runs on all 16 cores (multi-core 64-bit MIPS
processor).
At 400+ TPS under peak load, the output of the top command
shows that the charon process (i.e., the main thread) is using more than 100%
(200-600%) CPU.
	1. To investigate per-thread CPU usage on Linux (64
threads configured at both ends), I used top with the -H option and found
that almost all threads are consuming less than 10% CPU.
	2. The CPU usage of the charon process is spread
reasonably evenly over all of its threads. This spread implies that no single
thread has a particular problem. Although the application is allowed to use
most of the available CPU, approximately 85% of the total CPU is idle,
meaning that some point of contention or delay in the charon process should
be identified.
I profiled (using the perf tool) some threads at both ends
(on the hosts under test) to figure out the bottleneck. The profile shows
that most of the time is spent in __pthread_rwlock_rdlock(). Any clue what
might be the issue? Any suggestions are welcome. I also compiled with
--enable-lock-profiler and ran with --nofork with 250k IPsec sessions, but
during daemon shutdown it does not print the cumulative time waited in each
lock to stderr with 250k sessions. It does print with 1k sessions.

More information about the Users mailing list