[strongSwan] Big packet loss under load

Roland Mas lolando at debian.org
Thu May 15 17:13:11 CEST 2014

Martin Willi, 2014-05-15 14:27:58 +0200 :

> Hi Roland,


>> These processes initiate a few hundred sockets between VMs and generate
>> some (reasonable) CPU load.
> Can you quantify this in more detail? What is the overall bandwidth
> used by that traffic?

> How much is that CPU load? Please be aware that usually Linux handles
> IPsec processing in the softirq routine of your NIC, i.e. it is bound
> to a single CPU core. The "si" column in "top" is usually a good
> indicator of how much load you actually have.
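  As a side note, one way to see whether softirq processing is pinned to
a single core (on a Linux host) is to compare the per-CPU NET_RX counters
in /proc/softirqs; a single column growing much faster than the others
suggests one core is doing all the receive-path work:

```shell
# Per-CPU header followed by the NET_RX softirq counters; one hot
# column means receive processing (incl. IPsec) is bound to that core.
head -1 /proc/softirqs
grep NET_RX /proc/softirqs
# "mpstat -P ALL 1" (from the sysstat package) shows the same thing
# live in its %soft column, if installed.
```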

  Load average doesn't go much beyond 2 (that particular VM has 26 cores
for itself), because the connections get dropped before more load can
happen.  However, the "si" column stays 0.0 on both the VMs and the
physical hosts.

> Is there any flow control (TCP) involved for that traffic, or are
> these processes just hammering out packets?

  They are TCP sockets, yes.  Mostly long-lived ones, so I believe flow
control has time enough to throttle what needs to be throttled.

>> I tried generating big traffic between phys* (by sending lots of data
>> from vm15 to vm25 and back, using netcat), but even with 50 MB/s going
>> across and back, I can't see any packet loss 
> When using netcat with TCP, flow control takes care that packet loss
> is minimal. You may try to switch to iperf with some larger UDP
> bandwidths to check if you can reproduce these losses. Also, if you
> have a few hundred sockets some special Netfiltering/Conntracking may
> slow things down compared to a single TCP stream?
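  For the record, a sketch of how I plan to test this, assuming iperf
(the classic iperf2 tool) is installed on both ends; vm15 and vm25 are
the hosts mentioned above, and 400M is an arbitrary starting bandwidth:

```shell
# On vm25 (receiver), start a UDP server:
iperf -s -u
# On vm15 (sender), push UDP through the tunnel and report loss
# per 1-second interval for 30 seconds (adjust -b as needed):
iperf -c vm25 -u -b 400M -i 1 -t 30

# To check whether conntrack is a factor, compare the current entry
# count against the table limit (paths valid when nf_conntrack is
# loaded):
cat /proc/sys/net/netfilter/nf_conntrack_count
cat /proc/sys/net/netfilter/nf_conntrack_max
```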

  I'll investigate these points, and report back if I find anything.

  Thanks a lot for your input!

Roland Mas

$ chown -R us:us your_base*
