[strongSwan] Throughput on high BDP networks
jsullivan at opensourcedevel.com
Sun May 31 08:19:43 CEST 2015
> On May 31, 2015 at 1:14 AM "jsullivan at opensourcedevel.com"
> <jsullivan at opensourcedevel.com> wrote:
>
>
>
>
> > On May 30, 2015 at 10:42 PM "jsullivan at opensourcedevel.com"
> > <jsullivan at opensourcedevel.com> wrote:
> >
> >
> > > On May 30, 2015 at 6:01 PM Noel Kuntze <noel at familie-kuntze.de> wrote:
> > >
> > >
> > >
> > >
> > > Hello John,
> > >
> > > It is likely that this is caused by insufficient crypto processing
> > > capability on either side, so packets are dropped once the
> > > transmit/receive buffers fill up. A solution to this problem is to
> > > distribute the workload over several CPU cores using pcrypt[1].
> > >
> > > [1] https://wiki.strongswan.org/projects/strongswan/wiki/Pcrypt
> > >
> > > Mit freundlichen Grüßen/Kind Regards,
> > > Noel Kuntze
> > >
> > > GPG Key ID: 0x63EC6658
> > > Fingerprint: 23CA BB60 2146 05E7 7278 6592 3839 298F 63EC 6658
> >
> > This looks like exactly what I need, and I do see the dropped packets in
> > ip -s link ls. However, I am utterly confused about what to use for the
> > tcrypt options. I am using esp=aes128gcm8-modp1024, which I would think
> > means using the example given in the article, i.e.,
> >
> > modprobe tcrypt alg="pcrypt(rfc4106(gcm(aes)))" type=3
> >
> > but I see no improvement with that even after re-establishing the tunnel
> > (ipsec down, then ipsec up). I'm guessing my way through /proc/crypto, but
> > am really only guessing. There doesn't seem to be much information on
> > using this. Any guidance would be appreciated - John
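[Editor's note: the confusion above is between the algorithm name, rfc4106(gcm(aes)), and the concrete driver names that implement it; /proc/crypto lists both, and the invocation that ends up working later in the thread wraps a specific driver (rfc4106-gcm-aesni) rather than the generic name. A sketch of pulling the registered drivers for this algorithm out of /proc/crypto-formatted output; the sample excerpt and priority values are illustrative, not from the hosts in question:]

```shell
# List driver names registered for the rfc4106(gcm(aes)) algorithm, the
# candidates for wrapping as pcrypt(<driver>). Sample /proc/crypto
# excerpt inline (priorities illustrative); on a real host read the file.
sample='name         : rfc4106(gcm(aes))
driver       : rfc4106-gcm-aesni
priority     : 400
name         : rfc4106(gcm(aes))
driver       : rfc4106(gcm_base(ctr(aes-aesni),ghash-clmulni))
priority     : 90'
printf '%s\n' "$sample" | awk -F' *: ' '
    $1 == "name"                                { n = $2 }
    $1 == "driver" && n == "rfc4106(gcm(aes))"  { print $2 }'
```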
> >
>
> I've been able to sort it out with some guesswork and snooping through
> /proc/crypto (modprobe tcrypt alg="pcrypt(rfc4106-gcm-aesni)" type=3), but
> the results are a little disappointing. I can see the multiple kworker
> threads spread across all 12 cores of these fairly high-powered systems, but
> I am still dropping packets and performance is not much improved. Any further
> suggestions?
> Thanks - John
>
I've hit another ugly problem. I tried to set this up so it would work at boot:

echo tcrypt >> /etc/modules
echo 'options tcrypt alg="pcrypt(rfc4106-gcm-aesni)" type=3' \
    > /etc/modprobe.d/tcrypt.conf

and it causes a kernel panic as soon as we attempt to send traffic through the
tunnel - every single time.

I have not had much time to gather information, but it looks even worse. I
removed the above lines so that pcrypt does not load on boot. After booting, I
stopped ipsec, did the modprobe, started ipsec, brought up the connection and,
as soon as I tried to pass traffic: kernel panic. If I bring ipsec up without
pcrypt, establish the connection, create some traffic, down the connection,
stop ipsec, do the modprobe, start ipsec, and bring the connection up again,
it then works. Is this a known problem? I'm running Debian Wheezy on both
hosts:

root at lcppeppr-labc02:~# uname -a
Linux lcppeppr-labc02 3.16.0-0.bpo.4-amd64 #1 SMP Debian
3.16.7-ckt9-3~deb8u1~bpo70+1 (2015-04-27) x86_64 GNU/Linux

Thanks - John
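[Editor's note: the manual sequence described above, which is reported to work, can be written out as a dry-run sketch. The run wrapper only prints each command so the order can be inspected; remove it to execute for real, and note that "mytunnel" is a placeholder connection name:]

```shell
# Dry-run sketch of the working order described above. The run wrapper
# only prints each command; remove it to execute for real. "mytunnel"
# is a placeholder connection name.
run() { echo "+ $*"; }

run ipsec up mytunnel      # 1. establish the SA without pcrypt, pass traffic
run ipsec down mytunnel    # 2. tear the connection down again
run ipsec stop
run modprobe tcrypt alg='pcrypt(rfc4106-gcm-aesni)' type=3
run ipsec start
run ipsec up mytunnel      # 3. the SA now uses the pcrypt instance
```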
> > >
> > > On 30.05.2015 at 23:57, jsullivan at opensourcedevel.com wrote:
> > > > Hello, all. We are attempting to use strongSwan on a fast network (1
> > > > Gbps CIR on one side and 4x10 Gbps on the other) with about 80 ms
> > > > latency, so a pretty high bandwidth-delay product. The traffic is
> > > > GRE/IPsec. Our benchmarks show we can saturate the 1 Gbps side with
> > > > just GRE, sustaining high-800s to low-900s Mbps. When we activate
> > > > IPsec, we plummet to around 40 Mbps - maybe we'll hit 400 Mbps on
> > > > occasion.
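[Editor's note: for scale, the bandwidth-delay product quoted above works out to roughly 10 MB, the amount of data a single TCP flow must keep in flight to fill the pipe. A quick shell-arithmetic check using the figures given:]

```shell
# Back-of-the-envelope bandwidth-delay product for the path above:
# 1 Gbps bottleneck side, ~80 ms round-trip time.
rate_bps=1000000000
rtt_ms=80
bdp_bytes=$(( rate_bps / 8 * rtt_ms / 1000 ))
echo "$bdp_bytes bytes in flight to fill the pipe"   # 10000000, i.e. ~10 MB
```

Any sustained loss, such as drops at the crypto layer, collapses the congestion window far below this, which is consistent with the throughput collapse described next.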
> > > >
> > > > This seems to be a TCP windowing problem provoked by TCP segment
> > > > retransmissions. When we use nstat between runs, GRE shows virtually
> > > > no segment retransmissions, whereas GRE/IPsec shows thousands. The GRE
> > > > tunnel MTU is 1412, so it should be fine for both transport and tunnel
> > > > mode.
> > > >
> > > > The sanitized config is:
> > > >
> > > > type=transport
> > > > esp=aes128gcm8-modp1024
> > > >
> > > > leftprotoport=47
> > > > rightprotoport=47
> > > > dpddelay=9
> > > > dpdtimeout=30
> > > > compress=yes
> > > >
> > > > keyingtries=20
> > > > keylife=60m
> > > > rekeymargin=5m
> > > > ikelifetime=3h
> > > > mobike=no
> > > >
> > > > authby=rsasig
> > > > rightrsasigkey=%cert
> > > >
> > > > nat_traversal=yes
> > > > charonstart=yes
> > > > plutostart=yes
> > > >
> > > >
> > > >
> > > > We are using Intel cards, with igb on one side and ixgbe on the other.
> > > >
> > > > What do we need to do to eliminate the lost packets, and where can we
> > > > see the drops? I don't see them on any queues or qdiscs - no stats
> > > > showing packet drops.
> > > > Thanks - John
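[Editor's note: one more place worth checking, beyond qdisc and interface statistics: the kernel keeps per-stage drop counters for the XFRM (IPsec) path in /proc/net/xfrm_stat when built with CONFIG_XFRM_STATISTICS. A sketch that flags any nonzero counter; the sample data inline is made up for illustration:]

```shell
# Flag any nonzero XFRM drop counter. Sample data inline; on a real
# host: awk '$2 > 0' /proc/net/xfrm_stat (the 12 below is made up).
sample='XfrmInError             0
XfrmInBufferError       12
XfrmInNoStates          0
XfrmInStateProtoError   0
XfrmOutError            0'
printf '%s\n' "$sample" | awk '$2 > 0 { print $1, $2 }'
```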
> > > > _______________________________________________
> > > > Users mailing list
> > > > Users at lists.strongswan.org
> > > > https://lists.strongswan.org/mailman/listinfo/users
> > >