Very slow network speed using Tinc
Dariusz Bączkowski
dariusz.baczkowski at esyscoder.pl
Tue Oct 22 15:38:20 CEST 2013
On Tuesday, 22 October 2013 at 13:58:47, Florent Bautista wrote:
> Hello and thank you for your answer.
>
> On 10/21/2013 04:59 PM, Guus Sliepen wrote:
> > On Mon, Oct 21, 2013 at 04:08:26PM +0200, Florent Bautista wrote:
> >> Between 2 nodes, we have 150 Mbit/s network speed without Tinc (public
> >> IPv4 to public IPv4 using iperf), and only 3 Mbit/s using Tinc (private
> >> IPv4 to private IPv4).
> >
> > Which options did you use when running iperf?
>
> No options. Only -s for the server, and -c for the client. By default, it
> uses TCP. And my measurements are with the same hosts, same commands.
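
For reference, with classic iperf 2 the default TCP test boils down to
something like this (the server address is a placeholder):

    iperf -s                   # on the server
    iperf -c <server-address>  # on the client, TCP mode by default

It may also be worth retesting with a larger TCP window (the -w option),
since the default ~23 KByte window can limit throughput on links with
noticeable latency.
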
>
> >> Here is the configuration of Tinc we use:
> > [...]
> >
> >> MACExpire = 30
> >
> > Why did you lower this value?
>
> Because I use Tinc in a virtualized environment, and sometimes a VM moves
> from one host to another. So I reduced it to let Tinc learn the new path
> more quickly.
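
(For reference, if I read the tinc.conf man page correctly the default is

    MACExpire = 600

so 30 only makes tinc forget learned MAC addresses faster after a VM
migration; by itself it should not affect throughput.)
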
>
> >> And for each host :
> > [...]
> >
> >> Cipher = ECDHE-RSA-AES256-SHA384
> >
> > That is an invalid name for an encryption cipher; instead, that is a
> > name for a cipher suite. If you want to use AES256 as a cipher, use
> > "Cipher = aes-256-cbc".
>
> Ok. Where can I get the list of all available ciphers? Because I took it
> from: openssl ciphers -v 'AES+HIGH'
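
As a side note, I believe the names tinc expects for Cipher are OpenSSL's
cipher algorithm names rather than TLS cipher suites, so depending on the
OpenSSL version something like this should print the valid ones:

    openssl list-cipher-algorithms

"openssl ciphers", on the other hand, lists SSL/TLS cipher suites, which is
where ECDHE-RSA-AES256-SHA384 came from.
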
>
> >> Compression = 3
> >
> > Compression may or may not increase performance. Try leaving it out first.
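
(If I remember the man page correctly, the valid levels are roughly:

    Compression = 0        # off (the default)
    Compression = 1..9     # zlib, fast to best
    Compression = 10..11   # LZO, fast to best

so 3 means zlib compression on every packet, which costs CPU; on a fast LAN
it is usually better left at 0.)
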
> >
> >> Each node is Intel Core i7/Xeon powered.
> >
> > This should indeed be able to handle more than 100 Mbit/s.
> >
> >> Some details:
> >>
> >> The physical interfaces' MTU is 1500.
> >>
> >> The virtual interfaces' MTU is also 1500 (we need it for compatibility).
> >>
> >> Can IP fragmentation be the bottleneck? How can I be sure of that?
> >
> > Yes, that can be a bottleneck. However, normally tinc will
> > automatically detect the optimal path MTU between nodes, and will send
> > ICMP messages on the VPN or modify the MSS header of TCP packets so
> > that the sender will reduce the size of packets so they will not be
> > fragmented. However, if you send UDP packets larger than the path MTU
> > with the DF bit unset, then tinc has no choice but to fragment those
> > packets.
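
One way to check whether fragmentation is actually happening, assuming a
Linux ping that supports -M do (set the DF bit):

    ping -M do -s 1472 <peer-public-ip>   # 1472 + 28 header bytes = 1500

If that fails while smaller sizes succeed, the path MTU is below 1500 and
large packets must be fragmented; watching the physical interface with
tcpdump for fragments would confirm it.
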
> >
> > My own tests show that iperf over a tinc VPN saturates a 100 Mbit/s link.
> > However, if you run iperf in UDP mode, it limits the UDP bandwidth to
> > 1 Mbit/s by default. You can increase it, but the order of the command
> > line options is important:
> >
> > iperf -c host -u -b 150M
> >
> > If you use another order, it ignores your bandwidth setting.
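
Note that for the UDP test the server also has to be started in UDP mode,
so the full pair would be something like:

    iperf -s -u                      # server
    iperf -c <server> -u -b 150M     # client, -u before -b as noted above
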
> >
> >> If you need some other configuration values, let me know.
> >
> > It might be best to start with the default configuration parameters
> > first. Use only Mode, Name, Address and ConnectTo variables. If that
> > works fine, try adding other configuration statements until the
> > performance drops.
>
> Ok, so I made some tests:
>
> 2 empty Proxmox 3.1 VMs (Linux 2.6.32), connected to a 1 Gbit/s switch.
>
> Tinc 1.0.23 configuration:
>
> tinc.conf
>
> ConnectTo = host1 // Only on host2
> Mode = switch
> Name = host2
>
> hosts/host1 on host2:
>
> Address = 192.168.0.71
> // + RSA public key
>
>
> "real" network is 192.168.0.0/24. And Tinc layer 2 tunnel uses
> 10.111.0.0/16.
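
For completeness, I assume the host1 side is simply the mirror image,
something like this (the tinc-up script that assigns the 10.111.0.0/16
addresses is my guess):

    # host1: tinc.conf
    Mode = switch
    Name = host1

    # host1: hosts/host2 would just contain host2's RSA public key
    # (no Address needed, since only host2 does the ConnectTo)

    # tinc-up on both nodes (10.111.0.1 on host1, 10.111.0.2 on host2):
    ip link set $INTERFACE up
    ip addr add 10.111.0.1/16 dev $INTERFACE
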
>
> Here are the iperf tests:
>
> Without Tinc:
>
> root@host2:~# iperf -c 192.168.0.71
> ------------------------------------------------------------
> Client connecting to 192.168.0.71, TCP port 5001
> TCP window size: 23.8 KByte (default)
> ------------------------------------------------------------
> [ 3] local 192.168.0.72 port 40353 connected with 192.168.0.71 port 5001
> [ ID] Interval Transfer Bandwidth
> [ 3] 0.0-10.0 sec 1.08 GBytes 928 Mbits/sec
>
>
> With Tinc:
>
> root@host2:~# iperf -c 10.111.0.1
> ------------------------------------------------------------
> Client connecting to 10.111.0.1, TCP port 5001
> TCP window size: 23.2 KByte (default)
> ------------------------------------------------------------
> [ 3] local 10.111.0.2 port 34523 connected with 10.111.0.1 port 5001
> [ ID] Interval Transfer Bandwidth
> [ 3] 0.0-10.0 sec 104 MBytes 87.2 Mbits/sec
>
>
> It's better than on my production servers, but is it normal?
>
> I tested with and without Cipher, with and without Digest, with and
> without Compression, and always got the same results... I don't think
> that is expected :)
I use standard tinc settings and my tests look like this:
tinc version 1.0.19 (built Apr 22 2013 21:45:37, protocol 17)
[ 6] local 10.71.0.15 port 37975 connected with 10.71.0.1 port 5001
[ ID] Interval Transfer Bandwidth
[ 6] 0.0-10.0 sec 268 MBytes 224 Mbits/sec
[ 5] local 10.71.0.15 port 5001 connected with 10.71.0.1 port 47211
[ 5] 0.0-10.0 sec 246 MBytes 206 Mbits/sec
Without tinc:
[ 6] local 10.81.0.15 port 39846 connected with 10.81.0.1 port 5001
[ ID] Interval Transfer Bandwidth
[ 6] 0.0-10.0 sec 1.10 GBytes 943 Mbits/sec
[ 5] local 10.81.0.15 port 5001 connected with 10.81.0.1 port 48362
[ 5] 0.0-10.0 sec 1.10 GBytes 941 Mbits/sec
This is better than in your case, but I was also expecting more. The servers
are connected directly, without a switch. I suspect the problem is CPU power,
because tincd saturates one core at 100% during such transfers (Intel(R)
Xeon(R) CPU E5-2620 0 @ 2.00GHz).
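
To confirm that the tunnel is CPU-bound on your side as well, watching tincd
while iperf runs should make it obvious, for example (pidstat comes from the
sysstat package):

    pidstat -u -p $(pidof tincd) 1

or simply top; if one core sits at 100% for the whole transfer, the limit is
the userspace encryption and copying rather than the network.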
--
Best regards,
Dariusz Bączkowski
ESYSCODER Dariusz Bączkowski
e-mail: biuro at esyscoder.pl
tel.: +48 720 820 220
fax: +48 947 166 554
http://esyscoder.pl/
http://system-taxi.pl/