Switched tinc VPN question
Valentin Bud
valentin at databus.ro
Wed Oct 31 10:26:09 CET 2012
Hello Guus,
Thank you for taking the time to answer my question. My answers and
some new questions follow inline.
On Mon, Oct 29, 2012 at 06:31:48PM +0100, Guus Sliepen wrote:
> On Mon, Oct 29, 2012 at 04:25:59PM +0200, Valentin Bud wrote:
>
> > My setup is as follows. I have a total of 4 servers. 2 of them are
> > directly connected with a 1 Gbps link. The other two are located
> > elsewhere and are connected via a WAN connection. I would mention
> > that the latter two servers are at the same Data Center provider and
> > have a 100 Mbps link between them.
> >
> > Each of those 4 servers has an OpenvSwitch instance and a few VLANs. I
> > want to extend the Layer 2 network over the WAN with VPN tunnels using
> > tinc because this would ease firewall management, address assignment and
> > enable VM migration.
>
> And I assume the two that are directly connected via gigabit also have their
> OpenvSwitch instances connected together via their LAN interfaces?
Yes, they are. The eth1 interfaces of the two servers are configured as
trunk ports on the OpenvSwitch instances.
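In case it is useful, this is roughly how the trunk ports look; something
along these lines (the bridge name and VLAN IDs are just examples, not the
real ones):

  # ovs-vsctl add-port br0 eth1
  # ovs-vsctl set port eth1 trunks=10,20,30

A port without a tag and without a trunks list passes all VLANs anyway, so
the trunks column is optional.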
>
> > In my first approach I have configured A to ConnectTo C and D, B to
> > ConnectTo C and D, C to ConnectTo A,B,D and D to ConnectTo A,B,C.
> [...]
> > Without STP, starting up the tinc daemons on all 4 machines results in a
> > broadcast storm. This was expected. Adding tinc as a port to the switch
> > basically turns the network into 4 switches connected to each other,
> > thus creating a loop.
>
> If the switches on A and B were only connected to each other via tinc, then
> there would be no broadcast storm.
>
> > Activating STP on all 4 OvS switches resulted
> > in endless STP Root Bridge election. So this approach, connecting all
> > four together, didn't work out.
>
> This is strange, that should just work... But check that each OvS instance has
> a unique MAC address, and/or give each instance a different priority.
It does work after all. I don't know what happened when I first tried this.
I reconfigured everything from scratch, and connecting all four servers via
tinc works. I have also forced A to become the Root Bridge via its Bridge
Priority.
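For reference, forcing A to win the election came down to enabling STP and
lowering the bridge priority on A; roughly like this (bridge name and
priority value are just examples, the lowest priority wins):

  # ovs-vsctl set bridge br0 stp_enable=true
  # ovs-vsctl set bridge br0 other_config:stp-priority=0x1000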
>
> > My second approach was to connect only A to C, C to D and D to B.
> > I have activated STP from start and after the Root Bridge election the
> > `tinc` port on B is in STP_BLOCK status, which is good. I have
> > connectivity throughout the entire network.
>
> How you set up ConnectTo's should not matter at all... there should be no
> difference between the first and the second approach.
>
> > There is also a third approach. Connecting A to C with one tinc tunnel,
> > C to D with another, and D to B with yet another. This would add some
> > complexity to the tinc setup because it requires one tunnel for each
> > pair of nodes I want to connect.
> >
> > My question is, which approach would be better? I am asking this because
> > in the second approach I have one `tinc` interface on node C that
> > connects A to C and C to D. If, for example, that interface goes into
> > STP_BLOCK status, no traffic will flow from A to C or from C to D.
>
> The tinc interface on node C should never get the STP_BLOCK status, since there
> is no loop that would allow a packet sent out via the tinc interface to come
> back on /another/ port on C's switch.
>
You are absolutely right. I had to revisit STP a little bit; I haven't used
it in quite some time.
> > Are my assumptions right or am I completely off track here? Is it really
> > necessary to have a `tinc` interface between each pair of nodes, or will
> > it work reliably with only one per node?
>
> In principle it should work with only one tinc interface per node.
You are right, it does work. I have some new questions though.
I found an old tinc mailing list thread in which you said that one can think
of tinc in switch mode as a switch without management. That helped me grasp
the concept better. Basically I have 4 L3 switches, all connected to that
unmanaged `tinc` switch via their `tinc` interfaces.
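To make that concrete, the relevant configuration per node is tiny: Mode =
switch in tinc.conf plus a tinc-up script that adds the tap interface to the
OvS bridge. A minimal sketch for node C (netname, node names and the bridge
name are examples from my lab, not anything canonical):

  /etc/tinc/vpn/tinc.conf:
    Name = c
    Mode = switch
    ConnectTo = a

  /etc/tinc/vpn/tinc-up:
    #!/bin/sh
    ip link set "$INTERFACE" up
    ovs-vsctl --may-exist add-port br0 "$INTERFACE"

Since tinc exports the tap device name in $INTERFACE, the same tinc-up works
on all four nodes.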
After I configured the network I was curious about performance, so I ran
iperf over the tinc VPN. I noticed that running iperf without any custom
options makes the VPN fall back to TCP because of the MTU.
iperf test 1
============
* iperf server on A, VPN interface - tinca, # iperf -s
* iperf client on C, VPN interface - tincc, # iperf -c A -m
### tinc debug log on C
...
Packet for tinca (10.128.3.55 port 655) length 1518 larger than MTU 1459
Packet for tinca (10.128.3.55 port 655) larger than minimum MTU
forwarding via TCP
...
### iperf client output
------------------------------------------------------------
Client connecting to 10.129.2.10, TCP port 5001
TCP window size: 23.2 KByte (default)
------------------------------------------------------------
[  3] local 10.129.2.13 port 39442 connected with 10.129.2.10 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  62.7 MBytes  52.4 Mbits/sec
[  3] MSS size 1448 bytes (MTU 1500 bytes, ethernet)
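If I read the debug log above correctly, the 1518 bytes are simply a
full-sized, 802.1Q-tagged Ethernet frame, which can never fit into the
1459-byte path MTU tinc negotiated:

  1500 (payload) + 14 (Ethernet header) + 4 (VLAN tag) = 1518 > 1459

so every full-sized frame in this test gets forwarded via TCP.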
iperf test 2
============
* iperf server on A, VPN interface - tinca, # iperf -s
* iperf client on C, VPN interface - tincc, # iperf -c A -M 1401 -m
### tinc debug log on C
It doesn't say anything about transporting over TCP, only tinc chatter
(PING/PONG) and broadcast traffic.
### iperf client output
------------------------------------------------------------
Client connecting to 10.129.2.10, TCP port 5001
TCP window size: 22.6 KByte (default)
------------------------------------------------------------
[  3] local 10.129.2.13 port 39444 connected with 10.129.2.10 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  95.5 MBytes  80.1 Mbits/sec
[  3] MSS size 1389 bytes (MTU 1429 bytes, unknown interface)
As the tests show, there is a gain of roughly 30 Mbps when the transport is
done over UDP, which is a very good thing. I would like to keep it this way
:).
If I run iperf with -M 1402 things go wrong again and the transport is done
over TCP. In case you're not familiar with iperf, -M sets the MSS.
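If my math is right, the 1401/1402 knife edge lines up exactly with the
1459-byte path MTU, assuming TCP timestamps are enabled (the Linux default),
which cost another 12 bytes of TCP options per segment, and an 802.1Q tag on
the frame as in the calculation above:

  -M 1401: 1389 data + 12 opts + 20 TCP + 20 IP + 14 Eth + 4 VLAN = 1459 (fits)
  -M 1402: 1390 data + 12 opts + 20 TCP + 20 IP + 14 Eth + 4 VLAN = 1460 (too big)

This also matches the MSS of 1389 bytes iperf reported above, just like the
1448 = 1500 - 40 - 12 reported in test 1.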
For performance reasons I don't want the VPN to switch to TCP.
Should I set the tinc interface MTU to 1459, the MTU negotiated by tinc?
I have already done that for testing, and it seems that if I change the
MTU of an interface that is part of an OpenvSwitch bridge, all the other
interfaces are set to the same MTU automatically.
The OpenvSwitch bridges will connect a handful of VMs. If I change the MTU
on all interfaces of the switch, I guess I should change it in the VMs as
well to avoid packet fragmentation.
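If lowering the MTU turns out to be the way to go, my plan would be to set
it from tinc-up so it always follows the tap device, and to push the same
value into the VMs; a rough sketch (1459 just mirrors the PMTU tinc
reported, eth0 in the VM is an example; the exact value probably needs to
account for the Ethernet and VLAN headers as well):

  ip link set "$INTERFACE" mtu 1459    (in tinc-up, on each host)
  ip link set eth0 mtu 1459            (inside each VM)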
Or should I not bother with this, because only a small percentage of the
traffic will trigger the switch to TCP mode? Maybe another good approach
would be to deploy tinc without MTU modifications, monitor it closely and
see how it behaves.
Thank you once again for your time.

Cheers & Goodwill,
Valentin Bud
>
> --
> Met vriendelijke groet / with kind regards,
> Guus Sliepen <guus at tinc-vpn.org>