Using Tinc to create overlay network for VMs or LXC containers?
IT Developer
developer at it-offshore.co.uk
Sat Sep 27 16:52:23 CEST 2014
To connect LXC containers to remote OpenVZ containers I set up the LXC
host as a router listening on a globally routable IPv6 address. The
containers connect to the router first and then communicate with each
other directly over non-globally-routable private IPv6 ULA addresses
<https://www.sixxs.net/tools/grh/ula/>.
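
For illustration, here is a minimal sketch of the router side. The netname
"vpn", the node name "router", the documentation address 2001:db8::1 and the
ULA prefix fd12:3456:789a::/48 are placeholders, not my actual values:

  # /etc/tinc/vpn/tinc.conf on the LXC host acting as router
  Name = router
  AddressFamily = ipv6

  # /etc/tinc/vpn/hosts/router (this host file is copied to every container)
  Address = 2001:db8::1            # globally routable IPv6 address the containers connect to
  Subnet = fd12:3456:789a::1/128   # the router's private ULA address inside the overlay

  # /etc/tinc/vpn/tinc-up on the router
  #!/bin/sh
  ip -6 addr add fd12:3456:789a::1/48 dev $INTERFACE
  ip link set $INTERFACE up
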
This setup avoids the overhead and problems of NAT and is reasonably
secure, with services listening on non-public IPv6 addresses. Using Tinc
1.1pre10 with ExperimentalProtocol = yes and AutoConnect = 3 gives a
robust connection. You will find that Tinc 1.1 automatically generates
host files and ECDSA keys for other machines it discovers. The clients
will ConnectTo any machine they have a host file for without this needing
to be set in tinc.conf, so the clients only need a single ConnectTo for
the router.
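
On the container side only the router's host file plus something like the
following is needed (again, the names and addresses are just placeholders):

  # /etc/tinc/vpn/tinc.conf on a container
  Name = container1
  ExperimentalProtocol = yes
  AutoConnect = 3
  ConnectTo = router               # the only ConnectTo that needs to be set by hand

  # /etc/tinc/vpn/hosts/container1
  Subnet = fd12:3456:789a::101/128

  # /etc/tinc/vpn/tinc-up on the container
  #!/bin/sh
  ip -6 addr add fd12:3456:789a::101/48 dev $INTERFACE
  ip link set $INTERFACE up

Host files for the other containers show up in /etc/tinc/vpn/hosts/ on their
own once the node has learned about them.
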
With Tinc 1.1 the only thing I found I couldn't do was bind the daemons to
an IP address or interface. Restricting listening to IPv4 or IPv6 works
just fine (AddressFamily = ipv6). I also found the latency to be much
better with Tinc 1.1.
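
If your build includes the 1.1 control CLI, you can check what a node has
learned and whether the containers have direct connections to each other
with something like (netname "vpn" as in the sketch above):

  tinc -n vpn dump nodes
  tinc -n vpn dump edges
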
Stuart.
On 09/27/2014 06:36 AM, raul wrote:
> Ok, I just tried this: additional containers on the same network
> need a 'ConnectTo' to work. I tried the LocalDiscovery option too, but
> to no avail. I guess this topology is unusual, without a clear use case
> beyond connecting at the container rather than the host level and
> trying to keep as much of the networking in the containers as possible.
>
> This is the scenario, and the objective is to keep as much of the
> networking state on the containers as possible, so the host is
> relatively untouched.
>
> 1. Host A with public IP 1.2.3.4 has 5 containers on a NAT bridge
> 10.0.3.0/24
>
> 2. Host B with public IP 2.3.4.5 has 5 containers on a NAT bridge
> 10.0.4.0/24
>
> 3. I do a basic install of Tinc on two containers, one on each side -
> Container A and Container B - give them 10.0.0.1 and 10.0.0.2, and
> set Tinc on Con B to connect to Tinc on Con A (via port 655 udp/tcp
> forwarded to 1.2.3.4).
>
> 4. I port forward 655 udp/tcp on Host A (1.2.3.4) so Tinc on Con B can
> connect to Tinc on Con A.
>
> 5. At the end of this Con A and Con B can ping each other.
>
> 6. Now suppose that on Host A I install Tinc on one more container -
> Container C - give it 10.0.0.3, and share the host files of Con A and B
> with it.
>
> 7. Con A and Con B cannot ping Con C until I add a 'ConnectTo' on
> Con C pointing to Con A at Con A's NAT IP 10.0.3.4 (sketched below,
> after these steps).
>
> 8. It's only at this point that Con A, Con B and Con C can all connect
> to each other.
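
For reference, steps 4 and 7 roughly correspond to the following (the node
names conA/conC and the netname "vpn" are placeholders; 1.2.3.4 and 10.0.3.4
are the addresses from the steps above):

  # On Host A: forward port 655 to Con A's bridge address (step 4)
  iptables -t nat -A PREROUTING -d 1.2.3.4 -p tcp --dport 655 -j DNAT --to-destination 10.0.3.4:655
  iptables -t nat -A PREROUTING -d 1.2.3.4 -p udp --dport 655 -j DNAT --to-destination 10.0.3.4:655

  # /etc/tinc/vpn/tinc.conf on Container C (step 7)
  Name = conC
  ConnectTo = conA

  # /etc/tinc/vpn/hosts/conA as stored on Container C
  Address = 10.0.3.4        # Con A's bridge address, reachable directly from Con C on the same host
  Subnet = 10.0.0.1/32      # Con A's overlay address
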
>
> I guess this can be a mesh overlay network, but it is also complex, and
> there may be some tricks to make it work more seamlessly. It may have
> some uses, but for normal use cases it is perhaps best to just install
> Tinc on Hosts A and B, which will in any case enable the containers to
> connect to each other seamlessly.
>
> On Fri, Sep 26, 2014 at 2:22 AM, Etienne Dechamps
> <etienne at edechamps.fr> wrote:
>
> On Thu, Sep 25, 2014 at 8:50 PM, raul <raulbe at gmail.com> wrote:
> > On the discovery: can it be taken for granted that, without a
> > 'ConnectTo', new Tinc instances on either side in this context will
> > autodiscover each other on the same host? Are there any additional
> > settings like 'LocalDiscovery' that need to be enabled?
>
> Depends on how you set up the actual tinc graph topology. There are
> situations (depending on what the physical IP addresses look like from
> each node's perspective) in which LocalDiscovery might be required for
> nodes to talk to each other directly without having to use more
> central nodes as relays.