Tinc crashes when a node with identical configuration is present twice
Jörg Weske
joerg.weske at gmail.com
Fri Jun 4 13:44:15 CEST 2010
Hello list,
we have been running tinc to connect multiple nodes without problems for
quite some time now. Thanks for this great piece of software!
Our configuration is as follows:
Two "supernodes" A and B running the tinc daemon are publicly reachable
from the internet. Node A is running Linux and has a static public IP
address. Node B is running Windows behind a firewall, reachable through
port forwarding and a DynDNS address.
All our tinc nodes connect to both supernodes. We are running tinc
1.0.13 in switch mode, IPv4-only, on all nodes. The tinc VPN network is
stable and fast.
Here are the contents of the config file we are using for our roaming
Windows nodes:
# name of the TAP-Win32 network interface on Windows
Interface=TAP-VPN
# use IPv4 only
AddressFamily=ipv4
# wait at most 60 seconds before reconnecting
MaxTimeout=60
# layer-2 (switch) mode
Mode=switch
# this node's name
Name=ROAMING_NODE_X
# outgoing connections to both supernodes
ConnectTo=SUPERNODE_A
ConnectTo=SUPERNODE_B
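For completeness: each roaming node also has the usual host files for
both supernodes in its hosts/ directory, roughly like this (the address,
port and key below are placeholders, not our real values):

# hosts/SUPERNODE_A
Address = supernode-a.example.com
Port = 655
-----BEGIN RSA PUBLIC KEY-----
...
-----END RSA PUBLIC KEY-----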
Today we observed an interesting scenario. By mistake, a node with
identical configuration was present twice inside our tinc network,
logged in from a different dial-up connection. (This happened after a
migration from an old PC to a new one, where the tinc directory was
simply copied to the new PC.)
This led to a crash of the tinc daemons on both supernodes within a
few dozen seconds of the second PC with the identical configuration
coming online. Any attempt to restart the daemons led to another crash
within a few minutes.
The following log entries appear right before the crash:
1275643910 tinc[31243]: Ready
1275644063 tinc[31243]: Error while translating addresses: ai_family not supported
1275644063 tinc[31243]: Got unexpected signal 8 (Floating point exception)
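As far as I can tell, signal 8 (SIGFPE) on Linux is usually raised by
an integer division or modulo by zero rather than by actual
floating-point math. Just to illustrate what kind of operation produces
this exact signal (this is not tinc code, only a minimal sketch):

#include <stdio.h>

int main(void) {
    volatile int packets = 10; /* some counter */
    volatile int divisor = 0;  /* e.g. an unexpectedly empty list */
    /* integer division by zero raises SIGFPE ("Floating point exception") */
    printf("%d\n", packets / divisor);
    return 0;
}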
I found the following message in the tinc list archives mentioning the
same error and crash, though apparently for a different reason:
http://www.mail-archive.com/tinc@tinc-vpn.org/msg00538.html
Although I understand that having the same node twice inside the
network is clearly a serious configuration error, maybe there is a way
to make tinc a bit more robust against such a situation? After all, our
mistake may accidentally occur in other setups as well.
We were only able to get our tinc network back online after
identifying the culprit and disabling one of the two nodes causing the
trouble.
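For anyone who runs into the same trap after copying a tinc directory
to a new machine: I assume the proper fix on the migrated PC is to give
it its own identity, roughly along these lines (the netname "ourvpn"
and the new node name are just examples):

# in tinc.conf on the new PC, pick a unique name:
#   Name = ROAMING_NODE_Y
# then generate a fresh RSA key pair for it:
tincd -n ourvpn -K
# and copy the resulting hosts/ROAMING_NODE_Y file to the supernodes
# before restarting the daemon.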
Thank you!
--
Best regards,
Jörg Weske