IDEA: How to change the mesh without changing config files.
Vladislav
dvlad666 at hotbox.ru
Tue Mar 2 00:45:33 CET 2010
Dear tinc authors and maintainers!
AFAIK tinc uses the hosts listed in the config files only to join the mesh. That means that if I, say, have two main 'servers' with public IPs as the main ConnectTo points for all the 'client' nodes, and I replace those two, I need (MUST) to reconfigure the config files on every node.
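To illustrate, here is roughly how such a setup looks today (all names and addresses below are made up):

    # tinc.conf on a 'client' node
    Name = client7
    ConnectTo = server1
    ConnectTo = server2

    # hosts/server1, a copy of which sits on every node that connects to it
    Address = 198.51.100.10    # hard-coded public IP: if it changes, every copy must be edited
    Port = 655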
So the question is: can this be worked around with a single DNS name that has multiple IP addresses behind it? Try 'nslookup rbc.ru' and you'll see it resolves to several different IPs, with DNS round robin rotating them in a circle.
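Just as an illustration (the name mytinc.example.net is made up, 655 is tinc's default port), a plain getaddrinfo() call already hands back the whole set of addresses; round robin only rotates their order:

    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netdb.h>

    int main(void) {
        struct addrinfo hints, *res, *ai;
        char ip[64];

        memset(&hints, 0, sizeof hints);
        hints.ai_family = AF_UNSPEC;      /* both IPv4 and IPv6 */
        hints.ai_socktype = SOCK_STREAM;

        if (getaddrinfo("mytinc.example.net", "655", &hints, &res) != 0)
            return 1;

        /* print every address registered under the name */
        for (ai = res; ai; ai = ai->ai_next)
            if (getnameinfo(ai->ai_addr, ai->ai_addrlen, ip, sizeof ip,
                            NULL, 0, NI_NUMERICHOST) == 0)
                printf("%s\n", ip);

        freeaddrinfo(res);
        return 0;
    }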
My idea is to register one DNS name with as many IP addresses behind it as I have 'servers', and maybe also some 'clients' with public IPs. The point is that a single DNS name will have all servers behind it, so the benefits are:
1) All nodes' configuration files will use this same name and resolve it to one of the IPs (see the config sketch right after this list).
2) No config files need to be changed even if all nodes with public IPs are replaced with new ones.
3) It's possible to mess up the ConnectTo settings in such a way that during a cold start (after all nodes were switched off) the mesh splits into islands because some topology-critical linking nodes are down. But if there is at least one node with a public IP and you register it under the common DNS name, you'll be able to link your split network back together. (For this to work, see issue 1 below.)
3*) That means that if all 'servers' are down and the network has lost its connections, fixing it is as easy as creating an extra node somewhere with a public IP and registering it in DNS. Since DNS entries are cached, this will get the network up and running again not immediately, but after 5-15 minutes. No further action required!
4) In fact, there is no need to maintain ConnectTo links at all, i.e. no need to touch the config files of the 'clients'. (Some clients can be grandparents behind NAT :) )
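To make 1) and 2) concrete, here is a rough sketch of what I have in mind (again, all names are made up, and I'm simply assuming the shared name can be put on the Address line of a single 'entry' host definition that every node connects to):

    # tinc.conf on any node (Name differs per node, of course)
    Name = client7
    ConnectTo = entry

    # hosts/entry, identical on every node
    Address = mytinc.homeip.net    # the shared name with all public IPs behind it
    Port = 655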
What will tinc's behavior be in this case? I see the following potential problems:
1) The DNS name may resolve to a wrong or offline IP. In that case it either needs to be re-resolved (DNS round robin will put another address first) or, better, it should be resolved once to the full list of IPs, and that list should then be tried one by one (see the sketch right after this list).
2) The DNS name may resolve to the node's own IP address, and if that address is still current (has not changed), tinc will end up trying to connect to itself.
3) Even if one "good" IP is found and the connection succeeds, will tinc keep connecting to the other ones? It really needs to, because if two servers connect only to each other, and two other servers connect only to each other, the two halves will never form a full mesh.
4) How many IPs can be registered behind a single DNS name? If the above issues are resolved, I think even 5 would be fully enough for an extremely stable mesh, because as long as at least one node with a public IP exists (which means connections are possible at all), it will be registered under the name, and all other nodes will find it sooner or later.
5) Do you know any companies/hosters/etc. that provide such a service? I assume it won't be free, but what I want is either:
a) A single DNS name with multiple dynamic DNS names behind it, like mytinc.homeip.net backed by mytinc-1.homeip.net, mytinc-2.homeip.net, etc. Each individual node should be updatable as a normal dynamic entry, and the main name mytinc.homeip.net should carry all of their addresses. That means every 'server' is only responsible for updating its own host record.
OR
b) A single DNS name with a FIFO queue of IP addresses and dynamic update support. That means every 'server', when it starts up or its IP changes (or every X minutes), sends its current IP to the DNS record, which keeps a FIFO queue of the N most recent IPs. This would be the ideal solution, because the DNS name would always resolve to the last N IPs of the freshest reachable nodes. That way the network could grow to any size: every new node would find the previous ones via the DNS name, keeping the mesh unsplit, and if it has a public IP it would send it to the DNS name in turn. So there would be no need to maintain host<->record mappings at all.
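Just to show what I mean in issue 1, here is a rough sketch (plain getaddrinfo()/connect(), not tinc code; connect_to_any is just a name I made up) of resolving the shared name once and then walking through every returned address until one answers:

    #include <string.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netdb.h>

    /* Resolve the shared name once and try every address behind it, one by one
       (issue 1). A real implementation would also have to skip addresses that
       belong to the local machine (issue 2) and keep making further connections
       after the first success (issue 3). */
    int connect_to_any(const char *name, const char *port) {
        struct addrinfo hints, *res, *ai;
        int fd = -1;

        memset(&hints, 0, sizeof hints);
        hints.ai_family = AF_UNSPEC;
        hints.ai_socktype = SOCK_STREAM;

        if (getaddrinfo(name, port, &hints, &res) != 0)
            return -1;

        for (ai = res; ai; ai = ai->ai_next) {
            fd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
            if (fd == -1)
                continue;
            if (connect(fd, ai->ai_addr, ai->ai_addrlen) == 0)
                break;              /* reached one of the public nodes */
            close(fd);
            fd = -1;
        }

        freeaddrinfo(res);
        return fd;                  /* -1 if no address was reachable */
    }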
What do you guys think?
Regards, Vlad.