help with routing and multiple subnets
Patrick E. Bennett, Jr.
patrick at pebcomputing.com
Sun Apr 4 02:18:27 CEST 2010
Gents, first of all thanks for tinc, and thanks in advance for your advice.
At the risk of revealing my stupidity and opening myself to ridicule....
I am trying to connect a new remote LAN into an existing tinc VPN whose
central tinc server is at 10.57.132.1 on the 10.57.132.0/24 subnet. I set up
this central tinc server myself, along with several other remote LANs linked
to it, and it has all been working well for years.
The challenge is that this new remote LAN's primary subnet is not within
10.57.0.0/255.255.0.0, and I cannot change it because the LAN also connects
to another VPN link (not via tinc) in the 192.168.0.0 range. I am, however,
in control of most other aspects of the remote LAN. I have been playing with
this in "the lab" (i.e. my home network) and have successfully gotten my own
firewall machine connected to the central tinc server and able to reach other
machines on the central subnet. However, I cannot figure out how to get the
"lab" clients on the 192.168.0.0 range to reach the hosts on the central tinc
subnet. I thought it would just be a matter of setting some routes, but I am
not having any luck with that.
Here's the current configuration:
Lab Subnet: 192.168.254.0/24
Lab Server ifconfig (ppp0->eth0, br0 & br1->eth1):
br0       Link encap:Ethernet  HWaddr 00:1a:92:c3:01:93
          inet addr:192.168.254.1  Bcast:192.168.254.255  Mask:255.255.255.0
          inet6 addr: fe80::21a:92ff:fec3:193/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:27379 errors:0 dropped:0 overruns:0 frame:0
          TX packets:33103 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:2653473 (2.5 MiB)  TX bytes:22621129 (21.5 MiB)

br1       Link encap:Ethernet  HWaddr 02:70:4c:25:17:27
          inet addr:10.57.137.1  Bcast:10.57.137.255  Mask:255.255.255.0
          inet6 addr: fe80::70:4cff:fe25:1727/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:342 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:71382 (69.7 KiB)

c4svpn    Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
          inet addr:10.57.137.1  P-t-P:10.57.137.1  Mask:255.255.0.0
          UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500  Metric:1
          RX packets:1125 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1866 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:217507 (212.4 KiB)  TX bytes:150135 (146.6 KiB)

eth0      Link encap:Ethernet  HWaddr 00:40:05:07:25:81
          inet6 addr: fe80::240:5ff:fe07:2581/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:23385 errors:0 dropped:0 overruns:0 frame:0
          TX packets:19821 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:10944214 (10.4 MiB)  TX bytes:2751561 (2.6 MiB)
          Interrupt:16 Base address:0x2000

eth1      Link encap:Ethernet  HWaddr 00:1a:92:c3:01:93
          inet6 addr: fe80::21a:92ff:fec3:193/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:27980 errors:0 dropped:0 overruns:0 frame:0
          TX packets:33486 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:3175695 (3.0 MiB)  TX bytes:22848113 (21.7 MiB)
          Interrupt:21 Base address:0x2000

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:15780 errors:0 dropped:0 overruns:0 frame:0
          TX packets:15780 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:13855339 (13.2 MiB)  TX bytes:13855339 (13.2 MiB)

ppp0      Link encap:Point-to-Point Protocol
          inet addr:<public ip>  P-t-P:<public ip>  Mask:255.255.255.255
          UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1492  Metric:1
          RX packets:22267 errors:0 dropped:0 overruns:0 frame:0
          TX packets:18577 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:3
          RX bytes:10066693 (9.6 MiB)  TX bytes:2204983 (2.1 MiB)
Lab Server route table:
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
<public ip>     0.0.0.0         255.255.255.255 UH    0      0        0 ppp0
192.168.254.0   0.0.0.0         255.255.255.0   U     0      0        0 br0
10.57.137.0     0.0.0.0         255.255.255.0   U     0      0        0 br1
10.57.0.0       0.0.0.0         255.255.0.0     U     0      0        0 c4svpn
0.0.0.0         0.0.0.0         0.0.0.0         U     0      0        0 ppp0
Central Tinc Server is at 10.57.132.1
Central Tinc Server's route table:
Destination     Gateway           Genmask         Flags Metric Ref    Use Iface
<public subnet> 0.0.0.0           255.255.255.248 U     0      0        0 eth0
10.57.132.0     0.0.0.0           255.255.255.0   U     0      0        0 br2
10.57.0.0       0.0.0.0           255.255.0.0     U     0      0        0 c4svpn
0.0.0.0         <public gateway>  0.0.0.0         UG    0      0        0 eth0
The Lab Server's tincd connects to the Central Tinc Server and can
ping/telnet/ssh etc. to any host on 10.57.132.0/24.
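In case it matters, the Lab Server's tinc configuration is roughly the
following (reconstructed from memory, so file names and exact contents are
approximate; "lab" and "central" are placeholder node names):

    # /etc/tinc/c4svpn/tinc.conf on the Lab Server (approximate)
    Name = lab
    ConnectTo = central
    Device = /dev/net/tun

    # /etc/tinc/c4svpn/hosts/lab (approximate) - at the moment only the
    # 10.57.137.0/24 side is advertised
    Address = <lab public ip>
    Subnet = 10.57.137.0/24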
The Lab Server is doing NAT for the 192.168.254.0/24 subnet (it doesn't seem
to matter whether NAT is enabled only for 192.168.254.0 or for both it and
10.57.132.0). Internet access for the lab clients through the NAT is working.
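The NAT is a plain iptables masquerade, roughly this (paraphrased from my
firewall script, so the exact rules may differ slightly):

    # forwarding is enabled on the Lab Server
    sysctl -w net.ipv4.ip_forward=1
    # masquerade the lab clients out the PPPoE link (approximate)
    iptables -t nat -A POSTROUTING -s 192.168.254.0/24 -o ppp0 -j MASQUERADE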
The Lab clients receive IP addresses in the 192.168.254.0/24 subnet (which
cannot be changed).
The Lab clients can ping the Lab Server's tinc IP address (i.e. 10.57.137.1).
The Lab clients cannot ping or otherwise reach the server or clients on the
other side of the VPN (10.57.132.1, .2, .3, etc.).
I have tried (the exact commands are collected below):
    * On the central tinc server: "route add -net 192.168.254.0 netmask
      255.255.255.0 gw 10.57.137.1" and/or "route add -net 192.168.254.0
      netmask 255.255.255.0 dev c4svpn". Neither seemed to help; pinging
      192.168.254.1 still yields "Destination Net Unknown".
    * Setting a route on a Lab client, for example a Windows machine, with
      "route add 10.57.132.0 mask 255.255.255.0 10.57.137.1". This fails
      straight away, with Windows complaining that "the route addition
      failed".
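Spelled out, those attempts look like this (the first two run on the Central
Tinc Server, the last on a Windows lab client):

    # on the Central Tinc Server (10.57.132.1)
    route add -net 192.168.254.0 netmask 255.255.255.0 gw 10.57.137.1
    route add -net 192.168.254.0 netmask 255.255.255.0 dev c4svpn

    # on a Windows lab client (fails with "the route addition failed")
    route add 10.57.132.0 mask 255.255.255.0 10.57.137.1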
Can one of you provide me the super mojo to make the clients on
192.168.254.0 able to communicate with the hosts on 10.57.132.0, or convince
me that this is a stupid idea that will never work?
Again, thanks in advance,
Patrick