Intro
Some experiments just don’t work out. I became curious about a technology that goes by various names: ethernet bridging, wide-area VLANs, OTV, L2TP, etc. It looked like it could be done on the cheap, but that didn’t pan out for me. Later on, though, we got hold of high-end gear that implements OTV and began to get it working.
The details
What this is about is the ability to extend a subnet to a remote location. How cool is that? It can be useful for various reasons: a disaster recovery center, for instance, which uses the same IP addressing; a strategic decision to move some, but not all, equipment on a particular LAN to another location; or just for the fun of it.
As with anything truly useful, there are open source implementations. I found OpenVPN, but decided against it because it is built around a client/server model, which is not what I had in mind for my application. OpenVPN does have a quite helpful page about creating an ethernet bridging setup, but when you install the product it is all about the client/server paradigm.
Then I learned about Astaro RED at the Amazon Cloud conference I attended. That’s RED as in Remote Ethernet Device. It sounded pretty good, but it didn’t seem quite what we were after. It must have looked good to Sophos as well, because as I was studying it, Sophos bought them! Astaro RED is more for extending an ethernet to remote branch offices.
More promising for cheapo experimentation, or so I thought at the time, is etherip.
Very long story short, I never got that to work in my environment, which consisted of SLES VM servers.
What seems to be the most promising solution, and the most expensive, is overlay transport virtualization (OTV), offered by Cisco in their Nexus switches. I’ll amend this post when I get a chance to see whether it worked or not!
December Update
OTV is beginning to work. It’s really cool seeing it for the first time. For instance, I have a server in South Carolina on an OTV subnet, IP 10.94.45.2, whose default gateway is in New Jersey! The gateway is in its ARP table, as it has to be, but merely pinging the gateway produces this unusual time lag:
> ping 10.94.45.1
PING 10.94.45.1 (10.94.45.1) 56(84) bytes of data.
64 bytes from 10.94.45.1: icmp_seq=1 ttl=255 time=29.0 ms
64 bytes from 10.94.45.1: icmp_seq=2 ttl=255 time=29.1 ms
64 bytes from 10.94.45.1: icmp_seq=3 ttl=255 time=29.6 ms
64 bytes from 10.94.45.1: icmp_seq=4 ttl=255 time=29.1 ms
64 bytes from 10.94.45.1: icmp_seq=5 ttl=255 time=29.4 ms
See those response times? Huge. I ping the same gateway from a different LAN in the same New Jersey server room and get this more typical result:
# ping 10.94.45.1
Type escape sequence to abort.
Sending 5, 64-byte ICMP Echos to 10.94.45.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/0/1 ms
Number of duplicate packets received = 0
But we quickly stumbled upon a gotcha: large packets were killing us. It’s one thing to run OTV over dark fiber, which we know another customer is doing without issues; running it over an MPLS network is something else.
Before making any adjustments on our servers we found behaviour like the following:
– an initial ssh to a Linux server works OK, but the session soon freezes after a directory listing or executing other commands
– pings with the -s parameter set to anything greater than 1430 bytes failed – they didn’t get returned (a quick way to try this yourself is sketched below)
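If you want to run that kind of test yourself, here is a quick sketch using the don’t-fragment flag of Linux ping (the target is just our gateway from above; adjust the addresses and sizes to your environment). The -s value is the ICMP payload, so add 28 bytes of IP and ICMP headers to get the size of the packet on the wire:

> ping -M do -s 1402 10.94.45.1   # 1430-byte packet, should succeed on our path
> ping -M do -s 1472 10.94.45.1   # full 1500-byte packet, expect an error or silence

With -M do the kernel refuses to fragment, so the oversized probe fails outright rather than being quietly fragmented along the way.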
So this issue is very closely related to a problem we observed on a regular segment where GET VPN had just been implemented. That problem, which manifested itself as occasional IE errors, is described in some detail here.
Currently we don’t see our carrier being able to accommodate larger packets, so we began to look at what we could alter on our servers. On Checkpoint IPSO you can lower the MTU as follows:
> dbset interface:eth1c0:ipmtu 1430
The change happens immediately. But that’s not a good idea and we eventually abandoned that approach.
On SLES Linux I did it like this:
> ifconfig eth1 mtu 1430
On this platform, too, the change takes effect right away.
We experimented and found that the largest MTU value we could use was 1430. At this point I’m not sure how to make this change permanent, but a little research should show how to do it.
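For what it’s worth, the usual way to make this persistent on SLES appears to be an MTU entry in the interface’s ifcfg file (an assumption on my part, so verify the file name and syntax against your own system):

> echo "MTU='1430'" >> /etc/sysconfig/network/ifcfg-eth1   # persist across reboots
> ifdown eth1; ifup eth1                                   # re-read the config

I haven’t verified that on our servers yet; treat it as a pointer rather than a recipe.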
After changing this setting, our ssh sessions worked great, though now we can’t send pings with payloads larger than 1402 bytes (1430 minus the 28 bytes of IP and ICMP headers).
The latest problem is that on our OTV segment we can ping one device but not another.
August 2013 update
Well, we are resourceful people, so yes, we got it running. Once the dust settled, OTV worked pretty well, with certain concessions. We had to be able to control the MTU on at least one side of the connection, which, fortunately, we always could. Load balancers, proxy servers, Linux servers: we ended up jiggering all of them to lower their MTU to 1420. For firewall management we ended up lowering the MTU on the centralized management station.
Firewalls needed further voodoo. After pushing policy, MSS clamping needs to be turned back on and acceleration turned off, like this (for Checkpoint firewalls):
$ fw ctl set int fw_clamp_tcp_mss 1
$ fwaccel off
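To double-check that the settings took, these standard Checkpoint commands should show the current state (exact output varies by version):

$ fw ctl get int fw_clamp_tcp_mss
$ fwaccel stat

Also note that fw ctl set changes normally don’t survive a reboot; the usual place to make them permanent is $FWDIR/boot/modules/fwkern.conf, though check what your Checkpoint version expects.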
Conclusion
Preserving IPs during a server move can be a great benefit, and OTV permits it. But you’d better have a talented staff to overcome the hurdles that accompany this advanced technology.