
Ethernet Bridging on the cheap. Fail. Then Success with OTV

Intro
Some experiments just don’t work out. I became curious about a technology that has various names: ethernet bridging, wide-area VLANs, OTV, L2TP, etc. It looked like it could be done on the cheap, but that didn’t pan out for me. But later on we got hold of high-end gear that implements OTV and began to get it to work.

The details
What this is, essentially, is the ability to extend a subnet to a remote location. How cool is that? It can be useful for various reasons: a disaster recovery center that uses the same IP addressing, a strategic decision to move some but not all of the equipment on a particular LAN to another location, or just for the fun of it.

As with anything truly useful, there are open source implementations. I found OpenVPN, but decided against it: it does have a quite helpful page about creating an ethernet bridging setup, but when you install the product everything is organized around a client/server paradigm, which is really not what I had in mind for my application.

Then I learned about Astaro RED at the Amazon Cloud conference I attended. That’s RED as in Remote Ethernet Device. That sounded pretty good, but it didn’t seem quite what we were after. It must have looked good to Sophos as well, because as I was studying it, Sophos bought them! Astaro RED is more for extending an ethernet to remote branch offices.

More promising for cheapo experimentation, or so I thought at the time, is etherip.

Very long story short, I never got that to work out in my environment, which was SLES VM servers.

What seems to be the most promising solution, and the most expensive, is overlay transport virtualization (OTV), offered by Cisco in their Nexus switches. I’ll amend this post when I get a chance to see whether it worked or not!

December Update
OTV is beginning to work. It’s really cool seeing it in action for the first time. For instance, I have a server in South Carolina on an OTV subnet, IP 10.94.45.2. Its default gateway is in New Jersey! The gateway is in the server’s ARP table, as it has to be, but merely pinging the gateway produces this unusual time lag:

> ping 10.94.45.1

PING 10.94.45.1 (10.94.45.1) 56(84) bytes of data.
64 bytes from 10.94.45.1: icmp_seq=1 ttl=255 time=29.0 ms
64 bytes from 10.94.45.1: icmp_seq=2 ttl=255 time=29.1 ms
64 bytes from 10.94.45.1: icmp_seq=3 ttl=255 time=29.6 ms
64 bytes from 10.94.45.1: icmp_seq=4 ttl=255 time=29.1 ms
64 bytes from 10.94.45.1: icmp_seq=5 ttl=255 time=29.4 ms

See those response times? Huge. I ping the same gateway from a different LAN but same server room in New Jersey and get this more typical result:

# ping 10.94.45.1

Type escape sequence to abort.
Sending 5, 64-byte ICMP Echos to 10.94.45.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/0/1 ms
Number of duplicate packets received = 0

But we quickly stumbled upon a gotcha: large packets were killing us. It’s one thing to run OTV over dark fiber, which we know another customer is doing without issues; running it over an MPLS network is something else.

Before making any adjustments on our servers we saw behavior like the following:
– an initial ssh to a Linux server works OK, but the session soon freezes after a directory listing or executing other commands
– pings with the -s parameter set to anything greater than 1430 bytes failed – they didn’t get returned (a quick way to reproduce this is sketched just below)
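For what it’s worth, a quick way to probe the usable packet size from a Linux host is to send pings with the don’t-fragment bit set, so nothing along the path can quietly fragment them, and vary the payload size. Something like this, using the gateway IP from above (the exact cutoff will of course depend on your own path):

> ping -M do -s 1472 -c 3 10.94.45.1
> ping -M do -s 1402 -c 3 10.94.45.1

With -M do, a payload too large for the path either produces a “Frag needed” error or simply goes unanswered, so stepping -s up and down homes in on the largest size that gets through.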

So this issue is very closely related to a problem we observed on a regular segment where getvpn had just been implemented. That problem, which manifested itself as occasional IE errors, is described in some detail here.

Currently we don’t see our carrier being able to accommodate larger packets so we began to see what we could alter on our servers. On Checkpoint IPSO you can lower the MTU as follows:

> dbset interface:eth1c0:ipmtu 1430

The change happens immediately. But that’s not a good idea and we eventually abandoned that approach.

On SLES Linux I did it like this:

> ifconfig eth1 mtu 1430

On this platform, too, the change takes place right away.

By the way, we experimented and found that the largest MTU value we could use was 1430. At this point I’m not sure how to make this change permanent, but a little research should show how to do it.
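If memory serves, the SLES way to make it stick is to put an MTU line into the interface’s sysconfig file and bounce the interface – treat this as a sketch, since I haven’t verified it on our particular SLES release:

> echo "MTU='1430'" >> /etc/sysconfig/network/ifcfg-eth1
> ifdown eth1 && ifup eth1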

After changing this setting, our ssh sessions worked great, though now we can’t send pings with payloads larger than 1402 bytes (1430 minus the 20-byte IP header and 8-byte ICMP header).

The latest problem is that on our OTV segment we can ping one of the devices but not the other.

August 2013 update
Well, we are resourceful people, so yes, we got it running. Once the dust settled, OTV worked pretty well, with certain concessions. We had to be able to control the MTU on at least one side of the connection, which, fortunately, we always could. Load balancers, proxy servers, Linux servers – we ended up jiggering all of them to lower their MTU to 1420. For firewall management we ended up lowering the MTU on the centralized management station.

Firewalls needed further voodoo. After pushing policy, MSS clamping needs to be turned back on and acceleration turned off, like this (for Checkpoint firewalls):

$ fw ctl set int fw_clamp_tcp_mss 1
$ fwaccel off
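If I recall correctly, you can confirm the clamping parameter took hold with fw ctl get int, and since fw ctl set int does not survive a reboot, the usual Checkpoint way to persist it is an entry in $FWDIR/boot/modules/fwkern.conf – double-check both points against your Checkpoint release:

$ fw ctl get int fw_clamp_tcp_mss
$ echo "fw_clamp_tcp_mss=1" >> $FWDIR/boot/modules/fwkern.conf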

Conclusion
Preserving IPs during a server move can be a great benefit, and OTV permits it. But you’d better have a talented staff to overcome the hurdles that will accompany this advanced technology.


The IT Detective Agency: Two of our sites got cut off!

Intro
I sometimes consult for the networking group of a large company. This incident really happened. I don’t know that it could ever happen again to anyone else, but it’s so bizarre that I just had to document it as an example of “you wouldn’t believe it unless you had actually been through it yourself.”

Let’s get into it
This company has lots of small and mid-sized offices connected via MPLS. WAN services are provided by a single telecom throughout the country. I feel obliged to not divulge specifics here. Let’s call the telecom “OE” as in over-extended.

So just before lunch yesterday they tell me that no PCs at one of their sites can access the Internet, and this information is coming from a very reliable source. It also comes out that a second site is similarly affected. It kind of sounds like a WAN problem, but no other sites are affected. In the old days you’d almost certainly know to look at the WAN, but these days it’s a little more complicated. Everyone’s PC is in AD, and the desktop group can push a GPO to all PCs at a site, so you never know whether they had a hand in messing things up.

So they tell me they can PING their corporate Intranet server. Fine. But they cannot telnet to port 80. Newsflash: how did they get telnet enabled in Windows 7? I mentally stored that question for my continuing education. Crises are also great learning/teaching moments, if you are of that frame of mind!

Ping is good. Of course I test the Intranet server myself, iwww.intranet. I can reach port 80 just fine. It happens that the front-end for iwww.intranet is a load balancer. I decide to do a trace using tcpdump. I’m not sure what I’ll find, but taking a trace is sort of a gut reaction in these cases. There’s lots of other traffic so we have to use a filter to see the tree in the forest. They give me the IP of the PC they’re testing from. My expression is something like this:

> tcpdump -i 0.0 host WKSTATION.AD.INTRANET

The 0.0 on this particular device is its way of saying use any and all interfaces.

Here’s what the output looks like:

11:51:03.852511 IP WKSTATION.AD.INTRANET > iwww.intranet: ESP(spi=0xd7ca145d,seq=0x3de0a5), length 100
11:51:05.855446 IP WKSTATION.AD.INTRANET > iwww.intranet: ESP(spi=0xd7ca145d,seq=0x3de101), length 100
11:51:06.187940 IP WKSTATION.AD.INTRANET > iwww.intranet: ESP(spi=0xd7ca145d,seq=0x3de10b), length 100
11:51:08.857957 IP WKSTATION.AD.INTRANET > iwww.intranet: ESP(spi=0xd7ca145d,seq=0x3de16d), length 100
11:51:09.184072 IP WKSTATION.AD.INTRANET > iwww.intranet: ESP(spi=0xd7ca145d,seq=0x3de179), length 100
11:51:09.858865 IP WKSTATION.AD.INTRANET > iwww.intranet: ESP(spi=0xd7ca145d,seq=0x3de198), length 100
11:51:14.855327 IP WKSTATION.AD.INTRANET > iwww.intranet: ESP(spi=0xd7ca145d,seq=0x3de272), length 100
11:51:15.183349 IP WKSTATION.AD.INTRANET > iwww.intranet: ESP(spi=0xd7ca145d,seq=0x3de27f), length 100
11:52:19.898380 IP WKSTATION.AD.INTRANET > iwww.intranet: ESP(spi=0xd7ca145d,seq=0x3decab), length 100
11:52:22.901684 IP WKSTATION.AD.INTRANET > iwww.intranet: ESP(spi=0xd7ca145d,seq=0x3ded28), length 100

I had never seen that before. I look up ESP and see it corresponds to IP protocol 50, which is used by IPsec for VPN connections.

What the…?

Yeah. Let’s review. He’s sending TCP packets to port 80 of iwww.intranet, and all I’m seeing are these ESP packets, which aren’t even TCP. Of course the load balancer has no idea whatsoever what to do with those packets and simply does not respond to them.
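Incidentally, on an ordinary Linux box you could make that contrast explicit with a capture filter along these lines (the -i any is just the stock-Linux stand-in for this device’s -i 0.0); it matches either the web traffic we expected or the ESP, i.e. IP protocol 50, packets we actually got:

> tcpdump -nn -i any 'host WKSTATION.AD.INTRANET and (tcp port 80 or ip proto 50)'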

What would you do? I felt the nail in the coffin would be to take a trace on the PC itself – to see what the packets look like as they leave the PC. But to be honest they never did make that trace. They didn’t have Wireshark installed, so it would have taken a while.

Meanwhile, the infrastructure folks are talking to each other, and someone mentions that OE has a certain project, “runVPN”, that they’re rolling out. Now that sounds suspicious. In an imperfect world you have to work with what you have, not what you’d like to have. Based on our experience and educated hunches, we now feel pretty certain it’s got to be a WAN problem caused by OE.

Within an hour OE confirms the problem is of their creation, and they have it fixed. They are very unhappy with the tech who caused it.

Conclusion
Sometimes things are what they appear to be. If you notice I didn’t do much to really help with the issue, and that’s all just about anyone can do when so much is outsourced. They did feel that my trace helped to convince the telecom that this really was their issue.

I guess they were encrypting WAN traffic on one end, but forgot to decrypt it on the other end. One of the strangest things to have on a production network.

Turning on telnet in Windows 7
Did I mention that they tested with telnet on Windows 7? They later explained how to enable it: go to Control Panel / Programs / Programs and Features / Turn Windows features on or off. There is an option for Telnet Client. Reboot (yes, you really need to reboot for this to take effect) and you’re good to go.
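For what it’s worth, I believe the same thing can be done from an elevated command prompt on Windows 7, which saves the clicking around – this is from memory, so double-check the feature name:

> dism /online /Enable-Feature /FeatureName:TelnetClient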