Categories
Linux

An SSH Terminal App for the HP Touchpad

October, 2016 Update
Needless to say, the HP Touchpad never caught on and mine is collecting dust.

What I just got is the new 8″ Amazon Fire HD Tablet. You can get a free, good-quality ssh client for it called serverauditor. The keyboard emulation is good (the Linux CLI needs all those unusual keys pretty badly). The battery life is genuinely good – better than the Touchpad. It’s $89.

I’ll keep the blog post below online for historical purposes.

Updated Version
My previous post got out-of-date so rapidly that I have to start this topic all over! DO NOT follow my previous advice.

The Bluetooth Keyboard – It’s Worth It
My Bluetooth keyboard came in. It’s really awesome. I advise getting it if you want to treat your Touchpad (the cognoscenti prefer TP) like a Netbook from time-to-time, namely, by having the ability to type rapidly and comfortably. Get the HP one made for the Touchpad because:
– it’s small like you’d expect as a companion for a small tablet computer, yet the keys are full size
– it has some really convenient shortcut keys so you’re not spending too much time shifting your hand from keyboard to screen, namely:
— volume controls
— screen on/off
— even a key that shows your cards
– plus some keys that do stuff that’s harder with just a TP
— Ctrl (control) key, yeah!
— arrow keys
— mute key
— screen brightness/dimmer keys
— plus other keys I haven’t tested yet
– and the : and / keys are primary keys like they should be

So far I’m missing:
– a Home/End key
– and, if I ever get my terminal working again, an ESC (escape) key

All-in-all I’d say the Bluetooth keyboard is an obviously well-engineered product – a perfect pairing for the TP.

It’s $45 at Amazon. And yes, I am writing this blog entry on my new keyboard!

I also bought an off-brand display stand, by Mivizu. It’s better than NOT having one, but it’s kind of flimsy and awkward – in no way a fun and beautifully engineered companion to the TP, unlike the iPad case that everyone likes to play with.

What About that SSH Terminal?
I probably messed things up with Preware alpha/beta software. They have released an Xterm, but I arrived at it from various previous upgrades and either Xecutah or the XServer is not working for me. The XServer does not launch a new card like it should. See below.

I will probably have to Web Dr my device (start from a factory install state), which they warn you should be prepared to do when using test software. Live and learn. I have not had time to do that yet, but I wanted to delete my old post and get the new facts out here before others went down the wrong path.

So, briefly, ssh, bash and an xterm into the underlying Linux on your TP are all available from the webos-internals.org site.

On Sep 29th I saw an upgrade for Xecutah, Servers and xterm – to v0.9.3. I did the upgrade and, to my surprise, I am back in business again! The xterm launches once again and so I do not have to Web Dr my Touchpad.

I thought I owed it to the community to experiment, so I decided to change root’s shell to bash! That’s right, the shell is that old /bin/sh by default. Once you’ve installed it, bash appears in /opt/bin/bash. Well…that worked too. I now have a comfortable shell that launches for me when I fire up my xterm, or xterms. Naturally I brought over my .bashrc file – using sftp, of course – with its familiar prompt definition and convenience aliases such as the universal “ll” for ls -l. To make really sure I hadn’t blown up my Touchpad, I rebooted. Yes, reboot from your shell really does work to reboot your TP! And yes, it came back with flying colors.
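For anyone who wants to do the same, here is roughly what that looked like. Treat this as a sketch from memory – I’m assuming bash landed in /opt/bin/bash as it did for me, that the stock image lacks a chsh command, and that your passwd line may differ slightly:

# sanity check: make sure bash actually runs before making it the login shell
/opt/bin/bash --version

# then hand-edit root's entry in /etc/passwd, changing the final field
# from something like:  root:x:0:0:root:/:/bin/sh
# to:                   root:x:0:0:root:/:/opt/bin/bash
vi /etc/passwd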

Esc key in the xterm for real Keyboard Users
I don’t think the HP keyboard has an escape key, not that I can find anyway. So you’re in a bit of a bind if you use it for your xterm during a vi editing session. What you can do is momentarily bring up the virtual keyboard by hitting the, um, keyboard key. Xecutah now comes with instructions on how to generate the escape key on the virtual keyboard (hold t, choose the right-most character, then “[” as your next character), which work. Then, when you’ve got your Esc, which you don’t need too often anyway, hit that keyboard key again to get back to using your comfy real keyboard.

So I am a happier camper once again. I even contributed to webos-internals. You should, too, if you think they’re providing a valued service as I do.

To be continued.

Categories
Admin DNS IT Operational Excellence

The IT Detective Agency: How We Neutralized Nasty DNS Clobbering Before it Could Bite Us

This gets a little involved. But if you’re the IT expert called on to fix something, you better be able to roll up your sleeves and figure it out!

In this article, I described how some, but not all, ISPs change the results of DNS queries in violation of Internet standards.

A Proxy PAC for All
This work was done for an enterprise. They want everyone to use a proxy PAC file whose location was to be (obfuscating the domain name just a little here) http://webproxy.intranet.drjohnstechtalk.com/proxy.pac. Centralized large enterprises like this sort of thing because the proxy settings are controlled in the one file, proxy.pac, by the central IT department.

So two IT guys try this PAC file setting on their work PCs on their home networks. The guy with Comcast as his ISP reports that he can surf the Internet just fine at home. I, with CenturyLink, am not so successful. It takes many minutes before an eventual timeout seems to occur and I cannot surf the Internet as long as I have that PAC file configured. But I can always uncheck it and life is good.

Now along comes a new requirement. This organization is going to roll out VPN without split tunneling, and the initial authentication to that VPN is a web page on the VPN switch. Now we have a real problem on our hands.

With my ISP, I can shut off the PAC file, get to the log-on page, establish VPN, but at that point if I wanted to get back out to the Internet (which is required for some job functions) I’d have to re-establish the PAC file setting. Furthermore it is desirable to lock down the proxy settings so that users can’t change them in any case. That makes it sound impossible for Centurylink customers, right?

Wrong. By the way the Comcast guy had this whole scenario working fine.

The Gory Details
This enterprise organization happened to have chosen legitimately owned but unused internal namespace for the PAC file location, analogous to webproxy.intranet.drjohnstechtalk.com in my example. I reasoned as follows. Internet Explorer (“IE”) must quickly learn in the Comcast case that the domain name of the PAC file (webproxy.intranet.drjohnstechtalk.com) resolves with an NXDOMAIN, and so it must fall back to making DIRECT connections to the Internet. For the unfortunate soul with CenturyLink (me), the domain name is clobbered! It does resolve, and to an active web site. That web site must produce an HTTP 404 not found. At least you’d think so. Today it seems to produce a simplified PAC file, which I am totally astonished by, and I wonder if this is more recent behaviour introduced in an attempt to ameliorate this situation.

In any case, I reasoned that if they were clobbering a non-existent DNS record, we could actually define this domain name, but instead of going through the trouble of setting up a web server with the PAC file, just define the domain name as the loopback interface, 127.0.0.1. There’s no web server to connect to, so I hoped the browser would quickly detect this as a bad PAC URL, go on its way to make DIRECT connections to the VPN authentication web site, and then, once the VPN was established, use the PAC file again actively to permit the user to surf the Internet. And, furthermore, that this should work for both kinds of users: ones with DNS-clobbering ISPs and ones without.
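To make that concrete, the fix amounts to a single A record in the enterprise’s internal DNS. In BIND zone-file syntax it would look something like the following (using my obfuscated example name; the enterprise’s actual DNS product may differ):

webproxy.intranet.drjohnstechtalk.com.    IN    A    127.0.0.1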

That’s a lot of assumptions in the previous paragraph! But I built the case for it – it’s all based on reasonable extrapolation from observed behaviour. More testing needs to be done. What we have seen so far is that this DNS entry does no harm to the Comcast user. Direct Internet browsing works, VPN log-in works, Internet browsing post-login works. For the CenturyLink user the presence of this DNS entry permitted the browser of the work PC to surf the Internet very readily, which is already progress. VPN was not tested but I see no reason why it wouldn’t work.

More tests need to be done but it appears to be working out as per my educated guess.

April 2012 Update
Our fix seemed to collapse like a house of cards all-of-a-sudden many months later. Read how instead of panicking, we re-fixed it using our best understanding of the problems and mechanisms involved. The IT Detective Agency: Browsing Stopped Working on Internet-Connected Enterprise Laptops

Conclusion
We found a significant issue with DNS clobbering as practiced by some ISPs in an enterprise-class application: VPN. We found a work-around after taking an educated guess as to what would work – defining webproxy… to resolve to 127.0.0.1. We could have also changed the domain name of the PAC file – to one that wouldn’t be clobbered – but that was set by another group and so that option was not available to us. Also, we don’t yet know how extensive DNS clobbering is at other ISPs. Perhaps some clobber every domain name which returns an NXDOMAIN flag. That’s what Google’s DNS FAQ seems to imply at any rate. A more sensible approach may have been to migrate to the auto-detect proxy setting, but that’s a big change for an enterprise and they weren’t ready to do that. A final concern: what if the PC is running a local web server because some application requires it? That might affect our results.

Case: just about solved!

References
A related case of Verizon clobbering TCP reset packets is described here.

Categories
DNS IT Operational Excellence

DNS Clobbering – How ISPs Twist DNS Replies

Intro
Some ISPs have taken advantage of missing or broken DNS records, using them as an excuse to guide users to their own pages. From an Internet purist’s point-of-view this is bad behavior. I call it DNS clobbering.

In my article Google’s DNS Servers Rock! I mentioned that some ISPs provide a questionable feature that alters the results of DNS queries in unexpected ways, to their advantage.

In DNS, if a domain name doesn’t exist the response should have the “no such domain” (NXDOMAIN) flag set. It’s that simple. So for instance I look for a resource record with the name webproxy.drjohnstechtalk.com:

dig webproxy.drjohnstechtalk.com

; <<>> DiG 9.7.1-P2 <<>> webproxy.drjohnstechtalk.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 26054
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0

;; QUESTION SECTION:
;webproxy.drjohnstechtalk.com.  IN      A

;; AUTHORITY SECTION:
drjohnstechtalk.com.    10800   IN      SOA     ns71.domaincontrol.com. dns.jomax.net. 2011040901 28800 7200 604800 86400

See the NXDOMAIN and the ANSWER: 0? That's what I want to see for a non-existent domain name such as this. So all is good with my nameserver (in this case supplied by Amazon Cloud Northeast).

Now let's try that at home, where I have CenturyLink as my ISP. Lo and behold, I get a different answer, a completely different answer. I would normally have to be on their network to see it, and I currently am not, so instead I will query their DNS server 207.14.188.36 directly. I get:

dig www.xyzaabc.com @207.14.188.36

; <<>> DiG 9.3.2 <<>> www.xyzaabc.com @207.14.188.36
; (1 server found)
;; global options:  printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 1394
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;www.xyzaabc.com.               IN      A

;; ANSWER SECTION:
www.xyzaabc.com.        60      IN      A       184.106.31.182

;; Query time: 46 msec
;; SERVER: 207.14.188.36#53(207.14.188.36)
;; WHEN: Thu Sep 01 22:46:04 2011
;; MSG SIZE  rcvd: 64

When you use a web browser the browser is initiating these types of queries for you. So if you mistakenly enter the URL www.drjohns.drsjohntechtalk.com in your browser I would like you to get a browser-generated page-not-found error. With CenturyLink that doesn't happen. They assign any unresolvable domain name which begins with www or web an IP address that points you to a search page on their own web server!

I'm sure they would argue that this is done as a convenience for the user, but I'm a user, too, and I don't like this trick of theirs. I'm sure it earns them a bit of revenue as well. I expect ISPs to follow the rules and the rules are pretty clear in this case.

Not all ISPs do this, by the way. A colleague with Comcast as his ISP did some DNS queries for me. The results showed that Comcast was not clobbering these types of resource records.

And it gets worse than that. I actually witnessed an enterprise application that behaved completely differently depending on whether an ISP played this sort of trick or not. And that's nasty.

It's hard for me to get more data except through cooperating customers of other ISPs. Try a few queries for these fictitious domain names and leave a comment with your results and what ISP you use:

www.xyzaabc.com
webproxy.xyzaabc.net
abc.xyzaabc.us
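On a Linux system with dig you might check all three at once with something like this little loop (just a convenience wrapper around the same kind of dig queries shown above – a status of NXDOMAIN with ANSWER: 0 is the honest result; NOERROR with an answer means your ISP is clobbering):

for name in www.xyzaabc.com webproxy.xyzaabc.net abc.xyzaabc.us; do
  dig $name | grep -E 'status:|ANSWER:'
done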

If you don't have a nice home Linux system or cygwin containing dig, you can even use nslookup on a Windows OS. From a CMD window:

nslookup www.xyzaabc.com

Results

ISP               Clobbers DNS?   DNS Server tested   Date     Example Clobber
CenturyLink       YES             207.14.188.36       2011     www.xyzaabc.com returns 72.32.218.57
Comcast           NO              unknown             6/2011   NA
Amazon Cloud NE   NO              172.16.0.23         8/2011   NA

The Amazon Cloud had better not clobber DNS. That is a server environment, and servers may be affected much more than individual users if they get wrong DNS results back.

Categories
Internet Mail IT Operational Excellence Spam

How to Stop Chinese Spam – for Mail Admins, w/ June 2014 update

(Updated 12/19/2011 and 6/2014 with additional character sets)
(updated 9/2012 with additional signature)
Intro
I have been a target for random Chinese language spam in my various email accounts, but the problem has really gotten worse in the past few months.

The thing about these messages is that, at first, Postini (a Google spam filtering service used mostly by businesses) wasn’t very good at catching them. Postini is about the best in the business, and they’re competently catching just about every other type of spam. But these Chinese character messages kept slipping through…

Their support tech gave me some advice which turned out to be incorrect, but led me in the right direction. He told me to create a content manager rule, but the actual rule he provided was only going to catch Russian and Ukrainian spam!

This is the rule he provided:

Rule Name: Non_English_spam
"Match Any"
Header - matches regex

koi8-r|koi8-u|koi7|koi8
Disposition: delete (blackhole)
Set quarantine to Recipient

I had no idea what that was doing, so I looked up koi8-r, koi8, etc and found that it had to do with the Cyrillic alphabet. So I wondered if the Chinese language spams have something similar, but for Chinese. Indeed they do: gb2312. Looking at a few of my Chinese spams, almost all contain this string in the headers. It’s not always in the exact same place, but it’s there. To be concrete, here’s an example (some headers have been obfuscated to prevent the bad guys from trying to reverse engineer Postini’s scoring algorithms):

Received: from websmtp.sohu.com ([61.135.132.136]) by eu1sys200amx108.postini.com ([207.126.147.10]) with SMTP;
		 Sun, 28 Aug 2011 18:41:21 GMT
Received: from omlbw (unknown [110.53.27.141])
		 by websmtp.sohu.com (Postfix) with ESMTPA id 9B3C6720CEA;
		 Sun, 28 Aug 2011 23:55:04 +0800 (CST)
Message-ID: <[email protected]>
From: =?gb2312?B?y7O1wsf4xu/A1rbguabE3NfU0NCztdPQz965q8u+?= <[email protected]>
To: 
Subject: =?gb2312?B?d3Azz/ogytsg1vcgudwg1/Yg0KkgIMqyIMO0IA==?=
		 =?gb2312?B?uaQg1/cgssUgxNwgzOEgIMn9INK1ILyoIKO/LS0=?=
		 =?gb2312?B?qIk=?=
Date: Sun, 28 Aug 2011 23:55:37 +0800
MIME-Version: 1.0
X-mailer: Lzke 2
X-SOHU-Antispam-Bayes: 0
X-pstn-levels:     omitted
X-pstn-settings: omitted
X-pstn-addresses: from <[email protected]> [49/2] 

Content-Type: multipart/mixed;
		 boundary="----=_NextPart_000_015A_013AC9FA.1A2D5A60"

------=_NextPart_000_015A_013AC9FA.1A2D5A60
Content-Transfer-Encoding: base64
Content-Type: text/html;
		 charset="gb2312"

See it? charset=”gb2312″ appears in the content-type header and =?gb2312? appears in both the Subject and From fields.

That message looks like this as displayed in my mail client:

How do I know this is Chinese? I pasted the characters into translate.google.com and it auto-detected it. That’s a convenient tool!

How do I know it is spam? I am open-minded. Perhaps it is a legitimate business proposition that just happens to be written in Chinese? It does sort of read that way from the translation of any one such message. On the other side are some stronger pieces of evidence. The empty To: header is a strong hint, but some legitimate messages could contain that undesirable feature, so it is merely an indicator, not definitive. Most important is the fact that I get these messages all showing similar patterns in appearance and, most telling, always coming from a different sender. That tells me unambiguously that this is really, truly spam.

So the actual Postini Content Manager rule to capture Chinese spam is this:

Rule Name: Chinese_spam
"Match Any"
Header matches regex (charset="gb2312"|=\?GB2312\?)

Disposition: delete (blackhole)
Set quarantine to Recipient

Obviously this type of rule is a bit dangerous. What if you are expecting something written in Chinese? It will be subject to the same treatment as the spam. That is why the suggestion is to Set quarantine to recipient so that these messages could be delivered from the user quarantine.

And over the course of a couple months Postini has gotten much better about capturing this type of spam. That is the best thing – to let the experts handle it. They just needed to train their algorithms. I was quite concerned at first that this spam is so different from the usual, recognizable spam campaigns that they might have a hard time spotting it while simultaneously allowing the good Chinese email through. But they’re almost there…

12/19 Update
The filter described above has been working extremely well for me. Essentially perfectly, in fact, as I can see when I look in my quarantine. But not today. Today some suspected Chinese spam got in, and examining the headers showed something slightly different. The subject looks like this:

Subject: =?GBK?B?bnZ2dyAyMDExLjEyLTIwMTItMDEgvqsgxrcgzcYgz/ogIGZkZXI=?=

And the Mime header also had that string:

Content-Type: text/plain;
		 charset=GBK

Looking up the GBK character set, you’ll immediately see it is simplified Chinese, extended. So I think we better add that character set to our expression. It makes our content manager rule only a little more complicated. Now we would have:

Rule Name: Chinese_spam
"Match Any"
Header matches regex (charset="gb(k|2312)"|=\?GB(K|2312)\?)

Disposition: delete (blackhole)
Set quarantine to Recipient

For the complete prescription see the summary in the Conclusion.

If you happened upon this article and don’t have the Postini service, is there any relevance? Yes, I think so. You should be able to filter on the message headers to look for the string =?gb2312? or =?gbk? at the beginning of the subject line. To speak about mailers with which I have some experience, in sendmail you could do this with a milter. In PureMessage it would be possible to concoct an appropriate rule as well.
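As a rough illustration outside of any particular filtering product, here’s how you might test a saved raw message against the same header pattern from a shell. This is just a sketch – message.eml is a hypothetical file holding the full message source, headers included:

# -E extended regex, -i case-insensitive, -q quiet (exit status only)
grep -Eiq 'charset="?gb(k|2312)"?|=\?gb(k|2312)\?' message.eml && echo "matches the Chinese spam signature"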

9/2012 Update
My filter was working so well these past few months I essentially forgot about the problem, but the occasional Chinese spam slipped through. How? It used a different encoding. Here is an example subject line:

Subject: =?utf-8?B?6K+35p+l5pS277yB?=

This is displayed by my mail client as three Chinese characters followed by “!” They used a different encoding. This one drove me to do a little research. This is an Encoded-Word, according to Wikipedia’s excellent MIME writeup. The “?B?” in the front means base64 encoding. I had previously written a mimedecoder in perl, which I put to use:

> mimedecode 6K+35p+l5pS277yB

which produces:

???!

which is pretty much garbage. So I decided to analyze the output with unix utility od:

> mimedecode 6K+35p+l5pS277yB|od -x

which gives

0000000 e8af b7e6 9fa5 e694 b6ef bc81

Next, I needed a UTF-8 converter, which I found at this Swiss site.

I used it with input type hexadecimal.

The results reproduced exactly the Chinese characters my mail client displayed to me! It also gives a lot of other descriptions for these characters (such as Cangjie). The first few lines begin:

As character names:

U+8BF7 CJK UNIFIED IDEOGRAPH character (请)
U+67E5 CJK UNIFIED IDEOGRAPH character (查)
U+6536 CJK UNIFIED IDEOGRAPH character (收)
U+FF01 FULLWIDTH EXCLAMATION MARK character (!)

As raw characters:

请查收!

Well, that was an interesting exercise, but I’m not sure we’ve learned anything that can be put to use in a regex like the original expression. Unless there’s a way to uniquely identify Chinese characters by the beginning of the encoded-word sequence following the ?B?. I had my doubts, but since I don’t seem to get these UTF-8 emails from other sources, and I have a sample size of about five emails that fooled the other filter to work with, I have developed a content filter which captures all of them!

Check for a header containing the RegEx:

=\?utf-8\?B\?[56]

More specifically, sometimes the utf-8 string is used in the From header, sometimes it is in the subject. Most of my samples would have been caught by the simpler regex =\?utf-8\?B\?5, and I mention that in case you want to be more specific, but there was one recent one that had a “6” instead of a “5.” In hindsight that leading “5” or “6” makes sense: the CJK ideographs (U+4E00 through U+9FFF) encode in UTF-8 with a lead byte of 0xE4 through 0xE9, and base64 maps those lead bytes to a first character of “5” (0xE4–0xE7) or “6” (0xE8–0xE9).

For the record here’s that mimedecode “program”

#!/usr/bin/perl
# base64 MIME decoding
# example:
# mimedecode Nz84QGxhdGU=
# => 7?8@late
use MIME::Base64;
 
foreach (@ARGV) {
#      $encoded = encode_base64($_);
      $decoded = decode_base64($_);
#print "enc,dec: $encoded, $decoded\n";
        print $decoded;
}

And its sister program, which I call mimeencode:

#!/usr/bin/perl
# base64 MIME encoding
# DrJ, 6/2004
# example:
# mimeencode 7?8@late
# => Nz84QGxhdGU=
use MIME::Base64;
 
foreach (@ARGV) {
      $encoded = encode_base64($_);
#      $decoded = decode_base64($_);
#print "enc,dec: $encoded, $decoded\n";
        print $encoded;
}

There’s probably a built-in Linux utility that does the same thing; I just don’t know what it is.

2022(!) update

Well, I finally ran across it. The built-in program to do mimeencode/mimedecode is base64. Oh well, better late than never…
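So today the decoding exercise above reduces to a one-liner (-d means decode; echo -n suppresses a trailing newline; the second, encoding, example assumes a UTF-8 terminal):

echo -n 6K+35p+l5pS277yB | base64 -d
echo -n '请查收!' | base64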

Conclusion
Your users needn’t suffer from Chinese spam. The vast majority are characterized by, um, Chinese characters, of course, whose presence is almost always indicated by the string gb2312 in the message headers. You can take advantage of that fact and build an appropriate rule for Postini or your mailer. But beware of throwing out the baby with the bathwater! In other words, don’t subject your users to this rule unless you either have a good quarantine, or they are sure they should never receive this type of email.

There are some spam types which evade the gb2312 rule mentioned above, however. And this part is not as well tested, frankly. The exceptions, which are still a minority of my Chinese spam, are characterized by a subject line or sender that contains =?utf-8?B?5… or =?utf-8?B?6… (see summary below). My honest expectation is that a rule this broad and coarse will also catch a few other languages (Portuguese?, Urdu?, etc.) so be careful! If you are expecting to get non-English email, more testing is in order before implementing the utf-8 filter. But it will certainly help to eliminate even more Chinese spam.

4/2013 update
Summary, including 6/2014 update
My filter has worked very well for me and has withstood the test of time. I catch at least a dozen Chinese spams each day. One got through in 6/2014 however, with character set gb18030. I realize reading the above write-up is confusing because I’ve mixed my love of telling a good IT mystery with my desire to convey useful information. So, to summarize, the new combined rule is:

Match Any:

Header matches RegEx:
(charset="gb(k|2312|18030)"|=\?GB(K|2312|18030)\?)

Header matches RegEx:
=\?utf-8\?B\?[56]

References
A spate of spam from enom-registered domains is described here.
A disappointing case where Google is not operating their Gmail service as a white-glove service is described here.

Categories
DNS IT Operational Excellence Network Technologies Uncategorized

Google’s DNS Servers Rock!

Intro
DNS is the Domain Name Service, the Internet service that converts mnemonic names like www.mysite.com into IP addresses, e.g., 200.54.129.57.

I tried to run a cache-only DNS server for use by a proxy server. What I found is that certain sites were not accessible on a frequent basis. I think uol.com.br is one of the problem sites (need to check this). It may not mean much to a US audience, but it’s really popular in Brazil!

At some point I happened to learn that Google has a public DNS service. This is worth pondering. No one of any repute had offered a public DNS service up to that point. There are a host of concerns about security, especially DNS cache poisoning. They blazed a trail, and did it in a way only Google and very few other major infrastructure players could. Not only did they offer a DNS service, they put their DNS servers all over the Internet and created convenient anycast addresses for their servers.

I am no expert on anycast addresses. You can look it up on Wikipedia, however. The essence for my purposes is that with a single IP address you’re going to hit the closest server, network-wise. So no matter where you are some Google DNS server is not far away. Try it. The anycast addresses are 8.8.8.8 and 8.8.4.4. They don’t mind, really! You can ping them. Traceroute to them, whatever. From the Amazon cloud Northeast 8.8.8.8 responds to PINGs in 3.4 ms. That’s really low. Not so low as to make me think they are in the same data center (it is different companies after all), but not far away.
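From any Linux (or similar) host you can kick the tires yourself:

ping -c 3 8.8.8.8
traceroute 8.8.8.8
dig www.mysite.com @8.8.8.8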

The gold standard for running a DNS service is BIND. I have been running it for many years now and I want to give the Internet Systems Consortium their due for providing this wonderful application. Once I got wind of my DNS difficulties as mentioned above, I had to wonder: why wasn’t everyone else complaining? They had to be using something else.

I ran a flat-out performance test: 5000 queries from an actual proxy log, fed straight to my BIND DNS server, and then to Google’s DNS server 8.8.8.8. I have to dig up the numbers, but Google’s server won by quite a bit! This result was actually surprising because you’re always going off-site to the Google DNS server, whereas my server can build up its cache and is right on my network. From where I tested, the Google server was about 11 ms away, so 5000 x 11 ms = 55 s. There is a 55 s handicap from network considerations alone! Yet it is faster. On the quickest of queries the local server is indeed faster, but over the course of real-life queries you always get a few problematic ones which either time out or just seem to take a long time to get back a response. That’s what kills the traditional DNS server and where Google has (obviously) made some optimizations.

And, that’s not all! Google also deals in a more forgiving fashion with broken domain names. I used to get on my high horse and proclaim to others about how broken their DNS servers are – it’s no wonder I can’t resolve their names, which means, by the way, I also cannot get to their web site nor send them email!

It’s effectively like taking yourself off the Internet, or so I thought. Turns out in some cases that’s only true if you’ve constrained yourself to resolving names with BIND. You see, BIND enforces the rules. And I’m a believer in rules. The Internet has about 5,000 technical rules called RFCs. DNS is a topic of many of these rules. The Internet could only have expanded to the size it currently has because all the major players agreed to abide by those rules. What Google has done with their server, in effect, is to say, “Well, if you don’t follow the rules, we’re going to try to work with you anyways.”

Here’s a concrete example: appliedcoatings.ca. I guess at some point they’ll actually fix their severely broken DNS, but at the time I write this, August 21, 2011, these comments are valid and their domain is severely broken. In fact, I was amazed that people weren’t jumping up and down screaming at them. I couldn’t even send an email to them. That’s akin to knocking yourself off the Internet, right? Ah, but it all depends on whose DNS servers you are using!

There used to be lots of good free DNS analyzers, like dnsreport.com. You can still find a few around. www.zonecheck.fr, for instance. It shows FAILURE. If it were better written it would show the real problem, which is a lame delegation. But we’re experts, and we don’t need such tools! We will do the queries ourselves and show the lame delegation. We start by learning who are the authoritative nameservers for .ca, the top-level domain used in Canada:

 dig ns ca

; <<>> DiG 9.7.1-P2 <<>> ns ca
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 52928
;; flags: qr rd ra; QUERY: 1, ANSWER: 10, AUTHORITY: 0, ADDITIONAL: 1

;; QUESTION SECTION:
;ca.                            IN      NS

;; ANSWER SECTION:
ca.                     83585   IN      NS      a.ca-servers.ca.
ca.                     83585   IN      NS      c.ca-servers.ca.
ca.                     83585   IN      NS      e.ca-servers.ca.
ca.                     83585   IN      NS      f.ca-servers.ca.
ca.                     83585   IN      NS      j.ca-servers.ca.
ca.                     83585   IN      NS      k.ca-servers.ca.
ca.                     83585   IN      NS      l.ca-servers.ca.
ca.                     83585   IN      NS      m.ca-servers.ca.
ca.                     83585   IN      NS      z.ca-servers.ca.
ca.                     83585   IN      NS      sns-pb.isc.org.

;; ADDITIONAL SECTION:
a.ca-servers.ca.        83594   IN      A       192.228.27.11

Now we ask one of them about the nameservers for appliedcoatings.ca:

 dig ns appliedcoatings.ca @a.ca-servers.ca.

; <<>> DiG 9.7.1-P2 <<>> ns appliedcoatings.ca @a.ca-servers.ca.
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 288
;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 2, ADDITIONAL: 0
;; WARNING: recursion requested but not available

;; QUESTION SECTION:
;appliedcoatings.ca.            IN      NS

;; AUTHORITY SECTION:
appliedcoatings.ca.     86400   IN      NS      sp2.domainpeople.com.
appliedcoatings.ca.     86400   IN      NS      sp1.domainpeople.com.

So far everything's cool. Now, since the authoritative flag (AA) was not present in that response we re-ask that query, but now to one of the nameservers that's supposed to be authoritative for that domain:

dig ns appliedcoatings.ca @sp2.domainpeople.com.

; <<>> DiG 9.7.1-P2 <<>> ns appliedcoatings.ca @sp2.domainpeople.com.
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 24373
;; flags: qr aa rd; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 0
;; WARNING: recursion requested but not available

;; QUESTION SECTION:
;appliedcoatings.ca.            IN      NS

;; ANSWER SECTION:
appliedcoatings.ca.     86400   IN      NS      ns1.domainpeople.com.
appliedcoatings.ca.     86400   IN      NS      ns2.domainpeople.com.

Oh, oh. That's not supposed to happen. We're getting back an entirely different set of nameservers. That's a lame delegation. The domain should be considered completely broken. I think even BIND might be forgiving up to this point. A BIND resolver does these types of queries to get at the answer. At this point it says, "OK, this is strange, but not necessarily fatal. I will ask my subsequent queries to ns1.domainpeople.com and ns2.domainpeople.com since they are listed as being the nameservers of record."
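As an aside, dig can automate the delegation walk we just did by hand, following referrals from the root on down, which makes a lame delegation like this easy to spot (output omitted here):

dig +trace ns appliedcoatings.ca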

So now let's get to something useful: looking up the mail exchanger record so we see how to deliver mail to this domain. BIND, which has been fastidiously following the rules, does it as follows:

dig mx appliedcoatings.ca @ns1.domainpeople.com.

; <<>> DiG 9.7.1-P2 <<>> mx appliedcoatings.ca @ns1.domainpeople.com.
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 49996
;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
;; WARNING: recursion requested but not available

;; QUESTION SECTION:
;appliedcoatings.ca.            IN      MX

;; Query time: 79 msec
;; SERVER: 204.174.223.72#53(204.174.223.72)
;; WHEN: Sun Aug 21 19:05:43 2011
;; MSG SIZE  rcvd: 36

That's not good. Status is REFUSED. But BIND can even forgive this slight. There is one more nameserver to try after all, right? Last chance query:

dig mx appliedcoatings.ca @ns2.domainpeople.com.

; <<>> DiG 9.7.1-P2 <<>> mx appliedcoatings.ca @ns2.domainpeople.com.
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: REFUSED, id: 44404
;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
;; WARNING: recursion requested but not available

;; QUESTION SECTION:
;appliedcoatings.ca.            IN      MX

;; Query time: 72 msec
;; SERVER: 64.40.96.140#53(64.40.96.140)
;; WHEN: Sun Aug 21 19:07:34 2011
;; MSG SIZE  rcvd: 36

Status also REFUSED. Now we are really and truly dead. If you are using a BIND nameserver you have no way to send email to [email protected]. But not so with Google!

Of course I don't know how Google wrote their DNS server, but I do think that some of their infrastructure experts wrote it themselves rather than using open source programs. So with a Google nameserver you will get a response:

dig mx appliedcoatings.ca @8.8.8.8

; <<>> DiG 9.7.1-P2 <<>> mx appliedcoatings.ca @8.8.8.8
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 6901
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;appliedcoatings.ca.            IN      MX

;; ANSWER SECTION:
appliedcoatings.ca.     82805   IN      MX      10 mail.appliedcoatings.ca.

;; Query time: 4 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
;; WHEN: Sun Aug 21 19:11:14 2011
;; MSG SIZE  rcvd: 57

and just to close the loop and make sure this is a valid host you would do this:

dig mail.appliedcoatings.ca @8.8.8.8

; <<>> DiG 9.7.1-P2 <<>> mail.appliedcoatings.ca @8.8.8.8
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 35190
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;mail.appliedcoatings.ca.       IN      A

;; ANSWER SECTION:
mail.appliedcoatings.ca. 86400  IN      A       66.183.21.181

And we can go the next step and begin an SMTP conversation with that server to make sure it is really operating. After all, if they messed up DNS there's no telling what else they might have gotten wrong.

 telnet  66.183.21.181 25
Trying 66.183.21.181...
Connected to 66.183.21.181.
Escape character is '^]'.
220 mail.appliedcoatings.ca Microsoft ESMTP MAIL Service, Version: 6.0.3790.4675 ready at  Sun, 21 Aug 2011 16:22:04 -0700
HELO localhost
250 mail.appliedcoatings.ca Hello [50.17.188.196]
quit
221 2.0.0 mail.appliedcoatings.ca Service closing transmission channel
Connection closed by foreign host.

Yup. They've got an operating mail server at that IP.

So we can reverse engineer a bit what Google's DNS server must have done behind the scenes to arrive at a valid answer where BIND could not. I'm 100% sure that Google would have also done the query

dig mx appliedcoatings.ca @ns1.domainpeople.com

since that is the right thing to do. But not getting a satisfactory answer (status: REFUSED), what it must do additionally after getting refused a second time by ns2.domainpeople, is to go back to the originally named nameservers sp1 and sp2. Watch what happens in that case:

 dig mx appliedcoatings.ca @sp1.domainpeople.com.

; <<>> DiG 9.7.1-P2 <<>> mx appliedcoatings.ca @sp1.domainpeople.com.
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 10226
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; QUESTION SECTION:
;appliedcoatings.ca.            IN      MX

;; ANSWER SECTION:
appliedcoatings.ca.     86400   IN      MX      10 mail.appliedcoatings.ca.

;; AUTHORITY SECTION:
appliedcoatings.ca.     86400   IN      NS      ns1.domainpeople.com.
appliedcoatings.ca.     86400   IN      NS      ns2.domainpeople.com.

;; ADDITIONAL SECTION:
mail.appliedcoatings.ca. 86400  IN      A       66.183.21.181

The AA (authoritative) flag is set in the response. So it's a good response, but sent to the "wrong" nameserver. Nevertheless, it is a response and it gets anyone using that nameserver more functionality than someone using BIND.

Conclusion
So far we've got three advantages speaking favorably for Google's DNS server: it's faster, its answers are more complete, and it's universally available. Wait, there's more! Another nice thing is what it does not do. Some ISPs have a "feature" I call DNS clobbering. In fact it's so annoying I will devote a whole blog post to describing it in more detail. Essentially they take license with DNS and make up answers to some queries! It's true and it's truly annoying. Not all ISPs do this, but mine certainly does. So the other nice thing about Google DNS is that it does not do DNS clobbering, and it's available for you to use at home to avoid this annoying feature. You just set your DNS servers rather than have them assigned automatically via DHCP.
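On a Linux box that's as simple as pointing /etc/resolv.conf at the anycast addresses (on Windows the equivalent setting lives in the network adapter's TCP/IP properties):

nameserver 8.8.8.8
nameserver 8.8.4.4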

Other Resources
I should mention that while researching public DNS servers I was also led to commercial versions of the same thing. I went so far as to test the timings on one of those services and found that it is more distant, round-trip-wise, than Google's anycast server. Stands to reason. Google's got the best Internet access of anyone. They're on all the major highways. The commercial offerings have some additional cool features, however. They can serve as URL filter. So if someone puts in a URL which leads to a malicious site, for example, they can respond with an answer that spares you from going to that infected site. This is a little more crude than URL filtering at the proxy level, since a DNS server has no knowledge of the URI whereas a proxy URL filter does, but it could be quite serviceable. I'm not sure it allows you to pick and choose URL categories to block as with a URL filter (gambling, porn, hacking sites, etc.).

A lot more information on using Google DNS is at http://code.google.com/speed/public-dns/docs/using.html.

September 1 Update - a Crack in the Infrastructure
I now have my first case of a domain name which Google DNS did not resolve correctly, and for no apparent reason. The domain name is forums.tweaktown.com. Here's proof of Google's failure, followed immediately by Amazon's DNS servers' success:

dig forums.tweaktown.com @8.8.8.8

; <<>> DiG 9.7.1-P2 <<>> forums.tweaktown.com @8.8.8.8
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 15826
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0

;; QUESTION SECTION:
;forums.tweaktown.com.          IN      A

;; AUTHORITY SECTION:
tweaktown.com.          116     IN      SOA     ns21.domaincontrol.com. dns.jomax.net. 2011060602 28800 7200 604800 86400

;; Query time: 4 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
;; WHEN: Thu Sep  1 14:40:50 2011
;; MSG SIZE  rcvd: 106


 dig forums.tweaktown.com

; <<>> DiG 9.7.1-P2 <<>> forums.tweaktown.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 52290
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 1

;; QUESTION SECTION:
;forums.tweaktown.com.          IN      A

;; ANSWER SECTION:
forums.tweaktown.com.   1885    IN      A       38.101.21.25

;; AUTHORITY SECTION:
tweaktown.com.          1943    IN      NS      ns22.domaincontrol.com.
tweaktown.com.          1943    IN      NS      ns21.domaincontrol.com.

;; ADDITIONAL SECTION:
ns21.domaincontrol.com. 753     IN      A       216.69.185.11

;; Query time: 0 msec
;; SERVER: 172.16.0.23#53(172.16.0.23)
;; WHEN: Thu Sep  1 14:40:55 2011
;; MSG SIZE  rcvd: 122

All BIND servers I tried during this time returned the correct answer.

Is this an isolated incident or the tip of an iceberg of problems? I hope it is a one-off. I'll post updates as I find out more. I am slightly concerned now.

References and related
I finally wrote my own web interface to DNS and published the code I did it with. Check it out here.

A web interface to Google's public DNS service, which will give you more debug information, is https://dns.google.com/

Categories
Admin Internet Mail IT Operational Excellence

The IT Detective Agency: The Case of Slow Sendmail Performance Finally Cracked

I’ve been running sendmail for years and years. It’s a very solid MTA, though perhaps not fashionable these days. At one point I even made the leap from running on Sun/solaris to SLES. I’ve always had a particular problem on a couple of these servers: they do not react gracefully to mail storms. An application running on another server sends out a daily mail blast to 2000 users, all at once. Hey I’m not running Gmail here, but normal volume is several messages per second nonetheless, and that is handled fairly well.

But this mail blast actually knocks the system offline for a few minutes. The load average rockets up to 160. It’s essentially a self-inflicted denial-of-service attack. In my gut I always felt the situation could be improved, but was too busy to look into it.

When it was time to buy a replacement server, I had to consider and justify what to get. A “screaming server” is a little hard for a hardware vendor to turn into an order! So where are the bottlenecks? I decided to capture the output of uptime, which provides load averages, and iostat, an optional package which analyzes I/O usage, every few seconds throughout the day. Here’s the iostat job:

nohup iostat -t -c  -m -x 3 > /tmp/iostat &
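# (flag cheat sheet: -t print timestamps, -c report CPU utilization, -m report throughput
#  in MB, -x extended device statistics; the trailing 3 repeats the report every 3 seconds)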

and the uptime was a tiny script I called cpu-loop.sh:

#!/bin/sh
while /bin/true; do
sleep 5
date
uptime
done

called from the command line as:

nohup ~/cpu-loop.sh > /tmp/cpu &

Strange thing is that though the load average shoots through the roof, CPU usage isn’t all that high.

If I have this right, the load average shows the average number of processes that are either runnable or stuck waiting on I/O. Sendmail forks a process for each incoming email, so the number of sendmail processes climbs dramatically during a mail storm.

The fundamental question is: are we thirsting for more CPU or more I/O? Then there are the peripheral concerns like the speed of the PCI bus, the size of the level-two cache and the number of CPUs. The standard profiling tools don’t quite give you enough information.

Here’s actual output of three consecutive iostat executions:

Time: 05:11:56 AM
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           5.92    0.00    5.36   21.74    0.00   66.99

Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00    10.00    0.00    3.00     0.00     0.05    37.33     0.03    8.53   5.33   1.60
sdb               0.00   788.40    0.00  181.40     0.00     3.91    44.12     4.62   25.35   5.46  98.96
dm-0              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
dm-1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
dm-2              0.00     0.00    0.00    2.40     0.00     0.01     8.00     0.02    8.00   1.33   0.32
dm-3              0.00     0.00    0.00    2.40     0.00     0.01     8.00     0.01    5.67   2.33   0.56
dm-4              0.00     0.00    0.00    0.80     0.00     0.00     8.00     0.01   12.00   6.00   0.48
dm-5              0.00     0.00    0.00    7.60     0.00     0.03     8.00     0.08   10.32   1.05   0.80
hda               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
dm-6              0.00     0.00    0.00  975.00     0.00     3.81     8.00    20.93   21.39   1.01  98.96
dm-7              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00

Time: 05:12:01 AM
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           5.05    0.00    4.34   19.98    0.00   70.64

Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00    10.80    0.00    2.80     0.00     0.05    40.00     0.03   10.57   6.86   1.92
sdb               0.00   730.60    0.00  164.80     0.00     3.64    45.20     3.37   20.56   5.47  90.16
dm-0              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
dm-1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
dm-2              0.00     0.00    0.00    2.60     0.00     0.01     8.00     0.03   12.31   2.15   0.56
dm-3              0.00     0.00    0.00    2.40     0.00     0.01     8.00     0.02    6.33   3.33   0.80
dm-4              0.00     0.00    0.00    0.80     0.00     0.00     8.00     0.01    9.00   5.00   0.40
dm-5              0.00     0.00    0.00    7.60     0.00     0.03     8.00     0.10   13.37   1.16   0.88
hda               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
dm-6              0.00     0.00    0.00  899.60     0.00     3.51     8.00    16.18   18.03   1.00  90.24
dm-7              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00

Time: 05:12:06 AM
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           1.91    0.00    1.36   10.83    0.00   85.89

Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00     6.40    0.00    3.40     0.00     0.04    25.88     0.04   12.94   5.18   1.76
sdb               0.00   303.40    0.00   88.20     0.00     1.59    36.95     1.83   20.30   5.48  48.32
dm-0              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
dm-1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
dm-2              0.00     0.00    0.00    2.60     0.00     0.01     8.00     0.04   14.77   2.46   0.64
dm-3              0.00     0.00    0.00    0.60     0.00     0.00     8.00     0.00   12.00   5.33   0.32
dm-4              0.00     0.00    0.00    0.80     0.00     0.00     8.00     0.01   11.00   5.00   0.40
dm-5              0.00     0.00    0.00    5.80     0.00     0.02     8.00     0.08   12.97   1.66   0.96
hda               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
dm-6              0.00     0.00    0.00  393.00     0.00     1.54     8.00     6.46   16.03   1.23  48.32
dm-7              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00

Device sdb has reached crazy high utilization levels – 98% before dropping back down to 48%. An average queue size (avgqu-sz) of 4.62 in the first run means a lot of queued-up requests awaiting I/O. 788 merged write requests per second seems respectable. All this while the CPU is 67% idle!

The conclusion: a solid state drive is in order. We are thirsting for I/O more than for CPU. But solid state drives cost money and have to be justified, which takes time. Can we do something now that will bear out our hypothesis and really alleviate the problem? Yes! SSD access speed approaches that of memory. So let’s build a virtual partition out of our memory. tmpfs has made this sinfully easy:

mount -t tmpfs none /mqueue -o size=8192m
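If you wanted that mount to come back automatically after a reboot, an /etc/fstab line like this should do it (keeping in mind the contents themselves never survive a reboot):

none   /mqueue   tmpfs   size=8192m   0   0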

We set this to be sendmail’s queue directory. The sendmail mc command looks like this:

define(`QUEUE_DIR',`/mqueue/q*')dnl

which I need to further explain at some point. Briefly, and this is my understanding rather than a thorough explanation: the q* glob tells sendmail to treat every directory under /mqueue matching q* as a separate queue, which it can work on in parallel.
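The directories have to exist first, so after mounting the tmpfs you would create them, something like:

mkdir /mqueue/q1 /mqueue/q2 /mqueue/q3 /mqueue/q4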

Now it’s interesting that this tmpfs filesystem doesn’t even show up in iostat! I guess its usage all counts as CPU usage.

I now have to send my mail blast to the system with this tmpfs setup. I’m expecting to have essentially converted my I/O bottleneck into usage of spare CPU, resulting in a higher-performance system.

The Results
The results are in and they are dramatic. Previous results using traditional 15K rotating drive:

- disk device became 98% busy
- cpu idle time only dropped as low as 69%
- load average peaked at 37
- SMTP port shut down for some minutes
- 2030 messages accepted in 187 seconds
- 11 messages/second

and now using tmpfs virtual filesystem:

- the load average rose to 3.1 - a much more tolerable result
- the cpu idle time dropped to 32% during the busiest time
- most importantly, the server stayed open for business - the SMTP port did not shut down for the first time!!
- the 2000 messages were accepted in 34 seconds.  
- that's a record 59 messages/second!

Conclusion
Disk I/O was definitely the bottleneck for sendmail. tmpfs rocks! sendmail becomes five times faster using it, and is better behaved. The drawback of this filesystem type is that it is completely volatile and I stand to lose messages if the power ever goes out!

Case Closed!

Categories
IT Operational Excellence Uncategorized

The IT Detective Agency: Debugging a Thorny Citrix Connection Issue

This case begins with an observation from the application owner for Citrix XenApp: external users were being knocked out of their sessions frequently – several times a day. And it happened en masse. Before this problem users were typically logged in all day. Many must have been bumped around 12:30 PM and then again around 2 PM. The problems began July 5th.

The users suffering the disconnects were all external users who access the applications via a Citrix Secure Gateway. The XenApp servers being accessed are also used on the Intranet and those users were not seeing any drops.

The AO asked if I had changed anything in the network. Nope. Had he changed anything? Nope.

So now we have the classic stand-off, right? AO vs network. There’s a root cause and it’s either the AO or the network guy who’s ultimately at fault.

My attitude in these cases is the following: the network person should prove it’s an application problem and the application owner should prove it’s a network problem! It sounds cynical, but this approach aligns with the best interests of each party. Both are really working towards the same goal, but preserving their own interests. E.g., the networking person thinks that if I can prove it’s an application problem then the AO will quit bothering me and I can get back to my real job. After all, I am not knowledgeable about the application. Even if it is a networking issue, I do not know where the issue is, so I need the AO to point out the problem at a detailed level, e.g., the dropped packet or whatever, so I can focus my energies. The reality in my experience is quite different, however: the AO typically does not know enough about networking to make this proof.

Nonetheless I proceeded this way, hoping to prove some kind of application problem so I could get back to my normal activities.

We enabled my own PC to use the application. This is always much easier than bothering other people. I can take traces to my heart’s content! So Monday I was connected to XenApp via the CSG. I was going along fine until 11:35 when I got the disconnected message! I later learned that the bulk of users, who are using a different app, were not disconnected then, but were at about an hour later.

Now there are lots of pieces to look at, any one of which could be at fault. Working from a PC on the Internet to the XenApp server we have: the Internet, my Internet router, firewall, load balancer, CSG server, firewall, XenApp server. That’s a lot to look after, but you have to start somewhere. I chose the load balancer. It was rather confusing, even to establish a baseline of “normal” activity. I quickly observed that every 30 seconds packets were being transmitted to the PC even when nothing was going on. Of course the communication was all encrypted so I did not even attempt to look into the packets. But sometimes I saw seven packets, sometimes six, and more rarely different numbers. The packet order didn’t even make sense. Sometimes the load balancer responded to the XenApp before the PC did! The trace of this behaviour until I was disconnected will be shown here when I get the time to include it:

The end of the trace shows a bunch of FIN packets. FIN is used to terminate a TCP connection. Now we’re getting somewhere. It looks like, from a TCP perspective, a more-or-less orderly shutdown of the connection was occurring. If confirmed, that would point to an application problem and life would be good!

The next day I logged into CSG and used a XenApp app again. This time I did an additional trace and included the CSG server itself. Again I was disconnected after a few hours. In this trace the CSG server is called webservera, the XenApp server is xenapp15. This is not a byte-level trace but rather the output of running snoop on Solaris and looking at the meta-data:

________________________________
11:29:23.81577 xenapp15 -> webservera    ETHER Type=0800 (IP), size = 60 bytes
11:29:23.81577 xenapp15 -> webservera    IP  D=10.201.142.41 S=10.201.88.34 LEN=46, ID=4842, TOS=0x0, TTL=126
11:29:23.81577 xenapp15 -> webservera    TCP D=56011 S=1494 Push Ack=1900995851 Seq=4198089996 Len=6 Win=64134
________________________________
11:29:25.01881 xenapp15 -> webservera    ETHER Type=0800 (IP), size = 60 bytes
11:29:25.01881 xenapp15 -> webservera    IP  D=10.201.142.41 S=10.201.88.34 LEN=46, ID=4844, TOS=0x0, TTL=126
11:29:25.01881 xenapp15 -> webservera    TCP D=56011 S=1494 Push Ack=1900995851 Seq=4198089996 Len=6 Win=64134
________________________________
11:29:27.42530 xenapp15 -> webservera    ETHER Type=0800 (IP), size = 60 bytes
11:29:27.42530 xenapp15 -> webservera    IP  D=10.201.142.41 S=10.201.88.34 LEN=46, ID=4861, TOS=0x0, TTL=126
11:29:27.42530 xenapp15 -> webservera    TCP D=56011 S=1494 Push Ack=1900995851 Seq=4198089996 Len=6 Win=64134
________________________________
11:29:30.87645    webservera -> xenapp15 ETHER Type=0800 (IP), size = 54 bytes
11:29:30.87645    webservera -> xenapp15 IP  D=10.201.88.34 S=10.201.142.41 LEN=40, ID=847, TOS=0x0, TTL=64
11:29:30.87645    webservera -> xenapp15 TCP D=1494 S=56011 Ack=4198090002 Seq=1900995851 Len=0 Win=48871
________________________________
11:29:30.87657    webservera -> xenapp15 ETHER Type=0800 (IP), size = 54 bytes
11:29:30.87657    webservera -> xenapp15 IP  D=10.201.88.34 S=10.201.142.41 LEN=40, ID=848, TOS=0x0, TTL=64
11:29:30.87657    webservera -> xenapp15 TCP D=1494 S=56011 Ack=4198090002 Seq=1900995851 Len=0 Win=48871
________________________________
11:29:33.02325    webservera -> xenapp15 ETHER Type=0800 (IP), size = 54 bytes
11:29:33.02325    webservera -> xenapp15 IP  D=10.201.88.34 S=10.201.142.41 LEN=40, ID=849, TOS=0x0, TTL=64
11:29:33.02325    webservera -> xenapp15 TCP D=1494 S=56011 Ack=4198090002 Seq=1900995851 Len=0 Win=48871
________________________________
11:29:53.34945 xenapp15 -> webservera    ETHER Type=0800 (IP), size = 60 bytes
11:29:53.34945 xenapp15 -> webservera    IP  D=10.201.142.41 S=10.201.88.34 LEN=40, ID=4923, TOS=0x0, TTL=126
11:29:53.34945 xenapp15 -> webservera    TCP D=56011 S=1494 Rst Ack=1900995851 Seq=4198090002 Len=0 Win=0

What I saw this time is that RST packet was being sent from the XenApp server! That’s the very last line, which I will repeat here for emphasis since it is so important to the case:

11:29:53.34945 xenapp15 -> webservera    TCP D=56011 S=1494 Rst Ack=1900995851 Seq=4198090002 Len=0 Win=0

TCP RST is a way to immediately disconnect a connection! It seemed as though this was being converted to a FIN by the CSG. Now it’s looking very much like the application decided, for whatever reason, to terminate the connection. It almost has to be an application problem, right?

Wrong! We have to keep an open mind.

This trace, while dense, hints at where the problem may lie. It is taken on the load balancer with tcpdump -i 0.0. The load balancer has two interfaces, one towards the Internet, the other towards webserverw. The hostname of the load balancer’s Internet interface is called CSG, the hostname of the Citrix client on the Internet is drjohnspc.

11:03:59.392810 802.1Q vlan#4093 P0 drjohnspc.20723 > webserverw.https: . ack 35 win 32768 (DF)
11:03:59.455730 802.1Q vlan#4093 P0 webserverw.https > drjohnspc.20723: P 35:68(33) ack 1 win 48677 (DF)
11:03:59.554810 802.1Q vlan#4093 P0 drjohnspc.20723 > webserverw.https: . ack 68 win 32768 (DF)
11:03:59.585845 802.1Q vlan#4094 P0 drjohnspc.20723 > CSG.https: . ack 35 win 64426 (DF)
11:03:59.585855 802.1Q vlan#4094 P0 CSG.https > drjohnspc.20723: P 35:68(33) ack 1 win 32768 (DF)
11:03:59.885805 802.1Q vlan#4094 P0 drjohnspc.20723 > CSG.https: . ack 68 win 64393 (DF)
11:04:59.465070 802.1Q vlan#4093 P0 webserverw.https > drjohnspc.20723: P 68:103(35) ack 1 win 48677 (DF)
11:04:59.465080 802.1Q vlan#4094 P0 CSG.https > drjohnspc.20723: P 68:103(35) ack 1 win 32768 (DF)
11:04:59.564818 802.1Q vlan#4093 P0 drjohnspc.20723 > webserverw.https: . ack 103 win 32768 (DF)
11:05:00.664812 802.1Q vlan#4094 P0 CSG.https > drjohnspc.20723: P 68:103(35) ack 1 win 32768 (DF)
11:05:02.864812 802.1Q vlan#4094 P0 CSG.https > drjohnspc.20723: P 68:103(35) ack 1 win 32768 (DF)
11:05:07.064810 802.1Q vlan#4094 P0 CSG.https > drjohnspc.20723: P 68:103(35) ack 1 win 32768 (DF)
11:05:15.264811 802.1Q vlan#4094 P0 CSG.https > drjohnspc.20723: P 68:103(35) ack 1 win 32768 (DF)
11:05:31.464812 802.1Q vlan#4094 P0 CSG.https > drjohnspc.20723: P 68:103(35) ack 1 win 32768 (DF)
11:05:59.807514 802.1Q vlan#4093 P0 webserverw.https > drjohnspc.20723: P 103:130(27) ack 1 win 48677 (DF)
11:05:59.807741 802.1Q vlan#4093 P0 webserverw.https > drjohnspc.20723: F 130:130(0) ack 1 win 48677 (DF)
11:05:59.807754 802.1Q vlan#4093 P0 drjohnspc.20723 > webserverw.https: . ack 131 win 32768 (DF)
11:05:59.807759 802.1Q vlan#4094 P0 CSG.https > drjohnspc.20723: FP 103:130(27) ack 1 win 32768 (DF)
11:06:03.664813 802.1Q vlan#4094 P0 CSG.https > drjohnspc.20723: . 68:103(35) ack 1 win 32768 (DF)
11:06:12.642847 802.1Q vlan#4093 P0 drjohnspc.20723 > webserverw.https: R 1:1(0) ack 131 win 32768 (DF)
11:06:12.642862 802.1Q vlan#4094 P0 CSG.https > drjohnspc.20723: RP 103:130(27) ack 1 win 32768 (DF)

Notice the timestamps increasing by larger and larger leaps beginning with 11:05:00.664812. 11:05:02, 11:05:07, 11:05:15, 11:05:31 – the interval keeps doubling! This is characteristic of a TCP retransmit. Note that all the other information in those lines is the same: it must be retransmitting the same packet. Why? Because it never got there! That seems to be the most likely reason. Now my conviction and hope that an application problem lay at the heart of the issue was starting to crumble. See why you need to keep an open mind? Your opinion can swing to the polar opposite conclusion with the input of some additional data like that.
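
If you'd rather compute the gaps than eyeball them, a quick pipeline over the saved trace does it. A rough sketch, assuming the trace text was saved to lb-trace.txt (a made-up filename):

# print the gap in seconds between successive sends of the same segment
grep 'CSG.https > drjohnspc.20723: P 68:103(35)' lb-trace.txt |
awk '{ split($1, t, "[:.]"); s = t[1]*3600 + t[2]*60 + t[3] + ("0." t[4]); if (p) printf "+%.1fs\n", s - p; p = s }'

The gaps come out +1.2s, +2.2s, +4.2s, +8.2s, +16.2s – classic exponential backoff. Where to turn next?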

There is a firewall in between the load balancer and the Internet, so now we turn our attention to it. It could be that it dropped that packet and all the retransmits.

Here’s the trace of that same conversation on the firewall’s internal interface, which faces the CSG (I and O mean inbound and outbound with respect to that interface):

11:04:59.441022  I IP CSG.https > drjohnspc.20723: P 2210302227:2210302262(35) ack 1714160833 win 32768
11:05:00.640781  I IP CSG.https > drjohnspc.20723: P 0:35(35) ack 1 win 32768
11:05:02.840742  I IP CSG.https > drjohnspc.20723: P 0:35(35) ack 1 win 32768
11:05:07.040729  I IP CSG.https > drjohnspc.20723: P 0:35(35) ack 1 win 32768
11:05:15.240780  I IP CSG.https > drjohnspc.20723: P 0:35(35) ack 1 win 32768
11:05:31.440571  I IP CSG.https > drjohnspc.20723: P 0:35(35) ack 1 win 32768
11:05:59.783366  I IP CSG.https > drjohnspc.20723: FP 35:62(27) ack 1 win 32768
11:06:03.640595  I IP CSG.https > drjohnspc.20723: . 0:35(35) ack 1 win 32768
^C

and the trace of the same thing on the firewall’s external interface, i.e., facing the Internet and drjohnspc:

11:03:59.269334  O IP CSG.https > drjohnspc.20723: P 2210302159:2210302194(35) ack 1714160833 win 32768
11:03:59.562011  I IP drjohnspc.20723 > CSG.https: . ack 35 win 64426
11:03:59.562139  O IP CSG.https > drjohnspc.20723: P 35:68(33) ack 1 win 32768
11:03:59.861970  I IP drjohnspc.20723 > CSG.https: . ack 68 win 64393

Notice what’s not present in the external interface trace – all those retransmits, or even the original packet.

Let’s summarize so far. One of those keep-alive packets from the XenApp server reached the firewall, but it never exited the firewall. The only possibility is that it got dropped by the firewall!

Now that was a lot of work, but who’s going to do it if not a patient and methodical IT person?

Results
We had a networking problem on our hands after all. Good thing we persisted in this investigation even when it looked like we were off the hook! It was later confirmed that the firewall was “aggressively aging” its connections because it had either reached, or was very close to, its connection limit. The firewall’s connection limit was raised and the Citrix connection issues went away.

Let’s go back to that simplistic question that non-experts like to ask: what had changed to cause this problem? The change was external: increased usage of that firewall. Network bandwidth and Internet usage tend to increase over time. Neither the networking group nor the application group had made any change that caused this issue. A seasoned IT detective uses all available clues and arrives at the right conclusion. The “what has changed” question is normally very relevant, but it can’t be the only tool in your toolbox!

Case closed!

Categories
IT Operational Excellence

Dr John’s Laws of IT

Laws

Here are some laws of IT based on years of observation.  Some of these have very real and practical consequences.

1. IT infrastructure decays over time if left to itself – a sort of entropy sets in.  This is somewhat counterintuitive insofar as people know enough about troubleshooting a problem to ask “what changed.”  Sometimes the answer is nothing at all, or nothing you would ever think of.  For example, I once had an application server start to fail when “nothing had changed.”  The cause, found after many hairs were pulled out?  The log file it wrote to tried to exceed 2 GB on a 32-bit system.  It couldn’t write to its log any longer and the app server just froze up.

1.1 Corollary to 1.  Neglect works great in the short term, but the way to go is judicious maintenance!  Neglect is what leads to a 2 GB log in the first place!
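
A concrete, if hypothetical, example of judicious maintenance: rotate that log before it can grow anywhere near 2 GB. A single cron entry does it – the paths here are made up:

# crontab entry: rotate the app server logs weekly, before they get huge
0 3 * * 0 /usr/sbin/logrotate /etc/logrotate.d/appserver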

2. Things will always go wrong at some point.  It will usually not be for the reason you suspect.

2.1 Corollary to 2.  Effective monitoring is critical.  If you build something critical, build a means to monitor it.  Monitor the foundational components as well, so that when trouble strikes you can see what else was still working when one thing went south.
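
Monitoring doesn't have to be elaborate to be effective. A minimal sketch of the idea – the URL and mail address are made up:

# crontab entry: poll a health URL every 5 minutes, send mail on failure
*/5 * * * * curl -fsS -m 10 http://appserver.example.com/health > /dev/null || echo "health check failed" | mail -s "appserver DOWN" admin@example.com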

3. Software support from large vendors is abysmal.  Most small and mid-sized vendors are no better.  The premise of almost all support I’ve encountered is The customer did something wrong.  The most relevant metric is How quickly can the case be closed?  If in that rare case the customer can prove fault by the vendor, Justify doing Absolutely Nothing about it for as Long as Possible.  And NEVER do something immediately useful, like letting the customer speak to a software developer who actually knows what he/she is talking about.

3.1 Corollary to 3.  An IT professional quickly develops all the skills possessed by the front-line engineers who respond to support calls, and can solve most problems on his/her own – out of necessity, since the assistance given won’t take the problem any further anyway.

Observations

Laws are universal.  The following are key observations that are generally true.

1. The more an IT person thinks about a problem, the better the solution.  Better means cheaper, faster, more elegant – even moving the problem from the category of impossible to possible (and this actually happens frequently, believe it or not).

2. Even a brilliant IT professional won’t think up all solutions alone.  Creative problem solving occurs best when there are a couple of brilliant IT professionals bouncing ideas off each other, with a few others at the ready to contribute specialist opinions.

3. How to estimate the amount of time for an IT project:

    2 x (estimate from experienced IT professional) + constant

If many groups and external partners are involved, the multiplier should be increased to 3 or even 4.

This sounds facetious, but it is not.  It is the unfortunate truth of the nature of our work and the unpredictability of the showstopper moments that always occur.
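
To make it concrete with made-up numbers: if an experienced IT professional estimates four weeks and the constant is, say, two weeks, plan on 2 x 4 + 2 = 10 weeks – and with many external partners involved, 3 x 4 + 2 = 14 weeks.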

4. All a seasoned IT person needs to decide that the impossible is possible is to hear that someone else is doing it!  The creative juices start flowing at that point.  Maybe it’s a competitive thing.

5. Large IT organizations contain a large number of people who actually know surprisingly little about IT.  Small IT organizations are also not immune from this.

Categories
Scams

Spam and Scams – What to Expect When You Start a Blog

In my case – not much! It appears that despite providing top-notch content, the only “readers” are those trying to profit from me. To use the word “scam” may be a bit strong, but any outfit that demands money upfront to supposedly help you make money is highly suspect in my playbook.

So I’ve heard from Tina. It goes like this:
Admin – I’ve checked out http://drjohnstechtalk.com/blog/2011/06/grep-is-slow-as-a-snail-in-sles-11/ and I really like your writing style like in your post Grep is Slow as a Snail in SLES 11 | Dr John’s Tech Talk. I am looking for blog authors who would like to write articles as either a full time job or part time job (for some extra money). I think your writing style would work very well. You receive pay per article, anywhere from $5 to $50 per article depending on the topic, article length, etc… If interested you can find more information at www.onlinehomewriter.net.

Please do me a favor and do not follow that link. It redirects you to secure.signup-way.com, a strange-looking URL that McAfee categorizes as Malicious Sites, High Risk. So I don’t think I’ll be going there.

Then there’s Tony:
Blog Admin – If your blog isn’t bringing in as much money as you would like it to check out my site www.QuickCashBlogging.com. We show blog owners how to maximize their blogs earnings potential. Tony

McAfee verdict: Spam site, medium risk. That’s just great.

The McAfee URL checker I use is http://www.trustedsource.org/en/feedback/url.

Clearly these people have programs trolling the Internet for new domains and new blogs, trying to squeeze some $$ from them. Unfortunately I’m not sure any person who could benefit from the information has read my blog. So I feel I am making negative progress – instead of elevating the level of discourse on the Internet, I am helping it to be used for more spam and scams.

I just feel bad for humanity. Is this the best we can do? A well-meaning person embarks on a quixotic journey to provide better technical information on some topics, and the average response from my fellow human beings is to try to take advantage of a hopefully vulnerable and naive newbie? I am literally concerned for us as a race.

August 16th Update
The spam and scam started as a trickle. Now it’s raining spam in my inbox. I continue to be disappointed. In email the ratio of spam to “ham” may be about five to one, so not knowing any better you might expect a similar ratio with WordPress blog comments. Not so! Of my fifty comments, not counting the ones from myself, the legit comments number about two and a half – more like a twenty-to-one ratio. I will probably use a WordPress plugin to cut them off, but since I started on this public service mission, here are some more scams.

This one is spam as it was posted to my Sample page:
HTC is a well known name in the smartphone segment. The company has come up with smartphones boasting of exquisite features and HTC EVO 4G is one of the most potent… It came from an address ending in @mail.ru. One of my many hats is spam fighter, and let me tell you: when you see a sender address ending in mail.ru, you’re talking pure spam. The IP resolves to Latvia, however, and that fact hardly inspires confidence either.

To my post WordPress, Apache2, Permalinks and mod_rewrite under Ubuntu I got a comment The Best Way To Fix Acid Reflux. Now that’s a closely related topic!

Another one claims to help if I’m looking for information about babies (very relevant for a tech blog – yeah, right!).

Very many fall into the generic flattery category. Like this one:
Hey There. I found your blog using msn. This is a very well written article. I will be sure to bookmark it and come back to read more of your useful information. Thanks for the post. I’ll definitely return.

Or this:
I agree with your Gnu Parallel Really Helps With Zcat | Dr John's Tech Talk, great post.

I had to investigate those a little bit as I almost fell for one the first time. Then you realize that each is so generic – except for the one where the title of my post was obviously just pasted in programmatically – that it could be used for any blog post.

Equally popular are the SEO scams – Search Engine Optimization. I think the point of those is to shake a little money from you for supposed help in improving your blog’s ranking in the search engines.

Returning to the flattery scams, how do I know for sure this isn’t real, genuine flattery of my wonderful posts? I’ll tell you. There are a couple of unambiguous clues and another strong hint.

Let’s start with the strong hint. Since I haven’t told anyone about my blog, pretty much the only way someone with a legitimate interest in its content is going to find it is through Google or another search engine. So, in the web server access log, where I am recording the HTTP_REFERER (the URL the browser visited just before hitting my blog post), I should expect to see one of the search engines. I should not see some random web site, because there is simply no good reason for a browser controlled by a human being to go from someone else’s web site directly to mine. And yet that is precisely what I am seeing. I would give examples, but that would only serve to promote their web sites, so I will refrain.
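
Checking this is a one-liner against the access log. A sketch, assuming Apache's combined log format; the log path is made up:

# print the referer (the 4th quote-delimited field) of every comment POST
grep 'wp-comments-post.php' /var/log/apache2/access.log | awk -F'"' '{ print $4 }'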

But even more damning is to examine how long the poster spent on my site. A human being has to read the post, contemplate its meaning, then type in a comment and finally post it, right? That could rarely be done in under a minute. WordPress tells me the IP of the poster of the comment. I take that IP and search for it in the access log using grep. I am seeing that these comments are being made within one second of the web page first being downloaded. One second. It is not humanly possible. But for a program, it’s a piece of cake.
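
The check itself is trivial – again, the log path is an assumption:

# every request from the commenter's IP, timestamps included
grep '109.169.61.16' /var/log/apache2/access.log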

Here’s a real example (some of this may be cut off, depending on your browser):

109.169.61.16 - - [12/Aug/2011:06:31:51 -0400] "GET /blog/2011/06/gnu-parallel-really-helps-with-zcat/ HTTP/1.0" 200 22757 "http://blahblah.net/invest/" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30)"
109.169.61.16 - - [12/Aug/2011:06:31:51 -0400] "GET /blog/ HTTP/1.0" 301 340 "http://blahblah.net/invest/" "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/532.2 (KHTML, like Gecko) Chrome/4.0.221.7 Safari/532.2"
109.169.61.16 - - [12/Aug/2011:06:31:51 -0400] "POST /blog/wp-comments-post.php HTTP/1.0" 302 902 "http://drjohnstechtalk.com" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30)"
109.169.61.16 - - [12/Aug/2011:06:31:52 -0400] "GET /blog/ HTTP/1.0" 200 110806 "http://blahblah.net/invest/" "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/532.2 (KHTML, like Gecko) Chrome/4.0.221.7 Safari/532.2"

This example also illustrates the other damning evidence of the lack of human involvement. A real browser run by a real human being has to pull in a lot of objects to display a single WordPress page: stylesheets, external javascript pages, even the image at the top. They should all be requested and recorded in the access log. But a programmatically controlled browser needs far less! It needs the HTML of the blog page, then the page it POSTs the comment to, and perhaps a third page after the POST to show whether the POST was successful. And that’s exactly what I’ve seen in all the spam/scam comments I’ve checked by hand, not just the flattery scams. They are all using the absolute minimum page accesses, and that simply screams non-human access! I am, unfortunately, not really so special as they would have me believe.

The SEO scams, for their part, are just annoying advertising. Most of the rest is what I’d call link laundering, where they’re using the legitimacy of my site to try to get links to their shady sites included – by trickery, carelessness or any other means. And some are using the comments as pure spam to my inbox, since that’s where comments go for review, and they don’t even care whether I approve their spam for public viewing or not.

Possible Explanation
My hypothesis is that there are specially constructed advanced Google searches you can do to find new WordPress blogs. You can download the results and programmatically loop through them, attempting to post your spam and scams. It’s pretty easy to script a browser-like client such as curl, say with Perl, to post to a WordPress blog. Even I could do it! And that low barrier to entry jibes with the level of professionalism I perceive in these scams, which is to say, pretty low – like something I would cook up by my lonesome! Misspellings, poor English and blatant calls-to-action are par for the course, as are source IPs from remote regions of the world that have no plausible interest in my arcane technical postings.
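
To illustrate how low that barrier is, here is a sketch of the sort of thing a comment bot does. The field names follow the standard WordPress comment form; the post ID, comment text and addresses are all made up:

# a single scripted POST - no stylesheets, no images, no reading time
curl -s -A "Mozilla/4.0 (compatible; MSIE 7.0)" \
  -d "author=Tina" -d "email=tina@example.com" -d "url=http://example.com" \
  --data-urlencode "comment=Great post, I will bookmark it!" \
  -d "comment_post_ID=123" -d "comment_parent=0" \
  http://drjohnstechtalk.com/blog/wp-comments-post.php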

Now you could argue that a real browser could have cached some of those objects, so upon a return visit it might only access a minimal set of objects and hence look a bit like a program. To that I say that it is rarely the case that all objects get cached. And even if they did, you still have to take time to type in your comment, right? No one can do that in a second. The access lines above span the time from 6:31:51 to 6:31:52!!

The Final solution
I think I’ve made my point about the spam. I have followed Ryan’s advice and activated a plugin called Akismet. Their site looks fairly professional – like they know what they are doing. An API key is required to activate the plugin, but that is available for free for personal blogs. I’ll append to this blog whether or not it works!

Feb 28th update
600 spam comments later – 20 in the last few hours alone – I am sooo tired of rotten apples abusing the leave-a-comment feature. Even though I am protected from approving the comments, the spam is still filling up my database. So I have taken an additional step today and implemented a Captcha plugin. This supposedly requires some human intelligence to answer a simple math problem before the post is allowed. I’ll post here about how well it is or isn’t working.

September update
Well, the captcha plugin has stopped virtually all spam, except one random comment. A user wishing to post a comment has to solve a very simple math/language problem. I recommend this approach. I suppose eventually the scammers will catch up with this defense, but in the meantime I am now enjoying peace and tranquility in my seldom-visited but formerly frequently spammed blog!

Categories
Admin IT Operational Excellence Web Site Technologies

Virtual Server not Working in F5 BigIP

OK. This posting is only directly applicable to the small number of people who run BigIP load balancers, and of that set, only a certain subset will likely ever hit this situation. Nevertheless, it’s useful to document it. There are lessons in it for the rest of us: it shows the creative problem-solving process used in IT, or rather the creative process that should be used.

So I had a virtual server associated with a certain pool and it had been operating fine for years. Then something changed. We wanted to associate a new name with this virtual server, but test it first, all while keeping the old name working. Well, this is a secured site, by which I mean it is running https rather than http. There’s nothing intrinsic in the web site itself that ties it to a particular name, so if this were a run-of-the-mill non-secure site you would solve this problem with DNS: set up an alias and you’re good to go. But secured sites are a wee bit trickier. They present a certificate, after all, and the certificate has just one name – at least ours does. (I guess I can address multi-name certificates, known as Subject Alternative Name certificates, in a separate post.) And that name is the original DNS name. What to do? Simple. As any BigIP admin would tell you, you create a new virtual server and associate it with a new IP and a new SSL profile containing the new certificate you just bought, but the old pool. In DNS you assign this new IP to your new DNS name. That’s all pretty straightforward.

Having done all that, I blithely tested with lynx (it’s an old curses-based browser that runs on old Unix systems; the main point is not to test with a complex browser like Internet Explorer, where you are never 100% sure whether the problem lies with the browser. If I had it, I would test with curl, but it’s not on that system). And… it hangs.

Now, I’ll admit I did a lot of stupid things along the way (which is typical of any good debugging session by an IT professional – some self-created red herrings accompany any decent sleuthing), and then I ratcheted up the debugging a notch. Check the web server logs: I see no record of my lynx accesses. Dig a little deeper still. Fire up a trace. Here’s a little time-saver. BigIP does have a tcpdump program, but it is a little stunted. Typically you have multiple interfaces on a BigIP. In this case I felt it pertinent to know whether packets were getting to the BigIP from lynx, and in turn whether those packets were leaving the BigIP and going to the web server. So the tip is: whereas a “normal” tcpdump lets you use the switch -i any to listen on all interfaces, that doesn’t work on BigIP. Use -i 0.0 instead. And of course restrict the capture somehow so that your own shell session’s packets won’t be picked up by the trace, or else you could be in for a nasty surprise of exponentially increasing traffic (a devastating situation perhaps worthy of its own blog entry!). In this case I added the expression port 443. So I have:

tcpdump -i 0.0 port 443
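
If there were no convenient service port to filter on, excluding your own SSH session would accomplish the same self-protection – a minimal sketch:

# capture on all interfaces but leave out your own ssh session's packets
tcpdump -i 0.0 not port 22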

And, somewhat to my surprise (you should always have a hypothesis, even if it’s just a gut feeling: will this little test work or not, and why?), not only were packets going from lynx to the BigIP and then again to the web server, I could even see packets returned from the web server to the BigIP and back to lynx. But it was not a lot of packets: a SYN, a SYN-ACK and maybe a single data packet, and that’s about it. It should have been more chatty.

The more tests you can think of, the better, especially ones that emphasize the marginal differences between the thing that works and the one that doesn’t. One test along those lines: take this same virtual server and associate it with a different pool. I did that, and that test worked!

Next, I tried to access the web server using curl on the BigIP itself. I could, but not at first. First I used the local web server URL http://web_server_ip:443/. It hung my curl command, just like lynx on the other server. Hmm. I then looked at the web server again and noticed that it has a certificate installed. Ah. So it’s actually running https. So try curl from the BigIP again, but this time with the -k switch (insecure, meaning don’t verify the certificate issuer) and a URL beginning with https rather than http. Bingo. It comes back with the home page. Now we’re getting somewhere.
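
Reconstructed, the two tests looked roughly like this (web_server_ip stands in for the actual address, as above):

# first try: plain http to port 443 - hangs, because the server expects a TLS handshake
curl http://web_server_ip:443/
# second try: speak https and skip certificate verification
curl -k https://web_server_ip/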

Finally I looked more closely at the virtual server setup for the old name, the one that works. I saw that the server profile is SSL. It basically means that the traffic is encrypted when it hits the BigIP, and the server CERT is associated with the external name. The BigIP decrypts the traffic, then re-encrypts it before sending it along to the web server. The CERT for the second leg is a self-signed CERT and is never seen by users.

I had forgotten to set up my new test virtual server with the server SSL profile, so the second leg of traffic was not being re-encrypted by the BigIP, even though the web server was only willing to engage in SSL communication. Once I updated the server profile, it all worked fine! Of course after getting the expected results from lynx I went to my desktop browser, just like a regular user, and successfully tested it there as well. You want your final tests to be a realistic approximation of what the user will be doing. If that’s not all possible under your own control, bring in a user for testing.
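
For the record, on BigIP versions that include tmsh, the fix amounts to a one-liner; the virtual server name vs_newname is hypothetical:

# attach a server-side SSL profile so the BigIP re-encrypts the second leg
tmsh modify ltm virtual vs_newname profiles add { serverssl }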

Liked this article? Here’s another of my IT operational excellence articles that has a somewhat wider applicability.