Categories
Apache

Running CGI Scripts from any Directory with Apache

Intro
This is a really basic issue, but the documentation out there often doesn’t speak directly to this single issue – other things get thrown into the mix. This document shows how to enable the running of CGI scripts from any directory under htdocs with the Apache2 web server.

The Details
Let’s get into it. CGI – common gateway interface – is a great environment for simple web server programs. But if you’ve got a fairly generic apache2 configuration file and are trying CGI you might encounter a few different errors before getting it right. Here’s the contents of my test.cgi file:

#!/bin/sh
echo "Content-type: text/html"
echo "Location: http://drjohnstechtalk.com"
echo ""

Not tuning my dflt.conf apache2 configuration file in any way, I find to my horror that the CGI file’s contents are initially returned as-is, just as if it were no more special than a random test file:

> curl -i localhost/test/test.cgi

HTTP/1.1 200 OK
Date: Fri, 24 Feb 2012 14:07:37 GMT
Server: Apache/2
Last-Modified: Fri, 24 Feb 2012 14:05:29 GMT
ETag: "56d8-5c-4b9b640c9dc40"
Content-Length: 92
Content-Type: text/plain
 
#!/bin/sh
echo "Content-type: text/html"
echo "Location: http://drjohnstechtalk.com"
echo ""

That’s no good.

Now I do some research and realize the following line should be added. Here I show it in its context:

<Directory "/usr/local/apache2/htdocs">
# make .cgi and .pl extensions valid CGI types
    AddHandler cgi-script .cgi .pl
    ...

and we get…
> curl -i localhost/test/test.cgi

HTTP/1.1 403 Forbidden
Date: Fri, 24 Feb 2012 14:11:55 GMT
Server: Apache/2
Content-Length: 215
Content-Type: text/html; charset=iso-8859-1
 
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>403 Forbidden</title>
</head><body>
<h1>Forbidden</h1>
<p>You don't have permission to access /test/test.cgi
on this server.</p>
</body></html>

Hmmm. Better, perhaps, but definitely not working. What did we miss? Chances are we had something like this in our dflt.conf:

<Directory "/usr/local/apache2/htdocs">
# make .cgi and .pl extensions valid CGI types
    AddHandler cgi-script .cgi .pl
    Options Indexes FollowSymLinks
    ...

but what we need is this:

<Directory "/usr/local/apache2/htdocs">
# make .cgi and .pl extensions valid CGI types
    AddHandler cgi-script .cgi .pl
    Options Indexes FollowSymLinks ExecCGI
    ...

Note the addition of ExecCGI to the Options statement.
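
As an aside: if you only ever need CGI out of a single, dedicated directory, the classic ScriptAlias approach accomplishes the same thing without any AddHandler or ExecCGI – everything under the aliased directory is treated as a CGI script. A minimal sketch (the paths are just examples):

ScriptAlias /cgi-bin/ "/usr/local/apache2/cgi-bin/"

But the point of this article is running CGI from any directory under htdocs, hence the handler-plus-ExecCGI route.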

With that tweak to our apache2 configuration we get the desired result after restarting:

> curl -i localhost/test/test.cgi

HTTP/1.1 302 Found
Date: Fri, 24 Feb 2012 14:15:47 GMT
Server: Apache/2
Location: http://drjohnstechtalk.com
Content-Length: 209
Content-Type: text/html; charset=iso-8859-1
 
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>302 Found</title>
</head><body>
<h1>Found</h1>
<p>The document has moved <a href="http://drjohnstechtalk.com">here</a>.</p>
</body></html>

But what if we want our CGI script to be our default page? What I did for that is to add this statement:

DirectoryIndex index.html index.cgi

so now we have:

<Directory "/usr/local/apache2/htdocs">
# make .cgi and .pl extensions valid CGI types
    AddHandler cgi-script .cgi .pl
    Options Indexes FollowSymLinks ExecCGI
    DirectoryIndex index.html index.cgi
    ...

I create an index.cgi, delete the index.html, and index.cgi is run from the top-level URL:

> curl -i johnstechtalk.com

HTTP/1.1 302 Found
Date: Fri, 29 Apr 2013 14:15:47 GMT
Server: Apache/2
Location: http://drjohnstechtalk.com/blog/
Content-Length: 209
Content-Type: text/html; charset=iso-8859-1
 
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>302 Found</title>
</head><body>
<h1>Found</h1>
<p>The document has moved <a href="http://drjohnstechtalk.com/blog/">here</a>.</p>
</body></html>

Conclusion
We have shown the two most common problems that occur when trying to enable CGI execution with the apache2 web server, and how to fix the configuration for each.

Categories
TCP/IP

C++ TCP Socket Program

Intro
I was looking around for a sample TCP socket program written in C++ that might make working with TCP sockets less mysterious. I expected to find a flood of things to pick from, but that really wasn’t the case.

The Details
OK, I only looked for a few minutes, to be honest. The one I did settle on seems adequate. It’s sufficiently old, however, that it doesn’t actually work as-is. Probably if it did I wouldn’t even mention it. So I thought it was worth repeating here, with some tiny semantic updates.

What I used is from this web page: http://cs.baylor.edu/~donahoo/practical/CSockets/practical/. I was really only interested in the TCP echo client. It’s a good stand-in for any TCP client, I think.

Here’s TCPechoClient.cpp:

/*
 *   C++ sockets on Unix and Windows
 *   Copyright (C) 2002
 *
 *   This program is free software; you can redistribute it and/or modify
 *   it under the terms of the GNU General Public License as published by
 *   the Free Software Foundation; either version 2 of the License, or
 *   (at your option) any later version.
 *
 *   This program is distributed in the hope that it will be useful,
 *   but WITHOUT ANY WARRANTY; without even the implied warranty of
 *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 *   GNU General Public License for more details.
 *
 *   You should have received a copy of the GNU General Public License
 *   along with this program; if not, write to the Free Software
 *   Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
 */
 
// taken from http://cs.baylor.edu/~donahoo/practical/CSockets/practical/TCPEchoClient.cpp
 
#include "PracticalSocket.h"  // For Socket and SocketException
#include <iostream>           // For cerr and cout
#include <cstdlib>            // For atoi()
#include <cstring>            // author forgot this
 
using namespace std;
 
const int RCVBUFSIZE = 32;    // Size of receive buffer
 
int main(int argc, char *argv[]) {
  if ((argc < 3) || (argc > 4)) {     // Test for correct number of arguments
    cerr << "Usage: " << argv[0]
         << " <Server> <Echo String> [<Server Port>]" << endl;
    exit(1);
  }
 
  string servAddress = argv[1]; // First arg: server address
  char *echoString = argv[2];   // Second arg: string to echo
// DrJ test
//  echoString = "GET / HTTP/1.0\n\n";
  int echoStringLen = strlen(echoString);   // Determine input length
  unsigned short echoServPort = (argc == 4) ? atoi(argv[3]) : 7;
 
  try {
    // Establish connection with the echo server
    TCPSocket sock(servAddress, echoServPort);
 
    // Send the string to the echo server
    sock.send(echoString, echoStringLen);
 
    char echoBuffer[RCVBUFSIZE + 1];    // Buffer for echo string + \0
    int bytesReceived = 0;              // Bytes read on each recv()
    int totalBytesReceived = 0;         // Total bytes read
    // Receive the same string back from the server
    cout << "Received: ";               // Setup to print the echoed string
    while (totalBytesReceived < echoStringLen) {
      // Receive up to the buffer size bytes from the sender
      if ((bytesReceived = (sock.recv(echoBuffer, RCVBUFSIZE))) <= 0) {
        cerr << "Unable to read";
        exit(1);
      }
      totalBytesReceived += bytesReceived;     // Keep tally of total bytes
      echoBuffer[bytesReceived] = '\0';        // Terminate the string!
      cout << echoBuffer;                      // Print the echo buffer
    }
    cout << endl;
 
    // Destructor closes the socket
 
  } catch(SocketException &e) {
    cerr << e.what() << endl;
    exit(1);
  }
 
  return 0;
}

Note the cstring header file I needed to include. The standard must have changed to require this since the original code was published.

Then I needed PracticalSocket.h, but that has no changes from the original version: http://cs.baylor.edu/~donahoo/practical/CSockets/practical/PracticalSocket.h, and his Makefile is also just fine: http://cs.baylor.edu/~donahoo/practical/CSockets/practical/Makefile. For the fun of it I also set up the TCP Echo Server: http://cs.baylor.edu/~donahoo/practical/CSockets/practical/TCPEchoServer.cpp.

Run

make TCPEchoClient

and you should be good to go. How to test this TCPEchoClient against your web server? I found that the following works (the blank line inside the quoted argument matters – it’s what terminates the HTTP request headers):

~/TCPEchoClient drjohnstechtalk.com 'GET / HTTP/1.0
Host: drjohnstechtalk.com
 
' 80

which gives this output:

Received: HTTP/1.1 301 Moved Permanently
Date: Thu, 23 Feb 2012 17:19:02

which, now that I analyze it, looks cut-off. Hmm. Because with curl I have:

curl -i drjohnstechtalk.com

HTTP/1.1 301 Moved Permanently
Date: Thu, 23 Feb 2012 17:19:43 GMT
Server: Apache/2.2.16 (Ubuntu)
X-Powered-By: PHP/5.3.3-1ubuntu9.5
Location: http://www.drjohnstechtalk.com/blog/
Vary: Accept-Encoding
Content-Length: 2
Content-Type: text/html

I guess that’s what you get for demo code. At this point I don’t have a need to sort it out fully, so I won’t. Perhaps we’ll come back to it later. But looking at the code, the truncation makes sense: the receive loop exits once totalBytesReceived reaches echoStringLen, which is fine for a true echo server that sends back exactly as many bytes as it received, but an HTTP response is much longer than the request. I also see the receive buffer is quite small, 32 bytes. I tried to set that to a generous value, 200 MBytes, but got a segmentation fault. The largest I could manage, after experimentation, was 10000000 bytes:

//const int RCVBUFSIZE = 32;    // Size of receive buffer
const int RCVBUFSIZE = 10000000;    // Size of receive buffer - why is 10 MB the max. value??

and this does indeed give us the complete output from our web server home page now.
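
The segmentation fault is less mysterious than it first looks: echoBuffer is a local array, so it lives on the stack, and the stack is typically limited to something like 8 or 10 MB (see ulimit -s). A 200 MB stack buffer blows right past that limit. If I really wanted a huge buffer, heap allocation would be the cleaner fix. Here is a sketch of that modification (mine, not the original author’s; only the buffer-related lines change):

// in main(), replacing the stack array - requires #include <vector> up top
    std::vector<char> echoBuffer(RCVBUFSIZE + 1);   // heap-allocated: no stack-size limit
    int bytesReceived = 0;              // Bytes read on each recv()
    ...
      if ((bytesReceived = (sock.recv(&echoBuffer[0], RCVBUFSIZE))) <= 0) {
        cerr << "Unable to read";
        exit(1);
      }
      echoBuffer[bytesReceived] = '\0';        // Terminate the string
      cout << &echoBuffer[0];                  // Print the echo buffer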

Conclusion
There is some demo C++ code which creates a usable class for dealing with TCP sockets. There might be some work to do before it could be used in a serious application, however.

Categories
Apache Linux Web Site Technologies

Turning Apache into a Redirect Factory

Intro
I’m getting a little more used to Apache. It’s a strange web server with all sorts of bolt-on pieces. The official documentation is horrible, so you really need sites like this to explain how to actually do useful things. You need real, working examples. In this example I’m going to show how to use the mod_rewrite engine of Apache to build a powerful and convenient web server whose sole purpose in life is for all types of redirects. I call it a redirect factory.

Which Redirects Will it Handle
The redirects will be read in from a file with an easy, editable format. So we never have to touch our running web server. We’ll build in support for the types of redirect requests that I have actually encountered. We don’t care what kind of crazy stuff Apache might permit. You’ll pull your hair out trying to understand it all. All redirects I have ever encountered fall into a relatively small handful of use cases. Ordered by most to least common:

  1. host -> new_url
  2. host/uri[Suffix] -> new_fixed_url (this can be a case-sensitive or case-insensitive match to the uri)
  3. host/uri[Suffix] -> new_prefix_uri[Suffix] (also either case-sensitive or not)

So some examples (not the best examples because I don’t manage drj.com or drj.net, but pretend I did):

  1. drj.com/WHATEVER -> http://drjohnstechtalk.com/
  2. www.drj.com -> http://drjohnstechtalk.com/
  3. drj.com/abcPATH/Preserve -> http://drjohnstechtalk.com/abcPATH/Preserve
  4. drj.com/defPATH/Preserve -> http://drjohnstechtalk.com/ghiPATH/Preserve
  5. drj.com/path/with/slash -> https://drjohnstechtalk.com/other/path
  6. drj.com/path/with/prefix -> https://drjohnstechtalk.com/other/path
  7. drj.net/pAtH/whatever -> https://drjohnstechtalk.com/straightpath
  8. drj.net/2pAtH/stuff?hi=there -> http://drjohnstechtalk.com/2straightpath/stuff?hi=there
  9. my.host -> http://regular-redirect.com/
  10. whatever-host.whatever-domain/whatever-URI -> http://whatever-new-host.whatever-new-domain/whatever-new-URI

All these different cases can be handled with one config file. I’ve named it redirs.txt. It looks like this:

# redirs file
# The default target has to be listed first
defaultTarget   D       http://www.drjohnstechtalk.com/blog/
# hosts with URI-matching grouped together
# available flags: "P" - preserve part after match
#                  "C" - exact case match of URI
 
# Begin host: drj.com:www.drj.com - ":"-separated list of applicable hostnames
/                       http://drjohnstechtalk.com/
/abc    P       http://drjohnstechtalk.com/abc
/def    P       http://drjohnstechtalk.com/ghi
/path/with/slash https://drjohnstechtalk.com/other/path
/path/with/prefix P  https://drjohnstechtalk.com/other/path
# end host drj.com:www.drj.com
 
# this syntax - host/URI - is also OK...
drj.net/ter             http://drjohnstechtalk.com/terminalredirect
drj.net/pAtH    C       http://drjohnstechtalk.com/straightpath
drj.net/2pAtH   CP      http://drjohnstechtalk.com/2straightpath
 
# hosts with only host-name matching
my.host                 http://regular-redirect.com/
www.drj.edu             http://education-redirect.edu/edu-path

The Apache configuration file piece is this:

# I really don't think this does anything other than chase away a scary warning in the error log...
RewriteLock ${APACHE_LOCK_DIR}/rewrite_lock
 
# Inspired by the dreadful documentation on http://httpd.apache.org/docs/2.0/mod/mod_rewrite.html
RewriteEngine on
RewriteMap  redirectMap prg:conf/vhosts/redirect.pl
#RewriteCond ${lowercase:%{HTTP_HOST}} ^(.+)$
RewriteCond ${redirectMap:%{HTTP_HOST}%{REQUEST_URI}} ^(.+)$
# %N are backreferences to RewriteCond matches, and $N are backreferences to RewriteRule matches
RewriteRule ^/.* %1 [R=301,L]

Remember I split up apache configuration into smaller files. So that’s why you don’t see the lines about logging and what port to listen on, etc. And the APACHE_LOCK_DIR is an environment variable I set up elsewhere. This file is called redirect.conf and is in my conf/vhosts directory.

In my main httpd.conf file I extended the logging to prefix the lines in the access log with the host name (since this redirect server handles many host names this is the only way to get an idea of which hosts are popular):

...
    LogFormat "%{Host}i %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
...

So a typical log line looks something like the following:

drj.com 201.212.205.11 - - [10/Feb/2012:09:09:07 -0500] "GET /abc HTTP/1.1" 301 238 "http://www.google.com.br/url?sa=t&rct=j&q=drjsearch" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; .NET4.0C; .NET4.0E)"

I had to re-compile apache because originally my version did not have mod_rewrite compiled in. My description of compiling Apache with this module is here.

The directives themselves I figured out based on the lousy documentation at their official site: http://httpd.apache.org/docs/2.0/mod/mod_rewrite.html. The heavy lifting is done in the Perl script because there you have some freedom (yeah!) and are not constrained to understand all their silly flags. One trick that does not seem documented is that you can send the full URL to your mapping program. Note the %{HTTP_HOST}%{REQUEST_URI} after the “:”.
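
It’s worth spelling out how a prg: RewriteMap behaves, since the documentation buries it: Apache starts your program once, at startup, feeds it one lookup key per line on stdin, and expects exactly one line of output per key (the documented convention is to return the four-character string NULL when there is no match). That is also why redirect.pl sets $|=1 below – if the script buffered its output, Apache would hang waiting for the answer. And that RewriteLock line above is, I believe, what serializes access to the program across Apache child processes, so it may be doing more than chasing away a warning.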

I tried to keep redirect.pl brief and simple. Considering the many different cases it isn’t too bad. It weighs in at 70 lines. Here it is:

#!/usr/bin/perl
# Copyright work under the Artistic License, http://www.opensource.org/licenses/Artistic-2.0
# input is $HTTP_HOST$REQUEST_URI
$redirs = "redirs.txt";
# here I only want the actual script name
$working_directory = $script_name = $0;
$script_name =~ s/.*\///g;
$working_directory =~ s/\/$script_name$//g;
$finalType = "";
$DEBUG = 0;
$|=1;
while (<STDIN>) {
  chomp;
  ($host,$uri) = /^([^\/]+)\/(.*)/;
  $host = lc $host;
# use generic redirect file
  open(REDIRS,"$working_directory/$redirs") || die "Cannot open redirs file $redirs!!\n";
  $lenmatchmax = -1;
  while(<REDIRS>) {
# look for alternate names section
    if (/#\s*Begin host\s*:\s*(\S+)/i) {
      @hostnames = split /:/,$1;
      $pathsection = 1;
    } elsif (/#\s*End host/i) {
      $pathsection = 0;
    }
    @hostnames = () unless $pathsection;
    next if /^#/ || /^\s*$/; # ignore comments and blank lines
    chomp;
    $type = "";
# take out trailing spaces after the target URL
    s/\s+$//;
    if (/^(\S+)\s+(\S{1,2})\s+(\S+)$/) {
      ($redirsURL,$type,$targetURL) = ($1,$2,$3);
    } else {
       ($redirsURL,$targetURL) = /^(\S+)\s+(\S+)$/;
    }
# set default target if specified. It has to come at beginning of file
    $finalURL = $targetURL if $type =~ /D/;
    $redirsHost = $redirsURI = $redirsURIesc = "";
    ($redirsHost,$redirsURI) = $redirsURL =~ /^([^\/]*)\/?(.*)/;
    $redirsURIesc = $redirsURI;
    $redirsURIesc =~ s/([\/\?\.])/\\$1/g;
    print "redirsHost,redirsURI,redirsURIesc,targetURL,type: $redirsHost,$redirsURI,$redirsURIesc,$targetURL,$type\n" if $DEBUG;
    push @hostnames,$redirsHost unless $pathsection;
    foreach $redirsHost (@hostnames) {
    if ($host eq $redirsHost) {
# assume case-insensitive match by default.  Use type of 'C' to demand exact case match
# also note this matches even if uri and redirsURI are both empty
      if ($uri =~ /^$redirsURIesc/ || ($type !~ /C/ && $uri =~ /^$redirsURIesc/i)) {
# find longest match
        $lenmatch = length($redirsURI);
        if ($lenmatch > $lenmatchmax) {
          $finalURL = $targetURL;
          $finalType = $type;
          $lenmatchmax = $lenmatch;
          if ($type =~ /P/) {
# prefix redirect
            if ($uri =~ /^$redirsURIesc(.+)/ || ($type !~ /C/ && $uri =~ /^$redirsURIesc(.+)/i)) {
              $finalURL .= $1;
             }
          }
        }
      }
    } # end condition over input host matching host from redirs file
    } # end loop over hostnames list
  } # end loop over lines in redirs file
  close(REDIRS);
# non-prefix re-direct. This is bizarre, but you have to end URI with "?" to kill off the query string, unless the target already contains a "?", in which case you must NOT add it! Gotta love Apache...
  $finalURL .= '?' unless $finalType =~ /P/ || $finalURL =~ /\?/;
  print "$finalURL\n";
} # end loop over STDIN

The nice thing here is that there are a couple of ways to test it, which gives you a sort of cross-check capability. Of course I made lots of mistakes in programming it, but I worked through all the cases until they were all right, using rapid testing.

For instance, let’s see what happens for www.drj.com. We run this test from the development server as follows:

> curl -i -H 'Host: www.drj.com' 'localhost:90'

HTTP/1.1 301 Moved Permanently
Date: Thu, 09 Feb 2012 15:24:25 GMT
Server: Apache/2
Location: http://drjohnstechtalk.com/
Content-Length: 235
Content-Type: text/html; charset=iso-8859-1

<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>301 Moved Permanently</title>
</head><body>
<h1>Moved Permanently</h1>
<p>The document has moved <a href="http://drjohnstechtalk.com/">here</a>.</p>
</body></html>

And from the command line I test redirect.pl as follows:

> echo "www.drj.com/"|./redirect.pl

http://drjohnstechtalk.com/?

That terminal "?" is unfortunate, but apparently you need it to kill off any possible query_string.
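
(Years later I learned that Apache 2.4 added a cleaner way to discard the query string: the QSD flag, as in [R=301,QSD,L]. On the 2.0-era server described here, the trailing "?" was the trick.)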

You want some more? OK. How about matching a host and the initial path in a case-insensitive manner? No problem, we’re up to the challenge:

> curl -i -H 'Host: DRJ.COM' 'localhost:90/PATH/WITH/SLASH/stuff?hi=there'

HTTP/1.1 301 Moved Permanently
Date: Thu, 09 Feb 2012 15:38:12 GMT
Server: Apache/2
Location: https://drjohnstechtalk.com/other/path
Content-Length: 246
Content-Type: text/html; charset=iso-8859-1

<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>301 Moved Permanently</title>
</head><body>
<h1>Moved Permanently</h1>
<p>The document has moved <a href="https://drjohnstechtalk.com/other/path">here</a>.</p>
</body></html>

Refer back to the redirs file and you see this is the desired behaviour.

We could go on with an example for each case, but we’ll conclude with one last one:

> curl -i -H 'Host: DRJ.NET' 'localhost:90/2pAtHstuff?hi=there'

HTTP/1.1 301 Moved Permanently
Date: Thu, 09 Feb 2012 15:44:37 GMT
Server: Apache/2
Location: http://drjohnstechtalk.com/2straightpathstuff?hi=there
Content-Length: 262
Content-Type: text/html; charset=iso-8859-1

<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>301 Moved Permanently</title>
</head><body>
<h1>Moved Permanently</h1>
<p>The document has moved <a href="http://drjohnstechtalk.com/2straightpathstuff?hi=there">here</a>.</p>
</body></html>

A case-sensitive, preserve match. Change "pAtH" to "path" and there is no matching line in redirs.txt, so you will get the default URL.

Creating exceptions

Eventually I wanted to have an exception – a URI which should be served with a 200 status rather than redirected. How to handle?

# Inspired by the dreadful documentation on http://httpd.apache.org/docs/2.0/mod/mod_rewrite.html
        RewriteEngine on
# just this one page should NOT be redirected
        RewriteRule ^/dontredirectThisPage.php - [L]
        RewriteMap  redirectMap prg:redirect.pl
        ... etc ...

The above apache configuration snippet shows that I had to put the page which shouldn’t be redirected at the top of the ruleset, set its target to "-" (which turns off redirection for that match), and make this the last executed rewrite rule. I think this is better than a negated match (!), which always gets complicated – see the sketch just below for comparison.
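
For the record, the negated-match version would have looked something like this (an untested sketch):

RewriteCond %{REQUEST_URI} !^/dontredirectThisPage\.php
RewriteCond ${redirectMap:%{HTTP_HOST}%{REQUEST_URI}} ^(.+)$
RewriteRule ^/.* %1 [R=301,L]

Every new exception means another negated RewriteCond, which is why I prefer the explicit pass-through rule.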

Conclusion
A powerful redirect factory was constructed from Apache and Perl. We suffered quite a bit during development because of incomprehensible documentation. But hopefully we’ve saved someone else this travail.

References and related

2022 update. This is a very nice commercial service for redirects which I have just learned about: https://www.easyredir.com/
This post describes how to massage Apache so that it always returns a maintenance page no matter what URI was originally requested.
I have since learned that another term used in the industry for redirect server is persistent URL (PURL). It’s explained in Wikipedia by this article: https://en.wikipedia.org/wiki/Persistent_uniform_resource_locator

Categories
Ajax

Web to ssh gateway – not so difficult with the Right Tools

Intro
I won’t go into details in this posting for fear that the “bad people” will be more likely to benefit than the legitimate users of what I’m describing. That being said there are some legitimate uses, for instance when you need that terminal access but a direct ssh connection just isn’t available.

Ajaxterm
I’m kind of amazed at how far Javascript has come. You can implement a curses-based application in javascript, i.e., a terminal console? Yup. You bet. And the kicker is that it works quite well. Teraterm it ain’t, but I’ll be danged if you can’t vi a file, run top, and use your basic commands, all over a pretty standard-looking web page. That’s what we mean by gateway – an application which converts one protocol to another, in this case HTTP to shell (I suppose).

The generic application is called ajaxterm. I used it from a distribution that runs a local python server on my server. It’s described here:

https://github.com/antonylesuisse/qweb/tree/master/ajaxterm/README.txt

If you keep the default screen size, 80×24, he says it has few enough characters that a screen refresh can be contained in one packet. In my testing the echo delay was probably under one quarter second.

Forget about a scroll bar holding thousands of lines, however. You get just your basic terminal like in the old days.

Someone reminded me about screen, which I hadn’t been using. Screen is an extremely useful tool. It’s like a terminal multiplexor. Now I normally set up my screen escape sequence to be Ctrl-\, but for some reason this particular sequence is not recognized by Ajaxterm. What I settled on instead is Ctrl-g (escape ^Gg in your .screenrc). I don’t like to use the default Ctrl-a because this is a useful emacs editing mode sequence – takes you to beginning of line. Popping between screens is a little slow with ajaxterm as might be expected. It’s a worst-case, everything must be re-drawn situation, I suppose. But ajaxterm + screen is a pretty powerful combination.

Conclusion
Now I have an additional path to my server’s command line if a direct ssh connection isn’t available.

Categories
Ajax flot jquery Perl

Making Function Plots fun using Ajax while solving a real-world problem

Intro
I learned an awful lot from this exercise. I wanted to plot the trajectory of a foam basketball through the air. You know the kind of thing where you can vary the initial conditions to see what differences the results will produce. Finally, finally a good excuse to learn some Ajax. Ajax is a natural fit because you can work within the same web page and the feel is more interactive.

High level description
There’s so much here to describe I hardly know where to begin. I may never get through describing it all.

At the highest levels I had to learn some of the following:

  • php
  • Ajax
  • DOM
  • Javascript
  • jquery
  • flot
  • json

Perl and basic physics are not on the list – they are used but I already know those!

I basically only learned as much as I needed to accomplish the task. This saved me quite a bit of time as you can get bogged down for months in any single one of those topics above. I’m pretty good at “programming by analogy” and this really put those skills to the test because, as is usually the case, analogies were indeed present, but they weren’t very exact so I needed a scary amount of extrapolation from what samples were easily available.

The net result of all this? I think it’s pretty neat if I say so myself. This web page follows the trajectory of a small foam basketball from a given set of initial conditions. The trajectory is plotted. You tweak the initial conditions and a new trajectory is plotted on top of the old one so you can see the differences. Here’s a link to the application.

To be continued in great detail, hopefully…

Categories
Admin Network Technologies

The IT Detective Agency: virus updates are failing

Intro
This case hardly qualifies as worthy subject matter for the IT Detective Agency – it’s pretty run-of-the-mill stuff. But I wanted to document it for completeness and show how a problem in one thing can turn out to have an unexpected cause (at least unexpected to me; in hindsight it’s dead obvious what the issue was likely to have been).

The Situation
We have lots of servers at drjohns. So when one of our admins, Shake, said that one of them, nfuz01, couldn’t reach the Etrust server to get its virus updates, I had no recollection of what that server is or does. Shake asked if the firewall had changed recently. That’s sort of a tricky question because there are always minor changes being done. Most have absolutely no effect because they are additional rules providing new permissions. So I bravely answered no, it hadn’t. And I wondered what he meant in using the word "reach" anyways.

So I walk up to Shake’s desk to get a better idea. He said not only are virus signature updates not occurring, but neither server can PING the other, neither by name nor by IP address. Now we’re getting somewhere. I still haven’t registered where nfuz01 is, but I know the firewall as I’ve set it up permits ICMP traffic to transit. I suggested that maybe nfuz01 had some missing or messed-up routes. Then I went back to my desk to think some more. That’s what gets me motivated – when I’ve publicly speculated about the root cause of something. It’s not so much that I may be proven wrong, but if I am wrong, I want to be the first to find out and issue a correction.

So I tried a PING from my desktop:

C:\>ping 10.91.12.14
 
Pinging 10.91.12.14 with 32 bytes of data:
Reply from 171.18.252.10: TTL expired in transit.
Reply from 171.18.252.10: TTL expired in transit.

I look up where nfuz01 is. It is in a secondary data center. I ping it from a server in that same data center, but on a different segment – it works fine! I ping it from a Linux server in my main data center – totally different results:

> ping 10.91.12.14
PING 10.91.12.14 (10.91.12.14) 56(84) bytes of data.
From 171.18.252.10 icmp_seq=1 Time to live exceeded
From 171.18.252.10 icmp_seq=2 Time to live exceeded
 
--- 10.91.12.14 ping statistics ---
2 packets transmitted, 0 received, +2 errors, 100% packet loss, time 1000ms
 
> traceroute -n 10.91.12.14
traceroute to 10.91.12.14 (10.91.12.14), 30 hops max, 40 byte packets
 1  10.136.188.2  1.100 ms  1.523 ms  2.010 ms
 2  10.1.4.2  0.934 ms  0.941 ms  0.773 ms
 3  10.1.4.141  0.869 ms  0.926 ms  0.913 ms
 4  171.18.252.10  1.076 ms  1.096 ms  1.130 ms
 5  10.1.4.129  1.043 ms  1.029 ms  1.018 ms
 6  10.1.4.141  0.993 ms  0.611 ms  0.918 ms
 7  171.18.252.10  0.932 ms  0.916 ms  1.002 ms
 8  10.1.4.129  0.987 ms  1.089 ms  1.121 ms
 9  10.1.4.141  1.152 ms  1.246 ms  1.229 ms
10  171.18.252.10  2.040 ms  2.747 ms  2.735 ms
11  10.1.4.129  1.332 ms  1.418 ms  1.467 ms
12  10.1.4.141  1.477 ms  1.754 ms  1.685 ms
13  171.18.252.10  1.993 ms  1.978 ms  2.013 ms
14  10.1.4.129  1.930 ms  1.960 ms  2.039 ms
15  10.1.4.141  2.065 ms  2.156 ms  2.140 ms
16  171.18.252.10  2.116 ms  5.454 ms  5.453 ms
17  10.1.4.129  4.466 ms  4.385 ms  4.296 ms
18  10.1.4.141  4.266 ms  4.267 ms  4.260 ms
19  171.18.252.10  4.232 ms  4.216 ms  4.216 ms
20  10.1.4.129  4.182 ms  4.063 ms  4.009 ms
21  10.1.4.141  3.994 ms  3.987 ms  2.398 ms
22  171.18.252.10  2.400 ms  2.484 ms  2.690 ms
23  10.1.4.129  2.346 ms  2.449 ms  2.544 ms
24  10.1.4.141  2.534 ms  2.607 ms  2.610 ms
25  171.18.252.10  2.602 ms  2.742 ms  2.736 ms
26  10.1.4.129  2.776 ms  2.856 ms  2.848 ms
27  10.1.4.141  2.648 ms  3.185 ms  3.291 ms
28  171.18.252.10  3.236 ms  3.223 ms  3.235 ms
29  10.1.4.129  3.219 ms  3.277 ms  3.377 ms
30  10.1.4.141  3.363 ms  3.381 ms  3.449 ms

Cool, right? We’ve caught a network loop in the act. Now I know it isn’t the firewall, it isn’t the routes on nfuz01 but it is something with networking. So I sent that off to them….

In less than an hour I got the explanation as well as the fix:


All should be reachable again. There’s a loop I can’t clear amongst some [telecom-owned] routers in the main data center. I’ve superseded it with two /27s until they clear it.

And it pings fine now:

> ping 10.91.12.14
PING 10.91.12.14 (10.91.12.14) 56(84) bytes of data.
64 bytes from 10.91.12.14: icmp_seq=1 ttl=125 time=46.2 ms
64 bytes from 10.91.12.14: icmp_seq=2 ttl=125 time=40.1 ms
64 bytes from 10.91.12.14: icmp_seq=3 ttl=125 time=23.1 ms
 
--- 10.91.12.14 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 23.114/36.502/46.275/9.796 ms

And Shake says the updates came in.

Case closed!

Conclusion
Why wasn’t the problem more obvious to us from the very beginning? Well, if the admin who said nfuz01 couldn’t reach Etrust had tried to log in to nfuz01 through the normal Remote Desktop mechanism – and of course failed – then we might have drilled down into a networking cause more quickly. But nfuz01 is a VM, and he must have been logged on via VMware Virtual Center, and so he hadn’t noticed that basically the server couldn’t reach anywhere in our main data center. It is also an obscure server (remember that I had no recall about it?) so no one really noticed that it was effectively out of business.

Categories
Admin IT Operational Excellence Proxy

The IT Detective Agency: the case of the Sales and Use tax software

Intro
I have to give credit to my colleague “Ben” for cracking this case, which left me scratching my head. Users at drjohns were getting new Windows 7 PCs and some of the old software wasn’t going to run on those new PCs, including our indirect tax sales and use software from Thomson Reuters. The new approach is SaaS – software as a service. The new package was approved and everyone thought it was going to work fine, until late in the game it was actually tested. They couldn’t bring up their old tax returns. So at the last hour they bring in the Internet experts.

The Details
At drjohns our users are insulated from the Internet by proxy servers. There are no direct routes. It’s private address space and an explicit proxy connection to browse out to the Internet. 99% of the time this works fine. And it sure is a secure way to go. But those exceptions can be quite a headache. This case is a very typical presentation of what we see, though the particular solution varies case by case.

We get detailed network requirements. They usually talk about opening up the firewall to certain servers, etc. We always patiently explain that the firewall is open – to the proxy! The desktops have no Internet routes, nor can they resolve Internet domain names. That’s right: we have private root DNS servers. Most vendors have never encountered this setup and so they dig in their heels and insist that the only way is to "open the firewall…"

This case was no different, except we didn’t actually talk to the vendor. But their requirements were crystal clear in this networking document. Here’s the snippet that would seem to be fatal given our Intranet architecture:

RS APPLICATION SERVERS
The ONESOURCE Sales & Use Application Servers use TCP/IP communications from the client
PC to the Server. The requirements for communications with the ONESOURCE Application
Servers are itemized below:
- DNS Name Resolution is not used for the Application Servers.
- Proxy Server access to the Application Servers is supported ONLY in transparent mode. The
Proxy Server must not translate the TCP/IP address of the Application Servers. PCs must be
able to establish a connection using the actual TCP/IP address and port numbers of the
Application Server without application “awareness” of a Proxy Server.
- Network Address Translation (NAT) is supported for the client addresses but is NOT
supported for the Application Server addresses.
- Connections are outbound only from the client to the server.
- Security policies, firewall rules, proxy rules and router packet filters must allow outbound
connections (and inbound replies) on destination port 2429 to the Class “B” network address
164.57.0.0. when using the non-WCF application servers. If the client’s account has been
configured to use Windows Communications Foundation or WCF, there are no additional port
requirements. The source port selection uses standard port numbers 1024 and above.

The application installs about 10 ActiveX controls and it wouldn’t run on my desktop. Ben managed to get it to run using the OpenText socks client. It has an option to “socksify everything else” which he says proves to be very useful when you don’t know what specific application to socksify. So now let me repeat what I have just said: Ben got it to work without any changes to the firewall, ignoring all the vendor’s advice and requirements!

I was very pleased as this was getting to be a high-priority issue what with these sales and use taxes due each month.

But Ben didn’t stop there. He came up with an even better solution. He said he was looking around at the folder where all the stuff is installed by the application. He noticed a file called ConfigProxy. He configured it to use the system proxy settings. Then he exempted the target site from proxy authentication. Lo and behold, that worked as well, with no socksification required at all. We only socksify an app as a last resort.

This latest finding completely contradicts the vendor’s stated network requirements. But it’s better this way.

We now have a happy tax department. Case closed.

Conclusion
Vendor network requirements are not always what they seem. Clearly they are not testing in the more obscure environments such as a private Intranet with an independent namespace that connects to the wider Internet only via explicit proxy. If you’re in this situation, which offers some serious security advantages, there are things you can do to get demanding applications to work.

Categories
Admin IT Operational Excellence Network Technologies

The IT Detective Agency: the case of the Adobe form network issue

Intro
Sometimes IT is called in to fix things we know little or nothing about. We may fix it, still not know anything, except what we did to fix it, and move on. Such is the case here with a mysterious Adobe Form that wasn’t working when I was called in to a meeting to discuss it.

The Case
One of our developers created a simple Adobe Acrobat form document. It has some logic to ask for a username and password and then verify them against an LDAP server using a network connection. Or at least that was the theory. It worked fine and then we got new PCs running Windows 7 and it stopped working. They asked me for help.

I asked to get a copy of the form because I like to test on my desktop where I am free to try dumb things and not waste others’ time. Initially I thought I also had the error. They showed me how to turn on Javascript debugging in edit|preferences. The debug window popped up with something like this:

debug 5 : function setConstants
debug 5 : data.gURL = https://b2bqual.drjohnstechtalk.com/invoke/Drjohns_Services.utilities:httpPostToBackEnd
debug 5 : function today_ymd
debug 5 : mrp::initialize: version 0.0001 debug level = 5
debug 5 : Login clicked
debug 5 : calling LDAPQ
debug 5 : in LDAPQ
 
NotAllowedError: Security settings prevent access to this property or method.
SOAP.request:155:XFA:data[0]:mrp[0]:sub1[0]:btnLogin[0]:clic

But this wasn’t the real problem. In this form you get a yellow bar at the top along with this message and you give approval to the form to access what it needs. Then you run it again.

For me, then, it worked. I knew this because it auto-populated some user information fields after taking a few seconds.

So I worked with a couple of people for whom it wasn’t working. One had Automatically detect proxy settings checked. Unfortunately the new PCs came this way, and it’s really not what we want. We prefer to provide a specific PAC file. With the auto-detect unchecked it worked for this guy.

The next guy said he already had that unchecked. I looked at his settings and confirmed that was the case. However, in his case he mentioned that Firefox is his default browser. He decided to change it back to Internet Explorer. Then he tested and, lo and behold, it began to work for him as well!

When it wasn’t working he was seeing an error:

NetworkError: A network connection could not be created.

Later he realized that in Firefox he also was using auto-detect for the proxy settings. When he switched that to Use System Settings all was OK and he could have FF as default browser and get this form to work.

Conclusion
This is speculation on my part. I guess that our new version of Acrobat Reader X, v 10.1.1, is not competent in interpreting the auto-detect proxy setting, and that it is also tripped up by the proxy settings in Firefox.

There’s a lot more I’d like to understand about what was going on here, but sometimes speed counts. The next problem is already calling my name…

Categories
Admin Internet Mail Linux SLES

Building sendmail on SLES

Intro
My sendmail binary built for SLES 10 SP 3 was not working well at all on SLES 11 SP1. It became apparent that libraries were not compatible so it was time to re-compile. I’ve documented that journey here. There were a few pitfalls along the way so I felt it was worth a blog post should anyone else ever need to do this.

February 2013 update
And now I’ve repeated the journey on SLES 11 SP2 – and ran into new problems! I’ll put that story in the appendix below.

Why Build?
Why build sendmail when you can find a package for it? For security it’s a good idea to run the latest version. It’s easier to defend during an audit. So when I look via zypper, I see it proposes sendmail v 8.14.3:

# sudo zypper if sendmail
Refreshing service 'nu_novell_com'.
Loading repository data...
Reading installed packages...
 
Information for package sendmail:
 
Repository: SLES11-SP1-Pool
Name: sendmail
Version: 8.14.3-50.20.1
Arch: x86_64
Vendor: SUSE LINUX Products GmbH, Nuernberg, Germany
...

I go to sendmail.org and find that the latest version is actually 8.14.5. And that’s fairly typical. The distributed release is over a year old.

Where this can really matter is when a vulnerability comes out. If you can roll your own you can be the first on the block with that vulnerability fixed – not putting yourself at the mercy of a vendor busy with hundreds of other distractions. And I have seen this phenomenon in action.

Distributing Your Build
I’m mixing up the order here.

Once you have your sendmail built, what’s the minimum set of things you need to put it on other servers?

For one, you gotta have a database package with sendmail. For historical reasons I use sleepycat (I think formerly known as Berkeley db). Only it was gobbled up by Oracle. I don’t have a feel for what the future holds, though I fear decline. Sleepycat provides db. I grabbed the RPM from rpmfind.net:

db-4.7.25-1rt.x86_64.rpm.

This package has to go on each server where you will run sendmail if you are using db for your database. First issue in trying to install this RPM:

# sudo rpm -i db-4.7.25-1rt.x86_64.rpm
        file /usr/bin/db_archive from install of db-4.7.25-1rt.x86_64 conflicts with file from package db-utils-4.5.20-95.39.x86_64
        file /usr/bin/db_checkpoint from install of db-4.7.25-1rt.x86_64 conflicts with file from package db-utils-4.5.20-95.39.x86_64
        file /usr/bin/db_deadlock from install of db-4.7.25-1rt.x86_64 conflicts with file from package db-utils-4.5.20-95.39.x86_64
        file /usr/bin/db_dump from install of db-4.7.25-1rt.x86_64 conflicts with file from package db-utils-4.5.20-95.39.x86_64
        file /usr/bin/db_hotbackup from install of db-4.7.25-1rt.x86_64 conflicts with file from package db-utils-4.5.20-95.39.x86_64
        file /usr/bin/db_load from install of db-4.7.25-1rt.x86_64 conflicts with file from package db-utils-4.5.20-95.39.x86_64
        file /usr/bin/db_printlog from install of db-4.7.25-1rt.x86_64 conflicts with file from package db-utils-4.5.20-95.39.x86_64
        file /usr/bin/db_recover from install of db-4.7.25-1rt.x86_64 conflicts with file from package db-utils-4.5.20-95.39.x86_64
        file /usr/bin/db_stat from install of db-4.7.25-1rt.x86_64 conflicts with file from package db-utils-4.5.20-95.39.x86_64
        file /usr/bin/db_upgrade from install of db-4.7.25-1rt.x86_64 conflicts with file from package db-utils-4.5.20-95.39.x86_64
        file /usr/bin/db_verify from install of db-4.7.25-1rt.x86_64 conflicts with file from package db-utils-4.5.20-95.39.x86_64

I don’t really know if any of these conflicting files are used by the system. I also don’t know how to install “all but the conflicting files” in RPM. So we’ll try our luck and simply overwrite them:

# sudo rpm -i --force db-4.7.25-1rt.x86_64.rpm

Now no errors are reported.

We gotta make another kludge. sendmail, or at least the version I compiled, needs this db library in /usr/lib64, but the RPM puts it in /usr/lib. So…

# cd /usr/lib64; sudo ln -s ../lib/libdb-4.7.so

By the way, how did I decide that db has to be brought over? I fired up sendmail and got this error:

# sendmail
sendmail: error while loading shared libraries: libdb-4.7.so: cannot open shared object file: Error 40
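
A quicker way to enumerate every missing library at once is ldd. Something like the following, adjusting the path to wherever your sendmail binary landed:

# ldd /opt/mail/bin/sendmail | grep 'not found'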

Now with the sym link made I get a more pleasant application error:

# sendmail
can not chdir(/var/spool/clientmqueue/): Permission denied
Program mode requires special privileges, e.g., root or TrustedUser.

Continuing my out-of-order documentation(!), I create an init script in /etc/init.d and make sure sendmail is going to be started at boot:

# sudo chkconfig -s drjohnssendmail 35

I like to put the logs in /maillog:

# sudo rm /var/log/mail; sudo mkdir !$; sudo ln -s !$ /maillog
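
(If the !$ shorthand is unfamiliar: entered one command at a time, the shell expands !$ to the last argument of the previous command, so the sequence above removes the old /var/log/mail, recreates it as a directory, and points /maillog at it.)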

I like to have the logging customized a bit, so I modify syslog-ng.conf somewhat:

# DrJ attempt to define filter based on match of sm-mta
filter f_sm-mta     { match("sm-mta"); };
filter f_fs-mta     { match("fs-mta"); };

and

...
#destination mail { file("/var/log/mail"); };
#log { source(src); filter(f_mail); destination(mail); };
 
destination mailwarn { file("/var/log/mail/mail.warn" perm(0644)); };
log { source(src); filter(f_mailwarn); destination(mailwarn); };
 
destination mailerr  { file("/var/log/mail/mail.err" perm(0644) fsync(yes)); };
log { source(src); filter(f_mailerr);  destination(mailerr); };
 
#
# and also all in one file:
#
#
# and also all in one file:
#
destination mail { file("/var/log/mail/stat.log" perm(0644)); };
log { source(src); filter(f_sm-mta); destination(mail); };
log { source(src); filter(f_fs-mta); destination(mail); };

Followed by a

# sudo service syslog restart

Going Back to Compiling
So how did we compile sendmail in the first place, which was supposed to be the subject of this blog?

We downloaded the latest version from sendmail.org and unpacked it. Then we read the INSTALL file in the sendmail-8.14.5 directory for general guidance about the steps.

To make our compilation configuration portable, we try to encapsulate our idiosyncrasies in one file, devtools/Site/site.config.m4, which we create. Mine looks as follows:

dnl  DrJ config file for corporate mail delivery
dnl I am leaving out the ldap stuff because we stopped using it
dnl 
dnl which maps we will support - NEWDB is automatic if it finds the db libs
dnl in Linux NDBM is really GDBM, which isn't supported.  NEWDB support is not automatic
define(`confMAPDEF',`-DNEWDB -DMAP_REGEX')
dnl Berkeley DB is here...
dnl this doesn't work, exactly - put the db libs directly into /usr/lib
APPENDDEF(`confLIBDIRS',`-L/usr/local/ssl/lib -L/usr/lib64')
dnl libdb-4 looks like the sleepycat library on Linux
APPENDDEF(`confLIBS',`-ldb-4')
APPENDDEF(`confINCDIRS',`-I/opt/local/include -I/usr/local/ssl/include')
dnl where to put smrsh and mail.local programs
define(`confEBINDIR', `/opt/mail/bin')
dnl where to install include files
define(`confINCLUDEDIR', `/opt/mail/include')
dnl where to install library files
define(`confLIBDIR', `/opt/mail/lib')
dnl man pages
define(`confMANROOT', `/opt/local/man/cat')
dnl unformatted man pages
define(`confMANROOTMAN', `/opt/local/man/man')
dnl the sendmail binary goes into MBINDIR
define(`confMBINDIR', `/opt/mail/bin')
dnl programs only executed by root go to sbin
define(`confSBINDIR', `/opt/mail/sbin')
dnl shared library directory
define(`confSHAREDLIBDIR', `/opt/mail/lib')
dnl user-executable pgms, newaliases, mailq, hoststat, etc
define(`confUBINDIR', `/opt/mail/bin')
dnl TLS support 
APPENDDEF(`conf_sendmail_ENVDEF', `-DSTARTTLS')
APPENDDEF(`conf_sendmail_LIBS', `-lssl -lcrypto')

I’ll explain a bit of this file since of course it is critical to what we are trying to do. I arrived at its current state from a little experimentation, so I don’t know the full explanation of some of the settings. What it’s saying is that we use the NEWDB, which uses that db we spoke of earlier for our maps. I like to install the binaries into /opt/mail/bin. We like to have the option to run TLS.

With that set we run the compile:

# cd sendmail; sh ./Build

and it spits out some errors to the screen which indicate we’re missing some SSL headers. We get them with:

# sudo zypper source-install openssl

Now with any luck it will fully compile.

At this point it helps to create some of the target directories:

# sudo mkdir -p /opt/mail/{bin,sbin} /opt/local/man/cat{1,5,8}

And we create a user account, smmsp, with uid and gid of 225, and a group with the same name.
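
On SLES that can be done along these lines (the home directory and shell here are my choices, not requirements):

# sudo groupadd -g 225 smmsp
# sudo useradd -u 225 -g smmsp -d /var/spool/clientmqueue -s /bin/false smmsp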

And then we can install it:

# sudo sh ./Build install

The install should work, mostly. But makemap doesn’t get put in /opt/mail for some reason. So you have to copy it by hand from sendmail-8.14.5/obj.Linux.2.6.32.12-0.7-default.x86_64/makemap to /opt/mail/sbin, for instance. You really need to have a makemap.
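
In other words, something like this (the obj.* directory name reflects my kernel and architecture; yours will differ):

# sudo cp sendmail-8.14.5/obj.Linux.2.6.32.12-0.7-default.x86_64/makemap/makemap /opt/mail/sbin/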

Finally, I suggest recursively copying the cf directory:

# cp -r sendmail-8.14.5/cf /opt/mail

This way you have a pretty relocatable set of files under /opt/mail.

Appendix: Building on SLES 11 SP2
I thought this would be a breeze. Just look at my own blog posting above! Not so fast – that approach didn’t work at all.

You have to appreciate that under SLES 11 SP1 we needed a few key packages that aren’t very common:

– libopenssl-devel-0.9.8h-30.27.11
– zlib-devel-1.2.3-106.34

We pulled them from the SDK DVD. Well, turns out there is no SDK DVD for SLES 11 SP2! Novell, probably in one of those beloved cost-saving measures, no longer releases an SDK DVD. What to do?

Well, I found what not to do. I began copying key files from my SLES 11 SP1 installation, like libcrypto.a, libssl.a, /usr/include/ssl. This all helped to reduce the number of errors. But at the end of the day there was an error I couldn’t chase away no matter what:

/usr/src/packages/BUILD/openssl-0.9.8h/crypto/comp/c_zlib.c:235: undefined reference to `inflate'

Meanwhile the resourceful sysadmin found those development packages. He told me about SUSE Studio, which allows you to build your own distribution. He looked for those development packages in the distribution, found and installed them:

$ rpm -qa|grep devel

libopenssl-devel-0.9.8j-0.26.1
zlib-devel-1.2.3-106.34
zlib-devel-32bit-1.2.3-106.34

Then my build went through fine. Whew!

Conclusion
I could go on and on about details in the setup, but the scope here is the compilation, and we’ve covered that pretty well.

Once you get over the pain of compilation setup, sendmail runs great and is a great MTA.

Categories
Admin IT Operational Excellence Linux Proxy Web Site Technologies

The IT Detective Agency: intermittent web page not found error

Intro
One of the high arts of IT is system integration, and an important off-shoot of this is acquisitions. We are involved in integrating a new location which, unfortunately, we do not yet have full access to. The local networking is still provided by their vendor, not ours, and this makes troubleshooting all the more difficult.

The Details
So the word begins to spread that users at this site are having intermittent problems accessing some of our secure web sites. As it was described to me, they can try to load the page in their browser, say, five straight times and get a simple "Internet Explorer cannot display the web page" error each time, and then the sixth time (or whenever) it will load properly. All other connectivity was working. No one else at other locations was having this problem with this web site. More than strange, right?

In drjohn’s perfect IT world, problem reproducibility is critical to resolution, but we simply didn’t have it this time. I also could not reproduce the problem myself, which means relying on other people.

I’m not sure if we tried to contact their vendor or not at first. But if we had I’m sure they would have denied having anything to do with it.

So we got one of our confederates, Tim, over to this location and we hooked him up with Wireshark so he could take a packet trace when the failure occurs. It wasn’t long before Tim reproduced the error and emailed us the packet capture.

In the following the PC has IP address 10.200.23.34, the web server is at 10.4.5.6. The Linux command used to look at the capture file is:

# tcpdump -A -r bodega-error.cap port 443 > /tmp/dump

1 15:54:27.495952 IP 10.200.23.34 > 10.4.5.6.https: S 2803722614:2803722614(0) win 64240 <mss 1460,nop,wscale 0,nop,nop,sackOK>
2 15:54:27.496309 IP 10.4.5.6.https > 10.200.23.34: S 3201081612:3201081612(0) ack 2803722615 win 5840 <mss 1432,nop,nop,sackOK>
3 15:54:27.496343 IP 10.200.23.34 > 10.4.5.6.https: . ack 1 win 64240
4 15:54:27.497270 IP 10.200.23.34 > 10.4.5.6.https: P 1:82(81) ack 1 win 64240
5 15:54:27.497552 IP 10.4.5.6.https > 10.200.23.34: . ack 82 win 5840
6 15:54:30.743827 IP 10.4.5.6.https > 10.200.23.34: P 1:286(285) ack 82 win 5840
..S.......^M..i.P.......HTTP/1.0 200 OK^M
Cache-Control: no-store^M
Pragma: no-cache^M
Cache-Control: no-cache^M
X-Bypass-Cache: Application and Content Networking System Software 5.5.17^M
Connection: Close^M
^M
<HTML><HEAD><META HTTP-EQUIV="REFRESH" CONTENT="0;URL=https://10.4.5.6/"></HEAD><BODY>
</BODY></HTML>
 
7 15:54:30.744036 IP 10.200.23.34 > 10.4.5.6.https: F 82:82(0) ack 286 win 63955
8 15:54:30.744052 IP 10.4.5.6.https > 10.200.23.34: F 286:286(0) ack 82 win 5840
9 15:54:30.744077 IP 10.200.23.34 > 10.4.5.6.https: . ack 287 win 63955
10 15:54:30.744289 IP 10.4.5.6.https > 10.200.23.34: . ack 83 win 5840

The output was scrubbed a bit of meaningless junk characters, and I added serial packet numbers at the beginning by hand because I don’t (yet) know how to do that with tcpdump!
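
(I have since learned that newer versions of tcpdump will number the packets for you via the -# (--number) option.)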

What, It’s Encrypted – what can you even learn from a trace?
Yeah, an SSL stream sure adds to the already steep challenges we faced in this problem. There just isn’t much to work with. But it is something. I’m about to say what I noticed in this packet trace, but for it to be meaningful you need to know like I did that the web server is situated almost four thousand miles from the user’s location.

The first packet is a SYN from the PC to the web server on TCP port 443. So far so good. In fact packets one through three constitute the three-way handshake in TCP.

Although SSL is encrypted, the beginning of the protocol communication should show the SSL cipher being chosen. Unfortunately, tcpdump doesn’t seem to have the smarts to show any of this. So I got myself ssldump. On Ubuntu:

# sudo apt-get install ssldump

did the trick. Then run this same capture file through ssldump, which has very similar arguments to tcpdump:

# ssldump -r bodega-error.cap port 443

New TCP connection #1: 10.200.23.34(2027) <-> 10.4.5.6(443)
1 1  0.0013 (0.0013)  C>S SSLv2 compatible client hello
  Version 3.1
  cipher suites
  TLS_RSA_WITH_RC4_128_MD5
  TLS_RSA_WITH_RC4_128_SHA
  TLS_RSA_WITH_3DES_EDE_CBC_SHA
  SSL2_CK_RC4
  SSL2_CK_3DES
  SSL2_CK_RC2
  TLS_RSA_WITH_DES_CBC_SHA
  SSL2_CK_DES
  TLS_RSA_EXPORT1024_WITH_RC4_56_SHA
  TLS_RSA_EXPORT1024_WITH_DES_CBC_SHA
  TLS_RSA_EXPORT_WITH_RC4_40_MD5
  TLS_RSA_EXPORT_WITH_RC2_CBC_40_MD5
  SSL2_CK_RC4_EXPORT40
  SSL2_CK_RC2_EXPORT40
  TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA
  TLS_DHE_DSS_WITH_DES_CBC_SHA
  TLS_DHE_DSS_EXPORT1024_WITH_DES_CBC_SHA
  Unknown value 0xff
Unknown SSL content type 72
1    3.2480 (3.2467)  C>S  TCP FIN
1 2  3.2481 (0.0000)  S>CShort record
1    3.2481 (0.0000)  S>C  TCP FIN

The way to interpret this is that 0.0013 s into the TCP port 443 communication the cipher suites listed above were sent by the PC to the server. This corresponds to our packet number 4 in the trace file.

Using Wireshark to look at the trace is a lot more convenient – it provides packet numbers, timing, decodes packets and displays the SSL ciphers. But I wanted to show that it _could_ be done with text-based tools.

Look at the timings more closely. In the tcpdump output, packet 2, the SYN ACK, comes 1 ms after the SYN. But given the distances involved between PC and server, the SYN ACK should have come more like 100 ms later, at least. Similarly packet 5, which is an ACK, comes less than 1 ms after packet 4. A 1 ms ACK? Physically impossible.

I have seen this behaviour before – on our own load balancer, which I know employs some TCP optimization tricks. So I concluded that they must have some kind of appliance physically present at this site which is doing TCP optimization. It can only provide blank ACKs in its rapid-fire responses since it can’t know what data the server is really going to respond with. That might all be OK. But I’m pretty sure the problem lies between packets 5 and 6. Packet 5 is one of those meaningless rapid-fire empty ACKs generated by the local router. But the PC has just sent a wish list of SSL ciphers in packet 4. That needs to be responded to by the server, which has to finish setting up the SSL session.

But that critical packet from the server never arrives. Perhaps even some of the SSL handshake is secretly completed between the local router and the server. Who knows? I have heard of man-in-the-middle devices that decrypt SSL sessions. And packet 6 contains fairly inappropriate content. It almost does look like it has been manufactured by a man-in-the-middle device. It’s telling the browser to do a redirect to the same site, except specified by IP address rather than FQDN. And that doesn’t make a lot of sense. The browser likely realizes that this amounts to a looping redirect request, so at that point it probably decides to cut its losses and FIN the connection in packet 7.

I traced my own PC hitting this same web server. Now I know we don’t have any of these optimizing devices between me and the web server. I don’t have time to show the results here, but to summarize, it looks completely different from the trace above. The ACK packets come back in about 100 ms or so. There is no delay of three seconds. The cipher proposals are responded to in a timely fashion. There is no redirect.

Their Side of the Story
We did get to hear back from the vendor who supports the LAN/WAN. They said they were running WCCP and diverting traffic to a proxy server. This was the correct behaviour before we hooked our infrastructure to this site, but is no longer. They realized this was probably a bad thing and took corrective action to turn off WCCP for destinations in the internal network 10.0.0.0/8.

Conclusion
Shutting off WCCP, which diverted web site requests to an old proxy server, fixed the problem.

Case closed.

Unsolved Mysteries
I wish we could tie all the loose ends neatly up, but there are too many players involved. We’ll never really know why the problem was intermittent, for instance. Or why some secure web sites could be accessed without any issue whatsoever throughout this ordeal.

WCCP, the Web Cache Communication Protocol, is a Cisco-developed routing protocol to transparently intercept traffic destined for web servers. More information can be found in its Wikipedia article.

It bothers me that after the SSL session was initiated the dump showed the source, unencrypted, of the HTML redirect packet. Why wasn’t that encrypted? Perhaps the WCCP-invoked proxy server was desperately trying to help the PC recover from an unrecoverable situation and manufactured that HTTP-EQUIV REFRESH… to try to force the PC to choose a web site that might work. The fact that it was sent unencrypted over a channel that should have been encrypted was probably the death knell that triggered the browser to conclude: this makes no sense at all and is even a security violation; I’m getting out of here.