Intro
I use the old Sun Java System Web Server, now known as the Oracle Web Server, formerly the Sun ONE Web Server, before that the iPlanet Web Server, and before that the Netscape Enterprise Server. The question came up the other day of whether the web server times out web pages. I never fully trust the documentation, so I developed a simple method to experiment and find the answer for myself.
The Method
Sometimes you test what’s easiest, not what you should. In this case, an easy test is to write a long-running CGI program. This program, timertest.pl, is embarrassingly old, but anyhow…
#!/usr/bin/perl
# DrJ, 3/1999
# The new, PERL5 way:
use CGI;
$query = new CGI;
$| = 1;   # unbuffer output so results stream back as they are printed
print "Content-type: text/html\n\n";
print "<h2>Environment Variables</h2>
<table>
<tr><th>Env Variable</th><th>Value</th></tr>\n";
foreach $key (sort(keys(%ENV))) {
  print "<tr><td>$key</td><td>$ENV{$key}</td></tr>\n";
}
print "</table>\n";
print "<hr>
<h2>Name/Value Pairs</h2>
<table>
<tr><th>Name</th><th>Value</th></tr>\n";
foreach $key ($query->param) {
  print "<tr><td>$key</td><td>" . $query->param($key) . "</td></tr>\n";
}
print "</table>\n";
$host = `hostname`;
print "Hostname: $host<br>\n";
# sleep for the number of seconds passed after the "?" in the URL
sleep($ENV{QUERY_STRING});
print "we have slept for $ENV{QUERY_STRING} seconds.\n";
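Before dissecting it, here's a sketch of how you'd invoke it; your_server is a placeholder for the real hostname, and time merely reports how long the whole fetch took:

> time curl "http://your_server/cgi-bin/timertest.pl?305"

If all goes well, the last line to come back, about five minutes in, is "we have slept for 305 seconds."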
So you see it prints out some stuff, sleeps for a specified time, then prints out a final line. You call it like curl your_server/cgi-bin/timertest.pl?305, where 305 is the time in seconds to sleep, as sketched above. I suggest using curl rather than a regular browser so as not to be thrown off by browser complications, which may include timeouts of their own. curl is simplicity itself and won't bias the answer. Use a larger number for longer times. That was easy, right? Does it work? No. Does it show what we _really_ wanted to show? Also no. In other words, a CGI program that runs for 610 seconds will be killed by the web server, but that's really a function of some CGI timer, not a general page timeout. Five and ten minutes seem to be magic values for some built-in timers, so it is good to test times slightly smaller and larger than those. So how do we test a plain web page? It turns out we can…
The Solution – using the Unix bag of tricks
I only have a couple of minutes here. Briefly:
> mknod tmp.htm p
> chown me tmp.htm
(from another window)
> curl my_server/tmp.htm
(back to first window)
> sleep 610; ls -l > tmp.htm
Then wait! mknod as used above is the old Solaris syntax; under Linux you'd more commonly use mkfifo, as sketched below. Either way, the point is to create a named pipe. Think of a named pipe as just what it sounds like: a name given to the "|" character used so often in Unix command lines. A pipe needs one process to give it input and another to read it, hence the two separate windows.
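For the Linux crowd, a minimal sketch of that first step using mkfifo:

> mkfifo tmp.htm
> ls -l tmp.htm
(the leading "p" in the permissions column confirms tmp.htm is a pipe)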
See if you get the directory listing in your curl window after about 10 minutes. With my Sun Java System Web Server I do, so now I know that both curl and the web server tolerate page loads of at least ten minutes, and probably have no limit at all.
An Unexpected Finding
Another tip, and an unexpected lesson: don't use one of your named pipes more than once. If you mess up, create a new one and work with that. What happens when I re-use one of my pipes is that curl is able to read the web page over and over, without any process sending input to the named pipe! That wasn't supposed to happen. What does it all mean? It can only be, and I've often suspected this, that my web server is caching the content. It's not a particularly well-documented feature, either, and most of the time I'd rather it didn't.
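To reproduce the symptom, a rough sketch, with my_server a placeholder as before:

> curl my_server/tmp.htm
(first read: blocks until a writer feeds the pipe, then prints the content)
> curl my_server/tmp.htm
(re-used pipe, no writer attached, yet the same content comes straight back: the cache at work)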
Conclusion
The Sun Java System Web Server times out CGI scripts, but not regular static web pages. We proved this in a few minutes by devising an unambiguous experiment. As an added bonus, we also showed that the web server caches at least some pages. The careful observer is always open to learning more than what he or she started out intending to look for!