Categories
Admin Ajax Image Manipulation Web Site Technologies

How to create a Progressive Scrolling Web Gallery

Intro
I know, I know, there are thousands of ways to display your pictures on the web. I did a 60 second search and settled on one approach that looked interesting to me. Then I quickly ran into some limits and made some improvements. That’s why there are now thousands plus one more as of today! The app I improved upon is good for previewing pictures in a directory where there are lots of nice pictures. It makes the downloading more pleasant and shows large-ish thumbnail images that can be enjoyed in their own right while you wait for more images to download.

Thank you, Alexandru Pitea
So I just downloaded the files from his fine tutorial, How to Create an Infinite Scrolling Web Gallery. I unpacked the downloaded zip file: it worked the first time. That’s a good sign, right? That doesn’t always happen. Then I started to make changes and ruined it.

As previously documented I use Goodsync to sync all my home pictures to my server. So all pictures are present in various folders. But they’re big. I needed thumbnails for this gallery app. I wrote a very crude thumbnail generator. I basically have to edit it each time I work on a different directory. One day I’ll fix it up. I call it createthumbs.php:

<?php
function createThumbs( $pathToImages, $pathToThumbs, $thumbWidth )
{
  // open the directory
  $dir = opendir( $pathToImages );
 
  // loop through it, looking for any/all JPG files:
  while (false !== ($fname = readdir( $dir ))) {
    // parse path for the extension
    $info = pathinfo($pathToImages . $fname);
    // continue only if this is a JPEG image
    if ( isset($info['extension']) && strtolower($info['extension']) == 'jpg' )
    {
      echo "Creating thumbnail for {$fname} <br />";
 
      // load image and get image size
      $img = imagecreatefromjpeg( "{$pathToImages}{$fname}" );
      $width = imagesx( $img );
      $height = imagesy( $img );
 
      // calculate thumbnail size
      $new_width = $thumbWidth;
      $new_height = floor( $height * ( $thumbWidth / $width ) );
 
      // create a new temporary image
      $tmp_img = imagecreatetruecolor( $new_width, $new_height );
 
      // copy and resize old image into new image
      imagecopyresized( $tmp_img, $img, 0, 0, 0, 0, $new_width, $new_height, $width, $height );
 
      // save thumbnail into a file, then free the image resources
      imagejpeg( $tmp_img, "{$pathToThumbs}{$fname}" );
      imagedestroy( $tmp_img );
      imagedestroy( $img );
    }
  }
  // close the directory
  closedir( $dir );
}
// call createThumb function and pass to it as parameters the path
// to the directory that contains images, the path to the directory
// in which thumbnails will be placed and the thumbnail's width.
// We are assuming that the path will be a relative path working
// both in the filesystem, and through the web for links
createThumbs("img/2012_05/","thumb/",200);
?>

Notice these are pretty big thumbnails – 200 pixels. That’s how the gallery program works best, and I think it is a good size for how you will want to browse your pictures.
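The resize arithmetic in createThumbs simply preserves the aspect ratio: the height is scaled by the same factor as the width. A sketch of the same calculation, in Python purely for illustration:

```python
import math

def thumb_size(width, height, thumb_width=200):
    # Scale the height by the same factor applied to the width,
    # mirroring the floor( height * ( thumbWidth / width ) ) line above.
    return thumb_width, math.floor(height * thumb_width / width)

# A 4000x3000 landscape shot becomes a 200x150 thumbnail;
# a 3000x4000 portrait shot becomes 200x266.
print(thumb_size(4000, 3000))
print(thumb_size(3000, 4000))
```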

Then I moved the original img directory to img.orig and made a symbolic link to one of my picture folders (which I had run through the thumbnail generator).

img -> /homepic/pictures_chronological/2012_05/

It worked. But there were a couple of annoying things. First, the picture order seemed nearly random. Apparently the order reflected the timestamp of the files rather than a sort by name. I found it was simple to sort them by name, which produced a nice sensible order, by adding:

...
// sensible sort
$sortbool = sort($files,SORT_STRING);
...

to getImages.php.

The other annoying thing was the infinite scroll. I’m not sure what the attraction was to that; many comments on his post asked how to turn it off. Turns out that was easy:

// prevent annoying infinite scroll
//$response = $response.$files[$i%count($files)].';';
$response = $response.$files[$i].';';

in the same file.

One astute user noticed the lack of input validation on the GET argument, which should always be a non-negative integer. So I incorporated his suggestion for argument validation as well.
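The idea of the check, sketched here in Python with a hypothetical helper name (the PHP version instead redirects to index.html on failure, and its is_numeric test is a bit looser): accept only a non-negative integer offset and reject everything else.

```python
def filter_n(raw):
    # Accept only non-negative integer strings; anything else
    # (negative numbers, floats, injection attempts) is rejected.
    return int(raw) if raw.isdigit() else None

print(filter_n("12"))          # a valid offset
print(filter_n("12; rm -rf"))  # rejected
```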

The full getImages.php file is here:

<?php
// input argument validation - only numbers permitted
function filter($data) {
        if (is_numeric($data)) {
                return $data;
        } else {
                header("Location: index.html");
                exit;
        }
}

$dir = "thumb";
if (is_dir($dir)) {
        if ($dd = opendir($dir)) {
                while (($f = readdir($dd)) !== false)
                        if ($f != "." && $f != "..")
                                $files[] = $f;
                closedir($dd);
        }
        // sensible sort
        $sortbool = sort($files, SORT_STRING);

        $n = filter($_GET["n"]);
        $response = "";
        for ($i = $n; $i < $n + 12; $i++) {
                // prevent annoying infinite scroll
                //$response = $response.$files[$i%count($files)].';';
                if (isset($files[$i])) $response = $response.$files[$i].';';
        }
        echo $response;
}
?>
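To summarize getImages.php’s contract in a compact sketch (Python, for illustration only): sort the thumbnail names, return the next batch of 12 starting at offset n, semicolon-delimited, and return an empty string once the folder is exhausted, which is what stops the scrolling.

```python
def next_batch(files, n, batch=12):
    # Sorted by name (the "sensible sort"), sliced to the requested
    # window; an empty result signals the client to stop fetching.
    return ";".join(sorted(files)[n:n + batch])

names = ["img_%02d.jpg" % i for i in range(30)]
print(next_batch(names, 24))  # last partial batch of 6
print(next_batch(names, 36))  # empty string: nothing left
```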

I’ve only tested this on a couple of folders, but in those tests it showed all the pictures and then stopped scrolling, as you would naturally want. So what I have produced is a progressive scroll, not an infinite scroll; the useful progressive-loading behavior of the original code was preserved.

I think he even used bigger thumbnails than 200 pixels. For these smaller ones it makes more sense to grab pictures 12 at a time. So I made a few changes in index.html to take care of that.

Alexandru also had his first nine images hard-coded into his index.html. Again, I don’t see the point in that – it makes it a lot harder to generalize. So I chucked that and appropriately modified some offsets, etc., without any terrible side-effects.

Putting it all together that code now looks like this:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<meta name="viewport" content="width=device-width, user-scalable=no, initial-scale=1.0, minimum-scale=1.0" />
<title>Web Gallery | Progressive Scroll</title>
<link rel="stylesheet" href="style.css" />
</head>
 
<body onload="setInterval('scroll();', 250);">
<div id="header">Web Gallery | Progressive Scroll</div>
<div id="container">
</div>
</body>
</html>
<script>
//var contentHeight = 800;
var contentHeight = document.getElementById('container').offsetHeight;
var pageHeight = document.documentElement.clientHeight;
var scrollPosition;
var n = 0;
var xmlhttp;
 
function putImages(){

        if (xmlhttp.readyState==4)
          {
                  if(xmlhttp.responseText){
                         var resp = xmlhttp.responseText.replace("\r\n", "");
                         var files = resp.split(";");
                         var j = 0;
                         for(i=0; i<files.length; i++){
                                 if(files[i] != ""){
                                        document.getElementById("container").innerHTML += '<a href="img/'+files[i]+'"><img src="thumb/'+files[i]+'" /></a>';
                                        j++;

                                        if(j == 3 || j == 6 || j == 9)
                                                 document.getElementById("container").innerHTML += '<br />';
                                         else if(j == 12){
                                                 document.getElementById("container").innerHTML += '<p>'+(n-1)+" Images Displayed | <a href='#header'>top</a></p><br /><hr />";
                                                 j = 0;
                                         }
                                 }
                         }
                         if (i < 12) document.getElementById("container").innerHTML += '<p>'+(n-13+i)+" Images Displayed | <a href='#header'>top</a></p><br />";
                  }
          }
}
 
 
function scroll(){
 
        if(navigator.appName == "Microsoft Internet Explorer")
                scrollPosition = document.documentElement.scrollTop;
        else
                scrollPosition = window.pageYOffset;
 
        if((contentHeight - pageHeight - scrollPosition) < 200){
 
                if(window.XMLHttpRequest)
                        xmlhttp = new XMLHttpRequest();
                else
                        if(window.ActiveXObject)
                                xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
                        else
                                alert ("Bummer! Your browser does not support XMLHTTP!");
 
                var url="getImages.php?n="+n;
 
                xmlhttp.open("GET",url,true);
                xmlhttp.send();
 
// 12 pictures at a time...
                n += 12;
                xmlhttp.onreadystatechange=putImages;
                contentHeight = document.getElementById('container').offsetHeight;
                //contentHeight += 800;
        }
}
 
</script>

Notice I also played around with the scroll() function because it gave me difficulty. I set the condition contentHeight - pageHeight - scrollPosition to be less than 200 (as in the code above), a requirement that is easier to meet, since in my tests I was often getting no scrolling whatsoever.
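The trigger arithmetic itself is simple. Here is a sketch of it (Python, with illustrative names; the threshold value is whatever you set in scroll()):

```python
def should_load_more(content_height, page_height, scroll_position, threshold=200):
    # remaining = pixels of gallery still hidden below the viewport;
    # when it shrinks under the threshold, fetch the next batch.
    remaining = content_height - page_height - scroll_position
    return remaining < threshold

print(should_load_more(2000, 800, 1100))  # 100 px left below the fold
print(should_load_more(2000, 800, 500))   # 700 px still unseen
```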

That’s it!

So to use my improvements you could download the source files from Alexandru’s site, then overwrite getImages.php and index.html with a cut-and-paste from this page.

To do list…
Naturally the first person to try it did so from an Android smartphone using the Opera browser, and it only showed him the first 12 pictures and didn’t do any scrolling. I developed for IE/FF on PC. I’ve just now tried Opera on PC and that worked fine. I’ll have to understand what is happening on smartphones. So… I learned there is webkit for smartphone compatibility, and I added a meta tag concerning the viewport (which I’ve already included in the HTML source file above). Now the pictures are a little large on my Android browser, and the progressive scrolling takes a nudge to get going, but it basically does work, which is an improvement. But still not on Opera Mini! And not that well on Blackberry…

I’d also like to add a folder-browser plug-in.

Conclusion
Pages load fast initially in a progressive scroll approach. So this could be a useful program as a way to display your pictures on your own web site. We fixed up some of the undesirable behaviour of Alexandru’s original version.

Categories
Admin

MySQL Exploit: v. 5.1.61 on CentOS 6 does not appear vulnerable

Intro
As this story makes crystal clear, the test for the mysql password bug is ridiculously simple to run for yourself:

$ for i in `seq 1 1000`; do mysql -u root --password=bad -h 127.0.0.1 2>/dev/null; done
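To see why a plain loop of bad passwords could work at all: the underlying bug (CVE-2012-2122) was that memcmp’s return value got cast to a signed char in the password check, so roughly 1 wrong password in 256 was mistakenly judged correct. A toy simulation of that truncation (Python; this is not MySQL’s actual code):

```python
import random

def vulnerable_auth(correct, supplied):
    # Stand-in for the buggy check: memcmp may legally return any int,
    # and truncating it to one byte turns every multiple of 256 into
    # a "match" (0), even for a wrong password.
    raw = 0 if supplied == correct else random.randint(-(2**31), 2**31 - 1)
    return (raw & 0xFF) == 0

hits = sum(vulnerable_auth("secret", "bad") for _ in range(100_000))
print(hits)  # on the order of 100000/256, i.e. a few hundred successes
```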

More on that
I am at version 5.1.61:

$ mysql --version

I fully expected to get a mysql> prompt from the above exploit code but I did not.

The Amazon cloud has some decent protections in place.

For instance I tried

$ mysql -u root --password=mysecretpassword -h 127.0.0.1 2>/dev/null

and of course I got in. But modified slightly, to

$ mysql -u root --password=mysecretpassword -h drjohnstechtalk.com 2>/dev/null

and it’s a no go. It just hangs. I can’t believe I never did this earlier, but I wanted to see the routing for my own elastic IP:

$ traceroute drjohnstechtalk.com

traceroute to drjohnstechtalk.com (50.17.188.196), 30 hops max, 60 byte packets
 1  ip-10-10-216-2.ec2.internal (10.10.216.2)  0.343 ms  0.506 ms  0.504 ms
 2  ip-10-1-54-93.ec2.internal (10.1.54.93)  0.571 ms ip-10-1-42-93.ec2.internal (10.1.42.93)  0.565 ms ip-10-1-52-93.ec2.internal (10.1.52.93)  0.366 ms
 3  ip-10-1-39-14.ec2.internal (10.1.39.14)  0.457 ms ip-10-1-41-14.ec2.internal (10.1.41.14)  0.515 ms ip-10-1-37-14.ec2.internal (10.1.37.14)  0.605 ms
 4  216.182.224.84 (216.182.224.84)  0.662 ms 216.182.224.86 (216.182.224.86)  0.606 ms  0.608 ms
 5  216.182.232.53 (216.182.232.53)  0.837 ms 216.182.224.89 (216.182.224.89)  0.924 ms 216.182.232.53 (216.182.232.53)  1.030 ms
 6  ip-10-1-41-13.ec2.internal (10.1.41.13)  0.869 ms ip-10-1-39-13.ec2.internal (10.1.39.13)  1.082 ms ip-10-1-43-13.ec2.internal (10.1.43.13)  1.154 ms
 7  ip-10-1-36-94.ec2.internal (10.1.36.94)  1.481 ms ip-10-1-54-94.ec2.internal (10.1.54.94)  1.351 ms ip-10-1-42-94.ec2.internal (10.1.42.94)  1.173 ms
 8  * * *
 9  * * *
10  * * *
...

So there are quite a few hops before I hit my own IP! That’s plenty of hops in which to insert a firewall, which I suppose they do, to enforce my personal security policy.

My eth0 IP is 10.10.219.96. Using that:

$ mysql -u root --password=mysecretpassword -h 10.10.219.96

I get:

ERROR 1130 (HY000): Host 'ip-10-10-219-96.ec2.internal' is not allowed to connect to this MySQL server

even though my my.cnf file does not have this apparent restriction and the mysql daemon is listening on all interfaces:

$ netstat -an|grep LISTEN

...
tcp        0      0 0.0.0.0:3306                0.0.0.0:*                   LISTEN

Conclusion
I don’t recall taking special steps to secure my mysql installation, though it’s not out of the question. So I conclude that in spite of the articles that cite my version as being vulnerable, it is not, at least under CentOS 6. And even if it were, it would be especially hard to exploit on an Amazon cloud server.

Categories
Linux

54 Popular sendmail Features

Intro
Thinking of replacing sendmail? Or switching to sendmail? Here are 54 features I find useful in the way I implement sendmail.

In a poorly ordered listing, we have:

– minimal acceptable delivery speed: 100 messages/sec
– queue deletion after 3 days (customizable)
– customizable timers on:
  – time to wait for initial connection
  – time to wait for response to MAIL command
  – time to wait for response to QUIT
  – time to wait for response to RCPT TO command
  – time to wait for response to RSET command
  – time to wait for response to other SMTP commands
– ability to turn off identd usage
– customizable greeting
– ability to deliver local mail for error situations such as looping mail, invalid sender + invalid recipient
– ability to detect looping messages and log and remove them
– errors in MIME format
– configurable maximum message size
– configurable maximum number of recipients per message
– configurable minimum queue age before delivery is re-tried
– configurable address operator characters
– ability to set multiple names for this host
– support for alias address transformations
– support for domain aliasing
– configurable load average at which new messages are refused
– configurable load average at which new messages are queued for later delivery
– configurable load average at which SMTP responses are delayed

– ability to run TLS as server and client, and use a CA-issued certificate
– use of fast table lookups to efficiently handle tables with thousands of entries
– configurable mail relaying decisions based on recipient domain
– ability to turn off UUCP routing
– ability to avoid canonification of recipient domain
– ability to re-write sender address
– ability to make mail relaying decisions based on sender address as well as sender domain
– ability to allow only selected domains/IPs/subnets to relay mail
– ability to reject messages to specified recipients/domains with custom message
– ability to silently discard messages to specific recipients/domains
– ability to discard or reject messages from specific senders or sender domains
– ability to set custom error number for rejected email
– support for mass-import and mass alteration of table entries (e.g., to mail routing/access/alias lists)
– ability to restrict mail relaying to all but a positive match list of IP addresses, subnets and FQDNs
– ability to accept unresolvable domains
– ability to run multiple instances, each with independent configuration, with separate IPs, on same appliance
– ability to make mail routing delivery decisions based on recipient domain configurable by MX lookup, set IP address, FQDN with and without MX lookups
– ability to route all else via DNS lookup
– ability to include comments within the configuration
– ability to turn off ESMTP delivery agent to selected domains and act as simple SMTP delivery agent
– ability to send hourly reports
– log available in real time
– log containing at least these fields: sender, recipients, date/time, delay, size, messageId, TLS used flag, sending MTA, relay MTA, reject reason (if applicable)
– ability to analyze logs with RegEx
– ability to archive logs for up to three months
– ability to send test message through itself with customizable subject on periodic basis
– ability to report on queue contents by top sender/recipient/recipient domain
– ability to force delivery retry on selected domain
– ability to set greeting delay for selected IPs and subnets
– ability to run a browser from same IP as used by the MTA

Most, but not all, of these features are configured in the .mc file. A few are actually references to external programs I developed. A few rely on the Linux environment that sendmail runs under.
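To give a flavor of what that looks like, here are a few of the items above expressed as standard sendmail .mc macros. This is a sketch, not my production file, and the values are purely illustrative:

```m4
dnl timers and queue lifetime
define(`confTO_CONNECT', `1m')dnl       time to wait for initial connection
define(`confTO_MAIL', `5m')dnl          response to MAIL command
define(`confTO_QUIT', `2m')dnl          response to QUIT
define(`confTO_QUEUERETURN', `3d')dnl   queue deletion after 3 days
define(`confTO_IDENT', `0')dnl          turn off identd usage
dnl size and load-average knobs
define(`confMAX_MESSAGE_SIZE', `20000000')dnl
define(`confMAX_RCPTS_PER_MESSAGE', `100')dnl
define(`confQUEUE_LA', `8')dnl          queue new messages at this load average
define(`confREFUSE_LA', `12')dnl        refuse connections at this load average
dnl relaying and rejection decisions
FEATURE(`access_db')dnl
FEATURE(`accept_unresolvable_domains')dnl
```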

Conclusion
When you sit down and document it, there’s a lot going on in sendmail.

Categories
Admin Linux Network Technologies Raspberry Pi

Compiling hping on CentOS

Intro
hping was recommended to me as a tool to stage a mock DOS test against one of our servers. I found that I did not have it installed on my CentOS 6 instance and could not find it with a yum search. I’m sure there is an rpm for it somewhere, but I figured it would be just as easy to compile it myself as to find the rpm. I was wrong. It probably was a _little_ harder to compile it, but I learned some things in doing so. So I’ll share my experience. It wasn’t too bad. I have nothing original to add here to what you find elsewhere, except that I didn’t find anywhere else with all these problems documented in one place. So I’ve produced this blog post as a convenient reference.

I’ve also faced this same situation on SLES – can’t find a package for hping anywhere – and found the same recipe below works to compile hping3.

The Details
I downloaded the source, hping3-20051105.tar.gz, from hping.org. Try a ./configure and…

error can not find the byte order for this architecture, fix bytesex.h

After a few quick searches I began to wonder what the byte order is in the Amazon cloud. Inspired, I wrote this C program to find out and remove all doubt:

/* returns true if system is big_endian. See http://unixpapa.com/incnote/byteorder.html - DrJ */
#include <stdio.h>

int am_big_endian()
{
    long one = 1;
    /* on a little-endian machine the low-order byte of 1 is first,
       so the dereferenced char is 1 and the negation yields 0 */
    return !(*((char *)(&one)));
}

int main()
{
    printf("Hello World\n");
    int ans = am_big_endian();
    printf("am_big_endian value: %d\n", ans);
    return 0;
}

This program makes me realize a) how much I dislike C, and b) how I will never be a C expert no matter how much I dabble.
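Incidentally, if you have Python handy there is a way to answer the byte-order question without compiling anything. This is just a convenience cross-check, not part of the original recipe:

```python
import sys

# Python exposes the host CPU's byte order directly.
print(sys.byteorder)  # "little" on the Intel/AMD chips in EC2
```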

The program prints 0 for am_big_endian, so the Amazon cloud has little-endian byte order, as we could have guessed. All Intel i386-family chips are little-endian, it seems. Back to bytesex.h. I edited it so that it has:

#define BYTE_ORDER_LITTLE_ENDIAN
/* # error can not find the byte order for this architecture, fix bytesex.h */

Now I can run make. Next error:

pcap.h No such file or directory.

I installed libpcap-devel with yum to provide that header file:

$ yum install libpcap-devel

Next error:

net/bpf.h no such file or directory

For this I did:

$ ln -s /usr/include/pcap-bpf.h /usr/include/net/bpf.h

TCL
Next error:

/usr/bin/ld: cannot find -ltcl

I decided that I wouldn’t need TCL anyways to run in simple command-line fashion, so I excised it:

./configure --no-tcl

Then, finally, it compiled OK with some warnings.

hping3 for Raspberry Pi
On the Raspberry Pi it was simple to install hping3:

$ sudo apt-get install hping3

That’s it!

Raspberry Pis are pretty slow to generate serious traffic, but if you have a bunch I suppose they could amount to something in total.

Conclusion
Now I’m ready to go to use hping3 for a SYN_FLOOD simulated attack or whatever else we want to test.

Categories
Admin Internet Mail Linux Perl

The IT Detective Agency: last letter of attachment name is missing!

Intro
Today we bring you an IT whodunit thriller. A user using Lotus Notes informs his local IT that a process that emails SQL reports to him and a few others has suddenly stopped working correctly. The reports either contain an HTML attachment where the attachment type has been chopped to “ht” instead of “htm,” or an MHTML attachment type which has also been chopped, down to “mh” instead of “mht.” They get emailed from the reporting server to a sendmail mail relay. Now the convenient ability to double-click on the attachment and launch it stopped working as a result of these chopped filenames. What’s going on? Fix it!

Let’s Reproduce the Problem
Fortunately this one was easier than most to reproduce. But first a digression. Let’s have some fun and challenge ourselves with it before we deep dive. What do you think the culprit is? What’s your hypothesis? Drawing on my many years of experience running enterprise-class sendmail servers, and never before having seen this problem despite the hundreds of millions of delivered emails, my best instincts told me to look elsewhere.

The origin server, let’s call it aspen, sends few messages, so I had the luxury to turn on tracing on my sendmail server with a filter limiting the traffic to its IP:

$ tcpdump -i eth0 -s 1540 -w /tmp/aspen.cap host aspen

Using wireshark to analyze aspen.cap and following the TCP stream, I see this:

...
Content-Type: multipart/mixed;
		 boundary="CSmtpMsgPart123X456_000_C800C42D"
 
This is a multipart message in MIME format
 
--CSmtpMsgPart123X456_000_C800C42D
Content-Type: text/plain;
		 charset="iso-8859-1"
Content-Transfer-Encoding: 7bit
 
SQLplus automated report
--CSmtpMsgPart123X456_000_C800C42D
Content-Type: application/octet-stream;
		 name="tower status_2012_06_04--09.25.00.htm"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
		 filename="tower status_2012_06_04--09.25.00.htm
 
<html><head></head><body><h1>Content goes here...</h1></body>
</html>
 
--CSmtpMsgPart123X456_000_C800C42D--

Result of trace of original email as received by sendmail

But the source as viewed from within Lotus Notes is:

...
Content-Type: multipart/mixed;
		 boundary="CSmtpMsgPart123X456_000_C800C42D"
 
 
--CSmtpMsgPart123X456_000_C800C42D
Content-Transfer-Encoding: 7bit
Content-Type: text/plain;
		 charset="iso-8859-1"
 
SQLplus automated report
--CSmtpMsgPart123X456_000_C800C42D
Content-Type: application/octet-stream;
		 name="tower status_2012_06_04--09.25.00.htm"
Content-Disposition: attachment;
		 filename="tower status_2012_06_04--09.25.00.ht"
Content-Transfer-Encoding: base64
 
PGh0bWw+PGhlYWQ+PC9oZWFkPjxib2R5PjxoMT5Db250ZW50IGdvZXMgaGVyZS4uLjwvaDE+PC9i
b2R5Pg0KPC9odG1sPg==
 
--CSmtpMsgPart123X456_000_C800C42D--

Same email after being transferred to Lotus Notes

I was in shock.

I fully expected the message source to go through unaltered all the way into Lotus Notes, but it didn’t. The trace taken before sendmail’s actions was not an exact match to the source of the message I received. So either sendmail or Lotus Notes (or both) were altering the source in significant ways.

At the same time, we got a big clue as to what is behind the missing letter in the file extension. To highlight it, compare this line from the trace:

filename="tower status_2012_06_04--09.25.00.htm

to that same line as it appears in the Lotus Notes source:

filename="tower status_2012_06_04--09.25.00.ht"

So there is no final close quote (“) in the filename attribute as it comes from the aspen server! That can’t be good.

But it used to work. What do we make of that fact??

I had to dig further. I was suddenly reminded of the final episode of House, where it is apparent that solving the puzzle of symptoms is the highest aspiration for Doctor House. Maybe I am similarly motivated? Because I was definitely willing to throw the full weight of my resources behind this mystery. At least for the half-day I had to spare on this.

First step was to reproduce the problem myself. For sending an email you would normally use sendmail or mailx or such, but I didn’t trust any of those programs – afraid they would mess with my headers in secret, undocumented ways.

So I wrote my own mail sending program using Perl/Expect. Now I’m not advocating this as a best practice. It’s just that for me, given my skillset and perceived difficulty in finding a proper program to do what I wanted (which I’m sure is out there), this was the path of least resistance, the best and most efficient use of my time. You see, I already had the core of the program written for another purpose, so I knew it wouldn’t be too difficult to finish for this purpose. And I admit I’m not the best at Expect and I’m not the best at Perl. I just know enough to get things done and pretty quickly at that.

OK. Enough apologies. Here’s that code:

#!/usr/bin/perl
# drjohnstechtalk.com - 6/2012
# Send mail by explicit use of the protocol
$DEBUG = 1;
use Expect;
use Getopt::Std;
getopts('m:r:s:');
$recip = $opt_r;
$sender = $opt_s;
$hostname = $ENV{HOSTNAME};
chomp($hostname);
print "hostname,mailhost,sender,recip: $hostname,$opt_m,$sender,$recip\n" if $DEBUG;
$telnet = "telnet";
@hosts = ($opt_m);
$logf = "/var/tmp/smtpresults.log";
 
$timeout = 15;
 
$data = qq(Subject: test of strange MIME error
X-myHeader: my-value
From: $sender
To: $recip
Subject: SQLplus Report - tower status
Date: Mon, 4 Jun 2012 9:25:10 --0400
Importance: Normal
X-Mailer: ATL CSmtp Class
X-MSMail-Priority: Normal
X-Priority: 3 (Normal)
MIME-Version: 1.0
Content-Type: multipart/mixed;
        boundary="CSmtpMsgPart123X456_000_C800C42D"
 
This is a multipart message in MIME format
 
--CSmtpMsgPart123X456_000_C800C42D
Content-Type: text/plain;
        charset="iso-8859-1"
Content-Transfer-Encoding: 7bit
 
SQLplus automated report
--CSmtpMsgPart123X456_000_C800C42D
Content-Type: application/octet-stream;
        name="tower status_2012_06_04--09.25.00.htm"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
        filename="tower status_2012_06_04--09.25.00.htm
 
<html><head></head><body><h1>Content goes here...</h1></body>
</html>
--CSmtpMsgPart123X456_000_C800C42D--
 
.
);
sub myInit {
# This structure is ugly (p.148 in the book) but it's clear how to fill it
@steps = (
        { Expect => "220 ",
          Command => "helo $hostname"},
# Envelope sender
        { Expect => "250 ",
          Command => "mail from: $sender"},
# Envelope recipient
        { Expect => "250 ",
          Command => "rcpt to: $recip"},
# data command
        { Expect => "250 ",
          Command => "data"},
# start mail message
        { Expect => "354 Enter ",
          Command => $data},
# end session nicely
        { Expect => "250 Message accepted ",
          Command => "quit"},
);
}       # end sub myInit
#
# Main program
open(LOGF,">$logf") || die "Cannot open log file!!\n";
foreach $host (@hosts) {
  login($host);
}
 
# create an Expect object by spawning another process
sub login {
($host) = @_;
myInit();
#@params = ($host," 25");
$init_command = "$telnet $host 25";
#$Expect::Debug = 3;
my $exp = Expect->spawn("$init_command")
         or die "Cannot spawn $init_command: $!\n";
#
# Now run all the other commands
foreach $step (@steps) {
  $i++;
  $expstr = $step->{Expect};
  $cmd = $step->{Command};
#  print "expstr,cmd: $expstr, $cmd\n";
# Logging
#$exp->debug(2);
#$exp->exp_internal(1);
  $exp->log_stdout(0);  # disable stdout for each command
  $exp->log_file($logf);
  @match_patterns = ($expstr);
  ($matched_pattern_position, $error, $successfully_matching_string, $before_match, $after_match) = $exp->expect($timeout,
@match_patterns);
  unless ($matched_pattern_position == 1) {
    $err = 1;
    last;
  }
  #die "No match: error was: $error\n" unless $matched_pattern_position == 1;
  # We got our match. Proceed.
  $exp->send("$cmd\n");
}       # end loop over all the steps
 
#
# hard close
$exp->hard_close();
close(LOGF);
#unlink($logf);
}       # end sub login

Code for sendmsg2.pl

Invoke it:

$ ./sendmsg2.pl -m sendmail_host -s [email protected] -r [email protected]

The nice thing with this program is that I can inject a message into sendmail, but also I can inject it directly into the Lotus Notes smtp gateway, bypassing sendmail, and thereby triangulate the problem. The sendmail and Lotus Notes servers have slightly different responses to the various protocol stages, hence I clipped the Expect strings down to the minimal common set of characters after some experimentation.

This program makes it easy to test several scenarios of interest: leave off the final quote and inject into either sendmail or Lotus Notes (LN), then tack on the final quote to see if that really fixes things. The results?

                     Missing final quote                            With final quote added
inject to sendmail   ht" in final email to LN; extension chopped    htm" and all is good
inject to LN         htm in final email; but extension not chopped  htm" and all is good

I now had incontrovertible proof that sendmail, my sendmail, was altering the original message. Faced with the unbalanced quote mark, it recovers as best it can by replacing the terminating character "m" with the missing double quote. I was beginning to suspect as much. After the shock drained away, I tried to check the RFCs. I figured it must be some well-meaning attempt on its part to make things right. Well, the RFCs, 822 and 1806, are a little hard to read and apply to this situation.
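My guess at the repair logic, reduced to a sketch (Python, purely illustrative; I have not read sendmail's source for this):

```python
def repair_quoted(value):
    # If a quoted-string opens but never closes, overwrite the final
    # character with the missing close quote - which is exactly how
    # a filename ending in .htm with no close quote ends up as .ht".
    if value.startswith('"') and not value.endswith('"'):
        return value[:-1] + '"'
    return value

print(repair_quoted('"tower status.htm'))  # the trailing m is sacrificed
```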

Let’s be clear. There’s no question that the sender is wrong and ought to be closing out that quote. But I don’t think there’s a single, unambiguous statement in the RFCs that makes that abundantly apparent. Nevertheless, of course that’s what I told them to do.

The other thing from reading the RFC is that the whole filename attribute looks optional. To satisfy my curiosity – and possibly provide more options for remediation to aspen – I sent a test where I entirely left out the offending filename="tower… line. In that case the line above it should have its terminating semicolon shorn:

Content-Disposition: attachment

After all, there already is a name=”tower…” as a Content-type parameter, and the string following that was never in question: it has its terminating semicolon.

Yup, that worked just great too!

Then I thought of another approach. Shouldn’t the overriding definition of what the filetype is be contained in the Content-type header? What if it were more correctly defined as

Content-type: text/html

?

Content-type appears in two places in this email. I changed them both for good measure, but left the unbalanced quotation problem. Nope. Lotus Notes did not know what to do with the attachment, which it displays as tower status_2012_06_04--09.25.00.ht. So we can’t recommend that course of action.

What Sendmail’s Point-of-View might be
Looking at the book, I see sendmail does care about MIME headers, in particular the Content-Disposition header. It considers that header unreliable and hence merely advisory in nature. Also, some years ago there was a sendmail vulnerability wherein malformed multipart MIME messages could cause sendmail to crash (see http://www.kb.cert.org/vuls/id/146718). So maybe sendmail is just a little sensitive to this situation and feels perfectly comfortable and justified in right-forming a malformed header. Just a guess on my part.

Case closed.

Conclusion
We battled a strange email attachment naming error which seemed to be an RFC violation of the MIME protocols. By carefully constructing a testing program we were easily able to reproduce the problem and isolate the fault and recommend corrective actions in the sending program. Now we have a convenient way to inject SMTP email whenever and wherever we want. We feel sendmail’s reputation remains unscathed, though its corrective actions could be characterized as overly solicitous.

Categories
Admin CentOS Linux Raspberry Pi

A few RPM and YUM commands and their equivalents on Raspberry Pi

Intro
This post adds nothing to the knowledge out there and readily available on the Internet. I just got tired of looking up elsewhere the few useful rpm and yum commands that I employ. Here’s how I installed a missing binary on one system when I have a similar system that has it.

RPM is the Redhat Package Manager. It is also used on Suse Linux (SLES). A much better resource than this page (Hey, we can’t all be experts!) is http://www.idevelopment.info/data/Unix/Linux/LINUX_RPMCommands.shtml

List all installed packages:

$ rpm -qa
dmidecode-2.11-2.el6.x86_64
libXcursor-1.1.10-2.el6.x86_64
basesystem-10.0-4.el6.noarch
plymouth-core-libs-0.8.3-24.el6.centos.x86_64
libXrandr-1.3.0-4.el6.x86_64
ncurses-base-5.7-3.20090208.el6.x86_64
python-ethtool-0.6-1.el6.x86_64

Same as above – list all installed packages – but list the most recently installed packages first. (Wish I had discovered this command sooner!)

$ rpm -qa --last

libcurl-devel-7.19.7-35.el6                   Mon Apr  1 20:00:47 2013
curl-7.19.7-35.el6                            Mon Apr  1 20:00:47 2013
libidn-devel-1.18-2.el6                       Mon Apr  1 20:00:46 2013
libcurl-7.19.7-35.el6                         Mon Apr  1 20:00:46 2013
libssh2-1.4.2-1.el6                           Mon Apr  1 20:00:45 2013
ncurses-static-5.7-3.20090208.el6             Mon Apr  1 19:59:24 2013
ncurses-devel-5.7-3.20090208.el6              Mon Apr  1 19:58:40 2013
gcc-c++-4.4.7-3.el6                           Fri Mar 15 07:59:36 2013
gcc-gfortran-4.4.7-3.el6                      Fri Mar 15 07:59:34 2013
...

Which package owns a command:

$ rpm -qf `which make`
make-3.81-3.el5

(This was run on an older Redhat 5.6 system which has make.)

Similarly, which package owns a file:

$ rpm -qf /usr/lib64/libssh2.so.1
libssh2-1-1.2.9-4.2.2.1

List files in (an installed) package:
$ rpm -ql freeradius-client-1.1.6-40.1

List files in an rpm package file:
$ rpm -qlp packages/HPSiS1124Core-11.24.241-Linux2.4.rpm

Get history of the package versions on this server:

$ yum history list te-agent

Get history of the list of changes to this package:

$ rpm -q --changelog te-agent

Install a package from a local RPM file:
$ rpm -i openmotif-libs-32bit-2.3.1-3.13.x86_64.rpm

Uninstall a package:
$ rpm -e package
$ rpm -e freeradius-server-libs-2.1.1-7.12.1

How do you install the missing make on CentOS? Use yum to search for it:

$ yum search make

Loaded plugins: fastestmirror
Determining fastest mirrors
 * base: mirror.umd.edu
 * extras: mirror.umd.edu
 * updates: mirror.cogentco.com
============================== N/S Matched: make ===============================
automake.noarch : A GNU tool for automatically creating Makefiles
...
imake.x86_64 : imake source code configuration and build system
...
make.x86_64 : A GNU tool which simplifies the build process for users
makebootfat.x86_64 : Utility for creation bootable FAT disk
mendexk.x86_64 : Replacement for makeindex with many enhancements
...

How to install it:

$ sudo yum install make.x86_64

Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.umd.edu
 * extras: mirror.umd.edu
 * updates: mirror.cogentco.com
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package make.x86_64 1:3.81-19.el6 will be installed
--> Finished Dependency Resolution
 
Dependencies Resolved
 
===========================================================================================================================
 Package                   Arch                        Version                             Repository                 Size
===========================================================================================================================
Installing:
 make                      x86_64                      1:3.81-19.el6                       base                      389 k
 
Transaction Summary
===========================================================================================================================
Install       1 Package(s)
 
Total download size: 389 k
Installed size: 1.0 M
Is this ok [y/N]: y
Downloading Packages:
make-3.81-19.el6.x86_64.rpm                                                                         | 389 kB     00:00
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : 1:make-3.81-19.el6.x86_64                                                                               1/1
 
Installed:
  make.x86_64 1:3.81-19.el6
 
Complete!

make should now be in your path.

If we were dealing with SLES I would use zypper instead of yum, but the idea of searching and installing is similar.
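For the record, the SLES flavor of the same search-and-install dance looks like this (same idea, zypper front end):

```shell
# SLES equivalents of the yum search/install above
zypper search make
sudo zypper install make
```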

Debian Linux, e.g. Raspberry Pi

Find which package a file belongs to:

> dpkg -S filepath

List installed packages:

> dpkg -l

List all files belonging to the package iperf3:

> dpkg -L iperf3

Transferring packages from one system to another

When I needed to transfer Debian packages from one system with Internet access to another without, I would do:

apt download apache2

Then sftp the file to the other system and on it do

apt install ./apache2_2.4.53-1~deb11u1_amd64.deb

In fact that only worked after I installed all the dependencies. This web of files covered them all:

apache2-bin_2.4.53-1~deb11u1_amd64.deb
apache2-data_2.4.53-1~deb11u1_all.deb
apache2-utils_2.4.53-1~deb11u1_amd64.deb
apache2_2.4.53-1~deb11u1_amd64.deb
libapr1_1.7.0-6+deb11u1_amd64.deb
libaprutil1-dbd-mysql_1.6.1-5_amd64.deb
libaprutil1-dbd-odbc_1.6.1-5_amd64.deb
libaprutil1-dbd-pgsql_1.6.1-5_amd64.deb
libaprutil1-dbd-sqlite3_1.6.1-5_amd64.deb
libaprutil1-ldap_1.6.1-5_amd64.deb
libaprutil1_1.6.1-5_amd64.deb
libgdbm-compat4_1.19-2_amd64.deb
libjansson4_2.13.1-1.1_amd64.deb
liblua5.3-0_5.3.3-1.1+b1_amd64.deb
libmariadb3_1%3a10.5.15-0+deb11u1_amd64.deb
libperl5.32_5.32.1-4+deb11u2_amd64.deb
mailcap_3.69_all.deb
mariadb-common_1%3a10.5.15-0+deb11u1_all.deb
mime-support_3.66_all.deb
mysql-common_5.8+1.0.7_all.deb
perl-modules-5.32_5.32.1-4+deb11u2_all.deb
perl_5.32.1-4+deb11u2_amd64.deb
ssl-cert_1.1.0+nmu1_all.deb
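In hindsight, the dependency chase can be automated. Assuming the Internet-connected machine doesn’t already have the packages installed, something like this should gather the whole set in one go (a sketch, not a tested recipe):

```shell
# On the machine with Internet access: download apache2 plus all
# not-yet-installed dependencies into /var/cache/apt/archives
sudo apt-get install --download-only apache2

# sftp the cached .deb files over, then on the offline machine:
sudo apt install ./*.deb
```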

Categories
Admin CentOS Security

How to Set up a Secure sftp-only Service

Intro
Updated Jan, 2015.

Usually I post a document because I think I have something to add. This time I found a link that covers the topic better than I could. I just wanted to have it covered here. What if you want to offer an sftp-only jailed account? Can you do that? How do you do it?

The Answer
Well, it used to be all here: http://blog.swiftbyte.com/linux/allowing-sftp-access-while-chrooting-the-user-and-denying-shell-access/. But that link is no longer valid.

I tried it, appropriately modified for CentOS, and it worked perfectly. A few notes. Presumably you will already have ssh installed. Who can imagine a server without it? So there’s typically no need to install openssh-server.

I was leery of mucking with subsystem sftp. What if it prevented me from doing sftp to my own account and having full access like I’m used to? Turns out it does no harm in that regard.

Very minor point. His documentation might be good for Ubuntu. To restart the ssh daemon in CentOS/Fedora, I recommend a sudo service sshd restart. Do you wonder if that will knock you out of your own ssh session? I did. It does not. Not sure why not!

These groupadd/useradd/usermod functions are “cute.” I’m old school and used to editing the darn files by hand (/etc/passwd, /etc/group). I suppose it’s safer to use the cute functions – less chance a typo could render your server inoperable (yup, done that).

Let’s call my sftp-only user joerg.
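Since the original article is gone, here is my reconstruction of the recipe from memory of the standard OpenSSH approach – treat it as a hedged sketch, not a verbatim restoration of that page:

```
# /etc/ssh/sshd_config
# Use the in-process sftp server so no binaries are needed inside the jail
Subsystem sftp internal-sftp

# Jail just this one user; the ChrootDirectory must be owned by root
Match User joerg
    ChrootDirectory /home/joerg
    ForceCommand internal-sftp
    AllowTcpForwarding no
```

Restart the ssh daemon after editing, as noted above.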

I did the chown root:root thing, but initially the files weren’t accessible to the joerg user. The permissions were 700 on the home directory, now owned by root. That produces this error when you try to sftp:

$ sftp joerg@localhost
sftp> dir

Couldn't get handle: Permission denied

That’s no good, so I liberalized the permissions:

$ sudo chmod go+rx /home/joerg

My /etc/passwd line for this user looks like this:

joerg:x:1004:901:Joerg, etherip author:/home/joerg:/bin/false

So note the unusual shell, /bin/false. That’s the key to locking things down.

In /etc/group I have this:

joerg:x:1004:

If you want to add the entries by hand to passwd and group, then (if I recall correctly) you run pwconv to generate an appropriate entry in /etc/shadow, followed by a sudo passwd joerg to set the desired password.

Does it work? Yeah, it really does.

$ sftp joerg@localhost
Connecting to localhost…
joerg@localhost's password:
sftp> pwd
Remote working directory: /
sftp> cd ..
sftp> pwd
Remote working directory: /
sftp> cd /etc
Couldn't canonicalise: No such file or directory
sftp> ls -l
[shows files in /home/joerg]

Moreover, ssh really is shut out:

$ ssh joerg@localhost
joerg@localhost's password:

This hangs and never returns with a prompt!

Cool, huh?

Locking out this same account
Now suppose you only intended joerg to temporarily have access and you want to lock the account out without actually removing it. This can be done with:

$ sudo passwd -l joerg

This prepends an invalid character (an exclamation point) to the password hash in that account’s /etc/shadow entry, so the password can no longer match.

Conclusion
We have an easy prescription to make a jailed sftp-only account that we tested and found really works. Regular accounts were not affected. The base article on which I embellished is now kaput so I’ve added a few more details to make up for that.

Categories
Uncategorized

Experimenting with GoodSync

Intro
I was thinking about doing something with photos – not exactly sure what yet. But since I have this nice dedicated server with lots of space, it’s a great place to store my collection of thousands of photos. But how to upload them and keep the server copy in sync?

The traditional Unix approach might be to install rsync, and I suppose I could have gotten it to work. But I decided to see what commercial package was out there. I settled on Goodsync. One of the inducements is that it has support for sftp servers. Or so it says.

Well, I got it to work. I decided to use a private key approach to login. I generated my key pair with cygwin’s openssl. It all seemed fine; however, I couldn’t use that key in GoodSync. Based on something I read in their documentation, I decided it might need a key generated by putty. So I downloaded puttygen and generated another keypair. I then had to make a saved session in putty, which I had never done before, using that keypair. I tested with putty’s psftp -load session. It worked.

So I loaded up that session in GoodSync. It began to work. I could successfully analyze.

I thought JPEG files were compressible. I played around with setting the compress option in putty, but it didn’t seem to matter one bit. Then I ran gzip on one of the files and saw essentially no reduction in size, which makes sense: JPEG data is already compressed.

So now the files are crawling from my home PC over to the server. It will take days to finish. I bought the professional version of GoodSync and can run multiple jobs simultaneously, which is kind of nice and a slightly better use of the bandwidth.

Open issues include: would GoodSync’s native server offer me any advantages? Does it even run on Linux? What program do I use to display the images? Is there a scheduler for GoodSync?

Conclusion
GoodSync seems to be a solid program for syncing files from a PC to an sftp server, although that is not its primary focus. The GUI is nice and makes it easy to set up sync jobs.

Categories
Admin Web Site Technologies

Tipsheet: How to run NTP on F5 BigIP

If you feel you’ve added your ntp servers correctly via the GUI (System|Configuration|Device|NTP), and yet you get an output like this:

# ntpq -p

     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp1.drj        .INIT.          16 -    - 1024    0    0.000    0.000   0.000
 ntp2.drj        .INIT.          16 -    - 1024    0    0.000    0.000   0.000

and you observe the time is off by seconds or even minutes, then you may have made the mistake I made. I used fully-qualified domain names (FQDN) for the ntp servers.

Switch from FQDNs to IP addresses and it will work fine:

# ntpq -p

     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*ntp1.drj         10.23.34.1     3 u  783 1024  377    0.951   -8.830   0.775
+ntp2.drj         10.23.35.1     3 u  120 1024  377   20.963   -8.051   5.705

The date command will now give the correct time.
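If you prefer the BigIP command line over the GUI, tmsh can make the same change; something like this (IP addresses illustrative, and a sketch only – I have not verified it on every TMOS version):

```shell
# Set the NTP server list using IP addresses rather than FQDNs
tmsh modify sys ntp servers replace-all-with { 10.1.1.1 10.1.1.2 }
tmsh save sys config
```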

Having correct time is useful for the logging, especially if you are using ASM and are trying to correlate known activity against the reported errors.

Categories
Perl Python

Help with the NPR Weekend Puzzle – and Learning Python

Intro
As I mentioned in my review of Amazon’s Web Services Summit, Python seems to be the vogue scripting language these days. I decided I had better dust off the brain cells and try it out. I am an old Perl stalwart, but one senses that that language has sort of hit a wall, the enthusiasm of 10 years ago having begun to wane. One of my example Perl scripts is provided in my post about turning HP SiteScope into SiteScope Classic. After deciding I needed a Python project, only a few days passed before I came across what I thought would be a worthy challenge – simple yet non-trivial: the weekend puzzle on NPR.

The Details
Start with a one-syllable, four-letter common boy’s name. Adjust all the letters with the ROT-13 cipher and you arrive at a common two-syllable girl’s name. What are the names? I’m pretty sure I could have figured out how to do this in Perl. Python? If it lived up to its hype then it should also be up to the task IMHO.

About the Weekend Puzzle
I listen to it most weekends. I often end up listening to it twice! I always think of whether it is suitable to be programmed or not. Most often I find it is not. I mean not suitable for the simple scripts and such that I write. I’m not talking about IBM’s Jeopardy-playing Watson! But this weekend I feel that the ROT-13 part of the challenge can definitely be aided by programming. Is that cheating? If I “give away” my solution as I am then I remove my unfair advantage in knowing a thing or two about programming!

The program – npr-test.py

#!/usr/bin/python
# drJ test script - 4/2012 (note: Python 2 syntax)
# to get input arguments...
import sys
#inputName = raw_input("Enter name to be translated: ");
#print "Received input is : ", inputName
# and see http://www.tutorialspoint.com/python/string_maketrans.htm
from string import maketrans
intab = "abcdefghijklmnopqrstuvwxyz"
outtab = "nopqrstuvwxyzabcdefghijklm"
trantab = maketrans(intab, outtab)
inputName = sys.argv[1]
 
print inputName,inputName.translate(trantab);

So you see Python has this great maketrans built-in function that we’re able to use to implement the ROT-13 cipher. Of course veterans will probably know an even simpler way to accomplish this, perhaps with the pack/unpack functions which I also considered using.

You call the script like this:

$ npr-test.py john

john wbua
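As an aside, shell veterans may recognize ROT-13 as a classic tr one-liner; it reproduces the script’s output exactly:

```shell
# ROT-13 with tr: map each half of the alphabet onto the other half
echo john | tr 'A-Za-z' 'N-ZA-Mn-za-m'   # -> wbua
```

Running it a second time on the output gets you back to the original name, since ROT-13 is its own inverse.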

I compiled a list of common four-letter names which I won’t fully divulge. They are in a file called names, one name per line. But how to quickly put it through this program? My old, bad, lazy Unix habit was to do this:

$ cat names|while read line; do
npr-test.py $line >> /tmp/results
done

I’ve got it memorized so I lose no time except typing the characters. But I also know the modern way is xargs.

xargs is the really hard part
I keep thinking that xargs is a good habit and one I should get into. But it’s not so easy. It took me a while to find the appropriate example. And then you run across the debate that holds that GNU parallel is still the better tool. Anyhow, here’s the xargs way…

$ cat names|xargs -I {} npr-test.py {}|more

amit nzvg
amos nzbf
arno neab
axel nkry
...
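Incidentally, xargs can also parallelize the runs with -P, which gets you part of what GNU parallel offers (echo stands in for npr-test.py here, and sort makes the interleaved output deterministic):

```shell
# -n 1: one name per invocation; -P 4: run up to 4 copies at a time
printf 'amit\namos\narno\naxel\n' | xargs -n 1 -P 4 echo name: | sort
```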

Conclusion
This little python program speaks volumes about the versatility of this language. It does have some really interesting properties and at first blush is worth getting to know better. No wonder others have embraced it. It also has helped us solve the weekend puzzle!

The stated answer? Glen and Tyra. You’ll see you can feed either one of these names into the program and come out with the other. I found it amusing that Will Shortz described the puzzle as “hard.” I didn’t think so – not with this program – but I was not the randomly selected winner, either, so I didn’t get a chance to explain how I did it.

The guy who did win explained that he simply wrote the ROT-13 version of the alphabet below the alphabet so he had a convenient look-up table. Clever.