Categories: Admin, Linux, Network Technologies, Raspberry Pi

Compiling hping on CentOS

Intro
hping was recommended to me as a tool to stage a mock DoS test against one of our servers. I found that I did not have it installed on my CentOS 6 instance and could not find it with a yum search. I’m sure there is an rpm for it somewhere, but I figured it would be just as easy to compile it myself as to find the rpm. I was wrong. It probably was a _little_ harder to compile it, but I learned some things in doing so. So I’ll share my experience. It wasn’t too bad. I have nothing original to add here to what you find elsewhere, except that I didn’t find anywhere else with all these problems documented in one place. So I’ve produced this blog post as a convenient reference.

I’ve also faced this same situation on SLES – can’t find a package for hping anywhere – and found the same recipe below works to compile hping3.

The Details
I downloaded the source, hping3-20051105.tar.gz, from hping.org. Try a ./configure and…

error can not find the byte order for this architecture, fix bytesex.h

After a few quick searches I began to wonder what the byte order is in the Amazon cloud. Inspired, I wrote this C program to find out and remove all doubt:

/* Prints whether this system is big-endian or little-endian.
   See http://unixpapa.com/incnote/byteorder.html - DrJ */
#include <stdio.h>

/* returns 1 on a big-endian system, 0 on a little-endian one */
int am_big_endian()
{
    /* examine the lowest-addressed byte of a long set to 1:
       it is 1 on little-endian hardware, 0 on big-endian */
    long one = 1;
    return !(*((char *)(&one)));
}

int main()
{
    int ans = am_big_endian();
    printf("am_big_endian value: %d\n", ans);
    return 0;
}

This program makes me realize a) how much I dislike C, and b) how I will never be a C expert no matter how much I dabble.
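For what it’s worth, Python will answer the same question in a couple of lines, no compiler required. A quick cross-check (sys.byteorder has been in Python since version 2.0):

#!/usr/bin/python
# Report the native byte order directly - prints "little" or "big"
import sys
print(sys.byteorder)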

The program prints 0, so the Amazon cloud has little endian byte order, as we could have guessed. All Intel i386 family chips are little endian, it seems. Back to bytesex.h. I edited it so that it has:

#define BYTE_ORDER_LITTLE_ENDIAN
/* # error can not find the byte order for this architecture, fix bytesex.h */

Now I can run make. Next error:

pcap.h No such file or directory.

I installed libpcap-devel with yum to provide that header file:

$ yum install libpcap-devel

Next error:

net/bpf.h no such file or directory

For this I did:

$ ln -s /usr/include/pcap-bpf.h /usr/include/net/bpf.h

TCL
Next error:

/usr/bin/ld: cannot find -ltcl

I decided that I wouldn’t need TCL anyways to run in simple command-line fashion, so I excised it:

./configure --no-tcl

Then, finally, it compiled OK with some warnings.

hping3 for Raspberry Pi
On the Raspberry Pi it was simple to install hping3:

$ sudo apt-get install hping3

That’s it!

Raspberry Pi’s are pretty slow to generate serious traffic, but if you have a bunch I suppose they could amount to something in total.

Conclusion
Now I’m ready to use hping3 for a simulated SYN flood attack or whatever else we want to test.
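For example, a simulated SYN flood might look something like the following sketch, built from hping3’s standard options (-S sets the SYN flag, -p the destination port, --flood sends as fast as possible). The target name is a placeholder, and of course only aim this at servers you own:

$ sudo ./hping3 -S --flood -p 80 target.example.com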

Categories: Admin, Internet Mail, Linux, Perl

The IT Detective Agency: last letter of attachment name is missing!

Intro
Today we bring you an IT whodunit thriller. A user of Lotus Notes informs his local IT that a process that emails SQL reports to him and a few others has suddenly stopped working correctly. The reports contain either an HTML attachment whose extension has been chopped to “ht” instead of “htm,” or an MHTML attachment whose extension has likewise been chopped, to “mh” instead of “mht.” They get emailed from the reporting server to a sendmail mail relay. The convenient ability to double-click on the attachment and launch it stopped working as a result of these chopped filenames. What’s going on? Fix it!

Let’s Reproduce the Problem
Fortunately this one was easier than most to reproduce. But first a digression. Let’s have some fun and challenge ourselves with it before we deep dive. What do you think the culprit is? What’s your hypothesis? Drawing on my many years of experience running enterprise-class sendmail servers, and never before having seen this problem despite the hundreds of millions of delivered emails, my best instincts told me to look elsewhere.

The origin server, let’s call it aspen, sends few messages, so I had the luxury of turning on tracing on my sendmail server with a filter limiting the traffic to its IP:

$ tcpdump -i eth0 -s 1540 -w /tmp/aspen.cap host aspen

Using wireshark to analyze aspen.cap and following the TCP stream, I see this:

...
Content-Type: multipart/mixed;
		 boundary="CSmtpMsgPart123X456_000_C800C42D"
 
This is a multipart message in MIME format
 
--CSmtpMsgPart123X456_000_C800C42D
Content-Type: text/plain;
		 charset="iso-8859-1"
Content-Transfer-Encoding: 7bit
 
SQLplus automated report
--CSmtpMsgPart123X456_000_C800C42D
Content-Type: application/octet-stream;
		 name="tower status_2012_06_04--09.25.00.htm"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
		 filename="tower status_2012_06_04--09.25.00.htm
 
<html><head></head><body><h1>Content goes here...</h1></body>
</html>
 
--CSmtpMsgPart123X456_000_C800C42D--

Result of trace of original email as received by sendmail

But the source as viewed from within Lotus Notes is:

...
Content-Type: multipart/mixed;
		 boundary="CSmtpMsgPart123X456_000_C800C42D"
 
 
--CSmtpMsgPart123X456_000_C800C42D
Content-Transfer-Encoding: 7bit
Content-Type: text/plain;
		 charset="iso-8859-1"
 
SQLplus automated report
--CSmtpMsgPart123X456_000_C800C42D
Content-Type: application/octet-stream;
		 name="tower status_2012_06_04--09.25.00.htm"
Content-Disposition: attachment;
		 filename="tower status_2012_06_04--09.25.00.ht"
Content-Transfer-Encoding: base64
 
PGh0bWw+PGhlYWQ+PC9oZWFkPjxib2R5PjxoMT5Db250ZW50IGdvZXMgaGVyZS4uLjwvaDE+PC9i
b2R5Pg0KPC9odG1sPg==
 
--CSmtpMsgPart123X456_000_C800C42D--

Same email after being transferred to Lotus Notes

I was in shock.

I fully expected the message source to go through unaltered all the way into Lotus Notes, but it didn’t. The trace taken before sendmail’s actions was not an exact match to the source of the message I received. So either sendmail or Lotus Notes (or both) were altering the source in significant ways.

At the same time, we got a big clue as to what is behind the missing letter in the file extension. To highlight it, compare this line from the trace:

filename=”tower status_2012_06_04–09.25.00.htm

to that same line as it appears in the Lotus Notes source:

filename=”tower status_2012_06_04–09.25.00.ht”

So there is no final close quote (“) in the filename attribute as it comes from the aspen server! That can’t be good.

But it used to work. What do we make of that fact??

I had to dig further. I was suddenly reminded of the final episode of House, where it is apparent that solving the puzzle of symptoms is the highest aspiration of Doctor House. Maybe I am similarly motivated? Because I was definitely willing to throw the full weight of my resources behind this mystery. At least for the half-day I had to spare on this.

First step was to reproduce the problem myself. For sending an email you would normally use sendmail or mailx or such, but I didn’t trust any of those programs – afraid they would mess with my headers in secret, undocumented ways.

So I wrote my own mail sending program using Perl/Expect. Now I’m not advocating this as a best practice. It’s just that for me, given my skillset and perceived difficulty in finding a proper program to do what I wanted (which I’m sure is out there), this was the path of least resistance, the best and most efficient use of my time. You see, I already had the core of the program written for another purpose, so I knew it wouldn’t be too difficult to finish for this purpose. And I admit I’m not the best at Expect and I’m not the best at Perl. I just know enough to get things done and pretty quickly at that.

OK. Enough apologies. Here’s that code:

#!/usr/bin/perl
# drjohnstechtalk.com - 6/2012
# Send mail by explicit use of the protocol
$DEBUG = 1;
use Expect;
use Getopt::Std;
getopts('m:r:s:');
$recip = $opt_r;
$sender = $opt_s;
$hostname = $ENV{HOSTNAME};
chomp($hostname);   # chomp, not chop: only strips a trailing newline if one is present
print "hostname,mailhost,sender,recip: $hostname,$opt_m,$sender,$recip\n" if $DEBUG;
$telnet = "telnet";
@hosts = ($opt_m);
$logf = "/var/tmp/smtpresults.log";
 
$timeout = 15;
 
$data = qq(Subject: test of strange MIME error
X-myHeader: my-value
From: $sender
To: $recip
Subject: SQLplus Report - tower status
Date: Mon, 4 Jun 2012 9:25:10 --0400
Importance: Normal
X-Mailer: ATL CSmtp Class
X-MSMail-Priority: Normal
X-Priority: 3 (Normal)
MIME-Version: 1.0
Content-Type: multipart/mixed;
        boundary="CSmtpMsgPart123X456_000_C800C42D"
 
This is a multipart message in MIME format
 
--CSmtpMsgPart123X456_000_C800C42D
Content-Type: text/plain;
        charset="iso-8859-1"
Content-Transfer-Encoding: 7bit
 
SQLplus automated report
--CSmtpMsgPart123X456_000_C800C42D
Content-Type: application/octet-stream;
        name="tower status_2012_06_04--09.25.00.htm"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
        filename="tower status_2012_06_04--09.25.00.htm
 
<html><head></head><body><h1>Content goes here...</h1></body>
</html>
--CSmtpMsgPart123X456_000_C800C42D--
 
.
);
sub myInit {
# This structure is ugly (p.148 in the book) but it's clear how to fill it
@steps = (
        { Expect => "220 ",
          Command => "helo $hostname"},
# Envelope sender
        { Expect => "250 ",
          Command => "mail from: $sender"},
# Envelope recipient
        { Expect => "250 ",
          Command => "rcpt to: $recip"},
# data command
        { Expect => "250 ",
          Command => "data"},
# start mail message
        { Expect => "354 Enter ",
          Command => $data},
# end session nicely
        { Expect => "250 Message accepted ",
          Command => "quit"},
);
}       # end sub myInit
#
# Main program
open(LOGF,">$logf") || die "Cannot open log file!!\n";
foreach $host (@hosts) {
  login($host);
}
 
# create an Expect object by spawning another process
sub login {
($host) = @_;
myInit();
#@params = ($host," 25");
$init_command = "$telnet $host 25";
#$Expect::Debug = 3;
my $exp = Expect->spawn("$init_command")
         or die "Cannot spawn $init_command: $!\n";
#
# Now run all the other commands
foreach $step (@steps) {
  $i++;
  $expstr = $step->{Expect};    # $step is a hash reference
  $cmd = $step->{Command};
#  print "expstr,cmd: $expstr, $cmd\n";
# Logging
#$exp->debug(2);
#$exp->exp_internal(1);
  $exp->log_stdout(0);  # disable stdout for each command
  $exp->log_file($logf);
  @match_patterns = ($expstr);
  ($matched_pattern_position, $error, $successfully_matching_string, $before_match, $after_match) = $exp->expect($timeout,
@match_patterns);
  unless ($matched_pattern_position == 1) {
    $err = 1;
    last;
  }
  #die "No match: error was: $error\n" unless $matched_pattern_position == 1;
  # We got our match. Proceed.
  $exp->send("$cmd\n");
}       # end loop over all the steps
 
#
# hard close
$exp->hard_close();
close(LOGF);
#unlink($logf);
}       # end sub login

Code for sendmsg2.pl

Invoke it:

$ ./sendmsg2.pl -m sendmail_host -s [email protected] -r [email protected]

The nice thing with this program is that I can inject a message into sendmail, but also I can inject it directly into the Lotus Notes smtp gateway, bypassing sendmail, and thereby triangulate the problem. The sendmail and Lotus Notes servers have slightly different responses to the various protocol stages, hence I clipped the Expect strings down to the minimal common set of characters after some experimentation.
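Incidentally, the same raw-injection idea can be sketched in a few lines of Python for those who prefer it. smtplib passes the DATA section through essentially verbatim (apart from dot-stuffing and line-ending normalization), so the malformed header survives intact. This is just a sketch; the host and addresses are placeholders:

#!/usr/bin/python
# Sketch: inject a message with a deliberately malformed Content-Disposition
# (missing closing quote) to reproduce the attachment-name problem.
import smtplib

msg = (
    "Subject: test of strange MIME error\r\n"
    "MIME-Version: 1.0\r\n"
    "Content-Type: multipart/mixed;\r\n"
    "        boundary=\"CSmtpMsgPart123X456_000_C800C42D\"\r\n"
    "\r\n"
    "--CSmtpMsgPart123X456_000_C800C42D\r\n"
    "Content-Type: application/octet-stream;\r\n"
    "        name=\"tower status_2012_06_04--09.25.00.htm\"\r\n"
    "Content-Disposition: attachment;\r\n"
    "        filename=\"tower status_2012_06_04--09.25.00.htm\r\n"  # no closing quote!
    "\r\n"
    "<html><head></head><body><h1>Content goes here...</h1></body></html>\r\n"
    "--CSmtpMsgPart123X456_000_C800C42D--\r\n"
)

s = smtplib.SMTP("sendmail_host")   # or the Lotus Notes SMTP gateway, to triangulate
s.sendmail("sender@drjexample.com", ["recipient@drjexample.com"], msg)
s.quit()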

This program makes it easy to test several scenarios of interest. Leave off the final quote and inject into either sendmail or Lotus Notes (LN). Tack on the final quote to see if that really fixes things. The results?

                      Missing final quote                            With final quote added
inject to sendmail    ht" in final email to LN; extension chopped    htm" and all is good
inject to LN          htm in final email; extension not chopped      htm" and all is good

I now had incontrovertible proof that sendmail, my sendmail, was altering the original message. It looks at the unbalanced quote mark situation and recovers as best it can by replacing the terminating character “m” with the missing double quote (“). I was beginning to suspect it. After that shock drained away, I tried to check the RFCs. I figured it must be some well-meaning attempt on sendmail’s part to make things right. Well, the RFCs, 822 and 1806, are a little hard to read and apply to this situation.

Let’s be clear. There’s no question that the sender is wrong and ought to be closing out that quote. But I don’t think there’s a single, unambiguous statement in the RFCs that makes that abundantly apparent. Nevertheless, of course that’s what I told them to do.

The other thing from reading the RFC is that the whole filename attribute looks optional. To satisfy my curiosity – and possibly provide more options for remediation to aspen – I sent a test where I entirely left out the offending filename=”tower… line. In that case the line above it should have its terminating semicolon shorn:

Content-Disposition: attachment

After all, there already is a name=”tower…” as a Content-type parameter, and the string following that was never in question: it has its terminating semicolon.

Yup, that worked just great too!

Then I thought of another approach. Shouldn’t the overriding definition of what the filetype is be contained in the Content-type header? What if it were more correctly defined as

Content-type: text/html

?

Content-type appears in two places in this email. I changed them both for good measure, but left the unbalanced quotation problem in place. Nope. Lotus Notes did not know what to do with the attachment, which it displays as tower status_2012_06_04–09.25.00.ht. So we can’t recommend that course of action.

What Sendmail’s Point-of-View might be
Looking at the book, I see sendmail does care about MIME headers; in particular it cares about the Content-Disposition header, which it regards as unreliable and hence merely advisory in nature. Also, some years ago there was a sendmail vulnerability wherein malformed multipart MIME messages could cause sendmail to crash (see http://www.kb.cert.org/vuls/id/146718). So maybe sendmail is just a little sensitive to this situation and feels perfectly comfortable and justified in right-forming a malformed header. Just a guess on my part.

Case closed.

Conclusion
We battled a strange email attachment naming error which seemed to be an RFC violation of the MIME protocols. By carefully constructing a testing program we were easily able to reproduce the problem and isolate the fault and recommend corrective actions in the sending program. Now we have a convenient way to inject SMTP email whenever and wherever we want. We feel sendmail’s reputation remains unscathed, though its corrective actions could be characterized as overly solicitous.

Categories: Admin, CentOS, Linux, Raspberry Pi

A few RPM and YUM commands and equivalent on Raspberry Pi

Intro
This post adds nothing to the knowledge out there and readily available on the Internet. I just got tired of looking up elsewhere the few useful rpm and yum commands that I employ. Here’s how I installed a missing binary on one system when I had a similar system that already had it.

RPM is the Redhat Package Manager. It is also used on Suse Linux (SLES). A much better resource than this page (Hey, we can’t all be experts!) is http://www.idevelopment.info/data/Unix/Linux/LINUX_RPMCommands.shtml

List all installed packages:

$ rpm -qa
dmidecode-2.11-2.el6.x86_64
libXcursor-1.1.10-2.el6.x86_64
basesystem-10.0-4.el6.noarch
plymouth-core-libs-0.8.3-24.el6.centos.x86_64
libXrandr-1.3.0-4.el6.x86_64
ncurses-base-5.7-3.20090208.el6.x86_64
python-ethtool-0.6-1.el6.x86_64

Same as above – list all installed packages – but list the most recently installed packages first (wish I had discovered this command sooner!):

$ rpm -qa --last

libcurl-devel-7.19.7-35.el6                   Mon Apr  1 20:00:47 2013
curl-7.19.7-35.el6                            Mon Apr  1 20:00:47 2013
libidn-devel-1.18-2.el6                       Mon Apr  1 20:00:46 2013
libcurl-7.19.7-35.el6                         Mon Apr  1 20:00:46 2013
libssh2-1.4.2-1.el6                           Mon Apr  1 20:00:45 2013
ncurses-static-5.7-3.20090208.el6             Mon Apr  1 19:59:24 2013
ncurses-devel-5.7-3.20090208.el6              Mon Apr  1 19:58:40 2013
gcc-c++-4.4.7-3.el6                           Fri Mar 15 07:59:36 2013
gcc-gfortran-4.4.7-3.el6                      Fri Mar 15 07:59:34 2013
...

Which package owns a command:

$ rpm -qf `which make`
make-3.81-3.el5

(This was run on an older Redhat 5.6 system which has make.)

Similarly, which package owns a file:

$ rpm -qf /usr/lib64/libssh2.so.1
libssh2-1-1.2.9-4.2.2.1

List files in (an installed) package:
$ rpm -ql freeradius-client-1.1.6-40.1

List files in an rpm package file:
$ rpm -qlp packages/HPSiS1124Core-11.24.241-Linux2.4.rpm

Get history of the package versions on this server:

$ yum history list te-agent

Get the change log for this package:

$ rpm -q --changelog te-agent

Install a package from a local RPM file:
$ rpm -i openmotif-libs-32bit-2.3.1-3.13.x86_64.rpm

Uninstall a package:
$ rpm -e package
$ rpm -e freeradius-server-libs-2.1.1-7.12.1

How will you install the missing make in CentOS? Use yum to search for it:

$ yum search make

Loaded plugins: fastestmirror
Determining fastest mirrors
 * base: mirror.umd.edu
 * extras: mirror.umd.edu
 * updates: mirror.cogentco.com
============================== N/S Matched: make ===============================
automake.noarch : A GNU tool for automatically creating Makefiles
...
imake.x86_64 : imake source code configuration and build system
...
make.x86_64 : A GNU tool which simplifies the build process for users
makebootfat.x86_64 : Utility for creation bootable FAT disk
mendexk.x86_64 : Replacement for makeindex with many enhancements
...

How to install it:

$ sudo yum install make.x86_64

Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.umd.edu
 * extras: mirror.umd.edu
 * updates: mirror.cogentco.com
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package make.x86_64 1:3.81-19.el6 will be installed
--> Finished Dependency Resolution
 
Dependencies Resolved
 
===========================================================================================================================
 Package                   Arch                        Version                             Repository                 Size
===========================================================================================================================
Installing:
 make                      x86_64                      1:3.81-19.el6                       base                      389 k
 
Transaction Summary
===========================================================================================================================
Install       1 Package(s)
 
Total download size: 389 k
Installed size: 1.0 M
Is this ok [y/N]: y
Downloading Packages:
make-3.81-19.el6.x86_64.rpm                                                                         | 389 kB     00:00
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : 1:make-3.81-19.el6.x86_64                                                                               1/1
 
Installed:
  make.x86_64 1:3.81-19.el6
 
Complete!

make should now be in your path.

If we were dealing with SLES I would use zypper instead of yum, but the idea of searching and installing is similar.

Debian Linux, e.g. Raspberry Pi

Find which package a file belongs to:

> dpkg -S filepath

List installed packages:

> dpkg -l

List all files belonging to the package iperf3:

> dpkg -L iperf3

Transferring packages from one system to another

When I needed to transfer Debian packages from one system with Internet access to another without, I would do:

apt download apache2

Then sftp the file to the other system and on it do

apt install ./apache2_2.4.53-1~deb11u1_amd64.deb

In fact that only worked after I installed all of its dependencies. This web of files covered all the dependencies (a shortcut for installing them appears after the list):

apache2-bin_2.4.53-1~deb11u1_amd64.deb
apache2-data_2.4.53-1~deb11u1_all.deb
apache2-utils_2.4.53-1~deb11u1_amd64.deb
apache2_2.4.53-1~deb11u1_amd64.deb
libapr1_1.7.0-6+deb11u1_amd64.deb
libaprutil1-dbd-mysql_1.6.1-5_amd64.deb
libaprutil1-dbd-odbc_1.6.1-5_amd64.deb
libaprutil1-dbd-pgsql_1.6.1-5_amd64.deb
libaprutil1-dbd-sqlite3_1.6.1-5_amd64.deb
libaprutil1-ldap_1.6.1-5_amd64.deb
libaprutil1_1.6.1-5_amd64.deb
libgdbm-compat4_1.19-2_amd64.deb
libjansson4_2.13.1-1.1_amd64.deb
liblua5.3-0_5.3.3-1.1+b1_amd64.deb
libmariadb3_1%3a10.5.15-0+deb11u1_amd64.deb
libperl5.32_5.32.1-4+deb11u2_amd64.deb
mailcap_3.69_all.deb
mariadb-common_1%3a10.5.15-0+deb11u1_all.deb
mime-support_3.66_all.deb
mysql-common_5.8+1.0.7_all.deb
perl-modules-5.32_5.32.1-4+deb11u2_all.deb
perl_5.32.1-4+deb11u2_amd64.deb
ssl-cert_1.1.0+nmu1_all.deb
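The shortcut: I believe you can put all the downloaded .deb files in one directory and hand the whole set to apt in a single command, letting it work out the install order itself:

apt install ./*.deb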

Categories: Admin, CentOS, Security

How to Set up a Secure sftp-only Service

Intro
Updated Jan, 2015.

Usually I post a document because I think I have something to add. This time I found a link that covers the topic better than I could. I just wanted to have it covered here. What if you want to offer an sftp-only jailed account? Can you do that? How do you do it?

The Answer
Well, it used to be all here: http://blog.swiftbyte.com/linux/allowing-sftp-access-while-chrooting-the-user-and-denying-shell-access/. But that link is no longer valid.

I tried it, appropriately modified for CentOS, and it worked perfectly. A few notes. Presumably you will already have ssh installed. Who can imagine a server without it? So there’s typically no need to install openssh-server.

I was leery of mucking with the sftp subsystem. What if it prevented me from doing sftp to my own account and having full access like I’m used to? Turns out it does no harm in that regard.

Very minor point. His documentation might be good for Ubuntu. To restart the ssh daemon in CentOS/Fedora, I recommend a sudo service sshd restart. Do you wonder if that will knock you out of your own ssh session? I did. It does not. Not sure why not!

These groupadd/useradd/usermod functions are “cute.” I’m old school and used to editing the darn files by hand (/etc/passwd, /etc/group). I suppose it’s safer to use the cute functions – less chance a typo could render your server inoperable (yup, done that).

Let’s call my sftp-only user joerg.
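Since the original article is gone, here is the gist of the sshd_config changes it had you make, reconstructed from memory; treat it as a sketch and adjust the user name and path to taste. The Match block requires a reasonably modern openssh:

Subsystem sftp internal-sftp

Match User joerg
    ChrootDirectory /home/joerg
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no

Note that sshd insists the ChrootDirectory be owned by root and not group- or world-writable, which is why the chown root:root step below is needed.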

I did the chown root:root thing, but initially the files weren’t accessible to the joerg user. The permissions were 700 on the home directory, now owned by root. That produces this error when you try to sftp:

$ sftp joerg@localhost
sftp> dir

Couldn't get handle: Permission denied

That’s no good, so I liberalized the permissions:

$ sudo chmod go+rx /home/joerg

My /etc/passwd line for this user looks like this:

joerg:x:1004:901:Joerg, etherip author:/home/joerg:/bin/false

So note the unusual shell, /bin/false. That’s the key to locking things down.

In /etc/group I have this:

joerg:x:1004:

If you want to add the entries by hand to passwd and group then if I recall correctly you run a pwconv to generate an appropriate entry for it in /etc/shadow, and a sudo passwd joerg to set up a desired password.

Does it work? Yeah, it really does.

$ sftp joerg@localhost
Connecting to localhost…
joerg@localhost’s password:
sftp> pwd
Remote working directory: /
sftp> cd ..
sftp> pwd
Remote working directory: /
sftp> cd /etc
Couldn’t canonicalise: No such file or directory
sftp> ls -l
[shows files in /home/joerg]

Moreover, ssh really is shut out:

$ ssh joerg@localhost
joerg@localhost’s password:

This hangs and never returns with a prompt!

Cool, huh?

Locking out this same account
Now suppose you only intended joerg to temporarily have access and you want to lock the account out without actually removing it. This can be done with:

$ sudo passwd -l joerg

This puts an invalid character in that account’s shadow file entry.

Conclusion
We have an easy prescription to make a jailed sftp-only account that we tested and found really works. Regular accounts were not affected. The base article on which I embellished is now kaput so I’ve added a few more details to make up for that.

Categories: Uncategorized

Experimenting with GoodSync

Intro
I was thinking about doing something with photos – not exactly sure what yet. But since I have this nice dedicated server with lots of space, it’s a great place to store my collection of thousands of photos. But how to upload them and keep the server copy in sync?

The traditional Unix approach might be to install rsync, and I suppose I could have gotten it to work. But I decided to see what commercial package was out there. I settled on Goodsync. One of the inducements is that it has support for sftp servers. Or so it says.

Well, I got it to work. I decided to use a private key approach to login. I generated my key pair with cygwin’s openssl. It all seemed fine; however, I couldn’t use that key in GoodSync. Based on something I read in their documentation, I decided it might need a key generated by putty. So I downloaded puttygen and generated another keypair. I then had to make a saved session in putty, which I had never done before, using that keypair. I tested with putty’s psftp -load session. It worked.

So I loaded up that session in GoodSync. It began to work. I could successfully analyze.

I thought JPEG files were compressible. I played around with setting the compress option in putty, but it didn’t seem to matter one bit. Then I ran gzip on one of the files and saw essentially no reduction in size. Of course: JPEG is already a compressed format, so there is little left for gzip to squeeze out.

So now the files are crawling from my home PC over to the server. It will take days to finish. I bought the professional version of GoodSync so I can run multiple jobs simultaneously, which is kind of nice and a slightly better use of the bandwidth.

Open issues include: would GoodSync’s native server offer me any advantages? Does it even run on Linux? What program do I use to display the images? Is there a scheduler for GoodSync?

Conclusion
GoodSync seems to be a solid program for syncing files from a PC to an sftp server, although that is not its primary focus. The GUI is nice and makes it easy to set up sync jobs.

Categories: Admin, Web Site Technologies

Tipsheet: How to run NTP on F5 BigIP

If you feel you’ve added your ntp servers correctly via the GUI (System|Configuration|Device|NTP), and yet you get an output like this:

# ntpq -p

     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 ntp1.drj        .INIT.          16 -    - 1024    0    0.000    0.000   0.000
 ntp2.drj        .INIT.          16 -    - 1024    0    0.000    0.000   0.000

and you observe the time is off by seconds or even minutes, then you may have made the mistake I made: I used fully-qualified domain names (FQDNs) for the ntp servers.

Switch from FQDNs to IP addresses and it will work fine:

# ntpq -p

     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*ntp1.drj         10.23.34.1     3 u  783 1024  377    0.951   -8.830   0.775
+ntp2.drj         10.23.35.1     3 u  120 1024  377   20.963   -8.051   5.705

The date command will now give the correct time.
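Incidentally, if you prefer the command line to the GUI, newer TMOS versions let you make the same change from tmsh. Something like the following should do it, though check the syntax against your version’s documentation:

# tmsh modify sys ntp servers add { 10.23.34.1 10.23.35.1 }
# tmsh save sys config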

Having correct time is useful for the logging, especially if you are using ASM and are trying to correlate known activity against the reported errors.

Categories: Perl, Python

Help with the NPR Weekend Puzzle – and Learning Python

Intro
As I mentioned in my review of Amazon’s Web Services Summit, Python seems to be the vogue scripting language these days. I decided I had better dust off the brain cells and try it out. I am an old Perl stalwart, but one senses that that language has sort of hit a wall after enthusiasm from 10 years ago began to wane. One of my example Perl scripts is provided in my post about turning HP SiteScope into SiteScope Classic. After deciding I needed a Python project only a few days passed before I came across what I thought would be a worthy challenge – simple yet non-trivial. That is the weekend puzzle as I understand it on NPR.

The Details
Start with a one-syllable, four-letter common boy’s name. Adjust all the letters with the ROT-13 cipher, and you arrive at a common two-syllable girl’s name. What are the names? I’m pretty sure I could have figured out how to do this in Perl. Python? If it lived up to its hype then it should also be up to the task, IMHO.

About the Weekend Puzzle
I listen to it most weekends. I often end up listening to it twice! I always think of whether it is suitable to be programmed or not. Most often I find it is not. I mean not suitable for the simple scripts and such that I write. I’m not talking about IBM’s Jeopardy-playing Watson! But this weekend I feel that the ROT-13 part of the challenge can definitely be aided by programming. Is that cheating? If I “give away” my solution as I am then I remove my unfair advantage in knowing a thing or two about programming!

The program – npr-test.py

#!/usr/bin/python
# drJ test script - 4/2012
# to get input arguments...
import sys
#inputName = raw_input("Enter name to be translated: ");
#print "Received input is : ", inputName
# and see http://www.tutorialspoint.com/python/string_maketrans.htm
from string import maketrans
intab = "abcdefghijklmnopqrstuvwxyz"
outtab = "nopqrstuvwxyzabcdefghijklm"
trantab = maketrans(intab, outtab)
inputName = sys.argv[1]
 
print inputName,inputName.translate(trantab);

So you see Python has this great maketrans built-in function that we’re able to use to implement the ROT-13 cipher. Of course veterans will probably know an even simpler way to accomplish this, perhaps with the pack/unpack functions, which I also considered using.
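(A note for anyone trying this today: the script above is Python 2. In Python 3, maketrans moved from the string module to the str type, so the equivalent would look something like this:)

#!/usr/bin/python3
# Python 3 version: maketrans is now a str method
import sys

intab = "abcdefghijklmnopqrstuvwxyz"
outtab = "nopqrstuvwxyzabcdefghijklm"
trantab = str.maketrans(intab, outtab)

print(sys.argv[1], sys.argv[1].translate(trantab))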

You call the script like this:

$ npr-test.py john

john wbua

I compiled a list of common four-letter names which I won’t fully divulge. They are in a file called names, one name per line. But how to quickly put it through this program? My old, bad, lazy Unix habit was to do this:

$ cat names|while read line; do
npr-test.py $line >> /tmp/results
done

I’ve got it memorized so I lose no time except typing the characters. But I also know the modern way is xargs.

xargs is the really hard part
I keep thinking that xargs is a good habit and one I should get into. But it’s not so easy. It took me awhile to find the appropriate example. And then you run across the debate that holds that Gnu parallel is still the better tool. Anyhow, here’s the xargs way…

$ cat names|xargs -I {} npr-test.py {}|more

amit nzvg
amos nzbf
arno neab
axel nkry
...

Conclusion
This little Python program speaks volumes about the versatility of this language. It does have some really interesting properties and at first blush is worth getting to know better. No wonder others have embraced it. It also has helped us solve the weekend puzzle!

The stated answer? Glen and Tyra. You’ll see you can feed either one of these names into the program and come out with the other. I found it amusing that Will Shortz described the puzzle as “hard.” I didn’t think so – not with this program – but I was not the randomly selected winner, either, so I didn’t get a chance to explain how I did it.

The guy who did win explained that he simply wrote the ROT-13 version of the alphabet below the alphabet so he had a convenient look-up table. Clever.

Categories: DNS

“FBI’s” DNS Servers – Trying to Clear Some of the Hysteria

Intro
This Fox News article about infected computers was brought to my attention:
Hundreds of thousands may lose Internet in July

Hey, their URLs look like my URLs. Wonder if they’re using WordPress.

But I digress. I was asked if this is a hoax. I really had to puzzle over it for awhile. It smells like a hoax. I didn’t find any independent confirmation on the isc.org web site, where I get my BIND source files from. I didn’t see a blog post from Paul Vixie mentioning this incident.

And the comments. Oh my word the comments. Every conspiracy theorist is eating this up, gorging on it. The misinformation is appalling. There is actually no single comment as of this writing (100+ comments so far) that lends any technical insight into the saga.

Let’s do a reset and start from a reasonable person basis, not political ideology.

To recount, apparently hundreds of thousands of computers got hacked in a novel way: they had their DNS servers replaced with DNS servers owned by hackers. In Windows you can see your DNS servers with

> ipconfig /all

in a CMD window.

Look for the line that reads

DNS Servers…

These DNS servers could redirect users to web sites that encouraged users to fill out surveys which generated profit for the hackers.

It’s a lot easier to control a few servers than it is to fix hundreds of thousands of desktops. So the FBI got permission through the courts to get control of the IP addresses of the DNS servers involved and decided to run clean DNS servers. This would keep the infected users working, and they would no longer be prompted to fill out surveys. In fact they would probably feel that everything was great again. But this solution costs money to maintain. Is the FBI running these DNS servers? I highly, highly doubt it. I’ll bet they worked with ISC (isc.org) who are the real subject matter experts and quietly outsourced that delicate task to them.

So in July, apparently (I can’t find independent confirmation of the date), they are going to pull the plug on their FBI DNS servers.

On that day some 80,000 users in the US alone will not be able to browse Internet sites as a result of this action, and hundreds of thousands outside of the US.

If the FBI works with some DNS experts – and Paul Vixie is the best out there and they are apparently already working with him – they could be helpful and redirect all DNS requests to a web site that explains the problem and suggests methods for fixing it. It’s not clear at this point whether or not they will do this.

That’s it. No conspiracy. The FBI was trying to do the right thing – ease the users off the troubled DNS services to somewhat minimize service disruption. I would do the same if I were working for the FBI.

Unfortunately, feeding fodder to the conspiracists is the fact that the mentioned web site, dcwg.org, is not currently available. The provenance of that web site is also hard to scrutinize as it’s registered to an individual, which looks fishy. But upon digging deeper I have to say that probably the site is just overwhelmed right now. It was first registered in November – around the time this hack came to light. It stands for DNS Changer Working Group.

Comparitech provides more technical details in this very well-written article: DNS changer malware.

Another link on their site that discusses this topic: check-to-see-if-your-computer-is-using-rogue-DNS

Specific DNS Servers
I want to be a good Netizen and not scan all the address ranges mentioned in the FBI document. So I took an educated guess as to where the DNS servers might actually reside. I found three right off the bat:

64.28.176.1
213.109.64.1
77.67.83.1

Knowing actual IPs of actual DNS servers makes this whole thing a lot more real from a technical point-of-view. Because they are indeed unusual in their behaviour. They are fully resolving DNS servers, which is a rarity on the Internet but would be called for by the problem at hand. Traceroute to them all shows the same path, which appears to end somewhere in NYC. A DNS query takes about 17 msec from Amazon’s northeast data center, which is in Virginia. So the similarity of the path seems to suggest that they got hold of the routing for these subnets and are directing them to the same set of clean DNS servers.
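If you want to check a DNS server against the full set of rogue ranges programmatically, here is a sketch. The ranges below are copied from the FBI’s published list as I recall it, so verify them against the original document:

#!/usr/bin/python
# Check whether an IP falls in the rogue DNSChanger ranges
# (ranges as published by the FBI - verify against the original list)
import socket, struct

ROGUE_RANGES = [
    ("85.255.112.0", "85.255.127.255"),
    ("67.210.0.0",   "67.210.15.255"),
    ("93.188.160.0", "93.188.167.255"),
    ("77.67.83.0",   "77.67.83.255"),
    ("213.109.64.0", "213.109.79.255"),
    ("64.28.176.0",  "64.28.191.255"),
]

def ip2int(ip):
    # convert dotted quad to a 32-bit integer for easy range comparison
    return struct.unpack("!I", socket.inet_aton(ip))[0]

def is_rogue(ip):
    n = ip2int(ip)
    return any(ip2int(lo) <= n <= ip2int(hi) for lo, hi in ROGUE_RANGES)

print(is_rogue("64.28.176.1"))   # True - one of the servers found above
print(is_rogue("8.8.8.8"))       # False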

Let’s see what else we can figure out.

Look up 64.28.176.1 on arin.net Whois. More confirmation that this is real and subject to a court order. In the comments section it says:

In accordance with a Court Order in connection with a criminal investigation, this registration record has been locked and during the period from November 8, 2011 through March 22, 2012, any changes to this registration record are subject to that Court Order.

I thought we could gain even more insight looking up 213.109.64.1, which is normally an address range from outside North America, but I can’t figure out too much about it. You can look it up in the RIPE database (ripe.net). You will in fact see it is registered to Prolite Ltd in Russia. No mention of a court order. So we can speculate that, unlike arin.net, RIPE did not bother to update their registration record after the FBI got control. So Prolite Ltd may either have been an active player in the hack, or merely had some of their servers hacked and used for the DNSChanger DNS server service.

Of course we don’t know what the original route looked like, but I bet it didn’t end in New York City, although that can’t be ruled out. But now it does. I wonder how the FBI got control of that subnet and whether it involved international cooperation.

7/9 update
The news reports re-piqued my interest in this story. I doubled back and re-checked those three public DNS servers I had identified above, 64.28.176.1, etc. Sure enough, they no longer respond to DNS queries! The query comes back like this:

$ dig ns com @77.67.83.1

; <<>> DiG 9.7.3-P3-RedHat-9.7.3-8.P3.el6_2.2 <<>> ns com @77.67.83.1
;; global options: +cmd
;; connection timed out; no servers could be reached

Conclusion
As a DNS and Internet expert I have some greater insight into this particular news item than the average commentator. Interested in DNS? Here’s another article I wrote about DNS: Google’s DNS Servers Rock!

References and related
Here’s a much more thorough write-up of the DNS Changer situation: http://comparitech.net/dnschangerprotection

Categories: Admin

Highlights from the AWS Summit in NYC April 19 2012

Intro
It’s interesting to attend popular IT events. I haven’t been to one for awhile. Just as informative as the presentations is what you can learn by observing the other participants. Here are my observations from the Amazon AWS Summit yesterday in NYC.

Amazon has some really knowledgeable managers addressing the audience. These guys had clearly mastered the details.

Learned by Observing Others
I’m a follower, not a leader, when it comes to consumer gadgets. At this event basically everyone had a smartphone. Blackberries were well-represented though not predominant. iPhones and Android phones seemed to be in great use (unfortunately I cannot tell one from the other). iPads were pretty common, too, as were small notebooks and the occasional standard laptop. Probably 15% of the audience had one of those devices. The conference provided WiFi access, which required a username/password to be entered in a web page. That was pretty important, as I could not get a cell signal otherwise.

I learned that people now take pictures of slides they want to remember for later. Guess I have to get better at using my Smartphone camera! It’s a great idea, really.

It used to be that these events would be very male-dominated. That’s still the case, but not quite as much. Our lunch tables held 10 people each and there was on average one woman per table, so about 10%.

Now about AWS Itself
Lots and lots of companies are doing cool things with Amazon cloud services: Pinterest, PBS, Exfm, Vox, even old-line companies like Shell. I don’t think any serious start-up would buy their own infrastructure today when Amazon has made it so easy and economical to put things in the cloud.

Amazon started the service in 2006 and thus is celebrating six years of cloud service. They have lowered prices 19 times. For me the most significant of those price drops came in early March, when it became competitive, no, more than competitive, against the pure hosting companies.

Amazon cloud services now count about 60 service offerings. I don’t claim to understand even most of them, but the ones I do seem really well thought-out. They really deserve to be that Magic Quadrant leader that Gartner paints them as, leading in both vision and ability to execute.

S3 is for storing objects. You don’t care how the object is stored. Your reference to the object will be a URL. One example actual usage: storing images. Each image stored as an object in S3. This makes it easy to begin to use a content distribution network (CDN).

I use EC2 so I have some familiarity with it. Users are supposed to have some familiarity with the author of their image. Oops. I do not. News to me: they have a facility to import VM images.

In virtual private cloud (VPC) the user gets to extend his own private address space into Amazon Cloud. Cool. There is a Direct Connect option to avoid traditional VPN tunnels. Security groups offer a decent way to enforce security but I don’t totally get it.

I’m interested in EtherIP. From this summit I learned of a company that has a product for that. The company is Astaro and the appliance is RED (Remote Ethernet Device). Cool.

One molecular discovery company chained together 30,000 Amazon CPU cores, ran them for three hours, for a total of about 100,000 hours of compute time, and only paid $5,000/hour. All that horsepower together makes for the 50th fastest supercomputer in the world. The equivalent equipment value was about $22,000,000.

I guess attendance was at least 2,000 people. There were lots of younger people there as well as an ecosystem of supporting third-party companies.

The Python language was mentioned three or four times. Its star is on the rise.

I came as a private user of AWS. This is my story. I was disappointed that in all their use cases and examples there was not one mention of individual users such as myself. It was all geared towards organizations. You get worried when you don’t see anyone like yourself out there. I even asked one of the speakers about that, but he said that they take users like me as a given; the hard work was getting the enterprises comfortable with it. I’m not entirely convinced.

Conclusion
Amazon has an exciting cloud offering and they get it. Their customers include probably the most interesting companies on the planet.

Categories: Admin, DNS, Proxy

The IT Detective Agency: Browsing Stopped Working on Internet-Connected Enterprise Laptops

Intro
Recall our elegant solution to DNS clobbering and use of a PAC file, which we documented here: The IT Detective Agency: How We Neutralized Nasty DNS Clobbering Before it Could Bite Us. Things were going smoothly with that kludge in place for months. Then suddenly it wasn’t. What went wrong and how to fix?? Read on.

The Details
We began hearing reports of increasing numbers of users not being able to access Internet sites on their enterprise laptops when directly connected to the Internet. Internet Explorer gave that cryptic error to the effect of Internet Explorer cannot display the web page. This was a real problem, and not a mere inconvenience, in those cases where a user was on a hotel Internet system that required a web page sign-on – that sign-on page itself could not be displayed, giving the same error message. For simple use at home the error may not have been fatal.

But we had this working. What the…?

I struggled as Rop demoed the problem for me with a user’s laptop. I couldn’t deny it because I saw it with my own eyes and saw that the configuration was correct. Correct as in how we wanted it to be, not correct as in working. Basically all web pages were timing out after 15 seconds or so and displaying cannot display the web page. The chief setting is the PAC file, which this enterprise uses on their Intranet.
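For reference, a PAC file is just a JavaScript function that the browser evaluates for every URL to decide how to fetch it. A minimal sketch, with a placeholder proxy name, looks like this:

function FindProxyForURL(url, host) {
    if (isPlainHostName(host))
        return "DIRECT";
    return "PROXY proxy.example.com:8080; DIRECT";
}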

On the Internet (see above-mentioned link) the PAC file was more of an annoyance and we had aliased the DNS value to 127.0.0.1 to get around DNS clobbering by ISPs.

While I was talking about the problem to a colleague I thought to look for a web server on the affected laptop. Yup, there it was:

C:\Users\drj>netstat -an|more

Active Connections

  Proto  Local Address          Foreign Address        State
  TCP    0.0.0.0:80             0.0.0.0:0              LISTENING
  TCP    0.0.0.0:135            0.0.0.0:0              LISTENING
  ...

What the…? Why is there a web server running on port 80? How will it respond to a PAC file request? I quickly got some hints by hitting my own laptop with curl:

$ curl -i 192.168.3.4/proxy.pac

HTTP/1.1 404 Not Found
Content-Type: text/html; charset=us-ascii
Server: Microsoft-HTTPAPI/2.0
Date: Tue, 17 Apr 2012 18:39:30 GMT
Connection: close
Content-Length: 315


Not Found

HTTP Error 404. The requested resource is not found.

So the server is Microsoft-HTTPAPI, which upon investigation seems to be the Microsoft Web Deployment Agent Service (MsDepSvc).

The main point is that I don’t remember that being there in the past. I felt its presence, probably a new “feature,” explained the current problem. What to do about it, however??

Since this is an enterprise, not a small shop with a couple PCs, turning off MsDepSvc is not a realistic option. It’s probably used for peer-to-peer software distribution.

Hmm. Let’s review why we think our original solution worked in the first place. It didn’t work so well when the DNS was clobbered by my ISP. Why? I think because the ISP put up a web server when it encountered an NXDOMAIN DNS response, and that web server gave a 404 not found error when the browser searched for the PAC file. Pointing the DNS entry at the loopback interface, 127.0.0.1, gave the browser a valid IP, one that it would connect to and quickly receive a TCP RST (reset) from. Then it would happily conclude there was no way to reach the PAC file, not use it, and try DIRECT connections to Internet sites. That’s my theory and I’m sticking to it!

In light of all this information the following option emerged as most likely to succeed: set the Internet value of the DNS entry for the PAC file to a valid IP, and specifically, one that would send a TCP RST (essentially meaning that the IP is reachable but there is no listener on TCP port 80).
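You can test the theory yourself with a few lines of Python. A closed port answers with a fast RST (“connection refused”), while a blackholed IP just times out, which is what was hanging the browser. The IP below is a placeholder from the documentation range; substitute the one behind your PAC file hostname:

#!/usr/bin/python
# Distinguish a TCP RST (fast refusal) from a timeout on port 80
import errno, socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(5)
try:
    s.connect(("192.0.2.1", 80))
    print("port 80 is listening - a PAC file could actually be fetched")
except socket.timeout:
    print("timeout - the browser would hang before giving up on the PAC file")
except socket.error as e:
    if e.errno == errno.ECONNREFUSED:
        print("got RST - the browser quickly gives up and goes DIRECT")
    else:
        raise
finally:
    s.close()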

We tried it and it seemed to work for me and my colleague. We no longer had the problem of not being able to connect to Internet sites with the PAC file configured and directly connected to the Internet.

I noticed however that once I got on the enterprise network via VPN I wasn’t able to connect to Internet sites right away. After about five minutes I could.

My theory about that is that the TTL on my PAC file DNS entry was too long. The TTL was 30 minutes; I shortened it to five minutes. Because when you think about it, that DNS value gets cached by the PC and retained even after it transitions to Intranet-connected.

I haven’t retested, but I think that adjustment will help.

Conclusion
I also haven’t gotten a lot of feedback from this latest fix, but I’m feeling pretty good about it.

Case: again mostly solved.

Hey, don’t blame me about the ambiguity. I never hear back from the users when things are working 🙂

References
A closely related case involving Verizon “clobbering” TCP RST packets also bit us.