Categories
CentOS DNS Linux Network Technologies Raspberry Pi Security Web Site Technologies

Roll your own dynamic DNS update service

Intro
I know my old Cisco router only has built-in support for two dynamic DNS services, dyndns.org and TZO.com. Nowadays you have to pay for those, if they even work (the web site domain names seem to have changed, though perhaps they still honor the old ones. Or perhaps not!). Maybe this could be fixed by a firmware upgrade (to hopefully get more choices, including a free one), by a newer router, or by running DD-WRT. I didn’t do any of those things. Being a network person at heart, I wrote my own. The samples out there on the Internet needed some updating, so I am sharing my recipe. It wasn’t too hard to pull off.

What I used
– GoDaddy DNS hosting (basically any will do)
– my Amazon AWS virtual server running CentOS, where I have sudo access
– my home Raspberry Pi
– a tiny bit of php programming
– my networking skills for debugging

As I have prior experience with all these items this project was right up my alley.

Delegating our DDNS domain from GoDaddy
Just create a nameserver (NS) record in the domain, say drj.com, for a subdomain called, say, raspi, which you delegate to your AWS server. Following the example, the subdomain would be raspi.drj.com, whose nameserver is drj.com.
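
In zone-file terms the delegation amounts to an NS record like the following in the drj.com zone (shown only as an illustration of what GoDaddy creates for you; it matches the NS record in the zone file further below):

raspi    IN    NS    drj.com.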


DNS Setup on an Amazon AWS server

/etc/named.conf

//
// named.conf
//
// Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
// server as a caching only nameserver (as a localhost DNS resolver only).
//
// See /usr/share/doc/bind*/sample/ for example named configuration files.
//
 
options {
//      listen-on port 53 { 127.0.0.1; };
//      listen-on port 53;
        listen-on-v6 port 53 { ::1; };
        directory       "/var/named";
        dump-file       "/var/named/data/cache_dump.db";
        statistics-file "/var/named/data/named_stats.txt";
        memstatistics-file "/var/named/data/named_mem_stats.txt";
        allow-query     { any; };
        recursion no;
 
        dnssec-enable yes;
        dnssec-validation yes;
        dnssec-lookaside auto;
 
        /* Path to ISC DLV key */
        bindkeys-file "/etc/named.iscdlv.key";
 
        managed-keys-directory "/var/named/dynamic";
};
 
logging {
        channel default_debug {
                file "data/named.run";
                severity dynamic;
        };
};
 
zone "." IN {
        type hint;
        file "named.ca";
};
 
include "/etc/named.rfc1912.zones";
include "/var/named/dynamic.conf";
include "/etc/named.root.key";

/var/named/dynamic.conf

zone "raspi.drj.com" {
  type master;
  file "/var/named/db.raspi.drj.com";
// designed to work with nsupdate -l used on same system - DrJ 10/2016
// /var/run/named/session.key
  update-policy local;
};

/var/named/db.raspi.drj.com

$ORIGIN .
$TTL 1800       ; 30 minutes
raspi.drj.com      IN SOA  drj.com. postmaster.drj.com. (
                                2016092812 ; serial
                                1700       ; refresh (28 minutes 20 seconds)
                                1700       ; retry (28 minutes 20 seconds)
                                1209600    ; expire (2 weeks)
                                600        ; minimum (10 minutes)
                                )
                        NS      drj.com.
$TTL 3600       ; 1 hour
                        A       125.125.73.145

Named re-starting program
Want to make sure your named restarts if it happens to die? nanny.pl is a good, simple monitor to do that. Here is the version I use on my server. Note the customized variables towards the top.

#!/usr/bin/perl
#
# Copyright (C) 2004, 2007, 2012  Internet Systems Consortium, Inc. ("ISC")
# Copyright (C) 2000, 2001  Internet Software Consortium.
#
# Permission to use, copy, modify, and/or distribute this software for any
# purpose with or without fee is hereby granted, provided that the above
# copyright notice and this permission notice appear in all copies.
#
# THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH
# REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY
# AND FITNESS.  IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT,
# INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM
# LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE
# OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
# PERFORMANCE OF THIS SOFTWARE.
 
# $Id: nanny.pl,v 1.11 2007/06/19 23:47:07 tbox Exp $
 
# A simple nanny to make sure named stays running.
 
$pid_file_location = '/var/run/named/named.pid';
$nameserver_location = 'localhost';
$dig_program = 'dig';
$named_program =  '/usr/sbin/named -u named';
 
fork() && exit();
 
for (;;) {
        $pid = 0;
        open(FILE, $pid_file_location) || goto restart;
        $pid = <FILE>;
        close(FILE);
        chomp($pid);
 
        $res = kill 0, $pid;
 
        goto restart if ($res == 0);
 
        $dig_command =
               "$dig_program +short . \@$nameserver_location > /dev/null";
        $return = system($dig_command);
        goto restart if ($return == 9);
 
        sleep 30;
        next;
 
 restart:
        if ($pid != 0) {
                kill 15, $pid;
                sleep 30;
        }
        system ($named_program);
        sleep 120;
}

The PHP updating program myip-update.php

<?php
# DrJ: lifted from http://pablohoffman.com/dynamic-dns-updates-with-a-simple-php-script
# but with some security improvements
# 10/2016
# PHP script for very simple dynamic DNS updates
#
# this script was published in http://pablohoffman.com/articles and
# released to the public domain by Pablo Hoffman on 27 Aug 2006
 
# CONFIGURATION BEGINS -------------------------------------------------------
# define password here
$mysecret = 'myBigFatsEcreT';
# CONFIGURATION ENDS ---------------------------------------------------------
 
 
$ip = $_SERVER['REMOTE_ADDR'];
$host = $_GET['host'];
$secret = $_POST['secret'];
$zone = $_GET['zone'];
$tmpfile = trim(`mktemp /tmp/nsupdate.XXXXXX`);
 
if ((!$host) or (!$zone) or (!($mysecret == $secret))) {
    echo "FAILED";
    unlink($tmpfile);
    exit;
}
 
$oldip = trim(`dig +short $host.$zone @localhost`);
if ($ip == $oldip) {
    echo "UNCHANGED. ip: $ip\n";
    unlink($tmpfile);
    exit;
}
 
echo "$ip - $oldip";
 
$nsucmd = "update delete $host.$zone A
update add $host.$zone 3600 A $ip
send
";
 
$fp = fopen($tmpfile, 'w');
fwrite($fp, $nsucmd);
fclose($fp);
`sudo nsupdate -l $tmpfile`;
unlink($tmpfile);
echo "OK ";
echo `date`;
?>

In the above file I added the “sudo” after awhile. See explanation further down below.

Raspberry Pi requirements
I’ve assumed you can run your Pi 24 x 7, constantly and consistently on your network.

Crontab entry on the Raspberry Pi
Edit the crontab file for periodically checking your IP on the Pi and updating external DNS if it has changed by doing this:

$ crontab -e
and adding the line below:

# my own method of dynamic update - DrJ 10/2016
0,10,20,30,40,50 * * * * /usr/bin/curl -s -k -d 'secret=myBigFatsEcreT' 'https://drj.com/myip-update.php?host=raspi&zone=drj.com' >> /tmp/ddns 2>&1

A few highlights
Note that I’ve switched to using nsupdate -l on the local server. This is more secure than the previously published solution, which simply allowed updates from localhost. As far as I can tell localhost updates can be spoofed and so should be considered insecure in a modern infrastructure. I learned a lot by running nsupdate -D -l on my AWS server and observing what happens.
And note that I changed the location of the secret. The old solution had the secret embedded in the URL of a GET request, which means it would also be recorded with every single request in the web server’s access log. That’s not a good idea. I switched it to a POSTed variable so that it doesn’t show up in the access log. This is done with the -d switch of curl.

Contents of temporary file
Here are example contents. This is useful when you’re trying to run nsupdate from the command line.

update delete raspi.drj.com A
update add raspi.drj.com 3600 A 51.32.108.37
send
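
To test by hand on the DNS server, feed a file with those contents to nsupdate and verify the result with dig (the IP shown is just the example address from the file above):

$ sudo nsupdate -l /tmp/nsupdate.xxxxx
$ dig +short raspi.drj.com @localhost
51.32.108.37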


Permissions problems

If you see something like this on your DNS server:

$ ll /var/run/named

total 8
-rw-r--r-- 1 named www-data   6 Nov  6 03:15 named.pid
-rw------- 1 named www-data 102 Oct 24 09:42 session.key

your attempt to run nsupdate by your web server will be foiled and produce something like this:

$ /usr/bin/nsupdate -l /tmp/nsupdate.LInUmo

06-Nov-2016 17:14:14.780 none:0: open: /var/run/named/session.key: permission denied
can't read key from /var/run/named/session.key: permission denied

The solution may be to permit group read permission:

$ cd /var/run/named; sudo chmod g+r session.key

and make the group owner of the file your webserver user ID (which I’ve already done here). I’m still working this part out…

That approach doesn’t seem to “stick,” so I came up with this other approach. Put your web server user in sudoers to allow it to run nsupdate (my web server user is www-data for these examples):

Cmnd_Alias     NSUPDATE = /usr/bin/nsupdate
# allow web server to run nsupdate
www-data ALL=(root) NOPASSWD: NSUPDATE

But you may get the dreaded

sudo: sorry, you must have a tty to run sudo

if you manage to figure out how to turn on debugging.

So if your sudoers has a line like this:

Defaults    requiretty

you will need lines like this:

# turn off tty requirement only for the www-data user
Defaults:www-data !requiretty

Debugging
Of course for debugging I commented out the unlink line in the PHP update file and ran the
nsupdate -l /tmp/nsupdate.xxxxx
by hand as user www-data.

While working through some of the errors that wasn’t verbose enough, so I added debugging arguments:

$ nsupdate -D -d -l /tmp/nsupdate.xxxxx

When that began to work, yet it still wasn’t working when called via the webserver, I ran the above command from within PHP, recording the output to a file:

...
`sudo nsupdate -d -D -l $tmpfile > /tmp/nsupdate-debug 2>&1`

That turned out to be extremely informative.

Conclusion
We have shown how to combine a bunch of simple networking tools to create your own DDNS service. The key elements are a Raspberry Pi and your own virtual server at Amazon AWS. We have built upon previous published solutions to this problem and made them more secure in light of the growing sophistication of the bad guys. Let me know if there is interest in an inexpensive commercial service.

References and related

nanny.pl write-up: https://www.safaribooksonline.com/library/view/dns-bind/0596004109/ch05s09.html

Categories
Consumer Tech Linux

What I’m trying out now – Amazon Fire HD 8 Tablet

Intro
I had previously praised an HP Touchpad Tablet, but that was another time and times have moved on. Now I’m trying the new Fire HD 8 Tablet and am quite impressed. It’s not perfect however.

Here are some features I really like.

Pros
Long battery life – the HP Touchpad died too quickly – after a couple hours – giving me recharge anxiety
Bright display
Lightweight and sufficiently small – I often carry it around from room to room in the house
High-def resolution: 1280 x 800
Reasonably good app selection
Quad processor makes it responsive and able to run lots of apps at the same time
Switching between apps is pretty easy

Cons
No Groupme app
no X-windows server
no ability to cast, even to Amazon Fire TV Stick!
Speedtest does not work
Home screen is locked to Amazon advertising
Very unresponsive to swipes – in general very slow

Apps and features I like
Serverauditor – gives me ssh access to my Raspberry Pi and Amazon hosts
Weatherbug
NY Times
Netflix
Hulu
Silk Browser
Calculator is pretty good
Maps is alright
Fitbit – and the Bluetooth actually works with my Charge device
stereo speakers, but not the best dynamic range
prints to WiFi printer, e.g., Canon printers
Bluetooth enabled – can pump audio out to an external Bluetooth speaker

After several months of use I am less impressed. The thing bogs down with my palette of apps and is slow as a dog. I need a minimum of three swipes to unlock the Amazon home screen advertisement, which gets really tiring really fast. So, this tablet is a no-go. Sad.

References and related
My old HP Touchpad article, just for the historical reference

Categories
DNS Linux Perl Raspberry Pi Web Site Technologies

Roll your own domain drop catching service using GoDaddy

Intro
I’m after a particular domain and have been for years. But as a matter of pride I don’t want to overpay for it, so I don’t want to go through an auction. There are services that can help grab a DNS domain immediately after it expires, but they all want $$. That may make sense for high-demand domains. Mine is pretty obscure. I want to grab it quickly – perhaps within a few seconds after it becomes available, but I don’t expect any competition for it. That is a description of domain drop catching.

Since I am already using GoDaddy as my registrar I thought I’d see if they have a domain catching service. They don’t, which is strange because they have other specialized domain services such as domain broker. They do have a service designed for much the same purpose, however, called backorder. That creates an auction bid for the domain before it has expired. The cost isn’t too bad, but since I started down a different path I will roll my own. Perhaps they have an API which can be used to create my own domain catcher? It turns out they do!

It involves understanding how to read a JSON data file, which is new to me, but otherwise it’s not too bad.

The domain lifecycle
This graphic from ICANN illustrates it perfectly for your typical global top-level domain such as .com, .net, etc:
[ICANN infographic: the gTLD lifecycle]

To put it into words, there is the
initial registration,
optional renewals,
expiration date,
auto-renew grace period of 0 – 45 days,
redemption grace period of 30 days,
pending delete of 5 days, and then
it’s released and available.

So in domain drop catching we are keenly interested in being fully prepared for the five-day pending delete window. From an old discussion I’ve read that the precise time .com domains are released is usually between 2 and 3 PM EST.

A word about the GoDaddy developer site
It’s developer.godaddy.com. It looks like one day it will be a great site, but for now it is wanting in some areas. Most of the menu items are duds and are placeholders. Really there are only three (mostly) working sections: get started, documentation and demo. Get started is only a few words and one slender snippet of Ajax code, and the demo itself is also extremely limited, so the only real resource they provide is Documentation. Documentation is designed as interactive documentation: you can try out the functions with your own data. You run it and it shows you all the needed request headers and data as well as the received response. The thing is that it’s very finicky. It’s supposed to show all the available functions, but I couldn’t get it to work under Firefox, and with Internet Explorer/Edge it only worked about half the time. It seems to help to access it with a newly launched browser. The documentation, as good as it is, leaves some things unsaid. I have found:

https://api.ote-godaddy.com/ – use for TEST. Maybe ote stands for optional test environment?
https://api.godaddy.com/ – for production (what I am calling PROD)

The TEST environment does not require authentication for some things that PROD does. This shell script for checking available domains, which I call available-test.sh, works in TEST but not in PROD:

#!/bin/sh
# pass domain as argument
# apparently no AUTH is required for this one
curl -k 'https://api.ote-godaddy.com/v1/domains/available?domain='$1'&checkType=FAST&forTransfer=false'

In PROD I had to insert the authorization information – the key and secret they showed me on the screen. I call this script available.sh.

#!/bin/sh
# pass domain as argument
curl -s -k -H 'Authorization: sso-key *******8m_PwFAffjiNmiCUrKe******:**FF73L********' 'https://api.godaddy.com/v1/domains/available?domain='$1'&checkType=FULL&forTransfer=false'

I found that my expiring domain produced different results about five days after expiring if I used checkType of FAST versus checkType of FULL – and FAST was wrong. So I learned you have to use FULL to get an answer you can trust!

Example usage of an available domain

$ ./available.sh dr-johnstechtalk.com

{"available":true,"domain":"dr-johnstechtalk.com","definitive":false,"price":11990000,"currency":"USD","period":1}

2nd example – a non-available domain
$ ./available.sh drjohnstechtalk.com

{"available":false,"domain":"drjohnstechtalk.com","definitive":true,"price":11990000,"currency":"USD","period":1}

Example JSON file
I had to do a lot of search and replace to preserve my anonymity, but I feel this post wouldn’t be complete without showing the real contents of my JSON file I am using for both validate, and, hopefully, as the basis for my API-driven domain purchase:

{
  "domain": "dr-johnstechtalk.com",
  "renewAuto": true,
  "privacy": false,
  "nameServers": [
  ],
  "consent": {
    "agreementKeys": ["DNRA"],
    "agreedBy": "50.17.188.196",
    "agreedAt": "2016-09-29T16:00:00Z"
  },
  "period": 1,
  "contactAdmin": {
    "nameFirst": "Dr","nameLast": "John",
    "email": "[email protected]",
    "addressMailing": {
      "address1": "555 Piney Drive",
      "city": "Smallville","state": "New Jersey","postalCode": "55555",
      "country": "US"
    },
    "phone": "+1.5555551212"
  },
  "contactBilling": {
    "nameFirst": "Dr","nameLast": "John",
    "email": "[email protected]",
    "addressMailing": {
      "address1": "555 Piney Drive",
      "city": "Smallville","state": "New Jersey","postalCode": "55555",
      "country": "US"
    },
    "phone": "+1.5555551212"
  },
  "contactRegistrant": {
    "nameFirst": "Dr","nameLast": "John",
    "email": "[email protected]",
    "phone": "+1.5555551212",
    "addressMailing": {
      "address1": "555 Piney Drive",
      "city": "Smallville","state": "New Jersey","postalCode": "55555",
      "country": "US"
    }
  },
  "contactTech": {
    "nameFirst": "Dr","nameLast": "John",
    "email": "[email protected]",
    "phone": "+1.5555551212",
    "addressMailing": {
      "address1": "555 Piney Drive",
      "city": "Smallville","state": "New Jersey","postalCode": "55555",
      "country": "US"
    }
  }
}

Note the agreementKeys value: DNRA. GoDaddy doesn’t document this very well, but that is what you need to put there! Note also that the nameservers are left empty. I asked GoDaddy and that is what they advised to do. The other values are pretty much what you’d expect. I used my own server’s IP address for agreedBy – use your own IP. I don’t know how important it is to get the agreedAt date close to the current time. I’m going to assume it should be within 24 hours of the current time.

How do we test this JSON input file? I wrote a validate script for that, which I call validate.sh.

#!/bin/sh
# DrJ 9/2016
# godaddy-json-register was built using GoDaddy's documentation at https://developer.godaddy.com/doc#!/_v1_domains/validate
jsondata=`tr -d '\n' < godaddy-json-register`
 
curl -i -k -H 'Authorization: sso-key *******8m_PwFAffjiNmiCUrKe******:**FF73L********' -H 'Content-Type: application/json' -H 'Accept: application/json' -d "$jsondata" https://api.godaddy.com/v1/domains/purchase/validate

Run the validate script
$ ./validate.sh

HTTP/1.1 100 Continue
 
HTTP/1.1 100 Continue
Via: 1.1 api.godaddy.com
 
HTTP/1.1 200 OK
Date: Thu, 29 Sep 2016 20:11:33 GMT
X-Powered-By: Express
Vary: Origin,Accept-Encoding
Access-Control-Allow-Credentials: true
Content-Type: application/json; charset=utf-8
ETag: W/"2-mZFLkyvTelC5g8XnyQrpOw"
Via: 1.1 api.godaddy.com
Transfer-Encoding: chunked

Revised versions of the above scripts
So we can pass the domain name as an argument I revised all the scripts. Also, I provide an agreedAt date which is current.

The data file: godaddy-json-register

{
  "domain": "DOMAIN",
  "renewAuto": true,
  "privacy": false,
  "nameServers": [
  ],
  "consent": {
    "agreementKeys": ["DNRA"],
    "agreedBy": "50.17.188.196",
    "agreedAt": "DATE"
  },
  "period": 1,
  "contactAdmin": {
    "nameFirst": "Dr","nameLast": "John",
    "email": "[email protected]",
    "addressMailing": {
      "address1": "555 Piney Drive",
      "city": "Smallville","state": "New Jersey","postalCode": "55555",
      "country": "US"
    },
    "phone": "+1.5555551212"
  },
  "contactBilling": {
    "nameFirst": "Dr","nameLast": "John",
    "email": "[email protected]",
    "addressMailing": {
      "address1": "555 Piney Drive",
      "city": "Smallville","state": "New Jersey","postalCode": "55555",
      "country": "US"
    },
    "phone": "+1.5555551212"
  },
  "contactRegistrant": {
    "nameFirst": "Dr","nameLast": "John",
    "email": "[email protected]",
    "phone": "+1.5555551212",
    "addressMailing": {
      "address1": "555 Piney Drive",
      "city": "Smallville","state": "New Jersey","postalCode": "55555",
      "country": "US"
    }
  },
  "contactTech": {
    "nameFirst": "Dr","nameLast": "John",
    "email": "[email protected]",
    "phone": "+1.5555551212",
    "addressMailing": {
      "address1": "555 Piney Drive",
      "city": "Smallville","state": "New Jersey","postalCode": "55555",
      "country": "US"
    }
  }
}

validate.sh

#!/bin/sh
# DrJ 10/2016
# godaddy-json-register was built using GoDaddy's documentation at https://developer.godaddy.com/doc#!/_v1_domains/validate
# pass domain as argument
# get date into accepted format
domain=$1
date=`date -u --rfc-3339=seconds|sed 's/ /T/'|sed 's/+.*/Z/'`
jsondata=`tr -d '\n' < godaddy-json-register`
jsondata=`echo $jsondata|sed 's/DATE/'$date'/'`
jsondata=`echo $jsondata|sed 's/DOMAIN/'$domain'/'`
#echo date is $date
#echo jsondata is $jsondata
curl -i -k -H 'Authorization: sso-key *******8m_PwFAffjiNmiCUrKe******:**FF73L********' -H 'Content-Type: application/json' -H 'Accept: application/json' -d "$jsondata" https://api.godaddy.com/v1/domains/purchase/validate

available.sh
No change. See listing above.

purchase.sh
Exact same as validate.sh, with just a slightly different URL. I need to test that it really works, but based on my reading I think it will.

#!/bin/sh
# DrJ 10/2016
# godaddy-json-register was built using GoDaddy's documentation at https://developer.godaddy.com/doc#!/_v1_domains/purchase
# pass domain as argument
# get date into accepted format
domain=$1
date=`date -u --rfc-3339=seconds|sed 's/ /T/'|sed 's/+.*/Z/'`
jsondata=`tr -d '\n' < godaddy-json-register`
jsondata=`echo $jsondata|sed 's/DATE/'$date'/'`
jsondata=`echo $jsondata|sed 's/DOMAIN/'$domain'/'`
#echo date is $date
#echo jsondata is $jsondata
curl -s -i -k -H 'Authorization: sso-key *******8m_PwFAffjiNmiCUrKe******:**FF73L********' -H 'Content-Type: application/json' -H 'Accept: application/json' -d "$jsondata" https://api.godaddy.com/v1/domains/purchase

Putting it all together
Here’s a looping script I call loop.pl. I switched to perl because it’s easier to do certain control operations.

#!/usr/bin/perl
#DrJ 10/2016
$DEBUG = 0;
$status = 0;
open STDOUT, '>', "loop.log" or die "Can't redirect STDOUT: $!";
open STDERR, ">&STDOUT"     or die "Can't dup STDOUT: $!";
 
select STDERR; $| = 1;      # make unbuffered
select STDOUT; $| = 1;      # make unbuffered
# edit this and change to your about-to-expire domain
$domain = "dr-johnstechtalk.com";
while ($status != 200) {
# show that we're alive and working...
  print "Now it's ".`date` if $i++ % 10 == 0;
  $hr = `date +%H`;
  chomp($hr);
# run loop more aggressively during times of day we think Network Solutions releases domains back to the pool, esp. around 2 - 3 PM EST
  $sleep = $hr > 11 && $hr < 16 ? 1 : 15;
  print "Hr,sleep: $hr,$sleep\n" if $DEBUG;
  $availRes = `./available.sh $domain`;
# {"available":true,"domain":"dr-johnstechtalk.com","definitive":false,"price":11990000,"currency":"USD","period":1}
  print "$availRes\n" if $DEBUG;
  ($available) = $availRes =~ /^\{"available":([^,]+),/;
  print "$available\n" if $DEBUG;
  if ($available eq "false") {
    print "test comparison OP for false result\n" if $DEBUG;
  } elsif ($available eq "true") {
# available value of true is extremely unreliable with many false positives. Confirm availability by making a 2nd call
    print "available.sh results: $availRes\n";
    $availRes = `./available.sh $domain`;
    print "available.sh re-test results: $availRes\n";
    ($available2) = $availRes =~ /^\{"available":([^,]+),/;
    next if $available2 eq "false";
# We got two available eq true results in a row so let's try to buy it!
    print "$domain is very likely available. Trying to buy it at ".`date`;
    open(BUY,"./purchase.sh $domain|") || die "Cannot run ./purchase.sh $domain!!\n";
    while(<BUY>) {
# print out each line so we can analyze what happened
      print ;
# we got it if we got back
# HTTP/1.1 200 OK
      if (/1.1 200 OK/) {
        print "We just bought $domain at ".`date`;
        $status = 200;
     }
    } # end of loop over results of purchase
    close(BUY);
    print "\n";
    exit if $status == 200;
  } else {
    print "available is neither false nor true: $available\n";
  }
  sleep($sleep);
}

Running the loop script
$ nohup ./loop.pl > loop.log 2>&1 &
Stopping the loop script
$ kill -9 %1

Description of loop.pl
I gotta say this loop script started out as a much simpler script. I fortunately started on it many days before my desired domain actually became available so I got to see and work out all the bugs. Contributing to the problem is that GoDaddy’s API results are quite unreliable. I was seeing a lot of false positives – almost 20%. So I decided to require two consecutive calls to available.sh to return true. I could have required available true and definitive true, but I’m afraid that will make me late to the party. The API is not documented to that level of detail so there’s no help there. But so far what I have seen is that when available incorrectly returns true, simultaneously definitive becomes false, whereas all other times definitive is true.
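
If I ever decide to also require definitive to be true, a one-line tweak to the parsing would capture both fields. This is just a sketch, not part of the running script above:

  ($available, $definitive) = $availRes =~ /"available":([^,]+),.*"definitive":([^,]+),/;
  print "available,definitive: $available,$definitive\n" if $DEBUG;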

Results of running an earlier and simpler version of loop.pl

This shows all manner of false positives. But at least it never allowed me to buy the domain when it wasn’t available.

Now it's Wed Oct  5 15:20:01 EDT 2016
Now it's Wed Oct  5 15:20:19 EDT 2016
Now it's Wed Oct  5 15:20:38 EDT 2016
available.sh results: {"available":true,"domain":"dr-johnstechtalk.com","definitive":false,"price":11990000,"currency":"USD","period":1}
dr-johnstechtalk.com is available. Trying to buy it at Wed Oct  5 15:20:46 EDT 2016
HTTP/1.1 100 Continue
 
HTTP/1.1 100 Continue
Via: 1.1 api.godaddy.com
 
HTTP/1.1 422 Unprocessable Entity
Date: Wed, 05 Oct 2016 19:20:47 GMT
X-Powered-By: Express
Vary: Origin,Accept-Encoding
Access-Control-Allow-Credentials: true
Content-Type: application/json; charset=utf-8
ETag: W/"7d-O5Dw3WvJGo8h30TqR7j8zg"
Via: 1.1 api.godaddy.com
Transfer-Encoding: chunked
 
{"code":"UNAVAILABLE_DOMAIN","message":"The specified `domain` (dr-johnstechtalk.com) isn't available for purchase","name":"ApiError"}
Now it's Wed Oct  5 15:20:58 EDT 2016
Now it's Wed Oct  5 15:21:16 EDT 2016
Now it's Wed Oct  5 15:21:33 EDT 2016
available.sh results: {"available":true,"domain":"dr-johnstechtalk.com","definitive":false,"price":11990000,"currency":"USD","period":1}
dr-johnstechtalk.com is available. Trying to buy it at Wed Oct  5 15:21:34 EDT 2016
HTTP/1.1 100 Continue
 
HTTP/1.1 100 Continue
Via: 1.1 api.godaddy.com
 
HTTP/1.1 422 Unprocessable Entity
Date: Wed, 05 Oct 2016 19:21:36 GMT
X-Powered-By: Express
Vary: Origin,Accept-Encoding
Access-Control-Allow-Credentials: true
Content-Type: application/json; charset=utf-8
ETag: W/"7d-O5Dw3WvJGo8h30TqR7j8zg"
Via: 1.1 api.godaddy.com
Transfer-Encoding: chunked
 
{"code":"UNAVAILABLE_DOMAIN","message":"The specified `domain` (dr-johnstechtalk.com) isn't available for purchase","name":"ApiError"}
Now it's Wed Oct  5 15:21:55 EDT 2016
Now it's Wed Oct  5 15:22:12 EDT 2016
Now it's Wed Oct  5 15:22:30 EDT 2016
available.sh results: {"available":true,"domain":"dr-johnstechtalk.com","definitive":false,"price":11990000,"currency":"USD","period":1}
dr-johnstechtalk.com is available. Trying to buy it at Wed Oct  5 15:22:30 EDT 2016
...

These results show why I had to further refine the script to reduce the frequent false positives.

Review
What have we done? Our looping script, loop.pl, loops more aggressively during the time of day we think Network Solutions releases expired .com domains (around 2 PM EST). But just in case we’re wrong about that we’ll run it anyway at all hours of the day, but just not as quickly. So during the aggressive period we sleep just one second between calls to available.sh. When the domain finally does become available we call purchase.sh on it and exit and we write some timestamps and the domain we’ve just registered to our log file.

Performance
Miserable. This API seems tuned for relative ease-of-use, not speed. The validate call often takes, oh, say 40 seconds to return! I’m sure the purchase call will be no different. For a domainer that’s a lifetime. So any strategy that relies on speed had better turn to a registrar that’s tuned for it. GoDaddy, I think, is aiming more at resellers of their services.
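
If you want to measure it yourself, the shell’s time builtin will do (the domain is just the example from earlier):

$ time ./validate.sh dr-johnstechtalk.com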

Don’t have a linux environment handy?
Of course I’m using my own Amazon AWS server for this, but that needn’t be a barrier to entry. I could have used one of my Raspberry Pi’s. Probably even Cygwin on a Windows PC could be made to work.

Appendix A
How to remove all newline characters from your JSON file

Let’s say you have a nice JSON file which was created for you from the Documentation exercises called godaddy-json-register. It will contain lots of newline (“\n”) characters, assuming you’re using a Linux server. Remove them and put the output into a file called compact-json:

$ tr -d '\n' < godaddy-json-register > compact-json

I like this because then I can still use curl rather than wget to make my API calls.

Appendix B
What an expiring domain looks like in whois

Run this from a linux server
$ whois <expiring-domain.com>

Domain Name: expiring-domain.com
...
Creation Date: 2010-09-28T15:55:56Z
Registrar Registration Expiration Date: 2016-09-27T21:00:00Z
...
Domain Status: clientDeleteHold
Domain Status: clientDeleteProhibited
Domain Status: clientTransferProhibited
...

You see that Domain Status: clientDeleteHold? You don’t get that for regular domains whose registration is still good. They’ll usually have the two lines I show below that, but not that one. This is shown for my desired domain just a few days after its official expiration date.

2020 update

Four years later, still hunting for that domain – I am very patient! So I dusted off the program described here. Surprisingly, it all still works. Except maybe the JSON file. The only thing wrong with that was the lack of nameservers. I added some random GoDaddy nameservers and it seemed all good.

Conclusion
We show that GoDaddy’s API works and we provide simple scripts which can automate what is known as domain drop catching. This approach should attempt to register a domain within a couple of seconds of its being released – if we’ve done everything right. The GoDaddy API results are a little unstable however.

References and related
If you don’t mind paying, Fabulous.com has a domain drop catching service.
ICANN’s web site with the domain lifecycle infographic.
GoDaddy’s API documentation: http://developer.godaddy.com/
More about Raspberry Pi: http://raspberrypi.org/
I really wouldn’t bother with Cygwin – just get your hands on a real Linux environment.
Curious about some of the curl options I used? Just run curl --help. I left out the description of the switches I use because it didn’t fit into the narrative.
Something about my Amazon AWS experience from some years ago.
All the perl tricks I used would take another blog post to explain. It’s just stuff I learned over the years so it didn’t take much time at all.
People who buy and sell domains for a living are called domainers. They are professionals and my guide will not make you competitive with them.

Categories
Exchange Online Internet Mail

Web forms: creating today’s Open Relays

Intro
Maybe it’s just me, but I’ve been receiving a lot of non-delivery reports in the past couple weeks. I happen to receive email addressed to webmaster@<large company>.com, and I am also knowledgeable about how email works. This puts me in the rare position of knowing how to interpret the meaning of the many NDRs I started to see.

The details
This picture shows the actual state of my inbox if I look for NDRs by searching for the word undeliverable:

[screenshot: inbox search results for “undeliverable”]

Included is the preview panel showing the contents of one of those messages from Wells Fargo.

Actions
Personally, I can live with this level of clutter in my inbox – I actually get hundreds of messages a day like many IT folks. But I put this into a larger context. As a responsible Netizen (not sure if anyone really uses that term, but it was a useful one!) I feel a responsibility to keep the Internet running in an orderly way and stop the abusers as quickly as possible. After all, I owe my job to a well-functioning Internet. So rather than do the easy thing – write rules to shunt these aside, let the Clutter feature deal with them, or mark them as junk – I decided to try to go after the source. Actually sources, because you may have noticed that there are multiple domains involved.

First analysis
I thought about how these emails could have originated. I thought, OK, someone spoofs the email address webmaster@<large company>.com, they find a reputable mail server somewhere on the Internet; this mail server has to be an open relay. But the first few mail servers I checked out on Mxtoolbox didn’t show any problems or complaints whatsoever – sterling reputations. Turns out that way of thinking is soooo 20th century. Excusable in my case because I was already running sendmail in the 20th century, and in those days that was one of the biggest worries – running an open relay.

Upon further reflection
Someone mentioned it’s probably a web form and I thought, Yes. Did you ever see those forms, like, Share this web page, that allow you to enter your email address and send something to a friend – which they will receive as apparently coming from you? What if in addition you could add your own custom message? Well, that’s the modern way of turning a mail server into an open relay.

The fatal recipe
A web form with the following properties is an open relay enabler (see the sketch after this list):

– permits setting sender address
– permits setting recipient address
– has a comment field which will get sent along to the recipient
– does not employ captcha technology
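
To make this concrete, here is a minimal sketch of a form handler with exactly those properties. It is entirely hypothetical – not lifted from any real site – but anyone on the Internet could use a form like this to make your server emit email with an arbitrary sender, recipient and body:

<?php
# hypothetical "share this page" handler - the dangerous pattern, do not deploy
$from    = $_POST['from'];     # abuser chooses the (spoofed) sender
$to      = $_POST['to'];       # abuser chooses the recipient
$message = $_POST['message'];  # abuser supplies the free-form body
# no captcha, no rate limiting, no validation - a de facto open relay
mail($to, "A friend shared a page with you", $message, "From: $from");
?>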

Only visible from its side effects
The thing is, I am only receiving the NDRs, in other words those emails which had a spoofed sender address (my webmaster address) and a recipient address which for whatever reason decided not to accept the message. The “open relay,” failing to send the message, sends the failure notice to the spoofed sender: my webmaster account. But this NDR does not contain the original message, just vestigial hints about the original message, such as what server it was sent from, who the recipient and sender were, what the subject line was (sometimes), and when the receiving system complained. If a spam email was successfully sent in my name I never get to see it!

But I was able to actually find one of these dangerous forms that is being abused. Here it is:

[screenshot: one of the offending web forms]

Of course some of these forms are more restrictive than others. And almost all share the characteristic that they always put certain words into the subject and perhaps the body as well, which is beyond the control of the abuser. But that free-form message field is gold for the abuser and allows them to put their spammy or malicious message into the body of the email.

So after checking out a few of these domains for open relay and coming up empty, I do think all the abuse was done through too-lenient web pages. So I guess that is the current method of creating a de facto open relay.

Results
I’ve written very well-informed emails, initially trying to send to abuse@<domain>.com. But I also got more creative, in some cases tracing the domain to an ISP and looking up how to contact that ISP’s abuse department. I’m kind of disappointed that Wells Fargo hasn’t responded. Many other ISPs did. I believe that some have already corrected the problem. Meanwhile new ones crop up.

Over the weeks I’ve worked – successfully – with several offenders. Each one represented unique aspects and I had to do some IT detective work to track down someone who would be likely to respond. The ones who haven’t cooperated at all are overseas. Here is the wall of shame:

ecritel.fr (handles emails for marieclaire.fr web site)
ifastfinancial.com
RTHK.com.hk

Update
After about a week Wells Fargo did give a brief reply. They did not ask for any details. But the spam from their server did stop as far as I can tell.

If it turns into a never-ending battle I will give up, except perhaps to spend a few minutes a day.

Permanent fix
I don’t know the best way to fix this. I used to be a fan of SPF, but its limitations make it impractical for some large companies that need to have many third parties sending email on their behalf. I guess Google is pushing DMARC, but I haven’t had time to think through whether it’s feasible for large enterprises.
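
For reference, both mechanisms boil down to DNS TXT records, roughly like these (example.com, the policy and the mailbox are placeholders, not recommendations):

example.com.         IN TXT "v=spf1 mx include:_spf.example.net -all"
_dmarc.example.com.  IN TXT "v=DMARC1; p=quarantine; rua=mailto:[email protected]"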

Conclusion
Poorly constructed web forms are the new open relay enablers. Be very careful before creating such a form and at a minimum use good captcha technology so its usage can not be automated.

This is speculation but I would not be surprised to learn that there is a marketplace for a listing of all the poorly constructed web forms out there – that information could be very valuable to spammers who have been increasingly shut out of our inboxes by improved anti-spam detection.

References and related
I found this site helpful in finding valid contacts for a domain: abuse.net. You enter the domain and it spits back a couple of valid abuse contact addresses.
I only reluctantly use Mxtoolbox. It’s like a necessary evil. So I don’t want to give out a link for it. Probably nearly as good for checking out a mail server’s reputation is senderbase.org. They’re not trying to sell you anything.
DMARC – perhaps the email authentication mechanism of the future.
My old post advocating SPF, which just never caught on…
PHPMailer remote code execution explanation, which takes advantages of web forms used to send email.

Categories
Consumer Tech

Consumer Tech: Getting pictures off the Samsung Galaxy S7

Intro
This is simple enough, but I keep forgetting how to do it since I only do it every few months. And the options provided seem almost limitless. Still, this approach works best in my opinion.

The details
Plug USB cable from phone into PC.
You may see an initial pop-up asking what you’d like to do. I would choose Import files.
Look in File Explorer for the phone. You’ll see when you expand it that there are no files beneath it.
Go to phone. Pull down the status bar by dragging from the top.
One of the notifications concerns what to do when the USB cable is plugged in. The default is charge. Change it to share files.
The phone does not remember this setting. You need to repeat this every time you plug it into a PC and want to transfer your pictures! At least that’s my experience.
Now you can expand the phone in File Explorer and find your pictures in a DCIM folder.

Conclusion
The old-fashioned way of using USB cable to transfer pictures is best. They’ve moved things around however so older advice is no longer applicable.

Categories
Consumer Tech

Consumer Tech: Fitbit Charge tracker disconnected solution

Intro
I don’t want to oversell this solution. But let’s face it, you can lose hours and you probably will if you start rummaging through Fitbit’s own community forums on the solution of what to do when your Fitbit Charge or Charge HR doesn’t sync. You pick up a lot of bad and irrelevant and desperate advice.

What worked for me – the long story

Obviously there can be many reasons this may be happening: Bluetooth is off, Bluetooth pairing has been dropped, perhaps a low battery, but those are things you’d think of on your own, right, and anyway they’d be accompanied by other symptoms.

I have a Windows phone. Fitbit has an app through the Windows store. The syncing has always been very finicky. With my Charge HR I would sometimes have to try and re-try the sync for several minutes. Other times it would work right away. I couldn’t use either my home laptop or home desktop computer – both Dells – because the Windows 10 upgrade I did seemed to have wiped out the Bluetooth driver.

Then one day my spouse bought a Fitbit Alta and the helpful guy at Best Buy “helpfully” added her Fitbit and all her information to my account. You see I had commandeered her Samsung phone as well in a desperate attempt to find some device that would sync my Charge HR. It was a total mess. One day her steps overrode my steps and got synced backwards to my Charge! And it was 6000 steps fewer! So I got the idea to log out of my account. I logged back in and the sync worked quickly (quick means about 30 seconds in my experience). Since then I’ve done that a couple more times and both times I was able to sync right away after logging back in.

The summary
For Windows Phone when you can’t sync, and see the message Tracker disconnected when you know full well it has good batteries, it helps to log out of your Fitbit account and log right back in.

Conclusion
Syncing Fitbits is a finicky business in my experience. Their online help is mediocre and will just as likely lead you down the wrong path. Oh, and the wireless dongle that came with my Charge doesn’t fit the device! But I still like the devices overall – guess I got used to them. Hopefully this trick to sync a Charge or Charge HR will help someone.

But don’t get me started on what happens when your battery begins to go.

Categories
Consumer Tech

Spousal request for slideshow on TV – fail

Intro
Some things are just a lot harder than they should be. Given that I have two Amazon Firesticks for TVs, and tons of pictures on the Google cloud, wouldn’t it be great if while working at home my spouse could casually view a slideshow – sort of like using the TV as a giant electronic picture frame. Can’t be too hard, right? That was a long-standing request, which started more like “I want to see our pictures on the TV.” Then along came a request to show a home movie through the TV. Together these things broke through my wall of indifference and I was inspired to find a solution. Couldn’t be that hard, right?

Ha, what little did I know.

Some details
List of technologies tried and (mostly) discarded
HDMI
Plex
Kodi
Miracast
physical HDMI cable
SMB
UPNP
AllConnect

Some solutions came close, some not so much. Here are pros and cons of each in the order I tried them.

HDMI cable from laptop to TV
Pros
Well at least it actually works (see Miracast entry below).
Cons
Working with actual cables – no fun. Ties up your laptop fulltime.
Status
Probably fine if your need is very infrequent and you have spare time to mess with the cables.

Miracast
What it is
If it worked, this would be like having a wireless HDMI cable. So you’d cast from, say, a laptop directly to your Firestick.
Pros
I guess none as it doesn’t work. In principle it would be like using HDMI but without messing with the cable. You can mirror your display wirelessly from your laptop, then set your Firestick to permit being used, but it all doesn’t work in the end. My friend actually called Amazon support on this and they confirmed that they do not support Miracast from PCs.
Cons
At best it would tie up your laptop full time casting its screen to your TV. Doesn’t sound that great to me. Those who use Miracast find it unstable in any case.

Plex
What it is
A client/server technology. It is kind of slick and designed to be consumer friendly. You install the Plex server on your PC.
Pros
It wasn’t too hard to get going. The Plex server can be used with other apps so it’s a generally good thing to have in any case. The Plex app is available on the Amazon store.
If you have home movies on your PC they play really nicely, I’ll give it that.
Your stuff is organized into sensible collections. Browsing through lots of folders of pictures is pretty easy.
Cons
The slideshow terminates at the last picture and stays there. There is no looping, which is bizarre since it’s otherwise so slick.

Kodi
What it is
As far as I can tell it’s an open source media client.
Pros
It can work from a bunch of different sources. I never did get SMB sharing to work, but once I started playing with Upnp I realized I could aim it at my Plex server! And that worked.
Cons
Requires you to jailbreak your Firestick so it’s not a smooth or pleasant installation. Installation requires “sideloading” from an Android device. I can drill down into a folder of photos but once I click slideshow the screen turns black. The thumbnails show up however. Also I read that it reads the EXIF meta information from each picture in a folder, which will take forever on a typical folder with hundreds of pictures. That’s a non-starter.

AllConnect by Tuxera
What it is
You would need the app installed on an Android device as well as the Firestick. You then in principle use your android device to control what gets displayed on your Firestick. This is a casting technology in other words.
Pros
Like a supported version of Kodi – it’s an app right in the Amazon store. Again it was compatible with my Plex server, which was nice.
Cons
Works like crap. You can show one picture at a time. It loses the connection. Slideshow mode doesn’t work. Forces you to pick one photo at a time to add to your slideshow. I don’t think so!
Ties up an Android device so that it ain’t so great either.
Also, because of multiple devices involved it’s a little slow (a couple seconds) to switch between pictures, painting a refresh thing on your screen while you wait.

Conclusion
All approaches fail. Plex comes closest to being useable. Allconnect is horrible, Kodi holds promise for some day, Miracast is a joke. An HDMI cable is a capable fallback solution.

Categories
Admin CentOS

Powershell winrm client error explained

Intro
This particular Powershell error I came across yesterday is not well explained in other forums. I present the error and the explanation though not necessarily the solution.

The details
I recently received my new PC running Windows 8.1. On rare occasions I use Windows Powershell to do some light Exchange Online administration, but I am a complete Powershell novice. I have previously documented how to get Powershell to work through a proxy, which is also poorly documented. To repeat it here, these steps work for me:

> $credential = Get-Credential
> $drj = New-PSSessionOption -ProxyAccessType IEConfig
> $exchangeSession = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri "https://outlook.office365.com/powershell-liveid/" -Credential $credential -Authentication "Basic" -AllowRedirection -SessionOption $drj
> Import-PSSession $exchangeSession

And that had always worked under Windows 7. Now this is what I get:

New-PSSession : [outlook.office365.com] Connecting to remote server outlook.office365.com failed with the following
error message : The WinRM client cannot process the request. Basic authentication is currently disabled in the client
configuration. Change the client configuration and try the request again. For more information, see the
about_Remote_Troubleshooting Help topic.
At line:1 char:20
+ $exchangeSession = New-PSSession -ConfigurationName Microsoft.Exchange -Connecti ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : OpenError: (System.Manageme....RemoteRunspace:RemoteRunspace) [New-PSSession], PSRemotin
   gTransportException
    + FullyQualifiedErrorId : -2144108321,PSSessionOpenFailed

What’s up with that? I independently checked my account credentials – they were correct. Eventually I learned there is a winrm command which can shed some light on this. In particular this command shows the problem quite clearly:

> winrm get winrm/config/client

Client
    NetworkDelayms = 5000
    URLPrefix = wsman
    AllowUnencrypted = false [Source="GPO"]
    Auth
        Basic = false [Source="GPO"]
        Digest = false [Source="GPO"]
        Kerberos = true
        Negotiate = true
        Certificate = true
        CredSSP = false
    DefaultPorts
        HTTP = 5985
        HTTPS = 5986
    TrustedHosts

So even though I have local admin rights, and I launch Powershell (found in C:\Windows\System32\WindowsPowerShell\v1.0) as administrator, still this rights restriction exists and cannot as far as I know be overridden. The specific issue is that a GPO (group policy) has been enabled that prevents use of Basic authentication.
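
For completeness, on a machine that is not locked down by GPO the client setting could be flipped with winrm itself – shown here only as a sketch, since with the GPO in place the change simply gets overridden:

> winrm set winrm/config/client/auth '@{Basic="true"}'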

New idea – try Kerberos authentication
More info about our command is available:

> get-help new-pssession -full|more

You see that another option for the Authentication switch is Kerberos. So I tried that:
> $exchangeSession = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri "https://outlook.office365.com/powershell-liveid/" -Credential $credential -Authentication "Kerberos" -AllowRedirection -SessionOption $drj

This produced the unhappy result:

New-PSSession : [outlook.office365.com] Connecting to remote server
outlook.office365.com failed with the following error message : The WinRM
client cannot process the request. Setting proxy information is not valid when
the authentication mechanism with the remote machine is Kerberos. Remove the
proxy information or change the authentication mechanism and try the request
again. For more information, see the about_Remote_Troubleshooting Help topic.
At line:1 char:20
+ $exchangeSession = New-PSSession -ConfigurationName Microsoft.Exchange
-Connecti ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~
    + CategoryInfo          : OpenError: (System.Manageme....RemoteRunspace:Re
   moteRunspace) [New-PSSession], PSRemotingTransportException
    + FullyQualifiedErrorId : -2144108110,PSSessionOpenFailed

So it seems that because I use a proxy to connect, Kerberos authentication is not an option. Drat. Digest? Disabled in my client.

Solution
Unless the security and AD folks can be convinced to make an exception for me to this policy I won’t be able to use this computer for Powershell access to Exchange Online. I guess my home PC would work however. It’s in the cloud after all.

So I tried my home PC. Initially I got an access denied. It was that time of the month when it was time to change the password yet again, sigh, which I learned only by doing a traditional login. With my new password things proceeded further, but in response to the Import-PSSession $exchangeSession I got this new error:

Import-PSSession : Files cannot be loaded because running scripts is disabled on this system. Provide a valid
certificate with which to sign the files.
At line:1 char:2
+  Import-PSSession $exchangeSession
+  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidOperation: (:) [Import-PSSession], PSInvalidOperationException
    + FullyQualifiedErrorId : InvalidOperation,Microsoft.PowerShell.Commands.ImportPSSessionCommand

A quick duckduckgo search shows that this command helps:

> Set-ExecutionPolicy RemoteSigned

And with that, voila, the import worked.

2nd solution

I finally found a Windows server which I have access to and isn’t locked down with that restrictive GPO. I was able to set up my environment on it and it is useable.

3rd solution

I see that as of August 2016 there is an alpha version of powershell that runs on CentOS 7.1. That could be pretty cool since I love Linux. But I’d have to spin up a CentOS 7.1 on Amazon since my server is a bit older (CentOS v 6.6) and (I tried it) it doesn’t run it correctly:

$ sudo powershell

powershell: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.18' not found (required by powershell)
powershell: /usr/lib64/libstdc++.so.6: version `CXXABI_1.3.5' not found (required by powershell)
powershell: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.14' not found (required by powershell)
powershell: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.15' not found (required by powershell)

The instructions for installation are here.

Conclusion
Not every problem has a simple solution, but here I’ve at least documented the issues more clearly than they are documented elsewhere on the Internet.

References and related
Powershell for Linux? Yes! Instructions here.

Categories
Linux Raspberry Pi

Solution to this week’s NPR puzzle using simple Linux commands

Intro
Every now and then the weekend puzzle is particularly amenable to partial solution by use of simple Linux commands. I suspected such was the case for this week’s, and I was right.

The challenge for this week
Take a seven letter word of one syllable, add the consecutive letters “it” somewhere in the middle to create a nine letter word of four syllables.

The Linux command-line method of solution
On a CentOS system there is a file with words, lots of words. It’s /usr/share/dict/linux.words:

$ cd /usr/share/dict; wc linux.words

479829  479829 4953699 linux.words

So, 479829 words! A lot of them are junk words however, but it has the real ones in there too. This file comes from the RPM package words-3.0-17.el6.noarch.
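
By the way, that package name comes from asking rpm which package owns the file:

$ rpm -qf /usr/share/dict/linux.words

words-3.0-17.el6.noarch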

So here’s a sort of stream-of-consciousness of a Unix person solving the puzzle without doing too much work or too much thinking:

How many seven-letter words are there? First what’s an expression that can answer that? I think this is it but let’s check:

$ egrep '^[a-z]{7}$' linux.words|more

aaronic
aarrghh
abacate
abacaxi
abacist
abactor
abaculi
abaddon
abadejo
abaised
abaiser
abaisse
abalone
abandon
abandum
abasers
abashed
abashes
abasias
abasing
abatage
abaters
abating
abators
abattis
abattue
abature
abaxial
...

OK, that egrep expression is right. So the seven-letter word count is then:

$ egrep '^[a-z]{7}$' linux.words|wc -l

40230

That’s a lot – too many to eyeball. OK, so how many nine-letter words are there?

$ egrep '^[a-z]{9}$' linux.words|wc -l

50748

Wow, even more.

OK, we have an idea, based not on what may be the best approach, but based on which Linux commands we know inside and out. The idea is to start from the nine-letter words which contain “it”, remove the “it” and then match the resulting seven-letter character strings against our dictionary to see which are actually words. We know how to do that. The hope is the resulting list will be small enough we can review by hand.

How many nine-letter words contain the consecutive characters “it”?

$ egrep '^[a-z]{9}$' linux.words|grep it|wc -l

3245

They look like this:

abilities
abnormity
aboiteaus
aboiteaux
abolition
abrazitic
absurdity
academite
acanthite
accipiter
acclivity
accredits
accubitum
accubitus
acetosity
acidities
acinacity
aconitine
aconitums
acquisita
...

so it would take forever to go through. If we had a dictionary with the syllable count we could really narrow it down. I think I’ve seen that, but I’d have to dig it up. We introduce sed to remove the “it” from these words:

$ egrep '^[a-z]{9}$' linux.words|grep it|sed 's/it//'|more

abilies
abnormy
aboeaus
aboeaux
abolion
abrazic
absurdy
academe
acanthe
acciper
acclivy
accreds
accubum
accubus
acetosy
acidies
acinacy
aconine
aconums
acquisa
...

There are more efficient ways to loop through these results using xargs, but I’m old school and have memorized this older construct which I use:

$ egrep '^[a-z]{9}$' linux.words|grep it|sed 's/it//'|while read line; do
> grep $line linux.words >> /tmp/lw
> done
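
For reference, the xargs equivalent of that loop would be something like this (a sketch; same non-anchored grep, one invocation per candidate string):

$ egrep '^[a-z]{9}$' linux.words|grep it|sed 's/it//'|xargs -I{} grep {} linux.words >> /tmp/lw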

We look at the resulting file and find we made a little goof – we didn’t limit the resulting matches to seven characters:

$ more /tmp/lw

academe
academes
Pleuracanthea
vacanthearted
vacantheartedness
aconine
japaconine
pseudaconine
adderspit
affidavit
affidavits
affidavy
preaffidavit
divagating
extravagating
indagating
propagating
self-propagating
...

But that’s easily corrected:

$ cd /tmp; egrep '^[a-z]{7}$' lw > lw2
$ wc -l lw2

376 lw2

Now that’s a number we can review by hand. Very few of these have only one syllable:

$ more lw2

academe
aconine
alumine
alveole
ammonic
barbary
barbone
basting
bauxite
berline
bethank
boraces
bullion
capella
carbone
carmele
carmine
cascade
catline
cavated
celeste
ceruses
chloric
chondre
chromes
cations
claries
cockney
coenobe
compose
...

I quickly reviewed the list and the answer popped out, somewhere towards the end – you can’t miss it.

Friday update – the solution
The 7-letter word that pops out at you? Reigned, which you immediately see becomes reignited – nine letters and four syllables!

Want to do this on your Raspberry Pi?
The dictionary file there is /usr/share/dictd/wn.index, but you probably don’t have it by default so you’ll need to install a few packages to get it which is simple enough. This post about Words with Friends explains the packages I used to provide that dictionary. Aside from the location of the dictionary, and that it contains fewer(?) words, everything else should be the same.
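
If memory serves, the packages are the dictd ones, so something like this should pull in wn.index (package names from memory – see the linked post to be sure):

$ sudo apt-get install dictd dict-wn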

Conclusion
We have solved this week’s NPR puzzle without any complex programming just by using some simple Linux commands.

References and related
This link is nice because it has a transcription of the puzzle so you don’t have to waste time listening to the whole six-minute segment.
Another NPR puzzle we solved in a similar way.

Categories
Admin Network Technologies

Who’s using the UK Ministry of Defence’s IP addresses?

Intro
When I first came upon a spear phishing email a few months ago which originated from the UK’s Ministry of Defence I thought that was pretty queer. Like, how ironic that an invoice scam is coming from a Defense Ministry. Do they have a bad actor? Are we on the cusp of cracking some big international cybertheft? Do we tell them?

Then their address space came up yet again just a few days ago, this time in a fairly different context. Microsoft’s Exchange Online service hosted in the UK cannot deliver email to a particular domain:

8/5/2016 3:32:35 PM - Server at e********s.net (25.152.12.27) returned '450 4.4.312 DNS query failed(ServerFailure)'

I obscured the domain a bit. But it’s an everyday domain which every DNS server I’ve tested resolves just fine. But Microsoft doesn’t see it that way. Several test messages have shown non-delivery reports using these other addresses as well following the “Server at…”: 25.152.8.27, 25.152.16.27.

The Register sheds the most light – though it still lacks critical details – on what might have happened to the UK Ministry of Defence’s IPv4 address space, namely, that some was sold. Here’s the article.

How do you show that all these 25.152.x.x addresses belong to the Ministry of Defence? You use RIPE: http://ripe.net/ and do a search. It shows that 25.0.0.0/8 belongs to them. But according to the article in The Register this is no longer true as of late last year.
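
You can do the same check from the command line by pointing whois straight at RIPE’s server (output omitted here):

$ whois -h whois.ripe.net 25.152.12.27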

Why is Microsoft using these IP addresses? No idea. But something I read got me suspecting that some outfits decided to use the 25/8 address space as though it were private IP address space!

References and related