Roll your own dynamic DNS update service

My old Cisco router only has built-in support for two dynamic DNS services, and nowadays you have to pay for those, if they even work (the services' domain names seem to have changed; perhaps they still honor the old names, perhaps not!). Maybe this could be fixed by a firmware upgrade (to hopefully get more choices, ideally a free one), a newer router, or running DD-WRT. I didn't do any of those things. Being a network person at heart, I wrote my own. The samples I found out on the Internet needed some updating, so I am sharing my recipe. It wasn't too hard to pull off.

What I used
– GoDaddy DNS hosting (basically any will do)
– my Amazon AWS virtual server running CentOS, where I have sudo access
– my home Raspberry Pi
– a tiny bit of php programming
– my networking skills for debugging

As I have prior experience with all these items this project was right up my alley.

Delegating our DDNS domain from GoDaddy
Just create a nameserver (NS) record in your domain for a subdomain, say raspi, and delegate it to your AWS server. Following the example, the subdomain raspi.<your-domain> would have your AWS server as its nameserver.
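In zone-file terms the delegation is just an NS record in the parent zone. A sketch with hypothetical names (subdomain raspi under, BIND running on

```
; in the parent zone, hosted at GoDaddy (hypothetical names)
raspi    IN  NS
```

No glue record is needed here because the nameserver's own name lies outside the delegated subdomain; in GoDaddy's DNS manager this is simply an NS record with host "raspi" pointing at the AWS server's hostname.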

DNS Setup on an Amazon AWS server


// named.conf
// Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
// server as a caching only nameserver (as a localhost DNS resolver only).
// See /usr/share/doc/bind*/sample/ for example named configuration files.
options {
//      listen-on port 53 {; };
//      listen-on port 53;
        listen-on-v6 port 53 { ::1; };
        directory       "/var/named";
        dump-file       "/var/named/data/cache_dump.db";
        statistics-file "/var/named/data/named_stats.txt";
        memstatistics-file "/var/named/data/named_mem_stats.txt";
        allow-query     { any; };
        recursion no;
        dnssec-enable yes;
        dnssec-validation yes;
        dnssec-lookaside auto;
        /* Path to ISC DLV key */
        bindkeys-file "/etc/named.iscdlv.key";
        managed-keys-directory "/var/named/dynamic";
};
logging {
        channel default_debug {
                file "data/";
                severity dynamic;
        };
};
zone "." IN {
        type hint;
        file "";
};
include "/etc/named.rfc1912.zones";
include "/var/named/dynamic.conf";
include "/etc/named.root.key";


zone "" {
  type master;
  file "/var/named/";
// designed to work with nsupdate -l used on same system - DrJ 10/2016
// /var/run/named/session.key
  update-policy local;
};


$TTL 1800       ; 30 minutes
        IN SOA (
                                2016092812 ; serial
                                1700       ; refresh (28 minutes 20 seconds)
                                1700       ; retry (28 minutes 20 seconds)
                                1209600    ; expire (2 weeks)
                                600        ; minimum (10 minutes)
                                )
$TTL 3600       ; 1 hour

Named re-starting program
Want to make sure your named restarts if it happens to die? The simple "nanny" monitor script distributed with BIND does exactly that. Here is the version I use on my server. Note the customized variables towards the top.

# Copyright (C) 2004, 2007, 2012  Internet Systems Consortium, Inc. ("ISC")
# Copyright (C) 2000, 2001  Internet Software Consortium.
# Permission to use, copy, modify, and/or distribute this software for any
# purpose with or without fee is hereby granted, provided that the above
# copyright notice and this permission notice appear in all copies.
# $Id:,v 1.11 2007/06/19 23:47:07 tbox Exp $
# A simple nanny to make sure named stays running.
$pid_file_location = '/var/run/named/';
$nameserver_location = 'localhost';
$dig_program = 'dig';
$named_program =  '/usr/sbin/named -u named';
fork() && exit();
for (;;) {
        $pid = 0;
        open(FILE, $pid_file_location) || goto restart;
        $pid = <FILE>;
        close(FILE);
        chomp($pid);
        $res = kill 0, $pid;
        goto restart if ($res == 0);
        $dig_command =
               "$dig_program +short . \@$nameserver_location > /dev/null";
        $return = system($dig_command);
        goto restart if ($return == 9);
        sleep 30;
        next;
 restart:
        if ($pid != 0) {
                kill 15, $pid;
                sleep 30;
        }
        system ($named_program);
        sleep 120;
}

The PHP updating program myip-update.php

<?php
# DrJ: lifted from
# but with some security improvements
# 10/2016
# PHP script for very simple dynamic DNS updates
# this script was published in and
# released to the public domain by Pablo Hoffman on 27 Aug 2006
# CONFIGURATION BEGINS -------------------------------------------------------
# define password here
$mysecret = 'myBigFatsEcreT';
# CONFIGURATION ENDS ---------------------------------------------------------
# the client's IP address as seen by the web server
$ip = $_SERVER['REMOTE_ADDR'];
$host = $_GET['host'];
$secret = $_POST['secret'];
$zone = $_GET['zone'];
$tmpfile = trim(`mktemp /tmp/nsupdate.XXXXXX`);
if ((!$host) or (!$zone) or (!($mysecret == $secret))) {
    echo "FAILED";
    unlink($tmpfile);
    exit;
}
$oldip = trim(`dig +short $host.$zone @localhost`);
if ($ip == $oldip) {
    echo "UNCHANGED. ip: $ip\n";
    unlink($tmpfile);
    exit;
}
echo "$ip - $oldip";
$nsucmd = "update delete $host.$zone A
update add $host.$zone 3600 A $ip
send
";
$fp = fopen($tmpfile, 'w');
fwrite($fp, $nsucmd);
fclose($fp);
`sudo nsupdate -l $tmpfile`;
unlink($tmpfile);
echo "OK ";
echo `date`;
?>

In the above file I added the “sudo” after a while. See the explanation further down below.

Raspberry Pi requirements
I’ve assumed you can run your Pi 24 x 7, constantly and consistently on your network.

Crontab entry on the Raspberry Pi
Edit the crontab file for periodically checking your IP on the Pi and updating external DNS if it has changed by doing this:

$ crontab -e
and adding the line below:

# my own method of dynamic update - DrJ 10/2016
0,10,20,30,40,50 * * * * /usr/bin/curl -s -k -d 'secret=myBigFatsEcreT' '' >> /tmp/ddns 2>&1

A few highlights
Note that I’ve switched to using nsupdate -l on the local server. This is more secure than the previous solution, which allowed updates from localhost. As far as I can tell localhost updates can be spoofed and so should be considered insecure in a modern infrastructure. I learned a lot by running nsupdate -D -l on my AWS server and observing what happens.
And note that I changed the location of the secret. The old solution had the secret embedded in the URL of a GET request, which means it would also be recorded with every single request in the web server’s access log. That’s not a good idea. I switched it to a POSTed variable so that it doesn’t show up in the access log. This is done with the -d switch of curl.

Contents of temporary file
Here are example contents. This is useful when you’re trying to run nsupdate from the command line.

update delete A
update add 3600 A
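For reference, here is a shell sketch that builds such a file the way the PHP script does, using hypothetical values (host raspi, zone, IP The trailing send line tells nsupdate to transmit the batched updates.

```shell
# Build an nsupdate input file (hypothetical host/zone/IP values)
host=raspi
ip=
tmpfile=$(mktemp /tmp/nsupdate.XXXXXX)
cat > "$tmpfile" <<EOF
update delete $host.$zone A
update add $host.$zone 3600 A $ip
send
EOF
cat "$tmpfile"
# then feed it to the local named with: sudo nsupdate -l "$tmpfile"
```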

Permissions problems

If you see something like this on your DNS server:

$ ll /var/run/named

total 8
-rw-r--r-- 1 named www-data   6 Nov  6 03:15
-rw------- 1 named www-data 102 Oct 24 09:42 session.key

your attempt to run nsupdate by your web server will be foiled and produce something like this:

$ /usr/bin/nsupdate -l /tmp/nsupdate.LInUmo

06-Nov-2016 17:14:14.780 none:0: open: /var/run/named/session.key: permission denied
can't read key from /var/run/named/session.key: permission denied

The solution may be to permit group read permission:

$ cd /var/run/named; sudo chmod g+r session.key

and make the group owner of the file your webserver user ID (which I’ve already done here). I’m still working this part out…

That approach doesn’t seem to “stick,” so I came up with this other approach. Put your web server user in sudoers to allow it to run nsupdate (my web server user is www-data for these examples):

Cmnd_Alias     NSUPDATE = /usr/bin/nsupdate
# allow web server to run nsupdate
www-data ALL=(root) NOPASSWD: NSUPDATE

But you may get the dreaded

sudo: sorry, you must have a tty to run sudo

error – which you’ll only see once you’ve managed to turn on debugging.

So if your sudoers has a line like this:

Defaults    requiretty

you will need lines like this:

# turn off tty requirement only for the www-data user
Defaults:www-data !requiretty

Of course for debugging I commented out the unlink line in the PHP update file and ran the
nsupdate -l /tmp/nsupdate.xxxxx
by hand as user www-data.

During some of the errors I worked through, that wasn’t verbose enough, so I added debugging arguments:

$ nsupdate -D -d -l /tmp/nsupdate.xxxxx

When that began to work, yet when called via the webserver it wasn’t working, I ran the above command from within PHP, recording the output to a file:

`sudo nsupdate -d -D -l $tmpfile > /tmp/nsupdate-debug 2>&1`

That turned out to be extremely informative.

We have shown how to combine a bunch of simple networking tools to create your own DDNS service. The key elements are a Raspberry Pi and your own virtual server at Amazon AWS. We have built upon previous published solutions to this problem and made them more secure in light of the growing sophistication of the bad guys. Let me know if there is interest in an inexpensive commercial service.


Posted in CentOS, DNS, Linux, Network Technologies, Raspberry Pi, Security, Web Site Technologies

What I’m trying out now – Amazon Fire HD 8 Tablet

I had previously praised an HP Touchpad Tablet, but that was another time and times have moved on. Now I’m trying the new Fire HD 8 Tablet and am quite impressed. It’s not perfect however.

Here are some features I really like.

Long battery life – the HP Touchpad died too quickly – after a couple hours – giving me recharge anxiety
Bright display
Lightweight and sufficiently small – I often carry it around from room to room in the house
High-def resolution: 1280 x 800
Reasonably good app selection
Quad processor makes it responsive and able to run lots of apps at the same time
Switching between apps is pretty easy

Things I don’t like
No GroupMe app
no X-windows server
no ability to cast, even to an Amazon Fire TV Stick!

Apps and features I like
Serverauditor – gives me ssh access to my Raspberry Pi and Amazon hosts
NY Times
Silk Browser
Calculator is pretty good
Maps is alright
Fitbit – and the Bluetooth actually works with my Charge device
stereo speakers, but not the best dynamic range
prints to WiFi printer, e.g., Canon printers
Bluetooth enabled – can pump audio out to an external Bluetooth speaker

References and related
My old HP Touchpad article, just for the historical reference

Posted in Consumer Tech, Linux

Roll your own domain drop catching service using GoDaddy

I’m after a particular domain and have been for years. But as a matter of pride I don’t want to overpay for it, so I don’t want to go through an auction. There are services that can help grab a DNS domain immediately after it expires, but they all want $$. That may make sense for high-demand domains. Mine is pretty obscure. I want to grab it quickly – perhaps within a few seconds after it becomes available, but I don’t expect any competition for it. That is a description of domain drop catching.

Since I am already using GoDaddy as my registrar I thought I’d see if they have a domain catching service. They don’t, which is strange because they have other specialized domain services, such as domain brokering. They do have a service designed for much the same purpose, however, called backorder, which creates an auction bid for the domain before it has expired. The cost isn’t too bad, but since I started down a different path I decided to roll my own. Perhaps they have an API which can be used to create my own domain catcher? It turns out they do!

It involves understanding how to read a JSON data file, which is new to me, but otherwise it’s not too bad.

The domain lifecycle
This graphic from ICANN illustrates it perfectly for your typical global top-level domain such as .com, .net, etc:

To put it into words, there is the
initial registration,
optional renewals,
expiration date,
auto-renew grace period of 0 – 45 days,
redemption grace period of 30 days,
pending delete of 5 days, and then
it’s released and available.

So in domain drop catching we are keenly interested in being fully prepared for the five-day pending delete window. From an old discussion I’ve read that the precise time .com domains are released is usually between 2 and 3 PM EST.

A word about the GoDaddy developer site
It looks like one day it will be a great site, but for now it is wanting in some areas. Most of the menu items are duds – mere placeholders. Really there are only three (mostly) working sections: get started, documentation and demo. Get started is only a few words and one slender snippet of Ajax code, and the demo itself is also extremely limited, so the only real resource they provide is Documentation.

Documentation is designed as active documentation: you can try out the functions with your own data, and when you run one it shows you all the needed request headers and data as well as the received response. The thing is, it’s very finicky. It’s supposed to show all the available functions, but I couldn’t get it to work under Firefox, and with Internet Explorer/Edge it only worked about half the time. It seems to help to access it with a newly launched browser.

The documentation, as good as it is, leaves some things unsaid. I have found there are two API endpoints: one for TEST (maybe “ote” in its hostname stands for optional test environment?) and one for production (what I am calling PROD).

The TEST environment does not require authentication for some things that PROD does. This shell script for checking available domains, which I call, works in TEST but not in PROD:

# pass domain as argument
# apparently no AUTH is required for this one
curl -k ''$1'&checkType=FAST&forTransfer=false'

In PROD I had to insert the authorization information – the key and secret they showed me on the screen. Here is that script:

# pass domain as argument
curl -s -k -H 'Authorization: sso-key *******8m_PwFAffjiNmiCUrKe******:**FF73L********' ''$1'&checkType=FULL&forTransfer=false'

I found that my expiring domain produced different results about five days after expiring depending on whether I used checkType FAST or checkType FULL – and FAST was wrong. So I learned you have to use FULL to get an answer you can trust!

Example usage of an available domain

$ ./


2nd example – a non-available domain
$ ./


Example JSON file
I had to do a lot of search and replace to preserve my anonymity, but this post wouldn’t be complete without showing the real contents of the JSON file I am using both for validate and, hopefully, as the basis for my API-driven domain purchase:

{
  "domain": "",
  "renewAuto": true,
  "privacy": false,
  "nameServers": [ ],
  "consent": {
    "agreementKeys": ["DNRA"],
    "agreedBy": "",
    "agreedAt": "2016-09-29T16:00:00Z"
  },
  "period": 1,
  "contactAdmin": {
    "nameFirst": "Dr","nameLast": "John",
    "email": "",
    "addressMailing": {
      "address1": "555 Piney Drive",
      "city": "Smallville","state": "New Jersey","postalCode": "55555",
      "country": "US"
    },
    "phone": "+1.5555551212"
  },
  "contactBilling": {
    "nameFirst": "Dr","nameLast": "John",
    "email": "",
    "addressMailing": {
      "address1": "555 Piney Drive",
      "city": "Smallville","state": "New Jersey","postalCode": "55555",
      "country": "US"
    },
    "phone": "+1.5555551212"
  },
  "contactRegistrant": {
    "nameFirst": "Dr","nameLast": "John",
    "email": "",
    "phone": "+1.5555551212",
    "addressMailing": {
      "address1": "555 Piney Drive",
      "city": "Smallville","state": "New Jersey","postalCode": "55555",
      "country": "US"
    }
  },
  "contactTech": {
    "nameFirst": "Dr","nameLast": "John",
    "email": "",
    "phone": "+1.5555551212",
    "addressMailing": {
      "address1": "555 Piney Drive",
      "city": "Smallville","state": "New Jersey","postalCode": "55555",
      "country": "US"
    }
  }
}

Note the agreementKeys value: DNRA. GoDaddy doesn’t document this very well, but that is what you need to put there! Note also that the nameServers array is left empty – I asked GoDaddy and that is what they advised. The other values are pretty much what you’d expect. I used my own server’s IP address for agreedBy – use your own IP. I don’t know how important it is to get the agreedAt date close to the current time; I’m going to assume it should be within 24 hours of the current time.
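On that last point, a quick shell sketch (GNU date assumed, as on CentOS) that generates a current UTC agreedAt value in the accepted format; the revised scripts later in the post use this same pipeline:

```shell
# Generate an agreedAt timestamp in UTC "Z" format, e.g. 2016-09-29T16:00:00Z
# (assumes GNU date)
agreedAt=$(date -u --rfc-3339=seconds | sed 's/ /T/' | sed 's/+.*/Z/')
echo "$agreedAt"
```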

How do we test this JSON input file? I wrote a validate script for that:

# DrJ 9/2016
# godaddy-json-register was built using GoDaddy's documentation at!/_v1_domains/validate
jsondata=`tr -d '\n' < godaddy-json-register`
curl -i -k -H 'Authorization: sso-key *******8m_PwFAffjiNmiCUrKe******:**FF73L********' -H 'Content-Type: application/json' -H 'Accept: application/json' -d "$jsondata"

Run the validate script
$ ./

HTTP/1.1 100 Continue
HTTP/1.1 100 Continue
Via: 1.1
HTTP/1.1 200 OK
Date: Thu, 29 Sep 2016 20:11:33 GMT
X-Powered-By: Express
Vary: Origin,Accept-Encoding
Access-Control-Allow-Credentials: true
Content-Type: application/json; charset=utf-8
ETag: W/"2-mZFLkyvTelC5g8XnyQrpOw"
Via: 1.1
Transfer-Encoding: chunked

Revised versions of the above scripts
So we can pass the domain name as an argument I revised all the scripts. Also, I now provide an agreedAt date which is current.

The data file: godaddy-json-register

{
  "domain": "DOMAIN",
  "renewAuto": true,
  "privacy": false,
  "nameServers": [ ],
  "consent": {
    "agreementKeys": ["DNRA"],
    "agreedBy": "",
    "agreedAt": "DATE"
  },
  "period": 1,
  "contactAdmin": {
    "nameFirst": "Dr","nameLast": "John",
    "email": "",
    "addressMailing": {
      "address1": "555 Piney Drive",
      "city": "Smallville","state": "New Jersey","postalCode": "55555",
      "country": "US"
    },
    "phone": "+1.5555551212"
  },
  "contactBilling": {
    "nameFirst": "Dr","nameLast": "John",
    "email": "",
    "addressMailing": {
      "address1": "555 Piney Drive",
      "city": "Smallville","state": "New Jersey","postalCode": "55555",
      "country": "US"
    },
    "phone": "+1.5555551212"
  },
  "contactRegistrant": {
    "nameFirst": "Dr","nameLast": "John",
    "email": "",
    "phone": "+1.5555551212",
    "addressMailing": {
      "address1": "555 Piney Drive",
      "city": "Smallville","state": "New Jersey","postalCode": "55555",
      "country": "US"
    }
  },
  "contactTech": {
    "nameFirst": "Dr","nameLast": "John",
    "email": "",
    "phone": "+1.5555551212",
    "addressMailing": {
      "address1": "555 Piney Drive",
      "city": "Smallville","state": "New Jersey","postalCode": "55555",
      "country": "US"
    }
  }
}

# DrJ 10/2016
# godaddy-json-register was built using GoDaddy's documentation at!/_v1_domains/validate
# pass domain as argument
domain=$1
# get date into accepted format
date=`date -u --rfc-3339=seconds|sed 's/ /T/'|sed 's/+.*/Z/'`
jsondata=`tr -d '\n' < godaddy-json-register`
jsondata=`echo $jsondata|sed 's/DATE/'$date'/'`
jsondata=`echo $jsondata|sed 's/DOMAIN/'$domain'/'`
#echo date is $date
#echo jsondata is $jsondata
curl -i -k -H 'Authorization: sso-key *******8m_PwFAffjiNmiCUrKe******:**FF73L********' -H 'Content-Type: application/json' -H 'Accept: application/json' -d "$jsondata"
The availability-check script: no change – see the listing above.
The purchase script is exactly the same as the validate script, with just a slightly different URL. I need to test that it really works, but based on my reading I think it will.

# DrJ 10/2016
# godaddy-json-register was built using GoDaddy's documentation at!/_v1_domains/purchase
# pass domain as argument
domain=$1
# get date into accepted format
date=`date -u --rfc-3339=seconds|sed 's/ /T/'|sed 's/+.*/Z/'`
jsondata=`tr -d '\n' < godaddy-json-register`
jsondata=`echo $jsondata|sed 's/DATE/'$date'/'`
jsondata=`echo $jsondata|sed 's/DOMAIN/'$domain'/'`
#echo date is $date
#echo jsondata is $jsondata
curl -s -i -k -H 'Authorization: sso-key *******8m_PwFAffjiNmiCUrKe******:**FF73L********' -H 'Content-Type: application/json' -H 'Accept: application/json' -d "$jsondata"

Putting it all together
Here’s the looping script. I switched to Perl because it’s easier to do certain control operations.

#DrJ 10/2016
$DEBUG = 0;
$status = 0;
open STDOUT, '>', "loop.log" or die "Can't redirect STDOUT: $!";
open STDERR, ">&STDOUT"     or die "Can't dup STDOUT: $!";
select STDERR; $| = 1;      # make unbuffered
select STDOUT; $| = 1;      # make unbuffered
# edit this and change to your about-to-expire domain
$domain = "";
while ($status != 200) {
# show that we're alive and working...
  print "Now it's ".`date` if $i++ % 10 == 0;
  $hr = `date +%H`;
# run loop more aggressively during times of day we think Network Solutions releases domains back to the pool, esp. around 2 - 3 PM EST
  $sleep = $hr > 11 && $hr < 16 ? 1 : 15;
  print "Hr,sleep: $hr,$sleep\n" if $DEBUG;
  $availRes = `./ $domain`;
# {"available":true,"domain":"","definitive":false,"price":11990000,"currency":"USD","period":1}
  print "$availRes\n" if $DEBUG;
  ($available) = $availRes =~ /^\{"available":([^,]+),/;
  print "$available\n" if $DEBUG;
  if ($available eq "false") {
    print "test comparison OP for false result\n" if $DEBUG;
  } elsif ($available eq "true") {
# available value of true is extremely unreliable with many false positives. Confirm availability by making a 2nd call
    print " results: $availRes\n";
    $availRes = `./ $domain`;
    print " re-test results: $availRes\n";
    ($available2) = $availRes =~ /^\{"available":([^,]+),/;
    next if $available2 eq "false";
# We got two available eq true results in a row so let's try to buy it!
    print "$domain is very likely available. Trying to buy it at ".`date`;
    open(BUY,"./ $domain|") || die "Cannot run ./ $domain!!\n";
    while(<BUY>) {
# print out each line so we can analyze what happened
      print ;
# we got it if we got back
# HTTP/1.1 200 OK
      if (/1.1 200 OK/) {
        print "We just bought $domain at ".`date`;
        $status = 200;
      }
    } # end of loop over results of purchase
    print "\n";
    exit if $status == 200;
  } else {
    print "available is neither false nor true: $available\n";
  }
# pause between checks - short during the release window, longer otherwise
  sleep $sleep;
}

Running the loop script
$ nohup ./ > loop.log 2>&1 &
Stopping the loop script
$ kill -9 %1

Description of the loop script
I gotta say this loop script started out as a much simpler script. I fortunately started on it many days before my desired domain actually became available, so I got to see and work out all the bugs. Contributing to the problem is that GoDaddy’s API results are quite unreliable. I was seeing a lot of false positives – almost 20%. So I decided to require two consecutive availability checks to return true. I could have required available true and definitive true, but I’m afraid that would make me late to the party. The API is not documented to that level of detail so there’s no help there. But so far what I have seen is that when available incorrectly returns true, definitive is simultaneously false, whereas at all other times definitive is true.
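To make that observation concrete, here is a shell sketch of pulling the available and definitive fields out of a sample API reply (the domain value is hypothetical); it mirrors what the Perl regex in the loop script does:

```shell
# Sample API reply (hypothetical domain); extract two boolean fields with sed
availRes='{"available":true,"domain":"","definitive":false,"price":11990000,"currency":"USD","period":1}'
available=$(echo "$availRes" | sed 's/.*"available":\([a-z]*\).*/\1/')
definitive=$(echo "$availRes" | sed 's/.*"definitive":\([a-z]*\).*/\1/')
echo "available=$available definitive=$definitive"
# prints: available=true definitive=false
```

In this sample, available=true together with definitive=false is exactly the suspect combination that warrants a second confirming call.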

Results of running an earlier and simpler version of

This shows all manner of false positives. But at least it never allowed me to buy the domain when it wasn’t available.

Now it's Wed Oct  5 15:20:01 EDT 2016
Now it's Wed Oct  5 15:20:19 EDT 2016
Now it's Wed Oct  5 15:20:38 EDT 2016 results: {"available":true,"domain":"","definitive":false,"price":11990000,"currency":"USD","period":1} is available. Trying to buy it at Wed Oct  5 15:20:46 EDT 2016
HTTP/1.1 100 Continue
HTTP/1.1 100 Continue
Via: 1.1
HTTP/1.1 422 Unprocessable Entity
Date: Wed, 05 Oct 2016 19:20:47 GMT
X-Powered-By: Express
Vary: Origin,Accept-Encoding
Access-Control-Allow-Credentials: true
Content-Type: application/json; charset=utf-8
ETag: W/"7d-O5Dw3WvJGo8h30TqR7j8zg"
Via: 1.1
Transfer-Encoding: chunked
{"code":"UNAVAILABLE_DOMAIN","message":"The specified `domain` ( isn't available for purchase","name":"ApiError"}
Now it's Wed Oct  5 15:20:58 EDT 2016
Now it's Wed Oct  5 15:21:16 EDT 2016
Now it's Wed Oct  5 15:21:33 EDT 2016 results: {"available":true,"domain":"","definitive":false,"price":11990000,"currency":"USD","period":1} is available. Trying to buy it at Wed Oct  5 15:21:34 EDT 2016
HTTP/1.1 100 Continue
HTTP/1.1 100 Continue
Via: 1.1
HTTP/1.1 422 Unprocessable Entity
Date: Wed, 05 Oct 2016 19:21:36 GMT
X-Powered-By: Express
Vary: Origin,Accept-Encoding
Access-Control-Allow-Credentials: true
Content-Type: application/json; charset=utf-8
ETag: W/"7d-O5Dw3WvJGo8h30TqR7j8zg"
Via: 1.1
Transfer-Encoding: chunked
{"code":"UNAVAILABLE_DOMAIN","message":"The specified `domain` ( isn't available for purchase","name":"ApiError"}
Now it's Wed Oct  5 15:21:55 EDT 2016
Now it's Wed Oct  5 15:22:12 EDT 2016
Now it's Wed Oct  5 15:22:30 EDT 2016 results: {"available":true,"domain":"","definitive":false,"price":11990000,"currency":"USD","period":1} is available. Trying to buy it at Wed Oct  5 15:22:30 EDT 2016

These results show why I had to further refine the script to reduce the frequent false positives.

What have we done? Our looping script loops more aggressively during the time of day we think Network Solutions releases expired .com domains (around 2 PM EST). Just in case we’re wrong about that, we still run it at all hours of the day, only not as quickly. During the aggressive period we sleep just one second between availability checks. When the domain finally does become available we call the purchase script on it, write some timestamps and the domain we’ve just registered to our log file, and exit.

Miserable performance, by the way. This API seems tuned for relative ease of use, not speed. The validate call often takes, oh, say 40 seconds to return! I’m sure the purchase call is no different. For a domainer that’s a lifetime. So any strategy that relies on speed had better turn to a registrar that’s tuned for it; GoDaddy, I think, is aiming more at resellers of their services.

Don’t have a linux environment handy?
Of course I’m using my own Amazon AWS server for this, but that needn’t be a barrier to entry. I could have used one of my Raspberry Pis. Probably even Cygwin on a Windows PC could be made to work.

Appendix A
How to remove all newline characters from your JSON file

Let’s say you have a nice JSON file which was created for you from the Documentation exercises called godaddy-json-register. It will contain lots of newline (“\n”) characters, assuming you’re using a Linux server. Remove them and put the output into a file called compact-json:

$ tr -d '\n' < godaddy-json-register > compact-json

I like this because then I can still use curl rather than wget to make my API calls.
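A quick demonstration of the same trick on a throwaway file (the file names here are just for the demo):

```shell
# Create a small multi-line file, then flatten it with tr
printf '{\n"a": 1\n}\n' > json-demo
tr -d '\n' < json-demo > compact-demo
cat compact-demo
# prints: {"a": 1}
```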

Appendix B
What an expiring domain looks like in whois

Run this from a linux server
$ whois <expiring-domain>

Domain Name:
Creation Date: 2010-09-28T15:55:56Z
Registrar Registration Expiration Date: 2016-09-27T21:00:00Z
Domain Status: clientDeleteHold
Domain Status: clientDeleteProhibited
Domain Status: clientTransferProhibited

You see that Domain Status: clientDeleteHold? You don’t get that for regular domains whose registration is still good. They’ll usually have the two lines I show below that, but not that one. This is shown for my desired domain just a few days after its official expiration date.
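If you want to script this check, a sketch that greps saved whois output for the telltale status line (sample text copied from above):

```shell
# Detect the delete-hold status in saved whois output (sample text)
whois_out='Domain Status: clientDeleteHold
Domain Status: clientDeleteProhibited
Domain Status: clientTransferProhibited'
if printf '%s\n' "$whois_out" | grep -q 'clientDeleteHold'; then
  status=expired-hold
else
  status=normal
fi
echo "$status"
# prints: expired-hold
```

In practice you would capture real whois output first, e.g. whois <expiring-domain> > whois.out, and grep that.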

We have shown that GoDaddy’s API works and have provided simple scripts which can automate what is known as domain drop catching. This approach should attempt to register a domain within a couple of seconds of its being released – if we’ve done everything right. The GoDaddy API results are a little unstable, however.

References and related
If you don’t mind paying, there are commercial domain drop catching services.
ICANN’s web site with the domain lifecycle infographic.
GoDaddy’s API documentation:
More about Raspberry Pi:
I really wouldn’t bother with Cygwin – just get your hands on real Linux environment.
Curious about some of the curl options I used? Just run curl --help. I left out the descriptions of the switches because they didn’t fit into the narrative.
Something about my Amazon AWS experience from some years ago.
All the perl tricks I used would take another blog post to explain. It’s just stuff I learned over the years so it didn’t take much time at all.
People who buy and sell domains for a living are called domainers. They are professionals and my guide will not make you competitive with them.

Posted in DNS, Linux, Perl, Raspberry Pi, Web Site Technologies

Web forms: creating today’s Open Relays

Maybe it’s just me, but I’ve been receiving a lot of non-delivery reports (NDRs) in the past couple of weeks. I happen to receive email addressed to webmaster@<large company>.com, and I am also knowledgeable about how email works. This puts me in the rare position of knowing how to interpret the many NDRs I started to see.

The details
This picture shows the actual state of my inbox if I look for NDRs by searching for the word undeliverable:


Included is the preview panel showing the contents of one of those messages from Wells Fargo.

Personally, I can live with this level of clutter in my inbox – like many IT folks I actually get hundreds of messages a day. But I put this into a larger context. As a responsible Netizen (not sure if anyone really uses that term anymore, but it was a useful one!) I feel a responsibility to keep the Internet running in an orderly way and to stop the abusers as quickly as possible. After all, I owe my job to a well-functioning Internet. So rather than do the easy thing – write rules to shunt these messages aside, let Clutter go to work, or mark them as junk – I decided to go after the source. Actually sources, because you may have noticed that multiple domains are involved.

First analysis
I thought about how these emails could have originated. OK, I figured, someone spoofs the email address webmaster@<large company>.com and finds a reputable mail server somewhere on the Internet; this mail server has to be an open relay. But the first few mail servers I checked out on Mxtoolbox showed no problems or complaints whatsoever – sterling reputations. Turns out that way of thinking is soooo 20th century. Excusable in my case because I was already running sendmail in the 20th century, and in those days running an open relay was one of the biggest worries.

Upon further reflection
Someone mentioned it’s probably a web form, and I thought, yes! Did you ever see those “Share this web page” forms that let you enter your email address and a friend’s, sending your friend something which apparently comes from you? What if, in addition, you could add your own custom message? Well, that’s the modern way of turning a mail server into an open relay.

The fatal recipe
A web form with the following properties is an open relay enabler:

– permits setting sender address
– permits setting recipient address
– has a comment field which will get sent along to the recipient
– does not employ captcha technology

Only visible from its side effects
The thing is, I am only receiving the NDRs – in other words, only those emails which had a spoofed sender address (my webmaster address) and a recipient address which for whatever reason refused the message. The “open relay,” failing to deliver the message, sends the failure notice to the spoofed sender: my webmaster account. But the NDR does not contain the original message, just vestigial hints about it: what server it was sent from, who the recipient and sender were, what the subject line was (sometimes), and when the receiving system complained. If a spam email was successfully sent in my name, I never get to see it!

But I was able to actually find one of these dangerous forms that is being abused. Here it is:


Of course some of these forms are more restrictive than others. And almost all share the characteristic that they always put certain words into the subject and perhaps the body as well, which is beyond the control of the abuser. But that free-form message field is gold for the abuser and allows them to put their spammy or malicious message into the body of the email.

So after checking out a few of these domains for open relay and coming up empty, I do think all the abuse was done through too-lenient web pages. So I guess that is the current method of creating a de facto open relay.

I’ve written very well-informed emails, initially trying to send to abuse@<domain>.com. But I also got more creative, in some cases tracing the domain to an ISP and looking up how to contact that ISP’s abuse department. I’m kind of disappointed that Wells Fargo hasn’t responded. Many other ISPs did. I believe that some have already corrected the problem. Meanwhile new ones crop up.

Over the weeks I’ve worked – successfully – with several offenders. Each one presented unique aspects and I had to do some IT detective work to track down someone who would be likely to respond. The ones who haven’t cooperated at all are overseas. Here is the wall of shame: (handles emails for web site)

After about a week Wells Fargo did give a brief reply. They did not ask for any details. But the spam from their server did stop as far as I can tell.

If it turns into a never-ending battle I will give up, except perhaps to spend a few minutes a day.

Permanent fix
I don’t know the best way to fix this. I used to be a fan of SPF, but its limitations make it unworkable for some large companies which have too many third parties sending email on their behalf. I guess Google is pushing DMARC, but I haven’t had time to think through whether it’s feasible for large enterprises.

Poorly constructed web forms are the new open relay enablers. Be very careful before creating such a form and at a minimum use good captcha technology so its usage cannot be automated.

This is speculation but I would not be surprised to learn that there is a marketplace for a listing of all the poorly constructed web forms out there – that information could be very valuable to spammers who have been increasingly shut out of our inboxes by improved anti-spam detection.

References and related
I found this site helpful in finding valid contacts for a domain: you enter the domain and it spits back a couple of valid abuse contact addresses.
I only reluctantly use Mxtoolbox. It’s like a necessary evil, so I don’t want to give out a link for it. Probably nearly as good for checking a mail server’s reputation is They’re not trying to sell you anything.
DMARC – perhaps the email authentication mechanism of the future.
My old post advocating SPF, which just never caught on…
PHPMailer remote code execution explanation, which takes advantages of web forms used to send email.

Posted in Exchange Online, Internet Mail | Tagged , , , , , | Leave a comment

Consumer Tech: Getting pictures off the Samsung Galaxy S7

This is simple enough, but I keep forgetting how to do it since I only do it every few months. And the options provided seem almost limitless. Still, this approach works best in my opinion.

The details
1. Plug the USB cable from the phone into the PC.
2. You may see an initial pop-up asking what you’d like to do. I would choose Import files.
3. Look in File Explorer for the phone. You’ll see when you expand it that there are no files beneath it.
4. Go to the phone. Pull down the status bar by dragging from the top.
5. One of the notifications concerns what to do when the USB cable is plugged in. The default is charge. Change it to share files.
6. The phone does not remember this setting – you need to repeat this every time you plug it into a PC and want to transfer your pictures! At least that’s my experience.
7. Now you can expand the phone in File Explorer and find your pictures in a DCIM folder.

The old-fashioned way of using USB cable to transfer pictures is best. They’ve moved things around however so older advice is no longer applicable.

Posted in Consumer Tech | Tagged | Leave a comment

Consumer Tech: Fitbit Charge tracker disconnected solution

I don’t want to oversell this solution. But let’s face it, you can lose hours – and you probably will – if you start rummaging through Fitbit’s own community forums for what to do when your Fitbit Charge or Charge HR doesn’t sync. You pick up a lot of bad, irrelevant and desperate advice.

What worked for me – the long story

Obviously there can be many reasons this may be happening: Bluetooth is off, Bluetooth pairing has been dropped, perhaps a low battery – but those are things you’d think of on your own, right? And anyway they’d be accompanied by other symptoms.

I have a Windows phone. Fitbit has an app through the Windows store. The syncing has always been very finicky. With my Charge HR I would sometimes have to try and re-try the sync for several minutes. Other times it would work right away. I couldn’t use either my home laptop or home desktop computer – both Dells – because the Windows 10 upgrade I did seemed to have wiped out the Bluetooth driver.

Then one day my spouse bought a Fitbit Alta and the helpful guy at Best Buy “helpfully” added her Fitbit and all her information to my account. You see I had commandeered her Samsung phone as well in a desperate attempt to find some device that would sync my Charge HR. It was a total mess. One day her steps overrode my steps and got synced backwards to my Charge! And it was 6000 steps fewer! So I got the idea to log out of my account. I logged back in and the sync worked quickly (quick means about 30 seconds in my experience). Since then I’ve done that a couple more times and both times I was able to sync right away after logging back in.

The summary
For Windows Phone, when you can’t sync and see the message Tracker disconnected even though you know full well it has good batteries, it helps to log out of your Fitbit account and log right back in.

Syncing Fitbits is a finicky business in my experience. Their online help is mediocre and will just as likely lead you down the wrong path. Oh, and the wireless dongle that came with my Charge doesn’t fit the device! But I still like the devices overall – guess I got used to them. Hopefully this trick to sync a Charge or Charge HR will help someone.

But don’t get me started on what happens when your battery begins to go.

Posted in Consumer Tech | Tagged , , | Leave a comment

Spousal request for slideshow on TV – fail

Some things are just a lot harder than they should be. Given that I have two Amazon Firesticks for TVs, and tons of pictures on the Google cloud, wouldn’t it be great if while working at home my spouse could casually view a slideshow – sort of like using the TV as a giant electronic picture frame. Can’t be too hard, right? That was a long-standing request, which started more like “I want to see our pictures on the TV.” Then along came a request to show a home movie through the TV. Together these things broke through my wall of indifference and I was inspired to find a solution. Couldn’t be that hard, right?

Ha, little did I know.

Some details
List of technologies tried and (mostly) discarded
– physical HDMI cable
– Miracast
– Plex
– Kodi
– AllConnect by Tuxera

Some solutions came close, some not so much. Here are pros and cons of each in the order I tried them.

HDMI cable from laptop to TV
Well at least it actually works (see Miracast entry below).
Working with actual cables – no fun. Ties up your laptop full time.
Probably fine if your need is very infrequent and you have spare time to mess with the cables.

Miracast
What it is
If it worked, this would be like having a wireless HDMI cable. So you’d cast from, say, a laptop directly to your Firestick.
Pros? None, I guess, as it doesn’t work. In principle it would be like using HDMI but without messing with the cable. You can mirror your display wirelessly from your laptop, then set your Firestick to permit being used, but in the end it just doesn’t work. My friend actually called Amazon support on this and they confirmed that they do not support Miracast from PCs.
At best it would tie up your laptop full time casting its screen to your TV. Doesn’t sound that great to me. Those who do use Miracast find it unstable in any case.

Plex
What it is
A client/server technology. It is kind of slick and designed to be consumer friendly. You install the Plex server on your PC.
It wasn’t too hard to get going. The Plex server can be used with other apps so it’s a generally good thing to have in any case. The Plex app is available on the Amazon store.
If you have home movies on your PC they play really nicely, I’ll give it that.
Your stuff is organized into sensible collections. Browsing through lots of folders of pictures is pretty easy.
The slideshow terminates at the last picture and stays there. There is no looping, which is bizarre since it’s otherwise so slick.

Kodi
What it is
As far as I can tell it’s an open source media client.
It can work from a bunch of different sources. I never did get SMB sharing to work, but once I started playing with UPnP I realized I could aim it at my Plex server! And that worked.
Requires you to jailbreak your Firestick, so it’s not a smooth or pleasant installation – it requires “sideloading” from an Android device. I can drill down into a folder of photos, but once I click slideshow the screen turns black. The thumbnails show up, however. Also, I read that it reads the EXIF meta information from each picture in a folder, which will take forever on a typical folder with hundreds of pictures. That’s a non-starter.

AllConnect by Tuxera
What it is
You would need the app installed on an Android device as well as the Firestick. You then in principle use your Android device to control what gets displayed on your Firestick. This is a casting technology, in other words.
Like a supported version of Kodi – it’s an app right in the Amazon store. Again it was compatible with my Plex server, which was nice.
Works like crap. You can show one picture at a time. It loses the connection. Slideshow mode doesn’t work. Forces you to pick one photo at a time to add to your slideshow. I don’t think so!
Ties up an Android device, so that ain’t so great either.
Also, because of multiple devices involved it’s a little slow (a couple seconds) to switch between pictures, painting a refresh thing on your screen while you wait.

All approaches fail. Plex comes closest to being usable. AllConnect is horrible, Kodi holds promise for some day, Miracast is a joke. An HDMI cable is a capable fallback solution.

Posted in Consumer Tech | Tagged , , , | Leave a comment

Powershell winrm client error explained

This particular Powershell error I came across yesterday is not well explained in other forums. I present the error and the explanation though not necessarily the solution.

The details
I recently received my new PC running Windows 8.1. On rare occasions I use Windows Powershell to do some light Exchange Online administration, but I am a complete Powershell novice. I have previously documented how to get Powershell to work through a proxy, which is also poorly documented. To repeat, these steps work for me:

> $credential = Get-Credential
> $drj = New-PSSessionOption -ProxyAccessType IEConfig
> $exchangeSession = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri "-liveid/" -Credential $credential -Authentication "Basic" -AllowRedirection -SessionOption $drj
> Import-PSSession $exchangeSession

And that had always worked under Windows 7. Now this is what I get:

New-PSSession : [] Connecting to remote server failed with the following
error message : The WinRM client cannot process the request. Basic authentication is currently disabled in the client
configuration. Change the client configuration and try the request again. For more information, see the
about_Remote_Troubleshooting Help topic.
At line:1 char:20
+ $exchangeSession = New-PSSession -ConfigurationName Microsoft.Exchange -Connecti ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : OpenError: (System.Manageme....RemoteRunspace:RemoteRunspace) [New-PSSession], PSRemotin
    + FullyQualifiedErrorId : -2144108321,PSSessionOpenFailed

What’s up with that? I independently checked my account credentials – they were correct. Eventually I learned there is a winrm command which can shed some light on this. In particular this command shows the problem quite clearly:

> winrm get winrm/config/client

    NetworkDelayms = 5000
    URLPrefix = wsman
    AllowUnencrypted = false [Source="GPO"]
        Basic = false [Source="GPO"]
        Digest = false [Source="GPO"]
        Kerberos = true
        Negotiate = true
        Certificate = true
        CredSSP = false
        HTTP = 5985
        HTTPS = 5986

So even though I have local admin rights, and I launch Powershell (found in C:\Windows\System32\WindowsPowerShell\v1.0) as administrator, still this rights restriction exists and cannot as far as I know be overridden. The specific issue is that a GPO (group policy) has been enabled that prevents use of Basic authentication.

New idea – try Kerberos authentication
More info about our command is available:

> get-help new-pssession -full|more

You see that another option for the Authentication switch is Kerberos. So I tried that:
> $exchangeSession = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri "-liveid/" -Credential $credential -Authentication "Kerberos" -AllowRedirection -SessionOption $drj

This produced the unhappy result:

New-PSSession : [] Connecting to remote server failed with the following error message : The WinRM
client cannot process the request. Setting proxy information is not valid when
the authentication mechanism with the remote machine is Kerberos. Remove the
proxy information or change the authentication mechanism and try the request
again. For more information, see the about_Remote_Troubleshooting Help topic.
At line:1 char:20
+ $exchangeSession = New-PSSession -ConfigurationName Microsoft.Exchange
-Connecti ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : OpenError: (System.Manageme....RemoteRunspace:Re
   moteRunspace) [New-PSSession], PSRemotingTransportException
    + FullyQualifiedErrorId : -2144108110,PSSessionOpenFailed

So it seems that because I use a proxy to connect, Kerberos authentication is not an option. Drat. Digest? Disabled in my client as well.

Unless the security and AD folks can be convinced to make an exception for me to this policy I won’t be able to use this computer for Powershell access to Exchange Online. I guess my home PC would work however. It’s in the cloud after all.

So I tried my home PC. Initially I got an access denied. It was that time of the month when I had to change my password yet again, sigh, which I learned only by doing a traditional login. With my new password things proceeded further, but in response to Import-PSSession $exchangeSession I got this new error:

Import-PSSession : Files cannot be loaded because running scripts is disabled on this system. Provide a valid
certificate with which to sign the files.
At line:1 char:2
+  Import-PSSession $exchangeSession
+  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidOperation: (:) [Import-PSSession], PSInvalidOperationException
    + FullyQualifiedErrorId : InvalidOperation,Microsoft.PowerShell.Commands.ImportPSSessionCommand

A quick duckduckgo search shows that this command helps:

> Set-ExecutionPolicy RemoteSigned

And with that, voila, the import worked.

2nd solution

I finally found a Windows server which I have access to and which isn’t locked down with that restrictive GPO. I was able to set up my environment on it and it is usable.

3rd solution

I see that as of August 2016 there is an alpha version of Powershell that runs on CentOS 7.1. That could be pretty cool since I love Linux. But I’d have to spin up a CentOS 7.1 instance on Amazon since my server is a bit older (CentOS 6.6) and (I tried it) doesn’t run it correctly:

$ sudo powershell

powershell: /usr/lib64/ version `GLIBCXX_3.4.18' not found (required by powershell)
powershell: /usr/lib64/ version `CXXABI_1.3.5' not found (required by powershell)
powershell: /usr/lib64/ version `GLIBCXX_3.4.14' not found (required by powershell)
powershell: /usr/lib64/ version `GLIBCXX_3.4.15' not found (required by powershell)

The instructions for installation are here.
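Those errors mean the CentOS 6.6 box’s libstdc++ is simply too old for the powershell binary. A quick way to see which GLIBCXX symbol versions a given libstdc++ actually provides is to grep them out of the library itself. The helper name here is my own invention, and the path in the usage comment is the typical 64-bit CentOS location:

```shell
#!/bin/sh
# Hypothetical helper: list the GLIBCXX version symbols embedded in a
# shared library, oldest first. grep -a treats the binary as text, so
# no binutils/strings package is needed.
glibcxx_versions() {
    grep -ao 'GLIBCXX_[0-9][0-9.]*' "$1" | sort -uV
}
# Typical usage on the CentOS 6 box that failed:
#   glibcxx_versions /usr/lib64/libstdc++.so.6 | tail -3
# If GLIBCXX_3.4.18 is missing from the output, this binary cannot run.
```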

Not every problem has a simple solution, but here I’ve at least documented the issues more clearly than they are documented elsewhere on the Internet.

References and related
Powershell for Linux? Yes! Instructions here.

Posted in Admin, CentOS | Tagged , , , , , | Leave a comment

Solution to this week’s NPR puzzle using simple Linux commands

Every now and then the weekend puzzle is particularly amenable to partial solution by use of simple Linux commands. I suspected such was the case for this week’s, and I was right.

The challenge for this week
Take a seven-letter word of one syllable and add the consecutive letters “it” somewhere in the middle to create a nine-letter word of four syllables.

The Linux command-line method of solution
On a CentOS system there is a file with words, lots of words. It’s /usr/share/dict/linux.words:

$ cd /usr/share/dict; wc linux.words

479829  479829 4953699 linux.words

So, 479829 words! A lot of them are junk words, but the real ones are in there too. This file comes from the RPM package words-3.0-17.el6.noarch.

So here’s a sort of stream-of-consciousness of a Unix person solving the puzzle without doing too much work or too much thinking:

How many seven-letter words are there? First what’s an expression that can answer that? I think this is it but let’s check:

$ egrep '^[a-z]{7}$' linux.words|more


OK, that egrep expression is right. So the seven-letter word count is then:

$ egrep '^[a-z]{7}$' linux.words|wc -l


That’s a lot – too many to eyeball. OK, so how many nine-letter words are there?

$ egrep '^[a-z]{9}$' linux.words|wc -l


Wow, even more.

OK, we have an idea, based not on what may be the best approach, but on which Linux commands we know inside and out. The idea is to start from the nine-letter words which contain “it”, remove the “it”, and then match the resulting seven-letter character strings against our dictionary to see which are actually words. We know how to do that. The hope is the resulting list will be small enough to review by hand.

How many nine-letter words contain the consecutive characters “it”?

$ egrep '^[a-z]{9}$' linux.words|grep it|wc -l


They look like this:


so it would take forever to go through. If we had a dictionary with syllable counts we could really narrow it down. I think I’ve seen one, but I’d have to dig it up. We introduce sed to remove the “it” from these words:

$ egrep '^[a-z]{9}$' linux.words|grep it|sed 's/it//'|more


There are more efficient ways to loop through these results using xargs, but I’m old school and have memorized this older construct which I use:

$ egrep '^[a-z]{9}$' linux.words|grep it|sed 's/it//'|while read line; do
> grep $line linux.words >> /tmp/lw
> done

We look at the resulting file and find we made a little goof – we didn’t limit the resulting matches to seven characters:

$ more /tmp/lw


But that’s easily corrected:

$ cd /tmp; egrep '^[a-z]{7}$' lw > lw2
$ wc -l lw2

376 lw2

Now that’s a number we can review by hand. Very few of these have only one syllable:

$ more lw2


I quickly reviewed the list and the answer popped out, somewhere towards the end – you can’t miss it.

Friday update – the solution
The 7-letter word that pops out at you? Reigned, which you immediately see becomes reignited – nine letters and four syllables!

Want to do this on your Raspberry Pi?
The dictionary file there is /usr/share/dictd/wn.index, but you probably don’t have it by default, so you’ll need to install a few packages, which is simple enough. This post about Words with Friends explains the packages I used to provide that dictionary. Aside from the location of the dictionary, and the fact that it contains fewer(?) words, everything else should be the same.

We have solved this week’s NPR puzzle without any complex programming, just by using some simple Linux commands.
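The steps above can be condensed into one self-contained sketch. Since not every system has /usr/share/dict/linux.words, this version uses a tiny stand-in dictionary of my own choosing, but the pipeline is the same (with the exact-match anchors that fix the goof from the first pass):

```shell
#!/bin/sh
# Stand-in dictionary -- a real run would use /usr/share/dict/linux.words
cat > /tmp/mini.words <<'EOF'
habit
habitual
reigned
reignited
spanned
EOF

# Take each nine-letter word containing "it", strip the first "it",
# and keep only results that are themselves words in the dictionary.
# The ^...$ anchors ensure an exact seven-letter match.
egrep '^[a-z]{9}$' /tmp/mini.words | grep it | sed 's/it//' |
while read line; do
    egrep "^${line}$" /tmp/mini.words
done
# prints: reigned
```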

References and related
This link is nice because it has a transcription of the puzzle so you don’t have to waste time listening to the whole six-minute segment.
Another NPR puzzle we solved in a similar way.

Posted in Linux, Raspberry Pi | Tagged , , | Leave a comment

Who’s using the UK Ministry of Defence’s IP addresses?

When I first came upon a spear phishing email a few months ago which originated from the UK’s Ministry of Defence, I thought that was pretty queer. Like, how ironic that an invoice scam is coming from a Defence Ministry. Do they have a bad actor? Are we on the cusp of cracking some big international cybertheft? Do we tell them?

Then their address space came up yet again just a few days ago, this time in a fairly different context. Microsoft’s Exchange Online service hosted in the UK cannot deliver email to a particular domain:

8/5/2016 3:32:35 PM - Server at e******** ( returned '450 4.4.312 DNS query failed(ServerFailure)'

I obscured the domain a bit. But it’s an everyday domain which every DNS server I’ve tested resolves just fine. But Microsoft doesn’t see it that way. Several test messages have shown non-delivery reports using these other addresses as well following the “Server at…”:,

The Register sheds the most light – though it still lacks critical details – on what might have happened to the UK Ministry of Defence’s IPv4 address space: namely, that some of it was sold. Here’s the article.

How do you show that all these addresses belong to the Ministry of Defence? You use RIPE: and do a search. It shows that belongs to them. But according to the article in The Register this is no longer true as of late last year.

Why is Microsoft using these IP addresses? No idea. But something I read got me suspecting that some outfits decided to use 25/8 address space as though it were private IP space!
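If you suspect your own network of doing this, the check is trivial – for a /8, membership is just a first-octet comparison. A quick sketch (the helper name is my own):

```shell
#!/bin/sh
# Hypothetical helper: is this IPv4 address inside 25.0.0.0/8, the
# block RIPE shows as registered to the UK Ministry of Defence?
in_mod_block() {
    case "$1" in
        25.*) echo "$1 is inside 25.0.0.0/8" ;;
        *)    echo "$1 is outside 25.0.0.0/8" ;;
    esac
}

in_mod_block 25.44.1.2     # inside
in_mod_block 253.44.1.2    # outside (the dot in the pattern is literal)
```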

References and related

Posted in Admin, Network Technologies | Tagged | 2 Comments