Categories
Linux

Is Mining Bitcoins on the Amazon Cloud the Road to Riches?

Intro
Answer: Not as far as I can tell. Of course it’s irresistible for us technical folks to try. Here are my back-of-the-envelope calculations for my trial.

The details
A currency that’s not linked to any one government’s policies has a lot of attraction. Bitcoin is that currency, and it seems to be catching on. I knew people last year who were “mining” Bitcoins. I had no idea what they were talking about, but I could tell from what they were saying that they were trying to create more currency units. How strange and wonderful, a currency that gets minted by potentially anyone.

I learn mostly by doing, so I decided to download one of those mining programs and see what this was all about.

Well, I still haven’t learned what it’s all about because it’s more complicated than I thought, but I learned what approach not to take. And that’s what I’m sharing here.

I downloaded bfgminer for my CentOS Amazon EC2 server. That in itself was a good exercise as it needed a whole ecosystem of other packages to be installed first. On my system I found I needed ncurses-devel and libcurl-devel, which brought in other packages so that by the time they were installed I had installed all these packages:

libcurl-devel-7.19.7-35.el6
curl-7.19.7-35.el6
libidn-devel-1.18-2.el6
libcurl-7.19.7-35.el6
libssh2-1.4.2-1.el6
ncurses-static-5.7-3.20090208.el6
ncurses-devel-5.7-3.20090208.el6

bfgminer is also designed for a different type of computing environment than mine. Getting it to compile was one thing, but getting it to actually run is another.

At first it found nothing to run on. So I had to recompile, this time specifying:

$ ./configure --enable-cpumining

to enable use of my virtual CPU.

At startup it wanted a pool URL and other things I don’t have. I finally found a way to run it in test mode.

The results
My setup at Amazon could calculate 0.4 megahashes per second. Doesn’t sound too bad, right? Wrong. Looking at some of the relevant numbers and doing a back-of-the-envelope calculation we have:

– total world computing power dedicated to this effort: 60,000 Gigahashes per second
– rate of blocks being written: six per hour
– number of bitcoins in a block: 25
– value of a bitcoin: $78

From this we have:
Minimum computation required for a DIY effort to produce one block:

Effort = 10 minutes * 60 s/min * 60×10^12 hashes/s = 3.6×10^16 hashes =~ 4×10^16 hashes

So with my resources on my small instance this will take me:

time to make a block = 4×10^16 hashes/block / 0.4×10^6 hashes/s = 10^11 s
= 10^11 s * year/(π•10^7 s) =~ 3×10^3 years
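If you’d like to double-check the arithmetic from the command line, here’s a quick sanity check with bc, using the unrounded numbers (the only assumption is that bc is installed, which it is on most Linux systems):

$ echo '(10*60 * 60*10^12) / (0.4*10^6) / (3.1415927*10^7)' | bc -l

That comes out to roughly 2,865 – in round numbers, the 3×10^3 years quoted above.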

Why my fixation on a block as the minimum unit of bitcoins? Because in my five minutes of reading that seems to be the minimum acceptable unit to be able to mint more bitcoins.

By the way, every physicist knows that a year has π•10^7 seconds! That’s one of those useful numbers we carry around in our heads.

For the scientific-notation challenged, I’m saying that it will take me 3,000 years to create a block of bitcoins by myself!

Now let’s have some fun with this. Of course Amazon being the premier cloud hosting company that it is, you can rent (I have heard of this actually being done) 30,000 servers at once.

To be continued…

Appendix
How I measure my hash rate
I ran

$ bfgminer --benchmark

Then I stopped it and got these results:

 [2013-04-16 08:25:39]
Summary of runtime statistics:
 
 [2013-04-16 08:25:39] Started at [2013-04-15 12:55:43]
 [2013-04-16 08:25:39] Pool: Benchmark
 [2013-04-16 08:25:39] CPU hasher algorithm used: c
 [2013-04-16 08:25:39] Runtime: 19 hrs : 29 mins : 56 secs
 [2013-04-16 08:25:39] Average hashrate: 0.4 Megahash/s
 [2013-04-16 08:25:39] Solved blocks: 0
 [2013-04-16 08:25:39] Best share difficulty: 0
 [2013-04-16 08:25:39] Queued work requests: 0
 [2013-04-16 08:25:39] Share submissions: 0
 [2013-04-16 08:25:39] Accepted shares: 0
 [2013-04-16 08:25:39] Rejected shares: 0
 [2013-04-16 08:25:39] Accepted difficulty shares: 0
 [2013-04-16 08:25:39] Rejected difficulty shares: 0
 [2013-04-16 08:25:39] Hardware errors: 0
 [2013-04-16 08:25:39] Efficiency (accepted / queued): 0%
 [2013-04-16 08:25:39] Utility (accepted shares / min): 0.00/min
 
 [2013-04-16 08:25:39] Discarded work due to new blocks: 46376
 [2013-04-16 08:25:39] Stale submissions discarded due to new blocks: 0
 [2013-04-16 08:25:39] Unable to get work from server occasions: 0
 [2013-04-16 08:25:39] Work items generated locally: 0
 [2013-04-16 08:25:39] Submitting work remotely delay occasions: 0
 [2013-04-16 08:25:39] New blocks detected on network: 0
 
 [2013-04-16 08:25:39] Summary of per device statistics:
 
 [2013-04-16 08:25:39] CPU0                | 5s:  0.0 avg:377.4 u:  0.0 kh/s | A:0 R:0 HW:0 U:0.0/m

About the fourth line from the top shows the average hash rate of 0.4 Megahashes/second.

Other resources
Bitcoin exchange value really fluctuates a lot compared to conventional government-sponsored currencies! Go here for the current value.

A timely and informative intro to Bitcoin is available here.

Categories
Admin CentOS Security

Example using iptables, the CentOS firewall

Intro
This document is mostly for my own purposes. I don’t even think this is the best way to run the firewall; it’s just the way I happened to adopt.

Background
My friends tell me ipchains was good software. Unfortunately the guy who wrote iptables, which emulates the features of ipchains, wasn’t at that same skill level, and the implementation shows it. I know I struggled with it a bit.

Motivation
I decided to run a local firewall on my HP SiteScope server because a serious security issue was found with our version’s HTTP server such that it was advisable to lock it down to only those administrators who need access to the GUI.

The details
This was actually implemented on Redhat v 5.6, though I don’t suppose it would be much different on CentOS.

December 2013 update
I also tried this same script provided below on a Redhat 6.4 OS – it worked the exact same way without modification.

The main thing is that I maintain a file with the “firewall rules.” I call it iptables. So I need to remember from invocation to invocation where I store this master file. Here are the contents:

#!/bin/sh
# DrJ, 9/2012
# inspired by http://wiki.centos.org/HowTos/Network/IPTables
# flush all previous rules
export PATH=$PATH:/sbin
iptables -F
#
# our main rules here:
#
# Accept tcp packets on destination port 8080 (HP SiteScope) from select individuals
# DrJ: office, home, vpn
iptables -A INPUT -p tcp -s 192.168.76.56 --dport 8080 -j ACCEPT
iptables -A INPUT -p tcp -s 10.2.6.107 --dport 8080 -j ACCEPT
iptables -A INPUT -p tcp -s 10.3.13.138 --dport 8080 -j ACCEPT
#
# the server itself
iptables -A INPUT -p tcp -s 127.0.0.1 --dport 8080 -j ACCEPT
#
# set dflt policies
# for logging see http://gr8idea.info/os/tutorials/security/iptables5.html
#iptables -A INPUT -j LOG --log-level 4 --log-prefix 'InDrop '
# this is a killer!
#iptables -P INPUT DROP
# just drop what is really the problem...
iptables -A INPUT -p tcp --dport 8080 -j DROP
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
#
# access for loopback
iptables -A INPUT -i lo -j ACCEPT
#
# Accept packets belonging to established and related connections
#
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
#
# Save settings
#
/sbin/service iptables save
#
# List rules
#
iptables -L -v

Of course you have to have iptables running. I do a

$ sudo service iptables status

to verify that. If its status is “not running,” start it up.
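On these RHEL-family systems that’s simply:

$ sudo service iptables start

and, to make sure it comes back after a reboot:

$ sudo chkconfig iptables on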

As mentioned in the comments I tried to be more strict with the rules since I’m used to running firewalls with a DENY All rule, but it just didn’t work out well for me. I lost patience and gave up on that and settled for dropping all traffic to TCP port 8080 except the explicitly permitted hosts, which is good enough for our immediate needs.

Conclusion
This is a simple example of a way to use iptables. It’s probably not the best example, but it’s what I used so it’s better than nothing.

Categories
Linux

Solving this week’s NPR weekend puzzle with a few Linux commands

Intro
I listen to the NPR puzzle every Sunday morning. I’m not particularly good at solving them, however – I usually don’t. But I always consider if I could get a little help from my friendly Linux server, i.e., if it lends itself to solution by programming. As soon as I heard this week’s challenge I felt that it was a good candidate. I was not disappointed…

The details
So Will Shortz says: think of a common word with four letters. Now add O, H and M to that word and scramble the letters to make another common word in seven letters. The words are both things you use daily, and these things might be next to each other.

My thought was: great, we can look through a dictionary for seven-letter words which contain O, H and M. That alone might be sufficiently limiting.

This reminded me of using the built-in Linux dictionary to give me some great tips when playing Words with Friends, which I document here.

On my CentOS system the dictionary is /usr/share/dict/linux.words. It has 479,829 words:

$ cd /usr/share/dict; wc linux.words

That’s a lot. So of course most of them are garbagey words. Here’s the beginning of the list:

$ more linux.words

1080
10-point
10th
11-point
12-point
16-point
18-point
1st
2
20-point
2,4,5-t
2,4-d
2D
2nd
30-30
3-D
3-d
3D
3M
3rd
48-point
4-D
4GL
4H
4th
5-point
5-T
5th
6-point
6th
7-point
7th
8-point
8th
9-point
9th
-a
A
A.
a
a'
a-
a.
A-1
A1
a1
A4
A5
AA
aa
A.A.A.
AAA
aaa
AAAA
AAAAAA
...

You see my point? But amongst the garbage are real words, so it’ll be fine for our purpose.

What I like to do is to build up to increasingly complex constructions. Mind you, I am no command-line expert. I am an experimentalist through-and-through. My development cycle is Try, Demonstrate, Fix, Try, Demonstrate, Improve. The whole process can sometimes be finished in under a minute, so it must have merit.

First try:

$ grep o linux.words|wc

 230908  230908 2597289

OK. Looks like we’ve got some work to do yet.

Next (using up-arrow key to recall previous command, of course):

$ grep o linux.words|grep m|wc

  60483   60483  724857

Next:

$ grep o linux.words|grep m|grep h|wc

  15379   15379  199724

Drat. Still too many. But what are we actually producing?

$ grep o linux.words|grep m|grep h|more

abbroachment
abdominohysterectomy
abdominohysterotomy
abdominothoracic
Abelmoschus
abhominable
abmho
abmhos
abohm
abohms
abolishment
abolishments
abouchement
absmho
absohm
Acantholimon
acanthoma
acanthomas
Acanthomeridae
acanthopomatous
accompliceship
accomplish
accomplishable
accomplished
accomplisher
accomplishers
accomplishes
accomplishing
accomplishment
accomplishments
accomplisht
accouchement
accouchements
accroachment
Acetaminophen
acetaminophen
acetoamidophenol
acetomorphin
acetomorphine
acetylmethylcarbinol
acetylthymol
Achamoth
achenodium
achlamydeous
Achomawi
...

Of course there are words with capitalizations, and words longer and shorter than seven letters – there are lots of tools left to cut this down to manageable size.

With this expression we can simultaneously require exactly seven letters in our words and require only lowercase alphabetical letters: egrep '^[a-z]{7}$'. This is an extended regular expression that matches the beginning (^) and end ($) of the string, only characters a-z, and exactly seven of them ({7}).
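Following the same build-up pattern, we can tack that onto the pipeline and count what survives before browsing it:

$ grep o linux.words|grep m|grep h|egrep '^[a-z]{7}$'|wc -l

352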

With that vast improvement, we’re down to 352 entries, a list small enough to browse by hand. But the solution still didn’t pop out at me. Most of the words are obscure ones, which should automatically be excluded because we are looking for common words. We have:

$ grep o linux.words|grep m|grep h|egrep '^[a-z]{7}$'|more

achroma
alamoth
almohad
amchoor
amolish
amorpha
amorphi
amorphy
amphion
amphora
amphore
apothem
apothgm
armhole
armhoop
bemouth
bimorph
bioherm
bochism
bohemia
bohmite
camooch
camphol
camphor
chagoma
chamiso
chamois
chamoix
chefdom
chemizo
chessom
chiloma
chomage
chomped
chomper
chorism
chrisom
chromas
chromed
chromes
chromic
chromid
chromos
chromyl
...

So I thought it might be inspiring to put, next to each word, the four letters you would have left after taking away the O, H and M, right?

I probably ought to use xargs but never got used to it. I’ve memorized this other way:

$ grep o linux.words |grep m|grep h|egrep '^[a-z]{7}$'|while read line; do
> s=`echo $line|sed s/o//|sed s/h//|sed s/m//`
> echo $line $s
> done|more

sed is an old standard used to do substitutions. sed s/o// for example is a filter which removes the first occurrence of the letter O.
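Incidentally, the three sed invocations can be combined into one by separating the substitution commands with semicolons – each one still removes only the first occurrence of its letter. A sketch of the equivalent assignment line:

s=`echo $line|sed 's/o//;s/h//;s/m//'`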

I could almost use the tr command, as in

> …|tr -d '[ohm]'

in place of all those sed statements, but I couldn’t solve the problem of tr deleting all occurrences of the letters O, H and M. And the solution didn’t jump out at me.

So until I figure that out, use sed. That gives:

achroma acra
alamoth alat
almohad alad
amchoor acor
amolish alis
amorpha arpa
amorphi arpi
amorphy arpy
amphion apin
amphora apra
amphore apre
apothem apte
apothgm aptg
armhole arle
armhoop arop
bemouth beut
bimorph birp
bioherm bier
bochism bcis
bohemia beia
bohmite bite
camooch caoc
camphol capl
camphor capr
chagoma caga
chamiso cais
chamois cais
chamoix caix
chefdom cefd
chemizo ceiz
chessom cess
chiloma cila
chomage cage
chomped cped
chomper cper
chorism cris
chrisom cris
chromas cras
chromed cred
chromes cres
chromic cric
chromid crid
chromos cros
chromyl cryl
...

Friday update
Now that the submission deadline has passed, I can share the section of the listing that contains the answer. It’s here:

...
schmoes sces
schmoos scos
semihot seit
shahdom sahd
shaloms sals
shamalo saal
shammos sams
shamois sais
shamoys says
shampoo sapo
shimose sise
shmooze soze
shoeman sean
sholoms slos
shopman span
shopmen spen
shotman stan
...

See it? I think it leaps out at you:

shampoo sapo

becomes of course:

soap
shampoo

!

They’re common words found next to each other that obey the rules of the challenge. You can probably tell I’m proud of solving this one. I rarely do. I hope they don’t call on me because I also don’t even play well against the radio on Sunday mornings.

Conclusion
I can’t give out the answer just yet because the submission deadline is a few days away. But I will say that the answer pretty much pops out at you when you review the full listing generated with the above sequence of commands. There is no doubt whatsoever.

I have shown how a person with modest command-line familiarity can solve a word problem that was put out on NPR. I don’t think people are much interested in learning the command line because there is no instant gratification and the learning curve is steep, but for some it is still worth the effort. I use it, well, all the time. Solving the puzzle this way took a lot longer to document, but probably only about 30 minutes of actual tinkering.

Categories
Admin Linux Raspberry Pi Security

Generate Pronounceable Passwords

2017 update
Turns out gpw is an available package in Debian Linux, including Raspbian which runs on Raspberry Pi. Who knew? A simple sudo apt-get install gpw will provide it. So I guess the source wasn’t lost at all.

Intro
15 years ago I worked for a company that wanted to require authentication in order to browse to the Internet. I searched around for a tool to generate the passwords we would need.

What I came up with is gpw – generate pronounceable passwords.

The details
I think this approach to secure passwords is no longer best practice, but I still think it has a place for some applications. What it does is analyze a dictionary that you’ve fed it. It then determines the frequency of occurrence of what it calls trigraphs – I guess that’s three consecutive letter combinations. Then it generates random, non-dictionary passwords using those trigraphs, which are presumably wholly or partially pronounceable.

Cute, huh? I’d say one problem is that if the bad guys got wind of this approach, the number of combinations they’d have to try in password cracking would be severely restricted.

Sophos has a recommendation for forming good strong passwords. See their blog post about the 50 worst passwords which contains a link to a video on how to choose a good password.

But I still have a soft spot for this old approach, and I think it’s OK to use it, get your password such as inglogri, add a few non-alpha-numeric characters and come up with a reasonably good, memorable password. Every site you use should really get a different password, and this tool might make that actually feasible.

I run it as:

$ gpw

which produces:

seminour
shnopoos
alespige
olpidest
hastrewe
nsivelys
shaphtra
bratorid
melexseu
sheaditi

Its output changes every time, of course.

I mostly run it this way:

$ gpw 1

which produces only a single password, for instance:

ojavishd

You see how these passwords are sort of like words, but not words? Much more memorable than those completely random ones you are sometimes forced to type and which are impossible to remember?

I noted the location where I pulled it from the web 15 years ago as is my custom, but it is no longer available. So I have decided to make it available. I tweaked it to compile on CentOS with a C++ compiler.

Here is the CentOS v 6 binary for x86_64 architecture and README file.

Here is the tar file with the sources and the binary mentioned above. Run a make clean first to begin building it.

Enjoy!

Potential Problems
I know when we originally used it to assign 15,000 unique passwords, the randomness algorithm was so bad that I believe some people received identical passwords! So the total number of generatable passwords might be severely limited. Please check this before using it in any meaningful way. I would naively expect and hope that it could generate about two to three times the number of words in my dictionary (/usr/share/dict/linux.words, with 479,829 words). But I never verified this.
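A quick way to test the uniqueness for yourself, generating 10,000 passwords in batches of 100 and counting the distinct ones with sort and uniq (a sketch – it assumes, as shown above, that gpw takes a count argument):

$ for i in $(seq 1 100); do gpw 100; done | sort | uniq | wc -l

If the count printed is much less than 10,000, you’re getting repeats.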

2017 update
I ran it, 100 passwords at a time, on my Raspberry Pi for a couple minutes. I created 275,900 passwords, of which 269,407 were unique. Strange. So you get some repeats but you mostly get new passwords.

Further, I was going to tweak the code to generate 9-letter passwords which would presumably be more secure. But they just didn’t look as good to me, and I’ve only ever used it with 8 letters. So I just decided to keep it at 8 letters. You can experiment with that if you want.

More fun with the Linux dictionary
For another fun example using the Linux dictionary see how I solved the NPR weekend puzzle using it, described here.

A note for Debian Linux users (Ubuntu, Raspberry Pi, …)
The dictionary there is /usr/share/dictd/wn.index. You’ll need to update the Makefile to reflect this. This post about Words with Friends explains the packages I used to provide that dictionary.

Conclusion
An old pronounceable password generating program has been dusted off and given back to the open source community. It may not be state-of-the-art, but it has a role for some usages.

References and related
Want truly random passwords? I want to call your attention to random.org’s password generator: https://www.random.org/passwords/

Most people are becoming familiar with the idea of not reusing passwords but I don’t know if everyone realizes why. This article is a comprehensive review of the topic, plus review of password vaults like Lastpass, etc which you may have heard of: https://pixelprivacy.com/resources/reusing-passwords/

Categories
Admin Linux Security

My favorite openssl commands

Intro
openssl is available on almost every operating system. It’s a great tool if you work with certificates regularly, or even occasionally. I want to document some of the commands I use most frequently.

The details

Convert PEM CERTs to other common formats
I just used this one yesterday. I got a certificate in PEM format as is my custom. But not every web server out there is apache or apache-compatible. What to do? I’ve learned to convert the PEM-formatted certificates to other favored formats.

The following worked for a Tomcat server and also for another proprietary web server which was running on a Windows server and wanted a pkcs#12 type certificate:

$ openssl pkcs12 -export -chain -inkey drjohns.key -in drjohns.crt -name "drjohnstechtalk.com" -CAfile intermediate_plus_root.crt -out drjohns.p12

The intermediate_plus_root.crt file contained a concatenation of those CERTs, in PEM format of course.

If you see this error:

Error unable to get issuer certificate getting chain.

, it probably means that you forgot to include the root certificate in your intermediate_plus_root.crt file. You need both intermediate plus the root certificates in this file.

And this error:

unable to write 'random state'

means you are using the Windows version of openssl and you first need to do this:

set RANDFILE=C:\MyDir\.rnd

, where MyDir is a directory where you have write permission, before you issue the openssl command. See https://stackoverflow.com/questions/12507277/how-to-fix-unable-to-write-random-state-in-openssl for more on that.

The beauty of the above openssl command is that it also takes care of setting up the intermediate CERT – everything needed is shoved into the .p12 file. .p12 can also be called .pfx. So a PFX file is the same thing as what we’ve been calling a PKCS12 certificate.

How to examine a pkcs12 (pfx) file

$ openssl pkcs12 -info -in file_name.pfx
It will prompt you for the password a total of three times!

Examine a certificate

$ openssl x509 -in certificate_name.crt -text

Examine a CSR – certificate signing request

$ openssl req -in certificate_name.csr -text

Examine a private key

$ openssl rsa -in certificate_name.key -text

Create a SAN (subject alternative name) CSR

This is a two-step process. First you create a config file with your alternative names and some other info. Mine, req.conf, looks like this:

[req]
default_bits = 4096
prompt = no
default_md = sha256
req_extensions = req_ext
distinguished_name = dn
 
[ dn ]
C=US
ST=New Jersey
CN = drjohnstechtalk.com
 
[ req_ext ]
subjectAltName = @alt_names
 
[ alt_names ]
DNS.1 = drjohnstechtalk.com
DNS.2 = johnstechtalk.com
IP.3 = 50.17.188.196

Note this shows a way to combine IP address with a FQDN in the SAN. I’m not sure public CAs will permit IPs. I most commonly work with a private PKI which definitely does, however.

Then you run openssl like this, referring to your config file (updated for the year 2022. In the past we used 2048 bit length keys but we are moving to 4096):
$ openssl req -new -nodes -newkey rsa:4096 -keyout mykey.key -out myreq.csr -config req.conf

This creates the private key and CSR in one go. Note that it’s recommended to repeat your common name (CN) in one of the alternative names so that’s what I did.

Let’s examine it to be sure it contains the alternative names:

$ openssl req -text -in myreq.csr

Certificate Request:
    Data:
        Version: 0 (0x0)
        Subject: C=US, ST=New Jersey, CN=drjohnstechtalk.com
        ...
        Attributes:
        Requested Extensions:
            X509v3 Subject Alternative Name:
                DNS:drjohnstechtalk.com, DNS:johnstechtalk.com, DNS:www.drjohnstechtalk.com, DNS:www.johnstechtalk.com
    Signature Algorithm: sha256WithRSAEncryption
         2a:ea:38:b7:2e:85:6a:d2:cf:3e:28:13:ff:fd:99:05:56:e5:
         ...

Looks good!

SAN on an Intranet with a private PKI infrastructure including an IP address
On an Intranet you may want to access a web site by IP as well as by name, so if your private PKI permits, you can create a CSR with a SAN which covers all those possibilities. The SAN line in the certificate will look like this example:

DNS:drjohnstechtalk.com, IP:10.164.80.53, DNS:johnstechtalk.com, DNS:www.drjohnstechtalk.com, DNS:www.johnstechtalk.com

Note that additional IP:10… with my server’s private IP? That will never fly with an Internet CA, but might be just fine and useful on a corporate network. The advice is to not put the IP first, however. Some PKIs will not accept that. So I put it second.


Create a simple CSR and private key

$ openssl req -new -nodes -out myreq.csr

This prompts you to enter values for the country code, state and organization name. As a private individual, I am entering drjohnstechtalk.com for organization name – same as my common name. Hopefully this will be accepted.

Look at a certificate and certificate chain of any server running SSL

$ openssl s_client -showcerts -connect host:port

Cool shortcut to fetch certificate from any web server and examine it with one command line

$ echo|openssl s_client -servername drjohnstechtalk.com -connect drjohnstechtalk.com:443|openssl x509 -text

Alternate single command line to fetch and examine in one go

$ openssl s_client -servername drjohnstechtalk.com -connect drjohnstechtalk.com:443</dev/null|openssl x509 -text

In fact the above commands are so useful to me I invented this bash function to save all that typing. I put this in my ~/.alias file (or .bash_aliases, depending on the OS):

# functions
# to unset a function: unset -f foo; to see the definition: type -a foo
certexamine () { echo|openssl s_client -servername "$@" -connect "$@":443|openssl x509 -text|more; }
# examinecert () { echo|openssl s_client -servername "$@" -connect "$@":443|openssl x509 -text|more; }
examinecert () { str=$*;echo $str|grep -q : ;res=$?;if [ "$res" -eq "0" ]; then fqdn=$(echo $str|cut -d: -f1);else fqdn=$str;str="$fqdn:443";fi;openssl s_client  -servername $fqdn -connect $str|openssl x509 -text|more; }

In a 2023 update, I made examinecert more sophisticated and more complex. Now it accepts an argument like FQDN:PORT. Then to examine a certificate I simply type either

$ examinecert drjohnstechtalk.com

(port 443 is the default), or to specify a non-standard port:

$ examinecert drjohnstechtalk.com:8443

The servername switch in the above commands is not needed 99% of the time, but I did get burned once and actually picked up the wrong certificate by not having it present. If the web server uses Server Name Indication – information which you generally don’t know – it should be present. And it does no harm being there regardless.

Example wildcard certificate
As an aside, want to examine a legitimate wildcard certificate, to see how they filled in the SAN field? Yesterday I did, and found it basically impossible to search for precisely that. I recalled that WordPress used a wildcard certificate, and I was right. I think one of those ecommerce sites like Shopify might as well. So you can examine make.wordpress.org, and you’ll see the SAN field looks like this:

 X509v3 Subject Alternative Name:
                DNS:*.wordpress.org, DNS:wordpress.org
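To reproduce that yourself, using the examinecert function defined earlier:

$ examinecert make.wordpress.org

and look for the X509v3 Subject Alternative Name section in the output.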

Verify your certificate chain of your active server

$ openssl s_client -CApath /etc/ssl/certs -verify 2 -connect drjohnstechtalk.com:443

verify depth is 2
CONNECTED(00000003)
depth=3 /C=US/O=The Go Daddy Group, Inc./OU=Go Daddy Class 2 Certification Authority
verify return:1
depth=2 /C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc./CN=Go Daddy Root Certificate Authority - G2
verify return:1
depth=1 /C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc./OU=http://certs.godaddy.com/repository//CN=Go Daddy Secure Certificate Authority - G2
verify return:1
depth=0 /OU=Domain Control Validated/CN=drjohnstechtalk.com
verify return:1
---
Certificate chain
 0 s:/OU=Domain Control Validated/CN=drjohnstechtalk.com
   i:/C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc./OU=http://certs.godaddy.com/repository//CN=Go Daddy Secure Certificate Authority - G2
 1 s:/C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc./OU=http://certs.godaddy.com/repository//CN=Go Daddy Secure Certificate Authority - G2
   i:/C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc./CN=Go Daddy Root Certificate Authority - G2
 2 s:/C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc./CN=Go Daddy Root Certificate Authority - G2
   i:/C=US/O=The Go Daddy Group, Inc./OU=Go Daddy Class 2 Certification Authority
 3 s:/C=US/O=The Go Daddy Group, Inc./OU=Go Daddy Class 2 Certification Authority
   i:/C=US/O=The Go Daddy Group, Inc./OU=Go Daddy Class 2 Certification Authority
---
Server certificate
-----BEGIN CERTIFICATE-----
MIIFTzCCBDegAwIBAgIJAI0kx/8U6YDkMA0GCSqGSIb3DQEBCwUAMIG0MQswCQYD
VQQGEwJVUzEQMA4GA1UECBMHQXJpem9uYTETMBEGA1UEBxMKU2NvdHRzZGFsZTEa
...
SSL-Session:
    Protocol  : TLSv1
    Cipher    : DHE-RSA-AES128-SHA
    Session-ID: 41E4352D3480CDA5631637D0623F68F5FF0AFD3D1B29DECA10C444F8760984E9
    Session-ID-ctx:
    Master-Key: 3548E268ACF80D84863290E79C502EEB3093EBD9CC935E560FC266EE96CC229F161F5EF55DDF9485A7F1BE6C0BECD7EA
    Key-Arg   : None
    Start Time: 1479238988
    Timeout   : 300 (sec)
    Verify return code: 0 (ok)

Wrong way to verify your certificate chain
When you first start out with the verify sub-command you’ll probably do it wrong. You’ll try something like this:

$ openssl s_client -verify 2 -connect drjohnstechtalk.com:443

which will produce these results:

verify depth is 2
CONNECTED(00000003)
depth=3 /C=US/O=The Go Daddy Group, Inc./OU=Go Daddy Class 2 Certification Authority
verify error:num=19:self signed certificate in certificate chain
verify return:0
16697:error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed:s3_clnt.c:983:

Using s_client through a proxy
Yes! Use the -proxy switch, at least with newer openssl implementations.

Using OCSP
I have had limited success so far with Online Certificate Status Protocol verification. But I do have something to provide as an example:

$ openssl ocsp -issuer cert-godaddy-g2.crt -cert crt -no_nonce -no_cert_verify -url http://ocsp.godaddy.com/

Response verify OK
crt: good
        This Update: Nov 15 19:56:52 2016 GMT
        Next Update: Nov 17 07:56:52 2016 GMT

Here I’ve stuffed my certificate into a file called crt and stuffed the intermediate certificate into a file called cert-godaddy-g2.crt. How did I know what URL to use? Well, when I examined the certificate file crt it told me:

$ openssl x509 -text -in crt

...
           Authority Information Access:
                OCSP - URI:http://ocsp.godaddy.com/
...

But I haven’t succeeded running a similar command against certificates used by Google, nor by certificates issued by the CA Globalsign. So I’m clearly missing something there, even though by luck I got the GoDaddy certificate correct.

Check that a particular private key matches a particular certificate
I have to deal with lots of keys and certificates. And certificate re-issues. And I do this for others. Sometimes it gets confusing and I lose track of what goes with what. openssl to the rescue! I find that a matching modulus is pretty much a guarantee that private key and certificate are a match.

Private key – find the modulus example
$ openssl rsa -modulus -noout -in key

Modulus=BADD4167E98A1B51B3F40EF3A0F5E2AC268F37BAC45388A401FB677CEA240CD3530D39B81A450DF061B1145AFA9B00718EF4DBB3E552D5D999C577A6424706782DCB4426D2E7A9615BBC90CED300AD91F63E0E0EA9B4B2D24649CFD44E9735FA7E91EEC939A5B1D8667ADD62CBD15EB01BE0E03EC7532ACEE621386FBADF0161183AB5BDD94D1CFB8A2D5F6B38178A897DB380DC90CEA64C1F149F4B38E845C6C933CBF8F123B1DC411EA2A238B9D9704A43D17F67561F6D4821B721484C6785385BF03CADD91B5F4BD5F9B36F478E74BCAE16B171E3E4AFE3F6C388EA849D792B5C94BD5D279572C8713369D909711FBF0C2B3053380668A2774AFC00F8C911

Public key – find the modulus example
$ openssl x509 -modulus -noout -in crt

Modulus=BADD4167E98A1B51B3F40EF3A0F5E2AC268F37BAC45388A401FB677CEA240CD3530D39B81A450DF061B1145AFA9B00718EF4DBB3E552D5D999C577A6424706782DCB4426D2E7A9615BBC90CED300AD91F63E0E0EA9B4B2D24649CFD44E9735FA7E91EEC939A5B1D8667ADD62CBD15EB01BE0E03EC7532ACEE621386FBADF0161183AB5BDD94D1CFB8A2D5F6B38178A897DB380DC90CEA64C1F149F4B38E845C6C933CBF8F123B1DC411EA2A238B9D9704A43D17F67561F6D4821B721484C6785385BF03CADD91B5F4BD5F9B36F478E74BCAE16B171E3E4AFE3F6C388EA849D792B5C94BD5D279572C8713369D909711FBF0C2B3053380668A2774AFC00F8C911

The key and certificate were stored in files called key and crt, respectively. Here the modulus has the same value so key and certificate match. Their values are random, so you only need to match up the first eight characters to have an extremely high confidence level that you have a correct match.
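Since eyeballing even eight characters of a long hex string is error-prone, a common shortcut – just a convenience, not a change in the logic – is to hash each modulus and compare the short digests instead:

$ openssl rsa -modulus -noout -in key | md5sum
$ openssl x509 -modulus -noout -in crt | md5sum

If the two digests agree, the key and certificate match.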

Generate a simple self-signed certificate
$ openssl req -x509 -nodes -newkey rsa:2048 -keyout key.pem -out cert.pem -days 365

Generating a 2048 bit RSA private key
..........+++
.................+++
writing new private key to 'key.pem'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:US
State or Province Name (full name) []:New Jersey
Locality Name (eg, city) [Default City]:.
Organization Name (eg, company) [Default Company Ltd]:.
Organizational Unit Name (eg, section) []:
Common Name (eg, your name or your server's hostname) []:drjohnstechtalk.com
Email Address []:

Note that for the fields I wished to blank out I entered a “.”

Did I get what I expected? Let’s examine it:

$ openssl x509 -text -in cert.pem|more

Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 16616841832876401013 (0xe69ae19b7172e175)
    Signature Algorithm: sha1WithRSAEncryption
        Issuer: C=US, ST=New Jersey, CN=drjohnstechtalk.com
        Validity
            Not Before: Aug 15 14:11:08 2017 GMT
            Not After : Aug 15 14:11:08 2018 GMT
        Subject: C=US, ST=NJ, CN=drjohnstechtalk.com
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    00:d4:da:23:34:61:60:f0:57:f0:68:fa:2f:25:17:
...

Hmm. It’s only sha1 which isn’t so great. And there’s no Subject Alternative Name. So it’s not a very good CERT.

Create a better self-signed CERT
$ openssl req -x509 -sha256 -nodes -newkey rsa:2048 -keyout key.pem -out cert.pem -days 365

That one is SHA2:

...
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=US, ST=New Jersey, CN=drjohnstechtalk.com
...

365 days is arbitrary. You can specify a shorter or longer duration.

Then refer to it with a -config argument in your openssl req command, for instance to get a SAN field into your self-signed certificate.
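A sketch of that, reusing the req.conf from the SAN section above. One caveat: when invoked with -x509, openssl req looks for an x509_extensions setting rather than req_extensions, so you may need to add a line like x509_extensions = req_ext to the [req] section of the config for the SAN to land in the certificate:

$ openssl req -x509 -sha256 -nodes -newkey rsa:2048 -keyout key.pem -out cert.pem -days 365 -config req.conf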

Listing ciphers
Please see this post.

Fetching the certificates from an SMTP server running TLS

$ openssl s_client -starttls smtp -connect <MAIL_SERVER>:25 -crlf
That’s a good one because it’s hard to do these steps by hand.
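The same -starttls trick works for other protocols openssl knows about. For instance, to fetch the certificates from an IMAP server:

$ openssl s_client -starttls imap -connect <MAIL_SERVER>:143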

Working with Java keytool for Tomcat certificates
This looks really daunting at first. Where do you even start? I recently found the answer. Digicert has a very helpful page which generates the keytool command line you need to create your CSR and provides lots of installation advice. At first I was skeptical and thought you could not trust a third party to have your private key, but it doesn’t work that way at all. It’s just a complex command-line generator that you plug into your own command line. You know, the whole

$ keytool -genkey -alias drj.com -keyalg RSA -keystore drj.jks -dname "CN=drj.com, O=johnstechtalk, ST=NJ, C=US" …

Here’s the Digicert command line generator page.

Another good tool that provides a free GUI replacement for the Java command-line utilities keytool, jarsigner and jadtool is Keystore Explorer.

List info about all the certificates in a certificate bundle

$ openssl storeutl -noout -text -certs cacert.pem |egrep 'Issuer:|Subject:'|more

Appendix A, Certificate Fingerprints
You may occasionally see a reference to a certificate fingerprint. What is it and how do you find your certificate’s fingerprint?

Turns out it’s not that obvious.

Above we showed the very useful command

openssl x509 -text -in <CRT-file>

and the results from that look very thorough, as though this is everything there is to know about this certificate. In fact I thought that for years, but it turns out it doesn’t show the fingerprint!

A great discussion on this topic is https://security.stackexchange.com/questions/46230/digital-certificate-signature-and-fingerprint#46232

But I want to repeat the main points here.

The fingerprint is the hash of the certificate file, but in its raw, 8-bit form. You can choose the hash algorithm and learn the fingerprint with the following openssl commands:

$ openssl x509 -in <CRT-file> -fingerprint -sha1 (for getting the SHA1 fingerprint)

Similarly, to obtain the sha256 or md5 fingerprint you would do:

$ openssl x509 -in <CRT-file> -fingerprint -sha256

$ openssl x509 -in <CRT-file> -fingerprint -md5

Now, you wonder, I know about these useful hash commands from Linux:

sha1sum, sha256sum, md5sum

what is the relationship between these commands and what openssl returns? How do I run the linux commands and get the same results?

It turns out this is indeed possible. But not that easy unless you know advanced sed trickery and have a uudecode program. I have uudecode on SLES, but not on CentOS. I’m still trying to unpack what this sed command really does…

The certificate files we normally deal with (PEM format) are encoded versions of raw data. uudecode can be used to obtain the raw data version of the certificate file like this:

$ uudecode < <(
sed '1s/^.*$/begin-base64 644 www.google.com.raw/;
$s/^.*$/====/' www.google.com.crt
)

This example is for an input certificate file called www.google.com.crt. It creates a raw data version of the certificate file called www.google.com.raw.

Then you can run your sha1sum on www.google.com.raw. It will be the same result as running

$ openssl x509 -in www.google.com.crt -fingerprint -sha1

!

So that shows the fingerprint is a hash of the entire certificate file. Who knew?
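If you don’t have uudecode handy, there is a simpler route to the same raw data: ask openssl itself to re-encode the certificate in its binary (DER) form and hash that directly:

$ openssl x509 -in www.google.com.crt -outform DER | sha1sum

The digest it prints matches the -fingerprint output above, minus the colons.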

Appendix B
To find out more about a particular subcommand:

openssl <subcommand> help

e.g.,

$ openssl s_client help

Conclusion
Some useful openssl commands are documented here. A way to grapple with keytool for Tomcat certificates is also shown as a bonus.

References and related
Probably a better site with similar but more extensive openssl commands: https://www.sslshopper.com/article-most-common-openssl-commands.html

Digicert’s tool for working with keytool.
GUI replacement for keytool, etc; Keystore Explorer.

The only decent explanation of certificate fingerprints I know of: https://security.stackexchange.com/questions/46230/digital-certificate-signature-and-fingerprint#46232

Server Name Indication is described in this blog post.

I’m only providing this link here as an additional reminder that this is one web site where you’ll find a legitimate wildcard certificate: https://make.wordpress.org/ Otherwise it can be hard to find one. Clearly people don’t want to advertise the fact that they’re using them.

Categories
Admin Internet Mail SLES

The IT Detective Agency: emails began piling up this week, no obvious cause

Intro
Today I had my choice of problems I could highlight, but I like this one the best. Our mail server delivers email to a wide variety of recipients. All was going well and it ran pretty much unattended until this week when it didn’t go so well. Most emails were getting delivered, but more and more were starting to pile up in the queues. This is the story of how we unraveled the mystery.

The details
It’s best to work from examples I think. I noticed emails to me.com were being refused delivery as well as emails to rnbdesign.com. The latter is a smaller company so we heard from them the usual story that we’re the only ones who can’t send to them.

So I forced delivery with verbose logging. I’m running sendmail, so that looks like this:

> sendmail -qRrnbdesign.com -Cconfig_file -v

That didn’t work out, producing a no route to host type of error. I did a DNS lookup by hand. That showed one set of results, while sendmail was connecting to an entirely different IP address. How could that be??
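By a DNS lookup by hand I mean something along these lines – first the MX records for the domain, then the address of the mail host (the mail host name below is hypothetical):

> dig +short mx rnbdesign.com
> dig +short a mail.rnbdesign.com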

I was at a loss so I do what I do when I’m desperate: strace. That looks like this:

> strace -f sendmail -qRrnbdesign.com -Cconfig_file -v > /tmp/strace 2>&1

That produced 12,000 lines of output. All the system calls that the process and any of its forked processes invoke. Is that too much to comb through by hand? No, not at all, not when you begin to see the patterns.

I pored over the trace, not knowing what most of it meant, but looking for especially any activity regarding networking and DNS. Around line 6,000 I found it. There was mention of nscd.

For the unaware, the use of nscd (name service cache daemon) might seem innocent enough, or even good-intentioned. What could be wrong with caching frequently used DNS results? The only issue is that it doesn’t work right! nscd derives from UC Berkeley Unix code and has never been supported. I didn’t even like it when I was running SunOS. It caches the DNS queries but ignores TTLs. This is fatal for mail servers or just about anything you can think of, especially on servers that are infrequently booted, as mine are.

I stopped nscd right away:

> service nscd stop

and re-ran the sendmail queue runner (same command as above). The rnbdesign.com emails flowed out instantly! Soon hundreds of stuck emails were flushed out.

Of course for good measure nscd had to be removed from the startup sequence:

> chkconfig nscd off

An IT pro always keeps unsolved mysteries in his mind. This time I knew I also had in hand the solution to an earlier-documented mystery about email to paladinny.com.

Conclusion
nscd might show up in your SLES or OpenSuse server. I strongly suggest disabling it before you wind up with stale DNS values and an extremely hard-to-debug issue.

Case closed!

Categories
Admin Linux Raspberry Pi

Ssh access to your Raspberry Pi from anywhere

Editor’s 2017 note: Lots of great alternatives are discussed in the Comments section.

Intro
I’ve done a couple things with my Raspberry Pi. There’s this post on setting it up without a monitor, keyboard or mouse, and this post on using it to monitor power and Internet connection at my home.

I eventually realized that the Pi could be accessed from anywhere, with one big assumption: that you have your own hosted server somewhere on the Internet that you can ssh to from anywhere. This is the same assumption I used in describing the power monitor application.

The details
I can’t really take any credit for originality here. I just copied what I saw in another post. My only contribution is in realizing that the Pi makes a good platform to do this sort of thing with if you are running it as a server like I am.

What you can do is to create a reverse ssh tunnel. I find this easier and probably more secure than opening up ssh (inbound) on your home router and mapping that to the Pi. So I’m not going to talk about that method.

First, log in to your Pi via ssh.

From that session ssh to your hosted server using syntax like this:

> ssh -f -N -R 10000:localhost:22 username@ip_address_of_your_hosted_server

You can even log out of your Pi now – this reverse tunnel will stay*.

Now to access your Pi from “anywhere,” log into your server like usual, then from that session, login to your Pi thusly:

> ssh -p 10000 pi@localhost

That’s it! You should be logged on after supplying the password to the pi account.

*Except that in my experience the reverse tunnel does not stay! It’s staying up less than two hours.

But I think the approach is sound.

Feb 15th Update
This is a case of RTFM. That same web page I cited above has the necessary settings. I needed to have them on the Pi. It didn’t help when I put them on my Amazon server. Here they are repeated:

TCPKeepAlive yes
ClientAliveInterval 30
#ClientAliveCountMax 30
ClientAliveCountMax 99999
GatewayPorts yes
AllowTcpForwarding yes

This goes into the /etc/ssh/sshd_config file. Make sure you don’t have these mentioned a second time in that file.
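After editing sshd_config, restart the ssh daemon on the Pi so the settings take effect:

> sudo service ssh restart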

With these settings my reverse tunnel has been up all day. It’s a real permanent tunnel now!

Security note
Make sure you modify the default passwords to your Pi before attempting this. You’re potentially exposing your whole home network in creating a reverse tunnel like this so you really have to be careful.

Conclusion
You can use your Raspberry Pi to create a reverse tunnel that allows you to access it from anywhere, assuming you have a cooperating hosted server on the Internet as a mutual meeting point for the ssh sessions. Exercise caution, though, as you are opening up your home network as well.

Currently the tunnel doesn’t stay up for very long – perhaps an hour or so – unless you apply the sshd settings from the Feb 15th update above.

References
Having trouble ssh’ing to your Ras Pi under any conditions? This article explains how to get past one common cause of this problem.

Categories
Internet Mail Linux Perl Raspberry Pi

Raspberry Pi phone home

Intro
In this article I described setting up my Raspberry Pi without ever connecting a monitor, keyboard and mouse to it and how I got really good performance using a UHS SD card.

This article represents my first real DIY project on my Pi – one of my own design. My faithful subscribers will recall my post after Hurricane Sandy in which I reacted to an intense desire to know when the power was back on by creating a monitor for that situation. It relied on extremely unlikely pieces of infrastructure. I hinted that it may be possible to use the Raspberry Pi to accomplish the same thing.

I’ve given it a lot of thought and assembled all the pieces. Now I have a home power/Internet service monitor based on my Pi!

This still requires a somewhat unlikely but not impossible combination of infrastructure:
– your own hosted server in the cloud
– ability to send emails out from your cloud server
– access log files on your cloud server are rolled over regularly
– your Pi and your cloud server are in the same time zone
– Raspberry Pi which is acting as a server (meaning you are running it 24×7 and not rebooting it and fooling with it too much)
– a smart phone to receive alert emails or TXT messages

I used my old-school knowledge of Perl to whip something up quickly. One of these years I have to bite the bullet and learn Python decently, but it’s hard when you are so comfortable in another language.

The details
Here’s the concept. From your Pi you make regular “phone home” calls to your cloud server. This could use any protocol your server is listening on, but since most cloud servers run web servers, including mine, I phone home using HTTP. Then on your cloud server you look for the phone home messages. If you don’t see one after a certain time, you send an alert to an email account. Then, once service – be it power or Internet connectivity – is restored to your house, your Pi resumes phoning home and your cloud monitor detects this and sends a Good message.

I have tried to write minimalist code that yet will work and even handle common error conditions, so I think it is fairly robust.

Set up your Pi
On your Pi you are “phoning home” to your server. So you need a line something like this in your crontab file:

# This gets a file and leaves a timestamp behind in the access log
* * * * * /usr/bin/curl --connect-timeout 30 http://yourcloudserver.com/raspberrypiPhoneHome?`perl -e 'print time()'` > /dev/null 2>&1

Don’t know what I’m talking about when I say edit your crontab file?

> export EDITOR=vi
> crontab -e

That first line is only required for fans of the vi editor.

That part was easy, right? That will have your server “phone home” every minute, 24×7. But we need an aside to talk about time on the Pi.

Getting the right time on the Raspberry Pi
This monitoring solution assumes the Ras Pi and the cloud server are in the same time zone (because we kept it simple). I’ve seen at least a couple of my Raspberry Pi’s where the time zone was messed up so I need to document the fix.

Run the date command
$ date

Sat Apr 29 17:10:13 EDT 2017

Now it shows it is set for EDT so the timezone is correct. At first it showed something like UTC.

Make sure you are running ntp:
$ ntpq -p

     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
+time.tritn.com  63.145.169.3     2 u  689 1024  377   78.380    2.301   0.853
-pacific.latt.ne 68.110.9.223     2 u  312 1024  377  116.254   11.565   5.864
+choppa.chieftek 172.16.182.1     3 u  909 1024  377   65.430    4.185   0.686
*nero.grnet.gr   .GPS.            1 u  106 1024  377  162.089  -10.357   0.459

You should get results similar to those above. In particular the jitter numbers should be small, or at least less than 10 (units are msec for the curious).

If you’re missing the ntpq command then do a

$ sudo apt-get install ntp

Set the correct timezone with a

$ sudo dpkg-reconfigure tzdata

and choose Americas, then New York, or whatever is appropriate for your geography. The Internet has a lot of silly advice on this point so I hope this clarifies it.

Note that you need to do both things. In my experience time on Raspberry Pis tends to drift so you’ll be off by seconds, which is a bad thing. ntp addresses that. And having it in the wrong timezone is just annoying in general as all your logs and file times etc will be off compared to how you expect to see them.

On your server
Here is the Perl script I cooked up. Some modifications are needed for others to use, such as email addresses, access log location and perhaps the name and switches for the mail client.

So without further ado, here is the monitor script:

#!/usr/bin/perl
# send out alerts related to Raspberry Pi phone home
# this is designed to be called periodically from cron
# DrJ - 2/2013
#
# to test good to error transition,
# call with a very small maxDiff, such as 0!
use Getopt::Std;
getopts('m:d'); # maximum allowed time difference
$maxDiff = $opt_m;
$DEBUG = 1 if $opt_d;
unless (defined($maxDiff)) {
  usage();
  exit(1);
}
# use values appropriate for your situation here...
$mailsender = '[email protected]';
$recipient = '[email protected]';
$monitorName = 'Raspberry Pi phone home';
# access line looks like:
# 96.15.212.173 - - [02/Feb/2013:22:00:02 -0500] "GET /raspberrypiPhoneHome?136456789 HTTP/1.1" 200 455 "-" "curl/7.26.0"
$magicString = "raspberrypiPhoneHome";
# modify as needed for your situation...
$accessLog = "/var/log/drjohns/access.log";
#
# pick up timestamp in access file
$piTime = `grep $magicString $accessLog|tail -1|cut -d\? -f2|cut -d' ' -f1`;
$curTime = time();
chomp($piTime);
$date = `date`;
chomp($date);
# your PID file may be somewhere else. It tells us when Apache was started.
# you could comment out these next lines just to get started with the program
$PID = "/var/run/apache2.pid";
($atime,$mtime,$ctime) = (stat($PID))[8,9,10];
$diff = $curTime - $piTime;
print "magicString, accessLog, piTime, curTime, diff: $magicString, $accessLog, $piTime, $curTime, $diff\n" if $DEBUG;
print "accessLog stat. atime, mtime, ctime: $atime,$mtime,$ctime\n" if $DEBUG;
if ($curTime - $ctime < $maxDiff) {
  print "Apache hasn't been running long enough yet to look for something in the log file. Maybe next time\n";
  exit(0);
}
#
$goodFile = "/tmp/piGood";
$errorFile = "/tmp/piError";
#
# Think of it as state machine. There are just a few states and a few transitions to consider
#
if (-e $goodFile) {
  print "state: good\n" if $DEBUG;
  if ($diff < $maxDiff) {
    print "Remain in good state\n" if $DEBUG;
  } else {
# transition to error state
    print "Transition from good to error state at $date, diff is $diff\n";
    sendMail("Good","Error","Last call was $diff seconds ago");
# set state to Error
    system("rm $goodFile; touch $errorFile");
  }
} elsif (-e $errorFile) {
  print "state: error\n" if $DEBUG;
  if ($diff > $maxDiff) {
    print "Remain in error state\n" if $DEBUG;
  } else {
# transition to good state
    print "Transition from error to good state at $date, diff is $diff\n";
    sendMail("Error","Good","Service restored. Last call was $diff seconds ago");
# set state to Good
    system("rm $errorFile; touch $goodFile");
  }
} else {
  print "no state\n" if $DEBUG;
  if ($diff < $maxDiff) {
    system("touch $goodFile");
    sendMail("no state","Good","NA") if $DEBUG;
    print "Transition from no state to Good at $date\n";
# don't send alert
  } else {
    print "Remain in no state\n" if $DEBUG;
  }
}
####################
sub sendMail {
($oldState,$state,$additional) = @_;
print "oldState,state,additional: $oldState,$state,$additional\n" if $DEBUG;
$subject = "$state : $monitorName";
open(MAILX,"|mailx -r \"$mailsender\" -s \"$subject\" $recipient") || die "Cannot run mailx $mailsender $subject!!\n";
print MAILX qq(
$monitorName is now in state: $state
Time: $date
Former state was $oldState
Additional info: $additional
 
- sent from pialert program
);
close(MAILX);
 
}
###############################
sub usage {
  print "usage: $0 -m <maxDiff (seconds)> [-d (debug)]\n";
}

This is called from my server’s crontab. I set it like this:

# Call monitor that sends an alert if my Raspberry Pi fails to phone home - DrJ 2/13
0,5,10,15,20,25,30,35,40,45,50,55 * * * * /home/drj/pialert.pl -m 300 >> /tmp/pialert.log

My /tmp/pialert.log file looks like this so far:

Transition from no state to Good at Wed Feb  6 12:10:02 EST 2013
Apache hasn't been running long enough yet to look for something in the log file. Maybe next time
Apache hasn't been running long enough yet to look for something in the log file. Maybe next time
Transition from good to error state at Fri Feb  8 10:55:01 EST 2013, diff is 420
Transition from error to good state at Fri Feb  8 11:05:02 EST 2013, diff is 1

The last two lines result from a test I ran – I commented out the crontab entry on my Pi to be absolutely sure it was working.

The error message I got in my email looked like this:

Subject: Error : Raspberry Pi phone home
 
Raspberry Pi phone home is now in state: Error 
Time: Fri Feb  8 10:55:01 EST 2013
Former state was Good
Additional info: Last call was 420 seconds ago
 
- sent from pialert program

Why not use Nagios?
Some will realize that I replicated functions that are provided in Nagios. Why not just hang my stuff off that well-established monitoring software? I considered it, but I wanted to stay light. I think my approach, while more demanding of me as a programmer, keeps my server unburdened by running yet another piece of software that has to be understood, debugged, maintained and patched. If you already have it, fine, by all means use it for the alerting part. I’m sure it gives you more options. For an approach to installing nagios that makes it somewhat manageable see the references.

A few words about sending mail
I send mail directly from my cloud server; I have no idea what others do. With Amazon, my elastic IP was initially included in blacklists (RBLs), etc, so I really couldn’t send mail without it being rejected. They have procedures you can follow to remove your IP from those lists, and it really worked. Crucially, it allowed me to send as a TXT message. Just another reason why you can’t really beat Amazon hosting (there was no charge for this feature).

And sending TXT messages
I think most wireless providers have an email gateway that allows you to send a TXT message (SMS) to one of their users via email (SMTP) if you know their cell number. For instance with Verizon the formula is

Verizon
cell-number@vtext.com

AT&T
cell-number@txt.att.net

T-Mobile
cell-number@tmomail.net

Conclusion
We have assembled a working power/Internet service monitor as a DIY project for a Raspberry Pi. If you want to use your Pi for a lot of other things I suggest to leave this one for your power monitor and buy another – they’re cheap (and fun)!

I will now know whenever I lose power – could be any minute now thanks to Nemo – and when it is restored, even if I am not home (thanks to my SmartPhone). You see, in my case my ISP, CenturyLink, is pretty good and rarely drops my service. JCP&L, not so much.

Admittedly, most people, unlike me, do not have their own cloud-hosted server, but maybe it’s time to get one?

References
Open Monitoring Distribution (OMD) makes installing and configuring nagios a lot easier, or so I am told. It is described here.
I’ve gotten my mileage out of the monitor perl script in this post: I’ve recently re-used it with modifications for a similar situation except that the script is being called by HP SiteScope, and, again, a Raspberry Pi is phoning home. Described here.

Categories
Admin DNS IT Operational Excellence

The IT Detective Agency: since when can a powered off PC do dynamic DNS updates?

Intro
The IT Detectives are back after a short lull during which no great mysteries needed expert resolution – you knew that situation couldn’t last too long. The following tale was relayed to me, I unfortunately cannot claim to have been any help whatsoever. The details have been somewhat obscured in this retelling.

The details
One of our DNS servers at drjohns was busy fielding lots and lots of DDNS updates. Good, right? No, not so. Because our employee PCs are all configured to not do this very thing. In Windows 7, drilling down into the advanced DNS settings, you have a checkbox for Register this connection’s addresses in DNS. And that is unchecked. So although we use DHCP, the PCs shouldn’t be sending their DDNS updates. Yet they were. In fact at one point a considerable amount of bandwidth was being eaten up with these unwanted updates, so we had to investigate and act. But where to begin?

Word finally got around to one of our PC experts who I guess probably had his suspicions. He suggested the following test:

turn the PC off and look for DDNS updates on the DNS server

Amazingly, that’s exactly what we found to be the case – DDNS updates coming from a powered-off PC. The updates did not always go to the same DNS server; the target seemed randomly chosen, but they were all drjohns DNS servers.

A Wireshark examination of a trace (taken by a network engineer) showed lots of Dynamic Update SOA drj.com. I looked at the trace and found that this was just a label given by Wireshark for what was happening, and not a very accurate one. If you expand the packet you see inside it that (mostly) it was a workstation trying to register its A record on the DNS server (a DDNS update). It wasn’t literally trying to change the SOA record for the zone, though that might have been the logical result of updating its A record.

What the power-off test showed to our subject-area expert is that Intel vPro was responsible for these DDNS updates. Wait, you ask, what the heck is vPro? We didn’t know either. As I understand it, it’s an additional Intel chip that some business-class laptops (e.g., DELL Latitude) might include that permits more and better remote management, allowing perhaps even some hardware diagnostics to occur.

So let’s go back to that test. Note that I said PC powered off, I did not say disconnected from the network! Powered-off-but-network-connected produces the DDNS update, powered-off-and-disconnected – no update, of course (Hey, it’s not magic going on here!).

So the solution, obviously, is to turn off DDNS in vPro. We thought it was off, but maybe not. We expect and hope this to be the solution, but a few more days will be needed before this all plays out and we know for sure.

Conclusion
I better hold off on any conclusion until our premise is confirmed! But one feeling I have is that sometimes you have to ingratiate yourself with the right people because no one person has all the answers!

Categories
Admin First Robotics

Interactive Frisbee Trajectory

Intro
The FIRST FRC challenge for 2013, Ultimate Ascent, involves shooting heavy flying discs (sturdy Frisbees) into goals. The equations of motion have been studied and published. I’ve created an interactive web page which allows you to vary some of the initial conditions to see how the trajectory is affected.

The details
Go here for the web page.

For last year’s challenge, foam balls were to be shot into a basketball hoop, so similar equations of motion applied. Here is that page.