Categories
Admin Linux

Setting up my Galaxy S3 for remote host access

Intro
I just got a Samsung Galaxy S3 last week. Before long I was wondering how I might use it to access my cloud server if indeed it was at all possible.

The Details
Looking at my other Android devices I decided to install Terminal Emulator. That’s a cute application, providing shell access to the underlying OS of your phone. But it’s fairly useless. You get a shell, but your account id, 10155, has essentially no privileges, and you can’t do much more than ls, cd, ps and top. You don’t have enough privileges to look into interesting directories. So you can’t do anything interesting. There’s also no native ssh so you can’t connect to another host.

To ssh to my Amazon cloud server I got the app ConnectBot. This app shows promise. I was able to connect to my server. I read some of the introductory screens which gave some helpful tips. So I quickly learned how to resize the window. I found 80×39 is a good size in portrait orientation. Yes, the font is tiny, but it is legible. Getting an elusive Esc or Ctrl character isn’t bad at all, just tap the top half of the screen.

So running constantly refreshing screens like top worked out fine.

vi was a problem. It used multi-colors in displaying my code. Comments, in dark blue, are not legible to me. In fact using vi at all on this device with a soft keyboard is quite unnatural. It might be better to use a curses-based editor like pico, though I haven’t yet tried it. But with vi, I found I could get rid of the multi-colors by setting the TERM environment variable to vt100. It had been screen. So:

> export TERM=vt100

Once that was done, vi displayed all characters in white on a black background – quite legible.

Conclusion
It’s a strange world where you can administer a virtual server on a device that fits in the palm of your hand, and achieve fairly powerful effects.

Being a resourceful person, I had alternatives to reach my server. There is a web-based terminal emulator which works surprisingly well. See this post for a description.

ConnectBot is a native ssh remote terminal app and is actually usable as such on a Samsung Galaxy S3, if your eyes are still good! Pay attention to just a few usage tips and you’ll be in full control of your server.

References
See this post for the world’s most natural, unobtrusive ringtone.

Categories
Admin Linux

The IT Detective Agency: Teraterm’s colors washed out

Intro
Some things we do as IT folk are just embarrassingly stupid in retrospect. This is such a story. I present it in keeping with my overall theme of throwing stuff out there in the hopes that some of it helps someone else facing the same situation.

The details
I love teraterm (from logmett.com). Teraterm plus screen (as in /usr/bin/screen) makes for a great combination when you live and die by the command line.

Actually I have been told I only use a small fraction of teraterm’s capabilities. It is programmable, apparently. I’m a very basic user.

So I had the self-assigned task to switch out a DNS server from an older Solaris OS to Linux. I completed the task months ago and I noticed one small side-effect: certain commands had the effect of washing out the font to just about the same color as the background. For the record my text is black (R,G,B) = (0,0,0) with Attribute Normal and my background is beige (255,255,183). When it’s behaving normally it looks very pleasant and is easy on the eyes.

I noticed when I ran man pages the text was all washed out – just a slightly brighter yellow against the beige background. Same story when I ran vi: comments, such as text following #, were washed out.

This was the case if I used a docking station. Using the native laptop display the text was also washed out, but not as severely, so I could just make it out by straining my eyes.

I played with font color and background color in Teraterm, but didn’t really get anywhere, so I learned to cope. I learned that if I piped the man page to more the text was all-black and I didn’t really lose any functionality. In vi I learned that if I deleted the whitespace before the #, the whole comment became visible, unless it started a line. Kludgy, but it worked and hardly slowed me down – this is after all just one of many, many hosts I was focused on.

Then it came time to migrate the second and last Solaris DNS server to Linux and I noticed the same thing happening on the new Linux server. What the…?

Previously I wasn’t really even sure when the washed-out problem occurred. This time I had no doubt that it was fine until the OS switch.

That in turn points to some difference in the environment, especially because on my many other Linux sessions I did not have this problem.

> env

shows the environment. By comparing where it was working to where it was not, I zeroed in on this environment variable: TERM.

TERM=vt100

where it wasn’t working

and

TERM=screen

where it was.
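A quick way to make that kind of comparison, assuming you have a session on both a good host and a bad host, is to dump each environment to a file, copy one over, and diff them (badhost here is a stand-in name):

> env | sort > /tmp/env.out
> scp badhost:/tmp/env.out /tmp/env.bad
> diff /tmp/env.bad /tmp/env.out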

I set TERM=screen:

> export TERM=screen

and immediately noticed the display working when running vi. Even multiple colors are displayed.

But actually, hmm, the man pages are still washed out, e.g.,

> man -s1 ls

shows NAME, SYNOPSIS and DESCRIPTION are all yellowed out, as well as all switches! That makes it really difficult to decipher.

Oh, well. This mystery is not completely solved.

My point was going to be that in Solaris the TERM=vt100 made sense – things worked better – and so it was in my .bashrc file. In Linux (SLES) it didn’t make so much sense. No setting for TERM seems to be necessary as the value screen gets auto-defined somehow if you’re using screen.

What I had done was copy my .bashrc file from Solaris to Linux not really thinking about it. That’s what did me in on these two servers.

If I get around to resolving the man pages I’ll revise this post…

2020 update

Still plagued by this issue of washed-out colors, I rolled up my sleeves and got it done. Turns out you have to set the Bold font settings separately. I’m trying settings like in this picture.

References
Teraterm used to be available from logmett.com, (2020 update) but is no longer. I’m looking for a link… Here it is: https://osdn.net/projects/ttssh2/releases/

Conclusion
Problems with washed-out colors using teraterm plus screen are resolved. Once again, this was mostly a self-inflicted problem.

Categories
Admin Network Technologies

Extended Passive Mode FTP through Checkpoint Firewall

Intro
The vast majority of time there is no problem doing an FTP to a server behind a firewall protected by Checkpoint’s Firewall-1. But occasionally there is.

The details

I think the problem I am about to document will only occur on a server that has multiple interfaces. I have seen it occur on multiple operating systems, so that doesn’t seem to matter. On the other hand, I have also not seen it on other similar systems, a point which I don’t yet fully understand.

Nevertheless, a work-around is always appreciated, so I provide what I found here, to complete my extensive documentation of problems I’ve encountered and resolved.

Here is a snippet from the FTP session showing the problem:

ftp> cd uploadDirectory
250 CWD command successful.
ftp> put smallfile.txt
local: smallfile.txt remote: smallfile.txt
229 Entering Extended Passive Mode (|||36945|)
200 EPRT command successful.
421 Service not available, remote server timed out. Connection closed

And here is the solution:

Enter the command epsv4 after logon, before any other commands are issued. Problem fixed!
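For the record, in the BSD-style command-line ftp client epsv4 is a toggle which turns off the use of EPSV/EPRT on IPv4 connections, so (assuming that client, whose confirmation message may vary) it looks something like this:

ftp> epsv4
EPSV/EPRT on IPv4 off.

Subsequent transfers then fall back to ordinary passive mode, which the firewall handles without complaint.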

Conclusion
We have shown a way to fix a firewall-related problem that manifests itself during extended passive mode FTPs. Some more research should be done to understand under what circumstances this problem should be expected, but it seems to occur with a Checkpoint Firewall-1 firewall and an FTP server with multiple interfaces.

Categories
Admin Network Technologies

The IT Detective Agency: trouble with wireless at home

Intro
I don’t usually have the luxury of writing about a mystery I’ve solved right out of my own home, but there finally is one that I got to the bottom of recently – poor WiFi performance.

The details
Considering that I deal with this stuff for a living, I have a threadbare setup at home. After my company-issued router’s WiFi began to work unreliably, I resuscitated an old Linksys wireless router, a WRK54G V2. Superficially it seemed to work. But we weren’t very demanding of it.

It eventually became clear, as visitors mentioned, that streaming videos did not work through wireless. This was hard for me to check with my broken-down, aging equipment. I have a desktop which freezes and crashes if you play any YouTube video. And a Netbook which kind of worked better, but its peculiarity is that its ethernet interface doesn’t work. Wirelessly, its version of Flash was too old and insecure for Firefox, and attempts to update Flash over WiFi were in turn unsuccessful.

In general the Linksys router, as I eventually realized, seemed to serve up large downloads fine at first, but at some point during the download things began to crawl, and you were left with a download proceeding at 10 kbit/s or something ridiculously slow like that.

Providing mixed evidence was a Sony Blu-ray player. Using WiFi it could sort of manage to show a HuluPlus TV episode. You might have to be patient at times while it loaded, but we did get through a full episode of Grey’s Anatomy recently.

After more complaints I decided enough was enough. Sifting through the mixed evidence, my WiFi seemed the most likely suspect. I perhaps waited so long because who’d think they’d be dealing with two bad WiFi routers from two totally different vendors?

So hedging my bets, I didn’t go all out with a new Gbit router. I reached back in time a little and got a refurbished Cisco 1200E wireless-N router. It was only $28 from Amazon. But before buying it, I read the comments and got one idea about routers: sometimes they need to be rebooted!

This is pretty funny, really, because it is probably apparent to any homeowner, and here I am, a specialist, missing this point. You see, with Cisco enterprise-class gear you almost never have to reboot to fix a problem. These things run uninterrupted not only for weeks and months at a time; years at a time is not at all uncommon. Same for some Unix servers. So from my perspective rebooting is something for consumer devices running Microsoft OSes!

So, before rebooting the Linksys to see if that would cure it, I ran a ping to Google’s DNS server (very easy to remember its IP) from a CMD window:

> ping -t 8.8.8.8

I didn’t preserve the output, but it wasn’t pretty. It was something like this:

Pinging 8.8.8.8 with 32 bytes of data:
Reply from 8.8.8.8: bytes=32 time=51ms TTL=56
Reply from 8.8.8.8: bytes=32 time=369ms TTL=56
Reply from 8.8.8.8: bytes=32 time=51ms TTL=56
Reply from 8.8.8.8: bytes=32 time=1204ms TTL=56
Reply from 8.8.8.8: bytes=32 time=284ms TTL=56
...

51 msec – fine. But round-trip times much greater than that? That’s not right.

So I hopefully rebooted the Linksys router and re-ran the test on the Netbook:

Pinging 8.8.8.8 with 32 bytes of data:
Reply from 8.8.8.8: bytes=32 time=51ms TTL=56
Reply from 8.8.8.8: bytes=32 time=51ms TTL=56
Reply from 8.8.8.8: bytes=32 time=51ms TTL=56
Reply from 8.8.8.8: bytes=32 time=50ms TTL=56
Reply from 8.8.8.8: bytes=32 time=51ms TTL=56
...

Much more consistent.

Try a Youtube video from Firefox. Nope, need to update Flash. Update Flash. Nope – download times out and kicks me out.

So I’ve accomplished nothing in rebooting in terms of results that matter.

That’s when I decided to check out of Amazon with that refurbished router.

Aside about Wireless-N
Given my ancient equipment, I was concerned that Wireless-N routers might not be compatible with my wireless radios, which only support G. Is it backwards compatible? Yes. Some quick research showed that and my own experience confirmed it.

Conclusion
The setup of the router was pretty straightforward although it froze at some point just after I set the wireless password. It helps to have done this a zillion times before. At that point I observed what my default gateway was and hit it as a web site URL. Guessed the admin password incorrectly a zillion times, until I tried the wireless password as the admin password, and, wham – I was in and happily configuring away…

More importantly, I went to that Netbook, updated Flash. No problems. Ran a Youtube video. No problems. Ran a speedtest.net test (which wouldn’t even run before this). Numbers look as good as my wired connection: 6 mbit download, 0.6 mbit upload.

Last test is to see where the speed maxes out within my home network. I plan to hit my Raspberry Pi web server to test this and will provide results as soon as they are available.

Conclusion to the conclusion
So I really was cursed with two bad wireless routers. Sometimes using 10-year-old equipment is really not worth the $30 saved in deferred spending. Read product reviews on Amazon to get hints about real issues others have faced.
To be continued…

Categories
Admin CentOS Security

Example using iptables, the CentOS firewall

Intro
This document is mostly for my own purposes. I don’t even think this is the best way to run the firewall, it’s just the way I happened to adapt.

Background
My friends tell me ipchains was good software. Unfortunately the guy who wrote iptables, which emulates the features of ipchains, wasn’t at that same skill level, and the implementation shows it. I know I struggled with it a bit.

Motivation
I decided to run a local firewall on my HP SiteScope server because a serious security issue was found with our version’s HTTP server such that it was advisable to lock it down to only those administrators who need access to the GUI.

The details
This was actually implemented on Redhat v 5.6, though I don’t suppose it would be much different on CentOS.

December 2013 update
I also tried this same script provided below on a Redhat 6.4 OS – it worked the exact same way without modification.

The main thing is that I maintain a file with the “firewall rules.” I call it iptables. So I need to remember from invocation to invocation where I store this master file. Here are the contents:

#!/bin/sh
# DrJ, 9/2012
# inspired by http://wiki.centos.org/HowTos/Network/IPTables
# flush all previous rules
export PATH=$PATH:/sbin
iptables -F
#
# our main rules here:
#
# Accept tcp packets on destination port 8080 (HP SiteScope) from select individuals
# DrJ: office, home, vpn
iptables -A INPUT -p tcp -s 192.168.76.56 --dport 8080 -j ACCEPT
iptables -A INPUT -p tcp -s 10.2.6.107 --dport 8080 -j ACCEPT
iptables -A INPUT -p tcp -s 10.3.13.138 --dport 8080 -j ACCEPT
#
# the server itself
iptables -A INPUT -p tcp -s 127.0.0.1 --dport 8080 -j ACCEPT
#
# set dflt policies
# for logging see http://gr8idea.info/os/tutorials/security/iptables5.html
#iptables -A INPUT -j LOG --log-level 4 --log-prefix 'InDrop '
# this is a killer!
#iptables -P INPUT DROP
# just drop what is really the problem...
iptables -A INPUT -p tcp --dport 8080 -j DROP
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
#
# access for loopback
iptables -A INPUT -i lo -j ACCEPT
#
# Accept packets belonging to established and related connections
#
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
#
# Save settings
#
/sbin/service iptables save
#
# List rules
#
iptables -L -v
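To apply the rules after editing I just run the file through the shell as root, something like this (assuming the master file iptables sits in the current directory):

$ sudo sh ./iptables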

Of course you have to have iptables running. I do a

$ sudo service iptables status

to verify that. If its status is “not running,” start it up.
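On these Redhat-style systems that’s simply:

$ sudo service iptables start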

As mentioned in the comments I tried to be more strict with the rules since I’m used to running firewalls with a DENY All rule, but it just didn’t work out well for me. I lost patience and gave up on that and settled for dropping all traffic to TCP port 8080 except the explicitly permitted hosts, which is good enough for our immediate needs.

Conclusion
This is a simple example of a way to use iptables. It’s probably not the best example, but it’s what I used so it’s better than nothing.

Categories
Admin Linux Raspberry Pi Security

Generate Pronounceable Passwords

2017 update
Turns out gpw is an available package in Debian Linux, including Raspbian which runs on Raspberry Pi. Who knew? A simple sudo apt-get install gpw will provide it. So I guess the source wasn’t lost at all.

Intro
15 years ago I worked for a company that wanted to require authentication in order to browse to the Internet. I searched around for a way to generate lots of passwords.

What I came up with is gpw – generate pronounceable passwords.

The details
I think this approach to secure passwords is no longer best practice, but I still think it has a place for some applications. What it does is analyze a dictionary that you’ve fed it. It then determines the frequency of occurrence of what it calls trigraphs – I guess that’s three consecutive letter combinations. Then it generates random, non-dictionary passwords using those trigraphs, which are presumably wholly or partially pronounceable.

Cute, huh? I’d say one problem is that if the bad guys got wind of this approach, the number of combinations they’d have to try in password cracking is severely reduced.

Sophos has a recommendation for forming good strong passwords. See their blog post about the 50 worst passwords, which contains a link to a video on how to choose a good password.

But I still have a soft spot for this old approach, and I think it’s OK to use it, get your password such as inglogri, add a few non-alpha-numeric characters and come up with a reasonably good, memorable password. Every site you use should really get a different password, and this tool might make that actually feasible.

I run it as:

$ gpw

which produces:

seminour
shnopoos
alespige
olpidest
hastrewe
nsivelys
shaphtra
bratorid
melexseu
sheaditi

Its output changes every time, of course.

I mostly run it this way:

$ gpw 1

which produces only a single password, for instance:

ojavishd

You see how these passwords are sort of like words, but not words? Much more memorable than those completely random ones you are sometimes forced to type and which are impossible to remember?

I noted the location where I pulled it from the web 15 years ago as is my custom, but it is no longer available. So I have decided to make it available. I tweaked it to compile on CentOS with a C++ compiler.

Here is the CentOS v 6 binary for x86_64 architecture and README file.

Here is the tar file with the sources and the binary mentioned above. Run a make clean first to begin building it.

Enjoy!

Potential Problems
I know when we originally used it to assign 15,000 unique passwords, the randomness algorithm was so bad that I believe some people received identical passwords! So the total number of generatable passwords might be severely limited. Please check this before using it in any meaningful way. I would naively expect and hope that it could generate about two to three times the number of words in my dictionary (/usr/share/dict/linux.words, with 479,829 words). But I never verified this.

2017 update
I ran it, 100 passwords at a time, on my Raspberry Pi for a couple minutes. I created 275,900 passwords, of which 269,407 were unique. Strange. So you get some repeats but you mostly get new passwords.
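A loop along these lines reproduces that test (the output file name is made up; recall gpw 100 prints 100 passwords per invocation):

$ for i in $(seq 2759); do gpw 100; done > /tmp/pws.txt
$ wc -l < /tmp/pws.txt
$ sort -u /tmp/pws.txt | wc -l

The first count is the total generated, the second the number of unique ones.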

Further, I was going to tweak the code to generate 9-letter passwords which would presumably be more secure. But they just didn’t look as good to me, and I’ve only ever used it with 8 letters. So I just decided to keep it at 8 letters. You can experiment with that if you want.

More fun with the Linux dictionary
For another fun example using the Linux dictionary see how I solved the NPR weekend puzzle using it, described here.

A note for Debian Linux users (Ubuntu, Raspberry Pi, …)
The dictionary there is /usr/share/dictd/wn.index. You’ll need to update the Makefile to reflect this. This post about Words with Friends explains the packages I used to provide that dictionary.

Conclusion
An old pronounceable password generating program has been dusted off and given back to the open source community. It may not be state-of-the-art, but it has a role for some usages.

References and related
Want truly random passwords? I want to call your attention to random.org’s password generator: https://www.random.org/passwords/

Most people are becoming familiar with the idea of not reusing passwords but I don’t know if everyone realizes why. This article is a comprehensive review of the topic, plus review of password vaults like Lastpass, etc which you may have heard of: https://pixelprivacy.com/resources/reusing-passwords/

Categories
Admin Linux Security

My favorite openssl commands

Intro
openssl is available on almost every operating system. It’s a great tool if you work with certificates regularly, or even occasionally. I want to document some of the commands I use most frequently.

The details

Convert PEM CERTs to other common formats
I just used this one yesterday. I got a certificate in PEM format as is my custom. But not every web server out there is apache or apache-compatible. What to do? I’ve learned to convert the PEM-formatted certificates to other favored formats.

The following worked for a Tomcat server and also for another proprietary web server which was running on a Windows server and wanted a pkcs#12 type certificate:

$ openssl pkcs12 -export -chain -inkey drjohns.key -in drjohns.crt -name "drjohnstechtalk.com" -CAfile intermediate_plus_root.crt -out drjohns.p12

The intermediate_plus_root.crt file contained a concatenation of those CERTs, in PEM format of course.

If you see this error:

Error unable to get issuer certificate getting chain.

it probably means that you forgot to include the root certificate in your intermediate_plus_root.crt file. You need both the intermediate plus the root certificates in this file.

And this error:

unable to write 'random state'

means you are using the Windows version of openssl and you first need to do this:

set RANDFILE=C:\MyDir\.rnd

where MyDir is a directory where you have write permission, before you issue the openssl command. See https://stackoverflow.com/questions/12507277/how-to-fix-unable-to-write-random-state-in-openssl for more on that.

The beauty of the above openssl command is that it also takes care of setting up the intermediate CERT – everything needed is shoved into the .p12 file. A .p12 file can also be called .pfx, so a PFX file is the same thing as what we’ve been calling a PKCS12 certificate.
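Going the other direction is also sometimes needed – you’re handed a .pfx and want PEM. Something like this should do it (the -nodes switch leaves the extracted private key unencrypted):

$ openssl pkcs12 -in drjohns.p12 -out drjohns.pem -nodes

The resulting PEM file contains the private key plus the certificates, which you can then split apart by hand.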

How to examine a pkcs12 (pfx) file

$ openssl pkcs12 -info -in file_name.pfx
It will prompt you for the password a total of three times!

Examine a certificate

$ openssl x509 -in certificate_name.crt -text

Examine a CSR – certificate signing request

$ openssl req -in certificate_name.csr -text

Examine a private key

$ openssl rsa -in certificate_name.key -text

Create a SAN (subject alternative name) CSR

This is a two-step process. First you create a config file with your alternative names and some other info. Mine, req.conf, looks like this:

[req]
default_bits = 4096
prompt = no
default_md = sha256
req_extensions = req_ext
distinguished_name = dn
 
[ dn ]
C=US
ST=New Jersey
CN = drjohnstechtalk.com
 
[ req_ext ]
subjectAltName = @alt_names
 
[ alt_names ]
DNS.1 = drjohnstechtalk.com
DNS.2 = johnstechtalk.com
IP.3 = 50.17.188.196

Note this shows a way to combine IP address with a FQDN in the SAN. I’m not sure public CAs will permit IPs. I most commonly work with a private PKI which definitely does, however.

Then you run openssl like this, referring to your config file (updated for 2022; in the past we used 2048-bit keys but we are moving to 4096):
$ openssl req -new -nodes -newkey rsa:4096 -keyout mykey.key -out myreq.csr -config req.conf

This creates the private key and CSR in one go. Note that it’s recommended to repeat your common name (CN) in one of the alternative names so that’s what I did.

Let’s examine it to be sure it contains the alternative names:

$ openssl req -text -in myreq.csr

Certificate Request:
    Data:
        Version: 0 (0x0)
        Subject: C=US, ST=New Jersey, CN=drjohnstechtalk.com
        ...
        Attributes:
        Requested Extensions:
            X509v3 Subject Alternative Name:
                DNS:drjohnstechtalk.com, DNS:johnstechtalk.com, DNS:www.drjohnstechtalk.com, DNS:www.johnstechtalk.com
    Signature Algorithm: sha256WithRSAEncryption
         2a:ea:38:b7:2e:85:6a:d2:cf:3e:28:13:ff:fd:99:05:56:e5:
         ...

Looks good!

SAN on an Intranet with a private PKI infrastructure including an IP address
On an Intranet you may want to access a web site by IP as well as by name, so if your private PKI permits, you can create a CSR with a SAN which covers all those possibilities. The SAN line in the certificate will look like this example:

DNS:drjohnstechtalk.com, IP:10.164.80.53, DNS:johnstechtalk.com, DNS:www.drjohnstechtalk.com, DNS:www.johnstechtalk.com

Notice that additional IP:10… entry with my server’s private IP? That will never fly with an Internet CA, but might be just fine and useful on a corporate network. The advice is to not put the IP first, however. Some PKIs will not accept that. So I put it second.


Create a simple CSR and private key

$ openssl req -new -nodes -out myreq.csr

This prompts you to enter values for the country code, state and organization name. As a private individual, I am entering drjohnstechtalk.com for organization name – same as my common name. Hopefully this will be accepted.

Look at a certificate and certificate chain of any server running SSL

$ openssl s_client -showcerts -connect host:port

Cool shortcut to fetch certificate from any web server and examine it with one command line

$ echo|openssl s_client -servername drjohnstechtalk.com -connect drjohnstechtalk.com:443|openssl x509 -text

Alternate single command line to fetch and examine in one go

$ openssl s_client -servername drjohnstechtalk.com -connect drjohnstechtalk.com:443</dev/null|openssl x509 -text

In fact the above commands are so useful to me I invented this bash function to save all that typing. I put this in my ~/.alias file (or .bash_aliases, depending on the OS):

# functions
# to unset a function: unset -f foo; to see the definition: type -a foo
certexamine () { echo|openssl s_client -servername "$@" -connect "$@":443|openssl x509 -text|more; }
# examinecert () { echo|openssl s_client -servername "$@" -connect "$@":443|openssl x509 -text|more; }
examinecert () { str=$*;echo $str|grep -q : ;res=$?;if [ "$res" -eq "0" ]; then fqdn=$(echo $str|cut -d: -f1);else fqdn=$str;str="$fqdn:443";fi;openssl s_client  -servername $fqdn -connect $str|openssl x509 -text|more; }

In a 2023 update, I made examinecert more sophisticated and more complex. Now it accepts an argument like FQDN:PORT. Then to examine a certificate I simply type either

$ examinecert drjohnstechtalk.com

(port 443 is the default), or to specify a non-standard port:

$ examinecert drjohnstechtalk.com:8443

The servername switch in the above commands is not needed 99% of the time, but I did get burned once and actually picked up the wrong certificate by not having it present. If the web server uses Server Name Indication – information which you generally don’t know – it should be present. And it does no harm being there regardless.
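To see why it matters, consider connecting by IP address while asking for a particular site by name. Something like this (re-using my server’s public IP from the SAN example above) requests the certificate for drjohnstechtalk.com even though the connect argument is just an IP that could be hosting many sites:

$ echo|openssl s_client -servername drjohnstechtalk.com -connect 50.17.188.196:443|openssl x509 -text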

Example wildcard certificate
As an aside, want to examine a legitimate wildcard certificate, to see how they filled in the SAN field? Yesterday I did, and found it basically impossible to search for precisely that. I recalled that WordPress used a wildcard certificate, and I was right. I think one of those ecommerce sites like Shopify might as well. So you can examine make.wordpress.org, and you’ll see the SAN field looks like this:

 X509v3 Subject Alternative Name:
                DNS:*.wordpress.org, DNS:wordpress.org

Verify your certificate chain of your active server

$ openssl s_client -CApath /etc/ssl/certs -verify 2 -connect drjohnstechtalk.com:443

verify depth is 2
CONNECTED(00000003)
depth=3 /C=US/O=The Go Daddy Group, Inc./OU=Go Daddy Class 2 Certification Authority
verify return:1
depth=2 /C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc./CN=Go Daddy Root Certificate Authority - G2
verify return:1
depth=1 /C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc./OU=http://certs.godaddy.com/repository//CN=Go Daddy Secure Certificate Authority - G2
verify return:1
depth=0 /OU=Domain Control Validated/CN=drjohnstechtalk.com
verify return:1
---
Certificate chain
 0 s:/OU=Domain Control Validated/CN=drjohnstechtalk.com
   i:/C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc./OU=http://certs.godaddy.com/repository//CN=Go Daddy Secure Certificate Authority - G2
 1 s:/C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc./OU=http://certs.godaddy.com/repository//CN=Go Daddy Secure Certificate Authority - G2
   i:/C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc./CN=Go Daddy Root Certificate Authority - G2
 2 s:/C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc./CN=Go Daddy Root Certificate Authority - G2
   i:/C=US/O=The Go Daddy Group, Inc./OU=Go Daddy Class 2 Certification Authority
 3 s:/C=US/O=The Go Daddy Group, Inc./OU=Go Daddy Class 2 Certification Authority
   i:/C=US/O=The Go Daddy Group, Inc./OU=Go Daddy Class 2 Certification Authority
---
Server certificate
-----BEGIN CERTIFICATE-----
MIIFTzCCBDegAwIBAgIJAI0kx/8U6YDkMA0GCSqGSIb3DQEBCwUAMIG0MQswCQYD
VQQGEwJVUzEQMA4GA1UECBMHQXJpem9uYTETMBEGA1UEBxMKU2NvdHRzZGFsZTEa
...
SSL-Session:
    Protocol  : TLSv1
    Cipher    : DHE-RSA-AES128-SHA
    Session-ID: 41E4352D3480CDA5631637D0623F68F5FF0AFD3D1B29DECA10C444F8760984E9
    Session-ID-ctx:
    Master-Key: 3548E268ACF80D84863290E79C502EEB3093EBD9CC935E560FC266EE96CC229F161F5EF55DDF9485A7F1BE6C0BECD7EA
    Key-Arg   : None
    Start Time: 1479238988
    Timeout   : 300 (sec)
    Verify return code: 0 (ok)

Wrong way to verify your certificate chain
When you first start out with the verify sub-command you’ll probably do it wrong. You’ll try something like this:

$ openssl s_client -verify 2 -connect drjohnstechtalk.com:443

which will produce these results:

verify depth is 2
CONNECTED(00000003)
depth=3 /C=US/O=The Go Daddy Group, Inc./OU=Go Daddy Class 2 Certification Authority
verify error:num=19:self signed certificate in certificate chain
verify return:0
16697:error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed:s3_clnt.c:983:

Using s_client menu through a proxy
Yes! Use the -proxy switch, at least with newer openssl implementations.
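For example, something like this (the proxy host and port are placeholders, of course):

$ openssl s_client -proxy myproxy.example.com:8080 -connect drjohnstechtalk.com:443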

Using OCSP
I have had limited success so far with Online Certificate Status Protocol (OCSP) verification. But I do have something to provide as an example:

$ openssl ocsp -issuer cert-godaddy-g2.crt -cert crt -no_nonce -no_cert_verify -url http://ocsp.godaddy.com/

Response verify OK
crt: good
        This Update: Nov 15 19:56:52 2016 GMT
        Next Update: Nov 17 07:56:52 2016 GMT

Here I’ve stuffed my certificate into a file called crt and stuffed the intermediate certificate into a file called cert-godaddy-g2.crt. How did I know what URL to use? Well, when I examined the certificate file crt it told me:

$ openssl x509 -text -in crt

...
           Authority Information Access:
                OCSP - URI:http://ocsp.godaddy.com/
...

But I haven’t succeeded running a similar command against certificates used by Google, nor against certificates issued by the CA Globalsign. So I’m clearly missing something there, even though by luck I got the GoDaddy certificate correct.

Check that a particular private key matches a particular certificate
I have to deal with lots of keys and certificates. And certificate re-issues. And I do this for others. Sometimes it gets confusing and I lose track of what goes with what. openssl to the rescue! I find that a matching modulus is pretty much a guarantee that private key and certificate are a match.

Private key – find the modulus example
$ openssl rsa -modulus -noout -in key

Modulus=BADD4167E98A1B51B3F40EF3A0F5E2AC268F37BAC45388A401FB677CEA240CD3530D39B81A450DF061B1145AFA9B00718EF4DBB3E552D5D999C577A6424706782DCB4426D2E7A9615BBC90CED300AD91F63E0E0EA9B4B2D24649CFD44E9735FA7E91EEC939A5B1D8667ADD62CBD15EB01BE0E03EC7532ACEE621386FBADF0161183AB5BDD94D1CFB8A2D5F6B38178A897DB380DC90CEA64C1F149F4B38E845C6C933CBF8F123B1DC411EA2A238B9D9704A43D17F67561F6D4821B721484C6785385BF03CADD91B5F4BD5F9B36F478E74BCAE16B171E3E4AFE3F6C388EA849D792B5C94BD5D279572C8713369D909711FBF0C2B3053380668A2774AFC00F8C911

Public key – find the modulus example
$ openssl x509 -modulus -noout -in crt

Modulus=BADD4167E98A1B51B3F40EF3A0F5E2AC268F37BAC45388A401FB677CEA240CD3530D39B81A450DF061B1145AFA9B00718EF4DBB3E552D5D999C577A6424706782DCB4426D2E7A9615BBC90CED300AD91F63E0E0EA9B4B2D24649CFD44E9735FA7E91EEC939A5B1D8667ADD62CBD15EB01BE0E03EC7532ACEE621386FBADF0161183AB5BDD94D1CFB8A2D5F6B38178A897DB380DC90CEA64C1F149F4B38E845C6C933CBF8F123B1DC411EA2A238B9D9704A43D17F67561F6D4821B721484C6785385BF03CADD91B5F4BD5F9B36F478E74BCAE16B171E3E4AFE3F6C388EA849D792B5C94BD5D279572C8713369D909711FBF0C2B3053380668A2774AFC00F8C911

The key and certificate were stored in files called key and crt, respectively. Here the modulus has the same value so key and certificate match. Their values are random, so you only need to match up the first eight characters to have an extremely high confidence level that you have a correct match.
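A common shortcut, rather than eyeballing those long strings, is to hash each modulus and compare the short hashes:

$ openssl rsa -modulus -noout -in key | openssl md5
$ openssl x509 -modulus -noout -in crt | openssl md5

If the two md5 values agree, the key and certificate match.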

Generate a simple self-signed certificate
$ openssl req -x509 -nodes -newkey rsa:2048 -keyout key.pem -out cert.pem -days 365

Generating a 2048 bit RSA private key
..........+++
.................+++
writing new private key to 'key.pem'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:US
State or Province Name (full name) []:New Jersey
Locality Name (eg, city) [Default City]:.
Organization Name (eg, company) [Default Company Ltd]:.
Organizational Unit Name (eg, section) []:
Common Name (eg, your name or your server's hostname) []:drjohnstechtalk.com
Email Address []:

Note that I put a “.” in the fields I wished to blank out.

Did I get what I expected? Let’s examine it:

$ openssl x509 -text -in cert.pem|more

Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 16616841832876401013 (0xe69ae19b7172e175)
    Signature Algorithm: sha1WithRSAEncryption
        Issuer: C=US, ST=New Jersey, CN=drjohnstechtalk.com
        Validity
            Not Before: Aug 15 14:11:08 2017 GMT
            Not After : Aug 15 14:11:08 2018 GMT
        Subject: C=US, ST=NJ, CN=drjohnstechtalk.com
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    00:d4:da:23:34:61:60:f0:57:f0:68:fa:2f:25:17:
...

Hmm. It’s only SHA1, which isn’t so great. And there’s no Subject Alternative Name. So it’s not a very good CERT.

Create a better self-signed CERT
$ openssl req -x509 -sha256 -nodes -newkey rsa:2048 -keyout key.pem -out cert.pem -days 365

That one is SHA2:

...
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=US, ST=New Jersey, CN=drjohnstechtalk.com
...

365 days is arbitrary. You can specify a shorter or longer duration.

Then, if you want the self-signed CERT to include extensions such as a Subject Alternative Name, refer to a config file with a -config argument in your openssl req command, as in the SAN example above.

Listing ciphers
Please see this post.

Fetching the certificates from an SMTP server running TLS

$ openssl s_client -starttls smtp -connect <MAIL_SERVER>:25 -crlf
That’s a good one because it’s hard to do these steps by hand.

Working with Java keytool for Tomcat certificates
This looks really daunting at first. Where do you even start? I recently found the answer. Digicert has a very helpful page which generates the keytool command line you need to create your CSR and provides lots of installation advice. At first I was skeptical and thought you could not trust a third party to have your private key, but it doesn’t work that way at all. It’s just a complex command-line generator that you plug into your own command line. You know, the whole

$ keytool -genkey -alias drj.com -keyalg RSA -keystore drj.jks -dname "CN=drj.com, O=johnstechtalk, ST=NJ, C=US" …

Here’s the Digicert command line generator page.

Another good tool that provides a free GUI replacement for the Java command-line utilities keytool, jarsigner and jadtool is Keystore Explorer.

List info about all the certificates in a certificate bundle

openssl storeutl -noout -text -certs cacert.pem |egrep 'Issuer:|Subject:'|more

Appendix A, Certificate Fingerprints
You may occasionally see a reference to a certificate fingerprint. What is it and how do you find your certificate’s fingerprint?

Turns out it’s not that obvious.

Above we showed the very useful command

openssl x509 -text -in <CRT-file>

and the results from that look very thorough, as though this is everything there is to know about this certificate. In fact I thought that for years, but it turns out it doesn’t show the fingerprint!

A great discussion on this topic is https://security.stackexchange.com/questions/46230/digital-certificate-signature-and-fingerprint#46232

But I want to repeat the main points here.

The fingerprint is the hash of the certificate file, but in its raw, 8-bit form. You can choose the hash algorithm and learn the fingerprint with the following openssl commands:

$ openssl x509 -in <CRT-file> -fingerprint -sha1 (for getting the SHA1 fingerprint)

Similarly, to obtain the sha256 or md5 fingerprint you would do:

$ openssl x509 -in <CRT-file> -fingerprint -sha256

$ openssl x509 -in <CRT-file> -fingerprint -md5

Now, you wonder, I know about these useful hash commands from Linux:

sha1sum, sha256sum, md5sum

what is the relationship between these commands and what openssl returns? How do I run the linux commands and get the same results?

It turns out this is indeed possible. But not that easy unless you know advanced sed trickery and have a uudecode program. I have uudecode on SLES, but not on CentOS. I’m still trying to unpack what this sed command really does…

The certificate files we normally deal with (PEM format) are encoded versions of raw data. uudecode can be used to obtain the raw data version of the certificate file like this:

$ uudecode < <(
sed '1s/^.*$/begin-base64 644 www.google.com.raw/;
$s/^.*$/====/' www.google.com.crt
)

This example is for an input certificate file called www.google.com.crt. It creates a raw data version of the certificate file called www.google.com.raw.

Then you can run your sha1sum on www.google.com.raw. It will be the same result as running

$ openssl x509 -in www.google.com.crt -fingerprint -sha1

!

So that shows the fingerprint is a hash of the entire certificate file. Who knew?
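Since PEM is just base64-encoded DER wrapped in BEGIN/END lines, there’s a simpler equivalent that skips the uudecode gymnastics – have openssl emit the DER form and hash that:

$ openssl x509 -in www.google.com.crt -outform DER | sha1sum

The result should match the -fingerprint -sha1 output, apart from the colons openssl inserts.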

Appendix B
To find out more about a particular subcommand:

openssl <subcommand> help

e.g.,

$ openssl s_client help

Conclusion
Some useful openssl commands are documented here. A way to grapple with keytool for Tomcat certificates is also shown as a bonus.

References and related
Probably a better site with similar but more extensive openssl commands: https://www.sslshopper.com/article-most-common-openssl-commands.html

Digicert’s tool for working with keytool.
GUI replacement for keytool, etc; Keystore Explorer.

The only decent explanation of certificate fingerprints I know of: https://security.stackexchange.com/questions/46230/digital-certificate-signature-and-fingerprint#46232

Server Name Indication is described in this blog post.

I’m only providing this link here as an additional reminder that this is one web site where you’ll find a legitimate wildcard certificate: https://make.wordpress.org/ Otherwise it can be hard to find one. Clearly people don’t want to advertise the fact that they’re using them.

Categories
Admin Internet Mail SLES

The IT Detective Agency: emails began piling up this week, no obvious cause

Intro
Today I had my choice of problems I could highlight, but I like this one the best. Our mail server delivers email to a wide variety of recipients. All was going well and it ran pretty much unattended until this week when it didn’t go so well. Most emails were getting delivered, but more and more were starting to pile up in the queues. This is the story of how we unraveled the mystery.

The details
It’s best to work from examples I think. I noticed emails to me.com were being refused delivery as well as emails to rnbdesign.com. The latter is a smaller company so we heard from them the usual story that we’re the only ones who can’t send to them.

So I forced delivery with verbose logging. I’m running sendmail, so that looks like this:

> sendmail -qRrnbdesign.com -Cconfig_file -v

That didn’t work out, producing a no route to host type of error. I did a DNS lookup by hand. That showed one set of results, while sendmail was connecting to an entirely different IP address. How could that be??

I was at a loss so I do what I do when I’m desperate: strace. That looks like this:

> strace -f sendmail -qRrnbdesign.com -Cconfig_file -v > /tmp/strace 2>&1

That produced 12,000 lines of output. All the system calls that the process and any of its forked processes invoke. Is that too much to comb through by hand? No, not at all, not when you begin to see the patterns.

I pored over the trace, not knowing what most of it meant, but looking for especially any activity regarding networking and DNS. Around line 6,000 I found it. There was mention of nscd.

For the unaware, the use of nscd (name service cache daemon) might seem innocent enough, or even good-intentioned. What could be wrong with caching frequently used DNS results? The only issue is that it doesn’t work right! nscd derives from UC Berkeley Unix code and has never been supported. I didn’t even like it when I was running SunOS. It caches the DNS queries but ignores TTLs. This is fatal for mail servers or just about anything you can think of, especially on servers that are infrequently booted, as mine are.

I stopped nscd right away:

> service nscd stop

and re-ran the sendmail queue runner (same command as above). The rnbdesign.com emails flowed out instantly! Soon hundreds of stuck emails were flushed out.

Of course for good measure nscd had to be removed from the startup sequence:

> chkconfig nscd off
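An aside: if you rely on nscd for caching passwd and group lookups, it should be possible to disable only its hosts cache in /etc/nscd.conf rather than kill the whole daemon, though I haven’t tested this myself:

# in /etc/nscd.conf - turn off just the DNS (hosts) caching
enable-cache            hosts           no

Then restart nscd for the change to take effect.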

An IT pro always keeps unsolved mysteries in his mind. This time I knew I also had in hand the solution to an earlier-documented mystery about email to paladinny.com.

Conclusion
nscd might show up in your SLES or OpenSuse server. I strongly suggest disabling it before you wind up with old DNS values and an extremely hard-to-debug issue.

Case closed!

Categories
Admin Linux Raspberry Pi

Ssh access to your Raspberry Pi from anywhere

Editor’s 2017 note: Lots of great alternatives are discussed in the Comments section.

Intro
I’ve done a couple things with my Raspberry Pi. There’s this post on setting it up without a monitor, keyboard or mouse, and this post on using it to monitor power and Internet connection at my home.

I eventually realized that the Pi could be accessed from anywhere, with one big assumption: that you have your own hosted server somewhere on the Internet that you can ssh to from anywhere. This is the same assumption I used in describing the power monitor application.

The details
I can’t really take any credit for originality here. I just copied what I saw in another post. My only contribution is in realizing that the Pi makes a good platform to do this sort of thing with if you are running it as a server like I am.

What you can do is to create a reverse ssh tunnel. I find this easier and probably more secure than opening up ssh (inbound) on your home router and mapping that to the Pi. So I’m not going to talk about that method.

First ssh log in to your Pi.

From that session ssh to your hosted server using syntax like this:

> ssh -f -N -R 10000:localhost:22 username@ip_address_of_your_hosted_server

You can even log out of your Pi now – this reverse tunnel will stay*.

Now to access your Pi from “anywhere,” log into your server like usual, then from that session, login to your Pi thusly:

> ssh -p 10000 pi@localhost

That’s it! You should be logged on after supplying the password to the pi account.

*Except that in my experience the reverse tunnel does not stay! It’s staying up less than two hours.

But I think the approach is sound.

Feb 15th Update
This is a case of RTFM. That same web page I cited above has the necessary settings. I needed to have them on the Pi. It didn’t help when I put them on my Amazon server. Here they are repeated:

TCPKeepAlive yes
ClientAliveInterval 30
#ClientAliveCountMax 30
ClientAliveCountMax 99999
GatewayPorts yes
AllowTcpForwarding yes

This goes into the /etc/ssh/sshd_config file. Make sure you don’t have these mentioned a second time in that file.
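Don’t forget sshd has to re-read the file after you edit it. On the Pi (Debian-derived, so the service is called ssh) that should be something like:

> sudo service ssh restart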

With these settings my reverse tunnel has been up all day. It’s a real permanent tunnel now!

Security note
Make sure you modify the default passwords to your Pi before attempting this. You’re potentially exposing your whole home network in creating a reverse tunnel like this so you really have to be careful.

Conclusion
You can use your Raspberry Pi to create a reverse tunnel that allows you to access it from anywhere, assuming you have a cooperating hosted server on the Internet as a mutual meeting point for the ssh sessions. Exercise caution, though, as you are opening up your home network as well.

Currently the tunnel doesn’t stay up for very long – perhaps an hour or so. If I find a way to extend that I’ll revise this post.

References
Having trouble ssh’ing to your Ras Pi under any conditions? This article explains how to get past one common cause of this problem.

Categories
Admin DNS IT Operational Excellence

The IT Detective Agency: since when can a powered off PC do dynamic DNS updates?

Intro
The IT Detectives are back after a short lull during which no great mysteries needed expert resolution – you knew that situation couldn’t last too long. The following tale was relayed to me, I unfortunately cannot claim to have been any help whatsoever. The details have been somewhat obscured in this retelling.

The details
One of our DNS servers at drjohns was busy fielding lots and lots of DDNS updates. Good, right? No, not so. Because our employee PCs are all configured to not do this very thing. In Windows 7, drilling down into the advanced DNS settings, there is a checkbox for “Register this connection’s addresses in DNS.” And that is unchecked. So although we use DHCP, the PCs shouldn’t be sending their DDNS updates. Yet they were. In fact at one point a considerable amount of bandwidth was being eaten up with these unwanted updates, so we had to investigate and act. But where to begin?

Word finally got around to one of our PC experts who I guess probably had his suspicions. He suggested the following test:

turn the PC off and look for DDNS updates on the DNS server

Amazingly, that’s exactly what we found to be the case – DDNS updates coming from a powered off PC. The DDNS updates did not always go to the same DNS server. The chosen DNS server seemed randomly chosen, but they all were drjohns DNS servers.

A Wireshark examination of a trace (taken by a network engineer) showed lots of Dynamic Update SOA drj.com. I looked at the trace and found that that was just a title given by Wireshark for what was happening, and not a very accurate one. If you expand the packet you see that (mostly) it is a workstation trying to register its A record on the DNS server (a DDNS update). It wasn’t literally trying to change the SOA record for the zone, though that might have been the logical result of updating its A record.

What the power-off test showed to our subject-area expert is that Intel vPro was responsible for these DDNS updates. Wait, you ask, what the heck is vPro? We didn’t know either. As I understand it, it’s an additional Intel chip that some business-class laptops (e.g., DELL Latitude) might include that permits more and better remote management, allowing perhaps even some hardware diagnostics to occur.

So let’s go back to that test. Note that I said PC powered off, I did not say disconnected from the network! Powered-off-but-network-connected produces the DDNS update, powered-off-and-disconnected – no update, of course (Hey, it’s not magic going on here!).

So the solution, obviously, is to turn off DDNS in vPro. We thought it was off, but maybe not. We expect and hope this to be the solution, but a few more days will be needed before this all plays out and we know for sure.

Conclusion
I better hold off on any conclusion until our premise is confirmed! But one feeling I have is that sometimes you have to ingratiate yourself to the right people because no one person has all the answers!