Categories
Admin Linux

The IT Detective Agency: Teraterm’s colors washed out

Intro
Some things we do as IT folk are just embarrassingly stupid in retrospect. This is such a story. I present it in keeping with my overall theme of throwing stuff out there in the hopes that some of it helps someone else facing the same situation.

The details
I love teraterm (from logmett.com). Teraterm plus screen (as in /usr/bin/screen) makes for a great combination when you live and die by the command line.

Actually I have been told I only use a small fraction of teraterm’s capabilities. It is programmable, apparently. I’m a very basic user.

So I had the self-assigned task of switching out a DNS server from an older Solaris OS to Linux. I completed the task months ago, and I noticed one small side-effect: certain commands washed out the font to just about the same color as the background. For the record my text is black (R,G,B) = (0,0,0) with Attribute Normal and my background is beige (255,255,183). When it’s behaving normally it looks very pleasant and is easy on the eyes.

I noticed when I ran man pages the text was all washed out – just a slightly brighter yellow against the beige background. Same story when I ran vi: comments, i.e., text following a #, were washed out.

This was the case when I used a docking station. Using the native laptop display the text was still washed out, but not as severely, so I could just make it out by straining my eyes.

I played with font color and background color in Teraterm, but didn’t really get anywhere, so I learned to cope. I learned that if I piped the man page to more the text was all-black and I didn’t really lose any functionality. In vi I learned that if I deleted the whitespace before the #, the whole comment became visible, unless it started a line. Kludgy, but it worked and hardly slowed me down – this is after all just one of many, many hosts I was focused on.

Then it came time to migrate the second and last Solaris DNS server to Linux and I noticed the same thing happening on the new Linux server. What the…?

Previously I wasn’t really even sure when the washed-out problem occurred. This time I had no doubt that it was fine until the OS switch.

That in turn points to some difference in the environment, especially because on my many other Linux sessions I did not have this problem.

> env

shows the environment. By comparing where it was working to where it was not, I zeroed in on this environment variable: TERM.

TERM=vt100

where it wasn’t working

and

TERM=screen

where it was.
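For the record, the comparison itself is easy to do systematically: capture the environment on each host, bring the two files together and diff them. A minimal sketch (hypothetical file names):

$ env|sort > env.working
$ env|sort > env.broken
$ diff env.working env.broken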

I set TERM=screen:

> export TERM=screen

and immediately noticed the display working when running vi. Even multiple colors are displayed.

But actually, hmm, the man pages are still washed out, e.g.,

> man -s1 ls

shows NAME, SYNOPSIS and DESCRIPTION are all yellowed out, as well as all switches! That makes it really difficult to decipher.

Oh, well. This mystery is not completely solved.

My point was going to be that in Solaris the TERM=vt100 made sense – things worked better – and so it was in my .bashrc file. In Linux (SLES) it didn’t make so much sense. No setting for TERM seems to be necessary as the value screen gets auto-defined somehow if you’re using screen.

What I had done was copy my .bashrc file from Solaris to Linux not really thinking about it. That’s what did me in on these two servers.
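In hindsight a small guard in the shared .bashrc would have avoided the whole mess. This is just a sketch of the idea, not what I actually ran – uname -s reports SunOS on Solaris:

# only force TERM on Solaris; on Linux let screen define it
if [ "$(uname -s)" = "SunOS" ]; then
  export TERM=vt100
fi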

If I get around to resolving the man pages I’ll revise this post…

2020 update

Still plagued by this issue of washed out colors, I rolled up my sleeves and got it done. Turns out you have to set the Bold font settings separately – the Bold attribute gets its own color assignments in Teraterm’s window setup.

References
Teraterm used to be available from logmett.com but (2020 update) is no longer. Here is a current link: https://osdn.net/projects/ttssh2/releases/

Conclusion
Problems with washed-out colors using teraterm plus screen are resolved. Once again, this was mostly a self-inflicted problem.

Categories
Internet Mail

Analyzing the sendmail log

Intro
If you’ve read any of my posts you’ll see that I believe sendmail is a technical marvel, but that’s not to say it’s without its flaws.

One of the annoying things is that the From line and To line are recorded separately, in defiance of common logic. I present a simple program I wrote to join these lines.

The details

Without further ado, here is the program, which I call combine.pl:

#!/usr/bin/perl
# combine lines from stat.log
# Copyright work under the Artistic License, http://www.opensource.org/licenses/Artistic-2.0
# DrJ 6/2013 - make more readable based on this format:
# Date|Time|Size[Bytes]|Direction|Sender|Recipient|Relay-MTA
#
#
# From= usually has address surrounded by <>, but not always
#
# input of
#Jun 20 10:00:21 drjemgw sm-mta[24318]: r5KE0K1U024318: [email protected], size=5331, class=0, nrcpts=1, msgid=<15936355.7941275772268.JavaMail.easypaynet@Z32C1GBSZ4AN2>, proto=SMTP, daemon=sm-mta, relay=amx139.postini.com [20.26.143.12]
#Jun 20 10:00:22 drjemgw sm-mta[24367]: r5KE0K1U024318: to=<[email protected]>, delay=00:00:02, xdelay=00:00:01, mailer=esmtp, pri=125331, relay=drjinternal.com [50.17.188.196], dsn=2.0.0, stat=Sent (r5KE0M6E027784 Message accepted for delivery)
# produces
#20.6.2013|10:00:21|5331|IN|[email protected]|[email protected]|amx139.postini.com
#
use Getopt::Std;
getopts('s:f'); # -s search_string -f for "full" version of output
$DEBUG = 0;
 
print "$relay{$ID}, $lines{$ID}, $sender{$ID}, $size{$ID}\n";
$year = `date +%Y`;
chomp($year);
while(<>) {
  chomp;
  print $_ ."\n" if $DEBUG;
#
# get ID
  ($ID) = /\[\d{2,10}\]:\s+(\w+):\s+/;
#print "ID: $ID\n";
  if ($lines{"$ID"} && / stat=Sent /) {
    if ($opt_f) {
      $lines{"$ID"} .= '**to**line**'.$_;
    } else {
      ($recip,$relay) = /:\sto=<(.+)>,\s.*\srelay=(\S+)\s/;
# there can be multiple recipients listed
      $recip =~ s/[\<\>]//g;
# disposition of email.  This needs customization for your situation, but it only determines IN vs OUT vs INTERNAL so it's not critical...
# In this example coding we get all our inbound email from postini.com, and outbound mail comes from drjinternal
      if ($relay{$ID} =~ /postini\.com/) {
        $disp = "IN";
      } else {
        $disp = $relay =~ /drjinternal/ ? "INTERNAL" : "OUT";
      }
      $lines = "$lines{$ID}|$size{$ID}|$disp|$sender{$ID}|$recip|$relay{$ID}";
      if ( ($lines =~ /$opt_s/ || ! $opt_s) && ($sender{$ID} || $recip) ) {
        $lines .= "|$ID" if $DEBUG;
#        push @lines, $lines; # why bother?  just spit it out immediately
         print "$lines\n";
      }
# save memory, hopefully? - can't do this. sometimes we have multiple To lines
#      undef $relay{$ID}, $lines{$ID}, $sender{$ID}, $size{$ID};
      print "$recip\n" if $DEBUG;
    }
  } else {
    if ($opt_f) {
      $lines{"$ID"} .= '**from**line**'.$_;
    } else {
      ($mon,$date,$time,$sender,$size,$relay) = /^(\w+)\s+(\d+)\s+([\d:]+)\s.+\sfrom=<?([^<][^,]*\w)>?,\ssize=(\d+).*relay=(\S+)/;
# convert month name to month number
      $monno = index('JanFebMarAprMayJunJulAugSepOctNovDec',$mon)/3 + 1;
# the year is faked - it's not actually recorded in the logs so we assume it's the current year...
      $lines{$ID} = "$date.$monno.$year|$time";
      $size{$ID} = $size;
      $sender{$ID} = $sender;
      $relay{$ID} = $relay;
 
      print "$mon,$date,$time,$sender,$size,$relay\n" if $DEBUG;
    }
  }
}
 
# now start matching
if ($opt_f) {
  foreach (@lines) {
    print $_."\n"
  }
}

What it does is combine the From and To lines based on the message ID, which is unique to each message.

Usage
I usually use it to suck in an entire day’s log (I call my sendmail log stat.log) and grep the output to look for a particular string. For instance, today there was a spam blast in which ADP’s identity was phished. The sending domains all contained some variant of adp: adp.net, adp.org, adpmail.com, adp.biz, etc. So I wanted to answer the question: who received any of these ADP phishing emails today? Here’s how you use the program to do that:

$ combine.pl<stat.log|grep adp.com|more

The input lines look like this:

Jun 20 10:00:21 drjemgw sm-mta[24318]: r5KE0K1U024318: [email protected], size=5331, class=0, nrcpts=1, msgid=<15936355.7941275772268.JavaMail.easypaynet@Z32C1GBSZ4AN2>, proto=SMTP, daemon=sm-mta, relay=amx139.postini.com [27.16.14.22]
Jun 20 10:00:22 drjemgw sm-mta[24367]: r5KE0K1U024318: to=<[email protected]>, delay=00:00:02, xdelay=00:00:01, mailer=esmtp, pri=125331, relay=drjinternal.com. [50.17.188.196], dsn=2.0.0, stat=Sent (r5KE0M6E027784 Message accepted for delivery)

The output from combine.pl looks like this:

20.6.2013|10:00:21|5331|IN|[email protected]|[email protected]|amx139.postini.com

Yeah, I got that ADP spam by the way…

Conclusion
A useful Perl script has been presented which helps mail admins combine separate output lines into a single entry, preserving the most important meta-data from the mail.

Other interesting sendmail posts are also available here and here.

Categories
Admin Network Technologies

Extended Passive Mode FTP through Checkpoint Firewall

Intro
The vast majority of time there is no problem doing an FTP to a server behind a firewall protected by Checkpoint’s Firewall-1. But occasionally there is.

The details

The problem I am about to document will, I think, only occur on a server that has multiple interfaces. I have seen it occur on multiple operating systems, so that doesn’t seem to matter. On the other hand, I have also not seen it on other similar systems, a point I don’t yet fully understand.

Nevertheless, a work-around is always appreciated, so I provide what I found here, to complete my extensive documentation of problems I’ve encountered and resolved.

Here is a snippet from the FTP session showing the problem:

ftp> cd uploadDirectory
250 CWD command successful.
ftp> put smallfile.txt
local: smallfile.txt remote: smallfile.txt
229 Entering Extended Passive Mode (|||36945|)
200 EPRT command successful.
421 Service not available, remote server timed out. Connection closed

And here is the solution:

Enter epsv4 after logon and before any other commands are issued. Problem fixed!
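In the BSD-derived ftp client, epsv4 toggles off the use of EPSV/EPRT for IPv4, so the client falls back to the classic PASV/PORT commands, which the firewall handles without complaint. A sketch of what that looks like (reconstructed, not a captured transcript):

ftp> epsv4
EPSV/EPRT on IPv4 off.
ftp> put smallfile.txt

The put then proceeds normally.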

Conclusion
We have shown a way to fix a firewall-related problem that manifests itself during extended passive mode FTPs. Some more research should be done to understand under what circumstances this problem should be expected, but it seems to occur with a Checkpoint Firewall-1 firewall and an FTP server with multiple interfaces.

Categories
Admin Network Technologies

The IT Detective Agency: trouble with wireless at home

Intro
I don’t usually have the luxury of writing about a mystery I’ve solved right out of my own home, but there finally is one that I got to the bottom of recently – poor WiFi performance.

The details
Considering that I deal with this stuff for a living, I have a threadbare setup at home. After my company-issued router’s WiFi began to work unreliably, I resuscitated an old Linksys wireless router, a WRK54G V2. Superficially it seemed to work. But we weren’t very demanding of it.

It eventually seemed to be the case, as visitors mentioned, that streaming video did not work through wireless. This was hard for me to check with my broken-down, aging equipment. I have a desktop which freezes and crashes if you play any Youtube video, and a netbook which worked somewhat better, except that its Ethernet interface doesn’t work. Over wireless, its version of Flash was too old and insecure for Firefox, and attempts to update Flash using WiFi in turn were unsuccessful.

In general the Linksys router, as I eventually realized, seemed to serve up large downloads fine at first, but then at some point during the download things would begin to crawl and you were left with a download proceeding at 10 kbit/s or something ridiculously slow like that.

Providing mixed evidence was a Sony Blu-ray player. Using WiFi it could sort of manage to show a HuluPlus TV episode. You might have to be patient at times while it loaded, but we did get through a full episode of Grey’s Anatomy recently.

After more complaints I decided enough is enough. It seemed as though my WiFi was the most likely suspect, sifting through the mixed evidence. I perhaps waited so long because who’d think they’d be dealing with two bad WiFi routers from two totally different vendors?

So hedging my bets, I didn’t go all out with a new Gbit router. I reached back in time a little and got a refurbished Cisco 1200E wireless-N router. It was only $28 from Amazon. But before buying it, I read the comments and got one idea about routers: sometimes they need to be rebooted!

This is pretty funny, really, because it is probably apparent to any homeowner, and here I am, a specialist, missing this point. You see with Cisco enterprise-class gear you almost never have to reboot to fix a problem. These things run uninterrupted for not only weeks and months at a time, years at a time is also not at all uncommon. Same for some Unix servers. So from my perspective rebooting is something for consumer devices running Microsoft OSes!

So, before rebooting the Linksys to see if that would cure it, I ran a ping to Google’s DNS server (very easy to remember its IP) from a CMD window:

> ping -t 8.8.8.8

I didn’t preserve the output, but it wasn’t pretty. It was something like this:

Pinging 8.8.8.8 with 32 bytes of data:
Reply from 8.8.8.8: bytes=32 time=51ms TTL=56
Reply from 8.8.8.8: bytes=32 time=369ms TTL=56
Reply from 8.8.8.8: bytes=32 time=51ms TTL=56
Reply from 8.8.8.8: bytes=32 time=1204ms TTL=56
Reply from 8.8.8.8: bytes=32 time=284ms TTL=56
...

51 msec – fine. But round-trip times much greater than that? That’s not right.

So I hopefully reboot the Linksys router and re-run the test on the Netbook:

Pinging 8.8.8.8 with 32 bytes of data:
Reply from 8.8.8.8: bytes=32 time=51ms TTL=56
Reply from 8.8.8.8: bytes=32 time=51ms TTL=56
Reply from 8.8.8.8: bytes=32 time=51ms TTL=56
Reply from 8.8.8.8: bytes=32 time=50ms TTL=56
Reply from 8.8.8.8: bytes=32 time=51ms TTL=56
...

Much more consistent.

Try a Youtube video from Firefox. Nope, need to update Flash. Update Flash. Nope – download times out and kicks me out.

So I’ve accomplished nothing in rebooting in terms of results that matter.

That’s when I decided to check out of Amazon with that refurbished router.

Aside about Wireless-N
Given my ancient equipment, I was concerned that Wireless-N routers might not be compatible with my wireless radios, which only support G. Is it backwards compatible? Yes. Some quick research showed that and my own experience confirmed it.

Conclusion
The setup of the router was pretty straightforward although it froze at some point just after I set the wireless password. It helps to have done this a zillion times before. At that point I observed what my default gateway was and hit it as a web site URL. Guessed the admin password incorrectly a zillion times, until I tried the wireless password as the admin password, and, wham – I was in and happily configuring away…

More importantly, I went to that Netbook, updated Flash. No problems. Ran a Youtube video. No problems. Ran a speedtest.net test (which wouldn’t even run before this). Numbers look as good as my wired connection: 6 mbit download, 0.6 mbit upload.

Last test is to see where the speed maxes out within my home network. I plan to hit my Raspberry Pi web server to test this and will provide results as soon as they are available.
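An easy way to run that test, once the Pi is serving a large file, is curl, whose progress meter reports the average throughput as it goes. A sketch with a made-up address and file name:

$ curl -o /dev/null http://192.168.1.100/bigfile.bin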

Conclusion to the conclusion
So I really was cursed with two bad wireless routers. Sometimes using 10-year-old equipment is really not worth the $30 saved in deferred spending. Read product reviews on Amazon to get hints about real issues others have faced.
To be continued…

Categories
DNS Scams

What if someone approaches you offering a domain?

Intro
As a domain owner you will sooner or later get an unsolicited email like the following one I received March 28th:

Hello,
 
We are promoting the sale of the domain name johnstechtalk.com that is being returned back to the open market very soon.
You own a very similar domain and we wanted to give you a first chance to secure johnstechtalk.com. If this offer is of any 
interest to you, the link below will lead you to our website where you can leave an early offer:
 
http://baselane.net/acquire/c00bsn1ub/J8jIGPiguH
 
Alternatively you can simply reply to this e-mail with your offer and we will manually process your order.
 
Here are a few quick notes about the offer:
-You are leaving an offer for full ownership and control over the domain. 
-You do not have to use our hosting or any other service, you are bidding only for the domain.
-This is a single transaction, no hidden surprises. 
-We will not give away your personal information to anybody.
-You will not need a new website or hosting you can easily redirect your existing website to point to this one.
-Our technical team stands at your disposal for any transfer/redirect issue you may have.
 
Thank you for considering our domain name service!
Please feel free to call us any time we would be really happy to hear from you!
 
Kind regards,
Domain Team

The thing is, this is not complete spam. After all, it is kind of interesting to pick up a shorter domain.

But is this a legitimate business proposition? What can we do to check it? Read on…

The details
The first reaction is “forget it.” Then you think about it and think, hmm, it might be nice to have that domain, too. It’s shorter than my current one and yet very similar, thus potentially enhancing my “brand.”

To check it out without tipping your hand, use Whois. I use Network Solutions Whois.

Doesn’t the offer above make it sound like they have control over the domain and are offering you a piece of it? Quite often that’s not at all the case. For them to control the domain to the point where they are selling it would require an upfront investment. So instead what they do in many cases I have encountered is to try to prey on your ignorance.

When I received their offer the Whois lookup showed the domain to be in status

RedemptionPeriod

From what I have read the redemption period should last 75 days. It’s a time when the original owner can reclaim the domain without any penalties. No one else can register it.

If they actually owned the domain and were trying to auction it off, it would have had the standard Lock Status of

clientTransferProhibited

or

clientDeleteProhibited

Furthermore, domains being auctioned usually have special nameservers like these:

Nameservers:
  ns2.sedoparking.com
  ns1.sedoparking.com

Sedo is a legitimate auction site for domains.

johnstechtalk.com, having entered the redemption period, will be up for grabs unless the owner reclaims it.

If I had expressed interest in it I’m sure they would have obtained it, just like I could for myself, at the end of the redemption period and then sold it to me at a highly inflated price.

Not wanting to encourage such unsavory behaviour I made no reply to the offer and checked the status almost every day.

New status – it’s looking good

Last week sometime it entered a new status:

pendingDelete

I think this status persists for three days or so (I forget). Then, when that period is over it shows up as available. I bought it using my GoDaddy account for $9.99 last night – actually $11.00 because there’s an ICANN fee of $0.18 and I rounded up for charity.

And this is not the only domain I have bought this way. I bought vmanswer.com because I was annoyed by the number of unsolicited offers to “buy” it! That purpose was achieved…

But I am watching another domain that was offered to me and really did go to the auction house Sedo, where it is currently sitting (which means no one else is all that interested). I am curious to see what happens when it expires later this year.

Save the labor
How could I have avoided the trouble of those daily whois lookups? Well, on my Linux server there is the ever-handy whois, as in

$ whois johnstechtalk.com

But sometimes it gives fairly complete information and for other domains not so much. It depends on the registrar. For GoDaddy domains you get next to no information:

[Querying whois.verisign-grs.com]
[Redirected to whois.godaddy.com]
[Querying whois.godaddy.com]
[whois.godaddy.com]

I suspect it is a measure GoDaddy takes to avoid programmatic use of WhoIs. Because if it answered with complete information it would be easy for a modest scripter like me to write a program that runs all kinds of queries, which of course would mostly be used by the scammers, I suppose. In particular, since I wasn’t seeing the domain Lock Status from command-line whois, I didn’t bother to write a program to automate my daily query. Otherwise I probably would have.
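For completeness, the automation would have looked something like this – a sketch only, since as noted the status wasn’t visible to me anyway (hypothetical log path):

#!/bin/sh
# append today's whois status for the domain to a log; run daily from cron
domain=johnstechtalk.com
(date; whois $domain|grep -i status) >> /tmp/whois.$domain.log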

What about cybersquatters?
In the case mentioned above there is no trademark at stake. Often there is. What should you do if you receive an offer to sell you a domain name which is based on one of your own trademarks? I get lots of those as well. My approach is, of course, to not be extorted. So at first I was ignoring such solicitations. If I want to really go after the domain, I will sic my legal team on them and invoke the UDRP (ICANN’s Uniform Domain Dispute Resolution Policy). The UDRP comes down heavily in favor of the trademark holder.

But lately I wanted to do something more. Since this is illicit activity at the end of the day, I look at where the email comes from. Often a Gmail account is used. I gather the headers of the message and file a formal complaint with Google’s Gmail abuse form, which I hope leads to their account being shut down. I want to at least inconvenience them without wasting too much of my own resources. Well, I don’t actually know that it works, but it makes me feel better in any case 🙂 .

This is the Gmail abuse page. Yahoo and MSN also have similar forms.

Conclusion
Unsolicited offers of similar-sounding domains are one of the many scams rampant on the Internet. But with the background I’ve provided hopefully you’ll be better at separating the scams from the genuine domain owners seeking to do business through auctions or private sales.

Interested in reading about other scams? Try Spam and Scams – What to Expect When You Start a Blog

Categories
Linux

Is Mining Bitcoins on the Amazon Cloud the Road to Riches?

Intro
Answer: Not as far as I can tell. Of course it’s irresistible for us technical folks to try. Here are my back-of-the-envelope calculations for my trial.

The details
A currency that’s not linked to any one government’s policies has a lot of attraction. Bitcoin is that currency, and it seems to be catching on. I knew people last year who were “mining” Bitcoins. I had no idea what they were talking about, but I could tell from what they were saying that they were trying to create more currency units. How strange and wonderful, a currency that gets minted by potentially anyone.

I learn mostly by doing, so I decided to download one of those mining programs and see what this was all about.

Well, I still haven’t learned what it’s all about because it’s more complicated than I thought, but I learned what approach not to take. And that’s what I’m sharing here.

I downloaded bfgminer for my CentOS Amazon EC2 server. That in itself was a good exercise as it needed a whole ecosystem of other packages to be installed first. On my system I found I needed ncurses-devel and libcurl-devel, which brought in other packages so that by the time they were installed I had installed all these packages:

libcurl-devel-7.19.7-35.el6
curl-7.19.7-35.el6
libidn-devel-1.18-2.el6
libcurl-7.19.7-35.el6
libssh2-1.4.2-1.el6
ncurses-static-5.7-3.20090208.el6
ncurses-devel-5.7-3.20090208.el6

It’s also designed for a different type of computing environment than mine. Getting it to compile was one thing, but getting it to actually run is another.

At first it found nothing to run on. So I had to recompile, this time specifying:

$ ./configure --enable-cpumining

to enable use of my virtual CPU.

It wanted a pool URL and other things I don’t have when it starts up. I finally found a way to run it in test mode.

The results
My setup at Amazon could calculate 0.4 megahashes per second. Doesn’t sound too bad, right? Wrong. Looking at some of the relevant numbers and doing a back-of-the-envelope calculation we have:

– total world computing power dedicated to this effort: 60,000 Giga hashes per second
– rate of blocks being written: six per hour
– number of bitcoins in a block: 25
– value of a bitcoin: $78

From this we have:
Minimum computation required for a DIY effort to produce one block:

Effort = 10 minutes * 60 s/min * 60×10^12 hashes/s = 3.6×10^16 hashes =~ 4×10^16 hashes

So with my resources on my small instance this will take me:

time to make a block = 4×10^16 hashes/block / 0.4×10^6 hashes/s = 10^11 s
= 10^11 s * year/(π•10^7 s) =~ 3×10^3 years
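A quick sanity check of that arithmetic with bc, for the skeptical (600 is ten minutes in seconds; bc truncates at scale=0, so the result is approximate):

$ echo 'scale=0; 600*60*10^12/(0.4*10^6)/(3.14159*10^7)'|bc
2864

Call it 3×10^3 years either way.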

Why my fixation on a block as the minimum unit of bitcoins? Because in my five minutes of reading that seems to be the minimum acceptable unit to be able to mint more bitcoins.

By the way, every physicist knows that a year has π•10^7 seconds! That’s one of those useful numbers we carry around in our heads.

For the scientific-notation challenged, I’m saying that it will take me 3,000 years to create a block of bitcoins by myself!

Now let’s have some fun with this. Of course Amazon being the premier cloud hosting company that it is, you can rent (I have heard of this actually being done) 30,000 servers at once.

To be continued…

Appendix
How I measured my hash rate
I ran

$ bfgminer --benchmark

Then I interrupted it (Ctrl-C) and got these results:

 [2013-04-16 08:25:39]
Summary of runtime statistics:
 
 [2013-04-16 08:25:39] Started at [2013-04-15 12:55:43]
 [2013-04-16 08:25:39] Pool: Benchmark
 [2013-04-16 08:25:39] CPU hasher algorithm used: c
 [2013-04-16 08:25:39] Runtime: 19 hrs : 29 mins : 56 secs
 [2013-04-16 08:25:39] Average hashrate: 0.4 Megahash/s
 [2013-04-16 08:25:39] Solved blocks: 0
 [2013-04-16 08:25:39] Best share difficulty: 0
 [2013-04-16 08:25:39] Queued work requests: 0
 [2013-04-16 08:25:39] Share submissions: 0
 [2013-04-16 08:25:39] Accepted shares: 0
 [2013-04-16 08:25:39] Rejected shares: 0
 [2013-04-16 08:25:39] Accepted difficulty shares: 0
 [2013-04-16 08:25:39] Rejected difficulty shares: 0
 [2013-04-16 08:25:39] Hardware errors: 0
 [2013-04-16 08:25:39] Efficiency (accepted / queued): 0%
 [2013-04-16 08:25:39] Utility (accepted shares / min): 0.00/min
 
 [2013-04-16 08:25:39] Discarded work due to new blocks: 46376
 [2013-04-16 08:25:39] Stale submissions discarded due to new blocks: 0
 [2013-04-16 08:25:39] Unable to get work from server occasions: 0
 [2013-04-16 08:25:39] Work items generated locally: 0
 [2013-04-16 08:25:39] Submitting work remotely delay occasions: 0
 [2013-04-16 08:25:39] New blocks detected on network: 0
 
 [2013-04-16 08:25:39] Summary of per device statistics:
 
 [2013-04-16 08:25:39] CPU0                | 5s:  0.0 avg:377.4 u:  0.0 kh/s | A:0 R:0 HW:0 U:0.0/m

The "Average hashrate" line near the top of the summary shows the average hash rate of 0.4 Megahashes/second.

Other resources
Bitcoin exchange value really fluctuates a lot compared to conventional government-sponsored currencies! Go here for the current value.

A timely and informative intro to Bitcoin is available here.

Categories
Admin CentOS Security

Example using iptables, the CentOS firewall

Intro
This document is mostly for my own purposes. I don’t even think this is the best way to run the firewall, it’s just the way I happened to adopt.

Background
My friends tell me ipchains was good software. Unfortunately the guy who wrote iptables, which emulates the features of ipchains, wasn’t at that same skill level, and the implementation shows it. I know I struggled with it a bit.

Motivation
I decided to run a local firewall on my HP SiteScope server because a serious security issue was found with our version’s HTTP server such that it was advisable to lock it down to only those administrators who need access to the GUI.

The details
This was actually implemented on Redhat v 5.6, though I don’t suppose it would be much different on CentOS.

December 2013 update
I also tried this same script provided below on a Redhat 6.4 OS – it worked the exact same way without modification.

The main thing is that I maintain a file with the “firewall rules.” I call it iptables. So I need to remember from invocation to invocation where I store this master file. Here are the contents:

#!/bin/sh
# DrJ, 9/2012
# inspired by http://wiki.centos.org/HowTos/Network/IPTables
# flush all previous rules
export PATH=$PATH:/sbin
iptables -F
#
# our main rules here:
#
# Accept tcp packets on destination port 8080 (HP SiteScope) from select individuals
# DrJ: office, home, vpn
iptables -A INPUT -p tcp -s 192.168.76.56 --dport 8080 -j ACCEPT
iptables -A INPUT -p tcp -s 10.2.6.107 --dport 8080 -j ACCEPT
iptables -A INPUT -p tcp -s 10.3.13.138 --dport 8080 -j ACCEPT
#
# the server itself
iptables -A INPUT -p tcp -s 127.0.0.1 --dport 8080 -j ACCEPT
#
# set dflt policies
# for logging see http://gr8idea.info/os/tutorials/security/iptables5.html
#iptables -A INPUT -j LOG --log-level 4 --log-prefix 'InDrop '
# this is a killer!
#iptables -P INPUT DROP
# just drop what is really the problem...
iptables -A INPUT -p tcp --dport 8080 -j DROP
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
#
# access for loopback
iptables -A INPUT -i lo -j ACCEPT
#
# Accept packets belonging to established and related connections
#
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
#
# Save settings
#
/sbin/service iptables save
#
# List rules
#
iptables -L -v

Of course you have to have iptables running. I do a

$ sudo service iptables status

to verify that. If its status is “not running,” start it up.
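On these older Redhat/CentOS releases that is simply:

$ sudo service iptables start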

As mentioned in the comments I tried to be more strict with the rules since I’m used to running firewalls with a DENY All rule, but it just didn’t work out well for me. I lost patience and gave up on that and settled for dropping all traffic to TCP port 8080 except the explicitly permitted hosts, which is good enough for our immediate needs.

Conclusion
This is a simple example of a way to use iptables. It’s probably not the best example, but it’s what I used so it’s better than nothing.

Categories
Linux

Solving this week’s NPR weekend puzzle with a few Linux commands

Intro
I listen to the NPR puzzle every Sunday morning. I’m not particularly good at solving them, however – I usually don’t. But I always consider if I could get a little help from my friendly Linux server, i.e., if it lends itself to solution by programming. As soon as I heard this week’s challenge I felt that it was a good candidate. I was not disappointed…

The details
So Will Shortz says think of a common word with four letters. Now add O, H and M to that word, scramble the letters to make another common word in seven letters. The words are both things you use daily, and these things might be next to each other.

My thought pattern on that was: great, we can look through a dictionary of seven-letter words which contain O, H and M. That already might be sufficiently limiting.

This reminded me of using the built-in Linux dictionary to give me some great tips when playing Words with Friends, which I document here.

In CentOS my dictionary is /usr/share/dict/linux.words. It has 479,829 words:

$ cd /usr/share/dict; wc linux.words

That’s a lot. So of course most of them are garbagey words. Here’s the beginning of the list:

$ more linux.words

1080
10-point
10th
11-point
12-point
16-point
18-point
1st
2
20-point
2,4,5-t
2,4-d
2D
2nd
30-30
3-D
3-d
3D
3M
3rd
48-point
4-D
4GL
4H
4th
5-point
5-T
5th
6-point
6th
7-point
7th
8-point
8th
9-point
9th
-a
A
A.
a
a'
a-
a.
A-1
A1
a1
A4
A5
AA
aa
A.A.A.
AAA
aaa
AAAA
AAAAAA
...

You see my point? But amongst the garbage are real words, so it’ll be fine for our purpose.

What I like to do is build up to increasingly complex constructions. Mind you, I am no command-line expert. I am an experimentalist through-and-through. My development cycle is Try, Demonstrate, Fix, Try, Demonstrate, Improve. The whole process can sometimes be finished in under a minute, so it must have merit.

First try:

$ grep o linux.words|wc

 230908  230908 2597289

OK. Looks like we got some work to do, yet.

Next (using up-arrow key to recall previous command, of course):

$ grep o linux.words|grep m|wc

  60483   60483  724857

Next:

$ grep o linux.words|grep m|grep h|wc

  15379   15379  199724

Drat. Still too many. But what are we actually producing?

$ grep o linux.words|grep m|grep h|more

abbroachment
abdominohysterectomy
abdominohysterotomy
abdominothoracic
Abelmoschus
abhominable
abmho
abmhos
abohm
abohms
abolishment
abolishments
abouchement
absmho
absohm
Acantholimon
acanthoma
acanthomas
Acanthomeridae
acanthopomatous
accompliceship
accomplish
accomplishable
accomplished
accomplisher
accomplishers
accomplishes
accomplishing
accomplishment
accomplishments
accomplisht
accouchement
accouchements
accroachment
Acetaminophen
acetaminophen
acetoamidophenol
acetomorphin
acetomorphine
acetylmethylcarbinol
acetylthymol
Achamoth
achenodium
achlamydeous
Achomawi
...

Of course, words with capitalizations, words longer and shorter than seven letters – there’s lots of tools left to cut this down to manageable size.

With this expression we can simultaneously require exactly seven letters in our words and require only lowercase alphabetical letters: egrep '^[a-z]{7}$'. This is an extended regular expression that matches the beginning (^) and end ($) of the string, only characters a-z, and exactly seven of them ({7}).

With that vast improvement, we’re down to 352 entries, a list small enough to browse by hand. But the solution still didn’t pop out at me. Most of the words are obscure ones, which should automatically be excluded because we are looking for common words. We have:

$ grep o linux.words|grep m|grep h|egrep '^[a-z]{7}$'|more

achroma
alamoth
almohad
amchoor
amolish
amorpha
amorphi
amorphy
amphion
amphora
amphore
apothem
apothgm
armhole
armhoop
bemouth
bimorph
bioherm
bochism
bohemia
bohmite
camooch
camphol
camphor
chagoma
chamiso
chamois
chamoix
chefdom
chemizo
chessom
chiloma
chomage
chomped
chomper
chorism
chrisom
chromas
chromed
chromes
chromic
chromid
chromos
chromyl
...

So I thought it might be inspiring to put, next to each word, the four letters you would have left if you took away the O, H and M.

I probably ought to use xargs but never got used to it. I’ve memorized this other way:

$ grep o linux.words |grep m|grep h|egrep '^[a-z]{7}$'|while read line; do
> s=`echo $line|sed s/o//|sed s/h//|sed s/m//`
> echo $line $s
> done|more

sed is an old standard used to do substitutions. sed s/o// for example is a filter which removes the first occurrence of the letter O.

I could almost use the tr command, as in

> …|tr -d 'ohm'

in place of all those sed statements, but I couldn’t solve the problem of tr deleting all occurrences of the letters O, H and M, not just the first of each. And the solution didn’t jump out at me.
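One small consolation: sed accepts several substitution commands in a single invocation, so the three piped seds can at least be collapsed into one with identical behavior:

> …|sed 's/o//;s/h//;s/m//'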

So until I figure that out, use sed. That gives:

achroma acra
alamoth alat
almohad alad
amchoor acor
amolish alis
amorpha arpa
amorphi arpi
amorphy arpy
amphion apin
amphora apra
amphore apre
apothem apte
apothgm aptg
armhole arle
armhoop arop
bemouth beut
bimorph birp
bioherm bier
bochism bcis
bohemia beia
bohmite bite
camooch caoc
camphol capl
camphor capr
chagoma caga
chamiso cais
chamois cais
chamoix caix
chefdom cefd
chemizo ceiz
chessom cess
chiloma cila
chomage cage
chomped cped
chomper cper
chorism cris
chrisom cris
chromas cras
chromed cred
chromes cres
chromic cric
chromid crid
chromos cros
chromyl cryl
...

Friday update
I can now show the section of the listing that reveals the answer, because the submission deadline has passed. Here it is:

...
schmoes sces
schmoos scos
semihot seit
shahdom sahd
shaloms sals
shamalo saal
shammos sams
shamois sais
shamoys says
shampoo sapo
shimose sise
shmooze soze
shoeman sean
sholoms slos
shopman span
shopmen spen
shotman stan
...

See it? I think it leaps out at you:

shampoo sapo

becomes of course:

soap
shampoo

!

They’re common words found next to each other that obey the rules of the challenge. You can probably tell I’m proud of solving this one. I rarely do. I hope they don’t call on me because I also don’t even play well against the radio on Sunday mornings.

Conclusion
I couldn’t give out the answer right away because the submission deadline was still a few days off. But I will say that the answer pretty much pops out at you when you review the full listing generated with the above sequence of commands. There is no doubt whatsoever.

I have shown how a person with modest command-line familiarity can solve a word problem that was put out on NPR. I don’t think people are so much interested in learning a command line because there is no instant gratification and the learning curve is steep, but for some it is still worth the effort. I use it, well, all the time. Solving the puzzle this way took a lot longer to document, but probably only about 30 minutes of actual tinkering.

Categories
Admin Linux Raspberry Pi Security

Generate Pronounceable Passwords

2017 update
Turns out gpw is an available package in Debian Linux, including Raspbian which runs on Raspberry Pi. Who knew? A simple sudo apt-get install gpw will provide it. So I guess the source wasn’t lost at all.

Intro
15 years ago I worked for a company that wanted to require authentication in order to browse the Internet. I searched around for a password generator.

What I came up with is gpw – generate pronounceable passwords.

The details
I think this approach to secure passwords is no longer best practice, but I still think it has a place for some applications. What it does is analyze a dictionary that you’ve fed it. It then determines the frequency of occurrence of what it calls trigraphs – I guess that’s three consecutive letter combinations. Then it generates random, non-dictionary passwords using those trigraphs, which are presumably wholly or partially pronounceable.

Cute, huh? I’d say one problem is that if the bad guys got wind of this approach, the number of combinations they’d have to try when password cracking would be severely reduced.

Sophos has a recommendation for forming good strong passwords. See their blog post about the 50 worst passwords, which contains a link to a video on how to choose a good password.

But I still have a soft spot for this old approach, and I think it’s OK to use it, get your password such as inglogri, add a few non-alpha-numeric characters and come up with a reasonably good, memorable password. Every site you use should really get a different password, and this tool might make that actually feasible.

I run it as:

$ gpw

which produces:

seminour
shnopoos
alespige
olpidest
hastrewe
nsivelys
shaphtra
bratorid
melexseu
sheaditi

Its output changes every time, of course.

I mostly run it this way:

$ gpw 1

which produces only a single password, for instance:

ojavishd

You see how these passwords are sort of like words, but not words? Much more memorable than those completely random ones you are sometimes forced to type and which are impossible to remember?
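Following my own advice about adding a few non-alpha-numeric characters, a one-liner like this (just a sketch – pick your own suffix) finishes the job:

$ pw=$(gpw 1); echo "${pw}#7"
ojavishd#7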

I noted the location where I pulled it from the web 15 years ago as is my custom, but it is no longer available. So I have decided to make it available. I tweaked it to compile on CentOS with a C++ compiler.

Here is the CentOS v 6 binary for x86_64 architecture and README file.

Here is the tar file with the sources and the binary mentioned above. Run a make clean first to begin building it.

Enjoy!

Potential Problems
I know when we originally used it to assign 15,000 unique passwords, the randomness algorithm was so bad that I believe some people received identical passwords! So the total number of generatable passwords might be severely limited. Please check this before using it in any meaningful way. I would naively expect and hope that it could generate about two to three times the number of words in my dictionary (/usr/share/dict/linux.words, with 479,829 words). But I never verified this.

2017 update
I ran it, 100 passwords at a time, on my Raspberry Pi for a couple of minutes. I created 275,900 passwords, of which 269,407 were unique. Strange. So you get some repeats, but you mostly get new passwords.
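A test along these lines reproduces those counts (a sketch; gpw takes a count argument, as shown above):

$ for i in $(seq 1 2759); do gpw 100; done > pw.txt
$ wc -l < pw.txt
275900
$ sort -u pw.txt|wc -l
269407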

Further, I was going to tweak the code to generate 9-letter passwords which would presumably be more secure. But they just didn’t look as good to me, and I’ve only ever used it with 8 letters. So I just decided to keep it at 8 letters. You can experiment with that if you want.

More fun with the Linux dictionary
For another fun example using the Linux dictionary see how I solved the NPR weekend puzzle using it, described here.

A note for Debian Linux users (Ubuntu, Raspberry Pi, …)
The dictionary there is /usr/share/dictd/wn.index. You’ll need to update the Makefile to reflect this. This post about Words with Friends explains the packages I used to provide that dictionary.

Conclusion
An old pronounceable password generating program has been dusted off and given back to the open source community. It may not be state-of-the-art, but it has a role for some usages.

References and related
Want truly random passwords? I want to call your attention to random.org’s password generator: https://www.random.org/passwords/

Most people are becoming familiar with the idea of not reusing passwords but I don’t know if everyone realizes why. This article is a comprehensive review of the topic, plus review of password vaults like Lastpass, etc which you may have heard of: https://pixelprivacy.com/resources/reusing-passwords/

Categories
Admin Linux Security

My favorite openssl commands

Intro
openssl is available on almost every operating system. It’s a great tool if you work with certificates regularly, or even occasionally. I want to document some of the commands I use most frequently.

The details

Convert PEM CERTs to other common formats
I just used this one yesterday. I got a certificate in PEM format as is my custom. But not every web server out there is apache or apache-compatible. What to do? I’ve learned to convert the PEM-formatted certificates to other favored formats.

The following worked for a Tomcat server and also for another proprietary web server which was running on a Windows server and wanted a pkcs#12 type certificate:

$ openssl pkcs12 -export -chain -inkey drjohns.key -in drjohns.crt -name "drjohnstechtalk.com" -CAfile intermediate_plus_root.crt -out drjohns.p12

The intermediate_plus_root.crt file contained a concatenation of those CERTs, in PEM format of course.

If you see this error:

Error unable to get issuer certificate getting chain.

, it probably means that you forgot to include the root certificate in your intermediate_plus_root.crt file. You need both intermediate plus the root certificates in this file.

And this error:

unable to write 'random state'

means you are using the Windows version of openssl and you first need to do this:

set RANDFILE=C:\MyDir\.rnd

, where MyDir is a directory where you have write permission, before you issue the openssl command. See https://stackoverflow.com/questions/12507277/how-to-fix-unable-to-write-random-state-in-openssl for more on that.

The beauty of the above openssl command is that it also takes care of setting up the intermediate CERT – everything needed is shoved into the .p12 file. .p12 can also be called .pfx. So a PFX file is the same thing as what we’ve been calling a PKCS12 certificate.

How to examine a pkcs12 (pfx) file

$ openssl pkcs12 -info -in file_name.pfx
It will prompt you for the password a total of three times!
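If the prompting gets old, you can supply the import password on the command line and skip writing out the private key. Not something to do with a sensitive password, but handy for a quick look (a sketch using the standard -passin and -nokeys switches):

$ openssl pkcs12 -info -in file_name.pfx -nokeys -passin pass:MyPassword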

Examine a certificate

$ openssl x509 -in certificate_name.crt -text

Examine a CSR – certificate signing request

$ openssl req -in certificate_name.csr -text

Examine a private key

$ openssl rsa -in certificate_name.key -text

Create a SAN (subject alternative name) CSR

This is a two-step process. First you create a config file with your alternative names and some other info. Mine, req.conf, looks like this:

[req]
default_bits = 4096
prompt = no
default_md = sha256
req_extensions = req_ext
distinguished_name = dn
 
[ dn ]
C=US
ST=New Jersey
CN = drjohnstechtalk.com
 
[ req_ext ]
subjectAltName = @alt_names
 
[ alt_names ]
DNS.1 = drjohnstechtalk.com
DNS.2 = johnstechtalk.com
IP.3 = 50.17.188.196

Note this shows a way to combine IP address with a FQDN in the SAN. I’m not sure public CAs will permit IPs. I most commonly work with a private PKI which definitely does, however.

Then you run openssl like this, referring to your config file (updated for 2022; in the past we used 2048-bit keys but we are moving to 4096):
$ openssl req -new -nodes -newkey rsa:4096 -keyout mykey.key -out myreq.csr -config req.conf

This creates the private key and CSR in one go. Note that it’s recommended to repeat your common name (CN) in one of the alternative names so that’s what I did.

Let’s examine it to be sure it contains the alternative names:

$ openssl req -text -in myreq.csr

Certificate Request:
    Data:
        Version: 0 (0x0)
        Subject: C=US, ST=New Jersey, CN=drjohnstechtalk.com
        ...
        Attributes:
        Requested Extensions:
            X509v3 Subject Alternative Name:
                DNS:drjohnstechtalk.com, DNS:johnstechtalk.com, DNS:www.drjohnstechtalk.com, DNS:www.johnstechtalk.com
    Signature Algorithm: sha256WithRSAEncryption
         2a:ea:38:b7:2e:85:6a:d2:cf:3e:28:13:ff:fd:99:05:56:e5:
         ...

Looks good!

SAN on an Intranet with a private PKI infrastructure including an IP address
On an Intranet you may want to access a web site by IP as well as by name, so if your private PKI permits, you can create a CSR with a SAN which covers all those possibilities. The SAN line in the certificate will look like this example:

DNS:drjohnstechtalk.com, IP:10.164.80.53, DNS:johnstechtalk.com, DNS:www.drjohnstechtalk.com, DNS:www.johnstechtalk.com

Note that additional IP:10… with my server’s private IP? That will never fly with an Internet CA, but might be just fine and useful on a corporate network. The advice is to not put the IP first, however. Some PKIs will not accept that. So I put it second.


Create a simple CSR and private key

$ openssl req -new -nodes -out myreq.csr

This prompts you to enter values for the country code, state and organization name. As a private individual, I am entering drjohnstechtalk.com for organization name – same as my common name. Hopefully this will be accepted.

Look at a certificate and certificate chain of any server running SSL

$ openssl s_client -showcerts -connect host:port

Cool shortcut to fetch certificate from any web server and examine it with one command line

$ echo|openssl s_client -servername drjohnstechtalk.com -connect drjohnstechtalk.com:443|openssl x509 -text

Alternate single command line to fetch and examine in one go

$ openssl s_client -servername drjohnstechtalk.com -connect drjohnstechtalk.com:443</dev/null|openssl x509 -text

In fact the above commands are so useful to me I invented this bash function to save all that typing. I put this in my ~/.alias file (or .bash_aliases, depending on the OS):

# functions
# to unset a function: unset -f foo; to see the definition: type -a foo
certexamine () { echo|openssl s_client -servername "$@" -connect "$@":443|openssl x509 -text|more; }
# examinecert () { echo|openssl s_client -servername "$@" -connect "$@":443|openssl x509 -text|more; }
examinecert () { str=$*;echo $str|grep -q : ;res=$?;if [ "$res" -eq "0" ]; then fqdn=$(echo $str|cut -d: -f1);else fqdn=$str;str="$fqdn:443";fi;openssl s_client  -servername $fqdn -connect $str|openssl x509 -text|more; }

2025 update

Examinecert is now a little script which takes an optional first argument -v to test certificate verification. Here is the script.

#!/bin/bash
verify=0
if [[ " $* " == *" -v "* ]]; then
  url=$2
  verify=1
else
  url=$1
fi
fqdn=$(echo $url|cut -d/ -f3|cut -d: -f1)
connectString=$(echo $url|cut -d/ -f3|sed '/:/! s/$/:443/')
echo fqdn $fqdn connectString $connectString verify $verify;sleep 2
if [ $verify -eq 0 ]; then
  openssl s_client -servername $fqdn -connect $connectString|openssl x509 -text|more
else
  # we are interested to verify the CERT
  openssl s_client -showcerts -verify_depth 2 -connect $connectString -servername $fqdn <<< Q 2>&1|grep Verification
fi

Example usage

examinecert -v self-signed.badssl.com
fqdn self-signed.badssl.com connectString self-signed.badssl.com:443 verify 1
Verification error: self-signed certificate

Older examinecert construct

In a 2023 update, I made examinecert more sophisticated and more complex. Now it accepts an argument like FQDN:PORT. Then to examine a certificate I simply type either

$ examinecert drjohnstechtalk.com

(port 443 is the default), or to specify a non-standard port:

$ examinecert drjohnstechtalk.com:8443

The servername switch in the above commands is not needed 99% of the time, but I did get burned once and actually picked up the wrong certificate by not having it present. If the web server uses Server Name Indication – information which you generally don’t know – it should be present. And it does no harm being there regardless.

Example wildcard certificate
As an aside, want to examine a legitimate wildcard certificate, to see how they filled in the SAN field? Yesterday I did, and found it basically impossible to search for precisely that. I used my wits and recalled that WordPress used a wildcard certificate. I was right. I think one of those ecommerce sites like Shopify might as well. So you can examine make.wordpress.org, and you’ll see the SAN field looks like this:

 X509v3 Subject Alternative Name:
                DNS:*.wordpress.org, DNS:wordpress.org

Verify your certificate chain of your active server

$ openssl s_client -CApath /etc/ssl/certs -verify 2 -connect drjohnstechtalk.com:443

verify depth is 2
CONNECTED(00000003)
depth=3 /C=US/O=The Go Daddy Group, Inc./OU=Go Daddy Class 2 Certification Authority
verify return:1
depth=2 /C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc./CN=Go Daddy Root Certificate Authority - G2
verify return:1
depth=1 /C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc./OU=http://certs.godaddy.com/repository//CN=Go Daddy Secure Certificate Authority - G2
verify return:1
depth=0 /OU=Domain Control Validated/CN=drjohnstechtalk.com
verify return:1
---
Certificate chain
 0 s:/OU=Domain Control Validated/CN=drjohnstechtalk.com
   i:/C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc./OU=http://certs.godaddy.com/repository//CN=Go Daddy Secure Certificate Authority - G2
 1 s:/C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc./OU=http://certs.godaddy.com/repository//CN=Go Daddy Secure Certificate Authority - G2
   i:/C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc./CN=Go Daddy Root Certificate Authority - G2
 2 s:/C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc./CN=Go Daddy Root Certificate Authority - G2
   i:/C=US/O=The Go Daddy Group, Inc./OU=Go Daddy Class 2 Certification Authority
 3 s:/C=US/O=The Go Daddy Group, Inc./OU=Go Daddy Class 2 Certification Authority
   i:/C=US/O=The Go Daddy Group, Inc./OU=Go Daddy Class 2 Certification Authority
---
Server certificate
-----BEGIN CERTIFICATE-----
MIIFTzCCBDegAwIBAgIJAI0kx/8U6YDkMA0GCSqGSIb3DQEBCwUAMIG0MQswCQYD
VQQGEwJVUzEQMA4GA1UECBMHQXJpem9uYTETMBEGA1UEBxMKU2NvdHRzZGFsZTEa
...
SSL-Session:
    Protocol  : TLSv1
    Cipher    : DHE-RSA-AES128-SHA
    Session-ID: 41E4352D3480CDA5631637D0623F68F5FF0AFD3D1B29DECA10C444F8760984E9
    Session-ID-ctx:
    Master-Key: 3548E268ACF80D84863290E79C502EEB3093EBD9CC935E560FC266EE96CC229F161F5EF55DDF9485A7F1BE6C0BECD7EA
    Key-Arg   : None
    Start Time: 1479238988
    Timeout   : 300 (sec)
    Verify return code: 0 (ok)

Wrong way to verify your certificate chain
When you first start out with the verify sub-command you’ll probably do it wrong. You’ll try something like this:

$ openssl s_client -verify 2 -connect drjohnstechtalk.com:443

which will produce these results:

verify depth is 2
CONNECTED(00000003)
depth=3 /C=US/O=The Go Daddy Group, Inc./OU=Go Daddy Class 2 Certification Authority
verify error:num=19:self signed certificate in certificate chain
verify return:0
16697:error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed:s3_clnt.c:983:

Using s_client through a proxy
Yes! Use the -proxy switch, at least with newer openssl implementations.

Using OCSP
I have had limited success so far with Online Certificate Status Protocol (OCSP) verification. But I do have something to provide as an example:

$ openssl ocsp -issuer cert-godaddy-g2.crt -cert crt -no_nonce -no_cert_verify -url http://ocsp.godaddy.com/

Response verify OK
crt: good
        This Update: Nov 15 19:56:52 2016 GMT
        Next Update: Nov 17 07:56:52 2016 GMT

Here I’ve stuffed my certificate into a file called crt and stuffed the intermediate certificate into a file called cert-godaddy-g2.crt. How did I know what URL to use? Well, when I examined the certificate file crt it told me:

$ openssl x509 -text -in crt

...
           Authority Information Access:
                OCSP - URI:http://ocsp.godaddy.com/
...

But I haven’t succeeded running a similar command against certificates used by Google, nor by certificates issued by the CA Globalsign. So I’m clearly missing something there, even though by luck I got the GoDaddy certificate correct.

Check that a particular private key matches a particular certificate
I have to deal with lots of keys and certificates. And certificate re-issues. And I do this for others. Sometimes it gets confusing and I lose track of what goes with what. openssl to the rescue! I find that a matching modulus is pretty much a guarantee that private key and certificate are a match.

Private key – find the modulus example
$ openssl rsa -modulus -noout -in key

Modulus=BADD4167E98A1B51B3F40EF3A0F5E2AC268F37BAC45388A401FB677CEA240CD3530D39B81A450DF061B1145AFA9B00718EF4DBB3E552D5D999C577A6424706782DCB4426D2E7A9615BBC90CED300AD91F63E0E0EA9B4B2D24649CFD44E9735FA7E91EEC939A5B1D8667ADD62CBD15EB01BE0E03EC7532ACEE621386FBADF0161183AB5BDD94D1CFB8A2D5F6B38178A897DB380DC90CEA64C1F149F4B38E845C6C933CBF8F123B1DC411EA2A238B9D9704A43D17F67561F6D4821B721484C6785385BF03CADD91B5F4BD5F9B36F478E74BCAE16B171E3E4AFE3F6C388EA849D792B5C94BD5D279572C8713369D909711FBF0C2B3053380668A2774AFC00F8C911

Public key – find the modulus example
$ openssl x509 -modulus -noout -in crt

Modulus=BADD4167E98A1B51B3F40EF3A0F5E2AC268F37BAC45388A401FB677CEA240CD3530D39B81A450DF061B1145AFA9B00718EF4DBB3E552D5D999C577A6424706782DCB4426D2E7A9615BBC90CED300AD91F63E0E0EA9B4B2D24649CFD44E9735FA7E91EEC939A5B1D8667ADD62CBD15EB01BE0E03EC7532ACEE621386FBADF0161183AB5BDD94D1CFB8A2D5F6B38178A897DB380DC90CEA64C1F149F4B38E845C6C933CBF8F123B1DC411EA2A238B9D9704A43D17F67561F6D4821B721484C6785385BF03CADD91B5F4BD5F9B36F478E74BCAE16B171E3E4AFE3F6C388EA849D792B5C94BD5D279572C8713369D909711FBF0C2B3053380668A2774AFC00F8C911

The key and certificate were stored in files called key and crt, respectively. Here the modulus has the same value so key and certificate match. Their values are random, so you only need to match up the first eight characters to have an extremely high confidence level that you have a correct match.
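With moduli this long, it’s easier on the eyes to hash each one and compare the hashes – a small sketch:

$ openssl rsa -modulus -noout -in key|md5sum
$ openssl x509 -modulus -noout -in crt|md5sum

If the two md5 sums agree, the key and certificate match.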

Generate a simple self-signed certificate
$ openssl req -x509 -nodes -newkey rsa:2048 -keyout key.pem -out cert.pem -days 365

Generating a 2048 bit RSA private key
..........+++
.................+++
writing new private key to 'key.pem'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:US
State or Province Name (full name) []:New Jersey
Locality Name (eg, city) [Default City]:.
Organization Name (eg, company) [Default Company Ltd]:.
Organizational Unit Name (eg, section) []:
Common Name (eg, your name or your server's hostname) []:drjohnstechtalk.com
Email Address []:

Note that for the fields I wished to blank out I entered a ".".

Did I get what I expected? Let’s examine it:

$ openssl x509 ‐text ‐in cert.pem|more

Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 16616841832876401013 (0xe69ae19b7172e175)
    Signature Algorithm: sha1WithRSAEncryption
        Issuer: C=US, ST=New Jersey, CN=drjohnstechtalk.com
        Validity
            Not Before: Aug 15 14:11:08 2017 GMT
            Not After : Aug 15 14:11:08 2018 GMT
        Subject: C=US, ST=NJ, CN=drjohnstechtalk.com
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    00:d4:da:23:34:61:60:f0:57:f0:68:fa:2f:25:17:
...

Hmm. It’s only sha1 which isn’t so great. And there’s no Subject Alternative Name. So it’s not a very good CERT.

Create a better self-signed CERT
$ openssl req -x509 -sha256 -nodes -newkey rsa:2048 -keyout key.pem -out cert.pem -days 365

That one is SHA2:

...
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=US, ST=New Jersey, CN=drjohnstechtalk.com
...

365 days is arbitrary. You can specify a shorter or longer duration.

If you want the self-signed CERT to include a Subject Alternative Name, create a config file like req.conf above and refer to it with a -config argument in your openssl req command.

Listing ciphers
Please see this post.

Fetching the certificates from an SMTP server running TLS

$ openssl s_client -starttls smtp -connect <MAIL_SERVER>:25 -crlf
That’s a good one because it’s hard to do these steps by hand.

Working with Java keytool for Tomcat certificates
This looks really daunting at first. Where do you even start? I recently found the answer. Digicert has a very helpful page which generates the keytool command line you need to create your CSR and provides lots of installation advice. At first I was skeptical and thought you could not trust a third party to have your private key, but it doesn’t work that way at all. It’s just a complex command-line generator that you plug into your own command line. You know, the whole

$ keytool -genkey -alias drj.com -keyalg RSA -keystore drj.jks -dname "CN=drj.com, O=johnstechtalk, ST=NJ, C=US" …

Here’s the Digicert command line generator page.

Another good tool that provides a free GUI replacement for the Java command-line utilities keytool, jarsigner and jadtool is Keystore Explorer.

List info about all the certificates in a certificate bundle

openssl storeutl -noout -text -certs cacert.pem |egrep 'Issuer:|Subject:'|more

Appendix A, Certificate Fingerprints
You may occasionally see a reference to a certificate fingerprint. What is it and how do you find your certificate’s fingerprint?

Turns out it’s not that obvious.

Above we showed the very useful command

openssl x509 -text -in <CRT-file>

and the results from that look very thorough, as though this is everything there is to know about the certificate. In fact I thought that for years, but it turns out it doesn’t show the fingerprint!

A great discussion on this topic is https://security.stackexchange.com/questions/46230/digital-certificate-signature-and-fingerprint#46232

But I want to repeat the main points here.

The fingerprint is the hash of the certificate file, but in its raw, 8-bit form. You can choose the hash algorithm and learn the fingerprint with the following openssl commands:

$ openssl x509 -in <CRT-file> -fingerprint -sha1 (for getting the SHA1 fingerprint)

similarly, to obtain the sha256 or md5 fingerprint you would do:

$ openssl x509 -in <CRT-file> -fingerprint -sha256

$ openssl x509 -in <CRT-file> -fingerprint -md5

Now, you wonder, I know about these useful hash commands from Linux:

sha1sum, sha256sum, md5sum

what is the relationship between these commands and what openssl returns? How do I run the Linux commands and get the same results?

It turns out this is indeed possible. But not that easy unless you know advanced sed trickery and have a uudecode program. I have uudecode on SLES, but not on CentOS. I’m still trying to unpack what this sed command really does…

The certificate files we normally deal with (PEM format) are encoded versions of raw data. uudecode can be used to obtain the raw data version of the certificate file like this:

$ uudecode < <(
sed '1s/^.*$/begin-base64 644 www.google.com.raw/;
$s/^.*$/====/' www.google.com.crt
)

This example is for an input certificate file called www.google.com.crt. It creates a raw data version of the certificate file called www.google.com.raw.

Then you can run your sha1sum on www.google.com.raw. It will be the same result as running

$ openssl x509 -in www.google.com.crt -fingerprint -sha1

!

So that shows the fingerprint is a hash of the entire certificate file. Who knew?
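By the way, a less exotic route to the same raw form: openssl can emit the certificate’s underlying DER encoding directly, which is exactly what the fingerprint is computed over. This should match the -fingerprint -sha1 output, minus the colons:

$ openssl x509 -in www.google.com.crt -outform DER|sha1sum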

Appendix B
To find out more about a particular subcommand:

openssl <subcommand> help

e.g.,

$ openssl s_client help

Conclusion
Some useful openssl commands are documented here. A way to grapple with keytool for Tomcat certificates is also shown as a bonus.

References and related
Probably a better site with similar but more extensive openssl commands: https://www.sslshopper.com/article-most-common-openssl-commands.html

Digicert’s tool for working with keytool.
GUI replacement for keytool, etc; Keystore Explorer.

The only decent explanation of certificate fingerprints I know of: https://security.stackexchange.com/questions/46230/digital-certificate-signature-and-fingerprint#46232

Server Name Indication is described in this blog post.

I’m only providing this link here as an additional reminder that this is one web site where you’ll find a legitimate wildcard certificate: https://make.wordpress.org/ Otherwise it can be hard to find one. Clearly people don’t want to advertise the fact that they’re using them.