Categories
Admin Linux

Relative time with the linux date command

The situation
A server in Europe needs to fetch a log file, which is written every hour, from a server in the US. The filename format is
20171013-1039.log.gz
And we want the transfer to be done every hour.

How we did it
I learned something about the date command. I wanted to do date arithmetic, like calculate the previous hour. I’ve only ever done this in Perl. Then I saw how someone did it within a bash script.

First the timezone

export TZ=America/New_York

sets the timezone to that of the server which is writing the log files. This is important.

Then get the previous hour
$ onehourago=`date --date='1 hour ago' '+%Y%m%d-%H'`

That’s it!

Then the ftp command looks like
$ get $onehourago

If we needed the log from two hours ago we would have had

$ twohourago=`date --date='2 hours ago' '+%Y%m%d-%H'`

If one day ago

$ onedayago=`date --date='1 day ago' '+%Y%m%d-%H'`

One hour from now

$ onehourfromnow=`date --date='+1 hour' '+%Y%m%d-%H'`

etc.
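Putting it all together, here is a minimal sketch of the hourly transfer job. The host, credentials and exact ftp client invocation are made up for illustration; adapt them to your environment:

#!/bin/bash
# fetch the log written during the previous hour (using the US server's clock)
export TZ=America/New_York
onehourago=`date --date='1 hour ago' '+%Y%m%d-%H'`
# non-interactive ftp session; mget globs over the minutes part of the filename
ftp -n us-logserver.example.com <<EOF
user loguser secretpassword
binary
prompt
mget ${onehourago}*.log.gz
bye
EOF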

Why the timezone setting?
Initially I skipped the timezone setting and simply used 7 hours ago, given that Europe and New York are six hours apart plus the one-hour lag, and that works 95% of the time. But because Daylight Saving Time starts and ends on different dates on the two continents, it produces bad results for a few weeks each year. So it’s cleaner to simply switch the timezone before doing the date arithmetic.
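You can convince yourself of the effect right on the command line. The two outputs below normally differ by six hours, but only by five during the few weeks when the two continents’ DST clocks are out of step:

$ TZ=America/New_York date '+%Y%m%d-%H'
$ TZ=Europe/Berlin date '+%Y%m%d-%H'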

Conclusion
The linux date command has more features than I thought. We’ve shown how to create some relative dates.

References and related
On a linux system
$ info date
will give you more information and lots of examples to work from.

Categories
Admin Linux Security

Fail2ban fails to work, I built my own

Intro
I’ve sung the praises of fail2ban as a modern way to shut down those annoying probes of your cloud server. I recently got to work with a Red Hat 7.4 system, much newer than my old CentOS 6 server. And fail2ban failed to work at all! Instead of the usual extensive debugging I just wrote my own. I’m sharing it here.

The details
I have a bare-bones RHEL 7.4 system. A yum search fail2ban does not find that package. Supposedly you simply need to add the EPEL repository to make that package available, but the recipe for doing that is not obvious. So I got the source for fail2ban and built it. Although it runs, you gotta build a local jail to block ssh attempts, and that’s where it fails. So instead of going down that rabbit hole (I was already in too deep), I said to heck with it and built my own.

All I really wanted was to ban IPs which are hitting my sshd server endlessly, often once per second or more. I take it personally.

RHEL 7 has a new firewall concept, firewalld. It’s all new to me and I don’t want to go down that rabbit hole either, at least not right now. So I rely on that old standby of mine: cut off an attacker by making an invalid route to his IP address, along the lines of

$ route add -host <attacker-ip> gw 127.0.0.1

And voila, they can no longer establish a TCP connection. It’s not quite as good as a firewall rule because their source UDP packets could still get through, but come on, we don’t need to be purists. And furthermore, in practice it produces the desired behavior: it stops the ssh dictionary attacks cold.
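As an aside, on a system with the iproute2 tools the same trick can be expressed more directly as a blackhole route. I didn’t use this, but it’s a drop-in alternative (the IP is just an example):

$ ip route add blackhole 203.0.113.45/32
$ ip route del blackhole 203.0.113.45/32   # to undo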

I knocked this out in one night, avoiding the rabbit hole of “fixing” fail2ban. So I got to use the old stuff I know so well: perl and stupid little tricks. I call it drjfail2ban.

#!/bin/perl
# suppress IPs with failed logins
# DrJ - 2017/10/07
$DEBUG = 0;
$sleep = 30;
$cutoff = 3;
$headlines = 60;
@goodusers =("drjohn1","user57");
%blockedips = ();
while(1) {
#  $time = `date +%Y%m%d%H%M%S`;
  main();
  sleep($sleep);
}
 
sub main() {
if ($DEBUG) {
  for $ips (keys %blockedips) {
    print "blocked ip: $ips "
  }
}
# man last shows what this means: -i forces IP to be displayed, etc.
open(LINES,"last -$headlines -i -f /var/log/btmp|") || die "Problem with running last -f btmp!!\n";
# output:
#ubnt     ssh:notty    185.165.29.197   Sat Oct  7 19:30    gone - no logout
while(<LINES>) {
  ($user,$ip) = /^(\S+)\s+\S+\s+(\S+)/;
  print "user,ip: $user,$ip\n" if $DEBUG;
  next if $blockedips{$ip};
#we can't handle hostnames right now
  next if $ip =~ /[a-z]/i;
  $candidateips{$ip} += 1;
  $bannedusers{$ip} = $user;
}
for (keys %candidateips) {
  $ip = $_;
# allow my usual source IPs without blocking...
  next if $ip =~ /^(50\.17\.188\.196|51\.29\.208\.176)/;
  next if $blockedips{$ip};
  $usr = $bannedusers{$ip};
  $ipct = $candidateips{$ip};
  print "ip, usr, ipct: $ip, $usr, $ipct\n" if $DEBUG;
# block
  $block = 1;
  for $gu (@goodusers) {
    print "gu: $gu\n" if $DEBUG;
    $block = 0 if $usr eq $gu;
  }
  if ($block) {
# more tests: persistence of attempt
    $hitcnt = $candidateips{$ip};
    if ($hitcnt < $cutoff) {
# do not block and reset counter for next go-around
      print "Not blocking ip $ip and resetting counter\n" if $DEBUG;
      $candidateips{$ip} = 0;
    } else {
      $blockedips{$ip} = 1;
      print "Blocking ip $ip with hit count $hitcnt at " . `date`;
# prevent further communication...
      system("route add -host $ip gw 127.0.0.1");
    }
  }
  #print "route add -host $ip gw 127.0.0.1\n";
}
close(LINES);
} # end main function

Highlights from the program
The comments are pretty self-explanatory. Just a note about the philosophy. I fear making a goof and locking myself out! So I was conservative and try not to do any blocking if the source IP matches one of my favored source IPs, or if the user matches one of my usual usernames like drjohn1. I use obscure userids, and the hackers try the stupid stuff like root, admin, etc. So they may be dictionary attacking the password, but they certainly aren’t dictionary attacking the username!

I don’t mind wiping the slate clean of all created routes after a server reboot, so I only plan to run this from the command line. To make it persistent until the next reboot you just run it from the root account like so (let’s say we put it in /usr/local/sbin):

$ nohup /usr/local/sbin/drjfail2ban > /var/log/drjfail2ban &

And it just sits there and runs, even after you log out.

Results
Since it hasn’t been running for long, I can only provide a partial log file as of this publication.

Blocking ip 103.80.117.74 with hit count 6 at Sun Oct  8 17:34:43 CEST 2017
SIOCADDRT: File exists
Blocking ip 89.176.96.45 with hit count 5 at Sun Oct  8 17:34:43 CEST 2017
SIOCADDRT: File exists
Blocking ip 31.162.51.206 with hit count 3 at Sun Oct  8 17:34:43 CEST 2017
SIOCADDRT: File exists
Blocking ip 218.95.142.218 with hit count 6 at Sun Oct  8 17:34:43 CEST 2017
SIOCADDRT: File exists
Blocking ip 202.168.8.54 with hit count 5 at Sun Oct  8 17:34:43 CEST 2017
SIOCADDRT: File exists
Blocking ip 13.94.29.182 with hit count 4 at Sun Oct  8 17:34:43 CEST 2017
SIOCADDRT: File exists
Blocking ip 40.71.185.73 with hit count 4 at Sun Oct  8 17:34:43 CEST 2017
SIOCADDRT: File exists
Blocking ip 77.72.85.100 with hit count 13 at Sun Oct  8 17:34:43 CEST 2017
SIOCADDRT: File exists
Blocking ip 201.180.104.63 with hit count 7 at Sun Oct  8 17:34:43 CEST 2017
SIOCADDRT: File exists
Blocking ip 121.14.27.58 with hit count 4 at Sun Oct  8 17:40:43 CEST 2017
Blocking ip 36.108.234.99 with hit count 6 at Sun Oct  8 17:47:13 CEST 2017
Blocking ip 185.165.29.69 with hit count 6 at Sun Oct  8 18:02:43 CEST 2017
Blocking ip 190.175.40.195 with hit count 6 at Sun Oct  8 19:05:43 CEST 2017
Blocking ip 139.199.167.21 with hit count 4 at Sun Oct  8 19:29:13 CEST 2017
Blocking ip 186.60.67.51 with hit count 5 at Sun Oct  8 20:49:14 CEST 2017

And what my route table looks like currently:

$ netstat -rn|grep 127.0.0.1

Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
2.177.217.155   127.0.0.1       255.255.255.255 UGH       0 0          0 lo
13.94.29.182    127.0.0.1       255.255.255.255 UGH       0 0          0 lo
31.162.51.206   127.0.0.1       255.255.255.255 UGH       0 0          0 lo
36.108.234.99   127.0.0.1       255.255.255.255 UGH       0 0          0 lo
37.204.23.84    127.0.0.1       255.255.255.255 UGH       0 0          0 lo
40.71.185.73    127.0.0.1       255.255.255.255 UGH       0 0          0 lo
42.7.26.15      127.0.0.1       255.255.255.255 UGH       0 0          0 lo
46.6.60.240     127.0.0.1       255.255.255.255 UGH       0 0          0 lo
59.16.74.234    127.0.0.1       255.255.255.255 UGH       0 0          0 lo
77.72.85.100    127.0.0.1       255.255.255.255 UGH       0 0          0 lo
89.176.96.45    127.0.0.1       255.255.255.255 UGH       0 0          0 lo
103.80.117.74   127.0.0.1       255.255.255.255 UGH       0 0          0 lo
109.205.136.10  127.0.0.1       255.255.255.255 UGH       0 0          0 lo
113.195.145.13  127.0.0.1       255.255.255.255 UGH       0 0          0 lo
118.32.27.85    127.0.0.1       255.255.255.255 UGH       0 0          0 lo
121.14.27.58    127.0.0.1       255.255.255.255 UGH       0 0          0 lo
139.199.167.21  127.0.0.1       255.255.255.255 UGH       0 0          0 lo
162.213.39.235  127.0.0.1       255.255.255.255 UGH       0 0          0 lo
176.50.95.41    127.0.0.1       255.255.255.255 UGH       0 0          0 lo
176.209.89.99   127.0.0.1       255.255.255.255 UGH       0 0          0 lo
181.113.82.213  127.0.0.1       255.255.255.255 UGH       0 0          0 lo
185.165.29.69   127.0.0.1       255.255.255.255 UGH       0 0          0 lo
185.165.29.197  127.0.0.1       255.255.255.255 UGH       0 0          0 lo
185.165.29.198  127.0.0.1       255.255.255.255 UGH       0 0          0 lo
185.190.58.181  127.0.0.1       255.255.255.255 UGH       0 0          0 lo
186.57.12.131   127.0.0.1       255.255.255.255 UGH       0 0          0 lo
186.60.67.51    127.0.0.1       255.255.255.255 UGH       0 0          0 lo
190.42.185.25   127.0.0.1       255.255.255.255 UGH       0 0          0 lo
190.175.40.195  127.0.0.1       255.255.255.255 UGH       0 0          0 lo
193.201.224.232 127.0.0.1       255.255.255.255 UGH       0 0          0 lo
201.180.104.63  127.0.0.1       255.255.255.255 UGH       0 0          0 lo
201.255.71.14   127.0.0.1       255.255.255.255 UGH       0 0          0 lo
202.100.182.250 127.0.0.1       255.255.255.255 UGH       0 0          0 lo
202.168.8.54    127.0.0.1       255.255.255.255 UGH       0 0          0 lo
203.190.163.125 127.0.0.1       255.255.255.255 UGH       0 0          0 lo
213.186.50.82   127.0.0.1       255.255.255.255 UGH       0 0          0 lo
218.95.142.218  127.0.0.1       255.255.255.255 UGH       0 0          0 lo
221.192.142.24  127.0.0.1       255.255.255.255 UGH       0 0          0 lo

Here’s a partial listing of the many failed logins, just to keep it real:

...
root     ssh:notty    190.175.40.195   Sun Oct  8 19:05 - 19:28  (00:23)
root     ssh:notty    190.175.40.195   Sun Oct  8 19:05 - 19:05  (00:00)
root     ssh:notty    190.175.40.195   Sun Oct  8 19:05 - 19:05  (00:00)
root     ssh:notty    190.175.40.195   Sun Oct  8 19:05 - 19:05  (00:00)
root     ssh:notty    190.175.40.195   Sun Oct  8 19:05 - 19:05  (00:00)
root     ssh:notty    190.175.40.195   Sun Oct  8 19:05 - 19:05  (00:00)
admin    ssh:notty    185.165.29.69    Sun Oct  8 18:02 - 19:05  (01:02)
admin    ssh:notty    185.165.29.69    Sun Oct  8 18:02 - 18:02  (00:00)
admin    ssh:notty    185.165.29.69    Sun Oct  8 18:02 - 18:02  (00:00)
admin    ssh:notty    185.165.29.69    Sun Oct  8 18:02 - 18:02  (00:00)
root     ssh:notty    185.165.29.69    Sun Oct  8 18:02 - 18:02  (00:00)
root     ssh:notty    185.165.29.69    Sun Oct  8 18:02 - 18:02  (00:00)
root     ssh:notty    185.165.29.69    Sun Oct  8 18:02 - 18:02  (00:00)
root     ssh:notty    36.108.234.99    Sun Oct  8 17:47 - 18:02  (00:15)
root     ssh:notty    36.108.234.99    Sun Oct  8 17:47 - 17:47  (00:00)
root     ssh:notty    36.108.234.99    Sun Oct  8 17:47 - 17:47  (00:00)
root     ssh:notty    36.108.234.99    Sun Oct  8 17:47 - 17:47  (00:00)
root     ssh:notty    36.108.234.99    Sun Oct  8 17:47 - 17:47  (00:00)
root     ssh:notty    36.108.234.99    Sun Oct  8 17:46 - 17:47  (00:00)
ubuntu   ssh:notty    121.14.27.58     Sun Oct  8 17:40 - 17:46  (00:06)
ubuntu   ssh:notty    121.14.27.58     Sun Oct  8 17:40 - 17:40  (00:00)
aaaaaaaa ssh:notty    121.14.27.58     Sun Oct  8 17:40 - 17:40  (00:00)
aaaaaaaa ssh:notty    121.14.27.58     Sun Oct  8 17:40 - 17:40  (00:00)
root     ssh:notty    206.71.63.4      Sun Oct  8 17:34 - 17:40  (00:06)
root     ssh:notty    206.71.63.4      Sun Oct  8 17:34 - 17:34  (00:00)
root     ssh:notty    89.176.96.45     Sun Oct  8 16:15 - 17:34  (01:19)
root     ssh:notty    89.176.96.45     Sun Oct  8 16:15 - 16:15  (00:00)
root     ssh:notty    89.176.96.45     Sun Oct  8 16:15 - 16:15  (00:00)
root     ssh:notty    89.176.96.45     Sun Oct  8 16:15 - 16:15  (00:00)
...

Before running drjfail2ban it was much more obnoxious, with the same IP hitting my server every second or so.

Conclusion
I found it easier to roll my own than battle someone else’s errors. It’s kind of fun for me to create these little scripts. I don’t care if anyone else uses them. I will refer to this post myself and probably re-use it elsewhere!

References and related
In an earlier time, I was singing the praises of fail2ban on CentOS.

Categories
Popular Science

Eclipse pics: just me and my cheese grater

Intro
I was at home during the great eclipse of August 2017. I didn’t buy the special viewing glasses. I remembered the general advice that anything with holes in it would show off the shape of the moon covering the sun by its shadow.

Initially I tried paper with three holes from a hole punch. Results were OK but not that impressive.

I was idly pulling dishes out of the dishwasher when I came across our cheese grater. Lots and lots of holes all over. This was the thing!

Everyone I’ve shown these two pictures to has loved the novelty.

The setup
I’m standing just inside the garage – nice smooth surface for shadows – by the doorway. I find a way to simultaneously hold my phone and the cheese grater, and (mostly) get sufficiently out of the way that the cheese grater’s shadow is cast unobstructed onto the garage floor. It was a little tricky.

I’ll present the full picture and then the shadow of the cheese grater blown up, at two different times: 2:27 PM and 2:41 PM EDT.

And, yes, the holes in the grater are actually round, and the normal shadow would be full of round circles of light where the sun passes through the grater’s holes!

It was a hot day. I could feel the air cool down during the eclipse! And cicadas started their shrill cry.

2:41 PM pics

Click on any of these pictures to see it blown up.

2:41 PM eclipse shadow pic

And focusing on the cheese grater’s shadow:

Cheese grater shadow during the eclipse

2:27 PM pics
This time is probably closer to the greatest coverage of the moon over the sun.

Eclipse full picture
Eclipse cheese grater pic

Conclusion
A common cheese grater gives us a good idea of what the sun looked like during the eclipse, and creates something marvelous at the same time.

Discussion – 2022 update

The 2017 eclipse was a blown opportunity for citizen astronomers to contribute to one of the most difficult-to-measure astronomical metrics: the bending of nearby starlight by the sun. Believe it or not, the eclipse measurements of this effect from 1919, which proved Einstein’s prediction about gravity’s effect on the path of light, are still about the most precise we have. And they’re not very good! Unlike everything else we measure through scientific experiments, this one didn’t improve with time.

References and related

The book No Shadow of a Doubt discusses how they (Eddington and Dyson) measured the bending of starlight by the sun during the 1919 solar eclipse, and all the problems with those attempts.

Categories
Linux Security

Verifying a pkcs12 file with openssl

Intro
The easy way

How to examine a pkcs12 (pfx) file

$ openssl pkcs12 -info -in file_name.pfx
It will prompt you for the password a total of three times!
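While we’re at it, a couple of related one-liners I find handy, using standard openssl pkcs12 options, for pulling the pieces out of the pfx file separately:

$ openssl pkcs12 -in file_name.pfx -nokeys -clcerts -out cert.pem
$ openssl pkcs12 -in file_name.pfx -nocerts -nodes -out key.pem

The first writes just the client certificate; the second writes just the private key, unencrypted thanks to -nodes.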

The hard way
I went through this whole exercise because I originally could not find the easy way!!!

Get the source for openssl.

Look for pkread.c. Mine is in /usr/local/src/openssl/openssl-1.1.0f/demos/pkcs12.

Compile it.

My first pass:

$ gcc -o pkread pkread.c

/tmp/cclhy4wr.o: In function `sk_X509_num':
pkread.c:(.text+0x14): undefined reference to `OPENSSL_sk_num'
/tmp/cclhy4wr.o: In function `sk_X509_value':
pkread.c:(.text+0x36): undefined reference to `OPENSSL_sk_value'
/tmp/cclhy4wr.o: In function `main':
pkread.c:(.text+0x93): undefined reference to `OPENSSL_init_crypto'
pkread.c:(.text+0xa2): undefined reference to `OPENSSL_init_crypto'
pkread.c:(.text+0x10a): undefined reference to `d2i_PKCS12_fp'
pkread.c:(.text+0x154): undefined reference to `ERR_print_errors_fp'
pkread.c:(.text+0x187): undefined reference to `PKCS12_parse'
pkread.c:(.text+0x1be): undefined reference to `ERR_print_errors_fp'
pkread.c:(.text+0x1d4): undefined reference to `PKCS12_free'
pkread.c:(.text+0x283): undefined reference to `PEM_write_PrivateKey'
pkread.c:(.text+0x2bd): undefined reference to `PEM_write_X509_AUX'
pkread.c:(.text+0x320): undefined reference to `PEM_write_X509_AUX'
collect2: ld returned 1 exit status

Corrected compile command
$ gcc -o pkread pkread.c -I/usr/local/include -L/usr/local/lib64 -lssl -lcrypto

Note that this works for me because I put my ssl and crypto libraries in that directory. Your installation may have put them elsewhere:

$ ll /usr/local/lib64/*.a

-rw-r--r--. 1 root root 4793162 Aug 16 15:30 /usr/local/lib64/libcrypto.a
-rw-r--r--. 1 root root  765862 Aug 16 15:30 /usr/local/lib64/libssl.a

References and related
My favorite openssl commands.

Categories
CentOS Hosting Service

Your AWS Instance was scheduled for retirement? Don’t panic

Intro
After nearly four years of continuously running my AWS instance I got this scary email:

What to do?

The details
Since I never developed much AWS expertise (never needed to since it just worked) I was afraid to do anything. That’s sort of why I had kept it running for three and a half years – the last time I had to stop it didn’t work out so well.

Some terms
It helps to review the terms.
image – that’s like the OS. It has a unique identifier. Mine is ami-03559b6a.
instance – that’s a particular image running on a particular virtual server, identified by a unique number. Mine is i-1737a673.
retired image – the owner of the image decided to no longer make it available for new instances

What it all means
I run a retired image, so for instance I can’t right-click my instance and:

– launch another like this

What I did to keep my instance running
I didn’t! Before the retirement deadline I stopped my instance. That is a painful process because in my case it takes hours. The server becomes unavailable quickly enough, but the status stays stuck in the shutting down state for at least a couple of hours. But, eventually, it does shut down.

Then, I start it again. That’s it!

When it starts, AWS puts it on different hardware, etc, so I guess literally it is a different instance now, running the same image. I re-associate my elastic IP, and all is good.

So when the “retirement” date came along, there was no outage of my instance as I had already stopped/started it and that was all that was needed.
Amazon’s documentation – as good as it is – isn’t that clear on this point, hence this blog posting…
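For those who prefer the command line, here is a sketch of the same stop/start cycle with the AWS CLI, assuming it’s installed and configured (the associate-address form shown is the EC2-Classic one; VPC accounts use --allocation-id instead):

$ aws ec2 stop-instances --instance-ids i-1737a673
$ aws ec2 wait instance-stopped --instance-ids i-1737a673
$ aws ec2 start-instances --instance-ids i-1737a673
$ aws ec2 associate-address --instance-id i-1737a673 --public-ip 50.17.188.196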

Side preparations
In case I couldn’t restart my image I had taken snapshots of my EBS volumes, and prepared to run Amazon Linux, which looks pretty similar to CentOS which is what I run. But, boy, learning about VPC and routing was a pain. I had to set all that up and gain at least a rudimentary understanding of all that. None of that existed six years ago when I started out! It was much simpler back then.

What it looks like

To make things concrete, here is my view on the AWS admin portal of my instances.

Conclusion
Having your Amazon AWS instance retired is not as scary as it initially sounds. Basically, just stop and start it yourself and you’ll be fine.

References and Related

(2024 update) Reserved Instances are now passé! I recently began using AWS Savings Plans, which offer more flexibility.

Want to know why you should be getting an AWS reserved instance to save money? This is a great article, like a For Dummies article, on the topic.

Categories
Raspberry Pi

Getting your micro SD card ready to load Raspbian: easier than you thought

Intro
Configuring your own micro SD card in order to install Raspbian on a Raspberry Pi is not so hard. Some of the instructions out there are a bit dated and make it out to be harder than it really is.

2021 update

Nowadays you ought to use the tool they finally created just for this purpose, the Raspberry Pi Imager: Introducing Raspberry Pi Imager, our new imaging utility – Raspberry Pi

Old information from 2017 follows…

The details
For instance this site has some extra steps you don’t need: http://elinux.org/RPi_Easy_SD_Card_Setup.

I’d stick with the simplest possible approach, which turns out to be this set of instructions: https://diyhacking.com/install-raspbian-raspberry-pis-sd-card/

But all these instructions seem to refer to an IMG file which I don’t even see. The main thing is to download NOOBS (new out-of-box software) from https://www.raspberrypi.org/downloads/ .

Then, get the SD card formatter. But the latest version is 5, not 4, and it looks different from before – there are essentially no options!

SD Card Formatter

So go with Quick Format and it works out OK. Unless your SD card is used. Then choose Overwrite format. That also works but takes a lot longer.

Then there’s the step about copying the image file, which makes no sense with NOOBS because there is no visible image file, I think. Just extract all the files from the NOOBS zip file and copy them over to your E: drive, or whatever drive your SD card appears as.

Then follow the instructions on your Ras Pi display.

That’s it! I know because I just did it.

(non-)Reliability of SD Card
For the record, I’m in this situation because my old micro SD card just died, after running continuously for a little over two years. Not very impressive in my book. Also for the record, the card came as part of a CanaKit.

Symptoms of SD card failure in my case:

– boot pauses, then after 120 seconds spits out some warnings about MMC something or other.
– LED status light solid green

A word about NOOBS and Balena Etcher
Note that the Etcher people were a bit lazy and refuse to support burning NOOBS to an SD card with Etcher! To repeat: Etcher and NOOBS are incompatible. The stated reason is that NOOBS is not a true image.

A word about downloading from https://www.raspberrypi.org/downloads/
Today my PC just was not up to the task of downloading the full NOOBS zip file. It got to about 800 MB and then kept saying Failed. I found I could restart it, and that would download another 10 MB or so before failing again. This was getting pretty boring so I simply went to a Raspberry Pi and downloaded it from the command line using wget. No problems…

I suspect that my PC’s AV software was running amok and interfering with this download. I haven’t messed with disabling it in a while (the usual prescription), so I used the Ras Pi itself. Then I did an sftp from my PC to the Ras Pi to get the downloaded image. That was also unusually slow, but it did go through, eventually.
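For the record, wget can also resume a partial download with its -c switch, which would have spared me all those restarts. Something like this (the NOOBS_latest shortcut URL existed at the time of writing; verify before relying on it):

$ wget -c https://downloads.raspberrypi.org/NOOBS_latest -O noobs.zip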

References and related

Just go to this page to learn about and download the Raspberry Pi Imager which works for Windows or Mac: Introducing Raspberry Pi Imager, our new imaging utility – Raspberry Pi

All about Micro SD cards, their ratings and how to measure your own micro SD card’s performance: Raspberry Pi SD Card Speed Test – Raspberry Pi

Older references from 2017
This is probably the best guide, but I did not find a guide with 100% correct info: https://diyhacking.com/install-raspbian-raspberry-pis-sd-card/
As mentioned the latest image comes from: https://www.raspberrypi.org/downloads/

Categories
Linux SLES Web Site Technologies

Compiling curl and openssl on Redhat Linux

Intro
I have an ancient Redhat system which I’m not in a position to upgrade. I like to use curl to test web sites, but it’s getting to the point that my ancient version has no SSL versions in common with some secure web sites. I desperately wanted to upgrade curl while leaving the rest of the system as is. Is it even possible? How would you do it? All these things and more are explained in today’s riveting blog post.

The details
Redhat version
I don’t know the proper command so I do this:
$ cat /etc/system-release

Red Hat Enterprise Linux Server release 6.6 (Santiago)

Current curl version
$ ./curl --version

curl 7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.16.2.3 Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2

Limited set of SSL/TLS protocols
$ curl --help

...
 -2/--sslv2         Use SSLv2 (SSL)
 -3/--sslv3         Use SSLv3 (SSL)
...
 -z/--time-cond <time> Transfer based on a time condition
 -1/--tlsv1         Use TLSv1 (SSL)
...

New version of curl

curl 7.55.1 (x86_64-unknown-linux-gnu) libcurl/7.55.1 OpenSSL/1.1.0f zlib/1.2.3

New SSL options

     --ssl           Try SSL/TLS
     --ssl-allow-beast Allow security flaw to improve interop
     --ssl-no-revoke Disable cert revocation checks (WinSSL)
     --ssl-reqd      Require SSL/TLS
 -2, --sslv2         Use SSLv2
 -3, --sslv3         Use SSLv3
...
     --tls-max <VERSION> Use TLSv1.0 or greater
     --tlsauthtype <type> TLS authentication type
     --tlspassword   TLS password
     --tlsuser <name> TLS user name
 -1, --tlsv1         Use TLSv1.0 or greater
     --tlsv1.0       Use TLSv1.0
     --tlsv1.1       Use TLSv1.1
     --tlsv1.2       Use TLSv1.2
     --tlsv1.3       Use TLSv1.3

Now that’s an upgrade! How did we get to this point?

I tried to get a curl RPM – seems like the appropriate path for a lazy system administrator, right? Well, not so fast. It’s not hard to find an RPM, but trying to install one showed a lot of missing dependencies, as in this example:
$ sudo rpm -i curl-minimal-7.55.1-2.0.cf.fc27.x86_64.rpm

warning: curl-minimal-7.55.1-2.0.cf.fc27.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID b56a8bac: NOKEY
error: Failed dependencies:
        libc.so.6(GLIBC_2.14)(64bit) is needed by curl-minimal-7.55.1-2.0.cf.fc27.x86_64
        libc.so.6(GLIBC_2.17)(64bit) is needed by curl-minimal-7.55.1-2.0.cf.fc27.x86_64
        libcrypto.so.1.1()(64bit) is needed by curl-minimal-7.55.1-2.0.cf.fc27.x86_64
        libcurl(x86-64) >= 7.55.1-2.0.cf.fc27 is needed by curl-minimal-7.55.1-2.0.cf.fc27.x86_64
        libssl.so.1.1()(64bit) is needed by curl-minimal-7.55.1-2.0.cf.fc27.x86_64
        curl conflicts with curl-minimal-7.55.1-2.0.cf.fc27.x86_64

So I looked at the libcurl RPM, but it had its own set of dependencies. Pretty soon it looks like a full-time job to get this thing compiled!

I found the instructions mentioned in the reference, but they didn’t work for me exactly like that. Besides, I don’t have a working git program. So here’s what I did.

Compiling openssl

I downloaded the latest openssl, 1.1.0f, from https://www.openssl.org/source/ , untarred it, went into the openssl-1.1.0f directory, and then:

$ ./config -Wl,--enable-new-dtags --prefix=/usr/local/ssl --openssldir=/usr/local/ssl
$ make depend
$ make
$ sudo make install

So far so good.

Compiling zlib
For zlib I was lazy and mostly followed the other guy’s commands. Went something like this:
$ lib=zlib-1.2.11
$ wget http://zlib.net/$lib.tar.gz
$ tar xzvf $lib.tar.gz
$ mv $lib zlib
$ cd zlib
$ ./configure
$ make
$ cd ..
$ CD=$(pwd)

No problems there…

Compiling curl
curl was tricky and when I followed the guy’s instructions I got the very problem he sought to avoid.

vtls/openssl.c: In function ‘Curl_ossl_seed’:
vtls/openssl.c:276: error: implicit declaration of function ‘RAND_egd’
make[2]: *** [libcurl_la-openssl.lo] Error 1
make[2]: Leaving directory `/usr/local/src/curl/curl-7.55.1/lib'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/usr/local/src/curl/curl-7.55.1/lib'
make: *** [all-recursive] Error 1

I looked at the source and decided that what might help is to add a hint where the openssl stuff could be found.

Backing up a bit, I got the source from https://curl.haxx.se/download.html. I chose the file curl-7.55.1.tar.gz. Untar it, go into the curl-7.55.1 directory,
$ ./buildconf
$ export PKG_CONFIG_PATH=/usr/local/ssl/lib/pkgconfig LIBS="-ldl"

and then – here is the single most important point in the whole blog – configure it thusly:

$ ./configure --with-zlib=$CD/zlib --disable-shared --with-ssl=/usr/local/ssl

So my insight was to add the --with-ssl=/usr/local/ssl to the configure command.

Then of course you make it:

$ make

and maybe even install it:

$ make install

This put curl into /usr/local/bin. I actually made a sym link and made this the default version with this kludge (the following commands were run as root):

$ cd /usr/bin; mv curl{,.orig}; ln -s /usr/local/bin/curl

That’s it! That worked and produced a working, modern curl.

By the way it mentions TLS1.3, but when you try to use it:

$ curl -i -k --tlsv1.3 https://drjohnstechtalk.com/

curl: (4) OpenSSL was built without TLS 1.3 support

It’s a no go. But at least TLS1.2 works just fine in this version.
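A quick way to convince yourself the new binary really negotiates TLS 1.2 is to grep the verbose output (which goes to stderr) for the negotiated protocol line:

$ /usr/local/bin/curl -v --tlsv1.2 -k https://drjohnstechtalk.com/ 2>&1 | grep 'SSL connection'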

One other thing – put shared libraries in a common area
I copied my compiled curl from Redhat to a SLES 11 SP3 system. It didn’t quite run. The only thing is, it was missing the openssl libraries. So I guess it’s also important to copy over

libssl.so.1.1
libcrypto.so.1.1

to /usr/lib64 from /usr/local/lib64.

Once I did that, it worked like a charm!
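By the way, ldd tells you up front which shared libraries a binary needs and which ones can’t be found, so you can check before copying anything:

$ ldd /usr/local/bin/curl | grep 'not found'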

Conclusion
We show how to compile the latest versions of openssl and curl on an older Redhat 6.x OS. The motivation for doing so was to remain compatible with web sites which have already dropped, or soon will drop, their support for TLS 1.0. The compiled curl and openssl support TLS 1.2, which should keep them useful for a long while.

References and related
I closely followed the instructions in this stackoverflow post: https://stackoverflow.com/questions/44270707/cant-build-latest-libcurl-on-rhel-7-3#44297265
openssl source: https://www.openssl.org/source/
curl sources: https://curl.haxx.se/download.html
Here’s a web site that only supports TLS 1.2 which shows the problem: https://www.askapache.com/. You can see for yourself on ssllabs.com

Categories
Admin Linux

Reverse upper and lower case letters

Intro
I am a look-at-the-keyboard typist! I can’t count the number of times I’ve begun an email with Caps Lock on, and found a nice correspondence looking like this:

hI aNDRES,
 
i RE-CREATED THE SCRIPT.
...

Tired of the re-typing I researched how to quickly repair this, at least for those with a linux command prompt at their disposal.

The details
I added this to my .aliases file:

# reverse upper/lower case
reverse () { tr 'A-Za-z' 'a-zA-Z'; }

Now when I make this mistake I put the text into my clipboard, even if it is multi-line, and fix it like this:

$ echo 'hI aNDRES,

i RE-CREATED THE SCRIPT.'|reverse

Hi Andres,
 
I re-created the script.

After the single quote I pasted in my clipboard text and closed that out with another single quote.
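If you have xclip installed (not a given on every system), you can skip the quoting dance entirely and read the clipboard directly:

$ xclip -o -selection clipboard | reverse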

Alternative for those uncertain about Linux aliases
Alternatively, right on the command line you would just run

$ echo 'hI aNDRES,

i RE-CREATED THE SCRIPT.'|tr 'A-Za-z' 'a-zA-Z'

Conclusion
We developed an alias expression to make exchanging upper and lower case in a character string fast and easy, if you just remember a few things and have access to a Linux command prompt.

Categories
Web Site Technologies

F5 BigIP irule: serving a dynamic proxy PAC file

Intro
A large organization needed to have considerable flexibility in serving out its proxy PAC file to web browsers. The legacy approach – a perl script on web servers – was replaced by a TCL script I developed which runs on one of their F5 load balancers. In this post I lay out the requirements and how I tackled an unfamiliar language, TCL, to come up with an efficient and elegant solution.
Even if you have no interest in PAC files, if you are developing an irule you’ll probably scratch your head figuring out the oddities of TCL programming. This post may help in that case too.


The irule

# - DrJ 8/3/17
when CLIENT_ACCEPTED {
set cip [IP::client_addr]
}
when HTTP_REQUEST {
set debug 0
# supply an X-DRJ-PAC, e.g., w/ curl, to debug: curl -H 'X-DRJ-PAC: 1.2.3.4' 50.17.188.196/proxy.pac
if {[HTTP::header exists "X-DRJ-PAC"]} {
# overwrite client ip from header value for debugging purposes
  log local0. "DEBUG enabled. Original ip: $cip"
  set cip [HTTP::header value "X-DRJ-PAC"]
  set debug 1
  log local0. "DEBUG. overwritten ip: $cip"
}

# security precaution: don't accept any old uri
if { ! ([HTTP::uri] starts_with "/proxy.pac" || [HTTP::uri] starts_with "/proxy/proxy.cgi") } {
  drop
  log local0. "uri: [HTTP::uri], drop section. cip: $cip"
} else {
#
set LDAPSUFFIX ""
if {[HTTP::uri] ends_with "cgi"} {
  set LDAPSUFFIX "-ldap"
}
# determine which central location to use
if { [class match $cip equals PAC-subnet-list] } {
# If client IP is in the datagroup, send user to appropriate location
    set LOCATION [class lookup $cip PAC-subnet-list]
    if {$debug} {log local0. "DEBUG. match list: LOCATION: $LOCATION"}
} elseif {  $cip ends_with "0" || $cip ends_with "1" || $cip ends_with "4" || $cip ends_with "5" } {
# client IP was not amongst the subnets, use matching of last digit of last octet to set the NJ proxy (01)
    set LOCATION "01"
    if {$debug} {log local0. "DEBUG. match last digit prefers NJ : LOCATION: $LOCATION"}
} else {
# set LA proxy (02) as the default choice
    set LOCATION "02"
    if {$debug} {log local0. "DEBUG. neither match list nor match digit matched: LOCATION: $LOCATION"}
}
 
HTTP::respond 200 content "
function FindProxyForURL(url, host)
{
// o365 and other enterprise sites handled by dedicated proxy...
var cesiteslist = \"*.aadrm.com;*.activedirectory.windowsazure.com;*.cloudapp.net;*.live.com;*.microsoft.com;*.microsoftonline-p.com;*.microsoftonline-p.net;*.microsoftonline.com;*.microsoftonlineimages.com;*.microsoftonlinesupport.net;*.msecnd.net;*.msn.co.jp;*.msn.co.uk;*.msn.com;*.msocdn.com;*.office.com;*.office.net;*.office365.com;*.onmicrosoft.com;*.outlook.com;*.phonefactor.net;*.sharepoint.com;*.windows.net;*.live.net;*.msedge.net;*.onenote.com;*.windows.com\";
var cesites = cesiteslist.split(\";\");
for (var i = 0; i < cesites.length; ++i){
  if (shExpMatch(host, cesites\[i\])) {
    return \"PROXY http-ceproxy-$LOCATION.drjohns.net:8081\" ;
  }
}
// client IP: $cip.
// Direct connections to local domain
if (dnsDomainIs(host, \"127.0.0.1\") ||
           dnsDomainIs(host, \".drjohns.com\") ||
           dnsDomainIs(host, \".drjohnstechtalk.com\") ||
           dnsDomainIs(host, \".vmanswer.com\") ||
           dnsDomainIs(host, \".johnstechtalk.com\") ||
           dnsDomainIs(host, \"localdomain\") ||
           dnsDomainIs(host, \".drjohns.net\") ||
           dnsDomainIs(host, \".local\") ||
           shExpMatch(host, \"10.*\") ||
           shExpMatch(host, \"192.168.*\") ||
           shExpMatch(host, \"172.16.*\") ||
           isPlainHostName(host)
        ) {
           return \"DIRECT\";
        }
        else
        {
           return \"PROXY http-proxy-$LOCATION$LDAPSUFFIX.drjohns.net:8081\" ;
        }
 
}
" \
"Content-Type" "application/x-ns-proxy-autoconfig" \
"Expires" "[clock format [expr ([clock seconds]+7200)] -format "%a, %d %h %Y %T GMT" -gmt true]"
}
}

I know general programming concepts, but before starting on this project I did not know how to realize my ideas in F5’s version of TCL. So I broke all the tasks into little pieces and demonstrated that I had mastered each one. I describe in this blog post all that I learned.

How to use the browser’s IP in an iRule
If you’ve ever written an iRule you’ve probably used the section that starts with when HTTP_REQUEST. But that is not where you pick up the web browser’s IP. For that you go to a different section, when CLIENT_ACCEPTED. Then you throw it into a variable cip like this:
set cip [IP::client_addr]
And you can subsequently refer back to $cip in the when HTTP_REQUEST section.

How to send HTML (or Javascript) body content

It’s also not clear you can use an iRule by itself to send either HTML or Javascript in this case. After all, until this point I’ve always had a back-end web server for that purpose. But in fact you don’t need a back-end web server at all. The secret is this line:

HTTP::respond 200 content "
function FindProxyForURL(url, host)
...

So with this command you set the HTTP response status (200 is an OK) as well as send the body.

Variable interpolation in the body

If the body begins with " it will do variable interpolation (that’s what we Perl programmers call it, anyway: your variables like $cip get turned into their values before being delivered to the user). You can also begin the body with a {, but what follows that is a string literal, which means no variable interpolation.

The bad thing about the " character is that if your body contains the " character, or a [ or ], you have to escape each and every one. And mine does – a lot of them, in fact.

But if you use { you don’t have to escape characters, even $, but you also don’t have a way to say “these bits are variables, interpolate them.” So if your string is dynamic you pretty much have to use ".

Defeat scanners

This irule will be the resource for a VS (virtual server) which effectively acts like a web server. So dumb enterprise scanners will probably come across it and scan for vulnerabilities, whether it makes sense or not. A common thing is for these scanners to scan with random URIs that some web servers are vulnerable to. My first implementation had this VS respond to any URI with the PAC file! I don’t think that’s desirable. Just my gut feeling. We’re expecting to be called by one of two different names. Hence this logic:

if { ! ([HTTP::uri] starts_with "/proxy.pac" || [HTTP::uri] starts_with "/proxy/proxy.cgi") } {
  drop
...
else
  (send PAC file)

The original match operator was equals, but I found that some rogue program actually appends ?Type=WMT to the normal PAC URL! How annoying. That rogue application, by the way, seems to be Windows Media Player. You can kind of see where they were going with this: for Windows Media Player you might want to present a different set of proxies, I suppose.

Match IP against a list of subnets and pull out value
Some background. The company has two proxies with identical names except one ends in 01, the other in 02. 01 and 02 are in different locales. So we created a data group of type address: PAC-subnet-list. The idea is you put in a subnet, e.g., 10.9.7.0/24, and a proxy value, either “01” or “02”. This TCL line checks if the client IP matches one of the subnets we’ve entered into the datagroup:

if { [class match $cip equals PAC-subnet-list] } {

Then this tcl line is used to match the client IP against one of those subnets and retrieve the value and store it into variable LOCATION:

set LOCATION [class lookup $cip PAC-subnet-list]

The reason for the datagroup is to have subnets with LAN-speed connection to one of the proxies use that proxy.
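For reference, a data group like this can also be created from the BigIP command line with tmsh. A sketch, with an example subnet; the exact syntax may vary by software version:

$ tmsh create ltm data-group internal PAC-subnet-list type ip records add { 10.9.7.0/24 { data 01 } }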

Something weird
Now something weird happens. For clients within a subnet that doesn’t match our list, we more-or-less distribute their use of both proxies equally. So at a remote site, users with IPs ending in 0, 1, 4, or 5 use proxy 01:

elseif {  $cip ends_with "0" || $cip ends_with "1" || $cip ends_with "4" || $cip ends_with "5" } {

and everyone else uses proxy 02. So users can be sitting right next to each other, each using a proxy at a different location.

Why didn’t we use a regular expression? Besides the fact that we don’t know the syntax 😉, if you read about regular expressions on the F5 DevCentral web site, the first thing it says is don’t use them! Use something else like starts_with, ends_with, … I guess the alternatives are more efficient.

Further complexity: different proxies if called by different name
Some specialized desktops are configured to use a PAC file whose URL ends in /proxy/proxy.cgi. This PAC file hands out different proxies, which do LDAP authentication as opposed to NTLM/IWA authentication. Hence the use of the variable LDAPSUFFIX. The rest of the logic is the same, however.

Debugging help
I like this part – where it helps you debug the thing. Because you want to know what it’s really doing, and that can be pretty hard to find out, right? You could run a trace, but that’s not fun. So I created this way to do debugging.

if {[HTTP::header exists "X-DRJ-PAC"]} {
# overwrite client ip from header value for debugging purposes
  log local0. "DEBUG enabled. Original ip: $cip"
  set cip [HTTP::header value "X-DRJ-PAC"]
  set debug 1
  log local0. "DEBUG. overwritten ip: $cip"
}

It checks for a custom HTTP request header, X-DRJ-PAC. You can call it with that header, from anywhere, and for the value put the client IP you wish to test, e.g., 1.2.3.4. That will overwrite the client IP variable, cip, set the debug variable, and add some log lines which get nicely printed out to your /var/log/ltm file. So your ltm file may log info about your script’s goings-on like this:

Aug  8 14:06:48 f5drj1 info tmm[17767]: Rule /Common/PAC-irule : DEBUG enabled. Original ip: 11.195.136.89
Aug  8 14:06:48 f5drj1 info tmm[17767]: Rule /Common/PAC-irule : DEBUG. overwritten ip: 12.196.68.91
Aug  8 14:06:48 f5drj1 info tmm[17767]: Rule /Common/PAC-irule : DEBUG. match last digit prefers NJ : LOCATION: 01

And with curl it is not hard at all to send this custom header as I mention in the comments:

$ curl -H 'X-DRJ-PAC: 12.196.88.91' 50.17.188.196/proxy.pac

Expires header
We add an expires header so that the PAC file is good for two hours (7200 seconds). I don’t think it does much good but it seems like the right thing to do. Don’t ask me, I just stole the whole line from f5devcentral.

A PAC file should have the MIME type application/x-ns-proxy-autoconfig. So we set that explicit MIME type with

"Content-Type" "application/x-ns-proxy-autoconfig"

The “\” at the end of some lines is a line continuation character.

Performance
They basically need this to run several hundred times per second. Occasionally PAC file requests “go crazy.” Will it? This part I don’t know. It has yet to be battle-tested. But it is production tested. It’s been in production for over a week. The virtual server consumes 0% of the load balancer’s CPU, which is great. And for the record, the traffic is 0.17% of total traffic from the proxy server, so very modest. So at this point I believe it will survive a usage storm much better than my apache web servers did.

Why bother?
After I did all this work someone pointed out that this all could have been done within the Javascript of the PAC file itself! I hadn’t really thought that through. But we agreed that doesn’t feel right and may force the browser to do more evaluation than we want. But maybe it would have executed once and the results been cached somehow? It’s hard to see how, since each encountered web site could potentially have a different proxy or none at all, so an evaluation should be done each time. So we always try to pass out a minimal PAC file for that reason.

Why not use Bluecoat native PAC handling ability?
Bluecoat proxySG is great at handing out a single, fixed PAC file. It’s not so good at handing out different PAC files, and I felt it was just too much work to force it to do so.

References and related
F5’s DevCentral site is invaluable and the place where I learned virtually everything that went into the irule shown above. devcentral.f5.com
Excessive calls to PAC file.

A more sophisticated PAC file example created by my colleague is shown here: https://drjohnstechtalk.com/blog/2021/03/tcl-irule-program-with-comments-for-f5-bigip/

Categories
Linux Web Site Technologies

curl showing its age with SSL error

Intro
I’ve used curl as a debugging tool for a long time. But time moves on and my testing system didn’t. So now for the first time I saw an error that is produced by this situation, and I will explain it.

The details

The error

$ curl -i -k https://julialang.org/

curl: (35) error:1407742E:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert protocol version

$ curl --help

...
 -2/--sslv2         Use SSLv2 (SSL)
 -3/--sslv3         Use SSLv3 (SSL)
...
 -1/--tlsv1         Use TLSv1 (SSL)
...

Compare this to a server which I’ve kept up-to-date with openssl and curl:

...
 -2/--sslv2         Use SSLv2 (SSL)
 -3/--sslv3         Use SSLv3 (SSL)
...
 -1/--tlsv1         Use TLSv1 (SSL)
    --tlsv1.0       Use TLSv1.0 (SSL)
    --tlsv1.1       Use TLSv1.1 (SSL)
    --tlsv1.2       Use TLSv1.2 (SSL)
...

On this server I can fetch the home page with curl.

So it appears the older system does not have a compatible version of TLS. To confirm this, run the site through SSL Labs. We see this:

SSLLabs evaluation of julialang.org

Sure enough, only TLS 1.2 is supported by the server, and my poor old curl doesn’t have that! Too bad for me, but it shows it’s time to upgrade.
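openssl s_client gives another way to check this from the command line, provided your openssl is new enough to know the protocol flags. The first command should connect; the second should fail with a handshake alert if the server is truly TLS 1.2-only:

$ openssl s_client -connect julialang.org:443 -tls1_2 < /dev/null
$ openssl s_client -connect julialang.org:443 -tls1 < /dev/null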

Another problem site
askapache.com is another vexing site. On a curl version which supposedly supports TLS 1.2 I get this error:
$ curl --tlsv1.2 --verbose -k https://askapache.com/

* About to connect() to askapache.com port 443 (#0)
*   Trying 192.237.251.158... connected
* Connected to askapache.com (192.237.251.158) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* warning: ignoring value of ssl.verifyhost
* NSS error -12286
* Closing connection #0
* SSL connect error
curl: (35) SSL connect error

This is with curl version 7.19.7 on my CentOS 6.8 system.

This same site works fine with my compiled version of curl, 7.55.1, built against the latest openssl. The system-supplied curl is missing support for some cipher suites.

Here’s my compiled curl and openssl list of cipher suites:
$ openssl ciphers

ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:
ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-CHACHA20-POLY1305:
ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-GCM-SHA256:
ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:DHE-RSA-AES256-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES256-SHA:
ECDHE-RSA-AES256-SHA:DHE-RSA-AES256-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES128-SHA:RSA-PSK-AES256-GCM-SHA384:
DHE-PSK-AES256-GCM-SHA384:RSA-PSK-CHACHA20-POLY1305:DHE-PSK-CHACHA20-POLY1305:
ECDHE-PSK-CHACHA20-POLY1305:AES256-GCM-SHA384:PSK-AES256-GCM-SHA384:PSK-CHACHA20-POLY1305:
RSA-PSK-AES128-GCM-SHA256:DHE-PSK-AES128-GCM-SHA256:AES128-GCM-SHA256:PSK-AES128-GCM-SHA256:
AES256-SHA256:AES128-SHA256:ECDHE-PSK-AES256-CBC-SHA384:ECDHE-PSK-AES256-CBC-SHA:SRP-RSA-AES-256-CBC-SHA:SRP-AES-256-CBC-SHA:RSA-PSK-AES256-CBC-SHA384:DHE-PSK-AES256-CBC-SHA384:RSA-PSK-AES256-CBC-SHA:
DHE-PSK-AES256-CBC-SHA:AES256-SHA:PSK-AES256-CBC-SHA384:PSK-AES256-CBC-SHA:ECDHE-PSK-AES128-CBC-SHA256:ECDHE-PSK-AES128-CBC-SHA:SRP-RSA-AES-128-CBC-SHA:SRP-AES-128-CBC-SHA:
RSA-PSK-AES128-CBC-SHA256:DHE-PSK-AES128-CBC-SHA256:RSA-PSK-AES128-CBC-SHA:DHE-PSK-AES128-CBC-SHA:AES128-SHA:PSK-AES128-CBC-SHA256:PSK-AES128-CBC-SHA

and what I see on my older system:
$ openssl ciphers

DHE-RSA-AES256-SHA:DHE-DSS-AES256-SHA:AES256-SHA:DHE-RSA-CAMELLIA256-SHA:DHE-DSS-CAMELLIA256-SHA:
CAMELLIA256-SHA:EDH-RSA-DES-CBC3-SHA:EDH-DSS-DES-CBC3-SHA:DES-CBC3-SHA:DES-CBC3-MD5:
DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA:AES128-SHA:DHE-RSA-CAMELLIA128-SHA:DHE-DSS-CAMELLIA128-SHA:
CAMELLIA128-SHA:RC2-CBC-MD5:RC4-SHA:RC4-MD5:RC4-MD5:EDH-RSA-DES-CBC-SHA:EDH-DSS-DES-CBC-SHA:DES-CBC-SHA:
DES-CBC-MD5:EXP-EDH-RSA-DES-CBC-SHA:EXP-EDH-DSS-DES-CBC-SHA:EXP-DES-CBC-SHA:EXP-RC2-CBC-MD5:EXP-RC2-CBC-MD5:EXP-RC4-MD5:EXP-RC4-MD5

Note that when curl successfully connects it shows which cipher suite was chosen if you use the -v switch:

$ curl -v -k https://drjohnstechtalk.com/

* About to connect() to drjohnstechtalk.com port 443 (#0)
*   Trying 50.17.188.196... connected
* Connected to drjohnstechtalk.com (50.17.188.196) port 443 (#0)
...
* SSL connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
...

On a more demanding server – one that does not work with old curl – the dialog is longer, TLS 1.2 is preferred, and a more secure cipher suite is chosen, one not available on the other system:

(issue standard curl -k -v <server_name>)

*   Trying 50.17.188.197...
* TCP_NODELAY set
* Connected to 50.17.188.197 port 443 (#0)
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384

Another curl error explained
While running

$ curl -v -i -k https://drjohnstechtalk.com/

* About to connect() to drjohnstechtalk.com port 443 (#0)
*   Trying 50.17.188.196... connected
* Connected to drjohnstechtalk.com (50.17.188.196) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* warning: ignoring value of ssl.verifyhost
* skipping SSL peer certificate verification
* SSL connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* Server certificate:
*       subject: CN=drjohnstechtalk.com,C=US
*       start date: Apr 03 00:00:00 2017 GMT
*       expire date: Apr 03 23:59:59 2019 GMT
*       common name: drjohnstechtalk.com
*       issuer: CN=Trusted Secure Certificate Authority 5,O=Corporation Service Company,L=Wilmington,ST=NJ,C=US
> GET / HTTP/1.1
> Host: drjohnstechtalk.com
> Accept: */*
> User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 1.0.3705; .NET CLR 1.1.4322; .NET CLR 2.0.50727; InfoPath.1; .NET CLR 3.0.04506.30;
>
* SSL read: errno -5961
* Closing connection #0
curl: (56) SSL read: errno -5961

What’s going on?
In this test drjohnstechtalk.com was behind a load balancer. The load balancer had SSL configured. The back-end server was not running however though the load balancer’s health check did not detect that condition. So the load balancer permitted the initial connection, but then shut things off when it could not open a connection to the back-end server. So this error has nothing to do with curl showing its age, but I didn’t know that when I started debugging it.

errno 104
Then there’s this one:

$ curl -v -i -k https://lb.drjohnstechtalk.com/

*   Trying 50.17.188.196...
* TCP_NODELAY set
* Connected to fw-change-request.bdrj.net (50.17.188.196) port 443 (#0)
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
*   CAfile: /etc/pki/tls/certs/ca-bundle.crt
  CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
* ALPN, server did not agree to a protocol
* Server certificate:
*  subject: C=US; ST=NJ; CN=lb.drjohnstechtalk.com
*  start date: Nov 14 12:06:02 2017 GMT
*  expire date: Nov 14 12:06:02 2018 GMT
*  SSL certificate verify result: unable to get local issuer certificate (20), continuing anyway.
> GET / HTTP/1.1
> Host: lb.drjohnstechtalk.com
> User-Agent: curl/7.55.1
> Accept: */*
>
* OpenSSL SSL_read: SSL_ERROR_SYSCALL, errno 104
* Closing connection 0
curl: (56) OpenSSL SSL_read: SSL_ERROR_SYSCALL, errno 104

I’ve seen this also occur when there’s a load balancer in front of a web server and the load balancer is working fine but the web server is not.

Another example challenging web site
$ curl --version

curl 7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.27.1 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
Protocols: tftp ftp telnet dict ldap ldaps http file https ftps scp sftp
Features: GSS-Negotiate IDN IPv6 Largefile NTLM SSL libz

$ curl -v -k https://e1st.smapply.org/

* About to connect() to e1st.smapply.org port 443 (#0)
*   Trying 72.55.140.155... connected
* Connected to e1st.smapply.org (72.55.140.155) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* warning: ignoring value of ssl.verifyhost
* NSS error -12286
* Closing connection #0
* SSL connect error
curl: (35) SSL connect error

I have seen this suggestion on the Internet to fix the system-supplied curl on a CentOS 6.8 system:

yum update -y nss curl libcurl

It didn’t work!

Rationale
I tried to give the owners of e1st.smapply.org a hard time for supporting such a limited set of ciphersuites – essentially only the latest thing (which you can see for yourself by running it through ssllabs.com): TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384. If I run this through SSL interception on a Symantec proxy with an older image, that ciphersuite isn’t present! I had to upgrade; then it was fine. But getting back to the rationale: they told me they have future-proofed their site for the new requirements of PCI and they would not budge.

Another curl error

curl: (3) Illegal characters found in URL

If your url looks visibly OK, make sure you don’t have any non-printing characters in it. Put it through the linux od -c utility. In my case I culled the url from a Location header after parsing it with awk. Unbeknownst to me, tagging along at the end, unseen, were extra \r\n characters. I had to get rid of those.
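To make that concrete, here’s the sort of thing I mean; od -c makes the stray \r and \n visible, and tr -d removes them:

$ echo "$url" | od -c
$ url=$(echo "$url" | tr -d '\r\n')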

Conclusion
A TLS version error is explained, as well as the way it came about. Another curl/SSL error is also explained.

References and related
I eventually came up with the solution: compile my own updated version of curl! I describe how I did it in this blog post.

A more recent TLS versioning problem which I could have only resolved by using curl is described in this post.