Intro
systemd can be pretty formidable to master. Say you have your own little script you like to run but you don’t want to bother with inserting it into the systemd facility. What can you do?
The details
A simple trick is to insert the startup of the script into a crontab like this:
@reboot <path-to-your-script>
For more details on how and why this works and some other crontab oddities try
$ man -s5 crontab
An old Unix hand pointed this out to me recently. I am going to make a lot more use of it…
Of course niceties such as run levels, order of startup, etc are not really controllable (I guess). Or maybe it is the case that all your @reboot scripts are processed in order, top to bottom.
The problem with my cable modem is back. So I’ve revisited my own project and just slightly tweaked things. I am also using an RPi model 4 now. Works great…
Original Intro
I lose my Internet far too often – sometimes once a day. Of course I have lots of network gear in a rat’s nest of cables. I narrowed the problem down to the cable modem, which simply needs to be power cycled and all is good. Most people would call their cable company at this point. I decided to make a little project of it to see if I could get my Raspberry Pi to
– monitor the Internet connection and
– automatically power-cycle the cable modem
Cool, right?
Needless to say, if I can power cycle a modem, I can control power to all kinds of devices with the RPi.
Is there a product already on the market?
Why yes, there is. Normally that would shut me down in my tracks because what’s the point? But the product is relatively expensive – $100, so my DIY solution is considerably less since I already own the RPi. See references for a link to the commercial solution to this problem.
Getting a control cable
This is pathetic, but, in 2017 I originally cut out a cable from an old computer that no longer works. The jumper has more pins than I need, but I could make it work. In 2021 I used proper jumper cables. It’s neater. Thing is, I’ve had them for a while and I forget where I got them from – perhaps a friend.
Setting up my GPIO, just for testing
The following is only there to show you how easy it is to send signals out the GPIO pin. The script I wrote below, connTest.sh, does all this setup for you if you just want to quickly get down to business.
I am plugged into the end so I need to manipulate GPIO pin 21.
Become root:
$ sudo su -
Get to the right directory:
$ cd /sys/class/gpio
Create the pin for user manipulation:
$ echo 21 > export
Move to that pin’s directory:
$ cd gpio21
Set up pin for sending signal OUT:
$ echo out > direction
Test what we have so far:
$ cat direction
out
$ cat value
0
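The same sequence can be scripted end to end. This is a dry-run sketch: the GPIO_ROOT variable is my own invention so the logic can be tried on any machine; on a real Pi you would run it as root with GPIO_ROOT=/sys/class/gpio.

```shell
#!/bin/bash
# Dry-run sketch of the GPIO setup and a power cycle.
# On a real Pi, run as root with GPIO_ROOT=/sys/class/gpio;
# by default a scratch directory stands in for sysfs.
GPIO_ROOT=${GPIO_ROOT:-$(mktemp -d)}
pin=21
if [ "$GPIO_ROOT" != /sys/class/gpio ]; then
    # fake sysfs files so the dry-run works anywhere
    mkdir -p "$GPIO_ROOT/gpio$pin" && touch "$GPIO_ROOT/export"
fi
echo $pin > "$GPIO_ROOT/export"              # expose the pin to userland
echo out > "$GPIO_ROOT/gpio$pin/direction"   # configure it as an output
echo 1 > "$GPIO_ROOT/gpio$pin/value"         # relay energized: power cut
sleep 2
echo 0 > "$GPIO_ROOT/gpio$pin/value"         # relay released: power restored
echo "pin $pin direction: $(cat "$GPIO_ROOT/gpio$pin/direction"), value: $(cat "$GPIO_ROOT/gpio$pin/value")"
```

The 1/sleep/0 at the end is exactly the power-cycle step connTest.sh performs below.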
connTest.sh script
I put this in /usr/local/etc and called it connTest.sh. I’m still tinkering with it a bit. But it shows what we’re basically trying to do.
#!/usr/bin/bash
# DrJ 8/2021
# Test if Internet connection is still good and send signal to relay if it is not
# see https://drjohnstechtalk.com/blog/2017/10/raspberry-pi-automates-cable-modem-power-cycling-task/?preview_id=3121&preview_nonce=9b896f248d&post_format=standard&_thumbnail_id=-1&preview=true
Break=300
Sleep=11
pingpadding=30 # if no response ping takes longer to run
log=/var/log/connTest
pinglog=/tmp/ping.log
#
# one-time setup of our GPIO pin so we can control it
# if the power is on the right, GPIO pin 21 is the lower right pin
pin=21
cd /sys/class/gpio
echo $pin > export
cd gpio$pin
echo out > direction
# divert STDOUT and STDERR to log file
exec 1>$log
exec 2>&1
echo "$0 starting monitoring at "$(date)
# report our external IP
curl -s ipinfo.io|head -2|tail -1
while /bin/true; do
try1=$(curl -is --connect-timeout 6 www.google.com|wc -c)
[[ $try1 -lt 300 ]] && {
echo google came up short. Trying amazon next. characters: $try1
sleep 60
try2=$(curl -is --connect-timeout 6 https://www.amazon.com|wc -c)
[[ $try2 -lt 300 ]] && {
echo "#################"
echo "We have a connection problem at "$(date)
echo character counts. google: $try1, amazon, $try2
echo "Power cycling router and waiting for $Break seconds"
# start a ping job
ping -c $Break 1.1.1.1 > $pinglog 2>&1 &
# this will shut power off
echo 1 > value
sleep 4
# and this will turn it back on
echo 0 > value
# this prevents us from too aggressively power-cycling
sleep $(($Break+$pingpadding))
# report on ping results
#22 packets transmitted, 22 received, 0% packet loss, time 53ms
#rtt min/avg/max/mdev = 6.536/15.533/24.705/4.510 ms
echo printing last three lines from ping results log:
tail -3 $pinglog
line=`tail -2 $pinglog|head -1`
t1=`echo -n $line|awk '{print $1}'`
t2=`echo -n $line|awk '{print $4}'`
# downtime=$(($t1-$t2))
# test for integer inputs
[[ "$t1" =~ ^[0-9]+$ ]] && [[ "$t2" =~ ^[0-9]+$ ]] && downtime=$(($t1-$t2))
echo DOWNTIME: $downtime seconds
# report our external IP
curl -s ipinfo.io|head -2|tail -1
echo "#################"
}
}
sleep $Sleep
done
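The downtime estimate near the end of the script is just transmitted minus received packets from the ping summary line. That parsing can be checked in isolation; the sample line here is copied from the log output shown later in this post.

```shell
#!/bin/bash
# Parse a ping summary line the same way connTest.sh does
line="300 packets transmitted, 202 received, +9 errors, 32.6667% packet loss, time 563ms"
t1=$(echo -n "$line" | awk '{print $1}')    # packets transmitted
t2=$(echo -n "$line" | awk '{print $4}')    # packets received
# only do the arithmetic if both fields really are integers
[[ "$t1" =~ ^[0-9]+$ ]] && [[ "$t2" =~ ^[0-9]+$ ]] && downtime=$((t1 - t2))
echo "DOWNTIME: $downtime seconds"
```

With a one-second ping interval, packets lost is a decent proxy for seconds of outage.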
Starting on boot
These days I just use my crontab trick – much easier. You edit your crontab by saying sudo crontab -e. Then put in these lines at the bottom:
# DrJ 7/13/21
@reboot sleep 45; /usr/local/etc/connTest.sh > /tmp/connTestRun.log 2>&1
# bring down wireless after a while – assume we have a wired connection
@reboot sleep 120; /usr/sbin/ifconfig wlan0 down
Only include that last line if you have an ethernet cable connection, which, for monitoring purposes, you should. WiFi is just not as reliable.
In all this I had the most trouble getting the startup script to bend to my will! But I think it’s functioning now. It may not be the most efficient, but it’s workable, meaning, it starts up connTest.sh after a reboot, and sends the log to /var/log/connTest.
My conntest file looks like this after I rebooted a few days ago:
/usr/local/etc/connTest.sh starting monitoring at Sat 18 Sep 08:11:47 EDT 2021
"ip": "67.83.122.167",
#################
We have a connection problem at Mon 20 Sep 14:13:03 EDT 2021
Power cycling router and waiting for 300 seconds
printing last three lines from ping results log:
— 1.1.1.1 ping statistics —
300 packets transmitted, 202 received, +9 errors, 32.6667% packet loss, time 563ms
rtt min/avg/max/mdev = 7.581/13.387/35.981/3.622 ms
DOWNTIME: 98 seconds
"ip": "67.83.122.167",
So it needs to restart my cable modem about every other day and often during those critical daytime hours when I am working from home.
Substitute below for one thousand words
Raspberry Pi GPIO pins 21 plus ground connected to the power relay
So you can almost make out the different outlets from the power relay: always on; normally on; normally off. Makes perfect sense, right?
See that green plug on the side of the relay? I was such a newbie I was shoving the wires into it, unsure how to make a good connection. Well, with a little effort it simply pulls out, revealing screws that can be used to secure the wires in the holes.
Some conclusions about my cable modem problems
The problems always occur during the day, i.e., when it is being used more heavily (the monitoring is 24×7 so it doesn’t distinguish). So somehow it’s actual usage which triggers failure. I wonder if it outputs more heat and overheats when the Internet is used more heavily? Just a hypothesis.
Outage can be reduced to about 90 seconds with this script based on the ping drop testing. Your mileage may vary, as they say.
My ISP does not give me a new IP after I reboot.
A strange error pops up
After running for awhile I noticed this error in the log:
I’ve still got to look into the root cause of that issue. A reboot cleared it up, however.
Conclusion
It’s fun to actually turn off and on 110V AC power using your Raspberry Pi! Especially when there is a useful purpose behind it such as a cable modem which starts to perform better after being power cycled. At only $30 this is a pretty affordable DIY project. I provide some scripts which shows how to work with GPIO pins using the command line. That turns out to be not so mysterious after all…
If the switching can work fast enough, I’m thinking of a next project with lights set to musical beats…!
Intro
Sometimes everything is there in place, ready to be used, but you just have to either mistakenly try it, or learn it works by reading about it, because it may be counter-intuitive. Such is the case with Server Name Indication. I thought I knew enough about https to “know” that you can only have one key/certificate for a single IP address. That CERT can be a SAN (subject alternative name) CERT covering multiple names, but you only get one shot at getting your certificate right. Or so I thought. Turns out I was dead wrong.
Some details
Well, SNI, I guess, is a protocol extension to https. You know I always wondered why in proxy server logs it was able to log the domain name? How would it know that if the https protocol conversation is all encrypted? Maybe it’s SNI at work.
Who supports it?
Since this is an extension it has to be supported by both server and browser. It is. Apache 2.4 supports it. IE, Firefox and Chrome support it. Even my venerable curl supports it! What does not support it, right out of the box, is openssl. The openssl s_client command fetches a site’s certificate, but as I found the hard way, you need to add the -servername switch to tell it which certificate you want to examine, i.e., to force it to use SNI.
This is mainly used by big hosting companies so they can easily and flexibly cram lots of web sites onto a single IP, but us small-time self-hosted sites benefit as well. I host a few sites for friends after all.
Testing methodology
This is pretty simple. I have a couple different virtual servers. I set each up with a completely different certificate in my apache virtual server setups. Then I accessed them by name like usual. Each showed me their own, proper, certificate. That’s it! So this is more than theoretical for me. I’ve already begun to use it.
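The same experiment can be reproduced with openssl alone, no apache needed – a sketch using throwaway self-signed certs and made-up names. s_server can hold a default certificate plus a second one that it serves only when the client asks for the matching name via SNI.

```shell
#!/bin/bash
# Throwaway self-signed certs for two made-up names
openssl req -x509 -newkey rsa:2048 -nodes -keyout a.key -out a.crt \
    -subj "/CN=site-a.example" -days 1 2>/dev/null
openssl req -x509 -newkey rsa:2048 -nodes -keyout b.key -out b.crt \
    -subj "/CN=site-b.example" -days 1 2>/dev/null
# One listener, two certs: -cert2/-key2 are served when SNI matches -servername
openssl s_server -accept 8443 -cert a.crt -key a.key \
    -cert2 b.crt -key2 b.key -servername site-b.example -www >/dev/null 2>&1 &
SRV=$!
sleep 1
# Ask for the second name via SNI, then ask without a matching name
subj_sni=$(echo | openssl s_client -connect localhost:8443 -servername site-b.example 2>/dev/null | openssl x509 -noout -subject)
subj_def=$(echo | openssl s_client -connect localhost:8443 2>/dev/null | openssl x509 -noout -subject)
kill $SRV
echo "matching SNI:    $subj_sni"
echo "no matching SNI: $subj_def"
```

The first query comes back with the site-b certificate, the second with the default site-a certificate – two certs, one IP, one port.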
Enterprise usage
F5 BigIP supports this protocol as well, of course. This article describes how to set it up. But it looks limited to only one server name per certificate, which will be inadequate if there are SAN certificates.
Conclusion
https using Server Name Indication allows you to run multiple virtual servers, each with its own unique certificate, on a single IP address.
The situation
A server in Europe needs to transfer a log file which is written every hour from a server in the US. The filename format is
20171013-1039.log.gz
And we want the transfer to be done every hour.
How we did it
I learned something about the date command. I wanted to do date arithmetic, like calculate the previous hour. I’ve only ever done this in Perl. Then I saw how someone did it within a bash script.
First the timezone
export TZ=America/New_York
sets the timezone to that of the server which is writing the log files. This is important.
Then get the previous hour
$ onehourago=`date --date='1 hour ago' '+%Y%m%d-%H'`
That’s it!
Then the ftp command looks like
$ get $onehourago
If we needed the log from two hours ago we would have had
$ twohoursago=`date --date='2 hours ago' '+%Y%m%d-%H'`
Why the timezone setting?
Initially I skipped the timezone setting and I simply put 7 hours ago, given that Europe and New York are six hours apart, and that’ll work 95% of the time. But because Daylight Savings time starts and ends at different times in the two continents, that will produce bad results for a few weeks a year. So it’s cleaner to simply switch the timezone before doing the date arithmetic.
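The whole trick fits in a few lines and can be sanity-checked anywhere, since setting TZ affects only the current process, not the system clock:

```shell
#!/bin/bash
# Compute the filename stamp in the log writer's timezone,
# regardless of where this script actually runs
export TZ=America/New_York
onehourago=$(date --date='1 hour ago' '+%Y%m%d-%H')
twohoursago=$(date --date='2 hours ago' '+%Y%m%d-%H')
echo "one hour ago: $onehourago, two hours ago: $twohoursago"
```

Either stamp then plugs straight into the ftp get, e.g. get ${onehourago}.log.gz.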
Conclusion
The linux date command has more features than I thought. We’ve shown how to create some relative dates.
References and related
On a linux system
$ info date
will give you more information and lots of examples to work from.
Intro
I’ve sung the praises of fail2ban as a modern way to shutdown those annoying probes of your cloud server. I recently got to work with a Redhat v 7.4 system, so much newer than my old CentOS 6 server. And fail2ban failed even to work! Instead of the usual extensive debugging I just wrote my own. I’m sharing it here.
The details
I have a bare-bones RHEL 7.4 system. A yum search fail2ban does not find that package. Supposedly you simply need to add the EPEL repository to make that package available but the recipe on how to do that is not obvious. So I got the source for fail2ban and built it. Although it runs, you gotta build a local jail to block ssh attempts and that’s where it fails. So instead of going down that rabbit hole – I was already in too deep – I decided to heck with it and I’m building my own.
All I really wanted was to ban IPs which are hitting my sshd server endlessly, often once per second or more. I take it personally.
RHEL 7 has a new firewall concept, firewalld. It’s all new to me and I don’t want to go down that rabbit hole either, at least not right now. So I rely on that old standard of mine: cut off an attacker by making an invalid route to his IP address, along the lines of
$ route add -host <attacker-ip> gw 127.0.0.1
And voila, they can no longer establish a TCP connection. It’s not quite as good as a firewall rule because their source UDP packets could still get through, but come on, we don’t need to be purists. And furthermore, in practice it produces the desired behaviour: stops the ssh dictionary attacks cold.
I knocked this out in one night, avoiding the rabbit hole of “fixing” fail2ban. So I had to use the old stuff I know so well, perl and stupid little tricks. I call it drjfail2ban.
#!/bin/perl
# suppress IPs with failed logins
# DrJ - 2017/10/07
$DEBUG = 0;
$sleep = 30;
$cutoff = 3;
$headlines = 60;
@goodusers =("drjohn1","user57");
%blockedips = ();
while(1) {
# $time = `date +%Y%m%d%H%M%S`;
main();
sleep($sleep);
}
sub main() {
if ($DEBUG) {
for $ips (keys %blockedips) {
print "blocked ip: $ips "
}
}
# man last shows what this means: -i forces IP to be displayed, etc.
open(LINES,"last -$headlines -i -f /var/log/btmp|") || die "Problem with running last -f btmp!!\n";
# output:
#ubnt ssh:notty 185.165.29.197 Sat Oct 7 19:30 gone - no logout
while(<LINES>) {
($user,$ip) = /^(\S+)\s+\S+\s+(\S+)/;
print "user,ip: $user,$ip\n" if $DEBUG;
next if $blockedips{$ip};
#we can't handle hostnames right now
next if $ip =~ /[a-z]/i;
$candidateips{$ip} += 1;
$bannedusers{$ip} = $user;
}
for (keys %candidateips) {
$ip = $_;
# allow my usual source IPs without blocking...
next if $ip =~ /^(50\.17\.188\.196|51\.29\.208\.176)/;
next if $blockedips{$ip};
$usr = $bannedusers{$ip};
$ipct = $candidateips{$ip};
print "ip, usr, ipct: $ip, $usr, $ipct\n" if $DEBUG;
# block
$block = 1;
for $gu (@goodusers) {
print "gu: $gu\n" if $DEBUG;
$block = 0 if $usr eq $gu;
}
if ($block) {
# more tests: persistence of attempt
$hitcnt = $candidateips{$ip};
if ($hitcnt < $cutoff) {
# do not block and reset counter for next go-around
print "Not blocking ip $ip and resetting counter\n" if $DEBUG;
$candidateips{$ip} = 0;
} else {
$blockedips{$ip} = 1;
print "Blocking ip $ip with hit count $hitcnt at " . `date`;
# prevent further communication...
system("route add -host $ip gw 127.0.0.1");
}
}
#print "route add -host $ip gw 127.0.0.1\n";
}
close(LINES);
} # end main function
Highlights from the program
The comments are pretty self-explanatory. Just a note about the philosophy. I fear making a goof and locking myself out! So I was conservative and try to not do any blocking if the source IP matches one of my favored source IPs, or if the user matches one of my usual usernames like drjohn1. I use obscure userids and the hackers try the stupid stuff like root, admin, etc. So they may be dictionary attacking the password, but they certainly aren’t dictionary attacking the username!
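The heart of the script – tally failed attempts per source IP and block once a cutoff is reached – can be sketched in a few lines of shell. The sample input here mimics the format of last -i -f /var/log/btmp, with IPs taken from the listing below; the actual route command stays commented out since it needs root.

```shell
#!/bin/bash
# Count failed-login attempts per source IP; block past a cutoff
cutoff=3
# sample lines in `last -i -f /var/log/btmp` format
sample='root     ssh:notty    36.108.234.99   Sun Oct  8 17:47
root     ssh:notty    36.108.234.99   Sun Oct  8 17:47
root     ssh:notty    36.108.234.99   Sun Oct  8 17:47
admin    ssh:notty    185.165.29.69   Sun Oct  8 18:02'
verdicts=$(echo "$sample" | awk '{print $3}' | sort | uniq -c | \
while read count ip; do
    if [ "$count" -ge "$cutoff" ]; then
        echo "would block $ip (hit count $count)"
        # the real script then runs, as root:
        # route add -host $ip gw 127.0.0.1
    fi
done)
echo "$verdicts"
```

The perl version adds the whitelisting of good users and source IPs on top of this core loop.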
I don’t mind wiping the slate clean of all created routes after a server reboot so I only plan to run this from the command line. To make it persistent until the next reboot you just run it from the root account like so (let’s say we put it in /usr/local/sbin):
$ nohup /usr/local/sbin/drjfail2ban &
And it just sits there and runs, even after you log out.
Results
Since it hasn’t been running for long I can provide a partial log file as of this publication.
Blocking ip 103.80.117.74 with hit count 6 at Sun Oct 8 17:34:43 CEST 2017
SIOCADDRT: File exists
Blocking ip 89.176.96.45 with hit count 5 at Sun Oct 8 17:34:43 CEST 2017
SIOCADDRT: File exists
Blocking ip 31.162.51.206 with hit count 3 at Sun Oct 8 17:34:43 CEST 2017
SIOCADDRT: File exists
Blocking ip 218.95.142.218 with hit count 6 at Sun Oct 8 17:34:43 CEST 2017
SIOCADDRT: File exists
Blocking ip 202.168.8.54 with hit count 5 at Sun Oct 8 17:34:43 CEST 2017
SIOCADDRT: File exists
Blocking ip 13.94.29.182 with hit count 4 at Sun Oct 8 17:34:43 CEST 2017
SIOCADDRT: File exists
Blocking ip 40.71.185.73 with hit count 4 at Sun Oct 8 17:34:43 CEST 2017
SIOCADDRT: File exists
Blocking ip 77.72.85.100 with hit count 13 at Sun Oct 8 17:34:43 CEST 2017
SIOCADDRT: File exists
Blocking ip 201.180.104.63 with hit count 7 at Sun Oct 8 17:34:43 CEST 2017
SIOCADDRT: File exists
Blocking ip 121.14.27.58 with hit count 4 at Sun Oct 8 17:40:43 CEST 2017
Blocking ip 36.108.234.99 with hit count 6 at Sun Oct 8 17:47:13 CEST 2017
Blocking ip 185.165.29.69 with hit count 6 at Sun Oct 8 18:02:43 CEST 2017
Blocking ip 190.175.40.195 with hit count 6 at Sun Oct 8 19:05:43 CEST 2017
Blocking ip 139.199.167.21 with hit count 4 at Sun Oct 8 19:29:13 CEST 2017
Blocking ip 186.60.67.51 with hit count 5 at Sun Oct 8 20:49:14 CEST 2017
And what my route table looks like currently:
$ netstat -rn|grep 127.0.0.1
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
2.177.217.155 127.0.0.1 255.255.255.255 UGH 0 0 0 lo
13.94.29.182 127.0.0.1 255.255.255.255 UGH 0 0 0 lo
31.162.51.206 127.0.0.1 255.255.255.255 UGH 0 0 0 lo
36.108.234.99 127.0.0.1 255.255.255.255 UGH 0 0 0 lo
37.204.23.84 127.0.0.1 255.255.255.255 UGH 0 0 0 lo
40.71.185.73 127.0.0.1 255.255.255.255 UGH 0 0 0 lo
42.7.26.15 127.0.0.1 255.255.255.255 UGH 0 0 0 lo
46.6.60.240 127.0.0.1 255.255.255.255 UGH 0 0 0 lo
59.16.74.234 127.0.0.1 255.255.255.255 UGH 0 0 0 lo
77.72.85.100 127.0.0.1 255.255.255.255 UGH 0 0 0 lo
89.176.96.45 127.0.0.1 255.255.255.255 UGH 0 0 0 lo
103.80.117.74 127.0.0.1 255.255.255.255 UGH 0 0 0 lo
109.205.136.10 127.0.0.1 255.255.255.255 UGH 0 0 0 lo
113.195.145.13 127.0.0.1 255.255.255.255 UGH 0 0 0 lo
118.32.27.85 127.0.0.1 255.255.255.255 UGH 0 0 0 lo
121.14.27.58 127.0.0.1 255.255.255.255 UGH 0 0 0 lo
139.199.167.21 127.0.0.1 255.255.255.255 UGH 0 0 0 lo
162.213.39.235 127.0.0.1 255.255.255.255 UGH 0 0 0 lo
176.50.95.41 127.0.0.1 255.255.255.255 UGH 0 0 0 lo
176.209.89.99 127.0.0.1 255.255.255.255 UGH 0 0 0 lo
181.113.82.213 127.0.0.1 255.255.255.255 UGH 0 0 0 lo
185.165.29.69 127.0.0.1 255.255.255.255 UGH 0 0 0 lo
185.165.29.197 127.0.0.1 255.255.255.255 UGH 0 0 0 lo
185.165.29.198 127.0.0.1 255.255.255.255 UGH 0 0 0 lo
185.190.58.181 127.0.0.1 255.255.255.255 UGH 0 0 0 lo
186.57.12.131 127.0.0.1 255.255.255.255 UGH 0 0 0 lo
186.60.67.51 127.0.0.1 255.255.255.255 UGH 0 0 0 lo
190.42.185.25 127.0.0.1 255.255.255.255 UGH 0 0 0 lo
190.175.40.195 127.0.0.1 255.255.255.255 UGH 0 0 0 lo
193.201.224.232 127.0.0.1 255.255.255.255 UGH 0 0 0 lo
201.180.104.63 127.0.0.1 255.255.255.255 UGH 0 0 0 lo
201.255.71.14 127.0.0.1 255.255.255.255 UGH 0 0 0 lo
202.100.182.250 127.0.0.1 255.255.255.255 UGH 0 0 0 lo
202.168.8.54 127.0.0.1 255.255.255.255 UGH 0 0 0 lo
203.190.163.125 127.0.0.1 255.255.255.255 UGH 0 0 0 lo
213.186.50.82 127.0.0.1 255.255.255.255 UGH 0 0 0 lo
218.95.142.218 127.0.0.1 255.255.255.255 UGH 0 0 0 lo
221.192.142.24 127.0.0.1 255.255.255.255 UGH 0 0 0 lo
Here’s a partial listing of the many failed logins, just to keep it real:
...
root ssh:notty 190.175.40.195 Sun Oct 8 19:05 - 19:28 (00:23)
root ssh:notty 190.175.40.195 Sun Oct 8 19:05 - 19:05 (00:00)
root ssh:notty 190.175.40.195 Sun Oct 8 19:05 - 19:05 (00:00)
root ssh:notty 190.175.40.195 Sun Oct 8 19:05 - 19:05 (00:00)
root ssh:notty 190.175.40.195 Sun Oct 8 19:05 - 19:05 (00:00)
root ssh:notty 190.175.40.195 Sun Oct 8 19:05 - 19:05 (00:00)
admin ssh:notty 185.165.29.69 Sun Oct 8 18:02 - 19:05 (01:02)
admin ssh:notty 185.165.29.69 Sun Oct 8 18:02 - 18:02 (00:00)
admin ssh:notty 185.165.29.69 Sun Oct 8 18:02 - 18:02 (00:00)
admin ssh:notty 185.165.29.69 Sun Oct 8 18:02 - 18:02 (00:00)
root ssh:notty 185.165.29.69 Sun Oct 8 18:02 - 18:02 (00:00)
root ssh:notty 185.165.29.69 Sun Oct 8 18:02 - 18:02 (00:00)
root ssh:notty 185.165.29.69 Sun Oct 8 18:02 - 18:02 (00:00)
root ssh:notty 36.108.234.99 Sun Oct 8 17:47 - 18:02 (00:15)
root ssh:notty 36.108.234.99 Sun Oct 8 17:47 - 17:47 (00:00)
root ssh:notty 36.108.234.99 Sun Oct 8 17:47 - 17:47 (00:00)
root ssh:notty 36.108.234.99 Sun Oct 8 17:47 - 17:47 (00:00)
root ssh:notty 36.108.234.99 Sun Oct 8 17:47 - 17:47 (00:00)
root ssh:notty 36.108.234.99 Sun Oct 8 17:46 - 17:47 (00:00)
ubuntu ssh:notty 121.14.27.58 Sun Oct 8 17:40 - 17:46 (00:06)
ubuntu ssh:notty 121.14.27.58 Sun Oct 8 17:40 - 17:40 (00:00)
aaaaaaaa ssh:notty 121.14.27.58 Sun Oct 8 17:40 - 17:40 (00:00)
aaaaaaaa ssh:notty 121.14.27.58 Sun Oct 8 17:40 - 17:40 (00:00)
root ssh:notty 206.71.63.4 Sun Oct 8 17:34 - 17:40 (00:06)
root ssh:notty 206.71.63.4 Sun Oct 8 17:34 - 17:34 (00:00)
root ssh:notty 89.176.96.45 Sun Oct 8 16:15 - 17:34 (01:19)
root ssh:notty 89.176.96.45 Sun Oct 8 16:15 - 16:15 (00:00)
root ssh:notty 89.176.96.45 Sun Oct 8 16:15 - 16:15 (00:00)
root ssh:notty 89.176.96.45 Sun Oct 8 16:15 - 16:15 (00:00)
...
Before running drjfail2ban it was much more obnoxious, with the same IP hitting my server every second or so.
Conclusion
I found it easier to roll my own than battle someone else’s errors. It’s kind of fun for me to create these little scripts. I don’t care if anyone else uses them. I will refer to this post myself and probably re-use it elsewhere!
Intro
I was at home during the great eclipse of August 2017. I didn’t buy the special viewing glasses. I remembered the general advice that anything with holes in it would show off the shape of the moon covering the sun by its shadow.
Initially I tried paper with three holes from a hole punch. Result were OK but not that impressive.
I was idly pulling dishes out of the dishwasher when I came across our cheese grater. Lots and lots of holes all over. This was the thing!
Everyone I’ve shown these two loved the novelty.
The setup
I’m standing just inside the garage – nice smooth surface for shadows – by the doorway. I find a way to simultaneously hold my phone and the cheese grater, and (mostly) get sufficiently out of the way that the cheese grater’s shadow is cast unobstructed onto the garage floor. It was a little tricky.
I’ll present the full picture and then the shadow of the cheese grater blown up at two different times – 2:27 PM and 2:41 PM EST.
And, yes, the holes in the grater are actually round and the normal shadow would be full of round circles of light where the sun passes through the grater’s holes!
It was a hot day. I could feel the air cool down during the eclipse! And cicadas started their shrill cry.
2:41 PM pics
Click on any of these pictures to see it blown up.
2:41 PM eclipse shadow pic
And focussing on the cheese grater’s shadow:
Cheese grater shadow during the eclipse
2:27 PM pics
This time is probably closer to the greatest coverage of the moon over the sun.
Eclipse full picture
Eclipse cheese grater pic
Conclusion
A common cheese grater gives us a good idea of what the sun looked like during the eclipse, and creates something marvelous at the same time.
Discussion – 2022 update
The 2017 eclipse was a blown opportunity for citizen astronomers to contribute to one of the most difficult-to-measure astronomical metrics: the bending of nearby starlight by the sun. Believe it or not, the eclipse measurements of this effect from 1919, which proved Einstein’s prediction about gravity’s effect on the path of light, are still about the most precise we have. And they’re not very good! Unlike everything else we measure through scientific experiments, this one didn’t improve with time.
References and related
The book No Shadow of a Doubt discusses how they (Eddington and Dyson) measured the bending of starlight by the sun during the 1919 solar eclipse, and all the problems with those attempts.
Intro
The easy way
How to examine a pkcs12 (pfx) file
$ openssl pkcs12 -info -in file_name.pfx
It will prompt you for the password a total of three times!
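If you need to script it, the prompts can be avoided with -passin. Here is a self-contained sketch – the filenames, name, and password are throwaway examples generated on the spot:

```shell
#!/bin/bash
# Build a demo pfx from a self-signed key/cert pair (all names are examples)
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key -out demo.crt \
    -subj "/CN=demo.example" -days 1 2>/dev/null
openssl pkcs12 -export -in demo.crt -inkey demo.key -out demo.pfx \
    -passout pass:secret
# Examine it non-interactively: pull out just the certificate and its subject
subject=$(openssl pkcs12 -in demo.pfx -nokeys -clcerts -passin pass:secret 2>/dev/null | \
    openssl x509 -noout -subject)
echo "$subject"
```

Swapping -nokeys -clcerts for -info gives the full interactive-style dump, still without any prompting.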
The hard way
I went through this whole exercise because I originally could not find the easy way!!!
Get the source for openssl.
Look for pkread.c. Mine is in /usr/local/src/openssl/openssl-1.1.0f/demos/pkcs12.
Compile it.
My first pass:
$ gcc -o pkread pkread.c
/tmp/cclhy4wr.o: In function `sk_X509_num':
pkread.c:(.text+0x14): undefined reference to `OPENSSL_sk_num'
/tmp/cclhy4wr.o: In function `sk_X509_value':
pkread.c:(.text+0x36): undefined reference to `OPENSSL_sk_value'
/tmp/cclhy4wr.o: In function `main':
pkread.c:(.text+0x93): undefined reference to `OPENSSL_init_crypto'
pkread.c:(.text+0xa2): undefined reference to `OPENSSL_init_crypto'
pkread.c:(.text+0x10a): undefined reference to `d2i_PKCS12_fp'
pkread.c:(.text+0x154): undefined reference to `ERR_print_errors_fp'
pkread.c:(.text+0x187): undefined reference to `PKCS12_parse'
pkread.c:(.text+0x1be): undefined reference to `ERR_print_errors_fp'
pkread.c:(.text+0x1d4): undefined reference to `PKCS12_free'
pkread.c:(.text+0x283): undefined reference to `PEM_write_PrivateKey'
pkread.c:(.text+0x2bd): undefined reference to `PEM_write_X509_AUX'
pkread.c:(.text+0x320): undefined reference to `PEM_write_X509_AUX'
collect2: ld returned 1 exit status
Those undefined references simply mean the linker was never told about the OpenSSL libraries. Something like this should clear them up (the /usr/local/ssl paths are a guess – point them at wherever your OpenSSL actually lives):
$ gcc -o pkread pkread.c -I/usr/local/ssl/include -L/usr/local/ssl/lib -lcrypto -lssl -ldl
Intro
After nearly four years of continuously running my AWS instance I got this scary email:
What to do?
The details
Since I never developed much AWS expertise (never needed to since it just worked) I was afraid to do anything. That’s sort of why I had kept it running for three and a half years – the last time I had to stop it didn’t work out so well.
Some terms
It helps to review the terms.
– image – that’s like the OS. It has a unique identifier. Mine is ami-03559b6a.
– instance – that’s a particular image running on a particular virtual server, identified by its own unique number. Mine is i-1737a673.
– retired image – the owner of the image decided to no longer make it available for new instances
What it all means
I run a retired image, so for instance I can’t right-click my instance and:
– launch another like this
What I did to keep my instance running
I didn’t! Before the retirement deadline I stopped my instance. That is a painful process because it takes hours in my case. The server becomes unavailable quickly enough, but the status is stuck in state shutting down for at least a couple hours. But, eventually, it does shut down.
Then, I start it again. That’s it!
When it starts, AWS puts it on different hardware, etc, so I guess literally it is a different instance now, running the same image. I re-associate my elastic IP, and all is good.
So when the “retirement” date came along, there was no outage of my instance as I had already stopped/started it and that was all that was needed.
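For what it’s worth, the same stop/start dance can be scripted with the AWS CLI. A sketch only – it assumes working credentials, and the elastic IP allocation ID is a made-up placeholder (the instance ID is the one from this post):

```shell
# Stop the instance flagged for retirement (this can take a long while)
aws ec2 stop-instances --instance-ids i-1737a673
aws ec2 wait instance-stopped --instance-ids i-1737a673

# Start it again; AWS places it on different (healthy) hardware
aws ec2 start-instances --instance-ids i-1737a673
aws ec2 wait instance-running --instance-ids i-1737a673

# Re-associate the elastic IP (allocation ID below is a placeholder)
aws ec2 associate-address --instance-id i-1737a673 \
  --allocation-id eipalloc-0123456789abcdef0
```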
Amazon’s documentation – as good as it is – isn’t that clear on this point, hence this blog posting…
Side preparations
In case I couldn’t restart my image I had taken snapshots of my EBS volumes, and prepared to run Amazon Linux, which looks pretty similar to CentOS which is what I run. But, boy, learning about VPC and routing was a pain. I had to set all that up and gain at least a rudimentary understanding of all that. None of that existed six years ago when I started out! It was much simpler back then.
What it looks like
To make things concrete, here is my view on the AWS admin portal of my instances.
Conclusion
Having your Amazon AWS instance retired is not as scary as it initially sounds. Basically, just stop and start it yourself and you’ll be fine.
References and Related
(2024 update) Reserved Instances are now passé! I recently began using the AWS Savings Plans which offers more flexibility.
Intro
Configuring your own micro SD card in order to install Raspbian on a Raspberry Pi is not so hard. Some of the instructions out there are a bit dated and make it out to be harder than it really is.
The details
For instance this site has some extra steps you don’t need: http://elinux.org/RPi_Easy_SD_Card_Setup.
I’d stick with the simplest possible approach, which turns out to be this set of instructions: https://diyhacking.com/install-raspbian-raspberry-pis-sd-card/
But all these instructions seem to refer to an IMG file which I don’t even see. The main thing is to download NOOBS (new out-of-box software) from https://www.raspberrypi.org/downloads/ .
Then, get the SD card formatter. But the latest version is 5, not 4, and it looks different from before – there are essentially no options!
SD Card Formatter
So go with Quick Format and it works out OK. Unless your SD card is used. Then choose Overwrite format. That also works but takes a lot longer.
Then there’s the step about copying the image file, which makes no sense with NOOBS because the image file is hidden, I think. Just extract all the files from the NOOBS zip file and copy them over to your E: drive, or whatever drive your SD card appears as.
Then follow the instructions on your Ras Pi display.
That’s it! I know because I just did it.
(non-)Reliability of SD Card
For the record, I’m in this situation because my old micro SD card just died. This is after running it continuously for a little over two years. Not very impressive in my book. Also for the record, the card came as part of a CanaKit.
Symptoms of SD card failure in my case:
– boot paused, then after 120 seconds spits out some warnings about MMC something or other.
– LED status light solid green
A word about NOOBS and Balena Etcher
Note that the Etcher people were a bit lazy, and refuse to support burning NOOBS to an SD card with Etcher! To repeat: Etcher and NOOBS are incompatible. The stated reason is that NOOBS is not a true image.
A word about downloading from https://www.raspberrypi.org/downloads/
Today my PC just was not up to the task of downloading the full NOOBS zip file. It got to about 800 MB and then kept saying Failed. I found I could restart it, and that would download another 10 MB or so before failing again. This was getting pretty boring so I simply went to a Raspberry Pi and downloaded it from the command line using wget. No problems…
I suspect that my PC’s AV software was running amok and interfering with this download. I haven’t messed with disabling it in awhile (the usual prescription), so I used the Ras Pi itself. Then I did an sftp from my PC to the Ras Pi to get the downloaded image. That was also unusually slow, but it did go through, eventually.
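One trick worth knowing for flaky downloads like this: wget can resume a partial file, so each retry picks up where the last attempt died instead of starting over. A sketch – the URL is a placeholder, not the real NOOBS link:

```shell
# -c (--continue) resumes a partially-downloaded file;
# -t 0 retries indefinitely on transient failures.
# Replace the URL with the actual NOOBS download link.
wget -c -t 0 https://example.com/NOOBS_latest.zip
```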
Intro
I have an ancient Redhat system which I’m not in a position to upgrade. I like to use curl to test web sites, but it’s getting to the point that my ancient version has no SSL versions in common with some secure web sites. I desperately wanted to upgrade curl while leaving the rest of the system as is. Is it even possible? How would you do it? All these things and more are explained in today’s riveting blog post.
The details
Redhat version
I don’t know the proper command so I do this:
$ cat /etc/system-release
Red Hat Enterprise Linux Server release 6.6 (Santiago)
The new curl
$ ./curl --help
--ssl Try SSL/TLS
--ssl-allow-beast Allow security flaw to improve interop
--ssl-no-revoke Disable cert revocation checks (WinSSL)
--ssl-reqd Require SSL/TLS
-2, --sslv2 Use SSLv2
-3, --sslv3 Use SSLv3
...
--tls-max <VERSION> Use TLSv1.0 or greater
--tlsauthtype <type> TLS authentication type
--tlspassword TLS password
--tlsuser <name> TLS user name
-1, --tlsv1 Use TLSv1.0 or greater
--tlsv1.0 Use TLSv1.0
--tlsv1.1 Use TLSv1.1
--tlsv1.2 Use TLSv1.2
--tlsv1.3 Use TLSv1.3
Now that’s an upgrade! How did we get to this point?
Well, I tried to get a curl RPM – seems like the appropriate path for a lazy system administrator, right? Well, not so fast. It’s not hard to find an RPM, but trying to install one showed a lot of missing dependencies, as in this example:
$ sudo rpm -i curl-minimal-7.55.1-2.0.cf.fc27.x86_64.rpm
warning: curl-minimal-7.55.1-2.0.cf.fc27.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID b56a8bac: NOKEY
error: Failed dependencies:
libc.so.6(GLIBC_2.14)(64bit) is needed by curl-minimal-7.55.1-2.0.cf.fc27.x86_64
libc.so.6(GLIBC_2.17)(64bit) is needed by curl-minimal-7.55.1-2.0.cf.fc27.x86_64
libcrypto.so.1.1()(64bit) is needed by curl-minimal-7.55.1-2.0.cf.fc27.x86_64
libcurl(x86-64) >= 7.55.1-2.0.cf.fc27 is needed by curl-minimal-7.55.1-2.0.cf.fc27.x86_64
libssl.so.1.1()(64bit) is needed by curl-minimal-7.55.1-2.0.cf.fc27.x86_64
curl conflicts with curl-minimal-7.55.1-2.0.cf.fc27.x86_64
So I looked at the libcurl RPM, but it had its own set of dependencies. Pretty soon it was looking like a full-time job just to get this thing installed!
I found the instructions mentioned in the reference, but they didn’t work for me exactly like that. Besides, I don’t have a working git program. So here’s what I did.
Compiling openssl
I downloaded the latest openssl, 1.1.0f, from https://www.openssl.org/source/ , untarred it, went into the openssl-1.1.0f directory, and then:
$ ./config -Wl,--enable-new-dtags --prefix=/usr/local/ssl --openssldir=/usr/local/ssl
$ make depend
$ make
$ sudo make install
So far so good.
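A quick sanity check at this point is to ask an openssl binary for its version banner. The freshly installed copy lands in /usr/local/ssl/bin/openssl; the bare command below just queries whichever build is first in your PATH:

```shell
# Print the version banner of an openssl binary.
# For the new build in this post you would run:
#   /usr/local/ssl/bin/openssl version
openssl version
```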
Compiling zlib
For zlib I was lazy and mostly followed the other guy’s commands. Went something like this:
$ lib=zlib-1.2.11
$ wget http://zlib.net/$lib.tar.gz
$ tar xzvf $lib.tar.gz
$ mv $lib zlib
$ cd zlib
$ ./configure
$ make
$ cd ..
$ CD=$(pwd)
No problems there…
Compiling curl
curl was tricky and when I followed the guy’s instructions I got the very problem he sought to avoid.
vtls/openssl.c: In function ‘Curl_ossl_seed’:
vtls/openssl.c:276: error: implicit declaration of function ‘RAND_egd’
make[2]: *** [libcurl_la-openssl.lo] Error 1
make[2]: Leaving directory `/usr/local/src/curl/curl-7.55.1/lib'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/usr/local/src/curl/curl-7.55.1/lib'
make: *** [all-recursive] Error 1
I looked at the source and decided that what might help is to add a hint where the openssl stuff could be found.
Backing up a bit, I got the source from https://curl.haxx.se/download.html. I chose the file curl-7.55.1.tar.gz. Untar it, go into the curl-7.55.1 directory,
$ ./buildconf
and then – here is the single most important point in the whole blog – configure it thusly:
$ PKG_CONFIG_PATH=/usr/local/ssl/lib/pkgconfig LIBS="-ldl" ./configure --with-ssl=/usr/local/ssl
So my insight was to add the --with-ssl=/usr/local/ssl to the configure command.
Then of course you make it:
$ make
and maybe even install it:
$ make install
This put curl into /usr/local/bin. I actually made a sym link and made this the default version with this kludge (the following commands were run as root):
$ cd /usr/bin; mv curl{,.orig}; ln -s /usr/local/bin/curl
That’s it! That worked and produced a working, modern curl.
By the way it mentions TLS1.3, but when you try to use it:
curl: (4) OpenSSL was built without TLS 1.3 support
It’s a no go – TLS 1.3 support only arrived in OpenSSL 1.1.1, and this build used 1.1.0f. But at least TLS1.2 works just fine in this version.
One other thing – put shared libraries in a common area
I copied my compiled curl from Redhat to a SLES 11 SP 3 system. It didn’t quite run. Only thing is, it was missing the openssl libraries. So I guess it’s also important to copy over
libssl.so.1.1
libcrypto.so.1.1
to /usr/lib64 from /usr/local/lib64.
Once I did that, it worked like a charm!
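The tool that exposes this kind of problem is ldd, which lists every shared library a binary wants and marks any it can’t resolve as “not found”. You’d point it at the copied curl; /bin/ls is used below only so the example runs anywhere:

```shell
# List the shared-library dependencies of a binary.
# Libraries the loader cannot resolve show up as "not found".
# For the curl in this post you would run: ldd /usr/local/bin/curl
ldd /bin/ls
```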
Conclusion
We show how to compile the latest versions of openssl and curl on an older Redhat 6.x OS. The motivation for doing so was to remain compatible with web sites which have already dropped, or soon will drop, their support for TLS 1.0. The compiled curl and openssl support TLS 1.2, which should keep them useful for a long while.