How to POST with curl

Intro
For the hard-core curl fans out there, I find these examples useful.

Example 1
Posting in-line form data, e.g., to an API:

$ curl -d 'hi there' https://drjohns.com/api/example

Well, that might work, but I normally add more switches: -i to include the response headers, -k to skip certificate verification, -s to suppress the progress meter and -v for verbose output.

Example 2

$ curl -iksv -d 'hi there' https://drjohns.com/api/example | more

Perhaps you have JSON data to POST and it would be awkward or impossible to stuff into the command line. You can read it from a file like this:

Example 3

$ curl -iksv -d @json.txt https://drjohns.com/api/example | more
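
When POSTing JSON, many APIs also insist on a Content-Type header, which curl's -H switch supplies. A sketch along the same lines as the example above:

$ curl -iksv -H 'Content-Type: application/json' -d @json.txt https://drjohns.com/api/example | more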

Perhaps you have to fake a user agent to avoid a web application firewall. It actually suffices to identify with the -A Mozilla/4.0 switch like this:

Example 4

$ curl -A Mozilla/4.0 -iksv -d @json.txt https://drjohns.com/api/example | more

Suppose you are behind a proxy. Then you can tack on the -x switch, as in this next example.

Example 5

$ curl -A Mozilla/4.0 -x myproxy:8080 -iksv -d @json.txt https://drjohns.com/api/example | more
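
If that proxy demands authentication, curl's -U (--proxy-user) switch covers it; a sketch with hypothetical credentials:

$ curl -A Mozilla/4.0 -x myproxy:8080 -U user:password -iksv -d @json.txt https://drjohns.com/api/example | more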

Those are the main ones I use for POSTing data while seeing what is going on. You can also cap the total request time with the -m (--max-time) switch.

Postman
Just overhearing people talk, I believe that “normal” people use a tool called Postman to do similar things: POST XML, SOAP or JSON data to an endpoint. I haven’t had a need to use it or even to look into it myself, yet.

Conclusion
We have documented some useful switches in curl. POSTing data occurs when using APIs, e.g., RESTful APIs, so these techniques are useful to master. Roadblocks thrown up by web application firewalls or proxy servers can also be easily overcome.


No Internet, secure WiFi status message in Windows 10

Intro
Finding out how Windows decides whether or not there is an Internet connection can be a challenge: an Internet search on the topic is composed of words so common that they are used in many other contexts. I have to give credit to someone else who found most of these pertinent links, which help explain how Windows decides whether or not your PC has an Internet connection.

What they don’t tell you
I think Microsoft runs a lot more tests than they have documented. In my opinion, based on observation, in addition to the sites they recommend whitelisting, also whitelist

www.msftconnecttest.com

Some PCs get stuck in a loop requesting www.msftconnecttest.com/connecttest.txt indefinitely, which isn’t good for anyone.

Here’s one they don’t mention, of the same ilk:

ipv6.msftconnecttest.com/connecttest.txt

I’m inclined to just leave that one alone, unless you really are fully running on IPv6.

Now if you have a PAC file, what you’re going to see are accesses for
<PAC-file-address>/connecttest.txt

I don’t think that one’s documented either. I’m not yet sure how best to have the PAC file web server respond, where best means the reply most likely to make the PC decide that yes, it really does have an Internet connection.
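
For reference, my understanding is that the standard NCSI probe expects an HTTP 200 response whose body is the plain text Microsoft Connect Test, which you can inspect from a Linux command line:

$ curl -i http://www.msftconnecttest.com/connecttest.txt

So one plausible approach, which I have not verified, is to have the PAC file web server return that same body for /connecttest.txt.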

References and related
This Pulse Secure article is pretty good. You start with an Internet connection, then launch the Pulse Secure vpn client, then find you are told there is no longer an Internet connection. It explains why that might be, but in my opinion it is incomplete, as it does not even consider the case where an authenticating proxy is the sole gateway to the Internet:
https://kb.pulsesecure.net/articles/Pulse_Secure_Article/KB43805

These are two more articles about VPN tunneling:
https://community.pulsesecure.net/t5/Pulse-Desktop-Clients/Pulse-Secure-blocks-Windows-10-apps-from-internet-access/td-p/11944
https://docs.pulsesecure.net/WebHelp/PCS/8.3R1/Home.htm#PCS/PCS_AdminGuide_8.3/About_VPN_Tunneling.htm

Network Location Awareness (NLA) and Network Connection Status Indicator (NCSI) are explained in these articles:
https://support.microsoft.com/en-us/help/4494446/an-internet-explorer-or-edge-window-opens-when-your-computer-connects
https://support.microsoft.com/en-us/help/2778122/using-authenticated-proxy-servers-together-with-windows-8


Raspberry Pi Recovery Mode or interrupting the boot process

Intro
If you installed Raspbian from the NOOBS distribution as I do, then you may occasionally “blow up” your installation, as I just have! You have an out, sort of, short of re-imaging the disk, though with about the same impact.

To interrupt the boot process and enter recovery mode, attach a USB keyboard and repeatedly hit the Shift key. You should come to the NOOBS OS install selection screen. Just re-install Raspbian…

Symptoms
When I powered up, I got the initial multi-color screen. Then a two-line text message popped up – too quickly to be read, then a grayish screen, then it split into a lower and upper part, then both halves faded away and there it stayed… At that point it was not responsive to any keyboard inputs or mouse clicks.

Conclusion
While doing my advanced slide show and rotating display project I somehow managed to blow up my OS. Finding the way to interrupt the boot-up was not so easy, so I am amplifying here the answer that worked for me: repeatedly hit the Shift key during the boot, until you see the NOOBS image selector screen.


apache as reverse proxy under SLES

Intro
I just got my SLES 12 SP4 server, a type of commercial Linux, and needed to set up a secure reverse proxy in a hurry. There are a lot of suggestions out there. I share what worked for me. The version of apache that is supplied, for the record, is apache 2.4.

The most significant error

[Tue Aug 13 15:26:24.321549 2019] [proxy:warn] [pid 5992] [client 127.0.0.1:40002] AH01144: No protocol handler was valid for the URL /. If you are using a DSO version of mod_proxy, make sure the proxy submodules are included in the configuration using LoadModule.

The solution
In /etc/sysconfig/apache2 (in SLES this is a macro that sets up apache with the needed loadmodule statements) I needed a statement like the following:

APACHE_MODULES="actions alias auth_basic authn_file authz_host authz_groupfile authz_core authz_user autoindex cgi dir env
expires include log_config mime negotiation setenvif ssl socache_shmcb userdir reqtimeout authn_core proxy proxy_html proxy_http xml2enc"

In my first crack at it I only had mention of modules to include up to proxy. I needed to add proxy_html and proxy_http.
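
If memory serves, SLES also ships the a2enmod helper, which appends to this same APACHE_MODULES list, so something like the following should be equivalent to editing the file by hand (a sketch; I did not do it this way):

$ sudo a2enmod proxy
$ sudo a2enmod proxy_http
$ sudo a2enmod proxy_html
$ sudo a2enmod xml2enc
$ sudo systemctl restart apache2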

In that same file you need a statement like this as well:

APACHE_SERVER_FLAGS="SSL"

The highlights of my virtual host file, based on the ssl template, are:

<VirtualHost *:443>
# https://www.centosblog.com/configure-apache-https-reverse-proxy-centos-linux/
<Location />
            ProxyPass https://10.1.2.181/
            ProxyPassReverse https://10.1.2.181/
</Location>
 
        #  General setup for the virtual host
##      DocumentRoot "/srv/www/htdocs"
        #ServerName www.example.com:443
        #ServerAdmin webmaster@example.com
        SSLProxyEngine on
        ErrorLog /var/log/apache2/error_log
        TransferLog /var/log/apache2/access_log
 
        #   SSL Engine Switch:
        #   Enable/Disable SSL for this virtual host.
        SSLEngine on
# from https://superuser.com/questions/829793/how-to-force-all-apache-connections-to-use-tlsv1-1-or-tlsv1-2 -DrJ 8/19
        SSLProtocol all -SSLv2 -SSLv3 -TLSv1
#SSLCipherSuite HIGH:!aNULL:!MD5:!RC4
        SSLCipherSuite ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
 
        #   You can use per vhost certificates if SNI is supported.
        SSLCertificateFile /etc/apache2/ssl.crt/vhost-example.crt
        SSLCertificateKeyFile /etc/apache2/ssl.key/vhost-example.key
        SSLCertificateChainFile /etc/apache2/ssl.crt/vhost-example-chain.crt
 
 
        #   Per-Server Logging:
        #   The home of a custom SSL log file. Use this when you want a
        #   compact non-error SSL logfile on a virtual host basis.
        CustomLog /var/log/apache2/ssl_request_log   ssl_combined
 
</VirtualHost>

except that I used valid paths to my certificate, key and CA chain files.

Errors you may encounter
$ curl -i -k https://localhost/

HTTP/1.1 500 Proxy Error
Date: Thu, 15 Aug 2019 19:10:13 GMT
Server: Apache
Content-Length: 442
Connection: close
Content-Type: text/html; charset=iso-8859-1
 
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>500 Proxy Error</title>
</head><body>
<h1>Proxy Error</h1>
The proxy server could not handle the request <em><a href="/">GET&nbsp;/</a></em>.<p>
Reason: <strong>Error during SSL Handshake with remote server</strong></p><p />
<p>Additionally, a 500 Internal Server Error
error was encountered while trying to use an ErrorDocument to handle the request.</p>
</body></html>

I traced this error to the fact that initially I did not tell apache to ignore certificate name and other related mismatches. So inserting these directives cured that problem:

SSLProxyVerify none
SSLProxyCheckPeerCN off
SSLProxyCheckPeerName off
SSLProxyCheckPeerExpire off

This is discussed in https://stackoverflow.com/questions/18872482/error-during-ssl-handshake-with-remote-server

I finally got past the SSL errors but then I still had a 404 error and an xml2enc error.

When I ran service apache2 status I saw this:

Aug 15 16:09:31 lusytp008850388 start_apache2[28539]: [Thu Aug 15 16:09:31.879604 2019] [proxy_html:notice] [pid 28539] AH01425: I18n support in mod_proxy_html requires mod_xml2enc. Without it, non-ASCII characters in proxied pages are likely to display incorrectly.

Not certain whether this was important, I simply decided to heed the advice, so that’s when I added xml2enc to the list of modules to enable in /etc/sysconfig/apache2:

APACHE_MODULES="actions alias auth...proxy proxy_html proxy_http xml2enc"

The next error was:

HTTP/1.1 404 Not Found

And that happened when I put in a URI that worked just fine if I entered it directly in a browser hitting the web server.

I had a hunch that this could occur if the web server was finicky and insisted on being addressed by a certain name. So originally I had statements like this:

            ProxyPass https://10.1.2.181/
            ProxyPassReverse https://10.1.2.181/

I changed it to

            ProxyPass https://myserveralias.example.com/
            ProxyPassReverse https://myserveralias.example.com/

except in place of myserveralias.example.com I put in what I felt the web site operators would have used – the known working alias for direct access to this web site. Of course I first made sure that my apache server could resolve myserveralias.example.com to 10.1.2.181, which it could.

And, voila, no more 404 error!

Conclusion
An SSL reverse proxy to an SSL back-end web server was set up under SLES 12 SP4, using TLS 1.2 and apache 2.4.23, in other words, pretty current stuff.

References and related
Compiling apache2.4


Solution to NPR puzzle using Raspberry Pi

Intro
Take a common five-letter word. If you add an “e” to the end you’ll get a common six-letter word. Or add an “e” after the second letter to get a different six-letter word, or an “e” after the fourth letter. What word is it?

The technique
I used the strange dictionary built in to my Raspberry Pi, in /usr/share/dict/american-english.

Key command
$ egrep '^[a-z]{5,6}$' /usr/share/dict/american-english > five-six

That leaves us with 11897 words (run wc -l five-six to count them).

Now just look at the six-letter words containing an “e” at the end:

$ egrep '^[a-z]{5}e$' five-six > e-at-end

Down to 784 words.

Now strip off the terminal “e”.

(steps skipped)
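
One way to fill in those skipped steps: strip the final “e” from each six-letter word, then keep only the stems which are themselves five-letter words on our list (a sketch, not necessarily the exact commands used):

$ sed 's/e$//' e-at-end | sort > stems
$ egrep '^[a-z]{5}$' five-six | sort | comm -12 stems - > candidates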

And then…

Candidate word list

ameba
ampul
aorta
avers
blond
brows
cloth
corps
demur
expos
fauna
final
flora
fondu
grill
hears
hydra
karat
larva
loath
local
madam
moral
pleas
psych
regal
scrap
sever
sooth
spars
strip
swath
teeth
tibia
uvula
vulva
zombi

You can scan by hand – the answer jumps out at you.

Conclusion
Another NPR Weekend Edition puzzle is solved by use of some simple linux commands.

References and related
Another NPR puzzle similarly solved.


Raspberry Pi photo frame using your pictures on your Google Drive

Intro
All my spouse’s digital photo frames are either broken or nearly broken – probably she got them from garage sales. Regardless, they spend 99% of the time black. Now, since I had bought that Raspberry Pi PiDisplay a while back, and it is underutilized, and I know a thing or two about linux, I felt I could create a custom photo frame with things I already have lying around – a Raspberry Pi 3, a PiDisplay, and my personal Google Drive. We make a point to copy all our cameras’ pictures onto the Google Drive, which we do the old-fashioned, by-hand way. After 17 years of digital photos we have about 40,000 of them, over 200 GB.

So I also felt obliged to create features you will never have in a commercial product, to make the effort worthwhile. I thought, what about randomly picking a few for display from amongst all the pictures, displaying that subset for a few days, and then moving on to a new randomly selected sample of images, etc? That should produce a nice review of all of them over time, eventually. You need an approach like that because you will never get to the end if you just try to display 40000 images in order!

The scripts
Here is the master file which I call master.sh.

#!/bin/sh
# DrJ 8/2019
# call this from cron once a day to refresh the random slideshow
RANFILE="random.list"
NUMFOLDERS=20
DISPLAYFOLDER="/home/pi/Pictures"
DISPLAYFOLDERTMP="/home/pi/Picturestmp"
SLEEPINTERVAL=3
DEBUG=1
STARTFOLDER="MaryDocs/Pictures and videos"
 
echo "Starting master process at "`date`
 
mkdir $DISPLAYFOLDERTMP
 
#listing of all Google drive files starting from the picture root
if [ $DEBUG -eq 1 ]; then echo Listing all files from Google drive; fi
rclone ls remote:"$STARTFOLDER" > files
 
# filter down to only jpegs, lose the docs folders
if [ $DEBUG -eq 1 ]; then echo Picking out the JPEGs; fi
egrep '\.[jJ][pP][eE]?[gG]$' files |awk '{$1=""; print substr($0,2)}'|grep -i -v /docs/ > jpegs.list
 
# throw NUMFOLDERS or so random numbers for picture selection, select triplets of photos by putting
# names into a file
if [ $DEBUG -eq 1 ]; then echo Generate random filename triplets; fi
./random-files.pl -f $NUMFOLDERS -j jpegs.list -r $RANFILE
 
# copy over these 60 jpegs
if [ $DEBUG -eq 1 ]; then echo Copy over these random files; fi
cat $RANFILE|while read line; do
  rclone copy remote:"${STARTFOLDER}/$line" $DISPLAYFOLDERTMP
  sleep $SLEEPINTERVAL
done
 
# kill any qiv slideshow
if [ $DEBUG -eq 1 ]; then echo Killing old qiv slideshow; fi
pkill -9 -f qiv
 
# remove old pics
if [ $DEBUG -eq 1 ]; then echo Removing old pictures; fi
rm -rf $DISPLAYFOLDER
 
mv $DISPLAYFOLDERTMP $DISPLAYFOLDER
 
 
#run looping qiv slideshow on these pictures
if [ $DEBUG -eq 1 ]; then echo Start qiv slideshow in background; fi
cd $DISPLAYFOLDER ; nohup ~/qiv.sh &
 
if [ $DEBUG -eq 1 ]; then echo "And now it is "`date`; fi

Needless to say, but I’d better say it, the STARTFOLDER in this script is particular to my own Google drive. Customize it as appropriate for your situation.

Then qiv (quick image viewer) is called with a bunch of arguments and some trickery to ensure proper display of files with spaces in the filenames (anathema for Linux, but my spouse doesn’t know that so I gotta deal with it). I call this script qiv.sh.

#!/bin/sh
# -f : full-screen; -R : disable deletion; -s : slideshow; -d : delay <secs>; -i : status-bar;
# -m : zoom; [-r : randomize]
# this doesn't handle filenames with spaces:
##cd /media; qiv -f -R -s -d 5 -i -m `find /media -regex ".+\.jpe?g$"`
# this one does:
export DISPLAY=:0
if [ "$1" = "l" ]; then
# print out proposed filenames
  find . -regex ".+\.[jJ][pP][eE]?[gG]$"
else
# args: f fullscreen d delay s slideshow l autorotate R readonly I statusbar
# i nostatusbar m maxspect
  find . -regex ".+\.[jJ][pP][eE]?[gG]$" -print0|xargs -0 qiv -fRsmil -d 5
fi

Here is the perl script which generates the random numbers and associates them to the file listing we’ve just made with rclone, random-files.pl.

#!/usr/bin/perl
use Getopt::Std;
my %opt=();
getopts("df:j:r:",\%opt);
$nofolders = $opt{f} ? $opt{f} : 20;
$DEBUG = $opt{d} ? 1 : 0;
$jpegs = $opt{j} ? $opt{j} : "jpegs.list";
$ranpicfile = $opt{r} ? $opt{r} : "jpegs-random.list";
print "d,f,j,r: $opt{d}, $opt{f}, $opt{j}, $opt{r}\n" if $DEBUG;
open(JPEGS,$jpegs) || die "Cannot open jpegs listing file $jpegs!!\n";
@jpegs = <JPEGS>;
# remove newline characters; chomp in list context returns the number of
# newlines removed, which here equals the number of pictures
$nopics = chomp @jpegs;
open(RAN,"> $ranpicfile") || die "Cannot open random picture file $ranpicfile!!\n";
for($i=0;$i<$nofolders;$i++) {
  $t = int(rand($nopics-2));
  print "random number is: $t\n" if $DEBUG;
  ($dateTime) = $jpegs[$t] =~ /(\d{8}_\d{6})/;
  if ($dateTime) {
    print "dateTime\n" if $DEBUG;
  }
  $priorPic = $jpegs[$t-2];
  $Pic = $jpegs[$t];
  $postPic = $jpegs[$t+2];
  print RAN qq($priorPic
$Pic
$postPic
);
}
close(RAN);

Note that to display 60 pictures only 20 random numbers are used; the picture two prior and the picture two after the one selected by each random number are also displayed. This helps to provide, hopefully, some context to what is being shown without showing all those duplicate pictures that everyone takes nowadays.

There is an attempt to favor recently uploaded pictures, but I really haven’t perfected that part of master.sh; it’s more of a thought at this point.

My crontab entries take care of starting a slideshow upon first boot as well as a daily pick of 60 new random pictures!

@reboot sleep 40; cd ~/Pictures; ~/qiv.sh >> ~/qiv.log 2>&1
12 10 * * * ~/master.sh >> ~/master.log 2>&1

Use crontab -e to edit your crontab file.

qiv – an easy install
To install qiv

$ sudo apt-get install qiv

Rclone shown in some detail
The real magic is tapping into the Google Drive, which is done with rclone. There are older packages but they are awful by comparison so don’t waste your time on any other package.

$ sudo apt-get install rclone
$ rclone config

2019/08/05 20:22:42 NOTICE: Config file "/home/pi/.config/rclone/rclone.conf" not found - using defaults
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
 1 / Amazon Drive
   \ "amazon cloud drive"
 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
   \ "s3"
 3 / Backblaze B2
   \ "b2"
 4 / Dropbox
   \ "dropbox"
 5 / Encrypt/Decrypt a remote
   \ "crypt"
 6 / Google Cloud Storage (this is not Google Drive)
   \ "google cloud storage"
 7 / Google Drive
   \ "drive"
 8 / Hubic
   \ "hubic"
 9 / Local Disk
   \ "local"
10 / Microsoft OneDrive
   \ "onedrive"
11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
   \ "swift"
12 / Yandex Disk
   \ "yandex" 
Storage>7
 
Google Application Client Id
Leave blank normally.
Enter a string value. Press Enter for the default ("").
client_id>
Google Application Client Secret
Leave blank normally.
Enter a string value. Press Enter for the default ("").
client_secret>
Remote config
Use auto config?
 * Say Y if not sure
 * Say N if you are working on a remote or headless machine or Y didn't work
y) Yes
n) No
y/n> N
If your browser doesn't open automatically go to the following link: https://accounts.google.com/o/oauth2/auth?client_id=202264815644.apps.googleusercontent.com&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&response_type=code&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive&state=07ab6a457efc9384772f919dca93375
Log in and authorize rclone for access

You sign in to your Google account with a regular browser.

After sign-in you see:

rclone wants to access your Google Account
<your_account>@gmail.com
This will allow rclone
to:

See, edit, create, and delete all of your Google Drive files

Make sure you trust rclone

After clicking Allow you get:

Please copy this code, switch to your application and paste it there:
 
Enter verification code>4/nQEXJZOTdP_asMs6UQZ5ucs6ecvoiLPelQbhI76rnuj4sFjptxbjm7w
--------------------
[remote]
client_id =
client_secret =
token = {"access_token":"ya29.Il-KB3eniEpkdUGhwdi8XyZyfBFIF2ahRVQtrr7kR-E2lIExSh3C1j-PAB-JZucL1j9D801Wbh2_OEDHthV2jk_MsrKCMiLSibX7oa_YtFxts-V9CxRRUirF1_kPHi5u_Q","token_type":"Bearer","refresh_token":"1/MQP8jevISJL1iEXH9gaNc7LIsABC-92TpmqwtRJ3zV8","expiry":"2019-09-21T08:34:19.251821011-04:00"}
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
Current remotes:
 
Name                 Type
====                 ====
remote               drive
 
e) Edit existing remote
n) New remote
d) Delete remote
s) Set configuration password
q) Quit config
e/n/d/r/s/q>q

Note you can very well keep the root folder id blank. In my case we store all our pictures in one top-level folder and the nested folders get pretty deep, plus there’s a busload of other things on the drive, so I wanted to give rclone the best possible shot at running well. Still, listing our 40,000+ pictures takes 90 seconds or so.

Goofed up your config of rclone? No worries. Remove ~/.config/rclone and start over.
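
That is:

$ rm -rf ~/.config/rclone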

Don’t forget to make all these scripts executable (chmod +x <script_name>) or you will end up seeing messages like this:

./master.sh
-bash: ./master.sh: Permission denied

Some noteworthy rclone commands
rclone ls remote: – lists all files, going recursively; no problem to pipe the output through more
rclone lsd remote: – lists directories in the top level of the drive
rclone copy remote:"MaryDocs/Pictures and videos/Shutterfly books collection of photos/JJH birth photos/img2165.jpg" . – copies that picture to the current directory (does not create the directory hierarchy)

Do a complete directory listing, capture the results in a file and see how long it took:
$ time rclone ls remote: > lsf-complete

real    1m12.201s
user    0m15.270s
sys     0m1.816s

My initial thought was to do a remote mount of the Google Drive onto a Raspberry Pi mount point, but it’s just so slow that it really provides no advantage to do it that way.
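
For reference, that mount approach would look something like this (a sketch; the mount point is arbitrary and rclone mount requires FUSE):

$ mkdir -p ~/gdrive
$ rclone mount remote: ~/gdrive &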

Some encountered issues
Well, I blew up my crontab, which in all my years working with linux/unix I’ve never done before. But I managed to fix it.

Prior to discovering rclone I made the mistake of using gdrivefs to create a mounted Google Drive – sounds great in principle, right? What a disaster. The files’ binary data were not correctly preserved when accessed through the mount, though the file size was! I have never before encountered mounting software that corrupted files, but this piece of garbage does. One way to detect corruption in a binary file is to do a cksum (or md5sum; just be consistent and use one or the other) of the source file and the destination version of the same file. The result should be the same number.
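
A quick check along those lines, with hypothetical file names (the two checksums should agree):

$ cksum original.jpg copy-from-mount.jpg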

Imagined but avoided issue: JPEG orientation

I had prepared a whole python program to orient my pictures correctly, but lo and behold I “discovered” that the -l switch in qiv does that for you! So I actually ripped that whole unnecessary step out.

Conclusion
Re-purposing equipment I had lying around (a Raspberry Pi 3, a Pi Display, and 40,000 JPEG images on Google Drive), I put together a novel photo frame slideshow which randomly displays a different set of 60 pictures each day. It’s a nice way for us to be exposed to our collection of 17+ years of digital photos.

qiv really is a quick image viewer, i.e., the slideshow runs clean, like a real photo frame.

Long Todo list

  • Improve selection of recent pictures if we’ve just uploaded a bunch of pictures from our smartphones.
  • Hey, how about also showing some of those short videos we also shot with our camera phones and uploaded to Google Drive? And while we’re at it, re-purposing those cheap USB speakers I bought for RetroPi gaming to get the sound, or play a soundtrack!?
  • I realize that although the selection of the 20 anchor pictures is initially random, when they plus the 40 additional photos are presented for display, additional order is imposed by the shell’s expansion of the regex. This has a tendency to make the pictures more chronologically organized than they would be by chance.

References and related
PiDisplay

RetroPi, the gaming emulation project for which I bought economical USB speakers.

The rclone home page.

A detailed write-up on using the pipresents program, where we had a Raspberry Pi drive a mixed media display (pictures and videos) for a kiosk.


F5 Big-IP: When your virtual server does not present your chain certificate

Intro
While I was on vacation someone replaced a certificate which had expired on the F5 Big-IP load balancer. Maybe they were not quite as careful as I would like to hope I would have been. In any case, shortly afterwards our SiteScope monitoring reported there was an untrusted server certificate chain. It took me quite some digging to get to the bottom of it.

The details
Well, the web site came up just fine in my browser. I checked it with SSL Labs and its grade was capped at B because of problems with the server certificate chain. I also independently confirmed using openssl that no intermediate certificate was being presented by this virtual server. To see what that looks like, with an example of this problem kindly provided by badssl.com, do:

    $ openssl s_client -showcerts -connect incomplete-chain.badssl.com:443

    CONNECTED(00000003)
    depth=0 /C=US/ST=California/L=San Francisco/O=BadSSL Fallback. Unknown subdomain or no SNI./CN=badssl-fallback-unknown-subdomain-or-no-sni
    verify error:num=20:unable to get local issuer certificate
    verify return:1
    depth=0 /C=US/ST=California/L=San Francisco/O=BadSSL Fallback. Unknown subdomain or no SNI./CN=badssl-fallback-unknown-subdomain-or-no-sni
    verify error:num=27:certificate not trusted
    verify return:1
    depth=0 /C=US/ST=California/L=San Francisco/O=BadSSL Fallback. Unknown subdomain or no SNI./CN=badssl-fallback-unknown-subdomain-or-no-sni
    verify error:num=21:unable to verify the first certificate
    verify return:1
    ---
    Certificate chain
     0 s:/C=US/ST=California/L=San Francisco/O=BadSSL Fallback. Unknown subdomain or no SNI./CN=badssl-fallback-unknown-subdomain-or-no-sni
       i:/C=US/ST=California/L=San Francisco/O=BadSSL/CN=BadSSL Intermediate Certificate Authority
    -----BEGIN CERTIFICATE-----
    MIIE8DCCAtigAwIBAgIJAM28Wkrsl2exMA0GCSqGSIb3DQEBCwUAMH8xCzAJBgNV
    BAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMRYwFAYDVQQHDA1TYW4gRnJhbmNp
    ...
    HJKvc9OYjJD0ZuvZw9gBrY7qKyBX8g+sglEGFNhruH8/OhqrV8pBXX/EWY0fUZTh
    iywmc6GTT7X94Ze2F7iB45jh7WQ=
    -----END CERTIFICATE-----
    ---
    Server certificate
    subject=/C=US/ST=California/L=San Francisco/O=BadSSL Fallback. Unknown subdomain or no SNI./CN=badssl-fallback-unknown-subdomain-or-no-sni
    issuer=/C=US/ST=California/L=San Francisco/O=BadSSL/CN=BadSSL Intermediate Certificate Authority
    ...
        Verify return code: 21 (unable to verify the first certificate)

So you get that message about being unable to verify the first certificate.

Here’s the weird thing: the certificate in question was issued by Globalsign, and we have used them for years, so we had the intermediate certificate configured already in the SSL client profile. The so-called chain certificate was GlobalsignIntermediate. But it wasn’t being presented. What the heck? Then I checked someone else’s Globalsign certificate and found the same issue.

Then I began to get suspicious about the certificate. I checked the issuer more carefully and found that it wasn’t from the intermediate we had been using all these past years. Globalsign changed their intermediate certificate! The new one dates from November 2018 and expires in 2028.
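
To check which intermediate a given server certificate actually needs, you can inspect its issuer with openssl (server.crt being a hypothetical local copy of the certificate):

    $ openssl x509 -in server.crt -noout -issuer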

And, to compound matters, F5 “helpfully” does not complain and simply does not send the wrong intermediate certificate we had specified in the SSL client profile. It just sends no intermediate certificate at all to accompany the server certificate.

Conclusion
The case of the missing intermediate certificate was resolved. It is not the end of the world to miss an intermediate certificate, but on the other hand it is not professional either. Sooner or later it will get you into trouble.

References and related
badssl.com is a great resource.
My favorite openssl commands can be very helpful.


Can you go from Terminal 1 to Terminal 2 in O’Hare without going through the security lines again?

Yes.

Yes, at least if you walk. Not sure about other transportation options. Count on about 13 minutes for the walk.

Not sure why this information is so hard to find… Sometimes you land at one terminal and have to take a puddle jumper to a regional airport out of another, and you’d really like to know: is 50 minutes or whatever going to be enough?

This is probably true for most major airports. It seems, for instance, that at Newark Liberty you can also move between terminals A, B and C without going through security once again. I thought JFK may have been an exception to this rule, but I’m not sure…


How to test if a web site requires a client certificate

Intro
I cannot find a link on the Internet for this, yet I think some admins would appreciate a relatively simple test to know: is this a web site which requires a client certificate to work? The errors generated in a browser may be very generic in these situations. I see many ways to offer help, from a recipe to a tool to some pointers. I’m not yet sure how I want to proceed!

Why would a site require a client CERT? Most likely as a form of client authentication.

Pointers for the DIY crowd
Badssl.com plus access to a linux command line – such as using a Raspberry Pi I so often write about – will do it for you guys.

The Client Certificate section of badssl.com has most of what you need. The page is getting big; look for the Client Certificate heading.

So as a big timesaver badssl.com has created a client certificate for you which you can use to test with. Download it as follows.

Go to your linux prompt and do something like this:
    $ wget https://badssl.com/certs/badssl.com-client.pem

badssl.com has a web page you can test with which only shows success if you access it using a client certificate, https://client.badssl.com/

To see how this works, try to access it the usual way, without supplying a client CERT:

    $ curl -i -k https://client.badssl.com/

    HTTP/1.1 400 Bad Request
    Server: nginx/1.10.3 (Ubuntu)
    Date: Thu, 20 Jun 2019 17:53:38 GMT
    Content-Type: text/html
    Content-Length: 262
    Connection: close
     
    <html>
    <head><title>400 No required SSL certificate was sent</title></head>
    <body bgcolor="white">
    <center><h1>400 Bad Request</h1></center>
    <center>No required SSL certificate was sent</center>
    <hr><center>nginx/1.10.3 (Ubuntu)</center>
    </body>
    </html>

Now try the same thing, this time using the client CERT you just downloaded:

    $ curl -v -i -k -E ./badssl.com-client.pem:badssl.com https://client.badssl.com/

    * About to connect() to client.badssl.com port 443 (#0)
    *   Trying 104.154.89.105... connected
    * Connected to client.badssl.com (104.154.89.105) port 443 (#0)
    * Initializing NSS with certpath: sql:/etc/pki/nssdb
    * warning: ignoring value of ssl.verifyhost
    * skipping SSL peer certificate verification
    * NSS: client certificate from file
    *       subject: CN=BadSSL Client Certificate,O=BadSSL,L=San Francisco,ST=California,C=US
    *       start date: Nov 16 05:36:33 2017 GMT
    *       expire date: Nov 16 05:36:33 2019 GMT
    *       common name: BadSSL Client Certificate
    *       issuer: CN=BadSSL Client Root Certificate Authority,O=BadSSL,L=San Francisco,ST=California,C=US
    * SSL connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
    * Server certificate:
    *       subject: CN=*.badssl.com,O=Lucas Garron,L=Walnut Creek,ST=California,C=US
    *       start date: Mar 18 00:00:00 2017 GMT
    *       expire date: Mar 25 12:00:00 2020 GMT
    *       common name: *.badssl.com
    *       issuer: CN=DigiCert SHA2 Secure Server CA,O=DigiCert Inc,C=US
    > GET / HTTP/1.1
    > User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.27.1 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
    > Host: client.badssl.com
    > Accept: */*
    >
    < HTTP/1.1 200 OK
    HTTP/1.1 200 OK
    < Server: nginx/1.10.3 (Ubuntu)
    Server: nginx/1.10.3 (Ubuntu)
    < Date: Thu, 20 Jun 2019 17:59:08 GMT
    Date: Thu, 20 Jun 2019 17:59:08 GMT
    < Content-Type: text/html
    Content-Type: text/html
    < Content-Length: 662
    Content-Length: 662
    < Last-Modified: Wed, 12 Jun 2019 15:43:39 GMT
    Last-Modified: Wed, 12 Jun 2019 15:43:39 GMT
    < Connection: keep-alive
    Connection: keep-alive
    < ETag: "5d011dab-296"
    ETag: "5d011dab-296"
    < Cache-Control: no-store
    Cache-Control: no-store
    < Accept-Ranges: bytes
    Accept-Ranges: bytes
     
    <
    <!DOCTYPE html>
    <html>
    <head>
      <meta name="viewport" content="width=device-width, initial-scale=1">
      <link rel="shortcut icon" href="/icons/favicon-green.ico"/>
      <link rel="apple-touch-icon" href="/icons/icon-green.png"/>
      <title>client.badssl.com</title>
      <link rel="stylesheet" href="/style.css">
      <style>body { background: green; }</style>
    </head>
    <body>
    <div id="content">
      <h1 style="font-size: 12vw;">
        client.<br>badssl.com
      </h1>
    </div>
     
    <div id="footer">
      This site requires a <a href="https://en.wikipedia.org/wiki/Transport_Layer_Security#Client-authenticated_TLS_handshake">client-authenticated</a> TLS handshake.
    </div>
     
    </body>
    </html>
    * Connection #0 to host client.badssl.com left intact
    * Closing connection #0

No more 400 error status – that looks like success to me. Note that we had to provide the password for our client CERT, which they kindly tell us is badssl.com.

Here’s an example of a real site which requires client CERTs:

    $ curl -v -i -k -E ./badssl.com-client.pem:badssl.com https://jp.nissan.biz/

    * About to connect() to jp.nissan.biz port 443 (#0)
    *   Trying 150.63.252.1... connected
    * Connected to jp.nissan.biz (150.63.252.1) port 443 (#0)
    * Initializing NSS with certpath: sql:/etc/pki/nssdb
    * warning: ignoring value of ssl.verifyhost
    * skipping SSL peer certificate verification
    * NSS: client certificate from file
    *       subject: CN=BadSSL Client Certificate,O=BadSSL,L=San Francisco,ST=California,C=US
    *       start date: Nov 16 05:36:33 2017 GMT
    *       expire date: Nov 16 05:36:33 2019 GMT
    *       common name: BadSSL Client Certificate
    *       issuer: CN=BadSSL Client Root Certificate Authority,O=BadSSL,L=San Francisco,ST=California,C=US
    * NSS error -12227
    * Closing connection #0
    * SSL connect error
    curl: (35) SSL connect error

OK, so you get an error, but that’s to be expected because our certificate is not one it will accept.

The point is that if you don’t send it a certificate at all, you get a different error:

    $ curl -v -i -k https://jp.nissan.biz/

    * About to connect() to jp.nissan.biz port 443 (#0)
    *   Trying 150.63.252.1... connected
    * Connected to jp.nissan.biz (150.63.252.1) port 443 (#0)
    * Initializing NSS with certpath: sql:/etc/pki/nssdb
    * warning: ignoring value of ssl.verifyhost
    * skipping SSL peer certificate verification
    * NSS: client certificate not found (nickname not specified)
    * NSS error -12227
    * Closing connection #0
    curl: (35) NSS: client certificate not found (nickname not specified)

See that client certificate not found? That is the error we eliminated by supplying a client certificate, albeit one which it will not accept.

What if we have a client certificate but we use the wrong password? Here’s an example of that:

    $ curl -v -i -k -E ./badssl.com-client.pem:badpassword https://client.badssl.com/

    * About to connect() to client.badssl.com port 443 (#0)
    *   Trying 104.154.89.105... connected
    * Connected to client.badssl.com (104.154.89.105) port 443 (#0)
    * Initializing NSS with certpath: sql:/etc/pki/nssdb
    * warning: ignoring value of ssl.verifyhost
    * Unable to load client key -8025.
    * NSS error -8025
    * Closing connection #0
    curl: (58) Unable to load client key -8025.

Chrome gives a fairly intelligible error.

Possibly to be continued…

Conclusion
We have given a recipe for testing from a linux command line whether a web site requires a client certificate or not. Thus it could be turned into a program.
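
A rough sketch of such a program; it assumes the badssl client CERT downloaded above sits in the current directory, and it merely surfaces the relevant curl errors side by side for eyeballing rather than rendering a verdict:

    #!/bin/sh
    # clientcerttest.sh: probe whether a site wants a client certificate
    # usage: ./clientcerttest.sh <hostname>
    HOST=${1:?usage: $0 hostname}
    echo "=== without client certificate ==="
    curl -vik -m 10 -o /dev/null "https://$HOST/" 2>&1 | egrep -i 'certificat|error'
    echo "=== with throwaway client certificate ==="
    curl -vik -m 10 -E ./badssl.com-client.pem:badssl.com -o /dev/null "https://$HOST/" 2>&1 | egrep -i 'certificat|error'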

References and related
My article about ciphers has been popular.

I’ve also used badssl.com for other related tests.

Can you use openssl directly? You’d hope so, but I haven’t had time to explore it… Here are my all-time favorite openssl commands.

https://badssl.com/ – lots of cool tests here. The creators have been really thorough.


F5 clustering tips

Intro
I was having trouble with one of my new F5 clusters. I thought I’d be clever and add two new F5s to the existing cluster, and then remove the old members when everything looked good.

But the device groups somehow remembered the old servers, even though they weren’t in the device groups. Eventually I could not even sync the two new servers together.

Here’s a concise summary of the steps which worked, taken from this article:

    Hi Manuel, do you try to setup a sync-failover device-group containing three units? To establish device trust I would recommend to force two units into offline state. Remove the machines from the existing sync-failover device-group (repeat on each machine if required) and delete the sync-failover device-group. Now reset the device trust on all machines. Next step will be to use the active machine to add both offline machines as peers. Now all three units should show up in each machine's device list. On the active unit create a new device-group of type sync-failover with network failover enabled. Add all machines to the new device-group. This will be synced to all machines and you can release them from forced offline. Time for the initial sync now. Thanks, Stephan

I get a little confused about sync-only vs sync-failover. That difference is clearly discussed there as well:

    “Sync-only” will not allow to synchronize LTM configurations.
    To synchronize an LTM config all units need to belong to the same “sync-failover” device-group.

I had been afraid and ignorant of the need to reset device trust. For good measure I deleted all device groups. Then this procedure seems to leave me with an auto-created datasync-global-dg (sync-only). Since there was no sync-failover group I manually created one I called sync-LTM-configs. Another auto-created, sync-only group is the device_trust_group. So I see three device groups in all when I am on the sync page, but only two when I am on the device group page.

Conclusion
I had been afraid and ignorant of the need to reset device trust when cleaning up an F5 cluster with old and new members. But once I did that and followed the recipe above, I was good to go.

References and related
The discussion on the F5 Devcentral site: https://devcentral.f5.com/questions/f5-sync-options
