
How to test if a web site requires a client certificate

Intro
I cannot find a good link on the Internet for this, yet I think some admins would appreciate a relatively simple test to answer the question: is this a web site which requires a client certificate to work? The errors generated in a browser in these situations can be very generic. I see many ways to offer help, from a recipe to a tool to some pointers. I’m not yet sure how I want to proceed!

Why would a site require a client CERT? Most likely as a form of client authentication.

Pointers for the DIY crowd
Badssl.com plus access to a linux command line – such as on the Raspberry Pi I so often write about – will do it for you.

The Client Certificate section of badssl.com has most of what you need. The page is getting big, so scroll down until you find that section.

So as a big timesaver badssl.com has created a client certificate for you which you can use to test with. Download it as follows.

Go to your linux prompt and do something like this:
$ wget https://badssl.com/certs/badssl.com-client.pem

badssl.com has a test web page which only shows success if you access it using a client certificate: https://client.badssl.com/

To see how this works, try to access it the usual way, without supplying a client CERT:

$ curl -i -k https://client.badssl.com/

HTTP/1.1 400 Bad Request
Server: nginx/1.10.3 (Ubuntu)
Date: Thu, 20 Jun 2019 17:53:38 GMT
Content-Type: text/html
Content-Length: 262
Connection: close
 
<html>
<head><title>400 No required SSL certificate was sent</title></head>
<body bgcolor="white">
<center><h1>400 Bad Request</h1></center>
<center>No required SSL certificate was sent</center>
<hr><center>nginx/1.10.3 (Ubuntu)</center>
</body>
</html>

Now try the same thing, this time using the client CERT you just downloaded:

$ curl -v -i -k -E ./badssl.com-client.pem:badssl.com https://client.badssl.com/

* About to connect() to client.badssl.com port 443 (#0)
*   Trying 104.154.89.105... connected
* Connected to client.badssl.com (104.154.89.105) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* warning: ignoring value of ssl.verifyhost
* skipping SSL peer certificate verification
* NSS: client certificate from file
*       subject: CN=BadSSL Client Certificate,O=BadSSL,L=San Francisco,ST=California,C=US
*       start date: Nov 16 05:36:33 2017 GMT
*       expire date: Nov 16 05:36:33 2019 GMT
*       common name: BadSSL Client Certificate
*       issuer: CN=BadSSL Client Root Certificate Authority,O=BadSSL,L=San Francisco,ST=California,C=US
* SSL connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* Server certificate:
*       subject: CN=*.badssl.com,O=Lucas Garron,L=Walnut Creek,ST=California,C=US
*       start date: Mar 18 00:00:00 2017 GMT
*       expire date: Mar 25 12:00:00 2020 GMT
*       common name: *.badssl.com
*       issuer: CN=DigiCert SHA2 Secure Server CA,O=DigiCert Inc,C=US
> GET / HTTP/1.1
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.27.1 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: client.badssl.com
> Accept: */*
>
< HTTP/1.1 200 OK
HTTP/1.1 200 OK
< Server: nginx/1.10.3 (Ubuntu)
Server: nginx/1.10.3 (Ubuntu)
< Date: Thu, 20 Jun 2019 17:59:08 GMT
Date: Thu, 20 Jun 2019 17:59:08 GMT
< Content-Type: text/html
Content-Type: text/html
< Content-Length: 662
Content-Length: 662
< Last-Modified: Wed, 12 Jun 2019 15:43:39 GMT
Last-Modified: Wed, 12 Jun 2019 15:43:39 GMT
< Connection: keep-alive
Connection: keep-alive
< ETag: "5d011dab-296"
ETag: "5d011dab-296"
< Cache-Control: no-store
Cache-Control: no-store
< Accept-Ranges: bytes
Accept-Ranges: bytes
 
<
<!DOCTYPE html>
<html>
<head>
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <link rel="shortcut icon" href="/icons/favicon-green.ico"/>
  <link rel="apple-touch-icon" href="/icons/icon-green.png"/>
  <title>client.badssl.com</title>
  <link rel="stylesheet" href="/style.css">
  <style>body { background: green; }</style>
</head>
<body>
<div id="content">
  <h1 style="font-size: 12vw;">
    client.<br>badssl.com
  </h1>
</div>
 
<div id="footer">
  This site requires a <a href="https://en.wikipedia.org/wiki/Transport_Layer_Security#Client-authenticated_TLS_handshake">client-authenticated</a> TLS handshake.
</div>
 
</body>
</html>
* Connection #0 to host client.badssl.com left intact
* Closing connection #0

No more 400 error status – that looks like success to me. Note that we had to provide the password for our client CERT, which they kindly set to badssl.com.

Here’s an example of a real site which requires client CERTs:

$ curl -v -i -k -E ./badssl.com-client.pem:badssl.com https://jp.nissan.biz/

* About to connect() to jp.nissan.biz port 443 (#0)
*   Trying 150.63.252.1... connected
* Connected to jp.nissan.biz (150.63.252.1) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* warning: ignoring value of ssl.verifyhost
* skipping SSL peer certificate verification
* NSS: client certificate from file
*       subject: CN=BadSSL Client Certificate,O=BadSSL,L=San Francisco,ST=California,C=US
*       start date: Nov 16 05:36:33 2017 GMT
*       expire date: Nov 16 05:36:33 2019 GMT
*       common name: BadSSL Client Certificate
*       issuer: CN=BadSSL Client Root Certificate Authority,O=BadSSL,L=San Francisco,ST=California,C=US
* NSS error -12227
* Closing connection #0
* SSL connect error
curl: (35) SSL connect error

OK, so you get an error, but that’s to be expected because our certificate is not one it will accept.

The point is that if you don’t send it a certificate at all, you get a different error:

$ curl -v -i -k https://jp.nissan.biz/

* About to connect() to jp.nissan.biz port 443 (#0)
*   Trying 150.63.252.1... connected
* Connected to jp.nissan.biz (150.63.252.1) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* warning: ignoring value of ssl.verifyhost
* skipping SSL peer certificate verification
* NSS: client certificate not found (nickname not specified)
* NSS error -12227
* Closing connection #0
curl: (35) NSS: client certificate not found (nickname not specified)

See that client certificate not found? That is the error we eliminated by supplying a client certificate, albeit one which it will not accept.

What if we have a client certificate but we use the wrong password? Here’s an example of that:

$ curl -v -i -k -E ./badssl.com-client.pem:badpassword https://client.badssl.com/

* About to connect() to client.badssl.com port 443 (#0)
*   Trying 104.154.89.105... connected
* Connected to client.badssl.com (104.154.89.105) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* warning: ignoring value of ssl.verifyhost
* Unable to load client key -8025.
* NSS error -8025
* Closing connection #0
curl: (58) Unable to load client key -8025.

Chrome gives a fairly intelligible error.

Possibly to be continued…

Conclusion
We have given a recipe for testing from a linux command line whether a web site requires a client certificate or not. The test is simple enough that it could be turned into a program, as sketched below.
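
Here’s a minimal sketch of such a program – just a bash wrapper around the two curl calls shown above. It assumes the badssl.com client certificate (password badssl.com) sits in the current directory, and it keys off the simplest signal, the HTTP status codes; a real tool would also want to examine curl’s exit codes and error messages, as in the jp.nissan.biz example.

#!/bin/bash
# clientcerttest.sh <hostname> - rough test whether a site demands a client CERT
# Assumes ./badssl.com-client.pem with password badssl.com, as downloaded above
host=$1
# status without offering a client certificate
without=$(curl -s -k -m 10 -o /dev/null -w '%{http_code}' https://$host/)
# status when offering our (possibly unacceptable) client certificate
with=$(curl -s -k -m 10 -o /dev/null -w '%{http_code}' -E ./badssl.com-client.pem:badssl.com https://$host/)
echo "$host: status without CERT: $without, with CERT: $with"
if [ "$without" = "400" ] && [ "$with" = "200" ]; then
  echo "$host appears to require a client certificate"
fi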

References and related
My article about ciphers has been popular.

I’ve also used badssl.com for other related tests.

Can you use openssl directly? You’d hope so, but I haven’t had time to explore it… Here are my all-time favorite openssl commands.
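
One hint for that exploration: my understanding is that when a server demands a client certificate, the openssl s_client handshake output includes an “Acceptable client certificate CA names” section. So a quick check – untested by me – might look like:

$ echo | openssl s_client -connect client.badssl.com:443 2>/dev/null | grep -i 'client certificate'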

https://badssl.com/ – lots of cool tests here. The creators have been really thorough.


The IT Detective Agency: Cisco Jabber Carriage Return problem fixed

Intro
Cisco Jabber is a pretty good IM application. I’ve seen how it is a true productivity enhancer. But not so much when it doesn’t work right.

The symptoms
I hadn’t rebooted for awhile. I had a bunch of open conversations. Then all of a sudden, I could no longer send additional Jabbers (IMs, messages, or whatever you call them). I would type my message, hit ENTER (<CR>), and that action would just send the cursor to the beginning of a new line below the one I had typed in my message box, like a typewriter. I soon realized that I had no way to SEND what I was typing because you use ENTER to do that!

A quick Internet search revealed nothing (hence this article). So I restarted Jabber and that got things working again, but of course I lost all my conversations.

As this happened again, I looked more closely. I eventually noticed this security pop-up was associated with this ENTER problem:

Being a security-minded person I kept clicking No to this pop-up.

Then I noticed the correlation. As soon as I clicked No on that pop-up, my <CR>’s began to work as expected. After a few minutes they stop working again, I hunt for the pop-up, and click No again. And it goes on like this all day.

Hint on finding the pop-up
Jabber has a main narrow window which contains all the contacts and other links, plus the conversation window. Highlight the main narrow window and the pop-up will appear (if there is one). Otherwise it can be hard to find.

Why is there a security alert?
Being a sort of certificate expert, I felt obliged to delve into the certificate itself to help whoever may try to solve this. I captured the certificate and found that it is a self-signed certificate! No wonder it’s not accepted. So our Unified Communications vendor, in their infinite wisdom, used self-signed certificates for some of this infrastructure. Bad idea.

I suppose I could accept it, but I’d prefer they fix this. I don’t want end users becoming comfortable overriding security pop-ups.

Conclusion
The sudden inability to use ENTER within Cisco Jabber is explained and a corrective action is outlined.

Case closed!


Where is my IP without the aggressive ads

Intro
To find where any IP address is located – a lookup known as geoip – you can do a simple duckduckgo search and get an idea, but you may also get sucked into one of those sites that provides the service while subjecting you to a lot of advertising. So I prefer to have the option to go to the source.

For that I kind of like this site: https://www.maxmind.com/en/geoip-demo

Maxmind also has a free downloadable database of all IPs known as GeoLite2. If I get time I may explore using it.
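
If I do, the lookup itself appears simple enough. A sketch using mmdblookup, the command-line tool from the libmaxminddb package – this assumes you’ve downloaded and unpacked the free GeoLite2-City database into the current directory:

$ mmdblookup --file GeoLite2-City.mmdb --ip 8.8.8.8 country names en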

References and related
https://www.maxmind.com/en/geoip-demo


Pi-hole: it’s as easy as pi to get rid of your advertisements

Intro
I learned about pi-hole from Bloomberg Businessweek of all places. It seemed right up my alley – it uses a Raspberry Pi in your home to get rid of advertisements. Turns out it was too easy and I don’t have much to contribute except my own experiences with it!

The details
When I read about it I got to thinking big picture and wondered: what would prevent us from running an enterprise version of this same thing? Well, large enterprises don’t normally run production-critical applications like DNS servers (which this is, by the way) on Raspberry Pis, which is not the world’s most stable hardware! But first I had to try it at home just to learn more about the technology.

pi-hole admin screen

I was surprised just how optimized it was for the Raspberry Pi, to the neglect of other systems. So the idea of using an old SLES server is out the window.

But I think I got the essence of the idea. It replaces your DNS server with a custom one that resolves normal queries for web sites the usual way, but for DNS queries that would resolve to an Ad server, it clobbers the DNS and returns its own IP address. Why? So that it can send you a harmless blank image or whatever in place of an Internet ad.
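
To make that concrete, here’s the sort of thing you’d expect to see, with doubleclick.net standing in for any domain on the blocklists and 192.168.1.119 being my Pi’s address:

$ dig +short doubleclick.net @192.168.1.119
192.168.1.119

Pi-hole answers with its own address instead of the ad server’s real one.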

You know those sites that obnoxiously throw up those auto-playing videos? That ain’t gonna happen any more when you run pi-hole.

You have to be a little adept at modifying your home router, but they even have a rough tutorial for that.

Installation
For the record on my Raspberry Pi I only did this:
$ sudo su -
$ curl -sSL https://install.pi-hole.net | bash

It prompted me for a few configuration details, but the answers were obvious. I chose Google DNS servers because I have a long and positive history using them.

You can see that it installs a bunch of packages – surprisingly many considering how simple in theory the thing is.

Test it
On your Raspberry Pi do a few test resolutions:

$ dig google.com @localhost # should look like it normally does
$ dig pi.hole # should return the IP of your Raspberry Pi
$ dig adservices.google.com # I gotta check this one. Should return IP address of your Pi

It runs a little web server on your Pi so the Pi acts as adservices.google.com and just serves out some white space instead of the ad you would have gotten.

Linksys router
Another word about the home router DHCP settings. You have the option to enter a DNS server. So I put in the IP address of my Raspberry Pi, 192.168.1.119. What I expected is that this is the DNS server that would be directly handed out to the DHCP clients on my home network. But that is not the case. Instead it still hands out itself, 192.168.1.1, as DNS server, but in turn it uses the Raspberry Pi for its resolution. This threw me when I did an ipconfig /all on my Windows 10 machine and didn’t see the DNS server I expected. But it was all working. About 10% of my DNS queries were pi-holed (see picture of my admin screen above).

I guess pi-hole is run by fanatics, because it works surprisingly well. Those complex sites still worked, like cnn.com, cnet.com. But they probably load faster without the ads.

Two months check up

I checked back with pi-hole. I know a DNS server is running. The dashboard is broken – the sections just have spinning circles instead of data. It’s already asking me to upgrade to v 3.3.1, so I run pihole -up to do the upgrade.

Another little advantage
I can now ssh to my pi by specifying the host as pi.hole – which I can actually remember!

Idea for enterprise
Finally, the essence of the idea probably could be ported over to an enterprise. In my opinion the secret sauce is the lists of domain names to clobber. There are five or six of them. Some have 50,000 entries. So you’d probably need a specialized DNS server rather than the default ISC BIND. I remember running a specialized DNS server like that when I ran Puremessage by Sophos. It was optimized to suck in real-time blacklists and the like. I’ll have to dig through my notes to see what we ran. I’m sure it wasn’t dnsmasq, which is what pi-hole runs on the Raspberry Pi! But with these lists and some string manipulation and a simple web server I’d think it’d be possible to replicate in an enterprise environment. I may never get the opportunity, more for lack of time than for lack of ability…

Conclusion
Looking for a rewarding project for your Raspberry Pi? Spare yourself Internet advertisements at home by putting it to work.

References and related
The pi-hole web site: https://pi-hole.net/
Another Raspberry Pi project idea: monitor your cable modem and restart it when it goes south.


The IT Detective Agency: the vanishing certificate error

Intro
I was confronted with a web site certificate error. A user was reluctant – correctly – to proceed to an internal web site because he saw a message to the effect:

I tried it myself with IE and got the same thing.
Switching to Chrome, I saw this error:

I wouldn’t bother to document this one except for a twist: the certificate error went away in IE when you clicked through to the login page.

Furthermore, when I examined the certificate with a tool I trust, openssl, it showed the date was not expired.

So what’s going on there?

The details
First thing I dug into was Chrome. I found this particular error can occur if you have an internal certificate issued with a valid common name, but without a Subject Alternative Name. My openssl examination confirmed this was indeed the case for this certificate.

So I decided the Chrome error was a red herring. And confirmed this after checking out other internal web sites which all suffered from this problem.

But that still leaves the IE error unexplained.

As I mentioned in a previous post, I created a shortcut bash function, which I call examinecert, that combines several openssl commands:

examinecert () { echo|openssl s_client -servername "$@" -connect "$@":443|openssl x509 -text|more; }

Use it like this:

$ examinecert drjohnstechtalk.com

Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            04:17:21:b7:12:94:3a:fa:fd:a8:f3:f8:5e:2e:e4:52:35:71
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=US, O=Let's Encrypt, CN=Let's Encrypt Authority X3
        Validity
            Not Before: Apr  4 08:34:56 2018 GMT
            Not After : Jul  3 08:34:56 2018 GMT
        Subject: CN=drjohnstechtalk.com
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    00:d3:50:98:6d:72:03:b2:e4:01:3f:44:01:3d:eb:
                    ff:fc:68:7d:51:a4:09:90:48:3c:be:43:88:d7:ba:
                    ...
        X509v3 extensions:
                 ...
            X509v3 Subject Alternative Name:
                DNS:drjohnstechtalk.com
                ...

I tried to show a friend the error. I could no longer get IE to show a certificate error. So my friend tried IE. He saw that initial error.

Most people give up at this point. But my position is the kind where problems no one else can resolve go to get resolution. And certificates is somewhat a specialty of mine. So I was not ready to throw in the towel.

I mistrust all browsers. They cache information and try to present you sanitized information. It’s all misleading.

So I ran examinecert again. This time I got a different result. It showed an expired certificate. So I ran it again. It showed a valid, non-expired certificate. And again. It kept switching back-and-forth!
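
In a situation like this a quick loop makes the flapping obvious. Something like the following, using the same openssl commands as examinecert (hostname hypothetical) – alternating notAfter dates would prove two different certificates are being served:

$ for i in $(seq 1 6); do echo | openssl s_client -servername example.internal -connect example.internal:443 2>/dev/null | openssl x509 -noout -enddate; done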

Here it helps to know some peripheral information. The certificate resides on an old F5 BigIP load balancer which I used to run. It has a known problem with updating certificates: merely replacing the certificate in the SSL client profile doesn’t fully take. And it was clear by looking at the dates that the certificate had recently been renewed.

So I now had enough information to say the problem was on the load balancer and I could send the ticket over to the group that maintains it.

As for IE’s strange behavior? Also explainable, for the most part. After an initial page with the expired certificate, if you click Continue to this web site it re-loads the page and gets the good certificate, so it no longer shows you the error! So when I clicked on the lock icon to examine the certificate, I was always getting the good version. In fact – and this is an example of the limitation of browsers like IE – you don’t have the option to examine the certificate about which it complained initially. Then IE caches this certificate, I think, so it sometimes persists even after closing and re-launching the browser.

Case closed.

Conclusion
An intermittent certificate error was explained and traced to a bad load balancer implementation of SSL profiles. The problem could only be understood by going the extra mile, being open-minded about possible causes and “using all my senses.” As I like to joke, that’s why I make the medium bucks!

Other conclusion? openssl is your friend.

References and related
My favorite openssl commands show how to use openssl x509 from any linux server.


A taste of the Instagram API

Intro
I always want to know more about how things really work behind the scenes, so I was excited when I overheard talk about how one company uses the Instagram API to do some cool things. An API is an application programming interface. It allows you to write programs to automate tasks and do some really cool stuff.

I spoke to one of my sources, who shared with me a few companies he knows about who use Instagram’s API. Unfortunately, none of them were willing to reveal the technical details of how they interact with the API, so I am left with only the marketing descriptions of what they have managed to do with it.

But what they don’t realize is that as a capable IT person, in some cases I only have to hear that a thing is possible to get motivated. I have literally gone into meetings telling a customer No, that’s not possible, heard from them Yeah, well, they have it running in Europe, and gone back to my desk afterwards to totally revise my opinion of what is or isn’t possible and how it could be done. Having said all that, here is what these companies have managed to do, without revealing the secret sauce of how they do it.

Example apps
Post scheduling software
This is used by social media managers to schedule their Instagram posts weeks or months in advance. It allows them to make a bunch of posts at once quickly and saves them time. A friend of a friend in NYC owns a company that does this. His website is bettrsocial.com

Analytic software
Simply Measured offers a free Instagram report for users with up to 25,000 followers. The stats and insights are presented clearly and will help inform your Instagram posting strategy. The report lets you quickly see what has worked well in your Instagram marketing so you can apply these insights to future posts. Web site: https://simplymeasured.com

Automation software
Some companies connect with Instagram’s API to automate redundant tasks and increase traffic to your Instagram page. Social Network Elite is one of the best sources for growing organic Instagram followers.

Conclusion
Although I don’t even have an Instagram account, I am interested in APIs. The Instagram API does not look too daunting and seems well-documented. I cite a few small businesses that put it to use to do cool stuff. Unfortunately at this time I can’t deliver on the promise of the title of this article – a taste of the API – because I haven’t received any details about the actual usage. Perhaps in some future I will get my own account and develop my own application.

References and related
The Instagram API is documented here: https://www.instagram.com/developer/
My attempt to use the GoDaddy domain API.


Open Notebook: How does Citrix printing work anyway

Intro
I’m speaking of the old Citrix Receiver client. You launch that and it puts you in a Citrix ICA “jail.” I recently helped a company move an app which had been browser-based to a browser within Citrix. Users complained they could not print from it… All their local printers were gone. Only a Citrix Universal Printer could be chosen.

What to do?

The solution
When you print, choose the Citrix universal printer.

Click on print again. You get a print preview screen.

Click on the printer symbol in the top bar. You will get your local printer list to choose from.

Click on print again and the print job will be sent to the desired printer.

Simple enough, unless you’re going through it for the first time!

How did Citrix Receiver client break out of the jail?
I am told that it uses EMF format. That’s Enhanced Metafile, a successor to WMF, the Windows Metafile. EMF is a graphics language used in printer drivers. The Wikipedia article on this is surprisingly brief and skeletal: https://en.wikipedia.org/wiki/Windows_Metafile#Variants. So I guess it’s not really a jail at all – that was just my term. And the details beyond this unsatisfactory explanation I do not know. I’ll keep it on the back burner in case I ever get an opportunity to learn more about it.

Open Notebook background
I sometimes write blog posts as a sort of high-quality journal entry. I may very well be the only person who ever refers to them, and that’s OK. It contains enough information to prod my memory though it may not be polished enough to help many others.

References and related
The ICA that I referred to is the communications protocol used between classic Citrix Receiver client and a Citrix server (what we used to call an NFuse server). Wikipedia has a good article on it: https://en.wikipedia.org/wiki/Independent_Computing_Architecture


Raspberry Pi USB webcam turned into IP camera

Intro
Why would you even want to do this when you can buy a native IP webcam for less? I’m not sure, but I found myself in this situation so it could happen to others, and I found some things that worked and some that required quite some effort.

In my previous post I spoke about using opencv on Raspberry Pi.

This post is more about getting at an image with a minimum of lag time and relatively low bandwidth.

The setup
The specific camera I am working with is an ELP mini USB camera for $20.

What I did not do
I considered bolting on an add-on to opencv to convert the video stream into mjpeg. But the process looked relatively obscure so I did not feel that was a good way to go.

I skimmed through the mjpeg (motion jpeg) standard. Looks pretty straightforward. I even considered writing my own streamer. It’s probably not too hard to write a bad one! But I feared it would be unreliable so I didn’t go that route. It’s just jpeg, separator, jpeg, separator, jpeg, etc. Here’s the Wikipedia link: https://en.wikipedia.org/wiki/Motion_JPEG.

I think the best software for this is mjpg_streamer. It is not available as a simple package. So you have to compile it and patch it.

Follow his recipe
This guy’s recipe worked for me:
https://jacobsalmela.com/2014/05/31/raspberry-pi-webcam-using-mjpg-streamer-over-internet/

Mostly! I needed the patch as well (which he also mentions), though his instructions for the patch aren’t accurate.

He provides a link. If you’re doing the download through Windows, open the downloaded file in Windows Notepad and save its contents as input_uvc_patch.txt.

On the Pi, you would do these steps:

cd ~/mjpg-streamer
patch -p0 < input_uvc_patch.txt
make USE_LIBV4L2=true clean all
sudo make DESTDIR=/usr/local install

That is, assuming you had copied the patch file into that ~/mjpg-streamer directory.

Before we get too far, I wish to mention that the command fswebcam proved somewhat useful for debugging.

Here’s a weird thing about that camera
We had one, then I got another one. The two cameras do not behave the same way!

Device files
I guess Raspberry Pi has its own version of plug-and-play. So what it means is that when you plug in the camera a device file is dynamically created called /dev/video0. Now if you happen to plug in a second USB camera, that one becomes device /dev/video1. Some utilities are designed to work with /dev/video0 and require extra arguments to deal with a camera with a different device number, e.g., fswebcam -d /dev/video1 image.jpg.
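
To see which device file a given camera landed on, the v4l2-ctl utility (from the v4l-utils package, which you may need to install first) is handy:

$ sudo apt-get install v4l-utils
$ v4l2-ctl --list-devices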

But actually running two cameras did not work out too well for me. It seemed to crash and I don’t have time to investigate that.

The working command is…
My livestream.sh file looks like this right now. It will change, but this is a good point to document it.

#!/bin/bash
/usr/local/bin/mjpg_streamer -i "/usr/local/lib/input_uvc.so -yuv -f 12 -q 50 \
 -r 352x288" -o "/usr/local/lib/output_http.so -w /usr/local/www"

The main point is that I found this additional -yuv argument seemed to get the one webcam to work, whereas the other USB camera didn’t need it! If you don’t include it livestream.sh may appear to work, but all you see when you connect to the direct video stream looks like this image:

One time when I ran it, it crashed and suggested that the -yuv argument be added, so I tried it and it actually worked! That’s how I discovered that oddity.

Bandwidth with those settings
About 2 mbps. How do I measure that? Simple. On my Windows machine I bring up the web page with the video stream, then tool around the networking settings until I find Change Adapter Settings (always difficult to find). Then I double-click on my active adapter and stare at the received bytes to get a feel for how much it’s incrementing by each second. Multiply bytes per second by 10 (8 bits per byte plus some overhead), and voila, you have a crude measure, perhaps +/- 30%, of your bandwidth consumed!

Latency
This is so important it needs its own section.

Latency is pretty good. We’ve measured it to be 0.26 seconds.

fswebcam errors
What happens if you run fswebcam while livestream is running?
$ fswebcam /tmp/image.jpg

--- Opening /dev/video0...
Trying source module v4l2...
/dev/video0 opened.
No input was specified, using the first.
Error selecting input 0
VIDIOC_S_INPUT: Device or resource busy

Makes sense. Only one program on the Pi can capture the output from the camera.

Does the simple command fswebcam image.jpg work all the time? No it does not! Sometimes it simply fails, which is scary.

Here is an example of two consecutive calls to fswebcam about a second apart which illustrates the problem:

$ fswebcam /tmp/image.jpg

--- Opening /dev/video0...
Trying source module v4l2...
/dev/video0 opened.
No input was specified, using the first.
Adjusting resolution from 384x288 to 352x288.
--- Capturing frame...
Timed out waiting for frame!
No frames captured.

$ fswebcam /tmp/image.jpg

--- Opening /dev/video0...
Trying source module v4l2...
/dev/video0 opened.
No input was specified, using the first.
Adjusting resolution from 384x288 to 352x288.
--- Capturing frame...
Captured frame in 0.00 seconds.
--- Processing captured image...
Writing JPEG image to '/tmp/image.jpg'.

Running two USB cameras with a single Ras Pi
This initially did not work in my first attempts but now it does!

It probably helps to be running a Raspberry Pi 3 with the Raspbian Stretch OS.

Maybe this wasn’t needed but we made a directory /usr/local/www2 and copied all the files from /usr/local/www to /usr/local/www2. A 2nd USB camera when plugged in creates /dev/video1 as I mentioned. You have to pick a different port, so we chose port 8090. Putting it all together we have the script below, livestream2.sh:

#!/bin/bash
/usr/local/bin/mjpg_streamer -i "/usr/local/lib/input_uvc.so -d /dev/video1 -yuv -q 50 -r 352x288 -f 12" -o "/usr/local/lib/output_http.so -w /usr/local/www2 -p 8090"

If a 2nd camera isn’t plugged in then the script errors out and doesn’t run, which is pretty much what we want. Running it by hand we get this:

$ ./livestream2.sh

MJPG Streamer Version: svn rev: 3:172M
 i: Using V4L2 device.: /dev/video1
 i: Desired Resolution: 352 x 288
 i: Frames Per Second.: 12
 i: Format............: YUV
 i: JPEG Quality......: 80
ERROR opening V4L interface: No such file or directory
 Init v4L2 failed !! exit fatal
 i: init_VideoIn failed

Reining in the bandwidth
We found that by lowering the jpeg quality with the -q option we could reduce the bandwidth along with the quality, but the quality was still good enough for our purposes. Now the video streams from both cameras come in around 4.5 mbps, even in bright lighting. So we settled on -q 50 for a 50% quality. Even a quality of 10 (10%) is not all that bad! I believe the default is 80%.

Bandwidth monitor on the Pi
Some of this was written by the student so apologies for the misspellings! It probably will be refined in the future. We can tease out how much bandwidth we’re actually using on the Pi by measuring the transmitted (TX) bytes periodically. We’ll record that during a match so we can prove to ourselves and others that we have our bandwidth under control – far less than 7 mbps despite using two cameras.

banwidthmonitor.pl Perl program

#!/usr/bin/perl
#monitor banwidth
$DEBUG = 1;
$sleep = 5;
$| = 1;
$date = `date`;
print $date;
for (;;) {
  $tx = `ip -s link show eth0 | tail -1| awk \'{print \$1}\'`;
  print $tx if $DEBUG;
  $txbitstotal = 8 * $tx;
  $timetotal = time;
  $txbits = $txbitstotal - $txbitstotalold if $txbitstotalold;
  $time = $timetotal - $timetotalold;
  $txbitstotalold = $txbitstotal;
  $banwidth = $txbits / $time if $timetotalold;
  print "banwidth $banwidth\n";
  $timetotalold = $timetotal;
# TX: bytes  packets  errors  dropped carrier collsns
#    833844072  626341   0       0       0       0
  sleep $sleep;
}

Output from program
Watch as our bandwidth usage grows to around 700 kbps as we turn on one of our video cameras.
$ ./banwidthmonitor.pl

Tue Jan 30 21:09:32 EST 2018
9894771
banwidth
9895095
banwidth 518.4
10252073
banwidth 571164.8
10697648
banwidth 712920
11151985
banwidth 726939.2
11597595
banwidth 712976
12043230
banwidth 713016
^C

Unreliable video stream startup
Sometimes one video stream does not come on correctly after first power-up. This is most perplexing as with computer gear one expects consistent, reproducible behaviour, yet that is not at all what we’ve observed.
This makes no sense, but in one environment we had our two streams running successfully six times in a row. Then I take the equipment home and find only one of the two streams starts up. It seems more likely to fail after sitting powered off for a few hours! I know it doesn’t make sense but that’s how it is.

In any case we have built a monitor which looks for and corrects this situation. It’s pretty clever and effective if I say so myself! And necessary! We created one monitor each for the two video devices. Here’s videomonitor.sh:

#!/bin/bash
# DrJ make sure video stream is not stuck. Restart it if it is
sleep 8
while /bin/true; do
  chars=`curl -s -m1 localhost:80/?action=stream|wc -c`
  if [ $chars -lt 100 ]; then
# we are stuck!
    date
    echo Video stuck so we will restart it
    pid=`ps -ef|grep mjpg|grep 'p 80'|grep -v sudo|awk '{print $2}'`
    sudo kill $pid
    sleep 1
    ~/livestream.sh &
# restart...
  else
# we have a good stream
    touch /tmp/stream80
  fi
  sleep 5
done

and videomonitor2.sh

#!/bin/bash
# DrJ make sure video stream is not stuck. Restart it if it is
sleep 8
while /bin/true; do
  chars=`curl -s -m1 localhost:443/?action=stream|wc -c`
  if [ $chars -lt 100 ]; then
# we are stuck!
    date
    echo Video stuck so we will restart it
    pid=`ps -ef|grep mjpg|grep 'p 443'|grep -v sudo|awk '{print $2}'`
    sudo kill $pid
    sleep 1
    ~/livestream2.sh &
# restart...
  else
# we have a good stream
    touch /tmp/stream443
  fi
  sleep 5
done

And we’ll start these at boot time, adding to the long and growing list of things we start that way; see the sketch below.
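
If you use the lazy crontab method referenced at the end of this post, the entries might look something like this (a sketch; the paths and log file names are assumptions):

@reboot ~/videomonitor.sh > /tmp/videomonitor.log 2>&1
@reboot ~/videomonitor2.sh > /tmp/videomonitor2.log 2>&1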

Allowed ports
From rule 66…

R66. Communication between the ROBOT and the OPERATOR CONSOLE is restricted as follows:
A. Network Ports:
HTTP 80: Camera connected via switch on the ROBOT, bi-directional
HTTP 443: Camera connected via switch on the ROBOT, bi-directional
...

So…to be safe we are switching from use of ports 8080 and 8090 to ports 80 and 443. But this means we have to preface certain commands – such as mjpg_streamer – with sudo since tcp ports < 1024 are privileged.
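
So livestream.sh presumably ends up along these lines – a sketch of the adjustment, not necessarily our exact final script:

#!/bin/bash
# tcp port 80 is privileged, hence the sudo
sudo /usr/local/bin/mjpg_streamer -i "/usr/local/lib/input_uvc.so -yuv -f 12 -q 50 \
 -r 352x288" -o "/usr/local/lib/output_http.so -w /usr/local/www -p 80"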

Flashing an led when we have a good video stream
Our led is soldered to a ground pin and GPIO pin 18.

We call this program ledflash.sh

#!/bin/bash
#flashes the led
while /bin/true; do
if [ -f /tmp/stream80 ] && [ -f /tmp/stream443 ]; then
  pin=18
  cd /sys/class/gpio
  echo $pin > export
  cd gpio$pin
  echo out > direction
  while /bin/true; do
#make 5 quick flashes
    for i in `seq 1 5`; do
      echo 1 > value
      sleep 0.1
      echo 0 > value
      sleep 0.1
    done
#now lets make the long flash
    echo 1 > value
    sleep 0.6
  done
fi
sleep 2
done

We start it at boot time as well. It tells us when both video streams are ready for viewing because only then do the files get created and then the led starts flashing.

It takes about 62 seconds from the time power is supplied to the Raspberry Pi to the time the LED starts flashing (indicating the two video streams are ready).


Picture of setup

This picture goes a long way to convey the ideas.

2 USB cameras, 1 Ras Pi, flashing LED

References and related
Multiple IP addresses
We needed an IP for testing in the lab, another when we brought it home and a third for competitions. This blog post showed how we gave it all the IP addresses needed for our purposes!

FIRST FRC provides this guide for use of IP addresses at their events.

Amazon seemed to run out of the original USB camera we worked with. The ELP pinhole USB camera seems to work just as well and is just as cheap, around $20: https://smile.amazon.com/gp/product/B00K7ZWVVO/ref=oh_aui_detailpage_o00_s00?ie=UTF8&psc=1

Raspberry Pi model 2 and 3 GPIO pins are documented here: https://www.raspberrypi.org/documentation/usage/gpio-plus-and-raspi2/.

The lazy person’s way to start any script at during boot-up is presented here: Linux tip: easy way to automatically start a script after a reboot


Server Name Indication and what it means for those with only a single IP address

Intro
Sometimes everything is in place, ready to be used, but you have to either stumble onto it or read that it works, because it may be counter-intuitive. Such is the case with Server Name Indication. I thought I knew enough about https to “know” that you can only have one key/certificate for a single IP address. That CERT can be a SAN (subject alternative name) CERT covering multiple names, but you only get one shot at getting your certificate right. Or so I thought. Turns out I was dead wrong.

Some details
Well, SNI, I guess, is a protocol extension to https. You know, I always wondered how a proxy server could log the domain name of an https request. How would it know that if the http protocol conversation is all encrypted? Maybe it’s SNI at work.

Who supports it?
Since this is an extension it has to be supported by both server and browser. It is. Apache24 supports it. IE, Firefox and Chrome support it. Even my venerable curl supports it! What does not support it, right out of the box, is openssl. The openssl s_client command fetches a site’s certificate, but as I found the hard way, you need to add the -servername switch to tell it which certificate you want to examine, i.e., to force it to use SNI.
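
To see the difference on a site of mine (any SNI-hosted site will do): without -servername you may get the server’s default certificate; with it you get the certificate for the name you asked for.

$ echo | openssl s_client -connect drjohnstechtalk.com:443 2>/dev/null | openssl x509 -noout -subject
$ echo | openssl s_client -servername drjohnstechtalk.com -connect drjohnstechtalk.com:443 2>/dev/null | openssl x509 -noout -subject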

This is mainly used by big hosting companies so they can easily and flexibly cram lots of web sites onto a single IP, but us small-time self-hosted sites benefit as well. I host a few sites for friends after all.

Testing methodology
This is pretty simple. I have a couple different virtual servers. I set each up with a completely different certificate in my apache virtual server setups. Then I accessed them by name like usual. Each showed me their own, proper, certificate. That’s it! So this is more than theoretical for me. I’ve already begun to use it.
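
In apache24 terms the setup amounts to something like this – a sketch with hypothetical names and paths, not my actual config. Two virtual servers on the same IP and port, each pointing to its own certificate and key:

<VirtualHost *:443>
    ServerName site1.example.com
    SSLEngine on
    SSLCertificateFile /etc/apache2/certs/site1.crt
    SSLCertificateKeyFile /etc/apache2/certs/site1.key
</VirtualHost>

<VirtualHost *:443>
    ServerName site2.example.com
    SSLEngine on
    SSLCertificateFile /etc/apache2/certs/site2.crt
    SSLCertificateKeyFile /etc/apache2/certs/site2.key
</VirtualHost>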

Enterprise usage
F5 BigIP supports this protocol as well, of course. This article describes how to set it up. But it looks limited to only one server name per certificate, which will be inadequate if there are SAN certificates.

Conclusion
https using Server Name Indication allows you to run multiple virtual servers, each with its own unique certificate, on a single IP address.

References and related
I get my certificates for free using the acme.sh interface to Let’s Encrypt
I’ve written some about apache 2.4 in this post
I don’t think Server Name Indication is explained very well anywhere that I’ve seen. The best description I’ve found is that F5 Devcentral article: https://devcentral.f5.com/articles/ssl-profiles-part-7-server-name-indication
RFC 4366 is the spec describing Server Name Indication.
My favorite openssl commands are listed in this blog post.
SNI is considered insecure because the hostname is sent in plaintext. Encrypted SNI is the proposal to address that. Here’s a good write-up about it: https://nakedsecurity.sophos.com/2018/09/26/finally-a-fix-for-the-encrypted-webs-achilles-heel/


Compiling curl and openssl on Redhat Linux

Intro
I have an ancient Redhat system which I’m not in a position to upgrade. I like to use curl to test web sites, but it’s getting to the point that my ancient version has no SSL versions in common with some secure web sites. I desperately wanted to upgrade curl while leaving the rest of the system as is. Is it even possible? How would you do it? All these things and more are explained in today’s riveting blog post.

The details
Redhat version
I don’t know the proper command so I do this:
$ cat /etc/system-release

Red Hat Enterprise Linux Server release 6.6 (Santiago)

Current curl version
$ ./curl --version

curl 7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.16.2.3 Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2

Limited set of SSL/TLS protocols
$ curl --help

...
 -2/--sslv2         Use SSLv2 (SSL)
 -3/--sslv3         Use SSLv3 (SSL)
...
 -z/--time-cond <time> Transfer based on a time condition
 -1/--tlsv1         Use TLSv1 (SSL)
...

New version of curl

curl 7.55.1 (x86_64-unknown-linux-gnu) libcurl/7.55.1 OpenSSL/1.1.0f zlib/1.2.3

New SSL options

     --ssl           Try SSL/TLS
     --ssl-allow-beast Allow security flaw to improve interop
     --ssl-no-revoke Disable cert revocation checks (WinSSL)
     --ssl-reqd      Require SSL/TLS
 -2, --sslv2         Use SSLv2
 -3, --sslv3         Use SSLv3
...
     --tls-max <VERSION> Use TLSv1.0 or greater
     --tlsauthtype <type> TLS authentication type
     --tlspassword   TLS password
     --tlsuser <name> TLS user name
 -1, --tlsv1         Use TLSv1.0 or greater
     --tlsv1.0       Use TLSv1.0
     --tlsv1.1       Use TLSv1.1
     --tlsv1.2       Use TLSv1.2
     --tlsv1.3       Use TLSv1.3

Now that’s an upgrade! How did we get to this point?

Well, I tried to get a curl RPM – seems like the appropriate path for a lazy system administrator, right? Well, not so fast. It’s not hard to find an RPM, but trying to install one showed a lot of missing dependencies, as in this example:
$ sudo rpm -i curl-minimal-7.55.1-2.0.cf.fc27.x86_64.rpm

warning: curl-minimal-7.55.1-2.0.cf.fc27.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID b56a8bac: NOKEY
error: Failed dependencies:
        libc.so.6(GLIBC_2.14)(64bit) is needed by curl-minimal-7.55.1-2.0.cf.fc27.x86_64
        libc.so.6(GLIBC_2.17)(64bit) is needed by curl-minimal-7.55.1-2.0.cf.fc27.x86_64
        libcrypto.so.1.1()(64bit) is needed by curl-minimal-7.55.1-2.0.cf.fc27.x86_64
        libcurl(x86-64) >= 7.55.1-2.0.cf.fc27 is needed by curl-minimal-7.55.1-2.0.cf.fc27.x86_64
        libssl.so.1.1()(64bit) is needed by curl-minimal-7.55.1-2.0.cf.fc27.x86_64
        curl conflicts with curl-minimal-7.55.1-2.0.cf.fc27.x86_64

So I looked at the libcurl RPM, but it had its own set of dependencies. Pretty soon it looks like a full-time job to get this thing compiled!

I found the instructions mentioned in the reference, but they didn’t work for me exactly like that. Besides, I don’t have a working git program. So here’s what I did.

Compiling openssl

I downloaded the latest openssl, 1.1.0f, from https://www.openssl.org/source/, untarred it, went into the openssl-1.1.0f directory, and then:

$ ./config -Wl,--enable-new-dtags --prefix=/usr/local/ssl --openssldir=/usr/local/ssl
$ make depend
$ make
$ sudo make install

So far so good.
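
A quick sanity check at this point, assuming the install went to the prefix we specified:

$ /usr/local/ssl/bin/openssl version

It should report version 1.1.0f rather than the ancient system openssl.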

Compiling zlib
For zlib I was lazy and mostly followed the other guy’s commands. Went something like this:
$ lib=zlib-1.2.11
$ wget http://zlib.net/$lib.tar.gz
$ tar xzvf $lib.tar.gz
$ mv $lib zlib
$ cd zlib
$ ./configure
$ make
$ cd ..
$ CD=$(pwd)

No problems there…

Compiling curl
curl was tricky and when I followed the guy’s instructions I got the very problem he sought to avoid.

vtls/openssl.c: In function ‘Curl_ossl_seed’:
vtls/openssl.c:276: error: implicit declaration of function ‘RAND_egd’
make[2]: *** [libcurl_la-openssl.lo] Error 1
make[2]: Leaving directory `/usr/local/src/curl/curl-7.55.1/lib'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/usr/local/src/curl/curl-7.55.1/lib'
make: *** [all-recursive] Error 1

I looked at the source and decided that what might help is to add a hint where the openssl stuff could be found.

Backing up a bit, I got the source from https://curl.haxx.se/download.html. I chose the file curl-7.55.1.tar.gz. Untar it, go into the curl-7.55.1 directory,
$ ./buildconf
$ export PKG_CONFIG_PATH=/usr/local/ssl/lib/pkgconfig LIBS="-ldl"

and then – here is the single most important point in the whole blog – configure it thusly:

$ ./configure --with-zlib=$CD/zlib --disable-shared --with-ssl=/usr/local/ssl

So my insight was to add the --with-ssl=/usr/local/ssl to the configure command.

Then of course you make it:

$ make

and maybe even install it:

$ make install

This put curl into /usr/local/bin. I actually made a sym link and made this the default version with this kludge (the following commands were run as root):

$ cd /usr/bin; mv curl{,.orig}; ln -s /usr/local/bin/curl

That’s it! That worked and produced a working, modern curl.

By the way it mentions TLS1.3, but when you try to use it:

$ curl -i -k --tlsv1.3 https://drjohnstechtalk.com/

curl: (4) OpenSSL was built without TLS 1.3 support

It’s a no go. But at least TLS1.2 works just fine in this version.

One other thing – put shared libraries in a common area
I copied my compiled curl from Redhat to a SLES 11 SP 3 system. It didn’t quite run. Only thing is, it was missing the openssl libraries. So I guess it’s also important to copy over

libssl.so.1.1
libcrypto.so.1.1

to /usr/lib64 from /usr/local/lib64.

Once I did that, it worked like a charm!
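
ldd is the tool that reveals this kind of problem. Run against the copied binary, any missing shared libraries show up as “not found”:

$ ldd /usr/local/bin/curl | grep 'not found'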

Conclusion
We show how to compile the latest versions of openssl and curl on an older Redhat 6.x OS. The motivation for doing so was to remain compatible with web sites which have already dropped, or will soon drop, their support for TLS 1.0. The compiled curl and openssl support TLS 1.2, which should keep them useful for a long while.

References and related
I closely followed the instructions in this stackoverflow post: https://stackoverflow.com/questions/44270707/cant-build-latest-libcurl-on-rhel-7-3#44297265
openssl source: https://www.openssl.org/source/
curl sources: https://curl.haxx.se/download.html
Here’s a web site that only supports TLS 1.2 which shows the problem: https://www.askapache.com/. You can see for yourself on ssllabs.com