Categories
Admin Linux SLES

How to add private root CAs in SLES or Redhat or Debian

Intro
From time to time I run my own PKI infrastructure, namely issuing my own certificates from my private root CA. I wanted this root CA to be recognized by Linux utilities running on SUSE Linux Enterprise Server (SLES), in particular lftp, which I was trying to use to access an ftps site (itself a post for another day). In other words, how do you add a certificate to the certificate store in Linux?

The details
Let's say you have your root certificate in the standard PEM form, like this example:

-----BEGIN CERTIFICATE-----
MIIIPzCCBiegAwIBAgITfgAAAATHCoXJivwKLQAAAAAABDANBgkqhkiG9w0BAQsF
ADA2MQswCQYDVQQGEwJERTENMAsGA1UEChMEQkFTRjEYMBYGA1UEAxMPQkFTRiBS
b290IENBIDIxMB4XDTE3MDgxMDEyNDAwOFoXDTI4MDgxMDEyNTAwOFowXDETMBEG
Cgm
...
PEScyptUSAaGjS4JuxsNoL6URXYHxJsR0bPlet
Sct
-----END CERTIFICATE-----

Then you can embed the certificate inline in a script and install it so that it permanently joins the other root CAs in /etc/ssl/certs, with a script like this example:

DrJ_Root_CA="-----BEGIN CERTIFICATE-----\nMIIIPzCCBiegAwIBAgITfgAAAATHCoXJivwKLQAAAAAABDANBgkqhkiG9w0BAQsF\nADA2MQswCQYD
VQQGEwJERTENMAsGA1UEChMEQkFTRjEYMBYGA1UEAxMPQkFTRiBS\nb290IENBIDIxMB4XDTE3MDgxMDEyNDAwOFoXDTI4MDgxMDEyNTAwOFowXDETMBEG\nCgm
SJomT8ixkARkWA05FVDEUMBIGCgmSJomT8ixkARkWBEJBU0YxFjAUBgoJkiaJ\nk/IsZAEZFgZCQVNGQUQxFzAVBgNVBAMTDkJBU0YgU1VCIENBIDIzMIICIjAN
Bgkq\nhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAqrfoKxrCPCw/u2PBEaAwW/VHLxBw6JNi\n42F3EhXmligGb/Uu4kcWO016IGFatVrPhdAtShAqmTXis0w57hW
jn1Iptvo7rROY\nGPmH7aSW/fYM/x2Lln7NlltayXspWawqBzWzYGADodyjn/Z5TaLYaG8lajiabCM5\nUJDhlZ/SUR3xylqIIFaQK3k2twjeGoxobhbr9hJcQZ
fXF0V5FCSCzJExDYma6bs1\nZtyqP/yHaiOeWXGdnqM9EPfT8kmIC42ZXq7s2JZI5OUflJBbaebYEbuDad6Rh19E\nRchXABLe68+TF/4AZCw16iRwRgq/2Re2W
WPMtVomyZ2txvn51iizqBkdVGzIRklC\n3yIv5MRzDFTfG940/tSAomHsz+RdGbL+NCBeWSY+rnJQdExJ7bLXFLVsTNGL68lP\nMuYrkxYQKWRtVhvQCHsdd5E0
t9QR4iY1JLWQxq3GHy98tBbCGiKMpBbuj/9I/E6c\nGrikouv2QyNnCN34PXpUxTQmDj5LZGV9w2faqpwUBD2ZWsbyVSgvD8TcjdxzcMcj\nLBnYUaZ8wHFqUj2
DBahctfKQxA8Ptrzt1mDIGOQliZGDwrTVMECd+noQhTlF1eS+\nvNraV3dYRMymVxh58MPEaDJgwIRcBWAAOeBbZlyx76oskXdmjOiz5jqyoR5eweCE\ntS4jfM
EW6UECAwEAAaOCAx4wggMaMAsGA1UdDwQEAwIBhjAQBgkrBgEEAYI3FQEE\nAwIBADAdBgNVHQ4EFgQUdn7nwFGpb8uzpFVs5QWQcsA0Q6IwQwYDVR0gBDwwOjA
4\nBgwrBgEEAYGlZAMCAgEwKDAmBggrBgEFBQcCARYaaHR0cDovL3BraXdlYi5iYXNm\nLmNvbS9jcAAwGQYJKwYBBAGCNxQCBAweCgBTAHUAYgBDAEEwEgYDVR
0TAQH/BAgw\nBgEB/wIBADAfBgNVHSMEGDAWgBSS9auUcX38rmNVmQsv6DKAMZcmXDCCAQkGA1Ud\nHwSCAQAwgf0wgfqggfeggfSGgbZsZGFwOi8vL0NOPUJBU
0YlMjBSb290JTIwQ0El\nMjAyMSxDTj1DRFAsQ049UHVibGljJTIwS2V5JTIwU2VydmljZXMsQ049U2Vydmlj\nZXMsQ049Q29uZmlndXJhdGlvbixEQz1yb290
LERDPWJhc2YsREM9Y29tP2NlcnRp\nZmljYXRlUmV2b2NhdGlvbkxpc3Q/YmFzZT9vYmplY3RDbGFzcz1jUkxEaXN0cmli\ndXRpb25Qb2ludIY5aHR0cDovL3B
raXdlYi5iYXNmLmNvbS9yb290Y2EyMS9CQVNG\nJTIwUm9vdCUyMENBJTIwMjEuY3JsMIIBNgYIKwYBBQUHAQEEggEoNIIBJDCBuQYI\nKwYBBQUHMAKGgaxsZG
FwOi8vL0NOPUJBU0YlMjBSb290JTIwQ0ElMjAyMSxDTj1B\nSUEsQ049UHVibGljJTIwS2V5JTIwU2VydmljZXMsQ049U2VydmljZXMsQ049Q29u\nZmlndXJhd
GlvbixEQz1yb290LERDPWJhc2YsREM9Y29tP2NBQ2VydGlmaWNhdGU/\nYmFzZT9vYmplY3RDbGFzcz1jZXJ0aWZpY2F0aW9uQXV0aG9yaXR5MGYGCCsGAQUF\n
BzAChlpodHRwOi8vcGtpd2ViLmJhc2YuY29tL3Jvb3RjYTIxL1JPT1RDQTIxLnJ6\nLWMwMDctajY1MC5iYXNmLWFnLmRlX0JBU0YlMjBSb290JTIwQ0ElMjAyM
S5jcnQw\nDQYJKoZIhvcNAQELBQADggIBAClCvn9sKo/gbrEygtUPsVy9cj9UOQ2/CciCdzpz\nXhuXfoCIICgc0YFzCajoXBLj4V6zcYKjz8RndaLabDaaSQgj
phXFiZSBH8OII+cp\nTCWW1x+JElJXo9HB7Ziva2PeuU5ajXtvql5PegFYWdmgK2Q1QH0J2f1rr7B4nNGu\noyBi1TOSll+0yJApjx213lM9obt6hkXkjeisjcq
auMVh+8KloM0LQOTAD1bDAvpa\nVVN9wlbytvf4tLxHpvrxEQEmVtTAdVchuQV1QCeIbqIxW41l6nhE2TlPwEmTr+Cv\najMID/ebnc9WzeweyTddb6DSmn4mSc
okGpj8j8Z7cw173Yomhg1tEEfEzip+/Jx6\nd2qblZ9BUih9sHE8rtUBEPLvBZwr2frkXzL3f8D6w36LxuhcqJOmDaIPDpJMH/65\nAbYnJyhwJeGUbrRm3zVtA
5QHIiSHi2gTdEw+9EfyIhuNKS4FO/uonjJJcKBtaufl\nGFL6y0WegbS5xlMV9RwkM22R7sQkBbDTr+79MqJXYCGtbyX0JxIgOGbE4mxvdDVh\nmuPo9IpRc5Jl
pSWUa7HvZUEuLnUicRbfrs1PK/FBF7aSrJLoYprHPgP6421pl08H\nhhJXE9XA2aIfEkJ4BcKw0BqOP/PEScyptUSAaGjS4JuxsNoL6URXYHxJsR0bPlet\nSct
3\n-----END CERTIFICATE-----\n"
 
cd /etc/pki/trust/anchors/
echo -e -n $DrJ_Root_CA > DrJ_Root_CA.pem
c_rehash
update-ca-certificates

So the key commands are c_rehash and update-ca-certificates.

Usually SLES is similar to Redhat. But it seems to be different in this case.

This was tested on a SLES 12 SP3 system.

The script copies the certificate to /etc/pki/trust/anchors, which by itself is insufficient. It then creates hash symlinks to the CA file and makes sure that the new certificate doesn't get wiped out by subsequent system patching. That's the purpose of the c_rehash and update-ca-certificates commands.

You may also see these hashes and certificates in /etc/ssl/certs. I’m not sure because that’s where I started with all this. But merely dropping the private root CA into /etc/ssl/certs is insufficient, I can say from experience!
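
As a quick sanity check that the CA really landed in the trust store, something like this should do (assuming the file name DrJ_Root_CA.pem from the script above):

openssl x509 -in /etc/pki/trust/anchors/DrJ_Root_CA.pem -noout -subject -enddate
ls -l /etc/ssl/certs | grep -i drj

The first command just confirms the anchor file parses and shows its subject and expiration; the second should turn up the hash symlink that update-ca-certificates generated for it.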

Redhat
Redhat is better documented, but for completeness I include it here. You have your inline certificate as in the SLES script, then following that:

...
cd /etc/pki/ca-trust/source/anchors/
echo -e -n $DrJ_Root_CA > DrJ_Root_CA.pem
update-ca-trust

So update-ca-trust is the key command for Redhat Linux. This was tested on Redhat Linux v 7.6.

Fedora v 33

Put your CA file with a .crt file extension into /etc/pki/ca-trust/source/anchors, as for Redhat, then run update-ca-trust extract.
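
In other words, roughly this (DrJ_Root_CA.crt is just my example file name):

cp DrJ_Root_CA.crt /etc/pki/ca-trust/source/anchors/
update-ca-trust extract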

Debian Linux circa 2023

Put your private CA file into a new directory /usr/share/ca-certificates/extra. Then run sudo dpkg-reconfigure ca-certificates. When prompted with a list of bundles to include, make sure to enable your new extra file. Your certificate file needs to end in 'crt', not, e.g., 'cer'. Seems pretty arbitrary to me, but that's how it is. Of course it has to be standard PEM format (-----BEGIN CERTIFICATE-----, etc.).
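
The steps look roughly like this, again using DrJ_Root_CA.crt as a placeholder file name:

sudo mkdir -p /usr/share/ca-certificates/extra
sudo cp DrJ_Root_CA.crt /usr/share/ca-certificates/extra/
sudo dpkg-reconfigure ca-certificates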

Python and self-signed certificates or certificates from private CAs

First, note that those are two different cases and need to be handled slightly differently! You may be in need of these measures if you are getting an error in python like this:

urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='www.myhost.local', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1123)')))

Self-signed certificate

If the certificate is truly self-signed, then throw it into a file, let’s call it my-crt.crt in your home directory. Then set an environment variable before running python:

$ export REQUESTS_CA_BUNDLE=~/my-crt.crt

It should now work.
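
A quick way to confirm it took, reusing the host name from the error message above as a placeholder:

$ export REQUESTS_CA_BUNDLE=~/my-crt.crt
$ python3 -c 'import requests; print(requests.get("https://www.myhost.local/").status_code)'

If the bundle is right this should print a status code such as 200 instead of raising SSLCertVerificationError.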

Python reverts to SSL: CERTIFICATE_VERIFY_FAILED after upgrade

So I had it all working. Then the requests package complained about my version of the urllib3 package, so I upgraded requests with pip3 install --upgrade requests. Then the above-mentioned SSL error came back. I noticed that urllib3 got upgraded when requests was upgraded.

I basically gave up and totally kludged python to fix this. But only after playing with the certifi package etc. So I took the file that is for me the output of certifi.where(): /usr/local/lib/python3.9/site-packages/certifi/cacert.pem

I edited it as root and simply appended my private root CAs to that file. I hated to do it, and I know I should have at least used a virtual env, blah, blah. But at least now my jobs run. By the way, my by-hand tests of urlopen all worked! So it's just the way some package was using it, beyond my control, that forced me into this kludge.
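
If you find yourself repeating the kludge, the append itself is a one-liner; certifi.where() tells you the exact path on your system, and DrJ_Root_CA.pem is my placeholder file name:

$ python3 -c 'import certifi; print(certifi.where())'
/usr/local/lib/python3.9/site-packages/certifi/cacert.pem
$ cat ~/DrJ_Root_CA.pem | sudo tee -a /usr/local/lib/python3.9/site-packages/certifi/cacert.pem > /dev/null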

Certificate issued from a private CA

I added the private CA to the system store on Debian with the update-ca-certificates approach mentioned above. Still no joy. Then I noticed the web server forgot to provide the intermediate certificate, so I added that as well. Then, at least, curl began to work. But not python. Strange. For python I still needed to define this environment variable:

$ export REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt

A second method to handle the case of a certificate issued from a private CA is to bundle the certificate + the intermediate certificate + the private root CA into a single file, let's call it my-crt.crt, in your home directory, and define the environment variable the same as for the self-signed certificate case:

$ export REQUESTS_CA_BUNDLE=~/my-crt.crt
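
Building that bundle is just concatenation, leaf certificate first and root CA last (the file names here are placeholders for your own):

cat server.crt intermediate.crt DrJ_Root_CA.pem > ~/my-crt.crt
export REQUESTS_CA_BUNDLE=~/my-crt.crt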

My post My favorite openssl commands shows some commands you can run to examine the certificate a web server presents.

lftp usage tip with a private CA
If, like me, you were doing this work in conjunction with running ftps using a certificate signed by a private CA, and you want your ftp client, lftp, not to complain about the unrecognized CA, then this tip will help.

After initiating your lftp session and sending the username and password, you can issue this command at the lftp prompt:
set ssl:ca-file <path-to-your-private-CA-file>
lftp is so flexible it offers many other ways to do this as well, but this is the one I use.
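
Putting it together, a session looks something like this; the host name is a placeholder and the CA file path is wherever you keep your private root CA:

$ lftp -u <USERNAME> ftps://ftp.example.com
lftp> set ssl:ca-file /etc/pki/trust/anchors/DrJ_Root_CA.pem
lftp> ls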

Conclusion
We show how to add your own root CA to a SLES 12 system, as well as to Redhat, Fedora and Debian. I did not find a good reference for this information anywhere on the Internet.

References and related
My favorite openssl commands.

The basics of working with cipher settings

For the Redhat/CentOS approach I used this blog post on the proper way to add your own private CA: https://www.happyassassin.net/2015/01/14/trusting-additional-cas-in-fedora-rhel-centos-dont-append-to-etcpkitlscertsca-bundle-crt-or-etcpkitlscert-pem/

Categories
Admin Linux Raspberry Pi

Fishcam using Raspberry Pi and some network tricks

Intro
There are more articles about running a webcam using a Raspberry Pi than Carter has pills. Why bother to create another? This one is unique insofar as I created a fishcam at a school with a restricted network. None of the reference articles I found discussed a way to get your stream onto the Internet except the simplistic approach, available only to homeowners, of setting up a rule on a home router. Pimylifeup's article is typical of that genre.

Cooperating third party
To push this webcam out to the Internet when I had no way to allow inbound traffic to the Pi, I realized that I needed a cooperating third party. I looked briefly for a commercial service specializing in this and did not find one. I suppose one exists, but I don't know of it. It was actually quicker to stop the search and use my own AWS server as the cooperating third party.

With a cooperating third party, what you can do is set up a forwarder from the Pi to the cooperating server on the Internet. So that's what I did. More on that below.

Network restrictions
The Pi was given WiFi access to the school's bring-your-own-device (BYOD) WiFi. I probed the network by trial and error (I did not initiate extensive port scans, etc., so as to avoid acting like a hacker). I'm familiar with running an almost completely open Guest wireless; this BYOD network was not that, for reasons unknown to me. One of the first things I tried, ssh to my server, was not going through, so I knew there were restrictions. PING 8.8.8.8 did not work either, so ICMP was blocked as well. But web browsing worked, and so did DNS queries. So TCP ports 80 and 443 were allowed, as well as UDP port 53 and possibly TCP port 53. I also observed there was no proxy server involved in the communication. So I simply tested a few other ports that I know are used from time to time: 2443 and 8443. If you hit a server that is not protected by a firewall and not listening on the port you are testing, you will get a Connection reset, provided your packets are not blocked by a local firewall. I tested with the nc utility: nc -v <my_server> <port>. I found a couple open ports this way. Next question: does the network care what protocol is running on that port? They might be looking for https, and I was planning to run ssh. A simple port blocker might not distinguish what's going on. That was indeed the case, as I was able to run ssh on this non-standard port.
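
For the record, the probes looked roughly like this; <my_server> is a placeholder and -w just sets a short timeout so blocked ports fail fast:

$ nc -v -w 3 <my_server> 2443
$ nc -v -w 3 <my_server> 8443

A quick answer, whether connected or refused, means the school network isn't filtering that port; a timeout means it is.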

The single most complicated thing was formulating the appropriate ssh command. I created a dedicated account on my server for this purpose. I embedded the password into the startup script using a utility called sshpass. This is not super secure, but I wanted something running quickly.

Here’s that complicated command

sshpass -p <PASSWORD> ssh -f -N -R 8443:localhost:8081 -p 2443 <USERNAME>@<SERVER_IP>

That's a mouthful! Let's break it down. sshpass just permits you to run the command without getting a login prompt. It needs to be installed with sudo apt-get install sshpass.

The ssh command sets up a reverse tunnel. I have discussed it in my Access your Raspberry Pi from anywhere blog post; however, some things are different and more complicated here. Here we are setting up port 8443 on my server as the tunnel port, which will be accessible to the Internet. It is terminated on the Raspberry Pi's local port 8081 (the port that the motion package uses for the webcam). We had to have ssh connect to port 2443 on my server because the school network blocked the standard port 22. <PASSWORD>, <USERNAME> and <SERVER_IP> are to be replaced with values specific to my server; I don't want to publish them.
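
Once the tunnel is up you can sanity-check it from the server end with something like this (assuming the ports from the command above, and that the tunnel binds to the server's loopback, which is the ssh default):

netstat -an | grep 8443
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8443/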

How I got my server to run ssh on port 2443 as well as port 22
This turned out to be one of the easiest things. It's good to run your own server... In the file /etc/ssh/sshd_config the listening port was commented out, letting the default 22 be in effect. So I uncommented that and added port 2443 like this:

...
# Listen on multiple ports - DrJ 2/1/19
Port 22
Port 2443
...

Then a sudo service sshd restart and the server listens on both ports for ssh connections!
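
A quick check that sshd really is listening on both ports (ss works just as well as netstat here):

sudo netstat -tlnp | grep sshd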

About the webcam itself
I just followed the Pimylifeup article as I mentioned. It talks about using the motion package, which I've never used before. In my other posts you'll see I do stuff with video on the Raspberry Pi; in those we had to fight to get the lag time down and keep bandwidth low. I have to say, by comparison this motion package is awful. Lag is a couple seconds. There is no sense whatsoever of true video, just image, wait, next image, wait, no matter the fps setting. I did not have time to switch to a video package that works better. Anyway, motion may provide some other advantages we could eventually use, so I just set it to 2 fps (frames per second) since it doesn't really matter.

The fishcam is at fishcam. It’s not working right now – just showing black. I’m not sure why.

Auto starting
I’ve documented elsewhere the poor man’s way to start something upon initially booting the Pi: stuff the appropriate command into the crontab file.

So you edit your crontab file with a crontab -e. Then you enter

@reboot sleep 20; sshpass -p <PASSWORD> ssh -f -N -R 8443:localhost:8081 -p 2443 <USERNAME>@<SERVER_IP>

That just sleeps for 20 seconds as your Pi boots up, then establishes the reverse tunnel with that complicated command I explained earlier.

Equipment
Usually these tutorials start with an equipment list. For me that is the least interesting part. I used a Raspberry Pi 3 running Raspbian. For a camera I used one of my spare USB ELP cameras from my extensive work with USB cameras. While developing the solution I needed a keyboard, mouse and HDMI monitor. Once running, the only things connected to the Pi are the USB camera and the micro USB power supply.

To be continued…

References and related
A very good guide for your typical webcam-using-a-Raspberry-Pi situation, i.e., not what I am addressing in my article.

Access your Raspberry Pi from anywhere blog post

Run multiple USB cameras on your Raspberry Pi while keeping lag minimal.

For supplies we love visits to The Micro Center in Paterson, NJ. This past weekend we got Raspberry Pi 3’s for only $29. And the sales tax is only 3% and change.

Categories
Linux Raspberry Pi

Evaluation of WPI’s multiple camera coprocessor using Raspberry Pi

Intro
There's some good and some not-so-good about the new WPI-provided way to handle multiple video streams using a Raspberry Pi.

ELP Cameras problems
I have bought many of these ELP cameras over the last year or two. I may be a slow learner, but eventually it dawned on me that the problems I noticed seem to occur because of this model of USB camera. Finally this year we got a chance to explore this further. I got my hands on a Logitech webcam, the kind you perch on top of a display monitor. We had this set up as the second camera while an ELP camera was the first. Then we rebooted the Pi a whole bunch of times to gather some stats. Over about 10 tries, the ELP had problems roughly 25% of the time. There were no problems with the Logitech. Here are the various problems we've seen:
– horizontal lines superimposed over the image, and a dull image
– ghosting, where a corner of the field of view is shown in the center of the image
– sometimes the stream never starts, and we're not yet sure if that's a camera problem or a software problem, though I begin to suspect it's an ELP problem
– one of my pinhole ELP cameras died

So: Logitech webcam is decidedly better.

Power problem
We pay extra attention to the power draw of the Pi. With two cameras attached and a 2 amp or 1.8 amp power supply, the red power LED flashes. That is not good; it's a sign of undervoltage. The command

vcgencmd get_throttled

on your Pi will tell you if there was an undervoltage condition. I see

throttled=0x50005

when using a 2 amp power supply. Note that as far as we can see the camera display itself works just as well. We also have a 3 amp power supply. That produces a solid red LED and a vcgencmd get_throttled response of

throttled=0x0,

which probably indicates there were no undervoltage conditions.
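
My understanding of the commonly cited bit meanings, offered as a guide rather than gospel, is that you can decode the value like this:

val=$(vcgencmd get_throttled | cut -d= -f2)
(( val & 0x1 ))     && echo "under-voltage detected now"
(( val & 0x4 ))     && echo "currently throttled"
(( val & 0x10000 )) && echo "under-voltage has occurred since boot"
(( val & 0x40000 )) && echo "throttling has occurred since boot"

By that reading, 0x50005 says we were under-voltage and throttled both right now and at some point since boot, while 0x0 means neither ever happened.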

The problem we need to avoid is the Pi attempting to draw more than 2 amps from the regulator; doing so may shut it down. So we will try to use the Pi along with a powered USB hub.

Bandwidth constraints
We want to be well below 3 mbps for two cameras. How to get there while still providing a useful service? Initially we felt we could run the cameras at 320×240 resolution, 10 fps. But after much playing we found conditions under which we exceeded 3 mbps, though normally we were below that. I believe that the compressibility of the image is what matters: a "rich" visual field with lots of contrasting objects is the least compressible. That vaguely fits our findings as well. We felt it important to prepare for the worst case, so we looked at the supported resolutions and settled on 176×144 pixels! It sure isn't much, agreed, but it's still helpful. We blow up the images during display. We use YUYV mode; MJPEG uses considerably more bandwidth.

Refresh trick
With this WPI software, the video streams never display the first time; you have to refresh the page for some reason. However, we wished to have a one-click operation for viewing, to minimize the risk of operator error. So we used some old-fashioned META HTML tags to force a page refresh.

Our initial approach was to simply have the web page refresh itself every five seconds. This worked, but caused instability in the video stream itself and given a few minutes, would always crash the video stream. So we came up with an alternative that does a single page refresh. Unfortunately we’re not that conversant in Javascript (I’m sure there’s a way to do this with Javascript) so instead we wrote two HTML pages: a source page with the refresh, and a target page that does not refresh.

Initial page HTML source

<html><head>
<meta http-equiv="Refresh" content="1;url=file:///C:/users/aperture/Desktop/2019-no-refresh.htm">
<title>stream</title></head>
<body>
<img src="http://10.31.42.18:1182/stream.mjpg" width=560 height=400>
<img src="http://10.31.42.18:1181/stream.mjpg" width=560 height=400>
</body>
</html>

And we size the browser window to just fit the two video streams side-by-side.

Target HTML source for 2019-no-refresh.htm

<html><head>
<title>stream</title></head>
<body>
<img src="http://10.31.42.18:1182/stream.mjpg" width=560 height=400>
<img src="http://10.31.42.18:1181/stream.mjpg" width=560 height=400>
</body>
</html>

Timing and sequence of events
After the Ras Pi is powered up, we launch the initial page from the task bar where it was pinned, 20 seconds later.

It takes a bit of time, then it displays the side-by-side video streams as broken images.

The red LEDs on the Logitech webcams begin to glow.

(We know when we see both red LEDs glowing we are good to go by the way).

The refresh then occurs automatically, landing on the 2019-no-refresh.htm web page.

Two side-by-side video streams are displayed, each with 560×400 display dimensions.

References and related
My 2018 version of using the Raspberry Pi to handle two USB cameras: USB webcam on Raspberry Pi

Field Management System spec for 2019

https://s3.amazonaws.com/screensteps_live/exported/Wpilib/2078/103766/Using_a_Raspberry_PI_as_a_video_coprocessor.pdf?1546622998
WPI PDF manual, Using a Ras Pi as Video coprocessor

Download the compressed image from Github: https://github.com/wpilibsuite/FRCVision-pi-gen/releases/. Scroll down to Assets and look for FRCVision_image_2019xxxx.zip (2019.3.1 is the latest at the time of this writing).

Logitech webcam: https://smile.amazon.com/gp/product/B01IC2UDMC/ref=ppx_yo_dt_b_asin_title_o00__o00_s00?ie=UTF8&psc=1

FIRST FRC Networking Basics

Categories
Linux Raspberry Pi

Solution to this week’s NPR puzzle using simple Linux commands, again

Intro
As I understood it, this week’s NPR puzzle is as follows. Think of a figure from the Bible with five letters. Move each letter three back, e.g., an “e” becomes a “b.” Find the Biblical figure which becomes an ailment after doing this transformation.

Initial thoughts
I figured this would be eminently amenable to some simple Linux commands, like I've done with previous puzzles (most are not so amenable, by the way). I was having a hard time doing these transformations in my head while I was driving, and the first names I tried came up empty, such as Jesus or Moses.

So I figured I could write a program to do the character transformations on each and every word and I could probably find a downloadable text version of the Bible. I didn’t find a pure text version, but I did download an HTML version, which is close enough for our purposes.

Then I was going to just keep the five-letter words and do this transformation on all of them and match against dictionary words. Then I would have taken just those matches and scanned by hand to look for words that are ailments, hoping there wouldn’t be too many matched words to contend with.

Finally settled on a different approach
That looked like a bit of work so I thought about it and decided there had to be a resource for just the figures in the Bible, and voila, there is, in Wikipedia, see the references.

rot13
Rot13 is a famous cipher (encryption is too strong a word to describe this simple approach), where A becomes N, B becomes O, etc. I had a feeling the tr command in linux might be able to do this but didn’t know how. So I searched for linux, tr and rot13 and found an example online. It was easy to adapt.

We need what you could call a rot -3. Here is the command.

$ tr 'A-Za-z' 'X-ZA-Wx-za-w'

So I put the text of the Wikipedia page of Biblical figures into a file on my Linux server called list-of-biblical-figures. It looks like this:

Adam to David according to the Bible
Creation to Flood
 
    Adam Seth Enos Kenan Mahalalel Jared Enoch Methuselah Lamech Noah Shem
 
Cain line
 
    Adam Cain Enoch Irad Mehujael Methusael Lamech Tubal-cain
 
Patriarchs after Flood
 
    Arpachshad Cainan Shelah Eber Peleg Reu Serug Nahor Terah Abraham Isaac Jacob
 
Tribe of Judah to Kingdom
 
    Judah Perez Hezron Ram Amminadab Nahshon Salmon Boaz Obed Jesse David
...

I was going to tackle just pulling the figures with five-character names, but the whole list isn’t that long so I skipped even that step and just put the list through as is:

$ cat list-of-biblical-figures|tr 'A-Za-z' 'X-ZA-Wx-za-w'

comes back as

Xaxj ql Axsfa xzzloafkd ql qeb Yfyib
Zobxqflk ql Cilla
 
    Xaxj Pbqe Bklp Hbkxk Jxexixibi Gxoba Bklze Jbqerpbixe Ixjbze Klxe Pebj
 
Zxfk ifkb
 
    Xaxj Zxfk Bklze Foxa Jbergxbi Jbqerpxbi Ixjbze Qryxi-zxfk
 
Mxqofxozep xcqbo Cilla
 
    Xomxzepexa Zxfkxk Pebixe Bybo Mbibd Obr Pbord Kxelo Qboxe Xyoxexj Fpxxz Gxzly
 
Qofyb lc Graxe ql Hfkdalj
 
    Graxe Mbobw Ebwolk Oxj Xjjfkxaxy Kxepelk Pxijlk Ylxw Lyba Gbppb Axsfa
...
    Ebola
...

So it's all gibberish, as you might hope. Then towards the end you come across this one thing and it just pops out at you. As is my custom I won't give it away before the deadline. [update] OK, the submission deadline has passed. Ebola just really popped out. Going back to the original text, you see it lines up with Herod. So there you have it.
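
If you want to double-check the mapping on just that one name, the same tr invocation works on a single word:

$ echo Herod | tr 'A-Za-z' 'X-ZA-Wx-za-w'
Ebola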

I double-checked and confirmed this also works on a Raspberry Pi. I've come to realize that most people don't have their own server, but hundreds of thousands or perhaps millions have a Raspberry Pi, which is a Linux server, which makes techniques like this accessible. And fun.

Conclusion
I show a technique for using a linux server such as a Raspberry Pi to solve this week’s NPR puzzle. A very simple approach worked. In fact I was able to solve the puzzle and write this post in about an hour!

References and related
HTML version of Bible: https://ebible.org/Scriptures/eng-web_html.zip
Biblical figures: https://en.wikipedia.org/wiki/List_of_major_biblical_figures
An earlier NPR puzzle solved with linux command line techniques

Categories
Admin Linux Network Technologies SLES

Linux tip: how to enable remote syslog on SLES

Intro
I write this knowing I still don't know anything to speak of about syslog, but sometimes you gotta act without knowing. I needed to send syslog somewhere in a big hurry, so I figured out the absolute minimum I needed to do to get it running on one of my other systems.

The details
This all started because of a deficiency in the F5 ASM. At best it's so slow when looking through the error log. But in particular there was one error that always timed out when I tried to bring up the details, a severity 5 error, so it looked pretty important. Worse, local logging, even though it is selected, also does not work – the /var/log/asm file exists but contains basically nothing of interest. I suppose there is some super-fancy and complicated MySQL command you could run to view the logs, but that would take a long time to figure out.

So for me the simplest route was to enable remote syslog on a Linux server and send the ASM logging to it. This seems to be working, by the way.

The minimal steps
Again, this was for Suse Enterprise Linux running syslog-ng.

  1. modify /etc/sysconfig/syslog as per the next step
  2. SYSLOGD_PARAMS="-r"
  3. modify /etc/syslog-ng/syslog-ng.conf as per the next step
  4. uncomment this line: udp(ip("0.0.0.0") port(514));
  5. launch yast (I use curses-based yast [no X-Windows], which is really cantankerous)
  6. go to Security and Users -> Firewall -> Allowed services -> Internal Zone -> Advanced
  7. add udp port 514 as an additional allowed port in the Internal Zone and save it
  8. service syslog stop
  9. service syslog start
  10. You should start seeing entries in /var/log/localmessages, as in this suitably anonymized example (I added a couple of line breaks for clarity):
Jul 27 14:42:22 f5-drj-mgmt ASM:"7653503868885627313","50.17.188.196","/Common/drjohnstechtalk.com_profile","blocked","/drjcrm/bi/tjhmore345","0","Illegal URL,Attack signature detected","200021075","Automated client access ""curl""","US","<!--?xml version='1.0' encoding='UTF-8'?-->44e7f1ffebff2dfb-800000000000000044f7f1ffebff2dfb-800000000000000044e7f1ffe3ff2dfb-80000000000000000000000000000000-000000000000000042VIOL_ATTACK_SIGNATURErequest200021075
7VXNlci1BZ2VudDogY3VybC83LjE5LjcgKHg4Nl82NC1yZWRoYXQtbGludXgtZ251KSBsaWJjdXJsLzcuMTkuNyBOU1MvMy4yNy4xIHpsaWIvMS4yLjMgbGliaWRuLzEuMTggbGlic3NoMi8xLjQuMg0KSG9zdDogYWctaW50ZWw=
01638
VIOL_URL","GET /drjcrm/bi/tjhmore345 HTTP/1.1\r\nUser-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.27.1 zlib/1.2.3 libidn/1.18 libssh2/1.4.2\r\nHost: drjohnstechtalk.com\r\nAccept: */*\r\n\r\n"

Observations
Interestingly, there is no syslogd on this particular system, and yet the "-r" flag is designed for syslogd – it's what turns it into a remote syslogging daemon. And yet it works.

It's easy enough to log these messages to their own file; I just don't know how to do it yet because I haven't needed to. I learn as I need to, just as I learned enough to publish this tip.
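
For what it's worth, I believe the syslog-ng version of that would look roughly like the following, untested on my part; src is the default source name on SLES and f5-drj-mgmt is the host name from the example above:

filter f_asm { host("f5-drj-mgmt"); };
destination d_asm { file("/var/log/asm.log"); };
log { source(src); filter(f_asm); destination(d_asm); };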

Conclusion
We have demonstrated activating the simplest possible remote syslogger on Suse Linux Enterprise Server.

References and related

Want to know what syslog is? Howtonetwork has this very good writeup: https://www.howtonetwork.com/technical/security-technical/syslog-server/

Categories
Linux

Future project idea: Interplanetary file system, IPFS

Intro
If I had more time and more energy, what I'd like is to explore the Interplanetary Filesystem, perhaps put up a server and create some objects. It seems right up my alley, as I was an early adopter who put up one of the first web servers on the Internet. IPFS combines a lot of my interests: Linux (it extends the filesystem), the web and computer science concepts.

But I don’t have spare time. Maybe later this year…

One year later
OK, I just heard on the Reveal podcast about an insidious use for IPFS. Although I was too busy before, now I'm simply loath to embrace a technology that's been embraced by the alt-right. Specifically, I have heard that Rob Monster has stored hate speech manifestos using IPFS, thereby helping to create what is being called an alt-tech stack (their own web browser, social media sites, streaming video sites, etc.). It's a shame.

References and related
Here’s the IPFS web site: https://ipfs.io/
This lecture explains why we need this improved web technology and what it is: https://www.youtube.com/watch?v=HUVmypx9HGI

Categories
Admin Linux Security Web Site Technologies

The IT Detective Agency: the vanishing certificate error

Intro
I was confronted with a web site certificate error. A user was reluctant – correctly – to proceed to an internal web site because his browser warned about the site's certificate.

I tried it myself with IE and got the same warning. Switching to Chrome, I saw a different certificate error.

I wouldn’t bother to document this one except for a twist: the certificate error went away in IE when you clicked through to the login page.

Furthermore, when I examined the certificate with a tool I trust, openssl, it showed the date was not expired.

So what’s going on there?

The details
First thing I dug into was Chrome. I found this particular error can occur if you have an internal certificate issued with a valid common name, but without a Subject Alternative Name. My openssl examination confirmed this was indeed the case for this certificate.

So I decided the Chrome error was a red herring. And confirmed this after checking out other internal web sites which all suffered from this problem.

But that still leaves the IE error unexplained.

As I mentioned in a previous post, I created a shortcut bash function, examinecert, that combines a couple of openssl commands:

examinecert () { echo|openssl s_client -servername "$@" -connect "$@":443|openssl x509 -text|more; }

Use it like this:

$ examinecert drjohnstechtalk.com

Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            04:17:21:b7:12:94:3a:fa:fd:a8:f3:f8:5e:2e:e4:52:35:71
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=US, O=Let's Encrypt, CN=Let's Encrypt Authority X3
        Validity
            Not Before: Apr  4 08:34:56 2018 GMT
            Not After : Jul  3 08:34:56 2018 GMT
        Subject: CN=drjohnstechtalk.com
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    00:d3:50:98:6d:72:03:b2:e4:01:3f:44:01:3d:eb:
                    ff:fc:68:7d:51:a4:09:90:48:3c:be:43:88:d7:ba:
                    ...
        X509v3 extensions:
                 ...
            X509v3 Subject Alternative Name:
                DNS:drjohnstechtalk.com
                ...

I tried to show a friend the error. I could no longer get IE to show a certificate error. So my friend tried IE. He saw that initial error.

Most people give up at this point. But my position is the kind where problems no one else can resolve go to get resolution. And certificates is somewhat a specialty of mine. So I was not ready to throw in the towel.

I mistrust all browsers. They cache information and try to present you with sanitized information. It's all misleading.

So I ran examinecert again. This time I got a different result. It showed an expired certificate. So I ran it again. It showed a valid, non-expired certificate. And again. It kept switching back-and-forth!
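
If you want to catch that kind of flip-flopping in the act, a trivial loop over just the expiration date does the trick; it's the same idea as examinecert, trimmed down and using my own site as the example host:

while sleep 1; do
  echo | openssl s_client -servername drjohnstechtalk.com -connect drjohnstechtalk.com:443 2>/dev/null | openssl x509 -noout -enddate
done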

Here it helps to know some peripheral information. The certificate resides on an old F5 BigIP load balancer which I used to run. It has a known problem with updating certificates if you merely try to replace the certificate in the SSL client profile. It was clear from looking at the dates that the certificate had recently been renewed.

So I now had enough information to say the problem was on the load balancer and I could send the ticket over to the group that maintains it.

As for IE's strange behavior? Also explainable, for the most part. After an initial page with the expired certificate, if you click Continue to this web site it re-loads the page, gets the good certificate and no longer shows you the error! So when I clicked on the lock icon to examine the certificate, I was always getting the good version. In fact – and this is an example of the limitation of browsers like IE – you don't have the option to examine the certificate about which it complained initially. Then IE caches this certificate, I think, so it sometimes persists even after closing and re-launching the browser.

Case closed.

Conclusion
An intermittent certificate error was explained and traced to a bad load balancer implementation of SSL profiles. The problem could only be understood by going the extra mile, being open-minded about possible causes and “using all my senses.” As I like to joke, that’s why I make the medium bucks!

Other conclusion? openssl is your friend.

References and related
My favorite openssl commands show how to use openssl x509 from any linux server.

Categories
DNS Linux Network Technologies Raspberry Pi Security

Whois information without the pushy hard sell tactics

Intro
Did you ever want to learn about a domain registration but were put off by the hard sell tactics that basically all web-based whois searches subject you to? Me, too. Here’s what you can do.

The details
Linux – so that includes you, Raspberry Pi owners – has a little utility called whois which you can use to get the registrant information of a domain, e.g.,

$ whois johnstechtalk.com

   Domain Name: JOHNSTECHTALK.COM
   Registry Domain ID: 1795918838_DOMAIN_COM-VRSN
   Registrar WHOIS Server: whois.godaddy.com
   Registrar URL: http://www.godaddy.com
   Updated Date: 2017-03-27T00:52:51Z
   Creation Date: 2013-04-23T00:54:17Z
   Registry Expiry Date: 2019-04-23T00:54:17Z
   Registrar: GoDaddy.com, LLC
   Registrar IANA ID: 146
   Registrar Abuse Contact Email: [email protected]
   Registrar Abuse Contact Phone: 480-624-2505
   Domain Status: clientDeleteProhibited https://icann.org/epp#clientDeleteProhibited
   Domain Status: clientRenewProhibited https://icann.org/epp#clientRenewProhibited
   Domain Status: clientTransferProhibited https://icann.org/epp#clientTransferProhibited
   Domain Status: clientUpdateProhibited https://icann.org/epp#clientUpdateProhibited
   Name Server: NS45.DOMAINCONTROL.COM
   Name Server: NS46.DOMAINCONTROL.COM
   DNSSEC: unsigned
   URL of the ICANN Whois Inaccuracy Complaint Form: https://www.icann.org/wicf/
>>> Last update of whois database: 2018-04-19T19:59:35Z <<<
...

Admittedly that did not tell us much, but it points us to another whois server we can try, whois.godaddy.com. So try that:

$ whois -h whois.godaddy.com johnstechtalk.com

Domain Name: JOHNSTECHTALK.COM
Registry Domain ID: 1795918838_DOMAIN_COM-VRSN
Registrar WHOIS Server: whois.godaddy.com
Registrar URL: http://www.godaddy.com
Updated Date: 2017-03-27T00:52:50Z
Creation Date: 2013-04-23T00:54:17Z
Registrar Registration Expiration Date: 2019-04-23T00:54:17Z
Registrar: GoDaddy.com, LLC
Registrar IANA ID: 146
Registrar Abuse Contact Email: [email protected]
Registrar Abuse Contact Phone: +1.4806242505
Domain Status: clientTransferProhibited http://www.icann.org/epp#clientTransferProhibited
Domain Status: clientUpdateProhibited http://www.icann.org/epp#clientUpdateProhibited
Domain Status: clientRenewProhibited http://www.icann.org/epp#clientRenewProhibited
Domain Status: clientDeleteProhibited http://www.icann.org/epp#clientDeleteProhibited
Registry Registrant ID: Not Available From Registry
Registrant Name: ******** ******** (see Notes section below on how to view unmasked data)
Registrant Organization:
Registrant Street: ***** ****
Registrant City: Newton
Registrant State/Province: New Jersey
Registrant Postal Code: 078**
Registrant Country: US
Registrant Phone: +*.**********
Registrant Phone Ext:
Registrant Fax:
Registrant Fax Ext:
Registrant Email: ********@*****.***
Registry Admin ID: Not Available From Registry
Admin Name: ******** ******** (see Notes section below on how to view unmasked data)
...

So now we're getting somewhere. GoDaddy tries to force you to their web page and sell you stuff in any case. Not at all surprising for anyone who's ever been a GoDaddy customer (includes yours truly), because that's what they do. But not all registrars do that.

Here’s a real-life example which made me decide this technique should be more broadly disseminated. I searched for information on a domain in Argentina:

$ whois buenosaires.com.ar

This TLD has no whois server, but you can access the whois database at
http://www.nic.ar/

Now if you actually try their suggested whois server, it doesn’t even work:

$ whois -h www.nic.ar buenosaires.com.ar

Timeout.

What you can do to find the correct whois server is use iana – Internet Assigned Numbers Authority – namely, this page:

https://www.iana.org/domains/root/db

So for Argentina I clicked on .ar (I expected to find a separate listing for .com.ar, but that was not the case), which leads to a page that shows, at the bottom, Whois Server: whois.nic.ar. So I try that and voila, meaningful information is returned, with no accompanying ads:

$ whois -h whois.nic.ar buenosaires.com.ar

% La información a la que estás accediendo se provee exclusivamente para
% fines relacionados con operaciones sobre nombres de dominios y DNS,
% quedando absolutamente prohibido su uso para otros fines.
%
% La DIRECCIÓN NACIONAL DEL REGISTRO DE DOMINIOS DE INTERNET es depositaria
% de la información que los usuarios declaran con la sola finalidad de
% registrar nombres de dominio en ‘.ar’, para ser publicada en el sitio web
% de NIC Argentina.
%
% La información personal que consta en la base de datos generada a partir
% del sistema de registro de nombres de dominios se encuentra amparada por
% la Ley N° 25326 “Protección de Datos Personales” y el Decreto
% Reglamentario 1558/01.
 
domain:         buenosaires.com.ar
registrant:     50030338720
registrar:      nicar
registered:     2012-07-05 00:00:00
changed:        2017-06-27 17:42:45.944889
expire:         2018-07-05 00:00:00
 
contact:        50030338720
name:           TRAVEL RESERVATIONS SRL
registrar:      nicar
created:        2013-09-05 00:00:00
changed:        2018-04-17 13:14:55.331068
 
nserver:        ns-1588.awsdns-06.co.uk ()
nserver:        ns-925.awsdns-51.net ()
nserver:        ns-1385.awsdns-45.org ()
nserver:        ns-239.awsdns-29.com ()
registrar:      nicar
created:        2016-07-01 00:02:28.608837

2nd example: goto.jobs
I actually needed this one! So I learned of a domain goto.jobs and I wanted to get some background. So here goes…
$ whois goto.jobs

getaddrinfo(jobswhois.verisign-grs.com): Name or service not known

So off to a bad start, right? So we hit up the .jobs link on iana, https://www.iana.org/domains/root/db/jobs.html, and we spy a reference to their whois server:

Registry Information
This domain is managed under ICANN's registrar system. You may register domains in .JOBS through an ICANN accredited registrar. The official list of ICANN accredited registrars is available on ICANN's website.
URL for registration services: http://www.goto.jobs
WHOIS Server: whois.nic.jobs

So we try that:
$ whois -h whois.nic.jobs goto.jobs

   Domain Name: GOTO.JOBS
   Registry Domain ID: 91478530_DOMAIN_JOBS-VRSN
   Registrar WHOIS Server: whois-all.nameshare.com
   Registrar URL: http://www.nameshare.com
   Updated Date: 2018-03-29T20:08:46Z
   Creation Date: 2010-02-04T23:54:33Z
   Registry Expiry Date: 2019-02-04T23:54:33Z
   Registrar: Name Share, Inc
   Registrar IANA ID: 667
   Registrar Abuse Contact Email:
   Registrar Abuse Contact Phone:
   Domain Status: clientTransferProhibited https://icann.org/epp#clientTransferProhibited
   Name Server: KATE.NS.CLOUDFLARE.COM
   Name Server: MARK.NS.CLOUDFLARE.COM
   Name Server: NS1.REGISTRY.JOBS
   Name Server: NS2.REGISTRY.JOBS
   DNSSEC: unsigned
   URL of the ICANN Whois Inaccuracy Complaint Form: https://www.icann.org/wicf/
>>> Last update of WHOIS database: 2018-04-23T18:54:31Z <<<

Better, but it seems to merely point to a registrar and its whois server:

Registrar WHOIS Server: whois-all.nameshare.com

So let’s try that:

$ whois -h whois-all.nameshare.com goto.jobs

Domain Name: GOTO.JOBS
Registry Domain ID: 91478530_DOMAIN_JOBS-VRSN
Registrar WHOIS Server: whois-jobs.nameshare.com
Registrar URL: http://www.nameshare.com
Updated Date: 2018-03-29T20:08:46Z
Creation Date: 2010-02-04T23:54:33Z
Registrar Registration Expiration Date: 2017-02-04T23:54:33Z
Registrar: NameShare, Inc.
Registrar IANA ID: 667
Registrar Abuse Contact Email: [email protected]
Registrar Abuse Contact Phone: +1.7809429975
Domain Status: clientTransferProhibited http://www.icann.org/epp#clientTransferProhibited
Registry Registrant ID:
Registrant Name: DNS Administrator
Registrant Organization: Employ Media LLC
Registrant Street: 3029 Prospect Avenue
Registrant City: Cleveland
Registrant State/Province: OH
Registrant Postal Code: 44115
Registrant Country: United States
Registrant Phone: +1.2064261500
Registrant Phone Ext:
Registrant Fax: +1.1111111111
Registrant Fax Ext:
Registrant Email: [email protected]
Registry Admin ID:
Admin Name: DNS Administrator
Admin Organization: Employ Media LLC
Admin Street: 3029 Prospect Avenue
...

Bingo! We have hit pay dirt. We have meaningful information about the registrant – an address, phone number and email address – and received no obnoxious ads in return. For me it’s worth the extra steps.

ICANN: another alternative
Most registrars' whois sites are rate-limited. ICANN's is not, and it does not sic ads on you either. It is

https://whois.icann.org/en/lookup?name=

ICANN, for the record, is the body that decides what goes on in the DNS namespace, for instance what new gTLDs should be added. You can use its whois tool for all gTLDs, but not in general for ccTLDs.

whois is undergoing changes due to GDPR. In particular, the "social" information of the contacts (registrant, admin and technical) will be masked in the future, except for perhaps state and country. But whois is slowly dying, and a new standard called RDAP will take its place.
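
Out of curiosity, RDAP can already be queried with nothing more than curl. This sketch uses the rdap.org bootstrap service, which, if I have it right, redirects to the authoritative RDAP server for the domain and returns JSON rather than whois text:

$ curl -sL https://rdap.org/domain/johnstechtalk.com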

References and related
This page has some great tips. Wish I had seen it first! https://superuser.com/questions/758647/how-to-whois-new-tlds

Here’s that iana root zone database link again: https://www.iana.org/domains/root/db

ICANN’s whois: https://whois.icann.org/en/lookup?name=

Categories
Linux Raspberry Pi

Raspberry Pi as Retro Arcade Games emulator

Intro
I am not going to attempt to provide a guide as there are much better guides out there than anything I can produce.

In addition to the arcade function, we wanted to display a slidedeck when not being used for gaming.

Two main approaches I see are

1) install RetroPie, then add X packages
2) install Raspbian, then install RetroPie on top of that

The reason we want X is to run presentation software such as pipresents, which we are already familiar with.

For approach 1) I roughly followed this installation order.

Notes
Install lightdm and lxde
This takes a long time, maybe 30 minutes:
sudo apt install lxde lxde-core lxterminal lxappearance
sudo apt install lightdm
sudo apt-get install xutils
sudo apt-get install xserver-xorg

But one of my games didn’t run properly afterwards, so I am focused on method 2) for now.

I’m having trouble running startx from a non-console terminal. One thing I’m trying is:
sudo usermod -a -G tty pi
sudo apt-get install xserver-xorg-legacy
These two commands still didn’t do the trick, so I edited this file

/etc/X11/Xwrapper.config

and replaced allowed_users=console with allowed_users=anybody, and that worked! Once.
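
For the record, that edit can be done in one line; this sketch keeps a backup copy of the file just in case:

sudo sed -i.bak 's/^allowed_users=.*/allowed_users=anybody/' /etc/X11/Xwrapper.config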

Then I installed RetroPie, turned it off so it does not autostart, and tried startx from a non-console terminal, and I saw this error:

(EE) xf86OpenConsole: Cannot open virtual console 2 (Permission denied)

Then I re-installed xserver-xorg-legacy and startx once again worked. Hmm.

The instructions for installing RetroPie on top of an existing Raspbian installation are here:

https://retropie.org.uk/docs/Manual-Installation/

You should be comfortable with the linux command line. In the end I like this method of installation the best. I’ve done it several times now.

Equipment ideas
These $15 speakers https://www.amazon.com/gp/product/B003JTHO3U/ref=oh_aui_detailpage_o01_s01?ie=UTF8&psc=1 only use the USB port for power. They have a standard mini-stereo jack that is compatible with the Pi. I bought them. The Pi has enough juice to power them, which is convenient.
I went with NES (Nintendo Entertainment System) games. This pair of USB controllers is, I am told, a good approximation of the real thing: https://www.amazon.com/gp/product/B075ZN1GXK/ref=oh_aui_detailpage_o02_s00?ie=UTF8&psc=1. They're only about $14.
Two player arcade quality controller from Recroommasters. About $349.

How to configure two player setup when you have an arcade-style console with only one USB connection
I find the documentation available on the Internet for this particular topic terrible; in fact, I never did find any. This YouTube video was just created. Although it's specific to their Xtension console, it looks to me applicable to any similar console:

https://www.youtube.com/watch?v=E8jHfhM5t_A&feature=youtu.be

Configuration
It takes a little getting used to. There are two main places where you do configuration: the RetroPie Configuration, and the emulationstation menu. The main thing to do from the emulationstation menu, which is launched by clicking Start from the main emulationstation screen, is to map the controller keys. For instance, I program for an NES controller at home, then bring it to school where there is a cool two-player arcade-style controller which has to be re-mapped.
The RetroPie configuration shows up from the main screen when you hit the down arrow key or something like that, then A. From here you can launch traditional raspi-config. I also used it to go into RetroPie setup, then into configuration and have emulationstation autolaunch at boot-up. You can also do a reboot from RetroPie setup.

Sound
To force sound out of the 3.5 mm stereo jack, go to RetroPie Configuration|RetroPie Setup|Configuration/tools|801 – audio settings|Headphones – 3.5 mm jack.

To get the volume to 100%, which you will need with the speakers listed below, go to emulation station menu|sound settings|system volume. By default it seems to be 77%, which just isn't enough juice.

References and related
Good discussion on X windows, display managers and desktop environments: https://raspberrypi.stackexchange.com/questions/26836/possible-to-reinstall-x-server-and-use-graphical-after-having-removed-it
Speakers for about $15: https://www.amazon.com/gp/product/B003JTHO3U/ref=oh_aui_detailpage_o01_s01?ie=UTF8&psc=1
Nintendo style USB controllers, $14: https://www.amazon.com/gp/product/B075ZN1GXK/ref=oh_aui_detailpage_o02_s00?ie=UTF8&psc=1
Two player setup with an arcade controller that has only one USB connection.
Arcade style two player console. Very cool. https://www.recroommasters.com/Xtension-Two-Player-Control-Board-Emulator-Edition-p/rm-xt-sd-board-ee.htm

Categories
Admin Linux Network Technologies

Measuring bandwidth on Checkpoint Gaia

Intro
Sometimes you don't have the tools you want, but you have enough to make do. Such is the case with the command line utilities of the CLI of Checkpoint Gaia. It's like a basic Linux. The company I consult for is beginning to hit some bandwidth limits and I wanted to understand overall traffic flow better. In the absence of any proper bandwidth monitor I used the netstat command and some approximations. Crude though it may be, it already gave me a much better idea about my traffic than I had going into this project.

The details
I call this BASH script netstats.sh

#!/bin/bash
# for Gaia, not IPSO
c=0
sleep=2
while /bin/true; do
  v[1]=`netstat -Ieth1-01 -e|grep RX|grep TX`
  n[1]="vlan 102           "
  v[2]=`netstat -Ieth1-05 -e|grep RX|grep TX`
  n[2]="vlan 103 200.78.39    "
  v[3]=`netstat -Ieth1-02 -e|grep RX|grep TX`
  n[3]="vlan 103 10.31.42"
  v[4]=`netstat -Ieth1-03 -e|grep RX|grep TX`
  n[4]="trunk for VPN      "
# interesting line:
#           RX bytes:4785585828883 (4.3 TiB)  TX bytes:7150474860130 (6.5 TiB)
  date
  for i in {1..4}; do
    RX=`echo ${v[$i]}|cut -d: -f2|awk '{print $1}'`
    TX=`echo ${v[$i]}|cut -d: -f3|awk '{print $1}'`
#    echo "vlan ${n[$i]}        RX,TX: $RX, $TX"
    if [ $c -gt 0 ]; then
      RXdiff=`expr $RX - ${RXold[$i]}`
      TXdiff=`expr $TX - ${TXold[$i]}`
# observed scaling factor: 8.1 bits/byte
      RXrate=$(($RXdiff*81/$sleep/10000000))
      TXrate=$(($TXdiff*81/$sleep/10000000))
      echo "${n[$i]}    RX,TX: $RXrate, $TXrate Mbps"
    fi
# old values
    RXold[$i]=$RX
    TXold[$i]=$TX
  done
  c=$(( $c + 1 ))
  sleep $sleep
done

It's pretty self-explanatory. I would just note that in the older IPSO OS you don't have the ability to get the bytes transferred from netstat, just the number of packets, which is an inherently cruder measure. The calibration of 8.1 bits per byte (there is overhead from the frames) is maybe a little crude, but it's what I measured over the course of a couple minutes.

A quick glance at Redhat or CentOS shows me that this same script, with appropriate modifications for the interface names (eth0, eth1, etc), would also work on those OSes.

IPSO
I really, really wanted some kind of measure for IPSO as well. So I tackled that as best I could. Here is that script:

#!/bin/bash
# for IPSO, not Gaia
c=0
while [ 1 -gt 0 ]; do
# eth1-01: vlan 802; eth1-05: vlan 803 (144.29); eth1-02: vlan 803 (10.201.145)
  v[1]=`netstat -Ieth-s4p1|tail -1`
  n[1]="vlan 208.129.99     "
  v[2]=`netstat -Ieth-s4p2|tail -1`
  n[2]="vlan 208.156.254     "
  v[3]=`netstat -Ieth-s4p3|tail -1`
  n[3]="vlan 208.149.129     "
  v[4]=`netstat -Ieth-s4p4|tail -1`
  n[4]="trunk for Cisco and b2b"
# interesting line:
#Name         Mtu   Network     Address             Ipkts Ierrs    Opkts Oerrs  Coll
#eth-s4p1     16018 <Link>      0:a0:8e:c4:ff:f4 72780201     0 56423000     0     0
  date
  for i in {1..4}; do
    RX=`echo ${v[$i]}|awk '{print $5}'`
    TX=`echo ${v[$i]}|awk '{print $7}'`
#    echo "vlan ${n[$i]}        RX,TX: $RX, $TX"
    if [ $c -gt 0 ]; then
      RXdiff=$(($RX - ${RXold[$i]}))
      TXdiff=$(($TX - ${TXold[$i]}))
# observed: .0043 mbits/packet
      RXrate=$(($RXdiff*43/100000))
# observed: .0056 mbits/packet
      TXrate=$(($TXdiff*56/100000))
      echo "${n[$i]}    RX,TX: $RXrate, $TXrate Mbps"
    fi
# old values
    RXold[$i]=$RX
    TXold[$i]=$TX
  done
  c=$(( $c + 1 ))
  sleep 10
done

The conversion to bits is probably only accurate to +/- 25%, because it depends a lot on the application, i.e., VPN concentrator versus proxy server. I just averaged all applications together because that’s the best I could do. I compared it to a Cisco router’s statistics.

Note that in Gaia, cpview can also be run from the CLI. Then you can drill down to the specific interface information. I have compared my script to cpview (which has a default screen update time of 2 seconds) and they're pretty close. As far as I know there is no way to script cpview, and at the end of the day I suspect it is only doing the same thing my script does.

Conclusion
A script is provided which gives a measure of Mbps bandwidth usage by polling netstat periodically. It’s not exact, but even crude measures can help a network engineer.