Apache as reverse proxy under SLES

Intro
Just got my SLES 12 SP4 server – that’s a type of commercial Linux – and I needed to set up a secure reverse proxy on it in a hurry. There are a lot of suggestions out there; I share what worked for me. The version of Apache that is supplied, for the record, is Apache 2.4.

The most significant error

[Tue Aug 13 15:26:24.321549 2019] [proxy:warn] [pid 5992] [client 127.0.0.1:40002] AH01144: No protocol handler was valid for the URL /. If you are using a DSO version of mod_proxy, make sure the proxy submodules are included in the configuration using LoadModule.

The solution
In /etc/sysconfig/apache2 (in SLES this file drives a macro that sets up Apache with the needed LoadModule statements) I needed a statement like the following:

APACHE_MODULES="actions alias auth_basic authn_file authz_host authz_groupfile authz_core authz_user autoindex cgi dir env
expires include log_config mime negotiation setenvif ssl socache_shmcb userdir reqtimeout authn_core proxy proxy_html proxy_http xml2enc"

In my first crack at it I had only mentioned the modules up to and including proxy. I needed to add proxy_html and proxy_http as well.
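By the way, if I recall the SLES packaging correctly, it also ships a2enmod/a2dismod helper scripts which edit APACHE_MODULES in /etc/sysconfig/apache2 for you, so something like this should be equivalent to hand-editing the file:

a2enmod proxy
a2enmod proxy_http
a2enmod proxy_html
a2enmod xml2enc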

In that same file you need a statement like this as well:

APACHE_SERVER_FLAGS="SSL"

The highlights of my virtual host file, based on the ssl template, are:

<VirtualHost *:443>
# https://www.centosblog.com/configure-apache-https-reverse-proxy-centos-linux/
<Location />
            ProxyPass https://10.1.2.181/
            ProxyPassReverse https://10.1.2.181/
</Location>
 
        #  General setup for the virtual host
##      DocumentRoot "/srv/www/htdocs"
        #ServerName www.example.com:443
        #ServerAdmin webmaster@example.com
        SSLProxyEngine on
        ErrorLog /var/log/apache2/error_log
        TransferLog /var/log/apache2/access_log
 
        #   SSL Engine Switch:
        #   Enable/Disable SSL for this virtual host.
        SSLEngine on
# from https://superuser.com/questions/829793/how-to-force-all-apache-connections-to-use-tlsv1-1-or-tlsv1-2 -DrJ 8/19
        SSLProtocol all -SSLv2 -SSLv3 -TLSv1
#SSLCipherSuite HIGH:!aNULL:!MD5:!RC4
        SSLCipherSuite ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
 
        #   You can use per vhost certificates if SNI is supported.
        SSLCertificateFile /etc/apache2/ssl.crt/vhost-example.crt
        SSLCertificateKeyFile /etc/apache2/ssl.key/vhost-example.key
        SSLCertificateChainFile /etc/apache2/ssl.crt/vhost-example-chain.crt
 
 
        #   Per-Server Logging:
        #   The home of a custom SSL log file. Use this when you want a
        #   compact non-error SSL logfile on a virtual host basis.
        CustomLog /var/log/apache2/ssl_request_log   ssl_combined
 
</VirtualHost>

except that I used valid paths to my certificate, key and CA chain files.

Errors you may encounter
$ curl -i -k https://localhost/

HTTP/1.1 500 Proxy Error
Date: Thu, 15 Aug 2019 19:10:13 GMT
Server: Apache
Content-Length: 442
Connection: close
Content-Type: text/html; charset=iso-8859-1
 
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>500 Proxy Error</title>
</head><body>
<h1>Proxy Error</h1>
The proxy server could not handle the request <em><a href="/">GET&nbsp;/</a></em>.<p>
Reason: <strong>Error during SSL Handshake with remote server</strong></p><p />
<p>Additionally, a 500 Internal Server Error
error was encountered while trying to use an ErrorDocument to handle the request.</p>
</body></html>

I traced this error to the fact that initially I did not tell Apache to ignore certificate name mismatches and other related problems on the back-end server. Inserting these directives cured that problem:

SSLProxyVerify none
SSLProxyCheckPeerCN off
SSLProxyCheckPeerName off
SSLProxyCheckPeerExpire off
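
After a change like this, reload Apache and repeat the curl test to confirm the handshake now succeeds; on SLES 12 that should be something like:

$ systemctl reload apache2
$ curl -i -k https://localhost/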

This is discussed in https://stackoverflow.com/questions/18872482/error-during-ssl-handshake-with-remote-server

I finally got past the SSL errors but then I still had a 404 error and an xml2enc error.

When I ran service apache2 status I saw this:

Aug 15 16:09:31 lusytp008850388 start_apache2[28539]: [Thu Aug 15 16:09:31.879604 2019] [proxy_html:notice] [pid 28539] AH01425: I18n support in mod_proxy_html requires mod_xml2enc. Without it, non-ASCII characters in proxied pages are likely to display incorrectly.

Not certain whether this was important or not, I simply decided to heed the advice, so that’s when I added xml2enc to the list of modules to enable in /etc/sysconfig/apache2:

APACHE_MODULES="actions alias auth...proxy proxy_html proxy_http xml2enc"

The next error was this:

HTTP/1.1 404 Not Found

I got this when I put in a URI that worked just fine if I entered it directly in a browser hitting the web server.

I had a hunch that this could occur if the web server was finicky and insisted on being addressed by a certain name. So originally I had statements like this:

            ProxyPass https://10.1.2.181/
            ProxyPassReverse https://10.1.2.181/

I changed it to

            ProxyPass https://myserveralias.example.com/
            ProxyPassReverse https://myserveralias.example.com/

except in place of myserveralias.example.com I put in what I felt the web site operators would have used – the known working alias for direct access to this web site. Of course I first made sure that my apache server could resolve myserveralias.example.com to 10.1.2.181, which it could.

And, voila, no more 404 error!
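
In hindsight, another approach that might have worked is Apache’s ProxyPreserveHost directive, which passes the client-supplied Host header to the back-end instead of the hostname from the ProxyPass URL. A sketch I did not test, keeping the original IP-based statements:

        ProxyPreserveHost On
        ProxyPass / https://10.1.2.181/
        ProxyPassReverse / https://10.1.2.181/

But that only helps if clients address the proxy by the same name the back-end web server expects, so the alias approach felt safer.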

Conclusion
An SSL reverse proxy to an SSL back-end web server was set up under SLES 12 SP4, using TLS 1.2 and Apache 2.4.23 – in other words, pretty current stuff.

References and related
Compiling apache2.4


Solution to NPR puzzle using Raspberry Pi

Intro
Take a common five-letter word. If you add an “e” to the end you’ll get a common six-letter word. Or add an “e” after the second letter to get a different six-letter word, or an “e” after the fourth letter. What word is it?

The technique
I used the strange dictionary built in to my Raspberry Pi, in /usr/share/dict/american-english. (The jhwords file below is just my copy of it.)

Key command
$ egrep '^[a-z]{5,6}$' jhwords > /tmp/five-six

That leaves us with 11897 words (use the wc command to learn this).
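For instance:

$ wc -l /tmp/five-six
11897 /tmp/five-six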

Now just look at the six-letter words containing an “e” at the end:

$ egrep '^[a-z]{5}e$' /tmp/five-six > e-at-end

Down to 784 words.

Now strip off the terminal “e”.

(steps skipped)
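For the curious, an untested sketch of how those skipped steps might go, using the same files as above:

# strip the terminal "e" to get five-letter stems
sed 's/e$//' e-at-end > stems
# keep only the stems which are themselves common words in our five/six-letter list
grep -Fxf stems /tmp/five-six > candidates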

And then…

Candidate word list

ameba
ampul
aorta
avers
blond
brows
cloth
corps
demur
expos
fauna
final
flora
fondu
grill
hears
hydra
karat
larva
loath
local
madam
moral
pleas
psych
regal
scrap
sever
sooth
spars
strip
swath
teeth
tibia
uvula
vulva
zombi

You can scan by hand – the answer jumps out at you.

Conclusion
Another NPR Weekend Edition puzzle is solved by use of some simple linux commands.

References and related
Another NPR puzzle similarly solved.


Raspberry Pi photo frame using your pictures on your Google Drive

Intro
All my spouse’s digital photo frames are either broken or nearly broken – probably she got them from garage sales. Regardless, they spend 99% of the time black. Now, since I had bought that Raspberry Pi PiDisplay awhile back, and it is underutilized, and I know a thing or two about linux, I felt I could create a custom photo frame with things I already have lying around – a Raspberry Pi 3, a PiDisplay, and my personal Google Drive. We make a point of copying all our cameras’ pictures onto the Google Drive, which we do the old-fashioned, by-hand way. After 17 years of digital photos we have about 40,000 of them, over 200 GB.

So I also felt obliged to create features you will never have in a commercial product, to make the effort worthwhile. I thought, what about randomly picking a few for display from amongst all the pictures, displaying that subset for a few days, and then moving on to a new randomly selected sample of images, etc? That should produce a nice review of all of them over time, eventually. You need an approach like that because you will never get to the end if you just try to display 40000 images in order!

Python rotate image script
This simple script rotates an image if the orientation isn’t right, which happens a lot with the pictures we store on our Google Drive. Modern cellphones hide all this from you.

import sys
# mostly from https://stackoverflow.com/questions/13872331/rotating-an-image-with-orientation-specified-in-exif-using-python-without-pil-in -
# DrJ 8/2019
from PIL import Image, ExifTags
 
# loop over all arguments, which are the files to be examined for their orientation
for img in sys.argv[1:]:
  try:
    image=Image.open(img)
    for orientation in ExifTags.TAGS.keys():
        if ExifTags.TAGS[orientation]=='Orientation':
            break
    exif=dict(image._getexif().items())
 
    if exif[orientation] == 3:
        image=image.rotate(180, expand=True)
    elif exif[orientation] == 6:
        image=image.rotate(270, expand=True)
    elif exif[orientation] == 8:
        image=image.rotate(90, expand=True)
    image.save(img)
    image.close()
 
  except (AttributeError, KeyError, IndexError):
    # cases: image doesn't have getexif
    pass
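Assuming you save the script as rotate.py – my name for it, nothing official – you would run it over a batch of images like this:

$ python rotate.py *.jpg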

The above script isn’t so great, I think, because it uses the PIL library, which does not preserve the EXIF meta information. Better is something like this:

import sys
import pyexiv2
 
image = pyexiv2.Image(sys.argv[1])
image.readMetadata()
 
image['Exif.Image.Orientation'] = 6
image.writeMetadata()

To be continued…

References and related
PiDisplay


F5 Big-IP: When your virtual server does not present your chain certificate

Intro
While I was on vacation someone replaced a certificate which had expired on the F5 Big-IP load balancer. Maybe they were not quite as careful as I would like to hope I would have been. In any case, shortly afterwards our SiteScope monitoring reported there was an untrusted server certificate chain. It took me quite some digging to get to the bottom of it.

The details
Well, the web site came up just fine in my browser. I checked it with SSLlabs and its grade was capped at B because of problems with the server certificate chain. I also independently confirmed using openssl that no intermediate certificate was being presented by this virtual server. To see what that looks like with an example of this problem kindly provided by badssl.com, do:

$ openssl s_client -showcerts -connect incomplete-chain.badssl.com:443

CONNECTED(00000003)
depth=0 /C=US/ST=California/L=San Francisco/O=BadSSL Fallback. Unknown subdomain or no SNI./CN=badssl-fallback-unknown-subdomain-or-no-sni
verify error:num=20:unable to get local issuer certificate
verify return:1
depth=0 /C=US/ST=California/L=San Francisco/O=BadSSL Fallback. Unknown subdomain or no SNI./CN=badssl-fallback-unknown-subdomain-or-no-sni
verify error:num=27:certificate not trusted
verify return:1
depth=0 /C=US/ST=California/L=San Francisco/O=BadSSL Fallback. Unknown subdomain or no SNI./CN=badssl-fallback-unknown-subdomain-or-no-sni
verify error:num=21:unable to verify the first certificate
verify return:1
---
Certificate chain
 0 s:/C=US/ST=California/L=San Francisco/O=BadSSL Fallback. Unknown subdomain or no SNI./CN=badssl-fallback-unknown-subdomain-or-no-sni
   i:/C=US/ST=California/L=San Francisco/O=BadSSL/CN=BadSSL Intermediate Certificate Authority
-----BEGIN CERTIFICATE-----
MIIE8DCCAtigAwIBAgIJAM28Wkrsl2exMA0GCSqGSIb3DQEBCwUAMH8xCzAJBgNV
BAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMRYwFAYDVQQHDA1TYW4gRnJhbmNp
...
HJKvc9OYjJD0ZuvZw9gBrY7qKyBX8g+sglEGFNhruH8/OhqrV8pBXX/EWY0fUZTh
iywmc6GTT7X94Ze2F7iB45jh7WQ=
-----END CERTIFICATE-----
---
Server certificate
subject=/C=US/ST=California/L=San Francisco/O=BadSSL Fallback. Unknown subdomain or no SNI./CN=badssl-fallback-unknown-subdomain-or-no-sni
issuer=/C=US/ST=California/L=San Francisco/O=BadSSL/CN=BadSSL Intermediate Certificate Authority
...
    Verify return code: 21 (unable to verify the first certificate)

So you get that message about being unable to verify the first certificate.

Here’s the weird thing: the certificate in question was issued by Globalsign, and we have used them for years, so we had the intermediate certificate configured already in the SSL client profile. The so-called chain certificate was GlobalsignIntermediate. But it wasn’t being presented. What the heck? Then I checked someone else’s Globalsign certificate and found the same issue.

Then I began to get suspicious about the certificate. I checked the issuer more carefully and found that it wasn’t from the intermediate we had been using all these past years. Globalsign changed their intermediate certificate! The new one dates from November 2018 and expires in 2028.

And, to compound matters, F5 “helpfully” does not complain about the mismatch; it simply does not send the wrong intermediate certificate we had specified in the SSL client profile. It sends no intermediate certificate at all to accompany the server certificate.
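
A quick way to double-check which intermediate certificate you actually need is to compare the issuer of the server certificate with the subject of the chain certificate configured on the F5. Something like this shows the issuer, where example.drj.com is a stand-in for your virtual server:

$ openssl s_client -connect example.drj.com:443 < /dev/null 2>/dev/null | openssl x509 -noout -issuer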

Conclusion
The case of the missing intermediate certificate was resolved. It is not the end of the world to miss an intermediate certificate, but on the other hand it is not professional either. Sooner or later it will get you into trouble.

References and related
badssl.com is a great resource.
My favorite openssl commands can be very helpful.


Can you go from Terminal 1 to Terminal 2 in O’Hare without going through the security lines again?

Yes.

Yes, at least if you walk. Not sure about other transportation options. Count on about 13 minutes for the walk.

Not sure why this information is so hard to find… Sometimes you land at one terminal and have to take a puddle jumper to a regional airport out of another, and you’d really like to know: is 50 minutes or whatever gonna be enough?

This is probably true for most major airports. It seems for instance that at Newark Liberty you can also move between terminals A, B and C without going through security once again. I thought JFK may have been an exception to this rule, but I’m not sure…


How to test if a web site requires a client certificate

Intro
I cannot find a link on the Internet for this, yet I think some admins would appreciate a relatively simple test to know: is this a web site which requires a client certificate to work? The errors generated in a browser may be very generic in these situations. I see many ways to offer help, from a recipe to a tool to some pointers. I’m not yet sure how I want to proceed!

Why would a site require a client CERT? Most likely as a form of client authentication.

Pointers for the DIY crowd
Badssl.com plus access to a linux command line – such as on the Raspberry Pi I so often write about – will do it for you guys.

The Client Certificate section of badssl.com has most of what you need. The page is getting big; look for the client-certificate tile.

So as a big timesaver badssl.com has created a client certificate for you which you can use to test with. Download it as follows.

Go to your linux prompt and do something like this:
$ wget https://badssl.com/certs/badssl.com-client.pem

badssl.com has a web page you can test with which only shows success if you access it using a client certificate, https://client.badssl.com/

To see how this works, try to access it the usual way, without supplying a client CERT:

$ curl -i -k https://client.badssl.com/

HTTP/1.1 400 Bad Request
Server: nginx/1.10.3 (Ubuntu)
Date: Thu, 20 Jun 2019 17:53:38 GMT
Content-Type: text/html
Content-Length: 262
Connection: close
 
<html>
<head><title>400 No required SSL certificate was sent</title></head>
<body bgcolor="white">
<center><h1>400 Bad Request</h1></center>
<center>No required SSL certificate was sent</center>
<hr><center>nginx/1.10.3 (Ubuntu)</center>
</body>
</html>

Now try the same thing, this time using the client CERT you just downloaded:

$ curl -v -i -k -E ./badssl.com-client.pem:badssl.com https://client.badssl.com/

* About to connect() to client.badssl.com port 443 (#0)
*   Trying 104.154.89.105... connected
* Connected to client.badssl.com (104.154.89.105) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* warning: ignoring value of ssl.verifyhost
* skipping SSL peer certificate verification
* NSS: client certificate from file
*       subject: CN=BadSSL Client Certificate,O=BadSSL,L=San Francisco,ST=California,C=US
*       start date: Nov 16 05:36:33 2017 GMT
*       expire date: Nov 16 05:36:33 2019 GMT
*       common name: BadSSL Client Certificate
*       issuer: CN=BadSSL Client Root Certificate Authority,O=BadSSL,L=San Francisco,ST=California,C=US
* SSL connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* Server certificate:
*       subject: CN=*.badssl.com,O=Lucas Garron,L=Walnut Creek,ST=California,C=US
*       start date: Mar 18 00:00:00 2017 GMT
*       expire date: Mar 25 12:00:00 2020 GMT
*       common name: *.badssl.com
*       issuer: CN=DigiCert SHA2 Secure Server CA,O=DigiCert Inc,C=US
> GET / HTTP/1.1
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.27.1 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: client.badssl.com
> Accept: */*
>
< HTTP/1.1 200 OK
HTTP/1.1 200 OK
< Server: nginx/1.10.3 (Ubuntu)
Server: nginx/1.10.3 (Ubuntu)
< Date: Thu, 20 Jun 2019 17:59:08 GMT
Date: Thu, 20 Jun 2019 17:59:08 GMT
< Content-Type: text/html
Content-Type: text/html
< Content-Length: 662
Content-Length: 662
< Last-Modified: Wed, 12 Jun 2019 15:43:39 GMT
Last-Modified: Wed, 12 Jun 2019 15:43:39 GMT
< Connection: keep-alive
Connection: keep-alive
< ETag: "5d011dab-296"
ETag: "5d011dab-296"
< Cache-Control: no-store
Cache-Control: no-store
< Accept-Ranges: bytes
Accept-Ranges: bytes
 
<
<!DOCTYPE html>
<html>
<head>
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <link rel="shortcut icon" href="/icons/favicon-green.ico"/>
  <link rel="apple-touch-icon" href="/icons/icon-green.png"/>
  <title>client.badssl.com</title>
  <link rel="stylesheet" href="/style.css">
  <style>body { background: green; }</style>
</head>
<body>
<div id="content">
  <h1 style="font-size: 12vw;">
    client.<br>badssl.com
  </h1>
</div>
 
<div id="footer">
  This site requires a <a href="https://en.wikipedia.org/wiki/Transport_Layer_Security#Client-authenticated_TLS_handshake">client-authenticated</a> TLS handshake.
</div>
 
</body>
</html>
* Connection #0 to host client.badssl.com left intact
* Closing connection #0

No more 400 error status – that looks like success to me. Note that we had to provide the password for our client CERT, which they kindly tell us is badssl.com.

Here’s an example of a real site which requires client CERTs:

$ curl -v -i -k -E ./badssl.com-client.pem:badssl.com https://jp.nissan.biz/

* About to connect() to jp.nissan.biz port 443 (#0)
*   Trying 150.63.252.1... connected
* Connected to jp.nissan.biz (150.63.252.1) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* warning: ignoring value of ssl.verifyhost
* skipping SSL peer certificate verification
* NSS: client certificate from file
*       subject: CN=BadSSL Client Certificate,O=BadSSL,L=San Francisco,ST=California,C=US
*       start date: Nov 16 05:36:33 2017 GMT
*       expire date: Nov 16 05:36:33 2019 GMT
*       common name: BadSSL Client Certificate
*       issuer: CN=BadSSL Client Root Certificate Authority,O=BadSSL,L=San Francisco,ST=California,C=US
* NSS error -12227
* Closing connection #0
* SSL connect error
curl: (35) SSL connect error

OK, so you get an error, but that’s to be expected because our certificate is not one it will accept.

The point is that if you don’t send it a certificate at all, you get a different error:

$ curl -v -i -k https://jp.nissan.biz/

* About to connect() to jp.nissan.biz port 443 (#0)
*   Trying 150.63.252.1... connected
* Connected to jp.nissan.biz (150.63.252.1) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* warning: ignoring value of ssl.verifyhost
* skipping SSL peer certificate verification
* NSS: client certificate not found (nickname not specified)
* NSS error -12227
* Closing connection #0
curl: (35) NSS: client certificate not found (nickname not specified)

See that client certificate not found? That is the error we eliminated by supplying a client certificate, albeit one which it will not accept.

What if we have a client certificate but we use the wrong password? Here’s an example of that:

$ curl -v -i -k -E ./badssl.com-client.pem:badpassword https://client.badssl.com/

* About to connect() to client.badssl.com port 443 (#0)
*   Trying 104.154.89.105... connected
* Connected to client.badssl.com (104.154.89.105) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* warning: ignoring value of ssl.verifyhost
* Unable to load client key -8025.
* NSS error -8025
* Closing connection #0
curl: (58) Unable to load client key -8025.

Chrome, for its part, gives a fairly intelligible error in this situation.

Possibly to be continued…

Conclusion
We have given a recipe for testing from a linux command line whether a web site requires a client certificate or not. Since the errors are distinctive, it could even be turned into a program.

References and related
My article about ciphers has been popular.

I’ve also used badssl.com for other related tests.

Can you use openssl directly? You’d hope so, but I haven’t had time to explore it… Here are my all-time favorite openssl commands.
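
If you want to try, here is my guess at what that would look like – untested. The downloaded pem contains both the certificate and the (encrypted) private key, so pointing both arguments at it ought to work, with openssl prompting for the pass phrase badssl.com:

$ openssl s_client -connect client.badssl.com:443 -cert badssl.com-client.pem -key badssl.com-client.pem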

https://badssl.com/ – lots of cool tests here. The creators have been really thorough.


F5 clustering tips

Intro
I was having trouble with one of my new F5 clusters. I thought I’d be clever and add two new F5’s to the existing cluster, and then remove the old members when everything looked good.

But the device groups somehow remembered the old servers, even though they weren’t in the device groups. Eventually I could not even sync the two new servers together.

Here’s a concise summary of the steps which worked, taken from this article:

Hi Manuel, do you try to setup a sync-failover device-group containing three units? To establish device trust I would recommend to force two units into offline state. Remove the machines from the existing sync-failover device-group (repeat on each machine if required) and delete the sync-failover device-group. Now reset the device trust on all machines. Next step will be to use the active machine to add both offline machines as peers. Now all three units should show up in each machine's device list. On the active unit create a new device-group of type sync-failover with network failover enabled. Add all machines to the new device-group. This will be synced to all machines and you can release them from forced offline. Time for the initial sync now. Thanks, Stephan

I get a little confused about sync-only vs sync-failover. That difference is clearly discussed there as well:

“Sync-only” will not allow to synchronize LTM configurations.
To synchronize an LTM config all units need to belong to the same “sync-failover” device-group.

I had been afraid of and ignorant about the need to reset device trust. For good measure I deleted all device groups. Then this procedure seemed to leave me with an auto-created datasync-global-dg (sync-only). Since there was no sync-failover group I manually created one I called sync-LTM-configs. Another auto-created, sync-only group is the device_trust_group. So I see three device groups in all when I am on the sync page, but only two when I am on the device group page.
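
By the way, tmsh is handy for sanity-checking where you stand at each step; these commands at least exist on recent TMOS versions:

tmsh show cm sync-status
tmsh list cm device-group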

Conclusion
I had been afraid of and ignorant about the need to reset device trust when cleaning up an F5 cluster with old and new members. But once I did that and followed the recipe above, I was good to go.

References and related
The discussion on the F5 Devcentral site: https://devcentral.f5.com/questions/f5-sync-options


Postfix Operational tips

Intro
I’m trying out the system-supplied postfix on a SLES system. I had been using sendmail, but there doesn’t seem to be any development on that software anymore.

Some commands I needed right away
Well, right away I had thousands of queued messages so I needed a way to make sense of what was happening.

For these commands to make sense you need to know that I am running a second postfix configuration out of /etc/postfixEXT.

Display the queue

postqueue -c /etc/postfixEXT -p

Force delivery from the queue

postqueue -c /etc/postfixEXT -f

List one email in detail

postcat -vq -c /etc/postfixEXT QUEUEID

Delete one email

postsuper -c /etc/postfixEXT -d QUEUEID

Put mail on hold

postsuper -c /etc/postfixEXT -h ALL|QUEUEID

Release mail form hold

postsuper -c /etc/postfixEXT -H ALL|QUEUEID

How to force delivery of a single message
This command is not documented anywhere – because it doesn’t exist! – so you have to get creative. If you have the luxury of halting all email for a few seconds, simply do this:

Display the queue to find the queue ID of the email you want to force delivery of

postqueue -c /etc/postfixEXT -p

Put all mail on hold

postsuper -c /etc/postfixEXT -h ALL

Now release the hold on that one email

postsuper -c /etc/postfixEXT -H QUEUEID

QUEUEID is, of course, the queue id, e.g., F2A1A27891E, of the email in question.

Look for what happened
Check your mail log’s last lines in /var/log/mail

Revert back to normal running

postsuper -c /etc/postfixEXT -H ALL

Since mail is store-and-forward and not real time, you can do these steps even on a production system and no one will be the wiser if you are pretty quick. It probably takes two minutes even for a slow typist.

How to run multiple listeners
I didn’t want to disturb the system-installed postfix too much. I would let it “have” the loopback address, 127.0.0.1, leaving me the other IPs for my relay config to listen on. I added these lines to /etc/postfix/main.cf

multi_instance_enable = yes
multi_instance_directories = /etc/postfixEXT

service postfix start starts up the local postfix plus my relay. Grep the process table for either master or postfix to see them. However, to be honest, service postfix stop does not kill all the processes, so I always ended up killing one of the master processes by hand. Update: postmulti -p stop does the trick and kills all of them. There are also status and start options in place of stop.
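
I won’t reproduce my whole relay configuration, but a second instance minimally needs its own paths and its own listener in its main.cf – something along these lines, where the values are invented for illustration:

# /etc/postfixEXT/main.cf - excerpts
multi_instance_name = postfixEXT
queue_directory = /var/spool/postfixEXT
data_directory = /var/lib/postfixEXT
inet_interfaces = 10.1.2.3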

Sendmail to Postfix migration tips
This could be a separate post but I am too lazy to do that.

What happens to the access file? I kept the name of the file access but just list all the IPs, one per line, without any further arguments, to permit just those IPs relay access. In my main.cf I have a line like this to tie it together:

mynetworks = /etc/postfixEXT/access

Note that there is no hashed or .db version of this file any longer, unlike in the sendmail case.
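
For illustration, the contents of my access file are now nothing more than a list like this (addresses invented):

10.1.2.15
10.1.2.16
10.1.2.17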

References and related
Since I mentioned sendmail I have to give a shout out to one of my old sendmail posts.

More info on postfix multiple instances. A pretty complete guide.


Live stream to YouTube from a Raspberry Pi + webcam or USB microphone

Intro
I’ve been looking at this off and on for awhile now. I finally made a breakthrough this week and started to generate some decent live streams on my Youtube channel, after a lot of misfires.

Note this is applicable for Raspbian Stretch Lite on a Raspberry Pi 3. However, I firmly believe it will work just the same for regular Raspbian Stretch.

There’s a lot of wrong, misleading or outdated information out there on the Internet. Hopefully this will help others to avoid wasting as much time as I had to do.

This project was prompted by my desire to make a more generalized fishcam! My original fishcam implementation, described in this post – and I realized this from the get-go – has very limited applicability because very few people are in a position to have their own AWS server. And if you don’t know what you’re doing, please don’t run your own server – the security exposure is too great.

So I eventually realized that maybe I could generalize what I had done – essentially remove the dependency on the AWS server – by utilizing Youtube Live Streaming. And, I believe I was right. It’s still a work in progress however.

The command – ffmpeg
I was playing with ffmpeg. The version I am playing with now comes with Raspbian – no need to compile like in the bad old days. ffmpeg -version shows the version to be 3.2.12. I get the impression that its capabilities are version-dependent, so that’s why this information is particularly relevant in this case.

The details
In some of my early attempts I was getting a lot of this (looking at YouTube Live Dashboard)

Dashboard when the stream is not quite right

Another attempt
Video works, audio like driving in a car with the windows down. For the record, the command was this:

ffmpeg \
-f alsa -i plughw:CARD=U0x46d0x825,DEV=0 \
-f v4l2 -i /dev/video0 \
-c:v libx264 -pix_fmt yuv420p -preset ultrafast -g 10 -b:v 2500k \
-bufsize 512k \
-acodec libmp3lame -ar 44100 \
-threads 2 -qscale 3 \
-b:a 96K \
-r 10 \
-s 1280x720 \
-f flv rtmp://a.rtmp.youtube.com/live2/KEY

Video OK, audio choppy message

For the record, the bandwidth required was about 2100 kbps.

List the formats your video device supports

ffmpeg -f video4linux2 -list_formats all -i /dev/video0

Results using my Logitech Webcam

[video4linux2,v4l2 @ 0xcc45c0] Raw       :     yuyv422 :           YUYV 4:2:2 : 640x480 160x120 176x144 320x176 320x240 352x288 432x240 544x288 640x360 752x416 800x448 800x600 864x480 960x544 960x720 1024x576 1184x656 1280x720 1280x960
[video4linux2,v4l2 @ 0xcc45c0] Compressed:       mjpeg :          Motion-JPEG : 640x480 160x120 176x144 320x176 320x240 352x288 432x240 544x288 640x360 752x416 800x448 800x600 864x480 960x544 960x720 1024x576 1184x656 1280x720 1280x960
ffmpeg \
-f alsa -i plughw:CARD=U0x46d0x825,DEV=0 \
-f v4l2 -i /dev/video0 \
-c:v libx264 -pix_fmt yuv420p -preset ultrafast -g 10 -b:v 1200 \
-bufsize 512k \
-acodec libmp3lame -ar 44100 \
-threads 2 -qscale 3 \
-b:a 128K \
-r 5 \
-s 640x480 \
-f flv rtmp://a.rtmp.youtube.com/live2/KEY

Audio good, video not working

video terrible, but audio good!

It is not so pretty to use that hardware address for the Logitech webcam device. Where do you see that hardware address? Either lsusb or ls /dev/snd/by-id shows the addresses of sound devices.
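
If I remember my ALSA tools correctly, arecord -l also shows the mapping – it lists the capture devices by card and device number, which is just what the plughw notation refers to. The output is along these lines (the long CARD= name above came from here):

$ arecord -l
**** List of CAPTURE Hardware Devices ****
card 1: U0x46d0x825 [USB Device 0x46d:0x825], device 0: USB Audio [USB Audio]

Anyway, the simpler substitute is plughw:1,0, i.e., card 1, device 0: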

ffmpeg \
-f alsa -i plughw:1,0 \
-f v4l2 -i /dev/video0 \
-c:v libx264 -pix_fmt yuv420p -preset ultrafast -g 10 -b:v 1200k \
-bufsize 512k \
-acodec libmp3lame -ar 44100 \
-threads 2 -qscale 3 \
-b:a 128k \
-r 5 \
-s 640x480 \
-f flv rtmp://a.rtmp.youtube.com/live2/KEY

With this, audio’s not too bad and video’s a bit choppy. Google reports the stream quality as OK, check resolution.

So I fixed the bandwidth (the missing “k” was a typo, but one with an interesting result). With the video bandwidth set to -b:v 1200k the video is OK once again, but the audio is choppy again! Weird. Bandwidth is about 1100 kbps.

This version had OK video and OK audio:
ffmpeg \
-f alsa -i plughw:CARD=U0x46d0x825,DEV=0 \
-f v4l2 -i /dev/video0 \
-c:v libx264 -pix_fmt yuv420p -preset ultrafast -g 10 -b:v 1600k \
-bufsize 512k \
-acodec libmp3lame -ar 44100 \
-threads 2 -qscale 3 \
-b:a 128k \
-r 5 \
-s 640x480 \
-f flv rtmp://a.rtmp.youtube.com/live2/KEY

But I keep getting inconsistent results! Sometimes a setting will work, and then I come back to it and it doesn’t. Weird.

Part of the problem is that I have no idea what I’m doing and I didn’t know when I was watching a livestream vs a recorded (on-demand) one! I have since learned to look for the little red Live button. A picture is worth 10^3 words in this case.

Observed used bandwidth is about 1450 kbits/sec. But still lots of dropped packets. Here is what ffmpeg reports. I’m not sure yet what most of it means:

[alsa @ 0x1502700] ALSA buffer xrun.
[alsa @ 0x1502700] Thread message queue blocking; consider raising the thread_queue_size option (current value: 8)
frame= 5828 fps=5.0 q=-1.0 Lsize=  205496kB time=00:19:26.20 bitrate=1443.5kbits/s dup=0 drop=11138 speed=   1x
video:187265kB audio:17449kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.382063%
[libx264 @ 0x15100e0] frame I:583   Avg QP: 9.41  size: 53819
[libx264 @ 0x15100e0] frame P:5245  Avg QP:13.53  size: 30578
[libx264 @ 0x15100e0] mb I  I16..4: 100.0%  0.0%  0.0%
[libx264 @ 0x15100e0] mb P  I16..4: 38.0%  0.0%  0.0%  P16..4: 60.7%  0.0%  0.0%  0.0%  0.0%    skip: 1.4%
[libx264 @ 0x15100e0] coded y,uvDC,uvAC intra: 93.7% 86.2% 82.4% inter: 77.8% 60.5% 34.1%
[libx264 @ 0x15100e0] i16 v,h,dc,p: 17% 23% 15% 45%
[libx264 @ 0x15100e0] i8c dc,h,v,p: 51% 21% 16% 11%
[libx264 @ 0x15100e0] kb/s:1315.22

The video for that run is here: https://youtu.be/oxJaZv0frGM

Suppressing Audio
This is what worked for me.

ffmpeg \
-f lavfi -i anullsrc=channel_layout=stereo:sample_rate=44100 \
-f v4l2 -i /dev/video0 \
-c:v libx264 -pix_fmt yuv420p -preset ultrafast -g 10 -b:v 1600k \
-bufsize 512k \
-acodec libmp3lame -ar 44100 \
-threads 2 -qscale 3 \
-b:a 128k \
-r 5 \
-s 640x480 \
-f flv rtmp://a.rtmp.youtube.com/live2/KEY

That is working great – showing the video as before but now with a silent audio track.

Increase Video Quality
Here I’ve increased video quality a tad by requesting more fps (10) and making qscale 0 (which means highest quality).
https://www.youtube.com/watch?v=5Aall8w4Y3E

ffmpeg \
-f alsa -i plughw:1,0 \
-f v4l2 -i /dev/video0 \
-c:v libx264 -pix_fmt yuv420p -preset ultrafast -b 3000k -g 20 -b:v 1800k \
-bufsize 512k \
-acodec libmp3lame -ar 44100 \
-threads 4 -qscale 0 \
-b:a 128k \
-r 10 \
-s 640x480 \
-f flv rtmp://a.rtmp.youtube.com/live2/KEY

Bitrate was about 1700 kbps. Quality is maybe a little better. Audio still leaves something to be desired.

Still better video quality

ffmpeg \
-f alsa -i plughw:1,0 \
-f v4l2 -i /dev/video0 \
-c:v libx264 -pix_fmt yuv420p -preset ultrafast -b 3000k -g 60 -b:v 2000k \
-bufsize 512k \
-acodec libmp3lame -ar 44100 \
-threads 4 -qscale 0 \
-b:a 128k \
-r 30 \
-s 640x480 \
-f flv rtmp://a.rtmp.youtube.com/live2/KEY

What is observed to happen is that ffmpeg actually chooses 15 fps rather than 30. I’ve read it decides what it is able to do, so maybe that’s the highest fps it can deliver. Video is pretty smooth (See my Livestream link in references if I happen to have it running. Otherwise I will create a video link.) No drops are recorded, but the sound, though not terrible, has some pops. Bandwidth used is about 1900 kbps. So this is definitely my best effort yet. YouTube complains about the unsupported video size of 640×480, but it permits it and I don’t think it’s a real problem.

Reducing bandwidth
This one is pretty good overall. I have no idea why lowering the audio bandwidth might help – it’s counterintuitive – but video motion is not bad, just a tad blurred. I guess q=23. Audio has good patches and not-as-good patches; the not-as-good spots are staticky, not washboard bad. Total bandwidth used is about 611 kbps, so a great compromise. Why does raising the video bandwidth lower the audio quality? I have no idea… The settings below worked for maybe 20 minutes, then YouTube said This video is unavailable. I at least found out something about that: it shows a problem with the player, not (for once) your stream. Since I’m only concentrating on the stream, that’s good news. In fact it delivered good sound for three hours straight with a few staticky spots.

ffmpeg \
-thread_queue_size 1024 \
-f alsa -i plughw:1,0 \
-thread_queue_size 256 \
-f v4l2 -i /dev/video0 \
-c:v libx264 -pix_fmt yuv420p -preset ultrafast -g 30 -b:v 450k \
-bufsize 512k \
-acodec libmp3lame -ar 44100 \
-threads 4 -q:v 5 \
-q:a 0 \
-b:a 64k \
-r 15 \
-s 480x320 \
-f flv rtmp://a.rtmp.youtube.com/live2/KEY

The audio is creepily sensitive, easily picking up conversations in adjacent rooms.

But then I monkeyed around with the settings, got the washboard sound, came back to this one – a known good – and got washboard audio! What the heck? Why isn’t it consistent?? No idea… Maybe it’s the player that gets messed up?? Now I’m running it again and it’s OK.

Bandwidth talk
It’s important to talk about bandwidth if you haven’t given this any real thought. You have to have a halfway decent broadband connection for this to work, you see? If you have a mid-speed cable modem or DSL, you have much lower upload than download speeds, and you may not be able to pull off a reliable 1.5 mbps upload. For those lucky enough to have Verizon FIOS this is a non-issue. But for instance in the high school where I volunteer they have throttled the guest WiFi network to such an extent that achieving this modest 1.5 mbps is going to present a real challenge. If you rely on a phone’s hotspot you will also probably be unable to get such a speed. So I may look at more ways to reduce the bandwidth required in the future.

Check your bandwidth using speedcheck.org.

And between YouTube and your ISP, the whole live video broadcasting thing just seems, well, delicate. Stream health varies between OK, Excellent and not receiving – all during the same streaming session! It often takes five minutes or so for the stream to appear to be working.

Comparing two webcams
Someone picked up a really cheap DI Chatcam at Microcenter in Paterson. I think that’s Digital Innovations Chatcam. It’s cute. It has a big clip on the end and shines white LEDs when it’s on. I think it was about $12. With the exact same ffmpeg settings (with audio suppressed), the quality was not nearly as good as with the Logitech webcam. Here’s a link to the YouTube video made with the chatcam: https://www.youtube.com/watch?v=OI2IRV1i__k. Note that it has a mini stereo plug for audio. I didn’t even plug it in now that I know how to suppress audio!

The Logitech model is a C525. It was a refurbished model which cost me about $27. The comparable Logitech webcam test is here: https://www.youtube.com/watch?v=L7ZYaRJR7mQ

I need to re-run this test now that I know how to increase the video quality.

A breakthrough: publishing an audio-only stream to YouTube
Besides covering your lens with tape, what’s a software way to blacken the video and concentrate on producing the best audio, I wondered?

ffmpeg \
-thread_queue_size 4096 \
-f alsa -i plughw:1,0 \
-thread_queue_size 128 \
-f lavfi -i color=color=darkgray \
-c:v libx264 -pix_fmt yuv420p -b:v 100k \
-bufsize 512k \
-acodec libmp3lame -ar 44100 \
-threads 8 \
-b:a 128k \
-r 30 \
-s 1280x720 \
-f flv rtmp://a.rtmp.youtube.com/live2/KEY

The above gives me good audio and a solid gray background. I love it – for recording band practice or whatever. The breakthrough is that we can avoid wasting cpu cycles on processing input video and just use a color. Thanks Stackoverflow for the tip. Used bandwidth is about 150 kbps – basically nothing! The YouTube Dashboard complains:

OK Video output low
The stream's current bitrate (138.00 Kbps) is lower than the recommended bitrate. 
We recommend that you use a stream bitrate of 2500 Kbps.

But of course that is bogus because that assumes we are trying to put out a rich 1280×720 video, which we are not.

Then eventually YouTube has this complaint:

Bad Bad video settings
Please use a keyframe frequency of four seconds or less. Currently, keyframes are not being sent often enough, which will cause buffering. 
The current keyframe frequency is 8.5 seconds. Note that ingestion errors can cause incorrect GOP (group of pictures) sizes.

Yet the stream does not seem to suffer in any noticeable way from this problem.

For good measure, we add a few extra arguments that allow us to remove the keyframes warning. We need to set the -g parameter (group of pictures) to about twice our frame rate, plus, maybe, a no-scenecut argument. Here’s that version.

ffmpeg \
-thread_queue_size 4096 \
-f alsa -i plughw:1,0 \
-thread_queue_size 128 \
-f lavfi -i color=color=darkgray \
-c:v libx264 -pix_fmt yuv420p -g 60  -x264opts no-scenecut -b:v 150k \
-bufsize 512k \
-acodec libmp3lame -ar 44100 \
-threads 8 \
-b:a 128k \
-r 30 \
-s 1280x720 \
-f flv rtmp://a.rtmp.youtube.com/live2/KEY

Actual fps is 25, quality is 26 and bitrate is 145 kbps, but audio quality is good. I hear white noise in the background, but hey, this isn’t exactly professional equipment we’re working with. Still, this is a great solution for an audio-only recording that goes straight out to YouTube. Stability is also good.

The load average is high – 3.6 (use top to watch it) – almost all of it taken by ffmpeg. So it appears ffmpeg is really working it to produce this audio stream. That makes me suspect it just gets overwhelmed when it’s an audio + video stream, because I never did find settings which produced good quality for both…

Switch to WiFi and yet another problem surfaces
It seems that with this livestreaming project everything that should just work doesn’t! I had been doing all my testing using a wired Ethernet connection with WiFi disabled. Anticipating a portable solution, I tried it using WiFi and no Ethernet cable. And washboard audio reappeared. Quite often ffmpeg hung as well. I tried a zillion experiments and now my revelation is that essentially, though we tried to minimize and trivialize the video, we were probably still overwhelming the CPU. So I reasoned that these actions would ease the load on the CPU, without compromising the audio quality:

– reduce frame per second dramatically
– reduce key frames
– reduce video size

And…yes, these things in combination really did help and permit me to run over WiFi now. This version, put inside a script I call ffmpegwireless6.sh, looks like this:

#!/bin/sh
ffmpeg \
-thread_queue_size 4096 \
-f alsa -i plughw:1,0 \
-thread_queue_size 64 \
-f lavfi -i color=color=darkgray \
-c:v libx264 -pix_fmt yuv420p -g 18  -x264opts no-scenecut -b:v 50k \
-bufsize 512k \
-acodec libmp3lame -ar 44100 \
-threads 8 \
-b:a 128k \
-r 5 \
-s 480x320 \
-f flv rtmp://a.rtmp.youtube.com/live2/KEY

It doesn’t start consistently, however, but if you run it enough times it’ll go. So, to provide reliability, I also scripted around these deficiencies: I decided to just keep trying to start up ffmpegwireless6.sh until I have evidence it’s working. I call that script masterwireless.sh:

#!/bin/sh
# DrJ 5/2019
LOG="ff.log"`date +%m-%d-%y:%H:%M`
while /bin/true; do
 nohup ./ffmpegwireless6.sh</dev/null>$LOG 2>&1 &
 sleep 7
# want s.th like
#Frame=   84 fps= 11 q=16.0 size=      43kB time=00:00:07.50 bitrate=  47.1kbits/s dup=0 drop=431 speed=0.991x
#Frame=   84 fps= 11 q=16.0 size=      43kB time=00:00:07.50 bitrate=  47.1kbits/s dup=0 drop=431 speed= 1x
 FFOUT=`tail -1 $LOG`
 echo "last line is $FFOUT"
 KB=`echo $FFOUT|awk '{print $(NF-4)}'`
 echo "orig KB: $KB"
 KB=`echo $FFOUT|awk '{print $(NF-5)" "$(NF-4)}'|sed 's/kbits.*//'|awk '{print $NF}'`
 date
 echo "KB is: $KB"
 if [ $KB -gt 129 2>/dev/null ]; then
# let our master process exit - we've got a good audio stream
   echo "Exiting at *** "`date`
   exit
 else
# didn't work out: restart and try again
  echo "*** Restarting ffmpeg at *** "`date`
  pkill -9 -f 'ffmpeg '
 fi
done

And…it works great! Very briefly, what it does is call ffmpegwireless6.sh and background it, then test its output. It gives it a few seconds to get going, then kills it unless the observed streaming bandwidth is a healthy 135 kbps or so (the video takes essentially no bandwidth in ffmpegwireless6.sh).

Putting it all together – livestreaming audio stream to YouTube automatically upon boot up
So I want to drag this thing to a performance and have a confederate with minimal technical know-how start it up. So basically I want it to start livestreaming when the RasPi is powered up. To do that I made this crontab entry (using crontab -e):

@reboot sleep 20; /home/pi/masterwireless.sh > ff.log 2>&1

It takes a few minutes to get going, but it’s been extremely reliable. It started a stream successfully 10 times out of 10, at least when I was using my home WiFi connection. When I switched to my phone’s hotspot, I had one error out of five attempts. The one bad stream just would not start according to Youtube, although the stats from the log files showed the stream reached the usual good bandwidth. So I don’t know…

And once the stream starts, it is running uninterrupted for hours, anywhere from three to six hours.

Eventually I want to write an API program to automatically check the stream. But before then I may just introduce a refined script which checks the output and restarts ffmpeg when it has ended.

For the record, a typical ff.log file looks like this:

frame=   43 fps= 43 q=0.0 size=       0kB time=00:00:00.00 bitrate=N/A dup=0 drop=164 speed=   0x    ed=   0x
orig KB: dup=0
Tue  7 May 12:32:08 BST 2019
KB is: dup=0
*** Restarting ffmpeg at *** Tue 7 May 12:32:08 BST 2019
frame=  213 fps= 35 q=8.0 size=      47kB time=00:01:40.91 bitrate=   3.8kbits/s dup=0 drop=847 speed=16.7x
orig KB: 3.8kbits/s
Tue  7 May 12:38:53 BST 2019
KB is: 3
*** Restarting ffmpeg at *** Tue 7 May 12:38:53 BST 2019
illed=   86 fps= 14 q=8.0 size=     104kB time=00:00:06.21 bitrate= 136.7kbits/s dup=0 drop=336 speed=1.03x
orig KB: 136.7kbits/s
Tue  7 May 12:39:00 BST 2019
KB is: 136
Exiting at *** Tue 7 May 12:39:00 BST 2019

The other file, which has a name like ff.log05-07-19:12:32, looks more like this:

ffmpeg version 3.2.12-1~deb9u1+rpt1 Copyright (c) 2000-2018 the FFmpeg developers
  built with gcc 6.3.0 (Raspbian 6.3.0-18+rpi1+deb9u1) 20170516
  configuration: --prefix=/usr --extra-version='1~deb9u1+rpt1' --toolchain=hardened --libdir=/usr/lib/arm-linux-gnueabihf -
-incdir=/usr/include/arm-linux-gnueabihf --enable-gpl --disable-stripping --enable-avresample --enable-avisynth --enable-gn
utls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libebur
128 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --ena
ble-libmp3lame --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-
libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --ena
ble-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxvid --enable-libzmq --enab
le-libzvbi --enable-omx --enable-omx-rpi --enable-mmal --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --e
nable-libiec61883 --arch=armhf --enable-chromaprint --enable-frei0r --enable-libopencv --enable-libx264 --enable-shared
  libavutil      55. 34.101 / 55. 34.101
  libavcodec     57. 64.101 / 57. 64.101
  libavformat    57. 56.101 / 57. 56.101
  libavdevice    57.  1.100 / 57.  1.100
  libavfilter     6. 65.100 /  6. 65.100
  libavresample   3.  1.  0 /  3.  1.  0
  libswscale      4.  2.100 /  4.  2.100
  libswresample   2.  3.100 /  2.  3.100
  libpostproc    54.  1.100 / 54.  1.100
Guessed Channel Layout for Input Stream #0.0 : stereo
Input #0, alsa, from 'plughw:1,0':
  Duration: N/A, start: 1557229134.030863, bitrate: 1536 kb/s
    Stream #0:0: Audio: pcm_s16le, 48000 Hz, stereo, s16, 1536 kb/s
Input #1, lavfi, from 'color=color=darkgray':
  Duration: N/A, start: 0.000000, bitrate: N/A
    Stream #1:0: Video: rawvideo (I420 / 0x30323449), yuv420p, 320x240 [SAR 1:1 DAR 4:3], 25 tbr, 25 tbn, 25 tbc
[libx264 @ 0x12db850] VBV maxrate unspecified, assuming CBR
[libx264 @ 0x12db850] using SAR=8/9
[libx264 @ 0x12db850] using cpu capabilities: ARMv6 NEON
[libx264 @ 0x12db850] profile High, level 2.1
[libx264 @ 0x12db850] 264 - core 148 r2748 97eaef2 - H.264/MPEG-4 AVC codec - Copyleft 2003-2016 - http://www.videolan.org/
x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_ran
ge=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=8 lookahead_threads=1 sl
iced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 di
rect=1 weightb=1 open_gop=0 weightp=2 keyint=18 keyint_min=1 scenecut=0 intra_refresh=0 rc_lookahead=40 rc=cbr mbtree=1 bit
rate=50 ratetol=1.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 vbv_maxrate=50 vbv_bufsize=512 nal_hrd=none filler=0 ip_ratio=1.40
 aq=1:1.00
Output #0, flv, to 'rtmp://a.rtmp.youtube.com/live2/KEY
  Metadata:
    encoder         : Lavf57.56.101
    Stream #0:0: Video: h264 (libx264) ([7][0][0][0] / 0x0007), yuv420p, 480x320 [SAR 8:9 DAR 4:3], q=-1--1, 50 kb/s, 5 fps
, 1k tbn, 5 tbc
    Metadata:
      encoder         : Lavc57.64.101 libx264
    Side data:
      cpb: bitrate max/min/avg: 0/0/50000 buffer size: 512000 vbv_delay: -1
    Stream #0:1: Audio: mp3 (libmp3lame) ([2][0][0][0] / 0x0002), 44100 Hz, stereo, s16p, 128 kb/s
    Metadata:
      encoder         : Lavc57.64.101 libmp3lame
Stream mapping:
  Stream #1:0 -> #0:0 (rawvideo (native) -> h264 (libx264))
  Stream #0:0 -> #0:1 (pcm_s16le (native) -> mp3 (libmp3lame))
Press [q] to stop, [?] for help
frame=   69 fps= 27 q=8.0 size=      45kB time=00:00:02.820 bitrate= 138.6kbits/s dup=0 drop=256 speed= 1.1x
frame=   79 fps= 17 q=2.0 size=      79kB time=00:00:04.80 bitrate= 134.6kbits/s dup=0 drop=308 speed=1.04x
frame=   91 fps= 13 q=8.0 size=     112kB time=00:00:06.80 bitrate= 134.8kbits/s dup=0 drop=348 speed=1.04x
frame=  101 fps= 11 q=8.0 size=     153kB time=00:00:09.22 bitrate= 135.0kbits/s dup=0 drop=388 speed=1.03x
frame=  112 fps= 10 q=3.0 size=     186kB time=00:00:11.40 bitrate= 133.8kbits/s dup=0 drop=440 speed=1.02x
av_interleaved_write_frame(): Broken pipe time=05:28:03.40 bitrate= 134.2kbits/s dup=0 drop=393880 speed=   1x
etc.
    Last message repeated 1 times
Error writing trailer of rtmp://a.rtmp.youtube.com/live2/KEY: Broken pipeframe=98474 fps=5.0 q=-1.0 Lsize=  322492kB time=05:28:14.00 bitrate= 134.1kbits/s dup=0 drop=393888 speed=0.998x
video:2213kB audio:306620kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 4.422897%
[libx264 @ 0x125c850] frame I:5471  Avg QP: 0.00  size:    80
[libx264 @ 0x125c850] frame P:27354 Avg QP: 0.00  size:    25
[libx264 @ 0x125c850] frame B:65649 Avg QP: 0.00  size:    17
[libx264 @ 0x125c850] consecutive B-frames: 11.1%  0.0%  0.0% 88.9%
[libx264 @ 0x125c850] mb I  I16..4: 100.0%  0.0%  0.0%
[libx264 @ 0x125c850] mb P  I16..4:  0.0%  0.0%  0.0%  P16..4:  0.0%  0.0%  0.0%  0.0%  0.0%    skip:100.0%
[libx264 @ 0x125c850] mb B  I16..4:  0.0%  0.0%  0.0%  B16..8:  0.0%  0.0%  0.0%  direct: 0.0%  skip:100.0%
[libx264 @ 0x125c850] 8x8 transform intra:0.0%
[libx264 @ 0x125c850] coded y,uvDC,uvAC intra: 0.0% 0.0% 0.0% inter: 0.0% 0.0% 0.0%
[libx264 @ 0x125c850] i16 v,h,dc,p: 95%  0%  5%  0%
[libx264 @ 0x125c850] i8c dc,h,v,p: 100%  0%  0%  0%
[libx264 @ 0x125c850] Weighted P-Frames: Y:0.0% UV:0.0%
[libx264 @ 0x125c850] kb/s:0.92
Conversion failed!

CPU load average is around 1 or so – much less than before. So I think my ideas are on the right track. Why send 30 frames or whatever each and every second to Youtube just to display a gray screen? The CPU has to work to do that. As long as ffmpeg + Youtube have the intelligence to paste together audio snippets 1/5th of a second in length, five times each second, the audio should be taken care of – we’re not playing with the sampling rate or anything. That, at least, is how I reasoned. Key frames are some sort of overhead as well, since they’re extra things ffmpeg has to periodically do. Youtube wants one at least every four seconds. We get really close to that limit by multiplying fps * 3.6 s = 5 * 3.6 = 18 for our group-of-pictures (g) parameter. Previously we were sending a key frame more frequently – every two seconds.

Unreliability
Running this command is still hit-or-miss. As often as not it hangs, and if it does not hang, as often as not it outputs washboard audio. You just <Ctrl-C> to get out of it if it hangs, or type “q” if it is producing washboard audio.

Note carefully the bandwidth being used, which ffmpeg reports every second. If it is < 128 kbps, you’re hosed and have washboard audio. If it’s about 135 kbps or higher, you’re good. You don’t even need to waste time fiddling with Youtube’s live_dashboard to listen to it. You get this feedback immediately from ffmpeg. And I intend to use these same observed behaviors to script around ffmpeg’s flakiness and keep restarting it automatically until it is producing a good quality audio stream!

Improved startup
This script, which I call continuousaudio.sh, has some debugging at the beginning, then loops to ensure there is always an audio stream being live-streamed as long as the Pi has power. It has been extremely reliable. I settled on this one for my own purposes.

#!/bin/sh
# drJ 5/2019
sleep 20
LOG="ff.log"`date +%m-%d-%y:%H:%M`
# some info for debugging problems
echo "***********"
date; ip add; ping -c2 8.8.8.8; lsusb
nohup ./ffmpegwireless6.sh</dev/null>$LOG 2>&1 &
while /bin/true; do
 sleep 7
# want s.th like
#Frame=   84 fps= 11 q=16.0 size=      43kB time=00:00:07.50 bitrate=  47.1kbits/s dup=0 drop=431 speed=0.991x
#Frame=   84 fps= 11 q=16.0 size=      43kB time=00:00:07.50 bitrate=  47.1kbits/s dup=0 drop=431 speed= 1x
 FFOUT=`tail -1 $LOG`
 echo "last line is $FFOUT"
 KB=`echo $FFOUT|awk '{print $(NF-5)" "$(NF-4)}'|sed 's/kbits.*//'|awk '{print $NF}'`
 echo "orig KB: $KB"
 KB=$(echo $KB|sed s/\\..*//)
 date
 echo "KB is: $KB"
 if [ $KB -gt 129 2>/dev/null ]; then
# stream looks good - do nothing
   echo -n ""
 else
# didn't work out: restart and try again
  echo "*** Restarting ffmpeg at *** "`date`
  pkill -9 -f 'ffmpeg '
  nohup ./ffmpegwireless6.sh</dev/null>$LOG 2>&1 &
 fi
done

Note it still calls ffmpegwireless6.sh, which I believe I have provided above.

ff.log now looks like this:

**********
Fri 31 May 01:10:59 BST 2019
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN group default qlen 1000
    link/ether b8:27:eb:11:fc:06 brd ff:ff:ff:ff:ff:ff
3: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether b8:27:eb:44:a9:53 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.170/24 brd 192.168.1.255 scope global wlan0
       valid_lft forever preferred_lft forever
    inet6 fe80::1119:b46a:cb69:63c9/64 scope link
       valid_lft forever preferred_lft forever
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=56 time=14.6 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=56 time=17.4 ms
 
--- 8.8.8.8 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 14.671/16.065/17.460/1.400 ms
Bus 001 Device 004: ID 046d:0825 Logitech, Inc. Webcam C270
Bus 001 Device 005: ID 0424:7800 Standard Microsystems Corp.
Bus 001 Device 003: ID 0424:2514 Standard Microsystems Corp. USB 2.0 Hub
Bus 001 Device 002: ID 0424:2514 Standard Microsystems Corp. USB 2.0 Hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
last line is frame=   19 fps=0.0 q=0.0 size=       0kB time=00:00:00.00 bitrate=N/A dup=0 drop=69 speed=   0x    ^Mframe=   39 fps= 39 q=0.0 size=       0kB time=00:00:00.00 bitrate=N/A dup=0 drop=150 speed=   0x    ^M
orig KB: dup=0
Fri 31 May 01:11:07 BST 2019
KB is: dup=0
*** Restarting ffmpeg at *** Fri 31 May 01:11:07 BST 2019
last line is frame=  193 fps= 35 q=8.0 size=     100kB time=00:00:27.60 bitrate=  29.6kbits/s dup=0 drop=764 speed=4.99x    ^Mframe=  195 fps= 32 q=8.0 size=     108kB time=00:00:28.03 bitrate=  31.4kbits/s dup=0 drop=775 speed=4.65x    ^M
orig KB: 31.4
Fri 31 May 01:11:36 BST 2019
KB is: 31
*** Restarting ffmpeg at *** Fri 31 May 01:11:36 BST 2019
...

My crontab now looks like this:

@reboot /home/pi/continuousaudio.sh > ff.log 2>&1

Portability
I wanted to record a practice session in my house where no Ethernet port is available (hence I had to get WiFi working, which I believe I have). And I wanted convenience – to not worry about being tethered to the wall by an adapter. So I decided to look for an economical power solution for Raspberry Pi. And I found the ones purpose-built are just too expensive to justify. Pijuice, I’m talking about you. So, really, I realized any old portable USB power stick would work. But I wanted something which could last hours. This Omars 10000 mAh portable USB charger seemed like it would do the trick. $16. And it did. It works great! Two hours later, the LEDs show three bars instead of four, so I think this will supply power for about 8 – 9 hours if I pushed it. And it has the form factor of a smartphone. Ideally I’d want a little on/off switch to avoid plugging/unplugging the power cable, but I didn’t find that as of yet. Maybe there’s a cheap USB cable with that…?

So now I’m not tethered by Ethernet cables nor by a power plug. See where this is progressing? If I use my smartphone’s hotspot I should be able to livestream anywhere I can get a signal, so, for instance, at band performances. I haven’t tried that yet, but I’m hopeful…

YouTube quirks
As previously mentioned (I think), you need to be enabled for livestreaming. The approval takes about 24 hours. I suppose they check to make sure you aren’t a perceived threat.

Recording NPR will get you a copyright violation flag! This has happened to me more than once – I think because they play snippets of new music, which get flagged.

Lag. I’ve seen lag time as short as four seconds and maybe as long as 20 seconds or so. It is never instantaneous.

My longest video was 20 hours, but the processing took days – in fact I’m not sure it ever completed. So I guess the service falls apart for videos longer than, I don’t know, maybe 12 hours or so. If the desire is to have a continuous security webcam, you’ll have to break the stream into chunks. That’s what I’m thinking about next.
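
One untested idea for the chunking: ffmpeg’s -t output option stops the encode after the stated duration, so a wrapper loop would start a fresh livestream – and hence a fresh video – every few hours. A rough sketch, assuming a "-t 21600" (six-hour) cap gets added to the ffmpeg invocation inside ffmpegwireless6.sh:

#!/bin/sh
# hypothetical chunking loop: each pass becomes its own YouTube video
while true; do
# ffmpegwireless6.sh would need ffmpeg's "-t 21600" added so it exits after six hours
 ./ffmpegwireless6.sh
# give YouTube time to declare the old stream ended so the next pass gets a fresh video ID
 sleep 180
done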

A livestream gets converted to a video by YouTube. That takes a while – maybe as long as the video itself? It slaps a date and time onto the video, which you see in your video manager. Unfortunately, with this ffmpeg streaming method it chooses Pacific Standard Time as the timezone, and I don’t see a simple way to change that. It may require use of the API, which is beyond what I’m willing to tackle right now. So for me in the Eastern time zone, all the timestamps are off by three hours, which is kind of annoying.

I wondered, does my livestream ID remain constant, or will it change from broadcast to broadcast? This is important for future use of the API. Well, it changes each time I start a new livestream, even though I use a single (my own) account. Each livestream gets a unique ID which then becomes the ID for the DVR of the video which you can view on-demand. And this ID is the part that changes in the URL of an “unpublished” Youtube video. Say your unpublished livestream is
https://www.youtube.com/watch?v=r1wtZwQ-Tk8.
The part of the URL following the v= – in this example, r1wtZwQ-Tk8 – is the ID of that video. I would say YouTube tries to be somewhat robust and will not declare your stream ended until maybe 30 seconds after you have stopped your program. Or maybe it’s a minute or two; I’m not really sure. But I’ve seen that if you restart the streaming quickly enough you’ll be put back onto the same livestream. If on the other hand you wait long enough to see the stream ended message in live_dashboard, then it will assign you a new video ID when you start your stream again – and don’t forget to reload the live_dashboard page so it can pick up the new ID.
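
If you ever need to peel that ID out of a watch URL in a script, shell parameter expansion does it; a trivial sketch:

url="https://www.youtube.com/watch?v=r1wtZwQ-Tk8"
id=${url##*v=}   # strip everything through "v="
echo $id         # r1wtZwQ-Tk8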

Can you pause a livestream, and later resume, keeping the same URL? In a word, no. Unfortunately, YouTube livestreaming is pretty limited in this way. And how useful would that be? I would use my smartphone to control ffmpeg on my Raspberry Pi to pause our band practice during our lengthy chat breaks, keeping the stream focused on the music. But no… not possible.

Logitech webcam quirks
When you pull both video and audio from your Logitech webcam the usage LED illuminates as you’d expect. However, when you’re pulling just the audio, as I show above, that LED does not illuminate, yet it is being used to record all the sounds in its vicinity. I guess I have accidentally and unintentionally stumbled upon a stealth mode, which is a little disconcerting.

Yeti USB microphone quirks
A Yeti mic is extremely sensitive and, in my opinion, more suited to conversation than music recording. Even with the gain all the way down (a must), a loud sound often distorts. I felt the omni recording mode was the worst in this regard; stereo mode tolerated loud sounds better. But if you want to pick up every little sound, the Yeti is great. More importantly to me, it just worked with the USB settings I used for the Logitech – I didn’t have to change a single thing in the way I used ffmpeg.
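
By the way, if you need to confirm which ALSA card a USB mic landed on – the card number is what feeds the hw: input argument to ffmpeg – arecord from the alsa-utils package lists the capture devices (the sample output line below is illustrative):

arecord -l
# card 1: Microphone [Yeti Stereo Microphone], device 0: USB Audio [USB Audio]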

Testing if the livestream is still running
My idea is to use the YouTube API to periodically test whether the livestream is still working. I have read that it can go down for various reasons, and there is no good way from within ffmpeg itself to tell that your stream is no longer live! Testing the livestream using the Google developer API will make for a good project – and a separate post, if I ever get it working. If the stream is found to be down, the Pi could restart ffmpeg, in my thinking.
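
For the record, here is the kind of check I have in mind, sketched with curl against the YouTube Data API v3. Untested; the API key is a placeholder, and snippet.liveBroadcastContent is documented to read "live" during a broadcast and "none" once it has ended:

#!/bin/sh
# hypothetical liveness probe for a YouTube livestream
APIKEY=your-api-key-here
VIDEOID=r1wtZwQ-Tk8
state=$(curl -s "https://www.googleapis.com/youtube/v3/videos?part=snippet&id=$VIDEOID&key=$APIKEY" | grep -o '"liveBroadcastContent": *"[a-z]*"')
case "$state" in
 *'"live"') echo "stream is up" ;;
 *) echo "stream appears down - this is where the Pi would restart ffmpeg" ;;
esac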

To do list
I never really perfected the video; the audio I got down pretty well.
I will borrow my friend’s Yeti USB mic to see how my audio stream works with a high quality microphone. DONE.
I would like to have a simple external control to turn stream off/ on, whether it is physical or virtual. DONE – see references.
Scripting to monitor stream and restart it once it fails – to have a recording 24×7 like an audio-only security camera. DONE – continuousaudio.sh as documented above.
Pause feature. PARTIALLY DONE.

Conclusion
A Raspberry Pi 3 running Raspbian Stretch Lite is used, along with a Logitech USB webcam, to livestream to YouTube. I showed how to stream video-only with a silent audio track. Then I turned it around and spent most of my time putting a virtual piece of tape over the lens and doing an audio-only livestream. This, after a crap-load of testing and tweaking, eventually began to work in a reliable fashion. Then I showed how to launch the audio-only livestream upon power-up of the Ras Pi.

Since it is a Raspberry Pi, this whole thing lends itself to portability and interesting use cases. With a $16 portable USB battery source and your own hotspot, you can stream (audio at least) from anywhere you have a 4G cell signal – good for recording a banquet, your band performance, or any other long, live event.

I spoke about some of the many quirks of YouTube which are relevant to this project.

References and related
Where I debug YouTube’s messages: https://www.youtube.com/live_dashboard. YouTube’s links have me confused, but if you’re trying to produce a livestream, this live dashboard page is the one to watch to check the stream’s quality as YouTube judges it.

Fishcam implemented with Raspberry Pi + webcam + help of my AWS server.

One of my test videos: https://youtu.be/oxJaZv0frGM

Check your upload bandwidth: speedcheck.org.

Microcenter in Paterson, NJ – best to visit in person, or so I have been told.

My livestream is https://www.youtube.com/watch?v=r1wtZwQ-Tk8

Put virtual tape over your lens by using this tip discussed on Stack Overflow!

Portable, proven (by me) economical USB power supply for your Raspberry Pi – $16.

Economical on/off switch for your Raspberry Pi. This is a great way to stop having to pull out/push in power connectors from your micro USB power source. $10 gets you a four-pack! https://smile.amazon.com/iUniker-Raspberry-Switch-Supply-MicroUSB/dp/B07CTHKXDW/ref=sr_1_2_sspa?keywords=raspberry+pi+on+off+switch&qid=1559477662&s=gateway&sr=8-2-spons&psc=1


How to add private root CAs in SLES or Redhat

Intro
From time to time I run my own PKI infrastructure, namely issuing my own certificates from my private root CA. I wanted this root CA to be recognized by Linux utilities running on Suse Linux (SLES) – in particular lftp, which I was trying to use to access an ftps site, which is itself a post for another day.

The details
Let’s say you have your root certificate in the standard form like this example

-----BEGIN CERTIFICATE-----
MIIIPzCCBiegAwIBAgITfgAAAATHCoXJivwKLQAAAAAABDANBgkqhkiG9w0BAQsF
ADA2MQswCQYDVQQGEwJERTENMAsGA1UEChMEQkFTRjEYMBYGA1UEAxMPQkFTRiBS
b290IENBIDIxMB4XDTE3MDgxMDEyNDAwOFoXDTI4MDgxMDEyNTAwOFowXDETMBEG
...
PEScyptUSAaGjS4JuxsNoL6URXYHxJsR0bPletSct
-----END CERTIFICATE-----

Then you can put the certificate inline in a script and install it so that it permanently joins the other root CAs in /etc/ssl/certs, like this example:

DrJ_Root_CA="-----BEGIN CERTIFICATE-----\nMIIIPzCCBiegAwIBAgITfgAAAATHCoXJivwKLQAAAAAABDANBgkqhkiG9w0BAQsF\nADA2MQswCQYD
VQQGEwJERTENMAsGA1UEChMEQkFTRjEYMBYGA1UEAxMPQkFTRiBS\nb290IENBIDIxMB4XDTE3MDgxMDEyNDAwOFoXDTI4MDgxMDEyNTAwOFowXDETMBEG\nCgm
SJomT8ixkARkWA05FVDEUMBIGCgmSJomT8ixkARkWBEJBU0YxFjAUBgoJkiaJ\nk/IsZAEZFgZCQVNGQUQxFzAVBgNVBAMTDkJBU0YgU1VCIENBIDIzMIICIjAN
Bgkq\nhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAqrfoKxrCPCw/u2PBEaAwW/VHLxBw6JNi\n42F3EhXmligGb/Uu4kcWO016IGFatVrPhdAtShAqmTXis0w57hW
jn1Iptvo7rROY\nGPmH7aSW/fYM/x2Lln7NlltayXspWawqBzWzYGADodyjn/Z5TaLYaG8lajiabCM5\nUJDhlZ/SUR3xylqIIFaQK3k2twjeGoxobhbr9hJcQZ
fXF0V5FCSCzJExDYma6bs1\nZtyqP/yHaiOeWXGdnqM9EPfT8kmIC42ZXq7s2JZI5OUflJBbaebYEbuDad6Rh19E\nRchXABLe68+TF/4AZCw16iRwRgq/2Re2W
WPMtVomyZ2txvn51iizqBkdVGzIRklC\n3yIv5MRzDFTfG940/tSAomHsz+RdGbL+NCBeWSY+rnJQdExJ7bLXFLVsTNGL68lP\nMuYrkxYQKWRtVhvQCHsdd5E0
t9QR4iY1JLWQxq3GHy98tBbCGiKMpBbuj/9I/E6c\nGrikouv2QyNnCN34PXpUxTQmDj5LZGV9w2faqpwUBD2ZWsbyVSgvD8TcjdxzcMcj\nLBnYUaZ8wHFqUj2
DBahctfKQxA8Ptrzt1mDIGOQliZGDwrTVMECd+noQhTlF1eS+\nvNraV3dYRMymVxh58MPEaDJgwIRcBWAAOeBbZlyx76oskXdmjOiz5jqyoR5eweCE\ntS4jfM
EW6UECAwEAAaOCAx4wggMaMAsGA1UdDwQEAwIBhjAQBgkrBgEEAYI3FQEE\nAwIBADAdBgNVHQ4EFgQUdn7nwFGpb8uzpFVs5QWQcsA0Q6IwQwYDVR0gBDwwOjA
4\nBgwrBgEEAYGlZAMCAgEwKDAmBggrBgEFBQcCARYaaHR0cDovL3BraXdlYi5iYXNm\nLmNvbS9jcAAwGQYJKwYBBAGCNxQCBAweCgBTAHUAYgBDAEEwEgYDVR
0TAQH/BAgw\nBgEB/wIBADAfBgNVHSMEGDAWgBSS9auUcX38rmNVmQsv6DKAMZcmXDCCAQkGA1Ud\nHwSCAQAwgf0wgfqggfeggfSGgbZsZGFwOi8vL0NOPUJBU
0YlMjBSb290JTIwQ0El\nMjAyMSxDTj1DRFAsQ049UHVibGljJTIwS2V5JTIwU2VydmljZXMsQ049U2Vydmlj\nZXMsQ049Q29uZmlndXJhdGlvbixEQz1yb290
LERDPWJhc2YsREM9Y29tP2NlcnRp\nZmljYXRlUmV2b2NhdGlvbkxpc3Q/YmFzZT9vYmplY3RDbGFzcz1jUkxEaXN0cmli\ndXRpb25Qb2ludIY5aHR0cDovL3B
raXdlYi5iYXNmLmNvbS9yb290Y2EyMS9CQVNG\nJTIwUm9vdCUyMENBJTIwMjEuY3JsMIIBNgYIKwYBBQUHAQEEggEoNIIBJDCBuQYI\nKwYBBQUHMAKGgaxsZG
FwOi8vL0NOPUJBU0YlMjBSb290JTIwQ0ElMjAyMSxDTj1B\nSUEsQ049UHVibGljJTIwS2V5JTIwU2VydmljZXMsQ049U2VydmljZXMsQ049Q29u\nZmlndXJhd
GlvbixEQz1yb290LERDPWJhc2YsREM9Y29tP2NBQ2VydGlmaWNhdGU/\nYmFzZT9vYmplY3RDbGFzcz1jZXJ0aWZpY2F0aW9uQXV0aG9yaXR5MGYGCCsGAQUF\n
BzAChlpodHRwOi8vcGtpd2ViLmJhc2YuY29tL3Jvb3RjYTIxL1JPT1RDQTIxLnJ6\nLWMwMDctajY1MC5iYXNmLWFnLmRlX0JBU0YlMjBSb290JTIwQ0ElMjAyM
S5jcnQw\nDQYJKoZIhvcNAQELBQADggIBAClCvn9sKo/gbrEygtUPsVy9cj9UOQ2/CciCdzpz\nXhuXfoCIICgc0YFzCajoXBLj4V6zcYKjz8RndaLabDaaSQgj
phXFiZSBH8OII+cp\nTCWW1x+JElJXo9HB7Ziva2PeuU5ajXtvql5PegFYWdmgK2Q1QH0J2f1rr7B4nNGu\noyBi1TOSll+0yJApjx213lM9obt6hkXkjeisjcq
auMVh+8KloM0LQOTAD1bDAvpa\nVVN9wlbytvf4tLxHpvrxEQEmVtTAdVchuQV1QCeIbqIxW41l6nhE2TlPwEmTr+Cv\najMID/ebnc9WzeweyTddb6DSmn4mSc
okGpj8j8Z7cw173Yomhg1tEEfEzip+/Jx6\nd2qblZ9BUih9sHE8rtUBEPLvBZwr2frkXzL3f8D6w36LxuhcqJOmDaIPDpJMH/65\nAbYnJyhwJeGUbrRm3zVtA
5QHIiSHi2gTdEw+9EfyIhuNKS4FO/uonjJJcKBtaufl\nGFL6y0WegbS5xlMV9RwkM22R7sQkBbDTr+79MqJXYCGtbyX0JxIgOGbE4mxvdDVh\nmuPo9IpRc5Jl
pSWUa7HvZUEuLnUicRbfrs1PK/FBF7aSrJLoYprHPgP6421pl08H\nhhJXE9XA2aIfEkJ4BcKw0BqOP/PEScyptUSAaGjS4JuxsNoL6URXYHxJsR0bPlet\nSct
3\n-----END CERTIFICATE-----\n"
 
cd /etc/pki/trust/anchors/
# quote the variable so the PEM contents are preserved verbatim; -e interprets the \n escapes
echo -e -n "$DrJ_Root_CA" > DrJ_Root_CA.pem
# create the subject-hash symlink and rebuild the system certificate store
c_rehash
update-ca-certificates

So the key commands are c_rehash and update-ca-certificates.

Usually SLES is similar to Redhat. But it seems to be different in this case.

This was tested on a SLES 12 SP3 system.

It copies the certificate to /etc/pki/trust/anchors, which by itself is insufficient. c_rehash then creates a symlink to the CA file named after its subject hash, and update-ca-certificates rebuilds the system certificate store so that this new certificate doesn’t get wiped out by subsequent system patching. That’s the purpose of those two commands.

You may also see these hashes and certificates in /etc/ssl/certs. I’m not sure because that’s where I started with all this. But merely dropping the private root CA into /etc/ssl/certs is insufficient, I can say from experience!
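
If you’re curious, you can compute the hash openssl uses for those symlink names and look for it yourself. A quick sketch; the hash value shown is made up:

# print the subject hash of your root CA
openssl x509 -in /etc/pki/trust/anchors/DrJ_Root_CA.pem -noout -subject_hash
# suppose it prints 3a87d22e - then look for the matching symlink
ls -l /etc/ssl/certs | grep 3a87d22e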

Redhat
Redhat is better documented, but for completeness I include it here. You have your inline certificate as in the SLES script, then following that:

...
cd /etc/pki/ca-trust/source/anchors/
# again, quote the variable so the PEM contents are preserved verbatim
echo -e -n "$DrJ_Root_CA" > DrJ_Root_CA.pem
update-ca-trust

So update-ca-trust is the key command for Redhat Linux. This was tested on Redhat Linux v 7.6.
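
To sanity-check that the CA really made it into the trust store on Redhat, something like the following should do. The grep string and the certificate path are placeholders for whatever matches your setup:

# list the trust store entries and look for your CA's common name
trust list | grep -i drj
# or verify a certificate that your private CA signed against the system store
openssl verify /path/to/cert-signed-by-DrJ_Root_CA.pem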

lftp usage tip with a private CA
If like me you were doing this work in conjunction with running ftps using a certificate signed by a private CA, and want your ftp client, lftp, to not complain about the unrecognized CA, then this tip will help.

After initiating your lftp session and sending the username and password, you can issue this command:
set ssl:ca-file <path-to-your-private-CA-file>
lftp is so flexible it offers many other ways to do this as well. But this is the one I use.
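
A sample session might look like this (the hostname and account are made up):

$ lftp -u myuser ftps://ftps.example.com
Password:
lftp myuser@ftps.example.com:~> set ssl:ca-file /etc/pki/trust/anchors/DrJ_Root_CA.pem
lftp myuser@ftps.example.com:~> ls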

Conclusion
We showed how to add your own root CA to a SLES 12 system. I did not find a good reference for this information anywhere on the Internet.

References and related
My favorite openssl commands.

The basics of working with cipher settings

For the Redhat/CentOS approach I used this blog post on the proper way to add your own private CA: https://www.happyassassin.net/2015/01/14/trusting-additional-cas-in-fedora-rhel-centos-dont-append-to-etcpkitlscertsca-bundle-crt-or-etcpkitlscert-pem/
