Categories
Linux Raspberry Pi

Automated YouTube video uploading from Raspberry Pi without using the YouTube API

2021 Intro

This post promises more than it actually delivers, ha, ha. It is squarely aimed at mid-level to advanced RPi enthusiasts. Most who read this will get discouraged and look for another solution. I did the same, in fact, and I will review my failures with the alternatives.

The essence of this approach is screen automation with a very nice tool called xdotool. For me it works. It will definitely, 100% require some tweaks for anyone else. This is not a case of running a few installs, copying some code, and you're good to go. But if you have the patience, you will be rewarded with fully or at least semi-automated video uploads to YouTube from your Raspberry Pi.

One caveat. Please obey YouTube’s terms of service. In other words, don’t abuse this! As soon as someone starts using this method in an aggressive or abusive fashion, we will all lose this capability. They have crack security experts and could squelch this approach in a heartbeat.

I actually don't have all the pieces in place for myself, but I have enough cool stuff that I wanted to begin to share my findings.

One beautiful thing about what I'm going to show is that you get to see the cursor moving about the screen in response to your automated commands – you see exactly what it's doing, which screens it's clicking through, etc. So if there's an issue – say YouTube changes its layout – you'll most likely know how to adapt.

What's wrong with using YouTube's API?

Plenty. It used to be feasible. It certainly would make all our lives a lot easier. But YouTube is not a charity. They have squeezed out the little guy by making the barrier to entry so high that it's really only surmountable by highly determined IT folks. It's just too difficult to figure out all the needed screens, etc., and the help guides mostly refer to older API versions where things were different. YouTube clamped down in July 2020 on who or what can use their API. There are a lot of old HowTos pre-dating that which will just lead you to dead-ends. So, go ahead, I dare you to stop reading this, use the API, and report back. Maybe you manage to create a project, great, and an API key, great, and even to assign your API key the correct YouTube-specific permissions – all great – and associate credentials – super – and finally borrow someone's code to upload a video – been there, done that. That video will be listed private. So then you root around to see what you have to do to make it public. Ah, a project review. Great. You were only in test mode. So now you're confronted with this form. If they cared about the little guy there would be a radio button – "I only wish to upload a few videos a week for a small cadre of users, spare me the bureaucracy" – and that'd be it. But, no… Are you applying for a quota? Huh? I just want to upload a video and have it marked as unlisted. Some users remarked that they filled out the form, never got their project reviewed, and never heard back. Maybe they're the exception, I don't know. It's just over the top for me, so I give up.

OK. So, maybe YoutubeUploader?

Nope. Doesn’t work. It’s based on the old stuff.

OK. What about that guy’s api-less Node.js uploader?

Maybe. I could not get it to work on the RPi, but I didn't try super hard. Frankly, I just like rolling my own, and my approach is much more transparent. At least that project inspired me to imagine the approach I am about to share, because I believe the Node.js guy is just doing screen scraping – except you can't even see the screens.

Or simply do a Livestream?

Agreed. Livestreaming is quite straightforward compared with what I developed; my blog post about one-click livestreaming covers it. But I have not had good results with reliability: as often as it works, it doesn't. With this new approach I'm going to try to create separate steps so that if anything goes wrong, an individual step can be re-run. Another advantage of separating steps is that a recording can be done "in the field," without WiFi access. Remember, an RPi 3 works great for hours with a decent portable USB battery of the sort normally used for phones. Then the resulting recording can be converted to video and uploaded once the RPi is back on its usual WiFi SSID.

Preliminary upload, October 2021

In the video below, the right screen is a terminal window showing what the script is doing. It needs some tweaking, and the YouTube window gets stuck, so it's not showing some of the screens. But it's already totally awesome – and it worked!

Watch an actual video get uploaded
Code for the above

#!/bin/sh
# automate upload of YouTube videos
#
# define some functions
randomsleep(){
# sleep random amount between 1.5 to 2.5 seconds
t10=$(shuf -n1 -i 15-25)
t=$(echo $t10/10|bc -l)
sleep $t
}
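# drjtool: run an xdotool command sandwiched between random pauses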
drjtool(){
randomsleep
xdotool $1 $2 $3
randomsleep
}

echo Start video upload
echo set display to main display
export DISPLAY=:0
# launch chromium
echo launch chromium
chromium --kiosk https://studio.youtube.com/ > /dev/null 2>&1 &
# remember chromium's PID so we can kill it cleanly at the end
chromiumpid=$!
sleep 20
echo move to CREATE button
drjtool mousemove 579 19
echo click on CREATE button
drjtool click 1
echo move to Upload videos
drjtool mousemove 577 34
echo click Upload videos
drjtool click 1
echo move to SELECT FILES
drjtool mousemove 305 266
echo click on SELECT FILES
drjtool click 1
echo move mouse to Open button
drjtool mousemove 600 396
echo click open and pause a bit for video upload
drjtool click 1
sleep 20
echo "mouse to NEXT button (accept defaults)"
drjtool mousemove 558 386
echo click on NEXT
drjtool click 1
echo move to radio button No it is not made for kids
drjtool mousemove  117 284
echo click radio button
drjtool click 1
echo back to NEXT button
drjtool mousemove 551 384
echo click NEXT
drjtool click 1
echo 'click NEXT again (then says no copyright issues found)'
drjtool click 1
echo click NEXT again
drjtool click 1
echo move to Unlisted visibility radio button
# [note that public would be drjtool mousemove 142 235, private is 142 181]
drjtool mousemove 142 208
echo click Unlisted
drjtool click 1
echo move to copy icon
drjtool mousemove 532 249
echo copy URL to clipboard
drjtool click 1
echo move to Save
drjtool mousemove 551 384
echo click Save
drjtool click 1
echo move to CLOSE
drjtool mousemove 434 270
echo click close
drjtool click 1

echo video URL
xsel -b|tee clipboard
sleep 5
echo kill chromium
# %1 job notation is unreliable in a non-interactive script, so use the saved PID
kill -9 $chromiumpid
url=$(cat clipboard|xargs -0 echo)
echo url is $url

I call the script auto-test.sh, just to give it a name.

Ingredients
  • RPi 3 or RPi 4
  • Raspberry OS with full GUI and autologin set up
  • TigerVNC, i.e., the package tigervnc-scraping-server
  • chromium browser – but I think that comes with the GUI install
  • xdotool (apt-get install xdotool)
  • xsel (apt-get install xsel)
  • YouTube account
  • crontab entries – see below
  • you do not need an HDMI display, except for the OS setup
  • a vncviewer such as Real VNC
Idea

Don’t use the GUI for anything else!

Crontab (do a crontab -e to get into your crontab) should contain these lines:

@reboot sleep 15; /home/pi/recordswitch.sh > recordswitch.log 2>&1
# launch vnc server on display 1
@reboot sleep 65;x0vncserver -localhost no -passwordfile ~/.vnc/passwd -display :0 >  x0vncserver.log 2>&1

Work with chromium the first time by hand. As I recall you should:

  • Create a directory like 00uploads – so it appears highest in the list
  • put a single video in 00uploads
  • Do an upload by hand (to help chromium remember to choose this upload directory)
  • launch chromium browser
  • log into your YouTube account at https://studio.youtube.com
  • shrink the browser until its size is 50% (Ctrl-Shift-minus, about four times)
  • Don’t add other tabs and stuff to Chromium

Then subsequent launches of chromium should remember a bunch of these settings, specifically, your login info, the shrunken size, the upload directory, and maybe the (lack of) other tabs.

The beauty of this approach is that it is more transparent than the alternatives. You see exactly what your program is doing. You can issue the xdotool commands by hand to, e.g., change up the coordinates a little bit. Or even enter a video title.

So, getting back to the idea: the automation flow is to finish a video somehow, then move it to the 00uploads directory, invoke this uploader program, and then move the video to an uploaded directory or some such.
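
Here is a sketch of that glue, reusing the 00uploads directory and the auto-test.sh script from above (the uploaded archive directory and the details are my own invention):

#!/bin/sh
# glue sketch: upload the single video waiting in 00uploads, then archive it
uploads=$HOME/00uploads
archive=$HOME/uploaded
mkdir -p "$archive"
video=$(ls "$uploads" | head -1)
[ -z "$video" ] && { echo nothing to upload; exit 0; }
$HOME/auto-test.sh
mv "$uploads/$video" "$archive/"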

Imagine the versatility if I used my remote controller for RPi to map one button for audio recording, and a second button for automating video upload! Well, when I find the time that’s what I plan to do. I will make a separate post where the recording and uploading are shown – more or less the culmination of all the pieces.

Oh, and back to the idea again: I wanted to share the unlisted link with band members. So, you see how it is basically the result of xsel -b, since xsel reads the clipboard, which contains the YouTube URL of the video we just uploaded? I have to fix up the parsing because some junk characters are getting included, but I plan to email that link to myself first, where I will do a brief manual check, and then forward it to the rest of the band. So, again, it's really cool that we could even think to pull that off with this simplistic approach.

Techniques developed for this project

Lots.

I "discovered" – in the sense that Columbus discovered America – xdotool as an amazing X Windows screen automation tool. I knew of AutoHotkey for Windows, so I inquired what was like it for X Windows. I further learned that xdotool is generally broken when it comes to use with traditional VNC servers such as the native tightvncserver. It simply doesn't work. But TigerVNC is a scraping server, so it shares your console screen and makes it available via the VNC protocol. That's required because to develop this approach you have to see what you're doing. All those coordinates? They come from experimentation.

I also learned how to embed a YouTube video in my blog post. In fact this is the very first video I made for a blog post. So I did a screen recording for the first time, with a screen recorder for Windows.

I landed on the idea of a side-by-side video showing my terminal running the automated script in one window and the effect it is having on the chromium browser in the other window running on the RPi.

I put a wrapper around xdotool to make things cleaner. (But it’s not done yet.)

I changed to two-factor authentication to see if it made a difference. It did not. It still remembers the authentication, thankfully, at least for a few days. I wonder for how long though. Hmm.

Kiosk mode. By launching chromium with kiosk mode it not only gives us more screen real estate to work in, it in principle should also permit you to interact with chromium in a regular fashion and still have it come up in a known, fixed position, which is an absolute requirement of this approach. All the buttons have to have the same coordinates from invocation to invocation.

I also developed ffmpeg-based converters: one takes wav files and converts them to mp3s (a nice compact format; wav files are space hogs), and another takes mp3 files, adds a gray screen, and converts them to flv ([Adobe] Flash Video, I guess – a compact video file format which YouTube accepts).
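
My converter scripts themselves aren't shown here, but minimal sketches of the two conversions might look like this (filenames hypothetical, flags my own choices):

# wav to mp3
ffmpeg -i practice.wav -codec:a libmp3lame -qscale:a 2 practice.mp3
# mp3 plus a generated gray screen to flv, a format YouTube accepts
ffmpeg -f lavfi -i color=c=gray:s=640x360:r=10 -i practice.mp3 -c:v flv -c:a copy -shortest practice.flv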

I also learned the ffmpeg command to tell exactly how long a recording is.
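
One way to do that is with ffprobe, which ships alongside ffmpeg (filename hypothetical):

ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 practice.mp3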

I also learned how to turn off blocking in ffmpeg so that it's constantly writing packets and thus not losing audio data at the end when stopped.

I came up with the idea of randomizing the sleep time between clicks to make it seem more human-like.

And mostly for the purpose of demonstrations, though it also greatly helps in debugging, I introduced around two seconds of sleep both before and after a command is issued. That really makes things a lot clearer.

The result in the clipboard, from xsel -b, contains null bytes. I had trouble parsing it to pull out just the URL, but finally landed on using xargs -0, which is designed to parse null-delimited strings. And it worked! This was a late addition and did not make it into the video, but it is in the provided script above, towards the bottom.

ffmpeg chokes on too-complicated filenames. Who knew? I had files containing colon (:) and dash (-) characters, which work perfectly fine in linux, but ffmpeg was interpreting part of the filename as a command-line argument, it appeared. The way out of that mess was to introduce file: in front of the filename.
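
For example, with a hypothetical timestamped filename:

# without the file: prefix, ffmpeg treats the text before the first colon as a protocol name
ffmpeg -i file:2021-10-07T20:15:00-practice.wav practice.mp3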

I’ll probably put my ffmpeg tricks into my next post because I want to keep this one lean and focused on this one upload automation topic.

References and related

This post was preceded by my post on how to stream live to YouTube with a click of a button on a Raspberry Pi.

Categories
Firewall

Checkpoint Gaia admin tips

Intro

Suppose, hypothetically, that you had super admin access to a CMA in SmartConsole v80.40, but lacked ssh or GUI access to the firewalls within that CMA. What could you do? Can you run commands in a pinch? Yes. You can. Here are some concrete examples.

Caveats

In the servers section of the domain you can right-click and choose "Run one-time script." That's great, but there are limits. It will time out a script that takes too long – IDK, maybe 10 seconds or so is the maximum time allowed. The returned text gets truncated if it's too long: 15 lines of text is OK, 200 is not; somewhere in between those two is the limit.

Running clish commands

clish commands can indeed be run this way, as can ordinary shell commands. I was interested in examining a few routes on a firewall with many static routes, so I ran:

netstat -rn|grep 198.23|head -15

Set a static route

clish -sc "set static-route 197.6.75.0/24 nexthop gateway address 10.23.42.10 on"

Redistribute this route via BGP

clish -sc "set route-redistribution to bgp-as 38002.48928 from static-route 197.6.75.0/24 on"

Run a PING (best to restrict the number of ping packets)

ping -c3 1.1.1.1

Show a part of configuration, e.g., BGP stuff

clish -c "show configuration"|grep bgp|head -15

Show cluster IPs

cphaprob -vs all -a -m if|grep 10.182.136

Learn the name of the switch and switch port an interface is connected to (Cisco switches only)

This is a really awesome trick. And it works. It relies on the switch's discovery protocol announcements – CDP or, on some switches, LLDP (the filter below matches both). You run it and it will tell you the hostname of the switch and the port, e.g., eth1/5.

tcpdump -vnni eth1-08 -s 1500 -c 1 '(ether[12:2]=0x88cc or ether[20:2]=0x2000) and not tcp and not udp and not icmp'

The interface name eth1-08 above is just an example.

Better still, it will even tell you the management IP of the switch! That output will appear like this:

Management Addresses (0x16), value length: 13 bytes: IPv4 (1) 10.122.37.81

This command is general-purpose and works with any device with any OS, assuming you can run a packet trace with tcpdump or equivalent. Very cool.

Conclusion

Real firewall admins I know fail to realize that even when they lack shell access to a firewall, they can issue pretty much any command they need by using the one-time script option in SmartConsole. It just helps to follow along the lines of the examples above – limiting output, etc. Even clish config changes can be made! A common way to land in this situation is to learn that someone changed a password or cleaned up old accounts.

As a bonus I show how to identify the name of the switch a firewall is connected to as well as the switch port and management IP of that switch. The general-purpose command works on any OS.

Categories
Admin DNS Firewall Network Technologies TCP/IP

The IT detective agency: named times out tcp queries

Intro

I've been reliably running ISC's BIND server for eons. Recently I had a problem getting my slave servers updated after a change to the primary master. What was going on there?

The details

This was truly a team effort. I saw that the zone file had differing serial numbers on the master versus the slave servers. My attempts to update via rndc refresh zone were having no effect.

So I tried a zone transfer by hand: dig axfr drjohnstechtalk.com @50.17.188.196

That timed out!

Yet, regular dns queries went through fine: dig ns drjohnstechtalk.com @50.17.188.196

I thought about it and remembered zone transfers use TCP whereas standard queries use UDP. So I tried a TCP-based simple query: dig +tcp ns drjohnstechtalk.com @50.17.188.196. It timed out!

So of course one suspects the firewall, which is reasonable enough. And when I looked at the firewall I found some funny drops, though I couldn't line them up exactly with my failed tests. But I'm not a firewall expert; I just muddle through.

The next day someone from the DNS group asked how local queries behaved. Hmm, never tried that. So I tried it: dig +tcp ns drjohnstechtalk.com @localhost. That timed out as well! That was a brilliant suggestion, as we could now eliminate the firewall and all that complexity from the equation. Up to then I had been trying to run packet traces on two different machines at the same time and line up the results; it wasn't easy.

The whole issue was very concerning to us because we feared our secondaries would be unable to update their slave zones and would ultimately time them out. The result would be devastating.

We have support, fortunately – a company that hearkens from the good old days, with real subject matter experts. But they're extremely busy. We did not get a suggestion for a couple of weeks. But eventually we did: they had seen this once before.

named time to respond to TCP-based queries

The above graph is from a Zabbix monitor showing how long it takes that dns server to respond to that simple query. 6 s is a timeout. I actually set dig to time out at 2 s, but in wall-clock time it takes 6 s.
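
The probe was presumably something like the following (my reconstruction; note that with a 2 s timeout and dig's default of three tries, a dead server consumes 6 s of wall-clock time, which I believe accounts for the graph's ceiling):

time dig +tcp +time=2 ns drjohnstechtalk.com @50.17.188.196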

The fix

We removed this line from the options block of named.conf:

keep-response-order {any; };

The info from the experts is that most likely that line was configured as a workaround to CVE-2019-6477, but that issue was fixed as of BIND 9.15.6.

Conclusion

We encountered the named daemon in a situation where it was unable to respond to TCP-based DNS queries and hence unable to do zone transfers. So although most queries use UDP, this was a serious issue for us and prevented zones from being updated on all authoritative nameservers.

As is the case with so many modern IT problems, the effect was not black or white. Failures were intermittent, and then permanent. A restart fixed the issue (I forgot to mention that until now!). But we involved an expert to find the root cause, and it was the presence of a single configuration line in our named.conf. After removing that, all was good.

Categories
Admin

Git commands cheat sheet

Intro

This is the list of git commands I compiled.

Create new local GIT repository

git init [project name]

Copy a repository

git clone username@host:/path/to/repository

Add a file to the staging area – must be done or won’t be saved in next commit

git add temp.txt or git add -A (add everything at once)

My working style is to change one to three files, and only add those.

Suppose you run git add, then change the file again, and then run git commit: which version gets committed? The originally staged one! Run git add again to pick up the latest changes. And note that your git adds can be run over the course of time – not all at once!
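
To see the gotcha in action:

echo version1 > temp.txt
git add temp.txt                 # stages "version1"
echo version2 > temp.txt         # this later edit is NOT staged
git commit -m "save temp.txt"    # commits "version1", not "version2"
git add temp.txt && git commit -m "now version2 gets committed"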

Create a snapshot of the staged changes and save it to the local repository

Note that committed changes won't make their way to the remote repo until you push them

git commit -m "Message to go with the commit here"

Nota bene: git does not allow you to add empty folders! If you try, you'll simply see: nothing to commit, working tree clean

Put some “junk file” like .gitkeep in your empty folder for git add/git commit to work.

Set user-specific values

git config --global user.email youremail

Displays the list of changed files together with the files that are yet to be staged or committed

git status

Undo that git add you just did

git status to see what’s going on. Then

git restore --staged filename

and then another git status to see that that worked.

Send local commits to the master branch of the remote repository

  (Replace <master> with the branch where you want to push your changes when you’re not intending to push to the master branch)

git push origin <master>

For real basic setups like mine, where I work on branch master, it suffices to simply do git push

Merge all the changes present in the remote repository to the local working directory

git pull

Create a new branch and switch to it

git checkout -b <branch-name>

Switch from one branch to another

git checkout <branch-name>

View all remote repositories

git remote -v

Connect the local repository to a remote server

git remote add origin <host-or-remoteURL>

Delete connection to a specified remote repository

git remote rm <name-of-the-repository>

List, create, or delete branches

git branch

Delete a branch

git branch -d <branch-name>

Merge a branch into the active one

git merge <branch-name>

List file conflicts

git diff --base <file-name>

View conflicts between branches before a merge

git diff <source-branch> <target-branch>

List all conflicts

git diff

Tag a certain commit, e.g., as v1.0

git tag v1.0 <commitID>

View repository’s commit history, etc

git log, e.g., git log --oneline

Reset the index (staging area) and working directory to the last git commit

git reset --hard HEAD

Remove files from the index (staging area) and the working directory

git rm filename.txt

Revert (undo) changes from a commit, as per the hash shown by git log --oneline

git revert <hash>

Temporarily shelve changes that are not yet ready to be committed

git stash

View info about any git object

git show

Fetch all objects from the remote repository that don't currently reside in the local repository

git fetch origin

View a tree object, e.g., the tree at HEAD

git ls-tree HEAD

Search everywhere

git grep <string>

Clean unneeded files and optimize local repository

git gc

Create zip or tar file of a repository

git archive --format=tar master

Delete unreachable objects (those without incoming pointers)

git prune

Identify corrupted objects

git fsck

Merge conflicts

Today I got this error during my usual git pull:

error: Pulling is not possible because you have unmerged files.
hint: Fix them up in the work tree, and then use 'git add/rm <file>'
hint: as appropriate to mark resolution and make a commit.
fatal: Exiting because of an unresolved conflict.

I did not wish to waste too much time. I tried a few things (git status, etc.), none of which worked. As it is a small repository without much at stake, and I know which files I changed, I simply deleted the clone, re-cloned the repo, put back the newer versions of the changed files, and did the usual git add, git commit, and git push. All was good. Not too much time wasted on becoming a git master, a fate I wish to avoid.

Ignore a file

Put the unqualified name of the file in .gitignore, at the same level as .git. It will not be added to the project. Use this for keeping passwords secret.
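
For example, to keep a hypothetical secrets.env out of the repository:

# .gitignore, at the same level as .git
secrets.env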

Test a command

Most commands have a --dry-run option which prevents them from taking any action. Useful for debugging.

References and related

A convention for commit comments

https://www.freecodecamp.org/news/10-important-git-commands-that-every-developer-should-know/

Categories
Linux Raspberry Pi Web Site Technologies

Raspberry Pi Project: YouTube livestreaming with a click of a button

Intro

Here I’ve combined work I’ve done previously into one single useful application: I can initiate the live streaming of our band practice on YouTube with the click of a single button on a remote control, and stop it with another click.

Equipment

Raspberry Pi 3 or 4 with Raspberry Pi OS, e.g., Raspbian Lite is just fine

Logitech webcam or USB microphone

USB extender (my setup needed this, others may not)

Universal USB-based remote control – see references for a known good one

Method 1

In this method I rapidly blink the onboard red power (PWR) LED of the RPi while streaming is active. Outside of those times it is a solid red. This is my preferred mode – it’s a very visible sign that things are working. I am very excited about this approach.

blinkLED.sh

#!/bin/sh
# DrJ 8/30/2021
# https://www.jeffgeerling.com/blogs/jeff-geerling/controlling-pwr-act-leds-raspberry-pi
# put LED into GPIO mode
echo gpio | sudo tee /sys/class/leds/led1/trigger > /dev/null
# flash the bright RED PWR (power) LED quickly to signal whatever
while /bin/true; do
  echo 0|sudo tee /sys/class/leds/led1/brightness > /dev/null
  sleep 0.5
  echo 1|sudo tee /sys/class/leds/led1/brightness > /dev/null
  sleep 0.5
done

shineLED.sh

#!/bin/sh
# DrJ 8/30/2021
# https://www.jeffgeerling.com/blogs/jeff-geerling/controlling-pwr-act-leds-raspberry-pi
# put LED into GPIO mode
echo gpio | sudo tee /sys/class/leds/led1/trigger > /dev/null
# turn on the bright RED PWR (power) LED
echo 1|sudo tee /sys/class/leds/led1/brightness > /dev/null

broadcastswitch.sh

#!/bin/bash
# DrJ 8/2021
# Control the livestream of audio to youtube
# works in conjunction with an attached keyboard
# I use bash interpreter to give me access to RegEx matching
HOME=/home/pi
log=$HOME/audiocontrol.log
program=continuousaudio.sh
##program=tst.sh # testing
PGM=$HOME/$program
# de-press ENTER button produces this:
match="1, 28, 0"
epochsOld=0
cutoff=3 # seconds
DEBUG=1
ledtime=10
#
echo "$0 starting monitoring at "$(date)
# Note the use of script -q -c to avoid line buffering of the evread output
script  -q -c $HOME/evread.py /dev/null|while read line; do
[[ $DEBUG -eq 1 ]] && echo line is $line
# seconds since the epoch
epochs=$(date +%s)
elapsed=$((epochs-$epochsOld))
if [[ $elapsed -gt $cutoff ]]; then
  if [[ "$line" =~ $match ]]; then
    echo "#################"
    echo We caught this input: $line at $(date)
# see if we are already running continuousaudio or not
    pgrep -f $program>/dev/null
# 0 means it's been found
    if [ $? -eq 0 ]; then
# kill it
      echo KILLING $program
      pkill -9 -f $program; pkill -9 ffmpeg
      pkill -9 -f blinkLED
      echo Shine the PWR LED
      $HOME/shineLED.sh
    else
# start it
      echo Blinking PWR LED
      $HOME/blinkLED.sh &
      echo STARTING $PGM
      $PGM > $PGM.log.$(date +%m-%d-%y:%H:%M) 2>&1 &
    fi
    epochsOld=$epochs
  fi
[[ $DEBUG -eq 1 ]] && echo No action taken. Continue to listen
fi
done

The crontab entry and the referenced files are the same as in Method 2.

Method 2

In method 2 I flash the built-in LED on the webcam for a few seconds before starting the audio, and again when the streaming has terminated – as a visible signal that the button press registered.

broadcastswitch.sh

#!/bin/bash
# DrJ 8/2021
# Control the livestream of audio to youtube
# works in conjunction with an attached keyboard
# I use bash interpreter to give me access to RegEx matching
HOME=/home/pi
log=$HOME/audiocontrol.log
program=continuousaudio.sh
##program=tst.sh # testing
PGM=$HOME/$program
# de-press ENTER button produces this:
match="1, 28, 0"
epochsOld=0
cutoff=3 # seconds
DEBUG=1
ledtime=10
#
echo "$0 starting monitoring at "$(date)
# Note the use of script -q -c to avoid line buffering of the evread output
script  -q -c $HOME/evread.py /dev/null|while read line; do
[[ $DEBUG -eq 1 ]] && echo line is $line
# seconds since the epoch
epochs=$(date +%s)
elapsed=$((epochs-$epochsOld))
if [[ $elapsed -gt $cutoff ]]; then
  if [[ "$line" =~ $match ]]; then
    echo "#################"
    echo We caught this input: $line at $(date)
# see if we are already running continuousaudio or not
    pgrep -f $program>/dev/null
# 0 means it's been found
    if [ $? -eq 0 ]; then
# kill it
      echo KILLING $program
      pkill -9 -f $program; pkill -9 ffmpeg
      sleep 1
      echo turn on led for a few seconds
      $HOME/videotst.sh &
      sleep $ledtime
      pkill -9 ffmpeg
    else
# start it
      echo turn on led for a few seconds
      $HOME/videotst.sh &
      sleep $ledtime
      pkill -9 ffmpeg
      sleep 1
      echo STARTING $PGM
      $PGM &
    fi
    epochsOld=$epochs
  fi
[[ $DEBUG -eq 1 ]] && echo No action taken. Continue to listen
fi
done

videotst.sh

#!/bin/sh
# just to get the webcam to light up...
ffmpeg -i /dev/video0 -f null - < /dev/null > /dev/null 2>&1 &

crontab entry

@reboot sleep 15; /home/pi/broadcastswitch.sh > broadcastswitch.log 2>&1

continuousaudio.sh and ffmpegwireless6.sh

See this post: https://drjohnstechtalk.com/blog/2019/04/live-stream-to-youtube-from-a-raspberry-pi-webcam/

evread.py

See this post: https://drjohnstechtalk.com/blog/2020/12/how-to-create-a-software-keyboard/

The idea

I press the Enter button once on the remote to begin the livestream to YouTube. I press it a second time to stop.

By extension this could also control other programs as well (like the photo frame). And other keys could be mapped to other functions. Record-only, don’t livestream, anyone?

I want to do these things because it's a little tight in the room where I want to livestream – hard to get around. So this keeps me from having to squeeze past other people to access the RPi to, for instance, power cycle it. In my previous treatment, I had livestreaming start up as soon as the RPi booted, which meant it would only stop when the RPi was similarly powered off – somewhat limiting, I found.

The purpose of videotst.sh in Method 2

videotst.sh serves almost no purpose whatsoever! It can simply be commented out. It’s somewhat specific to my webcam.

You see, I wanted some feedback that when I pressed the ENTER button on the remote control, the RPi had read it and was trying to start the livestream. I thought of flashing one of the built-in LEDs on the RPi. I still need to look into that.

With the robotics team we had soldered on an external LED onto one of the GPIO pins, but that’s way too much trouble.

So what videotst.sh does for me is engage the webcam – specifically its video component, throwing away the actual video – with the net result that the webcam's built-in green LED illuminates for a few seconds! That lets me know, "Yeah, your button press was registered and we're beginning to start the livestream." You see, when you run ffmpegwireless6.sh with this webcam, it's all about the audio: it only uses the audio of the webcam, and thus the green "in use" LED never illuminates, unfortunately, while it is livestreaming the pure audio stream. So, similarly, when you press ENTER a second time to stop the stream, I illuminate the webcam's LED for a few seconds by running videotst.sh once again.

Techniques developed for this project

evread.py does some nasty buffering of its output, meaning that although it does read the key presses on the remote, it holds the results "close to its chest" and then spits them out, all at once, when the buffer is full. Well, that totally defeats the purpose here, where I want to know if there's been a single click. After some insightful Internet searches (note that I did not use Google as a verb, a practice I carry into my personal communication) I discovered the program script which, when armed with the arguments -q -c, allows you to unbuffer the output of a program! And it actually works. Cool.

And I made the command decision to "eat" the input. You see the timer of 3 seconds in broadcastswitch.sh? After any button press, it throws away further button presses for the next three seconds. I just think that'll reduce the misfires. In fact, I might take up the practice of double-clicking the ENTER button just to be sure I actually pressed it.

I'm using the double-bracket notation more in my bash scripts. It permits use of the RegEx comparison operator =~. I love regular expressions – more the perl style, PCRE, while this uses extended regular expressions, ERE. But I suppose those are good as well.
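
A stripped-down illustration (the sample line is just a stand-in for evread.py output):

#!/bin/bash
line="timestamp 1, 28, 0"   # stand-in for a line emitted by evread.py
match="1, 28, 0"
if [[ $line =~ $match ]]; then
  echo ENTER button press detected
fi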

Getting control over the power LED was a nice coup. I'm only disappointed that you cannot control its brightness; in the dark it throws off quite a bit of light.

The green LED does not seem nearly as bright so I chose not to play with it. What I don’t want is to have to strain to see whether the thing is livestreaming or not.

Of course getting the whole remote control thing to work at all is another great advancement.

Techniques still to be developed

I still might investigate using voice-driven commands in place of a remote. Obviously, that’s a big nut to crack. Even if I managed to turn it on, turning it off while ffmpeg has commandeered the audio channel is even harder. I wonder if ffmpeg can split the audio stream so another process can be run alongside it to listen for voice commands? Or if an upstream process in front of ffmpeg could be used for that purpose? Or simply run with two microphones (seems wasteful of material)?? Needs research.

Suppose you want to take this on the road? Internet service can be unreliable, after all. It's well known you can power the RPi 3 for many hours with a small portable battery. So how about mapping a second button on the remote to a record-only mode (using the arecord utility, for instance)? Then you can upload the audio at a time of your convenience. That's something I can definitely program if I find I need it.
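
A record-only handler could be as simple as the sketch below; the capture device plughw:1,0 is an assumption – check arecord -l for yours.

#!/bin/sh
# record-only mode: capture CD-quality wav until killed
# plughw:1,0 is an assumed device; list capture devices with: arecord -l
arecord -D plughw:1,0 -f cd -t wav $HOME/recording-$(date +%s).wav &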

Lingering Problems with this approach

Despite all the care I’ve taken with the continuousaudio.sh script, still, there are times when YouTube does not show that a livestream is going on. I have no idea why at this point. If I knew the cause, I’d have fixed it!

As the livestream aspect of this is actually immaterial to me, I will probably switch to a pure recording mode where I upload in a later step – perhaps all driven by the remote control, for pure convenience.

Since this blog post has become popular, I may keep it preserved as is and start a new one for this recording approach as some people may genuinely be interested in the livestream aspect.

A very rough estimate of the failure rate is maybe as high as 50% but probably no lower than 25%. So, not great odds if you’re relying on success.

There's another issue, which I consider more minor. The beginning of the stream always sounds like a tape played on fast-forward for a few seconds. The end also gets cut off a few seconds early, I think.

Conclusion

We have presented a novel approach to livestreaming on a Raspberry Pi 3 using a remote control for added convenience. All the techniques were home-developed at drjohnstechtalk.com. The materials don’t cost much and it really does work.

References and related

Rii infrared remote control – only $12: Amazon.com: Rii MX3 Multifunction 2.4G Fly Mouse Mini Wireless Keyboard & Infrared Remote Control & 3-Gyro + 3-Gsensor for Google Android TV/Box, IPTV, HTPC, Windows, MAC OS, PS3 : Electronics

25′ USB 2 extender for placing a webcam or USB mic at a distance from the RPi, $15: Amazon.com: HDE USB Extension Cable (USB 2.0 Type A Male to Female) High Speed Data and Power Extension Cable with Active Repeater (25 ft) : Electronics

Reading keyboard input.

Using Remote control to interact with a RPi-based photo frame.

ffmpeg settings to send just audio to YouTube, suppressing video

Battery to make the RPi 3 portable, $17: Amazon.com: Omars Power Bank 10000mAh USB C Battery Pack Slimline Portable Charger with Dual USB Output Compatible with iPhone Xs/XR/XS Max/X, iPad, Galaxy S9 / Note 9 : Cell Phones & Accessories. I’m not exactly sure what to do for an RPi 4 however.

Extended Regular Expressions

How to control the power to the RPi’s LEDs: https://github.com/mlagerberg/raspberry-pi-setup/blob/master/5.2-leds.md

Categories
Admin Apache CentOS Python Raspberry Pi Web Site Technologies

Traffic shaping on linux – an exploration

Intro

I have always been somewhat agog at the idea of limiting bandwidth on my linux servers. Users complain about slow web sites and you want to try it for yourself, slowing your connection down to meet the parameters of their slower connection. More recently I happened on librespeed, an alternative to speedtest.net, where you can run both server and client. But in order to avoid transferring too much data and monopolizing the whole line, I wanted to actually put in some bandwidth throttling. I began an exploration of available methods to achieve this and found some satisfactory approaches that are readily available on Redhat-type linuxes.

bandwidth throttling, bandwidth rate limiting, bandwidth classes – these are all synonyms for what is most commonly called traffic shaping.

What doesn’t work so well

I think it’s important to start with the walls that I hit.

Cgroup

I stumbled on cgroups first. The man page starts in a promising way

cgroup - control group based traffic control filter

Then after you research it you see that support for cgroups was enabled in linux kernels long ago. And there is version 1 and version 2, and only version 1 supports bandwidth limits. But if you're just a mid-level linux person such as myself, it is confusing and unclear how to take advantage of cgroups. My current conclusion is that it is more a subsystem designed for use by systemd. In fact if you've ever looked at a status, for instance of crond, you see a mention of a cgroup:

sudo systemctl status crond
● crond.service - Command Scheduler
   Loaded: loaded (/usr/lib/systemd/system/crond.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2021-08-09 15:44:24 EDT; 5 days ago
 Main PID: 1193 (crond)
    Tasks: 1 (limit: 11278)
   Memory: 2.1M
   CGroup: /system.slice/crond.service
           └─1193 /usr/sbin/crond -n

I don’t claim to know what it all means, but there it is. Some nice abilities to schedule and allocate finite resources, at a very high level.

So I get the impression that no one really uses cgroups to do traffic shaping.

apache web server to the rescue – not

Since I was mostly interested in my librespeed server and controlling its bandwidth during testing, I wondered if the apache web server has this capability built in. Essentially, it does! There is the module mod_ratelimit. So, quest over, and let the implementation begin! Except, not so fast. In fact I did enable that module and set it up on my librespeed server. It kind of works, but mostly not really, and nothing like its documented design.


<Location "/downloads">
    SetOutputFilter RATE_LIMIT
    SetEnv rate-limit 400
    SetEnv rate-initial-burst 512
</Location>

That's their example section. I have no interest in such low limits and tried various values from 4000 to 12000. I only got two different actual rates out of librespeed from all those configurations: either 83 Mbps or around 162 Mbps. And that's it. Merely having any rate-limit statement whatsoever starts limiting to one of these strange values. With the statement commented out I was getting around 300 Mbps. So I got rate-limiting, but not what I was seeking, and with almost no control.

So the apache config approach was a bust for me.

Trickle

There are some linux programs that are perhaps promoted too heavily? Within a minute of posting the first draft of this, someone came along and suggested trickle. Well, on CentOS yum search trickle gives no results. My other OS was SLES v15 and I similarly got no results. So I'm not enamored with trickle.

tc – now that looks promising

Then I discovered tc – traffic control. That sounds like just the thing. I had to search around a bit on one of my OSes to find the appropriate package, but I found it. On CentOS/Redhat/Fedora the package is iproute-tc. On SLES v15 it was iproute2. On FreeBSD I haven’t figured it out yet.

But it looks unwieldy to use, frankly. Not, as they say, user-friendly.

tcconfig + tc – perfect together

Then I stumbled onto tcconfig, a python wrapper for tc that provides convenient utilities and examples. It’s available, assuming you’ve already installed python, through pip or pip3, depending on how you’ve installed python. Something like

$ sudo pip3 install tcconfig

I love the available settings for tcset – just the kinds of things I would have dreamed up on my own. I wanted to limit download speeds, and only on the web server running on port 443, and only from a specific subnet. You can do all that! My tcset command went something like this:

$ cd /usr/local/bin; sudo ./tcset eth0 --direction outgoing --src-port 443 --rate 150Mbps --network 134.12.0.0/16

$ sudo ./tcshow eth0

{
    "eth0": {
        "outgoing": {
            "src-port=443, dst-network=134.12.0.0/16, protocol=ip": {
                "filter_id": "800::800",
                "rate": "150Mbps"
            }
        },
        "incoming": {}
    }
}

More importantly – does it work? Yes, it works beautifully. I run a librespeed cli with three concurrent streams against my AWS server thusly configured and I get around 149 Mbps. Every time.

Note that things are opposite of what you first think of. When I want to restrict download speeds from a server but am imposing traffic shaping on the server (as opposed to on the client machine), from its perspective that is upload traffic! And port 443 is the source port, not the destination port!

Raspberry Pi example

I’m going to try regular librespeed tests on my home RPi which is cabled to my router to do the Internet monitoring. So I’m trying

$ sudo tcset eth0 --direction incoming --rate 100Mbps
$ sudo tcset eth0 --direction outgoing --rate 9Mbps --add

This reflects the reality of the asymmetric rate you typically get from a home Internet connection. tcshow looks a bit peculiar however:

{
    "eth0": {
        "outgoing": {
            "protocol=ip": {
                "filter_id": "800::800",
                "delay": "274.9s",
                "delay-distro": "274.9s",
                "rate": "9Mbps"
            }
        },
        "incoming": {
            "protocol=ip": {
                "filter_id": "800::800",
                "delay": "274.9s",
                "delay-distro": "274.9s",
                "rate": "100Mbps"
            }
        }
    }
}

Results on the RPi

Despite the strange delay-distro appearing in the tcshow output, the results are perfect. Here are my librespeed results, running against my own private AWS server:

Time is Sat 21 Aug 16:17:23 EDT 2021
Ping: 20 ms Jitter: 1 ms
Download rate: 100.01 Mbps
Upload rate: 9.48 Mbps

!

Problems creep in on RPi

I swear I had it all working. This blog post is the proof. Now I’ve rebooted my RPi and that tcset command above gives the result Illegal instruction. Still trying to figure that one out!

March, 2022 update. My RPi had other issues. I’ve re-imaged the micro SD card and all is good once again. I set traffic shaping policies as shown in this post.

Conclusion about tcconfig

It’s clear tcset is just giving you a nice interface to tc, but sometimes that’s all you need to not sweat the details and start getting productive.

Possible issue – missing kernel module

On one of my servers (the CentOS 8 one), I had to do a

$ sudo yum install kernel-modules-extra

$ sudo modprobe sch_netem

before I could get tcconfig to really work.

To do list

Make the tc settings permanent (one idea is sketched after this list).

Verify tc + tcconfig work on a Raspberry Pi. (tc is definitely available for RPi.)
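
For the permanence item, one idea (untested; the rates and the tcset path are just my examples) is to follow the same @reboot crontab convention used elsewhere in these posts, in root's crontab since tcset needs root:

@reboot sleep 20; /usr/local/bin/tcset eth0 --direction incoming --rate 100Mbps; /usr/local/bin/tcset eth0 --direction outgoing --rate 9Mbps --add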

Conclusion

We have found a pretty nice and effective way to do traffic shaping on linux systems. The best tool is tc and the best wrapper for it is tcconfig.

References and related

Librespeed is a great speedtest.net alternative for hard-core linux types who love the command line and being in full control of both ends of a speed test. I describe it here.

tcconfig’s project page on PyPi.

Power cycling one's cable modem automatically via an attached RPi. I refer to this blog post specifically because I intend to expand that RPi to also do periodic, automated speedtesting of my home broadband connection, with traffic shaping in place if all goes well (as it seems to thus far).

Bandwidth management and “queueing discipline” in all its gory detail is explained in this post, including example raw tc commands. I haven’t digested it yet but it may represent a way for me to get my RPi working again without a re-image: http://www.fifi.org/doc/HOWTO/en-html/Adv-Routing-HOWTO-9.html

Categories
Admin Network Technologies Raspberry Pi

A nice alternative to speedtest.net for the DIY linux crowd

Intro

I was building some infrastructure around automated speedtest.net tests using speedtest-cli. I noticed the assigned servers keep changing, some servers are categorized as malicious sources, some time out if tested on the hour and half-hour, and results are inconsistent depending on which server you get.

So I saw that speedtest-cli (the linux command-line python script) has a switch for a "mini" server. When I investigated, that seemed the answer to the problem – you can set up your own mini server and use that for your tests, i.e., control both ends of the test. Great.

The speedtest.net mini server was discontinued in 2017! There's some commercial replacement. So I thought: forget that. I was disillusioned, and then happened upon a breath of fresh air – an open source alternative to speedtest.net. Enter librespeed.

Some details

librespeed has a command-line program which is an obvious rip-off of speedtest-cli. In fact it is called librespeed-cli and has many similar switches.

There is also a server setup. Really, just a few files you can put on any apache + php web server. There is a web GUI as well, but in fact I am not even that interested in that. And you don’t need to set it up at all.

What I like is that with the appropriate switches supplied to librespeed-cli, I can have it run against my own librespeed server. In some testing configurations I was getting 500 Mbps downloads. Under less favorable circumstances, much less.

Testing, testing, testing

I tested between Europe and the US. I tested through a proxy. I tested from the Azure cloud to an Amazon AWS server. I tested with a single-cpu linux server (good old drjohnstechtalk.com) as either the server or the client. This was all possible because I had full control over both ends.

Some tips
  1. Play with the librespeed-cli switches. See what works for you. librespeed-cli -h will show you all the options.
  2. Increasing the stream count can compensate for slower PING times (assuming both ends have a fast connection)
  3. It does support proxy, but
  4. Downloads don’t really work through proxy if the server is only running http
  5. Counterintuitively, the cpu burden is on the client, not the server! My servers didn’t show the slightest bit of resource usage.
  6. Corollary to 5. My 4-cpu client to 1-cpu server test was much faster than the other way around where server and client roles were reversed.
  7. Most things aren’t sensitive to upload speeds anyway so seriously consider suppressing that test with the appropriate switch. Your tests will also run a lot faster (18 seconds versus 40 seconds).
  8. Worried about consuming too much bandwidth and transferring too much data? I also developed a solution for that (will be my next blog post)
  9. So I am running a librespeed server on my little VM on Amazon AWS but I can’t make it public for fear of getting overrun.
  10. ISPs that have excellent interconnects such as the various cloud providers are probably going to give the best results
  11. It is not true your web server needs write access to its directory in my experience. As long as you don’t care about sharing telemetry data and all that.
  12. To emphasize, they supply the librespeed-cli binary, pre-built, for a whole slew of OSes. You do not and should not compile it yourself. For a standard linux VM you will want the binary called librespeed-cli_1.0.9_linux_386.tar.gz (fetch-and-unpack sketch below).
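
Fetching and unpacking that binary might look like this (the URL is my assumption, following the pattern of the armv7 link given later in this post):

wget https://github.com/librespeed/speedtest-cli/releases/download/v1.0.9/librespeed-cli_1.0.9_linux_386.tar.gz
tar xzf librespeed-cli_1.0.9_linux_386.tar.gz
./librespeed-cli -h
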
Example files

The point of these files is to test librespeed-cli, from the directory where you copied it to, against your own librespeed server.

json-ns6

[{
    "name": "ns6, Germany (active-servers)",
    "server": "https://ns6.drjohnstechtalk.com/",
    "id": 864,
    "dlURL": "backend/garbage.php",
    "ulURL": "backend/empty.php",
    "pingURL": "backend/empty.php",
    "getIpURL": "backend/getIP.php",
    "sponsorName": "/dev/null/v",
    "sponsorURL": "https://dev.nul.lv/"
}]

wrapper.sh

#!/bin/sh
# see ./librespeed-cli -help for all the options
./librespeed-cli --local-json json-ns6 --server 864 --simple --no-upload --no-icmp --ipv4 --concurrent 4 --skip-cert-verify

Purpose: are we getting good speeds?

My purpose in what I am constructing is to verify we are getting good download speeds. I am not trying to hit it out of the park – that consumes (read: wastes) a lot of resources. I am aiming to prove we can achieve about 150 Mbps downloads. I don't know anyone who can point to 150 Mbps and honestly say that's insufficient for them. For some setups that may take four simultaneous streams, for others six, but it is definitely achievable. By not going crazy we are saving a lot of data transfers. AWS charges me for my network usage. A six-stream download test at 150 Mbps (Megabits per second) consumes about 325 MBytes of download data. If you're not careful with your switches, you can easily nudge that up to 1 GB of downloads for a single test.

My librespeed client-to-server tests ran overnight alongside my old approach using speedtest. The speedtest results are all over the place, with a bunch of zeroes for whatever reason, as is typical, while librespeed – and mind you, this is from a client in the US, going through a proxy, to a server in Europe – produced much more consistent results. In one case where the normal value was 130 Mbps, it dipped down to 110 Mbps.

Testing it out at home
Test from a home PC against my own librespeed server

I made my test URL on my AWS server private, but a public one is available at https://librespeed.de/

At home of course I want to test with a Raspberry Pi since I work with them so much. There is indeed a pre-built binary for Raspberry Pi. It is https://github.com/librespeed/speedtest-cli/releases/download/v1.0.9/librespeed-cli_1.0.9_linux_armv7.tar.gz

The problem with speedtest in more detail

There were two final issues with speedtest that were the straws that broke the camel’s back, and they are closely related.

When you resolve www.speedtest.net it hits a Content Distribution Network (CDN), and the returned results vary. For instance right now we get:

;; QUESTION SECTION:
;www.speedtest.net. IN A
;; ANSWER SECTION:
www.speedtest.net. 4301 IN CNAME zd.map.fastly.net.
zd.map.fastly.net. 9 IN A 151.101.66.219
zd.map.fastly.net. 9 IN A 151.101.194.219
zd.map.fastly.net. 9 IN A 151.101.2.219
zd.map.fastly.net. 9 IN A 151.101.130.219

Note that you can also run speedtest-cli with the --list switch to get a list of speedtest servers. So in my case I found some servers which produced good results. There was one where I even know the guy who runs the ISP and know he does an excellent job. His speedtest server is 15 miles away. But, in its infinite wisdom, speedtest sometimes thinks my server is in Louisiana, and other times thinks it's in New Jersey! So the returned server list is completely different for the two cases. And, even though each server gets assigned a unique number, and you can specify that number with the --server switch, it won't run the test if that particular server wasn't proposed to you in its initial listing. (It always makes a server-listing call, whether you specified --list or not, for its own purposes as to which servers to use.)

I tried some tricks to override this behaviour, but short of re-writing the whole thing, it was not going to work. I imagined I could force speedtest-cli to always use a particular IP address, overwriting the return from the fastly results, but getting that to work through a proxy was not feasible. On the other hand, if you suck it up and accept their randomly assigned server, you have to put up with a lot of garbage results.

So set up your own server, right? The --mini switch seems built to accommodate that. But the mini server was discontinued in 2017. The commercial replacement seemed to have some limits. So it's dead end upon dead end with speedtest.net.

Conclusion

An open source alternative to speedtest.net’s speedtest-cli has been identified and tested, both server and client. It is librespeed. It gives you a lot more control than speedtest, if that is your thing and you know a smidgeon of linux.

References and related

(2024 update) Cloudflare has a really nice speed test without all the bloat: https://speed.cloudflare.com/

Just to do your own test with your browser the way you do with speedtest.net: https://librespeed.de/

librespeed-cli: https://github.com/librespeed/speedtest-cli

librespeed-cli binaries download page: https://github.com/librespeed/speedtest-cli/releases

RPi version of librespeed-cli: https://github.com/librespeed/speedtest-cli/releases/download/v1.0.9/librespeed-cli_1.0.9_linux_armv7.tar.gz

The RPi I use for automatically power cycling my cable modem is hard-wired to my router and makes for an excellent platform from which to conduct these speedtests.

librespeed server: https://github.com/librespeed/speedtest . You basically can git clone it (just a bunch of js and php files) from: https://github.com/librespeed/speedtest.git

If, in spite of every positive thing I’ve had to say about librespeed, you still want to try the more commercial speedtest-cli, here is that link: https://www.speedtest.net/apps/cli

In this context a lot of people feel iperf is also worth exploring. It is not built into linux, but it is packaged for most distros.

To kick it up a notch for professional-class bandwidth and availability measurements, ThousandEyes is the way to go. This discussion is very enlightening: https://www.thousandeyes.com/blog/caveats-of-traditional-network-tools-iperf

Categories
Admin TCP/IP

Poor man’s port checker for Windows

Intro

Say you want to check if a tcp port is open from your standard-issue Windows 10 PC. Can you? Yes you can. I will share a way that requires the fewest keystrokes.

A use case

We wanted to know if an issue with a network drive mapping was a network issue. The suggestion is to connect to port 445 on the remote server.

How to do it

From a CMD prompt

> powershell

> test-netconnection 192.168.20.250 -port 445

ComputerName : 192.168.20.250
RemoteAddress : 192.168.20.250
RemotePort : 445
InterfaceAlias : Ethernet 3
SourceAddress : 192.168.1.101
TcpTestSucceeded : True

That's for the working case. If it can't establish the connection, it will take a while and the last line will be False.

What you can type to minimize keystrokes is

test-n <TAB> in place of test-netconnection. It will be expanded to the full thing.
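
Even fewer keystrokes: PowerShell ships tnc as a built-in alias for test-netconnection, if memory serves:

> tnc 192.168.20.250 -port 445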

Conclusion

On linux you have tools like nc (netcat), nmap, scapy and even telnet, that we network engineers have used for ages. On Windows the options may be more limited, but this is one good way to know of. In the past I had written about portqry as a similar tool for Windows, but it requires an install. This test-netconnection needs nothing installed.

References and related

scapy, a custom packet generation utility

portqry for Windows

Categories
Consumer Tech

Consumer Tech: Dyson Animal vacuum cleaner stops and starts

Intro

I am kind of annoyed that my simple problem with a Dyson Animal vacuum cleaner took way more Internet research to resolve than should have been the case. I hope to spare someone else that grief.

The details

Actually this is a friend’s vacuum cleaner. I don’t think it would have played out this way had I been the exclusive user of it. She noted that it stops and starts. It goes for a few seconds and stops. Then you can start it up for a few more seconds. And so on.

More clues

This model shows the charge remaining on the battery. Still two bars out of three, so that’s good. It’s been properly charged. After examining it I get the gut feeling that the brush motor shouldn’t draw all that much power.

We also note that when the long tube is removed and it's just acting as a short hand-held device, it doesn't stop.

After doing some Internet research (not finding an exact hit for this model), I am inspired to take things apart and check all the filters. On other models the filters can do you in. This one has been very lightly used to date and so the filters are remarkably clean. No filter issue.

The solution

So I check the brush and that area. It is simply clogged with gray dirt. I begin to clear it out with my fingers – seems no easy way to do it – until all the gray dirt is gone from the brush area where it goes up into the tube.

The results

The results are in. Works like a champ now!

Comment

I am more used to a more obscure brand of vacuum cleaner, Miele. It has a simple orange status indicator that visibly shows you when it can't suck in dirt due to a full bag, a clog, or whatever. In addition, you can pretty much hear the pitch of the motor's whine increase when there's a clog. With the Dyson I didn't notice that pitch change, nor was there any indicator of a clogged system. So Dyson's design is faulty.

Conclusion

A Dyson Animal vacuum cleaner was only running for a few seconds at a time. After a few more seconds it could run again. The power level showed two of three led bars. The filters were all clean, but the brush head was clogged with dirt. Cleaning that out fixed everything nicely.

Dyson’s design has to be faulted for not making this clogged situation more evident. Strange, coming from a company which obviously prides itself on its innovative design.

Categories
Security Web Site Technologies

Setting up lftp to do ftp over ssl

Intro

I have seen too much advice on the Internet for resolving the problem when one encounters errors of this sort while using the lftp client on linux:

mput: myfile.log: Fatal error: Certificate verification: unable to get local issuer certificate (3E:...)

or this one:

mput: myfile.log: Fatal error: Certificate verification: unable to get issuer certificate (4A:...)

You research that and get a lot of hits that recommend:

"set ssl:verify-certificate false inside the lftp command"

But, you know, security-wise that isn't such a hot approach. And you can do better with just a bit more effort.

The details

Examine what certificate the ftp server is using with this openssl command:

$ openssl s_client -showcerts -connect example.com:21 -starttls ftp

The private PKI scenario

I'm imagining a scenario where you are in a world where a private PKI reigns. In that case you just want to make sure lftp knows where to find the private root CA and possibly the intermediate CA.
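
A minimal sketch of what that might look like – the bundle path is hypothetical, and the file should hold your private root CA (plus any intermediate) certs concatenated in PEM format:

# in ~/.lftprc (or /etc/lftp.conf)
set ssl:ca-file "/etc/pki/tls/certs/my-private-ca-bundle.crt"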

To be continued…