If you've seen any of my many Raspberry Pi photo frame posts, you've seen me mention the Pi Display. I recently began having problems with my setup, after many months. I didn't know if it was the RPi or the display. My display was cracked from abuse anyway, so I took the plunge and got a new one. And I re-imaged my micro SD card. I'll be danged if my brand new display wasn't blanking out, even as the RPi was still booting! That was frustrating. Here's how I resolved it. I felt I should publish this article because it's a little hard to find this exact problem, with its solution, described on the Internet.
The details
The good news is that the Pi Display is still available, incredibly. It comes with absolutely zero documentation. It refers you to the element14 or Newark web sites, I forget which. But those sites just display very generic pages and I couldn't manage to find any real documentation. I assume the product is considered too old or unsupported or whatever. But since I had bought it already and it seems exactly like the one I was replacing, I went ahead and connected it as before.
Note that the display did not blank out during the installation of Raspbian Lite. That's a strong hint as to where the problem lies. But after installing that and rebooting, the Pi Display shows black and white horizontal lines, becomes fainter, and then just blanks out altogether. Rather distressing. I don't recall this problem from the older versions.
Equipment
I'm still using an old RPi model 3 I have lying around. I guess the new RPi 4 is OK, too. I believe it draws more power, though, so I actually kind of prefer the old RPi 3s. You can manage to power both the RPi 3 and the PiDisplay from a single power plug if you do it right.
The OS image is Raspberry Pi OS Lite, “bullseye.”
7″ Pi Display – see references.
Random power plug from Amazon. 1.8 amps.
The solution
First, note that I had installed Raspbian bullseye - the latest image as of January 2022. Also note that when connected to an HDMI display, this screen blank-out never occurs! What I needed to do was to connect the RPi to an HDMI display (my TV in my case) and run the sudo apt-get update and sudo apt-get upgrade commands. That recommendation you will find everywhere. In addition, I needed to run raspi-config. In there, navigate as follows:
Display Options > Screen blanking > Would you like to enable screen blanking?
Choose No
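If you prefer to script the change, raspi-config also has a non-interactive mode. I believe the equivalent one-liner is the following - the trailing 1 meaning blanking disabled - but verify it against your image:

sudo raspi-config nonint do_blanking 1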
I held my breath as I rebooted, and it seemed to want to blank, but then came back a half second later and the display has been illuminated ever since. Now it seems every time I reboot it blanks out as before, during boot, but then “remembers” it’s not supposed to do that and recovers within a second!
How to power both RPi and Pi Display from a single power source
You can connect a short micro USB cable from the USB output of the display to the power input of the RPi - that's what I do. You could also use the included leads, but I haven't messed with them.
However, note that – as I found out the hard way – not just any power adapter will do! I started with an Amazon adapter rated for 1 amp. Everything fired up fine, and even ran the slideshow for a bit, but then I guess the power demands were too great and the RPi spontaneously rebooted. So then I switched the system to a larger adapter, also from Amazon – 1.8 amps. So far so good with that one. If that failed, I was prepared with a 2 amp adapter – from a Samsung Galaxy phone. But it seems that will not be necessary.
And this is kind of the beauty of the RPi 3 – you can get away with stuff like that. Your RPi 4 could never do that. It’s a power hog.
Conclusion
The Pi Display is still viable! The price has gone up, but it works. As far as I know in fact it is the “official” display for an RPi. The RPi has a special tab to receive a ribbon cable that connects the two together.
The Pi Display will blank out after a few seconds with newer Raspberry Pi OS images, it seems, even though when paired with an HDMI display this never happens. But it's easy enough to fix this frustrating problem. Just set screen blanking to No in raspi-config. This ought to be better documented on the Internet.
References and related
The Pi Display is a 7″ touch-sensitive LCD display; its cost is now up to $90 (see the Raspberry Pi 7″ Touch Screen Display listing on Amazon). I don't recommend using its touch capability, but then I don't like that feature on any display, come to think of it. But this latest Pi Display is brighter than my older one. That makes it more visible and enjoyable to watch.
Of course I’m using my Raspberry Pi. I installed a free dictionary (sudo apt-get install wamerican).
I wanted to practice python, but not go crazy, so I cheated with some grooming using simple shell commands, e.g.,
$ egrep '^[a-z]{5}$' /usr/share/dict/words > words
That plucks out all the five-letter words and stores them in a local file called words. The original dictionary had about 100,000 “words.” There are about 4000 five-letter words left.
import sys,os,re
input = sys.argv[1]
output = []
for character in input:
    number = ord(character) - 96
    output.append(number)
#print(output)
sum=0
for character in input:
    number = ord(character) - 96
    sum += number
#print(sum)
newsum = 0
ends = ''
if sum == 51:
    #print(input)
    for i in input[3:]:
        number = ord(i) - 96
        newsum += number
        ends = ends + i
        #print(i,number,newsum,ends)
    if newsum < 27:
        newl = chr(newsum+96)
        newword = re.sub(ends,newl,input)
        print(input,newword)
This plucks out all the five-letter words whose characters add up to 51, adds up the values of the last two letters, and creates a new word by replacing those last two letters with the single letter having that combined value.
Now, go write your own program. I will share the answer when it's too late to submit - it comes towards the end of the list. It sticks out like a sore thumb - nothing else comes close. So if you just persist you'll see it.
Conclusion
I learned a teensy bit of python, my motivation being to solve the current NPR puzzle. And it worked! But my program was surprisingly slow. I guess I wrote it in an inefficient manner.
Imagine an infrastructure team empowered to create its own scripts to do such things as regularly update external dynamic lists (EDLs) or interact with APIs in an automated fashion. At some point they will want to have a meta script in place to check the output of all the automation scripts. This is something I developed to meet that need.
I am getting tired of perl, and I still don’t know python, so I decided to enhance my bash scripting for this script. I learned some valuable things along the way.
# The suggestion: To have a configuration file with log identifiers
# (e.g. "anydesk-edl") and per identifier: log file path ("/var/log/anydesk-edl.log"),
# error pattern (".+\[Error\].+"), start pattern (".+\[Notice\] Starting$"), end pattern (".+\[Notice\] Done$").
# Then just count number of executions (based on start/end) and number of errors.
# The start/end/error values are interpreted as extended regular expressions - see the regex(7) man page.
identifier: anydesk-edl
path: /var/log/anydesk-edl.log
error: .+\[Error\].+
start: .+\[Notice\] Starting$
end: .+\[Notice\] Done$
identifier: firewall-requester-to-edl
path: /var/log/firewall-requester-to-edl.log
error: .+\[Error\].+
start: .+\[Notice\] Starting$
end: .+\[Notice\] Done$
identifier: sase-ips-to-bigip
path: /var/log/sase-ips-to-bigip.log
error: .+\[Error\].+
start: .+\[Notice\] Starting$
end: .+\[Notice\] Done$
What this script does
So when the guy writes an automation script, he is so meticulous that he follows the same convention and hooks it into the syslogger to create a uniquely named log file for it. He writes out a [Notice] Starting when his script starts, and a [Notice] Done when it ends. And errors are reported with [Error] plus the details. Some of the scripts are called hourly. So we agreed to have a script that checks all the other scripts once a week and sends a summary email of the results. I look to see that the count of starts and ends is roughly the same, and I report back the ten most recent errors from a given script. I also look for other basic things. That's the purpose of the function anomalydetection in my script. It's just basic tests. I didn't want to go wild.
But what if there was a problem with one of the scripts, wouldn't we want to know sooner than possibly six days later? So I decided to have my script run every day, but only send email on the off days if an anomaly was detected. This made the logic a tad more complex, but nothing bash and I couldn't handle. It fits the need of an overworked operational staff.
Techniques I learned and re-learned from developing this script
cron scheduling - more to it than you thought
I used to naively think that it suffices to look into the crontab files of all users to discover all the scheduled processes. What I missed is thinking about how logrotate works. How does it work? Turns out there is another section of cron for jobs run daily, weekly and monthly. logrotate is called from cron.daily.
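On a stock Debian-type system you can see that dispatching in /etc/crontab, which contains entries along these lines (details vary a bit by release):

17 *    * * *   root    cd / && run-parts --report /etc/cron.hourly
25 6    * * *   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
47 6    * * 7   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )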
logrotate - potential to do more
The person who wrote the automation scripts is a much better scripter than I am. I didn't want to disappoint, so I put in the extra effort to discover the best way to call my script. I reasoned that logrotate would offer the opportunity to run side scripts, and I was absolutely right about that! You can run a script just before the logs rotate, or just after. I chose the just-before timing - prerotate. In actual fact logrotate calls the prerotate script with all the log files to be rotated as arguments, which you'll notice we don't take advantage of, because at the time we were unsure how we were going to interface. But I figure let's just leave it now. man logrotate to learn more.
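To give the flavor, here is a hypothetical stanza for /etc/logrotate.d/ with a prerotate hook - a sketch, not the actual config, and the path to checklogs.sh is made up:

/var/log/anydesk-edl.log {
    weekly
    rotate 5
    compress
    sharedscripts
    prerotate
        /home/user/checklogs.sh
    endscript
}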
By the way although I developed on a generic Debian system, it should work on a Raspberry Pi as well since it is Debian based.
BASH - the potential to do more, at a price
You'll note that I use some bash-specific extensions in my script. I figure bash is near universal, so why not? The downside is that when logrotate invokes an external script, it calls it using the old-fashioned shell. And my script does not work. Except I learned this useful trick:
if [ ! "$BASH_VERSION" ]; then
exec /bin/bash "$0" "$@"
exit
fi
Note this is legit syntax in plain shell, with a legit conditional expression. So it means: if you - and by you I mean the script talking about itself - were invoked via plain shell, then re-invoke yourself via bash. Since exec replaces the running process, the exit afterwards is just a safety net. And this actually does work (to do: have to check which occurs first, the syntax checking or the command invocation).
More
Speaking of that conditional, if you want to know all the major comparison tests, do a man test. I have come around to using the double-bracket expressions [[ more and more, though they are bash-specific I believe. The double bracket can be followed by a && and then an open curly brace { which introduces a block of code delimited, of course, by a close curly brace }. So for me this is an attractive alternative to shell's if condition; then code block; fi syntax, and probably just slightly more compact. Replace && with || to execute the code block when the condition does not evaluate to be true.
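A minimal sketch of that pattern, with a throwaway variable:

count=5
[[ $count -gt 3 ]] && {
    echo "count is large: $count"
}
# flip the sense with ||: the block runs only when the test fails
[[ $count -gt 3 ]] || {
    echo "count is small: $count"
}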
zgrep is grep for compressed files, but we knew that right? But it's agnostic - it works like grep on both compressed and uncompressed files. That's important because with rotated logs you usually have a combination of both.
Now the expert suggested a certain regular expression for the search string. It wasn't working in my first pass. I reasoned that zgrep may have a special mode to act more like egrep which supports extended regular expressions (EREs). EREs aren't really the same as perl-compatible regular expressions (PCREs) but for this kind of simple stuff we want, they're close enough. And sure enough zgrep has the -E option to force it to interpret the expression as an ERE. Great.
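So a single search across a log plus its rotated siblings, reusing the error pattern from log.ini, looks something like:

zgrep -E '.+\[Error\].+' /var/log/anydesk-edl.log*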
RegEx
So in the log.ini file the regular expression has a \[...\] syntax. The backslash is actually required because otherwise the [...] syntax is interpreted as a character class, where any one of the characters between the brackets is tried against a single character in the string to be matched. That's a very different match!
My big thing was - will I have to further escape those lines read in from log.ini, perhaps to replace a \[ with a \\[? Stuff like that happens. I found as long as I used those double quotes around the variables (see below) I did not need to further escape them. Similarly, I found that the EREs in log.ini did not need to be placed between quotes though the guy initially proposed that. It looks cleaner without them.
Variable scope
I wasted a lot of time on a problem which I thought may be due to some weird variable scoping. I've memorized this syntax cat file|while read line; do etc, etc so I use it a lot in my tiny scripts. It's amazing I got away with it as much as I have, because it has one huge flaw: if you start assigning variables within the loop you can't really suck them out, unless you write them to a file. So while at first I thought it was a problem of variable scoping - why do my loop variables have no values when the code comes out of the loop? - it really isn't that issue. It's that the pipe, |, creates a forked process which has its own variables. So to avoid that I switched to this weird syntax for line in $(<$INI); do etc. So it does the file reading as before but without the pipe and hence without the "variable scope" problem. (Strictly speaking, that for loop splits the file on any whitespace, not just newlines, so it only behaves line-by-line when lines contain no spaces; redirecting the file into the loop with done < $INI avoids the fork and reads true lines.)
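Here's a minimal demonstration of the trap and of the redirect-based escape, using log.ini as the input file:

# the trap: the pipe forks a subshell, so the count computed there is lost
total=0
cat log.ini | while read line; do
    total=$((total + 1))
done
echo $total    # prints 0!

# the escape: redirect into the loop - no fork, so the variable survives
total=0
while read line; do
    total=$((total + 1))
done < log.ini
echo $total    # prints the real line count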
But in another place in the script - where I add up numbers - I felt I could not avoid the pipe. So there I do write the value to a file.
The conclusion is that, with the caveat that you know what you're doing, all variables have global scope, and that's just as it should be. Hey, I'm from the old Fortran 66/77 school, where we were writing Monte Carlos with thousands of lines of code, dozens of variables in a COMMON block (global scope), and dozens of contributors. It worked just fine, thank you very much. Because we knew what we were doing.
Adding numbers in bash
Speaking of adding, I can never remember how to add numbers (integers). In bash you can do starts=$((starts + sline)) , where starts and sline are integers. At least this worked in Debian linux Stretch. I did not really get the same to work so well in SLES Linux - at least not inside a loop where I most needed it.
When you look up how to add numbers in bash there are about a zillion different ways to do it. I'm trying to stick to the built-in way.
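For the record, a self-contained example of the built-in arithmetic:

starts=0
for sline in 3 5 7; do
    starts=$((starts + sline))
done
echo $starts    # prints 15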
Sending mail in Debian linux
You probably need to configure a smarthost if you haven't used your server to send emails up until now. You have to reconfigure the exim4 package:
dpkg-reconfigure exim4-config
This also can be done on an RPi if you ever find you need it to send out emails.
Variables
If a variable includes linebreaks and you want to preserve them, put it between double quotes, e.g., echo "$myVariableWithLineBreaks". If you don't do that, the linebreaks seem to get removed. Use of the double quotes also seems to help avoid mangling variables that contain metacharacters found in regular expressions, such as .+ or \[.
Result of executing the commands
I grew up using the backtick metacharacter, `, to indicate that the enclosed command should be executed. E.g., old way:
DIR=`dirname $0`
But when you think about it, that metacharacter is small, and often you are unlucky and it sits right alongside a double quote or a single quote, making for a visual trainwreck. So this year I've come to love the use of $(command to be executed) syntax instead. It offers much improved readability. But then the question became, could I nest a command within a command, e.g., for my DIR assignment? I tried it. Now this kind of runs counter to my philosophy of being able to examine every single step as it executes because now I'm executing two steps at once, but since it's pretty straightforward, I went for it. And it does work. Hence the DIR variable is assigned with the compound command:
DIR=$(cd $(dirname $0);pwd)
So now I wonder if you can go more than two levels deep? Each level is an incrementally bad idea - just begging for undetectable mistakes, so I didn't experiment with that!
By the way the reason I needed to do that is that the script jumps around to another directory to create temporary files, and I wanted it to be able to reference the full path to its original directory, so a simpler DIR=$(dirname $0) wasn't going to cut it if it's called with a relative path such as ./checklogs.sh
Debugging
I make mistakes left and right. But I know what results I expect. So I generously insert statements as variables get assigned to double-check them, prefacing them with a conditional: [[ $DEBUG -eq 1 ]] && a print-out of those values. While I develop, DEBUG is set to 1. When it's finally working, I usually set it to 0, though in some scripts I never quite reach that point. It looks like a lot of typing, but it's really just cut and paste without over-thinking the variable dump, so it's very quick to type.
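Those debug lines are nothing fancier than this (the variable names here are just for illustration):

[[ $DEBUG -eq 1 ]] && echo "DEBUG: identifier is $identifier, starts is $starts, ends is $ends"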
Another thing I do when I'm stuck is to watch as the script executes in great detail by appending -xv to the first line, e.g., #!/bin/bash -xv. But the output is always confusing. Sometimes it helps though.
Techniques I'd like to use in the future
You can assign a function to a variable and then call that variable. I know that will have lots of uses but I'm not used to the construct. So maybe for my next program.
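I haven't used it in this script, but the basic idea is something like:

sayhello() { echo "hello, $1"; }
action=sayhello    # store the function's name in a variable
$action world      # calls the function: prints hello, world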
Conclusion
This fairly simple yet still powerful script has forced me to become a better BASH shell scripter. In this post I review some of the basics that make for successful scripting using the BASH shell. I feel the time invested will pay off as there are many opportunities to write such utility scripts. I actually prefer bash to perl or python for these tasks as it is conceptually simpler, less ambitious, less pretentious, yes, far less capable, but adequate for my tasks. A few rules of the road and you're off and running! bash lends itself to very quick testing cycles. Different versions of bash introduced additional features, and that gets trying. I hope I have found and utilized some of the basic stuff that will be available on just about any bash implementation you are likely to run across.
References and related
The nitty gritty details about BASH shell can be gleaned by doing a man bash. It seems daunting at first but it's really not too bad once you learn how to skim through it.
This post builds on the success of previous posts and uses elements from them. I don’t honestly expect anyone to repeat all the ingredients I have assembled here. But I have created them in a fairly modular way so you can pick out those elements which will help your project.
But, it is true, I have gotten the user experience of recording audio from, e.g., a band practice, down to a click of the ENTER button to start the recording, another click to stop it, and a click of the UP ARROW button to process the audio recording – turn it into a video – and upload it to YouTube, mark it as UNLISTED, and send the link to me in an email. Pretty cool if I say so myself. I am refining things as I write this to make it more reliable.
This write-up is not terribly detailed. It presumes at least a medium skill level with linux.
Ingredients
RPi 3 or RPi 4
Raspberry Pi OS desktop running the PIXEL desktop environment
TigerVNC, i.e., the package tigervnc-scraping-server
chromium-browser (it comes with the desktop OS)
xdotool (apt-get install xdotool)
xsel (apt-get install xsel)
YouTube account
crontab entries – see below
you do not need an HDMI display, except for the OS setup
#!/bin/bash
# DrJ 8/2021
# Control the livestream of audio to youtube
# works in conjunction with an attached keyboard
# I use bash interpreter to give me access to RegEx matching
HOME=/home/pi
log=$HOME/audiocontrol.log
program=ffmpegwireless9.sh
##program=tst.sh # testing
PGM=$HOME/$program
# de-press ENTER button produces this:
matchE="1, 28, 0"
# up arrow
matchU="1, 103, 0"
epochsOld=0
cutoff=3 # seconds
DEBUG=1
ledtime=10
#
echo "$0 starting monitoring at "$(date)
# Note the use of script -q -c to avoid line buffering of the evread output
script -q -c $HOME/evread.py /dev/null|while read line; do
  [[ $DEBUG -eq 1 ]] && echo line is $line
  # seconds since the epoch
  epochs=$(date +%s)
  elapsed=$((epochs-$epochsOld))
  if [[ $elapsed -gt $cutoff ]]; then
    if [[ "$line" =~ $matchE ]]; then
      # ENTER button section - recording
      echo "#################"
      echo We caught this input: $line at $(date)
      # see if we are already running our recording program or not
      pgrep -f $program>/dev/null
      # 0 means it's been found
      if [ $? -eq 0 ]; then
        # kill it
        echo KILLING $program
        pkill -9 -f $program; pkill -9 arecord; pkill -9 ffmpeg
        pkill -9 -f blinkLED
        echo Shine the PWR LED
        $HOME/shineLED.sh
      else
        # start it
        echo Blinking PWR LED
        $HOME/blinkLED.sh &
        echo STARTING $PGM
        $PGM > $PGM.log.$(date +%m-%d-%y:%H:%M) 2>&1 &
      fi
      epochsOld=$epochs
    elif [[ "$line" =~ $matchU ]]; then
      # UP ARROW button section - processing
      echo "###########"
      echo processing commencing at $(date)
      $HOME/blinktwiceLED.sh &
      echo start processing of the recording
      $HOME/process.sh >> process.log 2>&1
      pkill -9 -f LED
      $HOME/shineLED.sh
      epochsOld=$epochs
    fi
    [[ $DEBUG -eq 1 ]] && echo No action taken. Continue to listen
  fi
done
#!/usr/bin/bash
HOME=/home/pi
sleeptime=5
cd $HOME
# loop over all mp3 files in home directory
ls -1 record*mp3|while read line;do
  echo working on $line at $(date)
  video=$(echo ${line}|sed 's/mp3/flv/')
  echo creating flv video file $video
  # create the video first
  ./mp32flv.sh $line
  echo move $line to mp3 directory
  [[ -d mp3s ]] || mkdir mp3s
  mv $line mp3s
  echo mv flv to upload directory
  [[ -d 00uploads ]] || mkdir 00uploads
  mv $video 00uploads
  echo start the upload
  ./auto-upload.sh
  echo get the url to this video on YouTube
  url=$(cat clipboard|xargs -0 echo)
  echo test that it worked
  if [[ ! "$url" =~ "http" ]]; then
    echo FAIL. Try once again
    ./auto-upload.sh
  fi
  echo send mail to Drj
  ./announceit.sh
  echo move video $video to flvs directory
  mv ./00uploads/$video flvs
  echo sleep for a bit before starting the next one
  sleep $sleeptime
done
echo All done with processing at $(date)
#!/bin/sh
# DrJ 8/30/2021
# https://www.jeffgeerling.com/blogs/jeff-geerling/controlling-pwr-act-leds-raspberry-pi
# put LED into GPIO mode
echo gpio | sudo tee /sys/class/leds/led1/trigger > /dev/null
# flash the bright RED PWR (power) LED quickly to signal whatever
while /bin/true; do
  echo 0|sudo tee /sys/class/leds/led1/brightness > /dev/null
  sleep 0.5
  echo 1|sudo tee /sys/class/leds/led1/brightness > /dev/null
  sleep 0.5
done
#!/bin/sh
# DrJ 8/30/2021
# https://www.jeffgeerling.com/blogs/jeff-geerling/controlling-pwr-act-leds-raspberry-pi
# put LED into GPIO mode
echo gpio | sudo tee /sys/class/leds/led1/trigger > /dev/null
# turn on the bright RED PWR (power) LED
echo 1|sudo tee /sys/class/leds/led1/brightness > /dev/null
#!/bin/sh
# DrJ 8/30/2021
# https://www.jeffgeerling.com/blogs/jeff-geerling/controlling-pwr-act-leds-raspberry-pi
# put LED into GPIO mode
echo gpio | sudo tee /sys/class/leds/led1/trigger > /dev/null
# flash the bright RED PWR (power) LED quickly to signal whatever
while /bin/true; do
  echo 0|sudo tee /sys/class/leds/led1/brightness > /dev/null
  sleep 3
  echo 1|sudo tee /sys/class/leds/led1/brightness > /dev/null
  sleep 0.35
  echo 0|sudo tee /sys/class/leds/led1/brightness > /dev/null
  sleep 0.35
  echo 1|sudo tee /sys/class/leds/led1/brightness > /dev/null
  sleep 0.35
done
The recordswitch.sh script waits for input from the remote controller. It is programmed to kick off ffmpegwireless9.sh if the ENTER button is pushed, or process.sh if the UP ARROW button is pushed.
For testing purposes you may want to run process.sh by hand, i.e., ./process.sh, while you are viewing the display using a VNC viewer alongside the terminal screen.
The scripts are quite verbose and give lots of helpful output in their log files.
Upgrading from Raspberry Pi Lite to Raspberry Pi Desktop
Unfortunately the plugin I use to display code inserts a blank line at the top of each listing. Those should all be removed.
After getting all the scripts, make them all executable in one go with a command such as chmod +x *sh
To read the input from the remote controller you need to set up evread.py and there may be some python work to do. This post has those details.
The chromium browser needs to be run by hand one time over your VNC viewer. Its size has to be shrunk to 50% by typing Ctrl+Shift+minus about four times. You need to log in to your YouTube or Gmail account so it remembers your credentials. And you need to go through the motions of uploading a video so it knows to use the 00uploads directory next time.
Don’t run a recording and an upload at the same time. I think the CPU would be taxed so I did not test that out. But you can record one day – even multiple recordings, and upload them a day or days later. That should work OK. It just processes the files one at a time, hopefully (untested).
announceit.sh is pretty dodgy. You have to understand SMTP mail somewhat to have a fighting chance of getting it to work. Fortunately I was an SMTP admin previously. So my ISP, Optimum, has a filter in place which prevents ordinary residential customers from sending out normal email to arbitrary SMTP addresses. However, to my surprise, they do run a mail relay server which you can connect to on the standard tcp port 25. I don't really want to give it away, but you can find it with the appropriate Internet search. I assume it is only for Optimum customers. Perhaps your ISP has something similar. So after you install exim4, you can configure a "smarthost" with the command dpkg-reconfigure exim4-config. But, again, you have to know a bit about what you are doing. Suffice it to say that I got mine to work.
But for everyone else who can't figure that out, just comment out the ./announceit.sh line in process.sh - put a # character at the front of that line.
I have really only tested recordings of up to 45 minutes. I think an hour should be fine. I would suggest to break it up for longer.
The files can take a lot of space so you may need to clean up older files if you are a frequent user.
I’ve had about one failure during the upload out of about seven tests. So reliability is pretty good, but probably not perfect.
Why not just livestream? True, it's sooo much easier. And I've covered how to do that previously. But, maybe it's my WiFi, but its reliability was closer to 50% in my actual experience. I needed greater reliability, and it turns out I didn't need the live aspect of the whole thing, just the recording for later critiquing.
The recording approach I've taken uses ffmpeg to directly produce an mp3 file - it's more compact than a WAV file. In and of itself the mp3 file may be useful to you, e.g., to include as an attachment in an email - for instance for a single song. All the mp3s are finally stored in a folder called mp3s, and all the videos are finally stored in a folder called flvs.
About that upload
The upload itself is super awesome to watch. I captured an actual automated upload with the script running on the right and the X Window display on the left in this YouTube video.
Somehow I managed to use some of these tools the other day and my mp3s ended up sounding like Alvin and the Chipmunks! I wondered if there was a way to recover them. I found there is, though I had to develop it a bit. It uses the new-ish rubberband filter of ffmpeg. I call this tiny script dechipmunk.sh:
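The listing itself got dropped from the post. Here is a minimal reconstruction, consistent with the filter expression discussed below; the argument handling is my guess, and your ffmpeg must be built with librubberband:

#!/bin/sh
# dechipmunk.sh - slow the tempo and lower the pitch, each by a factor of three
# usage: ./dechipmunk.sh input.mp3 output.mp3
ffmpeg -i "$1" -filter:a "rubberband=pitch=0.3333:tempo=0.3333" "$2"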
In my case I had to slow things down and lower the pitch by the same factor: one third, hence the 0.3333.
How to pass multiple options to an ffmpeg filter
In doing the above I had to work out the syntax for passing more than one option to the new rubberband filter of ffmpeg. I wanted to specify both the pitch and the tempo options. So you see from above they had to be separated with a colon and the whole filter expression enclosed in quotes. Hence the funny-looking
"rubberband=pitch=0.3333:tempo=0.3333"
Future development
Well, I’m thinking of removing the chit-chat from the recording in an automated fashion. That may mean applying machine learning, or maybe something simpler if someone has covered this territory before for the RPi. But it might be a good excuse to do a shallow dive into machine learning.
Conclusion
I’m sure this method of YouTube upload is very flaky and will probably only work once or twice, if at all. But at least in my trials, it did work a few times. So perhaps it could be hardened and made more error-correcting. There are a lot of moving parts for it all to work. But it’s definitely cool to watch it go when it is working!
This post promises more than it actually delivers, ha, ha. It is squarely aimed at the mid-level to advanced RPi enthusiast. Most who read this will get discouraged and look for another solution. I did the same, in fact, and I will review my failures with the alternatives.
The essence of this approach is screen automation with a very nice tool called xdotool. For me it works. It will definitely, 100% require some tweaks for anyone else. This is not run a few installs, copy this code and you're good to go. But if you have the patience, you will be rewarded with either fully or at least semi-automated video uploads to YouTube from your Raspberry Pi.
One caveat. Please obey YouTube’s terms of service. In other words, don’t abuse this! As soon as someone starts using this method in an aggressive or abusive fashion, we will all lose this capability. They have crack security experts and could squelch this approach in a heartbeat.
I actually don't have all the pieces in place for myself, but I have enough cool stuff that I wanted to begin to share my findings.
One beautiful thing about what I'm going to show is that you get to see the cursor moving about the screen in response to your automated commands - you see exactly what it's doing, which screens it's clicking through, etc. So if there's an issue - say YouTube changes its layout - you'll most likely be able to know how to adapt.
What’s wrong with using YouTube’s api?
Plenty. It used to be feasible. It certainly would make all our lives a lot easier. But YouTube is not a charity. They have squeezed out the little guy by making the barrier to entry so high that it's really only available to highly determined IT folks. It's just too difficult to figure out all the needed screens, etc, and all the help guides refer to older api versions where things were different. YouTube clamped down in July 2020 on who or what can use their api. There are a lot of old HowTos pre-dating that which will just lead you to dead-ends. So, go ahead, I dare you to stop reading this and use the api and report back. Maybe you manage to create a project, great, and an api key, great, and even to assign your api key the correct YouTube-specific permissions - all great, and associate credentials - super, and finally borrow someone's code to upload a video - been there, done that. That video will be listed private. So then you try to root around to see what you have to do to make it public. Ah, a project review. Great. You were only in test mode. So now you're confronted with this form. If they cared about the little guy there would be a radio button - "I only wish to upload a few videos a week for a small cadre of users, spare me the bureaucracy," and that'd be it. But, no... Are you applying for a quota? Huh? I just want to upload a video and have it marked as unlisted. Some users remarked they filled out the form, never got their project reviewed and never heard back. Maybe they're the exception, I don't know. It's just over the top for me, so I give up.
OK. So, maybe YoutubeUploader?
Nope. Doesn’t work. It’s based on the old stuff.
OK. What about that guy’s api-less Node.js uploader?
Maybe. I could not get it to work on RPi. But I didn’t try super hard. I just like rolling my own, frankly. My approach is much more transparent. At least this approach inspired me to imagine the approach I am about to share. Because I believe the Node.js guy is just doing screen scraping but you can’t even see the screens.
Or simply do a Livestream?
Agreed. Livestreaming is quite straightforward by comparison with what I developed. My blog post about one click livestreaming covers it. But I have not had good results with reliability. As often as it works, it doesn’t work. With this new approach I’m going to try to create separate steps so that if anything goes wrong, an individual step can be re-run. Another advantage of separating steps is that a recording can be done “in the field” and without WiFi access. Remember an RPi 3 works great for hours with a decent portable USB battery that’s normally used for phones. Then the resulting recording can be converted to video and uploaded once the RPi is back to its usual WiFi SSID.
Preliminary upload, October 2021
In the video below the right screen is a terminal window showing what the script is doing. It needs some tweaking, and the YouTube window gets stuck so it’s not showing some of the screens. But it’s already totally awesome – and it worked!
@reboot sleep 15; /home/pi/recordswitch.sh > recordswitch.log 2>&1
# launch vnc server on display 1
@reboot sleep 65;x0vncserver -localhost no -passwordfile ~/.vnc/passwd -display :0 > x0vncserver.log 2>&1
Work with chromium the first time by hand. As I recall you should:
Create a directory like 00uploads – so it appears highest in the list
put a single video in 00uploads
Do an upload by hand (to help chromium remember to choose this upload directory)
launch chromium browser
log into your YouTube account at https://studio.youtube.com
shrink the browser until its size is 50% (Ctrl+Shift+minus, about four times)
Don’t add other tabs and stuff to Chromium
Then subsequent launches of chromium should remember a bunch of these settings, specifically, your login info, the shrunken size, the upload directory, and maybe the (lack of) other tabs.
The beauty of this approach is that it is more transparent than the alternatives. You see exactly what your program is doing. You can issue the xdotool commands by hand to, e.g., change up the coordinates a little bit. Or even enter a video title.
So getting back to the idea, the automation idea is to finish a video somehow, then move it to the 00uploads directory, invoke this uploader program, then move it to an uploaded directory or some such.
Imagine the versatility if I used my remote controller for RPi to map one button for audio recording, and a second button for automating video upload! Well, when I find the time that’s what I plan to do. I will make a separate post where the recording and uploading are shown – more or less the culmination of all the pieces.
Oh, and back to the idea again, I wanted to share the unlisted link with band members. So, you see how the link is basically the result of xsel -b, since xsel copied the clipboard, which contained the YouTube URL for the video we just uploaded? I have to fix up the parsing because some junk characters are getting included, but I plan to email that link to myself first, where I will do a brief manual check, and then forward it to the rest of the band. So, again, it's really cool that we could even think to pull that off with this simplistic approach.
Techniques developed for this project
Lots.
I "discovered" - in the sense that Columbus discovered America - xdotool as an amazing X Windows screen automation tool. I knew of autohotkey for Windows, so I inquired what was like it for X Windows. I further learned that xdotool is generally broken when it comes to use with traditional VNC servers such as the native tightvncserver. It simply doesn't work. But TigerVNC is a scraping server, so it shares your console screen and makes it available via the VNC protocol. That's required because to develop this approach you have to see what you're doing. All those coordinates? They come from experimentation.
I also learned how to embed a YouTube video in my blog post. In fact this is the very first video I made for a blog post. So I did a screen recording for the first time, using a screen recorder for Windows.
I landed on the idea of a side-by-side video showing my terminal running the automated script in one window and the effect it is having on the chromium browser in the other window running on the RPi.
I put a wrapper around xdotool to make things cleaner. (But it’s not done yet.)
I changed to two-factor authentication to see if it made a difference. It did not. It still remembers the authentication, thankfully, at least for a few days. I wonder for how long though. Hmm.
Kiosk mode. By launching chromium with kiosk mode it not only gives us more screen real estate to work in, it in principle should also permit you to interact with chromium in a regular fashion and still have it come up in a known, fixed position, which is an absolute requirement of this approach. All the buttons have to have the same coordinates from invocation to invocation.
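In other words, launching it along the lines of:

chromium-browser --kiosk https://studio.youtube.com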
I also developed ffmpeg-based converters: one takes wav files and converts them to mp3s (a nice compact format; wav files are space hogs), and another takes mp3 files, adds a gray screen, and converts them to flv ([Adobe] Flash Video, I guess - a compact video file format which YouTube accepts).
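Those converter scripts aren't reproduced here; their cores presumably boil down to something like this - filenames and frame size are illustrative:

# wav -> mp3
ffmpeg -i input.wav -codec:a libmp3lame -qscale:a 2 output.mp3
# mp3 -> flv: synthesize a gray video track, stop when the audio ends
ffmpeg -f lavfi -i color=c=gray:s=640x360:r=5 -i input.mp3 -c:v flv -c:a copy -shortest output.flv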
I also learned the ffmpeg command to tell exactly how long a recording is.
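The exact command isn't shown; one common way, using ffprobe (the companion tool that ships with ffmpeg), prints the duration in seconds:

ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 recording.mp3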
I also learned how to turn off blocking in ffmpeg so that it's constantly writing packets and thus not losing audio data at the end when stopped.
I came up with the idea of randomizing the sleep time between clicks to make it seem more human-like.
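One simple way to get that in bash - a sketch, not necessarily what the wrapper does:

# pause a random 2.0-3.9 seconds between clicks
sleep "$((2 + RANDOM % 2)).$((RANDOM % 10))"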
And mostly for the purpose of demonstrations, though it also greatly helps in debugging, I introduced around two seconds of sleep both before and after a command is issued. That really makes things a lot clearer.
The results of the clipboard, xsel -b, contains null bytes. I had trouble parsing it to pull out just the url, but finally landed on using xargs -0 which is designed to parse null-delimited strings. And it worked! This was a late edition and did not make it into the video, but is in the provided script above towards the bottom.
ffmpeg chokes on too-complicated filenames. Who knew? I had files containing colon (:) and dash (-) characters, which work perfectly fine in linux, but ffmpeg was apparently interpreting the part of the filename before the colon as a protocol specifier. The way out of that mess was to introduce file: in front of the filename.
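That is, something along these lines, with a made-up filename:

ffmpeg -i file:recording-10-05-21:30.mp3 output.flv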
I’ll probably put my ffmpeg tricks into my next post because I want to keep this one lean and focused on this one upload automation topic.
Here I’ve combined work I’ve done previously into one single useful application: I can initiate the live streaming of our band practice on YouTube with the click of a single button on a remote control, and stop it with another click.
Equipment
Raspberry Pi 3 or 4 with Raspberry Pi OS, e.g., Raspbian Lite is just fine
Logitech webcam or USB microphone
USB extender (my setup needed this, others may not)
Universal USB-based remote control – see references for a known good one
Method 1
In this method I rapidly blink the onboard red power (PWR) LED of the RPi while streaming is active. Outside of those times it is a solid red. This is my preferred mode – it’s a very visible sign that things are working. I am very excited about this approach.
#!/bin/sh
# DrJ 8/30/2021
# https://www.jeffgeerling.com/blogs/jeff-geerling/controlling-pwr-act-leds-raspberry-pi
# put LED into GPIO mode
echo gpio | sudo tee /sys/class/leds/led1/trigger > /dev/null
# flash the bright RED PWR (power) LED quickly to signal whatever
while /bin/true; do
  echo 0|sudo tee /sys/class/leds/led1/brightness > /dev/null
  sleep 0.5
  echo 1|sudo tee /sys/class/leds/led1/brightness > /dev/null
  sleep 0.5
done
#!/bin/sh
# DrJ 8/30/2021
# https://www.jeffgeerling.com/blogs/jeff-geerling/controlling-pwr-act-leds-raspberry-pi
# put LED into GPIO mode
echo gpio | sudo tee /sys/class/leds/led1/trigger > /dev/null
# turn on the bright RED PWR (power) LED
echo 1|sudo tee /sys/class/leds/led1/brightness > /dev/null
#!/bin/bash
# DrJ 8/2021
# Control the livestream of audio to youtube
# works in conjunction with an attached keyboard
# I use bash interpreter to give me access to RegEx matching
HOME=/home/pi
log=$HOME/audiocontrol.log
program=continuousaudio.sh
##program=tst.sh # testing
PGM=$HOME/$program
# de-press ENTER button produces this:
match="1, 28, 0"
epochsOld=0
cutoff=3 # seconds
DEBUG=1
ledtime=10
#
echo "$0 starting monitoring at "$(date)
# Note the use of script -q -c to avoid line buffering of the evread output
script -q -c $HOME/evread.py /dev/null|while read line; do
  [[ $DEBUG -eq 1 ]] && echo line is $line
  # seconds since the epoch
  epochs=$(date +%s)
  elapsed=$((epochs-$epochsOld))
  if [[ $elapsed -gt $cutoff ]]; then
    if [[ "$line" =~ $match ]]; then
      echo "#################"
      echo We caught this input: $line at $(date)
      # see if we are already running continuousaudio or not
      pgrep -f $program>/dev/null
      # 0 means it's been found
      if [ $? -eq 0 ]; then
        # kill it
        echo KILLING $program
        pkill -9 -f $program; pkill -9 ffmpeg
        pkill -9 -f blinkLED
        echo Shine the PWR LED
        $HOME/shineLED.sh
      else
        # start it
        echo Blinking PWR LED
        $HOME/blinkLED.sh &
        echo STARTING $PGM
        $PGM > $PGM.log.$(date +%m-%d-%y:%H:%M) 2>&1 &
      fi
      epochsOld=$epochs
    fi
    [[ $DEBUG -eq 1 ]] && echo No action taken. Continue to listen
  fi
done
The crontab entry and the referenced files are the same as in Method 2.
Method 2
In method 2 I flash the built-in LED on the webcam for a few seconds before starting the audio, and again when the streaming has terminated – as visible signal that the button press registered.
#!/bin/bash
# DrJ 8/2021
# Control the livestream of audio to youtube
# works in conjunction with an attached keyboard
# I use bash interpreter to give me access to RegEx matching
HOME=/home/pi
log=$HOME/audiocontrol.log
program=continuousaudio.sh
##program=tst.sh # testing
PGM=$HOME/$program
# de-press ENTER button produces this:
match="1, 28, 0"
epochsOld=0
cutoff=3 # seconds
DEBUG=1
ledtime=10
#
echo "$0 starting monitoring at "$(date)
# Note the use of script -q -c to avoid line buffering of the evread output
script -q -c $HOME/evread.py /dev/null|while read line; do
  [[ $DEBUG -eq 1 ]] && echo line is $line
  # seconds since the epoch
  epochs=$(date +%s)
  elapsed=$((epochs-$epochsOld))
  if [[ $elapsed -gt $cutoff ]]; then
    if [[ "$line" =~ $match ]]; then
      echo "#################"
      echo We caught this input: $line at $(date)
      # see if we are already running continuousaudio or not
      pgrep -f $program>/dev/null
      # 0 means it's been found
      if [ $? -eq 0 ]; then
        # kill it
        echo KILLING $program
        pkill -9 -f $program; pkill -9 ffmpeg
        sleep 1
        echo turn on led for a few seconds
        $HOME/videotst.sh &
        sleep $ledtime
        pkill -9 ffmpeg
      else
        # start it
        echo turn on led for a few seconds
        $HOME/videotst.sh &
        sleep $ledtime
        pkill -9 ffmpeg
        sleep 1
        echo STARTING $PGM
        $PGM &
      fi
      epochsOld=$epochs
    fi
    [[ $DEBUG -eq 1 ]] && echo No action taken. Continue to listen
  fi
done
I press the Enter button once on the remote to begin the livestream to YouTube. I press it a second time to stop.
By extension this could also control other programs as well (like the photo frame). And other keys could be mapped to other functions. Record-only, don’t livestream, anyone?
I want to do these things because it's a little tight in the room where I want to livestream - hard to get around. So this keeps me from having to squeeze past other people to access the RPi to, for instance, power cycle it. In my previous treatment, I had livestreaming start up as soon as the RPi booted, which meant it would only stop when it was similarly powered off - which I found somewhat limiting.
The purpose of videotst.sh in Method 2
videotst.sh serves almost no purpose whatsoever! It can simply be commented out. It’s somewhat specific to my webcam.
You see, I wanted to get some feedback that when I pressed the ENTER button on the remote control, the RPi had read that and was trying to start the livestream. I thought of flashing one of the built-in LEDs on the RPi. I still need to look into that.
With the robotics team we had soldered on an external LED onto one of the GPIO pins, but that’s way too much trouble.
So what videotst.sh does for me is to engage the webcam, specifically its video component, throwing away the actual video but with the net result that the webcam’s built-in green LED illuminates for a few seconds! That lets me know, “Yeah, your button press was registered and we’re beginning to start the livestream.” You see, because when you run ffmpegwireless6.sh with this webcam, it’s all about the audio. It only uses the audio of the webcam and thus the green “in use” LED never illuminates, unfortunately, while it is livestreaming the pure audio stream. So, similarly, when you press ENTER a second time to stop the stream I illuminate the webcam’s LED for a few seconds by using videotst.sh once again.
Techniques developed for this project
evread.py does some nasty buffering of its output, meaning, although it does read the key presses on the remote, it holds the results "close to its chest," and then spits them out, all at once, when the buffer is full. Well, that totally defeats the purpose needed here, where I want to know if there's been a single click. After some insightful Internet searches (note that I did not use Google as a verb, a practice I carry into my personal communication) I discovered the program script, which, when armed with the arguments -q -c, allows you to unbuffer the output of a program! And it actually works. Cool.
And I made the command decision to “eat” the input. You see the timer of 3 seconds in broadcastswitch.sh? After you’ve done any button press it throws away any further button presses for the next three seconds. I just think that’ll reduce the misfires. In fact, I might take up the practice of double-clicking the ENTER button just to be sure I actually pressed it.
I'm using the double bracket notation more in my bash scripts. It permits use of a RegEx comparison operator, =~. I love regular expressions. I prefer the perl style, PCRE, while this uses extended regular expressions, ERE. But I suppose those are good as well.
Getting control over the power LED was a nice coup. I'm only disappointed that you cannot control its brightness - in the dark it throws off quite a bit of light.
The green LED does not seem nearly as bright so I chose not to play with it. What I don’t want is to have to strain to see whether the thing is livestreaming or not.
Of course getting the whole remote control thing to work at all is another great advancement.
Techniques still to be developed
I still might investigate using voice-driven commands in place of a remote. Obviously, that’s a big nut to crack. Even if I managed to turn it on, turning it off while ffmpeg has commandeered the audio channel is even harder. I wonder if ffmpeg can split the audio stream so another process can be run alongside it to listen for voice commands? Or if an upstream process in front of ffmpeg could be used for that purpose? Or simply run with two microphones (seems wasteful of material)?? Needs research.
Suppose you want to take this on the road? Internet service can be unreliable after all. It's well known you can power the RPi 3 for many hours with a small portable battery. So how about mapping a second button on the remote to a record-only mode (using the arecord utility, for instance)? Then you can upload the audio at a time of your convenience. That's something I can definitely program if I find I need it.
Lingering Problems with this approach
Despite all the care I’ve taken with the continuousaudio.sh script, still, there are times when YouTube does not show that a livestream is going on. I have no idea why at this point. If I knew the cause, I’d have fixed it!
As the livestream aspect of this is actually immaterial to me, I will probably switch to a pure recording mode where I upload in a later step - perhaps all done by the remote control for pure convenience.
Since this blog post has become popular, I may keep it preserved as is and start a new one for this recording approach as some people may genuinely be interested in the livestream aspect.
A very rough estimate of the failure rate is maybe as high as 50% but probably no lower than 25%. So, not great odds if you’re relying on success.
There's another issue which I consider more minor. The beginning of the stream always sounds like a tape played on fast forward for a few seconds. The end also gets cut off a few seconds early, I think.
Conclusion
We have presented a novel approach to livestreaming on a Raspberry Pi 3 using a remote control for added convenience. All the techniques were home-developed at drjohnstechtalk.com. The materials don’t cost much and it really does work.
I have always been somewhat agog at the idea of limiting bandwidth on my linux servers. Users complain about slow web sites and you want to try it for yourself, slowing your connection down to meet the parameters of their slower connection. More recently I happened on librespeed, an alternative to speedtest.net, where you can run both server and client. But in order to avoid transferring too much data and monopolizing the whole line, I wanted to actually put in some bandwidth throttling. I began an exploration of available methods to achieve this and found some satisfactory approaches that are readily available on Redhat-type linuxes.
bandwidth throttling, bandwidth rate limiting, bandwidth classes – these are all synonyms for what is most commonly called traffic shaping.
What doesn’t work so well
I think it’s important to start with the walls that I hit.
Cgroup
I stumbled on cgroups first. The man page starts in a promising way
cgroup - control group based traffic control filter
Then after you research it you see that support was enabled for cgroups in linux kernels already long ago. And there is version 1 and 2. And only version 1 supports bandwidth limits. But if you’re just a mid-level linux person such as myself, it is confusing and unclear how to take advantage of cgroup. My current conclusion is that it is more a subsystem designed for use by systemctl. In fact if you’ve ever looked at a status, for instance of crond, you see a mention of a cgroup:
sudo systemctl status crond
● crond.service - Command Scheduler
   Loaded: loaded (/usr/lib/systemd/system/crond.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2021-08-09 15:44:24 EDT; 5 days ago
 Main PID: 1193 (crond)
    Tasks: 1 (limit: 11278)
   Memory: 2.1M
   CGroup: /system.slice/crond.service
           └─1193 /usr/sbin/crond -n
I don’t claim to know what it all means, but there it is. Some nice abilities to schedule and allocate finite resources, at a very high level.
So I get the impression that no one really uses cgroups to do traffic shaping.
apache web server to the rescue – not
Since I was mostly interested in my librespeed server and controlling its bandwidth during testing, I wondered if the apache web server has this capability built-in. Essentially, it does! There is the module mod_ratelimit. So, quest over, and let the implementation begin! Except not so fast. In fact I did enable that module. And I set it up on my librespeed server. It kind of works, but mostly, not really, and nothing like its documented design.
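Paraphrasing the Apache documentation, the example configuration is essentially this, with the 400 meaning KiB/s:

<Location "/downloads">
    SetOutputFilter RATE_LIMIT
    SetEnv rate-limit 400
</Location>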
That’s their example section. I have no interest in such low limits and tried various values from 4000 to 12000. I only got two different actual rates from librespeed out of all those various configurations. I could either get 83 Mbps or around 162 Mbps. And that’s it. Merely having any statement whatsoever starts limiting to one of these strange values. With the statement commented out I was getting around 300 Mbps. So I got rate-limiting, but not what I was seeking and with almost no control.
So the apache config approach was a bust for me.
Trickle
There are some linux programs that are perhaps promoted too heavily? Within a minute of posting my first draft of this someone comes along and suggests trickle. Well, on CentOS yum search trickle gives no results. My other OS was SLES v 15 and I similarly got no results. So I’m not enamored with trickle.
tc – now that looks promising
Then I discovered tc – traffic control. That sounds like just the thing. I had to search around a bit on one of my OSes to find the appropriate package, but I found it. On CentOS/Redhat/Fedora the package is iproute-tc. On SLES v15 it was iproute2. On FreeBSD I haven’t figured it out yet.
But it looks unwieldy to use, frankly. Not, as they say, user-friendly.
tcconfig + tc – perfect together
Then I stumbled onto tcconfig, a python wrapper for tc that provides convenient utilities and examples. It’s available, assuming you’ve already installed python, through pip or pip3, depending on how you’ve installed python. Something like
$ sudo pip3 install tcconfig
I love the available settings for tcset - just the kinds of things I would have dreamed up on my own. I wanted to limit download speeds, and only on the web server running on port 443, and only from a specific subnet. You can do all that! My tcset command went something like this:
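(Reconstructed - the original command was lost from the post; the interface name and subnet here are invented.)

sudo tcset eth0 --direction outgoing --rate 150Mbps --src-port 443 --dst-network 192.0.2.0/24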
More importantly – does it work? Yes, it works beautifully. I run a librespeed cli with three concurrent streams against my AWS server thusly configured and I get around 149 Mbps. Every time.
Note that things are opposite of what you first think of. When I want to restrict download speeds from a server but am imposing traffic shaping on the server (as opposed to on the client machine), from its perspective that is upload traffic! And port 443 is the source port, not the destination port!
Raspberry Pi example
I'm going to try regular librespeed tests on my home RPi, which is cabled to my router to do the Internet monitoring. So I'm trying something like the following.
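(The exact command didn't make it into the post either; given the results below it presumably capped downloads at about 100 Mbps. The interface name is assumed; tcshow displays the active rules.)

sudo tcset eth0 --direction incoming --rate 100Mbps
sudo tcshow eth0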
Despite the strange delay-distro appearing in the tcshow output, the results are perfect. Here are my librespeed results, running against my own private AWS server:
Time is Sat 21 Aug 16:17:23 EDT 2021
Ping: 20 ms Jitter: 1 ms
Download rate: 100.01 Mbps
Upload rate: 9.48 Mbps
Problems creep in on RPi
I swear I had it all working. This blog post is the proof. Now I’ve rebooted my RPi and that tcset command above gives the result Illegal instruction. Still trying to figure that one out!
March, 2022 update. My RPi had other issues. I’ve re-imaged the micro SD card and all is good once again. I set traffic shaping policies as shown in this post.
Conclusion about tcconfig
It’s clear tcset is just giving you a nice interface to tc, but sometimes that’s all you need to not sweat the details and start getting productive.
Possible issue – missing kernel module
On one of my servers (the CentOS 8 one), I had to do a
$ sudo yum install kernel-modules-extra
$ sudo modprobe sch_netem
before I could get tcconfig to really work.
To do list
Make the tc settings permanent.
Verify tc + tcconfig work on a Raspberry Pi. (tc is definitely available for RPi.)
Conclusion
We have found a pretty nice and effective way to do traffic shaping on linux systems. The best tool is tc and the best wrapper for it is tcconfig.
References and related
Librespeed is a great speedtest.net alternative for hard-code linux types who love command line and being in full control of both ends of a speed test. I describe it here.
Power cycling one's cable modem automatically via an attached RPi. I refer to this blog post specifically because I intend to expand that RPi to also do periodic, automated speedtesting of my home broadband connection, with traffic shaping in place if all goes well (as it seems to thus far).
Bandwidth management and “queueing discipline” in all its gory detail is explained in this post, including example raw tc commands. I haven’t digested it yet but it may represent a way for me to get my RPi working again without a re-image: http://www.fifi.org/doc/HOWTO/en-html/Adv-Routing-HOWTO-9.html
I was building some infrastructure around automated speedtest.net tests using speedtest-cli. I noticed the assigned servers keep changing, some servers are categorized as malicious sources, some time out if tested on the hour and half-hour, and results are inconsistent depending on which server you get.
So, I saw that speedtest-cli (the linux command-line python script) has a switch for a "mini" server. When I investigated, that seemed the answer to the problem - you can set up your own mini server and use that for your tests, i.e., control both ends of the test. Great.
Except the speedtest.net mini server was discontinued in 2017! There's some commercial replacement. So I thought, forget that. I was disillusioned, and then happened upon a breath of fresh air - an open source alternative to speedtest.net. Enter librespeed.
Some details
librespeed has a command-line program which is an obvious rip-off of speedtest-cli. In fact it is called librespeed-cli and has many similar switches.
There is also a server setup – really just a few files you can put on any apache + php web server. There is a web GUI as well, but in fact I am not even that interested in that, and you don’t need to set it up at all.
What I like is that with the appropriate switches supplied to librespeed-cli, I can have it run against my own librespeed server. In some testing configurations I was getting 500 Mbps downloads. Under less favorable circumstances, much less.
Testing, testing, testing
I tested between Europe and the US. I tested through a proxy. I tested from the Azure cloud to an Amazon AWS server. I tested with a single cpu linux server (good old drjohnstechtalk.com) either as server, or as the client. This was all possible because I had full control over both ends.
Some tips
1. Play with the librespeed-cli switches and see what works for you. librespeed-cli -h will show you all the options.
2. Increasing the stream count can compensate for slower ping times (assuming both ends have a fast connection).
3. It does support a proxy, but...
4. Downloads don’t really work through a proxy if the server is only running http.
5. Counterintuitively, the cpu burden is on the client, not the server! My servers didn’t show the slightest bit of resource usage.
6. Corollary to 5: my 4-cpu client to 1-cpu server test was much faster than the arrangement where the server and client roles were reversed.
7. Most things aren’t sensitive to upload speeds anyway, so seriously consider suppressing that test with the appropriate switch. Your tests will also run a lot faster (18 seconds versus 40 seconds).
8. Worried about consuming too much bandwidth and transferring too much data? I also developed a solution for that (it will be my next blog post).
9. I am running a librespeed server on my little VM on Amazon AWS, but I can’t make it public for fear of getting overrun.
10. ISPs that have excellent interconnects, such as the various cloud providers, are probably going to give the best results.
11. It is not true that your web server needs write access to its directory, in my experience – as long as you don’t care about sharing telemetry data and all that.
12. To emphasize, they supply the librespeed-cli binary, pre-built, for a whole slew of OSes. You do not and should not compile it yourself. For a standard linux VM you will want the binary called librespeed-cli_1.0.9_linux_386.tar.gz; fetching it is sketched below.
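Grabbing and unpacking the prebuilt CLI goes something like this sketch – the download URL is my assumption from the release naming, so double-check it against the project's releases page:
$ wget https://github.com/librespeed/speedtest-cli/releases/download/v1.0.9/librespeed-cli_1.0.9_linux_386.tar.gz
$ tar xzf librespeed-cli_1.0.9_linux_386.tar.gz
$ ./librespeed-cli -h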
Example files
The point of these files is to test librespeed-cli, from the directory where you copied it to, against your own librespeed server.
#!/bin/sh
# see ./librespeed-cli -help for all the options
./librespeed-cli --local-json json-ns6 --server 864 --simple --no-upload --no-icmp --ipv4 --concurrent 4 --skip-cert-verify
Purpose: are we getting good speeds?
My purpose in what I am constructing is to verify we are getting good download speeds. I am not trying to hit it out of the park. That consumes (read, wastes) a lot of resources. My target is to prove we can achieve about 150 Mbps downloads. I don’t know anyone who can point to 150 Mbps and honestly say that’s insufficient for them. For some setups that may take four simultaneous streams, for others six. But it is definitely achievable. By not going crazy we are saving a lot of data transfers. AWS charges me for my network usage. So a six-stream download test at 150 Mbps (Megabits per second) consumes about 325 MBytes of download data. That checks out: 150 Mbps is roughly 19 MBytes per second, and the download phase of a test runs for under 20 seconds. If you’re not careful with your switches, you can easily nudge that up to 1 GB of downloads for a single test.
My librespeed client-to-server tests ran overnight alongside my old approach using speedtest. The speedtest results are all over the place, with a bunch of zeroes for whatever reason, as is typical, while librespeed – and mind you this is from a client in the US, going through a proxy, to a server in Europe – produced much more consistent results. In one case where the normal value was 130 Mbps, it dipped down to 110 Mbps.
Testing it out at home
Test from a home PC against my own librespeed server
I made my test URL on my AWS server private, but a public one is available at https://librespeed.de/
There were two final issues with speedtest that were the straws that broke the camel’s back, and they are closely related.
When you resolve www.speedtest.net it hits a Content Distribution Network (CDN), and the returned results vary. For instance right now we get:
;; QUESTION SECTION:
;www.speedtest.net. IN A
;; ANSWER SECTION:
www.speedtest.net. 4301 IN CNAME zd.map.fastly.net.
zd.map.fastly.net. 9 IN A 151.101.66.219
zd.map.fastly.net. 9 IN A 151.101.194.219
zd.map.fastly.net. 9 IN A 151.101.2.219
zd.map.fastly.net. 9 IN A 151.101.130.219
Note that you can also run speedtest-cli with the --list switch to get a list of speedtest servers. So in my case I found some servers which produced good results. There was one where I even know the guy who runs the ISP, and I know he does an excellent job. His speedtest server is 15 miles away. But, in its infinite wisdom, speedtest sometimes thinks my server is in Louisiana, and other times thinks it’s in New Jersey! So the returned server list is completely different for the two cases. And, even though each server gets assigned a unique number, and you can specify that number with the --server switch, it won’t run the test if that particular server wasn’t proposed to you in its initial listing. (It always makes a server listing call, whether you specified --list or not, for its own purposes as to which servers to use.)
I tried to use some tricks to override this behaviour, but short of re-writing the whole thing, it was not going to work. I imagined I could force speedtest-cli to always use a particular IP address, overwriting the return from the fastly results, but getting that to work through a proxy was not feasible. On the other hand, if you suck it up and accept their randomly assigned server, you have to put up with a lot of garbage results.
So set up your own server, right? The --mini switch seems built to accommodate that. But the mini server was discontinued in 2017. The commercial replacement seemed to have some limits. So it’s dead end upon dead end with speedtest.net.
Conclusion
An open source alternative to speedtest.net’s speedtest-cli has been identified and tested, both server and client. It is librespeed. It gives you a lot more control than speedtest, if that is your thing and you know a smidgeon of linux.
References and related
Just to do your own test with your browser the way you do with speedtest.net: https://librespeed.de/
If, in spite of every positive thing I’ve had to say about librespeed, you still want to try the more commercial speedtest-cli, here is that link: https://www.speedtest.net/apps/cli
Of interest here is that the project involved a 64×64 LED matrix display, and a “hat” sold by Adafruit to supposedly make things easier to connect.
Now I am not a hardware guy and never pretended to be. When we realized that it required soldering, I bought those supplies but I didn’t want to be the one to mess things up so he volunteered for that. And yes, it was a mess.
Equipment
See later on in the post for the equipment we used.
The story continues
Who knew that gold on the PCB could be ruined so easily after you’ve changed your mind about a soldered joint and decided to undo what you’ve done? The experience pretty much validated my whole approach of staying away from soldering. So we ruined that hat and had to order another one. While we waited for it I developed a certain strategy to deal with the shortcomings of the partially destroyed hat. See the section at the bottom entitled Recipe for a broken hat.
Comments on the project
I don’t think the guy did a good job. Details left out where needed, extra stuff added. For instance you don’t need to order that Firebeetle – it’s never used! Also, I’ve been told that his python isn’t very good either. But in general we followed his skimpy instructions. So we ordered the LED matrix from DFRobot, not Adafruit, and I think it’s different. In our case we did indeed follow the project suggestion to wire the 2×8 pin as shown in their (DFRobot’s) picture, leaving out the white wire. Once we soldered the white wire to the hat’s gpio pin 24, we were really in business. What we did not need to do is to solder pin E to either pin 8 or 16. (This is something you apparently need to do for the Adafruit LED.) In our testing it didn’t seem to matter whether or not those connections were made so we left it out on our second hat.
You’d think he might have mentioned just how much soldering is involved. Let’s see: a 2×20 connector plus a 2×8 connector plus a single gpio connection = 57 pins to solder. Yuck.
What we got to work
Deploying images on these LED displays is cool. You just kind of have to see it. It’s hard to describe why. The picture below does not do it justice. Think stadium scoreboard.
In rpi-rgb-led-matrix/utils directory we followed the steps in the README.md file to compile the LED viewer:
#!/bin/sh
# invert images because the sound stuff is otherwise upside-down
sudo led-image-viewer --led-pixel-mapper="U-mapper;Rotate:180" --led-gpio-mapping=adafruit-hat --led-cols=64 --led-rows=64 /home/pi/walk-in-the-woods.jpg
Walk in the woods
Do we have flicker? Just a tiny bit. You wouldn’t notice it unless you were staring at it for a few seconds, and even then it’s just isolated to a small section of the display. Probably shoddy soldering – we are total amateurs.
Tip for your images
Consider that you only have 64 x 64 pixels to work with. So crop your pictures beforehand to focus on the most interesting aspect – people if there are people in the picture (like we’ve done in the above image), specifically faces if there are faces. Otherwise everything will just look like blurs and blobs. You yourself do not have to resize your pictures down to 64 x 64 – the led software will do the resizing. So just focus on cropping down to a square-sized part of the picture you want to draw attention to.
Real-time audio
So my friend got a USB microphone. I developed the following script to make the python example work with real-time sounds – music playing, conversation, whatever. It’s really cool – just the slightest lag. And yes, the LEDs bounce up in response to louder sounds.
So in the directory rpi-rgb-led-matrix/bindings/python/samples I created the script drjexample.sh.
#!/bin/sh
# DrJ 6/21
# make the LED react to live sounds by use of a USB microphone
# I am too lazy to look up how to make the python program read from STDIN so I will just
# make the equivalent thing by creating test.wav as a named pipe. It's an old linux trick.
rm test.wav; mkfifo test.wav
# background the python program. It will patiently wait for input
sudo python spectrum_matrix.py &
# Now run ffmpeg
# see my own post, https://drjohnstechtalk.com/blog/2019/04/live-stream-to-youtube-from-a-raspberry-pi-webcam/
ffmpeg \
-thread_queue_size 4096 \
-f alsa -i plughw:1,0 \
-ac 2 \
-y \
test.wav
So note that by having inverted the image (180 degree rotation) we have the sounds bars and images both in the same direction so we can switch between the two modes.
I believe to get the python bindings to work we needed to install some additional python libraries, but that part is kind of a blur now. I think what should work is to follow the directions in the README.md file in the directory rpi-rgb-led-matrix/bindings/python, namely
sudo apt-get update && sudo apt-get install python2.7-dev python-pillow -y
make build-python
sudo make install-python
Hopefully that takes care of it. For sure you need numpy.
Future project ideas
How about a board that normally plays a slideshow, but when the ambient sound reaches a certain level – presumably because music is playing – it switches to real-time sound bar mode?? We think it’s doable.
Recipe for a broken hat
For the LED matrix display we got the DFRobot one since that’s what was linked to in the project guide. But the thing is, the reviewer’s write-up is incomplete so what you need to do involves a little guesswork.
At the end of the day all we could salvage while we wait for a new Adafruit hat to come in is the top fourth of the display! The band below it is either blank, or if we push on the cables a certain way, an unreliable duplicate of the top fourth.
The next band suffers from a different problem. Its blue is non-functional. So it’s no good…
And the last band rarely comes on at all.
OK? So we’re down to a 16 x 64 pixel useable area.
But despite all those problems, it’s still kind of cool, I have to admit! I know at work we have these digital sign boards and this reminds me of that. So first thing I did was to create a custom banner – scrolling text.
I call this display program drjexample2.sh. I put it in the directory rpi-rgb-led-matrix/examples-api-use
But that requires the existence of a ppm file containing the text I wanted to scroll, since I was working by example. So to create that custom PPM file I created this python script.
Once I discovered the problem with the bands – by way of running all the demos and experimenting with the arguments a bit – I noticed this directory: rpi-rgb-led-matrix/utils. I perked right up because it held out the promise of displaying jpeg images. Anyone who has seen any of my posts knows that I am constantly putting out raspberry pi based photo displays in one form or another. For instance see https://drjohnstechtalk.com/blog/2021/01/raspberry-pi-photo-frame-using-the-pictures-on-your-google-drive-ii/
But see how cool it is? No? It’s a sleeping, recumbent baby. It’s like the further away from it you get, the clearer it becomes. Trust me in person it does look good. And it feels like creating one of those bright LED displays they use in ballparks.
But this same picture also shows the banding problem.
To get the picture displayed I first cropped it to make it wide and short. I wisely chose a picture which was amenable to that approach. I created this display script:
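The script was essentially the led-image-viewer invocation shown earlier, pointed at the cropped picture – a sketch, with the filename being a placeholder:
#!/bin/sh
# same flags as the walk-in-the-woods script above; baby.jpg is a placeholder name
sudo led-image-viewer --led-pixel-mapper="U-mapper;Rotate:180" --led-gpio-mapping=adafruit-hat --led-cols=64 --led-rows=64 /home/pi/baby.jpg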
It went kind of slowly on my RPi 3, but it worked without incident. When I initially ran led-image-viewer, nothing displayed. So the script above shows the results of my experimentation, which seem appropriate for our particular matrix display.
How did we get here?
Just to mention it, we followed the general instructions in that project. So I guess no need to repeat the recipe here.
Slideshow
You know I’m not going to let an opportunity to create a slideshow go to waste. So I created a second, appropriately horizontal image which I might effectively show in my narrow available band of 16 x 64 pixels. Just to share the little script, here it is:
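This is a sketch of it from memory – led-image-viewer's -f flag loops forever and -w sets the seconds per image; the image names are placeholders:
#!/bin/sh
# loop a two-image slideshow forever, six seconds per image
sudo led-image-viewer --led-pixel-mapper="U-mapper;Rotate:180" --led-gpio-mapping=adafruit-hat --led-cols=64 --led-rows=64 -f -w6 /home/pi/pic1.jpg /home/pi/pic2.jpg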
My friend ordered all the stuff listed on the DFRobot project page, including their 64×64 LED matrix. Probably a mistake. They basically don’t document it and refer you to Adafruit, where they deal with a 64×64 LED matrix – their own – which may or may not have the same characteristics, leaving you somewhat in limbo. Next time I would order from Adafruit.
As mentioned above that gold foil on printed circuit boards really does come off pretty easily, and then you’re hosed. Because of the lack of technical specs we were never really sure if we needed to solder the E contact to 8 or to 16 and destroyed all those terminals in the process of backing out.
I actually created custom ppm files of solid colors, red, blue, green, white, so that I could prove my suspicions about the third band. Red and green display fine, blue not at all. White displays as yellow.
Viewed close up, the LED matrix doesn’t look like much, and of course I was close up when I was working with it. But when I stepped back I realized how beautiful a brightly illuminated picture of a baby can be! The pixels merge and your mind fills in the spaces between I guess.
The original idea was to tackle sound but I got stuck on the ability to use it as a photo frame (you know me). But he wants to return to sound which I am dreading….
Testing audio
In our first tests, the audio example wasn’t working. But now it seems to be. The project guy’s python code is named spectrum_matrix.py, if I recall correctly. It goes into rpi-rgb-led-matrix/bindings/python/samples. And as he says, you run it from that directory as
$ sudo python spectrum_matrix.py
But his link to test.wav is dead – yet another deficiency in his write-up. And in my testing, not every WAV file works. This one (moo sounds) does, however: http://soundbible.com/grab.php?id=1778&type=wav So, it plays for a few seconds – I can hear it through earphones – and the LEDs kind of go up and down. We recorded a WAV file of our own and found that it does not work. The error reads like this:
Home directory not accessible: Permission denied
W: [pulseaudio] core-util.c: Failed to open configuration file '/root/.config/pulse//daemon.conf': Permission denied
W: [pulseaudio] daemon-conf.c: Failed to open configuration file: Permission denied
Traceback (most recent call last):
File "spectrum_matrix.py.orig", line 56, in
matrix = calculate_levels(data,chunk,sample_rate)
File "spectrum_matrix.py.orig", line 49, in calculate_levels
power = np.reshape(power,(64,chunk/64))
File "/usr/lib/python2.7/dist-packages/numpy/core/fromnumeric.py", line 292, in reshape
return _wrapfunc(a, 'reshape', newshape, order=order)
File "/usr/lib/python2.7/dist-packages/numpy/core/fromnumeric.py", line 56, in _wrapfunc
return getattr(obj, method)(*args, **kwds)
ValueError: cannot reshape array of size 2048 into shape (64,64)
Note that I had renamed the original spectrum_matrix.py to spectrum_matrix.py.orig because we started messing with it. Actually, I pretty much get the same error on the file that works; it’s just that I get it at the end of the LED show, not immediately.
Superficially, the two files differ somewhat in their recording format.
I played the voice.wav on a Windows PC – it played just fine. Just a little soft.
So what’s the essential difference between the two files? Well, something that jumps out is that one is mono, the other stereo. Can we somehow test for that? Yes! I made the simplest possible conversion of a mono to a stereo file with the following ffmpeg command.
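My best reconstruction of that command – assuming the mono recording is named voice.wav – simply asks ffmpeg for two output channels:
$ ffmpeg -i voice.wav -ac 2 converted.wav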
I copy converted.wav to test.wav and re-run spectrum_matrix.py. This time it works!
Not sure how my friend produced his WAV file, but I wanted to make one on my own. He had plugged a USB microphone into the RPi. I have done somewhat related research – publishing a livestream to Youtube, audio only, video grayed out. That’s in this post: https://drjohnstechtalk.com/blog/2019/04/live-stream-to-youtube-from-a-raspberry-pi-webcam/ So I am not afraid to use ffmpeg any longer. I created this tiny script, record.sh, with my desired arguments:
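Here is a sketch of record.sh, reusing the same ALSA arguments you’ll see in drjexample.sh below (plughw:1,0 being the USB microphone on our RPi):
#!/bin/sh
# record from the USB mic (ALSA card 1) to a stereo WAV file
ffmpeg -f alsa -i plughw:1,0 -ac 2 -y voice.wav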
My friend wants the LED to respond to live input such as his stereo at home. Being a terrible python programmer but at least middling linux techie, I see a way to accomplish it without his having to touch the sample program by employing an old Unix trick: named pipes. So I create this script, drjexample.sh which combines all the knowledge we gained above into one simple script:
#!/bin/sh
# DrJ 6/21
# make the LED react to live sounds by use of a USB microphone
# I am too lazy to look up how to make the python program read from STDIN so I will just
# make the equivalent thing by creating test.wav as a named pipe. It's an old linux trick.
rm test.wav; mkfifo test.wav
# background the python program. It will patiently wait for input
sudo python spectrum_matrix.py &
# Now run ffmpeg
# see my own post, https://drjohnstechtalk.com/blog/2019/04/live-stream-to-youtube-from-a-raspberry-pi-webcam/
ffmpeg \
-thread_queue_size 4096 \
-f alsa -i plughw:1,0 \
-ac 2 \
-y \
test.wav
So a named pipe is just that. Instead of the pipe character we know and love, you coordinate the output of the first process with the input of the second by way of this special file. The operating system does all the hard work, but it works just as though you had used the | character.
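If you’ve never played with one, here is a tiny standalone demo of the idea, unrelated to our project:
$ mkfifo mypipe
$ gzip -c < mypipe > out.gz &
$ echo hello > mypipe
The backgrounded gzip patiently waits for input; the echo supplies it, and the effect is the same as echo hello | gzip -c > out.gz.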
Best of all, this script actually works, to wit, the LED is now responding to live input. I see it jump when I say “test” into the microphone. Unknown to me is whether it will play for extended periods of time – it would be easy for one process to output faster than the other can consume, for instance, so that a backlog builds up. The responsiveness is good – I’d guess no more than about 200 ms of lag.
Equipment
We used the equipment from this post, except the Firebeetle. You know, that’s just another reason I consider that post to be a sloppy effort. Who lists a piece of equipment that they don’t use?? And again, next time I would rather search for an LED display from Adafruit. We used an RPi 3 and installed the image on a micro SD card with the new-ish Raspberry Pi imager, which works just great: Introducing Raspberry Pi Imager, our new imaging utility – Raspberry Pi
Oh, plus the soldering iron and solder. And a multi-colored ribbon cable with female couplers at one end and a 2×8 connector on the other. Not sure if that came with the LED or not since I didn’t order it.
Our power supply is about 5 amps and plugs into the hat. We do not need a separate power supply for the RPi.
#!/bin/sh
# DrJ 1/2021
# call this from cron once a day to refresh the random slideshow
NUMFOLDERS=20
DEBUG=1
HOME=/home/pi
RANFILE=$HOME/random.list
REANFILE=$HOME/rean.list
DISPLAYFOLDER=$HOME/Pictures
DISPLAYFOLDERTMP=$HOME/Picturestmp
EXIFTMP=$HOME/EXIFtmp
EXIF=$HOME/EXIF
TXTDIR=$HOME/picstxt
MSHOW=$HOME/mediashow
MSHOW2=$HOME/mediashowtmp2
MSHOW3=$HOME/mediashowtmp3
SLEEPINTERVAL=1
STARTFOLDER="MaryDocs/Pictures and videos"
echo "Starting master process at "`date`
cd $HOME
rm -rf $DISPLAYFOLDERTMP
mkdir $DISPLAYFOLDERTMP
#listing of all Google drive files starting from the picture root
# this takes a few minutes so we may want to skip for debugging
if [ "$1" = "skip" ]; then
if [ $DEBUG -eq 1 ]; then echo SKIP Listing all files from Google drive; fi
else
if [ $DEBUG -eq 1 ]; then echo Listing all files from Google drive; fi
rclone ls remote:"$STARTFOLDER" > files
# filter down to only jpegs, lose the docs folders and the tiny JPEGs
if [ $DEBUG -eq 1 ]; then echo Picking out the JPEGs and losing the small images; fi
egrep '\.[jJ][pP][eE]?[gG]$' files |awk '$1 > 11000 {$1=""; print substr($0,2)}'|grep -i -v /docs/ > jpegs.list
fi
# check if we got anything. If our Internet dropped there may have been a problem, for instance
flines=`cat files|wc -l`
if [ $flines -lt 60 ]; then
echo "rclone did not produce enough files. Check your Internet setup and rclone configuration."
echo Only $flines files in the file listing - not enough - so pausing 60 seconds and starting over... at `date`
# start a new job and kill ourselves!
nohup $HOME/master3.sh > master.log 2>&1 &
exit
fi
# throw NUMFOLDERS or so random numbers for picture selection, select triplets of photos by putting
# names into a file
if [ $DEBUG -eq 1 ]; then echo "\nGenerate random filename triplets"; fi
./random-files3.pl -f $NUMFOLDERS -j jpegs.list -r $RANFILE
# copy over these 60 jpegs
if [ $DEBUG -eq 1 ]; then echo "\nCopy over these random files"; fi
cat $RANFILE|while read line; do
if [ $DEBUG -eq 1 ]; then echo filepath is $line; fi
rclone copy remote:"${STARTFOLDER}/$line" $DISPLAYFOLDERTMP
sleep $SLEEPINTERVAL
done
# do a re-analysis to push pictures further apart in time
if [ $DEBUG -eq 1 ]; then echo "\nRe-analyzing pictures for their timestamps"; fi
cd $DISPLAYFOLDERTMP; $HOME/reanalyze.pl
# copy over just the new pictures that we determined were needed
if [ $DEBUG -eq 1 ]; then echo "\nCopy over the needed replacement files"; fi
cat $REANFILE|while read line; do
if [ $DEBUG -eq 1 ]; then echo filepath is $line; fi
rclone copy remote:"${STARTFOLDER}/$line" $DISPLAYFOLDERTMP
sleep $SLEEPINTERVAL
done
# QC: toss out the pics which are not actually JPEGs
if [ $DEBUG -eq 1 ]; then echo "\nQC: Toss out the pics which are not actually JPEGs"; fi
cd $DISPLAYFOLDERTMP; ../QC.pl
# save EXIF metadata for later
if [ $DEBUG -eq 1 ]; then echo "\nSave EXIF metadata for later"; fi
cd $DISPLAYFOLDERTMP; $HOME/get-all-EXIF.sh
rm -rf $EXIF;mv $EXIFTMP $EXIF
# analyze EXIF info to extract most interesting things
if [ $DEBUG -eq 1 ]; then echo "\nAnalyze EXIF data"; fi
rm -rf $TXTDIR; $HOME/analyze.sh
# rotate pics as needed
if [ $DEBUG -eq 1 ]; then echo "\nRotate the pics which need it"; fi
cd $DISPLAYFOLDERTMP; $HOME/rotate-as-needed.sh
# resize pics
if [ $DEBUG -eq 1 ]; then echo "\nSize all pics to the display size"; fi
$HOME/resize.sh
# create text info + images
if [ $DEBUG -eq 1 ]; then echo "\nEmbed pic info"; fi
$HOME/embedpicinfo.sh
cd ~
# kill any old slideshow
if [ $DEBUG -eq 1 ]; then echo Killing old fbi slideshow; fi
sudo pkill -9 -f fbi
pkill -9 -f m3.pl
# remove old pics
if [ $DEBUG -eq 1 ]; then echo Removing old pictures; fi
rm -rf $DISPLAYFOLDER
mv $DISPLAYFOLDERTMP $DISPLAYFOLDER
cp $MSHOW3 $MSHOW
touch refresh
#run looping fbi slideshow on these pictures
if [ $DEBUG -eq 1 ]; then echo "\nStarting fbi slideshow in background"; fi
cd $DISPLAYFOLDER ; nohup ~/m3.pl >> ~/m3.log 2>&1 &
if [ $DEBUG -eq 1 ]; then echo "And now it is "`date`; fi
#!/usr/bin/perl
use Getopt::Std;
my %opt=();
#
# assumption is that we are running this from a directory containing pictures
$tier1 = 100; $tier2 = 200; $tier3 = 300; # secs
$DEBUG = 1;
$HOME = "/home/pi";
# pics are here
$pNames = "$HOME/reanpicnames";
$ranfile = "$HOME/random.list";
$reanfile = "$HOME/rean.list";
$origfile = "$HOME/jpegs.list";
$mshowt = "$HOME/mediashowtmp";
$mshow2 = "$HOME/mediashowtmp2";
open(REAN,">$reanfile") || die "Cannot open reanalyze file $reanfile!!\n";
$ms = `cat $mshowt`;
print "Original media show: $ms\n" if $DEBUG;
@lines = split('\0',$ms);
$Pdate = $Phr = $Pmin = $Psec = 0;
$diff = 9999;
for($i=0;$i<@lines;$i++){
$date = 0;
$secs = $ymd = 0;
$_ = $lines[$i];
$file = $_;
# ignore pictures with names like 20130820_180050.jpg
next if /\d{8}_\d{4}/;
open(ANAL,"$HOME/getinfo.py \"$file\"|") || die "Cannot open file: $file!!\n";
print "filename: $file\n" if $DEBUG;
while(<ANAL>){
#extract date and time from remaining pictures, if possible
# # DateTimeOriginal = 2018:08:18 20:16:47
# print STDERR "DATE: $_" if $DEBUG;
if (/date/i && $date++ < 1) {
print "date match in getinfo.pyoutput: $_" if $DEBUG;
($ymd,$hr,$min,$sec) = /(\d{4}:\d\d:\d\d) (\d\d):(\d\d):(\d\d)/;
$secs = 3600*$hr + 60*$min + $sec;
print "file,secs,ymd,i: $file,$secs,$ymd,$i\n" if $DEBUG;
$YMD[$i] = $ymd;
$SECS[$i] = $secs;
}
} # end loop over analysis of this pic
} # end loop over all files
# now go over that
$oldfolder = 0;
for($i=1;$i<@lines;$i++){
$folder = int($i/3) + 1;
next unless $folder != $oldfolder;
print "analyzing results. folder no. $folder\n" if $DEBUG;
# analyze pics in triplets
# center pic
$j = ($folder - 1)*3 + 1;
for ($o=-1;$o<2;$o+=2){
$k=$j+$o;
print "j,k,o: $j,$k,$o\n" if $DEBUG;
next unless $SECS[$j] > 0 && $YMD[$j] == $YMD[$k] && $YMD[$j] > 0;
print "We have non-0 dates we're dealing with\n" if $DEBUG;
$file = $lines[$k];
chomp($file);
$diff = abs($SECS[$j] - $SECS[$k]);
print "diff: $diff\n" if $DEBUG;
next unless $diff < $tier3;
# the closer the files are together the more we push away
$bump = 1 if $diff < $tier3;
$bump = 2 if $diff < $tier2;
$bump = 3 if $diff < $tier1;
# get full filepath
$filepath = `grep \"$file\" $ranfile`;
chomp($filepath);
# now use that to search within the jpegs file listing
$prog = $o < 0 ? "head" : "tail";
$newfilepath = `grep -C$bump "$filepath" $origfile|$prog -1`;
($newfile) = $newfilepath =~ /([^\/]+)$/;
chomp($newfile);
print "file,filepath,newfile,newfilepath,bump: $file,$filepath,$newfile,$newfilepath,$bump\n" if $DEBUG;
print REAN $newfilepath;
# we'll get the new pictures over in a separate step to keep this more atomic
$ms =~ s/$file/$newfile/;
}
$oldfolder = $folder;
} # end loop over pics
# print out new mediashow pics in order
print "Printing new mediashow: $ms\n" if $DEBUG;
open(MS,">$mshow2") || die "Cannot open mediashow $mshow2!!\n";
print MS $ms;
close(MS);
#!/usr/bin/perl
# kick out the non-JPEG files - sometimes they creep in
$DEBUG = 1;
$HOME = "/home/pi";
$mshow2 = "$HOME/mediashowtmp2";
$mshow3 = "$HOME/mediashowtmp3";
$ms = `cat $mshow2`;
@pics = split('\0',$ms);
foreach $file (@pics) {
print "file is $file\n" if $DEBUG;
#DSC00185.JPG: JPEG image data, JFIF standard 1.01...
$res = `file "$file"|cut -d: -f2`;
if ($res =~ /JPEG/i){
print "This file is indeed a JPEG image\n" if $DEBUG;
} else {
print "Not a JPEG image! We have to remove this file form the mediashow\n" if $DEBUG;
$ms =~ s/$file\0//;
}
}
# print out new mediashow pics in order
print "Printing new mediashow: $ms\n" if $DEBUG;
open(MS,">$mshow3") || die "Cannot open mediashow $mshow3!!\n";
print MS $ms;
close(MS);
#!/bin/sh
# DrJ 1/2021
# preserve EXIF info of all the images because our rotate step removes it
# and we will use it in subsequent steps
# assumption is that our current directory is the one where we want to read files
EXIFTMP=~/EXIFtmp
mkdir $EXIFTMP
ls -1|while read line; do
echo file is "$line"
~/getinfo.py "$line" > $EXIFTMP/"$line"
done
#!/bin/sh
# DrJ 1/2021
# try to extract date, file and folder name and even GPS info, create jpegs with info
# for each image
# assumption is that our current directory is the one where we want to alter files
HOME=/home/pi
TXTDIR=$HOME/picstxt
# it's assumed EXIF info for each pic has already been extracted and put into the EXIF directory
EXIF=$HOME/EXIF
mkdir $TXTDIR
cd $EXIF
ls -1|while read line; do
echo file is "$line"
echo -n "$line"|../analyzeDate.pl > "$TXTDIR/${line}"
echo -n "$line"|../analyzeGPS.pl >> "$TXTDIR/${line}"
done
#!/bin/sh
# DrJ 12/2020
# some of our downloaded files will be sideways, and fbi doesn't auto-rotate them as far as I know
# assumption is that our current directory is the one where we want to alter files
ls -1|while read line; do
echo file is "$line"
o=`~/getinfo.py "$line"|grep -ai orientation|awk '{print $NF}'`
echo orientation is $o
if [ "$o" -eq "6" ]; then
echo "90 clockwise is needed, o is $o"
# rotate and move it
~/rotate.py -90 "$line"
mv rot_"$line" "$line"
elif [ "$o" -eq "8" ]; then
echo "90 counterclock is needed, o is $o"
# rotate and move it
~/rotate.py 90 "$line"
mv rot_"$line" "$line"
elif [ "$o" -eq "3" ]; then
echo "180 rot is needed, o is $o"
# rotate and move it
~/rotate.py 180 "$line"
mv rot_"$line" "$line"
fi
done
#!/bin/sh
# DrJ 2/2021
# To combat the RPi's inherent sluggish performance we'll downsize the pictures in advance to save fbi the effort
#
# on the pidisplay fbset gives:
#mode "800x480"
# geometry 800 480 800 480 32
# timings 0 0 0 0 0 0 0
# rgba 8/16,8/8,8/0,8/24
#endmode
displaywidth=`fbset|grep geometry|awk '{print $2}'`
displayheight=`fbset|grep geometry|awk '{print $3}'`
ls -1|while read line; do
echo file is "$line"
# this will create a new image with same name prepended with txt_
~/embedpicinfo.py $displaywidth $displayheight "$line"
done
#!/usr/bin/python3
# call with two arguments: degrees-to-rotate and filename
import PIL, os
import sys
from PIL import Image
# first do: pip3 install piexif
import piexif
degrees = int(sys.argv[1])
pic = sys.argv[2]
picture= Image.open(pic)
# see https://github.com/hMatoba/Piexif for piexif writeup
# this method of preserving EXIF info does not always work, and
# causes script to crash when it fails!
##exif_dict = piexif.load(picture.info["exif"])
##exif_bytes = piexif.dump(exif_dict)
## both rotate and preserve EXIF data
##picture.rotate(degrees,expand=True).save("rot_" + pic,"jpeg", exif=exif_bytes)
# rotate (which will blow away EXIF info, sorry...)
picture.rotate(degrees,expand=True).save("rot_" + pic,"jpeg")
#!/usr/bin/python3
# DrJ 2/2021
import PIL, os
import sys
from PIL import Image
# somewhat inspired by http://www.riisen.dk/dop/pil.html
# arguments:
# <width> <height> file
# width and height should be provided as values in pixels
# image file should be provided as argument.
# A pidisplay is 800x480
displaywidth = int(sys.argv[1])
displayheight = int(sys.argv[2])
smallscreen = 801
imageFile = sys.argv[3]
im1 = Image.open(imageFile)
narrowmax = .76
blowupfactor = 1.1
# take less from the top than the bottom
topshare = .3
bottomshare = 1.0 - topshare
# for DrJ debugging
DEBUG = True
if DEBUG:
    print("display width and height: ",displaywidth,displayheight)
def imgResize(im):
    width = im.size[0]
    height = im.size[1]
    if DEBUG:
        print("image width and height: ",width,height)
    # If the aspect ratio is wider than the display screen's aspect ratio,
    # constrain the width to the display's full width
    if width/float(height) > float(displaywidth)/float(displayheight):
        if DEBUG:
            print("In section width constrained to full width code section")
        widthn = displaywidth
        heightn = int(height*float(displaywidth)/width)
        im5 = im.resize((widthn, heightn), Image.ANTIALIAS) # best down-sizing filter
    else:
        heightn = displayheight
        widthn = int(width*float(displayheight)/height)
        if width/float(height) < narrowmax and displaywidth < smallscreen:
            # if width is narrow we're losing too much by using the whole picture.
            # Blow it up by blowupfactor if display is small, and crop most of it from the bottom
            heightn = int(displayheight*blowupfactor)
            widthn = int(width*float(heightn)/height)
            im4 = im.resize((widthn, heightn), Image.ANTIALIAS) # best down-sizing filter
            top = int(displayheight*(blowupfactor - 1)*topshare)
            bottom = int(heightn - displayheight*(blowupfactor - 1)*bottomshare)
            if DEBUG:
                print("heightn,top,widthn,bottom: ",heightn,top,widthn,bottom)
            im5 = im4.crop((0,top,widthn,bottom))
        else:
            im5 = im.resize((widthn, heightn), Image.ANTIALIAS) # best down-sizing filter
    im5.save("resize_" + imageFile)
imgResize(im1)
#!/usr/bin/perl
# show the pics ; rotate the screen as needed
# for now, assume the display is in a neutral
# orientation at the start
use Time::HiRes qw(usleep);
$DEBUG = 1;
$delay = 6; # seconds between pics
###$delay = 4; # for testing
$mdelay = 200; # milliseconds
$mshow = "$ENV{HOME}/mediashow";
$pNames = "$ENV{HOME}/pNames";
# pics are here
$picsDir = "$ENV{HOME}/Pictures";
$refreshFile = "$ENV{HOME}/refresh";
chdir($picsDir);
$cn = `ls -1|wc -l`;
chomp($cn);
print "$cn files\n" if $DEBUG;
# throw up a first picture - all black. Trick to make the black background permanent
system("sudo fbi -a --noverbose -T 1 $ENV{HOME}/black.jpg");
# see if this is a new batch of pictures
$refresh = (stat($refreshFile))[9];
$now = time();
$diff = $now - $refresh;
print "refresh,now,diff: $refresh, $now, $diff\n" if $DEBUG;
if ($diff < 100){
system("sudo fbi -a --noverbose -T 1 $ENV{HOME}/newslideshowintro.jpg");
sleep(25);
}
system("sudo fbi -a --noverbose -T 1 $ENV{HOME}/black.jpg");
system("sleep 1; sudo killall fbi");
# start infinitely looping fbi slideshow
for (;;) {
# then start slide show
# shell echo cannot work with null character so we need to use a file to store it
system("sudo xargs -a $mshow -0 fbi --noverbose -1 -T 1 -t $delay ");
###system("sudo xargs -a $mshow -0 fbi -a -1 -T 1 -t $delay "); # for testing
# fbi runs in background, then exits, so we need to monitor if it's still alive
for(;;) {
open(MON,"ps -ef|grep fbi|grep -v grep|") || die "Cannot launch ps -ef!!\n";
$match = <MON>;
if ($match) {
print "got fbi match\n" if $DEBUG > 1;
} else {
print "no fbi match\n" if $DEBUG;
# fbi not found
last;
}
close(MON);
print "usleeping, noexist is $noexit\n" if $DEBUG > 1;
usleep($mdelay);
} # end loop testing if fbi has exited
} # close of infinite loop
Optional script
mshowtmp.pl (revision not yet reflected in the tar file)
# display some ip info first
@reboot sleep 15;ip a|grep wlan|sudo tee -a /dev/console > /dev/null
@reboot sleep 22; ./m3.pl >> m3.log 2>&1
# reboot if we can’t reach the Internet
19 5 */2 * * curl google.com > /dev/null 2>&1 || sudo reboot
26 5 */2 * * ./master3.sh >> master.log 2>&1
That will refresh the slideshow every two days, which we found is a good interval for our lifestyle – some days you don’t get around to viewing them. If you want to refresh every day just change "*/2" to "*".
And… that’s it!
Reminder
Don’t forget to make all these files executable. Something like:
$ chmod +x *.pl *.py *.sh
should do it.
My equipment
RPi 3 running Raspbian Lite, OS version “Bullseye,” though older versions also work well; you just have to accommodate the appropriate packages, which have changed over time.
Pi Display. The Pi Display resolution is 800×480, so pretty small.
HDMI display such as a TV as alternate to a Pi Display. This does work! I just tested this in Jan, 2022. My Sony TV display resolution is 1920 x 1080.
Pre-install
There are a few things you’ll need (an accurate statement as of OS Bullseye, Jan 2022): the system packages fbi, file and rclone, and the python modules pip, Pillow and piexif. That’s mostly described in my previous post so I won’t repeat it here. Basically, the system packages you install with apt-get; after installing pip, you use it to install Pillow and piexif.
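In compressed form that amounts to something like this – python3-pip is my assumption for how to get pip on Bullseye:
$ sudo apt-get install -y fbi file rclone python3-pip
$ pip3 install Pillow piexif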
Getting started
To see how badly things are going for you (hey, I like to be cautiously pessimistic) after you’ve created all these files and have installed rclone, do a
$ ./master3.sh
If you already have your rclone file listing (which takes a long time) and want to focus on debugging the rest of it, do a
$ ./master3.sh skip
Discussion
In this version of Raspberry Pi photo frame I’ve made more effort to force time separation between the randomly selected photos. But, that’s not all. I blow up pictures taken in a narrow (portrait) mode (see next paragraph). And I do some fancy analysis to determine filename, folder, date, time and even location of the pictures. And there’s more. I create an alternate version of each photo which embeds this info at the bottom – in anticipation of my even more fancy remote-controlled slideshow! I am afraid to overwrite what I have previously posted because that by itself is a complete solution and works quite well on its own. So this can be considered worthy of folks looking for a little more challenge to get better results.
The fancyresize.py script is designed around my small PiDisplay, which has a horizontal resolution of only 800 pixels. It blows up a narrow, portrait-format picture only if the detected display has a horizontal resolution of no more than 800 pixels. It blows the picture up by 10% and chops off 3% from the top and 7% from the bottom, because that yields optimal results in my experience. If you like that approach but are using a larger HDMI display, you could edit the “801” in that file to make it a larger number (bigger than your display’s horizontal resolution, say 5000).
Show pictures with embedded info
This process is not streamlined. But it can be cool to do it by hand. You could follow these steps.
$ ./mshowtmp.pl; mv mediashowtmp2 mediashow
If you wait the whole cycle the next time around it should display the pictures with the embedded info at the bottom. If you’re impatient, do this:
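A sketch of the impatient path, borrowing the kill-and-restart steps straight from master3.sh:
$ sudo pkill -9 -f fbi; pkill -9 -f m3.pl
$ cd ~/Pictures; nohup ~/m3.pl >> ~/m3.log 2>&1 &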
I’m seeing this while transferring the pictures. Guess I’ll have to slow down the transfer. Not sure. Still figuring this out.
Fun Fact
You know how those old digital cameras created files prefixed with DSC, like DSC00102.JPG? If you read the JPEG spec, which is a pretty dense document, you learn that DSC stands for Digital Still Camera.
Concept for tossing out pictures of documents
We sometimes take pictures of documents, or computer screens, or a slide at a presentation, or a historical marker. They don’t make for compelling slideshow material. Well, the historical markers are debatable since they have character. Anyway, I am looking at using an old open source program called tesseract to do OCR (optical character recognition) on all the photos to help identify those containing a lot of words so they can be excluded. I’ll include that if I determine it to be a good approach.
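The sort of test I have in mind is this sketch: OCR each photo and count the recovered words, tossing any picture above some threshold (the filename and threshold would be placeholders):
$ tesseract some-photo.jpg stdout 2>/dev/null | wc -w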
Installing a searchable dictionary on Raspberry Pi
To install a word dictionary that you can do simple searches against on an RPi, try:
$ sudo apt-get install wamerican
or maybe
$ sudo apt-get install wamerican-huge
Those will produce simple wordlists, not actual dictionaries with definitions as you might have expected. They go into /usr/share/dict, e.g., /usr/share/dict/american-english-huge.
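Since they are just one word per line, grep is all you need to search them. For example, to check whether a word is present:
$ grep -ci '^neume$' /usr/share/dict/american-english-huge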
The dict program is quite nice: apt-get install dict. Then you run it like this:
$ dict neume
and it shoots back definitions and cites sources for those definitions. The drawback for my purposes is that it uses your Internet connection and I’m trying to build a photo frame that doesn’t rely too much on the Internet after the photos themselves are fetched.
git clone https://github.com/thortex/rpi3-tesseract.git
cd rpi3-tesseract
cd release
./install_requires_related2leptonica.sh
./install_requires_related2tesseract.sh
./install_tesseract.sh
But you’re gonna need git first:
$ sudo apt-get install git
RPi lost Wifi
This could be a whole separate post. In the course of my hard work my RPi just would not acquire an IP address on wlan0.
Here’s a great command to see all the SSIDs it knows about:
$ sudo iwlist wlan0 scan > scan.log
Then you can inspect scan.log in an editor. Turns out the one SSID it needed wasn’t in the list. Turns out I had reserved a DHCP entry for it in my router. My router was simply not cooperating, it seems – the RPi wasn’t doing anything wrong. I was almost ready to re-install the whole thing and waste hours… My router is an older model Linksys WRT1200AC. I removed the DHCP reservation on the router, then did a
$ sudo service networking stop; sudo service networking start
on the RPi, and…all was good! Its assigned IP won’t change that often, I can always check the router to see what it is. The management software with the Linksys is quite good.
I’m still having IP issues. So I added an additional crontab entry to display IP info of the WiFi adapter for a few seconds before the slideshow kicks in. This will tell me if it at least has an IP (which it often doesn’t)! It’s a pretty clever idea if I say so myself – the way I managed to do it that is. It’s not in the tar file.
RPi partially blown up
I’ve been running the photo frame for about two years now. All the residents love to see what the new slideshow brings when the pictures refresh. But in all the work I’ve done here and there I’ve partially blown up the RPi. Symptoms: running curl produces a segmentation fault; running crontab -e produces crontab: “/usr/bin/sensible-editor” exited with status 2; and then there’s the fact I lose my IP after a few days. I bet there’s a lot else that’s wrong too, but the slideshow stuff keeps chugging along, amazingly. I’m too unmotivated (lazy) to fix all these problems, except the IP thing. That prevents slideshow refreshes. So I’ve decided to script a reboot command to run before the slideshow refresh. The resulting conditional reboot is now incorporated into the crontab entries shown earlier.
And by the way, I did fix this problem by re-imaging the micro SD card. That brought a new problem, which is that the display blanked out after only a few seconds. I bought a new display (turns out I didn’t need to), still had the problem, then figured out how to fix it. I wrote up the fix in this post.
Conclusion
A more advanced treatment of photos is shown in this post than I have done previously. It is fairly robust and will withstand quite a few user errors in my experience. The end result will be an interesting display of your photos, randomly selected but in small groupings.
m3.pl refers to a black.jpg and a newslideshowintro.jpg file. It’s not a disaster to not have those, but the overall experience will be slightly better. Here’s black.jpg:
And the beautiful newslideshowintro.jpg I created is at the top of this blog post.
Tesseract, a surprisingly old and surprisingly good open-source OCR program, is basically impossible to compile for the RPi. Fortunately, someone has done it for us. This page has the instructions: https://github.com/thortex/rpi3-tesseract