
Raspberry Pi Project: YouTube livestreaming with a click of a button

Intro

Here I’ve combined work I’ve done previously into one single useful application: I can initiate the live streaming of our band practice on YouTube with the click of a single button on a remote control, and stop it with another click.

Equipment

Raspberry Pi 3 or 4 with Raspberry Pi OS (e.g., Raspbian Lite is just fine)

Logitech webcam or USB microphone

USB extender (my setup needed this, others may not)

Universal USB-based remote control – see references for a known good one

Method 1

In this method I rapidly blink the onboard red power (PWR) LED of the RPi while streaming is active. Outside of those times it is a solid red. This is my preferred mode – it’s a very visible sign that things are working. I am very excited about this approach.

blinkLED.sh


#!/bin/sh
# DrJ 8/30/2021
# https://www.jeffgeerling.com/blogs/jeff-geerling/controlling-pwr-act-leds-raspberry-pi
# put LED into GPIO mode
echo gpio | sudo tee /sys/class/leds/led1/trigger > /dev/null
# flash the bright RED PWR (power) LED quickly to signal whatever
while /bin/true; do
  echo 0|sudo tee /sys/class/leds/led1/brightness > /dev/null
  sleep 0.5
  echo 1|sudo tee /sys/class/leds/led1/brightness > /dev/null
  sleep 0.5
done

shineLED.sh


#!/bin/sh
# DrJ 8/30/2021
# https://www.jeffgeerling.com/blogs/jeff-geerling/controlling-pwr-act-leds-raspberry-pi
# put LED into GPIO mode
echo gpio | sudo tee /sys/class/leds/led1/trigger > /dev/null
# turn on the bright RED PWR (power) LED
echo 1|sudo tee /sys/class/leds/led1/brightness > /dev/null

broadcastswitch.sh


#!/bin/bash
# DrJ 8/2021
# Control the livestream of audio to youtube
# works in conjunction with an attached keyboard
# I use bash interpreter to give me access to RegEx matching
HOME=/home/pi
log=$HOME/audiocontrol.log
program=continuousaudio.sh
##program=tst.sh # testing
PGM=$HOME/$program
# depressing the ENTER button produces this:
match="1, 28, 0"
epochsOld=0
cutoff=3 # seconds
DEBUG=1
ledtime=10
#
echo "$0 starting monitoring at "$(date)
# Note the use of script -q -c to avoid line buffering of the evread output
script  -q -c $HOME/evread.py /dev/null|while read line; do
[[ $DEBUG -eq 1 ]] && echo line is $line
# seconds since the epoch
epochs=$(date +%s)
elapsed=$((epochs-$epochsOld))
if [[ $elapsed -gt $cutoff ]]; then
  if [[ "$line" =~ $match ]]; then
    echo "#################"
    echo We caught this input: $line at $(date)
# see if we are already running continuousaudio or not
    pgrep -f $program>/dev/null
# 0 means it's been found
    if [ $? -eq 0 ]; then
# kill it
      echo KILLING $program
      pkill -9 -f $program; pkill -9 ffmpeg
      pkill -9 -f blinkLED
      echo Shine the PWR LED
      $HOME/shineLED.sh
    else
# start it
      echo Blinking PWR LED
      $HOME/blinkLED.sh &
      echo STARTING $PGM
      $PGM > $PGM.log.$(date +%m-%d-%y:%H:%M) 2>&1 &
    fi
    epochsOld=$epochs
  fi
[[ $DEBUG -eq 1 ]] && echo No action taken. Continue to listen
fi
done

The crontab entry and the referenced files are the same as in Method 2.

Method 2

In method 2 I flash the built-in LED on the webcam for a few seconds before starting the audio, and again when the streaming has terminated – as a visible signal that the button press registered.

broadcastswitch.sh


#!/bin/bash
# DrJ 8/2021
# Control the livestream of audio to youtube
# works in conjunction with an attached keyboard
# I use bash interpreter to give me access to RegEx matching
HOME=/home/pi
log=$HOME/audiocontrol.log
program=continuousaudio.sh
##program=tst.sh # testing
PGM=$HOME/$program
# depressing the ENTER button produces this:
match="1, 28, 0"
epochsOld=0
cutoff=3 # seconds
DEBUG=1
ledtime=10
#
echo "$0 starting monitoring at "$(date)
# Note the use of script -q -c to avoid line buffering of the evread output
script  -q -c $HOME/evread.py /dev/null|while read line; do
[[ $DEBUG -eq 1 ]] && echo line is $line
# seconds since the epoch
epochs=$(date +%s)
elapsed=$((epochs-$epochsOld))
if [[ $elapsed -gt $cutoff ]]; then
  if [[ "$line" =~ $match ]]; then
    echo "#################"
    echo We caught this input: $line at $(date)
# see if we are already running continuousaudio or not
    pgrep -f $program>/dev/null
# 0 means it's been found
    if [ $? -eq 0 ]; then
# kill it
      echo KILLING $program
      pkill -9 -f $program; pkill -9 ffmpeg
      sleep 1
      echo turn on led for a few seconds
      $HOME/videotst.sh &
      sleep $ledtime
      pkill -9 ffmpeg
    else
# start it
      echo turn on led for a few seconds
      $HOME/videotst.sh &
      sleep $ledtime
      pkill -9 ffmpeg
      sleep 1
      echo STARTING $PGM
      $PGM &
    fi
    epochsOld=$epochs
  fi
[[ $DEBUG -eq 1 ]] && echo No action taken. Continue to listen
fi
done

videotst.sh


#!/bin/sh
# just to get the webcam to light up...
# redirect stderr to stdout (2>&1), not to a file named "1"
ffmpeg -i /dev/video0 -f null - < /dev/null > /dev/null 2>&1 &

crontab entry

@reboot sleep 15; /home/pi/broadcastswitch.sh > broadcastswitch.log 2>&1

continuousaudio.sh and ffmpegwireless6.sh

See this post: https://drjohnstechtalk.com/blog/2019/04/live-stream-to-youtube-from-a-raspberry-pi-webcam/

evread.py

See this post: https://drjohnstechtalk.com/blog/2020/12/how-to-create-a-software-keyboard/

The idea

I press the Enter button once on the remote to begin the livestream to YouTube. I press it a second time to stop.

By extension this could also control other programs as well (like the photo frame). And other keys could be mapped to other functions. Record-only, don’t livestream, anyone?

I want to do these things because it’s a little tight in the room where I want to livestream – hard to get around. So this keeps me from having to squeeze past other people to access the RPi to, for instance, power cycle it. In my previous treatment, I had livestreaming start up as soon as the RPi booted up, which means it would only stop when it was similarly powered off, which I found somewhat limiting.

The purpose of videotst.sh in Method 2

videotst.sh serves almost no purpose whatsoever! It can simply be commented out. It’s somewhat specific to my webcam.

You see, I wanted some feedback that when I pressed the ENTER button on the remote control, the RPi had read that and was trying to start the livestream. I thought of flashing one of the built-in LEDs on the RPi – which is what Method 1 now does.

With the robotics team we had soldered an external LED onto one of the GPIO pins, but that’s way too much trouble.

So what videotst.sh does for me is engage the webcam – specifically its video component – throwing away the actual video, but with the net result that the webcam’s built-in green LED illuminates for a few seconds! That lets me know, “Yeah, your button press was registered and we’re beginning to start the livestream.” You see, when you run ffmpegwireless6.sh with this webcam, it’s all about the audio. It only uses the audio of the webcam, and thus the green “in use” LED never illuminates, unfortunately, while it is livestreaming the pure audio stream. So, similarly, when you press ENTER a second time to stop the stream, I illuminate the webcam’s LED for a few seconds by using videotst.sh once again.

Techniques developed for this project

evread.py does some nasty buffering of its output, meaning, although it does read the key presses on the remote, it holds the results “close to its chest,” and then spits them out, all at once, when the buffer is full. Well, that totally defeats the purpose needed here, where I want to know if there’s been a single click. After some insightful Internet searches (note that I did not use Google as a verb, a practice I carry into my personal communication) I discovered the program script, which, when armed with the arguments -q -c, allows you to unbuffer the output of a program! And it actually works. Cool.
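
To see the technique in isolation, here is a minimal sketch, with any line-buffered program standing in for evread.py:

script -q -c ./someprogram.py /dev/null | while read line; do
  echo line received: $line
done

Each line now arrives as soon as someprogram.py produces it, instead of in big buffered bursts.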

And I made the command decision to “eat” the input. You see the timer of 3 seconds in broadcastswitch.sh? After you’ve done any button press it throws away any further button presses for the next three seconds. I just think that’ll reduce the misfires. In fact, I might take up the practice of double-clicking the ENTER button just to be sure I actually pressed it.

I’m using the double bracket notation more in my bash scripts. It permits use of the RegEx comparison operator =~. I love regular expressions. I prefer the Perl style, PCRE, while this uses extended regular expressions, ERE. But I suppose those are good as well.
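
A trivial example of the operator, just to show the syntax:

input="1, 28, 0"
if [[ $input =~ 28,\ 0 ]]; then
  echo ENTER detected
fi

The right-hand side is treated as an ERE, so character classes and alternation work there too.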

Getting control over the power LED was a nice coup. I’m only disappointed that you cannot control its brightness – in the dark it throws off quite a bit of light.

The green LED does not seem nearly as bright so I chose not to play with it. What I don’t want is to have to strain to see whether the thing is livestreaming or not.

Of course getting the whole remote control thing to work at all is another great advancement.

Techniques still to be developed

I still might investigate using voice-driven commands in place of a remote. Obviously, that’s a big nut to crack. Even if I managed to turn it on, turning it off while ffmpeg has commandeered the audio channel is even harder. I wonder if ffmpeg can split the audio stream so another process can be run alongside it to listen for voice commands? Or if an upstream process in front of ffmpeg could be used for that purpose? Or simply run with two microphones (seems wasteful of material)?? Needs research.

Suppose you want to take this on the road? Internet service can be unreliable after all. It’s well known you can power the RPi 3 for many hours with a small portable battery. So how about mapping a second button on the remote to a record-only mode (using the arecord utility, for instance)?? Then you can upload the audio at a time of your convenience. That’s something I can definitely program if I find I need it.
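
If I do, it would be little more than swapping a recording script in for continuousaudio.sh in broadcastswitch.sh – an untested sketch, and recordonly.sh is a name I just made up:

#!/bin/sh
# recordonly.sh - capture the USB mic to a timestamped WAV file until killed
arecord -D plughw:1,0 -f cd /home/pi/recording-$(date +%m-%d-%y:%H:%M).wav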

Lingering Problems with this approach

Despite all the care I’ve taken with the continuousaudio.sh script, still, there are times when YouTube does not show that a livestream is going on. I have no idea why at this point. If I knew the cause, I’d have fixed it!

As the livestream aspect of this is actually immaterial to me, I will probably switch to a pure recording mode where I upload in a later step – perhaps all done by the remote control for pure convenience.

Since this blog post has become popular, I may keep it preserved as is and start a new one for this recording approach as some people may genuinely be interested in the livestream aspect.

A very rough estimate of the failure rate is maybe as high as 50% but probably no lower than 25%. So, not great odds if you’re relying on success.

There’s another issue which I consider more minor. The beginning of the stream always sounds like a tape played on fast forward for a few seconds. The end also gets cut off a few seconds early, I think.

Conclusion

We have presented a novel approach to livestreaming on a Raspberry Pi 3 using a remote control for added convenience. All the techniques were home-developed at drjohnstechtalk.com. The materials don’t cost much and it really does work.

References and related

Rii infrared remote control – only $12: Amazon.com: Rii MX3 Multifunction 2.4G Fly Mouse Mini Wireless Keyboard & Infrared Remote Control & 3-Gyro + 3-Gsensor for Google Android TV/Box, IPTV, HTPC, Windows, MAC OS, PS3 : Electronics

25′ USB 2 extender for placing a webcam or USB mic at a distance from the RPi, $15: Amazon.com: HDE USB Extension Cable (USB 2.0 Type A Male to Female) High Speed Data and Power Extension Cable with Active Repeater (25 ft) : Electronics

Reading keyboard input.

Using Remote control to interact with a RPi-based photo frame.

ffmpeg settings to send just audio to YouTube, suppressing video

Battery to make the RPi 3 portable, $17: Amazon.com: Omars Power Bank 10000mAh USB C Battery Pack Slimline Portable Charger with Dual USB Output Compatible with iPhone Xs/XR/XS Max/X, iPad, Galaxy S9 / Note 9 : Cell Phones & Accessories. I’m not exactly sure what to do for an RPi 4 however.

Extended Regular Expressions

How to control the power to the RPi’s LEDs: https://github.com/mlagerberg/raspberry-pi-setup/blob/master/5.2-leds.md


Traffic shaping on linux – an exploration

Intro

I have always been somewhat agog at the idea of limiting bandwidth on my linux servers. Users complain about slow web sites and you want to try it for yourself, slowing your connection down to meet the parameters of their slower connection. More recently I happened on librespeed, an alternative to speedtest.net, where you can run both server and client. But in order to avoid transferring too much data and monopolizing the whole line, I wanted to actually put in some bandwidth throttling. I began an exploration of available methods to achieve this and found some satisfactory approaches that are readily available on Redhat-type linuxes.

bandwidth throttling, bandwidth rate limiting, bandwidth classes – these are all synonyms for what is most commonly called traffic shaping.

What doesn’t work so well

I think it’s important to start with the walls that I hit.

Cgroup

I stumbled on cgroups first. The man page starts in a promising way

cgroup - control group based traffic control filter

Then after you research it you see that support for cgroups was enabled in linux kernels long ago. And there are versions 1 and 2. And only version 1 supports bandwidth limits. But if you’re just a mid-level linux person such as myself, it is confusing and unclear how to take advantage of cgroups. My current conclusion is that it is more a subsystem designed for use by systemd. In fact if you’ve ever looked at a status, for instance of crond, you see a mention of a cgroup:

sudo systemctl status crond
● crond.service - Command Scheduler
   Loaded: loaded (/usr/lib/systemd/system/crond.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2021-08-09 15:44:24 EDT; 5 days ago
 Main PID: 1193 (crond)
    Tasks: 1 (limit: 11278)
   Memory: 2.1M
   CGroup: /system.slice/crond.service
           └─1193 /usr/sbin/crond -n

I don’t claim to know what it all means, but there it is. Some nice abilities to schedule and allocate finite resources, at a very high level.

So I get the impression that no one really uses cgroups to do traffic shaping.

apache web server to the rescue – not

Since I was mostly interested in my librespeed server and controlling its bandwidth during testing, I wondered if the apache web server has this capability built-in. Essentially, it does! There is the module mod_ratelimit. So, quest over, and let the implementation begin! Except not so fast. In fact I did enable that module. And I set it up on my librespeed server. It kind of works, but mostly, not really, and nothing like its documented design.



<Location "/downloads">
    SetOutputFilter RATE_LIMIT
    SetEnv rate-limit 400
    SetEnv rate-initial-burst 512
</Location>

That’s their example section. I have no interest in such low limits and tried various values from 4000 to 12000. I only got two different actual rates from librespeed out of all those various configurations. I could either get 83 Mbps or around 162 Mbps. And that’s it. Merely having any statement whatsoever starts limiting to one of these strange values. With the statement commented out I was getting around 300 Mbps. So I got rate-limiting, but not what I was seeking and with almost no control.

So the apache config approach was a bust for me.

Trickle

There are some linux programs that are perhaps promoted too heavily? Within a minute of posting my first draft of this someone comes along and suggests trickle. Well, on CentOS yum search trickle gives no results. My other OS was SLES v 15 and I similarly got no results. So I’m not enamored with trickle.

tc – now that looks promising

Then I discovered tc – traffic control. That sounds like just the thing. I had to search around a bit on one of my OSes to find the appropriate package, but I found it. On CentOS/Redhat/Fedora the package is iproute-tc. On SLES v15 it was iproute2. On FreeBSD I haven’t figured it out yet.

But it looks unwieldy to use, frankly. Not, as they say, user-friendly.

tcconfig + tc – perfect together

Then I stumbled onto tcconfig, a python wrapper for tc that provides convenient utilities and examples. It’s available, assuming you’ve already installed python, through pip or pip3, depending on how you’ve installed python. Something like

$ sudo pip3 install tcconfig

I love the available settings for tcset – just the kinds of things I would have dreamed up on my own. I wanted to limit download speeds, and only on the web server running on port 443, and only from a specific subnet. You can do all that! My tcset command went something like this:

$ cd /usr/local/bin; sudo ./tcset eth0 --direction outgoing --src-port 443 --rate 150Mbps --network 134.12.0.0/16

$ sudo ./tcshow eth0

{
    "eth0": {
        "outgoing": {
            "src-port=443, dst-network=134.12.0.0/16, protocol=ip": {
                "filter_id": "800::800",
                "rate": "150Mbps"
            }
        },
        "incoming": {}
    }
}

More importantly – does it work? Yes, it works beautifully. I run a librespeed cli with three concurrent streams against my AWS server thusly configured and I get around 149 Mbps. Every time.
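
That client invocation goes something like this – a sketch; --concurrent sets the stream count, and these switches also appear in my wrapper script in the librespeed post below:

$ ./librespeed-cli --concurrent 3 --no-upload --simple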

Note that things are opposite of what you first think of. When I want to restrict download speeds from a server but am imposing traffic shaping on the server (as opposed to on the client machine), from its perspective that is upload traffic! And port 443 is the source port, not the destination port!

Raspberry Pi example

I’m going to try regular librespeed tests on my home RPi which is cabled to my router to do the Internet monitoring. So I’m trying

$ sudo tcset eth0 --direction incoming --rate 100Mbps
$ sudo tcset eth0 --direction outgoing --rate 9Mbps --add

This reflects the reality of the asymmetric rate you typically get from a home Internet connection. tcshow looks a bit peculiar however:

{
    "eth0": {
        "outgoing": {
            "protocol=ip": {
                "filter_id": "800::800",
                "delay": "274.9s",
                "delay-distro": "274.9s",
                "rate": "9Mbps"
            }
        },
        "incoming": {
            "protocol=ip": {
                "filter_id": "800::800",
                "delay": "274.9s",
                "delay-distro": "274.9s",
                "rate": "100Mbps"
            }
        }
    }
}
Results on the RPi

Despite the strange delay-distro appearing in the tcshow output, the results are perfect. Here are my librespeed results, running against my own private AWS server:

Time is Sat 21 Aug 16:17:23 EDT 2021
Ping: 20 ms Jitter: 1 ms
Download rate: 100.01 Mbps
Upload rate: 9.48 Mbps


Conclusion about tcconfig

It’s clear tcset is just giving you a nice interface to tc, but sometimes that’s all you need to not sweat the details and start getting productive.

Possible issue – missing kernel module

On one of my servers (the CentOS 8 one), I had to do a

$ sudo yum install kernel-modules-extra

$ sudo modprobe sch_netem

before I could get tcconfig to really work.

To do list

Make the tc settings permanent (one idea is sketched after this list).

Verify tc + tcconfig work on a Raspberry Pi. (Done – see the Raspberry Pi example above; tc is definitely available for RPi.)
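
For the first item, the most likely approach is the same @reboot crontab trick I use elsewhere – untested for this purpose, just a sketch. In root’s crontab (sudo crontab -e):

@reboot sleep 20; /usr/local/bin/tcset eth0 --direction outgoing --src-port 443 --rate 150Mbps --network 134.12.0.0/16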

Conclusion

We have found a pretty nice and effective way to do traffic shaping on linux systems. The best tool is tc and the best wrapper for it is tcconfig.

References and related

Librespeed is a great speedtest.net alternative for hard-code linux types who love command line and being in full control of both ends of a speed test. I describe it here.

tcconfig’s project page on PyPi.

Power cycling one’s cable modem automatically via an attached RPi. I refer to this blog post specifically because I intend to expand that RPi to also do periodic, automated speedtesting of my home broadband connection, with traffic shaping in place if all goes well (as it seems to thus far).


A nice alternative to speedtest.net for the DIY linux crowd

Intro

I was building some infrastructure around automated speedtest.net tests using speedtest-cli. I noticed the assigned servers keep changing, some servers are categorized as malicious sources, some time out if tested on the hour and half-hour, and results are inconsistent depending on which server you get.

So, I saw that the speedtest-cli (linux command-line python script) has a switch for a “mini” server. When I investigated, that seemed the answer to the problem – you can set up your own mini server and use that for your tests. I.e., control both ends of the test. Great.

The speedtest.net mini server was discontinued in 2017! There’s some commercial replacement. So I thought, forget that. I was disillusioned, and then happened upon a breath of fresh air – an open source alternative to speedtest.net. Enter librespeed.

Some details

librespeed has a command-line program which is an obvious rip-off of speedtest-cli. In fact it is called librespeed-cli and has many similar switches.

There is also a server setup. Really, just a few files you can put on any apache + php web server. There is a web GUI as well, but in fact I am not even that interested in that. And you don’t need to set it up at all.

What I like is that with the appropriate switches supplied to librespeed-cli, I can have it run against my own librespeed server. In some testing configurations I was getting 500 Mbps downloads. Under less favorable circumstances, much less.

Testing, testing, testing

I tested between Europe and the US. I tested through a proxy. I tested from the Azure cloud to an Amazon AWS server. I tested with a single cpu linux server (good old drjohnstechtalk.com) either as the server, or as the client. This was all possible because I had full control over both ends.

Some tips
  1. Play with the librespeed-cli switches. See what works for you. librespeed-cli -h will show you all the options.
  2. Increasing the stream count can compensate for slower PING times (assuming both ends have a fast connection)
  3. It does support proxy, but
  4. Downloads don’t really work through proxy if the server is only running http
  5. Counterintuitively, the cpu burden is on the client, not the server! My servers didn’t show the slightest bit of resource usage.
  6. Corollary to 5. My 4-cpu client to 1-cpu server test was much faster than the other way around where server and client roles were reversed.
  7. Most things aren’t sensitive to upload speeds anyway so seriously consider suppressing that test with the appropriate switch. Your tests will also run a lot faster (18 seconds versus 40 seconds).
  8. Worried about consuming too much bandwidth and transferring too much data? I also developed a solution for that (it will be my next blog post).
  9. So I am running a librespeed server on my little VM on Amazon AWS but I can’t make it public for fear of getting overrun.
  10. ISPs that have excellent interconnects such as the various cloud providers are probably going to give the best results
  11. It is not true your web server needs write access to its directory in my experience. As long as you don’t care about sharing telemetry data and all that.
  12. To emphasize, they supply the librespeed-cli binary, pre-built, for a whole slew of OSes. You do not and should not compile it yourself. For a standard linux VM you will want the binary called librespeed-cli_1.0.9_linux_386.tar.gz
Example files

The point of these files is to test librespeed-cli, from the directory where you copied it to, against your own librespeed server.

json-ns6

[{
    "name": "ns6, Germany (active-servers)",
    "server": "https://ns6.drjohnstechtalk.com/",
    "id": 864,
    "dlURL": "backend/garbage.php",
    "ulURL": "backend/empty.php",
    "pingURL": "backend/empty.php",
    "getIpURL": "backend/getIP.php",
    "sponsorName": "/dev/null/v",
    "sponsorURL": "https://dev.nul.lv/"
}]

wrapper.sh

#!/bin/sh
# see ./librespeed-cli -help for all the options
./librespeed-cli --local-json json-ns6 --server 864 --simple --no-upload --no-icmp --ipv4 --concurrent 4 --skip-cert-verify

Purpose: are we getting good speeds?

My purpose in what I am constructing is to verify we are getting good download speeds. I am not trying to hit it out of the park. That consumes (read, wastes) a lot of resources. I am targeting to prove we can achieve about 150 Mbps downloads. I don’t know anyone who can point to 150 Mbps and honestly say that’s insufficient for them. For some setups that may take four simultaneous streams, for others six. But it is definitely achievable. By not going crazy we are saving a lot of data transfers. AWS charges me for my network usage. So a six-stream download test at 150 Mbps (Megabits per second – about 19 MBytes per second, so a test running roughly 17 seconds) consumes about 325 MBytes of download data. If you’re not being careful with your switches, you can easily nudge that up to 1 GB downloads for a single test.

My librespeed client-to-server tests ran overnight alongside my old approach using speedtest. The speedtest results are all over the place, with a bunch of zeroes for whatever reason, as is typical, while librespeed – and mind you this is from a client in the US, going through a proxy, to a server in Europe – produced much more consistent results. In one case where the normal value was 130 Mbps, it dipped down to 110 Mbps.

Testing it out at home
Test from a home PC against my own librespeed server

I made my test URL on my AWS server private, but a public one is available at https://librespeed.de/

At home of course I want to test with a Raspberry Pi since I work with them so much. There is indeed a pre-built binary for Raspberry Pi. It is https://github.com/librespeed/speedtest-cli/releases/download/v1.0.9/librespeed-cli_1.0.9_linux_armv7.tar.gz
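
Getting it onto the RPi is just a download and unpack – something like this, then a quick check that it runs:

$ wget https://github.com/librespeed/speedtest-cli/releases/download/v1.0.9/librespeed-cli_1.0.9_linux_armv7.tar.gz
$ tar xzf librespeed-cli_1.0.9_linux_armv7.tar.gz
$ ./librespeed-cli -h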

The problem with speedtest in more detail

There were two final issues with speedtest that were the straws that broke the camel’s back, and they are closely related.

When you resolve www.speedtest.net it hits a Content Distribution Network (CDN), and the returned results vary. For instance right now we get:

;; QUESTION SECTION:
;www.speedtest.net. IN A
;; ANSWER SECTION:
www.speedtest.net. 4301 IN CNAME zd.map.fastly.net.
zd.map.fastly.net. 9 IN A 151.101.66.219
zd.map.fastly.net. 9 IN A 151.101.194.219
zd.map.fastly.net. 9 IN A 151.101.2.219
zd.map.fastly.net. 9 IN A 151.101.130.219

Note that you can also run speedtest-cli with the --list switch to get a list of speedtest servers. So in my case I found some servers which produced good results. There was one where I even know the guy who runs the ISP and know he does an excellent job. His speedtest server is 15 miles away. But, in its infinite wisdom, speedtest sometimes thinks my server is in Louisiana, and other times thinks it’s in New Jersey! So the returned server list is completely different for the two cases. And, even though each server gets assigned a unique number, and you can specify that number with the --server switch, it won’t run the test if that particular server wasn’t proposed to you in its initial listing. (It always makes a server listing call, whether you specified --list or not, for its own purposes as to which servers to use.)
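
For reference, that listing-then-selecting dance uses these switches (the server ID here is made up):

$ speedtest-cli --list | head
$ speedtest-cli --server 12345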

I tried to use some tricks to override this behaviour, but short of re-writing the whole thing, it was not going to work. I imagined I could force speedtest-cli to always use a particular IP address, overriding the return from the fastly results, but getting that to work through proxy was not feasible. On the other hand, if you suck it up and accept their randomly assigned server, you have to put up with a lot of garbage results.

So set up your own server, right? The --mini switch seems built to accommodate that. But the mini server was discontinued in 2017. The commercial replacement seemed to have some limits. So it’s dead end upon dead end with speedtest.net.

Conclusion

An open source alternative to speedtest.net’s speedtest-cli has been identified and tested, both server and client. It is librespeed. It gives you a lot more control than speedtest, if that is your thing and you know a smidgeon of linux.

References and related

Just to do your own test with your browser the way you do with speedtest.net: https://librespeed.de/

librespeed-cli: https://github.com/librespeed/speedtest-cli

librespeed-cli binaries download page: https://github.com/librespeed/speedtest-cli/releases

RPi version of librespeed-cli: https://github.com/librespeed/speedtest-cli/releases/download/v1.0.9/librespeed-cli_1.0.9_linux_armv7.tar.gz

The RPi I use for automatically power cycling my cable modem is hard-wired to my router and makes for an excellent platform from which to conduct these speedtests.

librespeed server: https://github.com/librespeed/speedtest

If, in spite of every positive thing I’ve had to say about librespeed, you still want to try the more commercial speedtest-cli, here is that link: https://www.speedtest.net/apps/cli


Raspberry Pi + LED Matrix Display project

Intro

A friend bought a bunch of parts and thought we could work on it together. He wanted to reproduce this project: Raspberry Pi Audio Spectrum Display – Hackster.io

So of interest here is that it involved a 64×64 LED matrix display, and a “hat” sold by Adafruit to supposedly make things easier to connect.

Now I am not a hardware guy and never pretended to be. When we realized that it required soldering, I bought those supplies but I didn’t want to be the one to mess things up so he volunteered for that. And yes, it was a mess.

Equipment

See later on in the post for the equipment we used.

The story continues

Who knew that gold on the PCB could be ruined so easily after you’ve changed your mind about a soldered joint and decided to undo what you’ve done? The experience pretty much validated my whole approach to staying away from soldering. So we ruined that hat and had to order another one. While we waited for it I developed a certain strategy to deal with the shortcomings of the partially destroyed hat. See the section at the bottom entitled Recipe for a broken hat.

Comments on the project

I don’t think the guy did a good job. Details left out where needed, extra stuff added. For instance you don’t need to order that Firebeetle – it’s never used! Also, I’ve been told that his python isn’t very good either. But in general we followed his skimpy instructions. So we ordered the LED matrix from DFRobot, not Adafruit, and I think it’s different. In our case we did indeed follow the project suggestion to wire the 2×8 pin as shown in their (DFRobot’s) picture, leaving out the white wire. Once we soldered the white wire to the hat’s gpio pin 24, we were really in business. What we did not need to do is to solder pin E to either pin 8 or 16. (This is something you apparently need to do for the Adafruit LED.) In our testing it didn’t seem to matter whether or not those connections were made so we left it out on our second hat.

You’d think he might have mentioned just how much soldering is involved. Let’s see: you have a 2×20 connector, a 2×8 connector, plus a single gpio connection = 57 pins to solder. Yuck.

What we got to work

Deploying images on these LED displays is cool. You just kind of have to see it. It’s hard to describe why. The picture below does not do it justice. Think stadium scoreboard.

In rpi-rgb-led-matrix/utils directory we followed the steps in the README.md file to compile the LED viewer:

sudo apt-get update
sudo apt-get install libgraphicsmagick++-dev libwebp-dev -y
make led-image-viewer
Then this little script displays a picture:
#!/bin/sh
# invert images because the sound stuff is otherwise upside-down
sudo led-image-viewer --led-pixel-mapper="U-mapper;Rotate:180" --led-gpio-mapping=adafruit-hat --led-cols=64 --led-rows=64 /home/pi/walk-in-the-woods.jpg
Walk in the woods

Do we have flicker? Just a tiny bit. You wouldn’t notice it unless you were staring at it for a few seconds, and even then it’s just isolated to a small section of the display. Probably shoddy soldering – we are total amateurs.

Tip for your images

Consider that you only have 64 x 64 pixels to work with. So crop your pictures beforehand to focus on the most interesting aspect – people if there are people in the picture (like we’ve done in the above image), specifically faces if there are faces. Otherwise everything will just look like blurs and blobs. You yourself do not have to resize your pictures down to 64 x 64 – the led software will do the resizing. So just focus on cropping down to a square-sized part of the picture you want to draw attention to.
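
If you have ImageMagick installed, a center crop to a square is a one-liner. A sketch – adjust the geometry to suit your picture:

$ convert input.jpg -gravity center -crop 1000x1000+0+0 +repage cropped.jpg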

Real-time audio

So my friend got a USB microphone. I developed the following script to make the python example work with real-time sounds – music playing, conversation, whatever. It’s really cool – just the slightest lag. But, yes, the LEDs bounce up in response to louder sounds.

So in the directory rpi-rgb-led-matrix/bindings/python/samples I created the script drjexample.sh.


#!/bin/sh
# DrJ 6/21
# make the LED react to live sounds by use of a USB microphone
# I am too lazy to look up how to make the python program read from STDIN so I will just
# make the equivalent thing by creating test.wav as a named pipe. It's an old linux trick.

rm test.wav; mkfifo test.wav

# background the python program. It will patiently wait for input
sudo python spectrum_matrix.py &

# Now run ffmpeg
# see my own post, https://drjohnstechtalk.com/blog/2019/04/live-stream-to-youtube-from-a-raspberry-pi-webcam/
ffmpeg \
-thread_queue_size 4096 \
-f alsa -i plughw:1,0 \
-ac 2 \
-y \
test.wav

So note that by having inverted the image (180 degree rotation) we have the sound bars and images both in the same direction, so we can switch between the two modes.

I believe to get the python bindings to work we needed to install some additional python libraries, but that part is kind of a blur now. I think what should work is to follow the directions in the README.md file in the directory rpi-rgb-led-matrix/bindings/python, namely

sudo apt-get update && sudo apt-get install python2.7-dev python-pillow -y
make build-python
sudo make install-python

Hopefully that takes care of it. For sure you need numpy.
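
If numpy is missing, on Raspbian I believe something like this pulls it in for the python2.7 these samples use (an assumption on my part – adjust for your python):

$ sudo apt-get install python-numpy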

Future project ideas

How about a board that normally plays a slideshow, but when the ambient sound reaches a certain level – presumably because music is playing – it switches to real-time sound bar mode?? We think it’s doable.

Recipe for a broken hat

For the LED matrix display we got the DFRobot one since that’s what was linked to in the project guide. But the thing is, the reviewer’s write-up is incomplete so what you need to do involves a little guesswork.

At the end of the day all we could salvage while we wait for a new Adafruit hat to come in is the top fourth of the display! The band below it is either blank, or if we push on the cables a certain way, an unreliable duplicate of the top fourth.

The next band suffers from a different problem. Its blue is non-functional. So it’s no good…

And the last band rarely comes on at all.

OK? So we’re down to a 16 x 64 pixel useable area.

But despite all those problems, it’s still kind of cool, I have to admit! I know at work we have these digital sign boards and this reminds me of that. So first thing I did was to create a custom banner – scrolling text.

I call this display program drjexample2.sh. I put it in the directory rpi-rgb-led-matrix/examples-api-use

#!/bin/sh
# DrJ – 6/21
# nice example
sudo ./demo -D1 --led-rows=64 --led-cols=64 --led-multiplexing=1 --led-brightness=50 -m25 /home/pi/pil_text.ppm

But that requires the existence of a ppm file containing the text I wanted to scroll, since I was working by example. So to create that custom PPM file I created this python script.

#!/usr/bin/python3
from PIL import Image, ImageDraw, ImageFont

# our fonts
###fnt = ImageFont.truetype("/usr/share/fonts/truetype/dejavu/DejaVuSans-Bold.ttf", 14)
fnt = ImageFont.truetype("/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf", 16)
fnt36 = ImageFont.truetype("/usr/share/fonts/truetype/dejavu/DejaVuSans-Bold.ttf", 36)
fnt2 = ImageFont.truetype("/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf", 18)
fntBold = ImageFont.truetype("/usr/share/fonts/truetype/dejavu/DejaVuSans-Bold.ttf", 40)

###img = Image.new('RGB', (260, 30), color = (73, 109, 137))
img = Image.new('RGB', (260, 30), color = 'black')

d = ImageDraw.Draw(img)
###d.text((0,0), "Baby, welcome to our world!!!", font=fnt, fill=(255,255,0))
d.text((0,0), "Baby, welcome to our world!!!", font=fnt, fill='yellow')

#img.save('pil_text.png')
img.save('pil_text.ppm', format='PPM')

Once I discovered the problem with the bands – by way of running all the demos and experimenting with the arguments a bit – I noticed this directory: rpi-rgb-led-matrix/utils. I perked right up because it held out the promise of displaying jpeg images. Anyone who has seen any of my posts knows that I am constantly putting out raspberry pi based photo displays in one form or another. For instance see https://drjohnstechtalk.com/blog/2021/01/raspberry-pi-photo-frame-using-the-pictures-on-your-google-drive-ii/

or

https://drjohnstechtalk.com/blog/2020/12/raspberry-pi-photo-frame-using-your-pictures-on-your-google-drive/

Matrix Display

But see how cool it is? No? It’s a sleeping, recumbent baby. It’s like the further away from it you get, the clearer it becomes. Trust me, in person it does look good. And it feels like creating one of those bright LED displays they use in ballparks.

But this same picture also shows the banding problem.

To get the picture displayed I first cropped it to make it wide and short. I wisely chose a picture which was amenable to that approach. I created this display script:

#!/bin/sh
sudo led-image-viewer --led-no-hardware-pulse --led-gpio-mapping=adafruit-hat --led-cols=64 --led-rows=64 --led-multiplexing=1 /home/pi/baby-sleeping.jpg

But before that can all work, you need to compile the program. Just read through the README.md. It gives these instructions which I followed:

sudo apt-get update
sudo apt-get install libgraphicsmagick++-dev libwebp-dev -y
make led-image-viewer

It went kind of slowly on my RPi 3, but it worked without incident. When I initially ran the led-image-viewer nothing displayed. So the script above shows the results of my experimentation which seems appropriate for our particular matrix display.

How did we get here?

Just to mention it, we followed the general instructions in that project. So I guess no need to repeat the recipe here.

Slideshow

You know I’m not going to let an opportunity to create a slideshow go to waste. So I created a second appropriately horizontal image which I might effectively show in my narrow available band of 16 x 64 pixels. Just to share the little script, it is here:

#!/bin/sh
cd /home/pi
sudo led-image-viewer --led-no-hardware-pulse --led-gpio-mapping=adafruit-hat --led-cols=64 --led-rows=64 --led-multiplexing=1 -w6 -f baby-and-mom.jpg baby-sleeping.jpg

This infinitely loops over two pictures, displaying each for six seconds.

Start on boot

I have the slideshow start on boot using a simple technique I’ve developed. You edit the crontab file (crontab -e) and enter at the bottom:

@reboot sleep 35; /home/pi/rpi-rgb-led-matrix/utils/drjexample2.sh > drjexample.log 2>&1
Lessons learned

My friend ordered all the stuff listed on the DFRobot project page, including their 64×64 LED matrix. Probably a mistake. They basically don’t document it and refer you to Adafruit, where they deal with a 64×64 LED matrix – their own – which may or may not have the same characteristics, leaving you somewhat in limbo. Next time I would order from Adafruit.

As mentioned above that gold foil on printed circuit boards really does come off pretty easily, and then you’re hosed. Because of the lack of technical specs we were never really sure if we needed to solder the E contact to 8 or to 16 and destroyed all those terminals in the process of backing out.

I actually created custom ppm files of solid colors, red, blue, green, white, so that I could prove my suspicions about the third band. Red and green display fine, blue not at all. White displays as yellow.

Viewed close up, the LED matrix doesn’t look like much, and of course I was close up when I was working with it. But when I stepped back I realized how beautiful a brightly illuminated picture of a baby can be! The pixels merge and your mind fills in the spaces between I guess.

The original idea was to tackle sound but I got stuck on the ability to use it as a photo frame (you know me). But he wants to return to sound which I am dreading….

Testing audio

In our first tests, the audio example wasn’t working. But now it seems to be. The project guy’s python code is named spectrum_matrix.py, if I recall correctly. It goes into rpi-rgb-led-matrix/bindings/python/samples. And as he says, you run it from that directory as

$ sudo python spectrum_matrix.py

But, his link to test.wav is dead – yet another deficiency in his write-up. At least in my testing, not every possible WAV file may work. This one, moo sounds, does however: http://soundbible.com/grab.php?id=1778&type=wav So, it plays for a few seconds – I can hear it through earphones – and the LEDs kind of go up and down. We recorded a wav file and found that it does not work. The error reads like this:

Home directory not accessible: Permission denied
W: [pulseaudio] core-util.c: Failed to open configuration file '/root/.config/pulse//daemon.conf': Permission denied
W: [pulseaudio] daemon-conf.c: Failed to open configuration file: Permission denied
Traceback (most recent call last):
File "spectrum_matrix.py.orig", line 56, in
matrix = calculate_levels(data,chunk,sample_rate)
File "spectrum_matrix.py.orig", line 49, in calculate_levels
power = np.reshape(power,(64,chunk/64))
File "/usr/lib/python2.7/dist-packages/numpy/core/fromnumeric.py", line 292, in reshape
return _wrapfunc(a, 'reshape', newshape, order=order)
File "/usr/lib/python2.7/dist-packages/numpy/core/fromnumeric.py", line 56, in _wrapfunc
return getattr(obj, method)(*args, **kwds)
ValueError: cannot reshape array of size 2048 into shape (64,64)

Note that I had renamed the original spectrum_matrix.py to spectrum_matrix.py.orig because we started messing with it. Actually, I pretty much get the same error on the file that works; it’s just that I get it at the end of the LED show, not immediately.

Superficially, the two files differ somewhat in their recording format:

$ file ~/voice.wav moo.wav

/home/pi/voice.wav: RIFF (little-endian) data, WAVE audio, Microsoft PCM, 16 bit, mono 44100 Hz
moo.wav: RIFF (little-endian) data, WAVE audio, Microsoft PCM, 16 bit, stereo 32000 Hz

I played the voice.wav on a Windows PC – it played just fine. Just a little soft.

So what’s the essential difference between the two files? Well, something that jumps out is that the one is mono, the other stereo. Can we somehow test for that? Yes! I made the simplest possible conversion of a mono to a stereo file with the following ffmpeg command:

$ ffmpeg -i ~/voice.wav -ac 2 converted.wav

$ file converted.wav


converted.wav: RIFF (little-endian) data, WAVE audio, Microsoft PCM, 16 bit, stereo 44100 Hz

I copy converted.wav to test.wav and re-run spectrum_matrix.py. This time it works!

Not sure how my friend produced his wave file. But I want to make one on my own. He had plugged a USB microphone into the RPi. I have done research somewhat related to this – publishing a livestream to Youtube, audio only, video grayed out. That’s in this post: https://drjohnstechtalk.com/blog/2019/04/live-stream-to-youtube-from-a-raspberry-pi-webcam/ So I am not afraid to use ffmpeg any longer. So I created this tiny script, record.sh, with my desired arguments:

#!/bin/sh
# see my own post, https://drjohnstechtalk.com/blog/2019/04/live-stream-to-youtube-from-a-raspberry-pi-webcam/
ffmpeg \
-thread_queue_size 4096 \
-f alsa -i plughw:1,0 \
-ac 2 \
/tmp/ffmpeg.wav

And I ran it while speaking loudly into the mic. It ran OK. The output file comes out as

test.wav: RIFF (little-endian) data, WAVE audio, Microsoft PCM, 16 bit, stereo 48000 Hz

And…it plays with the LEDs dancing. Great.

Audio: LED responds to live input

My friend wants the LED to respond to live input such as his stereo at home. Being a terrible python programmer but at least middling linux techie, I see a way to accomplish it without his having to touch the sample program by employing an old Unix trick: named pipes. So I create this script, drjexample.sh which combines all the knowledge we gained above into one simple script:


#!/bin/sh
# DrJ 6/21
# make the LED react to live sounds by use of a USB microphone
# I am too lazy to look up how to make the python program read from STDIN so I will just
# make the equivalent thing by creating test.wav as a named pipe. It's an old linux trick.

rm test.wav; mkfifo test.wav

# background the python program. It will patiently wait for input
sudo python spectrum_matrix.py &

# Now run ffmpeg
# see my own post, https://drjohnstechtalk.com/blog/2019/04/live-stream-to-youtube-from-a-raspberry-pi-webcam/
ffmpeg \
-thread_queue_size 4096 \
-f alsa -i plughw:1,0 \
-ac 2 \
-y \
test.wav

So a named pipe is just that. Instead of the pipe character we know and love, you coordinate process output from the first process with process input of the second process by way of this special file. The operating system does all the hard work. But it works just as though you had used the | character.
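
If named pipes are new to you, here is the whole concept in miniature:

$ mkfifo mypipe
$ cat mypipe &
$ echo hello > mypipe

The backgrounded cat blocks patiently until echo writes into the pipe, and then hello pops out – the same effect as echo hello | cat, just coordinated through a special file.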

Best of all, this script actually works, to wit, the LED is now responding to live input. I see it jump when I say test into the microphone. Unknown to me is whether it will play for extended periods of time – it would be easy for one process to output faster than the other can input, for instance, so that a backlog builds up. The responsiveness is good – I would guess no more than about 200 ms of lag.

Equipment

We used the equipment from this post, except the Firebeetle. You know, that’s just another reason I consider that post to be a sloppy effort. Who lists a piece of equipment that they don’t use?? And again, next time I would rather search for an LED display from Adafruit. We use an RPi 3 and installed the image on a micro SD card with the new-ish Raspberry Pi imager, which works just great: Introducing Raspberry Pi Imager, our new imaging utility – Raspberry Pi

Oh, plus the soldering iron and solder. And a multi-colored ribbon cable with female couplers at one end and a 2×8 connector on the other. Not sure if that came with the LED or not since I didn’t order it.

Our power supply is about 5 amps and plugs into the hat. We do not need power for the RPi.

References and related

This post makes it seem like a walk in the park. Our experience is not so much. Raspberry Pi Audio Spectrum Display – Hackster.io

People seem to like this Raspberry Pi photo frame post I did which draws photos from your Google drive. https://drjohnstechtalk.com/blog/2020/12/raspberry-pi-photo-frame-using-your-pictures-on-your-google-drive/

Introducing Raspberry Pi Imager, our new imaging utility – Raspberry Pi – for putting the operating system formerly known as Raspbian onto a micro SD card.

test.wav (Use this moo wav file and rename to test.wav): http://soundbible.com/grab.php?id=1778&type=wav

Useful ffmpeg commands: 20+ FFmpeg Commands For Beginners – OSTechNix

How I figured out hot to livestream audio to YouTube without video from a RPi using ffmpeg is documented here: https://drjohnstechtalk.com/blog/2019/04/live-stream-to-youtube-from-a-raspberry-pi-webcam/

Github project for this effort (not completely working as of yet):


Raspberry Pi photo frame using the pictures on your Google Drive II

Intro

This is basically the same post as my previous post, Raspberry Pi photo frame using your pictures on your Google Drive. There are several ideas I am introducing in this treatment.

  • better time separation of the photos for a more meaningful viewing
  • smart resizing of photos to effectively enlarge narrow photos
  • analysis of photos for date and time
  • analysis of photos for GPS info, converting to city and even address!
  • build up of alternate slideshow which includes date, file, folder and location information embedded at the bottom of every picture
  • quality control check to make sure file is an actual JPEG
  • tiny thumbnail pictures are skipped
  • rotating slideshow refreshes either daily or every three days
  • pictures of documents are excluded (future enhancement)
  • a nice picture is displayed when the slideshow is refreshed

Mostly for my own sake, I’ve re-named most of the relevant files and re-worked some as well in order to avoid name conflicts.

I find this treatment is pretty robust and can withstand a lot of errors and mistakes.

So let’s get started.

The easier way to get the files

Because there are now so many files – 18 at last count! – I’ve bundled them all into a tar file. So to get them all in one fell swoop do this.

$ wget https://drjohnstechtalk.com/blog/downloads/photoFrameII.tar

$ tar xvf photoFrameII.tar

Then skip down to the section of this post called crontab entries, which you will still need to do.

But because I think the scripts could be useful for other projects as well, I’m including them here in their entirety in the following section.

The files

The brains of the thing is master3.sh.

master3.sh


#!/bin/sh
# DrJ 1/2021
# call this from cron once a day to refresh the random slideshow
NUMFOLDERS=20
DEBUG=1
HOME=/home/pi
RANFILE=$HOME/random.list
REANFILE=$HOME/rean.list
DISPLAYFOLDER=$HOME/Pictures
DISPLAYFOLDERTMP=$HOME/Picturestmp
EXIFTMP=$HOME/EXIFtmp
EXIF=$HOME/EXIF
TXTDIR=$HOME/picstxt
MSHOW=$HOME/mediashow
MSHOW2=$HOME/mediashowtmp2
MSHOW3=$HOME/mediashowtmp3
SLEEPINTERVAL=1
STARTFOLDER="MaryDocs/Pictures and videos"

echo "Starting master process at "`date`


cd $HOME

rm -rf $DISPLAYFOLDERTMP
mkdir $DISPLAYFOLDERTMP

#listing of all Google drive files starting from the picture root
# this takes a few minutes so we may want to skip for debugging
if [ "$1" = "skip" ]; then
  if [ $DEBUG -eq 1 ]; then echo SKIP Listing all files from Google drive; fi
else
  if [ $DEBUG -eq 1 ]; then echo Listing all files from Google drive; fi
  rclone ls remote:"$STARTFOLDER" > files
# filter down to only jpegs, lose the docs folders and the tiny JPEGs
  if [ $DEBUG -eq 1 ]; then echo Picking out the JPEGs and losing the small images; fi
  egrep '\.[jJ][pP][eE]?[gG]$' files |awk '$1 > 11000 {$1=""; print substr($0,2)}'|grep -i -v /docs/ > jpegs.list
fi

# check if we got anything. If our Internet dropped, there may have been a problem, for instance
flines=`cat files|wc -l`
if [ $flines -lt 60 ]; then
  echo "rclone did not produce enough files. Check your Internet setup and rclone configuration."
  echo Only $flines files in the file listing - not enough - so pausing 60 seconds and starting over... at `date`
# start a new job and kill ourselves!
  nohup $HOME/master3.sh > master.log 2>&1 &
  exit
fi

# throw NUMFOLDERS or so random numbers for picture selection, select triplets of photos by putting
# names into a file
if [ $DEBUG -eq 1 ]; then echo "\nGenerate random filename triplets"; fi
./random-files3.pl -f $NUMFOLDERS -j jpegs.list -r $RANFILE

# copy over these 60 jpegs
if [ $DEBUG -eq 1 ]; then echo "\nCopy over these random files"; fi
cat $RANFILE|while read line; do
  if [ $DEBUG -eq 1 ]; then echo filepath is $line; fi
  rclone copy remote:"${STARTFOLDER}/$line" $DISPLAYFOLDERTMP
  sleep $SLEEPINTERVAL
done

# do a re-analysis to push pictures further apart in time
if [ $DEBUG -eq 1 ]; then echo "\nRe-analyzing pictures for their timestamps"; fi
cd $DISPLAYFOLDERTMP; $HOME/reanalyze.pl

# copy over just the new pictures that we determined were needed
if [ $DEBUG -eq 1 ]; then echo "\nCopy over the needed replacement files"; fi
cat $REANFILE|while read line; do
  if [ $DEBUG -eq 1 ]; then echo filepath is $line; fi
  rclone copy remote:"${STARTFOLDER}/$line" $DISPLAYFOLDERTMP
  sleep $SLEEPINTERVAL
done

# QC: toss out the pics which are not actually JPEGs
if [ $DEBUG -eq 1 ]; then echo "\nQC: Toss out the pics which are not actually JPEGs"; fi
cd $DISPLAYFOLDERTMP; ../QC.pl

# save EXIF metadata for later
if [ $DEBUG -eq 1 ]; then echo "\nSave EXIF metadata for later"; fi
cd $DISPLAYFOLDERTMP; $HOME/get-all-EXIF.sh
rm -rf $EXIF;mv $EXIFTMP $EXIF

# analyze EXIF info to extract most interesting things
if [ $DEBUG -eq 1 ]; then echo "\nAnalyze EXIF data"; fi
rm -rf $TXTDIR; $HOME/analyze.sh

# rotate pics as needed
if [ $DEBUG -eq 1 ]; then echo "\nRotate the pics which need it"; fi
cd $DISPLAYFOLDERTMP; $HOME/rotate-as-needed.sh

# resize pics
if [ $DEBUG -eq 1 ]; then echo "\nSize all pics to the display size"; fi
$HOME/resize.sh

# create text info + images
if [ $DEBUG -eq 1 ]; then echo "\nEmbed pic info"; fi
$HOME/embedpicinfo.sh

cd ~

# kill any old slideshow
if [ $DEBUG -eq 1 ]; then echo Killing old fbi slideshow; fi
sudo pkill -9 -f fbi
pkill -9 -f m3.pl

# remove old pics
if [ $DEBUG -eq 1 ]; then echo Removing old pictures; fi
rm -rf $DISPLAYFOLDER

mv $DISPLAYFOLDERTMP $DISPLAYFOLDER
cp $MSHOW3 $MSHOW

touch refresh

#run looping fbi slideshow on these pictures
if [ $DEBUG -eq 1 ]; then echo Start "\nfbi slideshow in background"; fi
cd $DISPLAYFOLDER ; nohup ~/m3.pl  >> ~/m3.log 2>&1 &

if [ $DEBUG -eq 1 ]; then echo "And now it is "`date`; fi

random-files3.pl

#!/usr/bin/perl
use Getopt::Std;
my %opt=();
getopts("c:df:j:r:",\%opt);
$nofolders = $opt{f} ? $opt{f} : 20;
$DEBUG = $opt{d} ? 1 : 0;
$cutoff = $opt{c} ? $opt{c} : 5;
$cutoffS = 60*$cutoff;
$jpegs = $opt{j} ? $opt{j} : "jpegs.list";
$ranpicfile = $opt{r} ? $opt{r} : "jpegs-random.list";
print "d,f,j,r: $opt{d}, $opt{f}, $opt{j}, $opt{r}\n" if $DEBUG;
$mshowt = "mediashowtmp";
open(JPEGS,$jpegs) || die "Cannot open jpegs listing file $jpegs!!\n";
@jpegs = <JPEGS>;
# remove newline character
$nopics = chomp @jpegs;
open(RAN,"> $ranpicfile") || die "Cannot open random picture file $ranpicfile!!\n";
for($i=0;$i<$nofolders;$i++) {
  $t = int(rand($nopics-2));
  print "random number is: $t\n" if $DEBUG;
# a lot of our pics follow this naming convention
# 20160831_090658.jpg
  ($date,$time) = $jpegs[$t] =~ /(\d{8})_(\d{6})/;
  if ($date) {
    print "date, time: $date $time\n" if $DEBUG;
# ensure neighboring picture is at least five minutes different in time
    $iPO = $iP = $diff = 1;
    ($hr,$min,$sec) = $time =~ /(\d\d)(\d\d)(\d\d)/;
    $secs = 3600*$hr + 60*$min + $sec;
    print "Pre-pic logic\n";
    while ($diff < $cutoffS) {
      $iP++;
      $priorPic = $jpegs[$t-$iP];
      $Pdate = $Ptime = 0;
      ($Pdate,$Ptime) = $priorPic =~ /(\d{8})_(\d{6})/;
      ($Phr,$Pmin,$Psec) = $Ptime =~ /(\d\d)(\d\d)(\d\d)/;
      $Psecs = 3600*$Phr + 60*$Pmin + $Psec;
      print "hr,min,sec,Phr,Pmin,Psec: $hr,$min,$sec,$Phr,$Pmin,$Psec\n" if $DEBUG;
      $diff = abs($secs - $Psecs);
      print "diff: $diff\n" if $DEBUG;
# end our search if we happened upon different dates
      $diff = 99999 if $Pdate ne $date;
    }
# post-picture logic - same as pre-picture
    print "Post-pic logic\n";
    $diff = 0;
    while ($diff < $cutoffS) {
      $iPO++;
      $postPic = $jpegs[$t+$iPO];
      $Pdate = $Ptime = 0;
      ($Pdate,$Ptime) = $postPic =~ /(\d{8})_(\d{6})/;
      ($Phr,$Pmin,$Psec) = $Ptime =~ /(\d\d)(\d\d)(\d\d)/;
      $Psecs = 3600*$Phr + 60*$Pmin + $Psec;
      print "hr,min,sec,Phr,Pmin,Psec: $hr,$min,$sec,$Phr,$Pmin,$Psec\n" if $DEBUG;
      $diff = abs($Psecs - $secs);
      print "diff: $diff\n" if $DEBUG;
# end our search if we happened upon different dates
      $diff = 99999 if $Pdate ne $date;
    }
  } else {
    $iP = $iPO = 2;
  }
  $priorPic = $jpegs[$t-$iP];
  $Pic = $jpegs[$t];
  $postPic = $jpegs[$t+$iPO];
  print RAN qq($priorPic
$Pic
$postPic
);
# this is how we'll preserve the order of the pictures. ls -1 often gives a different order!!
($p1) = $priorPic =~ /([^\/]+)$/;
($p2) = $Pic =~ /([^\/]+)$/;
($p3) = $postPic =~ /([^\/]+)$/;
print "p1 p2 p3: $p1 $p2 $p3" if $DEBUG;
$picsinorder .= $p1 . "\0" . $p2 . "\0" . $p3 . "\0";
}
close(RAN);
open(MS,">$mshowt") || die "Cannot open mediashow file $mshowt!!\n";
print MS $picsinorder;
close(MS);
print "pics in order: $picsinorder\n" if $DEBUG;

reanalyze.pl


#!/usr/bin/perl
use Getopt::Std;
my %opt=();
#
# assumption is that we are running this from a directory containing pictures
$tier1 = 100; $tier2 = 200; $tier3 = 300; # secs
$DEBUG = 1;
$HOME = "/home/pi";
# pics are here
$pNames = "$HOME/reanpicnames";
$ranfile = "$HOME/random.list";
$reanfile = "$HOME/rean.list";
$origfile = "$HOME/jpegs.list";
$mshowt = "$HOME/mediashowtmp";
$mshow2 = "$HOME/mediashowtmp2";

open(REAN,">$reanfile") || die "Cannot open reanalyze file $reanfile!!\n";
$ms = `cat $mshowt`;
print "Original media show: $ms\n" if $DEBUG;
@lines = split('\0',$ms);
$Pdate = $Phr = $Pmin = $Psec = 0;
$diff = 9999;
for($i=0;$i<@lines;$i++){
  $date = 0;
  $secs = $ymd = 0;
  $_ = $lines[$i];
  $file = $_;
# ignore pictures with names like 20130820_180050.jpg
  next if /\d{8}_\d{4}/;
  open(ANAL,"$HOME/getinfo.py \"$file\"|") || die "Cannot open file: $file!!\n";
  print "filename: $file\n" if $DEBUG;
  while(<ANAL>){
#extract date and time from remaining pictures, if possible
# # DateTimeOriginal = 2018:08:18 20:16:47
#    print STDERR "DATE: $_" if $DEBUG;
if (/date/i && $date++ < 1) {
   print "date match in getinfo.pyoutput: $_" if $DEBUG;
   ($ymd,$hr,$min,$sec) = /(\d{4}:\d\d:\d\d) (\d\d):(\d\d):(\d\d)/;
   $secs = 3600*$hr + 60*$min + $sec;
   print "file,secs,ymd,i: $file,$secs,$ymd,$i\n" if $DEBUG;
   $YMD[$i] = $ymd;
   $SECS[$i] = $secs;
}
} # end loop over analysis of this pic
} # end loop over all files
# now go over that
$oldfolder = 0;
for($i=1;$i<@lines;$i++){
 $folder = int($i/3) + 1;
 next if $folder == $oldfolder;
   print "analyzing results. folder no. $folder\n" if $DEBUG;
# analyze pics in triplets
# center pic
   $j = ($folder - 1)*3 + 1;
   for ($o=-1;$o<2;$o+=2){
     $k=$j+$o;
     print "j,k,o: $j,$k,$o\n" if $DEBUG;
     next unless $SECS[$j] > 0 && $YMD[$j] == $YMD[$k] && $YMD[$j] > 0;
     print "We have non-0 dates we're dealing with\n" if $DEBUG;
     $file = $lines[$k];
     chomp($file);
     $diff = abs($SECS[$j] - $SECS[$k]);
     print "diff: $diff\n" if $DEBUG;
     next unless $diff < $tier3;
# the closer the files are together the more we push away
     $bump = 1 if $diff < $tier3;
     $bump = 2 if $diff < $tier2;
     $bump = 3 if $diff < $tier1;
# get full filepath
     $filepath = `grep \"$file\" $ranfile`;
     chomp($filepath);
# now use that to search within the jpegs file listing
     $prog = $o < 0 ? "head" : "tail";
     $newfilepath = `grep -C$bump "$filepath" $origfile|$prog -1`;
     ($newfile) = $newfilepath =~ /([^\/]+)$/;
     chomp($newfile);
     print "file,filepath,newfile,newfilepath,bump: $file,$filepath,$newfile,$newfilepath,$bump\n" if $DEBUG;
     print REAN $newfilepath;
# we'll get the new pictures over in a separate step to keep this more atomic
     $ms =~ s/$file/$newfile/;
    }
    $oldfolder = $folder;
} # end loop over pics
# print out new mediashow pics in order
print "Printing new mediashow: $ms\n" if $DEBUG;
open(MS,">$mshow2") || die "Cannot open mediashow $mshow2!!\n";
print MS $ms;
close(MS);

QC.pl

                    

#!/usr/bin/perl
# kick out the non-JPEG files - sometimes they creep in
$DEBUG = 1;
$HOME = "/home/pi";
$mshow2 = "$HOME/mediashowtmp2";
$mshow3 = "$HOME/mediashowtmp3";
$ms = `cat $mshow2`;
@pics = split('\0',$ms);
foreach $file (@pics) {
  print "file is $file\n" if $DEBUG;
#DSC00185.JPG: JPEG image data, JFIF standard 1.01...
  $res = `file "$file"|cut -d: -f2`;
  if ($res =~ /JPEG/i){
    print "This file is indeed a JPEG image\n" if $DEBUG;
  } else {
    print "Not a JPEG image! We have to remove this file form the mediashow\n" if $DEBUG;
    $ms =~ s/$file\0//;
  }
}
# print out new mediashow pics in order
print "Printing new mediashow: $ms\n" if $DEBUG;
open(MS,">$mshow3") || die "Cannot open mediashow $mshow3!!\n";
print MS $ms;
close(MS);

get-all-EXIF.sh

                    

#!/bin/sh
# DrJ 1/2021
# preserve EXIF info of all the images because our rotate step removes it
# and we will use it in subsequent steps
# assumption is that our current directory is the one where we want to read files
EXIFTMP=~/EXIFtmp
mkdir $EXIFTMP
ls -1|while read line; do
  echo file is "$line"
  ~/getinfo.py "$line" > $EXIFTMP/"$line"
done

analyze.sh

                    

#!/bin/sh
# DrJ 1/2021
# try to extract date, file and folder name and even GPS info, create jpegs with info
# for each image
# assumption is that our current directory is the one where we want to alter files
HOME=/home/pi
TXTDIR=$HOME/picstxt
# it's assumed EXIF info for each pic has already been extracted and put into the EXIF directory
EXIF=$HOME/EXIF
mkdir $TXTDIR
cd $EXIF
ls -1|while read line; do
  echo file is "$line"
  echo -n "$line"|../analyzeDate.pl > "$TXTDIR/${line}"
  echo -n "$line"|../analyzeGPS.pl >> "$TXTDIR/${line}"
done

rotate-as-needed.sh

                    

#!/bin/sh
# DrJ 12/2020
# some of our downloaded files will be sideways, and fbi doesn't auto-rotate them as far as I know
# assumption is that our current directory is the one where we want to alter files
ls -1|while read line; do
  echo file is "$line"
  o=`~/getinfo.py "$line"|grep -ai orientation|awk '{print $NF}'`
  echo orientation is $o
  if [ "$o" -eq "6" ]; then
    echo "90 clockwise is needed, o is $o"
# rotate and move it
    ~/rotate.py -90 "$line"
    mv rot_"$line" "$line"
  elif [ "$o" -eq "8" ]; then
    echo "90 counterclock is needed, o is $o"
# rotate and move it
    ~/rotate.py 90 "$line"
    mv rot_"$line" "$line"
  elif [ "$o" -eq "3" ]; then
    echo "180 rot is needed, o is $o"
# rotate and move it
    ~/rotate.py 180 "$line"
    mv rot_"$line" "$line"
  fi
done

resize.sh

                    

#!/bin/sh
# DrJ 2/2021
# To combat the RPi's inherent sluggish performance we'll downsize the pictures in advance to save fbi the effort
#
# on the pidisplay fbset gives:
#mode "800x480"
#    geometry 800 480 800 480 32
#    timings 0 0 0 0 0 0 0
#    rgba 8/16,8/8,8/0,8/24
#endmode
displaywidth=`fbset|grep geometry|awk '{print $2}'`
displayheight=`fbset|grep geometry|awk '{print $3}'`

ls -1|while read line; do
  echo file is "$line"
  ~/fancyresize.py $displaywidth $displayheight "$line"
  mv resize_"$line" "$line"
done

embedpicinfo.sh

                    

#!/bin/sh
# DrJ 2/2021
# Create a txt_-prefixed version of each picture with its info (filename, date, location) embedded at the bottom
#
# on the pidisplay fbset gives:
#mode "800x480"
#    geometry 800 480 800 480 32
#    timings 0 0 0 0 0 0 0
#    rgba 8/16,8/8,8/0,8/24
#endmode
displaywidth=`fbset|grep geometry|awk '{print $2}'`
displayheight=`fbset|grep geometry|awk '{print $3}'`

ls -1|while read line; do
  echo file is "$line"
# this will create a new image with same name prepended with txt_
  ~/embedpicinfo.py $displaywidth $displayheight "$line"
done

Auxiliary files

rotate.py

                    

#!/usr/bin/python3
# call with two arguments: degrees-to-rotate and filename
import PIL, os
import sys
from PIL import Image
# first do: pip3 install piexif
import piexif

degrees = int(sys.argv[1])
pic = sys.argv[2]

picture= Image.open(pic)
# see https://github.com/hMatoba/Piexif for piexif writeup
# this method of preserving EXIF info does not always work, and
# causes script to crash when it fails!
##exif_dict = piexif.load(picture.info["exif"])
##exif_bytes = piexif.dump(exif_dict)
## both rotate and preserve EXIF data
##picture.rotate(degrees,expand=True).save("rot_" + pic,"jpeg", exif=exif_bytes)
# rotate (which will blow away EXIF info, sorry...)
picture.rotate(degrees,expand=True).save("rot_" + pic,"jpeg")

getinfo.py

                    

#!/usr/bin/python3
import os,sys
from PIL import Image
from PIL.ExifTags import TAGS

for (tag,value) in Image.open(sys.argv[1])._getexif().items():
        print ('%s = %s' % (TAGS.get(tag), value))


embedpicinfo.py

                    

#!/usr/bin/python3
# from https://auth0.com/blog/image-processing-in-python-with-pillow/
# fonts are described here:
# https://pillow.readthedocs.io/en/stable/reference/ImageDraw.html
from PIL import Image, ImageDraw, ImageFont
import sys, os

width = int(sys.argv[1])
height = int(sys.argv[2])
# for Pidisplay:
#width = 800
#height = 480
imageFile = sys.argv[3]
imageandtext = 'txt_' + imageFile

tfile = '../picstxt/' + imageFile
f = open(tfile)
txtlines = f.readlines()
f.close()

# our fonts
fnt = ImageFont.truetype("/usr/share/fonts/truetype/dejavu/DejaVuSans-Bold.ttf", 14)
#fnt = ImageFont.truetype("/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf", 40)
fnt36 = ImageFont.truetype("/usr/share/fonts/truetype/dejavu/DejaVuSans-Bold.ttf", 36)
fnt2 = ImageFont.truetype("/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf", 18)
fntBold = ImageFont.truetype("/usr/share/fonts/truetype/dejavu/DejaVuSans-Bold.ttf", 40)
# PiDisplay resolution is 800x480
margin = 35
# semi-parameterized variables
x0 = margin - 30
x1 = margin -15
cwidth = 6
yline = 27
ychevron=(yline-28)/2
ycoff = 5
yoffset = 12
yrec= -10
rpad = 15
#
textimagey = 130
textimagex = width

# menu items
textimage = Image.new('RGB', (textimagex, textimagey), 'white')
for t in txtlines:
    img_draw = ImageDraw.Draw(textimage)
    img_draw.text((margin, yoffset), t, font=fnt, fill='MidnightBlue')
    yoffset += yline

# merge three images together...
# black background covering whole display:
masterimage = Image.new('RGB',(width,height),'black')
# original image:
oimage = Image.open(imageFile)
# original image width
owidth = oimage.size[0]
# get offset so narrow pictures are centered
xoffset = int((width - owidth)/2)
masterimage.paste(oimage,(xoffset,0))
masterimage.paste(textimage,(0,height - textimagey))

masterimage.save(imageandtext)

fancyresize.py

                    

#!/usr/bin/python3
# DrJ 2/2021
import PIL, os
import sys
from PIL import Image
# somewhat inspired by http://www.riisen.dk/dop/pil.html
# arguments:
# <width> <height> file
# width and height should be provided as values in pixels
# image file should be provided as argument.
# A pidisplay is 800x480

displaywidth = int(sys.argv[1])
displayheight = int(sys.argv[2])
smallscreen = 801
imageFile = sys.argv[3]
im1 = Image.open(imageFile)
narrowmax = .76
blowupfactor = 1.1
# take less from the top than the bottom
topshare = .3
bottomshare = 1.0 - topshare
# for DrJ debugging
DEBUG = True
if DEBUG:
  print("display width and height: ",displaywidth,displayheight)

def imgResize(im):
    width = im.size[0]
    height = im.size[1]
    if DEBUG:
      print("image width and height: ",width,height)

# If the aspect ratio is wider than the display screen's aspect ratio,
# constrain the width to the display's full width
    if width/float(height) > float(displaywidth)/float(displayheight):
      if DEBUG:
        print("In section width contrained to full width code section")

      widthn = displaywidth
      heightn = int(height*float(displaywidth)/width)
      im5 = im.resize((widthn, heightn), Image.ANTIALIAS) # best down-sizing filter
    else:
      heightn = displayheight
      widthn  = int(width*float(displayheight)/height)

      if width/float(height) < narrowmax and displaywidth < smallscreen:
# if width is narrow we're losing too much by using the whole picture.
# Blow it up by blowupfactor% if display is small, and crop most of it from the bottom
        heightn = int(displayheight*blowupfactor)
        widthn  = int(width*float(heightn)/height)
        im4 = im.resize((widthn, heightn), Image.ANTIALIAS) # best down-sizing filter
        top = int(displayheight*(blowupfactor - 1)*topshare)
        bottom = int(heightn - displayheight*(blowupfactor - 1)*bottomshare)
        if DEBUG:
          print("heightn,top,widthn,bottom: ",heightn,top,widthn,bottom)

        im5 = im4.crop((0,top,widthn,bottom))
      else:
        im5 = im.resize((widthn, heightn), Image.ANTIALIAS) # best down-sizing filter

    im5.save("resize_" + imageFile)

imgResize(im1)

analyzeDate.pl

                    

#!/usr/bin/perl
# 20180818_201647.jpg
use POSIX;
$DEBUG = 1;
$HOME = "/home/pi";
$random = "$HOME/random.list";
$rean   = "$HOME/rean.list";
#$file = "Picturestmp/20180422_134220.jpg";
while(<>){
$GPS = $date = 0;
$gpsinfo = "";
$file = $_;
#open(ANAL,"$HOME/getinfo.py \"$file\"|") || die "Cannot open file: $file!!\n";
open(ANAL,"cat \"$file\"|") || die "Cannot open file: $file!!\n";
print STDERR "filename: $file\n" if $DEBUG;
while(<ANAL>){
  $town = "";
  if (/DateTimeOriginal/i && $date++ < 1) {
# DateTimeOriginal = 2018:08:18 20:16:47
#or...  DateTimeDigitized = 2016/03/07 00:57:49
    print STDERR "DATE: $_" if $DEBUG;
    ($yr,$mon,$date,$hr,$min) = /(\d{4}):(\d\d):(\d\d) (\d\d):(\d\d)/;
    print STDERR "$yr,$mon,$date,$hr,$min\n" if $DEBUG;
# my custom format: Saturday, August 18, 2018  8:16 pm
    $dateinfo =  strftime("%A, %B %d, %Y %l:%M %p", 0, $min, $hr, $date , $mon - 1, $yr - 1900, -1, -1, -1);
  }
}
# folder info from random.list
$match = `cat $random $rean|grep "$file"`;
($folder) = $match =~ /(.+)\/[^\/]+/;
print STDERR "matched line, folder: $match, $folder\n" if $DEBUG;

# if no date, use filesystem date
if ( ! $dateinfo ) {
  $jpegfile = "../Picturestmp/$file";
  $mtime = (stat($jpegfile))[9];
  $handtst = `ls -l "$jpegfile"`;
  @ltime = localtime $mtime;
  $dateinfo =  strftime("(guess) %A, %B %d, %Y %l:%M %p",@ltime);
  print STDERR "No date info. Use filesystem date. mtime is $mtime. dateinfo: $dateinfo\n";
  print STDERR "Hand test of file age: $handtst\n";
}

$dateinfo = $dateinfo || "No date found";
$gpsinfo = $gpsinfo || "No info found";

print qq(File: $file
Folder: $folder
Date: $dateinfo
);
}

analyzeGPS.pl

                    

#!/usr/bin/perl
# use in combination with this post https://drjohnstechtalk.com/blog/2020/12/convert-gps-coordinates-into-town-name/
use POSIX;
$DEBUG = 1;
$HOME = "/home/pi";
#$file = "Pictures/20180422_134220.jpg";
while(<>){
$GPS = $date = 0;
$gpsinfo = "";
$file = $_;
#open(ANAL,"$HOME/getinfo.py \"$file\"|") || die "Cannot open file: $file!!\n";
open(ANAL,"cat \"$file\"|") || die "Cannot open file: $file!!\n";
print STDERR "filename: $file\n" if $DEBUG;
while(<ANAL>){
  $postalcode = $town = $name = "";
  if (/GPS/i) {
    print STDERR "GPS: $_" if $DEBUG;
# GPSInfo = {1: 'N', 2: (39.0, 21.0, 22.5226), 3: 'W', 4: (74.0, 25.0, 40.0267), 5: 1.7, 6: 0.0, 7: (23.0, 4.0, 14.0), 29: '2016:07:22'}
   ($pole,$deg,$min,$sec,$hemi,$lngdeg,$lngmin,$lngsec) = /1: '([NS])', 2: \(([\d\.]+), ([\d\.]+), ([\d\.]+)...3: '([EW])', 4: \(([\d\.]+), ([\d\.]+), ([\d\.]+)\)/i;
   print STDERR "$pole,$deg,$min,$sec,$hemi,$lngdeg,$lngmin,$lngsec\n" if $DEBUG;
   $lat = $deg + $min/60.0 + $sec/3600.0;
   $lat = -$lat if $pole eq "S";
   $lng = $lngdeg + $lngmin/60.0 + $lngsec/3600.0;
   $lng = -$lng if $hemi = "W" || $hemi eq "w";
   print STDERR "lat,lng: $lat, $lng\n" if $DEBUG;
   #$placename = `curl -s "$url"|grep -i toponym`;
   next if $lat == 0 && $lng == 0;
# the address API is the most precise
   $url = "http://api.geonames.org/address?lat=$lat\&lng=$lng\&username=drjohns";
   print STDERR "Url: $url\n" if $DEBUG;
   $results = `curl -s "$url"|egrep -i 'street|house|locality|postal|adminName'`;
   print STDERR "results: $results\n" if $DEBUG;
   ($street) = $results =~ /street>(.+)</;
   ($houseNumber) = $results =~ /houseNumber>(.+)</;
   ($postalcode) = $results =~ /postalcode>(.+)</;
   ($state) = $results =~ /adminName1>(.+)</;
   ($town) = $results =~ /locality>(.+)</;
   print STDERR "street, houseNumber, postalcode, state, town: $street, $houseNumber, $postalcode, $state, $town\n" if $DEBUG;
# I think locality is pretty good name. If it exists, don't go  further
   $postalcode = "" if $town;
   if (!$postalcode && !$town){
# we are here if we didn't get interesting results from address reverse lookup, which often happens.
     $url = "http://api.geonames.org/extendedFindNearby?lat=$lat\&lng=$lng\&username=drjohns";
     print STDERR "Address didn't work out. Trying extendedFindNearby instead. Url: $url\n" if $DEBUG;
     $results = `curl -s "$url"`;
# parse results - there may be several objects returned
     $topelemnt = $results =~ /<geoname>/i ? "geoname" : "geonames";
     @elmnts = ("street","streetnumber","lat","lng","locality","postalcode","countrycode","countryname","name","adminName2","adminName1");
     $cnt = xml1levelparse($results,$topelemnt,@elmnts);

     @lati = @{ $xmlhash{lat}};
     @long = @{ $xmlhash{lng}};
# find the closest entry
     $distmax = 1E7;
     for($i=0;$i<$cnt;$i++){
       $dist = ($lat - $lati[$i])**2 + ($lng - $long[$i])**2;
       print STDERR "dist,lati,long: $dist, $lati[$i], $long[$i]\n" if $DEBUG;
       if ($dist < $distmax) {
         print STDERR "dist < distmax condition. i is: $i\n";
         $isave = $i;
       }
     }
     $street = @{ $xmlhash{street}}[$isave];
     $houseNumber = @{ $xmlhash{streetnumber}}[$isave];
     $admn2 = @{ $xmlhash{adminName2}}[$isave];
     $postalcode = @{ $xmlhash{postalcode}}[$isave];
     $name = @{ $xmlhash{name}}[$isave];
     $countrycode = @{ $xmlhash{countrycode}}[$isave];
     $countryname = @{ $xmlhash{countryname}}[$isave];
     $state = @{ $xmlhash{adminName1}}[$isave];
     print STDERR "street, houseNumber, postalcode, state, admn2, name: $street, $houseNumber, $postalcode, $state, $admn2, $name\n" if $DEBUG;
     if ($countrycode ne "US"){
       $state .= " $countryname";
     }
     $state .= " (approximate)";
   }
# turn zipcode into town name with this call
   if ($postalcode) {
     print STDERR "postalcode $postalcode exists, let's convert to a town name\n";
     print STDERR "url: $url\n";
     $url = "http://api.geonames.org/postalCodeSearch?country=US\&postalcode=$postalcode\&username=drjohns";
     $results = `curl -s "$url"|egrep -i 'name|locality|adminName'`;
     ($town) = $results =~ /<name>(.+)</i;
     print STDERR "results,town: $results,$town\n";
   }
   if (!$town) {
# no town name, use adminname2 which is who knows what in general
     print STDERR "Stil no town name. Use adminName2 as next best thing\n";
     $town = $admn2;
   }
   if (!$town) {
# we could be in the ocean! I saw that once, and name was North Atlantic Ocean
     print STDERR "Still no town. Try to use name: $name as last resort\n";
     $town = $name;
   }
   $gpsinfo = "$houseNumber $street $town, $state" if $locality || $town;
   } # end of GPS info exists condition
  } # end loop over ANAL file
  $gpsinfo = $gpsinfo || "No info found";
  print qq(Location: $gpsinfo
);
} # end loop over STDIN

#####################
# function to parse some xml and fill a hash of arrays
sub xml1levelparse{
# build an array of hashes
$string = shift;
# strip out newline chars
$string =~ s/\n//g;
$parentelement = shift;
@elements = @_;
$i=0;
while($string =~ /<$parentelement>/i){
 $i++;
 ($childelements) = $string =~ /<$parentelement>(.+?)<\/$parentelement>/i;
 print STDERR "childelements: $childelements" if $DEBUG;
 $string =~ s/<$parentelement>(.+?)<\/$parentelement>//i;
 print STDERR "string: $string\n" if $DEBUG;
 foreach $element (@elements){
  print STDERR "element: $element\n" if $DEBUG;
  ($value) = $childelements =~ /<$element>([^<]+)<\/$element>/i;
  print STDERR "value: $value\n" if $DEBUG;
  push @{ $xmlhash{$element} }, $value;
 }
} # end of loop over parent elements
return $i;
} # end sub xml1levelparse

m3.pl

                    

#!/usr/bin/perl
# show the pics ; rotate the screen as needed
# for now, assume the display is in a neutral
# orientation at the start
use Time::HiRes qw(usleep);
$DEBUG = 1;
$delay = 6; # seconds between pics
###$delay = 4; # for testing
$mdelay = 200; # milliseconds
$mshow = "$ENV{HOME}/mediashow";
$pNames = "$ENV{HOME}/pNames";
# pics are here
$picsDir = "$ENV{HOME}/Pictures";
$refreshFile = "$ENV{HOME}/refresh";

chdir($picsDir);
$cn = `ls -1|wc -l`;
chomp($cn);
print "$cn files\n" if $DEBUG;
# throw up a first picture - all black. Trick to make black bckgrd permanent
system("sudo fbi -a --noverbose -T 1 $ENV{HOME}/black.jpg");
# see if this is a new batch of pictures
$refresh = (stat($refreshFile))[9];
$now = time();
$diff = $now - $refresh;
print "refresh,now,diff: $refresh, $now, $diff\n" if $DEBUG;
if ($diff < 100){
  system("sudo fbi -a --noverbose -T 1 $ENV{HOME}/newslideshowintro.jpg");
  sleep(25);
}
system("sudo fbi -a --noverbose -T 1 $ENV{HOME}/black.jpg");
system("sleep 1; sudo killall fbi");
# start infinitely looping fbi slideshow
for (;;) {
# then start slide show
# shell echo cannot work with null character so we need to use a file to store it
    system("sudo xargs -a $mshow -0 fbi --noverbose -1 -T 1  -t $delay ");
    ###system("sudo xargs -a $mshow -0 fbi -a -1 -T 1  -t $delay "); # for testing
# fbi runs in background, then exits, so we need to monitor if it's still alive
    for(;;) {
      open(MON,"ps -ef|grep fbi|grep -v grep|") || die "Cannot launch ps -ef!!\n";
      $match = <MON>;
      if ($match) {
        print "got fbi match\n" if $DEBUG > 1;
        } else {
        print "no fbi match\n" if $DEBUG;
# fbi not found
          last;
      }
      close(MON);
      print "usleeping, noexist is $noexit\n" if $DEBUG > 1;
      usleep($mdelay);
    } # end loop testing if fbi has exited
} # close of infinite loop

Optional script

mshowtmp.pl (revision not yet reflected in the tar file)

                    

#!/usr/bin/perl
# add txt_ to beginning of filename
$DEBUG = 1;
$HOME = "/home/pi";
$mshow = "$HOME/mediashow.orig";
$mshow2 = "$HOME/mediashowtmp2";
$ms = `cat $mshow`;
@pics = split('\0',$ms);
$ms = "";
foreach $file (@pics) {
  print "file is $file\n" if $DEBUG;
  $ms .= "txt_" . $file . "\0";
}
# print out new mediashow pics in order
print "Printing new mediashow: $ms\n" if $DEBUG;
open(MS,">$mshow2") || die "Cannot open mediashow $mshow2!!\n";
print MS $ms;
close(MS);

crontab entries

                    
@reboot sleep 20; ./m3.pl >> m3.log 2>&1
26 5 */3 * * ./master3.sh >> master.log 2>&1

That will refresh the slideshow every three days, which we found is a good interval for our lifestyle – some days you don’t get around to viewing them. If you want to refresh every day just change ‘*/3’ to ‘*’.
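
For example, to refresh daily instead, the second crontab entry would become:

26 5 * * * ./master3.sh >> master.log 2>&1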

And… that’s it!

Reminder

Don’t forget to make all these files executable. Something like:

$ chmod +x *.pl *.py *.sh

should do it.

My equipment

RPi 3 running Raspbian Lite

Pi Display (probably would also work with an HDMI display). The Pi Display resolution is 800×480, so pretty small.

Pre-install

There are a few things you’ll need such as fbi, python3, pip, python Pillow and rclone. That’s basically described in my previous post so I won’t repeat it here.

Getting started

To see how badly things are going for you (hey, I like to be cautiously pessimistic) after you’ve created all these files and have installed rclone, do a

$ ./master3.sh

If you have your rclone file listing (which takes a long time) and want to focus on debugging the rest of it, do a

$ ./master3.sh skip

Discussion

In this version of the Raspberry Pi photo frame I’ve made more effort to force time separation between the randomly selected photos. But that’s not all. I blow up pictures taken in a narrow (portrait) mode (see the next paragraph). And I do some fancy analysis to determine the filename, folder, date, time and even location of each picture. And there’s more: I create an alternate version of each photo which embeds this info at the bottom – in anticipation of my even more fancy remote-controlled slideshow! I am afraid to overwrite what I have previously posted because that by itself is a complete solution and works quite well on its own. So consider this version one for folks looking for a little more challenge and better results.

The fancyresize.py script is designed around my small PiDisplay which has a horizontal resolution of only 800 pixels. It blows up a narrow, portrait-format picture only if the detected display has a horizontal resolution of no more than 800 pixels. It blows the picture up by 10% and chops off 3% from the top and 7% from the bottom, because that yields optimal results in my experience. If you like that approach but are using a larger HDMI display, you could edit the “801” in that file (the smallscreen variable) to make it a larger number (bigger than your display, like 5000).
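
If you would rather not hunt for that in an editor, a one-liner along these lines should do it, assuming the smallscreen variable is still set to 801 as in the listing above:

$ sed -i 's/smallscreen = 801/smallscreen = 5000/' fancyresize.py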

Show pictures with embedded info

This process is not streamlined. But it can be cool to do it by hand. You could follow these steps.

$ ./mshowtmp.pl; mv mediashowtmp2 mediashow

If you wait the whole cycle the next time around it should display the pictures with the embedded info at the bottom. If you’re impatient, do this:

$ sudo pkill -9 fbi; sudo pkill -9 m3.pl

$ nohup ./m3.pl > m3.log 2>&1 &

Fun Fact

You know how those old digital cameras created files prefixed with DSC, like DSC00102.JPG? If you read the JPEG spec, which is a pretty dense document, you learn that DSC stands for Digital Still Camera.

Concept for tossing out pictures of documents

We sometimes take pictures of documents, or computer screens, or a slide at a presentation, or a historical marker. They don’t make for compelling slideshow material. Well, the historical markers are debatable since they have character. Anyway, I am looking at using an old open source program called tesseract to do OCR (optical character recognition) on all the photos to help identify those containing a lot of words so they can be excluded. I’ll include that if I determine it to be a good approach.
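
If you want to experiment with that idea yourself, here is a minimal sketch of the kind of test I have in mind. It assumes tesseract is already installed (see Setting up tesseract below), and the 50-word threshold is just a first guess to be tuned:

#!/bin/sh
# rough sketch: flag text-heavy photos as candidates for exclusion
for f in *.jpg; do
  words=`tesseract "$f" stdout 2>/dev/null | wc -w`
  echo "$f: $words words detected"
  if [ $words -gt 50 ]; then
    echo "$f looks like a document"
  fi
done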

Installing a searchable dictionary on Raspberry Pi

To install a word dictionary that you can do simple searches against on an RPi, try:

$ sudo apt-get install wamerican

or maybe

$ sudo apt-get install wamerican-huge

Those will produce simple wordlists, not actual dictionaries with definitions as you might have expected. They go into /usr/share/dict, e.g., /usr/share/dict/american-english-huge.
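
Since those are just plain text files, one word per line, ordinary tools suffice for searching them, e.g.,

$ grep -i '^neume$' /usr/share/dict/american-english-huge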

The dict program is quite nice. Install it with sudo apt-get install dict. Then you run it like this

$ dict neume

and it shoots back definitions and cites sources for those definitions. The drawback for my purposes is that it uses your Internet connection and I’m trying to build a photo frame that doesn’t rely too much on the Internet after the photos themselves are fetched.

$ sudo apt-cache search wordlist

lists all available dictionaries, I believe.

Setting up tesseract

This page has these instructions

git clone https://github.com/thortex/rpi3-tesseract.git
cd rpi3-tesseract
cd release
./install_requires_related2leptonica.sh
./install_requires_related2tesseract.sh
./install_tesseract.sh

But you’re gonna need git first:

$ sudo apt-get install git

RPi lost Wifi

This could be a whole separate post. In the course of my hard work my RPi just would not acquire an IP address on wlan0.

Here’s a great command to see all the SSIDs it knows about:

$ sudo iwlist wlan0 scan > scan.log

Then you can inspect scan.log in an editor. Turns out the one SSID it needed wasn’t in the list. Turns out I had reserved a DHCP entry for it in my router. My router was simply not cooperating, it seems – the RPi wasn’t doing anything wrong. I was almost ready to re-install the whole thing and waste hours… My router is an older model Linksys WRT1200AC. I removed the DHCP reservation on the router, then did a

$ sudo service networking stop; sudo service networking start

on the RPi, and…all was good! Its assigned IP won’t change that often, I can always check the router to see what it is. The management software with the Linksys is quite good.
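
By the way, a quick check of whether wlan0 actually picked up an address after that kind of fiddling is:

$ ip addr show wlan0

and look for the inet line.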

Conclusion

A more advanced treatment of photos is shown in this post than I have done previously. It is fairly robust and will withstand quite a few user errors in my experience. The end result will be an interesting display of your photos, randomly selected but in small groupings.

References and related

The tar file which contains everything: https://drjohnstechtalk.com/blog/downloads/photoFrameII.tar

Please see this popular post Raspberry Pi photo frame using your pictures on your Google Drive for more details.

m3.pl refers to a black.jpg and a newslideshowintro.jpg file. It’s not a disaster to not have those, but the overall experience will be slightly better. Here’s black.jpg:
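
Alternatively, you can generate a suitable all-black image yourself with ImageMagick (sudo apt-get install imagemagick), sized for the PiDisplay:

$ convert -size 800x480 xc:black black.jpg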

And the beautiful newslideshowintro.jpg I created is at the top of this blog post.

Tesseract, a surprisingly old and surprisingly good open-source OCR program, is basically impossible to compile for RPi. Fortunately, someone has done it for us. This page has the instructions: https://github.com/thortex/rpi3-tesseract

Categories
Raspberry Pi

Raspberry Pi advanced photo frame

Intro

I am assembling a lot of different ideas I have to do some more cool things than was possible with my original Raspberry Pi photo frame effort although that contained a lot of original ideas as well.

Skillset

Intermediate or better linux skills required. Beginners/newbies: please do not attempt as you will encounter insurmountable problems and be left in a trail of tears. I don’t have time to help.

I will be using this remote control which I can attest is indeed compatible with the RPi:

The inexpensive Rii remote controller with keyboard works with your Raspberry Pi

But it has no CTRL key! I can’t survive without that. It seems aimed at media consumers; I guess those types never need that key. At this point my idea is to use the arrow keys + Enter button as a familiar way to navigate around a simple menu I plan to create, plus perhaps the menu key. The pre-defined keys fbi has created for navigation are simply not intuitive, nor are they adequate for the tasks I have in mind.

Rii key       Value
up arrow      103
down arrow    108
right arrow   106
left arrow    105
Menu          127
OK            28

Some interesting keys and their values when monitored

In my basic photo frame approach I had a simpler rotate image python script. This python program below rotates pictures by the specified amount and preserves the EXIF tags. It doesn’t update the orientation tag after rotating because for the fbi display program I use that doesn’t matter. I call it rotate.py.

                    
#!/usr/bin/python3
# call with two arguments: degrees-to-rotate and filename
import PIL, os
import sys
from PIL import Image
# first do: pip3 install piexif
import piexif

degrees = int(sys.argv[1])
pic = sys.argv[2]

picture= Image.open(pic)
# see https://github.com/hMatoba/Piexif for piexif writeup
exif_dict = piexif.load(picture.info["exif"])
exif_bytes = piexif.dump(exif_dict)
# both rotate and preserve EXIF data
picture.rotate(degrees,expand=True).save("rot_" + pic,"jpeg", exif=exif_bytes)

As it says in the code you need to install piexif in addition to Pillow:

$ sudo pip3 install Pillow && sudo pip3 install piexif

Example call

$ ./rotate.py 90 20160514_131528.jpg

Difficult problem

I set for myself a problem that is much more difficult than anything I’ve tackled. I wanted to post a preliminary solution, but I don’t want to do constant re-writes which are time-consuming. I have a lot of code re-writes to do. I do have a menu system working, and a couple functions implemented to date. But there is so much more to do to even get to an alpha version.

To be continued…

Pie-in-the-sky To-Do list

  • Use of deep learning AI to toss out images which are poor quality
  • Use Facebook API to identify people in the images
  • Commercialization of idea
  • Smarthome enablement, e.g., hey Alexa, pause that picture and tell me about it, etc

Breaking those pie-in-the-sky ideas down: I believe the RPi is vastly underpowered to do any serious image analysis; Facebook facial recognition is creepy and an invasion of privacy; commercialization sounds great on paper but in reality is a time sink doing unpleasant things for little or no profit in the end; and lastly, Smarthome enablement is actually the most achievable and was my original thinking before I landed on the remote control. I may or may not get back to it one day.

References and related

A more basic approach to creating a Raspberry Pi – based photo frame is described here.

The Rii remote control I am using only cost $12! https://www.amazon.com/gp/product/B01CL3ZXGO/ref=ppx_yo_dt_b_asin_title_o03_s00?ie=UTF8&psc=1

A great tutorial on the Pillow python package which I use for image processing: https://auth0.com/blog/image-processing-in-python-with-pillow/

The full documentation fills it out even more: https://pillow.readthedocs.io/en/stable/

The official Exif documentation is here: http://www.cipa.jp/std/documents/e/DC-008-2012_E.pdf . For instance, page 42 says UserComments get the Tag ID 37510.

Categories
Linux Raspberry Pi

How to create a software keyboard

Intro

This example will probably get used in my advanced Raspberry Pi photo frame treatment. I use image display software, fbi, which is designed to take key presses in order to take certain actions, such as advancing to the next picture or displaying information about the current picture. qiv works similarly. But I am running an automated picture display, so no one is around to type into the keyboard. Hence the need for software emulation of the physical keyboard. I’ve always believed it must be possible, but never knew how until this week.

I am not a python programmer, but sometimes you gotta use whatever language makes the job easier, and I only know how to do this in python, python3 specifically.

This will probably only make sense for Raspberry Pi owners.

Setup

I believe this will work best if your Raspberry Pi has only a keyboard and not a mouse hooked up because in that case your keyboard ought to be mapped to /dev/input/event0. But it’s easy enough to change that. To see whether your keyboard is /dev/input/event0 or /dev/input/event1 or some other, just cat one of those files and start typing. You should see some junk when you’ve selected the right /dev/input file.
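
If you prefer not to guess, the kernel also keeps an inventory of input devices; the Handlers line in the block describing your keyboard names the eventN node to use:

$ cat /proc/bus/input/devices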

The program

I call it keyinject.py.

                    

#!/usr/bin/python3
# inject a single key, acting like a software keyboard
# DrJ 12/20
import sys
from evdev import UInput, InputDevice, ecodes as e
from time import sleep
# set DEBUG = True to print out more information
DEBUG = False
sleepTime = 0.001 # units are secs
# dict of name mappings. key is how we like to enter it, value is what is after KEY_ in evdev ecodes
d = {
 '1'    : '1',
 '2'    : '2',
 '3'    : '3',
 '4'    : '4',
 '5'    : '5',
 '6'    : '6',
 '7'    : '7',
 '8'    : '8',
 '9'    : '9',
 '0'    : '0',
 'a' : 'A',
 'b' : 'B',
 'c' : 'C',
 'd' : 'D',
 'e' : 'E',
 'f' : 'F',
 'g' : 'G',
 'h' : 'H',
 'i' : 'I',
 'j' : 'J',
 'k' : 'K',
 'l' : 'L',
 'm' : 'M',
 'n' : 'N',
 'o' : 'O',
 'p' : 'P',
 'q' : 'Q',
 'r' : 'R',
 's' : 'S',
 't' : 'T',
 'u' : 'U',
 'v' : 'V',
 'w' : 'W',
 'x' : 'X',
 'y' : 'Y',
 'z' : 'Z',
 '.'    : 'DOT',
 ','    : 'COMMA',
 '/'    : 'SLASH',
 'E': 'ENTER',
 'S': 'RIGHTSHIFT',
 'C': 'LEFTCTRL'
}

# https://python-evdev.readthedocs.io/en/latest/tutorial.html
# get rid of program name: the string of characters to inject is the first argument
inputchars = sys.argv
ltrs = inputchars[1]

keybd = InputDevice("/dev/input/event0")

for ltr in ltrs:
  ui = UInput.from_device(keybd, name="keyboard-device")
  if DEBUG: print(ltr)
  mappedkey = d[ltr]
  key = "KEY_" + mappedkey
  if DEBUG: print(key)
  if DEBUG: print(e.ecodes[key])

  ui.write(e.EV_KEY, e.ecodes[key], 1) # KEY_ down
  ui.write(e.EV_KEY, e.ecodes[key], 0) # KEY_ up
  ui.syn()
  sleep(sleepTime)
  ui.close()

And it gets called like this:

$ sudo ./keyinject.py my.injected.letters

or

$ sudo ./keyinject.py ./m2.plE

to run the m2.pl script in the current directory and have it behave as though it were launched from a console terminal. The “E” is the ENTER key.

Interesting observations

There really is no such thing as a separate “k” key and “K” key (lower-case versus upper-case). There is only a single key labelled “K” on a keyboard. It’s a physical layer versus logical layer type of thing. The k and K are characters.

In the above program I did some of the keys – the ones I will be needing, plus a few bonus ones. I do need the ENTER key, and since the program works through its argument one character at a time there is no way to spell out its name, so I represent it with the capital letter E. To send ENTER you would do

$ sudo ./keyinject.py E

But I was able to have these characters represent themselves: . , / so that’s not bad.

Prerequisites

You will need python3 version > 3.5, I think. And the evdev package. I believe you get that with

$ sudo pip3 install evdev

And if you don’t have pip3 you can install that with

$ sudo apt-get install python3-pip

Reading keyboard input

Of course the opposite of simulating key presses is reading what’s been typed from an actual keyboard. That’s possible too with this handy evdev package. The following program is not as polished as the writing program, but it gives you the feel for what to do. I call it evread.py.

                    

#!/usr/bin/python3
# https://python-evdev.readthedocs.io/en/latest/tutorial.html
import asyncio
from evdev import InputDevice, categorize, ecodes

dev = InputDevice('/dev/input/event0')
# following line is optional - it takes away the keybd from fbi!
# there is also a dev.ungrab()
dev.grab()

async def helper(dev):
    async for ev in dev.async_read_loop():
         print(repr(ev))

loop = asyncio.get_event_loop()
loop.run_until_complete(helper(dev))

Note the presence of the dev.grab(). That permits your program to be the exclusive reader of keyboard input, shutting out fbi. It can be commented out if you want to share.

How to interpret the output of evread.py

Say I press and hold the Enter button. I get this output from evread.py.

InputEvent(1629667404, 552424, 4, 4, 458792)
InputEvent(1629667404, 552424, 1, 28, 1)
InputEvent(1629667404, 552424, 0, 0, 0)
InputEvent(1629667404, 808252, 1, 28, 2)
InputEvent(1629667404, 808252, 0, 0, 1)
InputEvent(1629667404, 858249, 1, 28, 2)
InputEvent(1629667404, 858249, 0, 0, 1)
InputEvent(1629667404, 908250, 1, 28, 2)
InputEvent(1629667404, 908250, 0, 0, 1)
InputEvent(1629667404, 958250, 1, 28, 2)
InputEvent(1629667404, 958250, 0, 0, 1)
InputEvent(1629667405, 8250, 1, 28, 2)
InputEvent(1629667405, 8250, 0, 0, 1)
InputEvent(1629667405, 58252, 1, 28, 2)
InputEvent(1629667405, 58252, 0, 0, 1)
InputEvent(1629667405, 108250, 1, 28, 2)
InputEvent(1629667405, 108250, 0, 0, 1)
InputEvent(1629667405, 158250, 1, 28, 2)
InputEvent(1629667405, 158250, 0, 0, 1)
InputEvent(1629667405, 158250, 4, 4, 458792)
InputEvent(1629667405, 158250, 1, 28, 0)
InputEvent(1629667405, 158250, 0, 0, 0)

The 4, 4, longnumber – You always seem to get something like that. Not sure about it.

The 1, 28, 1 – means I’ve pressed the ENTER key. 28 is for ENTER. Other keys will have other values assigned.

0, 0, 1 – not really sure. Continuation, or something.

1, 28, 2 – I think I know this. It means I’m continuing to hold down the ENTER button.

1, 28, 0 – I have released the ENTER key.

These get generated pretty frequently, perhaps a few a second.

Conclusion

We have created an example python3 program, mostly for Raspberry Pi though easily adapted to other linux environments, that injects keyboard presses as though those keys had been typed by someone at the physical keyboard, so that graphics programs which rely on key presses, such as fbi or qiv (and probably others – vlc?), can be controlled through software.

We have also provided a basic python program for reading key presses from the actual keyboard. I plan to use these things for my advanced RPi photo frame project.

References and related

The python evdev tutorial is really helpful: https://python-evdev.readthedocs.io/en/latest/tutorial.html

Raspberry Pi advanced photo frame article does not exist yet. The basic RPi photo frame article is here.

Another piece to the puzzle is turning GPS coordinates into a town name. That brief write-up is here.

A nice example of actually using evread.py to detect button presses from a universal remote attached to an RPi: https://drjohnstechtalk.com/blog/2021/08/raspberry-pi-project-youtube-livestreaming-with-a-click-of-a-button/

Categories
Linux Perl Raspberry Pi Web Site Technologies

Convert GPS Coordinates into town name or address

Intro

This is a small piece of a larger project – displaying your photos on Google Drive using a Raspberry Pi. That project will require completion of many small investigations, this being just one of them.

I thought, wouldn’t it be cool to ask your photo frame when and where a certain picture was taken? I thought that information was typically embedded into the picture by modern smartphones. Turns out this is disappointingly not the case – at least not on our smartphones, except in a small minority of pictures. But since I got somewhere with my investigation, I wanted to share the results, regardless.

Also, I naively assumed that there surely is a web service that permits one to easily convert GPS coordinates into the name – in text – of the closest town. After all, you can enter GPS coordinates into Google Maps and get back a map showing the exact location. Why shouldn’t it be just as easy to extract the nearest town name as text? Again, this assumption turns out to be faulty. But, I found a way to do it that is not toooo difficult.

Example for Cape May, New Jersey

$ curl -s 'http://api.geonames.org/address?lat=38.9302957777778&lng=-74.9183310833333&username=drjohns'

<geonames>
<address>
<street>Beach Dr</street>
<houseNumber>690</houseNumber>
<locality>Cape May</locality>
<postalcode>08204</postalcode>
<lng>-74.91835</lng>
<lat>38.93054</lat>
<adminCode1>NJ</adminCode1>
<adminName1>New Jersey</adminName1>
<adminCode2>009</adminCode2>
<adminName2>Cape May</adminName2>
<adminCode3/>
<adminCode4/>
<countryCode>US</countryCode>
<distance>0.03</distance>
</address>
</geonames>

The above example used the address service. The results in this case are unusually complete. Sometimes the lookups simply fail for no obvious reason, or provide incomplete information, such as a missing locality. In those cases the town name is usually still reported in the adminName2 element. I haven’t checked the address accuracy much, but it seems pretty accurate – like representing an actual address within 100 yards, usually better, of where the picture was taken.

They have another service, findNearbyPlaceName, which sometimes works even when address fails. However its results are also unpredictable. I was in Merrillville, Indiana and it gave the toponym as Chapel Manor, which is the name of the subdivision! In Virginia it gave the name The Hamlet – still not sure where that came from, but I trust it is some hyper-local name for a section of the town (James City). Just as often it does spit back the town or city name, for instance, Atlantic City. So, it’s better than nothing.

The example for Nantucket

From a browser – here I use curl in the linux command line – you enter:

$ curl -s 'http://api.geonames.org/findNearbyPlaceName?lat=41.282778&lng=-70.099444&username=drjohns'

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<geonames>
<geoname>
<toponymName>Nantucket</toponymName>
<name>Nantucket</name>
<lat>41.28346</lat>
<lng>-70.09946</lng>
<geonameId>4944903</geonameId>
<countryCode>US</countryCode>
<countryName>United States</countryName>
<fcl>P</fcl>
<fcode>PPLA2</fcode>
<distance>0.07534</distance>
</geoname>
</geonames>

So what did we do? For this example I looked up Nantucket in Wikipedia to find its GPS coordinates. Then I used the geonames api to convert those coordinates into the town name, Nantucket.

Note that drjohns is an actual registered username with geonames. I am counting on the unpopularity of my posts to prevent an onslaught of usage as the usage credits are limited for free accounts. If I understood the terms, a few lookups per hour would not be an issue.

I’m finding the PlaceName lookup pretty useless, and the address lookup fails about 30% of the time, so I’m thinking of using this sort of lookup as a backstop:

$ curl 'http://api.geonames.org/extendedFindNearby?lat=41.00050&lng=-74.65329&username=drjohns'

<?xml version=”1.0″ encoding=”UTF-8″ standalone=”no”?>
<geonames>
<address>
<street>Stanhope Rd</street>
<mtfcc>S1400</mtfcc>
<streetNumber>439</streetNumber>
<lat>41.00072</lat>
<lng>-74.6554</lng>
<distance>0.18</distance>
<postalcode>07871</postalcode>
<placename>Lake Mohawk</placename>
<adminCode2>037</adminCode2>
<adminName2>Sussex</adminName2>
<adminCode1>NJ</adminCode1>
<adminName1>New Jersey</adminName1>
<countryCode>US</countryCode>
</address>
</geonames>

Note that this gets a reasonably close address and, more importantly, a zipcode. The placename is too local and I will probably discard it. But another lookup can turn a zipcode into a town or city name which is what I am after.

$ curl 'http://api.geonames.org/postalCodeSearch?country=US&postalcode=07871&username=drjohns'

<?xml version=”1.0″ encoding=”UTF-8″ standalone=”no”?>
<geonames>
<totalResultsCount>1</totalResultsCount>
<code>
<postalcode>07871</postalcode>
<name>Sparta</name>
<countryCode>US</countryCode>
<lat>41.0277</lat>
<lng>-74.6407</lng>
<adminCode1 ISO3166-2=”NJ”>NJ</adminCode1>
<adminName1>New Jersey</adminName1>
<adminCode2>037</adminCode2>
<adminName2>Sussex</adminName2>
<adminCode3/>
<adminName3/>
</code>
</geonames>

See? It was a lot of work, but we finally got the township name, Sparta, returned to us.

Ocean GPS?

I was whale-watching and took some pictures with GPS info. Trying to apply the methods above worked, but just barely. Basically all I could get out of the extended find nearby search was a name field with value North Atlantic Ocean! Well, that makes it sound like I was on some Titanic-style ocean crossing. In fact I was in the Gulf of Maine a few miles from Provincetown. So they really could have done a better job there… Of course it’s understandable to not have a postalcode and street address and such. But still, bodies of water have names and geographical boundaries as well. Casinos seem to be the main sponsors of geonames.org, and I guess they don’t care. Yesterday my script came up with a location Earth! But now I see geonames proposed several locations and I only looked at the first one. I am creating a refinement which will perform better in such cases. Stay tuned… And…yes…the refinement is done. I had to do a wee bit of xml parsing, which I now do.

To get your own account at geonames.org

The process of getting your own account isn’t too difficult, just a bit squirrelly. For the record, here is what you do.

Go to http://www.geonames.org/login to create your account. Be sure to use a unique browser-generated password for this one – the security level is off-the-charts awful, so just assume that any and all hackers who want that password are going to get it. It sends you a confirmation email. So far so good. But when you then try to use the account in an api call it will tell you that that username isn’t known. This is the tricky part.

So go to https://www.geonames.org/manageaccount . It will say:

Free Web Services
the account is not yet enabled to use the free web services. Click here to enable. 

And that link, in turn is https://www.geonames.org/enablefreewebservice . And having enabled your account for the api web service, the URL, where you’ve put your username in place of drjohns, ought to work!

For a complete overview of all the different things you can find out from the GPS coordinates from geonames, look at this link: https://www.geonames.org/export/ws-overview.html

Working with pictures

Please look at this post for the python code to extract the metadata from an image, including, if available, GPS info. I called the python program getinfo.py.

Here’s an actual example of running it to learn the GPS info:

$ ../getinfo.py 20170520_102248.jpg|grep -ai gps

GPSInfo = {0: b'\x02\x02\x00\x00', 1: 'N', 2: (42.0, 2.0, 18.6838), 3: 'W', 4: (70.0, 4.0, 27.5448), 5: b'\x00', 6: 0.0, 7: (14.0, 22.0, 25.0), 29: '2017:05:20'}

I don’t know if it’s good or bad, but the GPS coordinates seem to be encoded in the degrees, minutes, seconds format.
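
In case you want to check the conversion by hand: decimal degrees = degrees + minutes/60 + seconds/3600, negated for S latitudes and W longitudes – the same arithmetic the Perl program below performs. For the latitude above:

$ awk 'BEGIN{print 42 + 2/60 + 18.6838/3600}'
42.0385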

A nice little program to put things together

I call it analyzeGPS.pl and am using it on a Raspberry Pi, but it could easily be adapted to any linux system.

                    
#!/usr/bin/perl
# use in combination with this post https://drjohnstechtalk.com/blog/2020/12/convert-gps-coordinates-into-town-name/
use POSIX;
$DEBUG = 1;
$HOME = "/home/pi";
#$file = "Pictures/20180422_134220.jpg";
while(<>){
$GPS = $date = 0;
$gpsinfo = "";
$file = $_;
open(ANAL,"$HOME/getinfo.py \"$file\"|") || die "Cannot open file: $file!!\n";
#open(ANAL,"cat \"$file\"|") || die "Cannot open file: $file!!\n";
print STDERR "filename: $file\n" if $DEBUG;
while(<ANAL>){
  $postalcode = $town = $name = "";
  if (/GPS/i) {
    print STDERR "GPS: $_" if $DEBUG;
# GPSInfo = {1: 'N', 2: (39.0, 21.0, 22.5226), 3: 'W', 4: (74.0, 25.0, 40.0267), 5: 1.7, 6: 0.0, 7: (23.0, 4.0, 14.0), 29: '2016:07:22'}
   ($pole,$deg,$min,$sec,$hemi,$lngdeg,$lngmin,$lngsec) = /1: '([NS])', 2: \(([\d\.]+), ([\d\.]+), ([\d\.]+)...3: '([EW])', 4: \(([\d\.]+), ([\d\.]+), ([\d\.]+)\)/i;
   print STDERR "$pole,$deg,$min,$sec,$hemi,$lngdeg,$lngmin,$lngsec\n" if $DEBUG;
   $lat = $deg + $min/60.0 + $sec/3600.0;
   $lat = -$lat if $pole eq "S";
   $lng = $lngdeg + $lngmin/60.0 + $lngsec/3600.0;
   $lng = -$lng if $hemi = "W" || $hemi eq "w";
   print STDERR "lat,lng: $lat, $lng\n" if $DEBUG;
   #$placename = `curl -s "$url"|grep -i toponym`;
   next if $lat == 0 && $lng == 0;
# the address API is the most precise
   $url = "http://api.geonames.org/address?lat=$lat\&lng=$lng\&username=drjohns";
   print STDERR "Url: $url\n" if $DEBUG;
   $results = `curl -s "$url"|egrep -i 'street|house|locality|postal|adminName'`;
   print STDERR "results: $results\n" if $DEBUG;
   ($street) = $results =~ /street>(.+)</;
   ($houseNumber) = $results =~ /houseNumber>(.+)</;
   ($postalcode) = $results =~ /postalcode>(.+)</;
   ($state) = $results =~ /adminName1>(.+)</;
   ($town) = $results =~ /locality>(.+)</;
   print STDERR "street, houseNumber, postalcode, state, town: $street, $houseNumber, $postalcode, $state, $town\n" if $DEBUG;
# I think locality is pretty good name. If it exists, don't go  further
   $postalcode = "" if $town;
   if (!$postalcode && !$town){
# we are here if we didn't get interesting results from address reverse lookup, which often happens.
     $url = "http://api.geonames.org/extendedFindNearby?lat=$lat\&lng=$lng\&username=drjohns";
     print STDERR "Address didn't work out. Trying extendedFindNearby instead. Url: $url\n" if $DEBUG;
     $results = `curl -s "$url"`;
# parse results - there may be several objects returned
     $topelemnt = $results =~ /<geoname>/i ? "geoname" : "geonames";
     @elmnts = ("street","streetnumber","lat","lng","locality","postalcode","countrycode","countryname","name","adminName2","adminName1");
     $cnt = xml1levelparse($results,$topelemnt,@elmnts);

     @lati = @{ $xmlhash{lat}};
     @long = @{ $xmlhash{lng}};
# find the closest entry
     $distmax = 1E7;
     for($i=0;$i<$cnt;$i++){
       $dist = ($lat - $lati[$i])**2 + ($lng - $long[$i])**2;
       print STDERR "dist,lati,long: $dist, $lati[$i], $long[$i]\n" if $DEBUG;
       if ($dist < $distmax) {
         print STDERR "dist < distmax condition. i is: $i\n";
         $isave = $i;
       }
     }
     $street = @{ $xmlhash{street}}[$isave];
     $houseNumber = @{ $xmlhash{streetnumber}}[$isave];
     $admn2 = @{ $xmlhash{adminName2}}[$isave];
     $postalcode = @{ $xmlhash{postalcode}}[$isave];
     $name = @{ $xmlhash{name}}[$isave];
     $countrycode = @{ $xmlhash{countrycode}}[$isave];
     $countryname = @{ $xmlhash{countryname}}[$isave];
     $state = @{ $xmlhash{adminName1}}[$isave];
     print STDERR "street, houseNumber, postalcode, state, admn2, name: $street, $houseNumber, $postalcode, $state, $admn2, $name\n" if $DEBUG;
     if ($countrycode ne "US"){
       $state .= " $countryname";
     }
     $state .= " (approximate)";
   }
# turn zipcode into town name with this call
   if ($postalcode) {
     print STDERR "postalcode $postalcode exists, let's convert to a town name\n";
     print STDERR "url: $url\n";
     $url = "http://api.geonames.org/postalCodeSearch?country=US\&postalcode=$postalcode\&username=drjohns";
     $results = `curl -s "$url"|egrep -i 'name|locality|adminName'`;
     ($town) = $results =~ /<name>(.+)</i;
     print STDERR "results,town: $results,$town\n";
   }
   if (!$town) {
# no town name, use adminname2 which is who knows what in general
     print STDERR "Stil no town name. Use adminName2 as next best thing\n";
     $town = $admn2;
   }
   if (!$town) {
# we could be in the ocean! I saw that once, and name was North Atlantic Ocean
     print STDERR "Still no town. Try to use name: $name as last resort\n";
     $town = $name;
   }
   $gpsinfo = "$houseNumber $street $town, $state" if $locality || $town;
   } # end of GPS info exists condition
  } # end loop over ANAL file
  $gpsinfo = $gpsinfo || "No info found";
  print qq(Location: $gpsinfo
);
} # end loop over STDIN

#####################
# function to parse some xml and fill a hash of arrays
sub xml1levelparse{
# build an array of hashes
$string = shift;
# strip out newline chars
$string =~ s/\n//g;
$parentelement = shift;
@elements = @_;
$i=0;
while($string =~ /<$parentelement>/i){
 $i++;
 ($childelements) = $string =~ /<$parentelement>(.+?)<\/$parentelement>/i;
 print STDERR "childelements: $childelements" if $DEBUG;
 $string =~ s/<$parentelement>(.+?)<\/$parentelement>//i;
 print STDERR "string: $string\n" if $DEBUG;
 foreach $element (@elements){
  print STDERR "element: $element\n" if $DEBUG;
  ($value) = $childelements =~ /<$element>([^<]+)<\/$element>/i;
  print STDERR "value: $value\n" if $DEBUG;
  push @{ $xmlhash{$element} }, $value;
 }
} # end of loop over parent elements
return $i;
} # end sub xml1levelparse

Here’s a real example of calling it, one of the more difficult cases:

$ echo -n 20180127_212203.jpg|./analyzeGPS.pl

GPS: GPSInfo = {0: b'\x02\x02\x00\x00', 1: 'N', 2: (41.0, 0.0, 2.75), 3: 'W', 4: (74.0, 39.0, 12.0934), 5: b'\x00', 6: 0.0, 7: (2.0, 21.0, 58.0), 29: '2018:01:28'}
N,41.0,0.0,2.75,W,74.0,39.0,12.0934
lat,lng: 41.0007638888889, -74.6533592777778
Url: http://api.geonames.org/address?lat=41.0007638888889&lng=-74.6533592777778&username=drjohns
results:
street, houseNumber, postalcode, state, town: , , , ,
Address didn't work out. Trying extendedFindNearby instead. Url: http://api.geonames.org/extendedFindNearby?lat=41.0007638888889&lng=-74.6533592777778&username=drjohns
childelements: <address> <street>Stanhope Rd</street> <mtfcc>S1400</mtfcc> <streetNumber>433</streetNumber> <lat>41.00121</lat> <lng>-74.65528</lng> <distance>0.17</distance> <postalcode>07871</postalcode> <placename>Lake Mohawk</placename> <adminCode2>037</adminCode2> <adminName2>Sussex</adminName2> <adminCode1>NJ</adminCode1> <adminName1>New Jersey</adminName1> <countryCode>US</countryCode> </address>string: <?xml version="1.0" encoding="UTF-8" standalone="no"?>
element: street
value: Stanhope Rd
element: streetnumber
value: 433
element: lat
value: 41.00121
element: lng
value: -74.65528
element: locality
value:
element: postalcode
value: 07871
element: countrycode
value: US
element: countryname
value:
element: name
value:
element: adminName2
value: Sussex
element: adminName1
value: New Jersey
dist,lati,long: 3.88818897839883e-06, 41.00121, -74.65528
dist < distmax condition. i is: 0
street, houseNumber, postalcode, state, admn2, name: Stanhope Rd, 433, 07871, New Jersey, Sussex,
postalcode 07871 exists, let's convert to a town name
url: http://api.geonames.org/extendedFindNearby?lat=41.0007638888889&lng=-74.6533592777778&username=drjohns
results,town: <geonames>
<name>Sparta</name>
<adminName1>New Jersey</adminName1>
<adminName2>Sussex</adminName2>
<adminName3/>
</geonames>
,Sparta
Location: 433 Stanhope Rd Sparta, New Jersey (approximate)

Or, if you just want the interesting stuff,

$ echo -n 20180127_212203.jpg|./analyzeGPS.pl 2>/dev/null

Location: 433 Stanhope Rd Sparta, New Jersey (approximate)

Conclusion

An api for reverse lookup of GPS coordinates which returns the nearest address, including town name, is available. I have provided examples of how to use it. It is unreliable, however, and Geonames.org does provide alternatives which have their own drawbacks. In my image gallery, only a minority of my pictures have encoded GPS data, but it is fun to work with them to pluck out the town where they were shot.

I have incorporated this functionality into a Raspberry Pi-based photo frame I am working on.

I have created an example Perl program that analyzes a JPEG image to extract the GPS information and turn it into an address that is remarkably accurate. It is amazing and uncanny to see it at work. It deals with the screwy and inconsistent results returned by the free service, Geonames.org.

References and related

There are lots of different things you can derive given the GPS coordinates using the Geonames api. Here is a list: https://www.geonames.org/export/ws-overview.html

In this photo frame version of mine, I extract all the EXIF metadata which includes the GPS info.

One day my advanced photo frame will hopefully include an option to learn where a photo was taken by interacting with a remote control. Here is the start of that write-up.

You can pay $5 and get a zip codes to cities database in any format. I’m sure they’ve just re-packaged data from elsewhere, but it might be worth it: https://www.uszipcodeslist.com/

For a more professional api, https://smartystreets.com/ looks quite nice. The free level is 250 queries per month, so not too many. But their documentation and usability look good to me. For this post I was looking for free services and have tried to avoid commercial ones.

Categories
Perl Python Raspberry Pi Web Site Technologies

Raspberry Pi photo frame using your pictures on your Google Drive

Editor’s Note

Please note I am putting all my currently active development and latest updates into this newer post: Raspberry Pi photo frame using your pictures on your Google Drive II

Intro

All my spouse’s digital photo frames are either broken or nearly broken – probably she got them from garage sales. Regardless, they spend 99% of the time black. Now, since I had bought that Raspberry Pi PiDisplay a while back, and it is underutilized, and I know a thing or two about linux, I felt I could create a custom photo frame with things I already have lying around – a Raspberry Pi 3, a PiDisplay, and my personal Google Drive. We make a point to copy all our cameras’ pictures onto the Google Drive, which we do the old-fashioned, by-hand way. After 17 years of digital photos we have about 40,000 of them, over 200 GB.

So I also felt obliged to create features you will never have in a commercial product, to make the effort worthwhile. I thought, what about randomly picking a few for display from amongst all the pictures, displaying that subset for a few days, and then moving on to a new randomly selected sample of images, etc.? That should produce a nice review of all of them over time, eventually. You need an approach like that because you will never get to the end if you just try to display 40,000 images in order!

Equipment

This work was done on a Raspberry Pi 3 running Raspbian Lite (more on that later). I used a display custom-built for the RPi (Amazon.com: Raspberry Pi 7″ Touch Screen Display), though I believe any HDMI display would do.

The scripts
Here is the master file which I call master.sh.

                    
#!/bin/sh
# DrJ 8/2019
# call this from cron once a day to refresh the random slideshow
RANFILE="random.list"
NUMFOLDERS=20
DISPLAYFOLDER="/home/pi/Pictures"
DISPLAYFOLDERTMP="/home/pi/Picturestmp"
SLEEPINTERVAL=3
DEBUG=1
STARTFOLDER="MaryDocs/Pictures and videos"

echo "Starting master process at "`date`

rm -rf $DISPLAYFOLDERTMP
mkdir $DISPLAYFOLDERTMP

# listing of all Google Drive files starting from the picture root
if [ $DEBUG -eq 1 ]; then echo Listing all files from Google drive; fi
rclone ls remote:"$STARTFOLDER" > files

# filter down to only jpegs bigger than ~11 KB, lose the docs folders
if [ $DEBUG -eq 1 ]; then echo Picking out the JPEGs; fi
egrep '\.[jJ][pP][eE]?[gG]$' files |awk '$1 > 11000 {$1=""; print substr($0,2)}'|grep -i -v /docs/ > jpegs.list

# throw NUMFOLDERS or so random numbers for picture selection, select triplets of photos by putting
# names into a file
if [ $DEBUG -eq 1 ]; then echo Generate random filename triplets; fi
./random-files.pl -f $NUMFOLDERS -j jpegs.list -r $RANFILE

# copy over these 60 jpegs
if [ $DEBUG -eq 1 ]; then echo Copy over these random files; fi
cat $RANFILE|while read line; do
  rclone copy remote:"${STARTFOLDER}/$line" $DISPLAYFOLDERTMP
  sleep $SLEEPINTERVAL
done

# rotate pics as needed
if [ $DEBUG -eq 1 ]; then echo Rotate the pics which need it; fi
cd $DISPLAYFOLDERTMP; ~/rotate-as-needed.sh
cd ~

# kill any qiv slideshow
if [ $DEBUG -eq 1 ]; then echo Killing old qiv and fbi slideshow; fi
pkill -9 -f qiv
sudo pkill -9 -f fbi
pkill -9 -f m2.pl

# remove old pics
if [ $DEBUG -eq 1 ]; then echo Removing old pictures; fi
rm -rf $DISPLAYFOLDER

mv $DISPLAYFOLDERTMP $DISPLAYFOLDER

# run looping fbi slideshow on these pictures
if [ $DEBUG -eq 1 ]; then echo Start fbi slideshow in background; fi
cd $DISPLAYFOLDER ; nohup ~/m2.pl >> ~/m2.log 2>&1 &

if [ $DEBUG -eq 1 ]; then echo "And now it is "`date`; fi

I call the following script random-files.pl:

                    

#!/usr/bin/perl
use Getopt::Std;
my %opt=();
getopts("c:df:j:r:",\%opt);
$nofolders = $opt{f} ? $opt{f} : 20;
$DEBUG = $opt{d} ? 1 : 0;
$cutoff = $opt{c} ? $opt{c} : 5;
$cutoffS = 60*$cutoff;
$jpegs = $opt{j} ? $opt{j} : "jpegs.list";
$ranpicfile = $opt{r} ? $opt{r} : "jpegs-random.list";
print "d,f,j,r: $opt{d}, $opt{f}, $opt{j}, $opt{r}\n" if $DEBUG;
open(JPEGS,$jpegs) || die "Cannot open jpegs listing file $jpegs!!\n";
@jpegs = <JPEGS>;
# chomp the newlines; chomp's return value (one newline removed per line) doubles as the picture count
$nopics = chomp @jpegs;
open(RAN,"> $ranpicfile") || die "Cannot open random picture file $ranpicfile!!\n";
for($i=0;$i<$nofolders;$i++) {
  $t = int(rand($nopics-2));
  print "random number is: $t\n" if $DEBUG;
# a lot of our pics follow this naming convention
# 20160831_090658.jpg
  ($date,$time) = $jpegs[$t] =~ /(\d{8})_(\d{6})/;
  if ($date) {
    print "date, time: $date $time\n" if $DEBUG;
# ensure neighboring picture is at least five minutes different in time
    $iPO = $iP = $diff = 0;
    ($hr,$min,$sec) = $time =~ /(\d\d)(\d\d)(\d\d)/;
    $secs = 3600*$hr + 60*$min + $sec;
    print "Pre-pic logic\n";
    while ($diff < $cutoffS) {
      $iP++;
      $priorPic = $jpegs[$t-$iP];
      $Pdate = $Ptime = 0;
      ($Pdate,$Ptime) = $priorPic =~ /(\d{8})_(\d{6})/;
      ($Phr,$Pmin,$Psec) = $Ptime =~ /(\d\d)(\d\d)(\d\d)/;
      $Psecs = 3600*$Phr + 60*$Pmin + $Psec;
      print "hr,min,sec,Phr,Pmin,Psec: $hr,$min,$sec,$Phr,$Pmin,$Psec\n" if $DEBUG;
      $diff = abs($secs - $Psecs);
      print "diff: $diff\n" if $DEBUG;
# end our search if we happened upon different dates
      $diff = 99999 if $Pdate ne $date;
    }
# post-picture logic - same as pre-picture
    print "Post-pic logic\n";
    $diff = 0;
    while ($diff < $cutoffS) {
      $iPO++;
      $postPic = $jpegs[$t+$iPO];
      $Pdate = $Ptime = 0;
      ($Pdate,$Ptime) = $postPic =~ /(\d{8})_(\d{6})/;
      ($Phr,$Pmin,$Psec) = $Ptime =~ /(\d\d)(\d\d)(\d\d)/;
      $Psecs = 3600*$Phr + 60*$Pmin + $Psec;
      print "hr,min,sec,Phr,Pmin,Psec: $hr,$min,$sec,$Phr,$Pmin,$Psec\n" if $DEBUG;
      $diff = abs($Psecs - $secs);
      print "diff: $diff\n" if $DEBUG;
# end our search if we happened upon different dates
      $diff = 99999 if $Pdate ne $date;
    }
  } else {
    $iP = $iPO = 2;
  }
  $priorPic = $jpegs[$t-$iP];
  $Pic = $jpegs[$t];
  $postPic = $jpegs[$t+$iPO];
  print RAN qq($priorPic
$Pic
$postPic
);
}
close(RAN);

Bunch of simple python scripts

I call this one getinfo.py:

                    
#!/usr/bin/python3
# print all the EXIF tags of the image named as the argument
import sys
from PIL import Image
from PIL.ExifTags import TAGS

for (tag, value) in Image.open(sys.argv[1])._getexif().items():
    print('%s = %s' % (TAGS.get(tag), value))

And here’s rotate.py:

                    
#!/usr/bin/python3
# rotate the named image clockwise 90 degrees; save a copy prefixed with rot_
import sys
from PIL import Image

picture = Image.open(sys.argv[1])

# if orientation is 6, rotate clockwise 90 degrees
picture.rotate(-90, expand=True).save("rot_" + sys.argv[1])

While here is rotatecc.py:

                    
#!/usr/bin/python3
# rotate the named image counterclockwise 90 degrees; save a copy prefixed with rot_
import sys
from PIL import Image

picture = Image.open(sys.argv[1])

# if orientation is 8, rotate counterclockwise 90 degrees
picture.rotate(90, expand=True).save("rot_" + sys.argv[1])

And rotate-as-needed.sh:

                    
#!/bin/sh
# DrJ 12/2020
# some of our downloaded files will be sideways, and fbi doesn't auto-rotate them as far as I know
# assumption is that the current directory is the one where we want to alter files
ls -1|while read line; do
  echo file is "$line"
  o=`~/getinfo.py "$line"|grep -ai orientation|awk '{print $NF}'`
  echo orientation is $o
  if [ "$o" -eq "6" ]; then
    echo "90 clockwise is needed, o is $o"
# rotate and move it
    ~/rotate.py "$line"
    mv rot_"$line" "$line"
  elif [ "$o" -eq "8" ]; then
    echo "90 counterclock is needed, o is $o"
# rotate and move it
    ~/rotatecc.py "$line"
    mv rot_"$line" "$line"
  fi
done

And finally, m2.pl:

                    

#!/usr/bin/perl
# show the pics ; rotate the screen as needed
# for now, assume the display is in a neutral
# orientation at the start
use Time::HiRes qw(usleep);
$DEBUG = 1;
$delay = 6; # seconds between pics
$mdelay = 200; # microseconds between checks for a finished fbi
$mshow = "$ENV{HOME}/mediashow";
$pNames = "$ENV{HOME}/pNames";
# pics are here
$picsDir = "$ENV{HOME}/Pictures";

chdir($picsDir);
system("ls -1 > $pNames");
# further massage the names
open(TMP,"$pNames");
@lines = <TMP>;
close(TMP);
foreach (@lines) {
  chomp;
  $filesNullSeparated .= $_ . "\0";
}
open(MS,">$mshow") || die "Cannot open mediashow file $mshow!!\n";
print MS $filesNullSeparated;
close(MS);
print "filesNullSeparated: $filesNullSeparated\n" if $DEBUG;
$cn = @lines;
print "$cn files\n" if $DEBUG;
# throw up a first picture - all black. Trick to make the black background permanent
system("sudo fbi -a --noverbose -T 1 $ENV{HOME}/black.jpg");
system("sudo fbi -a --noverbose -T 1 $ENV{HOME}/black.jpg");
sleep(1);
system("sleep 2; sudo killall fbi");
# start infinitely looping fbi slideshow
for (;;) {
# then start slide show
# shell echo cannot work with the null character so we need to use a file to store it
    #system("cat $pNames|xargs -0 qiv -DfRsmi -d $delay \&");
    system("sudo xargs -a $mshow -0 fbi -a --noverbose -1 -T 1  -t $delay ");
# fbi runs in background, then exits, so we need to monitor if it's still alive
# wait the appropriate estimated amount of time, then look aggressively for fbi
    sleep($delay*($cn - 2));
    for(;;) {
      open(MON,"ps -ef|grep fbi|grep -v grep|") || die "Cannot launch ps -ef!!\n";
      $match = <MON>;
      close(MON);
      if ($match) {
        print "got fbi match\n" if $DEBUG > 1;
      } else {
        print "no fbi match\n" if $DEBUG;
# fbi not found - this slideshow pass is done
        last;
      }
      print "usleeping\n" if $DEBUG > 1;
      usleep($mdelay);
    } # end loop testing if fbi has exited
} # close of infinite loop

You’ll need to make these files executable. Something like this should work:

$ chmod +x *.py *.pl *.sh

My crontab file looks like this (you edit crontab using the crontab -e command):

@reboot sleep 25; cd ~ ; ./m2.pl >> ./m2.log 2>&1
24 16 * * * ./master.sh >> ./master.log 2>&1

This invokes master.sh once a day at 4:24 PM to refresh the 60 photos. My refresh took about 13 minutes the other day, but the old slideshow keeps playing until almost the last second, so it’s OK.

The nice thing about this approach is that fbi works with a lightweight OS – Raspbian Lite is fine, you’ll just need to install a few packages. My SD card is unstable or something, so I have to re-install the OS periodically; an install of Raspbian Lite on my RPi 4 took 11 minutes. Anyway, fbi is installed via:

$ sudo apt-get install fbi

But if your RPi is freshly installed, you may first need to do a

$ sudo apt-get update && sudo apt-get upgrade

Python image manipulation

The drawback of this approach, i.e., not using qiv, is that we gotta do some image manipulation, for which python is the best candidate. I’m going by memory. I believe I installed python3, perhaps as sudo apt-get install python3. Then I needed pip3: sudo apt-get install python3-pip. Then I needed to install Pillow using pip3: sudo pip3 install Pillow.
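In other words, something like this – a sketch from memory, so adjust as needed:

$ sudo apt-get install python3 python3-pip
$ sudo pip3 install Pillow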

m2.pl refers to a black.jpg file. It’s not a disaster to not have that, but under some circumstances it may help. There it is!
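If you’d rather generate your own black.jpg, and you happen to have ImageMagick installed (sudo apt-get install imagemagick), this one-liner makes an all-black JPEG sized for the 7″ display (800×480):

$ convert -size 800x480 xc:black black.jpg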

Many of my photos do not have EXIF information, yet they can still be displayed. For those photos, running getinfo.py produces an error, but the processing of the other photos continues.

I was originally rotating the display itself 90 degrees as needed, so the photos used the maximum amount of display real estate. But that all broke when I tried to revive it, and the cheap servo motor was noisy. Still, folks were pretty impressed when I demoed it, because I did get it to the point where it was indeed working correctly.

Picture selection methodology

There are 20 “folders” (random numbers), each consisting of a triplet of photos – 60 pictures in all. The idea is to give you additional context to help jog your memory: the triplets, with some luck, will often be from the same time period.

I observed how many similar pictures sit adjacent to each other amongst our total collection. To avoid near-identical pictures, I require the selected pictures to be at least five minutes apart in time. Well, I cheated. I don’t pull the timestamp out of the EXIF data as I should (at least not yet – future enhancement, perhaps). Instead I rely on a file-naming convention I noticed is common – 20201227_134508.jpg – which is basically a timestamp-encoded name: the first eight digits are the date, YYYYMMDD, and the last six are HHMMSS, in case it isn’t clear.
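Just to illustrate the convention, here is how bash can slice the date and time back out of such a filename:

$ f=20201227_134508.jpg
$ echo "${f:0:4}-${f:4:2}-${f:6:2} ${f:9:2}:${f:11:2}:${f:13:2}"
2020-12-27 13:45:08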

Rclone

You must install the rclone package, sudo apt-get install rclone.

Can you configure rclone on a headless Raspberry Pi?

Indeed you can. I know because I just did it. You enable your Pi for ssh access, then run rclone config (using putty from a Windows 10 system, in my case). In the course of configuring you’ll get a long Google URL that you can paste into your browser; you verify it’s you and log into your Google account. Then the browser gets redirected to a url like http://127.0.0.1:5462/another-long-url-string – an address that only exists on the Pi itself. Well, put that url into your clipboard and, in another login window to the Pi, fetch it with curl.

That’s what I did, not certain it would work, but I saw it go through in my rclone-config window, and that was that!
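Condensed into commands, the dance looks roughly like this (the 127.0.0.1 port is whatever rclone printed for you, not something you choose):

$ rclone config
# ...answer the prompts; rclone prints a long Google URL to open in your PC's browser...
# Google then redirects the browser to a 127.0.0.1 URL that only exists on the Pi.
# In a second ssh session to the Pi, fetch that URL yourself:
$ curl 'http://127.0.0.1:5462/another-long-url-string'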

Don’t want to deal with rclone?

So you want to use a traditional flash drive plugged into a USB port, just like you have with the commercial photo frames, but you otherwise like my approach of randomizing the picture selection each day? I’m sure that is possible. A mid-level linux person could rip out the rclone stuff I have embedded and replace it as needed with filesystem commands – see the sketch below. I’m imagining a colossal flash drive with all your tens of thousands of pictures on it, where my random selection still adds value. If this post becomes popular enough perhaps I will post exactly how to do it.
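As a starting point, here is a hypothetical sketch, assuming the drive auto-mounts at /mnt/usb (your mount point will differ). In master.sh, the listing line becomes a local find – the -printf format mimics rclone ls output (size, then relative path), which the awk filter expects – and the copy line becomes a plain cp:

# replaces the "rclone ls" line:
find /mnt/usb -type f -printf '%s %P\n' > files

# and inside the copy loop, replaces the "rclone copy" line:
cp "/mnt/usb/$line" $DISPLAYFOLDERTMP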

Getting started with this

After you’ve done all that and want to try it out, you can run

$ ./master.sh

First you should see a file called files growing in size – that’s rclone doing its listing, which takes a few minutes. Then it generates random numbers for photo selection – that’s very fast, maybe a second. Then it slowly copies the selected images over to a temporary folder called Picturestmp – the slowest part. If you do a directory listing you should see the number of images in that directory growing slowly, maybe three per minute, until it reaches 60. Then the rotations are applied, a couple of seconds per image; even if you didn’t set up your python environment correctly it doesn’t crash, it just effectively skips the rotations. Finally all the images are copied over to the production area, the directory called Pictures; the old slideshow program is “killed,” and the new slideshow starts up. The whole process takes around 15 minutes.
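A handy way to watch the copy stage is to count the arriving images every ten seconds:

$ watch -n 10 'ls ~/Picturestmp | wc -l'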

I highly recommend running master.sh by hand as just described to make sure it all works. Probably some of it won’t. I don’t specialize in making recipes, more just guidance. But if you’re feeling really bold you can just power it up and wait a day (because initially you won’t have any pictures in your slideshow) and pray that it all works.

Tip: Undervoltage thunderbolt suppression

This is one of those topics where you’ll find a lot on the Internet, but little about what we actually need to do: how do we stop that thunderbolt that appears in the upper right corner from appearing? First, the boilerplate warning. That thingy appears when you’re not delivering enough voltage, a condition which can harm your SD card, blah, blah. I’ve blown up a few SD cards myself. But in practice, with my RPi 3, I’ve been running it with the Pi Display for 18 months with no mishaps. So, come on, let’s get crazy and suppress the darn thing. So… here goes. To suppress that yellow stroke of lightning, add these lines to your /boot/config.txt:

                    
# suppress undervoltage thunderbolt – DrJ 8/21
# see http://rpf.io/configtxt
avoid_warnings=1

For good measure, if you are not using the HDMI port, you can save some energy by disabling HDMI:

$ tvservice -o
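Should you want HDMI back later, this restores it with its preferred settings:

$ tvservice -p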

Still missing

I’d like to display a transition image when switching from the current set of photos to the new ones.

Suppressing boot up messages might be nice for some. Personally I think they’re kind of cool – makes it look like you’ve done a lot more techie work than you actually have!

You’re going to get some junk images. I’ve seen where an image is a thumbnail (I guess) and gets blown up full screen so that you see giant blocks of pixels. I could perhaps magnify those kinds of images less.

Movies are going to be tricky so let’s not even go there…

I was thinking about making it a navigation-enabled photo frame, such as integration with a Gameboy controller. You could do some really awesome stuff: Pause this picture; display the location (town or city) where this photo was taken; refresh the slideshow. It sounds fantastical, but I don’t think it’s beyond the capability of even modestly capable hobbyist programmers such as myself.

I may still spin the frame 90 degrees this way and that. I have the servo mounted and ready; I just have to revive the control commands for it.

References and related

This 7″ display is a little small, but it’s great to get you started. It’s $64 at Amazon: Amazon.com: Raspberry Pi 7″ Touch Screen Display: Electronics

I have an older approach using qiv which I lost the files for, and my blog post got corrupted. Hence this new approach.

In this slightly more sophisticated approach, I make a greater effort to separate the photos in time, and I make a whole bunch of other improvements as well. But it’s a lot more files, so it may only be appropriate for a more seasoned RPi command-line user.

My advanced slideshow treatment is beginning to take shape. I just add to it while I develop it, so check it periodically if that is of interest. Raspberry Pi advanced photo frame.

Categories
Consumer Interest Consumer Tech Network Technologies Raspberry Pi

Consumer Tech: Home Internet stopped working

Intro

We woke up yesterday to no Internet. The usual remedies consumers go through did nothing to resolve the issue. What to do?

The details – November 25, 2020

The usual restarts of my router and the cable modem did not work. I plugged my work laptop directly into the cable modem for some quick tests, but that did not work either.

I plugged my work-issued VPN router directly into the cable modem; it did not pick up an IP or re-establish its tunnel.

When I logged into my router I saw that its WAN IP was listed as 0.0.0.0, which means none at all.

I called the ISP twice. Both times they said they could “see” my modem, and they tried to restart it on their end, but that did not seem to do anything at all, judging from the unchanging status LEDs (see picture below). I got my service visit moved up from December 11th to December 2nd, but that would still mean a week without Internet – not so great when three people are relying on it for their work.

I rebooted the cable modem a couple times at least. Nothing changed.

Then I started some research on quickie alternatives. Ask a friend from work for a spare Cradlepoint air card? They’re already out on vacation. Get a Chinese-made unlocked hotspot with pre-purchased data? Seems fishy, and ultimately expensive. Verizon brand hotspot? We had a borrowed one. Very finicky. And no ethernet ports.

Raspberry Pi + DIY approach?

At one point in the evening, convinced I would have to wait days for a visit from the cable guy, I rigged up a spare Raspberry Pi to act as a router between a mobile hotspot (a companion tablet to a Verizon phone) and my Linksys router. Why bother? Why not just use the hotspot directly? Mostly because it’s a pain in the rear to reprogram all those Internet of Things devices one has in one’s home these days – notably the several Echo Dots, but also a wireless printer, a few laptops, Firesticks, tablets, etc. With this approach I keep the WiFi SSID as it was for all those devices. And it sort of worked! At least I got one Echo Dot to work. I didn’t push my luck – this stuff consumes a lot of data, even when “idle.”
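The full write-up is in the references, but the heart of it is ordinary linux NAT routing. A sketch, assuming the Pi reaches the hotspot over wlan0 and its eth0 is cabled to the Linksys WAN port:

$ sudo sysctl -w net.ipv4.ip_forward=1
$ sudo iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE
$ sudo iptables -A FORWARD -i eth0 -o wlan0 -j ACCEPT
$ sudo iptables -A FORWARD -i wlan0 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT

None of that persists across a reboot, which for a stopgap is arguably a feature.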

To be continued…

Linksys WRT1200AC status lights – when healthy!
Cable modem status lights – when operating normally

But I am pretty good at troubleshooting. What I know, that less experienced people may not, is that all the testing I had done to that point was not ironclad proof of failure of the cable modem. I know the traditional advice of old is to hook up a laptop directly to the ethernet port and work with it that way. Furthermore, the cable company’s support said my status lights were reading normally. So, when I tested with my work laptop? Are you kidding? That thing has so many problems when I switch between SSIDs, due to some new security software – it loves to display the Globe in the system tray, and the only recourse is to reboot. That’s what I was seeing; and notice I said a quickie test? I did not have time for that reboot and all the rest. And that work-issued VPN router? I don’t know how that thing really works either. Never having set it up that way, I did not trust reading too much into its results (which were essentially an orange status light instead of the usual white).

So when I had more time in the evening, I hooked up a home laptop which I knew should work. After a cable modem reboot I did in fact get an IP and could surf the Internet. That was a glimmer of hope. So I put my router back in place. Still it did not pick up a WAN IP address – still reading 0.0.0.0.

Then I put the laptop back and wrote down the IP, subnet mask and default gateway it received. Then I put my router back and switched its WAN mode from DHCP to fixed IP, entering the exact IP address the laptop had picked up, with the correct subnet mask and default gateway. Still it was not working. When the router is not working, the WAN status light is sort of orange-ish; it’s white (pictured above) when the WAN link is communicating.

I decided the fault lay more with my router than anywhere else, and since it wasn’t working and no number of power cycles was changing that situation, a factory reset was the thing to try – the last thing I could try. I noted the exact names and passwords of my SSIDs, held the reset button for 15 seconds until the status lights flicked out, and let it start up. It went through a start-up process, which I saw after connecting to its default IP of 192.168.1.1. It was clear it was not seeing the cable modem at the point where it should have, but it offered some very specific advice: power off the cable modem, wait two minutes, power it back on, and then it would try again. And that did work! Yeah!

What may have precipitated this

My local cable company was recently bought by a much bigger company. I know for a fact what my WAN IP used to be, and I see it has changed. They now draw from a giant pool of IPs belonging to the new owner – a /14 in CIDR notation, i.e., 2^(32-14) = 262,144 addresses. So I believe the problem occurred due to a poor implementation of the DHCP protocol within my router, or a poor interplay between my router’s DHCP client and the ISP’s DHCP server. But I can’t really research that line of troubleshooting: the ISP’s DHCP policies would require a lot of time-consuming experimentation on my part to reverse engineer, based on observed behavior under different conditions. And I would need an open source DHCP client – though I have the Raspberry Pi running dnsmasq for that, so that end could gather all the needed client information.
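If I ever do go down that road, the first step would be watching the DHCP exchange on the wire from the Pi, something like:

$ sudo tcpdump -i eth0 -n -v port 67 or port 68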

Prior to this acquisition I would tend to keep the same WAN IP for years – that’s how stable it was.

Another approach

Very germane to this topic is the fact that my neighbor down the street experienced his own Internet outage the day after I did! His solution was to buy a better cable modem. I did not know you could do that – I thought they were proprietary. He also saw his router with the 0.0.0.0 WAN address. And his approach also worked. This makes me less sure my router was really at fault – maybe Altice screwed up their DHCP service for half a day.

Conclusion

Unusual for me, I’m going to write the conclusion before writing the tedious part, which is the full explanation in the middle.

By the end of the day I got the Internet working. After isolating the problem to my home router, the Linksys WRT1200AC, and determining that no amount of power cycling was clearing things up, a factory reset did the trick! The cable modem and my cable Internet service were fine all along.

References and related

How to turn your Raspberry Pi into a router which shares your hotspot with your home router.

The Linksys WRT1200AC is no longer sold. It looks like the newer version is the WRT1900AC – it even looks identical. It’s a good router. I know there are fancier solutions out there, but there are also worse ones as well, so I can only give my qualified endorsement: https://www.amazon.com/Linksys-AC1900-Source-Wireless-WRT1900AC/dp/B014MIBLSA/ref=sr_1_1?dchild=1&keywords=linksys+wrt1200ac&qid=1606519765&sr=8-1

DHCP and CIDR notation are both described in great detail in their respective Wikipedia articles.