Categories
Internet Mail Linux Raspberry Pi

From Audio Recording to YouTube with two button clicks and a Raspberry Pi

Intro

This post builds on the success of previous posts and uses elements from them. I don’t honestly expect anyone to repeat all the ingredients I have assembled here. But I have created them in a fairly modular way so you can pick out those elements which will help your project.

But, it is true, I have gotten the user experience of recording audio from, e.g., a band practice, down to a click of the ENTER button to start the recording, another click to stop it, and a click of the UP ARROW button to process the audio recording – turn it into a video – and upload it to YouTube, mark it as UNLISTED, and send the link to me in an email. Pretty cool if I say so myself. I am refining things as I write this to make it more reliable.

This write-up is not terribly detailed. It presumes at least a medium skill level with linux.

Ingredients
  • RPi 3 or RPi 4
  • Raspberry Pi OS Desktop running the PIXEL desktop environment
  • tiger VNC, i.e., the package tigervnc-scraping-server
  • chromium-browser (it comes with the Desktop install)
  • xdotool (apt-get install xdotool)
  • xsel (apt-get install xsel)
  • YouTube account
  • crontab entries – see below
  • you do not need an HDMI display, except for the OS setup
  • a vnc viewer such as Real VNC
  • exim4 and bsd-mailx packages
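
To pull in the add-on packages from that list in one go, something like the following should work on Raspberry Pi OS (chromium-browser already comes with the Desktop image):

sudo apt-get update
sudo apt-get install -y tigervnc-scraping-server xdotool xsel exim4 bsd-mailx
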
The scripts

recordswitch.sh

#!/bin/bash
# DrJ 8/2021
# Control the livestream of audio to youtube
# works in conjunction with an attached keyboard
# I use bash interpreter to give me access to RegEx matching
HOME=/home/pi
log=$HOME/audiocontrol.log
program=ffmpegwireless9.sh
##program=tst.sh # testing
PGM=$HOME/$program
# de-press ENTER button produces this:
matchE="1, 28, 0"
# up arrow
matchU="1, 103, 0"

epochsOld=0
cutoff=3 # seconds
DEBUG=1
ledtime=10
#
echo "$0 starting monitoring at "$(date)
# Note the use of script -q -c to avoid line buffering of the evread output
script  -q -c $HOME/evread.py /dev/null|while read line; do
[[ $DEBUG -eq 1 ]] && echo line is $line
# seconds since the epoch
epochs=$(date +%s)
elapsed=$((epochs-$epochsOld))
if [[ $elapsed -gt $cutoff ]]; then
  if [[ "$line" =~ $matchE ]]; then
# ENTER button section - recording
    echo "#################"
    echo We caught this input: $line at $(date)
# see if we are already running our recording program or not
    pgrep -f $program>/dev/null
# 0 means it's been found
    if [ $? -eq 0 ]; then
# kill it
      echo KILLING $program
      pkill -9 -f $program; pkill -9 arecord; pkill -9 ffmpeg
      pkill -9 -f blinkLED
      echo Shine the PWR LED
      $HOME/shineLED.sh
    else
# start it
      echo Blinking PWR LED
      $HOME/blinkLED.sh &
      echo STARTING $PGM
      $PGM > $PGM.log.$(date +%m-%d-%y:%H:%M) 2>&1 &
    fi
    epochsOld=$epochs
  elif [[ "$line" =~ $matchU ]]; then
# UP ARROW button section - processing
    echo "###########"
    echo processing commencing at $(date)
    $HOME/blinktwiceLED.sh &
    echo start processing of the recording
    $HOME/process.sh >> process.log 2>&1
    pkill -9 -f LED
    $HOME/shineLED.sh
    epochsOld=$epochs
  fi
[[ $DEBUG -eq 1 ]] && echo No action taken. Continue to listen
fi
done

ffmpegwireless9.sh

#!/bin/sh
ffmpeg \
-thread_queue_size 4096 \
-f alsa -i plughw:1,0 \
-thread_queue_size 64 \
-f lavfi -i color=color=darkgray \
-c:v libx264 -pix_fmt yuv420p -g 18  -x264opts no-scenecut -b:v 50k \
-bufsize 512k \
-acodec libmp3lame -ar 44100 \
-threads 8 \
-b:a 128k \
-r 5 \
-s 480x320 \
-flush_packets 1 \
-f mp3 file:record-$(date +%m-%d-%y-%H-%M).mp3 \
< /dev/null

mp32flv.sh

#!/bin/sh
# DrJ 10/2021
#
# Note that ffmpeg runs at ~ 4 x real-time when it is producing this flv video file
#
line=$1
time=$(ffprobe -v error -show_entries format=duration   -of default=noprint_wrappers=1:nokey=1 file:${line}|tail -1)
echo recording time: $time s
echo $time > duration
video=$(echo ${line}|sed 's/mp3/flv/')
  ffmpeg \
 -i file:${line} \
 -f lavfi -i color=color=darkgray \
 -c:v libx264 -pix_fmt yuv420p -g 18  -x264opts no-scenecut -b:v 50k \
 -bufsize 512k \
 -acodec libmp3lame -ar 44100 \
 -threads 8 \
 -b:a 128k \
 -r 5 \
 -s 480x320 \
 -t $time \
 -f flv file:${video} \
 < /dev/null

auto-upload.sh

#!/bin/sh
# automate upload of YouTube videos
#
# define some functions
randomsleep(){
# sleep random amount between 1.5 to 2.5 seconds
t10=$(shuf -n1 -i 15-25)
t=$(echo $t10/10|bc -l)
sleep $t
}
drjtool(){
randomsleep
xdotool $1 $2 $3
randomsleep
}

echo Start video upload
echo set display to main display
export DISPLAY=:0
# launch chromium
echo launch chromium
chromium-browser --kiosk https://studio.youtube.com/ > /dev/null 2>&1 &
sleep 25
echo move to CREATE button
drjtool mousemove 579 19
echo click on CREATE button
drjtool click 1
echo move to Upload videos
drjtool mousemove 577 34
echo click Upload videos
drjtool click 1
echo move to SELECT FILES
drjtool mousemove 305 266
echo click on SELECT FILES
drjtool click 1
echo move mouse to Open button
drjtool mousemove 600 396
echo click open and pause a bit for video upload
drjtool click 1
sleep 20
secs=$(cat duration)
moretime=$(echo $secs/60|bc -l)
sleep $moretime
echo "mouse to NEXT button (accept defaults)"
drjtool mousemove 558 386
echo click on NEXT
drjtool click 1
echo move to radio button No it is not made for kids
drjtool mousemove  117 284
echo click radio button
drjtool click 1
echo back to NEXT button
drjtool mousemove 551 384
echo click NEXT
drjtool click 1
echo 'click NEXT again (then says no copyright issues found)'
drjtool click 1
echo click NEXT again
drjtool click 1
echo move to Unlisted visibility radio button
# [note that public would be drjtool mousemove 142 235, private is 142 181]
drjtool mousemove 142 208
echo click Unlisted
drjtool click 1
echo move to copy icon
drjtool mousemove 532 249
echo echo copy URL to clipboard
drjtool click 1
echo move to Save
drjtool mousemove 551 384
echo click Save
drjtool click 1
echo move to CLOSE
drjtool mousemove 434 304
echo click close
drjtool click 1

echo video URL
xsel -b|tee clipboard
echo '
kill chromium browser'
sleep 25
echo kill chromium
kill -9 %1
sleep 2
url=$(cat clipboard|xargs -0 echo)
echo url is $url

process.sh

#!/usr/bin/bash
HOME=/home/pi
sleeptime=5
cd $HOME
# loop over all mp3 files in home directory
ls -1 record*mp3|while read line;do
 echo working on $line at $(date)
 video=$(echo ${line}|sed 's/mp3/flv/')
 echo creating flv video file $video
# create the video first
 ./mp32flv.sh $line
 echo move $line to mp3 directory
 [[ -d mp3s ]] || mkdir mp3s
 mv $line mp3s
 echo mv flv to upload directory
 [[ -d 00uploads ]] || mkdir 00uploads
 mv $video 00uploads
 echo start the upload
 ./auto-upload.sh
 echo get the url to this video on YouTube
 url=$(cat clipboard|xargs -0 echo)
 echo test that it worked
 if [[ ! "$url" =~ "http" ]]; then
   echo FAIL. Try once again
   ./auto-upload.sh
 fi
 echo send mail to Drj
 ./announceit.sh
 echo move video $video to flvs directory
 mv ./00uploads/$video flvs
 echo sleep for a bit before starting the next one
 sleep $sleeptime
done
echo All done with processing at $(date)

blinkLED.sh

#!/bin/sh
# DrJ 8/30/2021
# https://www.jeffgeerling.com/blogs/jeff-geerling/controlling-pwr-act-leds-raspberry-pi
# put LED into GPIO mode
echo gpio | sudo tee /sys/class/leds/led1/trigger > /dev/null
# flash the bright RED PWR (power) LED quickly to signal whatever
while /bin/true; do
  echo 0|sudo tee /sys/class/leds/led1/brightness > /dev/null
  sleep 0.5
  echo 1|sudo tee /sys/class/leds/led1/brightness > /dev/null
  sleep 0.5
done

shineLED.sh

#!/bin/sh
# DrJ 8/30/2021
# https://www.jeffgeerling.com/blogs/jeff-geerling/controlling-pwr-act-leds-raspberry-pi
# put LED into GPIO mode
echo gpio | sudo tee /sys/class/leds/led1/trigger > /dev/null
# turn on the bright RED PWR (power) LED
echo 1|sudo tee /sys/class/leds/led1/brightness > /dev/null

blinktwiceLED.sh

#!/bin/sh
# DrJ 8/30/2021
# https://www.jeffgeerling.com/blogs/jeff-geerling/controlling-pwr-act-leds-raspberry-pi
# put LED into GPIO mode
echo gpio | sudo tee /sys/class/leds/led1/trigger > /dev/null
# flash the bright RED PWR (power) LED quickly to signal whatever
while /bin/true; do
  echo 0|sudo tee /sys/class/leds/led1/brightness > /dev/null
  sleep 3
  echo 1|sudo tee /sys/class/leds/led1/brightness > /dev/null
  sleep 0.35
  echo 0|sudo tee /sys/class/leds/led1/brightness > /dev/null
  sleep 0.35
  echo 1|sudo tee /sys/class/leds/led1/brightness > /dev/null
  sleep 0.35
done

announceit.sh

#!/bin/sh
url=$(cat clipboard|xargs -0 echo)
mailx -r [email protected] -s "New youtube video $url posted" [email protected]<<EOF
Check out our latest recording:

      $url

Regards,
Yourself
EOF

crontab entries

@reboot sleep 15; /home/pi/recordswitch.sh > recordswitch.log 2>&1
# launch vnc server on display 1
@reboot sleep 65;x0vncserver -passwordfile ~/.vnc/passwd -display :0 >  x0vncserver.log 2>&1

The idea

The recordswitch.sh script waits for input from the remote controller. It is programmed to kick off ffmpegwireless9.sh if the ENTER button is pushed, or process.sh if the UP ARROW button is pushed.

For testing purposes you may want to run process.sh by hand, i.e., ./process.sh, while you are viewing the display using a VNC viewer alongside the terminal screen.

The scripts are quite verbose and give lots of helpful output in their log files.

Upgrading from Raspberry Pi Lite to Raspberry Pi Desktop

I always like to start my RPi OS install with Raspberry Pi OS Lite. But to follow the upload parts of this post you really need the Desktop version. This article is a good write-up of how to upgrade to Desktop from Lite: How To Upgrade Raspbian Lite to Desktop (PIXEL, KDE, MATE, …) – RaspberryTips

Tips

Unfortunately the code-display plugin I use inserts a blank line at the top of each script. Those blank lines should all be removed after you copy the scripts.

After getting all the scripts, make them all executable in one go with a command such as chmod +x *sh

To read the input from the remote controller you need to set up evread.py and there may be some python work to do. This post has those details.

The chromium browser needs to be run by hand one time over your VNC viewer. Its window has to be shrunk to 50% by pressing Ctrl+Shift+- (minus) about four times. You need to log in to your YouTube or Gmail account so it remembers your credentials. And you need to go through the motions of uploading a video so it knows to use the 00uploads directory next time.
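
A minimal sketch of that one-time manual session, run from a terminal on the RPi while watching the screen through your VNC viewer (it reuses the same launch the upload script performs; add --kiosk if you want to match the script exactly):

export DISPLAY=:0
chromium-browser https://studio.youtube.com/ > /dev/null 2>&1 &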

Don’t run a recording and an upload at the same time. I think the CPU would be taxed so I did not test that out. But you can record one day – even multiple recordings, and upload them a day or days later. That should work OK. It just processes the files one at a time, hopefully (untested).

announceit.sh is pretty dodgy. You have to understand SMTP mail somewhat to have a fighting chance of getting it to work. Fortunately I was an SMTP admin previously. Now, my ISP, Optimum, has a filter in place which prevents ordinary residential customers from sending out normal email to arbitrary SMTP addresses. However, to my surprise, they do run a mail relay server which you can connect to on the standard tcp port 25. I don't really want to give it away but you can find it with the appropriate Internet search. I assume it is only for Optimum customers. Perhaps your ISP has something similar. So after you install exim4, you can configure a "smarthost" with the command dpkg-reconfigure exim4-config. But, again, you have to know a bit about what you are doing. Suffice it to say that I got mine to work.
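
Once exim4 is reconfigured as a smarthost client, a quick way to test that outbound mail actually flows is to reuse the same mailx syntax announceit.sh uses (substitute your own addresses; the relay host is whatever you entered during the reconfigure):

sudo dpkg-reconfigure exim4-config
echo "test from the RPi" | mailx -r [email protected] -s "RPi mail test" [email protected]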

But for everyone else who can't figure that out, just comment out the ./announceit.sh line in process.sh. Put a # character at the front of the line to do that.
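
That is, the relevant line in process.sh would then read:

 # ./announceit.sh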

I have really only tested recordings of up to 45 minutes. I think an hour should be fine. I would suggest to break it up for longer.

The files can take a lot of space so you may need to clean up older files if you are a frequent user.

I’ve had about one failure during the upload out of about seven tests. So reliability is pretty good, but probably not perfect.

Why not just livestream? True, it's sooo much easier. And I've covered how to do that previously. But, perhaps because of my WiFi, its reliability was closer to 50% in my actual experience. I needed greater reliability, and it turns out I didn't need the live aspect of the whole thing, just the recording for later critiquing.

The recording approach I've taken uses ffmpeg to directly produce an mp3 file – it's more compact than a WAV file. In and of itself the mp3 file may be useful to you, e.g., to include as an attachment in an email or whatever, for instance for a single song. All the mp3s are finally stored in a folder called mp3s, and all the videos are finally stored in a folder called flvs.

About that upload

The upload itself is super awesome to watch. I captured an actual automated upload with the script running on the right and the X Window display on the left in this YouTube video.

So the upload part was covered in this previous post.

Fixing recording which sounds like chipmunks

Somehow I managed to use some of these tools the other day and my mp3s ended up sounding like Alvin and the Chipmunks! I wondered if there was a way to recover them. I found there is, though I had to develop it a bit. It uses the new-ish rubberband filter of ffmpeg. I call this tiny script dechipmunk.sh:

#!/bin/sh
input=$1
#ffmpeg -thread_queue_size 2048 -i $input -y -ac 1 -filter:a rubberband='tempo=2' -loglevel warning stretched$input
#ffmpeg -thread_queue_size 2048 -i $input -y -ac 1 -filter:a rubberband='pitch=2' -loglevel warning stretched$input
ffmpeg -thread_queue_size 2048 -i $input -y -ac 1 -filter:a "rubberband=pitch=0.3333:tempo=0.3333" -loglevel warning stretched$input

In my case I had to slow things down and lower the pitch by the same factor: one third, hence the 0.3333.
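
Usage is just the script name followed by the afflicted file; given how the script builds its output name, the repaired copy lands next to the original with a stretched prefix (the filename below is only an example in the recorder's naming style):

./dechipmunk.sh record-10-30-21-14-05.mp3
# produces stretchedrecord-10-30-21-14-05.mp3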

How to pass multiple options to an ffmpeg filter

In doing the above I had to work out the syntax for passing more than one option to the new rubberband filter of ffmpeg. I wanted to specify both the pitch and the tempo options. So you see from above they had to be separated with a colon and the whole filter expression enclosed in quotes. Hence the funny-looking

"rubberband=pitch=0.3333:tempo=0.3333"

Future development

Well, I’m thinking of removing the chit-chat from the recording in an automated fashion. That may mean applying machine learning, or maybe something simpler if someone has covered this territory before for the RPi. But it might be a good excuse to do a shallow dive into machine learning.

Conclusion

I’m sure this method of YouTube upload is very flaky and will probably only work once or twice, if at all. But at least in my trials, it did work a few times. So perhaps it could be hardened and made more error-correcting. There are a lot of moving parts for it all to work. But it’s definitely cool to watch it go when it is working!

References and related

Rii infrared remote control – only $12: Amazon.com: Rii MX3 Multifunction 2.4G Fly Mouse Mini Wireless Keyboard & Infrared Remote Control & 3-Gyro + 3-Gsensor for Google Android TV/Box, IPTV, HTPC, Windows, MAC OS, PS3 : Electronics

Reading keyboard input.

How To Upgrade Raspbian Lite to Desktop (PIXEL, KDE, MATE, …) – RaspberryTips

YouTube Livestreaming with a click of a button on Raspberry Pi

Automated YouTube video uploading from Raspberry Pi without using the YouTube api

Categories
Linux Raspberry Pi

Automated YouTube video uploading from Raspberry Pi without using the YouTube api

2021 Intro

This post promises more than it actually delivers, ha, ha. It is squarely aimed at the mid-level to advanced RPi enthusiast. Most who read this will get discouraged and look for another solution. I did the same in fact and I will review my failures with alternatives.

The essence of this approach is screen automation with a very nice tool called xdotool. For me it works. It will definitely, 100% require some tweaks for anyone else. This is not a case of run a few installs, copy this code and you're good to go. But if you have the patience, you will be rewarded with either fully or at least semi-automated video uploads to YouTube from your Raspberry Pi.

One caveat. Please obey YouTube’s terms of service. In other words, don’t abuse this! As soon as someone starts using this method in an aggressive or abusive fashion, we will all lose this capability. They have crack security experts and could squelch this approach in a heartbeat.

I actually don't have all the pieces in place for myself, but I have enough cool stuff that I wanted to begin to share my findings.

One beautiful thing about what I'm going to show is that you get to see the cursor moving about the screen in response to your automated commands – you see exactly what it's doing, which screens it's clicking through, etc. So if there's an issue – say YouTube changes its layout – you'll most likely be able to know how to adapt.

What’s wrong with using YouTube’s api?

Plenty. It used to be feasible. It certainly would make all our lives a lot easier. But YouTube is not a charity. They have squeezed out the little guy by making the barrier to entry so high that it's really only available to highly determined IT folks. It's just too difficult to figure out all the needed screens, etc., and all the help guides refer to older api versions where things were different. YouTube clamped down in July 2020 on who or what can use their api. There are a lot of old HowTos pre-dating that which will just lead you to dead-ends. So, go ahead, I dare you to stop reading this and use the api and report back. Maybe you manage to create a project, great, and an api key, great, and even to assign your api key the correct YouTube-specific permissions – all great – and associate credentials – super – and finally borrow someone's code to upload a video – been there, done that. That video will be marked private. So then you try to root around to see what you have to do to make it public. Ah, a project review. Great. You were only in test mode. So now you're confronted with this form. If they cared about the little guy there would be a radio button – "I only wish to upload a few videos a week for a small cadre of users, spare me the bureaucracy" – and that'd be it. But, no… Are you applying for a quota? Huh? I just want to upload a video and have it marked as unlisted. Some users remarked they filled out the form, never got their project reviewed and never heard back. Maybe they're the exception, I don't know. It's just over the top for me so I give up.

OK. So, maybe YoutubeUploader?

Nope. Doesn’t work. It’s based on the old stuff.

OK. What about that guy’s api-less Node.js uploader?

Maybe. I could not get it to work on RPi. But I didn’t try super hard. I just like rolling my own, frankly. My approach is much more transparent. At least this approach inspired me to imagine the approach I am about to share. Because I believe the Node.js guy is just doing screen scraping but you can’t even see the screens.

Or simply do a Livestream?

Agreed. Livestreaming is quite straightforward by comparison with what I developed. My blog post about one click livestreaming covers it. But I have not had good results with reliability. As often as it works, it doesn’t work. With this new approach I’m going to try to create separate steps so that if anything goes wrong, an individual step can be re-run. Another advantage of separating steps is that a recording can be done “in the field” and without WiFi access. Remember an RPi 3 works great for hours with a decent portable USB battery that’s normally used for phones. Then the resulting recording can be converted to video and uploaded once the RPi is back to its usual WiFi SSID.

Preliminary upload, October 2021

In the video below the right screen is a terminal window showing what the script is doing. It needs some tweaking, and the YouTube window gets stuck so it’s not showing some of the screens. But it’s already totally awesome – and it worked!

Watch an actual video get uploaded
Code for the above

#!/bin/sh
# automate upload of YouTube videos
#
# define some functions
randomsleep(){
# sleep random amount between 1.5 to 2.5 seconds
t10=$(shuf -n1 -i 15-25)
t=$(echo $t10/10|bc -l)
sleep $t
}
drjtool(){
randomsleep
xdotool $1 $2 $3
randomsleep
}

echo Start video upload
echo set display to main display
export DISPLAY=:0
# launch chromium
echo launch chromium
chromium --kiosk https://studio.youtube.com/ > /dev/null 2>&1 &
sleep 20
echo move to CREATE button
drjtool mousemove 579 19
echo click on CREATE button
drjtool click 1
echo move to Upload videos
drjtool mousemove 577 34
echo click Upload videos
drjtool click 1
echo move to SELECT FILES
drjtool mousemove 305 266
echo click on SELECT FILES
drjtool click 1
echo move mouse to Open button
drjtool mousemove 600 396
echo click open and pause a bit for video upload
drjtool click 1
sleep 20
echo "mouse to NEXT button (accept defaults)"
drjtool mousemove 558 386
echo click on NEXT
drjtool click 1
echo move to radio button No it is not made for kids
drjtool mousemove  117 284
echo click radio button
drjtool click 1
echo back to NEXT button
drjtool mousemove 551 384
echo click NEXT
drjtool click 1
echo 'click NEXT again (then says no copyright issues found)'
drjtool click 1
echo click NEXT again
drjtool click 1
echo move to Unlisted visibility radio button
# [note that public would be drjtool mousemove 142 235, private is 142 181]
drjtool mousemove 142 208
echo click Unlisted
drjtool click 1
echo move to copy icon
drjtool mousemove 532 249
echo echo copy URL to clipboard
drjtool click 1
echo move to Save
drjtool mousemove 551 384
echo click Save
drjtool click 1
echo move to CLOSE
drjtool mousemove 434 270
echo click close
drjtool click 1

echo video URL
xsel -b|tee clipboard
echo kill chromium browser
sleep 5
echo kill chromium
kill -9 %1
url=$(cat clipboard|xargs -0 echo)
echo url is $url

I call the script auto-test.sh, just to give it a name.

Ingredients
  • RPi 3 or RPi 4
  • Raspberry Pi OS with full GUI and autologin set up
  • tiger VNC, i.e., the package tigervnc-scraping-server
  • chromium browser – but I think that comes with the GUI install
  • xdotool (apt-get install xdotool)
  • xsel (apt-get install xsel)
  • YouTube account
  • crontab entries – see below
  • you do not need an HDMI display, except for the OS setup
  • a vncviewer such as Real VNC
Idea

Don’t use the GUI for anything else!

Crontab (do a crontab -e to get into your crontab) should contain these lines:

@reboot sleep 15; /home/pi/recordswitch.sh > recordswitch.log 2>&1
# launch vnc server on display 1
@reboot sleep 65;x0vncserver -localhost no -passwordfile ~/.vnc/passwd -display :0 >  x0vncserver.log 2>&1

Work with chromium the first time by hand. As I recall you should:

  • Create a directory like 00uploads – so it appears highest in the list
  • put a single video in 00uploads
  • Do an upload by hand (to help chromium remember to choose this upload directory)
  • launch chromium browser
  • log into your YouTube account at https://studio.youtube.com
  • shrink the browser until its size is 50% (press Ctrl+Shift+- about four times)
  • Don’t add other tabs and stuff to Chromium

Then subsequent launches of chromium should remember a bunch of these settings, specifically, your login info, the shrunken size, the upload directory, and maybe the (lack of) other tabs.

The beauty of this approach is that it is more transparent than the alternatives. You see exactly what your program is doing. You can issue the xdotool commands by hand to, e.g., change up the coordinates a little bit. Or even enter a video title.
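
If you need to discover or adjust coordinates, or even type a title, xdotool can be driven interactively from a terminal with DISPLAY set, just like the script does. These are illustrative commands only, not part of the upload flow:

export DISPLAY=:0
xdotool getmouselocation            # hover the mouse over a button, then read off x and y
xdotool mousemove 305 266 click 1   # move to the given coordinates and click, as the script does
xdotool type "My video title"       # type text into whatever field currently has focus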

So getting back to the idea, the automation idea is to finish a video somehow, then move it to the 00uploads directory, invoke this uploader program, then move it to an uploaded directory or some such.

Imagine the versatility if I used my remote controller for RPi to map one button for audio recording, and a second button for automating video upload! Well, when I find the time that’s what I plan to do. I will make a separate post where the recording and uploading are shown – more or less the culmination of all the pieces.

Oh, and back to the idea again, I wanted to share the unlisted link with band members. So, you see how it is basically in the result of xsel -b, since xsel copied the clipboard which contained the YouTube URL for the video we just uploaded? I have to fix up the parsing because some junk characters are getting included, but I plan to email that link to myself first, where I will do a brief manual check, and then forward it to the rest of the band. So, again, it's really cool that we could even think to pull that off with this simplistic approach.

Techniques developed for this project

Lots.

I "discovered" – in the sense that Columbus discovered America – xdotool as an amazing X Windows screen automation tool. I knew of autohotkey for Windows so I inquired what was like it for X Windows. I further learned that xdotool is generally broken when it comes to use with traditional VNC servers such as the native tightvncserver. It simply doesn't work. But Tiger VNC is a scraping server, so it shares your console screen and makes it available via the VNC protocol. That's required because to develop this approach you have to see what you're doing. All those coordinates? They come from experimentation.

I also learned how to embed a YouTube video in my blog post. In fact this is the very first video I made for a blog post. So I did a screen recording for the first time with a screen recorder for Windows.

I landed on the idea of a side-by-side video showing my terminal running the automated script in one window and the effect it is having on the chromium browser in the other window running on the RPi.

I put a wrapper around xdotool to make things cleaner. (But it’s not done yet.)

I changed to two-factor authentication to see if it made a difference. It did not. It still remembers the authentication, thankfully, at least for a few days. I wonder for how long though. Hmm.

Kiosk mode. By launching chromium with kiosk mode it not only gives us more screen real estate to work in, it in principle should also permit you to interact with chromium in a regular fashion and still have it come up in a known, fixed position, which is an absolute requirement of this approach. All the buttons have to have the same coordinates from invocation to invocation.

I also developed ffmpeg-based converters: one which takes wav files and converts them to mp3s (a nice compact format; wav files are space hogs), and another which takes mp3 files, adds a gray screen and converts them to flv ([Adobe] Flash Video, I guess – a compact video file format which YouTube accepts).
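
As for the wav-to-mp3 step, a minimal sketch using the same mp3 settings as the other scripts (the name wav2mp3.sh is just a label I am giving it here):

#!/bin/sh
# wav2mp3.sh (example name) - convert a wav recording to a much smaller mp3
input=$1
output=$(echo ${input}|sed 's/wav$/mp3/')
ffmpeg -i file:${input} -acodec libmp3lame -ar 44100 -b:a 128k file:${output}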

I also learned the ffmpeg command to tell exactly how long a recording is.
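
That duration command is actually ffprobe, which ships with ffmpeg; the invocation used in mp32flv.sh boils down to this (recording.mp3 standing in for whatever file you want to measure):

ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 file:recording.mp3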

I also learned how to turn off blocking in ffmpeg so that it's constantly writing packets and thus not losing audio data at the end when stopped.

I came up with the idea of randomizing the sleep time between clicks to make it seem more human-like.

And mostly for the purpose of demonstrations, though it also greatly helps in debugging, I introduced around two seconds of sleep both before and after a command is issued. That really makes things a lot clearer.

The results of the clipboard, xsel -b, contain null bytes. I had trouble parsing them to pull out just the url, but finally landed on using xargs -0, which is designed to parse null-delimited strings. And it worked! This was a late addition and did not make it into the video, but is in the provided script above towards the bottom.

ffmpeg chokes on too-complicated filenames. Who knew? I had files containing colon (:) and dash (-) characters, which work perfectly fine in linux, but it appeared ffmpeg was interpreting part of the filename as a command-line argument. The way out of that mess was to introduce file: in front of the filename.
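
To illustrate the fix with a made-up filename in the same style as the ones the scripts generate:

# without the prefix, ffmpeg can misread the part before the colon:
#   ffmpeg -i record-10-30-21-14:05.mp3 ...
# prefixing file: forces it to be treated as a plain local filename
# (-f null - just decodes and discards, a quick way to confirm the file is readable):
ffmpeg -i file:record-10-30-21-14:05.mp3 -f null -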

I’ll probably put my ffmpeg tricks into my next post because I want to keep this one lean and focused on this one upload automation topic.

References and related

This post was preceded by my post on how to stream live to YouTube with a click of a button on a Raspberry Pi.

Categories
Linux Raspberry Pi Web Site Technologies

Raspberry Pi Project: YouTube livestreaming with a click of a button

Intro

Here I’ve combined work I’ve done previously into one single useful application: I can initiate the live streaming of our band practice on YouTube with the click of a single button on a remote control, and stop it with another click.

Equipment

Raspberry Pi 3 or 4 with Raspberry Pi OS, e.g., Raspbian Lite is just fine

Logitech webcam or USB microphone

USB extender (my setup needed this, others may not)

Universal USB-based remote control – see references for a known good one

Method 1

In this method I rapidly blink the onboard red power (PWR) LED of the RPi while streaming is active. Outside of those times it is a solid red. This is my preferred mode – it’s a very visible sign that things are working. I am very excited about this approach.

blinkLED.sh

#!/bin/sh
# DrJ 8/30/2021
# https://www.jeffgeerling.com/blogs/jeff-geerling/controlling-pwr-act-leds-raspberry-pi
# put LED into GPIO mode
echo gpio | sudo tee /sys/class/leds/led1/trigger > /dev/null
# flash the bright RED PWR (power) LED quickly to signal whatever
while /bin/true; do
  echo 0|sudo tee /sys/class/leds/led1/brightness > /dev/null
  sleep 0.5
  echo 1|sudo tee /sys/class/leds/led1/brightness > /dev/null
  sleep 0.5
done

shineLED.sh

#!/bin/sh
# DrJ 8/30/2021
# https://www.jeffgeerling.com/blogs/jeff-geerling/controlling-pwr-act-leds-raspberry-pi
# put LED into GPIO mode
echo gpio | sudo tee /sys/class/leds/led1/trigger > /dev/null
# turn on the bright RED PWR (power) LED
echo 1|sudo tee /sys/class/leds/led1/brightness > /dev/null

broadcastswitch.sh

#!/bin/bash
# DrJ 8/2021
# Control the livestream of audio to youtube
# works in conjunction with an attached keyboard
# I use bash interpreter to give me access to RegEx matching
HOME=/home/pi
log=$HOME/audiocontrol.log
program=continuousaudio.sh
##program=tst.sh # testing
PGM=$HOME/$program
# de-press ENTER button produces this:
match="1, 28, 0"
epochsOld=0
cutoff=3 # seconds
DEBUG=1
ledtime=10
#
echo "$0 starting monitoring at "$(date)
# Note the use of script -q -c to avoid line buffering of the evread output
script  -q -c $HOME/evread.py /dev/null|while read line; do
[[ $DEBUG -eq 1 ]] && echo line is $line
# seconds since the epoch
epochs=$(date +%s)
elapsed=$((epochs-$epochsOld))
if [[ $elapsed -gt $cutoff ]]; then
  if [[ "$line" =~ $match ]]; then
    echo "#################"
    echo We caught this input: $line at $(date)
# see if we are already running continuousaudio or not
    pgrep -f $program>/dev/null
# 0 means it's been found
    if [ $? -eq 0 ]; then
# kill it
      echo KILLING $program
      pkill -9 -f $program; pkill -9 ffmpeg
      pkill -9 -f blinkLED
      echo Shine the PWR LED
      $HOME/shineLED.sh
    else
# start it
      echo Blinking PWR LED
      $HOME/blinkLED.sh &
      echo STARTING $PGM
      $PGM > $PGM.log.$(date +%m-%d-%y:%H:%M) 2>&1 &
    fi
    epochsOld=$epochs
  fi
[[ $DEBUG -eq 1 ]] && echo No action taken. Continue to listen
fi
done

The crontab entry and the referenced files are the same as in Method 2.

Method 2

In method 2 I flash the built-in LED on the webcam for a few seconds before starting the audio, and again when the streaming has terminated – as visible signal that the button press registered.

broadcastswitch.sh

#!/bin/bash
# DrJ 8/2021
# Control the livestream of audio to youtube
# works in conjunction with an attached keyboard
# I use bash interpreter to give me access to RegEx matching
HOME=/home/pi
log=$HOME/audiocontrol.log
program=continuousaudio.sh
##program=tst.sh # testing
PGM=$HOME/$program
# de-press ENTER button produces this:
match="1, 28, 0"
epochsOld=0
cutoff=3 # seconds
DEBUG=1
ledtime=10
#
echo "$0 starting monitoring at "$(date)
# Note the use of script -q -c to avoid line buffering of the evread output
script  -q -c $HOME/evread.py /dev/null|while read line; do
[[ $DEBUG -eq 1 ]] && echo line is $line
# seconds since the epoch
epochs=$(date +%s)
elapsed=$((epochs-$epochsOld))
if [[ $elapsed -gt $cutoff ]]; then
  if [[ "$line" =~ $match ]]; then
    echo "#################"
    echo We caught this input: $line at $(date)
# see if we are already running continuousaudio or not
    pgrep -f $program>/dev/null
# 0 means it's been found
    if [ $? -eq 0 ]; then
# kill it
      echo KILLING $program
      pkill -9 -f $program; pkill -9 ffmpeg
      sleep 1
      echo turn on led for a few seconds
      $HOME/videotst.sh &
      sleep $ledtime
      pkill -9 ffmpeg
    else
# start it
      echo turn on led for a few seconds
      $HOME/videotst.sh &
      sleep $ledtime
      pkill -9 ffmpeg
      sleep 1
      echo STARTING $PGM
      $PGM &
    fi
    epochsOld=$epochs
  fi
[[ $DEBUG -eq 1 ]] && echo No action taken. Continue to listen
fi
done

videotst.sh

#!/bin/sh
# just to get the webcam to light up...
ffmpeg -i /dev/video0 -f null - < /dev/null > /dev/null 2>&1 &

crontab entry

@reboot sleep 15; /home/pi/broadcastswitch.sh > broadcastswitch.log 2>&1

continuousaudio.sh and ffmpegwireless6.sh

See this post: https://drjohnstechtalk.com/blog/2019/04/live-stream-to-youtube-from-a-raspberry-pi-webcam/

evread.py

See this post: https://drjohnstechtalk.com/blog/2020/12/how-to-create-a-software-keyboard/

The idea

I press the Enter button once on the remote to begin the livestream to YouTube. I press it a second time to stop.

By extension this could also control other programs as well (like the photo frame). And other keys could be mapped to other functions. Record-only, don’t livestream, anyone?

I want to do these things because it's a little tight in the room where I want to livestream – hard to get around. So this keeps me from having to squeeze past other people to access the RPi to, for instance, power cycle it. In my previous treatment, I had livestreaming start up as soon as the RPi booted up, which means it would only stop when it was similarly powered off, which I found somewhat limiting.

The purpose of videotst.sh in Method 2

videotst.sh serves almost no purpose whatsoever! It can simply be commented out. It’s somewhat specific to my webcam.

You see, I wanted to get some feedback that when I pressed the ENTER button on the remote control the RPi had read that and was trying to start the livestream. I thought of flashing one of the built-in LEDs on the RPi. I still need to look into that.

With the robotics team we had soldered on an external LED onto one of the GPIO pins, but that’s way too much trouble.

So what videotst.sh does for me is to engage the webcam, specifically its video component, throwing away the actual video but with the net result that the webcam’s built-in green LED illuminates for a few seconds! That lets me know, “Yeah, your button press was registered and we’re beginning to start the livestream.” You see, because when you run ffmpegwireless6.sh with this webcam, it’s all about the audio. It only uses the audio of the webcam and thus the green “in use” LED never illuminates, unfortunately, while it is livestreaming the pure audio stream. So, similarly, when you press ENTER a second time to stop the stream I illuminate the webcam’s LED for a few seconds by using videotst.sh once again.

Techniques developed for this project

evread.py does some nasty buffering of its output, meaning, although it does read the key presses on the remote, it holds the results "close to its chest," and then spits them out, all at once, when the buffer is full. Well, that totally defeats the purpose needed here where I want to know if there's been a single click. After some insightful Internet searches (note that I did not use Google as a verb, a practice I carry into my personal communication) I discovered the program script, which, when armed with the arguments -q -c, allows you to unbuffer the output of a program! And, it actually works. Cool.
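
To see the difference in isolation, here are the buffered and unbuffered versions of the same read loop; the only change is the script wrapper (the echo is just a stand-in for whatever you do with each line):

# buffered: lines tend to arrive in big clumps
/home/pi/evread.py | while read line; do echo got: $line; done

# unbuffered, thanks to script -q -c: each keypress line comes through right away
script -q -c /home/pi/evread.py /dev/null | while read line; do echo got: $line; done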

And I made the command decision to “eat” the input. You see the timer of 3 seconds in broadcastswitch.sh? After you’ve done any button press it throws away any further button presses for the next three seconds. I just think that’ll reduce the misfires. In fact, I might take up the practice of double-clicking the ENTER button just to be sure I actually pressed it.

I'm using the double bracket notation more in my bash scripts. It permits use of a RegEx comparison operator, =~. I love regular expressions. More the perl style, PCRE, while this uses extended regular expressions, ERE. But I suppose those are good as well.
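
A stripped-down illustration of the idiom, using the same ENTER-button event string broadcastswitch.sh matches on (the sample line is made up for the example):

#!/bin/bash
match="1, 28, 0"
line="event: 1, 28, 0"   # a made-up sample of what evread.py might print
if [[ "$line" =~ $match ]]; then
  echo ENTER button press detected
fi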

Getting control over the power LED was a nice coup. I'm only disappointed that you cannot control its brightness – in the dark it throws off quite a bit of light.

The green LED does not seem nearly as bright so I chose not to play with it. What I don’t want is to have to strain to see whether the thing is livestreaming or not.

Of course getting the whole remote control thing to work at all is another great advancement.

Techniques still to be developed

I still might investigate using voice-driven commands in place of a remote. Obviously, that’s a big nut to crack. Even if I managed to turn it on, turning it off while ffmpeg has commandeered the audio channel is even harder. I wonder if ffmpeg can split the audio stream so another process can be run alongside it to listen for voice commands? Or if an upstream process in front of ffmpeg could be used for that purpose? Or simply run with two microphones (seems wasteful of material)?? Needs research.

Suppose you want to take this on the road? Internet service can be unreliable after all. It's well known you can power the RPi 3 for many hours with a small portable battery. So how about mapping a second button on the remote to a record-only mode (using the arecord utility, for instance)? Then you can upload the audio at a time of your convenience. That's something I can definitely program if I find I need it.
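
If I do go that route, a minimal sketch of such a record-only command, assuming the same USB audio device (plughw:1,0) the ffmpeg scripts use, might look like:

arecord -D plughw:1,0 -f cd -t wav recording-$(date +%m-%d-%y-%H-%M).wav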

Lingering Problems with this approach

Despite all the care I’ve taken with the continuousaudio.sh script, still, there are times when YouTube does not show that a livestream is going on. I have no idea why at this point. If I knew the cause, I’d have fixed it!

As the livestream aspect of this is actual immaterial to me, I will probably switch to a pure recording mode where I upload in a later step – perhaps all done by the remote control for pure convenience.

Since this blog post has become popular, I may keep it preserved as is and start a new one for this recording approach as some people may genuinely be interested in the livestream aspect.

A very rough estimate of the failure rate is maybe as high as 50% but probably no lower than 25%. So, not great odds if you’re relying on success.

There's another issue which I consider more minor. The beginning of the stream always sounds like a tape played on fast forward for a few seconds. The end also gets cut off a few seconds early, I think.

Conclusion

We have presented a novel approach to livestreaming on a Raspberry Pi 3 using a remote control for added convenience. All the techniques were home-developed at drjohnstechtalk.com. The materials don’t cost much and it really does work.

References and related

Rii infrared remote control – only $12: Amazon.com: Rii MX3 Multifunction 2.4G Fly Mouse Mini Wireless Keyboard & Infrared Remote Control & 3-Gyro + 3-Gsensor for Google Android TV/Box, IPTV, HTPC, Windows, MAC OS, PS3 : Electronics

25′ USB 2 extender for placing a webcam or USB mic at a distance from the RPi, $15: Amazon.com: HDE USB Extension Cable (USB 2.0 Type A Male to Female) High Speed Data and Power Extension Cable with Active Repeater (25 ft) : Electronics

Reading keyboard input.

Using Remote control to interact with a RPi-based photo frame.

ffmpeg settings to send just audio to YouTube, suppressing video

Battery to make the RPi 3 portable, $17: Amazon.com: Omars Power Bank 10000mAh USB C Battery Pack Slimline Portable Charger with Dual USB Output Compatible with iPhone Xs/XR/XS Max/X, iPad, Galaxy S9 / Note 9 : Cell Phones & Accessories. I’m not exactly sure what to do for an RPi 4 however.

Extended Regular Expressions

How to control the power to the RPi’s LEDs: https://github.com/mlagerberg/raspberry-pi-setup/blob/master/5.2-leds.md

Categories
Linux Raspberry Pi

Raspberry Pi + LED Matrix Display project

Intro

A friend bought a bunch of parts and thought we could work on it together. He wanted to reproduce this project: Raspberry Pi Audio Spectrum Display – Hackster.io

So of interest here is that it involved a 64×64 LED matrix display, and a “hat” sold by Adafruit to supposedly make things easier to connect.

Now I am not a hardware guy and never pretended to be. When we realized that it required soldering, I bought those supplies but I didn’t want to be the one to mess things up so he volunteered for that. And yes, it was a mess.

Equipment

See later on in the post for the equipment we used.

The story continues

Who knew that gold on the PCB could be ruined so easily after you've changed your mind about a soldered joint and decided to undo what you've done? The experience pretty much validated my whole approach of staying away from soldering. So we ruined that hat and had to order another one. While we waited for it I developed a certain strategy to deal with the shortcomings of the partially destroyed hat. See the section at the bottom entitled Recipe for a broken hat.

Comments on the project

I don't think the guy did a good job. Details left out where needed, extra stuff added. For instance you don't need to order that Firebeetle – it's never used! Also, I've been told that his python isn't very good either. But in general we followed his skimpy instructions. So we ordered the LED matrix from DFRobot, not Adafruit, and I think it's different. In our case we did indeed follow the project suggestion to wire the 2×8 pin connector as shown in their (DFRobot's) picture, leaving out the white wire. Once we soldered the white wire to the hat's gpio pin 24, we were really in business. What we did not need to do is to solder pin E to either pin 8 or 16. (This is something you apparently need to do for the Adafruit LED.) In our testing it didn't seem to matter whether or not those connections were made, so we left them out on our second hat.

You'd think he might have mentioned just how much soldering is involved. Let's see: you have a 2×20 connector, a 2×8 connector, plus a single gpio connection = 57 pins to solder. Yuck.

What we got to work

Deploying images on these LED displays is cool. You just kind of have to see it. It’s hard to describe why. The picture below does not do it justice. Think stadium scoreboard.

In rpi-rgb-led-matrix/utils directory we followed the steps in the README.md file to compile the LED viewer:

sudo apt-get update
sudo apt-get install libgraphicsmagick++-dev libwebp-dev -y
make led-image-viewer

#!/bin/sh
# invert images because the sound stuff is otherwise upside-down
sudo led-image-viewer --led-pixel-mapper="U-mapper;Rotate:180" --led-gpio-mapping=adafruit-hat --led-cols=64 --led-rows=64 /home/pi/walk-in-the-woods.jpg
Walk in the woods

Do we have flicker? Just a tiny bit. You wouldn't notice it unless you were staring at it for a few seconds, and even then it's just isolated to a small section of the display. Probably shoddy soldering – we are total amateurs.

Tip for your images

Consider that you only have 64 x 64 pixels to work with. So crop your pictures beforehand to focus on the most interesting aspect – people if there are people in the picture (like we've done in the above image), specifically faces if there are faces. Otherwise everything will just look like blurs and blobs. You yourself do not have to resize your pictures down to 64 x 64 – the led software will do the resizing. So just focus on cropping down to a square-sized part of the picture you want to draw attention to.
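
If you just want a quick, automated center crop to a square, rather than hand-picking the region around a face, ffmpeg can do that too; this is only an illustration, not part of the project scripts:

ffmpeg -i input.jpg -vf "crop=min(iw\,ih):min(iw\,ih)" cropped.jpg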

Real-time audio

So my friend got a USB microphone. I developed the following script to make the python example work with real-time sounds – music playing, conversation, whatever. It's really cool – just the slightest lag. But, yes, the LEDs bounce up in response to louder sounds.

So in the directory rpi-rgb-led-matrix/bindings/python/samples I created the script drjexample.sh.

#!/bin/sh
# DrJ 6/21
# make the LED react to live sounds by use of a USB microphone
# I am too lazy to look up how to make the python program read from STDIN so I will just
# make the equivalent thing by creating test.wav as a named pipe. It's an old linux trick.

rm test.wav; mkfifo test.wav

# background the python program. It will patiently wait for input
sudo python spectrum_matrix.py &

# Now run ffmpeg
# see my own post, https://drjohnstechtalk.com/blog/2019/04/live-stream-to-youtube-from-a-raspberry-pi-webcam/
ffmpeg \
-thread_queue_size 4096 \
-f alsa -i plughw:1,0 \
-ac 2 \
-y \
test.wav

So note that by having inverted the image (180 degree rotation) we have the sound bars and images both in the same direction so we can switch between the two modes.

I believe to get the python bindings to work we needed to install some additional python libraries, but that part is kind of a blur now. I think what should work is to follow the directions in the README.md file in the directory rpi-rgb-led-matrix/bindings/python, namely

sudo apt-get update && sudo apt-get install python2.7-dev python-pillow -y
make build-python
sudo make install-python

Hopefully that takes care of it. For sure you need numpy.
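
If numpy turns out to be missing, it is packaged for the python2.7 these samples use; on Raspberry Pi OS at the time this should cover it (package name not verified against newer releases):

sudo apt-get install python-numpy -y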

Future project ideas

How about a board that normally plays a slideshow, but when the ambient sound reaches a certain level – presumably because music is playing – it switches to real-time sound bar mode?? We think it’s doable.

Recipe for a broken hat

For the LED matrix display we got the DFRobot one since that’s what was linked to in the project guide. But the thing is, the reviewer’s write-up is incomplete so what you need to do involves a little guesswork.

At the end of the day all we could salvage while we wait for a new Adafruit hat to come in is the top fourth of the display! The band below it is either blank, or if we push on the cables a certain way, an unreliable duplicate of the top fourth.

The next band suffers from a different problem. Its blue is non-functional. So it’s no good…

And the last band rarely comes on at all.

OK? So we’re down to a 16 x 64 pixel useable area.

But despite all those problems, it’s still kind of cool, I have to admit! I know at work we have these digital sign boards and this reminds me of that. So first thing I did was to create a custom banner – scrolling text.

I call this display program drjexample2.sh. I put it in the directory rpi-rgb-led-matrix/examples-api-use


#!/bin/sh
# DrJ – 6/21
# nice example
sudo ./demo -D1 --led-rows=64 --led-cols=64 --led-multiplexing=1 --led-brightness=50 -m25 /home/pi/pil_text.ppm

But that requires the existence of a ppm file containing the text I wanted to scroll, since I was working by example. So to create that custom PPM file I created this python script.


#!/usr/bin/python3
from PIL import Image, ImageDraw, ImageFont

# our fonts
###fnt = ImageFont.truetype("/usr/share/fonts/truetype/dejavu/DejaVuSans-Bold.ttf", 14)
fnt = ImageFont.truetype("/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf", 16)
fnt36 = ImageFont.truetype("/usr/share/fonts/truetype/dejavu/DejaVuSans-Bold.ttf", 36)
fnt2 = ImageFont.truetype("/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf", 18)
fntBold = ImageFont.truetype("/usr/share/fonts/truetype/dejavu/DejaVuSans-Bold.ttf", 40)

###img = Image.new('RGB', (260, 30), color = (73, 109, 137))
img = Image.new('RGB', (260, 30), color = 'black')

d = ImageDraw.Draw(img)
###d.text((0,0), "Baby, welcome to our world!!!", font=fnt, fill=(255,255,0))
d.text((0,0), "Baby, welcome to our world!!!", font=fnt, fill='yellow')

#img.save('pil_text.png')
img.save('pil_text.ppm', format='PPM')

Once I discovered the problem with the bands – by way of running all the demos and experimenting with the arguments a bit – I noticed this directory: rpi-rgb-led-matrix/utils. I perked right up because it held out the promise of displaying jpeg images. Anyone who has seen any of my posts knows that I am constantly putting out raspberry pi based photo displays in one form or another. For instance see https://drjohnstechtalk.com/blog/2021/01/raspberry-pi-photo-frame-using-the-pictures-on-your-google-drive-ii/

or

https://drjohnstechtalk.com/blog/2020/12/raspberry-pi-photo-frame-using-your-pictures-on-your-google-drive/

Matrix Display

But see how cool it is? No? It’s a sleeping, recumbent baby. It’s like the further away from it you get, the clearer it becomes. Trust me in person it does look good. And it feels like creating one of those bright LED displays they use in ballparks.

But this same picture also shows the banding problem.

To get the picture displayed I first cropped it to make it wide and short. I wisely chose a picture which was amenable to that approach. I created this display script:


#!/bin/sh
sudo led-image-viewer --led-no-hardware-pulse --led-gpio-mapping=adafruit-hat --led-cols=64 --led-rows=64 --led-multiplexing=1 /home/pi/baby-sleeping.jpg

But before that can all work, you need to compile the program. Just read through the README.md. It gives these instructions which I followed:

sudo apt-get update
sudo apt-get install libgraphicsmagick++-dev libwebp-dev -y
make led-image-viewer

It went kind of slowly on my RPi 3, but it worked without incident. When I initially ran the led-image-viewer nothing displayed. So the script above shows the results of my experimentation which seems appropriate for our particular matrix display.

How did we get here?

Just to mention it, we followed the general instructions in that project. So I guess no need to repeat the recipe here.

Slideshow

You know I'm not going to let an opportunity to create a slideshow go to waste. So I created a second appropriately horizontal image which I might effectively show in my narrow available band of 16 x 64 pixels. Just to share the little script, it is here:


#!/bin/sh
cd /home/pi
sudo led-image-viewer --led-no-hardware-pulse --led-gpio-mapping=adafruit-hat --led-cols=64 --led-rows=64 --led-multiplexing=1 -w6 -f baby-and-mom.jpg baby-sleeping.jpg

This infinitely loops over two pictures, displaying each for six seconds.

Start on boot

I have the slideshow start on boot using a simple technique I’ve developed. You edit the crontab file (crontab -e) and enter at the bottom:


@reboot sleep 35; /home/pi/rpi-rgb-led-matrix/utils/jdrexample2.sh > drjexample.log 2>&1
Lessons learned

My friend ordered all the stuff listed on the DFRobot project page, including their 64×64 LED matrix. Probably a mistake. They basically don’t document it and refer you to Adafruit, where they deal with a 64×64 LED matrix – their own – which may or may not have the same characteristics, leaving you somewhat in limbo. Next time I would order from Adafruit.

As mentioned above that gold foil on printed circuit boards really does come off pretty easily, and then you’re hosed. Because of the lack of technical specs we were never really sure if we needed to solder the E contact to 8 or to 16 and destroyed all those terminals in the process of backing out.

I actually created custom ppm files of solid colors, red, blue, green, white, so that I could prove my suspicions about the third band. Red and green display fine, blue not at all. White displays as yellow.

Viewed close up, the LED matrix doesn’t look like much, and of course I was close up when I was working with it. But when I stepped back I realized how beautiful a brightly illuminated picture of a baby can be! The pixels merge and your mind fills in the spaces between I guess.

The original idea was to tackle sound but I got stuck on the ability to use it as a photo frame (you know me). But he wants to return to sound which I am dreading….

Testing audio

In our first tests, the audio example wasn't working. But now it seems to be. The project guy's python code is named spectrum_matrix.py, if I recall correctly. It goes into rpi-rgb-led-matrix/bindings/python/samples. And as he says, you run it from that directory as

$ sudo python spectrum_matrix.py

But his link to test.wav is dead – yet another deficiency in his write-up. At least in my testing, not every WAV file works. This one, moo sounds, does however: http://soundbible.com/grab.php?id=1778&type=wav So it plays for a few seconds – I can hear it through earphones – and the LEDs kind of go up and down. We recorded a wav file and found that it does not work. The error reads like this:

Home directory not accessible: Permission denied
W: [pulseaudio] core-util.c: Failed to open configuration file '/root/.config/pulse//daemon.conf': Permission denied
W: [pulseaudio] daemon-conf.c: Failed to open configuration file: Permission denied
Traceback (most recent call last):
File "spectrum_matrix.py.orig", line 56, in
matrix = calculate_levels(data,chunk,sample_rate)
File "spectrum_matrix.py.orig", line 49, in calculate_levels
power = np.reshape(power,(64,chunk/64))
File "/usr/lib/python2.7/dist-packages/numpy/core/fromnumeric.py", line 292, in reshape
return _wrapfunc(a, 'reshape', newshape, order=order)
File "/usr/lib/python2.7/dist-packages/numpy/core/fromnumeric.py", line 56, in _wrapfunc
return getattr(obj, method)(*args, **kwds)
ValueError: cannot reshape array of size 2048 into shape (64,64)

Note that I had renamed the original spectrum_matrix.py to spectrum_matrix.py.orig because we started messing with it. Actually, I pretty much get the same error on the file that works; it’s just that I get it at the end of the LED show, not immediately.

Superficially, the two files differ somewhat in their recording format:

$ file ~/voice.wav moo.wav

/home/pi/voice.wav: RIFF (little-endian) data, WAVE audio, Microsoft PCM, 16 bit, mono 44100 Hz
moo.wav: RIFF (little-endian) data, WAVE audio, Microsoft PCM, 16 bit, stereo 32000 Hz

I played the voice.wav on a Windows PC – it played just fine. Just a little soft.

So what's the essential difference between the two files? Well, something that jumps out is that the one is mono, the other stereo. Can we somehow test for that? Yes! I made the simplest possible conversion of a mono to a stereo file with the following ffmpeg command:

$ ffmpeg -i ~/voice.wav -ac 2 converted.wav

$ file converted.wav


converted.wav: RIFF (little-endian) data, WAVE audio, Microsoft PCM, 16 bit, stereo 44100 Hz

I copy converted.wav to test.wav and re-run spectrum_matrix.py. This time it works!

Not sure how my friend produced his wav file. But I want to make one on my own. He had plugged a USB microphone into the RPi. I have done research somewhat related to this – publishing a livestream to YouTube, audio only, video grayed out. That's in this post: https://drjohnstechtalk.com/blog/2019/04/live-stream-to-youtube-from-a-raspberry-pi-webcam/ So I am not afraid to use ffmpeg any longer. So I created this tiny script, record.sh, with my desired arguments:


#!/bin/sh
# see my own post, https://drjohnstechtalk.com/blog/2019/04/live-stream-to-youtube-from-a-raspberry-pi-webcam/
ffmpeg \
-thread_queue_size 4096 \
-f alsa -i plughw:1,0 \
-ac 2 \
/tmp/ffmpeg.wav

And I ran it while speaking loudly into the mic. It ran OK. The output file comes out as

test.wav: RIFF (little-endian) data, WAVE audio, Microsoft PCM, 16 bit, stereo 48000 Hz

And…it plays with the LEDs dancing. Great.

Audio: LED responds to live input

My friend wants the LED to respond to live input such as his stereo at home. Being a terrible python programmer but at least a middling linux techie, I see a way to accomplish it without his having to touch the sample program by employing an old Unix trick: named pipes. So I create this script, drjexample.sh, which combines all the knowledge we gained above into one simple script:

#!/bin/sh
# DrJ 6/21
# make the LED react to live sounds by use of a USB microphone
# I am too lazy to look up how to make the python program read from STDIN so I will just
# make the equivalent thing by creating test.wav as a named pipe. It's an old linux trick.

rm test.wav; mkfifo test.wav

# background the python program. It will patiently wait for input
sudo python spectrum_matrix.py &

# Now run ffmpeg
# see my own post, https://drjohnstechtalk.com/blog/2019/04/live-stream-to-youtube-from-a-raspberry-pi-webcam/
ffmpeg \
-thread_queue_size 4096 \
-f alsa -i plughw:1,0 \
-ac 2 \
-y \
test.wav

So a named pipe is just that. Instead of the pipe character we know and love, you coordinate process output from the first process with process input of the second process by way of this special file. The operating system does all the hard work. But it works just as though you had used the | character.

Best of all, this script actually works, to wit, the LED is now responding to live input. I see it jump when I say test into the microphone. Unknown to me is whether it will play for extended periods of time – it would be easy, for instance, for one process to output faster than the other can consume, so that a backlog builds up. The responsiveness is good; I would guess no more than about 200 ms of lag.

Equipment

We used the equipment from this post, except the Firebeetle. You know, that’s just another reason I consider that post to be a sloppy effort. Who lists a piece of equipment that they don’t use?? And again, next time I would rather search for an LED display from Adafruit. We use an RPi 3 and installed the image on a micro SD card with the new-ish Raspberry Pi imager, which works just great: Introducing Raspberry Pi Imager, our new imaging utility – Raspberry Pi

Oh, plus the soldering iron and solder. And a multi-colored ribbon cable with female couplers at one end and a 2×8 connector on the other. Not sure if that came with the LED or not since I didn’t order it.

Our power supply is about 5 amps and plugs into the hat. We do not need power for the RPi.

References and related

This post makes it seem like a walk in the park. Our experience is not so much. Raspberry Pi Audio Spectrum Display – Hackster.io

People seem to like this Raspberry Pi photo frame post I did which draws photos from your Google drive. https://drjohnstechtalk.com/blog/2020/12/raspberry-pi-photo-frame-using-your-pictures-on-your-google-drive/

Introducing Raspberry Pi Imager, our new imaging utility – Raspberry Pi – for putting the operating system formerly known as Raspbian onto a micro SD card.

test.wav (Use this moo wav file and rename to test.wav): http://soundbible.com/grab.php?id=1778&type=wav

Useful ffmpeg commands: FFmpeg Commands: 31 Must-Haves for Beginners in 2022 – VideoProc

Or 20+ FFmpeg Commands For Beginners – OSTechNix

How I figured out how to livestream audio to YouTube without video from a RPi using ffmpeg is documented here: https://drjohnstechtalk.com/blog/2019/04/live-stream-to-youtube-from-a-raspberry-pi-webcam/

Github project for this effort (not completely working as of yet):

Categories
Admin Linux Network Technologies

Speedtest automation: what they don’t tell you

Intro

I began to implement the automation of speedtest checks. I was running the jobs every 10 minutes, but we noticed something flaky in the results. On the hour and on the half hour the tests seemed to be garbage. What’s going on?

Our findings

Well, if you use a scheduler to run a speedtest every 10 minutes, it will start exactly on the hour and exactly on the half-hour, amongst other start times. We were running it eight times to test eight different paths. Only the last two were returning reliable results. The early ones were throwing errors. So I introduced an offset to run the jobs at 2,12,22,32,42,52 minutes. And with this offset, the results became much more reliable.
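For the record, the offset is nothing more than how you write the minutes field in the crontab entry. A sketch, where the wrapper script name and path are placeholders for whatever you actually run:

# before: fires on :00, :10, :20, :30, :40, :50 - along with everyone else
#*/10 * * * * /home/pi/speedtest-wrapper.sh
# after: same frequency, but avoids the congested :00 and :30 start times
2,12,22,32,42,52 * * * * /home/pi/speedtest-wrapper.sh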

The inevitable conclusion is that too many other people are running tests exactly on the hour and half hour. A single run takes roughly 30 seconds to complete. And it must be that the servers which speedtest relies on are simply overwhelmed and refuse to do more tests.

References and related

There is a linux script written in python that implements the full speedtest. https://pypi.org/project/speedtest-cli/ It really works, which is cool.

But as well there is a RPM package you can get from the speedtest site itself.

nperf.com looks like a better test than speedtest. I’m going to see if it can be scripted.

Categories
Linux Raspberry Pi

How to create a software keyboard

Intro

This example will probably get used in my advanced Raspberry Pi photo frame treatment. I use image display software, fbi, which is designed to take key presses in order to take certain actions, such as advance to the next picture, display information about the current picture, etc. qiv works similarly. But I am running an automated picture display, so no one is around to type into the keyboard. Hence the need for software emulation of the physical keyboard. I’ve always believed it must be possible, but never knew how until this week.

I am not a python programmer, but sometimes you gotta use whatever language makes the job easier, and I only know how to do this in python, python3 specifically.

This will probably only make sense for Raspberry Pi owners.

Setup

I believe this will work best if your Raspberry Pi has only a keyboard and not a mouse hooked up because in that case your keyboard ought to be mapped to /dev/input/event0. But it’s easy enough to change that. To see whether your keyboard is /dev/input/event0 or /dev/input/event1 or some other, just cat one of those files and start typing. You should see some junk when you’ve selected the right /dev/input file.
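In other words, something like this, trying event0 first:

$ sudo cat /dev/input/event0
# type a few keys - if this is your keyboard you will see binary junk on the screen
# Ctrl-C to quit, then repeat with /dev/input/event1, event2, ... until you find it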

The program

I call it keyinject.py.

#!/usr/bin/python3
# inject a single key, acting like a software keyboard
# DrJ 12/20
import sys
from evdev import UInput, InputDevice, ecodes as e
from time import sleep
# set DEBUG = True to print out more information
DEBUG = False
sleepTime = 0.001 # units are secs
# dict of name mappings. key is how we like to enter it, value is what is after KEY_ in evdev ecodes
d = {
 '1'    : '1',
 '2'    : '2',
 '3'    : '3',
 '4'    : '4',
 '5'    : '5',
 '6'    : '6',
 '7'    : '7',
 '8'    : '8',
 '9'    : '9',
 '0'    : '0',
 'a' : 'A',
 'b' : 'B',
 'c' : 'C',
 'd' : 'D',
 'e' : 'E',
 'f' : 'F',
 'g' : 'G',
 'h' : 'H',
 'i' : 'I',
 'j' : 'J',
 'k' : 'K',
 'l' : 'L',
 'm' : 'M',
 'n' : 'N',
 'o' : 'O',
 'p' : 'P',
 'q' : 'Q',
 'r' : 'R',
 's' : 'S',
 't' : 'T',
 'u' : 'U',
 'v' : 'V',
 'w' : 'W',
 'x' : 'X',
 'y' : 'Y',
 'z' : 'Z',
 '.'    : 'DOT',
 ','    : 'COMMA',
 '/'    : 'SLASH',
 'E': 'ENTER',
 'S': 'RIGHTSHIFT',
 'C': 'LEFTCTRL'
}

# https://python-evdev.readthedocs.io/en/latest/tutorial.html
# sys.argv[0] is the program name; the characters to inject arrive as the first real argument
inputchars = sys.argv
ltrs = inputchars[1]

keybd = InputDevice("/dev/input/event0")

for ltr in ltrs:
  ui = UInput.from_device(keybd, name="keyboard-device")
  if DEBUG: print(ltr)
  mappedkey = d[ltr]
  key = "KEY_" + mappedkey
  if DEBUG: print(key)
  if DEBUG: print(e.ecodes[key])

  ui.write(e.EV_KEY, e.ecodes[key], 1) # KEY_ down
  ui.write(e.EV_KEY, e.ecodes[key], 0) # KEY_ up
  ui.syn()
  sleep(sleepTime)
  ui.close()

And it gets called like this:

$ sudo ./keyinject.py my.injected.letters

or

$ sudo ./keyinject.py ./m2.plE

to run the m2.pl script in the current directory and have it behave as though it were launched from a console terminal. The “E” is the ENTER key.

Interesting observations

There really is no such thing as a separate “k” key and “K” key (lower-case versus upper-case). There is only a single key labelled “K” on a keyboard. It’s a physical layer versus logical layer type of thing. The k and K are characters.

In the above program I did some of the keys – the ones I will be needing, plus a few bonus ones. I do need the ENTER key, and since there is no single ordinary character for it, I reserved the capital letter E for it in the mapping table (and similarly S for right shift, C for left ctrl). So to send an ENTER you would do

$ sudo ./keyinject.py E

But I was able to have these characters represent themselves: . , / so that’s not bad.

Prerequisites

You will need python3 version > 3.5, I think. And the evdev package. I believe you get that with

$ sudo pip3 install evdev

And if you don’t have pip3 you can install that with

$ sudo apt-get install python3-pip

Reading keyboard input

Of course the opposite of simulating key presses is reading what’s been typed from an actual keyboard. That’s possible too with this handy evdev package. The following program is not as polished as the writing program, but it gives you the feel for what to do. I call it evread.py.

#!/usr/bin/python3
# https://python-evdev.readthedocs.io/en/latest/tutorial.html
import asyncio
from evdev import InputDevice, categorize, ecodes
import evdev,re

#/dev/input/event4 MemsArt MA144 RF Controller System Control usb-0000:01:00.0-1.4/input2
#/dev/input/event3 MemsArt MA144 RF Controller Consumer Control usb-0000:01:00.0-1.4/input2
#/dev/input/event2 MemsArt MA144 RF Controller usb-0000:01:00.0-1.4/input1
#/dev/input/event1 MemsArt MA144 RF Controller usb-0000:01:00.0-1.4/input0
#/dev/input/event0 iTalk-02 usb-0000:01:00.0-1.3/input2

devices = [evdev.InputDevice(path) for path in evdev.list_devices()]
#for device in devices:

#    print(device.path, device.name, device.phys)
for device in devices:
    path = device.path
# we are only interested in event0 or event1
    if re.search(r'event[01]',path):
        print(path)
        if re.search('MemsArt',device.name): Riipath = path
# You can hardcode '/dev/input/event0' if you like and do not have an Rii remote ctrlr
dev = InputDevice(Riipath)
# following line is optional - it takes away the keybd from fbi!
# there is also a dev.ungrab()
dev.grab()

async def helper(dev):
    async for ev in dev.async_read_loop():
         print(repr(ev))

loop = asyncio.get_event_loop()
loop.run_until_complete(helper(dev))

Note the presence of the dev.grab(). That permits your program to be the exclusive reader of keyboard input, shutting out fbi. It can be commented out if you want to share.

How to interpret the output of evread.py

Say I press and hold the Enter button. I get this output from evread.py.

InputEvent(1629667404, 552424, 4, 4, 458792)
InputEvent(1629667404, 552424, 1, 28, 1)
InputEvent(1629667404, 552424, 0, 0, 0)
InputEvent(1629667404, 808252, 1, 28, 2)
InputEvent(1629667404, 808252, 0, 0, 1)
InputEvent(1629667404, 858249, 1, 28, 2)
InputEvent(1629667404, 858249, 0, 0, 1)
InputEvent(1629667404, 908250, 1, 28, 2)
InputEvent(1629667404, 908250, 0, 0, 1)
InputEvent(1629667404, 958250, 1, 28, 2)
InputEvent(1629667404, 958250, 0, 0, 1)
InputEvent(1629667405, 8250, 1, 28, 2)
InputEvent(1629667405, 8250, 0, 0, 1)
InputEvent(1629667405, 58252, 1, 28, 2)
InputEvent(1629667405, 58252, 0, 0, 1)
InputEvent(1629667405, 108250, 1, 28, 2)
InputEvent(1629667405, 108250, 0, 0, 1)
InputEvent(1629667405, 158250, 1, 28, 2)
InputEvent(1629667405, 158250, 0, 0, 1)
InputEvent(1629667405, 158250, 4, 4, 458792)
InputEvent(1629667405, 158250, 1, 28, 0)
InputEvent(1629667405, 158250, 0, 0, 0)

The 4, 4, longnumber – you always seem to get something like that. I believe it is an EV_MSC event (type 4) carrying the hardware scan code, though I have not dug into it.

The 1, 28, 1 – means I’ve pressed the ENTER key. 28 is for ENTER. other keys will have other values assigned.

0, 0, 1 – not really sure. These look like the EV_SYN synchronization events (type 0) that mark the end of each batch of events.

1, 28, 2 – I think I know this. It means I’m continuing to hold down the ENTER button.

1, 28, 0 – I have released the ENTER key.

These get generated pretty frequently, perhaps a few a second.

Conclusion

We have created an example program, mostly for the Raspberry Pi though easily adapted to other linux environments, that injects key presses via a python3 program as though those keys had been typed on the physical keyboard. Graphics programs which rely on key presses, such as fbi or qiv (and probably others – vlc?), can thereby be controlled through software.

We have also provided a basic python program for reading key presses from the actual keyboard. I plan to use these things for my advanced RPi photo frame project.

References and related

The python evdev tutorial is really helpful: https://python-evdev.readthedocs.io/en/latest/tutorial.html

Raspberry Pi advanced photo frame article does not exist yet. The basic RPi photo frame article is here.

Another piece to the puzzle is turning GPS coordinates into a town name. That brief write-up is here.

A nice example of actually using evread.py to detect button presses from a universal remote attached to an RPi: https://drjohnstechtalk.com/blog/2021/08/raspberry-pi-project-youtube-livestreaming-with-a-click-of-a-button/

Categories
Linux Perl Raspberry Pi Web Site Technologies

Convert GPS Coordinates into town name or address or GMT offset

Intro

This is a small piece of a larger project – displaying your photos on Google Drive using a Raspberry Pi. That project will require completion of many small investigations, this being just one of them.

I thought, wouldn’t it be cool to ask your photo frame when and where a certain picture was taken? I thought that information was typically embedded into the picture by modern smartphones. Turns out this is disappointingly not the case – at least not on our smartphones, except in a small minority of pictures. But since I got somewhere with my investigation, I wanted to share the results, regardless.

Also, I naively assumed that there surely is a web service that permits one to easily convert GPS coordinates into the name – in text – of the closest town. After all, you can enter GPS coordinates into Google Maps and get back a map showing the exact location. Why shouldn’t it be just as easy to extract the nearest town name as text? Again, this assumption turns out to be faulty. But, I found a way to do it that is not toooo difficult.

Example for Cape May, New Jersey

$ curl -s 'http://api.geonames.org/address?lat=38.9302957777778&lng=-74.9183310833333&username=drjohns'

<geonames>
<address>
<street>Beach Dr</street>
<houseNumber>690</houseNumber>
<locality>Cape May</locality>
<postalcode>08204</postalcode>
<lng>-74.91835</lng>
<lat>38.93054</lat>
<adminCode1>NJ</adminCode1>
<adminName1>New Jersey</adminName1>
<adminCode2>009</adminCode2>
<adminName2>Cape May</adminName2>
<adminCode3/>
<adminCode4/>
<countryCode>US</countryCode>
<distance>0.03</distance>
</address>
</geonames>

The above example used the address service. The results in this case are unusually complete. Sometime the lookups simply fail for no obvious reason, or provide incomplete information, such as a missing locality. In those cases the town name is usually still reported in the adminName2 element. I haven’t checked the address accuracy much, but it seems pretty accurate, like, representing an actual address within 100 yards, usually better, of where the picture was taken.

They have another service, findNearbyPlaceName, which sometimes works even when address fails. However its results are also unpredictable. I was in Merrillville, Indiana and it gave the toponym as Chapel Manor, which is the name of the subdivision! In Virginia it gave the name The Hamlet – still not sure where that came from, but I trust it is some hyper-local name for a section of the town (James City). Just as often it does spit back the town or city name, for instance, Atlantic City. So, it’s better than nothing.

The example for Nantucket

From a browser – here I use curl in the linux command line – you enter:

$ curl -s 'http://api.geonames.org/findNearbyPlaceName?lat=41.282778&lng=-70.099444&username=drjohns'

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<geonames>
<geoname>
<toponymName>Nantucket</toponymName>
<name>Nantucket</name>
<lat>41.28346</lat>
<lng>-70.09946</lng>
<geonameId>4944903</geonameId>
<countryCode>US</countryCode>
<countryName>United States</countryName>
<fcl>P</fcl>
<fcode>PPLA2</fcode>
<distance>0.07534</distance>
</geoname>
</geonames>

So what did we do? For this example I looked up Nantucket in Wikipedia to find its GPS coordinates. Then I used the geonames api to convert those coordinates into the town name, Nantucket.

Note that drjohns is an actual registered username with geonames. I am counting on the unpopularity of my posts to prevent an onslaught of usage as the usage credits are limited for free accounts. If I understood the terms, a few lookups per hour would not be an issue.

I’m finding the PlaceName lookup pretty useless, and the address lookup fails about 30% of the time, so as a backstop I’m thinking to use this sort of lookup:

$ curl 'http://api.geonames.org/extendedFindNearby?lat=41.00050&lng=-74.65329&username=drjohn'

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<geonames>
<address>
<street>Stanhope Rd</street>
<mtfcc>S1400</mtfcc>
<streetNumber>439</streetNumber>
<lat>41.00072</lat>
<lng>-74.6554</lng>
<distance>0.18</distance>
<postalcode>07871</postalcode>
<placename>Lake Mohawk</placename>
<adminCode2>037</adminCode2>
<adminName2>Sussex</adminName2>
<adminCode1>NJ</adminCode1>
<adminName1>New Jersey</adminName1>
<countryCode>US</countryCode>
</address>
</geonames>

Note that this gets a reasonably close address, and more importantly, a zipcode. The placename is too local and I will probably discard it. But another lookup can turn a zipcode into a town or city name, which is what I am after.

$ curl 'http://api.geonames.org/postalCodeSearch?country=US&postalcode=07871&username=drjohns'

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<geonames>
<totalResultsCount>1</totalResultsCount>
<code>
<postalcode>07871</postalcode>
<name>Sparta</name>
<countryCode>US</countryCode>
<lat>41.0277</lat>
<lng>-74.6407</lng>
<adminCode1 ISO3166-2="NJ">NJ</adminCode1>
<adminName1>New Jersey</adminName1>
<adminCode2>037</adminCode2>
<adminName2>Sussex</adminName2>
<adminCode3/>
<adminName3/>
</code>
</geonames>

See? It was a lot of work, but we finally got the township name, Sparta, returned to us.

Ocean GPS?

I was whale-watching and took some pictures with GPS info. Trying to apply the methods above worked, but just barely. Basically all I could get out of the extended find nearby search was a name field with value North Atlantic Ocean! Well, that makes it sound like I was on some Titanic-style ocean crossing. In fact I was in the Gulf of Maine a few miles from Provincetown. So they really could have done a better job there… Of course it’s understandable to not have a postalcode and street address and such. But still, bodies of water have names and geographical boundaries as well. Casinos seem to be the main sponsors of geonames.org, and I guess they don’t care. Yesterday my script came up with a location Earth! But now I see geonames proposed several locations and I only look at the first one. I am creating a refinement which will perform better in such cases. Stay tuned… And…yes…the refinement is done. I had to do a wee bit of xml parsing, which I now do.

To get your own account at geonames.org

The process of getting your own account isn’t too difficult, just a bit squirrelly. For the record, here is what you do.

Go to http://www.geonames.org/login to create your account. Oh, and be sure to use a unique, browser-generated password for this one – the security level is off-the-charts awful, so just assume that any and all hackers who want that password are going to get it. It sends you a confirmation email. So far so good. But when you then try to use the username in an api call it will tell you that that username isn’t known. This is the tricky part.

So go to https://www.geonames.org/manageaccount . It will say:

Free Web Services
the account is not yet enabled to use the free web services. Click here to enable. 

And that link, in turn is https://www.geonames.org/enablefreewebservice . And having enabled your account for the api web service, the URL, where you’ve put your username in place of drjohns, ought to work!

For a complete overview of all the different things you can find out from the GPS coordinates from geonames, look at this link: https://www.geonames.org/export/ws-overview.html

Working with pictures

Please look at this post for the python code to extract the metadata from an image, including, if available, GPS info. I called the python program getinfo.py.

Here’s an actual example of running it to learn the GPS info:

$ ../getinfo.py 20170520_102248.jpg|grep -ai gps

GPSInfo = {0: b'\x02\x02\x00\x00', 1: 'N', 2: (42.0, 2.0, 18.6838), 3: 'W', 4: (70.0, 4.0, 27.5448), 5: b'\x00', 6: 0.0, 7: (14.0, 22.0, 25.0), 29: '2017:05:20'}

I don’t know if it’s good or bad, but the GPS coordinates seem to be encoded in the degrees, minutes, seconds format.
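To make the conversion concrete, here it is applied by hand to the coordinates above, using bc for the arithmetic: degrees plus minutes/60 plus seconds/3600, with west longitudes (and southern latitudes) taking a minus sign:

# latitude 42 deg 2 min 18.6838 sec N works out to roughly 42.0385
$ echo "42 + 2/60 + 18.6838/3600" | bc -l
# longitude 70 deg 4 min 27.5448 sec W works out to roughly -70.0743
$ echo "-(70 + 4/60 + 27.5448/3600)" | bc -l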

A nice little program to put things together

I call it analyzeGPS.pl and am using it on a Raspberry Pi, but it could easily be adapted to any linux system.


#!/usr/bin/perl
# use in combination with this post https://drjohnstechtalk.com/blog/2020/12/convert-gps-coordinates-into-town-name/
use POSIX;
$DEBUG = 1;
$HOME = "/home/pi";
#$file = "Pictures/20180422_134220.jpg";
while(<>){
$GPS = $date = 0;
$gpsinfo = "";
$file = $_;
open(ANAL,"$HOME/getinfo.py \"$file\"|") || die "Cannot open file: $file!!\n";
#open(ANAL,"cat \"$file\"|") || die "Cannot open file: $file!!\n";
print STDERR "filename: $file\n" if $DEBUG;
while(<ANAL>){
  $postalcode = $town = $name = "";
  if (/GPS/i) {
    print STDERR "GPS: $_" if $DEBUG;
# GPSInfo = {1: 'N', 2: (39.0, 21.0, 22.5226), 3: 'W', 4: (74.0, 25.0, 40.0267), 5: 1.7, 6: 0.0, 7: (23.0, 4.0, 14.0), 29: '2016:07:22'}
   ($pole,$deg,$min,$sec,$hemi,$lngdeg,$lngmin,$lngsec) = /1: '([NS])', 2: \(([\d\.]+), ([\d\.]+), ([\d\.]+)...3: '([EW])', 4: \(([\d\.]+), ([\d\.]+), ([\d\.]+)\)/i;
   print STDERR "$pole,$deg,$min,$sec,$hemi,$lngdeg,$lngmin,$lngsec\n" if $DEBUG;
   $lat = $deg + $min/60.0 + $sec/3600.0;
   $lat = -$lat if $pole eq "S";
   $lng = $lngdeg + $lngmin/60.0 + $lngsec/3600.0;
   $lng = -$lng if $hemi eq "W" || $hemi eq "w";
   print STDERR "lat,lng: $lat, $lng\n" if $DEBUG;
   #$placename = `curl -s "$url"|grep -i toponym`;
   next if $lat == 0 && $lng == 0;
# the address API is the most precise
   $url = "http://api.geonames.org/address?lat=$lat\&lng=$lng\&username=drjohns";
   print STDERR "Url: $url\n" if $DEBUG;
   $results = `curl -s "$url"|egrep -i 'street|house|locality|postal|adminName'`;
   print STDERR "results: $results\n" if $DEBUG;
   ($street) = $results =~ /street>(.+)</;
   ($houseNumber) = $results =~ /houseNumber>(.+)</;
   ($postalcode) = $results =~ /postalcode>(.+)</;
   ($state) = $results =~ /adminName1>(.+)</;
   ($town) = $results =~ /locality>(.+)</;
   print STDERR "street, houseNumber, postalcode, state, town: $street, $houseNumber, $postalcode, $state, $town\n" if $DEBUG;
# I think locality is pretty good name. If it exists, don't go  further
   $postalcode = "" if $town;
   if (!$postalcode && !$town){
# we are here if we didn't get interesting results from address reverse loookup, which often happens.
     $url = "http://api.geonames.org/extendedFindNearby?lat=$lat\&lng=$lng\&username=drjohns";
     print STDERR "Address didn't work out. Trying extendedFindNearby instead. Url: $url\n" if $DEBUG;
     $results = `curl -s "$url"`;
# parse results - there may be several objects returned
     $topelemnt = $results =~ /<geoname>/i ? "geoname" : "geonames";
     @elmnts = ("street","streetnumber","lat","lng","locality","postalcode","countrycode","countryname","name","adminName2","adminName1");
     $cnt = xml1levelparse($results,$topelemnt,@elmnts);

     @lati = @{ $xmlhash{lat}};
     @long = @{ $xmlhash{lng}};
# find the closest entry
     $distmax = 1E7;
     for($i=0;$i<$cnt;$i++){
       $dist = ($lat - $lati[$i])**2 + ($lng - $long[$i])**2;
       print STDERR "dist,lati,long: $dist, $lati[$i], $long[$i]\n" if $DEBUG;
       if ($dist < $distmax) {
         print STDERR "dist < distmax condition. i is: $i\n";
         $isave = $i;
         $distmax = $dist;
       }
     }
     $street = @{ $xmlhash{street}}[$isave];
     $houseNumber = @{ $xmlhash{streetnumber}}[$isave];
     $admn2 = @{ $xmlhash{adminName2}}[$isave];
     $postalcode = @{ $xmlhash{postalcode}}[$isave];
     $name = @{ $xmlhash{name}}[$isave];
     $countrycode = @{ $xmlhash{countrycode}}[$isave];
     $countryname = @{ $xmlhash{countryname}}[$isave];
     $state = @{ $xmlhash{adminName1}}[$isave];
     print STDERR "street, houseNumber, postalcode, state, admn2, name: $street, $houseNumber, $postalcode, $state, $admn2, $name\n" if $DEBUG;
     if ($countrycode ne "US"){
       $state .= " $countryname";
     }
     $state .= " (approximate)";
   }
# turn zipcode into town name with this call
   if ($postalcode) {
     print STDERR "postalcode $postalcode exists, let's convert to a town name\n";
     print STDERR "url: $url\n";
     $url = "http://api.geonames.org/postalCodeSearch?country=US\&postalcode=$postalcode\&username=drjohns";
     $results = `curl -s "$url"|egrep -i 'name|locality|adminName'`;
     ($town) = $results =~ /<name>(.+)</i;
     print STDERR "results,town: $results,$town\n";
   }
   if (!$town) {
# no town name, use adminname2 which is who knows what in general
     print STDERR "Stil no town name. Use adminName2 as next best thing\n";
     $town = $admn2;
   }
   if (!$town) {
# we could be in the ocean! I saw that once, and name was North Atlantic Ocean
     print STDERR "Still no town. Try to use name: $name as last resort\n";
     $town = $name;
   }
   $gpsinfo = "$houseNumber $street $town, $state" if $locality || $town;
   } # end of GPS info exists condition
  } # end loop over ANAL file
  $gpsinfo = $gpsinfo || "No info found";
  print qq(Location: $gpsinfo
);
} # end loop over STDIN

#####################
# function to parse some xml and fill a hash of arrays
sub xml1levelparse{
# build an array of hashes
$string = shift;
# strip out newline chars
$string =~ s/\n//g;
$parentelement = shift;
@elements = @_;
$i=0;
while($string =~ /<$parentelement>/i){
 $i++;
 ($childelements) = $string =~ /<$parentelement>(.+?)<\/$parentelement>/i;
 print STDERR "childelements: $childelements" if $DEBUG;
 $string =~ s/<$parentelement>(.+?)<\/$parentelement>//i;
 print STDERR "string: $string\n" if $DEBUG;
 foreach $element (@elements){
  print STDERR "element: $element\n" if $DEBUG;
  ($value) = $childelements =~ /<$element>([^<]+)<\/$element>/i;
  print STDERR "value: $value\n" if $DEBUG;
  push @{ $xmlhash{$element} }, $value;
 }
} # end of loop over parent elements
return $i;
} # end sub xml1levelparse

Here’s a real example of calling it, one of the more difficult cases:

$ echo -n 20180127_212203.jpg|./analyzeGPS.pl

GPS: GPSInfo = {0: b'\x02\x02\x00\x00', 1: 'N', 2: (41.0, 0.0, 2.75), 3: 'W', 4: (74.0, 39.0, 12.0934), 5: b'\x00', 6: 0.0, 7: (2.0, 21.0, 58.0), 29: '2018:01:28'}
N,41.0,0.0,2.75,W,74.0,39.0,12.0934
lat,lng: 41.0007638888889, -74.6533592777778
Url: http://api.geonames.org/address?lat=41.0007638888889&lng=-74.6533592777778&username=drjohns
results:
street, houseNumber, postalcode, state, town: , , , ,
Address didn't work out. Trying extendedFindNearby instead. Url: http://api.geonames.org/extendedFindNearby?lat=41.0007638888889&lng=-74.6533592777778&username=drjohns
childelements: <address> <street>Stanhope Rd</street> <mtfcc>S1400</mtfcc> <streetNumber>433</streetNumber> <lat>41.00121</lat> <lng>-74.65528</lng> <distance>0.17</distance> <postalcode>07871</postalcode> <placename>Lake Mohawk</placename> <adminCode2>037</adminCode2> <adminName2>Sussex</adminName2> <adminCode1>NJ</adminCode1> <adminName1>New Jersey</adminName1> <countryCode>US</countryCode> </address>string: <?xml version="1.0" encoding="UTF-8" standalone="no"?>
element: street
value: Stanhope Rd
element: streetnumber
value: 433
element: lat
value: 41.00121
element: lng
value: -74.65528
element: locality
value:
element: postalcode
value: 07871
element: countrycode
value: US
element: countryname
value:
element: name
value:
element: adminName2
value: Sussex
element: adminName1
value: New Jersey
dist,lati,long: 3.88818897839883e-06, 41.00121, -74.65528
dist < distmax condition. i is: 0
street, houseNumber, postalcode, state, admn2, name: Stanhope Rd, 433, 07871, New Jersey, Sussex,
postalcode 07871 exists, let's convert to a town name
url: http://api.geonames.org/extendedFindNearby?lat=41.0007638888889&lng=-74.6533592777778&username=drjohns
results,town: <geonames>
<name>Sparta</name>
<adminName1>New Jersey</adminName1>
<adminName2>Sussex</adminName2>
<adminName3/>
</geonames>
,Sparta
Location: 433 Stanhope Rd Sparta, New Jersey (approximate)

Or, if you just want the interesting stuff,

$ echo -n 20180127_212203.jpg|./analyzeGPS.pl 2>/dev/null

Location: 433 Stanhope Rd Sparta, New Jersey (approximate)

Bonus section
Convert city to GPS coordinates with geonames

Having the city and country you can use the wikipedia search to turn that into serviceable GPS coordinates. This is sort of the opposite problem from what we did earlier. Several possible matches are returned so you need some discretion to ferret out the correct answer. And sometimes smaller towns are just not found at all and only wild guesses are returned! The curl + URL that I’ve been using for this is:

curl 'http://api.geonames.org/wikipediaSearchJSON?q=Cornwall,Canada&maxRows=10&username=drjohns'

I think if you know the state (or province) you can put that in as well.

Convert GPS coordinates into a GMT offset

The following is partial python code which I have come across and haven’t yet myself verified. But I am excited to learn of it because until now I only knew how to do this with the Geonames api, which I will not be showing because it’s slow, potentially costs money, etc.

from datetime import datetime
from pytz import timezone, utc
from timezonefinder import TimezoneFinder

tf = TimezoneFinder()  # reuse


def get_offset(*, lat, lng):
    """
    returns a location's time zone offset from UTC in seconds.
    """

    today = datetime.now()
    tz_target = timezone(tf.timezone_at(lng=lng, lat=lat))
    # ATTENTION: tz_target could be None! handle error case
    today_target = tz_target.localize(today)
    today_utc = utc.localize(today)
    return (today_utc - today_target).total_seconds()


bergamo = {"lat": 45.69, "lng": 9.67}
seconds_offset = get_offset(**bergamo)
print('seconds offset', seconds_offset)
parsippany = {"lat": 40.86, "lng": -74.43}
seconds_offset = get_offset(**parsippany)
print('seconds offset', seconds_offset)

A word about China

Today I tried to see if I could learn the province or county a particular GPS coordinate is in when it is in China, but it did not seem to work. I’m guessing China cities cannot be looked up in the way I’ve shown for my working examples, but I cannot be 100% sure without more research which I do not plan to do.

Conclusion

An api for reverse lookup of GPS coordinates which returns the nearest address, including town name, is available. I have provided examples of how to use it. It is unreliable, however, and Geonames.org does provide alternatives which have their own drawbacks. In my image gallery, only a minority of my pictures have encoded GPS data, but it is fun to work with them to pluck out the town where they were shot.

I have incorporated this functionality into a Raspberry Pi-based photo frame I am working on.

I have created an example Perl program that analyzes a JPEG image to extract the GPS information and turn it into an address that is remarkably accurate. It is amazing and uncanny to see it at work. It deals with the screwy and inconsistent results returned by the free service, Geonames.org.

References and related

There are lots of different things you can derive given the GPS coordinates using the Geonames api. Here is a list: https://www.geonames.org/export/ws-overview.html

In this photo frame version of mine, I extract all the EXIF metadata which includes the GPS info.

One day my advanced photo frame will hopefully include an option to learn where a photo was taken by interacting with a remote control. Here is the start of that write-up.

You can pay $5 and get a zip codes to cities database in any format. I’m sure they’ve just re-packaged data from elsewhere, but it might be worth it: https://www.uszipcodeslist.com/

For a more professional api, https://smartystreets.com/ looks quite nice. Free level is 250 queries per month, so not too many. But their documentation and usability looks good to me. For this post I was looking for free services and have tried to avoid commercial services.

Categories
Admin Linux

vsftpd Virtual Users stopped working after patching: the solution

Intro
vsftpd is a useful daemon which I use to run an ftps service (ftp which uses TLS encryption). Since I am not part of the group that administers the server, it makes sense for me to maintain my own userlist rather than rely on the system password database. vsftpd has a convenient feature which allows this known as virtual users.

More details
In /etc/pam.d/vsftpd.virtual I have:

auth required pam_userdb.so db=/etc/vsftpd/vsftpd-virtual-user
account required pam_userdb.so db=/etc/vsftpd/vsftpd-virtual-user
session required pam_loginuid.so

In the file /etc/vsftpd/vsftpd-virtual-user.db I have my Berkeley database of users and passwords. See references on how to set this up.
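For completeness, my understanding is that such a Berkeley database is typically built with db_load (the utility may be named db_load, db4_load or similar depending on the distro) from a text file of alternating username and password lines. A sketch with made-up credentials:

$ cat > virtual-users.txt <<EOF
ftpuser1
SomeSecret1
ftpuser2
SomeSecret2
EOF
$ db_load -T -t hash -f virtual-users.txt /etc/vsftpd/vsftpd-virtual-user.db
$ chmod 600 /etc/vsftpd/vsftpd-virtual-user.db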

The point is that I had this all working last year – 2019 – on my SLES 12SP4 server.

Then it all broke
Then in early May, 2020, all the FTPs stopped working. The status of the vsftpd service hinted that the file /lib64/security/pam_userdb.so could not be loaded. Sure enough, it was missing! I checked some of my other SLES12SP4 servers, some of which are on a different patch schedule. It was missing on some, and present on one. So I “borrowed” pam_userdb.so from the one server which still had it and put it onto my server in /lib64/security. All good. Service restored. But clearly that is a hack.

What’s going on
So I asked a Linux expert what’s going on and got a good explanation.

pam_userdb has been moved to a separate package, named pam-extra
 
1) http://lists.suse.com/pipermail/sle-security-updates/2020-April/006661.html
2) https://www.suse.com/support/update/announcement/2020/suse-ru-20200822-1/
 
Advisory ID: SUSE-RU-2020:917-1
Released: Fri Apr 3 15:02:25 2020
Summary: Recommended update for pam
Type: recommended
Severity: moderate
References: 1166510
This update for pam fixes the following issues:
 
- Moved pam_userdb into a separate package pam-extra. (bsc#1166510)
 
Installing the package pam-extra should resolve your issue.

I installed the pam-extra package using zypper, and, yes, it creates a /lib64/security/pam_userdb.so file!
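For the record, that was just:

$ sudo zypper install pam-extra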

And vsftpd works once more using supported packages.

Conclusion
Virtual users with vsftpd requires pam_userdb.so. However, PAM wished to decouple itself from dependency on external databases, etc, so they bundled this kind of thing into a separate package, pam-extra, more-or-less in the middle of a patch cycle. So if you had the problem I had, the solution may be as simple as installing the pam-extra package on your system. Although I experienced this on SLES, I believe it has or will happen on other Linux flavors as well.

This problem is poorly documented on the Internet.


References and related

https://www.cyberciti.biz/tips/centos-redhat-vsftpd-ftp-with-virtual-users.html

Categories
Admin Linux Network Technologies

Configure rsyslog to send syslog to SIEM server running TLS

Intro
You have to dig a little to find out about this somewhat obscure topic. You want to send syslog output, e.g., from the named daemon, to a syslog server with beefed up security, such that it requires the use of TLS so traffic is encrypted. This is how I did that.

The details
This is what worked for me:

...
# DrJ fixes - log local0 to DrJ's dmz syslog server - DrJ 5/6/20
# use local0 for named's query log, but also log locally
# see https://www.linuxquestions.org/questions/linux-server-73/bind-queries-log-to-remote-syslog-server-4175669371/
# @@ means use TCP
$DefaultNetstreamDriver gtls
$DefaultNetstreamDriverCAFile /etc/ssl/certs/GlobalSign_Root_CA_-_R3.pem
$ActionSendStreamDriver gtls
$ActionSendStreamDriverMode 1
$ActionSendStreamDriverAuthMode anon
 
local0.*                                @@(o)14.17.85.10:6514
#local0.*                               /var/lib/named/query.log
local1.*                                -/var/log/localmessages
#local0.*;local1.*                      -/var/log/localmessages
local2.*;local3.*                       -/var/log/localmessages
local4.*;local5.*                       -/var/log/localmessages
local6.*;local7.*                       -/var/log/localmessages

The above is the important part of my /etc/rsyslog.conf file. The SIEM server is running at IP address 14.17.85.10 on TCP port 6514. It is using a certificate issued by Globalsign. An openssl call confirms this (see references).

Other gotchas
I am running on a SLES 15 server. Although it had rsyslog installed, it did not support tls initially. I was getting a dlopen error. So I figured out I needed to install this module:

rsyslog-module-gtls

References and related
How to find the server’s certificate using openssl. Very briefly, use the openssl s_client submenu.
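For instance, something along these lines displays the certificate chain of the SIEM listener from my config above; look for the issuer lines (GlobalSign in my case):

$ openssl s_client -connect 14.17.85.10:6514 -showcerts < /dev/null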

The rsyslog site itself probably has the complete documentation. Though I haven’t looked at it thoroughly, it seems very complete.

Want to know what syslog is? Howtonetwork has this very good writeup: https://www.howtonetwork.com/technical/security-technical/syslog-server/

Categories
Admin Apache CentOS Linux Security

Trying to upgrade WordPress brings a thicket of problems

Intro
WordPress tells me to upgrade to version 5.4. But when I try, it says nope, your version of php is too old. Now admittedly, I’m running on an ancient CentOS server, now at version 6.10, which I set up back in 2012 I believe.

I’m pretty comfortable with CentOS so I wanted to continue with it, but just on a newer version at Amazon. I don’t like being taken advantage of, so I also wanted to avoid those outfits which charge by the hour for providing CentOS, which should really be free. Those costs can really add up.

Lots of travails setting up my AWS image, and then…

I managed to find a CentOS amongst the community images. I chose centos-8-minimal-install-201909262151 (ami-01b3337aae1959300).

OK. Brand new CentOS 8 image, 8.1.1911 after patching it, which will be supported for 10 years. Surely it has the latest and greatest??

Well, I’m not so sure…

If only I had known

I really wish I had seen this post earlier. It would have been really, really helpful: https://blog.ssdnodes.com/blog/how-to-install-wordpress-on-centos-7-with-lamp-tutorial/

But I didn’t see it until after I had done all the work below the hard way. Oh well.

When I install php I get version 7.2.11. WordPress is telling me I need a minimum of php version 7.3. If I download the latest php, it tells me to download the latest apache. So I do. Version 2.4.43. I also install gcc, anticipating some compiling in my future…

But apache won’t even configure:

httpd-2.4.43]$ ./configure --enable-so
checking for chosen layout... Apache
checking for working mkdir -p... yes
checking for grep that handles long lines and -e... /usr/bin/grep
checking for egrep... /usr/bin/grep -E
checking build system type... x86_64-pc-linux-gnu
checking host system type... x86_64-pc-linux-gnu
checking target system type... x86_64-pc-linux-gnu
configure:
configure: Configuring Apache Portable Runtime library...
configure:
checking for APR... no
configure: error: APR not found.  Please read the documentation.
  --with-apr=PATH         prefix for installed APR or the full path to
                             apr-config
  --with-apr-util=PATH    prefix for installed APU or the full path to
                             apu-config
 
(apr-util configure)
checking for APR... no
configure: error: APR could not be located. Please use the --with-apr option.
 
try:
 
 ./configure --with-apr=/usr/local/apr
 
but
 
-D_GNU_SOURCE   -I/usr/local/src/apr-util-1.6.1/include -I/usr/local/src/apr-util-1.6.1/include/private  -I/usr/local/apr/include/apr-1    -o xml/apr_xml.lo -c xml/apr_xml.c && touch xml/apr_xml.lo
xml/apr_xml.c:35:10: fatal error: expat.h: No such file or directory
 #include <expat.h>
          ^~~~~~~~~
compilation terminated.
make[1]: *** [/usr/local/src/apr-util-1.6.1/build/rules.mk:206: xml/apr_xml.lo] Error 1

So I install expat header files:
$ yum install expat-devel
And then the make of apr-util goes through. Not sure whether this is the right approach yet, however.

So following php’s advice, I have:
$ ./configure --enable-so

checking for chosen layout... Apache
...
checking for pcre-config... false
configure: error: pcre-config for libpcre not found. PCRE is required and available from http://pcre.org/

So I install pcre-devel:
$ yum install pcre-devel
Now the apache configure goes through, but the make does not work:

/usr/local/apr/build-1/libtool --silent --mode=link gcc  -g -O2 -pthread         -o htpasswd  htpasswd.lo passwd_common.lo       /usr/local/apr/lib/libaprutil-1.la /usr/local/apr/lib/libapr-1.la -lrt -lcrypt -lpthread -ldl -lcrypt
/usr/local/apr/lib/libaprutil-1.so: undefined reference to `XML_GetErrorCode'
/usr/local/apr/lib/libaprutil-1.so: undefined reference to `XML_SetEntityDeclHandler'
/usr/local/apr/lib/libaprutil-1.so: undefined reference to `XML_ParserCreate'
/usr/local/apr/lib/libaprutil-1.so: undefined reference to `XML_SetCharacterDataHandler'
/usr/local/apr/lib/libaprutil-1.so: undefined reference to `XML_ParserFree'
/usr/local/apr/lib/libaprutil-1.so: undefined reference to `XML_SetUserData'
/usr/local/apr/lib/libaprutil-1.so: undefined reference to `XML_StopParser'
/usr/local/apr/lib/libaprutil-1.so: undefined reference to `XML_Parse'
/usr/local/apr/lib/libaprutil-1.so: undefined reference to `XML_ErrorString'
/usr/local/apr/lib/libaprutil-1.so: undefined reference to `XML_SetElementHandler'
collect2: error: ld returned 1 exit status
make[2]: *** [Makefile:48: htpasswd] Error 1

So I try configure or apr-util with expat built-in.

$ ./configure --with-expat=builtin --with-apr=/usr/local/apr

But when I do the make of apr-util I now get this error:

/usr/local/apr/build-1/libtool: line 7475: cd: builtin/lib: No such file or directory
libtool:   error: cannot determine absolute directory name of 'builtin/lib'
make[1]: *** [Makefile:93: libaprutil-1.la] Error 1
make[1]: Leaving directory '/usr/local/src/apr-util-1.6.1'
make: *** [/usr/local/src/apr-util-1.6.1/build/rules.mk:118: all-recursive] Error 1

From what I read this new error occurs due to having –expat-built-in! So now what? So I get rid of that in my configure statement for apr-util. For some reason, apr-util goes through and compiles. And so I try this for compiling apache24:

$ ./configure --enable-so --with-apr=/usr/local/apr

And then I make it. And for some reason, now it goes through. I doubt it will work, however… it kind of does work.

It threw the files into /usr/local/apache2, where there is a bin directory containing apachectl. I can launch apachectl start, and then access a default page on port 80. Not bad so far…

I still need to tie in php however.

I just wing it and try

$ ./configure --with-apxs2=/usr/local/apache2/bin/apxs --with-mysql

Hey, maybe for once their instructions will work. Nope.

configure: error: Package requirements (libxml-2.0 >= 2.7.6) were not met:

Package 'libxml-2.0', required by 'virtual:world', not found

Consider adjusting the PKG_CONFIG_PATH environment variable if you
installed software in a non-standard prefix.

So I guess I need to install libxml2-devel:

$ yum install libxml2-devel

Looks like I get past that error. Now it’s on to this one:

configure: error: Package requirements (sqlite3 > 3.7.4) were not met:

So I install sqlite-devel:
$ yum install sqlite-devel
Now my configure almost goes through, except, as I suspected, that was a nonsense argument:

configure: WARNING: unrecognized options: --with-mysql

It’s not there when you look for it! Why the heck did they – php.net – give an example with exactly that?? Annoying. So I leave it out. It goes through. Run make. It takes a long time to compile php! And this server is pretty fast. It’s slower than apache or anything else I’ve compiled.

But eventually the compile finished. It added a LoadModule statement to the apache httpd.conf file. And, after I associated files with php extension to the php handler, a test file seemed to work. So php is beginning to work. Not at all sure about the mysql tie-in, however. In fact see further down below where I confirm my fears that there is no MySQL support when PHP is compiled this way.

Is running SSL asking too much?
Apparently, yes. I don’t think my apache24 has SSL support built-in:

Invalid command 'SSLCipherSuite', perhaps misspelled or defined by a module not included in the server configuration

So I try
$ ./configure --enable-so --with-apr=/usr/local/apr --enable-ssl

Not good…

checking for OpenSSL... checking for user-provided OpenSSL base directory... none
checking for OpenSSL version >= 0.9.8a... FAILED
configure: WARNING: OpenSSL version is too old
no
checking whether to enable mod_ssl... configure: error: mod_ssl has been requested but can not be built due to prerequisite failures

Where is it pulling that old version of openssl? Cause when I do this:

$ openssl version

OpenSSL 1.1.1c FIPS  28 May 2019

That’s not that old…

I also noticed this error:

configure: WARNING: Your APR does not include SSL/EVP support. To enable it: configure --with-crypto

So maybe I will re-compile APR with that argument.

Nope. APR doesn’t even have that argument. But apr-util does. I’ll try that.

Not so good:

configure: error: Crypto was requested but no crypto library could be enabled; specify the location of a crypto library using --with-openssl, --with-nss, and/or --with-commoncrypto.

I give up. maybe it was a false alarm. I’ll try to ignore it.

So I install openssl-devel:

$ yum install openssl-devel

Then I try to configure apache24 thusly:

$ ./configure --enable-so --with-apr=/usr/local/apr --enable-ssl

This time at least the configure goes through – no ssl-related errors.

I needed to add the LoadModule statement by hand to httpd.conf since that file was already there from my previous build and so did not get that statement after my re-build with ssl support:

LoadModule ssl_module   modules/mod_ssl.so

Next error please
Now I have this error:

AH00526: Syntax error on line 92 of /usr/local/apache2/conf/extra/drjohns.conf:
SSLSessionCache: 'shmcb' session cache not supported (known names: ). Maybe you need to load the appropriate socache module (mod_socache_shmcb?).

I want results. So I just comment out the lines that talk about SSL Cache and anything to do with SSL cache.

And…it starts…and…it is listening on both ports 80 and 443 and…it is running SSL. So I think I cracked the SSL issue.

Switch focus to Mysql
I didn’t bother to find mysql. I believe people now use mariadb. So I installed the system one with a yum install mariadb. I became root and learned the version with a select version();

+-----------------+
| version()       |
+-----------------+
| 10.3.17-MariaDB |
+-----------------+
1 row in set (0.000 sec)

Is that recent enough? Yes! For once we skate by comfortably. The WordPress instructions say:

MySQL 5.6 or MariaDB 10.1 or greater

I set up apache. I try to access the WordPress setup page but instead get this message:

Forbidden
 
You don't have permission to access this resource.

every page I try gives this error.

The apache error log says:

client denied by server configuration: /usr/local/apache2/htdocs/

Not sure where that’s coming from. I thought I supplied my own documentroot statements, etc.

I threw in a Require all granted within the Directory statement and that seemed to help.
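For anyone hitting the same thing, the change amounts to something like this in the vhost or main config. The directory path is my blog’s DocumentRoot, the AllowOverride All line is my guess at what WordPress permalinks will eventually want, and the Require all granted line is the part that cleared the Forbidden error:

<Directory "/web/drjohns/blog">
    AllowOverride All
    Require all granted
</Directory>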

PHP/MySQL communication issue surfaces
Next problem is that PHP wasn’t compiled correctly it seems:

Your PHP installation appears to be missing the MySQL extension which is required by WordPress.

So I’ll try to re-do it. This time I am trying these arguments to configure:
$ ./configure --with-apxs2=/usr/local/apache2/bin/apxs --with-mysqli

Well, I’m not so sure this worked. Trying to setup WordPress, I access wp-config.php and only get:

Error establishing a database connection

This is roll-up-your-sleeves time. It’s clear we are getting no breaks. I looked into installing PhpMyAdmin, but then I would need composer, which may depend on other things, so I lost interest in that rabbit hole. So I decide to simplify the problem. The suggested test is to write a php program like this, which I do, calling it tst2.php:

<?php
$servername = "localhost";
$username = "username";
$password = "password";
 
// Create connection
$conn = mysqli_connect($servername, $username, $password);
 
// Check connection
if (!$conn) {
    die("Connection failed: " . mysqli_connect_error());
}
echo "Connected successfully";
?>

Run it:
$ php tst2.php
and get:

PHP Warning:  mysqli_connect(): (HY000/2002): No such file or directory in /web/drjohns/blog/tst2.php on line 7
 
Warning: mysqli_connect(): (HY000/2002): No such file or directory in /web/drjohns/blog/tst2.php on line 7

Some quick research tells me that php does not know where the file mysql.sock is to be found. I search for it:

$ sudo find / -name mysql.sock

and it comes back as

/var/lib/mysql/mysql.sock

So…the prescription is to update a couple things in php.ini, which has been put into /usr/local/lib in my case because I compiled php with mostly default values. I add the location of the mysql.sock file in two places for good measure:

pdo_mysql.default_socket = /var/lib/mysql/mysql.sock
mysqli.default_socket = /var/lib/mysql/mysql.sock

And then my little test program goes through!

Connected successfully

Install WordPress
I begin to install WordPress, creating an initial user and so on. When I go back in I get a directory listing in place of the index.php. So I call index.php by hand and get a worrisome error:

Fatal error: Uncaught Error: Call to undefined function gzinflate() in /web/drjohns/blog/wp-includes/class-requests.php:947 Stack trace: #0 /web/drjohns/blog/wp-includes/class-requests.php(886): Requests::compatible_gzinflate('\xA5\x92\xCDn\x830\f\x80\xDF\xC5g\x08\xD5\xD6\xEE...'

I should have compiled php with zlib is what I determine it means… zlib and zlib-devel packages are on my system so this should be straightforward.

More arguments for php compiling
OK. Let’s be sensible and try to reproduce what I had done in 2017 to compile php instead of finding and resolving mistakes one at a time.

$ ./configure --with-apxs2=/usr/local/apache2/bin/apxs --with-mysqli --disable-cgi --with-zlib --with-gettext --with-gdbm --with-curl --with-openssl

This gives this new issue:

Package 'libcurl', required by 'virtual:world', not found

I will install libcurl-devel in hopes of making this one go away.

Past that error, and onto this one:

configure: error: DBA: Could not find necessary header file(s).

I’m trying to drop the --with-gdbm and skip that whole DBA thing since the database connection seemed to be working without it. Now I see an openssl problem:

make: *** No rule to make target '/tmp/php-7.4.4/ext/openssl/openssl.c', needed by 'ext/openssl/openssl.lo'.  Stop.

Even if I get rid of openssl I still see a problem when running configure:

gawk: ./build/print_include.awk:1: fatal: cannot open file `ext/zlib/*.h*' for reading (No such file or directory)

Now I can ignore that error because configure exits with 0 status and make, but the make then stops at zlib:

SIGNALS   -c /tmp/php-7.4.4/ext/sqlite3/sqlite3.c -o ext/sqlite3/sqlite3.lo
make: *** No rule to make target '/tmp/php-7.4.4/ext/zlib/zlib.c', needed by 'ext/zlib/zlib.lo'.  Stop.

Reason for above php compilation errors
I figured it out. My bad. I had done a make distclean in addition to a make clean when I was re-starting with a new set of arguments to configure. I saw it advised somewhere on the Internet and didn’t pay much attention, but it seemed like a good idea. But I think what it was doing was wiping out the files in the ext directory, like ext/zlib.

So now I’m starting over, now with php 7.4.5 since they’ve upgraded in the meanwhile! With this configure command line (I figure I probably don’t need gdb):
./configure --with-apxs2=/usr/local/apache2/bin/apxs --with-mysqli --disable-cgi --with-zlib --with-gettext --with-gdbm --with-curl --with-openssl

Well, the php compile went through, however, I can’t seem to access any WordPress pages (all WordPress pages clock). Yet my simplistic database connection test does work. Hmmm. OK. If they come up at all, they come up exceedingly slowly and without correct formatting.

I think I see the reason for that as well. The source of the wp-login.php page (as viewed in a browser window) includes references to former hostnames my server used to have. Of course fetching all those objects times out. And they’re the ones that provide the formatting. At this point I’m not sure where those references came from. Not from the filesystem, so must be in the database as a result of an earlier WordPress install attempt. Amazon keeps changing my IP, you see. I see it is embedded into WordPress. In Settings | general Settings. I’m going to have this problem every time…

What I’m going to do is to create a temporary fictitious name, johnstechtalk, which I will enter in my hosts file on my Windows PC, in Windows\system32\drivers\etc\hosts, and also enter that name in WordPress’s settings. I will update the IP in my hosts file every time it changes while I am playing around. And now there’s even an issue doing this, which has always worked so reliably in the past. Well, I found I actually needed to override the IP for drjohnstechtalk.com in my hosts file. But it seems Firefox has moved on to using DNS over https, so it ignores the hosts file now! I think. Edge still uses it however, thankfully.

WordPress
So WordPress is basically functioning. I managed to install a few of my fav plugins: Akismet anti-spam, Limit Login Attempts, WP-PostViews. Some of the plugins are so old they actually require ftp. Who runs ftp these days? That’s been considered insecure for many years. But to accommodate I installed vsftpd on my server and ran it, temporarily.

Then McAfee on my PC decided that wordpress.org is an unsafe site, thank you very much, without any logs or pop-ups. I couldn’t reach that site until I disabled the McAfee firewall. Makes it hard to learn how to do the next steps of the upgrade.

More WordPress difficulties

WordPress is never satisfied with whatever version you’ve installed. You take the latest and two weeks later it’s demanding you upgrade already. My first upgrade didn’t go so well. Then I installed vsftpd. The upgrade likes to use your local FTP server – at least in my case. So for the ftp server I just put in 127.0.0.1 (see the wp-config.php sketch a little further below for a way to pre-fill these FTP details). Kind of weird. Even still I get this error:

Downloading update from https://downloads.wordpress.org/release/wordpress-5.4.2-no-content.zip…

The authenticity of wordpress-5.4.2-no-content.zip could not be verified as no signature was found.

Unpacking the update…

Could not create directory.

Installation Failed

So I decided it was a permissions problem: my apache was running as user daemon (do a ps -ef to see running processes), while my WordPress blog directory was owned by centos. So I now run apache as user:group centos:centos. In case this helps anyone, the apache configuration directives to do this are:

User centos
Group centos

then I go to my blog directory and run something like:

chown -R centos:centos *
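An aside on those FTP prompts during upgrades: WordPress can also read the FTP details from constants in wp-config.php, which saves retyping them every time. A sketch with placeholder credentials – I have not wired my own setup exactly this way, so treat it as a pointer rather than my working config:

/* candidate additions to wp-config.php for upgrades via the local vsftpd */
define( 'FTP_HOST', '127.0.0.1' );
define( 'FTP_USER', 'centos' );
define( 'FTP_PASS', 'replace-with-the-real-password' );
/* or skip FTP altogether when apache owns the files: */
/* define( 'FS_METHOD', 'direct' ); */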
WordPress block editor non-functional after the upgrade

When I did the SQL import from my old site, I killed the block editor on my new site! This was disconcerting. That little plus sign just would not show up on new pages, no posts, whatever. So I basically killed wordpress 5.4. So I took a step backwards and started v 5.4 with a clean (empty) database like a fresh install to make sure the block editor works then. It did. Whew! Then I did an RTFM and deactivated my plugins on my old WordPress install before doing the mysql backup. I imported that SQL database, with a very minimal set of plugins activated, and, whew, this time I did not blow away the block editor.

CentOS bogs down

I like my snappy new Centos 8 AMI 80% of the time. But that remaining 20% is very annoying. It freezes. Really bad. I ran a top until the problem happened. Here I’ve caught the problem in action:

top - 16:26:11 up 1 day, 21 min, 2 users, load average: 3.96, 2.93, 5.30
Tasks: 95 total, 1 running, 94 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.1 us, 2.6 sy, 0.0 ni, 0.0 id, 95.8 wa, 0.4 hi, 0.3 si, 0.7 st
MiB Mem : 1827.1 total, 63.4 free, 1709.8 used, 53.9 buff/cache
MiB Swap: 0.0 total, 0.0 free, 0.0 used. 9.1 avail Mem

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
44 root 20 0 0 0 0 S 1.6 0.0 12:47.94 kswapd0
438 root 0 -20 0 0 0 I 0.5 0.0 1:38.84 kworker/0:1H-kblockd
890 mysql 20 0 1301064 92972 0 S 0.4 5.0 1:26.83 mysqld
5282 centos 20 0 1504524 341188 64 S 0.4 18.2 0:06.06 httpd
5344 root 20 0 345936 1008 0 S 0.4 0.1 0:00.09 sudo
560 root 20 0 93504 6436 3340 S 0.2 0.3 0:02.53 systemd-journal
712 polkitd 20 0 1626824 4996 0 S 0.2 0.3 0:00.15 polkitd
817 root 20 0 598816 4424 0 S 0.2 0.2 0:12.62 NetworkManager
821 root 20 0 634088 14772 0 S 0.2 0.8 0:18.67 tuned
1148 root 20 0 216948 7180 3456 S 0.2 0.4 0:16.74 rsyslogd
2346 john 20 0 273640 776 0 R 0.2 0.0 1:20.73 top
1 root 20 0 178656 4300 0 S 0.0 0.2 0:11.34 systemd

So what jumps out at me is the 95.8% wait time – that ain’t good – and that a process which includes the name swap is at the top of this list, combined with the fact that exactly 0 swap space is allocated. My linux skills may be 15 years out-of-date, but I think I better allocate some swap space (but why does it need it so badly??). On my old system I think I had done it. I’m a little scared to proceed for fear of blowing up my system.

So if you use drjohnstechtalk.com and it freezes, just come back in 10 minutes and it’ll probably be running again – this situation tends to self-correct. No one’s perfect.

Making a swap space

I went ahead and created a swap space right on my existing filesystem. I realized it wasn’t too hard once I found these really clear instructions: https://www.maketecheasier.com/swap-partitions-on-linux/

Some of the commands are dd to create an empty file, mkswap, swapon and swapon -s to see what it’s doing. And it really, really helped. I think sometimes mariadb needed to swap, and sometimes apache did. My system only has 1.8 GB of memory or so. And the drive is solid state, so it should be kind of fast. Because I used 1.2 GB for swap, I also extended my volume size when I happened upon Amazon’s clear instructions on how you can do that. Who knew? See below for more on that. If I got it right, Amazon also gives you more IO for each GB you add. I’m definitely getting good response after this swap space addition.
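
For reference, creating it looks roughly like this, assuming a 1.2 GB swap file at /swapfile (the path and size are just examples):

sudo dd if=/dev/zero of=/swapfile bs=1M count=1200   # empty 1.2 GB file
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
swapon -s   # confirm the new swap area is in use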

An aside about i/o

In the old days I perfected a way to study i/o using the iostat utility. You can get it by installing the sysstat package. A good command to run is iostat -t -c -m -x 5
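
On CentOS it goes something like this (the flags are the same ones mentioned above):

# iostat comes with the sysstat package
sudo yum install -y sysstat
iostat -t -c -m -x 5   # timestamped CPU plus extended per-device stats in MB, every 5 seconds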

Examining these three consecutive samples of output from running that command is very instructive:

Device r/s w/s rMB/s wMB/s rrqm/s wrqm/s %rrqm %wrqm r_await w_await aqu-sz rareq-sz wareq-sz svctm %util
xvda 2226.40 1408.00 9.35 5.54 1.00 0.20 0.04 0.01 2.37 5.00 10.28 4.30 4.03 0.25 90.14

07/04/2020 04:05:36 PM
avg-cpu: %user %nice %system %iowait %steal %idle
1.00 0.00 1.20 48.59 0.60 48.59

Device r/s w/s rMB/s wMB/s rrqm/s wrqm/s %rrqm %wrqm r_await w_await aqu-sz rareq-sz wareq-sz svctm %util
xvda 130.14 1446.51 0.53 5.66 0.60 0.00 0.46 0.00 4.98 8.03 11.47 4.15 4.01 0.32 51.22

07/04/2020 04:05:41 PM
avg-cpu: %user %nice %system %iowait %steal %idle
0.00 0.00 0.00 0.00 0.00 100.00

Device r/s w/s rMB/s wMB/s rrqm/s wrqm/s %rrqm %wrqm r_await w_await aqu-sz rareq-sz wareq-sz svctm %util
xvda 1.60 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.50 0.00 0.00 2.69 0.00 0.62 0.10

I tooled around in the admin panel (which previously had brought my server to its knees), and you can see the %util shot up to 90%, reads per second over 2,000, writes per second 1,400. So, really demanding. It’s clear my server would die if more than a few people were hitting it hard. And I may need some fine-tuning.

Success!

Given all the above problems, you probably never thought I’d pull this off. I worked in fits and starts – mostly when my significant other was away, because this stuff is a time suck. But, believe it or not, I got the new apache/openssl/apr/php/mariadb/wordpress/centos/amazon EC2 VPC/drjohnstechtalk-with-new-2020-theme working to my satisfaction. I have to pat myself on the back for that. So I pulled the plug on the old site, which basically means moving the elastic IP over from the old CentOS 6 site to the new CentOS 8 AWS instance. Since my site was so old, I had to first convert the elastic IP from type classic to VPC. It was not too obvious, but I got it eventually.
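
If you prefer the CLI for the elastic IP conversion and re-association, it goes something like this. A sketch only; the IP and instance ID are placeholders and it assumes the aws command-line tool is already configured:

# move the EC2-Classic elastic IP to the VPC platform
aws ec2 move-address-to-vpc --public-ip 203.0.113.25
# associate it with the new instance, using the allocation ID returned by the move
aws ec2 associate-address --instance-id i-0123456789abcdef0 --allocation-id eipalloc-0abc1234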

Damn hackers already at it

Look at the access log of your new apache server running your production WordPress. If, like me, you see people already trying to log in (POST accesses for …/wp-login.php), which is really annoying because they’re all hackers, at least install the WPS Hide Login plugin and configure a secret login URL. Don’t use the default login.

Meanwhile I’ve decided to freeze out anyone who tries to access wp-login.php, because they can only be up to no good. So I created this script, which I call wp-login-freeze.sh:

#!/bin/sh
# freeze hackers who probe for wp-login
# DrJ 6/2020
# must run as root so the route command works
DIR=/var/log/drjohns
cd $DIR
while /bin/true; do
# pull the source IPs of recent wp-login.php probes and add a reject route for each
  tail -200 access_log|grep wp-login.php|awk '{print $1}'|sort -u|while read line; do
    echo $line
    route add -host $line reject
  done
  sleep 60
done

Works great! Just do a netstat -rn to watch your ever-growing list of systems you’ve frozen out.
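
I run it in the background so it keeps going after I log out. A minimal way to do that, assuming the script sits in root’s home directory (it has to run as root for route to work):

nohup /root/wp-login-freeze.sh > /root/wp-login-freeze.log 2>&1 &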

But xmlrpc is the worst…

Bots which invoke xmlrpc.php are the worst for little servers like mine. They absolutely bring it to its knees. So I’ve just added something similar to the wp-login freeze above, except it catches xmlrpc bots:

#!/bin/sh
# freeze hackers who are doing God knows what to xmlrpc.php
# DrJ 8/2020
# must run as root so the route command works
DIR=/var/log/drjohns
cd $DIR
while /bin/true; do
# example offending line:
# 181.214.107.40 - - [21/Aug/2020:08:17:01 -0400] "POST /blog//xmlrpc.php HTTP/1.1" 200 401
  tail -100 access_log|grep xmlrpc.php|grep POST|awk '{print $1}'|sort -u|while read line; do
    echo $line
    route add -host $line reject
  done
  sleep 30
done

I was still dissatisfied with seeing bots hit me up for 30 seconds or so, so I decided heck with it, I’m going to waste their time first. So I added a few lines to xmlrpc.php (I know you shouldn’t do this, but hackers shouldn’t do what they do either):

// DrJ improvements
if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    // just make the bot suffer a bit... the freeze-out is done by an external script
    sleep(25);
}
// end DrJ enhancements

This freeze-out trick within xmlrpc.php was only going to work if the bots run single-threaded, that is, they run serially, waiting for one request to finish before sending the next. I’ve been running it for a couple days and have enthusiastically frozen out a few IPs. I can attest that the bots do indeed run single-threaded. So I typically get two entries in my access log for xmlrpc.php from a given bot, and then the bot is completely frozen out by the reject route which gets added.
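
If you ever freeze out a legitimate address by mistake, the route is easy to undo (the IP below is just the example from the log excerpt above). These reject routes live only in the routing table, so they also disappear on reboot:

route del -host 181.214.107.40 reject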

Mid-term issues discovered months later

Well, I never needed to send emails from my server, until I did. And when I did I found I couldn’t. It used to work from my old server… From reading a bit I see WordPress uses PHP’s built-in mail() function, which has its limits. But my server did not have the mailx or postfix packages. So I did a

$ yum install  postfix mailx

$ systemctl enable postfix

$ systemctl start postfix

That still didn’t magically make WordPress mail work, although at that point I could send mail by hand from a spoofed address, which is pretty cool, like:

$ mailx -r "[email protected]" -s "testing email" [email protected] <<< "Test of a one-line email test. – drJ"

And I got it in my Gmail account from that sender. Cool.

Rather than wasting time debugging PHP, I saw a promising-looking plug-in, WP Mail SMTP, and installed it. Here is how I configured the important bits:

WP Mail SMTP settings

Another test from WordPress and this time it goes through. Yeah.

Hosting a second WordPress site and Ninja Forms brings it all down

I brushed off my own old notes on hosting more than one WordPress site on my server (it’s nice to be king): https://drjohnstechtalk.com/blog/2014/12/running-a-second-instance-of-wordpress-on-your-server/

Well, wouldn’t you know, my friend’s WordPress site I was trying to host brought my server to its knees. Again. Seems to be a common theme. I was hoping it was merely hackers who’d discovered his new site and hit it with the xmlrpc DoS, because that would have been easy to treat. But no, no xmlrpc issues so far according to the access_log file. He uses more of the popular plugins like Elementor and Ninja Forms. Well, that Ninja Forms dashboard is a killer. It reliably brings my server to a crawl. I even caught it in action from a running top and saw swap was the leading cpu-consuming process. And my 1.2 GB swap file was nearly full. So I created a second, larger swap file of 2 GB and did a swapon for that. Then I decommissioned my older swap file. Did you know you can do a swapoff? Yup. I could see the old one decreasing in size and the new one building up. And now the new one is using more than the old ever could hold – 1.4 GB. Now the Ninja Forms dashboard can be launched. Performance is once again OK.
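
The cut-over between swap files is painless, by the way. A rough sketch, assuming the new 2 GB file was created at /swapfile2 the same way as the original (the paths are examples):

sudo swapon /swapfile2   # both swap areas active for a moment
sudo swapoff /swapfile   # the old 1.2 GB file drains and is released
swapon -s                # verify only the larger one remains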

So…hosting second WordPress site now resolved.

Updating failed. The response is not a valid JSON response.

So then he got that error after enabling permalinks. The causes for this are pretty well documented. We took the standard advice and disabled all plugins. Without permalinks we were fine. With them, the JSON error. I put the .htaccess file in place. Still no go. So unlike most advice, in my case, where I run my own web server, I must have goofed up the config and not enabled reading of the .htaccess file. Fortunately I had a working example in the form of my own blog site. I put all those apache commands which normally go into .htaccess into the vhost config file. All good.
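
For anyone in the same boat, what ends up in the vhost is essentially the standard WordPress rewrite rules which normally live in .htaccess. A sketch, assuming mod_rewrite is loaded and with a placeholder DocumentRoot path (AllowOverride All is the directive that lets Apache read .htaccess, if you go that route instead):

<Directory "/web/mysite">
    AllowOverride All
    RewriteEngine On
    RewriteBase /
    RewriteRule ^index\.php$ - [L]
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteRule . /index.php [L]
</Directory>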

Increasing EBS filesystem size causes worrisome error

As mentioned above I used some of the filesystem for swap so I wanted to enlarge it.

$ sudo growpart /dev/xvda 1
CHANGED: partition=1 start=2048 old: size=16773120 end=16775168 new: size=25163743,end=25165791
root@ip-10-0-0-181:~/hosting$ sudo lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 12G 0 disk
└─xvda1 202:1 0 12G 0 part /
root@ip-10-0-0-181:~/hosting$ df -k
Filesystem 1K-blocks Used Available Use% Mounted on
devtmpfs 912292 0 912292 0% /dev
tmpfs 935468 0 935468 0% /dev/shm
tmpfs 935468 16800 918668 2% /run
tmpfs 935468 0 935468 0% /sys/fs/cgroup
/dev/xvda1 8376320 3997580 4378740 48% /
tmpfs 187092 0 187092 0% /run/user/0
tmpfs 187092 0 187092 0% /run/user/1001
root@ip-10-0-0-181:~/hosting$ sudo resize2fs /dev/xvda1
resize2fs 1.44.6 (5-Mar-2019)
resize2fs: Bad magic number in super-block while trying to open /dev/xvda1
Couldn't find valid filesystem superblock.

The solution is to use xfs_growfs instead of resize2fs, because the CentOS 8 root filesystem is XFS rather than ext4 (resize2fs only handles the ext family). And that worked!

$ sudo xfs_growfs -d /
meta-data=/dev/xvda1 isize=512 agcount=4, agsize=524160 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1
data = bsize=4096 blocks=2096640, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
data blocks changed from 2096640 to 3145467
root@ip-10-0-0-181:~/hosting$ df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 891M 0 891M 0% /dev
tmpfs 914M 0 914M 0% /dev/shm
tmpfs 914M 17M 898M 2% /run
tmpfs 914M 0 914M 0% /sys/fs/cgroup
/dev/xvda1 12G 3.9G 8.2G 33% /
tmpfs 183M 0 183M 0% /run/user/0
tmpfs 183M 0 183M 0% /run/user/1001

PHP found wanting by WordPress health status

Although my site seems to be humming along, now I have to find the more obscure errors. WordPress mentioned my site health has problems.

WordPress site health

I think gd, the missing PHP module it complains about, is used for graphics. I haven’t seen any negative results from this yet. I may leave it be for the time being.
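
To see whether the module is really missing, and to remind myself of the fix if it ever matters:

# check the loaded PHP extensions; no output means gd isn't there
php -m | grep -i gd
# on a packaged PHP the fix is typically: yum install php-gd
# on a self-compiled PHP, gd has to be enabled at configure time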

Let’s Encrypt certificate renewal stops working

This one is at the bottom because it only manifests itself after a couple of months – when the web site certificate either expires or is about to expire. Remember, this is a new server. I was lazy, of course, and just brought over the .acme.sh directory from the old server, hoping for the best. I didn’t notice any errors at first, but I eventually observed that my certificate was not getting renewed, even though it had only a few days of validity left.

To see what’s going on I ran this command by hand:

"/root/.acme.sh"/acme.sh --debug --cron --home "/root/.acme.sh"

acme.sh new-authz error: {"type":"urn:acme:error:badNonce","detail":"JWS has no anti-replay nonce","status": 400}

seemed to be the most important error I noticed. The general suggestion for this is an acme.sh --upgrade, which I did run. But the nonce error persisted. It tries 20 times then gives up.

— warning: I know enough to get the job done, but not enough to write the code. Proceed at your own risk —

I read some of my old blogs and played with the command

"/root/.acme.sh"/acme.sh --issue -d drjohnstechtalk.com -w /web/drjohns

My webroot is /web/drjohns, by the way. Now at least there was an error I could understand. I saw it trying to access something like http://drjohnstechtalk.com/.well-known/acme-challenge/askdjhaskjh

which produced a 404 Not Found error. Note the http and not https. Well, I hadn’t put much energy into setting up my http server. In fact it even has a different webroot. So what I did was to make a symbolic link

ln -s /web/drjohns/.well-known /web/insecure
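
Before re-running acme.sh it’s worth a quick sanity check that the plain-http site now serves the challenge directory through that symlink. A sketch; the test file name is arbitrary:

mkdir -p /web/drjohns/.well-known/acme-challenge
echo ok > /web/drjohns/.well-known/acme-challenge/test123
curl -i http://drjohnstechtalk.com/.well-known/acme-challenge/test123   # expect a 200, not a 404
rm /web/drjohns/.well-known/acme-challenge/test123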

I re-ran the acme.sh --issue command and…it worked. Maybe if I had issued a --renew it would not have bothered using the http server at all, but I didn’t see that switch at the time. So in my crontab, instead of how you’re supposed to do it, I’m trying it with these two lines:

# Not how you're supposed to do it, but it worked once for me - DrJ 8/16/20
22 2 * * * "/root/.acme.sh"/acme.sh --issue -d drjohnstechtalk.com -w /web/drjohns > /dev/null 2>&1
22 3 16 * * "/root/.acme.sh"/acme.sh --update-account --issue -d drjohnstechtalk.com -w /web/drjohns > /dev/null 2>&1

The update-account is just for good measure so I don’t run into an account expiry problem which I’ve faced in the past. No idea if it’s really needed. Actually my whole approach is a kludge. But it worked. In two months’ time I’ll know if the cron automation also works.

Why kludge it? I could have spent hours and hours trying to get acme.sh to work as it was intended. I suppose with enough persistence I would have found the root problem.

2021 update. In retrospect

In retrospect, I think I’ll try Amazon Linux next time! I had the opportunity to use it for my job and I have to say it was pretty easy to set up a web server which included php and MariaDB. It feels like it’s based on Red Hat, which I’m most familiar with. It doesn’t cost extra. It runs on the same small size on AWS. Oh well.

2022 update

I’m really sick of how far behind Red Hat is with their provided software. And since they’ve been taken over by IBM, the way they’ve killed CentOS is scandalous. So I’m inclined to go to a Debian-based system for my next go-around, which is much more imminent than I ever expected thanks to the discontinuation of support for CentOS. I asked someone who hosts a lot of WP sites and he said he’d use Ubuntu Server 22, PHP 8, MariaDB and Nginx. Boy. Guess I’m way behind. He says performance with PHP 8 is much better. I’ve always used apache, but I guess I don’t really rely on it for too many fancy features.

References and related

This blog post is about 1000% better than my own if all you want to do is install WordPress on Centos: https://blog.ssdnodes.com/blog/how-to-install-wordpress-on-centos-7-with-lamp-tutorial/

Here is WordPress’s own extended instructions for upgrading. Of course this should be your starting point: https://wordpress.org/support/article/upgrading-wordpress-extended-instructions/

I’ve been following the php instructions: https://www.php.net/manual/en/install.unix.apache2.php

Before you install WordPress. Requirements and such.

This old article of mine has lots of good tips: Compiling apache 2.4

This is a great article about how Linux systems use swap space and how you can re-configure things: https://www.maketecheasier.com/swap-partitions-on-linux/

I found this guide both helpful and informative as well: https://www.howtogeek.com/455981/how-to-create-a-swap-file-on-linux/

Amazon has this clear article on the linux commands you run after you extend an EBS volume. They worked for me: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/recognize-expanded-volume-linux.html

My Centos 8 AMI is centos-8-minimal-install-201909262151 (ami-01b3337aae1959300)

My old Let’s Encrypt article was helpful in straightening out my certificate errors.

Here’s the acme.sh installation guide for linux.