Categories
Raspberry Pi

Raspberry Pi Display blanks out after a few seconds

Intro

If you’ve seen any of my many Raspberry Pi photo frame posts, you’ve seen me mention the Pi Display. I recently began having problems with my setup, after many months. I didn’t know if it was the RPi or the display. My display was cracked from abuse anyway so I took the plunge and got a new one. And I re-imaged my micro SD card. I’ll be danged if my brand new display wasn’t blanking out, even as the RPi was still booting! That was frustrating. Here’s how I resolved that. I felt I should publish this article because it’s a little hard to find this exact problem and its solution described on the Internet.

The details

The good news is that the Pi Display is still available, incredibly. It comes with absolutely zero documentation. It refers you to element14 or newark web sites, I forget. But those sites just display very generic pages and I couldn’t manage to find any real documentation. I assume the product is considered too old or unsupported or whatever. But since I had bought it already and it seems exactly like the one I was replacing, I went ahead and connected it as before.

Note that the display did not blank out during the installation of Raspbian Lite. That’s a strong hint as to where the problem lies. But after installing that and rebooting, the Pi Display shows bands of white horizontal lines, becomes fainter, and then just blanks out altogether. Rather distressing. I don’t recall this problem from the older versions.

Equipment

I’m still using an old RPi model 3 I have lying around. I guess the new RPi 4 is OK, too. I believe it draws more power though so I actually kind of prefer the old RPi 3’s. You can manage to power both RPi 3 + the PiDisplay from a single power plug if you do it right.

The OS image is Raspberry Pi OS Lite, “bullseye.”

7″ Pi Display – see references.

Random power plug from Amazon. 1.8 amps.

The solution

First, note that I had installed Raspbian bullseye – the latest image as of January 2022. Also note that when connected to an HDMI display, this screen blank-out never occurs! What I needed to do was to connect the RPi to an HDMI display (my TV in my case) and do the sudo apt-get update and sudo apt-get upgrade commands. That recommendation you will find everywhere. In addition, I needed to run raspi-config. In there, navigate as follows:

Display Options > Screen blanking > Would you like to enable screen blanking?

Choose No
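
If you prefer doing it from the command line, there is a non-interactive mode of raspi-config which should accomplish the same thing. I’m going from memory here, so treat this as a sketch and double-check it against the menu:

# non-interactive equivalent of the menu choice above (1 = disable blanking, if memory serves)
sudo raspi-config nonint do_blanking 1
# check the kernel's console blanking timeout in seconds; 0 means the console never blanks
cat /sys/module/kernel/parameters/consoleblank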

I held my breath as I rebooted, and it seemed to want to blank, but then came back a half second later and the display has been illuminated ever since. Now it seems every time I reboot it blanks out as before, during boot, but then “remembers” it’s not supposed to do that and recovers within a second!

How to power both RPi and Pi Display from a single power source

You can connect a short micro USB cable from the USB output of the display to the power input of the RPi – that’s what I do. You could also use the included leads, but I haven’t messed with it.

However, note that – as I found out the hard way – not just any power adapter will do! I started with an Amazon adapter rated for 1 amp. Everything fired up fine, and even ran the slideshow for a bit, but then I guess the power demands were too great and the RPi spontaneously rebooted. So then I switched the system to a larger adapter, also from Amazon – 1.8 amps. So far so good with that one. If that failed, I was prepared with a 2 amp adapter – from a Samsung Galaxy phone. But it seems that will not be necessary.

And this is kind of the beauty of the RPi 3 – you can get away with stuff like that. Your RPi 4 could never do that. It’s a power hog.

Conclusion

The Pi Display is still viable! The price has gone up, but it works. As far as I know in fact it is the “official” display for an RPi. The RPi has a special tab to receive a ribbon cable that connects the two together.

The Pi Display will blank out after a few seconds with newer Raspberry Pi OS images, it seems, even though when paired with an HDMI display this never happens. But it’s easy enough to fix this frustrating problem. Just set screen blanking to No in raspi-config. This ought to be better documented on the Internet.

References and related

The Pi Display is a 7″ touch-sensitive LCD display. Its cost is now up to $90. I don’t recommend using its touch capability, but then I don’t like that feature on any display, come to think of it: Amazon.com: Raspberry Pi 7″ Touch Screen Display : Electronics But this latest Pi Display is brighter than my older one. That makes it more visible and enjoyable to watch.

One of my very many posts on creating a photo frame with an RPi and the PiDisplay (but also works fine on HDMI TV’s – I tested it!): https://drjohnstechtalk.com/blog/2021/01/raspberry-pi-photo-frame-using-the-pictures-on-your-google-drive-ii/

Categories
Python Raspberry Pi

Solution to NPR’s puzzle using python

Intro

Of course I’m using my Raspberry Pi. I installed a free dictionary (sudo apt-get install wamerican).

I wanted to practice python, but to not go crazy, so I cheated with some grooming with simple shell commands, e.g.,

$ egrep '^[a-z]{5}$' /usr/share/dict/words > words

That plucks out all the five-letter words and stores them in a local file called words. The original dictionary had about 100,000 “words.” There are about 4000 five-letter words left.

I called the python program v.py. Here it is:

import sys,os,re
input = sys.argv[1]
output = []
for character in input:
    number = ord(character) - 96
    output.append(number)
#print(output)
sum=0
for character in input:
    number = ord(character) - 96
    sum += number
#print(sum)
newsum = 0
ends = ''
if sum == 51:
    #print(input)
    for i in input[3:]:
        number = ord(i) - 96
        newsum += number
        ends = ends + i
        #print(i,number,newsum,ends)
    if newsum < 27:
        newl = chr(newsum+96)
        newword = re.sub(ends,newl,input)
        print(input,newword)

I ran it like this:

$ cat words|while read line; do python3 v.py $line >> all-possibilities; done

This plucks out all the five-letter words whose letter values add up to 51, then sums the values of the last two letters and creates a new word by replacing those last two letters with the single letter whose value equals that sum.

Drumroll

The results are in.

$ cat all-possibilities

allay allz
avoid avom
bergs berz
beset besy
blocs blov
bombs bomu
broke brop
bused busi
comas comt
condo cons
cribs criu
crude crui
cured curi
dines dinx
elite eliy
erect erew
fates fatx
files filx
flies flix
fluff flul
...
thick thin

Now, go write your own program. I will share the answer when it's too late to submit - it comes towards the end of the list. It sticks out like a sore thumb - nothing else comes close. So if you just persist you'll see it.

Conclusion

I learned a teensy bit of python, my motivation being to solve the current npr puzzle. And it worked! But my program was surprisingly slow. I guess I wrote it in an inefficient manner – though most of the delay is probably from launching a fresh python3 process for each of the roughly 4000 words, rather than from the script itself.

Categories
Admin Linux Raspberry Pi

Scripts checker

Intro

Imagine an infrastructure team empowered to create its own scripts to do such things as regularly update external dynamic lists (EDLs) or interact with APIs in an automated fashion. At some point they will want to have a meta script in place to check the output of all the automation scripts. This is something I developed to meet that need.

I am getting tired of perl, and I still don’t know python, so I decided to enhance my bash scripting for this script. I learned some valuable things along the way.

checklogs.sh

I call the script checklogs.sh. Here it is.

#!/bin/bash
# DrJ 2021/12/17, updated 2023/7/26
# it is desired to run this using the logrotate mechanism
#
# logrotate invokes with /bin/sh so we have to do this trick...
if [ ! "$BASH_VERSION" ] ; then
  exec /bin/bash "$0" "$@"
  exit
fi
DIR=$(cd $(dirname $0);pwd)
INI=$DIR/log.ini
DAY=2 # Day of week to analyze full week of logs. Monday is 1, Tuesday 2, etc
DEBUG=0
maxdiff=10
maxerrors=10
minstarts=10
TMPDIR=/var/tmp
cd $TMPDIR
recipients="drjohn@drjohns.com"
#
checklog2() {
  starts=0;ends=0;errors=0
  [[ "$DEBUG" -eq "1" ]] && echo ID, $ID, LPATH, $LPATH, START, $START, ERROR, $ERROR, END, $END
  LPATH="${LPATH}${wildcard}"
# the Ec switches mean (E) extended regular expressions, (c) count of matching lines
  zgrep -Ec "$START" ${LPATH}|cut -d: -f2|while read sline; do starts=$((starts + sline));echo $starts>starts; done
  zgrep -Ec "$END" ${LPATH}|cut -d: -f2|while read sline; do ends=$((ends + sline));echo $ends>ends; done
# Outlook likes to remove our newline characters - double up on them with this sed trick!
  zgrep -Ec "$ERROR" ${LPATH}|cut -d: -f2|sed 'a\\'|while read sline; do errors=$((errors + sline));echo $errors>errors; done
  exampleerrors=$(zgrep -E "$ERROR" ${LPATH}|head -10)
  starts=$(cat starts)
  ends=$(cat ends)
  errors=$(cat errors)
  info="${info}===========================================
$ID SUMMARY
  Total starts: $starts
  Total finishes: $ends
  Total errors: $errors
  Most recent errors: "
  info="${info}${exampleerrors}

"
  unset NEW
# get cumulative totals
  starttot=$((starttot + starts))
  endtot=$((endtot + ends))
  errortot=$((errortot + errors))
  [[ "$DEBUG" -eq "1" ]] && echo starttot, $starttot, endtot, $endtot, errortot, $errortot
  [[ "$DEBUG" -eq "1" ]] || rm starts ends errors
} # end of checklog2 function

checklog() {
# clear out stats and some variables
starttot=0;endtot=0;errortot=0;info=""
#this IFS and the following line are a trick to preserve those darn backslash characters in the input file
IFS=$'\n'
for line in $(<$INI); do
  [[ "$line" =~ ^# ]] || {
  pval=$(echo "$line"|sed s'/: */:/')
  lhs=$(echo $pval|cut -d: -f1)
  rhs=$(echo "$pval"|cut -d: -f2-)
  lhs=$(echo $lhs|tr [:upper:] [:lower:])
  [[ "$DEBUG" -eq "1" ]] && echo line is "$line", pval is $pval, lhs is $lhs, rhs is "$rhs"
  if [ "$lhs" = "identifier" ]; then
    [[ "$DEBUG" -eq "1" ]] && echo matched lhs = identifier section
    [[ -n "$NEW" ]] && checklog2
    ID="$rhs"
  fi
  [[ "$lhs" = "path" ]] && LPATH="$rhs" && NEW=false
  [[ "$lhs" = "error" ]] && ERROR="$rhs"
  [[ "$lhs" = "start" ]] && START="$rhs"
  [[ "$lhs" = "end" ]] && END="$rhs"
  }
done
# call one last time at the end
checklog2
} # end of checklog function

anomalydetection() {
# a few tests - you can always come up with more...
  diff=$((starttot - endtot))
  [[ $diff -gt $maxdiff ]] || [[ $starttot -lt $minstarts ]] || [[ $errortot -gt $maxerrors ]] && {
    ANOMALIES=1
    [[ "$DEBUG" -eq "1" ]] && echo ANOMALIES, $ANOMALIES, starttot, $starttot, endtot, $endtot, errortot, $errortot
  }
} # end function anomalydetection

sendsummary() {
  subject="Weekly summary of sesamstrasse automation scripts - please review"
  [[ -n "$ANOMALIES" ]] && subject="${subject} - ANOMALIES DETECTED PLEASE REVIEW CAREFULLY!!"

  intro="This summarizes the results from the past week of running automation scripts on sesamstrasse.
Please check that values seem reasonable. If things are out of range, check with Heiko or look at
sesamstrasse yourself.

"

  [[ "$DEBUG" -eq "1" ]] && echo subject, $subject, intro, "$intro", info, "$info"
  [[ "$DEBUG" -eq "1" ]] && args="-v"
  echo "${intro}${info}"|mail "$args" -s "$subject" "$recipients"
} # end function sendsummary

# MAIN PROGRAM
# always check the latest log
checklog
anomalydetection

# only check all logs if it is certain day of the week. Monday = 1, etc
day=$(date +%u)
[[ "$DEBUG" -eq "1" ]] && echo day, $day
[[ $day -eq $DAY ]] || [[ -n "$ANOMALIES" ]] && {
  [[ "$DEBUG" -eq "1" ]] && echo calling checklog with wildcard set
  wildcard='*'
  checklog
  sendsummary
}

[[ "$DEBUG" -eq "1" ]] && echo message so far is "$info"

log.ini

# The suggestion: To have a configuration file with log identifiers
#(e.g. “anydesk-edl”) and per identifier: log file path (“/var/log/anydesk-edl.log”),
# error pattern (“.+\[Error\].+”), start pattern (“.+\[Notice\] Starting$”) end pattern (“.+\[Notice\] Done$”).
#Then just count number of executions (based on start/end) and number of errors.

# the start/end/error values are interpreted as extended regular expressions - see regex(7) man page
identifier: anydesk-edl
path: /var/log/anydesk-edl.log
error: .+\[Error\].+
start: .+\[Notice\] Starting$
end: .+\[Notice\] Done$

identifier: firewall-requester-to-edl
path: /var/log/firewall-requester-to-edl.log
error: .+\[Error\].+
start: .+\[Notice\] Starting$
end: .+\[Notice\] Done$

identifier: sase-ips-to-bigip
path: /var/log/sase-ips-to-bigip.log
error: .+\[Error\].+
start: .+\[Notice\] Starting$
end: .+\[Notice\] Done$

What this script does

So when the guy writes an automation script, he is so meticulous that he follows the same convention and hooks it into the syslogger to create uniquely named log files for it. He writes out a [Notice] Starting when his script starts, and a [Notice] Done when it ends. And errors are reported with an [Error] tag plus details. Some of the scripts are called hourly. So we agreed to have a script that checks all the other scripts once a week and sends a summary email of the results. I look to see that the count of starts and ends is roughly the same, and I report back the ten most recent errors from a given script. I also look for other basic things. That’s the purpose of the function anomalydetection in my script. It’s just basic tests. I didn’t want to go wild.

But what if there was a problem with one of the scripts, wouldn’t we want to know sooner than possibly six days later? So I decided to have my script run every day, but only send email on the off days if an anomaly was detected. This made the logic a tad more complex, but nothing that bash and I couldn’t handle. It fits the need of an overworked operational staff.

Techniques I learned and re-learned from developing this script

cron scheduling – more to it than you thought

I used to naively think that it suffices to look into the crontab files of all users to discover all the scheduled processes. What I missed is thinking about how logrotate works. How does it work? Turns out there is another section of cron for jobs run daily, weekly and monthly. logrotate is called from cron.daily.
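
To see where that hook lives, take a look at /etc/crontab on a stock Debian or Raspberry Pi OS system. It contains run-parts lines along these lines (the exact times vary by install):

17 *    * * *   root    cd / && run-parts --report /etc/cron.hourly
25 6    * * *   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
47 6    * * 7   root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )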

logrotate – potential to do more

The person who wrote the automation scripts is a much better scripter than I am. I didn’t want to disappoint so I put in the extra effort to discover the best way to call my script. I reasoned that logrotate would offer the opportunity to run side scripts, and I was absolutely right about that! You can run a script just before the logs rotate, or just after. I chose the just before timing – prerotate. In actual fact logrotate calls the prerotate script with all the log files to be rotated as arguments, which you notice we don’t take advantage of, because at the time we were unsure how we were going to interface. But I figure let’s just leave it now. man logrotate to learn more.
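
For reference, the hook looks roughly like this in a logrotate config file. The log path and script path here are purely illustrative, not my actual setup:

/var/log/anydesk-edl.log {
  weekly
  rotate 8
  compress
  missingok
  sharedscripts
  prerotate
    /usr/local/bin/checklogs.sh
  endscript
}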

By the way although I developed on a generic Debian system, it should work on a Raspberry Pi as well since it is Debian based.

BASH – the potential to do more, at a price

You’ll note that I use some bash-specific extensions in my script. I figure bash is near universal, so why not? The downside is that when logrotate invokes an external script, it calls it using the old-fashioned shell, /bin/sh. And my script does not work. Except I learned this useful trick:

if [ ! "$BASH_VERSION" ]; then
  exec /bin/bash "$0" "$@"
  exit
fi

Note this is legit syntax in SHELL and a legit conditional operator expression. So it means if you – and by you I mean the script talking about itself – are invoked via SHELL, then invoke yourself via BASH and exit the parent afterwards. And this actually does work (To do: have to check which occurs first, the syntax checking or the command invocation).

More

Speaking of that conditional, if you want to know all the major comparison tests, do a man test. I have come around to using the double bracket expressions [[ more and more, though they are BASH specific I believe. The double bracket can be followed by a && and then an open curly brace { which can introduce a block of code delimited of course by a close curly brace }. So for me this is an attractive alternative to SHELL’s if conditional then code block fi syntax, and probably just slightly more compact. Replace && with || to execute the code block when the condition does not evaluate to be true.
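
To make that concrete, here are the two styles side by side in a toy example:

# classic sh style
if [ "$DEBUG" -eq 1 ]; then
  echo debugging is on
fi

# bash double-bracket style with a code block
[[ "$DEBUG" -eq 1 ]] && {
  echo debugging is on
}

# the negated form - the block runs when the test fails
[[ -f /tmp/lockfile ]] || {
  echo no lock file found
}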

zgrep is grep for compressed files, but we knew that right? But it’s agnostic – it works like grep on both compressed and uncompressed files. That’s important because with rotated logs you usually have a combination of both.

Now the expert suggested a certain regular expression for the search string. It wasn’t working in my first pass. I reasoned that zgrep may have a special mode to act more like egrep which supports extended regular expressions (EREs). EREs aren’t really the same as perl-compatible regular expressions (PCREs) but for this kind of simple stuff we want, they’re close enough. And sure enough zgrep has the -E option to force it to interpret the expression as an ERE. Great.

RegEx

So in the log.ini file the regular expression has a \[…\] syntax. The backslash is actually required because otherwise the […] syntax is interpreted as a character class, where any one of the characters between the brackets is allowed to match a single character in the string being matched. That’s a very different match!

My big thing was – will I have to further escape those lines read in from log.ini, perhaps to replace a \[ with a \\[? Stuff like that happens. I found as long as I used those double quotes around the variables (see below) I did not need to further escape them. Similarly, I found that the EREs in log.ini did not need to be placed between quotes though the guy initially proposed that. It looks cleaner without them.

Variable scope

I wasted a lot of time on a problem which I thought may be due to some weird variable scoping. I’ve memorized this syntax cat file|while read line; do etc, etc so I use it a lot in my tiny scripts. It’s amazing I got away with it as much as I have because it has one huge flaw: if you start using variables within the loop you can’t really suck them out, unless you write them to a file. So while at first I thought it was a problem of variable scoping – why do my loop variables have no values when the code comes out of the loop? – it really isn’t that issue. It’s that the pipe, |, creates a subshell – a forked process – which has its own copies of the variables. So to avoid that I switched to this weird syntax for line in $(<$INI); do etc. So it does the line-by-line file reading as before but without the pipe and hence without the “variable scope” problem.
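
Here is a tiny demonstration of the trap, plus another common way around it – redirecting the file into the loop also keeps everything in the current shell, though in checklogs.sh I happened to use the for loop form instead:

count=0
printf 'a\nb\nc\n' > /tmp/demo
cat /tmp/demo | while read line; do count=$((count + 1)); done
echo "after piped loop: $count"        # prints 0 - the loop ran in a subshell

count=0
while read line; do count=$((count + 1)); done < /tmp/demo
echo "after redirected loop: $count"   # prints 3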

But in another place in the script – where I add up numbers – I felt I could not avoid the pipe. So there I do write the value to a file.

The conclusion is that – subshell caveat aside – all variables have global scope, and that’s just as it should be, provided you know what you’re doing. Hey, I’m from the old Fortran 66/77 school where we were writing Monte Carlos with thousands of lines of code and dozens of variables in a COMMON block (global scope), and dozens of contributors. It worked just fine thank you very much. Because we knew what we were doing.

Adding numbers in bash

Speaking of adding, I can never remember how to add numbers (integers). In bash you can do starts=$((starts + sline)) , where starts and sline are integers. At least this worked in Debian linux Stretch. I did not really get the same to work so well in SLES Linux – at least not inside a loop where I most needed it.

When you look up how to add numbers in bash there are about a zillion different ways to do it. I’m trying to stick to the built-in way.
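
For the record, here are a few of those zillion ways. The first is the built-in way I try to stick with:

n=5; m=7
sum=$((n + m))          # arithmetic expansion - the built-in, POSIX way
((sum = n + m))         # bash arithmetic command
sum=$(expr $n + $m)     # old-school external expr command
let sum=n+m             # bash let builtin
echo $sum               # 12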

Sending mail in Debian linux

You probably need to configure a smarthost if you haven’t used your server to send emails up until now. You have to reconfigure the exim4 package:

dpkg-reconfigure exim4-config

This also can be done on an RPi if you ever find you need it to send out emails.
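
Once the smarthost is configured, a quick test from the command line tells you whether it took. Substitute your own address, of course:

echo "test from my server" | mail -s "exim4 smarthost test" someone@example.com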

Variables

If a variable includes linebreaks and you want to see that, put it between double quotes, e.g., echo "$myVariableWithLineBreaks". If you don’t do that it seems to remove the linebreaks. Use of the double quotes also seems to help avoid mangling variables that contain meta characters found in regular expressions such as .+ or \[.
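
A quick illustration:

myVar="first line
second line"
echo $myVar       # linebreaks collapsed: first line second line
echo "$myVar"     # linebreaks preserved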

Result of executing the commands

I grew up using the backtick metacharacter, `, to indicate that the enclosed command should be executed. E.g., old way:

DIR=`dirname $0`

But when you think about it, that metacharacter is small, and often you are unlucky and it sits right alongside a double quote or a single quote, making for a visual trainwreck. So this year I’ve come to love the use of $(command to be executed) syntax instead. It offers much improved readability. But then the question became, could I nest a command within a command, e.g., for my DIR assignment? I tried it. Now this kind of runs counter to my philosophy of being able to examine every single step as it executes because now I’m executing two steps at once, but since it’s pretty straightforward, I went for it. And it does work. Hence the DIR variable is assigned with the compound command:

DIR=$(cd $(dirname $0);pwd)

So now I wonder if you can go more than two levels deep? Each level is an incrementally bad idea – just begging for undetectable mistakes, so I didn’t experiment with that!

By the way the reason I needed to do that is that the script jumps around to another directory to create temporary files, and I wanted it to be able to reference the full path to its original directory, so a simpler DIR=$(dirname $0) wasn’t going to cut it if it’s called with a relative path such as ./checklogs.sh

Debugging

I make mistakes left and right. But I know what results I expect. So I generously insert statements as variables get assigned to double check them, prefacing each with a conditional, [[ $DEBUG -eq 1 ]] &&, so those values only print in debug mode. As I develop, DEBUG is set to 1. When it’s finally working, I usually set it to 0, though in some scripts I never quite reach that point. It looks like a lot of typing, but it’s really just cut and paste and not over-thinking it for the variable dump, so it’s very quick to type.

Another thing I do when I’m stuck is to watch as the script executes in great detail by appending -xv to the first line, e.g., #!/bin/bash -xv. But the output is always confusing. Sometimes it helps though.

Compensating for Outlook’s newline handling

Outlook is too clever for its own good and “helpfully” removes what it considers extra linefeeds. Thanks Microsoft. Really helpful. So if you add extra linefeeds you can kind of get around that, but then you go from 0 linefeeds in the displayed output to two. Again, thanks Microsoft.

Anyway, I discovered sed 'a\\' is a way to add an extra linefeed to my error lines, where the problem was especially acute and noticeable.

Techniques I’d like to use in the future

You can assign a function to a variable and then call that variable. I know that will have lots of uses but I’m not used to the construct. So maybe for my next program.

Conclusion

This fairly simple yet still powerful script has forced me to become a better BASH shell scripter. In this post I review some of the basics that make for successful scripting using the BASH shell. I feel the time invested will pay off as there are many opportunities to write such utility scripts. I actually prefer bash to perl or python for these tasks as it is conceptually simpler, less ambitious, less pretentious, yes, far less capable, but adequate for my tasks. A few rules of the road and you’re off and running! bash lends itself to very quick testing cycles. Different versions of bash introduced additional features, and that gets trying. I hope I have found and utilized some of the basic stuff that will be available on just about any bash implementation you are likely to run across.

References and related

The nitty gritty details about BASH shell can be gleaned by doing a man bash. It seems daunting at first but it’s really not too bad once you learn how to skim through it.

This post shows how to properly use the syslog package within python to create these log files that I parse.

Categories
Consumer Interest Consumer Tech Uncategorized

Screen Mirroring to Your Smart TV

With the advancements in technology, there are now many features that allow for seamless connection between devices using wireless connections. One of the things that allow this is smart TVs. Currently, the market for them is dominated by South Korean company Samsung. 39% of all sales come from them, which is a huge number in comparison to the 19% from LG and 9.3% from Sony.

There are also devices you can attach to your traditional TV to give it the functions of a smart one, so you won’t have to spend too much to upgrade. With this kind of TV, you can do many different things like stream from platforms like Netflix or Hulu, or even mirror the screen of a mobile device.

What is screen mirroring?


Screen mirroring is basically the ability to project what is on one device to a TV display. This is normally done through the internet and is comparable to connecting a laptop to a monitor using an HDMI cable. As mentioned earlier, there are different ways you can do this. Some TVs have built-in software that allows you to do this, while some use different hardware attachments.

Examples of these accessories are the Amazon Firestick, Apple TV, and Google Chromecast. The last two are some of the most popular ones on the market right now and they both have their own pros and cons. For those already in the Apple ecosystem, using the attachment from the same company will make connecting them easier. If you are looking for a cheaper alternative that can work on almost any device, the Chromecast is your best bet.

These accessories work because of their internet connection. The circuitry is specially designed to deliver signal integrity which ensures that digital and analog signals do not become distorted during propagation. Moreover, this guarantees that the signal can be recovered if temporarily lost, and that screen mirroring is smooth and won’t experience delays.

How to mirror your screen

Make sure your devices are connected to the same internet source


As the feature heavily relies on connection, the only way you can display what is on your other device is by being on the same internet source. Go to the settings of both of your gadgets and connect them to the same wi-fi line. This will make them identifiable to each other and make mirroring possible.

Read the instructions


If you are using a TV box or tool like the Apple TV or Chromecast, make sure to read the connection instructions in the manual. For example, the former requires you to use AirPlay and the manual should teach you which buttons to press on your phone or laptop. For the latter device, you might need a third-party app like the Google Home one to be able to get the accessory to mirror. Be sure to check the instructions given so you can make it a more seamless experience.

Check your Wi-Fi’s integrity


Because mirroring heavily relies on your internet, if the integrity (or speed) that your Wi-Fi is giving out is not enough or lacks bandwidth, you will have a lagging experience. Before you start, try to check the speed of your internet to be sure that it is strong enough. You can simply go on speed test sites on your browser. A good speed would be at least 25 Mbps, so if it is lower than that, you might not be able to connect or mirror easily.

Screen mirroring is just one-way technology has made life more connected. Gone are the days when other wires and connections were needed. The internet now enables you to perform tasks like projecting from a smaller device to a bigger one, hassle-free.

References and related

Some Firestick problems I’ve encountered are discussed in this post.

Categories
Admin Firewall Linux

Linux: how to estimate bandwidth usage to a particular subnet

Intro

Let’s say someone asks you to estimate the total bandwidth used by a particular subnet, or a particular service such as https on port 443. I provide a crude way to do that using tcpdump on a not-too-busy server.

The code

I call it bandwidth.sh. By the way, I ran it on a Checkpoint Gaia appliance so it works there as well.

#!/bin/bash
# DrJ 11/21
sleep=120
file=/tmp/ctpackets
sum() {
sum=0
cat $file|while read line; do
 length=$(echo $line|awk '{print $17}'|sed 's/)//')
 sum=$(expr $sum + $length)
 echo $sum
done
}
while /bin/true; do
tcpdump -c1000 -v -nni eth1 net 216.71/16 > $file
#10:29:49.471455 IP (tos 0x0, ttl 126, id 32399, offset 0, flags [none], proto: UDP (17), length: 105) 10.32.25.126.3391 > 216.71.170.32.61445: UDP, length 77
total=$(sum|tail -1)
t0=$(head -1 $file|awk '{print $1}')
t1=$(tail -1 $file|awk '{print $1}')
h0=$(echo $t0|cut -d: -f1|sed 's/^0//')
h1=$(echo $t1|cut -d: -f1|sed 's/^0//')
m0=$(echo $t0|cut -d: -f2|sed 's/^0//')
m1=$(echo $t1|cut -d: -f2|sed 's/^0//')
s0=$(echo $t0|cut -d: -f3|sed 's/^0//')
s1=$(echo $t1|cut -d: -f3|sed 's/^0//')
s0=$(echo $s0|cut -d\. -f1|sed 's/^0//')
s1=$(echo $s1|cut -d\. -f1|sed 's/^0//')
[ -z "$h0" ] && h0=0
[ -z "$h1" ] && h1=0
[ -z "$m0" ] && m0=0
[ -z "$m1" ] && m1=0
[ -z "$s0" ] && s0=0
[ -z "$s1" ] && s1=0
t0secs=$((3600*$h0+60*$m0+$s0))
t1secs=$((3600*$h1+60*$m1+$s1))
#echo total bytes: $total
elapsed=$(($t1secs-$t0secs))
#echo elapsed time: $elapsed
kbps=$(($total*8/$elapsed/1000))
echo $kbps kbps at $(date)
sleep $sleep
done

The idea

Running tcpdump with the -v switch gives us the packet length. We find that length and sum it up. Here we used a filter expression of 216.71/16 to capture only the traffic from that subnet.
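
The filter expression is whatever tcpdump accepts, so you can just as easily measure a single service instead of a subnet. For example, these variations on the capture line should work (the interface and addresses are from my example – adjust to taste):

tcpdump -c1000 -v -nni eth1 'tcp port 443'
tcpdump -c1000 -v -nni eth1 'net 216.71.0.0/16 and tcp port 443'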

The number of packets to capture has to be tuned to how busy the interface gets. Right now it’s set to capture only 1000 packets. And you see my crude timings are truncated at the second. So 1000 packets in one second, or about 1.5 MBytes/sec = 12 Mbps, is the maximum sensitivity of this approach. I doubt it will really work for interfaces with more than 100 Mbps, even after you scale up the count (and don’t forget to change the denominator in the kbps line!).

Here’s a sample output:

1000 packets captured
2002 packets received by filter
0 packets dropped by kernel
5 kbps at Wed Nov 3 12:09:45 EDT 2021

I think it’s important to note the number of packets dropped by the kernel. So if it gets too busy, as I understand it, it will at least try to tell you that it couldn’t capture all the data, and at that point you can no longer trust this method. Perhaps with enhanced statistical methods it could be salvaged.

I don’t run it continuously to also give the kernel a breather. It probably doesn’t make much difference, but every two minutes seems plenty frequent to me…

Conclusion

We have demonstrated a crude but better-than-nothing script to calculate bandwidth for a given tcpdump filter expression. It won’t win any awards, but it contains some worthwhile ideas. And it seems to work at low bandwidth levels.

Categories
Internet Mail Linux Raspberry Pi

From Audio Recording to YouTube with two button clicks and a Raspberry Pi

Intro

This post builds on the success of previous posts and uses elements from them. I don’t honestly expect anyone to repeat all the ingredients I have assembled here. But I have created them in a fairly modular way so you can pick out those elements which will help your project.

But, it is true, I have gotten the user experience of recording audio from, e.g., a band practice, down to a click of the ENTER button to start the recording, another click to stop it, and a click of the UP ARROW button to process the audio recording – turn it into a video – and upload it to YouTube, mark it as UNLISTED, and send the link to me in an email. Pretty cool if I say so myself. I am refining things as I write this to make it more reliable.

This write-up is not terribly detailed. It presumes at least a medium skill level with linux.

Ingredients
  • RPi 3 or RPi 4
  • Raspberry OS desktop running Pixel desktop environment
  • tiger VNC, i.e., the package tigervnc-scraping-server
  • chromium-browser (but it comes with the Desktop install)
  • xdotool (apt-get install xdotool)
  • xsel (apt-get install xsel)
  • YouTube account
  • crontab entries – see below
  • you do not need an HDMI display, except for the OS setup
  • a vnc viewer such as Real VNC
  • exim4 and bsd-mailx packages
The scripts

recordswitch.sh

#!/bin/bash
# DrJ 8/2021
# Control the livestream of audio to youtube
# works in conjunction with an attached keyboard
# I use bash interpreter to give me access to RegEx matching
HOME=/home/pi
log=$HOME/audiocontrol.log
program=ffmpegwireless9.sh
##program=tst.sh # testing
PGM=$HOME/$program
# de-press ENTER button produces this:
matchE="1, 28, 0"
# up arrow
matchU="1, 103, 0"

epochsOld=0
cutoff=3 # seconds
DEBUG=1
ledtime=10
#
echo "$0 starting monitoring at "$(date)
# Note the use of script -q -c to avoid line buffering of the evread output
script  -q -c $HOME/evread.py /dev/null|while read line; do
[[ $DEBUG -eq 1 ]] && echo line is $line
# seconds since the epoch
epochs=$(date +%s)
elapsed=$((epochs-$epochsOld))
if [[ $elapsed -gt $cutoff ]]; then
  if [[ "$line" =~ $matchE ]]; then
# ENTER button section - recording
    echo "#################"
    echo We caught this input: $line at $(date)
# see if we are already running our recording program or not
    pgrep -f $program>/dev/null
# 0 means it's been found
    if [ $? -eq 0 ]; then
# kill it
      echo KILLING $program
      pkill -9 -f $program; pkill -9 arecord; pkill -9 ffmpeg
      pkill -9 -f blinkLED
      echo Shine the PWR LED
      $HOME/shineLED.sh
    else
# start it
      echo Blinking PWR LED
      $HOME/blinkLED.sh &
      echo STARTING $PGM
      $PGM > $PGM.log.$(date +%m-%d-%y:%H:%M) 2>&1 &
    fi
    epochsOld=$epochs
  elif [[ "$line" =~ $matchU ]]; then
# UP ARROW button section - processing
    echo "###########"
    echo processing commencing at $(date)
    $HOME/blinktwiceLED.sh &
    echo start processing of the recording
    $HOME/process.sh >> process.log 2>&1
    pkill -9 -f LED
    $HOME/shineLED.sh
    epochsOld=$epochs
  fi
[[ $DEBUG -eq 1 ]] && echo No action taken. Continue to listen
fi
done

ffmpegwireless9.sh

#!/bin/sh
ffmpeg \
-thread_queue_size 4096 \
-f alsa -i plughw:1,0 \
-thread_queue_size 64 \
-f lavfi -i color=color=darkgray \
-c:v libx264 -pix_fmt yuv420p -g 18  -x264opts no-scenecut -b:v 50k \
-bufsize 512k \
-acodec libmp3lame -ar 44100 \
-threads 8 \
-b:a 128k \
-r 5 \
-s 480x320 \
-flush_packets 1 \
-f mp3 file:record-$(date +%m-%d-%y-%H-%M).mp3 \
< /dev/null

mp32flv.sh

#!/bin/sh
# DrJ 10/2021
#
# Note that ffmpeg runs at ~ 4 x real-time when it is producing this flv video file
#
line=$1
time=$(ffprobe -v error -show_entries format=duration   -of default=noprint_wrappers=1:nokey=1 file:${line}|tail -1)
echo recording time: $time s
echo $time > duration
video=$(echo ${line}|sed 's/mp3/flv/')
  ffmpeg \
 -i file:${line} \
 -f lavfi -i color=color=darkgray \
 -c:v libx264 -pix_fmt yuv420p -g 18  -x264opts no-scenecut -b:v 50k \
 -bufsize 512k \
 -acodec libmp3lame -ar 44100 \
 -threads 8 \
 -b:a 128k \
 -r 5 \
 -s 480x320 \
 -t $time \
 -f flv file:${video} \
 < /dev/null

auto-upload.sh

#!/bin/sh
# automate upload of YouTube videos
#
# define some functions
randomsleep(){
# sleep random amount between 1.5 to 2.5 seconds
t10=$(shuf -n1 -i 15-25)
t=$(echo $t10/10|bc -l)
sleep $t
}
drjtool(){
randomsleep
xdotool $1 $2 $3
randomsleep
}

echo Start video upload
echo set display to main display
export DISPLAY=:0
# launch chromium
echo launch chromium
chromium-browser --kiosk https://studio.youtube.com/ > /dev/null 2>&1 &
sleep 25
echo move to CREATE button
drjtool mousemove 579 19
echo click on CREATE button
drjtool click 1
echo move to Upload videos
drjtool mousemove 577 34
echo click Upload videos
drjtool click 1
echo move to SELECT FILES
drjtool mousemove 305 266
echo click on SELECT FILES
drjtool click 1
echo move mouse to Open button
drjtool mousemove 600 396
echo click open and pause a bit for video upload
drjtool click 1
sleep 20
secs=$(cat duration)
moretime=$(echo $secs/60|bc -l)
sleep $moretime
echo "mouse to NEXT button (accept defaults)"
drjtool mousemove 558 386
echo click on NEXT
drjtool click 1
echo move to radio button No it is not made for kids
drjtool mousemove  117 284
echo click radio button
drjtool click 1
echo back to NEXT button
drjtool mousemove 551 384
echo click NEXT
drjtool click 1
echo 'click NEXT again (then says no copyright issues found)'
drjtool click 1
echo click NEXT again
drjtool click 1
echo move to Unlisted visibility radio button
# [note that public would be drjtool mousemove 142 235, private is 142 181]
drjtool mousemove 142 208
echo click Unlisted
drjtool click 1
echo move to copy icon
drjtool mousemove 532 249
echo echo copy URL to clipboard
drjtool click 1
echo move to Save
drjtool mousemove 551 384
echo click Save
drjtool click 1
echo move to CLOSE
drjtool mousemove 434 304
echo click close
drjtool click 1

echo video URL
xsel -b|tee clipboard
echo '
kill chromium browser'
sleep 25
echo kill chromium
kill -9 %1
sleep 2
url=$(cat clipboard|xargs -0 echo)
echo url is $url

process.sh

#!/usr/bin/bash
HOME=/home/pi
sleeptime=5
cd $HOME
# loop over all mp3 files in home directory
ls -1 record*mp3|while read line;do
 echo working on $line at $(date)
 video=$(echo ${line}|sed 's/mp3/flv/')
 echo creating flv video file $video
# create the video first
 ./mp32flv.sh $line
 echo move $line to mp3 directory
 [[ -d mp3s ]] || mkdir mp3s
 mv $line mp3s
 echo mv flv to upload directory
 [[ -d 00uploads ]] || mkdir 00uploads
 mv $video 00uploads
 echo start the upload
 ./auto-upload.sh
 echo get the url to this video on YouTube
 url=$(cat clipboard|xargs -0 echo)
 echo test that it worked
 if [[ ! "$url" =~ "http" ]]; then
   echo FAIL. Try once again
   ./auto-upload.sh
 fi
 echo send mail to Drj
 ./announceit.sh
 echo move video $video to flvs directory
 mv ./00uploads/$video flvs
 echo sleep for a bit before starting the next one
 sleep $sleeptime
done
echo All done with processing at $(date)

blinkLED.sh

#!/bin/sh
# DrJ 8/30/2021
# https://www.jeffgeerling.com/blogs/jeff-geerling/controlling-pwr-act-leds-raspberry-pi
# put LED into GPIO mode
echo gpio | sudo tee /sys/class/leds/led1/trigger > /dev/null
# flash the bright RED PWR (power) LED quickly to signal whatever
while /bin/true; do
  echo 0|sudo tee /sys/class/leds/led1/brightness > /dev/null
  sleep 0.5
  echo 1|sudo tee /sys/class/leds/led1/brightness > /dev/null
  sleep 0.5
done

shineLED.sh

#!/bin/sh
# DrJ 8/30/2021
# https://www.jeffgeerling.com/blogs/jeff-geerling/controlling-pwr-act-leds-raspberry-pi
# put LED into GPIO mode
echo gpio | sudo tee /sys/class/leds/led1/trigger > /dev/null
# turn on the bright RED PWR (power) LED
echo 1|sudo tee /sys/class/leds/led1/brightness > /dev/null

blinktwiceLED.sh

#!/bin/sh
# DrJ 8/30/2021
# https://www.jeffgeerling.com/blogs/jeff-geerling/controlling-pwr-act-leds-raspberry-pi
# put LED into GPIO mode
echo gpio | sudo tee /sys/class/leds/led1/trigger > /dev/null
# flash the bright RED PWR (power) LED quickly to signal whatever
while /bin/true; do
  echo 0|sudo tee /sys/class/leds/led1/brightness > /dev/null
  sleep 3
  echo 1|sudo tee /sys/class/leds/led1/brightness > /dev/null
  sleep 0.35
  echo 0|sudo tee /sys/class/leds/led1/brightness > /dev/null
  sleep 0.35
  echo 1|sudo tee /sys/class/leds/led1/brightness > /dev/null
  sleep 0.35
done

announceit.sh

#!/bin/sh
url=$(cat clipboard|xargs -0 echo)
mailx -r yourAddress@whatever.com -s "New youtube video $url posted" yourAddress@whatever.com<<EOF
Check out our latest recording:

      $url

Regards,
Yourself
EOF

crontab entries

@reboot sleep 15; /home/pi/recordswitch.sh > recordswitch.log 2>&1
# launch vnc server on display 1
@reboot sleep 65;x0vncserver -passwordfile ~/.vnc/passwd -display :0 >  x0vncserver.log 2>&1

The idea

The recordswitch.sh script waits for input from the remote controller. It is programmed to kick off ffmpegwireless9.sh if the ENTER button is pushed, or process.sh if the UP ARROW button is pushed.

For testing purposes you may want to run process.sh by hand, i.e., ./process.sh, while you are viewing the display using a VNC viewer alongside the terminal screen.

The scripts are quite verbose and give lots of helpful output in their log files.

Upgrading from Raspberry Pi Lite to Raspberry Pi Desktop

I always like to start my RPi OS install with Raspberry Pi Lite. But to follow the upload parts of this post you really need Raspberry Pi Desktop. This article is a good write-up of how to upgrade to Desktop from Lite: How To Upgrade Raspbian Lite to Desktop (PIXEL, KDE, MATE, …) – RaspberryTips

Tips

Unfortunately the plugin I use inserts a blank line at the top. Those should all be removed.

After getting all the scripts, make them all executable in one go with a command such as chmod +x *sh

To read the input from the remote controller you need to set up evread.py and there may be some python work to do. This post has those details.

The chromium browser needs to be run by hand one time over your VNC viewer. Its size has to be shrunk to 50% by running CTRL SHIFT – about four times. You need to log in to your YouTube or Gmail account so it remembers your credentials. And you need to go through the motions of uploading a video so it knows to use the 00uploads directory next time.

Don’t run a recording and an upload at the same time. I think the CPU would be taxed so I did not test that out. But you can record one day – even multiple recordings, and upload them a day or days later. That should work OK. It just processes the files one at a time, hopefully (untested).

announceit.sh is pretty dodgy. You have to understand SMTP mail somewhat to have a fighting chance for that to work. Fortunately I was an SMTP admin previously. So my ISP, Optimum, has a filter in place which prevents ordinary residential customers from sending out normal email to arbitrary SMTP addresses. However, to my surprise, they do run a mail relay server which you can connect to on the standard tcp port 25. I don’t really want to give it away but you can find it with the appropriate Internet search. I assume it is only for Optimum customers. Perhaps your ISP has something similar. So after you install exim4, you can configure a “smarthost” with the command dpkg-reconfigure exim4-config. But, again, you have to know a bit what you are doing. Suffice it to say that I got mine to work.

But for everyone else who can’t figure that out, just comment out this line in process.sh: ./announceit.sh. Put a # character at the front of the line to do that.

I have really only tested recordings of up to 45 minutes. I think an hour should be fine. I would suggest to break it up for longer.

The files can take a lot of space so you may need to clean up older files if you are a frequent user.

I’ve had about one failure during the upload out of about seven tests. So reliability is pretty good, but probably not perfect.

Why not just livestream? True, it’s sooo much easier. And I’ve covered how to do that previously. But, maybe it’s my WiFi, but its reliability was closer to 50% in my actual experience. I needed greater reliability, and it turns out I didn’t need the live aspect of the whole thing, just the recording for later critiquing.

The recording approach I’ve taken uses ffmpeg to directly produce an mp3 file – it’s more compact than a WAV file. In and of itself the mp3 file may be useful to you, e.g., to include as an attachment in email or whatever – for instance, for a single song. All the mp3s are finally stored in a folder called mp3s, and all the videos are finally stored in a folder called flvs.

About that upload

The upload itself is super awesome to watch. I captured an actual automated upload with the script running on the right and the X Window display on the left in this YouTube video.

So the upload part was covered in this previous post.

Fixing recording which sounds like chipmunks

Somehow I managed to use some of these tools the other day and my mp3s ended up sounding like Alvin and the Chipmunks! I wondered if there was a way to recover them. I found there is, though I had to develop it a bit. It uses the new-ish rubberband filter of ffmpeg. I call this tiny script dechipmunk.sh:

#!/bin/sh
input=$1
#ffmpeg -thread_queue_size 2048 -i $input -y -ac 1 -filter:a rubberband='tempo=2' -loglevel warning stretched$input
#ffmpeg -thread_queue_size 2048 -i $input -y -ac 1 -filter:a rubberband='pitch=2' -loglevel warning stretched$input
ffmpeg -thread_queue_size 2048 -i $input -y -ac 1 -filter:a "rubberband=pitch=0.3333:tempo=0.3333" -loglevel warning stretched$input

In my case I had to slow things down and lower the pitch by the same factor: one third, hence the 0.3333.

How to pass multiple options to an ffmpeg filter

In doing the above I had to work out the syntax for passing more than one option to the new rubberband filter of ffmpeg. I wanted to specify both the pitch and the tempo options. So you see from above they had to be separated with a colon and the whole filter expression enclosed in quotes. Hence the funny-looking

"rubberband=pitch=0.3333:tempo=0.3333"

Future development

Well, I’m thinking of removing the chit-chat from the recording in an automated fashion. That may mean applying machine learning, or maybe something simpler if someone has covered this territory before for the RPi. But it might be a good excuse to do a shallow dive into machine learning.

Conclusion

I’m sure this method of YouTube upload is very flaky and will probably only work once or twice, if at all. But at least in my trials, it did work a few times. So perhaps it could be hardened and made more error-correcting. There are a lot of moving parts for it all to work. But it’s definitely cool to watch it go when it is working!

References and related

Rii infrared remote control – only $12: Amazon.com: Rii MX3 Multifunction 2.4G Fly Mouse Mini Wireless Keyboard & Infrared Remote Control & 3-Gyro + 3-Gsensor for Google Android TV/Box, IPTV, HTPC, Windows, MAC OS, PS3 : Electronics

Reading keyboard input.

How To Upgrade Raspbian Lite to Desktop (PIXEL, KDE, MATE, …) – RaspberryTips

YouTube Livestreaming with a click of a button on Raspberry Pi

Automated YouTube video uploading from Raspberry Pi without using the YouTube api

Categories
Linux Raspberry Pi

Automated YouTube video uploading from Raspberry Pi without using the YouTube api

2021 Intro

This post promises more than it actually delivers, ha, ha. It is squarely aimed at the more mid to advanced RPi enthusiast. Most who read this will get discouraged and look for another solution. I did the same in fact and I will review my failures with alternatives.

The essence of this approach is screen automation with a very nice tool called xdotool. For me it works. It will definitely, 100% require some tweaks for anyone else. This is not a run-a-few-installs, copy-this-code-and-you’re-good-to-go affair. But if you have the patience, you will be rewarded with either fully or at least semi-automated video uploads to YouTube from your Raspberry Pi.

One caveat. Please obey YouTube’s terms of service. In other words, don’t abuse this! As soon as someone starts using this method in an aggressive or abusive fashion, we will all lose this capability. They have crack security experts and could squelch this approach in a heartbeat.

I actually don’t have all the pieces in place for myself, but I have enough cool stuff that I wanted to begin to share my findings.

One beautiful thing about what I’m going to show is that you get to see the cursor moving about the screen in response to your automated commands – you see exactly what it’s doing, which screens it’s clicking through, etc. So if there’s an issue – say YouTube changes its layout – you’ll most likely be able to know how to adapt.

What’s wrong with using YouTube’s api?

Plenty. It used to be feasible. It certainly would make all our lives a lot easier. But YouTube is not a charity. They have squeezed out the little guy by making the barrier to entry so high that it’s really only available to highly determined IT folks. It’s just too difficult to figure out all the needed screens, etc, and all the help guides refer to older api versions where things were different. YouTube clamped down in July 2020 on who or what can use their api. There are a lot of old HowTos pre-dating that which will just lead you to dead-ends. So, go ahead, I dare you to stop reading this and use the api and report back. Maybe you manage to create a project, great, and an api key, great, and even to assign your api key the correct YouTube specific permissions – all great, and associate credentials – super, and finally borrow someone’s code to upload a video – been there, done that. That video will be listed as private. So then you try to root around to see what you have to do to make it public. Ah, a project review. Great. You were only in test mode. So now you’re confronted with this form. If they cared about the little guy there would be a radio button – “I only wish to upload a few videos a week for a small cadre of users, spare me the bureaucracy,” and that’d be it. But, no… Are you applying for a quota? Huh? I just want to upload a video and have it marked as unlisted. Some users remarked they filled out the form, never got their project reviewed and never heard back. Maybe they’re the exception, I don’t know. It’s just over the top for me so I give up.

OK. So, maybe YoutubeUploader?

Nope. Doesn’t work. It’s based on the old stuff.

OK. What about that guy’s api-less Node.js uploader?

Maybe. I could not get it to work on RPi. But I didn’t try super hard. I just like rolling my own, frankly. My approach is much more transparent. At least this approach inspired me to imagine the approach I am about to share. Because I believe the Node.js guy is just doing screen scraping but you can’t even see the screens.

Or simply do a Livestream?

Agreed. Livestreaming is quite straightforward by comparison with what I developed. My blog post about one click livestreaming covers it. But I have not had good results with reliability. As often as it works, it doesn’t work. With this new approach I’m going to try to create separate steps so that if anything goes wrong, an individual step can be re-run. Another advantage of separating steps is that a recording can be done “in the field” and without WiFi access. Remember an RPi 3 works great for hours with a decent portable USB battery that’s normally used for phones. Then the resulting recording can be converted to video and uploaded once the RPi is back to its usual WiFi SSID.

Preliminary upload, October 2021

In the video below the right screen is a terminal window showing what the script is doing. It needs some tweaking, and the YouTube window gets stuck so it’s not showing some of the screens. But it’s already totally awesome – and it worked!

Watch an actual video get uploaded
Code for the above

#!/bin/sh
# automate upload of YouTube videos
#
# define some functions
randomsleep(){
# sleep random amount between 1.5 to 2.5 seconds
t10=$(shuf -n1 -i 15-25)
t=$(echo $t10/10|bc -l)
sleep $t
}
drjtool(){
randomsleep
xdotool $1 $2 $3
randomsleep
}

echo Start video upload
echo set display to main display
export DISPLAY=:0
# launch chromium
echo launch chromium
chromium --kiosk https://studio.youtube.com/ > /dev/null 2>&1 &
sleep 20
echo move to CREATE button
drjtool mousemove 579 19
echo click on CREATE button
drjtool click 1
echo move to Upload videos
drjtool mousemove 577 34
echo click Upload videos
drjtool click 1
echo move to SELECT FILES
drjtool mousemove 305 266
echo click on SELECT FILES
drjtool click 1
echo move mouse to Open button
drjtool mousemove 600 396
echo click open and pause a bit for video upload
drjtool click 1
sleep 20
echo "mouse to NEXT button (accept defaults)"
drjtool mousemove 558 386
echo click on NEXT
drjtool click 1
echo move to radio button No it is not made for kids
drjtool mousemove  117 284
echo click radio button
drjtool click 1
echo back to NEXT button
drjtool mousemove 551 384
echo click NEXT
drjtool click 1
echo 'click NEXT again (then says no copyright issues found)'
drjtool click 1
echo click NEXT again
drjtool click 1
echo move to Unlisted visibility radio button
# [note that public would be drjtool mousemove 142 235, private is 142 181]
drjtool mousemove 142 208
echo click Unlisted
drjtool click 1
echo move to copy icon
drjtool mousemove 532 249
echo echo copy URL to clipboard
drjtool click 1
echo move to Save
drjtool mousemove 551 384
echo click Save
drjtool click 1
echo move to CLOSE
drjtool mousemove 434 270
echo click close
drjtool click 1

echo video URL
xsel -b|tee clipboard
echo kill chromium browser
sleep 5
echo kill chromium
kill -9 %1
url=$(cat clipboard|xargs -0 echo)
echo url is $url

I call the script auto-test.sh, just to give it a name.

Ingredients
  • RPi 3 or RPi 4
  • Raspberry OS with full GUI and autologin set up
  • tiger VNC, i.e., the package tigervnc-scraping-server
  • chromium browser – but I think that comes with the GUI install
  • xdotool (apt-get install xdotool)
  • xsel (apt-get install xsel)
  • YouTube account
  • crontab entries – see below
  • you do not need an HDMI display, except for the OS setup
  • a vncviewer such as Real VNC
Idea

Don’t use the GUI for anything else!

Crontab (do a crontab -e to get into your crontab) should contain these lines:

@reboot sleep 15; /home/pi/recordswitch.sh > recordswitch.log 2>&1
# launch vnc server on display 1
@reboot sleep 65;x0vncserver -localhost no -passwordfile ~/.vnc/passwd -display :0 >  x0vncserver.log 2>&1

Work with chromium the first time by hand. As I recall you should:

  • Create a directory like 00uploads – so it appears highest in the list
  • put a single video in 00uploads
  • Do an upload by hand (to help chromium remember to choose this upload directory)
  • launch chromium browser
  • log into your YouTube account at https://studio.youtube.com
  • shrink the browser until its size is 50% (Ctrl-Shift – about four times)
  • Don’t add other tabs and stuff to Chromium

Then subsequent launches of chromium should remember a bunch of these settings, specifically, your login info, the shrunken size, the upload directory, and maybe the (lack of) other tabs.

The beauty of this approach is that it is more transparent than the alternatives. You see exactly what your program is doing. You can issue the xdotool commands by hand to, e.g., change up the coordinates a little bit. Or even enter a video title.

So getting back to the idea, the automation idea is to finish a video somehow, then move it to the 00uploads directory, invoke this uploader program, then either move it to an uploaded directory or some such.

Imagine the versatility if I used my remote controller for RPi to map one button for audio recording, and a second button for automating video upload! Well, when I find the time that’s what I plan to do. I will make a separate post where the recording and uploading are shown – more or less the culmination of all the pieces.

Oh, and back to the idea again, I wanted to share the unlisted link with band members. So, you see the link is basically in the result of xsel -b, since xsel reads the clipboard, which contains the YouTube URL for this video we just uploaded. I have to fix up the parsing because some junk characters are getting included, but I plan to email that link to myself first, where I will do a brief manual check, and then forward it to the rest of the band. So, again, it’s really cool that we could even think to pull that off with this simplistic approach.

Techniques developed for this project

Lots.

I “discovered” – in the sense that Columbus discovered America – xdotool as an amazing X Windows screen automation tool. I knew of autohotkey for Windows so inquired what was like it for X Windows. I further learned that xdotool is generally broken when it comes to use with traditional VNC servers such as the native tightvncserver. It simply doesn’t work. But Tiger VNC is a scraping server, so it shares your console screen and makes it available via the VNC protocol. That’s required because to develop this approach you have to see what you’re doing. All those coordinates? They come from experimentation.
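
If you are trying this yourself, xdotool can also report the current pointer position, which is how you work out those coordinates interactively over the VNC session. Hover over a button and run something like this (the numbers shown are just an example – yours will differ):

export DISPLAY=:0
xdotool getmouselocation
# x:579 y:19 screen:0 window:14680078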

I also learned how to embed a YouTube video in my blog post. In fact this is the very first video I made for a blog post. So I did a screen recording for the first time with screen recorder for Windows.

I landed on the idea of a side-by-side video showing my terminal running the automated script in one window and the effect it is having on the chromium browser in the other window running on the RPi.

I put a wrapper around xdotool to make things cleaner. (But it’s not done yet.)

I changed to two-factor authentication to see if it made a difference. It did not. It still remembers the authentication, thankfully, at least for a few days. I wonder for how long though. Hmm.

Kiosk mode. Launching chromium in kiosk mode not only gives us more screen real estate to work in; in principle it should also permit you to interact with chromium in a regular fashion and still have it come up in a known, fixed position, which is an absolute requirement of this approach. All the buttons have to have the same coordinates from invocation to invocation.
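
For reference, the launch looks roughly like this (a sketch; your chromium binary name and URL may differ):

DISPLAY=:0 chromium-browser --kiosk https://studio.youtube.com &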

I also developed ffmpeg-based converters: one takes wav files and converts them to mp3s (a nice compact format; wav files are space hogs), and another takes mp3 files, adds a gray screen, and converts them to flv ([Adobe] Flash Video, I guess – a compact video file format which YouTube accepts).
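
The exact converters aren’t reproduced here, but stripped-down versions might look like this (filenames and resolution are placeholders):

# wav to mp3
ffmpeg -i recording.wav -codec:a libmp3lame -qscale:a 2 recording.mp3
# mp3 plus a plain gray screen to flv; -shortest ends the video when the audio ends
ffmpeg -f lavfi -i color=c=gray:s=1280x720 -i recording.mp3 -shortest -c:a libmp3lame -ar 44100 recording.flv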

I also learned the ffmpeg command to tell exactly how long a recording is.
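
My exact invocation isn’t shown here, but one common way is with ffprobe, which ships alongside ffmpeg:

ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 recording.mp3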

I also learned how to turn off blocking in ffmpeg so that it’s constantly writing packets and thus not losing audio data at the end when stopped.

I came up with the idea of randomizing the sleep time between clicks to make it seem more human-like.
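
In bash that can be as simple as something like this (the one-to-three-second range is just an example):

sleep $(( (RANDOM % 3) + 1 ))     # pause a random 1, 2 or 3 seconds before the next simulated click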

And mostly for the purpose of demonstrations, though it also greatly helps in debugging, I introduced around two seconds of sleep both before and after a command is issued. That really makes things a lot clearer.

The result of the clipboard, xsel -b, contains null bytes. I had trouble parsing it to pull out just the URL, but finally landed on using xargs -0, which is designed to parse null-delimited strings. And it worked! This was a late addition and did not make it into the video, but it is in the provided script above, towards the bottom.
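
A rough sketch of that parsing (not the exact line from the script) might be:

url=$(xsel -b | xargs -0 echo | grep -o 'https://[^ ]*')   # xargs -0 swallows the null bytes; grep pulls out the URL
echo "$url"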

ffmpeg chokes on too-complicated filenames. Who knew? I had files containing colon (:) and dash (-) characters, which work perfectly fine in Linux, but it appeared ffmpeg was interpreting part of the filename as a command-line argument. The way out of that mess was to introduce file: in front of the filename.
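
For example, something along these lines works where the bare filename did not (the filename is made up):

ffmpeg -i "file:recording 2022-01-30 19:30:00.wav" -codec:a libmp3lame out.mp3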

I’ll probably put my ffmpeg tricks into my next post because I want to keep this one lean and focused on this one upload automation topic.

References and related

This post was preceded by my post on how to stream live to YouTube with a click of a button on a Raspberry Pi.

Categories
Firewall

Checkpoint Gaia admin tips

Intro

Suppose, hypothetically, that you had super admin access to a CMA in SmartConsole R80.40, but lacked ssh and GUI access to the firewalls within that CMA. What could you do? Can you run commands in a pinch? Yes, you can. Here are some concrete examples.

Caveats

In the servers section of the domain you can right-click and choose “Run one-time script.” That’s great, but I think there are limits. It will time out a script that takes too long; IDK, maybe 10 seconds or so is the maximum time allowed. The returned text gets truncated if it’s too long: 15 lines of text is OK, 200 is not. Somewhere in between those two is the limit.

Running clish commands

clish commands can indeed be run this way. I was interested in examining a few routes on a firewall with many static routes. I ran:

netstat -rn|grep 198.23|head -15

Set a static route

clish -sc "set static-route 197.6.75.0/24 nexthop gateway address 10.23.42.10 on"

Redistribute this route via BGP

clish -sc "set route-redistribution to bgp-as 38002.48928 from static-route 197.6.75.0/24 on"

Run a PING (best to restrict the number of ping packets)

ping -c3 1.1.1.1

Show a part of configuration, e.g., BGP stuff

clish -c "show configuration"|grep bgp|head -15

Show cluster IPs

cphaprob -vs all -a -m if|grep 10.182.136

Learn the name of the switch and switch port an interface is connected to (Cisco switches only)

This is a really awesome trick. And it works. Maybe it relies on something called CDP. Not sure. But you run it and it will tell you the hostname of the switch and the port, e.g., eth1/5.

tcpdump -vnni eth1-08 -s 1500 -c 1 '(ether[12:2]=0x88cc or ether[20:2]=0x2000) and not tcp and not udp and not icmp'

The interface name eth1-08 above is just an example.

Better still, it will even tell you the management IP of the switch! That output will appear like this:

Management Addresses (0x16), value length: 13 bytes: IPv4 (1) 10.122.37.81

This command is general-purpose and works with any device with any OS, assuming you can run a packet trace with tcpdump or equivalent. Very cool.

Conclusion

Real firewall admins I know fail to realize that even when they lack shell access to a firewall, they can issue pretty much any command they need if they use the one-time script option in SmartConsole. It just helps to follow along the lines of the examples above – limiting output, etc. Even clish config changes can be made! A common way to land in this situation is that someone changed a password or cleaned up old accounts.

As a bonus I show how to identify the name of the switch a firewall is connected to as well as the switch port and management IP of that switch. The general-purpose command works on any OS.

Categories
Admin DNS Firewall Network Technologies TCP/IP

The IT detective agency: named times out tcp queries

Intro

I’ve been reliably running ISC’s BIND server for eons. Recently I had a problem getting my slave servers updated after a change on the primary master. What was going on there?

The details

This was truly a team effort. I saw that the zone file had differing serial numbers on the master versus the slave servers. My attempts to update via an rndc refresh of the zone were having no effect.

So I tried a zone transfer by hand: dig axfr drjohnstechtalk.com @50.17.188.196

That timed out!

Yet, regular DNS queries went through fine: dig ns drjohnstechtalk.com @50.17.188.196

I thought about it and remembered zone transfers use TCP whereas standard queries use UDP. So I tried a TCP-based simple query: dig +tcp ns drjohnstechtalk.com @50.17.188.196. It timed out!

So of course one suspects the firewall, which is reasonable enough. And when I looked at the firewall I found some funny drops, though I couldn’t line them up exactly with my failed tests. But I’m not a firewall expert; I just muddle through.

The next day someone from the DNS group asked how local queries behaved. Hmm, I had never tried that. So I tried it: dig +tcp ns drjohnstechtalk.com @localhost. That timed out as well! That was a brilliant suggestion, as we could now eliminate the firewall and all that complexity from the equation. Up to then I had been trying to do packet traces on two different machines at the same time and line up the results. It wasn’t easy.

The whole issue was very concerning to us because we feared our secondaries would be unable to update their slave zones and would ultimately time them out. The result would be devastating.

We have support, fortunately. A company that hearkens from the good old days, with real subject matter experts. But they’re extremely busy. We did not get a suggestion for a couple of weeks. But eventually we did. They had seen this once before.

named time to respond to TCP-based queries

The above graph is from a Zabbix monitor showing how long it takes that DNS server to respond to that simple query. 6 s means a time-out. I actually set dig to time out at 2 s, but in wall-clock time it actually takes 6 s.

The fix

We removed this line from the options block of named.conf:

keep-response-order {any; };

The info from the experts is that most likely that line was configured as a workaround for CVE-2019-6477, but that issue has been fixed since BIND 9.15.6.

Conclusion

We encountered the named daemon in a situation where it was unable to respond to TCP-based DNS queries and hence unable to do zone transfers. So although most queries use UDP, this was a serious issue for us and prevented zones from being updated on all authoritative nameservers.

As is the case with so many modern IT problems, the effect was not black or white. Failures were intermittent, and then permanent. A restart fixed the issue (I forgot to mention that until now!). But we involved an expert to find the root cause, and it was the presence of a single configuration line in our named.conf. After removing that, all was good.

Categories
Admin

Git commands cheat sheet

Intro

This is the list of git commands I compiled.

Create new local GIT repository

git init [project name]

Copy a repository

git clone username@host:/path/to/repository

Add a file to the staging area – must be done or won’t be saved in next commit

git add temp.txt or git add -A (add everything at once)

My working style is to change one to three files, and only add those.

Suppose you run git add, then change the file again, and then run git commit: what version will you push? The one you staged with git add, not your latest edit! Run git add, etc., all over again to pick up the latest changes. And note that your git adds can be run over the course of time – not all at once!
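
To illustrate the gotcha (using temp.txt from the example above):

git add temp.txt                    # stages temp.txt as it is right now
echo "one more line" >> temp.txt    # a later edit, not yet staged
git status                          # temp.txt now shows as both staged and modified
git add temp.txt                    # stage the latest version too
git commit -m "now the commit includes the latest edit"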

Create a snapshot of the changes and save to git directory??

Note that committed changes won’t make their way to the remote repo until you push them.

git commit -m "Message to go with the commit here"

Nota bene: git does not allow you to add empty folders! If you try, you’ll simply see: nothing to commit, working tree clean

Put some “junk file” like .gitkeep in your empty folder for git add/git commit to work.
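
Something like this, assuming a hypothetical empty folder named images:

mkdir images
touch images/.gitkeep               # the "junk" placeholder file
git add images/.gitkeep
git commit -m "add the (formerly empty) images folder"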

Set user-specific values

git config --global user.email youremail

Displays the list of changed files together with the files that are yet to be staged or committed

git status

Undo that git add you just did

git status to see what’s going on. Then

git restore --staged filename

and then another git status to see that that worked.

Send local commits to the master branch of the remote repository

  (Replace <master> with the branch where you want to push your changes when you’re not intending to push to the master branch)

git push origin <master>

For real basic setups like mine, where I work on branch master, it suffices to simply do git push

Merge all the changes present in the remote repository to the local working directory

git pull

Create a new branch and switch to it

git checkout -b <branch-name>

Switch from one branch to another

git checkout <branch-name>

View all remote repositories

git remote -v

Connect the local repository to a remote server

git remote add origin <host-or-remoteURL>

Delete connection to a specified remote repository

git remote rm <name-of-the-repository>

List, create, or delete branches

git branch

Delete a branch

git branch -d <branch-name>

Merge a branch into the active one

git merge <branch-name>

List file conflicts

git diff --base <file-name>

View conflicts between branches before a merge

git diff <source-branch> <target-branch>

List all conflicts

git diff

Mark certain commits, e.g., v1.0

git tag <tag-name> <commitID>

View repository’s commit history, etc

git log, e.g., git log -- path_to_your_file

Reset index?? and working directory to last git commit

git reset --hard HEAD

Remove files from the index?? and working directory

git rm filename.txt

Revert (undo) changes from a commit as per the hash shown by git log --oneline

git revert <hash>

Temporarily save the changes not ready to be committed??

git stash

View info about any git object

git show

Fetch all objects from the remote repository that don’t currently reside in the local repository

git fetch origin

View a tree object??

git ls-tree HEAD

Search everywhere

git grep <string>

Clean unneeded files and optimize local repository

git gc

Create zip or tar file of a repository

git archive --format=tar master

Delete objects without incoming pointers??

git prune

Identify corrupted objects

git fsck

Merge conflicts

Today I got this error during my usual git pull:

error: Pulling is not possible because you have unmerged files.
hint: Fix them up in the work tree, and then use 'git add/rm <file>'
hint: as appropriate to mark resolution and make a commit.
fatal: Exiting because of an unresolved conflict.

I did not wish to waste too much time. I tried a few things (git status, etc.), none of which worked. As it is a small repository without much at stake, and I knew which files I had changed, I simply deleted the clone, re-cloned the repo, put back the newer versions of the changed files, and did the usual git add, git commit and git push. All was good. Not too much time wasted in becoming a git master, a fate I wish to avoid.

Ignore a file

Put the unqualified name of the file in .gitignore, at the same level as .git. It will not be added to the project. Use this for keeping passwords secret.
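
For instance, to keep a hypothetical credentials file out of the repo:

echo "secrets.php" >> .gitignore    # secrets.php is a made-up filename holding passwords
git add .gitignore
git commit -m "ignore the secrets file"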

Test a command

Most commands have a --dry-run option, which prevents them from taking any action. Useful for debugging.

References and related

A convention for commit comments

https://www.freecodecamp.org/news/10-important-git-commands-that-every-developer-should-know/