Categories
Admin Apache CentOS Python Raspberry Pi Web Site Technologies

Traffic shaping on linux – an exploration

Intro

I have always been somewhat agog at the idea of limiting bandwidth on my linux servers. Users complain about slow web sites and you want to try it for yourself, slowing your connection down to meet the parameters of their slower connection. More recently I happened on librespeed, an alternative to speedtest.net, where you can run both server and client. But in order to avoid transferring too much data and monopolizing the whole line, I wanted to actually put in some bandwidth throttling. I began an exploration of available methods to achieve this and found some satisfactory approaches that are readily available on Redhat-type linuxes.

Bandwidth throttling, bandwidth rate limiting, bandwidth classes – these are all synonyms for what is most commonly called traffic shaping.

What doesn’t work so well

I think it’s important to start with the walls that I hit.

Cgroup

I stumbled on cgroups first. The man page starts in a promising way

cgroup - control group based traffic control filter

Then after you research it you see that cgroup support has been in the linux kernel for a long time. There are versions 1 and 2, and only version 1 supports bandwidth limits. But if you’re just a mid-level linux person such as myself, it is confusing and unclear how to take advantage of cgroups. My current conclusion is that it is more a subsystem designed for use by systemd. In fact if you’ve ever looked at a service’s status, for instance crond’s, you see a mention of a cgroup:

sudo systemctl status crond
● crond.service - Command Scheduler
Loaded: loaded (/usr/lib/systemd/system/crond.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2021-08-09 15:44:24 EDT; 5 days ago
Main PID: 1193 (crond)
Tasks: 1 (limit: 11278)
Memory: 2.1M
CGroup: /system.slice/crond.service
└─1193 /usr/sbin/crond -n

I don’t claim to know what it all means, but there it is. Some nice abilities to schedule and allocate finite resources, at a very high level.

So I get the impression that no one really uses cgroups to do traffic shaping.

apache web server to the rescue – not

Since I was mostly interested in my librespeed server and controlling its bandwidth during testing, I wondered if the apache web server has this capability built-in. Essentially, it does! There is the module mod_ratelimit. So, quest over, and let the implementation begin! Except not so fast. In fact I did enable that module. And I set it up on my librespeed server. It kind of works, but mostly, not really, and nothing like its documented design.


<Location "/downloads">
    SetOutputFilter RATE_LIMIT
    SetEnv rate-limit 400
    SetEnv rate-initial-burst 512
</Location>

That’s their example section (note that the rate-limit value is interpreted in KiB/s, so 400 works out to roughly 3.3 Mbps). I have no interest in such low limits and tried various values from 4000 to 12000. I only got two different actual rates from librespeed out of all those various configurations. I could either get 83 Mbps or around 162 Mbps. And that’s it. Merely having any statement whatsoever starts limiting to one of these strange values. With the statement commented out I was getting around 300 Mbps. So I got rate limiting, but not what I was seeking, and with almost no control.

So the apache config approach was a bust for me.

Trickle

There are some linux programs that are perhaps promoted too heavily? Within a minute of posting my first draft of this someone came along and suggested trickle. Well, on CentOS yum search trickle gives no results. My other OS was SLES v15 and I similarly got no results. So I’m not enamored with trickle.

tc – now that looks promising

Then I discovered tc – traffic control. That sounds like just the thing. I had to search around a bit on one of my OSes to find the appropriate package, but I found it. On CentOS/Redhat/Fedora the package is iproute-tc. On SLES v15 it was iproute2. On FreeBSD I haven’t figured it out yet.

But it looks unwieldy to use, frankly. Not, as they say, user-friendly.

tcconfig + tc – perfect together

Then I stumbled onto tcconfig, a python wrapper for tc that provides convenient utilities and examples. It’s available, assuming you’ve already installed python, through pip or pip3, depending on how you’ve installed python. Something like

$ sudo pip3 install tcconfig

I love the available settings for tcset – just the kinds of things I would have dreamed up on my own. I wanted to limit download speeds, and only on the web server running on port 443, and only from a specific subnet. You can do all that! My tcset command went something like this:

$ cd /usr/local/bin; sudo ./tcset eth0 --direction outgoing --src-port 443 --rate 150Mbps --network 134.12.0.0/16

$ sudo ./tcshow eth0

{
    "eth0": {
        "outgoing": {
            "src-port=443, dst-network=134.12.0.0/16, protocol=ip": {
                "filter_id": "800::800",
                "rate": "150Mbps"
            }
        },
        "incoming": {}
    }
}

More importantly – does it work? Yes, it works beautifully. I run a librespeed cli with three concurrent streams against my AWS server thusly configured and I get around 149 Mbps. Every time.

Note that things are opposite of what you first think of. When I want to restrict download speeds from a server but am imposing traffic shaping on the server (as opposed to on the client machine), from its perspective that is upload traffic! And port 443 is the source port, not the destination port!
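
And when you’re done testing, tcconfig also provides tcdel to clear things out. Something like this should remove all the rules on the interface:

$ sudo ./tcdel eth0 --all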

Raspberry Pi example

I’m going to try regular librespeed tests on my home RPi which is cabled to my router to do the Internet monitoring. So I’m trying

$ sudo tcset eth0 --direction incoming --rate 100Mbps
$ sudo tcset eth0 --direction outgoing --rate 9Mbps --add

This reflects the reality of the asymmetric rate you typically get from a home Internet connection. tcshow looks a bit peculiar however:

{
    "eth0": {
        "outgoing": {
            "protocol=ip": {
                "filter_id": "800::800",
                "delay": "274.9s",
                "delay-distro": "274.9s",
                "rate": "9Mbps"
            }
        },
        "incoming": {
            "protocol=ip": {
                "filter_id": "800::800",
                "delay": "274.9s",
                "delay-distro": "274.9s",
                "rate": "100Mbps"
            }
        }
    }
}

Results on the RPi

Despite the strange delay-distro appearing in the tcshow output, the results are perfect. Here are my librespeed results, running against my own private AWS server:

Time is Sat 21 Aug 16:17:23 EDT 2021
Ping: 20 ms Jitter: 1 ms
Download rate: 100.01 Mbps
Upload rate: 9.48 Mbps


Problems creep in on RPi

I swear I had it all working. This blog post is the proof. Now I’ve rebooted my RPi and that tcset command above gives the result Illegal instruction. Still trying to figure that one out!

March, 2022 update. My RPi had other issues. I’ve re-imaged the micro SD card and all is good once again. I set traffic shaping policies as shown in this post.

Conclusion about tcconfig

It’s clear tcset is just giving you a nice interface to tc, but sometimes that’s all you need to not sweat the details and start getting productive.

Possible issue – missing kernel module

On one of my servers (the CentOS 8 one), I had to do a

$ sudo yum install kernel-modules-extra

$ sudo modprobe sch_netem

before I could get tcconfig to really work.

To do list

Make the tc settings permanent (see the sketch below).

Verify tc + tcconfig work on a Raspberry Pi. (tc is definitely available for RPi.)
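
For the first item, something like this systemd unit should do it. This is just a sketch – the unit name and the rates are mine, reusing the RPi example above:

[Unit]
Description=Apply tc traffic shaping at boot
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/bin/tcset eth0 --direction incoming --rate 100Mbps
ExecStart=/usr/local/bin/tcset eth0 --direction outgoing --rate 9Mbps --add

[Install]
WantedBy=multi-user.target

Save it as /etc/systemd/system/tcset.service and run sudo systemctl enable tcset.service.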

Conclusion

We have found a pretty nice and effective way to do traffic shaping on linux systems. The best tool is tc and the best wrapper for it is tcconfig.

References and related

Librespeed is a great speedtest.net alternative for hard-core linux types who love the command line and being in full control of both ends of a speed test. I describe it here.

tcconfig’s project page on PyPi.

Power cycling one’s cable modem automatically via an attached RPi. I refer to this blog post specifically because I intend to expand that RPi to also do periodic, automated speedtesting of my home broadband connection, with traffic shaping in place if all goes well (as it seems to thus far).

Bandwidth management and “queueing discipline” in all its gory detail is explained in this post, including example raw tc commands. I haven’t digested it yet but it may represent a way for me to get my RPi working again without a re-image: http://www.fifi.org/doc/HOWTO/en-html/Adv-Routing-HOWTO-9.html

Categories
Perl Python Raspberry Pi Web Site Technologies

Raspberry Pi photo frame using your pictures on your Google Drive

Editor’s Note

Please note I am putting all my currently active development and latest updates into this newer post: Raspberry Pi photo frame using your pictures on your Google Drive II

Intro

All my spouse’s digital photo frames are either broken or nearly broken – probably she got them from garage sales. Regardless, they spend 99% of the time black. Now, since I had bought that Raspberry Pi PiDisplay awhile back, and it is underutilized, and I know a thing or two about linux, I felt I could create a custom photo frame with things I already have lying around – a Raspberry Pi 3, a PiDisplay, and my personal Google Drive. We make a point of copying all our cameras’ pictures onto the Google Drive, which we do the old-fashioned, by-hand way. After 17 years of digital photos we have about 40,000 of them, over 200 GB.

So I also felt obliged to create features you will never have in a commercial product, to make the effort worthwhile. I thought, what about randomly picking a few for display from amongst all the pictures, displaying that subset for a few days, and then moving on to a new randomly selected sample of images, etc? That should produce a nice review of all of them over time, eventually. You need an approach like that because you will never get to the end if you just try to display 40000 images in order!

Equipment

This work was done on a Raspberry Pi 3 running Raspbian Lite (more on that later). I used a display custom-built for the RPi (the Raspberry Pi 7″ Touch Screen Display sold on Amazon), though I believe any HDMI display would do.

The scripts
Here is the master file which I call master.sh.


#!/bin/sh
# DrJ 8/2019
# call this from cron once a day to refresh the random slideshow
RANFILE="random.list"
NUMFOLDERS=20
DISPLAYFOLDER="/home/pi/Pictures"
DISPLAYFOLDERTMP="/home/pi/Picturestmp"
SLEEPINTERVAL=3
DEBUG=1
STARTFOLDER="MaryDocs/Pictures and videos"

echo "Starting master process at "`date`

rm -rf $DISPLAYFOLDERTMP
mkdir $DISPLAYFOLDERTMP

#listing of all Google drive files starting from the picture root
if [ $DEBUG -eq 1 ]; then echo Listing all files from Google drive; fi
rclone ls remote:"$STARTFOLDER" > files

# filter down to only jpegs, lose the docs folders
if [ $DEBUG -eq 1 ]; then echo Picking out the JPEGs; fi
egrep '\.[jJ][pP][eE]?[gG]$' files |awk '$1 > 11000 {$1=""; print substr($0,2)}'|grep -i -v /docs/ > jpegs.list

# throw NUMFOLDERS or so random numbers for picture selection, select triplets of photos by putting
# names into a file
if [ $DEBUG -eq 1 ]; then echo Generate random filename triplets; fi
./random-files.pl -f $NUMFOLDERS -j jpegs.list -r $RANFILE

# copy over these 60 jpegs
if [ $DEBUG -eq 1 ]; then echo Copy over these random files; fi
cat $RANFILE|while read line; do
rclone copy remote:"${STARTFOLDER}/$line" $DISPLAYFOLDERTMP
sleep $SLEEPINTERVAL
done

# rotate pics as needed
if [ $DEBUG -eq 1 ]; then echo Rotate the pics which need it; fi
cd $DISPLAYFOLDERTMP; ~/rotate-as-needed.sh
cd ~

# kill any qiv slideshow
if [ $DEBUG -eq 1 ]; then echo Killing old qiv and fbi slideshow; fi
pkill -9 -f qiv
sudo pkill -9 -f fbi
pkill -9 -f m2.pl

# remove old pics
if [ $DEBUG -eq 1 ]; then echo Removing old pictures; fi
rm -rf $DISPLAYFOLDER

mv $DISPLAYFOLDERTMP $DISPLAYFOLDER

#run looping fbi slideshow on these pictures
if [ $DEBUG -eq 1 ]; then echo Start fbi slideshow in background; fi
cd $DISPLAYFOLDER ; nohup ~/m2.pl >> ~/m2.log 2>&1 &

if [ $DEBUG -eq 1 ]; then echo "And now it is "`date`; fi

I call the following script random-files.pl:

#!/usr/bin/perl
use Getopt::Std;
my %opt=();
getopts("c:df:j:r:",\%opt);
$nofolders = $opt{f} ? $opt{f} : 20;
$DEBUG = $opt{d} ? 1 : 0;
$cutoff = $opt{c} ? $opt{c} : 5;
$cutoffS = 60*$cutoff;
$jpegs = $opt{j} ? $opt{j} : "jpegs.list";
$ranpicfile = $opt{r} ? $opt{r} : "jpegs-random.list";
print "d,f,j,r: $opt{d}, $opt{f}, $opt{j}, $opt{r}\n" if $DEBUG;
open(JPEGS,$jpegs) || die "Cannot open jpegs listing file $jpegs!!\n";
@jpegs = <JPEGS>;
# remove newline characters; chomp returns the number of lines
$nopics = chomp @jpegs;
open(RAN,"> $ranpicfile") || die "Cannot open random picture file $ranpicfile!!\n";
for($i=0;$i<$nofolders;$i++) {
  $t = int(rand($nopics-2));
  print "random number is: $t\n" if $DEBUG;
# a lot of our pics follow this naming convention
# 20160831_090658.jpg
  ($date,$time) = $jpegs[$t] =~ /(\d{8})_(\d{6})/;
  if ($date) {
    print "date, time: $date $time\n" if $DEBUG;
# ensure neighboring picture is at least five minutes different in time
    $iPO = $iP = $diff = 0;
    ($hr,$min,$sec) = $time =~ /(\d\d)(\d\d)(\d\d)/;
    $secs = 3600*$hr + 60*$min + $sec;
    print "Pre-pic logic\n";
    while ($diff < $cutoffS) {
      $iP++;
      $priorPic = $jpegs[$t-$iP];
      $Pdate = $Ptime = 0;
      ($Pdate,$Ptime) = $priorPic =~ /(\d{8})_(\d{6})/;
      ($Phr,$Pmin,$Psec) = $Ptime =~ /(\d\d)(\d\d)(\d\d)/;
      $Psecs = 3600*$Phr + 60*$Pmin + $Psec;
      print "hr,min,sec,Phr,Pmin,Psec: $hr,$min,$sec,$Phr,$Pmin,$Psec\n" if $DEBUG;
      $diff = abs($secs - $Psecs);
      print "diff: $diff\n" if $DEBUG;
# end our search if we happened upon different dates
      $diff = 99999 if $Pdate ne $date;
    }
# post-picture logic - same as pre-picture
    print "Post-pic logic\n";
    $diff = 0;
    while ($diff < $cutoffS) {
      $iPO++;
      $postPic = $jpegs[$t+$iPO];
      $Pdate = $Ptime = 0;
      ($Pdate,$Ptime) = $postPic =~ /(\d{8})_(\d{6})/;
      ($Phr,$Pmin,$Psec) = $Ptime =~ /(\d\d)(\d\d)(\d\d)/;
      $Psecs = 3600*$Phr + 60*$Pmin + $Psec;
      print "hr,min,sec,Phr,Pmin,Psec: $hr,$min,$sec,$Phr,$Pmin,$Psec\n" if $DEBUG;
      $diff = abs($Psecs - $secs);
      print "diff: $diff\n" if $DEBUG;
# end our search if we happened upon different dates
      $diff = 99999 if $Pdate ne $date;
    }
  } else {
    $iP = $iPO = 2;
  }
  $priorPic = $jpegs[$t-$iP];
  $Pic = $jpegs[$t];
  $postPic = $jpegs[$t+$iPO];
  print RAN qq($priorPic
$Pic
$postPic
);
}
close(RAN);

Bunch of simple python scripts

I call this one getinfo.py:


#!/usr/bin/python3
import os,sys
from PIL import Image
from PIL.ExifTags import TAGS

for (tag,value) in Image.open(sys.argv[1])._getexif().items():
    print ('%s = %s' % (TAGS.get(tag), value))

And here’s rotate.py:


#!/usr/bin/python3
import PIL, os
import sys
from PIL import Image

picture= Image.open(sys.argv[1])

# if orientation is 6, rotate clockwise 90 degrees
picture.rotate(-90,expand=True).save("rot_" + sys.argv[1])

While here is rotatecc.py:


#!/usr/bin/python3
import PIL, os
import sys
from PIL import Image

picture= Image.open(sys.argv[1])

# if orientation is 8, rotate counterclockwise 90 degrees
picture.rotate(90,expand=True).save("rot_" + sys.argv[1])

And rotate-as-needed.sh:


#!/bin/sh
# DrJ 12/2020
# some of our downloaded files will be sideways, and fbi doesn't auto-rotate them as far as I know
# assumption is that our current directory is the one where we want to alter files
ls -1|while read line; do
echo file is "$line"
o=`~/getinfo.py "$line"|grep -ai orientation|awk '{print $NF}'`
echo orientation is $o
if [ "$o" -eq "6" ]; then
echo "90 clockwise is needed, o is $o"
# rotate and move it
~/rotate.py "$line"
mv rot_"$line" "$line"
elif [ "$o" -eq "8" ]; then
echo "90 counterclock is needed, o is $o"
# rotate and move it
~/rotatecc.py "$line"
mv rot_"$line" "$line"
fi
done

And finally, m2.pl:

#!/usr/bin/perl
# show the pics ; rotate the screen as needed
# for now, assume the display is in a neutral
# orientation at the start
use Time::HiRes qw(usleep);
$DEBUG = 1;
$delay = 6; # seconds between pics
$mdelay = 200; # milliseconds
$mshow = "$ENV{HOME}/mediashow";
$pNames = "$ENV{HOME}/pNames";
# pics are here
$picsDir = "$ENV{HOME}/Pictures";

chdir($picsDir);
system("ls -1 > $pNames");
# further massage names
open(TMP,"$pNames");
@lines = <TMP>;
foreach (@lines) {
  chomp;
  $filesNullSeparated .= $_ . "\0";
}
open(MS,">$mshow") || die "Cannot open mediashow file $mshow!!\n";
print MS $filesNullSeparated;
close(MS);
print "filesNullSeparated: $filesNullSeparated\n" if $DEBUG;
$cn = @lines;
print "$cn files\n" if $DEBUG;
# throw up a first picture - all black. Trick to make the black background permanent
system("sudo fbi -a --noverbose -T 1 $ENV{HOME}/black.jpg");
system("sudo fbi -a --noverbose -T 1 $ENV{HOME}/black.jpg");
sleep(1);
system("sleep 2; sudo killall fbi");
# start infinitely looping fbi slideshow
for (;;) {
# then start slide show
# shell echo cannot work with null character so we need to use a file to store it
    #system("cat $picNames|xargs -0 qiv -DfRsmi -d $delay \&");
    system("sudo xargs -a $mshow -0 fbi -a --noverbose -1 -T 1  -t $delay ");
# fbi runs in background, then exits, so we need to monitor if it's still alive
# wait appropriate estimated amount of time, then look aggressively for fbi
    sleep($delay*($cn - 2));
    for(;;) {
      open(MON,"ps -ef|grep fbi|grep -v grep|") || die "Cannot launch ps -ef!!\n";
      $match = <MON>;
      if ($match) {
        print "got fbi match\n" if $DEBUG > 1;
        } else {
        print "no fbi match\n" if $DEBUG;
# fbi not found
          last;
      }
      close(MON);
      print "usleeping, noexist is $noexit\n" if $DEBUG > 1;
      usleep($mdelay);
    } # end loop testing if fbi has exited
} # close of infinite loop
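
A side note on m2.pl: the mediashow file holds the filenames separated by null characters, because shell echo can’t pass nulls and filenames may contain spaces. A toy example (mine, not part of the frame) shows how xargs -0 keeps such names intact:

$ printf 'a b.jpg\0c d.jpg\0' > /tmp/mediashow
$ xargs -a /tmp/mediashow -0 echo fbi -a
fbi -a a b.jpg c d.jpg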

You’ll need to make these files executable. Something like this should work:

$ chmod +x *.py *.pl *.sh

My crontab file looks like this (you edit crontab using the crontab -e command):

@reboot sleep 25; cd ~ ; ./m2.pl >> ./m2.log 2>&1
24 16 * * * ./master.sh >> ./master.log 2>&1

This invokes master.sh once a day at 4:24 PM to refresh the 60 photos. My refresh took about 13 minutes the other day, but the old slideshow keeps playing until almost the last second, so it’s OK.

The nice thing about this approach is that fbi works with a lightweight OS – Raspbian Lite is fine, you’ll just need to install a few packages. My SD card is unstable or something, so I have to re-install the OS periodically. An install of Raspberry Pi Lite on my RPi 4 took 11 minutes. Anyway, fbi is installed via:

$ sudo apt-get install fbi

But if your RPi is freshly installed, you may first need to do a

$ sudo apt-get update && sudo apt-get upgrade

python image manipulation

The drawback of this approach, i.e., not using qiv, is that we gotta do some image manipulation, for which python is the best candidate. I’m going by memory. I believe I installed python3, perhaps as sudo apt-get install python3. Then I needed pip3: sudo apt-get install python3-pip. Then I needed to install Pillow using pip3: sudo pip3 install Pillow.
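
A quick sanity check that Pillow is visible to python3 (my own habit; recent Pillow versions expose a version string):

$ python3 -c 'import PIL; print(PIL.__version__)'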

m2.pl refers to a black.jpg file – a plain all-black image. It’s not a disaster to not have that, but under some circumstances it may help.

Many of my photos do not have EXIF information, yet they can still be displayed. For those photos, running getinfo.py will produce an error (but the processing of the other photos will continue).

I was originally rotating the display 90 degrees as needed to display the photos using the maximum amount of display real estate. But that all broke when I tried to revive it. And the cheap servo motor was noisy. Still, folks were pretty impressed when I demoed it, because I did get it to the point where it was indeed working correctly.

Picture selection methodology

There are 20 “folders” (random numbers), each contributing a triplet of pictures. The idea is to give you additional context to help jog your memory. The triplets, with some luck, will often be from the same time period.

I observed how many similar pictures are adjacent to each other amongst our total collection. To avoid near-identical pictures, I require the pictures to be at least five minutes apart in time. Well, I cheated. I don’t pull out the timestamp from the EXIF data as I should (at least not yet – future enhancement, perhaps). Instead I rely on a file-naming convention I notice is common – 20201227_134508.jpg, which basically is a timestamp-encoded name. The last six digits are HHMMSS in case it isn’t clear.
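
If I ever do that EXIF enhancement, it would look something like this sketch, reusing the same PIL ExifTags machinery as getinfo.py (the script itself is my own, not part of the current frame):

#!/usr/bin/python3
# sketch: pull the capture time (DateTimeOriginal) from EXIF instead of the filename
import sys
from PIL import Image
from PIL.ExifTags import TAGS

exif = Image.open(sys.argv[1])._getexif() or {}
for tag, value in exif.items():
    if TAGS.get(tag) == 'DateTimeOriginal':
        print(value)   # e.g., 2020:12:27 13:45:08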

Rclone

You must install the rclone package, sudo apt-get install rclone.

Can you configure rclone on a headless Raspberry Pi?

Indeed you can. I know because I just did it. You enable your Pi for ssh access. Do the rclone config using putty from a Windows 10 system. In the course of configuring you’ll get a long Google URL that you can paste into your browser. You verify it’s you and log into your Google account. Then you get back a url like http://127.0.0.1:5462/another-long-url-string. Put that url into your clipboard, and in another login window on the Pi, run curl with the pasted url (in quotes) as its argument.

That’s what I did, not certain it would work, but I saw it go through in my rclone-config window, and that was that!

Don’t want to deal with rclone?

So you want to use a traditional flash drive you plug in to a USB port, just like you have for the commercial photo frames, but you otherwise like my approach of randomizing the picture selection each day? I’m sure that is possible. A mid-level linux person could rip out the rclone stuff I have embedded and replace as needed with filesystem commands. I’m imagining a colossal flash drive with all your tens of thousands of pictures on it where my random selection still adds value. If this post becomes popular enough perhaps I will post exactly how to do it.

Getting started with this

After you’ve done all that and want to try it out, you can run

$ ./master.sh

First you should see a file called files growing in size – that’s rclone doing its listing. That takes a few minutes. Then it generates random numbers for photo selection – that’s very fast, maybe a second. Then it slowly copies over the selected images to a temporary folder called Picturestmp. That’s the slowest part. If you do a directory listing you should see the number of images in that directory growing slowly, adding maybe three per minute until it reaches 60 of them. Finally the rotations are applied. But even if you didn’t set up your python environment correctly, it doesn’t crash; it effectively skips the rotations. A rotation takes a couple seconds per image. Finally all the images are copied over to the production area, the directory called Pictures; the old slideshow program is “killed,” and the new slideshow starts up. The whole process takes around 15 minutes.

I highly recommend running master.sh by hand as just described to make sure it all works. Probably some of it won’t. I don’t specialize in making recipes, more just guidance. But if you’re feeling really bold you can just power it up and wait a day (because initially you won’t have any pictures in your slideshow) and pray that it all works.

Tip: Undervoltage thunderbolt suppression

This is one of those topics where you’ll find a lot on the Internet, but little about what we need to do: how do we stop that thunderbolt that appears in the upper right corner from appearing? First, the boilerplate warning. That thingy appears when you’re not delivering enough voltage. That condition can harm your SD card, blah, blah. I’ve blown up a few SD cards myself. But, in practice, with my RPi 3, I’ve been running it with the Pi Display for 18 months with no mishaps. So, come on, let’s get crazy and suppress the darn thing. To suppress that yellow stroke of lightning, add these lines to your /boot/config.txt:


# suppress undervoltage thunderbolt – DrJ 8/21
# see http://rpf.io/configtxt
avoid_warnings=1

For good measure, if you are not using the HDMI port, you can save some energy by disabling HDMI:

$ tvservice -o
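
To turn HDMI back on later (power on with the preferred settings), this should do it:

$ tvservice -p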

Still missing

I’d like to display a transition image when switching from the current set of photos to the new ones.

Suppressing boot up messages might be nice for some. Personally I think they’re kind of cool – makes it look like you’ve done a lot more techie work than you actually have!

You’re going to get some junk images. I’ve seen where an image is a thumbnail (I guess) and gets blown up full screen so that you see giant blocks of pixels. I could perhaps magnify those kinds of images less.

Movies are going to be tricky so let’s not even go there…

I was thinking about making it a navigation-enabled photo frame, such as integration with a Gameboy controller. You could do some really awesome stuff: Pause this picture; display the location (town or city) where this photo was taken; refresh the slideshow. It sounds fantastical, but I don’t think it’s beyond the capability of even modestly capable hobbyist programmers such as myself.

I may still spin the frame 90 degrees this way and that. I have the servo mounted and ready. Just got to revive the control commands for it.

Appendix 1: rclone configuration

This is my actual rclone configuration session from January 2022.

rclone config
2022/01/17 19:45:36 NOTICE: Config file "/home/pi/.config/rclone/rclone.conf" not found - using defaults
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
1 / 1Fichier
\ "fichier"
2 / Alias for an existing remote
\ "alias"
3 / Amazon Drive
\ "amazon cloud drive"
4 / Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, Tencent COS, etc)
\ "s3"
5 / Backblaze B2
\ "b2"
6 / Box
\ "box"
7 / Cache a remote
\ "cache"
8 / Citrix Sharefile
\ "sharefile"
9 / Dropbox
\ "dropbox"
10 / Encrypt/Decrypt a remote
\ "crypt"
11 / FTP Connection
\ "ftp"
12 / Google Cloud Storage (this is not Google Drive)
\ "google cloud storage"
13 / Google Drive
\ "drive"
14 / Google Photos
\ "google photos"
15 / Hubic
\ "hubic"
16 / In memory object storage system.
\ "memory"
17 / Jottacloud
\ "jottacloud"
18 / Koofr
\ "koofr"
19 / Local Disk
\ "local"
20 / Mail.ru Cloud
\ "mailru"
21 / Microsoft Azure Blob Storage
\ "azureblob"
22 / Microsoft OneDrive
\ "onedrive"
23 / OpenDrive
\ "opendrive"
24 / OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
\ "swift"
25 / Pcloud
\ "pcloud"
26 / Put.io
\ "putio"
27 / SSH/SFTP Connection
\ "sftp"
28 / Sugarsync
\ "sugarsync"
29 / Transparently chunk/split large files
\ "chunker"
30 / Union merges the contents of several upstream fs
\ "union"
31 / Webdav
\ "webdav"
32 / Yandex Disk
\ "yandex"
33 / http Connection
\ "http"
34 / premiumize.me
\ "premiumizeme"
35 / seafile
\ "seafile"
Storage> 13
** See help for drive backend at: https://rclone.org/drive/ **
Google Application Client Id
Setting your own is recommended.
See https://rclone.org/drive/#making-your-own-client-id for how to create your own.
If you leave this blank, it will use an internal key which is low performance.
Enter a string value. Press Enter for the default ("").
client_id>
OAuth Client Secret
Leave blank normally.
Enter a string value. Press Enter for the default ("").
client_secret>
Scope that rclone should use when requesting access from drive.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
1 / Full access all files, excluding Application Data Folder.
\ "drive"
2 / Read-only access to file metadata and file contents.
\ "drive.readonly"
/ Access to files created by rclone only.
3 | These are visible in the drive website.
| File authorization is revoked when the user deauthorizes the app.
\ "drive.file"
/ Allows read and write access to the Application Data folder.
4 | This is not visible in the drive website.
\ "drive.appfolder"
/ Allows read-only access to file metadata but
5 | does not allow any access to read or download file content.
\ "drive.metadata.readonly"
scope> 2
ID of the root folder
Leave blank normally.
Fill in to access "Computers" folders (see docs), or for rclone to use
a non root folder as its starting point.
Enter a string value. Press Enter for the default ("").
root_folder_id>
Service Account Credentials JSON file path
Leave blank normally.
Needed only if you want use SA instead of interactive login.
Leading ~ will be expanded in the file name as will environment variables such as ${RCLONE_CONFIG_DIR}.
Enter a string value. Press Enter for the default ("").
service_account_file>
Edit advanced config? (y/n)
y) Yes
n) No (default)
y/n>
Remote config
Use auto config?
Say Y if not sure
Say N if you are working on a remote or headless machine
y) Yes (default)
n) No
y/n> N
Please go to the following link: https://accounts.google.com/o/oauth2/auth?access_type=offline&client_id=202264815644.apps.googleusercontent.com&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&response_type=code&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive.readonly&state=2K-WjadN98dzSlx3rYOvUA
Log in and authorize rclone for access
Enter verification code> 4/1AX4XfWirusA-gk55nbbEJb8ZU9d_CKx6aPrGQvDJzybeVR9LOWOKtw_c73U
Configure this as a team drive?
y) Yes
n) No (default)
y/n>
[remote]
scope = drive.readonly
token = {"access_token":"ALTEREDARrdaM_TjUIeoKHuEMWCz_llH0DXafWh92qhGy4cYdVZtUv6KcwZYkn4Wmu8g_9hPLNnF1Kg9xoioY4F1ms7i6ZkyFnMxvBcZDaEwEs2CMxjRXpOq2UXtWmqArv2hmfM9VbgtD2myUGTfLkIRlMIIpiovH9d","token_type":"Bearer","refresh_token":"1//0dKDqFMvn3um4CgYIARAAGA0SNwF-L9Iro_UU5LfADTn0K5B61daPaZeDT2gu_0GO4DPP50QoxE65lUi4p7fgQUAbz8P5l_Rcc8I","expiry":"2022-01-17T20:50:38.944524945Z"}
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
Current remotes:
Name Type
==== ====
remote drive
e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> q
pi@raspberrypi:~ $

References and related

This 7″ display is a little small, but it’s great to get you started. It’s $64 at Amazon: the Raspberry Pi 7″ Touch Screen Display.

Is your Pi Display mentioned above blanking out after a few seconds? I have just the solution in this post.

I have an older approach using qiv which I lost the files for, and my blog post got corrupted. Hence this new approach.

In this slightly more sophisticated approach, I make a greater effort to separate the photos in time. But I also make a whole bunch of other improvements as well. But it’s a lot more files so it may only be appropriate for a more seasoned RPi command-line user.

My advanced slideshow treatment is beginning to take shape. I just add to it while I develop it, so check it periodically if that is of interest. Raspberry Pi advanced photo frame.

Categories
Linux Python Raspberry Pi

A first taste of OpenCV on a Raspberry Pi 3

Intro
I’ve done a few things to do some vision processing with OpenCV on a Raspberry Pi 3. I am a rank amateur so my meager efforts will not be of much help to anyone else. My idea is that maybe this could be used on an FRC First Robotics team’s robot. Hence I will be getting into some tangential areas where I am more comfortable.

Even though this is a work in progress I wanted to get some of it down before I forget what I’ve done so far!

Tangential Stuff

Disable WiFi
You shouldn’t have peripheral devices with WiFi enabled. The Raspberry Pi 3 comes with built-in WiFi. Here’s how to turn it off.

Add the following line to your /boot/config.txt file:

dtoverlay=pi3-disable-wifi

Reboot.

If it worked you should only see the loopback and eth0 interfaces in response to the ip link command, something like this:

$ ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
link/ether b8:27:eb:3f:92:f3 brd ff:ff:ff:ff:ff:ff

Hardcode an IP address the simple-minded way
On a lark I decided to try the old-fashioned method I first used on Sun Solaris, or was it even DEC Ultrix? That is, ifconfig. I thought it was deprecated but it works well enough for my purpose.

So something like

$ sudo ifconfig eth0 192.168.1.160

does the job, as long as the network interface is up and connected.
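
The modern, non-deprecated equivalent, assuming a /24 network, would be something like:

$ sudo ip addr add 192.168.1.160/24 dev eth0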

Autolaunch a VNC Server so we can haul the camera image back to the driver station
$ vncserver -geometry 640x480 -Authentication=VncAuth :1

Launch our python-based opencv program and send output to VNC virtual display

$ export DISPLAY=:1
$ /home/pi/.virtualenvs/cv/bin/python green.py > /tmp/green.log 2>&1 &

The above was just illustrative. What I actually have is a single script, launcher.sh which puts it all together. Here it is.

#!/bin/sh
# DrJ
sleep 2
# set a hard-wired IP - this will have to change!!!
sudo ifconfig eth0 192.168.1.160
# launch small virtual vncserver on DISPLAY 1
vncserver -Authentication=VncAuth :1
# launch UDP server
$HOME/server.py > /tmp/server.log 2>&1 &
# run virtual env
cd $HOME
# don't need virtualenv if we use this version of python...
#. /home/pi/.profile
#workon cv
#
# now launch our python video capture program
#
export DISPLAY=:1
/home/pi/.virtualenvs/cv/bin/python green.py > /tmp/green.log 2>&1 &

OpenCV (Open Computer Vision)
opencv is a bear and you have to really work to get it onto a Pi 3. There is no apt-get install opencv. You have to download and compile the thing. There are many steps and few accurate documentation sources on the Internet as of this writing (January 2018).

I think this guide by Adrian is the best guide:

Install guide: Raspberry Pi 3 + Raspbian Jessie + OpenCV 3

However I believe I still ran into trouble and needed this cmake command instead of the one he provides:

cmake -D CMAKE_BUILD_TYPE=RELEASE \
        -D CMAKE_INSTALL_PREFIX=/usr/local \
        -D INSTALL_C_EXAMPLES=OFF \
        -D ENABLE_PRECOMPILED_HEADERS=OFF \
        -D INSTALL_PYTHON_EXAMPLES=ON \
        -D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib-3.1.0/modules \
        -D BUILD_EXAMPLES=ON ..

I also replaced opencv references to version 3.0.0 with 3.1.0.

I also don’t think I got make -j4 to work. Just plain make.

An interesting getting started tutorial on images, opencv, and python:

http://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_gui/py_image_display/py_image_display.html#display-image

Simplifying launch of VNC Viewer
I wrote a simple-minded DOS script which launches UltraVNC with a password, so a double-click should work.

Here’s a DOS .bat file to launch the UltraVNC viewer by double-clicking on it.

if not "%minimized%"=="" goto :minimized
set minimized=true
start /min cmd /C "%~dpnx0"
goto :EOF
:minimized
c:\apps\ultravnc\vncviewer -password raspberry 192.168.1.160:1

I’m sure there’s a better way but I don’t know it.

The setup
We have a USB camera plugged into the Pi.
A green disc LED light.
A green filter over the camera lens.
A target with two parallel strips of retro-reflective tape we are trying to suss out from everything else.
Some sliders to control the sensitivity of our color matching.
The request to analyze the video in opencv as well as display it on the driver station.
Have opencv calculate the pixel distance (“correction”) from image center of the “target” (the two parallel strips).
Send this correction via a UDP server to any client who wants to know the correction.

Here is our current python program green.py which does these things.

import Tkinter as tk
from threading import Thread,Event
from multiprocessing import Array
from ctypes import c_int32
import cv2
import numpy as np
import sys
#from Tkinter import *
#cap = cv2.VideoCapture(0)
global x
global f
x = 1
y = 1
f = "green.txt"
 
class CaptureController(tk.Frame):
    NSLIDERS = 7
    def __init__(self,parent):
        tk.Frame.__init__(self)
        self.parent = parent
 
        # create a synchronised array that other threads will read from
        self.ar = Array(c_int32,self.NSLIDERS)
 
        # create NSLIDERS Scale widgets
        self.sliders = []
        for ii in range(self.NSLIDERS):
            # through the command parameter we ensure that the widget updates the sync'd array
            s = tk.Scale(self, from_=0, to=255, length=650, orient=tk.HORIZONTAL,
                         command=lambda pos,ii=ii:self.update_slider(ii,pos))
            if ii == 0:
                s.set(0)  #green min
            elif ii == 1:
                s.set(0)
            elif ii == 2:
                s.set(250)
            elif ii == 3:
                s.set(3)  #green max
            elif ii == 4:
                s.set(255)
            elif ii == 5:
                s.set(255)
            elif ii == 6:
                s.set(249)  #way down below
            s.pack()
            self.sliders.append(s)
 
        # Define a quit button and quit event to help gracefully shut down threads
        tk.Button(self,text="Quit",command=self.quit).pack()
        self._quit = Event()
        self.capture_thread = None
 
    # This function is called when each Scale widget is moved
    def update_slider(self,idx,pos):
        self.ar[idx] = c_int32(int(pos))
 
    # This function launches a thread to do video capture
    def start_capture(self):
        self._quit.clear()
        # Create and launch a thread that will run the video_capture function
#        self.capture_thread = Thread(cap = cv2.VideoCapture(0), args=(self.ar,self._quit))
        self.capture_thread = Thread(target=video_capture, args=(self.ar,self._quit))
        self.capture_thread.daemon = True
        self.capture_thread.start()
 
    def quit(self):
        self._quit.set()
        try:
            self.capture_thread.join()
        except TypeError:
            pass
        self.parent.destroy()
 
# This function simply loops over and over, printing the contents of the array to screen
def video_capture(ar,quit):
    print ar[:]
    cap = cv2.VideoCapture(0)
    Xerror = 0
    Yerror = 0
    XerrorStr = '0'
    YerrorStr = '0'
    while not quit.is_set():
        # the slider values are all readily available through the indexes of ar
        # i.e. w1 = ar[0]
        # w2 = ar[1]
        # etc.
        # Take each frame
        _, frame = cap.read()
        # Convert BGR to HSV
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        # define range of blue color in HSV
        lower_green = np.array([ar[0],ar[1],ar[2]])
        upper_green = np.array([ar[3],ar[4],ar[5]])
        # Threshold the HSV image to get only green colors
        mask = cv2.inRange(hsv, lower_green, upper_green)
        # Bitwise-AND mask and original image
        res = cv2.bitwise_and(frame,frame, mask= mask)
        cv2.imshow('frame', frame)
#        cv2.imshow('mask',mask)
#        cv2.imshow('res',res)
        #------------------------------------------------------------------
        img = cv2.blur(mask,(5,5))   #filter (blur) image to reduce errors
        cv2.imshow('img',img)
        ret,thresh = cv2.threshold(img,127,255,0)
        im2,contours,hierarchy = cv2.findContours(thresh, cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
        print 'number of contours==640x480====================  ', len(contours)
        target=0
        if len(contours) > 0:
            numbercontours = len(contours)
            while numbercontours > 0:
                numbercontours = numbercontours -1  # contours start at 0
                cnt = contours[numbercontours]   #this is  getting the first contour found, could look at 1,2,3 etc
                x,y,w,h = cv2.boundingRect(cnt)
#
#---line below has the limits of the area of the target-----------------------
#
                #if w * h > 4200 and w * h < 100000:  #area of capture must exceed  to exit loop
                if h > 30 and w < h/3:  #area of capture must exceed  to exit loop
                    print ' X   Y  W  H  AREA      Xc  Yc      xEr yEr'
                    Xerror = (-1) * (320 - (x+(w/2)))
                    XerrorStr = str(Xerror)
                    Yerror = 240 - (y+(h/2))
                    YerrorStr = str(Yerror)
                    print  x,y,w,h,(w*h),'___',(x+(w/2)),(y+(h/2)),'____',Xerror,Yerror
                    break
 
#-------        draw horizontal and vertical center lines below
                cv2.line(img,(320,0),(320,480),(135,0,0),5)
                cv2.line(img,(0,240),(640,240),(135,0,0),5)
                displaySTR = XerrorStr + '  ' + YerrorStr
                font = cv2.FONT_HERSHEY_SIMPLEX
                cv2.putText(img,displaySTR,(10,30), font, .75,(255,255,255),2,cv2.LINE_AA)
                cv2.imshow('img',img)
# write to file for our server
                sys.stdout = open(f,"w")
                print 'H,V:',Xerror,Yerror
                sys.stdout = sys.__stdout__
                target=1
                #
                #--------------------------------------------------------------------
        if target==0:
                # no target found. print non-physical values out to a file
                sys.stdout = open(f,"w")
                print 'H,V:',1000,1000
                sys.stdout = sys.__stdout__
        k = cv2.waitKey(1) & 0xFF    #parameter is wait in milliseconds
        if k == 27:   # esc key on keyboard
            cap.release()
            cv2.destroyAllWindows()
            break
 
if __name__ == "__main__":
    root = tk.Tk()
    selectors = CaptureController(root)
    selectors.pack()
#    q = tk.Label(root, text=str(x))
#    q.pack()
    selectors.start_capture()
    root.mainloop()

Well, that was a big program by my standards.

Here’s the UDP server that goes with it. I call it server.py.

#!/usr/bin/env python
# inspired by https://gist.github.com/Manouchehri/67b53ecdc767919dddf3ec4ea8098b20
# first we get the client connection, then we read data from the file. This order is important so we get the latest, freshest data!
 
 
import socket
import re
 
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
 
server_address = '0.0.0.0'
server_port = 5005
 
server = (server_address, server_port)
sock.bind(server)
print("Listening on " + server_address + ":" + str(server_port))
 
while True:
# read up to 32 bytes from client
        payload, client_address = sock.recvfrom(32)
        print("Request from client: " + payload)
# get correction from file
        while True:
                with open('green.txt','r') as myfile:
                        data=myfile.read()
#H,V:  9 -14
                data = data.split(":")
                if len(data) == 2:
                        break
        sent = sock.sendto(data[1], client_address)

For development testing I wrote a UDP client to go along with that server. I called it recvudp.py.

#!/usr/bin/env python
import socket
UDP_IP = "127.0.0.1"
UDP_PORT = 5005
 
print "UDP target IP:", UDP_IP
print "UDP target port:", UDP_PORT
 
sock = socket.socket(socket.AF_INET, # Internet
                 socket.SOCK_DGRAM) # UDP
# need to send one newline minimum to receive server's message...
MESSAGE = "correction";
sock.sendto(MESSAGE, (UDP_IP, UDP_PORT))
# get data
data, addr = sock.recvfrom(1024) # buffer size is 1024 bytes
print "received message:", data

Problems
Lag is bad. Probably 1.5 seconds or so.
Video is green, but then we designed it that way.
Bandwidth consumption of VNC is way too high. We’re supposed to be under 7 Mbps and it is closer to 12 Mbps right now.
Probably won’t work under the bright lights of an arena or gym.
Sliders should be labelled.
Have to turn a pixel correction into an angle.
Have to suppress initial warning about ssh default password.

To be improved, hopefully…

Categories
Python Web Site Technologies

Superimpose crosshairs plus grid marks on an image

Intro
My previous effort showed how to superimpose just a crosshairs on an image. That I was able to do within CSS. When it came time to add tick marks to those crosshairs I felt that CSS was getting too complicated. I had only a short time and I honestly couldn’t figure it out.

So I decided on an alternate approach of superimposing two images, one of which has transparency. Then it would come down to creating a suitable image that has the crosshairs and tick marks.

The details
I felt this was doable in python and it was, but I needed to add the Python Imaging Library (PIL). On CentOS I simply did a

$ sudo yum install python-imaging

The python program
Here is my python program.

# from http://stackoverflow.com/questions/8376359/how-to-create-a-transparent-gif-or-png-with-pil-python-imaging
# drJ 3/2016
# install PIL: yum install python-imaging
#
from PIL import Image, ImageDraw
width = 640
height = 480
halfwidth=width/2
halfheight=height/2
 
ticklength=10
starttickx=halfwidth - ticklength/2
endtickx=halfwidth + ticklength/2
startticky=halfheight - ticklength/2
endticky=halfheight + ticklength/2
 
img = Image.new('RGBA',(width, height))
 
draw = ImageDraw.Draw(img)
# crosshairs
draw.line((halfwidth, 0, halfwidth, height), fill=252, width=2)
draw.line((0, halfheight, width, halfheight), fill=252, width=2)
# tick marks
 
def my_range(start, end, step):
    while start <= end:
        yield start
        start += step
 
# top to bottom ticks
for y in my_range(0, 480, 30):
    draw.line((starttickx, y, endtickx, y), fill=252, width=2)
# left to right ticks
for x in my_range(20, 640, 30):
    draw.line((x, startticky, x, endticky), fill=252, width=2)
 
img.save('crosshairs.gif', 'GIF', transparency=0)

The web page
We actually have two images side-by-side because we have two cameras.

<html>
<head>
<style type="text/css">
<!-- DrJ 1/2016
Note that Firefox's implementation of linear-gradient is broken and requires us to
use repeat linear gradient 
Some fairly lousy documentation on repeat linear gradient is here:
https://developer.mozilla.org/en-US/docs/Web/CSS/repeating-linear-gradient
 
-->
 
#jpg1 {
 
    position:absolute;
 
    top:0;
 
    left:0;
 
    z-index:1;
 
}
 
#gif2 {
 
    position:absolute;
 
    top:10;
 
    left:11;
 
 
    /*
 
    set top and left here
 
    */
 
    z-index:1;
}
 
#gif3 {
 
    position:absolute;
 
    top:10;
 
    left:655;
 
 
    /*
 
    set top and left here
 
    */
 
    z-index:1;
}
 
</style></head>
<body>
<table><tr><td>
<div>
  <img id="jpg1" src="http://dcs-931l-ball/mjpeg.cgi" width="640" height="480" />
  <img id="gif2" src="crosshairs.gif" />
</div>
<td>
  <img src="http://dcs-931l-target/mjpeg.cgi" width="640" height="480" />
  <img id="gif3" src="crosshairs.gif" />
</tr></table>
</body></html>

To be continued…

Categories
Network Technologies Python

Tips on using scapy for custom IP packets

Intro
scapy is an IP packet customization tool that keeps coming up in my searches so I could no longer avoid it. I was unnecessarily intimidated because it was built around python and the documentation is a little strange. But I’m warming up to it now…

The details
Download and install
CentOS
Just go to scapy.net and it will offer the .zip file download. I got scapy-2.3.1.zip. Then you can unzip it, change directory to the scapy-2.3.1 sub-directory and run

$ sudo python setup.py install

Debian systems such as Raspberry Pi
Simple. It’s just:

$ sudo apt-get install python-scapy

Usage modes
scapy can be called from within python, but if you’re afraid to do that like I am, you can run it from the command line which simply throws you into a python shell. I’m finding that a lot more comfortable as I slowly learn python syntax and some useful shortcuts.

Example 1
The background
Let’s cut to the chase and do something hard first. Remember how we got those Cisco Jabber packets with DSCP set, causing Cisco Jabber to not work for some users? The long-term solution according to that post is to turn off the DSCP flag for all packets on the Internet router. So we want to be able to generate packets under our control with that flag set so we can see if we’ve managed to turn it off correctly.

The DSCP value occupies the first 6 bits of the 8-bit tos field. The packets we got from Cisco had a DSCP of 0x2e, which is Expedited Forwarding (EF), and if you do the math (shift left two bits: 0x2e << 2 = 0xb8) that corresponds to a tos of 0xb8, which in decimal is 184.

$ sudo scapy
>>> sr(IP(dst="50.17.188.196",tos=184)/TCP(dport=80,sport=4025))

Begin emission:
....Finished to send 1 packets.
.*
Received 6 packets, got 1 answers, remaining 0 packets
(<Results: TCP:1 UDP:0 ICMP:0 Other:0>, <Unanswered: TCP:0 UDP:0 ICMP:0 Other:0>)
>>>

Instead of the call to sr you can simply use send. Breaking this down, I’m testing against my drjohns server with IP 50.17.188.196. tos is a property of an IP packet so it’s included as a keyword argument to the IP function. The “/” following the IP function is funny syntax, but it stacks the next protocol layer (TCP here) on top of the IP layer. In the TCP section I used keyword arguments to set a source port of 4025 and a destination port of 80. What I observed is that this sends a SYN packet even though I didn’t explicitly say so.
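
Because the scapy shell is just python, you can also build the packet first and inspect it before sending – a quick check using the same packet as above:

>>> pkt = IP(dst="50.17.188.196",tos=184)/TCP(dport=80,sport=4025)
>>> hex(pkt.tos)
'0xb8'
>>> pkt.show()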

Want to have a random source port like “real” packets? Then use this:

$ >>> sr(IP(dst="50.17.188.196",tos=184)/TCP(dport=80,sport=RandShort()))

Look for it
I know tcpdump better so I look for my packet with that tool like this:

$ sudo tcpdump -v -n -i eth0 host 71.2.39.115 and port 80

tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
19:33:29.749170 IP (tos 0xb8, ttl 39, id 1, offset 0, flags [none], proto TCP (6), length 44)
    71.2.39.115.partimage > 10.185.21.116.http: Flags [S], cksum 0xd97b (correct), seq 0, win 8192, options [mss 1460], length 0
19:33:29.749217 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 44)
    10.185.21.116.http > 71.2.39.115.partimage: Flags [S.], cksum 0x3e39 (correct), seq 3026513916, ack 1, win 5840, options [mss 1460], length 0
19:33:29.781193 IP (tos 0x0, ttl 41, id 19578, offset 0, flags [DF], proto TCP (6), length 40)

Interpretation
Our tos was wiped clean by the time our generated packet was received by Amazon AWS. This was a packet I sent from my home using my Raspberry Pi. So likely my ISP CenturyLink is removing QOS from packets its residential customers send out. With some ISPs and business class service I have seen the tos field preserved exactly. When sent from Amazon AWS I saw the field value altered, but not set to 0!
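
A related trick: since DSCP occupies the top six bits of the tos byte, you can have tcpdump show only marked packets. This filter is my own habit, not part of the original debugging:

$ sudo tcpdump -v -n -i eth0 'ip and (ip[1] & 0xfc) != 0'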

Example 2, ping
>>> sr(IP(dst="8.8.8.8")/ICMP())

Begin emission:
Finished to send 1 packets.
.*
Received 2 packets, got 1 answers, remaining 0 packets
(<Results: TCP:0 UDP:0 ICMP:1 Other:0>, <Unanswered: TCP:0 UDP:0 ICMP:0 Other:0>)

Getting info on return packet
$ >>> sr1(IP(dst="drjohnstechtalk.com",tos=184)/TCP(dport=80,sport=RandShort()))

Begin emission:
...............................................................................................Finished to send 1 packets.
...........................................*
Received 139 packets, got 1 answers, remaining 0 packets
<IP  version=4L ihl=5L tos=0x0 len=44 id=0 flags=DF frag=0L ttl=25 proto=tcp chksum=0xe1d7 
src=50.17.188.196 dst=144.29.1.2 options=[] |<TCP  sport=http dport=17176 seq=3590570804 ack=1 dataofs=6L reserved=0L flags=SA window=5840 chksum=0x24b0 urgptr=0 options=[('MSS', 1460)] |<Padding  load='\x00\x00' |>>>

Note that this tells me about the return packet, which is a SYN ACK. So it tells me my SYN packet must have been sent from port 17176 (it changes every time because I’ve included sport=RandShort()). Each “.” in the response indicates a packet hitting the interface. I guess it’s promiscuously listening on the interface.

Hitting a closed port

$ >>> sr1(IP(dst="drjohnstechtalk.com",tos=184)/TCP(dport=81,sport=RandShort()))

Begin emission:
....................................................................................
.........................................................................................................Finished to send 1 packets.
...............................................................................
.................................................................................
.........................................................................................

Basically those dots are going to keep going forever until you type Ctrl-C, because there will be no return packet if something like a firewall is dropping your packet, or the returned packet.

Useful shortcuts
The scapy commands look pretty daunting at first, right? And too much trouble to type in, right? Just get it right once and you’re set. In typical network debugging you’ll be running such test packets multiple times. Because it’s basically a python shell, you can use the up arrow key to recall the previous command, or hit it multiple times to scroll through your previously typed commands. And even if you exit and return, it still remembers your command history, so you can hit the up-arrow to get back to your commands from previous sessions and previous days.

References and related
This scapy for dummies guide is very well written.
I’m finding this python tutorial really helpful.
DSCP and explanation of Cisco Jabber not working is described here.
A simpler tool which is fine for most things is nmap. I provide some real-world examples in this blog post.

Categories
Admin Python Raspberry Pi

Building a Four Monitor Media Show using Raspberry Pis

Intro
This is the paper a student wrote under my guidance.

Building a Four Monitor Media Show using Raspberry Pis

The first page
4-monitor-media-display

Link to full article

References
My write-up concerning our novel use of the Pi Presents program, which has a different emphasis and no pictures.

Categories
Python Raspberry Pi

Raspberry Pi visual alerting with Pibrella

I just came across this article: http://www.itworld.com/article/2919325/personal-technology/website-monitoring-with-a-raspberry-pi-for-nighttime-alerts.html

I wish I had thought of it first! I’m in a similar situation – I constantly get emails and TXT messages overnight so I have to put my phone in airplane mode. A visual alert might just help give me an early warning for some critical failures.

The Pibrella module that lights up and buzzes has its own web site, pibrella.com.

If I do anything with it I’ll be sure to post it here.

Categories
Linux Python Raspberry Pi

What I’m working on: a Raspberry Pi digital photo frame

Intro
The idea is that for a display kiosk let’s have a Raspberry Pi drive a display like one of those electronic picture frames. Power the thing up, perhaps plug in a flash drive, leave off the mouse and keyboard, but have a display attached, and get it to where it just automatically starts a slideshow without more fuss.

Some discarded options
Obviously this is not breaking new ground. You can find many variants of this on the Internet. An early-on approach that caught my eye is flickrframe. I read the source code to learn that at the end of the day it relies on the fbi program (frame buffer image viewer). I thought that perhaps I could rip out the part that connects to Flickr but it seemed like too much trouble. At the end of the day it’s just a question of whether to use fbi or not.

Then there’s Raspberry Pi slideshow. That’s a quite good write-up. That’s using pqiv. I think that solution is workable.

But the one I’m focusing on uses qiv. You would have thought that pqiv would rely on qiv (quick image viewer) but it appears not to. So qiv is a separate install. qiv has lots of switches so it’s been written with this kind of thing in mind it seems.

What it looks like so far

#!/bin/sh
# -f : full-screen; -R : disable deletion; -s : slideshow; -d : delay <secs>; -i : status-bar;
# -m : zoom; [-r : randomize]
# this doesn't handle filenames with spaces:
##cd /media; qiv -f -R -s -d 5 -i -m `find /media -regex ".+\.jpe?g$"`
# this one does:
if [ "$1" = "l" ]; then
# print out proposed filenames
  cd /media; find . -regex ".+\.jpe?g$"
else
  sleep 5
  cd /media; find . -regex ".+\.jpe?g$" -print0|xargs -0 qiv -f -R -s -d 5 -i -m
fi

The idea being, why not make a slideshow out of all the pictures found on a flash drive that’s been inserted into the Pi? That’s how a standard picture frame works after all. It’s a very convenient way to work with it. That’s the aim of the above script.

Requirements update
OK. Well this happens a lot in IT. We thought we were solving one problem but when we finally spoke with the visual arts team they had something entirely different in mind. They want to mix in movies as well. fbi, pqiv or qiv don’t handle movies. I have mplayer and vlc from my playing around with Raspberry Pi camera. mplayer runs like a dog on the movie files I tried, perhaps one frame update every two seconds. After more searching around I came across omxplayer. That actually works pretty well. It on the other hand doesn’t seem up to the task of handling a mixed multimedia stream of stills and movies. But it did handle the two movies types we had: .mov and .mp4 movie files. omxplayer is written specifically for the Pi so it uses its GPU for frame acceleration. mplayer just seems to rely on the CPU which just can’t keep up on a high-def quality movie. So as a result omxplayer will only play through a true graphical console. It doesn’t even bother you to get your DISPLAY environment variable set up correctly – it’s just going to send everything to the head display.

And when using my TV as the display, omxplayer put out the sound too, perfectly synchronized and of high quality.

I wondered whether we should kludge together qiv and omxplayer – letting one lapse and starting up the other to transition from a still to a movie – but I didn’t know how to make the transition smooth. So I searched around yet some more and found pipresents. I believe it is a python framework around omxplayer. It’s pretty sophisticated and yet free. It’s actually aimed at museums and can include reactions to pressed buttons, as you have at museum displays. So far we got the example media show to loop through – it demonstrates a high-quality short movie and a still plus some captions at the beginning.

Pipresents isn’t perfect however
I quickly found some problems with pipresents, so I went the official route and posted them to the github site, not really knowing what to expect. The first issue is that you are not allowed to import .mov files! That makes no sense since omxplayer plays them. So I posted this bug, and that very same day the author emailed me back and explained that you simply edit pp_editor.py line 32 and add .mov as an additional video file type! Sure enough, that did it. Then I found that it wasn’t downsampling my images. These days everyone has a camera or phone that takes multi-megapixel images far exceeding a cheap display’s 1280×1024 resolution, so you only see a small portion of your jpeg. I just assumed pipresents would downsample these large pictures because other packages like qiv do it so readily. Again the same day, the author got back to me and said no, this isn’t supported in pipresents, but there is a solution: I should use pipresents-next! It’s officially in beta but just about ready for production release. I don’t think I’ll go that route, but it’s always nice to know your package continues to be developed. I’ve written my own downsampler, which I provide below.

Screen turns off
pipresents has a command-line switch, -b, to prevent screen blanking. But I think in general it’s better not to use that switch and instead disable screen blanking globally.

$ sudo nano /etc/kbd/config
– change BLANK_TIME=30 to BLANK_TIME=0
– and change POWERDOWN_TIME=30 to POWERDOWN_TIME=0
$ sudo nano /etc/lightdm/lightdm.conf
– below the [SeatDefaults] line create this line:
xserver-command=X -s 0 dpms
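
As a quick check for the current session you can also turn blanking off by hand with the standard X utility xset – I offer this as a hedge rather than something we relied on:

$ xset s off
$ xset -dpms

The first turns off the X screensaver; the second turns off display power management.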

How to get started with PiPresents
$ wget https://github.com/KenT2/pipresents/tarball/master -O - | tar xz

There should now be a directory ‘KenT2-pipresents-xxxx’ in your home directory. Rename the directory to pipresents:

$ mv KenT2* pipresents

To save time make sure you have two terminal windows open on your Pi and familiarize yourself with how to cut and paste text between them. Then from the one window you can:

$ cd pipresents; more README.md

while you execute the commands you’ve cut and paste from that window into the other, e.g.,

$ sudo apt-get install python-imaging
etc.

What happens if you forget to install the unclutter package
Not much. It’s just that you will see a mouse pointer in the center of the screen that won’t go away – not desirable for black box operation.
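
So you probably do want it. Installing it is the usual apt-get one-liner:

$ sudo apt-get install unclutter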

Python image downsizing program
This is also known as downsampling. Amazingly, you really don’t find a simple example program like this when you do an Internet search, at least not amongst the first few hits. I needed a program to reduce the large images to the size of the display while preserving the aspect ratio. My display, a run-of-the-mill Acer v173, is 1280 x 1024 pixels. Pretty standard stuff, right? Yet the Pi sees it as 1232 x 992 pixels! Whoever would have thought that possible? And with no apparent option to change that (at least from the GUI). So just put in the appropriate values for your display. This program handles one single image file. Also note that if it’s a small picture, meaning smaller than the display, it will be blown up to full screen and hence a thumbnail image will look pixelated. The program doesn’t distinguish small from large images, but I felt that is fine for the most part. So without further chatting, here it is. I called it resize3.py:

import Image
import sys
# DrJ 2/2015
# somewhat inspired by http://www.riisen.dk/dop/pil.html
# image file should be provided as argument
# Designed for Acer v173 display which the Pi sees as a strange 1232 x 992 pixel display
# though it really is a more run-of-the-mill 1280 x 1024
 
imageFile = sys.argv[1]
im1 = Image.open(imageFile)
 
def imgResize(im):
# Our display as seen by the Pi is a strange 1232 x 992 pixels
    width = im.size[0]
    height = im.size[1]
 
# If the aspect ratio is wider than the display screen's aspect ratio,
# constrain the width to the display's full width
    if width/float(height) > 1232.0/992.0:
      widthn = 1232
      heightn = int(height*1232.0/width)
    else:
      heightn = 992
      widthn  = int(width*992.0/height)
 
    im5 = im.resize((widthn, heightn), Image.ANTIALIAS) # best down-sizing filter
 
    im5.save("resize/" + imageFile)
 
imgResize(im1)

As I am not proficient in python I designed the above program to minimize file handling. That I do in a shell script which was much easier for me to write. Together they can easily handle downsampling all the image files in a particular directory. I call this script reduce.sh:

#!/bin/sh
echo "Look for the downsampled images in a sub-directory called resize
echo "JPEGs GIFs and PNGs are looked at in the current directory
mkdir resize 2>/dev/null
ls -1 *jpg *jpeg *JPG *png *PNG *gif *GIF 2>/dev/null|while read file; do
  echo downsampling $file
# downsample the image file
  python ~/resize3.py "$file"
done
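
Then, with resize3.py in your home directory as above, run it from whatever directory holds the pictures, e.g., the mounted flash drive (the path here is just an illustration):

$ cd /media/usbstick; sh ~/reduce.sh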

Stopping the slideshow
Sometimes you just need to stop the thing and that’s not so easy when you’ve got it in blackbox mode and running at startup.

If you’re lucky enough to have a keyboard attached to the Pi we found that

<Alt> F4

from the keyboard stops it.

No keyboard? We assigned our Pi a static IP address and leave an ethernet cable attached to it. Then we put a PC on the same subnet and ssh to it, e.g., using putty or teraterm. Then we run this simple kill script, which I call kill.sh:

#!/bin/sh
pkill -f pipresents.py
pkill omxplayer
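
You can even skip the interactive login and fire off the script remotely in one shot – here 10.31.42.1 stands in for whichever Pi you’re after:

$ ssh pi@10.31.42.1 ./kill.sh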

Digital photo frame project morphs to museum-style kiosk display
At times I was tempted to throw out this pipresents software but we persisted. It has a different emphasis from a digital photo frame where you plug in a USB stick and don’t care about the order the pictures are presented to you. pipresents is oriented towards museums and hence is all about curated displays, where you’ve pored over the presentation order and selected your mix of videos and images. And in the end that better matched our requirements.

The manual is wanting in clarity
It’s nice that a PDF manual is included, but it’s a pain to read it to extract the small bits of information you actually need. Here’s what you mostly need to know. An unattended slideshow mixture of images and videos is what he calls a mediashow. Make your own profile to hold your mediashow:

$ cd pipresents; python pp_editor.py

This brings up a graphical editor. Then follow these menus:

Profile|New from template|Mediashow

Choose a short easy-to-type name such as drjmedia.

Click on media.json and then you can start adding images and movies. These are known as “tracks.”

Remove the example track.

Add your own images and movies.

Do a Profile|Validate

There is no Save! Just kill it.

And to run it full screen from your home directory:

$ python pipresents/pipresents.py -ftop -pdrjmedia

Autostarting your mediashow
The instructions provided in the manual.pdf worked on my older Pi, but not on the B+ model Pis. So I repeat them here, modified to be more correct (the author doesn’t seem too comfortable with Linux). Manual.pdf has:

$ mkdir -p ~/.config/lxsession/LXDE
$ cd !$; echo "python pipresents/pipresents.py -ftop -pdrjmedia" > autostart
$ chmod +x autostart

And as I say, this worked on my model B Pi, but not my B+. The following discussion about autostarting programs is specific to operating systems which use the LXDE desktop environment, such as Raspbian. On the B+ this fairly different approach worked to get the media show to start automatically upon boot:

$ cd /etc/xdg/autostart

Create a file pipresents.desktop with these lines:

[Desktop Entry]
Type=Application
Name=pipresents
Exec=python pipresents/pipresents.py -ftop -pdrjmedia
Terminal=true

But I recommend this approach which also works:

$ mkdir ~/.config/autostart

Place a pipresents.desktop file in this directory with the contents shown above.

More sophisticated approach for better black box operations
We find it convenient to run pp_editor in a virtual display created by vnc. Then we still don’t need to attach a keyboard or mouse to the Pi. But the problem is that pipresents will also launch in the vnc session and really slow things down. This is a solution I worked out to have only one instance of pipresents run, even if other X sessions are launched on other displays. Note that this is a general solution and applies to any autostarted program.

The main idea is to test in a simple shell script if our display is the console (:0.0) or not.

I should interject I haven’t actually tested this but I think it’s going to work! Update: Yes, it did work!

Put startpipresents.sh in /home/pi with these contents:

#!/bin/bash
# DISPLAY environment variable is :0.0 for the console display
echo $DISPLAY|grep :0 > /dev/null 2>&1
if [ "$?" == "0" ]; then
#  matched. start pipresents in this xsession, but not any other one
  python pipresents/pipresents.py -ftop -pdrjmedia
fi

Then pipresents.desktop becomes this:

[Desktop Entry]
Type=Application
Name=pipresents
Exec=/home/pi/startpipresents.sh
Terminal=true

To install the vnc server:

$ sudo apt-get install tightvncserver

And to auto-launch it make a vnc.desktop file in ~/.config/autostart like this:

[Desktop Entry]
Type=Application
Name=vncserver
Exec=/home/pi/startvncserver.sh
Terminal=false

and put this in the file /home/pi/startvncserver.sh:

#!/bin/bash
# DISPLAY environment variable is :0.0 for the console display
echo $DISPLAY|grep :0 > /dev/null 2>&1
if [ "$?" == "0" ]; then
#  matched. start vncserver in this xsession, but not any other one
  vncserver
fi

You need to launch vncserver by hand once to establish the password.
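
That first manual run is simply:

$ vncserver :1

It prompts you to choose a password and creates virtual display :1, which is why the scripts below test for :1.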

And we may as well pre-launch the pp_editor because we’re likely to need that. So make a file in the home directory called startppeditor.sh with these contents:

#!/bin/bash
# DISPLAY environment variable is :1.0 for the vnc display
echo $DISPLAY|grep :1 > /dev/null 2>&1
if [ "$?" == "0" ]; then
#  matched. start ppeditor in this xsession, but not any other one
  python pipresents/pp_editor.py
fi

and in ~/.config/autostart a file called ppeditor.desktop with these contents:

[Desktop Entry]
Type=Application
Name=ppeditor
Exec=/home/pi/startppeditor.sh
Terminal=true

Similarly we can pre-launch an lxterminal because we’ll probably need one of those. Here’s an example startlxterminal.sh:

#!/bin/bash
# DISPLAY environment variable is :1.0 for the vnc display
echo $DISPLAY|grep :1 > /dev/null 2>&1
if [ "$?" == "0" ]; then
#  matched. start a large lxterminal in this xsession, but not any other one
  lxterminal --geometry=100x40
fi

and the autostart file:

[Desktop Entry]
Type=Application
Name=lxterminal
Exec=/home/pi/startlxterminal.sh
Terminal=true

A note about Powerpoint slides
With a Macbook we were able to read in a Powerpoint slideshow and export it to JPEG images, one image per slide. That was pretty convenient. We have done the same directly from Microsoft Powerpoint – it’s a save option.

A note about Mpeg4 videos
Some videos overwhelm these older Pis that we use. Maybe on the Pi 3 they’d be OK? A creative student handed us his two-minute movie in mpeg4 format. The Pi would never be able to display it. We learned you can reduce the resolution to get the Pi to display it. A student was doing this on his Macbook, but when he left I had to figure out a way myself.

The original mpeg4 video had resolution of 1920 x 1080. I wanted to have horizontal resolution of no more than 1232, but maybe even smaller, while preserving the aspect ratio (widescreen format).

I used good ole’ Microsoft Movie Maker. I don’t think it’s available any longer except from dodgy sites, but in the days of Windows 7 you could get it for free through Windows Live Update. Then, if you upgraded that Windows 7 PC to Windows 10, it allowed you to keep Movie Maker. That’s the only way I know of. Not that it’s a good program. It’s not. Very basic. But it does permit resizing a video stream to a custom resolution, so I have to give it that. I tried various resolutions and played them back. I finally settled on the smallest I tried: 800×450. In fact I couldn’t really tell the difference in video quality between all the samples, and of course 800×450 made for the smallest file. So we took that one. Fortunately, pipresents blew it up to occupy the full screen width (1232 pixels) while preserving the aspect ratio. So it looks great and no further action was needed.
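
If you don’t have Movie Maker, a tool like ffmpeg can do the same resize from the command line. We didn’t go this route, so treat it as a hedged sketch rather than the method we used:

$ ffmpeg -i original.mp4 -vf scale=800:450 resized.mp4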

The sound of silence
You want the video sound to come out the stereo mini-jack because you’re not using an HDMI monitor? PiPresents tries to send audio out through HDMI by default so you won’t hear the sounds if you have a VGA monitor. But you can change that. If you want to do this in raw omxplayer the switch which sends the sound out through the mini-jack is:

omxplayer -o local

In pipresents this option is available in the pp_editor. It’s a property of the profile. So you edit the profile, look for omx-audio, and change its value in the drop-down box from hdmi to local. That’s it!

A word about DHCP
We use a PC to connect to the four Pis. They are connected to a hub and there is an Ethernet cable connected to the hub and ready to be connected to a PC with an Ethernet port. The Pis all have private IP addresses: 10.31.42.1, 10.31.42.2, 10.31.42.3 and 10.31.42.4. For convenience, we set up a DHCP server on Pi 1 so that when the PC connects, it gets assigned an IP address on that subnet. DHCP is a service that dynamically assigns IP addresses. Turns out this is dead easy. You simply install dnsmasq (sudo apt-get install dnsmasq) and make sure it is enabled. That’s it! More sophisticated setups require modification of the file /etc/dnsmasq.conf, but for our simple use case that is not even needed – it just picks reasonable values and assigns an appropriate IP to the laptop that allows it to communicate to any of the four Pis.
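
For the record, if you ever do need to pin down dnsmasq’s behavior, a minimal /etc/dnsmasq.conf for our subnet might look like the following – illustrative values, not something we actually needed:

interface=eth0
dhcp-range=10.31.42.100,10.31.42.150,12h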

References and related
I worked on this project with a student: Building a Four Monitor Media Show using Raspberry Pis.
Pipresents has its own wordpress site.

LXDE has its own official site.
Read about a first look at the custom-built 7″ Raspberry Pi touch display in this blog post.

An alternative slideshow program to pipresents is to leverage qiv. I put something together and demo it in this post, but with a twist: I pull all the photos from my own Google Drive, where I store 40,000+ pictures!

Categories
Perl Python

Help with the NPR Weekend Puzzle – and Learning Python

Intro
As I mentioned in my review of Amazon’s Web Services Summit, Python seems to be the vogue scripting language these days. I decided I had better dust off the brain cells and try it out. I am an old Perl stalwart, but one senses that that language has hit a wall, the enthusiasm of ten years ago having begun to wane. One of my example Perl scripts is provided in my post about turning HP SiteScope into SiteScope Classic. After deciding I needed a Python project, only a few days passed before I came across what I thought would be a worthy challenge – simple yet non-trivial. That is the weekend puzzle as I understand it from NPR.

The Details
Start with a one-syllable, four-letter common boy’s name. Shift all the letters with the ROT-13 cipher and you arrive at a common two-syllable girl’s name. What are the names? I’m pretty sure I could have figured out how to do this in Perl. Python? If it lived up to its hype then it should also be up to the task, IMHO.

About the Weekend Puzzle
I listen to it most weekends. I often end up listening to it twice! I always consider whether it is suitable to be programmed or not. Most often I find it is not – I mean not suitable for the simple scripts and such that I write. I’m not talking about IBM’s Jeopardy-playing Watson! But this weekend I feel that the ROT-13 part of the challenge can definitely be aided by programming. Is that cheating? If I “give away” my solution as I am doing here, then I remove my unfair advantage in knowing a thing or two about programming!

The program – npr-test.py

#!/usr/bin/python
# drJ test script - 4/2012
# to get input arguments...
import sys
#inputName = raw_input("Enter name to be translated: ");
#print "Received input is : ", inputName
# and see http://www.tutorialspoint.com/python/string_maketrans.htm
from string import maketrans
intab = "abcdefghijklmnopqrstuvwxyz"
outtab = "nopqrstuvwxyzabcdefghijklm"
trantab = maketrans(intab, outtab)
inputName = sys.argv[1]
 
print inputName,inputName.translate(trantab);

So you see Python has this great maketrans built-in function that we’re able to use to implement the ROT-13 cipher. Of course veterans will probably know an even simpler way to accomplish this, perhaps with the pack/unpack functions which I also considered using.
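
Speaking of simpler ways: old Unix hands have long had a tr one-liner for ROT-13, which makes a nice sanity check on the Python output:

$ echo john | tr 'A-Za-z' 'N-ZA-Mn-za-m'

This turns john into wbua, matching what the script prints below.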

You call the script like this:

$ npr-test.py john

john wbua

I compiled a list of common four-letter names which I won’t fully divulge. They are in a file called names, one name per line. But how to quickly put it through this program? My old, bad, lazy Unix habit was to do this:

$ cat names|while read line; do
npr-test.py $line >> /tmp/results
done

I’ve got it memorized so I lose no time except typing the characters. But I also know the modern way is xargs.

xargs is the really hard part
I keep thinking that xargs is a good habit and one I should get into. But it’s not so easy. It took me a while to find the appropriate example. And then you run across the debate that holds that Gnu parallel is still the better tool. Anyhow, here’s the xargs way…

$ cat names|xargs -I {} npr-test.py {}|more

amit nzvg
amos nzbf
arno neab
axel nkry
...
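
And for completeness, the Gnu parallel equivalent – I’ll hedge that I only ran the xargs version myself:

$ cat names|parallel -k npr-test.py {}

The -k switch keeps the output in the same order as the input names.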

Conclusion
This little python program speaks volumes about the versatility of this language. It does have some really interesting properties and at first blush is worth getting to know better. No wonder others have embraced it. It also has helped us solve the weekend puzzle!

The stated answer? Glen and Tyra. You’ll see you can feed either one of these names into the program and come out with the other. I found it amusing that Will Shortz described the puzzle as “hard.” I didn’t think so – not with this program – but I was not the randomly selected winner, either, so I didn’t get a chance to explain how I did it.

The guy who did win explained that he simply wrote the ROT-13 version of the alphabet below the alphabet so he had a convenient look-up table. Clever.