Categories
Admin Linux

vsftpd Virtual Users stopped working after patching: the solution

Intro
vsftpd is a useful daemon which I use to run an FTPS service (FTP with TLS encryption). Since I am not part of the group that administers the server, it makes sense for me to maintain my own user list rather than rely on the system password database. vsftpd has a convenient feature, known as virtual users, which allows exactly this.

More details
In /etc/pam.d/vsftpd.virtual I have:

auth required pam_userdb.so db=/etc/vsftpd/vsftpd-virtual-user
account required pam_userdb.so db=/etc/vsftpd/vsftpd-virtual-user
session required pam_loginuid.so

In the file /etc/vsftpd/vsftpd-virtual-user.db (pam_userdb appends the .db extension to the db= path given above) I have my Berkeley database of users and passwords. See references on how to set this up.
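For reference, creating such a database is a one-liner once you have a plain text file of alternating username and password lines. This is only a sketch, assuming the db_load utility from the Berkeley DB tools is available and using logins.txt as a hypothetical input file:

# logins.txt holds alternating lines: username, then that user's password
$ db_load -T -t hash -f logins.txt /etc/vsftpd/vsftpd-virtual-user.db
$ chmod 600 /etc/vsftpd/vsftpd-virtual-user.db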

The point is that I had this all working last year – 2019 – on my SLES 12SP4 server.

Then it all broke
Then in early May 2020, all the FTPS logins stopped working. The status of the vsftpd service hinted that the file /lib64/security/pam_userdb.so could not be loaded. Sure enough, it was missing! I checked some of my other SLES 12 SP4 servers, some of which are on a different patch schedule. It was missing on some, and present on one. So I “borrowed” pam_userdb.so from the one server which still had it and put it onto my server in /lib64/security. All good. Service restored. But clearly that is a hack.

What’s going on
So I asked a Linux expert what’s going on and got a good explanation.

pam_userdb has been moved to a separate package, named pam-extra
 
1) http://lists.suse.com/pipermail/sle-security-updates/2020-April/006661.html
2) https://www.suse.com/support/update/announcement/2020/suse-ru-20200822-1/
 
Advisory ID: SUSE-RU-2020:917-1
Released: Fri Apr 3 15:02:25 2020
Summary: Recommended update for pam
Type: recommended
Severity: moderate
References: 1166510
This update for pam fixes the following issues:
 
- Moved pam_userdb into a separate package pam-extra. (bsc#1166510)
 
Installing the package pam-extra should resolve your issue.

I installed the pam-extra package using zypper, and, yes, it creates a /lib64/security/pam_userdb.so file!
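For completeness, the proper fix amounts to something like this (a sketch based on my SLES system; adjust for your package manager):

$ sudo zypper install pam-extra
$ sudo systemctl restart vsftpd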

And vsftpd works once more using supported packages.

Conclusion
Virtual users with vsftpd require pam_userdb.so. However, PAM wished to decouple itself from dependencies on external databases and the like, so that functionality was bundled into a separate package, pam-extra, more or less in the middle of a patch cycle. So if you had the problem I had, the solution may be as simple as installing the pam-extra package on your system. Although I experienced this on SLES, I believe it has happened or will happen on other Linux flavors as well.

This problem is poorly documented on the Internet.


References and related

https://www.cyberciti.biz/tips/centos-redhat-vsftpd-ftp-with-virtual-users.html

Categories
Admin Linux Network Technologies

Configure rsyslog to send syslog to SIEM server running TLS

Intro
You have to dig a little to find out about this somewhat obscure topic. You want to send syslog output, e.g., from the named daemon, to a syslog server with beefed up security, such that it requires the use of TLS so traffic is encrypted. This is how I did that.

The details
This is what worked for me:

...
# DrJ fixes - log local0 to DrJ's dmz syslog server - DrJ 5/6/20
# use local0 for named's query log, but also log locally
# see https://www.linuxquestions.org/questions/linux-server-73/bind-queries-log-to-remote-syslog-server-4175669371/
# @@ means use TCP
$DefaultNetstreamDriver gtls
$DefaultNetstreamDriverCAFile /etc/ssl/certs/GlobalSign_Root_CA_-_R3.pem
$ActionSendStreamDriver gtls
$ActionSendStreamDriverMode 1
$ActionSendStreamDriverAuthMode anon
 
local0.*                                @@(o)14.17.85.10:6514
#local0.*                               /var/lib/named/query.log
local1.*                                -/var/log/localmessages
#local0.*;local1.*                      -/var/log/localmessages
local2.*;local3.*                       -/var/log/localmessages
local4.*;local5.*                       -/var/log/localmessages
local6.*;local7.*                       -/var/log/localmessages

The above is the important part of my /etc/rsyslog.conf file. The SIEM server is running at IP address 14.17.85.10 on TCP port 6514. It is using a certificate issued by Globalsign. An openssl call confirms this (see references).
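The openssl check looks something like this (a sketch; the IP and port are the same SIEM values as in the config above):

$ openssl s_client -connect 14.17.85.10:6514 -showcerts

The certificate chain that comes back should show GlobalSign as the issuing CA, matching the CA file referenced in rsyslog.conf.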

Other gotchas
I am running on a SLES 15 server. Although it had rsyslog installed, it did not support TLS initially. I was getting a dlopen error. So I figured out I needed to install this module:

rsyslog-module-gtls
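On SLES that is roughly (a sketch):

$ sudo zypper install rsyslog-module-gtls
$ sudo systemctl restart rsyslog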

References and related
How to find the server’s certificate using openssl. Very briefly, use the openssl s_client submenu.

The rsyslog site itself probably has the complete documentation. Though I haven’t looked at it thoroughly, it seems very complete.

Want to know what syslog is? Howtonetwork has this very good writeup: https://www.howtonetwork.com/technical/security-technical/syslog-server/

Categories
Admin Apache CentOS Linux Security

Trying to upgrade WordPress brings a thicket of problems

Intro
WordPress tells me to upgrade to version 5.4. But when I try, it says nope, your version of php is too old. Now admittedly, I’m running on an ancient CentOS server, now at version 6.10, which I set up back in 2012 I believe.

I’m pretty comfortable with CentOS so I wanted to continue with it, but just on a newer version at Amazon. I don’t like being taken advantage of, so I also wanted to avoid those outfits which charge by the hour for providing CentOS, which should really be free. Those costs can really add up.

Lots of travails setting up my AWS image, and then…

I managed to find a CentOS amongst the community images. I chose centos-8-minimal-install-201909262151 (ami-01b3337aae1959300).

OK. Brand new CentOS 8 image, 8.1.1911 after patching it, which will be supported for 10 years. Surely it has the latest and greatest??

Well, I’m not so sure…

If only I had known

I really wish I had seen this post earlier. It would have been really, really helpful: https://blog.ssdnodes.com/blog/how-to-install-wordpress-on-centos-7-with-lamp-tutorial/

But I didn’t see it until after I had done all the work below the hard way. Oh well.

When I install php I get version 7.2.11. WordPress is telling me I need a minimum of php version 7.3. If I download the latest php, it tells me to download the latest apache. So I do. Version 2.4.43. I also install gcc, anticipating some compiling in my future…

But apache won’t even configure:

httpd-2.4.43]$ ./configure --enable-so
checking for chosen layout... Apache
checking for working mkdir -p... yes
checking for grep that handles long lines and -e... /usr/bin/grep
checking for egrep... /usr/bin/grep -E
checking build system type... x86_64-pc-linux-gnu
checking host system type... x86_64-pc-linux-gnu
checking target system type... x86_64-pc-linux-gnu
configure:
configure: Configuring Apache Portable Runtime library...
configure:
checking for APR... no
configure: error: APR not found.  Please read the documentation.
  --with-apr=PATH         prefix for installed APR or the full path to
                             apr-config
  --with-apr-util=PATH    prefix for installed APU or the full path to
                             apu-config
 
(apr-util configure)
checking for APR... no
configure: error: APR could not be located. Please use the --with-apr option.
 
try:
 
 ./configure --with-apr=/usr/local/apr
 
but
 
-D_GNU_SOURCE   -I/usr/local/src/apr-util-1.6.1/include -I/usr/local/src/apr-util-1.6.1/include/private  -I/usr/local/apr/include/apr-1    -o xml/apr_xml.lo -c xml/apr_xml.c && touch xml/apr_xml.lo
xml/apr_xml.c:35:10: fatal error: expat.h: No such file or directory
 #include <expat.h>
          ^~~~~~~~~
compilation terminated.
make[1]: *** [/usr/local/src/apr-util-1.6.1/build/rules.mk:206: xml/apr_xml.lo] Error 1

So I install expat header files:
$ yum install expat-devel
And then the make of apr-util goes through. Not sure yet whether this is the right approach, however.

So following php’s advice, I have:
$ ./configure --enable-so

checking for chosen layout... Apache
...
checking for pcre-config... false
configure: error: pcre-config for libpcre not found. PCRE is required and available from http://pcre.org/

So I install pcre-devel:
$ yum install pcre-devel
Now the apache configure goes through, but the make does not work:

/usr/local/apr/build-1/libtool --silent --mode=link gcc  -g -O2 -pthread         -o htpasswd  htpasswd.lo passwd_common.lo       /usr/local/apr/lib/libaprutil-1.la /usr/local/apr/lib/libapr-1.la -lrt -lcrypt -lpthread -ldl -lcrypt
/usr/local/apr/lib/libaprutil-1.so: undefined reference to `XML_GetErrorCode'
/usr/local/apr/lib/libaprutil-1.so: undefined reference to `XML_SetEntityDeclHandler'
/usr/local/apr/lib/libaprutil-1.so: undefined reference to `XML_ParserCreate'
/usr/local/apr/lib/libaprutil-1.so: undefined reference to `XML_SetCharacterDataHandler'
/usr/local/apr/lib/libaprutil-1.so: undefined reference to `XML_ParserFree'
/usr/local/apr/lib/libaprutil-1.so: undefined reference to `XML_SetUserData'
/usr/local/apr/lib/libaprutil-1.so: undefined reference to `XML_StopParser'
/usr/local/apr/lib/libaprutil-1.so: undefined reference to `XML_Parse'
/usr/local/apr/lib/libaprutil-1.so: undefined reference to `XML_ErrorString'
/usr/local/apr/lib/libaprutil-1.so: undefined reference to `XML_SetElementHandler'
collect2: error: ld returned 1 exit status
make[2]: *** [Makefile:48: htpasswd] Error 1

So I try configuring apr-util with expat built-in.

$ ./configure --with-expat=builtin --with-apr=/usr/local/apr

But when I do the make of apr-util I now get this error:

/usr/local/apr/build-1/libtool: line 7475: cd: builtin/lib: No such file or directory
libtool:   error: cannot determine absolute directory name of 'builtin/lib'
make[1]: *** [Makefile:93: libaprutil-1.la] Error 1
make[1]: Leaving directory '/usr/local/src/apr-util-1.6.1'
make: *** [/usr/local/src/apr-util-1.6.1/build/rules.mk:118: all-recursive] Error 1

From what I read, this new error occurs due to having --with-expat=builtin! So now what? So I get rid of that in my configure statement for apr-util. For some reason, apr-util now goes through and compiles. And so I try this for compiling apache24:

$ ./configure --enable-so --with-apr=/usr/local/apr

And then I make it. And for some reason, now it goes through. I doubt it will work, however… it kind of does work.

It threw the files into /usr/local/apache2, where there is a bin directory containing apachectl. I can launch apachectl start, and then access a default page on port 80. Not bad so far…

I still need to tie in php however.

I just wing it and try

$ ./configure --with-apxs2=/usr/local/apache2/bin/apxs --with-mysql

Hey, maybe for once their instructions will work. Nope.

configure: error: Package requirements (libxml-2.0 >= 2.7.6) were not met:

Package 'libxml-2.0', required by 'virtual:world', not found

Consider adjusting the PKG_CONFIG_PATH environment variable if you
installed software in a non-standard prefix.

So I guess I need to install libxml2-devel:

$ yum install libxml2-devel

Looks like I get past that error. Now it’s on to this one:

configure: error: Package requirements (sqlite3 > 3.7.4) were not met:

So I install sqlite-devel:
$ yum install sqlite-devel
Now my configure almost goes through, except, as I suspected, that was a nonsense argument:

configure: WARNING: unrecognized options: --with-mysql

It’s not there when you look for it! Why the heck did they – php.net – give an example with exactly that?? Annoying. So I leave it out. It goes through. Run make. It takes a long time to compile php! And this server is pretty fast. It compiles slower than apache or anything else I’ve compiled.

But eventually the compile finished. It added a LoadModule statement to the apache httpd.conf file. And, after I associated files with php extension to the php handler, a test file seemed to work. So php is beginning to work. Not at all sure about the mysql tie-in, however. In fact see further down below where I confirm my fears that there is no MySQL support when PHP is compiled this way.
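The association of php files with the php handler in httpd.conf looks roughly like this (a sketch; the module name and .so path depend on your PHP version and build):

LoadModule php7_module   modules/libphp7.so
<FilesMatch "\.php$">
    SetHandler application/x-httpd-php
</FilesMatch>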

Is running SSL asking too much?
Apparently, yes. I don’t think my apache24 has SSL support built-in:

Invalid command 'SSLCipherSuite', perhaps misspelled or defined by a module not included in the server configuration

So I try
$ ./configure --enable-so --with-apr=/usr/local/apr --enable-ssl

Not good…

checking for OpenSSL... checking for user-provided OpenSSL base directory... none
checking for OpenSSL version >= 0.9.8a... FAILED
configure: WARNING: OpenSSL version is too old
no
checking whether to enable mod_ssl... configure: error: mod_ssl has been requested but can not be built due to prerequisite failures

Where is it pulling that old version of openssl? Cause when I do this:

$ openssl version

OpenSSL 1.1.1c FIPS  28 May 2019

That’s not that old…

I also noticed this error:

configure: WARNING: Your APR does not include SSL/EVP support. To enable it: configure --with-crypto

So maybe I will re-compile APR with that argument.

Nope. APR doesn’t even have that argument. But apr-util does. I’ll try that.

Not so good:

configure: error: Crypto was requested but no crypto library could be enabled; specify the location of a crypto library using --with-openssl, --with-nss, and/or --with-commoncrypto.

I give up. Maybe it was a false alarm. I’ll try to ignore it.

So I install openssl-devel:

$ yum install openssl-devel

Then I try to configure apache24 thusly:

$ ./configure --enable-so --with-apr=/usr/local/apr --enable-ssl

This time at least the configure goes through – no ssl-related errors.

I needed to add the LoadModule statement by hand to httpd.conf, since that file was already there from my previous build and so did not get the statement added during my re-build with ssl support:

LoadModule ssl_module   modules/mod_ssl.so

Next error please
Now I have this error:

AH00526: Syntax error on line 92 of /usr/local/apache2/conf/extra/drjohns.conf:
SSLSessionCache: 'shmcb' session cache not supported (known names: ). Maybe you need to load the appropriate socache module (mod_socache_shmcb?).

I want results. So I just comment out the lines that have anything to do with the SSL cache.

And…it starts…and…it is listening on both ports 80 and 443 and…it is running SSL. So I think I cracked the SSL issue.

Switch focus to Mysql
I didn’t bother to find mysql. I believe people now use mariadb. So I installed the system one with a yum install mariadb. I connected as root and learned the version with a select version();

+-----------------+
| version()       |
+-----------------+
| 10.3.17-MariaDB |
+-----------------+
1 row in set (0.000 sec)

Is that recent enough? Yes! For once we skate by comfortably. The WordPress instructions say:

MySQL 5.6 or MariaDB 10.1 or greater

I set up apache. I try to access the WordPress setup but instead get this message:

Forbidden
 
You don't have permission to access this resource.

Every page I try gives this error.

The apache error log says:

client denied by server configuration: /usr/local/apache2/htdocs/

Not sure where that’s coming from. I thought I supplied my own documentroot statements, etc.

I threw in a Require all granted within the Directory statement and that seemed to help.
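The kind of snippet that does the trick looks roughly like this (a sketch; substitute your own DocumentRoot path):

<Directory "/web/drjohns/blog">
    Options FollowSymLinks
    AllowOverride All
    Require all granted
</Directory>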

PHP/MySQL communication issue surfaces
Next problem is that PHP wasn’t compiled correctly it seems:

Your PHP installation appears to be missing the MySQL extension which is required by WordPress.

So I’ll try to re-do it. This time I am trying these arguments to configure:
$ ./configure --with-apxs2=/usr/local/apache2/bin/apxs --with-mysqli

Well, I’m not so sure this worked. Trying to setup WordPress, I access wp-config.php and only get:

Error establishing a database connection

This is roll-up-your-sleeves time. It’s clear we are getting no breaks. I looked into installing PhpMyAdmin, but then I would need composer, which may depend on other things, so I lost interest in that rabbit hole. So I decide to simplify the problem. The suggested test is to write a php program like this, which I do, calling it tst2.php:

<?php
$servername = "localhost";
$username = "username";
$password = "password";
 
// Create connection
$conn = mysqli_connect($servername, $username, $password);
 
// Check connection
if (!$conn) {
    die("Connection failed: " . mysqli_connect_error());
}
echo "Connected successfully";
?>

Run it:
$ php tst2.php
and get:

PHP Warning:  mysqli_connect(): (HY000/2002): No such file or directory in /web/drjohns/blog/tst2.php on line 7
 
Warning: mysqli_connect(): (HY000/2002): No such file or directory in /web/drjohns/blog/tst2.php on line 7

Some quick research tells me that php does not know where the file mysql.sock is to be found. I search for it:

$ sudo find / -name mysql.sock

and it comes back as

/var/lib/mysql/mysql.sock

So…the prescription is to update a couple things in php.ini, which has been put into /usr/local/lib in my case because I compiled php with mostly default values. I add the location of the mysql.sock file in two places for good measure:

pdo_mysql.default_socket = /var/lib/mysql/mysql.sock
mysqli.default_socket = /var/lib/mysql/mysql.sock

And then my little test program goes through!

Connected successfully

Install WordPress
I begin to install WordPress, creating an initial user and so on. When I go back in I get a directory listing in place of the index.php. So I call index.php by hand and get a worrisome error:

Fatal error: Uncaught Error: Call to undefined function gzinflate() in /web/drjohns/blog/wp-includes/class-requests.php:947 Stack trace: #0 /web/drjohns/blog/wp-includes/class-requests.php(886): Requests::compatible_gzinflate('\xA5\x92\xCDn\x830\f\x80\xDF\xC5g\x08\xD5\xD6\xEE...'

I should have compiled php with zlib is what I determine it means… zlib and zlib-devel packages are on my system so this should be straightforward.

More arguments for php compiling
OK. Let’s be sensible and try to reproduce what I had done in 2017 to compile php, instead of finding and resolving mistakes one at a time.

$ ./configure --with-apxs2=/usr/local/apache2/bin/apxs --with-mysqli --disable-cgi --with-zlib --with-gettext --with-gdbm --with-curl --with-openssl

This gives this new issue:

Package 'libcurl', required by 'virtual:world', not found

I will install libcurl-devel in hopes of making this one go away.

Past that error, and onto this one:

configure: error: DBA: Could not find necessary header file(s).

I’m trying to drop the --with-gdbm and skip that whole DBA thing since the database connection seemed to be working without it. Now I see an openssl problem:

make: *** No rule to make target '/tmp/php-7.4.4/ext/openssl/openssl.c', needed by 'ext/openssl/openssl.lo'.  Stop.

Even if I get rid of openssl I still see a problem when running configure:

gawk: ./build/print_include.awk:1: fatal: cannot open file `ext/zlib/*.h*' for reading (No such file or directory)

Now I can ignore that error because configure exits with 0 status and make, but the make then stops at zlib:

SIGNALS   -c /tmp/php-7.4.4/ext/sqlite3/sqlite3.c -o ext/sqlite3/sqlite3.lo
make: *** No rule to make target '/tmp/php-7.4.4/ext/zlib/zlib.c', needed by 'ext/zlib/zlib.lo'.  Stop.

Reason for above php compilation errors
I figured it out. My bad. I had done a make distclean in addition to a make clean when I was re-starting with a new set of arguments to configure. I saw it advised somewhere on the Internet and didn’t pay much attention, but it seemed like a good idea. But I think what it was doing was wiping out the files in the ext directory, like ext/zlib.

So now I’m starting over, now with php 7.4.5 since they’ve upgraded in the meanwhile! With this configure command line (I figure I probably don’t need gdb):
./configure --with-apxs2=/usr/local/apache2/bin/apxs --with-mysqli --disable-cgi --with-zlib --with-gettext --with-gdbm --with-curl --with-openssl

Well, the php compile went through. However, I can’t seem to access any WordPress pages (all WordPress pages just hang). Yet my simplistic database connection test does work. Hmmm. OK. If they come up at all, they come up exceedingly slowly and without correct formatting.

I think I see the reason for that as well. The source of the wp-login.php page (as viewed in a browser window) includes references to former hostnames my server used to have. Of course fetching all those objects times out. And they’re the ones that provide the formatting. At this point I’m not sure where those references came from. Not from the filesystem, so they must be in the database as a result of an earlier WordPress install attempt. Amazon keeps changing my IP, you see. I see it is embedded into WordPress, in Settings | General Settings. I’m going to have this problem every time…

What I’m going to do is to create a temporary fictitious name, johnstechtalk, which I will enter in my hosts file on my Windows PC, in Windows\system32\drivers\etc\hosts, and also enter that name in WordPress’s settings. I will update the IP in my hosts file every time it changes while I am playing around. And now there’s even an issue doing this, which has always worked so reliably in the past. Well, I found I actually needed to override the IP for drjohnstechtalk.com in my hosts file. But it seems Firefox has moved on to using DNS over HTTPS, so it ignores the hosts file now, I think. Edge still uses it however, thankfully.

WordPress
So WordPress is basically functioning. I managed to install a few of my fav plugins: Akismet anti-spam, Limit Login Attempts, WP-PostViews. Some of the plugins are so old they actually require ftp. Who runs ftp these days? That’s been considered insecure for many years. But to accommodate I installed vsftpd on my server and ran it, temporarily.

Then McAfee on my PC decided that wordpress.org is an unsafe site, thank you very much, without any logs or pop-ups. I couldn’t reach that site until I disabled the McAfee firewall. Makes it hard to learn how to do the next steps of the upgrade.

More WordPress difficulties

WordPress is never satisfied with whatever version you’ve installed. You take the latest and two weeks later it’s demanding you upgrade already. My first upgrade didn’t go so well. Then I installed vsftpd. The upgrade likes to use your local FTP server – at least in my case. So for the FTP server I just put in 127.0.0.1. Kind of weird. Even still I get this error:

Downloading update from https://downloads.wordpress.org/release/wordpress-5.4.2-no-content.zip…

The authenticity of wordpress-5.4.2-no-content.zip could not be verified as no signature was found.

Unpacking the update…

Could not create directory.

Installation Failed

So I decided it was a permissions problem: my apache was running as user daemon (do a ps -ef to see running processes), while my wordpress blog directory was owned by centos. So I now run apache as user:group centos:centos. In case this helps anyone, the apache configuration commands to do this are:

User centos
Group centos

then I go to my blog directory and run something like:

chown -R centos:centos *

WordPress Block editor non-functional after the upgrade

When I did the SQL import from my old site, I killed the block editor on my new site! This was disconcerting. That little plus sign just would not show up on new pages, no posts, whatever. So I had basically killed WordPress 5.4. So I took a step backwards and started v 5.4 with a clean (empty) database, like a fresh install, to make sure the block editor works then. It did. Whew! Then I did an RTFM and deactivated my plugins on my old WordPress install before doing the mysql backup. I imported that SQL database, with a very minimal set of plugins activated, and, whew, this time I did not blow away the block editor.

CentOS bogs down

I like my snappy new CentOS 8 AMI 80% of the time. But that remaining 20% is very annoying. It freezes. Really badly. I ran a top until the problem happened. Here I’ve caught the problem in action:

top - 16:26:11 up 1 day, 21 min, 2 users, load average: 3.96, 2.93, 5.30
Tasks: 95 total, 1 running, 94 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.1 us, 2.6 sy, 0.0 ni, 0.0 id, 95.8 wa, 0.4 hi, 0.3 si, 0.7 st
MiB Mem : 1827.1 total, 63.4 free, 1709.8 used, 53.9 buff/cache
MiB Swap: 0.0 total, 0.0 free, 0.0 used. 9.1 avail Mem

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
44 root 20 0 0 0 0 S 1.6 0.0 12:47.94 kswapd0
438 root 0 -20 0 0 0 I 0.5 0.0 1:38.84 kworker/0:1H-kblockd
890 mysql 20 0 1301064 92972 0 S 0.4 5.0 1:26.83 mysqld
5282 centos 20 0 1504524 341188 64 S 0.4 18.2 0:06.06 httpd
5344 root 20 0 345936 1008 0 S 0.4 0.1 0:00.09 sudo
560 root 20 0 93504 6436 3340 S 0.2 0.3 0:02.53 systemd-journal
712 polkitd 20 0 1626824 4996 0 S 0.2 0.3 0:00.15 polkitd
817 root 20 0 598816 4424 0 S 0.2 0.2 0:12.62 NetworkManager
821 root 20 0 634088 14772 0 S 0.2 0.8 0:18.67 tuned
1148 root 20 0 216948 7180 3456 S 0.2 0.4 0:16.74 rsyslogd
2346 john 20 0 273640 776 0 R 0.2 0.0 1:20.73 top
1 root 20 0 178656 4300 0 S 0.0 0.2 0:11.34 systemd

So what jumps out at me is the 95.8% wait time – that ain’t good – and that a process whose name includes swap (kswapd0) is at the top of this list, combined with the fact that exactly 0 swap space is allocated. My linux skills may be 15 years out-of-date, but I think I better allocate some swap space (but why does it need it so badly??). On my old system I think I had done it. I’m a little scared to proceed for fear of blowing up my system.

So if you use drjohnstechtalk.com and it freezes, just come back in 10 minutes and it’ll probably be running again – this situation tends to self-correct. No one’s perfect.

Making a swap space

I went ahead and created a swap space right on my existing filesystem. I realized it wasn’t too hard once I found these really clear instructions: https://www.maketecheasier.com/swap-partitions-on-linux/

Some of the commands are dd to create an empty file, mkswap, swapon and swapon -s to see what it’s doing. And it really, really helped. I think sometimes mariadb needed to swap, and sometimes apache did. My system only has 1.8 GB of memory or so. And the drive is solid state, so it should be kind of fast. Because I used 1.2 GB for swap, I also extended my volume size when I happened upon Amazon’s clear instructions on how you can do that. Who knew? See below for more on that. If I got it right, Amazon also gives you more IO for each GB you add. I’m definitely getting good response after this swap space addition.
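The gist of those instructions, as a sketch (the file name and size here are just examples; I used about 1.2 GB):

$ sudo dd if=/dev/zero of=/swapfile bs=1M count=1200
$ sudo chmod 600 /swapfile
$ sudo mkswap /swapfile
$ sudo swapon /swapfile
$ swapon -s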

An aside about i/o

In the old days I perfected  a way to study i/o using the iostat utility. You can get it by installing the sysstat package. A good command to run is iostat -t -c -m -x 5

Examining these three consecutive samples of output from running that command is very instructional:

Device r/s w/s rMB/s wMB/s rrqm/s wrqm/s %rrqm %wrqm r_await w_await aqu-sz rareq-sz wareq-sz svctm %util
xvda 2226.40 1408.00 9.35 5.54 1.00 0.20 0.04 0.01 2.37 5.00 10.28 4.30 4.03 0.25 90.14

07/04/2020 04:05:36 PM
avg-cpu: %user %nice %system %iowait %steal %idle
1.00 0.00 1.20 48.59 0.60 48.59

Device r/s w/s rMB/s wMB/s rrqm/s wrqm/s %rrqm %wrqm r_await w_await aqu-sz rareq-sz wareq-sz svctm %util
xvda 130.14 1446.51 0.53 5.66 0.60 0.00 0.46 0.00 4.98 8.03 11.47 4.15 4.01 0.32 51.22

07/04/2020 04:05:41 PM
avg-cpu: %user %nice %system %iowait %steal %idle
0.00 0.00 0.00 0.00 0.00 100.00

Device r/s w/s rMB/s wMB/s rrqm/s wrqm/s %rrqm %wrqm r_await w_await aqu-sz rareq-sz wareq-sz svctm %util
xvda 1.60 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.50 0.00 0.00 2.69 0.00 0.62 0.10

I tooled around in the admin panel (which previously had brought my server to its knees), and you can see the %util shot up to 90%, reads per second over 2000, writes per second 1400. So, really demanding. It’s clear my server would die if more than a few people were hitting it hard. And I may need some fine-tuning.

Success!

Given all the above problems, you probably never thought I’d pull this off. I worked in fits and starts – mostly when my significant other was away, because this stuff is a time suck. But, believe it or not, I got the new apache/openssl/apr/php/mariadb/wordpress/centos/amazon EC2 VPC/drjohnstechtalk-with-new-2020-theme working to my satisfaction. I have to pat myself on the back for that. So I pulled the plug on the old site, which basically means moving the elastic IP over from the old CentOS 6 site to the new CentOS 8 AWS instance. Since my site was so old, I had to first convert the elastic IP from type classic to VPC. It was not too obvious, but I got it eventually.

Damn hackers already at it

Look at the access log of your new apache server running your production WordPress. If you see like I did people already trying to log in (POST accesses for …/wp-login.php), which is really annoying because they’re all hackers, at least install the WPS Hide Login plugin and configure a secret login URL. Don’t use the default login.

Meanwhile I’ve decided to freeze out anyone who tries to access wp-login.php, because they can only be up to no good. So I created this script which I call wp-login-freeze.sh:

#!/bin/sh
# freeze hackers who probe for wp-login
# DrJ 6/2020
DIR=/var/log/drjohns
cd $DIR
while /bin/true; do
tail -200 access_log|grep wp-login.php|awk '{print $1}'|sort -u|while read line; do
echo $line
route add -host $line reject
done
sleep 60
done

Works great! Just do a netstat -rn to watch your ever-growing list of systems you’ve frozen out.

But xmlrpc is the worst…

Bots which invoke xmlrpc.php are the worst for little servers like mine. They absolutely bring it to its knees. So I’ve just added something similar to the wp-login freeze above, except it catches xmlrpc bots:

#!/bin/sh
# freeze hackers who are doing God knows what to xmlrpc.php
# DrJ 8/2020
DIR=/var/log/drjohns
cd $DIR
while /bin/true; do
# example offending line:
# 181.214.107.40 - - [21/Aug/2020:08:17:01 -0400] "POST /blog//xmlrpc.php HTTP/1.1" 200 401
tail -100 access_log|grep xmlrpc.php|grep POST|awk '{print $1}'|sort -u|while read line; do
echo $line
route add -host $line reject
done
sleep 30
done

I was still dissatisfied with seeing bots hit me up for 30 seconds or so, so I decided heck with it, I’m going to waste their time first. So I added a few lines to xmlrpc.php (I know you shouldn’t do this, but hackers shouldn’t do what they do either):

// DrJ improvements
if ($_SERVER['REQUEST_METHOD'] === 'POST') {
// just make bot suffer a bit... the freeze out is done by an external script
   sleep(25);
    //
}
// end DrJ enhancements

This freeze-out trick within xmlrpc.php was only going to work if the bots run single-threaded, that is, they run serially, waiting for one request to finish before sending the next. I’ve been running it for a couple days and have enthusiastically frozen out a few IPs. I can attest that the bots do indeed run single-threaded. So I typically get two entries in my access file to xmlrpc from a given bot, and then the bot is completely frozen out by the reject route which gets added.

Mid-term issues discovered months later

Well, I never needed to send emails from my server, until I did. And when I did I found I couldn’t. It used to work from my old server… From reading a bit I see WordPress uses PHP’s built-in mail() function, which has its limits. But my server did not have the mailx or postfix packages. So I did a

$ yum install  postfix mailx

$ systemctl enable postfix

$ systemctl start postfix

That still didn’t magically make WordPress mail work, although at that point I could send mail by hand from a spoofed address, which is pretty cool, like:

$ mailx -r "[email protected]" -s "testing email" [email protected] <<< "Test of a one-line email test. – drJ"

And I got it in my Gmail account from that sender. Cool.

Rather than wasting time debugging PHP, I saw a promising-looking plug-in, WP Mail SMTP, and installed it. Here is how I configured the important bits:

WP Mail SMTP settings

Another test from WordPress and this time it goes through. Yeah.

Hosting a second WordPress site and Ninja Forms brings it all down

I brushed off my own old notes on hosting more than one WordPress site on my server (it’s nice to be king): https://drjohnstechtalk.com/blog/2014/12/running-a-second-instance-of-wordpress-on-your-server/

Well, wouldn’t you know my friend’s WordPress site I was trying to host brought my server to its knees. Again. Seems to be a common theme. I was hoping it was merely hackers who’d discovered his new site and injected it with the xmlrpc DOS, because that would have been easy to treat. But no, no xmlrpc issues so far according to the access_log file. He uses more of the popular plugins like Elementor and Ninja Forms. Well, that Ninja Forms Dashboard is a killer. Reliably brings my server to a crawl. I even caught it in action from a running top and saw swap was the leading cpu-consuming process. And my 1.2 GB swap file was nearly full. So I created a second, larger swap file of 2 GB and did a swapon for that. Then I decommissioned my older swap file. Did you know you can do a swapoff? Yup. I could see the old one decreasing in use and the new one building up. And now the new one holds more than the old ever could – 1.4 GB. Now the Ninja Forms dashboard can be launched. Performance is once again OK.

So…hosting second WordPress site now resolved.

Updating failed. The response is not a valid JSON response.

So then he got that error after enabling permalinks. The causes for this are pretty well documented. We took the standard advice and disabled all plugins. Without permalinks we were fine. With them, the JSON error. I put the .htaccess file in place. Still no go. So unlike most advice, in my case, where I run my own web server, I must have goofed up the config and not enabled reading of the .htaccess file. Fortunately I had a working example in the form of my own blog site. I put all those apache commands which normally go into .htaccess into the vhost config file. All good.

Increasing EBS filesystem size causes worrisome error

As mentioned above I used some of the filesystem for swap so I wanted to enlarge it.

$ sudo growpart /dev/xvda 1
CHANGED: partition=1 start=2048 old: size=16773120 end=16775168 new: size=25163743,end=25165791
root@ip-10-0-0-181:~/hosting$ sudo lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 12G 0 disk
└─xvda1 202:1 0 12G 0 part /
root@ip-10-0-0-181:~/hosting$ df -k
Filesystem 1K-blocks Used Available Use% Mounted on
devtmpfs 912292 0 912292 0% /dev
tmpfs 935468 0 935468 0% /dev/shm
tmpfs 935468 16800 918668 2% /run
tmpfs 935468 0 935468 0% /sys/fs/cgroup
/dev/xvda1 8376320 3997580 4378740 48% /
tmpfs 187092 0 187092 0% /run/user/0
tmpfs 187092 0 187092 0% /run/user/1001
root@ip-10-0-0-181:~/hosting$ sudo resize2fs /dev/xvda1
resize2fs 1.44.6 (5-Mar-2019)
resize2fs: Bad magic number in super-block while trying to open /dev/xvda1
Couldn't find valid filesystem superblock.

The solution is to use xfs_growfs instead of resize2fs. And that worked!

$ sudo xfs_growfs -d /
meta-data=/dev/xvda1 isize=512 agcount=4, agsize=524160 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1
data = bsize=4096 blocks=2096640, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
data blocks changed from 2096640 to 3145467
root@ip-10-0-0-181:~/hosting$ df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 891M 0 891M 0% /dev
tmpfs 914M 0 914M 0% /dev/shm
tmpfs 914M 17M 898M 2% /run
tmpfs 914M 0 914M 0% /sys/fs/cgroup
/dev/xvda1 12G 3.9G 8.2G 33% /
tmpfs 183M 0 183M 0% /run/user/0
tmpfs 183M 0 183M 0% /run/user/1001
PHP found wanting by WordPress health status

Although my site seems to be humming along, now I have to find the more obscure errors. WordPress mentioned my site health has problems.

WordPress site health

I think gd is used for graphics. I haven’t seen any negative results from this, yet. I may leave it be for the time being.

Lets Encrypt certificate renewal stops working

This one is at the bottom because it only manifests itself after a couple months – when the web site certificate either expires or is about to expire. Remember, this is a new server. I was lazy, of course, and just brought over the .acme.sh directory from the old server, hoping for the best. I didn’t notice any errors at first, but I eventually observed that my certificate was not getting renewed, even though it had only a few days of validity left.

To see what’s going on I ran this command by hand:

"/root/.acme.sh"/acme.sh --debug --cron --home "/root/.acme.sh"

acme.sh new-authz error: {"type":"urn:acme:error:badNonce","detail":"JWS has no anti-replay nonce","status": 400}

seemed to be the most important error I noticed. The general suggestion for this is an acme.sh --upgrade, which I did run. But the nonce error persisted. It tries 20 times then gives up.

— warning: I know enough to get the job done, but not enough to write the code. Proceed at your own risk —

I read some of my old blogs and played with the command

"/root/.acme.sh"/acme.sh --issue -d drjohnstechtalk.com -w /web/drjohns

My Webroot is /web/drjohns by the way. Now at least there was an error I could understand. I saw it trying to access something like http://drjohnstechtalk.com/.well-known/acme-challenge/askdjhaskjh

which produced a 404 Not Found error. Note the http and not https. Well, I hadn’t put much energy into setting up my http server. In fact it even has a different webroot. So what I did was to make a symbolic link

ln -s /web/drjohns/.well-known /web/insecure

I re-ran the acme.sh --issue command and…it worked. Maybe if I had issued a --renew it would not have bothered using the http server at all, but I didn’t see that switch at the time. So in my crontab, instead of how you’re supposed to do it, I’m trying it with these two lines:

# Not how you're supposed to do it, but it worked once for me - DrJ 8/16/20
22 2 * * * "/root/.acme.sh"/acme.sh --issue -d drjohnstechtalk.com -w /web/drjohns > /dev/null 2>&1
22 3 16 * * "/root/.acme.sh"/acme.sh --update-account --issue -d drjohnstechtalk.com -w /web/drjohns > /dev/null 2>&1

The update-account is just for good measure so I don’t run into an account expiry problem which I’ve faced in the past. No idea if it’s really needed. Actually my whole approach is a kludge. But it worked. In two months’ time I’ll know if the cron automation also works.

Why kludge it? I could have spent hours and hours trying to get acme.sh to work as it was intended. I suppose with enough persistence I would have found the root problem.

2021 update. In retrospect

In retrospect, I think I’ll try Amazon Linux next time! I had the opportunity to use it for my job and I have to say it was pretty easy to set up a web server which included php and MariaDB. It feels like it’s based on Redhat, which I’m most familiar with. It doesn’t cost extra. It runs on the same small size on AWS. Oh well.

2022 update

I’m really sick of how far behind Red Hat is with their provided software. And since they’ve been taken over by IBM, how they’ve killed CentOS is scandalous. So I’m inclined to go to a Debian-based system for my next go-around, which is much more imminent than I ever expected it to be thanks to the discontinuation of support for CentOS. I asked someone who hosts a lot of WP sites and he said he’d use Ubuntu Server 22, PHP 8, MariaDB and Nginx. Boy. Guess I’m way behind. He says performance with PHP 8 is much better. I’ve always used apache but I guess I don’t really rely on it for too many fancy features.

References and related
This blog post is about 1000% better than my own if all you want to do is install WordPress on Centos: https://blog.ssdnodes.com/blog/how-to-install-wordpress-on-centos-7-with-lamp-tutorial/

Here is WordPress’s own extended instructions for upgrading. Of course this should be your starting point: https://wordpress.org/support/article/upgrading-wordpress-extended-instructions/

I’ve been following the php instructions: https://www.php.net/manual/en/install.unix.apache2.php

Before you install WordPress. Requirements and such.

This old article of mine has lots of good tips: Compiling apache 2.4

This is a great article about how Linux systems use swap space and how you can re-configure things: https://www.maketecheasier.com/swap-partitions-on-linux/

I found this guide both helpful and informative as well: https://www.howtogeek.com/455981/how-to-create-a-swap-file-on-linux/

Amazon has this clear article on the linux commands you run after you extend an EBS volume. they worked for me: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/recognize-expanded-volume-linux.html

My Centos 8 AMI is centos-8-minimal-install-201909262151 (ami-01b3337aae1959300)

My old Lets Encrypt article was helpful in straightening out my certificate errors.

Here’s the acme.sh installation guide for linux.

Categories
Admin Linux Network Technologies

Quick Tip: Powershell command to unblock a firewall port when running Windows Defender

Setup
I decided to run an X Server on my Windows 10 laptop. I only need it for Cognos gateway configuration, but when you need it, you need it. Of course an X Server listens on port 6000, so hosts outside of your PC have to be able to initiate a TCP connection to your PC with destination port 6000. So that port has to be open. The software I use for the X Server is MobaXterm (from Mobatek).

Here is the Powershell command to disable the block of TCP port 6000.

New-NetFirewallRule -DisplayName "MobaXterm Allow Incoming Requests" -Direction Inbound -LocalPort 6000 -Protocol TCP -Profile Domain -Action Allow

The Powershell window needs to be run as administrator. The change is permanent: it suffices to run it once.
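To verify the rule took, or to back it out later, something along these lines should do it (a sketch using the same display name):

Get-NetFirewallRule -DisplayName "MobaXterm Allow Incoming Requests"
Remove-NetFirewallRule -DisplayName "MobaXterm Allow Incoming Requests"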

Conclusion
And, because inquiring minds want to know, did it work? Yes, it worked and I could send my cogconfig X window to my Mobatek X Server. I had to look for a new Window. It was slow.

Categories
Linux Network Technologies

Network utilities for Windows

Intro
Today I came across a simple but useful tool which runs on Windows systems that will help determine if a remote host is listening on a particular port. I wanted to share that information.

The details
PortQry is attractive because of its simplicity, plus, it is supported and distributed by Microsoft themselves. The help section reads like this:

PortQry version 2.0
 
Displays the state of TCP and UDP ports
 
 
Command line mode:  portqry -n name_to_query [-options]
Interactive mode:   portqry -i [-n name_to_query] [-options]
Local Mode:         portqry -local | -wpid pid| -wport port [-options]
 
Command line mode:
 
portqry -n name_to_query [-p protocol] [-e || -r || -o endpoint(s)] [-q]
        [-l logfile] [-sp source_port] [-sl] [-cn SNMP community name]
 
Command line mode options explained:
        -n [name_to_query] IP address or name of system to query
        -p [protocol] TCP or UDP or BOTH (default is TCP)
        -e [endpoint] single port to query (valid range: 1-65535)
        -r [end point range] range of ports to query (start:end)
        -o [end point order] range of ports to query in an order (x,y,z)
        -l [logfile] name of text log file to create
        -y overwrites existing text log file without prompting
        -sp [source port] initial source port to use for query
        -sl 'slow link delay' waits longer for UDP replies from remote systems
        -nr by-passes default IP address-to-name resolution
            ignored unless an IP address is specified after -n
        -cn specifies SNMP community name for query
            ignored unless querying an SNMP port
            must be delimited with !
        -q 'quiet' operation runs with no output
           returns 0 if port is listening
           returns 1 if port is not listening
           returns 2 if port is listening or filtered
 
Notes:  PortQry runs on Windows 2000 and later systems
        Defaults: TCP, port 80, no log file, slow link delay off
        Hit Ctrl-c to terminate prematurely
 
examples:
portqry -n myserver.com -e 25
portqry -n 10.0.0.1 -e 53 -p UDP -i
portqry -n host1.dev.reskit.com -r 21:445
portqry -n 10.0.0.1 -o 25,445,1024 -p both -sp 53
portqry -n host2 -cn !my community name! -e 161 -p udp
...

The PortQry “install” consisted of unzipping a ZIP file, so, no install at all, and no special permissions needed, which is a plus in my book.

nmap
Of course there is always nmap. I never really got into it so much, but clearly you can go nuts with it. One advantage is that it is available on linux and MacOS as well. But in my opinion it is a heavyweight install.

References and related
PortQry

nmap

Some nmap examples I have used.

Categories
Admin Linux

Getting GNU screen to work on Windows 10 for a productive terminal multiplex environment

Intro
My jump server is getting old and they’re threatening to cut it off. A jump server is a server from which you launch CLI terminal sessions into your linux servers. Since my laptop has firewall access to all the same servers I wondered if I could build up a productive environment right within Windows 10 on my own laptop. For me this would be running GNU screen as a terminal multiplexer since I hop between terminal screens all day.

More details
Windows 10 is coming around to more fully integrating with Linux! It’s about time. WSL, the Windows Subsystem for Linux, is all about that. And things like the bash shell, Ubuntu and openSUSE Linux are available from the Windows store. But that was not an option for me. My organization has shut all that down.

So I thought back to my days as a Cygwin user those many years ago… Could I get GNU screen running within Cygwin environment on Windows 10? Well, yes, I can with just a few tweaks.

I think the initial Cygwin install required admin privileges, but once installed to run it does not.

Within Cygwin, screen is an optional package and you can run the setup program to search for and install it.

Here is my .screenrc file

defscrollback 4000
#change init sequence to not switch width
termcapinfo  xterm Z0=\E[?3h:Z1=\E[?3l:is=\E[r\E[m\E[2J\E[H\E[?7h\E[?1;4;6l
 
# Make the output buffer large for (fast) xterms.
termcapinfo xterm* OL=10000
 
# tell screen that xterm can switch to dark background and has function
# keys.
termcapinfo xterm 'VR=\E[?5h:VN=\E[?5l'
termcapinfo xterm 'k1=\E[11~:k2=\E[12~:k3=\E[13~:k4=\E[14~'
termcapinfo xterm 'kh=\E[1~:kI=\E[2~:kD=\E[3~:kH=\E[4~:kP=\E[H:kN=\E[6~'
 
# special xterm hardstatus: use the window title.
termcapinfo xterm 'hs:ts=\E]2;:fs=\007:ds=\E]2;screen\007'
 
#terminfo xterm 'vb=\E[?5h$<200/>\E[?5l'
termcapinfo xterm 'vi=\E[?25l:ve=\E[34h\E[?25h:vs=\E[34l'
 
# emulate part of the 'K' charset
termcapinfo xterm 'XC=K%,%\E(B,[\304,\\\\\326,]\334,{\344,|\366,}\374,~\337'
 
# xterm-52 tweaks:
# - uses background color for delete operations
termcapinfo xterm ut
 
#from https://stackoverflow.com/questions/359109/using-the-scrollwheel-in-gnu-screen
termcapinfo xterm* ti@:te@
 
escape ^\\ # changes escape sequence

Note that in my .screenrc I use <Ctrl-\> as my escape sequence, so, e.g., to pop to the previous screen it is <Ctrl-\> <Ctrl-\>. I’m not sure that’s standard but my fingers will remember that to my dying day. They probably still remember some of those EDT/TPU VAX editor commands to this day!

Compare and contrast
Here are my day 0 observations.

ssh, curl, nslookup and tracert are coming from the underlying Windows system (do a which curl to see that) so that means you get the dumb version your system has.

So there is no dig, and no nc or netcat.

touch, cat, mkdir and vi behave pretty normally. man pages are installed, which can be a help.

If you use a proxy, a funny thing can happen and your environment variables can get mixed. You may have inherited an HTTP_PROXY environment variable from the system, but the alias you copied from a linux jump server probably defines an http_proxy environment variable (lower case). And both can co-exist! As to which one curl would then use, who knows? Better just stick to working with the upper-case one and NOT define another in lower case.

For a while it looked like scrolling was not working at all when screen was running. Then I found that tip I reference at the bottom of my .screenrc file which makes scrolling work via the mouse’s scroll wheel, which isn’t too bad.

Old friends like ls, grep, echo and while (built-in bash command) are available however. dig can be installed from the bind-utils package.

A lot of other packages are optionally available, including a whole X-Windows environment, which I used to run in the past but hope to avoid this time around.

No crontabs, however (installing the cron daemon requires admin privileges), which kind of hurts.

Simple output redirection seems to work, as does job control, e.g.,

ping -t 8.8.8.8 > /dev/null 2>&1 &

Not sure why you’d want to run the above command, but this nice example shows that the /dev/null device exists, and the ping command is inherited from your Windows environment hence the -t option to run it indefinitely, and that it will create a background process which you can view and control with jobs / kill.

Now I typically move my laptop off the work environment each night, so all my ssh logins will be lost, unlike the jump server situation. But our jump server isn’t that stable anyway so no big loss I’d say…

I am sooo used to highlighting text in Teraterm, which is my current environment, and that being sufficient to put that text into the clipboard, that I keep doing that in this environment. But it doesn’t work. I have to use the CMD window convention of highlighting the text and then hitting ENTER to get it into the clipboard. Oops. That was because I had been launching Cygwin from a CMD window. Now I am launching from a proper Cygwin shortcut and simple text highlighting works, BUT, right-clicking to paste brings up a menu rather than just doing it! So there’s that difference now… Instead of right-click I can quickly paste the text by doing a SHIFT-Insert.

ssh will get you

By default you end up using the Windows 10-supplied ssh, and that works pretty well. But when you’re ready to advance and need to put something into a .ssh/config file, forget about it. In principle it’s possible in Windows 10, but it’s too complex. Just install the Cygwin ssh package. That in turn permits you the familiar facility where you can create a ~/.ssh/config file.

How to set your userid by default for your ssh logins

First make sure you install the Cygwin ssh package and are using that one. A which ssh should come back with /usr/bin/ssh.

My config file looks like this:

Host *
User drjohn

That sets my default userid to be drjohn on any random server I ssh to.

New ssh error pops up
Unable to negotiate with 50.17.188.196 port 22: no matching key exchange method found. Their offer: diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1

This only happened when I switched from my Windows ssh to the Cygwin one. This is, of course, when connecting to a system (ironically, a firewall) with an old image. I think the only solution to be able to access these old systems is to switch back to the Windows 10 ssh – after all we never got rid of it and it used to work. Since all my customary ssh’s are aliased, this works well enough. I just made an alias like this

alias oldFW='screen -t oldFW /cygdrive/c/windows/system32/openssh/ssh.exe [email protected]'

since on my system the Windows 10 openssh is installed there in the system32 folder.

How do you get multiple login sessions (shells) within your screen to the localhost?

Well, you can’t just do a su – and you probably don’t have an ssh daemon running locally, so this is more of a non-trivial question than it first appears.

I define a bunch of aliases. My alias for getting an additional shell on the Windows 10 machine is this:

alias local='screen -t localhost bash --login -i'

A word on package management
I don’t know why I was afraid of installing packages when I first tried Cygwin over a decade ago. Now for me that’s the key – to understand and practice installing packages because it’s actually really easy when you’re used to it.

The key is to simply keep your initial install setup program hanging around, setup-x86_64.exe. In my case it’s in my downloads directory. Example usage: I wondered if I could install a decent version of ping rather than continually suffer with the dumb DOS version. So, fire up the above-mentioned executable. Go through a few screens (where it remembers the answers from the initial install), then search for the package (yes, it’s there!), and select to install the most recent version from the drop-down. A few more clicks and it’s done and available in your path. It’s that easy… Not sure about uninstalling because you almost never need to do that. It seems maybe a thousand packages are available? So no, there’s no yum or zypper or rpm or apt-get, but who really needs those anyway?

As a concrete example, I am learning about SNMP. So I got something running on a Bluecoat proxy, and I wanted to see what I could see. The guide recommended using snmpwalk, which of course I did not have. So I learned which package it is in with a DDG search, then ran the Cygwin setup, found that package, installed it, and voila, there was snmpwalk in my path. And it worked, by the way. Easy peasy.
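For instance, a first snmpwalk against the proxy looks something like this (a sketch; the community string and IP are placeholders):

$ snmpwalk -v 2c -c public 10.0.0.5 system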

Creating your own scripts

If you have the funny situation, like me, where you had enough privileges to install Cygwin, perhaps by temporarily assigning your account the Admin role, but when you use it day-to-day, you do not have admin privileges, you will find yourself unable to create files in some of the system directories like /usr/local/bin – permission denied! But in your home directory you will be able to edit files.

So what I did is to create a bin directory under my home directory, where I plan to add my home-grown scripts such as mimeencode, and make sure my PATH includes this directory with a statement like

 export PATH=$PATH:${HOME}/bin

which I put in my .alias file, which in turn I source from .bashrc.
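The sourcing line in .bashrc is just something like this (a sketch):

# pull in personal aliases and PATH additions
if [ -f ~/.alias ]; then . ~/.alias; fi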

2021 update: The fate of the screen package

I read somewhere the screen utility which I love is beyond repair and will have to be replaced by something else. Too bad. I’ve used it for about 10 years now.

X Windows

In a previous iteration of Cygwin I had installed the X Server components though I left it out this time around. For an X Server running on my PC, which I do need from time-to-time, I use MobaXterm. Seems to work OK for my purposes, which are very minimal. But I prefer to use Cygwin over MobaXterm for the command line stuff I do.

Conclusion
GNU screen for Windows is indeed possible, but you gotta run it on top of Cygwin. It’s of interest that after all these years Cygwin is still viable on Windows 10. Cygwin can be run in a pretty lightweight fashion if you avoid the X-Windows stuff. There are some quirks but it is surprisingly linux-like at the end of the day. I believe it is really suitable as a replacement for a linux jump server. screen, for the uninitiated, is a terminal multiplexer, which means it makes it very fast for you to switch between multiple terminal windows.

Some things are a bit different.

I think I will use this both at work and at home… Nope! My home PC runs too darn slow to ever use the Cygwin environment. My work laptop has SSD which probably helps keep performance good.

It is possible to set up an ssh default user.

It is possible to create multiple local shells within one screen within one Cygwin terminal.

So it is really possible to have your Linux command line. I use it every day…

2022 update

WSL2 is the way to go now. The setup can be a little tricky, however, but it is worth it. You get a full hypervisor environment, not an emulator as you have with Cygwin. I write it up here.

References and related

(2022) These days, it’s better to skip Cygwin and go straight to a full VM using WSL2.
Here’s the GNU Cygwin home page: https://www.cygwin.com/

Install Cygwin by running https://www.cygwin.com/setup-x86_64.exe

A newbie’s guide to Cygwin and linux commands: Cygwin Cheat Sheet – Step-by-Step Guide on Installation and Use (pcwdld.com)

Interesting discussion: https://stackoverflow.com/questions/359109/using-the-scrollwheel-in-gnu-screen

If you have a linux jump server that runs screen, or just want to ssh to a linux server, teraterm can be a good choice (as opposed to putty or built-in ssh). These days it can be found here: https://osdn.net/projects/ttssh2/releases/

To have an X Server running locally, MobaXterm seems a good choice. It looks like it’s free: https://mobaxterm.mobatek.net/

Categories
Admin Linux Raspberry Pi

Raspberry Pi Recovery Mode or interrupting the boot process

Intro
If you installed Raspbian from the NOOBS distribution as I do, then you may occasionally “blow up” your installation as I just have! You have an out, sort of, short of re-imaging the disk, though with about the same impact.

To interrupt the boot process and enter recovery mode, attach a USB keyboard and repeatedly hit the Shift key. You should come to the NOOBS OS install selection screen. Just re-install Raspbian… But if you’re using WiFi, first configure your WiFi setup before re-installing Raspbian.

Symptoms
When I powered up, I got the initial multi-color screen. Then a two-line text message popped up – too quickly to be read, then a grayish screen, then it split into a lower and upper part, then both halves faded away and there it stayed… At that point it was not responsive to any keyboard inputs or mouse clicks.

Conclusion
While doing my advanced slide show and rotating display project, I somehow managed to blow up my OS. Finding the way to interrupt the boot-up was not so easy, so I am amplifying the answer on the Internet that worked for me: repeatedly hit the Shift key during the boot, until you see the NOOBS image selector screen.

Categories
Linux Network Technologies Raspberry Pi

OLD: Raspberry Pi photo frame using your pictures on your Google Drive

Intro

This posting is messed up. I’ll have to re-post. Working on it… Try this post instead.

All my spouse’s digital photo frames are either broken or nearly broken – probably she got them from garage sales. Regardless, they spend 99% of the time black. Now, since I had bought that Raspberry Pi PiDisplay awhile back, and it is underutilized, and I know a thing or two about linux, I felt I could create a custom photo frame with things I already have lying around – a Raspberry Pi 3, a PiDisplay, and my personal Google Drive. We make a point to copy all our cameras’ pictures onto the Google Drive, which we do the old-fashioned, by-hand way. After 17 years of digital photos we have about 40,000 of them, over 200 GB.

So I also felt obliged to create features you will never have in a commercial product, to make the effort worthwhile. I thought, what about randomly picking a few for display from amongst all the pictures, displaying that subset for a few days, and then moving on to a new randomly selected sample of images, etc? That should produce a nice review of all of them over time, eventually. You need an approach like that because you will never get to the end if you just try to display 40000 images in order!

The scripts
Here is the master file which I call master.sh.

#!/bin/sh
# DrJ 8/2019
# call this from cron once a day to refresh the random slideshow
RANFILE="random.list"
NUMFOLDERS=20
DISPLAYFOLDER="/home/pi/Pictures"
DISPLAYFOLDERTMP="/home/pi/Picturestmp"
SLEEPINTERVAL=3
DEBUG=1
STARTFOLDER="MaryDocs/Pictures and videos"
 
echo "Starting master process at "`date`
 
rm -rf $DISPLAYFOLDERTMP
mkdir $DISPLAYFOLDERTMP
 
#listing of all Google drive files starting from the picture root
if [ $DEBUG -eq 1 ]; then echo Listing all files from Google drive; fi
rclone ls remote:"$STARTFOLDER" > files
 
# filter down to only jpegs, lose the docs folders
if [ $DEBUG -eq 1 ]; then echo Picking out the JPEGs; fi
egrep '\.[jJ][pP][eE]?[gG]$' files |awk '{$1=""; print substr($0,2)}'|grep -i -v /docs/ > jpegs.list

# throw NUMFOLDERS or so random numbers for picture selection, select triplets of photos by putting
# names into a file
if [ $DEBUG -eq 1 ]; then echo Generate random filename triplets; fi
./random-files.pl -f $NUMFOLDERS -j jpegs.list -r $RANFILE

# copy over these 60 jpegs
if [ $DEBUG -eq 1 ]; then echo Copy over these random files; fi
cat $RANFILE|while read line; do
  rclone copy remote:"${STARTFOLDER}/$line" $DISPLAYFOLDERTMP
  sleep $SLEEPINTERVAL
done

# kill any qiv slideshow
if [ $DEBUG -eq 1 ]; then echo Killing old qiv slideshow; fi
pkill -9 -f qiv

# remove old pics
if [ $DEBUG -eq 1 ]; then echo Removing old pictures; fi
rm -rf $DISPLAYFOLDER

mv $DISPLAYFOLDERTMP $DISPLAYFOLDER

#run looping qiv slideshow on these pictures
if [ $DEBUG -eq 1 ]; then echo Start qiv slideshow in background; fi
cd $DISPLAYFOLDER ; nohup ~/qiv.sh &

if [ $DEBUG -eq 1 ]; then echo "And now it is "`date`; fi
 
Needless to say, but I'd better say it, the STARTFOLDER in this script is particular to my own Google drive. Customize it as appropriate for your situation.
 
Then qiv (quick image viewer) is called with a bunch of arguments and some trickery to ensure proper display of files with spaces in the filenames (an anathema for Linux but my spouse doesn't know that so I gotta deal with it). I call this script qiv.sh.
#!/bin/sh
# -f : full-screen; -R : disable deletion; -s : slideshow; -d : delay ; -i : status-bar;
# -m : zoom; [-r : randomize]
# this doesn't handle filenames with spaces:
##cd /media; qiv -f -R -s -d 5 -i -m `find /media -regex ".+\.jpe?g$"`
# this one does:
export DISPLAY=:0
if [ "$1" = "l" ]; then
# print out proposed filenames
  find . -regex ".+\.[jJ][pP][eE]?[gG]$"
else
# args: f fullscreen d delay s slideshow l autorotate R readonly I statusbar
# i nostatusbar m maxspect
  find . -regex ".+\.[jJ][pP][eE]?[gG]$" -print0|xargs -0 qiv -fRsmil -d 5
fi

Here is the perl script which generates the random numbers and associates them to the file listing we’ve just made with rclone, random-files.pl.

#!/usr/bin/perl
use Getopt::Std;
my %opt=();
getopts("df:j:r:",\%opt);
$nofolders = $opt{f} ? $opt{f} : 20;
$DEBUG = $opt{d} ? 1 : 0;
$jpegs = $opt{j} ? $opt{j} : "jpegs.list";
$ranpicfile = $opt{r} ? $opt{r} : "jpegs-random.list";
print "d,f,j,r: $opt{d}, $opt{f}, $opt{j}, $opt{r}\n" if $DEBUG;
open(JPEGS,$jpegs) || die "Cannot open jpegs listing file $jpegs!!\n";
@jpegs = <JPEGS>;
# remove newline character
$nopics = chomp @jpegs;
open(RAN,"> $ranpicfile") || die "Cannot open random picture file $ranpicfile!!\n";
for($i=0;$i<$nofolders;$i++) {
  $t = int(rand($nopics-2));
  print "random number is: $t\n" if $DEBUG;
  ($dateTime) = $jpegs[$t] =~ /(\d{8}_\d{6})/;
  if ($dateTime) {
    print "dateTime\n" if $DEBUG;
  }
  $priorPic = $jpegs[$t-2];
  $Pic = $jpegs[$t];
  $postPic = $jpegs[$t+2];
  print RAN qq($priorPic
$Pic
$postPic
);
}
close(RAN);

Note that to display 60 pictures only 20 random numbers are used; the picture two before and the picture two after the one selected by each random number are also displayed. This helps to provide, hopefully, some context to what is being shown without showing all those duplicate pictures that everyone takes nowadays.

There is an attempt to favor recently uploaded pictures but I really haven’t perfected that part of master.sh, it’s more of a thought at this point.

My crontab entries take care of starting a slideshow upon first boot as well as a daily pick of 60 new random pictures!

@reboot sleep 25; cd ~ ; ./m2.pl >> ./m2.log 2>&1
24 16 * * * ./master.sh >> ./master.log 2>&1

Use crontab -e to edit your crontab file.

qiv – an easy install
To install qiv

$ sudo apt-get install qiv

Rclone shown in some detail
The real magic is tapping into the Google Drive, which is done with rclone. There are older packages but they are awful by comparison so don’t waste your time on any other package. More recent rclone packages offer more options than what is shown here, but work basically the same way.

$ sudo apt-get install rclone
$ rclone config

2019/08/05 20:22:42 NOTICE: Config file "/home/pi/.config/rclone/rclone.conf" not found - using defaults
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
 1 / Amazon Drive
   \ "amazon cloud drive"
 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
   \ "s3"
 3 / Backblaze B2
   \ "b2"
 4 / Dropbox
   \ "dropbox"
 5 / Encrypt/Decrypt a remote
   \ "crypt"
 6 / Google Cloud Storage (this is not Google Drive)
   \ "google cloud storage"
 7 / Google Drive
   \ "drive"
 8 / Hubic
   \ "hubic"
 9 / Local Disk
   \ "local"
10 / Microsoft OneDrive
   \ "onedrive"
11 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
   \ "swift"
12 / Yandex Disk
   \ "yandex" 
Storage> 7
 
Google Application Client Id
Leave blank normally.
Enter a string value. Press Enter for the default ("").
client_id>
Google Application Client Secret
Leave blank normally.
Enter a string value. Press Enter for the default ("").
client_secret>
Remote config
Use auto config?
 * Say Y if not sure
 * Say N if you are working on a remote or headless machine or Y didn't work
y) Yes
n) No
y/n> N
If your browser doesn't open automatically go to the following link: https://accounts.google.com/o/oauth2/auth?client_id=202264815644.apps.googleusercontent.com&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&response_type=code&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive&state=07ab6a457efc9384772f919dca93375
Log in and authorize rclone for access

You sign in to your Google account with a regular browser.

After sign-in you see:

rclone wants to access your Google Account
<your_account>@gmail.com
This will allow rclone
to:

See, edit, create, and delete all of your Google Drive files

Make sure you trust rclone

After clicking Allow you get:

Please copy this code, switch to your application and paste it there:
 
Enter verification code> 4/nQEXJZOTdP_asMs6UQZ5ucs6ecvoiLPelQbhI76rnuj4sFjptxbjm7w
--------------------
[remote]
client_id =
client_secret =
token = {"access_token":"ya29.Il-KB3eniEpkdUGhwdi8XyZyfBFIF2ahRVQtrr7kR-E2lIExSh3C1j-PAB-JZucL1j9D801Wbh2_OEDHthV2jk_MsrKCMiLSibX7oa_YtFxts-V9CxRRUirF1_kPHi5u_Q","token_type":"Bearer","refresh_token":"1/MQP8jevISJL1iEXH9gaNc7LIsABC-92TpmqwtRJ3zV8","expiry":"2019-09-21T08:34:19.251821011-04:00"}
--------------------
y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
Current remotes:
 
Name                 Type
====                 ====
remote               drive
 
e) Edit existing remote
n) New remote
d) Delete remote
s) Set configuration password
q) Quit config
e/n/d/r/s/q> q

Note you can very well keep the root folder id blank. In my case we store all our pictures in one top-level folder and the nested folders get pretty deep, plus there’s a busload of other things on the drive, so I wanted to give rclone the best possible shot at running well. Still, listing our 40,000+ pictures takes 90 seconds or so.

Goofed up your config of rclone? No worries. Remove .config/rclone and start over.
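In other words, something along these lines, assuming the default config location mentioned in the rclone output above:

$ rm -rf ~/.config/rclone
$ rclone config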

Don’t forget to make all these scripts executable (chmod +x <script_name>) or you will end up seeing messages like this:

./master.sh
-bash: ./master.sh: Permission denied
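The fix, using the script names from this post:

$ chmod +x master.sh qiv.sh random-files.pl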

Some noteworthy rclone commands
rclone ls remote: – lists all files, going recursively, no problem with MORE
rclone lsd remote: lists directories in top level of drive
rclone copy remote:"MaryDocs/Pictures and videos/Shutterfly books collection of photos/JJH birth photos/img2165.jpg" .   – copies picture to current directory (does not create directory hierarchy)

Do a complete directory listing, capture the results in a file and see how long it took:
$ time rclone ls remote: > lsf-complete

real    1m12.201s
user    0m15.270s
sys     0m1.816s

My initial thought was to do a remote mount of the Google Drive onto a Raspberry Pi mount point, but it’s just so slow that it really provides no advantage to do it that way.
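If you want to try the mount route anyway, rclone itself can do it (it needs FUSE support); a sketch, with an arbitrary mount point:

$ mkdir -p /home/pi/gdrive
$ rclone mount remote:"MaryDocs/Pictures and videos" /home/pi/gdrive &

But as noted, the latency makes this unattractive compared with simply copying the handful of selected files locally.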

Some encountered issues
Well, I blew up on crontab, which in all my years working with linux/unix I’ve never done before. But I managed to fix it.

Prior to discovering rclone I made the mistake of using gdrivefs to create a mounted Google Drive – sounds great in principle, right? What a disaster. The files’ binary data were not correctly preserved when accessed through the mount, though the size was! I have never before encountered mounting software that corrupted files, but this piece of garbage does. One way to detect corruption in a binary file is to do a cksum (or md5sum, just be consistent and use one or the other) of the source file and the destination version of the same file. The result should be the same number.
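A quick way to run that check, sketched with hypothetical file names (the mount point is whatever the suspect mount gave you):

# fetch a known-good copy with rclone, then compare it to the copy read through the mount
$ rclone copy remote:"MaryDocs/Pictures and videos/example.jpg" /tmp/
$ cksum /tmp/example.jpg /mnt/gdrive/example.jpg

The first field printed for each file is the checksum; if the two differ, the mount has mangled the file.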

Imagined but avoided issue: JPEG orientation

I had prepared a whole python program to orient my pictures correctly, but lo and behold I “discovered” that the -l switch in qiv does that for you! So I actually ripped that whole unnecessary step out.

Conclusion
Re-purposing equipment I had lying around: Raspberry Pi 3, Pi Display, and 40,000 JPEG images on Google Drive, I put together a novel photoframe slideshow which randomly displays a different set of 60 pictures each day. It’s a nice way for us to be exposed to our collection of 17+ years of digital photos.

The qiv really is a quick image viewer, i.e., the slideshow runs clean, like a real one.

Long Todo list

  • Improve selection of recent pictures if we’ve just uploaded a bunch of pictures from our smartphones.
  • Hey, how about also showing some of those short videos we also shot with our camera phones and uploaded to Google Drive? And while we’re at it, re-purposing those cheap USB speakers I bought for RetroPi gaming to get the sound, or play a soundtrack!?
  • I realize that although the selection of the 20 anchor pictures is initially random, when they plus the 40 additional photos are presented for display additional order is imposed by the shell’s expansion of the regex and this has a tendency to make the pictures more chronologically organized than they would be by chance.

References and related
Current approach and writeup for this photo frame effort.

PiDisplay

RetroPi, the gaming emulation project for which I bought economical USB speakers.

The rclone home page.

A detailed write-up on using the pipresents program, where we had a Raspberry Pi drive a mixed media display (pictures and videos) for a kiosk.


Categories
Admin Linux Network Technologies Raspberry Pi Security Web Site Technologies

How to test if a web site requires a client certificate

Intro
I can not find a link on the Internet for this, yet I think some admins would appreciate a relatively simple test to know whether a given web site requires a client certificate in order to work. The errors generated in a browser may be very generic in these situations. I see many ways to offer help, from a recipe to a tool to some pointers. I’m not yet sure how I want to proceed!

Why would a site require a client CERT? Most likely as a form of client authentication.

Pointers for the DIY crowd
Badssl.com plus access to a linux command line – such as on the Raspberry Pi I so often write about – will do it for you guys.

The Client Certificate section of badssl.com has most of what you need. The page is getting big, so scroll down until you find that section.

So as a big timesaver badssl.com has created a client certificate for you which you can use to test with. Download it as follows.

Go to your linux prompt and do something like this:
$ wget https://badssl.com/certs/badssl.com-client.pem

If this link does not work, navigate to it starting from this link: https://badssl.com/download/

badssl.com has a web page you can test with which only shows success if you access it using a client certificate, https://client.badssl.com/

To see how this works, try to access it the usual way, without supplying a client CERT:

$ curl -i -k https://client.badssl.com/

HTTP/1.1 400 Bad Request
Server: nginx/1.10.3 (Ubuntu)
Date: Thu, 20 Jun 2019 17:53:38 GMT
Content-Type: text/html
Content-Length: 262
Connection: close

400 Bad Request

No required SSL certificate was sent


nginx/1.10.3 (Ubuntu)

 

Now try the same thing, this time using the client CERT you just downloaded:

$ curl -v -i -k -E ./badssl.com-client.pem:badssl.com https://client.badssl.com/

* About to connect() to client.badssl.com port 443 (#0)
*   Trying 104.154.89.105... connected
* Connected to client.badssl.com (104.154.89.105) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* warning: ignoring value of ssl.verifyhost
* skipping SSL peer certificate verification
* NSS: client certificate from file
*       subject: CN=BadSSL Client Certificate,O=BadSSL,L=San Francisco,ST=California,C=US
*       start date: Nov 16 05:36:33 2017 GMT
*       expire date: Nov 16 05:36:33 2019 GMT
*       common name: BadSSL Client Certificate
*       issuer: CN=BadSSL Client Root Certificate Authority,O=BadSSL,L=San Francisco,ST=California,C=US
* SSL connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* Server certificate:
*       subject: CN=*.badssl.com,O=Lucas Garron,L=Walnut Creek,ST=California,C=US
*       start date: Mar 18 00:00:00 2017 GMT
*       expire date: Mar 25 12:00:00 2020 GMT
*       common name: *.badssl.com
*       issuer: CN=DigiCert SHA2 Secure Server CA,O=DigiCert Inc,C=US
> GET / HTTP/1.1
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.27.1 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: client.badssl.com
> Accept: */*
>
< HTTP/1.1 200 OK
HTTP/1.1 200 OK
< Server: nginx/1.10.3 (Ubuntu)
Server: nginx/1.10.3 (Ubuntu)
< Date: Thu, 20 Jun 2019 17:59:08 GMT
Date: Thu, 20 Jun 2019 17:59:08 GMT
< Content-Type: text/html
Content-Type: text/html
< Content-Length: 662
Content-Length: 662
< Last-Modified: Wed, 12 Jun 2019 15:43:39 GMT
Last-Modified: Wed, 12 Jun 2019 15:43:39 GMT
< Connection: keep-alive
Connection: keep-alive
< ETag: "5d011dab-296"
ETag: "5d011dab-296"
< Cache-Control: no-store
Cache-Control: no-store
< Accept-Ranges: bytes
Accept-Ranges: bytes
<
  <style>body { background: green; }</style>
client.
badssl.com
* Connection #0 to host client.badssl.com left intact
* Closing connection #0

No more 400 error status – that looks like success to me. Note that we had to provide the password for our client CERT; they kindly tell us it is badssl.com.

Here’s an example of a real site which requires client CERTs:

$ curl -v -i -k -E ./badssl.com-client.pem:badssl.com https://jp.nissan.biz/

* About to connect() to jp.nissan.biz port 443 (#0)
*   Trying 150.63.252.1... connected
* Connected to jp.nissan.biz (150.63.252.1) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* warning: ignoring value of ssl.verifyhost
* skipping SSL peer certificate verification
* NSS: client certificate from file
*       subject: CN=BadSSL Client Certificate,O=BadSSL,L=San Francisco,ST=California,C=US
*       start date: Nov 16 05:36:33 2017 GMT
*       expire date: Nov 16 05:36:33 2019 GMT
*       common name: BadSSL Client Certificate
*       issuer: CN=BadSSL Client Root Certificate Authority,O=BadSSL,L=San Francisco,ST=California,C=US
* NSS error -12227
* Closing connection #0
* SSL connect error
curl: (35) SSL connect error

OK, so you get an error, but that’s to be expected because our certificate is not one it will accept.

The point is that if you don’t send it a certificate at all, you get a different error:

$ curl -v -i -k https://jp.nissan.biz/

* About to connect() to client.badssl.com port 443 (#0)
*   Trying 104.154.89.105... connected
* Connected to client.badssl.com (104.154.89.105) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* warning: ignoring value of ssl.verifyhost
* Unable to load client key -8025.
* NSS error -8025
* Closing connection #0
curl: (58) Unable to load client key -8025.

Chrome gives a fairly intelligible error

Possibly to be continued…

Conclusion
We have given a recipe for testing from a Linux command line whether a web site requires a client certificate or not. Thus it could be turned into a program; a rough sketch follows.
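
Here is a very rough sketch of such a program. It keys off only the two behaviors observed above – the nginx-style 400 response containing “No required SSL certificate was sent”, and curl exit code 35 (SSL connect error) – so other servers may well need additional checks; treat it as a starting point, not a finished tool.

#!/bin/sh
# clientcert-test.sh - rough sketch based only on the behaviors observed above
# usage: ./clientcert-test.sh https://client.badssl.com/
URL=$1
OUT=$(curl -s -i -k "$URL" 2>&1)
RC=$?
if echo "$OUT" | grep -qi "No required SSL certificate was sent"; then
  echo "$URL appears to require a client certificate (nginx-style 400 response)"
elif [ $RC -eq 35 ]; then
  echo "$URL aborted the TLS handshake - possibly a client certificate requirement"
else
  echo "$URL did not obviously ask for a client certificate"
fi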

References and related
My article about ciphers has been popular.

I’ve also used badssl.com for other related tests.

Can you use openssl directly? You’d hope so, but I haven’t had time to explore it… Here are my all-time favorite openssl commands.
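
Here is my hedged guess at how openssl could do it, based on the fact that a server which wants a client certificate sends a CertificateRequest message during the handshake. The -msg switch prints the handshake messages, so something like the following ought to flag it – I have not tested this against every TLS version:

$ echo | openssl s_client -connect client.badssl.com:443 -msg 2>/dev/null | grep -i CertificateRequest

If that grep prints anything, the server asked for a client certificate.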

https://badssl.com/ – lots of cool tests here. The creators have been really thorough.

Categories
Linux Network Technologies Raspberry Pi

Live stream to YouTube from a Raspberry Pi + webcam or USB microphone

2021 pre-cap

I haven’t run this in two years. I just revived my scripts today, resuscitated an old Raspberry Pi, and voila, it’s pretty much good to go. When I moved to my new server in 2020 some of the scripts got mangled. I’m trying to fix those. For the record, I use the continuousaudio.sh script + ffmpegwireless6.sh mentioned below. That gives me a live audio stream on YouTube with a solid gray video. I’m researching further enhancements and will post those if I ever manage to pull them off.

What I would like to do is to have it interpret and respond to a couple audio commands as in “start recording” and “stop recording”. Now whether or not I can pull that off is an entirely different story. But you’ll  never know if you don’t try…

Intro
I’ve been looking at this off and on for awhile now. I finally made a breakthrough this week and started to generate some decent live streams on my YouTube channel, after a lot of misfires.

Note this is applicable for Raspbian Stretch Lite on a Raspberry Pi 3. However, I firmly believe it will work just the same for regular Raspbian Stretch.

There’s a lot of wrong, misleading or outdated information out there on the Internet. Hopefully this will help others to avoid wasting as much time as I had to do.

This project was prompted by my desire to make a more generalized fishcam! Described in this post, my original fishcam implementation – and I realized this from the get-go – has very limited applicability because very few people are in a position to have their own AWS server. And if you don’t know what you’re doing, please don’t run your own server – the security exposure is too great.

So I eventually realized that maybe I could generalize what I had done – essentially remove the dependency on the AWS server – by utilizing Youtube Live Streaming. And, I believe I was right. It’s still a work in progress however.

The command – ffmpeg
I was playing with ffmpeg. The version I am playing with now comes with Raspbian – no need to compile like in the bad old days. ffmpeg -version shows the version to be 3.2.12. I get the impression that its capabilities are version-dependent, so that’s why this information is particularly relevant in this case.

The details
In some of my early attempts I was getting a lot of this (looking at YouTube Live Dashboard)

Dashboard When stream is not quite right

Another attempt
Video works, audio like driving in a car with the windows down. For the record, the command was this:

ffmpeg \
-f alsa -i plughw:CARD=U0x46d0x825,DEV=0 \
-f v4l2 -i /dev/video0 \
-c:v libx264 -pix_fmt yuv420p -preset ultrafast -g 10 -b:v 2500k \
-bufsize 512k \
-acodec libmp3lame -ar 44100 \
-threads 2 -qscale 3 \
-b:a 96K \
-r 10 \
-s 1280x720 \
-f flv rtmp://a.rtmp.youtube.com/live2/KEY

Video OK, audio choppy message

For the record, the bandwidth required was about 2100 kbps.

List the formats your video device supports

ffmpeg -f video4linux2 -list_formats all -i /dev/video0

Results using my Logitech Webcam

[video4linux2,v4l2 @ 0xcc45c0] Raw       :     yuyv422 :           YUYV 4:2:2 : 640x480 160x120 176x144 320x176 320x240 352x288 432x240 544x288 640x360 752x416 800x448 800x600 864x480 960x544 960x720 1024x576 1184x656 1280x720 1280x960
[video4linux2,v4l2 @ 0xcc45c0] Compressed:       mjpeg :          Motion-JPEG : 640x480 160x120 176x144 320x176 320x240 352x288 432x240 544x288 640x360 752x416 800x448 800x600 864x480 960x544 960x720 1024x576 1184x656 1280x720 1280x960
ffmpeg \
-f alsa -i plughw:CARD=U0x46d0x825,DEV=0 \
-f v4l2 -i /dev/video0 \
-c:v libx264 -pix_fmt yuv420p -preset ultrafast -g 10 -b:v 1200 \
-bufsize 512k \
-acodec libmp3lame -ar 44100 \
-threads 2 -qscale 3 \
-b:a 128K \
-r 5 \
-s 640x480 \
-f flv rtmp://a.rtmp.youtube.com/live2/KEY

Audio good, video not working

video terrible, but audio good!

It is not so pretty to use that hardware address for the Logitech webcam device. Where do you see that hardware address? Either lsusb or ls /dev/snd/by-id shows the addresses of sound devices. I found a simpler substitute:

ffmpeg \
-f alsa -i plughw:1,0 \
-f v4l2 -i /dev/video0 \
-c:v libx264 -pix_fmt yuv420p -preset ultrafast -g 10 -b:v 1200k \
-bufsize 512k \
-acodec libmp3lame -ar 44100 \
-threads 2 -qscale 3 \
-b:a 128k \
-r 5 \
-s 640x480 \
-f flv rtmp://a.rtmp.youtube.com/live2/KEY

With this, audio’s not too bad, video’s a bit choppy. Google reports the stream quality as OK, check resolution.


So I fix the bandwidth (the missing k was a typo in the above, but one with an interesting result). I set video bandwidth to -b:v 1200k. Now the video is OK once again, but the audio is choppy again! Weird. Bandwidth is about 1100 kbps.

This version had OK video and OK audio
ffmpeg \
-f alsa -i plughw:CARD=U0x46d0x825,DEV=0 \
-f v4l2 -i /dev/video0 \
-c:v libx264 -pix_fmt yuv420p -preset ultrafast -g 10 -b:v 1600k \
-bufsize 512k \
-acodec libmp3lame -ar 44100 \
-threads 2 -qscale 3 \
-b:a 128k \
-r 5 \
-s 640x480 \
-f flv rtmp://a.rtmp.youtube.com/live2/KEY

But I keep getting inconsistent results! Sometimes a setting will work, and then I come back to it and it doesn’t. Weird.

Part of the problem is that I have no idea what I’m doing and I didn’t know when I was watching a livestream vs a recorded (on-demand) one! I have since learned to look for the little red Live button. A picture is worth 10^3 words in this case.

[Pic no longer available – try the thousand words instead!]

Observed used bandwidth is about 1450 kbits/sec. But still lots of dropped packets. Here is what ffmpeg reports. I’m not sure yet what most of it means:

[alsa @ 0x1502700] ALSA buffer xrun.
[alsa @ 0x1502700] Thread message queue blocking; consider raising the thread_queue_size option (current value: 8)
frame= 5828 fps=5.0 q=-1.0 Lsize=  205496kB time=00:19:26.20 bitrate=1443.5kbits/s dup=0 drop=11138 speed=   1x
video:187265kB audio:17449kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.382063%
[libx264 @ 0x15100e0] frame I:583   Avg QP: 9.41  size: 53819
[libx264 @ 0x15100e0] frame P:5245  Avg QP:13.53  size: 30578
[libx264 @ 0x15100e0] mb I  I16..4: 100.0%  0.0%  0.0%
[libx264 @ 0x15100e0] mb P  I16..4: 38.0%  0.0%  0.0%  P16..4: 60.7%  0.0%  0.0%  0.0%  0.0%    skip: 1.4%
[libx264 @ 0x15100e0] coded y,uvDC,uvAC intra: 93.7% 86.2% 82.4% inter: 77.8% 60.5% 34.1%
[libx264 @ 0x15100e0] i16 v,h,dc,p: 17% 23% 15% 45%
[libx264 @ 0x15100e0] i8c dc,h,v,p: 51% 21% 16% 11%
[libx264 @ 0x15100e0] kb/s:1315.22

The video for that run is here: https://youtu.be/oxJaZv0frGM

Suppressing Audio
This is what worked for me.

ffmpeg \
-f lavfi -i anullsrc=channel_layout=stereo:sample_rate=44100 \
-f v4l2 -i /dev/video0 \
-c:v libx264 -pix_fmt yuv420p -preset ultrafast -g 10 -b:v 1600k \
-bufsize 512k \
-acodec libmp3lame -ar 44100 \
-threads 2 -qscale 3 \
-b:a 128k \
-r 5 \
-s 640x480 \
-f flv rtmp://a.rtmp.youtube.com/live2/KEY

That is working great – showing the video as before but now with a silent audio track.

Increase Video Quality
Here I’ve increased video quality a tad by requesting more fps (10) and making qscale 0 (which means highest quality).
https://www.youtube.com/watch?v=5Aall8w4Y3E

ffmpeg \
-f alsa -i plughw:1,0 \
-f v4l2 -i /dev/video0 \
-c:v libx264 -pix_fmt yuv420p -preset ultrafast -b 3000k -g 20 -b:v 1800k \
-bufsize 512k \
-acodec libmp3lame -ar 44100 \
-threads 4 -qscale 0 \
-b:a 128k \
-r 10 \
-s 640x480 \
-f flv rtmp://a.rtmp.youtube.com/live2/KEY

Bitrate was about 1700 kbps. Quality is maybe a little better. Audio still leaves something to be desired.

Still better video quality

ffmpeg \
-f alsa -i plughw:1,0 \
-f v4l2 -i /dev/video0 \
-c:v libx264 -pix_fmt yuv420p -preset ultrafast -b 3000k -g 60 -b:v 2000k \
-bufsize 512k \
-acodec libmp3lame -ar 44100 \
-threads 4 -qscale 0 \
-b:a 128k \
-r 30 \
-s 640x480 \
-f flv rtmp://a.rtmp.youtube.com/live2/KEY

What is observed to happen is that ffmpeg actually chooses 15 fps rather than 30. I’ve read it decides what it is able to do, so maybe that’s the highest fps it can deliver. Video is pretty smooth (See my Livestream link in references if I happen to have it running. Otherwise I will create a video link.) No drops are recorded, but the sound, though not terrible, has some pops. Bandwidth used is about 1900 kbps. So this is definitely my best effort yet. YouTube complains about the unsupported video size of 640×480, but it permits it and I don’t think it’s a real problem.

Reducing bandwidth
This one is pretty good overall. I have no idea why lowering the audio bandwidth might help. It’s counter-intuitive. But video motion is not bad – just a tad blurred. I guess q=23. Audio has good patches and not-as-good patches. The not-as-good spots are staticky, not washboard bad. Total bandwidth used is about 611 kbps. So a great compromise. Why does raising the video bandwidth lower the audio quality? I have no idea… The settings below worked for maybe 20 minutes, then YouTube said “This video is unavailable.” I at least found out something about that message: it indicates a problem with the player, not (for once) your stream. Since I’m only concentrating on the stream, that’s good news. In fact it delivered good sound for three hours straight with a few staticky spots.

ffmpeg \
-thread_queue_size 1024 \
-f alsa -i plughw:1,0 \
-thread_queue_size 256 \
-f v4l2 -i /dev/video0 \
-c:v libx264 -pix_fmt yuv420p -preset ultrafast -g 30 -b:v 450k \
-bufsize 512k \
-acodec libmp3lame -ar 44100 \
-threads 4 -q:v 5 \
-q:a 0 \
-b:a 64k \
-r 15 \
-s 480x320 \
-f flv rtmp://a.rtmp.youtube.com/live2/KEY

The audio is creepily sensitive, easily picking up conversations in adjacent rooms.

But then I monkeyed around with the settings, got the washboard sound, came back to this one – a known good – and got washboard audio! What the heck? Why isn’t it consistent?? No idea… Maybe it’s the player that gets messed up?? Now I’m running it again and it’s OK.

Bandwidth talk
It’s important to talk about bandwidth if you haven’t given this any real thought. You have to have a halfway decent broadband connection for this to work, you see? If you have a mid-speed cable modem or DSL, you have much lower upload than download speeds, and you may not be able to pull off a reliable 1.5 mbps upload. For those lucky enough to have Verizon FIOS this is a non-issue. But for instance in the high school where I volunteer they have throttled the guest WiFi network to such an extent that achieving this modest 1.5 mbps is going to present a real challenge. If you rely on a phone’s hotspot you will also probably be unable to get such a speed. So I may look at more ways to reduce the bandwidth required in the future.

Check your bandwidth using speedcheck.org.
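
If the Pi is the machine doing the uploading, you can also check right from its command line. The speedtest-cli package should be available via apt (if not, pip can install it); upload speed is the number that matters here:

$ sudo apt-get install speedtest-cli
$ speedtest-cli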

And between YouTube and your ISP, the whole business of live video broadcasting just seems, well, delicate. Stream Health varies from OK to Excellent to not receiving – all during the same streaming session! It often takes five minutes or so for the stream to appear to be working.

Comparing two webcams
Someone picked up a really cheap DI Chatcam at Microcenter in Paterson. I think that’s Digital Innovations Chatcam. It’s cute. It has a big clip on the end and shines white LEDs when it’s on. I think it was about $12. With the exact same ffmpeg settings (with audio suppressed), the quality was not nearly as good as with the Logitech webcam. Here’s a link to the YouTube video made with the chatcam: https://www.youtube.com/watch?v=OI2IRV1i__k. Note that it has a mini stereo plug for audio. I didn’t even plug it in now that I know how to suppress audio!

The Logitech model is a C525. It was a refurbished model which cost me about $27. The comparable Logitech webcam test is here: https://www.youtube.com/watch?v=L7ZYaRJR7mQ

I need to re-run this test now that I know how to increase the video quality.

A breakthrough: publishing an audio-only stream to YouTube
I wondered: besides covering your lens with tape, what’s a software way to blacken the video and concentrate on producing the best audio?

ffmpeg \
-thread_queue_size 4096 \
-f alsa -i plughw:1,0 \
-thread_queue_size 128 \
-f lavfi -i color=color=darkgray \
-c:v libx264 -pix_fmt yuv420p -b:v 100k \
-bufsize 512k \
-acodec libmp3lame -ar 44100 \
-threads 8 \
-b:a 128k \
-r 30 \
-s 1280x720 \
-f flv rtmp://a.rtmp.youtube.com/live2/KEY

The above gives me good audio, and a solid gray background. I love it – for recording band practice or whatever. The breakthrough is that we can avoid wasting CPU cycles on processing input video and just use a color instead. Thanks, Stackoverflow, for the tip. Used bandwidth is about 150 kbps – basically nothing! The YouTube Dashboard complains:

OK Video output low
The stream's current bitrate (138.00 Kbps) is lower than the recommended bitrate. 
We recommend that you use a stream bitrate of 2500 Kbps.

But of course that is bogus because that assumes we are trying to put out a rich 1280×720 video, which we are not.

Then eventually YouTube has this complaint:

Bad Bad video settings
Please use a keyframe frequency of four seconds or less. Currently, keyframes are not being sent often enough, which will cause buffering. 
The current keyframe frequency is 8.5 seconds. Note that ingestion errors can cause incorrect GOP (group of pictures) sizes.

Yet the stream does not seem to suffer in any noticeable way from this problem.

For good measure, we add a few extra arguments that allow us to remove the keyframes warning. We need to set the -g parameter (group of pictures) to about twice our frame rate, plus, maybe, a no-scenecut argument. Here’s that version.

ffmpeg \
-thread_queue_size 4096 \
-f alsa -i plughw:1,0 \
-thread_queue_size 128 \
-f lavfi -i color=color=darkgray \
-c:v libx264 -pix_fmt yuv420p -g 60  -x264opts no-scenecut -b:v 150k \
-bufsize 512k \
-acodec libmp3lame -ar 44100 \
-threads 8 \
-b:a 128k \
-r 30 \
-s 1280x720 \
-f flv rtmp://a.rtmp.youtube.com/live2/KEY

Actual fps is 25, quality is 26 and bitrate is 145 kbps. But audio quality is good. I hear white noise in the background, but hey, this isn’t exactly professional equipment we’re working with. Still, this is a great solution for an audio-only recording that goes straight out to YouTube. Stability is also good.

The load average is high – 3.6 (use top to watch it), almost all of it taken by ffmpeg. So it appears ffmpeg is really working to produce this audio stream. That makes me suspect it just gets overwhelmed when it’s an audio + video stream, because I never did find settings which produced good quality for both…

Switch to WiFi and yet another problem surfaces
It seems that with this livestreaming project everything that should just work doesn’t! I had been doing all my testing using a wired Ethernet connection with WiFi disabled. Anticipating a portable solution, I tried it using WiFi and no Ethernet cable. And washboard audio reappeared. Quite often ffmpeg hangs as well. I tried a zillion experiments and now my revelation is that essentially, though we tried to minimize and trivialize the video, we were probably still overwhelming the CPU. So I reasoned that these actions would make the load easier on the CPU, without compromising the audio quality:

– reduce frame per second dramatically
– reduce key frames
– reduce video size

And…yes, these things in combination really did help and permit me to run over WiFi now. This version, put inside a script I call ffmpegwireless6.sh, looks like this:

#!/bin/sh
KEY=691w-uh0z-kx59-c6a7-5xqi # example key
ffmpeg \
-thread_queue_size 4096 \
-f alsa -i plughw:1,0 \
-thread_queue_size 64 \
-f lavfi -i color=color=darkgray \
-c:v libx264 -pix_fmt yuv420p -g 18  -x264opts no-scenecut -b:v 50k \
-bufsize 512k \
-acodec libmp3lame -ar 44100 \
-threads 8 \
-b:a 128k \
-r 5 \
-s 480x320 \
-f flv rtmp://a.rtmp.youtube.com/live2/$KEY

It doesn’t start consistently, however, but if you run it enough times it’ll go. So, to provide reliability I also scripted around these deficiencies: I decided to just keep trying to start up ffmpegwireless6.sh until I have evidence it’s working. I call that script masterwireless.sh:

#!/bin/sh 
# DrJ 5/2019 
LOG="ff.log"`date +%m-%d-%y:%H:%M` 
while /bin/true; do 
nohup ./ffmpegwireless6.sh > $LOG 2>&1 & 
sleep 7 
# want s.th like 
#Frame= 84 fps= 11 q=16.0 size= 43kB time=00:00:07.50 bitrate= 47.1kbits/s dup=0 drop=431 speed=0.991x 
#Frame= 84 fps= 11 q=16.0 size= 43kB time=00:00:07.50 bitrate= 47.1kbits/s dup=0 drop=431 speed= 1x 
FFOUT=`tail -1 $LOG` 
echo "last line is $FFOUT" 
KB=`echo $FFOUT|awk '{print $(NF-4)}'` 
echo "orig KB: $KB" 
KB=`echo $FFOUT|awk '{print $(NF-5)" "$(NF-4)}'|sed 's/kbits.*//'|awk '{print $NF}'` 
date 
echo "KB is: $KB" 
if [ $KB -gt 129 2>/dev/null ]; then 
# let our master process exit - we've got a good audio stream 
  echo "Exiting at *** "`date` 
  exit 
else 
# didn't work out: restart and try again 
  echo "*** Restarting ffmpeg at *** "`date` 
  pkill -9 -f 'ffmpeg ' 
fi 
done

And…it works great! Very briefly, what it does is call ffmpegwireless6.sh and background it, then test its output. It gives it a few seconds to get going, then kills it unless the observed streaming bandwidth is a healthy 135 kbps or so (the video takes almost no bandwidth in ffmpegwireless6.sh).

Putting it all together – livestreaming audio stream to YouTube automatically upon boot up
So I want to drag this thing to a performance and have a confederate with minimal technical know-how start it up. So basically I want it to start livestreaming when the RasPi is powered up. To do that I made this crontab entry (using crontab -e):

@reboot sleep 20; /home/pi/masterwireless.sh > ff.log 2>&1

It takes a few minutes to get going, but it’s been extremely reliable. It’s started a stream successfully more than 10 times out of 10, at least when I was using my home WiFi connection. When I switched to my phone’s Hotspot, I had one error out of five attempts. The one bad stream just would not start according to YouTube, although the stats from the log files showed the stream reached the usual good bandwidth. So I don’t know…

And once the stream starts, it is running uninterrupted for hours, anywhere from three to six hours.

Eventually I want to write an API program to automatically check the stream. But before then I may just introduce a refined script which checks the output and restarts ffmpeg when it has ended.

For the record, a typical ff.log file looks like this:

frame=   43 fps= 43 q=0.0 size=       0kB time=00:00:00.00 bitrate=N/A dup=0 drop=164 speed=   0x    ed=   0x
orig KB: dup=0
Tue  7 May 12:32:08 BST 2019
KB is: dup=0
*** Restarting ffmpeg at *** Tue 7 May 12:32:08 BST 2019
frame=  213 fps= 35 q=8.0 size=      47kB time=00:01:40.91 bitrate=   3.8kbits/s dup=0 drop=847 speed=16.7x
orig KB: 3.8kbits/s
Tue  7 May 12:38:53 BST 2019
KB is: 3
*** Restarting ffmpeg at *** Tue 7 May 12:38:53 BST 2019
illed=   86 fps= 14 q=8.0 size=     104kB time=00:00:06.21 bitrate= 136.7kbits/s dup=0 drop=336 speed=1.03x
orig KB: 136.7kbits/s
Tue  7 May 12:39:00 BST 2019
KB is: 136
Exiting at *** Tue 7 May 12:39:00 BST 2019

The other file, which has a name like ff.log05-07-19:12:32, looks more like this:

ffmpeg version 3.2.12-1~deb9u1+rpt1 Copyright (c) 2000-2018 the FFmpeg developers
  built with gcc 6.3.0 (Raspbian 6.3.0-18+rpi1+deb9u1) 20170516
  configuration: --prefix=/usr --extra-version='1~deb9u1+rpt1' --toolchain=hardened --libdir=/usr/lib/arm-linux-gnueabihf -
-incdir=/usr/include/arm-linux-gnueabihf --enable-gpl --disable-stripping --enable-avresample --enable-avisynth --enable-gn
utls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libebur
128 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --ena
ble-libmp3lame --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-
libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --ena
ble-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxvid --enable-libzmq --enab
le-libzvbi --enable-omx --enable-omx-rpi --enable-mmal --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --e
nable-libiec61883 --arch=armhf --enable-chromaprint --enable-frei0r --enable-libopencv --enable-libx264 --enable-shared
  libavutil      55. 34.101 / 55. 34.101
  libavcodec     57. 64.101 / 57. 64.101
  libavformat    57. 56.101 / 57. 56.101
  libavdevice    57.  1.100 / 57.  1.100
  libavfilter     6. 65.100 /  6. 65.100
  libavresample   3.  1.  0 /  3.  1.  0
  libswscale      4.  2.100 /  4.  2.100
  libswresample   2.  3.100 /  2.  3.100
  libpostproc    54.  1.100 / 54.  1.100
Guessed Channel Layout for Input Stream #0.0 : stereo
Input #0, alsa, from 'plughw:1,0':
  Duration: N/A, start: 1557229134.030863, bitrate: 1536 kb/s
    Stream #0:0: Audio: pcm_s16le, 48000 Hz, stereo, s16, 1536 kb/s
Input #1, lavfi, from 'color=color=darkgray':
  Duration: N/A, start: 0.000000, bitrate: N/A
    Stream #1:0: Video: rawvideo (I420 / 0x30323449), yuv420p, 320x240 [SAR 1:1 DAR 4:3], 25 tbr, 25 tbn, 25 tbc
[libx264 @ 0x12db850] VBV maxrate unspecified, assuming CBR
[libx264 @ 0x12db850] using SAR=8/9
[libx264 @ 0x12db850] using cpu capabilities: ARMv6 NEON
[libx264 @ 0x12db850] profile High, level 2.1
[libx264 @ 0x12db850] 264 - core 148 r2748 97eaef2 - H.264/MPEG-4 AVC codec - Copyleft 2003-2016 - http://www.videolan.org/
x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_ran
ge=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=8 lookahead_threads=1 sl
iced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 di
rect=1 weightb=1 open_gop=0 weightp=2 keyint=18 keyint_min=1 scenecut=0 intra_refresh=0 rc_lookahead=40 rc=cbr mbtree=1 bit
rate=50 ratetol=1.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 vbv_maxrate=50 vbv_bufsize=512 nal_hrd=none filler=0 ip_ratio=1.40
 aq=1:1.00
Output #0, flv, to 'rtmp://a.rtmp.youtube.com/live2/KEY
  Metadata:
    encoder         : Lavf57.56.101
    Stream #0:0: Video: h264 (libx264) ([7][0][0][0] / 0x0007), yuv420p, 480x320 [SAR 8:9 DAR 4:3], q=-1--1, 50 kb/s, 5 fps
, 1k tbn, 5 tbc
    Metadata:
      encoder         : Lavc57.64.101 libx264
    Side data:
      cpb: bitrate max/min/avg: 0/0/50000 buffer size: 512000 vbv_delay: -1
    Stream #0:1: Audio: mp3 (libmp3lame) ([2][0][0][0] / 0x0002), 44100 Hz, stereo, s16p, 128 kb/s
    Metadata:
      encoder         : Lavc57.64.101 libmp3lame
Stream mapping:
  Stream #1:0 -> #0:0 (rawvideo (native) -> h264 (libx264))
  Stream #0:0 -> #0:1 (pcm_s16le (native) -> mp3 (libmp3lame))
Press [q] to stop, [?] for help
frame=   69 fps= 27 q=8.0 size=      45kB time=00:00:02.820 bitrate= 138.6kbits/s dup=0 drop=256 speed= 1.1x
frame=   79 fps= 17 q=2.0 size=      79kB time=00:00:04.80 bitrate= 134.6kbits/s dup=0 drop=308 speed=1.04x
frame=   91 fps= 13 q=8.0 size=     112kB time=00:00:06.80 bitrate= 134.8kbits/s dup=0 drop=348 speed=1.04x
frame=  101 fps= 11 q=8.0 size=     153kB time=00:00:09.22 bitrate= 135.0kbits/s dup=0 drop=388 speed=1.03x
frame=  112 fps= 10 q=3.0 size=     186kB time=00:00:11.40 bitrate= 133.8kbits/s dup=0 drop=440 speed=1.02x
av_interleaved_write_frame(): Broken pipe time=05:28:03.40 bitrate= 134.2kbits/s dup=0 drop=393880 speed=   1x
etc.
    Last message repeated 1 times
Error writing trailer of rtmp://a.rtmp.youtube.com/live2/KEY: Broken pipeframe=98474 fps=5.0 q=-1.0 Lsize=  322492kB time=05:28:14.00 bitrate= 134.1kbits/s dup=0 drop=393888 speed=0.998x
video:2213kB audio:306620kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 4.422897%
[libx264 @ 0x125c850] frame I:5471  Avg QP: 0.00  size:    80
[libx264 @ 0x125c850] frame P:27354 Avg QP: 0.00  size:    25
[libx264 @ 0x125c850] frame B:65649 Avg QP: 0.00  size:    17
[libx264 @ 0x125c850] consecutive B-frames: 11.1%  0.0%  0.0% 88.9%
[libx264 @ 0x125c850] mb I  I16..4: 100.0%  0.0%  0.0%
[libx264 @ 0x125c850] mb P  I16..4:  0.0%  0.0%  0.0%  P16..4:  0.0%  0.0%  0.0%  0.0%  0.0%    skip:100.0%
[libx264 @ 0x125c850] mb B  I16..4:  0.0%  0.0%  0.0%  B16..8:  0.0%  0.0%  0.0%  direct: 0.0%  skip:100.0%
[libx264 @ 0x125c850] 8x8 transform intra:0.0%
[libx264 @ 0x125c850] coded y,uvDC,uvAC intra: 0.0% 0.0% 0.0% inter: 0.0% 0.0% 0.0%
[libx264 @ 0x125c850] i16 v,h,dc,p: 95%  0%  5%  0%
[libx264 @ 0x125c850] i8c dc,h,v,p: 100%  0%  0%  0%
[libx264 @ 0x125c850] Weighted P-Frames: Y:0.0% UV:0.0%
[libx264 @ 0x125c850] kb/s:0.92
Conversion failed!

CPU load average is around 1 or so – much less than before. So I think my ideas are on the right track. Why send 30 frames or whatever each and every second to YouTube just to display a gray screen? The CPU has to work to do that. As long as ffmpeg + YouTube have the intelligence to paste together audio snippets 1/5th of a second in length, five times each second, the audio should be taken care of – we’re not playing with the sampling rate or anything – is how I reasoned. Key frames are a sort of overhead as well, since they’re extra things ffmpeg has to do periodically. YouTube wants one at least every four seconds. We get really close to that limit by multiplying fps * 3.6 s = 5 * 3.6 = 18 for our group-of-pictures (g) parameter. Previously we were sending a key frame more frequently – every two seconds.

Unreliability
Running this command is still hit-or-miss. As often as not it hangs, and if it does not hang, as often as not it outputs washboard audio. You just <Ctrl-C> to get out of it if it hangs, or type “q” if it is producing washboard audio.

Note carefully the bandwidth being used, which ffmpeg reports every second. If it is < 128 kbps, you’re hosed and have washboard audio. If it’s about 135 kbps or higher, you’re good. You don’t even need to waste time fiddling with Youtube’s live_dashboard to listen to it. You get this feedback immediately from ffmpeg. And I intend to use these same observed behaviors to script around ffmpeg’s flakiness and keep restarting it automatically until it is producing a good quality audio stream!

Improved startup
This script, which I call continuousaudio.sh, has some debugging at the beginning, then loops to ensure there is always an audio stream being live-streamed as long as the Pi has power. It has been extremely reliable. I settled on this one for my own purposes.

#!/bin/sh
# drJ 5/2019
LOG="ff.log"`date +%m-%d-%y:%H:%M`
# some info for debugging problems
echo "***********"
date; ip add; ping -c2 8.8.8.8; lsusb
nohup ./ffmpegwireless6.sh > $LOG 2>&1 &
while /bin/true; do
 sleep 7
# want s.th like
#Frame=   84 fps= 11 q=16.0 size=      43kB time=00:00:07.50 bitrate=  47.1kbits/s dup=0 drop=431 speed=0.991x
#Frame=   84 fps= 11 q=16.0 size=      43kB time=00:00:07.50 bitrate=  47.1kbits/s dup=0 drop=431 speed= 1x
 FFOUT=`tail -1 $LOG`
# next line only for DrJ debugging - lines are long
#echo "last line is $FFOUT"
# use gawk instead of awk to parse long lines
 KB=`echo $FFOUT|gawk '{print $(NF-5)" "$(NF-4)}'|sed 's/kbits.*//'|gawk '{print $NF}'`
 echo "orig KB: $KB"
 KB=$(echo $KB|sed s/\\..*//)
 date
 echo "KB is: $KB"
 if [ $KB -gt 129 2>/dev/null ]; then
# stream looks good - do nothing
   echo -n ""
 else
# didn't work out: restart and try again
  echo "*** Restarting ffmpeg at *** "`date`
  pkill -9 -f 'ffmpeg '
  nohup ./ffmpegwireless6.sh > $LOG 2>&1 &
 fi
done

gawk

Eventually I found I needed to use gawk instead of awk in continuousaudio.sh because the number of fields exceeded the max of roughly 32700 in the line I was parsing. To install gawk:

$ sudo apt-get install gawk

Note it still calls ffmpegwireless6.sh, which I believe I have provided above.

ff.log now looks like this:

**********
Fri 31 May 01:10:59 BST 2019
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN group default qlen 1000
    link/ether b8:27:eb:11:fc:06 brd ff:ff:ff:ff:ff:ff
3: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether b8:27:eb:44:a9:53 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.170/24 brd 192.168.1.255 scope global wlan0
       valid_lft forever preferred_lft forever
    inet6 fe80::1119:b46a:cb69:63c9/64 scope link
       valid_lft forever preferred_lft forever
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=56 time=14.6 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=56 time=17.4 ms
 
--- 8.8.8.8 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 14.671/16.065/17.460/1.400 ms
Bus 001 Device 004: ID 046d:0825 Logitech, Inc. Webcam C270
Bus 001 Device 005: ID 0424:7800 Standard Microsystems Corp.
Bus 001 Device 003: ID 0424:2514 Standard Microsystems Corp. USB 2.0 Hub
Bus 001 Device 002: ID 0424:2514 Standard Microsystems Corp. USB 2.0 Hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
last line is frame=   19 fps=0.0 q=0.0 size=       0kB time=00:00:00.00 bitrate=N/A dup=0 drop=69 speed=   0x    ^Mframe=   39 fps= 39 q=0.0 size=       0kB time=00:00:00.00 bitrate=N/A dup=0 drop=150 speed=   0x    ^M
orig KB: dup=0
Fri 31 May 01:11:07 BST 2019
KB is: dup=0
*** Restarting ffmpeg at *** Fri 31 May 01:11:07 BST 2019
last line is frame=  193 fps= 35 q=8.0 size=     100kB time=00:00:27.60 bitrate=  29.6kbits/s dup=0 drop=764 speed=4.99x    ^Mframe=  195 fps= 32 q=8.0 size=     108kB time=00:00:28.03 bitrate=  31.4kbits/s dup=0 drop=775 speed=4.65x    ^M
orig KB: 31.4
Fri 31 May 01:11:36 BST 2019
KB is: 31
*** Restarting ffmpeg at *** Fri 31 May 01:11:36 BST 2019
...

My crontab now looks like this:

@reboot sleep 20;/home/pi/continuousaudio.sh > ff.log 2>&1

Portability
I wanted to record a practice session in my house where no Ethernet port is available (hence I had to get WiFi working, which I believe I have). And I wanted convenience – to not worry about being tethered to the wall by an adapter. So I decided to look for an economical power solution for Raspberry Pi. And I found the ones purpose-built are just too expensive to justify. Pijuice, I’m talking about you. So, really, I realized any old portable USB power stick would work. But I wanted something which could last hours. This Omars 10000 mAh portable USB charger seemed like it would do the trick. $16. And it did. It works great! Two hours later, the LEDs show three bars instead of four, so I think this will supply power for about 8 – 9 hours if I pushed it. And it has the form factor of a smartphone. Ideally I’d want a little on/off switch to avoid plugging/unplugging the power cable, but I didn’t find that as of yet. Maybe there’s a cheap USB cable with that…?

So now I’m not tethered by Ethernet cables nor by a power plug. See where this is progressing? If I use my smartphone’s hotspot I should be able to livestream anywhere I can get a signal, so, for instance, at band performances. I haven’t tried that yet, but I’m hopeful…

YouTube quirks
As previously mentioned (I think), you need to be enabled for livestreaming. It takes about 24 hours for the approval. I suppose they check to make sure you aren’t a perceived threat.

Recording NPR will give you a copyright violation flag! This has happened to me more than once. I think because they play snippets of new music which are flagged.

Lag. I’ve seen lag time as short as four seconds and maybe as long as 20 seconds or so. It is never instantaneous.

My longest video was 20 hours but the processing took days. In fact I’m not sure it ever completed. So I guess the service falls apart after video lengths of I don’t know, maybe 12 hours or so. So if the desire is to have a continuous security webcam I guess you’ll have to break it into chunks. That’s what I’m thinking about next.
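
One untested idea for breaking it into chunks: ffmpeg’s -t option caps the output duration, so whichever ffmpeg command you use could exit cleanly after, say, six hours, and a wrapper loop could then start the next chunk as a fresh livestream (which, per the video ID discussion a couple paragraphs down, gets its own URL). Shown here applied to the audio-only ffmpegwireless6.sh, purely as a sketch:

#!/bin/sh
# hypothetical variant of ffmpegwireless6.sh: -t 21600 stops the stream after 6 hours
KEY=691w-uh0z-kx59-c6a7-5xqi # example key
ffmpeg \
-thread_queue_size 4096 \
-f alsa -i plughw:1,0 \
-thread_queue_size 64 \
-f lavfi -i color=color=darkgray \
-c:v libx264 -pix_fmt yuv420p -g 18  -x264opts no-scenecut -b:v 50k \
-bufsize 512k \
-acodec libmp3lame -ar 44100 \
-threads 8 \
-b:a 128k \
-r 5 \
-s 480x320 \
-t 21600 \
-f flv rtmp://a.rtmp.youtube.com/live2/$KEY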

A livestream gets converted to a video by YouTube. That takes a while – maybe as long as the video length itself? It slaps a date and time onto the video which you see in your video manager. Unfortunately, using this ffmpeg streaming method it chooses the Pacific standard time timezone. I actually don’t see a simple way to change that either. It may require use of the API, which is beyond what I’m willing to tackle right now. So for me, being in the Eastern time zone, all the timestamps are off by three hours, which is kind of annoying.

I wondered, does my livestream ID remain constant, or will it change from broadcast to broadcast? This is important for future use of the API. Well, it changes each time I start a new livestream, even though I use a single (my own) account. Each livestream gets a unique ID which then becomes the ID for the DVR of the video which you can view on-demand. And this ID is the part that changes in the URL of an “unpublished” Youtube video. Say your unpublished livestream is
https://www.youtube.com/watch?v=r1wtZwQ-Tk8.
The part of the URL following the v=, namely, in this example, r1wtZwQ-Tk8, is the ID of that video. I would say YouTube tries to be somewhat robust and will not declare your stream has ended until maybe 30 seconds after you have stopped your program. Or maybe it’s a minute or two, I’m not really sure. But I’ve seen that if you restart the streaming quickly enough you’ll be put onto that same livestream. If on the other hand you wait long enough to see the stream ended message in live_dashboard, then it will assign you a new video ID if you start your stream again – and don’t forget to reload the live_dashboard page so it can pick up the new ID.

Can you pause a livestream, and later resume, keeping the same URL? In a word, No. Unfortunately. Youtube livestreaming is pretty limited in this way. And how useful would that be? I would use my smartphone to control ffmpeg on my Raspberry Pi to pause our band practice during our lengthy chat breaks, keeping the stream focussed on the music. But no… Not possible.

Logitech webcam quirks
When you pull both video and audio from your Logitech webcam the usage LED illuminates as you’d expect. However, when you’re pulling just the audio, as I show above, that LED does not illuminate, yet it is being used to record all the sounds in its vicinity. I guess I have accidentally and unintentionally stumbled upon a stealth mode, which is a little disconcerting.

Yeti USB microphone quirks
A Yeti mic is extremely sensitive and seems more suited for conversation than music recording in my opinion. Even with the gain all the way down (a must) a loud sound is often distorted. I felt the omni recording mode was the worst in this regard. Stereo recording tolerated loud sounds better. But, if you want to pick up every little sound, the Yeti is great. More importantly to me, it just worked with the USB settings I used for the Logitech. I didn’t have to change a single thing in the way I used ffmpeg.

Testing if the livestream is still running
My idea is to use the YouTube API and periodically test if the livestream is still working. I have read that it can go down for various reasons, and there is no good way from within ffmpeg itself to tell that your stream is no longer live! It will make a good project to test the livestream using the Google Developer’s API. That will be a separate post if I ever get it working. If it’s found to be down, the Pi could restart ffmpeg, in my thinking. A very rough sketch of the idea follows.
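
For what it’s worth, a first stab might not even need the full API client libraries. If I read the YouTube Data API v3 docs correctly, the videos.list endpoint with part=liveStreamingDetails only returns an actualEndTime field once the stream has ended. So something along these lines might work – completely untested, and the video ID and API key below are placeholders you would have to supply:

#!/bin/sh
# untested sketch: guess whether a YouTube livestream is still live
VIDEOID=r1wtZwQ-Tk8        # placeholder - the ID of the current livestream
APIKEY=YOUR_API_KEY        # placeholder - a YouTube Data API v3 key
RESP=$(curl -s "https://www.googleapis.com/youtube/v3/videos?part=liveStreamingDetails&id=$VIDEOID&key=$APIKEY")
if echo "$RESP" | grep -q actualEndTime; then
  echo "stream has ended - the Pi could restart ffmpeg here"
else
  echo "stream appears to be live"
fi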

To do list
I never really perfected the video. Audio I got pretty well.
I will borrow my friend’s Yeti USB mic to see how my audio stream works with a high quality microphone. DONE.
I would like to have a simple external control to turn stream off/ on, whether it is physical or virtual. DONE – see references.
Scripting to monitor stream and restart it once it fails – to have a recording 24×7 like an audio-only security camera. DONE – continuousaudio.sh as documented above.
Pause feature. PARTIALLY DONE.

Conclusion
A Raspberry Pi 3 running Raspbian Stretch Lite is used, along with a Logitech USB webcam, to livestream to YouTube. I showed how to stream video-only with a silent audio track. Then I turned it around and spent most of my time putting a virtual piece of tape over the lens and doing an audio-only livestream. This, after a crap-load of testing and tweaking, eventually began to work in a reliable fashion. Then I showed how to launch the audio-only livestream upon power-up of the Ras Pi.

Since it is a Raspberry Pi, this whole thing lends itself to portability and interesting use cases. With a $17 portable USB battery source and your own Hotspot, you can stream (audio at least) from anywhere you have 4G cell signal – good for recording a banquet, your band performance, or any other long, live event.

I spoke about some of the many quirks of YouTube which are relevant to this project.

References and related
Where I debug YouTube’s messages: https://www.youtube.com/live_dashboard

Fishcam implemented with Raspberry Pi + webcam + help of my AWS server.

One of my test videos: https://youtu.be/oxJaZv0frGM

Check your upload bandwidth: speedtest.net

YouTube’s links have me confused. If you’re trying to produce a Live Stream you’ll want the live dashboard page to watch it and check its quality as Youtube judges it. Here’s that link: https://www.youtube.com/live_dashboard

Use ffmpeg to deal with live input audio and make a matrix LED dance in real time: Raspberry Pi + LED Matrix Display project

Microcenter in Paterson, NJ – best to visit in person, or so I have been told.

My livestream is https://www.youtube.com/watch?v=r1wtZwQ-Tk8

Put virtual tape over your lens by using this tip discussed in Stackoverflow!

Portable, proven (by me) economical USB power supply for your Raspberry Pi – $16.

Economical on/off switch for your Raspberry Pi. This is a great way to stop having to pull out/push in power connectors from your micro USB power source. $10 gets you a four-pack! https://smile.amazon.com/iUniker-Raspberry-Switch-Supply-MicroUSB/dp/B07CTHKXDW/ref=sr_1_2_sspa?keywords=raspberry+pi+on+off+switch&qid=1559477662&s=gateway&sr=8-2-spons&psc=1