Categories
Admin Network Technologies

The IT Detective agency: bad PING times explained

Intro
In a complicated corporate environment, somewhat unusual problems can be extremely difficult to debug because many technicians are involved, each one knowing just their piece of the infrastructure. Such was the case when I consulted on a problem in which a company reported slow Internet response for its users in Asia.

The details
In preparation for an evening call I managed to get enough access to be able to log into the proxy server their Asian users use. Just issuing regular commands was slow, often hanging for many seconds.

The call
So we had the call at 9 PM. These things always leave me amazed in the sense of: how do corporations ever get anything done when they’ve outsourced so much? So there was a representative from the firewall team in Germany, a representative from the telecom in the US, a couple of representatives from the same telecom but stationed in Asia, an employee who oversees the telecom vendor in the US, and me, representing the proxy service normally handled by a group in Europe. The common language was English, of course, though that doesn’t mean everyone was easy for us native speakers to understand. No one on the call had any real familiarity with the infrastructure. We were all essentially reverse engineering it from a diagram the telecom produced.

The firewall guy and I both noticed that PINGs from the proxy to its gateway (which was a firewall) took 50 msec. I’ve never seen that. Same for another piece of equipment using that same firewall interface. The telecom, which was responsible for the switches, could not actually log in to all of them; only the firewall guy could reach a few of them. And he eventually figured out that the diagram we were all using was somewhat wrong: some of the equipment was connected to different switches than indicated. That mattered because we wanted to check the interface status on the switch.

So imagine this. The guy with no access to anything, the vendor overseer in the US, patiently asks for the results of a few commands to be shared (by email) with the group. He gets the results of show interface status and identifies one port as looking off: it’s listed as 100 Mbit instead of 1 gig. In addition to the strange PING times, when we PINGed from equipment in the US, the packet loss rate varied between 4 and 15%. Pretty high, in other words. They try resetting the port and hard-coding the speed and duplex on both sides, but nothing works. And this is the port that the firewall is connected to.
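
For reference, the checking and hard-coding they were attempting would look roughly like this on a Cisco-style switch (the port name here is hypothetical and the exact syntax varies by platform):

switch# show interfaces status
switch# configure terminal
switch(config)# interface GigabitEthernet1/0/24
switch(config-if)# speed 1000
switch(config-if)# duplex full
switch(config-if)# end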

We finally switch to the backup firewall. This destroys the routing and so that has to be fixed. But finally it is, and suddenly response is much better and PINGing the gateway from the proxy is at the expected < 1 msec. Not content to leave it at that, they persist in fixing the broken port. They reason that, with the test results in, the most likely problem is a bad cable. He explains that in 1 gig communication all 8 wires have to be good; if just one breaks you can't run 1 gig. Now they have to figure out who has access to this 3rd-party data center where the equipment is hosted! They finally identify an employee with access and get him to come with different cable lengths (of course no one knows the layout well enough to say how long the cable is or how close the equipment is). The cable is replaced and both sides come up 1 gig auto-negotiated! They reverted back to the primary firewall. So in the end the employee without access to anything figured this out. Amazing. The intense activity on this problem lasted from 9 PM to about 4 AM the following morning.

The history
Resolving the problem within seven hours would actually have been a pretty decent turnaround by this company’s standards. But it had been ongoing for a couple of weeks beforehand. It seems the total throughput was capped at 100 Mbit by that bad cable, so it wasn’t a total outage and it wasn’t at all obvious where to look.

Case closed!

Conclusion
I think a lot of people on the call had the expertise to solve the problem, and much more quickly than it was solved. But no one had sufficient access to do the debugging his own way, and so needed the cooperation of others. The telecom that owns and manages the switches particularly disappointed in its performance. Not the individuals, who seemed to be competent, but the processes, which permitted faulty and incomplete documentation, as well as a lack of familiarity with the particular infrastructure – as if you’d just hired a smart network technician and communicated nothing about what he is supporting.
Keep good people on staff! Give them as much access as possible.

Categories
Admin Network Technologies

General failure PING error partially explained

Intro
My Dell PC running Windows 7 was going along fine until one day I noticed I couldn’t get out to the Internet from any browser. As a network specialist I reacted in my standard knee-jerk fashion and tried a few simple network commands from the command prompt to get a better idea of what’s going on.

The details

Here is the first thing I tried: pinging one of Google’s DNS servers, which always responds, so if you don’t get a response there’s something wrong on your side.

C:\Users\DrJ>ping 8.8.8.8

Pinging 8.8.8.8 with 32 bytes of data:
Request timed out.
General failure.
General failure.
General failure.
 
Ping statistics for 8.8.8.8:
    Packets: Sent = 4, Received = 0, Lost = 4 (100% loss),

Weird. Never seen that before.

Then it gets stranger
Then I tried to ping my local router. But first I had to find its IP address:

C:\Users\DrJ>netstat -rn

blah, blah
IPv4 Route Table
===========================================================================
Active Routes:
Network Destination        Netmask          Gateway       Interface  Metric
          0.0.0.0          0.0.0.0    192.168.0.254    192.168.0.102     25
blah, blah

Then ping it:

C:\Users\Dad>ping 192.168.0.254

Pinging 192.168.0.254 with 32 bytes of data:
Reply from 192.168.0.254: bytes=32 time=2ms TTL=64
Reply from 192.168.0.254: bytes=32 time=1ms TTL=64
Reply from 192.168.0.254: bytes=32 time=1ms TTL=64
Reply from 192.168.0.254: bytes=32 time=1ms TTL=64
 
Ping statistics for 192.168.0.254:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 1ms, Maximum = 2ms, Average = 1ms

That’s a normal response, which under the circumstances is also weird. So we can’t ping the Internet but we can ping our local router. Sounds like a problem with either my Internet connection or my home router, right? Yeah, maybe, except for two important facts. One: routers have simple built-in diagnostic tools like PING, so I logged into the local router, ran a ping to 8.8.8.8, and it worked just fine. Two: you get a different failure message when your Internet connection is actually down. I unplugged my DSL router and got this more familiar error:

C:\Users\DrJ>ping 8.8.8.8

Pinging 8.8.8.8 with 32 bytes of data:
Request timed out.
Request timed out.
Request timed out.
Request timed out.
 
Ping statistics for 8.8.8.8:
    Packets: Sent = 4, Received = 0, Lost = 4 (100% loss),

I also observed that if I just waited it out (doing all these tests kills a few minutes) the connection would come back by itself after about 9 or 10 minutes. I timed rebooting. On my PC it’s about six minutes before I have a working Internet connection again. Not very impressive, but that’s how it is.

A Google search showed a bunch of sites with junk answers apparently trying to push adware on your PC. You know those sites that have somehow documented every single PC problem, supposedly, and have a boilerplate bogus generic description of likely causes and the generic fixes which are all the same but actually have nothing whatsoever to do with the specific problem? I’m getting really annoyed with those sites. But one guy mentioned a firewall. I am running McAfee Livesafe. Hmmm.

Self-inflicted denial of service attack
Yup. From the Windows start menu I typed in McAfee to launch it. I navigated to the part for Web and Email protection. Turned off firewall protection.
McAfeeFirewall
The instant I did that my browsers sprang to life! Gmail started working. All was good.

So in the business this is what we call a self-inflicted denial of service, which is somewhat of a tongue-in-cheek name, but apt. A security service that shuts everything down is just as bad as no security whatsoever. I tried to check the McAfee logs to look for a bright red warning that says we’re shutting you down for now but haven’t found anything like that.

And those pings to Google? They now look like this:

C:\Users\DrJ>ping 8.8.8.8

Pinging 8.8.8.8 with 32 bytes of data:
Reply from 8.8.8.8: bytes=32 time=88ms TTL=40
Reply from 8.8.8.8: bytes=32 time=64ms TTL=40
Reply from 8.8.8.8: bytes=32 time=63ms TTL=40
Reply from 8.8.8.8: bytes=32 time=63ms TTL=40
 
Ping statistics for 8.8.8.8:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 63ms, Maximum = 88ms, Average = 69ms

2nd possible cause
Today I also witnessed General failure when testing ping to one particular destination from a Windows server. So the first thing we checked was the Windows firewall. It was disabled. So what else could it be? Since it was a server in a complex environment, the application owner had added routes. But he wasn’t very familiar with the route command, so he literally added routes with all the options present, as in the example shown by route /help:

$ route ADD 157.0.0.0 MASK 255.0.0.0 157.55.80.1 METRIC 3 IF 2

Only he chose IF 1. This created the bizarre situation where the route was added with the correct gateway, but the wrong interface! The system assigns IF 1 to 127.0.0.1, so those packets weren’t going anywhere because that’s the loopback interface! I suggested deleting that route, then adding it back without the METRIC and IF options – that’s how I’ve always done it.
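
In other words, roughly this, reusing the network and gateway from the help example above:

$ route DELETE 157.0.0.0
$ route ADD 157.0.0.0 MASK 255.0.0.0 157.55.80.1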

Result: General failure disappeared.

Conclusion
A Windows system reports General failure during a PING test when the ICMP packets cannot leave the system. This can be due to running a local firewall or having bad routes present.

Categories
Admin

Java applications no longer socksify correctly

The details
Newer Java versions, e.g., Java 1.8, prefer IPv6 over native IPv4, and that doesn’t work correctly with the OpenText socks client. There’s a command-line option to change this behavior of the JRE.

The solution
This is the way to disable Java’s preference for using IPv6 on your PC. Run the following from a Command Prompt.

C:\> setx _JAVA_OPTIONS -Djava.net.preferIPv4Stack=true

It doesn’t break IPv6 functionality. All it does is stop Java from trying to use IPv6 for IPv4 connections.
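
To verify the setting took effect, open a new Command Prompt (setx only affects new sessions) and try something like this; the JVM announces any _JAVA_OPTIONS it picks up, so the output should look roughly like:

C:\> echo %_JAVA_OPTIONS%
-Djava.net.preferIPv4Stack=true
 
C:\> java -version
Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
...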

Once we applied this fix, a few of our Java applications that had failed with the newer JRE began to socksify properly once again using the OpenText socks client.

To be continued…

Categories
Apache CentOS Hosting Service Web Site Technologies

Compiling Apache 2.4 on CentOS

Intro
This is a tale of one thing leading to another. I’ll probably either continue this post or delete it altogether if I find I’m headed down a wrong path.

The details
I suspect that to get better marks for my server’s SSL implementation I probably need apache 2.4. There is an RPM for apache 2.4 but it is almost two years old! So I decided to bite the bullet and compile the darn thing myself. Easier said than done. My current production version is 2.2.15.

Now if you just want to compile a recent version of apache 2.4 then this guide is much, much better than mine: https://jasonpowell42.wordpress.com/2013/04/05/install-apache-2-4-4-on-centos-6-4/. My guide, where I’ve hit just about every conceivable error and powered through, is more for timid folks like me who want to keep their current apache 2.2 running while trying 2.4. In spite of what you read elsewhere this is possible to do, but you need patience and perseverance.

Getting the source is easy enough. Then you configure it:

httpd-2.4.16$ ./configure −−prefix=/usr/local/apache24

checking for chosen layout... Apache
checking for working mkdir -p... yes
checking for grep that handles long lines and -e... /bin/grep
checking for egrep... /bin/grep -E
checking build system type... x86_64-unknown-linux-gnu
checking host system type... x86_64-unknown-linux-gnu
checking target system type... x86_64-unknown-linux-gnu
configure:
configure: Configuring Apache Portable Runtime library...
configure:
checking for APR... configure: WARNING: APR version 1.4.0 or later is required, found 1.3.9
configure: WARNING: skipped APR at apr-1-config, version not acceptable
no
configure: error: APR not found.  Please read the documentation.

What version of apr do we have?

$ sudo rpm −qa|grep apr

apr-util-devel-1.3.9-3.el6_0.1.x86_64
apr-util-1.3.9-3.el6_0.1.x86_64
apr-util-ldap-1.3.9-3.el6_0.1.x86_64
apr-1.3.9-5.el6_2.x86_64
apr-devel-1.3.9-5.el6_2.x86_64

Drat. No wonder we’re having trouble. Guess we could compile apr ourselves, but perhaps there’s a suitable version out there somewhere we can simply download?


Warning: this approach to apr shown below was a dead end for me. Further down I show a successful approach.

$ sudo yum search apr

Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: linux.cc.lehigh.edu
 * epel: mirror.us.leaseweb.net
 * extras: mirror.rackspace.com
 * updates: mirror.es.its.nyu.edu
========================================================== N/S Matched: apr ==========================================================
...
httpd24-apr-debuginfo.x86_64 : Debug information for package httpd24-apr
httpd24-apr-devel.x86_64 : APR library development kit
httpd24-apr-util-debuginfo.x86_64 : Debug information for package
                                  : httpd24-apr-util
httpd24-apr-util-devel.x86_64 : APR utility library development kit
httpd24-apr-util-ldap.x86_64 : APR utility library LDAP support
httpd24-apr-util-mysql.x86_64 : APR utility library MySQL DBD driver
httpd24-apr-util-nss.x86_64 : APR utility library NSS crytpo support
httpd24-apr-util-odbc.x86_64 : APR utility library ODBC DBD driver
httpd24-apr-util-openssl.x86_64 : APR utility library OpenSSL crytpo support
httpd24-apr-util-pgsql.x86_64 : APR utility library PostgreSQL DBD driver
httpd24-apr-util-sqlite.x86_64 : APR utility library SQLite DBD driver
httpd24-apr.x86_64 : Apache Portable Runtime library
httpd24-apr-util.x86_64 : Apache Portable Runtime Utility library
...

I singled out the promising looking ones. After all it’s apache 2.4 that’s driving the need for this version so the httpd24 versions of apr should suffice.

So I installed these:

$ sudo yum install httpd24-apr-util.x86_64
$ sudo yum install httpd24-apr-util-devel.x86_64

Now how do we tell the configurator where our new apr package is?

httpd-2.4.16$ ./configure −−help|grep −i apr

  --enable-hook-probes    Enable APR hook probes
  --with-included-apr     Use bundled copies of APR/APR-Util
  --with-apr=PATH         prefix for installed APR or the full path to
                             apr-config
  --with-apr-util=PATH    prefix for installed APU or the full path to

The with-apr switch looks promising. Now we guess as to exactly what we should put for the path. Here’s what happens when we guess wrong:

httpd-2.4.16$ ./configure −−with-apr=/opt/rh/httpd24/root/usr/lib64 −−prefix=/usr/local/apache24

checking for chosen layout... Apache
checking for working mkdir -p... yes
checking for grep that handles long lines and -e... /bin/grep
checking for egrep... /bin/grep -E
checking build system type... x86_64-unknown-linux-gnu
checking host system type... x86_64-unknown-linux-gnu
checking target system type... x86_64-unknown-linux-gnu
configure:
configure: Configuring Apache Portable Runtime library...
configure:
checking for APR... configure: error: the --with-apr parameter is incorrect. It must specify an install prefix, a build directory, or an apr-config file.

I’ll spare you the guesswork. Here is the path correctly specified:

httpd-2.4.16$ ./configure −−with-apr=/opt/rh/httpd24/root/usr −−prefix=/usr/local/apache24

...
configure: Configuring Apache Portable Runtime Utility library...
configure:
checking for APR-util... yes
checking for gcc... gcc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables...
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking how to run the C preprocessor... gcc -E
checking for gcc option to accept ISO C99... -std=gnu99
checking for pcre-config... false
configure: error: pcre-config for libpcre not found. PCRE is required and available from http://pcre.org/

So we finally got past the apr error and are onto the next one : (. I’ll try to install pcre-devel to see if that helps:

$ sudo yum install pcre-devel.x86_64

Wow! Got lucky that time. That cleared up that error and the configure went all the way through!

Oh, no. It doesn’t compile! It begins to, but it can’t compile exports.c:

httpd-2.4.16$ make

...
gawk -f /usr/local/src/apache24/httpd-2.4.16/build/make_exports.awk `cat export_files` > exports.c
/usr/lib64/apr-1/build/libtool --silent --mode=compile gcc -std=gnu99  -pthread      -DLINUX=2 -D_REENTRANT -D_GNU_SOURCE     -I. -I/usr/local/src/apache24/httpd-2.4.16/os/unix -I/usr/local/src/apache24/httpd-2.4.16/include -I/opt/rh/httpd24/root/usr/include/apr-1 -I/usr/include/apr-1 -I/usr/local/src/apache24/httpd-2.4.16/modules/aaa -I/usr/local/src/apache24/httpd-2.4.16/modules/cache -I/usr/local/src/apache24/httpd-2.4.16/modules/core -I/usr/local/src/apache24/httpd-2.4.16/modules/database -I/usr/local/src/apache24/httpd-2.4.16/modules/filters -I/usr/local/src/apache24/httpd-2.4.16/modules/ldap -I/usr/local/src/apache24/httpd-2.4.16/modules/loggers -I/usr/local/src/apache24/httpd-2.4.16/modules/lua -I/usr/local/src/apache24/httpd-2.4.16/modules/proxy -I/usr/local/src/apache24/httpd-2.4.16/modules/session -I/usr/local/src/apache24/httpd-2.4.16/modules/ssl -I/usr/local/src/apache24/httpd-2.4.16/modules/test -I/usr/local/src/apache24/httpd-2.4.16/server -I/usr/local/src/apache24/httpd-2.4.16/modules/arch/unix -I/usr/local/src/apache24/httpd-2.4.16/modules/dav/main -I/usr/local/src/apache24/httpd-2.4.16/modules/generators -I/usr/local/src/apache24/httpd-2.4.16/modules/mappers  -prefer-non-pic -static -c exports.c && touch exports.lo
exports.c:1244: error: redefinition of ‘ap_hack_apr_allocator_create’
exports.c:198: note: previous definition of ‘ap_hack_apr_allocator_create’ was here
exports.c:1245: error: redefinition of ‘ap_hack_apr_allocator_destroy’
exports.c:199: note: previous definition of ‘ap_hack_apr_allocator_destroy’ was here
exports.c:1246: error: redefinition of ‘ap_hack_apr_allocator_alloc’
exports.c:200: note: previous definition of ‘ap_hack_apr_allocator_alloc’ was here
exports.c:1247: error: redefinition of ‘ap_hack_apr_allocator_free’
exports.c:201: note: previous definition of ‘ap_hack_apr_allocator_free’ was here
exports.c:1248: error: redefinition of ‘ap_hack_apr_allocator_owner_set’

This could be tough! Maybe impossible for me to get past. I’ve never encountered this kind of error. OK. Got it. Not so tough. I had two versions of apr installed – the old one needed by my apache 2.2 and the new one installed as shown above. I didn’t want to completely blow away the old one as I feared that it is dynamically linked by Apache 2.2, so I did the following:

$ cd /usr/lib64; sudo mv apr-1 drjapr-1
– then change to my apache24 root directory and run configure again; then run make

And it went through this time!

Only modules installed
make install, however, only installed modules, not the httpd binary.

The problem seems related to my original apr libraries. They look like this:

$ sudo rpm −qa|grep ^apr

apr-util-devel-1.3.9-3.el6_0.1.x86_64
apr-util-1.3.9-3.el6_0.1.x86_64
apr-util-ldap-1.3.9-3.el6_0.1.x86_64
apr-1.3.9-5.el6_2.x86_64
apr-devel-1.3.9-5.el6_2.x86_64

I tried to move them all to a temporary directory, but then the compiler could not find libtool, which is normally supplied by apr-devel.

I considered removing apr-devel, but boy there are so many dependencies that my other packages have on it that I did not feel comfortable doing that. PHP, apache2.2 and a whole lot more depend on it.

End of dead end approach to apr


New approach needed
My new approach is to use the APR from apache itself by downloading the Unix sources for APR and apr-util from http://apr.apache.org/download.cgi. Yes, this worked best of all. I even put back all the apr files I had moved in the previous failed effort.

It’s not very clear what they mean by unpacking apr and apr-util in srclib. I created symlinks in my srclib directory such that apr -> apr-1.5.2 and apr-util -> apr-util-1.5.4. For the inexperienced, the command format is as in this example:

$ ln −s apr-1.5.2 apr

Of course you first have to download the source tarball to your srclib directory and unpack it:

$ tar zxf apr-1.5.2.tar.gz
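
Same idea for apr-util, so that srclib ends up looking something like this (the version numbers are simply the ones I downloaded):

$ cd httpd-2.4.16/srclib
$ tar zxf apr-1.5.2.tar.gz
$ tar zxf apr-util-1.5.4.tar.gz
$ ln -s apr-1.5.2 apr
$ ln -s apr-util-1.5.4 apr-util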

It Compiles and Installs
So after all those misfires I finally got a version that compiled and installed in its entirety. That process starts with this configure command:

$ ./configure −−with-included-apr −−prefix=/usr/local/apache24

Then the usual make and sudo make install.

Modules problem
I inherited a configuration that had a mods-available and a mods-enabled directory, which is how my old apache 2.2 was set up. After tweaking the modules path using the replace command, something like this:

$ cd /etc; cp −pr apache2 apache24; cd mods-available
$ sudo replace /usr/lib/apache2 /usr/local/apache24 −− *.load

I still could not start my new server:

Starting apache24: httpd: Syntax error on line 203 of /etc/apache24/apache24.conf: Syntax error on line 1 of /etc/apache24/mods-enabled/authz_default.load: Cannot load /usr/local/apache24/modules/mod_authz_default.so into server: /usr/local/apache24/modules/mod_authz_default.so: cannot open shared object file: No such file or directory
                                                           [FAILED]

I looked at all my configuration files and didn’t see anything that relies on this module, so I deleted the reference to it in mods-enabled.

Starting apache24: httpd: Syntax error on line 203 of /etc/apache24/apache24.conf: Syntax error on line 1 of /etc/apache24/mods-enabled/cgi.load: Cannot load /usr/local/apache24/modules/mod_cgi.so into server: /usr/local/apache24/modules/mod_cgi.so: cannot open shared object file: No such file or directory
                                                           [FAILED]

Now I do like to run CGI programs on occasion so this one can’t be so easily brushed aside. It could be that we should be using mod_cgid.so instead.
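
If mod_cgid turns out to be the way to go, the load line would look something like this (the path reflects my /usr/local/apache24 install):

LoadModule cgid_module /usr/local/apache24/modules/mod_cgid.so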

Then it’s onto this error:

Starting apache24: httpd: Syntax error on line 203 of /etc/apache24/apache24.conf: Syntax error on line 1 of /etc/apache24/mods-enabled/php5.load: Cannot load /usr/local/apache24/modules/libphp5.so into server: /usr/local/apache24/modules/libphp5.so: cannot open shared object file: No such file or directory
                                                           [FAILED]

I use php so I may have to investigate this one in some detail. Simply updating the link to point to where the old libphp5.so resides under apache2.2 brings up a different kind of error:

Starting apache24: httpd: Syntax error on line 203 of /etc/apache24/apache24.conf: Syntax error on line 1 of /etc/apache24/mods-enabled/php5.load: Cannot load /usr/lib/apache2/modules/libphp5.so into server: /usr/lib/apache2/modules/libphp5.so: undefined symbol: unixd_config
                                                           [FAILED]

Wow. I’m reading various things and it looks like I’ll now have to compile php5 as well. This is getting hairy. This site, although old, seems to explain it most clearly. And of course I’ve got php 5.3, which you can’t even find source for on the php web site, www.php.net.

So I downloaded php5.4.43, which is the oldest one I could find on the php web site!

To configure it I used this long list of options, some of which are determined by my choices of location for my apache24 files:

$ ./configure ‐‐with‐apxs2=/usr/local/apache24/bin/apxs ‐‐with‐mysql ‐‐prefix=/usr/local/apache24/php5 ‐‐with‐config‐file‐path=/usr/local/apache24/php5 ‐‐disable‐cgi ‐‐with‐zlib ‐‐with‐gettext ‐‐with‐gdbm ‐‐with‐curl ‐‐with‐openssl

2017 update for php
I finally needed to update some WordPress packages and found my only available transport was ftp. I think my command-line compile options for php5 above left something to be desired. I think I need to add curl and openssl, like so:

$ ./configure ‐‐with‐apxs2=/usr/local/apache24/bin/apxs ‐‐with‐mysql ‐‐prefix=/usr/local/apache24/php5 ‐‐with‐config‐file‐path=/usr/local/apache24/php5 ‐‐disable‐cgi ‐‐with‐zlib ‐‐with‐gettext ‐‐with‐gdbm ‐‐with‐curl ‐‐with‐openssl

but I get these errors:

ext/curl/.libs/interface.o: In function `php_curl_option_url':
/usr/local/src/php5/php-5.4.43/ext/curl/interface.c:180: undefined reference to `core_globals'
ext/curl/.libs/interface.o: In function `_php_curl_setopt':
/usr/local/src/php5/php-5.4.43/ext/curl/interface.c:1821: undefined reference to `core_globals'
/usr/local/src/php5/php-5.4.43/ext/curl/interface.c:1804: undefined reference to `core_globals'
ext/curl/.libs/interface.o: In function `curl_progress':
/usr/local/src/php5/php-5.4.43/ext/curl/interface.c:1113: undefined reference to `executor_globals'
ext/curl/.libs/interface.o: In function `curl_write_header':
/usr/local/src/php5/php-5.4.43/ext/curl/interface.c:1264: undefined reference to `executor_globals'
ext/curl/.libs/interface.o: In function `curl_write':
/usr/local/src/php5/php-5.4.43/ext/curl/interface.c:1038: undefined reference to `executor_globals'
ext/curl/.libs/interface.o: In function `curl_read':
/usr/local/src/php5/php-5.4.43/ext/curl/interface.c:1187: undefined reference to `executor_globals'
ext/curl/.libs/streams.o: In function `php_curl_stream_opener':
/usr/local/src/php5/php-5.4.43/ext/curl/streams.c:320: undefined reference to `file_globals'
/usr/local/src/php5/php-5.4.43/ext/curl/streams.c:406: undefined reference to `core_globals'
/usr/local/src/php5/php-5.4.43/ext/curl/streams.c:414: undefined reference to `core_globals'
ext/curl/.libs/streams.o: In function `on_data_available':
/usr/local/src/php5/php-5.4.43/ext/curl/streams.c:68: undefined reference to `executor_globals'
ext/standard/.libs/info.o: In function `php_info_print_request_uri':
/usr/local/src/php5/php-5.4.43/ext/standard/info.c:97: undefined reference to `sapi_globals'
ext/standard/.libs/info.o: In function `php_print_gpcse_array':
/usr/local/src/php5/php-5.4.43/ext/standard/info.c:213: undefined reference to `executor_globals'
ext/standard/.libs/info.o: In function `php_print_info':
/usr/local/src/php5/php-5.4.43/ext/standard/info.c:918: undefined reference to `executor_globals'
collect2: ld returned 1 exit status
make: *** [sapi/cli/php] Error 1

Here the problem seems to be that since I had already compiled php5 once and left that build lying around, the link step was picking up the old object files.

You need to do a make clean first! Then it compiles.

Now I’m down to this apache error:

$ sudo service apache24 start

Starting apache24: AH00526: Syntax error on line 55 of /etc/apache24/apache24.conf:
Invalid command 'LockFile', perhaps misspelled or defined by a module not included in the server configuration
                                                           [FAILED]

I’m going to just try to comment out that pesky LockFile directive. I’ve found this apache page, which is helpful for this upgrade: http://httpd.apache.org/docs/trunk/upgrading.html. OK, next error:

Starting apache24: AH00526: Syntax error on line 145 of /etc/apache24/apache24.conf:
Invalid command 'User', perhaps misspelled or defined by a module not included in the server configuration
                                                           [FAILED]

Here the advice is to load module mod_unixd. I don’t even have anything like that so I’m looking into it now. OK. It’s in the apache24/modules so I just need to load it in. Next error:

Starting apache24: AH00526: Syntax error on line 161 of /etc/apache24/apache24.conf:
Invalid command 'Order', perhaps misspelled or defined by a module not included in the server configuration

Wow. That comes from this pretty standard block:

<Files ~ "^\.ht">
    Order allow,deny
    Deny from all
    Satisfy all
</Files>

This is a helpful document: http://httpd.apache.org/docs/trunk/upgrading.html. So at their recommendation I replaced all that with:

Require all denied
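
So that whole Files block becomes, roughly:

<Files ~ "^\.ht">
    Require all denied
</Files>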

That leads to the next error:

Starting apache24: AH00526: Syntax error on line 166 of /etc/apache24/apache24.conf:
Invalid command 'Require', perhaps misspelled or defined by a module not included in the server configuration

It means Require is not even recognized. I needed to load some new modules, namely authz_core and unixd:

LoadModule authz_core_module /usr/local/apache24/modules/mod_authz_core.so
LoadModule unixd_module /usr/local/apache24/modules/mod_unixd.so

Next error:

AH00526: Syntax error on line 20 of /etc/apache24/mods-enabled/alias.conf:
Invalid command 'Order', perhaps misspelled or defined by a module not included in the server configuration

So some of my old conf files that I copied over use the old syntax. The alias.conf file looked like this:

Alias /icons/ "/var/www/icons/"
 
<Directory "/var/www/icons">
    Options Indexes MultiViews
    AllowOverride None
    Order allow,deny
    Allow from all
</Directory>

Again looking at http://httpd.apache.org/docs/trunk/upgrading.html, they suggest replacing the Order line and the line after it with:

Require all granted
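
So the rewritten alias.conf stanza looks something like this:

<Directory "/var/www/icons">
    Options Indexes MultiViews
    AllowOverride None
    Require all granted
</Directory>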

Next error:

AH00526: Syntax error on line 3 of /etc/apache24/mods-enabled/deflate.conf:
Invalid command 'AddOutputFilterByType', perhaps misspelled or defined by a module not included in the server configuration
                                                           [FAILED]

But I was already loading the deflate module, which defines AddOutputFilterByType. What I learned is that in apache 2.4 you also need to load mod_filter.
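
The extra load line would be something like this, alongside the existing deflate one (the path reflects my /usr/local/apache24 install):

LoadModule filter_module /usr/local/apache24/modules/mod_filter.so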

And the next error please:

AH00526: Syntax error on line 43 of /etc/apache24/mods-enabled/ssl.conf:
SSLSessionCache: 'shmcb' session cache not supported (known names: ). Maybe you need to load the appropriate socache module (mod_socache_shmcb?).

That’s complaining about this line:

SSLSessionCache        shmcb:${APACHE_RUN_DIR}/ssl_scache(512000)

The standard advice for this error is to uncomment this line:

LoadModule socache_shmcb_module modules/mod_socache_shmcb.so

But I don’t have that module!

I guess I chose the wrong options when doing the initial ./configure. See the references for a proper guide that lists some good options.

I’m now trying to configure like this:

$ ./configure −−with-included-apr −−prefix=/usr/local/apache24 −−enable-php5 −−enable-so −−enable-ssl −−with-mpm=prefork

Actually I don’t know if I needed all those options, such as enable-ssl. The main thing was that my apache 2.2 mods-available directory didn’t have any mention of mod_socache_shmcb.so. My apache 2.4 built with these config options definitely does, so I just need a LoadModule statement like this:

LoadModule socache_shmcb_module /usr/local/apache24/modules/mod_socache_shmcb.so

Well, we’ve moved six lines down in that config file. I guess that’s progress, because now we’ve made it all the way to line 49:

AH00526: Syntax error on line 49 of /etc/apache24/mods-enabled/ssl.conf:
Invalid command 'SSLMutex', perhaps misspelled or defined by a module not included in the server configuration
                                                           [FAILED]

Even apache’s upgrade guide documents this error. It’s caused by a conf file line that looks something like this:

SSLMutex  file:${APACHE_RUN_DIR}/ssl_mutex

and they say – I’m paraphrasing here – just try to comment it out and hope for the best.

Next error:

AH00526: Syntax error on line 9 of /etc/apache24/mods-enabled/status.conf:
Invalid command 'Order', perhaps misspelled or defined by a module not included in the server configuration
                                                           [FAILED]

Yeah, status.conf has:

    Order deny,allow
    Deny from all
    Allow from 127.0.0.1 ::1

We’ll try to replace that with this:

Require host 127.0.0.1 ::1
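
For what it’s worth, the 2.4 docs also offer an ip form (and a local shorthand) for this kind of restriction, which treats the values as addresses rather than hostnames; something like:

Require ip 127.0.0.1 ::1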

Now it runs through all the configuration OK but doesn’t actually start. I had set up an init.d script and I wasn’t going to go into this but I may have to:

$ sudo service apache24 start

httpd (pid 30896) already running

Remember I am trying to run this while still running the old apache 2.2 server. Process 30896 is the old apache 2.2:

root     30896     1  0 10:05 ?        00:00:00 /usr/sbin/httpd -d /etc/apache2 -f apache2.conf

This results from the byzantine way I set apache up to launch. There is an /etc/sysconfig/apache24 which doesn’t do much other than import environment variable definitions from /etc/apache24/envvars, except I had forgotten to update that path, so it pointed to the old /etc/apache2/envvars.

Now it starts! But not without complaint:

Starting apache24: [Thu Aug 06 11:18:04.711658 2015] [core:warn] [pid 22911] AH00117: Ignoring deprecated use of DefaultType in line 178 of /etc/apache24/apache24.conf.
                                                              [  OK  ]

That stems from this line which tries to establish a default MIME type:

DefaultType text/plain
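
Since DefaultType is disabled in apache 2.4 and has no effect beyond triggering that warning, the quick fix is simply to comment it out:

# DefaultType text/plain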

I also notice I cannot really get the status of my new web server:

$ sudo service apache24 status

httpd dead but subsys locked

So stopping/starting doesn’t really work either once it’s started.

What I found is that it seems happier if I have a line in /etc/sysconfig/apache24 with an explicit PIDFILE defined – I use PIDFILE=/var/run/apache24.pid – using the same filepath as is mentioned in the apache24.conf file, where I have PidFile ${APACHE_PID_FILE}, APACHE_PID_FILE being taken from my envvars with the value /var/run/apache24.pid. OK, my setup is very convoluted and probably unique. But the problem is common on CentOS, so the main takeaway is to have a consistent pidfile filepath in /etc/sysconfig/httpd (or whatever you are calling it) and in your main config file httpd.conf (or whatever you are calling it).
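
Concretely, the two spots that need to agree look roughly like this in my setup (the paths are mine; adjust to yours):

# /etc/sysconfig/apache24
PIDFILE=/var/run/apache24.pid
 
# /etc/apache24/apache24.conf (APACHE_PID_FILE comes from envvars and equals /var/run/apache24.pid)
PidFile ${APACHE_PID_FILE}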

Home page test (I’m running on port 1443 to avoid conflict with my production server):

$ curl −i −k https://127.0.0.1:1443/

HTTP/1.1 301 Moved Permanently
Date: Wed, 05 Aug 2015 18:33:19 GMT
Server: Apache/2
X-Powered-By: PHP/5.4.43
Location: https://drjohnstechtalk.com/blog/
Content-Length: 2
Content-Type: text/html

So that looks pretty good.

A simple php test:

$ curl −i −k https://127.0.0.1:1443/phpinfo.php

Long output. Basically looks right.

OK. What about the opening WordPress page?

$ curl −i −H ‘Host: drjohnstechtalk.com’ −k https://127.0.0.1:1443/blog/

Yes. Big long output. Looks good. I don’t think this proves that the MySQL/php interface is really working, however, as that page could be cached since I use a page cache plugin.

Next test I’d like to run is the Qualys SSLLabs test, but it won’t run on port 1443. Maybe the DigiCERT test will. Yes, it does allow it. And I no longer have the BREACH vulnerability.

A few words about a BREACH test
This prompted me to look at why Digicert felt I was vulnerable to BREACH in the first place. I think it’s related to serving compressed objects. So I thought of this simple test. Against my apache 2.2 I can run a query like this:

$ curl −i −k −−compressed https://127.0.0.1:1443/blog/|head −10

Date: Fri, 07 Aug 2015 14:02:48 GMT
Server: Apache/2
X-Powered-By: PHP/5.3.3
Vary: Accept-Encoding
Content-Encoding: gzip
Content-Length: 30414
Content-Type: text/html
 
<!DOCTYPE html>
<html lang="en-US">

See that Content-Encoding: gzip? Yet the actual content that begins <!DOCTYPE html… is in plain text and plainly not compressed. So I really wasn’t vulnerable to BREACH at all. The server claimed to be compressing the pages it was sending to the browser, but in reality it wasn’t. For apache 2.4 the behaviour is basically the same except there is no response header Content-Encoding: gzip returned. This is why it passes Digicert’s BREACH test with flying colors.

Moving on
Next test. Swap apache 2.2 for apache 2.4 by exchanging listening ports 443 and 1443. Then do the SSLlabs test. I now get an A. Well, actually I get an A both before and after the swap.

WordPress test
I’m writing this using my shiny new apache 2.4. With regard to WordPress it all seems to feel the same as before. One small thing I’ve noticed is that I don’t get WordPress news any longer:

RSS Error: WP HTTP Error: There are no HTTP transports available which can complete the requested request.

Hopefully there’s nothing more serious.

php.ini missing
If you blindly copied my config options for compiling php then sooner or later (much later in my case) you’ll realize that you have no valid php.ini file! You will see an error like this when the date() function is called:

Warning: date(): It is not safe to rely on the system's timezone settings. You are *required* to use the date.timezone setting or the date_default_timezone_set() function. In case you used any of those methods and you are still getting this warning, you most likely misspelled the timezone identifier. We selected the timezone 'UTC' for now, but please set date.timezone to select your timezone. in

So because I used the config option ‐‐with‐config‐file‐path=/usr/local/apache24/php5 I needed to put a php.ini file in that directory and only that directory. For now its contents are:

; DrJ, inspired by http://stackoverflow.com/questions/2184513/php-change-the-maximum-upload-file-size - 12/31/14
; Maximum allowed size for uploaded files.
upload_max_filesize = 10M
 
; Must be greater than or equal to upload_max_filesize
post_max_size = 10M
 
; You'll need this to avoid errors with the Date function
; http://stackoverflow.com/questions/16765158/date-it-is-not-safe-to-rely-on-the-systems-timezone-settings
[Date]
; Defines the default timezone used by the date functions
; http://php.net/date.timezone
date.timezone = America/New_York

Appendix A
mod_ssl error after patching

I have an apache 2.2.21 instance on a SLES server. After a system patch (I guess) I realized the apache web server wouldn’t start. It shows this error:

> sudo service apache201 start

Starting httpd (/usr/local/apache2/bin/httpd) httpd: Syntax error on line 54 of /usr/local/apache201/conf/httpd.conf: Cannot load /usr/local/apache201/modules/mod_ssl.so into server: /usr/local/apache201/modules/mod_ssl.so: undefined symbol: ap_map_http_request_error

I had been playing fast and loose and I borrowed the mod_ssl.so from some other system, I guess. I forget which. In other words, I dropped a mod_ssl.so by hand into the directory /usr/lib64/apache2-prefork. I was using those system-supplied modules paired with my compiled apache. All fine until that patch. So I found another mod_ssl.so from a different system and tried that one. It worked. Whew. These were both SLES 11 SP4 systems. The older one (with the mod_ssl.so that still works) is dated April 18th, 2017. The one with the broken mod_ssl.so, December 29th, 2017. That’s from a uname -a.

References and related articles
A proper guide to installing apache 2.4 on CentOS is https://jasonpowell42.wordpress.com/2013/04/05/install-apache-2-4-4-on-centos-6-4/

Some upgrade issues are covered by apache’s own guide: http://httpd.apache.org/docs/2.4/upgrading.html

Scaling up apache to handle more than a couple hundred simultaneous requests is described in this blog post.

The DigiCERT certificate inspector tool, which is what I was referring to in this post when it comes to scanning for BREACH vulnerabilities, is here.

Categories
Admin Perl

Elegant code, poor results

Intro
A colleague of mine is an awesome Perl programmer. It may be a dying language, but for those of us who know it, it can still do some great things in our hands. So this guy, let’s call him Stain, wrote some code to analyze large proxy logs in parallel by using threads. Using threads presumably makes the analysis program run a lot faster since you’re throwing more CPUs at a problem that is CPU bound. It would have taken me years to write, but I bet he did it in a couple of days because he’s so experienced. I was eager to reap the benefits of his labor but was disappointed with the results I was getting…

The details
What I need the program for is very basic – just looking for a character string in the lines of a proxy log. In fact my needs are so basic I don’t need a program at all. I can cobble together my analysis with Linux command-line utilities, like this:

$ gunzip -c proxy_log.gz|grep drjohnstechtalk > /tmp/drjs

And that’s what I was doing before his program came along. When I ran his program with eight threads on a server with 16 cores, it should have finished a lot faster than my command line, but it didn’t. So I rolled up my sleeves and looked into his code to see what the problem was. Running his code with a single thread showed that it ran much, much slower than my command-line pipeline, so I focused on that as the root of the issue.
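
For the comparisons below I just used wall-clock timings of each approach, something along these lines (the script name is hypothetical):

$ time gunzip -c proxy_log.gz|grep drjohnstechtalk > /tmp/drjs
$ time ./searchGZip.pl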

Here is the essence of his code for the analysis:

#!/usr/bin/perl
# DrJ research labs...
use strict;
use Compress::Zlib;
use threads;
use threads::shared;
use File::Find;
my @searchStrings = ("drjohnstechtalk");
my @matches;
my $lines;
&searchGZip("/tmp/p-02.gz");
#----------------------------------------------------------------------------
# searchGZip
#----------------------------------------------------------------------------
sub searchGZip{
        my $file = shift(@_);
        my $line;
        my $gz;
        my $loglines = 0;
 
        if($gz = gzopen($file, "rb")){
 
                while($gz->gzreadline($line)){
                        $loglines++;
                        if(&matchLine($line)){
                                push(@matches,$line);
                        }
                }
                $lines += $loglines;
                $gz->gzclose;
        }else{
                #&addAppLogMsg("Could not open file $file: $gzerrno\n");
                print "Cannot read file $file\n";
        }
}
 
#----------------------------------------------------------------------------
# matchLine
#----------------------------------------------------------------------------
sub matchLine{
        my $line = shift;
 
        foreach(@searchStrings){
 
                return 1 if $line =~ /$_/
        }
        return 0;
}

He’s using functions I had never even heard of, like gzopen and gzreadline, apparently from the Compress::Zlib package. Looks elegant, right? Yes. But slow as a dog.

Version 2
So I began to introduce more shell commands into the Perl, since the shell by itself is faster. In this version I’ve gotten rid of those unfamiliar gzopen and gzreadline functions in favor of what I know:

#!/usr/bin/perl
# DrJ research labs...
use strict;
use Compress::Zlib;
use threads;
use threads::shared;
use File::Find;
my @searchStrings = ("drjohnstechtalk");
my @matches;
my $lines;
&searchGZip("/tmp/p-02.gz");
#----------------------------------------------------------------------------
# searchGZip
#----------------------------------------------------------------------------
sub searchGZip{
        my $file = shift(@_);
        my $line;
        my $gz;
        my $loglines = 0;
 
        open(PXY,"gunzip -c $file|");
        while(<PXY>) {
                $loglines++;
                $line = $_;
                if(&matchLine($line)){
                        push(@matches,$line);
                }
        }
        close(PXY);
        $lines += $loglines;
}
 
#----------------------------------------------------------------------------
# matchLine
#----------------------------------------------------------------------------
sub matchLine{
        my $line = shift;
 
        foreach(@searchStrings){
 
                return 1 if $line =~ /$_/
        }
        return 0;
}

Version 3
Not satisfied with the time, I got rid of the function call for each line in this version.

#!/usr/bin/perl
# DrJ research labs...
use strict;
use Compress::Zlib;
use threads;
use threads::shared;
use File::Find;
my @searchStrings = ("drjohnstechtalk");
my @matches;
my $lines;
&searchGZip("/tmp/p-02.gz");
#----------------------------------------------------------------------------
# searchGZip
#----------------------------------------------------------------------------
sub searchGZip{
        my $file = shift(@_);
        my $line;
        my $gz;
        my $loglines = 0;
 
        open(PXY,"gunzip -c $file|");
        while(<PXY>) {
                $loglines++;
                $line = $_;
                push(@matches,$line) if $line =~ /$searchStrings[0]/;
        }
        close(PXY);
        $lines += $loglines;
}

That was indeed still faster, but not as fast as the command line.

Version 4
I wondered if I could do better still by also using grep, the shell-level match utility. That leads to this version:

#!/usr/bin/perl
# DrJ reseach labs...
use strict;
use Compress::Zlib;
use threads;
use threads::shared;
use File::Find;
my @searchStrings = ("drjohnstechtalk");
my @matches;
my $lines;
&searchGZip("/tmp/p-02.gz");
#----------------------------------------------------------------------------
# searchGZip
#----------------------------------------------------------------------------
sub searchGZip{
        my $file = shift(@_);
        my $line;
        my $gz;
        my $loglines = 0;
 
        open(PXY,"gunzip -c $file|grep $searchStrings[0]|");
        while(<PXY>) {
                $line = $_;
                push(@matches,$line);
        }
        close(PXY);
}

Here’s a table with the performance summary.

Version    Changes                                   Time (wall clock)
Original                                             63.2 s
2          gunzip -c instead of gzopen/gzreadline    18.6 s
3          inline instead of function call           14.0 s
4          grep instead of perl match operator       10.8 s

So with these simple improvements the timing on this routine improved from 63.2 s to 10.8 s – a factor of six!!

I incorporated my changes into his original program and now it really is a big help when it runs multi-threaded. I can run eight threads and finish an analysis about six times as quickly as searching from the command line. That’s a significant speed-up which will make us more productive.

Conclusion
Some elegant perl code is taken apart and found to be the cause of a significant slow-down. I show how we rewrote it in a more primitive style that results in a huge performance gain – code that runs six times faster!

Categories
Admin Python Raspberry Pi

Building a Four Monitor Media Show using Raspberry Pis

Intro
This is the paper a student wrote under my guidance.

Building a Four Monitor Media Show using Raspberry Pis

The first page
4-monitor-media-display

Link to full article

References
My write-up concerning our novel use of the Pi Presents program, which has a different emphasis and no pictures.

Categories
Consumer Tech Uncategorized

Amazon Fire Stick/Sony BRAVIA TV compatibility problem – one solution

Intro
There is a very long discussion of this topic on Amazon.com. Sony BRAVIA models more than five years old may not work perfectly with Amazon’s Fire TV stick. I have this problem and I’ll mention my workaround.

The details
My Sony TV model is BRAVIA XBR 32XBR6. I suppose it’s about five years old. I also have it connected to a Sony Blu-ray player, which can also play Amazon Prime, Netflix, Hulu Plus, etc. But it isn’t nearly as well designed as the Fire Stick, and so I bought the Fire Stick, which has better WiFi support and a faster user interface.

Initially the Fire Stick appears to work with the TV and all is good.

Second day: same thing. All is good.

By about the third use, however, when playing on-demand content I could hear what I was trying to play but only see a blue screen with the letters HDCP. I think that indicates a digital copy protection mechanism has kicked in.

Powering down the Fire Stick doesn’t seem to work. Turning the TV off and on doesn’t seem to help.

In my case I had the option to switch over and watch the same content through my Blu-ray player (which never displays this problem). Then the next time I went back to the Fire Stick (usually days later) all was good.

So I became suspicious about cause and effect and I shortened the cycle.

Get the HDCP problem. Switch HDMI ports to the Blu-ray player (using the remote). Initiate the Amazon video service connection on the player (but don’t bother to actually play anything). Switch back to the Fire Stick’s HDMI port. HDCP problem gone!

This solution was not too painful. I also have a Raspberry Pi connected to yet another HDMI TV port. I’ll see if switching to that will do the trick as well – that would be a cheap option that’s not too painful.

2017 update
I never get this HDCP problem any more. The main difference is that I never use my Sony Blu-ray player for on-demand programming. The Firestick is superior so I always use it. I only use the Blu-ray player for DVD playback.

October, 2017 update
I got my birthday present – the updated Fire TV stick with Alexa voice remote, but on the same Sony TV. It does seem to work better. The old one was button press, nothing happens, button press, nothing happens, button press, finally it gets the idea. The new one does seem to be better about that.

July 2019 update
Just got the latest and greatest, the Fire TV Stick 4K, on Prime Day a few days ago. On this same, now rather old Sony TV model mentioned above, it works really and surprisingly well! I thought surely this older model would not support newer features such as volume control – but it does. And surely it would not support power control of the TV – but it does! And since volume control works (and you can see it is controlling TV volume, not some kind of HDMI or other volume), of course the mute button also works. All these features had required me, up until now, to schlep around two remote controls: the one for the TV plus the Firestick. Now I’ll just need the one. I could probably use Alexa voice commands but I don’t think I’ll want to.

And, it’s just plain more responsive. Previous models were often a bit slow to react to key presses. This does much better on that front. One last thing, it comes pre-configured with your account already set up so you can be up and running much quicker.

I have no idea about the 4k-ness of the picture quality, but the other features alone make me glad I got it.

Fire TV stick 4K

I wish I could answer all the questions raised in the comments but I just don’t have access to any of those models. I would think it ought to work with any Sony TV made in the last 10 years, but maybe it’s not so simple.

Upgrading to a newer Firestick – what to do with your old one
Now you’ve got a Firestick on all your TVs and you want that latest model, and your relatives don’t want your old one either even though it worked just fine. Around here we’d be tempted to send it to a second-hand store, or worst case, to an electronics recycling program. But remember it has all your logins to Amazon (for sure), and maybe Hulu, HBO Now, AT&T Now, Netflix, etc. So you’d better take an extra few minutes to factory reset it. This is a terrific article that gives five different ways to reset your Firestick to factory defaults: Five ways to reset your Firestick. I just wanted to repeat one. You have to have it connected, unfortunately. Hold the right and back buttons of your remote simultaneously for at least 10 seconds. Then follow the prompts. I have tried it and it works. The reset itself takes about 15 minutes on an older Firestick.

Tip for infrequent users

If like me you only watch a few hours a week because you are that busy, and you’ve subscribed to a variety of services (Netflix, YouTube Red, Amazon, …), I have this tip to save a little of your precious time. You could see pop-ups suggesting you update to the latest version. I suggest ignoring those and waiting until you have a bigger block of time. If you only have an hour you don’t want to waste the first five minutes upgrading an app you may have only used once (happens to me a lot). Let’s face it, these things don’t boot up quickly as it is.

2021 tip for US users wanting HBO Max

HBO Max is now available for Firestick. In 2020 it wasn’t for a long stretch.

2021 Tip: Changing Firestick from one TV set to another

I moved my Firestick from a Sony TV to an Insignia brand TV. I thought no big deal, it’s already set up – should just work. Wrong! Yes, the display worked. I could move between apps like usual, but the TV control functions – power on/off, volume, and mute – did absolutely nothing. What gives?

I always assumed that the infrared signal from the Firestick remote only communicates with the Firestick, and that the HDMI protocol included a side channel for power and volume commands. Apparently that assumption is totally wrong. In order to effectuate those things, the Firestick actually acts as a simple remote for your TV set and interacts directly with the TV’s infrared receiver. Who knew? Hence, if your set is a different brand than before, you may have to revise the infrared signal the Firestick puts out. It’s a bit obscure but not difficult. Go to

Settings > Equipment Control > Manage Equipment > TV > Change TV

Then follow the prompts. It actually knows about Insignia TVs and gets it right, by the way.

Related to the above, I recently bought a new Firestick and tried to do the initial setup on that same Insignia TV. I managed to see “Fire” on the screen, and that was it! Nothing else was possible. So I set it up on that Sony Bravia TV, then moved it over to the Insignia. Worked great, except for that On/Off/volume kind of TV controls. And that was simply fixed by the above recipe.

Android and Windows 10 screen mirroring

What I’ve decided to go with is to put the HBO Max app on my Android phone and cast the screen to my Firestick (screen mirroring). There’s a little more setup than you’re used to each time, but it’s not terrible compared with sideloading. In the Firestick’s Settings screen you can enable casting. On a Samsung Galaxy phone you have the Smart View feature in Settings: pull down from the top twice rapidly to get to Smart View. The Firestick screen should show up as an option. Choose it, then go back to your HBO Max app and play whatever content you like – it should be casting to your TV. I think you can also do screen mirroring from a Windows 10 laptop. Click on Notifications at the far right of the taskbar, you get all those little squares; click expand if needed until you see Project. In Project look at the bottom and click on Connect to a wireless display.

Conclusion
A solution is offered to the dreaded HDCP problem for Amazon Fire Stick/Sony BRAVIA TV. More research needs to be done to reduce the solution to its essence.

References and related
Here’s the link to that lengthy discussion: http://www.amazon.com/gp/help/customer/forums/ref=cs_hc_g_pg_pg1?ie=UTF8&forumID=Fx1SKFFP8U1B6N5&cdSort&cdThread=Tx1A10FFQXCSZ2M&cdPage=1

Fire TV Stick (2019 model). This is the regular model. You probably don’t need the 4K model unless you have a super TV…

How to reset your (old) Firestick because you’ve upgraded: https://www.guidingtech.com/reset-fire-tv-stick-factory-settings/

Out-of-sync video and audio? Check this suggestion.

Categories
Linux Perl

HTTP replay user-agent

Intro
When debugging F5’s Web Application Firewall (AKA ASM) you get all the HTTP request headers in the GUI, which you can cut and paste. Sometimes it is not so obvious how to fix a particular SupportID, and you don’t want to bother the user to reproduce the error. This is code that allows you to scrape the headers and send them back, assuming you have a Linux server on the Internet. It’s not quite exact – Perl wants to add a TE header and maybe some others. More importantly, F5 only gives you around the first 5 KB of the headers, and often the real headers plus data can be much longer than that. But it’s pretty good at preserving the basic headers and the POSTed data, if any.

The details

#!/usr/bin/perl
# DrJ - 6/2015
# take waf Full Request as input from STDIN and spit it out
use LWP;
$DEBUG = 1;
# example (abbreviated here; the original WAF capture also included a very
# long Cookie header and a large Cognos SOAP body as the POSTed data):
#POST /cognosbi/cgi-bin/cognos.cgi HTTP/1.1
#Accept: */*
#Accept-Language: en-us
#Referer: https://ag-intelligence.drj.com/cognosbi/pat/rsapp.htm
#soapaction: http://www.ibm.com/xmlns/prod/cognos/reportService/201109/
#Content-Type: text/xml; charset=utf-8
#Accept-Encoding: gzip, deflate
#User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/6.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET4.0C; .NET4.0E; Ion 3.3.0.125; InfoPath.3; MDPA101; OWASMIME/4.0200)
#Host: ag-intelligence.drj.com
#Content-Length: 161667
#DNT: 1
#Connection: Keep-Alive
#Cache-Control: no-cache
#Cookie: c8server=https%3A%2F%2Fag-intelligence.drj.com%3A443%2Fcognosbi%2Fcgi-bin%2Fcognos.cgi; mob_lang=en; ... (remainder of the very long cookie value omitted)
#
#<SOAP-ENV:Envelope SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/" xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" ...>
# ... (rest of the large SOAP request body omitted)
@lines = <STDIN>;
for $line (@lines) {
  $_ = $line;
  chomp;
  if (/^(GET|POST)\s+(\/.*)\s(http|HTTP)/) {
    $uri = $2;
    $method = $1;
    next;
  } elsif (/^User-Agent:/ && ! $datasec) {
    ($agent) = /^User-Agent:\s+(.+)$/;
    print "User-Agent: $agent\n" if $DEBUG;
    next;
  } elsif (/^Content-Length:\s/ && ! $datasec) {
# LWP calculates its own content-length header
      next;
  } elsif  (! $datasec) {
# add header to hash
      ($hdr,$val) = /^(\S+):\s+(.+)$/;
      print "hdr,val: $hdr, $val\n" if $DEBUG;
      $myhash{$hdr} = $val if $hdr;
  }
  if (/^Host: (\S+)/i && ! $datasec) {
    $host = $1;
  }
  $data .= $line if $datasec;
# test if we are at the end of the headers
  $datasec = 1 if /^$/;
}
 
# Create a user agent object
use LWP::UserAgent;
$ua = LWP::UserAgent->new;
$ua->agent($agent);
$req = HTTP::Request->new($method => "https://$host$uri");
# enter all the headers
foreach $key (keys %myhash) {
  $val = $myhash{$key};
  print "key,val: $key, $val\n" if $DEBUG;
  $req->header($key => $val);
}
print "data: $data\n" if $DEBUG;
$req->content($data);
# Pass request to the user agent and get a response back
my $res = $ua->request($req);
 
# Check the outcome of the response
if ($res->is_success) {
 print $res->content;
} else {
  print $res->status_line, "\n";
}
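
Usage is simple: copy the WAF’s Full Request from the GUI into a file and feed it to the script on standard input. Assuming you saved the script as httpreplay.pl (my name for it, chosen just for this example), a run looks like this:

./httpreplay.pl < supportid-request.txt

With $DEBUG left at 1 you will see each header echoed as it is parsed, followed by either the response body or, if the replayed request fails, the status line.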

Conclusion
We’ve created a Perl-based HTTP user agent that preserves and replays request headers. It’s not perfect but it’s a help, for instance in debugging an F5 web application firewall policy.

Categories
Python Raspberry Pi

Raspberry Pi visual alerting with Pibrella

I just came across this article: http://www.itworld.com/article/2919325/personal-technology/website-monitoring-with-a-raspberry-pi-for-nighttime-alerts.html?phint=newt%3Ditworld_linux&phint=idg_eid%3Db4ed050ca62ad3f6d5c69e8448d945d7#tk.ITWNLE_nlt_linux_2015-05-19

I wish I had thought of it first! I’m in a similar situation – I constantly get emails and TXT messages overnight, so I have to put my phone in airplane mode. A visual alert might just give me an early warning for some critical failures.

The Pibrella module that lights up and buzzes has its own web site, pibrella.com.

If I do anything with it I’ll be sure to post it here.

Categories
DNS Linux Network Technologies Perl

Announcing a simple DNS web interface and code

Intro
For demonstration purposes I’ve written a WEB interface to do DNS queries. This can be used for light querying. Once it gets abused I will pull it from the web site.

Motivation
Some large enterprises are behind not only a corporate firewall, but also confined to a private namespace with no access to Internet name resolution. Users in such situations can use one of the many available tools to do DNS resolution through the web, but they all want to throw advertising at you and it’s not clear which can be trusted not to load you up with spyware. I am offering this ad-free DNS lookup using my position on the Internet as a trusted source.

And if you’re lucky and looking for code to do this yourself, you might find it. But nowhere will you find a site that’s running its own published code for DNS resolution. Except here.

The code
Admittedly very simple-minded, but hopefully not fatally flawed, here it is in Perl.

#!/usr/bin/perl
use CGI;
$query = new CGI;
%allowedArgs = (domainname => 'dum', type => 'dum', short => 'dum');
#
print "Content-type: text/html\n\n";
print "
\n";
foreach $key ($query->param) {
  exit(1) unless defined $allowedArgs{$key};
  exit(1) if $query->param($key) !~ /^([a-zA-Z0-9\.-]){2,256}$/;
  print "$key " . $query->param($key) . "\n";
}
# possible keys: domainname, type
$domainname = $query->param(domainname);
$type     = $query->param(type);
$type = "any" unless $type;
# argument validation checks
exit(1) if $domainname !~ /^([a-zA-Z0-9\.-]){2,256}$/ || $domainname =~ /\.\./ ||  ! $domainname;
exit(1) if $type !~ /^([a-zA-Z]){1,8}$/;

# short answer?
$short = "+short" if defined $query->param(short);

# authoritative request?
if (defined $query->param(authoritative)) {
# this will be a lot more complicated and so is not implemented. Perhaps someday if there is a request...
}

open(DIG,"dig $short $type $domainname|") || die "Cannot run dig!!\n";
while(<DIG>) {
  print;
}
close(DIG);

Yes it’s very old-school. I do not even use a DNS package. Why bother? It’s not rocket science. There’s a lot more to argument validation than meets the eye – you would not believe the evil things people send to your web server. So you have to be vigilant about injection attacks and shell escapes via unexpected characters.
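
To make the shell-injection risk concrete, here is a tiny standalone Perl sketch (my own illustration, not part of the CGI script above) showing how the same character-class test rejects hostile input before it ever reaches the dig command:

#!/usr/bin/perl
# illustration: the same validation pattern used in the CGI script above
for my $candidate ('johnstechtalk.com', 'example.com; rm -rf /', '$(reboot).com') {
  if ($candidate =~ /^([a-zA-Z0-9\.-]){2,256}$/ && $candidate !~ /\.\./) {
    print "accepted: $candidate\n";
  } else {
    print "rejected: $candidate\n";
  }
}

Only the first candidate passes; the other two contain characters outside the allowed set and would never be handed to the shell.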

Usage

2020 Update

This URL has been deactivated since I moved to my new server. I’ll have to see if there’s time and interest to restore this functionality.

example 1

https://drjohnstechtalk.com/cgi-bin/digiface.cgi?domainname=johnstechtalk.com&type=a

domainname johnstechtalk.com
type a
 
; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.17.rc1.el6_4.4 <<>> a johnstechtalk.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 8711
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
 
;; QUESTION SECTION:
;johnstechtalk.com.		IN	A
 
;; ANSWER SECTION:
johnstechtalk.com.	3600	IN	A	50.17.188.196
 
;; Query time: 10 msec
;; SERVER: 172.16.0.23#53(172.16.0.23)
;; WHEN: Mon May  4 14:59:05 2015
;; MSG SIZE  rcvd: 51

example 2
https://drjohnstechtalk.com/cgi-bin/digiface.cgi?domainname=drjohnstechtalk.com&short

domainname drjohnstechtalk.com
short 
50.17.188.196

Familiarity with dig will help you determine the best switches to use, since at the end of the day the script merely calls dig and sends back its output with a minimum of HTML markup. This makes it easy to parse the output programmatically.
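
For instance, here is a minimal Perl sketch (my illustration, and only meaningful while the service is actually live at the URL from example 1) that fetches the +short answer and pulls out just the IP addresses:

#!/usr/bin/perl
# illustration: parse the web interface's output programmatically
use LWP::Simple;
my $base = 'https://drjohnstechtalk.com/cgi-bin/digiface.cgi';
my $out = get("$base?domainname=drjohnstechtalk.com&short");
# the first lines echo the parameters back; the rest is dig's +short output
print "$_\n" for grep { /^\d{1,3}(\.\d{1,3}){3}$/ } split /\n/, ($out // '');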

Conclusion
A simple DNS web interface is being announced today. Both the service and the code are being made available. The service may be pulled once it becomes abused.

References

2024 update

I learned about this basic but useful web interface to dig today: https://www.digwebinterface.com/
A nice, not too commercial web interface to dig and traceroute that is more user-friendly than mine is http://www.kloth.net/services/dig.php
The dig man pages can be helpful.

Got a geoDNS entry? Although this link has ads, it’s quite interesting because it sends your query to open DNS servers around the world: https://dnschecker.org/.

You can explore some details behind Google’s public resolving server 8.8.8.8 by using the web site: https://dns.google.com/. It’s quite helpful.

I won’t paste the link to my service but you can see what it is from the examples above.

There’s a simple but effective dig available for your Android smartphone from the Play Store: DNS Debugger from TurboBytes. No obnoxious ads, and no cost.

Of course if you are on the Internet and have access to dig, Google’s DNS servers are available for you to use directly.

Want to learn if the Great Firewall of China is clobbering the expected DNS result? The site https://viewdns.info/chinesefirewall/ is designed to do just that.