Categories
Admin CentOS Digital Currency Linux

Adding a swap file in Amazon AWS for CentOS

Intro
I was running a new daemon on my server, factomd, to experiment with digital currency. It’s an old m1.small instance with only 1.7 GB of memory. The first few times I ran it, it would get through 70,000 or so blocks, I would let it run overnight, and then it would run out of memory and crash. My admin skills are a little rusty and dated, but I eventually realized that adding swap space to my server could help.

The details
As it turns out, I’ve been running this server for five years and never bothered to create a swap area. My CentOS version is, I think, 6.0, but it’s hard to tell at this point. Anyway, this command shows the lack of an active swap space:

$ sudo swapon -s

Filename                                Type            Size    Used    Priority

What to do?
Amazon has introduced SSD storage, which is recommended for high I/O demands. That makes sense to me for swap, which is basically an extension of your memory. It’s also inexpensive in small volumes. I decided to create a 2 GB swap area – roughly the same size as the machine’s physical memory – so I bought a gp2 (general purpose) SSD volume of 2 GB. It’s only $0.20/month!
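If you prefer the command line, the volume creation and attachment can also be scripted with the AWS CLI. Consider this a sketch: the availability zone, volume ID and instance ID below are placeholders for your own values.

$ aws ec2 create-volume --size 2 --volume-type gp2 --availability-zone us-east-1a
$ aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/sdg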

Where did it go?
After attaching it to my instance, I got what is apparently a one-time message saying what device it would appear as on my instance – /dev/sdg. I was a little nervous – justifiably, as it turns out – that I would not see it from CentOS. I tried to mount it – no go. Then I did some Internet research and found these two informative commands:

$ sudo lsblk --output NAME,TYPE,SIZE,FSTYPE,MOUNTPOINT,LABEL

NAME    TYPE  SIZE FSTYPE MOUNTPOINT LABEL
xvdj    disk  100G ext4   /mnt/vol
xvde    disk    6G
`-xvde1 part    6G ext4   /
xvde3   disk  896M swap
xvdk    disk    2G

and

$ sudo fdisk -l

Disk /dev/xvdj: 107.4 GB, 107374182400 bytes
255 heads, 63 sectors/track, 13054 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
 
 
Disk /dev/xvde: 6442 MB, 6442450944 bytes
255 heads, 63 sectors/track, 783 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xaae7682d
 
    Device Boot      Start         End      Blocks   Id  System
/dev/xvde1   *           1         783     6289416   83  Linux
 
Disk /dev/xvde3: 939 MB, 939524096 bytes
255 heads, 63 sectors/track, 114 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
 
 
Disk /dev/xvdk: 2147 MB, 2147483648 bytes
22 heads, 16 sectors/track, 11915 cylinders
Units = cylinders of 352 * 512 = 180224 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x83d4c8ed

Turns out I had a swap partition all along but had never activated it! Further, both these commands show that the new volume appears as xvdk, not xvdg. Go figure. I guess I already had an xvdj volume and the new one took the next available letter. The mount command also showed me which of the above volumes were in use, so I could tell which one had just been added.

Then I used fdisk to create a partition on it (the swap space itself comes later, with mkswap):

$ fdisk /dev/xvdk

Command (m for help): c
DOS Compatibility flag is not set
 
Command (m for help): u
Changing display/entry units to sectors
 
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First sector (2048-4194303, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-4194303, default 4194303):
Using default value 4194303
 
Command (m for help): w
The partition table has been altered!
 
Calling ioctl() to re-read partition table.
Syncing disks.

$ ls /dev/xvdk*

/dev/xvdk  /dev/xvdk1

$ sudo mkswap /dev/xvdk1

Setting up swapspace version 1, size = 2096124 KiB
no label, UUID=0d782596-03e6-48fd-a0fa-2d0e3174f727

$ sudo swapon /dev/xvdk1
The previous command activated our new swap partition. To confirm it, run this command:

$ sudo swapon -s

Filename                                Type            Size    Used    Priority
/dev/xvdk1                              partition       2096120 0       -1

Finally, to make this swap partition persist after a reboot, I added this line to /etc/fstab:

/dev/xvdk1      swap            swap    defaults        0 0
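A quick sanity check of that fstab entry, without rebooting: deactivate the swap manually, then let swapon activate everything listed in /etc/fstab.

$ sudo swapoff /dev/xvdk1
$ sudo swapon -a
$ sudo swapon -s

If /dev/xvdk1 shows up again in that last listing, the fstab line should survive a reboot.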

Did it help?
Why yes it did! Now I am using over 900 MB of swap space, so it was needed pretty badly in fact:

$ sudo swapon -s

Filename                                Type            Size    Used    Priority
/dev/xvdk1                              partition       2096120 945552  -1

And my original motivation – keeping factomd from crashing – was achieved as well. Perhaps it wasn’t so important to use an SSD volume: mostly the I/O operations per second were well below 100. But I did have the satisfaction of seeing them burst to 1000, a figure I never could have hit with a traditional drive.

Appendix
Monitoring i/o
These blockchain verifiers can be killers in terms of resource consumption on little servers like mine. The best tool for analyzing what is going on is iostat:

$ iostat -xz 10

Linux 2.6.32-131.17.1.el6.x86_64 (ip-10-185-21-116)     05/01/17        _x86_64_        (1 CPU)
 
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.92    0.00    0.17    0.24    0.85   97.83
 
Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
xvdj              0.00     0.45    0.22    0.35     6.90     6.41    23.60     0.01   11.87    8.33   14.05   1.43   0.08
xvde              0.00     0.02    0.02    0.57     0.55     4.70     8.93     0.01   15.32    6.62   15.64   2.84   0.17
xvdep3            0.00     0.00    0.00    0.00     0.00     0.00     8.73     0.00    1.95    1.95    0.00   1.94   0.00
xvdk              0.00     0.01    0.02    0.01     0.19     0.16    11.35     0.00    3.23    0.92   10.75   0.19   0.00
 
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           3.65    0.00    6.44   83.93    1.42    4.56
 
Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
xvdj              0.00     1.71  232.42    2.11  3440.68    30.54    14.80     0.43    1.84    1.80    6.95   1.72  40.38
xvde              0.00     0.00   74.59    3.65  3773.45    29.17    48.61     0.31    3.99    3.36   16.91   0.99   7.77
xvdk              5.47   414.93  606.78  230.37  4898.01  5162.39    12.02     1.89    2.26    0.88    5.89   0.18  14.89
 
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           2.63    0.00    4.19   89.55    1.23    2.40
 
Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
xvdj              0.00     0.00  374.08    0.50  5435.98     4.02    14.52     0.84    2.25    2.25    4.33   1.32  49.32
xvde              0.00     0.00    3.52    0.28   185.03     2.23    49.29     0.01    1.66    1.41    4.80   0.72   0.27
xvdk              1.79    99.72  521.96  108.88  4189.94  1668.83     9.29     0.76    1.21    0.72    3.53   0.14   8.95
 
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           8.05    0.00    7.10   72.87    8.46    3.52
 
Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
xvdj              0.00     0.00  338.02    8.25  6812.99    66.04    19.87     0.94    2.72    2.71    3.18   1.44  49.84
xvde              0.00     0.00   52.17    1.76  2317.73    14.07    43.24     0.15    2.72    2.43   11.23   0.67   3.63
xvdk              9.20   381.12 1180.58  256.16  9518.27  5098.24    10.17     1.95    1.36    0.78    4.04   0.14  20.65
...

Always mentally discard the first set of numbers when iostat starts up; that first report just shows averages since the system booted. But this output is chock full of information. The CPU time spent waiting for I/O is too high – 70 to 90% – and a lot of that can be blamed on xvdj (see the %util column for that device). The way I see it, if your I/O were instantaneous that number would drop to 0 and the CPU could be doing other, more productive things; as it stands, xvdj is a bottleneck 40 to 50% of the time. This also shows my swap device, xvdk, being heavily used at times yet not much of a bottleneck (around 20% util).
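By the way, if you want to watch swap activity itself rather than device utilization, vmstat is handy. Its si and so columns show the amount swapped in and out per second:

$ vmstat 10

As with iostat, discard its first line of numbers, which covers the time since boot.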

Then of course there is top, which just confirms that factomd is the resource hog:

$ top

top - 11:45:12 up 1246 days, 14:49,  3 users,  load average: 1.55, 1.73, 1.67
Tasks: 108 total,   1 running, 107 sleeping,   0 stopped,   0 zombie
Cpu(s): 10.6%us,  1.7%sy,  0.0%ni,  4.6%id, 82.3%wa,  0.0%hi,  0.2%si,  0.6%st
Mem:   1695600k total,  1682160k used,    13440k free,     1400k buffers
Swap:  2096120k total,  1003088k used,  1093032k free,    45348k cached
 
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
29702 john      20   0 2956m 1.3g 3984 S 21.4 77.9 490:35.59 factomd
...

Type of cpu
Just for the record here’s the type of cpu you get with an m1 small instance:

$ cat /proc/cpuinfo

processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 45
model name      : Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz
stepping        : 7
cpu MHz         : 1799.999
cache size      : 20480 KB
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu de tsc msr pae cx8 cmov pat clflush mmx fxsr sse sse2 ss ht syscall nx lm constant_tsc up rep_good aperfmperf unfair_spinlock pni pclmulqdq ssse3 cx16 sse4_1 sse4_2 x2apic popcnt aes hypervisor lahf_lm arat epb xsaveopt pln pts
bogomips        : 3599.99
clflush size    : 64
cache_alignment : 64
address sizes   : 46 bits physical, 48 bits virtual
power management:

So that’s a single 2 GHz cpu.

Conclusion
We showed how to economically add swap to a CentOS image on Amazon AWS. We showed factomd successfully running on this small instance, and we showed Linux commands that can be used to monitor resource consumption. Knowing what I know now – that factomd is I/O-limited – I probably would have put its files onto their own SSD drive in addition to creating a swap space, which is the developers’ recommendation anyway.

References and related
I followed this post for the swap partition creation steps: http://network-howtos.blogspot.com/2015/04/adding-new-swap-partition-to-centos-vm.html

Categories
Apache Raspberry Pi

Automating guest wireless access for monitoring purposes with Raspberry Pi

Intro
I decided to monitor guest wireless access to the Internet using a Raspberry Pi. By that I mean a basic, binary, is-it-working-now-or-not response. The back end is a Cisco Wireless LAN Controller (WLC). Like most such systems there is no WiFi password, but your connection is extremely limited until you authenticate to the WLC login page in a browser. Further, this particular system is configured to only permit usage for up to four hours, after which another authentication is required to continue. The system is pretty reliable overall, but there are lots of pieces involved, and I decided it would be nice to be the first to know if it isn’t working. And it’d be nice to put one of my spare Raspberry Pis to work in this semi-official capacity.

The details
Let’s cut to the chase. This is what my crontab file looks like:

# added for drj4guest WiFi testing - DrJ 4/26/17
# this line should keep us authenticating...
* * * * * curl -d `cat /home/pi/data` https://verify.drj4guests.johnstechtalk.com/login.html > /dev/null 2>&1
# and this is what we actually touch, where we have a separate monitor looking for it...every 2 minutes
*/2 * * * * curl http://johnstechtalk.com/raspberrypidrj4guest?`perl -e 'print time;'` > /dev/null 2>&1

For this to work I need accurate time on the Raspberry Pi. By default it was in the wrong timezone – UTC instead of EDT – and it had anyway drifted by quite a few seconds. I describe how to fix this all up in this post.
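For reference, the fix amounts to roughly the following on a systemd-based Raspbian (a sketch; your timezone will differ):

$ sudo timedatectl set-timezone America/New_York
$ sudo ntpdate -u pool.ntp.org

ntpdate gives a one-off correction; ntpd or systemd-timesyncd keeps the clock synced on an ongoing basis.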

Let’s break this down. The WiFi is known as drj4guest, hence some of the naming conventions you see.

Here are the contents of the file data in /home/pi:

buttonClicked=4&redirect_url=johnstechtalk.com%2F&err_flag=0&agree=on&username=john&password=<DRJ4GUEST_PASSWORD>

So I meticulously reverse engineered all the fields the login form sends over and figured out what it is doing.

In the data file I put my assigned WiFi login username, john (replace it with yours) and my password, which also needs to be replaced with an appropriate value for your situation.

Then I decided to run an attempted authentication every one minute, while running the query to my web server every two minutes. That is what the */2 field does in my crontab. That way I will always have authenticated first, even when my four hours has run out.

I like that this also tests the authentication that has been set up, as this could also be the cause of a failure.

Meanwhile my web server log gets entries like this one every two minutes:

50.17.188.196 - - [26/Apr/2017:15:12:02 -0400] "GET /raspberrypidrj4guest?1493233922 HTTP/1.1" 404 219 "-" "curl/7.26.0"

On the webserver
On the webserver being accessed by the Ras Pi I have this Perl script:

#!/usr/bin/perl
# check if Raspberry Pi on the DrJ guest WiFi is phoning home
# - DrJ 4/26/17
#
# to test good to error transition,
# call with a very small maxDiff, such as 0!
use Getopt::Std;
getopts('m:d'); # maximum allowed time difference
$maxDiff = $opt_m;
$DEBUG = 1 if $opt_d;
unless (defined($maxDiff)) {
  usage();
  exit(1);
}
$monitorName = 'Raspberry Pi phone home';
# access line looks like:
# 96.15.212.173 - - [02/Feb/2013:22:00:02 -0500] "GET /raspberrypidrj4guest?136456789 HTTP/1.1" 200 455 "-" "curl/7.26.0"
$magicString = "raspberrypidrj4guest";
$accessLog = "/var/log/apache202/access.log";
#
# pick up timestamp in access file
$piTime = `grep $magicString $accessLog|tail -1|cut -d\? -f2|cut -d' ' -f1`;
$curTime = time();
chomp($piTime);
$date = `date`;
chomp($date);
# your PID file is somewhere else. It tells us when Apache was started.
# you could comment out these next lines just to get started with the program
$PID = "/var/run/apache202.pid";
($atime,$mtime,$ctime) = (stat($PID))[8,9,10];
$diff = $curTime - $piTime;
if ($curTime - $ctime < $maxDiff) { # Apache restarted too recently; the log may not have an entry yet
  print "Apache hasn't been running long enough yet to look for something in the log file. Maybe next time\n";
  exit(0);
}
print "magicString, accessLog, piTime, curTime, diff: $magicString, $accessLog, $piTime, $curTime, $diff\n" if $DEBUG;
print "PID file stat. atime, mtime, ctime: $atime,$mtime,$ctime\n" if $DEBUG;
print "Freshness: $diff s\n";
###############################
sub usage {
  print "usage: $0 -m <maxDiff (seconds)> [-d (debug)]\n";
}

It’s designed to be run by SiteScope as a script monitor. You would run it by hand like this:

> ./timecheck.pl -m 300

Freshness: 35 s

If that Freshness time grows too large then the Ras Pi hasn’t been phoning home and you – presumably – have a problem somewhere. /var/log/apache202 happens to be where I have my apache access file on that system.

Conclusion
We showed how to set up a Raspberry Pi to monitor guest WiFi access on a Cisco Wireless LAN Controller, even though the accounts have to be re-authenticated every four hours.

References and related
In the consumer space I do something closely related. I use a Ras Pi at home to monitor whether my Internet connection at home is working. The same phone home concept is used.

Categories
Consumer Tech Network Technologies Raspberry Pi

WAN load-balancing routers

Intro
I got an offer for $20/month broadband access from Centurylink. It got me to thinking, could I somehow use that as a backup connection to my current cable ISP? How would that work? Could I use a Raspberry Pi as a WAN load-balancing router?

The details
Well, I’m not sure about using a Raspberry Pi. It’s not so simple.

But I just wanted to mention there are solutions out there in the marketplace to this very problem. They’re not that easy to find, hence this article. They’re mostly aimed at small businesses where Internet connectivity is very important, like an Internet cafe.

This Cisco dual WAN router for $157 would do the trick:

https://www.amazon.com/Cisco-Dual-Gigabit-Router-RV042G-NA/dp/B008CWW6VY/ref=pd_cp_147_2?_encoding=UTF8&pd_rd_i=B008CWW6VY&pd_rd_r=5XFRCAG9PT7THJW8BMJZ&pd_rd_w=PQrlm&pd_rd_wg=FUaoX&psc=1&refRID=5XFRCAG9PT7THJW8BMJZ

Or for about the same price, this Linksys Dual WAN router:

https://www.amazon.com/Linksys-Business-Gigabit-Router-LRT224/dp/B00GK640D6/ref=pd_sbs_147_6?_encoding=UTF8&pd_rd_i=B00GK640D6&pd_rd_r=5XFRCAG9PT7THJW8BMJZ&pd_rd_w=rmOWr&pd_rd_wg=FUaoX&psc=1&refRID=5XFRCAG9PT7THJW8BMJZ

Want to go consumer grade and save money? This TP-Link model is only about $85:

https://www.amazon.com/TP-LINK-TL-R480T-Balance-Broadband-Configurable/dp/B002T4D3L8/ref=pd_ybh_a_4?_encoding=UTF8&psc=1&refRID=36DXNVKPFB8MN844NVNP

But its ports are only 100 Mbps, which is kind of surprising in this day and age.

DIY approach for the intrepid

There now appears to be a Raspberry Pi solution since the advent of RPi 4 and OpenWRT support for ARM. This post is amazingly detailed: Raspberry Pi as a home router. The latest generation of Raspberry Pi… | by Vladimír Záhradník | The Startup | Medium

So the idea there would be to get OpenWRT running on an RPi 4, then explore running multi-WAN on OpenWRT: LEDE/OpenWRT — Setting Up Multi-WAN | by CT WiFi | LEDE/OpenWrt & IoT | Medium

Conclusion
We have identified commercial solutions to the question: can I use two ISPs at home to provide high availability and load-balancing? I have my doubts about them, however, and I think running OpenWRT may be the best option.

Categories
Security Web Site Technologies

Google Authenticator – not tough to self-host

Intro
I wanted to learn a bit more about digital currencies. I’ll certainly be posting about them in the future. The best way to get some is to open an account with coinbase. But for security reasons – and I am all for securing things as digital currency thefts are notorious – they require two-factor authentication. The least secure method is to have an SMS code sent to your phone.

Well, my phone is a work phone that I use for light personal use. I’ve never owned a personal cell phone. So I’m not even sure my number will be portable if I retire or am severed from the company that supplies the phone. It would be just like me to forget all about it years from now when I’m facing that situation.

They said a more secure method is Google Authenticator. That sounds a bit daunting and perhaps tied to Google? Upon investigation it turns out that neither of those statements is true.

The details

Turns out the Google Authenticator is really an implementation of open standards based on a couple of RFCs: RFC 6238 (TOTP) and RFC 4226 (HOTP). So there are other available implementations besides Google’s.

I used this implementation. It worked fine for me once I understood how it works! https://github.com/gbraad/gauth

How gauth works
The main thing to understand – and the author doesn’t really explain it – is that the secrets are stored locally in the browser. I didn’t dig into the code, but it must be a cookie or HTML5 local storage. So on the same desktop, one browser will see your added account while another does not. No secrets are stored on the server, so the web server only passively serves the HTML and Javascript files.

So in my opinion you ought to make a secure copy of the secret so it doesn’t vanish when you clear your browser cookies, or your computer crashes, or whatever.

It’s a TOTP: a time-based one-time password. I am personally familiar and comfortable with the concept, having been a long-time RSA token user, back to the days when RSA was Security Dynamics! So my account cannot be compromised by sharing a one-time code, as I do in the screenshot below!


What it looks like

gauth running at Drjohnstechtalk.com

Note the time remaining on the right side. These one-time passwords only last for 30 seconds and then new ones will be displayed.
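If you’re curious what’s under the hood, the code generation described by those two RFCs is simple enough to sketch in a few lines of shell. To be clear, this is an illustration, not the gauth code: the secret below is a made-up example key (your real secret, base32-decoded to hex), and it assumes openssl and xxd are installed.

# TOTP sketch per RFC 6238 / RFC 4226 - illustration only
SECRET_HEX="3dc6caa4824a6d288767b2331e20b43166cb85d9"    # made-up example key
step=$(( $(date +%s) / 30 ))                # 30-second time step
counter=$(printf '%016x' "$step")           # 8-byte big-endian counter, as hex
# HMAC-SHA1 of the counter, keyed with the secret
hash=$(printf '%s' "$counter" | xxd -r -p | \
  openssl dgst -sha1 -mac HMAC -macopt hexkey:"$SECRET_HEX" | awk '{print $NF}')
# dynamic truncation: the low nibble of the last byte picks a 4-byte window
offset=$(( 16#${hash:39:1} * 2 ))
printf '%06d\n' $(( ( 16#${hash:offset:8} & 0x7fffffff ) % 1000000 ))

Every 30 seconds the time step, and hence the code, changes – which is exactly the countdown you see above.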

Keep up your time
Since these codes are time-based, it is actually important that your computer be synced to an Internet time source. I hadn’t really messed with that on my Windows 10 system, and when I checked I found the time off by seven seconds, which is way too much in my opinion. Being off by a couple of minutes is probably fatal. And I was syncing to a time source only about every five days, which is far too infrequent.

Too lazy or unable to host your own?
You can use the one the author hosts: http://gauth.apps.gbraad.nl/. Of course that’s putting your trust in the author so I don’t recommend using that.

How to host it
You basically just download the zip file from the git repository and unzip it somewhere onto your web server. In my case I am keeping the location a secret, but it doesn’t really matter, as there is nothing on the server to hide.
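In other words, something like this – a sketch, with the destination being whatever obscure path you choose:

$ wget https://github.com/gbraad/gauth/archive/master.zip
$ unzip master.zip -d /var/www/html/your-secret-dir

There is no server-side configuration beyond serving the static files.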

Conclusion
Until now I have wanted two-factor authentication but have hesitated due to my incorrect notion that this would actually tie me down to Google’s Ecosystem. Today I found a simple, independent (of Google) implementation that works with Coinbase. I hope to expand my use of 2FA to my banking apps, WordPress and perhaps other areas now that I am comfortable with it.

References and related
A “simple” implementation of Google Authenticator which can be self-hosted: https://github.com/gbraad/gauth
Wikipedia article on Google Authenticator: https://en.wikipedia.org/wiki/Google_Authenticator. It’s very helpful.

Categories
Network Technologies

Obscure curl error explained – partially

Intro
Are you, like me, vexed by this curl error?

curl: (51) SSL peer certificate or SSH remote key was not OK


More details
I have many Linux systems from which to test, but I can only reproduce this error on some of them. It’s rather strange. I know most of the conditions which create this problem, but not all of them.

As you will see elsewhere on the Internet, the error is in general produced by a DNS name/URL mismatch. The funny thing is that I always use the -k switch when running curl, yet this particular error occurred on some systems even with the -k switch! Now that’s noteworthy.

Circumstances which lead to the error

The hostname in the URL does not match the name in the certificate, e.g.,

curl -i -k https://vmanswer.com/

For me I only see the error on an older SLES 11 SP2 system. But I’m not sure how significant that is.

Additional debug info can be gleaned by adding the -v switch.

Circumstances which will not produce this error

If the URL hostname and the name on the certificate match, all is good.
If the URL uses an IP rather than a hostname all is good.
Perhaps certain implementations of curl and/or openssl will never produce this error as long as the -k switch is used??
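One way to check for the mismatch directly, rather than relying on curl’s terse message, is to pull the certificate subject with openssl and compare it to the hostname in your URL. A sketch, with vmanswer.com standing in for your site:

$ openssl s_client -connect vmanswer.com:443 -servername vmanswer.com </dev/null 2>/dev/null | openssl x509 -noout -subject

If the CN there (or a subject alternative name, visible with -text) doesn’t match the hostname you gave curl, you’ve found the cause.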

Conclusion
The curl error curl: (51) SSL peer certificate or SSH remote key was not OK has been slightly better explained. It’s generally a hostname/certificate name mismatch, and it only occurs with some curl versions.

Categories
Network Technologies Web Site Technologies

SSL Interception: troubleshooting

Intro
SSL interception is a reality at some larger companies. From a security perspective it is vital, as it permits you to extend your AV scanning, botnet detection, 0-day, DLP, cloud security, etc., to your https traffic, which is normally just an encrypted blur to the edge devices through which it flows.

Bluecoat has a good solution for SSL interception, but it is possible to make some mistakes. Here I document one of those and provide a few other tips.

The details
The general idea is that within your large company – let’s call it “B” – there is an existing PKI infrastructure in use. In particular, a private root CA has been included in the certificate store on B’s standard PC image. B users use an explicit proxy; this is a requirement for SSL interception, by the way. Now B’s PKI team issues an intermediate certificate to B’s proxy server such that it can sign certificates. It’s a so-called signing certificate because its extensions explicitly mention:

            X509v3 Basic Constraints: critical
                CA:TRUE, pathlen:0
            X509v3 Key Usage: critical
                Certificate Sign, CRL Sign

B’s proxy, when asked to access an external https site by a desktop PC, then acts as an SSL client, decrypts the traffic, does all its AV, 0-day, DLP inspections, then re-encrypts it with its own on-the-fly issued certificate before sending it along to the desktop! In the PKI world, a signing certificate is a big deal because in the wrong hands havoc could result.

For instance, the user requests https://www.google.com/. What the user gets is https://www.google.com, but when he inspects the certificate he sees a www.google.com certificate issued by the proxy, which in turn was issued from B’s own root CA (screenshot further below).

Results if implemented badly
You might see this in Internet Explorer for every https site you access:

The security certificate presented by this website was not issued by a trusted certificate authority.

Security certificate problems may indicate an attempt to fool you or intercept any data you send to the server.

Looking at the certificate in Chrome (the only way I know how) shows the problem:

Certificate Error
There are issues with the site’s certificate chain (net::ERR_CERT_AUTHORITY_INVALID).

And indeed in examining the certificate it appears stand-alone. The whole chain should normally be displayed there but there is only the end certificate. So browsers won’t trust it.

What is happening in this case is that the proxy is intercepting, but it’s not providing the intermediate CERT.
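You can see exactly what the proxy is serving with openssl and its -showcerts option. Note that older openssl builds can’t traverse a proxy, so this sketch assumes direct access, or an OpenSSL 1.1.0+ s_client, which adds a -proxy option:

$ openssl s_client -connect www.google.com:443 -showcerts </dev/null

-showcerts prints every certificate in the presented chain. In the broken case described here, only the single on-the-fly certificate comes back, with no intermediate behind it.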

Here is a screen shot showing that the proxy is the issuer for the certificate:

What to check
In our experience this can happen if the proxy’s signer certificate is present in a keyring on the proxy, but not present in the CA Certificates. We added this CERT to the CA Certificates and it behaved much better. Here’s a view of the CA Certificates after fixing it:

And a view of the certificate chain:


2022 Update

So the guy who normally does this retired and it fell onto my shoulders – getting the new signing certificates installed. I’ll be danged if I didn’t run into the exact same problem and waste an entire afternoon on it – until I discovered my own blog post from five years ago! And it still works! And it’s the only place where this specific problem is discussed and a solution presented. Kudos: me! Bluecoat is now Broadcom, but it’s the exact same stuff under the hood.

Other tips
We got no results whatsoever when we initiated an SSL layer until we turned on Detect Protocol:

On the other hand we had a site break just from enabling Detect Protocol. Even when SSLInterception was set to action: Disabled.

We found that action: None worked better for these cases. That sets the behaviour back to what you had before you enabled Detect Protocol. The idea is that Detect Protocol invokes the SSL Proxy component of Bluecoat, and the SSL Proxy can mess things up a bit for some SSL sites. Our problem was with a Java SSL site.

What about pinned certificates
Certificate pinning provides the browser an independent way to verify who was supposed to have issued the site’s certificate. This would seem to be a doomsday scenario for SSL interception, but most browsers have built in an exception: if the certificate chains to a locally installed root CA, as with a corporate proxy, the pin is ignored.


Great resource for anyone doing SSL interception

There are many scenarios to consider when you have a man in the middle. OWS is the origin web server in the following. How will it behave if:

  • the OWS CERT is expired
  • the OWS CERT is self-signed
  • the OWS CERT is revoked
  • the OWS only offers weak ciphers
  • the OWS CERT is from a CA not trusted by the browser
  • the OWS CERT contains the wrong common name
  • the OWS CERT lacks the intermediate CERT
  • the OWS CERT is pinned
  • etc.

Get the idea? Lots of things to consider here – the scenarios, how your SSL intercepting device actually behaves, and how you want your SSL interception to behave for that scenario.

A great resource where they’ve done the job for you, building certificates with almost every defect you can think of, is badssl.com.
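A quick way to put a few of those scenarios through their paces is a curl loop over some of badssl.com’s subdomains. A sketch; run it from behind your interception device (with https_proxy set if you use an explicit proxy) and note which defects get masked and which still fail:

for h in expired self-signed wrong.host untrusted-root; do
  echo -n "$h: "
  curl -s -o /dev/null "https://$h.badssl.com/" && echo OK || echo "curl exit code $?"
done

Without interception all four should fail; an intercepting proxy that re-signs everything indiscriminately may turn some of these into OK, which is itself a finding.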

Regrets
Man, I wish openssl supported usage through a proxy, in particular openssl s_client. But it doesn’t. Examining certificates with the various browsers is a pain, and I don’t fully trust them. For me openssl is truth.


References and related

All different kinds of faulty certificate scenarios to test with: badssl.com

You can now get “real” certificate for free! I’ve used them myself several times: Lets Encrypt

My article concerning Lets Encrypt usage: Saving money using Lets Encrypt

An article I wrote explaining ciphers.

Some openssl commands I’ve found useful: My favorite openssl commands.

Somehow related to all this is a web site that guarantees to never use SSL, which can be useful when you are using a guest WiFi requiring sign-on and want to hit a “safe” site (a non-SSL site can be hard to find these days): http://neverssl.com/

Categories
Network Technologies Web Site Technologies

Fail: What I’m working on now: Poor man’s version of Speedtest.net

Intro
Now that I have a dual-band router I wanted to run some tests to see if 5 GHz is really faster and more stable than 2.4 GHz, as my intuition was telling me. But my only 5 GHz device where I had a chance to measure was my Amazon Fire HD tablet, and wouldn’t you know that it’s incapable of running Speedtest (speedtest.net). The web site forced it to a mobile app version, but Amazon’s app store, being limited in its offerings, doesn’t have a Speedtest app!

Anyway speedtest.net runs ads pretty aggressively, which I don’t like.

So I decided to try to write my own.

This turned out to be very hard to do. It turns out I suck at Javascript.

Some details
Normally I show all my false starts in the hope that others can learn from my mistakes, but my Javascript blunders are just too painful and I never did sort them out. When I used Javascript methods to set page timers I got completely inconsistent and hence unreliable results. So I settled on this simplistic PHP approach to gauge download speed:

<?php
// - DrJ 3/2017
// the weakness of this method is that it is a single stream
echo "Date: " . date('h:i:s') . "<br>\n";
$starttime = microtime(true);
$string = "";
for ($x = 0; $x < 750000; $x++) {
  $string .= mt_rand(1000000,9999999);
}

echo "<!-- $string -->\n";
//start again
//echo date('h:i:s');
echo "<br>\n";
$endtime = microtime(true);
$timediff = $endtime - $starttime;
//echo "php timer: starttime: " . $starttime . " endtime: " . $endtime . " diff: " . $timediff . "<br>\n";
echo "Page load time: " . $timediff . " s<br>\n";
// 1.04 is observed overhead of IP + tcp. try ip -s link show eth0 before and after running curl
$dataset = 1.04*(strlen($string) + 200)/1000000.0;
$mbps = $dataset*8.0/$timediff;
echo "Mbytes downloaded in test: " . $dataset . " Mbytes<br>\n";
echo "Bandwidth: " . $mbps ." mbps<br>\n";
?>
 

I called the file index.php, put it on my server in a directory of my choosing – let’s say downloadtimer – and ran it. The results look like this:

Date: 07:30:33
Page load time: 6.9666068553925 s
Mbytes downloaded in test: 5.460208 Mbytes
Bandwidth: 6.270149142432 mbps
 
Test again
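You can take roughly the same measurement from the command line with curl’s write-out variables; the URL is wherever you put index.php:

$ curl -s -o /dev/null -w 'total: %{time_total} s, download speed: %{speed_download} bytes/s\n' https://your-server.example.com/downloadtimer/

Like my PHP timer this is a single stream, so it will understate what a multi-stream tester like speedtest.net reports.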

Conclusion
I pretty much gave up. Use fast.com instead. It’s much better than speedtest.net. There is also a mobile app for fast.com.

References and related

I just learned there is a Linux script (written in Python) which runs Speedtest! That is awesome. It is speedtest-cli: https://pypi.org/project/speedtest-cli/

It gives you the download speed and the upload speed. It calculates the closest server. It is really the full monty.
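Typical usage, assuming pip is available:

$ pip install speedtest-cli
$ speedtest-cli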

Everyone uses speedtest.net.

I just learned about Speedcheck (speedcheck dot org). Initially it seemed promising, but too often it simply doesn’t work. And its advertising is just about as aggressive as Ookla’s speedtest.net that everyone uses.

Meanwhile a friend pointed out a couple of superior speed test web sites. AT&T’s Speedtest is a good choice. There are few if any ads, it runs on my Fire HD tablet, and it’s fun to watch: speedtest.att.com

This one seems only slightly less aggressive than speedtest.net: Internet Frog. It works on my tablet, but with limited functionality and a non-flashy interface.

This site is the simplest of all, so probably the best: fast.com. It’s run by Netflix, which has an obvious interest in helping users establish what their download speed is. It only displays download speed by default, but just click Show more info and you will get ping time and upload speed as well. Also, they actually explain in a transparent way what the heck they are doing: https://medium.com/netflix-techblog/building-fast-com-4857fe0f8adb. And most importantly, they don’t shower you with ads.

Want a large file to experiment with downloading? Tele2’s speedtest service offers a variety through an FTP server: ftp://speedtest.tele2.net/. On Linux I use a command like this:

$ curl ftp://speedtest.tele2.net/50MB.zip > /dev/null

 % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 50.0M  100 50.0M    0     0  6099k      0  0:00:08  0:00:08 --:--:-- 11.3M

For faster speeds you should download the larger files.

I have also just discovered this site with large files you can download with HTTP:

https://www.thinkbroadband.com/download

For instance here is a direct link to their 100MB file which is nice to work with: http://ipv4.download.thinkbroadband.com/100MB.zip

Categories
Consumer Tech Web Site Technologies

White web page: maximizes your backlight with no invasion of privacy

Intro
Maybe it’s just me, but I’ve always had some issues getting my flashlight app to work on my phones. First there’s the issue of finding one from a trusted source (many contain spyware: access to my contacts?? For a flashlight?? I don’t think so…). So I trusted Swiss Army Knife, but then I had to launch that, then drill down to the flashlight, blah, blah. And the flashlight app on the Windows phone also looks a little seedy. And anyway, sometimes you don’t want to overwhelm with your camera’s LED. Maybe just a simple glow from the backlight of your screen is enough to guide you down the hallway in the dark… I know I found myself using both my Fire HD tablet and my Windows phone in exactly that way.

Then I decided to scan a slide, using the backlight of my tablet to permit the scanner to see the colors, etc. That did not work out, by the way. It sounds like a good idea, though, doesn’t it? I guess the backlight is not sufficiently bright. Maybe if I play with screen brightness…

Anyway, for all the above reasons I realized I could use a white-backlight app. Rather than pay $0.99 for another dodgy app, I decided to write a web page that displays an all-white background. Then I could bookmark it and use it on both my Windows phone and my tablet!

White backlight web page
This is really complicated – don’t try this for yourself. Ha, ha, just kidding. This is about as simple as it gets. Falls into the category of “wish I had thought of it sooner,” or “Duh.”

White backlight web page

The HTML code
Want to put this on your own web server? Here is the code.

<html><head></head><body bgcolor="white"></body></html>

Conclusion
No banners, no ads, no intrusive permissions: this is a web page that maximizes the soft glow of your device’s backlight. You could play with your screen brightness to possibly make it still brighter, adjust the length it glows for, etc. To pull it up in a jiffy I’ve bookmarked my White backlight web page.


References and related

White backlight web page.

Categories
Admin DNS Network Technologies

Bluecoat ProxySG and DNS using edns seem incompatible

Intro
Imagine your DNS server had this behaviour when queried using dig:

$ dig drjohnstechtalk.com @10.201.145.30

; <<>> DiG 9.9.2-P2 <<>> drjohnstechtalk.com @10.201.145.30
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: FORMERR, id: 48905
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1
 
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;drjohnstechtalk.com.           IN      A
 
;; Query time: 1 msec
;; SERVER: 10.201.145.30#53(10.201.145.30)
;; WHEN: Fri Feb 24 12:16:42 2017
;; MSG SIZE  rcvd: 48

That would be pretty disturbing, right? The only way to get dig to behave is to turn off edns like this:

$ dig +noedns drjohnstechtalk.com @10.201.145.30

; <<>> DiG 9.9.2-P2 <<>> +noedns drjohnstechtalk.com @10.201.145.30
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 31299
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
 
;; QUESTION SECTION:
;drjohnstechtalk.com.           IN      A
 
;; ANSWER SECTION:
drjohnstechtalk.com.    3277    IN      A       50.17.188.196
 
;; Query time: 3 msec
;; SERVER: 10.201.145.30#53(10.201.145.30)
;; WHEN: Fri Feb 24 12:17:00 2017
;; MSG SIZE  rcvd: 53

Nslookup works. But who uses nslookup anyway?

Furthermore, imagine that the DNS client and server are on the same subnet, so there is no firewall intermediating their traffic. So we know we can’t blame a firewall for cutting off large DNS packets, unlike the suggestions made in the references section.

Well, this is exactly the situation at a large company where I consult. The DNS server is unusual: a Bluecoat ProxySG, which can conveniently combine replies from nameservers in two different namespaces.

There does not seem to be an option to handle edns queries correctly on a Bluecoat device.

The client is running SLES version 11. The real question is how applications will behave. Which type of query will they make?

Bluecoat Response
Bluecoat does not support EDNS and gives a response permitted by RFC 2671. RFC 2671 also encourages clients to account for error responses and drop the use of EDNS in a retry.

References and related
EDNS: What is it all about? is a really good explanation of edns and how it came about, how it’s supposed to work, etc.
This post suggests some scenarios where edns may not work, though it does not address the Bluecoat issue: http://blog.fpweb.net/strange-dns-issues-better-check-out-edns/#.WLBmw3dvDkk
RFC 2671

Categories
Security

The latest on handling of SHA-1 certificates by the major browsers

Intro
A certain organization is still using SHA-1 certificates internally, in spite of years of warnings, as I write this in February 2017. But in the security world, lack of action = eventual weakness. Ignorance is not bliss, and putting your head in the sand is not a viable security strategy. So cracks in their approach are starting to appear, especially with the Chrome browser, the latest version of which shows their internal SSL sites as not secure.

I found the security blog links in the references section below very helpful in getting a feel for the major browsers’ take on all this. A colleague brought them to my attention.

Solutions
The obvious solution is for them to switch to a PKI based on SHA-256 (a member of the SHA-2 family). But apparently that is like turning an aircraft carrier, or maybe pushing a planet into a different orbit, as bureaucratic inertia keeps progress on this front barely perceptible.

Here’s an image of that part of Chrome that shows the result of an https Intranet site access:

References and related
These are all from the October – November 2016 time frame.
Google security blog: SHA-1 certificates in Chrome
Mozilla security blog: Phasing Out SHA-1 on the Public Web
Microsoft Edge Developer: SHA-1 deprecation countdown