
Obscure curl error explained – partially

Intro
Are you, like me, vexed by this curl error?

curl: (51) SSL peer certificate or SSH remote key was not OK

More details
I have many Linux systems from which to test. But I can only produce this error on some of them. It’s rather strange. I know most of the conditions which create this problem, but not all of them.

As you will see elsewhere on the Internet, the error is generally produced by a mismatch between the hostname in the URL and the name on the certificate. The funny thing is that I always use the -k switch when running curl. This particular error occurred on some systems even with the -k switch! Now that’s noteworthy.

Circumstances which lead to the error

hostname in url does not match name in the certificate, e.g.,

curl -i -k https://vmanswer.com/

For me I only see the error on an older SLES 11 SP2 system. But I’m not sure how significant that is.

Additional debug info can be gleaned by adding the -v switch.
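
You can also inspect the name on the served certificate directly, independent of curl. A minimal sketch with openssl, using the example site above:

$ echo | openssl s_client -connect vmanswer.com:443 2>/dev/null | openssl x509 -noout -subject

If the subject (or the subject alternative names) doesn’t contain the hostname you put in the URL, you’ve found your mismatch.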

Circumstances which will not produce this error

If the URL hostname and the name on the certificate match, all is good.
If the URL uses an IP rather than a hostname all is good.
Perhaps certain implementations of curl and/or openssl will never produce this error as long as the -k switch is used?

Conclusion
The curl error curl: (51) SSL peer certificate or SSH remote key was not OK has been slightly better explained. It’s generally a hostname/certificate name mismatch and it only occurs on some curl versions.


SSL Interception: troubleshooting

Intro
SSL Interception is a reality at some larger companies. From a security perspective it is vital, as it permits you to extend your AV scanning, botnet detection, 0-day, DLP, cloud security, etc., to your https traffic, which is normally just an encrypted blur to the edge devices through which the traffic flows.

Bluecoat has a good solution for SSL interception, but it is possible to make some mistakes. Here I document one of those and provide a few other tips.

The details
The general idea is that within your large company – let’s call it “B” – there is an existing PKI infrastructure which is in use. In particular a private root CA has been included in the certificate store on B’s standard PC image. B users use explicit proxy. This is a requirement for SSL interception by the way. Now B’s PKI team issues an intermediate certificate to B’s proxy server such that it can sign certificates. That’s a so-called signing certificate because in the extensions it explicitly mentions:

            X509v3 Basic Constraints: critical
                CA:TRUE, pathlen:0
            X509v3 Key Usage: critical
                Certificate Sign, CRL Sign

B’s proxy, when asked to access an external https site by a desktop PC, then acts as an SSL client, decrypts the traffic, does all its AV, 0-day, DLP inspections, then re-encrypts it with its own on-the-fly issued certificate before sending it along to the desktop! In the PKI world, a signing certificate is a big deal because in the wrong hands havoc could result.
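
By the way, if you’re handed a certificate file and want to confirm whether it really is a signing certificate, openssl will show those extensions. A quick sketch, where signer.crt is a placeholder filename:

$ openssl x509 -in signer.crt -noout -text | grep -A1 -E 'Basic Constraints|Key Usage'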

For instance, the user requests https://www.google.com/. What the user gets is https://www.google.com, but when the user inspects the certificate, he sees a www.google.com certificate issued by the proxy, which in turn was issued from B’s own root CA (screenshot further down below).

Results if implemented badly
You might see this in Internet Explorer for every https site you access:

The security certificate presented by this website was not issued by a trusted certificate authority.

Security certificate problems may indicate an attempt to fool you or intercept any data you send to the server.

Looking at the certificate in Chrome (the only way I know how) shows the problem:

Certificate Error
There are issues with the site’s certificate chain (net::ERR_CERT_AUTHORITY_INVALID).

And indeed, examining the certificate, it appears stand-alone. The whole chain should normally be displayed there, but there is only the end certificate, so browsers won’t trust it.

What is happening in this case is that the proxy is intercepting, but it’s not providing the intermediate CERT.

Here is a screen shot showing that the proxy is the issuer for the certificate:

What to check
In our experience this can happen if the proxy’s signer certificate is present in a keyring on the proxy, but not present in the CA Certificates. We added this CERT to the CA Certificates and it behaved much better. Here’s a view of the CA Certificates after fixing it:

And a view of the certificate chain:


2022 Update

So the guy who normally does this retired and it fell onto my shoulders – getting the new signing certificates installed. I’ll be danged if I didn’t run into the exact same problem and waste an entire afternoon on it. Until. I discovered my own blog post from five years ago! And it still works! And it’s the only place where this specific problem is discussed and a solution presented. Kudos: me! Now Bluecoat -> Broadcom, but it’s the exact same stuff under the hood.

Other tips
We got no results whatsoever when we initiated an SSL layer until we turned on Detect Protocol:

On the other hand we had a site break just from enabling Detect Protocol. Even when SSLInterception was set to action: Disabled.

We found that action: None worked better for these cases. That sets the behaviour back to what you had before you enabled Detect Protocol. The idea being that Detect Protocol invokes the SSL Proxy component of Bluecoat. The SSL Proxy can mess things up a bit for some SSL sites. Our problem was with a Java SSL site.

What about pinned certificates
Certificate pinning provides the browser an independent way to verify who was supposed to have issued the site’s certificate. This would seem to be a doomsday scenario for SSL interception, but most browsers have built in an exception: pins are not enforced for certificates which chain up to a locally installed root CA, such as B’s, so the interception still works.


Great resource for anyone doing SSL interception

There are many scenarios to consider when you have a Man In The Middle. OWS is Origin Web Server in the following. How will it behave if:

  • the OWS CERT is expired
  • the OWS CERT is self-signed
  • the OWS CERT is revoked
  • the OWS only offers weak ciphers
  • the OWS CERT is from a CA not trusted by the browser
  • the OWS CERT contains the wrong common name
  • the OWS CERT lacks the intermediate CERT
  • the OWS CERT is pinned
  • etc.

Get the idea? Lots of things to consider here – the scenarios, how your SSL intercepting device actually behaves, and how you want your SSL interception to behave for that scenario.

A great resource, where they’ve done the job for you of building certificates with almost every defect you can think of, is badssl.com.
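
For instance, here’s a hedged spot check with curl; each of these badssl hostnames (valid as of this writing) serves a certificate that is broken in a different way, so each request should fail verification:

$ for h in expired self-signed wrong.host untrusted-root; do curl -sS -o /dev/null https://$h.badssl.com/ || echo "$h rejected, as expected"; done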

Regrets
Man I wish openssl supported usage through proxy, in particular openssl s_client. For years it didn’t, so examining certificates with the various browsers was a pain, and I don’t fully trust them. For me openssl is truth. The good news: openssl 1.1.0 and later do support this, via the -proxy switch to s_client.
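
If your openssl is new enough, a sketch like this lets you examine the certificate your proxy serves up – proxy.example.com:8080 is a placeholder for your explicit proxy:

$ echo | openssl s_client -proxy proxy.example.com:8080 -connect www.google.com:443 2>/dev/null | openssl x509 -noout -subject -issuer

If interception is working you should see your proxy’s signing certificate as the issuer.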


References and related

All different kinds of faulty certificate scenarios to test with: badssl.com

You can now get a “real” certificate for free! I’ve used them myself several times: Let’s Encrypt

My article concerning Let’s Encrypt usage: Saving money using Let’s Encrypt

An article I wrote explaining ciphers.

Some openssl commands I’ve found useful: My favorite openssl commands.

Somehow related to all this is a web site that guarantees to never use SSL, which can be useful when you are using a guest WiFi requiring sign-on and want to hit a “safe” site (a non-SSL site can be hard to find these days): http://neverssl.com/


Fail: What I’m working on now: Poor man’s version of Speedtest.net

Intro
Now that I have a dual-band router I wanted to run some tests to see if 5 GHz is really faster and more stable than 2.4 GHz, as my intuition was telling me. But my only 5 GHz device where I had a chance to measure was my Amazon Fire HD tablet, and wouldn’t you know that it’s incapable of running speedtest (speedtest.net). The web site forced it to a mobile app version, but Amazon’s app store, being limited in its offerings, doesn’t have a speedtest app!

Anyway speedtest.net runs ads pretty aggressively, which I don’t like.

So I decided to try to write my own.

This turned out to be very hard to do. It turns out I suck at Javascript.

Some details
Normally I show all my false starts in the hopes that others can learn from my mistakes, but my Javascript blunders are just too painful and I never did sort them out. When I used javascript methods to set page timers I got completely inconsistent and hence unreliable results. So I settled on this simplistic PHP approach to gauge download speed:

<?php
// Poor man's speed test - DrJ 3/2017
// The weakness of this method is that it is a single stream.
echo "Date: " . date('h:i:s') . "<br>\n";
$starttime = microtime(true);
$string = '';
for ($x = 0; $x < 750000; $x++) {
  $string .= mt_rand(1000000,9999999);
}
 
// ship ~5 MB of random digits to the client inside an HTML comment
echo "<!-- $string -->\n";
$endtime = microtime(true);
$timediff = $endtime - $starttime;
echo "Page load time: " . $timediff . " s<br>\n";
// 1.04 is the observed overhead of IP + tcp. Try ip -s link show eth0 before and after running curl.
$dataset = 1.04*(strlen($string) + 200)/1000000.0;
$mbps = $dataset*8.0/$timediff;
echo "Mbytes downloaded in test: " . $dataset . " Mbytes<br>\n";
echo "Bandwidth: " . $mbps . " mbps<br>\n";
?>
 

I called the file index.php, put it on my server in a directory of my choosing – let’s say downloadtimer – and ran it. The results look like this:

Date: 07:30:33
Page load time: 6.9666068553925 s
Mbytes downloaded in test: 5.460208 Mbytes
Bandwidth: 6.270149142432 mbps
 
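You can also run the test from a command line rather than a browser – yourserver.example.com is a placeholder for wherever you put the file:

$ curl -s https://yourserver.example.com/downloadtimer/ | grep -E 'Page load|Bandwidth'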

Conclusion
I pretty much gave up. Use fast.com instead. It’s much better than speedtest.net. There is also a mobile app for fast.com.

References and related

I just learned there is a linux script (written in python) which runs speedtest! That is awesome. It is speedtest-cli: https://pypi.org/project/speedtest-cli/

It gives you the download speed and the upload speed, and it picks the closest server. It is really the full monty.
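
A quick usage sketch – the numbers shown are made up for illustration:

$ sudo pip install speedtest-cli
$ speedtest-cli --simple
Ping: 23.419 ms
Download: 87.25 Mbit/s
Upload: 11.93 Mbit/s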

Everyone uses speedtest.net.

I just learned about speedcheck (speedcheck dot org). Initially it seemed promising, but too often it simply doesn’t work. And its advertising is just about as aggressive as Ookla’s speedtest.net that everyone uses.

Meanwhile a friend pointed out a couple of superior speed test web sites. AT&T’s Speedtest is a good choice. There are few if any ads, it runs on my Fire HD tablet, and it’s fun to watch: speedtest.att.com

This one seems only slightly less aggressive than speedtest.net: Internet Frog. Internet Frog works on my tablet but with limited functionality and a non-flashy interface.

This site is simplest of all so probably the best: fast.com. It’s run by Netflix who have an obvious interest in helping users establish what their download speed is. It only displays download speed by default, but just click Show more info and you will get ping time and upload speed as well. Also, they actually explain in a transparent way what the heck they are doing: https://medium.com/netflix-techblog/building-fast-com-4857fe0f8adb. And most importantly, they don’t shower you with ads.

Want a large file to experiment with downloading? Tele2’s speedtest service offers a variety through an FTP server: ftp://speedtest.tele2.net/ . On linux I use a command like this:

$ curl ftp://speedtest.tele2.net/50MB.zip > /dev/null

 % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 50.0M  100 50.0M    0     0  6099k      0  0:00:08  0:00:08 --:--:-- 11.3M

For faster speeds you should download the larger files.

I have also just discovered this site with large files you can download with HTTP:

https://www.thinkbroadband.com/download

For instance here is a direct link to their 100MB file which is nice to work with: http://ipv4.download.thinkbroadband.com/100MB.zip


Bluecoat ProxySG and DNS using edns seem incompatible

Intro
Imagine your DNS server had this behaviour when queried using dig:

$ dig drjohnstechtalk.com @10.201.145.30

; <<>> DiG 9.9.2-P2 <<>> drjohnstechtalk.com @10.201.145.30
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: FORMERR, id: 48905
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1
 
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;drjohnstechtalk.com.           IN      A
 
;; Query time: 1 msec
;; SERVER: 10.201.145.30#53(10.201.145.30)
;; WHEN: Fri Feb 24 12:16:42 2017
;; MSG SIZE  rcvd: 48

That would be pretty disturbing, right? The only way to get dig to behave is to turn off edns like this:

$ dig +noedns drjohnstechtalk.com @10.201.145.30

; <<>> DiG 9.9.2-P2 <<>> +noedns drjohnstechtalk.com @10.201.145.30
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 31299
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
 
;; QUESTION SECTION:
;drjohnstechtalk.com.           IN      A
 
;; ANSWER SECTION:
drjohnstechtalk.com.    3277    IN      A       50.17.188.196
 
;; Query time: 3 msec
;; SERVER: 10.201.145.30#53(10.201.145.30)
;; WHEN: Fri Feb 24 12:17:00 2017
;; MSG SIZE  rcvd: 53

Nslookup works. But who uses nslookup anyway?

Furthermore, imagine that the DNS client and server are on the same subnet, so there is no firewall intermediating their traffic. So we know we can’t blame a firewall for cutting off large DNS packets, unlike the suggestions made in the references section.
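
If you want to convince yourself the FORMERR really comes straight from the server, you can watch the exchange on the wire – eth0 is an assumption about your interface name:

$ sudo tcpdump -nn -i eth0 host 10.201.145.30 and port 53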

Well, this is exactly the situation in a large company where I consult. The DNS server is unusual: a Bluecoat ProxySG, which can conveniently combine replies from nameservers from two different namespaces.

There does not seem to be an option to handle edns queries correctly on a Bluecoat device.

The client is running SLES version 11. The real question is how will applications behave? Which type of query will they make?

Bluecoat Response
Bluecoat does not support eDNS and gives a response permitted by RFC2671. RFC2671 also encourages clients to account for error responses and drop the use of eDNS in a retry.

References and related
EDNS: What is it all about? is a really good explanation of edns and how it came about, how it’s supposed to work, etc.
This post suggests some scenarios where edns may not work, though it does not address the Bluecoat issue: http://blog.fpweb.net/strange-dns-issues-better-check-out-edns/#.WLBmw3dvDkk
RFC 2671


Use Raspberry Pi to explore mDNS

Intro
I am confounded by the Bonjour field on my D-Link DCS-931L IP webcam. I should be able to use it to set my desired hostname, but it doesn’t take. Why?

The details
Having a Raspberry Pi on the same network I realized I could at least learn definitively whether or not my new name was being taken, or what the old name was.

You install avahi-discover to do that:

$ sudo apt-get install avahi-discover

Those who follow my blog will realize I am big on Linux command-line, not so much on GUIs. I mention it because unfortunately avahi-discover only works in the GUI. Not having a console I actually had to fire up vncserver and use my vncviewer on my PC! Then I could launch avahi-discover from a terminal window running on the GUI.

The extra fuss was just a few steps anyway, and well worth it.
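
For the record, there is also a command-line browser in the avahi-utils package which may spare you the GUI – I offer it as an untested alternative:

$ sudo apt-get install avahi-utils
$ avahi-browse -art
# -a browses all service types, -r resolves them to hostnames and addresses, -t terminates when done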

avahi-discover broke down my home network and all the discovered devices in a very orderly fashion, e.g., the webcam appeared under web servers.

And what did I learn? Indeed, my name had not “taken” for some reason. So the system-supplied name was there instead. For the record that is

dcs931le1a6.local

And testing it:

$ ping dcs931le1a6.local

did indeed show me that it was resolvable by that name from my local network. My PC could reach it by that name as well. I tried to name it DCS-931L-BALL, and I know someone else who did this successfully, and I had even done it in the past, but it was just not taking this time.

References and related
mDNS is multicast DNS. It’s designed for home networks. It’s pretty common from what I see, yet largely unknown, since IT people do not encounter it in enterprise environments. As usual Wikipedia has a good article on it.
Superimposing crosshairs on a webcam image.


Switch home router to DD-WRT: FAIL

Intro
I am having problems with my home router, a Cisco E1200, especially with the wireless connections. I thought it might be interesting to try to run it using the open source routing code DD-WRT. Since I am a Linux geek DD-WRT had some attraction for me and I figured it really couldn’t make matters worse. Boy was I wrong.

The details
Dropped connections, slow response, degradation over time – that is all par for the course for my E-1200. Again, mostly affecting WiFi.

Starting from this bare-bones installation write-up, http://www.dd-wrt.com/wiki/index.php/Linksys_E1200v2, I did indeed manage to upgrade the firmware to DD-WRT.

Things they don’t tell you that you probably want to know

Initial login password is blank, and the username is root, not admin.

I wanted to have the SSID I had been using preserved with the same password as well, so that, ideally, I would not have to revisit my devices to get them to learn about a new SSID setup. This was especially important to me due to a wireless Canon 3600 series printer which is particularly difficult to set up. You do it once, fumbling around until it works, and hope to never have to do it again.

And…yes, it auto-created that SSID, and I saw clients logged into it, so I guess it preserved the password as well. I don’t really know what characteristics a client uses to decide this is the same SSID as before. The MAC address may be part of that decision, but since this was the same hardware, the MAC address was preserved as well.

The results
My hard-wired connection worked pretty well. But WiFi, if you can believe it, was even worse than before! My office Dell PC just would not pick up an IP address, although it did connect. When you run ipconfig and only see an address beginning with 169.254 you are in trouble, and that’s what I had. My Dell 2-in-1 laptop could connect over WiFi, but sometimes it worked, sometimes not – worse than before.

And although some of the Linuxy type things looked somewhat familiar, like bridging with a br0 interface, I didn’t want to invest a lot of time debugging my issues. And the web GUI was a little slow.

ssh was disabled by default and I had no idea how to turn it on. So I didn’t have the usual comfort of a Linux command line in working with it.

Issuing commands via the web GUI was just too painfully slow.

Also, come to think of it, it did not grab an IP over its WAN connection. Now I have an unusual ISP that permits me two valid Internet addresses. My Cisco Meraki takes the other address. But rebooting cable modem, Cisco router, etc in any combination just did not permit me to get that 2nd IP address I had been using. Eventually I knocked my Meraki offline. I wasn’t expecting that as it normally runs flawlessly and I hadn’t touched it.

So needless to say I was pretty disgusted and gave up. Question is, could I go back to the Cisco firmware??

Back to Cisco’s embrace
Well it turns out you can go back. Cisco meanwhile had released a newer version of firmware for it and made it available for download over the Internet.

I got the initial Cisco-looking page but had a really tough time logging in! None of the default or previous username/password combinations worked! root/(blank), admin/admin, (blank)/root, admin/1234, admin/previous_password – none of it let me in! I tried a reset. No go. I read different directions on how to reset. Someone mentioned a 30/30/30 rule. No go. (I guess that was 30 seconds reset, 30 seconds wait, 30 seconds without power.) The more official recommendation seemed to be a 10-second reset. Eventually one of those resets did work – I think the 10-second one – and the default admin/admin got me in. That was a relief!

I figured if my SSID carried over to DD-WRT, surely it would carry over going back to Cisco. But, strangely, it did not. The name was similar, but not the same. Old name: Cisco76538. New: Linksys76538. No way to change it. Thanks Cisco, that was really helpful. CORRECTION. You know how you get used to certain settings? I had WPS enabled. For some reason you enable it in two places. Well, the one place, if you turn it off, allows you to change the SSID! But I need WPS (WiFi protected setup) for some basic Canon printers I have. So I don’t think this is an out.

So I had to visit all my clients one-by-one to re-enter the WiFi info. I still haven’t gotten to that one printer though! And my Wink Hub was no fun to re-configure either.

And performance is inconsistent once again, but much better than under DD-WRT. It’s too early to tell if it is an improvement over the older firmware.

And I gave up on using a 2nd IP address at home. I just channel everything through the Meraki.

Some more thoughts on why the office computer did not get an IP address though it was connected to the DD-WRT network
I’ve seen this problem just this week with a different DHCP server. For the record, a 169.254… address is a self-assigned link-local (APIPA) address, which a client falls back to when it gets no response at all from a DHCP server. My guess is that the DHCP server already had my MAC address in its table and decided I didn’t need another IP, or something like that. But things didn’t seem to get any better after a reboot of the router. So I don’t know.
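
On a linux client, one way to rule out a stale lease is to force a fresh DHCP exchange – eth0 is an assumption about the interface name:

$ sudo dhclient -r eth0     # release the current lease
$ sudo dhclient -v eth0     # request a new one, verbosely showing the DISCOVER/OFFER/REQUEST/ACK exchange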

Some more thoughts on why WiFi performs better through Meraki
The Cisco E1200 is a cheap, 2.4 GHz-band router. It can be set to auto-hop if one of the channels gets interference – that’s one of the WPS buttons. I’m beginning to suspect that is what is happening, as I do see the neighbors’ SSIDs. Meraki is dual band, 5 GHz + 2.4 GHz, and has the intelligence to use both as needed. I think it has MIMO as well. So it’s almost always going to do better than a cheap home router.

What I ended up doing
In the end I bought a dual-band router: a Linksys WRT1200AC for $91 from Amazon. Turns out my Dell computers do not support 5 GHz. Who knew?

Conclusion
Upgrade firmware to DD-WRT? Maybe with enough effort I could have gotten DD-WRT to work. It allows more control than Cisco’s firmware. But with the minimal configuration I was willing to do it was basically useless – very inconsistent and just not working with some devices.

References and related
Very brief DD-WRT install instructions for an E1200: http://www.dd-wrt.com/wiki/index.php/Linksys_E1200v2
Official E1200 download site: http://www.linksys.com/us/support-article?articleNum=148523


The IT Detective Agency: the case of the incompatible sftp client

Intro
I was asked for assistance with this sftp problem:

$ sftp <user@host>

DH_GEX group out of range: 1536 !< 1024 !< 8192
Couldn't read packet: Connection reset by peer

We actually spoke with the operator of the sftp server, who said their server is not compatible with openSSH version 6.2.

The details
You can see your version of openssh and a lot more by running it again with the -v flag:

$ sftp -v drj@securefile.drj.com

OpenSSH_6.2p2, OpenSSL 0.9.8j-fips 07 Jan 2009
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 20: Applying options for *
debug1: Connecting to securefile.drj.com [50.17.188.196] port 22.
debug1: Connection established.
debug1: identity file /home/drj/.ssh/id_rsa type -1
debug1: identity file /home/drj/.ssh/id_dsa type -1
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_6.2
debug1: Remote protocol version 2.0, remote software version SSHD
debug1: no match: SSHD
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: server->client aes128-cbc hmac-sha1 none
debug1: kex: client->server aes128-cbc hmac-sha1 none
debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1536<7680<8192) sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
DH_GEX group out of range: 1536 !< 1024 !< 8192
Couldn't read packet: Connection reset by peer

At the same time, I could successfully connect on an older system – SLES 11 SP2 – whose detailed connection looked like this:

Connecting to securefile.drj.com...
OpenSSH_5.1p1, OpenSSL 0.9.8j-fips 07 Jan 2009
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug1: Connecting to securefile.drj.com [50.17.188.196] port 22.
debug1: Connection established.
debug1: identity file /home/drj/.ssh/id_rsa type -1
debug1: identity file /home/drj/.ssh/id_dsa type -1
debug1: Remote protocol version 2.0, remote software version SSHD
debug1: no match: SSHD
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_5.1
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: server->client aes128-cbc hmac-sha1 none
debug1: kex: client->server aes128-cbc hmac-sha1 none
debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<2048<8192) sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
The authenticity of host 'securefile.drj.com (50.17.188.196)' can't be established.
RSA key fingerprint is 13:bb:75:21:86:c0:d6:90:3d:5c:81:32:4c:7e:73:6b.
Are you sure you want to continue connecting (yes/no)?

See that version? It is only openSSH version 5.1. It seems to permit a shorter Diffie-Hellman group exchange key length than does the newer version of openssh.

My solution
The standard advice on duckduckgo is to set this parameter in your $HOME/.ssh/config:

KexAlgorithms diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1

But this did not work for my case. And I didn’t want to change a setting that would weaken the security of all my other sftp connections. So I settled on using an additional command-line argument which is an openSSH parameter when running sftp:

$ sftp -o KexAlgorithms=diffie-hellman-group14-sha1 <host>

And this worked!

And this works as well:

$ sftp -o KexAlgorithms=diffie-hellman-group1-sha1 <host>
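
And if you only ever hit the one problem server, a scoped entry in $HOME/.ssh/config avoids weakening your other connections. A sketch, using the hostname from this example:

$ cat >> ~/.ssh/config <<'EOF'
Host securefile.drj.com
    KexAlgorithms diffie-hellman-group14-sha1
EOF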

Why the change?
The minimum key length was changed from 1024 to 1536 and later to 2048 to avoid the logjam vulnerability.

Conclusion
Not all versions of openSSH are equal. In particular openSSH v 6.2 may be incompatible with older sftp servers. I showed a way to make a newer sftp work with an older server. Case closed.

References and related
Novell’s discussion of the issue is here: https://www.novell.com/support/kb/doc.php?id=7016904
Here’s some more information on the logjam vulnerability: https://weakdh.org/


drjohnstechtalk now uses HTTP Strict Transport Security, HSTS

Intro
I was reading about a kind of amazingly thorough exploit which could be done using a Raspberry Pi Zero. Physical access is required, but the scope of what this guy has figured out and put together is really impressive.

Reading the description, I decided it was a good exercise in making sure I understand the underlying technologies. One was admittedly something I hadn’t seen before: DNS re-binding. That got me to reading about DNS re-binding, and that got me looking at defenses against DNS rebinding.

HSTS to the rescue
Since in DNS rebinding you may have either a MITM (man in the middle) or a web site impersonated by a hacker, one defense against it is to use HTTPS. (The hacker will not have access to a web site’s private key and therefore has no way to fake a certificate). But what they can do is redirect users from HTTPS to HTTP, where no certificate is required.

HSTS is designed to defeat that move. Upon first visit the user’s browser receives a response header (not a cookie, by the way) which says this site should only be accessed via https. On subsequent visits the user’s browser itself then enforces that the site must be accessed via HTTPS, and complains to the user otherwise.

drjohnstechtalk update
Two years ago I switched the default way I run my blog web site from HTTP to HTTPS due to the encryption offered by HTTPS, and the fact that search engines penalize HTTP sites.

It seems a natural progression in this age of increasing security awareness to up the ante and now also run HSTS. For me this was easy. Since I run my own apache server I simply needed to add the appropriate HTTP Response header to my server responses.

This is done within the virtual server section of the apache configuration like so:

# Guarantee HTTPS for 1/2 Year including Sub Domains  - DrJ 11/22/16
# see https://itigloo.com/security/how-to-configure-http-strict-transport-security-hsts-on-apache-nginx/
    Header always set Strict-Transport-Security "max-age=15811200; includeSubDomains; preload"

Of course this requires that the apache mod_headers is included.
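
On Debian-style apache packaging – an assumption about your distribution; other layouts load the module in httpd.conf – that’s one command:

$ sudo a2enmod headers
$ sudo service apache2 restart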

Results
I test it from a linux server like this:

$ curl -i -k -s https://drjohnstechtalk.com/blog/ | head -15

HTTP/1.1 200 OK
Date: Tue, 22 Nov 2016 20:30:56 GMT
Server: Apache/2
Strict-Transport-Security: max-age=15811200; includeSubDomains
Vary: Cookie,Accept-Encoding
X-Powered-By: PHP/5.4.43
X-Pingback: https://drjohnstechtalk.com/blog/xmlrpc.php
Last-Modified: Tue, 22 Nov 2016 20:30:58 GMT
Transfer-Encoding: chunked
Content-Type: text/html; charset=UTF-8
 
<!DOCTYPE html>
<html lang="en-US">
<head>
<meta charset="UTF-8" />

See that new header, Strict-Transport-Security: max-age=15811200; includeSubDomains? That’s the result of what we did. But unless we also put that preload at the end, as in the configuration above, it doesn’t verify with the HSTS preload checker!

(Screenshot: the HSTS preload verification error.)

References and related
drjohnstechtalk is now encrypted – blog posting from 2014
This site has a good description of the requirements for a proper HSTS implementation, and I see that I missed something! https://hstspreload.appspot.com/
You can’t run https without a certificate. I soon will be using the free certificates offered by Let’s Encrypt. Here’s my write-up.


Roll your own dynamic DNS update service

Intro
I know my old Cisco router only has built-in support for two dynamic DNS services, dyndns.org and TZO.com. Nowadays you have to pay for those, if they even work (the web site domain names seem to have changed, but perhaps they still support the old domain names. Or perhaps not!). Maybe this could be fixed by a firmware upgrade (to hopefully get more choices, and hopefully a free one), or a newer router, or running DD-WRT. I didn’t do any of those things. Being a network person at heart, I wrote my own. I found the samples out there on the Internet needed some updating, so I am sharing my recipe. I didn’t think it was too hard to pull off.

What I used
– GoDaddy DNS hosting (basically any will do)
– my Amazon AWS virtual server running CentOS, where I have sudo access
– my home Raspberry Pi
– a tiny bit of php programming
– my networking skills for debugging

As I have prior experience with all these items this project was right up my alley.

Delegating our DDNS domain from GoDaddy
Just create a nameserver record for a subdomain of your domain – say raspi in drj.com – which you delegate to your AWS server. Following the example, the subdomain would be raspi.drj.com, whose nameserver is drj.com.
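
Once the delegation is in place you can verify it from anywhere. A quick check using a public resolver:

$ dig +short NS raspi.drj.com @8.8.8.8
drj.com.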


DNS Setup on an Amazon AWS server

/etc/named.conf

//
// named.conf
//
// Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
// server as a caching only nameserver (as a localhost DNS resolver only).
//
// See /usr/share/doc/bind*/sample/ for example named configuration files.
//
 
options {
//      listen-on port 53 { 127.0.0.1; };
//      listen-on port 53;
        listen-on-v6 port 53 { ::1; };
        directory       "/var/named";
        dump-file       "/var/named/data/cache_dump.db";
        statistics-file "/var/named/data/named_stats.txt";
        memstatistics-file "/var/named/data/named_mem_stats.txt";
        allow-query     { any; };
        recursion no;
 
        dnssec-enable yes;
        dnssec-validation yes;
        dnssec-lookaside auto;
 
        /* Path to ISC DLV key */
        bindkeys-file "/etc/named.iscdlv.key";
 
        managed-keys-directory "/var/named/dynamic";
};
 
logging {
        channel default_debug {
                file "data/named.run";
                severity dynamic;
        };
};
 
zone "." IN {
        type hint;
        file "named.ca";
};
 
include "/etc/named.rfc1912.zones";
include "/var/named/dynamic.conf";
include "/etc/named.root.key";

/var/named/dynamic.conf

zone "raspi.drj.com" {
  type master;
  file "/var/named/db.raspi.drj.com";
// designed to work with nsupdate -l used on same system - DrJ 10/2016
// /var/run/named/session.key
  update-policy local;
};

/var/named/db.raspi.drj.com

$ORIGIN .
$TTL 1800       ; 30 minutes
raspi.drj.com      IN SOA  drj.com. postmaster.drj.com. (
                                2016092812 ; serial
                                1700       ; refresh (28 minutes 20 seconds)
                                1700       ; retry (28 minutes 20 seconds)
                                1209600    ; expire (2 weeks)
                                600        ; minimum (10 minutes)
                                )
                        NS      drj.com.
$TTL 3600       ; 1 hour
                        A       125.125.73.145

Named re-starting program
Want to make sure your named restarts if it happens to die? nanny.pl is a good, simple monitor to do that. Here is the version I use on my server. Note the customized variables towards the top.

#!/usr/bin/perl
#
# Copyright (C) 2004, 2007, 2012  Internet Systems Consortium, Inc. ("ISC")
# Copyright (C) 2000, 2001  Internet Software Consortium.
#
# Permission to use, copy, modify, and/or distribute this software for any
# purpose with or without fee is hereby granted, provided that the above
# copyright notice and this permission notice appear in all copies.
#
# THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH
# REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY
# AND FITNESS.  IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT,
# INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM
# LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE
# OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
# PERFORMANCE OF THIS SOFTWARE.
 
# $Id: nanny.pl,v 1.11 2007/06/19 23:47:07 tbox Exp $
 
# A simple nanny to make sure named stays running.
 
$pid_file_location = '/var/run/named/named.pid';
$nameserver_location = 'localhost';
$dig_program = 'dig';
$named_program =  '/usr/sbin/named -u named';
 
fork() && exit();
 
for (;;) {
        $pid = 0;
        open(FILE, $pid_file_location) || goto restart;
        $pid = <FILE>;
        close(FILE);
        chomp($pid);
 
        $res = kill 0, $pid;
 
        goto restart if ($res == 0);
 
        $dig_command =
               "$dig_program +short . \@$nameserver_location > /dev/null";
        $return = system($dig_command);
        goto restart if ($return == 9);
 
        sleep 30;
        next;
 
 restart:
        if ($pid != 0) {
                kill 15, $pid;
                sleep 30;
        }
        system ($named_program);
        sleep 120;
}

The PHP updating program myip-update.php

<?php
# DrJ: lifted from http://pablohoffman.com/dynamic-dns-updates-with-a-simple-php-script
# but with some security improvements
# 10/2016
# PHP script for very simple dynamic DNS updates
#
# this script was published in http://pablohoffman.com/articles and
# released to the public domain by Pablo Hoffman on 27 Aug 2006
 
# CONFIGURATION BEGINS -------------------------------------------------------
# define password here
$mysecret = 'myBigFatsEcreT';
# CONFIGURATION ENDS ---------------------------------------------------------
 
 
$ip = $_SERVER['REMOTE_ADDR'];
$host = $_GET['host'];
$secret = $_POST['secret'];
$zone = $_GET['zone'];
$tmpfile = trim(`mktemp /tmp/nsupdate.XXXXXX`);
 
# also reject anything but DNS-safe characters, since $host and $zone get interpolated into shell commands
if ((!$host) or (!$zone) or (!($mysecret == $secret)) or !preg_match('/^[A-Za-z0-9.-]+$/', "$host.$zone")) {
    echo "FAILED";
    unlink($tmpfile);
    exit;
}
 
$oldip = trim(`dig +short $host.$zone @localhost`);
if ($ip == $oldip) {
    echo "UNCHANGED. ip: $ip\n";
    unlink($tmpfile);
    exit;
}
 
echo "$ip - $oldip";
 
$nsucmd = "update delete $host.$zone A
update add $host.$zone 3600 A $ip
send
";
 
$fp = fopen($tmpfile, 'w');
fwrite($fp, $nsucmd);
fclose($fp);
`sudo nsupdate -l $tmpfile`;
unlink($tmpfile);
echo "OK ";
echo `date`;
?>

In the above file I added the “sudo” after a while. See the explanation further down below.

Raspberry Pi requirements
I’ve assumed you can run your Pi 24 x 7, constantly and consistently on your network.

Crontab entry on the Raspberry Pi
Edit the crontab file, so the Pi periodically checks your IP and updates external DNS if it has changed, by doing this:

$ crontab -e
and adding the line below:

# my own method of dynamic update - DrJ 10/2016
0,10,20,30,40,50 * * * * /usr/bin/curl -s -k -d 'secret=myBigFatsEcreT' 'https://drj.com/myip-update.php?host=raspi&zone=drj.com' >> /tmp/ddns 2>&1

A few highlights
Note that I’ve switched to use of nsupdate -l on the local server. This is more secure than the previously published solutions, which suggested allowing updates from localhost. As far as I can tell localhost updates can be spoofed and so should be considered insecure in a modern infrastructure. I learned a lot by running nsupdate -D -l on my AWS server and observing what happens.
And note that I changed the location of the secret. The old solution had the secret embedded in the URL in a GET statement, which means it would also be embedded in every single request in the web server’s access file. That’s not a good idea. I switched it to a POSTed variable so that it doesn’t show up in the web server’s access file. This is done with the -d switch of curl.

Contents of temporary file
Here are example contents. This is useful when you’re trying to run nsupdate from the command line.

update delete raspi.drj.com A
update add raspi.drj.com 3600 A 51.32.108.37
send
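
Running that file by hand against the local server, then querying the result, makes a quick sanity check (substitute whatever suffix mktemp produced):

$ sudo nsupdate -l /tmp/nsupdate.XXXXXX
$ dig +short raspi.drj.com @localhost
51.32.108.37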


Permissions problems

If you see something like this on your DNS server:

$ ll /var/run/named

total 8
-rw-r--r-- 1 named www-data   6 Nov  6 03:15 named.pid
-rw------- 1 named www-data 102 Oct 24 09:42 session.key

your attempt to run nsupdate by your web server will be foiled and produce something like this:

$ /usr/bin/nsupdate -l /tmp/nsupdate.LInUmo

06-Nov-2016 17:14:14.780 none:0: open: /var/run/named/session.key: permission denied
can't read key from /var/run/named/session.key: permission denied

The solution may be to permit group read permission:

$ cd /var/run/named; sudo chmod g+r session.key

and make the group owner of the file your webserver user ID (which I’ve already done here). I’m still working this part out…

That approach doesn’t seem to “stick,” so I came up with this other approach. Put your web server user in sudoers to allow it to run nsupdate (my web server user is www-data for these examples):

Cmnd_Alias     NSUPDATE = /usr/bin/nsupdate
# allow web server to run nsupdate
www-data ALL=(root) NOPASSWD: NSUPDATE

But you may get the dreaded

sudo: sorry, you must have a tty to run sudo

error – which you will only see if you manage to figure out how to turn on debugging.

So if your sudoers has a line like this:

Defaults    requiretty

you will need lines like this:

# turn off tty requirement only for the www-data user
Defaults:www-data !requiretty

Debugging
Of course for debugging I commented out the unlink line in the PHP update file and ran
nsupdate -l /tmp/nsupdate.xxxxx
by hand as user www-data.

During some of the errors I worked through that wasn’t verbose enough so I added debugging arguments:

$ nsupdate -D -d -l /tmp/nsupdate.xxxxx

When that began to work, yet it still wasn’t working when called via the webserver, I ran the above command from within PHP, recording the output to a file:

...
`sudo nsupdate -d -D -l $tmpfile > /tmp/nsupdate-debug 2>&1`

That turned out to be extremely informative.

Conclusion
We have shown how to combine a bunch of simple networking tools to create your own DDNS service. The key elements are a Raspberry Pi and your own virtual server at Amazon AWS. We have built upon previous published solutions to this problem and made them more secure in light of the growing sophistication of the bad guys. Let me know if there is interest in an inexpensive commercial service.

References and related

nanny.pl write-up: https://www.safaribooksonline.com/library/view/dns-bind/0596004109/ch05s09.html


Who’s using the UK Ministry of Defence’s IP addresses?

Intro
When I first came upon a spear phishing email a few months ago which originated from the UK’s Ministry of Defence, I thought that was pretty queer. Like, how ironic that an invoice scam is coming from a Defence Ministry. Do they have a bad actor? Are we on the cusp of cracking some big international cybertheft? Do we tell them?

Then their address space came up yet again just a few days ago, this time in a fairly different context. Microsoft’s Exchange Online service hosted in the UK cannot deliver email to a particular domain:

8/5/2016 3:32:35 PM - Server at e********s.net (25.152.12.27) returned '450 4.4.312 DNS query failed(ServerFailure)'

I obscured the domain a bit. But it’s an everyday domain which every DNS server I’ve tested resolves just fine. But Microsoft doesn’t see it that way. Several test messages have shown non-delivery reports using these other addresses as well following the “Server at…”: 25.152.8.27, 25.152.16.27.

The Register sheds the most light – though it still lacks critical details – on what might have happened to the UK Ministry of Defence’s IPv4 address space, namely, that some was sold. Here’s the article.

How do you show that all these 25.152.x.x addresses belong to the Ministry of Defence? You use RIPE: http://ripe.net/ and do a search. It shows that 25.0.0.0/8 belongs to them. But according to the article in The Register this is no longer true as of late last year.
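
You can do the same lookup from the command line; the output (not shown) names the registrant of the block:

$ whois -h whois.ripe.net 25.152.12.27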

Why is Microsoft using these IP addresses? No idea. But something I read got me suspecting that some outfits decided to use the 25/8 address space as though it were private IP space!

References and related