Categories
Network Technologies Security

LDAP authentication on the F5 BigIP without Access Policy Manager

Intro
I recently received revised guidelines for DMZ best practices which mentioned a requirement to implement application-independent authentication using the F5 web application firewall. I had never heard of that capability and didn't think it was possible without buying the very expensive APM license. They insisted it was possible and even easy to do. So I investigated and found they were right!

The details
This is a feature added around version 11.4.

On the F5, go to Local Traffic|Profiles|Authentication|Configurations and create a new configuration. Here you put in the essential LDAP information and give these settings a name such as myLDAP. I needed to set Login Attribute to cn. Then go to …Authentication|Profiles and create a new profile. Set the parent profile to LDAP and associate the configuration myLDAP with it. Rule can be _sys_auth_ldap.

In the virtual server Properties tab look for the section Authentication Profiles. Pick the profile you created.

That’s it! Your virtual server now has application-independent authentication using your preferred LDAP source.

So far I have only tested against an LDAP source that doesn't require an LDAP bind. But I did successfully test against an ldaps source (which runs on port 636 and encrypts the communication using SSL). I got that to work by setting SSL to Enabled and essentially taking the other SSL-related default values.
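
Before blaming the F5, it can also help to sanity-check the LDAP source from the command line with the standard openldap client. This is only a sketch; the host, search base and user below are made-up placeholders:

$ ldapsearch -x -H ldaps://ldap.example.com:636 -b "ou=people,dc=example,dc=com" "(cn=jsmith)" cn

If that returns the expected entry, the search base and Login Attribute you gave the F5 are at least plausible (-x requests a simple – here anonymous – bind, -H takes the server URI).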

Conclusion
We show how to implement application-independent authentication on an F5 BigIP which only has the local traffic manager (LTM) license. We used an LDAP directory for the authentication source. I believe a certificate mechanism would also have been possible. As it happens our LDAP source was not an Active Directory (AD) tree, but I believe it would be possible to use that as well. We also did not limit access to any specific group, but that is probably possible as well.

Categories
Admin Linux Security Web Site Technologies

The IT Detective Agency: the vanishing certificate error

Intro
I was confronted with a web site certificate error. A user was reluctant – correctly – to proceed to an internal web site because he saw a message to the effect that there was a problem with the web site's security certificate:

I tried it myself with IE and got the same thing.
Switching to Chrome, I saw a different error:

I wouldn’t bother to document this one except for a twist: the certificate error went away in IE when you clicked through to the login page.

Furthermore, when I examined the certificate with a tool I trust, openssl, it showed the date was not expired.

So what’s going on there?

The details
First thing I dug into was Chrome. I found this particular error can occur if you have an internal certificate issued with a valid common name, but without a Subject Alternative Name. My openssl examination confirmed this was indeed the case for this certificate.
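
A quick way to test any site for this condition is to look for the Subject Alternative Name extension directly. Something along these lines, where the hostname is just a placeholder:

$ echo | openssl s_client -connect internal-site.example.com:443 2>/dev/null | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"

If nothing comes back, the certificate has no SAN and current versions of Chrome will flag it.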

So I decided the Chrome error was a red herring, and confirmed this after checking other internal web sites, which all suffered from the same problem.

But that still leaves the IE error unexplained.

As I mentioned in a previous post, I created a shortcut bash function I call examinecert which combines several openssl commands:

examinecert () { echo|openssl s_client -servername "$@" -connect "$@":443|openssl x509 -text|more; }

Use it like this:

$ examinecert drjohnstechtalk.com

Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            04:17:21:b7:12:94:3a:fa:fd:a8:f3:f8:5e:2e:e4:52:35:71
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=US, O=Let's Encrypt, CN=Let's Encrypt Authority X3
        Validity
            Not Before: Apr  4 08:34:56 2018 GMT
            Not After : Jul  3 08:34:56 2018 GMT
        Subject: CN=drjohnstechtalk.com
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    00:d3:50:98:6d:72:03:b2:e4:01:3f:44:01:3d:eb:
                    ff:fc:68:7d:51:a4:09:90:48:3c:be:43:88:d7:ba:
                    ...
        X509v3 extensions:
                 ...
            X509v3 Subject Alternative Name:
                DNS:drjohnstechtalk.com
                ...

I tried to show a friend the error. I could no longer get IE to show a certificate error. So my friend tried IE. He saw that initial error.

Most people give up at this point. But mine is the kind of position where problems no one else can resolve land in order to get resolved. And certificates are somewhat a specialty of mine. So I was not ready to throw in the towel.

I mistrust all browsers. They cache information and present you sanitized information. It can all be misleading.

So I ran examinecert again. This time I got a different result. It showed an expired certificate. So I ran it again. It showed a valid, non-expired certificate. And again. It kept switching back-and-forth!
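
You can watch the flapping without a browser getting in the way by asking for just the expiration date over and over. A rough sketch – the hostname is a stand-in for the real site:

$ for i in 1 2 3 4 5 6; do echo | openssl s_client -servername mysite.example.com -connect mysite.example.com:443 2>/dev/null | openssl x509 -noout -enddate; sleep 1; done

Behind a correctly configured load balancer every line shows the same notAfter date; here it alternated between the old and the renewed certificate.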

Here it helps to know some peripheral information. The certificate resides on an old F5 BigIP load balancer which I used to run. It has a known problem with updating certificates if you merely try to replace the certificate in the SSL client profile. It was clear by looking at the dates that the certificate had recently been renewed.

So I now had enough information to say the problem was on the load balancer and I could send the ticket over to the group that maintains it.

As for IE's strange behavior? Also explainable, for the most part. After an initial page with the expired certificate, if you click Continue to this web site it re-loads the page and gets the good certificate, so it no longer shows you the error! So when I clicked on the lock icon to examine the certificate, I was always getting the good version. In fact – and this is an example of the limitations of browsers like IE – you don't have the option to examine the certificate about which it complained initially. And I think IE caches this certificate, so it sometimes persists even after closing and re-launching the browser.

Case closed.

Conclusion
An intermittent certificate error was explained and traced to a bad load balancer implementation of SSL profiles. The problem could only be understood by going the extra mile, being open-minded about possible causes and “using all my senses.” As I like to joke, that’s why I make the medium bucks!

Other conclusion? openssl is your friend.

References and related
My favorite openssl commands show how to use openssl x509 from any linux server.

Categories
Admin Network Technologies Security

The IT Detective agency: Some insights into 4096-bit SSL keys

Intro
I was recently asked if a new certificate a web site is about to deploy would require any changes to our clients such as needing to import this certificate into their Java keystore.

The details
Well, I saved the certificate on a Linux server calling it my.crt and examined it using openssl:

$ openssl x509 ‐text ‐in my.crt

My greatest hits amongst the openssl commands are listed here: My favorite openssl commands

Anyway, the output begins like this:

Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            68:5f:f8:b6:5e:56:c2:1d
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=US, ST=Arizona, L=Scottsdale, O=GoDaddy.com, Inc., OU=http://certs.godaddy.com/repository/, CN=Go Daddy Secure Certificate Authority - G2
        Validity
            Not Before: Apr  5 22:57:01 2018 GMT
            Not After : Apr  5 22:57:01 2020 GMT
        Subject: 1.3.6.1.4.1.311.60.2.1.3=US/1.3.6.1.4.1.311.60.2.1.2=California/2.5.4.15=Private Organization/serialNumber=C2417721, C=US, ST=California, L=Carlsbad, CN=www.drj.com
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
            RSA Public Key: (4096 bit)
                Modulus (4096 bit):
                    00:da:c7:18:a2:4d:b5:c9:95:22:b0:64:50:e7:b8:
                    ...

So I checked the text after the Issuer field, C=US, ST=Arizona, L=Scottsdale, O=GoDaddy.com, Inc., OU=http://certs.godaddy.com/repository/, CN=Go Daddy Secure Certificate Authority – G2
This is the intermediate CA, and it exactly matches the current certificate we already trust. So no problem, we are good to go, right? Not so fast, grasshopper. This certificate contains a totally new element for us. I happened to notice it has a 4096-bit key length. I'd never seen that before, though I had heard about it.
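
By the way, if all you want is the key length, you can grep it out of the openssl text output. Depending on the openssl version the matching line reads Public-Key: (4096 bit) or RSA Public Key: (4096 bit), hence the loose pattern:

$ openssl x509 -noout -text -in my.crt | grep -Ei 'public(-| )key'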

How do we even know our old browsers and our proxy server are going to be good with that? The best way to find out, I reasoned, is simply to find another site with a 4096-bit certificate. Well, it took me almost an hour to find one – DDG and Google searches proved fruitless. I found it by taking logical guesses, as in, surely some security-minded organization has deployed these already??

ssllabs.com? Nope. godaddy.com? Nope. www.google.com? Nope. gnupg.org? Nah. Let's Encrypt? Also a no. Then I tried nist.org and found the weirdest thing. They send several certificates, one of which is *.bluehost.com, which is 4096 bits. But it makes no sense for that to be part of the certificates on nist.org, as an ssllabs.com server evaluation will tell you. So then I tried www.bluehost.com. Paydirt!

$ examinecert www.bluehost.com

Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            af:a7:b9:22:4f:d5:7e:6b:78:4b:5a:23:d0:35:50:23
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=GB, ST=Greater Manchester, L=Salford, O=COMODO CA Limited, CN=COMODO RSA Domain Validation Secure Server
CA
        Validity
            Not Before: Oct 16 00:00:00 2015 GMT
            Not After : Oct 17 23:59:59 2018 GMT
        Subject: OU=Domain Control Validated, OU=Hosted by BlueHost.Com, INC, OU=PositiveSSL Wildcard, CN=*.unifiedlayer.co
m
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (4096 bit)
                Modulus:
                    00:c5:2b:10:d2:20:bb:d9:1b:e1:d3:b2:d1:9b:6f:
                    ...

examinecert is a bash function I created defined as:

examinecert () { echo|openssl s_client -connect "$@":443|openssl x509 -text|more; }

And for this company that brings up a host of questions. If their aging IE 11 has never encountered a web site with this long a key length, how do we know what will happen the first time?

Also, some sites get SSL intercepted by Bluecoat proxy. How will that infrastructure handle it? Will it handle it?

That's why it was so important to find a real-world example, as painful as that exercise proved to be.

The answers are somewhat surprising.

Yes, ancient Internet Explorer probably handles 4096 bit key lengths just fine. I actually haven’t fully tested that one yet.

But it doesn't matter for this company. Their Bluecoat proxy intercepts the SSL. So, yes, that part works, and the proxy re-creates its own certificate – but issues it with a standard 2048-bit key length! That is what IE sees, so I know there will be no issue there. I say surprising because usually the generated certificates so carefully preserve all aspects of the original: same expiration date, same common name, etc. Whether or not this key length reduction is configurable I have yet to find out.

Follow up
As a result of my prodding, badssl.com will include a 4096-bit certificate with which to test things out.

Conclusion
After an arduous search (I'm sure by this time next year it will be much easier) we found a public site which can be used to test 4096-bit key lengths: www.bluehost.com. Obviously GoDaddy also issues 4096-bit certificates, since that is what this particular web site uses as its issuer, but I have yet to find a live example of one.

Bluecoat SSL interception by default does handle this long key length but, to our surprise, generates its private version of it with only a 2048-bit key length.

Just remember, if you have a Raspberry Pi you can run all these commands that I've shown because you have a bona fide Linux system.

Case: closed!

References and related
This site has all sorts of SSL scenarios to test against: https://badssl.com/.
To jump straight to their 4096-bit CERT: https://rsa4096.badssl.com/

Categories
DNS Linux Network Technologies Raspberry Pi Security

Whois information without the pushy hard sell tactics

Intro
Did you ever want to learn about a domain registration but were put off by the hard sell tactics that basically all web-based whois searches subject you to? Me, too. Here’s what you can do.

The details
Linux – so that includes you, Raspberry Pi owners – has a little utility called whois which you can use to get the registrant information of a domain, e.g.,

$ whois johnstechtalk.com

   Domain Name: JOHNSTECHTALK.COM
   Registry Domain ID: 1795918838_DOMAIN_COM-VRSN
   Registrar WHOIS Server: whois.godaddy.com
   Registrar URL: http://www.godaddy.com
   Updated Date: 2017-03-27T00:52:51Z
   Creation Date: 2013-04-23T00:54:17Z
   Registry Expiry Date: 2019-04-23T00:54:17Z
   Registrar: GoDaddy.com, LLC
   Registrar IANA ID: 146
   Registrar Abuse Contact Email: [email protected]
   Registrar Abuse Contact Phone: 480-624-2505
   Domain Status: clientDeleteProhibited https://icann.org/epp#clientDeleteProhibited
   Domain Status: clientRenewProhibited https://icann.org/epp#clientRenewProhibited
   Domain Status: clientTransferProhibited https://icann.org/epp#clientTransferProhibited
   Domain Status: clientUpdateProhibited https://icann.org/epp#clientUpdateProhibited
   Name Server: NS45.DOMAINCONTROL.COM
   Name Server: NS46.DOMAINCONTROL.COM
   DNSSEC: unsigned
   URL of the ICANN Whois Inaccuracy Complaint Form: https://www.icann.org/wicf/
>>> Last update of whois database: 2018-04-19T19:59:35Z <<<
...

Admittedly that did not tell us much, but it points us to another whois server we can try, whois.godaddy.com. So try that:

$ whois ‐h whois.godaddy.com johnstechtalk.com

Domain Name: JOHNSTECHTALK.COM
Registry Domain ID: 1795918838_DOMAIN_COM-VRSN
Registrar WHOIS Server: whois.godaddy.com
Registrar URL: http://www.godaddy.com
Updated Date: 2017-03-27T00:52:50Z
Creation Date: 2013-04-23T00:54:17Z
Registrar Registration Expiration Date: 2019-04-23T00:54:17Z
Registrar: GoDaddy.com, LLC
Registrar IANA ID: 146
Registrar Abuse Contact Email: [email protected]
Registrar Abuse Contact Phone: +1.4806242505
Domain Status: clientTransferProhibited http://www.icann.org/epp#clientTransferProhibited
Domain Status: clientUpdateProhibited http://www.icann.org/epp#clientUpdateProhibited
Domain Status: clientRenewProhibited http://www.icann.org/epp#clientRenewProhibited
Domain Status: clientDeleteProhibited http://www.icann.org/epp#clientDeleteProhibited
Registry Registrant ID: Not Available From Registry
Registrant Name: ******** ******** (see Notes section below on how to view unmasked data)
Registrant Organization:
Registrant Street: ***** ****
Registrant City: Newton
Registrant State/Province: New Jersey
Registrant Postal Code: 078**
Registrant Country: US
Registrant Phone: +*.**********
Registrant Phone Ext:
Registrant Fax:
Registrant Fax Ext:
Registrant Email: ********@*****.***
Registry Admin ID: Not Available From Registry
Admin Name: ******** ******** (see Notes section below on how to view unmasked data)
...

Now we're getting somewhere. But GoDaddy still tries to force you to their web page and sell you stuff in any case. Not at all surprising for anyone who's ever been a GoDaddy customer (that includes yours truly). Because that's what they do. But not all registrars do that.

Here’s a real-life example which made me decide this technique should be more broadly disseminated. I searched for information on a domain in Argentina:

$ whois buenosaires.com.ar

This TLD has no whois server, but you can access the whois database at
http://www.nic.ar/

Now if you actually try their suggested whois server, it doesn’t even work:

$ whois ‐h www.nic.ar buenosaires.com.ar

Timeout.

What you can do to find the correct whois server is use IANA – the Internet Assigned Numbers Authority – namely, this page:

https://www.iana.org/domains/root/db
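
It turns out IANA also runs a whois server you can query directly, which saves a trip to the browser. The whois: line in the output – when the TLD publishes one – names that TLD's whois server. A quick sketch:

$ whois -h whois.iana.org ar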

So for Argentina I clicked on .ar (I expected to find a separate listing for .com.ar but that was not the case), leading to the page:

See it? At the bottom it shows Whois server: whois.nic.ar. So I try that and voila, meaningful information is returned, no ads accompanying:

$ whois ‐h whois.nic.ar buenosaires.com.ar

% La información a la que estás accediendo se provee exclusivamente para
% fines relacionados con operaciones sobre nombres de dominios y DNS,
% quedando absolutamente prohibido su uso para otros fines.
%
% La DIRECCIÓN NACIONAL DEL REGISTRO DE DOMINIOS DE INTERNET es depositaria
% de la información que los usuarios declaran con la sola finalidad de
% registrar nombres de dominio en ‘.ar’, para ser publicada en el sitio web
% de NIC Argentina.
%
% La información personal que consta en la base de datos generada a partir
% del sistema de registro de nombres de dominios se encuentra amparada por
% la Ley N° 25326 “Protección de Datos Personales” y el Decreto
% Reglamentario 1558/01.
 
domain:         buenosaires.com.ar
registrant:     50030338720
registrar:      nicar
registered:     2012-07-05 00:00:00
changed:        2017-06-27 17:42:45.944889
expire:         2018-07-05 00:00:00
 
contact:        50030338720
name:           TRAVEL RESERVATIONS SRL
registrar:      nicar
created:        2013-09-05 00:00:00
changed:        2018-04-17 13:14:55.331068
 
nserver:        ns-1588.awsdns-06.co.uk ()
nserver:        ns-925.awsdns-51.net ()
nserver:        ns-1385.awsdns-45.org ()
nserver:        ns-239.awsdns-29.com ()
registrar:      nicar
created:        2016-07-01 00:02:28.608837

2nd example: goto.jobs
I actually needed this one! So I learned of a domain goto.jobs and I wanted to get some background. So here goes…
$ whois goto.jobs

getaddrinfo(jobswhois.verisign-grs.com): Name or service not known

So off to a bad start, right? So we hit up the .jobs link on iana, https://www.iana.org/domains/root/db/jobs.html, and we spy a reference to their whois server:

Registry Information
This domain is managed under ICANN's registrar system. You may register domains in .JOBS through an ICANN accredited registrar. The official list of ICANN accredited registrars is available on ICANN's website.
URL for registration services: http://www.goto.jobs
WHOIS Server: whois.nic.jobs

So we try that:
$ whois ‐h whois.nic.jobs goto.jobs

   Domain Name: GOTO.JOBS
   Registry Domain ID: 91478530_DOMAIN_JOBS-VRSN
   Registrar WHOIS Server: whois-all.nameshare.com
   Registrar URL: http://www.nameshare.com
   Updated Date: 2018-03-29T20:08:46Z
   Creation Date: 2010-02-04T23:54:33Z
   Registry Expiry Date: 2019-02-04T23:54:33Z
   Registrar: Name Share, Inc
   Registrar IANA ID: 667
   Registrar Abuse Contact Email:
   Registrar Abuse Contact Phone:
   Domain Status: clientTransferProhibited https://icann.org/epp#clientTransferProhibited
   Name Server: KATE.NS.CLOUDFLARE.COM
   Name Server: MARK.NS.CLOUDFLARE.COM
   Name Server: NS1.REGISTRY.JOBS
   Name Server: NS2.REGISTRY.JOBS
   DNSSEC: unsigned
   URL of the ICANN Whois Inaccuracy Complaint Form: https://www.icann.org/wicf/
>>> Last update of WHOIS database: 2018-04-23T18:54:31Z <<<

Better, but it seems to merely point to a registrar and its whois server:

Registrar WHOIS Server: whois-all.nameshare.com

So let’s try that:

$ whois ‐h whois-all.nameshare.com goto.jobs

Domain Name: GOTO.JOBS
Registry Domain ID: 91478530_DOMAIN_JOBS-VRSN
Registrar WHOIS Server: whois-jobs.nameshare.com
Registrar URL: http://www.nameshare.com
Updated Date: 2018-03-29T20:08:46Z
Creation Date: 2010-02-04T23:54:33Z
Registrar Registration Expiration Date: 2017-02-04T23:54:33Z
Registrar: NameShare, Inc.
Registrar IANA ID: 667
Registrar Abuse Contact Email: [email protected]
Registrar Abuse Contact Phone: +1.7809429975
Domain Status: clientTransferProhibited http://www.icann.org/epp#clientTransferProhibited
Registry Registrant ID:
Registrant Name: DNS Administrator
Registrant Organization: Employ Media LLC
Registrant Street: 3029 Prospect Avenue
Registrant City: Cleveland
Registrant State/Province: OH
Registrant Postal Code: 44115
Registrant Country: United States
Registrant Phone: +1.2064261500
Registrant Phone Ext:
Registrant Fax: +1.1111111111
Registrant Fax Ext:
Registrant Email: [email protected]
Registry Admin ID:
Admin Name: DNS Administrator
Admin Organization: Employ Media LLC
Admin Street: 3029 Prospect Avenue
...

Bingo! We have hit pay dirt. We have meaningful information about the registrant – an address, phone number and email address – and received no obnoxious ads in return. For me it’s worth the extra steps.
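
If you do this often, the two-step dance can be wrapped in a small bash function. A sketch only – whois2 is a name I made up, it automates just the first hop (the registry), and not every TLD publishes a whois server with IANA:

whois2 () {
  domain="$1"
  tld="${domain##*.}"
  # ask IANA which whois server handles this TLD
  server=$(whois -h whois.iana.org "$tld" | awk '/^whois:/ {print $2}')
  if [ -n "$server" ]; then
    whois -h "$server" "$domain"
  else
    echo "IANA lists no whois server for .$tld"
  fi
}

Then whois2 buenosaires.com.ar goes straight to whois.nic.ar.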

ICANN: another alternative
Most registrars' whois sites are rate-limited. ICANN's is not. Nor do they sic ads on you. It is:

https://whois.icann.org/en/lookup?name=

ICANN, for the record, is the body that decides what goes on in the DNS namespace, for instance, what new gTLDs should be added. You can use its whois tool for all gTLDs, but not in general for ccTLDs.

whois is undergoing changes due to GDPR. In particular, the “social” information of the contacts – registrant, admin and technical – will be masked in the future, except perhaps for state and country. And whois is slowly dying anyway; a new standard called RDAP will take its place.
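
If you want a taste of RDAP today, there are public bootstrap services that redirect your query to the right registry and return JSON. As far as I know rdap.org is one such redirector – treat that as an assumption, and expect the output format to vary by registry:

$ curl -sL https://rdap.org/domain/johnstechtalk.com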

References and related
This page has some great tips. Wish I had seen it first! https://superuser.com/questions/758647/how-to-whois-new-tlds

Here’s that iana root zone database link again: https://www.iana.org/domains/root/db

ICANN’s whois: https://whois.icann.org/en/lookup?name=

Categories
Admin Linux Security

Fail2ban fails to work, I built my own

Intro
I've sung the praises of fail2ban as a modern way to shut down those annoying probes of your cloud server. I recently got to work with a Red Hat v 7.4 system, so much newer than my old CentOS 6 server. And fail2ban failed to even work! Instead of the usual extensive debugging I just wrote my own. I'm sharing it here.

The details
I have a bare-bones RHEL 7.4 system. A yum search fail2ban does not find that package. Supposedly you simply need to add the EPEL repository to make that package available, but the recipe for how to do that is not obvious. So I got the source for fail2ban and built it. Although it runs, you gotta build a local jail to block ssh attempts and that's where it fails. So instead of going down that rabbit hole – I was already in too deep – I said to heck with it and decided to build my own.

All I really wanted was to ban IPs which are hitting my sshd server endlessly, often once per second or more. I take it personally.

RHEL 7 has a new firewall concept, firewalld. It's all new to me and I don't want to go down that rabbit hole either, at least not right now. So I rely on that old standby of mine: cut off an attacker by making an invalid route to his IP address, along the lines of:

$ route add -host <attacker_IP> gw 127.0.0.1

And voila, they can no longer establish a TCP connection. It’s not quite as good as a firewall rule because their source UDP packets could still get through, but come on, we don’t need to be purists. And furthermore, in practice it produces the desired behaviour: stops the ssh dictionary attacks cold.
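
By hand that looks like the following. The IP is one of the actual offenders from the logs further down; ip route add blackhole is the more modern iproute2 way of doing the same thing:

$ route add -host 185.165.29.197 gw 127.0.0.1
$ route del -host 185.165.29.197 gw 127.0.0.1     # undo it later if need be
$ ip route add blackhole 185.165.29.197/32        # roughly equivalent, iproute2 style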

I knocked this out in one night, avoiding the rabbit hole of “fixing” fail2ban. I had to use the old stuff I know so well: perl and stupid little tricks. I call it drjfail2ban.

#!/bin/perl
# suppress IPs with failed logins
# DrJ - 2017/10/07
$DEBUG = 0;
$sleep = 30;
$cutoff = 3;
$headlines = 60;
@goodusers =("drjohn1","user57");
%blockedips = ();
while(1) {
#  $time = `date +%Y%m%d%H%M%S`;
  main();
  sleep($sleep);
}
 
sub main() {
if ($DEBUG) {
  for $ips (keys %blockedips) {
    print "blocked ip: $ips "
  }
}
# man last shows what this means: -i forces IP to be displayed, etc.
open(LINES,"last -$headlines -i -f /var/log/btmp|") || die "Problem with running last -f btmp!!\n";
# output:
#ubnt     ssh:notty    185.165.29.197   Sat Oct  7 19:30    gone - no logout
while(<LINES>) {
  ($user,$ip) = /^(\S+)\s+\S+\s+(\S+)/;
  print "user,ip: $user,$ip\n" if $DEBUG;
  next if $blockedips{$ip};
#we can't handle hostnames right now
  next if $ip =~ /[a-z]/i;
  $candidateips{$ip} += 1;
  $bannedusers{$ip} = $user;
}
for (keys %candidateips) {
  $ip = $_;
# allow my usual source IPs without blocking...
  next if $ip =~ /^(50\.17\.188\.196|51\.29\.208\.176)/;
  next if $blockedips{$ip};
  $usr = $bannedusers{$ip};
  $ipct = $candidateips{$ip};
  print "ip, usr, ipct: $ip, $usr, $ipct\n" if $DEBUG;
# block
  $block = 1;
  for $gu (@goodusers) {
    print "gu: $gu\n" if $DEBUG;
    $block = 0 if $usr eq $gu;
  }
  if ($block) {
# more tests: persistence of attempt
    $hitcnt = $candidateips{$ip};
    if ($hitcnt < $cutoff) {
# do not block and reset counter for next go-around
      print "Not blocking ip $ip and resetting counter\n" if $DEBUG;
      $candidateips{$ip} = 0;
    } else {
      $blockedips{$ip} = 1;
      print "Blocking ip $ip with hit count $hitcnt at " . `date`;
# prevent further communication...
      system("route add -host $ip gw 127.0.0.1");
   }
  }
  #print "route add -host $ip gw 127.0.0.1\n";
}
close(LINES);
} # end main function

Highlights from the program
The comments are pretty self-explanatory. Just a note about the philosophy. I fear making a goof and locking myself out! So I was conservative and do not block at all if the source IP matches one of my favored source IPs, or if the user matches one of my usual usernames like drjohn1. I use obscure userids and the hackers try the stupid stuff like root, admin, etc. So they may be dictionary attacking the password, but they certainly aren't dictionary attacking the username!

I don't mind wiping the slate clean of all created routes after a server reboot, so I only plan to run this from the command line. To make it persistent until the next reboot you just run it from the root account like so (let's say we put it in /usr/local/sbin):

$ nohup /usr/local/sbin/drjfail2ban > /var/log/drjfail2ban &

And it just sits there and runs, even after you log out.

Results
Since it hasn't been running for long I can provide a partial log file as of this publication. (The SIOCADDRT: File exists lines simply mean the route was already present when the script tried to add it again – presumably because I had restarted the script.)

Blocking ip 103.80.117.74 with hit count 6 at Sun Oct  8 17:34:43 CEST 2017
SIOCADDRT: File exists
Blocking ip 89.176.96.45 with hit count 5 at Sun Oct  8 17:34:43 CEST 2017
SIOCADDRT: File exists
Blocking ip 31.162.51.206 with hit count 3 at Sun Oct  8 17:34:43 CEST 2017
SIOCADDRT: File exists
Blocking ip 218.95.142.218 with hit count 6 at Sun Oct  8 17:34:43 CEST 2017
SIOCADDRT: File exists
Blocking ip 202.168.8.54 with hit count 5 at Sun Oct  8 17:34:43 CEST 2017
SIOCADDRT: File exists
Blocking ip 13.94.29.182 with hit count 4 at Sun Oct  8 17:34:43 CEST 2017
SIOCADDRT: File exists
Blocking ip 40.71.185.73 with hit count 4 at Sun Oct  8 17:34:43 CEST 2017
SIOCADDRT: File exists
Blocking ip 77.72.85.100 with hit count 13 at Sun Oct  8 17:34:43 CEST 2017
SIOCADDRT: File exists
Blocking ip 201.180.104.63 with hit count 7 at Sun Oct  8 17:34:43 CEST 2017
SIOCADDRT: File exists
Blocking ip 121.14.27.58 with hit count 4 at Sun Oct  8 17:40:43 CEST 2017
Blocking ip 36.108.234.99 with hit count 6 at Sun Oct  8 17:47:13 CEST 2017
Blocking ip 185.165.29.69 with hit count 6 at Sun Oct  8 18:02:43 CEST 2017
Blocking ip 190.175.40.195 with hit count 6 at Sun Oct  8 19:05:43 CEST 2017
Blocking ip 139.199.167.21 with hit count 4 at Sun Oct  8 19:29:13 CEST 2017
Blocking ip 186.60.67.51 with hit count 5 at Sun Oct  8 20:49:14 CEST 2017

And what my route table looks like currently:

$ netstat ‐rn|grep 127.0.0.1

Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
2.177.217.155   127.0.0.1       255.255.255.255 UGH       0 0          0 lo
13.94.29.182    127.0.0.1       255.255.255.255 UGH       0 0          0 lo
31.162.51.206   127.0.0.1       255.255.255.255 UGH       0 0          0 lo
36.108.234.99   127.0.0.1       255.255.255.255 UGH       0 0          0 lo
37.204.23.84    127.0.0.1       255.255.255.255 UGH       0 0          0 lo
40.71.185.73    127.0.0.1       255.255.255.255 UGH       0 0          0 lo
42.7.26.15      127.0.0.1       255.255.255.255 UGH       0 0          0 lo
46.6.60.240     127.0.0.1       255.255.255.255 UGH       0 0          0 lo
59.16.74.234    127.0.0.1       255.255.255.255 UGH       0 0          0 lo
77.72.85.100    127.0.0.1       255.255.255.255 UGH       0 0          0 lo
89.176.96.45    127.0.0.1       255.255.255.255 UGH       0 0          0 lo
103.80.117.74   127.0.0.1       255.255.255.255 UGH       0 0          0 lo
109.205.136.10  127.0.0.1       255.255.255.255 UGH       0 0          0 lo
113.195.145.13  127.0.0.1       255.255.255.255 UGH       0 0          0 lo
118.32.27.85    127.0.0.1       255.255.255.255 UGH       0 0          0 lo
121.14.27.58    127.0.0.1       255.255.255.255 UGH       0 0          0 lo
139.199.167.21  127.0.0.1       255.255.255.255 UGH       0 0          0 lo
162.213.39.235  127.0.0.1       255.255.255.255 UGH       0 0          0 lo
176.50.95.41    127.0.0.1       255.255.255.255 UGH       0 0          0 lo
176.209.89.99   127.0.0.1       255.255.255.255 UGH       0 0          0 lo
181.113.82.213  127.0.0.1       255.255.255.255 UGH       0 0          0 lo
185.165.29.69   127.0.0.1       255.255.255.255 UGH       0 0          0 lo
185.165.29.197  127.0.0.1       255.255.255.255 UGH       0 0          0 lo
185.165.29.198  127.0.0.1       255.255.255.255 UGH       0 0          0 lo
185.190.58.181  127.0.0.1       255.255.255.255 UGH       0 0          0 lo
186.57.12.131   127.0.0.1       255.255.255.255 UGH       0 0          0 lo
186.60.67.51    127.0.0.1       255.255.255.255 UGH       0 0          0 lo
190.42.185.25   127.0.0.1       255.255.255.255 UGH       0 0          0 lo
190.175.40.195  127.0.0.1       255.255.255.255 UGH       0 0          0 lo
193.201.224.232 127.0.0.1       255.255.255.255 UGH       0 0          0 lo
201.180.104.63  127.0.0.1       255.255.255.255 UGH       0 0          0 lo
201.255.71.14   127.0.0.1       255.255.255.255 UGH       0 0          0 lo
202.100.182.250 127.0.0.1       255.255.255.255 UGH       0 0          0 lo
202.168.8.54    127.0.0.1       255.255.255.255 UGH       0 0          0 lo
203.190.163.125 127.0.0.1       255.255.255.255 UGH       0 0          0 lo
213.186.50.82   127.0.0.1       255.255.255.255 UGH       0 0          0 lo
218.95.142.218  127.0.0.1       255.255.255.255 UGH       0 0          0 lo
221.192.142.24  127.0.0.1       255.255.255.255 UGH       0 0          0 lo

Here’s a partial listing of the many failed logins, just to keep it real:

...
root     ssh:notty    190.175.40.195   Sun Oct  8 19:05 - 19:28  (00:23)
root     ssh:notty    190.175.40.195   Sun Oct  8 19:05 - 19:05  (00:00)
root     ssh:notty    190.175.40.195   Sun Oct  8 19:05 - 19:05  (00:00)
root     ssh:notty    190.175.40.195   Sun Oct  8 19:05 - 19:05  (00:00)
root     ssh:notty    190.175.40.195   Sun Oct  8 19:05 - 19:05  (00:00)
root     ssh:notty    190.175.40.195   Sun Oct  8 19:05 - 19:05  (00:00)
admin    ssh:notty    185.165.29.69    Sun Oct  8 18:02 - 19:05  (01:02)
admin    ssh:notty    185.165.29.69    Sun Oct  8 18:02 - 18:02  (00:00)
admin    ssh:notty    185.165.29.69    Sun Oct  8 18:02 - 18:02  (00:00)
admin    ssh:notty    185.165.29.69    Sun Oct  8 18:02 - 18:02  (00:00)
root     ssh:notty    185.165.29.69    Sun Oct  8 18:02 - 18:02  (00:00)
root     ssh:notty    185.165.29.69    Sun Oct  8 18:02 - 18:02  (00:00)
root     ssh:notty    185.165.29.69    Sun Oct  8 18:02 - 18:02  (00:00)
root     ssh:notty    36.108.234.99    Sun Oct  8 17:47 - 18:02  (00:15)
root     ssh:notty    36.108.234.99    Sun Oct  8 17:47 - 17:47  (00:00)
root     ssh:notty    36.108.234.99    Sun Oct  8 17:47 - 17:47  (00:00)
root     ssh:notty    36.108.234.99    Sun Oct  8 17:47 - 17:47  (00:00)
root     ssh:notty    36.108.234.99    Sun Oct  8 17:47 - 17:47  (00:00)
root     ssh:notty    36.108.234.99    Sun Oct  8 17:46 - 17:47  (00:00)
ubuntu   ssh:notty    121.14.27.58     Sun Oct  8 17:40 - 17:46  (00:06)
ubuntu   ssh:notty    121.14.27.58     Sun Oct  8 17:40 - 17:40  (00:00)
aaaaaaaa ssh:notty    121.14.27.58     Sun Oct  8 17:40 - 17:40  (00:00)
aaaaaaaa ssh:notty    121.14.27.58     Sun Oct  8 17:40 - 17:40  (00:00)
root     ssh:notty    206.71.63.4      Sun Oct  8 17:34 - 17:40  (00:06)
root     ssh:notty    206.71.63.4      Sun Oct  8 17:34 - 17:34  (00:00)
root     ssh:notty    89.176.96.45     Sun Oct  8 16:15 - 17:34  (01:19)
root     ssh:notty    89.176.96.45     Sun Oct  8 16:15 - 16:15  (00:00)
root     ssh:notty    89.176.96.45     Sun Oct  8 16:15 - 16:15  (00:00)
root     ssh:notty    89.176.96.45     Sun Oct  8 16:15 - 16:15  (00:00)
...

Before running drjfail2ban it was much more obnoxious, with the same IP hitting my server every second or so.

Conclusion
I found it easier to roll my own than battle someone else’s errors. It’s kind of fun for me to create these little scripts. I don’t care if anyone else uses them. I will refer to this post myself and probably re-use it elsewhere!

References and related
In an earlier time, I was singing the praises of fail2ban on CentOS.

Categories
Linux Security

Verifying a pkcs12 file with openssl

Intro
The easy way

How to examine a pkcs12 (pfx) file

$ openssl pkcs12 ‐info ‐in file_name.pfx
It will prompt you for the password a total of three times!
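
Related commands, in case you want to pull the pieces out of the pfx file rather than just look at it – same openssl pkcs12 utility, so expect the same password prompts:

$ openssl pkcs12 -in file_name.pfx -clcerts -nokeys      # just the site certificate(s)
$ openssl pkcs12 -in file_name.pfx -cacerts -nokeys      # just the CA chain
$ openssl pkcs12 -in file_name.pfx -nocerts              # just the private key (you will be asked for a passphrase to protect the output)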

The hard way
I went through this whole exercise because I originally could not find the easy way!!!

Get the source for openssl.

Look for pkread.c. Mine is in /usr/local/src/openssl/openssl-1.1.0f/demos/pkcs12.

Compile it.

My first pass:

$ gcc ‐o pkread pkread.c

/tmp/cclhy4wr.o: In function `sk_X509_num':
pkread.c:(.text+0x14): undefined reference to `OPENSSL_sk_num'
/tmp/cclhy4wr.o: In function `sk_X509_value':
pkread.c:(.text+0x36): undefined reference to `OPENSSL_sk_value'
/tmp/cclhy4wr.o: In function `main':
pkread.c:(.text+0x93): undefined reference to `OPENSSL_init_crypto'
pkread.c:(.text+0xa2): undefined reference to `OPENSSL_init_crypto'
pkread.c:(.text+0x10a): undefined reference to `d2i_PKCS12_fp'
pkread.c:(.text+0x154): undefined reference to `ERR_print_errors_fp'
pkread.c:(.text+0x187): undefined reference to `PKCS12_parse'
pkread.c:(.text+0x1be): undefined reference to `ERR_print_errors_fp'
pkread.c:(.text+0x1d4): undefined reference to `PKCS12_free'
pkread.c:(.text+0x283): undefined reference to `PEM_write_PrivateKey'
pkread.c:(.text+0x2bd): undefined reference to `PEM_write_X509_AUX'
pkread.c:(.text+0x320): undefined reference to `PEM_write_X509_AUX'
collect2: ld returned 1 exit status

Corrected compile command
$ gcc ‐o pkread pkread.c ‐I/usr/local/include ‐L/usr/local/lib64 ‐lssl ‐lcrypto

Note that this works for me because I put my ssl and crypto libraries in that directory. Your installation may have put them elsewhere:

$ ll /usr/local/lib64/*.a

-rw-r--r--. 1 root root 4793162 Aug 16 15:30 /usr/local/lib64/libcrypto.a
-rw-r--r--. 1 root root  765862 Aug 16 15:30 /usr/local/lib64/libssl.a

References and related
My favorite openssl commands.

Categories
Security Web Site Technologies

Google Authenticator – not tough to self-host

Intro
I wanted to learn a bit more about digital currencies. I’ll certainly be posting about them in the future. The best way to get some is to open an account with coinbase. But for security reasons – and I am all for securing things as digital currency thefts are notorious – they require two-factor authentication. The least secure method is to have an SMS code sent to your phone.

Well, my phone is a work phone that I use for light personal use. I've never owned a personal cell phone. So I'm not even sure my number will be portable if I retire or am severed from the company that supplies the phone. It would be just like me to forget all about it years from now when I'm facing that situation.

They said a more secure method is Google Authenticator. That sounds a bit daunting and perhaps tied to Google? Upon investigation it turns out that neither of those things is true.

The details

Turns out Google Authenticator is really an implementation of open standards based on a couple of RFCs, RFC 6238 (TOTP) and RFC 4226 (HOTP). So there are other implementations available besides Google's.

I used this implementation. It works fine for me now that I understand how it works! https://github.com/gbraad/gauth

How gauth works
The main thing to understand – and the author doesn't really explain it – is that the secrets are stored locally in the browser. I didn't look, but it must be in a cookie or local storage. So on the same desktop, using different browsers, you'll find that one browser sees your added account and the other does not. No secrets are stored on the server, so the web server only passively serves the HTML and Javascript files.

So in my opinion you ought to make a secure copy of the secret so it doesn’t vanish when you clear your browser cookies, or your computer crashes, or whatever.

It's a TOTP: a time-based one-time password. I am personally familiar and comfortable with the concept, having been a long-time RSA token user, back to the days when the company was Security Dynamics! So my account cannot be compromised by my sharing a one-time code as I do in the screen shot below!
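
If you want to convince yourself there is nothing Google-specific going on, the oath-toolkit package available on most Linux distributions ships an oathtool command that computes the same codes from a base32 secret. The secret below is a throwaway example, not a real one:

$ oathtool --totp -b JBSWY3DPEHPK3PXP

It should print the same six-digit code that gauth (or the Google Authenticator app) shows for that secret at that moment – assuming your clocks agree, which matters, as discussed below.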


What it looks like

gauth running at Drjohnstechtalk.com

Note the time remaining on the right side. These one-time passwords only last for 30 seconds and then new ones will be displayed.

Keep up your time
Since these codes are time-dependent, it is actually important that your computer be synced to an Internet time source. I hadn't really messed with that on my Windows 10 system, and when I checked the time I found it off by seven seconds, which is way too much in my opinion. Being off by a couple of minutes is probably fatal. I was syncing to a time source about every five days, which is far too infrequent in my opinion.

Too lazy or unable to host your own?
You can use the one the author hosts: http://gauth.apps.gbraad.nl/. Of course that’s putting your trust in the author so I don’t recommend using that.

How to host it
You basically just download the zip file from the git repository and unzip it somewhere on your web server. In my case I am keeping the location a secret, but it doesn't really matter since there is nothing on the server to hide.

Conclusion
Until now I have wanted two-factor authentication but have hesitated due to my incorrect notion that this would actually tie me down to Google’s Ecosystem. Today I found a simple, independent (of Google) implementation that works with Coinbase. I hope to expand my use of 2FA to my banking apps, WordPress and perhaps other areas now that I am comfortable with it.

References and related
A “simple” implementation of Google Authenticator which can be self-hosted: https://github.com/gbraad/gauth
Wikipedia article on Google Authenticator: https://en.wikipedia.org/wiki/Google_Authenticator. It’s very helpful.

Categories
Security

The latest on handling of SHA-1 certificates by the major browsers

Intro
A certain organization is still using SHA-1 certificates internally, in spite of years of warnings, as I write this in February, 2017. But in the security world lack of action = eventual weakness. Ignorance is not bliss and putting your head in the sand is not a viable security strategy. So cracks in their approach are starting to appear, especially with the Chrome browser, the latest version of which is showing their internal SSL sites as not secure.
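
If you want to check which sites are affected, openssl will tell you the signature algorithm directly; anything reporting sha1WithRSAEncryption is living on borrowed time. The hostname here is just a placeholder:

$ echo | openssl s_client -connect internal-site.example.com:443 2>/dev/null | openssl x509 -noout -text | grep "Signature Algorithm"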

I found the security blog links in the references section below very helpful in getting a feel for what the major browsers' take on all this is. A colleague brought them to my attention.

Solutions
The obvious solution is for them to switch to a PKI based on SHA-256 (a member of the SHA-2 family). But apparently it is like turning an aircraft carrier, or maybe pushing a planet into a different orbit, as bureaucratic inertia keeps progress on this front barely perceptible.

Here’s an image of that part of Chrome that shows the result of an https Intranet site access:

References and related
These are all from the October – November 2016 time frame.
Google security blog: SHA-1 certificates in Chrome
Mozilla security blog: Phasing Out SHA-1 on the Public Web
Microsoft Edge Developer: SHA-1 deprecation countdown

Categories
Network Technologies Security

The IT Detective agency: the case of the incompatible sftp client

Intro
I was asked for assistance with this sftp problem:

$ sftp <user@host>

DH_GEX group out of range: 1536 !< 1024 !< 8192
Couldn't read packet: Connection reset by peer

We actually spoke with the operator of the sftp server, who said their server is not compatible with openSSH version 6.2.

The details
You can see your version of openSSH and a lot more by running it again with the -v flag:

$ sftp ‐v [email protected]

OpenSSH_6.2p2, OpenSSL 0.9.8j-fips 07 Jan 2009
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 20: Applying options for *
debug1: Connecting to securefile.drj.com [50.17.188.196] port 22.
debug1: Connection established.
debug1: identity file /home/drj/.ssh/id_rsa type -1
debug1: identity file /home/drj/.ssh/id_dsa type -1
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_6.2
debug1: Remote protocol version 2.0, remote software version SSHD
debug1: no match: SSHD
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: server->client aes128-cbc hmac-sha1 none
debug1: kex: client->server aes128-cbc hmac-sha1 none
debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1536<7680<8192) sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
DH_GEX group out of range: 1536 !< 1024 !< 8192
Couldn't read packet: Connection reset by peer

At the same time, I could successfully connect on an older system – SLES 11 SP2 – whose detailed connection looked like this:

Connecting to securefile.drj.com...
OpenSSH_5.1p1, OpenSSL 0.9.8j-fips 07 Jan 2009
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug1: Connecting to securefile.drj.com [50.17.188.196] port 22.
debug1: Connection established.
debug1: identity file /home/drj/.ssh/id_rsa type -1
debug1: identity file /home/drj/.ssh/id_dsa type -1
debug1: Remote protocol version 2.0, remote software version SSHD
debug1: no match: SSHD
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_5.1
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: server->client aes128-cbc hmac-sha1 none
debug1: kex: client->server aes128-cbc hmac-sha1 none
debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<2048<8192) sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
The authenticity of host 'securefile.drj.com (50.17.188.196)' can't be established.
RSA key fingerprint is 13:bb:75:21:86:c0:d6:90:3d:5c:81:32:4c:7e:73:6b.
Are you sure you want to continue connecting (yes/no)?

See that version? It is only openSSH version 5.1. It seems to permit a shorter Diffie-Hellman group exchange key length than does the newer version of openSSH.

My solution
The standard advice on duckduckgo is to set this parameter in your $HOME/.ssh/config:

KexAlgorithms diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1

But this did not work in my case. And I didn't want to change a setting that would weaken the security of all my other sftp connections. So I settled on passing an additional openSSH parameter on the command line when running sftp:

$ sftp -o KexAlgorithms=diffie-hellman-group14-sha1 <host>

And this worked!

And this works as well:

$ sftp -o KexAlgorithms=diffie-hellman-group1-sha1 <host>
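
If you don't want to remember that option every time, the same setting can be scoped to just this one host in $HOME/.ssh/config, which keeps the stronger defaults for all other connections. A sketch using the host from this example:

Host securefile.drj.com
    KexAlgorithms diffie-hellman-group14-sha1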

Why the change?
The minimum key length was changed from 1024 to 1536 and later to 2048 to avoid the logjam vulnerability.

Conclusion
Not all versions of openSSH are equal. In particular openSSH v 6.2 may be incompatible with older sftp servers. I showed a way to make a newer sftp work with an older server. Case closed.

References and related
Novell’s discussion of the issue is here: https://www.novell.com/support/kb/doc.php?id=7016904
Here’s some more information on the logjam vulnerability: https://weakdh.org/

Categories
Admin Apache Network Technologies Security

drjohnstechtalk now uses HTTP Strict Transport Security, HSTS

Intro
I was reading about a kind of amazingly thorough exploit which could be done using a Raspberry Pi zero. Physical access is required, but the scope of what this guy has figured out and put together is really amazing.

Reading the description, I decided it was a good exercise in making sure I understand the underlying technologies. One was admittedly something I hadn't seen before: DNS rebinding. That got me reading about DNS rebinding, and that got me looking at defenses against it.

HSTS to the rescue
Since in DNS rebinding you may have either a MITM (man in the middle) or a web site impersonated by a hacker, one defense against it is to use HTTPS. (The hacker will not have access to a web site's private key and therefore has no way to fake a certificate.) But what they can do is redirect users from HTTPS to HTTP, where no certificate is required.

HSTS is designed to make that downgrade tip the user off by complaining. Upon first visit the browser receives a special response header (which it remembers, much like a cookie) saying this site should only be reached over HTTPS. On subsequent visits the user's browser itself then enforces that the site be accessed via HTTPS.

drjohnstechtalk update
Two years ago I switched the default way I run my blog web site from HTTP to HTTPS due to the encryption offered by HTTPS, and the fact that search engines penalize HTTP sites.

It seems a natural progression in this age of increasing security awareness to up the ante and now also run HSTS. For me this was easy. Since I run my own apache server I simply needed to add the appropriate HTTP Response header to my server responses.

This is done within the virtual server section of the apache configuration like so:

# Guarantee HTTPS for 1/2 Year including Sub Domains  - DrJ 11/22/16
# see https://itigloo.com/security/how-to-configure-http-strict-transport-security-hsts-on-apache-nginx/
    Header always set Strict-Transport-Security "max-age=15811200; includeSubDomains; preload"

Of course this requires that the apache module mod_headers is loaded.
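
If mod_headers isn't already loaded, enabling it is a one-liner – something like this, depending on how your apache was packaged:

$ a2enmod headers                                        # Debian/Ubuntu style
LoadModule headers_module modules/mod_headers.so         # or this line in httpd.conf for a source-built apache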

Results
I test it from a Linux server like this:

$ curl ‐i ‐k ‐s https://drjohnstechtalk.com/blog/|head ‐15

HTTP/1.1 200 OK
Date: Tue, 22 Nov 2016 20:30:56 GMT
Server: Apache/2
Strict-Transport-Security: max-age=15811200; includeSubDomains
Vary: Cookie,Accept-Encoding
X-Powered-By: PHP/5.4.43
X-Pingback: https://drjohnstechtalk.com/blog/xmlrpc.php
Last-Modified: Tue, 22 Nov 2016 20:30:58 GMT
Transfer-Encoding: chunked
Content-Type: text/html; charset=UTF-8
 
<!DOCTYPE html>
<html lang="en-US">
<head>
<meta charset="UTF-8" />

See that new header Strict-Transport-Security: max-age=15811200; includeSubDomains? That's the result of what we did. But unless we also put preload at the end, as in the configuration above, it doesn't verify against the HSTS preload checker!

Screenshot: HSTS preload verification error

References and related
drjohnstechtalk is now encrypted – blog posting from 2014
This site has a good description of the requirements for a proper HSTS implementation, and I see that I missed something! https://hstspreload.appspot.com/
You can’t run https without a certificate. I soon will be using the free certificates offered by Let’s Encrypt. Here’s my write-up.