Categories
Admin DNS Network Technologies Security

The IT Detective agency: Live hack caught, partially stopped

Intro
In my years in cybersecurity I’ve been sufficiently removed from the action that I’ve rarely been involved in an actual case. Until last night. A friend, whom I’ll call Jute, got a formal complaint about one of his hosted Windows servers.

> We have detected multiple hacking attempts from your ip address 47.5.105.236 (Hilfer Online) to access our systems.
>
> Log of attempts:
> – Hack attempt failed at 2019-01-17T14:22:41.6539784Z. Attempted user name: Not specified (typical for port scanners or denial of service attacks), system accessed: RDP, ip address accessed: 158.69.241.92
> – Hack attempt failed at 2019-01-17T14:22:26.2213808Z. Attempted user name: Not specified (typical for port scanners or denial of service attacks), system accessed: RDP, ip address accessed: 158.69.241.92
> – Hack attempt failed at 2019-01-17T14:22:10.6304194Z. Attempted user name: Not specified (typical for port scanners or denial of service attacks), system accessed: RDP, ip address accessed: 158.69.241.92
>
>
> Please investigate this problem.
>
> Sent using IP Ban Pro
> http://ipban.com

Hack, cont.
I’ve changed the IPs to protect the guilty! But I’ve conveyed the specificity of the error reporting. Nice and detailed.

Jute has a Windows Server 2012 at that IP. He is not running a web server, so that conveniently and dramatically narrows the hackable footprint of his server. I ran a port scan and found ports 135, 139 and 3389 open. His provider (which is not AWS) offers a simple firewall, which I suggested we use to block ports 135 and 139, which are for Microsoft networking. He was running it as a standalone server, so I don’t think he needed those ports open to the Internet anyway.

Bright idea: use good ole netstat
The breakthrough came when I showed him the poor man’s packet trace:

netstat -an

from a CMD prompt. He ran that and I saw not one but two RDP connections. One we easily identified as his, but the other? It was coming from another IP belonging to the same provider! RDP is easily identified by just looking for the port 3389 connections. Clearly we had caught an unauthorized user first-hand.
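
To give a flavor of what we saw, here is a mock-up of the interesting part of the netstat -an output (my reconstruction, not a capture from that night; the foreign addresses are stand-ins based on the ranges mentioned in this post):

  TCP    0.0.0.0:3389           0.0.0.0:0              LISTENING
  TCP    47.5.105.236:3389      175.198.12.34:50123    ESTABLISHED
  TCP    47.5.105.236:3389      158.69.240.92:49876    ESTABLISHED

The first ESTABLISHED line is Jute’s own session from his Verizon range; the second is the uninvited guest.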

I suggested a firewall rule to allow only his Verizon range to connect to the server on port 3389. But I am an enterprise guy, used to stateful firewalls. When we set it up we cut off his RDP session to his own server! Why? I quickly concluded this was amateur hour: a primitive, ip-chains-like stateless firewall. So we have to think in terms of rules for each packet, not for each TCP connection.

Once we put in a rule to block access to ports 135 and 139, we also blocked Jute’s own RDP session. It turns out the instructions say that once you use the firewall at all, an implicit DENY ALL is added to the bottom of the rules.

So we needed to add a rule like:

SRC: ANY DST: 47.5.105.236 SRC_PORT: ANY DST_PORT: 3389 ACTION: allow

But his server needs to access web sites. That’s a touch difficult with a stateless firewall. You have to enter the “backwards” rule (outbound traffic is not restricted by the firewall):

SRC: ANY DST: 47.5.105.236 SRC_PORT: 443 DST_PORT: ANY ACTION: allow

But he also needs to send smtp email, and look up DNS! This is getting messy, but we can do it:

SRC: ANY DST: 47.5.105.236 SRC_PORT: 25 DST_PORT: ANY ACTION: allow
SRC: ANY DST: 47.5.105.236 SRC_PORT: 53 DST_PORT: ANY ACTION: allow

We looked up the users and saw Administrator and another user, Update. We did not recognize Update, so he deleted it! And he changed the Administrator password.

Finally we decided we had to bump this hacker.

So we made two rules to allow him but deny the zombie computer:

SRC: 158.69.240.92 DST: 47.5.105.236 SRC_PORT: ANY DST_PORT: 3389 ACTION: reject
SRC: ANY DST: 47.5.105.236 SRC_PORT: ANY DST_PORT: 3389 ACTION: allow

Pyrrhic victory
Success. We bumped that user right out while permitting Jute’s access to continue. The bad news? A new interloper replaced it! 95.216.86.217.

OK. So with another rule we can bump that one too.

Yup. Another success, and another interloper jumps on in its place: 124.153.74.29. So we bump that one too. But I begin to suspect we are bailing out the Titanic with a thimble. It’s amazing. Within seconds a blocked IP is replaced with a new one.

We need a more sweeping restriction. So we reasoned that Jute will only RDP in from his home ISP (Verizon), where his IP does not really change.

So we replace

SRC: ANY DST: 47.5.105.236 SRC_PORT: ANY DST_PORT: 3389 ACTION: allow

with

SRC: 175.198.0.0/16 DST: 47.5.105.236 SRC_PORT: ANY DST_PORT: 3389 ACTION: allow

and we also delete the specific reject rules.

But at this point, for some reason, the implicit DENY ALL rule stops working. From my server I could do an nc -v 47.5.105.236 3389 and see that the port was open, though it should not have been. So we have to add a cleanup rule at the bottom:

SRC: ANY DST: 47.5.105.236 SRC_PORT: ANY DST_PORT: ANY ACTION: reject

That did the trick. The port no longer showed as open.

The last interloper still appeared in the netstat -an listing, but I think its connection just hadn’t timed out yet. netstat -an also clearly showed (to me anyway) what they were doing: scanning large swaths of the Internet for other vulnerable servers! The tables were filled with SYN_SENT connections to port 3389 of consecutive IPs! Amazing.
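
For illustration, the scanning portion of the table looked something like this (again a reconstruction; the target IPs shown here come from a documentation range rather than the real ones):

  TCP    47.5.105.236:51001     203.0.113.17:3389      SYN_SENT
  TCP    47.5.105.236:51002     203.0.113.18:3389      SYN_SENT
  TCP    47.5.105.236:51003     203.0.113.19:3389      SYN_SENT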

So I think Jute’s server was turned into a zombie which was tasked with recruiting new zombies.

We had finally frozen them out.

Later that night
Late that same night he calls me in a panic. He uses a bunch of downstream servers, and those connections weren’t working! The downstream servers run on a range of ports, 14800 – 15200.

Now bear in mind the provider only permits us 10 firewall rules, so it’s getting kind of tight. But we manage to squeeze in another rule:

SRC: ANY DST: 47.5.105.236 SRC_PORT: 14800-15200 DST_PORT: ANY ACTION: allow

He breathes a sigh of relief because this works! But I warn him we are opening a slight hole now. Short term there’s nothing we can do about it. It’s a small exposure: 400 allowed source ports out of 65000 possible. It should hold him for awhile with any luck.
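
To recap, the working rule set at this point looked roughly like this (my reconstruction, not a verbatim export from the provider’s firewall; I’m assuming rules are evaluated top to bottom with the first match winning, which is why the catch-all reject has to sit at the bottom; the earlier 135/139 blocks aren’t shown):

SRC: ANY DST: 47.5.105.236 SRC_PORT: 443 DST_PORT: ANY ACTION: allow
SRC: ANY DST: 47.5.105.236 SRC_PORT: 25 DST_PORT: ANY ACTION: allow
SRC: ANY DST: 47.5.105.236 SRC_PORT: 53 DST_PORT: ANY ACTION: allow
SRC: ANY DST: 47.5.105.236 SRC_PORT: 14800-15200 DST_PORT: ANY ACTION: allow
SRC: 175.198.0.0/16 DST: 47.5.105.236 SRC_PORT: ANY DST_PORT: 3389 ACTION: allow
SRC: ANY DST: 47.5.105.236 SRC_PORT: ANY DST_PORT: ANY ACTION: reject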

He also tried to apply all updates at my suggestion. I’m still not sure what vulnerability was (is) being exploited.

Case: tentatively closed

Our first attempt to use the Windows firewall itself was not initially successful. We may return to it.

Conclusion
We caught a zombie computer actively exploiting RDP on a Windows Server 2012 machine. We knocked it off, and it was immediately replaced with another zombie doing the same thing. Their task was to find more zombies to join to the botnet. Inbound rules defined on a stateless firewall were identified which stopped this exploitation while permitting desired traffic. Not so easy when you are limited to 10 firewall rules!

This is a case where IPBAN did us a favor. The system worked as it was supposed to. We got the alert, and acted on it immediately.

I’m not 100% sure which RDP vulnerability was exploited. It may have been an RCE – remote code execution not even requiring a valid logon.

References and related
The rest of the security world finally caught up with this, with Microsoft releasing a critical patch in May. I believe I was one of the first to publicly document this exploit. https://nakedsecurity.sophos.com/2019/06/10/the-goldbrute-botnet-is-trying-to-crack-open-1-5-million-rdp-servers/

Categories
Admin Proxy

Google Hangouts Meet – what do these IPs all have in common?

2021 update

142.250.82.77

142.250.82.113

142.250.82.28

142.250.82.126

2020 update

142.250.82.42

142.250.82.71
142.250.82.78
209.85.144.127
74.125.250.71

Suspected additional IPs

And I observed the user agent “Google Hangouts” trying to reach these IPs, but via the FQDN mtalk.google.com (whose resolution must be very dynamic) rather than by raw IP:

142.250.31.188
172.253.122.188
172.253.63.188

Older IPs
173.194.207.127
209.85.232.127
64.233.177.127
64.233.177.103
74.125.196.127

They all have been used by Google’s Hangouts Meet based on my observation.
If you have an environment which uses proxy authentication, the above IPs do not play well with that. So you’ll need to disable proxy authentication for them for Hangouts Meet to work.

Otherwise you can do the initial connect but will be dropped after about 45 seconds.

Finally, although you can look up each individually and learn of its association to Google, Google’s own documentation is devoid of any references to them. That is unfortunate.
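
If you want to check the Google association yourself, a quick reverse-lookup loop from any Linux box works (a minimal sketch; plug in whichever IPs you care about):

for ip in 142.250.82.77 142.250.82.113 142.250.82.28 142.250.82.126; do
    echo -n "$ip -> "
    dig +short -x $ip
done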

So the set of IPs in actual use is probably much larger, but these are the ones I’ve observed over the course of a few days of testing.

What I’ve done
Google is very hard to reach. They only provide indirect means for regular users. So not knowing any insiders I submitted feedback at https://meet.google.com/ which is what they suggest. I provided a detailed description. I doubt they will do anything with my feedback. We’ll see. Update. Correct. Months have passed and they never bothered to get back to me.

Conclusion
Google has an undocumented dependency on a whole set of IPs for Hangouts Meet to actually work through a proxy which requires authentication. Contacting Google for more information is probably impossible, but I will try.

Categories
Admin Security

Great serial port concentrator: Raritan Dominion

Intro
Every now and then you find a product that is a leap ahead of where you were. Such is the case for us with regards to our product of choice for serial consoles.

The old
For Bluecoat (Symantec) proxy and AV systems, we had been using an ancient Avocent CPS device. It permitted ssh connections. It was slow and the menu was very limited. But at least it did permit us to connect multiple serial consoles to one concentrator device.
For low-end firewalls we had been using DigiConnects, one per firewall. They are small, which may be their one advantage. They are tricky to set up initially, and then slow to use.

In with the new
We heard about the Raritan Dominion line of products, strangely enough, from some IT guys in Europe. It’s strange because Raritan is right here in New Jersey – the company name probably comes from the Raritan River. But our usual reseller had never heard of them. The specific device is a Dominion SXII.

It’s so much better than those older products. You can use their GUI to connect. This is a vendor who got their act together and eliminated Java. So many other security vendors have yet to do that, incredible as it is to say that.
It tries to autosense the wiring of the serial connector. That doesn’t always work, but it’s very easy for you to hardwire a port as DCE, or if that doesn’t work, try DTE. I use one type for my Symantec devices, another for firewalls.

Labelling the port with meaningful names is a snap, of course.

The Digis would interfere with the reboot process of the firewall, so we’d have to detach them before rebooting the firewall. These do not. So much better…

You can combine them with power control but we aren’t going to do that.

Don’t want to use the GUI? No problem, console access through ssh is also possible. Or configure dedicated ports that you ssh to for individual consoles.

Sending signals and cleanly disconnecting is easy with their menus. Connecting to multiple consoles is also no problem.

They have something called in-the-rack access. I know this will be useful but I haven’t figured out how to use it yet. But if it is what it sounds like it is, it will allow me to be in the server room and access any console by using a direct connection of some sort to the Dominion SXII.

And they’re just plain faster. A lot faster.

And, all things considered, they’re not so expensive.

They worked so much better than expected that we pretty much immediately filled up the ports with firewalls and other stuff.

Conclusion
A leap forward in productivity was realized by utilizing Raritan’s Dominion SXII serial port concentrator. Commissioning new security gear has never been easier…

References and related
Raritan’s web site: https://www.raritan.com

Categories
Admin Perl

Counting active leases on an old ISC DHCP server

Intro
Checkpoint Gaia offers a DHCP service, but it is based on a crude, old DHCP daemon implementation from ISC. It doesn’t give you much: mostly just the file /var/lib/dhcpd/dhcpd.leases, which it constantly updates. A typical dhcp client entry looks like this:

 
lease 10.24.69.22 {
  starts 5 2018/11/16 22:32:59;
  ends 6 2018/11/17 06:32:59;
  binding state active;
  next binding state free;
  hardware ethernet 30:d9:d9:20:ca:4f;
  uid "\0010\331\331 \312O";
  client-hostname "KeNoiPhone";
}


The details

So I modified a Perl script to take all those entries and make sense of them. I called it lease-examine.pl. Here it is:

#!/usr/bin/perl
# from https://askubuntu.com/questions/219609/how-do-i-show-active-dhcp-leases - DrJ 11/15/18
 
my $VERSION=0.03;
 
##my $leases_file = "/var/lib/dhcpd/dhcpd.leases";
my $leases_file = "/tmp/dhcpd.leases";
 
##use strict;
use Date::Parse;
 
my $now = time;
##print $now;
##exit;
# 12:22 PM 11/15/18 EST
#my $now = "1542302555";
my %seen;       # leases file has dupes (because logging failover stuff?). This hash will get rid of them.
 
open(L, $leases_file) or die "Cant open $leases_file : $!\n";
undef $/;
my @records = split /^lease\s+([\d\.]+)\s*\{/m, <L>;
shift @records; # remove stuff before first "lease" block
 
## process 2 array elements at a time: ip and data
foreach my $i (0 .. $#records) {
    next if $i % 2;
    ($ip, $_) = @records[$i, $i+1];
 
    s/^\n+//;     # && warn "leading spaces removed\n";
    s/[\s\}]+$//; # && warn "trailing junk removed\n";
 
    my ($s) = /^\s* starts \s+ \d+ \s+ (.*?);/xm;
    my ($e) = /^\s* ends   \s+ \d+ \s+ (.*?);/xm;
 
    ##my $start = str2time($s);
    ##my $end   = str2time($e);
    my $start = str2time($s,UTC);
    my $end   = str2time($e,UTC);
 
    my %h; # to hold values we want
 
    foreach my $rx ('binding', 'hardware', 'client-hostname') {
        my ($val) = /^\s*$rx.*?(\S+);/sm;
        $h{$rx} = $val;
    }
 
    my $formatted_output;
 
    if ($end && $end < $now) {
        $formatted_output =
            sprintf "%-15s : %-26s "              . "%19s "         . "%9s "     . "%24s    "              . "%24s\n",
                    $ip,     $h{'client-hostname'}, ""              , $h{binding}, "expired"               , scalar(localtime $end);
    }
    else {
        $formatted_output =
            sprintf "%-15s : %-26s "              . "%19s "         . "%9s "     . "%24s -- "              . "%24s\n",
                    $ip,     $h{'client-hostname'}, "($h{hardware})", $h{binding}, scalar(localtime $start), scalar(localtime $end);
    }
 
    next if $seen{$formatted_output};
    $seen{$formatted_output}++;
    print $formatted_output;
}

Even that script produces a thicket of confusing information. So then I further process it. I call this script dhcp-check.sh:

#!/bin/sh
# DrJ 11/15/18
# bring over current dhcp lease file from firewall FW-1
date
echo fetching lease file dhcpd.leases
scp admin@FW-1:/var/lib/dhcpd/dhcpd.leases /tmp
# analyze it. this should show us active leases
echo analyze dhcpd.leases
DIR=`dirname $0`
$DIR/lease-examine.pl|grep active|grep -v expired > /tmp/intermed-results
# intermed-results looks like:
#10.24.76.124   : "android-7fe22a415ce21c55" (50:92:b9:b8:92:a0)    active Thu Nov 15 11:32:13 2018 -- Thu Nov 15 15:32:13 2018
#10.24.76.197   : "android-283a4cb47edf3b8c" (98:39:8e:a6:4f:15)    active Thu Nov 15 11:37:23 2018 -- Thu Nov 15 15:32:14 2018
#10.24.70.236   : "other-Phone"            (38:25:6b:79:31:60)    active Thu Nov 15 11:32:24 2018 -- Thu Nov 15 15:32:24 2018
#10.24.74.133   : "iPhone-de-Lucia"          (34:08:bc:51:0b:ae)    active Thu Nov 15 07:32:26 2018 -- Thu Nov 15 15:32:26 2018
#exit
# further processing. remove the many duplicate lines
echo count active leases
awk '{print $1}' /tmp/intermed-results|sort -u|wc -l > /tmp/dhcp-active-count
echo count is `cat /tmp/dhcp-active-count`

And that script gives me what I believe is an accurate count of the active leases. I run it every 10 minutes from SiteScope and voila, we have a way to watch whether we’re coming close to running out of IP addresses.

Categories
Admin Linux Network Technologies SLES

Linux tip: how to enable remote syslog on SLES

Intro
I write this knowing I still don’t know anything to speak of about syslog, but sometimes you gotta act without knowing. I needed to send syslog somewhere in a big hurry, so I figured out the absolute minimum I needed to do to get it running on one of my other systems.

The details
This all started because of a deficiency in the F5 ASM. At best it’s so slow when looking through the error log. But in particular there was one error that always timed out when I tried to bring up the details, a severity 5 error, so it looked pretty important. Worse, local logging, even though it is selected, also does not work – the /var/log/asm file exists but contains basically nothing of interest. I suppose there is some super-fancy and complicated MySQL command you could run to view the logs, but that would take a long time to figure out.

So for me the simplest route was to enable remote syslog on a Linux server and send the ASM logging to it. This seems to be working, by the way.

The minimal steps
Again, this was for Suse Enterprise Linux running syslog-ng.

  1. modify /etc/sysconfig/syslog as per the next step
  2. SYSLOGD_PARAMS="-r"
  3. modify /etc/syslog-ng/syslog-ng.conf as per the next step
  4. uncomment this line: udp(ip("0.0.0.0") port(514));
  5. launch yast (I use curses-based yast [no X-Windows] which is really cantankerous)
  6. go to Security and Users -> Firewall -> Allowed services -> Internal Zone -> Advanced
  7. add udp port 514 as additional allowed Ports in internal zone and save it
  8. service syslog stop
  9. service syslog start
  10. You should start seeing entries in /var/log/localmessages as in this suitably anonymized example (I added a couple line breaks for clarity):
Jul 27 14:42:22 f5-drj-mgmt ASM:"7653503868885627313","50.17.188.196","/Common/drjohnstechtalk.com_profile","blocked","/drjcrm/bi/tjhmore345","0","Illegal URL,Attack signature detected","200021075","Automated client access ""curl""","US","<!--?xml version='1.0' encoding='UTF-8'?-->44e7f1ffebff2dfb-800000000000000044f7f1ffebff2dfb-800000000000000044e7f1ffe3ff2dfb-80000000000000000000000000000000-000000000000000042VIOL_ATTACK_SIGNATURErequest200021075
7VXNlci1BZ2VudDogY3VybC83LjE5LjcgKHg4Nl82NC1yZWRoYXQtbGludXgtZ251KSBsaWJjdXJsLzcuMTkuNyBOU1MvMy4yNy4xIHpsaWIvMS4yLjMgbGliaWRuLzEuMTggbGlic3NoMi8xLjQuMg0KSG9zdDogYWctaW50ZWw=
01638
VIOL_URL","GET /drjcrm/bi/tjhmore345 HTTP/1.1\r\nUser-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.27.1 zlib/1.2.3 libidn/1.18 libssh2/1.4.2\r\nHost: drjohnstechtalk.com\r\nAccept: */*\r\n\r\n"

Observations
Interestingly, there is no syslogd on this particular system, and the “-r” flag is designed for syslogd – it’s what turns it into a remote syslogging daemon. And yet it works.

It’s probably easy enough to log these messages to their own file; I just don’t know how to do it yet because I haven’t needed to. I learn as I need to, just as I learned enough to publish this tip.
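
For the curious, I suspect splitting them out would look something like this in /etc/syslog-ng/syslog-ng.conf (an untested sketch: src is, as far as I know, the default source name on SLES, and f5-drj-mgmt is the sending host from the example above):

filter f_asm { host("f5-drj-mgmt"); };
destination d_asm { file("/var/log/f5-asm.log"); };
log { source(src); filter(f_asm); destination(d_asm); };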

Conclusion
We have demonstrated activating the simplest possible remote syslogger on Suse Linux Enterprise Server.

References and related

Want to know what syslog is? Howtonetwork has this very good writeup: https://www.howtonetwork.com/technical/security-technical/syslog-server/

Categories
Admin Linux Security Web Site Technologies

The IT Detective Agency: the vanishing certificate error

Intro
I was confronted with a web site certificate error. A user was reluctant – correctly – to proceed to an internal web site because he saw a message to the effect:

I tried it myself with IE and got the same thing.
Switching to Chrome, I saw this error:

I wouldn’t bother to document this one except for a twist: the certificate error went away in IE when you clicked through to the login page.

Furthermore, when I examined the certificate with a tool I trust, openssl, it showed the date was not expired.

So what’s going on there?

The details
First thing I dug into was Chrome. I found this particular error can occur if you have an internal certificate issued with a valid common name, but without a Subject Alternative Name. My openssl examination confirmed this was indeed the case for this certificate.

So I decided the Chrome error was a red herring. And confirmed this after checking out other internal web sites which all suffered from this problem.

But that still leaves the IE error unexplained.

As I mentioned in a previous post, I created a shortcut bash function that combines several openssl commands, which I call examinecert:

examinecert () { echo|openssl s_client -servername "$@" -connect "$@":443|openssl x509 -text|more; }

Use it like this:

$ examinecert drjohnstechtalk.com

Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            04:17:21:b7:12:94:3a:fa:fd:a8:f3:f8:5e:2e:e4:52:35:71
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=US, O=Let's Encrypt, CN=Let's Encrypt Authority X3
        Validity
            Not Before: Apr  4 08:34:56 2018 GMT
            Not After : Jul  3 08:34:56 2018 GMT
        Subject: CN=drjohnstechtalk.com
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    00:d3:50:98:6d:72:03:b2:e4:01:3f:44:01:3d:eb:
                    ff:fc:68:7d:51:a4:09:90:48:3c:be:43:88:d7:ba:
                    ...
        X509v3 extensions:
                 ...
            X509v3 Subject Alternative Name:
                DNS:drjohnstechtalk.com
                ...

I tried to show a friend the error. I could no longer get IE to show a certificate error. So my friend tried IE. He saw that initial error.

Most people give up at this point. But my position is the kind where problems no one else can resolve come to get resolution. And certificates are somewhat a specialty of mine. So I was not ready to throw in the towel.

I mistrust all browsers. They cache information, try to present you sanitized information. It’s all misleading.

So I ran examinecert again. This time I got a different result. It showed an expired certificate. So I ran it again. It showed a valid, non-expired certificate. And again. It kept switching back-and-forth!
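
One way to catch that kind of flip-flopping is to poll just the expiration date in a loop, something like this (a sketch of the idea rather than the exact command I ran, with a stand-in hostname):

$ while true; do echo | openssl s_client -servername internal-site.example.com -connect internal-site.example.com:443 2>/dev/null | openssl x509 -noout -enddate; sleep 1; done

Watch the notAfter values: if they alternate between two different dates, you know two different certificates are being served.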

Here it helps to know some peripheral information. The certificate resides on an old F5 BigIP load balancer which I used to run. It has a known problem with updating certificates if you merely try to replace the certificate in the SSL client profile. It was clear from looking at the dates that the certificate had recently been renewed.

So I now had enough information to say the problem was on the load balancer and I could send the ticket over to the group that maintains it.

As for IE’s strange behavior? Also explainable for the most part. After an initial page with the expired certificate, if you click Continue to this web site it re-loads the page and gets the good certificate, so it no longer shows you the error! So when I clicked on the lock icon to examine the certificate, I was always getting the good version. In fact – and this is an example of the limitations of browsers like IE – you don’t have the option to examine the certificate about which it complained initially. Then I think IE caches this certificate so it persists sometimes even after closing and re-launching the browser.

Case closed.

Conclusion
An intermittent certificate error was explained and traced to a bad load balancer implementation of SSL profiles. The problem could only be understood by going the extra mile, being open-minded about possible causes and “using all my senses.” As I like to joke, that’s why I make the medium bucks!

Other conclusion? openssl is your friend.

References and related
My favorite openssl commands show how to use openssl x509 from any linux server.

Categories
Admin Network Technologies Security

The IT Detective agency: Some insights into 4096-bit SSL keys

Intro
I was recently asked if a new certificate a web site is about to deploy would require any changes to our clients such as needing to import this certificate into their Java keystore.

The details
Well, I saved the certificate on a Linux server, calling it my.crt, and examined it using openssl:

$ openssl x509 -text -in my.crt

My greatest hits amongst the openssl commands are listed here: My favorite openssl commands

Anyway, the output begins like this:

Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            68:5f:f8:b6:5e:56:c2:1d
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=US, ST=Arizona, L=Scottsdale, O=GoDaddy.com, Inc., OU=http://certs.godaddy.com/repository/, CN=Go Daddy Secure Certificate Authority - G2
        Validity
            Not Before: Apr  5 22:57:01 2018 GMT
            Not After : Apr  5 22:57:01 2020 GMT
        Subject: 1.3.6.1.4.1.311.60.2.1.3=US/1.3.6.1.4.1.311.60.2.1.2=California/2.5.4.15=Private Organization/serialNumber=C2417721, C=US, ST=California, L=Carlsbad, CN=www.drj.com
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
            RSA Public Key: (4096 bit)
                Modulus (4096 bit):
                    00:da:c7:18:a2:4d:b5:c9:95:22:b0:64:50:e7:b8:
                    ...

So I checked the text after the Issuer field: C=US, ST=Arizona, L=Scottsdale, O=GoDaddy.com, Inc., OU=http://certs.godaddy.com/repository/, CN=Go Daddy Secure Certificate Authority - G2.
This is the intermediate CA. And it exactly matches the current certificate we already trust. So no problem, we are good to go, right? Not so fast, grasshopper. This certificate contains a totally new element for us. I happened to notice it has a 4096-bit key length. Never seen that before, though I have heard about it.

How do we even know our old browsers and even proxy server are going to be good with that? The best way I reasoned is simply to find another site with a 4096 bit certificate. Well, it took me almost an hour before I found one, and DDG and Google searches proved fruitless. I found it by taking logical guesses, as in, surely some security-minded organization has deployed these already??

ssllabs.com? Nope. godaddy.com? Nope. www.google.com? Nope. Gnupg.org? Nah-ah. Let’s Encrypt? Also a no. Then I tried nist.org and found the weirdest thing. It sends several certificates, one of which is *.bluehost.com, which is 4096 bits. But that makes no sense as part of the certificates on nist.org, as an ssllabs.com server eval will tell you. So then I tried www.bluehost.com. Paydirt!

$ examinecert www.bluehost.com

Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            af:a7:b9:22:4f:d5:7e:6b:78:4b:5a:23:d0:35:50:23
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=GB, ST=Greater Manchester, L=Salford, O=COMODO CA Limited, CN=COMODO RSA Domain Validation Secure Server CA
        Validity
            Not Before: Oct 16 00:00:00 2015 GMT
            Not After : Oct 17 23:59:59 2018 GMT
        Subject: OU=Domain Control Validated, OU=Hosted by BlueHost.Com, INC, OU=PositiveSSL Wildcard, CN=*.unifiedlayer.com
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (4096 bit)
                Modulus:
                    00:c5:2b:10:d2:20:bb:d9:1b:e1:d3:b2:d1:9b:6f:
                    ...

examinecert is a bash function I created defined as:

examinecert () { echo|openssl s_client -connect "$@":443|openssl x509 -text|more; }
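
Incidentally, with examinecert defined you can confirm a site’s key length without wading through the whole dump, for example:

$ examinecert www.bluehost.com | grep -i 'Public-Key'
                Public-Key: (4096 bit)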

And for this company that brings up a host of questions. If their aging IE 11 has never encountered a web site with this long a key length, how will we know what will happen the first time?

Also, some sites get SSL intercepted by Bluecoat proxy. How will that infrastructure handle it? Will it handle it?

That’s why it was so important to find a real-world example, as painful as that exercise proved to be.

The answers are somewhat surprising.

Yes, ancient Internet Explorer probably handles 4096 bit key lengths just fine. I actually haven’t fully tested that one yet.

But it doesn’t matter for this company. Their Bluecoat proxy intercepts the SSL. So, yes, that part works: it re-creates its own certificate, but issues it with a standard 2048-bit key length! That is what IE sees, so I know there will be no issue there. I say surprising because usually the generated certificates carefully preserve all aspects of the original: same expiration date, same common name, etc. Whether or not this key length reduction is configurable I have yet to find out.

Follow up
As a result of my prodding, badssl.com will include a 4096-bit certificate with which to test things out.

Conclusion
After an arduous search (I’m sure next year this time this will become much easier) we found a public site which can be used to test 4096 bit key lengths: www.bluehost.com. Obviously GoDaddy also issues 4096-bit certificates since that is what this particular web site uses as their issuer, but I have yet to find an actual live example of one.

Bluecoat SSL interception by default does handle this long key length, but generates its private version of the certificate with only a 2048-bit key length, to our surprise.

Just remember, if you have a Raspberry Pi you can run all these commands that I’ve shown because you have a bona fide Linux system.

Case: closed!

References and related
This site has all sorts of SSL scenarios to test against: https://badssl.com/.
To jump straight to their 4096-bit CERT: https://rsa4096.badssl.com/

Categories
Admin Web Site Technologies

A taste of the Instagram API

Intro
I always want to know more about how things really work behind the scenes, so I was excited when I overheard talk about how one company uses the Instagram API to do some cool things. An API is an application programming interface. It allows you to write programs to automate tasks and do some really cool stuff. So I spoke to one of my sources who shared with me a few companies he knows about who use Instagram’s API to do some cool things. Unfortunately, none of them were willing to reveal the technical details of how they interact with the API, so I am left with only the marketing descriptions of what they have managed to do with it. But what they don’t realize is that as a capable IT person, in some cases I only have to hear that a thing is possible to motivate me. I have literally gone into meetings telling a customer No that’s not possible, hearing from them Yeah, well, they have it running in Europe, and going back to my desk afterwards to totally revise my opinion of what is or isn’t possible and how it could be done. Having said all that, here is what these companies have managed to do, without revealing the secret sauce of how they do it.

Example apps
Post scheduling software
This is used by social media managers to schedule their Instagram posts weeks or months in advance. It allows them to make a bunch of posts at once quickly and saves them time. A friend of a friend in NYC owns a company that does this. His website is bettrsocial.com

Analytic software
Simply Measured offers a free Instagram report for users with up to 25,000 followers. The stats and insights are presented clearly and will help inform your Instagram posting strategy. The report lets you quickly see what has worked well in your Instagram marketing so you can apply these insights to future posts. Web site: https://simplymeasured.com

Automation software
Some companies connect with Instagram’s API to automate redundant tasks and increase traffic to your Instagram page. Social Network Elite is one of the best sources for growing organic Instagram followers.

Conclusion
Although I don’t even have an Instagram account, I am interested in APIs. The Instagram API does not look too daunting and seems well-documented. I cite a few small businesses that put it to use to do cool stuff. Unfortunately at this time I can’t deliver on the promise of the title of this article – a taste of the API – because I haven’t received any details about the actual usage. Perhaps in some future I will get my own account and develop my own application.

References and related
The Instagram API is documented here: https://www.instagram.com/developer/
My attempt to use the GoDaddy domain API.

Categories
Admin Linux Network Technologies

Measuring bandwidth on Checkpoint Gaia

Intro
Sometimes you don’t have the tools you want, but you have enough to make do. Such is the case with the command-line utilities of Checkpoint Gaia’s CLI. It’s like a basic Linux. The company I consult for is beginning to hit some bandwidth limits and I wanted to understand overall traffic flow better. In the absence of any proper bandwidth monitors I used the netstat command and some approximations. Crude though it may be, this approach already gave me a much better idea about my traffic than I had going into this project.

The details
I call this BASH script netstats.sh

#!/bin/bash
# for Gaia, not IPSO
c=0
sleep=2
while /bin/true; do
  v[1]=`netstat -Ieth1-01 -e|grep RX|grep TX`
  n[1]="vlan 102           "
  v[2]=`netstat -Ieth1-05 -e|grep RX|grep TX`
  n[2]="vlan 103 200.78.39    "
  v[3]=`netstat -Ieth1-02 -e|grep RX|grep TX`
  n[3]="vlan 103 10.31.42"
  v[4]=`netstat -Ieth1-03 -e|grep RX|grep TX`
  n[4]="trunk for VPN      "
# interesting line:
#           RX bytes:4785585828883 (4.3 TiB)  TX bytes:7150474860130 (6.5 TiB)
  date
  for i in {1..4}; do
    RX=`echo ${v[$i]}|cut -d: -f2|awk '{print $1}'`
    TX=`echo ${v[$i]}|cut -d: -f3|awk '{print $1}'`
#    echo "vlan ${n[$i]}        RX,TX: $RX, $TX"
    if [ $c -gt 0 ]; then
      RXdiff=`expr $RX - ${RXold[$i]}`
      TXdiff=`expr $TX - ${TXold[$i]}`
# observed scaling factor: 8.1 bits/byte
      RXrate=$(($RXdiff*81/$sleep/10000000))
      TXrate=$(($TXdiff*81/$sleep/10000000))
      echo "${n[$i]}    RX,TX: $RXrate, $TXrate Mbps"
    fi
# old values
    RXold[$i]=$RX
    TXold[$i]=$TX
  done
  c=$(( $c + 1 ))
  sleep $sleep
done
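
Sample output looks something like this (the Mbps values here are invented for illustration; the first pass prints only the date because there is no previous sample to diff against):

Thu Jan 17 09:10:11 EST 2019
Thu Jan 17 09:10:13 EST 2019
vlan 102                 RX,TX: 311, 426 Mbps
vlan 103 200.78.39       RX,TX: 12, 8 Mbps
vlan 103 10.31.42        RX,TX: 87, 61 Mbps
trunk for VPN            RX,TX: 143, 109 Mbps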

It’s pretty self-explanatory. I would just note that in the older IPSO OS you don’t have the ability to get the bytes transferred from netstat, just the number of packets, which is an inherently cruder measure. The calibration of 8.1 bits per byte (there is overhead from the frames) is maybe a little crude, but it’s what I measured over the course of a couple minutes.

A quick glance at Redhat or CentOS shows me that this same script, with appropriate modifications for the interface names (eth0, eth1, etc), would also work on those OSes.

IPSO
I really, really wanted some kind of measure for IPSO as well. So I tackled that as best I could. Here is that script:

#!/bin/bash
# for IPSO, not Gaia
c=0
while [ 1 -gt 0 ]; do
# eth1-01: vlan 802; eth1-05: vlan 803 (144.29); eth1-02: vlan 803 (10.201.145)
  v[1]=`netstat -Ieth-s4p1|tail -1`
  n[1]="vlan 208.129.99     "
  v[2]=`netstat -Ieth-s4p2|tail -1`
  n[2]="vlan 208.156.254     "
  v[3]=`netstat -Ieth-s4p3|tail -1`
  n[3]="vlan 208.149.129     "
  v[4]=`netstat -Ieth-s4p4|tail -1`
  n[4]="trunk for Cisco and b2b"
# interesting line:
#Name         Mtu   Network     Address             Ipkts Ierrs    Opkts Oerrs  Coll
#eth-s4p1     16018 <Link>      0:a0:8e:c4:ff:f4 72780201     0 56423000     0     0
  date
  for i in {1..4}; do
    RX=`echo ${v[$i]}|awk '{print $5}'`
    TX=`echo ${v[$i]}|awk '{print $7}'`
#    echo "vlan ${n[$i]}        RX,TX: $RX, $TX"
    if [ $c -gt 0 ]; then
      RXdiff=$(($RX - ${RXold[$i]}))
      TXdiff=$(($TX - ${TXold[$i]}))
# observed: .0043 mbits/packet
      RXrate=$(($RXdiff*43/100000))
# observed: .0056 mbits/packet
      TXrate=$(($TXdiff*56/100000))
      echo "${n[$i]}    RX,TX: $RXrate, $TXrate Mbps"
    fi
# old values
    RXold[$i]=$RX
    TXold[$i]=$TX
  done
  c=$(( $c + 1 ))
  sleep 10
done

The conversion to bits is probably only accurate to +/- 25%, because it depends a lot on the application, e.g., VPN concentrator versus proxy server. I just averaged all applications together because that’s the best I could do. I compared it to a Cisco router’s statistics.

Note that in Gaia, cpview can also be run from the CLI. Then you can drill down to the specific interface information. I have compared my script to using cpview (which has a default screen update time of 2 seconds) and they’re pretty close. As far as I know there is no way to script cpview. And at the end of the day I suspect it is only doing the same thing my script does.

Conclusion
A script is provided which gives a measure of Mbps bandwidth usage by polling netstat periodically. It’s not exact, but even crude measures can help a network engineer.

Categories
Admin Linux

Linux Tip: too lazy to write a startup script? how to fake it

Intro
systemd can be pretty formidable to master. Say you have your own little script you like to run but you don’t want to bother with inserting it into the systemd facility. What can you do?

The details
A simple trick is to insert the startup of the script into a crontab like this:

@reboot <path-to-your-script>
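
For example, to launch a hypothetical monitoring script at boot and capture its output (the script path and log file here are just placeholders):

@reboot /usr/local/bin/my-monitor.sh >> /var/log/my-monitor.log 2>&1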

For more details on how and why this works and some other crontab oddities try

$ man -s 5 crontab

An old Unix hand pointed this out to me recently. I am going to make a lot more use of it…

Of course niceties such as run levels, order of startup, etc. are not really controllable (I guess). Or maybe all your @reboot scripts are simply processed in order, top to bottom.