Categories
Admin Internet Mail Linux

The IT Detective Agency: generated email goofs attachments

Intro
Today was a busy day! A rather expert user asks for my advice about emails generated on his own Unix system, by his own processes, which occasionally come out with the encoding of the attachments showing up in the body of the message.

I am not a message formatting expert, or at least I wasn't prior to this question today. But if a sendmail expert can't provide an answer, who will? So anyway, this user, let's call him Rob, forwards me this email:

Dear DrJohn,
 
I was wondering if you have some idea as to why sometimes the attachment 
is getting included in the text of the email instead of being recognized as attachment, 
see example below.
 
Any pointers would be helpful, as this makes it at the least cumbersome 
to open the attachment by cutting, pasting the text attachment part 
as file and uudecoding it into binary before it can be opened for content view.
 
Regards,
Rob
 
Mime-Version: 1.0
Content-Type: multipart/mixed;
                 boundary="----=_Part_168946_477699193.1322837415283"
X-Mailer: sendmsg
 
------=_Part_168946_477699193.1322837415283
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
 
Dear Team;
 
While executing the flow, In.process:RapidProductMovementReport_New, the following exception occurred.
 
java.lang.Exception: java.sql.SQLException:ORA-12899: value too large for column "B2B_RAPID"."P_MOVEMENT_DETAIL"."UNIT_OF_MEASURE_TYP" (actual: 3, maximum: 2)
 
 
Error Dump :
com.wm.lang.flow.FlowException:java.lang.Exception: java.sql.SQLException:ORA-12899: value too large for column ... (actual: 3, maximum: 2)
 
 
Pipeline values (see attachment)
 
Caller: Rapid_In.process:RapidRouter
 
Stack: %serviceStack%
------=_Part_168946_477699193.1322837415283
Content-Type: application/gzip; name=pipeline.xml.gz
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename=pipeline.xml.gz
 
H4sIAAAAAAAAAOy9bXPiSJr3+/rMp8hwbEzMHbFFKVOpp57umpVBBp0SkkcS5WHeVLhtutr32sbH
dtV0f/uTEmDLIMBISErBf2O3to2SJJUP18PvujLz53/8cXdLfkwen26m97+c0I5yQib3V9Prm/tv
v5yM4rMP5sk/Pv3l5y+Xt98nT68FmSj46S+E/PwjeUDuL+8mv5xcT6++303un+M/HyYnn84fp9ff
r56H0x+T5MNw8jB9fP75Y/qFla/eX/72fOlNvzl3lze39vX14+Tp6eTTKTu1r+9u7v/n18un3zpX
07t1X7++eZxcPYuWnXxy/dNg5PfWlbydfvvt5naS/HHy6eP04fnjfya/3k2ef59eP328fHi4vbm6
TCoS5Z4.......

Hmm. So what do you think? I have often seen these Mime-type headers and paid absolutely no attention to them. When things work what does it matter? But now they’re not working and it does matter.

First I tried to reproduce the problem using a technique I had gleaned looking over the shoulders of some Unix admins. I knew they had an easy method to send attachments from the command line of their Unix systems so I walked over and asked them how they did it. Sure enough, it was dead easy. It goes something like this:

# uuencode file.txt file.txt|mailx -s "here is your attachment" recipient_address

I rolled up my sleeves and set recipient_address to a valid SMTP mail address. Sure enough, I got it as an attachment in Lotus Notes. But it's an ugly attachment and doesn't have any of the nice MIME formatting about it. So it's probably a bit of luck that my MUA (mail user agent) understands that I mean to create an attachment. I don't think all MUAs will do that, unless they're following a more obscure RFC which I'm not aware of.

The original source of the message looks like this (my attachment is called cogstartup.xml.gz):

Date: Fri, 2 Dec 2011 14:54:34 -0500
From: ...
Message-Id: ...
To: ...
Subject: test using uuencode
X-MIMETrack: Itemize by SMTP Server on ... (Release 7.0.4|March 23, 2009) at
 02/12/2011 20:54:36,
		 Serialize by Notes Client ... (Release 8.5.1|September
 28, 2009) at 12/02/2011 04:12:40 PM,
		 Serialize complete at 12/02/2011 04:12:40 PM
X-TNEFEvaluated: 1
 
begin 644 cogstartup.xml.gz
M'XL(""57L4X``V-O9W-T87)T=7`N>&UL`.U]VW;;R+'H^WP%HA?-G$5)MN<2
MCW?&>].2/:/$NL24,\EY\0(!D$0,`@P`2N9\_:E+W]$@`(J2LG-&*QE+)-!=
M755=MZZ...

Very different from what we saw above, right? None of those MIME-related headers seem to be present. So I decided that uuencode is too primitive to reproduce the problem.

I was next going to try to generate an email with the help of a Perl module, like MIME::Lite. But I decided that was too much trouble as I would need to download and install it first!

So I ventured to see if I could get lucky and figure out the problem by educating myself on the standard, without wasting too much time. The relevant RFC seems to be 1341. I prefer the older RFCs because I suppose they're shorter – easier to understand because life was simpler in those days! Once you parse through the verbiage and repetition, there isn't much to it. In particular, it mentions that the header

Mime-Version: 1.0

has to be included amongst the header fields. If it is not, the correct behaviour for a MUA is to interpret all the encodings and boundary markers as just regular body text, which it will display to the user.

I ran a test using sendmail as my sending MUA from a Linux server. With the sendmail agent you can add headers, at least that’s how I remember it:

# sudo sendmail -v recipient < tst

where tst is a file I created that starts with the line

Mime-Version: 1.0

Yes, indeedy. I received it and my MUA interpreted the attachment as an attachment, displaying the attachment name and the appropriate icon type for it!

Now put just a blank line at the top of my tst file, pushing all the rest of the stuff down by a line, and the behaviour is completely different. Then my MUA treats everything as literal body text, just as the old RFC says it must, and it looks just the way it did when Rob forwarded it to me.
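
For reference, the tst file looked roughly like this (a reconstruction rather than the exact file; the boundary string and attachment name are illustrative):

Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="XXXXboundary"
Subject: test of MIME attachment

--XXXXboundary
Content-Type: text/plain; charset=us-ascii

Body text goes here.
--XXXXboundary
Content-Type: application/gzip; name=pipeline.xml.gz
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename=pipeline.xml.gz

H4sIAAAAAAAA... (base64-encoded data)
--XXXXboundary--

Note that Mime-Version: 1.0 is the very first line of the file – sendmail treats everything up to the first blank line as headers.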

Conclusion
I explained to Rob that he must sometimes be introducing an extra line above the Mime-Version header, which would cause this problem.

He thanked me.

Case closed!

Categories
Admin IT Operational Excellence

The IT Detective Agency: SAP Afaria communication to APNS failed

Intro
IT is often called in and expected to produce results when only vague or partial information is presented. This is especially so when integrating commercial software.

The Issue
We were looking to implement SAP SUP (Sybase Unwired Platform). Part of it – for mobile device management for iOS devices – requires an Afaria server to communicate with the Apple Push Notification Service, gateway.push.apple.com, port 2195; and feedback.push.apple.com, port 2196.

Whatever software it is that’s running on Afaria, it’s not well-behaved insofar as it does not support proxy access. That means it expects to be on the Internet and be able to initiate these communication channels unencumbered.

Since I consider that to be a security risk I looked for a way to mediate this communication over a proxy server anyway, even though it wasn't supposed to work.

Yes, we did get it to work, but not exactly in the way we expected.

We built TCP tunnels on the proxy.
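
To give the flavor of a generic TCP tunnel (not necessarily the exact mechanism we used – that's for the follow-up – but socat illustrates the idea, run on the mediating host):

# forward local port 2195 to Apple's push gateway
socat TCP-LISTEN:2195,fork TCP:gateway.push.apple.com:2195
# and likewise for the feedback service
socat TCP-LISTEN:2196,fork TCP:feedback.push.apple.com:2196

The Afaria server then talks to the mediating host instead of reaching the apple.com hosts directly.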

To be continued…

Categories
Admin IT Operational Excellence Linux SLES

The IT Detective Agency: Cognos stopped working

Intro
Here’s another in our continuing exciting IT drama. A user reports that her Cognos app stopped working. She’s in charge of the Cognos application servers, I run the Cognos gateway on a Linux server. I have almost no working knowledge of Cognos. I learned just enough to get the gateway installed and configured on Linux, specifically SLES. Cognos is used for business intelligence reports and is now owned by IBM.

The Details
The home page came up just fine, so I knew the web server – Apache, of course – was working. I knew I hadn't changed anything on the gateway. She also said she hadn't changed anything on the dispatcher. So she asked me to save the config. It's an X application. I ran cogconfig.sh, which by the way is in COGNOS-INSTALL_DIR/bin64, not COGNOS-INSTALL_DIR/bin, contrary to the documentation for Linux. I could not save the config. She asked me to export it. I couldn't do that either! I got the error

CAM-CRP-1057 unable to generate the machine specific symmetric key.

She asked me to delete the keypairs. These are in the directories COGNOS-INSTALL_DIR/configuration/{signkeypair,encryptkeypair}. So I cleared those out. Still I could not save or export the configuration. I quickly switched to a Solaris server, which we had hoped to retire, in order to get a working gateway while we mulled the problem over.

Over the next days I checked to see if Java had changed. Getting a working JRE was a little tricky on SLES. Nothing had changed. After the system admin came back from vacation the next week I asked if by chance he had changed anything. The last log showed he was logged in at the time. He admitted to changing one thing.

He changed the system name. This system has multiple interfaces and a unique hostname for each interface. The hosts file in /etc/hosts included entries for each of the interface IPs. Seeing there were no other changes I concluded that this little innocent act was enough to kill the communication. Note that he did not change any of the routing, however. When you’re dealing with encryption, it can be that the system name is significant. So when those keys were initially generated they were tied to that name and would only work with that original hostname. At least that is my reverse engineering of the matter. Cognos is a pretty closed system so it’s hard to pin down more precisely what is going on.
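
If I were debugging this again, my first sanity check would be something like the following (standard Linux commands, nothing Cognos-specific):

# what name does the box report?
hostname
hostname -f
# do the /etc/hosts entries still agree with it?
grep -i "$(hostname)" /etc/hosts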

Conclusion
The hostname was changed back to the original name. Sure enough, now I can export the config and most importantly, save it without any errors.

Case closed!

Lessons Learned
Well, avoiding finger-pointing and quick judgements was helpful in this case. Of course I suspected she actually had done something to the dispatcher, but I behaved as though the problem might be on my side. We treated each other professionally while the system was down and we had no clue why. That was very helpful.

Categories
Admin IT Operational Excellence Linux SLES

The IT Detective Agency: the case of the messages from mars

Intro
Today we got a “funny” message on our SLES 11 server in the /var/log/warn file. You might think that Martians have landed!

The Details
Specifically this:

Nov 9 10:54:19 drjohn24 kernel: [72397.088297] martian source 10.120.2.24 from 10.0.0.3, on dev eth1
Nov 9 10:54:19 drjohn24 kernel: [72397.088300] ll header: 78:e7:d1:7b:25:32:00:a0:8e:a8:8e:b3:08:00

Every time I pinged 10.120.2.24 (drjohn24) from 10.0.0.3 it would produce those two lines in the warn and messages file. More worrisome, I could not ssh from one host to the other. I could ssh from a host on the local network to drjohn24. We observed this behaviour even with the firewall disabled. Strange, right?

One more thing to note: drjohn24 has two network interfaces and various routes defined.

The Solution
It didn’t take too long to get to the bottom of this. We set up the routes wrong. We meant to create a default route out of eth0, which was right, and a net-10 route for eth1, which we specified incorrectly. Do

netstat -rn

to show all routes. I had this:

Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
10.120.2.0      0.0.0.0         255.255.255.128 U         0 0          0 eth1
10.120.3.0      0.0.0.0         255.255.255.0   U         0 0          0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U         0 0          0 eth0
10.0.0.0        10.120.2.1      255.255.255.128 UG        0 0          0 eth1
128.0.0.0       0.0.0.0         255.0.0.0       U         0 0          0 lo
0.0.0.0         10.120.3.1      0.0.0.0         UG        0 0          0 eth0

Do you see the error? We put the same mask on the 10.0.0.0 route (255.255.255.128) as we put on the eth1 interface route, when what we wanted was a mask covering all of net 10.

The corrected version looks like this:

Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
...
10.0.0.0        10.120.2.1      255.0.0.0       UG        0 0          0 eth1
...
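
The fix at the command line is something like this (net-tools syntax; on SLES you'd also make it permanent in /etc/sysconfig/network/routes):

# route del -net 10.0.0.0 netmask 255.255.255.128
# route add -net 10.0.0.0 netmask 255.0.0.0 gw 10.120.2.1 dev eth1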

Conclusion
So what was happening is that SLES 11 applies reverse-path filtering: based on its routing table, the kernel checks whether a packet's source IP is arriving on the interface the kernel would itself use to reach that IP. Because of our error, the route covering 10.0.0.3 pointed at the wrong interface, so the inbound packets were flagged as martians. And apparently even with the firewall turned off, SLES gets very defensive at this point. I'm not sure if it was sending return packets out of eth1 or not, because I kept looking for them out of eth0!

Once we corrected the routes the inbound packet arrived at eth0 and was returned with an answer packet from eth0 and the martian messages went away.

The martian message thing is a little obscure, and at the time more a distraction than anything else as we had to research what that meant. I guess for the future we’ll instantly know. It’s very similar to defining network topology on your firewalls in an anti-spoofing defense.
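
For the record, this logging and filtering behaviour is controlled by standard kernel sysctls (names from the stock Linux ip-sysctl documentation):

# is reverse-path filtering enabled?
sysctl net.ipv4.conf.all.rp_filter
# is martian logging enabled?
sysctl net.ipv4.conf.all.log_martians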

Case closed!

Categories
Uncategorized

Running Multiple Web Server Instances on SLES

Intro
My initial plan was to run multiple virtualhosts under one global umbrella. But my virtualhosts have to do diverse things. One will front-end JBoss, another will front-end IBM WebSphere, etc. I thought, or rather hoped, that the differences could all be expressed in the separate virtualhost sections. But I was stopped dead in my tracks with JBoss when I tried that:

Syntax error on line 28 of /usr/local/apache2/conf/mod-jk.conf:
JkWorkersFile cannot occur within <VirtualHost> section

Hmm. So JkWorkersFile has to appear in the global section. I’m not comfortable with that. I guess it’s time to consider running separate processes for my different classes of service. How best to do that?
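
The usual approach – and where I'm headed – is to start the same httpd binary several times, each with its own config file (file names here are illustrative; each config needs its own Listen, PidFile and log directives so the instances don't collide):

# /usr/local/apache2/bin/httpd -f /usr/local/apache2/conf/httpd-jboss.conf -k start
# /usr/local/apache2/bin/httpd -f /usr/local/apache2/conf/httpd-websphere.conf -k start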

To be continued…

Categories
Linux Perl

Words with Friends Gentle Word Hints

Intro
My friend and I are in a perpetual game of Words With Friends on our smartphones these days. It suits me to a T because I like to take a looong time to come up with just the right move (although as time has passed I've become less enamored with the app, as you'll see if you have the patience to read through the whole article, which was written progressively as I continued to play more games). And you can't knock over the board and lose all your played tiles! Yet you can still get advice from other friends by showing the board on your smartphone.

I have a good vocabulary, my friend a little less so. But he takes advantage of a peculiar quirk of playing the game in this way – he makes up words and sees if they're accepted by the program. And…sometimes they are! So Words with Friends uses a ridiculous dictionary or dictionaries. That's how he came up with hila. Later I turned the tables on him. I played a “word” that I didn’t believe to be a word, but rather one that I believed Words with Friends might believe to be a word: carbo. Sure enough. Accepted. 59 points. The “o” allowed me to stretch to reach the triple word tile and intersect a “T” to make “to” across. Check the American Heritage dictionary. Not there, except as a prefix. My next best idea would only have been about 21 points.

I tried one of those cheating programs, www.lexicalwordfinder.com. I had the letters e, e, u, m, n, a and, I think, another a. What does it suggest? neume. Ha? Who in the world ever heard of that? So it’s obviously plugged into those ridiculous dictionaries. You might as well let computers play other computers if you’re going to use cheats like that. My idea is to make gentle suggestions that you the educated person would have thought of on your own if you had enough time. Or maybe like me it’s on the tip of your tongue but you can’t quite find it in your brain. So I am writing a simple program which draws on a common dictionary – words every well-educated person ought to know. That’s also easier to program! Since I can’t compete with the big boys, my contribution will be to show the steps of how I am writing such a program.

I’m running Ubuntu server, which is a Debian Linux variant. Initially I wasn’t sure what all I would need so I did:

# sudo apt-get install dictd dict dict-wn dict-gcide

in the hopes of getting a words file! dict-wn is the WordNet dictionary; gcide is the GNU Collaborative International Dictionary of English.

I found the goods in /usr/share/dictd. wn.index has 87,924 unique words:

# awk '{print $1}' wn.index|uniq|wc

March, 2013 update – CentOS
I’ve since switched my hosting platform to CentOS. There the dictionary is /usr/share/dict/linux.words. If you don’t have it, install the package words-3.0-17.el6. It has 480,000 “words”! I’m not sure why the huge discrepancy, but I know that’s a lot more words than are traditionally mentioned as the number of words in the English language.

Back to wn.index: after removing numbers and punctuation marks it’s at 80,724:

# awk '{print $1}' wn.index|uniq|egrep -v "[0-9'.-]"|wc

Here are the first few words now:

# awk '{print $1}' wn.index|uniq|egrep -v "[0-9'.-]"|head -25
a
aa
aaa
aachen
aah
aaland
aalborg
aalii
aalst
aalto
aar
aardvark
aardwolf
aare
aarhus
aaron
aarp
aas
aave
ab
aba
abaca
abacinate
aback
abactinal

Many of these look suspicious. I want to at least throw out the proper names. I don’t care if Words with Friends accepts them or not. I don’t believe they should be used. For instance:

# dict aar
1 definition found

From WordNet (r) 3.0 (2006) [wn]:

  Aar
      n 1: a river in north central Switzerland that runs northeast
           into the Rhine [syn: {Aare}, {Aar}, {Aare River}]

So how are we going to get rid of the words with capitals? Unfortunately in the index they are all in lower case. The only possibility I see is to do a dictionary lookup on each and every one. Here’s what I came up with for that. I’m sure some wiser guy could write this as a 1-liner, but hey, it is what it is:

#!/usr/bin/perl
# DrJohn, 11/2011
# check if words are upper or lower case
# we can only learn when doing a dict lookup
# input are proposed words
$DEBUG = 0;
while(<>) {
  chomp;
  $word = $_;
  $cnt = 0;
  open(DEF,"dict -d wn $word|");
  while(<DEF>) {
    $cnt++;
    if ($cnt == 5) {
# this is the line that repeats the word
      ($wordagain) = $_ =~ /(\w+)/;
      print $wordagain if $DEBUG;
      print "$word\n" if $wordagain eq $word;
      last;
    } # end line five condition stuff
  } # end loop over definition
} # end STDIN

Run it on our word list and the first few results are now like this:

aa
aah
No definitions found for "aaland"
aalii
aardvark
aardwolf
aba
abaca
abacinate
aback
abactinal

We got rid of Aar and many others, but we also got rid of one of the most common words – “a.” Not that it matters for this game, but this points to a possibility that the choice of case in the definition is somewhat arbitrary, and a word with several meanings, any of which requires upper case, say like English, is going to be written upper case. Indeed that is the case for English, which of course is a nice word and perfectly acceptable when used with the meaning “(sports) the spin given to a ball by striking it on one side.” Sigh, nothing’s ever easy. So let’s modify our program to make a separate list of rejected words so we can review it by hand and add back in words which can be used in lower case. Whenever you do something by eye you want to reduce the task as much as possible. Here we can take advantage of another fact: a capitalized word with a single definition is not of interest to us because that single definition is the one for which capitalization is required, like Aar. So we only need to consider the cases of capitalized words with multiple definitions, one of which may be typically used with the word spelled in lower case, like English.

We showed the results syntax for a word with single definition above, for Aar. Here’s an example of a capitalized word with multiple definitions:

# dict -d wn english
1 definition found

From WordNet (r) 3.0 (2006) [wn]:

  English
      adj 1: of or relating to or characteristic of England or its
             culture or people; "English history"; "the English landed
             aristocracy"; "English literature"
      2: of or relating to the English language
      n 1: an Indo-European language belonging to the West Germanic
           branch; the official language of Britain and the United
           States and most of the commonwealth countries [syn:
           {English}, {English language}]
      2: the people of England [syn: {English}, {English people}]
      3: the discipline that studies the English language and
         literature
      4: (sports) the spin given to a ball by striking it on one side
         or releasing it with a sharp twist [syn: {English}, {side}]

There are different characteristics we could use as markers. I propose to look for a digit immediately followed by a colon as a definition marker. More than one occurrence probably means the word has multiple definitions and we should consider it. So here’s a re-worked version of our program to accomplish that:

#!/usr/bin/perl
# DrJohn, 11/2011
# check if words are upper or lower case
# we can only learn when doing a dict lookup
# input are proposed words
$DEBUG = 0;
open(CAND,">/tmp/candidates") || die "cannot open /tmp/candidates!!\n";
while(<>) {
  chomp;
  $word = $_;
  $cnt = 0;
  $cand = 0;
  $cntdef = 0;
  open(DEF,"dict -d wn $word|");
  while(<DEF>) {
    $cnt++;
    if ($cnt == 5) {
# this is the line that repeats the word
      ($wordagain) = $_ =~ /(\w+)/;
      print $wordagain if $DEBUG;
      if ($wordagain eq $word) {
        print "$word\n";
        last;
      } else {
# maybe there are multiple definitions
        $cand = 1;
      }
    } elsif ($cand) {
      $cntdef++ if /\d:/;
    } # end line five condition stuff
  } # end loop over definition
# print candidate rejected word if there were multiple definitions
  print CAND "$word\n" if $cntdef > 1;
} # end STDIN

Note the regex \d: that we use to determine a definition.

Running this on the first few words, we have a, aaron and ab to consider. Ab is interesting because it’s the name of a degree (in fact I have that degree), as well as shorthand for muscles of the abdomen. So, a lower case usage exists!

Now the program is running kind of slow. So since we don’t want to run it multiple times, perhaps it’s time to turn our attention to this other problem: all the lines like No definitions found for “aaland” that we are seeing in the meantime:

# grep ^aaland wn.index
aaland islands  OgB     Cz

So these are due to compound words, which we don’t want anyway because they won’t be accepted.

So running the modified program produces an output with 63,185 words, and another 2,170 words to be considered for review. The first few are as follows:

aa
aah
aalii
aardvark
aardwolf
aba
abaca
abacinate
aback
abactinal
abacus
abaft
abalone
abamp
abampere
abandon

Let’s check one:

# dict aalii
1 definition found

From WordNet (r) 3.0 (2006) [wn]:

  aalii
      n 1: a small Hawaiian tree with hard dark wood

Now check American Heritage dictionary, which I consider the Bible. Not there. I believe it would be accepted by Words with Friends because it plays using the Lexical Word Finder cheating program. Just enter your tiles as A A L I I A A to see for yourself.

And here are the first few rejected words which are to be reviewed:

a
aaron
ab
abdias
abel
aberdeen
abilene
abkhas
abkhasian
abkhaz
abkhazian
abnaki
aboriginal
ac
achaean
achomawi
actinia
actium
ad
adalia
adam
adams
add

Add? How did it get there? Well, you know, ADD, the medical condition? Yes, it’s exasperating. But that’s what you get for free. Two thousand or so is not too many to review, however.

I’m having some doubts about the whole project now. I had the letters D E E G R I U. An open D was on the board where there was room for a couple tiles above it and three tiles below. Using the three tiles below would make the play triple word. I initially came up with drug. Then edger, which works out to the same number of points because in WWF the U is two points for some reason. But I wanted to use another tile so I thought and thought. Edgier? Nope, doesn’t fit. Ridge? Fits, but too short. Then the Aha moment: ridged. And I felt a moment of pleasure realizing that most people would not have come up with it. But a word suggestion program? It would have spit it out first thing, taking away from that aspect of the game.

But then there is the other side. Successful play depends on rote memorization of all two-letter words. And what passes for a WWF word is very questionable, as I’ve said above. Like how about jo? Yup. No, not in standard dictionaries, though.

I’m still thinking about what algorithm to use for the actual program. I can see that potentially it will be expensive, computationally speaking, given all the variants that must be tested. So I thought of making the task even easier: throw out words with more than eight letters. To see how many long words we’re going to toss:

# egrep '\w{9}' betterdict|wc

where betterdict is the result of all my pruning described above. It’s 32,386 words we’ll toss. That ought to help a lot. And 464 fewer candidate words we’ll have to consider. We’ll do something like # egrep -v '\w{9}' betterdict > newbetterdict to build our even more slimmed-down dictionary.

Our First Match Program
Here’s a first stab at a matching program. It actually is pretty good (meaning there aren’t an overwhelming number of results) if you have the typical combination of unruly letters. Note we use the awesome power of regular expressions to do all the heavy lifting.

#!/usr/bin/perl
$m = $ARGV[0];
$cnt = 0;
$DEBUG = 0;
print "match: $m\n";
$dict = "/usr/share/dictd/newbetterdict";
open(DICT,"$dict") || die "cannot open dict $dict!!\n";
while(<DICT>) {
  chomp;
  $word = $_;
  print "word: $word\n" if $DEBUG;
  if ($word =~ /^[$m]{2,}$/) {
# we have the beginning of a match
    $cnt++;
    print "match: word: $word\n";
  }
}
print "matched: $cnt\n";

You run it from the command line with the letters you want to try as argument:

 # match.pl fxitau
match: fxitau
match: word: aa
match: word: affix
match: word: aft
match: word: ataxia
match: word: ax
match: word: fa
match: word: fat
match: word: faux
match: word: fax
match: word: fiat
match: word: fit
match: word: fix
match: word: ft
match: word: ii
match: word: iii
match: word: ix
match: word: tat
match: word: tatu
match: word: tau
match: word: taut
match: word: tax
match: word: taxi
match: word: tiff
match: word: tit
match: word: titi
match: word: tufa
match: word: tuff
match: word: tuft
match: word: tut
match: word: tux
match: word: xi
match: word: xii
match: word: xiii
match: word: xix
match: word: xx
match: word: xxi
...

What’s wrong of course is that while we are requiring matched words to be formed from our letters and only those letters, we have not taken care to avoid duplicate use of the same letter. That regular expression does a lot, but it’s not quite doing everything at this point. Now we start having to get creative to take it to the next level. I’m not sure myself how I’m going to do it.

More on that game. Now we’re getting into the groove. And by that I mean you make up words. So our final score was 381 to 347. Mind you, I’m not claiming any kind of expertise in the game. This is just a sad reflection on the liberalness of what WWF calls a “word.” Here are our made-up words from that one game: carbo, qi, ne, deni, oi, jo, wo and da. Of these, wo is the only one in the American Heritage dictionary, as a variant spelling of woe. Sad, right? It completely changes the strategy of the game.

Revised Program: Almost There
I thought for a while I could use the transliteration operator tr, but alas, it does not do interpolation, so I had to go with something a bit more clumsy. But the whole program, which is basically functional, now returns only those words which match the letters provided, which is already a great help. Here is the warts-and-all version:

#!/usr/bin/perl
$m = $ARGV[0];
$DEBUG = 0;
print "match: $m\n";
# split up match
for($i=0;$i<length($m);$i++) {
  $ltr = substr($m,$i,1);
  print "ltr: $ltr\n" if $DEBUG;
# count frequency of this letter
  $ltrhash{$ltr}++;
}
$dict = "/usr/share/dictd/newbetterdict";
open(DICT,"$dict") || die "cannot open dict $dict!!\n";
while(<DICT>) {
  chomp;
  $word = $_;
  $bad = 0;
  %ltrwordhash = ();
  print "word,m: $word,$m\n" if $DEBUG;
  if ($word =~ /^[$m]{2,}$/) {
    print "Begin word analysis\n" if $DEBUG;
# we have the beginning of a match
    for($i=0;$i<length($word);$i++) {
      $ltr = substr($word,$i,1);
      $ltrwordhash{$ltr}++;
      print "ltr,cnt: $ltr,$ltrwordhash{$ltr}\n" if $DEBUG;
# throw out words with too many letter occurrences
      if ($ltrwordhash{$ltr} > $ltrhash{$ltr}) {
        print "word tossed due to excess letters. max: $ltrhash{$ltr}\n" if $DEBUG;
        $bad = 1;
        last;
      }
    }
    next if $bad;
# what remains are the good words!
    print "matched word: $word\n";
  }
}

The DEBUG statements helped me find coding errors: I set $DEBUG = 1 and ran the program. I had initially forgotten the %ltrwordhash = (); statement to clear out that hash for each new word. That was not good, but a review of the debug output quickly showed what was going on. Now we run it again with my current letters plus a free one (“l”) I want to use from the board:

# match.pl liaeinpu
match: liaeinpu
matched word: ail
matched word: ain
matched word: ale
matched word: alien
matched word: aline
matched word: alp
matched word: alpine
matched word: ane
matched word: ani
matched word: anil
matched word: anile
matched word: ape
matched word: elan
matched word: en
matched word: ie
matched word: ii
matched word: il
matched word: in
matched word: inula
matched word: lane
matched word: lap
matched word: lapin
matched word: lea
matched word: lean
matched word: leap
matched word: lei
matched word: leu
matched word: li
matched word: lie
matched word: lien
matched word: lieu
matched word: lii
matched word: line
matched word: lineup
matched word: lip
matched word: lupin
matched word: lupine
matched word: nail
matched word: nap
matched word: nape
matched word: napu
matched word: neap
matched word: nil
matched word: nip
matched word: nu
matched word: pa
matched word: pail
matched word: pain
matched word: pal
matched word: pale
matched word: pan
matched word: pane
matched word: panel
matched word: pe
matched word: pea
matched word: peal
matched word: pean
matched word: pel
matched word: pen
matched word: penal
matched word: penial
matched word: pi
matched word: pia
matched word: pie
matched word: pilau
matched word: pile
matched word: pin
matched word: pine
matched word: pineal
matched word: plain
matched word: plan
matched word: plane
matched word: plea
matched word: pul
matched word: pula
matched word: pule
matched word: pun
matched word: ulna
matched word: unai
matched word: up

Cool, huh? I wanted to get a long word starting with “l” to pick up a double-word. I was coming up short. Maybe I eventually would have thought of one, but before I ran the program I was not coming up with long matches. So lineup and lupine are really helpful suggestions. Even though these are gentle hints, it still feels like cheating!

And only 38 lines of code, including comments and DEBUG statements.

Next we’ll add some features. Let’s allow a starting/middle/ending letter to be specified. To support optional command-line arguments it’s nice to have more sophisticated argument parsing.

We started a new game. It’s just getting ridiculous as my friend gravitates towards a style of play which is a lot less about word knowledge than about trying all possible letter combinations to maximize the total, regardless of whether or not it seems like a word. And in order to remain competitive I have to play that way as well, to a degree. So far our made-up “words” in this game include: qi, noh (used twice), oho, obe, fe, mm, noo, jo, oxo and deva. That deva really killed me as it was used to make a triple word play. Now we’ve played a total of 14 vertical words and 12 horizontal words. So 11/26 of the words we’ve so far used are fabricated nonsense words!

I think it’s a mixed blessing. I welcome additional two-letter words. They’re readily memorized and only a finite number can exist (26² = 676, of course). And they really help with the play. For three-letter made-up words I’m on the edge but inclined to not encourage their use. Definitely not for four-letter words and higher. The universe of such words is just too great.

Some Nice Touches
Here’s the program which permits optional beginning letter, end letter and middle letter. I’ve introduced argument parsing with the Getopt::Std module.

#!/usr/bin/perl
use Getopt::Std;
getopts('b:m:e:l:');
usage() unless $opt_l;
$m = $opt_l;
$begltr = $opt_b ? $opt_b: ".";
$endltr = $opt_e ? $opt_e: ".";
$midltr = $opt_m ? $opt_m: ".";
$DEBUG = 0;
print "match: $m\n";
# split up match
for($i=0;$i<length($m);$i++) {
  $ltr = substr($m,$i,1);
  print "ltr: $ltr\n" if $DEBUG;
# count frequency of this letter
  $ltrhash{$ltr}++;
}
$dict = "/usr/share/dictd/newbetterdict";
open(DICT,"$dict") || die "cannot open dict $dict!!\n";
while(<DICT>) {
  chomp;
  $word = $_;
  $bad = 0;
  %ltrwordhash = ();
  print "word,m: $word,$m\n" if $DEBUG;
  if ($word =~ /^[$m]{2,}$/) {
# meet begin/middle/end conditions
    next unless $word =~ /^$begltr.*$midltr.*$endltr$/;
    print "Begin word analysis\n" if $DEBUG;
# we have the beginning of a match
    for($i=0;$i<length($word);$i++) {
      $ltr = substr($word,$i,1);
      $ltrwordhash{$ltr}++;
      print "ltr,cnt: $ltr,$ltrwordhash{$ltr}\n" if $DEBUG;
# throw out words with too many letter occurrences
      if ($ltrwordhash{$ltr} > $ltrhash{$ltr}) {
        print "word tossed due to excess letters. max: $ltrhash{$ltr}\n" if $DEBUG;
        $bad = 1;
        last;
      }
    }
    next if $bad;
# what remains are the good words!
    print "matched word: $word\n";
  }
}
sub usage {
print "Usage: $0 [-b beginning_letter] [-e end_letter] [-m middle_letter] -l letters\n";
exit(1);
}

I called it match2.pl. Here’s an example using it with my awful letters (And is it just me, or does WWF have a definite proclivity to throw horrible letters at you for many turns in a row??):

# match2.pl -b b -l duttdsb
match: duttdsb
matched word: bud
matched word: bus
matched word: bust
matched word: but
matched word: butt

Note that my simplistic dictionary does not seem to have plurals. It also does not seem to have verb tenses besides present tense. Oh, well. If you realize that, it’s no big deal. It’s supposed to be gentle hints after all (which I can hide behind any time the task of doing a complete job becomes too taxing!).

How We Can Put This on the Web
The typical time it takes to run match2.pl is about 55 msec:

# time ./match2.pl -l zatrd
match: zatrd
matched word: adz
matched word: art
matched word: dart
matched word: rad
matched word: rat
matched word: tad
matched word: tar
matched word: trad
matched word: tzar

real    0m0.055s
user    0m0.050s
sys     0m0.000s

That’s pretty fast. So, anyhow, I’m thinking to take the next step and make this available on the web, at least until it becomes popular, in the form of an Ajax/Perl program. Now I’ve never done an Ajax program, but I’ve been looking for an excuse to do one and I think this fits the bill. I’m envisioning a web page where you punch in your letters and the possible word matches appear on the same page. Another way to go is to write the whole thing as a Javascript program. The dictionary isn’t that large, after all. I’ve also never done anything this ambitious in Javascript, so that might take some doing. I think we’ll tackle Ajax first.
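
As a placeholder for that future work, the server side could be as simple as a little CGI wrapper around the matching logic (purely a sketch: the script name, path and parameter are my assumptions, and a production version would want more careful input handling):

#!/usr/bin/perl
# hypothetical /cgi-bin/match.cgi - wraps the match2.pl logic for Ajax calls
use strict;
use CGI qw(param header);

print header('text/plain');
my $letters = param('letters') || '';
# accept only 1-8 lowercase letters, which also keeps the shell call safe
exit unless $letters =~ /^[a-z]{1,8}$/;
print `/usr/local/bin/match2.pl -l $letters`;

The Ajax page would then fetch something like /cgi-bin/match.cgi?letters=liaeinpu and drop the response into the page.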

Now gcide.index has 149,682 words, but it’s a mess and includes lots of proper names, so I will not use it.

To be continued…

Categories
Admin Apache IT Operational Excellence Linux Security Web Site Technologies

Apache Tips in Light of Security Problems

Intro
I am far from an expert in Apache. But I have a good knowledge of general best practices which I apply when running Apache web server. None of my tips are particularly insightful – they all can be found elsewhere, but this will be a single place to help find them all together.

To Compile or Not
As of this writing the current version is 2.2.21. The version supplied with the current version of SLES, SLES 11, is 2.2.10. To find the version run httpd -v

I think it’s fairly typical for them to be so many versions behind. I recommend compiling your own version. But pay attention to security advisories and check every quarter to see what the latest release is. You’ll have to keep up with it on your own or you’ll actually be in worse shape than if you used the vendor version and applied patches regularly.

What You’ll Need to Know for the Range DOS Vulnerability
When you get the source you might try a simple ./configure, followed by a make and finally make install. And it would all seem to work. You can fetch the home page with a curl localhost. Then you remember about that recent Range header denial of service vulnerability described here. If you test for whether you support the Range header you’ll see that you do. I like to test for this as follows:

$ curl -H "Range: bytes=1-2" localhost

If before you saw something like

<html><body><h1>It works!</h1>

now it becomes

ht

i.e., it grabbed bytes one and two from <html>…

Now there are options and opinions about what to do about this. I think turning off Range header support is the best option. But if you try that you will fail. Why? Because you did not compile in the mod_headers module. To turn off Range headers add these lines to the global part of your configuration:

RequestHeader unset Range
RequestHeader unset Request-Range

To see what modules you have available in your apache binary you do

/usr/local/apache2/bin/httpd -l

which should look like the following if you have taken all the defaults:

Compiled in modules:
  core.c
  mod_authn_file.c
  mod_authn_default.c
  mod_authz_host.c
  mod_authz_groupfile.c
  mod_authz_user.c
  mod_authz_default.c
  mod_auth_basic.c
  mod_include.c
  mod_filter.c
  mod_log_config.c
  mod_env.c
  mod_setenvif.c
  mod_version.c
  prefork.c
  http_core.c
  mod_mime.c
  mod_status.c
  mod_autoindex.c
  mod_asis.c
  mod_cgi.c
  mod_negotiation.c
  mod_dir.c
  mod_actions.c
  mod_userdir.c
  mod_alias.c
  mod_so.c

Notice there is no mod_headers.c which means there is no mod_headers module. And in fact when you restart your apache web server you are likely to see this error:

Syntax error on line 360 of /usr/local/apache2/conf/httpd.conf:
Invalid command 'RequestHeader', perhaps misspelled or defined by a module not included in the server configuration

So you need to compile in mod_headers. Begin by cleaning your slate by running make clean in your source directory; then run configure as follows:

./configure --enable-headers --enable-rewrite

I’ve thrown in the --enable-rewrite qualifier because I like to be able to use mod_rewrite. It is not actually used for the security problems being discussed in this article.

Side note for those using the system-provided apache2 package on SLES
As an alternative to compiling yourself, you may be using an apache package. I have only tested this for SLES (so it would probably be the same for openSUSE). There you can edit the /etc/sysconfig/apache2 file and add additional modules to load. In particular the line

APACHE_MODULES="actions alias auth_basic authn_file authz_host authz_groupfile authz_default authz_user authn_dbm autoindex
 cgi dir env expires include log_config mime negotiation setenvif ssl suexec userdir php5 reqtimeout"

can be changed to

APACHE_MODULES="actions alias auth_basic authn_file authz_host authz_groupfile authz_default authz_user authn_dbm autoindex
 cgi dir env expires include log_config mime negotiation setenvif ssl suexec userdir php5 reqtimeout headers"

Back to compiling. Note that ./configure --help gives you some idea of all the options available, but it doesn’t exactly link the options to the precise module names, though it gives you a good idea via the description.

Then run make followed by make install as before. You should be good to go!
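
To double-check that the module made it in, list the compiled-in modules again; assuming the rebuild succeeded you should now see it:

# /usr/local/apache2/bin/httpd -l | grep headers
  mod_headers.c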

A Built-in Contradiction
You may have successfully suppressed use of range-headers, but on my web server, I noticed a contradictory HTTP Response header was still being issued after all that:

Accept-Ranges: bytes

I use a simple

curl -i localhost

to look at the HTTP Response headers. The contradiction is that your server is not accepting ranges while it’s sending out the message that it is!

So turn that off to be consistent. This is what I did.

# need the following line to not send Accept-Ranges header
Header unset Accept-Ranges
#

Don’t Give Away the Keys
Don’t reveal too much about your server version such as OS and patch level of your web server. I suppose it is OK to reveal your web server type and its major version. Here is what I did:

# don't reveal too much about the server version - just web server and major version
# see http://www.ducea.com/2006/06/15/apache-tips-tricks-hide-apache-software-version/
ServerTokens Major

After all these changes curl -i localhost output looks as follows:

HTTP/1.1 200 OK
Date: Fri, 04 Nov 2011 20:39:02 GMT
Server: Apache/2
Last-Modified: Fri, 14 Oct 2011 15:37:41 GMT
ETag: "12005-a-4af4409a09b40"
Content-Length: 10
Content-Type: text/html

See? I’ve gotten rid of the Accept-Ranges and provide only sketchy information about the server.

I put these security-related measures into a single file, which I call security.conf, and include it from the global configuration file httpd.conf. To put it all together, at this point my security.conf looks like this:

# 11/2011
# prevent DOS attack.  
# See http://mail-archives.apache.org/mod_mbox/httpd-announce/201108.mbox/%[email protected]%3E - JH 8/31/11
# a good explanation of how to test it: 
# http://devcentral.f5.com/weblogs/macvittie/archive/2011/08/26/f5-friday-zero-day-apache-exploit-zero-problem.aspx
# looks like we do have this vulnerability, 
# trying curl -i -H 'Range:bytes=1-5' http://bsm2.com/index.html
# note that I had to compile with ./configure --enable-headers to be able to use these directives
RequestHeader unset Range
RequestHeader unset Request-Range
#
# need the following line to not send Accept-Ranges header
Header unset Accept-Ranges
#
# don't reveal too much about the server version - just web server and major version
# see http://www.ducea.com/2006/06/15/apache-tips-tricks-hide-apache-software-version/
ServerTokens Major

SSL (added December, 2014)
Search engines are encouraging web site operators to switch to using SSL for the obvious added security. If you’re going to use SSL you’ll also need to do that responsibly or you could get a false sense of security. I document it in my post on working with cipher settings.

Disable folder browsing/directory listing
I recently got caught out on this rookie mistake: the web directory listing vulnerability. The solution is simple. Inside your main HTDOCS section of configuration you may have a line that looks like:

Options Indexes FollowSymLinks ExecCGI

Get rid of that Indexes – that’s what permits folder browsing. So this is better:

Options FollowSymLinks ExecCGI
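
For context, the enclosing block might look like this (a sketch modeled on the stock httpd.conf for the 2.2 series; the path is illustrative):

<Directory "/usr/local/apache2/htdocs">
    Options FollowSymLinks ExecCGI
    AllowOverride None
    Order allow,deny
    Allow from all
</Directory>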

Turn off php version listing, December 2016 update
Oops. I read about how 47% of the top million web sites have security issues. One basis for the judgment is to see what version of PHP is running, based on the headers. So I checked my https server, and, oops:

$ curl -s -i -k https://drjohnstechtalk.com/blog/ | head -22

HTTP/1.1 200 OK
Date: Fri, 16 Dec 2016 20:00:09 GMT
Server: Apache/2
Strict-Transport-Security: max-age=15811200; includeSubDomains; preload
Vary: Cookie,Accept-Encoding
X-Powered-By: PHP/5.4.43
X-Pingback: https://drjohnstechtalk.com/blog/xmlrpc.php
Last-Modified: Fri, 16 Dec 2016 20:00:10 GMT
Transfer-Encoding: chunked
Content-Type: text/html; charset=UTF-8
 
<!DOCTYPE html>
<html lang="en-US">
<head>
...

So there it was, hanging out for all to see: PHP version 5.4.43. I’d rather not publicly admit that. So I turned it off by adding the following to my php.ini file and re-starting apache:

expose_php = off

After this my HTTP response headers show only this:

HTTP/1.1 200 OK
Date: Fri, 16 Dec 2016 20:00:55 GMT
Server: Apache/2
Strict-Transport-Security: max-age=15811200; includeSubDomains; preload
Vary: Cookie,Accept-Encoding
X-Pingback: https://drjohnstechtalk.com/blog/xmlrpc.php
Last-Modified: Fri, 16 Dec 2016 20:00:57 GMT
Transfer-Encoding: chunked
Content-Type: text/html; charset=UTF-8

I must have overlooked this when I compiled my own apache v 2.4 and used it to run my principal web server over https.

June 2017 update
PCI compliance will ding you for lack of an X-Frame-Options header. So for a simple web site like mine I can always safely send one out by adding this to my apache.conf file (or whichever apache conf file you deem most appropriate; I have a special security file in conf.d where I actually put it):

# don't permit framing from other sources, DrJ 6/16/17
# https://www.simonholywell.com/post/2013/04/three-things-i-set-on-new-servers/
Header always append X-Frame-Options SAMEORIGIN

PCI compliance will also ding you if TRACE method is enabled. In that security file of my configuration I disable it thusly:

TraceEnable Off

Test both those things in one fell swoop
$ curl -X TRACE -i -k https://drjohnstechtalk.com/

HTTP/1.1 405 Method Not Allowed
Date: Fri, 16 Jun 2017 18:20:24 GMT
Server: Apache/2
X-Frame-Options: SAMEORIGIN
Strict-Transport-Security: max-age=15811200; includeSubDomains; preload
Allow:
Content-Length: 295
Content-Type: text/html; charset=iso-8859-1
 
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>405 Method Not Allowed</title>
</head><body>
<h1>Method Not Allowed</h1>
<p>The requested method TRACE is not allowed for the URL /.</p>
<hr>
<address>Apache/2 Server at drjohnstechtalk.com Port 443</address>
</body></html>

See? X-Frame-Options header now comes out with desired value. TRACE method was disallowed. All good.

Conclusion
Make sure you are taking some precautions against known security problems in Apache2. For information on running multiple web server instances under SLES see my next post Running Multiple Web Server Instances under SLES.

References and related
Remember, for handling the apache SSL hardening go here.
Compiling apache 2.4
drjohnstechtalk is now an HTTPS site!
TRACE method sounds useful for debugging, but I guess there are exploits so it needs to be disabled. Wikipedia documents it: https://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol#Request_methods. Don’t forget that curl -v also shows you your request headers!

Categories
Admin IT Operational Excellence Network Technologies

The IT Detective Agency: ARP Entry OK, PING not Working

Intro
Yes, the IT Detective Agency is back by popular demand. This time we’ve got ourselves a thriller involving a piece of equipment – a wireless LAN controller (WLAN) – on a directly connected network. From the router we could see the arp entry for the WLAN, but we could not PING it. Why?

A trace, or more correctly the output of tcpdump run on the router interface connected to that network, showed this:

12:08:59.623509  I arp who-has rtr7687.drjohnhilgarts.com tell wlan.drjohnhilgarts.com
12:08:59.623530  O arp reply rtr7687.drjohnhilgarts.com is-at 01:a1:00:74:55:12 (oui Nokia Internet Communications)
12:09:01.272922  I STP 802.1d, Config, Flags [none], bridge-id 2332.3c:df:1e:8f:2b:c0.8312, length 43
12:09:03.271765  I STP 802.1d, Config, Flags [none], bridge-id 2332.3c:df:1e:8f:2b:c0.8312, length 43
12:09:05.271469  I STP 802.1d, Config, Flags [none], bridge-id 2332.3c:df:1e:8f:2b:c0.8312, length 43
12:09:07.271885  I STP 802.1d, Config, Flags [none], bridge-id 2332.3c:df:1e:8f:2b:c0.8312, length 43
12:09:09.271804  I STP 802.1d, Config, Flags [none], bridge-id 2332.3c:df:1e:8f:2b:c0.8312, length 43
12:09:09.622902  I arp who-has rtr7687.drjohnhilgarts.com tell wlan.drjohnhilgarts.com
12:09:09.622922  O arp reply rtr7687.drjohnhilgarts.com is-at 01:a1:00:74:55:12 (oui Nokia Internet Communications)
12:09:11.271567  I STP 802.1d, Config, Flags [none], bridge-id 2332.3c:df:1e:8f:2b:c0.8312, length 43
12:09:13.271716  I STP 802.1d, Config, Flags [none], bridge-id 2332.3c:df:1e:8f:2b:c0.8312, length 43
12:09:15.271971  I STP 802.1d, Config, Flags [none], bridge-id 2332.3c:df:1e:8f:2b:c0.8312, length 43
12:09:17.040748  I b8:c7:5d:19:b9:9e (oui Unknown) > Broadcast Null Unnumbered, xid, Flags [Command], length 46
12:09:17.271663  I STP 802.1d, Config, Flags [none], bridge-id 2332.3c:df:1e:8f:2b:c0.8312, length 43
12:09:19.271832  I STP 802.1d, Config, Flags [none], bridge-id 2332.3c:df:1e:8f:2b:c0.8312, length 43
12:09:19.392578  I b8:c7:5d:19:b9:9e (oui Unknown) > Broadcast Null Unnumbered, xid, Flags [Command], length 46
12:09:19.623515  I arp who-has rtr7687.drjohnhilgarts.com tell wlan.drjohnhilgarts.com
12:09:19.623535  O arp reply rtr7687.drjohnhilgarts.com is-at 01:a1:00:74:55:12 (oui Nokia Internet Communications)
12:09:20.478397  O arp reply rtr7687.drjohnhilgarts.com is-at 01:a1:00:74:55:12 (oui Nokia Internet Communications)
12:09:21.271714  I STP 802.1d, Config, Flags [none], bridge-id 2332.3c:df:1e:8f:2b:c0.8312, length 43
12:09:23.271697  I STP 802.1d, Config, Flags [none], bridge-id 2332.3c:df:1e:8f:2b:c0.8312, length 43
12:09:25.271664  I STP 802.1d, Config, Flags [none], bridge-id 2332.3c:df:1e:8f:2b:c0.8312, length 43
12:09:27.272156  I STP 802.1d, Config, Flags [none], bridge-id 2332.3c:df:1e:8f:2b:c0.8312, length 43
12:09:29.271730  I STP 802.1d, Config, Flags [none], bridge-id 2332.3c:df:1e:8f:2b:c0.8312, length 43
12:09:29.621882  I arp who-has rtr7687.drjohnhilgarts.com tell wlan.drjohnhilgarts.com
12:09:29.621903  O arp reply rtr7687.drjohnhilgarts.com is-at 01:a1:00:74:55:12 (oui Nokia Internet Communications)
12:09:31.271765  I STP 802.1d, Config, Flags [none], bridge-id 2332.3c:df:1e:8f:2b:c0.8312, length 43
12:09:33.271858  I STP 802.1d, Config, Flags [none], bridge-id 2332.3c:df:1e:8f:2b:c0.8312, length 43

What’s interesting is what isn’t present. No PINGs. No unicast traffic whatsoever, yet we knew the WLAN was generating traffic. The frequent arp requests for the same IP strongly hinted that the WLAN was not getting the response. We were not able to check the arp table of the WLAN. And we knew the WLAN was supposed to respond to our PINGs, but it wasn’t. Yet the router’s arp table had the correct entry for the WLAN, so we knew it was plugged into the right switch port and on the right vlan. We also triple-checked that the network masks matched on both devices. Let’s go back. Was it really on the right vlan??

The Solution
What we eventually realized is that in the WLAN GUI, VLANs were assigned to the various interfaces. The switch port, on a Cisco switch, was a regular access port. We reasoned (documentation was scarce) that the interface was VLAN-tagging its traffic. So we tried to change the access port to a trunk port and enter the correct vlan. Here’s the show conf snippet:

interface GigabitEthernet1/17
 description 5508-wlan
 switchport
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 887
 switchport mode trunk
 spanning-tree portfast edge trunk

Bingo! With that in place we could ping the WLAN and it could send us its traffic.

Case closed.

2018 update
I had totally forgotten my own posting. And I’ll be damned if in the heat of connecting a new firewall to a switch port we didn’t have this weird situation where we could see MAC entries of the firewall, and it could see MACs of other devices on that vlan, but nobody could ping the firewall and vice versa. A trace from tcpdump looked roughly similar to the above – a lot of arp who-has firewall, tell server. Sure enough, the firewall guy, new to the group, had configured all his ports to be tagged ports, even those with a single vlan. It had been our custom to make single vlans non-tagged ports. I didn’t start it, that’s just how it was. More than an hour was lost debugging…

And earlier in the year was yet another similar incident, where a router operated by a vendor joining one of our vlans assumed tagged ports where we did not. More than an hour was lost debugging… See a pattern there?

I had forgotten my own post from seven years ago to such an extent, I was just about to write a new one when I thought, Maybe I’ve covered that before. So old topics are new once again… Here’s to remember this for the next time!

Where to watch out for this
When you don’t run all the equipment. If you ran it all you’d have the presence of mind to make all the ports consistent.

Some terminology
A tagged port can also be known as using 802.1q, which is also known as dot1q, which in Cisco world is known as a trunk port. In the absence of that, you would have an access port (Cisco terminology) or untagged port (everyone else).
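
On the Cisco side there’s a quick way to see which kind of port you’re dealing with (interface name illustrative):

# show interfaces GigabitEthernet1/17 switchport

Look for the Administrative Mode line in the output: static access versus trunk.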

Conclusion
OK, there are probably many reasons and scenarios in which devices on the same network can see each other’s arp entries, but not send unicast traffic. But, the scenario we have laid out above definitely produces that effect, so keep it in mind as a possibility should you ever encounter this issue.

Categories
Admin IT Operational Excellence Network Technologies

Internet Service Providers Block TCP Port 22 or Do They?

Intro
The original premise of this article is that some Internet Service Providers were seen to block TCP port 22, used by ssh and sftp. However, as often happens during active IT investigations, this turned out to be completely wrong. In fact there was a block in this case we studied, but not by the ISPs. An overly aggressive ACL on the customer premises equipment (CPE) Internet router was in fact the culprit.

The Problem
(IPs skewed to protect whatever) We asked a partner to do an sftp to drjohnstechtalk.com. All firewall and routing rules were in place. The partner tried it. He saw a SYN packet leaving, but no packets being returned. Here at drjohnstechtalk, we didn’t see any packets whatsoever! This partner makes sftp connections to other servers successfully. What the heck?

We had them try the following basic command:

nc -v host 22

where host is the IP of the target server. The response was:

nc: connect to host port 22 (tcp) failed: No route to host

But switching to port 21 (FTP) showed completely different behaviour: there was no message whatsoever and the session hung. That’s good! That’s the usual sign of firewalls silently dropping packets. But this No route to host needed more exploration.

Getting Closer
So we did an open trace. I mean a tcpdump without any limiting expression. The dump showed the SYN out to port 22, followed by this nugget:

13:09:14.279176 IP Sprint_IP > src_IP: ICMP host target_IP unreachable - admin prohibited filter, length 36
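
That ICMP message – type 3, code 13, communication administratively prohibited – is what the client’s TCP stack surfaces as No route to host. It also shows why the open trace mattered: a capture filtered on tcp port 22 would have missed it entirely. If you want to limit the noise without hiding the ICMP, a filter along these lines works (interface and address illustrative):

# tcpdump -ni eth0 'host partner_IP and (port 22 or icmp)'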

Next Steps
This well-intentioned filtering is causing a business problem. The Cisco IOS ACL that got them into trouble was this one:

ip access-list extended drop-spoof-and-telnet
 deny   tcp any any eq 22 log-input

Solution
They liked the idea of this filtering, but apparently this was the first request for inbound ssh access. So they decided to keep this filter rule but precede it with more specific rules as required, essentially acting like a second firewall:

ip access-list extended drop-spoof-and-telnet
 permit tcp host IP_src host IP_dest eq 22
 deny   tcp any any eq 22 log-input
Categories
Admin Apache IT Operational Excellence Security

The Basics of How to Work with Cipher Settings

December, 2014 update: some tips for making your server POODLE-proof. 2016 update: dealing with the OpenSSL Padding Oracle Vulnerability CVE-2016-2107.

Intro
We got audited. There’s always something they catch, right? But I actually appreciate the thoroughness of this audit, and I used its findings to learn a little about one of those mystery areas that never seemed to matter until now: ciphers. Now it matters because cipher weakness was the finding!

I had an older piece of Nortel gear which was running SSL. The auditors found that it allows anonymous authentication ciphers. Have you ever heard of such a thing? I hadn’t either! I am far from an expert in this area, but I will attempt an explanation of the implication of this weakness which, by the way, was scored as a “high severity” – the highest on their scale in fact!

Why Anonymous Authentication is a Severe Matter
The briefly stated reason in the finding is that it allows for a Man In The Middle (MITM) attack. The idea, as best I can reconstruct it, is this: the correct behaviour is for a client to authenticate the server in an SSL session, usually by validating its RSA certificate. With an anonymous cipher there is no certificate in play, so the client has no way of knowing who it is really talking to, and a MITM SSL server could be inserted between client and server.
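
One concrete way to probe a server for this weakness from the command line (host and port are illustrative; aNULL is the standard openssl alias for the anonymous-authentication ciphers):

openssl s_client -connect yourserver:443 -cipher aNULL

If the handshake completes instead of erroring out, the server accepted an anonymous cipher.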

Reproducing the Problem
OK, so we may not fully understand the issue, but we know enough to reproduce the auditors’ results. That is helpful: we’ll know when we’ve resolved it without going back to the auditors. Our tool of choice is openssl. In theory, you can list the available ciphers in openssl thus:

openssl ciphers -v

And you’ll probably end up with an output looking like this, without the header which I’ve added for convenience:

Cipher Name|SSL Protocol|Key exchange algorithm|Authentication|Encryption algorithm|MAC digest algorithm
DHE-RSA-AES256-SHA      SSLv3 Kx=DH       Au=RSA  Enc=AES(256)  Mac=SHA1
DHE-DSS-AES256-SHA      SSLv3 Kx=DH       Au=DSS  Enc=AES(256)  Mac=SHA1
AES256-SHA              SSLv3 Kx=RSA      Au=RSA  Enc=AES(256)  Mac=SHA1
KRB5-DES-CBC3-MD5       SSLv3 Kx=KRB5     Au=KRB5 Enc=3DES(168) Mac=MD5
KRB5-DES-CBC3-SHA       SSLv3 Kx=KRB5     Au=KRB5 Enc=3DES(168) Mac=SHA1
EDH-RSA-DES-CBC3-SHA    SSLv3 Kx=DH       Au=RSA  Enc=3DES(168) Mac=SHA1
EDH-DSS-DES-CBC3-SHA    SSLv3 Kx=DH       Au=DSS  Enc=3DES(168) Mac=SHA1
DES-CBC3-SHA            SSLv3 Kx=RSA      Au=RSA  Enc=3DES(168) Mac=SHA1
DES-CBC3-MD5            SSLv2 Kx=RSA      Au=RSA  Enc=3DES(168) Mac=MD5
DHE-RSA-AES128-SHA      SSLv3 Kx=DH       Au=RSA  Enc=AES(128)  Mac=SHA1
DHE-DSS-AES128-SHA      SSLv3 Kx=DH       Au=DSS  Enc=AES(128)  Mac=SHA1
AES128-SHA              SSLv3 Kx=RSA      Au=RSA  Enc=AES(128)  Mac=SHA1
RC2-CBC-MD5             SSLv2 Kx=RSA      Au=RSA  Enc=RC2(128)  Mac=MD5
KRB5-RC4-MD5            SSLv3 Kx=KRB5     Au=KRB5 Enc=RC4(128)  Mac=MD5
KRB5-RC4-SHA            SSLv3 Kx=KRB5     Au=KRB5 Enc=RC4(128)  Mac=SHA1
RC4-SHA                 SSLv3 Kx=RSA      Au=RSA  Enc=RC4(128)  Mac=SHA1
RC4-MD5                 SSLv3 Kx=RSA      Au=RSA  Enc=RC4(128)  Mac=MD5
RC4-MD5                 SSLv2 Kx=RSA      Au=RSA  Enc=RC4(128)  Mac=MD5
KRB5-DES-CBC-MD5        SSLv3 Kx=KRB5     Au=KRB5 Enc=DES(56)   Mac=MD5
KRB5-DES-CBC-SHA        SSLv3 Kx=KRB5     Au=KRB5 Enc=DES(56)   Mac=SHA1
EDH-RSA-DES-CBC-SHA     SSLv3 Kx=DH       Au=RSA  Enc=DES(56)   Mac=SHA1
EDH-DSS-DES-CBC-SHA     SSLv3 Kx=DH       Au=DSS  Enc=DES(56)   Mac=SHA1
DES-CBC-SHA             SSLv3 Kx=RSA      Au=RSA  Enc=DES(56)   Mac=SHA1
DES-CBC-MD5             SSLv2 Kx=RSA      Au=RSA  Enc=DES(56)   Mac=MD5
EXP-KRB5-RC2-CBC-MD5    SSLv3 Kx=KRB5     Au=KRB5 Enc=RC2(40)   Mac=MD5  export
EXP-KRB5-DES-CBC-MD5    SSLv3 Kx=KRB5     Au=KRB5 Enc=DES(40)   Mac=MD5  export
EXP-KRB5-RC2-CBC-SHA    SSLv3 Kx=KRB5     Au=KRB5 Enc=RC2(40)   Mac=SHA1 export
EXP-KRB5-DES-CBC-SHA    SSLv3 Kx=KRB5     Au=KRB5 Enc=DES(40)   Mac=SHA1 export
EXP-EDH-RSA-DES-CBC-SHA SSLv3 Kx=DH(512)  Au=RSA  Enc=DES(40)   Mac=SHA1 export
EXP-EDH-DSS-DES-CBC-SHA SSLv3 Kx=DH(512)  Au=DSS  Enc=DES(40)   Mac=SHA1 export
EXP-DES-CBC-SHA         SSLv3 Kx=RSA(512) Au=RSA  Enc=DES(40)   Mac=SHA1 export
EXP-RC2-CBC-MD5         SSLv3 Kx=RSA(512) Au=RSA  Enc=RC2(40)   Mac=MD5  export
EXP-RC2-CBC-MD5         SSLv2 Kx=RSA(512) Au=RSA  Enc=RC2(40)   Mac=MD5  export
EXP-KRB5-RC4-MD5        SSLv3 Kx=KRB5     Au=KRB5 Enc=RC4(40)   Mac=MD5  export
EXP-KRB5-RC4-SHA        SSLv3 Kx=KRB5     Au=KRB5 Enc=RC4(40)   Mac=SHA1 export
EXP-RC4-MD5             SSLv3 Kx=RSA(512) Au=RSA  Enc=RC4(40)   Mac=MD5  export
EXP-RC4-MD5             SSLv2 Kx=RSA(512) Au=RSA  Enc=RC4(40)   Mac=MD5  export

I’m not going to explain all those headers because, umm, I don’t know them all myself. Perhaps in a later or updated posting. The point I want to make here is that as complete as this listing appears, it’s really incomplete: openssl actually supports additional ciphers as well, as I learned by combining information from the audit with Nortel’s documentation. In particular Nortel mentions additional ciphers such as these:

ADH-AES256-SHA SSLv3 DH, NONE AES (256) SHA1
ADH-DES-CBC3-SHA SSLv3 DH, NONE 3DES (168) SHA1
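As an aside, you can get openssl to enumerate its anonymous suites by naming the aNULL alias explicitly – a quick check, assuming your openssl build includes them:

openssl ciphers -v 'aNULL'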

I singled out those two Nortel entries because the “NONE” means anonymous authentication – the subject of the audit finding! Note that these ciphers were not present in the default openssl listing above. So now I know Nortel potentially supports anonymous (aNULL) authentication. There remains the question of whether my specific implementation supports it. Of course the audit says it does, but I want sufficient expertise to verify it for myself. So, try this:

openssl s_client -cipher ADH-DES-CBC3-SHA -connect IP_of_Nortel_server:443

I get:

---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 411 bytes and written 239 bytes
---
New, TLSv1/SSLv3, Cipher is ADH-DES-CBC3-SHA
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
SSL-Session:
    Protocol  : TLSv1
    Cipher    : ADH-DES-CBC3-SHA
    Session-ID: 30F1375839B8CFB508CDEFC9FBE4A5BF2D5CE240038DFF8CC514607789CCEDD5
    Session-ID-ctx:
    Master-Key: B2374E609874D1015DC55BEAA0289310445BAFF65956908A497E5C51DF1301D68CC47AB395DDFEB9A1C77B637A4D306F
    Key-Arg   : None
    Krb5 Principal: None
    Start Time: 1317132292
    Timeout   : 300 (sec)
    Verify return code: 0 (ok)
---

You see that it listed the Cipher as the one I requested, ADH-DES-CBC3-SHA. Further note that no peer certificate is available – normally the server presents one. To see if my method is correct, let’s try one of Google’s secure servers. Certainly Google will not permit anonymous authentication if it’s a bad practice:

openssl s_client -cipher aNULL -connect 74.125.67.84:443

produces this output:

21390:error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure:s23_clnt.c:583:

Google does not permit this cipher! As a control, let’s use openssl against both servers without specifying any cipher. First, the Nortel server:

openssl s_client -connect IP_of_Nortel_server:443

produces some long output, which spits out the server certificates, followed by this:

New, TLSv1/SSLv3, Cipher is DHE-RSA-AES256-SHA
Server public key is 2048 bit
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
SSL-Session:
    Protocol  : TLSv1
    Cipher    : DHE-RSA-AES256-SHA
    Session-ID: 6D1A4383F3DBF4C14007220715ECCFB83D91C524624ACE641843880291200AE2
    Session-ID-ctx:
    Master-Key: BE3FB61B169F497A922A9A172D36A4BB15C26074021D7F22D125875980070E157EDA3100572F927B427B03BF81543E1A
    Key-Arg   : None
    Krb5 Principal: None
    Start Time: 1317132982
    Timeout   : 300 (sec)
    Verify return code: 0 (ok)

So you see client and server agreed to use the cipher DHE-RSA-AES256-SHA, which from our table uses RSA authentication. And hitting Google again without the ciphers argument we get this:

New, TLSv1/SSLv3, Cipher is RC4-SHA
Server public key is 1024 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
SSL-Session:
    Protocol  : TLSv1
    Cipher    : RC4-SHA
    Session-ID: 236FDF47DA752E768E7EE32DA10103F1CAD513E9634F075BE8773090A2E7A995
    Session-ID-ctx:
    Master-Key: 39212DE0E3A98943C441287227CB1425AE11CCA277EFF6F8AF83DA267AB256B5A8D94A6573DFD54FB1C9BF82EA302494
    Key-Arg   : None
    Krb5 Principal: None
    Start Time: 1317133483
    Timeout   : 300 (sec)
    Verify return code: 0 (ok)
---

So in this case it is also successful, though Google chose a different cipher than the Nortel server did, namely RC4-SHA. But we can look it up in our table and see that it too uses RSA authentication. Cool.

So we’ve “proven” all our assertions thus far. Now how do we fix Nortel? The Nortel GUI lists the ciphers as

ALL@STRENGTH

Pardon me? It turns out there are cipher groupings denoted by aliases, and you can combine the aliases into a cipher list.

ALL – means all cipher suites
EXPORT – includes cipher suites using 40 or 56 bit encryption
aNULL – cipher suites that do not offer authentication
eNULL – cipher suites that have no encryption whatsoever (disabled by default in Nortel)
STRENGTH – appended (as @STRENGTH) at the end of the list, sorts it in order of encryption algorithm key length

List operators are:
! – permanently deletes the cipher from the list.
+ – moves the cipher to the end of the list
: – separator of cipher strings

aNULL is a subset of ALL, and that’s what’s killing us. Putting all this together, the cipher I tried in place of ALL@STRENGTH is:

ALL:!EXPORT:!aNULL@STRENGTH
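Before committing that to the Nortel box you can preview which suites such a string expands to, using openssl on any handy machine (a sanity check – note that standalone openssl wants a colon before the @STRENGTH directive):

openssl ciphers -v 'ALL:!EXPORT:!aNULL:@STRENGTH'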

In this way I prevent anonymous authentication and remove the weaker export ciphers. As soon as I applied this cipher list, I tested it. Yup – it works. I can no longer hit it using anonymous authentication:

openssl s_client  -cipher aNULL -connect IP_of_Nortel_server:443

produces

2465:error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure:s23_clnt.c:583:

and using cipher eNULL produces the same error. To make sure I’m sending a cipher which openssl understands, I tried a nonsense cipher as a control – one that I know does not exist:

openssl s_client  -cipher eddNULL -connect IP_of_Nortel_server:443

That gives a different error:

error setting cipher list
2482:error:1410D0B9:SSL routines:SSL_CTX_set_cipher_list:no cipher match:ssl_lib.c:1188:

providing assurance that aNULL and eNULL are cipher families understood and supported by openssl, and that I have done the hardening correctly!

Now you can probably count the number of people still using Nortel gear on your two hands! But this discussion obviously has wider applicability. In Apache/mod_ssl there is an SSLCipherSuite line where you specify a cipher list. The auditors’ recommendation is more detailed than what I tried; they suggest the list ALL:!aNULL:!ADH:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:+MEDIUM
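Dropped into apache, that would look something like this (a sketch – it belongs in your SSL-enabled virtual host or global SSL section):

# the auditors' suggested cipher list, in mod_ssl form
        SSLCipherSuite ALL:!aNULL:!ADH:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:+MEDIUM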

October 2014 Update
Well, now we’ve encountered the SSLv3 vulnerability POODLE, which compels us to forcibly eliminate use of SSLv3 on all servers and clients. Let’s say we updated our clients to require use of TLS. How do we gain confidence the update worked? Set one of our servers to not use TLS! Here’s how I did that on a BigIP server:

DEFAULT:!TLSv1:@STRENGTH

I ran a quick test using openssl s_client -connect server:443 as above, and got what I was looking for:

...
SSL handshake has read 3038 bytes and written 479 bytes
---
New, TLSv1/SSLv3, Cipher is AES256-SHA
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
SSL-Session:
    Protocol  : SSLv3
    Cipher    : AES256-SHA
...

Note the protocol says SSLv3 and not TLS.
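You can also force the protocol from the client side – assuming your openssl build is old enough to still include SSLv3 support:

openssl s_client -ssl3 -connect server:443

Against a server that has SSLv3 properly disabled, that same command should fail with a handshake failure.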

Turning off SSLv3 to deal with POODLE

That is, of course, exactly the opposite of what you normally want – it was just a control test. Here’s what to do to actually turn off SSLv3 on a BigIP:

DEFAULT:!RC4:!SSLv3:@STRENGTH

OK, yes, RC4 is a discredited cipher, so disable that as well. Most clients (but not all) will be able to work with a server set up like this.


Apache and POODLE prevention
Well, I went to the Qualys site and found I was not exactly eating my own dogfood! My own server was considered vulnerable to POODLE, supported weak protocols, etc., and only scored a “C.” Determined to incorporate more modern approaches to my apache server settings, and stealing from others, I improved things dramatically by throwing these additional configuration lines into my apache configuration:

(the following apache configuration lines are deprecated – see further down below)

...
# lock things down to get a better score from Qualys - DrJ 12/17/14
# 4 possible values: All, SSLv2, SSLv3, TLSv1. Allow TLS only:
        SSLProtocol all -SSLv2 -SSLv3
        SSLCipherSuite ALL:!aNULL:!eNULL:!SSLv2:!LOW:!EXP:!RC4:!MD5:@STRENGTH
...


The results after strengthening apache configuration

I now get an “A-” and am not supporting any weak ciphers! Yeah! It’s because those configuration lines mean that I explicitly don’t permit SSLv2/v3 or the weak RC4 cipher. I need to study whether I should support TLSv1.2 and forward secrecy to reach the best possible score – an “A.” (Months later) Well, now I do get an A, and I’m not exactly sure why the score improved.

BREACH prevention
After all the above measures, the Digicert certificate inspector I am evaluating says my drjohnstechtalk site is vulnerable to the BREACH attack. From my reading the only practical solution, at least in my case, is to upgrade from apache 2.2 to apache 2.4. Hence the Herculean efforts to compile apache 2.4 as detailed in this blog post. My preliminary finding is that, without changing the SSL configuration at all, apache 2.4 does not show a vulnerability to BREACH. Digging further, it comes down to apache 2.4 not using HTTP compression – and I’m not yet sure why it isn’t being used!
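One way to see whether HTTP compression – the ingredient BREACH needs – is in play is simply to ask for it and watch whether the server obliges. A quick check with curl (a sketch, using my own site):

$ curl -s -D - -o /dev/null -H 'Accept-Encoding: gzip' https://drjohnstechtalk.com/ | grep -i content-encoding

If nothing comes back, responses aren’t being compressed and BREACH has nothing to work with.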

2016 Update for CVE-2016-2107
I was going to check to see if my current score at SSLLabs is an A-, and what I can do to boost it to an A. Well, I got an F! I guess the lesson here is to conduct periodic tests. Things change!
[Screenshot: Qualys SSL Labs scorecard showing the F grade, 2016-11-10]

I saw from descriptions elsewhere that my version of openssl, openssl-1.0.1e-30.el6.11, was likely out-of-date. So I looked at my version of openssl on my CentOS server:

$ sudo rpm -qa|grep openssl

and updated it:

$ sudo yum update openssl-1.0.1e-30.el6.11

Now (11/11/16) my version is openssl-1.0.1e-48.el6_8.3.

Would this upgrade suffice without any further action?

Some background. I had compiled – with some difficulty – my own version of apache version 2.4: https://drjohnstechtalk.com/blog/2015/07/compiling-apache24-on-centos/.

I was pretty sure that my apache dynamically links to the openssl libraries, since mod_ssl does not appear among the compiled-in modules:

$ /usr/local/apache24/bin/httpd -l

Compiled in modules:
  core.c
  mod_so.c
  http_core.c
  prefork.c

Simply installing these new openssl libraries did not do the trick immediately. So the next step was to restart apache. Believe it or not, that did it!

Going back to the full ssllabs test, I currently get a solid A. Yeah!
[Screenshot: Qualys SSL Labs scorecard showing the A grade, 2016-11-11]

In the spirit of let’s learn something here beyond what the immediate problem requires: I learned that the openssl libraries were indeed dynamically linked to my apache version. Moreover, I learned that dynamic linking, despite the name, still has a static aspect. The shared object library must be read in at process creation time and perhaps only occasionally re-read afterwards. But it is not read with every single invocation, which I suppose makes sense from a performance point of view.

2016 apache 2.4 SSL config section
For the record…

...
        SSLProtocol all -SSLv2 -SSLv3
        # it used to be this simple
        #SSLCipherSuite ALL:!aNULL:!eNULL:!SSLv2:!LOW:!EXP:!RC4:!MD5:@STRENGTH
# Now it isn't - DrJ 6/2/15. Based on SSL Labs https://weakdh.org/sysadmin.html - DrJ 6/2/15
        SSLCipherSuite          ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA
        SSLHonorCipherOrder     on
...

How to see what ciphers your browser supports
Your best bet is the SSLLABS.com web site. Go to Test my Browser.

University of Hannover offers this site. Just go to this page. But lately I noticed that it does not list ciphers using CBC, whereas the SSLlabs site does. So SSLlabs provides the more accurate answer.

2017 update for PCI compliance
Of course this article is ancient and I hesitate to further complicate it, but I also don’t want to tear it down. Anyway, for PCI compliance you’ll soon need to drop 3DES ciphers (3DES is pronounced “triple-DES” if you ever need to read it aloud). I have this implemented on F5 BigIP devices. I have set the ciphers to:

DEFAULT:!DHE:!3DES:+RSA

and this did the trick. Here’s how to see what effect that has from the BigIP command line:

$ tmm --clientciphers 'DEFAULT:!DHE:!3DES:+RSA'

       ID  SUITE                            BITS PROT    METHOD  CIPHER  MAC     KEYX
 0: 49200  ECDHE-RSA-AES256-GCM-SHA384      256  TLS1.2  Native  AES-GCM  SHA384  ECDHE_RSA
 1: 49199  ECDHE-RSA-AES128-GCM-SHA256      128  TLS1.2  Native  AES-GCM  SHA256  ECDHE_RSA
 2: 49192  ECDHE-RSA-AES256-SHA384          256  TLS1.2  Native  AES     SHA384  ECDHE_RSA
 3: 49172  ECDHE-RSA-AES256-CBC-SHA         256  TLS1    Native  AES     SHA     ECDHE_RSA
 4: 49172  ECDHE-RSA-AES256-CBC-SHA         256  TLS1.1  Native  AES     SHA     ECDHE_RSA
 5: 49172  ECDHE-RSA-AES256-CBC-SHA         256  TLS1.2  Native  AES     SHA     ECDHE_RSA
 6: 49191  ECDHE-RSA-AES128-SHA256          128  TLS1.2  Native  AES     SHA256  ECDHE_RSA
 7: 49171  ECDHE-RSA-AES128-CBC-SHA         128  TLS1    Native  AES     SHA     ECDHE_RSA
 8: 49171  ECDHE-RSA-AES128-CBC-SHA         128  TLS1.1  Native  AES     SHA     ECDHE_RSA
 9: 49171  ECDHE-RSA-AES128-CBC-SHA         128  TLS1.2  Native  AES     SHA     ECDHE_RSA
10:   157  AES256-GCM-SHA384                256  TLS1.2  Native  AES-GCM  SHA384  RSA
11:   156  AES128-GCM-SHA256                128  TLS1.2  Native  AES-GCM  SHA256  RSA
12:    61  AES256-SHA256                    256  TLS1.2  Native  AES     SHA256  RSA
13:    53  AES256-SHA                       256  TLS1    Native  AES     SHA     RSA
14:    53  AES256-SHA                       256  TLS1.1  Native  AES     SHA     RSA
15:    53  AES256-SHA                       256  TLS1.2  Native  AES     SHA     RSA
16:    53  AES256-SHA                       256  DTLS1   Native  AES     SHA     RSA
17:    60  AES128-SHA256                    128  TLS1.2  Native  AES     SHA256  RSA
18:    47  AES128-SHA                       128  TLS1    Native  AES     SHA     RSA
19:    47  AES128-SHA                       128  TLS1.1  Native  AES     SHA     RSA
20:    47  AES128-SHA                       128  TLS1.2  Native  AES     SHA     RSA
21:    47  AES128-SHA                       128  DTLS1   Native  AES     SHA     RSA

2018 update and comment about PCI compliance
I tried to give the owners of e1st.smapply.org a hard time for supporting such a limited set of ciphersuites – essentially only the latest thing (which you can see yourself by running it through ssllabs.com): TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384. If I run this through SSL interception on a Symantec proxy with an older image, 6.5.10.4 from June 2017, that ciphersuite isn’t present! I had to upgrade to 6.5.10.7 from October 2017; then it was fine. But getting back to the rationale: they told me they have future-proofed their site for the new requirements of PCI, and they would not budge and support other ciphersuites (forcing me to upgrade).

Another site in that same situation is https://shop-us.bestunion.com/. I don’t know if it’s a misconception on the part of the site administrators or if they’re onto something. I’ll know more when I update my own PCI site to meet the latest requirements.

2020 Update

This year the push is to phase out TLS v1.0 and v1.1 in favor of TLS v1.2 or v1.3. My web site’s grade is now capped at a B because it still supports those older protocols.
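For apache 2.4 the corresponding knob would be something like this (a sketch – the TLSv1.1 and TLSv1.2 protocol keywords require apache built against OpenSSL 1.0.1 or later):

        SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1

With that in place only TLS v1.2 (and, with newer builds, v1.3) remains on offer.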

Additional resources and references
As you see from the above, openssl is a very useful tool, and there’s lots more you can do with it. Some of my favorite openssl commands are documented in this blog post.

A great site for testing the strength of any web site’s SSL setup, its vulnerability to POODLE, etc. is this Qualys SSL Labs testing site. No obnoxious ads either. A much more basic one is https://www.websiteplanet.com/webtools/ssl-checker/. SSLlabs is much more complete, but it only works on web sites running on the default port 443; websiteplanet is more about whether your certificate is installed properly and such.

Need to know what ciphers your browser supports? Qualys SSL Labs again to the rescue: https://www.ssllabs.com/ssltest/viewMyClient.html shows you all your browser’s supported ciphers. However, the results may not be reliable if you are using a proxy.

An excellent article explaining in technical terms what the problem with SSLv3 actually is was posted by, who else, Paul Ducklin, the Sophos NakedSecurity blogger.

This RFC discusses why TLS v 1.2 or higher is preferred over TLS 1.0 or TLS 1.1: https://tools.ietf.org/html/rfc7525

The Digicert certificate inspector includes a vulnerability assessment as well. It seems useful.

Want a readily understandable explanation of what CBC (Cipher Block Chaining) means? It isn’t too hard to understand. This is an excellent article from Sophos’ Paul Ducklin. It also explains the Sweet32 attack.

An equally detailed explanation of the openssl padding oracle vulnerability is here: https://blog.cloudflare.com/yet-another-padding-oracle-in-openssl-cbc-ciphersuites/

A fast dedicated test for CVE-2016-2107, the oracle padding vulnerability: https://filippo.io/CVE-2016-2107/. SSLlabs test is more thorough – it checks for everything – but much slower.

Compiling apache version 2.4 is described here: https://drjohnstechtalk.com/blog/2015/07/compiling-apache24-on-centos/ and more recently, here: https://drjohnstechtalk.com/blog/2020/04/trying-to-upgrade-wordpress-brings-a-thicket-of-problems/

If you want to see how your browser deals with different certificate issues (expired, bad chained CERT) as well different ciphers, this has a test case for all of that. This is very useful for testing SSL Interception product behavior. https://badssl.com/

Aimed at F5 admins, but a really good review for anyone of cipher suites, SSL vs TLS and all that, is this F5 document. I recommend it for anyone getting started.

This site will never run SSL! That can be useful when you are trying to log in to a hotel’s guest WiFi, which may not be capable of intercepting SSL traffic to force you to their sign-on page: http://neverssl.com/.

Want to test if a web site requires client certificates, e.g., for authentication? This post has some suggestions.

Conclusion
We now have some idea of what those kooky cipher strings actually mean and our eyes don’t gloss over when we encounter them! Plus, we have made our Nortel gear more secure by deploying a cipher string which disallows anonymous authentication.

It seems SSL exploits have been discovered at a reliable pace since this article was first published. It’s best to check your servers running SSL at least twice a year, or better every quarter, using the SSLlabs tool.