Categories
Admin DNS

Example of case-sensitive DNS usage

Intro
From RFC 1035, written in November, 1987:


Note that while upper and lower case letters are allowed in domain
names, no significance is attached to the case. That is, two names with
the same spelling but different case are to be treated as if identical.

The details
Now fast forward 27 years. I learned that Cisco IP phones, when resolving the Call Manager name, require that the DNS name for the Cisco Unified Call Manager come back in exactly the same upper and lower case as what is configured into the phone.

Suppose your Call Manager’s hostname was configured as CUCM.drjohnstechtalk.com and your DNS servers behaved like this:

> dig CUCM.drjohnstechtalk.com @208.109.255.46

; <<>> DiG 9.9.4-P2 <<>> CUCM.drjohnstechtalk.com @208.109.255.46
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 15899
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 3, ADDITIONAL: 1
;; WARNING: recursion requested but not available
 
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;CUCM.drjohnstechtalk.com.      IN      A
 
;; ANSWER SECTION:
cucm.drjohnstechtalk.com. 3600  IN      A       50.17.188.196

Well, every application that is compliant with this 27-year-old DNS standard would work just fine. But Cisco phones will not. If a phone is configured to use CUCM.drjohnstechtalk.com and your DNS server answers its A (address record) query with the FQDN changed to lower case, it won't "find" the Call Manager and won't boot! So it's a garbage implementation of DNS.

Shame on Cisco!

I happened to hear about this problem today, so it really can occur under the very special circumstances outlined above; it is not merely theoretical. However, mitigating circumstances abound that make this a rarely observed problem.

Mitigation
Newer DNS servers actually spit back the FQDN in the exact same case as they received it in the original query. I'm not sure at this point if this is an option or simply a change in behaviour that occurred at some point in the evolution of the ISC BIND resolver. It would be interesting to see when this behaviour changed.

The other mitigation, if you do have older DNS servers that spit back the FQDN in lower case, is to configure the hostname in your zone file using upper case to agree with the upper-case version you've configured on the phone. With either of these mitigations the DNS server response will look like this:

; <<>> DiG 9.9.4-P2 <<>> CUCM.drjohnstechtalk.com @208.109.255.46
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 15899
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 3, ADDITIONAL: 1
;; WARNING: recursion requested but not available
 
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;CUCM.drjohnstechtalk.com.      IN      A
 
;; ANSWER SECTION:
CUCM.drjohnstechtalk.com. 3600  IN      A       50.17.188.196

and the phone will be happy, seeing the case matched and will be able to contact the Call Manager so it can finish booting.
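For the zone-file mitigation, the relevant record simply carries the upper-case name. A minimal sketch using the example name, TTL and IP from the dig output above (your zone layout will of course differ):

CUCM.drjohnstechtalk.com.    3600    IN      A       50.17.188.196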

Conclusion
Cisco, of all companies, has built into its IP phones a bad DNS resolver that is case-sensitive. There are some mitigations that can be applied while waiting for them to fix this embarrassing bug.

Second example from VMWare circa June 2020
The VMWare Horizon Client v 5.4 has a similar issue. If you use a proxy PAC file with a rule like *.drjohnstechtalk.com DIRECT, that may not work for this client if the DNS entry for the hostname was entered in upper case, for instance HostName.DRJOHNSTECHTALK.COM. In that case the client behaves case-sensitively and ignores the PAC file entry which should have told it to make a DIRECT HTTP connection (without the aid of a proxy). Very unfortunate.
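One possible workaround on the PAC file side is to normalize the hostname's case before matching, since a PAC file is just JavaScript. This is only a sketch of the idea; the proxy name is made up and I'm assuming the client's PAC engine honors standard JavaScript string methods:

function FindProxyForURL(url, host) {
    // lower-case the host so the match is case-insensitive
    host = host.toLowerCase();
    if (shExpMatch(host, "*.drjohnstechtalk.com"))
        return "DIRECT";
    // everything else goes through the (hypothetical) proxy
    return "PROXY proxy.drjohnstechtalk.com:8080";
}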

References
RFC 1035 – things were so much simpler then!
ISC BIND web site.

Categories
DNS Network Technologies

The IT Detective Agency: internal DNS queries getting clobbered after bind upgrade

Intro
We’ve upgraded BIND innumerable times over the years. There’s never really been an issue. The new version just picks up and behaves exactly like the old version and all is good. But this time, in upgrading from ISC’s BIND v 9.8.5-P2 to BIND v 9.9.5-P1 something was dramatically different.

The details

Look at these queries:

> dig ns 10.in-addr.arpa

; <<>> DiG 9.9.2-P2 <<>> ns 10.in-addr.arpa
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 60248
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
 
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;10.in-addr.arpa.               IN      NS
 
;; ANSWER SECTION:
10.in-addr.arpa.        0       IN      NS      10.IN-ADDR.ARPA.
 
;; Query time: 0 msec
;; SERVER: blah, blah
;; WHEN: Wed Jun 25 09:49:30 2014
;; MSG SIZE  rcvd: 73

> dig -x 10.100.208.10

; <<>> DiG 9.9.2-P2 <<>> -x 10.100.208.10
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 6088
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1
 
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;10.208.100.10.in-addr.arpa.   IN      PTR
 
;; AUTHORITY SECTION:
10.IN-ADDR.ARPA.        86400   IN      SOA     10.IN-ADDR.ARPA. . 0 28800 7200 604800 86400
 
;; Query time: 0 msec
;; SERVER: blah, blah
;; WHEN: Wed Jun 25 09:49:56 2014
;; MSG SIZE  rcvd: 106

That is seriously bad and wrong – for us! This is a caching-only server and there are indeed RFC 1918 addresses defined on internal nameservers, such as that 10.100.208.10.

An email relay which relies on reverse lookups started to fail.

A DuckDuckGo search did not show anything relevant. Maybe Google would have.

I ultimately registered for an account at the knowledge base at isc.org, kb.isc.org, and quickly found my answer.

In fact they were crystal clear in explaining this very problem, so I hesitated to document it here, but I figure others might leap first and read the documentation later, like I did, so it might do someone else some good.

They say:

...
Although this will be effective as a workaround, administrators are urged not to just specify empty-zones-enable no;
 
It is much better to use one or more disable-empty-zone option declarations to disable only the RFC 1918 empty zones that are in use internally.  
...

That empty-zones-enable no;, by the way, is a configuration option you can toss into the options section of your main configuration file.
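For the record, here is roughly what the two approaches look like in named.conf. Only the 10.in-addr.arpa zone is shown; you would list whichever RFC 1918 reverse zones you actually serve internally:

options {
        // ... your existing options ...

        // blunt workaround (discouraged by ISC): disable all built-in empty zones
        // empty-zones-enable no;

        // better: disable only the RFC 1918 empty zones in use internally
        disable-empty-zone "10.in-addr.arpa";
};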

Case closed.

Conclusion
Our reverse lookups on the Intranet began to fail after an innocuous upgrade of the ISC BIND nameserver to version 9.9. A simple addition of an extra configuration statement resolved the matter. I guess it really is a good idea to RTFM.

References
ISC’s site is www.isc.org.
A different type of DNS clobbering was described in this case.
A word about Google’s awesome public DNS service.

Categories
DNS Scams

What if someone approaches you offering a domain?

Intro
As a domain owner you will sooner or later get an unsolicited email like the following one I received March 28th:

Hello,
 
We are promoting the sale of the domain name johnstechtalk.com that is being returned back to the open market very soon.
You own a very similar domain and we wanted to give you a first chance to secure johnstechtalk.com. If this offer is of any 
interest to you, the link below will lead you to our website where you can leave an early offer:
 
http://baselane.net/acquire/c00bsn1ub/J8jIGPiguH
 
Alternatively you can simply reply to this e-mail with your offer and we will manually process your order.
 
Here are a few quick notes about the offer:
-You are leaving an offer for full ownership and control over the domain. 
-You do not have to use our hosting or any other service, you are bidding only for the domain.
-This is a single transaction, no hidden surprises. 
-We will not give away your personal information to anybody.
-You will not need a new website or hosting you can easily redirect your existing website to point to this one.
-Our technical team stands at your disposal for any transfer/redirect issue you may have.
 
Thank you for considering our domain name service!
Please feel free to call us any time we would be really happy to hear from you!
 
Kind regards,
Domain Team

The thing is, this is not complete spam. After all, it is kind of interesting to pick up a shorter domain.

But is this a legitimate business proposition? What can we do to check it? Read on…

The details
The first reaction is “forget it.” Then you think about it and think, hmm, it might be nice to have that domain, too. It’s shorter than my current one and yet very similar, thus potentially enhancing my “brand.”

To check it out without tipping your hand, use Whois. I use Network Solutions Whois.

Doesn’t the offer above make it sound like they have control over the domain and are offering you a piece of it? Quite often that’s not at all the case. For them to control the domain to the point where they are selling it would require an upfront investment. So instead what they do in many cases I have encountered is to try to prey on your ignorance.

When I received their offer the Whois lookup showed the domain to be in status

RedemptionPeriod

From what I have read the redemption period should last 75 days. It's a time when the original owner can reclaim the domain without any penalties. No one else can register it.

If they actually owned the domain and were trying to auction it off, it would have had the standard Lock Status of

clientTransferProhibited

or

clientDeleteProhibited

Furthermore, domains being auctioned usually have special nameservers like these:

Nameservers:
  ns2.sedoparking.com
  ns1.sedoparking.com

Sedo is a legitimate auction site for domains.

johnstechtalk.com, having entered the redemption period, will be up for grabs unless the owner reclaims it.

If I had expressed interest in it I’m sure they would have obtained it, just like I could for myself, at the end of the redemption period and then sold it to me at a highly inflated price.

Not wanting to encourage such unsavory behaviour I made no reply to the offer and checked the status almost every day.

New status – it’s looking good

Last week sometime it entered a new status:

pendingDelete

I think this status persists for three days or so (I forget). Then, when that period is over it shows up as available. I bought it using my GoDaddy account for $9.99 last night – actually $11.00 because there’s an ICANN fee of $0.18 and I rounded up for charity.

And this is not the only domain I have bought this way. I bought vmanswer.com because I was annoyed by the number of unsolicited offers to “buy” it! That purpose was achieved…

But I am watching another domain that was offered to me and really did go to the auction house Sedo, where it is currently sitting (which means no one else is all that interested). I am curious to see what happens when it expires later this year.

Save the labor
How could I have avoided the trouble of those daily whois lookups? Well, on my Linux server there is the ever-handy whois, as in

$ whois johnstechtalk.com

But sometimes it gives fairly complete information and for other domains not so much. It depends on the registrar. For GoDaddy domains you get next to no information:

[Querying whois.verisign-grs.com]
[Redirected to whois.godaddy.com]
[Querying whois.godaddy.com]
[whois.godaddy.com]

I suspect it is a measure GoDaddy takes to discourage programmatic use of whois. If it answered with complete information it would be easy for a modest scripter like me to write a program that runs all kinds of queries, which of course would mostly be used by the scammers, I suppose. In particular, since I wasn't seeing the domain Lock Status from command-line whois, I didn't bother to write a program to automate my daily query. Otherwise I probably would have.
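For a registrar whose whois output does include the status, a daily cron job along these lines would have done the trick. This is just a sketch; the grep pattern and the email address are placeholders you would adjust:

0 8 * * * whois johnstechtalk.com | grep -iE 'status|pendingdelete' | mail -s "johnstechtalk.com whois status" you@example.com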

What about cybersquatters?
In the case mentioned above there is no trademark at stake. Often there is. What should you do if you receive an offer to sell you a domain name which is based on one of your own trademarks? I get lots of those as well. My approach is, of course, not to be extorted. So at first I was ignoring such solicitations. If I really want to go after the domain, I will sic my legal team on them and invoke the UDRP (ICANN's Uniform Domain-Name Dispute-Resolution Policy). The UDRP comes down heavily in favor of the trademark holder.

But lately I wanted to do something more. Since this is illicit activity at the end of the day, I look at where the email comes from. Often a Gmail account is used. I gather the headers of the message and file a formal complaint with Google’s Gmail abuse form, which I hope leads to their account being shut down. I want to at least inconvenience them without wasting too much of my own resources. Well, I don’t actually know that it works, but it makes me feel better in any case 🙂 .

This is the Gmail abuse page. Yahoo and MSN also have similar forms.

Conclusion
Unsolicited offers of similar-sounding domains are one of the many scams rampant on the Internet. But with the background I've provided, hopefully you'll be better at separating the scams from the genuine domain owners seeking to do business through auctions or private sales.

Interested in reading about other scams? Try Spam and Scams – What to Expect When You Start a Blog

Categories
Admin DNS IT Operational Excellence

The IT Detective Agency: since when can a powered off PC do dynamic DNS updates?

Intro
The IT Detectives are back after a short lull during which no great mysteries needed expert resolution – you knew that situation couldn't last too long. The following tale was relayed to me; I unfortunately cannot claim to have been any help whatsoever. The details have been somewhat obscured in this retelling.

The details
One of our DNS servers at drjohns was busy fielding lots and lots of DDNS updates. Good, right? No, not so. Because our employee PCs are all configured not to do this very thing. In Windows 7, drilling down into the advanced DNS settings, there is a checkbox for Register this connection's addresses in DNS, and that is unchecked. So although we use DHCP, the PCs shouldn't be sending DDNS updates. Yet they were. In fact at one point a considerable amount of bandwidth was being eaten up by these unwanted updates, so we had to investigate and act. But where to begin?

Word finally got around to one of our PC experts who I guess probably had his suspicions. He suggested the following test:

turn the PC off and look for DDNS updates on the DNS server

Amazingly, that's exactly what we found to be the case – DDNS updates coming from a powered-off PC. The DDNS updates did not always go to the same DNS server; the target seemed to be randomly chosen, but they were all drjohns DNS servers.
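Incidentally, if your DNS servers run ISC BIND you don't even need a packet trace to watch these updates arrive; logging the update categories will do. A minimal sketch, where the channel name and log path are my own placeholders:

logging {
        channel ddns_log {
                file "/var/log/named/ddns.log";
                print-time yes;
        };
        category update { ddns_log; };
        category update-security { ddns_log; };
};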

A Wireshark examination of a trace (taken by a network engineer) showed lots of packets labeled Dynamic Update SOA drj.com. I looked at the trace and found that this was just the summary Wireshark gives such packets, and not a very accurate one. If you expand a packet you see inside it that (mostly) it is a workstation trying to register its A record on the DNS server (a DDNS update). It wasn't literally trying to change the SOA record for the zone, though that might have been the logical result of updating its A record.

What the power-off test showed to our subject-area expert is that Intel vPro was responsible for these DDNS updates. Wait, you ask, what the heck is vPro? We didn’t know either. As I understand it, it’s an additional Intel chip that some business-class laptops (e.g., DELL Latitude) might include that permits more and better remote management, allowing perhaps even some hardware diagnostics to occur.

So let’s go back to that test. Note that I said PC powered off, I did not say disconnected from the network! Powered-off-but-network-connected produces the DDNS update, powered-off-and-disconnected – no update, of course (Hey, it’s not magic going on here!).

So the solution, obviously, is to turn off DDNS in vPro. We thought it was off, but maybe not. We expect and hope this to be the solution, but a few more days will be needed before this all plays out and we know for sure.

Conclusion
I better hold off on any conclusion until our premise is confirmed! But one feeling I have is that sometimes you have to ingratiate yourself with the right people, because no one person has all the answers!

Categories
Admin DNS Internet Mail SLES

Strange problem with email to paladinny.com

Intro
This is probably the most obscure of all postings I will ever do – it’s really just opening up my private journal to the Internet, which helps me when I need to recall how I fixed something.

So the story is that I’m having trouble sending email to anyone in the domain paladinny.com, and I just couldn’t figure out why.

The details
With my sendmail config I finally rolled up my sleeves, and did some debugging, even though I am pressed for time. Start up our sendmail debugging session:

> sendmail -Cconfig_file.cf -bt -d35.9

This produces a lot of blah, blah, configuration settings, blah, blah, and finally a sort of sendmail debugging shell. So let’s test a good “normal” domain:

> 3,0 test@gmail.com

canonify           input: test @ gmail . com
Canonify2          input: test < @ gmail . com >
Canonify2        returns: test < @ gmail . com . >
canonify         returns: test < @ gmail . com . >
parse              input: test < @ gmail . com . >
Parse0             input: test < @ gmail . com . >
Parse0           returns: test < @ gmail . com . >
ParseLocal         input: test < @ gmail . com . >
ParseLocal       returns: test < @ gmail . com . >
Parse1             input: test < @ gmail . com . >
Mailertable        input: < gmail . com > test < @ gmail . com . >
Mailertable        input: gmail . < com > test < @ gmail . com . >
Mailertable      returns: test < @ gmail . com . >
Mailertable      returns: test < @ gmail . com . >
SmartTable         input: test < @ gmail . com . >
SmartTable       returns: test < @ gmail . com . >
MailerToTriple     input: < > test < @ gmail . com . >
MailerToTriple   returns: test < @ gmail . com . >
Parse1           returns: $# esmtp $@ gmail . com . $: test < @ gmail . com . >
parse            returns: $# esmtp $@ gmail . com . $: test < @ gmail . com . >

and then this problem domain:

> 3,0 test@paladinny.com

canonify           input: test @ paladinny . com
Canonify2          input: test < @ paladinny . com >
Canonify2        returns: test < @ paladinny . no-ip . biz . >
canonify         returns: test < @ paladinny . no-ip . biz . >
parse              input: test < @ paladinny . no-ip . biz . >
Parse0             input: test < @ paladinny . no-ip . biz . >
Parse0           returns: test < @ paladinny . no-ip . biz . >
ParseLocal         input: test < @ paladinny . no-ip . biz . >
ParseLocal       returns: test < @ paladinny . no-ip . biz . >
Parse1             input: test < @ paladinny . no-ip . biz . >
Mailertable        input: < paladinny . no-ip . biz > test < @ paladinny . no-ip . biz . >
Mailertable        input: paladinny . < no-ip . biz > test < @ paladinny . no-ip . biz . >
Mailertable        input: paladinny . no-ip . < biz > test < @ paladinny . no-ip . biz . >
Mailertable      returns: test < @ paladinny . no-ip . biz . >
Mailertable      returns: test < @ paladinny . no-ip . biz . >
Mailertable      returns: test < @ paladinny . no-ip . biz . >
SmartTable         input: test < @ paladinny . no-ip . biz . >
SmartTable       returns: test < @ paladinny . no-ip . biz . >
MailerToTriple     input: < > test < @ paladinny . no-ip . biz . >
MailerToTriple   returns: test < @ paladinny . no-ip . biz . >
Parse1           returns: $# esmtp $@ paladinny . no-ip . biz . $: test < @ paladinny . no-ip . biz . >
parse            returns: $# esmtp $@ paladinny . no-ip . biz . $: test < @ paladinny . no-ip . biz . >

I have to look more into what Canonify2 does. But this gives me an idea: force the mailertable to handle paladinny . no-ip . biz the way I want it to, namely:

paladinny.no-ip.biz relay:barracuda.cblconsulting.com

because in DNS my DNS server returns this funny result:

> dig mx paladinny.com

; <<>> DiG 9.6-ESV-R7-P3 <<>> mx paladinny.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 17559
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 0
 
;; QUESTION SECTION:
;paladinny.com.                 IN      MX
 
;; ANSWER SECTION:
paladinny.com.          351     IN      CNAME   paladinny.no-ip.biz.
 
;; AUTHORITY SECTION:
no-ip.biz.              60      IN      SOA     nf1.no-ip.com. hostmaster.no-ip.com. 2052775595 600 300 604800 600
 
;; Query time: 30 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Fri Jan 18 08:53:49 2013
;; MSG SIZE  rcvd: 121

whereas Google’s public DNS says this, which looks like the intended result:

> dig mx paladinny.com @8.8.8.8

; <<>> DiG 9.6-ESV-R7-P3 <<>> mx paladinny.com @8.8.8.8
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 3749
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
 
;; QUESTION SECTION:
;paladinny.com.                 IN      MX
 
;; ANSWER SECTION:
paladinny.com.          1800    IN      MX      10 barracuda.cblconsulting.com.
 
;; Query time: 236 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
;; WHEN: Fri Jan 18 08:55:42 2013
;; MSG SIZE  rcvd: 71

So at least we know where that odd paladinny.no-ip.biz comes from, sort of. It comes from my nameserver, but where it got that answer from I have no idea. It doesn’t come from the authoritative nameservers:

> dig mx paladinny.com @dns1.name-services.com.

; <<>> DiG 9.6-ESV-R7-P3 <<>> mx paladinny.com @dns1.name-services.com.
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 45704
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; WARNING: recursion requested but not available
 
;; QUESTION SECTION:
;paladinny.com.                 IN      MX
 
;; ANSWER SECTION:
paladinny.com.          1800    IN      MX      10 barracuda.cblconsulting.com.
 
;; Query time: 82 msec
;; SERVER: 98.124.192.1#53(98.124.192.1)
;; WHEN: Fri Jan 18 08:59:50 2013
;; MSG SIZE  rcvd: 71

A CNAME is not an MX record, so the fact that my nameserver returns an answer (ANSWER: 1) when queried for the MX record, when all it thinks it has is a CNAME, seems to be an out-and-out error.

And putting the resolved name in the mailertable is also not normal. Normally you put the domain itself, as in:

paladinny.com relay:barracuda.cblconsulting.com

and of course that’s the first thing I tried, but it has no effect whatsoever.
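(For anyone following along at home: after editing the mailertable you also have to rebuild its database map before sendmail sees the change; something like this, assuming the common /etc/mail layout:)

> makemap hash /etc/mail/mailertable < /etc/mail/mailertable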

February Update and Conclusion
The mystery was solved when a whole bunch of email deliveries started failing on my system and I was forced to do some serious debugging. Long story short, my SLES system was regrettably running nscd, the name service cache daemon. I didn't even bother to re-check paladinny.com: so many other things cleared up when I killed nscd that I'm sure it was the cause of the paladinny.com issue as well. This is all described in this post.

Categories
Admin DNS Internet Mail

The IT Detective Agency: can’t get email from one sender

Intro
For this article to make any sense whatsoever you have to understand that I enforce SPF in my mail system, which I described in SPF – not all it’s cracked up to be.

The details
Well, some domain admins boldly eliminated their SOFTFAIL conditions – but didn’t quite manage to pull it off correctly! Today I ran into this example. A sender from the domain pclnet.net sent me email from IP 64.8.71.112 which I didn’t get – my SPF protection rejected it. The sender got an error:

550 IP Authorization check failed - psmtp

Let’s look at his SPF record with this DNS query:

$ dig txt pclnet.net

; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6_3.2 <<>> txt pclnet.net
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 42145
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
 
;; QUESTION SECTION:
;pclnet.net.                    IN      TXT
 
;; ANSWER SECTION:
pclnet.net.             300     IN      TXT     "v=spf1 mx ip4:24.214.64.230 ip4:24.214.64.231 ip4:64.8.71.110 ipv4:64.8.71.111 ipv4:64.8.71.112 -all"

That IP, 64.8.71.112 is right there at the end. So what’s the deal?

Well, Google/Postini was called in for help. They apparently still have people who are on the ball because they noticed something funny about this SPF record, namely, that it isn't correct. Notice that the first few IPs are prefixed with ip4? Well, the last IPs are prefixed with ipv4! They are not both valid. In fact ipv4 is not valid SPF syntax, so those IPs are not considered by programs which evaluate SPF records, hence the rejection!
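For comparison, a corrected record would use ip4 throughout. This is my reconstruction, not something the domain owner ever published:

pclnet.net.             300     IN      TXT     "v=spf1 mx ip4:24.214.64.230 ip4:24.214.64.231 ip4:64.8.71.110 ip4:64.8.71.111 ip4:64.8.71.112 -all"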

My recourse in this case was to remove SPF enforcement on an exception basis for this one domain.

Case closed!

Conclusion
It’s now a few months after my original post about SPF. I’m sticking with it and hope to increase its adoption more broadly. It has worked well, and the exceptions, such as today’s, have been few and far between. It’s a good tool in the fight against spam.

Categories
Admin DNS

The IT Detective Agency: is the guy rebooting our Linux server from France?

Intro
Sometimes you get a case that turns out to be silly and puts a smile on your face. This one is hardly worth the effort to write up, but as I want to portray the life of an IT professional in all its glory and mundanity, it's worth sharing this anecdote.

The details
Yesterday I was migrating some SLES 11 VMs from an old VMWare server to a new platform. A shutdown was required, which I did. I noticed these particular systems had barely been used and hadn't been booted in over half a year (and hence hadn't been patched in that long). I guess my enterprise monitoring system actually works, because not many minutes after the migration I got a call from the sysadmin, Fautt. He noticed the reboot, but he also noticed something else – a strange IP address associated with the reboot.

He said, over the phone, that the IP was 2.6.32.54 plus some other numbers. Of course when you're listening over the phone it's hard to both concentrate on the numbers and understand the concept at the same time. So he threw a giant red herring my way and said that that IP resolves to blah-blah-.wanadoo.fr. Wanadoo.fr caught my ear. I recognize that as an ISP in France, with, like most ISPs, a somewhat dodgy reputation (in my opinion) for sending spam. So you see he led me down this path and I took the bait. I said I would look into it. The obvious underlying yet unspoken concern is this: if someone from France is rebooting my servers we have been completely compromised. So this was potentially very serious. Yet I knew I had been the one to reboot so I wasn't actually panicked.

I ran the commands that I knew he must have run to see what he had seen. They look like this:

> last

reboot   system boot  2.6.32.54-0.3-de Thu Oct 11 14:03          (18:49)
drjohns  pts/0        sys1234.drjohns. Thu Oct 11 13:43 - down   (00:06)
root     pts/0        fauttsys.drjohns Wed Oct 10 11:16 - 11:32  (00:16)
...
root     :0           console          Wed Feb 15 15:13 - 15:55  (00:41)
root     :0                            Wed Feb 15 15:13 - 15:13  (00:00)
root     tty7         :0               Wed Feb 15 15:13 - 15:54  (00:41)
reboot   system boot  2.6.32.54-0.3-de Wed Feb 15 15:07         (33+19:43)
fauttdr  pts/1        fauttsys.drjohns Wed Feb 15 14:38 - down   (00:21)

Actually he told me he had just done a last reboot, but this presentation above makes the illusion more dramatic!

So the last entries contain the IP of the system where the admin logged in from, or domain names if the IP could be reverse looked up in DNS.

But, much like a magician who draws your attention to where he wants it to go, it is all a clever illusion. If you run

> dig -x 2.6.32.54

as Fautt surely must have, you get

; <<>> DiG 9.6-ESV-R5-P1 <<>> -x 2.6.32.54
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 7990
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
 
;; QUESTION SECTION:
;54.32.6.2.in-addr.arpa.                IN      PTR
 
;; ANSWER SECTION:
54.32.6.2.in-addr.arpa. 42307   IN      PTR     ABordeaux-652-1-113-54.w2-6.abo.wanadoo.fr.
 
;; Query time: 3 msec
;; SERVER: 10.201.145.20#53(10.114.206.104)
;; WHEN: Fri Oct 12 09:02:21 2012
;; MSG SIZE  rcvd: 96

But I figured that if this was a hack, there might be some mention in Google concerning this notorious IP. I searched for 2.6.32.54. The first few matches talked about a Linux kernel.

That was the duh moment! A quick

> uname -a

on the server to confirm:

Linux drjohns28 2.6.32.54-0.3-default #1 SMP 2012-01-27 17:38:56 +0100 x86_64 x86_64 x86_64 GNU/Linux

So clearly the reboot line in the last output was using a different convention than the other entries and was providing the kernel version!

I told Fautt and he was duly embarrassed but thought it was funny (there aren’t many good jokes in a sysadmin’s routine, apparently :)).

Case closed.

Conclusion
A good night’s rest is important. When you’re tired you’re much more prone to superficial thinking that allows you to be drawn into someone else’s completely wrong picture of events.

Categories
DNS IT Operational Excellence Network Technologies

The IT Detective Agency: Roku player can’t connect to Internet

Intro
I bought a used Roku player, just to try it out. Things didn’t go right at first.

The details
Setting it up, I got to the point where it tests its Internet connection. There are three status lights. The first two were OK, but the Internet connection returned an error, I think error #9. It didn’t matter whether I used a wired or wireless connection.

I have an unusual Internet router at home, a Juniper SSG5. All my other devices off that router were getting to Internet just dandy however. Long story short, it turns out that there was a slight DNS misconfiguration. The Juniper had itself as secondary DNS server. There was no primary DNS server. Why this only affected Roku is still a mystery.

I updated the Juniper config to use 4.2.2.3 as the primary DNS server, and the Roku connected just fine; it worked wirelessly as well.
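For reference, on the SSG5 the ScreenOS CLI change is something like the following. I'm quoting the syntax from memory, so double-check it against your ScreenOS version:

set dns host dns1 4.2.2.3
save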

Case closed!

Conclusion
Well, as a network type-of-guy, I would have appreciated access to the Roku Linux OS so I could have seen for myself what was going on. I'm sure I would have figured out the problem much sooner than I did. The error message was not particularly helpful, and most of the online discussions attribute that error to causes other than the DNS issue discovered here. Short of OS access, more robust debugging tools, such as ping, would have been mighty handy.

Categories
DNS

"FBI's" DNS Servers – Trying to Clear Some of the Hysteria

Intro
This Fox News article about infected computers was brought to my attention:
Hundreds of thousands may lose Internet in July

Hey, their URLs look like my URLs. Wonder if they’re using WordPress.

But I digress. I was asked if this is a hoax. I really had to puzzle over it for a while. It smells like a hoax. I didn't find any independent confirmation on the isc.org web site, where I get my BIND source files from. I didn't see a blog post from Paul Vixie mentioning this incident.

And the comments. Oh my word the comments. Every conspiracy theorist is eating this up, gorging on it. The misinformation is appalling. There is actually no single comment as of this writing (100+ comments so far) that lends any technical insight into the saga.

Let’s do a reset and start from a reasonable person basis, not political ideology.

To recount, apparently hundreds of thousands of computers got hacked in a novel way: they had their DNS server settings replaced with DNS servers owned by hackers. In Windows you can see your DNS servers with

> ipconfig /all

in a CMD window.

Look for the line that reads

DNS Servers…
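To save scrolling through all that output you can filter for just the relevant lines (findstr being, roughly, the Windows equivalent of grep):

> ipconfig /all | findstr /i /c:"DNS Servers"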

These DNS servers could redirect users to web sites that encouraged them to fill out surveys, which generated profit for the hackers.

It’s a lot easier to control a few servers than it is to fix hundreds of thousands of desktops. So the FBI got permission through the courts to get control of the IP addresses of the DNS servers involved and decided to run clean DNS servers. This would keep the infected users working, and they would no longer be prompted to fill out surveys. In fact they would probably feel that everything was great again. But this solution costs money to maintain. Is the FBI running these DNS servers? I highly, highly doubt it. I’ll bet they worked with ISC (isc.org) who are the real subject matter experts and quietly outsourced that delicate task to them.

So in July, apparently, (I can’t find independent confirmation of the date) they are going to pull the plug on their FBI DNS servers.

On that day some 80,000 users in the US alone will not be able to browse Internet sites as a result of this action, and hundreds of thousands outside of the US.

If the FBI works with some DNS experts – and Paul Vixie is the best out there and they are apparently already working with him – they could be helpful and redirect all DNS requests to a web site that explains the problem and suggests methods for fixing it. It’s not clear at this point whether or not they will do this.

That’s it. No conspiracy. The FBI was trying to do the right thing – ease the users off the troubled DNS services to somewhat minimize service disruption. I would do the same if I were working for the FBI.

Unfortunately, feeding fodder to the conspiracists is the fact that the mentioned web site, dcwg.org, is not currently available. The provenance of that web site is also hard to scrutinize as it's registered to an individual, which looks fishy. But upon digging deeper I have to say that probably the site is just overwhelmed right now. It was first registered in November – around the time this hack came to light. It stands for DNS Changer Working Group.

Comparitech provides more technical details in this very well-written article: DNS changer malware.

Another link on their site that discusses this topic: check-to-see-if-your-computer-is-using-rogue-DNS

Specific DNS Servers
I want to be a good Netizen and not scan all the address ranges mentioned in the FBI document. So I took an educated guess as to where the DNS servers might actually reside. I found three right off the bat:

64.28.176.1
213.109.64.1
77.67.83.1

Knowing actual IPs of actual DNS servers makes this whole thing a lot more real from a technical point-of-view. Because they are indeed unusual in their behaviour. They are fully resolving DNS servers, which is a rarity on the Internet but would be called for by the problem at hand. Traceroute to them all shows the same path, which appears to be somewhere in NYC. A DNS query takes about 17 msec from Amazon’s northeast data center which is in Virginia. So the similarity of the path seems to suggest that they got hold of the routing for these subnets and are directing them to the same set of clean DNS servers.

Let’s see what else we can figure out.

Look up 64.28.176.1 on arin.net Whois. More confirmation that this is real and subject to a court order. In the comments section it says:

In accordance with a Court Order in connection with a criminal investigation, this registration record has been locked and during the period from November 8, 2011 through March 22, 2012, any changes to this registration record are subject to that Court Order.

I thought we could gain even more insight by looking up 213.109.64.1, which is in an address range normally allocated outside of North America, but I can't figure out too much about it. You can look it up in the RIPE database (ripe.net). You will in fact see it is registered to Prolite Ltd in Russia. No mention of a court order. So we can speculate that, unlike arin.net, RIPE did not bother to update their registration record after the FBI got control. So Prolite Ltd may either have been an active player in the hack, or merely had some of their servers hacked and used for the DNSChanger DNS server service.

Of course we don't know what the original route looked like, but I bet it didn't end in New York City, although that can't be ruled out. But now it does. I wonder how the FBI got control of that subnet and whether it involved international cooperation.

7/9 update
The news reports re-piqued my interest in this story. I doubled back and re-checked those three public DNS servers I had identified above. 64.28.176.1, etc. Sure enough, they no longer respond to DNS queries! The query comes back like this:

$ dig ns com @77.67.83.1

; <<>> DiG 9.7.3-P3-RedHat-9.7.3-8.P3.el6_2.2 <<>> ns com @77.67.83.1
;; global options: +cmd
;; connection timed out; no servers could be reached

Conclusion
As a DNS and Internet expert I have some greater insight into this particular news item than the average commentator. Interested in DNS? Here’s another article I wrote about DNS: Google’s DNS Servers Rock!

References and related
Here’s a much more thorough write-up of the DNS Changer situation: http://comparitech.net/dnschangerprotection

Categories
Admin DNS Proxy

The IT Detective Agency: Browsing Stopped Working on Internet-Connected Enterprise Laptops

Intro
Recall our elegant solution to DNS clobbering and use of a PAC file, which we documented here: The IT Detective Agency: How We Neutralized Nasty DNS Clobbering Before it Could Bite Us. Things were going smoothly with that kludge in place for months. Then suddenly they weren't. What went wrong and how did we fix it? Read on.

The Details
We began hearing reports of increasing numbers of users not being able to access Internet sites on their enterprise laptops when directly connected to Internet. Internet Explorer gave that cryptic error to the effect Internet Explorer cannot display the web page. This was a real problem, and not a mere inconvenience, for those cases where a user was using a hotel Internet system that required a web page sign-on – that web page itself could not be displayed, giving that same error message. For simple use at home this error may not have been fatal.

But we had this working. What the…?

I struggled as Rop demoed the problem for me with a user’s laptop. I couldn’t deny it because I saw it with my own eyes and saw that the configuration was correct. Correct as in how we wanted it to be, not correct as in working. Basically all web pages were timing out after 15 seconds or so and displaying cannot display the web page. The chief setting is the PAC file, which is used by this enterprise on their Intranet.

On the Internet (see above-mentioned link) the PAC file was more of an annoyance and we had aliased the DNS value to 127.0.0.1 to get around DNS clobbering by ISPs.

While I was talking about the problem to a colleague I thought to look for a web server on the affected laptop. Yup, there it was:

C:\Users\drj>netstat -an|more

Active Connections

  Proto  Local Address          Foreign Address        State
  TCP    0.0.0.0:80             0.0.0.0:0              LISTENING
  TCP    0.0.0.0:135            0.0.0.0:0              LISTENING
  ...

What the…? Why is there a web server running on port 80? How will it respond to a PAC file request? I quickly got some hints by hitting my own laptop with curl:

$ curl -i 192.168.3.4/proxy.pac

HTTP/1.1 404 Not Found
Content-Type: text/html; charset=us-ascii
Server: Microsoft-HTTPAPI/2.0
Date: Tue, 17 Apr 2012 18:39:30 GMT
Connection: close
Content-Length: 315


Not Found

Not Found


HTTP Error 404. The requested resource is not found.

So the server is Microsoft-HTTPAPI, which upon investigation seems to be a Microsoft Web Deployment Agent Service (MsDepSvc).

The main point is that I don't remember that being there in the past. I felt its presence, probably a new "feature", explained the current problem. What to do about it, however??

Since this is an enterprise, not a small shop with a couple PCs, turning off MsDepSvc is not a realistic option. It’s probably used for peer-to-peer software distribution.

Hmm. Let's review why we think our original solution worked in the first place. It didn't work so well when the DNS was clobbered by my ISP. Why? I think because the ISP put up a web server when it encountered an NXDOMAIN DNS response, and that web server gave a 404 not found error when the browser searched for the PAC file. Pointing the DNS entry at the loopback interface, 127.0.0.1, gave the browser a valid IP, one that it would connect to and quickly receive a TCP RST (reset) from. Then it would happily conclude there was no way to reach the PAC file, not use it, and try DIRECT connections to Internet sites. That's my theory and I'm sticking to it!

In light of all this information the following option emerged as most likely to succeed: set the Internet-facing DNS value for the PAC file hostname to a valid IP, and specifically one that would send a TCP RST (essentially meaning that TCP port 80 is reachable but there is no listener on it).
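A quick sanity check for any candidate IP is to request the PAC file from it with curl: a host that sends a RST fails immediately with a connection-refused error, while a poor choice just hangs until the timeout. The IP below is a documentation placeholder, not the address we actually used:

$ curl -v --connect-timeout 5 http://192.0.2.10/proxy.pac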

We tried it and it seemed to work for me and my colleague. We no longer had the problem of not being able to connect to Internet sites with the PAC file configured and directly connected to Internet.

I noticed however that once I got on the enterprise network via VPN I wasn’t able to connect to Internet sites right away. After about five minutes I could.

My theory about that is that I had a too-long TTL on my PAC file DNS entry. The TTL was 30 minutes. I shortened it to five minutes. Because when you think about it, that DNS value should get cached by the PC and retained even after it transitions to Intranet-connected.
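In zone file terms that amounts to dropping the record's TTL from 1800 to 300 seconds; a sketch with a made-up hostname and placeholder IP standing in for the real PAC file host:

proxy.drjohnstechtalk.com.    300     IN      A       192.0.2.10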

I haven’t retested, but I think that adjustment will help.

Conclusion
I also haven’t gotten a lot of feedback from this latest fix, but I’m feeling pretty good about it.

Case: again mostly solved.

Hey, don’t blame me about the ambiguity. I never hear back from the users when things are working 🙂

References
A closely related case involving Verizon “clobbering” TCP RST packets also bit us.