Categories
Admin Internet Mail

SPF – not all it’s cracked up to be

Intro
I recently implemented strict enforcement of Sender Policy Framework (SPF) records on a mail server. The number of false positives wasn't worth the added benefit.

The details
When I want to learn about SPF I turn to the Wikipedia article. Many secure mail gateways offer the possibility of filtering out emails based on whether the sending MTA is in the list allowed by that domain's SPF record in DNS. I began to turn on enforcement domain by domain. First bbb.org, since their domain was spoofed by an annoying spam campaign earlier this year. Then fedex.com (another perennial favorite), aexp.com, amazon.com, ups.com, etc.

I was stricter than the standard. FAIL -> reject; SOFTFAIL -> reject! And it seemed to work well. Last week I went whole hog and turned on SPF enforcement for all domains with a defined SPF record, but with slightly relaxed standards: FAIL -> reject; SOFTFAIL -> quarantine. While it's true that it may have helped prevent some new spam campaigns, complaints started mounting. Stuff was going into quarantine that never used to!

I knew SOFTFAIL must be the issue. Most(?) domains seem to have a SOFTFAIL condition. Here is a typical example for the domain communica-usa.com:

$ dig txt communica-usa.com

; <<>> DiG 9.7.3-P3-RedHat-9.7.3-8.P3.el6_2.2 <<>> txt communica-usa.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 15711
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
 
;; QUESTION SECTION:
;communica-usa.com.             IN      TXT
 
;; ANSWER SECTION:
communica-usa.com.      600     IN      TXT     "v=spf1 ip4:72.241.62.1/26 ~all"

So the ~all at the end means they have defined a SOFTFAIL that matches "everything else." And if a domain does have a SOFTFAIL contingency in its SPF record, this is almost always how it is defined. Which, when you think about it, is formally the same as not having bothered to define an SPF record at all, because the action the standard recommends for SOFTFAIL is essentially to accept the message anyway.

I reasoned as follows: why bother defining an SPF record unless you know what you’re doing and you feel you actually can identify the MTAs your mail will come from? Except when you’re first starting out and learning the ropes. That’s why I took stronger action than the standard suggested for SOFTFAIL.
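Here, for what it's worth, is a minimal sketch of how you might survey which qualifier a set of domains uses on its catch-all mechanism, building on the dig query above. The domain list is just illustrative, and the grep does not follow include: or redirect= modifiers:

# Sketch: report the catch-all qualifier (-all, ~all, ?all) in each domain's SPF record
for domain in bbb.org fedex.com amazon.com ups.com communica-usa.com; do
    spf=$(dig +short txt "$domain" | grep 'v=spf1')
    # pull out the last "all" mechanism together with its qualifier, if any
    qualifier=$(echo "$spf" | grep -o '[-~?+]\?all' | tail -1)
    printf "%-25s %s\n" "$domain" "${qualifier:-none found}"
done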

A learning experience
Well, it wasn't immediate, but after a week to ten days the complaints started rolling in. Looking at what had actually been released from quarantine was an eye-opener. It confirmed something I had seen before, namely that only a minority of users speak up, so the problems brought to my attention amounted to the tip of the iceberg. The problem of my own creation was actually quite widespread. I investigated a few by hand. Yup, they were sent from IPs not permitted by the narrowly defined part of their SPF record, and they were repeat offenders: domains such as the aforementioned communica-usa.com, right.com, redphin.com, etc., etc. I haven't gotten an explanation for a single infringement – finding someone in these small companies with sufficient knowledge to have a discussion with is a real challenge.

Waving the white flag
I had to back down on quarantining SOFTFAILs in the global SPF setting. Now it's SOFTFAIL -> pass. I could have made domain-by-domain exceptions, but when the number of problem domains climbed over a hundred I decided against that reactive approach. I kept the original set of defined domains, ups.com, etc., with their more aggressive settings. These are larger companies anyway, which either didn't have a SOFTFAIL defined at all or never needed it.

August 2013 update
Well, a year has passed since this experiment – long enough to establish a trend. I was hoping, by writing this article, to nudge the community in the direction of wider adoption of SPF usage and, more importantly, enforcement. Alas, I can now attest that there is no perceptible trend in either direction. How do I know? Because even with our more lax enforcement, we still ran into problems with lots of senders. That is, they were too, how do I say this politely, confused, and apparently unable to make their strict SPF records actually match their sending hosts! Incredible, but true. If there were widespread or even spotty adoption of enforcement, say 5 to 10% of MTAs enforcing strict SPF records, those MTAs would also have refused messages from senders like this, and the senders would have been motivated to fix their self-inflicted problems. But, in reality, what I have seen is a complete lack of understanding among the dozen or so companies with this issue, and a complete inability to resolve it, forcing me to make exceptions for their errors.

SPF records can be bad for a number of reasons. Some companies publish two TXT records, and that can be a source of problems. Here are a couple of examples where I ran into this issue:

scterm.com

scterm.com.             3600    IN      TXT     "v=spf1 ip4:65.122.37.36 include:spf.protection.outlook.com -all"
scterm.com.             3600    IN      TXT     "MS=ms56827535"

atlantichealth.org

atlantichealth.org.     3600    IN      TXT     "v=spf1 mx ip4:198.140.183.244/32 ip4:198.140.183.245/32 ip4:198.140.184.244/32 ip4:198.140.184.245/32 ip4:207.211.31.0/25 ip4:205.139.110.0/24 ip4:205.139.111.0/24 -all"
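
If you want to check a sender's published records yourself, a quick sketch like the following (domains taken from the examples above) shows how many TXT records each domain publishes and how many of those are SPF records. Publishing more than one v=spf1 record is an outright error per the SPF specification:

# Sketch: count TXT records and v=spf1 records for each domain
for domain in scterm.com atlantichealth.org; do
    txt=$(dig +short txt "$domain")
    echo "$domain: $(echo "$txt" | grep -c .) TXT record(s), $(echo "$txt" | grep -c 'v=spf1') of them v=spf1"
done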

So my guesstimate is < 1% adoption of even the most conservative (meaning least likely to reject legitimate emails) SPF enforcement. Too bad about that. Spammers really appreciate it, though.

Conclusion
Being forced to back off aggressive SPF settings means that domains with SOFTFAIL defined as "~all" – which is most of them – are lost to any enforcement. It's a shame. SPF sounded so promising to me at first. Spammers are really good at controlling botnets, but terrible at controlling corporate MTAs. Still, I do advocate for wider adoption of SPF, without the SOFTFAIL condition, of course!

References
A nice SPF validator is found here. It has no obnoxious ads or spyware.

Categories
Admin Web Site Technologies

unece.org site is busted again – how to access it anyways

Intro
I don't usually provide tips on productive use of Internet tools, but since I needed this one myself today and wasn't 100% sure it was going to work, I think it's worth mentioning.

unece.org – completely broken
A not-so-pretty error message gets spit out, including the text:

Configuration Error: 404 page "/webhome/unece.org/www/fileadmin/read.txt" could not be found.

which reveals just a little too much about the hosting environment (i.e., shared) in my opinion.

Who cares? I like this site because it can act as a sort of arbiter in assigning three-letter names to virtually any locale around the world that has more than a few thousand people. Put the UN two-letter country code in front of that and you have a short, unique five-letter reference to any town or city in the world; New York, for example, becomes US NYC. This is my bookmark: http://www.unece.org/cefact/locode/service/location.htm

But today it’s not working. Maybe they’ll fix it tomorrow. Who knows? But I need a location code today. What to do? Well, it’s clearly a simple-minded site with static content – a perfect candidate for the Wayback machine at archive.org.

Long story short, yes, a mid-2011 version of the site was archived and is available. The archived link becomes:

http://web.archive.org/web/20110605223242/http://www.unece.org/cefact/locode/service/location.htm

And, cool, we’re in business and able to look up location codes!
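As an aside, if you want to check programmatically whether archive.org holds a snapshot of a page before constructing such a link by hand, a quick query against the Wayback Machine's availability API does the job. This is just a sketch; as I understand it, the JSON response lists the closest snapshot's URL under archived_snapshots:

# Sketch: ask the Wayback Machine whether it has a snapshot of a given URL
url="http://www.unece.org/cefact/locode/service/location.htm"
curl -s "http://archive.org/wayback/available?url=${url}"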

Categories
DNS IT Operational Excellence Network Technologies

The IT Detective Agency: Roku player can’t connect to Internet

Intro
I bought a used Roku player, just to try it out. Things didn’t go right at first.

The details
Setting it up, I got to the point where it tests its Internet connection. There are three status lights. The first two were OK, but the Internet connection returned an error, I think error #9. It didn’t matter whether I used a wired or wireless connection.

I have an unusual Internet router at home, a Juniper SSG5. All my other devices off that router were getting to the Internet just fine, however. Long story short, it turns out that there was a slight DNS misconfiguration. The Juniper had itself as the secondary DNS server, and there was no primary DNS server at all. Why this only affected the Roku is still a mystery.

I updated the Juniper config to use 4.2.2.3 as the primary DNS server, and the Roku connected just fine; it worked wirelessly as well.
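For reference, here is a sketch of what that change looks like from the ScreenOS CLI on the SSG5. I am quoting the syntax from memory, so treat it as approximate; 4.2.2.3 is the resolver I used, and a secondary resolver can be set the same way with dns2:

# ScreenOS CLI sketch (from memory): set the primary DNS server and save the config
set dns host dns1 4.2.2.3
save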

Case closed!

Conclusion
Well, as a network type of guy, I would have appreciated access to the Roku's Linux OS so I could have seen for myself what was going on. I'm sure I would have figured out the problem much sooner than I did. The error message was not particularly helpful, and most of the online discussions attribute that error to causes other than the DNS issue discovered here. Short of OS access, more robust debugging tools, such as PING, would have been mighty handy.

Categories
Admin Security Web Site Technologies

DOS Attack from South Korea

Intro
Last week I witnessed a denial-of-service (DOS) attack against a web server. This is disconcerting, to say the least. I felt that by putting the information out in the open, some leverage might be applied to the party responsible for that server or subnet.

The source of the attack was a single IP, 58.180.70.160.

Some Details
The attack occurred July 31st.

It consisted of over 100,000 object accesses.

I do not see evidence of an actual hack.

It lasted about 10 hours, from 12:30 PM EST to 10:57 PM, with a few preliminary accesses starting at 4 AM before the onslaught at noon.

Clearly some kind of tool was used. It is very aggressive around forms, doing lots of variations in its POST to the same form, over and over. Here’s one small snippet that gives the flavor:

58.180.70.160 - - [31/Jul/2012:22:53:05 -0400] "POST /[omitted]/forgot_password.jsp../../../../../../../../../../etc/passwd/./././././././././././[pattern repeated - total URI is over 4000 characters!]

Then there's the fishing around for files which, I suppose, may become a vector for attack if present, like these lines:

58.180.70.160 - - [31/Jul/2012:12:31:47 -0400] "GET /qlc9tjIqjR.cfm HTTP/1.1" 404 292 "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0)" "-" "-"
58.180.70.160 - - [31/Jul/2012:12:31:48 -0400] "GET /_vti_pvt/authors.pwd HTTP/1.1" 404 292 "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0)" "-" "-"
58.180.70.160 - - [31/Jul/2012:12:31:47 -0400] "GET / HTTP/1.1" 200 9491 "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0)" "-" "-"
58.180.70.160 - - [31/Jul/2012:12:31:48 -0400] "GET /crossdomain.xml HTTP/1.1" 404 292 "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0)" "-" "-"
58.180.70.160 - - [31/Jul/2012:12:31:48 -0400] "GET /solr/select/?q=test HTTP/1.1" 404 292 "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0)" "-" "-"
58.180.70.160 - - [31/Jul/2012:12:31:47 -0400] "GET /inexistent_file_name.inexistent0123450987.cfm HTTP/1.1" 404 292 "-" "<script>alert(12345)</script>" "-" "-"
58.180.70.160 - - [31/Jul/2012:12:31:47 -0400] "GET http://www.acunetix.wvs HTTP/1.1" 404 292 "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0)" "-" "-"
58.180.70.160 - - [31/Jul/2012:12:31:47 -0400] "GET /index HTTP/1.1" 404 292 "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0)" "-" "-"
58.180.70.160 - - [31/Jul/2012:12:31:47 -0400] "GET /server-info HTTP/1.1" 404 292 "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0)" "-" "-"
58.180.70.160 - - [31/Jul/2012:12:31:48 -0400] "POST /_vti_bin/shtml.exe?_vti_rpc HTTP/1.1" 405 124 "-" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0)" "-" "-"

The line mentioning Acunetix is, I think, the key to understanding this attack. The tool is leaving a breadcrumb trail to let the attacked site know what was used: the Acunetix WVS Free Edition scanner.

So it's a semi-legitimate tool, or at least one openly referenced by Google. But in the hands of a malicious or simply ignorant user, such a tool actually does become a DOS weapon. Its aggressive POSTs often have consequences for back-end databases. A medium-duty site is typically not tuned to handle 8,000 POSTs per hour, which is the rate they came in – over two per second. So the site can be tied up servicing these POSTs and not have resources left over to handle legitimate requests.
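For what it's worth, the hourly rate is easy to pull out of an Apache-style access log with a couple of one-liners like these (the log path is an assumption; adjust it for your environment):

# Sketch: tally requests per hour of day from the attacking IP, then count just the POSTs
grep '^58\.180\.70\.160 ' /var/log/httpd/access_log | cut -d: -f2 | sort | uniq -c
grep '^58\.180\.70\.160 ' /var/log/httpd/access_log | grep -c '"POST '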

To get some background on where this originated and from whom, I did a whois search at ARIN, the regional Internet registry for North America. This told me that the address space has been delegated to APNIC, the Asia-Pacific registry. So I did a whois search at APNIC. That showed me that the range is in turn delegated to KRNIC, the Korean registry! So it's off to KRNIC and their whois… That in turn gave me the most specific information of all:

More specific assignment information is as follows.
 
[ Network Information ]
IPv4 Address       : 58.180.68.0 - 58.180.71.255 (/22)
Network Name       : SHINBIRO-INFRA
Organization Name  : ONSE Telecom
Organization ID    : ORG2324
Address            : 2F Bundang Center, Onse IDC, 235-230, 
Zip Code           : 448-500
Registration Date  : 20081107
Publishes          : Y
 
 See the Contact Info
[ Technical Contact Information ]
Name               : IP Manager
Organization Name  : ONSE Telecom
Zip Code           : 448-500
Phone              : +82-2-1666-0120
E-Mail             : [email protected]
 
 
- KISA/KRNIC Whois Service -

Obviously it doesn't get so specific that it tells me who owns the attacking host, but the mentioned subnet, 58.180.68.0/22, is not too big, all things considered, so that's quite specific compared to where we were when we started our journey. It seems that it's administered by ONSE Telecom in South Korea.
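If you want to retrace that chain yourself, the registry hops look roughly like this from the command line. The whois server hostnames below are the customary ones, but they do change from time to time, so consider this a sketch:

# Sketch: follow the delegation chain by querying each registry's whois server directly
whois -h whois.arin.net  58.180.70.160   # ARIN refers you to APNIC
whois -h whois.apnic.net 58.180.70.160   # APNIC refers you to KRNIC
whois -h whois.krnic.net 58.180.70.160   # KRNIC returns the assignment shown above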

I tried to contact them by email – the email bounced. I also wrote to [email protected], which also bounced.

KRNIC is run by KISA, which claims to be an Internet security enterprise (fat chance in my book). I filled out their online contact form – it consistently failed to accept my message, no matter how simple I made it.

At this point I decided that this was too many offenses. The owners of that subnet, 58.180.68.0/22, are not playing by the rules of the Internet, and they should be cut off from the Internet until they follow the rules.

So I set up rules to discard all packets from subnet 58.180.68.0/22 and I recommend that others do the same on their web sites.
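On a Linux web server, for example, a rule like the following does the trick. This is a sketch using iptables; whatever firewall you run will have an equivalent:

# Sketch: silently drop all traffic from the offending /22 (Linux iptables example)
iptables -I INPUT -s 58.180.68.0/22 -j DROP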

Amazon Cloud not very sophisticated with regard to security
I initially tried to take my own advice and sought to put the above subnet into a deny security rule on my Amazon cloud service. I work with firewalls, and this sort of thing is commonplace and has been around for well over 15 years. But you can't do it with the Amazon Cloud security service! I was shocked. They have so many sophisticated services and have clearly thought out this "cloud thing" better than just about anyone, but they didn't stop to listen to customer requests on this basic security issue: an "allow all but these subnets" type of capability.

Conclusion
A DOS attack was observed emanating from a host in South Korea using Acunetix's free WVS scanner. The owner of the subnet does not post legitimate contact information and should be banned from Internet connectivity for this egregious behaviour, which was perpetrated using its address space.

Categories
Admin Linux SLES

The IT Detective Agency: A couple of our SLES servers are running very slowly

Intro
Sometimes truth is stranger than fiction, even in the IT realm. I actually had a mere supporting role in this case – credit must be given to my persistent and accomplished friends for finding the root cause.

This unlikely case begins with a serendipitous accident
An incident accidentally gets assigned to me about a couple of application servers running slowly – the echo of command-line typing comes in fits and starts. Well, I quickly decide it's not my issue and look around to see who should handle it. Probably the network specialist, right? No other server is having this issue, and the servers with the issue are responding fine to PING on their local segment. But still, it sounds like a network problem. For instance, an interface whose duplex setting is mismatched with its switch port.

But I decide to be nice about it and approach the guy with the problem, "Freg," and ask him what he thinks should be done with the ticket. He takes the opportunity to show me the problem in person. So I listen to his story and politely look. This took place on July 31st, mind you (yesterday). No one had been using these servers until they tried today; they are too slow to use now. The last time they were used was about 50 days ago – they were fine then. There are two servers with a similar problem.

And it goes on like that. These are running an ERP app which starts a Java process. Both of these servers are VMs. So lots of facts are being thrown at me. Maybe some are relevant and some not. I have never heard of these servers and am not familiar with the app. No root access is available to us, but we can log on as the same user who runs the ERP app.

I do an uptime. It's up 54 days. More importantly, the load average is very high – 25, and pinned there, because the 5- and 15-minute averages are also around 25. I now feel comfortable explaining the slow character echo. So it's not a network problem after all… Talk to a sysadmin is my advice. But he sits there expecting me to do more, to somehow offer something helpful…
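If you want to see this for yourself, a couple of standard commands, sketched here, show the sustained load and which processes are eating the CPU; the PID in the first column of the ps output is what you would feed to strace:

# Sketch: confirm the load averages and list the top CPU consumers
uptime
ps -eo pid,pcpu,user,comm --sort=-pcpu | head -5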

So I hem and haw and do essentially the one thing I know how to do in these circumstances: run strace. I get the process number of the offending process and try to get some insight into what it’s spending its time on. I don’t do this very often, and when I do I usually don’t learn very much, but it is another piece of the puzzle: I have learned more than when I started out. Let’s say the process number was 26743, then I ran:

> strace -f -p 26743

and simply watched the output. The output was weird. It was filled with calls to a function I had never seen before: futex. In fact it was all futex calls, looping rapidly, the same calls over and over, producing timeout errors.

You can look in section 2 of the man pages:

> man -s2 futex

on a Linux system and see for yourself that futex is the fast userspace locking system call. Don't ask me. I'd say it's a kernel thing. But it intuitively doesn't seem right that a program would be using excessive amounts of CPU doing nothing but this one system call. A healthier program will be seen making a nice variety of calls, especially network-related ones like open, read, socket, gethostbyname, etc.

A popular cause ruled out
I also checked out /etc/resolv.conf. It often happens that a sysadmin messes that file up with an invalid nameserver, or even, as I saw just the other day, a nameserver line that omitted the nameserver directive and contained only the IP address of the nameserver! The symptom of that is different, though: the initial login prompt comes slowly (as it times out doing a reverse lookup on your source IP address), but character echo after that is normal.
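For reference, a correct, minimal /etc/resolv.conf looks like the sketch below; every line needs the nameserver keyword in front of the IP address (the resolver addresses here are just illustrative):

# /etc/resolv.conf (sketch; resolver addresses are illustrative)
nameserver 4.2.2.3
nameserver 8.8.8.8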

The leap second
Other ideas for isolating the problem: reboot, but turn off this process at start-up. I think the reboot did help. But our sysadmin found that in fact this is a known issue on SUSE Linux Enterprise Server (SLES).

From SUSE's support documentation we learn that a leap second was added June 30th, 2012, and there is a problem associated with it. It "can cause applications that are using FUTEXes to consume 100% of CPU. The issue is present in all Linux kernel versions >= 2.6.22, therefor affecting SLES 11 SP1 and later releases." Wow!

The remedy in that case is to execute the command

$ date -s "$(LC_ALL=C date)"

to trigger a clock_was_set() call in the kernel. We did this and it seems to have fixed our issue.

Case closed.

Conclusion
The best sleuthing involves multiple people looking at things. Sometimes their individual breakthroughs need to be combined. Here the incidental observation of futex calls helped associate a CPU problem with a kernel bug related to the leap second implementation. This also explains why the problem did not exist 50 days ago – that was before June 30th.