Categories
Admin Network Technologies Raspberry Pi

Basic networking: creating a virtual IP in Debian Linux

Intro
A quick Internet search showed a couple top-level matches that didn’t quite work for me, so I’m documenting how I got my multiple IP assignments on one interface to stick. This was work done for my Raspberry Pi, but it should apply to any Debian Linux system.

The details
I made my file /etc/network/interfaces look as follows:

auto lo
auto eth0
auto eth0:0
 
iface lo inet loopback
# DrJ change: make IP static
# somewhat inspired by http://www.techiecorner.com/486/how-to-setup-static-ip-in-debian/ - DrJ 1/8/13
#iface eth0 inet dhcp
iface eth0 inet static
address 192.168.2.100
gateway  192.168.2.1
netmask 255.255.255.0
network 192.168.2.0
broadcast 192.168.2.255
 
# virtual IP on eth0
iface eth0:0 inet static
address 10.31.42.11
netmask 255.255.255.0
network 10.31.42.0
broadcast 10.31.42.255

I think the key statements, which are missing from some people’s examples, are the lines at the top of the file:

auto lo
auto eth0
auto eth0:0

When I didn’t have those I was finding that my primary IP was defined upon reboot, but not my virtual IP, although the virtual IP could be dynamically created with a simple

sudo ifup eth0:0

Still, I wanted it to survive a reboot and adding the auto lines did the trick.
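Since the symptom only shows up after a reboot, it can be handy to check the file mechanically. Here’s a rough sketch in Python (it assumes one interface per auto line, as in the file above) that flags any iface stanza lacking a matching auto line:

```python
import re

def missing_auto(interfaces_text):
    """Return iface names that have an 'iface' stanza but no 'auto' line."""
    autos = set(re.findall(r'^auto\s+(\S+)', interfaces_text, re.MULTILINE))
    ifaces = set(re.findall(r'^iface\s+(\S+)', interfaces_text, re.MULTILINE))
    return sorted(ifaces - autos)

sample = """\
auto lo
auto eth0
iface lo inet loopback
iface eth0 inet static
iface eth0:0 inet static
"""
print(missing_auto(sample))  # prints ['eth0:0'] - that alias won't come up at boot
```

Run against the corrected file (with all three auto lines) it returns an empty list.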

Conclusion
A few of the pages you will find on the Internet may give incomplete information on how to configure virtual IPs in Debian Linux. The approach outlined above should work. Additional virtual IPs would just require sections like eth0:1, eth0:2, etc., modelled after what was done for eth0:0.
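For instance, a second virtual IP stanza might look like this, both in the auto list and as its own iface block (the 10.31.43.x addresses here are made up for illustration):

```
auto eth0:1

iface eth0:1 inet static
address 10.31.43.11
netmask 255.255.255.0
network 10.31.43.0
broadcast 10.31.43.255
```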

References
I present some basic information on one way to get started on the Pi without an external monitor (yes, it can be done) here!
If you think you like networking, you will learn a lot of useful tips in this posting which describes how to turn your Raspberry Pi into a full-blown router.

Categories
Admin Apache Linux

Recording Host Header in the apache access log

Intro
Guess I’ve made it pretty clear in previous posts that Apache documentation is horrible in my opinion. So the only practical way to learn something is to configure by example. In this post I show how to record the Host header contained in an HTTP request in your Apache log.

The details
Why might you want to do this? Simple: if you have multiple hosts using one access log in common. For instance I have johnstechtalk.com and drjohnstechtalk.com using the same log, which I view as a convenience for myself. But now I want to know if I’m getting my money’s worth out of johnstechtalk.com, which I don’t see as the main URL, but which I type into the browser location bar to get directed to my site – fewer letters.

So I assume you know where to find the log definitions. You start with that as a base and create a custom-defined access log name. These two lines, taken from my actual config file, apache2.conf, show this:

LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined
LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\" \"%{Host}i\"" DrJformat

Then I have my virtual server in a separate file containing a reference to that custom format:

#CustomLog ${APACHE_LOG_DIR}/../drjohns/access.log combined
CustomLog ${APACHE_LOG_DIR}/../drjohns/access.log DrJformat

The ${APACHE_LOG_DIR} is an environment variable defined in envvars in my implementation, which may be unconventional. You can replace it with a hard-wired directory name if that suits you better.

There is some confusion out there on the Internet. Host as used in this post refers, as I have said, to the value contained in the HTTP Host request header. It is not the hostname of the client.

Here are some accesses recorded with this format early this morning:

108.163.185.34 - - [08/Jan/2014:02:21:32 -0500] "GET /blog/2012/02/tuning-apache-as-a-redirect-engine/ HTTP/1.1" 200 11659 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1500.71 Safari/537.36" "drjohnstechtalk.com"
5.10.83.22 - - [08/Jan/2014:02:21:56 -0500] "GET /blog/2013/03/generate-pronounceable-passwords/ HTTP/1.1" 200 8253 "-" "Mozilla/5.0 (compatible; AhrefsBot/5.0; +http://ahrefs.com/robot/)" "drjohnstechtalk.com"
220.181.108.91 - - [08/Jan/2014:02:23:41 -0500] "GET / HTTP/1.1" 301 246 "-" "Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)" "vmanswer.com"
192.187.98.164 - - [08/Jan/2014:02:25:00 -0500] "GET /blog/2012/02/running-cgi-scripts-from-any-directory-with-apache/ HTTP/1.0" 200 32338 "http://drjohnstechtalk.com/blog/2012/02/running-cgi-scripts-from-any-directory-with-apache/" "Opera/9.80 (Windows NT 5.1; MRA 6.0 (build 5831)) Presto/2.12.388 Version/12.10" "drjohnstechtalk.com"
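With the Host field in place, tallying traffic per host becomes easy. A quick sketch (the regex simply grabs the last quoted field, which is where DrJformat puts the Host header; the log lines are abbreviated versions of the ones above):

```python
import re
from collections import Counter

def host_counts(log_lines):
    """Count the trailing quoted field (the Host header in DrJformat)."""
    counts = Counter()
    for line in log_lines:
        fields = re.findall(r'"([^"]*)"', line)
        if fields:
            counts[fields[-1]] += 1
    return counts

lines = [
    '1.2.3.4 - - [08/Jan/2014:02:21:32 -0500] "GET / HTTP/1.1" 200 11659 "-" "Mozilla/5.0" "drjohnstechtalk.com"',
    '5.6.7.8 - - [08/Jan/2014:02:23:41 -0500] "GET / HTTP/1.1" 301 246 "-" "Baiduspider/2.0" "vmanswer.com"',
    '9.8.7.6 - - [08/Jan/2014:02:25:00 -0500] "GET /x HTTP/1.0" 200 32338 "-" "Opera/9.80" "drjohnstechtalk.com"',
]
print(host_counts(lines))  # drjohnstechtalk.com: 2, vmanswer.com: 1
```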

While most lines contain drjohnstechtalk.com, note that the next-to-last line has the host vmanswer.com, another domain I bought and associated with my site to try it out.

Conclusion
We have shown how to record the contents of the Host header in an Apache access log.

Related rants against apache
Creating a maintenance page with Apache web server
Turning Apache into a Redirect Factory
Running CGI Scripts from any Directory with Apache

Categories
Admin CentOS

CentOS 6.0 VM ran out of memory

Intro
I’m just creating this post to have documented what happened to me personally. I have a CentOS 6.0 image with Amazon AWS. It was based on a minimal image, which I purposefully selected so it wouldn’t be loaded down with junky daemons. Ran fine for a year, then one day nothing!

The details
I think it was up for 400 consecutive days! That’s not necessarily a good idea, but those are the facts. Then over the weekend I could neither ssh in nor access its web server. Oh, oh. You’ve got really limited options at that point with a cloud server. I stopped it from the AWS console and then started it. No joy. More drastic action – really the last thing I could do short of abandoning the whole image: Terminate. After some breath-holding moments, and after I remembered to re-associate the elastic IP, it came up. Whew! Now it came up as CentOS v 6.4, which I don’t fully understand, but it works.

I checked the /var/log/messages file for clues as to what happened. There actually were some pretty good clues. Here are the last of many, many similar entries I observed:

...
Nov 29 08:39:23 ip-10-114-206-104 kernel: Out of memory: kill process 29076 (httpd) score 107231 or a child
Nov 29 08:39:23 ip-10-114-206-104 kernel: Killed process 31306 (httpd) vsz:264320kB, anon-rss:30852kB, file-rss:312kB
Nov 29 08:39:23 ip-10-114-206-104 kernel: httpd invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0
Nov 29 08:39:23 ip-10-114-206-104 kernel: httpd cpuset=/ mems_allowed=0
Nov 29 08:39:23 ip-10-114-206-104 kernel: Pid: 31506, comm: httpd Not tainted 2.6.32-131.17.1.el6.x86_64 #1
Nov 29 08:39:23 ip-10-114-206-104 kernel: Call Trace:
Nov 29 08:39:23 ip-10-114-206-104 kernel: [<ffffffff810c00f1>] ? cpuset_print_task_mems_allowed+0x91/0xb0
Nov 29 08:39:23 ip-10-114-206-104 kernel: [<ffffffff811102bb>] ? oom_kill_process+0xcb/0x2e0
Nov 29 08:39:23 ip-10-114-206-104 kernel: [<ffffffff81110880>] ? select_bad_process+0xd0/0x110
Nov 29 08:39:23 ip-10-114-206-104 kernel: [<ffffffff81110918>] ? __out_of_memory+0x58/0xc0
Nov 29 08:39:23 ip-10-114-206-104 kernel: [<ffffffff81110b19>] ? out_of_memory+0x199/0x210
Nov 29 08:39:23 ip-10-114-206-104 kernel: [<ffffffff81120262>] ? __alloc_pages_nodemask+0x812/0x8b0
Nov 29 08:39:23 ip-10-114-206-104 kernel: [<ffffffff8115473a>] ? alloc_pages_current+0xaa/0x110
Nov 29 08:39:23 ip-10-114-206-104 kernel: [<ffffffff8110d717>] ? __page_cache_alloc+0x87/0x90
Nov 29 08:39:23 ip-10-114-206-104 kernel: [<ffffffff81122bab>] ? __do_page_cache_readahead+0xdb/0x210
Nov 29 08:39:23 ip-10-114-206-104 kernel: [<ffffffff81122d01>] ? ra_submit+0x21/0x30
Nov 29 08:39:23 ip-10-114-206-104 kernel: [<ffffffff8110e9e3>] ? filemap_fault+0x4c3/0x500
Nov 29 08:39:23 ip-10-114-206-104 kernel: [<ffffffff810061af>] ? xen_set_pte_at+0xaf/0x170
Nov 29 08:39:23 ip-10-114-206-104 kernel: [<ffffffff81137204>] ? __do_fault+0x54/0x510
Nov 29 08:39:23 ip-10-114-206-104 kernel: [<ffffffff811377b7>] ? handle_pte_fault+0xf7/0xb50
Nov 29 08:39:23 ip-10-114-206-104 kernel: [<ffffffff81007c4f>] ? xen_restore_fl_direct_end+0x0/0x1
Nov 29 08:39:23 ip-10-114-206-104 kernel: [<ffffffff81006d4b>] ? xen_set_pmd_hyper+0x8b/0xc0
...

So it ran out of memory. I guess there’s a memory leak somewhere, although another posting I saw hinted at a flaw in CentOS under paravirtualization. I have no idea.

The interesting thing to me is that the error had been ongoing for days. So had I been watching for it, I could have been pro-active in rebooting my server.
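Being pro-active is simple enough to script. A minimal sketch of the idea (the log lines below are taken from the excerpt above; in real life you’d feed it /var/log/messages from cron and mail yourself any hits):

```python
def oom_events(log_lines):
    """Return kernel out-of-memory lines worth alerting on."""
    needles = ('Out of memory:', 'invoked oom-killer')
    return [l for l in log_lines if any(n in l for n in needles)]

sample = [
    'Nov 29 08:39:23 ip-10-114-206-104 kernel: Out of memory: kill process 29076 (httpd) score 107231 or a child',
    'Nov 29 08:39:23 ip-10-114-206-104 kernel: Call Trace:',
    'Nov 29 08:39:23 ip-10-114-206-104 kernel: httpd invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0',
]
hits = oom_events(sample)
print(len(hits))  # 2
```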

Conclusion
My AWS-hosted CentOS VM gave me a scare when it stopped responding. I had to terminate it. An out-of-memory error in the kernel seems to be the proximate cause.

Categories
Admin

WebDav via HTTP (not HTTPS)

Intro
Just because I document it here in this space doesn’t mean it’s best practice or even a good idea. Such is the case today as I document a BAD IDEA – how to get WebDav working to your Windows 7 PC over HTTP instead of HTTPS. This might be appropriate only if WebDav server and client are both on the same very private Intranet.

WebDAV stands for Web Distributed Authoring and Versioning, by the way.

The details
This comes straight from Microsoft. They just don’t make it clear that these steps apply to this case of trying to get WebDAV working over HTTP.

Windows 7 by default only allows WebDav connections over the HTTPS protocol. There is a workaround. In order for you to connect to our WebDav directories you will need to make the following registry change:

To enable Basic authentication on the client computer, follow these steps:
1) Click Start , type regedit in the Start Search box, and then click regedit.exe in the Programs list.
2) Locate and then click the following registry key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WebClient\Parameters
3) Double-click the BasicAuthLevel registry key.
4) In the Value data box, type 2, and then click OK.
5) Exit Registry Editor, and then restart the computer.
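For reference, steps 2 through 4 boil down to setting a single registry value; exported as a .reg file it would look like the following (I haven’t applied this myself, consistent with the rest of this post):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WebClient\Parameters]
"BasicAuthLevel"=dword:00000002
```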

Why this is a bad idea
Now that we’ve shown how to do it, let’s explain why you shouldn’t! If you use basic authentication over HTTP your password is not encrypted, it is merely encoded. It is trivial for anyone listening in – you know who you are, NSA! – to decode that password.
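To see just how trivial, here is what Basic authentication actually puts on the wire. It is plain Base64 – reversible by anyone, no key required (the credentials here are made up, of course):

```python
import base64

credentials = "drjohn:s3cret"  # made-up example credentials
# This is exactly what goes into the Authorization: Basic header
on_the_wire = base64.b64encode(credentials.encode()).decode()
print(on_the_wire)                             # ZHJqb2huOnMzY3JldA==
print(base64.b64decode(on_the_wire).decode())  # drjohn:s3cret - decoded with zero effort
```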

Conclusion
I’ve documented it before trying it! That’s always dangerous, but this blog makes for such a convenient knowledge base that I felt that was the most important first step.

I will update this to indicate whether or not I actually got it to work.

Categories
Admin Linux

The IT Detective Agency: Can someone really see what we’re doing in our X sessions?

Intro
We’ve been audited again. My most faithful followers will recall the very popular and educational article on SSL ciphers that came out of our previous audit. So I guess audits are a good thing – they help us extend our learning.

This time we got dinged on that most ancient of protocols, X Windows. So this article is aimed at all those out there who, like me, know enough about X11 to get it more-or-less working, but not enough to claim power user status. The X cognoscenti will find this article redundant with other material already widely available. Whatever. Sometimes I will post an article as though it were my own personal journal – and I like to have all my learning in one easy-to-find place.

The details
The findings amount to this: anyone in our Intranet can take a screen shot of what the people using Exceed are currently seeing. The nice tool (Nessus) actually provided such a screen shot to back up its claim, so this wasn’t a hypothetical. At Drjohn’s we believe in open source, but we do have our secrets and confidential information, so we don’t want everyone to have this type of access.

Here is some of the verbatim wording:

Synopsis
The remote X server accepts TCP connections.
 
Description
The remote X server accepts remote TCP connections. It is possible for an attacker to grab a screenshot of the remote host.
 
Solution
Restrict access to this port by using the 'xhost' command. If the X client/server facility is not used, disable TCP connections to the X server entirely.
 
Risk Factor
Critical
 
CVSS Base Score
10.0 (CVSS2#AV:N/AC:L/Au:N/C:C/I:C/A:C)
 
References
CVE CVE-1999-0526
...
Hosts
drjms.drjs.com (tcp/6002)
It was possible to gather the following screenshot of the remote computer.
...

So in my capacity as an old Unix hand I was asked to verify the findings. This turned out to be dead easy. Here are the steps:

– pick a random Linux machine which has xwd installed
> xwd -debug -display drjms.drjs.com:2 -root -out drjms.xwd
> cat drjms.xwd|xwdtopnm|pnmtopng > drjms.png

The PNG file indeed showed lots of interesting stuff from a screen capture of the user’s X server – amazing.

I should mention that tcp port 6000 maps to X display 0, 6001 to display 1, 6002 to display 2, etc. That’s why I set my display to drj…com:2, since port 6002 was mentioned in the findings as vulnerable.
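The port-to-display arithmetic, written down as a trivial sketch:

```python
def display_for_port(tcp_port):
    """X11 convention: the server for display N listens on TCP 6000 + N."""
    return tcp_port - 6000

print(display_for_port(6002))  # 2 - hence the :2 display for the tcp/6002 finding
```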

My advice: don’t use

> xhost +

or whatever is the equivalent to that in Exceed onDemand.

Guilty
Now I have to admit to using xhost + in just this way in my own past – so convenient! Now that I see how dead easy it makes it to get a screenshot – in fact I tested the whole thing against my own XServer first – I will forego the convenience.

Conclusion
This is the danger in knowing some things, but not enough!

References
But I still stand by use of xhost + in the privacy of your home network, as for instance I show it in my Raspberry Pi without a monitor article.

I picked off that command set from this interesting article: https://www.linuxquestions.org/questions/linux-general-1/commanding-the-x-server-to-take-a-png-screenshot-through-ssh-459009/

Categories
Admin

The IT Detective Agency: New AIX Server working really slowly

Intro
Don’t ask me why anyone would willingly run IBM AIX, but it happens. And when they do, watch out for network punishment. We dealt with such a case, unfortunately, and we ran into a serious, somewhat obscure network issue and figured out the solution (we think). Maybe someone else can learn from this painful experience. Or maybe we’ll completely forget what we ourselves have done two years from now and find ourselves stepping on the same rake.

The details

So this new AIX server was configured to run a very old application, WebMethods, which makes a lot of database connections as well as connections to external partners for document exchange.

This had been working fine on the old AIX server, but we switched to newer hardware. As much as possible the old configurations were used. Yet this new server just couldn’t keep up with the load. Its queue started building up, connections to the database climbed into the hundreds, and then it just seemed like it was doing nothing at all.

Someone lent me root access so I could join the debugging party. What, no bash shell! Not even a properly configured korn shell. And everything’s just a little different on AIX – nothing is quite how you are accustomed to it. But at least it has tcpdump. I guess AIX has its own utility for this as well, but I never bothered with that. tcpdump seemed to work. So I quickly got a feel for what the application folks were describing: their transfers weren’t going outbound, and were only slowly coming inbound. They used port 5443 on one of the interfaces, en5:

# tcpdump -i en5 -n port 5443

And, true, not much was going on.

This went on for a day and things were looking desperate – to the point where we decided to go back to the old hardware! But we never stopped thinking.

Check the traffic to the Oracle database:

# tcpdump -i en0 -n port 1521

No, not that much either.

Try to check system logs, but who knows where those are? The ones I found had absolutely nothing of interest.

Being a DNS guy, I decided to check for DNS traffic:

# tcpdump -i en0 -n udp

(everything else uses tcp so I could get away with this).

Now DNS turned out to be quite chatty – around a dozen entries per second. And a lot of repetition. And a lot of IPv6 queries, labelled as AAAA?. I didn’t like it.
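A rough way to quantify that chattiness from a tcpdump capture – this is a sketch, not my exact procedure, and the line format assumed is the usual tcpdump A?/AAAA? query notation (hostnames are illustrative):

```python
from collections import Counter

def dns_query_counts(tcpdump_lines):
    """Tally A? vs AAAA? DNS queries in tcpdump output."""
    counts = Counter()
    for line in tcpdump_lines:
        if ' AAAA? ' in line:
            counts['AAAA'] += 1
        elif ' A? ' in line:
            counts['A'] += 1
    return counts

sample = [
    '14:12:20.1 IP 10.0.0.5.40000 > 10.0.0.1.53: 1+ A? oracle-db.example.com. (38)',
    '14:12:20.1 IP 10.0.0.5.40001 > 10.0.0.1.53: 2+ AAAA? oracle-db.example.com. (38)',
    '14:12:20.2 IP 10.0.0.5.40002 > 10.0.0.1.53: 3+ AAAA? partner.example.net. (36)',
]
print(dns_query_counts(sample))  # AAAA: 2, A: 1
```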

And this jogged my memory. I remembered encountering these IPv6 queries and wanting to turn them off on the old AIX servers. But how to do that??

As in all things, Google (actually DuckDuckGo) is your friend. You modify the /etc/netsvc.conf file. You need an entry like this:

hosts = local4, bind4

To be continued…

Categories
Admin Network Technologies

Are we in Brazil? We are NOT in Brazil.

But Google thinks we are:

$ curl -i http://www.google.com/

returns:

HTTP/1.1 302 Found
Location: http://www.google.com.br/?gws_rd=cr&ei=nrA4UoPZB8fb4AP3loGYAg
Cache-Control: private
Content-Type: text/html; charset=UTF-8
P3P: CP="This is not a P3P policy! See http://www.google.com/support/accounts/bin/answer.py?hl=en&answer=151657 for more info."
Date: Tue, 17 Sep 2013 19:42:22 GMT
Server: gws
X-XSS-Protection: 1; mode=block
X-Frame-Options: SAMEORIGIN
Alternate-Protocol: 80:quic
Content-Length: 262
Proxy-Connection: Keep-Alive
Connection: Keep-Alive
Set-Cookie: PREF=ID=9ed4d9e91767f4ce:FF=0:TM=1379446942:LM=1379446942:S=O9jxL9sXRCc0kF-E; expires=Thu, 17-Sep-2015 19:42:22 GMT; path=/; domain=.google.com
Set-Cookie: NID=67=BJA1pj-o2tu-IDI9KS5MjQvoKPJoY8x3uREAmeGCItGKxxfovFqdjPhvBaHUHcISFLVcTWSmCXwiTAuVhF4DIYCbPcuubfBBEGYaNy2wgveeyvGj35xTzM4Oo-yCLaDe; expires=Wed, 19-Mar-2014 19:42:22 GMT; path=/; domain=.google.com; HttpOnly
 
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>302 Moved</TITLE></HEAD><BODY>
<H1>302 Moved</H1>
The document has moved
<A HREF="http://www.google.com.br/?gws_rd=cr&amp;ei=nrA4UoPZB8fb4AP3loGYAg">here</A>.
</BODY></HTML>

and MSN is similar:

$ curl -i http://www.msn.com/

HTTP/1.1 302 Found
Cache-Control: no-cache, no-store
Pragma: no-cache
Content-Type: application/xhtml+xml; charset=utf-8
Location: http://br.msn.com/?rd=1&ucc=BR&dcc=BR&opt=0
P3P: CP="NON UNI COM NAV STA LOC CURa DEVa PSAa PSDa OUR IND"
X-AspNet-Version: 4.0.30319
S: BN2SCH030301048
Edge-control: no-store
Date: Tue, 17 Sep 2013 20:02:42 GMT
Content-Length: 172
Proxy-Connection: Keep-Alive
Connection: Keep-Alive
Set-Cookie: MC1=V=3&GUID=7173a7e9bb5444fc88d52c3d302f0cdf; domain=.msn.com; expires=Thu, 17-Sep-2015 20:02:42 GMT; path=/
Set-Cookie: MC1=V=3&GUID=7173a7e9bb5444fc88d52c3d302f0cdf; domain=.msn.com; expires=Thu, 17-Sep-2015 20:02:42 GMT; path=/
Set-Cookie: brdSample=89; domain=.msn.com; expires=Thu, 17-Sep-2015 20:02:42 GMT; path=/
blah, blah
 
 
<html><head><title>Object moved</title></head><body>
<h2>Object moved to <a href="http://br.msn.com/?rd=1&amp;ucc=BR&amp;dcc=BR&amp;opt=0">here</a>.</h2>
</body></html>
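The tell in both responses is the redirect Location. A quick sketch that pulls the country hint out of the Location header’s hostname – it only understands the two patterns shown above, a ccTLD suffix like google.com.br or a country-code prefix like br.msn.com:

```python
from urllib.parse import urlparse

def country_hint(location_url):
    """Guess which country a geo-redirect is aiming at, from its hostname."""
    host = urlparse(location_url).hostname or ''
    labels = host.split('.')
    if len(labels[-1]) == 2:    # ccTLD suffix, e.g. google.com.br
        return labels[-1]
    if len(labels[0]) == 2:     # country-code prefix, e.g. br.msn.com
        return labels[0]
    return None

print(country_hint('http://www.google.com.br/?gws_rd=cr'))    # br
print(country_hint('http://br.msn.com/?rd=1&ucc=BR&dcc=BR'))  # br
print(country_hint('http://www.google.com/'))                 # None
```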

Now a request from the IP next door, on the same subnet, produces an entirely different result:

$ curl -i http://www.google.com/

Date: Tue, 17 Sep 2013 19:48:11 GMT
Expires: -1
Cache-Control: private, max-age=0
Content-Type: text/html; charset=ISO-8859-1
P3P: CP="This is not a P3P policy! See http://www.google.com/support/accounts/bin/answer.py?hl=en&answer=151657 for more info."
Server: gws
X-XSS-Protection: 1; mode=block
X-Frame-Options: SAMEORIGIN
Alternate-Protocol: 80:quic
Transfer-Encoding: chunked
Proxy-Connection: Keep-Alive
Connection: Keep-Alive
Set-Cookie: PREF=ID=0e280a6247dbe89f:FF=0:TM=1379447291:LM=1379447291:S=Ws_EoUItgwgQMFLL; expires=Thu, 17-Sep-2015 19:48:11 GMT; path=/; domain=.google.com
Set-Cookie: NID=67=MEOnbrr2pgKz8MVKEzIQ2v9f4nkR0o5FXJxbeBRk3GZg6GsfKNRZm9UEHTzcuRF5A6D2kXKa4N6FQnP88fkVLrgDokdOlXvX1Oba2JzCkoZ0K0PiACYSiTPCru5eH9C3; expires=Wed, 19-Mar-2014 19:48:11 GMT; path=/; domain=.google.com; HttpOnly
 
<!doctype html><html itemscope="" itemtype="http://schema.org/WebPage"><head><meta content="Search the world's information, including webpages, images, videos and more. Google has many special features
...

This is a problem when you do searches in which you expect to get localized results, like a search for dinner. Believe me, you see completely different things if Google thinks your IP is in Brazil.

And some services simply shut themselves off. Pandora, for instance, does not work at all.

What is going on?

It seems some Geo-location service has incorrectly tagged our IP as Brazil. Not sure how to fix that…

Google does have an IP location change form, but they say it will take a month for them to make the change. That hardly seems like Internet time! https://support.google.com/websearch/contact/ip And what about the others? I don’t see any simple fix for Pandora for instance.

Possible solution?
Someone suggested that the proxy had cached Google’s home page. This is one thing I didn’t test for. However, these proxies are very sophisticated and know to refresh popular pages frequently, so I very much doubt that was the cause of the problem.

Workarounds
Google presents a link, Google.com, in the lower right corner of the page which you can click to pop over to the English-language version of the site, but it’s still not location-aware and treats you more like an English-speaking traveler in Brazil when you search for a generic term like restaurant.

You can also go to http://www.google.com/ncr. I personally haven’t tried it, but that’s a tip I just received.

Oct 30 update
Well, finally, after filling out the IP change form twice, and of course keeping all Brazilian users off that proxy, Google has finally deemed fit to declare our proxy really is in the US after all.

It happens again…
Some light usage by a fraction of the user base and I find all proxies are in Brazil yet again. I fill out the form March 19th, 2015 and a couple of times after that. Now it is May 11th and Google still has not acted. This time I have not kept Brazil users off the proxies, because really this is ridiculous. When users complain I urge them to use duckduckgo.com for its better privacy protections. June 24th update: I’ve been checking just about every week. I’ve filled out Google’s own form about four times in total. So finally, after three months, Google set us back to US English. Without a word to us, of course. Incredible.

Google has gotten big and unresponsive.

Conclusion
Use duckduckgo.com.

References and related
A good web site to learn where your IP is without too much malvertising that you often get with these kinds of things is: https://ipinfo.io/

Categories
Admin Apache

Creating a maintenance page with Apache web server

Intro
Sometimes you want to run a web server that spits back the same page – a maintenance screen – no matter what URI it was accessed by. Accomplishing that takes only a few lines of configuration.

The details
Why might you want to do this? Suppose you had a load balancer for a particular service. And suppose you have to move all the pool members at the same time to a different data center. You’re left with no service at all.

So you can use priority groups and add a lower-priority “maintenance server” to the pool which is not getting moved, and it will answer all queries destined for the service with your desired maintenance page.

I read through the dreadful documentation on Apache (how about this page for a little guilty pleasure reading) and found this way to do it. OK, disclosure time. When you hold a hammer, everything looks like a nail. I had used Mod Rewrite previously so I had some familiarity with it, and I guessed it could be used for this purpose as well. In reality there are probably lots of ways to accomplish this same end goal.

Inside your virtual server:

<VirtualHost *:80>
RewriteEngine on
RewriteRule  ^.*                 /maintenance.htm  [L]
...other usual stuff establishing home dir and permissions ...
</VirtualHost>

My maintenance page
For the record my page looks like this:

<html><head><title>Site Maintenance</title></head>
<body>
<font face="arial">
<h1>Site Maintenance</h1>
<strong>
This site is temporarily down.<br>
Service will be restored by 2 PM Saturday.
</strong>
</font>
</body>
</html>

Gotchas
The Rewrite rule would need refinement if you wanted to maintain a corporate identity and offer a maintenance page that has images, stylesheets, etc.
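One possible refinement along those lines – sketched here, not something I tested – is to exempt typical static assets with a RewriteCond so a fancier maintenance page can still load its images and stylesheets:

```
RewriteEngine on
# let the maintenance page's own assets through untouched
RewriteCond %{REQUEST_URI} !\.(png|jpg|gif|css|js)$ [NC]
RewriteRule ^.* /maintenance.htm [L]
```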

And after posting this I ran into another trouble. The actual pages we wanted the message for weren’t getting that message, which was quite mysterious. Instead the message was like this:

Service Temporarily Unavailable
 
The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later.

I guess you could do worse, but that’s not my message, so where did it come from, and how come I hadn’t seen it before?

This one is simple enough. I had only tested with random characters, as in

curl -i http://localhost/asdasdd

I hadn’t tested with one of the actual URIs, many of which end in .jsp. Long story short, I had re-purposed a web server instance that was front-ending a jBoss application server, so it had statements that made it handle JSP pages differently! In particular, there was this:

# JBoss include stuff
Include conf/mod-jk.conf

and this:

JkMountCopy On

With those lines both commented out it began to throw my maintenance page for all requests, as originally intended. Crisis averted.

Appendix – How to redirect just a specific page
If you want to implement a redirect for just a specific page you can follow this example:

# redirect for test of PAC file - DrJ 4/10/17
RewriteEngine on
RewriteRule  /proxy.pac                 http://50.17.188.196/proxy.pac [L]

Here I am only redirecting requests for the URI proxy.pac and sending them to another server. All other pages remain unaffected.

Conclusion
We have shown how to create a web server that, whatever you ask of it, always returns a maintenance page along with an HTTP status code of 200. This can be helpful for maintenance or moving situations.

References
My most creative use of URI rewriting is in creating an Apache “redirect factory,” which is described here.

Categories
Admin

The IT Detective Agency: strange ssl error explained

Intro
From time to time I get an unusual ssl error when using curl to check one of my web sites. This post documents the error and how I recovered from it.

The details

I was bringing up a new web site on the F5 BigIP load balancer. It was a secure site. I typically use the F5 as an ssl accelerator, so it terminates the ssl connection and makes an http connection back to the origin server.

So I tested my new site with curl:

$ curl -i -k https://secure.drj.com/

curl: (52) SSL read: error:00000000:lib(0):func(0):reason(0), errno 104

Weird, I thought. I had taken the certificate from an older F5 unit and maybe I had installed it or its private key wrong?

I tested with openssl:

$ openssl s_client -showcerts -connect secure.drj.com:443

...
SSL handshake has read 2831 bytes and written 453 bytes
---
New, TLSv1/SSLv3, Cipher is RC4-SHA
Server public key is 2048 bit
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
SSL-Session:
    Protocol  : TLSv1
    Cipher    : RC4-SHA
    Session-ID: E95AB5EA2D896D5B3A8BC82F1962FA4A68669EBEF1699DF375EEE95410EF5A0C
    Session-ID-ctx:
    Master-Key: EC5CA816BBE0955C4BC24EE198FE209BB0702FDAB4308A9DD99C1AF399A69AA19455838B02E78500040FE62A7FC417CD
    Key-Arg   : None
    Start Time: 1374679965
    Timeout   : 300 (sec)
    Verify return code: 20 (unable to get local issuer certificate)
---

This all looks pretty normal – the same as what you get from a healthy working site. So the SSL, contrary to what I was seeing from curl, seemed to be working OK.

OK, so, SSL is handled by the F5 itself we were saying. That leaves the origin server. Bingo!

In F5 you have virtual servers and pools. You configure the SSL CERTs and the public-facing IP and the pool on the virtual server. The pool is where you configure your origin server(s).

I had forgotten to associate a default pool with my virtual server! So the F5 had nowhere to go really with the request after handling the initial SSL dialog.

I don’t think the available help for this error is very good so I wanted to offer this specific example.

So I associated a pool with my virtual server and immediately the problem went away.

Case closed.

Conclusion
We solved a very specific case this week and hopefully provided some guidance to others who might be seeing this issue.

References
My favorite openssl commands

Categories
Admin Internet Mail

Analysis of a spam campaign and how we managed to fight back for a few days

Intro
A long-running spam campaign has been bothering me lately. In this post I analyze it from a sendmail perspective and provide a simple script I wrote which helped me fight back.

The details
Let’s have a look-see at the July 3rd variant of this spam. Although somewhat different from the previous campaigns in that it did not put a carefully phished email into users’ inboxes, from a sendmail perspective it had a lot of the same features.

So the July 3rd spam was a spoof of Marriott. Look at these from lines. They pretty much shout the pattern out:

Jul  3 14:12:20 drjemgw sm-mta[4707]: r63IA8dJ004707: [email protected], size=0, class=0, nrcpts=1, proto=SMTP, daemon=sm-mta, relay=eu1sysamx113.postini.com [217.226.243.182]
Jul  3 14:12:22 drjemgw sm-mta[7088]: r63ICDA7007088: [email protected], size=0, class=0, nrcpts=1, proto=SMTP, daemon=sm-mta, relay=eu1sysamx138.postini.com [217.226.243.227]
Jul  3 14:12:23 drjemgw sm-mta[7220]: r63ICIhL007220: [email protected], size=0, class=0, nrcpts=1, proto=SMTP, daemon=sm-mta, relay=eu1sysamx103.postini.com [217.226.243.52]
Jul  3 14:12:24 drjemgw sm-mta[7119]: r63ICEp6007119: [email protected], size=0, class=0, nrcpts=1, proto=SMTP, daemon=sm-mta, relay=eu1sysamx112.postini.com [217.226.243.181]
Jul  3 14:12:33 drjemgw sm-mta[7346]: r63ICO8H007346: [email protected], size=0, class=0, nrcpts=1, proto=SMTP, daemon=sm-mta, relay=eu1sysamx110.postini.com [217.226.243.59]
Jul  3 14:12:34 drjemgw sm-mta[7425]: r63ICTsI007425: [email protected], size=0, class=0, nrcpts=1, proto=SMTP, daemon=sm-mta, relay=eu1sysamx107.postini.com [217.226.243.56]
Jul  3 14:12:35 drjemgw sm-mta[7387]: r63ICRMP007387: [email protected], size=0, class=0, nrcpts=1, proto=SMTP, daemon=sm-mta, relay=eu1sysamx108.postini.com [217.226.243.57]
Jul  3 14:12:39 drjemgw sm-mta[1757]: r63I7dfa001757: [email protected], size=0, class=0, nrcpts=1, proto=SMTP, daemon=sm-mta, relay=eu1sysamx138.postini.com [217.226.243.227]
Jul  3 14:12:40 drjemgw sm-mta[6643]: r63IBpYm006643: [email protected], size=0, class=0, nrcpts=1, proto=SMTP, daemon=sm-mta, relay=eu1sysamx120.postini.com [217.226.243.189]
Jul  3 14:12:42 drjemgw sm-mta[4894]: r63IAFug004894: [email protected], size=0, class=0, nrcpts=1, proto=SMTP, daemon=sm-mta, relay=eu1sysamx110.postini.com [217.226.243.59]
Jul  3 14:12:43 drjemgw sm-mta[7573]: r63ICZJq007573: [email protected], size=0, class=0, nrcpts=1, proto=SMTP, daemon=sm-mta, relay=eu1sysamx140.postini.com [217.226.243.229]
Jul  3 14:12:45 drjemgw sm-mta[7698]: r63ICfP9007698: [email protected], size=0, class=0, nrcpts=1, proto=SMTP, daemon=sm-mta, relay=eu1sysamx102.postini.com [217.226.243.51]
Jul  3 14:12:46 drjemgw sm-mta[7610]: r63ICblx007610: [email protected], size=0, class=0, nrcpts=1, proto=SMTP, daemon=sm-mta, relay=eu1sysamx109.postini.com [217.226.243.58]
Jul  3 14:12:50 drjemgw sm-mta[7792]: r63ICl6Y007792: [email protected], size=0, class=0, nrcpts=1, proto=SMTP, daemon=sm-mta, relay=eu1sysamx112.postini.com [217.226.243.181]
Jul  3 14:12:51 drjemgw sm-mta[6072]: r63IBGCU006072: [email protected], size=0, class=0, nrcpts=1, proto=SMTP, daemon=sm-mta, relay=eu1sysamx126.postini.com [217.226.243.195]
Jul  3 14:12:51 drjemgw sm-mta[7549]: r63ICYnm007549: [email protected], size=0, class=0, nrcpts=1, proto=SMTP, daemon=sm-mta, relay=eu1sysamx115.postini.com [217.226.243.184]
Jul  3 14:12:55 drjemgw sm-mta[7882]: r63ICrUW007882: [email protected], size=0, class=0, nrcpts=1, proto=SMTP, daemon=sm-mta, relay=eu1sysamx139.postini.com [217.226.243.228]
Jul  3 14:12:57 drjemgw sm-mta[7925]: r63ICtav007925: [email protected], size=0, class=0, nrcpts=1, proto=SMTP, daemon=sm-mta, relay=eu1sysamx110.postini.com [217.226.243.59]
Jul  3 14:12:57 drjemgw sm-mta[7930]: r63ICu5c007930: [email protected], size=0, class=0, nrcpts=1, proto=SMTP, daemon=sm-mta, relay=eu1sysamx125.postini.com [217.226.243.194]
Jul  3 14:12:58 drjemgw sm-mta[7900]: r63ICsOE007900: [email protected], size=0, class=0, nrcpts=1, proto=SMTP, daemon=sm-mta, relay=eu1sysamx131.postini.com [217.226.243.220]
Jul  3 14:13:00 drjemgw sm-mta[7976]: r63ICwmu007976: [email protected], size=0, class=0, nrcpts=1, proto=SMTP, daemon=sm-mta, relay=eu1sysamx127.postini.com [217.226.243.196]

Of the 2035 of these that were sent out, 192 were delivered, meaning they got past Postini’s anti-spam defenses and into users’ inboxes. That’s roughly a 9% success rate, which is pretty high as spam goes. And users get concerned.
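A log excerpt like the one above is easier to summarize with a few lines of code. Here is a minimal sketch in Python that tallies messages per relay host; the sample lines are abbreviated versions of the log entries above:

```python
import re
from collections import Counter

# Abbreviated sample lines in the style of the sendmail stat log above
log = """\
Jul  3 14:12:24 drjemgw sm-mta[7119]: r63ICEp6007119: proto=SMTP, daemon=sm-mta, relay=eu1sysamx112.postini.com [217.226.243.181]
Jul  3 14:12:33 drjemgw sm-mta[7346]: r63ICO8H007346: proto=SMTP, daemon=sm-mta, relay=eu1sysamx110.postini.com [217.226.243.59]
Jul  3 14:12:57 drjemgw sm-mta[7925]: r63ICtav007925: proto=SMTP, daemon=sm-mta, relay=eu1sysamx110.postini.com [217.226.243.59]
"""

# Count how many messages each relay handed to us
relays = Counter(re.findall(r'relay=(\S+)', log))
for host, n in relays.most_common():
    print(host, n)
```

Run against the full log (instead of the three sample lines) the same tally shows how evenly Postini spreads the load across its eu1sysamx relays.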

Now look at sendmail’s access file, which I populated shortly after becoming aware of a similar phishing campaign against linkedin.com on July 11th:

# 7/11/13
linkedinmail.com DISCARD
linkedinmail.net DISCARD
linkedinmail.org DISCARD
linkedinmail.biz DISCARD
linkedin.net DISCARD
linkedin.org DISCARD
linkedin.biz DISCARD
inbound.linkedin.com DISCARD
complains.linkedin.com DISCARD
emalsrv.linkedin.com DISCARD
clients.linkedin.com DISCARD
emlreq.linkedin.com DISCARD
customercare.linkedin.com DISCARD
m.linkedin.com DISCARD
enc.linkedin.com DISCARD
services.linkedin.com DISCARD
amc.linkedin.com DISCARD
news.linkedin.com DISCARD

You get the idea.

What I noticed in these campaigns is a wide variety of subdomains of the domain being phished, with and without “mail” attached to the domain – in particular some rather peculiar-looking subdomains such as complains and emalsrv. So I realized that instead of waiting for the spam to reach me, I can constantly comb the log file for these peculiar subdomains. If I come across a new one, voila, it means a new spam campaign has just started! Then I can send myself an alert and decide – by hand – how best to treat it, knowing it will generally follow the pattern of the recent campaigns.

Now here’s the script I wrote to catch this type of pattern early on:

#!/usr/bin/perl
# DrJ, 7/2013
# I keep my sendmail log file here in /maillog/stat.log and cut it daily
$sl = "/maillog/stat.log";
# 10000 lines occurs in about eight minutes
$DEBUG = 0;
$i = 0;
$lastlines = "-10000";
$access = "/etc/mail/access";
open(ACCESS,$access) || die "Cannot open $access!!\n";
@access = <ACCESS>;
close(ACCESS);
open(SL,"/usr/bin/tail $lastlines $sl|") || die "cannot run tail $lastlines on $sl!!\n";
print "anti-spam domain: ";
while(<SL>){
  ($domain) = /from=\w{1,25}@(?:emalsrv|complains)\.([^\.]+)\./;
  if ($domain) {
# test if we already have it on our access table
    $seenit = 0;
    foreach $line (@access) {
# lazy, inaccurate match, but good enough...
      $seenit = 1 if $line =~ /$domain/;
      print "seenit, domain, line: $seenit, $domain, $line\n" if $DEBUG;
    }
    if (! $seenit) {
      $i++;
# report only the first new domain we come across
      print "$domain\n" if $i == 1;
    }
  }
}
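The heart of the script is the regular expression applied to the from= field. To illustrate what it matches, here is the equivalent pattern in Python run against a made-up log line (the sender address and message details are hypothetical, invented for the example):

```python
import re

# Python equivalent of the Perl pattern:
#   from=\w{1,25}@(?:emalsrv|complains)\.([^\.]+)\.
# i.e., a sender in one of the peculiar subdomains, capturing the phished domain
pat = re.compile(r'from=\w{1,25}@(?:emalsrv|complains)\.([^.]+)\.')

# Hypothetical line in the style of the stat log
line = 'Jul 11 09:01:02 drjemgw sm-mta[1234]: r6B911xx001234: from=bounce@complains.linkedin.com, size=2048, proto=SMTP'

m = pat.search(line)
print(m.group(1) if m else 'no match')
```

The captured group is the second-level domain being phished (here, linkedin), which is exactly what the script compares against the access file.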

I call the script spam-check.pl and invoke it every couple of minutes from HP SiteScope. There I have alerts set up which email me a brief message that includes the new domain being phished.

No sooner had I implemented this script than it went off and told me about that linkedin phishing spam campaign! That was sweet.

Recent campaigns
Here is a chronology of spam campaigns which follow the pattern documented above. They seem to cook them up one per day.

5/16
wallmart.com - their misspelling, not mine!
5/29
amazon.com
6/20
adp.com
date uncertain
ebay.com
7/9
eftps.com
7/10
visabusinessnews.com
7/11
linkedin.com
7/15
ups.com
7/16
twitter.com
7/17
marriott.com
mmm.com - this one changed up the pattern a bit
7/18
marriott.com - again
ups.com - with somewhat new pattern
7/22
AA.com
7/23-28
a bit more AA.com, a smattering of marriott.com and ebay.com
7/29
tapering off...
7/30 and later
spammer seems to have gone on hiatus, or finally been arrested
10/2, they're back
staples.com

One example spam
Here is the phishing spam purporting to be from 3M which I got yesterday:

From: "3M" <[email protected]>
To: DrJ
__________________________________________________
This is an automated e-mail.
PLEASE DO NOT RESPOND TO THIS EMAIL ACCOUNT.
This account is not reviewed for responses.
 
--------------------------------------------------------------------------------
 
 
This email is to confirm that on 07/17/2013, 3M's bank (JP Morgan) has debited $15,956.64 from your bank account.
 
If you have any questions, please visit the 3M EIPP Helpline at this link.

The HTML source for that last line looks like this:

If you have any questions, please visit the 3M EIPP=
 
		 		 Helpline <a href=3D"http://vlayaway.com/download/mmm.com.e-marketing.ht=
ml?help">at this link.</div>
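The =3D sequences and the stray = at the end of the line are not typos; they are quoted-printable encoding: =3D is an encoded equals sign, and a trailing = is a soft line break. A quick sketch using Python’s quopri module to decode a fragment along these lines:

```python
import quopri

# Quoted-printable fragment in the style of the spam's HTML source:
# "=3D" decodes to "=", and "=" at end of line is a soft line break
raw = b'Helpline <a href=3D"http://vlayaway.com/download/mmm.com.e-marketing.ht=\nml?help">at this link.</a>'

decoded = quopri.decodestring(raw).decode()
print(decoded)
```

After decoding, the href points cleanly at the vlayaway.com URL, which is what the mail client actually renders.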

When I checked Blue Coat’s K9 webfilter, which I even use at home, the URL in the link, vlayaway…, was not rated. I submitted a category suggestion of Malicious Sources, and they efficiently assigned it that category within minutes of my submission.

Also, note that the envelope sender of this email differs from the From: header. The envelope sender was [email protected].

A word about DISCARD vs ERROR
While I’m waiting for more spam of this sort to come in as I write this on July 22nd, I had a brainstorm. Rather than DISCARDing these emails, which doesn’t tip the sender off, it’s probably better to return a 550 error code, which rattles the sending system a bit more. I believe a sending IP that racks up too many of these errors will be temporarily banned by Postini for all its users. So I changed all my DISCARDs to ERRORs. Here is the syntax for one example line:

linkedin-mail.com ERROR:"550 Sender banned. Please use legitimate domain to send email."

I originally wanted to put the message “No such user,” to try to get the spammer to take that specific recipient off their spam list, but it doesn’t really work in the right way: the error is reported in the context of the sender address, not the recipient address.

Here is the protocol which shows what I am talking about:

$ telnet drj.postini.com 25
Trying 217.136.247.13...
Connected to drj.postini.com.
Escape character is '^]'.
220 Postini ESMTP 133 y678_pstn_c6 ready.  CA Business and Professions Code Section 17538.45 forbids use of this system for unsolicited electronic mail advertisements.
helo localhost
250 Postini says hello back
mail from: [email protected]
250 Ok
rcpt to: [email protected]
550 5.0.0 [email protected]... Sender banned. Please use legitimate domain to send email. - on relay of: mail from: [email protected]
quit
221 Catch you later

So that “- on relay of: mail from: …” portion is added by Postini, and in that context it really doesn’t make sense to say “No such user.”

Conclusion
My satisfaction may be short-lived. But it is always sweet to be on top, even for a short while.

References
For a lighthearted discussion of HP SiteScope, read the comments from this post.

Sendmail is discussed in various posts of mine. For instance, Analyzing the sendmail log, and Obscure tips for sendmail admins.