Categories
Admin Apache Linux

Recording Host Header in the apache access log

Intro
Guess I’ve made it pretty clear in previous posts that Apache documentation is horrible in my opinion. So the only practical way to learn something is to configure by example. In this post I show how to record the Host header contained in an HTTP request in your Apache log.

The details
Why might you want to do this? Simple: if you have multiple hosts sharing one access log. For instance I have johnstechtalk.com and drjohnstechtalk.com using the same log, which I view as a convenience for myself. But now I want to know if I’m getting my money’s worth out of johnstechtalk.com, which I don’t see as the main URL, but which I type into the browser location bar to get directed onto my site – fewer letters.

So I assume you know where to find the log definitions. You start with those as a base and create a custom-defined log format. These lines, taken from my actual config file, apache2.conf, show this – the last one, DrJformat, is my custom format which adds the Host header:

LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined
LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\" \"%{Host}i\"" DrJformat

Then I have my virtual server in a separate file containing a reference to that custom format:

#CustomLog ${APACHE_LOG_DIR}/../drjohns/access.log combined
CustomLog ${APACHE_LOG_DIR}/../drjohns/access.log DrJformat

The ${APACHE_LOG_DIR} is an environment variable defined in envvars in my implementation, which may be unconventional. You can replace it with a hard-wired directory name if that suits you better.
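One more gotcha: Apache only picks up LogFormat and CustomLog changes after a reload. On a Debian-style install that looks something like this (adjust for your distribution):

sudo apachectl configtest
sudo apachectl graceful

The configtest checks the syntax first; graceful then reloads without dropping in-flight requests. On Debian/Ubuntu, sudo service apache2 reload does the same thing.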

There is some confusion out there on the Internet. Host as used in this post refers, as I have said, to the value contained in the HTTP Host request header. It is not the hostname of the client.

Here are some accesses recorded in this format early this morning:

108.163.185.34 - - [08/Jan/2014:02:21:32 -0500] "GET /blog/2012/02/tuning-apache-as-a-redirect-engine/ HTTP/1.1" 200 11659 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1500.71 Safari/537.36" "drjohnstechtalk.com"
5.10.83.22 - - [08/Jan/2014:02:21:56 -0500] "GET /blog/2013/03/generate-pronounceable-passwords/ HTTP/1.1" 200 8253 "-" "Mozilla/5.0 (compatible; AhrefsBot/5.0; +http://ahrefs.com/robot/)" "drjohnstechtalk.com"
220.181.108.91 - - [08/Jan/2014:02:23:41 -0500] "GET / HTTP/1.1" 301 246 "-" "Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)" "vmanswer.com"
192.187.98.164 - - [08/Jan/2014:02:25:00 -0500] "GET /blog/2012/02/running-cgi-scripts-from-any-directory-with-apache/ HTTP/1.0" 200 32338 "http://drjohnstechtalk.com/blog/2012/02/running-cgi-scripts-from-any-directory-with-apache/" "Opera/9.80 (Windows NT 5.1; MRA 6.0 (build 5831)) Presto/2.12.388 Version/12.10" "drjohnstechtalk.com"

While most lines contain drjohnstechtalk.com, note that the next-to-last line has the host vmanswer.com, which is another domain I bought and associated with my site to try it out.
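And to answer my original question – how much is johnstechtalk.com actually being used? – a quick tally of that new last field does the trick. A sketch, with a placeholder path standing in for wherever your access log lives:

awk -F'"' '{print $(NF-1)}' /path/to/drjohns/access.log | sort | uniq -c | sort -rn

The Host header is the last quoted field in DrJformat, so this prints a count per hostname, most popular first.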

Conclusion
We have shown how to record the contents of the Host header in an Apache access log.

Related rants against apache
Creating a maintenance page with Apache web server
Turning Apache into a Redirect Factory
Running CGI Scripts from any Directory with Apache

Categories
Admin CentOS

CentOS 6.0 VM ran out of memory

Intro
I’m just creating this post to have documented what happened to me personally. I have a CentOS 6.0 image with Amazon AWS. It was based on a minimal image, which I purposefully selected so it wouldn’t be loaded down with junky daemons. Ran fine for a year, then one day nothing!

The details
I think it was up for 400 consecutive days! That’s not necessarily a good idea, but those are the facts. Then over the weekend I could neither ssh in nor access its web server. Uh oh. You’ve got really limited options at that point with a cloud server. I stopped it from the AWS console and then started it. No joy. More drastic action – really the last thing I could do short of abandoning the whole image: Terminate. After some breath-holding moments, and after I remembered to re-associate the elastic IP, it came up. Whew! Now it came up as CentOS v 6.4, which I don’t fully understand, but it works.

I checked the /var/log/messages file for clues as to what happened. There actually were some pretty good clues. Here is the last of many, many similar lines I observed:

...
Nov 29 08:39:23 ip-10-114-206-104 kernel: Out of memory: kill process 29076 (httpd) score 107231 or a child
Nov 29 08:39:23 ip-10-114-206-104 kernel: Killed process 31306 (httpd) vsz:264320kB, anon-rss:30852kB, file-rss:312kB
Nov 29 08:39:23 ip-10-114-206-104 kernel: httpd invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0
Nov 29 08:39:23 ip-10-114-206-104 kernel: httpd cpuset=/ mems_allowed=0
Nov 29 08:39:23 ip-10-114-206-104 kernel: Pid: 31506, comm: httpd Not tainted 2.6.32-131.17.1.el6.x86_64 #1
Nov 29 08:39:23 ip-10-114-206-104 kernel: Call Trace:
Nov 29 08:39:23 ip-10-114-206-104 kernel: [<ffffffff810c00f1>] ? cpuset_print_task_mems_allowed+0x91/0xb0
Nov 29 08:39:23 ip-10-114-206-104 kernel: [<ffffffff811102bb>] ? oom_kill_process+0xcb/0x2e0
Nov 29 08:39:23 ip-10-114-206-104 kernel: [<ffffffff81110880>] ? select_bad_process+0xd0/0x110
Nov 29 08:39:23 ip-10-114-206-104 kernel: [<ffffffff81110918>] ? __out_of_memory+0x58/0xc0
Nov 29 08:39:23 ip-10-114-206-104 kernel: [<ffffffff81110b19>] ? out_of_memory+0x199/0x210
Nov 29 08:39:23 ip-10-114-206-104 kernel: [<ffffffff81120262>] ? __alloc_pages_nodemask+0x812/0x8b0
Nov 29 08:39:23 ip-10-114-206-104 kernel: [<ffffffff8115473a>] ? alloc_pages_current+0xaa/0x110
Nov 29 08:39:23 ip-10-114-206-104 kernel: [<ffffffff8110d717>] ? __page_cache_alloc+0x87/0x90
Nov 29 08:39:23 ip-10-114-206-104 kernel: [<ffffffff81122bab>] ? __do_page_cache_readahead+0xdb/0x210
Nov 29 08:39:23 ip-10-114-206-104 kernel: [<ffffffff81122d01>] ? ra_submit+0x21/0x30
Nov 29 08:39:23 ip-10-114-206-104 kernel: [<ffffffff8110e9e3>] ? filemap_fault+0x4c3/0x500
Nov 29 08:39:23 ip-10-114-206-104 kernel: [<ffffffff810061af>] ? xen_set_pte_at+0xaf/0x170
Nov 29 08:39:23 ip-10-114-206-104 kernel: [<ffffffff81137204>] ? __do_fault+0x54/0x510
Nov 29 08:39:23 ip-10-114-206-104 kernel: [<ffffffff811377b7>] ? handle_pte_fault+0xf7/0xb50
Nov 29 08:39:23 ip-10-114-206-104 kernel: [<ffffffff81007c4f>] ? xen_restore_fl_direct_end+0x0/0x1
Nov 29 08:39:23 ip-10-114-206-104 kernel: [<ffffffff81006d4b>] ? xen_set_pmd_hyper+0x8b/0xc0
...

So it ran out of memory. I guess there’s a memory leak somewhere, although another posting I saw hinted at a flaw in CentOS under paravirtualization. I have no idea.

The interesting thing to me is that the error had been ongoing for days. So had I been watching for it, I could have been proactive in rebooting my server.
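Even something as crude as a cron job grepping the log for oom-killer and mailing an alert would have done it. A minimal sketch – the schedule and address are made up, and on a long-running box you’d want to avoid re-alerting on old entries:

# /etc/cron.d/oom-watch (sketch): look for oom-killer activity once an hour
0 * * * * root grep -q oom-killer /var/log/messages && echo "oom-killer activity on $(hostname)" | mail -s "OOM warning" admin@example.com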

Conclusion
My AWS-hosted CentOS VM gave me a scare when it stopped responding. I had to terminate it. An out-of-memory error in the kernel seems to be the proximate cause.

Categories
Home Computing

DVD to Mpeg drama – solved

Intro
My trusty and now old Sony Handycam is still a darn capable recording device. But how to get one of its videos onto YouTube? Everything’s changed since I bought it. Still, you’d think this would be dead easy, right? It really wasn’t.

The details
I also happen to have a Sony DVDirect to create DVDs from my recorded tapes. That works quite well in fact. But the DVDs it creates, which play just great on a standard DVD player, have strange files when examined on the computer: a couple of huge VOB files plus some smaller ones.

I tried DVDx. It failed miserably. It started up OK but it just refused to do anything with my DVD.

Then I saw some forums discussing those DVDx problems which mentioned using good old AutoGK. They kindly provided a link. That, in turn, led to the kind of installation experience I have learned to dread. It proposed to install some spyware and change my search engine – all very bad signs. When I selected Advanced options I could turn all that off, so I continued. Then it proposed to install more spyware. Turn off. Then some more. Finally there was what I think was a spyware installation offer which only provided two choices: agree to continue or disagree and exit the installation. I exited the installation.

A friend suggested Camtasia, but it’s $300 to buy and I just couldn’t see it. And I hate to get comfortable with something during a 30-day trial period and then not be able to re-use it later.

I wondered if my DVD player software, PowerDVD, might be able to do it, at least in the purchased version – the free version doesn’t seem to be able to. I never did figure that out – it wasn’t obvious from the documentation.

In the past I had streamed directly from the camcorder to my old computer using Sony’s supplied USB cable. But there is no default driver for Windows 7 that can capture that stream. In the past I had used Sony’s suggested program, Imagemixer. I’ve long since lost the CD, if it would even work on Windows 7. Imagemixer was long ago replaced by Pixela. Sony’s site kindly informs you that neither is supported and they don’t offer a download any longer. Instead they have some other software, Picture Motion Browser, which wasn’t clearly going to work anyway. But when you try to download it, it asks for a CD key. Huh?

So by now I felt like this simple chore was quite the quest, you see.

Frustrated, I decided to look at Microsoft MovieMaker. I actually didn’t think it was going to be able to read my DVD at first since it doesn’t even have those file types in its default search. But switching to browse all files I clicked on one of my VOB files and it read it in!

I was quickly able to cut some from the beginning and some from the end and save it to my computer. I think technically it thereby converted it from an essentially MPEG-2 format to MPEG-4 format. There was a built-in YouTube button, so you think, Cool, I can directly upload it to YouTube. But that required a Microsoft account. Huh? I don’t need yet another account lying around the Internet for no good reason. So I didn’t bother with that.

So we just logged on to YouTube and uploaded it. It’s kind of large-ish (140 MB) so the upload is of course slow on a DSL line. But at least it did work.

I looked again and found a real company that I trust and recognize that has an economical media converter just like the one I was looking for. ArcSoft has its Media Converter for about $27. I’ll probably try that one next time. I don’t mind paying a modest amount for software that does what I want it to.
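Another option for the command-line inclined is the free tool ffmpeg, which can do the VOB-to-MP4 conversion in one step. A sketch with made-up file names – not something I actually ran during this quest:

ffmpeg -i VTS_01_1.VOB -c:v libx264 -c:a aac output.mp4

(Older ffmpeg builds may need -strict experimental for the aac encoder.)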

Conclusion
I’ve documented a simple requirement that turned into a quest. Of course this kind of thing happens frequently. Maybe my quest will help someone else. But even if not, I think this will serve as a nice journalled account which will help me next time I want to post from my Camcorder to YouTube.

Categories
Network Technologies

How we got a little extra oomph from our firewall cluster, and why this trick no longer works for us

Intro
I was running some Checkpoint firewalls in a cluster. In fact it had been that way for years and years. At some point you get comfortable and forget to challenge and understand how it was set up. In this case re-examining the setup rewarded us with temporary survival, as we were able to offload the primary member. Read on for the details…

The details
This firewall cluster was an active/standby pairing – a Nokia cluster with no state sync. The active firewall, an older model, was hitting 99% or even 100% CPU usage on a daily basis. Dropped packets were correlated with these CPU spikes, and time-sensitive protocols, especially the SIP used by IP phones, suffered mightily. Call quality often degraded, or the call was dropped altogether.

Some other relevant facts in this case: these firewalls were not doing NAT; they were acting more like routers with a firewall function. There were a handful of key servers behind them, like a VPN concentrator, a proxy, a Juniper ISG VPN concentrator, etc. On the external side was an Internet router, also under our control.

So the breakthrough was in revisiting what makes them active/passive in the first place. We weren’t relying on Checkpoint clustering. We used VRRP, defined through a Voyager setup. Then we set up our routing on all protected devices to use these VRRP IPs for their default routes. It all worked great until more and more usage crept in and then complaints started rolling in.

Upgrading costs $$ and the procurement cycle takes some time. What to do immediately, if anything?

The loudest complaints were from users of the Juniper ISG SSL VPN concentrators, who ran VOIP over those connections. What I realized (which of course is obvious in hindsight) is that this device could have all its traffic routed through the standby firewall, where there was no CPU load whatsoever, while leaving everything else on the active firewall.

How we did it
This was accomplished by adjusting the default route of the ISG to use the physical IP of the standby firewall, as opposed to the VRRP IP. Then, to avoid asymmetric routing, a host route was defined on the Internet router for this ISG, using as gateway the physical external IP of the standby firewall (again, as opposed to the external VRRP IP).
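To make that concrete, the router side amounts to a single static host route. A Cisco IOS sketch, with all addresses invented for illustration:

! host route for the ISG (192.0.2.50) via the standby firewall's physical
! external IP (198.51.100.3) instead of the external VRRP IP
ip route 192.0.2.50 255.255.255.255 198.51.100.3

On the ISG itself, the default gateway likewise becomes the standby firewall’s physical internal IP rather than the internal VRRP IP.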

How it worked
It worked like a charm. We were well below our Internet link capacity, after all. So the master firewall was really the chokepoint for this voice traffic. Once we got it onto this unused firewall all the complaints stopped.

This is of course just a stop-gap measure, because now we have no redundancy if we lose one of the firewalls. But meanwhile we’ve bought some time and kept the work-from-home users running smoothly. The master firewall still hits 99% CPU, but not quite as frequently. It’s difficult to find a true root cause, but an upgrade is definitely in order. Acceleration is already in place.

Why it won’t work for you – Checkpoint Cluster
Fast-forward five years and I tried this same trick, which has served me well over the years. No worky. Why? Well, these days we’ve switched to a Checkpoint cluster with sync. In a Checkpoint cluster the secondary firewall will not forward traffic. In fact a firewall guy was the first to inform me of that. I didn’t believe him, so I went ahead and configured it anyway. Sure enough, it simply didn’t work.

So for us, this trick has played itself out. But we used it multiple times during the five years it was available to us.

Conclusion
By re-visiting some old design principles we were able to get a little more mileage out of our firewalls and buy ourselves some time until we could do a planned upgrade.

Categories
Admin

WebDav via HTTP (not HTTPS)

Intro
Just because I document it here in this space doesn’t mean it’s best practice or even a good idea. Such is the case today as I document a BAD IDEA – how to get WebDav working to your Windows 7 PC over HTTP instead of HTTPS. This might be appropriate only if the WebDav server and client are both on the same very private Intranet.

WebDAV stands for Web-Based Distributed Authoring and Versioning, by the way.

The details
This comes straight from Microsoft. They just don’t make it clear that these steps apply to this case of trying to get WebDAV working over HTTP.

Windows 7 by default only allows WebDav connections over the HTTPS protocol. There is a workaround. In order to connect to WebDav directories over plain HTTP you will need to make the following registry change:

To enable Basic authentication on the client computer, follow these steps:
1) Click Start , type regedit in the Start Search box, and then click regedit.exe in the Programs list.
2) Locate and then click the following registry key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WebClient\Parameters
3) Double-click the BasicAuthLevel registry key.
4) In the Value data box, type 2, and then click OK.
5) Exit Registry Editor, and then restart the computer.
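If you prefer to do it in one shot from an elevated command prompt, a reg command along these lines should make the same change (a sketch; reboot or restart the WebClient service afterwards):

reg add HKLM\SYSTEM\CurrentControlSet\Services\WebClient\Parameters /v BasicAuthLevel /t REG_DWORD /d 2 /f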

Why this is a bad idea
Now that we’ve shown how to do it, let’s explain why you shouldn’t! If you use Basic authentication over HTTP your password is not encrypted, it is merely Base64-encoded. It is trivial for anyone listening in – you know who you are, NSA! – to decode that password.
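To see just how trivial, here is what “decoding” a sniffed Basic Authorization header amounts to, using a made-up credential:

echo 'dXNlcjpwYXNzd29yZA==' | base64 -d
user:password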

Conclusion
I’ve documented it before trying it! That’s always dangerous, but this blog makes for such a convenient knowledge base that I felt that was the most important first step.

I will update this to indicate whether or not I actually got it to work.

Categories
Admin Linux

The IT Detective Agency: Can someone really see what we’re doing in our X sessions?

Intro
We’ve been audited again. My most faithful followers will recall the very popular and educational article on SSL ciphers that came out of our previous audit. So I guess audits are a good thing – they help us extend our learning.

This time we got dinged on that most ancient of protocols, X Windows. So this article is aimed at all those out there who, like me, know enough about X11 to get it more-or-less working, but not enough to claim power user status. The X cognoscenti will find this article redundant with other material already widely available. Whatever. Sometimes I will post an article as though it were my own personal journal – and I like to have all my learning in one easy-to-find place.

The details
The findings amount to this: anyone in our Intranet can take a screen shot of what the people using Exceed are currently seeing. The nice tool (Nessus) actually provided such a screen shot to back up its claim, so this wasn’t a hypothetical. At Drjohn’s we believe in open source, but we do have our secrets and confidential information, so we don’t want everyone to have this type of access.

Here is some of the verbatim wording:

Synopsis
The remote X server accepts TCP connections.
 
Description
The remote X server accepts remote TCP connections. It is possible for an attacker to grab a screenshot of the remote host.
 
Solution
Restrict access to this port by using the 'xhost' command. If the X client/server facility is not used, disable TCP connections to the X server entirely.
 
Risk Factor
Critical
 
CVSS Base Score
10.0 (CVSS2#AV:N/AC:L/Au:N/C:C/I:C/A:C)
 
References
CVE CVE-1999-0526
...
Hosts
drjms.drjs.com (tcp/6002)
It was possible to gather the following screenshot of the remote computer.
...

So in my capacity as an old Unix hand I was asked to verify the findings. This turned out to be dead easy. Here are the steps:

– pick a random Linux machine which has xwd installed
> xwd -debug -display drjms.drjs.com:2 -root -out drjms.xwd
> cat drjms.xwd|xwdtopnm|pnmtopng > drjms.png

The PNG file indeed showed lots of interesting stuff from a screen capture of the user’s X server – amazing.

I should mention that TCP port 6000 maps to X server display 0, 6001 to display 1, 6002 to display 2, etc. That’s why I set my display to drjms.drjs.com:2, since port 6002 was mentioned in the findings as vulnerable.

My advice: don’t use

> xhost +

or whatever is the equivalent to that in Exceed onDemand.
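If remote clients genuinely need to draw on your display, the tighter alternatives look something like this (the hostname is just an example): re-enable access control, undoing any blanket grant, then allow only the specific client host that needs the display:

> xhost -
> xhost +appserver.drjs.com

Better still, start the X server with TCP listening disabled altogether:

> X :0 -nolisten tcp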

Guilty
Now I have to admit to using xhost + in just this way in my own past – so convenient! Now that I see how dead easy it makes it to get a screenshot – in fact I tested the whole thing against my own XServer first – I will forego the convenience.

Conclusion
This is the danger in knowing some things, but not enough!

References
But I still stand by use of xhost + in the privacy of your home network, as for instance I show in my Raspberry Pi without a monitor article.

I picked off that command set from this interesting article: https://www.linuxquestions.org/questions/linux-general-1/commanding-the-x-server-to-take-a-png-screenshot-through-ssh-459009/

Categories
Network Technologies

The IT Detective Agency: Why our forwarding vserver doesn’t route

Intro
F5 BigIP appliances are very versatile networking devices. But sometimes you gotta know what you are doing!

The details
We set up a load-balanced Radius service using the same subnet for the radius servers as for the load balancer itself. Setting up this service is moderately tricky. You have to set the default route of the Radius servers to be the load balancer, and on the load balancer SNAT (which I prefer to translate as “source NAT,” though technically it is “secure NAT”) and NAT should be disabled. And there are two services, authentication and accounting (UDP ports 1812 and 1813).

So everything’s a bit different when all you’re used to is creating load-balanced pools for web servers.
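By the way, a quick way to exercise the service from any Linux client is radtest, from the freeradius-utils package. A sketch, with made-up credentials and shared secret:

radtest testuser testpassword radius.drj.com 0 sharedsecret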

So with this setup an incoming packet comes in, its source is preserved, but its destination is NAT’d to the radius server by the load balancer. In the response, the source is the radius server. That gets NAT’d to the IP of the load-balanced service. So there are two stages for incoming request packets (pre- and post-NAT) and two for the responses. Here’s a trace which shows all this:

12:10:41.259073 IP drj-wlc-nausresea055-01.drj.com.filenet-rpc > radius.drj.com.radius: RADIUS, Access Request (1), id: 0x30 length: 260
12:10:41.259086 IP drj-wlc-nausresea055-01.drj.com.filenet-rpc > wusandradaa01.drjad.drj.net.radius: RADIUS, Access Request (1), id: 0x30 length: 260
12:10:41.259735 IP wusandradaa01.drjad.drj.net.radius > drj-wlc-nausresea055-01.drj.com.filenet-rpc: RADIUS, Access Reject (3), id: 0x30 length: 44
12:10:41.259745 IP radius.drj.com.radius > drj-wlc-nausresea055-01.drj.com.filenet-rpc: RADIUS, Access Reject (3), id: 0x30 length: 44
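For reference, a capture like that can be taken on the BigIP itself with tcpdump; a sketch, where 0.0 is the LTM’s capture-all-VLANs pseudo-interface:

tcpdump -i 0.0 port 1812 or port 1813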

So all is good, right? Except that now we have a default route from the Radius server to the load balancer, and so all response traffic is going through the load balancer, even things not related to Radius, such as packets from an RDP session.

So we defined a forwarding_vserver to make the BigIP act as a router:

A forwarding vserver is a virtual server of type Forwarding (IP). In the bigip.conf file it looks like this:

virtual forwarding_vserver {
   ip forward
   destination any:any
   mask 0.0.0.0
   profiles route_friendly_fastL4 {}
}

But it doesn’t work! Packets from the Radius server come to the load balancer, and then they get source NAT’d to the floating self-IP of the load balancer. That’s no good. In TCP your response packets have to come from the IP you connected to! For simple PINGs you kind of get away with it, but with a warning. In TCP your PC will send a RST (reset) packet every time it gets a response packet with the wrong source IP, even if the other information is correct.

The solution
With the help of someone who understands snat auto-maps better than I do (evidently), I got the tip that I have a global snat-automap enabled, which is doing this. That’s how I’ve always run these LTMs (Local Traffic Managers). I had forgotten why or how I did it. Well, the snat-automap pretty much applies to all my other load-balanced services, so I can’t simply chuck it. And I don’t have another subnet handy for use, so I can’t simply exclude one of my vlans. They suggested that it could be turned off on my forwarding_vserver with an irule! Who would have figured? So I created a very simple irule:

# Turn off snat, i.e., for us in our forwarding_vserver
# inspired by https://devcentral.f5.com/wiki/iRules.snat.ashx
# DrJ, 11/2013
when CLIENT_ACCEPTED {
         snat none
}

and applied it to my forwarding_vserver, which now looks like this:

virtual forwarding_vserver {
   ip forward
   destination any:any
   mask 0.0.0.0
   rules snat-none
   profiles route_friendly_fastL4 {}
}

And voila, the LTM now routes those packets correctly without any address translation! And the Radius service still does its translations as desired.

Case closed!

Conclusion
We learned a little about F5 BigIPs today. The frustrating thing about the documentation is that they don’t really cover actual use cases so they introduce configuration settings without fully explaining why you would want to use them.

For the curious, the forwarding_vserver is accommodating an asymmetric routing situation. Incoming RDP (remote desktop protocol) packets get sent directly from a Cisco router to the Radius server. It’s just the response packets that flow from the Radius server, to the LTM, to the Cisco router.

References and related
In this post I show why a basic virtual server might not be working – a kind of rookie mistake we’ve all probably made at some point!
This post shows some non-trivial iRule examples.

Categories
Admin

The IT Detective Agency: New AIX Server working really slowly

Intro
Don’t ask me why anyone would willingly run IBM AIX, but it happens. And when they do, watch out for network punishment. We dealt with such a case, unfortunately, and we ran into a serious, somewhat obscure network issue and figured out the solution (we think). Maybe someone else can learn from this painful experience. Or maybe we’ll completely forget what we ourselves have done two years from now and find ourselves stepping on the same rake.

The details

So this new AIX server was configured to run a very old application, WebMethods, that makes a lot of database connections as well as connections to external partners for document exchange.

This had been working fine on the old AIX server, but we switched to newer hardware. As much as possible the old configurations were used. Yet this new server just couldn’t keep up with the load. Its queue started building up, connections to the database climbed into the hundreds, and then it just seemed like it was doing nothing at all.

Someone lent me root access so I could join the debugging party. What, no bash shell! Not even a properly configured korn shell. And everything’s just a little different on AIX – nothing is quite how you are accustomed to it. But at least it has tcpdump. I guess they have their own AIXish utility as well, but I never bothered with that. tcpdump seemed to work. So I quickly began to get a feel for what the application folks were saying about their transfers, which weren’t going outbound and were only slowly coming inbound. They used port 5443 on one of the interfaces, en5:

# tcpdump -i en5 -n port 5443

And, true, not much was going on.

This went on for a day and things were looking desperate – to the point where we decided to go back to the old hardware! But we never stopped thinking.

Check the traffic to the Oracle database:

# tcpdump -i en0 -n port 1521

No, not that much either.

Try to check system logs, but who knows where those are? The ones I found had absolutely nothing of interest.

Being a DNS guy, I decided to check for DNS traffic:

# tcpdump -i en0 -n udp

(everything else uses tcp so I could get away with this).

Now DNS turned out to be quite chatty – around a dozen entries per second. And a lot of repetition. And a lot of IPv6 queries, labelled as AAAA?. I didn’t like it.
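To zero in on just the DNS traffic and get a feel for how many of those queries were AAAA lookups, something like this works (a sketch – grab a thousand packets and count):

# tcpdump -i en0 -n -c 1000 udp port 53 | grep -c 'AAAA?'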

And this jogged my memory. I remembered encountering these IPv6 queries and wanting to turn them off on the old AIX servers. But how to do that??

As in all things, Google (actually DuckDuckGo) is your friend. You modify the /etc/netsvc.conf file. You need an entry like this:

hosts = local4, bind4

To be continued…

Categories
Uncategorized

Measuring the speed of a fast-moving NJ Transit train

Intro
I’m trying to measure the speed of a NJ Transit train that really hauls past my station. I’ll compare results using two different methods, but no fancy equipment like radar guns. Just a smartphone, a ruler, and some common sense.

To be continued…

Categories
Admin Network Technologies

Are we in Brazil? We are NOT in Brazil.

But Google thinks we are:

$ curl -i http://www.google.com/

returns:

HTTP/1.1 302 Found
Location: http://www.google.com.br/?gws_rd=cr&ei=nrA4UoPZB8fb4AP3loGYAg
Cache-Control: private
Content-Type: text/html; charset=UTF-8
P3P: CP="This is not a P3P policy! See http://www.google.com/support/accounts/bin/answer.py?hl=en&answer=151657 for more info."
Date: Tue, 17 Sep 2013 19:42:22 GMT
Server: gws
X-XSS-Protection: 1; mode=block
X-Frame-Options: SAMEORIGIN
Alternate-Protocol: 80:quic
Content-Length: 262
Proxy-Connection: Keep-Alive
Connection: Keep-Alive
Set-Cookie: PREF=ID=9ed4d9e91767f4ce:FF=0:TM=1379446942:LM=1379446942:S=O9jxL9sXRCc0kF-E; expires=Thu, 17-Sep-2015 19:42:22 GMT; path=/; domain=.google.com
Set-Cookie: NID=67=BJA1pj-o2tu-IDI9KS5MjQvoKPJoY8x3uREAmeGCItGKxxfovFqdjPhvBaHUHcISFLVcTWSmCXwiTAuVhF4DIYCbPcuubfBBEGYaNy2wgveeyvGj35xTzM4Oo-yCLaDe; expires=Wed, 19-Mar-2014 19:42:22 GMT; path=/; domain=.google.com; HttpOnly
 
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>302 Moved</TITLE></HEAD><BODY>
<H1>302 Moved</H1>
The document has moved
<A HREF="http://www.google.com.br/?gws_rd=cr&amp;ei=nrA4UoPZB8fb4AP3loGYAg">here</A>.
</BODY></HTML>

and MSN is similar:

$ curl -i http://www.msn.com/

HTTP/1.1 302 Found
Cache-Control: no-cache, no-store
Pragma: no-cache
Content-Type: application/xhtml+xml; charset=utf-8
Location: http://br.msn.com/?rd=1&ucc=BR&dcc=BR&opt=0
P3P: CP="NON UNI COM NAV STA LOC CURa DEVa PSAa PSDa OUR IND"
X-AspNet-Version: 4.0.30319
S: BN2SCH030301048
Edge-control: no-store
Date: Tue, 17 Sep 2013 20:02:42 GMT
Content-Length: 172
Proxy-Connection: Keep-Alive
Connection: Keep-Alive
Set-Cookie: MC1=V=3&GUID=7173a7e9bb5444fc88d52c3d302f0cdf; domain=.msn.com; expires=Thu, 17-Sep-2015 20:02:42 GMT; path=/
Set-Cookie: MC1=V=3&GUID=7173a7e9bb5444fc88d52c3d302f0cdf; domain=.msn.com; expires=Thu, 17-Sep-2015 20:02:42 GMT; path=/
Set-Cookie: brdSample=89; domain=.msn.com; expires=Thu, 17-Sep-2015 20:02:42 GMT; path=/
blah, blah
 
 
<html><head><title>Object moved</title></head><body>
<h2>Object moved to <a href="http://br.msn.com/?rd=1&amp;ucc=BR&amp;dcc=BR&amp;opt=0">here</a>.</h2>
</body></html>

Now a request from the IP next door to it, on the same subnet, produces an entirely different result:

$ curl -i http://www.google.com/

Date: Tue, 17 Sep 2013 19:48:11 GMT
Expires: -1
Cache-Control: private, max-age=0
Content-Type: text/html; charset=ISO-8859-1
P3P: CP="This is not a P3P policy! See http://www.google.com/support/accounts/bin/answer.py?hl=en&answer=151657 for more in
fo."
Server: gws
X-XSS-Protection: 1; mode=block
X-Frame-Options: SAMEORIGIN
Alternate-Protocol: 80:quic
Transfer-Encoding: chunked
Proxy-Connection: Keep-Alive
Connection: Keep-Alive
Set-Cookie: PREF=ID=0e280a6247dbe89f:FF=0:TM=1379447291:LM=1379447291:S=Ws_EoUItgwgQMFLL; expires=Thu, 17-Sep-2015 19:48:11
 GMT; path=/; domain=.google.com
Set-Cookie: NID=67=MEOnbrr2pgKz8MVKEzIQ2v9f4nkR0o5FXJxbeBRk3GZg6GsfKNRZm9UEHTzcuRF5A6D2kXKa4N6FQnP88fkVLrgDokdOlXvX1Oba2JzC
koZ0K0PiACYSiTPCru5eH9C3; expires=Wed, 19-Mar-2014 19:48:11 GMT; path=/; domain=.google.com; HttpOnly
 
<!doctype html><html itemscope="" itemtype="http://schema.org/WebPage"><head><meta content="Search the world's information,
 including webpages, images, videos and more. Google has many special features
...

This is a problem when you do searches in which you expect to get localized results, like a search for dinner. Believe me, you see completely different things if Google thinks your IP is in Brazil.

And some services simply shut themselves off. Pandora simply does not work, for instance.

What is going on?

It seems some Geo-location service has incorrectly tagged our IP as Brazil. Not sure how to fix that…
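One quick check is to see what the public geolocation databases currently claim about your egress IP, for instance with the ipinfo.io service mentioned in the references; a sketch:

$ curl http://ipinfo.io/json

The "country" field in the returned JSON shows where the databases place your IP.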

Google does have an IP location change form, but they say it will take a month for them to make the change. That hardly seems like Internet time! https://support.google.com/websearch/contact/ip And what about the others? I don’t see any simple fix for Pandora for instance.

Possible solution?
Someone suggested that the proxy has cached Google’s home page. This is one thing I didn’t test for. However, these proxies are very sophisticated and know to refresh popular pages frequently so I very much doubt that was the cause of the problem.

Workarounds
Google presents a Google.com link in the lower right corner of the page which you can click to pop over to the English-language version of the site, but it’s still not location-aware and treats you more like an English-speaking traveler in Brazil when you search for a generic term like restaurant.

You can also go to http://www.google.com/ncr. I personally haven’t tried it, but that’s a tip I just received.
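That would be easy enough to test from the command line in the same way as above; presumably, if the second response no longer redirects to google.com.br, the no-country-redirect cookie took. An untried sketch:

$ curl -i -c cookies.txt http://www.google.com/ncr
$ curl -i -b cookies.txt http://www.google.com/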

Oct 30 update
Well, finally, after filling out the IP change form twice, and of course keeping all Brazilian users off that proxy, Google has finally deemed fit to declare our proxy really is in the US after all.

It happens again…
Some light usage by a fraction of the user base and I find all proxies are in Brazil yet again. I filled out the form on March 19th, 2015 and a couple of times after that. Now it is May 11th and Google still has not acted. This time I have not kept Brazil users off the proxies, because really this is ridiculous. When users complain I urge them to use duckduckgo.com for its better privacy protections. June 24th update: I’ve been checking just about every week. I’ve filled out Google’s own form about four times in total. So finally, after three months, Google set us back to US English. Without a word to us, of course. Incredible.

Google has gotten big and unresponsive.

Conclusion
Use duckduckgo.com.

References and related
A good web site to learn where your IP appears to be located, without too much of the malvertising that you often get with these kinds of things, is https://ipinfo.io/