Categories
Admin, Internet Mail, IT Operational Excellence

The IT Detective Agency: The Case of Slow Sendmail Performance Finally Cracked

I’ve been running sendmail for years and years. It’s a very solid MTA, though perhaps not fashionable these days. At one point I even made the leap from running on Sun/Solaris to SLES. I’ve always had a particular problem on a couple of these servers: they do not react gracefully to mail storms. An application running on another server sends out a daily mail blast to 2000 users, all at once. Hey, I’m not running Gmail here, but normal volume is nonetheless several messages per second, and that is handled fairly well.

But this mail blast actually knocks the system offline for a few minutes. The load average rockets up to 160. It’s essentially a self-inflicted denial-of-service attack. In my gut I always felt the situation could be improved, but was too busy to look into it.

When it was time to buy a replacement server, I had to consider and justify what to get. A “screaming server” is a little hard for a hardware vendor to turn into an order! So where are the bottlenecks? I decided to capture the output of uptime, which provides load averages, and iostat, an optional package which analyzes I/O usage, at intervals of a few seconds throughout the day. Here’s the iostat job:

nohup iostat -t -c  -m -x 3 > /tmp/iostat &

and the uptime was a tiny script I called cpu-loop.sh:

#!/bin/sh
while /bin/true; do
    sleep 5
    date
    uptime
done

called from the command line as:

nohup ~/cpu-loop.sh > /tmp/cpu &

The strange thing is that though the load average shoots through the roof, CPU usage isn’t all that high.

If I have this right, the load average reflects the number of processes that are either runnable or, on Linux, in uninterruptible sleep – typically waiting on I/O. Sendmail forks a process for each incoming email, so the number of sendmail processes climbs dramatically during a mail storm.
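A rough way to watch that correlation is to sample the sendmail process count next to the load average, in the spirit of cpu-loop.sh. This is a sketch of my own, Linux-specific: /proc/loadavg holds the load-average figures, and ps -C matches by command name.

```shell
#!/bin/sh
# Sample the sendmail process count alongside the 1-minute load average
# every 5 seconds; during a mail storm both should climb together.
while /bin/true; do
    procs=$(ps -C sendmail --no-headers | wc -l)
    load=$(cut -d' ' -f1 /proc/loadavg)
    echo "$(date '+%H:%M:%S') sendmail processes: $procs  load: $load"
    sleep 5
done
```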

The fundamental question is: are we starved for CPU or for I/O? Then there are the peripheral concerns like the speed of the PCI bus, the size of the level-two cache and the number of CPUs. The standard profiling tools don’t quite give you enough information.

Here’s actual output of three consecutive iostat executions:

Time: 05:11:56 AM
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           5.92    0.00    5.36   21.74    0.00   66.99

Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00    10.00    0.00    3.00     0.00     0.05    37.33     0.03    8.53   5.33   1.60
sdb               0.00   788.40    0.00  181.40     0.00     3.91    44.12     4.62   25.35   5.46  98.96
dm-0              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
dm-1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
dm-2              0.00     0.00    0.00    2.40     0.00     0.01     8.00     0.02    8.00   1.33   0.32
dm-3              0.00     0.00    0.00    2.40     0.00     0.01     8.00     0.01    5.67   2.33   0.56
dm-4              0.00     0.00    0.00    0.80     0.00     0.00     8.00     0.01   12.00   6.00   0.48
dm-5              0.00     0.00    0.00    7.60     0.00     0.03     8.00     0.08   10.32   1.05   0.80
hda               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
dm-6              0.00     0.00    0.00  975.00     0.00     3.81     8.00    20.93   21.39   1.01  98.96
dm-7              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00

Time: 05:12:01 AM
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           5.05    0.00    4.34   19.98    0.00   70.64

Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00    10.80    0.00    2.80     0.00     0.05    40.00     0.03   10.57   6.86   1.92
sdb               0.00   730.60    0.00  164.80     0.00     3.64    45.20     3.37   20.56   5.47  90.16
dm-0              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
dm-1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
dm-2              0.00     0.00    0.00    2.60     0.00     0.01     8.00     0.03   12.31   2.15   0.56
dm-3              0.00     0.00    0.00    2.40     0.00     0.01     8.00     0.02    6.33   3.33   0.80
dm-4              0.00     0.00    0.00    0.80     0.00     0.00     8.00     0.01    9.00   5.00   0.40
dm-5              0.00     0.00    0.00    7.60     0.00     0.03     8.00     0.10   13.37   1.16   0.88
hda               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
dm-6              0.00     0.00    0.00  899.60     0.00     3.51     8.00    16.18   18.03   1.00  90.24
dm-7              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00

Time: 05:12:06 AM
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           1.91    0.00    1.36   10.83    0.00   85.89

Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00     6.40    0.00    3.40     0.00     0.04    25.88     0.04   12.94   5.18   1.76
sdb               0.00   303.40    0.00   88.20     0.00     1.59    36.95     1.83   20.30   5.48  48.32
dm-0              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
dm-1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
dm-2              0.00     0.00    0.00    2.60     0.00     0.01     8.00     0.04   14.77   2.46   0.64
dm-3              0.00     0.00    0.00    0.60     0.00     0.00     8.00     0.00   12.00   5.33   0.32
dm-4              0.00     0.00    0.00    0.80     0.00     0.00     8.00     0.01   11.00   5.00   0.40
dm-5              0.00     0.00    0.00    5.80     0.00     0.02     8.00     0.08   12.97   1.66   0.96
hda               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
dm-6              0.00     0.00    0.00  393.00     0.00     1.54     8.00     6.46   16.03   1.23  48.32
dm-7              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00

Device sdb has reached crazy high utilization levels – 98% before dropping back down to 48%. An average queue size (avgqu-sz) of 4.62 in the first run means a lot of queued-up I/O requests awaiting service. 788 merged write requests per second seems respectable. All this, while the CPU is 67% idle!
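Since the iostat output is captured to /tmp/iostat, the peak utilization for sdb can be pulled out with a one-liner (in this iostat version %util is the last column of each device line):

```shell
# Highest %util recorded for sdb across every sample in the capture file
awk '/^sdb/ {print $NF}' /tmp/iostat | sort -n | tail -1
```

Swapping dm-6 for sdb confirms that the device-mapper volume behind it shows the same saturation.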

The conclusion: a solid state drive is in order. We are starved for I/O, not CPU. But solid state drives cost money and have to be justified, which takes time. Can we do something in the meantime to test whether the hypothesis bears out and really alleviate the problem? Yes! Accessing an SSD is much more like accessing memory. So let’s build a virtual partition from our memory. tmpfs has made this sinfully easy:

mount -t tmpfs none /mqueue -o size=8192m

We set this to be sendmail’s queue directory. The sendmail mc configuration looks like this:

define(`QUEUE_DIR',`/mqueue/q*')dnl

which I need to further explain at some point.
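The q* glob points sendmail at multiple queue subdirectories it can work in parallel. Since tmpfs starts out empty at every boot, the mount and those subdirectories have to be recreated before sendmail starts. A minimal sketch – the q1..q4 names and the permissions are my assumptions here, not taken from the actual setup:

```shell
#!/bin/sh
# Recreate the volatile tmpfs queue before sendmail starts.
# q1..q4 (matching the /mqueue/q* glob) and mode 700 are assumptions --
# adjust to your own sendmail installation.
mount -t tmpfs none /mqueue -o size=8192m
for d in q1 q2 q3 q4; do
    mkdir -p /mqueue/$d
    chmod 700 /mqueue/$d
done
```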

Now it’s interesting that this tmpfs filesystem doesn’t even show up in iostat! That makes sense: tmpfs lives in the page cache, so there is no block device for iostat to report on – its usage shows up as CPU and memory consumption instead.

I now have to send my mail blast to the system with this tmpfs setup. I’m expecting to have essentially converted my lack of I/O into better usage of spare CPU, resulting in a higher-performance system.

The Results
The results are in and they are dramatic. Previous results, using a traditional 15K RPM rotating drive:

- the disk device became 98% busy
- CPU idle time only dropped as low as 69%
- the load average peaked at 37
- the SMTP port shut down for some minutes
- 2030 messages accepted in 187 seconds
- 11 messages/second

and now using tmpfs virtual filesystem:

- the load average rose to only 3.1 – a much more tolerable result
- CPU idle time dropped to 32% during the busiest period
- most importantly, the server stayed open for business – for the first time, the SMTP port did not shut down!!
- the 2000 messages were accepted in 34 seconds
- that's a record 59 messages/second!

Conclusion
Disk I/O was definitely the bottleneck for sendmail. tmpfs rocks! Sendmail becomes five times faster using it, and is better behaved. The drawback of this filesystem type is that it is completely volatile, so I stand to lose queued messages if the power ever goes out!

Case Closed!

Categories
IT Operational Excellence, Uncategorized

The IT Detective Agency: Debugging a Thorny Citrix Connection Issue

This case begins with an observation by the application owner (AO) for Citrix XenApp: external users were being knocked out of their sessions frequently – several times a day – and it happened en masse. Before this problem users were typically logged in all day. Many must have been bumped around 12:30 PM and then again around 2 PM. The problems began July 5th.

The users suffering the disconnects were all external users who access the applications via a Citrix Secure Gateway. The XenApp servers being accessed are also used on the Intranet and those users were not seeing any drops.

The AO asked if I had changed anything in the network. Nope. Had he changed anything? Nope.

So now we have the classic stand-off, right? AO vs network. There’s a root cause and it’s either the AO or the network guy who’s ultimately at fault.

My attitude in these cases is the following: the network person should prove it’s an application problem, and the application owner should prove it’s a network problem! It sounds cynical, but this approach aligns with the best interests of each party. Both are really working towards the same goal while preserving their own interests. The networking person thinks: if I can prove it’s an application problem, then the AO will quit bothering me and I can get back to my real job. After all, I am not knowledgeable about the application. And even if it is a networking issue, I do not know where the issue is, so I need the AO to point out the problem at a detailed level – the dropped packet or whatever – so I can focus my energies. The reality in my experience is quite different, however: the AO typically does not know enough about networking to make this proof.

Nonetheless I proceeded this way, hoping to prove some kind of application problem so I could get back to my normal activities.

We enabled my own PC to use the application. This is always much easier than bothering other people. I can take traces to my heart’s content! So Monday I was connected to XenApp via the CSG. I was going along fine until 11:35 when I got the disconnected message! I later learned that the bulk of users, who are using a different app, were not disconnected then, but were at about an hour later.

Now there are lots of pieces to look at, any one of which could be at fault. Working from a PC on the Internet to the XenApp server we have: the Internet, my Internet router, a firewall, a load balancer, the CSG server, another firewall, and the XenApp server. That’s a lot to look after, but you have to start somewhere. I chose the load balancer. It was rather confusing even to establish a baseline of “normal” activity. I quickly observed that every 30 seconds packets were being transmitted to the PC even when nothing was going on. Of course the communication was all encrypted, so I did not even attempt to look into the packets. But sometimes I saw seven packets, sometimes six, and more rarely other numbers. The packet order didn’t even make sense – sometimes the load balancer responded to the XenApp server before the PC did! The trace of this behaviour until I was disconnected will be shown here when I get the time to include it.

The end of the trace shows a bunch of FIN packets. FIN is used to terminate a TCP connection. Now we’re getting somewhere. It looks, from a TCP perspective, like a more-or-less orderly shutdown of the connection was occurring. If confirmed, that would point to an application problem and life would be good!

The next day I logged into the CSG and used a XenApp app again. This time I took an additional trace which included the CSG server itself. Again I was disconnected after a few hours. In this trace the CSG server is called webservera and the XenApp server is xenapp15. This is not a byte-level trace but rather the meta-data from running snoop on Solaris:

________________________________
11:29:23.81577 xenapp15 -> webservera    ETHER Type=0800 (IP), size = 60 bytes
11:29:23.81577 xenapp15 -> webservera    IP  D=10.201.142.41 S=10.201.88.34 LEN=46, ID=4842, TOS=0x0, TTL=126
11:29:23.81577 xenapp15 -> webservera    TCP D=56011 S=1494 Push Ack=1900995851 Seq=4198089996 Len=6 Win=64134
________________________________
11:29:25.01881 xenapp15 -> webservera    ETHER Type=0800 (IP), size = 60 bytes
11:29:25.01881 xenapp15 -> webservera    IP  D=10.201.142.41 S=10.201.88.34 LEN=46, ID=4844, TOS=0x0, TTL=126
11:29:25.01881 xenapp15 -> webservera    TCP D=56011 S=1494 Push Ack=1900995851 Seq=4198089996 Len=6 Win=64134
________________________________
11:29:27.42530 xenapp15 -> webservera    ETHER Type=0800 (IP), size = 60 bytes
11:29:27.42530 xenapp15 -> webservera    IP  D=10.201.142.41 S=10.201.88.34 LEN=46, ID=4861, TOS=0x0, TTL=126
11:29:27.42530 xenapp15 -> webservera    TCP D=56011 S=1494 Push Ack=1900995851 Seq=4198089996 Len=6 Win=64134
________________________________
11:29:30.87645    webservera -> xenapp15 ETHER Type=0800 (IP), size = 54 bytes
11:29:30.87645    webservera -> xenapp15 IP  D=10.201.88.34 S=10.201.142.41 LEN=40, ID=847, TOS=0x0, TTL=64
11:29:30.87645    webservera -> xenapp15 TCP D=1494 S=56011 Ack=4198090002 Seq=1900995851 Len=0 Win=48871
________________________________
11:29:30.87657    webservera -> xenapp15 ETHER Type=0800 (IP), size = 54 bytes
11:29:30.87657    webservera -> xenapp15 IP  D=10.201.88.34 S=10.201.142.41 LEN=40, ID=848, TOS=0x0, TTL=64
11:29:30.87657    webservera -> xenapp15 TCP D=1494 S=56011 Ack=4198090002 Seq=1900995851 Len=0 Win=48871
________________________________
11:29:33.02325    webservera -> xenapp15 ETHER Type=0800 (IP), size = 54 bytes
11:29:33.02325    webservera -> xenapp15 IP  D=10.201.88.34 S=10.201.142.41 LEN=40, ID=849, TOS=0x0, TTL=64
11:29:33.02325    webservera -> xenapp15 TCP D=1494 S=56011 Ack=4198090002 Seq=1900995851 Len=0 Win=48871
________________________________
11:29:53.34945 xenapp15 -> webservera    ETHER Type=0800 (IP), size = 60 bytes
11:29:53.34945 xenapp15 -> webservera    IP  D=10.201.142.41 S=10.201.88.34 LEN=40, ID=4923, TOS=0x0, TTL=126
11:29:53.34945 xenapp15 -> webservera    TCP D=56011 S=1494 Rst Ack=1900995851 Seq=4198090002 Len=0 Win=0

What I saw this time is that RST packet was being sent from the XenApp server! That’s the very last line, which I will repeat here for emphasis since it is so important to the case:

11:29:53.34945 xenapp15 -> webservera    TCP D=56011 S=1494 Rst Ack=1900995851 Seq=4198090002 Len=0 Win=0

TCP RST is a way to immediately disconnect a connection! It seemed as though this was being converted to a FIN by the CSG. Now it’s looking very much like, for whatever reason, the application decided to terminate the connection. It almost has to be an application problem, right?

Wrong! We have to keep an open mind.

This trace, while dense, hints at where the problem may lie. It is taken on the load balancer with tcpdump -i 0.0. The load balancer has two interfaces, one towards the Internet, the other towards webserverw. The hostname of the load balancer’s Internet-facing interface is CSG; the hostname of the Citrix client on the Internet is drjohnspc.

11:03:59.392810 802.1Q vlan#4093 P0 drjohnspc.20723 > webserverw.https: . ack 35 win 32768 (DF)
11:03:59.455730 802.1Q vlan#4093 P0 webserverw.https > drjohnspc.20723: P 35:68(33) ack 1 win 48677 (DF)
11:03:59.554810 802.1Q vlan#4093 P0 drjohnspc.20723 > webserverw.https: . ack 68 win 32768 (DF)
11:03:59.585845 802.1Q vlan#4094 P0 drjohnspc.20723 > CSG.https: . ack 35 win 64426 (DF)
11:03:59.585855 802.1Q vlan#4094 P0 CSG.https > drjohnspc.20723: P 35:68(33) ack 1 win 32768 (DF)
11:03:59.885805 802.1Q vlan#4094 P0 drjohnspc.20723 > CSG.https: . ack 68 win 64393 (DF)
11:04:59.465070 802.1Q vlan#4093 P0 webserverw.https > drjohnspc.20723: P 68:103(35) ack 1 win 48677 (DF)
11:04:59.465080 802.1Q vlan#4094 P0 CSG.https > drjohnspc.20723: P 68:103(35) ack 1 win 32768 (DF)
11:04:59.564818 802.1Q vlan#4093 P0 drjohnspc.20723 > webserverw.https: . ack 103 win 32768 (DF)
11:05:00.664812 802.1Q vlan#4094 P0 CSG.https > drjohnspc.20723: P 68:103(35) ack 1 win 32768 (DF)
11:05:02.864812 802.1Q vlan#4094 P0 CSG.https > drjohnspc.20723: P 68:103(35) ack 1 win 32768 (DF)
11:05:07.064810 802.1Q vlan#4094 P0 CSG.https > drjohnspc.20723: P 68:103(35) ack 1 win 32768 (DF)
11:05:15.264811 802.1Q vlan#4094 P0 CSG.https > drjohnspc.20723: P 68:103(35) ack 1 win 32768 (DF)
11:05:31.464812 802.1Q vlan#4094 P0 CSG.https > drjohnspc.20723: P 68:103(35) ack 1 win 32768 (DF)
11:05:59.807514 802.1Q vlan#4093 P0 webserverw.https > drjohnspc.20723: P 103:130(27) ack 1 win 48677 (DF)
11:05:59.807741 802.1Q vlan#4093 P0 webserverw.https > drjohnspc.20723: F 130:130(0) ack 1 win 48677 (DF)
11:05:59.807754 802.1Q vlan#4093 P0 drjohnspc.20723 > webserverw.https: . ack 131 win 32768 (DF)
11:05:59.807759 802.1Q vlan#4094 P0 CSG.https > drjohnspc.20723: FP 103:130(27) ack 1 win 32768 (DF)
11:06:03.664813 802.1Q vlan#4094 P0 CSG.https > drjohnspc.20723: . 68:103(35) ack 1 win 32768 (DF)
11:06:12.642847 802.1Q vlan#4093 P0 drjohnspc.20723 > webserverw.https: R 1:1(0) ack 131 win 32768 (DF)
11:06:12.642862 802.1Q vlan#4094 P0 CSG.https > drjohnspc.20723: RP 103:130(27) ack 1 win 32768 (DF)

Notice the time stamps increasing by larger and larger leaps beginning with 11:05:00.664812: 11:05:02, 11:05:07, 11:05:15, 11:05:31 – the interval keeps doubling! This is characteristic of a TCP retransmit. Note that all the other information is the same: it must be retransmitting the same packet. Why? Because it never got there – that seems to be the most likely reason. Now my conviction and hope that an application problem lies at the heart of the issue is starting to crumble. See why you need to keep an open mind? Your opinion can swing to the polar opposite conclusion with the input of some additional data like that. Where to turn next?
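The backoff can be checked arithmetically right from the trace, starting with the original transmission of the 68:103(35) packet at 11:04:59.465080 and ending with the straggler at 11:06:03 (timestamps copied from the tcpdump output above):

```shell
# Intervals between successive (re)transmissions of the same packet;
# each gap is roughly double the one before -- classic TCP backoff.
awk 'BEGIN {
    n = split("11:04:59.465080 11:05:00.664812 11:05:02.864812 11:05:07.064810 11:05:15.264811 11:05:31.464812 11:06:03.664813", t, " ")
    for (i = 1; i <= n; i++) {
        split(t[i], p, ":")
        s[i] = p[1] * 3600 + p[2] * 60 + p[3]
        if (i > 1) printf "%.1f\n", s[i] - s[i-1]
    }
}'
# prints 1.2, 2.2, 4.2, 8.2, 16.2, 32.2 -- each interval double the last
```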

There is a firewall in between the load balancer and the Internet. Now we will focus our attention on it. It could be that it dropped that packet and all the retransmits.

Here’s the trace of that same conversation on the firewall’s internal interface, which faces the CSG (I and O mean inbound and outbound with respect to that interface):

11:04:59.441022  I IP CSG.https > drjohnspc.20723: P 2210302227:2210302262(35) ack 1714160833 win 32768
11:05:00.640781  I IP CSG.https > drjohnspc.20723: P 0:35(35) ack 1 win 32768
11:05:02.840742  I IP CSG.https > drjohnspc.20723: P 0:35(35) ack 1 win 32768
11:05:07.040729  I IP CSG.https > drjohnspc.20723: P 0:35(35) ack 1 win 32768
11:05:15.240780  I IP CSG.https > drjohnspc.20723: P 0:35(35) ack 1 win 32768
11:05:31.440571  I IP CSG.https > drjohnspc.20723: P 0:35(35) ack 1 win 32768
11:05:59.783366  I IP CSG.https > drjohnspc.20723: FP 35:62(27) ack 1 win 32768
11:06:03.640595  I IP CSG.https > drjohnspc.20723: . 0:35(35) ack 1 win 32768
^C

and the trace of the same thing on the firewall’s external interface, i.e., facing the Internet and drjohnspc:

11:03:59.269334  O IP CSG.https > drjohnspc.20723: P 2210302159:2210302194(35) ack 1714160833 win 32768
11:03:59.562011  I IP drjohnspc.20723 > CSG.https: . ack 35 win 64426
11:03:59.562139  O IP CSG.https > drjohnspc.20723: P 35:68(33) ack 1 win 32768
11:03:59.861970  I IP drjohnspc.20723 > CSG.https: . ack 68 win 64393

Notice what’s not present in the external interface trace – all those retransmits, or even the original packet.

Let’s summarize so far. One of those keep-alive packets from the XenApp server reached the firewall but didn’t exit it. The only possibility is that it got dropped by the firewall!

Now that was a lot of work, but who’s going to do it if not a patient and methodical IT person?

Results
We had a networking problem on our hands after all. Good thing we persisted in this investigation even when it looked like we were off the hook! It was later confirmed that the firewall was “aggressively aging” its connections because it had either reached or was very close to its connection limit. The firewall connection limit was raised and the Citrix connection issues went away.

Let’s go back to that simplistic question that non-experts like to ask: what had changed that caused this problem? The change was external events – increased usage of that firewall. Network bandwidth, Internet usage – they all tend to increase over time. There were no changes done by either networking or the application group to cause this issue. A seasoned IT detective uses all available clues and arrives at the right conclusion. The “what has changed” question is normally very relevant, but it can’t be the only tool in your toolbox!

Case closed!

Categories
IT Operational Excellence

Dr John’s Laws of IT

Laws

Here are some laws of IT based on years of observation.  Some of these have very real and practical consequences.

1. IT infrastructure decays over time if left to itself – a sort of entropy sets in.  This is somewhat counterintuitive insofar as people know enough about troubleshooting a problem to ask “what changed.”  Sometimes the answer is nothing at all, or nothing you would ever think of.  For example, I once had an application server start to fail when “nothing had changed.”  The cause, found after many hairs were pulled out?  The log file it wrote to tried to exceed 2 GB on a 32-bit system.  It couldn’t write to its log any longer and the app server just froze up.

1.1 Corollary to 1.  Neglect works great in the short term, but judicious maintenance is the way to go!  Neglect is what leads to a 2 GB log in the first place!

2. Things will always go wrong at some point.  It will usually not be for the reason you suspect.

2.1 Corollary to 2.  Effective monitoring is critical.  If you build something critical, build a means to monitor it.  Monitor foundational components as well, so that when one thing goes south you can see what else was still working.

3. Software support from large vendors is abysmal.  Most small and mid-sized vendors are no better.  The premise of almost all support I’ve encountered is: the customer did something wrong.  The most relevant metric is: how quickly can the case be closed?  If in that rare case the customer can prove fault by the vendor, Justify doing Absolutely Nothing about it for as Long as Possible.  And NEVER do something immediately useful like letting the customer speak to a software developer who actually knows what he/she is talking about.

3.1 Corollary to 3.  An IT professional quickly develops, out of necessity, all the skills possessed by the front-line engineer who responds to support calls, and can solve most of the problems on his/her own, since the assistance given won’t take the problem any further anyway.

Observations

Laws are universal.  The following are key observations that are generally true.

1. The more an IT person thinks about a problem, the better the solution.  Better means cheaper, faster, more elegant – even moving the problem from the category of impossible to possible (and this actually happens frequently, believe it or not).

2. Even a brilliant IT professional won’t think up all the solutions alone.  Creative problem solving occurs best when a couple of brilliant IT professionals bounce ideas off each other, with a few others at the ready to contribute specialist opinions.

3. How to estimate the amount of time for an IT project:

    2 x (estimate from experienced IT professional) + constant

If many groups and external partners are involved, the multiplier should be increased to 3 or even 4.

This sounds facetious but it is not.  It is the unfortunate truth of the nature of our work and the unpredictability of the showstopper moments which always occur.

4. All a seasoned IT person needs to decide the impossible is possible is to hear that someone else is doing it!  The creative juices start flowing at that point.  Maybe it’s a competitive thing.

5. Large IT organizations contain a large number of people who actually know surprisingly little about IT.  Small IT organizations are also not immune from this.

Categories
Scams

Spam and Scams – What to Expect When You Start a Blog

In my case – not much! It appears that despite providing top-notch content the only “readers” are those trying to profit from me. To use the word “scam” may be a bit strong, but any outfit that demands money upfront to supposedly help you make money is highly suspect in my playbook.

So I’ve heard from Tina. It goes like this:
Admin – I’ve checked out http://drjohnstechtalk.com/blog/2011/06/grep-is-slow-as-a-snail-in-sles-11/ and I really like your writing style like in your post Grep is Slow as a Snail in SLES 11 | Dr John’s Tech Talk. I am looking for blog authors who would like to write articles as either a full time job or part time job (for some extra money). I think your writing style would work very well. You receive pay per article, anywhere from $5 to $50 per article depending on the topic, article length, etc… If interested you can find more information at www.onlinehomewriter.net.

Please do me a favor and do not follow that link. It redirects you to secure.signup-way.com, a strange-looking URL that McAfee categorizes as Malicious Sites, High Risk. So I don’t think I’ll be going there.

Then there’s Tony:
Blog Admin – If your blog isn’t bringing in as much money as you would like it to check out my site www.QuickCashBlogging.com. We show blog owners how to maximize their blogs earnings potential. Tony

McAfee verdict: Spam site, medium risk. That’s just great.

The McAfee URL checker I use is http://www.trustedsource.org/en/feedback/url.

Clearly these people have programs trawling the Internet for new domains and new blogs, trying to squeeze some $$ from them. Unfortunately I’m not sure any person who could actually benefit from the information has read my blog. So I feel I am making negative progress – instead of elevating the level of discourse on the Internet, I am helping it be used for more spam and scams.

I just feel bad for humanity. Is this the best we can do? A well-meaning person embarks on a quixotic journey to provide better technical information on some topics, and the average response from my fellow human beings is to try to take advantage of a hopefully vulnerable and naive newbie? I am literally concerned for us as a race.

August 16th Update
The spam and scams started as a trickle. Now it’s raining spam in my inbox. I continue to be disappointed. In email the ratio of spam to “ham” may be about five to one, so, not knowing any better, you could expect a similar ratio with WordPress blog comments. Not so! Of my fifty comments, not counting the ones from myself, the legit comments number about two and a half – more like a twenty-to-one ratio. I will probably use a WordPress plugin to cut them off, but since I started on this public service mission, here are some more scams.

This one is spam as it was posted to my Sample page:
HTC is a well known name in the smartphone segment. The company has come up with smartphones boasting of exquisite features and HTC EVO 4G is one of the most potent… It came from an address ending in @mail.ru . One of many hats is as spam fighter. Let me tell you you see an a sender address mail.ru and you’re talking pure spam. The IP resolves to Latvia, however, and that fact hardly inspires confidence either.

To my post WordPress, Apache2, Permalinks and mod_rewrite under Ubuntu I got a comment titled “The Best Way To Fix Acid Reflux.” Now that’s a closely related topic!

Another one claims to help if I’m looking for information about babies (very relevant for a tech blog – yeah, right!).

Very many fall into the generic flattery category. Like this one:
Hey There. I found your blog using msn. This is a very well written article. I will be sure to bookmark it and come back to read more of your useful information. Thanks for the post. I’ll definitely return.

Or this:
I agree with your Gnu Parallel Really Helps With Zcat | Dr John's Tech Talk, great post.

I had to investigate those a little bit, as I almost fell for one the first time. Then you realize that each is so generic – except for the one where the sender obviously just pasted in the title of my post programmatically – that it could be used for any blog post.

Equally popular are the SEO scams – Search Engine Optimization. I think the point of those is to shake a little money from you for supposed help improving your blog’s ranking in the search engines.

Returning to the flattery scams, how do I know for sure this isn’t real, genuine flattery of my wonderful posts? I’ll tell you. There are a couple of unambiguous clues and another strong hint.

Let’s start with the strong hint. Since I haven’t told anyone about my blog, pretty much the only way someone with a legitimate interest in its content is going to find it is through Google or another search engine. So, in the web server access log, where I am recording the HTTP_REFERER (the URL the browser visited just before hitting my blog post), I should expect to see one of the search engines. I should not see some random web site, because there is simply no good reason for a browser controlled by a human being to go from someone else’s web site directly to mine. And yet that is precisely what I am seeing. I would give examples, but it would only serve to promote their web sites, so I will refrain from even one example.

But even more damning is to examine how long the poster spent on my site. A human being has to read the post, contemplate its meaning, then type in a comment and finally post it, right? That could rarely be done in under a minute. WordPress tells me the IP of the poster of the comment. I take that IP and search for it in the access log using grep. I am seeing that these comments are being made within one second of the web page first being downloaded. One second. It is not humanly possible. But for a program, it’s a piece of cake.
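The check itself is a one-liner against the Apache access log. The log path here is a placeholder, and the IP is just an example – substitute whatever IP WordPress reports for the comment:

```shell
# Every request timestamp logged for a suspect commenter's IP; for the
# bots these all fall within a second or two of each other.
grep '^109\.169\.61\.16 ' /var/log/apache2/access.log | awk '{print $4}'
```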

Here’s a real example (some of this may be cut off, depending on your browser):

109.169.61.16 - - [12/Aug/2011:06:31:51 -0400] "GET /blog/2011/06/gnu-parallel-really-helps-with-zcat/ HTTP/1.0" 200 22757 "http://blahblah.net/invest/" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30)"
109.169.61.16 - - [12/Aug/2011:06:31:51 -0400] "GET /blog/ HTTP/1.0" 301 340 "http://blahblah.net/invest/" "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/532.2 (KHTML, like Gecko) Chrome/4.0.221.7 Safari/532.2"
109.169.61.16 - - [12/Aug/2011:06:31:51 -0400] "POST /blog/wp-comments-post.php HTTP/1.0" 302 902 "http://drjohnstechtalk.com" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30)"
109.169.61.16 - - [12/Aug/2011:06:31:52 -0400] "GET /blog/ HTTP/1.0" 200 110806 "http://blahblah.net/invest/" "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/532.2 (KHTML, like Gecko) Chrome/4.0.221.7 Safari/532.2"

Also this example illustrates the other damning evidence of a lack of human involvement in the comment. A real browser run by a real human being has to pull in a lot of objects to display a single WordPress page: stylesheets, external javascript files and even the image at the top. They should all be requested, and be recorded in the access log. But a programmatically controlled browser needs far less! It needs the HTML of the blog page, then the page it POSTs the comment to, and perhaps a third page after the POST to show whether the POST was successful. And that’s exactly what I’ve seen in all the spam/scam comments I’ve checked out by hand, not just the flattery scams. They are all using the absolute minimum page accesses, and that simply screams non-human access! I am, unfortunately, not really so special as they would have me believe!

The SEO scams are just annoying advertising. Most of the rest is what I’d call link laundering, where they’re using the legitimacy of my site to try to get links to their shady sites included – by trickery, carelessness or any other means. And some are just using the comments as pure spam to my inbox, since that’s where the comments go for review, and they don’t even care whether I approve their spam for public viewing or not.
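That minimum-page-access signature is easy to spot in bulk: for each IP that ever POSTed a comment, tally its total requests in the log. The log path is again a placeholder:

```shell
#!/bin/sh
# For each IP that POSTed a comment, count its total requests in the log.
# Three or four hits is the bot signature; a real browser session leaves
# dozens (stylesheets, javascript, images...).
log=/var/log/apache2/access.log
for ip in $(grep 'wp-comments-post' "$log" | awk '{print $1}' | sort -u); do
    echo "$ip $(grep -c "^$ip " "$log")"
done
```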

Possible Explanation
My hypothesis is that there are specially constructed advanced searches in Google you can do to find new WordPress blogs. You can download the results and programmatically loop through them, attempting to post your spam and scams to each. It’s pretty easy to program a browser like curl using Perl to post to a WordPress blog. Even I could do it! And that low barrier to entry jibes with the level of professionalism I perceive in these scams, which is to say, pretty low, like something I would cook up by my lonesome! Misspellings, poor English and blatant calls-to-action are par for the course, as are source IPs from remote regions of the world that have no plausible interest in my arcane technical postings.

Now you could argue that a real browser could have cached some of those objects and so upon a return visit it might only access a minimum set of objects and hence look a bit like a program. To that I say that it is rarely the case that all objects get cached. And even if they did, you still have to take time to type in your comment, right? No one can do that in a second. The access lines above span the time from 6:31:51 to 6:31:52!!
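The two telltale signs above – almost no page objects requested and the whole visit squeezed into a second or two – can be checked mechanically. Here’s a rough shell sketch along those lines; the log lines are trimmed copies of the excerpt above, and the file path is made up for illustration:

```shell
# Count, per IP that POSTed a comment, how many distinct timestamps its
# requests span. A bot's whole visit collapses into one or two seconds;
# a human reading the page and typing a comment spans many more.
cat > /tmp/access_sample.log <<'EOF'
109.169.61.16 - - [12/Aug/2011:06:31:51 -0400] "GET /blog/ HTTP/1.0" 200 110806
109.169.61.16 - - [12/Aug/2011:06:31:51 -0400] "POST /blog/wp-comments-post.php HTTP/1.0" 302 902
109.169.61.16 - - [12/Aug/2011:06:31:52 -0400] "GET /blog/ HTTP/1.0" 200 110806
EOF
awk '/wp-comments-post/ {post[$1]=1} {seen[$1 SUBSEP $4]=1}
     END {for (k in seen) {split(k, a, SUBSEP); if (a[1] in post) n[a[1]]++}
          for (ip in n) print ip, n[ip], "distinct seconds"}' /tmp/access_sample.log
```

For the sample above this prints `109.169.61.16 2 distinct seconds` – exactly the bot profile described.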

The Final solution
I think I’ve made my point about the spam. I have followed Ryan’s advice and activated a plugin called Akismet. Their site looks fairly professional – like they know what they are doing. An API key is required to activate the plugin, but that is available for free for personal blogs. I’ll append to this blog whether or not it works!

Feb 28th update
600 spam comments later, 20 in the last few hours alone, I am sooo tired of rotten apples abusing the leave-a-comment feature. Even though I am protected from approving the comments, the spam is still filling up my database. So I have taken an additional step today and implemented a Captcha plugin. This supposedly requires some human intelligence to answer a simple math problem before the post is allowed. I’ll post here about how well it is or isn’t working.

September update
Well, the captcha plugin has stopped virtually all spam, except one random comment. A user wishing to post a comment has to solve a very simple math/language problem. I recommend this approach. I suppose eventually the scammers will catch up with this defense, but in the meantime I am now enjoying peace and tranquility in my seldom-visited but formerly frequently spammed blog!

Categories
Admin IT Operational Excellence Web Site Technologies

Virtual Server not Working in F5 BigIP

OK. This posting is only directly applicable to the small number of people who run BigIP load balancers. And of that set, only a certain subset will likely ever have this situation. Nevertheless, it’s useful to document it. There are lessons in it for the rest of us: it shows the creative problem-solving process used in IT, or rather the creative process that should be used.

So I had a virtual server associated with a certain pool and it was operating fine for years. Then something changed. We wanted to associate a new name with this virtual server, but test it first, all while keeping the old name working. Well, this is a secured site, by which I mean it is running https rather than http. There’s nothing intrinsic in the web site itself that ties it to a particular name. If this were a run-of-the-mill non-secure site you would solve this problem with DNS: set up an alias and you’re good to go. But secured sites are a wee bit trickier. They present a certificate, after all. And the certificate has just one name, at least ours does. (Multi-name certificates, known as Subject Alternative Name CERTs, deserve a separate post.) And that name is the original DNS name. What to do? Simple. As any BigIP admin would tell you, you create a new virtual server and associate it with a new IP and a new SSL profile containing the new certificate you just bought, but the old pool. In DNS, assign this new IP to your new DNS name. That’s all pretty straightforward.

Having done all that, I blithely tested with lynx. (It’s an old curses-based browser which runs on old Unix systems. The main point is not to test with a complex browser like Internet Explorer, where you are never 100% sure if the problem lies with the browser. If I had it, I would test with curl, but it’s not on that system.) And…it hangs.

Now I’ll admit I did a lot of stupid things (which is typical of any good debugging session by an IT professional – some self-created red herrings accompany any decent sleuthing), and then I ratcheted up the debugging a notch. Check the web server logs. I see no log of my lynx accesses. Dig a little deeper still. Fire up a trace. Here’s a little time-saver. BigIP does have a tcpdump program, but it is a little stunted. Typically you have multiple interfaces on a BigIP. In this case I felt it pertinent to know if packets were getting to the BigIP from lynx, and then again, if those packets were leaving the BigIP and going to the web server. So the tip is that whereas a “normal” tcpdump might allow you to use the switch -i any to listen on all interfaces, that doesn’t work on BigIP. Use -i 0.0 instead. And of course restrict it somehow so that your own shell session’s packets won’t be picked up by the trace, or else you could be in for a nasty surprise of exponentially increasing traffic (a devastating situation perhaps worthy of its own blog entry!). In this case I added the expression port 443. So I have:

tcpdump -i 0.0 port 443

And, somewhat to my surprise (you should always have a hypothesis, even if it’s just a gut feeling: will this little test work, or not? Why?), not only were packets going from lynx to BigIP and then again to the web server, I could even see returned packets back from the web server to BigIP to lynx. But it was not a lot of packets. A SYN, SYN-ACK and maybe a single data packet, and that’s about it. It should have been more chatty.

The more tests you can think of, the better, especially ones that emphasize the marginal differences between the thing that works and the one that doesn’t. One test along those lines: take this same virtual server and associate it with a different pool. I did that, and that test worked!

Next, I tried to access the web server using curl on the BigIP itself. I could, but not at first. First I used the local web server URL http://web_server_ip:443/. It hung my curl command, just like using lynx on the other server. Hmm. I then looked on the web server again. I notice that it has a certificate installed. Ah. So it’s actually running https. So try curl from BigIP again, but this time with the -k switch (insecure, meaning don’t verify the certificate issuer) and a url beginning with https rather than http. Bingo. It comes back with the home page. Now we’re getting somewhere.

Finally I look more closely at the virtual server setup for the old name, the one that works. I see that the server profile is SSL. It basically means that the traffic is encrypted when it hits the BigIP, and the server CERT is associated with the external name. The BigIP decrypts the traffic, then re-encrypts it before sending it along to the web server. The CERT for the second leg is a self-signed CERT and is never seen by users.

I had forgotten to set up my new test virtual server with the server SSL profile, so the second leg of traffic was not being re-encrypted by the BigIP, even though the web server was only willing to engage in SSL communication with the BigIP. Once I updated the server profile, it all worked fine! Of course after getting the expected results from lynx I went to my desktop browser, just like a regular user, and successfully tested it there as well. You want to make sure your final tests are a realistic approximation of what the user will be doing. If that’s not possible entirely under your own control, bring in a user for testing.

Liked this article? Here’s another of my IT operational excellence articles that has a somewhat wider applicability.

Categories
IT Operational Excellence Linux

Grep is Slow as a Snail in SLES 11 – Solved

I had written earlier about the performance problems of Suse Linux Enterprise Server v 11 Service Pack 1 (SLES 11 SP1) under VMWare: http://drjohnstechtalk.com/blog/2011/06/performance-degradation-with-sles-11-sp1-under-vmware/.  What I hadn’t fully appreciated at that time is that part of the problem could be with the grep command itself.  Further investigation has convinced me that grep as implemented under SLES 11 SP 1 X86_64 is horrible.  It is seriously broken.  The following results hold on both a VM and a physical server.

Methodology 1

A cksum shows that grep has changed between SLES 10 SP 3 and SLES 11 SP 1.  I’m not sure what the changes are.  So I performed an strace while grep’ing a short file to see if there are any extra system calls which occur under SLES 11 SP 1.  There are not.

I copied the grep binary from SLES 10 SP 3 to a SLES 11 SP 1 system.  I was afraid this wouldn’t work because it might rely on dynamic libraries which also could have changed.  However this appears not to be the case, and the grep binary from the SLES 10 system is about 17 times faster, running on the same SLES 11 system!
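If you try this binary-copying trick yourself, ldd will tell you up front whether the copied binary’s shared-library dependencies resolve on the destination system. A quick sketch; any missing library would show up as “not found”:

```shell
# Inspect the shared libraries a binary is linked against; run this on the
# destination system after copying the binary over.
GREP_BIN=$(command -v grep)
ldd "$GREP_BIN"
```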

Methodology 2

I figure that I am a completely amateur programmer.  If with all my limitations I can implement a search utility that does considerably better than the shell command grep, I can fairly decisively conclude that grep is broken.  Recall that we already have comparisons that show that grep under SLES 10 SP 3 is many times faster than under SLES 11 SP 1.

Results

The table summarizes the findings. All tests were on a 109 MB file which has 460,000 lines.

OS Type of Grep Time (s)
SLES 11 SP 1 built-in 42.6
SLES 11 SP 1 SLES 10 SP 3 grep binary 2.5
SLES 11 SP 1 Perl grep 1.1
SLES 10 SP 3 built-in 1.2
SLES 10 SP 3 Perl grep 0.35

The Code for Perl Grep

Hey, I don’t know about you, but I only use a fraction of the features in grep. The switches i and v cover about 99% of what I do with it. Well, come to think of it I do use alternate expressions in egrep (w/ the “|” character), and the C switch (provides context by including surrounding lines) can sometimes be really helpful. The l (print filenames only) and n (include line numbers) switches look useful on paper, but you almost never end up needing them. Anyways, I simply didn’t program those things, to keep it simple. Maybe later. To make it as fast as possible I avoided anything I thought the interpreter might trip over, at the expense of repeating code snippets multiple times. At some point (allowing another switch or two) my approach would become ludicrous, as there would be too many combinations to consider. But at least in my testing it does function just like grep, only, as you see from the table above, much faster. If I had written it in a compiled language like C it should go faster still. Perl is an interpreted language, so there will always be a performance penalty in using it. The advantage, of course, is that it is so darn easy to write useful code.

#!/usr/bin/perl
# J.Hilgart, 6/2011
# model grep implementation in Perl
# feel free to borrow or use this, but it will not be supported
use Getopt::Std;
$DEBUG = 0;
# Get the command line options.
getopts('iv');
# the search string has to be present
$mstr = shift @ARGV;
usage() unless $mstr;
# escape dots so they match literally - handy when the pattern is an IP address
$mstr =~ s/\./\\./g;
# the remaining arguments are the files to be searched
$nofiles = @ARGV;
print "nofiles: $nofiles\n" if $DEBUG;
$filePrefix = "";   # set per file inside each search loop
 
# call subroutine based on arguments present
optiv() if $opt_i && $opt_v;
opti()  if $opt_i;
optv()  if $opt_v;
normal();
################################
sub normal {
foreach (@ARGV) {
# set the filename prefix per file, now that $_ holds the current filename
  $filePrefix = $nofiles > 1 ? "$_:" : "";
  open(FILE,"$_") || die "Cannot open $_!!\n";
  while(<FILE>) {
# print filename if there is more than one file being searched
    print "$filePrefix$_" if /$mstr/;
  }
  close(FILE);
}
if (! $nofiles) {
# no files specified, use STDIN
while(<STDIN>) {
  print if /$mstr/;
}
}
exit;
} # end sub normal
###############################
sub opti {
foreach (@ARGV) {
# set the filename prefix per file, now that $_ holds the current filename
  $filePrefix = $nofiles > 1 ? "$_:" : "";
  open(FILE,"$_") || die "Cannot open $_!!\n";
  while(<FILE>) {
    print "$filePrefix$_" if /$mstr/i;
  }
  close(FILE);
}
if (! $nofiles) {
# no files specified, use STDIN
while(<STDIN>) {
  print if /$mstr/i;
}
}
exit;
} # end sub opti
#################################
sub optv {
foreach (@ARGV) {
# set the filename prefix per file, now that $_ holds the current filename
  $filePrefix = $nofiles > 1 ? "$_:" : "";
  open(FILE,"$_") || die "Cannot open $_!!\n";
  while(<FILE>) {
    print "$filePrefix$_" unless /$mstr/;
  }
  close(FILE);
}
if (! $nofiles) {
# no files specified, use STDIN
while(<STDIN>) {
  print unless /$mstr/;
}
}
exit;
} # end sub optv
##############################
sub optiv {
foreach (@ARGV) {
# set the filename prefix per file, now that $_ holds the current filename
  $filePrefix = $nofiles > 1 ? "$_:" : "";
  open(FILE,"$_") || die "Cannot open $_!!\n";
  while(<FILE>) {
    print "$filePrefix$_" unless /$mstr/i;
  }
  close(FILE);
}
if (! $nofiles) {
# no files specified, use STDIN
while(<STDIN>) {
  print unless /$mstr/i;
}
}
exit;
} # end sub optiv
sub usage {
print "usage: $0 [-i] [-v] pattern [file ...]\n";
exit 1;
}

Conclusion
So built-in grep performs horribly on SLES 11 SP 1 – about 17 times slower than the SLES 10 SP 3 grep. I wonder what an examination of the source code would reveal? But who has time for that? So I’ve shown a way to avoid it entirely, by using a Perl grep instead – modify it to suit your needs. It’s considerably faster than what the system provides, which is really sad since it’s an amateur, two-hour effort compared to the decade+ (?) of professional development on Posix grep. What has me more concerned is what I haven’t found yet that also performs horribly under SLES 11 SP 1. It’s like deer on the side of the road in New Jersey – where there’s one, there are likely to be more lurking nearby : ) .

Follow Up
We will probably open a support case with Novell. I am not very optimistic about our prospects. This will not be an easy problem for them to resolve – the code may be contributed, for instance. So, this is where it gets interesting. Is the much-vaunted rapid bug-fixing of open source really going to make a substantial difference? I would have to look to OpenSUSE to find out (where I suppose the fixed code would first be released), which I may do. I am skeptical this will be fixed this year. With luck, in a year’s time.

7/15 Update
There is a newer version of grep available. Old version: grep-2.5.2-90.18.41; New version: grep-2.6.3-90.18.41 Did it fix the problem? Depends how low you want to lower the bar. It’s a lot better, yes. But it’s still three times slower than grep from SLES 10 SP3. So…still a long ways to go.

9/7 Update – The Solution
Novell came through today, three months later. I guess that’s better than I pessimistically predicted, but hardly anything to brag about.

Turns out that things get dramatically better if you simply define the environment variable LC_ALL=POSIX. They do expect a better fix in SLES 11 SP 2, but there’s no release date for that yet. Being a curious sort, I revisited SLES 10 SP3 with this environment variable defined, and it considerably improved performance there as well! This variable has to do with locale and language support. Here’s a table with some recent results. Unfortunately the SLES 11 SP 1 system is a VM and the SLES 10 SP3 system is a physical server, although the same file was used. So the thing to concentrate on is the improvement in performance of grep with vs. without LC_ALL defined.

OS LC_ALL=POSIX defined? Time (s)
SLES 11 SP 1 no 6.9
SLES 11 SP 1 yes 0.36
SLES 10 SP 3 no 0.35
SLES 10 SP 3 yes 0.19

So if you use SLES 10/11, make sure you have a

export LC_ALL=POSIX

defined somewhere in your profile if you plan to use grep very often. It makes a 19x performance improvement in SLES 11 and almost a 2x performance improvement under SLES 10 SP3.
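You can see the effect for yourself with a quick self-contained benchmark. The file below is generated on the fly (its name and size are arbitrary), so the absolute times won’t match my table, but the relative difference should show up:

```shell
# Build a ~10 MB test file, then time grep with and without LC_ALL=POSIX.
seq 1 500000 | sed 's/$/ some filler text/' > /tmp/grepbench.txt
echo "10.1.2.3 needle" >> /tmp/grepbench.txt
time grep -c needle /tmp/grepbench.txt
time LC_ALL=POSIX grep -c needle /tmp/grepbench.txt
```

Both runs print a match count of 1; only the elapsed times differ.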

Related
If you like the idea of grep but want a friendlier interface, I ought to mention Splunk. A Google search will lead you to it. It started with a noble concept – all the features of grep, plus a convenient web interface so you never have to get your hands dirty and actually log into a Linux/Unix system. It was like grep on steroids. But then, in my opinion, they ruined a simple utility and blew it up with so many features that it’ll take hours just to scratch the surface of its capabilities. And I’m not even sure a free version is still available. Still, it might be worth a look in some cases. In my case it actually slowed down searches, though supposedly it should have sped them up.

And to save for last what should have come first, grep is a search utility that’s great for looking at unstructured (not in a relational database) data.

Categories
Linux

Gnu Parallel Really Helps With Zcat

I happened across what I think will turn out to be a very useful open source application: Gnu parallel.  Let me explain how I got to that point.  I was uncompressing gzip’d files, of which I have lots.  So many, in fact, that it can take the better part of a day to go through them all.  The server I was using is four years old, but I don’t have access to anything better.  In fact it seems processor speed has sort of plateaued.  Moore’s Law only applies if you count the number of processors, which are multiplying like rabbits.  So I started to think that maybe my single-threaded approach is wrong, see?  I need to scale horizontally, not vertically.

It just happens that in my former life I spent quite some effort into parallelizing a compute-intensive program I wrote.  I can’t believe that it’s still hanging around on the Internet!  I haven’t looked for it for at least 17 years: my FERMISV  Monte Carlo.  So I thought, what if I could parallelize my zcat|grep?  Writing such a thing from scratch would be too time-consuming, but after searching for parallelize zcat I quickly found Gnu Parallel.  It’s one thing to identify such a program in a Google search, another to get it working.  Believe me, I’ve put in my time with some of these WordPress plugins.

The upshot is that, yes, Gnu Parallel actually works and it’s not hard to learn to use.  They have a nice Youtube video.  For me I employed syntax like:  

ls *gz|time parallel -k "zcat {}|grep 192.168.23.34"  > /tmp/matches

This, of course, is to be compared with the normal operation I have been doing for eons:

time zcat *gz|grep 192.168.23.34 > /tmp/matches

The “time” is just for benchmarking and normally wouldn’t be present.

The results are that the matches file produced in the two different ways are identical.  That’s good!  The -k switch ensured the order is preserved in the output, and anyways, it didn’t cost me any time. Otherwise the order is not guaranteed. I tested on a server with four cores.  Gnu Parallel reported that it used 375% of my CPU!  It finished in less than half the time of my normal way of doing things.  I need to put it through more real-world exercises, but it really looks promising.  I never got onto the xargs bandwagon (perhaps I should be ashamed to admit it?), but Gnu Parallel can do a lot of that same type of thing, but in my opinion is more intuitive.  Take a look!
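For those already on the xargs bandwagon, here’s roughly the equivalent using xargs -P – a sketch with made-up file names and IP. Note that, unlike parallel -k, xargs makes no promise about preserving output order across files:

```shell
# Create two tiny gzip'd test files, then search them with 4 parallel jobs.
mkdir -p /tmp/pztest && cd /tmp/pztest && rm -f *.gz
printf 'hit 192.168.23.34\nmiss\n' | gzip > a.gz
printf 'miss\nhit 192.168.23.34\n' | gzip > b.gz
ls *.gz | xargs -P 4 -I {} sh -c 'zcat {} | grep 192.168.23.34' > /tmp/matches
# both matching lines arrive, though not necessarily in file order
grep -c 192.168.23.34 /tmp/matches
```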

Categories
Linux

Performance Degradation With SLES 11 SP1 Under VMWare

Some friends and I found severe performance degradation using SLES 11 SP1 under VMWare.  That’s Suse Linux Enterprise Server Service Pack 1.  I’m still trying to get my head around it – there are lots of variables to control for. Later update: for my analysis of the specific problem with grep on SLES 11 SP1, please go here: http://drjohnstechtalk.com/blog/2011/06/grep-is-slow-as-a-snail-in-sles-11/.

The Method

I start with a 20 Mb gzip’d file.  Let’s call it file.gz.  Uncompressed it is 107 MB.  I run

time zcat file.gz > /dev/null

I run it several times and report the lowest number which I am consistently able to reproduce. That feels the fairest way to me to benchmark in this case.
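For the curious, here’s a sketch of that best-of-N timing loop, using a generated file rather than my actual file.gz (which you obviously don’t have):

```shell
# Best-of-five timing of zcat, in milliseconds; the test file is generated.
seq 1 100000 | gzip > /tmp/file.gz
best=""
for i in 1 2 3 4 5; do
  start=$(date +%s%N)                      # GNU date: seconds + nanoseconds
  zcat /tmp/file.gz > /dev/null
  end=$(date +%s%N)
  dur=$(( (end - start) / 1000000 ))       # elapsed milliseconds
  [ -z "$best" ] || [ "$dur" -lt "$best" ] && best=$dur
done
echo "best of 5 runs: ${best} ms"
```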

On a fast, 3 GHz physical server running SLES 10 SP3 it takes 0.63 s. Let’s call that server Physical.

Then I add in something to make it useful: grep’ing for an IP address:

time zcat file.gz|grep 192.168.23.34 > /dev/null

On Physical that takes 0.65 s.

My Amazon EC2 image – let’s call it Amazon-EC2 – runs ubuntu 10.10 on a 2.6 GHz VM. To learn CPU speed in ubuntu:

cat /proc/cpuinfo|grep MHz

My SLES 11 SP1 is a guest VM on VMWare ESX. It has a 2.4 GHz processor. Let’s call it SLES11SP1. For CPU speed in SLES*:

dmesg|grep -i cpu

and look for the line that says Intel … GHz.

* 7/1/2011 Correction: The correct way to get the processor speed is

grep -i mhz /proc/cpuinfo

The cpu MHz line shows the running speed. This also seems to work in RHEL (Redhat) and Debian-derived distributions such as Ubuntu. I’ve looked at several models. Usually the model name – which is what you get from the dmesg way – lists a speed that is the same as the cpu MHz (allowing for the 1000x difference between GHz and MHz), but not always! I have come across a recently purchased server with a surprisingly slow clock speed, one quite different from the CPU name:

model name      : Intel(R) Xeon(R) CPU           E7520  @ 1.87GHz
cpu MHz         : 1064.000

Who knew you could even buy a server-class CPU that slow these days?
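To see both numbers side by side on any Linux box, something like this works (the output lines vary by CPU model, of course):

```shell
# Show the advertised model speed alongside the actual running clock speed.
grep -E 'model name|cpu MHz' /proc/cpuinfo | sort -u
```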

For comparison I have a SLES 10 SP3 VM under VMWare. It also has a 2.4 GHz CPU. SLES10SP3. All servers are X86_64.

The Results
The amazing results:

Server zcat time zcat|grep IP time
Physical 0.63 s 0.65 s
Amazon-EC2 0.73 s 1.06 s
SLES11SP1 0.99 s 5.8 s
SLES10SP3 0.78 s 0.93 s

Analysis
I’ve tried many more variants than displayed in that table, but you get the idea. All VMs are slower than all physical servers tested, in all tests. Most discouragingly, the SLES 11 SP1 VM is five or six times slower than a comparable physical server for the real-life test involving grep. I used the same file in all tests.

Conclusions
Virtualization is not a panacea. It has its role, but also its limitations. And either something is wrong with SLES 11 SP1, which I doubt, or something is wrong with the way I set it up, despite the fact that I’ve tried a few variants. We’re going to try it on a physical server next. I’ll update this post if anything noteworthy happens.

Update
I got it tested on a physical server now, too: an HP G6 with 4 GB RAM, running SLES 11 SP1. The CPU identified itself as a Xeon E5504 @ 2.0 GHz. Here are the awful timings:

Server zcat time zcat|grep IP time
SLES 11 SP1 Physical 0.90 s 4.8 s

That shows that SLES 11 SP1 itself is causing my performance degradation. Not every instance of SLES 11 is faulty, however. It turns out that IBM’s Watson is running it! http://whatis.techtarget.com/definition/ibm-watson-supercomputer.html

For the record, the kernel in SLES 11 SP1 is 2.6.32.12-0.7, while in SLES 10 SP3 it is 2.6.16.60-0.68.1. On a RHEL 5.6 VM (I did not bother to list its results), where the kernel was 2.6.18-238.1.1, the degradation was not nearly so bad either. I wonder if the kernel version is having a major impact? Perhaps not. The kernel in ubuntu 10.10 looks pretty new – 2.6.35-24. I am running

uname -a

to see the kernel version.

Further Update
We also got to test with SLES 10 SP4 VM. You see from the table that it is well-behaved. It had 2 CPUs which identified themselves as X6550 @ 2.0 GHz. 4 GB RAM. Kernel 2.6.16.60-0.85.1

Server zcat time zcat|grep IP time
SLES 10 SP4 VM 0.96 s 1.0 s
Categories
Web Site Technologies

Changing The Font in WordPress TwentyTen Theme

I still think WordPress is a mess!  But it’s better than what I could do on my own, so I’m sticking with it for now.  The simplest things quickly devolve into an exercise for a jaded IT veteran.  Don’t even get me started on replacing the header image while using an Ubuntu server which conveniently does not come with php5-gd.

Here is something I actually managed to do without hours of effort: replacing the font style in my posts.  Sans-serif fonts are generally more readable, look more professional and are more compact.  Not sure why the twentyten theme does not use one for the body of the posts.  What I found, without being a CSS master, is that if you change line 118 of style.css to

        font-family: Arial, Helvetica, sans-serif;

It works to change the font of all your posts.

Before the change that line reads like this:

That was it for me!  Not bad, eh?

For a more sophisticated treatment, you should consider doing all your customizations in a child theme. It can’t be hard because I just managed to do it! It looks to be a more elegant and robust approach, which appeals to me, because you leave all the files in the parent theme alone, and only override the files which actually need to be changed. Read about it in http://codex.wordpress.org/Child_Themes.

Categories
Web Site Technologies

WordPress Templates are a Nightmare

Like a typical opensource effort, WordPress is a mixed bag. It’s wonderful for scripters like me to get so much access to the source. But the documentation and concepts are opaque, and this is coming from a seasoned IT veteran. Could they have possibly made it more complicated?

I hope to help you cut through the inscrutable explanations on such pages as http://codex.wordpress.org/Template_Tags and cut to the chase for knowing how to change what matters, whether or not, like me, you really understand what the heck they’re talking about.

Say you want to modify something in the appearance of your posts.  That’s what I wanted.  Once I learned the “easy way” to install plugins (see http://drjohnstechtalk.com/blog/2011/06/security-considerations-for-wordpress-plugins-and-upgrades/), I wanted to get a plugin to count the millions of expected visitors, ha, ha!

Now more comfortable with plugins, I actually installed WP-PostViews using my SmartPhone.  Cool, right?  Except that I found installation is one thing, configuring it to actually do something another.  Fortunately, I do have a PhD in a technical field, so I refused to be daunted.

I wanted to display the view count above or below each post.  From the exceedingly poor documentation available on WP-PostViews, I gathered that I needed to insert this php code:

<?php if(function_exists('the_views')) { the_views(); } ?>

into one or more of my template files to display the view count.  The (incorrect) PostViews documentation said just put it into index.php, inside the section

<?php while ( have_posts() ) : the_post(); ?>

Great.  That simply doesn’t exist in my index.php in my theme (twentyten).

So now we’re looking at all these files in that directory, wp-content/themes/twentyten, to figure out which may be the right one:

404.php         comments.php   loop-attachment.php  page.php            tag.php
archive.php     footer.php     loop-page.php        search.php
attachment.php  functions.php  loop.php             sidebar-footer.php
author.php      header.php     loop-single.php      sidebar.php
category.php    index.php      onecolumn-page.php   single.php

As I promised I’ll cut through all the bluster about themes, templates, hierarchies and other WordPress nonsense.  My degree is in experimental physics.  I experiment.  By experimentation and some tiny understanding of their concepts I can now say you need to change these two files:

loop.php
loop-single.php

That’s it.  I just saved you three hours of useless research.

Update
I see they try to make it easier for you by allowing you to edit the template files from within the admin GUI, including some function documentation. It still leaves a lot to be desired.

To be continued…