
Idea for free web server certificates: Let’s Encrypt

Intro
I’ve written various articles about SSL. I just came across a way to get your certificates for free, letsencrypt.org. But their thing is to automate certificate management. I think you have to set up the whole automated certificate management environment just to get one of their free certificates. So that’s a little unfortunate, but I may try it and write up my experience with it in this blog (Update: I did it!). Stay tuned.

Short duration certificates
I recently happened upon a site that uses one of these certificates and was surprised to see that it expires in 90 days. All the certificates I’ve ever bought are valid for at least a year, sometimes two or three. But Let’s Encrypt has a whole page justifying their short certificates, which kind of makes sense: it forces you to adopt their automated renewal process, because it would be too burdensome for site admins to constantly renew these certificates by hand the way they used to.

November 2016 update
Since posting this article I have worked with a hosting firm a little bit. I was surprised by how easily he could get a certificate for one of “my” domain names. Apparently all it took was for Let’s Encrypt to verify that he controlled the server which my domain name resolved to. That’s different from the usual way of verification, where the whois registration of the domain gets queried. That never happened here! I think by now the Let’s Encrypt CA, IdenTrust Commercial Root CA 1, is accepted by the major browsers.

Here’s a picture that shows one of these certificates, just issued in November 2016, with its short expiration.

[screenshot: lets-encrypt-2016-11-22_15-03-39]

My own experience in getting a certificate
I studied the ACME protocol a little bit. It’s complicated. Nothing’s easy these days! So you need a program to help you implement it. I went with acme.sh over Certbot because it is much more lightweight – it works through the bash shell. Certbot wanted to update about 40 packages on my system, which really seems like overkill.

I’m very excited about how easy it was to get my first certificate from letsencrypt! It worked the first time. I made sure the account I ran this command from had write access to the HTML root (the “webroot”) because an authentication challenge occurs to prove that I administer that web server:

$ acme.sh --issue -d drjohnstechtalk.com -w /web/drj

[Wed Nov 30 08:55:54 EST 2016] Registering account
[Wed Nov 30 08:55:56 EST 2016] Registered
[Wed Nov 30 08:55:57 EST 2016] Update success.
[Wed Nov 30 08:55:57 EST 2016] Creating domain key
[Wed Nov 30 08:55:57 EST 2016] Single domain='drjohnstechtalk.com'
[Wed Nov 30 08:55:57 EST 2016] Getting domain auth token for each domain
[Wed Nov 30 08:55:57 EST 2016] Getting webroot for domain='drjohnstechtalk.com'
[Wed Nov 30 08:55:57 EST 2016] _w='/web/drj'
[Wed Nov 30 08:55:57 EST 2016] Getting new-authz for domain='drjohnstechtalk.com'
[Wed Nov 30 08:55:58 EST 2016] The new-authz request is ok.
[Wed Nov 30 08:55:58 EST 2016] Verifying:drjohnstechtalk.com
[Wed Nov 30 08:56:02 EST 2016] Success
[Wed Nov 30 08:56:02 EST 2016] Verify finished, start to sign.
[Wed Nov 30 08:56:03 EST 2016] Cert success.
-----BEGIN CERTIFICATE-----
MIIFCjCCA/KgAwIBAgISA8T7pQeg535pA45tryZv6M4cMA0GCSqGSIb3DQEBCwUA
MEoxCzAJBgNVBAYTAlVTMRYwFAYDVQQKEw1MZXQncyBFbmNyeXB0MSMwIQYDVQQD
ExpMZXQncyBFbmNyeXB0IEF1dGhvcml0eSBYMzAeFw0xNjExMzAxMjU2MDBaFw0x
NzAyMjgxMjU2MDBaMB4xHDAaBgNVBAMTE2Ryam9obnN0ZWNodGFsay5jb20wggEi
MA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC1PScaoxACI0jhsgkNcbd51YzK
eVI/P/GuFO8VCTYvZAzxjGiDPfkEmYSYw5Ii/c9OHbeJs2Gj5b0tSph8YtQhnpgZ
c+3FGEOxw8mP52452oJEqrUldHI47olVPv+gnlqjQAMPbtMCCcAKf70KFc1MiMzr
2kpGmJzKFzOXmkgq8bv6ej0YSrLijNFLC7DoCpjV5IjjhE+DJm3q0fNM3BBvP94K
jyt4JSS1d5l9hBBIHk+Jjg8+ka1G7wSnqJVLgbRhEki1oh8HqH7JO87QhJA+4MZL
wqYvJdoundl8HahcknJ3ymAlFXQOriF23WaqjAQ0OHOCjodV+CTJGxpl/ninAgMB
AAGjggIUMIICEDAOBgNVHQ8BAf8EBAMCBaAwHQYDVR0lBBYwFAYIKwYBBQUHAwEG
CCsGAQUFBwMCMAwGA1UdEwEB/wQCMAAwHQYDVR0OBBYEFGaLNxVgpSFqgf5eFZCH
1B7qezB6MB8GA1UdIwQYMBaAFKhKamMEfd265tE5t6ZFZe/zqOyhMHAGCCsGAQUF
BwEBBGQwYjAvBggrBgEFBQcwAYYjaHR0cDovL29jc3AuaW50LXgzLmxldHNlbmNy
eXB0Lm9yZy8wLwYIKwYBBQUHMAKGI2h0dHA6Ly9jZXJ0LmludC14My5sZXRzZW5j
cnlwdC5vcmcvMB4GA1UdEQQXMBWCE2Ryam9obnN0ZWNodGFsay5jb20wgf4GA1Ud
IASB9jCB8zAIBgZngQwBAgEwgeYGCysGAQQBgt8TAQEBMIHWMCYGCCsGAQUFBwIB
FhpodHRwOi8vY3BzLmxldHNlbmNyeXB0Lm9yZzCBqwYIKwYBBQUHAgIwgZ4MgZtU
aGlzIENlcnRpZmljYXRlIG1heSBvbmx5IGJlIHJlbGllZCB1cG9uIGJ5IFJlbHlp
bmcgUGFydGllcyBhbmQgb25seSBpbiBhY2NvcmRhbmNlIHdpdGggdGhlIENlcnRp
ZmljYXRlIFBvbGljeSBmb3VuZCBhdCBodHRwczovL2xldHNlbmNyeXB0Lm9yZy9y
ZXBvc2l0b3J5LzANBgkqhkiG9w0BAQsFAAOCAQEAc4w4a+PFpZqpf+6IyrW31lj3
iiFIpWYrmg9sa79hu4rsTxsdUs4K9mOKuwjZ4XRfaxrRKYkb2Fb4O7QY0JN482+w
PslkPbTorotcfAhLxxJE5vTNQ5XZA4LydH1+kkNHDzbrAGFJYmXEu0EeAMlTRMUA
N1+whUECsWBdAfBoSROgSJIxZKr+agcImX9cm4ScYuWB8qGLK98RTpFmGJc5S52U
tQrSJrAFCoylqrOB67PXmxNxhPwGmvPQnsjuVQMvBqUeJMsZZbn7ZMKr7NFMwGD4
BTvUw6gjvN4lWvs82M0tRHbC5z3mALUk7UXrQqULG3uZTlnD7kA8C39ulwOSCQ==
-----END CERTIFICATE-----
[Wed Nov 30 08:56:03 EST 2016] Your cert is in  /home/drj/.acme.sh/drjohnstechtalk.com/drjohnstechtalk.com.cer
[Wed Nov 30 08:56:03 EST 2016] Your cert key is in  /home/drj/.acme.sh/drjohnstechtalk.com/drjohnstechtalk.com.key
[Wed Nov 30 08:56:04 EST 2016] The intermediate CA cert is in  /home/drj/.acme.sh/drjohnstechtalk.com/ca.cer
[Wed Nov 30 08:56:04 EST 2016] And the full chain certs is there:  /home/drj/.acme.sh/drjohnstechtalk.com/fullchain.cer

Behind the scenes the authentication resulted in these two accesses to my web server:

66.133.109.36 - - [30/Nov/2016:08:55:59 -0500] "GET /.well-known/acme-challenge/EJlPv9ar7lxvlegqsdlJvsmXMTyagbBsWrh1p-JoHS8 HTTP/1.1" 301 618 "-" "Mozilla/5.0 (compatible; Let's Encrypt validation server; +https://www.letsencrypt.org)"
66.133.109.36 - - [30/Nov/2016:08:56:00 -0500] "GET /.well-known/acme-challenge/EJlPv9ar7lxvlegqsdlJvsmXMTyagbBsWrh1p-JoHS8 HTTP/1.1" 200 5725 "http://drjohnstechtalk.com/.well-known/acme-challenge/EJlPv9ar7lxvlegqsdlJvsmXMTyagbBsWrh1p-JoHS8" "Mozilla/5.0 (compatible; Let's Encrypt validation server; +https://www.letsencrypt.org)" "drjohnstechtalk.com"

The first request was HTTP, which I redirect to https while preserving the URL – hence the second request. You see now why I needed write access to the webroot of my web server.
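
For reference, the redirect itself is ordinary mod_rewrite work. Here is a minimal sketch of such a port-80 virtual host – a simplified illustration, not my exact production config:

<VirtualHost *:80>
    ServerName drjohnstechtalk.com
    RewriteEngine on
    # send everything to https, preserving the requested URI
    RewriteRule ^/(.*) https://drjohnstechtalk.com/$1 [R=301,L]
</VirtualHost>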

Refine our approach
In the end I decided to run as root in order to protect the private key from prying eyes. That looked like this:

$ acme.sh --issue --force -d drjohnstechtalk.com -w /web/drj --reloadcmd "service apache24 reload" --certpath /etc/apache24/certs/drjohnstechtalk.crt --keypath /etc/apache24/certs/drjohnstechtalk.key --fullchainpath /etc/apache24/certs/fullchain.cer

A nice feature of acme.sh is that it remembers the parameters you’ve typed by hand and saves them in a single convenient configuration file. The contents of mine look like this:

Le_Domain='drjohnstechtalk.com'
Le_Alt='no'
Le_Webroot='/web/drj'
Le_PreHook=''
Le_PostHook=''
Le_RenewHook=''
Le_API='https://acme-v01.api.letsencrypt.org'
Le_Keylength=''
Le_LinkCert='https://acme-v01.api.letsencrypt.org/acme/cert/037fe5215bb5f4df6a0098fefd50b83b046b'
Le_LinkIssuer='https://acme-v01.api.letsencrypt.org/acme/issuer-cert'
Le_CertCreateTime='1480710570'
Le_CertCreateTimeStr='Fri Dec  2 20:29:30 UTC 2016'
Le_NextRenewTimeStr='Tue Jan 31 20:29:30 UTC 2017'
Le_NextRenewTime='1485808170'
Le_RealCertPath='/etc/apache24/certs/drjohnstechtalk.crt'
Le_RealCACertPath=''
Le_RealKeyPath='/etc/apache24/certs/drjohnstechtalk.key'
Le_ReloadCmd='service apache24 reload'
Le_RealFullChainPath='/etc/apache24/certs/fullchain.cer'
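
Renewal is the other half of the story. acme.sh is designed to renew via cron; it normally installs a crontab entry for the account that ran it, something like this (the path reflects my home directory – yours will differ):

0 0 * * * "/home/drj/.acme.sh"/acme.sh --cron --home "/home/drj/.acme.sh" > /dev/null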

References and related
Examples of using Let’s Encrypt with domain (DNS) validation: How I saved $69 a year on certificate cost.
The Let’s Encrypt web site: letsencrypt.org
When I first switched from http to https: drjohnstechtalk is now an encrypted web site
Ciphers
Let’s Encrypt’s take on the short-lived certificates they issue: Why 90-day certificates
The acme.sh script I used: https://github.com/Neilpang/acme.sh
Certbot, a client which implements the ACME protocol: https://certbot.eff.org/
IETF ACME draft proposal: https://datatracker.ietf.org/doc/draft-ietf-acme-acme/?include_text=1


The IT Detective Agency: WordPress login failure leads to discovery of ssh brute force attack

Intro
My WordPress instance never gave any problems for years. Then one day my usual username/password wouldn’t log me in! One thing led to another until I realized I was under an ssh brute force attack from Hong Kong. Then I implemented a piece of software that really helped the situation…

The details
Login failure

So, like anyone would do, I double-checked that I was typing the password correctly. Once I convinced myself of that I went to an ssh session I had open to the server. When all else fails, restart, right? Except this is not Windows (I run CentOS), so there’s no real need to restart the server. There very rarely is.

Mysql fails to start
So I restarted mysql and the web server. I noticed mysql database wasn’t actually starting up. It couldn’t create a PID file or something – no space left on device.

No space on /
What? I never had that problem before. In an enterprise environment I’d have disk monitors and all that good stuff, but as a singleton user of Amazon AWS, I suppose they could monitor and alert me to disk problems but they’d probably want to charge me for the privilege. So yeah, a df -k showed 0 bytes available on /. That’s never a good thing.

/var/log very large
So I ran a du -k from / and sent the output to /tmp/du-k so I could review it at my leisure. Fail. Nope, can’t do that, because I can’t write to /tmp: it’s on the / partition in my simple-minded server configuration! OK. Just run du -k and scan the results by eye… I see /var/log consumes about 3 GB out of the 6 GB available, which is more than I expected.

btmp is way too large
So I did an ls -l in /var/log and saw that btmp alone is 1.9 GB in size. What the heck is btmp? Some searches show it to be a log used to record failed ssh login attempts. What is it recording?

Disturbing contents of btmp
I learned that to read btmp you do a
> last -f btmp
The output is zillions of lines like these:

root     ssh:notty    43.229.53.13     Mon Oct 26 14:56 - 14:56  (00:00)
root     ssh:notty    43.229.53.13     Mon Oct 26 14:56 - 14:56  (00:00)
root     ssh:notty    43.229.53.13     Mon Oct 26 14:56 - 14:56  (00:00)
root     ssh:notty    43.229.53.13     Mon Oct 26 14:56 - 14:56  (00:00)
root     ssh:notty    43.229.53.13     Mon Oct 26 14:56 - 14:56  (00:00)
root     ssh:notty    43.229.53.13     Mon Oct 26 14:56 - 14:56  (00:00)
root     ssh:notty    43.229.53.13     Mon Oct 26 14:56 - 14:56  (00:00)
root     ssh:notty    43.229.53.13     Mon Oct 26 14:56 - 14:56  (00:00)
root     ssh:notty    43.229.53.13     Mon Oct 26 14:56 - 14:56  (00:00)
root     ssh:notty    43.229.53.13     Mon Oct 26 14:56 - 14:56  (00:00)
root     ssh:notty    43.229.53.13     Mon Oct 26 14:56 - 14:56  (00:00)
root     ssh:notty    43.229.53.13     Mon Oct 26 14:56 - 14:56  (00:00)
root     ssh:notty    43.229.53.13     Mon Oct 26 14:56 - 14:56  (00:00)
root     ssh:notty    43.229.53.13     Mon Oct 26 14:56 - 14:56  (00:00)
root     ssh:notty    43.229.53.13     Mon Oct 26 14:56 - 14:56  (00:00)
...

I estimate roughly 3.7 login attempts per second. And it’s endless. So I consider it a brute force attack to gain root on my server. This estimate is based on extrapolating from a 10-minute interval by doing one of these:

> last -f btmp | grep 'Oct 26 14:5' | wc

and dividing the result by 10 min * 60 s/min.
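
To make the arithmetic concrete: suppose the wc in that 10-minute window counts 2220 lines (a hypothetical figure consistent with my estimate). Then:

2220 attempts / (10 min * 60 s/min) = 3.7 attempts/s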

First approach to stop it
I’m a networking guy at heart, and remember, when you have a hammer all problems look like nails 😉 ? What is the network nail in this case? The attacker’s IP address, of course. We can just make sure packets originating from that IP never get returned from my server, by doing one of these:

> route add -host 43.229.53.13 gw 127.0.0.1

Check it with one of these:

> netstat -rn

Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
43.229.53.13    127.0.0.1       255.255.255.255 UGH       0 0          0 lo
10.185.21.64    0.0.0.0         255.255.255.192 U         0 0          0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U         0 0          0 eth0
0.0.0.0         10.185.21.65    0.0.0.0         UG        0 0          0 eth0

Then watch the btmp grow silent since now your server sends the reply packets to its loopback interface where they die.

Short-lived satisfaction
But the pleasure and pats on your back will be short-lived as a new attack from a new IP will commence within the hour. And you can squelch that one, too, but it gets tiresome as you stay up all night keeping up with things.
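
For the record, automating that squelching might look something like this – a hypothetical sketch I never deployed, which null-routes any source IP appearing more than 20 times in btmp:

#!/bin/bash
# hypothetical: null-route any IP with more than 20 failed ssh logins in btmp
last -f /var/log/btmp | awk '{print $3}' | sort | uniq -c | \
while read count ip; do
    if [ "$count" -gt 20 ]; then
        # replies to this source now die on the loopback, as described above
        route add -host "$ip" gw 127.0.0.1 2>/dev/null
    fi
done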

Although it wouldn’t be too difficult to script the recipe above and automate it along those lines, I decided it might be easier still to find a package out there that does the job for me. And I did. It’s called

fail2ban

You can get it from the EPEL repository of CentOS, making it particularly easy to install. Something like:

$ yum install fail2ban

will do the trick.

I like fail2ban because it has the feel of a modern package. It’s written in python for instance and it is still maintained by its author. There are zillions of options which make it daunting at first.

To stop these ssh attacks in their tracks all you need is to create a jail.local file in /etc/fail2ban. Mine looks like this:

# DrJ - enable sshd monitoring
[DEFAULT]
bantime = 3600
# exempt CenturyLink
ignoreip = 76.6.0.0/16  71.48.0.0/16
#
[sshd]
enabled = true

Then reload it:

$ service fail2ban reload

and check it:

$ service fail2ban status

fail2ban-server (pid  28459) is running...
Status
|- Number of jail:      1
`- Jail list:   sshd
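
You can also interrogate the jail directly with the fail2ban-client utility that comes with the package; it lists the currently banned IPs:

$ fail2ban-client status sshd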

And most sweetly of all, wait a day or two and appreciate the marked change in the contents of btmp or secure:

support  ssh:notty    117.4.240.22     Mon Nov  2 07:05    gone - no logout
support  ssh:notty    117.4.240.22     Mon Nov  2 07:05 - 07:05  (00:00)
dff      ssh:notty    62.232.207.210   Mon Nov  2 03:38 - 07:05  (03:26)
dff      ssh:notty    62.232.207.210   Mon Nov  2 03:38 - 03:38  (00:00)
zhangyan ssh:notty    62.232.207.210   Mon Nov  2 03:38 - 03:38  (00:00)
zhangyan ssh:notty    62.232.207.210   Mon Nov  2 03:38 - 03:38  (00:00)
support  ssh:notty    117.4.240.22     Sun Nov  1 22:47 - 03:38  (04:50)
support  ssh:notty    117.4.240.22     Sun Nov  1 22:47 - 22:47  (00:00)
oracle   ssh:notty    180.210.201.106  Sun Nov  1 20:44 - 22:47  (02:03)
oracle   ssh:notty    180.210.201.106  Sun Nov  1 20:44 - 20:44  (00:00)
a        ssh:notty    180.210.201.106  Sun Nov  1 20:44 - 20:44  (00:00)
a        ssh:notty    180.210.201.106  Sun Nov  1 20:44 - 20:44  (00:00)
openerp  ssh:notty    123.212.42.241   Sun Nov  1 20:40 - 20:44  (00:04)
openerp  ssh:notty    123.212.42.241   Sun Nov  1 20:40 - 20:40  (00:00)
dff      ssh:notty    187.210.58.215   Sun Nov  1 20:36 - 20:40  (00:04)
dff      ssh:notty    187.210.58.215   Sun Nov  1 20:36 - 20:36  (00:00)
zhangyan ssh:notty    187.210.58.215   Sun Nov  1 20:36 - 20:36  (00:00)
zhangyan ssh:notty    187.210.58.215   Sun Nov  1 20:35 - 20:36  (00:00)
root     ssh:notty    82.138.1.118     Sun Nov  1 19:57 - 20:35  (00:38)
root     ssh:notty    82.138.1.118     Sun Nov  1 19:49 - 19:57  (00:08)
root     ssh:notty    82.138.1.118     Sun Nov  1 19:49 - 19:49  (00:00)
root     ssh:notty    82.138.1.118     Sun Nov  1 19:49 - 19:49  (00:00)
PlcmSpIp ssh:notty    82.138.1.118     Sun Nov  1 18:42 - 19:49  (01:06)
PlcmSpIp ssh:notty    82.138.1.118     Sun Nov  1 18:42 - 18:42  (00:00)
oracle   ssh:notty    82.138.1.118     Sun Nov  1 18:34 - 18:42  (00:08)
oracle   ssh:notty    82.138.1.118     Sun Nov  1 18:34 - 18:34  (00:00)
karaf    ssh:notty    82.138.1.118     Sun Nov  1 18:18 - 18:34  (00:16)
karaf    ssh:notty    82.138.1.118     Sun Nov  1 18:18 - 18:18  (00:00)
vagrant  ssh:notty    82.138.1.118     Sun Nov  1 17:13 - 18:18  (01:04)
vagrant  ssh:notty    82.138.1.118     Sun Nov  1 17:13 - 17:13  (00:00)
ubnt     ssh:notty    82.138.1.118     Sun Nov  1 17:05 - 17:13  (00:08)
ubnt     ssh:notty    82.138.1.118     Sun Nov  1 17:05 - 17:05  (00:00)
...

The attacks still come, yes, but they are so quickly snuffed out that there is almost no chance of correctly guessing a password – unless the attacker has a couple centuries on their hands!
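
Some back-of-envelope arithmetic backs that up. Assuming fail2ban’s default maxretry of 5 and my one-hour bantime, each attacking IP gets roughly 5 guesses per hour:

# guesses per year for a single banned-after-5-tries IP
$ echo $(( 5 * 24 * 365 ))
43800
# size of an 8-character lowercase+digit password space
$ echo $(( 36**8 ))
2821109907456
# years to cover half that space, even with 1000 attacking IPs
$ echo $(( 36**8 / 2 / (43800 * 1000) ))
32204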

Augment fail2ban with a network nail

Now in my case I had noticed attacks coming from various IPs around 43.229.53.13, and I’m still kind of disturbed by that, even after fail2ban was implemented. Who is that? Arin.net said that range is handled by APNIC, the Asia Pacific NIC. APNIC’s whois (apnic.net) says it is a building in the Mong Kok district of Hong Kong. Now I’ve been to Hong Kong and the Mong Kok district. It’s very expensive real estate, and I think the people who own that subnet have better ways to earn money than trying to pwn AWS servers. So I think probably mainland hackers have a backdoor to this Hong Kong network and are using it as their playground. Just a wild guess. So anyhow, I augmented fail2ban with a network route to prevent all such attacks from that network:

$ route add -net 43.229.0.0/16 gw 127.0.0.1

A few words on fail2ban

How does fail2ban actually work? It manipulates the local firewall, iptables, as needed. So it will activate iptables if you aren’t already running it. Right now my iptables looks clean so I guess fail2ban hasn’t found anything recently to object to:

$ iptables -L

Chain INPUT (policy ACCEPT)
target     prot opt source               destination
f2b-sshd   tcp  --  anywhere             anywhere            multiport dports ssh
 
Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
 
Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
 
Chain f2b-sshd (1 references)
target     prot opt source               destination
RETURN     all  --  anywhere             anywhere

Indeed, checking my messages file the most recent ban was over an hour ago – in the early morning:

Nov  2 03:38:49 ip-10-185-21-116 fail2ban.actions[28459]: NOTICE [sshd] Ban 62.232.207.210

And here is fail2ban doing its job since the log files were rotated at the beginning of the month:

$ cd /var/log; grep Ban messages

Nov  1 04:56:19 ip-10-185-21-116 fail2ban.actions[28459]: NOTICE [sshd] Ban 185.61.136.43
Nov  1 05:49:21 ip-10-185-21-116 fail2ban.actions[28459]: NOTICE [sshd] Ban 5.8.66.78
Nov  1 11:27:53 ip-10-185-21-116 fail2ban.actions[28459]: NOTICE [sshd] Ban 61.147.103.184
Nov  1 11:32:51 ip-10-185-21-116 fail2ban.actions[28459]: NOTICE [sshd] Ban 118.69.135.24
Nov  1 16:57:05 ip-10-185-21-116 fail2ban.actions[28459]: NOTICE [sshd] Ban 162.246.16.55
Nov  1 17:13:17 ip-10-185-21-116 fail2ban.actions[28459]: NOTICE [sshd] Ban 82.138.1.118
Nov  1 18:42:36 ip-10-185-21-116 fail2ban.actions[28459]: NOTICE [sshd] Ban 82.138.1.118
Nov  1 19:57:55 ip-10-185-21-116 fail2ban.actions[28459]: NOTICE [sshd] Ban 82.138.1.118
Nov  1 20:36:05 ip-10-185-21-116 fail2ban.actions[28459]: NOTICE [sshd] Ban 187.210.58.215
Nov  1 20:44:17 ip-10-185-21-116 fail2ban.actions[28459]: NOTICE [sshd] Ban 180.210.201.106
Nov  2 03:38:49 ip-10-185-21-116 fail2ban.actions[28459]: NOTICE [sshd] Ban 62.232.207.210

Almost forgot to mention
How did I free up space so I could still examine btmp? I deleted an older large log file, secure-20151011 which was about 400 MB. No reboot necessary of course. Mysql restarted successfully as did the web servers and I was back in business logging in to my WP site.

August 2017 update
I finally had to reboot my AWS instance after more than three years. I thought about my ssh usage pattern and decided it was really predictable: I either ssh from home or work, both of which have known IPs. And I’m simply tired of seeing all the hack attacks against my server. And I got better with the AWS console out of necessity.
Put it all together and you get a better way to deal with the ssh logins: simply block ssh (tcp port 22) with an AWS security group rule, except from my home and work.
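
With the AWS command line that kind of rule looks roughly like this – a sketch using a made-up security group ID and home IP:

$ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 203.0.113.27/32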

Conclusion
The mystery of the failed WordPress login is examined in great detail here. The case was cracked wide open and the trails that were followed led to discovery of a brute force attempt to gain root access to the hosting server. Methods were implemented to ward off these attacks. An older log file was deleted from /var/log and mysql restarted. WordPress logins are working once again.

References and related info
fail2ban is documented in a wiki of uneven quality at www.fail2ban.org.
Another tool is DenyHosts. One of the ideas behind DenyHosts – its capability to share data – sounds great, but look at this page: http://stats.denyhosts.net/stats.html. “Today’s data” is date-stamped July 11, 2011 – four years ago! It looks like development was suddenly abandoned four years ago – never a good sign for a security tool.


Compiling Apache 2.4 on CentOS

Intro
This is a tale of one thing leading to another. I’ll probably either continue this post or delete it altogether if I find I’m headed down a wrong path.

The details
I suspect that to get better marks for my server’s SSL implementation I probably need apache 2.4. There is an RPM for apache 2.4 but it is almost two years old! So I decided to bite the bullet and compile the darn thing myself. Easier said than done. My current production version is 2.2.15.

Now if you just want to compile a recent version of apache 2.4 then this guide is much, much better than mine: https://jasonpowell42.wordpress.com/2013/04/05/install-apache-2-4-4-on-centos-6-4/. My guide, where I’ve hit just about every conceivable error and powered through, is more for timid folks like me who want to keep their current apache 2.2 running while trying 2.4. In spite of what you read elsewhere this is possible to do, but you need patience and perseverance.

Getting the source is easy enough. Then you configure it:

httpd-2.4.16$ ./configure --prefix=/usr/local/apache24

checking for chosen layout... Apache
checking for working mkdir -p... yes
checking for grep that handles long lines and -e... /bin/grep
checking for egrep... /bin/grep -E
checking build system type... x86_64-unknown-linux-gnu
checking host system type... x86_64-unknown-linux-gnu
checking target system type... x86_64-unknown-linux-gnu
configure:
configure: Configuring Apache Portable Runtime library...
configure:
checking for APR... configure: WARNING: APR version 1.4.0 or later is required, found 1.3.9
configure: WARNING: skipped APR at apr-1-config, version not acceptable
no
configure: error: APR not found.  Please read the documentation.

What version of apr do we have?

$ sudo rpm -qa | grep apr

apr-util-devel-1.3.9-3.el6_0.1.x86_64
apr-util-1.3.9-3.el6_0.1.x86_64
apr-util-ldap-1.3.9-3.el6_0.1.x86_64
apr-1.3.9-5.el6_2.x86_64
apr-devel-1.3.9-5.el6_2.x86_64

Drat. No wonder we’re having trouble. Guess we could compile apr ourselves, but perhaps there’s a suitable version out there somewhere we can simply download?


Warning: this approach to apr shown below was a dead end for me. Further down I show a successful approach.

$ sudo yum search apr

Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: linux.cc.lehigh.edu
 * epel: mirror.us.leaseweb.net
 * extras: mirror.rackspace.com
 * updates: mirror.es.its.nyu.edu
========================================================== N/S Matched: apr ==========================================================
...
httpd24-apr-debuginfo.x86_64 : Debug information for package httpd24-apr
httpd24-apr-devel.x86_64 : APR library development kit
httpd24-apr-util-debuginfo.x86_64 : Debug information for package
                                  : httpd24-apr-util
httpd24-apr-util-devel.x86_64 : APR utility library development kit
httpd24-apr-util-ldap.x86_64 : APR utility library LDAP support
httpd24-apr-util-mysql.x86_64 : APR utility library MySQL DBD driver
httpd24-apr-util-nss.x86_64 : APR utility library NSS crytpo support
httpd24-apr-util-odbc.x86_64 : APR utility library ODBC DBD driver
httpd24-apr-util-openssl.x86_64 : APR utility library OpenSSL crytpo support
httpd24-apr-util-pgsql.x86_64 : APR utility library PostgreSQL DBD driver
httpd24-apr-util-sqlite.x86_64 : APR utility library SQLite DBD driver
httpd24-apr.x86_64 : Apache Portable Runtime library
httpd24-apr-util.x86_64 : Apache Portable Runtime Utility library
...

I singled out the promising-looking ones. After all, it’s apache 2.4 that’s driving the need for this version, so the httpd24 versions of apr should suffice.

So I installed these:

$ sudo yum install httpd24-apr-util.x86_64
$ sudo yum install httpd24-apr-util-devel.x86_64

Now how do we tell the configurator where our new apr package is?

httpd-2.4.16$ ./configure --help | grep -i apr

  --enable-hook-probes    Enable APR hook probes
  --with-included-apr     Use bundled copies of APR/APR-Util
  --with-apr=PATH         prefix for installed APR or the full path to
                             apr-config
  --with-apr-util=PATH    prefix for installed APU or the full path to

The with-apr switch looks promising. Now we guess as to exactly what we should put for the path. Here’s what happens when we guess wrong:

httpd-2.4.16$ ./configure --with-apr=/opt/rh/httpd24/root/usr/lib64 --prefix=/usr/local/apache24

checking for chosen layout... Apache
checking for working mkdir -p... yes
checking for grep that handles long lines and -e... /bin/grep
checking for egrep... /bin/grep -E
checking build system type... x86_64-unknown-linux-gnu
checking host system type... x86_64-unknown-linux-gnu
checking target system type... x86_64-unknown-linux-gnu
configure:
configure: Configuring Apache Portable Runtime library...
configure:
checking for APR... configure: error: the --with-apr parameter is incorrect. It must specify an install prefix, a build directory, or an apr-config file.

I’ll spare you the guesswork. Here is the path correctly specified:

httpd-2.4.16$ ./configure --with-apr=/opt/rh/httpd24/root/usr --prefix=/usr/local/apache24

...
configure: Configuring Apache Portable Runtime Utility library...
configure:
checking for APR-util... yes
checking for gcc... gcc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables...
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking how to run the C preprocessor... gcc -E
checking for gcc option to accept ISO C99... -std=gnu99
checking for pcre-config... false
configure: error: pcre-config for libpcre not found. PCRE is required and available from http://pcre.org/

So we finally got past the apr error and are on to the next one :(. I’ll try to install pcre-devel to see if that helps:

$ sudo yum install pcre-devel.x86_64

Wow! Got lucky that time. That cleared up that error and the configure went all the way through!

Oh, no. It doesn’t compile! It begins to, but it can’t compile exports.c:

httpd-2.4.16$ make

...
gawk -f /usr/local/src/apache24/httpd-2.4.16/build/make_exports.awk `cat export_files` > exports.c
/usr/lib64/apr-1/build/libtool --silent --mode=compile gcc -std=gnu99  -pthread      -DLINUX=2 -D_REENTRANT -D_GNU_SOURCE     -I. -I/usr/local/src/apache24/httpd-2.4.16/os/unix -I/usr/local/src/apache24/httpd-2.4.16/include -I/opt/rh/httpd24/root/usr/include/apr-1 -I/usr/include/apr-1 -I/usr/local/src/apache24/httpd-2.4.16/modules/aaa -I/usr/local/src/apache24/httpd-2.4.16/modules/cache -I/usr/local/src/apache24/httpd-2.4.16/modules/core -I/usr/local/src/apache24/httpd-2.4.16/modules/database -I/usr/local/src/apache24/httpd-2.4.16/modules/filters -I/usr/local/src/apache24/httpd-2.4.16/modules/ldap -I/usr/local/src/apache24/httpd-2.4.16/modules/loggers -I/usr/local/src/apache24/httpd-2.4.16/modules/lua -I/usr/local/src/apache24/httpd-2.4.16/modules/proxy -I/usr/local/src/apache24/httpd-2.4.16/modules/session -I/usr/local/src/apache24/httpd-2.4.16/modules/ssl -I/usr/local/src/apache24/httpd-2.4.16/modules/test -I/usr/local/src/apache24/httpd-2.4.16/server -I/usr/local/src/apache24/httpd-2.4.16/modules/arch/unix -I/usr/local/src/apache24/httpd-2.4.16/modules/dav/main -I/usr/local/src/apache24/httpd-2.4.16/modules/generators -I/usr/local/src/apache24/httpd-2.4.16/modules/mappers  -prefer-non-pic -static -c exports.c && touch exports.lo
exports.c:1244: error: redefinition of ‘ap_hack_apr_allocator_create’
exports.c:198: note: previous definition of ‘ap_hack_apr_allocator_create’ was here
exports.c:1245: error: redefinition of ‘ap_hack_apr_allocator_destroy’
exports.c:199: note: previous definition of ‘ap_hack_apr_allocator_destroy’ was here
exports.c:1246: error: redefinition of ‘ap_hack_apr_allocator_alloc’
exports.c:200: note: previous definition of ‘ap_hack_apr_allocator_alloc’ was here
exports.c:1247: error: redefinition of ‘ap_hack_apr_allocator_free’
exports.c:201: note: previous definition of ‘ap_hack_apr_allocator_free’ was here
exports.c:1248: error: redefinition of ‘ap_hack_apr_allocator_owner_set’

This could be tough! Maybe impossible for me to get past. I’ve never encountered this kind of error. OK. Got it. Not so tough. I had two versions of apr installed – the old one needed by my apache 2.2 and the new one installed as shown above. I didn’t want to completely blow away the old one as I feared that it is dynamically linked by Apache 2.2, so I did the following:

$ cd /usr/lib64; sudo mv apr-1 drjapr-1
– then change to my apache24 root directory and run configure again; then run make

And it went through this time!

Only modules installed
make install, however, only installed the modules, not the httpd binary.

The problem seems related to my original apr libraries. They look like this:

$ sudo rpm -qa | grep ^apr

apr-util-devel-1.3.9-3.el6_0.1.x86_64
apr-util-1.3.9-3.el6_0.1.x86_64
apr-util-ldap-1.3.9-3.el6_0.1.x86_64
apr-1.3.9-5.el6_2.x86_64
apr-devel-1.3.9-5.el6_2.x86_64

I tried to move them all to a temporary directory, but then the compiler could not find libtool, which is normally supplied by apr-devel.

I considered removing apr-devel, but boy there are so many dependencies that my other packages have on it that I did not feel comfortable doing that. PHP, apache2.2 and a whole lot more depend on it.

End of dead end approach to apr


New approach needed
My new approach is to try to use the APR from apache itself by downloading the Unix sources for APR and apr-util from http://apr.apache.org/download.cgi. Yes, this worked best of all. I even put back all the apr files I had moved in the previous failed effort.

It’s not very clear what they mean by unpacking apr and apr-util in srclib. I created symlinks in my srclib directory such that apr -> apr-1.5.2 and apr-util -> apr-util-1.5.4. For the inexperienced, the command format is as in this example:

$ ln -s apr-1.5.2 apr

Of course you first have to download the source tarball to your srclib directory and unpack it:

$ tar zxf apr-1.5.2.tar.gz

It Compiles and Installs
So after all those misfires I finally got a version that compiled and installed in its entirety. That process starts with this configure command:

$ ./configure --with-included-apr --prefix=/usr/local/apache24

Then the usual make and sudo make install.

Modules problem
I inherited a configuration that had a mods-available and a mods-enabled directory, which is how my old apache 2.2 was set up. After tweaking the modules path using the replace command, something like this

$ cd /etc; cp -pr apache2 apache24; cd mods-available
$ sudo replace /usr/lib/apache2 /usr/local/apache24 -- *.load

I still could not start my new server:

Starting apache24: httpd: Syntax error on line 203 of /etc/apache24/apache24.conf: Syntax error on line 1 of /etc/apache24/mods-enabled/authz_default.load: Cannot load /usr/local/apache24/modules/mod_authz_default.so into server: /usr/local/apache24/modules/mod_authz_default.so: cannot open shared object file: No such file or directory
                                                           [FAILED]

I looked at all my configuration files and didn’t see anything that relies on this module, so I deleted the reference to it in mods-enabled.

Starting apache24: httpd: Syntax error on line 203 of /etc/apache24/apache24.conf: Syntax error on line 1 of /etc/apache24/mods-enabled/cgi.load: Cannot load /usr/local/apache24/modules/mod_cgi.so into server: /usr/local/apache24/modules/mod_cgi.so: cannot open shared object file: No such file or directory
                                                           [FAILED]

Now I do like to run CGI programs on occasion so this one can’t be so easily brushed aside. It could be that we should be using mod_cgid.so instead.
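
If mod_cgid is indeed the answer, the fix would presumably be a LoadModule line along these lines – untested at this point in my saga:

LoadModule cgid_module /usr/local/apache24/modules/mod_cgid.so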

Then it’s onto this error:

Starting apache24: httpd: Syntax error on line 203 of /etc/apache24/apache24.conf: Syntax error on line 1 of /etc/apache24/mods-enabled/php5.load: Cannot load /usr/local/apache24/modules/libphp5.so into server: /usr/local/apache24/modules/libphp5.so: cannot open shared object file: No such file or directory
                                                           [FAILED]

I use php, so I may have to investigate this one in some detail. Simply trying to update the link to point to where the old libphp5.so resides under apache2.2 brings up a different kind of error:

Starting apache24: httpd: Syntax error on line 203 of /etc/apache24/apache24.conf: Syntax error on line 1 of /etc/apache24/mods-enabled/php5.load: Cannot load /usr/lib/apache2/modules/libphp5.so into server: /usr/lib/apache2/modules/libphp5.so: undefined symbol: unixd_config
                                                           [FAILED]

Wow. I’m reading various things and it looks like I’ll now have to compile php5 as well. This is getting hairy. This site, although old, seems to explain it most clearly. And of course I’ve got php 5.3, for which you can’t even find source on the php web site, www.php.net.

So I downloaded php5.4.43, which is the oldest one I could find on the php web site!

To configure it I used this long list of options, some of which are determined by my choices of location for my apache24 files:

$ ./configure --with-apxs2=/usr/local/apache24/bin/apxs --with-mysql --prefix=/usr/local/apache24/php5 --with-config-file-path=/usr/local/apache24/php5 --disable-cgi --with-zlib --with-gettext --with-gdbm

2017 update for php
I finally needed to update some WordPress packages and found my only available transport was ftp. My command-line compile options for php5 above left something to be desired: I needed to add curl and openssl, like so:

$ ./configure --with-apxs2=/usr/local/apache24/bin/apxs --with-mysql --prefix=/usr/local/apache24/php5 --with-config-file-path=/usr/local/apache24/php5 --disable-cgi --with-zlib --with-gettext --with-gdbm --with-curl --with-openssl

but I get these errors:

ext/curl/.libs/interface.o: In function `php_curl_option_url':
/usr/local/src/php5/php-5.4.43/ext/curl/interface.c:180: undefined reference to `core_globals'
ext/curl/.libs/interface.o: In function `_php_curl_setopt':
/usr/local/src/php5/php-5.4.43/ext/curl/interface.c:1821: undefined reference to `core_globals'
/usr/local/src/php5/php-5.4.43/ext/curl/interface.c:1804: undefined reference to `core_globals'
ext/curl/.libs/interface.o: In function `curl_progress':
/usr/local/src/php5/php-5.4.43/ext/curl/interface.c:1113: undefined reference to `executor_globals'
ext/curl/.libs/interface.o: In function `curl_write_header':
/usr/local/src/php5/php-5.4.43/ext/curl/interface.c:1264: undefined reference to `executor_globals'
ext/curl/.libs/interface.o: In function `curl_write':
/usr/local/src/php5/php-5.4.43/ext/curl/interface.c:1038: undefined reference to `executor_globals'
ext/curl/.libs/interface.o: In function `curl_read':
/usr/local/src/php5/php-5.4.43/ext/curl/interface.c:1187: undefined reference to `executor_globals'
ext/curl/.libs/streams.o: In function `php_curl_stream_opener':
/usr/local/src/php5/php-5.4.43/ext/curl/streams.c:320: undefined reference to `file_globals'
/usr/local/src/php5/php-5.4.43/ext/curl/streams.c:406: undefined reference to `core_globals'
/usr/local/src/php5/php-5.4.43/ext/curl/streams.c:414: undefined reference to `core_globals'
ext/curl/.libs/streams.o: In function `on_data_available':
/usr/local/src/php5/php-5.4.43/ext/curl/streams.c:68: undefined reference to `executor_globals'
ext/standard/.libs/info.o: In function `php_info_print_request_uri':
/usr/local/src/php5/php-5.4.43/ext/standard/info.c:97: undefined reference to `sapi_globals'
ext/standard/.libs/info.o: In function `php_print_gpcse_array':
/usr/local/src/php5/php-5.4.43/ext/standard/info.c:213: undefined reference to `executor_globals'
ext/standard/.libs/info.o: In function `php_print_info':
/usr/local/src/php5/php-5.4.43/ext/standard/info.c:918: undefined reference to `executor_globals'
collect2: ld returned 1 exit status
make: *** [sapi/cli/php] Error 1

Here the problem seems to be that since I had already compiled php5 and left it around, it was using the old parts.

You need to do a make clean first! Then it compiles.

Now I’m down to this apache error:

$ sudo service apache24 start

Starting apache24: AH00526: Syntax error on line 55 of /etc/apache24/apache24.conf:
Invalid command 'LockFile', perhaps misspelled or defined by a module not included in the server configuration
                                                           [FAILED]

I’m going to just try to comment out that pesky LockFile directive. I’ve found this apache page which is helpful for this upgrade: http://httpd.apache.org/docs/trunk/upgrading.html. OK, next error:

Starting apache24: AH00526: Syntax error on line 145 of /etc/apache24/apache24.conf:
Invalid command 'User', perhaps misspelled or defined by a module not included in the server configuration
                                                           [FAILED]

Here the advice is to load the module mod_unixd. I don’t even have anything like that, so I’m looking into it now. OK. It’s in apache24/modules, so I just need to load it in. Next error:

Starting apache24: AH00526: Syntax error on line 161 of /etc/apache24/apache24.conf:
Invalid command 'Order', perhaps misspelled or defined by a module not included in the server configuration

Wow. That comes from this pretty standard block:

<Files ~ "^\.ht">
    Order allow,deny
    Deny from all
    Satisfy all
</Files>

This is a helpful document: http://httpd.apache.org/docs/trunk/upgrading.html. So at their recommendation I replaced all that with a

Require all denied

That leads to the next error:

Starting apache24: AH00526: Syntax error on line 166 of /etc/apache24/apache24.conf:
Invalid command 'Require', perhaps misspelled or defined by a module not included in the server configuration

It means Require is not even found. I needed to load some new modules, namely authz_core and unixd:

LoadModule authz_core_module /usr/local/apache24/modules/mod_authz_core.so
LoadModule unixd_module /usr/local/apache24/modules/mod_unixd.so

Next error:

AH00526: Syntax error on line 20 of /etc/apache24/mods-enabled/alias.conf:
Invalid command 'Order', perhaps misspelled or defined by a module not included in the server configuration

So some of my old conf files that I copied over use the old syntax. The alias.conf file looked like this:

Alias /icons/ "/var/www/icons/"
 
<Directory "/var/www/icons">
    Options Indexes MultiViews
    AllowOverride None
    Order allow,deny
    Allow from all
</Directory>

Again looking at http://httpd.apache.org/docs/trunk/upgrading.html, they suggest replacing the Order… line and the one following it with:

Require all granted

Next error:

AH00526: Syntax error on line 3 of /etc/apache24/mods-enabled/deflate.conf:
Invalid command 'AddOutputFilterByType', perhaps misspelled or defined by a module not included in the server configuration
                                                           [FAILED]

But I was already loading the deflate module, which defines AddOutputFilterByType. What I learned is that in apache 2.4 you also need to load mod_filter.

And the next error please:

AH00526: Syntax error on line 43 of /etc/apache24/mods-enabled/ssl.conf:
SSLSessionCache: 'shmcb' session cache not supported (known names: ). Maybe you need to load the appropriate socache module (mod_socache_shmcb?).

That’s complaining about this line:

SSLSessionCache        shmcb:${APACHE_RUN_DIR}/ssl_scache(512000)

The standard advice for this error is to uncomment this line:

LoadModule socache_shmcb_module modules/mod_socache_shmcb.so

But I don’t have that module!

I guess I chose the wrong options when doing the initial ./configure. See the references for a proper guide that lists some good options.

I’m now trying to configure like this:

$ ./configure --with-included-apr --prefix=/usr/local/apache24 --enable-php5 --enable-so --enable-ssl --with-mpm=prefork

Actually I don’t know if I needed all those options, such as enable-ssl. The main thing was that my apache 2.2 mods-available directory didn’t have any mention of mod_socache_shmcb.so. My apache 2.4 built with these config options definitely does, so I just need a LoadModule statement like this:

LoadModule socache_shmcb_module /usr/local/apache24/modules/mod_socache_shmcb.so

Well, we’ve moved six lines down into that config file. I guess that’s progress! Because now we’ve made it all the way to line 49:

AH00526: Syntax error on line 49 of /etc/apache24/mods-enabled/ssl.conf:
Invalid command 'SSLMutex', perhaps misspelled or defined by a module not included in the server configuration
                                                           [FAILED]

Even apache’s upgrade guide documents this error. It’s caused by a conf file line that looks something like this:

SSLMutex  file:${APACHE_RUN_DIR}/ssl_mutex

and they say – I’m paraphrasing here – just try to comment it out and hope for the best.

Next error:

AH00526: Syntax error on line 9 of /etc/apache24/mods-enabled/status.conf:
Invalid command 'Order', perhaps misspelled or defined by a module not included in the server configuration
                                                           [FAILED]

Yeah status.conf has

    Order deny,allow
    Deny from all
    Allow from 127.0.0.1 ::1

We’ll try to replace that with this:

Require ip 127.0.0.1 ::1

Now it runs through all the configuration OK but doesn’t actually start. I had set up an init.d script and I wasn’t going to go into this but I may have to:

$ sudo service apache24 start

httpd (pid 30896) already running

Remember I am trying to run this while still running the old apache 2.2 server. Process 30896 is the old apache 2.2:

root     30896     1  0 10:05 ?        00:00:00 /usr/sbin/httpd -d /etc/apache2 -f apache2.conf

This results from the byzantine way I set up to launch apache. There is a /etc/sysconfig/apache24 which doesn’t do much other than import environment variable definitions from /etc/apache24/envvars, except I had forgotten to update that path so it pointed to the old /etc/apache2/envvars.

Now it starts! But not without complaint:

Starting apache24: [Thu Aug 06 11:18:04.711658 2015] [core:warn] [pid 22911] AH00117: Ignoring deprecated use of DefaultType in line 178 of /etc/apache24/apache24.conf.
                                                              [  OK  ]

That stems from this line which tries to establish a default MIME type:

DefaultType text/plain

I also notice I cannot really get the status of my new web server:

$ sudo service apache24 status

httpd dead but subsys locked

So stopping/starting doesn’t really work either once it’s started.

What I found is that it seems happier if I have a line in /etc/sysconfig/apache24 with an explicit PIDFILE defined – I use PIDFILE=/var/run/apache24.pid – using the same filepath as is mentioned in the apache24.conf file, where I have PidFile ${APACHE_PID_FILE}, APACHE_PID_FILE being taken from my envvars with the value /var/run/apache24.pid. OK, my setup is very convoluted and probably unique, but the problem is common on CentOS. The main takeaway is to have a consistent reference to the pidfile filepath in /etc/sysconfig/httpd (or whatever you are calling it) and in your main config file httpd.conf (or whatever you are calling it).
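
In concrete terms the two files should agree along these lines – simplified from my actual setup:

# /etc/sysconfig/apache24
PIDFILE=/var/run/apache24.pid

# /etc/apache24/apache24.conf (APACHE_PID_FILE comes from envvars
# and also has the value /var/run/apache24.pid)
PidFile ${APACHE_PID_FILE}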

Home page test (I’m running on port 1443 to avoid conflict with my production server):

$ curl -i -k https://127.0.0.1:1443/

HTTP/1.1 301 Moved Permanently
Date: Wed, 05 Aug 2015 18:33:19 GMT
Server: Apache/2
X-Powered-By: PHP/5.4.43
Location: https://drjohnstechtalk.com/blog/
Content-Length: 2
Content-Type: text/html

So that looks pretty good.

A simple php test:

$ curl -i -k https://127.0.0.1:1443/phpinfo.php

Long output. Basically looks right.

OK. What about the opening WordPress page?

$ curl -i -H 'Host: drjohnstechtalk.com' -k https://127.0.0.1:1443/blog/

Yes. Big long output. Looks good. I don’t think this proves that the mySQL/php interface is really working however as that page could be cached since I use a pagecache plugin.

Next test I’d like to run is the Qualys SSLLabs test, but it won’t run on port 1443. Maybe the DigiCERT test will. Yes, it does allow it. And I no longer have the BREACH vulnerability.

A few words about a BREACH test
This prompted me to look at why Digicert felt I was vulnerable to BREACH in the first place. I think it’s related to serving compressed objects. So I thought of this simple test. Against my apache 2.2 I can run a query like this:

$ curl -i -k --compressed https://127.0.0.1:1443/blog/ | head -10

Date: Fri, 07 Aug 2015 14:02:48 GMT
Server: Apache/2
X-Powered-By: PHP/5.3.3
Vary: Accept-Encoding
Content-Encoding: gzip
Content-Length: 30414
Content-Type: text/html
 
<!DOCTYPE html>
<html lang="en-US">

See that Content-Encoding: gzip? Yet the actual content that begins <!DOCTYPE html… is in plain text and plainly not compressed. So I really wasn’t vulnerable to BREACH at all. The server claimed to be compressing the pages it was sending to the browser, but in reality it wasn’t. For apache 2.4 the behaviour is basically the same, except there is no response header Content-Encoding: gzip returned. This is why it passes Digicert’s BREACH test with flying colors.

Moving on
Next test. Swap apache 2.2 for apache 2.4 by exchanging listening ports 443 and 1443. Then do the SSLlabs test. I now get an A. Well, actually I get an A both before and after the swap.

WordPress test
I’m writing this using my new shiny apache 2.4. With regards to WordPress it all seems to feel the same as before. One small thing I’ve noticed is that I don’t get WordPress news any longer:

RSS Error: WP HTTP Error: There are no HTTP transports available which can complete the requested request.

Hopefully there’s nothing more serious.

php.ini missing
If you blindly copied my config options for compiling php then sooner or later (much later in my case) you’ll realize that you have no valid php.ini file! You will see an error like this when the date() function is called:

Warning: date(): It is not safe to rely on the system's timezone settings. You are *required* to use the date.timezone setting or the date_default_timezone_set() function. In case you used any of those methods and you are still getting this warning, you most likely misspelled the timezone identifier. We selected the timezone 'UTC' for now, but please set date.timezone to select your timezone. in

So because I used the config option --with-config-file-path=/usr/local/apache24/php5 I needed to put a php.ini file in that directory and only that directory. For now its contents are:

; DrJ, inspired by http://stackoverflow.com/questions/2184513/php-change-the-maximum-upload-file-size - 12/31/14
; Maximum allowed size for uploaded files.
upload_max_filesize = 10M
 
; Must be greater than or equal to upload_max_filesize
post_max_size = 10M
 
; You'll need this to avoid errors with the Date function
; http://stackoverflow.com/questions/16765158/date-it-is-not-safe-to-rely-on-the-systems-timezone-settings
[Date]
; Defines the default timezone used by the date functions
; http://php.net/date.timezone
date.timezone = America/New_York

Appendix A
mod_ssl error after patching

I have an apache 2.2.21 server on a SLES server. After a system patch (I guess) I realized the apache web server wouldn’t start. It shows this error:

> sudo service apache201 start

Starting httpd (/usr/local/apache2/bin/httpd) httpd: Syntax error on line 54 of /usr/local/apache201/conf/httpd.conf: Cannot load /usr/local/apache201/modules/mod_ssl.so into server: /usr/local/apache201/modules/mod_ssl.so: undefined symbol: ap_map_http_request_error

I had been playing fast and loose and borrowed the mod_ssl.so from some other system, I guess. I forget which. In other words, I dropped a mod_ssl.so by hand into the directory /usr/lib64/apache2-prefork. I was using those system-supplied modules paired with my compiled apache. All was fine until that patch. So I found another mod_ssl.so from a different system and tried that one. It worked. Whew. These were both SLES 11 SP 4 systems. The older one (with the mod_ssl.so that still works) is dated April 18th, 2017; the one with the broken mod_ssl.so, December 29th, 2017. That’s from a uname -a.

References and related articles
A proper guide to installing apache 2.4 on CentOS is https://jasonpowell42.wordpress.com/2013/04/05/install-apache-2-4-4-on-centos-6-4/

Some upgrade issues are covered by apache’s own guide: http://httpd.apache.org/docs/2.4/upgrading.html

Scaling up apache to handle more than a couple hundred simultaneous requests is described in this blog post.

The DigiCERT certificate inspector tool, which is what I was referring to in this post when it comes to scanning for BREACH vulnerabilities, is here.


drjohnstechtalk.com is now an encrypted web site

Intro
I don’t overtly chase search engine rankings. I’m comfortable being the 2,000,000th most visited site on the Internet, or something like that according to alexa. But I still take pride in what I’m producing here. So when I read a couple weeks ago that Google would be boosting the search rank of sites which use encryption, I felt I had to act. For me it is equally a matter of showing that I know how to do it and a chance to write another blog posting which may help others.

Very, very few people have my situation, which is a self-hosted web site, but still there may be snippets of advice which may apply to other situations.

I pulled off the switch to using https instead of http last night. The details of how I did it are below.

The details
Actually there was nothing earth-shattering. It was a simple matter of applying stuff I already know how to do and putting it all together. Of course I made some glitches along the way, but I also resolved them.

First the CERT
I was already running an SSL web server virtual server at https://drjohnstechtalk.com/, but it was using a self-signed certificate. Being knowledgeable about certificates, I knew the first and easiest thing to do was to get a certificate (cert) issued by a recognized certificate authority (CA). Since my domain was bought from GoDaddy I decided to get my SSL certificate from them as well. It’s $69.99 for a one-year cert. Strangely, there is no economy of scale, so a two-year cert costs exactly twice as much. I normally am a strong believer in two-year certs simply to avoid the hassle of renewing, etc., but since I am out of practice and feared I could throw my money away if I messed up the cert, I went with a one-year cert this time. It turns out I had nothing to fear…

Paying for a certificate at GoDaddy is easy. Actually figuring out how to get your certificate issued by them? Not so much. But I figured out where to go on their web site and managed to do it.

Before the CERT, the CSR
Let’s back up. Remember I’m self-hosted? I love being the boss and having that Linux prompt on my CentOS VM. So before I could buy a legit cert I needed to generate a private key and certificate signing request (CSR), which I did using openssl, having no other fancy tools available and being a command line lover.

To generate the private key and CSR with one openssl command do this:

$ openssl req -new -nodes -out myreq.csr

It prompts you for field values. Here’s how that dialog went:

Generating a 2048 bit RSA private key
.............+++
..............................+++
writing new private key to 'privkey.pem'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:US
State or Province Name (full name) []:New Jersey
Locality Name (eg, city) [Default City]:
Organization Name (eg, company) [Default Company Ltd]:drjohnstechtalk.com
Organizational Unit Name (eg, section) []:
Common Name (eg, your name or your server's hostname) []:drjohnstechtalk.com
Email Address []:
 
Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:

and the files it created:

$ ls -ltr|tail -2

-rw-rw-r-- 1 john john     1704 Aug 23 09:52 privkey.pem
-rw-rw-r-- 1 john john     1021 Aug 23 09:52 myreq.csr

Before shipping it off to a CA you really ought to examine the CSR for accuracy. Here’s how:

$ openssl req -text -in myreq.csr

Certificate Request:
    Data:
        Version: 0 (0x0)
        Subject: C=US, ST=New Jersey, L=Default City, O=drjohnstechtalk.com, CN=drjohnstechtalk.com
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    00:e9:04:ab:7e:e1:c1:87:44:fb:fe:09:e1:8d:e5:
                    29:1c:cb:b5:e8:d0:cc:f4:89:67:23:ab:e5:e7:a6:
                    ...

What are we looking for, anyways? Well, the modulus should be the same as it is for the private key. To list the modulus of your private key:

$ openssl rsa -text -in privkey.pem|more
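
Rather than eyeballing pages of hex, a handy trick – not in my original notes – is to hash the modulus from each file and compare the digests:

$ openssl req -noout -modulus -in myreq.csr | openssl md5
$ openssl rsa -noout -modulus -in privkey.pem | openssl md5

If the two digests match, the CSR was generated from that private key.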

The other thing I am looking for is the common name (CN), which has to exactly match the DNS name that is used to access the secure site.

I’m not pleased about the Default City, but I didn’t want to provide my actual city. We’ll see it doesn’t matter in the end.

For some CAs the Organization field also matters a great deal. Since I am a private individual I decided to use the CN as my organization, and that was accepted by GoDaddy. So probably its value also doesn’t matter.

The other critical thing is the length of the public key, 2048 bits. These days all keys should be 2048 bits. Some years ago 1024 bits was perfectly fine. I’m not sure, but maybe older openssl releases would have created a 1024-bit key, so that’s why you’ll want to watch out for that.
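
To take the guesswork out of it you can force the key length explicitly – a variant of the earlier command, not what I originally ran:

$ openssl req -new -newkey rsa:2048 -nodes -keyout privkey.pem -out myreq.csr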

Examine the CERT
GoDaddy issued the certificate with some random alphanumeric filename. I renamed it to something more suitable, drjohnstechtalk.crt. Let’s examine it:

$ openssl x509 -text -in drjohnstechtalk.crt|more

Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            27:ab:99:79:cb:55:9f
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=US, ST=Arizona, L=Scottsdale, O=GoDaddy.com, Inc., OU=http://certs.godaddy.com/repository/, CN=Go Daddy Secure Certificate Authority - G2
        Validity
            Not Before: Aug 21 00:34:01 2014 GMT
            Not After : Aug 21 00:34:01 2015 GMT
        Subject: OU=Domain Control Validated, CN=drjohnstechtalk.com
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus:
                    00:e9:04:ab:7e:e1:c1:87:44:fb:fe:09:e1:8d:e5:
                    29:1c:cb:b5:e8:d0:cc:f4:89:67:23:ab:e5:e7:a6:
                     ...

So we’re checking that common name, key length, and what if any organization they used (in the Subject field). Also the modulus should match up. Note that they “cheaped out” and did not provide www.drjohnstechtalk.com as an explicit alternate name! In my opinion this should mean that if someone enters the URL https://www.drjohnstechtalk.com/ they should get a certificate name mismatch error. In practice this does not seem to happen – I’m not sure why. Probably the browsers are somewhat forgiving.
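
A quick way to check which alternate names a cert actually carries:

$ openssl x509 -noout -text -in drjohnstechtalk.crt | grep -A1 'Subject Alternative Name'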

The apache side of the house
I don’t know if this is going to make any sense, but here goes. To begin, I had a bare-bones secure virtual server that did essentially nothing. So I modified it to be an apache redirect factory and to use my shiny new legit certificate. Once I had that working I planned to swap roles and filenames with my regular configuration file, drjohns.conf.

Objective: don’t break existing links
Why the need for a redirect factory? This, I felt as a matter of pride, is important: it permits all the current links to my site, which are all http, not https, to continue to work! That’s a good thing, right? Now most of those links are in search engines, which constantly comb my pages, so I’m sure they would get updated over time even if I didn’t bother, but I just felt better knowing that no links would be broken by the switch to https. And it shows I know what I’m doing!

The secure server configuration file on my server is in /etc/apache2/sites-enabled/drjohns.secure.conf. It’s an apache v 2.2 server. I put all the relevant key/cert/intermediate cert files in /etc/apache2/certs. The private key’s permissions were set to 600. The relevant apache configuration directives for using this CERT along with the GoDaddy intermediate certificates are:

 
    SSLEngine on
    SSLCertificateFile /etc/apache2/certs/drjohnstechtalk.crt
    SSLCertificateKeyFile /etc/apache2/certs/drjohnstechtalk.key
    SSLCertificateChainFile /etc/apache2/certs/gd_bundle-g2-g1.crt

I initially didn’t include the intermediate certs (chain file), which in my experience should have caused issues. Once again I didn’t observe any, but experience says the chain file should be present.
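One quick sanity check – assuming the GoDaddy bundle sits alongside the CERT as above – is to ask openssl to verify the CERT against it; a successful run answers with OK:

$ openssl verify -CAfile /etc/apache2/certs/gd_bundle-g2-g1.crt /etc/apache2/certs/drjohnstechtalk.crt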

The redirect factory setup
For the redirect testing I referred to my own blog posting (which I think is underappreciated for whatever reason!) and have these lines:

# I really don't think this does anything other than chase away a scary warning in the error log...
        RewriteLock ${APACHE_LOCK_DIR}/rewrite_lock
<VirtualHost *:80>
 
        ServerAdmin webmaster@localhost
        ServerName      www.drjohnstechtalk.com
        ServerAlias     drjohnstechtalk.com
        ServerAlias     johnstechtalk.com
        ServerAlias     www.johnstechtalk.com
        ServerAlias     vmanswer.com
        ServerAlias     www.vmanswer.com
# Inspired by the dreadful documentation on http://httpd.apache.org/docs/2.0/mod/mod_rewrite.html
        RewriteEngine on
        RewriteMap  redirectMap prg:redirect.pl
        RewriteCond ${redirectMap:%{HTTP_HOST}%{REQUEST_URI}} ^(.+)$
# %N are backreferences to RewriteCond matches, and $N are backreferences to RewriteRule matches
        RewriteRule ^/.* %1 [R=301,L]
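The redirect.pl program itself is covered in that other blog post. Just to give the flavor of a prg: RewriteMap – this bash stand-in is illustrative only, not the actual script – the program runs forever, reading one lookup key per line on stdin (here HTTP_HOST plus REQUEST_URI, per the RewriteCond above) and answering each with exactly one line on stdout, the rewrite target:

#!/bin/bash
# illustrative stand-in for redirect.pl - not the actual script
# a prg: RewriteMap answers each stdin line with exactly one stdout line;
# bash's builtin echo writes each answer immediately, which the map requires
while read key; do
    # key looks like, e.g., www.vmanswer.com/some/path
    uri=/${key#*/}
    # send every alias to the canonical https site, preserving the URI
    echo "https://drjohnstechtalk.com$uri"
done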

Pages look funny after the switch to SSL
One of the first casualties after the switch to SSL was that my pages looked funny. I know from general experience that this can happen if there are hard-wired links to http URLs, and that is what I observed in the page source. In particular my WP-Syntax plugin was now bleeding verbatim entries into the columns to the right if the PRE text contained long lines. Not pretty. The page source mostly had https includes, but in one place it did not. It had:

<link rel="stylesheet" href="http://drjohnstechtalk.com/blog/wp-content/plugins/wp-syntax/wp-syntax.css"

I puzzled over where that originated and I had a few ideas which didn’t turn out so well. For instance you’d think inserting this into wp-config.php would have worked:

define( 'WP_CONTENT_URL','https://drjohnstechtalk.com/blog/wp-content/');

But it had absolutely no effect. Finally I did an RTFM – the M being http://codex.wordpress.org/Editing_wp-config.php, which is mentioned in wp-config.php – and learned that the siteurl is set in the administration settings in the GUI: Settings | General, WordPress Address (URL) and Site Address (URL). I changed these to https://drjohnstechtalk.com/blog and bingo, my plugins began to work properly again!

What might go wrong when turning on SSL
In another context I have seen both of the following errors, which I feel are poorly documented on the Internet, so I wish to mention them here since they are closely related to the topic of this blog post.

SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol:s23_clnt.c:601

and

curl: (35) error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol

I generated the first error in the process of trying to look at the SSL web site using openssl. What I do to test that certificates are being properly presented is:

$ openssl s_client -showcerts -connect localhost:443

And the second error mentioned above I generated trying to use curl to do something similar:

$ curl -i -k https://localhost/

The solution? Well, I began to suspect that I wasn’t running SSL at all, so I tested with curl, assuming the server was listening on TCP port 443 but speaking regular HTTP:

$ curl -i http://localhost:443/

Yup. That worked just fine, producing all the usual HTTP response headers plus the content of my home page. So that means I wasn’t running SSL at all.

This virtual host being from a template I inherited and one I didn’t fully understand, I decided to just junk the most suspicious parts of the vhost configuration, which in my case were:

<IfDefine SSL>
<IfDefine !NOSSL>
...
</IfDefine>
</IfDefine>

and comment those guys out, giving,

#<IfDefine SSL>
#<IfDefine !NOSSL>
...
#</IfDefine>
#</IfDefine>

That worked! After a restart I really was running SSL.

Making it stronger
I did not do this immediately, but after the POODLE vulnerability came out and I ran some tests, I realized I should have explicitly chosen a cipher suite in my apache server to make the encryption sufficiently strong. This section of my working with ciphers post shows some good settings.
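To give the flavor of what’s involved – see that post for the values I actually settled on – the directives look something like this, disabling the broken SSLv2/SSLv3 protocols and weak ciphers:

    SSLProtocol all -SSLv2 -SSLv3
    SSLCipherSuite HIGH:!aNULL:!MD5
    SSLHonorCipherOrder on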

Mixed content
I forgot about something in my delight at running my own SSL web server – not all content was coming to the browser as https, and Firefox and Internet Explorer began to complain as they grew more security-conscious over the months. After some investigation I found that I had a redirect for favicon.ico to the WordPress favicon.ico – but to their HTTP version. I changed it to their secure version, https://s2.wp.com/i/favicon.ico, and all was good!

I never use the Firefox debugging tools, so I got lucky. Wanting to find out more about this mixed content, I took a guess and clicked on Tools|Web developer|Web console. My lucky break was that it immediately told me the element that was still HTTP amidst my HTTPS web page. Knowing that, it was a cinch to fix, as mentioned above.

Conclusion
Good-ole search engine optimization (SEO) has prodded us to make the leap to run SSL. In this posting we showed how we did that while preserving all the links that may be floating out there on the Internet by using our redirect factory.

References
Having an apache instance dedicated to redirects is described in this article.
Some common sense steps to protect your apache server are described here.
Some other openssl commands besides the ones used here are described here.
Choosing an appropriate cipher suite and preventing use of the vulnerable SSLv2/3 is described in this post.
I read about Google’s plans to encrypt the web in this Naked Security article.

Categories
Admin CentOS

CentOS 6.0 VM ran out of memory

Intro
I’m just creating this post to have documented what happened to me personally. I have a CentOS 6.0 image with Amazon AWS. It was based on a minimal image, which I purposefully selected so it wouldn’t be loaded down with junky daemons. Ran fine for a year, then one day nothing!

The details
I think it was up for 400 consecutive days! That’s not necessarily a good idea, but those are the facts. Then over the weekend I could neither ssh nor access its web server. Oh, oh. You’ve got really limited options at that point with a cloud server. I stopped it from the AWS console and then started it. No joy. More drastic action – really the last thing I can do short of abandoning the whole image: Terminate. After some breath-holding moments, and after I remembered to re-associate the elastic IP, it came up. Whew! Now it came up as CentOS v 6.4, which I don’t fully understand, but it works.

I checked the /var/log/messages file for clues as to what happened. There actually were some pretty good clues. Here is the last of many, many similar lines I observed:

...
Nov 29 08:39:23 ip-10-114-206-104 kernel: Out of memory: kill process 29076 (httpd) score 107231 or a child
Nov 29 08:39:23 ip-10-114-206-104 kernel: Killed process 31306 (httpd) vsz:264320kB, anon-rss:30852kB, file-rss:312kB
Nov 29 08:39:23 ip-10-114-206-104 kernel: httpd invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0
Nov 29 08:39:23 ip-10-114-206-104 kernel: httpd cpuset=/ mems_allowed=0
Nov 29 08:39:23 ip-10-114-206-104 kernel: Pid: 31506, comm: httpd Not tainted 2.6.32-131.17.1.el6.x86_64 #1
Nov 29 08:39:23 ip-10-114-206-104 kernel: Call Trace:
Nov 29 08:39:23 ip-10-114-206-104 kernel: [<ffffffff810c00f1>] ? cpuset_print_task_mems_allowed+0x91/0xb0
Nov 29 08:39:23 ip-10-114-206-104 kernel: [<ffffffff811102bb>] ? oom_kill_process+0xcb/0x2e0
Nov 29 08:39:23 ip-10-114-206-104 kernel: [<ffffffff81110880>] ? select_bad_process+0xd0/0x110
Nov 29 08:39:23 ip-10-114-206-104 kernel: [<ffffffff81110918>] ? __out_of_memory+0x58/0xc0
Nov 29 08:39:23 ip-10-114-206-104 kernel: [<ffffffff81110b19>] ? out_of_memory+0x199/0x210
Nov 29 08:39:23 ip-10-114-206-104 kernel: [<ffffffff81120262>] ? __alloc_pages_nodemask+0x812/0x8b0
Nov 29 08:39:23 ip-10-114-206-104 kernel: [<ffffffff8115473a>] ? alloc_pages_current+0xaa/0x110
Nov 29 08:39:23 ip-10-114-206-104 kernel: [<ffffffff8110d717>] ? __page_cache_alloc+0x87/0x90
Nov 29 08:39:23 ip-10-114-206-104 kernel: [<ffffffff81122bab>] ? __do_page_cache_readahead+0xdb/0x210
Nov 29 08:39:23 ip-10-114-206-104 kernel: [<ffffffff81122d01>] ? ra_submit+0x21/0x30
Nov 29 08:39:23 ip-10-114-206-104 kernel: [<ffffffff8110e9e3>] ? filemap_fault+0x4c3/0x500
Nov 29 08:39:23 ip-10-114-206-104 kernel: [<ffffffff810061af>] ? xen_set_pte_at+0xaf/0x170
Nov 29 08:39:23 ip-10-114-206-104 kernel: [<ffffffff81137204>] ? __do_fault+0x54/0x510
Nov 29 08:39:23 ip-10-114-206-104 kernel: [<ffffffff811377b7>] ? handle_pte_fault+0xf7/0xb50
Nov 29 08:39:23 ip-10-114-206-104 kernel: [<ffffffff81007c4f>] ? xen_restore_fl_direct_end+0x0/0x1
Nov 29 08:39:23 ip-10-114-206-104 kernel: [<ffffffff81006d4b>] ? xen_set_pmd_hyper+0x8b/0xc0
...

So it ran out of memory. I guess there’s a memory leak somewhere, although another posting I saw hinted at a flaw in CentOS under paravirtualization. I have no idea.

The interesting thing to me is that the error was ongoing for days. So had I been watching for it, I could have been proactive in rebooting my server.
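Even a dumb daily cron job grepping the messages file would have provided days of warning. A minimal sketch – assuming a working mail setup and a hypothetical recipient address, neither of which I’m vouching for:

#!/bin/sh
# send a warning if the oom-killer has been active - minimal sketch
grep -q 'invoked oom-killer' /var/log/messages && \
  echo "oom-killer activity on $(hostname)" | mail -s 'OOM warning' drj@localhost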

Conclusion
My AWS-hosted CentOS VM gave me a scare when it stopped responding. I had to terminate it. An out-of-memory error in the kernel seems to be the proximate cause.

Categories
Admin CentOS Security

Example using iptables, the CentOS firewall

Intro
This document is mostly for my own purposes. I don’t even think this is the best way to run the firewall; it’s just the way I happened to adopt.

Background
My friends tell me ipchains was good software. Unfortunately the guy who wrote iptables, which emulates the features of ipchains, wasn’t at that same skill level, and the implementation shows it. I know I struggled with it a bit.

Motivation
I decided to run a local firewall on my HP SiteScope server because a serious security issue was found with our version’s HTTP server such that it was advisable to lock it down to only those administrators who need access to the GUI.

The details
This was actually implemented on Redhat v 5.6, though I don’t suppose it would be much different on CentOS.

December 2013 update
I also tried this same script provided below on a Redhat 6.4 OS – it worked the exact same way without modification.

The main thing is that I maintain a file with the “firewall rules.” I call it iptables. So I need to remember from invocation to invocation where I store this master file. Here are the contents:

#!/bin/sh
# DrJ, 9/2012
# inspired by http://wiki.centos.org/HowTos/Network/IPTables
# flush all previous rules
export PATH=$PATH:/sbin
iptables -F
#
# our main rules here:
#
# Accept tcp packets on destination port 8080 (HP SiteScope) from select individuals
# DrJ: office, home, vpn
iptables -A INPUT -p tcp -s 192.168.76.56 --dport 8080 -j ACCEPT
iptables -A INPUT -p tcp -s 10.2.6.107 --dport 8080 -j ACCEPT
iptables -A INPUT -p tcp -s 10.3.13.138 --dport 8080 -j ACCEPT
#
# the server itself
iptables -A INPUT -p tcp -s 127.0.0.1 --dport 8080 -j ACCEPT
#
# set dflt policies
# for logging see http://gr8idea.info/os/tutorials/security/iptables5.html
#iptables -A INPUT -j LOG --log-level 4 --log-prefix 'InDrop '
# this is a killer!
#iptables -P INPUT DROP
# just drop what is really the problem...
iptables -A INPUT -p tcp --dport 8080 -j DROP
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
#
# access for loopback
iptables -A INPUT -i lo -j ACCEPT
#
# Accept packets belonging to established and related connections
#
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
#
# Save settings
#
/sbin/service iptables save
#
# List rules
#
iptables -L -v

Of course you have to have iptables running. I do a

$ sudo service iptables status

to verify that. If its status is “not running,” start it up.
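On CentOS that means the service command again, plus a chkconfig to make sure the firewall comes back after a reboot:

$ sudo service iptables start
$ sudo chkconfig iptables on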

As mentioned in the comments I tried to be more strict with the rules since I’m used to running firewalls with a DENY All rule, but it just didn’t work out well for me. I lost patience and gave up on that and settled for dropping all traffic to TCP port 8080 except the explicitly permitted hosts, which is good enough for our immediate needs.

Conclusion
This is a simple example of a way to use iptables. It’s probably not the best example, but it’s what I used so it’s better than nothing.

Categories
Admin CentOS Linux Raspberry Pi

A few RPM and YUM commands and equivalent on Raspberry Pi

Intro
This post adds nothing to the knowledge out there and readily available on the Internet. I just got tired of looking up elsewhere the few useful rpm and yum commands that I employ. Here’s how I installed a missing binary on one system when I had a similar system that had it.

RPM is the Redhat Package Manager. It is also used on Suse Linux (SLES). A much better resource than this page (Hey, we can’t all be experts!) is http://www.idevelopment.info/data/Unix/Linux/LINUX_RPMCommands.shtml

List all installed packages:

$ rpm -qa
dmidecode-2.11-2.el6.x86_64
libXcursor-1.1.10-2.el6.x86_64
basesystem-10.0-4.el6.noarch
plymouth-core-libs-0.8.3-24.el6.centos.x86_64
libXrandr-1.3.0-4.el6.x86_64
ncurses-base-5.7-3.20090208.el6.x86_64
python-ethtool-0.6-1.el6.x86_64

Same as above – list all installed packages – but list the most recently installed packages first (wish I had discovered this command sooner!):

$ rpm -qa --last

libcurl-devel-7.19.7-35.el6                   Mon Apr  1 20:00:47 2013
curl-7.19.7-35.el6                            Mon Apr  1 20:00:47 2013
libidn-devel-1.18-2.el6                       Mon Apr  1 20:00:46 2013
libcurl-7.19.7-35.el6                         Mon Apr  1 20:00:46 2013
libssh2-1.4.2-1.el6                           Mon Apr  1 20:00:45 2013
ncurses-static-5.7-3.20090208.el6             Mon Apr  1 19:59:24 2013
ncurses-devel-5.7-3.20090208.el6              Mon Apr  1 19:58:40 2013
gcc-c++-4.4.7-3.el6                           Fri Mar 15 07:59:36 2013
gcc-gfortran-4.4.7-3.el6                      Fri Mar 15 07:59:34 2013
...

Which package owns a command:

$ rpm -qf `which make`
make-3.81-3.el5

(This was run on an older Redhat 5.6 system which has make.)

Similarly, which package owns a file:

$ rpm -qf /usr/lib64/libssh2.so.1
libssh2-1-1.2.9-4.2.2.1

List files in (an installed) package:
$ rpm -ql freeradius-client-1.1.6-40.1

List files in an rpm package file:
$ rpm -qlp packages/HPSiS1124Core-11.24.241-Linux2.4.rpm

Get history of the package versions on this server:

$ yum history list te-agent

List the changes recorded in this package's changelog:

$ rpm -q --changelog te-agent

Install a package from a local RPM file:
$ rpm -i openmotif-libs-32bit-2.3.1-3.13.x86_64.rpm

Uninstall a package:
$ rpm -e package
$ rpm -e freeradius-server-libs-2.1.1-7.12.1

How will you install the missing make in CentOS? Use yum to search for it:

$ yum search make

Loaded plugins: fastestmirror
Determining fastest mirrors
 * base: mirror.umd.edu
 * extras: mirror.umd.edu
 * updates: mirror.cogentco.com
============================== N/S Matched: make ===============================
automake.noarch : A GNU tool for automatically creating Makefiles
...
imake.x86_64 : imake source code configuration and build system
...
make.x86_64 : A GNU tool which simplifies the build process for users
makebootfat.x86_64 : Utility for creation bootable FAT disk
mendexk.x86_64 : Replacement for makeindex with many enhancements
...

How to install it:

$ sudo yum install make.x86_64

Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.umd.edu
 * extras: mirror.umd.edu
 * updates: mirror.cogentco.com
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package make.x86_64 1:3.81-19.el6 will be installed
--> Finished Dependency Resolution
 
Dependencies Resolved
 
===========================================================================================================================
 Package                   Arch                        Version                             Repository                 Size
===========================================================================================================================
Installing:
 make                      x86_64                      1:3.81-19.el6                       base                      389 k
 
Transaction Summary
===========================================================================================================================
Install       1 Package(s)
 
Total download size: 389 k
Installed size: 1.0 M
Is this ok [y/N]: y
Downloading Packages:
make-3.81-19.el6.x86_64.rpm                                                                         | 389 kB     00:00
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : 1:make-3.81-19.el6.x86_64                                                                               1/1
 
Installed:
  make.x86_64 1:3.81-19.el6
 
Complete!

make should now be in your path.

If we were dealing with SLES I would use zypper instead of yum, but the idea of searching and installing is similar.

Debian Linux, e.g. Raspberry Pi

Find which package a file belongs to:

> dpkg -S filepath

List installed packages:

> dpkg -l

List all files belonging to the package iperf3:

> dpkg -L iperf3

Transferring packages from one system to another

When I needed to transfer Debian packages from one system with Internet access to another without, I would do:

apt download apache2

Then sftp the file to the other system and on it do

apt install ./apache2_2.4.53-1~deb11u1_amd64.deb

In fact that only worked after I installed all dependencies. This web of files covered them all (see the sketch after the list for a way to automate the chase):

apache2-bin_2.4.53-1~deb11u1_amd64.deb
apache2-data_2.4.53-1~deb11u1_all.deb
apache2-utils_2.4.53-1~deb11u1_amd64.deb
apache2_2.4.53-1~deb11u1_amd64.deb
libapr1_1.7.0-6+deb11u1_amd64.deb
libaprutil1-dbd-mysql_1.6.1-5_amd64.deb
libaprutil1-dbd-odbc_1.6.1-5_amd64.deb
libaprutil1-dbd-pgsql_1.6.1-5_amd64.deb
libaprutil1-dbd-sqlite3_1.6.1-5_amd64.deb
libaprutil1-ldap_1.6.1-5_amd64.deb
libaprutil1_1.6.1-5_amd64.deb
libgdbm-compat4_1.19-2_amd64.deb
libjansson4_2.13.1-1.1_amd64.deb
liblua5.3-0_5.3.3-1.1+b1_amd64.deb
libmariadb3_1%3a10.5.15-0+deb11u1_amd64.deb
libperl5.32_5.32.1-4+deb11u2_amd64.deb
mailcap_3.69_all.deb
mariadb-common_1%3a10.5.15-0+deb11u1_all.deb
mime-support_3.66_all.deb
mysql-common_5.8+1.0.7_all.deb
perl-modules-5.32_5.32.1-4+deb11u2_all.deb
perl_5.32.1-4+deb11u2_amd64.deb
ssl-cert_1.1.0+nmu1_all.deb
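That sketch: apt-cache can enumerate the whole dependency tree up front so everything gets downloaded in one pass. Untested on my part, so treat it as a starting point:

$ apt-cache depends --recurse --no-recommends --no-suggests --no-conflicts \
    --no-breaks --no-replaces --no-enhances apache2 \
    | grep '^\w' | sort -u | xargs apt download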

Categories
Admin CentOS Security

How to Set up a Secure sftp-only Service

Intro
Updated Jan, 2015.

Usually I post a document because I think I have something to add. This time I found a link that covers the topic better than I could. I just wanted to have it covered here. What if you want to offer an sftp-only jailed account? Can you do that? How do you do it?

The Answer
Well, it used to be all here: http://blog.swiftbyte.com/linux/allowing-sftp-access-while-chrooting-the-user-and-denying-shell-access/. But that link is no longer valid.

I tried it, appropriately modified for CentOS, and it worked perfectly. A few notes. Presumably you will already have ssh installed. Who can imagine a server without it? So there’s typically no need to install openssh-server.

I was leery of mucking with subsystem sftp. What if it prevented me from doing sftp to my own account and having full access like I’m used to? Turns out it does no harm in that regard.

Very minor point. His documentation might be good for Ubuntu. To restart the ssh daemon in CentOS/Fedora, I recommend a sudo service sshd restart. Do you wonder if that will knock you out of your own ssh session? I did. It does not. Not sure why not!

These groupadd/useradd/usermod functions are “cute.” I’m old school and used to editing the darn files by hand (/etc/passwd, /etc/group). I suppose it’s safer to use the cute functions – less chance a typo could render your server inoperable (yup, done that).

Let’s say my sftp-only user is joerg.
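Since the original article is gone, here is roughly what the sshd_config side looks like, reconstructed from memory rather than quoted from that article – the trick is to swap the usual sftp-server Subsystem line for internal-sftp, which needs no binaries inside the jail:

Subsystem sftp internal-sftp

Match User joerg
    ChrootDirectory /home/joerg
    ForceCommand internal-sftp
    AllowTcpForwarding no

Note that ChrootDirectory insists the jail directory and everything above it be owned by root and not group- or world-writable, which explains the next steps.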

I did the chown root:root thing, but initially the files weren’t accessible to the joerg user. The permissions were 700 on the home directory, now owned by root. That produces this error when you try to sftp:

$ sftp joerg@localhost
sftp> dir

Couldn't get handle: Permission denied

That’s no good, so I liberalized the permissions:

$ sudo chmod go+rx /home/joerg

My /etc/passwd line for this user looks like this:

joerg:x:1004:901:Joerg, etherip author:/home/joerg:/bin/false

So note the unusual shell, /bin/false. That’s the key to locking things down.

In /etc/group I have this:

joerg:x:1004:

If you want to add the entries by hand to passwd and group then, if I recall correctly, you run a pwconv to generate an appropriate entry for it in /etc/shadow, and a sudo passwd joerg to set up a desired password.

Does it work? Yeah, it really does.

$ sftp joerg@localhost
Connecting to localhost...
joerg@localhost's password:
sftp> pwd
Remote working directory: /
sftp> cd ..
sftp> pwd
Remote working directory: /
sftp> cd /etc
Couldn't canonicalise: No such file or directory
sftp> ls -l
[shows files in /home/joerg]

Moreover, ssh really is shut out:

$ ssh joerg@localhost
joerg@localhost's password:

This hangs and never returns with a prompt!

Cool, huh?

Locking out this same account
Now suppose you only intended joerg to temporarily have access and you want to lock the account out without actually removing it. This can be done with:

$ sudo passwd -l joerg

This prepends an invalid character (an exclamation point) to that account’s password hash in its shadow file entry.

Conclusion
We have an easy prescription to make a jailed sftp-only account that we tested and found really works. Regular accounts were not affected. The base article on which I embellished is now kaput so I’ve added a few more details to make up for that.

Categories
Admin Apache CentOS Linux Web Site Technologies

Major Headaches Migrating Apache from Ubuntu to CentOS

Intro
I’m changing servers from Ubuntu server to CentOS. On Ubuntu I just checked off LAMP and got my environment. In CentOS I’m doing it piece-by-piece. I don’t think my Ubuntu install is quite regular, either, as I bastardized it by adding environment variables in the Apache config file, a concept I borrowed from SLES! Turns out it is quite an ordeal to make a smooth transition. I will share all my pitfalls. I still don’t have it working, but I think I’m over the hump. [Update: now it is working, or 99% of it is working. It is a bit sluggish, however.]

The Details
I installed httpd on CentOS using yum. I also installed some php5 packages which I saw were recommended as well. First thing I noticed is that the directory structure for “httpd,” as it seems to be known on CentOS, is dramatically different from “apache2,” as it is known in Ubuntu. This example illustrates the point. In CentOS the main config file is

/etc/httpd/conf/httpd.conf

while in Ubuntu I had

/etc/apache2/apache2.conf

so I tarred up my /etc/apache2 files and had the thought “Let’s make this work on CentOS.” Ha. Easier said than done.

To remind, the content of /etc/apache2 is:

apache2.conf, conf.d, sites-enabled, sites-available, mods-enabled, mods-available, plus some stuff I probably added, including envvars, httpd.conf and ports.conf.

envvars contains environment variables which are subsequently referenced in the config files, like these:

export APACHE_RUN_USER=www-data
export APACHE_RUN_GROUP=www-data
export APACHE_PID_FILE=/var/run/apache2$SUFFIX.pid
export APACHE_RUN_DIR=/var/run/apache2$SUFFIX
export APACHE_LOCK_DIR=/var/lock/apache2$SUFFIX
# Only /var/log/apache2 is handled by /etc/logrotate.d/apache2.
export APACHE_LOG_DIR=/var/log/apache2$SUFFIX

First step? Well we have to hook httpd startup to our new directory somehow. I don’t recall this part so well. I think I tried this from the command line:

$ apachectl -d /etc/apache2 -f apache2.conf -k start

and it may be at that point that I got the MPM workers error. But I forget. I switched to using the service command and that particular error seemed to go away at some point. I don’t believe I needed to do anything special.

So I tried this edit to /etc/sysconfig/httpd (sparing you the failed attempts):

OPTIONS="-d /etc/apache2 -f apache2.conf"

Now we try to launch and see what happens.

$ service httpd start

Starting httpd: httpd: Syntax error on line 203 of /etc/apache2/apache2.conf: Syntax error on line 1 of /etc/apache2/mods-enabled/alias.load: Cannot load /usr/lib/apache2/modules/mod_alias.so into server: /usr/lib/apache2/modules/mod_alias.so: cannot open shared object file: No such file or directory
[FAILED]

Fasten your seatbelts, put on your big-boy pants or whatever. We’re just getting warmed up.

Let’s look at mods-available/alias.load:

$ more alias.load

LoadModule alias_module /usr/lib/apache2/modules/mod_alias.so

Sure enough, there is not only no such file, there is not even such a directory as /usr/lib/apache2. And all the load files have references like that. Where did the httpd install put its modules anyways? Why in /etc/httpd/modules. So I made a command decision:

$ mkdir /usr/lib/apache2
$ cd !$
$ ln -s /etc/httpd/modules

So where does that put us? Here:

$ service httpd start

Starting httpd: httpd: Syntax error on line 203 of /etc/apache2/apache2.conf: Syntax error on line 1 of /etc/apache2/mods-enabled/ssl.load: Cannot load /usr/lib/apache2/modules/mod_ssl.so into server: /usr/lib/apache2/modules/mod_ssl.so: cannot open shared object file: No such file or directory
     [FAILED]

Not everyone will see this issue. I had used SSL for some testing in Ubuntu so I had that module enabled. My CentOS is a core image and did not come with an SSL module. So let’s get it.

$ yum search mod_ssl

shows the full module name to be mod_ssl.x86_64, so we install it with yum install.

How far did that get us? To here:

$ service httpd start

Starting httpd: httpd: bad user name ${APACHE_RUN_USER}
  [FAILED]

Ah, remember my environment variables from above? As I said I actually use them with lines such as:

User ${APACHE_RUN_USER}

in apache2.conf. But clearly the definitions of those environment variables are not getting passed along. I decide to see if this step might work. I append these two lines to /etc/sysconfig/httpd:

# Read in our environment variables. Inspired by apache on SLES.
. /etc/apache2/envvars

Could any more go wrong? Sure. Lots! Then there’s this:

$ service httpd start

Starting httpd: httpd: bad user name www-data
      [FAILED]

Amongst all the other stark differences, Ubuntu and CentOS use different users to run apache. Great! So I create a www-data user as userid 33, gid 33, because that’s how it was under Ubuntu. But GID 33 is already taken in CentOS: it belongs to the backup group. I decide I will never use it that way, and change the group name to www-data.
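I don’t recall the exact commands, but it would have been something along these lines – groupmod to repurpose GID 33, then useradd:

$ sudo groupmod -n www-data backup
$ sudo useradd -u 33 -g www-data -s /bin/false www-data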

That brings us here. You see I have a lot of patience…

$ service httpd start

Starting httpd: Syntax error on line 218 of /etc/apache2/apache2.conf:
Invalid command 'LogFormat', perhaps misspelled or defined by a module not included in the server configuration
   [FAILED]

Now my line 218 looks pretty regular. It’s simply:

LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined

I then realized something interesting. The modules built in to httpd on CentOS and apache2 are different. apache2 seems to have some modules built in for logging:

$ apache2 -l

Compiled in modules:
  core.c
  mod_log_config.c
  mod_logio.c
  prefork.c
  http_core.c
  mod_so.c

whereas httpd does not:

$ httpd -l

Compiled in modules:
  core.c
  prefork.c
  http_core.c
  mod_so.c

So I made an empty log_config.conf and a log_config.load in mods-available that reads:

LoadModule log_config_module /usr/lib/apache2/modules/mod_log_config.so

I got the correct names by looking at the apache web site documentation on that module. And I linked those two files up in the mods-enabled directory:

$ cd mods-enabled
$ ln -s ../mods-available/log_config.conf
$ ln -s ../mods-available/log_config.load

Next error, please! Certainly. It is:

$ service httpd start

Starting httpd: Syntax error on line 218 of /etc/apache2/apache2.conf:
Unrecognized LogFormat directive %O
  [FAILED]

where line 218 is as recorded above. Well, some searches showed that you need the logio module. Note that it is also compiled in to apache2, but missing from httpd. So I did a similar thing with defining the necessary mods-{available,enabled} files. logio.load reads:

LoadModule logio_module /usr/lib/apache2/modules/mod_logio.so

The next?

$ service httpd start

Starting httpd: (2)No such file or directory: httpd: could not open error log file /var/log/apache2/error.log.
Unable to open logs
   [FAILED]

Oops. Didn’t make that directory. Naturally httpd and apache2 use different directories for logging. What else could you expect?

Now we’re down to this minimalist error:

$ service httpd start

Starting httpd:     [FAILED]

The error log contained this line:

[Mon Mar 19 14:11:14 2012] [error] (2)No such file or directory: Cannot create SSLMutex with file `/var/run/apache2/ssl_mutex'

After puzzling over this for a while, I eventually noticed that my environment has references to directories which I hadn’t created yet:

export APACHE_RUN_DIR=/var/run/apache2$SUFFIX
export APACHE_LOCK_DIR=/var/lock/apache2$SUFFIX

So I created them.
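That is, with $SUFFIX empty, something like:

$ sudo mkdir -p /var/run/apache2 /var/lock/apache2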

And now I get:

$ service httpd start

Starting httpd:           [  OK  ]

But all is still not well. I cannot stop it the proper way. Trying to read its status goes like this:

$ service httpd status

httpd dead but subsys locked

I looked this one up. Killed off processes and semaphores as recommended with ipcs -s (see this link), etc. But since my case is different, I also did something different. I modified my /etc/init.d/httpd file:

#pidfile=${PIDFILE-/var/run/httpd/httpd.pid}
pidfile=${PIDFILE-/var/run/apache2.pid}
#lockfile=${LOCKFILE-/var/lock/subsys/httpd}
lockfile=${LOCKFILE-/var/lock/subsys/apache2}

Believe it or not, this worked. I can now run service httpd status and service httpd stop. To prove it:

$ service httpd status

httpd (pid  30366) is running...

Another Error Crops Up
I eventually noticed another problem with the web site. My trajectory page was not working. Upon investigation I found this comment in my main apache error log (not my virtual server error log, which I still don’t understand):

sh: /home/drj/traj/traj4.pl: Permission denied

This had to be a result of my call-out to a perl program from a php program:

...
$data = exec('/home/drj/traj/traj4.pl'.' '.$escargs);
...

But what’s so special about that call? Worked fine on Ubuntu, and I did a directory listing to show the file was really there. Well, here’s the thing: that file is under my home directory and guess what? When you create your users in Ubuntu the home directory permissions are set to group and others read. Not in CentOS! A listing of /home looks kind of like this:

/home$ ll

total 12
drwx------ 2 drj   drj     4096 Mar 19 15:26 drj/
...

I set the permissions for all to read:

$ sudo chmod g+rx,o+rx drj

and I was good to go. The program began to work.

May 2013 Update
I was asked how all this survived after a yum update. Answer: pretty well, but not perfectly. The daemon was fine. And what misled me is that it started fine. But then a couple days later I was looking at my access log and realized…it wasn’t there! Nor the errors log. Well, actually, the default access and error logs were there, but not for my virtual servers.

I soon realized that

$ service httpd status

produced

httpd dead but subsys locked

Well, who reads or remembers their own posts from a year ago? I totally forgot I had already dealt with this once, and my own post didn’t show up in my DDG search. Anywho, I stepped on the same rake twice. Being less patient this time around, probably because I am one year older, I simply altered the /etc/init.d/httpd file (looks like it had been changed by the update) thusly:

#pidfile=${PIDFILE-/var/run/httpd/httpd.pid}
#lockfile=${LOCKFILE-/var/lock/subsys/httpd}
# try as an experiment - DrJ 5/3/13
pidfile=/var/run/apache2.pid
lockfile=/var/lock/apache2/accept.lock

and I made sure I had a /var/lock/apache2 directory. This worked.

I chose a lock file with that particular name because I noticed this in my /etc/apache2/apache2.conf:

LockFile ${APACHE_LOCK_DIR}/accept.lock

To clean things out as I was working and re-working this problem since I couldn’t run

$ service httpd stop

I ran instead:

$ pkill -9 -f sbin/httpd

and I removed /var/run/apache2.pid.

Now, once again, I can get a status on my httpd service and restart works as well and my access and error logs are being written.

Conclusion
This conversion exercise turned out to be quite a teaching lesson, and even after all this, more remains. After the mysql migration I find the performance to be sub-par – about twice as slow as it was on Ubuntu.

Four months later, CentOS has not crashed once on me. Contrast that with Ubuntu freezing every two weeks or so. I optimized MySQL to cache some data and performance is adequate. I also have since learned about bitnami, which is kind of a stack for all the stuff I was using. Check out bitnami.org.