Categories
Admin Network Technologies Raspberry Pi

A nice alternative to speedtest.net for the DIY linux crowd

Intro

I was building some infrastructure around automated speedtest.net tests using speedtest-cli. I noticed that the assigned servers keep changing, some servers are categorized as malicious sources, some time out if tested on the hour or half-hour, and results are inconsistent depending on which server you get.

So, I saw that speedtest-cli (the linux command-line python script) has a switch for a “mini” server. When I investigated, that seemed the answer to the problem – you can set up your own mini server and use that for your tests. I.e., control both ends of the test. Great.

But the speedtest.net mini server was discontinued in 2017! There’s some commercial replacement. So I thought, forget that. I was disillusioned, and then happened upon a breath of fresh air – an open source alternative to speedtest.net. Enter librespeed.

Some details

librespeed has a command-line program which is an obvious rip-off of speedtest-cli. In fact it is called librespeed-cli and has many similar switches.

There is also a server setup. Really, just a few files you can put on any apache + php web server. There is a web GUI as well, but in fact I am not even that interested in that. And you don’t need to set it up at all.

What I like is that with the appropriate switches supplied to librespeed-cli, I can have it run against my own librespeed server. In some testing configurations I was getting 500 Mbps downloads. Under less favorable circumstances, much less.

Testing, testing, testing

I tested between Europe and the US. I tested through a proxy. I tested from the Azure cloud to an Amazon AWS server. I tested with a single-cpu linux server (good old drjohnstechtalk.com) as either the server or the client. This was all possible because I had full control over both ends.

Some tips
  1. Play with the librespeed-cli switches. See what works for you. librespeed-cli -h will show you all the options.
  2. Increasing the stream count can compensate for slower PING times (assuming both ends have a fast connection)
  3. It does support use of a proxy, but…
  4. …downloads don’t really work through a proxy if the server is only running http.
  5. Counterintuitively, the cpu burden is on the client, not the server! My servers didn’t show the slightest bit of resource usage.
  6. Corollary to 5: my test from a 4-cpu client to a 1-cpu server was much faster than the same test with the server and client roles reversed.
  7. Most things aren’t sensitive to upload speeds anyway so seriously consider suppressing that test with the appropriate switch. Your tests will also run a lot faster (18 seconds versus 40 seconds).
  8. Worried about consuming too much bandwidth and transferring too much data? I also developed a solution for that (will be my next blog post)
  9. So I am running a librespeed server on my little VM on Amazon AWS but I can’t make it public for fear of getting overrun.
  10. ISPs that have excellent interconnects such as the various cloud providers are probably going to give the best results
  11. In my experience it is not true that your web server needs write access to its directory – as long as you don’t care about sharing telemetry data and all that.
  12. To emphasize, they supply the librespeed-cli binary, pre-built, for a whole slew of OSes. You do not and should not compile it yourself. For a standard linux VM you will want the binary called librespeed-cli_1.0.9_linux_386.tar.gz
Example files

The point of these files is to test librespeed-cli, from the directory where you copied it to, against your own librespeed server.

json-ns6

[{
"name":"ns6, Germany (active-servers)",
"server":"https://ns6.drjohnstechtalk.com/",
"id":864,
"dlURL":"backend/garbage.php",
"ulURL":"backend/empty.php",
"pingURL":"backend/empty.php",
"getIpURL":"backend/getIP.php",
"sponsorName":"/dev/null/v",
"sponsorURL":"https://dev.nul.lv/"
}]

wrapper.sh

#!/bin/sh
# see ./librespeed-cli -help for all the options
./librespeed-cli --local-json json-ns6 --server 864 --simple --no-upload --no-icmp --ipv4 --concurrent 4 --skip-cert-verify

Purpose: are we getting good speeds?

My purpose in what I am constructing is to verify that we are getting good download speeds. I am not trying to hit it out of the park – that consumes (read: wastes) a lot of resources. I am aiming to prove we can achieve about 150 Mbps downloads. I don’t know anyone who can point to 150 Mbps and honestly say that’s insufficient for them. For some setups that may take four simultaneous streams, for others six. But it is definitely achievable. By not going crazy we save a lot of data transfer – AWS charges me for my network usage. A six-stream download test at 150 Mbps (Megabits per second) consumes about 325 MBytes of download data. If you’re not careful with your switches, you can easily nudge that up to 1 GB of downloads for a single test.
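
To sanity-check that 325 MB figure: Megabits per second divided by 8 gives MBytes per second, times the length of the download phase. (The roughly 17 seconds of downloading within the 18-second test is my assumption.)

awk 'BEGIN { mbps=150; secs=17; printf "%.0f MBytes\n", mbps/8*secs }'
# prints "319 MBytes" - same ballpark as the ~325 MB observed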

My librespeed client-to-server tests ran overnight alongside my old approach using speedtest. The speedtest results are all over the place, with a bunch of zeroes for whatever reason, as is typical, while librespeed – and mind you this is from a client in the US, going through a proxy, to a server in Europe – produced much more consistent results. In one case where the normal value was 130 Mbps, it dipped down to 110 Mbps.

Testing it out at home
Test from a home PC against my own librespeed server

I made my test URL on my AWS server private, but a public one is available at https://librespeed.de/

At home of course I want to test with a Raspberry Pi since I work with them so much. There is indeed a pre-built binary for Raspberry Pi. It is https://github.com/librespeed/speedtest-cli/releases/download/v1.0.9/librespeed-cli_1.0.9_linux_armv7.tar.gz
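Getting it going is just a download and unpack – a quick sketch, assuming the tarball drops the librespeed-cli binary into the current directory, which is what the wrapper.sh example above expects:

wget https://github.com/librespeed/speedtest-cli/releases/download/v1.0.9/librespeed-cli_1.0.9_linux_armv7.tar.gz
tar xzf librespeed-cli_1.0.9_linux_armv7.tar.gz
./librespeed-cli -h    # confirm it runs and shows the available switches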

The problem with speedtest in more detail

There were two final issues with speedtest that were the straws that broke the camel’s back, and they are closely related.

When you resolve www.speedtest.net it hits a Content Distribution Network (CDN), and the returned results vary. For instance right now we get:

;; QUESTION SECTION:
;www.speedtest.net. IN A
;; ANSWER SECTION:
www.speedtest.net. 4301 IN CNAME zd.map.fastly.net.
zd.map.fastly.net. 9 IN A 151.101.66.219
zd.map.fastly.net. 9 IN A 151.101.194.219
zd.map.fastly.net. 9 IN A 151.101.2.219
zd.map.fastly.net. 9 IN A 151.101.130.219

Note that you can also run speedtest-cli with the --list switch to get a list of speedtest servers. In my case I found some servers which produced good results. There was one where I even know the guy who runs the ISP, and I know he does an excellent job. His speedtest server is 15 miles away. But, in its infinite wisdom, speedtest sometimes thinks my server is in Louisiana, and other times thinks it’s in New Jersey! So the returned server list is completely different in the two cases. And, even though each server gets assigned a unique number, and you can specify that number with the --server switch, it won’t run the test if that particular server wasn’t proposed to you in its initial listing. (It always makes a server listing call, whether you specified --list or not, to decide for itself which servers to use.)
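
For the record, that flow looks like this (9999 stands in for a hypothetical server id taken from the listing):

speedtest-cli --list | head -20      # show nearby servers with their id numbers
speedtest-cli --server 9999 --simple # only works if server 9999 was in the proposed list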

I tried to use some tricks to override this behaviour, but short of re-writing the whole thing, it was not going to work. I imagined I could force speedtest-cli to always use a particular IP address, overriding the addresses returned by fastly, but getting that to work through a proxy was not feasible. On the other hand, if you suck it up and accept their randomly assigned server, you have to put up with a lot of garbage results.

So set up your own server, right? The --mini switch seems built to accommodate that. But the mini server was discontinued in 2017, and the commercial replacement seemed to have some limits. So it’s dead end upon dead end with speedtest.net.

Conclusion

An open source alternative to speedtest.net’s speedtest-cli has been identified and tested, both server and client. It is librespeed. It gives you a lot more control than speedtest, if that is your thing and you know a smidgeon of linux.

References and related

(2024 update) Cloudflare has a really nice speed test without all the bloat: https://speed.cloudflare.com/

Just to do your own test with your browser the way you do with speedtest.net: https://librespeed.de/

librespeed-cli: https://github.com/librespeed/speedtest-cli

librespeed-cli binaries download page: https://github.com/librespeed/speedtest-cli/releases

RPi version of librespeed-cli: https://github.com/librespeed/speedtest-cli/releases/download/v1.0.9/librespeed-cli_1.0.9_linux_armv7.tar.gz

The RPi I use for automatically power cycling my cable modem is hard-wired to my router and makes for an excellent platform from which to conduct these speedtests.

librespeed server: https://github.com/librespeed/speedtest . You basically can git clone it (just a bunch of js and php files) from: https://github.com/librespeed/speedtest.git

If, in spite of every positive thing I’ve had to say about librespeed, you still want to try the more commercial speedtest-cli, here is that link: https://www.speedtest.net/apps/cli

In this context a lot of people feel iperf is also worth exploring. It is not built into linux, but it is packaged for most distributions.

To kick it up a notch for professional-class bandwidth and availability measurements, ThousandEyes is the way to go. This discussion is very enlightening: https://www.thousandeyes.com/blog/caveats-of-traditional-network-tools-iperf

Categories
Admin TCP/IP

Poor man’s port checker for Windows

Intro

Say you want to check if a tcp port is open from your standard-issue Windows 10 PC. Can you? Yes you can. I will share a way that requires the fewest keystrokes.

A use case

We wanted to know if an issue with a network drive mapping was a network issue. The suggestion was to test the connection to port 445 (the SMB port) on the remote server.

How to do it

From a CMD prompt

> powershell

> test-netconnection 192.168.20.250 -port 445

ComputerName : 192.168.20.250
RemoteAddress : 192.168.20.250
RemotePort : 445
InterfaceAlias : Ethernet 3
SourceAddress : 192.168.1.101
TcpTestSucceeded : True

That’s for the working case. If it can’t establish the connection it will take a while, and the last line will show False.

What you can type to minimize keystrokes is

test-n <TAB> in place of test-netconnection. It will be expanded to the full thing.
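
If you don’t even want an interactive powershell session, the whole test can be done in one line from the CMD prompt:

> powershell -Command "Test-NetConnection 192.168.20.250 -Port 445"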

Conclusion

On linux you have tools like nc (netcat), nmap, scapy and even telnet, which we network engineers have used for ages. On Windows the options are more limited, but this is one good one to know about. In the past I wrote about portqry as a similar tool for Windows, but it requires an install. test-netconnection needs nothing installed.
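
For comparison, the rough netcat equivalent of the above test on linux would be:

$ nc -vz 192.168.20.250 445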

References and related

scapy, a custom packet generation utility

portqry for Windows

Categories
Admin Firewall Proxy

Checkpoint SYN Defender: what you don’t know can hurt you

Intro

Our EDI group hails me last Friday and says they can’t reach their VANs, or at best intermittently. What to do, what to do… I go on the offensive and say they have to stop using FTP (and that’s literal FTP, not sftp, not FTPS, just plain old FTP), which has been out of date for at least 15 years.

But that wasn’t really helping the situation, so I had to dig a lot deeper. And frankly, I was coincidentally having intermittent issues with my scripted speedtests. Could the two be related?

The details

We have a bunch of synthetic monitors we run through that same firewall. They were failing every few minutes, and then would become good again.

And these FTP transfers were like that as well. Some would work, and then minutes later not work.

The firewall person on call looked at the firewall, saw some of the described traffic passing through, and declared the firewall fine.

So I got a more cooperative firewall colleague on this, and he got a really expert Checkpoint support person on the call. That guy led us to look at SYN DEFENDER, which is part of IPS and enabled via SecureXL (fwaccel). If it sees too many out-of-state packets in a given time, it will shut down the interface where the problem was observed!

The practical effect is that even if you’re taking traces on the Checkpoint, checking the logs, etc., you won’t see the traffic! That really throws most firewall admins – this situation is so unusual that they are not trained to look for it.

In this case it was an internal firewall, and we were comfortable disabling SYN DEFENDER on it. All problems went away after that.

Four months later…

Then four months later, after the firewall was upgraded to R81.10, they must have set SYN DEFENDER (AKA synatk) up all over again. And of course no one was thinking about it or expecting what happened next, which is that these exact same problems started all over again. But there were different firewall colleagues involved, none with any first-hand experience of the issue. Then I got involved and just sort of tackled my way through it in a troubleshooting session. No one was placing any judgments (my-stuff-is-fine, yours-must-be-broken kind of thinking). Eventually I recalled the old problem and looked up this post to put a name to it – SYN DEFENDER – so that it would be meaningful to the firewall colleague. Yup, he took it from there, and we were good. I admonished the on-call guy who totally missed it, and he humbly admitted to not being familiar with this feature and how it works. So I will explain it to him.

Results of running fwaccel synatk config:

enabled 0
enforce 0
global_high_threshold 10000
periodic_updates 1
cookie_resolution_shift 6
min_frag_sz 80
high_threshold 5000
low_threshold 1000
score_alpha 100
monitor_log_interval (msec) 60000
grace_timeout (msec) 30000
min_time_in_active (msec) 60000

These are probably the defaults, as we haven’t messed with them. Right now you see it’s disabled. It spontaneously re-enabled itself after only a few days, and the problems started all over again.
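
Since it re-enabled itself once, it seems worth watching for. A crude watchdog sketch, using only the fwaccel command shown above – the parsing assumes the output format shown, and the mail alert is just illustrative:

#!/bin/sh
# run from cron on the gateway: alert if SYN DEFENDER has re-enabled itself
state=$(fwaccel synatk config | awk '/^enabled/ {print $2}')
[ "$state" = "1" ] && echo "SYN DEFENDER is enabled again" | mail -s "synatk alert" admin@example.com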

References and related

VAN: Value Added Network

Categories
Admin Network Technologies

The IT Detective Agency: someone stole my switch port

Intro

A complex environment produces some too-strange-to-be-true type of issues. Yesterday was one of those days. Let me try to set this up like a script from a play.

The setting

A nondescript server room somewhere in the greater New York City area.

The equipment

A generic security appliance we’ll call ThousandEyes PX, just to make up a name.

Cisco Nexus 7K plus a FEX

The players

Dr John – the protagonist

PCT – a generic network vendor

Florence Ranjard – an admin of ThousandEyes PX in France

Shake Abel – a server room resource in PA

Cloud Johnson – someone in Request Management

Bill Otto – a network guy at heart, forced to deal with his now vendor-managed network via ITIL

The processes

ITIL – look it up

Scene 1

An email from Dr John….

Hi Bill,

Thanks.

Well that’s messed up, as they say. I wouldn’t believe it if I hadn’t seen it for myself. Someone “stole” our port and assigned it to a different device on a different vlan – despite the fact that it was in active use!

I guess I will try to “steal” it back, assuming I can find the IT Catalog article, or maybe with the help of Cloud.

Fortunately I have console access to the ThousandEyes PX. I artificially introduced traffic, which I see reflected in the port statistics. So I know the ThousandEyes is still connected to this port, despite the wrong vlan and description.

Regards,

Dr John

Scene 2

One Week earlier

Sitting at home due to Covid, Florence realizes she can no longer access the management port of her group’s ThousandEyes security appliance located on another continent. She begins to investigate and even contacts the vendor…

Scene 3

This exciting script is to be continued, hopefully.

Categories
Admin Linux Network Technologies

Speedtest automation: what they don’t tell you

Intro

I began to implement the automation of speedtest checks. I was running the jobs every 10 minutes, but we noticed something flaky in the results. On the hour and on the half-hour the tests seemed to be garbage. What was going on?

Our findings

Well, if you use a scheduler to run a speedtest every 10 minutes, it will start exactly on the hour and exactly on the half-hour, amongst other start times. We were running it eight times to test eight different paths. Only the last two were returning reliable results; the early ones were throwing errors. So I introduced an offset to run the jobs at 2,12,22,32,42,52 minutes past the hour. With this offset, the results became much more reliable.
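
In crontab syntax that offset looks like this (the path to the wrapper script is hypothetical):

# avoid :00 and :30, when everyone else's tests pile on
2,12,22,32,42,52 * * * * /home/drj/speedtest-wrapper.sh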

The inevitable conclusion is that too many other people are running tests exactly on the hour and half-hour. A single run takes roughly 30 seconds to complete. It must be that the servers speedtest relies on are simply overwhelmed and refuse to run more tests.

References and related

There is a linux script written in python that implements the full speedtest: https://pypi.org/project/speedtest-cli/ It really works, which is cool.

There is also an RPM package you can get from the speedtest site itself.

nperf.com looks like a better test than speedtest. I’m going to see if it can be scripted.

Categories
Admin Network Technologies

What is the one DHCP problem managed network providers never recognize?

Answer: the one where their switch eats the DHCPDISCOVER packets. And the amazing thing is they never learn. The second amazing thing is that they don’t apply the most basic network debugging techniques when such a problem occurs. I’m talking about your basic case: a DHCPDISCOVER packet goes to your switch, and the same DHCPDISCOVER packet never arrives at the DHCP server on that same switch. We know that to be the case, but, to help convince yourself that your switch is eating the packets, do networky things like create a span port of the DHCP server’s port to prove to yourself that no DHCP requests are coming in. And yet, they are never prepared to do that, or even to propose it. So instead indirect proxies are used to draw the conclusion.

I’ve been involved in three or four such debugging sessions. They take hours. I took notes when it happened again this weekend, and I guess that setup is pretty typical of how it plays out. A data center was moved, including a DHCP server. The new data center has a MAN network to the old one. All IPs were preserved. When they turned on the moved DHCP server, DHCP leases were no longer getting handed out. In fact it was worse than that. With the moved DHCP server turned off, most DHCP leases were working. But with it on, that’s when things really began to go south!

Here’s the switch port they noted for the iDRAC:

sh run int gi1/0/24
Building configuration…
Current configuration : 233 bytes
!
interface GigabitEthernet1/0/24
description --- To-cnshis01-iDRAC - iDRAC
switchport access vlan 202
switchport mode access
logging event link-status
speed 100
duplex full
spanning-tree portfast
ip dhcp snooping trust
end

The first line of course is the IOS command. OK, so they had that on the iDRAC port, right. But on the actual server port they had this:

sh run int gi1/0/23
Building configuration…
Current configuration : 258 bytes
!
interface GigabitEthernet1/0/23
description --- To-cnshis01-Gb1 - Gb1
switchport access vlan 202
switchport mode access
logging event link-status
spanning-tree portfast
service-policy input PMAP_COS_REMARK_IN
service-policy output PMAP_COS_OUT
end

I basically told them cheekily up front that this is usually a network switch problem and that they have to play with the DHCP snooping enable setting.

And I have to say that the usual hours of debugging were short-circuited this time as they seemed to believe me, and simply experimented by adding

ip dhcp snooping trust

to the DHCP server’s main port. We immediately began seeing DHCPDISCOVER packets come in to the DHCP server, and the team testified that people were getting leases.
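
By the way, a simple way to see for yourself whether the DHCPDISCOVER packets are arriving is to capture on the DHCP server itself (substitute your actual interface name for eth0):

tcpdump -ni eth0 port 67 or port 68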

Final mystery explained

Now why were things behaving really badly – no leases at all – when the DHCP server was up but no DHCPDISCOVER requests were getting to it? I have the explanation for that as well. You see, there is a standby DHCP server which is designed for failure of the primary DHCP server. But not for this type of failure! That’s right. There is an out-of-band communication channel (by that I mean not carried over DHCP ports like UDP port 67) between standby and primary which tells the standby: hey, although you got this DHCPDISCOVER request, ignore it because the primary is active and will serve it! And meanwhile, as we have said, the primary wasn’t getting the requests at all. Upshot: no one gets leases.

Just to mention it

My second-to-last debugging session of this sort was a little different. There they mentioned that there was a “global setting” which governed this DHCP snooping on the switch. So they had to do something with that (enable or disable or something). So there was no issue with the individual switch ports. For me that’s just a variation on the same theme.

2023 debugging session

Well, nothing has really changed in the two years since I originally posted this article. I still get on these troubleshooting sessions with the vendor, people from the firewall team, vendor management people – it’s quite an affair. And as before, it always takes a minimum of a couple hours for the network vendor to find the mistakes in their configuration. I have done two of these sessions in the last week alone.

The last one was a tad different. There was a firewalled segment, and the PCs behind it were not receiving IPs from the dhcp server. It is worth mentioning: someone suggested a traceroute from the dhcp server to this subnet, and then a traceroute to another (non-firewalled) subnet at the same site which was working. They looked completely different after the first few hops! So about an hour after that, they found they had forgotten to add a route for this subnet pointing to the firewall. And that makes sense, in that on the dhcp server – unlike in most cases – I was seeing the DHCPDISCOVER and it was replying with a DHCPOFFER. But that DHCPOFFER was simply not getting to the firewall at the site.

Cute Mnemonic to remember the four DHCP phases

Can you never remember the phases of the DHCP protocol like me? Then remember only this: DORA.

  • Discover
  • Offer
  • Request
  • Ack

What’s the idea behind this feature?

Having done a total of zero minutes of research on the topic, I will weigh in with my opinion anyway! Suppose someone comes along and plugs a consumer-grade home router into your network. It’s probably going to act as a rogue DHCP server. Imagine the fun trying to debug that situation? We’ve all been there… These rogue devices are probably fairly common. So if your corporate switch doesn’t suppress DHCP packets from ports where they are not expected, then this rogue device will begin to take down your subnet and totally bewilder everyone. I imagine the setting that is the topic of this blog post stems from trying to suppress all unknown DHCP packets in advance. It’s just that sometimes the setting is taken too far and, e.g., a firewall which relays DHCP requests also gets its DHCP packets suppressed.

Conclusion

I normally would have presented this as part of my IT Detective series. But I feel this is more like a lament about the sad state of affairs with our network providers. Though I’ve seen this issue about four times in the past 12 months, they always act like they have no idea what we’re talking about. They’ve never encountered this problem. They have no idea how to fix it. And they have no idea how to further debug it. “What steps does the customer wish?”

References and related

Just because I mentioned it, here’s one of those IT Detective Agency blog posts: The IT Detective Agency: web site not accessible

Categories
Admin Web Site Technologies

TCL iRule program with comments for F5 BigIP

Intro

A publicity-averse colleague of mine wrote this amazing program. I wanted to publish it not so much for what it specifically does as for the programming techniques it uses. I personally find it relatively hard to look up concepts when using TCL for an F5 iRule.

Program Introduction

# RULE_INIT is executed once every time the iRule is saved or on reboot. So it is ideal for persistent data that is shared accross all sessions.
# In our case it is used to define a template with some variables that are later substituted

when RULE_INIT {
# "static" variables in iRules are global and read only. Unlike regular TCL global variables they are CMP-friendly, that means they don't break the F5 clustered multi-processing mechanism. They exist in memory once per CMP instance. Unlike regular variables that exist once per session / iRule execution. Read more about it here: https://devcentral.f5.com/s/articles/getting-started-with-irules-variables-20403
#
# One thing to be careful about is not to define the same static variable twice in multiple iRules. As they are global, the last iRule saved overwrites any previous values.
# Originally the idea was to load an iFile here. That's also the main reason to even use RULE_INIT and static variables. The reasoning was (and I don't even know if this is true), that loading the iFile into memory once would have to be more efficient than to do it every time the iRule is executed. However, it is entirely possible that F5 already optimized iFiles in a way that loads them into memory automatically at opportune times, so this might be completely unnecessary.
# Either way, as you can tell, in the end I didn't even use iFiles. The reason for that is simply visibility. iFiles can't be easily viewed from the web UI, so it would be quite inconvenient to work with.
# The template idea and the RULE_INIT event stayed, even though it doesn't really serve a purpose, except maybe visually separating the templates from the rest of the code.
#
# As for the actual content of the variable: First thing to note is the use of  {} to escape the entire string. Works perfectly, even though the string itself contains braces. TCL magic.
# The rest is just the actual PAC file, with strategically placed TCL variables in the form of $name (this becomes important later)

            set static::pacfiletemplate {function FindProxyForURL(url, host)
{
            var globalbypass = "$globalbypass";
            var localbypass = "$localbypass";
            var ceglobalbypass = "$ceglobalbypass";
            var zpaglobalbypass = "$zpaglobalbypass";
            var zscalerbypassexception = "$zscalerbypassexception";

            var bypass = globalbypass.split(";").concat(localbypass.split(";"));
            var cebypass = ceglobalbypass.split(";");
            var zscalerbypass = zpaglobalbypass.split(";");
            var zpaexception = zscalerbypassexception.split(";");

            if(isPlainHostName(host)) {
                        return "DIRECT";
            }

            for (var i = 0; i < zpaexception.length; ++i){
                        if (shExpMatch(host, zpaexception[i])) {
                                   return "PROXY $clientproxy";
                        }
            }

            for (var i = 0; i < zscalerbypass.length; ++i){
                        if (shExpMatch(host, zscalerbypass[i])) {
                                   return "DIRECT";
                        }
            }

            for (var i = 0; i < bypass.length; ++i){
                        if (shExpMatch(host, bypass[i])) {
                                   return "DIRECT";
                        }
            }

            for (var i = 0; i < cebypass.length; ++i) {
                        if (shExpMatch(host, cebypass[i])) {
                                   return "PROXY $ceproxy";
                        }
            }

            return "PROXY $clientproxy";
}
}

            set static::forwardingpactemplate {function FindProxyForURL(url, host)
{
            var forwardinglist = "$forwardinglist";
            var forwarding = forwardinglist.split(";");

            for (var i = 0; i < forwarding.length; ++i){
                        if (shExpMatch(host, forwarding[i])) {
                                   return "PROXY $clientproxy";
                        }
            }

            return "DIRECT";
}
}
}

# Now for the actual code (executed every time a user accesses the vserver)
when HTTP_REQUEST {
    # The request URI can of course be used to differentiate between multiple PAC files or to restrict access.
    # So can basically any other request attribute. Client IP, host, etc.
            if {[HTTP::uri] eq "/proxy.pac"} {

                        # Here we set variables with the exact same name as used in the template above.
                        # In our case the values come from a data group, but of course they could also be defined
                        # directly in this iRule. Using data groups makes the code a bit more compact and it
                        # limits the amount of times anyone needs to edit the iRule (potentially making a mistake)
                        # for simple changes like adding a host to the bypass list
                        # These variables are all set unconditionally. Of course it is possible to set them based
                        # on for example client IP (to give different bypass lists or proxy entries to different groups of users)
                        set globalbypass [ class lookup globalbypass ProxyBypassLists ]
                        set localbypass [ class lookup localbypassEU ProxyBypassLists ]
                        set ceglobalbypass [ class lookup ceglobalbypass ProxyBypassLists ]
                        set zpaglobalbypass [ class lookup zpaglobalbypass ProxyBypassLists ]
                        set zscalerbypassexception [ class lookup zscalerbypassexception ProxyBypassLists ]
                        set ceproxy [ class lookup ceproxyEU ProxyHosts ]

                        # Here's a bit of conditionals, setting the proxy variable based on which virtual server the
                        # iRule is currently executed from (makes sense only if the same iRule is attached to multiple
                        # vservers of course)
                        if {[virtual name] eq "/Common/proxy_pac_http_90_vserver"} {
                            set clientproxy [ class lookup formauthproxyEU ProxyHosts ]
                        } elseif {[virtual name] eq "/Common/testproxy_pac_http_81_vserver"} {
                            set clientproxy [ class lookup testproxyEU ProxyHosts]
                        } elseif {[virtual name] eq "/Common/proxy_pac_http_O365_vserver"} {
                            set clientproxy [ class lookup ceproxyEU ProxyHosts]
                        } else {
                            set clientproxy [ class lookup clientproxyEU ProxyHosts ]
                }

                        # Now this is the actual magic. As noted above we have now set TCL variables named for example
                        # $globalbypass and our template includes the string "$globalbypass"

                        # What we want to do next is substitute the variable name in the template with the variable values
                        # from the code.
                        # "subst" does exactly that. It performs one level of TCL execution. Think of "eval" in basically
                        # any language. It takes a string and executes it as code.
                        # Except for "subst" there are two in this context very useful parameters: -nocommands and -nobackslashes.
                        # Those prevent it from executing commands (like if there was a ping or rm or ssh or find or anything
                        # in the string being subst'd it wouldn't actually try to execute those commands) and from normalizing
                        # backslashes (we don't have any in our PAC file, but if we did, it would still work).
                        # So what is left that it DOES do? Substituting variables! Exactly what we want and nothing else.
                        # Now since the static variable is read only, we can't do this substitution on the template itself.
# And if we could, it wouldn't be a good idea, because it is shared across all sessions. So assuming
                        # there are multiple versions of the PAC file with different proxies or bypass lists, we would
                        # constantly overwrite them with each other.
                        # The solution is simply to save the output of the subst in a new local variable that exists in
                        # session context only.
                        # So from a memory point of view the static/global template doesn't really gain us anything.
                        # In the end we have the template in memory once per CMP and then a substituted copy of the template
                        # once per session. So as noted earlier, could've probably just removed the entire RULE_INIT block,
                        # set the template in session context (HTTP_REQUEST event) and get the same result,
                        # maybe even slightly more efficient.
                        set pacfile [subst -nocommands -nobackslashes $static::pacfiletemplate]

                        # All that's left to do is actually respond to the client. Simple stuff.
                        HTTP::respond 200 content $pacfile "Content-Type" "application/x-ns-proxy-autoconfig" "Cache-Control" "private,no-cache,no-store,max-age=0"
            # In this example we have two different PAC files with different templates on different URLs
            # Other iRules we use have more differentiation based on client IP. In theory we could have one big iRule
            # with all the PAC files in the world and it would still scale very well (just a few more if/else or switch cases)
            } elseif { [HTTP::uri] eq "/forwarding.pac" } {
                set clientproxy [ class lookup clientproxyEU ProxyHosts]
                set forwardinglist [ class lookup forwardinglist ProxyBypassLists ]
            set forwardingpac [subst -nocommands -nobackslashes $static::forwardingpactemplate]
            HTTP::respond 200 content $forwardingpac "Content-Type" "application/x-ns-proxy-autoconfig" "Cache-Control" "private,no-cache,no-store,max-age=0"
            } else {
                # If someone tries to access a different path, give them a 404 and the right URL
                HTTP::respond 404 content "Please try http://webproxy.drjohns.com/proxy.pac" "Content-Type" "text/plain" "Cache-Control" "private,no-cache,no-store,max-age=0"
            }
}
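
To spot-check what a client actually receives, fetch the PAC file through the vserver and verify the $-variables got substituted – using the URL from the 404 message above:

curl http://webproxy.drjohns.com/proxy.pac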

To be continued...

Categories
Admin Web Site Technologies

Building a regular (non-bloggy) web site with WordPress

Intro

I recently was a first-hand witness to the building of a couple of web sites. I was impressed as the webmaster turned them into “regular” web sites – some bit of marketing, some practical functionality – and removed all the traditional blog components. Here are some of the ingredients.

The ingredients

Background images and logo

unsplash.com – a place to look for quality, non-copyrighted images on a variety of topics. These can serve as a background image to the home page for instance.

looka.com – a place to do your logo design.

Theme

Astra

Security Plugins

WPS Hide Login

Layout Plugins

Elementor

Envato Elements

Form Plugins

Contact Form 7

Contact Form 7 Captcha

Ninja Forms. Note that Ninja Forms 3 includes Google’s reCAPTCHA, so no need to get that as a separate plugin. I am trying to work with Ninja Forms for my contact form.

Infrastructure Plugins

WP Mail SMTP – my WordPress server needs this but your mileage may vary.

How-to videos

I don’t have this link yet.

References and related

To sign up for an API key for Google’s reCAPTCHA, go here: http://www.google.com/recaptcha/admin

Categories
Admin

OpenSCAD export to STL does nothing

Quick Tip
If you are using OpenSCAD for your 3D model construction, and after creating a satisfactory model do an export to STL, you may observe that nothing at all happens!

I was stuck on this problem for a while. Yes, the solution is obvious to a regular user, but I only use the program every few months. If you open the console you will see the problem immediately:

ERROR: Nothing to export! Try rendering first (press F6).

But in my case I had closed the console, forgot there was such a thing, and of course the program remembers your settings.

So you have to render your object (F6) before you can export as an STL file.
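
As an aside, if you drive OpenSCAD from the command line it renders as part of the export, so this trap shouldn’t arise – as I understand it:

openscad -o model.stl model.scad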

References and related
I don’t know why this endplate design blog post which I wrote never caught on. I think the pictures are cool.

OpenSCAD is a 3D modelling application that uses CSG – Constructive Solid Geometry. It’s very focussed on math and basic geometric shapes – perfect for me. https://www.openscad.org/

Categories
Admin Linux

vsftpd Virtual Users stopped working after patching: the solution

Intro
vsftpd is a useful daemon which I use to run an ftps service (ftp which uses TLS encryption). Since I am not part of the group that administers the server, it makes sense for me to maintain my own userlist rather than rely on the system password database. vsftpd has a convenient feature which allows this known as virtual users.

More details
In /etc/pam.d/vsftpd.virtual I have:

auth required pam_userdb.so db=/etc/vsftpd/vsftpd-virtual-user
account required pam_userdb.so db=/etc/vsftpd/vsftpd-virtual-user
session required pam_loginuid.so

In the file /etc/vsftpd/vsftpd-virtual-user.db I have my Berkeley database of users and passwords. See the references for how to set this up.
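
For completeness, the usual way to build such a Berkeley database – per the cyberciti reference below – is db_load, fed a plain text file of alternating username and password lines:

# logins.txt holds a username on one line, that user's password on the next, repeated
db_load -T -t hash -f logins.txt /etc/vsftpd/vsftpd-virtual-user.db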

The point is that I had this all working last year – 2019 – on my SLES 12SP4 server.

Then it all broke
Then in early May, 2020, all the FTPs stopped working. The status of the vsftpd service hinted that the file /lib64/security/pam_userdb.so could not be loaded. Sure enough, it was missing! I checked some of my other SLES12SP4 servers, some of which are on a different patch schedule. It was missing on some, and present on one. So I “borrowed” pam_userdb.so from the one server which still had it and put it onto my server in /lib64/security. All good. Service restored. But clearly that is a hack.

What’s going on
So I asked a Linux expert what was going on and got a good explanation.

pam_userdb has been moved to a separate package, named pam-extra
 
1) http://lists.suse.com/pipermail/sle-security-updates/2020-April/006661.html
2) https://www.suse.com/support/update/announcement/2020/suse-ru-20200822-1/
 
Advisory ID: SUSE-RU-2020:917-1
Released: Fri Apr 3 15:02:25 2020
Summary: Recommended update for pam
Type: recommended
Severity: moderate
References: 1166510
This update for pam fixes the following issues:
 
- Moved pam_userdb into a separate package pam-extra. (bsc#1166510)
 
Installing the package pam-extra should resolve your issue.

I installed the pam-extra package using zypper, and, yes, it creates a /lib64/security/pam_userdb.so file!
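
That is, the fix boils down to a one-liner on SLES:

zypper install pam-extra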

And vsftpd works once more using supported packages.

Conclusion
Virtual users with vsftpd require pam_userdb.so. However, PAM wished to decouple itself from dependencies on external databases, etc., so this functionality was bundled into a separate package, pam-extra, more-or-less in the middle of a patch cycle. So if you have the problem I had, the solution may be as simple as installing the pam-extra package on your system. Although I experienced this on SLES, I believe it has happened or will happen on other Linux flavors as well.

This problem is poorly documented on the Internet.


References and related

https://www.cyberciti.biz/tips/centos-redhat-vsftpd-ftp-with-virtual-users.html