Consumer Interest

Consumer tech: fixing my Acurite electronic rain gauge


Acurite seems to have cornered the low-end consumer weather metrics market: rain gauges and the like. In the past I’ve bought multiple cheap plastic rain gauges of the Acurite brand. They are quite nice, and cheap. But exposed to freezing water they develop leaks, and the plastic yellows after a year.

My wife gave me an electronic Acurite rain gauge. The setup was simple and it was working fine. Then one rainy day I noticed there was nothing recorded. Why?

The details

Of course the obvious go-to is the batteries. But I have a battery tester, and in this case they tested fine. So I took the measuring unit out and put it next to the recording unit. Still nothing. I took all the batteries out and put them back in. As the recording unit (the inside part of the gauge) came up, I noticed it showed a signal strength going from zero to four bars, over and over, which to me indicated it was looking for and not finding a signal from the measuring (outdoor) unit.

What this said to me was the following: the problem was in the measuring unit. Likely it wasn’t on, for whatever reason.

At this point you could rightly object that maybe the two units were simply on different channels. But I had already taken care of that: I made sure they were both on A, so I feel I had adequately ruled that out.

I noticed the terminals in the measuring unit’s battery compartment were dulled with crud. I’ve encountered this issue before on my home thermostat. My solution there was to add some wadded-up aluminum foil to the springy terminal. I did the same here, and voila, I began to get a steady four bars of signal strength!

A healthy Acurite electronic rain gauge, model 02446

I fixed my Acurite rain gauge and am sharing what I did in case someone else has this issue. The fix has lasted a year and a half so far. I hope to get a couple more years out of it!

I’m not sure where the crud comes from (the batteries are not leaking!) that eventually cuts off electrical contact with the springy (negative) terminal, but wadded-up aluminum foil covering it does the trick!

Network Technologies

Trying to improve my home WiFi with a range extender


My Teams meetings in the mornings had poor audio quality, and sometimes I could not share my screen. My suspicions focused on my home WiFi router, which is many years old. I decided to run an experiment and get a range extender. The results are, well, mixed at best.

Windows command

netsh wlan show interface

There is 1 interface on the system:
Name : Wi-Fi 
Description : Intel(R) Dual Band Wireless-AC 3168 
GUID : f1c094c0-fcb7-4e47-86ba-51df737e58c8 
Physical address : 28:c6:3f:8f:3a:27 
State : connected 
SSID : DrJohn 
BSSID : ec:c3:02:eb:2d:7c 
Network type : Infrastructure 
Radio type : 802.11ac 
Authentication : WPA2-Personal 
Cipher : CCMP 
Connection mode : Auto Connect 
Channel : 153 
Receive rate (Mbps) : 292.5 
Transmit rate (Mbps) : 292.5 
Signal : 99% 
Profile : DrJohn

802.11ac is WiFi 5. 802.11n is WiFi 4, to be clear about it.
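To keep the 802.11 alphabet soup straight, here is the mapping as a tiny python lookup (just the generations relevant to this post, plus WiFi 6):

```python
# Map 802.11 radio types (as reported by `netsh wlan show interface`)
# to their Wi-Fi Alliance generation names.
WIFI_GENERATIONS = {
    "802.11n": "WiFi 4",
    "802.11ac": "WiFi 5",
    "802.11ax": "WiFi 6",
}

def wifi_generation(radio_type: str) -> str:
    """Return the marketing name for a radio type, e.g. '802.11ac' -> 'WiFi 5'."""
    return WIFI_GENERATIONS.get(radio_type, "unknown")

print(wifi_generation("802.11ac"))  # WiFi 5
```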

What’s going on

My work laptop starts out using WiFi 5 (802.11ac). The signal is around 60% or so, so I guess not super great. Then after an hour or so it switches to WiFi 4 (802.11n)! Audio in my meetings gets disturbed during this time.

To my surprise, my WiFi extender did not really change this behavior! But maybe the quality is better.

One morning I started out on WiFi 4. The signal quality varied from 94% down to 61%, all while nothing was being moved, and within a matter of minutes! The lower signal values are associated with slower transmit and receive rates, naturally. But at least with the extender, WiFi 4 seems OK; it’s usable for my interactive meetings. In my experience, once you are on WiFi 4 you are very unlikely to automagically get switched back to WiFi 5, but the reverse is not true. So there’s a lot of variability in the signal over the course of minutes, yet I stayed on WiFi 4 for over three hours without its changing. Then I connected to a different SSID, connected back to my _EXT SSID and, bam, WiFi 5, but only at 52% signal strength.

The way I know this behavior in such detail is that I happen to have a ThousandEyes endpoint agent installed, and I have access to the history of the connection quality, signal strength, throughput, etc. ThousandEyes is pretty cool.

Further experimentation

The last couple of days I’ve been getting WiFi 5 and it’s been sticking. What’s the difference? This sounds incredibly banal, but I stood the darn extender upright! That’s right: during those days when I was mostly getting WiFi 4, the extender had all its antennae sticking out, but it was lying flat on a table. I am in a room across the hallway. Then I managed to stand it upright – a little tricky since it is plugged into an extension cord. I’m still across the hallway, but things have been behaving better ever since.

Does a WiFi extender create a new SSID?

Yes! It creates an SSID named after your SSID with _EXT appended. However, it is very important to note that it is a bridged network, so your _EXT-connected devices see all your devices not on _EXT, which makes it very convenient. In other words, the subnet used is your primary router’s subnet.

This TP-Link (see references) seems to have lots of nice features. MIMO, AP mode, mesh mode, etc. You may or may not need them right away. For instance, the device has several status LEDs which get kind of bright for a bedroom at nighttime. Originally we covered it with a dark T-Shirt. Then I looked at it and saw it has an LED switch! That’s right. Just press that LED switch and those way-too-bright LEDs stop illuminating, while the device keeps on working. A very small but thoughtful feature which you would never even think to look for but turns out to be important. It might have overheated had we kept it covered with that T-Shirt.

To be continued…

References and related

TP-Link AC1900 WiFi Range Extender at Amazon (costs about $69; I do not get promotional credits!)

Firewall Linux Network Technologies

The IT Detective Agency: the case of the mysterious ICMP host administratively prohibited packets


I haven’t published a new case in a while – not for lack of cases, but more because they all fall into something I’ve already written about. But today there is definitely something new.

Some details

ThousandEyes agent-to-agent communication was generally working for all our enterprise agents after fixing firewall rules, etc., except for this one agent hosted in Azure US East. Was it something funny about the firewalls on either side of the VPN tunnel to this cloud? Ping tests were working. But a connection to TCP port 49153, which is used for agent-to-agent communication, drew a response in the form of an ICMP type 3 code 10 packet, which says something like host administratively prohibited. What?
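For context, ICMP type 3 means destination unreachable, and the code refines the reason. A few of the IANA-assigned codes – the ones most often seen coming from Linux firewalls – expressed as a small python lookup:

```python
# A few ICMP type 3 (destination unreachable) codes, per the IANA registry.
ICMP_UNREACHABLE_CODES = {
    0: "net unreachable",
    1: "host unreachable",
    3: "port unreachable",
    9: "network administratively prohibited",
    10: "host administratively prohibited",
    13: "communication administratively filtered",
}

def explain_icmp_unreachable(code: int) -> str:
    """Translate an ICMP type 3 code number into a human-readable reason."""
    return ICMP_UNREACHABLE_CODES.get(code, "other/unknown code")

print(explain_icmp_unreachable(10))  # host administratively prohibited
```

Code 10 is exactly what iptables’ reject-with icmp-host-prohibited target sends.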

The Cisco TAM suggested looking at iptables. I did a listing with iptables -L. The output is pretty long and I’m not experienced at reading it. Nothing much jumped out at me, but I did note the presence of this line:

REJECT     all  --  anywhere             anywhere             reject-with icmp-host-prohibited

in a couple of the chains, which seemed suspicious.

An Internet search pointed towards firewalld, since the agent is a Redhat 7.9 system. Indeed firewalld was running:

systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2023-10-12 15:26:25 UTC; 5h 45min ago

The suggestion was to test with firewalld disabled. Indeed this produced correct results – no more ICMP packets coming back.

But it’s probably a good security measure to run firewalld, so how to modify it? This note from Redhat was particularly helpful in learning how to add a rule to the firewall. I pretty much just needed to do this to permanently add my rule:

firewall-cmd --add-port=49153/tcp --permanent
firewall-cmd --reload

Afterwards the agent-to-agent tests began to run successfully.

Which runs first, tcpdump or firewalld?


This is a good question to ask because, had the order been different, you might have your packets dropped before you ever see them in tcpdump. But tcpdump taps the network interface ahead of netfilter (and hence firewalld) processing, so it gets a pretty clean mirror of what the interface receives.

The new equivalent to netstat -an

If I want to see the listening processes in Redhat I might do a

ss -ln

In the old days I had memorized netstat -an, but netstat and the rest of the net-tools package are deprecated in favor of the iproute2 tools such as ss, so using it is now frowned upon.


We solved a case where TCP packets were getting returned with an ICMP packet which basically said: prohibited. This was due to the host, a Redhat 7 system, having restricted ports because firewalld was running. Once firewalld was modified this traffic was permitted and the ThousandEyes tests ran successfully. We also showed that tcpdump sees packets before firewalld can drop them.

References and related

How to add rule to firewalld on Redhat-like systems.

Admin Apache Linux

Cloudflare: an added layer of protection for your personal web site


I was looking at what Cloudflare could do for my web site. A colleague pointed out that they have a free usage tier which supplies a web application firewall and some anti-bot measures. I checked it out and immediately signed up!

The details

What Cloudflare is supplying at no cost (for personal web sites like mine) is amazing. It’s not just a world-class dns service, though that alone would already be amazing. Run a dig query against one of my domains and you will see several different IPs mentioned around the world – just like the big guns! I also get, for free, some level of mitigation against dns-based attacks.

Web site protections

I don’t fully understand their products, so I don’t know what level of protection I am getting in the free tier, but there is at least some! They say they’ve blocked 10 requests in the last few days.

Web usage stats

I have to admit that using raw linux tools against my apache access file hasn’t been the most illuminating. Now that I use Cloudflare I get a nice visual presentation showing where (which country) my visitors came from, where the bots come from, and how much data was transmitted.

Certificate for HTTPS

Cloudflare automatically takes care of the web site certificate. I had to do nothing at all. So now I can forget my call out to LetsEncrypt. I wonder if GoDaddy is still charging $69 annually for their certificates.


Yeah my web site just feels faster now since the switch. It just does. And Cloudflare stats say that about 30% of the content has been served from their cache – all with zero setup effort on my part! I also believe they use certain tcp acceleration techniques to speed things up.


And Cloudflare caches some of my objects to boost performance. Considering that I pay for data transfer at Amazon AWS, it’s a fair question to ask whether this caching could even be saving me money. I investigated and found that I get billed maybe $0.02 per GByte, and in a busy month I might use 0.8 GB or so – roughly $0.02 per month. So I might occasionally save a penny or so – nothing substantial, though!
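As a back-of-the-envelope check (using the rough figures above, not AWS’s official pricing):

```python
# Rough monthly AWS data-transfer cost, and the portion Cloudflare's cache saves.
rate_per_gb = 0.02        # approximate $/GB billed by AWS for data transfer out
monthly_gb = 0.8          # approximate GB transferred in a busy month
cache_hit_ratio = 0.30    # share of content served from Cloudflare's cache

monthly_cost = rate_per_gb * monthly_gb
monthly_savings = monthly_cost * cache_hit_ratio

print(f"cost ~${monthly_cost:.3f}/month, cache savings ~${monthly_savings:.3f}/month")
```

So the cache saves about half a cent a month – consistent with “a penny or so, occasionally.”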


Even with this free tier you get some geoDNS functionality, namely, visitors from around the world will see an IP address which is geographically close to where they are, boosting their performance when using your site. Stop to think about that. That’s a whole lot of infrastructure sophistication that they’re just giving you for free!

Why are they giving this much away?

I think they have the noble aim of improving the security posture of the Internet writ large. Much as letsencrypt greatly accelerated the adoption of web page encryption (https) by making certificates free, Cloudflare hopes to accelerate the adoption of basic security measures for every web site, thereby lifting the security posture of the Internet as a whole. Count me as a booster!

What’s their business model? How will they ever make money?

Well, you’re only supposed to use the free tier for a personal web site, for one. My web sites don’t really have any usage and do not display ads so I think I qualify.

More importantly, the free security protections and acceleration are a kind of teaser, and the path to upgrading to the professional tier is very visibly marked. So they’re not 100% altruistic.

Why I dislike GoDaddy

Let’s contrast this with offerings from GoDaddy. GoDaddy squeezes cents out of you at every turn. They make it somewhat mysterious what you are actually paying for, so they’re counting on fear of screwing up (FOSU, to coin a term). After all, except for the small hit to your wallet, getting that upgraded tier – whois cloaking, anyone? – might be what you need. Who knows? Won’t hurt, right? But I get really tired of it. Amazon AWS is perhaps middle tier in this regard. They do have a free tier virtual server, which I used initially. But it really doesn’t work except as a toy; my very modest web site overwhelmed it on too many occasions. So, basically useless. Everything else: you pay for it. But somehow they’re not shaking the pennies out of you at every turn, unlike GoDaddy. And AWS even shows you how to optimize your spend.

How I converted my live site to Cloudflare

After signing up for Cloudflare I began to enter my dns domains, plus a few others. They explained how, at GoDaddy, I had to update the nameserver records for these domains, which I did. Then Cloudflare has to verify these updates. Then my web sites basically stopped working! So I had to switch the encryption mode to full. This mode encrypts the back-end data to my web server, but it accepts a self-signed certificate, no matter whether it’s expired and no matter who issued it. That is all good, because you still get an encrypted channel to your content server.

Then it began to work!

Restoring original visitor IPs to my apache web server logs

It is very important to know, from a technical standpoint, that Cloudflare acts as a reverse proxy to your “content server.” Knowing this, you will also know that your content server’s apache logs get kind of boring, because they will only show the Cloudflare IPs. But Cloudflare has a way to fix that so you can see the original IPs, not the Cloudflare IPs, in your apache logs.

Locking down your virtual server

If Internet users can still access the web server of your virtual server directly (bypassing Cloudflare), your security posture is only somewhat improved. To go further you need to use a local firewall. I debated whether to use AWS Network Security Groups or iptables on my centos virtual server. I went with iptables.

I loosely followed this developer article. Did I mention that Cloudflare has an extensive developer community?

Actually I had to install iptables first because I hadn’t been using it. The little iptables script I created goes like this:


# from
# For IPv4 addresses
curl -s|while read ip; do
  echo adding $ip to iptables restrictions
  iptables -I INPUT -p tcp -m multiport --dports http,https -s $ip -j ACCEPT
done
# maybe needed just once??
#iptables -A INPUT -p tcp -m multiport --dports http,https -j DROP
# list all rules
iptables -S

I believe I just need to run it the one time, not, e.g., after every boot. We’ll soon see. The output looks like this:

-A INPUT -s -p tcp -m multiport --dports 80,443 -j ACCEPT
(14 more ACCEPT lines like the above, one per Cloudflare IPv4 range)
-A INPUT -p tcp -m multiport --dports 80,443 -j DROP

Note that this still leaves ssh open, but that’s ok since it is locked down via Network Security Group rules. No urgent need to change those.
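The accept loop in the script reduces to a pure function mapping CIDR ranges to iptables commands; here is a python sketch (the ranges shown are hypothetical stand-ins for Cloudflare’s published list):

```python
# Build iptables ACCEPT commands for a list of source CIDR ranges.
# Sketch only: the real script feeds in Cloudflare's published IP ranges.
def accept_rules(ranges, ports="http,https"):
    return [
        f"iptables -I INPUT -p tcp -m multiport --dports {ports} -s {cidr} -j ACCEPT"
        for cidr in ranges
    ]

rules = accept_rules(["", ""])  # hypothetical ranges
for rule in rules:
    print(rule)
```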

Then I made sure that direct access to my content server freezes, which it does, and that access through the official DNS channels, which use Cloudflare, still works, which it did. So… all good. The setup was not hard at all. But since I have several hosted web sites, for the iptables rules to make any sense I had to be sure to migrate all my hosted sites over to Cloudflare.

Not GoDaddy

I was dreading migrating my other zones (dns domains) over to Cloudflare. Still being in the GoDaddy mindset, I figured, sure, Cloudflare will permit me one zone for free, but then charge me for a second one.

So I plunged ahead. No charge!

And a third one: Also no charge!

And a fourth, and a fifth and a sixth.

I thought perhaps five would be the threshold. But it wasn’t. I only have six “zones,” as Cloudflare now calls them, but they are all in my account and all free. Big relief. This is like the anti-GoDaddy.

DNS changes

Making DNS changes is quite fast. The changes are propagated within a minute or two.

api access

Everything you can do in the GUI you can do through the api. I had previously created and shared some model python api scripts.


As if all the above weren’t already enough, I see Cloudflare also gives my web site accessibility via ipv6:

$ dig +short aaaa


I guess it’s accessible through ipv6 but I haven’t quite proven that yet.

Mail forwarding

I originally forgot that I had set up mail forwarding on GoDaddy. It was one of the few free things you could get. I think they switched to native Outlook or something, so my mail forwarding wasn’t working. On a lark I checked whether Cloudflare has complimentary mail forwarding for my domains. And they do! So that’s cool – another free service I will use.

Sending mail FROM this Cloudflare domain using your Gmail account

This is trickier than simple mail forwarding, but I think I’ve got it working now. You use Gmail’s own server ( as your relay. You also need to set up an app password for Gmail. Even though you need to specify a device such as Windows, it seems that once enabled, you can send from this new account from any of your devices. I’ve found that you also need to update your TXT record (see link below) with expanded SPF information:

v=spf1 ~all

In words it means that the Google and Cloudflare sending servers are authorized to send emails with this domain in the sender field; mail from anywhere else will be marked as a soft fail (that’s the ~all).

Even after all that I wasn’t seeing my sent message at work where Microsoft 365 is in use. It landed in the Junk folder! Why? The sending email “appears similar to someone who previously sent you email, but may not be that person.” Since I am a former mail admin I am sympathetic to what they’re trying to do – help hapless users avoid phishing; because it’s true – the characters in my test email did bear similarities to my regular email. My regular email is first_name.last_name @, while mail from this domain was first_name @ last_name + s .com Mail sent to a fellow Gmail user suffered no such fate however. Different providers, different approaches. So I can accept that. Once it’s set up you get a drop-down menu of sending addresses every time you compose a new message! The detailed instructions are at the Cloudflare community site.

Cost savings using Cloudflare

Suppose, like me, you only use GoDaddy as your registrar and get all your other services some other way. Well, Cloudflare began to pitch me on transferring my domains to them. I thought, aha, this is the moment they will make money off me. So I read their pitch. Their offer is to bill me for the charges they incur from ICANN or wherever, i.e., pass-through charges without any middleman markup. It’s like, what? So let’s say at GoDaddy I pay $22 per year per domain. With Cloudflare I’d be paying something like $10 per year. For one domain I wouldn’t bother, but since I have more than five, I will be bothering, and gladly leaving GoDaddy in the dust. I have just transferred the first two domains. GoDaddy seems to drag out the process as long as possible; I found I could expedite it by approving the transfer in the GoDaddy portal. Then Cloudflare had them within 30 minutes. A transfer is not super-easy to do, but also not impossible.

In typical GoDaddy style, executing a domain transfer to another registrar seems essentially impossible if you use their latest Domain portfolio app. Fortunately I eventually noticed the option to switch from “beta” to the old Domain manager, which still has the option and looks a bit more like their documentation. I generated auth codes, unlocked the domains, etc. I even saw the correct domain status (ok, as opposed to client transfer prohibited) when I did a whois, but then Cloudflare, which is usually so quick to execute, seemed to lag in recognizing that the domains had been unlocked and suggested checking back in some hours. Weird. The solution here was to provide my credit card info. Even 12 hours later it was still saying none of my domains were eligible for transfer, but as soon as I provided my payment information, it recognized two of my domains as eligible.

A plug for GoDaddy

As my favorite sport seems to be bashing GoDaddy, I wanted to balance that out and say a few kind words about them. Someone in my household just started a job with a startup that uses GoDaddy. It provides desktop Outlook email, MS Teams, Sharepoint, help with consulting, etc. And on day one this person was up and running. So if you use their services, they definitely offer value. My issue is that I tried to restrict my usage to just one service – domain registrar – and they pushed me to use more, which I resisted. But for a small business which needs those things, it’s fine.

How many domains are you sharing your IP with?

The thing with Cloudflare is that they assign you a couple of their IP addresses, often beginning with 172.67 or 104. Now, did you ever wonder how many other web sites you’re sharing those IPs with? If not, you should! I found a tool, dnslytics, that provides the answer. For this free tier they seem to keep the number around 500 unique domains per IP! Yes, that’s a lot, but I’d only be concerned if there were evidence of service degradation, which so far I have not seen. What’s nice about the dnslytics site is that it lists a few of the domains – far from all of them, but at least 20 or 30 – associated with a given IP. That can be helpful during troubleshooting.


What Cloudflare provides in protective and performance services represents a huge advance in the state of the art. And they do not niggle you for extra charges by playing on Fear of Screwing Up; entice is more their style.

All in all, I am amazed, and I am something of an insider – a professional user of such services. So I heartily endorse using Cloudflare for all personal web servers. I have not been sponsored or even in contact with Cloudflare, by the way!

References and related

Cloudflare tip: Restoring original visitor IPs to your apache web server.

Locking your virtual server down to just Cloudflare IPs:

Using the Cloudflare python api: working examples

Sending Gmail with your Cloudflare domain as the sending address

Cloudflare’s analysis of the HTTP/2 Rapid Reset exploit is extremely detailed.

I remember being so excited to discover free certificates from LetsEncrypt.

A good explanation of SPF records

Turn an IP address into a list of associated domain names:

Linux Perl Python

Using syslog within python


We created a convention wherein our scripts log to syslog in a certain style. Originally these were Perl scripts, but newer scripts are written in python. My question was: how do I do in python what we had done in Perl?

The details

The linux system uses syslog-ng. In /etc/syslog-ng/conf.d I created a test file 03drj.conf with these contents:

destination d_drjtest { file("/var/log/drjtest.log"); };
filter f_drjtest{ program("drjtest"); };
log { source(s_src); filter(f_drjtest); destination(d_drjtest); flags(final); };

We want each of our little production scripts to have its own log file in /var/log.

The python test program I wrote which outputs to syslog is this:



import syslog
syslog.syslog(syslog.LOG_NOTICE,'[Notice] Starting')
syslog.syslog(syslog.LOG_ERR,'[Error] We got an error')
syslog.syslog(syslog.LOG_INFO,'[Info] Just informational stuff')

Easy, right? Then someone newer to python showed me what he had done – not using syslog but logger, in which he accomplished pretty much the same goal but by different means. But he had to hard-code a lot more of the values and so it was not as elegant in my opinion.
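One hint worth adding: syslog-ng’s program() filter matches on the ident field, which python’s syslog module defaults to the script’s name. To be safe you can set it explicitly with openlog – a sketch, where the ident “drjtest” matches the filter in the config above:

```python
import syslog

# Set the ident explicitly so syslog-ng's program("drjtest") filter matches
# no matter what the script file is actually called. LOG_PID appends the
# process id, producing lines like: drjtest[928]: [Notice] Starting
syslog.openlog(ident="drjtest", logoption=syslog.LOG_PID)
syslog.syslog(syslog.LOG_NOTICE, "[Notice] Starting")
syslog.closelog()
```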

In any case, the output goes to /var/log/drjtest.log, which looks like this after a test run:

Jul 24 17:45:32 drjohnshost drjtest[928]: [Notice] Starting
Jul 24 17:45:32 drjohnshost drjtest[928]: [Error] We got an error
Jul 24 17:45:32 drjohnshost drjtest[928]: [Info] Just informational stuff

We showed how to use the syslog facility within python by using the syslog package. It’s all pretty obvious and barely needs to be mentioned – except that when you’re just starting out, you want a little hint that you may not find in the man pages or the documentation at syslog-ng.

References and related

I have a neat script which we use to parse all these other scripts’ logs and send us a summary email once a week – unless an error has been detected, in which case the summary email goes out the day after a script reported the error. It has some pretty nice logic, if I say so myself. Here it is: drjohns script checker.


My favorite Flux tips


I used the Flux language available within Grafana for my InfluxDB data source to do a number of things which, while easy, are not easy to find. So here I am doing a brain dump to document the language features I’ve so far used, and especially to describe these features with terms common to other more well-known languages such as python.

A word about Flux

Nothing is straightforward when it comes to Flux. And to make matters worse, the terminology is also strange. I am at the beginning of the beginning, so I don’t know their terminology, much less am I positioned to defend it.

Flux vs InfluxQL

It seems InfluxQL is the old stuff. It had SQL-like statements. Flux syntax is rather different, but its capabilities have been enhanced. Since flux is a common word, I am never sure how to formulate an Internet search – Flux lang? – and searches often turn up references to the old stuff (SELECT statements and the like), which I always avoid.


I believe, here in June 2023 as I write this, that our InfluxDB is v 2.x, Grafana is version 10 (as of today!), so that gives us Flux v 0.x, I guess???

Time Series data

One has to say a word about time series data. But I’m not the one to say it. I simply muddle along until my stuff comes out as I wish it to. Read this somewhat helpful page to attempt to learn about time series:


Grafana is very helpful in pointing out errors. You create your Flux query, try to refresh, and it throws up a triangular warning icon which says which exact characters of which exact line of your Flux query it doesn’t like. If it weren’t for this I’d have wasted many more hours beyond the ones already wasted.

Print out variable values to debug

In python I print out variable values to understand what’s happening. I have no idea how to do this in Flux. Of course I’m not even sure variable is the right word.


This working example will be helpful to illustrate some of the things I’ve learned.


import "date"
import "dict"
weekdayDict = [0:"Sunday", 1:"Monday", 2:"Tuesday", 3:"Wednesday", 4:"Thursday",
  5:"Friday", 6:"Saturday"]
regionOffsetDict = ["AP":8h,"NA":-4h,"EU":0h,"SA":-3h]
offset_dur = dict.get(dict:regionOffsetDict, key:"${Region}", default:0h)
startOnToday = date.add(d: -${day}d, to: now())
startCorrected = date.add(d: offset_dur, to: startOnToday)
startTrunc = date.truncate(t: startCorrected, unit: 1d)
stopTrunc = date.add(d: 1d, to: startTrunc)
year = string(v: date.year(t: startTrunc))
month = string(v: date.month(t: startTrunc))
day = string(v: date.monthDay(t: startTrunc))
dayWint = date.weekDay(t: startTrunc)
dayW = dict.get(dict:weekdayDict, key:dayWint, default:"Unknown day")
niceDate = "Date: " + dayW + " "+ year + "-" + month + "-" + day
// see
// and
from(bucket: "poc_bucket2")
  |> range(start: startTrunc, stop: stopTrunc)
  |> filter(fn: (r) =>
    r._measurement == "vedge" and
    r._field == "percent" and
    r.item == "$item"
  )
  |> keep(columns:["_time","_value","item"])
  |> aggregateWindow(every: 1h, timeSrc: "_start", fn: mean)
  |> set(key:"hi", value:niceDate)

Comment lines

Begin a line with // to indicate it is a comment line. But // can also be used to comment at the end of a line.

Concatenating strings

Let’s start out easy. Concatenating strings is one of the few things which work the way you expect:

niceDate = "Date: " + dayW + " " + year + "-" + month + "-" + day

Variables are strongly typed

Let’s back up. Yes there are variables and they have one of a few possible types and the assignment operator = works for a few simple cases. So far I have used the types int, string, duration, time and dict. Note that I define all my variables before I get to the range statement. Not sure if that’s essential or not. If a function expects a duration, you cannot fake it out by using a string or an int! There are conversion functions to convert from one type to another.

literal string syntax: use double quotes

I’m so used to python’s indifference as to whether your string literals are enclosed by either single quotes or double quotes, and I prefer single quotes. But it’s only double quotes in Flux.

Add an additional column to the table

|> set(key:"hi", value:niceDate)

Why do this? I want that in my stat visualization, when you hover over a stat, you learn the day and day-of-the-week.

Copy (duplicate) a column

|> duplicate(column: "_value", as: "value")

Convert the _value column to a string

I guess this only works on the _value column.

|> toString()

For arbitrary conversion to a string:

string = string(v: variable)

But if you’re inside an fn function, you need something like this:

 |> map(fn: (r) => ({r with valueString: string(v: r._value)}))


dictionaries are possible. I define weekdayDict,

weekdayDict = [0:"Sunday", 1:"Monday",…

then I use it:

dayW = dict.get(dict:weekdayDict, key:dayWint, default:"Unknown day")

Dates, durations and date arithmetic

I guess it makes sense in a time series to devote a lot of attention to date arithmetic and such. In my script above I do some of the following things:

  • truncate the current day down to Midnight
  • add a full day to a date
  • pull out the year, month, date
  • convert a date object to a string

template variables

day is a template variable (I think that’s the term). It is set up as a hidden variable with the (hand-coded) values of 0,1,2,3,4,…32.

dropping columns

We all know about the drop(columns:[…, but how about if you have so many columns it’d be more economical to simply keep the ones you need? That is my situation, hence the line:

|> keep(columns:["_time","_value","item"])

Lumping data together into every hour, aka data aggregation or windowing

|> aggregateWindow(every: 1h, timeSrc: "_start", fn: mean)

Note that the keep of columns comes before the aggregation. Things didn’t turn out so well when I flipped the order.

Adding additional columns

|> set(key:"hi", value:niceDate)

So when you hover over a row with the mouse, this will produce a pleasant “Wednesday 2023-6-14”.

Appending additional series together

I have had success with union when the tables all had the same columns.

union(tables: [meanTbl,n95Tbl,maxTbl])

Outputting additional series

Use yield. Let’s say you’ve outputted one table and want to output a second table based on different criteria. The second table can be output using

|> yield(name: "second")

Regular Expressions (RegEx)

They should be supported. I haven’t had the opportunity to use them yet, however.

A short word on what I’m doing

I have a stat visualization where the stat blocks repeat vertically and there are 24 per row. Get it? Each stat contains a single number representing the value for that hour of the day. Then there is a repeat over template variable day.

Just above this panel is another thin panel which contains the numbers 0 through 23 so you know which hour of the day it is. That is another stat visualization which contains this code:


import "generate"
data = generate.from(
  count: 24,
  fn: (n) => n,
  start: 2021-01-01T00:00:00Z,
  stop: 2021-01-06T00:00:00Z,
)
  |> range(start: 2021-01-01T00:00:00Z, stop: 2021-01-24T05:00:00Z)
  |> set(key: "hi", value: "local vEdge hour")

Generate your own test data – generate.from()

The non-intuitive code snippet above shows one example of generating my own data into a table, which is used to display stat blocks containing 0, 1, 2, …, 23; when you mouse over one, the pop-up text says “local vEdge hour.” The time references are mostly throw-away dummy values, I guess.

Loops or iterations

Flux sucks when it comes to program loops. And say you wanted nested loops? Forget about it. It’s not happening. I have no idea how to do a simple loop. I know, seriously? Luckily I only needed one for the data generation, where I do show how to do it within a generate.from code block.

Refer to the time picker’s time range

This is surprisingly hard to find. I don’t know why.

  |> range(start: v.timeRangeStart, stop: v.timeRangeStop)

Setting Grafana time picker to a stop time in the future

This is actually a Grafana thing, but since I’m on a roll, here goes: set From to now-5d and To to now+10h.

Is Flux only good for Grafana?

No. They have made it its own stand-alone thing so it could be used in other contexts.

How to subtract two times

Intuitively, since there is a date.add function, you would expect a date.sub function so that you could subtract two times and get a duration as a result. But they messed up and omitted this obvious function. Also, just for the record, the subtraction operator is not overloaded, and therefore you cannot simply do c = b - a if a and b are of type time. You will get a

invalid: error @4:5-4:10: time is not Subtractable

Yet I wanted to compare the difference between two times against a known duration. How to do it? What you can do is convert the two times into, e.g., day of the year with the date.yearDay function. The days are integers. Then subtract them and compare the difference to another integer (representing the number of days in this example) using a simple integer comparison operator such as >.
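Since my examples above are Flux, here is the same workaround sketched in Python for anyone who wants to sanity-check the arithmetic; the dates and the three-day threshold are made-up values of my own:

```python
from datetime import datetime

# Two times we want to compare, as in the Flux discussion
a = datetime(2023, 6, 14)
b = datetime(2023, 6, 20)

# The equivalent of Flux's date.yearDay: day-of-year as a plain integer
day_a = a.timetuple().tm_yday
day_b = b.timetuple().tm_yday

# Subtract the integers and compare against a known number of days
threshold_days = 3
print(day_b - day_a > threshold_days)  # True: more than 3 days apart
```

The same caveat applies as in Flux: day-of-year subtraction only works cleanly when both times fall in the same year.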

Is flux dead?

Now, in October 2023, when I get online help on Flux, I see this information:

Flux is dead message

I have a lot invested in Flux at this point, so this could be a huge development for me. I’m seeking more details.

References and related

Flux 0.x documentation

Time Series Intro – somewhat helpful

Data types in Flux

Real world examples I developed using Grafana and Flux

Linux Python

Cloudflare DNS: using the python api


The examples provided on github are kind of wrong. I created an example script which actually works. If you simply copy their example and try the one where you add a DNS record using the python interface to the api, you will get this error:

CloudFlare.exceptions.CloudFlareAPIError: Requires permission “” to create zones for the selected account

Read on to see the corrected script.

The details

The program below was copied from somewhere, and it simply worked without modification:


import CloudFlare
import sys

def main():
    zone_name = sys.argv[1]

    cf = CloudFlare.CloudFlare()

    # query for the zone name and expect only one value back
    try:
        zones = cf.zones.get(params = {'name':zone_name,'per_page':1})
    except CloudFlare.exceptions.CloudFlareAPIError as e:
        exit('/zones.get %d %s - api call failed' % (e, e))
    except Exception as e:
        exit('/zones.get - %s - api call failed' % (e))

    if len(zones) == 0:
        exit('No zones found')

    # extract the zone_id which is needed to process that zone
    zone = zones[0]
    zone_id = zone['id']

    # request the DNS records from that zone
    try:
        dns_records = cf.zones.dns_records.get(zone_id)
    except CloudFlare.exceptions.CloudFlareAPIError as e:
        exit('/zones/dns_records.get %d %s - api call failed' % (e, e))

    # print the results - first the zone name
    print("zone_id=%s zone_name=%s" % (zone_id, zone_name))

    # then all the DNS records for that zone
    for dns_record in dns_records:
        r_name = dns_record['name']
        r_type = dns_record['type']
        r_value = dns_record['content']
        r_id = dns_record['id']
        print('\t', r_id, r_name, r_type, r_value)


if __name__ == '__main__':
    main()

The next script adds a DNS record. This is the one which I needed to modify.


# kind of from
# except that most of their python examples are wrong. So this is a working version...
import sys
import CloudFlare

def main():
    zone_name = sys.argv[1]
    print('input zone name',zone_name)
    cf = CloudFlare.CloudFlare()
# zone_info is a list: [{'id': '20bd55fbc94ff155c468739', 'name': '', 'status': 'pending',
    zone_info = cf.zones.get(params={'name': zone_name})
    zone_id = zone_info[0]['id']

    dns_records = [
        {'name':'foo', 'type':'A', 'content':''},
    ]

    for dns_record in dns_records:
        r = cf.zones.dns_records.post(zone_id, data=dns_record)

if __name__ == '__main__':
    main()

The zone_id is where the original program’s wheels fell off. Cloudflare Support does not support this python api; at least that’s what they told me. So I was on my own. What gave me confidence that it really should work is that when you install the python package, it also installs cli4. And cli4 works pretty well! The examples work. cli4 is a command-line program for Linux. But when you examine it you realize it’s (I think) using the python api behind the scenes. And in the original bad code there was a POST just to get the zone_id – that didn’t seem right to me.
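For what it’s worth, the zone lookup can also be reproduced directly against Cloudflare’s public REST API, which makes the GET-vs-POST point concrete. This is my own sketch (the helper names are mine, and the token is a placeholder), using only the standard library:

```python
import json
import urllib.parse
import urllib.request

API_BASE = 'https://api.cloudflare.com/client/v4'

def build_zone_request(zone_name, api_token):
    # Looking up a zone by name is a query, so GET (not POST) is the right verb
    url = API_BASE + '/zones?' + urllib.parse.urlencode({'name': zone_name})
    return urllib.request.Request(
        url, headers={'Authorization': 'Bearer ' + api_token})

def get_zone_id(zone_name, api_token):
    req = build_zone_request(zone_name, api_token)
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)['result']
    if not result:
        raise SystemExit('No zones found')
    return result[0]['id']   # the same zone_id cf.zones.get() hands back
```

Splitting the request construction from the network call also makes the lookup easy to test without touching the network.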

References and related

The Cloudflare api

The (wrong) api examples on github

My hearty endorsement of Using Cloudflare’s free tier to protect your personal web site.

Linux Python Raspberry Pi

vlc command-line tips


I’m looking to test my old Raspberry Pi model 3 to see if it can play mp4 videos I recorded on my Samsung Galaxy A51 smartphone. I had assumed it would get overwhelmed and give up, but I haven’t tried in many years, so… The first couple videos did play, sort of. I was using vlc. Now if you’ve seen any of my posts you know I’ve written a zillion posts on running a dynamic slideshow based on RPi. Though the most important of these posts was written years ago, it honestly still runs and runs well to this day, amazingly enough. Usually technology changes or hardware breaks. But that didn’t happen. Every day I enjoy a brand new slideshow in my kitchen.

In most of my posts I use the old stalwart program fbi. In fact I don’t even have XWindows installed – it’s not a requirement if you know what you’re doing. But as far as I can see, good ol’ fbi doesn’t do streaming media such as videos in mp4 format. As far as I know, vlc is more modern and, most importantly, better supported. So after a FAIL trying with mplayer (still haven’t diagnosed that one), I switched to trials with vlc.

I haven’t gotten very far, and that’s why I wanted to share my learnings. There’s just so much you can do with vlc that even what you may think are the most common things anyone would want are very hard to find working examples for. So that’s where I plan to contribute to the community. As I figure out an “easy” thing, I will add it here. And if I’m the only one who ever refers to this post, so be it. I love my own My favorite python tips post, for instance. It has everything I use on a regular basis. So I’m thinking this will be similar.
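To give one concrete example of the sort of “easy” thing I mean, here is a sketch of how I would drive vlc from Python on the RPi. cvlc (vlc’s no-interface alias) and the --play-and-exit, --fullscreen and --loop options are standard vlc; the helper functions are my own invention:

```python
import subprocess

def build_vlc_cmd(video, fullscreen=True, loop=False):
    """Assemble a vlc command line; cvlc is vlc without its GUI chrome."""
    cmd = ['cvlc', '--play-and-exit']   # exit when the file finishes
    if fullscreen:
        cmd.append('--fullscreen')
    if loop:
        cmd.append('--loop')            # repeat the playlist indefinitely
    cmd.append(video)
    return cmd

def play(video):
    # Blocks until vlc exits; fine for a play-one-file-at-a-time slideshow
    subprocess.run(build_vlc_cmd(video), check=True)
```

Building the command as a list keeps filenames with spaces safe, and the blocking run fits the one-item-at-a-time rhythm of a slideshow script.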

References and related

My RPi slideshow blog post

My favorite python tips – everything I need!

Consumer Interest Consumer Tech

Consumer Tech: How I fixed my Samsung Galaxy A51 Black Screen of Death


After its customary overnight charging my A51 simply showed me a black screen in the morning. Yet I felt something was there because when I plugged it into the computer’s USB port the device was recognized. I was very concerned. But I did manage to completely fix it!

The symptoms

So various sites address this problem and give somewhat different advice. I sort of needed to combine them. So let’s review.

  • Black screen
  • Holding power button down for any length of time does nothing
  • plugging in to USB port of computer shows A51 device
What kind of works, but not really

Yes it’s true that holding the power button and volume down button simultaneously for a few seconds (about three or four) will bring up a menu. The choices presented are

  • Restart
  • Power off
  • Emergency Call

There’s no point to try Emergency Call. But when you try Restart you are asked to Restart a second time. Then the screen goes black again and you are back to where you started. If you choose Power off the screen goes black and you are back to where you started.

What actually works

Continue to hold the power button and volume down button simultaneously – ignore the screen mentioned above. Then after another 15 seconds or so it displays a lightning bolt inside a circle. And if you keep holding, that will disappear and you have a black screen. Keep holding and the lightning bolt appears again, etc. So let go. I don’t think it matters at which stage.

Now hopefully you have really powered off the phone. So then hold the power button for a few seconds, like you do to start the phone after it’s been powered off. It should start normally now.

As the other posts say, when you see Samsung on your screen you know you are golden.


I have shared what worked for me to recover my Samsung Galaxy A51 from its Black Screen of Death.


Consumer Tech: tips for giving


I give to lots of non-profits and realized there are some common elements and that perhaps others could learn from what I have observed. I also give to some political organizations.

Define terms

By non-profit I mean a 501(c)(3) as per IRS regulations. These can have a local focus or a national focus. They will be chartered in a particular state which will have its own rules for incorporation. For my purposes that doesn’t matter too much. I believe they all will have a board of directors. They all have to abide by certain rules such as spending most of what they take in (I think).

Common to all

Engage, engage, engage

They want to send you frequent correspondence, sometimes under various pretenses, to keep you engaged. You will receive correspondence under the following pretenses: the “annual renewal”, the “quarterly report”, the xmas update, the thank-you-for-contributing letter, the “for tax purposes” letter, the emergency appeal or rapid reaction to something in the news, the special donor multiplying your gift by 3x or even 10x, the estate giving solicitation, and, worst of all, the fake survey. I’m talking about you, ACLU. I have never once seen a published result of these fake surveys, which have zero scientific value and consist of one-sided questions. I used to fill them out the opposite way they expected out of spite, but to no avail, as they kept coming – with self-addressed stamped envelopes, no less. All these correspondences have in common that they will always solicit you to give even more money, as though what you’ve already given isn’t good enough.

But by all means read the newsletters on occasion to make sure they are doing the things you expect of them based on their mission. And ignore the extra pleas for money unless you are truly sympathetic. Emergencies do occur, after all.

Snail mail? No problem

You would naively think that by creating a known, non-trivial cost for these non-profits – namely, forcing them to contact you by postal mail – they would send you fewer requests for money. Not so! I only contribute online when it seems to be the only practical way to do so (I’m thinking of you, Wikimedia), yet still I get, no exaggeration, about a letter every two weeks from my organizations.

Phone etiquette

First off, you don’t need to give out your phone number even though they ask for it. It’s asked for in a purposefully ambiguous way, near the billing, as though it is needed to process your credit card. It isn’t. I happily omit my phone number. I figure if they really need it they can just write me a letter asking for it – and that’s never happened.

But if you’ve made the mistake of having given out your number, perhaps in the past, you may get called periodically. They do have the right to call you. But you can ask them to put you on a do-not-call list. What I do, once I learn which organization is calling, is – sometime during their long opening sentence, which may come after you’ve confirmed your identity – hold the phone away from my ear a little, calmly say I’m not going to give any more money, and hang up.

Universities have a special way of asking for money. I knew classmates who did this for their campus job. They call alumni, especially recent alumni who are more naive, and engage them with a scripted conversation that starts innocently enough: “I’m from your college and I wanted to update you on recent happenings on campus.” Pretty soon they’re being asked to donate at the $250 level; then, after an uncomfortable No, they’re relieved to learn they can still contribute at the $125 level, and so on down until the hapless alumnus/alumna is guilted into contributing something, anything at all.

Local giving

Fortunately, local giving where they haven’t signed on to use professional fund-raising organizations is more pleasant because you are normally not solicited very often, often just once a year.

Track it

I keep a spreadsheet with my gifts and a summed column at the bottom. I create a new worksheet named with the year for each new year. I have a separate section at the bottom with my non-deductible contributions.
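For anyone who prefers a script to a spreadsheet, the same bookkeeping can be sketched in a few lines of Python; the CSV columns here are my own guess at a minimal layout, with made-up amounts:

```python
import csv
import io

def sum_gifts(csv_text):
    """Total the deductible and non-deductible gifts separately."""
    deductible = 0.0
    other = 0.0
    for row in csv.DictReader(io.StringIO(csv_text)):
        amount = float(row['amount'])
        if row['deductible'].strip().lower() == 'yes':
            deductible += amount
        else:
            other += amount
    return deductible, other

gifts = """date,organization,amount,deductible
2023-01-15,Habitat for Humanity,100,yes
2023-02-01,Sierra Club,50,no
2023-02-20,Red Cross,75,yes
"""
print(sum_gifts(gifts))  # (175.0, 50.0)
```

One worksheet per year maps naturally onto one CSV file per year, and the deductible flag keeps the non-deductible section separate, as in my spreadsheet.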

I try to give all my gifts in the first one or two months of the year.

Come tax time, I print out the non-profit giving and include it with my paperwork for my accountant, but more on that below.

Deductions – forget about it

I pay a lot of taxes, and still these days (from about 2018 onwards) I don’t get any tax credit for my contributions. Why? Because the standard deduction is so high that it applies instead. This has been the case ever since the tax changes of 2017. So if that’s true for me, I imagine it’s true for most people. But each year I try…

Non-deductible organizations

Some organizations you would think are non-profits, but they are actually structured differently, and so they are not. I’m thinking of you, The Sierra Club. The Sierra Club uses much of your donation to lobby politicians to its point of view on environmental issues, and therefore by the rules it cannot be a non-profit in the sense of a 501(c)(3).


I’m not sure what privacy rules apply around your giving. In my personal experience, there are few constraints. This means expect your name to be sold as part of a giant list of donors. You are data and worth $$ to the organization selling your name to, usually, like-minded organizations who will hope to extract more money out of you. To be concrete, let’s say you donated one time to a senator in a tight senate race. Before six months is up, every senator in a competitive race from that party will be soliciting you for funds. And not just one time but often on that bi-weekly basis! Once again, using snail mail seems to be no obstacle. Maybe it is even worse, because with email you can in theory unsubscribe. I haven’t tried it but perhaps I should. I just hate to have my inbox flooded with solicitations. I’m really afraid to contribute to a congressional race for this reason. Or a governor’s race.

But this privacy issue is not just restricted to PACs sharing your data. Let’s say a relative had congenital heart failure so you decide to contribute to a heart association. Eventually you will be solicited by other major organizations with focus on other organs or health in general: lungs, kidneys, cancer, even the same organ but from a different organization, etc. Your data has been sold…

Amazon Smile – Giving while shopping

When I first learned of Amazon Smile from a friend at work I thought there was no way this could be true. Margins are said to be razor thin in retail, yet here was Amazon giving away one half percent of your order to the charity of your choice?? Yet it was true. And Amazon gave away hundreds of millions of dollars. Even my local church got into the program. My original recipient was Habitat for Humanity, which raised well over ten thousand dollars from Amazon Smile.

But Amazon killed this too-good-to-be-true program in March 2023 for reasons unknown. I’m not sure if other merchants have something which can replace it and will update this if I ever find out.

The efficiency of your charity

You want to know whether a large portion of your gift to a particular charity is going towards the cause that is its mission or to administrative costs such as fund-raising itself. I’ve noticed good charities actually show you a pie chart which breaks down the amount taken by administrative overhead – usually 5 – 10 percent. Another way to learn something about efficiency is to use a third-party web site such as Charity Navigator. But don’t get too worked up about their ratings; I have read criticisms of their methods. Still, it’s better than nothing. 5 – 10 percent administrative cost is fine. Hey, I used to know people who worked in such administrative positions and they are good people who deserve decent pay. Another drawback of Charity Navigator is that it won’t have ratings for your local charities.

For PACs as far as I know, there is no easy way to get comparable information. You just have to hope your money is well spent. I guess they have quarterly disclosure forms they fill out, but I don’t know how to hunt that down.


The national organizations know everything you have ever given and will suggest you give at slightly higher amounts than you have in the past. 25 years ago the American Cancer Society asked if I would solicit my neighbors for contributions, which I did. I pooled all the money and gave them something like $300. I swear for the next 15 years they solicited me suggesting I contribute at that level even though I never gave more than $40 in the following 15 years. So annoying…

Death – an opportunity – for them

Many charities will encourage you to remember them in your estate planning. I suppose this may be a reasonable option if you feel really identified with their cause. I suppose The Nature Conservancy evokes these kinds of emotions, for example, because who doesn’t love nature? So think about your legacy, what you’re leaving behind.

National, with local chapters

Some national charities have local chapters. I’m thinking of you, Red Cross. I’m not really sure how this works out. But I know I have received solicitations from both the local chapter as well as the national chapter. So just be mindful of this. I suppose when you give to the local chapter it has more discretion on spending your donation locally and I guess giving a fraction of it to the national chapter.

Charitable Annuities

I don’t know all the details but if you have for instance appreciated equities instead of paying capital gains taxes you could gift them to a charity and receive a deduction for the gift. They in turn, if they’re a big outfit, usually a university, can set up a charitable annuity which provides you further tax benefits. I will flesh this out if I ever come across it again.


As a reliable contributor I am annoyed by the methods employed to shake even more out of my pockets. But I guess those methods work in bulk and so they continue to be used. As far as I can tell all national non-profits use professional fund-raising methods which closely follow the same patterns.

Although the tenor of this post is terribly cynical, obviously, I think non-profits are doing invaluable work and filling some of the gaping holes left by government. If I didn’t think so I wouldn’t be contributing. Most non-profits do good work and are run efficiently, but the occasional scam happens.

References and related

I mentioned, but do not endorse too heartily, Charity Navigator.