Categories
CentOS Debian Linux

What happened to insert mode on the latest version of vi?

Intro

As a creature of habit, I fall for an editor and can never imagine using anything else. In the VAX days there was EDT, which if memory serves was replaced by the even better TPU. On Ultrix we ran a pretty nice editor from Rand Corporation simply called e. Then there was the love affair with emacs, and finally, for the last 30 years, vi.

Well with my latest server, a Debian 12 machine, I was having trouble with insert mode, specifically, inserting text from my Windows clipboard. Never had problems before….

For some reason, if you want to insert text from the clipboard, you now use <CTRL>-<SHIFT>-v in command mode. Well, at least on Windows 11 running WSL 2 that seems to work. I now realize it doesn’t work from Windows 10 with WSL. I had better figure this out soon…

Windows 10 running WSL

The terminal type (check the TERM environment variable) is set to xterm-256color. I tried pasting from any and all registers, which is what the standard Internet advice tells you to do. None of it worked for me. I finally realized on my own – and this harkens back to my old days with the beloved VAX 780 – that if I set the terminal type to vt100 all was good! Seriously. Back in the day we had physical VT100 terminals. Well, before that I think there was a VT52? Then maybe a VT102. The VT220 was a big upgrade. Anyway, initially I added the following line to my .bashrc file:

export TERM=vt100

and now I can insert clipboard text the way I always have (mouse right-click) in vi insert mode. This kludge was how we fixed a lot of terminal display issues in the old days. But now I see the display from top is messed up! Probably other curses-based apps as well. So, two steps forward, one step back. What I’ve done now is remove that line from .bashrc and put the following lines in my .bash_aliases file:

# DrJ kludge to get vi to work and keep top working
alias top='export TERM=xterm-256color;\top'
alias vi='export TERM=vt100;\vi'

That \top harkens back to an old shell convention: a command preceded by \ is run with any alias defined for that name ignored.
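
A related thought: if you’d rather not change TERM for the whole shell session, the variable can be scoped to the single command inside the alias itself. A minimal sketch of that variation:

# set TERM only for the one invocation; the shell's own TERM is left untouched
alias vi='TERM=vt100 \vi'
alias top='TERM=xterm-256color \top'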

Conclusion

I have offered one possible solution to the can’t-insert-clipboard-text problem in vi: set the TERM environment variable to the old-fashioned vt100. Now I can once again right-click while in insert mode to paste in clipboard text.

This was a very vexing issue for a creature of habit such as me!

References and related

This whole issue came up only when I switched from CentOS 8 to Debian 12 as my back-end server. Believe me, Debian 12 is so superior in so many ways this little setback would never make a material impact in that decision. Here’s the write-up of my upgrade.

Categories
CentOS Debian Linux Raspberry Pi

drjohnstechtalk now runs on a modern OS

Intro

I’m thrilled to announce that the long-running blog drjohnstechtalk.com has now been migrated to a modern back-end operating system. drjohnstechtalk.com is, as far as I know, the only quality-written technical resource on the Internet which is not supported by ads. Instead it runs on a pay-it-forward approach, embracing the spirit of the old Internet before it was ruined by big money.

drjohnstechtalk.com has been providing solutions to obscure tech questions since 2011.

The details

I like to run my own server, which I can use for other purposes as well. I think that approach used to be more common; now it’s harder to find others doing it. Anyway, my old hosting environment was a CentOS 8 server. I had hoped it would last me up to 10 years, 10 years being roughly the duration of long-term support for Redhat linux. It’s a real pain to migrate a WordPress blog with lots of history where it is important to preserve the articles and the permalinks. This article documents the nightmare I put myself through to get that CentOS 8 server up and running. Before that there was a CentOS 6 server. Then in 2022 – only about two years later – I learned that CentOS was dead! IBM had killed it. I’m over-simplifying here somewhat, but not by much.

So my blog sort of limped along on this unsupported system, getting riskier by the day to run as I was missing out on security patches. Then my company accidentally included one of my blogs in a security scan and I saw I had some vulnerabilities. So I upgraded WordPress versions and plugin versions. With up-to-date software, the stage was set to migrate to a newer OS. Further motivation was provided by the fact that after the WP upgrade, the pages loaded more slowly. And sometimes the site just collapsed and crashed.

I have come to love Debian linux due to my positive experience running it on Raspberry Pis and a few other places. It tends to run more recent versions of open source software, for instance. So I chose a Debian linux server. I forget where I learned this (perhaps I asked someone at work which web server to use), but the advice was to use nginx, not apache! This was very new to me, as I had never run nginx, not that I was in love with apache.

So, anyway, here I am writing this on my shiny new Debian 12 bookworm server which is running an nginx web server! And wow my site loads so much faster now. It’s really striking…

Running WordPress in a subdirectory with nginx

There always has to be a hard part, right? This was really, really hard. I run WP in the subdirectory blog as you can see from any of my URLs. I must have scoured a dozen sites on how to do it, none of which completely worked for me. So I had to do at least some of the heavy lifting and work out a working config on my own.

Here it is:

# mostly taken from https://www.nginx.com/resources/wiki/start/topics/recipes/wordpress/
# but with some important mods
upstream php {
    server unix:/var/run/php/php8.2-fpm.sock;
}

server {
  listen 443 ssl;

    include snippets/self-signed.conf;


    server_name drjohnstechtalk.com www.drjohnstechtalk.com;

    root /web/drjohns;
    index index.php index.html;


    access_log /var/log/nginx/drjohns.access.log;
    error_log /var/log/nginx/drjohns.error.log;

    client_max_body_size 100M;

# the following section prevents wp-admin from infinitely redirecting to itself!
    location /blog/wp-admin {
            root /web/drjohns;
            try_files $uri $uri/ /blog/wp-admin/index.php?$args;
    }

    location /blog {
            root /web/drjohns/blog;
            try_files $uri $uri/ /blog/index.php?$args;
    }
    location ~ \.php$ {
#NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
         include fastcgi_params;
         fastcgi_intercept_errors on;
         fastcgi_pass php;
         fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
    location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg) {
            expires max;
            log_not_found off;
    }
}

I had to add the svg file type to the static-file location block, add the location directive that matches /blog/wp-admin, define the upstream labeled php and refer to that label in fastcgi_pass, and figure out the correct version of my php-fpm socket. I tossed out some location directives which weren’t too important to me.

I disabled the wp-hide-login plugin while I grappled with why I was getting first a 404 not found for /blog/wp-admin/, then later the too many redirects error. But I still had the issue with it disabled. Once I resolved the problem by adding the /blog/wp-admin location directive – I seem to be the only one on the Internet offering this solution and no other solution worked for me! – I re-enabled the hide-login plugin. The other plugins are working, I would say.

Firewall?

I gather the current approach to host-based firewall on Debian 12 is to run ufw. A really good article on setting it up is here: https://www.cyberciti.biz/faq/set-up-a-firewall-with-ufw-on-debian-12-linux/

I’m on the fence about it, fearing it might slow my speedy server. But it looks pretty good. So for now I am relying on AWS Network Security Group rules. Did you know you can ask them to increase your max rule quota from 20 to 40? Yes, you can. I did and got approved overnight. I have added the Cloudflare ranges.
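
If I do take the ufw plunge, my reading of that article suggests the setup would go something along these lines. This is just a sketch; the allowed ports are whatever your own server actually needs:

sudo apt install ufw
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 22/tcp    # ssh - don't lock yourself out!
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable
sudo ufw status verbose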

Cloudflare

I continue to use Cloudflare as reverse proxy, certificate issuer, DNS provider and light security screening. The change to the new server did not alter that. But I needed a new config file to properly report the origin IP address in my access logs. The following file does the trick for me. It is up to date as of February 2024, can be placed in your /etc/nginx/conf.d directory and called, e.g., cloudflare.conf.

# up to date as of 2/2024
set_real_ip_from 103.21.244.0/22;
set_real_ip_from 103.22.200.0/22;
set_real_ip_from 103.31.4.0/22;
set_real_ip_from 104.16.0.0/13;
set_real_ip_from 104.24.0.0/14;
set_real_ip_from 108.162.192.0/18;
set_real_ip_from 131.0.72.0/22;
set_real_ip_from 141.101.64.0/18;
set_real_ip_from 162.158.0.0/15;
set_real_ip_from 172.64.0.0/13;
set_real_ip_from 173.245.48.0/20;
set_real_ip_from 188.114.96.0/20;
set_real_ip_from 190.93.240.0/20;
set_real_ip_from 197.234.240.0/22;
set_real_ip_from 198.41.128.0/17;
set_real_ip_from 2400:cb00::/32;
set_real_ip_from 2606:4700::/32;
set_real_ip_from 2803:f800::/32;
set_real_ip_from 2405:b500::/32;
set_real_ip_from 2405:8100::/32;
set_real_ip_from 2c0f:f248::/32;
set_real_ip_from 2a06:98c0::/29;

real_ip_header CF-Connecting-IP;

The idea is that if the source IP of the HTTP connection to nginx is in the Cloudflare range of IPs, then the request must have been proxied through Cloudflare and the original client IP is in the HTTP header CF-Connecting-IP, which nginx can then report on. If not, nginx just uses the normal IP from the TCP connection.

Swap space

On CentOS I had to provide some swap space because otherwise apache + mariaDB + WordPress would easily send the cpu soaring. So far I have not had to do that with my new Debian 12 server! That is great… I have a t2.small instance with 25 GB of gp2 storage (100 iops). The server is basically running with a 0.00 load average now. I don’t get a lot of traffic so I hope that infrastructure will suffice.
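
For the record, if the new setup ever does start to thrash, this is the sort of swap file I used to create. It’s a sketch of the usual recipe, not something I have needed on Debian 12 so far:

# create and enable a 1 GB swap file
sudo fallocate -l 1G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# make it survive a reboot
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab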

Set the timezone

My Debian system started out in the UTC timezone. This command confirms that:

sudo timedatectl

This command brings up a menu and I can change the timezone to US Eastern:

sudo dpkg-reconfigure tzdata
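
If you already know the zone name, timedatectl can also set it non-interactively; I believe this one-liner does the same thing:

sudo timedatectl set-timezone America/New_York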

Automate patching

It hasn’t run yet, but I’m hoping this root crontab entry will automate the system updates:

59 2 * * 0 (date && apt-get update && apt-get upgrade -y) >> /home/admin/hosting/update.log 2>&1

Debian 12 lifecycle

There should be three years of full support plus two more years of long term support for a stable Debian release, if I’ve understood it correctly. So I believe I may hope to get five years out of my Bookworm version, give or take. Debian — Debian Releases

Fixing the vi editor

I’ve never really had a problem with vi until this server. I show how I fixed it in this blog post.

Status after a few days – not all positive news

Well, after a few days I feel the server response has noticeably slowed. I could not run top because I had messed up the terminal with my fix to vi! So in a panic I restarted mariadb, which seemed to help performance a lot. I will have to figure out how to monitor for this problem and how best to address it; I’m sure it will return. Here is my monitor.sh script:

#!/bin/bash
# restart mariaDB if home page response becomes greater than one second
curl -m1 -o /dev/null -ksH 'Host:drjohnstechtalk.com' https://localhost/blog/
# if curl didn't have enough time (one sec), its exit status is 28
[ $? -eq 28 ] && (systemctl stop mariadb; sleep 3; systemctl start mariadb; echo mariadb restart at $(date))

I invoke it from root’s crontab every three minutes:

# check that our load time is within reason or else restart mariadb -DrJ 2/24
*/3 * * * * sleep 25;cd /home/admin/hosting; ./monitor.sh >> monitor.log 2>&1

I do love my kludges. I will be on the lookout for a better long-term solution.
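
One idea I’m toying with for better monitoring is to log the home page load time using curl’s -w option, so I can watch the slowdown develop rather than just react to it. A rough sketch:

#!/bin/bash
# log the total load time of the home page so trends can be spotted
t=$(curl -m5 -o /dev/null -ksH 'Host:drjohnstechtalk.com' -w '%{time_total}' https://localhost/blog/)
echo "$(date) load time: $t seconds"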

Conclusion

The technical blogging web site drjohnstechtalk.com now runs on new infrastructure: Debian 12 running nginx. It is much faster than before. The migration was moderately painful! I have shared the technical details on how I managed to do it. I hope that, unlike my previous platform of CentOS 8, this platform lasts me for the next 10 years!

References and related

My second article!

nginx’s own advice about how to configure it to run WordPress

Trying to upgrade WordPress brings a thicket of problems

One of many RPi projects of mine: Raspberry Pi light sensor project

ufw firewall for Debian 12

Debian — Debian Releases

Cloudflare, an added layer of security for your web site

IP Ranges | Cloudflare

What happened to insert mode on the latest version of vi?

Categories
Debian Linux Raspberry Pi

My favorite bash scripting tips

Intro

The linux bash shell is great and very flexible. I love to use it and have even installed WSL 2 on my PCs so I can use it as much as possible. When it comes to scripting, though, it’s not exactly my favorite. There is so much history it has absorbed that there are multiple ways to do everything: the really old way, the new way, the alternate way, etc. And your version of bash can also determine what features you can use. Nevertheless, I guess if you stick to the basics it makes sense to use bash for simple scripting tasks.

So just like I’ve compiled all the python tips I need for writing my simple python scripts in one convenient, searchable page, I will now do the same for bash. No one but me uses it, but that’s fine.

Iterate (loop) over a range of numbers

END=255 # for instance, to loop over an octet of an IP address
for i in $(seq 1 $END); do
  echo $i
done
# But if it's OK to just hard-wire start and end, then it's simpler to use:
for i in {1..255}; do echo $i; done

Infinite loop
while /bin/true; do...done

You can always exit to stop it.
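
Fleshed out a little, a typical use might look like this:

while /bin/true; do
  date        # do whatever work you need here
  sleep 60    # wait a minute, then go around again; ctrl-C stops it
done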

Sort IPs in a sensible order

$ sort -n -t . -k1,1 -k2,2 -k 3,3 -k4,4 tmp

What directory is this script in?

DIR=$(cd $(dirname $0);pwd); echo $DIR

Guarantee this script is interpreted (run) by bash and not good ‘ole shell (sh)!
if [ ! "$BASH_VERSION" ] ; then
  exec /bin/bash "$0" "$@"
  exit
fi
Count total occurrences of the word print in a bunch of files which may or may not be compressed, storing the output in a file

prints=0
zgrep -c print tst*|cut -d: -f2|while read pline; do prints=$((prints + pline));echo $prints>prints; done

Note that much of the awkwardness of the above line is to get around issues I had with variable scope.
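
One way I know to sidestep that subshell scope issue is to feed the while loop via process substitution rather than a pipe; a sketch:

prints=0
while read -r pline; do
  prints=$((prints + pline))
done < <(zgrep -c print tst*|cut -d: -f2)
echo $prints   # the total survives because no subshell was created for the loop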

Permitted characters in variable names

Stick to alphanumeric characters and the underscore _, just as in python, and do not begin with a number! Hyphens and dots are not allowed.

Execute a command

I used to use backticks ` in the old days. The $() parentheses style is more visually appealing:

print1=$(cat prints)

Variable type

No, variables are not typed. Everything is treated as a string.
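
That said, arithmetic expansion will happily treat a numeric string as a number:

x=5          # stored as the string "5"
y=$((x + 2)) # arithmetic expansion; y is now 7
echo $y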

Function definition

Put function definitions before they are invoked in the script. Invocation is by plain name. The function syntax is as in the example:

sendsummary() {
# function execution statements go here, then close it out
} # optionally with a comment like end function sendsummary
sendsummary # invoke our sendsummary function
Indentation

Unlike python, line indentation does not matter. I recommend to indent blocks of code two spaces, for example, for readability.

Booleans and order of execution
[[ "$DEBUG" -eq "1" ]] && echo subject, $subject, intro, "$intro"

The second statement only gets executed if the first one evaluated as true. Now a more complex example.

[[ $day == $DAY ]] || [[ -n "$anomalies" ]] && { statements…; }

The second expression gets evaluated if the first one is false. If either the first or second expression is true, then the last expression (a series of statements grouped by the enclosing braces into a code block) gets executed. The -n is a test to see if the length of a string is non-zero. See man test.

Or just use old-fashioned if-then statements?

The huge problem with the approach above is that it can be hard to keep the multiple statements from getting executed in their own forked subshell. If they’re trying to set a variable, or even do an exit, it may not produce the desired result! I may need further research to refine my approach, but the old if-then clause works for me: no subshell needs to be created.
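
To make the pitfall concrete, here is a small illustration: parentheses fork a subshell, so the assignment is lost, while braces (or a plain if-then) stay in the current shell:

count=0
[[ -f /etc/hosts ]] && ( count=1 )   # the ( ) group runs in a subshell
echo $count                          # still 0!
[[ -f /etc/hosts ]] && { count=1; }  # the { } group runs in the current shell
echo $count                          # now 1
if [[ -f /etc/hosts ]]; then count=2; fi
echo $count                          # 2 - no surprises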

Conditionals

Note that clever use of && and || can in many cases obviate the need for a classic if…then structure, but see the warning above. But you can use if-thens. An if block is terminated by a fi. There is an else statement as well as an elif (else if) statement.

grep conditionals
ping -c1 8.8.8.8|grep -iq '1 received'
[ $? -eq 0 ] && echo this host is alive

So the $? variable after grep is run contains 0 if there was a match and 1 if there was no match. -q argument puts grep in “quiet” mode (no output).

More sophisticated example testing exit status and executing multiple commands

#!/bin/bash
# restart mariaDB if home page response becomes greater than one second
curl -m1 -ksH 'Host:drjohnstechtalk.com' https://localhost/blog/ > /dev/null
# if curl didn't have enough time (one sec), its exit status is 28
[ $? -eq 28 ] && (systemctl stop mariadb; sleep 3; systemctl start mariadb; echo mariadb restart at $(date))

Note that I had to group the commands after the conditional test with surrounding parentheses (). That creates a code block (run in a subshell); without them the semicolon ; would have indicated the end of the block, since a semicolon separates commands. Further note that I nested parentheses and that seems to work as you would hope. Also note that STDOUT has been redirected by the greater-than sign > to /dev/null in order to silently discard all STDOUT output. /dev/null is linux-specific. The windows equivalent, apparently, is nul. Use curl -so nul to suppress output on a Windows system.

Reading in parameters from a config file

Lots of techniques demoed in this example!

# read in params from file QC.conf
IFS=$'\n'
echo Parameters from file
for line in $(<QC.conf); do
  [[ "$line" =~ ^# ]] || {
  pval=$(echo "$line"|sed 's/ //g')
  lhs=$(echo "$pval"|cut -d= -f1)
  rhs=$(echo "$pval"|cut -d= -f2)
  declare -g $lhs="$rhs"
  echo $lhs is ${!lhs}
  }
done

Note the use of declare with the -g (global) switch to assign a value to a variable whose name is itself held in a variable! Note the use of < to avoid creation of a subshell. Note the use of the =~ operator to do a regex match, here used to skip comment lines. Note the way to get the value of a variable whose name itself is represented by a variable var is ${!var}.

This script parses a config file with values like a = a_val, where spaces may or may not be present.
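
For illustration, a hypothetical QC.conf could look like the lines below; after the loop runs you end up with shell variables $server and $timeout:

# QC.conf (hypothetical example)
server = www.example.com
timeout = 30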

One square bracket or two?

I have no idea and I use whatever I get to work. All my samples work and I don’t have time to test all variations.
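
That said, the short version as I understand it: [ is the old POSIX test command, while [[ is a bash keyword with extras such as pattern and regex matching and no word splitting of unquoted variables. A quick illustration:

var="two words"
[[ $var == two* ]] && echo "pattern match works and the unquoted variable is fine inside [[ ]]"
[ "$var" = "two words" ] && echo "POSIX test: quote the variable and use a single ="
[[ $var =~ ^two ]] && echo "regex matching only works inside [[ ]]"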

Variable scope

I really struggled with this so I may come back to this topic!

Variable interpolation

$variable will suffice for simple, i.e., one-word content. But if the variable contains anything a bit complex such as words separated by spaces, or containing unusual characters, better go with double quotes around it, “$variable”. And sometimes syntactically throw in curly braces to separate it from other elements, “${variable}”
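
A quick illustration of both points:

file="my report.txt"
touch "$file"             # the quotes keep the space from splitting the name in two
ls -l "$file"
suffix="txt"
echo "backup.${suffix}2"  # prints backup.txt2; without braces, $suffix2 would be a different (empty) variable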

Eval
eval="ls -l"
$eval # executes ls -l
Shell expansion
mv Pictures{,.old} # renames directory Pictures to Pictures.old
Poor man’s launch at boot time

Use crontab’s @reboot feature!

@reboot sleep 25; ./recordswitch.sh > recordswitch.log 2>&1

The above expression also shows how to redirect standard error to standard out and have both go into a file.

Use extended regular expressions, retrieving a positional field using awk, and how to subtract (or add) two numbers
t1=`echo -n $line|awk '{print $1}'` 
t2=`echo -n $line|awk '{print $4}'` 
# test for integer inputs 
[[ "$t1" =~ ^[0-9]+$ ]] && [[ "$t2" =~ ^[0-9]+$ ]] && downtime=$(($t1-$t2))

Oops, I used the backticks there! I never claim that my way is the best way, just the way that I know to work! I know of a zillion options to add or subtract numbers…

Get last field using awk
echo hi.there.111|awk -F\. '{print $NF}' # returns 111
Print all but the first field using awk

awk '{$1=""; print substr($0,2)}'

Why do assignments have no extra spaces?

It simply doesn’t work if you try to put in spacing around the assignment operator =.
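
A quick demonstration of what goes wrong:

x=5     # works
x = 5   # fails: bash looks for a command named x with the arguments = and 5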

Divert stdout and stderr to a file from within the script
log=/tmp/my-log.log
exec 1>$log 
exec 2>&1
Lists, arrays and dictionary variables

I don’t think bash is for you if you need these types of variables.

Formatted date

date +%F

produces yyyy-mm-dd, e.g., 2024-01-25

date +%Y%m%d -> 20240417

Poor man’s source code versioning

The old EDT/TPU editor on VAX used to do this automatically. Now I want to save a version of whatever little script I’m currently working on in the ~/tmpFRI (if it’s Friday) directory to sort of spread out my work by day of the week. I call this script cpj so it’s easy to type:

#!/bin/bash
# save file using sequential versioning to tmp area named after this day - DrJ
DIR='~'/tmp$(date +%a|tr '[a-z]' '[A-Z]') # ~/tmp + day of the week, e.g., FRI
DIRREAL=$(eval "echo $DIR") # the real directory we need
mkdir -p $DIRREAL
for file in $*; do
  res=$(ls $DIRREAL|egrep "$file"'\.[0-9]{1,}$') # look for saved version numbers of this filename
  if test -n "$res"; then # we have seen this file...
    suffix=$(echo $res|awk -F\. '{print $NF}')  # pull out just the number at the end
    nxt=$(($suffix+1)) # add one to the version number
    saveFile="${file}"."${nxt}"
  else # new file to archive or no versioned number exists yet
    [[ -f $DIRREAL/$file ]] && saveFile="$file".1
    [[ -f $DIRREAL/$file ]] || saveFile=""
  fi
  cp "$file" $DIRREAL/"$saveFile"
  [[ -n $saveFile ]] && target=$DIR/"$saveFile"
  [[ -n $saveFile ]] || target="$DIR"
  echo copying "$file" to "$target"
done

It is a true mish-mash of programming styles, but it gets the job done. Note the use of eval; I’m still wrapping my head around that. Also note the technique used to upper-case a string using tr. Note the use of extended regular expressions and egrep. Note the use of tilde ~ expansion. I insist on showing the target directory as ~/tmpSAT or whatever because that is what my brain is looking for. Note the use of nested $'s.

Now that cpj is in place I occasionally know I want to make that versioned copy before I launch the vi editor, so I created a vij in my bash alias file thusly:

vij () { cpj "$@";sleep 1;vi "$@"; }

Complementing these programs is my gitj script which pushes my code changes to my repository after running pyflakes for python files:

#!/bin/bash
file="$@"
status=0

pushfile() {
  git add "$file"
  echo -n "Enter comment: "
  read comment
  fullComment=$(echo -e ${file}: "${comment}\n[skip ci]")
  echo -e "The full comment will be:\n${fullComment}"
  git commit -m "$fullComment"
  git push
  date
}

suffix=$(echo $file|awk -F\. '{print $NF}')  # pull out just the file type
if [[ $suffix == py ]]; then
  echo python file. Now running pyflakes on it;pyflakes $file;status=$?
  if [[ $status -eq 1 ]]; then echo syntax error detected so no git commands will be run
    exit 1
  else # python file checked out
    pushfile
  fi
else # was not a python file
  pushfile
fi

Another example

I wrote this to retain one backup per month plus the last 28 days.

#!/bin/bash
# do some date arithmetic to preserve backup from first Monday in the month
#[[ $(date +%a) == "Wed" ]] && { echo hi; }
DEBUG=0
DRYRUN=''
[[ $DEBUG -eq 1 ]] && DRYRUN='--dry-run'
if [[ $(date +%a) == "Mon" ]] && [[ $(date +%-d) -lt 8 ]]; then
# preserve one month ago's backup!
  echo "On this first Monday of the month we are keeping the Monday backup from four weeks ago"
else
  d4wksAgo=$(date +%Y%m%d -d'-4 weeks') # four weeks ago
  oldBackup=zones-${d4wksAgo}.tar.gz
  git rm $DRYRUN backups/$oldBackup
fi
today=$(date +%Y%m%d)
todaysBackup=zones-${today}.tar.gz
git add $DRYRUN backups/$todaysBackup

It incorporates a lot of the tricks I’ve accumulated over the years, too numerous to recount. But it’s a good example to study.

Calculate last weekday

today=$(date -u +%Y%m%d) # UTC date
# last weekday calculation
delta="-1"
[[ $(date -u +%a) != "Mon" ]] || delta="-3"
lastday=$(date -u +%Y%m%d -d"${delta} days")

Output the tab character in an echo statement

Just use the -e switch as in this example:

echo -e "$subnet\t$SSID"

Get top output in a non-interactive (batch) shell

top -b -n 1

Prompting for user input

echo -n "Give your input: "

read userInput

Print first 120 characters of each line in a text file

cat file | cut -c -120

Reverse the lines in a file

tac file > file-reversed # tac is cat in reverse!

Send email when there is no mailx, mail or postfix setup

Use curl!

curl --url smtp://mail-relay.com --mail-from $sender --mail-rcpt $recipient -T <(echo -e "$msg")

Format json into something readable

curl json_api|python3 -m json.tool

Merge every other line in a file

sed 'N;s/\n/ /' file

Ending script on compound conditional can be a bad idea

I ended my script with this statement:

# send alerts if needed
[[ $notify -gt 0 ]] && alerting

Problem was, this last statement has a normal exit value of 1 (the first condition is false, so the second expression is never evaluated), so the whole script exits with value 1 and my ADO pipeline felt that was an error! Guess I’ll add an exit 0 at the end…
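
So the tail end of the script will become something like this:

# send alerts if needed
[[ $notify -gt 0 ]] && alerting
exit 0 # make sure the script itself reports success even when no alert was needed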

Editing file in place with sed

The -i switch to sed is designed to do your substitutions right in the file. Here’s an actual crontab entry where I used that switch:

35 22 * * * sed -i s'/enabled=0/enabled=1/' /etc/yum.repos.d/thousandeyes.repo > /dev/null 2>&1
Date of a file in seconds

The timestamp output from, e.g., ls -l is hard to parse. This will do the trick: technically it reports the number of seconds since filename was last modified.

echo $(($(date +%s) - $(date +%s -r "$filename")))
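
And if all you want is the raw modification time rather than the age, the inner date call by itself does that:

date +%s -r "$filename"                             # last-modified time as epoch seconds
echo $(($(date +%s) - $(date +%s -r "$filename")))  # age: seconds since last modified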

Conclusion

I have documented here most of the techniques I use from bash to achieve simple yet powerful scripts. My style is not always top form, but as I learn better ways I will adopt and improve.