Categories
Linux

Install WSL2 for Windows 10 Home Edition: not as easy as they say, but not impossible either, and definitely worth it, plus tips for Windows 11

2023 Intro – Windows 11 Home Edition

I just got my HP Aero with Windows 11. One of the very first things I did was to get WSL going; that's how important it is to me. I did not start with a simple wsl --install, at least not initially. Maybe it would have worked, maybe not. Instead I launched a PowerShell window as administrator and ran the two dism commands shown below. Then this command did nothing; it wasn't recognized:

wsl --set-default-version 2

Then I just went for it and tried to install Debian:

wsl --install --distribution Debian

It seemed to go through, though I remember it always seems to, even when nothing really happened. But all of a sudden I was being asked to set up an account. A reboot, and then there was a Debian window. And the wsl command works. So no kernel patching is needed any longer, I believe.

The section below is my original post based on my experience installing wsl on Windows 10 home edition.

Intro

I installed WSL2 on my work laptop a couple of weeks ago. It didn't go terribly smoothly, but now that I have it, I love it. I had been using a Cygwin environment, but I fear that is looking a little long in the tooth. WSL2 is fast to start up. But the main contrast is that while Cygwin is an emulation layer, WSL2 runs under a true hypervisor, so you get a full-fledged linux VM right on your PC. Of course this was always possible with products like VirtualBox or whatnot, but Microsoft has essentially built this capability into newer versions of Windows 10, so there's no messing with external software any longer.

But at work I have Windows 10 Professional, of course. What about at home, where I have Windows 10 Home Edition like most of us? My understanding was that you could not run a hypervisor on Windows 10 Home Edition. That was probably true, until recently. But now you can. I know because I just managed it tonight.

None of the tutorials out there were exactly right, but they all contained pieces of the truth. So my contribution is to add weight to the correct steps you'll need to take. Unfortunately I only got to do it once, so my notes aren't the best. Still, I may be able to spare you some pitfalls.

Why you should want WSL2

If you love linux command-line, then I would say this is a must-have.

What doesn’t work

You'll see suggestions to fire up PowerShell and simply run

wsl --install

Chances are about 95% that it won't work if you are reading this article; would that it were so simple.

Instead, do this

Open a PowerShell window as administrator. To do that, type powershell in the Start menu and look at the options presented. Pick the one that mentions Run as administrator.

Running Powershell as Administrator

Then enter this command into the PS window.

dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart

Then this.

dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart

Then this.

wsl --set-default-version 2

You need to update your kernel. Download this WSL2 kernel update file and install it: https://wslstorestorage.blob.core.windows.net/wslblob/wsl_update_x64.msi
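
Depending on how new your wsl command is, it may be able to fetch kernel updates itself, in which case the MSI above is unnecessary. A quick thing to try first; if these flags aren't recognized on your build, just use the MSI:

wsl --update
wsl --status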

A reboot at this point is probably a good idea.

Now you need to get yourself a linux distro to install.

There are certain wsl commands you can issue which will helpfully give you the URL for the Debian distro: you put the URL into the browser and it redirects you to the MS Store. I forget exactly what the command is; perhaps wsl -d Debian. But you can also simply go to the MS Store directly, search for Debian and install it from there.
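
If you would rather stay on the command line, newer versions of wsl can list the available distros and install one directly. This is a sketch; the exact flags depend on how recent your wsl is:

wsl --list --online
wsl --install --distribution Debian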

Nameserver issues when using vpn

Actually, when I switched from WSL version 1 to WSL version 2, name resolution didn't work at all. The proximate cause is that the /etc/resolv.conf file contained the IP of the host system. But the host system doesn't run a dns server… So after considering other options, I think the best approach is to embrace this guy's script. It is supposed to dynamically figure out the best nameservers, which is pretty cool: https://github.com/jacob-pro/wsl2-dns-agent

He writes:

(Optionally) save it to your startup folder (%APPDATA%\Microsoft\Windows\Start Menu\Programs\Startup), so it is automatically launched when you log in.

So I guess that’s the current way to run a custom program upon startup. That could be useful.

Well, that approach hasn't been working so well for me. For now I am updating /etc/resolv.conf by hand. First I break the symlink, then I run chattr -i resolv.conf to get past the warning that the file is read-only, and finally I edit it by hand, as sketched below. I know. Crude. But it works.
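
For reference, here is a sketch of the commonly suggested variant of that manual fix. The wsl.conf entry tells WSL to stop regenerating the file (it takes effect after a wsl --shutdown from Windows), and the nameserver address is just a placeholder for whatever resolver your VPN actually uses:

# stop WSL from regenerating /etc/resolv.conf on the next start
sudo tee /etc/wsl.conf <<'EOF'
[network]
generateResolvConf = false
EOF
# replace the symlink with a plain file pointing at a real nameserver (placeholder address)
sudo rm /etc/resolv.conf
echo "nameserver 8.8.8.8" | sudo tee /etc/resolv.conf
# optionally make it immutable so nothing overwrites it
sudo chattr +i /etc/resolv.conf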

Enhance your experience with Windows Terminal

One good suggestion out of Windows Central is to use Windows Terminal. At least it looks good; I haven't had time to try it myself. I normally just fire up a CMD window and type wsl. My Debian starts immediately and I have a satisfactory command-line environment. But working with multiple windows would be nice, so I have to check it out.

Just look for Windows Terminal in the MS Store.

Windows Central suggestions

A web site called Windows Central has a pretty good stepwise guide. But their advertising is so obnoxious that I'm afraid to touch any part of the page for fear of getting sent to one of their many advertisers. Even so it probably happened about five times. So I won't make the link to them too prominent. And, anyway, their guide is a little oversimplified.

My equipment

I have a four-year-old HP Pavilion laptop running Windows 10 21H2, if I remember correctly. It has solid state drives so it's not too slow, and it boots pretty quickly.

BIOS – basically impossible to get into these days

I'm sure people who do this for a living will disagree, but for ordinary people it's basically impossible to interrupt the boot process to modify the BIOS settings. And you may need to do that. In fact that was the hardest thing of all for me. Pressing F10 or Delete or Escape or F2 (and does that mean holding the Fn key down first?): no one explains that, and I don't have the patience to watch a YouTube video. After trying a bunch of combinations and booting a bunch of times, and never getting into the BIOS settings, I was really glad to learn Windows 10 offers an alternate way! And it works…

Access BIOS settings from Windows 10

Very briefly, the steps are:

Windows Settings > Update & Security > Recovery > Advanced Startup > Restart Now > (reboot) > Troubleshoot > Advanced Options > UEFI Firmware Settings > (BIOS menu) enable virtualization > Save.

To see the details, go to this HP article: How to Enter BIOS Setup on Windows PCs | HP® Tech Takes

Why you may need to alter the BIOS settings

Well, on my laptop my installation of Debian kept failing with this error: Error: 0x80370102 The virtual machine could not be started because a required feature is not installed. I read on a Microsoft site that this could be because virtualization was not enabled in the BIOS. And, yes, that turned out to be absolutely true. It was disabled. So I enabled it and bam, the Debian install started asking me for a username and password, and I was running a Debian VM!
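
Before rebooting into the BIOS, you can check from Windows whether virtualization is already enabled in firmware. A quick check from a CMD or PowerShell window; the exact wording varies a little by Windows build, and once a hypervisor is already running this section just reports that one has been detected:

systeminfo | findstr /i "Virtualization Hyper-V"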

To be fleshed out as my time permits…

But, I love my Debian linux. It’s just like Raspberry Pi OS Lite. I just install packages as I need them: python, pip, curl, bind9-dnsutils, ssh, etc.
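
In Debian terms that is just apt. Package names as of current Debian releases, so adjust to taste:

sudo apt update
sudo apt install python3 python3-pip curl bind9-dnsutils ssh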

Operating inbound TCP services

After the initial thrill wears off, you realize you may want some of the practical things you have on your Raspberry Pi, such as an ssh server or a web server. I believe this will be possible; still working on it. After installing ssh you can fire it up:

$ sudo service ssh start

This post describes some of those service commands which you have under a WSL linux install: [3 Fixes] System Has Not Been Booted With Systemd as Init System (partitionwizard.com)

If you ignore that article you may see this error: System has not been booted with systemd as init system (PID 1). Can't operate.
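
In other words, on a WSL distro that isn't running systemd, use the old service wrapper rather than systemctl:

# fails with the "System has not been booted with systemd" error on these installs
sudo systemctl status ssh
# works
sudo service ssh start
sudo service ssh status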

Back to your ssh server. Now you can already connect to it from the Windows system itself, e.g., from a CMD window:

C:\Users\me> ssh user@localhost

user is the Debian user you set during initial setup. So, anyway, that works and that’s cool. But you’re still locked out from the outside.

This helpful Microsoft article discusses networking for WSL2. Apparently it is still evolving and so it’s a bit primitive right now: Accessing network applications with WSL | Microsoft Docs

From a CMD Window launched as administrator:

netsh interface portproxy add v4tov4 listenport=22 listenaddress=0.0.0.0 connectport=22 connectaddress=172.22.167.12

But this did not work in my case. Firewall thing, I'm sure. Yes! For me, since I also run McAfee, I needed to go to its firewall settings > Ports and System Services. Then I had to add a service for TCP port 22, the ssh default port. Then it began to work and my RPi could ssh and sftp to my Debian VM! sftp hung a bit; I have to see how bad that is.
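
Since the VM's address changes whenever WSL restarts, you end up repeating this little routine. A sketch, run from an administrator CMD window, with <WSL-IP> standing in for whatever the first command reports:

REM find the VM's current address
wsl hostname -I
REM forward port 22 on all Windows interfaces to that address
netsh interface portproxy add v4tov4 listenport=22 listenaddress=0.0.0.0 connectport=22 connectaddress=<WSL-IP>
REM confirm the rule took
netsh interface portproxy show v4tov4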

How the filesystems are mapped

Where are your nice, tidy linux directories to be found on your ugly File Explorer? You should have a Linux > Debian (or whatever your installed distribution is) section added to the bottom of your File Explorer.

Debian filesystem as it appears in File Explorer

But really, where is that? For me, it is:

C:\Users\USERNAME\AppData\Local\Packages\TheDebianProject.DebianGNULinux_76v4gfsz19hv4\LocalState\rootfs\home\…

And turning things around, how do you navigate to the C drive from your linux command line? Just

cd /mnt/c

Most of the interesting files in my case are in /mnt/c/Users/USERNAME
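
Two quick examples of crossing over, with USERNAME standing in for your actual Windows account name:

# your Windows home directory, seen from Debian
cd /mnt/c/Users/USERNAME
# files created here from linux show up immediately in File Explorer
touch /mnt/c/Users/USERNAME/hello-from-debian.txt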

Debian linux loses time

Older versions of WSL may have their system clock drift severely compared to the underlying system’s hardware clock. sudo hwclock -s may restore things. Also see https://stackoverflow.com/questions/65086856/wsl2-rest-api-error-due-to-wsl2-clock-out-of-sync-with-windows-clock

References and related

That obnoxious Windows Central article I mentioned above with a lot of the WSL2 installation information. It’s a veritable minefield of links to irrelevant stuff, so you’ve been warned: How to install Linux WSL2 on Windows 10 and Windows 11 | Windows Central

WSL2 kernel update: https://wslstorestorage.blob.core.windows.net/wslblob/wsl_update_x64.msi

Seeing Error: 0x80370102? Try Troubleshooting Windows Subsystem for Linux | Microsoft Docs. A whole host of other WSL2 errors are addressed in this article as well.

This article purports to be for servers, but I think it’s applicable to PCs as well. It gets pretty technical. System requirements for Hyper-V on Windows Server | Microsoft Docs

About starting system services such as the ssh daemon: [3 Fixes] System Has Not Been Booted With Systemd as Init System (partitionwizard.com)

A good overview of WSL2 networking: Accessing network applications with WSL | Microsoft Docs

How to Enter BIOS Setup on Windows PCs | HP® Tech Takes

With WSL 2 dns name resolution can often be mucked up. This guy has a nice fix: https://github.com/jacob-pro/wsl2-dns-agent

Info about a clock drift problem: https://stackoverflow.com/questions/65086856/wsl2-rest-api-error-due-to-wsl2-clock-out-of-sync-with-windows-clock

Categories
IT Operational Excellence Linux

Grep is Slow as a Snail in SLES 11 – Solved

I had written earlier about the performance problems of SUSE Linux Enterprise Server v 11 Service Pack 1 (SLES 11 SP1) under VMware: http://drjohnstechtalk.com/blog/2011/06/performance-degradation-with-sles-11-sp1-under-vmware/. What I hadn't fully appreciated at that time is that part of the problem could be the grep command itself. Further investigation has convinced me that grep as implemented under SLES 11 SP 1 x86_64 is horrible. It is seriously broken. The following results are the same on both a VM and a physical server.

Methodology 1

A cksum shows that grep has changed between SLES 10 SP 3 and SLES 11 SP 1.  I’m not sure what the changes are.  So I performed an strace while grep’ing a short file to see if there are any extra system calls which occur under SLES 11 SP 1.  There are not.

I copied the grep binary from SLES 10 SP 3 to a SLES 11 SP 1 system.  I was afraid this wouldn’t work because it might rely on dynamic libraries which also could have changed.  However this appears to not be the case and the grep binary from the SLES 10 system is about 19 times faster, running on the same SLES 11 system!
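
Roughly, the comparison went like the following. The file name is illustrative, and /tmp/grep.sles10 is where I'm assuming the binary copied over from the SLES 10 SP 3 box was parked:

# fingerprint the binaries to confirm they differ between releases
cksum /usr/bin/grep
# count system calls made while grep'ing a short file
strace -c grep foo /etc/hosts
# time the stock grep against the copied-over SLES 10 binary on the same large file
time /usr/bin/grep 192.168.23.34 bigfile.log > /dev/null
time /tmp/grep.sles10 192.168.23.34 bigfile.log > /dev/null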

Methodology 2

I figure that I am a complete amateur as a programmer. If, with all my limitations, I can implement a search utility that does considerably better than the stock grep command, I can fairly decisively conclude that grep is broken. Recall that we already have comparisons showing that grep under SLES 10 SP 3 is many times faster than under SLES 11 SP 1.

Results

The table summarizes the findings. All tests were on a 109 MB file which has 460,000 lines.

OS              Type of Grep                Time (s)
SLES 11 SP 1    built-in                    42.6
SLES 11 SP 1    SLES 10 SP 3 grep binary    2.5
SLES 11 SP 1    Perl grep                   1.1
SLES 10 SP 3    built-in                    1.2
SLES 10 SP 3    Perl grep                   0.35

The Code for Perl Grep

Hey, I don't know about you, but I only use a fraction of the features in grep. The switches i and v cover about 99% of what I do with it. Well, come to think of it I do use alternate expressions in egrep (with the "|" character), and the C switch (provides context by including surrounding lines) can sometimes be really helpful. The l (filenames only) and n (include line numbers) switches look useful on paper, but you almost never end up needing them. Anyway, I simply didn't program those things, to keep it simple. Maybe later. To make it as fast as possible I avoided anything I thought the interpreter might trip over, at the expense of repeating code snippets multiple times. At some point (allowing another switch or two) this approach would become ludicrous, as there would be too many combinations to consider. But at least in my testing it does function just like grep, only, as you see from the table above, it is much faster. If I had written it in a compiled language like C it should go faster still; Perl is an interpreted language, so there will always be a performance penalty in using it. The advantage, of course, is that it is so darn easy to write useful code.

#!/usr/bin/perl
# J.Hilgart, 6/2011
# model grep implementation in Perl
# feel free to borrow or use this, but it will not be supported
use Getopt::Std;
$DEBUG = 0;
# Get the command line options.
getopts('iv');
# the search string has to be present
$mstr = shift @ARGV;
usage() unless $mstr;
# escape dots in the search string so they match literally rather than as regex wildcards
$mstr =~ s/\./\\./g;
# the remaining arguments are the files to be searched
$nofiles = @ARGV;
print "nofiles: $nofiles\n" if $DEBUG;
 
# call subroutine based on arguments present
optiv() if $opt_i && $opt_v;
opti()  if $opt_i;
optv()  if $opt_v;
normal();
################################
sub normal {
foreach (@ARGV) {
  # prefix each match with "filename:" when more than one file is searched
  $filePrefix = $nofiles > 1 ? "$_:" : "";
  open(FILE,"$_") || die "Cannot open $_!!\n";
  while(<FILE>) {
# print filename if there is more than one file being searched
    print "$filePrefix$_" if /$mstr/;
  }
  close(FILE);
}
if (! $nofiles) {
# no files specified, use STDIN
while(<STDIN>) {
  print if /$mstr/;
}
}
exit;
} # end sub normal
###############################
sub opti {
foreach (@ARGV) {
  # prefix each match with "filename:" when more than one file is searched
  $filePrefix = $nofiles > 1 ? "$_:" : "";
  open(FILE,"$_") || die "Cannot open $_!!\n";
  while(<FILE>) {
    print "$filePrefix$_" if /$mstr/i;
  }
  close(FILE);
}
if (! $nofiles) {
# no files specified, use STDIN
while(<STDIN>) {
  print if /$mstr/i;
}
}
exit;
} # end sub opti
#################################
sub optv {
foreach (@ARGV) {
  # prefix each match with "filename:" when more than one file is searched
  $filePrefix = $nofiles > 1 ? "$_:" : "";
  open(FILE,"$_") || die "Cannot open $_!!\n";
  while(<FILE>) {
    print "$filePrefix$_" unless /$mstr/;
  }
  close(FILE);
}
if (! $nofiles) {
# no files specified, use STDIN
while(<STDIN>) {
  print unless /$mstr/;
}
}
exit;
} # end sub optv
##############################
sub optiv {
foreach (@ARGV) {
  # prefix each match with "filename:" when more than one file is searched
  $filePrefix = $nofiles > 1 ? "$_:" : "";
  open(FILE,"$_") || die "Cannot open $_!!\n";
  while(<FILE>) {
    print "$filePrefix$_" unless /$mstr/i;
  }
  close(FILE);
}
if (! $nofiles) {
# no files specified, use STDIN
while(<STDIN>) {
  print unless /$mstr/i;
}
}
exit;
} # end sub optiv
sub usage {
# never did get fancy here; just print a minimal usage summary and quit
print STDERR "usage: $0 [-i] [-v] search_string [file ...]\n";
exit(1);
}

Conclusion
So built-in grep performs horribly on SLES 11 SP 1, about 17 times slower than the SLES 10 SP 3 grep. I wonder what an examination of the source code would reveal? But who has time for that? So I've shown a way to avoid it entirely, by using a Perl grep instead; modify it to suit your needs. It's considerably faster than what the system provides, which is really sad since it's an amateur, two-hour effort compared to the decade-plus (?) of professional development on POSIX grep. What has me more concerned is what I haven't found yet that also performs horribly under SLES 11 SP 1. It's like deer on the side of the road in New Jersey: where there's one, there are likely more lurking nearby : ) .

Follow Up
We will probably open a support case with Novell. I am not very optimistic about our prospects. This will not be an easy problem for them to resolve; the code may be contributed upstream, for instance. So this is where it gets interesting. Is the much-vaunted rapid bug-fixing of open source really going to make a substantial difference? I would have to look to openSUSE to find out (where I suppose the fixed code would first be released), which I may do. I am skeptical this will be fixed this year. With luck, in a year's time.

7/15 Update
There is a newer version of grep available. Old version: grep-2.5.2-90.18.41; new version: grep-2.6.3-90.18.41. Did it fix the problem? Depends how low you want to set the bar. It's a lot better, yes. But it's still three times slower than grep from SLES 10 SP3. So… still a long way to go.

9/7 Update – The Solution
Novell came through today, three months later. I guess that’s better than I pessimistically predicted, but hardly anything to brag about.

Turns out that things get dramatically better if you simply define the environment variable LC_ALL=POSIX. They do expect a better fix with SLES 11 SP 2, but there's no release date for that yet. Being a curious sort, I revisited SLES 10 SP3 with this environment variable defined and it considerably improved performance there as well! This variable has to do with locale and language support. Here's a table with some recent results. Unfortunately the SLES 11 SP 1 system is a VM and the SLES 10 SP3 system is a physical server, although the same file was used, so the thing to concentrate on is the improvement in grep's performance with versus without LC_ALL defined.

OS              LC_ALL=POSIX defined?    Time (s)
SLES 11 SP 1    no                       6.9
SLES 11 SP 1    yes                      0.36
SLES 10 SP 3    no                       0.35
SLES 10 SP 3    yes                      0.19

So if you use SLES 10/11, make sure you have a

export LC_ALL=POSIX

defined somewhere in your profile if you plan to use grep very often. It makes a 19x performance improvement in SLES 11 and almost a 2x performance improvement under SLES 10 SP3.
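
If you want to see the difference for yourself before touching your profile, a quick one-off comparison works too (the log file name is just a placeholder):

time grep 192.168.23.34 bigfile.log > /dev/null
time env LC_ALL=POSIX grep 192.168.23.34 bigfile.log > /dev/null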

Related
If you like the idea of grep but want a friendlier interface, I ought to mention Splunk. A Google search will lead you to it. It started with a noble concept: all the features of grep, plus a convenient web interface so you never have to get your hands dirty and actually log into a Linux/unix system. It was like grep on steroids. But then, in my opinion, they ruined a simple utility and blew it up with so many features that it takes hours just to scratch the surface of its capabilities. And I'm not even sure a free version is still available. Still, it might be worth a look in some cases. In my case it also slowed down searches, though supposedly it should have sped them up.

And to save for last what should have come first: grep is a search utility that's great for looking through unstructured data (data not stored in a relational database).

Categories
Linux

Gnu Parallel Really Helps With Zcat

I happened across what I think will turn out to be a very useful open source application: Gnu Parallel. Let me explain how I got to that point. I was uncompressing gzip'd files, of which I have lots. So many, in fact, that it can take the better part of a day to go through them all. The server I was using is four years old, but I don't have access to anything better. In fact it seems processor speed has sort of plateaued; Moore's Law only applies if you count the number of processors, which are multiplying like rabbits. So I started to think that maybe my single-threaded approach was wrong, see? I need to scale horizontally, not vertically.

It just happens that in my former life I put quite some effort into parallelizing a compute-intensive program I wrote. I can't believe that it's still hanging around on the Internet! I haven't looked for it in at least 17 years: my FERMISV Monte Carlo. So I thought, what if I could parallelize my zcat|grep? Writing such a thing from scratch would be too time-consuming, but after searching for parallelize zcat I quickly found Gnu Parallel. It's one thing to identify such a program in a Google search, another to get it working. Believe me, I've put in my time with some of these WordPress plugins.

The upshot is that, yes, Gnu Parallel actually works and it's not hard to learn to use. They have a nice YouTube video. For me I employed syntax like:

ls *gz|time parallel -k "zcat {}|grep 192.168.23.34" > /tmp/matches

This, of course, is to be compared with the normal operation I have been doing for eons:

time zcat *gz|grep 192.168.23.34 > /tmp/matches

The "time" is just for benchmarking and normally wouldn't be present.

The result is that the matches files produced in the two different ways are identical. That's good! The -k switch ensures the order is preserved in the output (otherwise the order is not guaranteed), and anyway, it didn't cost me any time. I tested on a server with four cores. Gnu Parallel reported that it used 375% of my CPU! It finished in less than half the time of my normal way of doing things. I need to put it through more real-world exercises, but it really looks promising. I never got onto the xargs bandwagon (perhaps I should be ashamed to admit it?), but Gnu Parallel can do a lot of the same sort of thing, and in my opinion it is more intuitive. Take a look!
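
For comparison, the closest xargs idiom I know of is something like the following; unlike parallel -k it makes no promise about output ordering and can interleave output from the parallel jobs, which is part of why I find parallel more intuitive:

ls *gz | xargs -P 4 -I {} sh -c 'zcat {} | grep 192.168.23.34' > /tmp/matches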

Categories
Web Site Technologies

Security Considerations for WordPress Plugins and Upgrades

The following comments apply to WordPress v 3.1.3 and may not apply to earlier versions, with which I have no familiarity.

WordPress has an interesting idea for doing upgrades and downloading plugins. It took some getting used to until I learned to embrace it. I needed to understand the security considerations. Now I have a much better handle on it and feel comfortable with it.

First thing after installing WordPress (Murphy's law, you know), I was presented with an important security upgrade the very next day. I did the upgrade the hard way, doing all the file manipulation by hand: copying files here and there, etc. I run the web server as a different user than the owner of the HTML documents to make things more secure. So I naively figured there was no way WordPress's offer to automatically update my installation could work in my case. After all, all it could do was run with the permissions of the web server, which as I say doesn't have permission to write to the relevant parts of the filesystem, right?

Then I learned that my colleagues on the Newton Robotics Team were managing to do it under the same conditions, which piqued my curiosity. The next plugin I wished to install, WP-Syntax, offered me the same possibility of automatically installing it from the WordPress Admin GUI. It suggested that all I needed was to enter FTP credentials or use FTP/SSL. It did not explain how those credentials were going to be used, and I feared they would be shared with another site. Let's think about this (this is how an IT person thinks). There are two main possibilities. 1) The FTP connection is initiated from an external site, probably the repository where the plugin is housed, e.g., wordpress.org. It was my gut feeling that this was the case. 2) The FTP client runs on my local server, the one where I run WordPress. But, huh, what's the point of that?

Turns out that 2) is what's happening. But then what is the point, and how does it work? By reverse engineering and reasoning, it must work as follows. WordPress downloads the plugin from the distribution site, perhaps over HTTP or FTP. Perhaps it uses the FTP proxy feature, where an intermediary holds FTP connections to two FTP servers and transfers files between them. To expand the plugin and put it into the local WordPress plugins directory, where the web server doesn't have permission to write, it definitely has to use FTP, but you gave it the credentials of the account that does have permission to write to the plugins directory! Clever, huh? Of course this presupposes something. Maybe if I read the WordPress requirements I would see that running an FTP server is strongly recommended. But I didn't, so this is another lesson learned through the school of hard knocks! You see, Ubuntu server, and I think most linux distributions, do not even bother to give you an FTP server. Without a local FTP server WordPress cannot pull off its trick. I'm not sure why they cannot use sftp, which is pretty universal these days. In Ubuntu, you have the FTP client, but not the server.

I tried to run ftpd on my server to see what I would get. It was missing, and several packages which provide it were suggested. I chose inetutils-ftpd: sudo apt-get install inetutils-ftpd. I quickly learned that it relies on inetd, which I see I am not even running. But it also has the option to run as a standalone daemon: ftpd -D, which I chose to do (it won't start after a reboot without more jiggering, but I can start it by hand since I don't need it often).
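
For the record, the whole dance boils down to something like this, ending with a quick local smoke test using the command-line ftp client:

sudo apt-get install inetutils-ftpd
# run it as a standalone daemon rather than via inetd; it won't survive a reboot this way
sudo ftpd -D
# quick local test that it answers
ftp localhost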

But how do I test my new FTP server?  Will it really work when WordPress tries to use it?  

Feb 2012 Update
I am now comfortable with directing WordPress to do my upgrade. I got tired of it bugging me about the 3.3.1 release, so I relented and upgraded to it. I learned how to back up my database first, which is when I saw it was dominated by all the spam and scams I have been receiving. So I went back to the dashboard, got rid of 600 spam comments and re-ran the database mysqldump. The database dump file shrank from 10 MB to 3 MB! So it was 70% spam. Great people out there, huh? But I digress. I temporarily enabled my FTP daemon as described above and all went fine.
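
Backing up the database first is simple enough. A sketch, with the database name, user, and output file as placeholders for your own values:

mysqldump -u wpuser -p wordpress_db > wordpress-backup-$(date +%Y%m%d).sql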

Then I enabled a simple captcha challenge for commenters. For now simple math seems to be flummoxing the automated spam submitters! The next day my instance died. No idea why…