Categories
Admin Linux

Linux command line tips

Intro
I love the Linux command line. I don’t know how to categorize the tips I have collected so I’m dumping them here for the time being.

The newer netstat
In the old days I used to constantly do

$ netstat -an|grep LISTEN|grep <PORT>

to see what tcp ports I had processes listening on. While that is very helpful, it doesn’t tell you the listening process. A better approach:

$ lsof -i tcp:<PORT>

where <PORT> is 22, 80 or whatever port you want to look for. lsof does show the listening process.
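For example, to see who owns the ssh port – the output below is illustrative, from a typical Linux host (you may need to be root to see other users’ processes):

$ lsof -i tcp:22
COMMAND  PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
sshd    1234 root    3u  IPv4  16789      0t0  TCP *:ssh (LISTEN)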

Repeat last argument

I am constantly doing things like

$ grep <string> <big-file> > /tmp/results

and then I want to have a look at that file but I don’t want to type in

$ more /tmp/results

so what do I do?

I type this to give me the last argument:

$ more !$

My fingers have memorized that pattern and so I do it without conscious thought, and since both characters are typed with the SHIFT key held down, it goes very quickly.
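To make it concrete, here’s a sketch – the log file is just an example. Bash echoes the expanded command before running it, so you can see what !$ became:

$ grep ERROR /var/log/messages > /tmp/results
$ more !$
more /tmp/results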

But sometimes I can’t wait to type even that. I want to collect the grep results into a file in case there were a lot of matches, but I also want to see the matching results on my screen as soon as they are available. What do I do? Use tee, as in:

$ grep <string> <big-file> |tee /tmp/results|more

Cool, huh?

More obscure stuff
A cool blog with lots of good, obscure tips, generally more technical than the ones above, is https://die-computerhilfe.de/blog

To be continued…

Categories
Linux Raspberry Pi

Superimposing a grid on your raspivid output

Intro
In my previous post I outlined how to get real-time video from your Raspberry Pi with its camera, and to make it somewhat robust. In the conclusion I mentioned that it would be nice to superimpose (overlay) a grid over that image, and speculated that openCV might be just the tool to do it. Here I demonstrate how I have done it, and what compromises I had to make along the way.

The details
Well, let’s talk about why I didn’t go the openCV route. I began to bring down the source code for raspivid and raspistill, as outlined in this series of blog posts. And I did get it to compile, but it’s a lot of packages to bring down, and then I still needed to add in the openCV stuff. He provided one example of a hacked source file, but for raspistill, and I needed raspivid, which is slightly different. Then there was cmake to master – I had no idea how, never having used it before. And then I would have needed to figure out openCV, which in turn might require programming in C++, in which I have only the most basic skills. And then, after all that, my fear was that it would slow down the video to the point where we would lose the real-time aspect! So the barriers were many, the risk was great, the reward not that great.

Logic dictates there should be another way
I reasoned as follows. Windows display graphics. Something decides what pixels to display, and this is true for every window, including mplayer’s. So if you can get control of whatever decides how to draw the pixels, you can draw the grid on the client side – my Windows PC – rather than on the encoder side on the Pi. So I looked for a way to superimpose an image using mplayer. Though they don’t use that term, I soon was drawn to what sounded similar, a -vf (video filter) switch with post-processing capability. I don’t know how to bring up the mplayer documentation in Windows, but on the Pi it’s just

$ man mplayer

and you’ll get a whole long listing. Under -vf are different filters, none of which sounded very promising. Then I came across geq (general equation). That sounded pretty good to me. I searched for examples on the web and came across this very helpful discussion of how to use it, with examples.

So, off I went. A lot of the stuff I tried initially didn’t work. Then, when it did work, it lost the real-time feature that’s so important to us. Or the convergence to real-time took too long. I finally settled on this string for my mplayer:

mplayer -vf geq=p(X\,Y)*(1-gt(mod(X/SW\,100)\,98))*(1-gt(mod(Y/SH\,100)\,98)) -ontop -fps 27 -vo gl -cache 1024 -geometry 600:50 -noborder -msglevel all=0 -

in combination with these switches on raspivid:

raspivid -n -o - -t 9999999 -rot 180 -w 560 -h 420 -b 1000000 -fps 9

And, voila, my grid of black lines appears at 100 pixel intervals. Convergence is about 30 seconds and we have preserved the real-timeyness of the video!
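Here is my reading of how that geq expression draws the lines – treat it as an interpretation of the filter, not gospel:

# p(X,Y)                the unmodified value of the source pixel at (X,Y)
# mod(X/SW,100)         horizontal position, modulo 100 (SW rescales X for the chroma planes)
# gt(mod(X/SW,100),98)  equals 1 only in the last sliver of each 100-pixel cell
# 1 - gt(...)           flips that: 0 on the grid line, 1 everywhere else
# multiplying the pixel by the X factor and the Y factor therefore zeroes it
# (black) along vertical and horizontal lines every 100 pixels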

But it was a series of compromises and tuning that got me there. For my desired 640 x 480 video I could get real-time video at about 7 fps (frames per second). I think my PC, a Dell Inspiron 660, just can’t keep up at higher fps. Because when you think about it, it’s got to do calculations for each and every pixel, which must introduce quite some overhead. Perhaps things will go better on PCs that don’t need the -vo gl switch of mplayer, which I have to use on my Dell display. So I kept the pixels per second constant and calculated what area I would have to shrink the picture to in order to increase the fps to a value that gave me sufficient real-timeyness. I decided there was a small but noticeable difference between 7 fps and 9 fps.

So

pixels/second = fps * Area,

so keeping that constant,

7 fps * A7 = 9 fps * A9

A = w*h = w*((3/4)*w)

So after some math you arrive at:

w9 = w7*sqrt(7/9) = 640 * 0.88 ~ 560 pixels

and h9 = w9*3/4 = 420 pixels
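You can check that arithmetic with bc if you like – in bc -l, sqrt() is built in:

$ echo "scale=5; 640*sqrt(7/9)" | bc -l

which prints roughly 564.4; 560 is a conveniently round number close to that.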

And that worked out! So a slightly smaller width gives us fewer pixels to have to calculate, and allows us to converge to real-time and have almost unnoticeable lag.

What we have now
On the Pi, /etc/init.d/raspi-vid now includes this key line:

raspivid -n -o - -t 9999999 -rot 180 -w 560 -h 420 -b 1000000 -fps 9|nc  -l 443

and on my PC the key line in my .bat file now looks like this:

c:\apps\netcat\nc 192.168.0.90 443|c:\apps\smplayer\mplayer\mplayer -vf geq=p(X\,Y)*(1-gt(mod(X/SW\,100)\,98))*(1-gt(mod(Y/SH\,100)\,98)) -ontop -fps 27 -vo gl -cache 1024 -geometry 600:50 -noborder -msglevel all=0 -

For the full versions of the files, and more discussion about the switches I chose, go back to my previous article about screaming streaming on the Pi, and just substitute in these lines in the obvious place. Adjust the IP to your Pi’s IP address.

No tick marks?
My original goal was to include tick marks, but given the per-pixel calculations required I see that’s going to be a lot more complicated, and it could only further slow us down (or force us to reduce the picture size further). So for now I think I’ll stop here.

A word on YUV coloring
I am much more comfortable with RGB, but that seems not to be used in the Pi video stream. I guess raspivid encodes using YUV. I haven’t mastered the representation of YUV, but here’s a couple words on it anyways! I’m sure it’s related to YCbCr, which is described here. Because groups of pixels share a color, if you change the function above to use gt(mod(X/SW\,101)\,99), for instance, you get alternating green and black grid lines as you go from even to odd pixels. That is my vague understanding at this point. I learned just enough to get my black grid lines but no more…

Unsolved Mystery
Although the approach outlined above does generally work and can be real-time, I find that it also gets laggy when I leave the room and there is no motion. I’m not sure why. Then I introduce motion and it converges again to real-time. I don’t think this behaviour was so noticeable before I added the grid lines, but I need more tests.

Conclusion
We’ve shown how to overlay a black grid on the video output of our Raspberry Pi using built-in functionality of mplayer, while keeping the stream real-time with almost unnoticeable lag. It does appreciably slow things down, so your mileage may vary depending on your hardware.

References
The original Screaming Streaming on the Raspberry Pi article.

Categories
Admin IT Operational Excellence Linux Network Technologies Raspberry Pi

Screaming Streaming on the Raspberry Pi

Intro
The Raspberry Pi plus camera is just irresistible fun. But I had a strong motivation to get it to work the way I wanted it to as well: a FIRST Robotics team that was planning on using it for vision for the drive team. So of course those of us working on it wanted to offer something with a real-time view of the field with a fast refresh rate and good (though not necessarily perfect) reliability. Was it all possible? Before starting I didn’t know. In fact I started the season in January not knowing the team would want to use a Raspberry Pi, much less that there was a camera for it! But we were determined to push through the obstacles, and I wanted to share my love of the Pi with the students. Eventually we found a way.

The details
Well, we sure made a lot of missteps along the way – that’s why I’m excited to write this article to help others avoid some of the pain points. It needs to be fleshed out some more, but this post will be expanded to become a litany of what didn’t work – and that list is pretty long! All of it borrowed from well-meaning people on various Internet sites.

The essence of the solution is the quick start page – I always search for Raspberry pi camera quick start to find it – which basically has the right idea, but isn’t fleshed out enough. So raspivid + nc + a PC with netcat (nc) and mplayer will do the trick. Below I provide a tutorial on how to get it all to work.

Additional requirement
Remember I wanted to make this almost fool-proof. So I wanted the Pi to be like a passive device that doesn’t need more than a one-time configuration. Power-up and she’s got to be ready. Cut power and re-power, it better be ready once more. No remote shell logins, no touching it. That’s what happens when it’s on the robot – it suddenly gets powered up before the match.

Here is the startup script I created that does just that. I put it in /etc/init.d/raspi-vid:

#! /bin/sh
# /etc/init.d/raspi-vid
# 2/2014
 
# The following part always gets executed.
echo "This part always gets executed"
 
# The following part carries out specific functions depending on arguments.
case "$1" in
  start)
    echo "Starting raspi-vid"
# -n means don't show preview on console; -rot 180 to make image right-side-up
# run a loop because this command dies unless it can connect to a listener
    while /bin/true; do
# if acting as client do this. Probably it's better to act as server however
# try IPs of the production PC, test PC and home PC
#      for IP in 10.31.42.5 10.31.42.6 192.168.2.2; do
#        raspivid -n -o - -t 9999999 -rot 180 -w 640 -h 480 -b 800000 -fps 15|nc $IP 80
#      done
#
# act as super-simple server listening on port 443 using nc
# -n means don't show preview on console; -rot 180 to make image right-side-up
# -b (bitrate) of 1000000 (~ 1 mbit) seems adequate for our 640x480 video image
# so is -fps 20 (20 frames per second)
# To view output fire up mplayer on a PC. I personally use this command on my PC:
# c:\apps\netcat\nc 192.168.2.100 443|c:\apps\smplayer\mplayer\mplayer -ontop -fps 60 -vo gl -cache 1024 -geometry 600:50 -noborder -msglevel all=0 -
      raspivid -n -o - -t 9999999 -rot 180 -w 640 -h 480 -b 1000000 -fps 20|nc  -l 443
# this nc server craps out after each connection, so just start up the next server automatically...
      sleep 1;
    done
    echo "raspi-vid is alive"
    ;;
  stop)
    echo "Stopping rasip-vid"
    pkill 'raspi-?vid'
    echo "raspi-vid is dead"
    ;;
  *)
    echo "Usage: /etc/init.d/rasip-vid {start|stop}"
    exit 1
    ;;
esac
 
exit 0

I made it run on system startup thusly:

$ cd /etc/init.d; sudo chmod +x raspi-vid; sudo update-rc.d raspi-vid defaults

Of course I needed those extra packages, mplayer and netcat:

$ sudo apt-get install mplayer netcat

Actually you don’t really need mplayer on the Pi, but I frequently used it simply to study the man pages, which I never did figure out how to bring up on the Windows installation.

On the PC I needed mplayer and netcat to be installed. At first I resisted doing this, but in the end I caved. I couldn’t meet all my requirements without some special software on the PC, which is unfortunate but OK in our circumstances.

I also bought a spare camera to play with my Pi at home. It’s about $25 from newark.com, though the shipping is another $11! If you’re an Amazon Prime member that’s a better bet – about $31 when I looked the other day. Wish I had seen that earlier!

I guess I used the links provided by the quick start page for netcat and mplayer, but I forget. As I was experimenting, I also installed smplayer. In fact I ended up using the mplayer provided by smplayer. That may not be necessary, however.

A word of caution about smplayer
smplayer, if taken from the wrong source (see references for correct source), will want to modify your browser toolbar and install adware. Be sure to do the Expert install and uncheck everything. Even so it might install some annoying game which can be uninstalled later.

Lack of background
I admit, I am no Windows developer! So this is going to be crude…
I relied on my memory of some basics I picked up over the years, plus analogies to bash shell programming, where possible.

I kept tweaking a batch file on my desktop, so I added notepad to my Send To menu. Briefly, you type

shell:sendto

where it says Search programs and files after clicking the Start button. Then drag a copy of notepad from c:\windows\notepad into the window that popped up.

Now we can edit our .bat file to our heart’s content.

So I created a mplayer.bat file and saved it to my desktop. Here are its contents.

if not "%minimized%"=="" goto :minimized
set minimized=true
start /min cmd /C "%~dpnx0"
goto :EOF
:minimized
rem Anything after here will run in a minimized window
REM DrJ 2/2014
rem 
rem very simple mplayer batch file to play output from a Raspberry Pi video stream
rem
rem Use the following line to set up a server
REM c:\apps\netcat\nc -L -p 80|c:\apps\smplayer\mplayer\mplayer -fps 30 -vo gl -cache 1024 -msglevel all=0 -

rem Set up as client with this line...
rem put in loop because I want it to start up whenever there is something listening on port 80 on the server
 
:loop

 
rem this way we are acting as a client - this is more how you'd expect and want things to work
c:\apps\netcat\nc 192.168.2.102 443|c:\apps\smplayer\mplayer\mplayer -ontop -fps 60 -vo gl -cache 1024 -geometry 600:50 -noborder -msglevel all=0 -

rem stupid trick to sleep for about a second. Boy windows shell is lacking...
ping 127.0.0.1 -n 2 -w 1000 > NUL
 
goto loop

A couple notes about what is specific to my installation. I like to install programs to c:\apps so I know I installed them by hand. So that’s why smplayer and netcat were put there. Of course 192.168.2.102 is my Pi’s IP address on my home network. In this post I describe how to set a static IP address for your Pi. We also found it useful to have the CMD window minimize itself after starting up and run in the background, and I discovered that the lines at the top allow that to happen.

The results
With the infinite loops I programmed, either the Pi or mplayer.bat can be launched first – there is no single, necessary order to do things in. So it is a more robust solution than the one outlined in the quick start guide.
Most of my other approaches suffered from lag – delay in displaying a live event. Some other suggested approaches had quite large lag in fact. The lag from the approach I’ve outlined above is only 0.2 s. So it feels real-time. It’s great. Below I outline a novel method (novel to me anyways) of measuring lag precisely.
Many of my other approaches also suffered from a low refresh rate. You’d specify some decent number of frames per second, but in actual fact you’d get 1-2 fps! That made for choppy and laggy viewing. With the approach above there is a full 20 frames per second, so you get the feel of true motion. OK, fast motions are blurred a bit, but it’s much better than what you get with any solution involving raspistill: frame updates every 0.6 s and nothing you do can speed it up!
Many Internet video examples showed off high-resolution images. I had a different requirement. I had to keep the bandwidth usage tamped down and I actually wanted a smaller image, not larger because the robot driver has a dashboard to look at.
I chose an unconventional port, tcp port 443, for the communication because that is an allowed port in the competition. The port has to match up in raspi-vid and mplayer.bat. Change it to your own desired value.

Limitations
Well, this is a one-client-at-a-time solution, for starters! Did I mention that nc makes for a lousy server?
Even with the infinite looping, things do get jammed up. You get into situations where you need to kill the mplayer CMD window to get things going again.
I would like to have gotten the lag down even further, but haven’t had time to look into it.
Being a video amateur, I am going to make up my own terms! This solution exhibits a phenomenon I call convergence. What that means is that once the mplayer window pops up, which takes a few seconds, what it’s displaying shows a big lag – about 10 seconds. But then it speeds along through the buffered frames and converges with real-time. This convergence takes slightly more than 10 seconds. So if you need instant-on and real-time, you’re not getting it with this solution!

What no one told us
I think we were all so excited to get this little camera for the Pi that no one bothered to talk about the actual optical properties of the thing! And maybe they should, because even if it is supposedly based on a cellphone camera, I don’t know which cellphone – certainly not the one from my Samsung Galaxy S3. The thing is (and I admit someone else first pointed this out to me) that it has a really small field-of-view. I measured it as spreading out only 8.5″ at a 15″ distance – that works out to only 31.6 degrees! See what I mean? And I don’t believe there are any tricks or switches to make that larger – that’s dictated by the optics of the lens. This narrow field-of-view may make it unsuitable for use as a security camera or many other projects, so bear that in mind. If I put my Samsung next to it and look at the same view its field of view is noticeably larger, perhaps closer to 45 degrees.
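If you want to check that field-of-view arithmetic, bc can do it – with bc -l, a() is the arctangent and π is 4*a(1):

$ echo "scale=6; 2*a(4.25/15)*180/(4*a(1))" | bc -l

which prints about 31.6: twice the arctangent of half the spread (4.25″) over the distance (15″), converted to degrees.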

Special Insights
At some point I realized that the getting started guide put things very awkwardly in making the PC the server and the Pi the client. You normally want things the other way around, like it would be for an ethernet camera! So my special insight was to realize that nc could be used in the reverse way they had documented it to switch client/server roles. nc is still a lousy “server,” if you can call it that, but hey, the price is right.

Fighting lag
To address the convergence problem mentioned above I chose a frame rate much higher on the viewer than on the camera. The higher this ratio the faster convergence occurs. So I have a 3:1 ratio: 60 fps on mplayer and 20 fps on raspivid. The PC does not seem to strain from the small bit of extra cpu cycles this may require. I think if you have an exact fps match you never get convergence, so this small detail alone could convince you that raspivid is always laggy when in fact it is more under your control than you realized.

Even though with the video quality such as it is there probably is no real difference between 10 fps and 20 fps, I chose 20 fps to reduce lag. After all, 10 fps means an image only every 100 msec, so on average by itself it introduces a lag of half that, 50 msec. Might as well minimize that by increasing the fps to make this a negligible contributor to lag.

Measuring lag
Take a smartphone with a stopwatch app which displays large numbers. Put that screen close up to the Pi camera. Arrange it so that it is next to your PC monitor so both the smartphone and the monitor are in your field of view simultaneously. Get mplayer.bat running on your PC and move the video window close to the edge of the monitor by the smartphone.

Now you can see both the smartphone screen as well as the video of the smartphone screen running the stopwatch (I use Swiss Army Knife) so you can glance at both simultaneously and quantify the lag. But it’s hard to look at both rapidly moving images at the same time, right? So what you do is get a second camera and take a picture of the two screens! We did this Saturday and found the difference between the two to be 0.2 s. To be more scientific, several measurements ought to be taken and the results averaged, and hundredths of seconds perhaps should be displayed (though I’m not sure a still picture could capture that as anything other than a blur).

mplayer strangeness on Dell Inspiron desktop
I first tried mplayer on an HP laptop and it worked great. It was a completely different story on my Dell Inspiron 660 home desktop however. There that same mplayer command produced this result:

...
VO: [directx] 640x480 => 640x480 Packed YUY2
FATAL: Cannot initialize video driver.
 
FATAL: Could not initialize video filters (-vf) or video output (-vo).
 
 
Exiting... (End of file)

So this was worrisome. I happened on the hint to try -vo gl and yup, it worked. Supposedly it makes for slower video so maybe on PCs where this trick is not required lag could be reduced.

mplayer personal preferences
I liked the idea of a window without a border (-noborder option) – so the only way to close it out is to kill the CMD window, which helps keep them in sync. Running two CMD windows doesn’t produce such good results!

I also wanted the window to first pop up in the upper right corner of the screen, hence the -geometry 600:50.

And I wanted the video screen to always be on top of other windows, hence the -ontop switch.

I decided the messages about cache were annoying and unimportant, hence the message suppression provided by the -msglevel all=0 switch.

Simultaneously recording and live streaming
I haven’t played with this too much, but I think the unix tee command works for this purpose. So you would take your raspivid line and make it something like:

raspivid -n -o - -t 9999999 -rot 180 -w 640 -h 480 -b 1000000 -fps 20|tee /home/pi/video-`date +%Y%h%d-%H%M`|nc -l 443

and you should get a nice date-and-time-stamped output file while still streaming live to your mplayer! Tee is an under-appreciated command…
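I haven’t tried the playback half myself, but since the recording is a raw H.264 elementary stream, something along these lines ought to work – mplayer has a demuxer for exactly that, and the filename is just an example of what the date command above produces:

$ mplayer -fps 20 -demuxer h264es video-2014Feb22-1200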

Conclusion
I have tinkered with the Pi until I got its camera display to be screaming fast on my PC. I’ve shown how to do this and described some limitations.

Next Act?
I’m contemplating superimposing a grid with tick marks over the displayed video. This will help the robot driver establish their position relative to fixed elements on the field. This may be possible by integrating, for instance, openCV, for which there is some guidance out there. But I fear the real-time-ness may greatly suffer. I’ll post if I make any significant progress!
Update: I did get it to work, and the lag was an issue as suspected. Read about it here.

References and related
First Robotics is currently in season as I write this. The competition this year is Aerial Assist. More on that is at their web site, http://www3.usfirst.org/roboticsprograms/frc
Raspberry Pi camera quick start is a great place to get started for newbies.
Setting one or more static IP addresses on your Pi is documented here.
How not to set up your Pi for real-time video will be documented here.
How to get started on your Pi without a dedicated monitor is described here.
Finally, how to overlay a grid onto your video output (Yes, I succeeded to do it!) is documented here.
Correct source for smplayer for Windows.

Categories
Admin Apache Linux

Recording Host Header in the apache access log

Intro
Guess I’ve made it pretty clear in previous posts that Apache documentation is horrible in my opinion. So the only practical way to learn something is to configure by example. In this post I show how to record the Host header contained in an HTTP request in your Apache log.

The details
Why might you want to do this? Simple: if you have multiple hosts using one access log in common. For instance I have johnstechtalk.com and drjohnstechtalk.com using the same log, which I view as a convenience for myself. But now I want to know if I’m getting my money’s worth out of johnstechtalk.com. I don’t see it as the main URL, but I use it to type into the browser location bar and get directed onto my site – fewer letters.

So I assume you know where to find the log definitions. You start with that as a base and create a custom-defined access log name. These lines, taken from my actual config file, apache2.conf, show this – the first two are the stock formats, the third is my custom one:

LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined
LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\" \"%{Host}i\"" DrJformat

Then I have my virtual server in a separate file containing a reference to that custom format:

#CustomLog ${APACHE_LOG_DIR}/../drjohns/access.log combined
CustomLog ${APACHE_LOG_DIR}/../drjohns/access.log DrJformat

The ${APACHE_LOG_DIR} is an environment variable defined in envvars in my implementation, which may be unconventional. You can replace it with a hard-wired directory name if that suits you better.

There is some confusion out there on the Internet. Host as used in this post refers as I have said to the value contained in the HTTP Host Request header. It is not the hostname of the client.

Here are some recorded accesses resulting from this format early this morning:

108.163.185.34 - - [08/Jan/2014:02:21:32 -0500] "GET /blog/2012/02/tuning-apache-as-a-redirect-engine/ HTTP/1.1" 200 11659 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1500.71 Safari/537.36" "drjohnstechtalk.com"
5.10.83.22 - - [08/Jan/2014:02:21:56 -0500] "GET /blog/2013/03/generate-pronounceable-passwords/ HTTP/1.1" 200 8253 "-" "Mozilla/5.0 (compatible; AhrefsBot/5.0; +http://ahrefs.com/robot/)" "drjohnstechtalk.com"
220.181.108.91 - - [08/Jan/2014:02:23:41 -0500] "GET / HTTP/1.1" 301 246 "-" "Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)" "vmanswer.com"
192.187.98.164 - - [08/Jan/2014:02:25:00 -0500] "GET /blog/2012/02/running-cgi-scripts-from-any-directory-with-apache/ HTTP/1.0" 200 32338 "http://drjohnstechtalk.com/blog/2012/02/running-cgi-scripts-from-any-directory-with-apache/" "Opera/9.80 (Windows NT 5.1; MRA 6.0 (build 5831)) Presto/2.12.388 Version/12.10" "drjohnstechtalk.com"

While most lines contain drjohnstechtalk.com, note that the next-to-last line has the host vmanswer.com, which is another domain I bought and associated with my site, to try it out.
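And to actually answer the money’s-worth question, you can tally the Host values. A quick sketch, assuming the DrJformat above where Host is the last quoted field on each line (the counts shown are made up):

$ awk -F'"' '{print $(NF-1)}' access.log | sort | uniq -c | sort -rn
   1234 drjohnstechtalk.com
     56 johnstechtalk.com
      3 vmanswer.com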

Conclusion
We have shown how to record the contents of the Host header in an Apache access log.

Related rants against apache
Creating a maintenance page with Apache web server
Turning Apache into a Redirect Factory
Running CGI Scripts from any Directory with Apache

Categories
Admin Linux

The IT Detective Agency: Can someone really see what we’re doing in our X sessions?

Intro
We’ve been audited again. My most faithful followers will recall the very popular and educational article on SSL ciphers that came out of our previous audit. So I guess audits are a good thing – they help us extend our learning.

This time we got dinged on that most ancient of protocols, X Windows. So this article is aimed at all those out there who, like me, know enough about X11 to get it more-or-less working, but not enough to claim power user status. The X cognoscenti will find this article redundant with other material already widely available. Whatever. Sometimes I will post an article as though it were my own personal journal – and I like to have all my learning in one easy-to-find place.

The details
The findings amount to this: anyone in our Intranet can take a screen shot of what the people using Exceed are currently seeing. The nice tool (Nessus) actually provided such a screen shot to back up its claim, so this wasn’t a hypothetical. At Drjohn’s we believe in open source, but we do have our secrets and confidential information, so we don’t want everyone to have this type of access.

Here is some of the verbatim wording:

Synopsis
The remote X server accepts TCP connections.
 
Description
The remote X server accepts remote TCP connections. It is possible for an attacker to grab a screenshot of the remote host.
 
Solution
Restrict access to this port by using the 'xhost' command. If the X client/server facility is not used, disable TCP connections to the X server entirely.
 
Risk Factor
Critical
 
CVSS Base Score
10.0 (CVSS2#AV:N/AC:L/Au:N/C:C/I:C/A:C)
 
References
CVE CVE-1999-0526
...
Hosts
drjms.drjs.com (tcp/6002)
It was possible to gather the following screenshot of the remote computer.
...

So in my capacity as an old Unix hand I was asked to verify the findings. This turned out to be dead easy. Here are the steps:

– pick a random Linux machine which has xwd installed
> xwd -debug -display drjms.drjs.com:2 -root -out drjms.xwd
> cat drjms.xwd|xwdtopnm|pnmtopng > drjms.png

The PNG file indeed showed lots of interesting stuff from a screen capture of the user’s X server – amazing.

I should mention that tcp port 6000 maps to X server display 0, 6001 to display 1, 6002 to display 2, etc. That’s why I set my display to drj…com:2, since port 6002 was mentioned in the findings as vulnerable.

My advice: don’t use

> xhost +

or whatever is the equivalent to that in Exceed onDemand.
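If you really do need remote X clients, a more restrained approach is to allow only the specific hosts you trust – xhost +<host> admits just that machine, and xhost - turns access control back on – or to disable TCP listening entirely when starting the X server. A sketch, with a hypothetical hostname; check what your particular X server supports:

> xhost +trusted-host.drjs.com
> xhost -
> X -nolisten tcp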

Guilty
Now I have to admit to using xhost + in just this way in my own past – so convenient! Now that I see how dead easy it makes it to get a screenshot – in fact I tested the whole thing against my own XServer first – I will forego the convenience.

Conclusion
This is the danger in knowing some things, but not enough!

References
But I still stand by use of xhost + in the privacy of your home network, as for instance I show it in my Raspberry Pi without a monitor article.

I picked off that command set from this interesting article: https://www.linuxquestions.org/questions/linux-general-1/commanding-the-x-server-to-take-a-png-screenshot-through-ssh-459009/

Categories
Admin Linux

Setting up my Galaxy S3 for remote host access

Intro
I just got a Samsung Galaxy S3 last week. Before long I was wondering how I might use it to access my cloud server if indeed it was at all possible.

The Details
Looking at my other Android devices I decided to install Terminal Emulator. That’s a cute application, providing shell access to the underlying OS of your phone. But it’s fairly useless. You get a shell, but your account id, 10155, has essentially no privileges, and you can’t do much more than ls, cd, ps and top. You don’t have enough privileges to look into interesting directories. So you can’t do anything interesting. There’s also no native ssh so you can’t connect to another host.

To ssh to my Amazon cloud server I got the app ConnectBot. This app shows promise. I was able to connect to my server. I read some of the introductory screens which gave some helpful tips. So I quickly learned how to resize the window. I found 80×39 is a good size in portrait orientation. Yes, the font is tiny, but it is legible. Getting an elusive Esc or Ctrl character isn’t bad at all, just tap the top half of the screen.

So running constantly refreshing screens like top worked out fine.

vi was a problem. It used multiple colors in displaying my code. Comments, in dark blue, are not legible to me. In fact using vi at all on this device with a soft keyboard is quite unnatural. It might be better to use a curses-based editor like pico, though I haven’t yet tried it. But with vi, I found I could get rid of the multi-colors by setting the TERM environment variable to vt100. It had been screen. Once that was done:

> export TERM=vt100

vi displayed all characters in white, and background in black – quite legible.

Conclusion
It’s a strange world where you can administer a virtual server on a device that fits in the palm of your hand, and achieve fairly powerful effects.

Being a resourceful person, I had alternatives to reach my server. There is a web-based terminal emulator which works surprisingly well. See this post for a description.

ConnectBot is a native ssh remote terminal app and is actually usable as such on a Samsung Galaxy S3, if your eyes are still good! Pay attention to just a few usage tips and you’ll be in full control of your server.

References
See this post for the world’s most natural, unobtrusive ringtone.

Categories
Admin Linux

The IT Detective Agency: Teraterm’s colors washed out

Intro
Some things we do as IT folk are just embarrassingly stupid in retrospect. This is such a story. I present it in keeping with my overall theme of throwing stuff out there in the hopes that some of it helps someone else facing the same situation.

The details
I love teraterm (from logmett.com). Teraterm plus screen (as in /usr/bin/screen) makes for a great combination when you live and die by the command line.

Actually I have been told I only use a small fraction of teraterm’s capabilities. It is programmable, apparently. I’m a very basic user.

So I had the self-assigned task to switch out a DNS server from an older Solaris OS to Linux. I completed the task months ago and I noticed one small side-effect: certain commands had the effect of washing out the font to just about the same color as the background. For the record my text is black (R,G,B) = (0,0,0) with Attribute Normal and my background is beige (255,255,183). When it’s behaving normally it looks very pleasant and is easy on the eyes.

I noticed when I ran man pages the text was all washed out – just a slightly brighter yellow against the beige background. Same story when I ran vi: comments, such as text following #, were washed out.

This was the case if I used a docking station. Using the native laptop display the text was washed out too, but not as severely, so I could just make it out by straining my eyes.

I played with font color and background color in Teraterm, but didn’t really get anywhere, so I learned to cope. I learned that if I piped the man page to more, the text was all-black and I didn’t really lose any functionality. In vi I learned that if I deleted the whitespace before the #, the whole comment became visible, unless it started a line. Kludgy, but it worked and hardly slowed me down – this is after all just one of many, many hosts I was focused on.

Then it came time to migrate the second and last Solaris DNS server to Linux and I noticed the same thing happening on the new Linux server. What the…?

Previously I wasn’t really even sure when the washed-out problem occurred. This time I had no doubt that it was fine until the OS switch.

That in turn points to some difference in the environment, especially because on my many other Linux sessions I did not have this problem.

> env

shows the environment. By comparing where it was working to where it was not, I zeroed in on this environment variable: TERM.

TERM=vt100

where it wasn’t working

and

TERM=screen

where it was.

I set TERM=screen:

> export TERM=screen

and immediately noticed the display working when running vi. Even multiple colors are displayed.

But actually, hmm, the man pages are still washed out, e.g.,

> man -s1 ls

shows NAME, SYNOPSIS and DESCRIPTION are all yellowed out, as well as all switches! That makes it really difficult to decipher.

Oh, well. This mystery is not completely solved.

My point was going to be that in Solaris the TERM=vt100 made sense – things worked better – and so it was in my .bashrc file. In Linux (SLES) it didn’t make so much sense. No setting for TERM seems to be necessary as the value screen gets auto-defined somehow if you’re using screen.

What I had done was copy my .bashrc file from Solaris to Linux not really thinking about it. That’s what did me in on these two servers.
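The moral for shared .bashrc files is not to export TERM unconditionally. A guarded setting along these lines (just a sketch) keeps the Solaris behavior without clobbering what screen sets elsewhere:

# only force vt100 where it's actually needed (e.g., old Solaris hosts);
# under screen, TERM is already "screen" - leave it alone
if [ "$(uname -s)" = "SunOS" ] && [ "$TERM" != "screen" ]; then
    export TERM=vt100
fi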

If I get around to resolving the man pages I’ll revise this post…

2020 update

Still plagued by this issue of washed out colors, I rolled up my sleeves and got it done. Turns out you have to set the Bold font settings separately.  I’m trying settings like in this picture.

References
Teraterm used to be available from logmett.com but (2020 update) is no longer. I’m looking for a link… Here it is: https://osdn.net/projects/ttssh2/releases/

Conclusion
Problems with washed-out colors using teraterm plus screen are resolved. Once again, this was mostly a self-inflicted problem.

Categories
Linux

Is Mining Bitcoins on the Amazon Cloud the Road to Riches?

Intro
Answer: Not as far as I can tell. Of course it’s irresistible for us technical folks to try. Here are my back-of-the-envelope calculations for my trial.

The details
A currency that’s not linked to any one government’s policies has a lot of attraction. Bitcoin is that currency, and it seems to be catching on. I knew people last year who were “mining” Bitcoins. I had no idea what they were talking about, but I could tell from what they were saying that they were trying to create more currency units. How strange and wonderful, a currency that gets minted by potentially anyone.

I learn mostly by doing, so I decided to download one of those mining programs and see what this was all about.

Well, I still haven’t learned what it’s all about because it’s more complicated than I thought, but I learned what approach not to take. And that’s what I’m sharing here.

I downloaded bfgminer for my CentOS Amazon EC2 server. That in itself was a good exercise as it needed a whole ecosystem of other packages to be installed first. On my system I found I needed ncurses-devel and libcurl-devel, which brought in other packages so that by the time they were installed I had installed all these packages:

libcurl-devel-7.19.7-35.el6
curl-7.19.7-35.el6
libidn-devel-1.18-2.el6
libcurl-7.19.7-35.el6
libssh2-1.4.2-1.el6
ncurses-static-5.7-3.20090208.el6
ncurses-devel-5.7-3.20090208.el6

It’s also designed for a different type of computing environment. Getting it to compile was one thing, but getting it to actually run is another.

At first it found nothing to run on. So I had to recompile, this time specifying:

$ ./configure --enable-cpumining

to enable use of my virtual CPU.

It wanted a pool URL and other things I didn’t have when it started up. I finally found a way to run it in test mode.

The results
My setup at Amazon could calculate 0.4 mega hashes per second. Doesn’t sound too bad, right? Wrong. Looking at some of the relevant numbers and doing a back-of-the-envelope calculation we have:

– total world computing power dedicated to this effort: 60,000 Giga hashes per second
– rate of blocks being written: six per hour
– number of bitcoins in a block: 25
– value of a bitcoin: $78

From this we have:
Minimum computation required for a DIY effort to produce one block:

Effort = 10 minutes * 60 s/min * 60×10^12 hashes/s = 3.6×10^16 hashes =~ 4×10^16 hashes

So with my resources on my small instance this will take me:

time to make a block = 4×10^16 hashes/block / 0.4×10^6 hashes/s = 10^11 s
= 10^11 s * year/(π•10^7 s) =~ 3×10^3 years

Why my fixation on a block as the minimum unit of bitcoins? Because in my five minutes of reading that seems to be the minimum acceptable unit to be able to mint more bitcoins.

By the way, every physicist knows that a year has π•10^7 seconds! That’s one of those useful numbers we carry around in our heads.

For the scientific-notation challenged, I’m saying that it will take me 3,000 years to create a block of bitcoins by myself!
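bc can grind through that if you don’t trust the mental math (-l just loads the math library):

$ echo "(600 * 60*10^12) / (0.4*10^6) / (3.14159*10^7)" | bc -l

which prints about 2865 – call it 3×10^3 years.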

Now let’s have some fun with this. Of course Amazon being the premier cloud hosting company that it is, you can rent (I have heard of this actually being done) 30,000 servers at once.

To be continued…

Appendix
How I measured my hash rate
I ran

$ bfgminer --benchmark

Then I quit it and got these results:

 [2013-04-16 08:25:39]
Summary of runtime statistics:
 
 [2013-04-16 08:25:39] Started at [2013-04-15 12:55:43]
 [2013-04-16 08:25:39] Pool: Benchmark
 [2013-04-16 08:25:39] CPU hasher algorithm used: c
 [2013-04-16 08:25:39] Runtime: 19 hrs : 29 mins : 56 secs
 [2013-04-16 08:25:39] Average hashrate: 0.4 Megahash/s
 [2013-04-16 08:25:39] Solved blocks: 0
 [2013-04-16 08:25:39] Best share difficulty: 0
 [2013-04-16 08:25:39] Queued work requests: 0
 [2013-04-16 08:25:39] Share submissions: 0
 [2013-04-16 08:25:39] Accepted shares: 0
 [2013-04-16 08:25:39] Rejected shares: 0
 [2013-04-16 08:25:39] Accepted difficulty shares: 0
 [2013-04-16 08:25:39] Rejected difficulty shares: 0
 [2013-04-16 08:25:39] Hardware errors: 0
 [2013-04-16 08:25:39] Efficiency (accepted / queued): 0%
 [2013-04-16 08:25:39] Utility (accepted shares / min): 0.00/min
 
 [2013-04-16 08:25:39] Discarded work due to new blocks: 46376
 [2013-04-16 08:25:39] Stale submissions discarded due to new blocks: 0
 [2013-04-16 08:25:39] Unable to get work from server occasions: 0
 [2013-04-16 08:25:39] Work items generated locally: 0
 [2013-04-16 08:25:39] Submitting work remotely delay occasions: 0
 [2013-04-16 08:25:39] New blocks detected on network: 0
 
 [2013-04-16 08:25:39] Summary of per device statistics:
 
 [2013-04-16 08:25:39] CPU0                | 5s:  0.0 avg:377.4 u:  0.0 kh/s | A:0 R:0 HW:0 U:0.0/m

A few lines from the top you can see the average hash rate of 0.4 Megahashes/second.

Other resources
Bitcoin exchange value really fluctuates a lot compared to conventional government-sponsored currencies! Go here for the current value.

A timely and informative intro to Bitcoin is available here.

Categories
Linux

Solving this week’s NPR weekend puzzle with a few Linux commands

Intro
I listen to the NPR puzzle every Sunday morning. I’m not particularly good at solving them, however – I usually don’t. But I always consider if I could get a little help from my friendly Linux server, i.e., if it lends itself to solution by programming. As soon as I heard this week’s challenge I felt that it was a good candidate. I was not disappointed…

The details
So Will Shortz says think of a common word with four letters. Now add O, H and M to that word, scramble the letters to make another common word in seven letters. The words are both things you use daily, and these things might be next to each other.

My thought pattern on that is that, great, we can look through a dictionary of seven-letter words which contain O, H and M. That already might be sufficiently limiting.

This reminded me of using the built-in Linux dictionary to give me some great tips when playing Words with Friends, which I document here.

In my CentOS installation my dictionary is /usr/share/dict/linux.words. It has 479,829 words:

$ cd /usr/share/dict; wc linux.words

That’s a lot. So of course most of them are garbagey words. Here’s the beginning of the list:

$ more linux.words

1080
10-point
10th
11-point
12-point
16-point
18-point
1st
2
20-point
2,4,5-t
2,4-d
2D
2nd
30-30
3-D
3-d
3D
3M
3rd
48-point
4-D
4GL
4H
4th
5-point
5-T
5th
6-point
6th
7-point
7th
8-point
8th
9-point
9th
-a
A
A.
a
a'
a-
a.
A-1
A1
a1
A4
A5
AA
aa
A.A.A.
AAA
aaa
AAAA
AAAAAA
...

You see my point? But amongst the garbage are real words, so it’ll be fine for our purpose.

What I like to do is build up to increasingly complex constructions. Mind you, I am no command-line expert. I am an experimentalist through-and-through. My development cycle is Try, Demonstrate, Fix; Try, Demonstrate, Improve. The whole process can sometimes be finished in under a minute, so it must have merit.

First try:

$ grep o linux.words|wc

 230908  230908 2597289

OK. Looks like we got some work to do, yet.

Next (using up-arrow key to recall previous command, of course):

$ grep o linux.words|grep m|wc

  60483   60483  724857

Next:

$ grep o linux.words|grep m|grep h|wc

  15379   15379  199724

Drat. Still too many. But what are we actually producing?

$ grep o linux.words|grep m|grep h|more

abbroachment
abdominohysterectomy
abdominohysterotomy
abdominothoracic
Abelmoschus
abhominable
abmho
abmhos
abohm
abohms
abolishment
abolishments
abouchement
absmho
absohm
Acantholimon
acanthoma
acanthomas
Acanthomeridae
acanthopomatous
accompliceship
accomplish
accomplishable
accomplished
accomplisher
accomplishers
accomplishes
accomplishing
accomplishment
accomplishments
accomplisht
accouchement
accouchements
accroachment
Acetaminophen
acetaminophen
acetoamidophenol
acetomorphin
acetomorphine
acetylmethylcarbinol
acetylthymol
Achamoth
achenodium
achlamydeous
Achomawi
...

Of course, words with capitalizations, words longer and shorter than seven letters – there’s lots of tools left to cut this down to manageable size.

With this expression we can simultaneously require exactly seven letters in our words and require only lowercase alphabetical letters: egrep '^[a-z]{7}$'. This is an extended regular expression that matches the beginning (^) and end ($) of the string, only characters a-z, and exactly seven of them ({7}).

With that vast improvement, we’re down to 352 entries, a list small enough to browse by hand. But the solution still didn’t pop out at me. Most of the words are obscure ones, which should automatically be excluded because we are looking for common words. We have:

$ grep o linux.words|grep m|grep h|egrep '^[a-z]{7}$'|more

achroma
alamoth
almohad
amchoor
amolish
amorpha
amorphi
amorphy
amphion
amphora
amphore
apothem
apothgm
armhole
armhoop
bemouth
bimorph
bioherm
bochism
bohemia
bohmite
camooch
camphol
camphor
chagoma
chamiso
chamois
chamoix
chefdom
chemizo
chessom
chiloma
chomage
chomped
chomper
chorism
chrisom
chromas
chromed
chromes
chromic
chromid
chromos
chromyl
...

So I thought it might be inspiring to put the four letters you would have if you take away the O, H and M next to each word, right?

I probably ought to use xargs but never got used to it. I’ve memorized this other way:

$ grep o linux.words |grep m|grep h|egrep '^[a-z]{7}$'|while read line; do
> s=`echo $line|sed s/o//|sed s/h//|sed s/m//`
> echo $line $s
> done|more

sed is an old standard used to do substitutions. sed s/o// for example is a filter which removes the first occurrence of the letter O.

I could almost use the tr command, as in

> …|tr -d '[ohm]'

in place of all those sed statements, but I couldn’t solve the problem of tr deleting all occurrences of the letters O, H and M. And the solution didn’t jump out at me.
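Incidentally, the three seds can at least be collapsed into one invocation, since sed takes multiple substitution commands separated by semicolons and each s/o// still deletes only the first occurrence:

> …|sed 's/o//;s/h//;s/m//'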

So until I figure that out, use sed. That gives:

achroma acra
alamoth alat
almohad alad
amchoor acor
amolish alis
amorpha arpa
amorphi arpi
amorphy arpy
amphion apin
amphora apra
amphore apre
apothem apte
apothgm aptg
armhole arle
armhoop arop
bemouth beut
bimorph birp
bioherm bier
bochism bcis
bohemia beia
bohmite bite
camooch caoc
camphol capl
camphor capr
chagoma caga
chamiso cais
chamois cais
chamoix caix
chefdom cefd
chemizo ceiz
chessom cess
chiloma cila
chomage cage
chomped cped
chomper cper
chorism cris
chrisom cris
chromas cras
chromed cred
chromes cres
chromic cric
chromid crid
chromos cros
chromyl cryl
...

Friday update
I can now reveal the section listing that reveals the answer because the submission deadline has passed. It’s here:

...
schmoes sces
schmoos scos
semihot seit
shahdom sahd
shaloms sals
shamalo saal
shammos sams
shamois sais
shamoys says
shampoo sapo
shimose sise
shmooze soze
shoeman sean
sholoms slos
shopman span
shopmen spen
shotman stan
...

See it? I think it leaps out at you:

shampoo sapo

becomes of course:

soap
shampoo

!

They’re common words found next to each other that obey the rules of the challenge. You can probably tell I’m proud of solving this one. I rarely do. I hope they don’t call on me because I also don’t even play well against the radio on Sunday mornings.

Conclusion
At the time I first wrote this I couldn’t give out the answer because the submission deadline was a few days away. But I will say that the answer pretty much pops out at you when you review the full listing generated with the above sequence of commands. There is no doubt whatsoever.

I have shown how a person with modest command-line familiarity can solve a word problem that was put out on NPR. I don’t think people are so much interested in learning a command line because there is no instant gratification and the learning curve is steep, but for some it is still worth the effort. I use it, well, all the time. Solving the puzzle this way took a lot longer to document, but probably only about 30 minutes of actual tinkering.

Categories
Admin Linux Raspberry Pi Security

Generate Pronounceable Passwords

2017 update
Turns out gpw is an available package in Debian Linux, including Raspbian which runs on Raspberry Pi. Who knew? A simple sudo apt-get install gpw will provide it. So I guess the source wasn’t lost at all.

Intro
15 years ago I worked for a company that wanted to require authentication in order to browse to the Internet. I searched around for something.

What I came up with is gpw – generate pronounceable passwords.

The details
I think this approach to secure passwords is no longer best practice, but I still think it has a place for some applications. What it does is analyze a dictionary that you’ve fed it. It then determines the frequency of occurrence of what it calls trigraphs – I guess that’s three consecutive letter combinations. Then it generates random, non-dictionary passwords using those trigraphs, which are presumably wholly or partially pronounceable.

Cute, huh? I’d say one problem is that if the bad guys got wind of this approach, the number of combinations they’d have to use to do password cracking is severely restricted.

Sophos has a recommendation for forming good strong passwords. See their blog post about the 50 worst passwords, which contains a link to a video on how to choose a good password.

But I still have a soft spot for this old approach, and I think it’s OK to use it, get your password such as inglogri, add a few non-alpha-numeric characters and come up with a reasonably good, memorable password. Every site you use should really get a different password, and this tool might make that actually feasible.

I run it as:

$ gpw

which produces:

seminour
shnopoos
alespige
olpidest
hastrewe
nsivelys
shaphtra
bratorid
melexseu
sheaditi

Its output changes every time, of course.

I mostly run it this way:

$ gpw 1

which produces only a single password, for instance:

ojavishd

You see how these passwords are sort of like words, but not words? Much more memorable than those completely random ones you are sometimes forced to type and which are impossible to remember?

I noted the location where I pulled it from the web 15 years ago as is my custom, but it is no longer available. So I have decided to make it available. I tweaked it to compile on CentOS with a C++ compiler.

Here is the CentOS v 6 binary for x86_64 architecture and README file.

Here is the tar file with the sources and the binary mentioned above. Run a make clean first to begin building it.

Enjoy!

Potential Problems
I know when we originally used it to assign 15,000 unique passwords, the randomness algorithm was so bad that I believe some people received identical passwords! So the total number of generatable passwords might be severely limited. Please check this before using it in any meaningful way. I would naively expect and hope that it could generate about two to three times the number of words in my dictionary (/usr/share/dict/linux.words, with 479,829 words). But I never verified this.

2017 update
I ran it, 100 passwords at a time, on my Raspberry Pi for a couple minutes. I created 275,900 passwords, of which 269,407 were unique. Strange. So you get some repeats, but you mostly get new passwords.
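For the record, the test amounted to something like this sketch (recall gpw takes the number of passwords to print as its argument):

$ for i in $(seq 2759); do gpw 100; done > pws.txt
$ wc -l < pws.txt           # 275900 generated
$ sort -u pws.txt | wc -l   # 269407 unique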

Further, I was going to tweak the code to generate 9-letter passwords which would presumably be more secure. But they just didn’t look as good to me, and I’ve only ever used it with 8 letters. So I just decided to keep it at 8 letters. You can experiment with that if you want.

More fun with the Linux dictionary
For another fun example using the Linux dictionary see how I solved the NPR weekend puzzle using it, described here.

A note for Debian Linux users (Ubuntu, Raspberry Pi, …)
The dictionary there is /usr/share/dictd/wn.index. You’ll need to update the Makefile to reflect this. This post about Words with Friends explains the packages I used to provide that dictionary.

Conclusion
An old pronounceable password generating program has been dusted off and given back to the open source community. It may not be state-of-the-art, but it has a role for some usages.

References and related
Want truly random passwords? I want to call your attention to random.org’s password generator: https://www.random.org/passwords/

Most people are becoming familiar with the idea of not reusing passwords but I don’t know if everyone realizes why. This article is a comprehensive review of the topic, plus review of password vaults like Lastpass, etc which you may have heard of: https://pixelprivacy.com/resources/reusing-passwords/