
A first taste of OpenCV on a Raspberry Pi 3

Intro
I’ve been doing some vision processing with OpenCV on a Raspberry Pi 3. I am a rank amateur, so my meager efforts will not be of much help to anyone else. My idea is that maybe this could be used on an FRC (FIRST Robotics) team’s robot. Hence I will also be getting into some tangential areas where I am more comfortable.

Even though this is a work in progress I wanted to get some of it down before I forget what I’ve done so far!

Tangential Stuff

Disable WiFi
You shouldn’t have peripheral devices with WiFi enabled. The Raspberry Pi 3 comes with built-in WiFi. Here’s how to turn it off.

Add the following line to your /boot/config.txt file:

dtoverlay=pi3-disable-wifi

Reboot.

If it worked you should only see the loopback and eth0 interfaces in response to the ip link command, something like this:

$ ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether b8:27:eb:3f:92:f3 brd ff:ff:ff:ff:ff:ff

Hardcode an IP address the simple-minded way
On a lark I decided to try the old-fashioned method I first used on Sun Solaris, or was it even DEC Ultrix? That is, ifconfig. I thought it had been deprecated, but it works well enough for my purpose.

So something like

$ sudo ifconfig eth0 192.168.1.160

does the job, as long as the network interface is up and connected.

Autolaunch a VNC Server so we can haul the camera image back to the driver station
$ vncserver -geometry 640x480 -Authentication=VncAuth :1

Launch our Python-based OpenCV program and send its output to the VNC virtual display

$ export DISPLAY=:1
$ /home/pi/.virtualenvs/cv/bin/python green.py > /tmp/green.log 2>&1 &

The above was just illustrative. What I actually have is a single script, launcher.sh, which puts it all together. Here it is.

#!/bin/sh
# DrJ
sleep 2
# set a hard-wired IP - this will have to change!!!
sudo ifconfig eth0 192.168.1.160
# launch small virtual vncserver on DISPLAY 1
vncserver -Authentication=VncAuth :1
# launch UDP server
$HOME/server.py > /tmp/server.log 2>&1 &
# run virtual env
cd $HOME
# don't need virtualenv if we use this version of python...
#. /home/pi/.profile
#workon cv
#
# now launch our python video capture program
#
export DISPLAY=:1
/home/pi/.virtualenvs/cv/bin/python green.py > /tmp/green.log 2>&1 &

OpenCV (Open Computer Vision)
OpenCV is a bear, and you have to really work to get it onto a Pi 3. There is no apt-get install opencv. You have to download and compile the thing. There are many steps and few accurate documentation sources on the Internet as of this writing (January 2018).

I think this guide by Adrian is the best guide:

Install guide: Raspberry Pi 3 + Raspbian Jessie + OpenCV 3

However I believe I still ran into trouble, and I needed this cmake command instead of the one he provides:

cmake -D CMAKE_BUILD_TYPE=RELEASE \
        -D CMAKE_INSTALL_PREFIX=/usr/local \
        -D INSTALL_C_EXAMPLES=OFF \
        -D ENABLE_PRECOMPILED_HEADERS=OFF \
        -D INSTALL_PYTHON_EXAMPLES=ON \
        -D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib-3.1.0/modules \
        -D BUILD_EXAMPLES=ON ..

I also replaced OpenCV references to version 3.0.0 with 3.1.0.

I also don’t think I got make -j4 to work; just plain make.
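
Once the build and install finish, a quick sanity check from the virtualenv’s Python confirms the bindings are in place. A minimal check (the expected version string assumes the 3.1.0 build described above):

# verify the cv2 bindings compiled and installed correctly
import cv2
print(cv2.__version__)   # should print 3.1.0 for this build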

An interesting getting-started tutorial on images, OpenCV, and Python:

http://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_gui/py_image_display/py_image_display.html#display-image
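
In the spirit of that tutorial, the minimal read-and-display program looks something like the sketch below. The filename test.jpg is just a placeholder for any image you have on hand.

import cv2

# load an image from disk and display it in a window
img = cv2.imread('test.jpg')   # placeholder filename; imread returns None if the file is missing
cv2.imshow('image', img)
cv2.waitKey(0)                 # wait indefinitely for any keypress
cv2.destroyAllWindows()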

Simplifying launch of VNC Viewer
I wrote a simple-minded DOS .bat file which launches the UltraVNC viewer with a stored password, so a double-click is all it takes. Here it is:

if not "%minimized%"=="" goto :minimized
set minimized=true
start /min cmd /C "%~dpnx0"
goto :EOF
:minimized
c:\apps\ultravnc\vncviewer -password raspberry 192.168.1.160:1

I’m sure there’s a better way but I don’t know it.

The setup
We have a USB camera plugged into the Pi.
A green disc LED light.
A green filter over the camera lens.
A target with two parallel strips of retro-reflective tape we are trying to suss out from everything else.
Some sliders to control the sensitivity of our color matching.
A requirement to analyze the video in OpenCV as well as display it on the driver station.
Have opencv calculate the pixel distance (“correction”) from image center of the “target” (the two parallel strips).
Send this correction via a UDP server to any client who wants to know the correction.
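
Before the full program, here is the heart of the vision pipeline distilled into a sketch. This is illustrative only: it assumes a camera at index 0 delivering 640x480 frames, and the HSV bounds are hard-wired stand-ins for the slider values.

import cv2
import numpy as np

cap = cv2.VideoCapture(0)
lower_green = np.array([0, 0, 250])    # hard-wired stand-ins for the slider values
upper_green = np.array([3, 255, 255])
_, frame = cap.read()
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, lower_green, upper_green)    # keep only "green" pixels
img = cv2.blur(mask, (5, 5))                         # blur to reduce noise
ret, thresh = cv2.threshold(img, 127, 255, 0)
# OpenCV 3.x findContours returns three values
im2, contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    x, y, w, h = cv2.boundingRect(cnt)
    if h > 30 and w < h/3:    # tall, narrow contour: looks like a strip of tape
        Xerror = (-1) * (320 - (x + w/2))    # pixel offset from image center
        Yerror = 240 - (y + h/2)
        print("correction: %d %d" % (Xerror, Yerror))
        break
cap.release()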

Here is our current python program green.py which does these things.

import Tkinter as tk
from threading import Thread,Event
from multiprocessing import Array
from ctypes import c_int32
import cv2
import numpy as np
import sys
#from Tkinter import *
#cap = cv2.VideoCapture(0)
global x
global f
x = 1
y = 1
f = "green.txt"
 
class CaptureController(tk.Frame):
    NSLIDERS = 7
    def __init__(self,parent):
        tk.Frame.__init__(self)
        self.parent = parent
 
        # create a synchronised array that other threads will read from
        self.ar = Array(c_int32,self.NSLIDERS)
 
        # create NSLIDERS Scale widgets
        self.sliders = []
        for ii in range(self.NSLIDERS):
            # through the command parameter we ensure that the widget updates the sync'd array
            s = tk.Scale(self, from_=0, to=255, length=650, orient=tk.HORIZONTAL,
                         command=lambda pos,ii=ii:self.update_slider(ii,pos))
            if ii == 0:
                s.set(0)  #green min
            elif ii == 1:
                s.set(0)
            elif ii == 2:
                s.set(250)
            elif ii == 3:
                s.set(3)  #green max
            elif ii == 4:
                s.set(255)
            elif ii == 5:
                s.set(255)
            elif ii == 6:
                s.set(249)  #way down below
            s.pack()
            self.sliders.append(s)
 
        # Define a quit button and quit event to help gracefully shut down threads
        tk.Button(self,text="Quit",command=self.quit).pack()
        self._quit = Event()
        self.capture_thread = None
 
    # This function is called when each Scale widget is moved
    def update_slider(self,idx,pos):
        self.ar[idx] = c_int32(int(pos))
 
    # This function launches a thread to do video capture
    def start_capture(self):
        self._quit.clear()
        # Create and launch a thread that will run the video_capture function
#        self.capture_thread = Thread(cap = cv2.VideoCapture(0), args=(self.ar,self._quit))
        self.capture_thread = Thread(target=video_capture, args=(self.ar,self._quit))
        self.capture_thread.daemon = True
        self.capture_thread.start()
 
    def quit(self):
        self._quit.set()
        try:
            self.capture_thread.join()
        except TypeError:
            pass
        self.parent.destroy()
 
# This function runs the video capture and analysis loop in its own thread,
# reading the current slider values from the shared array
def video_capture(ar,quit):
    print ar[:]
    cap = cv2.VideoCapture(0)
    Xerror = 0
    Yerror = 0
    XerrorStr = '0'
    YerrorStr = '0'
    while not quit.is_set():
        # the slider values are all readily available through the indexes of ar
        # i.e. w1 = ar[0]
        # w2 = ar[1]
        # etc.
        # Take each frame
        _, frame = cap.read()
        # Convert BGR to HSV
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        # define range of green color in HSV
        lower_green = np.array([ar[0],ar[1],ar[2]])
        upper_green = np.array([ar[3],ar[4],ar[5]])
        # Threshold the HSV image to get only green colors
        mask = cv2.inRange(hsv, lower_green, upper_green)
        # Bitwise-AND mask and original image
        res = cv2.bitwise_and(frame,frame, mask= mask)
        cv2.imshow('frame', frame)
#        cv2.imshow('mask',mask)
#        cv2.imshow('res',res)
        #------------------------------------------------------------------
        img = cv2.blur(mask,(5,5))   #filter (blur) image to reduce errors
        cv2.imshow('img',img)
        ret,thresh = cv2.threshold(img,127,255,0)
        im2,contours,hierarchy = cv2.findContours(thresh, cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
        print 'number of contours==640x480====================  ', len(contours)
        target=0
        if len(contours) > 0:
            numbercontours = len(contours)
            while numbercontours > 0:
                numbercontours = numbercontours -1  # contours start at 0
                cnt = contours[numbercontours]   # work backwards from the last contour; could look at each in turn
                x,y,w,h = cv2.boundingRect(cnt)
#
#---line below has the limits of the area of the target-----------------------
#
                #if w * h > 4200 and w * h < 100000:  #area of capture must exceed  to exit loop
                if h > 30 and w < h/3:  # capture must be tall and narrow to exit loop
                    print ' X   Y  W  H  AREA      Xc  Yc      xEr yEr'
                    Xerror = (-1) * (320 - (x+(w/2)))
                    XerrorStr = str(Xerror)
                    Yerror = 240 - (y+(h/2))
                    YerrorStr = str(Yerror)
                    print  x,y,w,h,(w*h),'___',(x+(w/2)),(y+(h/2)),'____',Xerror,Yerror
                    break
 
#-------        draw horizontal and vertical center lines below
                cv2.line(img,(320,0),(320,480),(135,0,0),5)
                cv2.line(img,(0,240),(640,240),(135,0,0),5)
                displaySTR = XerrorStr + '  ' + YerrorStr
                font = cv2.FONT_HERSHEY_SIMPLEX
                cv2.putText(img,displaySTR,(10,30), font, .75,(255,255,255),2,cv2.LINE_AA)
                cv2.imshow('img',img)
# write to file for our server
                sys.stdout = open(f,"w")
                print 'H,V:',Xerror,Yerror
                sys.stdout = sys.__stdout__
                target=1
                #
                #--------------------------------------------------------------------
        if target==0:
                # no target found. print non-physical values out to a file
                sys.stdout = open(f,"w")
                print 'H,V:',1000,1000
                sys.stdout = sys.__stdout__
        k = cv2.waitKey(1) & 0xFF    # parameter is wait in milliseconds
        if k == 27:   # esc key on keyboard
            cap.release()
            cv2.destroyAllWindows()
            break
 
if __name__ == "__main__":
    root = tk.Tk()
    selectors = CaptureController(root)
    selectors.pack()
#    q = tk.Label(root, text=str(x))
#    q.pack()
    selectors.start_capture()
    root.mainloop()

Well, that was a big program by my standards.
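
One wrinkle worth noting: green.py hands its correction to the server through the file green.txt, so a reader can catch the file half-written; that is why the server below keeps re-reading until it sees well-formed data. A sturdier approach, sketched here but untested on our setup, is to write a temporary file and rename it, since the rename is atomic:

import os

def write_correction(xerror, yerror, fname="green.txt"):
    # write to a temp file first, then rename; readers never see a partial file
    tmp = fname + ".tmp"
    with open(tmp, "w") as out:
        out.write("H,V: %d %d\n" % (xerror, yerror))
    os.rename(tmp, fname)   # atomic when both paths are on the same filesystem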

Here’s the UDP server that goes with it. I call it server.py.

#!/usr/bin/env python
# inspired by https://gist.github.com/Manouchehri/67b53ecdc767919dddf3ec4ea8098b20
# first we get the client connection, then we read data from the file. This order is important so we get the latest, freshest data!
 
 
import socket
import re
 
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
 
server_address = '0.0.0.0'
server_port = 5005
 
server = (server_address, server_port)
sock.bind(server)
print("Listening on " + server_address + ":" + str(server_port))
 
while True:
# read up to 32 bytes from client
        payload, client_address = sock.recvfrom(32)
        print("Request from client: " + payload)
# get correction from file
        while True:
                with open('green.txt','r') as myfile:
                        data=myfile.read()
# file contents look like: H,V:  9 -14
                data = data.split(":")
                if len(data) == 2:
                        break
        sent = sock.sendto(data[1], client_address)

For development testing I wrote a UDP client to go along with that server. I called it recvudp.py.

#!/usr/bin/env python
import socket
UDP_IP = "127.0.0.1"
UDP_PORT = 5005
 
print "UDP target IP:", UDP_IP
print "UDP target port:", UDP_PORT
 
sock = socket.socket(socket.AF_INET, # Internet
                 socket.SOCK_DGRAM) # UDP
# need to send something to prompt the server's reply...
MESSAGE = "correction"
sock.sendto(MESSAGE, (UDP_IP, UDP_PORT))
# get data
data, addr = sock.recvfrom(1024) # buffer size is 1024 bytes
print "received message:", data

Problems
Lag is bad. Probably 1.5 seconds or so.
Video is green, but then we designed it that way.
Bandwidth consumption of VNC is way too high. We’re supposed to be under 7 Mbps and it is closer to 12 Mbps right now.
Probably won’t work under the bright lights of an arena or gym.
Sliders should be labelled.
Have to turn a pixel correction into an angle; see the sketch after this list.
Have to suppress initial warning about ssh default password.
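
For that pixel-to-angle conversion the math is straightforward once you know the camera’s horizontal field of view. A sketch, assuming a 640-pixel-wide image and a field of view you have measured for your own camera (the next post shows one way to measure it; the Pi camera module came out to about 31.6 degrees). The names here are made up for illustration:

import math

FOV_H = 31.6   # horizontal field of view in degrees; measure for your camera
WIDTH = 640    # image width in pixels

def pixels_to_degrees(xerror):
    # the image plane is flat, so pixel offset is proportional to the
    # tangent of the angle, not to the angle itself
    half_fov = math.radians(FOV_H / 2.0)
    return math.degrees(math.atan((xerror / (WIDTH / 2.0)) * math.tan(half_fov)))

print(pixels_to_degrees(160))   # a quarter-screen offset is about 8 degrees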

To be improved, hopefully…


Screaming Streaming on the Raspberry Pi

Intro
The Raspberry Pi plus camera is just irresistible fun. But I had a strong motivation to get it to work the way I wanted as well: a FIRST Robotics team that was planning on using it for vision for the drive team. So of course those of us working on it wanted to offer something with a real-time view of the field, a fast refresh rate, and good (though not necessarily perfect) reliability. Was it all possible? Before starting I didn’t know. In fact I started the season in January not knowing the team would want to use a Raspberry Pi, much less that there was a camera for it! But we were determined to push through the obstacles and share our love of the Pi with the students. Eventually we found a way.

The details
Well, we sure made a lot of missteps along the way, which is why I’m excited to write this article to help others avoid some of the pain points. It needs to be fleshed out some more, but this post will be expanded to become a litany of what didn’t work, and that list is pretty long! All of it borrowed from well-meaning people on various Internet sites.

The essence of the solution is the quick start page – I always search for Raspberry Pi camera quick start to find it – which basically has the right idea, but isn’t fleshed out enough. So raspivid + nc on the Pi, plus netcat (nc) and mplayer on a PC, will do the trick. Below I provide a tutorial on how to get it all to work.

Additional requirement
Remember I wanted to make this almost fool-proof. So I wanted the Pi to be like a passive device that doesn’t need more than a one-time configuration. Power-up and she’s got to be ready. Cut power and re-power, it better be ready once more. No remote shell logins, no touching it. That’s what happens when it’s on the robot – it suddenly gets powered up before the match.

Here is the startup script I created that does just that. I put it in /etc/init.d/raspi-vid:

#! /bin/sh
# /etc/init.d/raspi-vid
# 2/2014
 
# The following part always gets executed.
echo "This part always gets executed"
 
# The following part carries out specific functions depending on arguments.
case "$1" in
  start)
    echo "Starting raspi-vid"
# -n means don't show preview on console; -rot 180 to make image right-side-up
# run a loop because this command dies unless it can connect to a listener
    while /bin/true; do
# if acting as client do this. Probably it's better to act as server however
# try IPs of the production PC, test PC and home PC
#      for IP in 10.31.42.5 10.31.42.6 192.168.2.2; do
#        raspivid -n -o - -t 9999999 -rot 180 -w 640 -h 480 -b 800000 -fps 15|nc $IP 80
#      done
#
# act as super-simple server listening on port 443 using nc
# -n means don't show preview on console; -rot 180 to make image right-side-up
# -b (bitrate) of 1000000 (~1 Mbit/s) seems adequate for our 640x480 video image
# so is -fps 20 (20 frames per second)
# To view output fire up mplayer on a PC. I personally use this command on my PC:
# c:\apps\netcat\nc 192.168.2.100 443|c:\apps\smplayer\mplayer\mplayer -ontop -fps 60 -vo gl -cache 1024 -geometry 600:50 -noborder -msglevel all=0 -
      raspivid -n -o - -t 9999999 -rot 180 -w 640 -h 480 -b 1000000 -fps 20|nc  -l 443
# this nc server craps out after each connection, so just start up the next server automatically...
      sleep 1;
    done
    echo "raspi-vid is alive"
    ;;
  stop)
    echo "Stopping rasip-vid"
    pkill 'raspi-?vid'
    echo "raspi-vid is dead"
    ;;
  *)
    echo "Usage: /etc/init.d/rasip-vid {start|stop}"
    exit 1
    ;;
esac
 
exit 0

I made it run on system startup thusly:

$ cd /etc/init.d; sudo chmod +x raspi-vid; sudo update-rc.d raspi-vid defaults

Of course I needed those extra packages, mplayer and netcat:

$ sudo apt-get install mplayer netcat

Actually you don’t really need mplayer on the Pi, but I frequently used it there simply to study the man pages, which I never did figure out how to bring up on the Windows installation.

On the PC I needed mplayer and netcat to be installed. At first I resisted doing this, but in the end I caved. I couldn’t meet all my requirements without some special software on the PC, which is unfortunate but OK in our circumstances.

I also bought a spare camera to play with my Pi at home. It’s about $25 from newark.com, though the shipping is another $11! If you’re an Amazon Prime member that’s a better bet – about $31 when I looked the other day. Wish I had seen that earlier!

I guess I used the links provided by the quick start page for netcat and mplayer, but I forget. As I was experimenting, I also installed smplayer. In fact I ended up using the mplayer provided by smplayer. That may not be necessary, however.

A word of caution about smplayer
smplayer, if taken from the wrong source (see references for correct source), will want to modify your browser toolbar and install adware. Be sure to do the Expert install and uncheck everything. Even so it might install some annoying game which can be uninstalled later.

Lack of background
I admit, I am no Windows developer! So this is going to be crude…
I relied on my memory of some basics I picked up over the years, plus analogies to bash shell programming, where possible.

I kept tweaking a batch file on my desktop, so I added Notepad to my Send To menu. Briefly, you type

shell:sendto

where it says Search programs and files after clicking the Start button. Then drag a copy of notepad from c:\windows\notepad into the window that pops up.

Now we can edit our .bat file to our heart’s content.

So I created a mplayer.bat file and saved it to my desktop. Here are its contents.

if not "%minimized%"=="" goto :minimized
set minimized=true
start /min cmd /C "%~dpnx0"
goto :EOF
:minimized
rem Anything after here will run in a minimized window
REM DrJ 2/2014
rem 
rem very simple mplayer batch file to play output from a Raspberry Pi video stream
rem
rem Use the following line to set up a server
REM c:\apps\netcat\nc -L -p 80|c:\apps\smplayer\mplayer\mplayer -fps 30 -vo gl -cache 1024 -msglevel all=0 -

rem Set up as client with this line...
rem put in loop because I want it to start up whenever there is something listening on port 80 on the server
 
:loop

 
rem this way we are acting as a client - this is more how you'd expect and want things to work
c:\apps\netcat\nc 192.168.2.102 443|c:\apps\smplayer\mplayer\mplayer -ontop -fps 60 -vo gl -cache 1024 -geometry 600:50 -noborder -msglevel all=0 -

rem stupid trick to sleep for about a second. Boy windows shell is lacking...
ping 127.0.0.1 -n 2 -w 1000 > NUL
 
goto loop

A couple notes about what is specific to my installation. I like to install programs to c:\apps so I know I installed them by hand. So that’s why smplayer and netcat were put there. Of course 192.168.2.102 is my Pi’s IP address on my home network. In this post I describe how to set a static IP address for your Pi. We also found it useful to have the CMD window minimize itself after starting up and run in the background, and I discovered that the lines at the top of the batch file allow that to happen.

The results
With the infinite loops I programmed either Pi or mplayer.bat can be launched first – there is no necessary and single order to do things in. So it is a more robust solution than that outlined in the quick start guide.
Most of my other approaches suffered from lag – delay in displaying a live event. Some other suggested approaches had quite large lag in fact. The lag from the approach I’ve outlined above is only 0.2 s. So it feels real-time. It’s great. Below I outline a novel method (novel to me anyways) of measuring lag precisely.
Many of my other approaches also suffered from a low refresh rate. You’d specify some decent number of frames per second, but in actual fact you’d get 1-2 fps! That made for choppy and laggy viewing. With the approach above there is a full 20 frames per second so you get the feel of true motion. OK, fast motions are blurred a bit, but it’s much better than what you get with any solution involving raspistill: frame updates every 0.6 s and nothing you do can speed it up!
Many Internet video examples showed off high-resolution images. I had a different requirement. I had to keep the bandwidth usage tamped down and I actually wanted a smaller image, not larger because the robot driver has a dashboard to look at.
I chose an unconventional port, tcp port 443, for the communication because that is an allowed port in the competition. The port has to match up in raspi-vid and mplayer.bat. Change it to your own desired value.

Limitations
Well, this is a one-client-at-a-time solution, for starters! Did I mention that nc makes for a lousy server?
Even with the infinite looping, things do get jammed up. You get into situations where you need to kill the mplayer CMD window to get things going again.
I would like to have gotten the lag down even further, but haven’t had time to look into it.
Being a video amateur, I am going to make up my own terms! This solution exhibits a phenomenon I call convergence. What that means is that once the mplayer window pops up, which takes a few seconds, what it’s displaying shows a big lag – about 10 seconds. But then it speeds along through the buffered frames and converges with real-time. This convergence takes slightly more than 10 seconds. So if you need instant-on and real-time, you’re not getting it with this solution!

What no one told us
I think we were all so excited to get this little camera for the Pi that no one bothered to talk about the actual optical properties of the thing! And maybe they should, because even if it is supposedly based on a cellphone camera, I don’t know which cellphone, certainly not the one from my Samsung Galaxy S3. The thing is (and I admit someone else first pointed this out to me) that it has a really small field of view. I measured it as spreading out only 8.5″ at a 15″ distance, which works out to only 31.6 degrees! See what I mean? And I don’t believe there are any tricks or switches to make that larger; that’s dictated by the optics of the lens. This narrow field of view may make it unsuitable for use as a security camera or many other projects, so bear that in mind. If I put my Samsung next to it and look at the same view, its field of view is noticeably larger, perhaps closer to 45 degrees.
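
The arithmetic behind that 31.6-degree figure, in case you want to repeat the measurement with your own numbers:

import math

spread = 8.5      # inches the camera's view spans...
distance = 15.0   # ...at this distance from the lens, in inches
fov = 2 * math.degrees(math.atan((spread / 2) / distance))
print(fov)        # about 31.6 degrees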

Special Insights
At some point I realized that the getting started guide put things very awkwardly in making the PC the server and the Pi the client. You normally want things the other way around, like it would be for an ethernet camera! So my special insight was to realize that nc could be used in the reverse of the way they had documented it, to switch the client/server roles. nc is still a lousy “server,” if you can call it that, but hey, the price is right.

Fighting lag
To address the convergence problem mentioned above I chose a frame rate much higher on the viewer than on the camera. The higher this ratio, the faster convergence occurs. So I have a 3:1 ratio: 60 fps on mplayer and 20 fps on raspivid. The PC does not seem to strain from the small number of extra CPU cycles this may require. I think if you have an exact fps match you never get convergence, so this small detail alone could convince you that raspivid is always laggy, when in fact it is more under your control than you realized.

Even though with the video quality such as it is there probably is no real difference between 10 fps and 20 fps, I chose 20 fps to reduce lag. After all, 10 fps means an image only every 100 msec, so on average by itself it introduces a lag of half that, 50 msec. Might as well minimize that by increasing the fps to make this a negligible contributor to lag.
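
The back-of-the-envelope numbers, for the record:

# average lag contributed by the capture rate alone
for fps in (10, 20):
    interval_ms = 1000.0 / fps     # time between captured frames
    avg_lag_ms = interval_ms / 2   # an event waits half an interval, on average
    print("%d fps: %.0f ms between frames, %.0f ms average lag" % (fps, interval_ms, avg_lag_ms))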

Measuring lag
Take a smartphone with a stopwatch app which displays large numbers. Put that screen close up to the Pi camera. Arrange it so that it is next to your PC monitor so both the smartphone and the monitor are in your field of view simultaneously. Get mplayer.bat running on your PC and move the video window close to the edge of the monitor by the smartphone.

Now you can see both the smartphone screen as well as the video of the smartphone screen running the stopwatch (I use Swiss Army Knife) so you can glance at both simultaneously and quantify the lag. But it’s hard to look at both rapidly moving images at the same time, right? So what you do is get a second camera and take a picture of the two screens! We did this Saturday and found the difference between the two to be 0.2 s. To be more scientific, several measurements ought to be taken and the results averaged, and hundredths of seconds perhaps should be displayed (though I’m not sure a still picture could capture that as anything other than a blur).

mplayer strangeness on Dell Inspiron desktop
I first tried mplayer on an HP laptop and it worked great. It was a completely different story on my Dell Inspiron 660 home desktop however. There that same mplayer command produced this result:

...
VO: [directx] 640x480 => 640x480 Packed YUY2
FATAL: Cannot initialize video driver.
 
FATAL: Could not initialize video filters (-vf) or video output (-vo).
 
 
Exiting... (End of file)

So this was worrisome. I happened on the hint to try -vo gl and yup, it worked. Supposedly it makes for slower video so maybe on PCs where this trick is not required lag could be reduced.

mplayer personal preferences
I liked the idea of a window without a border (-noborder option) – so the only way to close it out is to kill the CMD window, which helps keep them in sync. Running two CMD windows doesn’t produce such good results!

I also wanted the window to first pop up in the upper right corner of the screen, hence the -geometry 600:50 switch.

And I wanted the video screen to always be on top of other windows, hence the -ontop switch.

I decided the messages about cache were annoying and unimportant, hence the message suppression provided by the -msglevel all=0 switch.

Simultaneously recording and live streaming
I haven’t played with this too much, but I think the unix tee command works for this purpose. So you would take your raspivid line and make it something like:

raspivid -n -o - -t 9999999 -rot 180 -w 640 -h 480 -b 1000000 -fps 20|tee /home/pi/video-`date +%Y%h%d-%H%M`|nc -l 443

and you should get a nice date-and-time-stamped output file while still streaming live to your mplayer! Tee is an under-appreciated command…

Conclusion
I have tinkered with the Pi until I got its camera display to be screaming fast on my PC. I’ve shown how to do this and described some limitations.

Next Act?
I’m contemplating superimposing a grid with tick marks over the displayed video. This will help the robot driver establish their position relative to fixed elements on the field. This may be possible by integrating, for instance, openCV, for which there is some guidance out there. But I fear the real-time-ness may greatly suffer. I’ll post if I make any significant progress!
Update: I did get it to work, and the lag was an issue as suspected. Read about it here.

References and related
FIRST Robotics is currently in season as I write this. The competition this year is Aerial Assist. More on that is at their web site, http://www3.usfirst.org/roboticsprograms/frc
Raspberry Pi camera quick start is a great place to get started for newbies.
Setting one or more static IP addresses on your Pi is documented here.
How not to set up your Pi for real-time video will be documented here.
How to get started on your Pi without a dedicated monitor is described here.
Finally, how to overlay a grid onto your video output (Yes, I succeeded to do it!) is documented here.
Correct source for smplayer for Windows.