
netpp for Windows quick start

Here’s a short step-by-step intro to installing and running netpp on Windows 10 (it will work likewise on older Windows versions):

  1. Download Python 2.7 (32 bit) and install it. Make sure to install it for all users; otherwise the netpp installer will not find the Python directory and will complain.
  2. Download and run:
    [sdm_download id=”1349″ fancy=”1″]
  3. Run the example server via Start Menu->netpp->Run example server. A firewall warning may appear, asking you to unblock the service. After that, a small example netpp server is listening on your local machine.
  4. Start the IDLE environment via Start Menu->Python 2.x->IDLE. Read on below for the details.

If you did not install Python before running the netpp installer, it will show a warning, but still continue.

First steps

Windows 10 netpp session

When you have started IDLE, first try to import the netpp module as shown in the screenshot. Then make a connection to your local example server using the .connect() method.

The .sync() command creates a local property tree with root node ‘r’. On the first call, the device server is queried for all available properties, which can take a while on some systems. Once the query has completed, the tree is stored in a cache and is only reloaded if the device properties have changed.

Note the message “using PWD for storage”. If no cache directory exists, the cache will be placed in the current program’s working directory, which may not be what you want. Create the folders ‘.netpp/cache’ in your home directory and the warning will go away. If you ever need to delete the cache files manually, you will find them there.
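
If you prefer, you can create that directory straight from the IDLE prompt. This is just a convenience sketch using the Python standard library, nothing netpp-specific:

    import os
    # default netpp cache location in the user's home directory
    cache_dir = os.path.join(os.path.expanduser("~"), ".netpp", "cache")
    if not os.path.isdir(cache_dir):
        os.makedirs(cache_dir)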

Next, we are going to look at the properties. This is done the Pythonic way using ‘dir’.

Finally, we obtain a property value using the .get() method. Simple as that.
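
Put together, a minimal IDLE session could look roughly like the sketch below. It assumes that .connect() accepts the same TCP:<host> target notation as the command line tools, that properties are reachable as attributes of the tree root, and that the example server exposes the Container.Test property used later in this article; adjust these for your setup:

    import netpp

    # connect to the local example server (or a remote device address)
    a = netpp.connect("TCP:localhost")

    # build the property tree; the first call may take a while,
    # later calls use the cache
    r = a.sync()

    # list the available properties the Python way
    dir(r)

    # read a property value (property name taken from the example server)
    r.Container.Test.get()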

Using PowerShell

There are two command line utilities:

  • master: Simple command line demo tool for netpp access
  • netpp-cli: interactive netpp client

Open a PowerShell (or a legacy cmd.exe) window and change to the directory where the netpp binaries are installed, for example:

cd "c:\Program Files (x86)\section5\netpp\bin"

The master.exe is a very primitive command line tool for querying netpp devices. When run without arguments, it will display the available hubs and send out a broadcast on the local network for attached netpp devices. If your example server is running, you will see it listed. Try accessing it:

.\master TCP:192.168.56.1

and it will list the device’s properties.

The netpp-cli.exe is an interactive console with a bit more caching functionality and a session character, i.e. once you have made a connection, the device reserves a session for you until you exit the CLI. This operation mode may be required by more complex devices that work session-based.

Make a connection to a device:

.\netpp-cli TCP:192.168.56.1

At the netpp prompt, type ‘?’. You can then get a property value by simply typing its name, like Container.Test. If you append a value, you can likewise change the property.
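
As an example, a short session could consist of the following inputs. Container.Test is the property from above; the value at the end is just a placeholder for whatever you want to set:

    ?
    Container.Test
    Container.Test 1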

Process viewer/browser support

To use the predefined GUI process control based on the free pvbrowser (see its GitHub repository), you need a demo setup based on a modhub or another embedded Linux demo setup running a pvhub server. This assumes a netpp node setup with the default design.

  1. Start the pvbrowser on your PC
  2. Enter the URL
    pv://modhub/netpp:UDP:192.168.0.5:2016

    into the pvbrowser for a direct connection to the netpp node (assuming the default IP at 192.168.0.5). Replace ‘modhub’ with the IP address of your pvhub server host.

  3. The process viewer should display something like below:
PVbrowser netpp node example

Troubleshooting

There are some known issues with the evaluation release netpp 0.5-rc1 on Windows and IDLE:

  • If the cache directory ($HOME\.netpp\cache) has not been created, Python (IDLE) will report:
    C:\Users\me\/.netpp/cache not found, using PWD for storage

    This is mostly not a problem; however, when access rights are limited, creating the cache can fail. In that case, create the directory in your $HOME manually.

  • Upon synchronization, a failure can occur when a cache created from an older netpp version is found:
    r = a.sync()
    
    ...
    
    IndexError: tuple index out of range

    Search for the corresponding *.pyp file and remove it (see the sketch below).
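
A quick way to clear such stale cache entries from Python, assuming the default cache location described above:

    import glob, os
    # remove property tree caches left over from an older netpp version
    cache_dir = os.path.join(os.path.expanduser("~"), ".netpp", "cache")
    for f in glob.glob(os.path.join(cache_dir, "*.pyp")):
        print("removing " + f)
        os.remove(f)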

Further resources:

LabVIEW wrapper based on OpenG LabPython
  • LabVIEW project build (testing): [sdm_download id=”1444″ fancy=”0″]
Documentation:
  • netpp HOWTO: [sdm_download id=”1350″ fancy=”0″]
  • API reference OpenSource v0.3x [ HTML ]

 


Image compression prediction techniques

Prediction techniques are present in almost any approach to image compression, be it lossless or lossy. The general idea of a predictor is to extrapolate a pixel from its neighbours, i.e. to assume it has a specific value. The difference between this assumed value and the effective value is regarded as the error. For round-trip coding, i.e. forward and inverse transformation, only a reference, the prediction history and the errors need to be stored.

Since these errors are, statistically speaking, limited to a rather narrow range, they can be encoded effectively with fewer bits than the eight bits normally used per pixel channel.

A simple predictor

A very simple predictor tries to predict pixel values based on the previous scan line or pixel row. It evaluates the pixels [0, 1, 2] as shown below to extrapolate the value of [X]:

00  11  22 ...
##  XX  ##
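
A minimal sketch of such a previous-line predictor is shown below. Since the exact prediction function is not spelled out here, the sketch simply uses the average of the three neighbours as the predicted value; it assumes an 8-bit grayscale image as a NumPy array:

    import numpy as np

    def predict_error(img):
        # img: 2D uint8 grayscale image
        img = img.astype(np.int16)
        err = img.copy()        # first row and border columns stay as reference
        h, w = img.shape
        for y in range(1, h):
            for x in range(1, w - 1):
                # average of the three neighbours in the previous scan line
                pred = (img[y-1, x-1] + img[y-1, x] + img[y-1, x+1]) // 3
                err[y, x] = img[y, x] - pred
        return err

    def reconstruct(err):
        # inverse transform: rebuild the image from reference plus errors
        img = err.copy()
        h, w = err.shape
        for y in range(1, h):
            for x in range(1, w - 1):
                pred = (img[y-1, x-1] + img[y-1, x] + img[y-1, x+1]) // 3
                img[y, x] = err[y, x] + pred
        return img.astype(np.uint8)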

Running this through a test image and storing only the error, we get something like below:

Test image from unsplash, scaled/cropped
Prediction error image

The prediction error image was quantized to make the potential compression gain more visible.

By simple bit plane reshuffling and the entropy coding techniques deployed by the JPEG XS format, our predictor experiment allows the above image to be compressed losslessly by roughly a factor of two. Using wavelet decomposition, lossless compression of up to a factor of four can be achieved. With quantization (which introduces information loss), much higher compression in the range of 1:10 to 1:20 can be achieved, with quality comparable to the classic JPEG format but with fewer block artefacts at higher compression ratios.

Context sensitivity and adaptivity

When recompressing images, artefacts can be present that are hard to compress again. Likewise, Bayer pattern raw sensor images converted to YUV or RGB values can exhibit frequency artefacts that increase entropy. Our approach uses a very simple, context-sensitive predictor based on an 8-state finite state machine, where the decision about which state is chosen next is based on a simple history and a comparison. The image below shows a typical error distribution for these eight states. For a normal grayscale image, the statistical distribution looks alike for each state; for artefact-tainted images, the distribution can be rather distorted.

Error distribution for eight contexts
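
The exact state machine of the STP8 is not described here, but the basic idea of per-context error statistics can be illustrated with a toy example, loosely in the spirit of JPEG-LS context modelling (this is not the actual STP8): each pixel is assigned one of eight contexts from its previous-line neighbours, and the prediction errors are collected per context:

    import numpy as np

    def context_of(a, b, c):
        # toy classification into 8 contexts: the signs of two neighbour
        # gradients plus a coarse activity bit; NOT the actual STP8 machine
        d1, d2 = int(a) - int(b), int(b) - int(c)
        bits = 0
        if d1 > 0: bits |= 1
        if d2 > 0: bits |= 2
        if abs(d1) + abs(d2) > 8: bits |= 4   # "busy" area threshold (arbitrary)
        return bits

    def per_context_errors(img):
        # collect prediction errors separately for each toy context
        img = img.astype(np.int16)
        buckets = dict((k, []) for k in range(8))
        h, w = img.shape
        for y in range(1, h):
            for x in range(1, w - 1):
                a, b, c = img[y-1, x-1], img[y-1, x], img[y-1, x+1]
                pred = (a + b + c) // 3
                buckets[context_of(a, b, c)].append(int(img[y, x] - pred))
        return buckets

Plotting a histogram of each bucket gives a per-context error distribution similar in spirit to the figure above.
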
Refined context tracker

Experiments with a higher number of contexts and a complex tracking history yielded better compression for artefact-tainted images, at least under certain circumstances. The T-shaped predictor shown above (internally named ‘STP8’ for sliding T predictor) was enhanced to 27 states (‘STP27’), in particular to perform well in the following scenarios:

  • JPEG recompression artefacts
  • Bayer pattern artefacts (Bayer to YUV422)
  • Custom color pattern experiments (undocumented)

Each of these artefact scenarios requires a specific default parameter setting for the STP27. Pixels are then classified ‘on the fly’ and adaptive compression is applied. This showed a significant improvement over the STP8, but adds some complexity to the hardware state machine and requires a rather large Huffman coding table.

In simulation experiments with a number of ‘classic’ test images, this turned out to actually beat lossless wavelet compression.

Hardware implementation

Using our proprietary FLiX coprocessing engine, a hardware-based pipeline can be generated that uses FPGA technology to compress grayscale images at a very high data rate (up to 200 Mpixel/s per channel). Basically, no multiplication is required for the ‘baseline’ variant. For images with a higher amount of artefacts, however, decorrelation approaches are deployed that require multiplications and a feedback regulation similar to convolutional neural networks. This is still under scrutiny. For most sensor imagery, however, it appears that the compression gain is not that significant. Update: obsolete with the STP27 implementation.


Update: 2.4.2015
Repub: 11.9.2017 [Replace images]