
Setting up a virtual CPU environment on Windows

This is a short how-to for getting a Linux-specific virtual chip environment running on a Windows OS:

  1. Download and install the Docker Toolbox
  2. Download and install the Xming X-Server
  3. Run the ‘Docker Quickstart Terminal’, normally installed on your desktop. Be patient; the environment takes some time to start up
  4. Download the MaSoCist Docker container:
    docker pull hackfin/masocist
  5. Optional (GTKwave support): prepare the Xming server by running XLaunch and configure it as follows using the wizard:
    • Multiple Windows
    • Start no client
    • ‘No access control’ selected (warning: this could cause security issues, depending on your system configuration)
  6. Start the container using the script below
    docker run -ti --rm -u masocist -w /home/masocist -e DISPLAY=192.168.99.1:0 \
    -v /tmp/.X11-unix:/tmp/.X11-unix hackfin/masocist bash
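
Note: 192.168.99.1 is typically the host side of the VirtualBox host-only network that Docker Toolbox sets up, so it may differ on your system. If no windows show up later on, verify the address from the Docker Quickstart Terminal:

    docker-machine ip default

This prints the Linux VM’s address (e.g. 192.168.99.100); the Windows host, which runs the X server, is normally reachable as .1 on the same subnet, and that is what DISPLAY must point to.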

You might save this command to a file like run.sh and start it next time from the Docker Quickstart Terminal:

. run.sh

Depending on the MaSoCist release you have, there are different installation methods:

Open source (GitHub)

See README.md for the most up-to-date build notes. GTKwave is not installed by default. Run

sudo apt-get update && sudo apt-get install gtkwave

to install it.

When you have X support enabled, you can, after building a virtual board, run

make -C sim run

from the MaSoCist directory to start the interactive wave display simulation. If the board uses the virtual UART, you can now also connect to /tmp/virtualcom using minicom, for example as sketched below.
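
A minimal example, assuming a reasonably recent minicom and that /tmp/virtualcom is exposed as a Unix domain socket (if your setup provides a pty or pipe instead, point minicom at that device):

    minicom -D unix#/tmp/virtualcom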

Custom ‘vendor’ license

You should have received a vendor-<YOUR_ORGANIZATION>.pdf with detailed setup notes; see its ‘Quick start’ section.

A few more notes

  • All changes you make inside this Docker container are lost on exit. If this is not desired, remove the ‘--rm’ option and use the ‘docker ps -a’ and ‘docker start -i <container_id>’ commands to re-enter your container (see the example after this list). Consult the Docker documentation for details.
  • Closing the GTKwave window will not stop the simulation!
  • Ctrl-C on the console stops the simulation, but does not close the wave window.
  • The UART output of the virtual SoC is printed on the console (“Hello!”). Virtual UART input is not supported on this system, but can be implemented using tools supporting virtual COM ports and Windows pipes.
  • Once you have the Docker container imported, you can alternatively use the Kitematic GUI and apply the above options, in particular:
    • -v: Volume mounts to /tmp/.X11-unix
    • -e: DISPLAY environment setting
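
For example, re-entering a persistent session (started without ‘--rm’) could look like this; the container ID is of course system specific:

    docker ps -a                    # find the ID of the stopped container
    docker start -i <container_id>  # re-attach interactively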

Image compression prediction techniques

Prediction techniques are present in almost any approach to image compression, be it lossless or lossy. The general idea of a predictor is to extrapolate a pixel from its neighbours, i.e. to assume it has a specific value. The difference between this assumed value and the actual value is regarded as the error. For round-trip coding, i.e. forward and inverse transformation, it is only necessary to store a reference, the prediction history, and the errors.

Since these errors are, statistically, concentrated in a narrow distribution, they can be encoded effectively with fewer bits than the eight normally used per pixel channel.

A simple predictor

A very simple predictor tries to predict pixel values based on the previous scan line or pixel row. It evaluates the pixels [0, 1, 2] as shown below to extrapolate the value of [X]:

00  11  22 ...
##  XX  ##
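
To make the round trip concrete, here is a minimal sketch in Python/numpy. The combination of the three neighbours into a prediction is an assumption (a plain average); the article leaves the exact function open:

    import numpy as np

    def predict_row(prev_row):
        """Predict a row from the row above it: each pixel is
        extrapolated from its neighbours [0, 1, 2] (up-left, up,
        up-right); the edges are clamped."""
        ul = np.concatenate(([prev_row[0]], prev_row[:-1]))
        ur = np.concatenate((prev_row[1:], [prev_row[-1]]))
        return (ul + prev_row.astype(np.int32) + ur) // 3

    def forward(img):
        """Store a reference (first row) plus per-row prediction errors."""
        img = img.astype(np.int32)
        errors = [img[0]]                        # reference row, verbatim
        for y in range(1, img.shape[0]):
            errors.append(img[y] - predict_row(img[y - 1]))
        return np.stack(errors)

    def inverse(errors):
        """Reconstruct the image row by row from reference + errors."""
        rows = [errors[0]]
        for y in range(1, errors.shape[0]):
            rows.append(predict_row(rows[-1]) + errors[y])
        return np.stack(rows)

    img = np.random.randint(0, 256, (64, 64))
    assert (inverse(forward(img)) == img).all()  # lossless round trip

Only the reference row, the predictor definition, and the errors need to be stored for an exact reconstruction.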

Running this through a test image and storing only the error, we get something like below:

Test image from unsplash, scaled/cropped
Prediction error image

The prediction error image was quantized to make the potential compression gain more visible.

Using simple bit-plane reshuffling and the entropy coding techniques deployed by the JPEG XS format, our predictor experiment compresses the above image losslessly by roughly a factor of two. Using wavelet decomposition, lossless compression of up to a factor of four can be achieved. With quantization (which introduces information loss), much higher compression in the range of 1:10 to 1:20 can be achieved, with quality comparable to the classic JPEG format but with fewer block artefacts at higher compression ratios.
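
As a rough sanity check for such gains, the zeroth-order entropy of the residuals can be compared against the 8 bits per pixel of the source. This is a simplistic estimate; real entropy coders exploit more structure than this:

    import numpy as np

    def entropy_bits(values):
        """Zeroth-order Shannon entropy in bits per symbol."""
        _, counts = np.unique(values, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    # with 'forward' from the sketch above:
    # print(8.0 / entropy_bits(forward(img)))   # rough lossless factor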

Context sensitivity and adaptivity

When compressing images that have been compressed before, artefacts can be present that are hard to re-compress. Likewise, Bayer-pattern raw sensor images converted to YUV or RGB values can show frequency artefacts that cause higher entropy. Our approach uses a very simple, context-sensitive predictor based on an 8-state finite state machine, where the decision about which state is chosen next is based on a simple history and a comparison. The image below shows a typical error distribution for these eight states. For a normal grayscale image, the statistical distribution is similar for each state; for artefact-tainted images, the distribution can be rather distorted.
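
As an illustration of the idea (the actual transition rules of the 8-state machine are not disclosed here), a context tracker might shift a single neighbour comparison into a 3-bit history, yielding eight states, and collect error statistics per state:

    import numpy as np

    def context_errors(img, n_states=8):
        """Bucket prediction errors by context: the context is a 3-bit
        history of neighbour comparisons (an illustrative assumption,
        not the actual STP8 rules)."""
        img = img.astype(np.int32)
        buckets = {s: [] for s in range(n_states)}
        for y in range(1, img.shape[0]):
            state = 0
            for x in range(1, img.shape[1]):
                up, left = img[y - 1, x], img[y, x - 1]
                err = img[y, x] - (up + left) // 2   # simple two-neighbour predictor
                buckets[state].append(int(err))
                # shift the comparison bit into the history -> 8 states
                state = ((state << 1) | int(up > left)) & (n_states - 1)
        return buckets

    # e.g. per-state spread: {s: np.std(e) for s, e in context_errors(img).items()}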

Error distribution for eight contexts

Refined context tracker

Experiments with a higher number of contexts and a more complex tracking history yielded better compression for artefact-tainted images, at least under certain circumstances. The above T-shaped predictor (internally named ‘STP8’, for ‘sliding T predictor’) was enhanced to 27 states (‘STP27’), in particular to perform well in the following scenarios:

  • JPEG recompression artefacts
  • Bayer pattern artefacts (Bayer to YUV422)
  • Custom color pattern experiments (undocumented)

Each of these artefact scenarios requires a specific default parameter setting for the STP27. Pixels are then classified ‘on the fly’ and adaptive compression is applied. This showed a significant improvement over the STP8, but it adds complexity to the hardware state machine and requires a rather large Huffman coding table.

In simulation experiments with a number of ‘classic’ test images, this actually turned out to beat lossless wavelet compression.

Hardware implementation

Using our proprietary FLiX coprocessing engine, a hardware pipeline can be generated that uses FPGA technology to compress grayscale images at a very high data rate (up to 200 Mpixel/s per channel). Basically, no multiplication is required for the ‘baseline’ variant. For images with a higher level of artefacts, however, decorrelation approaches are deployed that require multiplications and a feedback regulation similar to convolutional neural networks. This is still under scrutiny. For most sensor imagery, however, it appears that the compression gain is not significant. Update: obsolete with the STP27 implementation.


Update: 2.4.2015
Repub: 11.9.2017 [Replace images]