Lattice VIP IMX214/CrossLink issues

Overview

The IMX214 sensors are configured using the ‘default’ sequence from the reference design, but at a lower PLL frequency of around 54 MHz. Both sensors are started synchronously.

Single camera configuration

This setup uses the single camera bit file provided by Helion Vision.

  • Framing errors occur early in the video stream; afterwards it runs stably for a very long time (recorded up to 150,000 frames)

Dual camera configuration

This setup uses the stereo camera reference design from the Lattice website (DualCSI2toRaw10_impl1.bit).

[Images: color shifts / Bayer pattern offsets]

Issues:

  • Framing is very unstable; the right image shows a peculiar color shift
  • The offset changes from frame to frame, displaying as in the images above

Further analysis

The reason for the DEMUX errors occurring in the JPEG encoder is occasional invalid framing. Frames are then dropped and the image goes out of sync.

Possibilities:

  1. Framing from the sensor is wrong (critical sensor configuration mode)
  2. Framing from the sensor is correct, but translation hiccups occur inside the CrossLink
  3. Irregular timing (too short a horizontal blanking time) stressing the JPEG encoder FIFOs

(1) cannot be verified without a MIPI timing debugger. (2) cannot be simulated, since the CrossLink firmware is closed source.

For (3), the LINE_VALID (blue) and FRAME_VALID (yellow) signals, both routed to an external debug header, display as follows:

[Scope trace: IMX214 sensor framing via CrossLink]

The above behaviour, two subsequent pixel lines with a short blanking time, occurs with both the current stereo and the single-sensor CrossLink firmware configuration.

Potential remedies

Sorted by ascending complexity:

  • Find a magic setting for more regular MIPI data transfer
  • Use another sensor (parallel interface)
  • Try to fix the irregular timing with a ‘sanity checker’ interface plus line buffer (see the sketch below)
  • Revisit the CrossLink firmware (consider fixes by a third party)
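
To illustrate that last idea before committing to HDL: the re-timing can be prototyped as a simple behavioral model. The Python sketch below is only an illustration of the concept, not actual firmware; the names and the minimum blanking value are assumptions.

MIN_HBLANK = 64  # assumed minimum blanking time in pixel clocks (illustrative)

def sanity_checker(lines):
    # Re-time a stream of (pixels, hblank) tuples so that every line is
    # followed by at least MIN_HBLANK idle cycles before the next one is
    # passed on to the JPEG encoder
    for pixels, hblank in lines:
        yield pixels, max(hblank, MIN_HBLANK)

# Two subsequent lines with a too-short blanking time, as seen on the
# debug header, followed by a regular line:
stream = [([0] * 640, 8), ([1] * 640, 8), ([2] * 640, 200)]
for pixels, hblank in sanity_checker(stream):
    print "line of %d pixels, hblank %d cycles" % (len(pixels), hblank)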

MaSoCist opensource 0.2 release

I’ve finally gotten around to releasing the opensource tree of our SoC builder environment on GitHub:

https://github.com/hackfin/MaSoCist

Changes in this release:

  • Active support for the Papilio and Breakout MachXO2 boards has been dropped
  • Very basic support for the neo430 (msp430-compatible) core added, see Docker notes below
  • Includes a non-configurable basic ‘eval edition’ (VHDL only) of our pipelined ZPUng core
  • Basic Virtual board support (using ghdlex co-simulation extensions)
  • Docker files and recipes included

Docker environment

Docker containers are, in my opinion, the optimum for automated deployment and for testing different configurations. To stay close to current GHDL simulator development, the container is based on the ghdl/ghdl:buster-gcc-7.2.0 edition.

Here’s a short howto for setting up an environment ready to play with. You can try this online at https://labs.play-with-docker.com, for example.

Just register a Docker account, log in, and start playing in your online sandbox.

If you want to skip the build, you can use the precompiled Docker image by running

docker run -it -v/root:/usr/local/src hackfin/masocist

and skip step 3 below.

You’ll need to build and copy some files from contrib/docker to the remote Docker machine instance.

  1. Run ‘make dist’ inside contrib/docker; this will create a file masocist_sfx.sh
  2. Copy Dockerfile and init-pty.sh to the Docker playground by dragging the files onto the shell window
  3. Build the container and run it:
    docker build -t masocist .
    
    docker run -it -v/root:/usr/local/src masocist
  4. Copy masocist_sfx.sh to the Docker machine and run it inside the running container’s home dir (/home/masocist):
    sudo sh /usr/local/src/masocist_sfx.sh
  5. Now pull and build all necessary packages:
    make all run
  6. If nothing went wrong, the simulation for the neo430 CPU will be built and started with a virtual UART and SPI simulation. A minicom terminal will connect to that UART, and you’ll be able to talk to the neo430 ‘bare metal’ shell. For example, you can dump the contents of the virtual SPI flash by:
    s 0 1
    

    Note: This can be very slow. On a docker playground virtual machine, it can take up to a minute until the prompt appears, depending on the server load.

Development details

The simulation is a cycle-accurate model of your user program, which you can of course modify. During the build process, i.e. when you run ‘make sim’ in the masocist(-opensource) directory, the msp430-gcc compiler builds the C code from the sw/ directory and places it into memory according to the linker script in sw/ldscripts/neo430. This results in an ELF binary, which is then converted into a VHDL initialization file for the target. Then the simulation is built.
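
As an illustration of that conversion step, a minimal sketch could look as follows. This is not the actual MaSoCist tooling; the 16-bit little-endian word handling simply reflects the msp430 architecture, and the file names are placeholders.

import struct
import sys

def bin_to_vhdl_words(binfile, vhdfile):
    # Hypothetical sketch, not the actual MaSoCist converter: turn a raw
    # binary image (e.g. extracted from the ELF with msp430-objcopy) into
    # a list of 16 bit VHDL memory initialization words
    data = open(binfile, "rb").read()
    if len(data) % 2:
        data += b"\x00"  # pad to a full 16 bit word
    out = open(vhdfile, "w")
    out.write("-- auto-generated memory initialization\n")
    for i in range(0, len(data), 2):
        word, = struct.unpack("<H", data[i:i + 2])  # little endian
        out.write('x"%04x",\n' % word)
    out.close()

if __name__ == "__main__":
    bin_to_vhdl_words(sys.argv[1], sys.argv[2])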

The linker script is, however, very basic. Since a somewhat different, automatically generated memory map is used at this experimental stage, all peripherals are configured in the XML device description at hdl/plat/minimal.xml; the data memory configuration (the ‘dmem’ entity), however, does not automatically adapt the linker script.

Turning this into a fully configurable solution is left to be done.


Multiple node monitoring/control via Python/netpp

Since scalability of the netpp node solution was advertised, one issue has turned into a FAQ: how do you handle errors on loaded networks with traffic between multiple nodes?

Especially with UDP, plenty of scenarios can occur that confuse all higher protocol layers: lost packets, reversed packet order, duplicate packets…

How are these errors handled, and does the netpp layer take care of it all?

It doesn’t. The current strategy with UDP is: We want to see all errors. If we don’t, we’d rather switch to a TCP implementation.

In our test scenario we have, connected by a Gigabit Hub:

  • One Gigabit Ethernet capable client, one 100M client
  • Six netpp nodes

If the network is not completely jammed, the usual thing you will see is timeouts. In Python, these are simple IOError exceptions. If an illegal packet sequence is detected, a SystemError exception is raised.

As a simple example, the script below polls all detected hosts at the highest possible frequency and covers IOError and SystemError exceptions with different recovery timings.

Note that there is no finer-grained control over specific errors in Python. If required, these have to be handled at the C API level.

import netpp
import time
import sys

# Hosts 192.168.0.8 through 192.168.0.14 are probed
hostlist = range(8, 15)


def init(hostlist):
    # Probe all candidate addresses and return the reachable targets
    targets = []
    for h in hostlist:
        ip = "192.168.0.%d" % h
        try:
            d = netpp.connect("UDP:%s:2016" % ip)
            print "Target %s alive" % ip
            targets.append((h, d.sync()))
        except IOError:
            print "Target %s down" % ip

    return targets


def poll(targets):
    # Report release tag and toggle the red LED once per node
    for h, t in targets:
        rev = t.SysCtrl.ReleaseTag.get()
        print "Node %d : Class '%s' rev %s" % (h, t.name(), rev)
        l = t.LED.Red.get()
        l = not l
        t.LED.Red.set(l)
        print 40 * "-"

    # Main loop: toggle the green LED and pick up pending UART input.
    # A SystemError (illegal packet sequence) backs off longer than a
    # plain IOError (timeout).
    l = True
    while 1:
        for h, t in targets:
            try:
                t.LED.Green.set(l)
                dataready = t.UART.RXREADY.get()
                if dataready:
                    print "> %s" % t.UART.RxData.get()
            except SystemError:
                print "Node %d: %s" % (h, sys.exc_info()[1])
                time.sleep(1.0)
            except IOError:
                print "Node %d: %s" % (h, sys.exc_info()[1])
                time.sleep(0.1)
        l = not l
        # time.sleep(0.05)

targets = init(hostlist)
poll(targets)

Script details

The script just toggles the green LED on each target and checks whether there is input available on the UART. If you have the corresponding netpp node connected via USB serial, you can type a character into the terminal and see it reported by the script.

Timeouts and sessions

UDP is session-less; netpp therefore handles the connectivity from peer to peer itself. By default, only two simultaneous connections (two clients) are supported. If a connection is lost, the netpp node will terminate it after a certain timeout once a new connection is detected. This is signalled on the netpp node UART console by:

disconnect_timeout: c0a80002:49587

(c0a80002 is the peer address 192.168.0.2 in hex notation, 49587 the port)

If the connection is lost on the netpp node side, for example due to a reboot or a long cable disconnect, the client may not detect that and keep sending queries. In this case, you might see the following error on the netpp node console:

QRY 55 NAK

(the 55 could be any other code).

In this case, the session would have to be reopened from the client:

d = netpp.connect(...)
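
One way to make a client robust against such node-side restarts is a small reconnect wrapper. The helper below is a sketch, not part of the netpp API; the retry count and delays are arbitrary.

import time
import netpp

def robust_get(url, getter, retries=3):
    # Hypothetical helper, not part of the netpp API: retry a property
    # access and reopen the session on a timeout (IOError)
    root = netpp.connect(url).sync()
    for i in range(retries):
        try:
            return getter(root)
        except IOError:
            print "timeout, reconnecting to %s" % url
            time.sleep(1.0)
            root = netpp.connect(url).sync()  # reopen the session
    raise IOError("node not reachable after %d retries" % retries)

# Example: read the release tag from one node
# rev = robust_get("UDP:192.168.0.8:2016", lambda r: r.SysCtrl.ReleaseTag.get())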

FPGA goes cloudy

If you’re collecting data as simple as temperature and humidity, for instance, you might want to push it to the cloud. This is also done with a simple Python script issuing an HTTP GET request to ThingSpeak. Note that the netpp node does not push the data itself; the script runs on an embedded Linux module.
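
A minimal version of such a push script could look like the sketch below. The write API key is a placeholder, the Sensor.Temperature/Humidity properties are assumed names (adapt them to your node’s property tree), and the requests HTTP client is an assumption as well.

import time
import requests  # assumed HTTP client; any library that can do a GET works
import netpp

WRITE_API_KEY = "XXXXXXXXXXXXXXXX"  # placeholder ThingSpeak write key

root = netpp.connect("UDP:192.168.0.8:2016").sync()

while 1:
    # Property names are assumptions; adapt to the node's property tree
    temperature = root.Sensor.Temperature.get()
    humidity = root.Sensor.Humidity.get()
    requests.get("https://api.thingspeak.com/update",
                 params={"api_key": WRITE_API_KEY,
                         "field1": temperature,
                         "field2": humidity})
    time.sleep(15)  # free ThingSpeak channels accept one update per 15 s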

The resulting temperature and humidity charts can be viewed at https://thingspeak.com/channels/415371 (charts 1 and 2).