netpp for Windows quick start

Here’s a short step-by-step intro to installing and running netpp on Windows 10 (it works likewise on older Windows versions):

  1. Download Python 2.7 (32-bit) and install it
  2. Download the netpp installer and run it
  3. Run the example server via Start Menu->netpp->Run example server. A firewall warning may appear, asking you to unblock the service. After that, a small example netpp server is listening on your local machine.
  4. Start the IDLE environment via Start Menu->Python 2.x->IDLE. Read more in detail below…

If you did not download Python before installing netpp, the netpp installer will issue a warning, but still continue.

First steps

Windows 10 netpp session

When you have started IDLE, first try to import the netpp module as shown in the screenshot. Then make a connection to your local example server using the .connect() method.

The .sync() command creates a local property tree with root node ‘r’. On the first call, the device server is queried for all available properties, which can take a long time on some systems. Once the query has completed, the tree is stored in a cache and is only reloaded if the device properties have changed.

Note the message “using PWD for storage”. If no cache directory exists, the cache is placed in the current program’s working directory, which may not be desired. Create the folders ‘.netpp/cache’ in your home directory, then the warning will go away. If you ever need to delete the cache files manually, you will find them there.
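
For instance, the cache directory can be created once from within Python, using only the standard library:

import os

# Create ~/.netpp/cache so netpp no longer falls back to the working directory
cache = os.path.expanduser("~/.netpp/cache")
if not os.path.isdir(cache):
    os.makedirs(cache)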

Next, we are going to look at the properties. This is simply done the Pythonic way using dir().

Finally, we obtain a property value using the .get() method. Simple as that.
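
Put together, a minimal IDLE session might look like the sketch below. The address string and the property name Container.Test are placeholders (the address format follows the TCP: notation used by the command line tools; use whatever dir() lists for your device):

import netpp

dev = netpp.connect("TCP:127.0.0.1")  # connect to the local example server
r = dev.sync()                        # build the property tree (cached after the first run)
print dir(r)                          # list the available properties
print r.Container.Test.get()          # read one property value (placeholder name)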

Using the PowerShell

There are two command line utilities:

  • master: simple command line demo tool for netpp access
  • netpp-cli: interactive netpp client

Open up a PowerShell (or a legacy cmd.exe) and change to the directory where your netpp binaries are installed, like

cd c:\Program Files (x86)\section5\netpp\bin

The master.exe is a very primitive command line tool for netpp device queries. When run without arguments, it will display the available hubs and send out a broadcast on the local network for attached netpp devices. If your example server is running, you will see it listed. Try accessing it:

.\master TCP:192.168.56.1

and it will list the device’s properties.

The netpp-cli.exe is an interactive console with a bit more caching functionality and session character, i.e. once you have made a connection, the device reserves a session for you until you exit the CLI. This operation mode may be required on some more complex devices that work session-based.

Make a connection to a device:

.\netpp-cli TCP:192.168.56.1

At the netpp prompt, type ‘?’. You can then read a property value by just typing its property name, like Container.Test. If you append a value, you change the property instead.

Process viewer/browser support

To use the predefined GUI process control based on the free pvbrowser (see its GitHub repository), you need a demo setup based on a modhub or another embedded Linux demo setup running a pvhub server. This assumes a netpp node setup with the default design.

  1. Start the pvbrowser on your PC
  2. Enter the URL
    pv://modhub/netpp:UDP:192.168.0.5:2016

    into the pvbrowser for a direct connection to the netpp node (assuming default IP at 192.168.0.5). Replace ‘modhub’ by the IP address of your pvhub server host.

  3. The process viewer should display something like below:
PVbrowser netpp node example


More information


  • netpp HOWTO: [PDF]
  • API reference OpenSource v0.3x: [HTML]



netpp node evaluation platform

netpp node [Spartan6-LX9]

I am glad to announce a new user evaluation platform module called ‘netpp node’. Its motto is ‘IoT on FPGA done right’:

netpp node preview rendering
  • Integrated high speed UDP stack for maximum possible bandwidth over 100Mbit Ethernet
  • dagobert network SoC with various configurable interfaces (SPI, UART, PWM, I2C) and pin multiplexing options.
  • SDK: GCC, GDB hardware debugger
  • Up to 5 analog I/O channels with ‘analog population option’
  • Piggybacks onto a custom I/O PCB using two 2×16 pin headers

The default firmware runs a fully functional netpp stack for remote control and measurement.

Its main applications:

  • Reliable computing (safety relevant applications with tamper-safe main loop watchdogs)
  • Guaranteed real-time response in the network for scalable applications (100s of units). Performance outline:
    • Up to 700 continuous netpp property requests per second verified
    • Push-on-Demand streaming of arbitrary dataports (high speed ADC) at maximum network bandwidth (10 MByte/s)
  • Simple DSP applications and smart analog measurement (low power, filtering and differential options)
  • Evaluation of the next-generation ZPU architecture for embedded GNU-style developers
netpp node alternate view
Update [11.9.]:
netpp node PCB prototype

The first prototypes are finished and are running 24/7 on the test bench at this moment.

Things are going very smoothly so far; just a minor capacitor change will be required for the v0.1 series.

Test procedures

As the full model of this design is available for simulation, we can verify the system effectively against stress situations. In particular, network safety is of utmost importance. The test procedure checklist of the dagobert SoC:

Completed

  • ARP and ping flooding
  • netpp packet performance test
  • Broken packet handling

In process

  • Jumbo packet flooding
  • Lost interrupt scenario (packet queue desynchronization)

Monitoring netpp packet performance

Packet behaviour in a real network is measured using the Wireshark protocol analyzer.

The figure below shows an example netpp transaction log that the netpp node handles at very low CPU overhead, based on direct register accesses. The red bars show the effective number of query responses using somewhat inefficient ping-pong requests. The performance can be increased by accumulating data into larger buffer properties.

For I2C or SPI transactions, however, the packet rate is expected to be considerably lower.

For high speed applications like MJPEG video streaming, a separate UDP/RTP queue can be set up within the firmware to reach maximum throughput. However, there is no handshaking with this method.

The image below shows a repeated property query from within Python. The pauses are introduced by an external disturbance (stress test) that causes a packet drop and makes the netpp engine time out and re-synchronize.

Python property query session
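
For reference, such a trace can be produced with a crude polling loop like the sketch below (address string and property name are placeholders):

import netpp

dev = netpp.connect("UDP:192.168.0.5")  # address of the netpp node under test
r = dev.sync()

# Poll one property as fast as possible. Packet drops show up as pauses
# while the netpp engine times out and re-synchronizes.
while True:
    r.Container.Test.get()              # placeholder property name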

Improved RX/TX queue

With an improved packet FIFO on the FPGA, I was able to crank up the number of netpp requests per second, as shown in the Wireshark trace below. This test makes sure that several netpp clients can poll the netpp node at high frequencies without disturbing each other. The blue trace is a repeated poll of the full property tree, the red bars are the timed queries from a process viewer daemon. With no other disturbance, we get occasional drops (e.g. at 45 s and 101.5 s) due to the queue running full.

Preliminary documentation


ECP5G Versa board under Linux

The ECP5 platform caught quite some attention a while ago, sitting at the rather high end with respect to gigabit LVDS communication of all sorts. It is the successor of the ECP3, which has performed well in HDMI applications in the past.

Lattice Semiconductor had launched a promo action for this Versa ECP5G board. With great assistance from Future Electronics Switzerland, I was able to get hold of a devkit.

Running the Lattice Diamond toolchain in a Linux environment has so far been straightforward, with minor quirks on the GUI side. Let me revisit the important items to get up and running. If you happen to have a Linux OS installed that does not match the Lattice Semi recommendations for a development system, you might want to look at the Docker approach for setting up the environment.

Programmer preparation

After porting a very simple CPU with a UART interface to the platform, the final step is flashing the design into the board using the Lattice Diamond Programmer. If you have not installed any udev rules, these are the steps you have to go through with root powers in order to get the FTDI USB programmer interface recognized. First, locate the device using lsusb:

Bus 001 Device 013: ID 0403:6010 Future Technology Devices International, Ltd FT2232C Dual USB-UART/FIFO IC

According to this device enumeration, you give access to the device:

chmod a+rw /dev/bus/usb/001/013

and unbind the first ttyUSB0 device from the ftdi_sio driver; unfortunately, the Lattice driver is not able to trigger this unbind by itself:

echo -n $TTY_ID > /sys/bus/usb/drivers/ftdi_sio/unbind

Replace $TTY_ID by the tty device entry in /sys/bus/usb/drivers/ftdi_sio/, which typically has the form “1-3.2:1.0” (the trailing 0 stands for port A, where the JTAG is at). If you have plugged in more than one FTDI adapter with UART capabilities, you will see several entries and need to figure out which is which.

Now you can download code into the programmer.

To automate the process, you can also use the script below:

#!/bin/bash

# Find the FT2232 (ID 0403:6010) and derive its bus/device path
allow_io=`lsusb | sed -n 's/^Bus \([0-9]*\) Device \([0-9]*\): ID 0403:6010 .*/\1\/\2/p'`

# Pick the ftdi_sio binding of interface :1.0 (port A, the JTAG side)
unbind_tty=`ls /sys/bus/usb/drivers/ftdi_sio/ | sed -n 's/\(.*\:1\.0\).*/\1/p'`

sudo chmod a+rw /dev/bus/usb/$allow_io
sudo sh -c "echo $unbind_tty > /sys/bus/usb/drivers/ftdi_sio/unbind"

SPI flash download

When downloading your design into the SPI flash, make sure you have MASTER_SPI_PORT=ENABLED set in your *.lpf file. Otherwise the programmer will fail with an error report on CHECK_ID:

ERROR - Verification Error...when Processing function: 'CHECK_ID'

Then you’ll have to use the fast download (into SRAM) of a design with an enabled master SPI port in order to be able to flash again.

Design issues

When starting to port our SoC design to this board, quite a few issues came up:

  • Don’t bother developing with Diamond v3.7. Some very obscure behaviour with wrong I/O mapping caused quite some headache. This seems to be solved in v3.8.
  • v3.8, however, is somewhat misleading with respect to the output and return states of the Synopsys synthesis engine ‘Synplify’. Make sure to check your design thoroughly: if Synplify throws an error, Diamond sometimes does not recognize it and runs map/PAR on an old design netlist.
  • Weird random behaviour can occur under Linux with Diamond during PAR, like error messages with respect to path names. This seems to happen especially with long names and underscores. The behaviour has been around in many previous versions of Diamond; Tech Support has refused to accept this as a bug so far. The workaround is to call PAR from the command line (TCL) or to use the Run Manager.
    Addendum: This bug seems to be fixed in v3.9, but there is no release note about it.


Talking through the UART

In theory, you should be able to talk to your design through the UART (if your design supports it) by firing up minicom:

minicom -o -D /dev/ttyUSB1

Now here comes the catch: the default EEPROM of my Versa 5G board did not have a correct descriptor. However, it came with the default FTDI VID:PID, so the ftdi_sio driver would recognize it but would neither actually communicate nor report an error.

So, in order to properly use this board, you may have to erase the EEPROM of the FTDI adapter on the Versa kit using FTDI’s MProg tool or similar. You might want to save the previous EEPROM content for reference; it does not seem to be needed, though, as the Programmer recognizes the board just fine.

Finally, after downloading our SoC setup into the board, it is talking:

Booting, HW rev: 04 -- Running at 50 MHZ

------------- test shell -------------
-- ZpuSoC for Versa ECP5 --
-- (c) 2017 www.section5.ch --
-- type 'h' for help --
# 

Simulation issues

When trying to fully simulate the SoC setup with PLL primitives and some instanced IP cores created by the Clarity module (obviously the IPExpress descendant for the ECP5), it turns out that some of the simulation primitives for the VHDL side are missing.

Some of them could be converted using the VHDL conversion trick from Icarus Verilog by the following Makefile rule:

%.vhdl: %.v
    iverilog -tvhdl -o $@ -pdepth=1 $<

However, some of the needed components might not convert without additional tweaking. Hopefully, Lattice Semi will come out with updated VHDL libraries.

IPcore simulation under GHDL

When generating IP cores that depend on library items and running them through GHDL, you might see this error message:

warning: component instance "scuba_vlo_inst" is not bound

However, if you have prepared your FPGA primitive components library (such as ecp5um-obj93.cf) correctly, the primitive simulation models should be in there (search for ‘vlo’ in the *.cf file if in doubt).

The reason why this happens is that the IP core file may contain component prototype declarations that prevent the instances from binding to the components in the library. So: just remove the “component” declaration sections and everything should link fine. The drawback is that you need two IP core versions, one for simulation and one for synthesis. If you have a better solution, let me know.

Next steps

Now, the fancy stuff on this board to be evaluated is:

  • DDR3 memory
  • Two GigE capable interfaces

Also, the ECP5 on this board has enough resources to run several ZPUng cores simultaneously. For safety reasons, we wouldn’t want the IoT crap running in the same environment as our controlling main loop.

Therefore, a second processor (Core B) is instanced to run the Ethernet stack (lwIP) only, while maintaining a simple DMA channel to the controller Core A for communication. If Core B is compromised, Core A will still maintain its “hardened” control loop and not go haywire.

Implementing DDR3 is tricky, therefore you might want to use the DDR3 IP core supplied by Lattice. On the demo kit, it will work for a few hours and then pull the global reset.

Networking

Of course, I was very curious about the Ethernet ports on this board. It is armed with two GigE-capable Marvell PHYs whose data sheets are a little hard to get hold of, but one might also look at various source code around the web or just check the reference design from Lattice. The reference design uses lwIP; since I only need and want UDP, I ported a zero-copy-capable UDP stack I had developed for the Blackfin EMAC to the ZPUng SoC (“cranach”), which is equipped with some DMA-capable scratchpad memory for a proper packet queue.

After all, the FPGA is now able to speak netpp, so I can turn on an LED, for example:

> netpp UDP:192.168.0.5:2016 LED.Yellow 1
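
The same should work from within Python, assuming a .set() counterpart to the .get() method used earlier:

import netpp

node = netpp.connect("UDP:192.168.0.5:2016")
r = node.sync()
r.LED.Yellow.set(1)   # turn the yellow LED on (assumed setter API)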

Resource usage

You might want to know how much logic and RAM is consumed by this solution.

Design Summary
   Number of registers:   2770 out of 44457 (6%)
      PFU registers:         2767 out of 43848 (6%)
      PIO registers:            3 out of   609 (0%)
   Number of SLICEs:      2963 out of 21924 (14%)
      SLICEs as Logic/ROM:   2891 out of 21924 (13%)
      SLICEs as RAM:           72 out of 16443 (0%)
      SLICEs as Carry:        309 out of 21924 (1%)
   Number of LUT4s:        4154 out of 43848 (9%)
   Number of block RAMs:  28 out of 108 (26%)
   Number of DCS:  1 out of 2 (50%)
   Number of PLLs:  1 out of 4 (25%)

As for the actual program code, it contains:

  • UDP stack supporting ARP, ICMP ping
  • Minimal shell (UART)
  • System I/O drivers (UART, Timer, MAC, PWM)
  • netpp minimal server with some LED handling

This is what’s effectively downloaded into the target:

(gdb) init
Loading section .fixed_vectors, size 0x400 lma 0x0
Loading section .l1.text, size 0x4af5 lma 0x400
Loading section .rodata, size 0x168 lma 0x4ef8
Loading section .rodata.str1.4, size 0xf60 lma 0x5060
Loading section .data, size 0x2c0 lma 0x5fc0
Start address 0x0, load size 25213


There’s a significant amount of string data in the .rodata.str1.4 section due to debugging info, plus some netpp descriptors. These could again be ‘overlayed’ into the SPI flash, as they are not accessed too frequently. To be investigated next…


netpp 0.5 release

Netpp 0.5 release notes

The netpp v0.5 release includes new functionality:

  • Proxy support: A netpp slave can connect to other slaves via netpp or other protocols
  • Support for the new ZPUng / MaSoCist setup (netpp on FPGA!)
  • More platforms added: esp8266/xtensa, avr, cc430, armv5, mips32
  • Improved dynamic property handling
  • Windows Python installer for Python v2.7
  • Java class wrappers
  • Added support for SWAP, modbus and MQTT (commercial)