
Dynamic netpp properties

The legacy

Up to netpp v0.3x, device properties used to be static: a device had a predefined set of properties described in XML, and that was it. Nothing created on the fly.
However, as coders familiar with the internals know, there are dynamic properties: the port concept of netpp allows a ‘Port’ to be added dynamically to a ‘Hub’ when the latter is probed.
For example, when a ‘probe’ broadcast is sent on the ‘TCP’ Hub, all responding servers are added to the Port property list and show up when running the netpp master tool:

> netpp
Available interfaces/hubs:
 Child: [80000000] 'TCP'
    Child: [80010000] '192.168.3.100:2008'
 Child: [80000001] 'UDP'
    Child: [80020000] '192.168.3.100:2008'

The challenge

So, where would dynamic properties in an embedded device make sense? You could think of the following scenarios:

  1. A device shows certain properties only when in a certain operation mode
  2. A device retrieves its properties from a description other than the static property list

The first scenario could so far always be dealt with using class derivation: when in mode A, the device shows its base properties; when in mode B, it shows the base properties plus some extensions. The built-in method of the base device solves this by returning the proper device index on the fly from the local_getroot() function, as sketched below.
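Here is a minimal sketch of that pattern; every name in it (the mode flag, the root tokens, even the exact local_getroot() signature) is a placeholder for illustration rather than the real netpp code:

typedef unsigned long TOKEN;       /* assumption: netpp token type */

enum { MODE_A, MODE_B };
static int g_opmode = MODE_A;      /* hypothetical flag tracking the operation mode */

static const TOKEN ROOT_BASE     = 0x0001;  /* root of the base property list    */
static const TOKEN ROOT_EXTENDED = 0x0002;  /* root of base plus extension list  */

TOKEN local_getroot(void)
{
    /* Derivation trick: expose the extended root only while in mode B */
    return (g_opmode == MODE_B) ? ROOT_EXTENDED : ROOT_BASE;
}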

The second scenario, however, is trickier. Of course, you could build a static property list from another device description and register it with netpp. However, this would result in a rather complicated mess of external code; it turns out to be easier to enhance the existing structures for the dynamic properties.

Let’s just work on an example to get to the details:

Example: a VHDL test bench

The VPI specification, originating from the Verilog HDL (hardware description language), allows iterating through the signal hierarchy of a hardware design simulation. Using these extensions, quite a few hardware simulators can load a shared library on top of the simulation that does a few things, such as external signal manipulation.
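As a small illustration of what VPI offers, the sketch below walks the nets of all top-level modules and prints their full names; vpi_iterate(), vpi_scan(), vpi_get_str() and vpi_printf() are standard VPI calls:

#include <vpi_user.h>  /* standard VPI header shipped with the simulator */

/* List all nets of all top-level modules by their full hierarchical name. */
static void list_toplevel_nets(void)
{
    vpiHandle mod, net;
    vpiHandle mod_it = vpi_iterate(vpiModule, NULL);  /* NULL: start at top level */
    if (mod_it == NULL)
        return;
    while ((mod = vpi_scan(mod_it)) != NULL) {
        vpiHandle net_it = vpi_iterate(vpiNet, mod);
        if (net_it == NULL)
            continue;
        while ((net = vpi_scan(net_it)) != NULL)
            vpi_printf("net: %s\n", vpi_get_str(vpiFullName, net));
    }
}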

Various articles (like this) describe how to interface netpp with virtual hardware using the VHPI specification. However, the VHPI interface does not have this fancy shared library option. This means that each test bench that should be netpp capable must be enhanced manually with the desired netpp interface and a DClib hardware description that maps abstract properties to register addresses on the VHDL side.

For a quick and dirty approach, it would be nice if we could just load a module on top of a simulation that queries the top-level signals of the test bench and exports them as netpp properties, so that the user could manipulate them from outside (or remotely) using a Python script.

Within our so-called vpiwrapper (vpiwrapper.c), we implement a function that creates a dynamic property from a signal:

TOKEN property_from_signal(TOKEN parent, vpiHandle sig)

The parent is a dynamic(!) root node that we created beforehand, if none already existed.
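Its body could look roughly like the sketch below. The VPI calls are standard; dynprop_new() and dynprop_append(), as well as the DC_* type codes, are hypothetical placeholders for netpp’s dynamic property API:

TOKEN property_from_signal(TOKEN parent, vpiHandle sig)
{
    const char *name = vpi_get_str(vpiName, sig);  /* signal identifier    */
    int width        = vpi_get(vpiSize, sig);      /* signal width in bits */

    /* Create a property node named after the signal (placeholder calls) */
    TOKEN t = dynprop_new(name, width > 1 ? DC_BUFFER : DC_BOOL);
    /* ... and hook it into the dynamic root node */
    dynprop_append(parent, t);
    return t;
}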

Enhanced functionality

The above example just created a number of top-level properties from existing signals. So far so good. But what if we also wanted the basic functionality from the VHPI extensions, common to all the simulations? This again would be the static properties from the previous approaches. So we are mixing quite a few things: VPI with VHPI extensions, and static with dynamic properties.

Wouldn’t that turn into a nightmare?

Nope. The answer is again derivation: we just create two device root nodes. One is the static node from the static property list (proplist.c). The other is the dynamic node that we created inside the vpiwrapper (if it didn’t already exist). Now we simply set the static node as the “base class” of the dynamic node, as sketched below. That means the property iteration on the slave side (inside the simulation) will see both the dynamically created signal properties and the static properties.
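In code, the linking step might look like this sketch; dynprop_setbase() and the static root token name are assumptions for illustration, not the verified netpp API:

extern TOKEN g_t_staticroot;   /* static root generated from proplist.c (assumed name) */

static void link_static_base(TOKEN dynroot)
{
    /* Property iteration will now walk the dynamic signal properties
     * first, then fall through to the static base properties. */
    dynprop_setbase(dynroot, g_t_staticroot);
}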

As an example, a running “simram” simulation will answer the netpp call as follows:

Properties of Device 'VPI_GHDLwrapper' associated with Hub 'TCP':
Child: [80000001] 'clk'
Child: [80000002] 'we'
Child: [80000003] 'addr'
Child: [80000004] 'data0'
Child: [80000005] 'data1'
Child: [80000006] 'data2'
Child: [80000007] 'ram0'
Child: [80000008] 'ram1'
Child: [00000002] 'Enable'
Child: [00000005] 'Irq'
Child: [00000007] 'Reset'
Child: [00000008] 'Throttle'
Child: [00000009] 'Timeout'
Child: [00000003] 'Fifo'

The properties beginning with a capital letter are the static ones. You can also see a difference in the TOKEN values when compared to the lowercase signal properties. (Internals experts know that dynamic property tokens have the MSB set.)

On a side note: the properties ram0 and ram1 are explicitly registered by this simulation. They implement a netpp BUFFER variable that can simply be read/written asynchronously during the simulation. On the VHDL side, they simulate a simple dual-port memory.


Virtual Hardware JPEG encoding

For a while I had been messing with various DSP architectures while playing with FPGA technology. So far, both worlds were kind of separated, each in its own sandbox: the FPGA got to do the really stupid interfacing and simple transforms, while the DSP did the really complex encoding.
Now it’s time for a leap: why not move some frequently used DSP primitives smoothly into the FPGA?
What kept me from doing it were the tools. Most time is actually burned not on the concept or implementation, but on the debugging. Since the tools that help with debugging would cost a fortune, it was simply more economic to put a powerful chip next to the FPGA, even if the FPGA would have had the resources to run a decent number of soft cores in parallel.
Well, it turns out that through the development process of the past years, our own tools can do the job. The ghdl extensions described here allow verifying processing chains with real data, simply by replacing a hard VHDL FIFO module for data input with its virtual counterpart. This VirtualFIFO just runs in the simulation and can be fed with data from outside (via the network).

One good example of a complex processing chain is a JPEG encoder, which is typically implemented in software, i.e. as a serial procedure running on a CPU. If you wanted to migrate parts of the encoding, such as the computationally expensive DCT, into truly parallel hardware, the classical way would be to produce a number of offline data sets and run them through the test bench (simulation) to verify the correct behaviour of your design.
But you could also just loop the simulation into your software, in the sense of a “software in the loop” co-simulation.

Hacking the JPEG encoder

So, to replace the DCT of a JPEG routine with our DCT hardware simulation, we have to loop in some piece of code that implements our virtual DCT. We call it the “remote DCT”, because the simulation could run on another machine. To speak to the remote DCT, we use the VirtualFIFO, which has a netpp interface, meaning it can be accessed over the network.

The following Python script demonstrates how the remote DCT is accessed:

import netpp
d = netpp.connect("TCP:localhost")
r = d.sync()
buf = r.Fifo.Buffer
b = get_next_buffer()  # Example function returning a buffer
buf.set(b) # Send buffer b
rb = buf.get() # Get return buffer

The C version of this looks a bit more complicated, but basically makes API calls to netpp to transfer the data and wait for the result. This is simply the concept of swapping out local functions for remote procedure calls that are answered by the VHDL simulation.
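A rough C counterpart of the Python script might look like the sketch below; all of the device_* function names and the DEVICE/TOKEN handle types are hypothetical placeholders for the corresponding netpp C API calls, not verified signatures:

#include <stdint.h>
#include <stddef.h>

/* Sketch only: send a data block to the simulated DCT and read the result back. */
int remote_dct(uint8_t *block, size_t n)
{
    DEVICE d;                                        /* netpp device handle (placeholder) */
    int err = device_open("TCP:localhost", &d);      /* connect to the simulation   */
    if (err < 0)
        return err;

    TOKEN t_buf = device_gettoken(d, "Fifo.Buffer"); /* resolve the buffer property */
    err = device_write(d, t_buf, block, n);          /* send the input block        */
    if (err >= 0)
        err = device_read(d, t_buf, block, n);       /* wait for and fetch result   */

    device_close(d);
    return err;
}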

This way, we run our JPEG encoder with the black and white test PNG shown below:

Original PNG image

Since the DCT is running in a hardware simulation and not in software, it is very slow; it can take minutes to encode an image. However, what we do get from this simulation is a cycle-accurate waveform of what is happening under the hood.

DCT encoder waveforms

After many hours of debugging and fixing some data flow issues, our virtual hardware does what it should.

Encoded JPEG image

And this is the encoded result. By swapping the remoteDCT routine back for the built-in routine, we get a reference image that we can subtract from the hardware-encoded image. If the result is all zeros, we know that both methods produce identical results.
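A check along these lines, assuming both results were decoded to raw 8-bit grayscale buffers of equal size, is sketched here:

#include <stddef.h>

/* Count differing pixels between the reference and the hardware-encoded
 * image; a return value of 0 means both encoders produced identical results. */
static long diff_images(const unsigned char *ref, const unsigned char *hw,
                        size_t npixels)
{
    long ndiff = 0;
    for (size_t i = 0; i < npixels; i++)
        if (ref[i] != hw[i])
            ndiff++;
    return ndiff;
}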

Synthesis

Now the interesting part: how many hardware resources are allocated? See the synthesis results below (this is for a Xilinx Spartan3E 250).

Typically, the timing results from synthesis can’t be trusted: once place & route has completed and fitted other logic as well, the maximum clock frequency will decrease significantly. Since this design uses up all the DSP slices on this FPGA, we’ll only see it as an intermediate station and move on to more gates and DSP power.

 Number of Slices:                      872  out of   2448    35%  
 Number of Slice Flip Flops:            623  out of   4896    12%  
 Number of 4 input LUTs:               1668  out of   4896    34%  
 Number of IOs:                          39
 Number of bonded IOBs:                  38  out of     66    57%  
 Number of BRAMs:                         6  out of     12    50%  
 Number of MULT18X18SIOs:                12  out of     12   100%     

 Timing constraint: Default period analysis for Clock 'clk'
 Clock period: 6.710ns (frequency: 149.032MHz)

DPF hacking part three

Sorry, sorry, sorry.

Time flies! We have had the full standalone firmware running for a while, but just didn’t get around to publishing it.
It’s nice to see that plenty of people are actually using this hack on their Linux-powered sat receivers, Dreamboxes, etc.
It’s less nice to see that people do mods but don’t contribute back. That didn’t exactly motivate us to release the most recent fun stuff. But anyhow, we are giving it another go by kicking out another 0.200 developer release. This contains the raw framework only: no games and no fun stuff. One most wanted feature does come with it, though: no more waiting for the device to go into bluescreen mode before it is ready for lcd4linux.

Here’s a package with the ready-built firmware:

[ removed ]

If you have successfully managed to get the hack to run on one of your DPFs, you’ll likely be able to determine your DPF type (blue, white, etc.). If you can’t, you’ll have to do some trial & error. Just flash the entire .bin image to your DPF using ProgSPI.exe or restore.py. For detailed instructions, see the various readme files in the source distribution.

To obtain these utilities or check out the current source, see the links in DPF hacking part two.

VNC DPF client demo

There’s some fun stuff in the next box, really. The 1.0 firmware will have support for a full remote display based on the VNC protocol. For efficiency, the 1.0 protocol will not be compatible with the current SCSI command emulation. So, for all programmers: don’t access the DPF directly if you want to stay compatible with future versions. Use dpflib only!

One last note: we’ve come across a few pre-flashed Pearl units sold on eBay for three times the price. Guys, that simply is lame. If we wanted to make money, we’d have done that way earlier.