
ZPU next generation – pipelined

After working with various ZPU adaptations for quite a while, I kept running into feature requests for which there was no immediate solution short of swapping out the ZPU core. In the past months, quite a number of SoC designs turned out to be workarounds that were sufficient for their purpose, but promised to become a maintenance nightmare.

Long story short: I decided to give it a new go, this time in MyHDL. If you’re not aware of this developer’s gem, check out the official website. It has its quirks, but it is still the best tool I know of for designing a CPU, thanks to Python’s extensive test benching features. Verification and validation of a CPU are way easier than with the classical VHDL/Verilog approach.

Pipelining the ZPU

The ZPU is a pure stack machine, so sorting out register hazards is rather simple compared to, say, the MIPS architecture. The first approach, however, does not use a separate fetch stage, so branch penalties are not too bad for the moment (although this sacrifices some clock speed). This design introduces “shortcuts”: when a pop() instruction follows a push(), the value does not need to be fetched from the stack explicitly but can be bypassed through a ‘write register’. There are some minor but nasty exceptions that need to be sorted out, but these turned out not to require too much logic.

Eventually, parts of the design were borrowed from a very basic VLIW concept I had used on a MIPS16 clone a while ago, and ZPU instructions simply translate into VLIW sequences. Operations like LOAD that don’t pipeline well due to possible I/O stalls are just implemented as VLIW ‘microcode’.

Differences from the ‘classic’ ZPU4/Zealot ‘small’

In the original ZPU small, quite some traffic occurred between the core and the shared memory (program, data, stack). The ZPUng introduces some changes:

  • Separate stack memory: This allows a pure register (distributed RAM) synthesis for higher speed. Plus, the stack cannot trash the program code.
  • Shared program/data memory: This is required to stay compatible with the Zealot ‘Phi’ programs. However, traffic is reduced and writes are delayed (writeback stage). ZPUng v0.2 implements instruction prefetching and DMA access to the program/data memory.
  • Optional: Allow usage of pseudo dual port memory on very small FPGAs.
  • A read immediately following a write is a classic hazard scenario, which is handled by bypass logic on the stack memory. On the program/data memory, this hazard is not relevant for the ZPU.

DMA access could already be implemented on the ZPU4 using a specific DMA capable memory block.

I had implemented ICE debugging for the Zealot, using our in-house “StdTAP” interface, which runs on a few native JTAG primitives of various FPGA vendors. The new ZPUng should of course behave likewise. Since an ICE event is handled like a high-priority exception, a bit of logic had to be added.

Handling events: IRQ and emulation

This is the harder part: program code with IRQ handlers written for the existing ZPU is required to work unchanged on the new ‘ZPUng’ (working title). However, there are a few extras: by using an external System Interrupt Controller (SIC), we get more control over generating interrupt vectors. Remember, the standard ZPU4 or Zealot in its “phi” configuration has only one interrupt channel and vector. In the ZPUng, we take an external interrupt vector from the SIC, which can be configured through the peripheral I/O (memory mapped registers; MMR). Because the ZPU is a stack machine, no specific “return from exception” instruction is required, so registering an IRQ is very simple: just set interrupt vector register ‘n’ of the SIC to the address of a C handler function.
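
To illustrate the idea, here is a minimal C sketch of such a registration. The register names and addresses (SIC_IE, SIC_IVEC) are made up for the example; only the concept of writing a handler address into a SIC vector register comes from the design described above.

/* Hypothetical SIC memory map -- addresses and register names are
 * placeholders, not the actual ZPUng SoC layout. */
#define SIC_IE       (*(volatile unsigned int *)0x8000)
#define SIC_IVEC(n)  (*(volatile unsigned int *)(0x8010 + 4 * (n)))

/* A plain C function serves as handler; no special 'return from
 * exception' epilogue is needed on a stack machine like the ZPU. */
static void uart_irq_handler(void)
{
    /* service the peripheral ... */
}

void register_uart_irq(int channel)
{
    SIC_IVEC(channel) = (unsigned int)&uart_irq_handler; /* vector = handler address */
    SIC_IE |= (1u << channel);                           /* unmask the channel       */
}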

The very tricky part is to make interrupt handling work together with the on-chip debugger (In Circuit Emulation, aka ICE). There are a few boundary conditions:

  • IRQs don’t interrupt inside an “IM” (immediate load) sequence. Therefore, there is no fixed IRQ latency, but IM sequences are always atomic
  • IRQs can interrupt inside a single-step ICE session

Another feature of the IRQ enhancements: typically, an IRQ handler acknowledges the interrupt request to the SIC, allowing another interrupt to occur. If this happens before the IRQ routine has actually ended, it will eventually re-enter itself and trash the stack. This is avoided on the ZPUng by clearing the corresponding IPEND flag just before the final return (POPPC). The logic sets the IRQACK flag (which prevents reentrance) to the IRQ state on every branch instruction. So interrupt routines are not reentrant when following this scheme; reentrance could only be enabled by nasty hacks messing with the SIC configuration.
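
In handler code, this original scheme could look like the following sketch. SIC_IPEND and the channel number are placeholder names for the corresponding SIC register bits; the essential point is that the flag is cleared right before the handler returns (POPPC), not earlier.

/* Placeholder register and channel -- the real SIC layout may differ. */
#define SIC_IPEND    (*(volatile unsigned int *)0x8004)
#define TIMER_IRQ    1

void timer_irq_handler(void)
{
    /* do the actual interrupt work first ... */

    /* Acknowledge as the very last action: clearing IPEND earlier would
     * allow the next request to re-enter this handler and trash the stack. */
    SIC_IPEND = (1u << TIMER_IRQ);
}   /* the compiler's function return ends up as the final POPPC */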

IRQ redesign rev1

In order to re-enable IRQ reentrancy and allow IRQ prioritisation through the SIC, the interrupt handling was redesigned in ZPUng v1 such that a higher-prioritised IRQ can interrupt a currently running interrupt handler. Other implementations have made use of a POPINT opcode – the same is now happening here, with one exception: it only clears the flag, while the return is still done by a POPPC. This makes the code easier to handle. IPEND flags are now cleared at the beginning of the IRQ handler, and a final IRQ_REARM() macro clears the internal IRQ acknowledge.
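
Under the rev1 scheme, a handler might then be structured as in the following sketch. IRQ_REARM() is the macro mentioned above (assumed to be provided by the ZPUng support code); SIC_IPEND and the channel number are again placeholders.

#define SIC_IPEND    (*(volatile unsigned int *)0x8004) /* placeholder address */
#define SPI_IRQ      2                                  /* example channel     */

void spi_irq_handler(void)
{
    SIC_IPEND = (1u << SPI_IRQ);   /* acknowledge early: a higher prioritised
                                      IRQ may now preempt this handler */

    /* ... actual interrupt work ... */

    IRQ_REARM();                   /* clear the internal IRQ acknowledge
                                      just before the final return (POPPC) */
}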

The SIC was changed such that recurring IRQs with lower priority don’t cause another “dingdong” on the IRQ pin.

Resource usage

For example, the ZPUng ‘small’ (compatible with the ‘phi’ config) alone was synthesized in two configurations for a MACHXO2 7000 with the following resource usage:

Speed optimized (max. 32 MHz as SoC) : LUT: 906 Registers: 153  SLICE: 453
Area optimized (max. 25 MHz as SoC)  : LUT: 745 Registers: 152  SLICE: 361

This SoC configuration just uses a system interrupt controller plus a 2×16 GPIO bank as peripherals. Complex peripherals on the Wishbone bus would slow down the design further due to logic congestion of the current architecture.

Synthesis on the Papilio Spartan3-250k platform produces quite similar results, however the CPU is running a few MHz faster. This is very likely due to the Xilinx architecture being a little faster on the block RAM side.

Development Roadmap

  • Sources release: Planned, but not scheduled yet.
  • Speedup optimizations: There is not much to be gained, and changes to the pipeline would increase the logic element count. This is something where other CPU architectures perform better.
  • Explore ‘sliding register window’ tricks for pure research fun. Might require a full new instruction set…?

In Circuit Emulation for MIPS soft CPU

Many developers have got used to this: plug a JTAG connector into your embedded target and debug the misbehaviour down to the hardware. No longer a luxury, is it?
When you move on to a soft core from opencores.org for a simple SoC solution on an FPGA, it may well be. Most of them don’t have a TAP – a Test Access Port – like off-the-shelf ARM cores do.

We had pimped a ZPU soft core with in circuit emulation features a while ago (In Circuit Emulation for the ZPU). This enabled us to connect to a hardware ZPU core with the GNU debugger, just as we would attach to, say, a Blackfin or an MSP430, you name it.

After coming across various MIPS-compatible IP cores without debug facilities, I felt the urge to create a standard test access port that would work on this architecture as well. There is an existing debug standard called EJTAG, but it turned out to be way more complex to implement than simple In Circuit Emulation (ICE) using the above TAP from the ZPU.

So we would like to do the same as with the ZPU:

  • Compile programs with the <arch>-elf-gcc
  • Download programs into the FPGA hardware using GDB
  • Set breakpoints, inspect and manipulate values and registers, etc.
  • Run a little “chip scope”

Would that work without killer efforts?

Yes, it would. But we’re not releasing the white rabbit yet. Stay tuned for embedded world 2013 in Nuremberg in February. We (Rene Doß from Dossmatik GmbH Germany and myself) are going to show some interesting stuff that will boost your SoC development big time by using known architectures and debug tools.

His MIPS-compatible Mais CPU is well on its way to becoming stable, and turns out to be a quick bastard while being stingy on resources.

For a sneak peek, here’s some candy from the synthesizer (including I/O for the LCD as shown below):

Selected Device : 3s250evq100-5

Number of Slices:                     1172  out of   2448    47%
Number of Slice Flip Flops:            904  out of   4896    18%
Number of 4 input LUTs:               2230  out of   4896    45%
Number of IOs:                          15
Number of bonded IOBs:                  15  out of     66    22%
Number of BRAMs:                         2  out of     12    16%
Number of GCLKs:                         3  out of     24    12%

And for the timing:

Minimum period: 13.498ns (Maximum Frequency: 74.084MHz)

Mais on Papilio with TFT wing

Update: Here’s a link to the presentation (as PDF) given at the embedded world 2013 in Nuremberg:

presentation-2013


A full baseline pipelined JPEG encoder in VHDL

As described in a previous post, a framework was built to loop a DCT hardware design into a software JPEG encoder for verification (and acceleration) purposes.

It turns out this strategy speeds up development a lot, and the remaining modules on the way to a fully hardware-based, pipelined JPEG encoding solution weren’t a big job. Actually, I was expecting that this enhanced encoder would no longer fit into a small Spartan3E 250k. Wrong!

Have a look:

Device utilization summary:
---------------------------

Selected Device : 3s250evq100-5 

 Number of Slices:                     1567  out of   2448    64%  
 Number of Slice Flip Flops:           1063  out of   4896    21%  
 Number of 4 input LUTs:               2915  out of   4896    59%  
    Number used as logic:              2900
    Number used as Shift registers:      15
 Number of IOs:                          49
 Number of bonded IOBs:                  47  out of     66    71%  
 Number of BRAMs:                        12  out of     12   100%  
 Number of MULT18X18SIOs:                11  out of     12    91%  
 Number of GCLKs:                         2  out of     24     8%

JPEG encoder latency and timing

From the XST summary, we get:

Timing Summary:
---------------
Speed Grade: -5

   Minimum period: 13.449ns (Maximum Frequency: 74.353MHz)
   Minimum input arrival time before clock: 9.229ns
   Maximum output required time after clock: 6.532ns
   Maximum combinational path delay: 7.693ns

The timing is again optimistic; place and route normally degrades it further. The maximum clock is in fact the rate at which you can feed the JPEG encoder with pixel data (12 bit) without causing an overflow. The output is a Huffman coded byte stream that is typically embedded by a CPU into a JFIF structure: header, table data and the appropriate markers.
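
To give a rough idea of the CPU’s part of the job, here is a C sketch of how the hardware’s byte stream could be wrapped. The SOI/EOI marker values are standard JPEG; read_encoder_fifo() and the omitted table segments are placeholders, not actual code from this project.

#include <stdio.h>

#define SOI 0xFFD8   /* start of image (standard JPEG marker) */
#define EOI 0xFFD9   /* end of image                          */

static void put_marker(FILE *f, unsigned marker)
{
    fputc((marker >> 8) & 0xff, f);
    fputc(marker & 0xff, f);
}

/* Placeholder for fetching Huffman coded bytes from the hardware encoder;
 * returns the number of bytes read, 0 when the frame is complete. */
extern int read_encoder_fifo(unsigned char *buf, int maxlen);

void write_jfif(FILE *f)
{
    unsigned char buf[512];
    int n;

    put_marker(f, SOI);
    /* ... APP0 (JFIF header), DQT (quantization tables), SOF0 (frame header),
     * DHT (Huffman tables) and SOS (start of scan) segments go here ... */

    while ((n = read_encoder_fifo(buf, sizeof(buf))) > 0)
        fwrite(buf, 1, n, f);   /* entropy coded scan data from the FPGA */

    put_marker(f, EOI);
}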

There is quite some room for optimization; in fact, the best compromise between BRAM bandwidth and area has not been reached yet. Quite a few BRAM ports are unused but kept open to allow access from an external CPU, for example to manipulate the Huffman tables.

JPEG encoder waveforms

The last performance question might be the latency: how long does it take until encoded JPEG data appears after the first pixel data arrives? The above waveform snapshot should speak for itself: at a 50 MHz input clock, the latency is approx. 4 microseconds, i.e. roughly 200 clock cycles.

Colour encoding

We haven’t talked about colour yet. This is a complex subject, because there are many possibilities of encoding colour – but not really inside the JPEG encoder itself. It is rather a matter of I/O sequencing and the proper colour conversion. As you might remember, a JPEG encoder does not encode three RGB channels, but works in YUV space, which might roughly be described as brightness, redness and blueness. The ‘greenness’ is implicitly contained in this information. But why repeat what’s already nicely described: you find all the details right here on Wikipedia.
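
For reference, here is a minimal C sketch of the usual JFIF-style conversion for 8 bit samples; this happens outside the encoder core, which expects already converted data. The coefficients are the standard ones, the function itself is just an illustration.

/* Plain JFIF-style RGB -> YCbCr conversion for 8 bit samples. */
static void rgb_to_ycbcr(unsigned char r, unsigned char g, unsigned char b,
                         unsigned char *y, unsigned char *cb, unsigned char *cr)
{
    *y  = (unsigned char)( 0.299   * r + 0.587   * g + 0.114   * b);
    *cb = (unsigned char)(-0.16874 * r - 0.33126 * g + 0.5     * b + 128);
    *cr = (unsigned char)( 0.5     * r - 0.41869 * g - 0.08131 * b + 128);
}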

So, to encode all the colour, we just need data properly separated according to one of the interleaving schemes (4:2:0 or 4:2:2), and feed the MCU blocks of 8×8 pixels through the encoder while asserting the channel value (Y, Cb, Cr) on the channel_select input, as sketched below. Voilà.
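
For 4:2:0, one MCU covers 16×16 pixels: four Y blocks followed by one Cb and one Cr block. The sketch below only illustrates that feeding order; feed_block() and the channel codes are placeholders wrapped around the channel_select input mentioned above.

enum channel { CH_Y = 0, CH_CB = 1, CH_CR = 2 };   /* placeholder encoding */

/* Placeholder: pushes one 8x8 block into the encoder while driving
 * the channel_select input with 'ch'. */
extern void feed_block(const short block[64], enum channel ch);

/* One 4:2:0 MCU: four luma blocks, then the two subsampled chroma blocks. */
void feed_mcu_420(const short y[4][64], const short cb[64], const short cr[64])
{
    int i;
    for (i = 0; i < 4; i++)
        feed_block(y[i], CH_Y);
    feed_block(cb, CH_CB);
    feed_block(cr, CH_CR);
}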

Turns out that the Bayer Pattern that we receive from many optical colour sensors can be converted rather directly into YUV 4:2:0 space using the right setting for our Scatter-Gather unit (‘Cottonpicken’ engine). With a tiny bit of software intervention through a soft core, we finally cover the entire colour processing stream. Proof below.

Colour JPEG output from the encoder
The original RGB photo

As you can see, the colours are not quite perfect yet compared with the original. A greenish tint like this is a typical problem. We leave this to the colour optimization department 🙂

One more serious word: just yesterday I read the news and learned that the person who changed the optical colour sensor industry, Bryce Bayer, has passed away. As a final “thank you” for his work, I’d like to post the Bayer picture of the above.

Bayer pattern source image
Dedicated to Bryce Bayer

Dynamic netpp properties

The legacy

Up to netpp v0.3x, device properties used to be static, meaning that a device had a certain predefined set of properties described in XML, and that’s it. Nothing on the fly.
However, as coders familiar with the internals know, there are dynamic properties: the port concept of netpp allows a ‘Port’ to be added dynamically to a ‘Hub’ when the latter is probed.
For example, when sending a ‘probe’ broadcast on the ‘TCP’ Hub, all responding servers are added to the Port property list and show up when running the netpp master tool:

> netpp
Available interfaces/hubs:
 Child: [80000000] 'TCP'
    Child: [80010000] '192.168.3.100:2008'
 Child: [80000001] 'UDP'
    Child: [80020000] '192.168.3.100:2008'

The challenge

So, where would dynamic properties in an embedded device make sense? You could think of the following scenarios:

  1. A device would only show certain properties when in a certain operation mode
  2. A device would retrieve its properties from a description other than the static property list

The first scenario could so far always be dealt with using class derivation: when in mode A, the device shows its base properties; when in mode B, it shows the base properties plus some extensions. The built-in method of the base device solves this by returning the proper device index on the fly from the local_getroot() function.
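
A sketch of that idea in C is shown below. The exact signature of local_getroot() and the way device roots are referenced may differ in the actual netpp sources; the TOKEN type is the netpp property handle used elsewhere in this post, and all other names are invented for the example.

/* TOKEN is the netpp property handle type (from the netpp headers). */
extern TOKEN g_base_device;      /* base property set                  */
extern TOKEN g_extended_device;  /* base properties plus mode B extras */

extern int in_mode_b(void);      /* placeholder for the mode query */

/* Assumed shape of the hook: return the device root that the property
 * iteration should see, depending on the current operation mode. */
TOKEN local_getroot(void)
{
    return in_mode_b() ? g_extended_device : g_base_device;
}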

The second scenario, however, is trickier: of course you could build a static property list from another device description and register it with netpp. But this would result in a rather complicated external code mess; it turns out to be easier to enhance the existing structures for dynamic properties.

Let’s just work on an example to get to the details:

Example: a VHDL test bench

The VPI specification originating from the Verilog HDL (hardware description language) allows iterating through the signal hierarchy of a hardware design simulation. Using these extensions, quite a few hardware simulators allow loading a shared library on top of the simulation that does a few things, like external signal manipulation.

In various articles (like ‘Asynchronous remote simulation‘), it is described how to interface netpp with virtual hardware using the VHPI specification. However, the VHPI interface does not have the fancy shared library option. This means that each test bench that is to be netpp capable must be manually enhanced with the desired netpp interface and a DClib hardware description that maps abstract properties into register addresses on the VHDL side.

For a quick and dirty approach, it would be nice if we could just load a module on top of a simulation that queries the top level signals of the test bench and exports them as netpp properties, so that the user can manipulate them from outside (or remotely) using a Python script.

Within our so-called vpiwrapper (vpiwrapper.c), we create a function that builds a dynamic property from a signal:

TOKEN property_from_signal(TOKEN parent, vpiHandle sig)

The parent is a dynamic(!) root node that we have created beforehand if none existed already.
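
A sketch of how the wrapper might walk the top level and call this function is given below. The VPI calls (vpi_iterate, vpi_scan, vpi_get_str, vpi_printf) are standard; get_dynamic_root() is a placeholder for obtaining (or creating) the dynamic root node, and TOKEN comes from the netpp headers (not included here).

#include <vpi_user.h>
/* plus the netpp property header providing TOKEN (name omitted here) */

/* netpp side: create a dynamic property from a VPI signal handle
 * (see the signature above). */
extern TOKEN property_from_signal(TOKEN parent, vpiHandle sig);
extern TOKEN get_dynamic_root(void);   /* placeholder */

/* Walk the nets of all top level instances and export each as a property. */
static void export_toplevel_signals(void)
{
    TOKEN root = get_dynamic_root();
    vpiHandle top, iter, nets, sig;

    iter = vpi_iterate(vpiModule, NULL);        /* top level instances */
    while ((top = vpi_scan(iter)) != NULL) {
        nets = vpi_iterate(vpiNet, top);
        if (!nets)
            continue;
        while ((sig = vpi_scan(nets)) != NULL) {
            vpi_printf("exporting %s\n", vpi_get_str(vpiName, sig));
            property_from_signal(root, sig);
        }
    }
}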

Enhanced functionality

The above example just created a number of top level properties from existing signals. So far so good. But what if we also wanted the basic functionality from the VHPI extensions, common to all simulations? This again would be the static properties from the previous approaches. So we are mixing quite a bit: VPI with VHPI extensions, and static with dynamic properties.

Wouldn’t that turn into a nightmare?

Nope. The answer is again derivation: we just create two device root nodes. One is the static node from the static property list (proplist.c). The other is the dynamic node that we created inside the vpiwrapper (if it didn’t already exist). Now we simply set the static node as the “base class” of the dynamic node. That means the property iteration on the slave side (inside the simulation) will see both the dynamically created signal properties and the static properties.

As an example, a running “simram” simulation will answer the netpp call as follows:

Properties of Device 'VPI_GHDLwrapper' associated with Hub 'TCP':
Child: [80000001] 'clk'
Child: [80000002] 'we'
Child: [80000003] 'addr'
Child: [80000004] 'data0'
Child: [80000005] 'data1'
Child: [80000006] 'data2'
Child: [80000007] 'ram0'
Child: [80000008] 'ram1'
Child: [00000002] 'Enable'
Child: [00000005] 'Irq'
Child: [00000007] 'Reset'
Child: [00000008] 'Throttle'
Child: [00000009] 'Timeout'
Child: [00000003] 'Fifo'

The properties beginning with a capital letter are the static ones. You can also see a difference in the TOKEN values when compared to the lower-case signal properties. (Internals experts know that dynamic property tokens have the MSB set.)

On a side note: the properties ram0 and ram1 are explicitly registered by this simulation. They implement a netpp BUFFER variable that can simply be read/written asynchronously during the simulation. From the VHDL side, they simulate a simple dual port memory.