
RISC-V in the loop

Continuous integration (‘CI’) for hardware is a logical step to take: why not do for hardware what works fine for software?

To keep things short: I’ve decided to stick my proprietary RISC-V approach ‘pyrv32’ into the open-source MaSoCist testing loop, so there is always an online reference that can run anywhere without massive software installation dances.

Because a fair part of the toolchain is still missing from the open-source repo (work in progress), only a stripped-down VHDL edition of the pyrv32 is available for testing and playing around.

This is what currently happens when you run ‘make all test’ in the provided Docker environment:

  • Builds some tools necessary to build the virtual hardware
  • Compiles the source code and creates a ROM file from it as VHDL
  • Builds a virtual System on Chip based on the pyrv32 core
  • Downloads the ‘official’ riscv-tests suite onto the virtual target and runs the tests
  • Optionally, lets you talk to the system via a virtual (UART) console

Instructions

This is the quickest ‘online’ way to try it without installing any software. You might need to register a Docker account beforehand.

  1. Log in at the docker playground: https://labs.play-with-docker.com
  2. Add a new instance of a virtual machine via the left panel
  3. Run the docker container:
    docker run -it hackfin/masocist
  4. Run the test suite:
    wget section5.ch/downloads/masocist_sfx.sh && sh masocist_sfx.sh && make all test
  5. Likewise, you can run the virtual console demo:
    make clean run-pyrv32
  6. Wait for the boot message and the # prompt to appear, then type h for help.
  7. Dump the virtual SPI flash:
    s 0 1
  8. Exit the minicom terminal with Ctrl-A, then q.

What’s in the box?

  • ghdl, ghdlex: Turn a set of VHDL sources into a simulation executable that exposes signals to the network (the engine for the virtual chip).
  • masocist: A build system for a System on Chip:
    • GNU Make, Linux kconfig
    • Plenty of XML hardware definitions based on netpp.
    • IP core library and plenty of ugly preprocessor hacks
    • Cross compiler packages for ZPU, riscv32 and msp430 architectures
  • gensoc: SoC generator, a.k.a. IP-XACT’s mean little brother (from another mother…)
  • In-house CPU cores with in-circuit emulation features (debug TAPs over JTAG, etc.):
    • ZPUng: a pipelined ZPU architecture with optimum code density
    • pyrv32: an rv32ui-compatible RISC-V core
  • Third-party open-source cores, not fully verified (but passing a simple I/O test):
    • neo430: an msp430-compatible architecture in VHDL
    • potato: a RISC-V-compatible CPU design


MaSoCist opensource 0.2 release

I’ve finally gotten around to releasing the open-source tree of our SoC builder environment on GitHub:

https://github.com/hackfin/MaSoCist

Changes in this release:

  • Active support for the Papilio and MachXO2 Breakout boards has been dropped
  • Very basic support for the neo430 (msp430-compatible) has been added; see the Docker notes below
  • Includes a non-configurable, basic ‘eval edition’ (in VHDL only) of our pipelined ZPUng core
  • Basic virtual board support (using the ghdlex co-simulation extensions)
  • Docker files and recipes included

Docker environment

Docker containers are, in my opinion, the best option for automated deployment and for testing different configurations. To stay close to current GHDL simulator development, the container is based on the ghdl/ghdl:buster-gcc-7.2.0 image.

Here’s a short how-to for setting up an environment that’s ready to play with. You can try this online at

https://labs.play-with-docker.com, for example.

Just register a Docker account, log in, and start playing in your online sandbox.

If you want to skip the build, you can use the precompiled Docker image by running

docker run -it -v/root:/usr/local/src hackfin/masocist

and skip step (3) below.

You’ll need to build and copy some files from contrib/docker to the remote Docker machine instance.

  1. Run ‘make dist’ inside contrib/docker; this will create a file masocist_sfx.sh
  2. Copy Dockerfile and init-pty.sh to the Docker playground by dragging the files onto the shell window
  3. Build the container and run it:
    docker build -t masocist .
    
    docker run -it -v/root:/usr/local/src masocist
  4. Copy masocist_sfx.sh to the Docker machine and run the following inside the running container’s home directory (/home/masocist):
    sudo sh /usr/local/src/masocist_sfx.sh
  5. Now pull and build all necessary packages:
    make all run
  6. If nothing went wrong, the simulation for the neo430 CPU will be built and started with a virtual UART and SPI simulation. A minicom terminal will connect to that UART, and you’ll be able to talk to the neo430 ‘bare metal’ shell. For example, you can dump the contents of the virtual SPI flash with:
    s 0 1
    

    Note: This can be very slow. On a Docker playground virtual machine, it can take up to a minute until the prompt appears, depending on the server load.

Development details

The simulation is a cycle-accurate model of the system running your user program, which you can of course modify. During the build process, i.e. when you run ‘make sim’ in the masocist(-opensource) directory, the msp430-gcc compiler builds the C code from the sw/ directory and places it in memory according to the linker script in sw/ldscripts/neo430. This results in an ELF binary, which is in turn converted into a VHDL initialization file for the target. Then the simulation is built.
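
To illustrate that last conversion step, here is a minimal sketch of turning a raw binary (as flattened from the ELF with objcopy) into a VHDL constant for ROM initialization. This is not the converter masocist actually uses; the script name, the rom_t array type and the word width are assumptions for illustration only.

    # rom2vhdl_sketch.py -- illustration only, not the tool masocist uses.
    # Assumes the ELF was flattened beforehand with objcopy -O binary
    # (toolchain prefix omitted) and emits a VHDL constant for a
    # hypothetical rom_t array type.
    import sys

    def bin_to_vhdl(binfile, name="rom_init", width=16):
        with open(binfile, "rb") as f:
            data = f.read()
        step = width // 8
        if len(data) % step:                     # pad to a full word
            data += b"\x00" * (step - len(data) % step)
        words = [int.from_bytes(data[i:i + step], "little")
                 for i in range(0, len(data), step)]
        body = ",\n".join('        x"{:0{}x}"'.format(w, width // 4)
                          for w in words)
        return ("constant {} : rom_t(0 to {}) := (\n{}\n    );"
                .format(name, len(words) - 1, body))

    if __name__ == "__main__":
        print(bin_to_vhdl(sys.argv[1]))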

The linker script is, however, very basic. Since a somewhat different, automatically generated memory map is used at this experimental stage, all peripherals are configured in the XML device description at hdl/plat/minimal.xml; the data memory configuration (the ‘dmem’ entity), however, does not automatically adapt the linker script.

Turning this into a fully configurable solution is left to be done.

SoC design virtualization

Live simulation examples

The video below demonstrates a live, running CPU simulation which can be fully debugged through gdb (at somewhat reduced speed). Code can also be downloaded into the running simulation without the need to recompile.

Older videos

These are legacy Flash animations which may no longer be supported by your browser. They demonstrate various trace scenarios of cycle-accurate virtual SoC debugging.

Virtualization benefits

Being able to run a full, cycle-accurate CPU simulation is helpful in various situations:

  • Verification of algorithms
  • Hardware verification: make sure an IP core is functioning properly and is not prone to timing issues
  • Firmware verification: strict verification of proper accesses (access to uninitialized registers or variables is found immediately)
  • Safety-relevant applications: full proof of correct functionality of a program’s main loop

Virtual interfaces and entities

Virtual entities allow external data or events to be looped into a fully cycle- and timing-accurate HDL simulation. For example, interaction with user-space software can take place through a virtual UART console: a terminal program can talk to a program running on a simulated CPU with a 16550 UART.
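
As a minimal host-side sketch, the script below talks to such a virtual UART, assuming the simulation exposes it as a PTY and prints the PTY path at startup, and that the third-party pyserial package is installed; the device path and baud rate are placeholders, and the ‘h’ help command mirrors the console demo above.

    # uart_console_sketch.py -- minimal host-side sketch (assumption: the
    # simulation exposes its UART as a PTY and reports its path at startup;
    # requires the third-party 'pyserial' package).
    import serial

    PTY = "/dev/pts/3"   # replace with the PTY reported by the simulation

    with serial.Serial(PTY, baudrate=115200, timeout=10) as port:
        port.write(b"h\n")                              # ask the shell for help
        print(port.read(512).decode(errors="replace"))  # simulation is slow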

For all these virtualization approaches, the software has to take very different timing into account, because the simulation can run slower by a factor of up to 1000 when simulating complex SoC environments. However, using mimicked timing models, the software generally turns out to become more stable, and race conditions are avoided effectively.

So far, the following simple models are covered by our co-simulation library:

  • Virtual UART/PTY
  • FIFO: Cypress FX2 model, FT2232H model
  • Packet FIFO: Ethernet MAC receive/transmit (without PHY simulation)
  • Virtual bus: access Wishbone components by address, or access registers directly from Python (see the sketch after this list), like:
    fpga.SPI_CONFIG.ENABLE.set(True)
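
Expanding on that one-liner, a scripted register access could look like the sketch below. The connect()/sync() entry points and the property hierarchy are assumptions about the netpp Python bindings; only the attribute-style .set() access is taken from the example above.

    # virtual_bus_sketch.py -- hedged sketch of scripted register access over
    # the virtual bus; connect()/sync() and the register names are assumptions
    # about the netpp Python bindings.
    import netpp

    dev = netpp.connect("localhost")     # the simulation exports a netpp slave
    fpga = dev.sync()                    # fetch the target's property tree

    fpga.SPI_CONFIG.ENABLE.set(True)     # enable the (virtual) SPI unit
    print(fpga.SPI_CONFIG.ENABLE.get())  # read back for verification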

The virtualization library is released in an open-source version at: github:hackfin/ghdlex

More complex model concepts:

RAM

For fast simulation, a dual-port RAM model was implemented for co-simulation that allows back-door access via the network. That way, new RAM content can be updated in fractions of a second for regression tests via simple Python scripting.
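
A regression helper built on this back door might look like the sketch below; the Ram.Offset and Ram.Buffer property names are hypothetical and depend on how the RAM model is registered in the netpp property tree, and the connect()/sync() calls are assumed as in the virtual bus sketch above.

    # ram_backdoor_sketch.py -- illustrative only; 'Ram.Offset' and
    # 'Ram.Buffer' are hypothetical property names, connect()/sync() assumed.
    import netpp

    def load_ram(image_path, offset=0, host="localhost"):
        """Push a new RAM image into the running simulation via the back door."""
        fpga = netpp.connect(host).sync()
        with open(image_path, "rb") as f:
            data = f.read()
        fpga.Ram.Offset.set(offset)      # hypothetical: target load address
        fpga.Ram.Buffer.set(data)        # hypothetical: back-door data window

A regression script could then call load_ram("test_pattern.bin") before releasing the simulated CPU from reset.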

Virtual optical sensor

For camera algorithm verification with simulated image data (such as from a PNG image or YUV video), we have likewise developed a customizable virtual sensor model that can be fed with arbitrary image data. Its video timing (blanking times, etc.) can be configured freely; image data is fed in through a FIFO. A reverse FIFO channel can in turn receive processed data, for example from an edge filter. This way, complex FPGA/DSP hybrid systems can be fully emulated and algorithms verified by automated regression tests.

Display

For direct visual control, or as a front end for virtualized LCD devices, the netpp display server allows YUV, RGB or indexed, hardware-processed images to be posted from within the simulation and displayed on the PC screen. For example, decoded YUV-format video can be displayed. When running as a full, cycle-accurate HDL simulation, this is very slow; functional simulation, for example through Python, has however turned out to be quite effective.

See also the old announcement post for an example: [ Link ]

Co-Simulation ecosystems

Sometimes it is necessary to link different development branches, like hardware and software: make ends meet (and meet the deadlines as well). Or you might want to pipe processed data from MATLAB into your simulation and compare it with the on-chip-processed result for thorough coverage or numerical stability. This is where you run into the typical problem:

  • Simulation runs on host A (Linux workstation)
  • Your LabVIEW client runs on the student’s Windows PC in the other building
  • The sensors are on the roof

For all these networking issues, our netpp protocol can be the glue, making it easy to link everything together and let it talk effectively. That’s also how it works with the simulation model.