
MaSoCist opensource 0.2 release

I’ve finally gotten around to releasing the open source tree of our SoC builder environment on GitHub:

https://github.com/hackfin/MaSoCist

Changes in this release:

  • Active support for the Papilio and Breakout MachXO2 boards has been dropped
  • Very basic support for the neo430 (msp430-compatible) added, see Docker notes below
  • Includes a non-configurable basic ‘eval edition’ (in VHDL only) of our pipelined ZPUng core
  • Basic virtual board support (using ghdlex co-simulation extensions)
  • Docker files and recipes included

Docker environment

Docker containers are, in my opinion, the optimum for automated deployment and for testing different configurations. To stay close to current GHDL simulator development, the container is based on the ghdl/ghdl:buster-gcc-7.2.0 image.

Here’s a short howto for setting up a ready-to-play environment. You can try this online at https://labs.play-with-docker.com, for example.

Just register a Docker account, log in, and start playing in your online sandbox.

If you want to skip the build, you can use the precompiled Docker image by running

docker run -it -v/root:/usr/local/src hackfin/masocist

and skip step 3 below.

Otherwise, you’ll need to build and copy a few files from contrib/docker to the remote Docker machine instance.

  1. Run ‘make dist’ inside contrib/docker; this creates the file masocist_sfx.sh
  2. Copy Dockerfile and init-pty.sh to the Docker playground by dragging the files onto the shell window
  3. Build the container and run it:
    docker build -t masocist .
    
    docker run -it -v/root:/usr/local/src masocist
  4. Copy masocist_sfx.sh to the Docker machine as well, then run it inside the running container’s home directory (/home/masocist):
    sudo sh /usr/local/src/masocist_sfx.sh
  5. Now pull and build all necessary packages:
    make all run
  6. If nothing went wrong, the simulation for the neo430 CPU will be built and started with a virtual UART and SPI simulation. A minicom terminal will connect to that UART and you’ll be able to talk to the neo430 ‘bare metal’ shell. For example, you can dump the content of the virtual SPI flash with:
    s 0 1
    

    Note: This can be very slow. On a Docker playground virtual machine, it can take up to a minute until the prompt appears, depending on the server load.

Development details

The simulation is a cycle-accurate model of your user program, which you can of course modify. During the build process, i.e. when you run ‘make sim’ in the masocist(-opensource) directory, the msp430-gcc compiler builds the C code from the sw/ directory and places it in memory according to the linker script in sw/ldscripts/neo430. This results in an ELF binary, which is then converted into a VHDL initialization file for the target. Finally, the simulation is built.
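
As an aside, that last conversion step can be illustrated with a few lines of Python. This is only a sketch of the principle, not the actual MaSoCist generator; the file handling, the mem_t type and the memory depth are assumptions:

    # Sketch only: turn a raw program image into a VHDL constant for
    # RAM initialization. In the real flow the msp430-gcc ELF output
    # is converted; names and geometry here are made up.
    import sys

    def bin_to_vhdl(binfile, name="imem_init", depth=2048):
        with open(binfile, "rb") as f:
            data = f.read()
        # pack little-endian 16 bit words (the neo430 is a 16 bit CPU)
        words = [int.from_bytes(data[i:i + 2], "little")
                 for i in range(0, len(data), 2)]
        entries = [f'    {i} => x"{w:04x}",' for i, w in enumerate(words)]
        return (f"constant {name} : mem_t(0 to {depth - 1}) := (\n"
                + "\n".join(entries)
                + '\n    others => x"0000");\n')

    if __name__ == "__main__":
        sys.stdout.write(bin_to_vhdl(sys.argv[1]))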

The linker script is, however, very basic. At this experimental stage, a somewhat different, automatically generated memory map is used: all peripherals are configured in the XML device description at hdl/plat/minimal.xml, but the data memory configuration (the ‘dmem’ entity) does not automatically adapt the linker script.

Turning this into a fully configurable solution is left to be done.


SoC design virtualization

Live simulation examples

The video below demonstrates a live running CPU simulation which can be fully debugged through gdb (at a somewhat slower speed). Code can also be downloaded into the running simulation without the need to recompile.

Older videos

These are legacy Flash animations which may no longer be supported by your browser. They demonstrate various trace scenarios of cycle-accurate virtual SoC debugging.

Virtualization benefits

Being able to run a full, cycle-accurate CPU simulation is helpful in various situations:

  • Verification of algorithms
  • Hardware verification: Make sure an IP core is functioning properly and not prone to timing issues
  • Firmware verification: Strict checking of proper accesses (access to uninitialized registers or variables is found immediately)
  • Safety-relevant applications: Full proof of correct functionality of a program’s main loop

Virtual interfaces and entities

Virtual entities allow looping external data or events into a fully cycle- and timing-accurate HDL simulation. For example, interaction with user space software can take place through a virtual UART console: a terminal program can talk to a program running on a simulated CPU with a 16550 UART.
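
From the host side, such a session can also be scripted. Here is a rough sketch using pyserial; the PTY path is an assumption, as the co-simulation reports the actual /dev/pts node it created on startup:

    # Sketch only: talk to the simulated UART through its virtual PTY.
    # The device path below is an assumption -- the co-simulation
    # prints the actual /dev/pts/N node when it starts up.
    import serial   # pyserial

    port = serial.Serial("/dev/pts/3", baudrate=115200, timeout=5)
    port.write(b"s 0 1\n")      # e.g. dump the virtual SPI flash content
    print(port.read(256).decode(errors="replace"))
    port.close()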

For all these virtualization approaches, the software has to account for very different timing, because the simulation of a complex SoC environment can run slower by up to a factor of 1000. However, it turns out that software developed against such mimicked timing models generally becomes more stable, and race conditions are avoided effectively.

So far, the following simple models are covered by our co-simulation library:

  • Virtual UART/PTY
  • FIFO: Cypress FX2 model, FT2232H model
  • Packet FIFO: Ethernet MAC receive/transmit (without Phy simulation)
  • Virtual Bus: Access Wishbone components by address, or registers directly from Python (see the sketch after this list), like:
    fpga.SPI_CONFIG.ENABLE.set(True)
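
Slightly expanded, such a register access session might look like the following sketch. Only the SPI_CONFIG.ENABLE property is taken from the example above; the connection call and the read-back are assumptions about the netpp Python bindings:

    # Sketch only: virtual bus register access from Python. The
    # connection string and everything except SPI_CONFIG.ENABLE
    # are assumptions.
    import netpp

    dev  = netpp.connect("TCP:localhost")   # attach to the running simulation
    fpga = dev.sync()                       # fetch the device property tree

    fpga.SPI_CONFIG.ENABLE.set(True)        # enable the SPI core
    print(fpga.SPI_CONFIG.ENABLE.get())     # read the register back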

The virtualization library is released as an open source version at: github:hackfin/ghdlex

More complex model concepts:

RAM

For fast simulation, a dual-port RAM model was implemented for co-simulation that allows back-door access via the network. That way, new RAM content can be loaded within a fraction of a second for regression tests via simple Python scripting.
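
A regression step built on this could look roughly like the sketch below. The property names and the connection string are hypothetical; the point is merely that the RAM content appears as a writable property of the simulated device:

    # Sketch only: reload the co-simulated dual-port RAM through the
    # network back door between two regression runs. Property names
    # and connection string are hypothetical.
    import netpp

    dev = netpp.connect("TCP:localhost")
    ram = dev.sync().Ram0.Buffer            # hypothetical RAM buffer property

    with open("testdata.bin", "rb") as f:
        ram.set(bytearray(f.read()))        # new content in a fraction of a second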

Virtual optical sensor

For camera algorithm verification with simulated image data (e.g. from a PNG image or YUV video), we have developed a customizable virtual sensor model that can likewise be fed with arbitrary image data. Its video timing (blanking times, etc.) can be configured freely; image data is fed through a FIFO. A return FIFO channel can receive the processed data, e.g. from an edge filter. This way, complex FPGA/DSP hybrid systems can be fully emulated and algorithms verified by automated regression tests.
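
The stimulus side of such a setup can be prepared with a few lines of Python. The sketch below only shows how a PNG frame could be turned into a line stream with a configurable horizontal blanking interval; how the lines are pushed into the simulation FIFO (and the processed frame read back) is left out:

    # Sketch only: turn a PNG into the line stream a virtual sensor
    # model could clock out, including a configurable blanking interval.
    # Feeding the simulation FIFO is not shown here.
    import numpy as np
    from PIL import Image

    def sensor_lines(pngfile, h_blank=64):
        frame = np.asarray(Image.open(pngfile).convert("L"))  # 8 bit grayscale
        blank = bytes(h_blank)                                 # zero padded blanking
        for line in frame:
            yield line.tobytes() + blank   # active pixels followed by blanking

    for raw_line in sensor_lines("testframe.png"):
        pass   # push 'raw_line' into the sensor input FIFO of the simulation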

Display

For direct visual control, or as a front end for virtualized LCD devices, the netpp display server allows posting YUV, RGB or indexed, hardware-processed images from within the simulation for display on the PC screen. For example, decoded YUV video can be shown. When running as a full, cycle-accurate HDL simulation this is very slow; functional simulation, for example through Python, has turned out to be quite effective, however.

See also the old announcement post for an example: [ Link ]

Co-Simulation ecosystems

Sometimes it is necessary to link different development branches, like hardware and software: make ends meet (and meet the deadlines as well). Or you might want to pipe processed data from MATLAB into your simulation and compare it with the on-chip-processed result for thorough coverage or numerical stability checks. This is where you run into the typical problem:

  • Simulation runs on host A (Linux workstation)
  • Your LabVIEW client runs on the student’s Windows PC in the other building
  • The sensors are on the roof

When you order an IP core design, you might want to have the same reference (test environment) as we do. This is based on a Docker container, so you do not run into local dependency issues. Plus, it allows continuous integration of software and hardware designs.

HDL playground

The HDL playground is a Jupyter Notebook-based environment that is launched in a browser via the link below:

https://github.com/hackfin/hdlplayground

Features:

  • Co-simulation of Python stimuli and your own data against Verilog (optionally VHDL) modules, as sketched after this list
  • yosys, nextpnr and Lattice ECP5-specific tools for synthesis, mapping and PnR in the cloud
  • Auto-testing of Notebooks
  • No installation of local software (other than the Docker service)
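
As a trivial illustration of the first item, the Python side of such a co-simulation is typically just a stimulus generator plus a reference model that the module’s outputs are checked against. The module under test (an 8-bit adder) and its interface are assumptions here:

    # Sketch only: Python-side stimuli and reference model for an
    # assumed 8-bit adder module under co-simulation in a notebook.
    import random

    def reference_add(a, b):
        return (a + b) & 0xff               # Python reference model

    stimuli = [(random.randrange(256), random.randrange(256)) for _ in range(32)]
    for a, b in stimuli:
        expected = reference_add(a, b)
        # the co-simulation applies (a, b) to the Verilog module and the
        # notebook compares the DUT output against 'expected'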

Hardware Design

section5 has extensive experience in electronics and signal processing on a variety of platforms, from simple discrete microcontroller designs through complex Linux systems up to the simulation and implementation of FPGA designs (MyHDL/VHDL):

  • Embedded Linux, bare metal crt0.s, GNU, kernel drivers
  • FPGA programming, GPU/DSP cores
  • 3D CAD and visualization
  • Prototype construction, reference designs

Example applications:

CAD and prototypes

Rendering of an FPGA board

Example board ‘denverII’

Quickly to market with a working prototype, all from a single source:

  • PCB design tools (Kicad, Eagle, Altium possible)
  • 3D CAD and visualization (photo-rendering studies)
  • New: foldable enclosure prototypes (paper, plastic, metal)
  • Manual PCB assembly, support for small-series production with established assembly houses
  • Test bench setup for volume production of first series

FPGA IP Cores and CPU Designs

>> Custom processor solutions

Tools

Some of our ‘in-house’ tools are open source or available as part of development packages:

  • ghdlex: co-simulation of software/hardware
  • gensoc: SoC peripheral generator (CPU definition netpp -> HDL), IP-XACT/Qsys translation
  • Build system for SoCs on various FPGA platforms