Live simulation examples
The video below demonstrates a live running CPU simulation that can be fully debugged through gdb (at somewhat reduced speed). Code can also be downloaded into the running simulation without recompiling the simulation itself.
The following are legacy Flash animations that may no longer be supported by your browser. They demonstrate various trace scenarios of cycle-accurate virtual SoC debugging.
- 01-configure: Configuration of the CPU
- 02-debug: Virtual Debugger session
- 03-interactive: Interactive register manipulation through gdb
- 04-breakpoint: Setting of break points in program code
- 05-virtualboard: Virtual I/O manipulation session
- 06-irqevent: Tracing an IRQ event
Being able to run a full cycle-accurate CPU simulation is helpful in various situations:
- Verification of algorithms
- Hardware verification: make sure an IP core is functioning properly and not prone to timing issues
- Firmware verification: strict checking of register and variable accesses (reads of uninitialized registers or variables are flagged immediately)
- Safety-relevant applications: full proof of correct functionality of a program's main loop
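The verification items above typically boil down to comparing values captured from the HDL simulation against a bit-accurate reference ("golden") model. A minimal sketch in Python; all names here are illustrative, not part of any particular library:

```python
# Golden-model regression check: compare outputs captured from an
# HDL simulation trace against a bit-accurate reference model.
# (Hypothetical example: an 8-bit saturating adder.)

def saturating_add_u8(a, b):
    """Reference ('golden') model of an 8-bit saturating adder."""
    return min(a + b, 255)

def check_against_reference(captured, stimuli):
    """Return all mismatches between captured outputs and the model."""
    mismatches = []
    for (a, b), out in zip(stimuli, captured):
        expected = saturating_add_u8(a, b)
        if out != expected:
            mismatches.append((a, b, out, expected))
    return mismatches

stimuli = [(10, 20), (200, 100), (255, 255)]
captured = [30, 255, 255]   # values a simulation trace might yield
print(check_against_reference(captured, stimuli))  # [] -> pass
```

In an automated regression run, a non-empty mismatch list would fail the test immediately, pinpointing the offending stimulus.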
Virtual interfaces and entities
Virtual entities allow external data or events to be looped into a fully cycle- and timing-accurate HDL simulation. For example, user-space software can interact through a virtual UART console: a terminal program can talk to a program running on a simulated CPU with a 16550 UART.
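On the host side, such a virtual UART console can be backed by a pseudo-terminal. A minimal POSIX-only sketch using Python's standard `pty` module (the actual ghdlex wiring differs; this only shows the PTY mechanism):

```python
import os
import pty
import tty

# A PTY pair stands in for the serial line between a terminal
# program and the simulated 16550 UART.
master_fd, slave_fd = pty.openpty()
tty.setraw(slave_fd)  # raw mode: no echo or newline translation

# The simulation/firmware side would write its console output here...
os.write(slave_fd, b"hello from firmware\n")

# ...and the terminal side reads it from the master end:
data = os.read(master_fd, 64)
print(data)
```

A real terminal program would simply be attached to the slave device (`os.ttyname(slave_fd)`) instead of reading the file descriptor directly.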
With all these virtualisation approaches, the software has to account for very different timing, because simulating a complex SoC environment can run slower by a factor of up to 1000. However, experience with such mimicked timing models shows that the software generally becomes more robust, and race conditions are avoided effectively.
So far, the following simple models are covered by our co-simulation library:
- Virtual UART/PTY
- FIFO: Cypress FX2 model, FT2232H model
- Packet FIFO: Ethernet MAC receive/transmit (without Phy simulation)
- Virtual Bus: access wishbone components by address, or their registers directly, from Python
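Python-side virtual bus access might look as follows. This is a hypothetical sketch; the actual ghdlex/netpp API differs, and the class and register address below are illustrative only:

```python
# Hypothetical sketch of address-based access to wishbone-mapped
# registers from Python (not the actual ghdlex/netpp API).

class VirtualBus:
    """Models word-addressed access to wishbone-mapped registers."""
    def __init__(self):
        self._regs = {}  # address -> 32-bit register value

    def write(self, addr, value):
        self._regs[addr] = value & 0xFFFFFFFF

    def read(self, addr):
        return self._regs.get(addr, 0)

bus = VirtualBus()
bus.write(0xF0000004, 0xDEADBEEF)   # poke a control register
print(hex(bus.read(0xF0000004)))    # -> 0xdeadbeef
```

In the real co-simulation, each `read`/`write` would translate into wishbone bus cycles inside the running HDL simulation.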
The virtualization library is released as open source at: github:hackfin/ghdlex
More complex model concepts:
For fast simulation, a dual-port RAM model was implemented for co-simulation that allows back-door access via the network. That way, new RAM content can be loaded in a fraction of a second for regression tests via simple Python scripting.
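A back-door RAM update essentially ships an offset and a payload over the network. The framing below is a hypothetical sketch for illustration, not the actual ghdlex protocol:

```python
# Hypothetical framing for a network back-door RAM write:
# 'W', 32-bit offset, 32-bit length, followed by the raw payload.
import struct

def make_ram_write(offset, payload):
    """Client side: frame a RAM patch for transmission."""
    return struct.pack(">cII", b"W", offset, len(payload)) + payload

def apply_ram_write(ram, frame):
    """Simulation side: decode the frame and patch the RAM model."""
    cmd, offset, length = struct.unpack(">cII", frame[:9])
    assert cmd == b"W"
    ram[offset:offset + length] = frame[9:9 + length]

ram = bytearray(64)                 # stand-in for the dual-port RAM
apply_ram_write(ram, make_ram_write(8, b"\x01\x02\x03\x04"))
print(ram[8:12])                    # patched region
```

Because the patch bypasses the simulated bus entirely, regression tests can swap in new RAM images without restarting or recompiling the simulation.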
Virtual optical sensor
For camera algorithm verification with simulated image data (e.g. from a PNG image or YUV video), we have developed a customizable virtual sensor model that can likewise be fed with arbitrary image data. Its video timing (blanking times, etc.) can be configured freely; image data is fed through a FIFO. A reverse FIFO channel can in turn receive processed data, e.g. from an edge filter. This way, complex hybrid FPGA/DSP systems can be fully emulated and algorithms verified by automated regression tests.
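Conceptually, the sensor model emits active lines plus configurable blanking through a FIFO. A minimal sketch with illustrative timing parameters (the real model's interface and defaults differ):

```python
# Sketch of a virtual sensor pushing one frame through a FIFO,
# with freely configurable blanking (parameters are illustrative).
from queue import Queue

def feed_frame(fifo, width=8, height=4, h_blank=2, v_blank=1,
               pixel=0x80):
    """Push one frame: vertical blanking lines, then active lines
    (each followed by horizontal blanking, marked here as None)."""
    for _ in range(v_blank):
        fifo.put([None] * (width + h_blank))        # blank line
    for _ in range(height):
        line = [pixel] * width + [None] * h_blank   # active + h-blank
        fifo.put(line)

fifo = Queue()
feed_frame(fifo)
print(fifo.qsize())   # v_blank + height lines queued -> 5
```

In the real co-simulation, the consumer of this FIFO is the HDL sensor interface under test, and the reverse channel carries the processed result back for comparison.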
As a direct visual front end for virtualized LCD devices, the netpp display server allows YUV, RGB or indexed hardware-processed images to be posted to the PC screen from within the simulation. For example, decoded YUV video can be displayed. When running as a full cycle-accurate HDL simulation this is very slow; functional simulation, for example through Python, has turned out to be quite effective.
See also the old announcement post for an example: [ Link ]
Sometimes it is necessary to link different development branches, like hardware and software: make ends meet (and meet the deadlines as well). Or you might want to pipe processed data from MATLAB into your simulation and compare it with the on-chip-processed result for thorough coverage or numerical stability. This is where you run into the typical problem:
- Simulation runs on host A (Linux workstation)
- Your LabVIEW client runs on the student's Windows PC in the other building
- The sensors are on the roof
When you order an IP core design, you might want to have the same reference test environment as we do. It is based on a Docker container, so you avoid local dependency issues. Plus, it allows continuous integration of software and hardware designs.
The HDL playground is a Jupyter Notebook based environment that is launched in a browser via the link below:
- Co-simulation of Python stimuli and your own data against Verilog (optionally VHDL) modules
- yosys, nextpnr and Lattice ECP5 specific tools for synthesis, mapping and PnR in the cloud
- Auto-testing of Notebooks
- No installation of local software (other than the Docker service)