This code collection is an attempt to gather a few useful routines into a pool of reusable functionality. It is FAR from being a real library, but technically, we treat it as one.
The main purpose is to enable GHDL to communicate easily with external applications in both directions, for example, to read in real data samples or to output processing results to an existing and proven software driver.
This 'library' again makes heavy use of the netpp library. The reason is that netpp already provides an efficient framework for test benching via scripts, as well as many I/O features spread across the network. So you can easily realize a distributed processing and simulation solution without having to move hardware around or restructure your entire lab. For example, you could grab the current measurement samples from the LHC lab, the SETI project and CERN and run them through your VHDL RTL description :-)
Also, it offers a certain level of abstraction which makes it easy to swap software components or VHDL entities.
Currently, ghdlex operates in the following variants:
The first method is the simplest: using the --vpi option, you just add netpp property functionality to an existing test bench. The netpp.vpi shared library module scans the top level signals of the test bench and exports them as dynamic properties.
The second method is almost as simple. Typically, you add a virtualized entity to your design, or you just flip a configuration statement to use the simulation architecture of an entity. For example, you instantiate the VFIFO entity in your design, and a separate netpp thread is started automatically when you run the simulation.
The third method is direct access to netpp. This can be tricky, as netpp can take the role of a master, of a slave, or both. For example, the simulation acts as a master when it drives a virtual frame buffer and requires no further interaction; it typically acts as a slave when it only uses the VFIFO. You will have to dig somewhat into the netpp internals; quite a few netpp data structures can be accessed from VHDL. This method requires you to become familiar with the 'Autowrapped type mapping C <-> GHDL' module.
There are only a few default virtual entities that come with ghdlex:
They all depend on netpp, so they call netpp_init() at startup of the simulation.
Previously, some of those modules required loading the netpp.vpi module on top. This is no longer necessary: as soon as they call netpp_init(), the netpp API is initialized. If it is not, your simulation will stop with a null pointer exception and print a warning about using netpp.vpi.
It is not a problem to load the netpp.vpi on top of an already initialized netpp server, but you will also get a warning on the console. Future versions may run an extra server on a separate port.
The VFIFO is normally the first thing to implement when testing a hardware and software design in cooperation. From the host side, it behaves like a typical FIFO adapter that is accessed through USB or a serial interface.
For example, to access the virtual FIFO in the simulation, start 'simboard'. This will, among other things, output something like:
Reserved FIFO ':simboard:nfifo(0):fifo:' with word size 1, size 0x400
Initialize FIFO with word width of 8 bits
Initialize FIFO with word width of 8 bits
ProbeServer listening on UDP Port 7208...
Listening on UDP Port 2008...
Listening on TCP Port 2008...
Then run the python script test.py from another console:
python test.py
You will now see a simple loop back of the bytes sent from the Python script.
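As a rough, self-contained illustration of this loopback pattern (the real test.py talks to the running simulation through the netpp Python bindings, which are not reproduced here), a plain local TCP echo can stand in for the simulated FIFO:

```python
# Self-contained sketch of the loopback pattern used by test.py.
# NOTE: this is an illustrative stand-in, not the netpp API; a local
# echo server plays the role of the simulated VFIFO.
import socket
import threading

def echo_server(sock):
    """Accept one connection and echo every byte back (the loopback)."""
    conn, _ = sock.accept()
    with conn:
        while True:
            data = conn.recv(1024)
            if not data:
                break
            conn.sendall(data)

def run_loopback(payload):
    """Send payload to the echo server and collect the echoed bytes."""
    server = socket.socket()
    server.bind(("127.0.0.1", 0))   # pick any free port
    server.listen(1)
    port = server.getsockname()[1]
    threading.Thread(target=echo_server, args=(server,), daemon=True).start()
    with socket.create_connection(("127.0.0.1", port)) as client:
        client.sendall(payload)
        received = b""
        while len(received) < len(payload):
            received += client.recv(1024)
    return received

if __name__ == "__main__":
    print(run_loopback(b"hello fifo"))   # -> b'hello fifo'
```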
From the slave side, there are a few example implementations for:
If you wish to use more FIFOs and other virtual entities in your design, you might rather use the VFIFO instead. See also 'Automated netpp interfacing through VPI'.
As mentioned, there is no official 'API' for GHDL. Although there are implementations that seem to conform to the public VHPI specifications, this library does not yet make use of the VHPI interface. Instead, it uses potentially dangerous methods to access GHDL's internal structures directly.
So, even if there is no official "clean" way to do it, we should at least keep the potential changes in one place and automate the wrapping and data type exchange as far as possible. You are therefore encouraged to use the meta types, so that you will not have to change your test bench simulation code all over the place if the API changes.
For a short comparison between C and VHDL/Ada (the latter being the actual heart of GHDL): VHDL has many more data type variants than C, which can make the interfacing complex. So far, we have only covered a few data types:
All currently covered types are found in the typedef section of the Autowrapped type mapping C <-> GHDL module.
The following modules provide the API for the GHPI extension:
From the C API side:
The VPI interface originates from the Verilog world and likewise allows simple signal manipulation from external processes. GHDL implements only a small subset of VPI; however, this is sufficient for interactive manipulation of signals. The difference from GHDL's VHPI implementation is that VPI supports loading extensions (which are simply shared libraries with a specific API). The ghdlex VPI wrapper for netpp exports top level signals as netpp properties, which can be manipulated remotely through a C interface or Python scripts. See 'Automated netpp interfacing through VPI' for details.
Note that this way of pin manipulation is not real time in terms of simulation. The "now" on the software side is not defined on the simulation side, so a signal pulsed too briefly from the software side can be missed entirely. While this can be a good test for your design, we recommend using the VPI interface only for rather static signals. If you try to mimic a clock signal, it will likely go wrong.
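A deterministic toy model (no ghdlex or netpp code involved; all names here are illustrative) shows why a short software-side pulse can fall entirely between two simulation-side samples:

```python
# Toy model of the missed-pulse problem: the simulation effectively
# observes externally driven pin values only at its own intervals
# (sample_period ticks here), so a pulse shorter than that interval
# may never be seen. Purely illustrative; not ghdlex API.

def sample_pin(pulse_start, pulse_end, sample_period, total_time):
    """Return the values the simulation sees, sampling every sample_period."""
    seen = []
    for t in range(0, total_time, sample_period):
        value = 1 if pulse_start <= t < pulse_end else 0
        seen.append(value)
    return seen

# A 3-tick pulse between the samples at t=10 and t=20 is never observed:
print(sample_pin(pulse_start=12, pulse_end=15, sample_period=10, total_time=50))
# -> [0, 0, 0, 0, 0]

# A pulse longer than the sampling period is always caught at least once:
print(sample_pin(pulse_start=12, pulse_end=25, sample_period=10, total_time=50))
# -> [0, 0, 1, 0, 0]
```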
In this case, a GHDL simulation acts as slave (or server). For data I/O with external programs, there are two examples (see example/ folder):
In the following examples, GHDL is used as a master (or client), talking to external devices.
These examples can easily be modified to write directly to local devices instead of going through remote handlers.
The VPI wrapper for netpp can be added to any existing test bench and does the following:
As an example, a testbench is loaded with the netpp.vpi module:
./simx --vpi=netpp.vpi
and responds with:
loading VPI module 'netpp.vpi'
VPI module loaded!
Reserved RAM ':simram:ram0:' with word size 0x1000(8192 bytes)
ProbeServer listening on UDP Port 7208...
Registered RAM property with name ':simram:ram0:'
Listening on UDP Port 2008...
Reserved RAM ':simram:ram1:' with word size 0x1000(8192 bytes)
Listening on TCP Port 2008...
Registered RAM property with name ':simram:ram1:'
From the client side, the top level properties of this module can be accessed as follows:
netpp localhost
Child: [80000001] 'clk'
Child: [80000002] 'we'
...
Child: [00000002] 'Enable'
Child: [00000005] 'Irq'
Child: [00000007] 'Reset'
Child: [00000008] 'Throttle'
Child: [00000009] 'Timeout'
Child: [00000003] 'Fifo'
The properties with capitalized names are static and inherited from a default GHDL wrapper device. This does not necessarily have to be the case: it is up to the netpp device implementation what properties it exposes and what device description it inherits from.
Instances of virtual entities are displayed in the VHDL path name notation, like
:simram:ram0:
Accessing these properties from Python raises a small issue with the Python namespace rules, as member names of this sort are not allowed. Therefore, you have to obtain the property token using the getattr() function, like
ram0 = getattr(root_node, ":simram:ram0:")
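The same getattr() mechanics can be demonstrated without a running netpp connection; here a plain Python object stands in for the netpp root node:

```python
# Why getattr() is needed: VHDL path names like ':simram:ram0:' contain
# colons, which are illegal in Python identifiers, so dotted attribute
# access (root_node.:simram:ram0:) cannot even be written in source.
# RootNode is a stand-in for the real netpp root node.
class RootNode:
    pass

root_node = RootNode()
setattr(root_node, ":simram:ram0:", "ram0-property")

ram0 = getattr(root_node, ":simram:ram0:")
print(ram0)   # -> ram0-property
```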
Manipulating a pin means, setting a netpp property:
netpp localhost we 1 # Set 'we' to HIGH
Note that this manipulation can interfere with internal stimuli. If you change signals like 'clk', which is typically generated inside the VHDL test bench, inexplicable behaviour can occur.
As the simulation always runs slower than the real world, you have to introduce some timing tricks to make software cooperate properly with the simulation. For example, when using a VFIFO, the software has to use larger timeouts while waiting for the simulation to finish.
On the other hand, you might not want to run the simulation at full speed when it is not interacting with the software. For this case, the VFIFO entity has a throttle pin: when it is '1', the virtual FIFO sleeps for the specified SLEEP_CYCLES whenever there is no activity.
That way, the simulation time scale can be controlled to approximate "real time" (up to a scaling factor).
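The software-side timeout handling can be sketched as a generic polling loop. This is not the actual netpp FIFO API; MockSimFifo is a hypothetical stand-in that only becomes ready after a delay, mimicking a slow simulation:

```python
import time

# Generic polling loop with a generous timeout, as needed when the
# "hardware" on the other end is a slow simulation. MockSimFifo is a
# hypothetical stand-in for the real netpp FIFO access.
class MockSimFifo:
    def __init__(self, ready_after):
        self.t0 = time.monotonic()
        self.ready_after = ready_after

    def read(self):
        if time.monotonic() - self.t0 >= self.ready_after:
            return b"sample"
        return None   # simulation has not produced data yet

def read_with_timeout(fifo, timeout, poll_interval=0.01):
    """Poll the FIFO until data arrives or the (large) timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        data = fifo.read()
        if data is not None:
            return data
        time.sleep(poll_interval)
    raise TimeoutError("simulation did not respond in time")

if __name__ == "__main__":
    # A timeout that would be absurd for real hardware is normal here.
    print(read_with_timeout(MockSimFifo(ready_after=0.05), timeout=5.0))
```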
Some other entities that may not be contained in the free ghdlex distribution use the global_throttle signal. The user has to assign this signal himself in the top level implementation. The example board.vhdl demonstrates how the global_throttle signal is controlled from outside through a netpp property definition.
When using the netpp.vpi module, you can manipulate this signal automatically from the netpp side by its name in the hierarchy.
Also, there might be a global_dbgclk signal for some debugger entities. Make sure this signal is driven from the top level simulation, otherwise your units may not operate. For most of these debugger units, there is a USE_GLOBAL_CLK generic flag that is false by default, i.e. you will have to specify a clock explicitly. For detailed information, please refer to the specific debug module section.
Because a lot of manual coding is needed to wrap a C routine in a VHDL call, some highly experimental tricks that abuse the C preprocessor are used. Basically, a .chdl file is turned into a .vhdl file; see the Makefile for the specific rules. This resolves define and include statements and makes the API definition somewhat easier, since everything can be generated from one header file.
The important files:
If you wish to add another virtual entity, you will have to touch these files. Typically, the steps are:
libnetpp.chdl
apidef.h
Files for hackers - please only extend, don't change: