
High level synthesis Part 1

Creating hardware elements in Python using its generator methods leads us to a few application scenarios that make complex development fun and less error prone. Let’s start with a simple use case.

Counter abstraction

Assume you have a complex design using a bunch of FIFOs and plenty of counters. Wait, what kind of counters? The first implementation, for debugging reasons, is likely to begin with a description of a binary (ripple) counter, as long as no clock domain crossing is involved (we deal with that below). So you'd simulate, find it running correctly, and drop the design into the synthesizer toolchain, only to realize there is going to be a bottleneck with respect to speed or logic usage. Then one might recall that there is more than one way to count: an LFSR implementation, for instance, uses less logic than a ripple counter and cycles through some interval where each value occurs only once.
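
To make the "each value occurs only once" property concrete, here is a plain-Python sketch of a 4-bit maximal-length LFSR (an illustrative software model, not the cyrite implementation; the polynomial x^4 + x + 1 is one known good choice):

```python
def lfsr4_step(state):
    """One step of a 4 bit Fibonacci LFSR for the primitive polynomial
    x^4 + x + 1: feedback is the XOR of bit 3 and bit 0."""
    fb = ((state >> 3) ^ state) & 1
    return ((state << 1) | fb) & 0xF

state, seen = 1, []
while state not in seen:
    seen.append(state)
    state = lfsr4_step(state)

print(len(seen))                           # 15, i.e. 2**4 - 1
print(sorted(seen) == list(range(1, 16)))  # True: every non-zero value exactly once
```

Note the all-zeros state is not part of the cycle, which is exactly one of the LFSR issues discussed further below.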

So, having started out with a VHDL design, you'd have to identify every counter instance, change it into an LFSR, and deal with translating decimal start and stop values into their LFSR counterparts (note that, unlike a gray code, there is no closed formula for this). Using the Counter abstraction from the cyrite library, this plays out nicer with some early planning.

The Counter class represents a Signal type that behaves like a usual signal and allows adding or subtracting a constant within a @genprocess notation or an @always function in MyHDL style. So we'd instance Counters instead of signals, but introduce no further abstraction; it will all still look quite explicit:

c = Counter(SIZE = 12, START_VALUE = 0)

@always_seq(clk.posedge, reset)
def worker():
    if en:
        c.next = c + 1

However, when it's time to change to a different counter implementation, we'd want to swap them all at once using one flag. So one would typically end up with a class design where the desired counter type is provided at the initialization stage, and the counter abstraction generates the rest. Then, inside the Python 'factory' class, we'd spell:

class Unit(Module):
    def __init__(self):
        self.Counter = LFSRX

    def unit0(self, clk: ClkSignal, r: ResetSignal, a: Signal, b: Signal.Output):
        c = self.Counter(...)

        @always_seq(clk.posedge, r)
        def worker():
            c.next = c + 1

Now here comes the catch: the above worker process should not change, but the + 1 operation really only applies to a binary counter. Plus, a specific counter component such as an LFSR module will have to be implicitly inferred. Here's where Python's metaprogramming features kick in: by overriding the __add__ method, we can return a generator object creating HDL elements, and apply rules that make sure only applicable counting steps are allowed (for instance, a gray counter always counts up or down by one).
Behind this is a Composite class that provides mechanisms to combine hardware block implementations (hierarchically resolved) with a resulting signal.
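
As a toy illustration of this overriding scheme (plain Python, not the actual Counter/Composite code; all names here are made up), a counter model can police its legal steps inside __add__, with the classic binary-to-gray mapping n ^ (n >> 1) standing in for the generated hardware:

```python
class GrayCounterModel:
    """Software sketch of a counter signal that polices its own steps.
    In the real library, __add__ would return a generator object emitting
    HDL elements; here it just returns the next model state."""
    def __init__(self, size, start=0):
        self.size = size
        self.index = start              # ordinal (binary) position

    @property
    def value(self):
        n = self.index % (1 << self.size)
        return n ^ (n >> 1)             # binary-to-gray encoding

    def __add__(self, step):
        if step not in (1, -1):
            raise ValueError("a gray counter counts up or down by one only")
        return GrayCounterModel(self.size, self.index + step)

c = GrayCounterModel(4)
trace = []
for _ in range(4):
    c = c + 1                           # the abstracted counting operation
    trace.append(c.value)
print(trace)   # [1, 3, 2, 6] -- successive values differ in one bit
```

Swapping in a different counter class leaves the `c = c + 1` line untouched; only the rules and the generated logic change.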

Here’s how they play in detail (allow a few minutes maximum for the binder to start up):

When it comes to LFSRs, there are a few issues. Let’s recap:

  • LFSR generators are given a known good polynomial creating a maximum-length sequence of numbers (2 ** n - 1 values, n = number of register bits). See also LFSR reference polynomials.
  • The base LFSR feedback must not start with all zeros or all ones, depending on the implementation
  • There is no direct function providing the LFSR-specific representation of a decimal value. It has to be iterated.

In a FIFO scenario, you typically require a counter covering all values in the interval [0, 2**n - 1]. Therefore, you'll have to generate an LFSR variant with an extra feedback that includes the zero and all-ones states. This is covered by the LFSRX implementation.

Normally, you’ll have *in* and *out* counters that are compared against each other, so there is no need to determine the value of the n’th state. However, when generating a video signal that way, for instance, this is different, because a specific, configureable interval is to be accounted for. So: we need some software extension to cycle the LFSR through n times in order to get the corresponding value. This is viable for small n, but will take plenty of time for larger.

The situation is even worse when a reverse look-up is required. Then we'd have to store all values in a large table; again, not an option for large n. The improved solution lies somewhere in between: a bit of storage and a bit of iterating, until the value is found.
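
The in-between approach can be sketched in plain Python (an illustrative model, not the library code): store a checkpoint for every k-th state of the cycle, then iterate forward from the queried state until a checkpoint is hit:

```python
def lfsr4_step(state):
    # 4 bit maximal-length LFSR (x^4 + x + 1), period 15
    fb = ((state >> 3) ^ state) & 1
    return ((state << 1) | fb) & 0xF

PERIOD, STRIDE, START = 15, 4, 1

def build_checkpoints():
    """A bit of storage: keep only every STRIDE-th state -> index pair."""
    table, s = {}, START
    for i in range(PERIOD):
        if i % STRIDE == 0:
            table[s] = i
        s = lfsr4_step(s)
    return table

def state_to_index(state, table):
    """A bit of iterating: step forward until a checkpointed state appears,
    then subtract the number of steps taken (modulo the period)."""
    steps, s = 0, state
    while s not in table:
        s = lfsr4_step(s)
        steps += 1
    return (table[s] - steps) % PERIOD
```

Choosing STRIDE near the square root of the period balances table size against the worst-case number of iterations.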

Assuming this correspondence mapping is solved, we can finally tackle the comparison against such a value: since we already introduced plenty of abstraction, customizing the __eq__ special method creates no further pain. We check against an integer and translate it ad hoc into the corresponding state's value.

This way it is possible to abstract counter elements and swap them for each other, provided they support the counting methods (LFSR: up by one only; gray: up or down by one).


The question might arise: What’s that got to do with High Level Synthesis? (HLS)

Well, this is only Part 1. What we understand as HLS so far (the term being dominated by Xilinx) is that we can drop in a formula written in a sequential programming language and get it rolled out into hardware. There's some massive intelligence behind that, like attempts to detect known elements and somehow infer a clock/area optimized netlist of primitives. However, this is not what we intended: HLS should give maximum control over what's created during synthesis from a high level perspective. This can basically be covered using these mechanisms: depending on the target, the same hardware description rolls out into target specific elements. We can leave the optimum inference to the synthesis/mapper intelligence, or we can decide to impose our own design rules and, for instance, infer specific (truncated or intentionally fuzzy) multipliers for certain AI operations.

HDL issues

The ongoing, everlasting and omnipresent discussion about whether hardware design language authoring is 'programming' or 'describing functionality' has obviously reached a certain annoyance level in the communities, which often split themselves into hardware and software developers. The Xilinx HLS marketing does not take the tension out of the situation, as it suggests that a software developer can simply drop Matlab or C code into a tool and get the right thing out of it. I believe this is the wrong approach, as it will never bridge the gap between software and hardware development.

Eventually, let’s face it: VHDL *is* a programming language, as it was designed as one: to model behaviour. It is not a description language like XML, as it gives little means to specifically describe the wanted result in an abstracted way. I.e. unlike XML, practise shows that there is no way to self-verify or emit towards synthesis according to specific design rules or makes it very hard to extend using own datatypes, although the architecture for derivation is implemented.

On the other hand, it is a massive trial and error game to determine which construct can be processed by which tool, let alone anything to do with simulation. The VHDL language complexity has led many commercial developments to poor or incorrect language support in the past.

So, back to HLS: why does Python, as a programming language, get us there? Answer: its built-in features:

  • Meta programming (overriding of operations)
  • Generator concepts (yield), etc.
  • Self-parsing: AST analysis, Transpilation
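
The last point deserves a tiny demonstration: Python parses its own source with the standard library ast module, which is exactly what the AST-translating kernels exploit. Here, a myHDL-style statement is taken apart:

```python
import ast

# parse a myHDL-style statement into an abstract syntax tree
tree = ast.parse("counter.next = counter + 1")
assign = tree.body[0]

# left hand side: attribute access ('.next'); right hand side: a binary op
target = assign.targets[0]
print(target.attr)                     # next
print(type(assign.value).__name__)     # BinOp
print(type(assign.value.op).__name__)  # Add
```

A transpiling kernel walks exactly these nodes and emits, say, a VHDL signal assignment or an RTLIL $add cell instead of executing the statement.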

Using generator constructs, we are still programming. But we are led to a different way of thinking with respect to creating elements, and the tool itself (the HLS kernel) will tell us early what is allowed for synthesis and what only for simulation. That way, a Python coder is taught how to describe, effectively.

To be continued in Part 2.


Hardware generation and simulation in Python

There are various approaches to Python HDLs, some more suited to Python developers than to HDL developers. They all have one thing in common: the very refined test bench capabilities of the Python ecosystem, which allow you to connect almost anything to everything. Of all these Python dialects, myHDL turns out to be the most readable and sustainable language for hardware development. Let me outline a few more properties:

  • Has a built-in simulator (limited to defined values)
  • Converts a design into flattened Verilog or VHDL
  • Uses a sophisticated 'wire' concept for integer arithmetic

In a previous post, I mentioned experiments with yosys and its Python API. Not much has changed on that front, as the myHDL-kernel-based approach turned out to be unmaintainable for various reasons. Plus, the myHDL kernel has a basic limitation due to its AST translation into target HDLs that impedes code reusability and easy extensibility with custom signal types.

For experiments with higher level synthesis, such as automated pipeline unrolling or matrix multiplications, a different approach was taken. This ‘kernel’, if you will, can handle the legacy myHDL data types plus derived extensions. This works as follows:

  • Front end language (myHDL) is slightly AST-translated into a different internal representation language (‘myIRL’)
  • The myIRL representation is executed within a target context to generate logic as:
    • VHDL (synthesizable)
    • RTL (via pyosys target)
    • mangled Verilog (via yosys)

Now the big omnipresent question is: Does that logic perform right? How to verify?

  • The VHDL output (hierarchical modules) is imported into the GHDL simulator and can be driven by a test bench. The test bench is also generated as a VHDL module. Co-Simulation support is currently not provided.
  • The Verilog output can be simulated with iverilog, however, Co-Simulation is not enabled for the time being for this target
  • The RTL representation is translated to C++ via the CXXRTL back end and is co-simulated against the Python test bench. Note that support for signal events is rudimentary. CXXRTL targets speedy execution with defined values (no 'X' and 'U')

Instead of using classic documentation frameworks, the strategy again was to use Jupyter Notebooks running in a Jupyter Lab environment. Again, the Binder technology enables us to run this in the cloud without the requirement to install a specific Linux environment. The advantages:

  • Auto-Testing functionality for notebooks in a reference Docker environment
  • Reduced overhead for creating minimum working examples or error cases

This Binder is launched via the button below.

[Launch button for myhdl emulation demos]

Overview of functionality:

  • Generation of hardware as RTL or VHDL
  • Simulation (GHDL, rudimentary CXXRTL)
  • RTL display, output of waveforms
  • Application examples:
    • Generators (CRC, Gray-Counter, …)
    • Pipeline and vector operations
    • Extension types (SoC register map generation, etc.)

Yosys synthesis and target architectures

The OpenSource yosys tool finally allows dropping a reference tool chain flow into the cloud without licensing issues. This is particularly interesting for new, sustainable FPGA architectures. A few architectures have been under scrutiny for 'dry dock' synthesis without actually having hardware.

In particular, a reference SoC environment (MaSoCist) was dropped into the make flow for various target architectures to see:

  • How much logic is used
  • If synthesis translates into the correct primitives
  • If the entire mapped output simulates correctly with different simulators

The latter is a huge task that could only be somewhat automated using Python. Therefore, the entire MaSoCist SoC builder will slowly migrate towards a Python based architecture.

More documentation is expected, in particular about several target architectures.

As an example, a synthesis and mapping step for a multiplier:


As always with educational software, some scenarios don’t play. The restrictions in place for this release:

  • Variable usage in HDL not supported
  • Custom generators, such as Partial assignments (p(1 downto 0) <= s) or vector operations not supported in RTLIL
  • Limited support for @block interfaces
  • Thus: No HLS alike library support through direct synthesis (yet)

Exploring CXXRTL

CXXRTL by @whitequark is a relatively fresh simulator backend for yosys, creating heavily template-decorated C++ code that compiles into a binary executable simulation model. It was found to perform quite well as a cythonized (compiled Python) back end, driven from a thin simulator API integrated into the MyIRL library.

Since it requires its own driver from the top, a thin simulator API built on top of the myIRL library takes care of the event scheduling, unlike GHDL or Icarus Verilog, which handle delays and delta cycling for complex combinational units themselves. It is therefore still regarded as a 'know thy innards' tool. A few more benefits:

  • Allows distributing functional simulation models as executables, without the requirement to publish the source
  • Permits model-in-the-loop scenarios to integrate external simulators as black boxes
  • Eventually aids in mixed language (VHDL, Verilog, RTL) and many-level model simulations

There are also drawbacks: like the MyHDL simulator, CXXRTL is not aware of 'U' (uninitialized) and 'X' (undefined) values; it knows 0 and 1 signals only. It is therefore not suitable for a full trace of your ASIC's reset circuitry without workarounds. Plus, CXXRTL only processes synthesizable code and does not provide the delay handling necessary for post place and route simulation.
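
To see what gets lost with a two-valued simulator, here is a minimal four-valued AND in plain Python (a simplified sketch of the std_logic resolution rules, reduced to 'U', 'X', '0', '1'): a net that is still uninitialized stays visibly 'U' downstream, whereas a two-valued simulator silently settles on 0 or 1.

```python
def and4(a, b):
    """AND over 'U', 'X', '0', '1' (simplified from the std_logic rules)."""
    if a == '0' or b == '0':
        return '0'              # a hard zero dominates any unknown
    if a == '1' and b == '1':
        return '1'
    if 'U' in (a, b):
        return 'U'              # uninitialized propagates
    return 'X'                  # otherwise: unknown

print(and4('0', 'U'))   # 0
print(and4('1', 'U'))   # U -- an uninitialized reset shows up downstream
print(and4('1', 'X'))   # X
```

This is why tracing an ASIC reset tree is a job for a four-valued (or nine-valued) simulator, not for CXXRTL without workarounds.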

Co-Simulation: How does this play with MyHDL syntax?

This is where it gets complicated. MyHDL allows a subset of Python code to be translated to Verilog or VHDL, such that you can write simple test benches for verification that run entirely in the target language.

Then there’s the co-simulation option, where native Python code (featured by the myHDL ‘simulator kernel’, if you will) runs alongside a compiled simulation model of your hardware. The simplest setup is basically a circuit or entire virtual board with only a virtual reset and clock stimulus. Any other simulation model, like as UART, a SPI flash, etc. can be connected to such a simulation with more or less effort. The big issue: Who is producing the event, who is consuming it? This leads us back to the infamous master and slave topic (I am aware it’s got a connotation).

The de-facto standards aiding us so far in the simulator interfacing ecosystem:

  • VHDL: VHPI, VHDLDIRECT, specific GHDL implementations
  • Verilog/mixed: VPI, FLI
  • QEMU as CPU emulation coupled to hardware models

The easiest to handle may be the VPI transaction layer, which is already present for myHDL. In this implementation, a pipe is used to send signal events to the simulation, and results are read back through another, reverse path. Here, myHDL plays a clear master role. For GHDL, an asynchronous concept was explored via my ghdlex library, allowing distributed co-simulation across networks where master and slave relationships become fuzzy.
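
The pipe mechanism can be modelled in a few lines of plain Python (a toy model, not the actual VPI layer): the master writes a signal event into one pipe and reads the response from another; issuing the request is what makes it the master.

```python
import os

# toy transaction layer: one pipe per direction
req_r, req_w = os.pipe()   # master -> simulation
rsp_r, rsp_w = os.pipe()   # simulation -> master

def master_send(value):
    os.write(req_w, bytes([value]))

def simulated_hardware():
    """Stand-in for the compiled simulation: invert the incoming 'signal'."""
    value = os.read(req_r, 1)[0]
    os.write(rsp_w, bytes([value ^ 1]))

# one co-simulation transaction: the master drives, the hardware responds
master_send(1)
simulated_hardware()
result = os.read(rsp_r, 1)[0]
print(result)   # 0
```

In the real setup, the 'hardware' side is a separate simulator process and the bytes carry encoded signal events per delta cycle, but the request/response roles are the same.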

Finally, the CXXRTL method provides the most flexibility, as we can add blackbox hardware that does just about anything. We have full control here over a simple C++ layer without any overhead induced through pipes. The binding for Python can easily be created using Cython code. However, it requires clearly separating test bench code from the hardware implementation.

This implies:

  • Test bench must be written in myHDL syntax style and needs to use specific simulation signal classes
  • Re-use of extended bulk signal/container classes is restricted
  • Hardware description can be in any syntax or intermediate representation, as well as blackbox Verilog or VHDL modules

Links and further documentation

As usual in the quickly moving open source world, documentation is sparse and solutions on top of it are prone to become orphanware once the one-man bands retire or lose interest. However, I tend to rate the risk as very low in this case. Useful links so far (hopefully, more will be found soon):


  • Recommended for academical or private/experimental use only
  • The pyosys API (Python wrapper for libyosys) may at this moment crash without warning or yield misleading feedback. There’s not much being done about this now as updates from the yosys development are expected.
  • Therefore, jupyter notebooks may crash and you may lose your input/data
  • No liability taken!

MyHDL and (p)yosys: direct synthesis using Python

MyHDL as of now provides a few powerful conversion features through AST (abstract syntax tree) parsing. So Python code modules can be parsed ‘inline’ and emit VHDL, Verilog, or … anything you write code for.

Using the pyosys API, we can create true hardware elements in the yosys-native internal representation (RTLIL).

You can play with this in the browser without installing software by clicking the button below. Note that this free service might not always work, due to resource load and server availability.

Note: Maintenance for this repository is fading out. It may no longer work using the Binder service.

Simple counter example

Let’s have a look at a MyHDL code snippet. This is a simple counter, that increments when ce is true (high). When it hits a certain value, the dout output is asserted to a specific value.

def test_counter(clk, ce, reset, dout, debug):
    counter = Signal(modbv(0)[8:])
    d = Signal(intbv(3)[2:])

    @always_seq(clk.posedge, reset)
    def worker():
        if ce:
            counter.next = counter + 1

    @always_comb
    def assign():
        if counter == 14:
            debug.next = 1
            dout.next = 1
        elif counter == 16:
            debug.next = 1
            dout.next = 3
        else:
            debug.next = 0
            dout.next = 0

    return instances()

When we run a simple test bench, which provides a clock signal clk and some pseudo-random assertion of the ce pin, we get this:

[Waveform of simulation]

Now, how do we pass this simple logic on to yosys for synthesis?

Pyosys python wrapper

The pyosys module is a generated Python wrapper, covering almost all functionality of the RTLIL yosys API. In short, it allows us to instantiate a design and hardware modules and add hardware primitives. It's like wiring up 74xx TTL logic chips, but in an abstract and virtual way.

Means, we don’t have to create Verilog or VHDL from Python and run it through the classic yosys passes, we can emit synthesizeable structures directly from the self-parsing HDL.

Way to synthesis

Now, how do we get to synthesizable hardware, and how can we control it?

We have a signal representation after running the analysis routines of MyHDL. Just as we used to convert to the desired transfer language, we now convert into a design, like:

design = yshelper.Design("test_counter")
a = test_counter(clk, ce, reset, dout, debug)
a.convert("yosys_module", design, name="top", trace=True)
design.display_rtl() # Show dot graph

The yosys-specific convert function, as of now, calls the pyosys interface to populate a design with logic and translates the pre-analysed MyHDL signals into the yosys Wire and Signal objects that are finally needed to create the fully functional chain of the logic zoo. The powerful 'dot' output allows us to look at what's being created from the above counter example (right-click on the image and choose 'view' to see it in full size):

[Schematic of synthesis, first output stage]

You might recognize the primitives from the hardware description. A compare node if counter == 14 translates directly to the $eq primitive with ID $2. A data flip flop ($dff), however, is generated somewhat implicitly by the @always_seq decorator from the output of a multiplexer. And note: this $dff is only emitted because we have declared the reset signal as synchronous in the top level definition. Otherwise, a specific asynchronous-reset $adff would be instantiated.

The multiplexers finally are those nasty omnipresent elements that route signals or represent decisions made upon a state variable, etc.

You can see a $mux instantiated for the reset circuit of the worker() function, appended to another $mux taking the decision for the ce pin whether to keep the counter at its present value or whether to increment it ($11). The $pmux units are parallel editions that cover multiple cases of an input signal. Together with the $eq elements, they actually convert well to a lookup table, the actual basic hardware element of the FPGA.
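
The inferred chain can be written down as a one-primitive-per-line software model (plain Python, assuming the synchronous reset from the example above): the ce mux selects increment or hold, the reset mux overrides it, and the $dff stores the result.

```python
def dff_cycle(q, ce, reset):
    """One clock cycle of the inferred chain for an 8 bit counter:
    $mux (ce) -> $mux (reset) -> $dff."""
    d = q + 1 if ce else q      # ce mux: increment or hold ($11 in the graph)
    d = 0 if reset else d       # reset mux: the synchronous reset wins
    return d & 0xFF             # the $dff latches the selected value

q = 0
for ce, reset in [(1, 1), (1, 0), (0, 0), (1, 0)]:
    q = dff_cycle(q, ce, reset)
print(q)   # 2
```

With an asynchronous reset, the reset path would bypass the mux chain and act on the flip flop directly, which is exactly the $dff versus $adff distinction.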


The standard VHDL/Verilog conversion flattens out the entire hierarchy before conversion. This approach avoids that by maintaining a wiring map between the current module implementation and the calling parent. Since @block implementations are considered smart and can have arbitrary types of parameters (not just signals), this is tricky: we cannot just blindly instance a cell for a module and wire everything up later, as it might be incompatible. So we determine a priori, by a 'signature key', whether a @block instance is compatible with a previous instance of the same implementation.

Each unique module key thus causes inference of the implementation as a user-defined cell. The above dot schematic displays the top level module, instancing a counter and two LFSR8 cells with different startup values and dynamic/static enable.
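
A hedged sketch of such a signature key (all names here are made up for illustration, not the actual implementation): combine the implementation's identity with every non-signal parameter, so two instances unify into one cell exactly when the generated logic would be identical.

```python
class SignalStub:
    """Stand-in for a signal object; signals get wired up, not baked in."""

def block_signature(func, params):
    """Hypothetical signature key for a @block instance: the implementation
    name plus every non-signal parameter that changes the generated logic."""
    static = tuple(sorted((k, v) for k, v in params.items()
                          if not isinstance(v, SignalStub)))
    return (func.__name__, static)

def lfsr8(clk, en, START_VALUE=0):
    pass   # implementation elided; only the signature matters here

clk, en = SignalStub(), SignalStub()
k1 = block_signature(lfsr8, dict(clk=clk, en=en, START_VALUE=1))
k2 = block_signature(lfsr8, dict(clk=clk, en=en, START_VALUE=1))
k3 = block_signature(lfsr8, dict(clk=clk, en=en, START_VALUE=0xFA))
print(k1 == k2)   # True:  second instance reuses the cell
print(k1 == k3)   # False: a new user-defined cell is inferred
```

Whether a parameter like START_VALUE really forces a new cell depends on whether it ends up as wiring or as baked-in logic; that decision is what the real signature key encodes.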

Black boxes

When instancing black box modules or cells directly from MyHDL, you previously had to create a wrapper using the ugly vhdl_code or verilog_code attribute hacks. This can be a very tedious process when you have to infer vendor-provided cells. You could also delegate this job to the yosys mapper. The following snippet demonstrates an implementation of a black box: the inst instance adds the simulation for this black box; however, the simulation is not synthesized. Instead, the @synthesis implementation is applied during synthesis. Note that this can be conditional upon the specified USE_CE in this example.

def MY_BLACKBOX(a, b, USE_CE = False):
    "Blackbox description"
    inst = simulate_MY_BLACKBOX(a, b)

    def implementation(module, interface):
        name = "MY_BLACKBOX_inst"  # instance name (elided in the original)
        c = module.addCell(yshelper.ID(name),
            yshelper.ID("MY_BLACKBOX"))  # cell type (elided in the original)
        port_clk = interface.addWire(a.clk)
        c.setPort(yshelper.PID("CLK"), port_clk)

        if USE_CE:
            port_en = module.addSignal(None, 1)
            in_en = interface.addWire(a.ce)
            in_we = interface.addWire(a.we)
            and_inst = module.addAnd(yshelper.ID(name + "_ce"),
                in_en, in_we, port_en)
        else:
            port_en = interface.addWire(a.we)

        c.setPort(yshelper.PID("EN"), port_en)

    return inst, implementation

This also allows creating very thin wrappers, using wrapper factories, for architecture-specific black boxes. In particular, we can also use this mechanism for extended High Level Synthesis (HLS) constructs.


Now, how would we verify that the synthesized output from the MyHDL snippet works correctly? We could do that using yosys' formal verification workflow (SymbiYosys), but MyHDL already provides some framework: co-simulation from within a Python script against a known working reference, like a Verilog simulation.

These verification tests are run automatically upon check-in (continuous integration; see also the docker container hackfin/myhdl_testing:yosys).

An overview of the verification methods is given inside the above binder. Note: the myhdl_testing:yosys container is not up to date and is currently used for a first-stage container build only.

Functionality embedded in the container:

  • Functional MyHDL simulation of the unit under test with random stimulation
  • Generation of Verilog code of the synthesized result
  • Comparison of the MyHDL model output against the Verilog simulation output by the cycle-synchronous Co-Simulation functionality of MyHDL

There are some advantages to this approach:

  • We can verify the basic correctness of direct Python HDL to yosys synthesis (provided that we trust yosys and our simulator tools)
  • We can match against a known good reference of a Verilog simulator (icarus verilog) by emitting Verilog code via MyHDL
  • Likewise, we can also verify against emitted VHDL code (currently not enabled)

Synthesis for target hardware (ECP5 Versa)

Currently, only a few primitives are supported to get a design synthesized for the Lattice ECP5 architecture, in particular the Versa ECP5 development kit. The following instructions are specific to a Linux docker environment.

First, connect the board to your PC and make sure the permissions are set to access the USB device. Then start the docker container locally as follows:

docker run -it --rm --device=/dev/bus/usb -p 8888:8888 hackfin/myhdl_testing:jupyosys jupyter notebook --ip --no-browser

Then navigate to this link (you will have to enter the token printed out on the console after starting the container):

Then you can synthesize, map, run PnR and download to the target in one go, see the ECP5 specific examples in the playground.

Note: when reconnecting the FPGA board to Power or USB, it may be necessary to restart the container.

Status, further development

MyHDL main development has turned out to have some serious issues:

  • Numerous bugs and piling up pull requests
  • Architectural issues (AST parsing versus execution/generation), problematic internals for hierarchy preservation
  • Slow execution for complex hierarchies
  • No closed source modules feasible

Further experiments with a different approach have clarified the roadmap: all yosys-related direct synthesis support is migrating to the new 'v2we' kernel, which provides some MyHDL emulation. For the time being, the strategy is to go back to VHDL transfer and verification using GHDL to make sure the concept is sound and robust. Link to prototype examples: