section5 slowly fading out

Time to take on new challenges: I have suspended all office operations for good, and we have successfully completed the migration to another country. This means the website will go private in the future, and only legacy support for the IP cores will be offered. In detail:

  • ICEbear Plus is no longer sold
  • netpp nodes are produced by a third party on request (sales suspended)
  • Video streaming IP: the dagobert MJPEG encoder will be maintained until end of 2024, for the Lattice ECP5 series only
  • Several open source projects will no longer be maintained and may be provided as Docker containers only. Links to repositories:

Note that support is no longer available for the open source projects; they are provided ‘as is’, possibly with unclear licensing.


ICEbear Plus end-of-life notice

Part sourcing has become a challenge for many hardware makers burdened with volatile, small production volumes.

Due to some sources being depleted until further notice, alternative routes had to be taken, which introduced quite a few extra costs and some yield issues that weren’t fun to watch (yes, we’ve come across fakes). Therefore, the last batch regarded as ‘sane’ is now being sold off, after ~17 years of service.

Unfortunately, for customers without a particular service contract, this means:

  • No more replacement units available
  • No more software updates

Also, and obviously, these last units were produced on an MOQ basis and as PCB only; formally, they are sold as a component or replacement part, since there is currently no plan to keep units in stock to cover the warranty period.

This means that in a warranty case (which has occurred very rarely so far):

  • You return the broken unit for examination (valid until end of 2023)
  • It is likely to be repaired rather than replaced as a whole

Note that excess voltage (fried USB I/O) is not covered by the warranty.

Finally, I’d like to express my thanks to the faithful customers who have kept coming back every now and then. I’ve been glad to be of service, and (remembering famous literature): so long, and thanks for all the fish!


High level synthesis Part 1

Creating hardware elements in Python using its generator methods leads us to a few application scenarios that make complex development fun and less error prone. Let’s start with a simple use case.

Counter abstraction

Assume you have a complex design using a bunch of FIFOs and plenty of counters. Wait, what kind of counters? For debugging reasons, the first implementation is likely to begin with a description of a binary (ripple) counter, as long as no clock domain crossing is involved (we deal with this below). So you’d simulate, find it running correctly, and drop the design into the synthesizer toolchain, only to realize there is going to be a bottleneck with respect to speed or logic usage. Then one might recall that there is more than one way to count: an LFSR implementation, for instance, uses less logic than a ripple counter and cycles through some interval in which each value occurs exactly once.

So, having started out with a VHDL design, you’d have to identify every counter instance, change it into an LFSR, and deal with translating decimal start and stop values into their LFSR counterparts (note there is no closed-form mapping for this, unlike with a Gray code). Using the Counter abstraction from the cyrite library, this plays out nicer with some early planning.

The Counter class represents a Signal type that behaves like a usual signal and allows adding or subtracting a constant within a @genprocess notation or an @always function in MyHDL style. So we’d instantiate Counters instead of signals, but introduce no further abstraction; it will still look quite explicit:

c = Counter(SIZE=12, START_VALUE=0)   # behaves like a 12 bit signal
...

@always_seq(clk.posedge, reset)
def worker():
    if en:
        c.next = c + 1   # '+' is a counting method of the Counter class

However, when it comes to changing to a different counter implementation, we’d want to swap them all at once using one flag. So one would typically end up with a class design where the desired counter type is provided at the initialization stage, and the counter abstraction generates the rest. Inside the Python ‘factory’ class, we’d then spell:

class Unit(Module):
    def __init__(self):
        self.Counter = LFSRX   # the selected counter implementation
    ...
    @block_component
    def unit0(self, clk: ClkSignal, r: ResetSignal, a: Signal, b: Signal.Output):
        c = self.Counter(...)

        @always_seq(clk.posedge, r)
        def worker():
            ...

Now here comes the catch: the above worker process should not change, but the + 1 operation really only applies to a binary counter. Plus, a specific counter component such as an LFSR module will have to be inferred implicitly. Here’s where Python’s metaprogramming features kick in: by overriding the __add__ method, we can return a generator object creating HDL elements, and also apply rules that make sure only applicable counting steps are allowed (for instance, a Gray counter always counts up or down by one).
Behind this is a Composite class that provides mechanisms to combine hardware block implementations (hierarchically resolved) with a resulting signal.
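
To sketch the mechanism in plain Python (the class and attribute names below are made up for illustration; this is not the actual cyrite API):

class DeferredStep:
    """Stand-in for the generator object that would emit HDL elements."""
    def __init__(self, counter, step):
        self.counter, self.step = counter, step

class CounterBase:
    """Counter abstraction; subclasses restrict the allowed counting steps."""
    ALLOWED_STEPS = ()

    def __init__(self, SIZE, START_VALUE=0):
        self.size, self.val = SIZE, START_VALUE

    def __add__(self, step):
        # Check the counting rule first, then defer the hardware
        # generation instead of computing a plain value right away.
        if step not in self.ALLOWED_STEPS:
            raise ValueError("%s cannot count by %r"
                             % (type(self).__name__, step))
        return DeferredStep(self, step)

class GrayCounter(CounterBase):
    ALLOWED_STEPS = (1, -1)   # Gray: one up, one down

class LFSRCounter(CounterBase):
    ALLOWED_STEPS = (1,)      # LFSR: one up only

c = LFSRCounter(SIZE=12)
c + 1      # fine: returns the deferred step object
# c + 2    # would raise ValueError: an LFSR only counts by one

The worker process stays untouched; only the class bound to self.Counter decides what the + 1 eventually turns into.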

Here’s how they play together in detail (allow a few minutes at most for the Binder instance to start up):

https://mybinder.org/v2/gh/hackfin/myhdl.v2we/verilog?urlpath=lab/tree/examples/composite_classes.ipynb

When it comes to LFSRs, there are a few issues. Let’s recap:

  • LFSR generators are given a known good polynomial creating a maximum-length sequence of numbers (2**n − 1 states, where n is the number of register bits). See also LFSR reference polynomials.
  • The basic LFSR must not start in the all-zeros or all-ones state (depending on the implementation), as that state locks up the feedback
  • There is no direct function providing the LFSR-specific representation of a decimal value; it has to be found by iteration

In a FIFO scenario, you’ll usually require a counter covering all values in the interval [0, 2**n − 1]. Therefore, you have to generate an LFSR variant with an extra feedback term that includes the all-zeros and all-ones states. This is covered by the LFSRX implementation. A small software model below makes the trick tangible.
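
The following is a generic ‘de Bruijn’ style extension of a Fibonacci LFSR, shown as a software model for illustration; the actual LFSRX hardware implementation may differ in detail:

def lfsrx_step(state, n, taps):
    """One step of an LFSR extended to the full 2**n cycle.

    `taps` are the bit positions whose XOR forms the feedback of the
    underlying maximum-length LFSR. The extra term flips the feedback
    when all bits below the MSB are zero, which splices the all-zeros
    state into the sequence.
    """
    fb = 0
    for t in taps:
        fb ^= (state >> t) & 1
    if (state & ((1 << (n - 1)) - 1)) == 0:  # extra feedback term
        fb ^= 1
    return ((state << 1) | fb) & ((1 << n) - 1)

# 4 bit example with polynomial x^4 + x^3 + 1 (taps at bits 3 and 2):
n, taps = 4, (3, 2)
state, seen = 0, []
for _ in range(2 ** n):
    seen.append(state)
    state = lfsrx_step(state, n, taps)
assert len(set(seen)) == 2 ** n   # all 16 values occur exactly once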

Normally, you’ll have *in* and *out* counters that are compared against each other, so there is no need to determine the value of the n’th state. However, when generating a video signal this way, for instance, things are different, because a specific, configurable interval has to be accounted for. So we need a software extension that cycles the LFSR n times in order to get the corresponding value. This is viable for small n, but takes plenty of time for larger values.
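
Reusing lfsrx_step from the sketch above, such a brute-force mapping could look like this (again an illustrative sketch, not the cyrite API):

def lfsr_value_at(n_steps, n=4, taps=(3, 2), start=0):
    """Iterate the LFSR n_steps times from `start`.

    O(n_steps): fine for small intervals such as video timing values,
    impractical for large counters.
    """
    state = start
    for _ in range(n_steps):
        state = lfsrx_step(state, n, taps)
    return state

stop = lfsr_value_at(11)   # the LFSR representation of the decimal value 11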

The situation is even worse when a reverse lookup is required: then we’d have to store all values in a large table. Again, not an option for large n. The improved solution lies somewhere in between: a bit of storage and a bit of iterating, until the value is found.
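
One possible middle ground, sketched here under the same assumptions as above (this is my illustration, not necessarily what the library does), is a classic time/memory tradeoff: store every k-th state, then iterate forward from the queried value until a stored checkpoint is hit:

def build_checkpoints(n=4, taps=(3, 2), stride=4):
    """Map every `stride`-th LFSR state to its index."""
    table, state = {}, 0
    for i in range(2 ** n):
        if i % stride == 0:
            table[state] = i
        state = lfsrx_step(state, n, taps)
    return table

def lfsr_index_of(value, table, n=4, taps=(3, 2)):
    """Reverse lookup: walk forward until a checkpoint is found;
    at most stride - 1 iterations are needed."""
    state, steps = value, 0
    while state not in table:
        state = lfsrx_step(state, n, taps)
        steps += 1
    return (table[state] - steps) % (2 ** n)

table = build_checkpoints()
assert lfsr_index_of(lfsr_value_at(11), table) == 11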

Assuming this correspondence mapping is solved, we can finally tackle the comparison against such a value: since we have already introduced plenty of abstraction, customizing the __eq__ special method creates no further pain. We check against an integer and translate it ad hoc into the corresponding state’s value.
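
With the helpers above in place, the __eq__ override is little more than a wrapper (again a behavioural sketch; the real signal class would emit comparison logic for synthesis instead):

class LFSRCompare:
    """Compare an LFSR state against a plain integer count."""
    def __init__(self, n=4, taps=(3, 2)):
        self.n, self.taps = n, taps
        self.state = 0   # current raw LFSR state

    def __eq__(self, other):
        if isinstance(other, int):
            # translate the decimal count ad hoc into its state value
            other = lfsr_value_at(other, self.n, self.taps)
        return self.state == other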

This way it is possible to abstract counter elements and swap them for one another, provided they support the counting methods (LFSR: one up only; Gray: one up, one down).

Synthesis

The question might arise: what’s that got to do with High Level Synthesis (HLS)?

Well, this is only Part 1. What we understand as HLS so far (the term being dominated by Xilinx) is that we can drop in a formula written in a sequential programming language and get it rolled out into hardware. There is some massive intelligence behind that, such as attempts to detect known elements and somehow infer a clock/area optimized netlist of primitives. However, this is not what we intended: HLS should give maximum control over what’s created during synthesis, from a high level perspective. This can basically be covered using the mechanisms above: depending on the target, the same hardware description rolls out into target specific elements. We can leave the optimum inference to the synthesis/mapper intelligence, or we can decide to impose our own design rules and, for instance, infer specific (truncated or intentionally fuzzy) multipliers for certain AI operations.

HDL issues

The ongoing, everlasting and omnipresent discussion about whether hardware design language authoring is ‘programming’ or ‘describing functionality’ has obviously reached a certain annoyance level in communities that often split themselves into hardware and software developers. The Xilinx HLS marketing does not take the tension out of the situation, as it suggests that a software developer can simply drop Matlab or C code into a tool and get the right thing out of it. I believe this is the wrong approach, as it will never bridge the gap between software and hardware development.

Eventually, let’s face it: VHDL *is* a programming language, because it was designed as one: to model behaviour. It is not a description language like XML, as it gives little means to describe the wanted result in an abstract way. That is, unlike XML, practice shows there is no way to self-verify a design or to emit towards synthesis according to specific design rules, and the language makes it very hard to extend with your own data types, although the architecture for derivation is in place.

On the other hand, it is a massive trial-and-error game to determine which construct can be processed by which tool, let alone anything to do with simulation. The complexity of the VHDL language has led many commercial developments to poor or incorrect language support in the past.

So, back to HLS: why does Python, as a programming language, get us there? Answer: its built-in features:

  • Metaprogramming (operator overriding)
  • Generator concepts (yield), etc.
  • Self-parsing: AST analysis, Transpilation

Using generator constructs, we are still programming. But we are led to a different way of thinking with respect to creating elements, and the tool itself (the HLS kernel) will tell us early on what is allowed for synthesis and what is for simulation only. That way, a Python coder is taught how to describe, effectively.
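
As a tiny demonstration of the self-parsing point: Python can retrieve and parse its own source via the standard library, which is what transpiling approaches build on (the worker function below is an arbitrary stand-in, never actually executed):

import ast, inspect, textwrap

def worker():
    c.next = c + 1   # hardware-style statement to be transpiled

tree = ast.parse(textwrap.dedent(inspect.getsource(worker)))
print(ast.dump(tree, indent=2))   # the AST a HDL transpiler would walk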

To be continued in Part 2...