LabVIEW FPGA Idea Exchange

I work with malleable VIs a lot on desktop. They can save a lot of boilerplate code. From my understanding, the entire feature has no effect once code is compiled: the VI is converted to an "instance" VI with the proper data types and inlined into the caller. This doesn't seem like it should be problematic to use on an FPGA target.

 

Here are a few FPGA-specific wins that I think VIM support would deliver:

  • Array size adaptation
    • One can imagine writing a "parallel for loop" VIM that simply expands out N parallel iterations for array sizes 0-N
  • Fixed point configuration adaptation
    • Without VIM, if you write a VI that operates on fixed-point numbers, inputs are auto-coerced to the fixed-point configuration on the VI's controls; a VIM adapts to the caller's configuration instead (see the rough desktop analogy below)
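
As a rough desktop-language analogy for the fixed-point case (my illustration, not NI's implementation; NumPy dtypes stand in for FXP configurations):

import numpy as np

# Without VIM: the subVI's control pins down one representation, so the
# caller's data is coerced on the way in, as with a fixed FXP configuration.
def scale_fixed(x):
    return x.astype(np.int16) * 2

# With VIM: each "instance" adapts to whatever type the caller wires in.
def scale_malleable(x):
    return x * 2

a = np.array([1000], dtype=np.int32)
print(scale_fixed(a).dtype)      # int16, coerced to the control's type
print(scale_malleable(a).dtype)  # int32, adapted to the caller's type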

 

Hi,

 

I'd like to request that NI enable the use of Xilinx XPM macros in Component-Level IP (CLIP), Socketed CLIP, and custom user IP:

 

Xilinx recommends XPM macros for Clock Domain Crossing (CDC) and FIFO / memory instantiation because they are more easily reconfigured and managed than classic IP and (especially in the case of CDCs) auto-generate timing constraints, which makes it much easier to arrive at a correctly constrained design.

 

XPM Macros are available for all current Xilinx/AMD devices (7-Series to Versal):

XPM Macros in 7-Series Libraries:

https://docs.amd.com/r/2021.1-English/ug953-vivado-7series-libraries/Xilinx-Parameterized-Macros 

XPM Macros in UltraScale Libraries:

https://docs.amd.com/r/en-US/ug974-vivado-ultrascale-libraries/Xilinx-Parameterized-Macros 

XPM Macros in Versal (Premium/AI) Libraries:

https://docs.amd.com/r/en-US/ug1344-versal-architecture-libraries/Xilinx-Parameterized-Macros 

https://docs.amd.com/r/en-US/ug1485-versal-architecture-premium-series-libraries/Xilinx-Parameterized-Macros 

https://docs.amd.com/r/en-US/ug1353-versal-architecture-ai-libraries/Xilinx-Parameterized-Macros 

 

Unfortunately, XPM macros are only implemented in SystemVerilog at the lowest level, although VHDL instantiation templates do exist (see the documentation above).

AMD/Xilinx seems to have no plans to add pure VHDL XPM Macros:

https://support.xilinx.com/s/question/0D52E00006hpeAySAI/no-vhdl-simulation-models-for-xpms-?language=en_US 

Excerpt:

graces (AMD) on 2017-12-07
There's no plan to support VHDL model for XPM in future releases.

 

graces (AMD) on 2017-12-14

The Xilinx direction for any new development is in Verilog. It can be overridden with business justification though.

 

If you have a strong demand, I'd suggest that you open a Service Request with Technical Support and get a CR filed. The SR will be linked to the CR. If quite a few SRs are linked to the CR, the chance to get it implemented will be larger.

 

Since NI does not document that the FPGA Module does not play nicely with CLIP that uses XPM macros, I had to find out for myself what is needed to get them to work:

My preliminary result is that I can get a bitfile if I simply patch the xelab call that the FPGA Module performs, from

xelab.bat -m64 xil_defaultlib.conf12B308D38326465793B06F85282B8708 -L xil_defaultlib -L unisim -L unimacro -L xilinxcorelib -L secureip -snapshot my_clip_top_level_entity -dll -prj clipsyn.prj

to 

xelab.bat -m64 xil_defaultlib.conf12B308D38326465793B06F85282B8708 -L xil_defaultlib -L unisim -L unimacro -L xilinxcorelib -L secureip -L xpm -snapshot my_clip_top_level_entity xil_defaultlib.glbl -dll -prj clipsyn.prj

and add the line 

verilog xil_defaultlib "C:\NIFPGA\programs\Vivado2021_1\data\verilog\src\glbl.v"

to clipsyn.prj.

I'm using LabVIEW 2022 Q3 with the 2022 Q3 FPGA Module, targeting an sbRIO-9629's Artix-7 with the bundled Vivado 2021.1.

So it seems that all that is needed to enable the use of XPM macros in (Socketed) CLIP is a configuration option that simply extends the elaboration command arguments and the file list.
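
A sketch of how this patch could be automated today (my own workaround, not an NI-supported mechanism): it assumes the original xelab.bat has been renamed to xelab_orig.bat and that a one-line replacement xelab.bat forwards its arguments to this Python script.

# patch_xelab.py: inject the XPM library and the glbl module into the
# elaboration call, exactly as described above.
import subprocess
import sys

args = sys.argv[1:]

# Add "-L xpm" after the last existing "-L <library>" pair.
last = max(i for i, a in enumerate(args) if a == "-L")
args[last + 2:last + 2] = ["-L", "xpm"]

# Add the glbl module directly after the snapshot name.
args.insert(args.index("-snapshot") + 2, "xil_defaultlib.glbl")

# Register glbl.v in the project file passed via "-prj" (path as shipped
# with the bundled Vivado 2021.1).
prj = args[args.index("-prj") + 1]
glbl_line = 'verilog xil_defaultlib "C:\\NIFPGA\\programs\\Vivado2021_1\\data\\verilog\\src\\glbl.v"'
with open(prj) as f:
    contents = f.read()
if "glbl.v" not in contents:
    with open(prj, "a") as f:
        f.write(glbl_line + "\n")

# Hand off to the real elaborator.
sys.exit(subprocess.call(["xelab_orig.bat"] + args))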

 

If NI decides not to support XPM macros for whatever reason, this should be documented very prominently in the CLIP sections of the user manuals. Also, NI should consider providing their own macros with the same functionality in this case, at least for CDCs.

 

Using a netlist or user IP to wrap the XPM CDC macros does not work either, since the design constraints are added by Tcl files (partly only as timing exceptions) that are not retained in the netlist or IP; see

https://support.xilinx.com/s/question/0D54U00008aHc9ESAS/handleretainexport-xpm-cdc-macro-timing-constraints-for-ooc-design-exported-to-a-netlist-that-is-then-used-and-implemented-with-another-design?language=en_US 

So there is currently no workaround for using a design that depends on them.

 

Thanks for considering this.

It would be nice to have control over clock-independent assignment of signals from I/O nodes (without synchronisation registers), without specifically having to use a clock for the connection.

 

Intaris_0-1719315020552.png

 

 

Image says it all.

 

We have tried using a static assignment on the top-level diagram, without using an SCTL, but it appears that does not work. The example links signals within a single CLIP, but the idea is aimed at making connections between multiple different CLIPs without the need for a specific VHDL wrapper for each individual configuration.

Visually detecting the presence of CDC (Clock Domain Crossing) in LabVIEW code is only semi-intuitive. You have to check the clock configured for the SCTL and/or follow the wire to the clock constant or control in order to understand which clock domain the code is running in.

 

I suggest having the option to automatically colour the background of the SCTL according to the clock being used.

Obviously, this won't work well across VI borders, but even the option to have it visible on the same diagram would be a nice step towards better visibility of this really important part of LabVIEW FPGA programming.

 

An option to actually couple a colour with any given clock constant / control for SCTLs would be an addition I personally would very much appreciate.

 

Intaris_0-1716558682629.png

 

Yes, these colours are probably a bit extreme, but given that I'm dealing with so many individual processes, it is preferable to constantly following all the wires or investigating all of the SCTL settings.

 

Allows for ILA and other debugging capabilities.

 

When using FPGA code on a different target, you can copy/paste all the User Defined Variables (UDVs) to the new target, but you must manually re-link every UDV in the FPGA code to the UDV on the new target, which is a very tedious process when there are more than a handful of UDVs. Having the UDVs relative-referenced would make the FPGA code much more portable. On a cRIO I always use FIFOs, but UDVs are the only method of getting data out of an EtherCAT chassis.

 

Malleable FPGA VIs import into the Desktop Execution Node with the same datatype as the FPGA VI's "malleable terminal".  The Desktop Execution Node does not mutate the input type to match the "malleable terminal" of the FPGA VI.  As a result, host VI test benches cannot iterate Type Specialization Structure cases in the malleable FPGA VI.

 

The "anything" input to this Assert Structural Type Match node is an I16, which breaks this case against an I16, which is the "malleable terminal" of this VI.

 

PIE5669450_0-1687372970404.png

 

The Desktop Execution Node only sees the I16, and coerces other datatypes.

 

PIE5669450_1-1687373121899.png

 

IMO the compiled Type Specialization Structure case is a critical unit test, which depends on the data type of the control wired to the "malleable terminal", so this is a critical limitation of the Desktop Execution Node.

 

I think the intended use-case for the DEN is to hook into an FPGA VI that's in a loop, and if so, the inputs to the malleable VI are selected by the calling VI.  So, maybe this isn't a limitation of the DEN itself, but of the DEN workflow.

 

Thanks for your consideration,

 

Steve K

When simulating an FPGA VI, I use sampling probes (https://www.ni.com/docs/en-US/bundle/labview-fpga-module/page/lvfpgahelp/using_sampling_probe.html).

 

If I close the VI, the sampling probes are lost. This is a request to be able to save the sampling probes for a given FPGA VI.

In the past, customers used ChipScope (https://www.xilinx.com/products/intellectual-property/chipscope_ila.html) with LabVIEW FPGA, or the ChipScope Pro debugging break-out box (https://www.ni.com/en-us/support/downloads/tools-network/download.xilinx-chipscope-pro-debugging-break-out-box.html#372379).

 

This is a request to support the Xilinx Vivado Integrated Logic Analyzer (ILA) in the LabVIEW FPGA tool flow.

When dragging multiple I/O items from the project to a block diagram, it'd be great to have them show up as a single I/O node instead of multiple ones. To be backward compatible, it could be something like <Shift>-drag. This improves code readability by producing more compact code.

 

AndreasStark_0-1680632105226.png

 

Hello,

 

I recently had an issue configuring FPGA VIs to be run seamlessly by the same host code, because of incompatible interfaces between VIs. Here is the Configure FPGA VI Reference Interface:

 

Configure FPGA VI Reference Interface

 

And here are the two (different) interfaces, for FPGA 1.vi and FPGA 2.vi respectively, as seen in the context help. I just duplicated the VI for the example and modified the tab order - see under Registers:

 

Context Help for FPGA 1.vi / Context Help for FPGA 2.vi

 

I think it would be more consistent to have the same kind of display in the configure dialog, with the same control order. It's quite confusing to see no difference when configuring a reference, only to discover at run time that something is wrong (controls and indicators are separated, and then sorted alphabetically - I only set controls in my example code, no indicators). The context help over the dynamic reference finally helped me figure out what was wrong, but it took me a while...

 

Please note that the FPGA FIFOs have to be defined in the same order from one bitfile to another (if there are different targets or different projects). This is correctly reflected by the configuration window.

 

So I suggest a more coherent display of the control and indicator interfaces, one that corresponds to the effective interface (just as the context help does), i.e. the tab order of the controls under Registers.

 

Best regards,

We're starting development on an UltraScale device, the KU40, and I'm missing the option to utilise the DSP48E2 primitive as we have for the DSP48 and DSP48E1.

 

Intaris_0-1670929335347.png

 

NXG seemed to have it, but as we all know, NXG is no more.

 

https://www.ni.com/docs/en-US/bundle/labview-nxg-fpga-module-cdl-api-ref/page/dsp48e2.html

 

Can we please have a DSP48E2 primitive for LabVIEW FPGA? I would really like access to the new features supported, including the wider multiplier.

As somewhat of an opposite request to this idea:

https://forums.ni.com/t5/LabVIEW-FPGA-Idea-Exchange/Ability-to-define-datatype-of-Registers-FIFOs-from-code-without/idi-p/3123936

I would like to be able to display some pertinent configuration information for certain primitives in FPGA code.

 

Intaris_0-1663335955202.png

 

The ability to turn this display on and off just like a label would be very welcome indeed. I'd always have it visible.

 

I just spent two days tracking down a bug that ended up being an under-dimensioned Block RAM instantiation (combined with how BRAM indexing works: excess address bits are simply thrown away rather than the read/write index being coerced), something whose configuration is completely hidden from view. Why can't we have some visible element showing the size of a Block RAM and its datatype ("FXP" would do for any given FXP type)? The same goes for FIFOs: whether a FIFO is 16 elements or 8192 elements deep is a very important piece of information. And of course I mean only the primitives that instantiate the resources, not front-panel references to these items, even though showing the datatype of those would also be a very welcome addition.
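
For anyone unfamiliar with the indexing behaviour mentioned above, a small sketch of the failure mode (the numbers are my own illustration): with a 4096-deep BRAM the address is 12 bits wide, and an out-of-range index has its upper bits silently discarded:

DEPTH = 4096                   # 2**12 elements -> 12 address bits
ADDR_MASK = DEPTH - 1

index = 5000                   # out of range for this BRAM
effective = index & ADDR_MASK  # upper bits thrown away, no coercion
print(effective)               # 904: the access silently lands elsewhere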

The project I'm currently working on involves a USRP-2954R with a small amount of FPGA programming (with the code running at 200 MS/s). I, of course, started by editing the USRP FPGA Streaming example code to achieve this.

 

The receiver code on the FPGA was edited to take the samples at 200 MS/s (without the usual decimation) and perform a complex multiplication on them using the high-throughput math palette. After this, I decimate my samples (using the same decimator VI used in the Streaming example code) in a different loop in the main FPGA VI.

 

Unfortunately I keep receiving a timing error on compilation which, upon investigation, shows a large number of non-diagram components eating away at the loop time. What I don't understand is why a complex multiplication followed by a decimation would require that much time to execute.

 

I've tried using the pipelining feature of the complex multiplication and also various compilation strategies that optimize timing, but I'm not able to get past a 150 MHz clock rate.

 

I also checked the NI knowledge base page that talks about non-diagram components, but most of the issues it describes are about a long critical path, which in my case is not relevant because I literally perform only two operations in the loop concerned, along with the necessary pipelining operations.

 

I've included the image of the timing violation along with the VIs. Could anyone please let me know what's going on or if I'm doing something wrong?

 

PS: The compiler I'm using is Vivado 2019.1.1

 

PS 2 : I haven't started working on the host yet


Sometimes you might not care about the outputs under certain input conditions. "Not caring" can lead to significant improvements in optimization, and thus in resource utilization, but there's currently no way to tell LabVIEW that you "don't care". I propose we create new data types that support "don't care". It should start with booleans, but when you convert a boolean array to an integer, if one of the booleans is a "don't care", the numeric output then also becomes a "don't care", which is yet another data type we need.

 

Here's what a "don't care" might look like if the user doesn't care about the output when the input is 2:

dont care.png
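
To illustrate the optimization opportunity, here is a sketch using SymPy's two-level logic minimizer (my illustration, not LabVIEW): the output must be TRUE for input 3, and input 2 is declared a don't-care, which lets the minimizer drop one input bit entirely.

from sympy import symbols
from sympy.logic import SOPform

b1, b0 = symbols('b1 b0')

# The output must be TRUE for input 3 (binary 11).
minterms = [[1, 1]]

# Without a don't-care, the minimal logic needs both input bits.
print(SOPform([b1, b0], minterms))            # b0 & b1

# Marking input 2 (binary 10) as a don't-care drops b0 entirely.
print(SOPform([b1, b0], minterms, [[1, 0]]))  # b1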

Both resource and timing reports can be hard to read.

 

The timing report shows clocks that may not correspond to any SCTL.

The resource report shows items that are not easy to trace back to the resources on the FPGA itself.

 

This varies based on the target being compiled, for example the 7976 (Kintex-7), the 5785 (UltraScale), and the X410 (RFSoC).

 

There could be a knowledge base article on "How to understand LabVIEW FPGA compile results".

 

I know that for more detailed compile results we can export to Vivado, but for new users this can be very intimidating and a big distraction.

Vision FPGA has been around for maybe ten years now. Today's FPGAs have far more resources than their ancestors, but in Vision FPGA we can still only handle 8 pixels in a single cycle, which sometimes becomes a bottleneck.

 

Also, I think it would be good to add a signed 16-bit image data type to Vision FPGA. When we use U8 subtraction we lose information, since the output is only another U8 image. With an I16 data type, a lossless subtraction would be possible. Sometimes every bit counts.
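
A quick desktop illustration of the information loss (a sketch with NumPy standing in for the Vision FPGA palette; the analogy is mine):

import numpy as np

a = np.array([10], dtype=np.uint8)
b = np.array([200], dtype=np.uint8)

# U8 - U8 stays U8: the true result (-190) is unrepresentable and wraps.
print(a - b)                                    # [66]

# Widening to a signed 16-bit type first keeps the subtraction lossless.
print(a.astype(np.int16) - b.astype(np.int16))  # [-190]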

 

 

 

The functionality of the Update Firmware button in NI MAX (for R Series targets) is not clearly described in the NI documentation:

 

Juanjo_B_0-1636058231554.png

 

It would be nice to add a description of its functionality in the LabVIEW FPGA Module Help document.

 

I'm bringing back to life this long-lost idea: https://forums.ni.com/t5/LabVIEW-FPGA-Idea-Exchange/pre-and-post-build-options/idi-p/2364676

as I think there are lots of situations where pre/post build actions could be useful for FPGA.

 

For example, as suggested here: https://forums.ni.com/t5/LabVIEW/Populate-FPGA-array-with-values/m-p/4145330#M1195362

I have a large array of coefficients that I want to load from a file and then populate an array constant with. I have a script to do it; now I would like it to run automatically before the bitfile is compiled. For context, I want it to be a constant because controls take more resources and do not allow constant-folding optimization.

 

I already had another situation where I made a tool to auto-generate code in case structures based on specifications given by the developer. If, however, the developer forgets to run it before compiling, the FPGA VI won't work properly because the necessary code has not been generated.

 

More generally, I think scripting for FPGA is way underrated. As FPGA code is often tedious and repetitive to create (because optimization is prioritized over readability, and because of the lack of type genericity), scripting has great potential here. Allowing pre/post-build actions to run for FPGA bitfiles would surely take FPGA scripting to the next level!
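
As a sketch of what a pre-build action could boil down to (hedged: the hook mechanism itself is the feature being requested, the VI name and path are hypothetical placeholders, and LabVIEWCLI is assumed to be on PATH), the hook would just run the scripting VI before the code is handed to the compiler:

# pre_build.py: run a scripting VI via the LabVIEW Command Line
# Interface's RunVI operation before the bitfile compile starts.
import subprocess
import sys

result = subprocess.run([
    "LabVIEWCLI",
    "-OperationName", "RunVI",
    "-VIPath", r"C:\project\scripts\populate_coefficients.vi",
])
sys.exit(result.returncode)  # a non-zero exit should abort the build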

Even with simple examples, we experience errors when trying to run Instruction Framework-based LabVIEW FPGA VIs.

 

This is a blocker for our using Instruction Framework.