LabVIEW FPGA Idea Exchange

Improper use of Global Variables in an SCTL causes compile error 61056.

 

Currently, the user is not alerted to this error until a considerable amount of compile time has already elapsed.

Please include a check in LabVIEW for improper use and alert the user before compiling.

 

*Created for service request per customer recommendation.

Many times I create a new FPGA VI to run from the same project and it needs an extra memory block or maybe a new I/O pin, so I add it to the project for that new VI. Meanwhile, all my other FPGA VIs that don't have anything to do with that added piece will now need to recompile (very time-consuming).

 

It would be nice if those VIs did not need to recompile, since the new memory block, I/O, or clock is not used in the old, already-compiled VIs.

Can the memory initialization browse button be changed to behave like traditional browse buttons, rather than always defaulting to C:\Program Files\National Instruments\LabVIEW 2009\user.lib\?

 


Memory initialization is one of the more tedious aspects of LVFPGA coding.  A lot of my LVFPGA VIs have multiple memory elements that I need to access simultaneously for a given operation.  I've tried to streamline the initialization process by making all memory initialization VIs read from an init values file and populate the array indicator.  However, I now have to have multiple initialization VIs reading from different points in the same init values file.  If I could somehow get a parameter into the memory initialization VI, I could programmatically select where in the init values file to read from.  Here is how this could work:

 

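As a rough sketch of the idea: if the memory initialization VI accepted a parameter (say, an offset and a length, both hypothetical here), each memory element could pull its own slice from one shared init-values file. In illustrative Python terms:

```python
# Sketch of the proposed parameterized initialization. The file name,
# offsets, and lengths are made-up examples, not a LabVIEW FPGA API.
def read_init_values(path, offset, length):
    """Return `length` init values starting at element `offset`."""
    with open(path) as f:
        values = [int(line) for line in f if line.strip()]
    return values[offset:offset + length]

# Two memory elements initialized from different regions of one file:
mem_a_init = read_init_values("init_values.txt", offset=0, length=1024)
mem_b_init = read_init_values("init_values.txt", offset=1024, length=512)
```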

I do a lot of debugging by simply running my LVFPGA code in traditional LabVIEW test benches.  It's kind of a pain to have to open an FPGA-scoped version of my VIs just to configure the memory elements or just to view the lengths/data types.

 


I have several projects that use the same code modules.  FIFOs are used to communicate with these modules.  It would be really nice if I didn't have to keep recreating the same FIFOs for each new project just to be able to reuse my modules.  I suggest being able to save FIFOs (DMAs also) in an lvlib file, similar to project variables in Windows LabVIEW.

We're starting development on an UltraScale device, the KU40, and are missing the option to utilise the DSP48E2 primitive as we have for the DSP48 and DSP48E1.

 


 

NXG seemed to have it, but as we all know, NXG is no more.

 

https://www.ni.com/docs/en-US/bundle/labview-nxg-fpga-module-cdl-api-ref/page/dsp48e2.html

 

Can we please have a DSP48E2 primitive for LabVIEW FPGA? I would really like access to the new features supported, including the wider multiplier.

 

The functionality of the Update Firmware button in NI MAX (R Series targets) is not clearly described in the NI documentation.

 


 

It would be nice to add a description of its functionality in the LabVIEW FPGA Module Help document.

 

The rvi folder has automation tools for FPGA compiles.  These are not well documented, and there are no examples of how to use them.

 

Could additional info and examples be provided?

 

This is useful for projects where automated building helps continuous integration with tools such as Jenkins or Bamboo.
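As a hedged sketch of what such a CI step might look like, assuming a hypothetical "Compile FPGA.vi" wrapper built on the rvi automation VIs and launched through the LabVIEW Command Line Interface:

```python
import subprocess
import sys

# Hypothetical CI step: run a wrapper VI (built on the undocumented rvi
# automation VIs) via LabVIEWCLI. The wrapper VI and its path are
# assumptions for illustration, not shipping NI tools.
LABVIEW_CLI = r"C:\Program Files\National Instruments\Shared\LabVIEW CLI\LabVIEWCLI.exe"
BUILD_VI = r"C:\ci\Compile FPGA.vi"  # hypothetical wrapper around the rvi tools

result = subprocess.run(
    [LABVIEW_CLI, "-OperationName", "RunVI", "-VIPath", BUILD_VI],
    capture_output=True, text=True,
)
print(result.stdout)

# A nonzero exit code fails the Jenkins/Bamboo job.
sys.exit(result.returncode)
```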

The default interface for FIFOs is Timeout (https://zone.ni.com/reference/en-XX/help/371599P-01/lvfpgaconcepts/fpga_interface_options/).

 

I would prefer the default to be Handshaking.

Better visual indication of estimated and final timings in the compilation report.

 

Would it be possible to add some visual cues as to whether a given clock in an FPGA design has met timing or not? Maybe a background colour: green for good, red for bad?

 

[Image: color clocks.png]

Sometimes it's really hard to work out which clocks have met timing and which have not.

Hello,

Last year I had a bad experience when I tried to compile my old FPGA applications under Windows 10.

 

=> The Xilinx ISE FPGA compiler is no longer compatible with Windows 10.

 

Will something be done? I got no clear answer from NI support... supposedly it is a Xilinx problem!

 

The issue is that some products on the NI website are sold without clear information about this incompatibility with Windows 10!

 

Please add a clear, highlighted warning on the product pages for FPGA boards and CompactRIO targets to inform customers about the problem.

 

Thanks for your help.

 

Xilinx supports BRAM primitives (FIFOs and plain BRAM) with differing read and write port widths.  For some applications, the ability to write 2x 16-bit values to a FIFO in one loop and read 1x 16-bit value from the FIFO at double the clock rate in another loop can be very useful.

 

As it stands, the IPCore for such BRAM primitives, although present in LabVIEW FPGA, cannot be used without a CLIP (essentially making this aspect of the IPCore useless).

 

It would be cool if LV would expose the ability to have differing read and write port widths for BRAM.
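To make the request concrete, here is a small behavioural model (illustrative Python, not a LabVIEW FPGA API) of the asymmetric-width FIFO described above: one loop writes a packed 2x 16-bit value per cycle, another reads single 16-bit values at twice the clock rate.

```python
from collections import deque

class AsymmetricFifo:
    """Toy model of a FIFO with a 32-bit write port and a 16-bit read port."""

    def __init__(self):
        self._q = deque()  # holds 16-bit words

    def write_u32(self, packed):
        # One write in the 1x clock domain delivers two 16-bit words
        # (high word first in this model; real BRAMs make ordering configurable).
        self._q.append((packed >> 16) & 0xFFFF)
        self._q.append(packed & 0xFFFF)

    def read_u16(self):
        # One read per fast-clock cycle in the 2x domain.
        return self._q.popleft()

fifo = AsymmetricFifo()
fifo.write_u32(0xABCD1234)   # single write at the 1x clock
print(hex(fifo.read_u16()))  # 0xabcd, first read at the 2x clock
print(hex(fifo.read_u16()))  # 0x1234, second read at the 2x clock
```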

Why the "Stacked Sequence Structure" is still present in the FPGA palette? It has been removed from other targets palettes. I think it's better not be in the FPGA palette also.

FPGA registers would be more user-friendly if they could be placed via Quick Drop and were searchable (find callers, as has been suggested before). This would also be great for handshakes.

I don't like static resource definitions (FIFOs, block RAMs, or DMAs) in my projects.  I prefer to have the code declare such entities as they are required, because this makes scalability much easier to achieve.

For FIFOs, block RAM and so on this is no problem, but there are two things we currently cannot instantiate in code:

DMA Channels

Derived clocks

 

To deal with the first, why can't we define a DMA channel in the code?  When parsing the code before compiling, the presence of a DMA channel can be autodetected and added to the interface for the Bitfile. 

 

To decouple my code from static DMAs, I have actually started defining my core FPGA VIs as accepting FIFOs with Write functions (for DMAs to the host) or Read functions (for writing to the FPGA).  I can then, without having to change my project, wrap the FPGA VI in another VI which can pass in either a DMA channel (which unfortunately must be defined in the project) or a standard FIFO, which can then be used for debugging.

 

Please allow for the instantiation of DMA channels in code.

Currently, when you wire a fixed-point number into a case structure, it is coerced to the next-largest integer and you get a red coercion dot:

[Image: Allow fxd point integers in case structures.PNG]

This is unfortunate because you then have to have a default case. It would be nice if case structures could take the fixed-point type directly, since there isn't any of the ambiguity that exists with floating point. Using a smaller number of bits for the selector might also provide an optimization.
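For illustration, a short Python model of why fixed-point selectors are unambiguous (the word lengths below are arbitrary assumptions): every FXP value is an exact scaled integer, so the value set is finite and case matching can be exact, with no default case needed once all values are covered.

```python
# Unsigned FXP with word length 4, integer word length 2 (delta = 0.25);
# a toy model, not LabVIEW's implementation.
WORD_LENGTH = 4
INT_WORD_LENGTH = 2
DELTA = 2 ** (INT_WORD_LENGTH - WORD_LENGTH)  # 0.25

# The full value set is finite and enumerable: 2**4 = 16 distinct values.
values = [bits * DELTA for bits in range(2 ** WORD_LENGTH)]
print(values)  # [0.0, 0.25, 0.5, ..., 3.75]

# A case structure could match on the underlying integer bits exactly.
def bits_of(value):
    return round(value / DELTA) & (2 ** WORD_LENGTH - 1)

assert bits_of(1.25) == 5  # exact selector value, no coercion, no ambiguity
```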

 

 

When using the Xilinx IP nodes in LabVIEW FPGA it becomes very difficult to support source code control and branching.  The biggest issue is the fact that the "Folder for Support Files" entry is absolute.  So when we need to branch the code to isolate new feature development from the main trunk, the stored path is now wrong.  Please make this and all other paths relative, to support a more robust development environment.
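As a sketch of the requested behaviour (the paths below are made-up examples): store the support-files folder relative to the project, then resolve it against whatever working copy is checked out.

```python
from pathlib import Path

project_dir = Path(r"C:\work\branches\feature-x\MyProject")
support_abs = project_dir / "ip" / "support_files"

# Store the "Folder for Support Files" relative to the project file...
stored = support_abs.relative_to(project_dir)  # ip\support_files

# ...so every branch or checkout resolves it against its own location.
other_checkout = Path(r"C:\work\trunk\MyProject")
print(other_checkout / stored)  # C:\work\trunk\MyProject\ip\support_files
```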

 

 

A very useful feature of the FPGA Butterworth filter is the ability to use a single instance for multiple channels, saving FPGA resources.

 

However, this is only possible for 16-bit filters, not for 32-bit-wide filters.

 

It would be useful if the 32-bit filters could go multichannel too, at least two channels.

 

 

The CORDIC high-throughput functions available in LabVIEW are capable of running at high frequencies, thus allowing FPGA code to (for example) multiplex multiple demodulators without exploding device utilisation.

 

Unfortunately, the option to apply a Gain correction to the results does not pipeline the actual multiplication, thus artificially limiting the available speed of the CORDIC algorithms.

 

In my code I always deactivate the gain compensation and do it "manually", allowing the code to compile at much higher frequencies and utilise the FPGA device more efficiently.

 

It would be great if it were possible to also pipeline this multiplication as part of the CORDIC High-throughput node instead of being forced to implement the multiplication separately.
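For background: a rotation-mode CORDIC grows every result by a constant gain K, so the compensation is a single multiply by 1/K, and that one multiply is all that needs its own pipeline stage. A small Python sketch (the stage count is an arbitrary assumption):

```python
import math

STAGES = 16

# K = prod(sqrt(1 + 2**(-2*i))) over the CORDIC iterations, ~1.6468.
K = math.prod(math.sqrt(1 + 2 ** (-2 * i)) for i in range(STAGES))
INV_K = 1 / K  # ~0.60725, the gain-correction constant

raw_magnitude = K * 1.0            # CORDIC output for a unit-length input
corrected = raw_magnitude * INV_K  # the single multiply to pipeline
print(round(corrected, 4))         # 1.0

# On the FPGA this multiply can occupy its own pipeline stage, which is
# why building it into the node need not limit the achievable clock rate.
```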