LabVIEW FPGA Idea Exchange

When you are using the same code on different boards, it would be a big help if you could set the "FPGA VI Reference" indicator to "Adapt to source". When I use different DMAs on different targets, the wire breaks every time I change targets:

 

My FGV for the FPGA reference looks like this:

 

FPGA ref Adapt to source.png

In LabVIEW FPGA 2011, only the base clocks enumerated in the project and clocks derived from the base clock(s) are available in the FPGA Clock Control. I’d like LabVIEW to show the top-level clock in this control as well.

 

Consider designs with nested components that both CAN and CANNOT be optimized with the single-cycle timed loop (SCTL). If the clock domain of the SCTL does not match the top-level clock domain that contains it, you seem to pay a heavy performance penalty, presumably due to the clock-crossing logic under the hood. (Thank you, by the way, for dealing with this for me!) For example, consider this VI:

 

2013-03-04_164826.png

 

The While Loop will take more ticks (a few hundred more in cases I’ve seen) to execute than if the Clock Control constant were set to 200MHz (assuming you could compile). So, just set the TLC (top-level clock) and the clock control to be the same, right? Sure, except when you change the top-level clock and, a few hours later when the compile is finished, you realize you forgot (gasp) to change a clock constant and the code doesn't meet its timing requirements anymore.

 

Project Clocks:

2013-03-04_163653.png

 

LabVIEW 2011 Behavior:

2013-03-04_163818.png

 

Desired Behavior:

2013-03-04_163818b.png

 

Thanks!

 

-Steve K

The Loop Timer Express VI is very useful for timing a loop to an exact rate. However, if you want to be sure the loop is actually meeting the requested rate, you also have to put in Tick Count VIs like this:

 

loop counter fpga.png

 

Since the Loop Timer Express VI is already calculating how long it needs to wait in order to achieve the desired loop time, I would prefer that it at least output a Boolean indicating that it failed to achieve the required timing.

 

failed timing.png

 

It would be best if it output the actual ticks it waited, in a signed form such as I16, so the value could go negative (indicating the number of ticks by which it missed timing); a sketch of this behaviour follows the image below.

 

counts waited.png
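
To make the proposal concrete, here is a minimal sketch of the requested behaviour (Python for illustration, since LabVIEW is graphical; all names are hypothetical):

    def loop_timer(requested_period_ticks, elapsed_ticks):
        # elapsed_ticks: ticks spent in the loop body since the last deadline.
        ticks_remaining = requested_period_ticks - elapsed_ticks
        wait_ticks = max(ticks_remaining, 0)   # hardware can only wait forward in time
        # ... wait for wait_ticks here ...
        return ticks_remaining                 # signed, I16-style: < 0 means timing missed

A single signed output like this covers both requests: compare it to zero for the failed-timing Boolean, or read it directly for the number of ticks the loop missed by.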

For some applications, I find myself configuring memory blocks for the storage of custom controls which I am maintaining with a type def.  Type definitions normally have the advantage that changing them updates them everywhere they are used.

 

Unfortunately, when I change a type def control for which a memory block has been configured, the memory block does not pick up the change, and my code breaks.  It appears that the memory block disconnects the control from its type def when configured.  It would be nice if the memory block were reconfigured automatically, as this is what I would expect to happen with a type def control.

It would be nice to have "time unit converters" in the LabVIEW FPGA Timing menu.

 

My need is to automatically convert ticks to µs according to the local clock frequency:

 

  • Ticks -> µs
  • µs -> Ticks
  • Ticks -> ms
  • ms -> Ticks

 

Using this kind of automatic converter in place of manual calculations with constants would help during code evolution.
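
For illustration, here is a sketch of the arithmetic such converters would perform (Python; the clock_hz parameter stands in for the local clock domain's frequency, shown with the 40 MHz default):

    def ticks_to_us(ticks, clock_hz=40_000_000):
        return ticks * 1_000_000 / clock_hz    # 40 MHz -> 0.025 us per tick

    def us_to_ticks(us, clock_hz=40_000_000):
        return round(us * clock_hz / 1_000_000)

    def ticks_to_ms(ticks, clock_hz=40_000_000):
        return ticks * 1_000 / clock_hz

    def ms_to_ticks(ms, clock_hz=40_000_000):
        return round(ms * clock_hz / 1_000)

As built-in nodes, these would pick the frequency up from the enclosing clock domain automatically, so a clock change wouldn't leave stale constants behind.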

Visually detecting the presence of CDC (Clock Domain Crossing) in LabVIEW code is only semi-intuitive. You have to check the configured clock of the SCTL and/or follow the wire to the clock constant/control in order to understand which clock domain the code is running in.

 

I suggest having the option to automatically colour the background of the SCTL according to the clock being used.

Obviously, this won't work well across VI borders, but at least the option to have it visible on the same diagram would already be a nice step towards better visibility of this really important part of LabVIEW FPGA programming.

 

An option to actually couple a colour with any given clock constant / control for SCTLs would be an addition I personally would very much appreciate.

 

Intaris_0-1716558682629.png

 

Yes, these colours are probably a bit extreme, but given that I'm dealing with so many individual processes, it is preferable to constantly following all the wires or investigating all of the SCTL settings.

 

Working with the NI 5785, our team had a hard time understanding how to use TClk without all of the extra (e.g. streaming) code that comes with the example.

 

Through support we were eventually put in touch with R&D, who told us how to initiate TClk by setting some of the FPGA controls.  This was helpful, but not intuitive.

 

TClk supports the beamforming applications shown in NI marketing, but without this usability it is very difficult (if not impossible) to develop the applications promised.

 

TClk also has other lower-level features, such as delay correction.  No information is posted on this either, but it is a property we can read.

Arising from similar requirements to those I posted many moons ago (HERE), I naively thought putting a terminal in a disable structure would remove it from the FPGA compile. It doesn't.

 

Years later, I have developed a nice debug interface for my FPGA code which is becoming more and more modular as I refactor it.  I have many sub-modules with their own debug interfaces which can be turned on or off from the top-level VI via LVOOP method injection.

 

The problem is that I can't really compile my entire FPGA VI with ALL debug paths enabled, as this just won't fit (it will sometimes compile, but most often not, and our FPGA code base is still growing).  And this is before I even think about making my debug information more detailed.  I would like to be able to easily switch certain aspects of the debug interface on and off as testing requirements change.  On the debug interface level I can do this easily, by simply not reading the data from the objects being used for the data transfer, or by passing in abstract methods which don't actually do anything and get optimised away.  But I'm left with a load of FP controls which are still eating up resources on the FPGA target.  I don't want to delete the controls, because that leads to X copies of ever-so-slightly out-of-sync versions of my test VI, which quickly becomes a maintenance nightmare.  Instead, I want to be able to "easily" reconfigure my test front panel to only compile the stuff I'm currently actually interested in.

 

Part of what I would like is the ability to define areas of the FP which are enabled or disabled (and preferably also based on whether simulation is active or not, hence conditional disables for the FP).  This way, when compiling, the FP elements will actually disappear and full resource savings can be made (as Xilinx is clever enough to optimise away any pointless code LV may still have instantiated in VHDL).  In addition, the ability to define certain controls as being enabled only when in simulation mode would allow us to have SGL graphs and so on present when needed during debugging.

 

So, would having conditional disable options for the FP (where controls are shown as greyed out when not available) be of interest to anyone?  If this ended up being an FPGA-only thing, I wouldn't shed any tears.

 

Am I the only one who would use this? hmm. Maybe.

I often work with the FPGA in hybrid mode because the Scan Interface covers most of the project requirements 90% of the time.  When NI added support for the SGL datatype to the FPGA module in 2012 (?), they overlooked user-defined variables.  There is currently no built-in support for typecasting an SGL to a U32, so passing SGL data back to the host requires FP controls or custom typecasting solutions (see SGL typecast) on both the FPGA and host layers.

 

Please add SGL as an option for user-defined variables.
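
For reference, the workaround boils down to reinterpreting the 32 bits of the float as an unsigned integer on one side and reversing it on the other. A host-side sketch in Python (the FPGA side needs the equivalent bit-for-bit cast):

    import struct

    def sgl_to_u32(value):
        # Reinterpret a 32-bit float's bit pattern as a U32 (no numeric conversion).
        return struct.unpack("<I", struct.pack("<f", value))[0]

    def u32_to_sgl(bits):
        # The reverse: reinterpret U32 bits as a 32-bit float.
        return struct.unpack("<f", struct.pack("<I", bits))[0]

    assert u32_to_sgl(sgl_to_u32(3.5)) == 3.5   # round-trips exactly

Native SGL support for user-defined variables would make this boilerplate unnecessary on both layers.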

 

 

Can the memory initialization browse button be changed to behave like traditional browse buttons rather than always defaulting to C:\Program Files\National Instruments\LabVIEW 2009\user.lib\?

 

[image attachment]

Memory initialization is one of the more tedious aspects of LVFPGA coding.  A lot of my LVFPGA VIs have multiple memory elements that I need to access simultaneously for a given operation.  I've tried to streamline the initialization process by making all memory initialization VIs read from an init-values file and populate the array indicator.  However, I now have to have multiple initialization VIs reading from different points in the same init-values file.  If I could somehow get a parameter into the memory initialization VI, I could programmatically select where in the init-values file to read from.  Here is how this could work:

 

[image attachment]
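
A rough sketch of the idea (Python, assuming a hypothetical flat file of U32 words; the parameter selects which slice of the shared init-values file each memory block loads):

    import struct

    def read_init_values(path, offset_words, length_words):
        # Read length_words 32-bit values starting offset_words into the file.
        with open(path, "rb") as f:
            f.seek(offset_words * 4)            # 4 bytes per U32 word
            data = f.read(length_words * 4)
        return list(struct.unpack("<%dI" % length_words, data))

    # Each memory element's initialization VI would get its own offset/length:
    # coeffs = read_init_values("init_values.bin", offset_words=0,    length_words=1024)
    # lut    = read_init_values("init_values.bin", offset_words=1024, length_words=512)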

I do a lot of debugging by simply running my LVFPGA code in traditional LabVIEW test benches.  It's kind of a pain to have to open up an FPGA-scoped version of my VIs just to configure the memory elements or just to view the lengths/data types.

 

[image attachment]

I have several projects that use the same code modules.  FIFOs are used to communicate with these modules.  It would be really nice if I didn't have to keep recreating the same FIFOs for each new project just to be able to reuse my modules.  I suggest being able to save FIFOs (and DMAs) in an lvlib file, similar to project variables in Windows LabVIEW.

The rvi folder has automation tools for FPGA compiles.  These are not very well documented, and there are no examples of how to use them.

 

Could additional info and examples be provided?

 

This is useful for projects where automated building helps continuous integration with tools such as Jenkins or Bamboo.

The default interface for FIFOs is Timeout (https://zone.ni.com/reference/en-XX/help/371599P-01/lvfpgaconcepts/fpga_interface_options/).

 

I would prefer the default be Handshaking.

Better visual indication of estimated and final timings in the compilation report.

 

Would it be possible to add some visual clues as to whether a given clock in an FPGA design has met timing or not? Maybe a background colour: green for good, red for bad?

 

color clocks.png

Sometimes it's really hard to work out which clocks have met timing and which have not.

Why is the "Stacked Sequence Structure" still present in the FPGA palette? It has been removed from other targets' palettes; I think it should be removed from the FPGA palette as well.

FPGA registers would be more user-friendly if they could be quick-dropped and were searchable ("find caller", as has been suggested before). This would also be great for handshakes.

I don't like static resource definitions (FIFOs, block RAMs or DMAs) in my projects.  I prefer to have the code declare such entities as they are required, because this makes scalability much easier to achieve.

For FIFOs, block RAM and so on this is no problem, but there are two things we currently cannot instantiate in code:

DMA Channels

Derived clocks

 

To deal with the first, why can't we define a DMA channel in the code?  When parsing the code before compiling, the presence of a DMA channel can be autodetected and added to the interface for the bitfile.

 

To try to decouple my code from static DMAs, I have actually started defining my core FPGA VIs as accepting FIFOs with Write functions (for DMAs to the host) or Read functions (for writing to the FPGA).  I can then, without having to change my project, wrap this FPGA VI in another VI which can input either a DMA channel (which unfortunately must be defined in the project) or a standard FIFO which can then be used for debugging (a sketch of this pattern follows below).

 

Please allow for the instantiation of DMA channels in code.
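
The decoupling pattern described above amounts to injecting the write (or read) endpoint into the core VI instead of naming the resource inside it. Roughly, in Python with hypothetical names:

    # The core logic takes any 'write' callable; it doesn't know or care whether
    # the data ends up in a host DMA channel or an ordinary FIFO.
    def acquisition_core(samples, write):
        for sample in samples:
            write(sample)

    # Deployment wrapper: inject the project-defined DMA channel, e.g.
    # acquisition_core(adc_samples(), write=dma_to_host.write)

    # Debug wrapper: inject a plain buffer instead.
    captured = []
    acquisition_core(iter([1, 2, 3]), write=captured.append)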

When using the Xilinx IP nodes in LabVIEW FPGA it becomes very difficult to support source code control and branching.  The biggest issue is the fact that the "Folder for Support Files" entry is absolute, so when we branch the code to isolate new feature development from the main trunk, the stored path is now wrong.  Please make this and all other paths relative, to support a more robust development environment.
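
For illustration, a relative entry could be derived from today's absolute one against the project directory; a sketch in Python with hypothetical paths:

    import os

    project_dir = "C:/work/trunk/MyProject"
    support_dir = "C:/work/trunk/MyProject/ip/support"    # stored absolutely today

    rel = os.path.relpath(support_dir, start=project_dir)  # -> 'ip/support'
    # A branch at C:/work/branches/feature-x/MyProject would then resolve the
    # same relative entry correctly, with no re-linking after branching.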