LabVIEW FPGA Idea Exchange


We have a "group" of occurrences.  We can't identity individual occurrencs easily because you can't store them in an array (where we would use indexes to identify).  When using a cluster they all get the same name (labels aren't used!) which is bug-prone.  The only workaround is to create a control cluster, and that's not a clean solution.

In LabVIEW FPGA 2011, only the base clocks enumerated in the project and clocks derived from those base clock(s) are available in the FPGA Clock Control. I'd like LabVIEW to show the top-level clock in this control as well.

 

Consider designs with nested components that both CAN and CANNOT be optimized with the single-cycle timed loop. If the clock domain of the SCTL does not match the top-level clock domain that contains it, you seem to pay a heavy performance penalty, presumably due to the clock-crossing logic under the hood. (Thank you, by the way, for dealing with that for me!) For example, consider this VI:

 

2013-03-04_164826.png

 

The While Loop will take more ticks (a few hundred more in cases I've seen) to execute than if the Clock Control constant were set to 200 MHz (assuming you could compile). So, just set the TLC and the clock control to be the same, right? Sure, except when you change the top-level clock and, a few hours later when the compile is finished, realize you forgot (gasp) to change a clock constant, and the code doesn't meet its timing requirement anymore.

 

Project Clocks:

2013-03-04_163653.png

 

LabVIEW 2011 Behavior:

2013-03-04_163818.png

 

Desired Behavior:

2013-03-04_163818b.png

 

Thanks!

 

-Steve K

The Loop Timer Express VI is very useful for timing a loop to an exact rate. However, if you want to be sure the loop is actually meeting the requested rate, you also have to put in Tick Count VIs like this:

 

loop counter fpga.png

 

Since the Loop Timer Express VI is already calculating how long it needs to wait in order to achieve the desired loop time, I would prefer that it at least output a Boolean indicating that it failed to achieve the requested timing.

 

failed timing.png

 

It would be best if it output the actual ticks it waited in, say, I16 form, so the value could go negative (indicating the number of ticks by which it failed to achieve timing).

 

counts waited.png
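
A rough C model of the proposed output (the name and semantics are my reading of the idea, not an existing API): the timer would return its slack as a signed I16, so a negative value means the loop missed its timing by that many ticks.

```c
#include <stdint.h>

/* Hypothetical model of the proposed Loop Timer output.
 * 'elapsed' is the number of ticks the loop body consumed this
 * iteration (tick count read at the timer minus the count at the
 * start of the iteration); 'period' is the requested loop period. */
static inline int16_t ticks_waited(uint32_t elapsed, uint16_t period)
{
    /* >= 0: timing met; the timer waits this many ticks.
     *  < 0: timing missed by that many ticks (no wait possible). */
    return (int16_t)((int32_t)period - (int32_t)elapsed);
}
```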

For some applications, I find myself configuring memory blocks for the storage of custom controls which I maintain as type defs.  Type definitions normally have the advantage that changing them updates them everywhere they are used.

 

Unfortunately, when I change a type def control for which a memory block has been configured, the memory block does not pick up the change, and my code breaks.  It appears that the memory block disconnects the control from its type def when configured.  It would be nice if the memory block were reconfigured instead - as that is what I would expect to happen with a type def control.

Compiling can take a long time, and it would be cool to get updates via SMS or email at various stages of the compile process.

 

 

At present, if you are trying to simulate your FPGA's actual logic using a custom VI like this:

1234.png

Then you know that your custom VI test bench only has one case for methods (a single general method case, not a case for each available method). There are ways around this problem; for example, one posted example emulates a node by using a different timeout value for Wait on Rising Edge, Wait on Falling Edge, and so on, but one still has to write the code for each of the different methods.

 

My suggestion is as simple as this: make test benches easier to use by handling all of the methods and properties with a set behavior. That way, all one has to set up when creating a test bench is the input and output on each I/O read/write line. At the very least, it would be nice to be able to read which method is being called, so the appropriate code can be set up without complicated case structures.

 

Recompiling an FPGA VI can be time-consuming when debugging a large program.  Emulation mode is not useful when the process includes debugging real I/O connections (vs. emulator-simulated ones).  I would propose a useful "fix" to the emulator I/O problem: give emulation mode the ability to use all of the I/O as "pass-through" connections from the FPGA to the host, in order to actually use the I/O.  This would involve a very simple FPGA VI that connects all of the I/O to appropriate indicators or controls.  If this pre-compiled VI were downloaded and running on the FPGA during emulation mode, you could debug real I/O connections without compiling your entire VI.

Hello,

 

It would be nice to be able to get some general information, on Windows, about an FPGA VI using its reference.

 

For example, it would be interesting to get:

 

 

  • The main cycle loop frequency
  • A version ID 
  • The CRC of the bitfile
  • ...
This kind of information could be useful for dynamic bitfile downloads, or when you connect to a running target and want to query it dynamically.
I think this information is already known by LabVIEW FPGA ... only the property nodes are missing.
Thanks.
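
In the meantime, a common workaround is to expose such values (e.g. a version ID) as front-panel indicators and read them from the host with the FPGA Interface C API. A minimal sketch; the bitfile, signature, and indicator constants are hypothetical names of the kind the C API generator produces:

```c
#include "NiFpga_MyFpgaVi.h"   /* hypothetical generated header */
#include <stdio.h>

int main(void)
{
    NiFpga_Session session;
    NiFpga_Status status = NiFpga_Initialize();

    /* Attach to the bitfile without resetting the running VI. */
    NiFpga_MergeStatus(&status, NiFpga_Open(
        NiFpga_MyFpgaVi_Bitfile, NiFpga_MyFpgaVi_Signature,
        "RIO0", NiFpga_OpenAttribute_NoRun, &session));

    uint32_t versionId = 0;
    NiFpga_MergeStatus(&status, NiFpga_ReadU32(
        session, NiFpga_MyFpgaVi_IndicatorU32_VersionID, &versionId));
    printf("FPGA version ID: %u\n", (unsigned)versionId);

    NiFpga_MergeStatus(&status, NiFpga_Close(session, 0));
    NiFpga_MergeStatus(&status, NiFpga_Finalize());
    return status;
}
```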

 

FPGA bitfiles should not have any dependency on the project name or target name.  What if you change the name of your project?  What if you change the name of the target?  These dependencies should correspond only to the VI and its location in the project tree and FPGA target. FPGA bitfiles should live in the same directory as the VI, with a different extension.

Change the automatic name and path of FPGA bitfiles from:

.\FPGA Bitfiles\ProjectName.lvprog_TargetName_ViName.vi.lvbitx

to

.\ViName.vi.lvbitx

 


It would be nice to have "time unit converters" in the LabVIEW FPGA Timing menu.

 

My need is to automatically convert Ticks to µs according to the local clock frequency:

 

  • Ticks -> µs
  • µs -> Ticks
  • Ticks -> ms
  • ms -> Ticks

 

Using this kind of automatic converter in place of manual calculations with constants would help as code evolves.
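
A sketch of the arithmetic such converters would wrap, in C for illustration; the 40 MHz clock is an assumed example, and the real converters would pick up the local clock frequency automatically:

```c
#include <stdint.h>

#define CLOCK_HZ 40000000u            /* assumed example: 40 MHz clock */
#define TICKS_PER_US (CLOCK_HZ / 1000000u)

/* Ticks -> microseconds (integer division truncates any remainder). */
static inline uint32_t ticks_to_us(uint32_t ticks)
{
    return ticks / TICKS_PER_US;
}

/* Microseconds -> ticks */
static inline uint32_t us_to_ticks(uint32_t us)
{
    return us * TICKS_PER_US;
}
```

Hard-coding the frequency like this is exactly the kind of manual constant that breaks when the clock changes, which is what built-in converters would avoid.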

Visually detecting the presence of a CDC (clock domain crossing) in LabVIEW code is only semi-intuitive. You have to check the configured clock of the SCTL and/or follow the wire to the clock constant or control in order to understand which clock domain the code is running in.

 

I suggest having the option to automatically colour the background of the SCTL according to the clock being used.

Obviously, this won't work across VI borders, but even the option to have it visible on the same diagram would be a nice step towards better visibility of this really important part of LabVIEW FPGA programming.

 

An option to actually couple a colour with any given clock constant / control for SCTLs would be an addition I personally would very much appreciate.

 

Intaris_0-1716558682629.png

 

Yes, these colours are probably a bit extreme, but given that I'm dealing with so many individual processes, it is preferable to constantly following all the wires or investigating all of the SCTL settings.

 

Vision FPGA has been around for maybe ten years. Today's FPGAs have far more resources than their ancestors, but in Vision FPGA we can still only handle 8 pixels per cycle, which sometimes becomes a bottleneck.

 

Also, I think it would be good to add a signed 16-bit image data type to Vision FPGA. When we use U8 subtraction we lose information, because the output is only another U8 image. With an I16 data type, lossless subtraction would be possible. Sometimes every bit counts.
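
The arithmetic behind the request, sketched in C: the difference of two U8 pixels spans -255 to +255, which cannot be represented in a U8 but fits easily in an I16.

```c
#include <stdint.h>

/* U8 subtraction with a U8 result must clip (or wrap): data is lost. */
static inline uint8_t sub_u8_saturate(uint8_t a, uint8_t b)
{
    return (a > b) ? (uint8_t)(a - b) : 0;
}

/* Widening the result to I16 preserves the full -255..+255 range. */
static inline int16_t sub_i16(uint8_t a, uint8_t b)
{
    return (int16_t)a - (int16_t)b;   /* lossless */
}
```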

 

 

I'm bringing back to life this long-lost idea: https://forums.ni.com/t5/LabVIEW-FPGA-Idea-Exchange/pre-and-post-build-options/idi-p/2364676

as I think there are lots of situations where pre/post build actions could be useful for FPGA.

 

For example, as suggested here: https://forums.ni.com/t5/LabVIEW/Populate-FPGA-array-with-values/m-p/4145330#M1195362

I have a large array of coefficients that I want to load from a file and use to populate an array constant. I have a script that does this; now I would like to run it automatically before compiling the bitfile. For context, I want it to be a constant because controls take more resources and do not allow constant-folding optimization.

 

I already had another situation where I made a tool to auto-generate code in case structures based on specifications given by the developer. If the developer forgets to run it before compiling, however, the FPGA VI won't work properly, as the necessary code has not been generated.

 

More generally, I think scripting for FPGA is way underrated. As FPGA code is often tedious and redundant to create (because optimization is prioritized over readability, and because of the lack of type genericity), scripting has great potential here. Allowing pre/post-build actions for FPGA bitfiles would surely take FPGA scripting to the next level!

Working with the NI 5785, our team had a hard time understanding how to use TClk without all of the extra (e.g. streaming) code that comes with the example.

 

Through support we were eventually put in touch with R&D, who told us how to initiate TClk by setting some of the FPGA controls.  This was helpful but not intuitive.

 

TClk supports the beamforming applications shown in NI marketing, but without this usability it is very difficult (impossible, even) to develop the applications promised.

 

TClk also has other lower-level features, such as delay correction.  No information is posted on this either, but it is a property we can read.

The document High Performance FPGA Developer's Guide lists a parallelized bubble sort.  I tried this out and found that it doesn't actually work.  While this lattice successfully gets the maximum value to the top and the minimum value to the bottom, it doesn't completely sort the values in between.

Bubble Sort.JPG

 

In this example, if the highest value started at the end of the array (red) and the 2nd-highest value started 2nd from the end (pink), the highest value ends up at the top, but the 2nd-highest value ends up 3rd from the top.

Bubble Sort - Markup.JPG

This lattice can be completed so that it sorts the middle sections by adding 3 more columns: one with 3 Min/Max blocks comparing the center 6 values, one with 3 Min/Max blocks comparing the center 4, and a final Min/Max operation comparing the middle 2. This completes the sort, but takes 7 sequential steps instead of the 4 listed.  The following works to sort the entire array:

Bubble Sort - Corrected.JPG
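
For reference, the classic fully parallel form of bubble sort is the odd-even transposition network, which for 8 elements needs 8 alternating stages and is provably correct. The C sketch below (my own illustration, not the guide's listing) also shows how to verify any such compare-exchange lattice: by the zero-one principle, a comparison network sorts all inputs if and only if it sorts every 0/1 vector, so 2^8 = 256 tests suffice.

```c
#include <stdio.h>

#define N 8

/* Compare-exchange: after the call, *a <= *b. */
static void cmpx(int *a, int *b)
{
    if (*a > *b) { int t = *a; *a = *b; *b = t; }
}

/* Odd-even transposition sort: N stages of disjoint
 * compare-exchanges, the parallel form of bubble sort. */
static void oddeven_sort(int v[N])
{
    for (int stage = 0; stage < N; stage++)
        for (int i = stage % 2; i + 1 < N; i += 2)
            cmpx(&v[i], &v[i + 1]);
}

/* Zero-one principle check: test all 256 Boolean vectors. */
int main(void)
{
    for (unsigned m = 0; m < (1u << N); m++) {
        int v[N];
        for (int i = 0; i < N; i++) v[i] = (int)((m >> i) & 1u);
        oddeven_sort(v);
        for (int i = 0; i + 1 < N; i++)
            if (v[i] > v[i + 1]) { puts("FAIL"); return 1; }
    }
    puts("all 256 zero/one vectors sorted");
    return 0;
}
```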

 

 

Along the same line of thinking as this idea:

https://forums.ni.com/t5/LabVIEW-FPGA-Idea-Exchange/Handle-Fixed-Point-Integers-Going-Into-Case-Structures/idi-p/3360844

It would be cool if we got rid of the coercion dot here for smaller integers and integer fixed-point numbers used as the index for an array.

Smaller integer representation for indexes2.png

As it is, it's hard to tell whether the compiler will use the bigger I32 for this.

This behavior would also be useful for other integer inputs, such as the memory address input.

In LabVIEW, it is not allowed to exit an SCTL running in an external clock domain. LabVIEW claims this could lead to instability of the code due to glitches, etc., on the external clock.

I propose leaving the option open to the programmer to take that risk, which is not always present. It can lead to more understandable code.

For example, I have code where I read data from an NI 5752 ADC module and store it in block RAM (32 ADC channels, 32 block RAMs). Reading from that ADC implies acquiring the data in the external ADC clock domain, so the writing to memory is in that clock domain as well.

I needed to implement a function to reset that memory as well. That means writing to the memory, which has to be done in an SCTL in the same external clock domain.

However, this reset function (a subVI) cannot be inserted into the normal "enable chain" of the main program, since the SCTL cannot be terminated and the memory-reset subVI therefore never terminates.

 

So I had to resort to an ugly trick to get this done. In the main program I create a dead branch that does the reset. That subVI never stops, but after the reset has been done it sends a signal via a FIFO to the "wait reset" subVI in the main enable chain. The wait-reset subVI runs in the default clock domain and can exit its wait loop once the reset signal has been received.

Capture.PNG

However, this trick is not easy to understand from the program. It would have been easier if the reset function (the externally clocked loop) could have exited by itself and been inserted into the main enable chain. That would have been more logical.

 

 

Arising from similar requirements to those I posted many moons ago HERE... I naively thought that putting a terminal in a disable structure would remove it from the FPGA compile. It doesn't.

 

Years later, I have developed a nice debug interface for my FPGA code, which is becoming more and more modular as I refactor it.  I have many sub-modules with their own debug interfaces, which can be turned on or off from the top-level VI via LVOOP method injection.

 

The problem is that I can't compile my entire FPGA VI with ALL debug paths enabled, as it just won't fit (it will sometimes compile, but most often not, and our FPGA code base is still growing).  And this is before I even think about making my debug information more detailed.  I would like to be able to easily switch certain aspects of the debug interface on and off as testing requirements change.  On the debug-interface level I can do this easily, by simply not reading the data from the objects being used for the data transfer, or by passing in abstract methods which don't actually do anything and get optimised away.  But I'm left with a load of FP controls which are still eating up resources on the FPGA target.

I don't want to delete the controls, because that leads to X copies of ever-so-slightly out-of-sync versions of my test VI, which quickly becomes a maintenance nightmare.  Instead, I want to be able to "easily" reconfigure my test front panel to compile only the stuff I'm currently interested in.

 

Part of what I would like is the ability to define areas of the FP which are enabled or disabled (preferably also based on whether simulation is active or not - hence conditional disables for the FP).  This way, when compiling, the FP elements would actually disappear and full resource savings could be made (Xilinx is clever enough to optimise away any pointless code LV may still have instantiated in VHDL).  In addition, the ability to define certain controls as enabled only in simulation mode would allow us to have SGL graphs and so on present when needed during debugging.

 

So, would having conditional disable options for the FP (where controls are shown greyed out when not available) be of interest to anyone?  If this were an FPGA-only thing, I wouldn't shed any tears.

 

Am I the only one who would use this? hmm. Maybe.

I often work with the FPGA in hybrid mode because the Scan Interface covers most of the project requirements 90% of the time.  When NI added support for the SGL datatype to the FPGA module in 2012 (?), they overlooked user-defined variables.  There is currently no built-in support for typecasting a SGL to a U32, so passing SGL data back to the host requires FP controls or custom typecasting solutions (see SGL typecast) on both the FPGA and host layers.

 

Please add SGL as an option for user-defined variables.
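
For reference, the custom typecast workaround amounts to a bit-for-bit reinterpretation between SGL and U32 on both the FPGA and host sides. A host-side sketch of the concept in C (the LabVIEW equivalent is the Type Cast function; this is not NI's implementation):

```c
#include <stdint.h>
#include <string.h>

/* Reinterpret the 32 bits of a float as a uint32_t without
 * changing them -- what LabVIEW's Type Cast does for SGL/U32. */
static inline uint32_t sgl_to_u32(float f)
{
    uint32_t u;
    memcpy(&u, &f, sizeof u);
    return u;
}

static inline float u32_to_sgl(uint32_t u)
{
    float f;
    memcpy(&f, &u, sizeof f);
    return f;
}
```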

 

 

HERE I detailed a problem I currently have with Registers between two clock domains which are closely related (phase-locked).

 

It turns out that there is handshaking going on which is essentially unnecessary.  It would be nice to have the option of something similar to a Register for such clock domains, where we know the relationship between the clocks explicitly and thus do not require handshaking.

 

Shane.