LabVIEW FPGA Idea Exchange

I don't like static resource definitions for FIFOs, Block RAMs or DMAs in my projects.  I prefer to have the code declare such entities as they are required, because this makes scalability much easier to achieve.

For FIFOs, Block RAM and so on this is no problem, but there are two things we currently cannot instantiate in code:

DMA Channels

Derived clocks

 

To deal with the first, why can't we define a DMA channel in the code?  When parsing the code before compiling, the presence of a DMA channel can be auto-detected and added to the interface for the bitfile.

 

To try to decouple my code from static DMAs, I have actually started defining my core FPGA VIs as accepting FIFOs with Write functions (for DMAs to the host) or Read functions (for writing to the FPGA) as required.  I can then, without having to change my project, wrap this FPGA VI in another VI which can pass in either a DMA channel (which unfortunately must be defined in the project) or a standard FIFO which can then be used for debugging.

 

Please allow for the instantiation of DMA channels in code.

In connection with another general idea I have posted, I have come to the conclusion that it would be nice to run an analysis of the Xilinx log in order to give feedback on which code has been constant folded by the Xilinx compiler.

 

Other aspects such as specific resource utilisation would also be really cool (SRL32 vs Registers for Feedback Nodes).  This would obviously be a post-bitfile operation but could at least give some direct feedback as to what the Xilinx compiler has modified in the code (dead code elimination, constant folding etc.).
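
As a rough illustration of the kind of analysis being proposed, a post-compile script could scan the Xilinx report for optimisation messages.  This is only a sketch: the report file name and the message wording are assumptions and differ between ISE and Vivado versions.

```python
import re
from collections import Counter

# Hypothetical report path; the actual file name depends on the Xilinx toolchain.
REPORT = "toplevel_gen.syr"

# Message fragments that typically hint at compiler optimisations.
# These patterns are assumptions and would need adjusting per Xilinx version.
PATTERNS = {
    "constant folding":  re.compile(r"sourceless|constant value", re.IGNORECASE),
    "dead code removal": re.compile(r"removed|trimmed|unused", re.IGNORECASE),
}

def summarise(report_path: str) -> Counter:
    """Count report lines that match each optimisation pattern."""
    hits = Counter()
    with open(report_path, "r", errors="replace") as fh:
        for line in fh:
            for label, pattern in PATTERNS.items():
                if pattern.search(line):
                    hits[label] += 1
    return hits

if __name__ == "__main__":
    for label, count in summarise(REPORT).items():
        print(f"{label}: {count} message(s)")
```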

This is the current situation when dealing with register creation on FPGA targets:

 

Regsiter Create.png

 

This is what I would like:

 

Regsiter Create wishful thinking.png

 

I am currently creating a group of classes to abstract out inter-loop communication, and the ONLY thing changing between classes (aside from variations between Register vs FIFO vs Global and so on) is the datatype.  Being able to link the Register creation to a data input (the data value of the class itself, for example) would save a lot of work in such operations.  If it were also possible to do the same for the Register stored within the class private data, then implementing different classes in this way would be really easy.

 

Even without classes, the ability to auto-adapt the type of Registers / FIFOs in this way would be a real step towards re-usable code on FPGA.

 

 

Hello,

Here is the error I get when I try to compile my FPGA program:

 

 

LabVIEW FPGA: The compilation failed because of a Xilinx error.

Details:
ERROR:Pack:2310 - Too many comps of type "SLICE" found to fit this device.

Design Summary:
Number of errors: 1
Number of warnings: 89
Logic Utilization:
  Number of Slice Flip Flops: 7,963 out of 10,240 77%
  Number of 4 input LUTs: 10,607 out of 10,240 103% (OVERMAPPED)
Logic Distribution:
  Number of occupied Slices: 5,523 out of 5,120 107% (OVERMAPPED)
    Number of Slices containing only related logic: 4,143 out of 5,523 75%
    Number of Slices containing unrelated logic: 1,380 out of 5,523 24%
      *See NOTES below for an explanation of the effects of unrelated logic.
  Total Number of 4 input LUTs: 11,028 out of 10,240 107% (OVERMAPPED)
    Number used as logic: 10,454
    Number used as a route-thru: 421
    Number used as 16x1 RAMs: 70
    Number used as Shift registers: 83
  Number of bonded IOBs: 90 out of 324 27%
    IOB Flip Flops: 97
  Number of MULT18X18s: 38 out of 40 95%
  Number of BUFGMUXs: 2 out of 16 12%

Peak Memory Usage: 359 MB
Total REAL time to MAP completion: 19 secs
Total CPU time to MAP completion: 19 secs

 

Here is a screenshot of the program.

Thanks in advance 🙂

Long compile times are a necessary evil of FPGA code.  Even with the vast improvements of Vivado, compile time still ranks as the biggest killer of large-project efficiency.  As compile times approach 3-4 hours, their successful completion becomes paramount.  All too often I find that the Xilinx compiler running on the compile worker has completed successfully, but some small communication glitch, either between my development machine and the farmer or between the farmer and the worker, has caused the compile to be lost.  It is quite frustrating to know you have a completed bitfile from Xilinx, yet the NI tools will not perform the final processing steps required to create the lvbitx file.  The only solution is to restart the compile, costing another 3-4 hours of productivity.

 

Typical workflow in our company for these large projects is to spend mornings testing and stressing the compile(s) from overnight.  Then make any bug fixes and incremental feature improvements and try to start a compile by mid-morning.  By mid-afternoon, when the compile is complete, do the process again so that you can prepare another build for overnight.  If one of the compiles fails because of timing or resource problems, there's nothing that can be done.  But if it fails because of glitches in NI's compile wrapper code, that becomes a waste of half a day of productivity.

 

I propose that the current methods for compiling bitfiles be modified.  The goal is to improve user productivity.  Some of my suggestions include:

  • For a given build specification, give it the ability to re-attempt retrieval of the last completed compile. This option would be available even if the VIs that created that compile had been modified.
  • If a compile was completed previously for this build specification and there has not yet been a successful lvbitx generation, prompt the user before doing anything that would destroy the ability to retrieve it.
  • Make sure that all of this still works when changing connections to the worker, for example if I start my compile at work, then take my laptop home and want to log in to my VPN at night to check on my compile.
  • Don't remove any chance to get a compile if there was a communication error.  Right now, when I get the communication error, I see a red X in the compilation status and my only option is to remove it from the list.

Xilinx log window should use a fixed-width font.

 

Which of these two string indicators with identical content is easier to read?

 

FPGA Xilinx Log font.png

 

Spoiler: the one on the left is Courier; the one on the right is the default Application font.

The LabVIEW FPGA module has supported static dispatch of LabVIEW class types since 2009. This essentially means that all class wires must be analyzable and statically determinable at compile time to a single class type. However, this class can be a derived class of the original wire type, which means, for instance, that invoking a dynamic dispatch method can be supported, since the compiler knows exactly which function will always be called.

 

http://zone.ni.com/reference/en-XX/help/371599H-01/lvfpgaconcepts/fpgaclassesinvis/

 

This is not sufficient for many applications. Implementations that require message passing or other more event-oriented programming models tend to use enums and flattened bit vectors to pass different pieces of data around on the same wire. All of this packing and unpacking could be handled automatically by the compiler if we could use run-time dynamic dispatch to describe the application.
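
To make the tedium concrete, here is a minimal sketch of the manual pack/unpack pattern such designs end up hand-writing today; the field widths and message type values are arbitrary assumptions, not anything taken from the FPGA module itself.

```python
# Minimal sketch of the hand-written pack/unpack pattern described above.
# Field widths (4-bit message type, 28-bit payload) are arbitrary assumptions.
TYPE_BITS = 4
PAYLOAD_BITS = 28

def pack(msg_type: int, payload: int) -> int:
    """Flatten an enum value and its payload into one 32-bit word."""
    assert 0 <= msg_type < (1 << TYPE_BITS)
    assert 0 <= payload < (1 << PAYLOAD_BITS)
    return (msg_type << PAYLOAD_BITS) | payload

def unpack(word: int) -> tuple:
    """Recover the enum value and payload from the flattened word."""
    return word >> PAYLOAD_BITS, word & ((1 << PAYLOAD_BITS) - 1)

# Example: message type 3 carrying a 16-bit sample value.
word = pack(3, 0xBEEF)
assert unpack(word) == (3, 0xBEEF)
```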

 

We call for the LabVIEW FPGA module to add support for true run-time dynamic dispatch to take care of this tedious, annoying, and downright boring job of figuring out how to pack and unpack bits everywhere. Who's with me?

Hi there,

 

I got the following feedback from a LabVIEW FPGA user:

 

When developing an FPGA application in LabVIEW, after submitting an FPGA code compilation - usually quite a lengthy process - if you modify the code either on the Front Panel or the Block Diagram while compilation is in progress, this results in a Compilation Error at the end.


And this occurs regardless of whether the modification is a mere cosmetic change with no implication for the code that is being compiled.
This is quite frustrating when you realize that the compilation has failed (maybe after half an hour of waiting) just because you unconsciously clicked and resized some control or node.

 

In such a situation, when LabVIEW detects a code change while the FPGA compilation is running, it should warn the user with a message box. If the user confirms the code change, the current compilation can be immediately aborted or allowed to continue (at the user's option); on the other hand, if the user cancels the modification, nothing happens and the compilation continues to a (hopefully) successful end.

 

 

Thanks

Álvaro

On the cRIO-9068, the third serial port and the second Ethernet adapter are actually implemented on the FPGA, and resources are consumed to redirect them to the real-time side. Currently developers have no access to these resources on the FPGA; they are available only from the real-time target.

 

I would like some I/O Nodes for interacting with these devices on the FPGA. NI could provide some examples of how they could be used.

 

Today these resources are invisible to the developer, apart from the additional long compile time and the resources they consume (about 7%).

 

I have attached pictures of the FPGA design and the resources consumed by a blank VI.

 

 

Sincerely,

Jens Eriksen

 

 

The CORDIC high-throughput functions available in LabVIEW are capable of running at high frequencies, allowing FPGA code to (for example) multiplex multiple demodulators without exploding device utilisation.

 

Unfortunately, the option to apply a gain correction to the results does not pipeline the actual multiplication, thus artificially limiting the achievable speed of the CORDIC algorithms.

 

In my code I always deactivate the gain compensation and do it "manually", allowing the code to compile at much higher frequencies and to utilise the FPGA device more efficiently.
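
For reference, the compensation is just a fixed multiply by the reciprocal of the CORDIC gain, which is why it pipelines so naturally when done by hand. Below is a small sketch of computing that constant and applying it as a fixed-point multiply; the iteration count and word widths are assumptions.

```python
import math

ITERATIONS = 16   # assumed number of CORDIC micro-rotations
FRAC_BITS = 18    # assumed fixed-point fraction width for the correction constant

# CORDIC gain K = prod(sqrt(1 + 2**(-2*i))); the correction factor is 1/K (~0.607253).
gain = math.prod(math.sqrt(1 + 2 ** (-2 * i)) for i in range(ITERATIONS))
correction = round((1 / gain) * (1 << FRAC_BITS))

def compensate(raw: int) -> int:
    """Apply the gain correction as one fixed-point multiply plus shift.
    On the FPGA, this multiply is the stage that benefits from pipelining."""
    return (raw * correction) >> FRAC_BITS

print(f"gain = {gain:.6f}, correction constant = {correction}")
print(compensate(1 << 15))  # example: compensate a raw 16-bit magnitude
```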

 

It would be great if it were possible to also pipeline this multiplication as part of the CORDIC High-throughput node instead of being forced to implement the multiplication separately.

 

User Lorn has found a brilliant tip for *DRASTICALLY* speeding up FPGA compile times under Windows for PCs with the Turbo Boost feature. What's more, it's extremely simple to implement.

 

Please let's see this in future versions of LabVIEW as standard.

 

http://forums.ni.com/t5/LabVIEW-FPGA-Idea-Exchange/Multi-core-Compiling/idc-p/2301338#M297

For debugging, using FPGA VIs in interactive mode can be very valuable.  I have, to this day, not been able to find out how LabVIEW determines whether a bitfile and a VI match.

 

Therefore, whenever I click the run button for a VI, I'm never quite sure whether the bitfile will match or not, and I often have to wait 1-5 minutes before I can resume working with LabVIEW.  This is a very high price to pay for something which I end up cancelling.  I would very much like the IDE to TELL ME that the bitfile and VI don't match before starting a new compilation and thus wasting my time.

 

This is as opposed to a Ctrl+click of the run arrow, which explicitly tells the IDE to compile.
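
Until the IDE exposes this check, one workaround is to record the signature embedded in the lvbitx after a good compile and compare it later before clicking run. This is only a sketch: the lvbitx is plain XML, but the exact element name holding the signature is an assumption and may differ between LabVIEW versions.

```python
import xml.etree.ElementTree as ET

def bitfile_signature(lvbitx_path, tag="SignatureRegister"):
    """Pull a signature-like element out of the lvbitx XML.
    The tag name is an assumption; inspect your own lvbitx to confirm it."""
    root = ET.parse(lvbitx_path).getroot()
    node = root.find(f".//{tag}")
    return node.text if node is not None else None

# Example: compare against a value noted down after the last good compile.
previous = "0123456789ABCDEF"                    # placeholder, not a real signature
current = bitfile_signature("MyTarget.lvbitx")   # hypothetical bitfile path
print("match" if current == previous else "recompile likely required")
```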

Hi, since there can be a queue for compiling FPGA code, it seems natural to me to also be able to queue the generation of intermediate files.

 

I'm working with 10 build specifications for compilation per project, and generating the intermediate files for my design takes approximately 3-4 minutes. This means that I need to sit at my computer for half an hour just waiting and clicking Build on every build specification. Sometimes I work with FPGA VIs that need 7-10 minutes to build their intermediate files, so this is a pain.

 

It would be great if there were a way of just highlighting all the build specifications for compilation with Shift and having their intermediate files created automatically, one by one.

 

Can this be done?

As the LabVIEW FPGA code is compiled to a bitfile, there is an intermediate step in which a VHDL file (or maybe Verilog?) is generated. Having access to this file would be very beneficial if you want to target an FPGA that NI does not support. I know that this VHDL file cannot be used directly on an unsupported FPGA, but it would be a very good starting point for those who know the VHDL language.

Hi,

 

I find Conditional Disable Symbols in FPGA code very useful, especially in an R&D environment where needs and code change back and forth rapidly. However, I also find it hard to keep track of all these changes. I propose adding support for reading FPGA Conditional Disable Symbols from the Host, to enable VIs like "Is_FPGA_Function_A_Enabled.vi" that would allow the Host program to know the state (a hardware revision of sorts) of the FPGA bitfile and adapt.

 

I'll give an example to illustrate this proposal.

 

Function A is implemented in FPGA bitfile v1. Function B is implemented on the Host and is based on multiple calls to Function A. Your boss now wants Function B implemented in the FPGA for performance reasons, with a means to disable the code if required. For this you define a Conditional Disable Symbol "FPGA_WITH_FUNC_B" in the FPGA project and write the FPGA code for bitfile v2. Switching between v1 and v2 is easy enough from the project manager, but on the Host side there is no way of knowing whether Function B is implemented directly in the FPGA or should be "emulated" via Function A as before. If you could do a check like "if FPGA_WITH_FUNC_B == TRUE" you could easily make the Host aware of this.
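
A sketch of the host-side logic this would enable; everything here is hypothetical pseudo-host code, and is_fpga_symbol_enabled merely stands in for the proposed query.

```python
# Hypothetical host-side sketch of the proposed capability; none of these
# functions exist in the FPGA interface today.

def is_fpga_symbol_enabled(symbol):
    # Proposed feature: query the loaded bitfile for its Conditional Disable
    # Symbols.  Stubbed here to pretend bitfile v2 is loaded.
    return symbol in {"FPGA_WITH_FUNC_B"}

def fpga_function_a(sample):
    return sample * 2                  # placeholder for a call into the FPGA

def fpga_function_b(samples):
    return [s * 2 for s in samples]    # placeholder for a call into the FPGA

def run_function_b(samples):
    # With bitfile v2, call Function B directly on the FPGA; otherwise emulate
    # it on the host with repeated calls to Function A, as with bitfile v1.
    if is_fpga_symbol_enabled("FPGA_WITH_FUNC_B"):
        return fpga_function_b(samples)
    return [fpga_function_a(s) for s in samples]

print(run_function_b([1, 2, 3]))
```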

 

Regards,

solarsd

Currently, when you build a VI, the bitfile path is stored as absolute (you can see it in the project XML). This means that if you change the project location by either:

 

  • Moving machines, or
  • Checking the project in and out of source code control on different machines,

you have to recompile the FPGA to use VI mode or run interactively. It seems the bitfile could be stored as a relative path, like all the VIs in the project.
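
For what it's worth, producing the relative form is trivial once the project directory is known; a small sketch with made-up paths:

```python
import ntpath  # Windows path semantics regardless of the host OS

# Hypothetical paths: the absolute bitfile location recorded in the project XML
# and the directory containing the .lvproj file.
bitfile_abs = r"C:\work\MyProject\FPGA Bitfiles\MyTarget.lvbitx"
project_dir = r"C:\work\MyProject"

# The relative form survives moving machines or checking the project out of
# source code control at a different path.
bitfile_rel = ntpath.relpath(bitfile_abs, start=project_dir)
print(bitfile_rel)  # FPGA Bitfiles\MyTarget.lvbitx
```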

 

Cheers,

James

Per NI Applications Engineering, "If you intend to run multiple compiles in parallel on the [Linux] server then yes, you will need the Compile Farm Toolkit running on a Windows machine to handle the parallel workers."  I would like NI to support the FPGA Compile Farm Toolkit on Linux, so I don't need a dedicated Windows server to outsource compiles to workers.

When I build a new bitfile for my project, I sometimes (shock horror) make mistakes and bring the whole house of cards crashing down.

 

In situations like that, I would love to have the last version of the bitfile available for re-testing.  Ideally, I could specify pre- and post-build options for my compilation where I can define my own automatic renaming and archiving scheme, so that I no longer need to do painful re-compiles just to revert my code.
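
Until such hooks exist, a small script run after each build can at least keep the previous bitfile around; the paths below are assumptions, and in practice it would be triggered manually or by a wrapper around the build.

```python
import shutil
import time
from pathlib import Path

# Hypothetical locations; adjust to the build specification's output path.
BITFILE = Path(r"C:\work\MyProject\FPGA Bitfiles\MyTarget.lvbitx")
ARCHIVE = BITFILE.parent / "archive"

def archive_last_bitfile():
    """Copy the current bitfile into a timestamped archive before the next
    compile overwrites it, so the previous version stays available."""
    if not BITFILE.exists():
        return
    ARCHIVE.mkdir(exist_ok=True)
    stamp = time.strftime("%Y%m%d_%H%M%S")
    shutil.copy2(BITFILE, ARCHIVE / f"{BITFILE.stem}_{stamp}{BITFILE.suffix}")

if __name__ == "__main__":
    archive_last_bitfile()
```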

 

I am aware that this probably applies to more than FPGA, but here the compilation times are more prohibitive and I feel the need is greater.

Sometimes I just want to compile a lot of bitfiles (be it for a release or a debugging test case), and I have to right-click each and every build spec and choose "Build", then wait about 10 seconds and do the same again for the next build spec.

 

How about being able to select multiple build specs, choose "Build Selection", and have time to go for lunch while the PC queues up all the compilations?

 

I don't use a compile farm and everything is done locally, but at least the queuing could be automated.

 

Shane.

When working with CLIP-generated clocks we need good UCF files for proper compilation control (something we now have after WEEKS of debugging).

 

At the moment the UCF files MUST be located in Users\Public\Documents\National Instruments\FlexRIO\IOModules for the code to work, even though all other CLIP-relevant files can be located anywhere.

 

Please let us use the UCF file located in the same directory as the CLIP we're using; otherwise we'll end up with cross-linking nightmares between users who don't have the right version in their local folder.

 

Shane