09-18-2008 11:32 AM
Hello, I was wondering the normal time for LabVIEW to compile a program for cRIO FPGA target.
It has been taking 15 minutes per compile. I was wondering whether this is normal or whether there
is something I could do to make it faster.
- Thank you
09-18-2008 02:01 PM
I am really familiar with FPGAs in general, but not with the NI boards specifically. I actually haven't had a chance to use them yet, but I thought I could give you an explanation anyway. If I write something wrong, hopefully someone will correct me.
In case you don't already know: once your code is written in LabVIEW, before it can run on the FPGA it has to be compiled, or rather transformed, into a series of logic elements. The process starts by translating the LabVIEW code into VHDL (or Verilog, I don't know which one they use), and then the Xilinx tools (Xilinx being the company that makes the FPGA) transform this file into a complex series of logic operations (down to the level of AND and OR gates, basically).
On simple code this operation can take only a few seconds, but once you start using most of the resources in the FPGA (> 75%), it becomes really hard for the software to place and route everything inside the chip, and the job of the placer and optimizer tools becomes tedious and long. I have worked on many projects in the past where the compile time was over an hour, though PCs have become more powerful since then! So 15 minutes is entirely plausible, especially if you are using the bigger Virtex and more than 75% of the FPGA resources. Take a look at the report files generated by the Xilinx compiler to see the details!
As for solutions, I know that at the VHDL level you can force sections of the code to be placed in a specific way (i.e., compiled once, with that placement reused every time), but I don't know if you have those options in LV.
Olivier
09-18-2008 04:20 PM
This is the compile report of the code I am working with:
Status: Compilation successful.
Compilation Summary
-------------------
Logic Utilization:
Number of Slice Flip Flops: 5,531 out of 40,960 13%
Number of 4 input LUTs: 5,475 out of 40,960 13%
Device Utilization Summary:
Number of BUFGMUXs 2 out of 8 25%
Number of LOCed BUFGMUXs 1 out of 2 50%
Number of External IOBs 164 out of 333 49%
Number of LOCed IOBs 164 out of 164 100%
Number of MULT18X18s 14 out of 40 35%
Number of Slices 3874 out of 20480 18%
Number of SLICEMs 4 out of 10240 1%
Clock Rates: (Requested rates are adjusted for jitter and accuracy)
Base clock: 40 MHz Onboard Clock
Requested Rate: 40.408938MHz
Theoretical Maximum: 54.191730MHz
Base clock: MiteClk (Used by non-diagram components)
Requested Rate: 33.037101MHz
Theoretical Maximum: 61.406202MHz
Start Time: 9/18/2008 12:13:18 PM
End Time: 9/18/2008 12:29:22 PM
This is the compile report if I delete most of the code.
It only has a while loop, an I/O pin read and write, and an inverter.
Status: Compilation successful.
Compilation Summary
-------------------
Logic Utilization:
Number of Slice Flip Flops: 583 out of 40,960 1%
Number of 4 input LUTs: 1,021 out of 40,960 2%
Device Utilization Summary:
Number of BUFGMUXs 2 out of 8 25%
Number of LOCed BUFGMUXs 1 out of 2 50%
Number of External IOBs 164 out of 333 49%
Number of LOCed IOBs 164 out of 164 100%
Number of Slices 671 out of 20480 3%
Number of SLICEMs 4 out of 10240 1%
Clock Rates: (Requested rates are adjusted for jitter and accuracy)
Base clock: 40 MHz Onboard Clock
Requested Rate: 40.408938MHz
Theoretical Maximum: 85.113627MHz
Base clock: MiteClk (Used by non-diagram components)
Requested Rate: 33.037101MHz
Theoretical Maximum: 70.566650MHz
Start Time: 9/18/2008 2:00:03 PM
End Time: 9/18/2008 2:12:19 PM
09-18-2008 04:32 PM
ecw wrote:Hello, I was wondering the normal time for LabVIEW to compile a program for cRIO FPGA target.
It has been taking 15 minutes per compile. I was wondering whether this is normal or whether there
is something I could do to make it faster.
Taking 15 minutes per compile is normal. Depending on how much logic you have in your FPGA VIs, that number could grow to an hour or more, but it will never be less than several minutes.
The main things you can do to make it faster are to have at least 2 GB of RAM and a fast processor. Disabling anti-virus software might help a bit, but not much.
The other thing you can do, to avoid so many compilations, is to run your code in what is called emulation mode (run the code on the development computer), which is a setting of the FPGA Target item in the LV Project. That way you can run your FPGA code on Windows and verify some aspects of it. You will not get real I/O, but if you are using LV 8.6 you can choose either to get random data from I/O nodes or to simulate the I/O yourself. I suggest you refer to the documentation for more information on how to use those capabilities of LV FPGA.
09-19-2008 09:14 AM
Thank you, this brings another question:
As far as I know the FPGA cannot intrinsically do floating point math.
However, it is running at 75 kHz, which gives me the precision to work with PWM signals.
The FPGA part compiles in 10-15 minutes.
In Scan mode I could do floating-point math, but it runs at about 100 Hz. That takes only
one minute to compile.
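As an aside, the usual workaround for the lack of floating point on an FPGA is fixed-point math: scale your values into integers once, then do all the per-cycle arithmetic with integer multiplies and shifts. The Python sketch below is just an illustration of the idea (not LabVIEW, and not what the compiler generates); the 16-bit scale and the 40 MHz / 75 kHz numbers are example values taken from this thread.

```python
# Illustrative sketch in Python (not LabVIEW): fixed-point math as a
# substitute for floating point on an FPGA. FRAC_BITS and the clock/PWM
# numbers below are example values, not anything LabVIEW prescribes.

FRAC_BITS = 16            # fractional bits; pick based on required precision
SCALE = 1 << FRAC_BITS    # 2**16 = 65536

def to_fixed(x: float) -> int:
    """Quantize a float into a fixed-point integer (done once, off-FPGA)."""
    return int(round(x * SCALE))

# Compute a PWM duty count using only integer operations, as the FPGA would.
period_ticks = 533                    # 40 MHz clock / 75 kHz PWM ~= 533 ticks
duty_fixed = to_fixed(0.25)           # 25% duty cycle -> 16384
duty_ticks = (duty_fixed * period_ticks) >> FRAC_BITS  # integer mul + shift

print(duty_ticks)   # → 133 (0.25 * 533, truncated)
```

The multiply-and-shift maps directly onto the MULT18X18 blocks listed in the compile reports above, which is why fixed-point code stays cheap and fast on the chip.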
I was wondering how to assign some parts of the code to run in the FPGA and others in Scan mode, because there are only a few measurement functions for which I need the FPGA; the other parts could run slower. I think I should be able to compile some code for the FPGA, leave it there, and then work with the code in Scan mode without recompiling the FPGA. Is this the best way of doing it, and how do I configure it?
09-19-2008 12:30 PM
Hi,
In 8.6, it is actually possible to have both an FPGA VI and scan engine functionality. The catch is that you have to decide which modules to run with the scan engine and which modules to service with the FPGA VI. You cannot manipulate certain I/O points on a module with the FPGA and others with the scan engine. This may be a showstopper for you, but if not, it's fairly easy to get into this dual mode.
First, add your chassis to your cRIO target, then choose c-series modules for the chassis. These modules will use the scan engine. Any other modules not included in the chassis will not be used initially. Once the chassis is configured and in the project with c-series modules, you can right-click the chassis and add an FPGA target. You can then add the remaining c-series modules to this new RIO target.
You can build an FPGA VI for the c-series modules attached to the FPGA target just as you would in 8.5, achieving the same loop rates. And you can continue to use the scan engine variables with the scan engine enabled modules.
09-19-2008 11:42 PM
I think I am able to use this dual mode. In the FPGA code, I placed a control and an indicator for a digital I/O pin so that the code in the Chassis could read and write to it. The FPGA itself is looping at 75 kHz. The I/O pin that is indirectly controlled by the Chassis shows that the Chassis's while loop is running at 37 kHz. The code for the Chassis compiles in an instant.
Based on your comments, I assume that when I put more code in the Chassis's while loop the rate will start to drop.
- Thank you.
09-22-2008 11:32 AM
Hi,
I'm not entirely sure what you mean by chassis loop. We usually define things as an FPGA VI and a Real-Time VI (sometimes called the host VI when the hardware runs headless). Judging by your statements, I assume your Chassis code is the Real-Time VI, in which case you are correct: as you add functionality to the loop that currently reads and writes to the FPGA VI (which is in turn controlling your DIO pin), the loop rate will slow down. How much it slows down depends on the operations. If it slows down too much, you could break the functionality in that loop into two loops. One would run your time-critical reads and writes to the FPGA VI, while the other would do any calculation you require. You could then use local variables or Real-Time FIFOs to communicate between the two loops.
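The two-loop split described above is the classic producer/consumer pattern. As a rough illustration only, here is the same idea in Python, with threads standing in for the two LabVIEW loops and a `queue.Queue` standing in for the Real-Time FIFO (the function names and sample values are made up for the sketch):

```python
# Illustrative sketch (Python threads standing in for LabVIEW loops):
# a time-critical loop hands raw readings off through a FIFO so a slower
# loop can do heavier calculation without dragging the fast loop down.
import queue
import threading

fifo = queue.Queue(maxsize=64)   # stands in for a Real-Time FIFO

def time_critical_loop(samples):
    # Fast loop: in LabVIEW this would read the FPGA VI's indicator,
    # write its control, and enqueue readings for later processing.
    for value in samples:
        fifo.put(value)          # hand off to the slow loop
    fifo.put(None)               # sentinel: no more data

def processing_loop(results):
    # Slow loop: dequeues readings and does the expensive calculation.
    while True:
        value = fifo.get()
        if value is None:
            break
        results.append(value * 2.0)   # placeholder for real processing

results = []
producer = threading.Thread(target=time_critical_loop, args=([1, 2, 3],))
consumer = threading.Thread(target=processing_loop, args=(results,))
producer.start(); consumer.start()
producer.join(); consumer.join()
print(results)   # → [2.0, 4.0, 6.0]
```

The key design point is the same in both environments: the fast loop never waits on the slow computation, only on the (cheap) FIFO hand-off.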