03-10-2020 07:45 AM - edited 03-10-2020 07:56 AM
@NAGINENI wrote:
Dear Intaris,
As I mentioned in the reply,
If we use an FPGA clock rate of 40 MHz, the LabVIEW program runs with a cycle time of 25 nanoseconds (one clock cycle). If I instead execute my LabVIEW VI in the cRIO RT (Real-Time) environment, what is the maximum time (one clock cycle) it will take to run the VI?
You're definitely trying to get the wrong question answered! There is no inherent clock cycle in your RT code!
The code will run as fast as you make your loops iterate, provided the operations inside each loop can execute within that interval. If you run a complicated analysis routine inside a loop and that routine takes 10 seconds to process, then the loop will iterate at best every 10 seconds, no matter what timing you specify inside it. LabVIEW can still run a different loop in parallel at a few-ms interval. But don't expect RT to make loop timing accurate to 1 ms in every case: it can run that fast if you don't do much processing inside the loop, but even RT cannot guarantee that it will always complete within 1 ms. The same applies to Windows computers, except there you get at best a reasonably repeatable interval in the range of several tens of ms, and absolutely no hard guarantees even for 1 s intervals.
But the clock cycle of your program is whatever you program into your application, and each loop in a LabVIEW program can run at its own clock cycle, which you have to control yourself somehow (for instance by putting an explicit delay inside the loop so it does not run at full speed, or by using a Timed Loop with an explicit interval).
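LabVIEW loops are graphical, so here is only a rough textual analogy: a minimal Python sketch of the "explicit delay inside the loop" idea, similar in spirit to what a Timed Loop does with an explicit interval. The function name `timed_loop` and the 10 ms interval are just illustration choices, not anything from LabVIEW's API.

```python
import time

def timed_loop(interval_s, work, iterations):
    """Run `work` at a fixed interval, analogous to a LabVIEW Timed Loop.
    If `work` takes longer than the interval, the deadline is missed and
    the loop simply iterates as fast as the work allows."""
    next_deadline = time.monotonic()
    for _ in range(iterations):
        work()
        next_deadline += interval_s
        remaining = next_deadline - time.monotonic()
        if remaining > 0:
            # The explicit delay is what keeps the loop from free-running.
            time.sleep(remaining)

start = time.monotonic()
timed_loop(0.01, lambda: None, 10)  # 10 iterations at a ~10 ms interval
elapsed = time.monotonic() - start
print(round(elapsed, 2))  # roughly 0.1 s on a lightly loaded machine
```

The same caveat from above applies: this gives you a best-effort interval, not a hard real-time guarantee, and certainly not on a desktop OS.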
If you program a loop in LabVIEW that has no inherent timing (no delays of any kind, and also no asynchronous communication routines such as VISA Read or TCP Read), that loop will run at maximum speed, which depends only on the actual code processed inside it. If that processing takes 1 ms, the loop will iterate about 1000 times a second; if it takes 10 µs, the loop will run at around 100 kHz. How much processing time a specific routine takes depends very much on what you do in that routine, and it needs to be measured individually before you can estimate how fast a loop can iterate.
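The measurement itself is simple in any language: time a number of iterations of the free-running loop and divide. Here is a small Python sketch of that idea (the function name `measure_loop_rate` is made up for illustration; in LabVIEW you would do the equivalent with Tick Count nodes around the loop).

```python
import time

def measure_loop_rate(work, iterations):
    """Estimate how fast a loop with no explicit timing iterates:
    the rate is simply iterations divided by total processing time."""
    start = time.monotonic()
    for _ in range(iterations):
        work()
    elapsed = time.monotonic() - start
    return iterations / elapsed  # iterations per second

# A loop body that takes ~1 ms should iterate at somewhat under 1000 Hz
# (sleep overhead makes the real rate lower than the ideal figure).
rate = measure_loop_rate(lambda: time.sleep(0.001), iterations=100)
print(int(rate))
```

This is exactly the "measure it individually" step: swap in your real processing routine for the lambda and you get an upper bound on how fast that loop can iterate on your target.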
Your question seems to indicate that you are thinking more in terms of PLC ladder-logic execution, where you often have a specific cycle time in which the entire "input, process, output" chain executes once per iteration, often in the ms range. That does apply in some ways to the LabVIEW FPGA part of programming a cRIO device (although with sub-µs intervals), but RT programming is really more like programming a normal computer and needs to be viewed as such when making statements about how the code executes.