11-13-2006 07:21 AM
I am optimizing my real-time application to avoid Memory Manager calls during execution. It basically has one main 1 ms timing loop that must execute without interruption, i.e. without "lost" cycles. Other threads that do not need such determinism run in parallel.
To monitor the "lost" cycles, I use the "late iteration" indicator provided by the timing loop.
I use the Execution Trace Toolkit (ETT) intensively to track down all MM calls, and my application is much more stable after cleaning up all the array operations that produced the "green flags" ETT shows for these MM calls.
Now my main concern is to remove the additional "red flags" that ETT shows inside the 1 ms loop. They are related to "low-level resource" access. What I discovered is that, in some circumstances, these additional "red flags" inside the 1 ms loop lead to priority inversion; in other words, the loop is interrupted because it needs a low-level resource that is not available. After much investigation, I deduced that this low-level access was due to updates of indicators placed on the front panel.
I have only one front panel open, on the host PC used to monitor the real-time application during debugging. The diagram associated with this front panel contains the 1 ms timing loop as well as some while loops with low-priority VIs. I guess that an indicator being updated in a while loop may lead to priority inversion in the timing loop. One important remark: this generally happens under high CPU usage (around 80%) and has never been observed otherwise (30% CPU during "idle" phases).
I have always heard from NI that front panels on a host PC used to monitor a real-time application on a remote RT controller do not affect its determinism. This seems not to be the case under heavy controller load. I understand that these data have to be transferred to the host PC for visualization, which may consume extra resources. In my case, I display only basic information: a couple of Boolean indicators and some numeric displays. I do not transfer array indicators.
How can I avoid these priority inversions that seem to be caused by front-panel indicators? Is there a property that tells LabVIEW an indicator update can be skipped (like the "skip subroutine call if busy" option for subroutine VIs)?
11-13-2006 08:54 AM
I think you pretty much figured it out. The front panel should be treated as a shared resource, and you shouldn't put front panel controls or indicators in a deterministic loop. Instead you can use an RT FIFO, either the FIFO VIs or RT FIFO-enabled shared variables, to pass data in and out of the deterministic loop from a lower-priority loop. I know some of the shipping examples may have controls and indicators in their loops to avoid a more complicated diagram, but I also know the DAQ RT examples have a note on the block diagram stating something to that effect.
As for front panel communication with a host PC not hurting determinism, that is still true, since that communication is done at normal priority. However, inside the deterministic loop itself, accessing controls and indicators isn't deterministic, which is where the problem lies. As you noticed, it typically isn't very bad and you rarely see a long priority inversion, which is why most people don't run into it. Hope that helps.
-JRA