Real-Time Measurement and Control

Trying to understand the memory manager and time critical loops

Here are some simple questions regarding the memory manager (MM).  I'm asking in the hope that I can avoid a case of the dreaded priority inversion, where a time critical loop must wait for a low priority loop to finish using the memory manager before it can proceed...
  1. If I try to index into an array using an enumerated type (even an I32 representation), will the memory manager be invoked? (why do this? - I have a number of parameters that are convenient to pass as an array and can be easily re-defined by editing the enum.)
  2. If I first convert the enum to an I32, will this call the memory manager?
  3. If I "type cast" the enum to an I32, instead of "converting" it, is there any difference, i.e. does type casting require the memory manager, since I think it just flattens to string and then unflattens to a different data type (which would imply that unflattening also invokes the MM)?
  4. Are there any significant changes in LV8 memory management?
  5. Would the RT execution trace toolkit help answer these questions?
My goal is to develop a simple way to modify / access parameters in a time critical loop.  I started with a type def'd cluster in a functional global variable which could be accessed via bundle/unbundle, but then thought it would be even more straightforward to pass a generic array and index into it using an enum (see my first question).  Part of this move toward arrays was because I found that clusters still cannot be passed through RT FIFOs, even with the new shared variables, but arrays can be.  I even thought about type casting clusters to byte arrays to pass through an RT FIFO and type casting back to a cluster (see question 3) - but that was probably too clever by half, if it invokes the MM.
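For anyone who thinks better in text than in G, the enum-indexed parameter array idea maps roughly onto the C sketch below (the names are made up purely for illustration); the open question from question 1 is whether LabVIEW's equivalent of this plain array index involves the MM at all.

/* Hypothetical C analogy of an enum-indexed parameter array.
 * Adding a parameter only means extending the enum and the array. */
#include <stdio.h>

typedef enum {
    PARAM_SETPOINT,
    PARAM_GAIN_P,
    PARAM_GAIN_I,
    PARAM_COUNT               /* keeps the array size in sync with the enum */
} param_id_t;

static double params[PARAM_COUNT];    /* allocated once, outside the loop */

int main(void)
{
    params[PARAM_SETPOINT] = 5.0;
    params[PARAM_GAIN_P]   = 0.8;

    /* Indexing with the enum is a plain array access; in C this never
     * allocates, which is the behavior I am hoping the LabVIEW compiler
     * also achieves. */
    printf("setpoint = %f\n", params[PARAM_SETPOINT]);
    return 0;
}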

The RT communications wizard addresses these issues in kind, but I don't want to have 30 RT FIFOs in my time critical loop passing parameters and data in and out.

Has anyone else been looking at these issues and come up with a good generic solution / architecture for passing integers, floats, events, and other state info in a way that is more easily understood by non-RT folk?
Message 1 of 7
Hi ArtDee,

    To see whether the memory manager is being invoked, you can use Tools>>Advanced>>Show Buffer Allocations. This places a small black dot at every point where a buffer is allocated - for example, when type casting an enum into an I32.

Richard

Field Sales Engineer, New Jersey
National Instruments
Message 2 of 7
Richard,
I've learned a few things since my last post - my current understanding is as follows:
  1. I could use a strict type def'ed ring control with an I32 representation instead of an enum to avoid the need for coercion, and the labels will remain with the constants. 
  2. Using the conversion functions can bypass the memory manager, but using the type cast function will definitely invoke it - this is not intuitive and I would like to have it verified (see the sketch after this list). 
  3. Coercion dots (not sure about the buffer allocation tool) can be "fake" and may not really indicate that the memory manager was needed - specifically, indexing into an array should not require MM intrusion -- the compiler is at least smart enough to do this alone.
  4. The memory manager is a very complicated module that is not well understood by most mortals (even NI support people) - it is significantly different (less sophisticated) in the RT systems, largely because garbage collection is difficult to do in a deterministic way. 
  5. The RT execution trace tool seems too primitive / limited to give much insight into MM details - not very much documentation on this subject and playing with the tool was not very instructive regarding these specific questions.
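To make point 2 concrete for non-G readers, here is my mental model sketched as hypothetical C (not LabVIEW code, and the names are invented): a numeric conversion behaves like a plain cast, while Type Cast behaves as if it flattened the value to a byte string and unflattened it again, and that intermediate buffer is what would involve the MM.

/* Hypothetical C analogy of point 2 - conversion vs. Type Cast. */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

int32_t convert_value(int32_t e)
{
    return (int32_t)e;              /* plain conversion: no buffer needed */
}

int32_t typecast_via_flatten(int32_t e)
{
    /* "flatten to string, then unflatten": the temporary byte buffer is
     * what would drag the memory manager into the time-critical path. */
    uint8_t *flat = malloc(sizeof e);
    int32_t result;

    if (flat == NULL)
        abort();
    memcpy(flat, &e, sizeof e);
    memcpy(&result, flat, sizeof result);
    free(flat);
    return result;
}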
Any further verification of these points would be appreciated.  It seems like a collection of these issues as applied to LabVIEW RT would be a useful document.
Art
Message 3 of 7
Hi Art,

Conversion, even in a case such as I32 -> U32, will invoke the memory manager. Tools>>Advanced>>Show Buffer Allocations will expose all the areas where the memory manager is needed. As you pointed out, it is best practice to eliminate these as much as possible. Coercion dots are identical to the conversions found on the conversion palette; they are shown as dots specifically so that desktop OS or Real-Time OS programmers can see where memory management is being used.

While the memory manager is indeed complicated, as LabVIEW programmers we don't have to interact with it explicitly. That is, we have no ability to control when garbage collection happens, nor can we directly observe these events that are beyond our control. We can, however, see via the advanced tool and the Trace Toolkit when the MM is in use. The Execution Trace Tool lets you see when, and for how long, it is in use, with very high time resolution. If priority inversion or increased jitter is due to the MM, this will be apparent from the tool. Mutexes can be marked with flags indicating when they are obtained.

The Buffer Allocation tool will answer your questions. While the RT MM is different from the one used on Windows, LabVIEW is the common denominator in this case, and buffer allocations happen in the same places on both systems. Good programming practices on Windows or Mac OS are even more critical on RT systems, but they are the same set of rules. Once you have trimmed out all the buffer allocations you possibly can, the Trace Tool will show you the results in fine (time) detail.
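As a rough text-language analogy of what "trimming out buffer allocations" means in practice (hypothetical C, not LabVIEW): create every buffer once before the time-critical loop and only overwrite elements in place inside it - on the diagram this corresponds to initializing arrays outside the loop and using Replace Array Subset inside.

/* C analogy of "preallocate outside, overwrite in place inside". */
#include <stddef.h>
#include <string.h>

#define N_SAMPLES 1024

static double acq_buffer[N_SAMPLES];    /* storage created once, up front */

void control_iteration(const double *new_samples, size_t count)
{
    if (count > N_SAMPLES)
        count = N_SAMPLES;

    /* Overwrite existing storage instead of building a new array each
     * iteration, so the iteration itself never asks the MM for memory. */
    memcpy(acq_buffer, new_samples, count * sizeof(double));

    /* ... deterministic processing on acq_buffer ... */
}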

Happy optimizing, and let me know if you have more questions.


Richard

Field Sales Engineer, New Jersey
National Instruments
Message 4 of 7

I am optimizing my real-time application to avoid memory manager calls during execution. I have basically one main 1 ms timing loop that has to execute without interruption, i.e. without "lost" cycles. Other threads run in parallel that do not need such determinism.

To monitor the "lost" cycles, I use the "late iteration" indicator provided by the timing loop.

I use the Execution Trace Toolkit (ETT) intensively to track down all MM calls, and my application is much more stable after having cleaned up all array operations that produced the "green flags" visible on ETT indicating these MM calls.

Now my main concern is to remove all the additional "red flags" that appear in the 1 ms loop with ETT. They are related to "low level resource" access. What I discovered is that in some circumstances I have additional "red flags" inside the 1 ms loop that lead to priority inversion; in other words, the loop is interrupted because it needs a low-level resource that is not available. After much investigation, I deduced that this low-level access was due to updates of indicators placed on the front panel.

I have only one front panel open on the host PC, used to monitor the real-time application during debugging phases. The diagram associated with this front panel contains the 1 ms timing loop as well as some while loops with low-priority VIs. I guess that an indicator being updated in a while loop may lead to priority inversion in the timing loop. One important remark: this generally happens under high CPU usage (around 80%) and has never been observed otherwise (30% CPU during "idle" phases).

I have always heard from NI that front panels on a host PC used to monitor the real-time application on a remote RT controller did not affect its determinism. This seems not to be the case under heavy controller load. I understand this data has to be transferred to the host PC for visualization, and that may lead to extra resources being used. In my case, I just display basic information: a couple of boolean indicators as well as some numeric displays. I do not transfer array indicators.

How can I avoid such priority inversions that seem to be due to indicators on front panels? Is there any property available to tell LabVIEW that the indicator update can be skipped (like the "skip subroutine call" option for subroutine VIs)?

Message 5 of 7

I think you pretty much figured it out. The front panel should be treated as a shared resource, and you shouldn't put front panel controls or indicators in a deterministic loop. Instead you could use an RT FIFO, either the VIs or shared variables, to pass the data in and out of the deterministic loop from a lower priority loop. I know some of the shipping examples may have controls and indicators in their loops to avoid a more complicated diagram, but I also know that the DAQ RT examples have a note on the block diagram stating something to that effect.

As for front panel communication to a host PC not hurting determinism, that is still true, since that communication is done at normal priority. However, in the deterministic loop, accessing the controls and indicators isn't deterministic, which is where the problem is. As you noticed, it typically isn't very bad and you rarely see a long priority inversion, which is why most people don't run into it. Hope that helps.
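In text form, the pattern looks roughly like this (a minimal single-producer/single-consumer ring standing in for an RT FIFO; the names and sizes are made up): the time-critical loop pushes without ever blocking, and a normal-priority loop pops the data and drives the front panel.

/* Minimal lock-free SPSC ring as a stand-in for an RT FIFO. */
#include <stdatomic.h>
#include <stdbool.h>

#define FIFO_DEPTH 64u                /* power of two, fixed at init time */

typedef struct { double position; double error; bool fault; } display_data_t;

static display_data_t slots[FIFO_DEPTH];
static atomic_uint head = 0, tail = 0;   /* producer writes head, consumer writes tail */

/* Called from the time-critical loop: never blocks, drops data when full. */
bool fifo_try_push(const display_data_t *d)
{
    unsigned h = atomic_load_explicit(&head, memory_order_relaxed);
    unsigned t = atomic_load_explicit(&tail, memory_order_acquire);

    if (h - t == FIFO_DEPTH)
        return false;                 /* full: skip this update, stay deterministic */
    slots[h % FIFO_DEPTH] = *d;
    atomic_store_explicit(&head, h + 1, memory_order_release);
    return true;
}

/* Called from the normal-priority loop that updates the front panel. */
bool fifo_try_pop(display_data_t *out)
{
    unsigned t = atomic_load_explicit(&tail, memory_order_relaxed);
    unsigned h = atomic_load_explicit(&head, memory_order_acquire);

    if (t == h)
        return false;                 /* empty: nothing new to display */
    *out = slots[t % FIFO_DEPTH];
    atomic_store_explicit(&tail, t + 1, memory_order_release);
    return true;
}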

-JRA

Message 6 of 7
Thank you for your answer JR.
You confirm what I suspected. I will use data transfer VIs to visualize this information in a separate loop so as not to affect the determinism of my time critical loop.
Another trick I would use is to put the indicators in a case structure with a checkbox to allow their update upon request.
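In text form, that trick amounts to roughly the following (hypothetical C sketch, not LabVIEW code): the display update only runs when the operator has set the checkbox.

/* C analogy of the "update indicators only on request" case structure. */
#include <stdbool.h>
#include <stdio.h>

static volatile bool display_enabled = false;   /* driven by the "checkbox" */

/* Stand-in for the indicator update / data transfer that gets skipped
 * whenever the checkbox is off. */
static void publish_to_display(double latest_value)
{
    printf("latest value = %f\n", latest_value);
}

void iteration(double latest_value)
{
    /* ... time-critical work ... */

    if (display_enabled)
        publish_to_display(latest_value);
}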
Message 7 of 7