07-04-2012 05:15 PM - edited 07-04-2012 05:19 PM
If you're concerned with hitting the memory manager when resizing the array, consider sending #ch and block size as the first elements of the array, and updating the 4x100 array in place (like the screenshot, except connecting the input to output tunnels in the False case). The consumer could be programmed to read only the valid data, though the queue payload would potentially contain stale data as well. If you approve of this approach, you might use RT FIFOs instead of queues. This still won't prevent calling the memory manager if the number of enqueues starts to grow, but you can specify a maximum queue size for that. Thoughts?
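For readers outside LabVIEW, the fixed-payload idea above can be sketched in Python. This is pseudocode for the pattern only, not LabVIEW API; all names and sizes are illustrative:

```python
# Sketch of the fixed-size-payload idea: the buffer is allocated once at
# its maximum size and updated in place, so the memory manager is never
# hit on the hot path. The first two elements act as a header telling the
# consumer how much of the payload is valid; anything beyond is stale.
MAX_CH, MAX_BLOCK = 4, 100

# Pre-allocate the full-size payload once (header + data region).
payload = [0] * (2 + MAX_CH * MAX_BLOCK)  # header: [num_ch, block_size]

def fill(payload, data, num_ch, block_size):
    """Write the header and the valid samples; the rest is left stale."""
    payload[0] = num_ch
    payload[1] = block_size
    for ch in range(num_ch):
        for i in range(block_size):
            payload[2 + ch * MAX_BLOCK + i] = data[ch][i]

def consume(payload):
    """Read only the region the header declares valid."""
    num_ch, block_size = payload[0], payload[1]
    return [payload[2 + ch * MAX_BLOCK: 2 + ch * MAX_BLOCK + block_size]
            for ch in range(num_ch)]

fill(payload, [[1, 2, 3], [4, 5, 6]], num_ch=2, block_size=3)
print(consume(payload))  # → [[1, 2, 3], [4, 5, 6]]
```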
- Steve K
07-05-2012 09:32 AM
Hi Pie -
That's not a bad idea. I'm already using a maximum queue size to prevent it from reallocating on enqueue. My main question was whether pushing a 100x3 array element into a queue that was initialized with 100x4 array elements will cause a reallocation. I read in the LV Help that queues automatically preallocate in LVRT with the "Obtain Queue" prim, so I assume a queue of arrays will preallocate with elements of the size passed into the prim.
So will enqueuing a smaller array cause LV to reallocate that element? If so, I'll have to implement the suggestion you made. If not, then I may just leave the queue alone and enqueue whatever array I want.
I elected not to use RT FIFOs because of their general restrictiveness and inability to pass strings. (At some point the data gets flattened and sent over the network through a generic comm link component.) Since jitter isn't a concern in this application -- throughput and CPU overhead are the biggest concerns -- I decided to stick with queues and reuse some existing templates and components from past projects.
07-20-2012 02:19 PM
I was pretty surprised to find Memory Resize events on every Enqueue (and Dequeue) in my example on Windows.
The queue is initialized as 4x100xI32 = 1600B. You'll see in the attached log that enqueuing 3x100xI32 resizes -400, enqueuing 2x100xI32 resizes -800, and enqueuing 1x100xI32 resizes -1200. So LabVIEW on Windows pre-allocates queue elements based on the initial array dimensions, and resizes each when enqueuing. Truthfully, I would have lost a bet on this one.
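The logged deltas line up exactly with whole rows of the preallocated element. A quick check of the arithmetic (Python here is illustrative only, reproducing the byte math from the log):

```python
# An I32 is 4 bytes, so the preallocated element (4x100xI32) is 1600 B.
# Enqueuing an NxM array of the same type resizes the element to N*M*4 B,
# giving the negative deltas seen in the log.
I32_BYTES = 4
full = 4 * 100 * I32_BYTES            # 1600 B, preallocated element size
for rows in (3, 2, 1):
    delta = rows * 100 * I32_BYTES - full
    print(f"enqueue {rows}x100xI32 -> resize {delta:+d} B")
# → resize -400 B, -800 B, -1200 B, matching the attached log
```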
I don't expect the Queue operation to behave differently on RT, but if you need to be absolutely sure the behavior is the same, rather than using the work-around mentioned above, let me know.
- Steve K
07-20-2012 02:29 PM
Steve -
I opened a Service Request for this and have been getting help from Thomas Law by email. You might want to compare your notes with his.
07-20-2012 02:37 PM - edited 07-20-2012 02:45 PM
It looks like they're still working on SR xxxx167. I let Thomas know I'm available to chat about this. Would you post the answer from the AE when they have it? Others can learn from the forum; nobody can reuse an SR.
- Steve K
07-20-2012 02:40 PM
I don't know that the AE has "the answer" yet. I was asking in a roundabout way that you check in with him to review each other's investigations and sign off on the final conclusions. Two heads are better than one, especially when one of those heads has years of experience with LV.
04-08-2013 10:16 AM
Hi,
Resurrecting this thread because the discussion and content STILL appear to be valid.
In LV 2011 SP1, NI RIO-12, on a VxWorks cRIO-9014, Obtain Queue absolutely does NOT pre-allocate memory the way you would expect based on the documentation. This is consistent with the findings on Windows from a few years back and also relevant to the thread opener (though the previously posted work-around seems to work; see below).
While nobody seems to have filled in the results of the service ticket or any other comments, I found this thread helpful in my investigation of what appeared to be a memory leak.
I was advised by NI support to set the max queue size to some large number I would never hit, to prevent memory allocations in my RT app. What I found is that the allocated memory ("used") continues to grow with EVERY enqueue, even if the queue is empty at the time of the enqueue operation. It grows until the RIO crashes, or until enough memory has been allocated to hold the defined max queue size.
This is in contradiction to what one would expect reading the help for the obtain Q, "Note (Real-Time Module) max queue size preallocates the specified number of elements in the queue when running on an RT target."
(In my application, enqueues during normal operation are spaced minutes apart, causing an out-of-memory crash after weeks or months of operation. The memory growth is slow enough that you would need to monitor memory every few days to see a trend that would alert you, unless you were already aware of and looking for this "leak".)
By enqueuing dummies up to the queue size limit (as suggested elsewhere in this thread), you force the out-of-memory crash and/or max allocation to happen first. I will likely open a new SR ticket for this on the basis of the documentation not matching the behavior. Also noteworthy: just because an AE says something, don't skip detailed testing. We're all human, and sometimes advice is given based on faulty input, such as inconsistent linkage between "live" behavior and the help-file documentation.
(Note that if you want to duplicate this, put a small (1000 ms?) delay between each queue "action block" you want to test and your subsequent "get memory used" call, as I found moving too fast (no delay) would report "stale" used-memory values. 1000 ms is certainly longer than it needs to be, but I didn't want to waste time figuring out the refresh rate of the memory-reporting call.)
Based on this, for my use case (where my determinism is measured in minutes, so a "small" jitter of seconds is OK), I will set my max queue size to something really small (e.g. 5), pre-fill the queue, flush, then set a 5 to 10 second enqueue timeout throughout my application. (My consumer runs at a sub-second rate, so if the enqueue ever has to wait, something is probably wrong anyway.)
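The pre-fill-then-flush pattern above can be sketched in Python. Note that CPython's `queue.Queue` does not preallocate element storage the way a LabVIEW RT queue does; the sketch only shows the control flow of the pattern, and the names and sizes are illustrative:

```python
import queue

MAX_DEPTH = 5             # deliberately small maximum queue size
ENQUEUE_TIMEOUT_S = 5.0   # generous timeout; consumer runs sub-second

q = queue.Queue(maxsize=MAX_DEPTH)

# Force all growth up front: fill to the cap with dummy elements,
# then flush so the queue starts empty but already at full size.
dummy = [[0] * 100 for _ in range(4)]
for _ in range(MAX_DEPTH):
    q.put(dummy)
while not q.empty():
    q.get()

def produce(element):
    """Enqueue with a timeout; a timeout means the consumer stalled."""
    try:
        q.put(element, timeout=ENQUEUE_TIMEOUT_S)
    except queue.Full:
        # If we ever wait this long, something is wrong downstream.
        raise RuntimeError("consumer stalled: enqueue timed out")
```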
04-08-2013 11:11 AM
I will set my max queue size to something really small (e.g. 5), pre-fill the queue, flush, then set a 5 to 10 second enqueue timeout throughout my application.
That's exactly what I do in both Windows and RT; many times explicit code is better than relying on implicit features, at least for legibility and ease of debugging.
Regarding the Obtain Queue's memory allocation behavior on RT, note that RT targets are generally slow and you can actually watch the memory manager allocating memory for the queue in real time. This is almost certainly the behavior you observed in your test; you seem to describe it here without realizing what's really happening:
What I found is that the allocated memory ("used") continues to grow with EVERY enqueu even if the queue is empty at the time of the enqueue operation. It grows until the RIO crashes or until enough memory has been allocated to hold the max of the q size defined.
(If the RIO crashed, you asked for too much. There's no sanity check that throws an error, but there is a VI that tells you how much memory is available for allocation...maybe you should call that before deciding how much to request.) The Obtain Queue function will return before the memory has been fully allocated, but the allocation is definitely happening in response to that function having been called. Maybe the real discrepancy between your expectations and the actual behavior is that you expect all allocation to finish before the function returns?
04-08-2013 11:35 AM - edited 04-08-2013 11:51 AM
Attached is a VI that can be used to test this behavior. It's a condensed-down program with no external dependencies (at the cost of readability), and certain structures may appear not to make much sense, but they did before I stripped out proprietary stubs and data structures/constants. Please excuse the poor layout, as my focus has been to identify the root cause of growing memory in one of our customer's apps.
The TDMS part is optional; it's basically a "write stats to file" that runs every 50th iteration of the memory-polling loop. I am aware I could make this more efficient by pre-allocating those shift-register arrays and using replace instead of build, but these operations are in the noise compared to what the queue is doing, and this was not necessary for this test app. Not doing so also lets me avoid tracking the "current" index and special-casing TDMS writes on exit.
I've linked this discussion to the SR AE and I'll forward relevant information from NI as and when I get it.
Thanks,
Kjell
04-08-2013 11:37 AM
David_Staab,
That is definitely a possibility, but one that does not seem to be supported by the RIO that I have running the "real" application. It's some three weeks in and still growing, and the growth has been (seemingly) traced to "enqueue element" during accelerated testing iterations.
I'll go back to my test VI and check, but yes, I would expect that Obtain Queue would not return until the allocation has finished (during first call).