LabVIEW Idea Exchange

Nate_Moehring

new primitive to deallocate unused memory from queues

Status: New

[Attached image: Deallocate Queue Memory.PNG]

 

Many people do not realize that memory allocated by a queue is never deallocated until the queue is destroyed or the call chain that created the queue stops running.  This is problematic for queues that are opened at the beginning of the application and used throughout, because every queue retains the allocation from its maximum size, so the application can end up holding a lot of memory that is unused or seldom used.
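For anyone who hasn't seen this bite, here's a rough way to picture the behavior (a toy Python model, since G doesn't paste into a forum post; this is not LabVIEW's actual queue implementation): the backing store doubles whenever the queue fills up, and a dequeue never gives any of it back.

```python
class QueueModel:
    """Toy ring-buffer queue: the backing list doubles when it fills up
    and, like the behavior described above, never shrinks on dequeue."""

    def __init__(self, initial_capacity=4):
        self._buf = [None] * initial_capacity
        self._head = 0
        self._count = 0

    def __len__(self):
        return self._count

    @property
    def allocated_slots(self):
        return len(self._buf)          # stays at the high-water mark

    def enqueue(self, element):
        if self._count == len(self._buf):
            # Grow: unroll the ring into a new list twice the size.
            self._buf = ([self._buf[(self._head + i) % len(self._buf)]
                          for i in range(self._count)]
                         + [None] * len(self._buf))
            self._head = 0
        tail = (self._head + self._count) % len(self._buf)
        self._buf[tail] = element
        self._count += 1

    def dequeue(self):
        if self._count == 0:
            raise IndexError("dequeue from an empty queue")
        element = self._buf[self._head]
        self._buf[self._head] = None   # the element goes, the slot stays
        self._head = (self._head + 1) % len(self._buf)
        self._count -= 1
        return element


# Backlog builds while the consumer lags, then is fully serviced:
q = QueueModel()
for i in range(1000):
    q.enqueue(i)
while len(q):
    q.dequeue()
print(q.allocated_slots)   # 1024 slots still allocated for an empty queue
```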

 

Consider a consumer that occasionally lags behind, so the queue grows tremendously.  Then the consumer picks back up and services the backlog in a short period of time.  The queue is unlikely to be that large again for quite a while, but unfortunately none of that memory is deallocated.

 

I'd like a primitive that deallocates all of that memory, shrinking the queue down to just its current number of elements.  Since the queue won't need that much memory again for a long time, and it will auto-grow again as needed, I'd like to recover that memory now instead of waiting for the application to be restarted (which is currently the only time the queue is created).
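As a sketch of what the proposed primitive could do, continuing the toy QueueModel above (the name deallocate_unused is made up here, and again this is only a model, not the real queue internals): shrinking is just rebuilding the backing store around the elements still in the queue.

```python
class ShrinkableQueueModel(QueueModel):
    def deallocate_unused(self):
        """Hypothetical 'Deallocate Queue Memory': drop the backing store
        down to just the elements currently in the queue."""
        live = [self._buf[(self._head + i) % len(self._buf)]
                for i in range(self._count)]
        self._buf = live or [None]     # keep at least one slot allocated
        self._head = 0
```

The next burst simply re-triggers the normal doubling growth, which is the enqueue-performance cost acknowledged further down.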

 

The alternative is to add code that periodically force-destroys the queue and have the consumer gracefully handle the error and open a new reference, then replicate that change for every queue.  That seems messy and puts too much responsibility on the consumer.  I'd rather just periodically call a 'Deallocate Queue Memory' primitive on the queue reference within the producer, perhaps once every few minutes, to be sure none of the queues is needlessly holding a large amount of memory.
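For comparison, here is roughly what that workaround looks like when modelled with Python's stdlib queue (QueueBroker, recycle and the rest are invented for the sketch): the producer side periodically swaps in a fresh queue so the old one's storage can be reclaimed, and every consumer needs logic to notice and re-fetch.

```python
import queue
import threading

class QueueBroker:
    """Models the workaround: periodically discard the whole queue so its
    storage can be reclaimed, and hand out the current queue on request."""

    def __init__(self):
        self._lock = threading.Lock()
        self._q = queue.Queue()

    def current(self):
        with self._lock:
            return self._q

    def recycle(self):
        # Swap in a fresh queue; once consumers drop the old one, its
        # backing storage can be freed.  In practice you would drain any
        # leftover elements from the old queue first.
        with self._lock:
            old, self._q = self._q, queue.Queue()
        return old

def consumer(broker, handle, stop):
    """Consumer that tolerates its queue being retired underneath it."""
    q = broker.current()
    while not stop.is_set():
        try:
            item = q.get(timeout=0.5)
        except queue.Empty:
            q = broker.current()   # pick up the replacement queue, if any
            continue
        handle(item)
```

Even in this toy form the objection is visible: every consumer carries the re-fetch logic, and the producer runs a timer purely to manage memory.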

 

I believe this will:

  • Improve performance in other areas of the application because less time will be spent looking for available memory to allocate.
  • Reduce the chance of Out of Memory errors because large blocks of memory will not be held [much] longer than they are needed.
  • Help counter the common perception that LabVIEW applications are memory hogs or leaky.

I realize this will hurt enqueue performance when the queue begins to grow quickly again, but this area is not a bottleneck for my application.

 

Thanks!

Nate Moehring

SDG
12 Comments
SteenSchmidt
Trusted Enthusiast

One use case I run into often is the queue queueing up data until the consumer is ready - until a connection has been made, until some modules finish loading or similar. And when the consumer is ready, it's much faster than the producer(s), so it's always able to keep up. But the initial memory growth of the queue will never be deallocated. In some applications this could be hundreds of MB of memory.

 

I'd use a deallocate queue primitive in dataflow, for a single queue (queue refnum in), within the consumer - basically like I'd call the dequeue or the queue status primitives. In my use case it's the consumer that knows when it is able to hand off the data waiting in its queue (this consumer then also becomes a producer on TCP/IP, or into the event subsystem, or onto another queue, or into a file or such).
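In code terms, reusing the hypothetical ShrinkableQueueModel sketched earlier in the thread (so purely illustrative), the placement described here is just the last step of servicing the backlog:

```python
def service_backlog(q, hand_off):
    """Consumer-side placement: drain the waiting elements, hand them off
    (to TCP/IP, another queue, a file, ...), then release the now-unused
    backing storage via the hypothetical deallocate call."""
    while len(q):
        hand_off(q.dequeue())
    q.deallocate_unused()
```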

 

I'm not a fan of automatic memory deallocation either, as I sometimes initialize (pre-allocate) queues to minimize jitter (on platforms where I can't use RT FIFOs). Also, a free-floating primitive like the Request Deallocation primitive gives me the creeps. There's too much black-box, wave-a-wand feeling about that. ;-)

 

Cheers,

Steen

CLA, CTA, CLED & LabVIEW Champion
Nate_Moehring
Active Participant

This is a good point, Steen.  I have this use case too, and you make a good argument that the deallocate queue primitive is just as desirable (or even more so) in the consumer loop as in the producer loop.  In either case, the important point is that it's a simple primitive that does nothing more than its name says, and it adds no overhead to the queues because it only runs when called.  Thanks for chiming in!

 

Nate

Nate Moehring

SDG