LabVIEW

Queues and memory allocation

Do you see any potential sources of jitter or "non-determinism" in how queues are used in the deterministic loop shown below? Assume that the Message object does not contain any resizable types (strings, arrays, etc.). Thank you for your comments!

 

[Attachment: Screen Shot 2013-04-06 at 1.21.26 AM.png]

Javier Ruiz - Partner at JKI
jki.net
vipm.io
Message 1 of 13

Hi jarc,

 

do you want to dequeue elements from the queue in two places?

Best regards,
GerdW


using LV2016/2019/2021 on Win10/11+cRIO, TestStand2016/2019
Message 2 of 13

Not really. The figure is not my specific use case, but the idea is:

1. I want to receive a message object via a queue on my deterministic loop.

2. The deterministic loop checks the first queue for messages and at the same time is doing something with its own copy of the message object.

3. The deterministic loop enqueues the message object to the second queue as needed.

4. The third loop is waiting for a message object from the deterministic loop.

 

The main question is: are the "read queue" and "lossy enqueue" functions allocating memory or causing other jitter issues at run-time? (The queues are created with a fixed size)

 

Thanks! 😃

Javier Ruiz - Partner at JKI
jki.net
vipm.io
Message 3 of 13

LabVIEW is pretty smart about allocating memory. With fixed-size queues and fixed-size message objects, memory allocation should take place the first time the queue is called.

 

Queues are pretty fast, so I doubt they would be causing any jitter issues. The Do Something block in the Deterministic Loop needs to run in less than 10 ms, of course. Unless you are running on a real-time OS, you could have jitter due to the OS preempting the LV process. This would be most noticeable in the Deterministic Loop.

It is likely that the Dequeue in the Deterministic Loop will occur ~10 ms later than the Enqueue in the Generate Message Loop. Both the Enqueue and the Dequeue may occur almost immediately after an iteration starts.

 

How will you be stopping your loops? As shown they are all infinite loops.

 

Lynn

Message 4 of 13

@johnsold wrote:

LabVIEW is pretty smart about allocating memory. With fixed-size queues and fixed-size message objects, memory allocation should take place the first time the queue is called.

 

Queues are pretty fast, so I doubt they would be causing any jitter issues. The Do Something block in the Deterministic Loop needs to run in less than 10 ms, of course. Unless you are running on a real-time OS, you could have jitter due to the OS preempting the LV process. This would be most noticeable in the Deterministic Loop.

It is likely that the Dequeue in the Deterministic Loop will occur ~10 ms later than the Enqueue in the Generate Message Loop. Both the Enqueue and the Dequeue may occur almost immediately after an iteration starts.

 

How will you be stopping your loops? As shown they are all infinite loops.

 

Lynn


Thanks for the answer, Lynn. The figure is just a 2-minute recreation of a very complex system (hence the infinite loops). The real system is actually running on RT, so I am not worried about OS jitter. The "do something" part is meant to run faster than the cycle time (the real cycle time is in the microsecond range); otherwise a "late" warning is activated (on the real system). In summary, then, the enqueue and dequeue inside the deterministic loop should not be causing jitter or taking a long time, right? With this I am looking for an alternative to RT FIFOs that allows me to use a class-typed queue and still keep determinism.

 

Thank you again!

 

 

 

Javier Ruiz - Partner at JKI
jki.net
vipm.io
Message 5 of 13

I have no RT experience, so cannot advise you in that regard.  The queue mechanism itself is probably not a problem.  If it is a concern, you may want to put together a realistic test to try to measure the jitter.
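
For reference, a minimal sketch of such a measurement in C (G block diagrams don't paste into a text post): sleep on an absolute schedule, timestamp every wake-up, and report the worst lateness. The 10 ms period, the iteration count, and the placeholder comment where the queue operation under test would go are all assumptions, not part of the original example.

#define _POSIX_C_SOURCE 200112L
#include <stdio.h>
#include <time.h>

#define PERIOD_NS  10000000L    /* 10 ms nominal loop period (placeholder) */
#define ITERATIONS 10000

int main(void)
{
    struct timespec deadline, now;
    long worst_ns = 0;

    clock_gettime(CLOCK_MONOTONIC, &deadline);
    for (int i = 0; i < ITERATIONS; i++) {
        /* ...the fixed-size enqueue/dequeue under test would go here... */

        /* advance the ideal schedule by one period and sleep until it */
        deadline.tv_nsec += PERIOD_NS;
        if (deadline.tv_nsec >= 1000000000L) {
            deadline.tv_sec++;
            deadline.tv_nsec -= 1000000000L;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &deadline, NULL);

        /* lateness of this wake-up relative to the ideal schedule */
        clock_gettime(CLOCK_MONOTONIC, &now);
        long late_ns = (now.tv_sec - deadline.tv_sec) * 1000000000L
                     + (now.tv_nsec - deadline.tv_nsec);
        if (late_ns > worst_ns)
            worst_ns = late_ns;
    }
    printf("worst-case lateness over %d iterations: %ld ns\n", ITERATIONS, worst_ns);
    return 0;
}

Running the same loop with and without the queue operations in place gives a rough upper bound on how much jitter the queue itself contributes.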

 

Lynn

Message 6 of 13

@johnsold wrote:

I have no RT experience, so cannot advise you in that regard.  The queue mechanism itself is probably not a problem.  If it is a concern, you may want to put together a realistic test to try to measure the jitter.

 

Lynn


I think this has been discussed a few times before. From a bit of searching, it seems the maximum queue size has to do with the queue logic, not with memory allocation. Some folks did benchmarks to prove it, but I could not find them.

Not sure about RT and OOP objects.

 

Hope this helps.

Message 7 of 13

@Bublina wrote:

@johnsold wrote:

I have no RT experience, so cannot advise you in that regard.  The queue mechanism itself is probably not a problem.  If it is a concern, you may want to put together a realistic test to try to measure the jitter.

 

Lynn


I think this has been discussed a few times before. From a bit of searching, it seems the maximum queue size has to do with the queue logic, not with memory allocation. Some folks did benchmarks to prove it, but I could not find them.

Not sure about RT and OOP objects.

 

Hope this helps.


I am not sure I understand what you mean by "the maximum queue size has to do with the queue logic, not with memory allocation". Could you explain a little further?

 

Thank you.

Javier Ruiz - Partner at JKI
jki.net
vipm.io
Message 8 of 13

Setting a maximum queue size does not pre-allocate space for that number of elements, it just limits the queue from growing beyond that size. You could still get jitter on the enqueue until you reach the maximum queue size. If you really need deterministic behavior, you should use an RT-FIFO (but I understand about the classes).
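
To make the distinction concrete, here is a rough sketch in C (not G) of what a pre-allocated, lossy, fixed-capacity buffer looks like: all allocation happens once in the init call, so a later enqueue is just an index update and a copy. The Msg layout and the function names are hypothetical, and locking between loops is omitted for brevity; this only illustrates the behavior described above for an RT FIFO, not LabVIEW's actual implementation.

#include <stdbool.h>
#include <stdlib.h>

typedef struct { double value; int id; } Msg;   /* fixed-size, no variable-length fields */

typedef struct {
    Msg   *slots;                /* allocated once, up front */
    size_t capacity;
    size_t head, tail, count;
} FixedQueue;

bool fq_init(FixedQueue *q, size_t capacity)
{
    q->slots = malloc(capacity * sizeof *q->slots);   /* the only allocation */
    q->capacity = capacity;
    q->head = q->tail = q->count = 0;
    return q->slots != NULL;
}

/* Lossy enqueue: if full, overwrite the oldest element instead of growing. */
void fq_lossy_enqueue(FixedQueue *q, const Msg *m)
{
    q->slots[q->tail] = *m;                          /* copy only, no malloc */
    q->tail = (q->tail + 1) % q->capacity;
    if (q->count == q->capacity)
        q->head = (q->head + 1) % q->capacity;       /* full: drop the oldest */
    else
        q->count++;
}

bool fq_dequeue(FixedQueue *q, Msg *out)
{
    if (q->count == 0)
        return false;                                /* nothing to read */
    *out = q->slots[q->head];
    q->head = (q->head + 1) % q->capacity;
    q->count--;
    return true;
}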

 

Wait for Next Millisecond Multiple isn't really deterministic either, since if you miss a period you'll end up waiting two periods.
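
A small arithmetic sketch (in C, with a hypothetical 10 ms period) of that failure mode: rounding the current time up to the next multiple means even a 1 ms overrun costs a whole extra period.

#include <stdio.h>

/* Round t_ms up to the next strictly greater multiple of period_ms,
   which is effectively what "Wait Until Next ms Multiple" does. */
static long next_multiple(long t_ms, long period_ms)
{
    return ((t_ms / period_ms) + 1) * period_ms;
}

int main(void)
{
    const long period_ms = 10;                    /* hypothetical 10 ms loop period */

    long on_time = next_multiple(9, period_ms);   /* 9 ms of work  -> next wake at 10 ms */
    long overrun = next_multiple(11, period_ms);  /* 11 ms of work -> next wake at 20 ms */

    printf("9 ms of work:  next wake at %ld ms (one period)\n", on_time);
    printf("11 ms of work: next wake at %ld ms (two periods for a 1 ms overrun)\n", overrun);
    return 0;
}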

Message 9 of 13

@nathand wrote:

Setting a maximum queue size does not pre-allocate space for that number of elements, it just limits the queue from growing beyond that size. You could still get jitter on the enqueue until you reach the maximum queue size. If you really need deterministic behavior, you should use an RT-FIFO (but I understand about the classes).

 

Wait for Next Millisecond Multiple isn't really deterministic either, since if you miss a period you'll end up waiting two periods.


Thank you Nathand.

The "Next Millisecond Multiple" was just for the purpose of showing timing on this example. I thought the "Obtain Queue" would preallocate memory for fixed-sized queues. That is a deal breaker then.

 

Thanks again!

Javier Ruiz - Partner at JKI
jki.net
vipm.io
Message 10 of 13