04-06-2013 03:23 AM
Do you see any potential sources of jitter or "non-determinism" in how queues are used in the deterministic loop shown below? Assume that the Message object does not contain any resizable types (strings, arrays, etc.). Thank you for your comments!
04-06-2013 05:00 AM
04-06-2013 01:39 PM
Not really. The figure is not my specific use case, but the idea is:
1. I want to receive a message object via a queue in my deterministic loop.
2. The deterministic loop checks the first queue for messages while also doing something with its own copy of the message object.
3. The deterministic loop enqueues the message object to the second queue as needed.
4. The third loop waits for a message object from the deterministic loop.
The main question is: are the "read queue" and "lossy enqueue" functions allocating memory or causing other jitter issues at run time? (The queues are created with a fixed size.)
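In rough pseudo-Python terms (just a sketch of the structure; the queue calls and names here are stand-ins, not the actual LabVIEW primitives), one iteration of the deterministic loop looks something like this:

    import queue

    # Hypothetical stand-ins for the two fixed-size queues in the diagram.
    incoming = queue.Queue(maxsize=16)   # filled by the Generate Message Loop
    outgoing = queue.Queue(maxsize=16)   # read by the third loop

    def deterministic_iteration(current_msg):
        """One 10 ms iteration of the middle loop (not LabVIEW, just the idea)."""
        try:
            current_msg = incoming.get_nowait()   # steps 1-2: poll the first queue
        except queue.Empty:
            pass                                  # keep working with our own copy
        # ... "Do Something" with current_msg here ...
        try:
            outgoing.put_nowait(current_msg)      # step 3: pass it to the third loop
        except queue.Full:
            pass    # drop instead of blocking (a real lossy enqueue would
                    # overwrite the oldest element, but the effect is similar)
        return current_msg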
Thanks! 😃
04-06-2013 04:40 PM
LabVIEW is pretty smart about allocating memory. With fixed-size queues and fixed-size message objects, memory allocations should take place the first time the queue is called.
Queues are pretty fast, so I doubt they would be causing any jitter issues. The Do Something block in the Deterministic Loop needs to run in less than 10 ms, of course. Unless you are running on a real-time OS, you could have jitter due to the OS preempting the LV process. This would be most noticeable in the Deterministic Loop.
It is likely that the Dequeue in the Deterministic Loop will occur ~10 ms later than the Enqueue in the Generate Message Loop. Both the Enqueue and the Dequeue may occur almost immediately after an iteration starts.
How will you be stopping your loops? As shown they are all infinite loops.
Lynn
04-06-2013 05:07 PM
@johnsold wrote:
LabVIEW is pretty smart about allocating memory. With fixed-size queues and fixed-size message objects, memory allocations should take place the first time the queue is called.
Queues are pretty fast, so I doubt they would be causing any jitter issues. The Do Something block in the Deterministic Loop needs to run in less than 10 ms, of course. Unless you are running on a real-time OS, you could have jitter due to the OS preempting the LV process. This would be most noticeable in the Deterministic Loop.
It is likely that the Dequeue in the Deterministic Loop will occur ~10 ms later than the Enqueue in the Generate Message Loop. Both the Enqueue and the Dequeue may occur almost immediately after an iteration starts.
How will you be stopping your loops? As shown they are all infinite loops.
Lynn
Thanks for the answer Lynn. The figure is just a 2-minute recreation of a very complex system (hence the infinite loops). The real system actually runs on RT, so I am not worried about OS jitter. The "do something" part is meant to run faster than the cycle time (the real cycle time is on the order of microseconds); otherwise a "late" warning is activated (on the real system). In summary then, the enqueue and dequeue inside the deterministic loop should not be causing jitter or taking a long time, right? Basically, I am looking for an alternative to RT FIFOs that lets me use a class-typed queue and still keep determinism.
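Conceptually, this is the kind of structure I am after (a rough Python sketch just to show the idea of a fixed-capacity, lossy, allocation-free buffer; Python itself obviously still allocates, and none of these names are LabVIEW or RT FIFO API):

    class LossyRingBuffer:
        """Fixed-capacity buffer: all slots exist up front, enqueue never grows
        the storage, and when full the oldest element is overwritten."""
        def __init__(self, capacity):
            self._slots = [None] * capacity   # storage created once, at init time
            self._capacity = capacity
            self._head = 0                    # index of the oldest element
            self._count = 0

        def lossy_enqueue(self, msg):
            tail = (self._head + self._count) % self._capacity
            self._slots[tail] = msg
            if self._count == self._capacity:
                self._head = (self._head + 1) % self._capacity  # drop the oldest
            else:
                self._count += 1

        def dequeue(self):
            if self._count == 0:
                return None                   # nothing available right now
            msg = self._slots[self._head]
            self._head = (self._head + 1) % self._capacity
            self._count -= 1
            return msg

The point is simply that after construction nothing in the hot path grows or shrinks any storage.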
Thank you again!
04-06-2013 05:25 PM
I have no RT experience, so cannot advise you in that regard. The queue mechanism itself is probably not a problem. If it is a concern, you may want to put together a realistic test to try to measure the jitter.
Lynn
04-06-2013 05:29 PM
@johnsold wrote:
I have no RT experience, so cannot advise you in that regard. The queue mechanism itself is probably not a problem. If it is a concern, you may want to put together a realistic test to try to measure the jitter.
Lynn
I think this has been discussed a few times before. A bit of searching suggests that the maximum queue size has to do with the queue logic, not with memory allocation. Some folks did benchmarks to prove it, but I could not find them.
Not sure about RT and OOP objects.
Hope this helps.
04-06-2013 09:31 PM
@Bublina wrote:
@johnsold wrote:
I have no RT experience, so cannot advise you in that regard. The queue mechanism itself is probably not a problem. If it is a concern, you may want to put together a realistic test to try to measure the jitter.
Lynn
I think this has been discussed a few times before. A bit of searching suggests that the maximum queue size has to do with the queue logic, not with memory allocation. Some folks did benchmarks to prove it, but I could not find them.
Not sure about RT and OOP objects.
Hope this helps.
I am not sure I understand what you mean by "the maximum queue size has to do with the queue logic, not with memory allocation". Could you explain a little further?
Thank you.
04-06-2013 10:31 PM
Setting a maximum queue size does not pre-allocate space for that number of elements, it just limits the queue from growing beyond that size. You could still get jitter on the enqueue until you reach the maximum queue size. If you really need deterministic behavior, you should use an RT-FIFO (but I understand about the classes).
Wait for Next Millisecond Multiple isn't really deterministic either, since if you miss a period you'll end up waiting two periods.
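To show what I mean about the timing (a rough Python sketch, not LabVIEW; wait_until_next_multiple here is just a made-up stand-in for the Wait for Next Millisecond Multiple behavior):

    import time

    PERIOD = 0.010  # 10 ms, matching the loop timing in your figure

    def wait_until_next_multiple(period, t0):
        # Stand-in for the wait primitive: sleep until the next absolute
        # multiple of the period, measured from t0.
        elapsed = time.monotonic() - t0
        next_tick = (int(elapsed / period) + 1) * period
        time.sleep(max(0.0, t0 + next_tick - time.monotonic()))

    t0 = time.monotonic()
    for i in range(6):
        start = time.monotonic()
        # Simulated "Do Something": iteration 3 deliberately overruns the period.
        time.sleep(0.015 if i == 3 else 0.002)
        wait_until_next_multiple(PERIOD, t0)
        print("iteration %d took %.1f ms" % (i, (time.monotonic() - start) * 1e3))

Iteration 3 ends up spanning roughly two periods (~20 ms instead of 10), because once the deadline is missed the next wake-up can only land on the following multiple. The exact numbers will wobble on a desktop OS, but the doubling is the point.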
04-06-2013 10:37 PM
@nathand wrote:
Setting a maximum queue size does not pre-allocate space for that number of elements, it just limits the queue from growing beyond that size. You could still get jitter on the enqueue until you reach the maximum queue size. If you really need deterministic behavior, you should use an RT-FIFO (but I understand about the classes).
Wait for Next Millisecond Multiple isn't really deterministic either, since if you miss a period you'll end up waiting two periods.
Thank you Nathand.
The "Next Millisecond Multiple" was just for the purpose of showing timing on this example. I thought the "Obtain Queue" would preallocate memory for fixed-sized queues. That is a deal breaker then.
Thanks again!