LabVIEW Idea Exchange

dgdgomez

Lossy Enqueue Element At Opposite End

Status: Declined

National Instruments will not be implementing this idea. See the idea discussion thread for more information.

Many Real-Time Testing (RTT) systems require a mechanism to store data in one central location that can be accessed by the different parts of the application. The Buffered Variable Table (BVT) is a set of LabVIEW VIs that developers use to store and retrieve data asynchronously from different parts of an application.

 

Normally, when I program RTT applications, I need to store data in one central location that can be accessed by the different parts of the application. For this, I usually use "queue operations" with a fixed size.

 

But sometimes I need to insert an element at the front of the queue, and if the queue is full, it is first necessary to dequeue an element before enqueuing the new one.

 

To solve this problem, I could use code similar to the image below, but that approach can make applications unstable.

[Image: lossy.png]

 

For this reason, my proposal is that LabVIEW provide a "Lossy Enqueue Element At Opposite End" function.
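For readers who want the requested semantics in text form, here is a rough sketch in Python (a stand-in for G; the function name is invented for illustration): a bounded queue where inserting at the front, when the queue is full, drops the newest element at the back.

```python
from collections import deque

def lossy_enqueue_at_opposite_end(q: deque, maxsize: int, element):
    """Insert at the front of a bounded queue; if the queue is full,
    drop the newest element (at the back) to make room.
    Returns the dropped element, or None if nothing was dropped."""
    dropped = None
    if len(q) >= maxsize:
        dropped = q.pop()        # discard the most recently enqueued element
    q.appendleft(element)        # jump the new element to the front
    return dropped

q = deque([1, 2, 3])
print(lossy_enqueue_at_opposite_end(q, 3, 0))  # prints 3 (the dropped element)
print(list(q))                                 # [0, 1, 2]
```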

 

10 Comments
wiebe@CARYA
Knight of NI

Yes, I've needed this a few times. E.g. when a push to a network stream fails, you often want to put the element back on the internal queue.

AristosQueue (NI)
NI Employee (retired)

> it is necessary to dequeue and queue again.

Which may not even work -- you dequeue, then you try to enqueue and discover the queue is full again because the producer got there first to put a new value in. So I hope you aren't trying to do that.
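A deterministic Python sketch of that hazard window, with the competing producer's put inlined exactly where the race would occur (illustrative only; real code would have the producer in another thread):

```python
import queue

q = queue.Queue(maxsize=5)
for i in range(5):
    q.put(i)                     # queue is now full

# Naive workaround: "make room, then insert at the front".
q.get()                          # step 1: dequeue the oldest element
q.put("producer value")          # <-- hazard window: a producer can land here
                                 #     first, refilling the slot we just freed...
try:
    q.put_nowait(99)             # step 2: ...so our own enqueue finds it full again
except queue.Full:
    print("queue filled up again between get() and put()")
```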


> Yes, I needed this a few times. E.g. when a push to a networkstream

> fails, you often want to put the element back on the internal queue.

 

There is no case that I know of where that *wouldn't* result in inconsistent data propagation. If you have one, I'm curious to hear it.

 

If the internal queue is a data stream, then if the network stream fails and your internal queue is a lossy queue, just drop the packet. Putting it back on, *especially* if it bumped the front of the queue, would create ripples in the streaming data -- you'd lose the gradual degradation of signal that lossy enqueuing provides and instead get severe jumps in signal quality, which is generally undesirable.

 

If the internal queue is a command queue, then if the network stream fails, you can put the element back on the queue, but you absolutely cannot do it lossily or you'll drop commands randomly -- you can't Enqueue Lossy into a command queue from either direction! You have to be working with an open-ended queue to put it back on the queue safely, OR you need to not put it back on the actual queue but instead park it in a secondary buffer -- for example, a second queue that is normally empty and that you only put data into when there is backwash like this.
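One way to read that secondary-buffer suggestion, sketched in Python with hypothetical names (in G this would be two queue refnums):

```python
import queue

work_q = queue.Queue()           # main command queue (never lossy)
retry_q = queue.Queue()          # normally empty; holds commands pushed back on failure

def send(cmd) -> bool:
    return False                 # placeholder: push cmd to the network stream

def next_command():
    """Drain the retry buffer before the main queue, so a pushed-back
    command goes out before anything newer."""
    try:
        return retry_q.get_nowait()
    except queue.Empty:
        return work_q.get()

def pump_once():
    cmd = next_command()
    if not send(cmd):
        retry_q.put(cmd)         # park the command instead of re-enqueuing it lossily

work_q.put("CMD_A")
pump_once()                      # send fails, so CMD_A is parked in retry_q
print(retry_q.qsize())           # 1
```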

 

Lossy Enqueue At Opposite End is a problematic request.

wiebe@CARYA
Knight of NI

Well, if the (sized) queue is lossy, you'd force the loss of an element at the back (the last-added element). If you can't enqueue it at the opposite end, you lose an element in the middle instead. Both corrupt data, but I prefer losing the element at the back, as the queue is full anyway and no new items will fit in...

dgdgomez
Member

Maybe I did not state it clearly, but the example is only conceptual; the code should never actually be implemented.

 

Why does "Lossy Enqueue Element At Opposite End" present more problems than "Lossy Enqueue Element"? From my point of view, both implementations should present the same number of problems.

 

Why do you say Lossy Enqueue At Opposite End is a problematic request?

AristosQueue (NI)
NI Employee (retired)

> @dgdgomez: Why does "Lossy Enqueue Element At Opposite End" present more problems than "Lossy Enqueue Element"? From my point of view, both implementations should present the same number of problems.

Consider a rising stream of numbers

1, 2, 3, 4, 5, 6, etc

 

Suppose I have a queue bounded at size five and a slow dequeuer. I enqueue

<front of queue> 1, 2, 3, 4, 5 <back of queue>

Then when I enqueue 6, this is the result

<front of queue> 2, 3, 4, 5, 6 <back of queue>

Now the dequeuer reads 2. Then I continue putting in data... 7, 8, 9, etc.

 

The dequeuer will read some sequence of digits, something like

2, 4, 7, 8, 10, etc

These are monotonically rising, just like my inputs are monotonically rising.

 

This is easy to see with a monotonic list of numbers, but it applies to any data signal -- the dequeuer essentially gets a lower-resolution sampling of the input. Sure, the signal is lossy, but meaningful data is still preserved because an enqueue on a full queue always pushes out the oldest data.
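That claim is easy to check numerically; a quick Python simulation of a size-5 lossy queue with a slow dequeuer (`deque(maxlen=N).append()` drops the oldest element when full, matching Lossy Enqueue Element):

```python
from collections import deque

q = deque(maxlen=5)              # bounded deque: append() on a full deque
read = []                        # silently drops the OLDEST element
for i in range(1, 21):
    q.append(i)                  # lossy enqueue at the back
    if i % 3 == 0:               # slow dequeuer: reads once per three enqueues
        read.append(q.popleft())

print(read)                      # [1, 2, 5, 8, 11, 14] -- a down-sampled
                                 # but still monotonically rising view
```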

 

When you enqueue at the other end, there is no mathematical meaning to what is happening with the data stream. It isn't a down-sampling of the original; indeed, you would be injecting late-generated values into the middle of the stream. So a lossy enqueue-at-wrong-end operation on a data stream doesn't make any sense.
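The same simulation, with an occasional lossy front-enqueue standing in for the proposed primitive, shows those late values landing ahead of older data (a bounded deque's `appendleft()` drops the element at the opposite end when full, i.e. the behavior this idea requests):

```python
from collections import deque

q = deque(maxlen=5)
read = []
for i in range(1, 22):
    if i % 9 == 0:
        q.appendleft(i)          # occasional lossy enqueue at the FRONT;
                                 # when full, the BACK element is dropped
    else:
        q.append(i)              # normal lossy enqueue at the back
    if i % 3 == 0:
        read.append(q.popleft())

print(read)                      # [1, 2, 9, 6, 11, 18, 15] -- no longer
                                 # monotonic: 9 and 18 jumped ahead of older data
```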

 

On a command stream, it is even worse -- a lossy enqueue from EITHER end never makes sense. When you enqueue at the wrong end, you're saying there's a high-priority message, but if the stream is lossy, how do you define which lower-priority message is acceptable to ditch? There may be commands B and C that are not valid until command A is sent, and you may have just trashed A. There's no sane way to make a lossy command queue.

 

Trying to do a lossy enqueue at the wrong end has no meaningful use cases, unless you can name a new one that I've never heard of. My bet is that the problem you're trying to solve by asking for "lossy enqueue at opposite end" will actually inject bugs into your code rather than solve your problem, and you need to find a different solution anyway.

wiebe@CARYA
Knight of NI

It's really not that big a deal to have this, but I'll explain the one time I ran into this. Years ago, hope I remember it correctly...

 

Consider a fixed-size queue, size 10. The read end will read those elements, while the write end will write them (duh).

 

At some point the reader might get an element (3), try to push it to a network stream, for example, and that might fail:

3, 4, 5, 6, 7, 8, 9

3 is read. Meanwhile, 10, 11, 12, 13 are written (non-lossy, fails when full).

4, 5, 6, 7, 8, 9, 10, 11, 12, 13

3 to network stream fails.

 

Now I need to make a local buffer to preserve 3. It would be convenient to push the 3 back onto the queue so the sequence is restored. The 13 will be lost, but everything after it will be lost anyway. Needing a local buffer when I already have a perfectly good buffer is something to avoid.
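A Python sketch of that push-back, with reader and writer shown sequentially for clarity (a bounded deque stands in for the fixed-size queue):

```python
from collections import deque

q = deque([3, 4, 5, 6, 7, 8, 9], maxlen=10)

def send_to_stream(x) -> bool:
    return False                 # placeholder: pretend the network stream is down

element = q.popleft()            # reader takes 3
q.extend([10, 11, 12, 13])       # meanwhile the writer fills the queue: 4..13

if not send_to_stream(element):
    # The requested primitive: put 3 back at the FRONT. The queue is full,
    # so the newest element (13) is dropped to make room -- it would be
    # lost anyway while the queue stays full.
    if len(q) == q.maxlen:
        q.pop()                  # drop 13
    q.appendleft(element)

print(list(q))                   # [3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
```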

 

On the receiver end, once the network stream is restored, I'll get

1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12

Instead of

1, 2, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13

 

Of course, tons of workarounds, but you asked for a use case...

 

Not sure how I fixed it back then; it slowed me down for 15 min. or so. I think previewing to get the element, and dequeuing it only when the send succeeds, sounds good...
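That peek-first idea maps onto LabVIEW's Preview Queue Element: look at the element without removing it, and only dequeue once the send succeeds. A Python stand-in of the pattern (the pump function is hypothetical):

```python
from collections import deque

def pump(q: deque, send) -> None:
    """Forward elements from q, removing each one only after it
    has been sent successfully (peek, then pop on success)."""
    while q:
        element = q[0]           # peek at the front (Preview Queue Element in G)
        if not send(element):
            break                # stream is down; element stays at the front
        q.popleft()              # dequeue only after a confirmed send

q = deque([4, 5, 6])
pump(q, lambda x: x < 6)         # pretend sends succeed until element 6
print(list(q))                   # [6] -- 6 stays queued for the next retry
```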

Darren
Proven Zealot
Status changed to: Declined

National Instruments will not be implementing this idea. See the idea discussion thread for more information.

dgdgomez
Member

The idea has finally been declined. I was developing an example when it was declined, and after considering whether or not to add it, I've finally put it here. This example was inspired by the following post: Undo (Ctrl + Z) that works in stand alone application built in LabVIEW (exe).

 

Other examples I can think of are based on design patterns, such as the singleton pattern, but it seems that this type of implementation presents problems, so I can understand the decision to decline this idea.

 

Regards.

AristosQueue (NI)
NI Employee (retired)

I looked at your example. You've implemented the lossy stack. Ok, but... why are you using *any* of the queue primitives for that? That should be a by-value data structure, not a by-reference structure. A simple array used as a circular buffer -- wrap it in a class to enforce good access patterns. But using a reference structure for that? That just adds performance overhead, even if you did take the time to wrap it in a class to make sure no one mucks with the queues outside of their intended pattern. Using the queue refnums as a data structure like that is pretty wasteful. There's a nice circular buffer implementation over on LAVA. And several others in various places if you don't like that one.
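For illustration, a minimal by-value circular buffer of the kind described, sketched in Python (the LAVA implementation mentioned above is more complete):

```python
class CircularBuffer:
    """Fixed-capacity buffer; writing when full overwrites the oldest entry."""

    def __init__(self, capacity: int):
        self._data = [None] * capacity
        self._head = 0           # index of the oldest element
        self._count = 0

    def push(self, value):
        tail = (self._head + self._count) % len(self._data)
        self._data[tail] = value
        if self._count < len(self._data):
            self._count += 1
        else:
            # Buffer was full: we just overwrote the oldest element.
            self._head = (self._head + 1) % len(self._data)

    def pop_newest(self):
        """Lossy-stack access: take the most recently written element."""
        if self._count == 0:
            raise IndexError("buffer is empty")
        self._count -= 1
        tail = (self._head + self._count) % len(self._data)
        return self._data[tail]

buf = CircularBuffer(3)
for v in (1, 2, 3, 4):
    buf.push(v)                  # pushing 4 overwrites the oldest value, 1
print(buf.pop_newest())          # 4
```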

 

As for the singleton pattern, yes, one embodiment of the singleton pattern in G is using a single-element queue, but using either version of lossy would defeat the locking safety of that queue, so I don't see how that's an argument for the opposite direction lossy enqueue.
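For reference, a Python stand-in for that single-element-queue singleton: the data lives in a size-1 queue, and whoever dequeues it holds exclusive access until they put it back (which is exactly the guarantee a lossy put, from either end, would break):

```python
import queue

state_q = queue.Queue(maxsize=1)
state_q.put({"count": 0})        # the singleton data; the queue doubles as its lock

def update():
    state = state_q.get()        # dequeue = acquire: others block on the empty queue
    state["count"] += 1          # exclusive access to the shared state
    state_q.put(state)           # enqueue = release; a lossy enqueue by another
                                 # caller while the queue was empty would hand out
                                 # a second "copy" and defeat the locking

update()
print(state_q.get()["count"])    # 1
```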

 

(Tangent: I didn't actually run your code -- password-protected VI contains untrusted code, and tonight I didn't feel like digging out my old laptop that I can wipe when I'm done. What's so mystical in the lossy backend enqueue that you needed a password? Also, for reference, passwords on VIs don't encrypt the block diagram [documented], so it is possible [relatively easy] to crack open the VIs and look at diagrams anyway... I didn't do that, but I figured I'd mention it in case you commonly use that to protect IP. Password on VIs just keeps honest people honest.)

wiebe@CARYA
Knight of NI

I can live with it, but I'm puzzled by the reason (that I can't find).

 

> Sure, the signal is lossy, but meaningful data is still preserved because an enqueue on a full queue always pushes out the oldest data.

 

> When you enqueue at the other end, there is no mathematical meaning to what is happening with the data stream.

In my example, the exact opposite is true. Keeping the data from the front end makes sense, because the monotonically rising sequence is kept. I don't see how my example is not meaningful or has no mathematical meaning.