There are some important issues to be solved, for sure. I liked the idea, but it should not become a "trap" in our code that causes strange behaviours.
On the other hand, it could work like the compare functions, which have an option. For the queue, when you connect a 1D array, it could be something like this: Enqueue Mode -> Enqueue Elements / Enqueue Aggregates.
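The proposed mode input could look roughly like the following. This is just an illustrative sketch in Python (LabVIEW is graphical); the function name, the mode strings, and the use of a deque are all my own stand-ins, not an actual LabVIEW API:

```python
# Hypothetical sketch of an "Enqueue Mode" input on the enqueue primitive.
# "elements" / "aggregate" mirror the suggested Enqueue Elements / Enqueue
# Aggregates options; names are illustrative only.
from collections import deque

def enqueue(queue: deque, data, mode: str = "aggregate"):
    """Enqueue a 1D array either as individual elements or as one aggregate."""
    if mode == "elements":
        queue.extend(data)        # each array element becomes its own queue item
    elif mode == "aggregate":
        queue.append(list(data))  # the whole array becomes a single queue item
    else:
        raise ValueError(f"unknown mode: {mode!r}")

q = deque()
enqueue(q, [1, 2, 3], mode="elements")   # three separate items
enqueue(q, [4, 5], mode="aggregate")     # one array-valued item
```

The point of the option is that the same wiring (a 1D array into the enqueue node) stays unambiguous: the caller states explicitly which of the two behaviours they want.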
What if we side-step the entire issue and say that you can't do Enqueue Multiple on a bounded queue? That way there isn't a question about how to handle the array not fitting.
As a point of reference, the Network Streams feature has a similar interface to what is being asked for by this idea and this is how it behaves:
All multi-element writes and reads are atomic. No interspersing of data can happen with regard to other writes and reads that might be executing in parallel.
All multi-element writes and reads are "transactional". That is, they either write or read the requested amount of data within the timeout period, or they leave the stream in an unmodified state. In the latter case, the write or read will return a timed-out indicator.
If you try to write an array of elements that is larger than the fixed size buffer, an error is thrown immediately.
If the write or read has to block, it doesn't hold any locks while doing so. Instead, it will block on an event and get woken up again when data has been read/written. If there are multiple callers blocked on the stream, there is no guarantee of fairness in terms of who gets access to the stream next. This means if one caller is constantly trying to read 10 elements and one caller is constantly trying to read 1 element, the caller trying to read 10 elements could get starved out if the caller requesting 1 element can process and consume data fast enough such that there are never 10 elements available.
So far we haven't heard any complaints about these policies (not that we won't hear some now).
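The transactional, all-or-nothing behaviour described above can be sketched in a few lines of Python. The class and method names are my own; `threading.Condition` stands in for the event-based blocking (the lock is released while waiting, matching the "doesn't hold any locks while blocked" policy):

```python
# Rough sketch of the Network Streams write policy described above:
# - an array larger than the fixed-size buffer errors immediately,
# - otherwise the write is all-or-nothing within the timeout,
# - the internal lock is not held while blocked waiting for room.
import threading

class BoundedStream:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.buffer = []
        self.cond = threading.Condition()

    def write_multiple(self, items, timeout: float) -> bool:
        if len(items) > self.capacity:
            raise ValueError("array larger than fixed-size buffer")  # immediate error
        with self.cond:
            # wait_for releases the lock while blocked and re-checks the
            # predicate each time it is woken by a reader.
            ok = self.cond.wait_for(
                lambda: self.capacity - len(self.buffer) >= len(items),
                timeout=timeout)
            if not ok:
                return False             # timed out; buffer left unmodified
            self.buffer.extend(items)    # atomic: no interleaving with other writers
            self.cond.notify_all()
            return True

    def read_one(self):
        with self.cond:
            self.cond.wait_for(lambda: len(self.buffer) > 0)
            item = self.buffer.pop(0)
            self.cond.notify_all()
            return item
```

Note that this sketch also reproduces the fairness caveat: whichever blocked writer's predicate happens to be satisfiable when it is woken gets to proceed, so a large write can be starved by a stream kept nearly full by small ones.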
> there is no guarantee of fairness in terms of who gets access to the stream next.
The queues do guarantee fairness currently. I'd hate to give that up when introducing this feature -- start introducing exceptions to something like that and people stop trusting the mainline use case. Besides, if you already have a queue that can potentially sleep on read or write, then there shouldn't be any surprises when it actually does so, so not allowing the 1-element reader to proceed if the 10-element reader is first in line seems like it would be ok to me.
I have needed this, or alternatively some way of temporarily locking everybody else out of queue writes, numerous times. I usually circumvent it by making the queue element itself an array of the original type, but that introduces multiple inefficiencies.
I'd expect such a function to either write the entire block, or on queue full or timeout not write any of the array elements (and then signal queue full or timeout in a similar fashion to the single element write primitive).
But this is quite an old thread, so I don't know if it is still "in effect"? I just stumbled upon it while searching for duplicates of my own identical idea - it turns out it had already been submitted.
If implemented, my preference would be for preventing the queue from being accessed by other enqueue actions while an array of elements is being "array-enqueued". After all, the intent of enqueuing a series of elements is generally to have them read in the order they were placed at the source. Alternatively, an option could be offered (such as a Recommended input).
If the order at the source can be interfered with by other processes, then there is a potential for hurtful collisions.
The current loop-plus-multiple-individual-element-enqueue approach does not seem to protect against this possibility. In a single-producer scenario, that is not an issue. With multiple producers, it can be a serious one.
The current alternative of using a queue element being itself an array of atomic elements (mentioned in one of the comments) is OK but not particularly elegant for a state machine architecture.
With each case in the consumer loop handling a specific atomic element, you would have to enclose your case structure within a For loop like this (I can't attach the LV 2013 project, but the snippet hopefully gives the gist of it):
As a side note, the comment arrow attached to the "with that too" label disappeared in the snippetting process...
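Since the LabVIEW snippet isn't attached, here is a rough textual analogue of the pattern being described, in Python. The queue element is itself an array of commands, and the consumer wraps its case structure (here, a handler dispatch) in a For loop over the dequeued array. All names are illustrative:

```python
# Consumer-side pattern: dequeue one array-valued element, then run the
# case structure once per atomic command inside it.
import queue

def consumer_iteration(q: "queue.Queue", handlers: dict):
    commands = q.get()                # dequeue one element: an array of commands
    for cmd, payload in commands:     # the For loop around the case structure
        handlers[cmd](payload)        # each "case" handles one atomic command
```

This works, but as noted above it is not particularly elegant: every consumer now needs the extra loop, and producers must batch commands into arrays even when sending a single one.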
Came to the ideas exchange to look for this exact idea. I have a queue that dequeues data to a log file. Multiple places can put data into the queue (e.g. transmitted and received data) and I either write a single message or read multiple messages at a time. In the logging loop, I flush the data out of the queue to disk.
I think there is merit in both lock / enqueue one / unlock / next and lock / enqueue all / unlock, but if there were a performance improvement from enqueuing an array of data without releasing the lock, that would be a bonus too.
As for enqueuing to a finite-sized queue - it should be possible. It depends on the method used above, but I think it should either enqueue everything or nothing while the lock is held, perhaps allowing for a timeout like the other enqueue functions.
I just noticed that the prime fix for this in LabVIEW today has not been mentioned in this thread --
When you do Obtain Queue, at the same time, call Obtain Semaphore Reference. Pass both refnums around as a pair everywhere. Before you do Enqueue or Dequeue or Release Queue, always call Acquire Semaphore and then call Release Semaphore after all of the operations that you need to have be atomic.
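That pairing can be sketched as follows. This is a Python analogue, not LabVIEW: `threading.Lock` stands in for a size-1 semaphore, and the helper names are mine:

```python
# Sketch of the workaround above: obtain a queue and a semaphore together,
# pass both around as a pair, and acquire the semaphore around every group
# of queue operations that must be atomic.
import queue
import threading

def obtain_queue_with_semaphore():
    # Obtain Queue + Obtain Semaphore Reference, returned as a pair.
    return queue.Queue(), threading.Lock()

def enqueue_multiple(q, sem, items):
    with sem:                  # Acquire Semaphore
        for item in items:     # this whole group of enqueues is now atomic
            q.put(item)
                               # Release Semaphore on exiting the with-block

q, sem = obtain_queue_with_semaphore()
enqueue_multiple(q, sem, [1, 2, 3])
```

The discipline is the important part: every enqueue, dequeue, and release on that queue must go through the same semaphore, or the atomicity guarantee silently evaporates.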