12-08-2021 07:08 AM
@JÞB wrote:
Not quite, Mr. Ko.
Queues and other "Named" things such as:
- Notifiers
- Dynamic events
- I/O Sessions (VISA, DAQmx, RFmx, etc...)
- Timed Loops
- .....
Are Context Aware! So, "Data" on MyComputer will not collide with "Data" on cRIO1 or "Data" on MyCool.lvlibp (you need a public method to access a plug-in's Named object ref).
Still, choosing a good name is a good idea 💡.
Really? Has this changed? Maybe I am out of date, but I was always under the impression that named queues could collide.
12-08-2021 08:11 AM
@billko wrote:
@JÞB wrote:
Not quite, Mr. Ko.
Queues and other "Named" things such as:
- Notifiers
- Dynamic events
- I/O Sessions (VISA, DAQmx, RFmx, etc...)
- Timed Loops
- .....
Are Context Aware! So, "Data" on MyComputer will not collide with "Data" on cRIO1 or "Data" on MyCool.lvlibp (you need a public method to access a plug-in's Named object ref).
Still, choosing a good name is a good idea 💡.
Really? Has this changed? Maybe I am out of date, but I was always under the impression that named queues could collide.
They can, but only within the same LabVIEW instance, so they can't collide:
- between LabVIEW 2020 and 2018 on the same PC,
- between two differently named projects on the same PC,
- between an EXE running on a PC and talking to an EXE on a cRIO (as these are two different LabVIEW instances).
They can collide within the same dev environment if the VIs are not wrapped in a project, or within the same EXE.
It means that, using Obtain Queue by name, you can monitor a queue's size/status for debug purposes from a debug VI inside the program (if it's properly written to handle the open and close), but you can't use an external program to probe your program for this info.
(It's one of the few good reasons to use a named queue - if you can guarantee the naming convention).
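Not LabVIEW, of course, but here's a rough Python sketch of that scoping idea (QueueRegistry/obtain_queue are made-up names for illustration, not any real API): each application instance keeps its own name-to-queue lookup table, so identical names in different instances can never meet.

# Rough Python analogue (not LabVIEW) of how "named" refs are scoped:
# each application instance owns its own lookup table, so "Data" here can
# never collide with "Data" in another instance.
import queue

class QueueRegistry:
    """One of these exists per application instance (per process)."""
    def __init__(self):
        self._queues = {}

    def obtain_queue(self, name, max_size=0):
        # Same name inside the same instance -> same queue (collision possible).
        if name not in self._queues:
            self._queues[name] = queue.Queue(maxsize=max_size)
        return self._queues[name]

# Within one instance, two Obtains by the same name return the same queue:
registry = QueueRegistry()
assert registry.obtain_queue("Data") is registry.obtain_queue("Data")

# The debug-VI trick: look the queue up by name and check its size.
print(registry.obtain_queue("Data").qsize())

# A second instance (another EXE, another target) has its own registry,
# so its "Data" is a different queue entirely.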
12-08-2021 08:47 AM
@Wiebe
A named queue just means I can probe the queue size for debugging from my debug VI if the queue exists, nothing more.
The name is a static string I have defined that is used nowhere else in the project. I want to read the queue as a continuous operation 24/7, so I'm not sure that flushing it will help. The consumer (dequeue) is itself a producer loop passing data to the rest of the program, and the rate at which that data is taken governs the size of this queue.
@Everyone.
The enqueue function is in a while loop with only one other function (TCP Read). This is reading a raw TCP datastream from a device. The producer MUST spin fast, as the datastream can run at 190 MB/s (possibly faster), but the output buffer on the external device is only 512 bytes 😂, and when it gets full it crashes. The dequeue takes the datastream and reassembles it into packets of 1 second of data for my program to process. I don't ever want to reach queue full, but if I do, I need to stop the raw TCP stream fast, before the external device crashes. The data rate can change and so can the TCP packet size, hence the resizable queue, to ensure that I have a time-limited amount of data buffered in the queue before I terminate the connection. If the queue is filling too fast, the configurable parameters in the program haven't been set up to allow enough time to pull data off the queue after processing. (This can be a super processor-intensive program depending on user setup.)
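In case it helps picture the loop, a rough Python sketch of the pattern just described (not the actual LabVIEW code; host/port, chunk size and timeout below are placeholders): a tight TCP-read/enqueue producer with a bounded queue, where an enqueue timeout means "stop the stream now, before the device's buffer overflows".

# Rough sketch of the producer loop described above (Python stand-in, not LabVIEW).
# Read raw TCP as fast as possible, push into a size-limited queue, and treat an
# enqueue timeout as the signal to stop the stream before the instrument's tiny
# output buffer overflows. Host/port, sizes and timeouts are placeholders.
import queue
import socket

buffer_queue = queue.Queue(maxsize=1000)       # bounded, like the fixed-size LabVIEW queue
ENQUEUE_TIMEOUT_S = 0.5                        # placeholder timeout

def producer(host="192.168.0.10", port=5025):  # placeholder address
    sock = socket.create_connection((host, port))
    try:
        while True:
            chunk = sock.recv(4096)            # the TCP Read equivalent
            if not chunk:
                break                          # connection closed by the device
            try:
                buffer_queue.put(chunk, timeout=ENQUEUE_TIMEOUT_S)
            except queue.Full:
                # Queue full for too long: close the raw stream *now*, before
                # the device's 512-byte output buffer overflows and it crashes.
                break
    finally:
        sock.close()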
@Kevin
I tried not assigning time-critical priority, but I fell over much faster on the external side due to the buffer on the external device. I've got 96 main functions in this program 😣, 50 run in parallel, and most of it is asynchronous. I'm using different execution systems and priorities to get maximum performance. It's not a bandwidth thing: this was working for 6 months before my last change and I had it streaming at 190 MB/s; I'm seeing this at 7 MB/s of incoming data. Not using re-entrancy here. Making this a chunking thing would require a major code refactor.
It was working, now it's broken, and none of you have seen anything like this before?
I was wondering if the DeQ was holding the reference to the Q open and momentarily preventing the EnQ from taking place, resulting in a timeout. That's my best guess for the behaviour I'm seeing.
12-08-2021 08:58 AM
On the dequeue side of things, have you tried flushing the queue and processing the array of elements rather than dequeuing the elements one at a time? This should keep your queue closer to empty, since each flush will empty it every time.
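Roughly, in text form (a Python stand-in for LabVIEW's Flush Queue, not the real API), the difference is draining everything that's waiting in one call instead of taking one element per loop iteration:

# "Flush and process the array" vs "dequeue one element at a time"
# (Python stand-in for LabVIEW's Flush Queue; process() is a placeholder
# for whatever the downstream code does with the data).
import queue

def flush(q):
    """Drain everything currently in the queue and return it as one list."""
    items = []
    while True:
        try:
            items.append(q.get_nowait())
        except queue.Empty:
            return items

def consumer(q, process):
    while True:
        batch = flush(q)             # one call takes everything that's waiting...
        if not batch:
            batch = [q.get()]        # ...or block for the next element if empty
        process(b"".join(batch))     # the queue is now back near empty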
12-08-2021 09:11 AM
@James_W wrote:
@Wiebe
A named queue just means I can probe the queue size for debugging from my debug VI if the queue exists, nothing more.
I do understand the use case.
I've seen others get burned by this one time too many, so I'd have to be pretty desperate to do this (even for a 'good' reason).
I'd make each dynamic loop\module\whatever return, publish, share or store its queue and use that to debug; no need for naming the queues.
Suit yourself, I don't mind if you use named queues 😋.
@James_W wrote:
@Wiebe
I want to read the queue as a continuous operation 24/7, so I'm not sure that flushing it will help. The consumer (dequeue) is itself a producer loop passing data to the rest of the program, and the rate at which that data is taken governs the size of this queue.
I'm sure I don't understand what you're trying to do.
The VI that doesn't show the problem didn't help 😂.
12-08-2021 09:26 AM
@Wiebe
Basically I'm trying to work out how I can get a queue timeout when the queue is not full (and hasn't had time to fill) with the setup shown.
12-08-2021 09:29 AM
I've not seen any such issue when using the native Queue palette functions. In my experience, dequeues and enqueues do not interfere with one another in the manner you wonder about. Caveat:
- I've rarely used non-infinite timeouts for enqueuing functions. I typically do not limit my queue sizes unless I *also* do lossy enqueuing to make a kind of circular buffer.
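For what "lossy enqueuing to make a kind of circular buffer" amounts to, here's a rough Python equivalent (a deque with maxlen; this mimics the idea behind LabVIEW's Lossy Enqueue Element, it isn't the real API):

# Lossy enqueue into a size-limited buffer, i.e. a circular buffer:
# when it is full, the oldest element is silently dropped, so the
# producer never blocks and never times out -- at the price of losing
# data whenever the consumer falls behind.
from collections import deque

circular = deque(maxlen=1000)   # placeholder depth

def lossy_enqueue(buf, element):
    # With maxlen set, append() discards the oldest element when full.
    buf.append(element)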
If concerned about such contention, why not use Wiebe's idea of flushing the queue once a second instead of dequeuing and enqueuing at ~190 kHz each (19 MB/sec / ~100 B/packet ≈ packets at 190 kHz)? That would reduce your "contention" opportunities dramatically.
As an aside, the source device strikes me as ill-designed when it has a 19+ MB/sec stream rate, a mere 512-byte buffer, and a hard crash when the buffer fills.
-Kevin P
12-08-2021 09:50 AM
As an aside, the source device strikes me as ill-designed when it has a 19+ MB/sec stream rate, a mere 512-byte buffer, and a hard crash when the buffer fills.
-Kevin P
I might have got my numbers wrong, I don't know. 😉
- They certainly look bad, which makes me think I've got them wrong.
Having a look at how to implement Wiebe's idea at the moment. (Having trouble reproducing the problem; the last test failed to read the TCP port fast enough in the producer loop 😣)
James
12-08-2021 10:23 AM
OK... re-looking at my architecture: why not flush instead of reading the queue with a dequeue?
Well, when the user presses Stop, the raw TCP socket is closed and the producer is closed; this then flushes the remaining data on the buffer queue.
Any data already in the consumer queue will be processed. If I flush data into it and have 5 minutes' worth of data but take 20 minutes to process it, the stop time may become even more undesirable. (It is currently a requirement to complete processing of all data in the system before allowing the user to continue. The buffer is not a requirement, so data in there can be chucked.)
(It comes down to user requirements)
doesn't mean I can't try and make the producer loop spin faster though 😉
James
12-08-2021 11:34 AM
@James_W wrote:
Ok, thanks for the feedback...
@Lucian:
You haven't quite understood my intentions here. I need the producer loop to not hang when the consumer loop has caused the queue to fill up. (Well, actually the consumers of the consumer, but that's not important.)
The timeout was added to prevent a fatal crash on the external device when its buffer overflows because it isn't being read by the producer loop due to the queue being full. The queue has a fixed size to prevent a Windows memory overflow and system shutdown if processing gets too laggy. I'm dealing with fast streams of large data sets and can kill a server-style PC in 15 minutes without this. This is a serious memory management tool!
@Jay
I've already looked at the help (hence the example VI to test). I don't log an error when I get a timeout, which leads me to believe that this is a true timeout, as otherwise I would have caught an error in my error log on the input to the enqueue function. Technically this is the acquisition device and the acquisition loop, as I stream a raw TCP string in this loop. I'm not sure how I'd put a TCP Read (with timeout) and an enqueue into a Timed Loop so I can monitor "completed late", but maybe I've missed a trick.
I don't want to use a lossy enqueue; that defeats the object of knowing if the queue is full, and it will lose the data I want to process.
@billko
I have a plugin architecture, and I have carefully assigned a queue name here that no one in their right mind will think of 😉
James
Thank you James!
OK, I believe it is a real enqueue timeout. ASSUMING THAT, it must be a memory allocation delay (big data, right?).
What is the Q's data type? And how can we reduce memory allocations? Those then become the driving questions.
Aside: allowing a debug probe into the queue may possibly force on some debug hooks in the guts of queues. AQ would have to chime in, but it seems reasonable.
Back to the queue. Are you preallocating memory by using a max queue size on the Obtain Queue?
I believe you said that the data is a string... hmm, can you convert it to a fixed-size byte array and use a terminator? My favorite is 00 B7 00, NUL Thorn NUL 😀
Could a DVR work better than a queue here? (Likely.)
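A rough Python illustration of the "preallocate a fixed-size byte buffer and reuse it" idea (not LabVIEW; recv_into/bytearray are just the Python stand-ins for a preallocated byte array, and the sizes are placeholders):

# Preallocate one fixed-size byte buffer and reuse it for every TCP read,
# so the hot loop does no per-read allocation (Python stand-in for the
# fixed-size byte array suggestion above).
import socket

CHUNK = 4096                                  # placeholder fixed read size

def read_into_preallocated(sock: socket.socket):
    buf = bytearray(CHUNK)                    # allocated once, reused every read
    view = memoryview(buf)
    while True:
        n = sock.recv_into(view)              # fills the existing buffer in place
        if n == 0:
            break                             # connection closed
        # Still one copy handed downstream here; in LabVIEW a DVR would let
        # the consumer share the buffer by reference instead of copying it.
        yield bytes(view[:n])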