09-24-2021 07:25 AM
Ah, so you've demonstrated you have a copying problem that a DVR improves, but not that it is a Queue issue necessarily.
09-24-2021 11:30 AM
I've found that most of my simple utility-type classes work just fine with the by-value private data cluster of a class. Most of the uses I find are ones where there is an initialize that does some stuff, and then other functions that operate on that class. But all the important parts of those functions rely on the data from the initialize, so for those, having stuff be by reference just seems like another layer that isn't required. If all the important stuff comes from the init, then just shove everything from there into the class to be used later.
Yes, there are times when initially the class just needs stuff from the init, and then later on I realize that having something be by reference would make the API cleaner by letting callers specify stuff later. In cases like that I will store stuff in a DVR that has a type-defined cluster in it. And because of the nature of classes and private data, upgrading to a new version of the class that went from by value to by reference should still be compatible - assuming I didn't do something crazy like change the connector panes. Even then, I can leave the old VI in the package, just not shown on the palette, as a somewhat deprecated function, and replace it on the palette with a new one.
I guess I'm just trying to say I default to by value, and use a DVR to become by reference when it's needed.
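(If it helps to see that pattern outside G: here is a loose C++ sketch of the same idea, with a hypothetical Logger class made up for illustration - by value by default, wrapped in a shared handle only when reference semantics are wanted. It's an analogy, not LabVIEW semantics.)

```cpp
#include <memory>
#include <string>

// Hypothetical by-value utility class: everything important is captured
// at init and travels with the object, like a LabVIEW by-value class.
class Logger {
public:
    explicit Logger(std::string path) : path_(std::move(path)) {}
    void log(const std::string& msg) { /* append msg to the file at path_ */ }
private:
    std::string path_;  // set once at "initialize", used by every later call
};

int main() {
    Logger byValue{"run1.log"};  // default: by value; copies are independent

    // When reference semantics are wanted later, wrap the same class in a
    // shared handle (the rough equivalent of storing it in a DVR) instead
    // of rewriting the class itself.
    auto byRef = std::make_shared<Logger>("run2.log");
    auto alias = byRef;          // both handles now see one shared instance
    alias->log("seen by everyone holding the handle");
}
```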
Unofficial Forum Rules and Guidelines
Get going with G! - LabVIEW Wiki.
17 Part Blog on Automotive CAN bus. - Hooovahh - LabVIEW Overlord
09-27-2021 08:55 AM
@drjdpowell wrote:
Ah, so you've demonstrated you have a copying problem that a DVR improves, but not that it is a Queue issue necessarily.
Actually I have - but that's in other threads from 6+ months ago.
I won't go into the full details of the months of debugging... but the upshot was:
The issue is that queue memory does not get freed until the queue is released. So assigning large contiguous blocks of data to a queue and then manipulating part of the data after dequeue means the contiguous block of memory is no longer allocated to the queue, yet the queue keeps that same amount of memory allocated until the garbage collector kicks in.
This can be a real PITA.
The DETT showed where the issue was, and an internal tool showed the RAM allocated to the EXE ramping up over time due to the queues until the restructure - and it would carry on ramping for 2 days to 2 weeks, depending on execution speed, until the garbage collector kicked in properly.
The DVR inside the queue resolves the issue: large datasets don't get put into queues, so the memory is allocated and freed much more rapidly and the program settles within an hour.
(Which is kind of important considering that, if it locks up, we can be streaming at >180 MB/s into the program while processing all of the data in real time - the datasets are big!)
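(For anyone reading along outside LabVIEW, a rough C++ analogy of the restructure - this models the ownership pattern of putting a DVR in the queue, not LabVIEW's actual queue internals:)

```cpp
#include <memory>
#include <queue>
#include <vector>

int main() {
    // By-value style: the queue element IS the large block. In LabVIEW the
    // queue's allocation can persist until the queue itself is released,
    // which is the ramp-up described above.
    std::queue<std::vector<double>> byValueQueue;
    byValueQueue.push(std::vector<double>(50'000'000));  // ~400 MB element

    // DVR-in-queue style: the queue only ever holds small handles; the
    // large block is freed as soon as the last handle goes away, with no
    // tie to the queue's lifetime.
    std::queue<std::shared_ptr<std::vector<double>>> byRefQueue;
    byRefQueue.push(std::make_shared<std::vector<double>>(50'000'000));

    {
        auto block = byRefQueue.front();  // cheap: copies a pointer
        byRefQueue.pop();
        // ... process *block in place ...
    }  // last handle destroyed here: the large block is returned immediately
}
```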
James
09-27-2021 09:08 AM
@Hooovahh wrote:
I guess I'm just trying to say I default to by value, and use a DVR to become by reference when it's needed.
I guess I'm not using a DVR in the normal use case. I'm actually using it to prevent a memory-leak issue and queues becoming bloated.
So I create the DVR, enqueue it, and destroy it straight after dequeue, since I seem to get a more memory-efficient architecture that way.
So I guess you would probably say I'm using the DVR to pass data by value rather than by reference (since I only ever use a single Data Value Reference with each dataset sent, to avoid race-condition issues).
It seems to be more memory efficient and to allow for better parallel processing. (That holds in all the benchmarking I've done with arrays over 1000x1000 in size - you have to go smaller to see a performance decrease with the functions I'm using.)
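(In text-language terms, that create/enqueue/destroy pattern is roughly a single-owner handoff - a hedged C++ sketch with made-up send/receive helpers, not actual DVR semantics:)

```cpp
#include <memory>
#include <queue>
#include <vector>

// Single-owner handoff, analogous to "create the DVR, enqueue it, destroy
// it straight after dequeue": exactly one reference to each dataset exists
// at any moment, so there is nothing for parallel code to race on.
std::queue<std::unique_ptr<std::vector<double>>> q;

void send(std::vector<double> data) {
    q.push(std::make_unique<std::vector<double>>(std::move(data)));
}

std::vector<double> receive() {
    auto handle = std::move(q.front());
    q.pop();
    std::vector<double> data = std::move(*handle);
    return data;  // `handle` is destroyed here - the reference is gone
}

int main() {
    send(std::vector<double>(1'000'000));
    auto data = receive();  // the dataset now lives by value downstream
}
```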
James
09-27-2021 09:20 AM
How does using multiple DVRs to the same data prevent race conditions? A single DVR would ensure that there are no parallel write actions to the data; I believe it will allow parallel reads, though. Are you creating/destroying DVRs to the same data set? That seems like it would be an issue. If it is different data, then this approach would work. Another solution would be to queue a class with the DVR to the dataset in the class private data. That would fully encapsulate the data set.
09-27-2021 04:28 PM - edited 09-27-2021 04:32 PM
@James_W wrote:
@drjdpowell wrote:
Ah, so you've demonstrated you have a copying problem that a DVR improves, but not that it is a Queue issue necessarily.
Actually I have - but that's in other threads from 6+ months ago.
I won't go into the full details of the months of debugging... but the upshot was:
The issue is that queue memory does not get freed until the queue is released. So assigning large contiguous blocks of data to a queue and then manipulating part of the data after dequeue means the contiguous block of memory is no longer allocated to the queue, yet the queue keeps that same amount of memory allocated until the garbage collector kicks in.
This can be a real PITA.
The DETT showed where the issue was, and an internal tool showed the RAM allocated to the EXE ramping up over time due to the queues until the restructure - and it would carry on ramping for 2 days to 2 weeks, depending on execution speed, until the garbage collector kicked in properly.
The DVR inside the queue resolves the issue: large datasets don't get put into queues, so the memory is allocated and freed much more rapidly and the program settles within an hour.
(Which is kind of important considering that, if it locks up, we can be streaming at >180 MB/s into the program while processing all of the data in real time - the datasets are big!)
James
Ah, I see what you mean now. I experimented with the DETT and some simple test VIs. I found it wasn't the Queue per se (I got the same results with plain arrays); rather, going by value, memory is not deallocated, but with DVRs it is. LabVIEW must follow one algorithm by value (retain the allocation with the expectation of reusing it) and a different one with a DVR (deallocate). In your use case (a large buffered build-up of data on startup that isn't repeated), using DVRs makes sense.
09-28-2021 04:51 AM
@Mark_Yedinak wrote:
How does using multiple DVRs to the same data prevent race conditions? A single DVR would ensure that there are no parallel write actions to the data; I believe it will allow parallel reads, though. Are you creating/destroying DVRs to the same data set? That seems like it would be an issue. If it is different data, then this approach would work. Another solution would be to queue a class with the DVR to the dataset in the class private data. That would fully encapsulate the data set.
The race condition is in the consumers - I want the consumers to all run in parallel as fast as possible, but I don't know which is going to be the last consumer to finish with the data.
By value, I don't need to worry. If I used a DVR properly, I could create one DVR and have every consumer read it - but then I would need to know when all processing of the data has finished (and even whether a given consumer is meant to be processing the data at all). A single DVR with multiple parallel reads might be nice, but it is going to be more of a headache due to the race conditions I will get.
Using DVRs properly (and allowing parallel reads) is exactly what would create those race conditions...
So I split the wires and then create the DVRs (each to its own copy of the same dataset) - as I said, it's about memory management.
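(A rough C++ analogy of that wire-splitting, with a hypothetical fanOut function - one private copy per consumer, so each copy is freed as soon as its consumer is done:)

```cpp
#include <memory>
#include <thread>
#include <vector>

// "Split the wires, then create the DVRs": each consumer gets a handle to
// its own copy of the dataset, so all consumers run fully in parallel and
// nobody has to track which consumer is the last one done with the data.
void fanOut(const std::vector<double>& dataset) {
    std::vector<std::thread> consumers;
    for (int i = 0; i < 3; ++i) {
        auto copy = std::make_shared<std::vector<double>>(dataset);
        consumers.emplace_back([copy] {
            // ... process *copy independently of the other consumers ...
        });  // this consumer's copy is released once its lambda is destroyed
    }
    for (auto& t : consumers) t.join();
}

int main() {
    fanOut(std::vector<double>(1'000'000));
}
```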
09-28-2021 04:59 AM
@drjdpowell wrote:
Ah, I see what you mean now. I experimented with the DETT and some simple test VIs. I found it wasn't the Queue per se (I got the same results with plain arrays); rather, going by value, memory is not deallocated, but with DVRs it is. LabVIEW must follow one algorithm by value (retain the allocation with the expectation of reusing it) and a different one with a DVR (deallocate). In your use case (a large buffered build-up of data on startup that isn't repeated), using DVRs makes sense.
You found it then. 😉
(and hence my reasoning for throwing a Class down a DVR)
My use case is actually the build-up of buffered data in an acquisition stage, but the size of the acquisition is repeatable but re-configurable.