
DVR Size Limitations

Hi all,

 

I've been using DVRs for a while now and they are lifesavers for the large datasets I have to pass around most of my programs. I'm wondering now if there's a better way to use them, though. I currently have a DVR that is passed around in only one VI, but it is growing in both the number of elements it contains and the size of those elements. I've been contemplating whether I should break this DVR up into multiple DVR references for organization purposes, but my real question is whether having a massive DVR is a performance hit on my program. Is LabVIEW handling the memory needed for the DVR gracefully as my arrays grow? I know the DVR itself is just a reference, but the memory location it points to is getting larger, so would breaking it up make sense?

 

Thanks!

0 Kudos
Message 1 of 8
(3,605 Views)

This isn't really a question about DVRs.  As you said yourself, that is just a reference.  The question is about how you organize the data that the DVR is pointing to.  If you are thinking of breaking it up, and it grows larger with arrays, then this must be some sort of super cluster.

 

Breaking it up won't reduce your overall memory usage.  If it is a lot of data, it is going to eat up a lot of memory whether it is in one large cluster or several smaller clusters.  But if it is one large cluster, you will require one large chunk of contiguous memory to hold it.  And if it grows and outgrows the memory space available for it, then the operating system is going to need to find another, larger chunk of free memory to move it into.

 

If the data is saved in smaller pieces, then you don't need one large chunk of memory, just several smaller chunks that would be easier for the OS to find.
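To make the contiguous-memory point concrete, here is a rough C sketch (purely illustrative; this is not how LabVIEW's memory manager actually works, and the sizes are made up): the same total amount of data either as one block that must be contiguous, or as several independent blocks the OS can place wherever free space happens to exist.

/* Illustrative only: one contiguous "super cluster" allocation vs.
   the same data split into several independent allocations. */
#include <stdlib.h>

#define N 1000000

int main(void)
{
    /* One big block: growing it later forces the allocator to find
       an even larger contiguous free region. */
    double *big = malloc(3 * N * sizeof *big);

    /* The same total data in smaller pieces: three independent blocks
       that can each land in whatever free space is available. */
    double *a = malloc(N * sizeof *a);
    double *b = malloc(N * sizeof *b);
    double *c = malloc(N * sizeof *c);

    free(big);
    free(a);
    free(b);
    free(c);
    return 0;
}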

Message 2 of 8
(3,599 Views)

There will potentially be a performance hit when you access a large element inside the DVR, depending on what you do with that element inside the In Place Element (IPE) structure, and a guaranteed performance hit when you wire such an element out of the IPE, as LabVIEW then needs to make a copy of it. Elements that you do not explicitly access inside the IPE will normally not influence the execution speed at all, so having a huge mega pronto saurus DVR is not necessarily going to make your program run slow.
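As a loose analogy in C (not the actual IPE mechanics; the names and sizes below are invented): touching the large element through the reference costs nothing extra, while pulling it out of the structure forces a full copy of it.

/* Illustrative only: in-place access to a large member vs. copying it out. */
#include <stdlib.h>
#include <string.h>

#define N 1000000

typedef struct {
    double samples[N];   /* large element, like a big array in the DVR */
    int    count;        /* small element */
} Data;

int main(void)
{
    Data *ref = calloc(1, sizeof *ref);   /* stand-in for the DVR */

    /* "Inside the IPE": modify the large element through the reference;
       no copy of the array is made. */
    for (int i = 0; i < N; i++)
        ref->samples[i] *= 2.0;

    /* "Wiring the element out": a full copy of the array is unavoidable. */
    double *copy = malloc(sizeof ref->samples);
    memcpy(copy, ref->samples, sizeof ref->samples);

    free(copy);
    free(ref);
    return 0;
}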

 

But!!!! Having such a beast in your application may be a big warning sign that your application has grown into an unmanageable nightmare! Generally it is not a good sign if you have one huge thing in your application that manages everything. Doing that in just a cluster is definitely a big performance problem. Putting it in a DVR can help limit the performance impact, but it can still turn into a maintenance nightmare for anyone else who ever has to work on that software, and quite likely for you too after you have not touched the program for a while.

Rolf Kalbermatter  My Blog
DEMO, Electronic and Mechanical Support department, room 36.LB00.390
Message 3 of 8
(3,586 Views)

Thanks for the clarification, RavensFan. In this case it's not a super cluster and the array sizes are manageable. The data is grouped into several smaller clusters where it makes sense for organization. I guess I need to have a little more faith that LabVIEW will handle the DVR memory allocation correctly for my smaller cluster arrangements. My question was also geared towards other users' personal experience with keeping track of multiple DVRs in a project and whether there is any benefit to using multiple DVRs instead of just one massive (yet well organized) DVR.

0 Kudos
Message 4 of 8
(3,583 Views)

I appreciate both of your insights! You're right to question my architecture, Rolf; it's starting to scare me too. I need to take some time to consider what can be calculated on the spot and what really needs to be placed into the DVR for later use. I think some of my initial choice of style was to do "preprocessing" of the data and put it in the DVR, but that preprocessing is really revealing itself to be the processing of the data, and there's nothing "pre" about it. I also think there are copies of very similar data sets floating around in the DVR, and I'm not utilizing the data the way I should be.

0 Kudos
Message 5 of 8
(3,576 Views)

A DVR still has to represent a single piece of data.  So if it is multiple things, then it has to be a cluster.  It may be a cluster of clusters, or a cluster of clusters of clusters of arrays, but that is still a super cluster.
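In C terms (invented names, purely illustrative): however deeply you nest the clusters, the reference still points at one single top-level type.

/* Illustrative only: nested clusters are still one single type behind the DVR. */
#include <stdio.h>

typedef struct { int n; double *values; } Channel;            /* cluster containing an array    */
typedef struct { Channel raw; Channel filtered; } Signal;     /* cluster of clusters            */
typedef struct { Signal a, b; char *notes; } SuperCluster;    /* the single thing the DVR holds */

int main(void)
{
    printf("one type, %zu bytes of top-level layout\n", sizeof(SuperCluster));
    return 0;
}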

 

I think you've realized that it has grown to a size that is beginning to concern you. The fact that you asked the question at all is itself a reason to reconsider how the data is being managed.

0 Kudos
Message 6 of 8
(3,574 Views)

Also remember that, as far as memory is concerned, even if the number of elements in your cluster is growing it shouldn't be too bad, because LabVIEW stores only handles to strings and arrays in the cluster, not the actual data.

 

https://www.ni.com/docs/en-US/bundle/labview/page/how-labview-stores-data-in-memory.html
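A small C sketch of that layout idea (illustrative only; the field names are invented and LabVIEW's real handles are a bit more involved than plain pointers): the cluster stores a handle to the array data rather than the data itself, so the cluster's own footprint stays small and fixed even as the arrays grow.

/* Illustrative only: the cluster holds handles, not the array contents. */
#include <stdio.h>

typedef struct {
    int     length;
    double *data;       /* handle to the array contents, stored elsewhere */
} ArrayHandle;

typedef struct {
    ArrayHandle waveform;   /* big array: only a handle lives in the cluster */
    char       *label;      /* string: again just a handle */
    double      scale;      /* scalars are stored inline */
} Cluster;

int main(void)
{
    /* Small and fixed, no matter how large the array behind 'waveform' gets. */
    printf("cluster size: %zu bytes\n", sizeof(Cluster));
    return 0;
}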

Matt J | National Instruments | CLA
Message 7 of 8
(3,531 Views)

This sounds like a problem I recently came across. 

 

Data of the class changed methods of the class...

 

What? When you start worrying about how your class implements the methods.... you forgot to build a child class override of that method 

 

DO method x TO something...

 

J. That is why you need 1 class to rule them all and a lot of config files to hold the UUT specific options....

 

Oops. 


"Should be" isn't "Is" -Jay
0 Kudos
Message 8 of 8
(3,524 Views)