
memory manager optimization

The LabVIEW 2014 Profile Performance and Memory says case 1 is using 3 MB and the other case is using 4 MB.

 

That was the metric. This is on Windows, not RT.

 

mcduff

 

See http://forums.ni.com/t5/LabVIEW/Queue-Memory-Allocation-Weirdness/m-p/1990809#M656598

 

for strange memory problems regarding queues.

Message 11 of 50
(2,026 Views)

Understood.  When I run the Profile Performance and Memory on the RT system, you're basically saying that I can run it one way, look at the "Max Bytes", then run it with the other code option and look at "Max Bytes" for this particular VI and compare?

 

My code with the in-place (IPE) version shows Max Bytes = 72.34k, avg blocks = 25, min blocks = 25, max blocks = 26.

My code with the Array Subset version shows Max Bytes = 88.67k, avg blocks = 27, min blocks = 27, max blocks = 27.

Refactoring to use a DVR, I get Max Bytes = 9.27k, avg blocks = 24, min blocks = 24, max blocks = 24.
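(For anyone following along without an RT target: here is a minimal Python/NumPy sketch of the same compare-the-peak idea, using tracemalloc in place of Profile Performance and Memory. This is an analogy, not LabVIEW; the 1024-element subset size is made up for illustration.)

```python
import tracemalloc
import numpy as np

def with_copy(buf):
    # Analogue of a subset operation that forces a data copy.
    return buf[:1024].copy()

def with_view(buf):
    # Analogue of a subarray/in-place access: no data copy.
    return buf[:1024]

buf = np.zeros(1_000_000)   # allocated before tracing starts

# Run each variant and compare peak traced allocations, the way
# "Max Bytes" is compared between the two code options.
for fn in (with_copy, with_view):
    tracemalloc.start()
    chunk = fn(buf)
    current, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    print(f"{fn.__name__}: peak = {peak / 1024:.2f} KiB")
```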

 

Presumably the rest of the buffer now resides in the top-level VI where the DVR is created... but for some strange reason that VI shows 0 across all tabs in the profiler: 0 RAM, 0 time... even though I know it was running.

 

Thanks for pushing me to check the DVR-wrapped version. It looks clearly superior, and as an added benefit I'm only passing a reference in and out of the 2-3 subVIs I have, instead of the array.

 

 

 

QFang
-------------
CLD LabVIEW 7.1 to 2016
Message 12 of 50
(2,012 Views)

As a side note on weirdness and queues: obtaining a queue with a fixed max depth on an RT target does not pre-allocate memory (or at least it didn't last I checked; maybe that's fixed in 2015). Instead, you had to fill the queue to its maximum size. Then, if you used Flush Queue, the memory could in some instances be de-allocated, so to prevent that the recommendation was (and is) to dequeue (and ignore) elements back down to 0. Of course, queues on RT are potentially a really bad idea anyway if you are on RT for the sake of deterministic execution times. If you don't care about jitter, they are still somewhat fickle, but they have less CPU overhead than the RT FIFOs.
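To make the work-around concrete, here is the fill-then-drain pattern as a minimal Python sketch. Python's queue.Queue does not pre-allocate or release storage the way LabVIEW's RT queues do; this only illustrates the order of operations.

```python
from queue import Queue

MAX_DEPTH = 1000
q = Queue(maxsize=MAX_DEPTH)

# Step 1: force the queue to grow to its fixed maximum depth by
# filling it completely with dummy elements.
for _ in range(MAX_DEPTH):
    q.put(0.0)

# Step 2: drain it one element at a time (dequeue and ignore) instead
# of flushing, so the storage that was just grown is not released.
for _ in range(MAX_DEPTH):
    q.get()
```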

QFang
-------------
CLD LabVIEW 7.1 to 2016
Message 13 of 50
(2,009 Views)

@QFang wrote:

Understood.  When I run the Profile Performance and Memory on the RT system, you're basically saying that I can run it one way, look at the "Max Bytes", then run it with the other code option and look at "Max Bytes" for this particular VI and compare?

 

Yes.

 

I have no experience with RT, only the Windows side. That being said, Profile Performance and Memory usually gives a good indication of the memory, BUT it is not always right. In some situations I would see a difference when I looked at the Windows Task Manager; I do not know if there is anything similar to the Task Manager on the RT side. So what I am saying is: I would optimize first with Profile Performance and Memory, then with the Task Manager if the two gave different answers. Generally, I use DVRs for large arrays. It is easier to pass around a reference than an array.
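To illustrate the pass-a-reference idea outside LabVIEW, here is a minimal Python sketch of a DVR-like handle. The BufferRef class is hypothetical and a real DVR is not implemented this way; the point is only the pattern of handing subVIs a small reference instead of the array itself.

```python
import threading
import numpy as np

class BufferRef:
    """A rough stand-in for a DVR: one shared array behind a handle,
    with a lock to mimic the element's exclusive in-place access."""
    def __init__(self, size):
        self._data = np.zeros(size, dtype=np.float64)
        self._lock = threading.Lock()

    def replace_subset(self, offset, values):
        # Write into the shared buffer in place; only this small
        # handle ever crosses the "subVI" boundary, never the array.
        with self._lock:
            self._data[offset:offset + len(values)] = values

    def read_subset(self, offset, count):
        with self._lock:
            return self._data[offset:offset + count].copy()

ref = BufferRef(1_000_000)             # created once in the top-level "VI"
ref.replace_subset(0, np.ones(1024))   # subVIs receive only `ref`
```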

 

Cheers,

mcduff

Message 14 of 50
(1,995 Views)

There's a bunch of misleading information here.

 

First, there's no reason to use an IPE structure. LabVIEW will allocate a subarray for the array subset, which means allocating only a new starting point, array length, and stride (the gap between elements, if you were decimating or reversing the array), but pointing at the original data (NOT making a copy). For more information on subarrays, see https://lavag.org/topic/7307-another-reason-why-copy-dots-is-a-bad-name-for-buffer-allocations/
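For a hands-on analogue of subarrays: NumPy slices behave the same way, as a small descriptor (offset, length, stride) over the original buffer rather than a copy. This is an analogy, not LabVIEW's actual implementation.

```python
import numpy as np

data = np.arange(10)

# A slice is a view: a new descriptor (start, length, stride)
# over the same underlying buffer, not a copy of the data.
sub = data[2:8:2]        # start=2, length=3, stride=2

print(np.shares_memory(data, sub))  # True: no data was copied
sub[0] = 99
print(data[2])                      # 99: writes go to the original buffer
```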

 

Mcduff's example snippets also may not be that helpful, because the array size is constant, so the Initialize Array can get folded into an array constant. This makes sense if the array is actually constant, but if you do something that modifies the array (which could include splitting and recombining it, depending on how smart the compiler is), LabVIEW then needs to make a copy of that entire constant array, because constants are read-only. I suspect that the DVR works around this in some way, which would explain why the non-DVR case is much worse - but I would still recommend avoiding the IPE entirely here, and if you want accurate measurements, replace the array sizes with controls. See https://lavag.org/topic/7063-why-does-the-replace-array-subset-double-the-used-memory/
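As a loose illustration of the constant-folding point, using NumPy's writeable flag as a stand-in for a folded, read-only constant (an analogy only, not LabVIEW's actual mechanism):

```python
import numpy as np

folded = np.zeros(1_000_000)        # stands in for a folded array constant
folded.flags.writeable = False      # constants are read-only

def modify(arr):
    # Any write to the read-only "constant" forces a copy of the
    # entire array first, exactly the cost described above.
    out = arr if arr.flags.writeable else arr.copy()
    out[0:1024] = 1.0
    return out

result = modify(folded)   # copies all ~8 MB before touching 1024 elements
```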

Message 15 of 50
(1,977 Views)

Pure gold, those links, nathand. THANKS!

 

Since my buffer array is 'carried around' a bit, and is indeed of a fixed size, I think it still makes sense in general to pass the DVR reference around instead of the array. For the example shown (where I split the array and write a portion to file), I see what you mean, nathand, and will rework that portion to not use the IPE. (Where I put data into the fixed array buffer, I assume I would still be better off using IPEs? After all, I'm specifically looking to replace big chunks of the array with new values, in place... seems like the textbook example of when to use an IPE?)

 

I will do some more benchmarks tomorrow to look at performance, both speed and RAM use, and see whether there are any big upsides or downsides on either metric one way or the other.

 

Thanks to all of you who are posting and participating!

QFang
-------------
CLD LabVIEW 7.1 to 2016
Message 16 of 50
(1,969 Views)

You are probably correct on all accounts, as usual. I believe a buffer will be allocated for the file-write part regardless of whether you use an IPE or Array Subset; the only question is whether the IPE or Array Subset makes a copy. I always use the IPE because at one point, when I rewrote the Min-Max Decimation VI, there was a memory copy due to the Array Subset, while with the IPE there was no extra buffer. Again, I do not know which version that was; I can post the Min-Max Decimation VI here if anybody wants to examine it.
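(For context, min-max decimation itself is a simple algorithm. Here is a generic NumPy version; this is not mcduff's VI and says nothing about the IPE-vs-subset copy question, it just shows what the operation does.)

```python
import numpy as np

def min_max_decimate(samples, factor):
    """Reduce `samples` by keeping the min and max of each chunk of
    `factor` points, preserving peaks for display purposes."""
    n = (len(samples) // factor) * factor
    chunks = samples[:n].reshape(-1, factor)   # a view, no copy
    out = np.empty(2 * chunks.shape[0], dtype=samples.dtype)
    out[0::2] = chunks.min(axis=1)             # interleave min...
    out[1::2] = chunks.max(axis=1)             # ...and max per chunk
    return out

decimated = min_max_decimate(np.random.randn(1_000_000), 1000)
```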

 

Cheers,

mcduff

Message 17 of 50
(1,960 Views)

@QFang wrote:

(Where I put data into the fixed array buffer, I assume I would still be better off to use IPE's? After all, I'm specifically looking to replace big chunks of the array with new values, in-place.. seems like the text book example of when to use IPE?)


I think you're attributing some magic to an IPE. The compiler already arranges code to do work in-place when it determines that's possible. There should be no advantage to doing Replace Array Subset on its own, versus splitting the array in an IPE and then substituting part of the array with the new data before recombining. In both cases, you're copying new data into a portion of the existing array.

 

The textbook use of an IPE is when you want to operate in-place on the elements of the array (as shown for example in the IPE help for Array Split/Replace Subarrays). If your input to one of the subarrays originates from somewhere other than the corresponding IPE subarray outputs, then you gain nothing from the IPE, because there's nothing that can be done in-place there. The two subarrays (the existing data and the new values you want to replace them with) are both in memory at the same time, in different memory locations, and you have to copy one over the other.
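In NumPy terms (again only an analogy), Replace Array Subset behaves like slice assignment: the new values are copied into the existing allocation, and no second full-size buffer appears, with or without extra structure around it.

```python
import numpy as np

buf = np.zeros(1_000_000)
new_values = np.ones(1024)

addr_before = buf.__array_interface__['data'][0]

# The replace: new data is copied over part of the existing buffer.
buf[5000:5000 + new_values.size] = new_values

addr_after = buf.__array_interface__['data'][0]
print(addr_before == addr_after)   # True: same underlying allocation
```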

Message 18 of 50
(1,953 Views)

nathand is correct.

Case A is the best as far as both memory and code simplicity.

Case B uses the same memory as Case A but more code.

Case C uses the most memory.

(Note: the delay is there to monitor memory usage in the Windows Task Manager. It should be no more than 3 MB, which is true for A and B.)

 

Case A

CaseA.png

 

Case B

CaseB.png

 

Case C

CaseC.png

Message 19 of 50
(1,937 Views)

What about the case with no DVR? I doubt the DVR makes a difference in memory use; it might slightly hurt performance because there's an extra level of dereferencing needed to get to the data.

Message 20 of 50
(1,925 Views)