LabVIEW

Is the Delete From Array function optimized when deleting from the front or end of an array?

Solved!
Go to solution
Recently I have been dealing with some large data sets, which often end up triggering the "out of memory" message. I rarely worked with such large data before, so I have frequently used the Delete From Array function in my program to access portions of arrays, especially the last elements, which Delete From Array makes very convenient. But now I'm wondering whether I should replace all of them with Array Subset, which performs no data copying. The point is: is the Delete From Array function optimized to avoid data copying when deleting from the front or end of an array?
0 Kudos
Message 1 of 13
(7,169 Views)

I can't currently prove this, but I suspect that Delete From Array is optimized for the case where the other inputs are left unwired (deleting the last element of the array). I'm going to guess that in that situation LabVIEW can leave the existing array in place and simply reduce the array size by 1. In a quick test, I don't see any difference in buffer allocations for that case versus array subset, other than that delete from array is less code. However, deleting from the beginning of an array, or deleting multiple elements, is probably less efficient.
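The "leave the buffer in place and shrink the logical size" idea described above can be sketched outside LabVIEW. This is a hypothetical Python/NumPy analogy of that optimization, not LabVIEW's actual implementation; the `TruncatableArray` class and its methods are illustrative names only:

```python
import numpy as np

class TruncatableArray:
    """Sketch of the 'shrink in place' idea: deleting the last element
    only decrements a logical length; the buffer is never copied."""

    def __init__(self, data):
        self._buf = np.asarray(data)
        self._len = len(self._buf)

    def delete_last(self):
        # O(1): no data moves, we simply stop looking at the tail.
        if self._len == 0:
            raise IndexError("delete from empty array")
        self._len -= 1

    def view(self):
        # A NumPy slice is a view into the same buffer, not a copy.
        return self._buf[:self._len]

a = TruncatableArray([1, 2, 3, 4])
a.delete_last()
print(a.view())  # [1 2 3]
```

If LabVIEW does something like this for the unwired-index case, "delete last" would cost nothing regardless of array size.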

0 Kudos
Message 2 of 13
(7,136 Views)

Thank you for replying.

Following your thinking, I inferred that deleting n elements from the beginning of an array could also be optimized, by simply reducing the array size by n and moving the array pointer forward n positions.

I found that the buffer allocations are no different for Delete From Array with and without the index input wired (see attached figure). Still, a buffer allocation doesn't necessarily mean data copying, does it?

0 Kudos
Message 3 of 13
(7,127 Views)

@hlcfreelook wrote:
I rarely worked with such large data before, so I have frequently used the Delete From Array function in my program to access portions of arrays, especially the last elements, which Delete From Array makes very convenient.

Please define what you mean by "access". To get a single element, you would use Index Array, and to get a subset of the array, you would use Array Subset. Neither resizes nor touches the original array, which is the ideal case. The Delete From Array function typically creates two new arrays: the deleted portion and the original with that part removed. There are many scenarios where the operation must copy both arrays.
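The distinction drawn above between functions that only read (no copy) and one that produces new arrays (copies) has a close analogue in NumPy, shown here purely as a cross-language illustration, not as a statement about LabVIEW's internals:

```python
import numpy as np

a = np.arange(10)

# "Index Array" analogue: reads one element, original untouched.
last = a[-1]

# "Array Subset" analogue: a slice is a view into the same buffer.
subset = a[2:5]
print(np.may_share_memory(a, subset))   # True: nothing was copied

# "Delete From Array" analogue: np.delete builds a brand-new array.
trimmed = np.delete(a, 0)
print(np.may_share_memory(a, trimmed))  # False: the data was copied
```

The same question the original poster asks about LabVIEW (does "delete" secretly avoid the copy at the ends?) is answered "no" for `np.delete`, which always allocates fresh output.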

 

Can you describe in more detail how you are accessing the array and what you do with it later?

0 Kudos
Message 4 of 13
(7,122 Views)

Sorry for the vagueness.

When I use Delete From Array, sometimes I just want to see the elements of the last row; other times I want to split the array into two parts (the first row and the rest) for later calculation (shown in the attached PNG).

Surely I can do both tasks with Array Subset, but I have unconsciously used Delete From Array in my previous code, as it is handier for these tasks than Array Subset. With Delete From Array, in the first task you don't need to know the array size beforehand, and in the second task you get both portions of the array from a single function.

tasks.PNG
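Both of the tasks described above (read the last row without knowing the size; split off the first row) can be done without copying in array libraries that support views. A hedged NumPy sketch, again only as an analogy to the LabVIEW Array Subset approach:

```python
import numpy as np

data = np.random.rand(1000, 1000)

# Task 1: look at the last row. Negative indexing removes the need
# to know the array size beforehand, and the result is a view.
last_row = data[-1]

# Task 2: split into the first row and the rest. Both halves are
# views of the original buffer; no element is copied.
first_row, rest = data[0], data[1:]

print(last_row.shape, first_row.shape, rest.shape)
```

In LabVIEW terms, this corresponds to one Index Array call and one Array Subset call rather than a Delete From Array.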

0 Kudos
Message 5 of 13
(7,116 Views)

Yes, that's theoretically true about incrementing the pointer to delete elements at the beginning, but it is more complicated because the amount to increment must be calculated (for example, in the 2D array case you showed, or for an array of clusters). There is also the issue of LabVIEW needing to check for the special case of the index being 0: does it do that at compile time (only for a constant input of 0) or at run time (with an unknown value wired in)? That's why I think it's probably optimized only for the unwired case. If it weren't optimized at all, the index would probably default to 0 like other array functions; there's presumably a reason it defaults to the last element, and I suspect it's to allow a simple optimization for that case.

 

I don't know whether that applies to 2D arrays, though. I mistakenly assumed 1D, since you didn't mention the dimensionality in the first post.

0 Kudos
Message 6 of 13
(7,086 Views)

Your inferences make sense. The trade-off between such an optimization and its overhead may not prove profitable, especially since there is already a solution (the Array Subset approach).

Thanks again for replying.

0 Kudos
Message 7 of 13
(7,072 Views)

So I asked myself, "What would Crossrulz do?".  I'm not sure this would be the answer, but anyway ...

 

I did an experiment.  I generated a 1000x1000 array of random numbers, then did the following:  Timed how long it took to do 1000 removals of the last row, 1000 removals of the first row, and 1000 removals of row 123 (a "random" row).

 

The answer was that it took 3.76 ms to remove the first row, 3.57 ms for the last row, and 3.26 ms for a middle row. Yes, but how reproducible was this? So I wrapped the entire thing in another For Loop and ran it 1000 times. The results were shocking!

 

The first set of times was roughly twice as long as all the rest! I split out the first times, took the mean and standard deviation of the rest, and the results are shown below. Note that we now get a more "expected" ordering: first- and last-row deletions are slightly faster than middle-row deletions, but again, the difference is small enough that for arrays of this size, the question is moot.

 

It might be interesting (for the experimenters out there) to vary other things, such as array size. But it might also be fair to say that the folks at NI who implement LabVIEW have a lot of experience optimizing the code, so we probably shouldn't worry too much about it.
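For anyone without LabVIEW handy, the shape of this benchmark is easy to reproduce in Python with NumPy. This sketch mirrors the experiment above (1000x1000 array, deleting the first, last, and a middle row), with a warm-up call standing in for the inflated first iteration Bob observed; timings will of course differ from the LabVIEW numbers:

```python
import timeit
import numpy as np

data = np.random.rand(1000, 1000)

def bench(row):
    # np.delete always allocates a fresh (999, 1000) output array,
    # so first/last/middle should all cost about the same.
    return timeit.timeit(lambda: np.delete(data, row, axis=0),
                         number=100)

# Warm up once so one-time effects (allocation, caching) don't skew
# the first measurement, mirroring the inflated first run seen above.
bench(0)

for label, row in [("first", 0), ("last", 999), ("middle", 123)]:
    print(f"delete {label} row: {bench(row):.4f} s / 100 deletions")
```

If the three timings come out nearly equal, that is consistent with a full copy being made in every case.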

Deletion Timing.png

 

Bob Schor

0 Kudos
Message 8 of 13
(7,070 Views)

Let's see how much we can learn from this benchmark. If you disable debugging, it completes in an instant because of dead-code elimination. From the similarity between the various delete locations (when debugging is enabled and the dead code executes), we can conclude that there is probably no significant in-place optimization when deleting from one of the ends. You also left out the interesting case where you delete columns instead of rows. That case takes a similar amount of time too, and since arrays must be contiguous in memory, I suspect that ALL variations create a new copy for the outputs; otherwise we would see large differences. Just guessing.

 

Any time "delete from array" is used repeatedly in a relatively tight loop, there is probably a better way. Often the entire algorithm can be designed to be much more "in place". If I use this function at all, I use it exactly once, before any tight loop.
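The "restructure the algorithm instead of deleting in a loop" advice translates directly to other array languages. A hedged NumPy sketch of the contrast (the loop bodies here are placeholders for whatever per-row work the real program does):

```python
import numpy as np

data = np.random.rand(1000, 1000)

# Anti-pattern (in copy-on-delete systems): repeatedly deleting row 0
# and looping over what's left copies the remainder on every pass,
# giving O(n^2) data movement overall. Better: never delete at all,
# and just walk the rows of the original array.
total = 0.0
for row in data:        # each 'row' is a view into 'data', no copying
    total += row.sum()

# If a one-time split really is needed, do it once, before the loop.
first, rest = data[0], data[1:]
print(total, rest.shape)
```

This is the same discipline as in LabVIEW: at most one split up front, then tight-loop work that only reads or operates in place.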

Message 9 of 13
(7,058 Views)

... and THAT, folks, is why Altenbach is a LabVIEW Champion and Knight of NI.

 

BS

0 Kudos
Message 10 of 13
(7,038 Views)