07-21-2009 05:17 PM
I'm trying to improve on the canned NI VI Threshold Peak Detector, so I started to write my own. I can't figure out why this VI would take 60 seconds to execute. See attached jpeg.
Admittedly, the array can be large, in this case 300k elements, but I shoot through arrays 4 times that size all the time. It must be something obvious, I just can't see it.
If I disable the case structure, the boolean array fills up in about 0.25 seconds, which is normal.
Any ideas?
07-21-2009 05:24 PM
07-21-2009 05:26 PM
Did that, same result.
07-21-2009 05:29 PM
Looks like this now, no change. Weird.
I put an indicator on i and it looks like the first 100k fills fast (3 seconds) the rest takes over a minute.
07-21-2009 05:40 PM
It's called memory thrashing, which is what happens when you use Build Array inside a loop. See, for example, Darren's recent nugget on this. Arrays need to be contiguous in RAM, so as the array grows LabVIEW must reallocate more memory, and it must make sure that it does so with a contiguous block. If none is available you will actually get an out-of-memory error.
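The cost of growing an array one element at a time can be sketched in a text language. This is a rough Python/NumPy analogue, not the actual VI: `np.append` reallocates and copies on every call, much like Build Array inside a LabVIEW loop.

```python
import time
import numpy as np

def build_by_appending(n):
    """Collect 0..n-1 by growing the array one element at a time."""
    out = np.empty(0, dtype=np.int64)
    for i in range(n):
        # np.append returns a brand-new array every call: a fresh
        # contiguous block is allocated and every existing element is
        # copied over, much like Build Array inside a loop.
        out = np.append(out, i)
    return out

for n in (5_000, 20_000):
    t0 = time.perf_counter()
    build_by_appending(n)
    print(f"n={n}: {time.perf_counter() - t0:.3f} s")
# The repeated copies make the total work grow roughly quadratically
# with the element count, so the slowdown gets worse as the array grows.
```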
What exactly don't you like about the canned Threshold Peak Detector that you felt required improvement?
07-21-2009 05:56 PM
The indicator should also be dumped. This approach is probably good to a few million elements on a typical machine. If you want to be a little more memory friendly, I would initialize an array with the same size as the original array and wire this to the shift register containing the results. Add a second shift register initialized to 0 which keeps track of how many points you have found. Instead of building the array, use the second shift register as an index to replace the element in the result array. When the loop is complete, add an Array Subset to trim the array and you are set.
All that being said, if it is faster than the built-in function I'd be surprised.
07-21-2009 06:02 PM
Thanks for the explanation, any workarounds?
The threshold peak detector has a very nice built in feature known as Width. The Width is the minimum contiguous samples that must exceed Threshold in order to count as a peak.
Unfortunately, Width only applies for determining when a peak starts, not when it ends. The routine would be more useful to me if a single sample below Threshold didn't reset the routine looking for another peak. Also, in order to find falling edges, I have to invert the array and run the detector again. I thought I'd take care of it all in one routine.
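The desired behavior can be sketched in pseudocode form. This is a hypothetical Python sketch of the semantics described above, not the NI implementation: a peak opens only after `width` consecutive samples exceed the threshold, and closes only after `width` consecutive samples fall below it, so a single low sample can't reset the search and both edges come out of one pass. The exact edge-index conventions here are my own assumptions.

```python
def find_peaks(data, threshold, width):
    """Return (start, end) index pairs for regions above `threshold`.

    A peak opens after `width` consecutive samples exceed the
    threshold; it closes only after `width` consecutive samples fall
    back below it. `end` is the index of the first sample of the
    closing below-threshold run (the falling edge).
    """
    peaks = []
    above = 0          # consecutive samples above threshold
    below = 0          # consecutive samples below threshold
    start = candidate = None
    in_peak = False
    for i, x in enumerate(data):
        if x > threshold:
            if above == 0:
                candidate = i        # possible rising edge
            above += 1
            below = 0
            if not in_peak and above >= width:
                in_peak = True
                start = candidate
        else:
            below += 1
            above = 0
            if in_peak and below >= width:
                peaks.append((start, i - width + 1))  # falling edge
                in_peak = False
    if in_peak:
        peaks.append((start, len(data)))  # peak ran off the end
    return peaks
```

Note how the single dip inside a peak (sample 5 below) does not split the peak: `find_peaks([0, 0, 3, 3, 3, 0, 3, 0, 0, 0], 1, 2)` returns `[(2, 7)]`.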
07-21-2009 06:22 PM
07-21-2009 06:53 PM
Please always attach the actual VI instead of a picture so we can play around with it without having to spend time rewriting from scratch.
Also include typical data and expected results.
(I would say the VI in the first picture is slow because you have execution highlighting enabled.... :D)
07-21-2009 07:31 PM - edited 07-21-2009 07:34 PM
OK, as others have already said, you are thrashing your memory. Here's a quick rewrite that does 300k in about 10ms and 3M points in 100ms on my very old computer.
The critical point is to operate in place on arrays of fixed size. Here we allocate the output array at the worst-case size once. (Worst case is when all points match, so the output array is the same size as the input array!)
We keep the insert point in another shift register and increment it whenever we place a new index using "replace array subset". The other case has everything wired across unchanged. At the end, we trim the indices array to the correct size once. We have one array resizing operation instead of a huge number.
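The rewrite above can be expressed in a text language along these lines. This is a hedged Python/NumPy analogue of the diagram, assuming a simple above-threshold test stands in for whatever the case structure computes: allocate once at the worst case, keep the insert point as a loop-carried count (the second shift register), write in place (Replace Array Subset), and trim exactly once at the end.

```python
import numpy as np

def matching_indices(data, threshold):
    # Allocate the output once at the worst case: every point matches,
    # so the output can never be larger than the input.
    out = np.empty(data.size, dtype=np.int64)
    count = 0                    # "insert point" shift register
    for i, x in enumerate(data):
        if x > threshold:
            out[count] = i       # Replace Array Subset: in place, no copy
            count += 1
        # the non-matching case carries everything through unchanged
    return out[:count]           # trim to the real size exactly once

data = np.random.default_rng(1).standard_normal(300_000)
idx = matching_indices(data, 2.0)
print(f"found {idx.size} points above threshold")
```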