10-27-2006 11:18 AM
10-27-2006 11:42 AM
10-27-2006 11:44 AM
Don't have LV on my network PC, but here are quick thoughts:
1. Comparing floating-point numbers for equality is almost always a Bad Idea, which is another strike against "Search 1D Array."
2. "Threshold 1D Array" expects an array that's sorted in ascending order, which it sounds like you don't have.
3. Unless your incoming 1D array has some other characteristic to exploit, you may be stuck with raw code. I'd think you could handle an array of maybe 100 thousand samples in 0.1 seconds.
4. I'd bring the array into a while loop without auto-indexing and index two values out of the array on each pass through the loop: one index increments forward from the beginning of the array while the other decrements backward from the end. The loop ends when both the forward-incrementing index and the backward-decrementing index have found a value greater than your threshold, or when the indices "cross". Of course, once the backward-moving index finds your latest value, later iterations would bypass that half of the loop code, and likewise for the forward-moving one.
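The two-index scan in item 4 can be sketched in text form. This is a rough Python rendering of the idea, not the LabVIEW diagram itself; the function and variable names are my own:

```python
def first_and_last_above(data, threshold):
    """Return (first_index, last_index) of elements exceeding threshold,
    or (None, None) if no element exceeds it."""
    fwd = 0                 # forward index, starts at the beginning
    bwd = len(data) - 1     # backward index, starts at the end
    first = last = None
    while fwd <= bwd:       # stop if the indices "cross"
        if first is None:
            if data[fwd] > threshold:
                first = fwd          # earliest match found; freeze this index
            else:
                fwd += 1
        if last is None:
            if data[bwd] > threshold:
                last = bwd           # latest match found; freeze this index
            else:
                bwd -= 1
        if first is not None and last is not None:
            break                    # both ends found; done
    return (first, last)
```

Note that once one end is found, that half of the loop body is skipped on later iterations, as described above; no intermediate arrays are allocated.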
Hopefully someone else can post an example or screenshot. I'd do it myself if I had LV handy here...
-Kevin P.
10-27-2006 12:05 PM - edited 10-27-2006 12:05 PM
How is this for starters?
(Inspired by Christian's reply)
Ben
Message Edited by Ben on 10-27-2006 12:06 PM
10-27-2006 01:05 PM
10-27-2006 02:26 PM
The only other comment I'd add is that altenbach's and Ben's solutions are simpler in concept and less susceptible to a buggy implementation than mine. Unless your arrays are really big (tens or hundreds of thousands of elements or so), I'd expect their methods to meet your timing requirements.
If your arrays ARE really big and those methods are too slow (regularly or occasionally), it'd be because both of them make a really big Boolean array (or two) each time you call them. ONLY then would I suggest you bother working out the details of my suggestion, where no new arrays need to be allocated.
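For reference, the Boolean-array approach (as I understand it from the thread; the posted LabVIEW diagrams aren't visible here, so this is an assumption about what they do) amounts to comparing the whole array against the threshold in one shot and then searching the resulting Boolean array from both ends. A rough NumPy equivalent, with names of my own choosing:

```python
import numpy as np

def first_and_last_above_mask(data, threshold):
    """Boolean-array method: one big comparison, then search the mask."""
    mask = np.asarray(data) > threshold   # allocates a full-size Boolean array
    hits = np.flatnonzero(mask)           # indices of all matching elements
    if hits.size == 0:
        return (None, None)
    return (int(hits[0]), int(hits[-1]))
```

The full-size `mask` array is exactly the allocation being discussed above: simple and fast in the common case, but it grows with the input.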
-Kevin P.
10-27-2006 02:35 PM
11-01-2006 03:29 PM
I threw together a quick example that seemed to work pretty fast. But then, just for comparison's sake, I timed it against the version Ben posted. Except for a few cases where the earliest and latest matching elements were quite near the beginning and end of the array, his was faster, even for array sizes of 1 million! I want to look it over more closely, but barring any further report back here, it appears that my method has NO advantages after all.
(For test data, I generated Gaussian noise and used a comparison threshold that I varied between about 3 and 6 standard deviations.)
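The shape of that timing test can be re-created in Python. This obviously can't reproduce the LabVIEW results (in Python the vectorized NumPy version wins for a different reason, interpreter overhead), and the data parameters just follow the description above; function names are mine:

```python
import time
import numpy as np

def scan_two_index(data, threshold):
    """Two-index scan: allocates no intermediate arrays."""
    fwd, bwd = 0, len(data) - 1
    first = last = None
    while fwd <= bwd:
        if first is None:
            if data[fwd] > threshold:
                first = fwd
            else:
                fwd += 1
        if last is None:
            if data[bwd] > threshold:
                last = bwd
            else:
                bwd -= 1
        if first is not None and last is not None:
            break
    return (first, last)

def mask_search(data, threshold):
    """Boolean-array method: compare everything, then search the mask."""
    hits = np.flatnonzero(np.asarray(data) > threshold)
    return (None, None) if hits.size == 0 else (int(hits[0]), int(hits[-1]))

# Gaussian noise, threshold a few standard deviations out, as described.
data = np.random.default_rng(0).normal(size=1_000_000)
threshold = 4.0

for fn in (scan_two_index, mask_search):
    t0 = time.perf_counter()
    result = fn(data, threshold)
    print(f"{fn.__name__}: {result}  ({time.perf_counter() - t0:.4f} s)")
```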
-Kevin P.
11-02-2006 08:02 AM
Kevin wrote:
"... it appears that my method has NO advantages after all."
It takes guts to make a posting like that!
I suspect the bulk of the time in my version is spent allocating memory. I also suspect that the buffers, once allocated, remain allocated for use during subsequent calls.
Since I suspect your version did not create an intermediate buffer (my Booleans), the competition came down to how quickly I could fill in my buffer vs. how fast your while loop could iterate.
Still guessing....
Since the while loop stalls pipelining, I suspect the repeated checks of whether the loop should stop are the bottleneck.
On the plus side!
If my guesses are correct, it's nice to know that LV can run the simple solution so efficiently.
Ben
11-02-2006 10:58 AM