01-22-2013 12:31 PM
Ok,
I'm sure this is a Rube, I just can't see it yet! Can anyone suggest a non-Rube way of adding these two arrays together and reshaping them to get the desired resultant array?
(I know I could have posted on the forum, but this is so blatantly going to end up here anyway!)
James
01-22-2013 01:06 PM
@bsvare wrote:
Going through an old thread, I found a rube posted by most of the people here.
http://forums.ni.com/t5/LabVIEW/Tally-array-value-instances/m-p/1748174
And here's the non-rube version:
Except for the fact that you can actually look inside the diagram of the Unique Numbers and Multiplicity VI, and you will notice that it is not very efficient, because it re-searches the array for every element and builds two arrays in shift registers. I bet that some of the offered solutions in the quoted thread would be several orders of magnitude more efficient than the stock VI for large inputs (not tested).
My best bet is that the code you present here is quite rube-ish under the hood compared to the streamlined solutions. 😄
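For anyone without LabVIEW handy, here is a rough Python sketch of the search-based pattern described above (re-searching the array for every new element while growing two result arrays). It is only an illustration of the approach, not the actual VI, and the function name is made up for the sketch:

```python
# Rough sketch (Python, not the actual VI): tally unique values by
# re-searching the list of values seen so far for every new element.
# With many distinct values this does O(N^2) work for N inputs.

def tally_by_search(values):
    """Return (unique_values, counts) using a linear search per element."""
    uniques = []   # plays the role of one shift-register array
    counts = []    # plays the role of the other shift-register array
    for v in values:
        if v in uniques:                  # linear re-search every time
            counts[uniques.index(v)] += 1
        else:
            uniques.append(v)
            counts.append(1)
    return uniques, counts

print(tally_by_search([3, 1, 3, 2, 1, 3]))  # -> ([3, 1, 2], [3, 2, 1])
```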
01-22-2013 01:10 PM - edited 01-22-2013 01:11 PM
That code is from NI. They never write rube-ish code.
"I won't be wronged. I won't be insulted. I won't be laid a-hand on. I don't do these things to other people, and I require the same from them." John Bernard Books
01-23-2013 11:31 AM - edited 01-23-2013 11:34 AM
@altenbach wrote:
@bsvare wrote:
Going through an old thread, I found a rube posted by most of the people here.
http://forums.ni.com/t5/LabVIEW/Tally-array-value-instances/m-p/1748174
And here's the non-rube version:
Except for the fact that you can actually look inside the diagram of the Unique Numbers and Multiplicity VI, and you will notice that it is not very efficient, because it re-searches the array for every element and builds two arrays in shift registers. I bet that some of the offered solutions in the quoted thread would be several orders of magnitude more efficient than the stock VI for large inputs (not tested).
My best bet is that the code you present here is quite rube-ish under the hood compared to the streamlined solutions. 😄
You know, that comment actually bothered me. Maybe because of this thread: Find from 1D array and build new one. So, I took all the code samples in that thread and appended mine to the list. I wrote mine before I knew of the existence of NI's code. I then created a timing VI that builds an array of 500 samples of rand(1,50) and got these results:
LandBelenky Top: 153ms
LandBelenky Bottom: 332ms
Silver_Shaper: 85ms
Darin.K: 36ms
Altenbach First: 14ms
Altenbach Second: 74ms
NIs: 25ms
Mine: 19ms
I then figured there's got to be a significant difference depending on the number of distinct values. So, I altered the build to an array of 500 samples of rand(1,65000) and got these results:
LandBelenky Top: 22,437ms
LandBelenky Bottom: 851ms
Silver_Shaper: 121,444ms
Darin.K: 107ms
Altenbach First: 155ms
Altenbach Second: 822ms
NIs: 143ms
Mine: 133ms
So... NI's code is very comparable to the best times for many repeats, and close to the best time for few repeats. (Note: I'm not checking whether the results are identical, nor checking for negative numbers. Both of these will need to be taken into account to decide whose code is really the best. I also did not attempt to optimize any of the functions by applying parallel loops.)
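For reference, here is a rough Python sketch of the kind of timing harness described above. It stands in for the timing VI and assumes that "rand(1,50)" means random integers from 1 to 50; the function names are made up for the sketch, so treat it as an illustration rather than the code that produced the numbers:

```python
import random
import time

def benchmark(implementations, n_samples=500, max_value=50, repeats=500):
    # Pre-generate the random arrays so every implementation is timed
    # on exactly the same data.
    data = [[random.randint(1, max_value) for _ in range(n_samples)]
            for _ in range(repeats)]
    for name, func in implementations.items():
        start = time.perf_counter()
        for arr in data:
            func(arr)
        elapsed_ms = (time.perf_counter() - start) * 1000
        print(f"{name}: {elapsed_ms:.0f} ms over {repeats} runs")

# Hypothetical usage, with the tally implementations defined elsewhere:
# benchmark({"search-based": tally_by_search, "sort-based": tally_by_sort})
```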
"I won't be wronged. I won't be insulted. I won't be laid a-hand on. I don't do these things to other people, and I require the same from them." John Bernard Books
01-23-2013 11:40 AM
(Sorry, posting by phone)
Yes, NI's code is fast for small arrays (500 is small!). Try 100,000 elements. What was the average number of duplicates?
01-23-2013 12:48 PM
@altenbach wrote:
(Sorry, posting by phone)
Yes, NI's code is fast for small arrays (500 is small!). Try 100,000 elements. What was the average number of duplicates?
I built an array of 100,000 samples of rand(1,50) and got these results (ran 5 times instead of 500):
LandBelenky Top: 298ms
LandBelenky Bottom: 616ms
Silver_Shaper: 146ms
Darin.K: 86ms
Altenbach First: 6ms
Altenbach Second: 76ms
NIs: 36ms
Mine: 26ms
I built an array of 100,000 samples of rand(1,65000) and got these results (ran 1 time instead of 500):
LandBelenky Top: 6,850ms
LandBelenky Bottom: 16,581ms
Silver_Shaper: 39,492ms
Darin.K: 2,985ms
Altenbach First: 1ms *Can't be accurate*
Altenbach Second: 14,396ms
NIs: 2,629ms
Mine: 2,613ms
"I won't be wronged. I won't be insulted. I won't be laid a-hand on. I don't do these things to other people, and I require the same from them." John Bernard Books
01-23-2013 01:07 PM - edited 01-23-2013 01:08 PM
Note that my "second" also uses "build array" in a loop, so there are definitely significant improvements possible. It is faster for some other sizes, e.g. 500,000 elements (range 0:1000).
(Note that I labeled it "simple version" in the original thread; it was not meant to be optimized for speed.) 😄
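As a rough illustration of that reallocation cost (NumPy standing in for LabVIEW arrays here, with np.append copying the whole array on every call much like Build Array in a loop), compare growing an array element by element with allocating it once up front:

```python
import numpy as np

def grow_in_loop(n):
    out = np.empty(0, dtype=np.int64)
    for i in range(n):
        out = np.append(out, i)        # reallocates and copies every pass
    return out

def preallocate(n):
    out = np.empty(n, dtype=np.int64)  # allocate once, then fill in place
    for i in range(n):
        out[i] = i
    return out
```

For large n the first version slows down dramatically, while the second stays roughly linear.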
01-23-2013 01:42 PM - edited 01-23-2013 02:20 PM
All code that re-searches the array for each new element will probably be O(N²), while sorting is O(N log N) (with all the later processing ~O(N) and thus irrelevant). For large N, presorting will always be better.
(This problem is small if the number of unique elements is small, because the searched array remains small.)
Here's a quick draft where I remove the reallocation penalty of my code. See if it performs any better. 😉 (it does in my testing :D)
(not fully tested, so check for bugs and correct operation)
In general, you should also disable debugging for benchmarking.
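As a text-language illustration of the presorting argument (this is not the attached draft VI, just a Python sketch with a made-up function name), a sort-then-count tally might look like this:

```python
# Rough sketch: sort once (O(N log N)), then count runs of equal values
# in a single O(N) pass instead of re-searching for every element.

def tally_by_sort(values):
    """Return (unique_values, counts) via sort + single pass."""
    uniques, counts = [], []
    for v in sorted(values):
        if uniques and uniques[-1] == v:
            counts[-1] += 1            # still inside the current run
        else:
            uniques.append(v)          # a new run starts here
            counts.append(1)
    return uniques, counts

print(tally_by_sort([3, 1, 3, 2, 1, 3]))  # -> ([1, 2, 3], [2, 1, 3])
```

Note that this returns the unique values in sorted order rather than first-seen order, which is exactly the kind of "are the results similar" detail mentioned in the benchmarks above.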
01-30-2013 07:14 AM
This one is a nice piece of art. My 27" monitor was not wide enough to see the entire block diagram.
Definitely authentic RG code. It creates 29 1D arrays and then combines them into a 2D array before sending it to a graph... instead of creating a 2D array and working with it from the start.
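For illustration only (Python/NumPy standing in for LabVIEW arrays, not the code in the screenshot, and the sizes are made up), the difference is roughly:

```python
import numpy as np

n_channels, n_points = 29, 100

# Rube Goldberg style: 29 separately named 1D arrays combined at the end.
# ch01 = np.random.rand(n_points); ch02 = np.random.rand(n_points); ...
# plot_data = np.vstack([ch01, ch02, ...])   # 29 wires into one node

# Simpler: keep one 2D array from the start and fill it row by row.
plot_data = np.zeros((n_channels, n_points))
for row in range(n_channels):
    plot_data[row] = np.random.rand(n_points)
# plot_data can now go straight to the graph.
```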
Found here...
Here is a partial image of the masterpiece:
01-30-2013 09:27 AM
Maybe it's just that I'm fundamentally lazy, and that's why I never did something like that....
But did this person, after hours and hours of drawing wires, never think:
"There MUST be a better way to do this?!"