LabVIEW


The time complexity of "Sort 1D Array"

Does anyone know the time complexity of "Sort 1D Array"? The documentation for this function doesn't say anything about it, so I wrote a test VI to sort an array of 10,000 integer elements. The execution time is well under a second, so I guess the complexity is O(n log n). Any suggestions would be appreciated, thanks.
Message 1 of 11
You cannot determine the time complexity from a single measurement (10,000 elements in under a second). It could be O(n log n) whether it takes a century or only a nanosecond for this. :o
 
You need to measure the time as a function of array size, graph it on a log-log plot, and look at the slope for large inputs, for example.
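As a rough text-language sketch of that procedure (Python here, since a LabVIEW VI can't be shown as text; `sorted()` stands in for "Sort 1D Array", and the function name is my own):

```python
import math
import random
import time

def loglog_slopes(sizes):
    """Time a sort at several input sizes and return the log-log slope
    between consecutive measurements; a slope near 1 suggests O(n) to
    O(n log n) behavior, a slope near 2 suggests O(n^2)."""
    times = []
    for n in sizes:
        data = [random.random() for _ in range(n)]
        t0 = time.perf_counter()
        sorted(data)
        times.append(time.perf_counter() - t0)
    return [math.log(times[i + 1] / times[i]) / math.log(sizes[i + 1] / sizes[i])
            for i in range(len(sizes) - 1)]
```

For an O(n log n) sort the slopes should settle slightly above 1 as the sizes grow; timing noise matters at small sizes, so use large ones.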
 
Back to the original question: LabVIEW uses a version of quicksort, so O(N log N) is a good estimate of the average performance. Of course, in the worst case it could be O(N²) for pathological data.
 
 
Message 2 of 11
Thanks for your explanation, altenbach. Is there any official documentation that discusses the time complexity of "Sort 1D Array"?
Message 3 of 11
I don't think there is anything specific to LabVIEW, because these are well-known algorithms. Any general discussion, such as the Wikipedia article linked in my earlier post, should be fine as a starting point.
 
Do you have any special concerns?
Message 4 of 11

I'm working on a word-counting puzzle, and I'm considering optimizing my old algorithm by sorting the 1D array first. That's why I need to know the time complexity of this sorting function. I think it would be better if the time complexity were described in the function's documentation.

Message 5 of 11
If you want to provide a suggestion in terms of documentation updates (or anything else for that matter), you can submit it to the Product Suggestion Center.

By the way, your name and bars are in blue, which indicates you are with NI, so I would think you would have internal channels for this, no?
Message 6 of 11


FanZhang wrote:

I'm working on a word-counting puzzle, and I'm considering optimizing my old algorithm by sorting the 1D array first. That's why I need to know the time complexity of this sorting function. I think it would be better if the time complexity were described in the function's documentation.


All you need to do is run some benchmarks with your typical input sizes to see if it is fast enough. I can assure you that the sorting algorithm used by LabVIEW is pretty much the best you can do.
 
The big-O notation is irrelevant as long as the sorting is fast enough.
 
If your algorithm offers dramatic improvements when the input is sorted, you need to test whether it is worth it in your specific situation. (For example, "find array element" must do a linear search (O(N)) if the input is random, but can use a binary search (O(log N)) if the input is sorted.) So if you can sort once and then do a huge number of efficient operations that rely on sorted input, the improvement is huge, while if you need to sort every time, it is probably not worth it. It all depends on what you are trying to do.
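To illustrate that trade-off in a text language (Python; the function names are mine, not LabVIEW primitives):

```python
import bisect

def linear_search(arr, target):
    """O(N) scan: the only option when the array is unsorted."""
    for i, v in enumerate(arr):
        if v == target:
            return i
    return -1

def binary_search(sorted_arr, target):
    """O(log N) lookup, valid only on sorted input."""
    i = bisect.bisect_left(sorted_arr, target)
    if i < len(sorted_arr) and sorted_arr[i] == target:
        return i
    return -1
```

Both return the index of the first match, or -1 when absent; the binary version silently returns wrong answers if the input isn't actually sorted, which is the hidden cost of relying on sorted data.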
You need to build a few code alternatives and then benchmark them.
 
Don't forget that if speed is important and huge arrays are involved, you need to carefully design your code to avoid data copies and ensure that all operations are "in place". This is typically much more important.
 
Do you have some more details about the algorithm you are trying to use? What do you actually need to do?


Message Edited by altenbach on 07-10-2008 09:00 AM
Message 7 of 11
Although the puzzle is finished, I'd like to discuss it here. The problem can be briefly described as: count the words (an array of strings) and output the four most frequently used ones. Since LabVIEW does not provide container classes (such as the map class in C++), I decided to first sort the array by word, then do the counting. After sorting the array, the counting can be finished in linear time, since identical words are consecutive. This is my algorithm; any other ideas are appreciated.
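The sort-then-count idea can be sketched like this in Python (the helper name and the alphabetical tie-breaking rule are my own choices; the original VI may differ):

```python
from itertools import groupby

def top_words(words, k=4):
    """Sort the words so duplicates are consecutive, count each run
    in a single linear pass, and return the k most frequent."""
    runs = [(len(list(g)), w) for w, g in groupby(sorted(words))]
    runs.sort(key=lambda t: (-t[0], t[1]))   # count desc, then alphabetical
    return [(w, c) for c, w in runs[:k]]
```

The sort dominates at O(N log N); the run-counting pass after it is linear, just as described above.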
Message 8 of 11

Sounds similar to one of the old LabVIEW challenges from spring 2004: 😄

Coding Challenge - The Meta Keyword Generator, Results



Message Edited by altenbach on 07-10-2008 10:10 PM
Message 9 of 11

A few thoughts based on my recollection of that old coding challenge.  There are different kinds of algorithmic approaches you may consider.

1. A simple approach to that challenge could involve a lot of search / lookup functions.  Each time you parse a word, you perform a search to see if it's in your list already.  If so, increment its count.  If not, add it to the list with count=1.

   This was far too slow for counting words in large documents (too many slow-ish searches), but it is pretty easy to implement and maybe good enough for smaller docs.
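Approach 1 might look like this in Python (a sketch of the idea, not the actual challenge VI; the function name is mine):

```python
def count_by_search(words):
    """Naive tally: linear-search the list of seen words for each new word.
    O(N*M) overall, where M is the number of distinct words."""
    seen, counts = [], []
    for w in words:
        try:
            i = seen.index(w)    # the O(M) search that makes this slow
            counts[i] += 1
        except ValueError:       # not seen yet: add it with count = 1
            seen.append(w)
            counts.append(1)
    return dict(zip(seen, counts))
```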

2. I pursued an approach that used hashing instead of searches.  It *seemed* too slow, but some months after the challenge, when I had another use for the core hash table functionality, I found a really dumb bug that I'd overlooked before.  Fixing it improved the speed by something like 100x.  (I missed one spot where I needed a shift register to enforce in-placeness.)

   This is largely the same approach, merely trying to speed up the lookup/search function via a more complex technique (hash tables).  As I found out, the complexity makes it more bug-prone.  Still, once fixed, lookup speed is roughly constant, with no clear dependency on the number of elements in the hash table.  (Note: it *is* still dependent on the "fill ratio".  You want enough memory available to size your table for maybe ~25% usage or less.)
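A minimal open-addressing table in Python, just to show why the fill ratio matters (class and method names are my own, not from the challenge entry; as the table fills, the linear-probing chains get longer and lookups slow down):

```python
class OpenHash:
    """Tiny open-addressing hash table for word counts. Size it well
    beyond the expected number of distinct words to keep the fill
    ratio (and thus probe-chain length) low."""

    def __init__(self, slots=4096):
        self.keys = [None] * slots
        self.counts = [0] * slots

    def increment(self, word):
        i = hash(word) % len(self.keys)
        # linear probing: step forward until we find the word or a free slot
        while self.keys[i] is not None and self.keys[i] != word:
            i = (i + 1) % len(self.keys)
        self.keys[i] = word
        self.counts[i] += 1

    def count(self, word):
        i = hash(word) % len(self.keys)
        while self.keys[i] is not None:
            if self.keys[i] == word:
                return self.counts[i]
            i = (i + 1) % len(self.keys)
        return 0
```

A real implementation would also need resizing and deletion handling; this sketch omits both.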

3.  One idea I tried out was to maintain a separate list of the 10 or 25 most-common words, sorted by count.  (The main reason was that there could be a tie for 4th place at the end, and I wanted enough extra candidates around to get the correct tie-breaker result.)  Each time I parsed another word and incremented its count, I'd check whether its new count qualified it to belong in the special most-common list.  I still did a lot of lookup and sort stuff, but it was always on a very short list, so it stayed fast.

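One way to sketch that "short leader list" idea in Python (the `pad` parameter holds the extra tie-breaker candidates; all names and details are mine, not the original implementation):

```python
def top_words_incremental(words, k=4, pad=6):
    """Tally words one at a time while maintaining a short list of the
    current most-common candidates (k winners plus a few spares for
    tie-breaking). Sorting stays cheap because the list never grows
    beyond k + pad entries."""
    counts = {}
    leaders = []                        # (count, word) pairs, kept small
    for w in words:
        counts[w] = counts.get(w, 0) + 1
        # refresh this word's entry in the leader list
        leaders = [(c, x) for c, x in leaders if x != w]
        leaders.append((counts[w], w))
        leaders.sort(reverse=True)      # cheap: tiny list
        leaders = leaders[:k + pad]
    return [(w, c) for c, w in leaders[:k]]
```

Note the ties caveat from the post still applies: with too small a `pad`, a word trimmed from the leader list during a tie could be missed at the end.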
 

There are surely other kinds of data structures and algorithms available.  Do you have a specific app in mind for this?  Or just generally curious?  I ask because some of the things one might do to optimize for a specific app (or code challenge) wouldn't be so appropriate as a general-purpose routine.

-Kevin P.

ALERT! LabVIEW's subscription-only policy came to an end (finally!). Unfortunately, pricing favors the captured and committed over new adopters -- so tread carefully.
Message 10 of 11