09-27-2014 03:25 PM
@Neos wrote:
So will passing the variant into a subVI create an extra copy (and therefore use extra memory)? Sorry for hijacking the thread, but I think this question is relevant if the OP is going to use the variant technique.
Passing a value into a subVI does not create an additional copy (with some exceptions; for example, if the subVI's front panel is visible, LabVIEW needs to make a copy to display it). LabVIEW makes a copy only when one function modifies the value on a wire before the code is finished using the original value. If that doesn't answer your question, then I'm not sure what you're asking.
09-30-2014 09:15 PM - edited 09-30-2014 09:23 PM
Thank you all for the valuable feedback, and special thanks to "altenbach" for clarifying variant attributes with an example in my context.
Congratulations Lynn for your Knighthood!
altenbach wrote:
I don't know how it performs memory-wise. 😉
I deliberately held off replying until I could verify this.
Actually, I also spent time going through the link to the nugget you mentioned, which in turn led me from one page to another and so on... Wikipedia is amazing. I had to cut my journey short because I must finish my experiments urgently.
In the attachment, I am sharing the updated VI where I am comparing the array version (as I had been doing) with the variant attribute you implemented.
The test case: for an input array of 266,824 elements, each element string is concatenated with each element string of another array (with just 2 elements); the resulting string is searched for in a third array and, if not found, appended (Insert Into Array), while a parallel array is maintained with a corresponding value.
The Variant Attribute method took around 3 seconds, whereas the Array method has been running for 50 minutes on my aforementioned high-performance Intel Core i7 machine as I write this message. OK, it's finishing (I am checking the current string in the probe and looking it up in an Excel file I exported from the output array of the Variant Attribute method). Phew... it took a whopping 3,395,864 milliseconds (nearly 57 minutes).
Also, I used the same kind of sample as in my experiments: T_str and X(S) string arrays of reasonably large size (although not in the millions), and it clearly showed the difference in time complexity.
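Since LabVIEW block diagrams can't be shown inline, here is a rough Python analogue of the two methods being compared (array names and sizes are illustrative, not taken from the actual VI): the array method performs a linear search for every element, while the variant-attribute method performs a keyed lookup, which is why the gap grows so dramatically with array size.

```python
import time

N = 10_000
keys = [f"item_{i}" for i in range(N)]

# "Array method": Search 1D Array + Insert Into Array, i.e. a linear scan
# of the whole array on every iteration
t0 = time.perf_counter()
arr = []
for k in keys:
    if k not in arr:          # O(n) search each time
        arr.append(k)
linear = time.perf_counter() - t0

# "Variant attribute method": keyed lookup in a map structure
t0 = time.perf_counter()
attrs = {}
for k in keys:
    if k not in attrs:        # near-constant-time lookup
        attrs[k] = True       # the attribute value plays the role of data
keyed = time.perf_counter() - t0

print(f"linear search: {linear:.3f} s, keyed lookup: {keyed:.3f} s")
```

The total work for the linear method grows with the square of the array length, which matches the jump from seconds to nearly an hour reported above.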
In the program, I have put some constants which represent a smaller version of the type of arrays I deal with. As the learning algorithm runs, these arrays grow by an order of magnitude in length and, in the case of rows_X(S), also in width (by adding more columns). These constants can be used to experiment with the performance of various methods (by assuming a runtime input to the program from the calling program); please feel free to use them. I wanted to use the array of 3.5 million rows, but I was greedy (trying to copy several arrays at once), that 3.5-million-element array couldn't be copied, and in the end LabVIEW crashed, so I had to run the program again. In the future, I will share the program where I build these strings by combination, to seek your opinion on what can be improved there (although it takes barely a minute and a half to generate 250,000 combinations of various ranks and items).
Regarding the redundant and dead code in my original program: the program had some other functions as well, which I removed because they were not necessary for the problem I was highlighting. They handled messaging and user interaction, i.e., nothing that affects processing time (just wait time until the user replies). I removed them, but in my haste I missed those few remaining items.
This variant attribute approach really works wonders for my case. I am going to check my other functions as well, wherever I perform this kind of repeated search in an array.
johnsold wrote:
Insert Into Array is very expensive in terms of time and memory. Arrays occupy contiguous memory, so continually growing an array requires frequent reallocations. The not-quite-big-enough memory segments that have been set aside are not necessarily released by LabVIEW and probably will not be reused, because of the contiguous-memory requirement. So you can get out-of-memory errors while plenty of memory locations are unused but no single block is large enough. Replace Array Subset is much better.
With this Variant Attribute approach, I also don't need to worry about using Replace Array Subset instead of Insert Into Array, as suggested.
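Johnsold's point about reallocation can be sketched in Python as well (a hypothetical analogue, not LabVIEW code: a tuple is rebuilt on every concatenation, mimicking an array that must be reallocated each time it grows, while a preallocated list mimics Initialize Array + Replace Array Subset):

```python
import time

K = 20_000

# "Insert Into Array" pattern: every concatenation builds a brand-new
# tuple, copying all existing elements -- a forced reallocation each time
t0 = time.perf_counter()
grown = ()
for i in range(K):
    grown = grown + (i,)          # full copy on every iteration
t_grow = time.perf_counter() - t0

# "Initialize Array + Replace Array Subset" pattern: allocate once at the
# final size, then overwrite elements in place -- no reallocation at all
t0 = time.perf_counter()
pre = [0] * K                     # Initialize Array equivalent
for i in range(K):
    pre[i] = i                    # Replace Array Subset equivalent
t_replace = time.perf_counter() - t0

print(f"grow each step: {t_grow:.2f} s, preallocate+replace: {t_replace:.4f} s")
```

Both loops produce the same data, but the growing version does quadratic work in total, which is exactly why preallocation (or the variant-attribute map) wins at these sizes.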
Furthermore, I am also looking forward to see how I can write these X(S) + rows_X(S) 2D array in an Excel file without memory problems (as people suggested, writing in Chunks). I also read documentation, but couldn't get the idea. If someone can show that, I would appreciate that very much. The program is shared in the first message in this thread. The large array can be used from the output or from constant values on the blockdiagram of the VI attached with this message.
Thanks!
09-30-2014 09:30 PM - edited 09-30-2014 09:31 PM
By the way, with the Variant Attribute, I also don't need to worry about Sorting the array.
But I guess the sorting shouldn't take much time either, and I can perform it just once, right before writing to file.
Is there a similarly fast method to check for unique elements in an array? Is there a way to ensure the variant attribute method does not allow duplicate attributes in a variant?
10-01-2014 12:35 AM
@Vaibhav wrote:
Is there a way so that the variant attribute method does not allow duplicate attributes in a variant?
You cannot have duplicate attributes. That is handled automatically.
From the help on Set Variant Attribute:
"name is the name of the attribute you want to edit or create. If name matches an attribute, this function replaces the attribute with the value specified. If name does not match an attribute, this function creates a new attribute."
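The semantics quoted from the help behave exactly like keyed assignment in a map; as a minimal Python analogue (attribute name and values are made up for illustration):

```python
# Set Variant Attribute analogue: writing an existing name replaces its
# value; a new name creates a new attribute; duplicates are impossible
# by construction.
attrs = {}
attrs["AB"] = 1      # name does not match any attribute -> created
attrs["AB"] = 2      # name matches an existing attribute -> replaced
print(len(attrs), attrs["AB"])
```

So uniqueness comes for free: the data structure itself cannot hold two attributes with the same name.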
10-01-2014 01:59 AM
It's interesting to note that you now have empty data, a void, with 200,000 attributes... sounds like a die-hard administrator's dream. 😄
/Y
10-01-2014 03:38 AM
@altenbach wrote:
You cannot have duplicate attributes. That is handled automatically.
From the help on Set Variant Attribute:
"name is the name of the attribute you want to edit or create. If name matches an attribute, this function replaces the attribute with the value specified. If name does not match an attribute, this function creates a new attribute."
Thanks. Makes perfect sense. I now remember having "seen" the help page, but evidently I didn't read it carefully.
10-01-2014 03:40 AM
@Yamaeda wrote:
It's interesting to note that you now have empty data, a void, with 200,000 attributes... sounds like a die-hard administrator's dream. 😄
/Y
Hi Yamaeda,
Sorry, but I didn't quite get your joke. 😞
By the way, could you please look at the "Write to file" program in the first message and show what you meant by "writing in chunks"?
Thanks ahead!
10-01-2014 03:45 AM
Kudos to all for the help and tips, or even just for raising a question that prompted an answer.
I am still looking for answers to the remaining questions, so I will not accept a solution yet (just to keep the thread alive).
10-01-2014 05:13 AM - edited 10-01-2014 05:15 AM
@Vaibhav wrote:
Hi Yamaeda,
Sorry, but I didn't quite get your joke. 😞
By the way, could you please look at the "Write to file" program in the first message and show what you meant by "writing in chunks"?
Thanks ahead!
If you use variant attributes, you create one variant which you never give any value, but stack it with attributes; technically that means you have zero data and 200k attributes. In this case it's a "hack", since you use the attributes _as_ data, but anyway. 🙂
I've tweaked your Write to file a bit. Write To Spreadsheet File probably makes a data copy, so I'd write it line by line if memory is a problem; otherwise that function is simpler to use.
(You could also use Get Array Subset and write the data in chunks until "rest of array" is empty.)
/Y
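Yamaeda's "take a subset, write it, repeat until the rest is empty" idea can be sketched in Python (a stand-in for the LabVIEW diagram; the function name, chunk size, and sample data are all hypothetical):

```python
import csv
import io

def write_in_chunks(rows, fh, chunk=10_000):
    """Write a large table a slice at a time, so no single write
    needs the whole array in one contiguous buffer."""
    w = csv.writer(fh, delimiter="\t", lineterminator="\n")
    for start in range(0, len(rows), chunk):
        # "Get Array Subset" analogue: take the next slice,
        # leaving the rest of the array for the next iteration
        w.writerows(rows[start:start + chunk])

# illustrative data: a key column plus a parallel value column
rows = [(f"key_{i}", i) for i in range(25_000)]
buf = io.StringIO()          # stands in for the file opened on disk
write_in_chunks(rows, buf)
```

The file handle stays open across iterations and each write appends after the previous one, which is the same pattern as LabVIEW's append-to-file option mentioned later in the thread.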
10-01-2014 05:34 AM
Hi,
Thanks for the explanation. I know it takes the humour out when you have to explain it, but now I get it, and I laughed anyway. :manlol:
I asked my co-supervisor yesterday whether he has the 2014 version, but he doesn't yet. So I couldn't open your file on my otherwise oh-I-thought-up-to-date LabVIEW 2013 SP1.
By the way, I also think that with this Variant Attribute method, since it's so fast, I could even write the whole 3.5 million rows one by one in a matter of seconds, using the append-to-file option. No? I will give it a try.
I will check your updated file once I can open it, but in fact I thought you were referring to the giga_labview.llb file in http://www.ni.com/white-paper/3625/en/, since the second poster, "BowenM", had referred to that file as well.
The file from that .llb, "GLV_StreamToDisk.vi", is attached to this message.
I am trying to understand it; I'd appreciate it if someone could help me decode it. 😛
Thanks!