Query about clustering unrelated large amounts of data together vs. keeping it separate.

I would like to ask the talented enthusiasts who frequent the developer network to tell me whether I have understood how LabVIEW deals with clusters. A generic description of a situation involving clusters, and what I believe LabVIEW does in it, is given below. An example of this type of situation, a VI that generates the Fibonacci sequence, is attached to illustrate what I am saying.

A description of the general situation:

A cluster containing several different variables (mostly unrelated) has one or two of these variables unbundled for immediate use, and the modified values are then bundled back into the cluster for later use.


What I think LabVIEW does:

As the original cluster flows into the unbundle (to read the original variable values) and into the bundle (to update the stored variable values), a duplicate of the entire cluster is made before the individual values chosen for unbundling are picked out. This means that if the cluster also contains a large amount of unrelated data, processor time is wasted duplicating that data.

If, on the other hand, this large amount of data is kept separate, then this duplication does not happen and no processor time is wasted.
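To put the same idea in text form, here is a minimal C analogy (my own sketch, not LabVIEW code; the names and sizes are made up): a struct with value semantics stands in for the cluster, and big[] for the unrelated data. Passing the struct by value and returning it, as the unbundle/bundle round trip would, copies big[] both ways even though only the small fields change.

#include <stdio.h>

enum { BIG_BYTES = 64 * 1024 };

struct cluster {
    int  a, b;              /* the one or two values actually updated   */
    char big[BIG_BYTES];    /* large unrelated data along for the ride  */
};

/* Emulates unbundle -> modify -> bundle via pass/return by value:
   the whole struct, big[] included, is copied on the way in and on
   the way out, even though only a and b change. */
static struct cluster update(struct cluster c) {
    c.a += c.b;
    return c;
}

int main(void) {
    static struct cluster c = { 1, 2, {0} };
    c = update(c);          /* two ~64 KB copies for a 4-byte change */
    printf("a = %d\n", c.a);
    return 0;
}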


In the attached file the good method still carries the array (the large amount of unrelated data) through the loop, but outside the cluster, and does not use the array in more than one place, so it is not duplicated. If tunnels were used instead of shift registers, I believe at least one duplicate would be made.


Am I correct in thinking that this is the behaviour LabVIEW uses with clusters? (I expected LabVIEW to duplicate only the variable values selected in the unbundle node. Since this selection is fixed at compile time, it seems to me that the compiler should be able to recognise that the other cluster variables are never used.)

Is there a way of keeping the efficiency of using many separate variables (potentially ~50) while retaining the convenience of a single cluster variable?


The attachment:

A VI that generates the Fibonacci sequence is attached (the I32 representation used overflows around the 44th value, so values from that point on are wrong). The calculation is iterative, using a for loop. Two variables are needed to perform the iteration; these are stored in a cluster and passed from iteration to iteration within it. To provide the large amount of unrelated data, a large array of reasonably sized strings is included.
The bad way is to have the array stored within the cluster (causing massive overhead). The good way is to keep the array separate from the other pieces of data, even though it still passes through the for loop (no massive overhead). A text sketch of the two approaches follows below.
Try replacing the array shift registers with tunnels in the good case and see if you can reproduce my observation: using tunnels causes overhead compared with shift registers whenever there is no other reason to duplicate the array.
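For readers without LabVIEW handy, here is a hedged C sketch of the two approaches. The for loop stands in for the loop with shift registers, fib_state for the two-element cluster, and strings[] for the array of strings; all the names and sizes are invented for illustration.

#include <stdint.h>
#include <stdio.h>

enum { N_STRINGS = 1024, STR_LEN = 64 };   /* ~64 KB of unrelated data */

struct fib_state { int32_t prev, curr; };  /* good: just iteration state */

struct fib_state_bad {                     /* bad: unrelated array inside */
    int32_t prev, curr;
    char    strings[N_STRINGS][STR_LEN];
};

int main(void) {
    /* Good way: the array lives in its own variable and never moves. */
    static char strings[N_STRINGS][STR_LEN];
    struct fib_state s = { 0, 1 };
    for (int i = 0; i < 40; i++) {         /* 40 keeps I32 well in range */
        struct fib_state next = { s.curr, s.prev + s.curr };
        s = next;                          /* 8 bytes copied per iteration */
    }

    /* Bad way: every whole-struct copy drags the array along. */
    static struct fib_state_bad b = { 0, 1, {{0}} };
    for (int i = 0; i < 40; i++) {
        struct fib_state_bad next = b;     /* ~64 KB copied per iteration */
        next.prev = b.curr;
        next.curr = b.prev + b.curr;
        b = next;
    }

    printf("good: %d  bad: %d\n", (int)s.curr, (int)b.curr);
    (void)strings;
    return 0;
}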


I am running LabVIEW 7 on Windows 2000 with sufficient memory that the page file is not used in this example.


Thank you all very much for your time and for sharing your LabVIEW experience,
Richard Dwan
Message 1 of 4
Richard,
Unfortunately, I don't have a complete answer for you.
The Unbundle By Name function is not reusing the memory space, so there is no memory leak, but carrying the large cluster through all the iterations does seem to take more time. I assume this is because the entire cluster is read in place. However, I found a neat trick, so I am not sure how much of the above is true: if, in the first loop, you put a sequence structure and pass just one of the numeric wires through it, the loop time drops to about 30 ms.
Zvezdana S.
Message 2 of 4
Zvezdana,

That is an interesting observation you have made, and it seems to me quite inexplicable. The trick is interesting but not practical for me to use in developing a large piece of software. Thanks for your input - I think I'll be contacting technical support for an explanation, along with some other anomalies involving large arrays that I have spotted.

Thanks all,
Richard Dwan
Message 3 of 4
> That is an interesting observation you have made, and it seems to me
> quite inexplicable. The trick is interesting but not practical for me
> to use in developing a large piece of software. Thanks for your input
> - I think I'll be contacting technical support for an explanation,
> along with some other anomalies involving large arrays that I have
> spotted.
>

The deal here is that the bundle and unbundle nodes must be very careful
when they are swapping elements around. This used to make copies in the
normal cases, but that has been improved. The reason the sequence
structure affects it is that it changes how the algorithm orders the
element movement, so that it succeeds in avoiding a copy. Another, more
obvious way is to use a regular bundle and unbundle, not the named
variety; these tend to have an easier time in the algorithm as well.

Technically, I'd report the diagram to tech support to see if the named
bundle/unbundle case can be handled as well. In the meantime, you can
leave the data unbundled, as in the faster version.
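Roughly, in text terms (a C analogy of the effect only, an assumption on my part and not the actual LabVIEW implementation): when the algorithm succeeds in avoiding the copy, the update behaves like modifying the two numeric elements through a pointer, so the large member never moves. Compare the by-value sketch earlier in the thread.

#include <stdint.h>
#include <stdio.h>

enum { BIG_BYTES = 64 * 1024 };

struct cluster {
    int32_t prev, curr;
    char    big[BIG_BYTES];   /* never touched by the update */
};

/* Only the two int32 fields are read and written; big[] stays put. */
static void update_in_place(struct cluster *c) {
    int32_t next = c->prev + c->curr;
    c->prev = c->curr;
    c->curr = next;
}

int main(void) {
    static struct cluster c = { 0, 1, {0} };
    for (int i = 0; i < 40; i++)
        update_in_place(&c);  /* a few bytes per call, not ~64 KB */
    printf("%d\n", (int)c.curr);
    return 0;
}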

Greg McKaskle
Message 4 of 4