
LabVIEW is overzealous in copying clusters

I have been rooting through my code with "Show Buffer Allocations", removing unnecessary copy operations, and I stumbled on what people from my area of the country would call a bug, or at least poor performance.
 
The summary: if you split anything off a cluster and wire both the split-off part and the cluster itself into a case statement, the whole cluster is copied.  The VI is attached as well.
 
Also, I notice that when an already-allocated constant is wired into a shift register, a copy is made.  Please see below.  Notice how the copy at the case statement goes away if either the connection is not made or some other intervening block is placed.
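For anyone skimming in text form, here is a rough C analogy of the diagram; all names are made up and LabVIEW's actual generated code will of course differ:

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical stand-in for the big cluster on the diagram. */
    typedef struct {
        int  mode;            /* the element split off as the case selector */
        char payload[4096];   /* the bulk of the cluster */
    } Cluster;

    int main(void) {
        Cluster c = { .mode = 1 };

        /* What I expected: only the split-off selector is read out,
           and the case acts on the original cluster in place. */
        int selector = c.mode;

        /* What "Show Buffer Allocations" suggests instead: a full copy
           of the cluster at the case statement's input tunnel. */
        Cluster tunnel;
        memcpy(&tunnel, &c, sizeof c);

        switch (selector) {
            case 1:  tunnel.payload[0] = 'x'; break;
            default: break;
        }
        printf("%c\n", tunnel.payload[0]);
        return 0;
    }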
 


Message 1 of 17
Belzar,

this is a very odd thing. I opened your VI with LV 8.20 and saw the allocation dots just as shown in your screenshot.
After changing something that really changes the compiled code, the dots disappeared. Even reverting the change didn't bring the dots back.

So which version of LV are you working with?

Norbert B.
----------------------------------------------------------------------------------------------------
CEO: What exactly is stopping us from doing this?
Expert: Geometry
Marketing Manager: Just ignore it.
Message 2 of 17
Hi

He's using 8.20, as you can see when you open his VI. I checked this as well: all dots disappear when you change something, but in my case they reappear when I save or run the modified VI. This is really strange. Is "Show Buffer Allocations" actually accurate, i.e. will there really be buffer allocations in the compiled code?

Daniel

Message 3 of 17
Belzar,

I did some further testing/research on this issue, with the following results:
Daniel is correct that the allocation dots only appear after saving, so there is no malfunction there.
And I can at least explain the first two dots:
a) Exit of the BD cluster: this is a block-diagram constant, so the data is stored in the dynamic data range; it has to be allocated once when the VI executes.
b) Exit of the shift register: this memory also has to be allocated once at the start of execution, to hold the data passed to the shift register during execution.
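In a loose C picture (just an analogy, with invented names, not what LabVIEW really generates), those two dots correspond to something like:

    #include <string.h>

    typedef struct { int a; double b; } Cluster;

    /* a) The BD constant: its default data lives in static/read-only
          space, so a writable buffer is allocated once per execution. */
    static const Cluster kDefault = { 1, 2.0 };

    int main(void) {
        Cluster constant_buf;                    /* dot a) at the constant */
        memcpy(&constant_buf, &kDefault, sizeof kDefault);

        /* b) The shift register: its storage is likewise allocated once,
              up front, to hold whatever is passed in during execution. */
        Cluster shift_reg = constant_buf;        /* dot b) at the register */

        for (int i = 0; i < 10; i++)
            shift_reg.a += 1;                    /* the loop updates it in place */

        return shift_reg.a;
    }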

The third dot has already been discussed on the forum, but I don't know the reason for it either. Take a look here for more info.

hope this helps,
Norbert
Message 4 of 17
Norbert,

if your explanation of the memory allocation at the shift register (b) is correct, then why isn't that allocation required when the cluster is generated in a subVI?
See image.

Daniel

Message 5 of 17
Daniel,

please note that the following is just a guess on my part:
If you pass the cluster out of your subVI as shown in your picture, LabVIEW already knows that the cluster is only needed in the shift register, so it transfers the data from the stack (at the VI boundary) directly into the data space of the shift register. In other words: the shift register lives in the data space that the cluster will be passed into.

You can verify this by splitting the cluster wire off to an additional input tunnel, as shown in my screenshot. There you will again get an additional allocation.
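Sketched as C (again just my guess, and all names are invented), the difference would look something like this:

    typedef struct { int a; double b; } Cluster;

    /* The subVI, compiled as if it writes its output directly into
       caller-provided storage -- here, the shift register itself. */
    static void make_cluster(Cluster *out) {
        out->a = 42;
        out->b = 3.14;
    }

    int main(void) {
        Cluster shift_reg;           /* the shift register's data space */
        make_cluster(&shift_reg);    /* no intermediate buffer, no dot  */

        /* Branch the wire to an additional input tunnel and there are
           two consumers, so the same storage can no longer serve both: */
        Cluster tunnel = shift_reg;  /* the additional allocation appears */

        return tunnel.a;
    }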

hope this helps,
Norbert
Message 6 of 17
I guess the summary is: why doesn't LV merely copy the integer going into the case statement's selector, rather than copying the entire cluster (which in my case is big)?  Something is whacked.
Message 7 of 17
Hi Belzar,
 
I don't have time at the moment to fully analyze what is happening in your examples, but I think that barched wires are hitting you. When a wire branches, things get complicated because both sinks of the data expect to get the contents of the data buffer as left by the source. If one of the branches feeds a node that modifies the data in the wire, the data is duplicated to ensure that two nodes executing in parallel both get good inputs.
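In text form the rule is roughly this (a C sketch with invented names, not LabVIEW's real code):

    #include <string.h>

    typedef struct { int a; char data[1024]; } Cluster;

    static int  reader(const Cluster *c) { return c->a; }   /* sink 1: only reads */
    static void modifier(Cluster *c)     { c->a += 1; }     /* sink 2: writes     */

    int main(void) {
        Cluster source = { 0 };

        /* The wire branches: both sinks expect the buffer exactly as the
           source left it.  Because one sink modifies the data, and the two
           sinks may run in parallel, the buffer is duplicated so each one
           is guaranteed a good input. */
        Cluster branch;
        memcpy(&branch, &source, sizeof source);

        int seen = reader(&source);   /* reads the original, unmodified */
        modifier(&branch);            /* mutates its own private copy   */

        return seen + branch.a;
    }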
 
You may want to look at this thread, where I beat this subject to death with Shane.
 
 
 
Ben
Message 8 of 17
What is a "barched wire"? :D  I know you mean branched, but on initial read I hit that phrase and thought there was some new class of wire... reading on cleared it up.

I'm reading the lengthy post you forwarded, thanks.  I've seen branching do what you suggest.  I think something deeper is going on here, though.

If you look at the top and bottom while loops, they differ only by the U8 cast.  The cluster wire branches in both cases, but somehow the U8 cast fixes the issue.  Something is wacky.

Thanks for the reply
Message 9 of 17

Something stinks even worse now.  By the way, this is running in LV RT.

Here is a new diagram that represents in a simple way what I am trying to do (explanation below image):

Using Method 4 (to avoid the displayed allocation at the case statement tunnel in Method 2), I saw a mysterious 5 microseconds between writing to a shift register at the end of the loop and reading from it at the start.

Just switching to Method 5 (everything else the same), that dropped to 1 microsecond.  Hmm, I didn't think 400% overhead was necessary just to read an int (or enum, in my case) out of a cluster.

I also have an additional mysterious 4 usec appearing within the loop itself (inside the case statement), probably caused by something similar.

My guess is it revolves around cluster copies that the "Show Buffer Allocations" tool fails to announce.  My diagram looks the same to the buffer allocation tool no matter which method I choose.
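For anyone who wants to feel the size of that overhead off-target, here is a crude C microbenchmark of "copy the whole cluster each iteration" versus "read one field" -- an illustration only, with made-up names, not an LV RT measurement:

    #include <stdio.h>
    #include <string.h>
    #include <time.h>

    typedef struct { int mode; char payload[4096]; } Cluster;

    static double elapsed_ns(struct timespec t0, struct timespec t1) {
        return (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    }

    int main(void) {
        enum { N = 100000 };
        static Cluster src, dst;      /* zero-initialized, off the stack */
        volatile int sink = 0;        /* keeps the loops from optimizing away */
        struct timespec t0, t1;

        /* Like Method 4: the whole cluster is copied every iteration. */
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < N; i++) {
            memcpy(&dst, &src, sizeof src);
            sink += dst.mode;
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        printf("full copy:  %.1f ns/iter\n", elapsed_ns(t0, t1) / N);

        /* Like Method 5: only the needed field is read. */
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < N; i++)
            sink += src.mode;
        clock_gettime(CLOCK_MONOTONIC, &t1);
        printf("field read: %.1f ns/iter\n", elapsed_ns(t0, t1) / N);

        return sink & 1;
    }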

Thanks to the Execution Trace Tool for rooting it out.

If I have a chance, I'll try to create an example that shows this timing without all the complexity of my application.

 


Message 10 of 17