LabVIEW

Community Nugget 2-19-2007 "Stacked Sequence Exterminator"

Yeah, I'm using 8.20....

But a lot of the time, I find something (or more accurately I don't find something) and I take my copy of 6.1 and go in the corner and keep telling myself things will be OK.

Er, regarding defaults.  Hmm. Er. Yeah.  Sorry.

Ben, I'll have a look at the code tomorrow.  I think LV simply misses an opportunity to perform an in-place operation with an array as part of a cluster.  I'd almost call this a bug, but I don't want to upset anyone.  My original point was that the operations on the one array in the cluster are independent of the size of the other.  I never thought that the operation on the array itself would be so much slower in a cluster than not.  That's an interesting distinction which I shall give much consideration....

Till tomorrow,

Shane.
Using LV 6.1 and 8.2.1 on W2k (SP4) and WXP (SP2)
Message 31 of 42
(6,224 Views)
OK, I had a look at Ben's VI and it seems that the fact that an array is in a cluster imposes a hefty performance penalty.

But I don't understand why.  I think the compiler misses out on a lot of possible optimisation in these situations.

Check out my further modified version below.  It answers the following question:

Which of the following code snippets should be more efficient?

 

The one on the left is what we've been doing up to now.  I could imagine that a copy of the top array is needed as the cluster itself is carried forward, thus requiring two separate entities of the array in memory.  The second picture, however, obliterates the cluster and re-forms it.  There is no need whatsoever to create a copy of either array in this example.

The second code snippet is less efficient.  It also shows dramatic differences between clusters 1 (small second array - not modified) and 2 (big second array - not modified).  It seems that BOTH arrays are being copied at one point or another before re-forming the cluster.  This is simply too much work for what should be a relatively simple operation.  I mean, why copy an array which has NOTHING done to it in the code (the second array in the cluster)?  As Ben pointed out in a previous post here, there should only be some pointer shifting going on.
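As a rough text-language analogy of the "pointer shifting" expectation (Python stands in for LabVIEW here; the dict, the field names, and the array sizes are my own invention, not taken from Ben's VI), re-forming a cluster from the same arrays should move references, not data:

```python
# Model a "cluster" as a dict holding two arrays (lists). All names and
# sizes here are illustrative placeholders, not from the original VIs.

def modify_via_rebuild(cluster):
    """Unbundle both arrays, touch only the first, re-form the cluster."""
    a = cluster["first"]
    b = cluster["second"]
    a[0] += 1                      # the only real work
    return {"first": a, "second": b}

big = {"first": [0] * 4, "second": [0] * 1_000_000}
before_second = id(big["second"])
after = modify_via_rebuild(big)

# Re-forming the cluster moved references only: the untouched second
# array is literally the same object, zero bytes copied.
assert id(after["second"]) == before_second
```

That is the behaviour one would naively expect from the second snippet; the benchmark shows LabVIEW does considerably more work than this.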

Is there a reason for this which I might understand?  Is this a missed optimization opportunity for the compiler?  Personally, I think this can be "fixed".

Summary (for myself in case I re-read this mail in the "future"):
Operations on arrays in clusters are more expensive than on separate arrays (Ben's point - undisputed).
Operations on arrays as in the left picture are independent of all other elements in the cluster (my original point, and consistent with what used to be Application Note 154).
Unbundling and re-bundling, even without making changes, is the most expensive.

Shane.

Message Edited by shoneill on 02-22-2007 09:35 AM

Using LV 6.1 and 8.2.1 on W2k (SP4) and WXP (SP2)
Message 32 of 42
(6,149 Views)


But I don't understand why.  I think the compiler misses out on a lot of possible optimisation in these situations.

Yes, LabVIEW does not have a very well optimized compiler.  Why?  Because it doesn't have to.  Optimization means risk as the compiler gets more complicated.  There is no competition in the market of LabVIEW compilers, so NI doesn't have to optimize the code better.  Users will still use LabVIEW even if it's not as optimized as it could be.
--
Tomi Maila
Message 33 of 42
(6,142 Views)
I'm not experienced in the complexity of compilers, but the speed of execution of LabVIEW is most certainly a factor.  LabVIEW may not have any direct competitors (yet) as a visual programming tool, but most applications and customers don't really care HOW something is programmed.  If the supplied solution is too slow with LabVIEW, they WILL look elsewhere.  I don't think LabVIEW is unique enough to allow itself sloppy code optimisation.

There is already a lot of code optimisation done by the compiler.  I agree that "new" optimisations may create some unintentional problems, but NI have good engineers, and I'm sure there's a reason why LV costs so much.....

I'm still of the opinion that the issue we have seen with the clusters CAN and SHOULD be improved.

These are basically the same optimisations performed on other arrays too, so why shouldn't they be valid for clustered arrays as well (seeing as the arrays in a cluster are stored separately from the cluster itself, just like a "normal" array)?

Shane.
Using LV 6.1 and 8.2.1 on W2k (SP4) and WXP (SP2)
Message 34 of 42
(6,139 Views)


...I'm sure there's a reason why LV costs so much.....

I'm sure this reason is not directly related to the production expenses but more to what people are ready to pay for LabVIEW.
--
Tomi Maila
Message 35 of 42
(6,137 Views)
Sorry, I forgot my </sarcasm> tag.......

Of course the price is at least partially market-defined.  That's basic economics, but a lot of man-months go into each version of LV.  There's quite a bit of work behind the scenes there.

Shane.

Message Edited by shoneill on 02-22-2007 10:22 AM

Using LV 6.1 and 8.2.1 on W2k (SP4) and WXP (SP2)
Message 36 of 42
(6,135 Views)

Hi Shane, Tomi, et al.,

I do not have the time at the moment to do a proper analysis of the code snippets you posted.

LV IS remarkably optimized. The "Inplaceness Algorithm" is amazing.

The only shortcoming I see is its "shroud of mystery".  What I mean by this is that all of the wonders it performs are top-notch intellectual property: we do not know the details, and they are subject to change when further improvements are implemented.

So lacking a complete set of documentation on its behaviour, we are forced to poke, prod and discuss to learn about it.

I imagine it to be a lot like learning to ride a surfboard.  There are two extreme approaches that can be taken to learn to surf.  I could crack open the books and study all of the physics involved and memorize the equations so that I can solve them as I ride.  Another approach is to forget about the physics and just jump on and start riding.

When it comes to riding the wave of LV, many of us just jump on and learn from our falls.

The physics approach to learning LV is not possible because the "physics books on LV" are locked up in an ivory tower and are subject to change.

Back to the surf board.

We could take a hybrid approach to learning and combine the book work with the practical experience.

Returning to the LV experience:

We can use the same hybrid approach, BUT we have to do the research as we go!  When we fall off, we should stop and analyze "why did that fail?".

When we discover new subtleties, we should share.  That is what I tried to do above when pointing out what I had learned from one of my "wipe-outs".

Still riding the "LV is like surfing" analogy:

In the same way that subtle shifts in weight can influence the behaviour of the surfboard, little things in LV can make a big difference in our direction.  In your second snippet, you did not wire the cluster into the top of the bundle as you did in the first.  This small "wave of the hand" gave the Inplaceness Algorithm the hint it needed to know that you were not going to step on everything.
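If this reading is right, a text-language analogy (a hypothetical Python model of my own, not LabVIEW's actual mechanics) would be the difference between mutating the container you were handed and building a fresh one the compiler must reason about:

```python
def bundle_with_cluster_wired(cluster, new_first):
    """Model of the first snippet: the original cluster is wired into the
    bundle node, so the same container is reused and one field replaced."""
    cluster["first"] = new_first
    return cluster

def bundle_from_scratch(first, second):
    """Model of the second snippet: the cluster is re-formed from parts,
    so the compiler must prove the parts are safe to reuse in place."""
    return {"first": first, "second": second}

c = {"first": [1, 2], "second": [3, 4]}

out = bundle_with_cluster_wired(c, [9, 9])
assert out is c          # same cluster "buffer" reused: the in-place hint

out2 = bundle_from_scratch([9, 9], c["second"])
assert out2 is not c     # a new container: no hint, reuse must be inferred
```

Whether LabVIEW's Inplaceness Algorithm actually distinguishes the two cases this way is exactly what the benchmark is probing.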

Done rambling. What do YOU think?

Ben

 

Retired Senior Automation Systems Architect with Data Science Automation | LabVIEW Champion | Knight of NI
Message 37 of 42
(6,114 Views)
First things first:

Ben, your last post came across a bit defensive, and I'm sorry if you felt there was any aggression implied in my posts.  Just in case anyone feels irked by any of this: I'm not disagreeing or trying to argue with anyone here.  I agree with Ben's sentiments about clustered arrays being a lot slower, even if I originally misinterpreted his statement.  After observing it myself, I immediately understood what he'd been saying (I'm a bit like that sometimes).  Your observation trumped my assumption.

I am certainly one who has tried to learn LabVIEW through trial and error (man, I've had my fair share of error).  I've also learned a huge portion of what I know from these forums.  I've (up to now) not had a single LabVIEW course, even though I cheated once and went to an NI day in Switzerland.....

But what I have learned is that certain actions lead to certain results, just like the scientist I'm trained to be would expect.  This "array in a cluster" thing seems to be out of line with my previous experience with LabVIEW, hence my originally not grasping your original point, Ben.

Regarding your last post, I agree 100%.  But when we, by observing, get a bit of a picture of the inner workings (hi Rolf!) and this seems odd, I too think we need to discuss it.  I've no problem with someone showing me that what I've thought / done / said is a load of baloney, but it's important to note the subtle differences.  I simply don't understand the performance penalty in these examples.

Back to the problem at hand:

Ben wrote: "In your second snippet, you did not wire the cluster into the top of the bundle as you did in the first. This small "wave of the hand" gave the Inplaceness Algorithm the hint it need to know that you were not going to step on everything."

Exactly, that's what I thought when I wired it up.  However, it appears that that solution is DRAMATICALLY SLOWER than the other, which surprised me quite a lot (I suggest re-reading the post).  It seems that even wiring to an unbundle output terminal creates a copy automatically, no matter what you do with it.  It seems that the bundle/unbundle is a go / no-go thing: if the output terminal isn't wired, fine; if it's wired, make a copy and proceed as normal.  This "create a copy" flies in the face of the "in-placeness" of LV.  It seems that the "in-placeness" stops short of clusters, which is a terrible shame, and I'm actually amazed it took me so long to notice this (and without Ben, I probably never would have).
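To get a feel for why a needless copy of an untouched array dominates the cost, here is a small timing sketch (Python again as a stand-in; the array size and iteration count are mine, and a list copy is only an analogy for whatever LabVIEW does internally):

```python
import timeit

big = list(range(1_000_000))

# "Pointer shifting": wiring the array through should cost next to nothing,
# modeled here as merely returning a reference.
t_ref = timeit.timeit(lambda: big, number=100)

# What the benchmark suggests actually happens when the unbundle output is
# wired: a full copy of the array, even though nothing is done with it.
t_copy = timeit.timeit(lambda: list(big), number=100)

# The copy is orders of magnitude more expensive than the reference.
assert t_copy > 10 * t_ref
```

The exact ratio depends on the machine, but the point stands: once a copy of a large array is forced for every iteration, it swamps the cost of the actual operation.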

I think I see the chance to "correct" something in LabVIEW which may lead to a possible solution to the "sometimes unwired shift registers" we have been discussing.  Being able to cluster all elements required in a state machine without performance penalty would be a benefit to anyone who programs this way I reckon.....  This is how I thought it worked when I originally suggested this, but Ben has changed my mind rather dramatically.  And I think (bearing in mind my complete noob status on compiler technology) it would be less complicated to implement this "correction" than a "new feature".

I remember doing a report as an undergrad on LV way back (1992 or something) and reading a quote from one of the LV founders, which I paraphrase here: "The in-place array operations belong to some of the most complex code present in the LabVIEW source."  I think they were referring to a "sieve" benchmark.....  Sounds familiar somehow 😉

Ben, if you get some time to have a look at the code, I'd be grateful for your input to see if I'm really off on a mad trip again.

Shane.


PS Shouldn't we start a new thread on this??

Message Edited by shoneill on 02-22-2007 03:21 PM

Message Edited by shoneill on 02-22-2007 03:23 PM

Using LV 6.1 and 8.2.1 on W2k (SP4) and WXP (SP2)
Message 38 of 42
(6,105 Views)

Absolutely no problem, Shane!

LV is my sweetheart.  Whenever her beauties and virtues are called into question, I feel moved to defend her honour.

I'll see if I can get back to your examples when I have time to do a proper job.

Ben

Message 39 of 42
(6,093 Views)
I've initiated a new thread for this topic.

Shane.
Using LV 6.1 and 8.2.1 on W2k (SP4) and WXP (SP2)
Message 40 of 42
(6,074 Views)