11-17-2006 09:30 AM
See this thread for starters on constant folding.
OK, I looked at that. It has some strange things in it:
LabVIEW uses constant folding to optimize the performance of VIs. With constant folding, the compiler stores constant values when it compiles VIs instead of calculating them at run time.
The idea of "calculating" a constant is a bit weird, but I suppose it could mean that if I have a constant 17 wired to an INIT ARRAY function with a type of DBL, then it could calculate a constant array of 17 doubles, and store that structure in the VI, instead of doing the allocation at run time.
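LabVIEW is graphical, so there is no text snippet to show, but the same optimization exists in text-based compilers. As a rough analogy (Python, not LabVIEW), CPython folds constant expressions at compile time so the computed value is stored in the compiled code object instead of being calculated at run time:

```python
import dis

def folded():
    # CPython's peephole optimizer folds 3 * 17 into the constant 51
    # at compile time; the bytecode never performs a multiplication.
    return 3 * 17

dis.dis(folded)                      # the disassembly contains the folded constant 51
print(folded.__code__.co_consts)     # 51 is stored directly in the code object
```

The point of the analogy: the "calculation" happens once, in the compiler, and the result is baked into the compiled artifact, which is presumably what LabVIEW is doing with a constant wired to INIT ARRAY.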
Still, that doesn't apply here - the #SAMPLES input is NOT a constant, it's a TERMINAL which is connected to its CALLER.
One confusion might be the definition of "compile time" vs. "Run time". As Jeff stated above there is also the case where inputs to a structure (loop, etc) are treated as constants within that structure.
Hmmm. Perhaps it THINKS the value is constant, because it would be constant WITHIN the STRUCTURE, if I executed that case. BUT WHY is it thinking that when I DON'T EXECUTE THAT CASE ???
You can show constant folding on the block diagram by selecting Tools»Options, selecting Block Diagram from the Category list, and placing checkmarks in the Show constant folding of wires and Show constant folding of structures checkboxes. When you place a checkmark in the Show constant folding of wires checkbox, hash marks appear on the wires attached to constants that are constant folded. When you place a checkmark in the Show constant folding of structures checkbox, gray hash marks appear inside structures that are wired to constants.
I did that and see no change in the block diagram's appearance.
The hash marks might not appear until after you run the VI.
Well, isn't THAT special. So I run it, and the hash marks appear on the #SAMPLES wire INSIDE the WHILE loop.
Maybe I should put the terminal INSIDE the case that uses it.
YES! That fixes it! It does nothing, as it's supposed to.
But that's CLEARLY a LabVIEW bug, since it is misinterpreting what is intended.
Blog for (mostly LabVIEW) programmers: Tips And Tricks
11-17-2006 10:12 AM - edited 11-17-2006 10:12 AM
Message Edited by Darren on 11-17-2006 10:13 AM
11-17-2006 10:29 AM
CAR = Citizen's Action Report ??? ;->
I have filed a bug report as well, but it claims I need a service contract, so I don't know if they'll listen to me.
It seems like you have a suitable workaround...just specify a default value for # Samples less than 2147483647.
I picked that number to be safe - if the terminal is unwired, it means ALL SAMPLES. In other cases (which I haven't shown), I use the MAX of (#SAMPLES terminal, #SAMPLES in FILE) to actually do the work. So if I want to process the whole file, I leave the #SAMPLES terminal unwired, if I want to process a portion of it, I wire the START SAMPLE and #SAMPLES terminals as appropriate.
Regardless of that workaround, it points up how LV is being stupid. If I changed the default value to 1000, then the crash might go away, but my behavior changes. And even if I do that, it looks like when I call the other functions that don't use this terminal, I STILL have to watch it allocate a chunk of memory for me. Which is EXTREMELY ironic, since this VI has two modes, one which uses lots of RAM and little time, the other which uses disk and time, but less RAM.
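A back-of-the-envelope check (my own sketch, not from the original posts, assuming the standard 8 bytes per DBL) shows why folding INIT ARRAY with the default #SAMPLES value of 2147483647 is fatal:

```python
# Numbers from the thread: #SAMPLES defaults to 2**31 - 1 (meaning "all
# samples"), and Initialize Array with a DBL type uses 8 bytes per element.
samples = 2147483647
bytes_per_dbl = 8

gib = samples * bytes_per_dbl / 2**30
print(f"{gib:.1f} GiB")  # roughly 16 GiB, hopeless on a 2006-era machine
```

So any attempt to pre-allocate (or constant-fold) that array, even for a case that never executes, asks for about 16 GiB of contiguous memory.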
Please, LabVIEW, stop trying to be so "smart" and "helpful".
Blog for (mostly LabVIEW) programmers: Tips And Tricks
11-17-2006 10:46 AM
Hi Coastal,
I watch for your postings because you always post with great questions and I get a chuckle out of your postings.
CAR = Corrective Action Report. We use these numbers to make sure the posted bugs get fixed. If you have not noticed it previously, we try to maintain a monthly list of the bugs. You can find this month's bug list here.
http://forums.ni.com/ni/board/message?board.id=BreakPoint&message.id=2943&jump=true
Take care,
Ben
11-17-2006 11:07 AM
"We"? Are you part of NI?
Well, moving the terminal INSIDE the case (and thereby HAVING to use local variables in the other cases, a practice I do NOT normally follow, since it results in unnecessary copies of data) has fixed my immediate problem.
My sub-program (some 300 VIs) now runs, although there are some rough edges.
Still, it's obvious that there are lots of other places where this "performance enhancement" is actually hurting me, just not fatally like this case. Apparently, it's going to be allocating all kinds of unnecessary crap, every time I call one of these VIs, all in the name of being "smart".
It's hard to imagine the thinking of the person who thought this was a good idea.
Blog for (mostly LabVIEW) programmers: Tips And Tricks
11-17-2006 11:20 AM - edited 11-17-2006 11:20 AM
No, I am not NI!
"We" = LabVIEW Champions
http://zone.ni.com/devzone/cda/tut/p/id/5263
+
all of the other contributors to the NI Discussion forums (particularly contributors like you) as well as those from LAVA.
http://forums.lavag.org/forums.html
If you trace back through the monthly lists and all of the provided links, you will see that "We" have had a big impact on the "bug-fix cycle" for LV.
Thank you for helping out!
Ben
See this thread to find out how the Bug Thread started.
http://forums.ni.com/ni/board/message?board.id=BreakPoint&message.id=2668#M2668
Message Edited by Ben on 11-17-2006 11:21 AM
11-17-2006 11:35 AM
Since my program now runs, I decide to compare performance.
My program processes a data file, two stages are separately timed and displayed.
On the SAME virtual machine, with the SAME LabVIEW code, I get this:
LV 8.2:
29947 / 6361 mSec, run 1
27488 / 6936 mSec, run 2
LV 7.0:
16575 / 5908 mSec, run 1
16813 / 4801 mSec, run 2
My job is to advise my client: We move to LV 8.2, or we don't.
I'm leaning towards "don't", as a result of this.
Can anyone convince me otherwise?
Blog for (mostly LabVIEW) programmers: Tips And Tricks
11-17-2006 11:42 AM
11-17-2006 11:47 AM
Yes, I understand all that.
My 8.2 tests were conducted with the program which had been compiled and saved with 8.2 (and even closed). The 7.0 tests were conducted with a separate copy, still from 7.0 (I'm doing this on a virtual machine, to keep it completely safe - my production machine is a different box altogether).
It's bad news to see such numbers.
Blog for (mostly LabVIEW) programmers: Tips And Tricks
11-17-2006 12:21 PM
The file I processed in my earlier test is a relatively large one: 61200 samples of 90 channels, plus the second part computes the same number of samples for another 90 or so channels - the whole thing is held in memory (the VI in question gives the option of caching to RAM or caching to disk).
My virtual machine is limited to 512 Mbytes RAM, even though I have 1G physically installed in the host.
So, on the chance that the memory limit was hitting me harder in 8.2 than in 7.0, I tested again with a lesser file.
This file has 262 samples, only 60 channels, and the 2nd part cannot be completed because some data is missing. Still, the first part does exactly the same thing as before.
The numbers are:
LV 8.2:
2959, 3241, 2997 mSec (three separate runs)
LV 7.0:
874, 786, 830 mSec (three separate runs)
Is that ugly or what?
Blog for (mostly LabVIEW) programmers: Tips And Tricks