08-24-2007 01:37 PM
08-24-2007 02:13 PM
08-25-2007 08:41 AM
08-25-2007 09:57 AM - edited 08-25-2007 09:57 AM
08-25-2007 11:44 AM
Camu wrote: Hi, I want to simplify a very complex VI by using subVIs and local variables. Is there any drawback to using too many subVIs?
08-25-2007 12:00 PM
08-25-2007 12:42 PM
andre.buurman@carya wrote:
Reading a local variable is nothing more than using the variable name again in a text-based language.
NO! It is very different! You cannot compare sequential, text-based code with dataflow code. This is a very dangerous comparison, because LabVIEW does not execute code in a predictable order if you have disconnected code islands, each using locals. To cure the resulting race conditions, you would need to place all code segments into frames of a sequence structure, further complicating the program.
For some easy reading, have a look at this old thread: http://forums.ni.com/ni/board/message?board.id=170&message.id=112401#M112401
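Since a LabVIEW diagram can't be shown in text, here is a rough Python analogy of the race condition described above (all names here are illustrative). The global `counter` plays the role of a local variable shared by two disconnected code islands; because neither island waits for the other, both can read the old value before either writes, and one update is silently lost:

```python
import threading
import time

counter = 0  # plays the role of a LabVIEW local: shared, with no execution order

def island(add):
    """One disconnected 'code island' doing a read-modify-write on the local."""
    global counter
    v = counter          # read the "local"
    time.sleep(0.05)     # both islands read before either writes...
    counter = v + add    # ...so one of the two updates is lost

t1 = threading.Thread(target=island, args=(1,))
t2 = threading.Thread(target=island, args=(2,))
t1.start(); t2.start()
t1.join(); t2.join()

print(counter)  # 1 or 2 depending on scheduling, never the intended 3
```

In a pure dataflow version the result of the first operation would be wired into the second, so the order is fixed by the wire and no race is possible.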
As has been mentioned during several keynotes at NIWeek, LabVIEW has excellent capabilities to automatically use multiple processor cores, thanks to its inherent parallelism. Currently, the main push in processor development is toward multicore designs, and LabVIEW code will really shine while most text-based code is stuck in the mud. Of course, if you chop up your code and sequentialize (sic ;)) it with sequences and local variables, you're back to square one and are throwing away one of the most powerful advantages of LabVIEW. 🙂
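As a hedged text-language sketch of that inherent parallelism (again in Python, since diagrams can't be embedded here; the `island` function and its workload are made up for illustration): two computations with no data dependency between them, like two unwired islands on a diagram, can run at the same time, so the total time is roughly one slice of work, not two.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def island(x):
    """An independent computation; the sleep stands in for real work."""
    time.sleep(0.2)
    return x * x

start = time.perf_counter()
with ThreadPoolExecutor() as pool:
    # No "wire" connects the two calls, so they execute concurrently,
    # just as two unconnected islands do on a LabVIEW diagram.
    a, b = pool.map(island, [3, 4])
elapsed = time.perf_counter() - start

print(a + b)  # 25; total time is about one 0.2 s slice, not two
```

Forcing the same two calls into a sequence structure (or chaining them through a local) would serialize them and double the elapsed time.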
I am so glad that LabVIEW does not contain a vertical flat sequence, or we would see something that even more resembles text code. 😄
andre.buurman@carya wrote:
In my opinion, the performance issues with locals only apply when working with large datasets and slow machines.
"Slow machines" is a meaningless term. I remember when the '486 processors first came out, it was argued that they are so much more powerful than the '386 that they should be used mostly for server applications. A poorly written program can bring any processor to its knees. A well written program should run well on any hardware. It is virtually never a solution to "cure" a sluggish program by throwing faster hardware at it. Same with memory. Sure, you can go to 4GB, but that will give you only a factor of two or three over the typical RAM (and you cannot even use it all on 32bit architectures), almost insignificant! Alternatively, you can often streamlime the code and easily eliminate as many extra data copies.
Compared to the hardware that will be available in a few years, even the fastest machine today is extremely slow. Years ago, fantastic applications were written that ran well on a 100 MHz Pentium with 64 MB of RAM.
In summary, if you want to fully harness the power of current and upcoming multicore CPUs, you had better quickly lose your sequential thinking and write LabVIEW code the way it was intended, using the power of dataflow to your advantage. Good luck! 🙂
08-25-2007 03:36 PM
08-25-2007 06:06 PM
@Gabi1 wrote:
On a side note, I don't totally accept the idea of "zero" locals.
Oh, yes! I use locals all the time. This is not some religious thing. Locals definitely have their place, especially in dealing with UI issues. However, they should not be used as a cheap substitute for naked "variables" as discussed here.
I don't think there is a LabVIEW lobby, and I'm certainly not part of it. I don't even own NI stock. 😮
LabVIEW works the way I think and has empowered me to do with ease things I struggled with forever in text-based code. So, yes, I am a fan, groupie (or whatever you want to call it) of the visionary ideas laid out by Jeff Kodosky and crew over 20 years ago. LabVIEW is something I can use every day, and it makes my life easier. That should count for something! 🙂
08-26-2007 01:50 AM
Oh yes! Count me in as a big fan!
I'd rather do LabVIEW and use NI tools to develop control systems all day long than have any other job.
Thing is, I am a postdoc right now, so I have too many other things to do (like making ultracold molecules out of atomic Bose-Einstein condensates...).
I also found LV to be quite efficient: this is not an objective observation, but I made some Monte Carlo and molecular dynamics simulations in LV, which turned out to be faster than their Fortran counterparts!
Amazing compiler, this thing...