LabVIEW


memory management and style

I am self-taught in LabVIEW, like most people in academic science, but the main VI I use has finally become complex enough that I have to worry about memory management and other concepts that beginners don't normally worry about.

So I read the "LabVIEW Performance and Memory Management - Tutorial"
http://zone.ni.com/devzone/conceptd.nsf/webmain/732CEC772AA4FBE586256A37005541D3?opendocument
which was pretty informative. But the document brought up several questions regarding efficient memory management and style:

1. First, they tell you to avoid using complex data structures such as clusters, clusters of clusters, arrays of clusters, etc. But if you have a complex VI, you really have no choice. There are only so many terminals one can put on a sub-VI, and even if you could put more, you wouldn't want 100 wires sticking out. The tutorial tells you to do things like keep separate arrays of data of a single type. I can see how this is more efficient, but then when you want to pass one set of data (that would normally be in a cluster) to another VI, you have to index all N arrays, stuff them into a cluster, then put them into a sub-VI.
That's hardly an optimal solution, coding-wise, and more importantly, style-wise. I worry about style because (a) I'm about to graduate and have to pass my code on to someone else, (b) it saves a lot of development time, and (c) my programming courses in college beat on style relentlessly.
In the C and Java courses I have taken, I've always been taught to encapsulate data into logical data structures. It would seem that clusters are the analog of structs in C or objects in Java, and I'm sure there are plenty of very complex VI's out there that must use such data structures. Is there a way to make cluster use more memory efficient? Right now, I use references to clusters, but the "Memory Management" tutorial says that when you use attribute (property?) nodes in a Sub-VI, you then have to allocate memory in the Sub-VI for the front-panel controls referenced. So that would seem to defeat the purpose of using references to avoid making a copy of a complicated data structure.
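In C/Java terms, the cluster-versus-separate-arrays trade-off is the classic array-of-structures vs. structure-of-arrays choice. A rough, hypothetical Java sketch (class and field names are invented for illustration):

```java
// Hypothetical illustration: LabVIEW's "array of clusters" vs the
// tutorial's "separate arrays of a single type".
class Sample {                       // a cluster is roughly a struct/POJO
    final double voltage, current;
    Sample(double voltage, double current) {
        this.voltage = voltage;
        this.current = current;
    }
}

public class Layouts {
    public static void main(String[] args) {
        int n = 4;

        // Array-of-structures: each record's fields travel together,
        // so passing one record to a sub-VI (method) is trivial.
        Sample[] records = new Sample[n];
        for (int i = 0; i < n; i++) records[i] = new Sample(i * 0.5, i * 0.1);

        // Structure-of-arrays: each field is a flat, contiguous array
        // (what the tutorial recommends); to hand record i to a method
        // you must index every array and re-bundle, as the post complains.
        double[] voltage = new double[n], current = new double[n];
        for (int i = 0; i < n; i++) { voltage[i] = i * 0.5; current[i] = i * 0.1; }
        Sample rebundled = new Sample(voltage[2], current[2]);

        System.out.println(records[2].voltage == rebundled.voltage);
    }
}
```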

2. The tutorial also says to avoid using local variables and instead use "data flow" programming, such as shift registers and continuous wires that go into, through, and out of loops. But, again, if you have a complex program, you're going to have a ton of wires stretching across your VI, especially if you're not encapsulating your data in clusters. And you'd have to cross wires, which is apparently a big no-no. So how do you use data flow programming without making your VI impossible to read?

3. Is there a way to use LabVIEW in a more object-oriented way? LabVIEW touts itself as being object-oriented, but it doesn't really let users take advantage of that. I can see that LabVIEW and G itself are based on object-oriented code, but users can't really make their own objects and manipulate them as they can in C++ or Java. I realize that there are add-on packages you can buy, but it seems that they don't really give all the functionality of an object-oriented language (e.g. interfaces, real polymorphism). It seems that at least some ability to make objects and classes would let people clean up their code. Whether that overhead would increase memory use unacceptably is something the NI engineers would have to worry about. I guess the debate over performance vs. style is pretty general to all languages.

4. Is there any way to learn good LabVIEW style? As mentioned before, most people in my field are self-taught. There haven't been any LabVIEW classes at the schools I've been to, and I don't have a lot of time to travel to or take the NI-sponsored classes. My lab-mates' VIs and mine are gawdawful messes of tangled wires. LabVIEW the development environment doesn't really force you to use some semblance of style the way, say, Java does. I know the whole "wires go left to right" bit, but that only takes you so far, and I had been using LabVIEW for three years before I found that one out.

5. I'm using LabVIEW 6.1. Is LabVIEW 7 more memory/performance efficient? Or do the new bells and whistles make it less efficient? In LabVIEW 6.1 I've noticed that tab controls slow things down quite a bit.

Anyway, if you have any answers or comments, I would appreciate your input.
Message 1 of 7
Speaking for myself (also a self-taught LabVIEWer),

1) Using clusters "to encapsulate data into logical data structures" is fine. The problems start when you group lots of unrelated values into a cluster just to get rid of wires. By doing things this way, you might need to access a large cluster each time you want to read a single integer value (you don't want to pass a cluster with a 1M array to a sub-VI when all you want is a boolean value). Grouping logically allows you to use clusters, as far as I'm concerned, and it certainly helps clean up wires. As with most guidelines, it's generally only the misuse which is frowned upon (like local variables). I would definitely use clusters instead of references, which force everything through the GUI thread.

2) Using data-flow programming techniques requires you to organise your VI into blocks of (logically) related functionality. You can then create sub-VIs to handle a set of related functions, allowing you to clean up your diagram immensely. Think in terms of data sources and data targets, and organise your code accordingly.

3) Why? What do you want to do with OO that you can't do with LabVIEW? I agree, many things need to be done DIFFERENTLY in LabVIEW than in C++, but nearly everything is possible.

4) One good way is to go back through your older VIs and clean them up until you're satisfied. This will allow you to see where mistakes were made in the past. Otherwise, logical grouping, uncluttered diagrams and labelling are all very important. If your diagram doesn't appeal to you, re-organise it and give it to someone else to review. This can help a lot.

5) I don't know. I'm sticking to LV 6.1 for the time being.

Hope this helps

Shane.
Using LV 6.1 and 8.2.1 on W2k (SP4) and WXP (SP2)
Message 2 of 7
A few more thoughts:
1) Shane has definitely hit on the right idea, but there is another dimension to it. Using complex data structures does add additional memory and performance hits beyond the fact that you have to keep all of the data together. When storing data in memory there is obviously some overhead associated with each level of complexity you add to a data structure (you have to somehow keep track of what is in the structure, and where it resides in memory).

Does this mean that you shouldn't use complex data structures? No; the value in readability and maintainability that you gain by using well-designed data structures will almost always outweigh the modest performance hit they cause. There are almost always optimization issues involved with writing better code (true of any language); if memory and speed were truly all you cared about, you would be writing in assembler. As Shane said, the memory and performance guidelines are generally only applicable in cases of misuse, or in cases where you have to squeeze just a few microseconds out of a VI.

2) You should never use a local variable to pass data that could be passed by a wire. Local variables cost memory and performance (not much -- you shouldn't worry about it in cases where they are necessary -- but it can add up if misused). In addition, they reduce the readability of your VI, since data is being transferred without any visible link. They also make execution control harder and can lead to race conditions if misused. As Shane said, if you find that you have to string wires a long way across your diagram, then you probably aren't making good use of sub-VIs or organizing your block diagrams well.
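The race-condition warning translates directly into text languages: a local variable behaves like a field shared between parallel loops, while a wire behaves like a value each branch owns. A hedged Java sketch (names invented):

```java
public class LocalRace {
    static int shared = 0; // plays the role of a LabVIEW local variable

    // Dataflow style: the value lives on the "wire" (a return value),
    // so there is no shared state and the result is deterministic.
    static int countByWire(int n) {
        int total = 0;
        for (int i = 0; i < n; i++) total++;
        return total;
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable loop = () -> {
            // Read-modify-write on shared state is not atomic, so two
            // parallel loops can lose increments: a race condition.
            for (int i = 0; i < 100_000; i++) shared++;
        };
        Thread a = new Thread(loop), b = new Thread(loop);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println("shared (often < 200000): " + shared);
        System.out.println("wired: " + (countByWire(100_000) + countByWire(100_000)));
    }
}
```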

3) There are some object-oriented principles which are difficult to accomplish in LabVIEW. It's generally possible to do most things, but they may be more difficult than in other languages (which in some cases may be for the best; design patterns which apply well in other languages don't always make as much sense in LabVIEW). LabVIEW is a language under constant development, and it is likely that object-oriented programming will become easier with time. In the meantime, I believe there have been a couple of open-source projects aimed at improving LabVIEW's object-oriented capabilities. I haven't personally used any of them, but if you are interested, I would guess that openg.org would be a good place to start.

4) NI teaches a number of courses aimed at improving development techniques (LV Intermediate I probably focuses on it the most). Ideally, taking these would be a great idea; however, we understand that this may not be affordable or practical for university students. The ideal solution on an academic level would be to teach university courses on it, or at least include it in current courses. NI is very committed to assisting anyone who wants to expand the LabVIEW instruction their courses provide; we offer a large body of resources and contacts which can make it a lot easier to incorporate this material. You should encourage your professors to work with their local NI reps to take advantage of these resources.

For a more short term solution, the LabVIEW Development Guidelines would probably be a good place to start:
http://digital.ni.com/manuals.nsf/webAdvsearch/5FBA64AD223A76A786256D2C00561F2D?OpenDocument&vid=niwc&node=132100_US

5) As with the rest of your questions, there's no simple yes/no answer to this one. LV 7.0 is faster for some things, as we are always working to optimize our code, and slower for others, as new features or better-written code sometimes add overhead.
http://zone.ni.com/devzone/conceptd.nsf/webmain/DC9B6DD177D91D6286256C9400733D7F?opendocument provides some examples, but the fact is that it really varies by application. In general they tend to balance out, and most applications run at about the same speed.

Finally, a note about using the forum: in general, you'll probably get many more answers to your questions if you post them separately; most people are more willing to read a paragraph or two and post their thoughts on it than to read a page and write a long response to a number of questions (luckily there are some exceptions :)).

Hope some of that helps,
Ryan K.
NI
Message 3 of 7
Thanks for responding, Shane. All of that was very informative. I still have some comments:

1) Could you elaborate on the statement "references [force] everything through the GUI thread?" This kind of makes sense to me, but I haven't found much info on the web besides "using references for non-GUI stuff is not recommended." I have personally noticed that sub-VI's that use refnums and property nodes do take more resources.

3) Re: OO

Having interfaces (abstract classes) would be nice, from a polymorphic point of view. I know LabVIEW has polymorphic VIs, but do polymorphic VIs distinguish between *different* cluster type defs? A more important problem is that you can't easily make an array of different (but related) type defs. That takes away a lot of the elegance of polymorphism.

Here's a real example: I have 3 different types of pumps that have different serial protocols. They all have basic functions like "pump" and "stop." In LabVIEW, I have to call the "XXXXpump.vi" for each different type of pump, and I can't have all the pumps in the same array.

In Java, I would create an abstract class called "PumpObject," and each subclass would HAVE to have a "pump" function. I could write the classes Pump1, Pump2, and Pump3, each of which is a subclass of PumpObject. So then, in my main function, I could have an array of PumpObjects, and tell all of the elements to call their "pump" functions. Each type of pump would know what to do, even though they all handle their serial communications differently. This would take all of 4 lines of code.
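The Java design described above might look like this (class names taken from the post, method bodies invented; each `pump` here just returns a string standing in for the real serial traffic):

```java
abstract class PumpObject {
    abstract String pump();   // every subclass must provide its own protocol
    abstract String stop();
}

class Pump1 extends PumpObject {
    String pump() { return "Pump1: protocol A start"; }
    String stop() { return "Pump1: protocol A stop"; }
}

class Pump2 extends PumpObject {
    String pump() { return "Pump2: protocol B start"; }
    String stop() { return "Pump2: protocol B stop"; }
}

class Pump3 extends PumpObject {
    String pump() { return "Pump3: protocol C start"; }
    String stop() { return "Pump3: protocol C stop"; }
}

public class PumpDemo {
    public static void main(String[] args) {
        // One array of related-but-different types; each element
        // dispatches to its own serial protocol at run time.
        PumpObject[] pumps = { new Pump1(), new Pump2(), new Pump3() };
        for (PumpObject p : pumps) System.out.println(p.pump());
    }
}
```

The array-plus-loop at the end is the "4 lines of code" the post mentions: dynamic dispatch replaces a case structure over pump types.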

I realize that, with enough polymorphic VIs, case statements, refnums, or variants, I could work out something similar, but most people would not be able to work out such a hack.
Message 4 of 7
Thanks for the comments, Ryan. It's always nice to hear from the people who develop the software.

1) It seems to me that allowing people to pass pointers to complex data structures would allow for more efficient memory management. I know about refnums, but Shane and others seem to imply that refnums/property nodes come with unnecessary GUI overhead. I hate pointers as much as anyone else, but being able to pass by reference without overhead would be HUGE. Is there a way to do this?

2) The "avoid local variables when possible" bit is news to me. But I get the point 😉

3) As far as I know, the GOOP projects use refnums, which are subject to the same overhead constraints mentioned in (1).


4) I think the problem with college courses is that most professors who design course curricula know about LabVIEW, but they don't use it themselves -- their graduate students do. I've seen certain TAs make LabVIEW instruction a pet project, but the novelty of G means that most students are happy just to get to the point where they don't have broken wires everywhere (believe me, there's lots of swearing on LabVIEW lab days). And most students are more worried about running their experiments than programming.
Message 5 of 7
> 1) Could you elaborate on the statement "references [force] everything
> through the GUI thread?" This kind of makes sense to me, but I
> haven't found much info on the web besides "using references for
> non-GUI stuff is not recommended." I have personally noticed that
> sub-VI's that use refnums and property nodes do take more resources.
>

There are several types of refnums. File refnums refer to the contents of the file, and are sort of a pointer, but not to memory. Control refnums, on the other hand, are references to UI objects that happen to have Value as one of their many properties (note that Value wasn't even there until LV 6, I think). Since they affect UI objects, they all serialize through the UI thread. Reading a Boolean from its terminal or local happens in whatever thread is executing; doing any property on a UI object, including reading the Value property, switches to the UI thread and waits for it to do the job. Of course, if you have a VI that does lots of UI work, that makes sense, and in fact LV leaves the execution of the diagram in the UI thread to avoid the switching overhead, so that your UI code stays in the UI thread and your non-UI code stays in its own execution thread -- nice. When you mix UI and non-UI all over the place, you incur the switches.
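Greg's serialization point can be mimicked in text-language terms: imagine every property access being handed to one dedicated thread while the caller blocks for the answer. A hypothetical Java model (an analogy only, not how LabVIEW is implemented):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class UiThreadModel {
    // Stand-in for the single UI thread all property nodes funnel through.
    static final ExecutorService uiThread = Executors.newSingleThreadExecutor();

    // "Property node" read: marshal to the UI thread and wait for the answer.
    static boolean readValueProperty(boolean[] control) {
        try {
            return uiThread.submit(() -> control[0]).get();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        boolean[] booleanControl = { true };

        // Terminal/local read: happens in whatever thread is executing.
        boolean direct = booleanControl[0];

        // Value-property read: a round trip through the "UI thread".
        boolean viaProperty = readValueProperty(booleanControl);

        System.out.println(direct + " " + viaProperty);
        uiThread.shutdown();
    }
}
```

The `submit(...).get()` round trip is the context-switch cost being described: two thread switches per property access, versus none for a plain terminal read.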

> 3) Re: OO
>
> Having interfaces (abstract classes) would be nice, from a polymorphic
> point of view. I know LabView has polymorphic VI's, but do
> polymorphic VI's distinguish between *different* cluster type def's?
> A more important problem is that you can't easily make an array of
> different (but related) type-def's. That takes away a lot of the
> elegance of polymorphism.
>

You are exactly right. You mentioned in an earlier post that LV is
touted as being object oriented. Some people say that, but LV is
lacking several object oriented elements, and the developers of LV do
not refer to it as an object oriented language, at least not yet.

To do the three different pumps in an array, you can use an old school
approach of a union, or you can look at the GOOP tools. You can have a
hierarchy of references and do either implicit or explicit casting to
retrieve the true class type from the base.

Greg McKaskle
Message 6 of 7
> 1) It seems to me that allowing people to pass pointers to complex
> data structures would allow for more efficient memory management. I
> know about refnums, but Shane and others seem to imply that
> refnums/property nodes come with unnecessary GUI overhead. I hate
> pointers as much as anyone else, but being able to pass by reference
> without overhead would be HUGE. Is there a way to do this?

Actually, many of LV's datatypes are implemented with references, but
the diagram behavior is as if they were done by value. You mention that
you hate pointers, presumably this is in a serial language with a single
thread doing one thing at a time. Putting pointers in a parallel
language is ten times as bad. If instead of an array, you had a
reference to the array and the wire splits, the split pieces of code
will interact. The top branch will do a filter, then a power spectrum,
then plot. The middle one will subtract off a baseline and get an rms
value. The bottom one will simply plot. Since there is no longer any
sequencing between them, they interleave and you get gibberish. To
clean things up you either serialize them by putting them back into a
wire, make explicit copies, or use mutexes and semaphores to control the
order of execution. Ick.
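In Java terms, the branch interaction Greg describes looks like this (a hedged sketch; the "branches" run sequentially here just to show the aliasing, while the real hazard is when they run in parallel):

```java
import java.util.Arrays;

public class BranchDemo {
    public static void main(String[] args) {
        double[] data = { 1.0, 2.0, 3.0 };

        // Reference semantics: branching the "wire" shares one buffer,
        // so one branch's baseline subtraction leaks into the other.
        double[] branchA = data;
        double[] branchB = data;
        branchA[0] -= 1.0;               // edit in branch A...
        System.out.println(branchB[0]);  // ...is visible in branch B

        // Value semantics (what a LabVIEW wire promises): each branch
        // conceptually owns its own copy; the compiler then elides the
        // copies it can prove are unnecessary.
        double[] valueA = Arrays.copyOf(data, data.length);
        double[] valueB = Arrays.copyOf(data, data.length);
        valueA[0] -= 1.0;
        System.out.println(valueB[0]);   // unaffected by the edit to valueA
    }
}
```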

So LV and most other dataflow languages have by-value semantics, meaning
that a wire represents the value, not a reference. On the other hand,
all arrays and strings are implemented by reference. The LV compiler
then determines how few copies are needed in order to maintain the same
behavior as if all were copies. The performance and memory usage
chapter gives examples of how this works.

Is it possible to do pointers? Sure, some have done it. You call a
block that copies or swaps the data into a private storage, and gives
back a pointer/reference/U32. Later in their diagram, they call passing
in the pointer and get back the array, or some portion of it. Of course
LV knows nothing about these pointers, and you can't operate on things
using LV icons until you go get the data. In many cases this uses just
as much memory as the simple code without the pointers.
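A minimal sketch of that handle trick in Java (names hypothetical; a real LabVIEW version would live in external code, this just shows the shape):

```java
import java.util.HashMap;
import java.util.Map;

// The data is swapped into private storage and an opaque integer
// "pointer" (like a U32 refnum) comes back; ordinary diagram code can't
// touch the data until it is fetched out again by handle.
public class HandleStore {
    private final Map<Integer, double[]> storage = new HashMap<>();
    private int nextHandle = 1;

    // "Swap the data in" and hand back a handle.
    public synchronized int put(double[] data) {
        int handle = nextHandle++;
        storage.put(handle, data);
        return handle;
    }

    // Later in the diagram: pass the handle back, get the array out.
    public synchronized double[] get(int handle) {
        return storage.get(handle);
    }

    public static void main(String[] args) {
        HandleStore store = new HandleStore();
        int h = store.put(new double[] { 1.0, 2.0, 3.0 });
        System.out.println(store.get(h).length);
    }
}
```

As the post says, the language knows nothing about the handle, so you pay a fetch on every access and often save no memory over the plain by-value code.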

>
> 2) The "avoid local variables when possible" bit is news to me. But I
> get the point 😉
>
> 3) as far as I know, the GOOP projects use refnums, which are subject
> to the same overhead constraints as mentioned in (1).
>

Current GOOP uses the pointer approach mentioned above, and all objects
are accessed by reference. The reads and writes are also serialized so that if
you ask for the data on an object, you wait for someone else to finish
working on the object, then you get the elements, modify and return
them. Again, parallelism and pointers are something you have to manage
pretty closely.

>
> 4) I think the problem with college courses is that most professors
> who design course curricula know about LabView, but they don't use it
> themselves -- their graduate students do. I've seen certain TA's make
> LabView instruction a pet project, but the novelty of G means that
> most students are happy to get to the point where they don't have
> broken wires everywhere (believe me, there's lots of swearing on
> LabView lab days). And most students are more worried about running
> their experiments than programming.

This is the approach in the basket weaving class, the physics class,
etc. It isn't unique to LV. And really, the point of the class is
rarely to learn LabVIEW or another language, it is to learn some
principles, and to do that you need to use some tools. Some students
embrace the tools and realize they will need to know this later in life
while others hack their way through just enough to get the desired A, B,
or C.

Enjoy,
Greg McKaskle
Message 7 of 7