LabVIEW Idea Exchange


I've just run into a "feature" of the LV2009 Parallel For Loops which is a bit of a nuisance!  The number of loop instances is determined by two values: 1) the number of instances in the Configure Iteration Parallelism dialog, and 2) the number wired to the P terminal of the loop.  It turns out that the number of instances created is the smaller of these two.

The nuisance is that if I wire the number of processors from CPU Information to P (as recommended), then when my new 16-core machine arrives (I wish!) I don't get that benefit unless the dialog value is also >= 16.  And the 64-core machine that arrives after that requires the dialog value to be reset in every Parallel For Loop - or I set it to some unreasonably large number now, which makes it pretty much meaningless.
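In textual pseudocode, the current behavior boils down to this (a minimal sketch; the names are illustrative, not LabVIEW internals):

    def effective_instances(dialog_value, p_terminal):
        # LabVIEW creates the smaller of the two configured counts
        return min(dialog_value, p_terminal)

    # Dialog left at a default of 8, CPU Information reports 16 cores:
    print(effective_instances(8, 16))   # -> 8, not the hoped-for 16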

 

My suggestion is that the default number of instances set in the dialog is "Maximum" - i.e. it will use the maximum number of processors available.  It should still be possible to set it lower should the programmer wish to restrict the number below that.  Then the default case works transparently, as the programmer would usually want, without needing to wire from CPU Information, and there are no surprises down the track when loops don't speed up on a new machine as expected.

Create an XY Graph and feed it a time-stamped XY plot with some hundred thousand points... and you have yourself a very sluggish and possibly crash-prone application. The regular graph can take a bit more data, but still has its limits. Having 100k points to display is quite common (in my case it's most often months of 1-second data).

 

The idea could be formulated as just "improve how graphs handle large data sets"... but how that would be done depends a bit on what optimizations the graph code is open for. The most effective solution, however, would probably be to do what you currently have to write yourself: surrounding decimation logic.

 

So my suggestion is to add a built-in decimation feature, where you can choose to have it applied automatically when needed - or when you say it is needed - possibly with a few different decimation methods (min-max, every Nth point, etc.). The automatic mode should be on by default, making the problem virtually invisible to the novice user.
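For illustration, here is a rough Python sketch of the min-max decimation logic one currently has to write around the graph by hand (the function name and exact bucketing policy are assumptions, not the proposed API):

    def min_max_decimate(points, max_points):
        # points: list of (x, y) tuples. Keeps each bucket's min and max
        # y-value so peaks survive, returning at most ~max_points points.
        if len(points) <= max_points:
            return points
        bucket = max(1, 2 * len(points) // max_points)  # two survivors per bucket
        out = []
        for i in range(0, len(points), bucket):
            chunk = points[i:i + bucket]
            lo = min(chunk, key=lambda p: p[1])
            hi = max(chunk, key=lambda p: p[1])
            out.extend(sorted({lo, hi}))  # dedupe when lo == hi, keep x order
        return out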

 

A big advantage of doing it within the graph is that it will (should) integrate fully with the other features of the graph - like zooming, cursors etc.

Simply wire an INT byte count (and the queue wire) and you're done. Ideally, this function would be placed just after Obtain Queue.

 

 

 


LabVIEW has a somewhat hidden feature built into the variant attribute functionality that easily allows the implementation of high-performance associative arrays. As discussed elsewhere, it is implemented as a red-black tree.

 

I wonder if this functionality could be exposed with a more intuitive set of tools that does not require dummy variants and somewhat obscure VIs hidden deeply in the variant palette (who would ever look there!).

 

Also, the key is currently restricted to strings (of course we can flatten anything to a string to make a "name" for a more generalized use of all this).

 

I imagine a set of associative array tools (a rough sketch follows the list below):

 

 

  • Create associative array (key datatype, element datatype)
  • Insert key/element pair (replace if key exists)
  • Look up key (index key) to get element
  • Read all keys
  • Delete key/element pair
  • Delete all keys/elements
  • Dump associative array to disk
  • Restore associative array from disk
  • Destroy associative array
  • ... (I probably forgot a few more)
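Here is a sketch of what such a palette could look like, using a plain Python dict as a stand-in for the variant-attribute red-black tree (every name below is hypothetical, not an existing LabVIEW VI):

    import pickle

    class AssociativeArray:
        def __init__(self):                 # create associative array
            self._store = {}

        def insert(self, key, element):     # replace if key exists
            self._store[key] = element

        def lookup(self, key):              # index key to get element
            return self._store[key]

        def keys(self):                     # read all keys
            return list(self._store)

        def delete(self, key):              # delete key/element pair
            self._store.pop(key, None)

        def clear(self):                    # delete all keys/elements
            self._store.clear()

        def dump(self, path):               # dump associative array to disk
            with open(path, "wb") as f:
                pickle.dump(self._store, f)

        def restore(self, path):            # restore associative array from disk
            with open(path, "rb") as f:
                self._store = pickle.load(f)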
 
 
I am currently writing such a tool set as a high-performance cache to avoid duplicate expensive calculations during fitting (key: flattened input parameters; element: calculated array).
 
However, I cannot easily write it in a truly generalized way, only as a version targeted at my specific datatype. I've done some casual testing and the variant attribute implementation is crazy fast for lookup and insertion. Somebody at NI really did a fantastic job, and it would be great to get more exposure for it.
 
Example performance (key size: 1200 bytes; element size: 4096 bytes; 10000 elements):
insert: ~60 microseconds
random lookup: ~12 microseconds
(compare with a random lookup using linear search (search array): 10 ms average - 1000x slower!)
 
Thanks! 

 

I rediscovered a kind of annoying behavior with multiplot XY-Graphs which appears to have existed since LabVIEW's emergence. Say I want to use the XY-Graph like a curve tracer, adding XY data pairs to an existing multiplot XY-Graph by passing the array of array clusters around in loop shift registers and filling in new data as it's produced. This all works well, BUT when adding multiple plots in different colors, consecutive plots are always drawn underneath the already existing ones instead of the other way around. I think new plots should be drawn over already existing ones, consistent with the sequence shown in the plot legend.
 
If your new plot has the same or similar values to the previous one, the second plot will be hidden underneath the first. You can only see the second one where the values differ. It's counterintuitive, I think, and I wonder why it has always been like that.
 
I plead for an XY-Graph option, along with a property node, to change this so that new plots cover old ones where they intersect. The legend sequence should not be changed, because the top-to-bottom order is correct.
 
A demo VI of the misbehavior is attached.
 
I hope I get enough support for this to be done. 

Urs

Urs Lauterburg
Physics demonstrator
Physikalisches Institut
University of Bern
Switzerland

Classes in LabVIEW are a great step forward (and finally, with LV 2009, they start to work...) but there are still two 'holes':

 

Abstract methods. 

It would be great to have the possibility to define abstract methods and interfaces. Right now I force an error into the error out indicator to flag the use of a method that is not yet defined, but it would be better to make the compiler recognize the use of abstract methods at design time. One way to define abstract methods could be a new entry in the class menu, allowing them to be defined in terms of the front panel only (block diagram not available).
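For comparison, here is what design-time enforcement looks like in a textual language (a Python analogy; the class names are made up):

    from abc import ABC, abstractmethod

    class Instrument(ABC):
        @abstractmethod
        def measure(self):
            pass                    # abstract: only the 'front panel' (signature) exists

    class Voltmeter(Instrument):
        def measure(self):          # concrete override required before use
            return 3.3

    print(Voltmeter().measure())    # fine
    # Instrument()                  # rejected up front ("can't instantiate abstract
                                    # class"), instead of an error cluster at run time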

 

Class duplication.

An object is duplicated at each node, so it is not easy to work on the same instance in parallel loops. To use the same instance I have used references, but they are not as easy to use as 'normal' wires, and they do not have the same performance (working with references is heavier than working with instances). It would be great to introduce a mechanism that converts between instance and reference, something like standard 'getReference' and 'getInstance' functions.

If I create an application and start it for the first time, an ini file is created with default values, and I, at least, don't know where these default values come from. There should be a transparent way to configure the default ini options of an application. I think a new page in the Application Builder, which presents all possible ini parameters, is the way to go.
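In textual form, the idea is simply that the defaults live in one visible, editable place and the file is generated from them (a sketch; the keys and values here are illustrative assumptions, not a real token list):

    import configparser, os

    DEFAULTS = {"MyApp": {"SaveFloaterLocations": "True", "HideRootWindow": "False"}}

    def ensure_ini(path):
        if not os.path.exists(path):        # first start: write the known defaults
            cfg = configparser.ConfigParser()
            cfg.read_dict(DEFAULTS)
            with open(path, "w") as f:
                cfg.write(f)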

 

With this new feature I could share my exe without the ini file, too.

If I want to convert several Booleans to the corresponding (binary) number, I have to build an array and use the function "Boolean Array to Number". Why not create a function that accepts several Booleans (the number of inputs expandable like "Compound Arithmetic", starting with the LSB) and returns the corresponding number?

 

And while you are at it, you could create a "reverse" function, which takes a number as input and returns the bits as Booleans (several outputs, starting with the LSB, expandable like "Compound Arithmetic").
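A sketch of the proposed pair of functions (the names are hypothetical), with the LSB-first ordering described above:

    def booleans_to_number(*bits):
        # the first input is the LSB, as with Boolean Array to Number
        return sum(1 << i for i, b in enumerate(bits) if b)

    def number_to_booleans(value, n_bits):
        # the inverse: n_bits Boolean outputs, LSB first
        return [bool((value >> i) & 1) for i in range(n_bits)]

    print(booleans_to_number(True, False, True))   # -> 5
    print(number_to_booleans(5, 3))                # -> [True, False, True]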

I want to be able to work on STABLE versions of LV.

 

The last great stable version I remember was 6.1 (I never had 7.1).

 

2009 and 8.5.1 were not bad but please give us a feature-fixed long-term support version of LabVIEW.

 

For anyone unfamiliar with the idea, many Linux distributions offer the same. Here's a link to the Ubuntu webpage outlining THEIR LTS strategy.

 

Shane.

Add CUDA support for analysis, computation and vision.

I would like LabVIEW to allow my class methods to accept DVRs at the class input and handle all the locking, error handling, etc. behind the scenes.

I understand the consequences of using a DVR in this way (performance, race conditions etc...) but as a user, I still want the option. 
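In a textual language, the "locking behind the scenes" part might look something like this (a Python analogy; the names are made up, and this is a sketch of the idea, not LabVIEW's implementation):

    import threading

    class DVR:
        def __init__(self, value):
            self._value = value
            self._lock = threading.Lock()

        def modify(self, method, *args):
            # acquire, operate in place, release - all hidden from the caller,
            # the way an In Place Element structure wraps DVR access today
            with self._lock:
                return method(self._value, *args)

    # usage sketch:
    d = DVR([1, 2, 3])
    d.modify(list.append, 4)        # serialized mutation of the shared value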

There has been a lot of talk of this subject on the forums, but I can't find a posted idea on this.

Cheers

JG 

(Attached image: dvr.png)

How about adding a feature to a few of the LV primitives such that an error cluster can optionally be added to catch some common issues that right now require special checking: for example, Divide returning an error as well as NaN on a divide by zero, or Index Array where the index is < 0 or >= the array size. The error cluster would allow easier handling of these cases WHEN DESIRED, by wiring directly to a case structure, etc.
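A sketch of the requested behavior, in textual form (the shape of the error cluster and the code value are placeholders):

    def divide_with_error(x, y):
        if y == 0:
            return float("nan"), (True, -1, "Divide: divisor is zero")  # status, code, source
        return x / y, (False, 0, "")

    result, (status, code, source) = divide_with_error(1.0, 0.0)
    if status:          # wire straight into a case structure - only when desired
        print("handling:", source)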

(This idea evolved from a discussion at the CLA Summit during a talk by Nate Meohring.)

 

DaveT

My apologies if this has already been suggested ...  I've written many VIs with a While loop that has True wired to the Stop terminal, making it a "Do Once" loop.  This is such a common construct (i.e. a VIG) that it might merit its own "structure", something that "looks like" a While loop but has no Stop terminal, no "i" indicator, and is guaranteed to "Run Once".  I think having a unique "look" for this common special use of the While loop would be a useful addition.  Among other things, it would clearly distinguish "purpose" as different than a While construct.

 

Bob Schor

I'm using Key Down? and Key Up events to turn the keyboard into a series of switches for a behavioral experiment.  For example, I want the user to push down Caps Lock with the left hand, Return with the right, then use the appropriate hand to do a specified task.  By monitoring Key Down and Key Up events, I can capture the timing of the user's "button sequences" (to the accuracy of the Windows clock).

 

Key Down? provides three indicators of what key is pressed -- Char, which is an I16 representation of the ASCII character (and hence can easily be "converted" into a string one could test, e.g. is this "A"?); VKey, an enum saying whether the key is an ASCII character or a "special" key (such as Caps or Return); and ScanCode, another I16 that corresponds (somehow) to each key on the keyboard.  There are also Boolean indicators that can tell you if Ctrl, Shift, or Alt are simultaneously pressed.  Of these, the least "transparent" is ScanCode, as there is no obvious way (other than placing a comment in your code) to know that 58 corresponds to CapsLock.

 

Unfortunately, Key Up only provides ScanCode!  So while I can write "nice" code that can more-or-less self-document that I'm testing for Caps Lock or Return (simply wire VKey to a Case statement, which will allow me to have a case labelled "Caps", pretty obvious, no?), I don't have such functionality with the Key Up event.

 

Suggestion -- add Char and VKey inputs to the Key Up event!  This will make it "symmetrical" with respect to Key Down?, and will enable producing Key Up code that doesn't need to rely on "magic numbers" (Scan Codes) for specific keys.
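Until then, the workaround is a hand-maintained lookup from scan code to key name, sketched here textually (58 is the CapsLock code mentioned above; the Return code is an assumption, and scan codes can vary by platform and keyboard):

    SCANCODE_TO_NAME = {58: "CapsLock", 28: "Return"}

    def key_up_label(scan_code):
        # replaces bare magic numbers in the Key Up handler
        return SCANCODE_TO_NAME.get(scan_code, "Unknown(%d)" % scan_code)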

 

Bob Schor

It would be helpful if I could programmatically retrieve the compile/build date of an EXE created with LabVIEW through a property (maybe invoke?) node. Of course, this would be meaningless during development... Even if it is not an option to retrieve this programmatically with a property node, perhaps it can be something set when the EXE is built and retrieved/displayed when the EXE runs?

 

I show the date the application was built in an onscreen indicator, along with the revision number of the software, to indicate revision and build date. Right now I have to change this manually every time. I don't believe retrieving the "modified date" from "File/Directory Info.vi" is safe enough, as those file parameters can change.
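One automation pattern in the meantime (a sketch, not a built-in feature): have a pre-build step stamp the date into a resource the UI reads, so nothing is edited by hand (the path and names are hypothetical):

    import datetime

    def stamp_build_date(path="build_info.txt"):
        # run by the pre-build action; the application displays the file's contents
        with open(path, "w") as f:
            f.write(datetime.date.today().isoformat())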

 

I haven't been able to find a way to automate this, so I'm suggesting it here. Obviously, whenever I am doing something manually over and over again, that is a good thing to automate. :)

One of the recent great additions to LV (for me at least) was the parallel for-loop.  This is a great and quick way to essentially generate a worker pool.

 

The only drawback I see is that we need to know the total number of iterations in advance, which is mostly OK, but not always.

 

By allowing essentially the same functionality for a While loop, we could spawn parallel processing workers for an indefinite period.  For operations which can be spread over multiple cores and whose execution order is not important (or whose order can be re-created afterwards), this could be very tasty indeed.

 

An application I have is the processing of data which arrives via Queue and whose output can also be sent by Queue (as part of the incoming queue data).  I don't care which worker processes which queue entry, as long as they run in parallel and the output goes to the right place.
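Textually, the pattern looks like this: a pool of workers pulling from an input queue and pushing to an output queue, with no iteration count known in advance (a Python sketch; the per-item work function is a stand-in):

    import queue, threading

    def process(item):
        return item * item                  # stand-in for the real per-item work

    def worker(in_q, out_q):
        while True:
            item = in_q.get()
            if item is None:                # sentinel: shut this worker down
                break
            out_q.put((item, process(item)))

    in_q, out_q = queue.Queue(), queue.Queue()
    workers = [threading.Thread(target=worker, args=(in_q, out_q)) for _ in range(4)]
    for w in workers:
        w.start()
    for x in range(10):
        in_q.put(x)                         # work items, order of completion unknown
    for _ in workers:
        in_q.put(None)
    for w in workers:
        w.join()                            # results carry their item for re-ordering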

 

I abuse this a bit at the moment, as can be seen in THIS post.  That approach is limited by the necessity of knowing the total number of loop iterations in advance.

 

Shane.

I would like to be able to create executables that don’t require the runtime engine in LabVIEW. Perhaps a palette of basic functions that can compiled without the runtime engine and an option in the application builder for that. I routinely get executables from programmers that don’t require a runtime installation. I just put it on my desktop and it runs. It would be nice that if I get a request, I could create, build, and send them an exe in an email without worrying about runtime engine versions, transferring large installer files to them, etc.

One of the quickest (and easiest to forget) methods to improve performance of your VI is to disable debugging.  It requires a trip down the VI properties to get there, however.  I propose adding a button on the BD alongside the other debugging buttons to Disable and Enable Debugging.  

Is there any reason why backwards compatibility couldn't be added to the LabVIEW datalog access VIs? 

 

Why does a LabVIEW 8.x (or 7, or 6, or 5, or 4, or 3) datalog file NEED to be converted to LabVIEW 2009 datalog format?

 Shouldn't it be optional?

 

 

I have around 8 terabytes of data in LabVIEW 7 and 8 formats.  Switching to version 2009 will be very painful (as was the switch to other versions), in that it requires me to convert each datalog file.

 

- Yes, I know about SilentDatalogConvert, and yes, I could write a simple program to churn through all my data files. But that much data would take weeks or months of continuous chugging - it seems silly.

 

I'd even settle for backwards compatibility with caveats - read-only for example.

 

The Cluster type of my datalog file hasn't changed in 15 years. Maybe do a cluster type check first to determine if the format really requires an update, but even then, allow access without conversion.

I have noticed that Property Nodes for certain numerics do not properly store Range Properties as the native datatype of the control, but rather store them as DBLs. I have attached a screenshot to illustrate my point:

 

(Attached image: PropNodeDatatype.png)

 

Notice the first Numeric's datatype is a DBL, and therefore you see no coercion dots for the Range Properties or the Value Property. The bottom two examples show I32 numerics, which have the proper datatype for the Value Property, but have the wrong DBL datatype for the Range Properties.

 

I propose that the Range Properties should reflect the datatype of the Numeric, just as the Value Property does!

 

(This may affect other Property/Invoke Nodes besides Range... please list any other datatype inconsistencies you may find in the comments.)