LabVIEW Idea Exchange


I often employ a design pattern of "for each item in a list" where the list is a typedef'd enumerated type.  This is a great way of making a linear sequence where each item is tested one time (test sequences, list checking .....).  I used to make an array of the type and populate it with one of each item, but this is not very scalable, since when I change the typedef I have to redo the list.  Instead I do a little type manipulation to make sure that I iterate one time for each item in my enum, but this requires some support code and extra wiring.  It would be nice if I could make a For Loop that takes the enumerated type and uses it as its iterator.  See below:

 

 

 

 

For each enum loop.jpg

A new For Loop that takes an enum would greatly improve code readability.

I use this all the time and find it works great: update my enum and handle the new case, with no chance of missing a case (I use it in conjunction with case structures with no default, so my code breaks and forces me to handle the new case).

 

I have attached the code if anyone is interested.
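LabVIEW is graphical, so text code can only be an analogue, but the pattern maps cleanly onto Python's `enum` module: the loop iterates once per enum value, and adding a value to the typedef automatically extends the loop. The `TestStep` names below are made up for illustration.

```python
from enum import Enum

# Hypothetical test-step enum, standing in for the typedef'd LabVIEW enum.
class TestStep(Enum):
    POWER_ON = 0
    SELF_TEST = 1
    MEASURE = 2
    POWER_OFF = 3

def run_all_steps(handler):
    """Iterate exactly once over every enum value, in definition order."""
    results = []
    for step in TestStep:  # adding a value to TestStep automatically extends this loop
        results.append(handler(step))
    return results

results = run_all_steps(lambda step: step.name)
```

Pairing the iteration with a handler that has no default case gives the same safety property described above: a new enum value forces you to handle a new case.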

 

 

None of the existing synchronization VIs works for "instantaneous" many-to-one notification (or do they?)

  • Notifier is one-to-one or one-to-many
  • Rendezvous is many-to-many
  • Occurrences are one-to-many
  • Semaphore handles a different sync issue!

 

In many cases, one VI may need to wait for several other VIs before performing a task; a simple example is on closing.

Even more specifically, other VIs could do the "notification" and continue, but the one VI must wait until those VIs have met (and passed) the "notification" point.

 

Although the mechanism can be built easily, it could be defined by NI as a standard VI.
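As a sketch of how that mechanism could behave (in Python for illustration, since LabVIEW has no text form; the class name is made up): each worker notifies and continues without blocking, while the single waiter blocks until all notifications have arrived.

```python
import threading

class ManyToOneNotifier:
    """Workers call notify() and continue; one waiter blocks until n notifications arrive."""
    def __init__(self, n):
        self._n = n
        self._count = 0
        self._cond = threading.Condition()

    def notify(self):
        # Non-blocking for the worker: increment the count and move on.
        with self._cond:
            self._count += 1
            self._cond.notify_all()

    def wait_all(self, timeout=None):
        # Blocks the one waiter until all n workers have passed the notification point.
        with self._cond:
            return self._cond.wait_for(lambda: self._count >= self._n, timeout)

gate = ManyToOneNotifier(3)
workers = [threading.Thread(target=gate.notify) for _ in range(3)]
for t in workers:
    t.start()
ok = gate.wait_all(timeout=5.0)
for t in workers:
    t.join()
```

Note this is deliberately not a barrier: a barrier would also block the workers at the meeting point, whereas here only the waiter blocks.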

 

Sia

When I need to pass data from one loop to another, I need to create an indicator and local variables, which create copies in memory for each variable. So we have at least three copies in memory of the same data: one for the wire itself, one for the indicator, and one for the local variable. Why is there no kind of pointer that LabVIEW could handle under the hood using the wire's memory address? Imagine something like this. Then we wouldn't need to use a hidden indicator only for data passing.

pointer.PNG

This example doesn't look very clever, but you can see what I mean. 
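For comparison, this is how reference semantics look in a textual language (a Python sketch, not LabVIEW): handing an object from one loop to another through a queue passes a reference, so no copies of the payload are made.

```python
import queue
import threading

# Putting an object on a Queue passes a reference -- no copy of the data is made.
# This is the role a "pointer wire" could play between two LabVIEW loops.
channel = queue.Queue()
payload = list(range(1000))  # some large data produced in loop A

def producer():
    channel.put(payload)     # hands over a reference, not a duplicate buffer

def consumer(out):
    data = channel.get()
    out["same_object"] = data is payload  # True: both loops see one copy in memory

result = {}
t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer, args=(result,))
t1.start(); t2.start()
t1.join(); t2.join()
```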

For modern application development we need better methods to detect whether our application was invoked to open a file.

Currently the only help NI gives is based on command-line parameters, and we need to jump through hoops to react to opening a file when the program is already running.

 

This is a major showstopper in creating professional applications.

 

LabVIEW 8.2 had a hidden event for getting this event, which unfortunately doesn't work in later versions.

 

 

LabVIEW loops are quite flexible and allow me to do just about anything I want, but I often find myself writing the same code over and over again trying to iterate across different data types and data structures. There are several situations where smarter looping constructs could greatly simplify my code. One simple example is stepping by a delta between a minimum and maximum value. Currently, I have to calculate the number of iterations required ahead of time (for a for loop) or do a comparison with the maximum (with a while loop) and use shift registers to maintain the intermediate value. I'd like to be able to wire the max, min, and delta to my loop and have LabVIEW do the required calculations for me. The iteration terminal could also adapt to the proper data type given the input parameters. Perhaps the iteration terminal would have two outputs, one with the current iteration count and the other with the proper iteration value.

 

stepped-loop.PNG 
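The proposed stepped loop can be sketched in text form (Python standing in for the graphical loop; the generator name `stepped_range` is made up): the loop computes the iteration count internally from max, min, and delta, and exposes both the iteration index and the stepped value, just like the two-output iteration terminal described above.

```python
def stepped_range(minimum, maximum, delta):
    """Yield (iteration_index, value) from minimum to maximum inclusive, stepping by delta."""
    # Compute the iteration count up front, as the proposed loop would do internally.
    count = int(round((maximum - minimum) / delta)) + 1
    for i in range(count):
        yield i, minimum + i * delta

pairs = list(stepped_range(0.0, 1.0, 0.25))
# pairs -> [(0, 0.0), (1, 0.25), (2, 0.5), (3, 0.75), (4, 1.0)]
```

Computing each value as `minimum + i * delta` (rather than accumulating in a shift register) also avoids compounding floating-point error across iterations.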

 

Another useful feature would be allowing me to wire a queue reference to the loop count terminal of the for loop and having it automatically pop each value from the queue and feed it to me through the iteration terminal. It would do this until there are no values left in the queue or until the code stops the loop. One could write an algorithm that pushes new points into the queue from within the loop or push the current value back onto the queue for later processing.

 

I'm sure there are other useful iteration strategies that are fairly common; please share them with the community.
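The queue-driven loop above can also be sketched in text form (a Python analogue, with made-up names): the loop pops values until the queue is empty, and the body is free to push new points back for later passes.

```python
from collections import deque

def drain(work_queue, process):
    """Loop until the queue is empty; process() may push new items back for later passes."""
    results = []
    while work_queue:
        item = work_queue.popleft()          # the loop pops each value for the body
        results.append(process(item, work_queue))
    return results

# Example: each item under 4 enqueues its successor, the way an algorithm
# might push newly discovered points from within the loop.
q = deque([1])
out = drain(q, lambda x, wq: (wq.append(x + 1) if x < 4 else None) or x)
# out -> [1, 2, 3, 4]
```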

I give an example to explain what I mean:

1.) Create a big array, e.g. a 200 x 200 x 200 3D array.

2.) Create a sub-VI with a 3D-array input and output. Inside the VI, use the replace element function to replace one element of your choice.

3.) Connect the big array with your sub-VI and check the timing.

With LV 8.5 I get about 200 ms for running the sub-VI. Checking the replace element function itself gives a delta t of 0 ms! It looks like the output of the sub-VI has to allocate new memory for the array, even though I do not change the size of the array.

 

It would be cool if I could create sub-VIs that reuse pre-allocated memory for their inputs and outputs, allowing an effect similar to the In Place Element structure. As for how to configure this in LabVIEW, you can be creative 😉 Perhaps do this with the connector pane.
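The difference between the two behaviors can be shown in a few lines of Python (an analogue only; the function name is made up): an in-place sub-VI would write into the caller's existing buffer instead of producing a fresh copy of the whole array.

```python
def replace_in_place(buffer, index, value):
    """Write into the caller's existing buffer instead of building a modified copy."""
    buffer[index] = value   # mutates the existing storage
    return buffer           # same object: no reallocation of the whole array

big = [0.0] * 1_000_000          # stand-in for the large 3D array
out = replace_in_place(big, 5, 42.0)
# "out is big" is True: the buffer was reused, not copied
```

This is exactly the effect measured above: replacing one element is essentially free, while allocating and copying a 200 x 200 x 200 array of doubles (about 64 MB) dominates the 200 ms.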

 

Regards,

Marc

When dropping a probe on a string allow the probe to be resized. In addition, allow the user to select the display format (hex, ASCII,  or codes).
When a string is the input to a case structure's selector tunnel allow selector labels to contain regular expressions.

  Why?

- To implement OO design patterns.
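As a sketch of how regex selector labels could behave (Python for illustration; the helper name `regex_case` is made up): the first case whose pattern matches the whole selector string wins, with a default case when nothing matches.

```python
import re

# Hypothetical regex-based "case structure": first pattern that fully matches
# the selector string wins; the default handler runs when nothing matches.
def regex_case(selector, cases, default):
    for pattern, handler in cases:
        if re.fullmatch(pattern, selector):
            return handler(selector)
    return default(selector)

cases = [
    (r"ERR\d+", lambda s: "error code"),
    (r"[A-Z]+", lambda s: "keyword"),
]
kind = regex_case("ERR404", cases, lambda s: "unmatched")
# kind -> "error code"
```

Case order matters here, just as it would for overlapping selector labels: "ERR404" also starts with capital letters, so the more specific pattern must come first.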

We are getting into trouble with all these run-time engine updates! Every time a new release or service pack comes out, we have to create a new installer with the new run-time engine and send it out to all our customers' machines. It is not convenient to develop in 20 different versions of LabVIEW, and we like to keep our executables and updates recent.

 

Our instruments run on XPe with very little extra room for an additional RTE every 6 months. Asking the customer to uninstall old RTEs is painful, as they are not supposed to go that deep into our XPe build. 

 

I would like to see a modularized run-time engine where we don't need to update the whole thing every release. I know with .NET, updates are only necessary in 3-5 year increments. That would be much more acceptable IMHO 🙂

 

Having read THIS thread I realised that nearly the only time I ever use a sequence structure is to time code.

 

Then I thought: why shouldn't the Sequence Structure have a (selectable) timing output node, so that we don't have to do this manually every time we want to test code speed?

 

Then I thought a bit further: why not have this for ALL structures? It's such a useful tool that it could be used for For Loops and While Loops too.

 

Timing for structures.PNG
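In text form, the idea corresponds to wrapping a block of code so that its elapsed time falls out automatically (a Python sketch; the `timed` helper is made up):

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(results, label):
    """Wrap a block of code and record its elapsed time,
    like a structure with a built-in timing output node."""
    start = time.perf_counter()
    try:
        yield
    finally:
        results[label] = time.perf_counter() - start

timings = {}
with timed(timings, "busy_loop"):
    total = sum(range(100_000))
# timings["busy_loop"] now holds the elapsed seconds for the block
```

The `finally` ensures the timing output is produced even if the block errors out, which the manual tick-count-before-and-after pattern does not guarantee.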

Shane.

Whilst pondering Chris Relf's presentation on LVOOP-based error handling for NI Week, it occurred to me that it would be really nice if it were possible to define LVOOP methods that would be called automatically when LabVIEW needed to convert a class to a different (e.g. built-in) type.

For example, imagine I have a class for handling errors - the obvious thing to do is to wire that class wire into a case structure selector and provide cases for error/no error. Of course that won't work, but if I could define a method that takes my class in and outputs an error cluster and was somehow appropriately marked so that LV would automatically call it, then the case structure could automatically do 'the right thing'.

The inverse functions that build a class from another type would be a form of constructor for the class that might be useful - e.g. I have a class that represents a measurement, wiring a path control into the class terminal causes LabVIEW to run a method that I have defined and appropriately marked that loads the measurement from disc and initialises the class.

There are a few problems I can immediately see - if one defines two possible conversion methods (e.g. class to error cluster and class to enum) one needs to have a way of defining which one is used (right click on terminal of node would seem the most obvious to me) and you'd probably want a specially covered coercion node as well....
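Other languages expose this idea as specially named conversion hooks that the runtime calls implicitly. A minimal Python sketch of the error-cluster example (the class and names are made up): defining `__bool__` is the marked conversion method, and an `if`/`else` plays the role of the two-case structure.

```python
# The analogue of a "marked" conversion method in Python is a dunder hook:
# defining __bool__ lets an error object drive an if/else directly, much like
# wiring a class into a case structure and having the conversion run automatically.
class ErrorState:
    def __init__(self, code=0, message=""):
        self.code = code
        self.message = message

    def __bool__(self):
        # Conversion method: "truthy" means an error is present.
        return self.code != 0

def describe(state):
    return "error" if state else "no error"

a = describe(ErrorState())           # no error case
b = describe(ErrorState(5000, "x"))  # error case
```

The ambiguity noted above shows up here too: Python resolves it by giving each target type its own distinct hook name (`__bool__`, `__int__`, `__str__`, ...), rather than by right-clicking a terminal.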

I often find myself having to sort the rows of a 2D array based on one of the columns.  It would be nice to have a native function for that.  Currently I turn each row into a cluster, sort the array of clusters, then convert back to a 2D array -- way too much overhead and wiring time. 

 

Instead, I envision something like Matlab's sortrows or Excel's expand selection.  

Inputs would be a 2D array, a column index, and an ascending/descending selector. Output would be the 2D array with rows reordered such that the specified column is sorted.
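The proposed function is a one-liner in textual languages, which illustrates how thin the native primitive would be (a Python sketch; `sort_rows` is a made-up name matching the inputs described above):

```python
def sort_rows(array_2d, column, descending=False):
    """Reorder whole rows of a 2D array so the given column is sorted.
    Inputs mirror the proposal: the array, a column index, and a direction flag."""
    return sorted(array_2d, key=lambda row: row[column], reverse=descending)

data = [[3, "c"], [1, "a"], [2, "b"]]
by_first = sort_rows(data, 0)                    # ascending by column 0
by_first_desc = sort_rows(data, 0, descending=True)
```

Because the sort key is the whole column value, rows stay intact; ties keep their original relative order (a stable sort), which is usually what you want when sorting tabular data.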

(Similar title here.  Different concept.)

 

Many OO languages provide an Interface construct that allows objects in unrelated class hierarchies to expose similar functionality that can be called via common methods.  For example, a Car class and a Race class might both implement a Start method.  Unless their common ancestor has a Start method (which may not be practical) there's no way for me to take advantage of dynamic dispatching.  Interfaces allow an object to temporarily disguise itself as a different object so I can use a single vi on multiple objects.

 

Using the Car/Race example, I would like to be able to create an IStartable interface construct that defines a Start method.  Car and Race, in implementing IStartable, create their own Start methods that will override IStartable.Start.  I could then transform the Car and Race objects into an IStartable type and wire that into IStartable.Start.  IStartable.Start would then automatically transform each object back into its original type and call its member Start vi.

 

A couple points:

  • Classes need to be able to implement multiple interfaces.
  • There needs to be a way for class designers to indicate which interfaces a class implements.  Setting that up in the class property dialog box seems like the best solution.
  • There also needs to be a way for class designers to designate which member vis to use when overriding an Interface method.  I can't think of a good way to do this in the vi itself, so I'd also manage these settings in the class properties dialog.
  • The illustration uses a Type Cast prim even though the actual type is not being changed.  It would need a new prim.
  • I've toyed with the idea of hiding interfaces on the bd by having an 'Invoke Interface Method' node that would automatically do the transformation and call back into the class' overriding method, but I think that restricts developers too much.

I'm finding that often my classes have some sort of dynamic private data that needs to be initialized before the object will work correctly (queues, user events, etc.).  Currently I have to implement some sort of initialization VI that the class user must call every time an object is created.  If the user forgets to do this, LabVIEW raises an invalid refnum error.  There are workarounds, such as wrapping the class in a .lvlib, using class factories, or checking the queue refnum in every class sub-VI.  However, they are workarounds that require extra coding and add complexity.

 

I'd like to have the ability to define a private class constructor that fires behind the scenes every time an object constant or control returns the default value during execution.  With this ability I can be certain the object's dynamic resources have been allocated correctly and it simplifies the api for class users.
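This is exactly what constructors give you in other object systems. A Python sketch (the `Messenger` class is made up): the dynamic resource is allocated the moment the object comes into existence, so there is no separate init step for the user to forget.

```python
import queue

class Messenger:
    """Private dynamic resources are allocated in the constructor,
    so every object is valid the moment it exists."""
    def __init__(self):
        self._queue = queue.Queue()   # no separate "init VI" for the user to forget

    def send(self, item):
        self._queue.put(item)

    def receive(self):
        return self._queue.get_nowait()

m = Messenger()
m.send("hello")
msg = m.receive()
```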

I spent some time the other day troubleshooting a problem and couldn't for the life of me figure out why my program wasn't working. By chance, the guy who'd installed the run-time engine on the computer I was installing to had left the executable on the desktop. He had used the minimum install that's normally for remote front panels. It would be great if there were a warning when you try to run a program that needs the full run-time install but only the minimum is installed. Most of the program worked (a surprising amount, actually), but certain features didn't. The features that didn't work (like TDMS logging) wouldn't throw an error either, even with debugging enabled. If there had been a pop-up warning that the program wouldn't work completely with the current run-time engine, it would have saved a couple of hours of combined time between me and the other engineers trying to run a test.

I always use "Find All Instances" to search for where a subVI / control is used.

 

When looking into code that is locked (read-only because it is under source control), this option is missing!

  • Now I always have to press Ctrl-M to Unlock, then press change on the next dialog
    ctrl+m.PNG
    before the "Find All Instances" becomes active
  • Or open the VI Hierarchy (could take a while on a large project)
    and here the normal right-click "Find All Instances" is not blocked!!

 

I don't think it should be a big problem to enable the right-click menu on a locked VI...

Traditional IVI drivers haven't worked. The industry has been waiting for this for too long.

 

Best regards, Pavan

I don't know how many times I've added a Case Structure post-programming, but I do know that there isn't an easy way to make a tunnel the case selector.  Usually I delete the tunnel, drag the case selector down, and then rewire; there should be an easier way.  For Loops and While Loops have an easy way to index/unindex or replace with a shift register, so why can't a Case Structure be the same?

Case2.PNG 

Case3.PNG 

LabVIEW Control and Simulation Design Toolkit

 

-A simulation subVI fails to compile if an initial condition of zero exists on an integration block.

 

 IDEA: Run a zero-initial-condition check on the system before attempting to compile.  If the condition exists, don't attempt to compile, and report the problem to the user.
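Such a pre-check could be a simple scan over the model before any compile is attempted. A hypothetical Python sketch (the block-list representation and field names are assumptions, not the toolkit's actual model format):

```python
# Hypothetical pre-compile check: scan a model description (here, a list of
# block dicts -- an assumed representation) and flag integrators whose
# initial condition is zero, before attempting to compile.
def check_initial_conditions(blocks):
    problems = []
    for block in blocks:
        if block.get("type") == "integrator" and block.get("initial_condition") == 0:
            problems.append(f"Block '{block['name']}': zero initial condition")
    return problems

model = [
    {"name": "int1", "type": "integrator", "initial_condition": 0},
    {"name": "gain1", "type": "gain"},
]
issues = check_initial_conditions(model)
# issues -> ["Block 'int1': zero initial condition"]
```

If `issues` is non-empty, the compile step is skipped and the list is reported to the user, as the idea describes.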

 

 

 

-A simulation subVI fails to compile if a change is made after the subVI is created.

 

IDEA: If a change is detected in the subVI, force the subVI to recompile every time a file command is implemented.