LabVIEW Idea Exchange

Hi, I wanted to suggest the creation of a separate utility that would convert a VI from any version to any other version. This would save people a lot of time, since they would not have to wait for someone to convert it in their respective threads. It would also allow more people to reply on the forum (me included, since I am using LabVIEW 8.6 and most of the posts contain VIs made in 2009 and 2010, even though most of the time the same functions are available in 8.6 😞).

PS: Sorry, got no pictures

 

I would like to see the Join and Split Numbers functions become expandable and polymorphic. I'm not arguing big versus little; just accept that there are two Endian worlds and work with them. Have you ever joined two or four numbers from a data stream in Little Endian? You have to change the order and cross wires, as shown in figure (1).

Join Numbers Figure 1.GIF
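
For readers who think in text code, the crossed wires in figure (1) boil down to something like this C sketch (the helper names are made up purely for illustration):

#include <stdint.h>

/* Big Endian: the first byte from the stream is the high byte, which is
   what LabVIEW's Join Numbers already does. */
static uint16_t join_u16_big(uint8_t b0, uint8_t b1)
{
    return (uint16_t)(((uint16_t)b0 << 8) | b1);
}

/* Little Endian: the first byte from the stream is the low byte, so the
   inputs have to be swapped first - the "crossed wires" on the diagram. */
static uint16_t join_u16_little(uint8_t b0, uint8_t b1)
{
    return (uint16_t)(((uint16_t)b1 << 8) | b0);
}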

This is not that clean, and it gets worse when you need to split numbers and send them back to a device. Because I join and split a lot of numbers, I created a library of VIs that are clean on the diagram and visually indicate Big vs. Little Endian. A simple arrow works for me to indicate Big vs. Little Endian, as shown in figure (2). This library is also attached.

Join Numbers Figure 2.GIF

I know that my Join and Split two-numbers VIs in Big Endian are the same as the native LabVIEW functions; I keep them because they provide visual consistency on my block diagrams. An example of block diagram code that shows the difference between the Big and Little Endian forms is shown below in figure (3).

Join Numbers Figure 3.GIF

Here is what I would like to see National Instruments create: make the Join and Split Numbers functions expandable and polymorphic. The words are only going to get bigger, and there will always be two Endian worlds. Make the Join and Split Numbers VIs expand like the Build Array function: click and drag, possibly in groups of two, i.e. 2, 4, 6, and 8 inputs (or outputs for the Split Numbers VI). Of course, the output data type would correspond to the number of input connections. The polymorphic examples are shown below in figure (4).

Join Numbers Figure 4.GIF

To take the polymorphic function one step further, it could include the data type. There are times when I need to join numbers and convert them to a signed integer or a double-precision float. A demonstration of the polymorphic data type is shown in figures (5) and (6), with before and after examples.

Join Numbers Figure 5.GIF
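
As a rough C sketch of the "join, then convert the data type" step: a 32-bit float is used here only to keep it short, the same pattern applies to signed integers or doubles, and Big Endian byte order is assumed.

#include <stdint.h>
#include <string.h>

/* Join four bytes and reinterpret the result as a single-precision float. */
static float join_f32_big(uint8_t b0, uint8_t b1, uint8_t b2, uint8_t b3)
{
    uint32_t u = ((uint32_t)b0 << 24) | ((uint32_t)b1 << 16) |
                 ((uint32_t)b2 << 8)  |  (uint32_t)b3;
    float f;
    memcpy(&f, &u, sizeof f);   /* reinterpret the bits without a cast */
    return f;
}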

Expanding the functionality of the Join and Split Numbers VIs would reduce block diagram clutter, increase coding speed, and maintain visual readability. What do you think?

 

I've encountered a programming situation where I may need to call 'Match Regular Expression' with a regex selected at run time, and that regex may have a variable number of submatches to return. Unfortunately, right now the submatch count is a compile-time decision based on how far I grow the node. I can grow the node to the maximum number of submatches I ever expect, and thankfully the node doesn't throw a run-time error if there are fewer, or more, submatch expressions in the regex. I'm building the individual returns into a string array for further processing, but it would be much more versatile if the node could return the submatches as a properly sized string array.
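
For comparison, this is how a text API can size the submatch buffer at run time. The sketch below uses POSIX regex in C purely as an analogy (the pattern and input are arbitrary); the point is that the number of submatches comes from the compiled expression itself, not from a compile-time decision.

#include <regex.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *pattern = "([0-9]+)-([0-9]+)";   /* chosen only for illustration */
    const char *input   = "42-17";

    regex_t re;
    if (regcomp(&re, pattern, REG_EXTENDED) != 0) return 1;

    size_t nmatch = re.re_nsub + 1;              /* +1 for the whole match */
    regmatch_t *m = malloc(nmatch * sizeof *m);
    if (m == NULL) { regfree(&re); return 1; }

    if (regexec(&re, input, nmatch, m, 0) == 0) {
        for (size_t i = 1; i < nmatch; i++)
            printf("submatch %zu: %.*s\n", i,
                   (int)(m[i].rm_eo - m[i].rm_so), input + m[i].rm_so);
    }

    free(m);
    regfree(&re);
    return 0;
}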

Hi,

 

Currently, if a VI is set to subroutine priority, you can only call subVIs within it that are also set to subroutine priority (to prevent priority inversion, I guess).

 

It would be great if it were also possible to use inlined subVIs inside subroutine VIs.

 

As inlining basically defeats a VI's priority setting, an inlined subVI would just "inherit" the subroutine priority of its caller. I configure many of my very small reuse VIs as inlined (most of those in the GPower Error & Warning toolset from v2.1 onwards, for instance), since they typically perform much better that way than as subroutines. But since they are configured as inlined, this effectively prevents them from being (re)used inside subroutine VIs.

 

Cheers,

Steen

I was recently trying to develop a function to navigate through a deeply nested directory structure and came across system path length limitations, which could potentially be addressed by a "change directory" function.

 

I realize I could use System Exec with cmd /c cd <path>, but I found this extremely slow.
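
For reference, an in-process "change directory" call is what makes this fast in text languages, since no shell is spawned at all. A minimal POSIX sketch follows (the path is just an example; on Windows the equivalent would be _chdir from <direct.h> or SetCurrentDirectory):

#include <stdio.h>
#include <unistd.h>   /* chdir */

int main(void)
{
    if (chdir("deeply/nested/relative/path") != 0) {   /* illustrative path */
        perror("chdir");
        return 1;
    }
    /* From here on, relative paths are resolved from the new directory,
       which keeps each individual path short. */
    return 0;
}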

In LabVIEW it is not possible to have an array of arrays. The closest you can get is a 2D array, but then every row must be the same size. You can have an array of clusters of arrays to get around this limitation. Many languages do support arrays of arrays.
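
A jagged "array of arrays" in a text language looks like the sketch below: each row carries its own length, which is essentially what the array-of-clusters-of-arrays workaround models in LabVIEW (the type and helper names are illustrative only).

#include <stdlib.h>

typedef struct {
    double *data;   /* this row's elements */
    size_t  len;    /* this row's own length */
} Row;

/* Allocate a jagged array whose rows can all have different sizes. */
static Row *make_jagged(const size_t *lengths, size_t rows)
{
    Row *r = malloc(rows * sizeof *r);
    if (r == NULL) return NULL;
    for (size_t i = 0; i < rows; i++) {
        r[i].len  = lengths[i];
        r[i].data = calloc(lengths[i], sizeof(double));
    }
    return r;
}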

Hi,

I want to start a discussion about how to enhance loop conditional terminals in LabVIEW. Generally, my idea is to have an easy way to monitor the conditional terminal of a user-defined "primary" loop. Under the hood there could be, for instance, a notification triggered from the "primary" loop, and one or more "slave" loops equipped with "Wait for Notification" (with timeout = 0) and a predefined logical operation on the terminal input.
So this allows you to have one STOP source loop and one or more listeners.
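
In text-code terms, the pattern is one writer of a stop condition and any number of readers, roughly like this C sketch with an atomic flag standing in for the notifier (thread counts and timings are arbitrary):

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static atomic_bool stop_requested = false;   /* shared "conditional terminal" */

static void *slave_loop(void *arg)
{
    while (!atomic_load(&stop_requested)) {
        /* do this listener loop's work, then poll the shared stop condition */
        usleep(10000);
    }
    printf("listener %ld stopped\n", (long)arg);
    return NULL;
}

int main(void)
{
    pthread_t t[2];
    for (long i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, slave_loop, (void *)i);

    sleep(1);                              /* the "primary" loop runs for a while */
    atomic_store(&stop_requested, true);   /* its stop condition fires once */

    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);
    return 0;
}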

 

sopping loops.png


Does anyone want to expand on this idea?

Create an XY Graph and feed it a time-stamped XY plot with a few hundred thousand points... and you have yourself a very sluggish and possibly crash-ready application. The regular graph can take a bit more data, but it still has its limits. Having 100k points to display is quite common (in my case it is most often months of 1-second data).

 

The idea could be formulated as just "improve how graphs handle large data sets"... but how that would be done depends a bit on what optimizations the graph code is open for. The most effective solution, however, would probably be to do what you currently have to write yourself: surrounding decimation logic.

 

So my suggestion is to add a built-in decimation feature that you can have applied automatically when needed (or when you say it is needed), possibly with a few different decimation methods (min-max, every Nth point, etc.). The automatic mode should be on by default, making the problem virtually invisible to the novice user.
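
For the curious, a min-max decimation pass is short to express in text code. This is only a sketch of the kind of logic the graph could run internally, not anything NI ships:

#include <stddef.h>

/* Reduce n_in samples to at most 2 * n_buckets samples: each bucket
   contributes its minimum and maximum, so peaks survive the reduction.
   'out' must have room for 2 * n_buckets elements. */
static size_t decimate_min_max(const double *in, size_t n_in,
                               size_t n_buckets, double *out)
{
    size_t n_out = 0;
    for (size_t b = 0; b < n_buckets; b++) {
        size_t start = b * n_in / n_buckets;
        size_t end   = (b + 1) * n_in / n_buckets;
        if (start >= end) continue;

        double lo = in[start], hi = in[start];
        for (size_t i = start + 1; i < end; i++) {
            if (in[i] < lo) lo = in[i];
            if (in[i] > hi) hi = in[i];
        }
        out[n_out++] = lo;
        out[n_out++] = hi;
    }
    return n_out;   /* number of samples actually written */
}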

 

A big advantage of doing it within the graph is that it will (or should) integrate fully with the graph's other features, like zooming and cursors.

Here is an idea that I have wanted to make happen for a while: the Asynchronous Call By Reference. Basically, it is just a Call By Reference, but split in half so that the call is not synchronous (blocking), and the return data can be obtained from multiple locations. Plus, there should be ways to check the status of the asynchronous call and to kill it. I've even implemented this, to some degree, here: http://forums.openg.org/index.php?showtopic=88
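
To make the "split in half" part concrete, here is a rough C sketch of the same idea using a worker thread: one function starts the call and returns immediately, and a second one collects the result later. The names and the 42.0 payload are invented for illustration; this is not how LabVIEW implements it.

#include <pthread.h>

typedef struct {
    pthread_t thread;
    double    result;
} AsyncCall;

static void *worker(void *arg)
{
    AsyncCall *c = arg;
    c->result = 42.0;           /* stand-in for the referenced VI's work */
    return NULL;
}

/* "Call" half: kicks the work off and does not block. */
static void async_call_start(AsyncCall *c)
{
    pthread_create(&c->thread, NULL, worker, c);
}

/* "Return" half: can be invoked later, possibly from another place in the code. */
static double async_call_collect(AsyncCall *c)
{
    pthread_join(c->thread, NULL);
    return c->result;
}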
 
 
Message Edited by Jim Kring on 06-10-2009 08:14 AM

 

The input value of the conditional terminal should "pass through" (like the case selector of the Case Structure):

xxxxx.png

Like this:

yyyyyy.png

There is a method for calling an .exe with parameters over the command line (http://digital.ni.com/public.nsf/allkb/17C3AD70493CE0208625666A00763364?OpenDocument).

But there seems to be no way to pass a value back to the cmd as a return value.

In C this would be expressed, instead of

void main(void)

as

int main(int argc, char *argv[])
{
    return x;   /* returns x as the exit code back to cmd */
}

or similar.

 

The only way to do this now is to write a C DLL wrapper around the exe and call that to get a custom return value. This is not very comfortable when you chose LabVIEW precisely in order not to have to write C.

Instead, this should be included in the build specification, just like the arguments. The data type and the data to be returned would be specified via property nodes, and the value would be passed back up to the cmd when the application finishes.

BuildSpec.png

This is handy when using Batch scripts.

This has been brought up long ago, but I think it deserves to be discussed here in the Idea Exchange.

 

There are situations where it might be beneficial if we could have a string datatype that has a defined length. Arrays of such strings would be stored flat in memory.

 

Applications would include:

  • Typecasting a long string to a string array where the element is fixed length would slice up the string into an array of equal length strings.
  • Reading a binary file as an array of fixed strings would do the same.
  • ...

 

The default value would be a string of the defined length filled with \00. Shorter inputs would get padded with \00.

Of course, certain operations would drop the length; e.g. when concatenating such strings, the length would be dropped from the result, turning it into a plain string.
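
In C terms, the proposal is essentially an array of fixed-size char buffers, which is naturally flat in memory. A small sketch of the "slice a long string into equal-length strings" behaviour (the length of 8 and the names are arbitrary):

#include <string.h>

#define STR_LEN 8

typedef struct {
    char bytes[STR_LEN];   /* padded with \00, no separate length field */
} FixedStr;

/* Slice a long buffer into equal-length strings, like the proposed typecast. */
static void slice(const char *src, size_t src_len, FixedStr *dst, size_t count)
{
    for (size_t i = 0; i < count; i++) {
        memset(dst[i].bytes, '\0', STR_LEN);          /* default: all \00 */
        size_t offset = i * STR_LEN;
        if (offset < src_len) {
            size_t n = src_len - offset;
            if (n > STR_LEN) n = STR_LEN;
            memcpy(dst[i].bytes, src + offset, n);    /* short tail keeps its \00 padding */
        }
    }
}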

By default, when you build an application, the front panels of all VIs are removed.

 

Of course, this would make applications completely unusable, so there are certain changes which cause LabVIEW to keep the front panel in, such as setting the front panel to open when the VI is called. This gives LabVIEW a clear indication that you want the front panel, and these changes are generally enough for most of the VIs people use.

 

There are, however, times when you want a VI to keep its panel even though it is a VI which usually (or sometimes even never) won't be displayed. Today there are several ways to handle this case.

 

 

The official way is too flimsy:

 

Keep_FP_AB.png

 

 

It's limited to specific build specs and it's too easy to break (e.g. remove the VI from the project and add it back).

The automatic way relies on additional changes you're likely to make if you intend to show the VI, but it is also too easy to break:

 

Keep_FP_Property.png

 

Someone could just unset the property and then it's gone.

What I usually do today is something like this:

 

Keep_FP.png

 

The static property node ensures the FP will remain and the comment makes it less likely that someone will remove it.

What I want to see, however, is something more explicit, possibly in the shape of a new VI property:

 

Keep_FP_New_Property.png

I would like to propose a stacked parallel execution structure. Especially in FPGA applications, this would solve the problem of diagrams running off the screen. All execution pages would run simultaneously, as if independent while loops were scattered across the block diagram. This idea potentially leads to a three-dimensional visualization of coding in LabVIEW. Note: in the image, the pages are offset, but the structure should look similar to a stacked sequence structure.

Parallel Execution Structure 3.JPG

 

 

In LabVIEW, if I want to use a property or method of a control, I have to right-click it and select the property. The same goes for a second control: if I want to use the same property, I have to do the same operation again. My idea is that if we select all the controls and right-click, some common properties like Value, Visible, etc. should be displayed, and selecting one should generate the property nodes automatically. This could reduce my application development time.

Multiple Control Selection.png

 

So if this key feature were in LabVIEW, we could select multiple controls or properties and perform some basic operations to reduce our application development time.

 

Thanks and Regards
Himanshu Goyal | LabVIEW Engineer- Power System Automation
Values that steer us ahead: Passion | Innovation | Ambition | Diligence | Teamwork
It Only gets BETTER!!!

I finally found the time to download a trial version of LabVIEW 2012. I find that the new 2012 features are really useful and interesting. Good job, NI! 😄

One of the greatest for me is definitely the new "concatenating tunnel mode".

I've noticed that it cannot be activated with a variable of scalar type, and this obviously makes sense for almost every variable type. However, strings are only formally scalar variables. I have lots of VIs where Concatenate Strings has exactly the same use (on strings) that Build Array has on numeric data, particularly when the concatenation has to be conditional. Obviously it is possible to convert the string into a U8 array before connecting the wire to the tunnel and to reconvert it into a string outside the loop, but this conversion could probably be managed internally without too much effort.

The attached snippet shows what I mean. So, in my opinion, it would be great if tunnels could also accept strings when in concatenating (conditional or not) mode.

Cheers.

I didn't find this idea posted; I think it's a must.

 

It would be very useful for Multicolumn Listbox Item Names, where we need to change cell values...

inplacepropertynode.png

Right now we can make a conditional disable structure that behaves one way or another depending on whether the VI is running in the Run-Time Engine. What I think would be useful is if we could also choose to execute one case or another based on the version of the LabVIEW Run-Time Engine or development environment.

 

Say I develop some neat little reuse VI that relies on getting the label name of the data type passed in. OpenG already has a function that does this, called Get Data Name. NI has a version as well in vi.lib called GetTypeInfo.vi. The problem is that in 2011 the OpenG method is about 10 times faster, but in the 2013 version the NI version is 10 times faster.

 

Wouldn't it be nice if a conditional disable structure could choose to do one thing over another based on the version of LabVIEW it is being run in? This way reuse code could be written to perform best in whatever version it is running in.

 

There are many situations where code written with a given function runs slower or faster in different versions of LabVIEW, and with this we could choose the best option for each environment.

 

Of course there is a workaround: read the App.VersionYear property, put that into a case structure, and perform different code for one version or another. But this adds overhead and is evaluated every time it is needed.
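
The difference is the same as compile-time versus run-time selection in a text language. A hedged C sketch, where LV_VERSION_YEAR is a made-up symbol standing in for the LabVIEW version and the two return values stand in for the two implementations:

#define LV_VERSION_YEAR 2013   /* hypothetical, supplied by the environment */

/* Compile-time selection (what the idea asks for): the unused branch is not
   compiled at all, so there is no run-time cost. */
static double get_data_name_fast(void)
{
#if LV_VERSION_YEAR >= 2013
    return 1.0;   /* stand-in for the NI GetTypeInfo.vi path */
#else
    return 2.0;   /* stand-in for the OpenG Get Data Name path */
#endif
}

/* Run-time selection (the App.VersionYear workaround): both branches are
   compiled, and the check runs on every call. */
static double get_data_name_runtime(int version_year)
{
    return (version_year >= 2013) ? 1.0 : 2.0;
}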

 

EDIT: Also, App.VersionYear can't be used in inlined VIs, because it is a property node, whereas a conditional disable structure could be used there.

The "Convert Instance VI to Standard VI" right-click option on malleable VIs made me wonder whether this technology could be used for debugging otherwise inlined VIs (and therefore also malleable VIs).

 

VIM Convert.png

 

VIs are obviously inlined without their block diagrams, so my suggestion would require two changes:

 

1) 'Allow debugging' must be settable in combination with inlining in VI Properties >> Execution.

2) LabVIEW IDE and RTE would have to call inlined VIs in different ways, depending on this setting. Suggestion follows:

 

Running in the IDE

The IDE would now create a temporary unique instance for each running inlined subVI that has debugging enabled. This would allow us to double-click on an "inlined" subVI while it's running, and have that temporary instance open up ready for probing. Settings like the 'SubVI Node Setup' dialog would now also be available for inlined VIs, if they have 'Allow debugging' on. Set 'Allow debugging' off and inlined VIs will behave as today (so no change for any existing code).

 

Running in the RTE

Executables are built with 'Enable debugging' either on or off (Advanced page of the Application Builder). I suggest that the Application Builder build in unique instances in place of debug-enabled inlined VIs when 'Enable debugging' is on, and bake in ordinary inlined code when 'Enable debugging' is off. This would allow us to debug into inlined (and thus also malleable) VIs inside executables with remote debugging. Loose VIs called with the RTE would have their inlined callees compiled with or without debug capability, depending on their 'Allow debugging' setting, as discussed above in "Running in the IDE".

 

What do you think - would it be nice to be able to debug inlined VIs?

So many times, Darren's history probes have been a great help while debugging my code! This concept could be enhanced further, but at the very least these probes should ship with the LabVIEW installer.