LabVIEW Idea Exchange


I think it would be nice if LabVIEW were smart enough to know that when I drop a For Loop around scalar inputs, it shouldn't auto-index the output tunnels but should instead use Shift Registers for matching inputs and outputs.

The common use case for this is the error input/output; it annoys me that it becomes an Array output.

 

Since the code is already wired, inline, and not broken, dropping a For Loop around it should not break it!

 

Reference and Class inputs are another use case; I want to pass the same object around, not create an Array.

Shift registers are better than non-auto-indexed tunnels (the other option) because they protect the inputs on zero iterations.
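A loose text-language analogy of that zero-iteration protection (Python sketch; `process` stands in for whatever the loop body does):

```python
def run_loop(n, value, process=lambda v: v + 1):
    # Shift-register semantics: the wire's value passes through
    # unchanged when the loop executes zero times, instead of being
    # replaced by a default value as a plain output tunnel would do.
    for _ in range(n):
        value = process(value)
    return value

run_loop(0, 42)   # zero iterations: input protected -> 42
run_loop(3, 42)   # three iterations -> 45
```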

 


 

This would remove one step required for most use cases, speeding up my development experience.

Enable a Sub VI to launch as a daemon without having to open a reference to it using its path.  The VI Properties page would look like this:

 


Wait until done would be checked by default, and Auto dispose reference would be left false by default.  So instead of parsing the path to the VI on disk and using a method to run it:


I just set the correct properties in the VI Properties page and drop the subVI on the block diagram of the calling VI.

It would be very beneficial if there were a tool that could help developers create better code.  With the advances in LabVIEW 2010 (multi-pass compilation and the use of the Dataflow Intermediate Representation, DFIR), the compiler could provide feedback to the developer on how to write better code.  As time goes on we all think we write perfect code, and it would be beneficial to have the compiler remind us that there is a better way.  As the code is compiled, it could identify modules that are written inefficiently and could use a rewrite.  A simple color-coded list of the VIs compiled would provide quick feedback on the state of your current code: red would indicate heavy optimization, yellow some optimization, and green that the code is good as written.  Over time, engineers would learn how to write better code based on compiler feedback.
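As a sketch of the proposed color coding (Python; the exact thresholds are my assumption, since the idea doesn't specify cutoffs):

```python
def optimization_color(percent_optimized):
    # Hypothetical thresholds -- the idea proposes the red/yellow/green
    # scheme but leaves the cutoffs to the implementation.
    if percent_optimized >= 50:
        return "red"     # heavy optimization: consider a rewrite
    if percent_optimized >= 20:
        return "yellow"  # some optimization applied
    return "green"       # code is good as written
```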

 

VI Name                            % Optimized
subSigGeneratorBlock.vi            64
Nearest Frequency for Block.vi     62
Nearest Freq in Int Cycles.vi      55
subInternalTiming.vi               55
NI_MABase.lvlib                    41
NI_AALBase.lvlib                   38
subShouldUseDefSigName.vi          34
sub2ShouldUseDefSigName.vi         30
Clear Errors.vi                    25
subGetSignalName.vi                22
Merge Errors.vi                    18
ex_GenAddAttribs.vi                14
Timestamp Subtract.vi              10
DU64_U32SubtractWithBorrow.vi       9
I128 Timestamp.ctl                  4
ex_SetExpAttribsAndT0.vi            0
Timestamp Add.vi                    0
DU64_U32AddWithOverflow.vi          0
ex_SetAllExpressAttribs.vi          0
ex_WaveformAttribs.ctl              0
ex_WaveformAttribsPlus.ctl          0
Waveform Array To Dynamic.vi        0
ex_CorrectErrorChain.vi             0

 

Another advance would be to click a VI in the list and be presented with a list or graphical representation of the changes made to the source VI during optimization.  This would be similar to the diagrams shown in the DFIR decompositions in the NI LabVIEW Compiler: Under the Hood document on NI's web site.  Options for the future could include suggesting documents for better code development, training, or indicating the level of LabVIEW proficiency (Novice, CLAD, CLD, or CLA) at which the code is written.

 

It seems that useful information could be fed back to the developer from the compile process.  This could be implemented in many ways, and I have provided a few suggestions.  As developers we should always look for ways to create better code and not depend on the compiler to make our code better.

 

Reference Document: NI LabVIEW Compiler: Under the Hood

https://www.ni.com/en/support/documentation/supplemental/10/ni-labview-compiler--under-the-hood.html

 

 

Hi,

 

The idea subject says it all. I think this is useful for multi-level data types with data placed in the parent class and its child class (and in the child of the child class, and so on). To transport the data, you use the parent class. If you want to modify the data inside a child class, you either program a corresponding dynamic dispatch VI or you have to convert the parent class to its child class, modify the data, and convert back to the parent. The In Place Element Structure would be nice for the second method.

 

Regards,

Marc

LabVIEW should have a VI which accepts a path to a class and returns a 1D array of the names of all the classes it inherits from, similar to how the Call Chain primitive returns an array of the calling VIs. It should do this without loading the class into memory (which can be very expensive).

 

This can be useful for determining whether a class inherits from another class without having to load it. See this thread for an example.

 

Additionally, there should probably also be a similar VI which accepts the class instead of a path, just for symmetry.
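A rough sketch of what such a VI could do, in Python terms; `read_parent_path` is a hypothetical helper that pulls the parent's path out of the .lvclass file on disk without loading the class into memory:

```python
import os

def inheritance_chain(class_path, read_parent_path):
    # read_parent_path(path) is a hypothetical helper returning the
    # parent class's .lvclass path, or None at the top of the hierarchy.
    # Walking parent links yields the ancestor names, analogous to how
    # the Call Chain primitive yields the calling VIs.
    chain = []
    current = read_parent_path(class_path)
    while current is not None:
        chain.append(os.path.basename(current))
        current = read_parent_path(current)
    return chain
```

Usage with a stand-in reader (a real implementation would parse the class file itself):

```python
parents = {"/tmp/Child.lvclass": "/tmp/Parent.lvclass",
           "/tmp/Parent.lvclass": "/tmp/Base.lvclass",
           "/tmp/Base.lvclass": None}
inheritance_chain("/tmp/Child.lvclass", parents.get)
# -> ["Parent.lvclass", "Base.lvclass"]
```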

I was just involved in a discussion about a default return value for the dequeue primitive when a different idea struck me.

 

Why can't we define a default value for typedefs?  Enums especially would benefit from having a case reserved for operations where something didn't work as expected and a default value is returned.  Being able to separate this value from the other (normally valid) values would be a boon for ensuring data integrity.

 


 

Why can't we have the default value set to something we don't normally interpret as being valid?
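In text-language terms, the reserved-default idea looks something like this Python sketch (names are illustrative):

```python
from enum import Enum

class Command(Enum):
    # Reserved default: never a normally valid value, so downstream
    # code can detect "operation didn't work as expected" unambiguously.
    INVALID = -1
    STOP = 0
    START = 1
    PAUSE = 2

def dequeue(queue):
    # Return the reserved default instead of a valid-looking value
    # when the dequeue fails (e.g. timeout on an empty queue).
    if not queue:
        return Command.INVALID
    return queue.pop(0)
```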

This is written as both an Idea and as a Community Nugget.

 

Did you know there exists a function that decreases code fragility when it comes to typecasting and type converting datatypes? It's called 'Coerce to Type', and I bet you've never heard of this function unless you have kept up with this conversation. Thanks to RandyP for creating that Idea which culminated in a 'public' release of the Coerce to Type function.

 


 

Since that post, I have become aware of potential risks/bugs I had been proliferating in my coding style. First, I will briefly describe my understanding of the difference between typecasting and typeconverting in the context of LabVIEW. Next, I'll show a few use cases where this node increases code robustness. Finally, it's up to you to Kudos this Idea so we get Coerce to Type officially supported and in the palette!

 

Simply, "type converting" preserves the value of a wire, and "typecasting" preserves the raw bits that express that value on a wire. These two concepts are not interchangeable - they perform distinctly different data transfer functions (which is why they show up on two separate subpalettes: "Numeric>>Conversion" and "Numeric>>Data Manipulation"). Then there's this new function: Coerce to Type. I think of it as a Coerce-Cast combo. The data input is first Coerced to the datatype harvested from the top input, and then it is typecasted to that type.

 

Dynamic event registration is sensitive to the name on the wire, and for documentation's sake it has historically been important to typecast the event source ref to achieve a good name. Well, typecasting refs can get you into trouble (ask Ben), especially if you change the source control type while forgetting to update your "pretty name" ref constant.

 


 

My next favorite example is when you need to coerce a numeric datatype into an enum. Sometimes it's impossible to control the source datatype of an integer, while it's desirable to typecast the value into an enum for self-documented syntax inside your app. 

 

For instance, take the "standard" integer datatype in LabVIEW - I32 - and cast it to an enum. You're going to get some unexpected results (assuming, you expected the *value* to be preserved). Consider the following scenario:

 

  1. You desire to typecast a plain integer '2' into an enum {Zero, One, Two, Three}, and after 45 minutes of debugging, realize "Typecasting" has hacked off 75% of the bits and clobbered the value. Drats!
  2. The enterprising engineer you are, you determine how to fix the problem with a deftly-placed "Coerce to U8". (This is one of the fragile errors I proliferated prior to learning about this node)
  3. Maniacal manager/customer comes along and says "I want that enum to count to 10k". Drats again. A datatype change from U8 to U16 for the typedef'd enum, and a lot of typing with the wretched enum editor. Finally, two hours into testing and wondering why the program doesn't work, you realize you also forgot to replace all of the Type Converts to U16 (this is the definition of fragile code: you change one thing, and another thing(s) breaks).
  4. Rockstar Programmer Epiphany: use Coerce to Type, bask in your robust code. You even enjoy data value preservation from floating point numbers.
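The truncation in step 1 can be reproduced in Python, mimicking LabVIEW's big-endian flattening with the struct module (the Coerce to Type line is a value-preserving stand-in, not the real node):

```python
import struct

# LabVIEW's Typecast flattens to big-endian bytes and reinterprets them.
raw = struct.pack(">i", 2)     # I32 value 2 -> b'\x00\x00\x00\x02'
typecast_u8 = raw[0]           # Typecast to U8 keeps only the first byte -> 0, value clobbered!

# Coerce to Type preserves the *value*, clamping it into the target range.
coerce_then_cast = min(max(2, 0), 255)   # -> 2
```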
 
Finally, typecasting can generate mysterious failure modes at Run-Time, but Coerce to Type can catch errors at Design Time. This is especially helpful for references (see above), but can also prevent some boneheaded data gymnastics (see below). Whew! Saved by compiler type resolution mismatch!
 
 
In short, now that you realize you need this function, I hope you will want to see it added to the Data Manip palette.
 
Penultimate note: Coerce to Type is neither a replacement for typecast nor any of the type converts!!! There are distinct scenarios appropriate for each of the three concepts.
Ultimate note: please check the comments section, because I expect you'll find corrections to my terminology/concepts from GregR, et al.

With text-based languages, the For Loop has a programmable starting index, stopping index, and step size.  With LabVIEW, the starting index is always zero and the step size is always 1; neither is changeable.  I would like to see LabVIEW have a real For Loop with three terminals inside the loop that can be set by the user: one terminal for the initial value (starting index), one for the final value (stopping index), and one for the step size.  This would be of great value to all LabVIEW programmers.  Of course the terminals could be much smaller than what is displayed in the picture.  One- or two-letter terminals, such as ST for start, SP for stop, and SZ for step size, would do fine (or N for initial value, F for final value, and S for step size).  The real For Loop should be capable of going in a negative direction, like starting at 10, ending at -10, with a step size of -2.
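In text-language terms, the requested loop and today's workaround look like this Python sketch:

```python
# The requested "real" For Loop, as a text language writes it:
# for (i = 10; i >= -10; i -= 2) { ... }
values = list(range(10, -11, -2))
# -> [10, 8, 6, 4, 2, 0, -2, -4, -6, -8, -10]

# Today's LabVIEW-style workaround: derive N, then map the
# fixed 0..N-1 iteration count onto the desired sequence.
start, stop, step = 10, -10, -2
n = (stop - start) // step + 1          # 11 iterations
mapped = [start + k * step for k in range(n)]
```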

 

 


Issue:

Dynamic Launching of a VI works fine when the plug-in code is launched from the LabVIEW source code.

When the application was built, it failed to launch the VI. Upon debugging I found that I am reusing some of the subVIs in the plug-in VIs that are part of the EXE. When I tried renaming and re-linking those VIs for the plug-in VIs, it worked fine.

 

Solution:

Similar to the namespacing provided by .lvlib files, add an option in the EXE build spec to apply a namespace for the application. This would rename all the VIs inside the EXE as "My project:main.vi", "My project:test1.vi", etc., which would resolve the naming collision created by the LabVIEW Run-Time Engine, so a plug-in VI could reuse test1.vi from the source code.

 

 

I have had to jump through hoops (creating a .NET process or using runas) to get portions of code to run when the end user does NOT have admin privileges.

 

Add an option under VI Properties >> Execution to allow setting the VI to run as admin or another user.

What happens when you want to change a Border Node on an IPE? It is a manual process which I propose can be automated as shown in the two examples I have provided...

 

InPlaceReplace.jpg


 

 

 

 

Well, I love the idea of the new cluster constants. I have found, though, that if I need to edit these constants to change something, the control opens back to full size. This really defeats the purpose of these space-saving features. We need to be able to double-click these constants and have them open in a new window so that my block diagram size does not change.

 


 

If my diagram gets this big whenever I need to change a true or false state, what good is the nice new icon or constant?

 

Just a thought.

In their current form, auto-indexing tunnels only operate on a single dimension of an array.  For example, if you feed a 2D array through an auto-indexing tunnel into a For Loop and display the resulting 1D array in an indicator inside the loop, as below, you will always get the last row.

 

I'd like to see a feature where you can right-click the tunnel and set it to auto-index by column instead of by row, to get the last column instead.

 

It could be as simple as an option in the context menu for the auto-indexing tunnel that says "Index by rows" or "Index by columns".  It gets more complex with 3D, 4D, and higher-dimensional arrays, but you could do something like a submenu flyout that says "Index By Dimension" > "1", "2", "3", etc.
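In Python terms, indexing by columns is just iterating the transpose; a small sketch of the two behaviors:

```python
array_2d = [[1, 2, 3],
            [4, 5, 6]]

# Current behavior: auto-indexing yields rows; after the loop,
# the indicator holds the last row.
last_row = None
for row in array_2d:
    last_row = row           # -> [4, 5, 6]

# Proposed "Index by columns": equivalent to iterating the transpose.
last_col = None
for col in zip(*array_2d):
    last_col = col           # -> (3, 6)
```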

 

Shift registers are often used in functional globals, where they need to be initialized only on the first call, requiring extra code to handle this case.

The feedback node handles initialization and enabling (optional updates) without adding any code, so why are these not options on shift registers?  I would just use feedback nodes, but shift registers are more readable in the action engine template.  See the code below; the feedback version is much cleaner:
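A Python sketch of the first-call initialization a functional global needs today (the mutable default argument stands in for the shift register's persistent state):

```python
def functional_global(action, value=None,
                      _state={"initialized": False, "data": None}):
    # _state persists across calls, like an uninitialized shift register.
    # The explicit first-call check below is the extra code the idea
    # wants to eliminate with a feedback-node-style initializer option.
    if not _state["initialized"]:
        _state["data"] = 0
        _state["initialized"] = True
    if action == "set":
        _state["data"] = value
    return _state["data"]
```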

 

A new shift register with the same options would yield clean code (I think it is cleaner, since there are fewer objects using real estate):

 

 


 

 

 

Loop indexes max out at 2,147,483,647.  It would be nice to have an option to wrap the loop index around instead of having it remain constant, in the event that you have a process that depends on the loop index incrementing.
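The requested wraparound is ordinary two's-complement behavior; a Python sketch of wrapping an unbounded count into I32 range:

```python
I32_MAX = 2_147_483_647

def wrap_i32(i):
    # Wrap an unbounded iteration count into signed 32-bit range,
    # the behavior the idea proposes as an option for the loop index.
    return (i + 2**31) % 2**32 - 2**31

wrap_i32(I32_MAX)      # -> 2147483647
wrap_i32(I32_MAX + 1)  # -> -2147483648 (wraps instead of saturating)
```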

Will LabVIEW run in Windows Azure? If not, why not?

 

What is the compatibility of LabVIEW with Azure cloud computing?

Good programming practice says to handle your errors.  If you want to clear a specific error, you have to do some variation on the following code:

 

ClearSingleError.png

 

Since this can be pretty cumbersome in your code, I would love to have a Clear Specific Error VI/function that does exactly this.  It could also be polymorphic and accept an array of multiple errors.  To make it even better, you could merge it with the existing Clear Errors VI so that if the error-code input is empty, it clears all errors.
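A Python sketch of the proposed behavior, treating the error cluster as a (status, code, source) tuple; the empty-input-clears-all rule from above is included:

```python
def clear_specific_errors(error, codes_to_clear):
    # error mirrors the LabVIEW error cluster: (status, code, source).
    # An empty codes_to_clear list clears any error, matching the
    # proposed merge with the existing Clear Errors VI.
    status, code, source = error
    if status and (not codes_to_clear or code in codes_to_clear):
        return (False, 0, "")
    return error
```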

 

CaptainKirby made a nice community example that does all this, but I'd like it to be on the palette for everyone to use...

https://forums.ni.com/t5/Example-Code/Clearing-a-Specific-Error-From-The-Error-Cluster/ta-p/3517242

 

It is my understanding that writing to the Value property of a Property Node is slow because it uses the UI thread and blocks until the control graphic is updated with the new value.  However, one of Dr. Damien's threads (link goes to thread) states that the Control Value.Set method for VIs is asynchronous, meaning it doesn't block on the front panel update, and so should be faster than the standard Value property.  However, this method is not so desirable, since it depends on knowing the control's label.

 

So the question came up: why not have a Value property that is asynchronous?

It has been a few months since a suggestion has been made to do something about the For Loop, so if nothing else it is time to stir things up a little bit.  There have been several suggestions to do something with the iterators, for example:

 

http://forums.ni.com/t5/LabVIEW-Idea-Exchange/Smart-Iterators-with-Loops/idi-p/967321

 

http://forums.ni.com/t5/LabVIEW-Idea-Exchange/for-loop-increment/idi-p/1097818

 

http://forums.ni.com/t5/LabVIEW-Idea-Exchange/quot-Start-Step-Stop-quot-structure-on-For-loops/idi-p/1022771

 

None of these has really gained traction, and heck, I haven't settled on any of them myself.  In most cases I am satisfied with a workaround, usually involving the Ramp VI.

 

There is one case where I am not happy with the workaround.  I happen to need reverse iteration values quite often (N-1,N-2,...0).  With the N terminal pinned to the top-left corner I have little choice but to do the following and start zigging and zagging wires:

 


 

I would simply like a reverse iteration terminal which can be moved around at will and simply counts down.  Doesn't have to look like I have drawn it, that just happens to be intuitive to me.  Naturally this terminal (and the normal iteration terminal) should have the option to be hidden.

 

I thought about having the option to have the loop spin backwards, similar to reversing all auto-indexing inputs and having the iteration terminal count down, but I just could not decide what to do about auto-indexed outputs.  My G-instincts tell me that they should be built in the order of loop cycles, my C-instincts tell me I am building array[i] and if i goes in reverse order, so should the array elements.  For now I say, forget it, and just stick to the simple terminal.  Array reversal is essentially free, so I at least have a workaround there.
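The workaround in text terms is trivial, which is part of the frustration; a Python sketch of the proposed reverse terminal's values:

```python
N = 5
forward = []
reverse = []
for i in range(N):
    forward.append(i)          # the existing iteration terminal
    reverse.append(N - 1 - i)  # what the proposed reverse terminal would emit
# forward -> [0, 1, 2, 3, 4]
# reverse -> [4, 3, 2, 1, 0]
```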

Include a VI that generates a timestamp in Excel format (something like the one attached) as a standard in the Timing functions.
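For reference, Excel stores timestamps as fractional days since its 1899-12-30 epoch (the offset that compensates for Excel treating 1900 as a leap year); a Python sketch of the conversion such a VI would perform, valid for dates from March 1900 onward:

```python
from datetime import datetime

EXCEL_EPOCH = datetime(1899, 12, 30)  # Excel's effective day zero (1900 date system)

def to_excel_timestamp(dt):
    # Excel serial date: whole days plus the fraction of the day elapsed.
    delta = dt - EXCEL_EPOCH
    return delta.days + delta.seconds / 86400.0

to_excel_timestamp(datetime(2011, 1, 1))          # -> 40544.0
to_excel_timestamp(datetime(2011, 1, 1, 12, 0))   # -> 40544.5 (noon)
```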