LabVIEW Idea Exchange


Allow pressing 'n' and 's' for Save/Don't Save at VI close.

 

When closing a VI with unsaved changes, you get the "Save changes before closing?" dialog, but you cannot use the keyboard and just press 'n' for No or 's' for Save, as you can in virtually every other application. You have to move your mouse and click, which is much slower and less ergonomic than just pressing 'n'.

 

Look at any other application, for example Notepad, Notepad++ and OpenOffice:

notepad.PNG  npp.PNG  oo.PNG

 

LabVIEW should be more standardized in its keyboard handling.

labview.png

This "complaint" goes for many more dialogs (all?) in LabView.  It's just not keyboard friendly.

 

/Thomas

If a case structure uses enum or numeric values as its selector type, LabVIEW automatically converts value lists to value ranges. On the one hand this is nice because it keeps the selector label small, but on the other hand it can lead to program-flow errors that are very hard to find.

But let's have a look at the pictures first:


Left: the value list I entered in the selector field. Right: the range that LabVIEW creates automatically:
 
case_list.jpg  case_range.jpg


Here is the complete piece of code showing that there are four values in the enum constant and that each value is handled (no default case required):
case_structure_with_enum_input.jpg 

 

 

What happens if you ...
a) ... extend the enum between "case 3" and "case 4"? The new value(s) will be handled in the case ""case 2" .. "case 4"".
Do you always want the new value to be handled in that case? I don't!
b) ... add a new value to the enum before "case 2" or after "case 4"? The VI can no longer execute, because the value is not handled in any case of the case structure.
Now you have to adjust the code => more work, but you can be sure the code is doing the right thing.
c) ... have a default case and add values to the enum as described in a) and b)? The code stays executable both times, but only in b) will the new value land in the default case.
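
For readers who think better in text, here is a rough Python analogy of the list-vs-range behavior (illustrative only; the enum values become integers here):

```python
# Analogy: LabVIEW's auto-generated range ""case 2" .. "case 4"" acts
# like the range check below, silently absorbing any value that is
# later inserted between the range's endpoints.
CASE_1, CASE_2, CASE_3, NEW_CASE, CASE_4 = range(5)  # enum extended in the middle

def handle_as_list(value):
    # Explicit value list: the new value is not handled anywhere, so
    # the VI breaks and the programmer must decide what to do with it.
    if value == CASE_1:
        return "case 1"
    if value in (CASE_2, CASE_3, CASE_4):
        return "cases 2..4"
    raise ValueError("unhandled enum value")  # LabVIEW: broken run arrow

def handle_as_range(value):
    # Auto-converted range: NEW_CASE falls inside CASE_2..CASE_4 and
    # is handled there, whether you wanted that or not (situation a).
    if value == CASE_1:
        return "case 1"
    if CASE_2 <= value <= CASE_4:
        return "cases 2..4"
    raise ValueError("unhandled enum value")

print(handle_as_range(NEW_CASE))  # "cases 2..4" -- silently absorbed
print(handle_as_list(NEW_CASE))   # ValueError -- you must decide yourself
```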

Usually I want to decide how a value is handled, meaning I want to extend or change the case structure myself, and I don't want LabVIEW to force me to take care of a special ordering of the values inside my enum typedefs.

Because of this I suggest adding a new option to the LabVIEW block diagram configuration: "automatically change case structure list of values to value range: never / ask / always".
The default value for this option should be "ask". In a situation like the one shown in the first image, a dialog like this could appear:
 
case_ask.jpg
If the user selects "yes", the selector changes to a range as shown in image 2. If they select "no", the case structure keeps the values exactly as entered (image 1).

I have seen a few posts online indicating a couple of changes here and there for LVMerge and LVDiff. I think the main problem is that we just want them to work with third-party SCM tools like Mercurial / TortoiseHG, SVN / TortoiseSVN, RTC (Rational Team Concert), Surround, etc. The ability to check in and out from the project explorer is not quite cutting it. With more and more developers using distributed SCC products like Git or Mercurial, the ability to merge and diff becomes really important. Here is a list of what I would like to see in newer versions of LabVIEW and the LVMerge and LVDiff tools; it would give us more time to develop code and less time spent managing software.

 

1. Both LVMerge and LVDiff work OK with a single VI, but not that well when hierarchies are different. Most SCC applications will only download one file, or the files that differ between change sets, when merging or diffing. The basic problem is that in order to diff or merge, LabVIEW requires the VI to be loaded, which in turn requires loading its dependencies. If the merge and diff tools didn't require the VI's dependencies to be loaded into memory, many of the issues would just go away.

 

I understand the technical challenges in implementing this feature, but I think it would be a far better approach than trying to write a huge amount of code to handle multiple SCC tools. Even if that code were written, the workaround solutions are not pretty. For example, you would have to download both change sets' entire hierarchies to disk and then compare them. How well will that work with very large projects? Or with a merge, where you need three versions of the hierarchy?

 

I think the better solution is to just make LabVIEW files act like text files when diffs and merges are performed. The information in the LabVIEW file points to dependencies, and if those dependencies' attributes change then flag them, but LabVIEW should be able to process the LabVIEW "file" in a vacuum for diffing and merging. This may require caching more information about dependencies than what is currently saved in the VI, but I think it would allow a better merge/diff utility.

 

2. Support diffing and merging of all LabVIEW file types (projects, XControls, classes, libraries, ...). Sometimes the text-merge features in other programs don't take into account the complexity of the relationships between LabVIEW files. It would be nicer to have a visual diff in terms of how LabVIEW treats the file (e.g. how TestStand diffs sequence files).

 

3. This is more of a bug fix, but improve the merge windows. I have found a couple of situations where the code being merged is not even shown in the three windows (base, theirs, mine), and I cannot scroll to that location in the code to intelligently perform the merge.

 

4. The LVMerge exe exits right away, before the merge is complete. TortoiseHG thinks the merge of a file is complete when the exe exits, so it blows right past all of the other file merges and only the first file gets merged.
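
For context, here is the contract a front end like TortoiseHG assumes, sketched in Python (the install path and argument order are the ones NI documents for LVMerge, but verify them for your version): the merge-tool process must block until the merge is finished, which is exactly what LVMerge currently violates.

```python
# How an SCM front end drives an external merge tool (sketch).
import subprocess

# Typical LVMerge install path -- adjust for your system.
LVMERGE = r"C:\Program Files\National Instruments\Shared\LabVIEW Merge\LVMerge.exe"

def merge_one_file(base, theirs, mine, merged):
    # subprocess.run blocks until the launched process exits; the SCM
    # then assumes the merge is done and moves on to the next file.
    # Because LVMerge.exe spawns LabVIEW and exits immediately, this
    # returns before the user has actually finished merging -- the bug
    # described above. The fix has to be in LVMerge itself: it must
    # stay alive until its merge window is closed.
    result = subprocess.run([LVMERGE, base, theirs, mine, merged])
    return result.returncode == 0
```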

 

5. Cover these use cases as differences between change sets when merging and diffing (the solution to item 1 should cover these):

Setup: use TortoiseHG to diff an entire change set in the repository against the local drive, or merge two change sets in the repository.

  • Moved files (not just VIs) in the hierarchy
  • Deleted files in the hierarchy
  • Files added to or removed from classes/libraries/projects
  • File 1 changed, and dependent File 2 changes its connector pane connections to work with the changes in File 1
  • Both new change sets add the same file that the base change set does not contain
  • Same as the last item, but the added file has a different location on the hard drive between the change sets

There are probably more that I am not thinking of, but that would be a good start.

 

6. Simplify the entire process by providing a real IDE for merging and diffing files. I am envisioning something with a hierarchy of views, like Beyond Compare. It would let you simplify some actions at a high level, but give you the power to perform advanced actions. For example, present a list of differences, the types of differences, and the types of merges possible.

  • Maybe some files can be auto-merged
  • SCC thinks the files are different, but there are only cosmetic changes
  • The user could choose between a deleted file and a changed file
  • Two versions are different, but the user knows which one to choose without performing a merge

The user should be able to get a quick view (no loading dependencies), or double-click the item so a new tab comes up that allows an actual file merge (VI, class, project, etc.).

 

I think a tool like that, with the ability to interface with third-party SCC tools, would be a huge timesaver, especially in distributed environments where merges occur more often. It might need to stream all of the changes from a tool like TortoiseHG before performing a merge (probably the easiest implementation), or else rewrite the GUI to manage Mercurial or Git change sets directly.

 

The other option is to say "just use a check-in/check-out central-repository SCC", and I would say Phooey! to that. 🙂 After using Mercurial with TortoiseHG for a while, I would not switch to anything other than another distributed SCC application... even with the difficulties with merges and diffs, the other systems still pose similar problems (sometimes even worse), and the software management in those programs just stifles productivity. Has anyone ever tried to move a class and its members to a different directory after the code has already been checked into an SCC tool like Perforce? How about managing a multi-branch project where stable-release updates are applied not only to the trunk but also to a major feature branch? Painful...

 

I think that these changes to the LabVIEW development environment will move us from “writing LabVIEW code” to “software development using LabVIEW”.

TensorFlow is an open-source machine learning tool originally developed by Google research teams.

 

Quoting from their API page:

 

TensorFlow has APIs available in several languages both for constructing and executing a TensorFlow graph. The Python API is at present the most complete and the easiest to use, but the C++ API may offer some performance advantages in graph execution, and supports deployment to small devices such as Android.

Over time, we hope that the TensorFlow community will develop front ends for languages like Go, Java, JavaScript, Lua, R, and perhaps others.

 

Idea Summary: I would love to see LabVIEW among the "perhaps others". 😄
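
For readers who haven't seen it, this is roughly what constructing and executing a tiny graph looked like in the Python API of that era (TensorFlow 1.x style):

```python
# Build-then-run dataflow in the 1.x-era TensorFlow Python API.
import tensorflow as tf

# Graph construction: two inputs feeding a multiply node.
x = tf.placeholder(tf.float32, name="x")
y = tf.placeholder(tf.float32, name="y")
product = tf.multiply(x, y, name="product")

# Graph execution: feed values in, read results out.
with tf.Session() as sess:
    print(sess.run(product, feed_dict={x: 3.0, y: 7.0}))  # 21.0
```

The build-a-graph-then-execute-it model would seem to map quite naturally onto LabVIEW's own dataflow diagrams.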

 

(Disclaimer: I know very little in that particular field of research)

 

If you want to replace two or more controls with another type of control, you should be able to just select all of the existing controls and do all of the replacements in one operation.

 

Currently you have to select one control at a time, right-click it, choose Replace, and locate the new control - then do it all over again with the next control, even though the type of control you want to change it to is the same as for the previous one.

 

multi-replace.png

I spent half a day today figuring out how to get an icon that was created in LabVIEW's icon editor over to the icon editor in the Application Builder. After much trial and error, the only method I found was to select and copy the pixels to the clipboard while in the LabVIEW icon editor, then run the Application Builder, open its icon editor, and paste the contents of the clipboard into it. Some pixels change color when doing this. Why doesn't the LabVIEW icon editor allow you to save an icon in the .ico format that the Application Builder's icon editor requires? It seems logical to keep the same icon for your compiled app instead of having to create a new one in the App Builder. It would be even better if the Application Builder just loaded the icon from the startup VI by default, but allowed the icon to be changed in the Application Builder icon editor if desired.
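
As an interim workaround outside LabVIEW, an exported PNG of the icon can be converted to .ico with any image library; a minimal sketch using Python's Pillow (the file names are made up):

```python
# Convert an exported PNG icon to the .ico format that the
# Application Builder's icon editor expects.
from PIL import Image

img = Image.open("my_icon.png").convert("RGBA")
# An .ico can embed several resolutions; these are the common ones.
img.save("my_icon.ico", sizes=[(16, 16), (32, 32), (48, 48)])
```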

Insert spell-check functionality in the VI description!

This would be great..

 

Before:

VIDescription.JPG

 

 

After:

VIDescription2.jpg
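
A rough sketch of the kind of check being asked for, using the third-party pyspellchecker package in Python (illustrative only, not how NI would implement it):

```python
# Flag likely misspellings in a VI description and suggest fixes.
from spellchecker import SpellChecker  # pip install pyspellchecker

spell = SpellChecker()
description = "Acquiers a singel voltage sample from the DAQ device"
words = [w.strip(".,!?") for w in description.split()]
for word in spell.unknown(words):
    print(f"{word!r} -> did you mean {spell.correction(word)!r}?")
```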

Expanding a little on the idea I hinted at in this recent thread, here is something that strikes me as a missing feature in LabVIEW.

I'll start with some illustrations.

 

Starting situation: let's say I have a main VI and I create a new case in a case structure because I need to perform some data processing. I drop a few property nodes and references, have data flowing from shift registers left to right, and I pull a Data Value Reference wire from a big data structure:

 

ScreenHunter_006.jpg

 

New feature 1: I drop a BLANK VI on the block diagram (which I choose with the appropriate number of connectors):

 

ScreenHunter_005.jpg

 

New feature 2: I now CONNECT my inputs and outputs to this blank VI:

 

ScreenHunter_005.jpg

 

ending up with this:

 

ScreenHunter_001.jpg

 

New feature 3: Now I can double-click the blank VI to open its panel, which reveals a neat FP and a BD with the Controls and Indicators I need (not shown). I can now start wiring the functions I need to implement.

 

I'll comment on the idea next.

 

 

 

Currently in IMAQ Vision there is a function to convert an image to a standard-format string (JPEG, PNG, etc.). The contents of the string are exactly the same as the data that would normally be written to a file. This is a great function for compressing an image and sending it over FTP, etc.

 

The problem is that there is no inverse function available - you can't convert the string directly back to an image. The only way to do it is to save the string to a file and then read that file as an image, which slows down the process significantly.

 

I repeatedly see the need for this function.  People receive strings from other devices or PCs and want to display them as images.

 

There are functions to flatten images to/from strings, but they are limited in scope (JPEG and PNG only, I believe), and they include the image name and other information in the string. The format is proprietary and can't be read by other programming languages (yes, some people still program in languages other than LabVIEW). It is also difficult when you want to unflatten a string and store it in an image with a different name than the one it was stored with.
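
For comparison, this round trip is nearly a one-liner in most languages; a sketch of the equivalent in Python with Pillow (illustrative only):

```python
# Decode a standard-format image byte string entirely in memory --
# the inverse operation this idea asks IMAQ Vision to provide.
import io
from PIL import Image

def bytes_to_image(data: bytes) -> Image.Image:
    # io.BytesIO wraps the byte string as a file-like object,
    # so no temporary file on disk is needed.
    return Image.open(io.BytesIO(data))

# Round trip: encode to PNG bytes, then decode straight back.
buf = io.BytesIO()
Image.new("RGB", (64, 64), "red").save(buf, format="PNG")
img = bytes_to_image(buf.getvalue())
print(img.size)  # (64, 64)
```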

 

Bruce

 

[Update] Part of this idea for the PNG format was implemented for LabVIEW 2013. Please see this post for details.

If you happen to put something that LabVIEW doesn't quite like in folders like "~\LabVIEW xxx\project",
LabVIEW will crash at launch and give you a very helpful crash report, as shown here: https://forums.ni.com/t5/Discussions-au-sujet-de-NI/Crash-de-LabVIEW-%C3%A0-son-lancement/m-p/4215135#M35349

 

In order to avoid being told by NI support that you have to re-install LabVIEW,
please add a "safe mode" that will ignore whatever was added by the user (or VIPM) in the subfolders of the LabVIEW folder.

 

And also fix CAR 732888.

Hi,

 

I think there is too much clicking around in the BD to enable/disable/configure node options. I'd like some sort of context-sensitive menu that automatically appears when you hover your pointer over any object on the block diagram. For instance, when you right-click the Read from Text File primitive you get this context menu:

 

ReadFromTextFile_0.png

 

You can do a lot of things here, it seems, but there are actually only 4 direct configuration points, namely the ones I've highlighted with a red dot. It would save a lot of right-clicking and subsequent menu browsing if a configuration menu just appeared when you hover over an object for a short amount of time (50 ms maybe; the exact delay must be figured out by the UI guys):

 

ReadFromTextFile_1.png  Hover for a fraction of a second...

 

ReadFromTextFile_2.png  ...go ahead and configure the node.

 

a) The node configuration menu must not cover any part of the object itself, so you still have access to the entire object graphic if your purpose was to interact with it.

 

b) No menu should appear unless your tool is empty. There is no need for the node configuration menu to appear if you approach the object with a wire for a terminal, for instance.

 

c) The advantages are (at least): no mouse clicking necessary, no menu fly-out hunting, no grayed-out options taking up space, no duplicate/triplicate options taking up space, and no configuration dialogs that don't add anything extra (how about all those Properties dialogs that just let you edit the object's label in a fancy way, but which you have to check anyway because a new property might have been added in this LabVIEW version?). Just an intelligently populated menu with only the configuration options that make sense in this context.

 

d) Help text for each configuration item should be shown in the context help window as you move over each item in the list, possibly minimizing the need to open the detailed help.

 

e) The short delay from a still pointer to menu emergence means you can still move your pointer around the BD without menus flying out everywhere, while the delay is small enough that the menu seems to appear almost instantly when you do hold the pointer still on top of an object.

 

I envision such a node configuration menu appearing for basically any node. One of the key aspects is that the menu is object- and context-dependent. For instance, a subVI:

 

SubVI.png

 

... or a tunnel on a While loop:

 

LoopTunnel.png

 

For some objects there might even be some key information available in this node configuration menu, information that is otherwise several clicks away. For subVIs, such information could be whether the VI is inlined, is reentrant, has debugging enabled, etc.:

 

SubVIExtra.png

 

Cheers,

Steen

This is an excellent feature for populating a new cluster, and one I use every day:

 

DragAndDropCluster.png

 

This is a feature that needs to be added:

 

DragAndDropArray.png

 

Stipulation: if several constants are highlighted and they are all the same datatype, allow them to be dropped into an empty array; otherwise, disallow dropping. This determines the datatype of the array and initializes it with the first n items that you dropped in.

 

Formula Node output variable names should be shifted right by 1 pixel within their borders. Currently, the names are left-justified. Since each character is also left-justified (i.e. the white space that separates it from adjacent text is on its right), input variable names have a 0-pixel left margin and a 3-pixel right margin within their 1-pixel border. Since the output variables' 2-pixel borders grow inward to match the outer dimensions of input variables, a pixel of text is cropped on the left while excess space remains on the right. This is especially problematic for the 1-pixel-wide lowercase letter L, which is cropped in half by the left border but still has a 2-pixel right margin. If shifted, names would be centered, cropping would not be as apparent, and the letter L would be distinct. The image below is enlarged 200% so the issue is more obvious (and the name is easier to read).

 

formnode.bmp

The In Range and Coerce function is frequently used to determine whether a value is within the range defined by an upper limit and a lower limit.

 

But when the value is out of range, you often also want to know whether it is too high or too low. It is easy enough to add a comparison function alongside and compare the original value to the upper limit, but that's another primitive and 2 more wire branches. Since comparison is one of the primary purposes of In Range and Coerce, why shouldn't it be built in?

 

The use case that made me think of this, which has come up in my code a few times over the years, is any kind of thermostat-style control, particularly one with hysteresis built into it. If a temperature is within range (True), you would often do nothing. If it is lower than the lower limit, you'd want to turn on a heater. If it is higher than the upper limit, you'd turn the heater off. (Or the opposite if you are dealing with a chiller.)

 

Request: add an additional output to In Range and Coerce that tells whether the out-of-range condition is higher than the upper limit or lower than the lower limit.

 

(That does leave the question of what the value of this extra output should be when the input value is within range. Perhaps the output should not be a Boolean but a numeric value of 1, 0, and -1, or perhaps an enum.)
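
A text-language sketch of the proposed three-state output, using the -1/0/+1 numeric variant (the names are made up):

```python
# Hypothetical tri-state range check mirroring the proposed output:
# -1 = below lower limit, 0 = in range, +1 = above upper limit.
def in_range_and_coerce(value, lower, upper):
    if value < lower:
        return lower, -1   # coerced up
    if value > upper:
        return upper, +1   # coerced down
    return value, 0        # in range

# Thermostat-with-hysteresis example from the post:
coerced, state = in_range_and_coerce(17.5, 18.0, 22.0)
if state == -1:
    print("turn heater on")
elif state == +1:
    print("turn heater off")
# state == 0: do nothing
```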

It would be quite helpful if LabVIEW automatically grew a function downward when you bring a new wire to Build Array / Concatenate Strings / Build Cluster / Interleave Arrays / Build Matrix / Compound Arithmetic, allowing the programmer to make a new connection when the function does not currently have room for it.

 

Even better would be if it grew/inserted the input where the wire approaches the function, below an existing connection, similar to 'Add Input'.

new.PNG

When operating graphs, it is easy to change the min/max values of an axis by clicking on them, typing in the new value, and hitting Enter. However, if autoscale is enabled, the newly entered setting will not be applied. So the user first has to manually disable autoscale and then change the scale of the axis. It would be nice to automatically disable autoscale when the user manually enters a new min/max value, thus saving a few clicks.

 

Thanks.

Often, I have wires already on a block diagram and I want to add a new structure (case, flat sequence, etc.) that uses those wires inside the structure or, in some cases, passes them completely through it. For example, consider needing to enforce data flow for a node that has no error input and output, such as the Wait (ms) function. To enforce data flow when using this function, I might place it in a flat sequence with a single frame and pass an error wire through it, as shown:

Ryan_Wright__1-1708013874803.png

If the wire already exists and you try to place a new structure over the top of it (using the same example referenced above), you get the following behavior:

Ryan_Wright__2-1708013992633.png

It would be really nice if LabVIEW automatically passed the wire through the structure, or at least wired it up to the left border, as shown in the following pictures:

Ryan_Wright__3-1708014077542.png

Ryan_Wright__4-1708014170821.png

Whether the automatic wiring should pass the wire completely through the new structure or only wire it to the left border could be evaluated case by case for each structure (For Loop, While Loop, Case Structure, Event Structure, Flat Sequence, Diagram Disable Structure, Conditional Disable Structure, etc.).

I don't know why LabVIEW does this, but when you copy and paste (with Ctrl-C and Ctrl-V, as opposed to Ctrl-drag) certain items on your block diagram, they do not act as most people would expect. I think copy and paste should make an exact copy of what you are copying instead of changing its functionality/behavior. Examples:

 

Local Variable:  

 

variable copy.PNG

 

If you copy a local variable, LabVIEW creates a new control and makes a local variable of that. I really just want a copy of the local variable that's already there. If I wanted a new control, I would have made one, no?

 

Property Node:

 

property node copy.PNG

 

Copying an implicit property/invoke node changes it to an explicit node. Why wouldn't it just make another copy of the implicit node I already had?

 

I'm sure there are other examples of this behavior that I can't think of right now, so feel free to add them in the comments. If you have a good reason why this behavior exists, please let me know, as I'd be happy to be corrected...

 

Hi

A recent discussion on the forum about why not everybody uses units explains this much better than I can:

http://forums.ni.com/t5/LabVIEW/Why-do-very-few-programmers-use-LabVIEW-unit-labels/m-p/1802644#M621423

 

But for faster readers: several bugs hinder the universal use of units in LabVIEW.

It is a wonderful system, but it should be completed (at the very least, the square function should support units), and the other issues raised in that thread, like the identical look of the expression node and the unit label, formula nodes not working with units, etc., should at least be taken seriously.

 

I like the system, but for now it is hindered by too many bugs.

There should be a Browse button on the right-click menu for front panel items, as there is on the block diagram right-click menu.

Quiztus2_0-1707817399604.png

Make the search in Browse traverse the displayed property tree instead of the type specialization.

Otherwise you won't find Increment when searching the graph's properties:

Quiztus2_4-1707817683201.png

 

Just display the hierarchy with the results, like:
X Scale::Range::Increment

Y Scale::Range::Increment
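
A sketch of the requested search behavior, walking the displayed property tree recursively and reporting full paths (the data structure is illustrative, not the real property hierarchy):

```python
# Hypothetical property-tree search that returns full display paths,
# e.g. "X Scale::Range::Increment", instead of searching only the
# flat type specialization.
def search_tree(tree: dict, term: str, path=()) -> list:
    hits = []
    for name, children in tree.items():
        new_path = path + (name,)
        if term.lower() in name.lower():
            hits.append("::".join(new_path))
        hits.extend(search_tree(children, term, new_path))
    return hits

graph_properties = {
    "X Scale": {"Range": {"Increment": {}, "Minimum": {}, "Maximum": {}}},
    "Y Scale": {"Range": {"Increment": {}, "Minimum": {}, "Maximum": {}}},
}

print(search_tree(graph_properties, "Increment"))
# ['X Scale::Range::Increment', 'Y Scale::Range::Increment']
```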