LabVIEW Idea Exchange


This is a simple request.

 

Notice that these two property nodes look exactly alike, but if you right-click each and select Name Format -> Long Names, you can see that they are actually different. It took me a while to figure this out.

2 Property Nodes.png

 

Here is an example demonstrating the difference:

Running Example.png

 

Please change the short names so that they don't look exactly the same.

When you Or two numeric values, LabVIEW performs a bitwise Or. The same should happen when using Or Array Elements, but that function is not supported for numeric arrays. I see no reason why it shouldn't work just the same.

 

test.PNG
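For reference, a bitwise-Or reduction over an integer array is a one-liner in text languages. Here is a minimal Python sketch of what Or Array Elements could do for numeric arrays (the function name is mine, not LabVIEW's):

```python
from functools import reduce
from operator import or_

def or_array_elements(values):
    """Bitwise-OR all elements of an integer array into one scalar,
    mirroring what Or Array Elements could do for numeric arrays."""
    return reduce(or_, values, 0)

print(or_array_elements([0b0001, 0b0100, 0b1000]))  # prints 13
```

An empty array would naturally yield 0, the identity element for bitwise Or, just as Or Array Elements returns FALSE for an empty boolean array.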

We've had a few recent discussions relating to labels of block diagram objects. I don't think the following came up yet, but I think it's one area where improvements are really needed.

 

If we create a property node or invoke node, it gets a label identical to that of the linked object. However, this label can be changed to anything we want, possibly leading to code that is very confusing! For example, if we had two terminals labeled A and B, we could label the property node linked to A with "B" and vice versa, completely obfuscating the code!

 

(A beginner could also incorrectly assume that changing the label is the correct way to change the association to a different terminal.)

 

These nodes should really have the label of the linked terminal hardwired to the corresponding terminal name, because anything else simply would not make a lot of sense.

 

My graphic is not as nice as Jack's here, but I think the idea is clear. It would certainly need a few tweaks from the art department. 😉

 

 

We might need an option to hide the label. Oversized labels should be truncated for display, only showing the full text when hovering.

Important disclaimer: I am sorry, this is not my original idea, because it is (together with many other nice features!) already implemented in the LabVIEW UI builder. Credits go to the relevant NI developers.

 

This idea tries to gauge how much interest there is in such a feature and to show NI that we want it implemented in plain old LabVIEW as well. Personally, I really like it! 🙂

 

(It is also somewhat related to this idea (but probably not a true duplicate); my comment there already expands on the idea!)

 

The Unused Items Tray (see also e.g. Christina's Blog entry) is an area on the diagram that contains terminals or FP objects that have not yet been manually placed or are currently only "used" on the BD "xor" FP, i.e. not on both.

 

The current problem in LabVIEW is that if you place a new control on a complicated VI, its terminal ends up in some random place, typically piled, well camouflaged, on top of other diagram structures and possibly a scroll away from where it is actually needed. Similarly, if we create a control or indicator from a terminal on the diagram, the control typically lands outside the carefully arranged visible area, and there is no way to grab it without first messing up the FP alignment by either scrolling or enlarging the window.

 

In all these cases, it would be nice if the objects on the "other" window would land in a predictable reserved floating location for quick and easy manual placement where they belong. During development, there are probably a few terminals that are not yet used or connected, and we should be able to place them back into the unused items tray of the diagram so they never scroll out of view or get misplaced.

 

On the front panel, we might have some spare controls "parked", and they would not show in run mode at all. This could even be abused for "local variable zombies" (=hidden controls with the sole purpose of making a local variable) along these lines, which might be OK (=better than hidden controls!). Front panel items in this tray would look like their palette entries. We don't want e.g. full-sized graphs in there!

 

Now, how should all this look? In order not to cover valuable diagram space, it should be something that collapses to near nothing when not in use and only expands to show the current contents when hovering or clicking in a special area. Of course, there should also be an option to pin it open. When minimized, it should look different when "not empty" vs. "empty".

 

I often create maps in a for loop, as shown below. However, the "Insert Into Map" value input doesn't adapt to the value type, leading to extra hurdles to create the correct map constant to feed into the for loop/VI.

_carl_3-1596837062665.png

 

It'd be cool if I didn't have to do the extra work to create this map constant.  Ideally I'd be able to wire any value type into the "Insert into Map" VI, and then just right-click on the output terminal and select "create constant".

 

(It appears that you can now drag and drop a value type into a map constant, which does make this easier; previously I would drop a "Build Map" node, create the constant from its output, and then delete the node. Either way, it's still extra steps.)

 

I would like to be able to create BD array constants from the predefined string and numeric constants.

 

Presently you cannot create an array constant by dropping a predefined constant into an empty array box on the BD. Arrays of string constants such as Carriage Return, Line Feed, Tab, and Space need to be created programmatically using Build Array, or manually using '\' codes or hex codes. Arrays created in such a manner lack the graphical indication of what they contain. Some of the constants (Space) are actually VIs, which may complicate things for NI.

 

Similarly, the Math and Science constants like pi and e should be allowed to be dragged into an array of DBL.

 

Lynn

Arrays from constants.png

 

 

Currently, enums cannot be included in a cluster that you want to convert to JSON. It would be great if this could be added.

 

The enum value could either be converted to its internal representation (e.g. a u16), or saved as a string (making it easier to read the JSON and to parse it outside LabVIEW).

 

The unflatten function should be able to convert the u16 back to an enum easily, or take the string and search through the items of the enum datatype that one wires to the VI. If there is no match, the VI can issue an error.
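To illustrate the two proposed representations, here is a hedged Python sketch; the `Mode` enum and the function names are hypothetical stand-ins for a LabVIEW enum and the flatten/unflatten VIs:

```python
import json
from enum import Enum

class Mode(Enum):  # hypothetical stand-in for a LabVIEW enum
    IDLE = 0
    RUN = 1

def flatten_enum(value, as_string):
    # Serialize either the item name (readable outside LabVIEW)
    # or the numeric internal representation (like a u16).
    return json.dumps(value.name if as_string else value.value)

def unflatten_enum(text, enum_type):
    # Accept either form; a failed lookup raises, just as the VI
    # would issue an error when there is no match.
    raw = json.loads(text)
    return enum_type(raw) if isinstance(raw, int) else enum_type[raw]
```

Either representation round-trips, so the reader of the JSON can be offered whichever form suits them.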

 

There is a plethora of timestamp formats used by various operating systems and applications; the IEEE 1588 time protocol itself lists several. Windows differs from the various flavors of Linux, and Excel is very different from almost all of them. Then there are the details of daylight saving time, leap years, etc. LabVIEW contains all the tools to convert from one of these formats to another, but getting it right can be difficult. I propose a simple primitive to do this conversion. It would need to be polymorphic to handle the different data types that timestamps can take. It should only handle numeric data types, such as the numeric Excel timestamp (a double) or a Linux timestamp (an integer); text-based timestamps are already handled fairly well. Inputs would be timestamp, input format type, output format type, and error. Outputs would be the resulting timestamp and error.
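As a concrete illustration of one such conversion pair, here is a small Python sketch converting between the Excel serial date (days since 1899-12-30, stored as a double) and the Unix timestamp (seconds since 1970-01-01). The proposed primitive would bundle many such pairs behind one polymorphic node:

```python
EXCEL_UNIX_OFFSET_DAYS = 25569  # days from 1899-12-30 to 1970-01-01
SECONDS_PER_DAY = 86400

def excel_to_unix(excel_days):
    """Excel serial date (days, double) -> Unix timestamp (seconds)."""
    return (excel_days - EXCEL_UNIX_OFFSET_DAYS) * SECONDS_PER_DAY

def unix_to_excel(unix_seconds):
    """Unix timestamp (seconds) -> Excel serial date (days)."""
    return unix_seconds / SECONDS_PER_DAY + EXCEL_UNIX_OFFSET_DAYS
```

Note that neither format carries a time zone, which is exactly the kind of detail (along with DST and leap handling) that the primitive would have to specify for each format.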

For modern application development we need better methods to detect whether our application has been called to open a file.

Currently the only help NI gives is based on command-line parameters, and we need to jump through hoops to react to a file being opened when the program is already running.

 

This is a major showstopper in creating professional applications.

 

LabVIEW 8.2 had a hidden event for this notification, which unfortunately doesn't work in later versions.

 

 

The ability to shrink cluster constants on the block diagram is awesome and works really well. However, when the constant is a typedef, it appears as a large icon rather than the small cluster I have come to love.

Make it so that the typedef glyph works properly with the shrunken cluster.

Better Cluster - Old Cluster

Best Cluster - cluster idea2.png

At my work we often have structures inside structures, several to many layers deep. We try to keep the innermost ones small, to keep things to a single screen overall if possible. But this causes a problem when you have to move an item into a structure that it currently doesn't fit in: you first have to somehow make room for the structure to get bigger, then you can drag the item inside and shrink the surroundings again (that last step can get really ugly when you have multiple layered structures or a lot going on around the outside).

So what if, instead of moving the item into the structure, you could move the structure around the item? If you could Ctrl+drag an edge, then instead of pushing everything around the structure out of the way as it grows, it would envelop the items you drag over (or spit them out as you shrink it). Same basic behavior as dragging an item into or out of a structure, but from the reference frame of the structure instead of the item (kind of like plotting the motion of the Earth from the Sun's perspective versus the motion of the Sun from the Earth's perspective; both are fundamentally wrong, but both are very useful for different tasks).

 

tshurtz_0-1586795430997.png     goes to tshurtz_1-1586795505912.png  or vice versa, where the only thing moved would have been the right side of the inner case structure while holding Ctrl (or Shift, or Alt, or some other equally suitable modifier key).

 

I am guessing this is not a simple request, but it would be very helpful to me on at least a daily basis.

Please include a milliseconds value in the Probe Watch window. This helps to find the execution time between two events that are only a few milliseconds apart. Thanks.

Allow tables, list boxes, etc. to lock the right-hand column. Currently, if you resize a table or list box (or any control/indicator with columns), you end up seeing more columns. It would be nice to be able to lock the right column so that no additional columns are displayed. Most tables have a defined set of columns that will be used, and it looks rather ugly to resize and end up with empty columns. The current alternative is to have code that does this programmatically, which is a bit cumbersome. Effectively, this allows the programmer to set a maximum number of columns.

One thing preventing our company from upgrading to newer LabVIEW versions is that the run-time engine has become too big. Even though I really like the features in later versions, I still have to save files back to 7.1 and then 7.0 to build the application installer. Under 7.0 the run-time engine is very small: the installer is less than 10 MB and I can easily email the program to customers. In version 8.0 the run-time engine is 65 MB, and in the latest version it is well above 100 MB.

 

What I would like to see is the Application Builder getting smarter: picking only what is needed and building a slim application.

One of the annoyances of designing a UI is that you never know for sure how it will look on another user's computer.

 

For instance, you design your UI with the default 13-point font size, and when a user whose default font size is set to, say, 16 (and with a different font type) opens your UI, everything is a mess (text runs off the screen, text overlays controls, ...).

 

In a built application (an exe), one can add the following lines to the executable's ini file, which fixes the problem by coercing the font type and size:

 

  • AppFont="Tahoma" 13
  • DialogFont="Tahoma" 13
  • SystemFont="Tahoma" 13
  • CurrentFont="Tahoma" 13

 

But what about reusable tools that are basically a source code distribution?

 

Currently, the simplest way to ensure this outcome is to write a reusable VI that recursively sets the font size and type of every label or string to whatever you designed your UI to use. This VI has to be run by the UI every time it starts.
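That workaround amounts to a recursive traversal of the front panel. A rough Python sketch of the idea follows; the `UIElement` class and its methods are hypothetical stand-ins for VI Server references and their property/invoke nodes:

```python
class UIElement:
    """Minimal hypothetical stand-in for a front-panel object."""
    def __init__(self, name, children=()):
        self.name = name
        self.font = None
        self._children = list(children)

    def set_font(self, font_name, size):
        self.font = (font_name, size)

    def children(self):
        return self._children

def coerce_fonts(element, font=("Tahoma", 13)):
    # Recursively force every element (labels, strings, containers)
    # back to the font the UI was designed with.
    element.set_font(*font)
    for child in element.children():
        coerce_fonts(child, font)
```

The proposed VI setting would make this traversal unnecessary by persisting the design-time fonts with the VI itself.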

 

What I propose is to add a global VI setting (probably somewhere in the VI properties) that persists whatever font settings were used to design the UI.

 

Persistent font settings.png

 

This setting should default to false.

 

The image above is one possible solution (the simplest one).

 

Another solution would be to explicitly define what every font style and size should be for every font type (something very similar to the explicit definitions one can use in an ini file, as mentioned above). In that case, there could be an entirely new font category in the VI properties.

 

Persistent font settings 2.png

 

PJM

There is a need to be able to make comments about a typedef/strict-typedef control (in the control editor) that are visible only in the control editor.

 

Control Editor Comment.png

 

For instance, you might want to point out to the person editing the control that he/she should not rename a particular cluster element or this will break the code (and you may want to use an arrow pointing to the element along with some text).

 

Currently, if you do that, the comment becomes part of the control and is visible in every front panel instance. Being able to see that type of comment on the FP brings absolutely nothing, and it should not be seen by the user of the control (it actually makes the typedef large and unwieldy).

 

Therefore, there should be a way to select a group of decorations and mark them as visible only in the control editor.

 

Note: Right now the only way to achieve something similar to what I describe is to put the comment in the control description, but this is not great because:

 

  1. People tend not to expect controls to have a description.
  2. It does not allow using arrows and such to point directly at a specific part of the control.

 

PJM

When a cluster contains references to different types of controls that share a common ancestor, the "Cluster to Array" primitive should be able to handle it instead of producing a broken wire.

 

Example_VI_BD.png

 

Indeed, since the lower snippet works, one can assume that it shouldn't be too hard to get the upper one working as well when possible.

I'd like the Destination Path in the Application Builder to have an option to be a symbolic path based on the project directory (the directory where the project file lives) rather than an absolute path. For instance, if the project path (in our case, a local Git repo) were "SS10 Src Repo" and there were a subdirectory called "Builds", it would be nice NOT to have to specify "C:\users\xxx\Desktop\SS10 Src Repo\Builds" as the destination path, but rather to check a box saying "symbolic path relative to Project Dir" and simply specify "Builds". As it stands, if the repo is moved or renamed, all those destination paths in each project file need to be edited.

 

One reason for doing this is to locate executables that are built from the project and called from project VIs via System Exec.vi. At runtime you need to find the executables (assuming they are localized and not on the PATH), and you can use the Application Directory file constant to figure out where your executables SHOULD be (relative to AppDir). But configuring their location in each project file using an absolute path is a b**ch... It would be great not to have to edit all the destination paths in each .lvproj every time we check out the repo on a new machine or as a different user...

LabVIEW currently provides TCP and UDP primitives. It would be nice if it had a Ping primitive as well.

Populating a tree control with items takes a very long time. I suggest improving the performance of the tree control. Many other applications have tree controls that populate quickly, so it should be possible in LabVIEW.

 

I know of three ways to populate a tree control. The first is to add items individually using the Add Item invoke method. This takes a very long time: adding 15,000 entries took over 180 seconds.

 

The second way is to use the "Add Multiple Items to End" invoke method. This took over 20 seconds for 15,000 entries.

 

The third way is to respond programmatically to the user expanding an item in the tree, populating only as necessary. I assume this is fast, but it seems like a lot of work to do every time a tree control that could hold many items is used. Maybe LabVIEW could improve performance by using this third approach internally for the programmer.
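That third approach is essentially lazy loading. Here is a small Python sketch of the idea (the class and callback names are mine, not LabVIEW's): children are generated only on first expansion, so startup cost stays near zero regardless of total tree size.

```python
class LazyTreeNode:
    """Tree node whose children are created only when first expanded."""
    def __init__(self, tag, fetch_children):
        self.tag = tag
        self._fetch = fetch_children   # callback: tag -> list of child tags
        self._children = None          # None means "not populated yet"

    def expand(self):
        # Populate on first expansion only; later calls return the cache.
        if self._children is None:
            self._children = [LazyTreeNode(t, self._fetch)
                              for t in self._fetch(self.tag)]
        return self._children
```

If LabVIEW did this internally, the programmer could hand over the full item list (or a callback) up front while the control deferred the actual item creation until each branch is opened.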

 

Currently I am hesitant to use a tree control because of its performance. LabVIEW is a great product, and making the tree control perform better would improve it even more.