LabVIEW Idea Exchange


When working on VIs that require large arrays or images for input, I often need to run a calling VI rather than the VI I'm currently working on, which involves a lot of tabbing between windows and clutter from having more open windows. I realize it is possible to set default values for controls, but I would rather avoid that as I often use optional connections to my SubVIs and therefore don't want to change default values.
In my opinion it would be useful to have the option to configure the run button to run a different VI than the currently open one, when working on VIs specifically developed to run as subVIs in the context of a higher-level caller.

When using VI scripting to create UI objects, please add the new "NXG Style" objects as an option

Add a right click option to navigate to the method in the project window when you right click on a send message VI. Bonus if it gave you an option to go to the base class method or any overriding methods. 

 

[Image: NavToMethod.jpg]

I often run into the issue that I have one event case that needs to execute for multiple controls. Most of the time I simply read out the terminals, and most of the time I get the "NewVal" from the terminal. Sometimes, however, that's not the case. I'm currently having the issue that I use a property node of a control to trigger the event, and every other time the terminal gives out the "OldVal".

 

I believe the proper way to solve this issue would be to go by the reference, but I find this blows up the code more than it should. Looking around in the community, I've found two other ideas suggesting different solutions for this issue:

 

The first has multiple boolean controls triggering the same event. SectorEffector's solution was to ask for a reference case structure, which might be a great idea, especially when the event is triggered by different data types. When I have multiple booleans triggering one event, I almost always build an array from the terminals and convert the array to an integer, so that a case structure or other code can deal with the value. But again, I'd have to fiddle the new value from the triggering control and the other control values together.
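The boolean-array-to-integer pattern described above can be sketched in a text language. This is a hypothetical Python analogue (not LabVIEW); the function name is invented for illustration:

```python
# Hypothetical sketch of the pattern described above: pack several
# boolean "controls" into one integer so a single case structure
# (here, a switch on an integer) can dispatch on the combined state.

def booleans_to_case_value(flags):
    """Build an integer from a list of booleans (element 0 = bit 0)."""
    value = 0
    for bit, flag in enumerate(flags):
        if flag:
            value |= 1 << bit
    return value

# Example: controls 0 and 2 are True -> case value 5 (binary 101).
print(booleans_to_case_value([True, False, True]))  # -> 5
```

The downside the post describes still applies: the value that actually triggered the event has to be fiddled together with the other controls' current values.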

 

The second actually mentions two ideas. The main idea of rcpacini is to have an enum indicating which control triggered the event. This is a similar approach to what SectorEffector suggested: both aim at using a case structure with a different case for each event source. The alternative idea of rcpacini was to have event data elements for each control. This is more similar to my idea, but I would like to have the values of all registered event sources clustered together.

 

Here is a side-by-side of the current solution and what I'd suggest:

[Images: New value by reference / New value cluster]

 

The order of the events would determine the order of the values in the cluster, so a way to arrange the events, such as move up and move down buttons, would be helpful:

[Image: Event source order up/down buttons]

Often I need to change a property of something in LabVIEW that seems to be available only through a property node. And often I don't want to do it at run time, just once during development, but I find it really hard to get references to some objects, and I want them in another VI that I can run real quick to set the property and then be done. If you have the option enabled that shows LabVIEW scripting property node options and invoke nodes, could the Create menu offer a "Create reference in a new VI" entry? Clicking it would give you a new VI with just a reference that you can then do whatever you need with. And please make this work for anything that has a property or method related to it. I have some tools that I wrote that can kind of do this, but I am sure it is not a good way of doing it (though it is the best I have got).

On the Plot Attribute Change event, the cluster structure should match the group of properties, such that we could pass it as-is and re-use it when we need to re-apply all properties. Inside the property node there should be a section Plot.Attributes.All Elements that would be identical to the cluster returned by the event.

 

Also worth mentioning: Plot.Visible is not part of the cluster, but the event is triggered when the user toggles the plot visibility. Strangely, Plot.Name can also be edited by the user, but that doesn't trigger the event.

 

I wonder if it's a missing feature or more like a design flaw.

 

[Image: JICR_0-1581470137676.png]

 

[Image: enum.png]

...fast selection of enum value/member instead of opening enum-element or constant...

Flattened String To Variant is so close to letting us define the layout of arbitrary external data. However, LabVIEW's insistence on prepending all strings and arrays with their lengths in the data string means that, when working with binary data from non-LabVIEW sources, the array or string lengths might have to be calculated or are somewhere not immediately adjacent to the data. I understand it's far too late to modify the base Unflatten From String or Read from Binary File as requested in this idea, but this feature would be incredibly useful for reading binary data directly into clusters without requiring either multiple copies of very large (100,000+ element) arrays floating around, or incredibly tedious and complex sequences of Read from Binary File nodes.
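The situation described above can be sketched in a text language. This hypothetical Python example (using the standard `struct` module, not LabVIEW) parses a binary layout where the element count lives in a header rather than being prepended directly to the array data, which is exactly the kind of externally defined layout the post is about; the field names are invented for illustration:

```python
import struct

# Hypothetical external layout: a header (uint16 id, uint32 count),
# followed by `count` float64 samples. The count is NOT adjacent to
# the samples in LabVIEW's length-prefixed sense; the reader must
# take it from the header and size the read accordingly.
header_fmt = ">HI"  # big-endian: uint16, uint32
raw = struct.pack(">HI3d", 7, 3, 1.0, 2.0, 3.0)  # simulated file data

ident, count = struct.unpack_from(header_fmt, raw, 0)
offset = struct.calcsize(header_fmt)
samples = struct.unpack_from(f">{count}d", raw, offset)

# "Cluster" analogue: one record holding the parsed fields.
cluster = {"id": ident, "samples": list(samples)}
print(cluster)  # {'id': 7, 'samples': [1.0, 2.0, 3.0]}
```

The requested feature would let the cluster layout itself drive this kind of parse, instead of hand-written offset arithmetic or chains of Read from Binary File nodes.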

In OOP, interfaces define a contract. When a method/function expects a type of interface as an argument, any type implementing this interface can be passed. A well known and much used concept. I imagine something like this could be implemented for clusters in LabVIEW. Imagine if you could have a subVI define a cluster/cluster array, mark it as an interface and be able to wire any cluster/cluster array that adheres to this interface (adhering to the interface would basically mean that an unbundle by name node accessing any/all members of the interface would also work on any “derived” cluster). Both the cluster on the front panel and the wire should clearly indicate that this is an interface, which would mean that the cluster/wire actually contains more members, but you only have access to the ones defined by the interface. I definitely see some use cases for this. Sound useful/feasible?
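In text-based languages this corresponds to structural typing. A hypothetical Python sketch (using `typing.Protocol`; not LabVIEW, and the class names are invented) of the proposed cluster interface: any record exposing the interface's members is accepted, and extra members stay invisible through the interface:

```python
from typing import Protocol

# The "interface cluster": only x and y are part of the contract.
class HasPosition(Protocol):
    x: float
    y: float

# A "derived cluster" with an extra member the interface doesn't see.
class Particle:
    def __init__(self):
        self.x, self.y = 1.0, 2.0
        self.mass = 5.0  # invisible through the HasPosition interface

# Analogue of a subVI that accepts anything adhering to the interface;
# it can only "unbundle" the members the interface defines.
def distance_from_origin(p: HasPosition) -> float:
    return (p.x ** 2 + p.y ** 2) ** 0.5

print(distance_from_origin(Particle()))  # 2.23606797749979
```

The proposed LabVIEW feature would add the same contract idea to clusters and cluster wires, with the front panel and wire visually indicating that the data may carry more members than the interface exposes.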

This subject has been touched on briefly, but only in a post that started out just suggesting an indication of whether or not a case structure is case insensitive. This has been implemented since then, of course.

 

What I'd like to bring to the surface again is the case sensitivity. The mention of this in the previous post was suggesting that case insensitivity be made the default. I don't agree with this - especially with the repercussions it could have on converting older code. 

 

However, I think it would certainly be useful and acceptable to add a checkbox to the "Block Diagram >> General" options for "Case structures case insensitive by default" (see screenshot). This would not break previous code, since turning it on would only affect case structures created after turning it on. Additionally, the "default default" would still be case sensitive, but programmers who do use string-based case selection could leverage this to their advantage (and not have to deal with forgetting to turn it on via the right-click option).

Can we have an option of loading only user-specified rows from a delimited spreadsheet instead of all rows, perhaps allowing the user to specify which rows and how many to read? This is potentially useful if the user wants to read selected or the latest few rows (typically the last few rows, due to write-append) without having to load an unnecessarily large amount of data into memory.

 

Implementation-wise, this could use optional row index and length input terminals, defaulting to index: 0 and length: -1 to yield all rows by default at the rows (note: terminal name changed) output terminal. index: -1 would specify reading from the end of the file, with a non-zero length: L giving how many rows from the end of the file (-L for reversed order, last row on top). If index is -1, the data read would replace elements within a fixed-size array of L elements until EOF returns true. For example, index: -1 and length: 5 would yield the last 5 rows from the file.
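The "fixed-size array of L elements, overwritten until EOF" mechanism described above can be sketched in Python (hypothetical, not LabVIEW; the function name and tab delimiter are illustrative assumptions):

```python
import csv
import io
from collections import deque

def read_last_rows(text, length):
    """Return the last `length` rows of delimited text.

    Rows are streamed into a fixed-size buffer (deque with maxlen),
    so older rows are overwritten as reading proceeds, mirroring the
    proposed index: -1 behaviour; the whole file is never held as
    the result.
    """
    buf = deque(maxlen=length)  # fixed-size array of L elements
    for row in csv.reader(io.StringIO(text), delimiter="\t"):
        buf.append(row)  # overwrites the oldest row once full
    return list(buf)

data = "a\t1\nb\t2\nc\t3\nd\t4\ne\t5\n"
print(read_last_rows(data, 2))  # [['d', '4'], ['e', '5']]
```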

 

[Image: read delimited spreadsheet.png]

Thanks for reading, and have a great day.

There are already several Ideas on the exchange regarding custom error-code files. A subset is listed below:

Project error code files (request to locate error file near lvproj file, or via Project properties)

Error txt file (allow specifying a path when calling General/Simple Error Handler, store the custom file somewhere not in <National Instruments>/{Shared,<lv-version>}/...)

Make custom error code files part of a project (allow project-level error codes, not stored in user.lib)

Error ring without file (request for an error-ring typedef that could be stored in the project's source code collection)

 

My suggestion here is most like the last - I'd like to suggest a file type like a .ctl (or perhaps via typedef, if technically better/more implementable) that can be added to a project, library or class and contains a list of errors.

 

Ideally, these errors should only be possible to throw from inside the library, but must of course be visible from outside of the library.

Implementing these at the project level restricts the possibility of use with PPLs, reusable libraries, etc, because the project file is a dev-only construct (as far as I understand), unlike classes and libraries (which are part of what you distribute in whatever form you choose).

 

The ability to store these files in a library or class also allows the possibility of scoping the error code, such that the handler can more easily identify the source of the error.

 

LabVIEW already categorizes its errors into broad sections (LabVIEW, MAX, NI Platform Services, etc.), and so perhaps these could be used as the scope of the existing, built-in error codes, although I'm not sure if this would require significant work or not... I suspect many things seem easier when you're not the person who will implement the changes!

When using refnum arrays with auto-indexed tunnels in loops, I often bump into the absence of the submenus "Property for <ClassName> Class" and "Method for <ClassName> Class" in the "Create" Menu.

The submenus appear when right-clicking normal tunnels, shift registers, wires, subVI terminals, Constants, Controls, everywhere, except for auto-indexed tunnels.

[Image: image.png]

This idea is a revival of an old post from Darin.K: https://forums.ni.com/t5/LabVIEW-Idea-Exchange/Create-Property-or-Invoke-Nodes-from-Auto-indexed-tunnels/idi-p/1143340

which was declined in 2015 for being too low priority. But since it is still annoying now (e.g. with LV 2018), especially when using VI Server (dynamic VI calls, UI element manipulation, VI Scripting...), I think NI should reconsider it.

On all "Find Items" windows (Find Project Items, Find Items with No Callers, Find Missing Items), it would be nice if they were all sortable by file type.

 

And I think I saw this suggestion on the forums somewhere before, but it would also be nice if the windows didn't disappear every time you clicked an item to go to it.

Hello forum

 

Wouldn't it be nice if we could add Windows 10 IoT Enterprise PCs as embedded targets, where we can create a VI executable and set it as a startup program? Once deployed, the target would be automatically configured to launch the startup programs with Embedded Enabling Features (EEF), Enhanced Write Filter (EWF), Hibernate Once, Resume Many (HORM), and File Based Write Filter (FBWF) on a different volume, as presented in the NI Week TS2361 & TS8562 slides.

 

Thanks

Hello forum

 

What do you think about a "feature" where developers can enforce version control on applications deployed to end-user systems? It may sound like a feature of a certain OS, but it may be beneficial in a way or two...

 

The most obvious use is version enforcement. Some members may not like this, but often newer versions of an application are developed to address issues in the previous version, and it would be pointless if the end user kept sticking to the old version for some feature they may have grown to like.

 

It could also be extended to subscription-based or trial-version deployments, a friendlier way for developers to present their systems to customers for in-situ system trials.

 

One way to deploy this feature is to minimally trace the execution instances of the RTE in deployed systems, across newer and older versions, and log them for follow-up actions that could range from friendly pop-up reminders to update, to application execution prevention.

 

What do you think?

Make Trim Whitespace.vi remove non-breaking space characters.  This could be accomplished by adding "\A0" to the "regular expression" input.

 

Also, make "White Space?" return True for non-breaking space characters.
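The requested behaviour can be sketched in a text language. This is a hypothetical Python analogue (not the LabVIEW VI itself; the function name is invented) of a trim that treats the non-breaking space (0xA0) as whitespace:

```python
import re

def trim_whitespace(s):
    """Trim leading/trailing whitespace INCLUDING non-breaking spaces.

    The character class explicitly adds \xA0 alongside \s, mirroring
    the idea of adding "\A0" to Trim Whitespace.vi's regular
    expression input.
    """
    return re.sub(r"^[\s\xA0]+|[\s\xA0]+$", "", s)

print(repr(trim_whitespace("\xA0\t hello \xA0")))  # 'hello'
```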

 

If a DLL is configured in a Call Library Function Node and is either missing or defective, the entire program cannot be executed. In most situations, this behavior is acceptable.

However, it can also be quite frustrating, especially if the part of the program that uses the DLL is not called. Additionally, it would be helpful for debugging purposes if the program could still execute even when a DLL is unavailable.

Therefore, I propose that the configuration dialog of the Call Library Function Node includes an additional checkbox labeled "Break execution if DLL cannot be loaded." If this checkbox is checked (default setting), the behavior will remain as it currently is. If the checkbox is unchecked, the program should run until the affected node is executed. At that point, the node's error output should indicate either "DLL not found" or "DLL found but cannot be loaded," and execution should proceed beyond the node. In this scenario, it will be the programmer's responsibility to handle the error appropriately.
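The proposed "defer the failure to call time" behaviour has a rough analogue in text languages. This is a hypothetical Python sketch using `ctypes` (not the Call Library Function Node; the function name and return convention are invented): the load is attempted when the node executes, and a descriptive error string is returned for the programmer to handle instead of breaking the whole program up front:

```python
import ctypes

def call_dll_function(dll_path, func_name, *args):
    """Attempt to load a shared library and call one of its functions.

    Returns (result, error_message). A missing or unloadable library
    produces an error string rather than preventing execution, which
    is the behaviour the unchecked checkbox would give.
    """
    try:
        lib = ctypes.CDLL(dll_path)
    except OSError as exc:
        return None, f"DLL not found or cannot be loaded: {exc}"
    func = getattr(lib, func_name, None)
    if func is None:
        return None, f"Function {func_name!r} not exported by {dll_path}"
    return func(*args), ""

# The library does not exist, yet execution proceeds past the call.
result, error = call_dll_function("no_such_library.so", "do_work")
print(error)  # non-empty error message; program keeps running
```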

As an extension of this idea, I could also envision a new entry in the LabVIEW.ini / application.ini file to assist with debugging. If this entry is not present, the explicit configuration of the node will be used. If it is available, it will override the explicit configuration with either true or false.

Please also see this related idea: https://forums.ni.com/t5/LabVIEW-Idea-Exchange/get-DLL-path-from-the-call-library-function-node/idi-p/4433632

I would like the ability to perform a one-page or bounded cleanup on some of my diagrams. I understand such a cleanup would take longer and might not be possible in all scenarios, but a space-bounded cleanup would make certain block diagrams a lot easier to work with.

It would be useful if LabVIEW offered the option to use an Error Collector Node inside structures.

 

The Error Collector Node would collect or capture any and all unhandled errors that occurred inside the structure that the node is part of. If one or more unhandled errors occurred, the Error Collector Node would output the first error that occurred (in chronological order). If no errors occurred, the node would output "No Error".
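The "first error in chronological order" semantics can be sketched in a text language. This is a hypothetical Python analogue (not LabVIEW; the class and the string "errors" are invented for illustration), with parallel branches standing in for the structure's parallel code paths:

```python
import threading
import time

class ErrorCollector:
    """Keep only the first error reported, in chronological order."""

    def __init__(self):
        self._lock = threading.Lock()
        self.first_error = None  # None plays the role of "No Error"

    def report(self, error):
        with self._lock:
            if self.first_error is None:  # later errors are ignored
                self.first_error = error

collector = ErrorCollector()

def branch(delay, error):
    time.sleep(delay)  # simulate this branch's execution time
    collector.report(error)

# Two parallel branches: "VI A" errors first (shorter delay).
threads = [
    threading.Thread(target=branch, args=(0.05, "error from VI B")),
    threading.Thread(target=branch, args=(0.01, "error from VI A")),
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(collector.first_error)  # error from VI A -- it occurred first
```

As the Notes below the screenshots point out, which error wins can vary from run to run when delays are close, which is exactly why the collector is only "approximately equivalent" to Merge Errors.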

 

The following annotated screenshot shows how the Error Collector Node would help reduce the number of error wire segments and of Merge Errors nodes.

[Image: Combined 1.png]

The following screenshot shows a second example. Notice that using the Error Collector Node would result in a significant reduction of block diagram items: 10 fewer wire segments and 1 fewer Merge Error Node. The reduction in block diagram items could be even larger if we consider that the other cases of the case structure (potentially dozens of cases) also benefit from the same Error Collector Node being placed on the border of the case structure.

[Image: Combined 2.png]

The two screenshots above are examples created specifically for the purpose of posting this idea. The following screenshot is a real-world example (taken from production code) of a VI that could benefit from the Error Collector Node, which would remove the need for numerous error wire segments and Merge Error nodes.

[Image: 1.png]

Notes

  • The Error Collector Node was prototyped as a hexagonal output tunnel in the screenshots above. I would, of course, be happy if a different glyph or shape is chosen to represent this new type of tunnel.
  • The Error Collector Node would essentially behave similarly to a localised Automatic Error Handling functionality. It would collect or capture any unhandled errors inside its area of responsibility (the structure), and would convert those errors back into "manually-handled" errors - i.e. errors that are passed downstream via an error wire.
  • The Error Collector Node would be useful especially in situations where many code branches execute in parallel as it eliminates the need for lots of Merge Error nodes.
  • The first two screenshots mention the words "approximately equivalent to". "Approximately" because, if multiple errors occur inside the structure, the Error Collector Node does not guarantee which of those errors is output, in the way that the Merge Errors node does. For example, in the first screenshot, if all three VIs (VI A, VI B, and VI C) experience an error, there is no guarantee as to which of those errors the Error Collector Node would output. The node would output the first error that occurred (in chronological order), so it would depend on which VI finished execution first. This could change from execution to execution, and from machine to machine. Whereas the right-hand-side version, which uses Merge Errors, would always output the error generated by VI A.
    • This would usually not be an issue in practice. If more determinism is needed, the programmer could, of course, fall back on manually wiring the errors to define an exact error behaviour.
  • It should be possible to add a maximum of one Error Collector Node to each structure: Flat Sequence Structures, Case Structures, For Loops, While Loops, etc.
  • It would be useful if the Error Collector Node could be used outside of structures (directly on the outermost region of a block diagram). Again, enforcing a maximum of one Error Collector Node per outer block diagram would make sense. The Error Collector Node would execute after all other block diagram nodes and structures (would be the last thing to execute). The output of the Error Collector node could then be fed directly to an "Error Out" block diagram terminal. This would remove the need to wire most error wires inside the VI, while ensuring that no error goes uncaptured.
  • If the Error Collector Node existed only as a structure output tunnel, and not as a stand-alone node outside of structures, then a better name for it might be the Error Collector Output Tunnel.
  • The behaviour of the Error Collector Node would be unaffected by Automatic Error Handling being enabled or disabled in that VI.
  • Using Error Collector Nodes would benefit programmers in the following ways:
    • Would reduce the amount of "click-work" that programmers currently need to do (the number of wire segments and Merge Error nodes that need to be created), while ensuring that all unhandled errors are captured.
    • Would reduce the amount of block diagram "clutter". This "clutter" is apparent in the third screenshot, which shows many criss-crossing error and DAQmx wires.
    • Would decrease the size on disk of VIs thanks to fewer block diagram items needing to be represented in the VI file. This would help towards making git repositories a little bit smaller, and loading VIs into memory a little bit quicker.
  • Informally, using Error Collector Nodes would sit in-between the strictness of manually wiring all error outputs, and the looseness of relying solely on Automatic Error Handling. The error handling gold-standard would remain manually wiring all error outputs, but using Error Collector Nodes might be "good enough" in many situations, if used judiciously.