LabVIEW Idea Exchange

 I believe the number and age of "New" ideas on this exchange renders the word meaningless. I just Kudoed an idea that was 13 years old and marked as "Status: New". This idea would be in middle school; that doesn't sound particularly new. Inaction on an idea after some amount of time should automatically trigger some other status. 

Class data is painful, if not impossible, to properly probe when debugging LabVIEW code.

 

It doesn't need to be this way. Every other programming language I've worked with allows viewing this type of data when debugging.

 

Currently, by default, you only see probe data based on the wire's edit-time class definition, not its actual runtime class instance. It would literally save me weeks a year if I could simply place a probe on a class wire and view the internal class data of the runtime class, including all levels of inheritance.

 

I realize it's not a small ask, since the probe GUI would need to be dynamic (data types and GUI elements, not just values, would need to be updated when the probe is hit).

Every now and then, I stumble upon the following error when trying to use the "Match Regular Expression" node in an inlined/malleable VI:

 

raphschru_0-1727975484834.png

 

If I understand this discussion correctly, this is because it is an XNode, which is currently (or perhaps permanently) not supported in inlined VIs.

But further in the discussion, it is said that an exception was added in the compiler to allow inlining the "Error Ring" XNode.

 

My idea is to consider adding the same exception for the "Match Regular Expression" XNode, or making any modification that would result in this node being inlinable.

 

Also, there is nothing in the generated code of the "Match Regular Expression" XNode that prevents inlining!

All it really does is use a CLFN to call the function "MatchRegExpEfficient" from the LabVIEW library.

 

Regards,

Raphaël.

This is an idea I've been working on for a while. It's time to let others start evaluating it. 🙂

000.png

001.png

 ^ I included the above for Dmitry Sagatelyan and similar folks who have asked me for these things over the years so they know the mindset to use when evaluating the idea. But it's written up below for LabVIEW users who only know LabVIEW as it stands today (Q3 2024).

002.png

003.png

004.png

005.png

006.png

007.png

 

Feedback and questions welcome. 

When typing a path in the Terminal (Linux, Windows or macOS), hitting the Tab key auto-completes the path. This is extremely useful.

 

I wish the Path control and - let's dream - the path constant would behave the same.

 

It's probably only applicable to absolute path values.

 

I've made a QControl that does that. It's a bit basic, but it does help; I might post it on GitHub if there is interest. The gist of the logic is sketched below.
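
For illustration, the core of such Tab-completion logic might look like this in a textual language (a minimal Python sketch of the idea, not the actual QControl code; the function name is mine):

```python
# Shell-style completion: extend a partial absolute path with the longest
# unambiguous completion found on disk.
import os

def complete_path(partial: str) -> str:
    folder, prefix = os.path.split(partial)
    if not os.path.isdir(folder):
        return partial                       # nothing to complete against
    matches = [n for n in os.listdir(folder) if n.startswith(prefix)]
    if not matches:
        return partial                       # no candidates: leave as typed
    # Longest common prefix of all candidates, as a shell's Tab key does
    return os.path.join(folder, os.path.commonprefix(matches))

print(complete_path("/usr/lo"))              # e.g. "/usr/local" on many systems
```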

Some very useful properties for UI manipulation on graphs and charts do not exist. This makes it impossible to build very dynamic UIs with a variable number of plots, or with dynamic resizing in panels.

It is possible to set Legend.number of plots and the legend position. It is not possible to set:

the graph palette, scale legend or cursor legend position. This makes resizing graphs very limited.

Further, it is possible to programmatically create a custom legend. However, the same is not possible for cursors, because there is no property available to get the current cursor values. Please can we have cursor.values[] and cursor.legend.number of rows?

 

 

What is really nice about the database toolkit is that "database SELECT" variant data can be cast directly into a cluster (or an array of clusters) using the "database variant to data" function.

aartjan_0-1725964773106.png

However, it cannot cast MySQL enums to LabVIEW enums. By default, MySQL returns the enum string, not the underlying integer. As soon as the casting function encounters a string that should be cast to an enum, it fails to convert all following data.

My not-so-elegant workarounds (sketched below):

- cast the enums in SQL and use "database execute SQL" instead of "database SELECT"

- create a view with the enums converted to int.
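For illustration, the two workarounds might look like this in MySQL (table and column names are hypothetical; in MySQL, evaluating an ENUM in numeric context yields its 1-based index):

```sql
-- Workaround 1: cast the enum in the query and use "database execute SQL".
-- status + 0 yields the 1-based ENUM index, which lines up with a LabVIEW
-- enum that has an extra "undefined" item at value 0.
SELECT id, created_at, (status + 0) AS status
FROM measurements;

-- Workaround 2: hide the cast behind a view, so "database SELECT" still works.
CREATE VIEW measurements_for_lv AS
SELECT id, created_at, (status + 0) AS status
FROM measurements;
```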

 

"database INSERT" does work without hacks: you can use clusters with enums to insert data into MySQL tables. The only caveat there is that LV enums start at zero, whereas MySQL enums start at 1. I work around this by adding an "undefined" value in the Labview enum.

 

My suggestion is to support enums in the database toolkit. After all, in LabVIEW, it is not hard to convert a string into the matching enum.

 

It would be useful if LabVIEW offered the option to use an Error Collector Node inside structures.

 

The Error Collector Node would collect or capture any/all unhandled errors that occurred inside the structure that the node is part of. If one or more unhandled errors occurred, the Error Collector Node would output the first error that occurred (in chronological order). If no errors occurred, the node would output "No Error".

 

The following annotated screenshot shows how the Error Collector Node would help reduce the number of error wire segments and of Merge Errors nodes.

Combined 1.png

The following screenshot shows a second example. Notice that using the Error Collector Node would result in a significant reduction of block diagram items: 10 fewer wire segments and 1 fewer Merge Error Node. The reduction in block diagram items could be even larger if we consider that the other cases of the case structure (potentially dozens of cases) also benefit from the same Error Collector Node being placed on the border of the case structure.

Combined 2.png

The two screenshots above are examples created specifically for the purpose of posting this idea. The following screenshot is a real-world example (taken from production code) of a VI that could benefit from the Error Collector Node, which would remove the need for numerous error wire segments and Merge Error nodes.

1.png

Notes

  • The Error Collector Node was prototyped as a hexagonal output tunnel in the screenshots above. I would, of course, be happy if a different glyph or shape is chosen to represent this new type of tunnel.
  • Error Collector Node would essentially behave similarly to a localised Automatic Error Handling functionality. It would collect or capture any unhandled errors inside its area of responsibility (the structure), and would convert those errors back into "manually-handled" errors - i.e. errors that are passed downstream via an error wire.
  • The Error Collector Node would be useful especially in situations where many code branches execute in parallel as it eliminates the need for lots of Merge Error nodes.
  • The first two screenshots mention the words "approximately equivalent to". "Approximately" because, if multiple errors occur inside the structure, the Error Collector Node does not guarantee which of those errors is output, in the way that the Merge Errors node does. For example, in the first screenshot, if all three VIs (VI A, VI B, and VI C) experience an error, there is no guarantee as to which of those errors the Error Collector Node would output. The node would output the first error that occurred (in chronological order), so the result would depend on which VI finished execution first. This could change from execution to execution, and from machine to machine, whereas the right-hand-side version, which uses Merge Errors, would always output the error generated by VI A. (A textual sketch of this difference follows this list.)
    • This would usually not be an issue in practice. If more determinism is needed, the programmer could, of course, fall back on manually wiring the errors to define an exact error behaviour.
  • It should be possible to add a maximum of one Error Collector Node to each structure: Flat Sequence Structures, Case Structures, For Loops, While Loops, etc.
  • It would be useful if the Error Collector Node could be used outside of structures (directly on the outermost region of a block diagram). Again, enforcing a maximum of one Error Collector Node per outer block diagram would make sense. The Error Collector Node would execute after all other block diagram nodes and structures (would be the last thing to execute). The output of the Error Collector node could then be fed directly to an "Error Out" block diagram terminal. This would remove the need to wire most error wires inside the VI, while ensuring that no error goes uncaptured.
  • If the Error Collector Node exists only as a structure output tunnel, and not as a stand-alone node outside of structures, then a better name for it might be the Error Collector Output Tunnel.
  • The behaviour of the Error Collector Node would be unaffected by Automatic Error Handling being enabled or disabled in that VI.
  • Using Error Collector Nodes would benefit programmers in the following ways:
    • Would reduce the amount of "click-work" that programmers currently need to do (the number of wire segments and Merge Error nodes that need to be created), while ensuring that all unhandled errors are captured.
    • Would reduce the amount of block diagram "clutter". This "clutter" is apparent in the third screenshot, which shows many criss-crossing error and DAQmx wires.
    • Would decrease the size on disk of VIs thanks to fewer block diagram items needing to be represented in the VI file. This would help towards making git repositories a little bit smaller, and loading VIs into memory a little bit quicker.
  • Informally, using Error Collector Nodes would sit in-between the strictness of manually wiring all error outputs, and the looseness of relying solely on Automatic Error Handling. The error handling gold-standard would remain manually wiring all error outputs, but using Error Collector Nodes might be "good enough" in many situations, if used judiciously.
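
To make the "approximately equivalent" note concrete, here is a rough analogy using Python threads rather than LabVIEW (purely illustrative, since LabVIEW is graphical; the function names are mine):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def collector_semantics(tasks):
    """Error Collector Node: output the first error in chronological order."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(t) for t in tasks]
        for f in as_completed(futures):    # completion order varies per run
            if f.exception() is not None:
                return f.exception()       # first error to actually occur
    return None                            # "No Error"

def merge_errors_semantics(tasks):
    """Merge Errors node: output the first error in fixed (wiring) order."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(t) for t in tasks]  # submit order = wiring order
    for f in futures:                              # always inspected in order
        if f.exception() is not None:
            return f.exception()                   # "VI A" always wins ties
    return None
```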

Problem: Currently, wiring an error wire to a structure input tunnel, without continuing the wire inside the structure, clears the error present on the wire.

 

Happy case: When running the VI shown below, Automatic Error Handling correctly detects that the error out terminal of Error Cluster From Error Code.vi is unwired, and handles the error (displays the error as a dialogue window).

2 (edited).png

Unhappy case: Wiring the error wire from the error out terminal of Error Cluster From Error Code.vi to a structure input tunnel clears the error. Automatic Error Handling does not detect or handle the error.

3 (edited).png

In my opinion this was simply an unfortunate design decision (it can happen to all of us) back when it was made, decades ago. There is no logical argument to support this behaviour: the fact that the error wire is wired to an input tunnel does not mean that the error was handled. At best, when a programmer intentionally used this technique, it represents a non-self-documenting coding practice (why not use the self-documenting Clear Errors.vi?). At worst, it means clearing errors simply because the programmer forgot to wire the wire through the structure - clearing errors the programmer did not explicitly ask to clear. It means "sweeping errors under the carpet", and can result in overly "optimistic" applications (apps that seemingly execute without error when in fact unhandled errors are being generated).

 

Please note that even though the screenshot above shows a Flat Sequence Structure input tunnel, the behaviour applies to every structure (case structure, for loop, while loop, etc).

 

To summarise, the problem is that the screenshot above is functionally equivalent to explicitly using the Clear Errors.vi, as seen below.

4 (edited).png

Clear Errors.vi is of course the self-documenting, recommended method of clearing errors. It should also be the only method of clearing errors.

 

Ironically, Clear Errors.vi itself uses the "clear error by wiring it to input tunnel" technique inside its "0" case, as seen below.

5 (edited).png

To its credit, Clear Errors.vi uses a correct technique for clearing errors inside its other, default case.

6 (edited).png

Another example found "in the wild" of a VI using the "clear error by wiring it to input tunnel" technique. This VI ships with LabVIEW and is found at <LabVIEW installation folder>\vi.lib\Utility\EditLVProj\Identify VIs Among Project Items.vi.

1 (edited).png

Solution: Disable the "clear error by wiring it to input tunnel" behaviour. This would fix what is, in my opinion, an incorrect design decision. Unfortunately, fixing it now would mean that VIs which use the "clear error by wiring it to input tunnel" technique would start throwing unhandled errors when Automatic Error Handling is enabled. This is not ideal, but it might be worth accepting this short-term drawback for the long-term gain.

 

Moreover, it may be useful to introduce a Clear Errors node (primitive function). The Clear Errors.vi could then make use of the Clear Errors node inside both of its cases. Alternatively, the Clear Errors node could simply replace and supersede the Clear Errors.vi.

Automatic Error Handling (AEH) is a useful feature. It captures errors that were otherwise left unwired by the programmer (intentionally or accidentally). It represents a "safety net" that can make the programmer aware of errors that they may otherwise remain unaware of.

 

Problem: Currently AEH functionality is only available in the development environment. It is not available in built executables. Even when all VIs in a project have AEH enabled, once built into an EXE, all VIs behave as if AEH was disabled.

 

Solution: It should be possible to honour each VI's AEH setting in built EXEs too, not just in DevEnv. The EXE build specification could contain a setting named "Honour each VI's Automatic Error Handling setting in EXE". When ticked (enabled), any VIs for which AEH was enabled in the development environment will continue to benefit from AEH behaviour in the EXE. Any VIs for which AEH was disabled will continue to have it disabled in the EXE. This means that, from an error handling/error manipulation point of view, the application would behave identically when being run as an EXE as when being run in Development Environment. This is more consistent, and can be helpful.

 

The current behaviour (forcibly removing AEH in EXE) means that EXEs are prone to having errors that were not discovered during DevEnv testing being "swept under the carpet". In other words, currently EXEs are overly "optimistic" - they can make the programmer believe that everything is ok when in fact one or multiple unhandled errors are occurring, errors that would have been visible in DevEnv. This is particularly relevant to apps that run for long periods of time (e.g. life cycle testers) that may encounter errors that were simply unforeseen or untested in DevEnv (e.g. error after one month of continuous running due to running out of disk space when saving measurements log file to disk).

 

The screenshot below shows how the new setting might look in the Advanced page of the EXE build spec.

3 Screenshot 1 (edited).png

Notes

  • To be absolutely clear: I am not asking for AEH to be enabled by default in all EXEs. Nor am I asking for AEH to become enabled in all VIs when the new build spec setting is ticked - that would override the AEH setting of individual VIs, which is not what I am asking for.
  • The default value of the new setting should be False (unticked). When False, the built EXE would behave exactly as it does now - AEH would be disabled in all VIs. This would maintain the current behaviour as default.
  • The new setting would give the programmer more control - it would allow the programmer to decide whether they want AEH or not in their EXE. Currently AEH is taken away in EXE, even when we (professional LabVIEW programmers) might want it enabled.
  • I would be happy if the new setting was available only for desktop, non-real-time applications, and not on Real-Time targets.

Problem: Currently, the native nodes and VIs that can be used for error manipulation are located in the Dialog & User Interface palette. While manipulating errors can mean generating dialogues, and can influence the User Interface/User Experience, error manipulation is a broader and stand-alone topic.

 

Solution: Nodes and VIs that are relevant to error handling/error manipulation should be given their own subpalette inside the Programming palette. The new subpalette could be named "Error Handling", "Error Manipulation" or "Errors".

 

1 (annotated).png

2 (annotated).png

Problem: When developing or inheriting a large code base it is helpful to know which VI has Automatic Error Handling (AEH) enabled and which has it disabled. Currently, the quickest way to get this information is to bring up the VI Properties window (pressing Ctrl + I) and navigate to the Execution page. This is tedious when done on large numbers of VIs.

 

Solution: LabVIEW should display on the Block Diagram whether AEH is enabled or disabled. For example, a grey triangle located in the bottom-right corner of the block diagram window could indicate that AEH is disabled, and an "error green" triangle could indicate that AEH is enabled, as seen in the screenshots below. This display method is just a suggestion - professional UX designers may well find a better method. I would be happy with any indication method that can be seen at a glance on the block diagram window.

 

2 Screenshot (AEH off).png

3 Screenshot (AEH on).png

Idea expansion:

  • A single left click on the triangle (or whichever indicator is chosen) would toggle the setting to its other value. For example, a single left click on the grey triangle would switch AEH to enabled, and the triangle would become green.

TL;DR: the idea is: please make building as fast in LabVIEW 2023 Q3 and later as in older versions when the compiled object cache is clear or when file timestamps have changed.

 

LabVIEW 2023 Q3 includes a major change to the app builder: 

 

LabVIEW 2023 Q3 has improved cache behavior for packed project libraries and applications.

 

The first build will populate the cache, and then the subsequent builds will be much faster.

Great! It is pretty easy to confirm this and it works! Unless...

 

1. File timestamps change - such as when you clean a directory and clone project files in your CI job

  • Git does not make file timestamps match the commit date - when you clone fresh, all timestamps are set to the current time; when you change branches, every file that changed is reset to now (a stop-gap for this is sketched after this list)

2. The cache is cleared prior to the build - such as when you want a repeatable build unaffected by prior state so you set your CI jobs to clear it before each build

3. You are doing a "first build" on a new project - such as when you use a container or VM disk image to reset the entire environment to a clean state prior to your CI job

 

In all my testing, any of these conditions fairly consistently makes builds slower in LabVIEW 2023 Q3 than in earlier versions: with a clear cache, typically by ~60%; with new timestamps and a persisted cache, the slowdown depends on how much of the code has new timestamps.

 

In other words, I expect almost everyone using CI will find building slower in 2023 Q3+ than in earlier versions. So, the idea is: whatever optimizations were done, include an option to revert to the old behavior - build spec, config ini, editor option, whatever.

I'm proposing a new certification, one certification to rule them all, the Certified LabVIEW Gangster Group Developer.

 

Or simply a fully functioning CLA that requires 3 people to take the test.

Architect

Developer

Embedded Developer


It should function using an FPGA, or a simulated interface.

 

 

When wiring from inside a For Loop to outside it, if the user lands the wire on a scalar destination (not an array or cluster), automatically set the tunnel mode to Last Value to prevent unnecessary code cleanup.

Suppose you have a While Loop containing a case state machine with several cases, and later you decide to add a shift register. It's very time-consuming to wire through every case when you might only need to alter the contents of the shift register in a single case. So, when the user adds the shift register and connects the shift register wires in one case, auto-connect all the other cases internally.

The help page that is supposed to provide a starting point for developers on creating readable, high-quality code in LabVIEW seems to have been unchanged for quite some time: LabVIEW Style Checklist - NI.

At different workplaces (including NI) and different teams I have seen different implementations of such guides, many of which included extra "rules".

One such example is the file naming convention:

  • Using spaces for separating words.
  • Using Libraries to namespace VIs instead of including the noun in every related VI.

These are the most prominent examples I can come up with off the top of my head, but I'm sure that there are more.

 

I'm curious whether an updated version exists somewhere that could replace the above-referenced help page.

If there is not, then I think we could collect some ideas here for updating this document.

When you create a class, it gets assigned a "random" color.  But there are only 6 (or so) colors that are randomly chosen from. I do a lot of LVOOP, and generate a lot of classes. But because there's such a small selection of colors, I'll often drop class methods with the same color banner next to each other (and yes, with text), and it can sometimes be a bit misleading.

 

Why not have, say, 64 colors to randomly choose from? Or better yet, just use a randomly generated color within a range of shades? It's such a simple change, yet it would have a meaningful impact on the developer experience (well, at least mine).
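
For instance, the "range of shades" approach might look something like this (a toy sketch in Python, not LabVIEW internals; the ranges are arbitrary choices of mine):

```python
# Pick a random hue across the full wheel, but constrain saturation and
# brightness so banners stay distinct from one another yet readable.
import colorsys
import random

def random_class_color() -> str:
    hue = random.random()               # any hue: far more than 6 choices
    sat = random.uniform(0.45, 0.75)    # avoid washed-out and neon extremes
    val = random.uniform(0.70, 0.90)    # keep banner text legible
    r, g, b = colorsys.hsv_to_rgb(hue, sat, val)
    return "#{:02X}{:02X}{:02X}".format(int(r * 255), int(g * 255), int(b * 255))

print(random_class_color())             # e.g. "#4FA86B"
```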

 

_carl_0-1721943387831.png

 

The LabVIEW Robotics Module consists of a variety of sensor and actuator drivers, motion algorithms (such as kinematics), world map creation and search algorithms, a world simulator, etc.

 

It hasn't been updated since 2019 and is 32-bit only.

 

I would like to see it made open source.

I believe some or all of the sensor drivers are already available on ni.com/idnet.

There are several other VI-based components and examples that are standalone and could be easily released independently.

I realize that the simulator might have some 3rd party constraints for releasing as open source, but I'd love to see it released if possible.