LabVIEW Idea Exchange

It would be useful if LabVIEW offered the option to use an Error Collector Node inside structures.

 

The Error Collector Node would collect or capture any and all unhandled errors that occurred inside the structure that the node is part of. If one or more unhandled errors occurred, the Error Collector Node would output the first error that occurred (in chronological order). If no errors occurred, the node would output "No Error".
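
By way of illustration only (a minimal sketch in a text-based language, not a description of how LabVIEW would implement this; the branch names are invented), the proposed semantics are roughly:

```python
import concurrent.futures
import time

def run_and_capture(branch):
    """Run one code branch; if it raises an unhandled error, record when it happened."""
    try:
        branch()
        return None
    except Exception as exc:
        return (time.monotonic(), exc)

def error_collector(branches):
    """Sketch of the proposed node: run all branches of the structure, capture
    every unhandled error, and output the chronologically first one,
    or "No Error" if none occurred."""
    with concurrent.futures.ThreadPoolExecutor() as pool:
        results = list(pool.map(run_and_capture, branches))
    errors = [r for r in results if r is not None]
    if not errors:
        return "No Error"
    return min(errors, key=lambda r: r[0])[1]
```

In the first screenshot, error_collector([vi_a, vi_b, vi_c]) would return whichever error was raised first, regardless of which branch produced it.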

 

The following annotated screenshot shows how the Error Collector Node would help reduce the number of error wire segments and Merge Errors nodes.

Combined 1.png

The following screenshot shows a second example. Notice that using the Error Collector Node would result in a significant reduction of block diagram items: 10 fewer wire segments and 1 fewer Merge Errors node. The reduction could be even larger if we consider that the other cases of the case structure (potentially dozens of cases) would also benefit from the same Error Collector Node placed on the border of the case structure.

Combined 2.png

The two screenshots above are examples created specifically for the purpose of posting this idea. The following screenshot is a real-world example (taken from production code) of a VI that could benefit from the Error Collector Node, which would remove the need for numerous error wire segments and Merge Errors nodes.

1.png

Notes

  • The Error Collector Node was prototyped as a hexagonal output tunnel in the screenshots above. I would, of course, be happy if a different glyph or shape is chosen to represent this new type of tunnel.
  • The Error Collector Node would essentially behave like a localised version of the Automatic Error Handling functionality. It would collect or capture any unhandled errors inside its area of responsibility (the structure), and would convert those errors back into "manually-handled" errors - i.e. errors that are passed downstream via an error wire.
  • The Error Collector Node would be useful especially in situations where many code branches execute in parallel as it eliminates the need for lots of Merge Error nodes.
  • The first two screenshots use the words "approximately equivalent to". "Approximately" because, if multiple errors occur inside the structure, the Error Collector Node does not guarantee which of those errors is output, whereas the Merge Errors node does. For example, in the first screenshot, if all three VIs (VI A, VI B, and VI C) experience an error, there is no guarantee as to which of those errors the Error Collector Node would output. The node would output the first error that occurred (in chronological order), so the result would depend on which VI finished execution first. This could change from execution to execution, and from machine to machine. The right-hand-side version, which uses Merge Errors, would always output the error generated by VI A.
    • This would usually not be an issue in practice. If more determinism is needed, the programmer could, of course, fall back on manually wiring the errors to define an exact error behaviour.
  • It should be possible to add a maximum of one Error Collector Node to each structure: Flat Sequence Structures, Case Structures, For Loops, While Loops, etc.
  • It would be useful if the Error Collector Node could be used outside of structures (directly on the outermost region of a block diagram). Again, enforcing a maximum of one Error Collector Node per outer block diagram would make sense. The Error Collector Node would execute after all other block diagram nodes and structures (would be the last thing to execute). The output of the Error Collector node could then be fed directly to an "Error Out" block diagram terminal. This would remove the need to wire most error wires inside the VI, while ensuring that no error goes uncaptured.
  • If the Error Collector Node exists only as a structure output tunnel, and not as a stand-alone node outside of structures, then a better name for it might be the Error Collector Output Tunnel.
  • The behaviour of the Error Collector Node would be unaffected by Automatic Error Handling being enabled or disabled in that VI.
  • Using Error Collector Nodes would benefit programmers in the following ways:
    • Would reduce the amount of "click-work" that programmers currently need to do (the number of wire segments and Merge Error nodes that need to be created), while ensuring that all unhandled errors are captured.
    • Would reduce the amount of block diagram "clutter". This "clutter" is apparent in the third screenshot, which shows many criss-crossing error and DAQmx wires.
    • Would decrease the size on disk of VIs, thanks to fewer block diagram items needing to be represented in the VI file. This would help make git repositories a little smaller and VIs load into memory a little quicker.
  • Informally, using Error Collector Nodes would sit in-between the strictness of manually wiring all error outputs, and the looseness of relying solely on Automatic Error Handling. The error handling gold-standard would remain manually wiring all error outputs, but using Error Collector Nodes might be "good enough" in many situations, if used judiciously.

Problem: Currently, wiring an error wire to a structure input tunnel that does not continue as a wire inside the structure clears the error that exists on the wire.

 

Happy case: When running the VI shown below, Automatic Error Handling correctly detects that the error out terminal of Error Cluster From Error Code.vi is unwired, and handles the error (displays the error as a dialogue window).

2 (edited).png

Unhappy case: Wiring the error wire from the error out terminal of Error Cluster From Error Code.vi to a structure input tunnel clears the error. Automatic Error Handling does not detect or handle the error.

3 (edited).png

In my opinion this was simply an unfortunate design decision (it can happen to all of us) back when it was made, decades ago. IMO there is no logical argument to support this behaviour: the fact that the error wire is wired to an input tunnel does not mean that the error was handled. At best, when a programmer intentionally uses this technique, it is a non-self-documenting coding practice (why not use the self-documenting Clear Errors.vi?). At worst, it clears errors simply because the programmer forgot to wire the error through the structure - errors are cleared without the programmer explicitly asking for it. It "sweeps errors under the carpet", and can result in overly "optimistic" applications (apps that seemingly execute without error when in fact unhandled errors are being generated).
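
By way of analogy in a text-based language (my own illustration; this is not how LabVIEW is implemented), the difference is like silently swallowing an exception versus explicitly suppressing it:

```python
from contextlib import suppress

def risky_operation():
    """Stand-in for any VI whose error out terminal carries an error."""
    raise RuntimeError("example error")

# What wiring the error into a dead-end input tunnel amounts to:
# the error disappears and nothing documents that this was intentional.
try:
    risky_operation()
except Exception:
    pass  # error silently swept under the carpet

# What Clear Errors.vi amounts to: discarding the error is explicit
# and self-documenting at the point of use.
with suppress(RuntimeError):
    risky_operation()
```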

 

Please note that even though the screenshot above shows a Flat Sequence Structure input tunnel, the behaviour applies to every structure (case structure, for loop, while loop, etc).

 

To summarise, the problem is that the screenshot above is functionally equivalent to explicitly using the Clear Errors.vi, as seen below.

4 (edited).png

Clear Errors.vi is of course the self-documenting, recommended method of clearing errors. It should also be the only method of clearing errors.

 

Ironically, Clear Errors.vi itself uses the "clear error by wiring it to input tunnel" technique inside its "0" case, as seen below.

5 (edited).png

To its credit, Clear Errors.vi uses a correct technique for clearing errors inside its other, default case.

6 (edited).png

Here is another example, found "in the wild", of a VI using the "clear error by wiring it to input tunnel" technique. This VI ships with LabVIEW and is found at <LabVIEW installation folder>\vi.lib\Utility\EditLVProj\Identify VIs Among Project Items.vi.

1 (edited).png

Solution: Disable the "clear error by wiring it to input tunnel" behaviour. This would fix what IMO is an incorrect design decision. Unfortunately, fixing it now would cause VIs that use the "clear error by wiring it to input tunnel" technique to start throwing unhandled errors when AEH is enabled. This is not ideal, but it might be worth accepting this short-term drawback for long-term gain.

 

Moreover, it may be useful to introduce a Clear Errors node (primitive function). The Clear Errors.vi could then make use of the Clear Errors node inside both of its cases. Alternatively, the Clear Errors node could simply replace and supersede the Clear Errors.vi.

Automatic Error Handling (AEH) is a useful feature. It captures errors that were otherwise left unwired by the programmer (intentionally or accidentally). It represents a "safety net" that can make the programmer aware of errors that they might otherwise remain unaware of.

 

Problem: Currently AEH functionality is only available in the development environment. It is not available in built executables. Even when all VIs in a project have AEH enabled, once built into an EXE, all VIs behave as if AEH was disabled.

 

Solution: It should be possible to honour each VI's AEH setting in built EXEs too, not just in DevEnv. The EXE build specification could contain a setting named "Honour each VI's Automatic Error Handling setting in EXE". When ticked (enabled), any VIs for which AEH was enabled in the development environment will continue to benefit from AEH behaviour in the EXE. Any VIs for which AEH was disabled will continue to have it disabled in the EXE. This means that, from an error handling/error manipulation point of view, the application would behave identically when being run as an EXE as when being run in Development Environment. This is more consistent, and can be helpful.

 

The current behaviour (forcibly removing AEH in EXE) means that EXEs are prone to having errors that were not discovered during DevEnv testing being "swept under the carpet". In other words, currently EXEs are overly "optimistic" - they can make the programmer believe that everything is ok when in fact one or multiple unhandled errors are occurring, errors that would have been visible in DevEnv. This is particularly relevant to apps that run for long periods of time (e.g. life cycle testers) that may encounter errors that were simply unforeseen or untested in DevEnv (e.g. error after one month of continuous running due to running out of disk space when saving measurements log file to disk).

 

The screenshot below shows what the new setting could look like in the Advanced page of the EXE build spec.

3 Screenshot 1 (edited).png

Notes

  • To be absolutely clear: I am not asking for AEH to be enabled by default in all EXEs. I am also not asking for AEH to become enabled in all VIs when the new build spec setting is ticked - that would override the AEH setting of each VI, which is not what I am proposing.
  • The default value of the new setting should be False (unticked). When False, the built EXE would behave exactly as it does now - AEH would be disabled in all VIs. This would maintain the current behaviour as default.
  • The new setting would give the programmer more control - it would allow the programmer to decide whether they want AEH or not in their EXE. Currently AEH is taken away in EXE, even when we (professional LabVIEW programmers) might want it enabled.
  • I would be happy if the new setting was available only for desktop, non-real-time applications, and not on Real-Time targets.

I'm proposing a new certification, one certification to rule them all, the Certified LabVIEW Gangster Group Developer.

 

Or simply a fully functioning CLA that requires 3 people to take the test.

Architect

Developer

Embedded Developer


It should function using an FPGA, or a simulated interface.

 

 

It would be useful if the Project Explorer supported a Shortcut item type.

 

  • The Shortcut item would be associated with exactly one VI, CTL, lvclass, or lvlib item that exists in the same project.
  • When the Shortcut is double-clicked the Project Explorer would perform the same action as if the referenced item itself had been double-clicked. For example, if the referenced item is a VI or CTL, then double-clicking the Shortcut would open the VI or CTL.
  • It should be possible to associate any number of Shortcuts with the same underlying item. For example, there could be two Shortcuts that refer to the same VI.
  • When the underlying item is removed from the project, Project Explorer would automatically remove all associated Shortcuts.

Use cases

  • DQMH-based projects. It may be desirable to create a virtual folder named say "DQMH Main VIs" that contains a Shortcut to the Main.vi of each DQMH module.
  • Large projects with multiple EXE build specs. It may be desirable to create a virtual folder named say "EXE Main VIs" that contains a Shortcut to the Startup VI of each EXE.

One of the things LabVIEW is best at is its innate parallelism. Parallelizing for loops with a right-click is something other languages wish they had. Having all code on the block diagram (or equivalent) innately run in parallel is something few other languages have even tried, much less succeeded at.

 

Crunching large datasets can be sped up through use of GPUs, and over the past decade or more, Nvidia has kept their promise of having a common interface to their hardware through CUDA. 

 

Having a LabVIEW add-on that is well maintained, and that can give GPU parallelism in the same ways we've seen LabVIEW deliver CPU parallelism would be a game changer for many labs and manufacturing environments. It could also help LabVIEW be a leader in the AI space.

 

I suggest creating this add-on package using CUDA for the underlying GPU calls, in order to keep the code easy to manage while also supporting a large number of GPUs.

When a subarray is created, the data is copied from the original array.

 

The LabVIEW compiler does not recognise when the subarray data is subsequently only read.

It would be a major improvement for memory and even CPU usage if the compiler recognised that some part of the array is only read, and passed only a reference to that part of the array in memory.
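
As a loose analogy from a text-based language (not a claim about LabVIEW internals), NumPy makes exactly this distinction between a view that references the original buffer and an actual copy:

```python
import numpy as np

big = np.arange(1_000_000)

# Slicing produces a view: no element data is copied, the slice just
# references a region of the original array's buffer.
sub = big[1000:2000]
print(sub.base is big)       # True - shares memory with the original

# Reading through the view works on the original data in place.
total = sub.sum()

# Only an explicit copy duplicates the data.
sub_copy = big[1000:2000].copy()
print(sub_copy.base is big)  # False - independent buffer
```

The request here is essentially for the compiler to do the equivalent automatically whenever it can prove the subarray is only read.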

 

When in the front panel, most LabVIEW data types that wrap other data (such as arrays, DVRs, maps, sets, user events, etc.) allow you to directly view and modify the wrapped data. For some of these data types it is not the default behavior, but it is available if you right-click on the control and select "Show Control...".

 

For illustrative purposes:

_carl_0-1709850594929.png

In all but one of the cases above, I can see that the underlying data type is a boolean, and I can swap this out easily for any other data of my choosing, such as a typedef.

 

The one exception: event registration refnums. There is no "Show Control" option, and from the front panel it is impossible to update this underlying data type. Instead, you have to create a new event registration refnum in the block diagram. I find that this makes event registration refnums difficult to work with, maintain, and debug when they are stored in class data.

 

The request: treat event registration refnums like any other data container type, and expose a "Show Control" option that makes the underlying data type accessible in front panels.

Changing the misleading red pause sign used to continue execution in debugging mode to a more appropriate icon (e.g. a red play button) would improve the user experience and help newcomers intuitively learn about the feature.

I think it would be very useful to have the create DVR, change to DVR and change from DVR shortcuts on the menu.

I already did some of this myself, but it should be built-in functionality.

I also encountered a bug in the change-from-DVR scripting for controls that I couldn't work around yet. To be exact, when a control is moved out of a DVR using scripting, it becomes locked and its label is made invisible. The same is not true for the same operation performed through the UI. The only workaround I found was to replace the created control with one of the same style. The new problem is that enums and rings that aren't typedefs lose their item lists. Of course this could be worked around too, but there might be other types I haven't tried yet.

To begin with, this is very likely a corner case and I don't expect it to ever be implemented, but the VIs are password protected, which makes customizing them difficult.

When a project with a lot of VIs is open in a LabVIEW instance, Quick Drop takes ~2 seconds to load, which is more than enough time for me to also press the shortcut combination, leading to another pop-up window appearing (like the Find window for Ctrl+F).

I wish this were an option, i.e. another option on the customization panel, or an option inside the project file/library, which might be used less frequently.

When logging errors to a text file, I would like to see the latest error at the top, rather than seeing the first error that ever happened and having to scroll to the bottom of the file to view the latest one (I know, first-world problems). If there were a way to place the cursor at the start of the first line of a text file and insert the data, shifting all the original data over/down, that would be helpful. Currently, when the cursor is placed at the start of the file, existing data is overwritten as new data is written: if 25 characters are written, the first 25 characters of the text file are overwritten.

 

It is possible to read the entire file and concatenate the new data (string 0) with the old data (string 1), but if the file gets large (we don't clean our files / don't want to remove error data) this can become cumbersome and CPU-intensive.
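
A minimal sketch of that read-and-rewrite workaround (plain Python; the file name and entry text are invented) shows why it gets expensive as the log grows - the whole file has to pass through memory on every prepend:

```python
LOG_PATH = "error_log.txt"   # hypothetical log file

def prepend_entry(path, entry):
    """Prepend one entry by rewriting the whole file - the entire existing
    contents must be read back and written out again on every call."""
    try:
        with open(path, "r", encoding="utf-8") as f:
            old = f.read()
    except FileNotFoundError:
        old = ""
    with open(path, "w", encoding="utf-8") as f:
        f.write(entry + "\n" + old)

prepend_entry(LOG_PATH, "2024-05-01 12:00:00  Error 7 occurred at Open File")
```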

 

https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z0000019RNsSAM&l=en-US - Dated 7 July 2022, states that prepend can not happen the way I would like.

It would be useful if you could change the numeric (FXP) information (signed, word length, integer word length) of a data type during run time.

Since all diagram types must be known at compile time, there is no way to dynamically decide the type of a wire. Variants are the only way to handle dynamic types at runtime. However, variants do not expose any way to take data of one type and convert it to a different type within the variant.

 

This functionality would be useful, for example, if you used the 'Get Fixed-Point Information - NI VI' to extract fixed-point numeric (FXP) information from a data type stored in a variant, and could then use that information to cast data into a variant of that type. This is solved in the attached image using a case structure, but that is not a very elegant solution.

 

image003.png
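
Loosely speaking, the case-structure workaround amounts to a lookup from run-time type information to a conversion routine. A hypothetical sketch in a text-based language (the FXP configurations listed are arbitrary examples, not taken from the attached image; range clamping is omitted):

```python
from typing import Any, Callable, Dict, Tuple

# (signed, word length, integer word length) -> conversion routine that
# quantises a value onto that FXP grid. One entry per supported "case".
converters: Dict[Tuple[bool, int, int], Callable[[float], Any]] = {
    (True, 16, 8): lambda x: round(x * 2**8) / 2**8,   # 8 fractional bits
    (False, 8, 4): lambda x: round(x * 2**4) / 2**4,   # 4 fractional bits
}

def cast_with_runtime_info(value: float, signed: bool, word: int, int_word: int):
    """Pick the conversion based on type information only known at run time."""
    try:
        return converters[(signed, word, int_word)](value)
    except KeyError:
        raise ValueError("unsupported FXP configuration") from None
```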

"<LabVIEW 20XX>\project\procmphier.llb\CMP Compare two VIs.vi" is an excellent VI that lets us develop tools to address some shortcomings of "raw" LVCompare.exe.

 

There should be a similar VI to open the LVMerge dialog.

 

The part of "CMP Compare two VIs.vi" that I find most useful and would be invaluable for LVMerge is the option to pass in different AppRefs, which lets us load different versions of VIs with the same name and compare them, thus providing a path to eliminate all the breakages associated with renaming files (eg "claims to be part of library," "could not find class with this name...").

 

There are some tantalizing VIs in "project\_promergevis.llb\NI_promergevis.lvlib" but there doesn't seem to be a public means to open the VIs in different application instances.

It would be helpful to add an option like "Digital Value" to the DAQmx Trigger.vi (comparable to Analog Window).
Background: when the edge of a digital trigger signal is missed (due to being late in code execution, e.g. after waveform reconfiguration), the trigger event does not happen. With this new option it would be possible to execute the trigger event as soon as possible after missing the edge.

Increase the priority of Corrective Action Request (CAR) #109501, created around 2008, about sorting events alphabetically. Use cases include finding events quicker and simplifying comparison results.

 

c.f. https://forums.ni.com/t5/LabVIEW/Case-order-of-EVENT-structure/m-p/782114

Many times, at least in my code, it is necessary to act only on a rising, falling, or changed edge of a Boolean signal.

This option could probably be added for Case structures with a Boolean-type case selector, of course.
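
For reference, a minimal sketch (in a text-based language, with invented names) of the logic such a case selector would encapsulate, keeping the previous value the way a shift register would:

```python
def classify_transition(previous: bool, current: bool) -> str:
    """Classify a Boolean transition into the kinds of edge the selector would offer.
    "Changed" would simply be previous != current, i.e. either edge."""
    if current and not previous:
        return "Rising Edge"
    if previous and not current:
        return "Falling Edge"
    return "No Change"

signal = [False, False, True, True, False, True]
previous = signal[0]
for current in signal[1:]:
    if classify_transition(previous, current) == "Rising Edge":
        pass  # act only on the rising edge here
    previous = current  # carried over like a shift register
```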

When you open a .lvproj,

and then open the "Dependencies" folder,

and then right-click on an item,

and then select "Why is this item in Dependencies?"

LabVIEW always crashes.  It's been like this for many years.

My idea is to change the "Why is this item in Dependencies?" feature so that it works instead of crashing.

Currently, the only method to verify the time format of a Time Stamp indicator is to go to Properties of the Time Stamp >> Display Format >> select Advanced editing mode, and read the Format String display.

MMargaryan_0-1686132778472.png

The idea is to add the Format String display to the Default editing mode of the Display Format page, to make it easier for users to identify whether the time is local, UTC, or something else.
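
For context, the Format String is what makes the time zone explicit. Quoting the absolute-time format codes from memory (an assumption on my part, not taken from the screenshot - please verify against the dialog), the "^" modifier is what switches the display to UTC:

```python
# LabVIEW absolute-time format strings, as recalled - please double-check:
LOCAL_TIME_FORMAT = "%<%H:%M:%S%3u %d/%m/%Y>T"   # displayed in local time
UTC_TIME_FORMAT = "%^<%H:%M:%S%3u %d/%m/%Y>T"    # "^" modifier selects UTC
```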

 

Hi

While working with LabVIEW, a broken run arrow shows that the code is not complete if we have left any input or output unwired.

I would suggest an option in the block diagram to select an active region (for example, by drawing a box). LabVIEW should only consider the code that is inside the box; any unwired inputs or outputs outside of the box should be ignored.

 

I hope  I made my idea clear.

 

Thanks

Asif