LabVIEW Idea Exchange

For a few years now, LabVIEW has had native support for the Map and Set types.

How about adding a DataFrame type similar to those found in other programming languages (possibly even with native interoperability with Python)?

 

A DataFrame type would be a 2D value whose columns can have different datatypes. Currently, one needs to work around this by creating a 1D array of a cluster (or class) type.

The data would be accessed using numerical indexing for the rows and field access (like in a cluster) for the columns.
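For illustration, here is roughly what such a type already looks like in Python's pandas library (a sketch only, not a proposal for the LabVIEW API; the column names and values are made up):

```python
import pandas as pd

# A 2D table whose columns have different datatypes
df = pd.DataFrame({
    "timestamp": pd.to_datetime(["2024-01-01", "2024-01-02"]),
    "channel":   ["AI0", "AI1"],          # string column
    "reading":   [1.23, 4.56],            # float column
    "valid":     [True, False],           # boolean column
})

row = df.iloc[0]          # numerical indexing for rows
col = df["reading"]       # field-style access for columns, like a cluster element
```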

 

KR, Benjamin

It's always a "pleasure" to uncover shared-memory problems in base vi.lib methods that have uninitialized shift registers and *should* work as preallocated clones, and to then have to establish a chain of VI references just to call those methods safely in an application with distributed/parallel threads. It feels like extra work, but it does fix the problem. It's a major hidden gotcha unless you're already familiar with the VI you are calling.

 

My most recent hiccup: the NI PID library. I needed two closed-loop controllers in the Actor Framework, and I was getting very strange timing, setpoint, process variable, and control output crossovers until I realized the stock VIs had shared memory in the form of uninitialized shift registers (despite being called in the context of a pre-allocated actor). Creating references to the VIs I needed and storing them in the actor at launch fixed the problem, but at that point the effort of writing my own PID VI starts to be a favorable time tradeoff. At least I am able to peek under the hood of the PID library; other parts of vi.lib... not so much.
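For anyone less familiar with the gotcha, here is a rough text-language analogy in Python (names are made up) of what uninitialized-shift-register state does when two parallel loops call the same non-reentrant VI, versus giving each loop its own instance:

```python
# Shared-state version: analogous to a non-reentrant VI whose memory lives in
# an uninitialized shift register. Both control loops share (and corrupt) it.
_integral = 0.0

def pid_update_shared(error, ki, dt):
    global _integral
    _integral += error * dt      # loop A's error leaks into loop B's integral term
    return ki * _integral

# Per-instance version: analogous to calling the VI by reference as a
# preallocated clone (or writing your own), so each controller owns its state.
class Pid:
    def __init__(self, ki):
        self.ki = ki
        self.integral = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        return self.ki * self.integral
```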

 

Or maybe this is a teaching problem. I haven't come across official NI documentation on navigating this issue; in fact, I learned that I needed to call the PID VI by reference from the forum, where it was mentioned rather matter-of-factly. There are a couple of great blogs that cover this issue in detail, so I don't feel alone in my ignorance: Maintaining State Information in LabVIEW Applications, Part 5 - LabVIEW Field Journal Archives

I like compile-time type safety checks. I dislike using variants. Occasionally, and increasingly often, I find myself going to great lengths to provide compile-time type safety. At some point, the type check gets lost in the inheritance hierarchy and I am back to depending on runtime checks for errors. It's not uncommon for me to have a class method that needs to "just work" across the bulk of the base types, but it sure is a pain to make wrapper classes, static inlined methods, and a nasty polymorphic VI to mimic this behavior. Perhaps I am ignorant of some features of LV (do malleable VIs fit in here somewhere?), but multiple dispatch/function overloading sure seems like the silver bullet for this issue without messy inheritance trees.

 

I'm open to discussion on alternatives. This "problem" has come up in a couple of recent projects of mine, and I always feel dirty using a variant or making a static API for a class that ought to be extensible.
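For comparison, text languages get part of the way there with dispatch on the runtime type of a single argument; a minimal Python sketch using functools.singledispatch (the function names are made up, and this is single dispatch only, not the full multiple dispatch described above):

```python
from functools import singledispatch

@singledispatch
def to_float(value):
    # Fallback: the equivalent of a runtime type check failing
    raise TypeError(f"Unsupported type: {type(value).__name__}")

@to_float.register
def _(value: int) -> float:
    return float(value)

@to_float.register
def _(value: str) -> float:
    return float(value.strip())

@to_float.register
def _(value: list) -> float:
    return sum(to_float(v) for v in value) / len(value)

print(to_float("3.5"), to_float([1, 2, 3]))   # 3.5 2.0
```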

Using property nodes, it's possible to retrieve a list of subVI references within a VI's block diagram. However, there's currently no way to link each reference to its specific subVI instance.

For example, if you place three instances of subVI "A" in a diagram, you can obtain three references to these subVIs, but there is no way to determine which reference corresponds to which instance.

The proposed idea is to introduce a "This SubVI" reference — a property or node that would return the reference of the current subVI instance (the one hosting the object or node). This enhancement would significantly improve scripting capabilities, support tool development, and aid the creation of debugging tools.

If a DLL is configured in a Call Library Function Node and is either missing or defective, the entire program cannot be executed. In most situations, this behavior is acceptable.

However, it can also be quite frustrating, especially if the part of the program that uses the DLL is not called. Additionally, it would be helpful for debugging purposes if the program could still execute even when a DLL is unavailable.

Therefore, I propose that the configuration dialog of the Call Library Function Node includes an additional checkbox labeled "Break execution if DLL cannot be loaded." If this checkbox is checked (default setting), the behavior will remain as it currently is. If the checkbox is unchecked, the program should run until the affected node is executed. At that point, the node's error output should indicate either "DLL not found" or "DLL found but cannot be loaded," and execution should proceed beyond the node. In this scenario, it will be the programmer's responsibility to handle the error appropriately.

As an extension of this idea, I could also envision a new entry in the LabVIEW.ini / application.ini file to assist with debugging. If this entry is not present, the explicit configuration of the node will be used. If it is available, it will override the explicit configuration with either true or false.
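For what it's worth, this "fail at call time, not at load time" pattern is common in text languages; here is a rough Python/ctypes sketch (the DLL and function names are hypothetical) of the behaviour the unchecked option would give:

```python
import ctypes

def call_dll(dll_path, func_name, *args):
    """Load the DLL only when the call actually happens, and report failure
    as an error value instead of preventing the whole program from running."""
    try:
        lib = ctypes.CDLL(dll_path)           # raises OSError if missing or defective
    except OSError as exc:
        return None, f"DLL not found or cannot be loaded: {exc}"
    try:
        func = getattr(lib, func_name)        # raises AttributeError if the symbol is absent
    except AttributeError as exc:
        return None, f"Function not exported by DLL: {exc}"
    return func(*args), ""                    # success: result plus empty error string

result, error = call_dll("mydriver.dll", "Initialize")   # hypothetical names
```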

Please also see this related idea: https://forums.ni.com/t5/LabVIEW-Idea-Exchange/get-DLL-path-from-the-call-library-function-node/idi-p/4433632

I realize that the DBCT (Database Connectivity Toolkit) is nearing its 20th birthday and probably is not a priority for a fix, but this one is so simple, and it has tripped up LabVIEW programmers doing database applications for many years.

 

The VI Rec Fetch Next Recordset (R).vi has a flaw: it steps past the last recordset in a multi-recordset return and then leaks a reference, which causes mayhem in downstream code. A simple test of the recordset reference would make this VI properly useful.
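For comparison, text-based database APIs make that test explicit; a sketch of the intended behaviour using pyodbc (the DSN and stored-procedure names are hypothetical):

```python
import pyodbc

conn = pyodbc.connect("DSN=MyDatabase")          # hypothetical DSN
cursor = conn.cursor()
cursor.execute("{CALL dbo.MultiResultProc}")     # hypothetical multi-recordset procedure
while True:
    rows = cursor.fetchall()
    # ... process this recordset ...
    if not cursor.nextset():                     # falsy when no recordset remains,
        break                                    # so we never step past the last one
conn.close()
```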

 

Among my other posts in reply to forum users about databases, the issue is captured succinctly here.

 

I long ago made the mods to the toolkit VI (actually, to two VIs) and reapply them with each new LabVIEW version as needed. If anyone at NI/Emerson wants to look into this, I'd be happy to share the particulars.

 

Dave

Many new PCs running Windows run on ARM processors (like the Snapdragon), rather than x86 architecture. LabVIEW does not support ARM.

 

A few years back, my LabVIEW software would work on any Windows PC. That is no longer the case. That is a huge and increasing limitation.

Problem: Many native VIs use the Non-reentrant execution reentrancy setting.

 

Solution: The vast majority of native VIs should use the Preallocated clone reentrancy setting.

  • The native VIs that need to use Non-reentrant or Shared clone are few and far between - they should be identified on a case-by-case basis. Their Context Help and/or Detailed Help should explain why they need to be set to Non-reentrant or Shared clone.

The following is a selection of vi.lib VIs that should use Preallocated clone. This selection is meant to serve as a starting point and is not comprehensive.

 

1.png

2.png

3.png

 

Notes:

  • This idea is related to: The reentrancy of new VIs should be "Preallocated clone". Both ideas argue in favour of using the Preallocated clone setting more widely.
  • A significant number of native VIs are already configured to use Preallocated clone, which is great.
  • There are curious cases where closely related VIs are set to different reentrancy settings. For example, Color to RGB.vi rightly uses Preallocated clone, while RGB to Color.vi is Non-reentrant. Similarly, Trim Whitespace.vi is rightly Preallocated clone, while Normalize End Of Line.vi - which lives next to it on the String palette - is Non-reentrant.
    • This suggests that the reentrancy setting of some native VIs was chosen haphazardly. This needs to be rectified.
  • The fact that so many native VIs are non-reentrant partly defeats LabVIEW's remarkable ability to create parallel code easily. Loops that are supposed to be parallel and independent are in fact dependent on each other when they use multiple instances of these non-reentrant native VIs. It is as if "hidden semaphores" were added between the various locations that call these native VIs. This leads to less performant applications (more CPU cycles, longer execution time, larger compiled EXE code).

Problem: When creating a new VI using File >> New VI (Ctrl + N), its reentrancy setting is Non-reentrant execution.

 

Solution: The reentrancy setting of new VIs should be Preallocated clone reentrant execution.

 

Background:

In most applications the vast majority of VIs could and should be set to "Preallocated clone reentrant execution". In a nutshell, Preallocated clone ensures that each instance of a given VI is completely independent of all other instances. This is desirable in the vast majority of cases.
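As a rough text-language analogy (Python, with made-up function names), a non-reentrant VI behaves as if every call site shared one lock, while preallocated clones behave like fully independent calls:

```python
import threading, time

shared_lock = threading.Lock()     # roughly the "hidden semaphore" a non-reentrant VI adds

def non_reentrant_work():
    with shared_lock:              # callers in parallel loops still run one at a time
        time.sleep(0.1)            # stand-in for the VI's actual work

def preallocated_clone_work():
    time.sleep(0.1)                # each call is fully independent of the others

start = time.time()
threads = [threading.Thread(target=non_reentrant_work) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(f"{time.time() - start:.2f} s")   # ~0.4 s serialized; the clone version takes ~0.1 s
```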

 

We should encourage best practices by changing the default reentrancy setting to the setting that is desirable in most cases, namely Preallocated clone.

 

The other two options - Non-reentrant execution and Shared clone reentrant execution - are the best choice in specialised cases only.

  • Non-reentrant is necessary when needing to guarantee that multiple instances of a VI block each other from executing at the same time and/or when wishing to use uninitialised shift registers as a means of asynchronous data communication (e.g. FGV or Action Engine).
  • Shared clone is best when aiming to reduce memory usage. Shared clone can help on memory-scarce targets such as cRIOs or sbRIOs, or when dealing with massive amounts of data (arrays of millions or billions of elements). Memory usage is not a concern for the vast majority of VIs, especially when working on modern desktop systems that have 8, 16, or more GB of RAM, and when dealing with reasonable amounts of data (arrays with up to a few million elements).

New VIs set to Preallocated clone would be in good company, as detailed below.

 

1.png

  • The vast majority of LabVIEW nodes rightly execute as if they were set to Preallocated clone. This enables us to use many instances of each node freely without worrying that they will create timing dependencies between vastly different caller VIs.
    • For example, one can use the Add node inside as many DQMH modules or Actor Framework actors as they wish, without creating any timing dependency between those callers, which is great.
    • As far as I am aware, the only nodes that execute as if they were set to Non-reentrant are those that require the UI thread. These nodes execute in a sequential, blocking manner.
  • Many of the VIs that ship with LabVIEW are rightly set to Preallocated clone.
    • I believe that most VIs that ship with LabVIEW and are set to something other than Preallocated clone should be set to Preallocated clone, but this should be addressed in a separate idea.
  • All inlined VIs (and therefore, all VIMs, which must be inlined) execute as if they were set to Preallocated clone.

Further notes:

  • If Non-reentrant was chosen as the default because it was judged to be the friendliest to new users (an argument that I believe does not outweigh the arguments in favour of Preallocated clone), then at least there should be a Tools >> Settings option to enable people to change the default reentrancy setting.
  • The fact that new VIs are by default Non-reentrant defeats the benefit that LabVIEW offers in terms of ease of creating parallel threads. Many of these threads will in fact not be truly parallel, because of the undesired one-instance-running-at-a-time blocking effect of Non-reentrant instances that execute in vastly different areas of an application.
  • I have never tested this, but EXEs are likely to be smaller (perhaps by a few KB, or even a few hundred KB) when the vast majority of VIs are set to Preallocated clone. When multiple instances of Non-reentrant VIs are used, LabVIEW must add some kind of mutexes (mutually exclusive locks) to the compiled code. When VIs are set to Preallocated clone, these mutexes disappear, leaving behind smaller, cleaner compiled code.
  • The disappearance of mutexes might help enable the LabVIEW compiler to perform optimisations that are not possible when non-reentrant boundaries are present.

Reentrancy resources:

To achieve better performance and efficiency, the “In Place Element Structure” should be used instead of the “Bundle by Name” method for “VIs for Data Member Access” (specifically for write VIs). The "Mark as Modifier" option should also be activated.

 

TapioS_4-1740049307134.png

 

TapioS_3-1740049278327.png

If compiler optimization already does this automatically, I apologize for this post.

The database toolkit is limited by the Database Variant to Data function. It can only cast to a LabVIEW datatype if you wire that datatype to the type input. This means that you have to know the datatype of any SQL query in advance (or convert to string). It would be very useful if the function also accepted a variant datatype. This way it would be possible to cast any complex type into a LabVIEW datatype, without the need for a predefined cluster.
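For comparison, this is the behaviour text-language database APIs give by default; a small Python/sqlite3 sketch where each column comes back in its native type, with no predefined cluster:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (name TEXT, value REAL, count INTEGER)")
conn.execute("INSERT INTO t VALUES ('sensor', 3.14, 7)")

# The row type is not declared in advance; each column arrives as its native type,
# which is roughly what accepting a variant datatype input would enable.
for row in conn.execute("SELECT * FROM t"):
    print(row, [type(v).__name__ for v in row])   # ('sensor', 3.14, 7) ['str', 'float', 'int']
```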

 

Image: casting the database input with the variant type input (circled) doesn't work.

aartjan_0-1735459849988.png

 

This is an idea I've been working on for a while. It's time to let others start evaluating it. 🙂

000.png

001.png

 ^ I included the above for Dmitry Sagatelyan and similar folks who have asked me for these things over the years so they know the mindset to use when evaluating the idea. But it's written up below for LabVIEW users who only know LabVIEW as it stands today (Q3 2024).

002.png

003.png

004.png

005.png

006.png

007.png

 

Feedback and questions welcome. 

It would be useful if LabVIEW offered the option to use an Error Collector Node inside structures.

 

The Error Collector Node would collect or capture any and all unhandled errors that occurred inside the structure that the node is part of. If one or more unhandled errors occurred, the Error Collector Node would output the first error that occurred (in chronological order). If no errors occurred, the node would output "No Error".
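In text-language terms, the node would act like a local "catch everything in this block and hand back the first error" construct; a rough Python sketch of the intended semantics (all names are made up):

```python
from concurrent.futures import ThreadPoolExecutor

def run_and_collect_first_error(tasks):
    """Run the tasks in parallel and return the first error that occurred
    (in chronological order), or None if everything succeeded."""
    errors = []

    def wrapped(task):
        try:
            task()
        except Exception as exc:     # capture the unhandled error instead of losing it
            errors.append(exc)

    with ThreadPoolExecutor() as pool:
        list(pool.map(wrapped, tasks))
    return errors[0] if errors else None
```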

 

The following annotated screenshot shows how the Error Collector Node would help reduce the number of error wire segments and Merge Errors nodes.

Combined 1.png

The following screenshot shows a second example. Notice that using the Error Collector Node would result in a significant reduction of block diagram items: 10 fewer wire segments and 1 fewer Merge Error Node. The reduction in block diagram items could be even larger if we consider that the other cases of the case structure (potentially dozens of cases) also benefit from the same Error Collector Node being placed on the border of the case structure.

Combined 2.png

The two screenshots above are examples created specifically for the purpose of posting this idea. The following screenshot is a real-world example (taken from production code) of a VI that could benefit from the Error Collector Node, which would remove the need for numerous error wire segments and Merge Error nodes.

1.png

Notes

  • The Error Collector Node was prototyped as a hexagonal output tunnel in the screenshots above. I would, of course, be happy if a different glyph or shape is chosen to represent this new type of tunnel.
  • The Error Collector Node would essentially behave like a localised version of the Automatic Error Handling functionality. It would collect or capture any unhandled errors inside its area of responsibility (the structure), and would convert those errors back into "manually-handled" errors - i.e. errors that are passed downstream via an error wire.
  • The Error Collector Node would be useful especially in situations where many code branches execute in parallel as it eliminates the need for lots of Merge Error nodes.
  • The first two screenshots use the words "approximately equivalent to". "Approximately" because, if multiple errors occur inside the structure, the Error Collector Node does not guarantee which of those errors is output, whereas the Merge Errors node does. For example, in the first screenshot, if all three VIs (VI A, VI B, and VI C) experience an error, there is no guarantee as to which of those errors the Error Collector Node would output. The node would output the first error that occurred (in chronological order), so the result would depend on which VI finished execution first. This could change from execution to execution, and from machine to machine. The right-hand-side version, which uses Merge Errors, would always output the error generated by VI A.
    • This would usually not be an issue in practice. If more determinism is needed, the programmer could, of course, fall back on manually wiring the errors to define an exact error behaviour.
  • It should be possible to add a maximum of one Error Collector Node to each structure: Flat Sequence Structures, Case Structures, For Loops, While Loops, etc.
  • It would be useful if the Error Collector Node could be used outside of structures (directly on the outermost region of a block diagram). Again, enforcing a maximum of one Error Collector Node per outer block diagram would make sense. The Error Collector Node would execute after all other block diagram nodes and structures (would be the last thing to execute). The output of the Error Collector node could then be fed directly to an "Error Out" block diagram terminal. This would remove the need to wire most error wires inside the VI, while ensuring that no error goes uncaptured.
  • If the Error Collector Node exists only as a structure output tunnel, and not as a stand-alone node outside of structures, then a better name for it might be the Error Collector Output Tunnel.
  • The behaviour of the Error Collector Node would be unaffected by Automatic Error Handling being enabled or disabled in that VI.
  • Using Error Collector Nodes would benefit programmers in the following ways:
    • Would reduce the amount of "click-work" that programmers currently need to do (the number of wire segments and Merge Error nodes that need to be created), while ensuring that all unhandled errors are captured.
    • Would reduce the amount of block diagram "clutter". This "clutter" is apparent in the third screenshot, which shows many criss-crossing error and DAQmx wires.
    • Would decrease the size on disk of VIs thanks to fewer block diagram items needing to be represented in the VI file. This would help towards making git repositories a little bit smaller, and loading VIs into memory a little bit quicker.
  • Informally, using Error Collector Nodes would sit in-between the strictness of manually wiring all error outputs, and the looseness of relying solely on Automatic Error Handling. The error handling gold-standard would remain manually wiring all error outputs, but using Error Collector Nodes might be "good enough" in many situations, if used judiciously.

Problem: Currently, wiring an error wire to a structure input tunnel that does not continue inside the structure clears the error that exists on the wire.

 

Happy case: When running the VI shown below, Automatic Error Handling correctly detects that the error out terminal of Error Cluster From Error Code.vi is unwired, and handles the error (displays the error as a dialogue window).

2 (edited).png

Unhappy case: Wiring the error wire from the error out terminal of Error Cluster From Error Code.vi to a structure input tunnel clears the error. Automatic Error Handling does not detect or handle the error.

3 (edited).png

In my opinion this was simply an unfortunate design decision (it can happen to all of us) back when it was made, decades ago. IMO there is no logical argument to support this behaviour. The fact that the error wire is wired to an input tunnel does not mean that the error was handled. At best, when a programmer intentionally uses this technique, it represents a non-self-documenting coding practice (why not use the self-documenting Clear Errors.vi?). At worst, it means clearing errors simply because the programmer forgot to wire the wire through the structure. It means clearing errors when the programmer did not explicitly ask for this. It means "sweeping errors under the carpet", and it can result in overly "optimistic" applications (apps that seemingly execute without error when in fact unhandled errors are being generated).

 

Please note that even though the screenshot above shows a Flat Sequence Structure input tunnel, the behaviour applies to every structure (case structure, for loop, while loop, etc).

 

To summarise, the problem is that the screenshot above is functionally equivalent to explicitly using the Clear Errors.vi, as seen below.

4 (edited).png

Clear Errors.vi is of course the self-documenting, recommended method of clearing errors. It should also be the only method of clearing errors.

 

Ironically, Clear Errors.vi itself uses the "clear error by wiring it to input tunnel" technique inside its "0" case, as seen below.

5 (edited).png

To its credit, Clear Errors.vi uses a correct technique for clearing errors inside its other, default case.

6 (edited).png

Another example found "in the wild" of a VI using the "clear error by wiring it to input tunnel" technique. This VI ships with LabVIEW and is found at <LabVIEW installation folder>\vi.lib\Utility\EditLVProj\Identify VIs Among Project Items.vi.

1 (edited).png

Solution: Disable the "clear error by wiring it to input tunnel" behaviour. This would fix what is, IMO, an incorrect design decision. Unfortunately, fixing this now would cause VIs that use the "clear error by wiring it to input tunnel" technique to start throwing unhandled errors when Automatic Error Handling (AEH) is enabled. This is not ideal, but it might be worth accepting this short-term drawback for long-term gain.

 

Moreover, it may be useful to introduce a Clear Errors node (primitive function). The Clear Errors.vi could then make use of the Clear Errors node inside both of its cases. Alternatively, the Clear Errors node could simply replace and supersede the Clear Errors.vi.

Automatic Error Handling (AEH) is a useful feature. It captures errors that were otherwise left unwired by the programmer (intentionally or accidentally). It represents a "safety net" that can make the programmer aware of errors that they may otherwise remain unaware of.

 

Problem: Currently AEH functionality is only available in the development environment. It is not available in built executables. Even when all VIs in a project have AEH enabled, once built into an EXE, all VIs behave as if AEH was disabled.

 

Solution: It should be possible to honour each VI's AEH setting in built EXEs too, not just in the development environment. The EXE build specification could contain a setting named "Honour each VI's Automatic Error Handling setting in EXE". When ticked (enabled), any VIs for which AEH was enabled in the development environment would continue to benefit from AEH behaviour in the EXE. Any VIs for which AEH was disabled would continue to have it disabled in the EXE. This means that, from an error handling/error manipulation point of view, the application would behave identically when run as an EXE and when run in the development environment. This is more consistent, and can be helpful.

 

The current behaviour (forcibly disabling AEH in EXEs) means that errors that were not discovered during development-environment testing are prone to being "swept under the carpet" in EXEs. In other words, EXEs are currently overly "optimistic" - they can make the programmer believe that everything is OK when in fact one or more unhandled errors are occurring, errors that would have been visible in the development environment. This is particularly relevant to apps that run for long periods of time (e.g. life cycle testers) and that may encounter errors that were simply unforeseen or untested in the development environment (e.g. an error after one month of continuous running due to running out of disk space when saving a measurement log file to disk).

 

The screenshot below shows how the new setting could look in the Advanced page of the EXE build spec.

3 Screenshot 1 (edited).png

Notes

  • To be absolutely clear: I am not asking for AEH to be enabled by default in all EXEs. I am also not asking for AEH to become enabled in all VIs when the new build spec setting is ticked - that would override the AEH setting of all VIs, which is not what I am proposing.
  • The default value of the new setting should be False (unticked). When False, the built EXE would behave exactly as it does now - AEH would be disabled in all VIs. This would maintain the current behaviour as default.
  • The new setting would give the programmer more control - it would allow the programmer to decide whether they want AEH or not in their EXE. Currently AEH is taken away in EXE, even when we (professional LabVIEW programmers) might want it enabled.
  • I would be happy if the new setting was available only for desktop, non-real-time applications, and not on Real-Time targets.

TL;DR: please make building in LabVIEW versions after 2023 Q3 as fast as in older versions when the compiled object cache is empty or when file timestamps have changed.

 

LabVIEW 2023 Q3 includes a major change to the app builder: 

 

LabVIEW 2023 Q3 has improved cache behavior for packed project libraries and applications.

 

The first build will populate the cache, and then the subsequent builds will be much faster.

Great! It is pretty easy to confirm this and it works! Unless...

 

1. File timestamps change - such as when you clean a directory and clone project files in your CI job

  • Git does not make file timestamps match the commit date - when you clone fresh, all timestamps are set to the current time, and when you change branches, every file that changes is reset to now

2. The cache is cleared prior to the build - such as when you want a repeatable build unaffected by prior state so you set your CI jobs to clear it before each build

3. You are doing a "first build" on a new project - such as when you use a container or VM disk image to reset the entire environment to a clean state prior to your CI job

 

In all my testing, any of these conditions reliably causes builds to be slower in LabVIEW 2023 Q3 than in earlier versions: with a cleared cache, typically by ~60%; with new timestamps and a persisted cache, the slowdown depends on how much of the code has new timestamps.

 

In other words, I expect most everyone using CI will find building slower in 2023 Q3+ than in earlier versions. So, the idea is: whatever optimizations were done, include an option to revert to the old behavior - build spec, config INI, editor option, whatever.
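Until then, one possible CI-side workaround (assuming the cache keys on file timestamps, and that git is on PATH) is to reset each tracked file's mtime to its last commit time after the clone; a sketch, which will be slow on very large repositories:

```python
import os, subprocess

# Reset every tracked file's modification time to its last commit time, so a
# timestamp-based cache sees unchanged files as unchanged after a fresh clone.
tracked = subprocess.check_output(["git", "ls-files", "-z"]).decode().split("\0")
for path in filter(None, tracked):
    commit_time = int(subprocess.check_output(
        ["git", "log", "-1", "--format=%ct", "--", path]).decode().strip())
    os.utime(path, (commit_time, commit_time))
```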

I'm proposing a new certification, one certification to rule them all, the Certified LabVIEW Gangster Group Developer.

 

Or simply a fully functioning CLA that requires 3 people to take the test.

Architect

Developer

Embedded Developer


It should function using an FPGA, or a simulated interface.

 

 

I was recently trying to develop a function to navigate through a deeply nested directory structure and came across system path length limitations, which could potentially be addressed by a "change directory" function.

 

I realize I could use System Exec with cmd /c cd <path>, but I found this extremely slow.
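For reference, this is roughly what the "change directory" approach looks like in a text language (a Python sketch; note that the working directory is process-global, so this is not thread-safe):

```python
import os

def count_files(directory):
    """Walk a deeply nested tree using short relative 'change directory' steps,
    so no single path string ever approaches the system path-length limit."""
    os.chdir(directory)                    # descend one level with a short relative name
    total = 0
    try:
        for entry in os.listdir("."):
            if os.path.isdir(entry):
                total += count_files(entry)
            else:
                total += 1
    finally:
        os.chdir("..")                     # climb back up so the caller's cwd is restored
    return total
```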

The array "Startup/Always Included" carries these Vi's into pre/post build action VI's.

The order of the VIs in this array depends on the order in which they were added to or created in the project, not the order in which they appear in the project window itself.

Here's the project window:

GICSAGM_0-1718176051948.png

Here's the order within the pre-build VI:

GICSAGM_1-1718176157393.png

 

The order is NOT the same. "startup" is first, but if you delete it from the project and then re-add it, it becomes the last.

 

I suggest that the order in the pre/post-build VI should be:

first: startup.vi

following: the always-included VIs, in the same order they appear in the project window.

 

Also, the always-included list should have a mechanism to change the order of the VIs in the list - be it up/down arrow buttons on the side that move the selected VI, or a similar command in a right-click drop-down menu.

 

In this way, any pre/post-build action that involves any of these VIs can be clearly defined and remain stable over the lifetime of the project, without the risk of operating on the wrong VI or having to edit the pre/post-build VI to add or update VI names every time there is a change in the startup/always-included list.

 

One of the things LabVIEW is best at is its innate parallelism. Parallelizing for loops with a right-click is something other languages wish they had. Having all code on the block diagram (or equivalent) innately run in parallel is something few other languages have even tried, much less succeeded at.

 

Crunching large datasets can be sped up through the use of GPUs, and over the past decade or more, Nvidia has kept its promise of providing a common interface to its hardware through CUDA.

 

Having a well-maintained LabVIEW add-on that delivers GPU parallelism in the same way we've seen LabVIEW deliver CPU parallelism would be a game changer for many labs and manufacturing environments. It could also help LabVIEW become a leader in the AI space.

 

I suggest creating this add-on package with CUDA as the underlying GPU interface, in order to keep the code easy to manage while also supporting a large number of GPUs.
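As a sketch of the ergonomics such an add-on could aim for, here is how array code moves to the GPU in Python via CuPy, which is built on CUDA (illustrative only; not a proposal for the LabVIEW API):

```python
import numpy as np
import cupy as cp   # CUDA-backed library covering much of the NumPy API

cpu_data = np.random.rand(10_000_000).astype(np.float32)

gpu_data = cp.asarray(cpu_data)               # transfer the array to GPU memory
gpu_result = cp.sqrt(gpu_data) * 2.0 + 1.0    # element-wise math runs as CUDA kernels
cpu_result = cp.asnumpy(gpu_result)           # transfer back to the host when needed
```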