LabVIEW Idea Exchange


It would be nice to get 64-bit references. The existing 32-bit references are built up from two main parts: a descriptor represented on 12 bits and an address represented on 20 bits. If the application requires more than the available 1,048,576 references, LabVIEW generates an error because the reference memory is full. The workaround is to close references that are already in memory. If an application requires serialization, this limit has to be kept in mind: if the application serializes references driven by input data, the input data has to be limited, which means that really big data sets cannot be handled even though the machine has enough memory for them. Unfortunately, this aspect of LabVIEW has not changed in the last 30 years, although 64-bit processor architectures are used worldwide and 64-bit LabVIEW plays a bigger role every year.
I think the main reason is backward compatibility... but would it be possible to provide 64-bit references that can be chosen by the developer, where the descriptor field is unchanged but an additional 32 bits are available for addressing? For example, a new option in the LabVIEW settings, "use 64-bit references", or a (polymorphic) selection on every native component that can create a reference. If the feature is controlled by an (environment) setting, the compiler uses it everywhere; if it is controlled at the component level, the references have to be checked for conflicts. In the end, the architect/developer/application is no longer limited inside LabVIEW, because the available 4,503,599,627,370,496 addresses are more than enough for a long time to come.
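For scale, the arithmetic behind those numbers (a quick sketch in Python, assuming the 12-bit descriptor / 20-bit address split described above):

DESCRIPTOR_BITS = 12

current_addresses  = 2 ** (32 - DESCRIPTOR_BITS)   # 1048576 references -> the limit hit today
proposed_addresses = 2 ** (64 - DESCRIPTOR_BITS)   # 4503599627370496 references with 32 extra address bits

print(current_addresses, proposed_addresses)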

It would be nice to have a performant way to execute an abstractly represented VI.

There are currently two generic ways to execute a VI by reference:

- abstractly represented but not performant (front panel required, running in the UI thread)

- specifically represented and performant

Executing a VI currently.png

Is it possible to provide such a functionality which combines the advantages of both solutions like this?

Executing a VI proposed.png

This means the target VI could be handled in an abstract way without coupling to the specific VI connector pane.

The Desktop Execution Trace Toolkit (DETT) collects hundreds to thousands of data points that indicate memory allocations, queue references, events, and more.  The data is always available and often overwhelming.  The only way to make sense of the data quickly is to create custom probes that highlight items of interest; however, that only helps after you know which VI is worth probing further.

 

The idea:

Feed all the data normally collected by the DETT about a LabVIEW project directly into Nigel so that it can provide human-friendly suggestions explaining:

Level 1

How often a VI allocates memory in a repeated fashion

The time jitter between different events

VIs that could be good candidates for refactoring

 

Level 2

Improvements to the code within that VI based on best practices for optimal compiler operations.

 

This is based on a discussion at GDevCon #6 on September 10th, 2025.

 

It would be nice to have a managed way to get LabVIEW Class properties both in the development and in the runtime environment: the names of the properties and the corresponding accessor VIs, with path and scope information.

Currently the only way to get this information is to handle the class files with XML parsing methods. This is neither a managed solution, nor does it work for classes that are found in compiled code.

There is a managed way to open a library both in the development and in the runtime environment via the Library.Open method. The same method is available for Class Libraries, but without the possibility to open the Class in the runtime environment, although the Class Library is a specialization of the Library.

LVClassOpen.png

Therefore, getting the hierarchy and/or the members of a Class in the runtime environment can only be done in a terrifyingly non-managed way.

It would be nice to get the same functionality from this method as is available for Libraries.
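For reference, the non-managed XML workaround mentioned above looks roughly like this (a Python sketch; the Item/Name/URL element and attribute names are assumptions based on typical .lvclass files and may differ between LabVIEW versions, and this only works on source files, not in built code):

import xml.etree.ElementTree as ET

def list_class_member_vis(lvclass_path):
    # Parse the .lvclass file, which is plain XML on disk.
    tree = ET.parse(lvclass_path)
    members = []
    for item in tree.getroot().iter("Item"):          # member VIs appear as Item elements
        name = item.get("Name", "")
        if name.lower().endswith(".vi"):
            members.append((name, item.get("URL", "")))
    return members

print(list_class_member_vis(r"C:\myLVProjects\project1\MyClass.lvclass"))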

 

It's always a pleasure to uncover shared-memory problems when working with base vi.lib methods that *should* work as preallocated clones but have uninitialized shift registers, and then having to establish a chain of VI references to safely call those methods in an application with distributed/parallel threads. It feels like extra work, but it does fix the problem. It's a major unknown gotcha unless you're already familiar with the VI you are calling.

 

My most recent hiccup: the NI PID library. I needed two closed-loop controllers in the Actor Framework and was getting very strange timing, setpoint, process variable, and control output crossovers until I realized the stock VIs had shared memory in the form of uninitialized shift registers (despite being called in the context of a pre-allocated actor). Creating references to the VIs I needed and storing them in the actor at launch fixed the problem, but at that point the effort of writing my own PID VI starts to be a favorable time tradeoff. At least I am able to peek under the hood of the PID library; other parts of vi.lib... not so much.
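In text-language terms, the uninitialized-shift-register problem behaves roughly like this (a Python analogy only, not NI's actual implementation):

_shared_integral = 0.0                # one copy for ALL callers of the non-reentrant VI -> crosstalk

def pid_shared(error, ki):
    global _shared_integral
    _shared_integral += ki * error    # both controllers accumulate into the same state
    return _shared_integral

class PidInstance:                    # per-controller state, like a preallocated clone / by-reference call
    def __init__(self):
        self.integral = 0.0
    def update(self, error, ki):
        self.integral += ki * error
        return self.integral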

 

Or maybe this is a teaching problem. I haven't come across ways of navigating this issue in official NI documentation; in fact, the way I learned that I needed to call PID.vi by reference was from the forum, and rather matter-of-factly at that. There are a couple of great blogs that cover this issue in detail, so I don't feel alone in my ignorance: Maintaining State Information in LabVIEW Applications, Part 5 - LabVIEW Field Journal Archives

Many new Windows PCs run on ARM processors (such as Qualcomm's Snapdragon) rather than the x86 architecture. LabVIEW does not support ARM.

 

A few years back, my LabVIEW software would work on any Windows PC. That is no longer the case. That is a huge and increasing limitation.

This topic keeps coming up randomly.  A LabVIEW class keeps a mutation history so that it can load older versions of the class.  But how often does this actually need to be done?  I have never needed it.  Many others I have talked with have never needed it.  But it often causes problems as projects and histories get large.  For reference: Slow Editor Performance with Large LabVIEW Projects Containing Many Classes.  The history is also just added bloat to your files (lvclass and any built files that use the lvclass) for something most of us do not need and sometimes causes build errors.

 

My proposal is to add an option to the class properties to keep the mutation history.  It can be enabled by default to keep the current behavior.  But allowing me to uncheck this option and then never have to worry about clearing the history again would be well worth it.

Similar ideas have already been posted, but none of them totally hits what I'm missing.

 

As always - what is the current situation?
- LabVIEW is installed under "C:\Program Files (x86)\..." - a folder that usually requires admin rights to modify.

- that folder contains the standard VI libraries: vi.lib / user.lib / instr.lib

- my project lives somewhere else - e.g. "C:\myLVProjects\project1", "C:\myLVProjects\project2", ...

- my first project is LabVIEW 32-bit, my second is LabVIEW 64-bit
- one project needs to have the compiled code separated from the VI, the other one does not
- one project needs some toolkits from VI Package Manager, the other one does not - or maybe even needs ones that are incompatible with the first project

- one project needs VIs from an additional search path, the other one does too, but in a different version


long story short:
There are many situations where the configurations of two or more projects conflict with each other. Therefore, switching between two projects can take quite a bit of effort, from adjusting configurations to (un)installing toolkits.

What I'd like to propose is the following:
Having a new option in the LabVIEW start screen: "Create a new development environment".
The only thing the user has to do is select a folder on a drive with enough free space.
LabVIEW shall then set up an entire code and configuration environment. That means it copies vi.lib, user.lib, and instr.lib to this folder, creates a LabVIEW.ini and a shortcut to the exe that uses this INI, creates a folder for the compiled object code cache, and so on. Add-ons installed from VI Package Manager shall also be stored in this new folder structure.
And of course, there shall be a place where I can manually add some libraries, my project files, and finally also my build results.
If LabVIEW is now started from that shortcut, it only uses the VIs from the environment folder, as long as relative paths are used (which shall be the default).

Again in short:
I expect an encapsulated environment that contains everything needed to develop one project entirely independently of a second and third project.
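A rough sketch of what such an environment setup could do on disk (Python pseudocode; the folder names, the LabVIEW install path, and the helper itself are illustrative assumptions, not an NI specification):

import shutil
from pathlib import Path

# Assumed LabVIEW install location; adjust for version and bitness.
LV_DIR = Path(r"C:\Program Files (x86)\National Instruments\LabVIEW 2025")

def create_environment(env_root: str) -> None:
    env = Path(env_root)
    for lib in ("vi.lib", "user.lib", "instr.lib"):
        shutil.copytree(LV_DIR / lib, env / lib)                 # private copies of the standard libraries
    (env / "compiled_cache").mkdir(parents=True, exist_ok=True)  # per-environment object code cache
    (env / "project").mkdir(exist_ok=True)                       # my project files
    (env / "builds").mkdir(exist_ok=True)                        # build results
    (env / "LabVIEW.ini").write_text("[LabVIEW]\n")              # environment-local configuration

create_environment(r"D:\LVEnvs\project1")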

 

Using property nodes, it's possible to retrieve a list of subVI references within a VI's block diagram. However, there's currently no way to link each reference to its specific subVI instance.

For example, if you place three instances of subVI "A" in a diagram, you can obtain three references to these subVIs, but there is no way to determine which reference corresponds to which instance.

The proposed idea is to introduce a "This SubVI" reference — a property or node that would return the reference of the current subVI instance (the one hosting the object or node). This enhancement would significantly improve scripting capabilities, support tool development, and aid the creation of debugging tools.

This is an idea I've been working on for a while. It's time to let others start evaluating it. 🙂

000.png

001.png

 ^ I included the above for Dmitry Sagatelyan and similar folks who have asked me for these things over the years so they know the mindset to use when evaluating the idea. But it's written up below for LabVIEW users who only know LabVIEW as it stands today (Q3 2024).

002.png

003.png

004.png

005.png

006.png

007.png

 

Feedback and questions welcome. 

Problem: Many native VIs use the Non-reentrant execution reentrancy setting.

 

Solution: The vast majority of native VIs should use the Preallocated clone reentrancy setting.

  • The native VIs that need to use Non-reentrant or Shared clone are few and far between - they should be identified on a case-by-case basis. Their Context Help and/or Detailed Help should explain why they need to be set to Non-reentrant or Shared clone.

The following is a selection of vi.lib VIs that should use Preallocated clone. This selection is meant to serve as a starting point and is not comprehensive.

 

1.png

2.png

3.png

 

Notes:

  • This idea is related to: The reentrancy of new VIs should be "Preallocated clone". Both argue in favour of using the Preallocated clone setting more widely.
  • A significant number of native VIs are already configured to use Preallocated clone, which is great.
  • There are curious cases where closely related VIs are set to different reentrancy settings. For example, Color to RGB.vi is rightly using Preallocated clone, while RGB to Color.vi is Non-reentrant. Similarly, Trim Whitespace.vi is rightly Preallocated clone, while Normalize End Of Line.vi - which lives next to it on the String palette - is Non-reentrant.
    • This suggests that the reentrancy setting of some native VIs was chosen haphazardly. This needs to be rectified.
  • The fact that so many native VIs are non-reentrant partly defeats LabVIEW's remarkable ability to create parallel code easily. Loops that are supposed to be parallel and independent are in fact dependent on each other when they use multiple instances of these non-reentrant native VIs. When an application calls these native VIs from multiple locations, it is as if "hidden semaphores" were added between the various call locations (see the sketch after this list). This leads to less performant applications (more CPU cycles, longer execution time, larger compiled code size in the EXE).
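To make the "hidden semaphore" point concrete in text-language terms (a loose Python analogy, not how LabVIEW is actually implemented):

import threading, time

_hidden_semaphore = threading.Lock()      # the single shared data space of a non-reentrant VI

def non_reentrant_vi():
    with _hidden_semaphore:               # every call site in every "parallel" loop waits its turn here
        time.sleep(0.001)                 # the VI's actual work

def preallocated_clone_vi():
    time.sleep(0.001)                     # each call site owns its clone, so calls run concurrently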

The recently introduced Raspberry Pi is a very popular 32-bit ARM-based single-board computer. It would be great if we could program it in LabVIEW. This product could leverage the already available LabVIEW Embedded for ARM and the LabVIEW Microcontroller SDK (or other methods of getting LabVIEW to run on it).

 

The Raspberry Pi is a $35 (with Ethernet) credit-card-sized computer that is open hardware. The ARM chip is a Broadcom ARM11 running at 700 MHz, resulting in 875 MIPS of performance. By way of comparison, the current LabVIEW Embedded for ARM Tier 1 (out-of-the-box experience) boards have only 60 MIPS of processing power. So, about 15 times the processing power!

 

Wouldn’t it be great to program the Raspberry Pi in LabVIEW?

At the moment, using Diagram Disable Structure with Channel Wires results in a broken arrow, which prevents compilation. However, it would be highly beneficial to allow temporarily disabling the transmission of values or messages to different parts of a program—for example, during debugging.

Quiztus2_0-1755004887752.png

A tedious workaround used in the past involved connecting ))Channel.vi with ChannelOp.ctl from the generated Channel Wire library to the channel in the enabled case. This allowed bypassing the broken arrow issue.

Quiztus2_1-1755005119160.png

However, since around 2025, these components have been made private, making the workaround even more cumbersome and less viable.

I’d like to propose the following improvements:

  • Native compatibility between Channel Wires and Diagram Disable Structure.

  • Optional front panel connection for Channel Wire controls—i.e., they shouldn’t require a wired connection to function or compile.

These changes would streamline debugging and development workflows and reduce unnecessary complexity.

 

For a few years now, we have had native support for Map and Set in LabVIEW.

How about adding a DataFrame type similar to other programming languages (possibly even with native interaction with Python)?

 

A DataFrame type would be a 2D value where columns can have different datatypes. Currently, one needs to work around this by creating a 1D array of a cluster (or class) type.

Access to the data would be with numerical indexing for the rows and field access (like in a cluster) for the columns.
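For illustration, this is roughly what the requested behaviour looks like in pandas (the kind of Python interaction the idea alludes to; the column names here are just an example):

import pandas as pd

df = pd.DataFrame({
    "channel": ["AI0", "AI1", "AI2"],     # string column
    "gain":    [1.0, 2.5, 10.0],          # float column
    "enabled": [True, False, True],       # boolean column
})

row = df.iloc[1]                          # numerical indexing for the rows
print(row["channel"], row["gain"])        # field-style access for the columns -> AI1 2.5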

 

KR, Benjamin

Typical question in development process: "How quickly does my code execute? What runs faster... Code A or Code B?" So, if you're like me, you throw in a quick sequence that looks like this:

 

TimingDuringDevelopment.png

 

AHHH! What a mess! It's so hard to fit it in, with FP real estate so packed these days!

 

We need this:

ProposedTimingDuringDevelopment.png

 Just like my other idea, and for simplicity's sake NI, I would be PERFECTLY happy even if you had to set up the probes during edit mode, and were not able to "probe" while running.

 

 As a bonus, this idea may be extrapolated into n timing probes, where you can find delta t between any two of the probes.
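For comparison, the text-language equivalent of that quick-and-dirty measurement is just two timestamps and a subtraction (a Python sketch; the proposed timing probes would give the same information without eating diagram real estate):

import time

t0 = time.perf_counter()
# ... Code A (or Code B) under test ...
t1 = time.perf_counter()
print(f"elapsed: {(t1 - t0) * 1000:.3f} ms")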

Some languages like Rust and Zig have a feature called Tagged Enums (or Sum Types) that allows you to create a data type that can be one of several different types, with a name associated with each type. In LabVIEW, however, Enums are limited to consecutive numeric integer values -- there's no way to associate a type with each named value.

 

The power of combining an Enum with a data type for each value is that we could potentially use a Case Structure as a switch statement with type assertion and data conversion built in! This would allow us to create robust, type-safe code that is easier to maintain and understand.
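As a rough text-language analog (Python 3.10+ dataclasses plus structural pattern matching; the equipment types below are made up for illustration, and Rust/Zig express this more directly with their native enums):

from dataclasses import dataclass

@dataclass
class Dmm:                    # each "enum value" carries its own payload type
    visa_resource: str

@dataclass
class Scope:
    channel_count: int

Equipment = Dmm | Scope       # the tagged enum / sum type

def describe(eq: Equipment) -> str:
    match eq:                                 # the "case structure" with built-in
        case Dmm(visa_resource=res):          # type assertion and data conversion
            return f"DMM on {res}"
        case Scope(channel_count=n):
            return f"Scope with {n} channels"

print(describe(Dmm("GPIB0::22::INSTR")))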

 

example_equipment_variant.png

See this github repository for a more complete proposal and an example implementation that gets us closer to achieving this in LabVIEW.

There are times when, in the development environment, I leave a VI with modal window properties open and then run the main application that also calls this VI. This locks all running windows because of the modal VI. I propose a button in the taskbar that aborts all running VIs, OR perhaps a right-click list of all running VIs 🙂

 

abort_all_running_vi.png

 

 

Please implicitly use the array index during Index / Replace Elements in the In Place Element Structure when I am starting from index 0.

 

Present method:
image.png

 

Expected method:
image (1).png

 

[admin edit 2021-02-24]: placed images in-line with text and removed them as attachments

The LabVIEW compiler currently appears to use only one core of a multi-core processor.  It would be nice if it fully utilized multiple cores to speed up building of large projects and recompilation of VIs when editing or opening source code.