LabVIEW Idea Exchange


If the Facade VI of an XControl registers for dynamic events (whatever the source), the firing of one of these events will NOT trigger actual activity (Facade VI execution) within the XControl.

 

If we register for a static event (Mouse Move on the front panel, for example), we DO get a trigger for the XControl (the Facade VI becomes active).

 

This gives rise to the unusual situation where the dynamic events are registered but not serviced UNTIL a static event fires, after which all of the pending dynamic events are also dealt with.

 

Please make it possible for dynamically registered events within an XControl to "trigger" the XControl just as static events do.

 

Shane.

If you do a dynamic call from a built application and it fails because the called VI depends on a subVI that cannot be located in the built-application environment, the only way to figure out what went wrong is to rewrite your app so that it opens the front panel of the VI, and then click on its broken run button... There should be a way to get that error description without having to modify the application.

 

The real challenge, however, comes when you run into the same problem on a real-time target. There you cannot open the front panel... and basically have to search in the dark for a solution.

 

Feedback to the programmer's machine would be nice, but it should not work only when you have LabVIEW running. It should be possible to, e.g., put a switch in an INI file... and then get a text log that describes, in full detail, what goes wrong with the dynamic calls.
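For instance (the key names, path, and log format below are purely hypothetical, just to sketch the idea):

    [LVRT]
    ; hypothetical switch enabling a dynamic-call trace log
    DynamicCallDiagnostics=TRUE
    DynamicCallLogPath=c:\ni-rt\dynamic_calls.log

    ; resulting log (hypothetical format):
    ; 12:03:41 Open VI Reference "Modules/Logger.vi" FAILED
    ; 12:03:41   missing subVI: "Utilities/Write Entry.vi"
    ; 12:03:41   search paths tried: <list of paths>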

I like using Linux whenever I can, particularly when running large software like LabVIEW, since it tends to crawl on my XP systems. I was happy to realize that LabVIEW works on Linux, but soon after I was disappointed by how limited it is when interfacing with hardware. I need the Real-Time module to interface with my real-time CompactRIO. I also need Linux support for the FPGA module, as I need to program the FPGA attached to my cRIO. I'm sure I am not the only person who would like the ability to do this.

 

Without support for any of the hardware or LabVIEW modules I need, the Linux version of LabVIEW is entirely useless to me, and XP as an OS simply cannot perform up to par for me.

Hello everybody,

 

(As suggested, I will separate my idea Expand the functionality of Event structures into four separate ideas to allow giving kudos separately.)

 

Make it possible to add conditions to events, e.g. only allow event foo if condition bar is True. I'm not sure how this should look. Some static conditions could be "key pressed == a" coupled with the "key down" event. Conditions could also be coupled to controls by registering the control reference, like dynamic events. If my idea Expand Event structure functionality: Register new types of references is realized, a condition could also be something like "TCP/IP connection is offline". If a condition is not fulfilled, then either nothing happens or there is a left-side node which indicates the status of the condition. This should be configurable.
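Today the equivalent guard has to live inside the event case itself; roughly (a C-style sketch of the current workaround, not LabVIEW code):

    enum Event { EVT_NONE, EVT_KEY_DOWN, EVT_VALUE_CHANGE };

    /* Workaround today: the case fires for every event and the
       condition is checked manually at the top of the case. */
    void handle_event(enum Event evt, char key)
    {
        switch (evt) {
        case EVT_KEY_DOWN:
            if (key != 'a')   /* the proposal would move this guard */
                return;       /* into the event registration itself */
            /* react to the 'a' key here */
            break;
        default:
            break;
        }
    }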

 

Regards,

Marc

If a DLL is configured in a Call Library Function Node and is either missing or defective, the entire program cannot be executed. In most situations, this behavior is acceptable.

However, it can also be quite frustrating, especially if the part of the program that uses the DLL is not called. Additionally, it would be helpful for debugging purposes if the program could still execute even when a DLL is unavailable.

Therefore, I propose that the configuration dialog of the Call Library Function Node includes an additional checkbox labeled "Break execution if DLL cannot be loaded." If this checkbox is checked (default setting), the behavior will remain as it currently is. If the checkbox is unchecked, the program should run until the affected node is executed. At that point, the node's error output should indicate either "DLL not found" or "DLL found but cannot be loaded," and execution should proceed beyond the node. In this scenario, it will be the programmer's responsibility to handle the error appropriately.
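For comparison, this deferred-load behaviour is what text languages get via explicit runtime loading. A Win32 C sketch of the error path the unchecked option would produce (the DLL and function names are made up; this is not NI's implementation):

    #include <windows.h>

    typedef int (__cdecl *AddFn)(int, int);

    int call_add(int a, int b, int *result)
    {
        HMODULE dll = LoadLibraryA("mymath.dll");
        if (dll == NULL)
            return 1;                 /* "DLL not found" -> error out */

        AddFn add = (AddFn)GetProcAddress(dll, "add");
        if (add == NULL) {
            FreeLibrary(dll);
            return 2;                 /* "DLL found but cannot be loaded" */
        }

        *result = add(a, b);
        FreeLibrary(dll);
        return 0;                     /* no error */
    }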

As an extension of this idea, I could also envision a new entry in the LabVIEW.ini / application.ini file to assist with debugging. If this entry is not present, the explicit configuration of the node will be used. If it is available, it will override the explicit configuration with either true or false.

Please also see this related idea: https://forums.ni.com/t5/LabVIEW-Idea-Exchange/get-DLL-path-from-the-call-library-function-node/idi-p/4433632

Problem: When creating a new VI using File >> New VI (Ctrl+N), its reentrancy setting is Non-reentrant execution.

 

Solution: The reentrancy setting of new VIs should be Preallocated clone reentrant execution.

 

Background:

In most applications the vast majority of VIs could and should be set to "Preallocated clone reentrant execution". In a nutshell, Preallocated clone ensures that each instance of a given VI is completely independent of all other instances. This is desirable in the vast majority of cases.

 

We should encourage best practices by changing the default reentrancy setting to the setting that is desirable in most cases, namely Preallocated clone.

 

The other two options - Non-reentrant execution and Shared clone reentrant execution - are the best choice in specialised cases only.

  • Non-reentrant is necessary when needing to guarantee that multiple instances of a VI block each other from executing at the same time and/or when wishing to use uninitialised shift registers as a means of asynchronous data communication (e.g. FGV or Action Engine).
  • Shared clone is best when aiming to reduce memory usage. Shared clone can help on memory-scarce targets such as cRIOs or sbRIOs, or when dealing with massive amounts of data (arrays of millions or billions of elements). Memory usage is not a concern for the vast majority of VIs, especially on modern desktop systems with 8, 16, or more GB of RAM, and when dealing with reasonable amounts of data (arrays with up to a few million elements).

New VIs set to Preallocated clone would be in good company, as detailed below.

 

1.png
  • The vast majority of LabVIEW nodes rightly execute as if they were set to Preallocated clone. This enables us to use many instances of each node freely without worrying that it will create timing dependencies between vastly different caller VIs.
    • For example, one can use the Add node inside as many DQMH modules or Actor Framework actors as they wish, without creating any timing dependency between those callers, which is great.
    • As far as I am aware, the only nodes that execute as if they were set to Non-reentrant are those that require the UI thread. These nodes execute in a sequential, blocking manner.
  • Many of the VIs that ship with LabVIEW are rightly set to Preallocated clone.
    • I believe that most VIs that ship with LabVIEW and are set to something other than Preallocated clone should be set to Preallocated clone, but this should be addressed in a separate idea.
  • All inlined VIs (and therefore, all VIMs, which must be inlined) execute as if they were set to Preallocated clone.

Further notes:

  • If Non-reentrant was chosen as the default because it was judged to be the friendliest to new users (an argument that I believe does not outweigh the arguments in favour of Preallocated clone), then at least there should be a Tools >> Settings option to enable people to change the default reentrancy setting.
  • The fact that new VIs are by default Non-reentrant defeats the benefit that LabVIEW offers in terms of ease of creating parallel threads. Many of these threads will in fact not be truly parallel, because of the undesired one-instance-running-at-a-time blocking effect of Non-reentrant instances that execute in vastly different areas of an application.
  • I have never tested this, but EXEs are likely to be smaller (perhaps by a few KB, or even a few hundred KB) when the vast majority of VIs are set to Preallocated clone. When using multiple instances of Non-reentrant VIs, LabVIEW must add some kind of mutexes (mutual-exclusion locks) to the compiled code. When VIs are set to Preallocated clone these mutexes disappear, leaving behind smaller, cleaner compiled code (see the sketch after this list).
  • The disappearance of mutexes might also enable the LabVIEW compiler to perform optimisations that are not possible when non-reentrant boundaries are present.
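The sketch mentioned above - a loose C analogy of the two settings, not LabVIEW internals:

    #include <pthread.h>

    /* Non-reentrant: one shared state; every caller, anywhere in the
       application, serialises on the same lock. */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static double shared_state;

    double non_reentrant_step(double x)
    {
        pthread_mutex_lock(&lock);    /* blocks all other call sites */
        shared_state += x;
        double result = shared_state;
        pthread_mutex_unlock(&lock);
        return result;
    }

    /* "Preallocated clone": each call site owns its own state, so
       there is no lock and no coupling between callers. */
    typedef struct { double state; } Clone;

    double preallocated_clone_step(Clone *c, double x)
    {
        c->state += x;
        return c->state;
    }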


To achieve better performance and efficiency, the “In Place Element Structure” should be used instead of the “Bundle by Name” method for “VIs for Data Member Access” (specifically for write VIs). The "Mark as Modifier" option should also be activated.
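Loosely, the difference in C terms (hypothetical names; only an analogy for the copy-versus-in-place behaviour, not the generated accessor code itself):

    /* Hypothetical class private data: one scalar field plus bulk data. */
    typedef struct {
        double gain;
        double data[1000000];
    } Model;

    /* "Bundle by Name" style: the whole value passes through the
       accessor, inviting a copy of the large struct. */
    Model set_gain_by_value(Model m, double gain)
    {
        m.gain = gain;
        return m;
    }

    /* "In Place Element Structure" + "Mark as Modifier" style: only
       the field that changes is touched; no copy of the bulk data. */
    void set_gain_in_place(Model *m, double gain)
    {
        m->gain = gain;
    }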

 

TapioS_4-1740049307134.png

 

TapioS_3-1740049278327.png

If the compiler optimization can do this automatically, I apologize for this post. 

It would be useful if LabVIEW offered the option to use an Error Collector Node inside structures.

 

The Error Collector Node would collect or capture any and all unhandled errors that occurred inside the structure that the node is part of. If one or more unhandled errors occurred, the Error Collector Node would output the first error that occurred (in chronological order). If no errors occurred, the node would output "No Error".
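The collection semantics would be roughly first-error-wins. A minimal C sketch of the intent (not a proposed implementation; a real one would need synchronisation across parallel branches):

    #include <stdbool.h>

    typedef struct { bool status; int code; const char *source; } LVError;

    static LVError collected = { false, 0, "" };

    /* Called whenever a node inside the structure finishes with an
       unhandled error; only the chronologically first one is kept. */
    void collect_error(LVError e)
    {
        if (e.status && !collected.status)
            collected = e;
    }

    /* At the structure's output tunnel: emit the collected error,
       or "No Error" if nothing was captured. */
    LVError error_collector_output(void)
    {
        return collected;
    }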

 

The following annotated screenshot shows how the Error Collector Node would help reduce the number of error wire segments and Merge Errors nodes.

Combined 1.png

The following screenshot shows a second example. Notice that using the Error Collector Node would result in a significant reduction of block diagram items: 10 fewer wire segments and 1 fewer Merge Errors node. The reduction could be even larger considering that the other cases of the case structure (potentially dozens of cases) would also benefit from the same Error Collector Node placed on the border of the case structure.

Combined 2.png

The two screenshots above are examples created specifically for the purpose of posting this idea. The following screenshot is a real-world example (taken from production code) of a VI that could benefit from the Error Collector Node, which would remove the need for numerous error wire segments and Merge Errors nodes.

1.png

Notes

  • The Error Collector Node was prototyped as a hexagonal output tunnel in the screenshots above. I would, of course, be happy if a different glyph or shape is chosen to represent this new type of tunnel.
  • The Error Collector Node would essentially behave like a localised version of the Automatic Error Handling functionality. It would collect or capture any unhandled errors inside its area of responsibility (the structure), and would convert those errors back into "manually-handled" errors - i.e. errors that are passed downstream via an error wire.
  • The Error Collector Node would be useful especially in situations where many code branches execute in parallel as it eliminates the need for lots of Merge Error nodes.
  • The first two screenshots mention the words "approximately equivalent to". "Approximately" because, if multiple errors occur inside the structure, the Error Collector Node does not guarantee which of those errors is output, whereas the Merge Errors node does. For example, in the first screenshot, if all three VIs (VI A, VI B, and VI C) experience an error, there is no guarantee as to which of those errors the Error Collector Node would output. The node would output the first error that occurred (in chronological order), so it would depend on which VI finished execution first. This could change from execution to execution, and from machine to machine, whereas the right-hand-side version, which uses Merge Errors, would always output the error generated by VI A.
    • This would usually not be an issue in practice. If more determinism is needed, the programmer could, of course, fall back on manually wiring the errors to define an exact error behaviour.
  • It should be possible to add a maximum of one Error Collector Node to each structure: Flat Sequence Structures, Case Structures, For Loops, While Loops, etc.
  • It would be useful if the Error Collector Node could be used outside of structures (directly on the outermost region of a block diagram). Again, enforcing a maximum of one Error Collector Node per outer block diagram would make sense. The Error Collector Node would execute after all other block diagram nodes and structures (would be the last thing to execute). The output of the Error Collector node could then be fed directly to an "Error Out" block diagram terminal. This would remove the need to wire most error wires inside the VI, while ensuring that no error goes uncaptured.
  • If the Error Collector Node exists only as a structure output tunnel, and not as a stand-alone node outside of structures, then a better name for it might be the Error Collector Output Tunnel.
  • The behaviour of the Error Collector Node would be unaffected by Automatic Error Handling being enabled or disabled in that VI.
  • Using Error Collector Nodes would benefit programmers in the following ways:
    • Would reduce the amount of "click-work" that programmers currently need to do (the number of wire segments and Merge Error nodes that need to be created), while ensuring that all unhandled errors are captured.
    • Would reduce the amount of block diagram "clutter". This "clutter" is apparent in the third screenshot, which shows many criss-crossing error and DAQmx wires.
    • Would decrease the size on disk of VIs thanks to fewer block diagram items needing to be represented in the VI file. This would help towards making git repositories a little bit smaller, and loading VIs into memory a little bit quicker.
  • Informally, using Error Collector Nodes would sit in-between the strictness of manually wiring all error outputs, and the looseness of relying solely on Automatic Error Handling. The error handling gold-standard would remain manually wiring all error outputs, but using Error Collector Nodes might be "good enough" in many situations, if used judiciously.

Problem: Currently, wiring an error wire to a structure input tunnel that does not continue inside the structure clears the error on the wire.

 

Happy case: When running the VI shown below, Automatic Error Handling correctly detects that the error out terminal of Error Cluster From Error Code.vi is unwired, and handles the error (displays the error as a dialogue window).

2 (edited).png

Unhappy case: Wiring the error wire from the error out terminal of Error Cluster From Error Code.vi to a structure input tunnel clears the error. Automatic Error Handling does not detect or handle the error.

3 (edited).png

In my opinion this was simply an unfortunate design decision (can happen to all of us) back when it was made, decades ago. IMO there is no logical argument to support this behaviour. The fact that the error wire is wired to an input tunnel does not mean that the error was handled. At best, when a programmer intentionally used this technique, it represents a non-self-documenting coding practice (why not use the self-documenting Clear Errors.vi?). At worst, it means clearing errors simply because the programmer forgot to wire the wire through the structure. It means clearing errors when the programmer did not explicitly ask for this. It means "sweeping errors under the carpet", and can result in overly "optimistic" applications (apps that seemingly execute without error when in fact unhandled errors are being generated).

 

Please note that even though the screenshot above shows a Flat Sequence Structure input tunnel, the behaviour applies to every structure (case structure, for loop, while loop, etc).

 

To summarise, the problem is that the screenshot above is functionally equivalent to explicitly using the Clear Errors.vi, as seen below.

4 (edited).png

Clear Errors.vi is of course the self-documenting, recommended method of clearing errors. It should also be the only method of clearing errors.

 

Ironically, Clear Errors.vi itself uses the "clear error by wiring it to input tunnel" technique inside its "0" case, as seen below.

5 (edited).png

To its credit, Clear Errors.vi uses a correct technique for clearing errors inside its other, default case.

6 (edited).png

Another example found "in the wild" of a VI using the "clear error by wiring it to input tunnel" technique. This VI ships with LabVIEW and is found at <LabVIEW installation folder>\vi.lib\Utility\EditLVProj\Identify VIs Among Project Items.vi.

1 (edited).png

Solution: Disable the "clear error by wiring it to input tunnel" behaviour. This would fix what IMO is an incorrect design decision. Unfortunately, fixing this decision now would result in VIs that use the "clear error by wiring it to input tunnel" technique to start throwing unhandled errors if AEH is enabled. This is not ideal, but it might be worth accepting this short-term drawback for long-term gain.

 

Moreover, it may be useful to introduce a Clear Errors node (primitive function). The Clear Errors.vi could then make use of the Clear Errors node inside both of its cases. Alternatively, the Clear Errors node could simply replace and supersede the Clear Errors.vi.

Automatic Error Handling (AEH) is a useful feature. It captures errors that were otherwise left unwired by the programmer (intentionally or accidentally). It represents a "safety net" that can make the programmer aware of errors that they may otherwise remain unaware of.

 

Problem: Currently AEH functionality is only available in the development environment. It is not available in built executables. Even when all VIs in a project have AEH enabled, once built into an EXE, all VIs behave as if AEH was disabled.

 

Solution: It should be possible to honour each VI's AEH setting in built EXEs too, not just in DevEnv. The EXE build specification could contain a setting named "Honour each VI's Automatic Error Handling setting in EXE". When ticked (enabled), any VIs for which AEH was enabled in the development environment would continue to benefit from AEH behaviour in the EXE. Any VIs for which AEH was disabled would continue to have it disabled in the EXE. This means that, from an error-handling point of view, the application would behave identically when run as an EXE and when run in the development environment. This is more consistent, and can be helpful.

 

The current behaviour (forcibly removing AEH in EXE) means that EXEs are prone to having errors that were not discovered during DevEnv testing being "swept under the carpet". In other words, currently EXEs are overly "optimistic" - they can make the programmer believe that everything is ok when in fact one or multiple unhandled errors are occurring, errors that would have been visible in DevEnv. This is particularly relevant to apps that run for long periods of time (e.g. life cycle testers) that may encounter errors that were simply unforeseen or untested in DevEnv (e.g. error after one month of continuous running due to running out of disk space when saving measurements log file to disk).

 

The screenshot below shows how the new setting could look in the Advanced page of the EXE build spec.

3 Screenshot 1 (edited).png

Notes

  • To be absolutely clear: I am not asking for AEH to be enabled by default in all EXEs. I am also not asking for AEH to become enabled in all VIs when the new build spec setting is ticked - that would override the AEH setting of all VIs, which is not what I am proposing.
  • The default value of the new setting should be False (unticked). When False, the built EXE would behave exactly as it does now - AEH would be disabled in all VIs. This would maintain the current behaviour as default.
  • The new setting would give the programmer more control - it would allow the programmer to decide whether they want AEH or not in their EXE. Currently AEH is taken away in EXE, even when we (professional LabVIEW programmers) might want it enabled.
  • I would be happy if the new setting was available only for desktop, non-real-time applications, and not on Real-Time targets.

I'm proposing a new certification, one certification to rule them all, the Certified LabVIEW Gangster Group Developer.

 

Or simply a fully functioning CLA that requires 3 people to take the test.

Architect

Developer

Embedded Developer


It should function using an FPGA, or a simulated interface.

 

 

One of the things LabVIEW is best at is its innate parallelism. Parallelizing for loops with a right-click is something other languages wish they had. Having all code on the block diagram (or equivalent) innately run in parallel is something few other languages have even tried, much less succeeded at.

 

Crunching large datasets can be sped up through use of GPUs, and over the past decade or more, Nvidia has kept their promise of having a common interface to their hardware through CUDA. 

 

Having a LabVIEW add-on that is well maintained, and that can give GPU parallelism in the same ways we've seen LabVIEW deliver CPU parallelism would be a game changer for many labs and manufacturing environments. It could also help LabVIEW be a leader in the AI space.

 

I suggest that this add-on package be created using CUDA for the underlying GPU calls, in order to keep the code easy to manage while also supporting a large number of GPUs.

When a subarray is created, the data is copied from the original array.

 

The LabVIEW compiler does not recognise when the subarray data is subsequently only read.

It would be a major improvement for memory and even CPU usage if the compiler recognised that part of the array is only read, and passed only a reference to that part of the array in memory.
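In C terms, the request amounts to treating read-only subarrays as borrowed views rather than copies (a sketch):

    #include <stddef.h>

    /* View semantics: O(1) - just a pointer and a length into the
       original data, valid as long as the array is only read.
       (Copy semantics would be O(len) time and memory per subarray.) */
    typedef struct {
        const double *data;
        size_t        len;
    } ArrayView;

    ArrayView subarray_view(const double *src, size_t start, size_t len)
    {
        ArrayView v = { src + start, len };
        return v;
    }

    double sum(ArrayView v)
    {
        double s = 0.0;
        for (size_t i = 0; i < v.len; i++)
            s += v.data[i];
        return s;
    }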

 

It would be helpful to add an option like "Digital Value" to the DAQmx Trigger VI (comparable to Analog Window).
Background: when the edge of a digital trigger signal is missed (due to being late in code execution after e.g. waveform reconfiguration), the trigger event does not happen. With this new option it would be possible to execute the trigger event as soon as possible after missing the edge.
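In NI-DAQmx C API terms, the existing edge trigger and the proposed level-style variant might look like this (the level-trigger function below is hypothetical and does not exist in the current API):

    #include <NIDAQmx.h>

    int main(void)
    {
        TaskHandle task = 0;
        DAQmxCreateTask("", &task);
        DAQmxCreateAIVoltageChan(task, "Dev1/ai0", "", DAQmx_Val_Cfg_Default,
                                 -10.0, 10.0, DAQmx_Val_Volts, NULL);

        /* Existing option: fires only on the edge itself. If the edge
           occurs before the task is armed, the trigger never happens. */
        DAQmxCfgDigEdgeStartTrig(task, "/Dev1/PFI0", DAQmx_Val_Rising);

        /* Proposed option (hypothetical function name): fire immediately
           if the line is already at the trigger level when armed. */
        /* DAQmxCfgDigLvlStartTrig(task, "/Dev1/PFI0", DAQmx_Val_High); */

        DAQmxStartTask(task);
        DAQmxWaitUntilTaskDone(task, 10.0);
        DAQmxClearTask(task);
        return 0;
    }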

Many times, at least in my code, it is necessary to act only on a rising, falling, or changed edge of a Boolean signal.

This option could probably also be added to Case structures with a Boolean-type case selector, of course.
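Today this is typically hand-built by remembering the previous value; a minimal C sketch of the pattern (use one detector per signal and per edge type):

    #include <stdbool.h>

    typedef struct { bool prev; } EdgeDetector;

    /* Rising edge: false -> true transition since the last call. */
    bool rising_edge(EdgeDetector *d, bool current)
    {
        bool edge = current && !d->prev;
        d->prev = current;
        return edge;
    }

    /* Changed: any transition since the last call. */
    bool changed(EdgeDetector *d, bool current)
    {
        bool edge = current != d->prev;
        d->prev = current;
        return edge;
    }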

Hi

While working with LabVIEW, the run arrow shows as broken to indicate that the code is incomplete when any required input or output is left unwired.

I would suggest an option in the block diagram to select an active region (for example, by drawing a box). LabVIEW would only consider the code inside the box; any unwired inputs or outputs outside the box would be ignored.

 

I hope  I made my idea clear.

 

Thanks

Asif 

It would be useful to add a Boolean input for resetting "First Call?". Sometimes it might be required to reset the first-call state during a run.
It could look as follows:

KhalilEslami_0-1684738034968.png

 

It makes no sense to write the channel name every time with TDMS Write. You could easily improve write performance and disk utilization if you stored the channel names once and used a smaller alias for them (see example below). There are other workarounds, including buffering:

https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z000000P9reSAC&l=en-US

But the downside of buffering is that the data is held in memory, so if you lost power you would potentially be missing a large chunk of data. Also, this idea is compatible with buffering, in that it would make the files even smaller.

In my example, after just 100 writes the file using the lookup table is 20% smaller.

We can also get even bigger gains if, instead of using hex-ASCII strings for the short names, we used an integer.
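Conceptually, the layout change amounts to writing a name table once and tagging each record with a small integer ID (a generic sketch of the idea, not the actual TDMS format or API):

    #include <stdio.h>
    #include <stdint.h>

    /* Written once, at file creation: ID -> full channel name. */
    static const char *channel_names[] = {
        "Chassis1/Module3/Thermocouple_Inlet_Temperature",
        "Chassis1/Module3/Thermocouple_Outlet_Temperature",
    };

    /* Written on every acquisition: just the small ID plus the sample,
       instead of repeating the long channel name each time. */
    typedef struct {
        uint16_t channel_id;   /* index into the name table */
        double   value;
    } Record;

    void write_record(FILE *f, uint16_t id, double value)
    {
        Record r = { id, value };
        fwrite(&r, sizeof r, 1, f);
    }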

 

TDMS long name lookup.png

I really like the option to use indicators (connected to the connector pane) as the output for web service methods. By default, LabVIEW will serialize them to JSON, but text and XML are also options. It works quite well and saves a lot of coding compared to writing your own serialisation.
I have some suggestions for the serialisation functionality:

 

1. Order the JSON output by tabbing order when there are multiple output indicators. This prevents you from having to cluster all controls into one just to enforce order.

 

2. It would be nice if an enum could be represented by its string instead of its index (see the example after this list).

 

3. Support for maps.
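For example (illustrative values, relating to item 2 above), an enum with items Idle/Running/Error currently serialises as its index, whereas the string form is usually what API consumers want:

    Current output:   { "status": 1 }
    Proposed output:  { "status": "Running" }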

Allow the mechanical action of radio buttons to be "Switch Until Released".

 

The way I make arrow buttons now is to put Switch Until Released buttons in a cluster and watch for the cluster's Value Changed event. When it changes, I convert the cluster into a Boolean array, that array into a number, and then feed that number to a case structure. Switch Until Released radio buttons, with "No Selection" as a necessary default, would make that code nicer. The case selector would be an enumeration instead of a number.
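The workaround described above, in C-style shorthand (illustrative only):

    #include <stdbool.h>
    #include <stddef.h>

    /* Cluster of booleans -> Boolean array -> number... */
    unsigned buttons_to_number(const bool pressed[], size_t count)
    {
        unsigned value = 0;
        for (size_t i = 0; i < count; i++)
            if (pressed[i])
                value |= 1u << i;
        return value;
    }

    /* ...then select behaviour in a case structure. */
    void on_value_changed(const bool pressed[], size_t count)
    {
        switch (buttons_to_number(pressed, count)) {
        case 0:  /* no selection          */ break;
        case 1:  /* up arrow held         */ break;
        case 2:  /* down arrow held       */ break;
        default: /* multiple buttons held */ break;
        }
    }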