LabVIEW Idea Exchange

A quick one-button solution to view pre-configured Design Rule results per VI. Not quite a full analyzer: it only checks one layer (the current VI) deep. The pull-down icon changes from a green check mark to the Alert symbol suggested here if violations exist.

 

[Image: LV_LIVE_DRC.png]

NXG needs an Idea Exchange.  The feedback button is a lame excuse for a replacement.  Why?

 

  • I can't tell if my idea has been suggested before.  (And maybe someone else's suggestion is BETTER and I want to sign onto it, instead.)
  • NI has to slog through bunches of similar feedback submissions to determine whether or not they are the same thing.
  • Many ideas start out as unfocused concepts that are honed razor sharp by the community.
  • This is an open-loop feedback system: we never learn whether our input was received, understood, or acted on.

Let's make an Idea Exchange for NXG!

FIR Filter is almost the same as Convolution, except that it has an init/cont terminal while Convolution has an algorithm input (Direct, frequency domain). FIR Filter always uses direct convolution.

 

If "cont" is not used, convolution based FIR filtering could be orders of magnitude faster because it scales much less steeply with input sizes. Examples have been discussed where switching algorithms from "Direct" to "frequency domain" can turn minutes into seconds (e.g. 1M points and 5k filter).

 

While the knowledgeable programmer can of course roll their own using the convolution primitives (also programming around other limitations, because this idea is not implemented :(), it might be more intuitive if FIR Filter had an "algorithm" input offering the same choices as Convolution. (From my casual understanding, "frequency domain" would ignore the "cont" input because it is incompatible; this could simply be mentioned in the help.)
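For reference, here is a minimal sketch of the two algorithms in Python/NumPy (an analogue only, since the LabVIEW primitives are graphical); scipy.signal.fftconvolve implements the frequency-domain approach:

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)  # 1M-point signal, as in the example above
h = rng.standard_normal(5_000)      # 5k-tap FIR filter

# Direct convolution: O(N*M) multiply-accumulates -- minutes at this size.
y_direct = np.convolve(x, h, mode="full")

# Frequency-domain convolution: O(N log N) via FFT -- seconds at this size.
y_fft = fftconvolve(x, h, mode="full")

# Both algorithms compute the same result, to floating-point tolerance.
assert np.allclose(y_direct, y_fft, atol=1e-6)
```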

 

[Image: altenbach_0-1600102620742.png]

 

Apparently the VI Analyzer Toolkit doesn't support any 64-bit LabVIEW version. This is really something I miss!

 

Given all the good reasons NI itself gives for using it, that's hard to understand!

I'm a huge fan of the Stall Data Flow malleable VI, except when I have it wired on an error wire (my most common use case) and there's an error on the wire.  I generally trust and expect that VIs (especially those on the palettes) will no-op and fail fast when there's an error on the error-in terminal (with rare exceptions like Close Reference methods).  I understand this is a somewhat unique case, since the VI in question is malleable and doesn't have an actual error-in terminal, but my guess is that most users:

 

1.  Wire this specific VIM on error wires most often

2.  Likely don't want the VI to stall in the event of an error

3.  Would prefer to see that error propagated as quickly as possible like most other VIs do
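For illustration, a minimal Python sketch of the proposed fail-fast behavior (the function name and error-tuple convention are hypothetical stand-ins for the graphical VIM):

```python
import time

def stall_data_flow(value, delay_s, error=None):
    """Pass value through after delay_s seconds, LabVIEW-style.

    Proposed behavior: if an error is already present, skip the stall
    entirely and propagate the error immediately, matching the no-op
    convention of most palette VIs.
    """
    if error is not None:
        return value, error  # fail fast: don't stall on an upstream error
    time.sleep(delay_s)      # normal case: stall the data flow
    return value, None
```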

 

Thoughts?

[Image: Deallocate Queue Memory.PNG]

 

Many people do not realize that memory allocated by a queue is never deallocated until the queue is destroyed or the call chain that created the queue stops running.  This is problematic for queues that are opened at the beginning of the application and used throughout, because every queue permanently retains its maximum size, potentially causing the application to hold a lot of memory that is currently unused or seldom used.

 

Consider a consumer that occasionally lags behind, letting the queue grow tremendously.  Then the consumer picks back up and services the elements out of the queue in a short period of time.  It is unlikely the queue will be this large again for quite some time, but unfortunately no memory will be deallocated.

 

I'd like a primitive that will deallocate all of that memory down to just the current number of elements in the queue.  Since the queue won't need that much memory again for a long time and the queue will auto-grow again as needed, I'd like to recover that memory now instead of waiting for the application to be restarted (which is currently the only time the queue is created.)

 

The alternative is to add some code to periodically force destroy the queue and have the consumer gracefully handle the error and open a new reference.  Then replicate this change for all queues.  Seems messy and puts too much responsibility on the consumer.  I'd rather just periodically call a 'deallocate queue memory' primitive on the queue reference within the producer, perhaps once every few minutes, just to be sure none of the queues are needlessly holding a large amount of memory.
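To make the proposal concrete, here is a toy Python model (all names hypothetical; LabVIEW queues are native objects, so this is only an analogue) of a queue whose storage only grows, plus the proposed deallocation primitive:

```python
from collections import deque

class ToyQueue:
    """Models a LabVIEW queue whose backing allocation never shrinks."""

    def __init__(self):
        self._items = deque()
        self._allocated = 0  # high-water mark; today this never goes down

    def enqueue(self, element):
        self._items.append(element)
        self._allocated = max(self._allocated, len(self._items))

    def dequeue(self):
        return self._items.popleft()  # allocation stays at the high-water mark

    def deallocate_memory(self):
        """The proposed primitive: shrink allocation to the current element count."""
        self._allocated = len(self._items)
```

The producer could call deallocate_memory() every few minutes, exactly as described above.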

 

I believe this will:

  • Improve performance in other areas of the application because less time will be spent looking for available memory to allocate.
  • Reduce the chance of Out of Memory errors because large blocks of memory will not be held [much] longer than they are needed.
  • Counter the common perception that LabVIEW applications are memory hogs or leaky.

I realize this will hurt enqueue performance when the queue begins to grow quickly again, but this area is not a bottleneck for my application.

 

Thanks!

I think LabVIEW should have something like the dialog VIs that acts as a popup: the VI displays a message to the user but doesn't hang the code waiting for the user to press a button.

 

An example case: a state machine with built-in error checking errors out.  Instead of creating a more complex error-handling scheme to report that the state machine errored out and why, and then shut it down, you could drop this async popup in the state that detected the error, with the message to display, and have the state machine shut down like normal.  This way the user knows "the state machine shut down for X reason" while the state machine also goes into a safe state.

 

I think the async popup is easier to trace, since the error message lives in the section that detected the error, versus storing an error string somewhere, reading it in the shutdown case, and displaying the error only after the station has already shut down.  Additionally, you could see where all async message blocks are by dropping one in your code, right-clicking, and searching for all instances.  This could streamline locating the fault, since the async message block sits at the location of the fault itself.

 

I have currently built something like this: a reentrant VI whose input is a string, and a secondary VI that launches the display-message VI.  It's a bit of a workaround, but I can display a message to the user while my code continues to do whatever it needs to.
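As a rough text-language analogue, the same fire-and-forget pattern looks like this in Python (a sketch, assuming the platform allows Tk dialogs on a background thread):

```python
import threading
import tkinter
from tkinter import messagebox

def async_popup(message):
    """Show a message box without blocking the caller."""
    def _show():
        root = tkinter.Tk()
        root.withdraw()  # hide the empty root window behind the dialog
        messagebox.showwarning("State machine stopped", message)
        root.destroy()
    threading.Thread(target=_show, daemon=True).start()

# The state machine reports the fault, then continues shutting down safely.
async_popup("Shutting down: over-temperature detected on channel 3")
```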

I'm working on a high-performance application where every bit of time counts.

 

LabVIEW's parallel execution helps, but we are having to write individual functions in C so we can use the AVX instructions available on modern processors.

 

LabVIEW already uses SSE2, so I guess the compiler supports vector instructions (well, LLVM certainly does). It would be good to have an advanced option to enable more recent extensions (AVX and beyond) to avoid dropping to C!

Over the years I have run into situations where I have to monitor files accessed by another application from within my LabVIEW application (example: monitoring the standard_out/standard_error files from an app launched with System Exec).

Currently the only way to do that is to use polling. 

 

I would love for LabVIEW to have built in events for file monitoring so polling would no longer be required.

 

For instance, the following events would be very useful:

  1. File Opened
  2. File Closed
  3. File Change  (I really want this one if nothing else)
  4. File Created
  5. File Deleted 
  6. more...

 

Below is an example of how it might look.

[Image: 5-20-2010 9-20-09 AM.png]
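For comparison, this is what event-driven (non-polling) file monitoring looks like in Python with the third-party watchdog package; its callbacks map closely onto the event list above:

```python
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

class LogHandler(FileSystemEventHandler):
    # Callbacks fire on OS change notifications; no polling loop required.
    def on_created(self, event):
        print("File Created:", event.src_path)

    def on_modified(self, event):
        print("File Change:", event.src_path)

    def on_deleted(self, event):
        print("File Deleted:", event.src_path)

observer = Observer()
observer.schedule(LogHandler(), path="/tmp", recursive=False)
observer.start()  # events arrive on a background thread from here on
```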

 

As LabVIEW evolves, the compiler takes over an awful lot of code optimisation for us.  This leads to situations where relatively large and important pieces of code can be evaluated at compile time and constant folded, which can greatly aid execution speed.  This is good.

 

Constant folding can be a great aid when programming, but at the moment its usage is a bit "hit and miss" due to the opaqueness of the process.  We already have constant-folding highlighting, which really helps (even if the feedback is sometimes very hard to understand), but this doesn't always give us enough feedback.

 

What I would like is the option to declare a portion of code as "Requires constant folding" (like a "Precompile" structure).  In this way, I can, as a programmer, designate some code which is meant to be evaluated at compile time.  If the compiler is unable to constant fold this code, then the VI should be broken.  My motivations are three-fold.

  1. Sometimes we want to make specific use of the compiler's constant folding, but a small change can result in the code no longer being constant folded.  I would like explicit feedback when code I want constant folded is not constant foldable.
  2. I have no idea whether code complexity affects the ability to constant fold.  Other compiler optimisations (like unbundle-unbundle inplaceness) are dependent on code complexity.  An explicit declaration is not.
  3. When looking at FPGA designs, the ability to constant fold data that would otherwise require resources or affect performance is very powerful.  Inside such a "constant folding" structure, we could also allow mathematical functions that are otherwise not supported on the target (max/min of an array in a timed loop, for example), or create default data for an array (to be used as block RAM) from an existing equation where constants are defined as DBL.

 

One example of FPGA code is automatic latency balancing of several parameter pathways into a process: the code accepts abstract parameter objects whose latency is queried via a dynamic dispatch VI that simply returns a constant.  I use dependency injection to tell the sub-VIs which communication pathways they are being given; they can then query the latency and do some static timing calculations for the delays required on different pathways.  Tests have shown that this is constant folded, and that it is thus possible to write very robust FPGA code which auto-adjusts request indices for parameters in multiplexed code.  At the moment things seem to work, but the ability to specifically designate such code as constant folded would be welcome, to make sure I don't accidentally produce a version which doesn't actually return a constant (and my compiles fail, I get timing errors, or I just over-use resources).  In the code below, all of the code circled in blue is constant folded when compiling the FPGA code.  In the sub-VIs I have to do some awkward calculations because certain functionality is not available on FPGA.  By defining this code as requiring explicit constant folding, I could theoretically utilise the full palette of LV functions and also be guaranteed a compile error (a LabVIEW error, not a Xilinx one) if the designated code cannot be constant folded.

 

[Image: 2016-09-15 10_13_00.png]

 

So in a way, it's similar to the In Place Element structure: when all goes well it should not be needed, but there are cases (I've run into some myself) where small changes in code can make the desired operation impossible, or where code complexity causes the optimisation not to be performed.  As such, it is still sometimes necessary to explicitly designate code paths as in-place.  I would like the same capability for "constant folding".

The number of parallel instances is currently capped at 64, independent of hardware. This limit should be raised.

 

First Reason: Since even 64-bit Windows 7 supports up to 256 cores, it would be reasonable to raise that limit to 256.

 

(Even the next version of Windows Mobile (8) will support 64 cores. Mobile! On a phone! 🐵 Obviously upcoming hardware is moving fast in that direction.)

 

Second Reason: Sometimes it is useful to generate many instances even if we have fewer cores available, for example to maintain individual data in a large number of identical reentrant subVIs. (A usage example where we want many instances even on a single-core machine can be found here.)
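As a loose analogue (Python, with a hypothetical worker function), requesting more parallel instances than cores is routine elsewhere; the scheduler simply time-slices them:

```python
from concurrent.futures import ThreadPoolExecutor

def worker(i):
    # Each instance keeps its own state, like a reentrant subVI clone.
    return i * i

# 256 logical instances, regardless of how many cores the machine has.
with ThreadPoolExecutor(max_workers=256) as pool:
    results = list(pool.map(worker, range(256)))
```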

 

Idea: Raise the maximum number of parallel instances of a parallel FOR loop to 256.

I was searching for occurrences of a reference to a Graph in one VI. I was interrupted, and when I came back to the search results afterwards I discovered that the Search Results window did not show ANY useful information about the object whose references I was searching for:

 

[Image: Screen Shot 2014-08-19 at 18.03.18.png]

 

I know I have outrageous expectations as a LabVIEW user, but this seems to me an odd feature gap:

 

- From this window, I have absolutely no clue what I am searching for, in particular if I have in the meantime jumped from window to window...

- ...there is no way to go back to the object these references are linked to (unless I go to one of the references and then look for the Control or Indicator they are associated with).

 

Of course, asking for the VI information when it is provided in the list below may be unnecessary.

But consider this global variable whose references I was looking for:

 

[Image: Screen Shot 2014-08-20 at 10.12.34.png]

 

Same thing here:

- I do not know the type of the global.

- I do not know which VI it is part of (Globals are saved in a VI).

- I do not know where I started my search from (but that's more of a back-to-source-button issue).

 

Suggestion: provide as much information as possible about the starting point of the search when said starting point is an object (as opposed to a text search).

 

Tested in LV 2013 SP1, 64-bit.

It's been great to be able to find Items with No Callers in Project. How about Find Broken VIs as well?

 

[Image: FindBroken.png]

When building installers for an updated version of an executable, you can't select whether or not to overwrite the existing source file directory. This is a problem when distributing to computers that need different configuration files: the files are installed with the original version but have since been customized on each specific computer. If you try to include only certain source files and not others, the installer still overwrites the entire directory, and you lose the files you were trying to preserve. If there were an "Overwrite" option in the Source File Settings window, it could be checked by default to keep today's behavior, but you could uncheck it to allow writing into existing directories without clobbering them.

 

[Image: Source File Settings.png]

 

If you like this idea then give me some Kudos and hopefully we can get it in the next version of LabVIEW!

 

Peter W.

Applications Engineering

National Instruments

www.ni.com/support

I want to be able to work on STABLE versions of LV.

 

The last great stable version I remember was 6.1 (I never had 7.1).

 

2009 and 8.5.1 were not bad, but please give us a feature-fixed long-term support version of LabVIEW.

 

For anyone unfamiliar with the idea, many Linux distributions offer the same: here's a link to the Ubuntu webpage outlining THEIR LTS strategy.

 

Shane.

Since LabVIEW 2017, it has been possible to build applications that remain compatible with future versions of the run-time engine.

This option is set by default but can be disabled.

 

I just discovered that this option is set for real-time applications and cannot be unset. That is, if you build your application with LabVIEW Real-Time 2017, it will still run on a system installed with a newer version of LabVIEW Real-Time.

 

This can be a good idea, but I'm a little surprised that I can't get information about this option for real-time applications, and that I can't control it.

 

Here is a way to test it (tested on a real-time desktop with Phar Lap):

  1. Install the RT target with LabVIEW 2017.
  2. Build an application and set it to run at startup. A simple application writing something to the console is enough.
  3. Make sure your application is running at startup.
  4. Update the system by installing only LabVIEW Real-Time 2019.
  5. Restart the system: your application is still running!

 

Because I faced an issue where LabVIEW 2020 broke my application built in LabVIEW 2017, I wonder how NI can guarantee that a real-time system will work in every case if the system is upgraded to a higher version of LabVIEW Real-Time without recompiling the application.

 

Real-time systems can be used to control safety-critical equipment. If a user upgrades a system by mistake, I want the system to remain safe for its users.

 

So my idea is to remove this option, or give the user access to deselect it, to avoid any bad behavior.

 

Best regards

I really think this is a bug, but I'm going to file this as a feature request, just to add more weight to the issue...

 

The VI Server Application method Library.Get File LV Version claims to be able to tell you the version of a Library (or XControl, XNode, LVClass) on disk, without loading it into memory.

 

[Image: noname.gif]

 

Here are the docs:

 

[Image: 12-1-2010 4-18-09 PM.png]

 

But, when you try to call this method on a Library saved in a version of LabVIEW newer than the version of LabVIEW in which the method is executing, you get the following error:

 

Error 1125 (LabVIEW: File version is later than the current LabVIEW version.)

 

Why an error?  It had to read the version in order to generate the error.  Why not just return the version?
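In pseudocode, the method today apparently behaves like the sketch below (all names and stub values are hypothetical; this is speculation about the internals, not NI's actual source):

```python
ERROR_1125 = "File version is later than the current LabVIEW version."

def read_version_from_header(library_path):
    # Stub: in reality the version lives in the file's header on disk.
    return 20.0

def running_labview_version():
    return 10.0  # e.g. LabVIEW 2010

def get_file_lv_version(library_path):
    version = read_version_from_header(library_path)  # version is already known here
    if version > running_labview_version():
        # Current behavior: raise instead of returning the value just read.
        raise RuntimeError(f"Error 1125 (LabVIEW: {ERROR_1125})")
    return version
```

The fix would be to drop the check and simply return the version.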

 

Thanks in advance for your kudos 🙂

Currently there are no officially supported frameworks for Unit Testing in LabVIEW for Linux.

 

The lack of a unit testing framework in LabVIEW for Linux limits LabVIEW's usability for widely recognized, industry-standard software engineering practices.

 

A Unit Test Framework created by NI already exists, as does a free third-party tool, VI Tester by JKI. However, neither of these is available for desktop Linux (or Macintosh).

 

NI LabVIEW Unit Test Framework Toolkit

 

VI Tester - JKI

https://github.com/JKISoftware/JKI-VI-Tester/wiki 

When I installed LabVIEW, I noticed that a really large number of automatically starting programs gets installed. Even when LabVIEW is running, not all of them are actually needed, yet they sit in the PC's memory and take power that other applications need, even when LabVIEW is not running at all. There should be a way to deactivate these autostart applications:

  • Reduce the number of autostart applications to those that are really needed.
  • Give the possibility to switch them all off when there is no intention to run LabVIEW. Needing a reboot to re-activate them would not be a problem.
  • As a minimum, provide a list of which autostart applications each LabVIEW application needs. This would help with deactivating the autostart programs manually.

The Desktop Execution Trace Toolkit (DETT) can trace enqueue/dequeue operations, so I would think this is feasible:

 

Add a mechanism, similar to the Profile > Performance and Memory window, to display all active queues in memory by name (for "unnamed" queues, use whatever unique refnum/identifier is available), along with each queue's maximum size and current number of elements. It could use the same "Snapshot" functionality as the Profiler tool.

 

A particular use case:

We were tracking a memory leak in a large application that resulted from an unbounded queue whose consumer was disabled. The standard DETT and Profile tools weren't showing where the excess memory consumption was coming from, since queue data does not "belong" to a single VI. Granted, you can see the individual enqueue/dequeue operations in DETT, and even highlight pairs, but that's a little cumbersome in a large application.
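As a sketch of the bookkeeping such a view would need (names hypothetical; LabVIEW queues are native, so this is only a Python analogue), each queue records its name, current depth, and high-water mark, and a snapshot dumps every live queue:

```python
from collections import deque

_registry = {}  # every live queue, keyed by name or a unique fallback id

class MonitoredQueue:
    def __init__(self, name=None):
        self.name = name or f"unnamed-0x{id(self):x}"  # unique id for unnamed queues
        self.items = deque()
        self.max_size = 0  # high-water mark
        _registry[self.name] = self

    def enqueue(self, element):
        self.items.append(element)
        self.max_size = max(self.max_size, len(self.items))

    def dequeue(self):
        return self.items.popleft()

def snapshot():
    """Print name, current depth, and max depth of every live queue."""
    for q in _registry.values():
        print(f"{q.name}: {len(q.items)} items (max {q.max_size})")
```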