LabVIEW Idea Exchange


I recently found out that LabVIEW uses the Wichmann-Hill (1984) random number generator.

http://digital.ni.com/public.nsf/allkb/9D0878A2A596A3DE86256C29007A6B4A

 

...which in some fields is completely unacceptable because it is not random enough (its cycle length is only about 6.9536e12).

A much better (and arguably much more standard) choice would be the Mersenne Twister (cycle length of 2^19937 - 1).

http://www.mathworks.com/help/matlab/ref/randstream.list.html
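
For context, most modern environments expose MT19937 directly. A minimal Python sketch (using NumPy purely as an illustration of what an updated default generator looks like; the seed is arbitrary):

```python
import numpy as np

# MT19937 has a period of 2**19937 - 1, versus roughly 6.95e12 for
# Wichmann-Hill (1984).
rng = np.random.Generator(np.random.MT19937(42))
samples = rng.random(5)  # five uniform [0, 1) draws
print(samples)
```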

We're witnessing more and more requests to stop LV hiding important information from us.  In one direction, we want to be able to know (and some even want the code to break) if structures are hiding code.

 

Others want LV primitives to give visual feedback as to how they are configured, especially if that configuration can have an effect on what's being executed or how it's executed.

 

Examples include (please feel free to add more in the comments below):

 

Array to cluster (Cluster size hidden)

Boolean array to number (Sign mode hidden)

FXP simple Math (Rounding, saturation and output type hidden)

SubVI node setup (when right-clicking the subVI on the BD and changing its properties - show FP when run, suspend and so on)

Sub VI settings in general (Subroutine, debugging)

 

I know there are already ideas out there for most of these (I simply chose examples to link to here - I don't mean to leave anyone's ideas out on purpose), but instead of targeting the individual neuralgic points where we have problems, I would like to acknowledge for NI R&D that the idea behind most of these problems is that hiding information from us regarding important differences in code execution is a bad thing.  (Some of the linked ideas go much further than simply not hiding the information, and I have given most of them kudos for that.)  I don't mean to steal anyone's thunder; I only decided to post this because of the apparently large number of ideas which have this basic idea at heart.  While many of those go further and want additional action taken (most of which is good and should be implemented), I feel the underlying idea should not be ignored, even if all of the otherwise proposed changes are deemed unsuitable.

 

My idea can be boiled down to this: ALL execution-relevant information which is directly applicable to the BD on view should also be VISIBLE on the BD.

 

As a disclaimer, I deem factors such as FIFO size and queue size to be extraneous factors which can be externally influenced and thus do not belong under this idea.

 

Example: I have some oscilloscope code running on FPGA and had the weirdest of problems, where communications worked fine up to (but not including) 524288 (2^19) data points.  As it turns out, a single "Boolean array to number" was set to convert the sign of the input number, which turned out to be completely wrong.  I don't know where that came from; maybe I copied the primitive when writing the code and forgot to set it correctly.  My point is that it took me upwards of half a day to track down this problem, due to the sheer number of possible error sources in my code (it's really complicated stuff in total) and having NO VISUAL CLUE as to what was wrong.  Had there been SOME kind of visual clue as to the configuration of this node, I would have found the problem much earlier and would be a more productive programmer.  Should I have set the properties correctly when writing the code initially?  Sure, but as LV projects grow in complexity these kinds of things are getting to be very burdensome.

According to my understanding and the following discussion:

http://forums.ni.com/t5/LabVIEW/Why-exactly-can-t-I-inline-this/td-p/3126562

certain types of Property nodes and Invoke nodes should be inlineable without any great problems.

 

I would suggest that the LabVIEW compiler gets a bit smarter about this aspect of limiting the ability to inline code.  If there's no technical limitation regarding inlining of a Property node or Invoke node which is fed with a reference (one which does not originate from a local object, i.e. a static reference to an FP object), then it should be inlineable.

The Run method is pretty much the only option we have in LabVIEW to scale an application dynamically by instantiating new VIs - but there is one big catch: for some reason it requires the user interface to be idle(!).

 

Related methods, like setting control values, do not have this requirement, so they do not pose a problem - but the main method for dynamic instantiation does, and it can be blocked by something as simple as a user opening the calendar view of a date and time control (which of course should never block things either, but that's another issue/idea).

 

Personally I use the Run method to create new trend windows (you never know how many trends a user wants to see at the same time), create session handlers for remote clients, etc. The times it is used to actually create user interfaces, it is not a big problem that the Run method is in the user interface thread - but for session handlers and other things that need to be created in the background based on requests from the outside? A HUGE issue.

In LabVIEW we can dynamically run a VI in a few ways:

a) If it's not running as a Top Level VI, or if the VI is re-entrant, with the Run method.

b) If it's already running as a sub VI, with Call By Reference.

c) Make a new VI and drop the (running) sub VI on the diagram.

 

Downside of a) is that we can't always make sub VIs re-entrant, but still want to call them by reference. Downside of b) is that we need to know the strict type (connector pane). Downside of c) is that we might end up with a lot of VIs that exist just to function as Top Level VIs for the sub VIs, and it doesn't work in executables.

 

I'd like to propose a method, so we can dynamically call a sub VI without knowing the strict type.

[Image: This is all I want.png]

Using it, we enable LV to dynamically run sub VIs while setting/getting their parameters by name.

 

For sub VIs (already running), this method will act as the Top Level VI. For Top Level VIs it will fail unless the VI is idle.
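
For comparison, here is a rough text-language analogue of what the proposed method would do (a Python sketch only; `run_by_name`, `module`, and the control names are all hypothetical - the point is launching by name without knowing the connector pane):

```python
import threading

def run_by_name(module, vi_name: str, **controls):
    """Launch a callable looked up by name, with its 'controls' set
    by name, without knowing the exact signature at the call site."""
    target = getattr(module, vi_name)      # look the "VI" up by name
    worker = threading.Thread(target=target, kwargs=controls, daemon=True)
    worker.start()                         # the caller is not blocked
    return worker

# e.g. run_by_name(my_handlers, "session_handler", client_id=7)
```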

 

 

 


 

Deferring panel updates is a tedious procedure, requiring a panel reference (not intuitive to get!) and a couple of property nodes. However, deferring panel updates is sometimes very important: for example, when coloring the fields of a large table according to their values, we take a serious performance hit unless we defer panel updates. There are many other situations where panel updates should be suspended temporarily, especially if property nodes are used in a tight loop.

 

My suggestion is to make flat sequences a little more useful by adding a new option to the right-click menu of each frame: "Defer Panel Updates". A new connector would then appear on the frame that accepts a panel reference. If left unwired, this applies to the panel of the current VI, but we can wire a reference to any panel we desire (useful for subVIs). There should be a visual indication that updates are deferred, e.g. a slightly different background pattern (a faint checkerboard or similar). Note that this idea does not clutter the palettes with a new structure, but simply extends the functionality of existing elements. Let's put those sequences to some good use!

 

During execution of such a frame, the panel updates are deferred and automatically enabled again once the frame ends. The configuration should be "per frame" of a multi-frame flat sequence.
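
The "scoped, automatically re-enabled" behavior is the key point. A text-language analogue would be a context manager, sketched here in Python (the `defer_updates` attribute on the panel object is purely hypothetical):

```python
from contextlib import contextmanager

@contextmanager
def deferred_updates(panel):
    """Defer a panel's updates for exactly one block; re-enable them
    on exit even if an error occurs, like leaving the proposed frame."""
    panel.defer_updates = True
    try:
        yield panel
    finally:
        panel.defer_updates = False

# with deferred_updates(main_panel) as p:
#     color_table_cells(p)   # heavy property-node work, no redraws
```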

 

This has many advantages over the current situation:

 

  • It is immediately clear what part of the code executes during the deferral
  • The deferral is clearly delimited and it is not possible to accidentally forget to re-enable the updates

As an example, here is my subVI code to color the fields of a large correlation matrix (located on the main VI) according to the respective value, so we can easily pick out where correlations between fitting parameters are a problem.

TOP: current solution. BOTTOM: how it could look with this idea implemented.

 

 

I think LabVIEW should have something like the dialog VIs that acts as a popup, where the VI displays a message to the user but doesn't hold up the code waiting for the user to press a button.

 

An example case is a state machine with built-in error checking that errors out.  Instead of creating a more complex error handling scheme to display that the state machine errored out (and why) and then shut it down, you could drop this async popup in the state that detected the error, with the message to display, then have the state machine shut down like normal.  This way, the user knows "the state machine shut down for X reason" while the state machine also goes into a safe state.

 

I think the async popup is easier to trace, since the error message lives in the section that detected the error, versus some scheme that stores an error string and then, in the shutdown case, checks whether there is an error and displays it once the station has already shut down.  Additionally, you could see where all async message blocks are by dropping one in your code, then right-clicking and searching for all instances.  This would streamline finding where the fault occurred, since the async message block sits at the location of the fault itself.

 

I have currently built something like this: a reentrant VI whose input is a string, plus a secondary VI that launches the display-message VI.  It's a bit of a workaround, but I can display a message to the user while my code continues to do whatever it needs to.
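
For what it's worth, here is a rough Python analogue of that workaround - showing the dialog from a separate process so the caller never blocks (the message text is just an example):

```python
import subprocess
import sys

def async_popup(message: str) -> None:
    """Show an info dialog in a separate process; the caller continues
    immediately instead of waiting for the user to press OK."""
    dialog = (
        "import sys, tkinter as tk, tkinter.messagebox as mb; "
        "root = tk.Tk(); root.withdraw(); "
        "mb.showinfo('Notice', sys.argv[1])"
    )
    subprocess.Popen([sys.executable, "-c", dialog, message])

async_popup("State machine shut down: error in acquisition state")
# ... the shutdown sequence continues while the dialog is on screen ...
```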

We are logging shared variables to the Citadel database via variable logging. In some cases (reset of the PC, ...), Citadel repairs the database after the restart. With databases larger than 20-30 GB, this operation takes at least an hour, and during it the Shared Variable Engine is not responding. When the shared variables are used not only for logging but also for controlling options and visualizations on that same PC, this is very critical - it stops production for an hour as well.

 

It would be better if the Shared Variable Engine remained active during the repair process of the database. We could do without the data on the logging side during that time, but not without the controlling part of that aggregate.

 

Best Regards,

 

--

Joachim

The current Type Cast function performs a buffer allocation and copy operation even when an in-place typecast would be possible.

 

Please optimize the current function or provide an advanced version of it.

 

Original discussion: Lava - Inplace Typecast possible?
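
As an analogue of the difference, NumPy distinguishes a copying conversion from a zero-copy reinterpretation of the same buffer - which is essentially what an in-place Type Cast would do:

```python
import numpy as np

data = np.arange(4, dtype=np.uint32)

copied = data.astype(np.int32)       # new buffer + copy (today's Type Cast)
reinterpreted = data.view(np.int32)  # same buffer, reinterpreted in place

assert reinterpreted.base is data    # shares memory with the original
```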

While debugging my application, I got curious about how much easier things would be if there were an easy way of finding the error. I'm not sure whether somebody has proposed it already, so here is my suggestion: a VI could have the option of pausing whenever an error occurs anywhere in that VI.

 

[Image: Error Handling.png]

 

 

I think this would give us a much better way of debugging an application.

NXG needs an Idea Exchange.  The feedback button is a lame excuse for a replacement.  Why?

 

  • I can't tell if my idea has been suggested before.  (And maybe someone else's suggestion is BETTER and I want to sign onto it, instead.)
  • NI has to slog through bunches of similar feedback submissions to determine whether or not they are the same thing.
  • Many ideas start out as unfocused concepts that are honed razor sharp by the community.
  • This is an open loop feedback system.

Let's make an Idea Exchange for NXG!

A quick one-button solution to view pre-configured Design Rule results per VI - not quite a full analyzer, just one layer (one VI) deep. The pull-down icon changes from a green check mark to the Alert symbol suggested here if violations exist.

 

[Image: LV_LIVE_DRC.png]

When you're making a By Reference LabVIEW Object using a Data Value Reference (DVR), the user of your class needs to embed each Dynamic Dispatch VI inside an In Place Element structure (IPE). Alternatively, you have to create a wrapper VI for each method, but this undermines the advantages of LVOOP inheritance.
 
The idea is that wiring a DVR containing a LabVIEW Object to a Dynamic Dispatch terminal would be equal to calling the method VI inside the IPE structure, as illustrated below.

[Image: DVR DynamicDispatch.PNG]


FIR Filter is almost the same as Convolution, except that it has an init/cont terminal, while Convolution has an input for the algorithm (direct or frequency domain). FIR Filter always uses direct convolution.

 

If "cont" is not used, convolution based FIR filtering could be orders of magnitude faster because it scales much less steeply with input sizes. Examples have been discussed where switching algorithms from "Direct" to "frequency domain" can turn minutes into seconds (e.g. 1M points and 5k filter).

 

While the knowledgeable programmer can of course roll his own using the convolution primitives (also programming around other limitations, because this idea is not implemented :(), it might be more intuitive if FIR Filter had an "algorithm" input where we can select between the same choices as for Convolution. (From my casual understanding, "frequency domain" would ignore the "cont" input because it is incompatible; this can just be mentioned in the help.)
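
For a feel of the scaling, here is a hedged Python comparison using SciPy (sizes reduced from the 1M/5k case above so it runs quickly; the gap grows with size):

```python
import numpy as np
from scipy import signal

x = np.random.randn(100_000)       # input signal
h = signal.firwin(1_000, 0.1)      # 1k-tap FIR filter

y_direct = np.convolve(x, h, mode="same")      # direct: O(N*M)
y_fft = signal.fftconvolve(x, h, mode="same")  # FFT: O((N+M) log(N+M))

assert np.allclose(y_direct, y_fft)            # same result, different cost
```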

 

[Image: altenbach_0-1600102620742.png]

 

I'm a huge fan of the Stall Data Flow malleable VI, except in the case where I have it wired on an error wire (my most common use case) and there's an error on the wire.  I generally trust and expect that VIs (especially those on the palettes) will no-op (with rare exceptions, like close-reference methods) and fail fast in the event of an error on the error-in terminal.  I understand that this is kind of a unique case, since the VI in question is malleable and doesn't have an actual error-in terminal, but my guess is that most users:

 

1.  Wire this specific VIM on error wires most often

2.  Likely don't want the VI to stall in the event of an error

3.  Would prefer to see that error propagated as quickly as possible like most other VIs do
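
In pseudocode terms, the requested behavior is simply this (a Python sketch; the real VIM operates on LabVIEW error clusters and wires, not return values):

```python
import time

def stall(error, seconds: float):
    """Stall the data flow, but fail fast like other nodes: if an error
    is already present, propagate it immediately without waiting."""
    if error is not None:
        return error           # no-op on an incoming error
    time.sleep(seconds)        # normal case: delay the wire
    return None
```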

 

Thoughts?

I was recently trying to develop a function to navigate through a deeply nested directory structure and came across system path length limitations, which could potentially be addressed by a "change directory" function.

 

I realize I could use System Exec with cmd /c cd <path>, but I found this extremely slow.
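
A sketch of the idea in Python (the directory layout is hypothetical): by changing directory at each level, every individual path handed to the file system stays short, even when the absolute path would exceed the limit.

```python
import os

def walk_deep(directory: str) -> None:
    """Recurse into a deeply nested tree using only short, relative
    paths, sidestepping absolute path-length limits."""
    os.chdir(directory)
    for entry in os.listdir("."):
        if os.path.isdir(entry):
            walk_deep(entry)   # descend with a relative path
            os.chdir("..")     # climb back up one level
```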

Apparently the VI Analyzer Toolkit doesn't support any 64-bit LabVIEW version - this is really something I miss!

 

With all the good reasons NI gives for using it, that's hard to understand!

[Image: Deallocate Queue Memory.PNG]

 

Many people do not realize that memory allocated by a queue is never deallocated until the queue is destroyed or the call chain that created the queue stops running.  This is problematic for queues that are opened at the beginning of the application and used throughout, because all of the queues will always retain their maximum size, causing the application to potentially hold a lot of memory that is unused or seldom used.

 

Consider a consumer that occasionally lags behind, so the size of a queue grows tremendously.  Then the consumer picks back up and services the elements out of the queue in a short period of time.  It is unlikely the queue will be this large again for quite some time, but unfortunately no memory will be deallocated.

 

I'd like a primitive that deallocates all of that memory down to just the current number of elements in the queue.  Since the queue won't need that much memory again for a long time, and it will auto-grow again as needed, I'd like to recover that memory now instead of waiting for the application to be restarted (which is currently the only time the queue is created).

 

The alternative is to add some code that periodically force-destroys the queue, have the consumer gracefully handle the error and open a new reference, and then replicate this change for all queues.  That seems messy and puts too much responsibility on the consumer.  I'd rather just periodically call a 'deallocate queue memory' primitive on the queue reference within the producer, perhaps once every few minutes, just to be sure none of the queues are needlessly holding a large amount of memory.
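
In text-language terms, that workaround amounts to something like this sketch (Python's queue.Queue is only an analogue here - its backing store can already shrink, unlike a LabVIEW queue):

```python
import queue

def compact(old: queue.Queue) -> queue.Queue:
    """Drain pending elements into a fresh queue so the over-grown old
    buffer can be reclaimed - the effect the proposed primitive would
    have, without forcing consumers to handle a destroyed reference."""
    fresh = queue.Queue()
    while True:
        try:
            fresh.put(old.get_nowait())
        except queue.Empty:
            return fresh
```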

 

I believe this will:

  • Improve performance in other areas of the application because less time will be spent looking for available memory to allocate.
  • Reduce the chance of Out of Memory errors because large blocks of memory will not be held [much] longer than they are needed.
  • Improve the common user opinion that LabVIEW applications are memory hogs or leaky.

I realize this will hurt enqueue performance when the queue begins to grow quickly again, but this area is not a bottleneck for my application.

 

Thanks!

I'm working on a high-performance application where every bit of time counts.

 

LabVIEW's parallel execution helps but we are having to write individual functions in C so we can try out the AVX instructions available on modern processors.

 

LabVIEW started using SSE2 a while back, so I guess the compiler supports this sort of thing (well, LLVM certainly does). It would be good to have an advanced option to enable more recent instruction sets (AVX and newer) to avoid going to C!

Over the years I have run into situations where I have to monitor files accessed by another application from within my LabVIEW application (Example: monitoring the standard_out/standard_error files from an app launched with the system exec).

Currently the only way to do that is to use polling. 

 

I would love for LabVIEW to have built in events for file monitoring so polling would no longer be required.

 

For instance, the following events would be very useful:

  1. File Opened
  2. File Closed
  3. File Change  (I really want this one if nothing else)
  4. File Created
  5. File Deleted 
  6. more...
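
Until then, the closest text-language equivalent is event-handler registration; for instance, with Python's third-party watchdog package (the path and handler names here are just examples):

```python
from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

class LogHandler(FileSystemEventHandler):
    def on_created(self, event):   # "File Created"
        print("created:", event.src_path)
    def on_modified(self, event):  # "File Change"
        print("changed:", event.src_path)
    def on_deleted(self, event):   # "File Deleted"
        print("deleted:", event.src_path)

observer = Observer()
observer.schedule(LogHandler(), path="logs", recursive=False)
observer.start()   # events arrive via callbacks - no polling loop needed
```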

 

Below is an example of how it might look.

[Image: 5-20-2010 9-20-09 AM.png]