LabVIEW Idea Exchange

When searching for a single pattern, we define the pattern as *.vi (for example). But if we want to filter on, say, three patterns like (*.c, *.h, *.txt), it doesn't work that way. In File Dialog, if we define these patterns separated by semicolons it returns the desired result, but with List Folder we have to use a For Loop for multiple pattern searches. Wouldn't it be useful if we could define (*.c;*.h;*.txt) in the pattern and get only those files?
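(For illustration only, a minimal Python sketch of the requested behaviour, assuming a hypothetical list_folder helper that stands in for List Folder: one semicolon-separated pattern string matches any of the listed extensions in a single call.)

    import fnmatch
    import os

    def list_folder(path, pattern):
        """List files in 'path' that match any pattern in a
        semicolon-separated pattern string such as '*.c;*.h;*.txt'."""
        patterns = [p.strip() for p in pattern.split(';')]
        return [name for name in os.listdir(path)
                if any(fnmatch.fnmatch(name, p) for p in patterns)]

    # One call returns only the .c, .h and .txt files, instead of
    # running a For Loop over the three patterns separately.
    print(list_folder('.', '*.c;*.h;*.txt'))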

Currently there is no way to delete multiple elements of an array that are not in sequence.

This way, elements 2, 4, and 7 would be deleted with just one function.

 

Delete from Array.png

 

It makes for cleaner code, and for big arrays it could enhance performance (at least I think so).
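(A rough Python sketch of the proposed behaviour, for illustration; the helper name delete_from_array is made up. It removes several non-contiguous indices in a single pass instead of repeated Delete From Array calls.)

    def delete_from_array(data, indices):
        """Remove all of the listed indices in one pass."""
        to_remove = set(indices)
        return [x for i, x in enumerate(data) if i not in to_remove]

    values = [10, 11, 12, 13, 14, 15, 16, 17]
    print(delete_from_array(values, [2, 4, 7]))   # -> [10, 11, 13, 15, 16]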

I would like to have the ability to set the compare aggregates mode for comparisons involving containers (arrays certainly, clusters would be a nice bonus) and a scalar value. This includes the compare-to-0 functions as well.

 

compareAggregatesIdea.png
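(For illustration, a small Python sketch of the two comparison modes this idea asks to expose for container-versus-scalar comparisons; the function and mode names are made up.)

    def greater_than(array, scalar, mode='compare elements'):
        """Compare every element of 'array' against a scalar.
        'compare elements'   -> array of booleans (current behaviour)
        'compare aggregates' -> a single boolean that is True only if
                                every element passes the comparison."""
        per_element = [x > scalar for x in array]
        if mode == 'compare aggregates':
            return all(per_element)
        return per_element

    data = [3, 7, 1, 9]
    print(greater_than(data, 0))                              # [True, True, True, True]
    print(greater_than(data, 0, mode='compare aggregates'))   # True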

One of the quickest (and easiest to forget) methods to improve the performance of your VI is to disable debugging.  It requires a trip into VI Properties to get there, however.  I propose adding a button on the BD, alongside the other debugging buttons, to Disable and Enable Debugging.

HABF?

 

An acronym for one of my favorite Spolskyisms. Great article, read it.

 

Background

 

When you discover what you consider to be a bug in LabVIEW's IDE or language, it's a difficult process to report the bug and track the bug's status as new LabVIEW editions debut. This idea addresses the transparency and facilitation of this process, and is meant to appeal to both those who create LabVIEW and those who use LabVIEW.

 

Problems with Current Issue Tracking Platform

 

"Platform" is a generous term for the current reporting and tracking process:

 

  1. The issue reporting procedure is undocumented - few seem to know how to report issues, fewer know how to track documented issues
  2. Issue tracking status is largely monitored by a squad of Dedicated Volunteer Bug Scrapers
  3. Duplication of effort (for users, AE's, and R&D) is probable since there is not a centralized, searchable repository
  4. Relies on unreliable methods including email, FTP uploads, phone conversations, forums...

Comparing LabVIEW Issue Tracking and Feature Tracking Platforms

 

Before the Idea Exchange, there was the Product Suggestion Center (PSC). What's that, you ask? It's a hole in the internet you threw your good Ideas into. 😄 The Idea Exchange revolutionized Feature Suggestions by introducing a platform that allows an unprecedented level of public brainstorming and symbiotic discussions between R&D and customers. Further, we can watch Ideas flow from inception to implementation.

 

I want to see an analogue for Issue Tracking.

 

Proposed Solution

 

A web-based platform with the following capabilities:

 

  1. Allows users to interactively search a known bug database. Knowing the status of a bug (not yet reported, pending fix for next release, already fixed in new release...) will minimize duplicated effort
  2. Allow embedded video screen captures (such as Jing)
  3. Allow uploaded files that demonstrate reproducible issues
  4. Allow different "Access Levels" for different bug types. View and Modify permissions should be settable based on User ID, User Group, etc... (Some types of bugs should not be public)
  5. Allow different access levels for content within one bug report. For instance, a customer may want a bug report to be public and searchable, yet attach Private Access Level to proprietary uploads.
  6. Allow collaborative involvement for adding content to Bug Reports, where any member can upload additional information (given they have Modify Access Level privileges)
  7. The Homepage of the Issue Tracker should be accessible and visible through ni.com (maybe IDE too, such as a GSW link)
  8. A more detailed Bugfix Report for each LabVIEW debut (the cryptic subject lines on the current Bugfix List are not helpful)
  9. Specific fields such as Related Bug Reports and Related Forum Posts that allow easily-identifiable cross-referencing.
  10. An email-based alert system, letting you know when the status or content changes in bug reports of interest 👍
  11. Same username and logon as the Forums
  12. Bonus: Visible download links to patches and other bug-related minutiae

 

Additional Thoughts

 

  • I have used the Issue Tracking platform used in Betas, and the exposed featureset is too lacking to fulfill the spirit of this Idea
  • I realize the initial and ongoing investment for such a system is high compared to most Ideas on the Exchange. Both issue tracking and economics are sensitive issues, but the resultant increase in product stability and customer confidence makes the discussion worth having.
  • Just to clarify, a perfectly acceptable (and desirable) action is to choose an established issue-tracking service provider (perhaps one of NI's current web service providers carries an acceptable solution?), not create this behemoth in-house.

 

BugFeature.png

(Picture first spotted on Breakpoint)

 

Generally: you are voting for a platform that eases the burden of Issue Reporting, additionally offering a means of Issue Tracking.

 

My suggestions are neither concrete nor comprehensive: please voice further suggestions, requirements, or criticisms in the Comments Section!

From LabVIEW 2011 onward we have the option to ignore all missing VIs when loading a project. But after loading the project, if a VI is loaded and it has a missing subVI, there is no way to check where that subVI was expected to be loaded from (the expected path of the missing VI). So it would be a good option to be able to check the expected path of the missing VI after loading its caller (maybe in the properties of the missing VI).

I often have a situation where certain code parts don't really need to be recalculated, because their inputs have not changed since an earlier encounter.

 

For example, if a function is the sum of two different complicated and slow calculations, where each of the two depends on a different subset of parameters, often only one needs to be recalculated, while the other intermediary result could be grabbed from a cached value. (This occurs, for example, during the calculation of partial derivatives.)

 

In some cases, the compiler could probably recognize these situations, but I think we should have a special structure to allow caching of intermediary results.

 

I typically use a case structure and two feedback nodes as shown in the example on the left. If the value is different, the code in the TRUE case is executed, else the value stored in the SR is returned immediately. (If the code depends on several values, I bundle them all before the comparison operation)

 

I propose a simple Caching Structure, as shown on the right, that retains that same functionality with less baggage.

 

 

Details: The containing code is calculated at first run and stored in the output tunnel. In later calls, either (A) it is recalculated, or (B) the previously stored value is returned at the output tunnels. Recalculation occurs only under certain conditions (i.e., usually only when actually needed).

 

The structure has the following features:


  • Two possible types of input tunnels (the functionality is indicated by a different look):
    1. One where a value change triggers a recalculation of the containing code
    2. One where a value change does not trigger a recalculation of the containing code
  • A special Boolean terminal to force a recalculation unconditionally
  • Output tunnels that retain data between calls

There can be an unlimited number of input and output tunnels of any type described above. A recalculation of the inner code occurs if any of the triggering tunnels have a changed value.

 

We can think of it similar to "short-term folding". 
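(A text-based sketch in Python of the same idea, for illustration only; the class name and details are made up. The wrapped code is re-run only when a triggering input changes or when recalculation is forced; otherwise the cached result from the "output tunnel" is returned, just like the case structure + feedback node pattern described above.)

    class CachingStructure:
        """Re-run 'func' only when its triggering inputs change."""
        def __init__(self, func):
            self.func = func
            self.last_inputs = None      # plays the role of the feedback node
            self.cached = None           # plays the role of the output tunnel

        def __call__(self, *triggering_inputs, force=False):
            if force or triggering_inputs != self.last_inputs:
                self.cached = self.func(*triggering_inputs)   # recalculate
                self.last_inputs = triggering_inputs
            return self.cached                                # reuse cached value

    @CachingStructure
    def slow_part(a, b):
        print("recalculating...")
        return a + b              # stands in for a slow intermediary calculation

    print(slow_part(1, 2))        # first call: recalculates
    print(slow_part(1, 2))        # same inputs: cached value, no recalculation
    print(slow_part(1, 3))        # input changed: recalculates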

The ability to define anonymous methods to be called multiple times within a block diagram.

Capture.PNG
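(For comparison only: roughly what the request corresponds to in a text-based language, shown here in Python; the LabVIEW syntax and look are of course the subject of the idea.)

    # An anonymous method is defined inline, without a separate named
    # subVI, and can be called multiple times on the same "diagram".
    scale = lambda x, gain: x * gain

    readings = [1.0, 2.5, 4.0]
    print([scale(r, 10) for r in readings])   # reuse the same anonymous method
    print(scale(0.5, 100))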

 

 

Really, at the end of the build process it would be nice to know how long it lasted.

 

One of the benefits would be to make build-duration comparisons between different LabVIEW versions much easier 😮

 

Clipboard01.jpg

FIR Filter is almost the same as Convolution, except that it has an init/cont terminal, while Convolution has an input for algorithm (direct, frequency domain). FIR Filter always uses direct convolution.

 

If "cont" is not used, convolution based FIR filtering could be orders of magnitude faster because it scales much less steeply with input sizes. Examples have been discussed where switching algorithms from "Direct" to "frequency domain" can turn minutes into seconds (e.g. 1M points and 5k filter).

 

While the knowledgeable programmer can of course make his own using the convolution primitives (also programming around other limitations because this idea is not implemented :(), it might be more intuitive if the FIR Filter had an "algorithm" input where we can select between the same choices as for convolution. (From my casual understanding, "frequency domain" would ignore the "cont" input because it is incompatible. This can just be mentioned in the help.)

 

altenbach_0-1600102620742.png
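(For a rough, outside-of-LabVIEW illustration of why the algorithm choice matters, here is a Python/SciPy sketch; the sizes are scaled down so it runs quickly, but the gap between direct and frequency-domain filtering grows dramatically at the 1M-point / 5k-tap sizes mentioned above.)

    import numpy as np
    from scipy import signal

    x = np.random.rand(100_000)        # input signal
    h = signal.firwin(1001, 0.1)       # FIR filter coefficients

    # Direct convolution: cost grows roughly with len(x) * len(h)
    y_direct = np.convolve(x, h, mode='full')

    # Frequency-domain (FFT-based) convolution: roughly N log N,
    # which is what an "algorithm" input on FIR Filter could select
    y_freq = signal.fftconvolve(x, h, mode='full')

    print(np.allclose(y_direct, y_freq, atol=1e-9))   # same result, different speed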

 

NXG needs an Idea Exchange.  The feedback button is a lame excuse for a replacement.  Why?

 

  • I can't tell if my idea has been suggested before.  (And maybe someone else's suggestion is BETTER and I want to sign onto it, instead.)
  • NI has to slog through bunches of similar feedback submissions to determine whether or not they are the same thing.
  • Many ideas start out as unfocused concepts that are honed razor sharp by the community.
  • This is an open loop feedback system.

Let's make an Idea Exchange for NXG!

At present we can select only a single VI via the right-click > "Select VI" option on the block diagram. It would be very helpful if we could select multiple VIs at the same time. Images for reference:

 

 

MultipleSel-3.png

 

MultipleSel-1.png

 

Present Method

MultipleSel-2.png

Proposed Method


In LabVIEW we can dynamically run a VI in a few ways:

a) With the Run method, if it's not running as a top-level VI or if the VI is re-entrant.

b) With Call By Reference, even if it's already running as a subVI.

c) Make a new VI and drop the (running) sub VI on the diagram.

 

The downside of a) is that we can't always make subVIs re-entrant but may still want to call them by reference. The downside of b) is that we need to know the strict type (connector pane). The downside of c) is that we might end up with a lot of VIs that exist just to act as top-level VIs for the subVIs, and it doesn't work in executables.

 

I'd like to propose a method so we can dynamically call a subVI without knowing its strict type.

This is all I want.png

Using it, we enable LabVIEW to dynamically run subVIs while setting/getting their parameters by name.

 

For subVIs (already running) this method will act as the top-level VI. For top-level VIs it will fail unless the VI is idle.

 

 

 

(Please ignore my first confusing attempt)

My forum question asked here: http://forums.ni.com/t5/LabVIEW/Determine-and-Minimize-LabVIEW-Build-Dependencies/m-p/1441734 has prompted me to post a New Idea on this exchange.

 

    To summarize: LabVIEW executables tend to be on the LARGE side, especially when driver packages like DAQmx, VISA and IMAQ are required.  I'm asking if NI could add some enhanced functionality to the Application/Executable Builder that assists the user in determining and minimizing the size of the final distribution package.  Ideally this would be a completely automatic function that would simply minimize the build parameters to the smallest possible set after analyzing the active project.  Additionally, it would be great if this were smart enough to pare down the driver package itself to the bare minimum.  (I.e., it shouldn't require a 150 MB DAQmx package just to read a few thermocouples with a 9211 module.)

    At the very least, the Additional Installers tab should show the size of the packages you're checking off and the resulting installer's total size.

 

An example of the size increases of required installer packages for a simple DAQmx application:

Build                              Size (MB)
Program Itself                     8
Run-time Engine Added              104
DAQmx Core Drivers Added           258
DAQmx Core Drivers + MAX Added     712

I recently found out that LabVIEW uses the Wichmann-Hill (1984) random number generator.

http://digital.ni.com/public.nsf/allkb/9D0878A2A596A3DE86256C29007A6B4A

 

...which in some fields is completely unacceptable for not being random enough (cycle length of 6.9536e12?)

A much better (and arguably much more of a current standard) choice would be the Mersenne Twister (cycle length of 2^19937 − 1).

http://www.mathworks.com/help/matlab/ref/randstream.list.html
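(For reference, a small Python example: Python's built-in random module is documented to use the Mersenne Twister (MT19937), the generator suggested above.)

    import random

    # MT19937 has a period of 2**19937 - 1, versus roughly 6.95e12
    # for the Wichmann-Hill generator.
    rng = random.Random(42)                    # seedable, reproducible stream
    print([rng.random() for _ in range(5)])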

The LabVIEW compiler currently appears to use one core of a multi-core processor.  It would be nice if it fully utilized multiple cores to speed building of large projects.

Create a Panel Open event.

 

PanelOpenEvent.png

 

Among other things, it would provide a clean entry point for guaranteeing Control References have been created, allowing for their dynamic event registration to prevent errors such as the following:

 

PanelOpenError.png

As we look at the Comparison Palette, there are three functions that offer negative logic, but fail to offer the positive logic alternative:

 

NotLogic.png

 

I commonly find myself placing a Boolean "not" right after these functions (e.g., if I want to know if there are elements in an array I must test if it is "not not empty", or if I want to know if a ref is valid I must test if it is "not not valid". Take a look at my other idea which illustrates not not logic.).

 

I have two proposed solutions: 1. offer a single function that has an "Invert?" option on the output, or 2. offer both the positive and negative logic primitives on the palette.
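(A small Python sketch of the double-negative problem and the two proposed fixes; the function names are made up for illustration.)

    def is_empty(items):              # today's negative-logic primitive
        return len(items) == 0

    def is_not_empty(items):          # proposed positive-logic counterpart
        return len(items) > 0

    data = [1, 2, 3]

    if not is_empty(data):            # current style: "not not empty"
        print("array has elements")

    if is_not_empty(data):            # proposed style: reads positively
        print("array has elements")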


We're witnessing more and more requests to stop LabVIEW hiding important information from us.  In one direction, we want to be able to know (and some want to break code) if structures are hiding code.

 

Others want LV primitives to give visual feedback as to how they are configured, especially if that configuration can have an effect on what's being executed or how it's executed.

 

Examples include (Please please feel free to add more in the comments below)

 

Array to cluster (Cluster size hidden)

Boolean array to number (Sign mode hidden)

FXP simple Math (Rounding, saturation and output type hidden)

SubVI node setup (when right-clicking the subVI on the BD and changing its properties - show FP when run, suspend and so on)

Sub VI settings in general (Subroutine, debugging)

 

I know there are already ideas out there for most of these (I simply chose examples to link to here - I don't mean to leave anyone's ideas out on purpose), but instead of targeting the individual neuralgic points where we have problems, I would like to make clear to NI R&D that the idea behind most of these problems (some of them go much further than simply not hiding the information, and I have given most of them kudos for that) is that hiding information from us about important differences in code execution is a bad thing.  I don't mean to steal anyone's thunder; I only decided to post this because of the apparent large number of ideas which have this basic idea at heart.  While many of those go further and want additional action taken (most of which is good and should be implemented), I feel the underlying idea should not be ignored, even if all of the other proposed changes are deemed unsuitable.

 

My idea can be boiled down to this: ALL execution-relevant information which is directly applicable to the BD on view should also be VISIBLE on the BD.

 

As a disclaimer, I deem factors such as FIFO size and Queue size to be extraneous factors which can be externally influenced and thus does not belong under this idea.

 

Example: I have some oscilloscope code running on FPGA and had the weirdest of problems, where communications worked fine up to (but not including) 524288 = 2^19 data points.  As it turns out, a single "Boolean Array To Number" was set to convert the sign of the input number, which turned out to be completely wrong.  I don't know where that came from; maybe I copied the primitive when writing the code and forgot to set it correctly.  My point is that it took me upwards of half a day to track down this problem, due to the sheer number of possible error sources in my code (it's really complicated stuff in total) and having NO VISUAL CLUE as to what was wrong.  Had there been SOME kind of visual clue as to the configuration of this node, I would have found the problem much earlier and would be a more productive programmer.  Should I have set the properties correctly when writing the code initially? Sure, but as LV projects grow in complexity these kinds of things become very burdensome.