LabVIEW Idea Exchange


Since LabVIEW 2011 we have had the option to ignore all missing VIs when loading a project. But if a loaded VI has a missing subVI, there is no way to check where that subVI was expected to be loaded from (the expected path of the missing VI). It would be a good option to be able to check the expected path of a missing VI after loading its caller (perhaps in the properties of the missing VI).

The ability to define anonymous methods to be called multiple times within a block diagram.

Capture.PNG

 

 

Really, at the end of the build process it would be nice to know how long the build lasted.

 

One of the benefits would be to make build-duration comparisons between different LabVIEW versions much easier 😮

 

Clipboard01.jpg

I have long defended NI's decision to bind the lifespan of DVRs to their creator VI but, with the addition of malleability (totally awesome) and the fix in LV 2020 that allows "New Data Value Reference" to be called directly in an inlined VI (thanks, AQ), the ephemeral auto-cleanup behavior of DVRs is now a big blocker to a reference-based strictly-typed API.

 

I want to open a strictly-typed and malleable reference and keep that reference open until I choose to close it, regardless of whether the opener leaves memory. Without malleability, I could delegate the DVR creation to another thread/VI and get the persistent reference back by callback, but I can't have both the home-baked persistence and the malleability.

Please add a persistence/auto-cleanup flag to New Data Value Reference so that the reference can persist if I want it to. (I need both the persistence and the malleability to write some really cool code.)

Persistent_DVR_idea.png
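
For illustration only, here is a minimal Python sketch of the flag's intended lifetime rule; the names (new_dvr, close_dvr, the registry) are hypothetical stand-ins, not any real LabVIEW API:

    import itertools

    _live = {}                       # module-level registry: outlives any caller
    _ids = itertools.count()

    def new_dvr(value, persistent=False):
        ref = next(_ids)             # stand-in for an opaque reference
        if persistent:
            _live[ref] = value       # kept alive until close_dvr(ref) is called
        return ref

    def close_dvr(ref):
        _live.pop(ref, None)         # explicit cleanup, as the idea requests

The point of the sketch is only the lifetime rule: a persistent reference survives its creator and dies only on an explicit close.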

Here's an old request for this feature that was declined, but things have changed a lot since then: https://forums.ni.com/t5/LabVIEW-Idea-Exchange/Persistent-DVR-s/idc-p/2856348#M27799 

It has been a few months since a suggestion has been made to do something about the For Loop, so if nothing else it is time to stir things up a little bit.  There have been several suggestions to do something with the iterators, for example:

 

http://forums.ni.com/t5/LabVIEW-Idea-Exchange/Smart-Iterators-with-Loops/idi-p/967321

 

http://forums.ni.com/t5/LabVIEW-Idea-Exchange/for-loop-increment/idi-p/1097818

 

http://forums.ni.com/t5/LabVIEW-Idea-Exchange/quot-Start-Step-Stop-quot-structure-on-For-loops/idi-p/1022771

 

None of these has really gained traction, and heck, I haven't settled on any of them myself.  In most cases I am satisfied with a workaround, usually involving the Ramp VI.

 

There is one case where I am not happy with the workaround.  I happen to need reverse iteration values quite often (N-1, N-2, ..., 0).  With the N terminal pinned to the top-left corner I have little choice but to do the following and start zigging and zagging wires:

 

(image: the zig-zag wiring workaround)

 

I would simply like a reverse iteration terminal which can be moved around at will and simply counts down.  It doesn't have to look like I have drawn it; that just happens to be intuitive to me.  Naturally this terminal (and the normal iteration terminal) should have the option to be hidden.

 

I thought about having the option to have the loop spin backwards, similar to reversing all auto-indexing inputs and having the iteration terminal count down, but I just could not decide what to do about auto-indexed outputs.  My G-instincts tell me that they should be built in the order of loop cycles, my C-instincts tell me I am building array[i] and if i goes in reverse order, so should the array elements.  For now I say, forget it, and just stick to the simple terminal.  Array reversal is essentially free, so I at least have a workaround there.
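
In text-language terms (a Python stand-in, not G), the proposed terminal would simply expose the countdown value that currently has to be computed by hand from i and N:

    N = 5
    for i in range(N):
        i_rev = N - 1 - i            # what the proposed terminal would output
        print(i, i_rev)              # 0 4, 1 3, 2 2, 3 1, 4 0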

My forum question asked here: http://forums.ni.com/t5/LabVIEW/Determine-and-Minimize-LabVIEW-Build-Dependencies/m-p/1441734 has prompted me to post a New Idea on this exchange.

 

To summarize: LabVIEW executables tend to be on the LARGE side, especially when driver packages like DAQmx, VISA and IMAQ are required.  I'm asking if NI could add some enhanced functionality to the Application/Executable Builder that assists the user in determining and minimizing the size of the final distribution package.  Ideally this would be a completely automatic function that would simply minimize the build parameters to the smallest possible set after analyzing the active project.  Additionally, it would be great if this were smart enough to pare down the driver package itself to the bare minimum.  (I.e., it shouldn't require a 150 MB DAQmx package to read a few thermocouples with a 9211 module.)

At the very least, the Additional Installers tab should show the size of the packages you're checking off and the resulting installer's total size.

 

An example of the size increases of required installer packages for a simple DAQmx application:

Build                              Size (MB)
Program itself                             8
Run-Time Engine added                    104
DAQmx core drivers added                 258
DAQmx core drivers + MAX added           712

As we look at the Comparison Palette, there are three functions that offer negative logic, but fail to offer the positive logic alternative:

 

NotLogic.png

 

I commonly find myself placing a Boolean "Not" right after these functions (e.g., if I want to know if there are elements in an array I must test if it is "not not empty", or if I want to know if a ref is valid I must test if it is "not not valid"; take a look at my other idea, which illustrates not-not logic).

 

I have two proposed solutions: 1) offer a single function that has an "Invert?" option on the output, or 2) offer both the positive- and negative-logic primitives on the palette.
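
As a text-language analog (Python, purely illustrative), the two proposals look like this:

    def empty(arr, invert=False):    # proposal 1: an "Invert?" option
        result = len(arr) == 0
        return not result if invert else result

    def is_empty(arr):               # proposal 2: both polarities offered
        return len(arr) == 0

    def is_not_empty(arr):           # no more "not not empty"
        return len(arr) > 0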


Create a Panel Open event.

 

PanelOpenEvent.png

 

Among other things, it would provide a clean entry point for guaranteeing that control references have been created, allowing their dynamic event registration and preventing errors such as the following:

 

PanelOpenError.png

At present we can select only a single VI with the "Right-Click > Select a VI..." option on the block diagram. It would be very helpful if we could select multiple VIs at the same time. Images for reference:

 

 

MultipleSel-3.png

 

MultipleSel-1.png

 

Present Method

MultipleSel-2.png

Proposed Method


We're witnessing more and more requests to stop LV hiding important information from us.  In one direction we want to be able to know (and some want to break code) if structures are hiding code.

 

Others want LV primitives to give visual feedback as to how they are configured, especially if that configuration can have an effect on what's being executed or how it's executed.

 

Examples include (please feel free to add more in the comments below):

 

Array To Cluster (cluster size hidden)

Boolean Array To Number (sign mode hidden)

FXP simple math (rounding, saturation and output type hidden)

SubVI node setup (when right-clicking the subVI on the BD and changing its properties: show FP when run, suspend and so on)

SubVI settings in general (subroutine, debugging)

 

I know there are already ideas out there for most of these (I simply chose examples to link to here; I don't mean to leave anyone's ideas out on purpose), but I feel that instead of targeting the individual neuralgic points where we have problems, I would like to acknowledge for NI R&D the idea behind most of these problems: hiding information from us regarding important differences in code execution is a bad thing.  (Some of those ideas go much further than simply not hiding the information, and I have given most of them kudos for that.)  I don't mean to steal anyone's thunder.  I only decided to post this because of the apparently large number of ideas which have this basic idea at heart.  While many of those go further and want additional action taken (most of which is good and should be implemented), I feel the underlying idea should not be ignored, even if all of the otherwise proposed changes are deemed unsuitable.

 

My idea can be boiled down to this: ALL execution-relevant information which is directly applicable to the BD on view should also be VISIBLE on the BD.

 

As a disclaimer, I deem factors such as FIFO size and queue size to be extraneous factors which can be externally influenced and thus do not belong under this idea.

 

Example: I have some oscilloscope code running on FPGA and had the weirdest of problems where communications worked fine up to (but not including) 524288 (2^19) data points.  As it turns out, a single "Boolean Array To Number" was set to convert the sign of the input number, which turned out to be completely wrong.  I don't know where that came from; maybe I copied the primitive when writing the code and forgot to set it correctly.  My point is that it took me upwards of half a day to track down this problem due to the sheer number of possible error sources in my code (it's really complicated stuff in total) and having NO VISUAL CLUE as to what was wrong.  Had there been SOME kind of visual clue as to the configuration of this node, I would have found the problem much earlier and would be a more productive programmer.  Should I have set the properties correctly when writing the code initially?  Sure, but as LV projects grow in complexity these kinds of things become very burdensome.
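
The failure mode is easy to reproduce in any language. Here is a Python stand-in showing how the same bits yield different numbers depending on an invisible signed/unsigned setting (the values are illustrative, not taken from the FPGA code above):

    bits = b'\xff\xff\xf0\x00'       # same bit pattern either way
    print(int.from_bytes(bits, 'big', signed=False))   # 4294963200
    print(int.from_bytes(bits, 'big', signed=True))    # -4096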

I recently found out that LabVIEW uses the Wichmann-Hill (1984) random number generator.

http://digital.ni.com/public.nsf/allkb/9D0878A2A596A3DE86256C29007A6B4A

 

...which in some fields is completely unacceptable for not being random enough (cycle length of 6.9536e12?)

A much better (and arguably the current standard) choice would be the Mersenne Twister (cycle length of 2^19937 - 1).

http://www.mathworks.com/help/matlab/ref/randstream.list.html
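
For comparison, CPython's standard library already uses MT19937, so the requested behavior can be demonstrated directly:

    import random

    rng = random.Random(42)          # CPython's Random is MT19937 under the hood
    print(rng.random())              # uniform float in [0, 1)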

The Run method is pretty much the only option we have in LabVIEW to scale an application dynamically by instantiating new VIs - but there is one big catch: for some reason it requires the user interface to be idle(!).

 

Related methods, like setting control values, do not have this requirement, so they do not pose a problem - but the main method for dynamic instantiation does, and it can be blocked by something as simple as a user opening the calendar view of a date and time control (which of course should never cause such blocking either, but that's another issue/idea).

 

Personally I use the Run method to create new trend windows (you never know how many trends a user wants to see at the same time), create session handlers for remote clients, etc. When it is used to actually create user interfaces, it is not a big problem that the Run method runs in the user interface thread - but for session handlers and other things that need to be created in the background based on requests from the outside? A HUGE issue.

(not really related to this idea)

 

Deferring panel updates is a tedious procedure, requiring a panel reference (not intuitive to get!) and a couple of property nodes. However, deferring panel updates is sometimes very important. For example when coloring the fields of a large table according to their values, a serious performance impact is encountered unless we defer panel updates. There are many other situations where panel updates should be suspended temporarily, especially if property nodes are used in a tight loop.

 

My suggestion is to make flat sequences a little more useful by adding a new option to the right-click menu of each frame: "Defer Panel Updates". A new connector would then appear on the frame that accepts a panel reference. If left unwired, it applies to the panel of the current VI, but we can wire a reference to any panel we desire (useful for subVIs). There should be a visual indication that updates are deferred, e.g. a slightly different background pattern (a faint checkerboard or similar). Note that this idea does not clutter the palettes with a new structure, but simply extends the functionality of existing elements. Let's put those sequences to some good use!

 

During execution of such a frame, panel updates are deferred and automatically re-enabled once the frame ends. The configuration should be per frame in a multi-frame flat sequence.
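
The "cannot forget to re-enable" guarantee is the same one a scoped construct gives in text languages. A Python context-manager sketch (the Panel class here is a stand-in, not a real LabVIEW API):

    from contextlib import contextmanager

    class Panel:
        defer_updates = False        # stand-in for the DeferPanelUpdates property

    @contextmanager
    def defer_panel_updates(panel):
        panel.defer_updates = True
        try:
            yield panel
        finally:
            panel.defer_updates = False   # always restored, like leaving the frame

    panel = Panel()
    with defer_panel_updates(panel):
        pass                         # bulk property writes would go here
    assert panel.defer_updates is False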

 

This has many advantages over the current situation:

 

  • It is immediately clear what part of the code executes during the deferral
  • The deferral is clearly delimited and it is not possible to accidentally forget to re-enable the updates
As an example, here is my subVI code to color the fields of a large correlation matrix (located on the main VI) according to the respective value, so we can easily pick out where correlations between fitting parameters are a problem.

TOP: current solution. BOTTOM: how it could look with this idea implemented.

 

 

There are already a couple of ideas to retain wire values on a hierarchy (here and here). This is a request to disable (or toggle) the 'retain wire values' option on all VIs of a hierarchy, in the same vein as 'disable breakpoints on hierarchy'. This request was spurred by an investigation into VI performance when resizing a very large 2D array of strings, wherein several memory-allocation and lookup strategies (arrays, maps, variants) had been exhausted, only to discover that a subVI (whose front panel and diagram had been closed) was set to retain wire values. Merely toggling the retain-wire-values setting off on this particular subVI improved performance by approximately one order of magnitude for this particular project.

 

Investigating and accurately measuring run-time LabVIEW performance can be challenging with so many debugging tools available, the impacts of which are not immediately obvious. For this reason, I would like to see a menu option that recurses through the VI hierarchy and toggles the 'retain wire values' option, in the same vein as the option which removes all breakpoints from the VI hierarchy. This option would be very helpful for run-time performance investigation and optimization.

According to my understanding and the following discussion:

http://forums.ni.com/t5/LabVIEW/Why-exactly-can-t-I-inline-this/td-p/3126562

certain types of Property nodes and Invoke nodes should be inlineable without any great problems.

 

I would suggest that the LabVIEW compiler gets a bit smarter about this aspect of limiting the ability to inline code.  If there's no technical limitation to inlining a Property Node or Invoke Node which is fed with a reference (one that does not originate from a local object, i.e. a static reference to an FP object), then it should be inlineable.

In LabVIEW we can dynamically run a VI in a few ways:

a) With the Run method, if it's a top-level VI that isn't running, or if the VI is re-entrant.

b) With Call By Reference, even if it's already running as a subVI.

c) By making a new VI and dropping the (running) subVI on its diagram.

 

The downside of a) is that we can't always make subVIs re-entrant but still want to call them by reference. The downside of b) is that we need to know the strict type (connector pane). The downside of c) is that we might end up with a lot of VIs whose only function is to act as a top-level VI for the subVIs, and it doesn't work in executables.

 

I'd like to propose a method so we can dynamically call a subVI without knowing the strict type.

This is all I want.png

Using it, we enable LabVIEW to dynamically run subVIs while setting/getting their parameters by name.

 

For subVIs (already running), this method will act as the top-level VI. For top-level VIs it will fail unless the VI is idle.
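
In text-language terms the request is essentially reflective invocation. A loose Python analog (FakeVI and its members are hypothetical stand-ins for a VI's named controls and indicators):

    class FakeVI:
        """Stand-in for a VI with named controls and indicators."""
        outputs = ("result",)
        def run(self):
            self.result = self.a + self.b    # the 'diagram' of the stand-in

    def call_by_name(vi, **inputs):
        for name, value in inputs.items():
            setattr(vi, name, value)         # like setting a control by label
        vi.run()
        return {name: getattr(vi, name) for name in vi.outputs}

    print(call_by_name(FakeVI(), a=2, b=3))  # {'result': 5}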

 

 

 

(Please ignore my first confusing attempt)

We are logging shared variables to the Citadel database via variable logging. In some cases (e.g. a PC reset), Citadel repairs the database after restart. With databases larger than 20-30 GB this operation takes at least an hour, during which the Shared Variable Engine is not responding. When the shared variables are used not only for logging but also for control and visualization on that same PC, this is very critical - it also stops production for an hour.

 

It would be better to keep the Shared Variable Engine active during the database repair process. We could do without the logged data during that time, but not without control of that aggregate.

 

Best Regards,

 

--

Joachim

While debugging my application, I wondered how much easier things would be if there were a quick way to find where an error occurs. I'm not sure whether somebody has already proposed this, but here is my suggestion: a VI could have an option to pause execution when an error occurs anywhere in that VI.

 

Error Handling.png

 

 

I guess this would give us a much better way of debugging an application.
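
The requested behavior corresponds to post-mortem debugging in text languages; in Python, for instance:

    import pdb
    import sys

    def pause_on_error(exc_type, exc_value, tb):
        pdb.post_mortem(tb)          # inspect live state where the error occurred

    sys.excepthook = pause_on_error  # any unhandled error now pauses execution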

The current Type Cast function performs a buffer allocation and copy operation even when an in-place typecast would be possible.

 

Please optimize the current function or provide an advanced version of it.

 

Original discussion: Lava - Inplace Typecast possible?
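
As an analogy, Python's memoryview.cast reinterprets an existing buffer without allocating or copying, which is the behavior being asked of Type Cast:

    import array

    data = array.array('B', range(8))    # eight raw bytes
    view = memoryview(data).cast('I')    # reinterpret as two uint32: no copy
    print(view[0], view[1])              # same memory, read through a new type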

When you make a by-reference LabVIEW object using a Data Value Reference (DVR), the user of your class needs to embed each dynamic-dispatch VI inside an In Place Element (IPE) structure. Alternatively, you can create a wrapper VI for each method, but this undermines the advantages of LVOOP inheritance.
 
The idea is that wiring a DVR containing a LabVIEW object to a dynamic-dispatch terminal would be equivalent to calling the method VI inside the IPE structure, as illustrated below.

DVR DynamicDispatch.PNG
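
A rough text-language analog (Python; the names are hypothetical): today every method call must be wrapped explicitly, and the idea is for that wrapping to happen implicitly when the reference is wired to the dynamic-dispatch terminal:

    import threading

    class ByRef:
        """Stand-in for a DVR whose method calls are implicitly serialized,
        as if the dynamic-dispatch terminal inserted the IPE for you."""
        def __init__(self, obj):
            self._obj, self._lock = obj, threading.Lock()
        def __getattr__(self, name):
            method = getattr(self._obj, name)
            def locked_call(*args, **kwargs):
                with self._lock:     # the implicit 'IPE' around the call
                    return method(*args, **kwargs)
            return locked_call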
