LabVIEW Idea Exchange


HABF?

 

An acronym for one of my favorite Spolskyisms. Great article, read it.

 

Background

 

When you discover what you consider to be a bug in LabVIEW's IDE or language, it's a difficult process to report the bug and track the bug's status as new LabVIEW editions debut. This idea addresses the transparency and facilitation of this process, and is meant to appeal to both those who create LabVIEW and those who use LabVIEW.

 

Problems with Current Issue Tracking Platform

 

"Platform" is a generous term for the current reporting and tracking process:

 

  1. The issue reporting procedure is undocumented - few seem to know how to report issues, and fewer still know how to track reported issues
  2. Issue tracking status is largely monitored by a squad of Dedicated Volunteer Bug Scrapers
  3. Duplication of effort (for users, AEs, and R&D) is probable since there is no centralized, searchable repository
  4. It relies on unreliable methods including email, FTP uploads, phone conversations, forums...

Comparing LabVIEW Issue Tracking and Feature Tracking Platforms

 

Before the Idea Exchange, there was the Product Suggestion Center (PSC). What's that, you ask? It's a hole in the internet you threw your good Ideas into. 😄 The Idea Exchange revolutionized Feature Suggestions by introducing a platform that allows an unprecedented level of public brainstorming and symbiotic discussion between R&D and customers. Further, we can watch Ideas flow from inception to implementation.

 

I want to see an analogue for Issue Tracking.

 

Proposed Solution

 

A web-based platform with the following capabilities:

 

  1. Allow users to interactively search a known-bug database. Knowing the status of a bug (not yet reported, pending fix for next release, already fixed in new release...) will minimize duplicated effort
  2. Allow embedded video screen captures (such as Jing)
  3. Allow uploaded files that demonstrate reproducible issues
  4. Allow different "Access Levels" for different bug types. View and Modify permissions should be settable based on User ID, User Group, etc. (some types of bugs should not be public)
  5. Allow different access levels for content within one bug report. For instance, a customer may want a bug report to be public and searchable, yet attach a Private Access Level to proprietary uploads
  6. Allow collaborative involvement for adding content to Bug Reports, where any member can upload additional information (given they have Modify Access Level privileges)
  7. The Homepage of the Issue Tracker should be accessible and visible through ni.com (maybe the IDE too, such as a GSW link)
  8. A more detailed Bugfix Report for each LabVIEW debut (the cryptic subject lines on the current Bugfix List are not helpful)
  9. Specific fields such as Related Bug Reports and Related Forum Posts that allow easily identifiable cross-referencing
  10. An email-based alert system, letting you know when the status or content changes in bug reports of interest 👍
  11. Same username and logon as the Forums
  12. Bonus: visible download links to patches and other bug-related minutiae

 

Additional Thoughts

 

  • I have used the Issue Tracking platform used in Betas, and the exposed feature set is too limited to fulfill the spirit of this Idea
  • I realize the initial and ongoing investment for such a system is high compared to most Ideas on the Exchange. Both issue tracking and economics are sensitive issues, but the resultant increase in product stability and customer confidence makes the discussion worth having.
  • Just to clarify, a perfectly acceptable (and desirable) action is to choose an established issue-tracking service provider (perhaps one of NI's current web service providers carries an acceptable solution?), not create this behemoth in-house.

 

BugFeature.png

(Picture first spotted on Breakpoint)

 

Generally: you are voting for a platform that eases the burden of Issue Reporting, additionally offering a means of Issue Tracking.

 

My suggestions are neither concrete nor comprehensive: please voice further suggestions, requirements, or criticisms in the Comments Section!

The LabVIEW compiler currently appears to use only one core of a multi-core processor. It would be nice if it fully utilized multiple cores to speed up building of large projects.

Windows 10 now handles long file paths, although support has to be enabled via various possible registry or Group Policy edits, and Windows applications have to opt in. LabVIEW source code should be able to use long file paths.
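For reference, here is a minimal Python sketch of the OS-side opt-in step, using the standard winreg module to set the well-known LongPathsEnabled registry value (Windows 10 1607+, requires administrator rights; the per-application manifest opt-in is still needed on top of this):

```python
import winreg  # Windows-only standard library module; run elevated

# Enable Win32 long path support system-wide
key = winreg.OpenKey(
    winreg.HKEY_LOCAL_MACHINE,
    r"SYSTEM\CurrentControlSet\Control\FileSystem",
    0,
    winreg.KEY_SET_VALUE,
)
winreg.SetValueEx(key, "LongPathsEnabled", 0, winreg.REG_DWORD, 1)
winreg.CloseKey(key)
```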

After applying my own subjective intellisense (see also ;)), I noticed that "replace array subset" is almost invariably followed by a calculation of the "index past replacement". Most of the time this index is kept in a shift register for efficient in-place algorithm implementations (see the example at the bottom of the picture, copied from here).

 

I suggest additional output terminals for "replace array subset". Each new output would be aligned with the corresponding index input and would output the "index past replacement" value. This would eliminate the need to calculate this typically needed value externally, and would also eliminate the need for "wire tunneling" as in the example at the bottom right. (Sure, we can wire around as in the top-right examples, but this is one of the cases where I always hide the wire to keep things aligned with the shift register.)
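To illustrate the proposal in text form, here is a rough Python analogue (not LabVIEW code; the function name is invented for illustration). The returned "index past replacement" plays the role of the value normally kept in the shift register:

```python
def replace_array_subset(arr, index, sub):
    """Replace len(sub) elements of arr starting at index (in place)
    and return the 'index past replacement'."""
    arr[index:index + len(sub)] = sub
    return index + len(sub)

# In-place pattern: the returned index feeds the next replacement,
# like a shift register carrying the position across iterations.
out, idx = [0] * 6, 0
for chunk in ([1, 2], [3, 4, 5], [6]):
    idx = replace_array_subset(out, idx, chunk)
# out == [1, 2, 3, 4, 5, 6], idx == 6
```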

 

Of course the idea needs to be extended for multidimensional arrays. I am sure it can all be made consistent and intuitive. There should be no performance impact, because the compiler can remove the extra code if the new output is not wired.

 

Several String functions have an "offset past ..." output (e.g. "search and replace string", "match pattern", etc.), and I suggest re-using the same glyph on the icon.

 

Here is how it could look (left) after implementing the idea. Equivalent legacy code is shown on the right.

 

 

 

Idea Summary: "Replace array subset" should have new outputs, one for each index input, that provide the "index past replacement" position.

 

 

 

 

I often have code in my apps where some error-out nodes are not wired, simply because the errors are generally not of interest to me or the error wiring would clutter up my block diagram. Typically this happens a lot in UI-handling code where many property nodes are used. For these parts I rely on automatic error handling for debugging purposes. One of the drawbacks of this method is that program execution is suspended when the automatic error handler kicks in. It's even worse if this happens in code that is in a loop. Your only option then is to abort the app, which is no good for your reference-based objects, etc.

 

I would love to have the ability to just specify my own 'Automatic Error Handler', enabling me to decide what to do with the unhandled errors. Just logging them is what first comes to mind, but maybe also do some special stuff depending on the type of error, just like a 'normal' error handler. I want to be in control!
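For comparison, text-based languages offer exactly this kind of hook. Here is a minimal Python sketch of a user-defined catch-all handler (using sys.excepthook; the logging behavior is just one example of what the handler could do):

```python
import logging
import sys

logging.basicConfig(filename="unhandled_errors.log", level=logging.ERROR)

def my_automatic_error_handler(exc_type, exc_value, exc_tb):
    # Log instead of suspending execution with a dialog
    logging.error("Unhandled error", exc_info=(exc_type, exc_value, exc_tb))
    # ...could branch on exc_type here for special handling...

sys.excepthook = my_automatic_error_handler  # install the catch-all
```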

 

An added value of this is that your application then has a catch-all error handler, enabling you to at least log every error that occurs, even if it's not wired through. (Everyone forgets to wire some error-out that they actually did want to wire at one time or another, don't they? ;-))

 

Of course, the proposed setting in the image would ideally also be available programmatically via application property nodes.

 

21-4-2013 22-55-19.png

It has been a few months since a suggestion has been made to do something about the For Loop, so if nothing else it is time to stir things up a little bit.  There have been several suggestions to do something with the iterators, for example:

 

http://forums.ni.com/t5/LabVIEW-Idea-Exchange/Smart-Iterators-with-Loops/idi-p/967321

 

http://forums.ni.com/t5/LabVIEW-Idea-Exchange/for-loop-increment/idi-p/1097818

 

http://forums.ni.com/t5/LabVIEW-Idea-Exchange/quot-Start-Step-Stop-quot-structure-on-For-loops/idi-p/1022771

 

None of these has really gained traction, and heck, I haven't settled on any of them myself.  In most cases I am satisfied with a workaround, usually involving the Ramp VI.

 

There is one case where I am not happy with the workaround. I happen to need reverse iteration values quite often (N-1, N-2, ..., 0). With the N terminal pinned to the top-left corner I have little choice but to do the following and start zigging and zagging wires:

 

18391iB5AE9450400CAAD6

 

I would simply like a reverse iteration terminal which can be moved around at will and simply counts down. It doesn't have to look like I have drawn it; that just happens to be intuitive to me. Naturally this terminal (and the normal iteration terminal) should have the option to be hidden.
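In text form, the proposed terminal would simply expose what we currently compute by hand. A quick Python sketch of both, purely for illustration:

```python
N = 5

# Today's workaround: derive the reverse index from i and N
for i in range(N):
    rev = N - 1 - i      # what the proposed terminal would output
    print(i, rev)        # (0,4) (1,3) (2,2) (3,1) (4,0)

# What the terminal would give us directly
for rev in reversed(range(N)):
    print(rev)           # 4 3 2 1 0
```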

 

I thought about having the option to have the loop spin backwards, similar to reversing all auto-indexing inputs and having the iteration terminal count down, but I just could not decide what to do about auto-indexed outputs. My G-instincts tell me that they should be built in the order of loop cycles; my C-instincts tell me I am building array[i], and if i goes in reverse order, so should the array elements. For now I say forget it, and just stick to the simple terminal. Array reversal is essentially free, so I at least have a workaround there.

When you're making a By Reference LabVIEW Object using a Data Value Reference (DVR), the user of your class needs to embed each Dynamic Dispatching VI inside an In Place Element Structure (IPE). Alternatively, you have to create a wrapper VI for each method, but this undermines the advantages of LVOOP inheritance.
 
The idea is that wiring a DVR containing a LabVIEW Object to a Dynamic Dispatching Terminal would be equal to calling the Method VI inside the IPE structure, as illustrated below.

DVR DynamicDispatch.PNG
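A rough Python analogue of the requested semantics (a toy class, purely illustrative): wiring the DVR straight to the dynamic dispatch terminal would behave like acquiring exclusive access, dispatching on the contained object, then releasing - i.e. the IPE becomes implicit:

```python
import threading

class DVR:
    """Toy data value reference: exclusive in-place access to a value."""
    def __init__(self, obj):
        self._obj = obj
        self._lock = threading.Lock()

    def call(self, method, *args):
        with self._lock:                              # the implicit IPE
            return getattr(self._obj, method)(*args)  # dynamic dispatch

# usage sketch: dvr = DVR(some_object); dvr.call("method_name", arg)
```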


This idea came to me from Darren's Nugget 2-23-2018 on Data Agnostic Probes. I thought it might be useful to write a Probe.vim - specifically, a data-type-malleable probe - to gain some access to the data itself in a general smart probe while maintaining the ability to display the data in a type-specific manner.

 

One example would be a "Data History Probe" that displays the history values of any data type.  I'm sure there are other good uses.

My forum question asked here: http://forums.ni.com/t5/LabVIEW/Determine-and-Minimize-LabVIEW-Build-Dependencies/m-p/1441734 has prompted me to post a New Idea on this exchange.

 

    To summarize: LabVIEW executables tend to be on the LARGE side, especially when driver packages like DAQmx, VISA, and IMAQ are required. I'm asking if NI could add some enhanced functionality to the Application/Executable Builder that assists the user in determining and minimizing the size of the final distribution package. Ideally this would be a completely automatic function that would simply minimize the build parameters to the smallest possible set after analyzing the active project. Additionally, it would be great if this were smart enough to pare down the driver package itself to the bare minimum. (It shouldn't require a 150 MB DAQmx package to read a few thermocouples with a 9211 module.)

    At the very least, the Additional Installers tab should show the size of the packages you're checking off and the resulting installer's total size.

 

An example of the size increases of required installer packages for a simple DAQmx application:

Build                              Size (MB)
Program Itself                             8
Run-time Engine Added                    104
DAQmx Core Drivers Added                 258
DAQmx Core Drivers + MAX Added           712

Create a Panel Open event.

 

PanelOpenEvent.png

 

Among other things, it would provide a clean entry point for guaranteeing Control References have been created, allowing for their dynamic event registration to prevent errors such as the following:

 

PanelOpenError.png

(not really related to this idea)

 

Deferring panel updates is a tedious procedure, requiring a panel reference (not intuitive to get!) and a couple of property nodes. However, deferring panel updates is sometimes very important. For example, when coloring the fields of a large table according to their values, a serious performance impact is encountered unless we defer panel updates. There are many other situations where panel updates should be suspended temporarily, especially if property nodes are used in a tight loop.

 

My suggestion is to make flat sequences a little more useful by adding a new option to the right-click menu of each frame: "Defer Panel Updates". A new connector would then appear on the frame that accepts a panel reference. If left unwired, this applies to the panel of the current VI, but we can wire a reference to any panel we desire (useful for subVIs). There should be a visual indication that updates are deferred, e.g. a slightly different background pattern (a faint checkerboard or similar). Note that this idea does not clutter the palettes with a new structure, but simply extends the functionality of existing elements. Let's put those sequences to some good use!

 

During execution of such a frame, the panel updates are deferred and automatically enabled again once the frame ends. The configuration should be "per frame" of a multi-frame flat sequence.
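The "clearly delimited, impossible to forget to re-enable" behavior is exactly what scoped constructs give you in text languages. A minimal Python sketch with a context manager (the Panel class and its defer_updates method are hypothetical stand-ins, not a real API):

```python
from contextlib import contextmanager

class Panel:                        # hypothetical front panel reference
    def defer_updates(self, on):    # hypothetical method, for illustration
        print("defer" if on else "redraw once")

@contextmanager
def deferred_updates(panel):
    panel.defer_updates(True)
    try:
        yield
    finally:                        # always re-enabled, even on error
        panel.defer_updates(False)

with deferred_updates(Panel()):
    pass                            # many property writes; one repaint after
```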

 

This has many advantages over the current situation:

 

  • It is immediately clear what part of the code executes during the deferral
  • The deferral is clearly delimited, and it is not possible to accidentally forget to re-enable the updates

As an example, here is my subVI code to color the fields of a large correlation matrix (located on the main VI) according to the respective value, so we can easily pick out where correlations between fitting parameters are a problem.
TOP: the current solution. BOTTOM: how it could look with this idea implemented.

 

 

How about having a timeout occurrence as an input for functions which support timeouts?

 

I am illustrating a single use case with queues (and a notifier), but I see this as being beneficial to nearly ALL functions with timeout inputs.

 

Sometimes we'd like to wait for one of a few different functions (an example of mine which springs to mind is the Dequeue primitive). At the moment we must essentially poll each primitive with a relatively short timeout, see if one responded, and then handle accordingly. This is (for me at least) ugly to look at and introduces polling, which I generally don't like. I'm an events man.

 

Occurrence timeout.png

 

What I propose is that instead of simply defining a timeout in milliseconds, we can define both a timeout (in milliseconds) AND an occurrence for the timeout. If we wire this data to several primitives (all sharing the same occurrence), the first primitive to receive data triggers the occurrence so that the others waiting in parallel also return.

 

In the case where no data arrives, each function waits the defined amount of time but upon timeout DOES NOT fire the occurrence. This would cover corner cases where you may want different parallel processes to have different timeouts (yes, there are such cases, although they may be rare). It is possible to change the "priorities" of the incoming queues in this way.
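A rough Python sketch of the mechanism, with threading.Event standing in for the occurrence (races and multiple competing waiters are glossed over; this is a sketch of the semantics, not a robust implementation):

```python
import queue
import threading

wake = threading.Event()             # the shared "occurrence"
q_a, q_b = queue.Queue(), queue.Queue()

def enqueue(q, item):
    q.put(item)
    wake.set()                       # first arrival wakes the waiter early

def dequeue_any(timeout_s):
    """Return {name: item} for whatever arrived; {} means timeout.
    On timeout the event is NOT set, matching the proposal."""
    got = {}
    if wake.wait(timeout_s):         # returns False on timeout
        wake.clear()
        for name, q in (("a", q_a), ("b", q_b)):
            try:
                got[name] = q.get_nowait()
            except queue.Empty:
                pass
    return got
```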

 

Background info: one example where I could use this is RT communication, where we multiplex many different commands over TCP or UDP. On the API side it would be beneficial to be able to work with several strictly-typed queues to inject data into this communication pipe while still maintaining maximum throughput. I don't like using variants or flattened strings to achieve this multiplexing.

 

Being forced to poll means the code must decide between efficiency (low CPU usage) OR response time (setting a very low timeout). Although the CPU load for polling may be low, it's not something I personally like implementing, and I think it's generally to be avoided.

 

There IS an LVOOP solution to this particular problem, but not everyone can use LVOOP in their projects (for various reasons). I can envisage other use cases where interrupting a timeout would be desirable (Event structure, Wait (ms), VISA Read, and so on and so forth).

 

Shane.

The Run Method is pretty much the only option we have in LabVIEW to scale an application dynamically by instantiating new VIs - but there is one big catch - for some reason it requires the user interface to be idle(!).

 

Related methods, such as setting control values, do not have this requirement, so they do not pose a problem - but the main method for dynamic instantiation does, and it can be blocked by something as simple as a user opening the calendar view of a date and time control (which of course should never block execution either, but that's another issue/idea).

 

Personally I use the Run method to create new trend windows (you never know how many trends a user wants to see at the same time), create session handlers for remote clients, etc. When it is used to actually create user interfaces, it is not a big problem that the Run method runs in the user interface thread - but for session handlers and other things that need to be created in the background based on requests from the outside? A HUGE issue.

Many of the Mathematics and Signal Processing VIs retain state, rendering them unusable inside reentrant VIs: http://digital.ni.com/public.nsf/allkb/543589DF37BA48CF8625744A00543FF3

 

Many of the VIs in this list (all those in my current application, unfortunately) can only work with single-channel data. When manipulating multi-channel data, you can work around that fact by running the channels serially through the VI you need, but that (1) takes much longer for large data sets or several channels, and (2) is not an option when performing live manipulation of streaming data block-by-block.

 

I ran into this problem while developing code in the Actor Framework, where Do.vi and Actor Core.vi (the two main framework methods) are both Shared Reentrant. Now that AF is a native feature in LV, I expect that more people will run into problems with these VIs.

 

We need stateless versions of these VIs so we can use multiple copies on a multichannel data set. Backward compatibility could probably be kept by pushing the core logic to a new stateless subVI and keeping the shift register or feedback node on the main VI's diagram, as sketched below.
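In text form the refactoring looks like this - a minimal Python sketch using a hypothetical single-pole lowpass filter as a stand-in for any of the stateful analysis VIs:

```python
# Stateful version (like today's VIs): hidden state makes parallel,
# multi-channel use unsafe when instances are shared.
_last = 0.0
def lowpass_stateful(x, alpha=0.1):
    global _last
    _last += alpha * (x - _last)
    return _last

# Stateless core: the caller owns the state, so one copy serves any
# number of channels in parallel. A thin wrapper keeping the state in
# a shift register would preserve the existing interface.
def lowpass_stateless(x, state, alpha=0.1):
    state = state + alpha * (x - state)
    return state, state              # (output sample, updated state)
```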

TensorFlow is an open source machine learning tool originally developed by Google research teams.

 

Quoting from their API page:

 

TensorFlow has APIs available in several languages both for constructing and executing a TensorFlow graph. The Python API is at present the most complete and the easiest to use, but the C++ API may offer some performance advantages in graph execution, and supports deployment to small devices such as Android.

Over time, we hope that the TensorFlow community will develop front ends for languages like Go, Java, JavaScript, Lua, R, and perhaps others.

 

Idea Summary: I would love to see LabVIEW among the "perhaps others". 😄

 

(Disclaimer: I know very little in that particular field of research)

 

I am extensively using the Array To Spreadsheet String primitive, and most of the time I never use the Format string input (I used to wire an empty string constant) and still get the result I expect. So I think it would be better if this required terminal were changed to an optional terminal.

 

It is known that Array To Spreadsheet String is polymorphic, but whether we wire an array of I32 or DBL, the output string is in DBL format only. It would be good if the output string adapted to the data type that is wired, unless otherwise specified by the Format string.
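A small Python sketch of the proposed default behavior (the function name is invented for illustration): with no format supplied, each element is rendered according to its own type, while an explicit format string still overrides everything:

```python
def array_to_spreadsheet_string(values, fmt=None, delim="\t"):
    # With fmt=None, adapt to each element's own type instead of
    # forcing everything through a single (DBL-style) format.
    cell = (lambda v: fmt % v) if fmt else str
    return delim.join(cell(v) for v in values)

array_to_spreadsheet_string([1, 2.5, 3])          # '1\t2.5\t3'
array_to_spreadsheet_string([1, 2.5], "%.3f")     # '1.000\t2.500'
```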

 

Changes needed for Array to Spreadsheet String.png

Really, at the end of the build process it would be nice to know how long the build lasted.

 

One of the benefits would be to make build-duration comparisons between different LabVIEW versions much easier 😮

 

Clipboard01.jpg

Over the years I have run into situations where I have to monitor files accessed by another application from within my LabVIEW application (example: monitoring the standard_out/standard_error files from an app launched with System Exec).

Currently the only way to do that is to use polling. 

 

I would love for LabVIEW to have built in events for file monitoring so polling would no longer be required.

 

For instance, the following events would be very useful:

  1. File Opened
  2. File Closed
  3. File Change  (I really want this one if nothing else)
  4. File Created
  5. File Deleted 
  6. more...

 

Below is an example of how it might look.

5-20-2010 9-20-09 AM.png
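For comparison, this is what event-driven file monitoring looks like in Python with the third-party watchdog package (not an NI API; shown only to demonstrate that no polling loop is needed):

```python
# pip install watchdog  (third-party package)
import time
from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

class LogHandler(FileSystemEventHandler):
    def on_modified(self, event):            # the "File Change" event
        if not event.is_directory:
            print("changed:", event.src_path)

    def on_created(self, event):             # the "File Created" event
        print("created:", event.src_path)

observer = Observer()
observer.schedule(LogHandler(), path=".", recursive=False)
observer.start()                             # callbacks fire on events
try:
    time.sleep(60)                           # app keeps doing other work
finally:
    observer.stop()
    observer.join()
```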

 

It is common, in writing reusable code, to handle arbitrary clusters in variants.  To access the elements of the cluster, one wants to convert the cluster into an array of variants containing the individual items.  After access, one needs to convert the array of variants back into the original cluster.

 

There are several examples of packages that use this on NI.com and in the LAVAg.org Code Repository. They mostly use the OpenG functions for working with Variant Clusters; however, these can be quite slow. Recent LabVIEW versions can do much of this faster natively; however, one very important native ability is still missing: converting an Array of Variants into a Variant Cluster.

 

The other direction, Cluster to Array of Variants, works like this:

Array of Variants to Cluster.png

But trying to reverse the process breaks:

Array of Variants to Cluster.png

 

So my idea is to make the second image work: make an Array of Variants interchangeable with a Variant Cluster in the "Variant to Data" LabVIEW primitive. The interchangeability should also apply to contained subclusters/arrays. The matching of array elements to cluster elements can be by cluster order rather than element name.
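A Python analogue of the requested round trip, with a dataclass standing in for an arbitrary cluster (matching is by element order, as proposed; all names here are invented for illustration):

```python
from dataclasses import astuple, dataclass

@dataclass
class Settings:                      # stands in for an arbitrary cluster
    gain: float
    channel: int
    label: str

def cluster_to_array(c):             # the direction that already works
    return list(astuple(c))

def array_to_cluster(cls, values):   # the missing direction this idea wants
    return cls(*values)              # matched by order, not by name

s = array_to_cluster(Settings, cluster_to_array(Settings(1.5, 3, "X")))
assert s == Settings(1.5, 3, "X")
```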

 

This would greatly aid the performance of reusable packages that operate on arbitrary clusters.