LabVIEW Idea Exchange


Create an XY Graph and feed it a time-stamped XY plot with a few hundred thousand points... and you have yourself a very sluggish and possibly crash-prone application. The regular graph can take a bit more data, but it still has its limits. Having 100k points to display is quite common (in my case it's most often months of 1-second data).

 

The idea could be formulated as just "improve how graphs handle large data sets"... but how that would be done depends a bit on what optimizations the graph code is open to. The most effective solution, however, would probably be to do what you currently have to write yourself - surrounding decimation logic.

 

So my suggestion is to add a built-in decimation feature, where you can choose to have it applied automatically when needed - or when you say it is needed - and possibly with a few different decimation methods (min-max, every Nth point, etc.). Automatic decimation should be on by default, making the problem virtually invisible to the novice user.
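For reference, the kind of surrounding decimation logic you end up writing today looks roughly like the sketch below (C++ is used here just to describe the technique; the function and names are illustrative, not an existing NI API). Min-max decimation keeps the extremes of each bucket so narrow peaks stay visible after reduction, which an every-Nth-point scheme would miss.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Point { double x, y; };

// Reduce 'data' to roughly 'maxPoints' points by keeping the minimum and
// maximum Y value of each bucket, so peaks survive the decimation.
std::vector<Point> minMaxDecimate(const std::vector<Point>& data, std::size_t maxPoints)
{
    if (maxPoints < 2 || data.size() <= maxPoints) return data;

    const std::size_t buckets    = maxPoints / 2;          // two points kept per bucket
    const std::size_t bucketSize = data.size() / buckets;
    std::vector<Point> out;
    out.reserve(buckets * 2);

    for (std::size_t b = 0; b < buckets; ++b) {
        const std::size_t begin = b * bucketSize;
        const std::size_t end   = (b + 1 == buckets) ? data.size() : begin + bucketSize;
        std::size_t lo = begin, hi = begin;
        for (std::size_t i = begin; i < end; ++i) {
            if (data[i].y < data[lo].y) lo = i;
            if (data[i].y > data[hi].y) hi = i;
        }
        // Keep the two extremes in their original time order.
        out.push_back(data[std::min(lo, hi)]);
        out.push_back(data[std::max(lo, hi)]);
    }
    return out;
}
```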

 

A big advantage of doing it within the graph is that it will (should) integrate fully with the other features of the graph - like zooming, cursors etc.

This has been brought up long ago, but I think it deserves to be discussed here in the Idea Exchange.

 

There are situations where it might be beneficial to have a string datatype with a defined length. Arrays of such strings would be stored flat in memory.

 

Applications would include:

  • Typecasting a long string to a string array whose element is a fixed-length string would slice the string into an array of equal-length strings.
  • Reading a binary file as an array of fixed strings would do the same.
  • ...

 

The default value would be a string of the defined lenght filled with \00. Shorter inputs would get padded with \00

Of course certain operations would drop the length, e.g. when concatenating such strings, the length would get dropped from the result, turning it into a plain string.
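To make the typecasting/slicing behavior concrete, here is a rough sketch in C++ of what such a fixed-length string type would do (the length of 8 and the names are assumptions made for the example only):

```cpp
#include <algorithm>
#include <array>
#include <cstring>
#include <string>
#include <vector>

constexpr std::size_t kLen = 8;                 // the "defined length" of the string type
using FixedString = std::array<char, kLen>;     // stored flat in memory, padded with '\0'

// Slice a long string into equal-length chunks; a short final chunk is padded with '\0'.
std::vector<FixedString> toFixedStringArray(const std::string& s)
{
    std::vector<FixedString> out((s.size() + kLen - 1) / kLen);
    for (std::size_t i = 0; i < out.size(); ++i) {
        out[i].fill('\0');                                       // default value: all \00
        const std::size_t n = std::min(kLen, s.size() - i * kLen);
        std::memcpy(out[i].data(), s.data() + i * kLen, n);      // copy this slice
    }
    return out;
}
```

Reading a binary file as an array of such elements would work the same way: every element occupies exactly kLen bytes.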

We're witnessing more and more requests to stop LV from hiding important information from us.  In one direction, we want to be able to know (and some want to break code) if structures are hiding code.

 

Others want LV primitives to give visual feedback as to how they are configured, especially if that configuration can have an effect on what's being executed or how it's executed.

 

Examples include (please feel free to add more in the comments below):

 

Array to cluster (Cluster size hidden)

Boolean array to number (Sign mode hidden)

FXP simple Math (Rounding, saturation and output type hidden)

SubVI node setup (When right-clicking the subVI on the BD and changing its properties - show FP when run, suspend, and so on)

Sub VI settings in general (Subroutine, debugging)

 

I know there are already ideas out there for most of these (I simply chose examples to link to here - I don't mean to leave anyone's idea out on purpose), but instead of targeting the individual neuralgic points where we have problems, I would like to make clear to NI R&D that the idea behind most of these problems (some of them go much further than simply not hiding the information, and I have given most of them kudos) is that hiding information from us regarding important differences in code execution is a bad thing.  I don't mean to steal anyone's thunder.  I only decided to post this because of the apparently large number of ideas which have this basic idea at heart.  While many of those go further and want additional action taken (most of which is good and should be implemented), I feel the underlying idea should not be ignored, even if all of the other proposed changes are deemed unsuitable.

 

My idea can be boiled down to this: ALL execution-relevant information which is directly applicable to the BD in view should also be VISIBLE on the BD.

 

As a disclaimer, I deem factors such as FIFO size and queue size to be extraneous factors which can be externally influenced and thus do not belong under this idea.

 

Example: I have some oscilloscope code running on FPGA and had the weirdest of problems where communications worked fine up to (but not including) 524288 = 2^19 data points.  As it turns out, a single "Boolean array to number" was set to convert the sign of the input number, which turned out to be completely wrong.  I don't know where that came from; maybe I copied the primitive when writing the code and forgot to set it correctly.  My point is that it took me upwards of half a day to track down this problem due to the sheer number of possible error sources in my code (it's really complicated stuff in total) and having NO VISUAL CLUE as to what was wrong.  Had there been SOME kind of visual clue as to the configuration of this node, I would have found the problem much earlier and would be a more productive programmer.  Should I have set the properties correctly when writing the code initially? Sure, but as LV projects grow in complexity these kinds of things become very burdensome.
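To make the failure mode concrete, the hidden sign setting changes the result roughly like this (a C++ analogy of the conversion, not the actual FPGA code): the same bit pattern becomes a very different number once sign extension is applied.

```cpp
#include <cstdint>
#include <cstdio>

int main()
{
    // A 20-bit pattern with the top bit set, e.g. the result of "Boolean array to number".
    const std::uint32_t bits = 0x80000;          // 524288 = 2^19

    const std::uint32_t asUnsigned = bits;       // unsigned conversion: 524288
    // Signed conversion: bit 19 is treated as the sign bit and extended upward.
    const std::int32_t asSigned = (bits & 0x80000)
        ? static_cast<std::int32_t>(bits | 0xFFF00000u)
        : static_cast<std::int32_t>(bits);

    std::printf("unsigned: %u\n", asUnsigned);   // 524288
    std::printf("signed:   %d\n", asSigned);     // -524288
    return 0;
}
```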

I would like to propose the use of a stacked parallel execution structure.  Especially in FPGA applications, this would solve the problem of diagrams running off the screen.  All execution pages would run simultaneously, as if independent while loops were scattered across the BD.  This idea potentially leads to a 3-dimensional visualization of coding in LabVIEW. Note: In the image, the pages are offset, but the structure should look similar to a stacked sequence structure.

 

 

Parallel Execution Structure 3.JPG

 

 

The number of parallel instances is currently capped at 64, independent of hardware. This limit should be raised.

 

First reason: Since even 64-bit Windows 7 supports up to 256 cores, it would be reasonable to raise that limit to 256.

 

(Even the next version of Windows Mobile (8) will support 64 cores. Mobile! On a phone! 🐵 Obviously the upcoming hardware is moving fast in that direction.)

 

Second reason: Sometimes it is useful to generate many instances even if we have fewer cores available, for example to maintain individual data in a large number of identical reentrant subVIs. (A usage example where we want many instances even on a single-core machine can be found here.)

 

Idea: Raise the maximum number of parallel instances of a parallel FOR loop to 256.
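The second reason is easy to picture in text-based terms (a small, purely illustrative C++ sketch): each "instance" keeps its own data between calls, and the number of instances you want is independent of how many cores actually execute them.

```cpp
#include <cstdio>
#include <vector>

// One "reentrant instance": it keeps its own running state between calls.
struct ChannelFilter {
    double state = 0.0;
    double update(double sample) { state = 0.9 * state + 0.1 * sample; return state; }
};

int main()
{
    std::vector<ChannelFilter> instances(256);   // 256 instances, regardless of core count
    for (int i = 0; i < 256; ++i)
        instances[i].update(static_cast<double>(i));  // each holds individual data
    std::printf("instance 255 state: %f\n", instances[255].state);
    return 0;
}
```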

Currently, if you use the stock one and two button dialog VIs in your code, they will block the root loop while being displayed.

For those of you who don't know what the 'root loop' is, this is the core process of LabVIEW that many UI functions must execute under.  One of the key functions that executes in the root loop is VI Server calls to open a new dynamic VI.  So, if your code has multiple threads all performing operations that involve dynamic calls to other 'plug-in' VIs and one part of your code calls one of these stock dialog VIs, then all the other threads will be blocked until the user clears the dialog.

As a result of this, I have had to write my own 'root loop safe' versions of these dialogs to prevent this from happening.

 

As a side note, dropping down a menu and leaving it down without selecting anything also blocks the root loop.  It would be great if they could fix this too!

 

 

I envision a structure much like a case structure, in which you select the event that triggers evaluation of the code inside the structure, and the resulting values become constants at the node. The interior would allow code that may normally not be able to run on the host; for example, on FPGA it might allow the use of doubles, strings, and resized arrays, because it isn't actually going to be executed on the host, just evaluated and stored as a constant. This would allow more configuration for FPGA and even have some benefits in the traditional desktop environment. For example, you could set the structure to evaluate on app build and produce a string constant that is the build date, so the build date could be shown on the UI to help distinguish builds.

image.png
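In text-based terms the closest analogy is compile-time evaluation. The build-date example from the description looks roughly like this in C++ (a sketch of the analogy only; the proposed structure itself is a LabVIEW concept):

```cpp
#include <cstdio>

// Expanded by the compiler when the application is built, not when it runs:
// the result is an ordinary string constant baked into the binary.
constexpr const char* kBuildDate = __DATE__ " " __TIME__;

int main()
{
    // The UI can display this constant to help distinguish builds.
    std::printf("Built on: %s\n", kBuildDate);
    return 0;
}
```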

There has been a lot of discussion on this, but no LabVIEW Idea (that I was able to find). Please move the Stacked Sequence structure from the Programming palette to a "retro" or "classic" palette. This is to dissuade novices from overusing it.

 


Previous wording:

 

There has been a lot of discussion on this but no LabVIEW Idea (that I was able to find).

Retain them for legacy code, but please remove them from the Programming palette.

 

Let's vote this in and get rid of Stacked Sequences forever.

I would go so far as to release a patch to remove them from all Installed LabVIEW Versions!

Message Edited by Laura F. on 09-30-2009 03:49 PM
When I installed LabVIEW I realized that a really large number of automatically starting programs is installed. Even when running LabVIEW, not all of them are really needed, but they are still actively working in the PC's memory and take power that would be needed for other applications - especially when LabVIEW is not running at all. A possibility should be implemented to deactivate these automatic applications:

  • Reduce the number of autostart applications to those that are really needed.
  • Give the possibility to switch them all off when there is no intention to run LabVIEW. A reboot to re-activate these autostart applications would not be a problem.
  • A minimum request is to give a list of the autostart applications that are needed for each LabVIEW application. This would help to deactivate the autostart programs manually.

This is sort of two features bundled together, but they make sense to do them at the same time.

 

First, add an easy way to temporarily disable the "Allow debugging" VI property across an entire project.  This would be step 1 in making an easy "Release Mode" option.

 

Second, add some conditional disable symbols to all projects.  If the project is in debug mode, add "DEBUG_MODE" to the project, and if it's in release mode, add "RELEASE_MODE".  While it is possible to do this manually now, each user could choose a different name for their symbol.  If LabVIEW does this for everyone, then it allows better library interoperability.  The main use case for these symbols is to add debugging traces and breakpoints that are undesirable in shipping code.
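For comparison, this is what the same pattern looks like with preprocessor symbols in C++ (a sketch; DEBUG_MODE and RELEASE_MODE here stand for the project-defined symbols proposed above and would be set by the build, not by hand):

```cpp
#include <cstdio>

void acquireData()
{
#ifdef DEBUG_MODE
    // Debug-only tracing: compiled out entirely in a release build.
    std::printf("acquireData() entered\n");
#endif
    // ... normal acquisition code ...
}

int main()
{
#ifdef RELEASE_MODE
    std::printf("Release build: debugging disabled\n");
#else
    std::printf("Debug build\n");
#endif
    acquireData();
    return 0;
}
```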

LabVIEW could use a feature that's commonly used in C++, the "final" specifier for a class override method.  This would allow a child class to override a method from a parent class (or interface) and then prevent child classes of itself from overriding it.  Currently, with large inheritance structures, it becomes difficult for developers to create child classes since so many of the methods can be overridden.  The final specifier would allow you to create intermediate classes that define certain override functionality that does not need to be further overridden, and only pass on the ability to override the methods that are important to child classes.

FInal Override.png
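Since the request borrows directly from C++, the equivalent C++ code shows the intent (class names are illustrative):

```cpp
#include <cstdio>

class Parent {
public:
    virtual ~Parent() = default;
    virtual void configure() { std::printf("Parent::configure\n"); }
    virtual void run()       { std::printf("Parent::run\n"); }
};

class Intermediate : public Parent {
public:
    // Overrides Parent::configure and locks it: no further overrides allowed.
    void configure() override final { std::printf("Intermediate::configure\n"); }
    // run() stays open for child classes to override.
};

class Child : public Intermediate {
public:
    void run() override { std::printf("Child::run\n"); }
    // void configure() override {}   // compile error: configure() is final in Intermediate
};
```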

I would like to have the ability to set the compare aggregates mode for comparisons involving containers (arrays certainly, clusters would be a nice bonus) and a scalar value.  This includes the comparisons to 0 functions as well.
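In text form the two modes correspond roughly to element-wise versus whole-array comparison against the scalar (a small C++ sketch of the distinction; names are illustrative):

```cpp
#include <algorithm>
#include <vector>

int main()
{
    const std::vector<double> data {0.2, -0.1, 3.4};
    const double threshold = 0.0;

    // "Compare Elements": one Boolean per element (what array-vs-scalar gives today).
    std::vector<bool> each(data.size());
    std::transform(data.begin(), data.end(), each.begin(),
                   [threshold](double v) { return v > threshold; });

    // "Compare Aggregates": a single Boolean for the whole array vs. the scalar.
    const bool all = std::all_of(data.begin(), data.end(),
                                 [threshold](double v) { return v > threshold; });
    return all ? 0 : 1;
}
```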

 

compareAggregatesIdea.png

At present we can select only a single VI via the "Right-click > Select VI" option on the block diagram. It would be very helpful if we could select multiple VIs at the same time. Images for reference:

 

 

MultipleSel-3.png

 

MultipleSel-1.png

 

Present Method
MultipleSel-2.png

Proposed Method


In LabVIEW it is not possible to have an array of arrays. The closest you can get is a 2D array, but each row must be the same size. You can use an array of clusters of arrays to get around this limitation. Many languages do support arrays of arrays.
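For readers coming from text-based languages, this is the ordinary jagged array; C++ is shown here simply as one of the many languages that support it:

```cpp
#include <vector>

int main()
{
    // An array of arrays: each row may have a different length,
    // unlike a 2D array where every row must be the same size.
    std::vector<std::vector<double>> jagged {
        {1.0, 2.0, 3.0},
        {4.0},
        {5.0, 6.0}
    };
    jagged[1].push_back(7.0);   // grow one row independently of the others
    return 0;
}
```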

I rediscovered a kind of annoying behavior with multiplot XY-Graphs which appears to have existed since LabVIEW's emergence. Say I want to use the XY-Graph like a curve tracer, adding XY data pairs to an existing multiplot XY-Graph by passing the array of array clusters around in loop shift registers and filling in new data as it's produced. This all works well, BUT when adding multiple plots in different colors, consecutive plots are always drawn underneath the already existing ones instead of the other way around. I find that new plots should be drawn over the already existing ones, also matching the sequence shown in the plot legend.
 
If your new plot has the same or similar values as the previous one, the second plot will be hidden underneath the first one. You can only see the second one if the values are different. It's counterintuitive, I think, and I wonder why it has always been like that.
 
I plead for an XY-Graph option, along with a property node, to change this so that new plots cover old ones where they intersect. The legend sequence should not be changed because the top-to-bottom order is correct.
 
A demo VI of the misbehavior is attached.
 
I hope I get enough support for this to be done. 

Urs

Urs Lauterburg
Physics demonstrator
Physikalisches Institut
University of Bern
Switzerland

Dear NI,

 

Please fix LabVIEW so that when you right-click on a wire on the block diagram the menu pops up immediately, not after 1 second. Also, while you are fixing that, it would be great if you could fix the speed it takes to open a control/indicator properties configuration dialogue.

 

As far as I can recall these were fine in LV 7, but sometime after that it all started to get a bit sluggish.

 

Thanks!

 

 

 

 

This idea was posted years ago and declined for lack of interest (it got 6 of the 7 kudos necessary, but I would have been the 7th!).  I would like to bring it back: I would like my application to have access to its own version number.  In fact you can open the project programmatically and see some build properties, but not that one.  I could then grab the version from the build properties and set the default values in my FPGA code before compiling.

 

I was trying to make a pre-build VI that would look at the build properties and copy the version data into a control.  It can't be done.  I find this very useful for making sure that my RT system and my FPGA code have the correct versions.

 

Same as with an about box, or version checking for compatibility.

 

The previous thread suggested a routine in FileVersion.llb, but that seems to be available on only a single platform.  Not useful for RT Linux or Mac.  The version is not available until the executable is built, which does not work for FPGA.

 

At the moment my only recourse is to hand-copy the version from the build properties and then set those as a series of 4 integers on the FP.  (Then select them all and set their values to default, hence the other suggestion about right-click.)

By default, when you build an application, the front panels of all VIs are removed.

 

Of course, this would make applications completely unusable, so there are certain changes which cause LV to keep the front panel in, such as setting the front panel to open when the VI is called. This gives LV a clear indication you want the front panel and these changes are generally enough for most of the VIs which people use.

 

There are, however, times when you want a VI to keep the panel, but it's a VI which usually (or sometimes even never) won't be displayed. Today there are several ways to handle this case.

 

 

The official way is too flimsy:

 

Keep_FP_AB.png

 

 

It's limited to specific build specs and it's too easy to break (e.g. remove the VI from the project and add it back).

 

 

 

 

The automatic way relies on additional changes you're likely to make if you intend to show the VI, but is also too easy to break:

 

Keep_FP_Property.png

 

Someone could just unset the property and then it's gone.

 

 

 

 

What I usually do today is something like this:

 

Keep_FP.png

 

The static property node ensures the FP will remain and the comment makes it less likely that someone will remove it.

 

 

 

 

What I want to see, however, is something more explicit, possibly in the shape of a new VI property:

 

Keep_FP_New_Property.png

In LabVIEW 2009, the programmatic API for shared variables and I/O variables was introduced. This allows you to reference a variable by name, rather than dropping a static node on the diagram.

 

Some of the benefits are: iterating over many variables with looping structures, creating modular code, and dynamically accessing variables based on names in a config file for example. 

 

Programmatic access to single-process shared variables would also be useful.

 

(Single-process variables are effectively global variables (not network published), but use the same static node as the shared variable and are contained in project libraries.) 

 

Problem: 

When handling larger data blocks I get "not enough memory" error messages, with the only option being to kill the executable (it happens from time to time, and is usually unwanted 😉).

Most of the time this involves array functions (Initialize Array, Build Array, etc.). As this problem also depends on the PC used, it cannot be tested (easily) on a developer machine before deploying the executable to a wide range of computers...

 

I propose the idea to enable some error handling in such cases. This could be done in two ways:

1) Add an error output to array functions. This could be made available optionally via the right-click menu so you don't break older code. The output would be an error stating "Not enough memory for operation ..."

 

2) Add a new application event, "Application.MemoryAllocationError". This way the program can at least catch the problem. (Inspired by the "OnError" constructs of text-based programming languages...)
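For comparison, the "OnError"-style handling referred to above looks like this in a text-based language (a C++ sketch of the general idea; the "Application.MemoryAllocationError" event is of course the proposal, not an existing API):

```cpp
#include <cstdio>
#include <new>
#include <vector>

int main()
{
    try {
        // Roughly the "Initialize Array" case: a huge allocation that may fail.
        std::vector<double> big(5'000'000'000ULL);
        std::printf("allocated %zu elements\n", big.size());
    } catch (const std::bad_alloc&) {
        // The application gets a chance to react instead of being killed.
        std::printf("Not enough memory for operation - falling back to smaller blocks\n");
    }
    return 0;
}
```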