LabVIEW Idea Exchange


When it comes to using a queue, there is a fairly common design pattern used by NI examples: a producer/consumer loop where the consumer calls a dequeue function with a timeout of -1.  This means the function will wait forever until an element comes in.  But a neat feature of this function is that it also returns when the queue reference becomes invalid, which can happen if the queue reference is released, or if the VI that created that reference stops running.

 

This idea is to have similar functionality when it comes to user events.  I have a common publisher/subscriber design pattern where a user event is created and multiple loops register for it.  If for some reason the main VI stops, that reference becomes invalid but my other asynchronous loops will continue running.  In the past I've added a timeout case, where I check whether the user event is still valid once every 5 seconds or so, and if it isn't, I go through my shutdown process.

 

What I am thinking is that there could be another event to register for, generated when the registered user event becomes invalid, so that polling for the validity of the user event isn't necessary.
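To make the queue behaviour concrete in text form, here is a minimal C++ sketch (hypothetical names, not LabVIEW's implementation) of a blocking dequeue that is woken either by a new element or by the queue being released.  The request is for registered user events to get the same kind of wake-up, so that the 5-second validity poll becomes unnecessary.

```cpp
// Minimal sketch (not LabVIEW code): a blocking queue whose release wakes any
// waiting consumer, analogous to Dequeue Element returning when the queue
// reference becomes invalid.  All names here are hypothetical.
#include <condition_variable>
#include <mutex>
#include <optional>
#include <queue>

template <typename T>
class NotifyingQueue {
public:
    void Enqueue(T value) {
        { std::lock_guard<std::mutex> lock(m_); q_.push(std::move(value)); }
        cv_.notify_all();
    }

    // Blocks forever (timeout -1) until an element arrives *or* the queue is released.
    std::optional<T> Dequeue() {
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return released_ || !q_.empty(); });
        if (!q_.empty()) { T v = std::move(q_.front()); q_.pop(); return v; }
        return std::nullopt;  // reference became invalid -> consumer can shut down
    }

    // Equivalent of the queue reference becoming invalid (released, or the
    // creating VI stopped running).
    void Release() {
        { std::lock_guard<std::mutex> lock(m_); released_ = true; }
        cv_.notify_all();
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<T> q_;
    bool released_ = false;
};
```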

 

before:

before.png

after:

after.png

Cluster Size as a Wired Input:

 

  • Easier to see
  • More explicit
  • Nearly impossible to forget to set it (if it were a required input).

 Cluster Size.gif

It would be useful to have a node that could be fed any wire and return the size (in bytes) of the data on that wire. In other words, the node would return the number of bytes that the wire occupies in memory.

 

1 (edited).png

For example, the node would return a value of:

  • 1 byte when fed a U8 wire
  • 2 bytes when fed a U16
  • 4 bytes when fed an I32
  • 8 bytes when fed a DBL
  • 800 bytes when fed a 1D array that contains 100 DBL elements
  • 9 bytes when fed a cluster that contains a DBL and a U8
  • 9 bytes when fed an object that contains a DBL and a U8
  • 18 bytes when fed an object that contains two other objects that occupy 9 bytes each
  • and so on

Notes

  • The node would enhance LabVIEW programmers' ability to monitor and audit memory usage.
  • The node may serve as an additional tool to detect memory leaks (by repeatedly calling it on the same wire and checking whether the size is going up).
  • The node would simply be interesting to programmers who care about performance, and would enable them to learn more about LabVIEW internals.
  • The node would be especially useful for querying the size of complex data structures, such as objects that contain other objects that themselves contain objects, or clusters that contain arrays, or arrays that contain clusters, or objects that contain DVRs.
  • I would be happy if the node had a second input named "Mode" (or similar). This input may be a typedef enum with items named "Shallow Measurement" and "Deep Measurement" (or similar). This input could be required, recommended, or optional.
    • When "Shallow Measurement" mode is selected, the node would return the size of all the by-value data fields in the main input wire, but would not add up the size of data referenced by DVRs or other references. For example, a wire that contains a cluster that contains a DBL, a U8, and a DVR would return perhaps 13 bytes (8 + 1 + presumably 4 bytes for the DVR reference itself). It would not add to the result the size of the data referenced by the DVR.
    • When "Deep Measurement" is selected, the node would recursively scan all data structures, including DVRs and other references.
  • In both "Shallow Measurement" and "Deep Measurement" modes there should be no limit to the scanning depth. In other words, if a cluster contains a cluster that contains a cluster and so on, they should all be measured regardless of the nesting depth. Similarly, if "Deep Measurement" is selected, and a DVR contains a DVR that contains a DVR and so on, the data behind all these DVRs should be added to the total.
  • When in "Deep Measurement" mode and fed a Queue reference wire, the node could perhaps return the size of all the data in the queue. In other words, the size of all the elements present in the queue.
  • Perhaps the LabVIEW compiler already uses a "Get Data Size" function internally? If such a function already exists, perhaps it would be relatively straightforward to expose it as a node in the palettes?
  • Perhaps the best location for this node would be in the Programming >> Application Control >> Memory Control palette.
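As a rough text-form sketch of the two modes (in C++ rather than G; the type names and the 4-byte reference handle are my assumptions, not LabVIEW internals), the measurement is just a recursive walk that either stops at a reference or follows it:

```cpp
// Rough sketch (not LabVIEW internals): compute the flat size of a nested value,
// where "shallow" counts a reference as just its handle and "deep" follows it.
// All type names and the 4-byte handle size are hypothetical.
#include <cstddef>
#include <memory>
#include <vector>

enum class Kind { U8, Dbl, Array, Cluster, Dvr };

struct Value {
    Kind kind;
    std::vector<Value> elements;     // used for Array and Cluster
    std::shared_ptr<Value> dvr;      // stand-in for the data behind a DVR
};

enum class Mode { Shallow, Deep };

std::size_t GetDataSize(const Value& v, Mode mode) {
    switch (v.kind) {
        case Kind::U8:  return 1;    // 1 byte for a U8
        case Kind::Dbl: return 8;    // 8 bytes for a DBL
        case Kind::Array:
        case Kind::Cluster: {        // sum of all contained elements, any nesting depth
            std::size_t total = 0;
            for (const auto& e : v.elements) total += GetDataSize(e, mode);
            return total;
        }
        case Kind::Dvr: {
            std::size_t size = 4;    // presumed size of the reference handle itself
            if (mode == Mode::Deep && v.dvr) size += GetDataSize(*v.dvr, mode);
            return size;
        }
    }
    return 0;
}
```

With these rules, a cluster of a DBL and a U8 comes out to 9 bytes, and in "Shallow Measurement" mode a DVR adds only its presumed 4-byte handle, matching the examples above.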

2.png

Thanks!

This idea came from customer Jason Willis during an NIWeek 2012 brainstorm session with LV developers. To me, it seemed like a good idea, so I figured I would post it to the community to flesh it out and see what kudos it gets.

 

When you have a reentrant VI, you have the one real VI and many clone VIs. Debugging the clones is hard. One way to make it easier might be to make the probes behave like the breakpoints do.

 

When you put a breakpoint on an individual clone, only that clone gets the breakpoint. But if you put a breakpoint on the real VI, all the clones get that breakpoint. That gives you a way to stop at a point of execution regardless of which clone gets pulled from the clone pool.

 

We could make probes do the same: if you put a probe on a real VI, any time the block diagram of a clone gets opened, a probe would be added to its wires in the same locations as on the real VI. If you removed the probe from the real VI, all the duplicate probes on the clones would go away too. But if you added a probe to a clone, the other clones would not get a probe.

Searched briefly but couldn't find any ideas about this.

 

I know we have the ability to use #via_ignore comments to ignore specific tests for a specific VI, but I am looking at a different use case.

 

Here's the use case: I use DQMH. When you create a new DQMH module there is a lot of plumbing code that comes with it. It's standard stuff; very rarely do I have to open or edit it. Much of it is generated by scripting. It often fails tests, but I don't care. In addition to failing, it takes up test time, which slows my feedback loop. I would like a way to signal to VI Analyzer to skip these files. I know I can use the VIAN API to limit the files it checks, but I was thinking there had to be a better way.

 

Implementation Ideas - I had 2 main ideas

- Regex matching on VI names - with the regex patterns pulled out of a text file somewhere, à la gitignore (a rough sketch follows below). Many of the DQMH-generated VIs have standard names, so that is easy. The VIs generated for new events don't, but you could easily add a prefix/suffix or something that the regex would pick up.

- Using the Tag API to tag the VI. I like this because the scripting can just apply the tag, or apply it to the template the scripter uses. The downside is that it's somewhat hidden from the user, and if I later decide to edit one of these VIs I may want VIAN to stop ignoring it, and it's not immediately obvious how to do that.
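For the first idea, here is a rough sketch of how the pattern file could be applied (C++ for illustration; the ".vianignore" file name and the whole mechanism are hypothetical, not an existing VI Analyzer feature):

```cpp
// Rough sketch of the first idea: read regex patterns from a text file
// (one per line; ".vianignore" is a hypothetical name) and drop matching
// VIs from the list handed to the analyzer.
#include <fstream>
#include <regex>
#include <string>
#include <vector>

std::vector<std::string> FilterVIs(const std::vector<std::string>& viNames,
                                   const std::string& ignoreFile = ".vianignore") {
    std::vector<std::regex> patterns;
    std::ifstream in(ignoreFile);
    for (std::string line; std::getline(in, line);)
        if (!line.empty()) patterns.emplace_back(line);

    std::vector<std::string> keep;
    for (const auto& name : viNames) {
        bool ignored = false;
        for (const auto& p : patterns)
            if (std::regex_search(name, p)) { ignored = true; break; }
        if (!ignored) keep.push_back(name);   // only non-matching VIs get analyzed
    }
    return keep;
}
```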

 

Note:

I picked the Execution and Performance label because the idea didn't seem to fit any of the labels well. If this is the wrong label and you are an admin, please relabel it.

 

 

When programming large applications, you'll often have clusters carrying a lot of information.  If you hover over the cluster wire and observe Context Help, you might see something like this:

ContextHelpOriginal.PNG

 

The Context Help window above is rather large and doesn't necessarily make it any easier to see the structure or contents of the cluster. My proposed idea calls for the ability to expand or collapse the cluster contents within the Context Help window, like this:

ContextHelpNewIdea.png

 

What do you think?

 

I propose that Case Selectors should accept any type of reference, and the two cases generated are "Valid Ref" and "Invalid Ref". (This would be very similar to the current behavior of the Case Selector accepting errors with the two cases of "Error" and "No Error".)

 

The current behavior using "Not a Number/Path/Refnum" is very unintuitive. It requires the programmer to use Not Logic (i.e., do something if the reference is "not not valid").

 

ReferencesIntoCaseSelectors.png

 

 

On a For Loop that is configured to have parallel iterations, there is a nifty feature when using errors on a shift register.  This feature is explained in an article here.  It basically ensures that For Loops that run 0 times will preserve the incoming error, as if the tunnel were a shift register.  However, this feature also means that errors from iteration 0 won't be passed into iteration 1.  What is returned if the loop runs 0 times is just the incoming error.  But if it runs one or more iterations, the errors are all merged and the first error seen is returned.

 

Untitled.png

This idea is just to allow this already existing feature to work on For Loops that are not configured for parallelism.  When would this be useful?  Say I want to delete an array of files.  I want to attempt to delete all of the files; even if there is an error deleting some of them, I want to keep trying to delete the rest.  Initially you might write this:

 

Untitled2.png

But that has the clear problem that if the incoming array is empty, the incoming error will be cleared.  So we often do something like this:

Untitled3.png

And honestly this bit of code isn't that big of a deal, but the feature already exists.  However, it only works on For Loops that are configured to run iterations in parallel.  In this case I might be attempting to delete files in a specific order due to their placement on disk, and running in parallel isn't what I want.
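In text form, the tunnel behaviour I'm after looks roughly like this (a C++ sketch under my own assumptions; Error and DeleteFile here are stand-ins, not LabVIEW APIs):

```cpp
// Sketch of the desired "preserve and merge" tunnel behaviour: try every file,
// return the first error seen, and if the array is empty pass the incoming
// error through untouched.  Error and DeleteFile are stand-ins, not LabVIEW APIs.
#include <cerrno>
#include <cstdio>
#include <string>
#include <vector>

struct Error { bool status = false; int code = 0; std::string source; };

Error DeleteFile(const std::string& path, Error errorIn) {
    if (errorIn.status) return errorIn;   // usual error-in pass-through
    if (std::remove(path.c_str()) != 0) return {true, errno, "Delete: " + path};
    return {};
}

Error DeleteAll(const std::vector<std::string>& paths, Error incoming) {
    Error merged = incoming;   // zero iterations: the incoming error is preserved
    for (const auto& path : paths) {
        // each iteration sees the original incoming error, not the previous iteration's
        Error result = DeleteFile(path, incoming);
        if (!merged.status && result.status) merged = result;   // merge: first error wins
    }
    return merged;
}
```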

 

Since not everyone wants all errors in loops to act this way, it would need to be a right-click menu option to turn the tunnel into a normal one, a shift-registered one, or a preserve-and-merge one.

Untitled4.png

That is what I want.

Data Type Representation Palette

 

Can the options on the data type Representation palette be changed from grey to match the color of the numerical icons and terminals?

 

Thank You!

Melissa McQueen

LabVIEW could use a feature that's common in C++: the "final" specifier for a class override method.  This would allow a child class to override a method from a parent class (or interface) and then prevent its own child classes from overriding it further.  Currently, with large inheritance structures, it becomes difficult for developers to create child classes since so many of the methods can be overridden.  The final specifier would allow you to create intermediate classes that define certain override functionality that does not need to be further overridden, and only pass on the ability to override the methods that are important to child classes.
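For reference, this is what the corresponding C++ feature looks like:

```cpp
// The C++ feature this idea borrows from: marking an override "final" stops
// classes further down the hierarchy from overriding it again.
class Parent {
public:
    virtual void Process() {}            // overridable by any descendant
    virtual ~Parent() = default;
};

class Intermediate : public Parent {
public:
    void Process() override final {}     // children may call this, but not override it
};

class Child : public Intermediate {
public:
    // void Process() override {}        // compile error: Process is 'final' in Intermediate
};
```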

FInal Override.png

I think the Array Element Gap should be sizable. This would facilitate lining up FP arrays with other items on the FP, or simply as a mechanism to add more apparent delineation between elements.

The size should be set in the Properties box, not by dragging the element gap with the mouse - that would add too much "cursor noise".

A new Property Node for this feature would complete the idea.

 

GapSize.png

Using 2023 for the first time (skipped 2021 and 2022)

 

QuickDrop used to be quick.

It was very convenient.

 

Now it takes a second to load.

 

That's pretty inconvenient if you're trying to... do much of anything.

 

Best example: If you're trying to remove code (Quickdrop's Control-R), now instead of quickly removing the selected code, you get to deal with accidentally RUNNING your VI (Menu's Control-R)

 

Please make it Quick again

This idea was posted years ago and declined for lack of interest (it got 6 of the 7 necessary kudos, but I would have been the 7th!).  I would like to bring it back: I would like my application to have access to its own version number.  In fact, you can open the project programmatically and see some build properties, but not that one.  I could then grab the version from the build properties and set the default values in my FPGA code before compiling.

 

I was trying to make a pre-build VI that would look at the build properties and copy the version data into a control.  It can't be done.  This would be very useful for making sure that my RT system and my FPGA code have matching versions.

 

The same goes for an About box, or for version checking for compatibility.

 

The previous thread suggested a routine in FileVersion.llb, but that seems to be available on only a single platform; it's not useful for RT Linux or Mac.  The version is also not available until the executable is built, which does not work for FPGA.

 

At the moment my only recourse is to hand-copy the version from the build properties and then set it as a series of 4 integers on the FP.  (Then select them all and set their values to default, hence the other suggestion about right-click.)

It is time to put a dent in the floating point "problems" encountered by many in LV.  Due to the (not so?) well-known limitations of floating point representations, comparisons can often lead to surprising results.  I propose a new configuration for the comparison functions when floats are involved; call it "Compare Floats" or similar.  When selected, I suggest that Equals? becomes "Almost Equal?" and the icon changes to the approximately-equal sign.  EqualToZero could become AlmostEqualToZero, again with appropriate icon changes.  GreaterThanOrAlmostEqual, etc.

 

AlmostEqual.png

 

 

I do not think these need to be new functions on the palette, just a configuration option (Comparison Mode).  They should expose a couple of terminals for options so we can control what "close" means (number of significant figures, number of digits, absolute difference, etc.), with reasonable defaults so that in most cases we do not have to worry about it.  We get all of the ease and polymorphism that comes with the built-in functions.
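As a sketch of the kind of comparison such a configured node could perform under the hood (the tolerance terminals and the defaults shown are my assumptions, not a spec):

```cpp
// Sketch of an "Almost Equal?" comparison with both absolute and relative
// tolerances; the specific defaults here are just placeholders.
#include <algorithm>
#include <cmath>

bool AlmostEqual(double x, double y,
                 double absTol = 1e-12,   // handles comparisons near zero
                 double relTol = 1e-9) {  // roughly "equal to ~9 significant figures"
    double diff = std::fabs(x - y);
    if (diff <= absTol) return true;
    return diff <= relTol * std::max(std::fabs(x), std::fabs(y));
}

bool AlmostEqualToZero(double x, double absTol = 1e-12) {
    return std::fabs(x) <= absTol;
}

bool GreaterThanOrAlmostEqual(double x, double y) {
    return x > y || AlmostEqual(x, y);
}
```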

 

There are many ways to do this, and I won't be so bold as to specify which way to go.  I am confident that any reasonable method would be a vast improvement over the current method, which is to hope that you are never bitten by Equals?.

When I have an array of clusters and I want to locate the array index where a specific element of the cluster has a certain value, I first need to build an array of that element from the array of clusters and then search it to find the index I want:

 code example.jpg

 

It would be nice (and cleaner and likely faster) if I could wire the array of clusters into an Unbundle By Name function and select String[] to get the array of string elements to search.

In addition, if the cluster contained nested clusters, I could access them the same way, using the dot notation already supported.  For example, the unbundle would let me select 'cluster1.subcluster2.String[]' to access the sub-array of an element.

 

code example2.jpg 
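In text form, the proposed operation is just a predicate search over the array of clusters.  A rough C++ equivalent (the struct and field names are made up to mirror the screenshots):

```cpp
// Rough text equivalent of the proposal: find the index in an array of clusters
// where one element matches, without first building a separate array.
#include <algorithm>
#include <string>
#include <vector>

struct SubCluster { std::string str; };
struct MyCluster  { double number; std::string str; SubCluster sub; };

// Equivalent of searching on the top-level String element
long FindByString(const std::vector<MyCluster>& arr, const std::string& key) {
    auto it = std::find_if(arr.begin(), arr.end(),
                           [&](const MyCluster& c) { return c.str == key; });
    return it == arr.end() ? -1 : static_cast<long>(it - arr.begin());
}

// Equivalent of selecting 'cluster1.subcluster2.String[]' with dot notation
long FindBySubString(const std::vector<MyCluster>& arr, const std::string& key) {
    auto it = std::find_if(arr.begin(), arr.end(),
                           [&](const MyCluster& c) { return c.sub.str == key; });
    return it == arr.end() ? -1 : static_cast<long>(it - arr.begin());
}
```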

 

(This is different and less controversial than this related old idea.)

 

Arrays of timestamps only contain legal values and can even be sorted. We can use "search 1D array" to find a matching timestamp just fine.

 

As with DBLs, there might be a very close array value that is a few LSBs off, but well within the error of the experiment so it is sufficient to find the closest value. We can always test later if that value is close enough for what we need or if "not found" would be a better result.

 

If we have a sorted input array, the easiest solution is "Threshold 1D Array", which gives us a fractional index of a linearly interpolated value. For some unknown reason, timestamps are not allowed for the inputs, limiting the usefulness of the tool. One possible workaround is to convert to DBLs, with the disadvantage that quite a few bits are dropped (timestamp: 128 bits, DBL: 64 bits).
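The fractional-index math behind "Threshold 1D Array" only needs ordering, subtraction of neighbouring values, and division of the differences, all of which timestamps support (timestamp minus timestamp is a relative time in seconds). A simplified sketch, with timestamps modelled as DBL seconds for brevity and the real node's edge cases glossed over:

```cpp
// Simplified sketch of the fractional-index (threshold) search; timestamps are
// modelled as doubles here, and the edge-case handling of the real node is not
// reproduced exactly.
#include <cstddef>
#include <vector>

double ThresholdTimestamps(const std::vector<double>& sortedTimes, double threshold) {
    if (sortedTimes.empty()) return -1.0;
    if (threshold <= sortedTimes.front()) return 0.0;
    for (std::size_t i = 0; i + 1 < sortedTimes.size(); ++i) {
        if (threshold <= sortedTimes[i + 1]) {
            double span = sortedTimes[i + 1] - sortedTimes[i];
            double frac = (span == 0.0) ? 0.0 : (threshold - sortedTimes[i]) / span;
            return static_cast<double>(i) + frac;   // e.g. 2.25 = a quarter past index 2
        }
    }
    return static_cast<double>(sortedTimes.size() - 1);   // past the end of the array
}
```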

 

Similarly, "Interpolate 1D array" also does not allow timestamp inputs. In both cases, there is an obvious and unique result that is IMHO in no way confusing or controversial.

 

 

 

IDEA Summary: "Threshold 1D Array" and "Interpolate 1D Array" should work with timestamps.

 

 

 

The array "Startup/Always Included" carries these Vi's into pre/post build action VI's.

The order of the VIs in this array depends on the order in which they were added to or created in the project, not the order in which they appear in the project window itself.

Here's the project window:

GICSAGM_0-1718176051948.png

Here's the order within prebuild vi:

GICSAGM_1-1718176157393.png

 

The order is NOT the same. "startup" is first, but if you delete it from the project and then re-add it, it becomes last.

 

I suggest that the order in the pre/post-build VIs should be:

first: startup.vi

then: the Always Included VIs, in the same order they appear in the project window.

 

Also, the Always Included list should have a mechanism to change the order of the VIs in the list - be it up/down arrow buttons on the side that move the selected VI, or a similar command in a right-click drop-down menu.

 

In this way, any pre/post-build action that involves these VIs can be clearly defined and remain stable over the lifetime of the project, without the risk of operating on the wrong VI, and without having to edit the pre/post-build VI every time the Startup/Always Included list changes.

 

 

The Arduino Due is a 32 bit ARM based microcontroller board that is destined to be very popular. It would be great if we could programme it in LabVIEW. This product could leverage off the already available LabVIEW Embedded for ARM and the LabVIEW Microcontroller SDK.

 

The Arduino Due is currently in developer trials and is due out later this year. It is expected to be about $50 and is open hardware. The ARM chip is an Atmel SAM3X8E ARM Cortex M3 running at 84 MHz resulting in 100 MIPS of performance. By way of comparison, the current LabVIEW Embedded for ARM Tier 1 (out-of-the-box experience) boards have only 60 MIPS of processing power.

 

The Arduino brand has an enormous following and Google has selected the Arduino Due for their recently introduced (28 June 2012) Accessory Development Kit for Android mobile phones and tablets (the ADK2012).

 

(By the way, the currently-available LabVIEW Arduino toolkit does not target the Arduino (and couldn’t since the Arduino Uno uses only an 8 bit microcontroller). Instead there is fixed C code running on the Arduino to transfer peripheral information to the serial port and back. That is, none of the LabVIEW target code executes on the Arduino. This idea is for LabVIEW code developed on a desktop to be transferred and execute on the target Arduino Due.)

 

Wouldn’t it be great to programme the Arduino Due in LabVIEW?

It would help a lot if we had an option called "Pause on First Error", so that the VI pauses at the node/primitive that generates the error and highlights that node, letting the user decide whether to continue or abort.

I would like to be able to create LabVIEW executables that don't require the run-time engine. Perhaps there could be a palette of basic functions that can be compiled without the run-time engine, and an option in the application builder for that. I routinely get executables from programmers that don't require a runtime installation; I just put them on my desktop and they run. It would be nice if, when I get a request, I could create, build, and send an exe in an email without worrying about run-time engine versions, transferring large installer files to them, etc.