LabVIEW Idea Exchange

When a control's name contains 'in', creating an indicator from it should automatically swap 'in' for 'out' in the indicator's name. Currently, the indicator name just gets a '2' appended. This could be expanded to cover control names that also include a default value, with the capitalization matched, too:

 

name indicators.PNG
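For what it's worth, the rename rule being proposed is simple to state. Here is a minimal sketch in Python (LabVIEW itself is graphical, so this is only an illustration; the helper name and the exact capitalization handling are my own assumptions):

    import re

    def indicator_name(control_name):
        """Swap a trailing 'in' for 'out', preserving capitalization;
        otherwise fall back to the current behavior of appending '2'."""
        match = re.search(r'\b(in|In|IN)\s*$', control_name)
        if match:
            out = {'in': 'out', 'In': 'Out', 'IN': 'OUT'}[match.group(1)]
            return control_name[:match.start()] + out
        return control_name + ' 2'

    # indicator_name('error in') -> 'error out'
    # indicator_name('Array In') -> 'Array Out'
    # indicator_name('gain')     -> 'gain 2'   (no trailing word 'in')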

Tab controls can be scaled to fit a pane, and that's great.

 

The problem is when you have objects inside a tab control. For instance: a sub-panel container or a multicolumn listbox.

 

Unfortunately, these items can only be set to fit/scale to the pane, not to the bounding area of the tab control.

 

I propose allowing an object to be set to scale with a tab control's page, while the tab control itself scales with a pane.  Embedding a sub-panel container into a tab's page then allows a great amount of configuration, since the dynamically loaded VI can handle scaling on its own.  (Hint: you can then have splitters inside a tab control.)

 

right click option.jpg

If a case structure is used with an enum or numeric selector type, LabVIEW automatically converts value lists to value ranges. On the one hand this is nice because it keeps the selector label small, but on the other hand it can lead to program-flow errors that are very hard to find.

But let's look at the pictures first:


left: the value list I entered into the selector field;          right: the range that LabVIEW creates automatically:
 
case_list.jpg          case_range.jpg


Here is the complete piece of code showing that there are four values in the enum constant and that each value is handled (no default case required):
case_structure_with_enum_input.jpg 

 

 

What happens if you ...
a) ... extend the enum between "case 3" and "case 4"? The new value(s) will be handled in the case ""case 2" .. "case 4"".
Do you always want the new value handled in that case? I don't!
b) ... add a new value to the enum before "case 2" or after "case 4"? The VI can no longer be executed, because the value is not handled in any case of the case structure.
Now you have to adjust the code => more work, but you can be sure the code is doing the right thing.
c) ... have a default case and add values to the enum as described in a) and b)? The code stays executable both times, but only in b) is the new value handled in the default case.
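To restate the hazard outside of G, here is a rough Python analogy of my own (not LabVIEW code): dispatching on an explicit value list fails loudly when the enum grows, while dispatching on a range silently absorbs inserted values:

    from enum import IntEnum

    class Mode(IntEnum):
        CASE_1 = 0
        CASE_2 = 1
        CASE_3 = 2   # inserting CASE_3B = 3 here shifts CASE_4 to 4
        CASE_4 = 3

    def handle_as_list(mode):
        # Explicit list: an inserted value matches nothing and fails
        # loudly, like the broken VI in scenario b).
        if mode in (Mode.CASE_2, Mode.CASE_3, Mode.CASE_4):
            return 'special'
        raise ValueError('unhandled: %s' % mode)

    def handle_as_range(mode):
        # Range: a value inserted between CASE_2 and CASE_4 is silently
        # absorbed, like scenario a).
        if Mode.CASE_2 <= mode <= Mode.CASE_4:
            return 'special'
        raise ValueError('unhandled: %s' % mode)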

Usually I want to decide how a value is handled, meaning I want to extend or change the case structure myself, and I don't want to be forced by LabVIEW to take care of a special ordering of values inside my enum typedefs.

Because of this I suggest adding a new option to LabVIEW's block diagram configuration: "automatically change case structure value lists to value ranges: never / ask / always".
The default for this option should be "ask". In a situation like the one shown in the first image, a dialog like this could appear:
 
case_ask.jpg
If the user selects "Yes", the code changes to the range shown in image 2. If they select "No", the case structure keeps the values as entered (image 1).

In LabVIEW 7.1 it was possible to select a single letter and format it (bold, underline) without affecting the rest of the letters.

 

Since LabVIEW 8.x (verified in 8.5) it is no longer possible to do so.  This was discovered while coding in LabVIEW 2009 SP1.

See images below:

 

 

In the above images, only the 'S' in Select was bold and underlined, whereas trying to do the same in more recent versions of LabVIEW causes the entire text to be formatted.  Why remove a feature that worked well?

 

Please bring back that feature.

 

Thanks,

 

RayR

I searched for "reentrant" in this Idea Exchange and didn't see anything on it, so here goes. I would like some sort of indicator showing that a VI is reentrant. Right now, as far as I know, you have to go into the VI's properties and check. It would be nice if there were either something on the icon or a small label that would show that the VI is reentrant. Or maybe a slight "halo" around the VI, something to that effect. Any other ideas on how to do this are welcome.

 

 

 


I16 is currently the only output representation of this function, and it is particularly unhandy; I typically end up typecasting it 90% of the time. I propose that the output representation of this primitive be changeable via the right-click menu.

 

BoolOutputRepresentation.png

As mentioned here, "Event Sources can be time consuming in finding when you have lots of FP objects."  I'd like a front-panel node right-click menu option, on the terminal (or on the node itself), to Find Event Case where applicable (an Event Structure must exist).

(This is different from, and less controversial than, this related old idea.)

 

Arrays of timestamps contain only legal values and can even be sorted. We can use "Search 1D Array" to find a matching timestamp just fine.

 

As with DBLs, there might be a very close array value that is a few LSBs off but well within the error of the experiment, so it is sufficient to find the closest value. We can always test later whether that value is close enough for what we need or whether "not found" would be a better result.

 

If we have a sorted input array, the easiest solution is "Threshold 1D Array", which gives us a fractional index of a linearly interpolated value. For some unknown reason, timestamps are not allowed for its inputs, limiting the usefulness of the tool. One possible workaround is to convert to DBLs, with the disadvantage that quite a few bits are dropped (timestamp: 128 bits, DBL: 64 bits).

 

Similarly, "Interpolate 1D array" also does not allow timestamp inputs. In both cases, there is an obvious and unique result that is IMHO in no way confusing or controversial.
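For reference, the fractional-index behavior being requested is well defined. A rough Python sketch, treating timestamps as plain numbers (the function name and edge handling are my assumptions, not LabVIEW's exact semantics):

    from bisect import bisect_left

    def threshold_1d(sorted_vals, threshold):
        """Fractional index where `threshold` falls in a non-decreasing
        array, linearly interpolated between neighboring elements."""
        i = bisect_left(sorted_vals, threshold)
        if i == 0:
            return 0.0
        if i == len(sorted_vals):
            return float(len(sorted_vals) - 1)   # clamp past the end
        lo, hi = sorted_vals[i - 1], sorted_vals[i]
        return (i - 1) + (threshold - lo) / (hi - lo)

    # threshold_1d([0.0, 10.0, 20.0], 15.0) -> 1.5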

 

 

 

IDEA Summary: "Threshold 1D Array" and "Interpolate 1D Array" should work with timestamps.

 

 

 

The One Button Dialog is a great tool when debugging code. It's very convenient to wire a string to the dialog to probe data when regular debugging techniques are not available. There are times when it would be handy to wire a non-string data type and not have to convert a number or a Boolean to a string first. Formatting should be limited (for simplicity), such that data types like arrays and clusters would still need to be handled by the user.

 

onebutton.jpg
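As an illustration of the limited formatting proposed above, here is a hypothetical Python sketch of the conversion the node might do internally for scalar inputs (the helper name is mine):

    def to_dialog_text(value):
        """Format scalars only; arrays and clusters stay the caller's
        job, per the proposal above."""
        if isinstance(value, bool):   # check bool before int: bool is an int subtype
            return 'TRUE' if value else 'FALSE'
        if isinstance(value, (int, float, str)):
            return str(value)
        raise TypeError('arrays and clusters must be formatted by the user')

    # to_dialog_text(3.14) -> '3.14'
    # to_dialog_text(True) -> 'TRUE'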

There is a construct I am quite fond of in pointer-friendly languages: using iterator math to implement circular buffers of arbitrary data types.  They are a little slower to use than straight arrays, but they provide a nice syntax for fixed-size buffers and are helpful in cases where you will be prepending and appending elements.

 

I am pretty certain that queues are implemented as circular buffers under the hood, so much of the infrastructure is already in place; this is mostly a matter of adding a new API.  Added bonus: an explicit circular buffer can be synchronous, unlike the queue, so you can, for example, use it in subroutine VIs.

 

It should be easy to convert 1D arrays to/from circular buffers.  Array->CB is basically free, since the elements are already in order in memory.  CB->Array requires two block copies (most of the time).  This can be strategically managed, much like Reverse or Transpose operations.
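For anyone unfamiliar with the construct, here is a minimal Python sketch of the iterator math involved (a hand-rolled ring buffer; in practice Python's collections.deque with maxlen does the same job):

    class CircularBuffer:
        """Fixed-size ring buffer; `head` is the index of the logical
        first element."""
        def __init__(self, size):
            self.data = [None] * size
            self.head = 0
            self.count = 0

        def append(self, x):
            """Add at the logical end, overwriting the oldest when full."""
            tail = (self.head + self.count) % len(self.data)
            self.data[tail] = x
            if self.count < len(self.data):
                self.count += 1
            else:
                self.head = (self.head + 1) % len(self.data)

        def prepend(self, x):
            """Add at the logical front."""
            self.head = (self.head - 1) % len(self.data)
            self.data[self.head] = x
            self.count = min(self.count + 1, len(self.data))

        def to_array(self):
            """The 'two block copies' mentioned above: tail run + head run."""
            end = self.head + self.count
            if end <= len(self.data):
                return self.data[self.head:end]
            return self.data[self.head:] + self.data[:end % len(self.data)]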

 

Use cases:

 

You can implement most of  the following two ideas naturally:

http://forums.ni.com/t5/LabVIEW-Idea-Exchange/Looping-Input-Tunnels/idi-p/2020406

http://forums.ni.com/t5/LabVIEW-Idea-Exchange/New-modes-on-auto-indexed-input-array-tunnels-in-loops/idi-p/2263706

 

Circular buffers would auto-index and cycle the elements and not participate in setting 'N'.

 

You can do 95+% of what I wanted to do with negative indexing:

http://forums.ni.com/t5/LabVIEW-Idea-Exchange/Negative-Values-in-Index-Array-or-Array-Subset/idi-p/960863

 

A lot of the classic divide-and-conquer algorithms become tractable in LV.  You can already use queues to implement your own stack and outperform native recursion.  A CB implementation of the stack would be amenable to subroutine priority and give a nice performance kick.  I have done it by hand for a few datatypes, and the beauty and simplicity of the recursive solution get buried in the implementation of the stack.  A drop-in node or two would give you a cleaner look and high-octane performance.

 

Finally, perhaps the most practical reason yet:  simple XY Charts.

 

As for appearance, I'd suggest a modified wire, like the matrix data type.  Most if not all Array primitives should probably accept the CB.  A few new nodes are needed to get/set the buffer size and number of elements and to do the conversions to/from 1D arrays. The control/indicator could have some superpowers: setting the first element, wraparound scrolling (the first element should be highlighted).

What bugs me about the VI Profiling Tool is that it is not intuitive.  The information it provides is really useful; however, it's so hidden and difficult to interpret that few people actually know where it is or how to use it.  Let's say you are simply acquiring data using DAQmx and writing that data to a file, as below:

 

Block Diagram of Inefficient DAQmx and File I/O

 

You might want to find out how to make this more efficient, but the only way you know of is to insert Tick Count VIs and wrap your wires through sequence structures to do it.  It's annoying, and there are other ideas from JackDunaway (here) and JohnMc19 (here) which aim to simplify the use of those VIs.

 

But why re-code your application when the VI Profiler can do it for you?  In addition, the VI Profiler provides more timing information (longest, shortest, average, total, etc.) as well as the number of runs and memory allocation data.
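The same trade-off exists in text languages, for comparison: manual instrumentation (the Tick Count pattern) versus letting a profiler collect everything at once. A quick Python analogy (the function here is just a stand-in for the DAQmx read and file write above):

    import cProfile
    import time

    def acquire_and_log():
        pass   # stand-in for the DAQmx read and file write

    # Manual approach: re-code with timers around the section of interest.
    t0 = time.perf_counter()
    acquire_and_log()
    print('elapsed: %.6f s' % (time.perf_counter() - t0))

    # Profiler approach: no re-coding, and you get call counts, totals,
    # and per-call times for every function in one report.
    cProfile.run('acquire_and_log()')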

 

Good news: VI Profiler makes getting the data easy. 

Bad news: VI Profiler makes using the data difficult.

 

Why Using the Profiler is Difficult

 

First, you need to know it exists among a number of bland and condensed gray menus (Tools>>Profile>>Performance and Memory).

 

Next, you have to coordinate starting the profiler with starting your VI (start the Profiler, then start your VI, then stop your VI, then stop the Profiler).

 

Finally, you have to dig through a TON of VIs to find the ones that are relevant (I assume this is because, for polymorphic VIs, all of the instances are loaded into memory, even the dozens that aren't currently being used).

 

When you find the VI you wish to examine, it will look something like this:

 

VI Profiler Currently

 

Have fun sorting through that!  When I finally find a VI that's hogging memory or speed, I'd expect to click on it to navigate to that VI.  NOPE!  All the VI Profiler does is make the line bold.  Not particularly easy to use...

 

I can't say if it's possible to get rid of VIs that aren't being used, or to make the menu option more visible to the user, but I do have an idea or two for how to make this information easier to understand in LabVIEW.

 

So here's what I suggest:

Add a couple of checkboxes to the top of the VI Profiler that let you view its data in relation to your LabVIEW VI.  Notice the extra checkboxes at the top of this image.

 

VI Profiler With Color Shading

 

One checkbox allows you to color the column you wish to highlight in your LabVIEW code.  The other checkbox inserts a text comment containing the highlighted data straight into your LabVIEW code (right next to the subVIs):

 

Shading SubVIs According to Relative Execution Speeds

From the above picture, you can clearly see that the Write to Spreadsheet File VI is the slowest to execute.  Next in line are the Start DAQmx Task VI and then the Stop DAQmx Task VI.  So if a developer wanted to find out how to make his loop run faster (and therefore increase the rate at which data is read from his PC's RAM), he would know that the VIs shaded more red are the ones to focus on first.

 

Also, if a user wants to highlight the memory usage, he could select a memory column from the VI Profiler.

 

Highlight Average Memory Usage Per VI 

 

Then the LabVIEW block diagram would look like this:

 

Block Diagram Shaded in Blue for Average Memory Usage

In this case, if a developer wants to find out how he can optimize his code for memory usage, he knows where to start.

 

Side-note: I think selecting multiple colors at a time (one for each column of data you wish to highlight) would be cool, but that would start to get messy on the block diagram.

 

Other data, like the number of runs, could highlight which sections of code are running more often than others.

 

If we integrate the VI profiler more effectively into LabVIEW, there are a lot of benefits:

 

1. Re-coding to find timing specs won't be necessary for subVIs.

2. Monitoring memory allocations becomes much easier (some users don't know it's possible with LabVIEW).

3. When there's a problem, it's easier to understand which subVIs are slowing down code or hogging memory.

4. Developers can continue code development WHILE staying wary of inefficiencies.

5. A more integrated development environment "feel" for new customers and experienced G-coders.

 

Please let me know what you think!

Hello,

 

I would like to suggest that LabVIEW support HDF5 as a data format, for file storage and data transfer.  I would also suggest that the support include maintenance to keep pace with new and future versions of HDF5.

 

Thank you.

I like to have my project window open, nice and skinny, on the left of my screen.  I have some extra stuff installed in my environment, which means more buttons, and that means I can't see all the buttons unless I expand the window out a lot:

 

Good.jpg

 

I'd prefer to be able to have more than just one row of buttons:

 

Better.jpg

How about a right-click option to hide the datatype text in a static reference? Since we have the datatype color, it is a bit redundant anyway. Here's an idea:

 

small_refs.GIF

I did a search for "VI Hierarchy" and didn't turn up this idea.  I sure hope it isn't a repeat. 

 

If you have a polymorphic VI in your VI hierarchy, it seems like every single polymorphic instance of that VI shows up when you view the hierarchy.  This doesn't help me at all, and I don't know why it works this way.  I should only see the specific VIs I am using.

 

After dropping just one function (DAQmx Read [Analog DBL 1Chan 1Samp])

daqmx read.png

 

Now check out my VI hierarchy!

VI Hierarchy Gone Berserk after One DAQmx Function Dropped.png

 

It's gone berserk!  It is showing me every single flavor of DAQmx Read!  Try this for your own amusement.

 

Why don't I see only one DAQmx Read function in this hierarchy?  Or maybe someone can shed some light on why it SHOULD work this way.  I definitely think it SHOULDN'T.  The block diagram I posted above should not have a hierarchy like this!


Right-clicking a For/While Loop index would allow the user to select the array dimension.  This could then be displayed on the VI as shown.

 

indexing - old.jpg

indexing - new2.jpg

 

 

(Note that this idea has already been proposed and auto-declined. So I'm trying again, this time with a different UX, and pictures!)

 

I've got some code on my diagram:
1.png

 

I need to wrap the code in a case structure, so I do:
2.png

 

Then I connect a Boolean wire to the selector terminal and go on my merry wiring way. Unfortunately, I forgot to consider the fact that I need this code to run in the FALSE case, not the TRUE case. But since nothing is broken in my code, I don't realize my mistake until I start running things. I've made this mistake so many times over the years (the most recent being tonight), that I've decided to propose a solution.

 

There are plenty of times that I want the wrapped code to be in the TRUE case. There are also plenty of times I want the wrapped code to be in the FALSE case. With no obvious default that makes sense most of the time, here's what I propose:

 

If you interactively drop a case structure by dragging a rectangle around *existing* code, we float a button over where you let go of the mouse and give you a chance to make the visible frame the FALSE case instead of the TRUE case:

3.png

(I suck at Microsoft Paint; I'm sure somebody can come up with a better-looking button or glyph.)

 

If you click that button, then the case structure turns to the FALSE case. If you do *anything else*, the button goes away and the case stays TRUE.

 

With this proposed change, any time I wrap existing diagram code with a case structure, I'll be forced to think about whether the case needs to be TRUE or FALSE. And I'm given an easy out if it's supposed to be the TRUE case.

The title basically says it all, but I'll elaborate.  With increasing monitor resolutions, a 16x16 glyph on a listbox doesn't work very well; on a 4K monitor it is awfully tiny.  This idea is to support larger glyphs in listboxes, multicolumn listboxes, and trees.  Glyphs are used in several places, but one favorite of mine is item selection with checkboxes, example here.  Allowing these glyphs to grow with the row height would make them appear more cohesive.  There is a thread discussing this topic, and a workaround involving an array of picture rings placed over the listbox control.  Here is a demo from that thread:

 

Untitled.png

This workaround is fine for simple things but doesn't scale well, and doesn't support trees easily.  I, for instance, want to have two trees where a user can drag and drop from one to the other, with the larger glyphs coming along with the drop.  Having to handle where the user dropped, then dynamically building the glyphs to overlay on top of the tree, with indentation, and hiding them when a tree branch is collapsed, is a major pain.  Please, NI, add a feature allowing for larger glyphs, and I would be so happy.

When it comes to using a queue, there is a fairly common design pattern used by NI examples: a producer/consumer loop where the consumer calls the dequeue function with a timeout of -1.  This means the function will wait forever until an element comes in.  But a neat feature of this function is that it also returns when the queue reference becomes invalid, which can happen if the reference is closed or if the VI that created it stops running.

 

This idea is to have similar functionality for user events.  I have a common publisher/subscriber design where a user event is created and multiple loops register for it.  If for some reason the main VI stops, that reference becomes invalid, but my other asynchronous loops continue running.  In the past I've added a timeout case where I check whether the user event is still valid every 5 seconds or so, and if it isn't, I go through my shutdown process.

 

What I am thinking is that there could be another event to register for, generated when the registered user event becomes invalid, so that polling for the validity of the user event isn't necessary.
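For comparison, here is a rough Python sketch of today's polling workaround versus the proposed wake-on-invalidation behavior (the sentinel stands in for "the user event reference went invalid"; all names are mine):

    import queue

    SHUTDOWN = object()   # sentinel simulating an invalidated reference

    def subscriber_polling(events, still_valid):
        """Today's workaround: wake every 5 s just to check validity."""
        while True:
            try:
                msg = events.get(timeout=5.0)
            except queue.Empty:
                if not still_valid():
                    break            # reference died; shut down
                continue
            handle(msg)

    def subscriber_notified(events):
        """Proposed: the wait itself returns when the source dies."""
        while True:
            msg = events.get()       # blocks forever, like timeout = -1
            if msg is SHUTDOWN:
                break
            handle(msg)

    def handle(msg):
        print('got', msg)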

 

before:

before.png

after:

after.png

This is something a few power users have asked me about. There's no Instrument Driver or VIPM Idea Exchange, so I thought I would post it here.


What if VIPM could manage Instrument Drivers from IDNet?
There are a few key benefits this would offer us...

  • download IDNet drivers directly from VIPM 
  • track which version of a driver you are using for different projects and revert when necessary 
  • wrap up ID dependencies in a VIPC file for use at a customer site
Install Other Version.png
Get Info.png