LabVIEW Idea Exchange

The idea of overloading methods is widespread in other languages. LabVIEW already supports this in a way with polymorphic VIs.

 

The code snippet below has two classes, parent and child.

 

The methods are described in the image below.

 

The underlying idea is that the parent can define one method (poly.vi) that can handle multiple input types (polymorphic) but does not have to implement the functionality for all of them (it just needs to know about them), and the child class can override poly.vi for whichever part of the interface it wants to implement.

 

overloading methods.png

 

 

Even though the code above is correct, LabVIEW gives an error.

 

"Dynamic dispatch VIs cannot be members of polymorphic VIs."

 

I don't see why not, since dynamic dispatching will still have an entry point for both classes (poly.vi), and the choice of which polymorphic VI instance to use is defined at design time.
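For comparison, this is roughly what the pattern looks like in a text-based language. The sketch below uses Python's `functools.singledispatchmethod` (Python 3.8+) as a stand-in for a polymorphic VI; the class and method bodies are made up for illustration:

```python
from functools import singledispatchmethod

class Parent:
    # The generic entry point "knows about" every input type but only
    # implements int; anything else is left unimplemented.
    @singledispatchmethod
    def poly(self, value):
        raise NotImplementedError(f"no handler for {type(value).__name__}")

    @poly.register
    def _(self, value: int):
        return f"Parent handles int {value}"

class Child(Parent):
    # The child overrides poly, adds the float overload, and delegates
    # everything else back to the parent's dispatcher.
    @singledispatchmethod
    def poly(self, value):
        return super().poly(value)

    @poly.register
    def _(self, value: float):
        return f"Child handles float {value}"
```

`Parent().poly(2.5)` raises `NotImplementedError`, while `Child().poly(2.5)` resolves to the child's float handler — which matches the intent above: the parent defines the polymorphic entry point without implementing every type, and the child fills in the overloads it cares about.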

 

 

Unlike regular case structures, which can swap diagrams between cases, the conditional case structure CANNOT. It would be nice if it could, to avoid pulling code out of one case and into another.

 

Be able to choose the "minus sign" for negative non-decimal numbers.

 

I'm only talking about the way an indicator displays a number.

 

Like this:

 

demo.png

 

minus.png

When a subVI is created on the block diagram of a VI, it takes on the class or library of the calling VI.

 

When a control is created on the block diagram or front panel of a VI, it doesn't take on any ownership at all. It is just placed in the top level hierarchy in the project.

In the illustration below, this isn't a big deal because the project is empty. It becomes a bigger problem when there are dozens of classes and folders to sift through to find the class that was meant to own the control.

 

Current.bmp

Better.png

 

I often have the case that I have to display an array of clusters. Showing all the captions of the cluster elements takes a lot of space on the front panel, because they are repeated on each array element. A better solution would be to show the captions or labels only on the top visible element.

IMAG000.jpg

IMAG001.jpg

It is possible to import an EPICS .db file into LabVIEW in order to use the Process Variables (PVs) within LabVIEW.

(see http://www.ni.com/white-paper/14144/en/ )

 

But, all records are imported as separate LabVIEW items.

 

Each PV has to be separately added as a 'bound shared variable' for inclusion in a VI.

Then each PV needs to be separately connected to a control or indicator, unless some means of iterating over the collection is implemented.

 

This is all fine if there are 3 or 4 PVs (as is the case for the example app).

 

My current application is quite modest in scope - there are 15 PVs for each of 6 devices, so 90 PVs altogether. It is barely feasible to follow this manual process for each of these - it would take hours and be very finger-trouble prone.

 

Many EPICS IOCs can use thousands, or even millions, of PVs.

 

I would suggest that the .db file import wizard process the PVs from each file into a cluster.

Or - possibly better - process PVs into an array of clusters.
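As a rough illustration of what the wizard could do, here is a hypothetical sketch (the .db fragment, regex, and field names are made up for this example) that groups the PVs from a .db file by device, the text equivalent of an array of clusters:

```python
import re
from collections import defaultdict

# Made-up fragment of an EPICS .db file, for illustration only.
DB_TEXT = '''
record(ai, "dev1:pressure") { field(EGU, "bar") }
record(ai, "dev1:temp")     { field(EGU, "C")   }
record(ai, "dev2:temp")     { field(EGU, "C")   }
'''

# Matches the record type and PV name in each record() declaration.
RECORD_RE = re.compile(r'record\(\s*(\w+)\s*,\s*"([^"]+)"\s*\)')

def pvs_by_device(db_text):
    """Group PVs by device prefix (the text before the first colon)."""
    groups = defaultdict(list)
    for rtype, name in RECORD_RE.findall(db_text):
        device = name.split(":", 1)[0]
        groups[device].append({"name": name, "type": rtype})
    return dict(groups)
```

For the fragment above this yields two groups — dev1 with two PVs and dev2 with one — so a program could iterate over devices and PVs instead of touching 90 items by hand.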

 

IMO, the current implementation just isn't scalable to 'real world' control system IOC use.

 

If NI wishes to provide LabVIEW integration with large-scale EPICS projects, I believe a better way of doing this needs to be found.

 

(Re-posted from https://forums.ni.com/t5/LabVIEW-Idea-Exchange/Enhanced-EPICS-Support/idc-p/3203 )

 

NI_ChannelLength is a handy property, written to each channel in a TDMS file, that can be read to tell you the number of samples in that channel without having to read all the samples and do an Array Size on them. Having this in a property is also useful for programs like DIAdem or the DataFinder toolkit, which index these properties.

 

This idea is to have an option to add a few more properties built into the TDMS write operation. It would be best if this were an option on TDMS Open, off by default.

 

I think adding NI_ChannelMinimum, NI_ChannelMaximum, and NI_ChannelAverage properties would be very helpful, so that this information is available without having to read every sample for every channel in every group. Again, the benefit is clear when using DIAdem or DataFinder and having this information quickly available.

 

Of course we can do this today if we don't mind reading every sample, performing the Min/Max/Average, and then writing the properties, but this can be a very time- and memory-intensive process for large files with lots of samples, channels, and groups. For channels whose data types aren't numeric, I'd say a constant can be used, like NaN (or 0 if the data type is not a double). I think this would be most useful for channels with a numeric, waveform, or timestamp data type.
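The bookkeeping the TDMS writer would need is cheap: keep running statistics as each chunk is written, then store them as properties at close. A minimal sketch of that accumulator (plain Python, not the real TDMS API):

```python
import math

class ChannelStats:
    """Running min/max/mean for one channel, updated as each chunk is
    written, so summary properties can be stored without re-reading."""

    def __init__(self):
        self.minimum = math.inf
        self.maximum = -math.inf
        self._total = 0.0
        self.length = 0

    def update(self, chunk):
        # Fold one written chunk into the running statistics.
        for x in chunk:
            self.minimum = min(self.minimum, x)
            self.maximum = max(self.maximum, x)
            self._total += x
        self.length += len(chunk)

    def properties(self):
        # NaN for an empty channel, as suggested for the no-data case.
        empty = self.length == 0
        return {
            "NI_ChannelLength": self.length,
            "NI_ChannelMinimum": math.nan if empty else self.minimum,
            "NI_ChannelMaximum": math.nan if empty else self.maximum,
            "NI_ChannelAverage": math.nan if empty else self._total / self.length,
        }
```

Each write updates the stats in O(chunk size), so the file never has to be read back just to compute the summary.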

I find myself wiring errors out of a loop using auto-indexing and then immediately using Merge Errors on the array of errors. With the addition of the loop terminals for concatenating arrays and conditional arrays, please add one for error wires that does the Merge Errors in one step.

 

For example, the current implementation:

 

Merge Errors.png

What I would like to have, which would do the same thing in one step:

 

Merge Errors New Terminal.png

 

 

Dear community and developers

 

I would like to suggest adding a simple context menu entry on property nodes and on invoke nodes.

 

Just add a menu entry on invoke nodes ("Change to property node") and

on property nodes an entry ("Change to invoke node").

 

I often have to make this change, and it would improve my workflow.

 

Gernot Hanel

IONICON Analytik Gesellschaft m.b.H.

www.ionicon.com

Big clusters... we've all needed to work with them.

 

Currently, to change from one cluster element to another in a bundle/unbundle, clicking on the element name brings up context menus with the hierarchical structure of the cluster... from the beginning. Say you want to move to the next element in the cluster at a certain level: you have to navigate all the way through the hierarchy. You could drag down and then rewire and delete, but that's still more effort than feels right.

 

messy-cluster.png

How about changing the appearance and functionality of (un)bundle by name so that each section of the cluster name is a clickable link? Clicking on it would let me browse the hierarchy at that level, so clicking on the last element (currently Port) would show me the following option:

 

tidy-cluster.png

 

There's only one option at this level at the moment, but if I were to do the same on the Laser Settings section, it would drop down a menu with All Elements, <redacted section>, Laser Marker Mode and Laser Settings.

Simple thing:
It would be nice to have context help with a monospaced font. The functionality of this help is currently very limited, but with a monospaced font we would be able to add, for example, simple character-based "arrays" and many other descriptive things of the kind usually found in .h files.

 

Below, a simple character-based array in Notepad++ and in the LabVIEW context help.

 

notepad.JPG

 

context.jpg

 

Regards,

Michał Bieńkowski

I am struggling with my Event Structure event list and the corresponding list of cases in the parallel consumer loop Case Structure.

Both currently have over 100 cases each, and finding one, or scrolling down to reach the latest one, has become painful due to the lack of a scrollbar in these lists.

For instance, here is the Event Structure list:

 

Screen Shot 2015-09-29 at 12.19.16.png

 

Same goes for the list of controls in a Local Variable (and other objects, I am sure).

There is no reason why such lists should not have a vertical scrollbar when the corresponding list for an Enum does have one:

 

Screen Shot 2015-09-29 at 13.33.30.png

 

Or is there?

 

Suggestion: All long pulldown lists should have a vertical scrollbar

99% of the time when I use a Diagram Disable Structure, I am disabling code with an error cluster wired through. I don't want to lose the errors coming in, just the single operation, so I manually wire the error cluster through the empty case each time.

 

I've talked to others about this in the past, and it would be nice for LabVIEW to be all-knowing and wire through all matching datatypes, but that would definitely lead to conflicts and mistakes. Error clusters, on the other hand, are simple and nearly always a single wire in and out.

Simply auto-wiring the error cluster input to the output would make the debugging process much easier.

 

Code with disabled operation:

Disable.PNG

It would be useful to have something like a referenced comment. You can place this comment in the block diagrams of several VIs of a project, and by editing one instance of this comment, it will change all instances at one time.

 

Example:

 

The comment describes the channel list of an application:

 

Comment:

AI_00: Torque [Nm]

AI_01: Pressure [bar]

AI_02: ValvePosition [%]

[...]

 

You place this comment in the DAQ VI. But it would be helpful to have the identical comment in the MeasFile VI and/or in the VI that combines the channel and scaling information into a 2D string array, so you can present all this information together (e.g. in a multicolumn listbox or something else).

 

When you later add new channels, it is annoying to edit all these comments one by one; something like a 'comment type def' would be a practical solution.

 which it is still not doing in LabVIEW 2015.

Example: I graph a temperature input, using auto-scale on Y. The end customer complains that the temperature suddenly started rising at the wrong time. I try (futilely) to explain that the rise was only 0.01 degrees, but the scale on the graph expands it to fill the screen. Then someone else comes in, and we repeat the conversation.

Auto scale chart.png

 

What I'd like is a feature to keep auto-scale, but set the minimum span.  For example, set the minimum span to 10 degrees:

Auto scale chart min.png

 

The story is similar for integer charts. Too often, a value sits at the bottom or the top of the chart, and all you easily see are vertical lines.

Auto scale chart bool.png

With an auto-scale-min-span of 1.1, it would look better:

Auto scale chart bool min.png

 

I use a lot of very generic programming, so I may not know in advance whether I should set the minimum span to 1.1 for Boolean values. For that case, it would be great to have a feature that makes the auto-scale go X% beyond the min and max values; here, 5% would give the results above.
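The two rules above (a guaranteed minimum span, plus an optional X% margin beyond the data) amount to a few lines of arithmetic. A sketch of what the scaling logic could compute (hypothetical function, not an existing LabVIEW property):

```python
def autoscale_limits(data_min, data_max, min_span=0.0, pad_fraction=0.0):
    """Auto-scale limits with an optional padding fraction and a
    guaranteed minimum span, centered on the data when widening."""
    # First pad by a fraction of the data span on each side.
    pad = (data_max - data_min) * pad_fraction
    lo, hi = data_min - pad, data_max + pad
    # Then widen symmetrically if the result is narrower than min_span.
    if hi - lo < min_span:
        center = (lo + hi) / 2.0
        lo, hi = center - min_span / 2.0, center + min_span / 2.0
    return lo, hi
```

With `min_span=10`, a 0.01-degree wiggle around 20 degrees is drawn on roughly a 15-to-25 scale instead of filling the screen.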

 

One more feature (one I've written programmatically, and it's a MAJOR pain):  set auto-scale to only scale up if the data is less than 50% of the current span, and scale down as needed (but over-shoot to anticipate more scaling).  It's irritating to watch a graph constantly scaling up and down, especially an XY graph.

 

And finally (I've also done this programmatically, and it would be tough to make automatic):  lock several chart scales together.  If an operator changes scale on one XY graph, the others change to match.

 

 

I want the graph to display a fixed scale unless values go outside the scale min or max, and then autoscale, but only in the direction in which the scale bounds were crossed.

Example:
Normally I want the graph to display an X scale of 0 to 10 to the user:
Scale 0 to 10 look good.PNG
If I set the same graph to autoscale, I get the following, which the user could interpret as values swinging all over the place, even though this could just be noise; I do not want to display this format to the user:
Autoscale Bad.PNG
So I want a solution that combines manual scale and autoscale, autoscaling only after a scale limit is exceeded. Assume a data point of 13 arrives, above the max scale range of 10; the graph would do a single autoscale, only in the direction above 10, changing the max to 13.
Desired Result.PNG
This would be a Graph Scales property, with the option disabled if Autoscale is selected.
Properties.png

I know I can use property nodes to do this programmatically in my program, but it is much more involved, having to constantly check whether values have gone outside the range and then issue a single autoscale.
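The directional rule is simple enough to state as code; a minimal sketch (hypothetical function, mirroring the proposed behavior):

```python
def expand_scale(scale_min, scale_max, value):
    """Keep the fixed scale until a value crosses a bound, then widen
    the scale only in the direction that was crossed."""
    if value > scale_max:
        scale_max = value       # widen upward only
    elif value < scale_min:
        scale_min = value       # widen downward only
    return scale_min, scale_max
```

A point of 13 against a 0-to-10 scale widens only the top, giving 0 to 13; in-range points leave the scale untouched, so noise never shrinks or shifts the display.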

An RT program can be run either from a host PC (what I call "interpreter mode") or as an exe in the startup directory on the RT controller. When running from the host PC (for debugging purposes), front panel property nodes execute properly, as you would expect. After building and transferring the RT app to the startup directory on the RT controller, the program errors out on the first occurrence of a front panel property node. The reason is obvious: a front panel is non-existent in an RT application, hence the front panel property nodes are rejected. Of note, no errors or warnings are generated during the RT app build operation.

 

Recommend that the application build simply ignore the front panel property nodes, as it ignores the front panel in general. This would allow the programmer to retain the same version of the source code for either mode of operation.

 

Thanks,

Bob

When selecting block diagram items from the "search results" screen, the resulting highlighted area on the block diagram always seems to be on the extreme edges of the screen. This requires a finite amount of time scanning all four corners of the screen for the item. Recommend that the "found" item always be positioned in the exact center of the screen to eliminate this issue.

Thanks,

Bob

When performing some scripting, I found it handy at times to create the block diagram objects, use a Cut Selection method on the selected objects, and then have an image of the objects in my clipboard. In the cases where I needed this, I also needed to delete the scripted objects, so this one method performed the delete and left me with an image of the scripted code.

 

When doing the same operations on the front panel I noticed there isn't a Cut Selection method on either the panel or pane classes and made a post about it.

 

http://forums.ni.com/t5/LabVIEW/Missing-Cut-Selection-Front-Panel-Method/m-p/3190560

 

This idea is to create the Cut Selection method on either the pane or panel class (or both, I don't really care), which would operate like the Copy Selection and Delete methods do today.