LabVIEW Idea Exchange


Instead of having nested case structures to implement if...else if...else if...else

I'd like to have an elseif structure where only one frame executes when one condition is TRUE.
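For comparison, this is the pattern that text-based languages express with an if/elif/else chain, where exactly one branch executes. A minimal Python sketch (the thresholds are made up purely for illustration):

```python
def classify(temperature):
    # Exactly one branch executes; later conditions are only
    # tested when all earlier ones were FALSE.
    if temperature < 0:
        return "freezing"
    elif temperature < 20:
        return "cold"
    elif temperature < 30:
        return "warm"
    else:
        return "hot"
```

The proposed structure would give LabVIEW diagrams the same flat shape instead of a staircase of nested case structures.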

Elseif structure
The built-in LabVIEW comparison and array sort primitives are inadequate for many applications involving clusters.  For example, the clusters may contain elements that

  • Cannot be compared accurately using the default method, such as case-preserved but case-insensitive strings.
  • Sort in the opposite direction (in a particular instance) from another member of the same cluster.
  • Should weigh more heavily in the ordering (in a particular instance) than another member, yet sit below that member in the cluster, and so carry less weight than they should.
  • Should not be considered in the comparison at all.

For example, consider the following cluster:

db-cluster.PNG

Now, suppose I want to sort an array of this cluster, but I am uninterested in the VendorCode or the Password, and I want the Server, Database, and User to be compared caselessly. The Sort 1-D Array primitive will not do this properly. The common pattern for overcoming this is something like the code below.

sort-pattern.PNG

This does the job, but it is not particularly efficient for large arrays.  I could code my own sort routine, but that is neither a good use of my time nor likely to be as efficient as a native implementation.

 

A similar argument can be made for simple comparison (lt, le, eq, etc.) between two clusters, although this is easily done with a sub-VI.

 

My proposal is to take an object-oriented approach and allow clusters to decide how they are to be compared.  This would involve something like attaching a VI to a cluster (typedef). This would allow the default comparison of two of these clusters to be determined by the provider of the cluster, rather than the writer of the code that later needs to compare the clusters.  I will leave it to LabVIEW designers how to associate the comparison code with the cluster, but giving a typedef a block diagram is one way that comes to mind.

 

Of course, different elements may need to be compared in different ways at different times. This leads to the thought that Sort 1-D Array ought to take an optional reference to a sorting VI to be used instead of whatever the default is. This idea was touched on in this thread but never thoroughly explored.  The reference would have to be to a VI that conformed to some expected connector pane, with well-defined outputs, like this:

compare 1.PNG

 

Strictly speaking, the x > y? output is not required here.  Another possibility is

compare 2.PNG

 

which simply outputs an integer whose sign determines the comparison results.  Clusters that cannot be strictly ordered would somehow have to be restricted to equal and not equal.

 

The advantage to wiring a reference to such a VI into the Sort 1-D Array primitive is obvious.  It is less obvious that there would be any utility to be gained from providing such an input to the lt, le, eq, etc. primitives, but consider that this would allow the specifics of the comparison to be specified at run-time much more easily than can presently be done.
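For reference, text-based languages solve this by letting the sort accept a comparison function, exactly as proposed here. A Python sketch of the database-cluster example above (the field names and sample values are assumed for illustration; only Server, Database, and User participate, compared caselessly, and the comparator returns a sign-valued integer like the second connector-pane option):

```python
from functools import cmp_to_key

def compare(x, y):
    # VendorCode and Password are deliberately ignored;
    # the remaining fields are compared case-insensitively.
    for field in ("Server", "Database", "User"):
        a, b = x[field].lower(), y[field].lower()
        if a != b:
            return -1 if a < b else 1
    return 0

rows = [
    {"Server": "Beta",  "Database": "prod", "User": "amy", "VendorCode": 7, "Password": "x"},
    {"Server": "alpha", "Database": "prod", "User": "Bob", "VendorCode": 3, "Password": "y"},
]
rows.sort(key=cmp_to_key(compare))  # "alpha" sorts before "Beta" caselessly
```

The sort routine never needs to know the cluster's contents; it only calls the supplied comparator, which is the essence of the idea.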

I searched on "polymorphic" and did not find this idea posted. 

 

I just learned over here that when you use a polymorphic VI, all flavors of that VI load into memory!  That's why a VI hierarchy gets so cluttered so fast when you use them.

 

In the object-oriented version of polymorphism, all possible polymorphic cases need to be coded and loaded into memory, since any of these possible cases could be called depending on the execution of the program.  In the LabVIEW-specific version of polymorphism, where a function has many flavors, perhaps due to a change in data type on one of the inputs, it is not usually the case that all of the different polymorphic members can execute at run time.  In fact, I believe it is usually the case that only ONE of the cases will ever be called or execute.

 

So, why are all of the other polymorphic members in memory?  I don't know.  I think they shouldn't be.  They seem to be eating RAM for no good purpose.

 

Load only the specifically called version of a polymorphic VI into memory.

If a Facade VI of an XControl registers for some dynamic events (whatever the source), a firing of one of these Events will NOT trigger actual activity (Facade VI activity) within the XControl.

 

If we register for a static Event (Mouse move on the FP for example) we DO get a trigger for the XControl (Facade VI becomes active).

 

The unusual situation arises that the dynamic events are registered but not serviced UNTIL a static event fires, after which all of the pending dynamic events are also dealt with.

 

Please make it possible for Dynamically registered Events within an XControl to "trigger" the XControl just as static events do.

 

Shane.

When sending data to an XControl terminal, the write returns immediately, regardless of how long the XControl takes to update its display.  When an XControl is used in a situation where individual data updates arrive very close together, the "Data Change" events within the XControl stack up and the XControl can lag significantly (multiple seconds) behind the ACTUAL data.

Think of a typical In-box on an overworked clerk's desk.  It just keeps getting higher and higher, and he's stuck dealing with "old" data.

 

When the loop feeding the XControl is stopped, the XControl continues updating even though it is no longer receiving any new data: it must still work through the backlog of old data.  This is extremely bad from a UI point of view.

 

This is different when using any of the in-built controls.  If a control takes 5ms to update, the loop sending to the terminal for that control will wait until the control is finished displaying.  As such, the control effectively limits the rate (5ms) of the calling loop to match its drawing speed.

 

XControls should do this also (perhaps automatically, perhaps optionally).

 

Discussed in forums HERE.

 

Shane.

Dear NI,

 

Please fix LabVIEW so that when you right-click on a wire on the block diagram the menu pops up immediately, not after 1 second. Also, while you are fixing that, it would be great if you could fix the speed it takes to open a control/indicator properties configuration dialogue.

 

As far as I can recall these were fine in LV 7, but sometime after that it all started to get a bit sluggish.

 

Thanks!

Currently, we have to use Unbundle By Name on the cluster and select an element to wire to the case selector.

 

1.png

 

It would be great if we could just wire the Cluster Directly and have a Right-Click Option at Case selector to select an element (one element only).

 

2.png

 

P.S. If this is a reasonable suggestion and gets enough kudos to attract the R&D team's attention for a feasibility study, then we would also ask for support for more logical operators, multiple elements, and/or compound conditions, e.g. (type == Array and # elements <= 2).

Only sometimes do I miss “if statement” support in LabVIEW.

 

 

Message Edited by Support on 07-16-2009 11:56 AM

If you make a dynamic call from a built application and it fails because the VI in question depends on a subVI that cannot be located in the built-application environment, the only way to figure out what went wrong is to rewrite your app so that it opens the front panel of the VI, and then click on its broken run button. There should be a way to get that error description without having to modify the application.

 

The real challenge, however, comes when you run into the same problem on a real-time target. There you cannot open the front panel, and you basically have to search in the dark to find a solution.

 

Feedback to the programmer's machine would be nice, but it should not work only when you have LabVIEW running. It should be possible to, e.g., set a switch in an INI file and then get a text log that describes, in full detail, what goes wrong with the dynamic calls.

It is fairly common to build LabVIEW applications that are useful to run as a service (e.g. monitoring software); however, to do so today you need to "cheat" with srvany and write and run batch files to get the service installed.

 

It would be great if we could build real services, and the installer would take care of the installation process. You could even have a nice template for services with an accompanying user interface client, with notification icon and everything.

 

LabVIEW everywhere...Not just on different targets, but in every part of the system. 🙂 

 

Why aren't different paths enough?

So that the developer can work on different versions of the program without ending up with a "mess" of VIs and controls.

I have faced this problem many times. Suppose I have a main VI open that contains a subVI (say sample.vi), and I then try to open another VI with the same name; the subVI in the main VI will get replaced by the second VI. LabVIEW should reference every VI by its complete path, not just by the VI name.

 

Currently, when you use the Reshape Array primitive, any new elements it creates are filled with the default value for the datatype.  Suggestion: add an optional terminal that defines the fill value.

 

CurrentReshapeArray.png
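To make the requested behavior concrete, here is how a fill-value reshape behaves in a text language; a minimal pure-Python sketch for the 1-D case (the function name is made up):

```python
def reshape_with_fill(arr, new_len, fill):
    # Truncate or grow the array; any newly created elements
    # take `fill` rather than the datatype's default value.
    out = arr[:new_len]
    out.extend([fill] * (new_len - len(out)))
    return out
```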

The convolution tools have polymorphic instances for 1D and 2D data.

 

The 2D instances have a very useful input to control the output size (full, size X, compact).

 

The 1D instances don't have this input (why?!). In the vast majority of my 1D convolution applications, I would prefer "size X" behavior for the output, making the output size identical to the input, no matter the size of the convolution kernel. If the 1D convolutions accepted this option, performance and in-placeness could be optimized compared to trimming manually afterwards.

 

altenbach_0-1702743177185.png

 

 

Suggestion:

All convolution instances need an "output size" input, not just the 2D versions.
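As a point of reference, NumPy's np.convolve already offers exactly this choice via its mode argument ('full', 'same', 'valid'). A pure-Python sketch of the requested "size X" behavior, which keeps the output the same length as the input regardless of kernel size:

```python
def convolve_same(x, kernel):
    # "Full" convolution has length len(x) + len(kernel) - 1;
    # "same" keeps the central len(x) samples so the output
    # matches the input size regardless of kernel length.
    n, m = len(x), len(kernel)
    full = [sum(x[i - j] * kernel[j] for j in range(m) if 0 <= i - j < n)
            for i in range(n + m - 1)]
    start = (m - 1) // 2
    return full[start:start + n]
```

Doing this inside the primitive (instead of trimming a full-size result afterwards) is where the performance and in-placeness gains would come from.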

For some reason, the conversion bullet for fixed point (FXP) does not accept array inputs, so if we have array data we currently need to wrap a FOR loop around it.

 

Suggestion: allow array inputs for "to FXP" to make it consistent with all other conversion bullets.

 

(see also this post for an applied example).

Often I find that I need to create an empty copy of an existing array type. This is essentially done by indexing the array and then building an empty array from the indexed element type; much array manipulation is done this way.  It would be nice if there were a single function that did this.

This code:

 Create New array by type.JPG

 

would be replaced by this code:

Create New array by type new.jpg

When I have an array of clusters and I want to locate the array index where a specific element of the cluster has a certain value, I need to first build an array from the Array of Clusters and then search that to find the Array element I want:

 code example.jpg

 

It would be nice (and cleaner and likely faster) if I could wire the array of clusters into an Unbundle By Name function and select String[] to get the array of string elements to search.

In addition, if the cluster contained nested clusters, I could access them the same way, using the dot notation already supported.  For example, the unbundle would let me select 'cluster1.subcluster2.String[]' to access the subarray of an element.

 

code example2.jpg 
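In text-language terms the request is: search on one field of a record array without first materializing a copy of that field. A Python sketch (the record shape and field names are hypothetical):

```python
rows = [
    {"String": "alpha", "Value": 1},
    {"String": "beta",  "Value": 2},
    {"String": "gamma", "Value": 3},
]

def find_index(rows, key, target):
    # Scans the records directly instead of first building a
    # separate array of the searched element and searching that.
    return next((i for i, r in enumerate(rows) if r[key] == target), -1)
```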

 

Right click a property node to set defer panel updates (with automatic un-setting at completion), so this does not have to be done explicitly with another ref to the owning VI. A glyph indicating this has been set could appear to indicate the option is invoked, similar to 'ignore errors inside node'.

Both the diagram disable structure and the conditional disable structure are intended to allow easy enabling or disabling of blocks of code. These nodes ought to have no effect on a diagram except to remove or add sections of diagram. But they currently have the side effect of formalizing the blocks of code they surround, as if they were a sequence structure. There are two cases where this is undesirable.

 

1. Without the disable structure, these two loops would run in parallel. 

 conddis_unwantedsync.png

2. This VI arguably ought to terminate because that wire dependency from the loop to the Stop primitive is only created because whatever is in the disabled frame needs the result of the loop, but the code in the enabled frame does not. But because the disable structure acts as part of the code, this loop runs forever.

 conddis_shouldterminate.png

This ties into a recent post I made on the NI Forum HERE.

 

I have already run into this problem a few times.

 

Somewhere in my code I have a process returning an object of a particular class which does not have the exact same type as a dynamic dispatch input.  My post above shows a workaround for this (which, if it weren't so horrible, would be funny).

 

Load class from XML workaround.PNG

 

I've run into similar problems where the object is passed via a Queue or an Event: I have to pass the objects as a parent class to enable portability, but I KNOW the only source of the data is a VI sending an object of EXACTLY THE SAME TYPE as the dynamic dispatch input.

 

A normal "to more specific class" does not work, presumably because the resulting object COULD be a further descendant of the required class, thus breaking any dynamic dispatch tables.

 

Up to now I've had to write a class member that takes the existing object plus another object of the parent type, performs the cast, and updates the values manually, which is really annoying because it requires each and every child class to implement this (and make use of the parent function).

 

So what I'd like is a function to allow an EXACT cast of an LVOOP object so that I can still satisfy the Dynamic Dispatch requirements without having to have a piece of code for each and every child class created.  If the Object classes don't match EXACTLY, return an error and the contents of the object used for the cast.
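In text-language terms, the request is a cast that succeeds only when the runtime types match exactly, rather than the usual "is-a" check. A Python sketch (class and function names are illustrative only):

```python
class Parent:
    pass

class Child(Parent):
    pass

def exact_cast(obj, cls):
    # Unlike isinstance(), a further-derived class is rejected,
    # because accepting it could break dynamic dispatch tables.
    if type(obj) is cls:
        return obj
    raise TypeError(
        f"expected exactly {cls.__name__}, got {type(obj).__name__}")
```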

 

Shane.

With "Sort 1D Array", we can get elements in ascending order, as below.

sort 1D array.PNG

But in practice, elements are needed in ascending order and in descending order about equally often. It would therefore be helpful to add a node that returns the elements in descending order, or to let users select between the two orders on one node. Although we can reverse the array after an ascending sort to achieve descending order, that costs extra CPU and memory.
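For comparison, text-language sort routines typically expose this as a flag rather than requiring a separate reverse pass; in Python:

```python
data = [3, 1, 4, 1, 5]

ascending = sorted(data)                 # the only mode Sort 1-D Array offers
descending = sorted(data, reverse=True)  # requested option: no reverse step needed
```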