LabVIEW Idea Exchange

I really like block diagram cleanup in 8.6, but it does not handle labels very well at this time. We stopped using floating text for code documentation since cleanup sends it to the lower left of the diagram. Instead, we started using labels on structures, VI icons, etc.

 

The 'space' the labels occupy should not be ignored.

LV_label.PNG

 

-Brian

LabVIEW Control Design and Simulation Toolkit

 

-A simulation subVI fails to compile if an initial condition of zero exists on an integration block.

 

IDEA: Run a zero initial condition check on a system before attempting to compile. If the condition exists, don't attempt to compile, and report the problem to the user.

 

-A simulation subVI fails to compile if a change is made after the subVI is created.

 

IDEA: If a change is detected in the subVI, force the subVI to recompile whenever a file command is executed.
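A rough text-language sketch of the two checks being asked for is below. This is only an illustration in Python; the model and block records are hypothetical stand-ins, not the toolkit's actual API.

```python
# Hypothetical sketch only: "model" and its block records are invented for
# illustration; the Control Design and Simulation Toolkit does not expose this API.

def find_zero_initial_conditions(model):
    """Return a message for every integrator block with a zero initial condition."""
    return [
        f"Integrator '{block['name']}' has a zero initial condition."
        for block in model["blocks"]
        if block["type"] == "integrator" and block.get("initial_condition") == 0
    ]

def build_simulation_subvi(model, compile_fn):
    """Report problems instead of compiling, and rebuild whenever the model changed."""
    problems = find_zero_initial_conditions(model)
    if problems:
        raise ValueError("Cannot compile simulation subVI:\n" + "\n".join(problems))
    if model.get("dirty", True):        # a change was detected since the last build
        model["compiled"] = compile_fn(model)
        model["dirty"] = False          # the next edit sets this back, forcing a recompile
    return model["compiled"]
```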

 

As mentioned at the end of my comment here, editing text is a bit clumsy. There should be a text toolbar similar to what we can find in any other application.

 

Maybe it could be dynamic so it only appears when editing text.

 

Here's the quote from the other thread:

 

One thing that should be improved is the font pulldown, which feels so early-1990s. When working with text, we want a text toolbar like anywhere else (even in the post editor here in the forum!), with bold, italic, etc. buttons, font and size rings, and so on. You know what I mean!

The built-in LabVIEW comparison and array sort primitives are inadequate for many applications involving clusters.  For example, the clusters may contain elements that

  • Cannot accurately be compared using the default method, such as case-preserved but case-insensitive strings.
  • Have a comparison sense (in a particular instance) that is opposite to that of another member of the same cluster.
  • Should weight the ordering more heavily (in a particular instance) than another member, but sit lower in the cluster than that member, and so have less effect on the ordering than intended.
  • Should not be considered at all in the comparison.

For example, consider the following cluster:

db-cluster.PNG

 
Now, suppose I want to sort an array of this cluster, but I am uninterested in the VendorCode or the Password, and I want the Server, Database, and User to be compared caselessly. The Sort 1-D Array primitive will not do this properly. The common pattern for overcoming this is something like the code below.

sort-pattern.PNG

 
This does the job, but it is not particularly efficient for large arrays.  I could code my own sort routine, but that's not the best use of my time, nor is it very efficient.
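For readers who think better in a text language, here is a rough Python analogue of that pattern (the field names are taken from the cluster pictured above; the data values are made up):

```python
# Python analogue of the common pattern: build a reduced, case-folded key for
# each record, sort by those keys, and rearrange the original array to match.

records = [
    {"Server": "Alpha", "Database": "Prod", "User": "Bob", "VendorCode": 7, "Password": "x"},
    {"Server": "alpha", "Database": "dev",  "User": "Ann", "VendorCode": 3, "Password": "y"},
]

# Decorate: keep only the fields that matter, folded to a caseless form.
keys = [(r["Server"].lower(), r["Database"].lower(), r["User"].lower()) for r in records]

# Sort the indices by key, then undecorate by indexing the original array.
order = sorted(range(len(records)), key=lambda i: keys[i])
sorted_records = [records[i] for i in order]
```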

 

A similar argument can be made for simple comparison (lt, le, eq, etc.) between two clusters, although this is easily done with a sub-VI.

 

My proposal is to take an object-oriented approach and allow clusters to decide how they are to be compared.  This would involve something like attaching a VI to a cluster (typedef). This would allow the default comparison of two of these clusters to be determined by the provider of the cluster, rather than the writer of the code that later needs to compare the clusters.  I will leave it to LabVIEW designers how to associate the comparison code with the cluster, but giving a typedef a block diagram is one way that comes to mind.

 

Of course, different elements may need to be compared in different ways at different times. This leads to the thought that Sort 1-D Array ought to take an optional reference to a sorting VI to be used instead of whatever the default is. This idea was touched on in this thread but never thoroughly explored.  The reference would have to be to a VI that conformed to some expected connector pane, with well-defined outputs, like this:

compare 1.PNG

 

Strictly speaking, the x > y? output is not required here.  Another possibility is

compare 2.PNG

 

which simply outputs an integer whose sign determines the comparison results.  Clusters that cannot be strictly ordered would somehow have to be restricted to equal and not equal.

 

The advantage to wiring a reference to such a VI into the Sort 1-D Array primitive is obvious.  It is less obvious that there would be any utility to be gained from providing such an input to the lt, le, eq, etc. primitives, but consider that this would allow the specifics of the comparison to be specified at run-time much more easily than can presently be done.
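To make the comparator-reference idea concrete in a text language, here is a hedged Python sketch: sort_with_comparator stands in for a Sort 1-D Array primitive that accepts a comparison routine, and the routine returns an integer whose sign decides the ordering, as in the second connector pane above. The field names are again taken from the example cluster.

```python
from functools import cmp_to_key

def compare_connections(x, y):
    """Caseless comparison on Server, Database, and User; VendorCode and
    Password are deliberately ignored, as in the sorting example above."""
    kx = (x["Server"].lower(), x["Database"].lower(), x["User"].lower())
    ky = (y["Server"].lower(), y["Database"].lower(), y["User"].lower())
    return (kx > ky) - (kx < ky)      # negative, zero, or positive, like "compare 2"

def sort_with_comparator(array, comparator):
    """Stand-in for a Sort 1-D Array primitive that takes a comparison routine."""
    return sorted(array, key=cmp_to_key(comparator))
```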

1. Allow for "Tabbed Browsing" of VIs to better manage windows.

2. Allow the BD to be open independently of the FP.

3. Allow dockable palettes... dock to either the edge of the screen, or to the top bar (pictured below) of LabVIEW.

4. As a bonus, consider being able to open PDFs, TXT files, and HTML files in tabs as well, for Help and documentation.

5. Finally, allow the project tree to be docked into the IDE.

 

Please add your own IDE upgrade ideas in this discussion - illustrations will be especially helpful here. If it's a major enough idea, create a new idea!

 

LabVIEW2010.png

We need something sort of like MS Office's "Clippie", but for LabVIEW.

 

1.png


I searched on "polymorphic" and did not find this idea posted. 

 

I just learned over here that when you use a polymorphic VI, all flavors of that VI load into memory!  That's why a VI hierarchy gets so cluttered so fast when you use them.

 

In the object-oriented version of polymorphism, all possible polymorphic cases need to be coded and loaded into memory, since any of these possible cases could be called depending on the execution of the program.  In the LabVIEW-specific version of polymorphism, where a function has many flavors, perhaps due to a change in data type on one of the inputs, it is not usually the case that all of the different polymorphic members can execute at run time.  In fact, I believe it is usually the case that only ONE of the cases will ever be called or execute.

 

So, why are all of the other polymorphic members in memory?  I don't know.  I think they shouldn't be.  They seem to be eating RAM for no good purpose.

 

Load only the specifically called version of a polymorphic VI into memory.

If a Facade VI of an XControl registers for some dynamic events (whatever the source), a firing of one of these Events will NOT trigger actual activity (Facade VI activity) within the XControl.

 

If we register for a static Event (Mouse move on the FP for example) we DO get a trigger for the XControl (Facade VI becomes active).

 

The unusual situation arises that the dynamic events are registered but not handled UNTIL a static event fires, after which all of the dynamic events are also dealt with.

 

Please make it possible for Dynamically registered Events within an XControl to "trigger" the XControl just as static events do.

 

Shane.

When sending data to an XControl terminal, the action returns immediately regardless of how long the XControl takes to update its display. When using an XControl in a situation where the individual data updates are very close together, the "Data Change" events within the XControl stack up and the XControl can lag significantly (multiple seconds) behind the ACTUAL data.

Think of a typical In-box on an overworked clerk's desk.  It just keeps getting higher and higher, and he's stuck dealing with "old" data.

 

When the loop calling the XControl is stopped, the XControl will continue updating even though it's not actually receiving any new data. It must still work through the backlog of old data... This is extremely bad from a UI point of view.

 

This is different when using any of the in-built controls.  If a control takes 5ms to update, the loop sending to the terminal for that control will wait until the control is finished displaying.  As such, the control effectively limits the rate (5ms) of the calling loop to match its drawing speed.

 

XControls should do this also (perhaps automatically, perhaps optionally).
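The backlog-versus-throttling distinction is not LabVIEW-specific; a minimal Python sketch (with a deliberately slow consumer standing in for the XControl's Data Change handling) illustrates why an asynchronous hand-off lags while a synchronous one limits the producer's rate:

```python
import queue
import threading
import time

def slow_display(value):
    """Stand-in for the XControl's Data Change handling (drawing takes ~50 ms)."""
    time.sleep(0.05)

def asynchronous_updates(n=100):
    """Today's behavior: writes return immediately, so a backlog of old data builds up."""
    pending = queue.Queue()            # the clerk's ever-growing in-box
    worker = threading.Thread(target=lambda: [slow_display(pending.get()) for _ in range(n)])
    worker.start()
    start = time.time()
    for i in range(n):
        pending.put(i)                 # returns instantly, whatever the backlog
    print(f"producer finished in {time.time() - start:.2f} s, backlog = {pending.qsize()}")
    worker.join()                      # consumer is still grinding through stale data

def synchronous_updates(n=100):
    """Proposed behavior: each write waits for the display, throttling the loop to ~50 ms."""
    for i in range(n):
        slow_display(i)

asynchronous_updates()
synchronous_updates()
```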

 

Discussed in forums HERE.

 

Shane.

It's a simple idea really. Recompile the existing code so that the SPT runs on Linux and Mac, not just Windows.

In order to straighten out your wires and reduce crossover, it's often necessary to swap inputs. Or, if you accidentally wire y^x when you wanted x^y, you need to swap inputs. Currently, we must delete the wires to both inputs, then rewire each into the opposite input. Proposed: a "Swap Inputs" option:

 

SwapInputs.png

 

(The expected behavior is obvious for a function that only has two inputs - please give your input on expected behavior on functions with more than two inputs. Right now I lean toward limiting "Swap Inputs" to primitives and user-defined functions that only have 2 inputs)

The LabVIEW Project Explorer has a Standard Toolbar, and it's been greatly appreciated.

Why not a Standard Toolbar for VIs? Most programmers are used to Ctrl+S for saving changes (one-handed use), but not as many use, or are used to, Ctrl+N and Ctrl+O.

 

t2.png

 

I think a Standard Toolbar for VIs would be a very useful option!

Given a multiframe flat sequence, there is sometimes the need to remove one of the frames because it is not needed. This is currently hard.

 

We can add frames, merge frames, insert frames, and remove the entire frame structure, but if we try to remove a single frame, it drags its entire contents with it to neverland.

 

IDEA: When removing a frame of a flat sequence, the code should stay, just the frame should disappear.

 

This is a more natural operation. Deleting the code first or later if desired is trivial.

 

(Originally I was only thinking of edge frames (Inner frames can already simply be merged). Another option would be to split the existing sequence into two sequences if an inner frame is removed. Would that be useful too? Maybe!)

I was inspired by the idea 'Color properties should default to colorbox data' to post this one.

 

You can drop the listbox symbol constant from the palette onto the block diagram. But if you create a control from it, or convert it to a control, you end up with a numeric control, not a pict ring of the symbols.

It would be nice, when including the listbox symbol in a data structure, to allow for its graphical representation. That way you would not have to look up the index number against the constant to figure out which symbol had been set in the data structure when you are debugging.

 

If there is a workaround for this, please let me know!

If you do a dynamic call from a built application and it fails because the VI in question depends on a VI that it is unable to locate in the built-application environment, the only way to figure out what went wrong is to rewrite your app so that it opens the front panel of the VI, and then click on its broken run button... There should be a way to get that error description without having to do anything to the application.

 

The real challenge, however, comes when you run into the same problem on a real-time target. There you cannot open the front panel... and basically have to search in the dark to find a solution.

 

Feedback to the programmer's machine would be nice, but it should not only work when you have LabVIEW running. It should be possible to, e.g., put a switch in an INI file... and then get a text log that describes, in full detail, what goes wrong with the dynamic calls.

I think there needs to be an option for a dockable context help. It would be in the Options menu: you would check it to say you want the context help docked, and then you would be able to click where you want it docked (top, bottom, left, right). You would then have the option of stretching it along the bottom (like in the first picture) or leaving it in the right or left corner of the bottom. Also, if you choose to have it stretched, the items within the context help would rearrange into an easier-to-read layout (like in the second picture; of course the font will need to be made easier to see, as I edited this in Paint :)). The same would apply if you choose to have it docked on the top, left, or right.

 

It would also fit to whatever size panel you have open, so it wouldn't go across your whole screen if you make your panel smaller. (Also, if your front panel and block diagram are different sizes, it would change size as needed, and if you use Tile Left and Right or Tile Up and Down, it would show on the bottom or top, or left or right, in full stretch.)

 

This would also not obstruct your code; it would be attached to the window and have its own scrollbar (if needed).

Can somebody explain to me why the Colour box control is on the Numeric Palette as a control but as a constant it's on the Dialog & User Interface Palette???

 

 

CB Control.PNG CB Constant.PNG

 

It doesn't matter HOW many times I look for the Colour box constant on the Numeric palette and then move to the Dialog & User Interface palette, I NEVER learn.

 

Please please move the constant to the Numeric palette where it belongs.

 

Shane.

 

 

This is one of my very old ideas and goes all the way back to InfoLabVIEW. I recently got reminded in this thread to write it up as an idea. You might have heard it before. If not, read on. :D

 

Currently, an output tunnel gets the default value for the given datatype if "use default if unwired" is enabled and a case executes where it is not wired. Recently, we also got the "linked tunnels" feature, which is more like an editing assistant.

 

Many times we have a big stack of cases but the computation of many outputs is shared by many cases, maybe with one or two notable exceptions. 😉 It would be cool to be able to define this shared "default" code only once so it is executed unless we create an exception case.

 

My suggestion is to have a new, special case that allows us to define the output of each tunnel for cases where it does not receive an overwriting input.

 

The image shows a few possibilities for an event structure (same applies for all other relevant structures).

 

A: A reference is wired across by default. We don't need to wire across any other case.

B: Nothing is defined, so it acts like today. This is the default, so everything is automatically backwards compatible with existing code.

C: A number is incremented with each iteration unless we overwrite it in a specific case.

D: The default output is based on the operations of several inputs.

E: If a tunnel is unwired, we get NaN (or whatever we need) instead of zero. For I32 we might want -1, for example.

F: Same as A. This is similar to (but not exactly the same as) linked tunnels. (I.e., it also applies to existing unwired cases.)

G: This tunnel is defined in all cases. If we add an unwired case later it would act like B.

H (not shown): certain global event terminals (e.g. time) should also be available in the "default definition case", because we might want to use them for a default output.

 

 

Downconversion would be somewhat messy. It would probably need to wire the relevant default operations into all cases where an output is not wired, keeping the functionality the same.
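In text-language terms, the proposal amounts to computing a default set of outputs first and letting each case override only the tunnels it actually wires. A rough Python model of that semantics (event names and outputs are invented for illustration):

```python
# Rough model only: every case starts from the outputs of the "default definition
# case" and overrides just the tunnels it wires. Names are invented for illustration.

def default_outputs(state):
    return {
        "reference": state["reference"],   # like A: wired across by default
        "count": state["count"] + 1,       # like C: incremented unless overridden
        "result": float("nan"),            # like E: NaN instead of the datatype default
    }

cases = {
    # Only the exceptional cases are spelled out; everything else uses the defaults.
    "reset": lambda state, event: {"count": 0},
}

def handle(state, event):
    outputs = default_outputs(state)
    override = cases.get(event["name"], lambda s, e: {})(state, event)
    outputs.update(override)               # the specific case wins over the default
    return outputs
```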


If you want to distribute your application to several countries, you must find a way to localize the UI.

There are a lot of threads in the forum, documents in the NI Developer Zone, etc... but all of them are based on the Export Strings/Import Strings functions.

 

I think that this approach has some problems:

  • difficulties in changing languages on the fly (during EXE execution)
  • how can you switch between English and Chinese (or Chinese and Russian), for example?


I had this kind of problem in the past with CVI as well, but in the new CVI 9.0 a very interesting, powerful, and easy solution has been implemented.

Basically it uses a new property called "charset" that you can set for your whole panel, but also for every control of your UI.

Playing with this property, you can also have several different languages (mixed single-byte and multi-byte too) on the same window, and it doesn't matter what the language settings of your OS are!!!

 

Don't you think a localization approach like the CVI one would be nice?

And an Integrated Localization Utility (like CVI's) would be appreciated too...

 

vix

 

 

In my project, I wanted to set the cell color in Excel to red during a failure condition. But to my surprise, I found that the color was set to blue and NOT red. It took me a long time to understand that the hexadecimal values for the color constants are different in LabVIEW and MS Excel.

I wonder why the color constants are not standardised, when LabVIEW supports features like ActiveX controls.
I wonder why the color constants are not standardised, when Labview supports features like ActiveX controls.