LabVIEW Idea Exchange


Several XNET users miss an important function in the driver: reading a signal's logical values from the database.

For example, a value of 1 means 'Initialized' and 0 means 'Not initialized'. We would like to read those strings.

 

See this thread.

Reference conversation in this thread: Speed of Cursor Clicks on graph

 

The cursor button pad for changing the position of a cursor is overly sensitive to the duration of the click when deciding how far to move the cursor.  A normal button click sometimes moves the cursor one tick, as you'd expect.  Sometimes it jumps several positions.  It takes an exceptionally quick click to keep it from jumping too far.  The button pad is designed so that if you hold down the button, the cursor moves continuously, but what should be a normal single click often gets interpreted as a button hold.

 

There needs to be more of a delay between when a button goes down and when it gets interpreted as a hold before the cursor starts repeating.  Some usability testing may be needed, but perhaps a delay of 1/2 to 1 second.  I believe Windows has a setting that determines when a double click is treated as a double click rather than two single clicks; that timing might apply here.

 

If you are uncertain what I am talking about, create a graph with some data.  Make the cursor palette visible and add a cursor.  Try to click the button left or right to make it move just a single tick.

 

One more thing to improve: when doing this while the VI is not running and is in edit mode, a button click on the pad sometimes winds up distorting the diamond-shaped button rather than being interpreted as a click.

 

As LabVIEW evolves, the compiler takes over an awful lot of code optimisation for us.  This leads to situations where relatively large and important pieces of code can be evaluated at compile time and constant folded, which can greatly aid execution speed.  This is good.

 

Constant folding can be a great aid when programming, but at the moment its usage is a bit "hit and miss" due to the opaqueness of the process.  We already have constant-folding highlighting, which really helps (even if the feedback is sometimes very hard to understand).  But this doesn't always give us enough feedback.
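By analogy, many textual compilers do the same thing, and some let you observe it directly. A minimal CPython sketch (this is an illustration of compile-time constant evaluation, not a statement about LabVIEW's compiler) shows an expression of literals being folded into a single constant, while the same expression over variables is not:

```python
# CPython's peephole optimizer folds expressions made only of literals
# at compile time; the folded result lands in the code object's
# constant pool (co_consts).
folded = compile("x = 60 * 60 * 24", "<demo>", "exec")
print(86400 in folded.co_consts)    # → True: the product was computed at compile time

# The same expression over variables cannot be folded:
unfolded = compile("x = a * b", "<demo>", "exec")
print(86400 in unfolded.co_consts)  # → False: the multiply happens at run time
```

A "requires constant folding" structure would amount to asserting, at compile time, that the enclosed code behaves like the first case, and breaking the VI when it behaves like the second.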

 

What I would like is the option to declare a portion of code as "Requires constant folding" (like a "Precompile" structure).  In this way, I can, as a programmer, designate some code which is meant to be evaluated at compile time.  If the compiler is unable to constant fold this code, then the VI should be broken.  My motivations are three-fold.

  1. Sometimes we want to specifically make use of the constant folding capabilities of the compiler, but a small change can result in the code no longer being constant folded.  I would like explicit feedback when code I want constant folded is not constant foldable.
  2. I have no idea whether code complexity has an effect on the ability to constant fold.  Other compiler optimisations (like unbundle-unbundle in-placeness) are dependent on code complexity.  Explicit declaration is not code-complexity dependent.
  3. When looking at FPGA designs, the ability to constant fold data that would otherwise require resources or affect performance is very powerful.  In such "constant folding" code we could also allow mathematical functions that are otherwise not supported on the target (max/min of an array in a timed loop, for example), or create default data for an array (to be used as Block RAM) from an existing equation where constants are defined as DBL.

 

One example of FPGA code is automatic latency balancing of several parameter pathways into a process.  The code accepts abstract parameter objects whose latency is queried via a dynamic dispatch VI which simply returns a constant.  I use dependency injection to tell the sub-VIs which communication pathways they are being given; they can then query the latency and do some static timing calculations for the delays required on the different pathways.  Tests have shown that this is constant folded, and that it is thus possible to write very robust FPGA code which auto-adjusts request indices for parameters in multiplexed code.  At the moment things seem to work, but the ability to specifically designate such code as constant folded would be welcome, to make sure I don't accidentally produce a version which doesn't actually return a constant (and my compiles fail, I get timing errors, or I just over-use resources).  In the code below, everything circled in blue is constant folded when compiling the FPGA code.  In the sub-VIs I have to do some awkward calculations because certain functionality is not available on FPGA.  By defining this code as requiring explicit constant folding, I could theoretically utilise the full palette of LV functions and also be guaranteed a compile error (a LabVIEW error, not a Xilinx one) if the designated code cannot be constant folded.

 

2016-09-15 10_13_00.png

 

So in a way it's similar to the In Place Element structure which, when all goes well, should not be needed.  But there are cases (I've run into some myself) where either small changes in code can make the desired operation impossible, or the code complexity can cause the optimisation not to be performed.  As such, it is still necessary at times to explicitly designate some code paths as in-place.  I would like the same functionality for constant folding.

It would be good to have an Event Inspector window that behaves like the Probe Watch window.

That way, the user could trigger an event on the front panel and view the inspector window at the same time.

This option would make it easy to watch an event as soon as it is triggered, without opening a new window separately.

Event Inspector Window.PNG

It would be good if it behaved much like the Probe Watch window.

It would be useful during debugging.

 

Currently (LV 2016), if you want to turn tip strips on or off on a VI front panel, you have to set the tip strip string of each individual control:

Screen Shot 2016-08-28 at 11.41.47.png

 

This also means you have to keep track of them (SR, LVG, etc.).

I suggest adding a VI Property "Tip Strips Visible" which would mirror the option available in the development environment (but of GLOBAL scope):

 

Screen Shot 2016-08-28 at 11.22.07.png

 

BTW, Controls means exactly what it means in the Options panel, i.e. Controls AND Indicators.

So currently, in order to have multiple scales in an XY Graph, you must pre-allocate the scales by choosing "Duplicate Scale" on the front panel. This does not allow the user to set the number of scales dynamically; they must be created beforehand and made visible/invisible programmatically.

 

XY Graph.png

 

I suggest adding a node (invoke or property) to duplicate the scales. This would make XY Graph scales much more extensible.

Hello,

 

Currently we do not have real control over the shortcut menus. We can modify/disable a Graph's run-time shortcut menu, and we can, for example, access selection Events. This does not work for the Plot Legend: if someone wants to use the legend but, for example, exclude some of its options from the menu, that is currently not possible. Nor do we have Events associated with the Plot Legend's run-time shortcut menu.

Could this be fixed/enhanced? Either give us the same custom modification options as for the Graph's menu, and/or Events (filtered too) for this menu.

What do you think?

 

Idea came from this post: http://forums.ni.com/t5/LabVIEW/Export-Data-Right-Click-Graph-Display-Format/m-p/3337820/highlight/true#M979937

With the release of LabVIEW 2016, the issue of cross-platform support for video becomes critical.

 

As stated in the "features and changes" page: "NI no longer provides the 32-bit version of LabVIEW for OS X."

 

Up to now it was possible to use Christophe Salzmann's QTLib to work with video. This was great software because the very same "universal" VIs ran on both Windows and OS X systems. However, this tool relied on QuickTime. QuickTime is 32-bit and will not evolve to 64-bit (it has been replaced by newer libraries), and neither will QTLib.

 

Thus, all code developed with LabVIEW that includes cross-platform video handling can no longer be maintained and developed!

 

On the other hand, video is becoming a ubiquitous feature of computer technologies, from mobile apps to IT, big data, and deep learning. It is a strategic time to push for video in LabVIEW across all platforms. It is time for video-handling VIs to appear in the standard development package, just as JPEG and PNG read/write VIs were added after photos massively entered the computer world.

 

I propose adding to the "Programming/Graphics & Sound" palette a subset of VIs for handling video (open a video file for read or write, read a file frame by frame, append one frame to a file, grab frames from a webcam). There is no doubt that such a feature would be valuable in a future LabVIEW release!

 

Currently, we have the option to cluster event registration refnums when working with the Event structure to allow for some modicum of organisation.

 

Clustered event registration refnums.png

 

A co-worker recently tried to wire up a cluster of clusters of event registration refnums and was surprised that it didn't work.  I thought he'd made a mistake but to my dismay I could not get it working.  It seems that there's a one-cluster rule when working with event registration refnums with the event structure.

I think everyone would agree that more hierarchical organisation of User Events is a good thing, so my proposal is that far more complex clusters be supported by the Event structure, allowing a more hierarchical structure of events.

My example does not include a name for the topmost cluster in the lower example; obviously, naming the clusters/events would make them far more useful.

The current legend properties are too low-level: they require a loop to iterate over the active plots and set/get the legend text.

There should be high-level (express?) VIs to do the most common task of labelling a plot legend.

Yes, this can all be done with lower-level primitives today, but why waste everyone's time coding something that is always needed?

Ideally, an express VI would take a reference to a plot and an array of numbers or strings and generate an appropriate legend for the plot based on the array input. The express VI could be saved and modified by the user for special, uncommon applications, but most users could just use it to quickly create legends for their plots.

If we use "find missing items" in a project, and there are no missing items, the dialog says "No items were found". (see image)

 

This is the exact opposite of "No missing items were found" and could thus be misinterpreted as "All items are missing" instead. 😮

 

 

IDEA: I suggest changing the dialog to clearer wording (e.g. "No missing items were found").

While there are ways to temporarily disable auto-wiring, some auto-wires should never happen in the first place. For example, if the resulting wire has a net right-to-left flow, it should not be created at all!

 

Let's have a look at the following simple scenario (see picture):

 

Place an "Add" primitive (or almost anything else!), then use Ctrl-drag to create a second copy right below it (sometimes we need to add a few different things in parallel!). LabVIEW immediately connects a random input and output with a circuitous backwards wire for no good reason at all. 😞

 

 

If I really wanted them to be connected, I definitely would have placed them side-by-side!!!

 

Idea summary: if any auto-generated wire would result in a net backwards dataflow, it should not be created at all!

If you create a strict type def of a numeric control, select "Radix visible", and change the radix to Hex, then when you drop the control on a front panel, the radix is visible and the number is in hexadecimal format.  However, if you drop this control onto a block diagram as a constant, the numeric constant defaults back to decimal format and you must then right-click to show the radix and change it back to hexadecimal.

 

If you strictly define the control to be in hex format, the "strict" type should be applied to the constant as well as to controls and indicators.  Currently it defaults to hex format for controls and indicators, but not for a constant on the block diagram.

 

Using LabVIEW 2014

The ability to left-pad the exponents of numeric controls and indicators with zero(s) (via format specifiers in the advanced display format properties) would allow fixed-width representations of scientifically notated values that right-align more cleanly. This is most evident while scrolling through an array of such numerics.  A control with this capability would be preferable to a formatted string, which is prone to user input errors (or requires code to check validity), and would still be subject to data-entry property restrictions.  This is the same as MS Excel's default format for scientific notation ('0.00E+00' in its 'Type' format specifiers).   One could select a minimum control width and expect it to be populated uniformly.
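For comparison, this is what a fixed-width exponent looks like in a textual format specifier. A small Python sketch (an analogy, not LabVIEW's format codes): the `.2E` presentation type pads the exponent to at least two digits, matching the Excel-style '0.00E+00' pattern described above:

```python
# Each value formats to the same width (like Excel's 0.00E+00),
# so a right-aligned column of such values lines up cleanly.
vals = [1.5e-3, 2.0e5, 3.25e-11]
for v in vals:
    print(f"{v:.2E}")
# 1.50E-03
# 2.00E+05
# 3.25E-11
```

All three strings are exactly eight characters wide, which is what makes a scrolled array of them visually uniform.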

 

In this example, forced two-digit exponents integrate more easily visually when looking down an array and comparing the contents to hard-copy data, especially when the hard copy is in x.xxE+xx format.  Which looks 'cleaner'?

 

Aligned exponent comparison.png

How about adding a new text-formatting ribbon to the toolbar in the next LabVIEW release, as in Microsoft Word? I believe such functionality would significantly improve a developer's everyday life when it comes to editing controls and indicators one by one... I apologize in advance for the poor quality of my screenshot, but I'm sure you'll get my point.

The current LV version should not be listed here! Many a time this has confused me and even made me think that I'm already one version ahead.

 

Save for Previous Version - Has Current Version.jpg

When working on old code, we may have some large structures (don't start on style; large structures exist and will continue to exist as long as LabVIEW is marketed to non-programmers) which become a real nuisance to work with.

 

If I'm refactoring such code and have significantly reduced the footprint of the BD, I need to go wandering around looking for the vertical middle of the structure so that I can resize it (make it smaller!).  Why don't the resize handles appear wherever the mouse is on the edge of a structure, if nothing else is in the way?  Why the insistence that only the corners and the middles of the edges can be used to resize?

 

The same occurs if I temporarily need more space: go searching for the middle of the structure, resize, and go wandering back.  It would be nice if I could grab any arbitrary point on the structure edge to resize.  Of course, the corners would remain the only places where we can resize in two dimensions simultaneously.

On the front panel and block diagram, when we place a control (a Graph) over another control (a Numeric), everything looks fine and we have no clue that an object is overlapped. The only way to find the hidden control (the Numeric) is to go to the BD and navigate to the FP using the Find Control menu. So it would be nice if the object overlapping another object got a shadow. This also applies to BD objects, where VIs can get duplicated when we try to create a copy using Ctrl+Drag.

WaveformChart Overlapped control.png

Using the "Find Control Option" in BD

WaveformChart Overlapped control.png

 

WaveformandNumericSeperated.png

 

This would be simpler if a shadow told us that an object is overlapped:

WaveformchartShadow.png

 

At run time this shadow could be removed, much like how controls over a Tab container are handled.

Please ignore if it is already suggested.

 

I wish there were an easier way to search and replace an element within the In Place Element structure.

 

Existing.png

 

This would make it easier to replace particular elements:

 

Proposed.png

It would be very useful to have an add-on to execution highlighting that follows execution into subVIs and keeps highlighting there.