LabVIEW Idea Exchange


Allow the mechanical action of radio buttons to be switch until released.

 

The way I make arrow buttons now is to put switch-until-released buttons in a cluster and watch for the cluster's Value Changed event. When it fires, I convert the cluster into a Boolean array, that array into a number, and then feed that number to a case structure. Switch-until-released radio buttons, with "No Selection" as a necessary default, would make that code nicer: the case selector would be an enumeration instead of a number.
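For illustration only (a G diagram can't be pasted as text), here is a rough Python sketch of the same dispatch logic; the Arrow enum and handler names are made up, not anything LabVIEW provides.

```python
from enum import Enum

class Arrow(Enum):
    NO_SELECTION = 0
    UP = 1
    DOWN = 2
    LEFT = 4
    RIGHT = 8

def booleans_to_selector(states):
    """Mimic Boolean Array To Number: bit i is set if button i is pressed."""
    return sum(1 << i for i, pressed in enumerate(states) if pressed)

def handle_arrows(states):
    selector = booleans_to_selector(states)   # the number fed to the case structure
    try:
        case = Arrow(selector)                # the enum selector the idea would give us directly
    except ValueError:
        case = Arrow.NO_SELECTION             # more than one button reported pressed
    if case is Arrow.NO_SELECTION:
        return "idle"                         # the necessary default case
    return f"move {case.name.lower()}"

print(handle_arrows([False, True, False, False]))  # -> "move down"
```

With switch-until-released radio buttons, the first two steps would disappear and the case structure would be driven by the enum directly.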

It's a little annoying to try to draw long wires to a terminal that's currently off screen -- you have to hold your mouse at the edge of the screen and LabVIEW slowly scrolls the window (Shift speeds this up a little). Currently, neither the mouse wheel nor the touch pad works for panning/scrolling while a wire is in progress.

 

As an improvement, it would be great if the mouse wheel allowed panning the diagram while the new wire was still in progress.

 

See the attached video.

 

In LabVIEW NXG we have the G Types feature, which allows us to create and delete controls dynamically at run time.

Creating Controls Dynamically on a Panel - LabVIEW NXG 5.1 Manual - National Instruments 

 

Since LabVIEW NXG is deprecated (no further development) and NI has announced that the strengths of the NXG platform will be integrated into LabVIEW 2021 and later:

 

Will it be possible for us to have such a G Type feature in LabVIEW? If so, when?

Although it appears that the 2020 version of the Block Diagram Cleanup Tool (BDCT) does a better job than its predecessors, I would still say that the BDCT's results are less than optimal. Most LabVIEW Idea Exchange posts concerning the BDCT talk about label positioning and alignment. Here I would like to focus on the issue of horizontal expansion of the block diagram and take a holistic view of which LabVIEW features contribute to that effect.

 

Like most programmers, when developing a VI block diagram, I try to keep the diagram no more than one screen wide. I have learned a few tricks over the years that help manage horizontal expansion, such as resizing an object's label so that the words appear on multiple lines before running the BDCT; this allows for some horizontal compression, with some vertical growth to compensate. Horizontal expansion of the block diagram is expected to some extent because data flow happens left to right, of course, but it would seem logical to incorporate that knowledge into the BDCT algorithm and find ways to adjust the spacing so that the rearrangement creates a bit more vertical expansion and less horizontal expansion, while still satisfying the usual criteria such as no backwards-running wires.

 

Because data flow is horizontal, to help the BDCT work better, NI may want to consider what visual features, other than left-to-right data flow, contribute to an unnecessarily wide block diagram. I seldom have an issue with a block diagram becoming too tall. Admittedly, poor programming technique can result in too-wide block diagrams, but let's look at a few other things. What elements of LabVIEW's block diagram unnecessarily consume width? Here are a few that I could think of:

 

  1. Long names in bundle/unbundle cluster objects, property/invoke nodes, enum/ring constants, local variables, formula nodes, etc. – Most languages LabVIEW supports are written horizontally, left to right. Long names, especially with nested clusters, take up horizontal space. I like being able to use long names; they help the code be more self-documenting. If these types of objects supported word wrap, that would help conserve width.
  2. Expression nodes and multi-digit constants – no word-wrap available here either.
  3. Named terminals of timing and event structures – They are what they are, and you cannot remove all of the ones you don’t use, and so they take up space horizontally.
  4. Some native functions – There are some LabVIEW native function icons that are wider than they are tall. Some examples: In-Range/Coerce and Initialize Array.
  5. Shift registers – It has a subtle effect, but shift registers are wider than they are tall. Do they have to be that way?

To fully tackle this issue, I believe you have to look at things holistically and not just as potential improvements that could be made in the BDCT.  Recognize that data flow is left-to-right horizontal and you will have long text names; you can’t really do anything about that. However, there are other things that could be done in LabVIEW feature-wise to help compensate for width-wise block diagram growth.

Hi,

 

I want to find warnings in an opened VI. For example, a function has moved outside a case or loop structure and is no longer visible. I open the error list with Ctrl+L and enable Show Warnings. I double-click on the specific warning and the invisible VI or function is highlighted, so I can move it back into the visible area with the keyboard. The problem is that I always have to find the VI in a list of approximately 1000 VIs that have a warning. It would be great to have a search-by-name function, or to be able to see only the warnings/errors of the active VI window.

I have attached an example picture of what I mean (I use a German version).

 

Cheers 

RogerG

 

The title says it all.

In 2021 there was excitement about improvements to LVCompare and LVMerge; however, there is still no way to compare classes or libraries.

I'm OK with not merging; I know that is a minefield. But at least show me which methods were added, which library or class settings were changed, and which VIs moved from private to public; and for classes, diff the private data and maybe show changes to the class hierarchy.

The request:

The thickness of splitter bars can be adjusted on the front panel at edit time, but this property is not exposed through property nodes. Why not expose it?

 

Background:

I use splitter bars all the time for resizable GUIs. They're also cumbersome. Many of the feature improvements I'd love to see have already been captured on this forum. In the meantime, I've got tools for automating what I can, but I've hit a dead end when it comes to resizing the splitters programmatically. The requested property node doesn't exist, and pre-sized splitters can't be copied from template VIs using scripting because the "Move" method throws an error.

Since it has been announced that LabVIEW 2021 will adopt the best features of NXG, I expect something like:

- In NXG we could dock constants to functions and nodes; I hope this can also come to LabVIEW.

- In NXG we could also remove block diagram space with Ctrl + mouse click and drag up/left, while dragging down/right added space. In LabVIEW you still can't remove space; every drag direction only adds space.

- Provide an unplaced-terminals palette for controls and/or indicators; it would be very useful.

- Zoom in/out; I have no idea why NXG can do this while LabVIEW can't.

Hi,

 

Wouldn't it be simpler, instead of using alien and manual directory setup (which still leaves incompatibilities), for NI to compile LabVIEW 2020 SP1 into a .deb package? Because:

- the Debian/Ubuntu user community is huge,

- CentOS has been abandoned, with end of support on 12/31/2021, and replaced by some kind of "experimental" Linux,

- Red Hat was acquired by IBM,

- SUSE, on the other hand, is tedious.

 

Thank you NI!

Christian

Private NI customer since LV 1.1

How is NI-MAX still so bad after all these years? It goes completely unresponsive and crashes at the slightest provocation. This feels like it should be NI's bread-and-butter.

 

Every LabVIEW team ends up recreating so much of the functionality that NI-MAX is supposed to offer out of the box. Maybe with the discontinuation of NXG, some resources should be allocated to making this product usable.

 

TurboPhil_1-1629324790878.png

 

 

KB articles like this and this probably shouldn't need to exist.

If you want to select, say, a cluster inside an array, you have a really narrow range of pixels to target. If you click in the body of the cluster, it selects the array. If you have a cluster in a cluster, clicking blank space inside the inner cluster doesn't select anything.

 

The attached file shows what I'm talking about. If you want the green line, that's a pretty narrow target. Many people may be able to handle that degree of precision, but some people may find it hard to point that precisely.

 

So, it'd be nice if you could select anything in the containment hierarchy and step your way to the right thing. I suggest that tapping Shift would select the parent of the current selection, and tapping Ctrl would select the child of the current selection that the mouse is hovering over.

 

Like, select the inner array and tap shift, and the selection is now the cluster it's in.

 

Showing this as an option in the context menu, with the shortcut indicated to the right, would be a good way to introduce this to the user: "select parent (Shift)", "select child 'childName' (Ctrl)".
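To make the proposed behavior concrete, here is a small Python sketch of the selection walk (the Node class and the hit-test logic are hypothetical, purely for illustration): Shift climbs to the parent container, Ctrl descends toward the object under the mouse.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    name: str
    parent: Optional["Node"] = None
    children: List["Node"] = field(default_factory=list)

    def add(self, child: "Node") -> "Node":
        child.parent = self
        self.children.append(child)
        return child

def select_parent(selection: Node) -> Node:
    """Shift: climb one level, stopping at the top-level container."""
    return selection.parent or selection

def select_child_under_mouse(selection: Node, hovered: Node) -> Node:
    """Ctrl: descend one level toward the node the mouse is hovering over."""
    node = hovered
    while node.parent is not None and node.parent is not selection:
        node = node.parent
    return node if node.parent is selection else selection

# Usage: array -> cluster -> inner numeric
array = Node("array")
cluster = array.add(Node("cluster"))
numeric = cluster.add(Node("numeric"))

sel = numeric
sel = select_parent(sel)                      # Shift: selection is now the cluster
sel = select_child_under_mouse(sel, numeric)  # Ctrl: back down to the numeric
print(sel.name)
```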

A complex number can be displayed as Re / Im or as r / theta.

 

Sadly, complex controls / indicators / block diagram constants can only display as Re / Im

 

It would be very nice if they could also be set to display r / theta without having to do something like this :

 

altenbach_0-1629124969353.png

Source discussion : https://forums.ni.com/t5/LabVIEW/Complex-numeric-display-format/m-p/4172124
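For reference, this is the Re/Im to r/theta relationship the display option would expose, sketched in Python with cmath (the diagram in the screenshot does the equivalent in G):

```python
import cmath
import math

z = complex(3.0, 4.0)

r, theta = cmath.polar(z)          # r = |z|, theta = phase in radians
print(f"Re/Im   : {z.real} + {z.imag}j")
print(f"r/theta : {r:.3f} @ {math.degrees(theta):.2f} deg")

# Round-trip back to rectangular form
z_back = cmath.rect(r, theta)
print(z_back)                      # (3+4j), within floating-point tolerance
```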

There is a strange asymmetry in the conversion palette:

heel_1-1628866756838.png

 

I can't see any reason why the type conversion is not supported on an array of FXP when it is supported on a scalar element. It should be supported for neither or for both.

 

Pressing Shift and dragging an object with the mouse makes it move only horizontally or vertically. Great!

 

Unfortunately, LabVIEW decides which direction to move (horizontally or vertically) based on the first pixel the mouse moves. So if you want to move something horizontally, but your mouse moves one pixel vertically while clicking, you're stuck with that direction and have to start over again. ☹️

 

"Normal" applications decide the direction to move based on the ratio of x/y mouse movement: If the mouse moved more in vertical direction, the user obviously wants to move that way, if the mouse moved more in horizontal direction, then that is the user's intent. This means you can switch the direction while dragging and you're not stuck to one direction!

 

PLEASE modify this annoying LabVIEW behavior accordingly!

Currently the only way to set or modify TCP socket options is to call a system library directly, as done for example in this post.

 

This not only makes the code difficult to understand ("what does that library call do again?") but also poses problems when you want to use your code on different operating systems: currently the only way to do this is with conditional disable structures, and even then LabVIEW still tries to load the code intended for the other operating system...

 

LabVIEW should have a standard way to set socket options within LabVIEW code, at least for the most important options (the Nagle algorithm leaps to mind...). This could be done either as additional inputs to the "TCP Open Connection" VI or, much better, through property nodes for TCP connections.
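For comparison, this is roughly what the same options look like through Python's standard socket API (assuming a reachable host); the point is that LabVIEW's TCP functions or a TCP property node could expose the same knobs natively:

```python
import socket

# Open a TCP connection (stand-in for "TCP Open Connection")
conn = socket.create_connection(("example.com", 80), timeout=5.0)

# Disable the Nagle algorithm so small packets are sent immediately
conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# Enable TCP keep-alive probes
conn.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

print(conn.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY))  # 1
conn.close()
```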

 

When integrating libraries written in C#, a common scenario is the need to cast a method to a generic delegate defined in the class in order to, for instance, subscribe callbacks to events belonging to sub-classes of the object instantiated by the Constructor Node.

 

So far LabVIEW allows subscribing to events by registering a VI as a callback to run when the event fires, but it does not allow casting a VI to a generic delegate, and that is very limiting.

 

The only existing workaround is to wrap the C# library with an additional C# layer that instantiates the delegates on behalf of LabVIEW and then exposes plain methods, properties, and events that LabVIEW can call directly.

 

The idea is to allow LabVIEW to instantiate generic delegates based on the delegate definition provided by the C# library and return the delegate object so that it can be passed as input to the library methods that request it.

 

This feature would make LabVIEW integration of C# DLLs more solid and completely manageable within LabVIEW code, with no additional wrappers needed.
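As a conceptual analogue only (in Python, not .NET), this is roughly what the extra wrapper layer does today: the library expects a strongly typed callback (the "delegate"), and an adapter turns a plain function (the VI, in the LabVIEW case) into that shape. DataReadyHandler and Device are made-up names for illustration.

```python
from typing import Callable, List

# The "delegate" type the library defines: (sender, samples) -> None
DataReadyHandler = Callable[[object, List[float]], None]

class Device:
    """Stand-in for a C# class exposing an event that expects a typed delegate."""
    def __init__(self) -> None:
        self._handlers: List[DataReadyHandler] = []

    def subscribe_data_ready(self, handler: DataReadyHandler) -> None:
        self._handlers.append(handler)

    def fire(self, samples: List[float]) -> None:
        for handler in self._handlers:
            handler(self, samples)

def my_vi(samples: List[float]) -> None:
    """The callback the user actually wrote (it ignores the sender argument)."""
    print(f"got {len(samples)} samples")

# The adapter layer that LabVIEW would ideally generate from the delegate definition
device = Device()
device.subscribe_data_ready(lambda sender, samples: my_vi(samples))
device.fire([1.0, 2.0, 3.0])
```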

Right now, if you have a string constant in a VI and you edit it and save it, LVCompare just highlights that the string constant has been edited. For small strings that's not a big deal, but I have a bunch of big, long SQL query strings, and in that case LVCompare is pretty useless. I know I changed the string constant; what I want to know is what I changed in the string constant.
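This is the kind of inner diff being asked for, sketched with Python's difflib on the old and new values of a hypothetical multi-line SQL string constant:

```python
import difflib

old_sql = "SELECT id, name\nFROM users\nWHERE active = 1\n"
new_sql = "SELECT id, name, email\nFROM users\nWHERE active = 1\nORDER BY name\n"

# Line-by-line diff of the two constant values, not just "it changed"
diff = difflib.unified_diff(
    old_sql.splitlines(keepends=True),
    new_sql.splitlines(keepends=True),
    fromfile="string constant (base)",
    tofile="string constant (modified)",
)
print("".join(diff))
```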

Imagine that I have a UI with dozens of graphs all showing acquired time history data.  Time could be on the X axis, but it could also be on the Y axis depending on the convention of the UI.  The user wants to see what happened between 1 minute ago and 2 minutes ago.  The user needs to look at data on various graphs to make a determination.  The user wants to go to one plot and make a zoom operation on the time scale and have all other plots show the same time scale interval.

 

Why not have a properties tab on plots that allows the developer to define a named linked scale and assign a scale on the current graph to link to that global scale? The developer could then drop XY graphs and link their time scales to create a standard time scale common between plots. When the user changes the time scale on one linked plot, it automatically propagates to all other linked plots. The common scales would be global across any VIs in memory.

 

This would be useful for linking time scales, but I also see it as potentially useful for linking other scales; say the Y scale is temperature and the user wants all temperature plots to match scale. This would be harder though, as some negotiation would be needed if autoscale is turned on: all plots would have to report their min/max to a process that would then pass out updated scale ranges at some reasonable interval (1 Hz, 10 Hz, etc.).
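A rough Python sketch of the "named linked scale" idea, so the propagation mechanism is concrete; LinkedScale and Graph are hypothetical names, not LabVIEW API:

```python
from typing import List

class LinkedScale:
    """A named shared scale that graphs register with."""
    def __init__(self, name: str) -> None:
        self.name = name
        self._graphs: List["Graph"] = []

    def attach(self, graph: "Graph") -> None:
        self._graphs.append(graph)

    def set_range(self, lo: float, hi: float, source: "Graph") -> None:
        for graph in self._graphs:
            if graph is not source:
                graph.apply_range(lo, hi)   # push the new range to the other linked graphs

class Graph:
    def __init__(self, title: str, scale: LinkedScale) -> None:
        self.title, self.scale = title, scale
        scale.attach(self)

    def user_zoom(self, lo: float, hi: float) -> None:
        self.apply_range(lo, hi)
        self.scale.set_range(lo, hi, source=self)

    def apply_range(self, lo: float, hi: float) -> None:
        print(f"{self.title}: x-range -> [{lo}, {hi}]")

time_scale = LinkedScale("acq_time")
g1, g2 = Graph("Temperature", time_scale), Graph("Pressure", time_scale)
g1.user_zoom(60.0, 120.0)   # zooming on one graph updates every linked graph
```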

 

All of the above can be managed with resize events and programmatic handling of those events, but I waste many dev hours on that task.  I'd love for this to be an out of box feature.

I am extending an old idea, but the implementation is different from the OP's, so I made this a new idea:

https://forums.ni.com/t5/LabVIEW-Idea-Exchange/Decimation-feature-built-into-the-graph-indicators/idi-p/1109956

 

What I would want is an XY graph with automatic disk buffering and on-screen decimation. Imagine this: I drop a super-duper large-data-set XY graph. Then I start sending data to the graph in chunks of XY pairs (updating the graph at 10 Hz while acquisition runs at 5000+ Hz). We are acquiring lots of high-rate data. The user wants to see XX seconds on screen, probably 60 seconds, 120 seconds, or maybe 10 or 30 minutes, whatever. That standard plot width in time is defined as a property of the plot. So now data flows in and is buffered to a temp TDMS file on disk, with only the last XX seconds of data showing on the graph. The user can specify a file location for the plot buffers in the plot properties (read-only at runtime).

 

We decimate the incoming data as follows:

  • Calculate the maximum possible pixel width of the graph for the largest single attached monitor
  • Divide the standard display width in time by the max pixel width to calculate the decimation interval
  • Buffer incoming data in RAM and calculate the min and max value over the time interval that corresponds to one pixel width.  Write both the full rate data to the temp TDMS and the time stamped min and max values at the decimation interval
  • Plot a vertical line filling from the min to max value at each decimation interval
  • Incoming data will always be decimated at the standard rate with decimated data and full rate data saved to file
  • In most use, the user will only watch data streaming to the XY graph without interaction. In some cases, they may grab an X scroll bar and scroll back in time. In that case the graph displays the previously decimated values so that disk reads and processing are minimized for the scroll-back request.
  • If the user pauses the graph update, they can zoom in on X. In that case, the graph would rapidly re-zoom on the decimated envelope of data. In the background, the raw data would be read from the TDMS file and re-decimated for the current graph X range and pixel width, and the now less-decimated data would replace the prior envelope on screen. The user can carry on zooming in this manner until there is at least one vertical line of pixels for every data point, at which point the user sees individual points rather than an envelope between the min and max values.
  • Temp TDMS files are cleared when the graph is closed. 
  • The developer can opt to clear out the specified temp location on each launch in case a file was left on disk due to a crash.
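A minimal Python sketch of the min/max decimation step described in the list above, assuming evenly spaced samples and one (min, max) pair per pixel-wide bin; the function name and parameters are illustrative only:

```python
import math

def minmax_decimate(t, y, n_pixels):
    """Return (t, y_min, y_max) triples, one per horizontal pixel bin."""
    assert len(t) == len(y) and n_pixels > 0
    bin_size = max(1, len(y) // n_pixels)
    envelope = []
    for start in range(0, len(y), bin_size):
        chunk = y[start:start + bin_size]
        envelope.append((t[start], min(chunk), max(chunk)))  # endpoints of one vertical line
    return envelope

# 60 s of 5 kHz data decimated for a graph roughly 1000 pixels wide
t = [i / 5000.0 for i in range(60 * 5000)]
y = [math.sin(2 * math.pi * 3 * ti) for ti in t]
envelope = minmax_decimate(t, y, n_pixels=1000)
print(len(envelope), envelope[0])
```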

This arrangement would allow unlimited zooming and graphing of large datasets without writing excessive data to the UI indicator or trying to hold excessive data in RAM.  It would allow a user to scroll back over days of data easily and safely.  The user would also have access to view the high rate data should they need to. 

 

TLDR: LabVIEW should be able to throw and catch exceptions like other languages.

 

I'll start by saying that I always leave automatic error handling turned on - please don't hate me! Yes, I am a CLA, and maybe I'm the only CLA in the world that does this - but I firmly believe that if there is an unhandled error out there in my code, I want to know about it! I don't want it to pop up in production of course, but in development the automatic error handling lets me know about any mistakes I have so that I can handle those errors properly.

 

Now, this led me to an even crazier idea: what if there were a way to turn automatic error handling into a full feature that could not only help during development but also be used in production?

 

Other programming languages have the concept of an "exception", which you need to "catch" to handle properly. The idea is that throwing an exception is better than returning a value that indicates an error, because with a return value you have to rely on the person using the function to remember to check that output, whereas an exception will pop out at them and they'll realize they need to handle it.

 

Okay, so here's the idea for LabVIEW: if automatic error handling catches an error that isn't wired out, that should be an event that programmers can "catch", either in a normal event structure or in some kind of special new error event structure. That way, we can choose the behavior we want for an unhandled error (instead of the default NI dialog box), e.g. maybe we just want to log it to a file, display it in a smaller window, or handle it in some other way, depending on the application.
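In Python terms, the two behaviors described would look roughly like this: a raised exception that the caller chooses to catch, plus an application-wide hook for anything left unhandled (the analogue of the proposed "unhandled error" event). The read_sensor function and log file name are made up for illustration.

```python
import logging
import sys

logging.basicConfig(filename="unhandled_errors.log", level=logging.ERROR)

def unhandled_hook(exc_type, exc_value, exc_traceback):
    """Replaces the default dialog-box-style handler: log instead of popping up."""
    logging.error("Unhandled error", exc_info=(exc_type, exc_value, exc_traceback))

sys.excepthook = unhandled_hook   # the app-wide "unhandled error" handler

def read_sensor(channel: int) -> float:
    if channel < 0:
        raise ValueError(f"invalid channel {channel}")   # "throw"
    return 42.0

try:
    read_sensor(-1)               # handled locally: the developer chose to catch it
except ValueError as err:
    print(f"handled: {err}")

read_sensor(-2)                   # not caught anywhere: falls through to unhandled_hook
```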