LabVIEW Idea Exchange


Problem: The native nodes and VIs used for error manipulation are currently located in the Dialog & User Interface palette. While manipulating errors can involve generating dialogs and can influence the user interface/user experience, error manipulation is a broader, stand-alone topic.

 

Solution: Nodes and VIs that are relevant to error handling/error manipulation should be given their own subpalette inside the Programming palette. The new subpalette could be named "Error Handling", "Error Manipulation" or "Errors".

 

1 (annotated).png

2 (annotated).png

Hi all,

Image processing is no longer the "tough stuff"/niche topic of the past. Video is everywhere now, maybe even more than sound. So functions that handle video should be included in cross-platform standard LabVIEW, at least the essential ones (reading/writing files with a minimal set of codecs, acquiring frames over IP or USB for common protocols, and converting between display image formats and processable data). This would attract a lot of young users. It would also jumpstart future development of LabVIEW in the direction of AI.

Thx

This is an idea I've been working on for a while. It's time to let others start evaluating it. 🙂

000.png

001.png

 ^ I included the above for Dmitry Sagatelyan and similar folks who have asked me for these things over the years so they know the mindset to use when evaluating the idea. But it's written up below for LabVIEW users who only know LabVIEW as it stands today (Q3 2024).

002.png

003.png

004.png

005.png

006.png

007.png

 

Feedback and questions welcome. 

I realize that the DBCT is nearing its 20th birthday and probably is not a priority for a fix, but this issue is so simple, and it has tripped up LabVIEW programmers writing database applications for many years.

 

The VI Rec Fetch Next Recordset (R).vi has a flaw: when stepping past the last recordset in a multi-recordset return, it leaks a reference that causes mayhem in downstream code. A simple test of the recordset reference would make this VI properly useful.

 

Among my other posts in reply to forum users about databases, the issue is captured succinctly here.

 

I long ago made the mods to the toolkit VI (actually, two), and reapply them with each new LabVIEW version as needed.  If anyone at NI/Emerson wants to look into this, I'd be happy to share the particulars.

 

Dave

This topic keeps coming up randomly.  A LabVIEW class keeps a mutation history so that it can load older versions of the class.  But how often does this actually need to be done?  I have never needed it.  Many others I have talked with have never needed it.  But it often causes problems as projects and histories get large.  For reference: Slow Editor Performance with Large LabVIEW Projects Containing Many Classes.  The history is also just added bloat to your files (lvclass and any built files that use the lvclass) for something most of us do not need and sometimes causes build errors.

 

My proposal is to add an option to the class properties to keep the mutation history.  It can be enabled by default to keep the current behavior.  But allowing me to uncheck this option and then never have to worry about clearing the history again would be well worth it.

After working with text-based languages recently, I've become more aware of a very painful flaw in the LabVIEW IDE.

 

First of all, as a software engineer, I like to perform experiments: make a small change, test it, and if it doesn't work, just use Git to roll back the changes. I've been doing this for years, and with LabVIEW it has been fairly painful. Until recently I didn't realize there was a better way.

Why is it painful? Every time I use Git to check out a different branch or roll things back, I am forced to close LabVIEW, or at least close the project, so that LabVIEW detects and loads the new code from disk instead of whatever it has in memory. I lived with that for years because I didn't know any better.

 

Enter text-based programming and IDEs: VSCode, PyCharm, probably any other modern IDE. I try an experiment, it doesn't work. I roll the changes back in Git and guess what? I don't have to open and close anything! The IDE just automatically detects the file has changed and loads the new file!
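The detection those IDEs perform is not magic. A minimal Python sketch of one underlying mechanism (polling a file's modification time) might look like this; the file name and helper names are hypothetical, purely for illustration:

```python
import os
import tempfile
import time

def snapshot(path):
    """Return the file's last-modified time in ns, or None if missing."""
    try:
        return os.stat(path).st_mtime_ns
    except FileNotFoundError:
        return None

def has_changed(path, last_seen):
    """Compare the file's current mtime against an earlier snapshot."""
    return snapshot(path) != last_seen

# Demo: write a file, snapshot it, modify it, and detect the change.
with tempfile.TemporaryDirectory() as d:
    vi = os.path.join(d, "example.vi")   # hypothetical file
    with open(vi, "w") as f:
        f.write("version 1")
    seen = snapshot(vi)
    time.sleep(0.05)                     # let the mtime actually advance
    with open(vi, "w") as f:
        f.write("version 2")
    print(has_changed(vi, seen))         # True
```

Real IDEs typically use OS change-notification APIs (inotify, ReadDirectoryChangesW, FSEvents) rather than polling, but the effect is the same: reload when the on-disk copy differs from the in-memory one.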

 

When is LabVIEW going to get with the times?

My request is for NI to review and develop the ability to select which network adapter (a specific IP) a Network Shared Variable (NSV) will deploy to on a system (e.g. a PC) containing multiple networks.

 

After searching through various forums and issues, I understand that currently we can only choose LabVIEW's global default adapter via unusual Windows methods. What if other projects need to use the primary NIC? Or what if, in one project, I need some NSVs to go to the primary NIC and some to use the secondary NIC? As various hacks and workarounds did not fulfill my requirements, I opened a request for technical support. After review, the case (#03609821) concluded that this was not possible, and support suggested I post here to see if others would also find this useful, to boost development priority.

 

My example scenario is this. My company develops specialized motion products (mainly automotive transmissions for heavy duty and defense applications). We primarily use NI in our dyno and own a few NI hardware systems: cRIO, cDAQ, PXI.

I work in the dyno room here and am trying to set up a cRIO as a datalogger and RT OS to primarily measure analog and CANbus signals and thereby control testing.

 

The test fixture is operated from a PC with two network cards. One is on a private LAN that talks to the cRIO to stream data and control; network shared variables keep track of various statuses. The other card is on the company LAN, primarily so we can transfer data and share fixture status. A secondary purpose of being on the company LAN is that we have six configurable dyno motors (usually used in pairs) in the room that run on a shared master controller (non-NI). Currently that link is PC RS232 to the cRIO, but we're planning eventually to transition to a more open Ethernet architecture. NSVs would be ideal for sharing each dyno's status, as the serial link is limited to one-to-one connections and in bandwidth.
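For anyone evaluating the idea, the capability being requested (choosing which adapter traffic uses) is what plain sockets expose by binding to a specific local IP. NSVs themselves ride on NI's publish-subscribe engine, so this Python sketch only illustrates adapter selection at the socket level, not the NSV API; the IP in the comment is hypothetical:

```python
import socket

def open_udp_on_adapter(local_ip, port=0):
    """Create a UDP socket bound to one specific local adapter.

    Binding to the adapter's own IP (rather than 0.0.0.0) forces
    traffic through that NIC -- the per-variable control this idea
    asks NI to expose for Network Shared Variables.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind((local_ip, port))   # e.g. "192.168.10.5" = private LAN card
    return s

# Demo on the loopback adapter, which is always present:
s = open_udp_on_adapter("127.0.0.1")
print(s.getsockname()[0])      # 127.0.0.1
s.close()
```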

Control and indicator references are currently 19 pixels tall. They should be 16 pixels tall. References would then align better with other items which are already 16 pixels tall, such as the Bundle By Name and Unbundle By Name nodes.

 

1 (edited).png

This idea is inspired by this idea and this idea.

VI Analyzer is a great tool, but it can only perform static analysis on VIs (as far as I know). It would be nice to run some tests on libraries, classes, or even project files as well, to enforce good practices.

I have a habit of placing an enum constant with its digital display visible in each frame of my enum-driven case structures.

 

It has become second nature already. Today I stopped and wondered why we can't simply include a digital representation as an "[X]" appended to the visible selector label.

 

So here I am asking for it.

 

Intaris_0-1728572984028.png

 

For projects, libraries, and their associated files, if the "Save Version" property is set to an earlier version of LabVIEW, the project and its files are not recompiled to the editor version.  There should be an option on the Mass Compile dialog that allows the "Save Version" property of projects, libraries, and classes to be overridden.

 

smmarlow_0-1745952830273.png

 

 

Idea: The In Range and Coerce Include upper limit option should be selected by default.

 

1 (annotated).png

Maybe it's just me, but when using the In Range and Coerce node I virtually always need to have both the Include lower limit and Include upper limit options selected. In approximately ten years of using the node I think I used a different configuration less than five times. It has entered my muscle memory that the first thing I do after dropping the In Range and Coerce node is to right-click it and select Include upper limit.


Anecdotal evidence supports this view. For example, I recently saw a large codebase that made heavy, legitimate use of In Range and Coerce. All of the nodes had been left in their default configuration (Include lower limit selected, Include upper limit unselected). However, after inspecting the code carefully, I concluded that the intention was for all of the nodes to perform an inclusive comparison on both sides. This was confirmed by a conversation with the original author, who had simply been unaware of the true behaviour of the node (he had assumed it performs an inclusive comparison on both ends) and was unaware of the right-click options!
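To make the behavioural difference concrete, here is a rough Python model of the node. I'm assuming the inclusion options affect only the In Range? boolean output, with the coerced value always clamped to the limits:

```python
def in_range_and_coerce(x, lo, hi, incl_lo=True, incl_hi=False):
    """Rough model of LabVIEW's In Range and Coerce with its default
    options: lower limit included, upper limit excluded."""
    above = x >= lo if incl_lo else x > lo
    below = x <= hi if incl_hi else x < hi
    in_range = above and below
    coerced = min(max(x, lo), hi)      # clamp regardless of inclusion
    return coerced, in_range

# Default configuration: 10 is NOT "in range" for limits 0 and 10
print(in_range_and_coerce(10, 0, 10))                 # (10, False)
# With Include upper limit selected (this idea's proposed default):
print(in_range_and_coerce(10, 0, 10, incl_hi=True))   # (10, True)
```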

Idea: The Assert Structural Type Match node should be growable (able to expand the number of inputs downwards and/or upwards). This would be similar to how many well-loved nodes can be "grown" downwards or upwards, such as Build Array, Concatenate Strings, Index Array, Merge Errors, etc.

 

1 (edited).png

3 Growable Nodes (edited).png

The following screenshot shows a real-world VIM I created where I would have benefited from this feature. I needed to ensure that three inputs were all of the same data type, which required using two Assert Structural Type Match nodes. If this idea were implemented, a single node with three inputs would suffice, resulting in fewer wires and fewer objects on the block diagram.

4.png

I am making ever more web services with LabVIEW, but I feel I have little control over the server. It can happen that a web service becomes unreachable: sometimes it is a crashed web service application, sometimes it is the NI web server, but there are no tools available to find out. I can imagine the following tools in an NI web server API:

- start/stop the server

- disable/enable the server

- enumerate active web services

- start/stop or enable/disable web services

- redirect the domain root endpoint to a web service

- set the favicon

- some proxy options to redirect a domain to a specific web service would be nice

The asterisk (*) that LabVIEW automatically places in the title bar when a file is modified should be placed first, not last, so that it can be seen when the window is too narrow to display the whole string. That is where other programs such as Notepad.exe put it, since text truncation is indicated by an ellipsis (...). Titles can be much longer when they include lvlib and lvclass contexts.

 

ast.jpg
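A quick Python sketch of the truncation arithmetic shows why the position matters; the title string and width are made up for illustration:

```python
def title_with_modified_mark(title, modified, width, asterisk_first=True):
    """Build a window title, truncating with an ellipsis when too long."""
    if modified:
        full = "*" + title if asterisk_first else title + "*"
    else:
        full = title
    if len(full) <= width:
        return full
    return full[:width - 3] + "..."

long_title = "MyClass.lvclass:Do Something.vi - MyLib.lvlib"  # hypothetical
# Asterisk last (current LabVIEW): truncation hides the modified mark
print(title_with_modified_mark(long_title, True, 20, asterisk_first=False))
# Asterisk first (this idea, and Notepad's behaviour): the mark survives
print(title_with_modified_mark(long_title, True, 20, asterisk_first=True))
```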

The LabVIEW Robotics Module consists of a variety of sensor and actuator drivers, motion algorithms (such as kinematics), world map creation and search algorithms, a world simulator, etc.

 

It hasn't been updated since 2019 and is 32-bit only.

 

I would like to see it made open source.

I believe some or all of the sensor drivers are already available on ni.com/idnet.

There are several other VI-based components and examples that are standalone and could be easily released independently.

I realize that the simulator might have some 3rd party constraints for releasing as open source, but I'd love to see it released if possible.

Let's Encrypt is a Certificate Authority (CA) that provides free TLS certificates for people and organisations to secure their websites and servers. Its certificates are valid for a maximum of three months, but client applications can renew a certificate automatically when it is due.

It would be very nice to have this functionality in the NI web server. Expiring certificates are cumbersome to manage.

The name of the "Get Date/Time in Seconds" function is misleading. The function should be renamed.

Combined v2.png

Details

  • The current name does not make it clear which Date/Time it is going to return. The words "now" or "current" are missing.
  • The "In Seconds" portion is misleading and unnecessary. The function correctly returns a timestamp data type. The timestamp represents a moment in time that is expressed not just in seconds, but also using lots of other time units such as days, hours, minutes, ms, us, ns, etc. I understand that when a timestamp is converted to a DBL, the value represents the fractional number of seconds since the beginning of the epoch, but this is an implementation detail. It should not be part of the name of the function.
  • “Get Date/Time in Seconds” would be a suitable name for a conversion function that takes in a formatted Date/Time string and outputs a DBL that represents the number of seconds since the beginning of the epoch.

Names of equivalent functions in other languages

  • .NET: System.DateTime.Now
  • C++: std::chrono::system_clock::now()
  • Python: datetime.now()

Notice that the equivalent function names contain the word "now" and omit "in seconds".
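In Python, for example, the "now" operation and the "seconds since the epoch" view are cleanly separated, which is exactly the distinction the proposed rename would capture:

```python
from datetime import datetime

# The "now" operation: returns a rich datetime object, not seconds
now = datetime.now()
print(now)

# The "in seconds" view is a separate, explicit conversion --
# an implementation detail, not part of the "now" function's name
seconds_since_epoch = now.timestamp()
print(seconds_since_epoch)
```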

 

Perhaps the best new name for the function would be “Get Date/Time Now”. This name would be very much in line with the .NET, C++ and Python equivalent function names. This name would earn the "let's not reinvent the wheel" prize.

 

Nevertheless, I would be happy with the following names too:

  • “Get Timestamp Now”
  • “Get Current Date/Time”
  • “Get Current Timestamp”

Notes

  • Changing a primitive name may break VIs that use VI scripting to find or create this node. This is a downside. I believe that in this case the long-term benefits would outweigh the relatively minor annoyance of hopefully relatively few developers having to modify those scripting VIs to use the new primitive name.

When typing a path in a terminal (Linux, Windows, or macOS), hitting the Tab key performs an auto-complete, which is extremely useful.

 

I wish the Path control and - let's dream - the path constant would behave the same.

 

It's probably only applicable to absolute path values.

 

I've made a QControl that does that; it's a bit basic, but it does help. I might post it on GitHub if there is interest.
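For anyone curious what such a control would do internally, here is a minimal Python sketch of Tab-style completion; the example path in the comment is hypothetical, and glob metacharacters in the partial path are not escaped in this sketch:

```python
import glob
import os

def complete_path(partial):
    """Return possible completions for a partial absolute path,
    like pressing Tab in a terminal."""
    matches = glob.glob(partial + "*")
    # Append a separator to directories, as shells do
    return sorted(m + os.sep if os.path.isdir(m) else m for m in matches)

# Hypothetical usage: complete_path("/usr/lo") might return
# ["/usr/local/"] on a typical Unix system.
```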

In a recent version of LabVIEW the height of Unbundle By Name and Bundle By Name elements, Local Variables and Global Variables was standardised to 16 pixels. This was a welcome improvement. (I'm fairly sure that the improvement was suggested by a LabVIEW Idea. I would have liked to link to that idea here but unfortunately I can't find it right now.)

 

The size of Boolean constants is currently 16 pixels (width) x 14 pixels (height). This should be standardised to 16 x 16 pixels. A vertical stack of Boolean constants would then align better with a stack of local variables, global variables, or UBN/BBN elements. They would also align better with the default LabVIEW grid size (16 x 16).

1 (annotated).png

Thanks!