LabVIEW Idea Exchange


Problem: When an error occurs inside some DAQmx VIs, the error source (the string component of the error cluster) does not contain the call chain. This makes it impossible to tell, from the error message alone, where in the code the error occurred.

 

Real-world example: The other day I encountered DAQmx error -200477 on a cRIO-9045 that uses DAQmx to acquire data from several different C-Series modules. The real-time application running on the cRIO contained an error handling module, which correctly logged the error to file. I saw the following when using PuTTY and the Linux 'cat' command to display the contents of the error log file.

 

[Image: 1.png — contents of the error log file]

Notice that the error message does not contain any information about where in the codebase this error actually occurred. This meant that I had to spend a few minutes inspecting the moderately large codebase before identifying the likely location of the error. Subsequent tests confirmed that this was indeed where the error originated, and the error was soon understood and fixed.

 

Root cause: The root cause of the issue (the missing call chain information) is the DAQmx Fill In Error Info.vi. In the real-world example above, the error occurred inside DAQmx Timing (Sample Clock).vi.


[Image: 4.png]

The block diagram of this VI is shown below:

[Image: 2.png — block diagram of DAQmx Timing (Sample Clock).vi]

Any LabVIEW errors generated inside that VI come from the DAQmx Fill In Error Info.vi. This VI is, of course, essentially a simple translator between the return type (I32) output of the Call Library Function Node and a native LabVIEW error. It has an unwired input named depth whose default value is 1, which means that only the last link of the call chain is inserted into the error message.
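
To illustrate the effect of that depth input, here is a small C sketch (purely illustrative, not NI's implementation; all names are hypothetical) that assembles an error-source string from a call chain, keeping only the innermost depth entries:

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical illustration: build the "source" part of an error message
     * from a call chain, keeping only the innermost `depth` entries.          */
    static void fill_error_source(char *buf, size_t len,
                                  const char **call_chain, int chain_len,
                                  int depth)
    {
        buf[0] = '\0';
        int start = (depth < chain_len) ? chain_len - depth : 0;
        for (int i = chain_len - 1; i >= start; i--) {   /* innermost first */
            strncat(buf, call_chain[i], len - strlen(buf) - 1);
            if (i > start)
                strncat(buf, "<-", len - strlen(buf) - 1);
        }
    }

    int main(void)
    {
        /* Outermost VI first, innermost last (as a call chain would be). */
        const char *chain[] = { "Main.vi", "Acquire Data.vi",
                                "DAQmx Timing (Sample Clock).vi" };
        char src[256];

        fill_error_source(src, sizeof src, chain, 3, 1);
        printf("depth = 1: %s\n", src);   /* only the innermost VI */

        fill_error_source(src, sizeof src, chain, 3, 3);
        printf("depth = 3: %s\n", src);   /* full call chain, as this idea asks */
        return 0;
    }

With depth = 1, only "DAQmx Timing (Sample Clock).vi" ends up in the message; with the full depth the whole chain is reported, which is what this idea asks for.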

 

[Image: 3.png]

Solution: Always insert the complete call chain into all DAQmx error messages.

The LabVIEW Robotics Module consists of a variety of sensor and actuator drivers, motion algorithms (such as kinematics), world map creation and search algorithms, a world simulator, etc.

 

It hasn't been updated since 2019 and is 32-bit only.

 

I would like to see it made open source.

I believe some or all of the sensor drivers are already available on ni.com/idnet.

There are several other VI-based components and examples that are standalone and could be easily released independently.

I realize that the simulator might have some 3rd-party constraints on releasing it as open source, but I'd love to see it released if possible.

In the drivers (DAQmx, NI-Scope/FGEN, ...):

If a value (usually a DBL) enters a configuration VI, it could be coerced by the driver.

That VI should directly output the actual value.

 

Most often seen in the forum: a user configures an invalid DAQmx Timing sample clock and wonders why the waveform (data) is not what they expected.

Yes, RTFM helps: you can read the properties back after configuration, but it would be easy to add an output to the config VI carrying the actual (possibly coerced) value, making the user aware of that 'feature' (instead of popping an error 😉 ).

 

This should not only cover the sample rate; other values such as FGEN standard function parameters (frequency, phase), actual range, etc., should be covered as well.
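
As a sketch of why returning the actual value matters: many devices derive the sample clock by dividing a fixed onboard timebase by an integer, so the requested rate gets silently coerced. A minimal C illustration (the 100 MHz timebase and the rounding rule are assumptions for the example, not a statement about any particular module):

    #include <math.h>
    #include <stdio.h>

    /* Illustrative only: a device that derives its sample clock by dividing a
     * fixed timebase by an integer cannot hit every requested rate, so the
     * driver coerces the value. Timebase and rounding rule are assumptions.   */
    int main(void)
    {
        const double timebase_hz  = 100e6;    /* assumed onboard timebase */
        const double requested_hz = 1.234e6;  /* what the user asked for  */

        double divisor   = floor(timebase_hz / requested_hz); /* integer divisor */
        double actual_hz = timebase_hz / divisor;             /* coerced rate    */

        printf("requested: %.3f Hz, actual (coerced): %.3f Hz\n",
               requested_hz, actual_hz);
        /* The coerced value is what this idea asks the config VI to return. */
        return 0;
    }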

So one can set many of the standard properties for VISA connections. However, the serial port latency setting is not one of these. It can easily be set by a standard POSIX ioctl call on both Linux and macOS (who the heck knows how Windows does it). Many character-based devices can be manipulated through these ioctl calls.

 

To be specific, the FTDI USB-serial driver chips used in MANY devices as a USB interface or just as a serial port have a slow default latency that is fine for preserving CPU cycles but not great for modern high-speed custom communication.

 

1. Just add a property that sets the serial port latency using an ioctl IOSSDATALAT property configuration (see the sketch after this list).

 

2. More generally, allow an interface that calls ioctl with a supplied property, so the LV programmer can pass the correct property configuration.

 

3. Less satisfying, but possibly easier to implement, is to have a VISA property expose the file descriptor number. This simplifies the programming, where one currently has to get the "Name" property, search the process information table for that "Name" (really a path), and then extract the file descriptor. Only then can one set the latency for that device.
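
For reference, a minimal C sketch of option 1 on macOS, using the IOSSDATALAT ioctl from IOKit (latency in microseconds); the device path is only an example. This is roughly what a Call Library Function Node wrapper, or a native VISA property, would have to do under the hood:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <IOKit/serial/ioss.h>   /* macOS-only: defines IOSSDATALAT */

    int main(void)
    {
        /* Example device path; on a real system use the port you opened
         * through VISA (or its file descriptor, as option 3 suggests).  */
        int fd = open("/dev/cu.usbserial-A1234", O_RDWR | O_NOCTTY | O_NONBLOCK);
        if (fd < 0) { perror("open"); return 1; }

        unsigned long latency_us = 1000;          /* 1 ms read latency */
        if (ioctl(fd, IOSSDATALAT, &latency_us) < 0)
            perror("ioctl(IOSSDATALAT)");

        close(fd);
        return 0;
    }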

 

Hello old friends,  it has been a while!

 

I would like to see MQTT client functionality added to LabVIEW.

This idea was submitted by a user who requested I post this for them.

 

As of now, the VISA resource controls only allow you to select resource names without their full Windows description. You can select individual COM ports (COM3, COM5, etc.), or pick from a list of alias names if you've defined aliases for your COM ports. But it would be nice to give the user a configurable option that provides the additional descriptive information you can find in Windows Device Manager. This would allow novice users to select the desired COM port based on the actual physical layer needed for the application. Again, I'm pretty sure you can work around this by reviewing the different COM ports in Measurement & Automation Explorer, or even creating your own aliases to surface the additional information. But if I'm creating an executable to be used on different systems by novice users, then I may not want them to have to go into MAX to properly identify their desired port.
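
For what it's worth, the descriptive names shown in Device Manager are available through the Windows SetupAPI; a hedged C sketch of how they could be enumerated (not a suggestion of how NI would implement the feature) looks like this:

    /* Windows-only sketch: list COM ports with their Device Manager
     * friendly names using SetupAPI. Link with setupapi.lib.         */
    #include <windows.h>
    #include <initguid.h>
    #include <devguid.h>   /* GUID_DEVCLASS_PORTS */
    #include <setupapi.h>
    #include <stdio.h>

    int main(void)
    {
        HDEVINFO devs = SetupDiGetClassDevs(&GUID_DEVCLASS_PORTS, NULL, NULL,
                                            DIGCF_PRESENT);
        if (devs == INVALID_HANDLE_VALUE) return 1;

        SP_DEVINFO_DATA info = { .cbSize = sizeof(SP_DEVINFO_DATA) };
        for (DWORD i = 0; SetupDiEnumDeviceInfo(devs, i, &info); i++) {
            char name[256];
            if (SetupDiGetDeviceRegistryPropertyA(devs, &info, SPDRP_FRIENDLYNAME,
                                                  NULL, (BYTE *)name,
                                                  sizeof(name), NULL))
                printf("%s\n", name);   /* e.g. "USB Serial Port (COM3)" */
        }
        SetupDiDestroyDeviceInfoList(devs);
        return 0;
    }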

 

So, instead of asking the user to select a COM port from a list of items looking like this...

travisferguson_0-1641925866067.png

 

Maybe give an option in a property page for the VISA Resource Control that might look like this (this is a mock-up)...

 

travisferguson_1-1641925926951.png

so that an operator can pick from a more descriptive list like what you see in the Windows Device Manager...

travisferguson_2-1641925994204.png

 

Thank you,

 

Although the 2020 version of the Block Diagram Cleanup Tool (BDCT) does do a better job than its predecessors, I would still say that the BDCT's results are less than optimal. Most LabVIEW Idea Exchange posts concerning the BDCT talk about label positioning and alignment. Here I would like to focus on the issue of horizontal expansion of the block diagram and take a holistic view of which LabVIEW features contribute to that effect.

 

Like most programmers, when developing a VI block diagram, I try to keep the diagram no more than one screen wide. I have learned a few tricks over the years that help manage horizontal expansion, such as resizing an object's label so that the words appear on multiple lines before running the BDCT. This allows for some compression horizontally and some growth vertically to compensate. Horizontal expansion of the block diagram is expected to some extent because data flow happens left to right, of course; but it would seem logical to incorporate that knowledge into the BDCT algorithm and find ways to adjust the spacing so that the rearrangement creates a bit more vertical expansion and less horizontal expansion, while still satisfying the usual criteria such as no backwards-running wires.

 

Because data flow is horizontal, to help the BDCT work better, NI may want to think about what visual features, other than left-to-right data flow, contribute to an unnecessarily wide block diagram. I seldom have an issue with a block diagram becoming too tall. Admittedly, poor programming technique can result in too-wide block diagrams, but let's look at a few other things. What elements of LabVIEW's block diagram unnecessarily consume width? Here are a few that I could think of:

 

  1. Long names in bundle/unbundle cluster objects, property/invoke nodes, enum/ring constants, local variables, formula nodes, etc. – Most LabVIEW-supported languages are left-to-right horizontal. Long names, especially when using nested clusters, take up horizontal space. I like being able to use long names; they help the code to be more self-documenting. If those types of objects supported word wrap, that would help conserve width.
  2. Expression nodes and multi-digit constants – no word-wrap available here either.
  3. Named terminals of timing and event structures – They are what they are, and you cannot remove all of the ones you don’t use, and so they take up space horizontally.
  4. Some native functions – There are some LabVIEW native function icons that are wider than they are tall. Some examples: In-Range/Coerce and Initialize Array.
  5. Shift registers – It has a subtle effect, but shift registers are wider than they are tall. Do they have to be that way?

To fully tackle this issue, I believe you have to look at things holistically and not just as potential improvements that could be made in the BDCT.  Recognize that data flow is left-to-right horizontal and you will have long text names; you can’t really do anything about that. However, there are other things that could be done in LabVIEW feature-wise to help compensate for width-wise block diagram growth.

How is NI-MAX still so bad after all these years? It goes completely unresponsive and crashes at the slightest provocation. This feels like it should be NI's bread-and-butter.

 

Every LabVIEW team ends up recreating so much of the functionality that NI-MAX is supposed to offer out of the box. Maybe with the discontinuation of NXG, some resources should be allocated to making this product usable.

 

TurboPhil_1-1629324790878.png

 

 

KB articles like this and this probably shouldn't need to exist.

Currently the only way to set or modify TCP socket options is by directly calling some system library, as done for example in this post.

 

This not only makes the code difficult to understand ("what does that library call do again?") but also poses problems when you want to use your code on different operating systems: currently the only way to do this is using conditional disable structures, and then LabVIEW still tries to load the code intended for a different operating system...

 

LabVIEW should have a standard way to set socket options from within LabVIEW code, at least for the most important options (Nagle's algorithm leaps to mind...). This could either be done as additional inputs to the "TCP Open Connection" VI, or (much better) using property nodes for TCP connections.
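
For context, here is the kind of call such a property node would wrap: a minimal POSIX C sketch that disables Nagle's algorithm on an existing socket descriptor and reads the setting back (on Windows the calls are essentially the same, only the headers and the buffer cast differ):

    #include <netinet/in.h>
    #include <netinet/tcp.h>   /* TCP_NODELAY */
    #include <sys/socket.h>
    #include <stdio.h>

    /* What a "TCP NoDelay" property node could do internally: write the
     * option with setsockopt and read it back with getsockopt.           */
    int set_and_check_nodelay(int sock_fd, int enable)
    {
        if (setsockopt(sock_fd, IPPROTO_TCP, TCP_NODELAY,
                       &enable, sizeof(enable)) < 0) {
            perror("setsockopt(TCP_NODELAY)");
            return -1;
        }

        int current = 0;
        socklen_t len = sizeof(current);
        if (getsockopt(sock_fd, IPPROTO_TCP, TCP_NODELAY, &current, &len) < 0) {
            perror("getsockopt(TCP_NODELAY)");
            return -1;
        }
        return current;   /* the value a property-node read would report */
    }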

 

It would be really nice if we could build PPL for multiple targets. The LV help states:

 

"If a VI calls a packed project library compiled for one target and you open the VI on another target that has a different operating system, the packed project library fails to load."

 

We should be able to choose which target's compiled code the builder includes in the PPL. This would make the use of PPLs in a multi-target system so much better.

Problem

Many times, the bulk of LabVIEW development happens on computers that will never interface with hardware. A dozen engineers may be collaborating on code that will ultimately run on a dedicated machine somewhere that is connected to the hardware. Yet, as things currently are, I have to install more than I need on my development machine to get access to the API VIs. If I am working on my laptop on an application with DAQ, RF, spectrum analyzer, etc. components, I have to choose between downloading and installing all of that, or dealing with missing VIs and broken arrows. This seems needless, since my particular machine will never actually interface with the hardware.

 

Idea

I would like to have the option to install only the LabVIEW VIs and ignore the driver itself. In many, if not most cases, the LabVIEW API could be independent of driver version. It could install very quickly, since it would just be a set of essentially no-op VIs. I don't care that the VIs would do nothing. They would just be placeholders for my development purposes. This would allow me to have full API access to develop my code without having to carry around large driver installations that I will never actually use.

If we could have instrument simulator VIs, we could test our code prior to actually acquiring the hardware.

 

The idea proposes that device driver packages from IDNet include a vendor instrument simulator VI that can be run prior to code testing. The VI could be configurable (VISA-based on virtual ports created during runtime, file paths to read from to simulate measurements and responses to messages, etc.) and started asynchronously (call & forget) at the start of the program, terminating when the caller terminates.

 

Alternatively, it could be provided as a template under "New > Simulated Instrument", which the user can modify according to the instrument they are going to use.

 

...and it should cover both NI and 3rd-party instruments.
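
As a sketch of what the simplest form of such a simulator could look like, here is a hypothetical TCP-based instrument stub in C that answers *IDN? and returns a canned reading for MEAS? (the port, command set, and responses are all made up for illustration):

    /* Minimal, hypothetical instrument simulator: listens on TCP port 5025
     * and answers two SCPI-style queries. Purely illustrative.            */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int srv = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = { .sin_family = AF_INET,
                                    .sin_port = htons(5025),
                                    .sin_addr.s_addr = htonl(INADDR_ANY) };
        int one = 1;
        setsockopt(srv, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
        bind(srv, (struct sockaddr *)&addr, sizeof(addr));
        listen(srv, 1);

        for (;;) {
            int cli = accept(srv, NULL, NULL);
            if (cli < 0) continue;
            char cmd[128];
            ssize_t n;
            while ((n = recv(cli, cmd, sizeof(cmd) - 1, 0)) > 0) {
                cmd[n] = '\0';
                if (strncmp(cmd, "*IDN?", 5) == 0)
                    send(cli, "ACME,SIM-1000,0,1.0\n", 20, 0);
                else if (strncmp(cmd, "MEAS?", 5) == 0)
                    send(cli, "1.234E+0\n", 9, 0);   /* canned measurement */
            }
            close(cli);
        }
    }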

NXG needs an Idea Exchange.  The feedback button is a lame excuse for a replacement.  Why?

 

  • I can't tell if my idea has been suggested before.  (And maybe someone else's suggestion is BETTER and I want to sign onto it, instead.)
  • NI has to slog through bunches of similar feedback submissions to determine whether or not they are the same thing.
  • Many ideas start out as unfocused concepts that are honed razor sharp by the community.
  • This is an open loop feedback system.

Let's make an Idea Exchange for NXG!

The current way to bind many shared variables to an OPC client is by browsing a tree and selecting items.

This is very time consuming when you have hundreds or thousands of tags to bind, especially if the tags are not all in the same path.

A better way would be to bind the variables by simple text.

Example: We will insert the following text

A1M01Z1:value

A2M01Z1:value

A3M01Z1:value

 

and LabVIEW will automatically create 3 variables bound to those addresses (with the same names, preferably). Many OPC servers support this type of address.

Note that the true path of A1M01Z1 could be something very long, like:

My Computer\OPC ABB.lvlib\_OPC1\[Control Structure]\Root\NETWORK 1\Nodecm3\Extended Process Objects\MB300 AI\A1M01Z1\VALUE

 

This way you can add thousands of items in minutes. It is quite easy for the R&D team to implement and will help many professional engineers.

Most probably, this idea will not attract many kudos, but I think R&D should consider implementing it.

 

(this was discussed with NI technical support, Reference #3279019)

 

Thank you all for reading.

Hello forum

 

Wouldn't it be nice if we could add Windows 10 IoT Enterprise PCs as embedded targets, where we could build a VI executable and set it as a startup program? Once deployed, the target would be automatically configured to launch the startup programs with Embedded Enabling Features (EEF), Enhanced Write Filter (EWF), Hibernate Once, Resume Many (HORM), and File Based Write Filter (FBWF) on a different volume, as presented in the NI Week TS2361 & TS8562 slides.

 

Thanks

With the advent of the IoT and the growing need to synchronize automation applications and other TSN (Time Sensitive Networking) use cases, UTC (Coordinated Universal Time) is becoming more and more problematic. Currently, there are 37 seconds not accounted for in the timestamp, which is stored in UTC. The I64 portion of the timestamp datatype, which holds the number of seconds elapsed since 00:00:00 January 1, 1904 UTC, simply ignores leap seconds. This is consistent with most modern programming languages and is not a flaw of LabVIEW per se, but it isn't quite good enough for where the future is heading. In fact, there is a joint IERS/IEEE working group on TSN.

 

Enter TAI, or International Atomic Time: TAI has the advantage of being contiguous and is based on the SI second, making it ideal for IA applications. Unfortunately, a LabVIEW timestamp cannot be formatted in TAI. Entering a time of 23:59:60 31 Dec 2016, a real second that did occur, is not allowed. Currently, IERS Bulletin C is published to give the current UTC-TAI offset, but it requires extensive code to implement the lookup, and such a time just won't display properly in a %<>T or %^<>T (local absolute-time and UTC absolute-time containers). We need a %#<>T TAI time container format specifier. (Or soon will!)
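
As a sketch of the bookkeeping involved: the published TAI-UTC offset is 37 s for dates from 1 January 2017 onward, so a conversion (assuming the offset is looked up from IERS Bulletin C for the date in question) reduces to:

    #include <stdio.h>
    #include <time.h>

    /* Convert a UTC epoch time to TAI by adding the published TAI-UTC offset.
     * 37 s is valid for dates from 2017-01-01 onward; earlier dates need a
     * leap-second table (IERS Bulletin C).                                   */
    int main(void)
    {
        const int tai_minus_utc = 37;       /* seconds, per Bulletin C      */
        time_t utc_now = time(NULL);        /* UTC-based, ignores leaps     */
        time_t tai_now = utc_now + tai_minus_utc;

        printf("UTC epoch seconds: %lld\n", (long long)utc_now);
        printf("TAI epoch seconds: %lld\n", (long long)tai_now);
        return 0;
    }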

A common problem for users is attaching additional input devices (e.g. touch panels, keyboards, ...) to LV-based systems (including embedded controllers), yet LV seems to lack a generic API/binding for that.

 

In the Linux world (and BSD, too) we have standard APIs for this. On the kernel side, there is evdev, which has drivers for practically any kind of input device you could buy (and is easy to extend for new, even self-made devices). In userland, there is a small wrapper library - libevdev - which encapsulates the platform/kernel-specific details and is easily portable to other OSes (e.g., there is already a BSD port; Win32 and macOS ports shouldn't be a big deal either).

 

This library could be called directly from a VI. The VI could also exist in several specific (and potentially highly optimized) versions, so that LabVIEW users have a generic API for handling input devices (such as standard USB HID class devices) in LV.
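
For reference, this is roughly what the underlying libevdev call sequence looks like in C (the event-device path is an example; a LabVIEW binding would wrap these calls behind a VI or a Call Library Function Node):

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <libevdev/libevdev.h>   /* link with -levdev */

    int main(void)
    {
        /* Example device node; real code would enumerate /dev/input/event* */
        int fd = open("/dev/input/event0", O_RDONLY | O_NONBLOCK);
        if (fd < 0) { perror("open"); return 1; }

        struct libevdev *dev = NULL;
        if (libevdev_new_from_fd(fd, &dev) < 0) {
            fprintf(stderr, "failed to init libevdev\n");
            return 1;
        }
        printf("Input device: %s\n", libevdev_get_name(dev));

        struct input_event ev;
        for (;;) {
            int rc = libevdev_next_event(dev, LIBEVDEV_READ_FLAG_NORMAL, &ev);
            if (rc == LIBEVDEV_READ_STATUS_SUCCESS)
                printf("type %d code %d value %d\n", ev.type, ev.code, ev.value);
            else if (rc == -EAGAIN)
                usleep(1000);   /* no event pending yet */
            else
                break;          /* error or SYNC handling omitted in this sketch */
        }
        libevdev_free(dev);
        close(fd);
        return 0;
    }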

The idea is quite simple - add a template for switch/MUX/matrix devices to the instrument driver project wizard.

 

Custom switching boards are commonly used because, for some simple purposes, it is cheaper to make a custom design with simple functionality (a microprocessor controlling relays) that communicates over a serial line. An instrument driver is then the ideal way to create a simple API in LabVIEW for controlling it; but unfortunately, there is no template for such devices in the New Instrument Driver project wizard.

If you are using TCP to communicate with a different code environment, you may want to set some of the socket options. For example, for responsive control, you will want to disable Nagle's algorithm. There is currently no obvious or easy way to do this. TCP Get Raw Net Object.vi in <vi.lib>\utility\tcp.llb will provide the raw socket ID, but you then need to call setsockopt() on your particular platform using the Call Library Function Node. You can do this with the code provided here. A much better way would be a property node on the TCP reference that allowed you to set and query the options directly.
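
As a sketch of the kind of helper one currently has to build: a tiny C function, compiled into a shared library and called through the Call Library Function Node with the raw socket ID from TCP Get Raw Net Object.vi (the function name and int32 parameter convention are just illustrative choices):

    /* tcp_opts.c - build as a shared library and call from LabVIEW via the
     * Call Library Function Node, passing the raw socket ID obtained from
     * TCP Get Raw Net Object.vi. Name and signature are illustrative.      */
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>
    #include <stdint.h>

    /* Returns 0 on success, -1 on failure (LabVIEW can map this to an error). */
    int32_t lv_set_tcp_nodelay(int32_t raw_socket, int32_t enable)
    {
        int flag = enable ? 1 : 0;
        return setsockopt((int)raw_socket, IPPROTO_TCP, TCP_NODELAY,
                          &flag, sizeof(flag)) == 0 ? 0 : -1;
    }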

The VISA test panel is a very valuable tool for troubleshooting instrument connectivity issues.

 

This used to be included with the VISA runtime, or at least with any installer that also included the VISA runtime.

 

Now I have to separately download and install the FULL VISA just to get this valuable tool. 

 

That makes installing a LabVIEW executable a multistep process as now I have to run two different installers. 

 

NI-MAX and the VISA test panel should ALWAYS BE included in any installer that includes the VISA runtime.