LabVIEW Idea Exchange


Well, I think the title says it all...

 

There are many threads on the NI website about this topic, but none of them really shows how to deal with this "problem" correctly!

 

It is a very common task to synchronize an AI signal (let's say 0-10 V from a torque sensor) with a counter signal (e.g. the angular position of the drive that causes the torque). How do I correctly display the torque over the drive angle in an X-Y graph?
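Not an official NI example, but here is a sketch of the pattern such a reference example would demonstrate, written in Python with the nidaqmx package since G code can't be pasted as text. The device name, rate, and encoder settings are assumptions; in LabVIEW the same idea is two DAQmx tasks sharing one sample clock:

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType, AngleUnits, EncoderType

SAMPLES = 1000
RATE = 1000.0  # Hz, assumed

with nidaqmx.Task() as ai_task, nidaqmx.Task() as ci_task:
    # AI task: 0-10 V torque signal
    ai_task.ai_channels.add_ai_voltage_chan("Dev1/ai0", min_val=0.0, max_val=10.0)
    ai_task.timing.cfg_samp_clk_timing(
        RATE, sample_mode=AcquisitionType.FINITE, samps_per_chan=SAMPLES)

    # CI task: angular encoder, clocked by the *same* AI sample clock so both
    # signals are sampled at identical instants
    ci_task.ci_channels.add_ci_ang_encoder_chan(
        "Dev1/ctr0", decoding_type=EncoderType.X_4, units=AngleUnits.DEGREES)
    ci_task.timing.cfg_samp_clk_timing(
        RATE, source="/Dev1/ai/SampleClock",
        sample_mode=AcquisitionType.FINITE, samps_per_chan=SAMPLES)

    ci_task.start()  # counter arms first and waits for AI clock edges
    ai_task.start()

    angle = ci_task.read(number_of_samples_per_channel=SAMPLES)
    torque = ai_task.read(number_of_samples_per_channel=SAMPLES)
    # plot torque over angle in an X-Y graph: x = angle, y = torque
```

The key point the example would teach: route one task's sample clock to the other so the two channels stay sample-for-sample aligned, then pair the arrays in an X-Y graph.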

 

It would be great if NI offered a reference example in the LV Example Finder showing how to solve such a task elegantly and efficiently.

 

I'm not sure if this is the appropriate place for this suggestion, but anyway... I would love to see this in the LV Example Finder!

 

Regards

A:T:R

Currently the "Bytes at port" property only applies to serial ports. If you are trying to write a generic driver or communications package that supports multiple connection types you cannot use the "bytes at port" since it will not work for anything but a serial connection. There is a related idea here which proposes the "Bytes at port" can be added to the event structure. It also suggests that this be expanded to other connection types. My idea suggests that at a minimum the VISA property node can be expanded.

The Problem


Doing a long finite acquisition in DAQmx in the manner shown below results in all the data from the acquisition residing in a 2D array of waveforms that the user must rearrange before working with it.  Since a 2D array of waveforms is not really useful with any processing functions in LabVIEW, why not come up with an automatic way to get the datatype you want (a 1D array of waveforms with the new samples appended to the Y array of the appropriate channel)?

regular tunnel.png


2Dwaveform.png

 

The Solution


Give the user the ability to create a "waveform autoindexing tunnel" via a context menu option.  This tunnel would automatically output the appended waveforms, one per channel.  This could be done behind the scenes in the most memory-efficient way possible, saving users the headache of modifying the 2D array they currently get.

waveformTunnel.png

 

1Dwaveform.png
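A rough text-based sketch of what the proposed tunnel would do behind the scenes, using Python's nidaqmx and NumPy as a stand-in (device name, rate, and counts are assumptions): each loop iteration's chunk is appended along the sample axis, leaving one long record per channel instead of a 2D array of chunks.

```python
import numpy as np
import nidaqmx
from nidaqmx.constants import AcquisitionType

CHUNK = 1000       # samples read per loop iteration (assumed)
ITERATIONS = 100

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0:1")  # two channels, assumed
    task.timing.cfg_samp_clk_timing(
        1000.0, sample_mode=AcquisitionType.FINITE,
        samps_per_chan=CHUNK * ITERATIONS)
    task.start()

    chunks = []
    for _ in range(ITERATIONS):
        # each read returns one list per channel -> shape (channels, CHUNK)
        chunks.append(np.asarray(task.read(number_of_samples_per_channel=CHUNK)))

# concatenate along the sample axis: one long record per channel, the
# equivalent of the proposed "waveform autoindexing tunnel" output
data = np.concatenate(chunks, axis=1)   # shape: (channels, CHUNK * ITERATIONS)
```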

 

 

Zoomed in Images


regularZoomed.png

 

 

waveformZoomed.png

 


Hello,

 

I work as a LabVIEW integrator, and for one of my customers I develop applications for maintaining their installations all over the world, communicating over RS232 or Bluetooth for example, deployed on targets such as PDAs (PocketPC 2003 & Windows Mobile 6) or Touch Panels (Windows CE 4.2 to 5.0) using the LabVIEW 8.6 Mobile Module.

This LabVIEW add-on module works quite well for deploying the same application to these different targets, but I encountered some problems and have some suggestions for the Mobile Module:

 

1/ MMI (Man Machine Interface)

      -  for graphic indicators (such as Gauge or Slide), graphical objects like the ramp or multi-slider are deleted by LabVIEW when you build an EXE => I think graphical objects with all their graphical properties could be added to the Mobile Module (and work on all these targets). See examples in GraphicalObjects.PNG

      - as I said in my introduction, my application is deployed all over the world, and I usually use dynamic "Caption" modification for multilingual management of all my controls & indicators. But in the Mobile Module we can't do this (though we can modify Boolean text dynamically), so I think it should be made possible.

 

2/ VISA (& Bluetooth) Management

      -  I think there is a bug in the VISA installation. With the installer you can choose the installation directory (the default directory or a specific directory in non-volatile memory), but if you don't install VISA support in the default directory it doesn't work (with PocketPC 2003, for example). I think that could be resolved.

      -  As I said in the introduction, my application can communicate through different protocols (RS232, Bluetooth), and with the LV 8.6 Mobile Module we can now use VISA: great!!! But on Windows Mobile 6, VISA does not support Bluetooth (there seems to be an incompatibility with virtual aliases in the registry), while on PocketPC 2003 it works very well. I think that could be resolved.

Problem: When an error occurs inside some DAQmx VIs, the error source (the string component of the error cluster) does not contain the call chain. This means that it is impossible to know where the error occurred based on the error message alone.

 

Real-world example: The other day I encountered DAQmx error -200477 on a cRIO-9045 that uses DAQmx to acquire data from several different C Series modules. The real-time application running on the cRIO contained an error-handling module, which correctly logged the error to file. I saw the following when using PuTTY and the Linux 'cat' command to display the contents of the error log file.

 

1.png

Notice that the error message does not contain any information as to where in the codebase the error actually occurred. This meant that I had to spend a few minutes inspecting the moderately large codebase before identifying the likely location of the error. Running subsequent tests, I was able to confirm that this was indeed the location. The error was soon understood and fixed.

 

Root cause: The root cause of the issue (lack of call-chain information) is the DAQmx Fill In Error Info.vi. In the real-world example above, the error occurred inside the DAQmx Timing (Sample Clock).vi.


4.png

The block diagram of this VI is seen below:

2.png

Any LabVIEW errors generated inside that VI come from the DAQmx Fill In Error Info.vi. This VI is, of course, essentially a simple translator between the return type (I32) output of the Call Library Function Node and a native LabVIEW error. That VI has an unwired input named depth whose default value is 1, which means that only the last link of the call chain is inserted into the error message.
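The effect of that depth input is easy to mimic in any language. Here is a hedged Python analogy (the function names are made up for illustration) showing why depth=1 hides the error location while keeping the whole chain pinpoints it:

```python
import traceback

def fill_in_error_info(exc, depth=None):
    # depth=1 mimics the current DAQmx behavior: keep only the innermost
    # link of the call chain. depth=None keeps the complete chain,
    # which is what this idea proposes.
    frames = traceback.extract_tb(exc.__traceback__)
    if depth is not None:
        frames = frames[-depth:]
    return " <- ".join(f"{f.name} ({f.filename}:{f.lineno})"
                       for f in reversed(frames))

def configure_sample_clock():   # stands in for DAQmx Timing (Sample Clock).vi
    raise RuntimeError("error -200477")

def acquisition_module():       # stands in for the calling code in the codebase
    configure_sample_clock()

try:
    acquisition_module()
except RuntimeError as exc:
    print(fill_in_error_info(exc, depth=1))  # innermost frame only: where is this?
    print(fill_in_error_info(exc))           # full chain: location is obvious
```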

 

3.png

Solution: Always insert the complete call chain inside all DAQmx error messages.

If we had instrument simulator VIs, we could test our code prior to actual hardware acquisition.

 

The idea proposes that device driver packages from IDNet include a vendor instrument simulator VI that can be run prior to code testing. The VI could be configurable (VISA based on virtual ports created at runtime, file paths to read from to simulate measurements and responses to messages, etc.) and started asynchronously (call & forget) at the start of the program, terminating when the caller terminates.

 

Alternatively, it could be offered as a template under "New > Simulated Instrument", which users can modify according to the instrument they are going to use.

 

...and it should cover both NI and 3rd-party instruments.
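For what it's worth, the Python ecosystem has an analog in pyvisa-sim, which swaps the VISA backend for simulated instruments defined in a YAML file; something similar shipped with IDNet drivers is what this idea asks for. A minimal sketch (the resource name and termination come from pyvisa-sim's bundled default config, so treat them as assumptions):

```python
import pyvisa

# "@sim" selects the pyvisa-sim backend (pip install pyvisa-sim) instead of
# real hardware; its bundled default config defines a few fake instruments
rm = pyvisa.ResourceManager("@sim")
print(rm.list_resources())

inst = rm.open_resource("ASRL1::INSTR", read_termination="\n")
print(inst.query("?IDN"))  # the default simulated device answers its ID query
```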

The current way to bind many shared variables to an OPC client is by browsing a tree and selecting them.

This is very time-consuming when you have hundreds or thousands of tags to bind, especially if the tags are not all under the same path.

A better way would be to bind the variables by simple text.

Example: we would insert the following text:

A1M01Z1:value

A2M01Z1:value

A3M01Z1:value

 

and LabVIEW would automatically create three variables bound to those addresses (with the same names, preferably). Many OPC servers support this type of address.

Note that the true path of A1M01Z1 could be something very long, like:

My Computer\OPC ABB.lvlib\_OPC1\[Control Structure]\Root\NETWORK 1\Nodecm3\Extended Process Objects\MB300 AI\A1M01Z1\VALUE

 

This way you could add thousands of items in minutes. It would be quite easy for the R&D team to implement and would help many professional engineers.

Most probably this idea will not attract many kudos, but I think R&D should consider implementing it.

 

(this was discussed with NI technical support, Reference #3279019)

 

Thank you all for reading.

It'd be great if NI Spy had an "always on top" selection under the View menu.

 

Thanks! 🙂

Hello LabVIEW Experts,

 

I thought of this recently when I was setting up a test computer.  I started up my LabVIEW project file and opened my host VI, only to find that I had some broken wires... DOH!  After looking over MAX and googling a few error codes, I found that the test machine didn't have RT installed.  That's when it hit me: wouldn't it be great if the VI had recognized that I was trying to connect to a cRIO chassis and given me a little pop-up telling me what software I was missing and where to get it?  Sure would have made my day easier.

 

 

There are Express VIs for analog/digital/counter channels. How about Express VIs for configuring and acquiring data from an RS232 port?
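As a sketch of what such an Express VI would wrap, here is the configure-and-acquire step in Python with pyserial (the port name, framing, and query string are assumptions):

```python
import serial  # pyserial

# configure: the part a "Serial Config" Express VI would expose as a dialog
with serial.Serial("COM1", baudrate=9600, bytesize=8,
                   parity="N", stopbits=1, timeout=1.0) as port:
    # acquire: write a query, read one framed response
    port.write(b"MEAS?\n")        # hypothetical instrument query
    reading = port.readline()     # one acquisition, terminated by '\n'
    print(reading)
```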

Traditional IVI drivers haven't worked. The industry has been waiting for this for too long.

 

Best regards, Pavan

I like using Linux whenever I can, particularly when running large software like LabVIEW, since it tends to crawl on my XP systems. I was happy to realize that LabVIEW works on Linux, but soon after I was disappointed by how little of it is usable when interfacing with hardware. I need the Real-Time Module to interface with my real-time CompactRIO, and I also need Linux support for the FPGA Module, as I have to program the FPGA attached to my cRIO. I'm sure I am not the only person who would like the ability to do this.

 

Without support for any of the hardware or LabVIEW modules I need, the Linux version of LabVIEW is entirely useless to me, and XP as an OS simply cannot perform up to par for me.

The VISA Find Resource routine is very useful for letting your program figure out automatically which port or VISA resource to use.  However, it is not available for non-VISA communication paths such as the NI-845x devices.  I would like the same functionality as Find Resource, but to find which NI-845x devices are connected and then use those devices.
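In pyvisa terms, the existing half of this looks like the snippet below; the missing half is an equivalent enumeration call in the NI-845x driver (the filter expression shown is just pyvisa's default):

```python
import pyvisa

rm = pyvisa.ResourceManager()

# VISA can enumerate its own resources...
print(rm.list_resources("?*::INSTR"))

# ...but NI-845x devices are not VISA resources, so they never appear here.
# The idea asks for a "Find Resource"-style call in the 845x API that
# returns the connected devices, ready to be passed to the open function.
```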

In the drivers (DAQmx, NI-SCOPE/FGEN, ...):

If a value (usually a DBL) enters a config VI, it may be coerced by the driver.

The config VI should directly output the actual value.

 

Most often seen in the forum: a user declares an invalid DAQmx timing sample clock and wonders why the waveform (data) is not what he expected.

Yes, RTFM helps, and you can read the properties back after configuration, but it would be easy to add an output to the config VI with the actual (possibly coerced) value, making the user aware of that 'feature' (instead of popping an error 😉).
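The read-back workaround mentioned above, sketched with Python's nidaqmx (device and rate are assumptions); the idea is simply to have the config call return this value directly:

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0")
    # request an arbitrary rate; the driver may coerce it to what the
    # hardware can actually generate
    task.timing.cfg_samp_clk_timing(123456.0,
                                    sample_mode=AcquisitionType.CONTINUOUS)
    # reading the property back returns the rate actually in effect -
    # the value the config call itself ought to output
    print(task.timing.samp_clk_rate)
```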

 

This should not only cover the sample rate; other values such as FGEN standard function parameters (frequency, phase), the actual range, etc. should be covered as well.

Although the 2020 version of the Block Diagram Cleanup Tool (BDCT) does do a better job than its predecessors, I would still say that the BDCT's results are less than optimal. Most LabVIEW Idea Exchange posts concerning the BDCT talk about label positioning and alignment.  Here I would like to focus on the issue of horizontal expansion of the block diagram, and take a holistic view of which LabVIEW features contribute to that effect.

 

Like most programmers, when developing a VI block diagram, I try to keep the diagram no more than one screen wide.  I have learned a few tricks over the years that help manage horizontal expansion, such as resizing an object's label so that the words appear on multiple lines before running the BDCT; this allows for some compression horizontally at the cost of some growth vertically. Horizontal expansion of the block diagram is expected to some extent because data flow happens left to right, of course, but it would seem logical to incorporate that knowledge into the BDCT algorithm and find ways to adjust the spacing so that the rearrangement creates a bit more vertical expansion and less horizontal expansion, while still satisfying the usual criteria such as no backwards-running wires.

 

Because data flow is horizontal, to help the BDCT work better, NI may want to think about what visual features (other than left-to-right data flow) contribute to an unnecessarily wide block diagram. I seldom have an issue with a block diagram becoming too tall.  Admittedly, poor programming technique can result in too-wide block diagrams, but let's look at a few other things. What elements of LabVIEW's block diagram unnecessarily consume width? Here are a few that I could think of:

 

  1. Long names in bundle/unbundle cluster objects, property/invoke nodes, enum/ring constants, local variables, formula nodes, etc. – Most LabVIEW-supported languages are written left to right, and long names, especially with nested clusters, take up horizontal space. I like being able to use long names; they help the code be more self-documenting. If these types of objects supported word wrap, that would help conserve width.
  2. Expression nodes and multi-digit constants – no word-wrap available here either.
  3. Named terminals of timing and event structures – They are what they are; you cannot remove all of the ones you don't use, so they take up space horizontally.
  4. Some native functions – There are some LabVIEW native function icons that are wider than they are tall. Some examples: In-Range/Coerce and Initialize Array.
  5. Shift registers – It has a subtle effect, but shift registers are wider than they are tall. Do they have to be that way?

To fully tackle this issue, I believe you have to look at things holistically and not just at potential improvements to the BDCT.  Recognize that data flow is left-to-right horizontal and you will have long text names; you can't really do anything about that. However, there are other things that could be done feature-wise in LabVIEW to help compensate for width-wise block diagram growth.

Hello forum

 

Wouldn't it be nice if we could add Windows 10 IoT Enterprise PCs as embedded targets, where we create a VI executable and set it as a startup program, and once deployed the target is automatically configured to launch the startup program with Embedded Enabling Features (EEF), Enhanced Write Filter (EWF), Hibernate Once, Resume Many (HORM), and File-Based Write Filter (FBWF) on a different volume, as presented in the NI Week TS2361 & TS8562 slides?

 

Thanks

Imagine going into a customer site after a 2-hour drive. Or worse, a 2-hour drive after a 6-hour flight.

And being able to connect your Android smartphone to the instrument in question.

Run a test program, or a calibration program, from a special GPIB controller that plugs into your phone's USB/charger port.

 

How much easier would that make field service?

I would like DSC to have the ability to use array variables from a PLC as shared variable arrays. This is currently not supported, and adding individual variables to then make up an array is cumbersome. Methods other than shared variables are not as portable, and equally cumbersome.

Most of the time, I put remote devices on our internal network. If I set up a device so its IP address is dynamically assigned, the IP often changes. But the project needs to point to the device via a specific IP address. If the IP address changes, I constantly have to update the remote device's properties in my project, often not realizing this is even an issue until a deployment fails because the device can't be found.

 

I'm proposing an idea to link to a device's alias (or some unique ID other than IP) and allow the project to automagically update the IP address.

 

 

 

When using the ECU M&C toolkit to handle XCP communication on an XNET interface port (using the xxx:yyy@nixnet syntax), I've found that disconnecting and closing the XCP connection causes the whole XNET interface to stop, even if other XNET sessions are still using it.

 

This seems to be bad behavior for an XNET port.  Typically, when a session is closed, the interface is not closed until every other session is done using it.  In this case, though, the interface gets closed without regard to other XNET sessions that may be running on the same port, and they consequently stop receiving or transmitting CAN frames, without any indication of why.

 

This is particularly problematic for me if I am using a Stream In session to log all messages on the interface.  If a self-contained XCP code module is run (Open, Connect, Read, Disconnect, Close), then the interface gets closed/stopped, and the Stream In session stops receiving frames.

 

I believe this issue is happening with the NI Automotive Diagnostic Command Set (ADCS) as well.  It also seems to close the interface when a Diagnostic session is closed (see here and here).