Data Acquisition Idea Exchange


It is a frequent requirement to make measurements on production lines. Position on these is often tracked with Rotary Encoders https://en.wikipedia.org/wiki/Rotary_encoder . Many NI devices can accept the quadrature pulse train from such a device and correctly produce a current position count. The information in the two-phase pulse train allows the counter to correctly track forward and reverse motion.

 

What would be very useful would be a callback in NI-DAQmx that is called after every n pulses, ideally with a flag to indicate whether the counter is higher or lower than the previous value, i.e. the direction.

 

This has recently been discussed on the Multifunction DAQ board here: http://forums.ni.com/t5/Multifunction-DAQ/quadrature-encoder-based-triggering/td-p/3125468 . So I am not alone in requesting something more programmer-friendly than the workaround offered there.
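
For reference, the software-side workaround today is to poll the counter and derive the direction from the change in count. Below is a rough sketch using the nidaqmx Python package (the counter name, pulses per revolution, and poll period are placeholders); this is exactly the kind of loop a driver-level every-n-counts callback would remove:

    import time
    import nidaqmx

    with nidaqmx.Task() as task:
        # Counter and pulses-per-revolution are assumptions for illustration.
        task.ci_channels.add_ci_ang_encoder_chan("Dev1/ctr0", pulses_per_rev=1024)
        task.start()

        last = task.read()                    # angular position, degrees by default
        while True:
            time.sleep(0.01)                  # application-chosen poll period
            pos = task.read()
            if pos != last:
                direction = "forward" if pos > last else "reverse"
                print(pos, direction)
            last = pos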

 

 

Every time I have to work with an NI DAQ device, the first thing I need to know is what each pin can or can't do.

Currently this involves looking through something like seven different documents to find little bits of information and bring them back to your application.

 

A block diagram could easily be a reference point for the rest of the documentation (want to know about pin I/O for your device? Look at this document).

Plus, a good block diagram can tell you what you need to know quickly and clearly. A picture is worth a thousand words, right?

 

Some might find the current documentation adequate, but personally I would really like to have a block diagram that represents the internals and capabilities of the pins and the device in general. Most microcontrollers have this, and it is an extremely useful tool. So why not have one for the DAQ devices as well?
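
Until such a diagram exists, a partial workaround is to ask the driver what the device exposes. A quick sketch with the nidaqmx Python package (the device name "Dev1" is an assumption):

    import nidaqmx.system

    dev = nidaqmx.system.Device("Dev1")   # device name assumed

    print("AI channels:", [c.name for c in dev.ai_physical_chans])
    print("AO channels:", [c.name for c in dev.ao_physical_chans])
    print("DI lines:", [l.name for l in dev.di_lines])
    print("DO lines:", [l.name for l in dev.do_lines])
    print("Counters:", [c.name for c in dev.ci_physical_chans])
    print("Terminals:", dev.terminals)

That lists what exists, but not how the pins are routed internally, which is exactly what a block diagram would add.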

Hi All,

 

In my post on the LabVIEW board I asked if it was possible to have control over the DIO of a simulated DAQ device. Unfortunately it seems this feature is not available. Once MAX is closed, the DIOs run through their own sequences.

 

If there were a non-blocking way to control a simulated DAQ device through MAX, it would permit much simpler prototyping of systems before they need to be deployed to hardware. For example, if you want to see how a program responds to a value change, you could simply enter it in the non-blocking MAX UI. Or, as in my original case, it would make an executable usable even if you don't have all the necessary hardware.

 

I think this feature should only be available for simulated devices.

 

Thanks for reading - and hopefully voting,

Dave

 

Multiple people have requested that there be a natural way for LabVIEW and SignalExpress to do a rotational speed measurement using a quadrature encoder. An express VI under "Acquire Signals >> Counter Input >> Rotational Speed" that asks you basic quadrature-encoder questions and computes the rotational speed would be very useful. It would ask for things such as ticks per revolution and decoding type (x1, x2, x4), which are what you need to compute rotational speed. In addition, this could then be converted into a shipping example for DAQmx relatively easily. I have had multiple people ask this question and believe that, especially within SignalExpress, this would be very useful.
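
Until such an express VI exists, the arithmetic it would wrap is simple. A minimal sketch in Python with nidaqmx (the counter, ticks per revolution, and gate time are placeholders; decoding type and Z-index options are left at their defaults):

    import time
    import nidaqmx

    PULSES_PER_REV = 1024     # the "Ticks per Revolution" the express VI would ask for

    with nidaqmx.Task() as task:
        task.ci_channels.add_ci_ang_encoder_chan("Dev1/ctr0", pulses_per_rev=PULSES_PER_REV)
        task.start()

        a = task.read(); t0 = time.perf_counter()
        time.sleep(0.1)                      # gate time
        b = task.read(); t1 = time.perf_counter()

        # Position is returned in degrees by default: revolutions = degrees / 360.
        rpm = ((b - a) / 360.0) / (t1 - t0) * 60.0
        print(f"{rpm:.1f} RPM")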

 

 

Rotation.png

 

 

Currently, when streaming analog or digital samples to a DAQ board, the output stays at the level of the last sample received when a buffer underflow occurs. This behavior can be observed on USB X Series multifunction DAQ boards. I have the USB-6363 model. The exact mode is hardware-timed, buffered, continuous, and non-regenerating. The buffer underflow error code is -200290: "The generation has stopped to prevent the regeneration of old samples. Your application was unable to write samples to the background buffer fast enough to prevent old samples from being regenerated."

 

I would like to have an option to configure the DAQ hardware to immediately set the analog and digital outputs to a predefined state if a buffer underrun occurs. I would also like an option to immediately assert one of the PFI pins on buffer underrun.

 

I believe this could be accomplished by modifying the X Series firmware and providing configuration of this feature in the DAQmx API. If no more samples are available in the buffer, the DAQ board should immediately write the predefined digital states / analog levels to the outputs and indicate the buffer underrun state on the PFI line. Then it should report the error to the PC.

 

Doing this in firmware has certain advantages:

  1. It can be done quickly (possibly within the time of the next missing sample; at 2 MS/s that's 0.5 µs).
  2. It handles all situations (software lockups, excessive CPU loading by other processes, loss of communication due to bus traffic, interface disconnection, ...).
  3. It does not require any additional hardware (to turn off outputs externally).
  4. Buffer underrun indication on the PFI line could provide an additional safety measure (it could be used, for example, to immediately disable an external power amplifier connected to the DAQ AO).

Doing this using other methods is just too slow, does not handle all situations, or requires additional external circuitry.

 

Setting the outputs from software once the error occurs is slow (~25 ms, the time of 50,000 samples at 2 MS/s) and does not handle physical disconnection of the interface. The analog output does eventually go to 0 V on the USB-6363 when the USB cable is disconnected, but it takes about half a second.
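
For illustration, this is roughly what that software fallback looks like with the nidaqmx Python API (device, waveform, and safe level are placeholders); it is exactly the path that is too slow and that a firmware option would replace:

    import numpy as np
    import nidaqmx
    from nidaqmx.constants import AcquisitionType, RegenerationMode, SampleTimingType
    from nidaqmx.errors import DaqError

    SAFE_LEVEL = 0.0                                 # predefined safe output state
    data = np.sin(np.linspace(0, 2 * np.pi, 100000))

    with nidaqmx.Task() as task:
        task.ao_channels.add_ao_voltage_chan("Dev1/ao0")
        task.out_stream.regen_mode = RegenerationMode.DONT_ALLOW_REGENERATION
        task.timing.cfg_samp_clk_timing(2_000_000, sample_mode=AcquisitionType.CONTINUOUS,
                                        samps_per_chan=len(data))
        task.write(data)                             # preload the buffer
        task.start()
        try:
            while True:
                task.write(data)                     # stream the next chunk; raises -200290 if we fall behind
        except DaqError as e:
            if e.error_code != -200290:
                raise
            # Software fallback: by the time we get here, tens of milliseconds have
            # passed and the output has been frozen at the last sample.
            task.stop()
            task.timing.samp_timing_type = SampleTimingType.ON_DEMAND
            task.write(SAFE_LEVEL)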

 

Using a watchdog timer would also be too slow. The timer can be set to quite a short time, but from software I would not be able to reset it faster than every 10 ms. It would also require switching off the analog channels externally with additional circuitry, because the watchdog timer is not available for analog channels.

 

The only viable solution right now is to route the task sample clock to a PFI line and detect when it stops toggling. It actually does stop after the last sample is generated. Once that occurs, the outputs can be switched off externally. This requires a whole lot of external circuitry and major development time. If you need the reaction time to be within one or two samples, the pulse detector needs to be customized for every sampling rate you might want to use. To make this work right for analog output, it would take a RISC microcontroller and analog electronic switches. If you wanted to use an external trigger to start the waveform, the microcontroller would have to turn on the analog switch, look for the beginning of the waveform sample clock, record the initial clock interval as a reference, and finally turn off the switch if no pulse is received within the reference time.

 

I'm actually quite impressed with how well the USB-6363 handles streaming to the outputs. It lets me output waveforms with a complexity that regular arbitrary generators with fixed memory and sequencing simply cannot handle. Buffer underflows, even at the highest sampling rate, are quite rare. However, to make my system robust and safe, I need a fast, simple, and reliable method of quickly shutting down the outputs, which only a hardware/firmware solution can provide.

 

Thanks,

Sebastian

When it comes to documentation of a measurement, you need to report ALL the settings of the device that affect that measurement.

From a core memory dump written as a hex string to an XML document... anything that reveals a difference in the settings that affect the measurement would be fine for documentation.

Something like a big property-node readout followed by a Format Into String... but make sure not to miss a property... and it gets a bit more complicated when it comes to signal routing...
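
As a crude text-language illustration of the same idea, the sketch below (Python with nidaqmx; the channel names and the short property list are assumptions, and a real solution would have to cover every property plus routing) dumps a few task settings to JSON alongside the data:

    import json
    import nidaqmx

    with nidaqmx.Task() as task:
        task.ai_channels.add_ai_voltage_chan("Dev1/ai0:3")
        task.timing.cfg_samp_clk_timing(1000.0)

        snapshot = {
            "sample_rate": task.timing.samp_clk_rate,
            "channels": [
                {
                    "name": ch.name,
                    "terminal_config": ch.ai_term_cfg.name,
                    "min_val": ch.ai_min,
                    "max_val": ch.ai_max,
                }
                for ch in task.ai_channels
            ],
        }
        print(json.dumps(snapshot, indent=2))

The point of the idea is that the driver already knows the complete list, so it should hand it over in one go instead of making every programmer maintain a list like this.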

 

A measurement that isn't sufficiently documented is all for naught. 

or

Just think of a nasty auditor 😉

 

It's so easy to make measurements with LabVIEW; please make it just as easy and consistent to document them.

 

Example:

A quick measurement setup with the DAQ Assistant/Express fills gigabytes, but after a certain time the data are useless because nobody knows how they were taken. A simple checkbox could add all this information to the variant of the waveform (or TDMS or ...), even if the operator doesn't have a clue about all the settings that affect his measurements.

 

There is currently no API available to develop applications that will use the functionality of the GPIB Analyzer. There are customers who would like to be able to monitor the GPIB bus from LabVIEW, so this would be helpful.

I am using DAQmx physical channel controls in the user interface to select particular DAQ modules. I would like to display only modules of a particular type (AI, DI, AO, DO, ...) that are connected in the system. For example, I need to display only NI 9477 DO cDAQ modules in the physical channel control, not other DIO models like the NI 9403. The I/O name filtering option cannot filter out other models of the same type.

 

It would be great if NI provided an option for filtering the module names based on their product type or on user-configurable naming (for example, if the cDAQ1 device is renamed to "DEV1", the user could filter devices based on the string "DEV").
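
In code, the closest thing today is to walk the device list and filter on the product-type string yourself; a sketch with the nidaqmx Python package (the "9477" substring is just the example from above):

    import nidaqmx.system

    system = nidaqmx.system.System.local()

    # Keep only NI 9477 digital-output modules.
    do_modules = [dev for dev in system.devices if "9477" in dev.product_type]

    for dev in do_modules:
        print(dev.name, dev.product_type, [line.name for line in dev.do_lines])

The idea is to get the same filtering directly in the physical channel control, without custom code behind every UI.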

It has come up a few times from customers, and I wanted to gauge interest and solicit ideas on how this should work.

 

Currently, with the built-in TDMS logging support, if you want to change to a new file in the middle of logging, you need to stop the task and start again.  For some use cases, this isn't practical (for example, http://forums.ni.com/t5/LabVIEW/Why-the-TDMS-file-is-larger-than-it-should-be/m-p/1176139#M511099).

 

The question is: How would you like to specify the "new file" behavior and what are your use cases?

 

For instance, a couple ideas to get the ball rolling:

  1. Add an interval attribute like "Change file after n samples".   We would then auto-increment the file name and change to that file when we have logged "n" samples.
  2. Make the file path attribute changeable at runtime.  We have a file path attribute for logging.  The idea here would be to support changing the file path "on the fly" without stopping and starting the task.  The problem is that it would not suit a use case where you want a specific file size very well.  Additionally, it wouldn't be as easy to use as #1, though it would be more flexible.
  3. (Any additional ideas/use cases?)

Thank you for your input!

 

Andy McRorie

NI R&D

It has come up a few times from customers, and I wanted to gauge interest and solicit ideas on how this should work.

 

Currently, with the built-in TDMS logging support, if you want to change to a new file in the middle of logging, you need to stop the task and start again.  For some use cases, this isn't practical (for example, http://forums.ni.com/t5/LabVIEW/Why-the-TDMS-file-is-larger-than-it-should-be/m-p/1176139#M511099).

 

The question is: How would you like to specify the "new file" behavior and what are your use cases?

 

What I'm currently thinking (because it seems the most flexible across different criteria and situations) is to simply allow you to set the file path property while the task is running (on the DAQmx Read property node).  The only downside I can think of with this approach is that you wouldn't know exactly when we change to the new file.  We could guarantee a change within (for example) 1 second, but you wouldn't be able to specify the exact file size.
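
A sketch of how that could look from the nidaqmx Python API, assuming the logging file path attribute becomes settable on a running task as proposed (device name, rates, and file names are placeholders):

    import nidaqmx
    from nidaqmx.constants import AcquisitionType, LoggingMode

    with nidaqmx.Task() as task:
        task.ai_channels.add_ai_voltage_chan("Dev1/ai0")
        task.timing.cfg_samp_clk_timing(10000.0, sample_mode=AcquisitionType.CONTINUOUS)
        task.in_stream.configure_logging("log_000.tdms", LoggingMode.LOG_AND_READ)
        task.start()

        for i in range(10):
            task.read(number_of_samples_per_channel=10000)
            # Proposed behavior: retarget the log file while the task keeps running.
            task.in_stream.logging_file_path = "log_{:03d}.tdms".format(i + 1)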

 

Would this be a good solution for you?  Can you think of a better way to specify this behavior?

 

Currently it is hard to find out whether a property can be set for a specific channel, or only per module. An example is the Strain Gauge Excitation property, which can only be set per module. Other properties can be different per channel.
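
Today the only way to find out programmatically is to try it. A small sketch with the nidaqmx Python package (module name, channel range, and the default bridge settings are assumptions) that sets the excitation on one channel and reads it back on its neighbours to see whether the setting is shared across the module:

    import nidaqmx

    with nidaqmx.Task() as task:
        # Strain channels on a hypothetical C Series module, bridge settings left at defaults.
        task.ai_channels.add_ai_strain_gage_chan("cDAQ1Mod1/ai0:3")

        task.ai_channels[0].ai_excit_val = 2.5      # set excitation on the first channel only
        for ch in task.ai_channels:
            # On module-scoped hardware, every channel now reports 2.5 V.
            print(ch.name, ch.ai_excit_val)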

 

Idea: add a device-specific comment, for example in MAX, about the different properties. For example:

 

MAX idea.png

 

 

Just what the title says!

The Analog Input test panel in Measurement & Automation Explorer (MAX) provides a quick way to examine a signal and vary acquisition parameters. It would be useful to be able to zoom the time axis and have a cursor display so that, for example, noise level or rise time could be looked at in more detail. The time-axis limits can currently be overwritten manually as a way to zoom, but that is cumbersome. Assuming the graph used in this test panel is built from a standard NI graph, it should already have zoom and cursor capability, so this could be added easily.

 

Steve

 

It seems the only indication in MAX that a device is simulated is that, under the Devices and Interfaces section, the tiny glyph to the left of the device name is colored yellow instead of white/transparent. I end up not remembering which color means what. It would be useful to add the text "Simulated" next to the device name. It would also help to distinguish simulated devices by making that glyph green (instead of the current transparent/white) when the device is installed and detected, and changing it to red (keeping the existing red X) if a device had been detected and assigned a device number but is currently not installed/detected. Then simulated devices being yellow may imply "warning/caution" or "not real". Perhaps also show a help-hint popup ("Detected", "Not Detected", or "Simulated") when the mouse hovers over a device name.
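
Until MAX labels it, a couple of lines of nidaqmx Python can at least print the status explicitly (purely a reporting aid, not a MAX change):

    import nidaqmx.system

    for dev in nidaqmx.system.System.local().devices:
        label = "Simulated" if dev.is_simulated else "Detected"
        print(f"{dev.name}: {dev.product_type} ({label})")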

 

MAX Simulated Device.jpg

I rarely have to set up hardware for a new analog measurement and always have to puzzle over the difference between RSE and NRSE modes. I think of the inverting input as the reference, so "Non-Referenced Single-Ended" doesn't make sense to me. And, if I run the AISense line to my remote sensor, isn't that a Referenced Single-Ended measurement?

 

Yesterday, I noticed that at least some on-line documentation now refers to GRSE (Ground Referenced Single-Ended); adding that single letter helps a lot. What about adding another single letter and referring to the other mode as RRSE (Remote Referenced Single-Ended)? One letter could save a lot of people a lot of time.
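
For anyone landing here with the same confusion, the two modes map onto the terminal configuration you pass when creating the channel; a quick nidaqmx Python sketch (device and channels assumed), with the proposed names in the comments:

    import nidaqmx
    from nidaqmx.constants import TerminalConfiguration

    with nidaqmx.Task() as task:
        # RSE: single-ended, referenced to the device's AI ground (the proposed "GRSE").
        task.ai_channels.add_ai_voltage_chan("Dev1/ai0", terminal_config=TerminalConfiguration.RSE)
        # NRSE: single-ended, referenced to the AI SENSE terminal (the proposed "RRSE").
        task.ai_channels.add_ai_voltage_chan("Dev1/ai1", terminal_config=TerminalConfiguration.NRSE)
        print(task.read())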

We normally have a DAQ system consisting of several elements:

-Sensor

-Custom filtering/attenuation

-Signal conditioning

-NI-DAQ device

 

When we use scales in DAQmx, we have to create a scale for every 'route' we use (sometimes we have to use a 4 kA sensor for a 100 A signal).

If we could define a scale in a task that consists of multiple scales, we could directly pick the sensor and signal conditioning we use for each signal. A change in one of these elements could then easily be adjusted for.
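
As things stand, the per-route workaround is to collapse the chain into a single linear scale yourself; a sketch with the nidaqmx Python package (the element gains are made-up numbers for one 'route'):

    import nidaqmx
    from nidaqmx.constants import VoltageUnits

    # Hypothetical chain: 4 kA sensor with 10 V full-scale output, then a 1:40 attenuator.
    sensor_gain = 4000.0 / 10.0        # amps per volt at the sensor output
    conditioning_gain = 40.0           # sensor volts per volt seen by the DAQ

    nidaqmx.Scale.create_lin_scale("route_A", slope=sensor_gain * conditioning_gain,
                                   scaled_units="Amps")

    with nidaqmx.Task() as task:
        task.ai_channels.add_ai_voltage_chan(
            "Dev1/ai0", min_val=-4000.0, max_val=4000.0,
            units=VoltageUnits.FROM_CUSTOM_SCALE, custom_scale_name="route_A")

The idea is to let DAQmx hold the chain as separate sensor and conditioning scales, so swapping one element doesn't mean recomputing every combined scale by hand.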

 

Ton

This is pretty trivial to achieve through LabVIEW itself, but...

 

SignalExpress is a simple, stand-alone data acquisition application that allows those with limited exposure to LabVIEW to set up simple test and measurement routines. One area where this is ideal - at least for me - is in environmental or long-life testing. Instead of crafting a beautiful piece of custom software for my colleagues, I can hand them a DAQ device, point them in the direction of the SignalExpress and DAQmx installers, and off they go. With a little fiddling, they can create a logger that suits their needs.

 

One thing I've noticed, however, is that when sampling with non-simultaneous cards such as the USB-6225, users will select 1-point-on-demand, set it to some big interval, and then come back screaming at the top of their lungs: "OHMYGOD THERE'S CROSSTALK BETWEEN CHANNELS!" With a little bit of fault-finding, it's easy to point out that it's not crosstalk but ghosting between channels, because I would guess that 1-point-on-demand uses interval sampling and rattles through the multiplexing as quickly as it can.

 

My idea: give users the option either to select a round-robin mode with a sensible delay, or to take complete control of the interchannel delay.
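
In DAQmx itself the knob already exists as the AI convert (interchannel) clock; a sketch of slowing it down with the nidaqmx Python API, assuming the ai_conv_rate timing property is exposed for your device (device name and rates are placeholders). The request is to surface exactly this in SignalExpress:

    import nidaqmx
    from nidaqmx.constants import AcquisitionType

    with nidaqmx.Task() as task:
        task.ai_channels.add_ai_voltage_chan("Dev1/ai0:3")
        task.timing.cfg_samp_clk_timing(10.0, sample_mode=AcquisitionType.CONTINUOUS)
        # Spread the four multiplexed conversions across the 100 ms sample interval
        # (round-robin) instead of bursting through the scan list at the maximum
        # convert rate, which is what causes the ghosting described above.
        task.timing.ai_conv_rate = 40.0     # conversions per second
        print(task.read(number_of_samples_per_channel=10))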

 

I realise that the standard line is usually "use LabVIEW" - I do - but I'd rather spend my time working on the important stuff and empowering those with less experience and/or exposure to make accurate measurements.

It gets a bit annoying that PXI1Slot2 is listed after PXI1Slot14 when doing an ASCII sort. I (OK, admittedly, my coworker) propose naming conventions that allow for a better ASCII sort. For instance, PXI1Slot002 and PXI1Slot014.
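
An alternative to renaming is a natural sort on the application side; a small plain-Python sketch:

    import re

    def natural_key(name):
        # Split "PXI1Slot14" into text and number chunks so numbers compare numerically.
        return [int(part) if part.isdigit() else part for part in re.split(r"(\d+)", name)]

    slots = ["PXI1Slot14", "PXI1Slot2", "PXI1Slot3"]
    print(sorted(slots))                   # ASCII sort: Slot14 lands before Slot2
    print(sorted(slots, key=natural_key))  # natural sort: Slot2, Slot3, Slot14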

I have an application in which I have to digitize a pulse across a shunt resistor.  The common-mode voltage can be up around 60 VDC.  The digitizing cards I was able to find cannot perform a differential measurement without digitizing both sides of the resistor and then subtracting.  This method causes a lot of error due to the required voltage ranges.  I have been able to digitize some of these pulses with the PXI-4072 DMM with great success.  However, I can control when those pulses occur and set up trigger lines as needed.  Other pulses I need to digitize will occur whenever the UUT decides to put them out.  What is really needed is a way to trigger the DMM on a measured voltage level.  Just for reference, Agilent's PXI DMMs can do this.  It seems such a shame that I haven't found a way to do this with NI's DMM.  As a final thought, some pretrigger data would be needed to properly capture the pulse, though pretrigger data would be nice in any hardware-triggered acquisition.

When a piece of hardware is simulated in MAX, I would like to be able to insert a transfer function or a signal-simulating VI to allow me to get a more realistic test of a system. The current default of generating a sine wave for simulated acquisition only lets me test part of the code. If a transfer function, lookup table, or custom VI could be substituted for the sine wave generation, then I would be able to test many other facets of a system.
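
As a stop-gap, the substitution can be done in software after the read: acquire the simulated sine wave and push it through your own transfer function or lookup table. A sketch in Python with nidaqmx (the simulated device name and the toy model are placeholders); the idea is for MAX/DAQmx to do this for you at the driver level:

    import numpy as np
    import nidaqmx

    def transfer_function(x):
        # Hypothetical plant model standing in for the default simulated sine wave.
        return 0.5 * x + 0.05 * np.random.randn(len(x))

    with nidaqmx.Task() as task:
        task.ai_channels.add_ai_voltage_chan("SimDev1/ai0")   # simulated device name assumed
        task.timing.cfg_samp_clk_timing(1000.0, samps_per_chan=1000)
        raw = np.asarray(task.read(number_of_samples_per_channel=1000))
        data = transfer_function(raw)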

When I drop a DAQmx Task constant on my LabVIEW block diagram, I have the right-click menu option Generate Code >> Configuration.

 

This scripts out a subVI that creates the task by adding each channel, with its channel-specific properties, one channel at a time through repeated calls to DAQmx Configure Channel.vi.

 

I have a hard time saving that VI output! I have not really learned how to name it properly without using "BLEEP_BLEEP_BLEEP_Configuration_that_cannot_be_BLEEPING_scaled.vi".

 

I believe that an autoindexing loop would be much nicer.
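
For comparison, the text-API equivalent of that autoindexing loop is just a channel table driven by one loop; a sketch in Python with nidaqmx (the table contents are made up), since the generated configuration VI itself is LabVIEW:

    import nidaqmx

    # Hypothetical channel table - the array an autoindexing loop would consume.
    channels = [
        {"phys": "Dev1/ai0", "min": -10.0, "max": 10.0},
        {"phys": "Dev1/ai1", "min": -5.0, "max": 5.0},
    ]

    with nidaqmx.Task() as task:
        for ch in channels:
            task.ai_channels.add_ai_voltage_chan(ch["phys"], min_val=ch["min"], max_val=ch["max"])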

 

My grandmother thanks you for improving my manner of speech.