Data Acquisition Idea Exchange

Every time I have to work with an NI DAQ device, the first thing I need to know is what each pin can or can't do.

Currently this involves looking through something like seven different documents to find little bits of information and bringing them back to your application.

 

A block diagram could easily be a reference point for the rest of the documentation (you want to know about pin I/O for your device? Look at this document).

Plus, a good block diagram can tell you what you need to know quickly and clearly. A picture is worth a thousand words, right?

 

Some might find the current documentation adequate, but personally I would really like to have a block diagram that represents the internals and capabilities of the pins and the device in general. Most microcontrollers have this, and it is an extremely useful tool. So why not have one for the DAQ devices as well?

I would like to see a C Series module just like the NI 9474, only with push-pull outputs.

 

(Attachment: MOSFET_Push_Pull_Amp.png)

Hi

 

I'd like to see PCI Express versions of existing PCI analogue output cards, e.g. the PCI-6713 and PCI-6733.

 

I'm finding it quite difficult (and quite a bit more expensive) to source desktop PCs featuring PCI slots.

 

 

NI should make sure that the measurement uncertainty specifications for its DAQ hardware are aligned with uncertainty analyses that are performed according to the ISO "Guide to the expression of Uncertainty in Measurement" (GUM). See http://www.bipm.org/en/publications/guides/gum.html. Furthermore, the language used could conform to the ISO "International Vocabulary of Metrology" (VIM). See http://www.bipm.org/en/publications/guides/vim.html.

Currently, when streaming analog or digital samples to a DAQ board, the output stays at the level of the last sample received when a buffer underflow occurs. This behavior can be observed on USB X Series multifunction DAQ boards; I have the USB-6363 model. The exact mode is hardware-timed, buffered, continuous, and non-regenerating. The buffer underflow error code is -200290: “The generation has stopped to prevent the regeneration of old samples. Your application was unable to write samples to the background buffer fast enough to prevent old samples from being regenerated.”

 

I would like to have an option to configure the DAQ hardware to immediately set the voltage on analog and digital outputs to a predefined state if a buffer underrun occurs. Also, I would like an option to immediately set one of the PFI pins on buffer underrun.

 

I believe this could be accomplished by modifying the X Series firmware and exposing configuration of this feature in the DAQmx API. If no more samples are available in the buffer, the DAQ board should immediately write predefined digital states / analog levels to the outputs and indicate the buffer underrun state on a PFI line. Then it should report the error to the PC.

 

Doing this in firmware has certain advantages:

  1. It can be done quickly (possibly within the time of the next missing sample; at 2 MS/s that’s 0.5 µs).
  2. It handles all situations (software lockups, excessive CPU loading by other processes, loss of communication due to bus traffic, interface disconnection…).
  3. It does not require any additional hardware (to turn off outputs externally).
  4. Buffer underrun indication on a PFI line could provide an additional safety measure (it could be used, for example, to immediately disable an external power amplifier connected to the DAQ AO).

Doing this using other methods is just too slow, does not handle all situations, or requires additional external circuitry.

 

Setting the outputs from software once the error occurs is slow (~25 ms, the time of 50,000 samples at 2 MS/s) and does not handle physical disconnection of the interface. The analog output does eventually go to 0 V on the USB-6363 when the USB cable is disconnected, but it takes about half a second.
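
As a rough illustration of this software-level fallback (and of why it is too slow), here is a minimal sketch using the nidaqmx Python API; the device name Dev1, channel ao0, the waveform, and the recovery details are assumptions for illustration only, not the proposed firmware feature:

```python
# Sketch of the software fallback described above (Python nidaqmx).
# "Dev1/ao0" and the example waveform are assumptions for illustration.
import numpy as np
import nidaqmx
from nidaqmx.constants import AcquisitionType, RegenerationMode, SampleTimingType
from nidaqmx.errors import DaqError

RATE = 2_000_000                     # 2 MS/s, as mentioned above
CHUNK = 50_000                       # samples written per loop iteration

with nidaqmx.Task() as task:
    task.ao_channels.add_ao_voltage_chan("Dev1/ao0", min_val=-10.0, max_val=10.0)
    # Hardware-timed, buffered, continuous, non-regenerating generation
    task.timing.cfg_samp_clk_timing(RATE, sample_mode=AcquisitionType.CONTINUOUS)
    task.out_stream.regen_mode = RegenerationMode.DONT_ALLOW_REGENERATION

    t = np.arange(CHUNK) / RATE
    waveform = np.sin(2 * np.pi * 1_000 * t)      # arbitrary example data

    task.write(waveform, auto_start=False)
    task.start()
    try:
        while True:
            # If this write falls behind, DAQmx raises -200290 and the physical
            # output freezes at the last generated sample until we react below.
            task.write(waveform)
    except DaqError as e:
        if e.error_code != -200290:
            raise
        # Software reaction: stop the task and drive a safe level on demand.
        # By the time this runs, milliseconds have already passed, which is
        # the delay this idea wants to eliminate with a firmware-level shutdown.
        task.stop()
        task.timing.samp_timing_type = SampleTimingType.ON_DEMAND
        task.write(0.0)
```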

 

Using the watchdog timer would also be too slow. The timer can be set to quite a short time, but from software I would not be able to reset it faster than every 10 ms. It would also require switching off the analog channels externally with additional circuitry, because the watchdog timer is not available for analog channels.

 

The only viable solution right now is to route the task sample clock to a PFI line and detect when it stops toggling. It actually does stop after the last sample is generated. Once that occurs, the outputs can be switched off externally. This requires a whole lot of external circuitry and major development time. If you need the reaction time to be within one or two sample periods, the pulse detector needs to be customized for every possible sampling rate you might want to use. To make this work right for analog output, it would take a RISC microcontroller and analog electronic switches. If you wanted to use an external trigger to start the waveform, the microcontroller would have to turn on the analog switch, look for the beginning of the waveform sample clock, record the initial clock interval as a reference, and finally turn off the switch if no pulse is received within the reference time.
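
For reference, the first step of this workaround (routing the task sample clock to a PFI terminal) can already be done from the DAQmx API; a minimal sketch in Python nidaqmx follows, with Dev1 and PFI0 as assumed names. Everything after that (the missing-pulse detector and the analog switches) still has to live in external hardware:

```python
# Minimal sketch: export the AO sample clock to a PFI terminal so external
# circuitry can detect when it stops toggling (Python nidaqmx; "Dev1" and
# "PFI0" are assumed names; the rest of the generation setup is omitted).
import nidaqmx
from nidaqmx.constants import AcquisitionType

with nidaqmx.Task() as task:
    task.ao_channels.add_ao_voltage_chan("Dev1/ao0")
    task.timing.cfg_samp_clk_timing(2_000_000, sample_mode=AcquisitionType.CONTINUOUS)
    # The external missing-pulse detector watching this line must be matched
    # to the sampling rate in use.
    task.export_signals.samp_clk_output_term = "/Dev1/PFI0"
```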

 

I’m actually quite impressed with how well the USB-6363 handles streaming to outputs. It allows me to output waveforms of a complexity that regular arbitrary generators with fixed memory and sequencing simply cannot handle. Buffer underflow, even at the highest sampling rate, is quite rare. However, to make my system robust and safe, I need the fast, simple, and reliable method of quickly shutting down the outputs that only a hardware/firmware solution can provide.

 

Thanks,

Sebastian

I would like to be able to collect a couple of channels of analog input on the iPad. The Oscium device (http://www.oscium.com/) is nice, but I need a minimum of 2 analog inputs and I would rather have NI hardware.

 

Response from corporate:

"We don't currently have anything that would meet the customer's requirement of being able to plug in directly into the iPad for data acquisition.

I don't believe that the iPad supports Silverlight which is a framework developed by Microsoft.  Also, wireless DAQ has to communicate with a host running DAQmx, so the customer would still need a 2nd computer even if using wireless DAQ.

If you want to connect data acquisition hardware (of any form factor) to a machine running LabVIEW and DAQmx, then use LabVIEW Web Services to publish the front panel to the web and view/control it from the iPad.

We do have several USB products that will work with Windows-based netbooks that could be an alternative solution if the customer is open to a non-Apple platform.  For example, the 5132/5133 are bus-powered digitizers with much higher sample rate, bandwidth, and buffer size compared to the Oscium device.  However, the price is also quite a bit higher."

I would like to have a programmable gain amplifier in the analog output path that I can use to adjust the amplitude of an output signal.  In control applications, this would be much better than having to stop a continuous task, reload the data with a new amplitude, and start the task again.

 

Ideally, for some of my applications, it would be nice to be able to generate a basic waveform scaled to +/- 1V and then have a property that I can write to while the task is running to set the gain.
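
For contrast, here is a rough sketch of the current workaround in the nidaqmx Python API (device, channel, rate, and waveform are assumptions): every amplitude change means stopping the task, rewriting the buffer with rescaled data, and restarting, which is exactly what a writable output-gain property would avoid.

```python
# Sketch of the current stop / rescale / restart workaround (Python nidaqmx).
# "Dev1/ao0", the rate, and the waveform are assumptions for illustration.
import numpy as np
import nidaqmx
from nidaqmx.constants import AcquisitionType

RATE = 100_000
t = np.arange(RATE // 100) / RATE
unit_wave = np.sin(2 * np.pi * 1_000 * t)        # +/- 1 V basis waveform

def set_amplitude(task: nidaqmx.Task, gain: float) -> None:
    # The output is interrupted while the buffer is rewritten; a hardware
    # output PGA driven by a run-time property would avoid this glitch.
    task.stop()
    task.write(gain * unit_wave)
    task.start()

with nidaqmx.Task() as task:
    task.ao_channels.add_ao_voltage_chan("Dev1/ao0", min_val=-10.0, max_val=10.0)
    task.timing.cfg_samp_clk_timing(RATE, sample_mode=AcquisitionType.CONTINUOUS)
    task.write(1.0 * unit_wave)
    task.start()
    set_amplitude(task, 2.5)     # every gain change needs a stop/start cycle
```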

The term "Incomplete Sample Detection" comes from DAQmx Help.  It affects buffered time measurement tasks on X-series boards, the 661x counter/timers, and many 91xx series cDAQ chassis.  It is meant to be a feature, but it can also be a real obstacle.

 

How the feature works ideally: Suppose you want to configure a counter task to measure buffered periods of a 1-channel encoder.  You use implicit timing because the signal being measured *is* the sample clock.  The 1st "sample clock" occurs on the 1st encoder edge after task start, but the time period it measures won't represent a complete encoder interval.  Reporting this 1st sample could be misleading as it measures the arbitrary time from the software call to start the task until the next encoder edge.

   On newer hardware with the "Incomplete Sample Detection" feature, this meaningless 1st sample is discarded by DAQmx.  On older hardware, this 1st sample was returned to the app, and it was up to the app programmer to deal with it.
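
For concreteness, here is a minimal sketch of the kind of task being described, written against the nidaqmx Python API ("Dev1/ctr0", the range limits, and the buffer size are assumptions for illustration):

```python
# Buffered period measurement with implicit timing: the measured encoder
# signal itself acts as the sample clock (Python nidaqmx; "Dev1/ctr0" is an
# assumed counter name).
import nidaqmx
from nidaqmx.constants import AcquisitionType

with nidaqmx.Task() as task:
    task.ci_channels.add_ci_period_chan("Dev1/ctr0", min_val=1e-6, max_val=1.0)
    # Implicit timing: every encoder edge latches one period sample.
    task.timing.cfg_implicit_timing(sample_mode=AcquisitionType.CONTINUOUS,
                                    samps_per_chan=1000)
    task.start()
    periods = task.read(number_of_samples_per_channel=5)
    # On hardware with Incomplete Sample Detection, the partial interval from
    # task start to the first edge has already been discarded by DAQmx, so
    # periods[0] is the interval between encoder edges 1 and 2.
```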

 

Problem 1: Now suppose I'm also using this same encoder signal as an external sample clock for an AI task that I want to sync with my period measurement task.  Since DAQmx is going to discard the counter sample that came from the 1st edge, my first 5 samples will correspond to edges 2-6.  Over on the AI task, my first 5 samples will correspond to edges 1-5.

   My efforts to sync my tasks are now thwarted because their data streams start out misaligned.  The problem and workaround I'm left with are at least as troublesome as the one that was "solved" by this feature.

 

Problem 2:  Suppose I had a system where my period measurement task also had an arm-start trigger, and I depended on a cumulative sum of periods to be my master time for the entire system.  In this case, the 1st sample is the time from the arm-start trigger to the 1st encoder edge, and it is *entirely* meaningful.  On newer hardware, DAQmx will discard it and I'll have *no way* to know my timing relative to this trigger. 

   Older boards (M-series, 660x counter/timers) could handle this situation just fine. On newer boards, I'm stuck with a much bigger problem than the one that the feature was meant to solve.

 

So can we please have a DAQmx property that allows us to turn this "feature" OFF?  I understand that it'd have to be ON by default so as not to break existing code.

 

 

-Kevin P

The vast majority of my working life is spent with RIO devices or midrange X series cards, but I often come across applications where an inexpensive, reliable DAQ would be handy for low level tasks - monitoring presence sensors, measuring voltages at moderate precision and slow speed, providing interlocks for material storage bins etc.

 

Traditionally, you'll see a lot of USB 600X units being used for applications like these. However, running on USB has a few associated problems: unreliability of the Windows bus, cable strain relief on USB connectors, mounting of USB 600X units, connection type. Don't get me wrong, you can do a lot with these units but they're not an ideal, inexpensive solution for production processes.

 

There's a jump between the functionality of these USB units and X (or even M or E for the vintage crowd) series cards. The only thing that's really in that range anymore is the B series PCI-6010 card, which has the fantastic benefit of using a 37-pin D-SUB connector too, but is a little limited in terms of channel offerings and the like.

 

I'd like to see the B series range revived to provide products that fit between the PCIe-6320 and the USB 600X devices, providing non-USB connection and preferably with a DSUB backplane connector for cost and ease of use. This would provide a more reliable offering for simple acquisition tasks in the industrial environment at a cost-effective price point.

We mostly develop PXIe-based high-speed (RF) applications which store data on one or more RAIDs.

Several customers have already asked for a high-speed Ethernet connection to move this data over the network.

 

Yet there is only one PXIe 10 GbE module available, and it is NOT from NI.

We will already need a 40 GbE solution in the coming year.

 

PCI Express 40 GbE is by now commonly available; a mezzanine board solution would be sufficient if nothing else works.

But there is no carrier board available either.

 

I feel kind of left alone with all this data sitting on those big RAIDs, waiting to be processed / copied.

 

 

With the NI 9234 board you can use 4 IEPE sensors, but you don't have IEPE open/short detection capability.

The NI 9232 board has IEPE open/short detection capability, but it has only 3 channels.

 

I think that a board with 4 channels (like the 9234) and IEPE open/short detection capability would be great!

NI provides some 100-pin DAQ devices, e.g. this one for industrial digital I/O:

https://www.ni.com/en-us/shop/model/pci-6515.html

 

But why don't you also offer a basic connector block for a reasonable price, especially for industrial applications, where it is common to wire (DIO) signals through DIN-rail-mounted terminal blocks?

 

This connector block should have the following features:

 

- DIN rail mountable

- simple wire connection, ideally with spring terminals

- 100-pin cable connection

      (https://www.ni.com/en-us/support/model.sh100m-100m-flex-cable.html)

- relatively small for installation in a switch cabinet

- no signal conditioning, just clamps

- much cheaper than the currently available SCB-100 block

 

Please see also this related idea:

http://forums.ni.com/t5/Data-Acquisition-Idea-Exchange/Terminal-Block-layouts/idi-p/2160542

 

Regards

A-T-R

 

 

Many users (such as our customers) expect devices in the mid-price segment, like the USB-62xx family, to provide proper input signal handling adapted to the selected input sampling rate.

 

Proposal:

Include anti-aliasing filters in mid-class hardware, such as the USB-62xx family. As an immediate action, please include at least a warning remark in the user manuals of these devices.

Every application with variable sampling rate needs appropriate and adaptable input signal filtering.

Many NI DAQ devices do not contain anti-aliasing filters corresponding to the sampling frequencies (e.g. the USB-62xx family).

Signals containing frequency components above the Nyquist frequency will be folded to lower frequencies, causing incorrect spectral information.

Many applications need different sampling frequency settings but use the same external hardware. In these cases, hardware filters have to be designed for the highest possible sampling frequency. This leads to unrecoverable errors in the frequency spectrum if input signal components do not meet the Nyquist criterion.
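
A quick numerical illustration of the folding effect (plain NumPy, no DAQ hardware involved): a 9 kHz tone sampled at 10 kS/s without an anti-aliasing filter shows up at an apparent 1 kHz.

```python
# Aliasing demonstration: a tone above the Nyquist frequency folds down to an
# apparent lower frequency in the sampled data.
import numpy as np

fs = 10_000.0                      # sampling rate: 10 kS/s, Nyquist = 5 kHz
f_in = 9_000.0                     # input tone above Nyquist
n = np.arange(4096)
x = np.sin(2 * np.pi * f_in * n / fs)

spectrum = np.abs(np.fft.rfft(x * np.hanning(n.size)))
f_apparent = np.fft.rfftfreq(n.size, d=1.0 / fs)[np.argmax(spectrum)]
print(f_apparent)                  # ~1 kHz: the 9 kHz tone has been folded
```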

 

Thanks

Klaus

www.rfbeam.ch

 

 

 

There is a need for a quick, low-cost device for controlling/communicating over the popular low-voltage differential signaling (LVDS) standard.  There is a need for transmit-only, receive-only, and transmit/receive USB devices.  The current solution is to buy the NI USB-6501 and build a daughter board with TTL-LVDS ICs on it to interface to digital equipment in the military/defense industry. Lots of time and energy is wasted on designing, building, and cabling boards as an additional interface step, where a simple USB solution could be made available.

Currently, DSA devices that use voltage excitation have no method to provide that excitation to a particular device within test panels. The only way to do this is to create a task in Measurement and Automation Explorer, which takes much more time than running a simple test panel test. This should be a fairly simple addition to the test panel user interface: a box to check if excitation is required, and a control to set the voltage level to provide to the DUT. Test panels currently offer this for IEPE devices, and it makes sense that voltage excitation should work the same way.

I suggest NI produce an inexpensive (<$100) USB "stick" that has 2 hardware counters on it for optically isolated measurement of encoders or other high-speed devices. The stick would have a standard connector on it for easy wiring of differential encoders with ABZ lines. The device would enable measuring two separate encoders, or tracking two sections of a shaftless drive line that needs to position-follow. One or two DIO lines would be a bonus. This would seem to be a good fit for the industrial machine markets (at the very least). Today you need to buy a multifunction DAQ for several hundred dollars if you want two counters.

 

Contact me with any further questions.

 

 

Thank you!

 

Rick Yahn

QuadTech, Inc.

414-566-7938

rick.yahn@quadtechworld.com

 

When a DI change detection task runs, the first sample shows the DI state *after* the first detected change.  There's no clear way to know what the DI state was just *before* the first detected change, i.e. its *initial* state.

 

This idea has some overlap with one found here, but this one isn't restricted to usage via DAQmx Events and an Event Structure.  Forum discussions that prompted this suggestion can be seen here and here.

 

The proposal is to provide an addition to the API such that an app programmer can determine both initial state just before the first detected change and final state resulting from each detected change.  The present API provides only the latter.

 

Full state knowledge before and after each change can be used to identify the changed lines.  (Similarly, initial state and change knowledge could be used to identify post-change states.)
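
As a sketch of what that looks like with the nidaqmx Python API (port name and the initial-state placeholder are assumptions): changed lines fall out of an XOR of consecutive states, but the XOR for the *first* change needs exactly the initial state that the current API doesn't expose.

```python
# Change detection task plus line identification by XOR (Python nidaqmx).
# "Dev1/port0" is an assumed port; initial_state is a placeholder for the
# value this idea asks DAQmx to make queryable.
import nidaqmx
from nidaqmx.constants import AcquisitionType, LineGrouping

with nidaqmx.Task() as task:
    task.di_channels.add_di_chan("Dev1/port0",
                                 line_grouping=LineGrouping.CHAN_FOR_ALL_LINES)
    task.timing.cfg_change_detection_timing(rising_edge_chan="Dev1/port0",
                                            falling_edge_chan="Dev1/port0",
                                            sample_mode=AcquisitionType.CONTINUOUS)
    task.start()

    # Each sample is the port state *after* a detected change.
    samples = task.read(number_of_samples_per_channel=10)

    initial_state = 0x00           # placeholder for the missing API value
    states = [initial_state] + samples
    changed = [prev ^ curr for prev, curr in zip(states, states[1:])]
    # changed[i] has a 1 in each bit position whose line toggled on change i;
    # changed[0] is only trustworthy if initial_state is truly known.
```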

 

My preferred approach in the linked discussions is to expose the initial state through a queryable property node.  The original poster preferred to have a distinct task type in which initial state would be the first returned sample.  A couple good workarounds were proposed in those threads by a contributor from NI, but I continue to think direct API support would be appropriate.

 

 

-Kevin P

Hello

 

I don't know if there is already an idea exchange entry about this, but it would be useful if USB devices or adapters supported USB 3.0.

A device isn't recognized through a USB 3.0 port.

 

Thanks !

 

Vincent O.

Although there are dedicated RTD cards on other platforms, nothing like that exists on PXI.  The existing bridge input cards have significantly higher sample rates than needed for typical temperature measurements, and it would be nice to have a more economical drop-in option for RTDs that supported more channels.

I only rarely have to set up hardware for a new analog measurement, so I always have to puzzle over the difference between RSE and NRSE modes. I think of the inverting input as the reference, so "Non-Referenced Single-Ended" doesn't make sense to me. And, if I run the AISense line to my remote sensor, isn't that a Referenced Single-Ended measurement?

 

Yesterday, I noticed that at least some on-line documentation now refers to GRSE (Ground Referenced Single-Ended); adding that single letter helps a lot. What about adding another single letter and referring to the other mode as RRSE (Remote Referenced Single-Ended)? One letter could save a lot of people a lot of time.