Data Acquisition Idea Exchange


I have recently upgraded the controller in one of our test stands to the Windows 7 operating system.

I was able to get drivers for all 13 of the PXI cards used in the test stand, with the exception of the PXI-4351.

I use this card primarily for voltage measurements although I occasionally do use it with thermocouples. 

I see that the card will become obsolete in September, and that is disappointing.

This card was not inexpensive when we purchased it, and the replacement is nearly $2,000.

I really hope that an updated driver will be made available before NI quits supporting the current product.

I rarely have to set up hardware for a new analog measurement and always have to puzzle over the difference between RSE and NRSE modes. I think of the inverting input as the reference, so "Non-Referenced Single-Ended" doesn't make sense to me. And, if I run the AISense line to my remote sensor, isn't that a Referenced Single-Ended measurement?

 

Yesterday, I noticed that at least some online documentation now refers to GRSE (Ground Referenced Single-Ended); adding that single letter helps a lot. What about adding another single letter and referring to the other mode as RRSE (Remote Referenced Single-Ended)? One letter could save a lot of people a lot of time.
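
For reference, this is how the two modes are selected in the DAQmx C API today; a minimal sketch, assuming a hypothetical device named "Dev1". The mode constants are exactly where the extra letters would help:

  #include <NIDAQmx.h>

  int main(void)
  {
      TaskHandle task = 0;
      DAQmxCreateTask("", &task);

      /* Ground-referenced single-ended: AI+ measured against AI ground */
      DAQmxCreateAIVoltageChan(task, "Dev1/ai0", "", DAQmx_Val_RSE,
                               -10.0, 10.0, DAQmx_Val_Volts, NULL);

      /* "Non-referenced" single-ended: AI+ measured against AISENSE,
         which can be run out to the remote sensor */
      DAQmxCreateAIVoltageChan(task, "Dev1/ai1", "", DAQmx_Val_NRSE,
                               -10.0, 10.0, DAQmx_Val_Volts, NULL);

      DAQmxClearTask(task);
      return 0;
  }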

We need a way to query an output task to determine its most recently output value.  Or, alternatively, a general ability to read back data from an output task's buffer.

 

This one's been discussed lots of times over the years in the forums but I didn't see a related Idea Exchange entry.  Most of the discussion I've seen has related to AO but I see no reason not to support this feature for DO as well.

 

There are many apps where normal behavior is to generate an AO waveform for a long period of time.  Some apps can be interrupted unexpectedly by users or process limit monitoring or safety range checking, etc.  When this happens, the output task will be in a more-or-less random phase of its waveform.  The problem is: how do we *gently* guide that waveform back to a safe default value like 0.0 V?  A pure step function is often not desirable.  We'd like to know where the waveform left off so we can generate a rampdown to 0.  In some apps, the waveform shape isn't directly defined or known by the data acq code.  So how can we ramp down to 0 if we don't know where to start from?  This is just one example of the many cases where it'd be very valuable to be able to determine the most recently updated output value.

 

Approach 1:

  Create a DAQmx property that will report back the current output value(s).  I don't know if/how this fits the architecture of the driver and various hw boards.  If it can be done, I'd ideally want to take an instantaneous snapshot of whatever value(s) is currently held in the DAC.  It would be good to be able to polymorph this function to respond to either an active task or a channel list.

 

Approach 2 (active buffered tasks only):

   We can currently query the property TotalSampPerChanGenerated as long as the task is still active.  But we can't query the task to read back the values stored in the buffer in order to figure out where that last sample put us.  It could be handy to be able to query/read the *output* buffer in a way analogous to what we can specify for input buffers.  I could picture asking to DAQmx Read 1 sample from the output buffer after setting RelativeTo = MostRecentSample, Offset = 0 or -1 (haven't thought through which is the more appropriate choice).  In general, why *not* offer the ability to read back data from our task's output buffers?
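
To make Approach 2 concrete, here is a rough C sketch of a rampdown helper, assuming the proposed readback existed. The TotalSampPerChanGenerated query is real today; the commented-out readback calls borrow the input-buffer RelativeTo/Offset semantics and are purely hypothetical for output tasks:

  #include <NIDAQmx.h>

  /* Sketch only: gently ramp a running AO task to 0 V */
  void rampToZero(TaskHandle aoTask)
  {
      uInt64  total = 0;
      float64 last  = 0.0;   /* would come from the hypothetical readback */
      float64 ramp[1000];
      int32   written = 0;

      /* Real today: how many samples the task has clocked out */
      DAQmxGetWriteTotalSampPerChanGenerated(aoTask, &total);

      /* HYPOTHETICAL: read back the most recently generated value.
         DAQmxSetReadRelativeTo(aoTask, DAQmx_Val_MostRecentSamp);
         DAQmxSetReadOffset(aoTask, -1);
         DAQmxReadAnalogScalarF64(aoTask, 1.0, &last, NULL);       */

      /* With 'last' known, write a linear ramp from last down to 0.0 */
      for (int i = 0; i < 1000; i++)
          ramp[i] = last * (1.0 - (i + 1) / 1000.0);
      DAQmxWriteAnalogF64(aoTask, 1000, 0, 10.0, DAQmx_Val_GroupByChannel,
                          ramp, &written, NULL);
  }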

 

-Kevin P

It would be great if the full DAQmx library supported all NI data acquisition products on Windows, Mac OS X, and Linux. The situation right now is too much of a hodge-podge of diverse drivers with too many limitations. There's an old, full DAQmx library that supports older devices on older Linux systems, but it doesn't look like it's been updated for years. DAQmx Base is available for more current Linux and Mac OS systems, but doesn't support all NI devices (especially newer products). DAQmx Base is also quite limited, and can't do a number of things the full DAQmx library can. It's also fairly bloated and slow compared to DAQmx. While I got my own application working under both Linux and Windows, there are a number of things about the Linux version that just aren't as nice as the Windows version right now. I've seen complaints in the forums from others who have abandoned their efforts to port their applications from Windows to Mac OS or Linux because they don't see DAQmx Base as solid or "commercial-grade" enough.

 

I'd really like to develop my application and easily port it to any current Windows, Mac or Linux system, and have it support any current NI multifunction DAQ device, with a fast, capable and consistent C/C++ API.

 

Anyone else see this as a priority for NI R&D?

Consider the case where you have a cDAQ II chassis filled with analog input modules. If I have several modules that are running at the same rate I can use channel expansion to include them in the same task. Since they're in the same task they will have the same timing and triggering and will share the same timing engine. This leaves the other two timing engines available for tasks with different timing constraints. If I want to dynamically change the sampling rate of just one of the modules I would have to stop the task and, hence, the acquisition on the other modules. If the same timing engine could be shared among tasks, each module could have its own task and be independently controlled. I imagine this would involve a lot of changes to the DAQmx task structure but it's something that would come in handy.
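
For comparison, the closest workaround available today is to export one task's sample clock and use it as the external clock of a second task; a sketch with hypothetical cDAQ module and terminal names. Both tasks can then start and stop independently, but they are stuck at the same rate, which is exactly what this idea would fix:

  TaskHandle taskA = 0, taskB = 0;
  DAQmxCreateTask("", &taskA);
  DAQmxCreateTask("", &taskB);
  DAQmxCreateAIVoltageChan(taskA, "cDAQ1Mod1/ai0:3", "", DAQmx_Val_Cfg_Default,
                           -10.0, 10.0, DAQmx_Val_Volts, NULL);
  DAQmxCreateAIVoltageChan(taskB, "cDAQ1Mod2/ai0:3", "", DAQmx_Val_Cfg_Default,
                           -10.0, 10.0, DAQmx_Val_Volts, NULL);

  /* Task A owns a timing engine and exports its clock to a chassis terminal */
  DAQmxCfgSampClkTiming(taskA, "", 10000.0, DAQmx_Val_Rising,
                        DAQmx_Val_ContSamps, 10000);
  DAQmxExportSignal(taskA, DAQmx_Val_SampleClock, "/cDAQ1/PFI0");

  /* Task B slaves to that clock; independent start/stop, but not an
     independent rate, and (as far as I know) it still consumes its own
     timing engine */
  DAQmxCfgSampClkTiming(taskB, "/cDAQ1/PFI0", 10000.0, DAQmx_Val_Rising,
                        DAQmx_Val_ContSamps, 10000);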

The title pretty much says it all. I would like the ability either to configure a full hardware complement as simulated devices and then switch them over to real devices when the hardware arrives, or to go from real devices to simulated devices, without the need to add new, discrete simulated devices to MAX.

 

This would make for much easier offline development and ultimate deployment to real hardware.

A customer of mine was trying to read 8 thermocouple readings simultaneously over the course of a week and then store the data.  She quickly found out that there are memory limitations with Signal Express: eventually, no matter how she saved or logged the data, she would run out of memory.  My product suggestion is to write code that determines the RAM on a customer's computer, as well as the available hard drive space, and shows the customer (at the start of a task) exactly how long they can acquire signals without filling up their hard drive or seriously draining their resources.  That way, when they start a process, they are aware of what they're working with in terms of space at that instant, rather than when an error pops up three hours or so later.

Most customers would figure out these needs in advance of any acquisition, but for the ease of the customer this would be a welcome additional feature.  It would also be nice to have a document that explained how this time frame was calculated.  I suggest that when a task is run, a pop-up explains the maximum number of samples that can be acquired and the time that can pass.  The document I mentioned could be available via a hyperlink whose text reads "How did we calculate this number?", which could explain the process in more depth.
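
The arithmetic behind such an estimate is simple enough to sketch; a back-of-the-envelope version in C, assuming 8 bytes per sample and ignoring file-format and file-system overhead (all numbers are made-up examples):

  #include <stdio.h>

  int main(void)
  {
      double freeDiskBytes = 100e9;   /* example: 100 GB free         */
      int    numChannels   = 8;       /* e.g., 8 thermocouples        */
      double sampleRate    = 1000.0;  /* samples/s per channel        */
      double bytesPerSamp  = 8.0;     /* one float64 sample           */

      double bytesPerSec = numChannels * sampleRate * bytesPerSamp;
      double seconds     = freeDiskBytes / bytesPerSec;

      printf("Logging uses %.0f bytes/s -> disk full in %.1f days\n",
             bytesPerSec, seconds / 86400.0);
      return 0;
  }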

 

I have submitted this idea for Signal Express at the product suggestion page found at www.ni.com/contact ("feedback" link in bottom left of window).  I figured this would be a good idea for DAQ in general for any extended signal acquisition.

 

Shawn Shaw

Applications Engineering Department

National Instruments

Currently it is hard to find out whether a property can be set for a specific channel, or only per module. An example is the Strain Gauge Excitation property, which can only be set per module. Other properties can be different per channel.
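
For example, the C API happily accepts a per-channel address for excitation either way; on a module that only supports per-module excitation, you find out from a run-time error or a coerced value. A sketch with a hypothetical module name:

  /* Both calls compile fine; whether the second is legal depends on the module */
  DAQmxSetAIExcitVal(task, "cDAQ1Mod3/ai0", 2.5);
  int32 err = DAQmxSetAIExcitVal(task, "cDAQ1Mod3/ai1", 5.0);
  if (err < 0)
  {
      /* Only now do we learn that excitation is per-module on this device,
         which is exactly the information the idea asks MAX to show up front */
  }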

 

Idea: add a device-specific comment about the different properties in, for example, MAX, like this:

 

[Attached image: MAX idea.png]

 

 

What I would like is a continuous-sampling DAQ task which resets the count to zero every time DAQmx Read.vi is called.  This way you see which direction, and by how much, the encoder has moved between samples.  If it also provided the delta time, that would be ideal.

 

There are ways to do things like this currently, but I have run up against two applications which would benefit from something like this, and I cannot be the only one.
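
For what it's worth, my current workaround is to difference the cumulative count in software and timestamp on the host; a minimal C sketch, given an already-running counter task. Hardware-provided deltas and delta times are what the idea would add:

  #include <NIDAQmx.h>
  #include <time.h>

  /* Read the cumulative edge count; return the delta since the last call */
  void readDelta(TaskHandle ctrTask, uInt32 *prevCount, double *prevTime,
                 int32 *deltaCount, double *deltaTime)
  {
      uInt32 count = 0;
      DAQmxReadCounterScalarU32(ctrTask, 1.0, &count, NULL);

      /* Host-side time, not hardware time; resolution is platform-dependent */
      double now = (double)clock() / CLOCKS_PER_SEC;

      *deltaCount = (int32)(count - *prevCount);  /* uInt32 math handles wrap */
      *deltaTime  = now - *prevTime;
      *prevCount  = count;
      *prevTime   = now;
  }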

The NI USB-4432 has 5 input channels, but only 4 of them have software-selectable IEPE signal conditioning (0 or 2.1 mA).

 

 

Suggestion: make an NI USB-4433 with software-selectable IEPE signal conditioning (0 or 2.1 mA) on all 5 channels.

I am using DAQmx physical channel controls in the user interface to select particular DAQ modules. I would like to display only modules of a particular type (AI, DI, AO, DO, ...) that are connected in the system. For example, I need to display only the DO NI-9477 cDAQ modules in the physical channel control, not other DIO modules like the NI 9403. The I/O name filtering option cannot filter out other models of the same type.

 

It would be great if NI provided an option for filtering module names based on their product type or on user-configurable naming (for example, if the cDAQ1 device is renamed "DEV1", the user could filter devices based on the string "DEV").
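
The building blocks for this already exist in the C API, so a filtered control can be populated by hand today; a sketch that lists devices and keeps only one product type (the exact product-type string is an assumption):

  #include <NIDAQmx.h>
  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
      char devNames[1024], productType[256];

      /* Comma-separated list of all devices known to the system */
      DAQmxGetSysDevNames(devNames, sizeof(devNames));

      for (char *dev = strtok(devNames, ", "); dev; dev = strtok(NULL, ", "))
      {
          DAQmxGetDevProductType(dev, productType, sizeof(productType));
          if (strcmp(productType, "NI 9477") == 0)  /* keep only NI 9477 modules */
              printf("%s\n", dev);
      }
      return 0;
  }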

We normally have a DAQ system consisting of several elements:

-Sensor

-Custom filtering/attenuation

-Signal conditioning

-NI-DAQ device

 

When we use scales in DAQmx, we have to create a scale for every 'route' we use (sometimes we have to use a 4 kA sensor for a 100 A signal).

If we could define a scale in a task as a composition of multiple scales, we could directly pick the sensor and the signal conditioning we use for each signal, and a change in one of these elements could easily be adjusted in a single place.
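
Today the composition has to be done by hand: multiply the sensor and conditioning slopes into one linear scale. A C sketch with made-up numbers for a 4 kA sensor feeding a conditioning stage ("CurrentChain" is a hypothetical scale name):

  /* Sensor: 4000 A at 10 V output -> 400 A/V; conditioning gain: 0.5 V/V */
  /* Combined slope: amps = (400 / 0.5) * measured volts = 800 * volts    */
  float64 sensorSlope = 400.0;  /* A per sensor volt (made-up number)     */
  float64 condGain    = 0.5;    /* conditioning output V per input V      */

  DAQmxCreateLinScale("CurrentChain", sensorSlope / condGain, 0.0,
                      DAQmx_Val_Volts, "Amps");
  /* ...then reference "CurrentChain" as the custom scale when creating
     the AI channel. Change the sensor, and you must recompute by hand. */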

 

Ton

I continually come to your site looking for the DAQmx Base API manual and have yet to find it.  I eventually have to dig out an old CD to find my copy.

 

How 'bout posting these online so that we can help ourselves out of jams?

 

Thanks,

Jeff

Based on this question, I would like to add a new category of events to LabVIEW: MAX events.

 

This category could contain the following events:

-Hardware Added

-Hardware removed

-Configuration changed

    -Scales

    -Channels

    -Tasks

 

If you know other events, please post them.
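
To make it concrete, a purely hypothetical C-style sketch of what such registrations could look like (none of these names exist in DAQmx or MAX today; the LabVIEW version would be an event structure source):

  /* HYPOTHETICAL API -- nothing below exists today */
  typedef void (*MaxEventCallback)(const char *deviceName, void *userData);

  int32 MaxRegisterHardwareAddedEvent(MaxEventCallback cb, void *userData);
  int32 MaxRegisterHardwareRemovedEvent(MaxEventCallback cb, void *userData);
  int32 MaxRegisterConfigChangedEvent(MaxEventCallback cb, void *userData);
      /* configuration changes: scales, channels, tasks */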

As a user of the SCXI-1000 with a 1302 board, I had a lot of difficulty finding information on the pinout of my board. The 1302 is a direct link to the 6220 DAQ card in my computer. When I first looked for the 1302, I could find very little information on it. By adding a few windows and commands to MAX (Measurement & Automation Explorer), I probably would have been able to solve my problem in minutes instead of days.

 

Solution

A. The ability to add boards which link to the pinout of the computer's DAQ card, and treat them as a board.

B. A list of pinout boards which can link to the DAQ. Since I was dealing with a 1302, which has 34 pins instead of 64 (or something close to that), I had difficulty finding the information on the pinouts. It was far easier to find the pinout of the 6220.

 

It could be I was a little annoyed at the amount of time that I spent on finding information on the 1302, but it took me far longer to find the information than it should have. This data should be built in.

It has come up a few times from customers, and I wanted to gauge interest and solicit ideas on how this should work.

 

Currently, with the built-in TDMS logging support, if you want to change to a new file in the middle of logging, you need to stop the task and start again.  For some use cases, this isn't practical (for example, http://forums.ni.com/t5/LabVIEW/Why-the-TDMS-file-is-larger-than-it-should-be/m-p/1176139#M511099).

 

The question is: How would you like to specify the "new file" behavior and what are your use cases?

 

For instance, a couple ideas to get the ball rolling:

  1. Add an interval attribute like "Change file after n samples".  We would then auto-increment the file name and change to that file when we have logged "n" samples.
  2. Make the file path attribute changeable at runtime.  We have a file path attribute for logging.  The idea here would be to support changing the file path "on the fly" without stopping and starting the task.  The problem here is that it would not suit a use case where you want a specific file size very well.  It also wouldn't be as easy to use as #1, though it would be more flexible.  (A sketch of today's behavior follows this list.)
  3. (Any additional ideas/use cases?)
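
For context, this is the logging setup both ideas would extend, as it exists in the C API today (path and group name hypothetical, and "task" is an already-configured AI task):

  /* Configure built-in TDMS logging before starting the task */
  DAQmxConfigureLogging(task, "C:\\data\\run001.tdms",
                        DAQmx_Val_LogAndRead, "Group1",
                        DAQmx_Val_CreateOrReplace);
  DAQmxStartTask(task);
  /* ...currently, switching to run002.tdms means stopping the task first */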

Thank you for your input!

 

Andy McRorie

NI R&D


What I'm currently thinking (because it seems the most flexible across different criteria and situations) is to simply allow you to set the file path property while the task is running (on the DAQmx Read property node).  The only downside I can think of with this approach is that you wouldn't know exactly when we change to the new file.  We could guarantee it within (for example) 1 second, but you wouldn't be able to specify the exact size.
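
In C API terms, the logging file path property already exists; the proposal is to make this call legal on a running task (path hypothetical):

  /* Real property, currently configured as part of setting up logging.
     The proposal: allow this call while the task runs, switching files
     within roughly a second of the call. */
  DAQmxSetLoggingFilePath(task, "C:\\data\\run002.tdms");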

 

Would this be a good solution for you?  Can you think of a better way to specify this behavior?

 

Hi all,

 

Any series card should have a feature listing the different parameters it supports, like voltage, temperature, etc. (maybe through a property node), so that the user can configure the required parameter from among those supported.

Ex: the SCXI-1520 module can be configured for strain, pressure, or voltage, but this information is only available from the manual or when a task is created in MAX. In LabVIEW I can't get this information directly: the software also lets me configure the 1520 for temperature, and we only come to know that the 1520 module doesn't support temperature parameters once we try to acquire.
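
For what it's worth, newer DAQmx versions expose at least part of this in the C API: a device property listing the supported AI measurement types. A sketch, assuming a module named "SC1Mod1"; for an SCXI-1520, temperature types would simply be absent from the returned list:

  /* Query the required size, then fetch the supported measurement types */
  int32 count = DAQmxGetDevAISupportedMeasTypes("SC1Mod1", NULL, 0);
  int32 types[32];
  if (count > 32) count = 32;
  DAQmxGetDevAISupportedMeasTypes("SC1Mod1", types, count);

  /* Entries are DAQmx_Val_* constants such as DAQmx_Val_Strain or
     DAQmx_Val_Voltage; check this list before building the task */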

 

So, what do you all think? Please share your ideas on this.

 

Regards,

geeta 

Hello,

 

How often have you built LabVIEW applications using simulated DAQmx boards?

And how often were you limited by the default behaviour of the simulated boards (sine wave for analog inputs, square counter signal for digital inputs, ...)?

 

It would be nice to integrate into DAQmx simulated boards the ability to modify the default behaviour of the simulated inputs through dedicated popups.

 

It would be nice, for each task linked to a simulated DAQmx board, to be able to launch a popup window:

 

  • For digital inputs, the ability to modify the current binary value of each configured channel.
  • For analog inputs, the ability to choose between a fixed value, a sine wave, a square signal, white noise, ...
  • For digital outputs, the ability to view the currently set values.
  • For analog outputs, the ability to view the current simulated output value on a waveform chart.

 

A more powerful tool could also integrate a simulated channel-switching mechanism: a simulated output could be linked to a simulated input.

 

This feature could be a good way to create an application which simulates a complete process; such an application could be used to validate a complete system (a kind of SIL architecture).

 

Another idea: a complete DAQmx simulation API.

 

  • Creation of an API which could instantiate a simulated DAQmx board (which could be seen via MAX)
    • It would take the place of the current, limited simulated DAQmx boards
  • This device could then be accessed by other applications through DAQmx
  • This API could have access to all channels of this simulated device
  • This API could force, programmatically, the values of the simulated input channels according to a realistic process model

 

Something like this ...

 

 

[Attached image: DaqMxSimulatedAPI.PNG]
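
In C terms, it might look something like this purely hypothetical sketch (none of these functions exist in DAQmx today; all names are made up):

  /* HYPOTHETICAL simulation API -- nothing below exists in DAQmx */
  int32 SimCreateDevice(const char name[], const char productType[]); /* visible in MAX    */
  int32 SimSetAIValue(const char chan[], float64 volts);              /* force an input    */
  int32 SimSetDIValue(const char chan[], uInt32 lineStates);          /* force DI lines    */
  int32 SimGetAOValue(const char chan[], float64 *volts);             /* observe an output */

  /* A process model could then drive inputs from outputs, e.g.:
     SimGetAOValue("SimDev1/ao0", &v);
     SimSetAIValue("SimDev1/ai0", model(v));                          */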

 

Beyond being a better integration for the poor man's DAQ, I see it as a door opener for serious DAQ hardware.