Data Acquisition Idea Exchange


The title pretty much says it all. I would like the ability either to configure a full hardware complement as simulated devices and then switch them over to real devices when the hardware arrives, or to go from real devices to simulated devices, without having to add new, discrete simulated devices in MAX.

 

This would make offline development, and the eventual deployment to real hardware, much easier.

I continually come to your site looking for the DAQmx Base API manual and have yet to find it.  I eventually have to dig out an old CD to find my copy.

 

How 'bout posting these online so that we can help ourselves out of jams?

 

Thanks,

Jeff

It is a frequent requirement to make measurements on production lines. Position on these is often tracked with rotary encoders (https://en.wikipedia.org/wiki/Rotary_encoder). Many NI devices can accept the quadrature pulse train from such a device and correctly produce a current position count. The information in the two-phase pulse train allows the counter to correctly track forward and reverse motion.

 

What would be very useful would be a callback in NI-DAQmx that is called after every n pulses, ideally with a flag indicating whether the counter is higher or lower than the previous value, i.e. the direction.
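For what it's worth, the closest I've found with the ANSI C API today is a buffered position task clocked by the encoder's own pulses plus an Every N Samples callback, with the direction inferred by comparing successive counts. A rough sketch only; the device, counter, and PFI names are placeholders:

#include <stdio.h>
#include <NIDAQmx.h>

static float64 lastCount;

/* Called by DAQmx after every 100 buffered samples, i.e. every 100 encoder pulses. */
int32 CVICALLBACK EveryNPulses(TaskHandle task, int32 eventType, uInt32 nSamples, void *callbackData)
{
    float64 counts[100];
    int32   numRead = 0;

    DAQmxReadCounterF64(task, nSamples, 1.0, counts, 100, &numRead, NULL);
    if (numRead > 0) {
        /* "Direction flag": did the count go up or down since last time? */
        printf("count=%.0f direction=%s\n", counts[numRead - 1],
               (counts[numRead - 1] >= lastCount) ? "forward" : "reverse");
        lastCount = counts[numRead - 1];
    }
    return 0;
}

int main(void)
{
    TaskHandle task = 0;

    DAQmxCreateTask("", &task);
    /* X4 quadrature decoding, counts in ticks, no Z index. */
    DAQmxCreateCIAngEncoderChan(task, "Dev1/ctr0", "", DAQmx_Val_X4, 0, 0.0,
                                DAQmx_Val_AHighBHigh, DAQmx_Val_Ticks, 1024, 0.0, NULL);
    /* Sample the position on every encoder A pulse (wired/routed to a PFI line). */
    DAQmxCfgSampClkTiming(task, "/Dev1/PFI8", 1000000.0, DAQmx_Val_Rising,
                          DAQmx_Val_ContSamps, 10000);
    DAQmxRegisterEveryNSamplesEvent(task, DAQmx_Val_Acquired_Into_Buffer, 100, 0,
                                    EveryNPulses, NULL);
    DAQmxStartTask(task);
    getchar();                       /* run until Enter is pressed */
    DAQmxClearTask(task);
    return 0;
}

It works, but the direction only arrives in blocks of n samples and everything runs host-side, which is exactly why a native per-n-pulses callback with a direction flag would be nicer.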

 

This has recently been discussed on the Multifunction DAQ board here: http://forums.ni.com/t5/Multifunction-DAQ/quadrature-encoder-based-triggering/td-p/3125468 . So I am not alone in requesting something more programmer-friendly than the workaround offered there.

 

 

Every time I have to work with an NI DAQ device, the first thing I need to know is what the pins can or can't do.

Currently this involves looking through something like seven different documents to find little bits of information and bring them back to your application.

 

A block diagram could easily be a reference point for the rest of the documentation (you want to know about the pin I/O for your device? Look at this document).

Plus, a good block diagram can tell you what you need to know quickly and clearly. A picture is worth a thousand words, right?

 

Some might find the current documentation adequate, but personally I would really like to have a block diagram that represents the internals and capabilities of the pins and the device in general. Most microcontrollers have this, and it is an extremely useful tool. So why not have one for the DAQ devices as well?

Currently, when streaming analog or digital samples to a DAQ board, the output stays at the level of the last sample received when a buffer underflow occurs. This behavior can be observed on USB X Series multifunction DAQ boards; I have the USB-6363 model. The exact mode is hardware-timed, buffered, continuous, and non-regenerating. The buffer underflow error code is -200290: “The generation has stopped to prevent the regeneration of old samples. Your application was unable to write samples to the background buffer fast enough to prevent old samples from being regenerated.”

 

I would like to have an option to configure the DAQ hardware to immediately set the analog and digital outputs to a predefined state if a buffer underrun occurs. I would also like an option to immediately set one of the PFI pins on buffer underrun.

 

I believe this could be accomplished by modifying the X Series firmware and exposing the configuration of this feature in the DAQmx API. If no more samples are available in the buffer, the DAQ board should immediately write the predefined digital states / analog levels to the outputs and indicate the buffer underrun state on the PFI line. Then it should report the error to the PC.

 

Doing this in firmware has certain advantages:

  1. It can be done quickly (possibly within the time of the next missing sample – at 2 MS/s that’s 0.5 µs).
  2. It handles all situations (software lockups, excessive CPU loading by other processes, loss of communication due to bus traffic, interface disconnection…).
  3. It does not require any additional hardware (to turn off outputs externally).
  4. Buffer underrun indication on PFI line could provide additional safety measure (it could be used for example to immediately disable external power amplifier connected to DAQ AO). 

Doing this using other methods is just too slow, does not handle all situations, or requires additional external circuitry.

 

Setting the outputs from software once the error occurs is slow (~25 ms, the time of 50,000 samples at 2 MS/s) and does not handle physical disconnection of the interface. The analog output does eventually go to 0 V on the USB-6363 when the USB cable is disconnected, but it takes about half a second.
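For reference, the software-side handling I mean looks roughly like the sketch below (channel and helper names are made up): the streaming loop traps error -200290, tears the task down, and writes a safe level on demand. This is exactly the path that costs tens of milliseconds and does nothing if the cable is pulled.

#include <NIDAQmx.h>

/* Hypothetical recovery path: called when the AO streaming write returns -200290. */
static void HandleUnderflow(TaskHandle *streamTask)
{
    TaskHandle safeTask = 0;

    DAQmxStopTask(*streamTask);
    DAQmxClearTask(*streamTask);
    *streamTask = 0;

    /* Drive a predefined safe level with a software-timed, on-demand write. */
    DAQmxCreateTask("", &safeTask);
    DAQmxCreateAOVoltageChan(safeTask, "Dev1/ao0", "", -10.0, 10.0, DAQmx_Val_Volts, NULL);
    DAQmxWriteAnalogScalarF64(safeTask, 1, 1.0, 0.0, NULL);   /* e.g. 0 V */
    DAQmxClearTask(safeTask);
}

/* In the streaming loop:
   err = DAQmxWriteAnalogF64(streamTask, n, 0, 10.0, DAQmx_Val_GroupByChannel, data, NULL, NULL);
   if (err == -200290) HandleUnderflow(&streamTask);                                            */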

 

Using the watchdog timer would also be too slow. The timer can be set to quite a short time, but from software I would not be able to reset it faster than every 10 ms. It would also require switching off the analog channels externally with additional circuitry, because the watchdog timer is not available for analog channels.

 

The only viable solution right now is to route the task sample clock to a PFI line and detect when it stops toggling. It actually does stop after the last sample is generated. Once that occurs, the outputs can be switched off externally. This requires a whole lot of external circuitry and major development time. If you need the reaction time to be within one or two sample periods, the pulse detector needs to be customized for every sampling rate you might want to use. To make this work right for analog output, it would take a RISC microcontroller and analog electronic switches. If you wanted to use an external trigger to start the waveform, the microcontroller would have to turn on the analog switch, look for the beginning of the waveform sample clock, record the initial clock interval as a reference, and finally turn off the switch if no pulse is received within the reference time.

 

I’m actually quite impressed by how well the USB-6363 handles streaming to the outputs. It allows me to output waveforms with a complexity that regular arbitrary generators with fixed memory and sequencing simply cannot handle. Buffer underflow, even at the highest sampling rate, is quite rare. However, to make my system robust and safe, I need a fast, simple, and reliable method of quickly shutting down the outputs, which only a hardware/firmware solution can provide.

 

Thanks,

Sebastian

It has come up a few times from customers, and I wanted to gauge interest and solicit ideas on how this should work.

 

Currently, with the built-in TDMS logging support, if you want to change to a new file in the middle of logging, you need to stop the task and start again.  For some use cases, this isn't practical (for example, http://forums.ni.com/t5/LabVIEW/Why-the-TDMS-file-is-larger-than-it-should-be/m-p/1176139#M511099).
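For context, this is roughly how the built-in logging is set up today through the ANSI C API (a sketch; device, rate, and file names are made up). The point is that the file path is fixed before the task starts and cannot change until it stops:

#include <NIDAQmx.h>

int main(void)
{
    TaskHandle task = 0;

    DAQmxCreateTask("", &task);
    DAQmxCreateAIVoltageChan(task, "Dev1/ai0", "", DAQmx_Val_Cfg_Default,
                             -10.0, 10.0, DAQmx_Val_Volts, NULL);
    DAQmxCfgSampClkTiming(task, "", 10000.0, DAQmx_Val_Rising, DAQmx_Val_ContSamps, 10000);

    /* Built-in TDMS logging: samples stream to this file for the life of the task. */
    DAQmxConfigureLogging(task, "C:\\data\\run001.tdms", DAQmx_Val_LogAndRead,
                          "Run 1", DAQmx_Val_OpenOrCreate);

    DAQmxStartTask(task);
    /* ... read loop; stopping the task is currently the only way to switch files ... */
    DAQmxClearTask(task);
    return 0;
}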

 

The question is: How would you like to specify the "new file" behavior and what are your use cases?

 

For instance, a couple ideas to get the ball rolling:

  1. Add an interval attribute like "Change file after n samples".   We would then auto-increment the file name and change to that file when we have logged "n" samples.
  2. Make the file path attribute changeable at runtime.  We have a file path attribute for logging.  The idea here would be to support changing the file path "on the fly" without stopping and starting the task.  The problem is that it would not suit a use case where you want a specific file size very well.  It also wouldn't be as easy to use as #1, though it would be more flexible.
  3. (Any additional ideas/use cases?)

Thank you for your input!

 

Andy McRorie

NI R&D

It has come up a few times from customers, and I wanted to gauge interest and solicit ideas on how this should work.

 

Currently, with the built-in TDMS logging support, if you want to change to a new file in the middle of logging, you need to stop the task and start again.  For some use cases, this isn't practical (for example, http://forums.ni.com/t5/LabVIEW/Why-the-TDMS-file-is-larger-than-it-should-be/m-p/1176139#M511099).

 

The question is: How would you like to specify the "new file" behavior and what are your use cases?

 

What I'm currently thinking (because it seems the most flexible to different criteria and situations) is to simply allow you to set the file path property while the task is running (on DAQmx Read property node).  The only downside I can think of with this approach is that you wouldn't know exactly when we change to the new file.  We could guarantee within (for example) 1 second, but you wouldn't be able to specify the exact size.
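Purely as an illustration (not an existing capability): if the logging file path property were writable while the task runs, the C-side usage might be nothing more than the call below, with DAQmx switching files at a task-determined moment. The function name and runtime behavior here are an assumption, not a documented API.

/* Hypothetical: retarget the TDMS log while the task keeps running.
   DAQmx would switch files at some point within, say, the next second. */
DAQmxSetLoggingFilePath(task, "C:\\data\\run002.tdms");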

 

Would this be a good solution for you?  Can you think of a better way to specify this behavior?

 

I need to frequently check for an overtemperature condition, or any other health parameters, of the NI PCIe-6323 card for my application.

I am using the DAQmx ANSI C library for my application.

 

I have tried using the DAQmxGetReadOvertemperatureChansExist function but found that it is for C Series devices.
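For reference, the attempt looked like the sketch below (task setup omitted), and as noted the attribute is documented for C Series devices rather than the PCIe-6323:

#include <NIDAQmx.h>

int CheckOvertemp(TaskHandle aiTask)
{
    bool32 overtempChansExist = 0;

    /* Read property: do any channels report overtemperature? (documented for C Series) */
    DAQmxGetReadOvertemperatureChansExist(aiTask, &overtempChansExist);
    return overtempChansExist ? 1 : 0;
}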

 

Please help me out.

Counter tasks can only take one channel, due to the nature of timed signals, obviously. When setting up a system with 16 DUTs with counter outputs, this requires 16 tasks, every single one of which has to be painstakingly created and configured. (As an aside: defining a tab order still seems to be a mystery to NI's programmers, even though LabVIEW supports it.)

Wouldn't it be nice to have a Ctrl+C / Ctrl+V sequence for tasks, so that you then only have to modify the physical channel? IMHO: yes.
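In the meantime, the least painful workaround I know is to skip MAX for these and create the tasks in a loop in code, so only the physical channel string changes. A rough ANSI C sketch (device and counter names are placeholders):

#include <stdio.h>
#include <NIDAQmx.h>

#define NUM_DUTS 16

int main(void)
{
    TaskHandle coTasks[NUM_DUTS] = {0};
    char       counter[64];
    int        i;

    for (i = 0; i < NUM_DUTS; i++) {
        /* Dev1/ctr0 ... Dev1/ctr15 -- identical configuration, different channel. */
        sprintf(counter, "Dev1/ctr%d", i);
        DAQmxCreateTask("", &coTasks[i]);
        DAQmxCreateCOPulseChanFreq(coTasks[i], counter, "", DAQmx_Val_Hz,
                                   DAQmx_Val_Low, 0.0, 1000.0, 0.5);
        DAQmxCfgImplicitTiming(coTasks[i], DAQmx_Val_ContSamps, 1000);
        DAQmxStartTask(coTasks[i]);
    }

    getchar();               /* generate until Enter is pressed */
    for (i = 0; i < NUM_DUTS; i++)
        DAQmxClearTask(coTasks[i]);
    return 0;
}

A copy/paste (or "duplicate task") in MAX would still be the friendlier answer for people who prefer to keep the configuration there.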

 

KR

nimic

The DAQmx API is extremely useful; one especially useful part of it is the automatic logging feature. This part of the API is efficient, easy to set up, and largely bug-free. Well done, NI.

 

One problem with the automatic logging feature is the t0 value. It is determined by the system clock, that is, the clock onboard the controller. A lot of PXI systems can use GPS modules or other timing modules; it would be good to be able to use that clock instead.

 

In NI-Sync we can create an event at a specific time and use it to trigger the DAQmx data acquisition. It would be nice to use this "event time" instead of the system clock. There is a property in the DAQmx Timing property node, under Advanced, called First Sample Timestamp:Value. However, this property is read-only; please make it writable as well. We could then write an exact GPS start time to the data acquisition.

 

Below is one simple use case of the property node.

 

mcduff

(Attached image: snip.png)

 

 

 

The term "Incomplete Sample Detection" comes from DAQmx Help.  It affects buffered time measurement tasks on X-series boards, the 661x counter/timers, and many 91xx series cDAQ chassis.  It is meant to be a feature, but it can also be a real obstacle.

 

How the feature works ideally: Suppose you want to configure a counter task to measure buffered periods of a 1-channel encoder.  You use implicit timing because the signal being measured *is* the sample clock.  The 1st "sample clock" occurs on the 1st encoder edge after task start, but the time period it measures won't represent a complete encoder interval.  Reporting this 1st sample could be misleading as it measures the arbitrary time from the software call to start the task until the next encoder edge.

   On newer hardware with the "Incomplete Sample Detection" feature, this meaningless 1st sample is discarded by DAQmx.  On older hardware, this 1st sample was returned to the app, and it was up to the app programmer to deal with it.
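For readers less familiar with this configuration, the task being described looks something like the sketch below in the ANSI C API (device and counter names are placeholders):

#include <NIDAQmx.h>

int main(void)
{
    TaskHandle task = 0;
    float64    periods[1000];
    int32      numRead = 0;

    DAQmxCreateTask("", &task);
    /* Measure the time between successive rising edges of the encoder signal. */
    DAQmxCreateCIPeriodChan(task, "Dev1/ctr0", "", 0.000001, 1.0,
                            DAQmx_Val_Seconds, DAQmx_Val_Rising,
                            DAQmx_Val_LowFreq1Ctr, 0.001, 4, NULL);
    /* Implicit timing: each measured period *is* a sample; the measured
       signal acts as the sample clock. */
    DAQmxCfgImplicitTiming(task, DAQmx_Val_ContSamps, 10000);
    DAQmxStartTask(task);

    /* On newer hardware the sample for the first (partial) interval is
       silently discarded; on older hardware it is returned to the app. */
    DAQmxReadCounterF64(task, 1000, 10.0, periods, 1000, &numRead, NULL);

    DAQmxClearTask(task);
    return 0;
}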

 

Problem 1: Now suppose I'm also using this same encoder signal as an external sample clock for an AI task that I want to sync with my period measurement task.  Since DAQmx is going to discard the counter sample that came from the 1st edge, my first 5 samples will correspond to edges 2-6.  Over on the AI task, my first 5 samples will correspond to edges 1-5.

   My efforts to sync my tasks are now thwarted because their data streams start out misaligned.  The problem and workaround I'm left with are at least as troublesome as the one that was "solved" by this feature.

 

Problem 2:  Suppose I had a system where my period measurement task also had an arm-start trigger, and I depended on a cumulative sum of periods to be my master time for the entire system.  In this case, the 1st sample is the time from the arm-start trigger to the 1st encoder edge, and it is *entirely* meaningful.  On newer hardware, DAQmx will discard it and I'll have *no way* to know my timing relative to this trigger. 

   Older boards (M-series, 660x counter/timers) could handle this situation just fine. On newer boards, I'm stuck with a much bigger problem than the one that the feature was meant to solve.

 

So can we please have a DAQmx property that allows us to turn this "feature" OFF?  I understand that it'd have to be ON by default so as not to break existing code.

 

 

-Kevin P

When a DI change detection task runs, the first sample shows the DI state *after* the first detected change.  There's not a clear way to know what the DI state was just *before* the first detected change, i.e. it's *initial* state.

 

This idea has some overlap with one found here, but this one isn't restricted to usage via DAQmx Events and an Event Structure.  Forum discussions that prompted this suggestion can be seen here and here.

 

The proposal is to provide an addition to the API such that an app programmer can determine both initial state just before the first detected change and final state resulting from each detected change.  The present API provides only the latter.

 

Full state knowledge before and after each change can be used to identify the changed lines.  (Similarly, initial state and change knowledge could be used to identify post-change states.)
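To make that concrete: with both states in hand, identifying the changed lines and their direction is a couple of bit operations (a small sketch; the state values would come from the DI reads):

#include <stdint.h>

/* Given the port state just before and just after a change event,
   report which lines changed and in which direction.              */
static void ClassifyChange(uint32_t previousState, uint32_t currentState,
                           uint32_t *rose, uint32_t *fell)
{
    uint32_t changed = previousState ^ currentState;   /* bits that differ */
    *rose = changed & currentState;                     /* went low -> high */
    *fell = changed & previousState;                    /* went high -> low */
}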

 

My preferred approach in the linked discussions is to expose the initial state through a queryable property node.  The original poster preferred to have a distinct task type in which initial state would be the first returned sample.  A couple good workarounds were proposed in those threads by a contributor from NI, but I continue to think direct API support would be appropriate.

 

 

-Kevin P

I use DAQmx a lot for writing .NET-based measurement software.

 

Whereas the API itself is quite decent, the docs are horrible. Accessing them is convoluted at best, requiring the VS help viewer. Almost nothing is available online and decent examples are quite scarce, which will definitely be an issue for absolute beginners...

 

This definitely deserves some attention!

 

Cheers,

 

Kris

Would it be possible to update the export wizard in MAX so that the NI-DAQmx Tasks list under Data Neighborhood is listed in alphabetical order?  In the main MAX application the list is in order, so finding tasks that are named with a common prefix is easy.  However, in the export wizard you have to scroll and hope you clicked them all.

 

Thanks,

-Brian

Certified LabVIEW Developer

Lead Engineer - LabVIEW

Advanced Development

GE Appliances

 

T 502-452-3831

F 502-452-0467

D *334-3831

E brian.schork@ge.com

In MAX, you can open up a test panel for a DAQmx device.

 

I would like to be able to format the numbers on the axes of the graphs. I have a calibration routine that requires the signal to get as close to 5 V as possible. Once you are within less than 10 mV, the numbers on the vertical axis go from 5.01 to 5, so all you see on the graph is a bunch of 5's. It would be nice to be able to see the values in as much resolution as the channel can handle. Even at the maximum range, it can still do 2 mV per bit. It would be nice to see 5.004 instead of 5.

 

 

 

We normally have a DAQ system consisting of several elements:

-Sensor

-Custom filtering/attenuation

-Signal conditioning

-NI-DAQ device

 

When we use scales in DAQmx we have to create a scale for every 'route' we use (sometimes we have to use a 4 kA sensor for a 100 A signal).

If we could define a scale in a task as a chain of multiple scales, we could directly pick the sensor and the signal conditioning we use for each signal. A change in one of these elements could then easily be accommodated.
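For comparison, today each 'route' ends up as its own custom scale with the element gains folded together by hand, something like the sketch below (names and numbers are invented):

#include <NIDAQmx.h>

int main(void)
{
    /* 4 kA sensor: 400 A per volt at its output; conditioning stage adds a gain of 0.5.
       Today the combined slope has to be recomputed by hand for every sensor/conditioning route. */
    float64 sensorSlope      = 400.0;   /* A/V */
    float64 conditioningGain = 0.5;     /* V/V between sensor and DAQ input */
    float64 combinedSlope    = sensorSlope / conditioningGain;

    DAQmxCreateLinScale("Route1_4kA_sensor", combinedSlope, 0.0, DAQmx_Val_Volts, "Amps");
    return 0;
}

Being able to chain per-element scales in the task would remove exactly this manual recomputation.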

 

Ton

Hi,

 

It was suggested that I post the issue of nidaqmx not supporting the upcoming Python 3.9 in this group. Here is my original post: https://forums.ni.com/t5/Multifunction-DAQ/Deprecation-warning-for-nidaqmx-using-Python/m-p/4051990/highlight/true#M99324

 

In short this is the warning currently seen:

DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop working

 

... it doesn't seem like an issue that would require a large effort to solve, so I hope this will be prioritized accordingly.

 

BR Jesper

 

 

Hello,

 

Some applications need voltage levels of 3.3 V, 2.5 V, and 1.8 V (low-voltage IC families).

In these cases we need to operate the digital/counter outputs at the desired voltage level.

It would be good if NI provided this.
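For what it's worth, a few NI digital I/O modules already expose a logic-family selection through DAQmx; a sketch of that usage is below (the device name and the exact constant should be treated as assumptions, and availability is very device-dependent). Having the same kind of control on more DAQ/counter outputs is the ask here.

#include <NIDAQmx.h>

int main(void)
{
    TaskHandle doTask = 0;

    DAQmxCreateTask("", &doTask);
    DAQmxCreateDOChan(doTask, "Dev1/port0", "", DAQmx_Val_ChanForAllLines);
    /* Select the output logic level where the hardware supports it. */
    DAQmxSetDOLogicFamily(doTask, "Dev1/port0", DAQmx_Val_3point3V);
    DAQmxStartTask(doTask);
    /* ... write digital data ... */
    DAQmxClearTask(doTask);
    return 0;
}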

Also, with reference to the following link,

https://forums.ni.com/t5/Data-Acquisition-Idea-Exchange/0-PWM-duty-cycle-with-DAQmx/idi-p/3819545

it would be good if the PWM output could be set to a 0% duty cycle (idle low) and a 100% duty cycle (idle high).

 

BR & Stay Healthy

It gets a bit annoying that PXI1Slot2 is listed after PXI1Slot14 when doing an ASCII sort. I (OK, admittedly, my coworker) propose having naming conventions that allow for a better ASCII sort, for instance PXI1Slot002 and PXI1Slot014.

A DAQmx device property "Watchdog Timer Supported" would provide an indication of that capability for the selected hardware device.  Only a few devices (X Series being one) have a watchdog timer.  Since they are often used as a means of safeguarding, it is important to know that the selected device supports that function.  Currently, one has to attempt to create a Watchdog Timer task and then trap error -200662, which says the device does not support that feature.
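Right now the check has to look something like this sketch (ANSI C; the device and line names are placeholders, and the watchdog-task argument list, which takes line/expiration-state pairs terminated by NULL, should be verified against NIDAQmx.h):

#include <stdio.h>
#include <NIDAQmx.h>

/* Returns 1 if the device appears to support a watchdog timer, 0 if it does not. */
static int DeviceHasWatchdog(const char *device)        /* e.g. "Dev1" */
{
    TaskHandle wdTask = 0;
    char       line[64];
    int32      err;

    sprintf(line, "%s/port0/line0", device);
    err = DAQmxCreateWatchdogTimerTask(device, "", &wdTask, 0.01,
                                       line, DAQmx_Val_Low, NULL);
    if (wdTask)
        DAQmxClearTask(wdTask);

    if (err == -200662)      /* "device does not support this feature" */
        return 0;
    return err >= 0;
}

A simple read-only "Watchdog Timer Supported" device property would replace all of this with one query.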

 

When a piece of hardware is simulated in MAX, I would like to be able to insert a transfer function or a signal-simulating VI to get a more realistic test of a system. The current default of generating a sine wave for simulated acquisition only lets me test part of the code. If a transfer function, lookup table, or custom VI could be substituted for the sine wave generation, I would be able to test many other facets of a system.