Data Acquisition Idea Exchange

Currently, when streaming analog or digital samples to a DAQ board, the output stays at the level of the last sample received when a buffer underflow occurs. This behavior can be observed on USB X Series Multifunction DAQ boards; I have the USB-6363 model. The exact mode is hardware-timed, buffered, continuous, and non-regenerating. The buffer underflow error code is -200290: “The generation has stopped to prevent the regeneration of old samples. Your application was unable to write samples to the background buffer fast enough to prevent old samples from being regenerated.”
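For concreteness, here is a minimal NI-DAQmx C sketch of that mode; the device name, channel, rate, and buffer size are placeholders of my choosing, not details from the actual system:

    #include <NIDAQmx.h>

    /* Hardware-timed, buffered, continuous, non-regenerating AO.
       "Dev1/ao0", the 2 MS/s rate, and the buffer size are placeholders. */
    TaskHandle ao = 0;
    float64    chunk[50000];   /* filled by the application */
    int32      written;

    DAQmxCreateTask("", &ao);
    DAQmxCreateAOVoltageChan(ao, "Dev1/ao0", "", -10.0, 10.0, DAQmx_Val_Volts, NULL);
    DAQmxCfgSampClkTiming(ao, NULL, 2.0e6, DAQmx_Val_Rising, DAQmx_Val_ContSamps, 50000);
    DAQmxSetWriteRegenMode(ao, DAQmx_Val_DoNotAllowRegen);   /* non-regenerating */
    DAQmxWriteAnalogF64(ao, 50000, 0, 10.0, DAQmx_Val_GroupByChannel, chunk, &written, NULL);
    DAQmxStartTask(ao);
    /* From here on, the application must keep writing faster than the hardware
       drains the buffer, or the task aborts with error -200290 and the output
       freezes at the last sample generated. */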

 

I would like to have an option to configure the DAQ hardware to immediately set the voltage on analog and digital outputs to a predefined state if a buffer underrun occurs. I would also like an option to immediately set one of the PFI pins on buffer underrun.

 

I believe this could be accomplished by modifying the X Series firmware and exposing the configuration of this feature in the DAQmx API. If no more samples are available in the buffer, the DAQ board should immediately write the predefined digital states / analog levels to the outputs and indicate the buffer underrun state on the PFI line. Then it should report the error to the PC.

 

Doing this in firmware has certain advantages:

  1. It can be done quickly (possibly within the time of the next missing sample; at 2 MS/s that's 0.5 µs).
  2. It handles all situations (software lockups, excessive CPU loading by other processes, loss of communication due to bus traffic, interface disconnection, and so on).
  3. It does not require any additional hardware (to turn off outputs externally).
  4. Buffer underrun indication on a PFI line could provide an additional safety measure (it could be used, for example, to immediately disable an external power amplifier connected to the DAQ AO).

Doing this using other methods is just too slow, does not handle all situations, or requires additional external circuitry.

 

Setting the outputs from software once the error occurs is slow (~25 ms, the time of 50,000 samples at 2 MS/s) and does not handle physical disconnection of the interface. The analog output does eventually go to 0 V on the USB-6363 when the USB cable is disconnected, but it takes about half a second.
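For comparison, this is roughly what that software-side recovery looks like with the DAQmx C API (channel names are placeholders, and by the time this code runs the output has already been stuck at the stale level for many sample periods):

    /* Catch -200290 on the streaming write and park the output at 0 V using a
       separate on-demand (software-timed) task. */
    int32 err = DAQmxWriteAnalogF64(ao, 50000, 0, 10.0, DAQmx_Val_GroupByChannel,
                                    chunk, &written, NULL);
    if (err == -200290) {
        TaskHandle safe = 0;
        DAQmxClearTask(ao);                        /* streaming task has aborted */
        DAQmxCreateTask("", &safe);
        DAQmxCreateAOVoltageChan(safe, "Dev1/ao0", "", -10.0, 10.0,
                                 DAQmx_Val_Volts, NULL);
        DAQmxWriteAnalogScalarF64(safe, 1, 10.0, 0.0, NULL);   /* 0 V safe state */
        DAQmxClearTask(safe);
    }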

 

Using the watchdog timer would also be too slow. The timer can be set to quite a short time, but from software I would not be able to reset it faster than every 10 ms. It would also require switching off the analog channels externally with additional circuitry, because the watchdog timer is not available for analog channels.
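For completeness, a sketch of that digital-only watchdog; the device, line, expiration state, and 10 ms timeout are placeholders, and I believe the line/state argument list is NULL-terminated:

    /* Watchdog covering a digital line only; analog channels are not supported. */
    TaskHandle wd = 0;
    DAQmxCreateWatchdogTimerTask("Dev1", "wd", &wd, 0.010,       /* 10 ms timeout */
                                 "Dev1/port0/line0", DAQmx_Val_Low,
                                 NULL);                          /* end of pairs */
    DAQmxStartTask(wd);

    /* The application loop must do this faster than the timeout, which is the
       part that is hard to guarantee from software: */
    DAQmxControlWatchdogTask(wd, DAQmx_Val_ResetTimer);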

 

The only viable solution right now is to route the task sample clock to a PFI line and detect when it stops toggling. It actually does stop after the last buffered sample is generated. Once that occurs, the outputs can be switched off externally. This requires a whole lot of external circuitry and major development time. If you need the reaction time to be within one or two sample periods, the pulse detector needs to be customized for every sampling rate you might want to use. To make this work right for analog output, it would take a RISC microcontroller and analog electronic switches. If you wanted to use an external trigger to start the waveform, the microcontroller would have to turn on the analog switch, look for the beginning of the waveform sample clock, record the initial clock interval as a reference, and finally turn off the switch if no pulse is received within the reference time.
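The DAQmx side of that workaround is just one call, using the ao task from the sketch above (the terminal name is a placeholder); everything else lives in external hardware:

    /* Export the AO sample clock so external circuitry can watch it stop. */
    DAQmxExportSignal(ao, DAQmx_Val_SampleClock, "/Dev1/PFI0");
    /* An external pulse detector on PFI0 then disconnects the outputs when the
       clock stops toggling after the last buffered sample. */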

 

I’m actually quite impressed with how well the USB-6363 handles streaming to outputs. It lets me output waveforms of a complexity that regular arbitrary generators with fixed memory and sequencing simply cannot handle. Buffer underflow, even at the highest sampling rate, is quite rare. However, to make my system robust and safe, I need the fast, simple, and reliable method of quickly shutting down the outputs that only a hardware/firmware solution can provide.

 

Thanks,

Sebastian

I'm working with some B-class devices and can't work on the project at home because I don't have the hardware and can't simulate it. Can you add the NI USB-6008 and USB-6009 (the whole class would be nice) to the MAX 'Create Simulated Device' list?

 

thanks

 

frank9

 

It would be better to have auto-zero support for all DAQmx devices, or to allow supplying an offset value to the read or channel-configuration functions.

Have MAX maintain a database of your transducers' calibration due dates that can be monitored in LabVIEW. Currently I maintain a lot of transducers that are used throughout my programs. I have them selectable through the custom scale input. Unfortunately, I cannot conduct a quick check through the program to make sure my transducer is in or out of calibration. It would be nice to have that capability.

Would it be possible to update the export wizard in MAX so that the NI-DAQmx Tasks list under Data Neighborhood is listed in alphabetical order?  In the main MAX application the list is in order, so finding tasks that are named with a common prefix is easy.  However, in the export wizard you have to scroll and hope you clicked them all.

 

Thanks,

-Brian

Certified LabVIEW Developer

Lead Engineer - LabVIEW

Advanced Development

GE Appliances

 

T 502-452-3831

F 502-452-0467

D *334-3831

E brian.schork@ge.com

When using a buffered counter output task, the initial delay value is not used at all.  Instead, the user specifies an array of high and low times and the first low time is used as the initial delay.

 

If the output pulse train is repeated multiple times (or continuously), the first low time represents both the time from the trigger until the first pulse and the time between the last pulse and the first pulse of the next repetition.
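To illustrate the coupling, here is a hedged C sketch of a continuous buffered counter output; the counter name and times are placeholders. Note that low[0] does double duty, and the initial-delay argument of the channel-creation call goes unused once the task is buffered:

    TaskHandle co = 0;
    /* low[0] acts as both the delay before the first pulse and the gap
       between repetitions; there is no independent initial delay. */
    float64 high[3] = {0.001, 0.001, 0.001};
    float64 low[3]  = {0.500, 0.002, 0.002};
    int32   written;

    DAQmxCreateTask("", &co);
    DAQmxCreateCOPulseChanTime(co, "Dev1/ctr0", "", DAQmx_Val_Seconds, DAQmx_Val_Low,
                               0.0 /* initial delay: ignored when buffered */,
                               0.5, 0.001);
    DAQmxCfgImplicitTiming(co, DAQmx_Val_ContSamps, 3);
    DAQmxWriteCtrTime(co, 3, 0, 10.0, DAQmx_Val_GroupByChannel, high, low,
                      &written, NULL);
    DAQmxStartTask(co);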

 

It would be desirable to decouple these parameters by allowing for the option to use Initial Delay on buffered counter output tasks (e.g. with a channel property).  Here are a couple use cases off the top of my head where Initial Delay would be very helpful (if not required):

 

1.  This is the case I ran into (posted here): if you want to repeat a pulse train continuously every N seconds, you have to either have that N-second delay at the start of the task or use another counter as a trigger source. Depending on the high and low times, you might be able to get away with writing new values to the counter on the fly, but this isn't a universal solution.

 

2.  If you wanted to synchronize multiple continuous buffered counter output tasks (each sharing a fixed desired period) to a common trigger source but with different initial delays, you would be unable to do so, since the different initial delays would affect the period of your actual signal. You would have to compensate by tweaking the other high/low times in your waveform (giving you something that you don't really want).

Hello,

 

DAQmx VIs only work when the NI Device Loader service is running.

 

If this Windows service is not running, DAQmx functions generate errors like "Device not found", "undefined board", "undefined hardware", and so on.

 

For some weeks I got such an error, and it took me a long time to pinpoint the real cause!

A Windows or software update had changed the service startup sequence, and the NI Device Loader service was no longer starting.

The solution to this problem was to configure the NI Device Loader service to force a restart on start failure.

 

It would be nice if DAQmx functions could generate the "right" error.

 

An error like: "NI services are not running; please check their current state. DAQmx devices cannot be accessed while the NI Device Loader service is not running."

 

This problem also causes problems in MAX! (The device tree view takes a long time to expand, and the device self-test fails.)
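In the meantime, application code can check the service itself before calling DAQmx. A sketch in C using the Windows service manager; "nidevldu" is only my guess at the NI Device Loader service key name, so verify it on your machine (for example with "sc query" or services.msc):

    #include <windows.h>

    /* Returns nonzero if the NI Device Loader service is running.
       "nidevldu" is an assumed service key name; check yours before relying on it. */
    static int NIDeviceLoaderRunning(void)
    {
        SC_HANDLE scm, svc;
        SERVICE_STATUS st;
        int running = 0;

        scm = OpenSCManagerA(NULL, NULL, SC_MANAGER_CONNECT);
        if (!scm)
            return 0;
        svc = OpenServiceA(scm, "nidevldu", SERVICE_QUERY_STATUS);
        if (svc) {
            if (QueryServiceStatus(svc, &st))
                running = (st.dwCurrentState == SERVICE_RUNNING);
            CloseServiceHandle(svc);
        }
        CloseServiceHandle(scm);
        return running;
    }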

 

 

Thanks for kudoing this idea, which could help with understanding Windows services problems.

Is there any technical reason why this cannot be added to DAQmx? M Series boards still have features that cannot be found on the X or S Series, such as analog current input.

Ideally, it would be best to be able to have multi-device tasks spanning both M and X Series at the same time.

If you set up a change detection event like so:

[Image: change detection.png]

 

There isn't anything in the event data node to tell you which line triggered the interrupt. I'm proposing we add something in the event data node for this event (like a bit field or a reference to the channel) so the programmer would know which line fired the event.

 

The workaround is to do a DAQmx read at this point and mask the data against the previous data, but I would prefer not to do this.
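For reference, that mask workaround sketched with the DAQmx C API (line names are placeholders; the LabVIEW version is the same idea, an XOR against the previous read):

    static uInt32 prevState;   /* port state at the previous event */

    int32 CVICALLBACK OnChangeDetect(TaskHandle task, int32 signalID, void *data)
    {
        uInt32 now, changed;
        int32  read;

        DAQmxReadDigitalU32(task, 1, 10.0, DAQmx_Val_GroupByChannel,
                            &now, 1, &read, NULL);
        changed   = now ^ prevState;   /* bit set = that line fired */
        prevState = now;
        /* ... act on 'changed' ... */
        return 0;
    }

    /* Setup (di created with DAQmxCreateTask beforehand): */
    DAQmxCreateDIChan(di, "Dev1/port0/line0:7", "", DAQmx_Val_ChanForAllLines);
    DAQmxCfgChangeDetectionTiming(di, "Dev1/port0/line0:7", "Dev1/port0/line0:7",
                                  DAQmx_Val_ContSamps, 1);
    DAQmxRegisterSignalEvent(di, DAQmx_Val_ChangeDetectionEvent, 0,
                             OnChangeDetect, NULL);
    DAQmxStartTask(di);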

I find myself quite often needing to modify the DAQmx tasks of chassis that aren't currently plugged into my system. I develop on a laptop and then transfer the compiled programs to other machines. When the other machines are running the code (and thus using the hardware), I have to export my tasks and chassis, delete the live-but-unplugged chassis from my machine, then import the tasks and chassis back in, generating the simulated chassis. When I'm finished with the task change and code update, to test it I have to export the tasks and chassis, plug in the chassis, and re-import to get a live chassis back.

 

Can it be made as simple as right-clicking on a chassis and selecting 'simulated' from the menu, to allow me to configure tasks without the hardware present?

 

Thanks,

Brian

Certified LabVIEW Developer

GE Appliances

A DAQmx device property "Watchdog Timer Supported" would indicate that capability for the selected hardware device. Few devices (X Series being one) have a watchdog timer. Since they are often used as a safeguard, it is important to know whether the selected device supports that function. Currently, one has to attempt to create a watchdog timer task and then trap error -200662, which says the device does not support that feature.
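That workaround, sketched in C (device and line names are placeholders, and I believe the watchdog call's line/state list is NULL-terminated):

    /* Probe for watchdog support by trapping -200662. */
    TaskHandle wdProbe = 0;
    int32 err = DAQmxCreateWatchdogTimerTask("Dev1", "", &wdProbe, 0.1,
                                             "Dev1/port0/line0", DAQmx_Val_Low,
                                             NULL);
    bool32 watchdogSupported = (err != -200662);
    if (!err)
        DAQmxClearTask(wdProbe);   /* probe only; clean up */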

 

Hello,

 

I recently discovered that the SCXI-1600 is not supported in 64-bit Windows.  From what NI has told me, it is possible for the hardware to be supported, but NI has chosen not to create a device driver for it.

 

I'm a bit perplexed by this position, since I have become accustomed to my NI hardware just working.  It's not like NI to just abandon support for a piece of hardware like this -- especially one that is still for sale on their website.

 

Please vote if you have an SCXI-1600 and might want to use it in a 64-bit OS at some time in the future.

 

Thanks,

Doug

 

 

"Without needing to clear "all" associated events, or EVEN opening MAX, I would like the ability to replace NI-USB Device "Doohickey123" serial number "junkgarbagestuff" with another NI-USB device of the same type-  perhaps a pop-up option like.... ""Replace no longer installed NI-53xx alias "gizmo"  with new NI-53xx?""  

 

It sure would help when I swap NI-xxxx devices among systems, especially the USB devices!

When a piece of hardware is simulated in MAX, I would like to be able to insert a transfer function or a signal-simulating VI to get a more realistic test of a system. The current default of generating a sine wave for simulated acquisition only lets me test part of the code. If a transfer function, lookup table, or custom VI could be substituted for the sine wave generation, I would be able to test many other facets of a system.

I have an NI-DAQmx/C++ data acquisition program where I am continuously acquiring 5 channels of data at 40 kHz/channel and reading them in 0.1-second chunks. This works perfectly for over 14 hours of continuous acquisition, but at 14 hours, 54 minutes, and 47 seconds the program hangs due to an overflow in the int32 DAQmxInternalAIbuffer_Offset value sent to the DAQmxSetReadOffset() function. Using DAQmxSetReadRelativeTo(), I set the offset relative to the first sample with DAQmx_Val_FirstSample. (See "32-bit limitation of the NI-DAQmx int32 DAQmxSetReadOffset() function?")
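The arithmetic checks out: 2^31 samples at 40 kHz/channel is 2147483648 / 40000 ≈ 53687 s, which is 14 hours, 54 minutes, 47 seconds. A sketch of exactly where the current API runs out (ai is an assumed AI task handle):

    uInt64 totalAcquired;       /* the driver tracks this as 64 bits... */
    DAQmxGetReadTotalSampPerChanAcquired(ai, &totalAcquired);

    DAQmxSetReadRelativeTo(ai, DAQmx_Val_FirstSample);
    /* ...but the offset parameter is int32, so past sample 2147483647 the
       cast below wraps negative and the next read fails: */
    DAQmxSetReadOffset(ai, (int32)totalAcquired);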

 

It would be very helpful for the DAQmxSetReadOffset() offset value to be 64 bits rather than the current int32. This would make the function analogous to DAQmxGetReadTotalSampPerChanAcquired(), which returns a 64-bit value. I understand that the offset is maintained internally as a 64-bit value, so perhaps this would not be too difficult to do.

 

I hope that National Instruments fixes this limitation in its API, not just for 64-bit Windows but also for 32-bit Windows, because a lot of us are still using 32-bit compilers and our users are running Windows XP. Perhaps it could be implemented as a separate 64-bit DAQmxSetReadOffset64() function for 32-bit Windows.

 

Thank you,
Bill Anderson

 

Hello,

 

For those of us who develop using DAQmx all the time, this might seem silly.  Nonetheless, I'm finding that users of my software are repeatedly having a tough time figuring out how to select multiple physical channels for applications that use DAQmx.  Here's what I'm talking about:

[Image: DAQmxChannels.png]

 

Typically a user of my universal logger application wishes to acquire from ai0:7, for example. They attempt to hold down Shift and select multiple channels, only to conclude that only one channel at a time may be acquired. For some odd reason, nearly everyone fears the "Browse" option because they don't know what it does.

 

 

While, as a developer, I have no problem whatsoever knowing to "Browse" in order to accomplish this, I was just asked how to do this by a user for literally the fifth time. Thus, I'm faced with three choices: keep answering the same question repeatedly, develop my own channel-selection interface, or ask whether the stock NI interface may be improved.

 

I'm not sure of the best way to improve the interface, but the least painful way might be to simply display the "Browse" dialog on first click rather than showing the drop-down menu.

 

Please, everyone, by all means feel free to offer better ideas.  What I do know for certain, though, is that average users around here continually have a tough time with this.

 

Thanks very much,

 

Jim

 

We need a way to query an output task for its most recently output value, or, alternatively, a general ability to read back data from an output task's buffer.

 

This one's been discussed lots of times over the years in the forums but I didn't see a related Idea Exchange entry.  Most of the discussion I've seen has related to AO but I see no reason not to support this feature for DO as well.

 

There are many apps where normal behavior is to generate an AO waveform for a long period of time.  Some apps can be interrupted unexpectedly by users or process limit monitoring or safety range checking, etc.  When this happens, the output task will be in a more-or-less random phase of its waveform.  The problem is: how do we *gently* guide that waveform back to a safe default value like 0.0 V?  A pure step function is often not desirable.  We'd like to know where the waveform left off so we can generate a rampdown to 0.  In some apps, the waveform shape isn't directly defined or known by the data acq code.  So how can we ramp down to 0 if we don't know where to start from?  This is just one example of the many cases where it'd be very valuable to be able to determine the most recently updated output value.

 

Approach 1:

  Create a DAQmx property that will report back the current output value(s).  I don't know if/how this fits the architecture of the driver and various hw boards.  If it can be done, I'd ideally want to take an instantaneous snapshot of whatever value(s) is currently held in the DAC.  It would be good to be able to polymorph this function to respond to either an active task or a channel list.

 

Approach 2 (active buffered tasks only):

   We can currently query the property TotalSampPerChanGenerated as long as the task is still active.  But we can't query the task to read back the values stored in the buffer in order to figure out where that last sample put us.  It could be handy to be able to query/read the *output* buffer in a way analogous to what we can specify for input buffers.  I could picture asking to DAQmx Read 1 sample from the output buffer after setting RelativeTo = MostRecentSample , Offset = 0 or -1 (haven't thought through which is the more appropriate choice).  In general, why *not* offer the ability to read back data from our task's output buffers?
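A partial workaround exists for the case where the code does know the waveform, sketched here in C with the existing property (names are placeholders, and it does not solve the cases above where the waveform is unknown to the data acq code):

    /* waveform[] / bufLen: the application's own copy of the output buffer. */
    extern float64 waveform[];
    extern uInt64  bufLen;

    uInt64  generated;
    float64 lastValue;

    DAQmxGetWriteTotalSampPerChanGenerated(ao, &generated);
    /* For a regenerating task looping over waveform[], the most recently
       clocked-out sample is approximately: */
    lastValue = waveform[(generated - 1) % bufLen];
    /* "Approximately" because samples already transferred to the onboard FIFO
       blur the exact position; a true readback property would remove that. */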

 

-Kevin P

It would be great if the full DAQmx library supported all NI data acquisition products on Windows, Mac OS X and Linux. The situation right now is too much of a hodge-podge of diverse drivers with too many limitations. There's an old, full DAQmx library that supports older devices on older Linux systems, but it doesn't look like it's been updated for years.  DAQmx Base is available for more current Linux and Mac OS systems, but doesn't support all NI devices (especially newer products).  DAQmx Base is also quite limited, and can't do a number of things the full DAQmx library can.  It's also fairly bloated and slow compared to DAQmx.  While I got my own application working under both Linux and Windows, there's a number of things about the Linux version that just aren't as nice as the Windows version right now.  I've seen complaints in the forums from others who have abandoned their efforts to port their applications from Windows to Mac OS or Linux because they don't see DAQmx Base as solid or "commercial-grade" enough.

 

I'd really like to be able to develop my application and be able to easily port it to any current Windows, Mac or Linux system, and have it support any current NI multi-function DAQ device, with a fast, capable and consistent C/C++ API.

 

Anyone else see this as a priority for NI R&D?

 

I know it is possible to listen for multiple DI lines when performing change detection (with a PXI-6511, for example); however, there appears to be no feature to know which channel caused the event trigger without building your own comparison logic around the DIO reads.

 

I would like to see an additional property, either under 'Timing' or under 'Channel', that would let me ask for the specific channel/line in the task that caused the change-event instance, rather than having to search for it manually in a 1D Boolean array.


The title pretty much says it all. I would like the ability either to configure a full hardware complement as simulated devices and then switch them over to real devices when the hardware arrives, or to go from real devices to simulated devices, without the need to add new, discrete simulated devices in MAX.

 

This would make for much easier offline development and ultimate deployment to real hardware.