Multifunction DAQ


How to feed data from a file to the buffer while analog output is running?

I've got a mission-critical voltage generation subsystem that has to be finalized this week. It's supposed to read a file, extract the data, and translate it into voltage values. These values are fed into the output buffer (after the task, channels, timing, and callback are set up) before the task is started. This setup won't work, since my target file contains an extremely large amount of timeseries data. The only way is to somehow set up the task, channels, timing, and callback, then start the task, feeding new buffer data at set intervals. Attempts to do the refresh using a callback have not been successful, as it seems I'm required to reset the task, channel, timing, etc. all over again before new buffer data can be output. Is there some example or way to do this? Please help! I'm using an M-series 6221, NI-DAQmx 8.3, and VC++.
Message 1 of 10

MTee,

As I understand it, you have a very large data file that contains voltage values and timing information.  These voltage values need to be written to the analog out of your 6221 at the appropriate times specified within the file, but you are experiencing difficulties because your current configuration requires stopping and starting the task every time you read more data from the file.

If you configure an Analog Output task to generate samples On-Demand, you can avoid continuously starting and stopping the task.  Place a DAQmx Write configured for one channel, one sample (NChan 1Samp) within a while loop and send it a new voltage value with every iteration.  Parse the file within the same loop and extract the voltage value for each iteration.  Since your file contains timing information, you'll need to use that information to configure how long the loop waits before fetching the next voltage value.  I would use a case structure within the while loop that calls DAQmx Write only when the timing requirement has been met.  For more accurate timing, you could use one of your card's counters.
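
Since you're working in VC++ rather than LabVIEW, the same loop in the NI-DAQmx C API might look roughly like the sketch below.  This is only an outline of the idea: "Dev1/ao0" and the ReadNextVoltage() helper are placeholders for your own device name and file-parsing code, and error checking is omitted for brevity.

#include <windows.h>
#include <NIDAQmx.h>

/* Hypothetical helper: parses the file and returns the next voltage;
   returns FALSE when the data is exhausted. */
BOOL ReadNextVoltage(double *voltage);

int GenerateOnDemand(void)
{
    TaskHandle task = 0;
    double     voltage;

    DAQmxCreateTask("", &task);
    /* No DAQmxCfgSampClkTiming call: the task defaults to on-demand
       (software-timed) generation, so it is started once and each
       written sample is output immediately. */
    DAQmxCreateAOVoltageChan(task, "Dev1/ao0", "",
                             -10.0, 10.0, DAQmx_Val_Volts, NULL);
    DAQmxStartTask(task);

    while (ReadNextVoltage(&voltage))
    {
        DAQmxWriteAnalogScalarF64(task, FALSE, 10.0, voltage, NULL);
        Sleep(200);  /* placeholder delay between points */
    }

    DAQmxStopTask(task);
    DAQmxClearTask(task);
    return 0;
}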

Hopefully I understood your problem correctly and this is the information you needed.   Let me know how it works out!




Elijah Kerry
NI Director, Software Community
Message 2 of 10
Hello Elijah,

Thanks for your advice! I've actually contacted support over this, and I've implemented a system based on your advice. You've got the idea right, except that the timing information is not from the file; it's supposed to be user-configurable. By setting an update rate, a class I built reconditions the data from the file so that it matches the update rate. My only concern is to parse the data, apply the conditioning, then, at each iteration, extract one data point (applying the data-to-voltage calculation) and write the voltage to the channel. This iteration is controlled using a Windows timer, with its elapsed time = 1000 / update rate. All works well, except that doing it this way seems to make the generation process lag behind the DAQ process. The highest update rate at which the generation can keep up with the DAQ (which was timed using the AI clock) is just 5 Hz. You mentioned using the counters on my card. Can you please explain how to do this? Is that a timing source? Does it function like the Windows timer? I apologize for asking, since I've only just started programming DAQ applications.
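
In outline, the timer setup is something like this (simplified; WriteNextSample() stands in for my parse/convert/DAQmxWriteAnalogScalarF64 code):

#include <windows.h>

#define IDT_GENERATE 1

static UINT g_updateRate = 5;  /* Hz, user configurable */

/* Hypothetical helper: extracts the next point from the reconditioned
   data, applies the data->voltage calculation and writes it to the
   on-demand AO task. */
void WriteNextSample(void);

VOID CALLBACK GenerateTimerProc(HWND hwnd, UINT msg, UINT_PTR id, DWORD time)
{
    WriteNextSample();  /* one sample per WM_TIMER tick */
}

void StartGeneration(HWND hwnd)
{
    /* elapse = 1000 / update rate, in milliseconds */
    SetTimer(hwnd, IDT_GENERATE, 1000 / g_updateRate, GenerateTimerProc);
}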
Message 3 of 10

MTee,

Sorry for not explaining my idea in greater detail.  My thought was that you could use the counter as a means to accurately set the delay between outputting data.  Given that you know how long you want to wait before transmitting the next bit of data, you could internally route the output of a counter to a PFI line and count the ticks.  Unfortunately, this still introduces inaccuracies since you count the ticks in software, but it's not a bad way to go.
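
One way to realize this with the DAQmx C API is to count the card's internal 100 kHz timebase directly rather than routing it out to a PFI line.  A minimal sketch, where the device and counter names are placeholders and error checking is omitted:

#include <NIDAQmx.h>

/* Busy-wait roughly (ticksToWait / 100000) seconds by polling a counter
   that counts the 6221's internal 100 kHz timebase.  The tick source is
   hardware, but the polling is software, so some jitter remains. */
void CounterDelay(uInt32 ticksToWait)
{
    TaskHandle ctr   = 0;
    uInt32     count = 0;

    DAQmxCreateTask("", &ctr);
    DAQmxCreateCICountEdgesChan(ctr, "Dev1/ctr0", "",
                                DAQmx_Val_Rising, 0, DAQmx_Val_CountUp);
    /* Count the internal timebase instead of an external signal. */
    DAQmxSetCICountEdgesTerm(ctr, "Dev1/ctr0", "/Dev1/100kHzTimebase");
    DAQmxStartTask(ctr);

    do {
        DAQmxReadCounterScalarU32(ctr, 10.0, &count, NULL);
    } while (count < ticksToWait);

    DAQmxStopTask(ctr);
    DAQmxClearTask(ctr);
}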

After thinking about it, there are a couple of different ways you could configure your timing.  If timing is really critical, you might consider using LabVIEW Real-Time.  As long as you're using Windows, you're at the mercy of the operating system.  However, you do have a couple of other options in Windows.  You could use a timed loop for fairly accurate software timing, but I think the best option would be to prepare a waveform.  I'm still unclear as to how you're getting the timing information, but provided that you know it in advance, you could programmatically build a waveform.
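
In the C API, "preparing a waveform" just means precomputing an array of samples and letting the board's sample clock pace the generation instead of software.  A rough sketch, where BuildWaveform() is a placeholder for converting your file data into voltages at the chosen rate:

#include <NIDAQmx.h>

#define NUM_SAMPS 1000

/* Hypothetical helper: fills buf with NUM_SAMPS precomputed voltages. */
void BuildWaveform(float64 *buf, int32 n, double rateHz);

int GenerateWaveform(double rateHz)
{
    TaskHandle task    = 0;
    float64    waveform[NUM_SAMPS];
    int32      written = 0;

    BuildWaveform(waveform, NUM_SAMPS, rateHz);

    DAQmxCreateTask("", &task);
    DAQmxCreateAOVoltageChan(task, "Dev1/ao0", "",
                             -10.0, 10.0, DAQmx_Val_Volts, NULL);
    /* Hardware-timed finite generation: the 6221's sample clock,
       not Windows, paces the output. */
    DAQmxCfgSampClkTiming(task, "", rateHz, DAQmx_Val_Rising,
                          DAQmx_Val_FiniteSamps, NUM_SAMPS);
    DAQmxWriteAnalogF64(task, NUM_SAMPS, 0, 10.0,
                        DAQmx_Val_GroupByChannel, waveform,
                        &written, NULL);
    DAQmxStartTask(task);
    DAQmxWaitUntilTaskDone(task, DAQmx_Val_WaitInfinitely);
    DAQmxStopTask(task);
    DAQmxClearTask(task);
    return 0;
}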

I'm a little unclear on the delay you're having problems with.  Do you want the data to be generated at the same time it's acquired?  There is no way to eliminate a delay since your computer has to acquire, process, condition and output a new signal.  If you want the processed signal to be phase locked with the original you could acquire the original and pass it through your code via the same task.  This would ensure that it experienced the same delay that the processed signal experiences.

Perhaps if I understood exactly what your application is trying to accomplish, I could offer more appropriate advice.  Is the delay between samples set via timing information saved in a separate file, or is it being modified in real time?  When you described your system as "mission critical," are you saying that the timing is really critical?  Again, if it is really critical that the timing be precise, consider using LabVIEW Real-Time.

Hope this gives you some ideas to go on.

 

Elijah Kerry
NI Director, Software Community
Message 4 of 10
Hi Elijah,

The application I'm developing centers on water levels in an experimental environment. It reads a file containing water levels saved at time intervals along with the frequency, converts the data into voltages according to a set calibration factor, then outputs them at that frequency. I assume the voltage data travels to a set of pump-related hardware that moves water in or out to achieve that water level. The file size is variable, so we have to cater to the worst-case scenario, for instance water level data at 30-minute intervals for one year. Thus, it doesn't make sense to grab all the data, put it into a buffer, write to the channel, then start continuous output. Rather, the ideal process is to configure the task, channels, and timing, fill the first portion of data into the buffer, then start the task. At intervals, update the buffer with the next portion and continue output, until all data is out or the user stops it. The update rate is determined by the fixed frequency in the file, so it does not change throughout the process.

I mentioned that it was critical because the app also does data acquisition simultaneously. It gathers the voltage data from water level sensors, converts it into water levels, then saves it into another file. This operation starts as the first voltage is written and ends when the output process ends. When these two operations run simultaneously, the application does two things at once: it generates water levels from a simulated data set, and it records the actual experimental water level data.

The Windows timing problem lies in the generation process. You see, since the DAQ operation starts and stops according to the generation process, logically, if generation outputs 20 seconds of data, the acquired data should also be about 20 seconds. But what happened was, instead of 20 seconds, I get an extra 3-5 timesteps of data at the end. These extra timesteps were traced to instances along the curves where two points show almost exactly the same voltages. This happens because the Windows timer lagged in generating the next voltage value, and the DAQ captured the voltage from the previous generation instead. I'm not sure if I'm explaining things right, but here's an example:

(Supposed) output data: 0.3444, 0.4444, 0.4999, 0.3555, 0.2333, 0.0444, -0.0666, -0.2111, -0.3555, -0.4666
Acquired input data:    0.3444, 0.4444, 0.4999, 0.49987, 0.3555, 0.2333, 0.0444, -0.0666, -0.2111, -0.2110, -0.3555, -0.4666
(note the extra near-duplicate readings, 0.49987 and -0.2110)

That pretty much depicts the problem. I've contacted support again about this; I'll be extremely grateful for any advice you can give me. Thanks again!

mark
Message 5 of 10
Hey MTee,

Since you've contacted support, I was hoping that you could post the service request number you were assigned.  With that information I can work with the other engineer you're in contact with to solve your issue.  Thanks, and I'll be in touch soon!


Message Edited by Elijah K on 11-14-2006 12:34 AM

Elijah Kerry
NI Director, Software Community
Message 6 of 10
Mark,

The bottom line is that if you implement your timing in software, you are at the mercy of Windows when it comes to timing.  Since Windows is non-deterministic, you will notice imprecision in delicate timing tasks.  This will be true of any software.

If you have the information regarding the timing of the data to be output slightly beforehand, you could calculate the number of samples you want to take and create the acquisition task for N samples, as sketched below.

Or, if you acquire more samples than you want, could you just throw out the extra samples?
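
A minimal sketch of the N-samples idea in the C API, assuming "Dev1/ai0" (a placeholder) and that numSamples has already been computed from the output duration and rate:

#include <NIDAQmx.h>

/* Acquire exactly numSamples points, hardware-timed, so the log
   cannot run past the end of the generation. */
int32 AcquireNSamples(double rateHz, int32 numSamples, float64 *data)
{
    TaskHandle task = 0;
    int32      read = 0;

    DAQmxCreateTask("", &task);
    DAQmxCreateAIVoltageChan(task, "Dev1/ai0", "", DAQmx_Val_Cfg_Default,
                             -10.0, 10.0, DAQmx_Val_Volts, NULL);
    DAQmxCfgSampClkTiming(task, "", rateHz, DAQmx_Val_Rising,
                          DAQmx_Val_FiniteSamps, numSamples);
    DAQmxStartTask(task);
    DAQmxReadAnalogF64(task, numSamples, DAQmx_Val_WaitInfinitely,
                       DAQmx_Val_GroupByChannel, data, numSamples,
                       &read, NULL);
    DAQmxStopTask(task);
    DAQmxClearTask(task);
    return read;
}
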
Elijah Kerry
NI Director, Software Community
Message 7 of 10
Hi mark-
 
If you need to update the waveform on the fly, without glitching, and with hardware timing, you should consider using the DAQmx write property function DAQmxSetWriteRegenMode() to disable regeneration (i.e. to avoid re-outputting old buffer data, which causes the glitching) and then make sure you update the DAQmx write buffer as often as necessary to avoid buffer underflows.
 
I have attached a quick example that shows how to do this.  Note that it will error out after five writes have completed, but the method should illustrate how to set it up for your app.  You'll probably still want to use some Windows timer function to institute a delay between writes so that the loop doesn't just spin freely.
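
In outline the pattern looks roughly like this (a sketch only, not the attached code; "Dev1/ao0" and the FillNextChunk() helper are placeholders, and error checking is omitted):

#include <NIDAQmx.h>

#define CHUNK 1000  /* samples written per update */

/* Hypothetical helper: fills buf with the next CHUNK voltages from the
   file; returns 0 when the data is exhausted. */
int FillNextChunk(float64 *buf);

int StreamFromFile(double rateHz)
{
    TaskHandle task    = 0;
    float64    chunk[CHUNK];
    int32      written = 0;

    DAQmxCreateTask("", &task);
    DAQmxCreateAOVoltageChan(task, "Dev1/ao0", "",
                             -10.0, 10.0, DAQmx_Val_Volts, NULL);
    DAQmxCfgSampClkTiming(task, "", rateHz, DAQmx_Val_Rising,
                          DAQmx_Val_ContSamps, CHUNK);
    /* Key step: forbid regeneration so old buffer data is never
       repeated; every output sample must come from a new write. */
    DAQmxSetWriteRegenMode(task, DAQmx_Val_DoNotAllowRegen);
    DAQmxCfgOutputBuffer(task, 2 * CHUNK);  /* room for two chunks */

    /* Prime the buffer before starting the task. */
    FillNextChunk(chunk);
    DAQmxWriteAnalogF64(task, CHUNK, 0, 10.0,
                        DAQmx_Val_GroupByChannel, chunk, &written, NULL);
    DAQmxStartTask(task);

    /* With regeneration disallowed, each write blocks (up to its
       timeout) until buffer space frees, so the loop paces itself
       against the hardware sample clock. */
    while (FillNextChunk(chunk))
        DAQmxWriteAnalogF64(task, CHUNK, 0, 10.0,
                            DAQmx_Val_GroupByChannel, chunk, &written, NULL);

    DAQmxStopTask(task);
    DAQmxClearTask(task);
    return 0;
}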
 
Hopefully this helps-

Message Edited by Tom W [DE] on 11-15-2006 11:13 AM

Tom W
National Instruments
Message 8 of 10
I don't have the support reference because I'm communicating directly with a specific NI engineer back in DK. Anyway, he has just informed me that cyclic buffer output for both analog and digital channels at the same time is not supported by the M-series DAQ cards. The DO is not required to follow the update rate of the analog output, so it can be handled through a callback and a PostMessage to output a digital signal every second, for instance. The important thing is: is cyclic buffered analog output possible with the M-series DAQ cards?

What I mean by cyclic buffering is that the buffer being output by the card (according to the sample clock setting) needs to be refreshed with the next portion of data, without any stop-update-restart in between. If this is possible, I don't know how to implement it and need an example of how it can be done. Otherwise, I'll have to follow the advice to use a separate DIO card just for DO, or Real-Time to get the timing right.

I do have a question about LabVIEW Real-Time. The NI site says that there are software and hardware Real-Time options. Does the software Real-Time module require LabVIEW, or can I connect to it like an API? I need to implement this within my application (meaning I only have NI-DAQmx 8.3 and VC++ 7 with MFC).
Message 9 of 10

Hi mark-

See my last post for the discussion that addresses your question about periodic AO buffer updating and for the example you requested.

The current Real-Time offerings are only available with LabVIEW and LabWindows/CVI.  There is no support for Real-Time with NI-DAQmx under VC++.

Let us know if you need more info.  Thanks-

Tom W
National Instruments
Message 10 of 10