How can AI keep up with the change of AO in NI-USB-6251?

I'm away from any LabVIEW machine and only looked at the block diagram pdf.

 

You appear to be configured correctly for sync.  Input sample #i from the AI task will correspond to output sample #i from the AO task.  Some fine tuning of your "delay from sample clock" may improve your measurements, but you're at a good starting point.  (Note that you don't really need the AI task to be triggered at all -- you could get rid of the trigger config entirely, though it's likely not hurting anything either.)

 

As to your latest question, it sounds like you have a *finite* pattern of AO samples to generate.  So change your AO task to do Finite Samples.  You'll also need to wire in the exact # of samples needed to generate those 4 ramp patterns back-to-back.

 

So generate the 1D array of sample data for each of those 4 ramps, and either append them together into a longer 1D array that you send to DAQmx Write, or send one 1D array each to 4 consecutive calls to DAQmx Write.
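The array construction described above can be sketched outside LabVIEW, too.  Here's a minimal NumPy illustration -- the sample rate, ramp duration, and ramp endpoints are assumed placeholder values, not numbers taken from the actual VI:

```python
import numpy as np

SAMPLE_RATE = 1000          # S/s -- assumed; use your AO task's actual rate
RAMP_SECONDS = 0.25         # assumed duration of each ramp
END_VOLTS = [1.0, 2.0, 3.0, 4.0]   # assumed ramp endpoints

samples_per_ramp = int(SAMPLE_RATE * RAMP_SECONDS)

# One 1D array per ramp, each going 0 -> endpoint...
ramps = [np.linspace(0.0, v, samples_per_ramp) for v in END_VOLTS]

# ...appended into a single 1D array for one DAQmx Write call.
pattern = np.concatenate(ramps)

# This total is the exact sample count to wire into the finite-samples
# timing configuration.
total_samples = pattern.size
print(total_samples)        # 4 ramps x 250 samples = 1000
```

The alternative mentioned above -- one Write call per ramp -- would simply pass each element of `ramps` to a separate DAQmx Write instead of concatenating first.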

 

Your AI task is configured to have its sample timing driven by the AO task.  Sampling *will* happen in sync at the hardware level.  The process of calling DAQmx Read to retrieve those samples from the task buffer is distinct and has no influence on the timing of when those samples were taken.

 

This may sound like a subtle distinction but it's a fairly crucial one.  *Sampling* in sync is the correct goal.  *Reading* can then be done at a more leisurely pace.  A typical rule of thumb for continuous tasks is to read a chunk of samples representing ~ 1/10th sec at a time.  The most common way to read from finite tasks is to read the entire set of measurements all at once after the finite task has run to completion.   

 

When you configure your tasks properly, the driver and hardware are doing most of the important work to sync your AO and AI samples.  Your code doesn't have to do much more work after that.

 

 

-Kevin P

 

P.S.  Most of our installs here are LabVIEW 2016 with DAQmx 16.something where 62xx M-series boards like yours are still supported.  Dunno what docs are still around on the site, but M-series were among the first general purpose boards that were supported *only* by DAQmx and have still never been supported by the legacy DAQ driver.

ALERT! LabVIEW's subscription-only policy came to an end (finally!). Unfortunately, pricing favors the captured and committed over new adopters -- so tread carefully.
Message 11 of 26

Hello Kevin,

 

Thank you very much for your detailed explanation and valuable suggestions.  I am now able to generate and read the ramp signals simultaneously.  In the new VI, there are 10 ramps, from 0–1 V up to 0–10 V, each lasting 0.25 s.  Please see the attached front panel and block diagram.  I do not read all the samples at once because I want to watch the ramp change in situ on the waveform graphs.  Is it possible to simplify the block diagram while performing the same job?

AO is now synced with AI, and DAQmx Read has a tiny delay after AO starts -- am I right?  If yes, how can I find out the actual delay time?

Message 12 of 26

Functionally, you can't reduce the code on the block diagram much.  The call to configure a start trigger for the AI task could be eliminated because the shared sample clock alone guarantees sampling sync.  That's about it.

    Aesthetically, you could perhaps rearrange and condense some stuff to look a little tidier (and be easier to understand with less effort), but what you've got now looks decent enough to me.

 

There are 2 answers about the delay.

1. Hardware.  Your board manual will have info that lets you figure out the exact timing from the sample clock edge to the convert clock edge that latches a sample.  Keep in mind the fact that you're programming an explicit additional "delay from sample clock" that you need to account for as well.  It's been a while since I needed to look up such stuff in detail, so be careful to see whether your one-channel task follows special rules that are different from multi-channel tasks.

 

2. Software.  The delay from starting the task until the call to DAQmx Read returns data to your graphs will be pretty much ((# samples) / (sample rate)).  If you ask for 100 samples while sampling at 1000 Hz, it'll be about 0.1 sec until you see your data on the graph.  But even though you don't *see* the first sample for 0.1 seconds, it was actually latched in hardware pretty immediately after starting the tasks (as determined by the hardware timing I referred to in #1).
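To make the software-delay arithmetic concrete, here it is as plain Python (the 100-sample / 1000 Hz figures come straight from the paragraph above):

```python
# DAQmx Read returns only after the requested chunk has been sampled,
# so the apparent delay is (samples requested) / (sample rate).
samples_requested = 100
sample_rate = 1000.0        # Hz

software_delay_s = samples_requested / sample_rate
print(software_delay_s)     # 0.1 s until the first chunk reaches the graph
```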

 

 

-Kevin P

 

 

Message 13 of 26

Hello Kevin,

 

I really appreciate your support, and I have learned a lot from you.  Without your help, my tasks could not have been accomplished.

As you predicted, based on the code you helped me develop, I can modify it to generate different kinds of waveforms for my own tasks.

I hope I can learn more from you in the future.  I would also like to take this opportunity to thank everyone who tried to help me.  I wish you all the best in your future endeavors.

 

Jacob

Message 14 of 26

Hello Kevin,

 

It is me again.  I modified the previous VI and found the following problem.  When I set the delta V to 0.02 V and run the program, it takes more than 30 s for the waveforms to appear on the screen.  If I set the delta V to 0.01 V, the waveforms never appear.  I guess this is because too many data points are generated by the Basic Function Generator, and it takes a long time to do that.  Is it possible to put the Basic Function Generator and the AO write inside the while loop, so that each time the amplitude of the function generator changes, AO writes that amount of data and AI reads the corresponding data?

 

Jacob

Message 15 of 26

Hello Kevin,

I forgot to attach the files in my last reply.  Here are the files.

Message 16 of 26

I'm not at a LV machine and can only look at the block diagram picture now.  I looked at the details a little more closely this time than last, due to your reported symptoms.

 

1. I'd first investigate the loop where you build the waveform.  Copy that code into a separate vi and experiment with it to make sure it does what you intend.   I'm not sure from just the picture, so first confirm that the output arrays look right for a variety of input values.   (In the separate vi, you can wire them into a graph and/or array for the sake of inspection).   Maybe the arrays you build up are much bigger than you think?

 

2. I also notice that the # of samples you request per iteration is actually *larger* than the task's buffer.  You're configured for finite samples with a specific # wired into DAQmx Timing.  You then add that # samples to your # of For Loop iterations and wire the result into DAQmx Read.  Since you can't retrieve more samples than the total # that will be acquired, the DAQmx Read call probably won't return until it times out.  I don't see the timeout wired, so the default value will be used.  I thought that'd be 10 seconds, but maybe my memory's off and the default is the 30 sec you observe.

    To read data a little at a time, you need to request a smaller # samples from DAQmx Read.  The correct # is going to be based on the rate at which you need to update your calcs and user display.  If you want to update 10 times a second, ask for (sample_rate / 10) samples (which gives you 1/10th sec worth).
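Both calculations above can be sketched as plain arithmetic.  Python here is only for illustration, and all the numbers are assumed placeholder values:

```python
# The failure mode: asking DAQmx Read for more samples than the finite
# task will ever acquire.
buffer_samples = 25000              # finite sample count wired into DAQmx Timing (assumed)
loop_iterations = 101               # assumed
bad_request = buffer_samples + loop_iterations

# This request can never be satisfied, so the Read waits out its timeout.
print(bad_request > buffer_samples)   # True

# The fix: size each read from the desired display-update rate instead.
sample_rate = 100_000               # S/s (assumed)
updates_per_second = 10             # the ~1/10th-second rule of thumb
samples_per_read = sample_rate // updates_per_second
print(samples_per_read)             # 10000 samples = 0.1 s of data per read
```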

 

 

-Kevin P

Message 17 of 26

Hello Kevin,

 

Thank you very much for your prompt reply.  I copied the For Loop into a new VI and found that it takes 35 s to finish executing if delta V = 0.02.  If delta V = 0.01, after the program runs for a while, it reports that there is not enough memory and halts.  The array size is therefore too big to build all at once.  This is why I asked whether I can put the Basic Function Generator inside the while loop and generate the waveform for each amplitude during each cycle of the loop.

Right now, the program runs without any problem if delta V = 0.1.  Since the wiring in my VI is too messy, it is not easy to see the # of samples in different places.  For start V = 0, end V = 10, and delta V = 0.1, the # of samples per channel wired into DAQmx Timing is 25000 × 101 = 2,525,000.  The For Loop N = 101, and the # of samples per channel for DAQmx Read is actually 25000, which is not large.  If delta V is 0.01, the program halts because there is not enough memory to generate the waveform array.

Therefore, I guess that putting the Basic Function Generator inside the while loop and generating the waveform for each amplitude during each cycle of the loop may help.  However, this also means putting DAQmx Write inside the while loop.  I foresee there may be a synchronization problem, as DAQmx Write and DAQmx Read would both be inside the while loop.  What is your opinion?

Message 18 of 26

If it takes 35 seconds to *define* the waveform you wish to generate in a finite output task, there's something very wrong.  It could be the nature of *what* you're trying to do but it also could be *how* you've gone about doing it.

 

It also appears that the scope of this program has changed pretty drastically.  We started with a series of 0.0->X volt ramps appended together, where endpoint X stepped in 1 V increments 1,2,3,4,5.   You are now generating and appending hundreds of waveforms at amplitude X where the amplitude steps in 0.02 or 0.01 V increments.

 

Can you describe more about what you're trying to do with this test?  What's being tested, what's the importance of sync, what sample rates are needed, how long do stimuli need to be generated in order to believe that the measured response is stable or typical, etc.?  I really can't tell whether you need tweaks for efficiency or a fundamentally different approach altogether.

 

 

-Kevin P

Message 19 of 26

Hello Kevin,

 

Sorry for the confusion.  The ultimate goal of the program is to generate a smooth 1000 Hz sine waveform.  This waveform will be applied to an LCD device, and the transmittance through the LCD will be monitored by a photodiode as the amplitude of the applied voltage changes.

In my program, the frequency is fixed and the amplitude of the output voltage steps from 0 to 10 V in increments of 0.01 V.  Each voltage step should be held for 0.25 s.  During that period, the output voltage is monitored by AI channel 0 and the photodiode voltage by AI channel 1.  The AO and AI should be synchronized.

The AO sample rate should be high enough to produce a smooth sine wave, so 100k samples per second is used.  Each cycle of the For Loop generates a large number of sine-wave samples to fill the 0.25 s time slot.  Since those data are repetitive, there may be a cleverer way to accomplish my goal.  Do you have any good suggestions?
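For reference, one amplitude step of the stimulus described in this post can be sketched in NumPy; all the numbers below (1000 Hz sine, 0.25 s per step, 100 kS/s, 0 to 10 V in 0.01 V steps) are taken from the description above:

```python
import numpy as np

SAMPLE_RATE = 100_000       # S/s
FREQ = 1000.0               # Hz
STEP_SECONDS = 0.25
samples_per_step = int(SAMPLE_RATE * STEP_SECONDS)   # 25000

t = np.arange(samples_per_step) / SAMPLE_RATE

def step_waveform(amplitude):
    """One 0.25 s sine burst at the given amplitude (volts)."""
    return amplitude * np.sin(2 * np.pi * FREQ * t)

# Stepping 0 -> 10 V in 0.01 V increments gives 1001 bursts...
n_steps = 1001
total_samples = n_steps * samples_per_step           # 25,025,000

# ...which as 8-byte doubles is ~200 MB for just ONE copy of the array --
# consistent with the out-of-memory failure when it is all built up front.
approx_mb = total_samples * 8 / 1e6
print(total_samples, round(approx_mb))
```

Note that 1000 Hz × 0.25 s is exactly 250 full cycles, so each burst starts and ends at zero phase and consecutive amplitude steps join without a discontinuity.  And since every step is the same sine scaled by a constant, only one 25000-sample template ever needs to be generated.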

 

Jacob

Message 20 of 26