
Delay on Do Not Allow Regeneration mode

A couple more thoughts.

 

Maybe you should consider using DAQmx Events to determine when to write new AO data to the task buffer.   There's one for "every N samples transferred from buffer" which seems pretty appropriate.

 

Note: I made a brief effort to modify your code in place to illustrate, but failed.  I'm not an expert in DAQmx Events and haven't used them very much (no *good* reason, I was just already familiar with older methods before they were introduced quite a few years ago).  But it appears that they must be created/registered *prior* to the event structure and fed into the left-hand-side Dynamic Registration terminal from outside.  This carries some implications.  You'll need to create your AO task before the loop, which further means you can't clear it inside the loop to invalidate it.  Even further, that means you can't change your AO buffer size during run-time.  So that pretty well means you can't change the sinusoid frequency either.

   Quite a few constraints.  You'll have to think through which parameters *need* to be able to change during run-time and which ones can remain constant for several cycles of AO generation during a particular run, requiring you to stop the app when you want to change them.  (Or you could make a more elaborate app with an additional outer loop and some means for a user to know what "mode" they're in.)
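
For what it's worth, here's the rough shape of that event-driven approach sketched in the text-based nidaqmx Python API, which is easier to show in a forum post than a block diagram.  Treat it as a hedged sketch only: "Dev1", the 200 S/s rate, and the 1 Hz unit-amplitude sine are placeholder assumptions, not values from your vi, and I haven't run this code.

```python
import numpy as np
import nidaqmx
from nidaqmx.constants import AcquisitionType, RegenerationMode

RATE = 200   # S/s, placeholder
N = 200      # the event fires each time N samples leave the task buffer
t = np.arange(N) / RATE
amplitude = 1.0   # stand-in for a GUI-adjustable amplitude

task = nidaqmx.Task()
task.ao_channels.add_ao_voltage_chan("Dev1/ao0")
task.timing.cfg_samp_clk_timing(RATE, sample_mode=AcquisitionType.CONTINUOUS)
task.out_stream.regen_mode = RegenerationMode.DONT_ALLOW_REGENERATION

def refill(task_handle, event_type, num_samples, callback_data):
    # Write exactly as many samples as just left the buffer, so the average
    # write rate matches the generation rate.  A 1 Hz sine over 200 samples
    # at 200 S/s is one full cycle, so each chunk starts/ends at a 0-crossing.
    task.write(amplitude * np.sin(2 * np.pi * 1.0 * t))
    return 0

# The event must be registered before the task starts -- the same ordering
# constraint as the Dynamic Registration terminal described above.
task.register_every_n_samples_transferred_from_buffer_event(N, refill)

task.write(amplitude * np.sin(2 * np.pi * 1.0 * t))  # prime the buffer
task.start()
input("Generating; press Enter to stop\n")
task.close()
```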

 

On a different note, I'm a little suspicious of your sine wave adjustment plan.  Changes to the offset value will cause step changes in your output, exactly the kind of "glitching" one would usually hope to avoid when operating a device that wants a nice smooth sine wave as input.

   Amplitude changes could *generally* have the same problem, but you're presently avoiding them by defining your sine wave to start and end at 0-crossings.

   What are you controlling?  Why the sine wave?  What's the physical effect of generating what you call a "multi-sinusoidal" waveform?  What do you mean by multi-sinusoidal -- is it the sum of multiple sine waves, probably at different frequencies?  How do you plan to guarantee that all of them start and end at 0-crossings?

 

You might have good solid answers to all these concerns.  But there are many complicating factors involved here.  I can't tell yet whether you've properly assessed what you *actually* need here.  I've been in a lot of threads where a difficult problem statement was later considered "solved" by the original poster via a really lame and poor approximation of what they claimed to need.

   We tend to come into a problem wanting and hoping for *everything* until we get into it and find that most things come at a cost.  *Then* we start figuring out what's *really* important.

 

 

-Kevin P

Message 11 of 24

Hi Kevin,

 

First of all, thanks for your time! I really appreciate your comments.

 

Well, I'm sorry if I wasn't able to explain my application; sometimes things are so obvious to us that we neglect them when explaining to others. What I can say is that, so far, it is working pretty well: I'm adjusting the individual amplitudes of the sinusoids, I'm collecting very long data records, and no "glitching" is occurring. Glitching would be terrible for my frequency-domain analyses. My only problem is this huge latency.

 

I would really prefer not to treat this as a problem tied to my application; the vi that I attached came from the NI examples. If this example works fine (latency = 2*buffer size), I'll be able to translate it to my application without problems; I'm not a LabVIEW expert, but it doesn't look difficult. The question that intrigues me is: why are we seeing this huge latency?

 

I read the links that you suggested about the USB buffer. I tried to change the buffer size of my USB-6003, but the minimum possible is 1024 samples, and the latency was the same.

 

I'll study how to implement DAQmx Events; this is also new to me.

 

Is there any chance you could test the attached vi (with the same sample rate and buffer size) on another board to check this latency? Maybe it would be a good argument to my boss that we need to buy another board.

 

Thanks,

Andrea

Message 12 of 24

My pretty strong opinion: if low latency matters, you don't want to be stuck using a USB device.   However, I recognize that not everyone has other options.

 

Can you back-save the vi you posted to LV 2016?  I'm at work today and will have access to real DAQ hardware at the end of the day, but the PC in question only has LV 2016 on it.  Please be sure that all relevant control values are saved as defaults too.

 

I get what you're saying about focusing on the simple example just to study latency.  But it's still often helpful to know more about the big picture.  For example, we probably don't have the same notion of "glitching."  I would expect a mid-run change to the DC offset to cause a kind of "glitch" due to a forced step response in the midst of the sinusoids.  Does that *not* happen?  Or are you not actually changing the DC offset?  And if you don't intend to change the DC offset mid-run, you should maybe feed its value in from outside the loop so the code is protected from a user inadvertently changing the GUI value mid-run.

 

 

-Kevin P

Message 13 of 24

Hi Kevin,

 

I'm sorry, I just read your last message. I've been busy with other activities.

 

Well, if it's still worthwhile, attached is a 2016 version. Adjustable parameters are in yellow.

 

To answer your question: I'll never, ever change the offset, just the individual amplitudes of the multi-sinusoidal waveform (frequencies between 1 and 20 Hz).

 

Thank you!!!

Andrea

Message 14 of 24

Well things got a little curiouser indeed.

 

I had a brief chance to try things out on a PCIe-6341 which apparently plays by different rules than we've been discussing so far.  (And I expect the same to be true of all X-series devices.)

 

DAQmx threw a warning when I set the "Data Transfer Request Condition" property to the value "Onboard Memory Empty".  The warning message pointed me toward setting the Onboard Memory Buffer Size to a small value instead.  (On a simulated M-series device that I tried, setting the onboard buffer size resulted in an error, but I *could* set the Data Transfer Request Condition to "Onboard Memory Empty".  I couldn't test the timing behavior, though, because it was just a simulated device.)

 

When I set the onboard buffer size to just a few samples, I could see that the latency corresponded well to the size of the task buffer set by the call to DAQmx Configure Output Buffer.  I even did a quick experiment where I made the task buffer very small too, something like 10 samples I think.  When I did both, there was very little latency at all; the response was almost instant.
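
For reference, the same knobs exist by name in the text-based nidaqmx Python API.  Here's a hedged, untested sketch of the properties involved ("Dev1" is a placeholder, and as described above, which property a given device family accepts seems to vary):

```python
import nidaqmx
from nidaqmx.constants import OutputDataTransferCondition

task = nidaqmx.Task()
ch = task.ao_channels.add_ao_voltage_chan("Dev1/ao0")

# X-series (e.g., PCIe-6341) route: shrink the onboard buffer directly.
task.out_stream.output_onbrd_buf_size = 10

# M-series route: request data transfers only when onboard memory is empty
# (my X-series warned when I tried this and pointed at the line above).
ch.ao_data_xfer_req_cond = OutputDataTransferCondition.ON_BOARD_MEMORY_EMPTY

# Host-side task buffer -- the equivalent of the size passed to
# DAQmx Configure Output Buffer in LabVIEW.
task.out_stream.output_buf_size = 10
```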

 

Sorry, I didn't have any USB devices so I don't know what to say about anything USB-related, such as the apparently inaccessible USB Transfer buffer (see the links I gave in msg #5).

 

Tip: on the GUI control for your AI channels, you can right click and pick I/O Name Filtering..., then check the box to include internal channels.  You'll find that you can designate something like _ao0_vs_aognd and _ao1_vs_aognd.  This lets you measure your output signal without needing to do any physical loopback wiring.
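
In nidaqmx Python terms, the same trick is nearly a one-liner (hedged sketch; "Dev1" is a placeholder):

```python
import nidaqmx

# Read AO0 back through the internal channel -- no loopback wiring needed.
with nidaqmx.Task() as ai:
    ai.ai_channels.add_ai_voltage_chan("Dev1/_ao0_vs_aognd")
    ai.timing.cfg_samp_clk_timing(1000, samps_per_chan=100)
    print(ai.read(number_of_samples_per_channel=100))
```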

 

 

-Kevin P

Message 15 of 24

Hi Kevin,

 

So, during your test with the PCIe-6341, the "Data Transfer Request Condition" warning was fired and you had to adjust the Onboard Memory Buffer Size instead.

 

When you wrote, "When I set it to be just a few samples...", did you mean the Onboard Memory Buffer Size?  Interesting, because on my board (USB-6003) the minimum value of the Onboard Memory Buffer Size is 1024 samples, and I saw no difference in latency between 1024 and 2047 samples.  What value did you set?

 

When you wrote, "I could see that the latency corresponded well to the size of the task buffer set by the call to DAQmx Configure Output Buffer...", do you mean that the latency was about 1 second?  200 samples/sec with a buffer size of 200?

 

I have never worked with the task buffer before; I'll dig into how to use it and perform some tests here.

 

Thanks for the AI channels tip and for everything!!!

Andrea

Message 16 of 24

First I'll answer your specific questions, then some general stuff.

 

Yes, it was the onboard buffer size and I set it to something like 5 or 10 samples for my PCIe device.  It'd need to be quite a bit bigger for USB.

 

And yes, the latency when writing 200 samples at a time to a size-200 task buffer seemed like a little more than a second.  (There's some explanation below of why it would often be more than a second.  Theoretically, it should have been about 1.5 seconds on average.  Details way down below.)

 

Now onto some general stuff, and first maybe it'll help to have another look at one of the messages I linked in msg #5.

 

I'd also recommend changing some things like the sample rate to see how it affects the latency.  I'd be inclined to expect a pretty direct inverse relationship -- at twice the sample rate, the latency would be about half the time.  I'm basically figuring that the latency is mostly a function of the # of samples working through the various buffers along the way.

 

Your original observation of ~8 seconds with a sample rate of 200 Hz suggests somewhere around 1600 samples worth of buffer to account for.

 

If your device *honors* your Data Transfer Request Condition of "onboard memory empty", the onboard memory buffer size shouldn't be playing much of a role.  But what if the device silently ignores the request, or some other part of task setup overrides it?

 

You said that you can set the onboard buffer size for your task as small as 1024 samples, right?  Have you confirmed that?  The code you posted doesn't show any attempt to set the onboard buffer size, only the DAQmx task buffer size.  Or were you thinking you were doing this when setting the property USB Transfer Request Size to 1024?  Again, based on the article I linked, that property influences the size of the USB transfer buffer.  (It isn't clear to me whether it *sets* the size exactly.)

    At least as an experiment, I'd recommend trying to set both the Onboard Buffer Size and the USB Transfer Request Size pretty small, let's say 50 samples or so.
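
If it helps to see those two properties side by side, here's the experiment expressed in the nidaqmx Python API -- a hedged sketch only ("Dev1" is a placeholder), and the driver may reject values the device can't support:

```python
import nidaqmx

task = nidaqmx.Task()
ch = task.ao_channels.add_ao_voltage_chan("Dev1/ao0")

# Device onboard output buffer size.
task.out_stream.output_onbrd_buf_size = 50

# USB Transfer Request Size property from the article linked in msg #5;
# expect an error if the device can't go this small.
ch.ao_usb_xfer_req_size = 50
```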

 

You've got another 200 samples worth of latency from the DAQmx task buffer.  Since you're writing to the task as fast as DAQmx will let you, you can expect almost the full 200 samples worth of latency there.

 

We've accounted for about 1200 samples worth of latency.  I'd venture that if your device *does* honor the "onboard memory empty" setting, it still won't actually be *completely* empty.  I'm not sure how to guess how many more samples that adds, but the driver is probably a little conservative with a USB device.

 

There's actually another subtle source of 0-200 samples worth of *perceived* latency.  Within one iteration of the loop, the call to DAQmx Read will block for 200 samples worth of time before it returns.  Nothing else in the loop needs an appreciable amount of time to execute.

    If you change PSOL or the sinusoid amplitude on the GUI, it'll happen at some random moment within that 200-sample blocking time.  So you'll have 0-200 extra samples to wait through before you get to the next iteration to *use* your changed value, and then you have to wait another 200.  (I didn't think of this until after I did my experiments, but I think it's the reason why my latency seemed a bit longer than 1 second.)

 

So we've got 1024 + ~200-400 + the # samples in the device's onboard buffer.  That can account for up to 7 of your ~8 observed seconds at 200 Hz sample rate.
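
Spelling that arithmetic out (the onboard-buffer term is the unknown remainder, not a measured value):

```python
# Back-of-envelope accounting for the observed ~8 s latency at 200 S/s.
fs = 200          # sample rate, S/s
usb_xfer = 1024   # USB transfer buffer, samples (apparently the minimum)
task_buf = 200    # DAQmx task buffer, samples
perceived = 200   # 0-200 samples of blocking-call latency; worst case shown

accounted = (usb_xfer + task_buf + perceived) / fs
print(accounted)  # ~7.1 s; the rest of the ~8 s would be the onboard buffer
```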

 

If you can't make the USB Transfer Request Size any smaller, perhaps you could run your AO at a higher rate so that the same size buffer represents less latency time?

 

 

-Kevin P

Message 17 of 24

Hi Kevin,

 

I had to wait until today because I don't have access to the board and LabVIEW on weekends.

 

The latency that you found (a little more than a second) is amazing; I was expecting something like 2 seconds (2*buffer size).

 

Yeah, I read the link from your message 5, and it was from that message that I learned how to set the USB buffer size.  There, the post from 07-11-2012 says: "2) USB Transfer Buffers.  You can get/set information about these buffers via the USB Transfer Request Size and USB Transfer Request Count properties.  These properties are buried deep in the DAQmx channel property node".

 

I included this USB buffer size in the vi (delay non regeneration v2 lv2016) that I sent to you on 05-14-2020 (block diagram with USB buffer size.png attached).  In this vi, when I tried to set the USB buffer size lower than 1024 samples, I got an error (error changing the USB buffer size.png attached).

 

Yeah, I agree, something is not allowing me to reduce the USB buffer size, and maybe this is the problem.  If you look at the error message, it says something about "conflicts with another property" but unfortunately gives no clue about which property.  Another interesting thing: when I changed the USB buffer size from 2047 (the original value) to 1024 (almost half), the latency didn't change.  Maybe this board *does not honor* this adjustment?

 

Thank you so much for your time!

Andrea

Message 18 of 24

From your observation, it looks like the 6003 is not honoring the buffer size adjustment.  Have you attempted the workaround I suggested of adding a delay to your AO loop to force it to run fewer times per second?  My reasoning is that since regeneration is turned off, the call to DAQmx Write is supplying all the data going into the buffer.  Initially, the write will execute as fast as possible until the onboard USB buffer fills up and data backs up; that is where the delay you observed comes from.  Since the requirement of the application is two buffers' worth of delay, if we control how much data gets written into the buffer, it should work.  If you add a delay of 0.8 seconds after the write, the write should execute no more than about once per second.  Then there will be less data stuck in the USB buffer, reducing the delay before the waveform updates.  (A rough sketch of this pacing idea follows.)
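
Here is roughly what I mean in nidaqmx Python terms -- a hedged illustration only, where "Dev1", the 200 S/s rate, and the 1 Hz sine are placeholders:

```python
import time
import numpy as np
import nidaqmx
from nidaqmx.constants import AcquisitionType, RegenerationMode

RATE, N = 200, 200   # placeholder rate; N samples = 1 second of data
t = np.arange(N) / RATE

task = nidaqmx.Task()
task.ao_channels.add_ao_voltage_chan("Dev1/ao0")
task.timing.cfg_samp_clk_timing(RATE, sample_mode=AcquisitionType.CONTINUOUS)
task.out_stream.regen_mode = RegenerationMode.DONT_ALLOW_REGENERATION

task.write(np.sin(2 * np.pi * t))   # prime 1 second of data
task.start()

while True:                              # stop with Ctrl+C
    task.write(np.sin(2 * np.pi * t))    # write 1 second of new data...
    time.sleep(0.8)                      # ...no more than every 0.8 seconds
```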

Message 19 of 24

Jerry makes a good point about not iterating as fast as the calls to DAQmx Write will allow, because that will make the USB transfer buffer fill up immediately and stay full from then on.

    I'm not as sure that a 0.8-second wait will completely solve the problem, though; I think it'll just delay it.  As long as you're writing 1 second's worth of data every 0.8 seconds, it sure seems you'll eventually run into the same buffer-fill situation.

 

For the long run, you need your average DAQmx Write data rate (samples written to the task per second) to equal your device generation data rate (samples generated per second).  Using DAQmx Events will probably be the best way to do that.   Have you done any experimenting with them yet?

 

As another option, have you tried changing the sample rate yet?  If not, give it a try.  I'd suggest something like 1 kHz.  Then even if you fill all the buffers, the latency should drop to roughly 1/5 of what it is now.

 

 

-Kevin P

Message 20 of 24