08-09-2011 01:53 PM
Hi,
I have a strange situation. When I try to configure my DO task for continuous samples with one sample to write, I get error -200609. I get the same error with a DAQ Assistant. I am using LabVIEW 2011 with a PCIe-6343.
Thank you in advance.
Ming
08-09-2011 02:19 PM
Hello!
DAQmx has logic to determine how big the output buffer should be based on the number of samples you write. In this case, since you input only one sample to write, it picks a buffer size of one sample. However, continuous hardware-timed DO generation needs at least two samples' worth of data in the buffer to start, which causes error -200609. If you write two samples to the buffer, the DO generation should proceed smoothly.
Error -200609 occurred at DAQ Assistant
Possible Reason(s):
Generation cannot be started, because the selected buffer size is too small.
Increase the buffer size.
Selected Buffer Size: 1
Minimum Required Buffer Size: 2
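If it helps to see the idea outside of LabVIEW, here is a minimal sketch of the same fix using the nidaqmx Python API; the device name Dev1 and line port0/line0 are placeholders for your hardware, and this is only an illustration, not your actual code. The key point is writing at least two samples before starting a continuous hardware-timed DO task.

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType

# Placeholder device/line names; adjust for your own hardware.
with nidaqmx.Task() as do_task:
    do_task.do_channels.add_do_chan("Dev1/port0/line0")

    # Continuous hardware-timed DO generation.
    do_task.timing.cfg_samp_clk_timing(
        rate=1000.0,
        sample_mode=AcquisitionType.CONTINUOUS,
        samps_per_chan=1000,
    )

    # Writing only one sample here would make DAQmx choose a one-sample
    # buffer and raise error -200609; writing two (or more) samples avoids it.
    do_task.write([True, False], auto_start=False)

    do_task.start()
    # ... generation keeps regenerating the buffered samples until stopped ...
    do_task.stop()
```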
Could you tell me a little more about your use case? If you are trying to output just one DO sample at a time and your application has no timing restrictions, on-demand timing instead of continuous hardware-timed generation might serve you better.
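For comparison, a software-timed (on-demand) version would look roughly like this in the same nidaqmx Python API; again the line name is a placeholder and this is only a sketch of the idea:

```python
import nidaqmx

# Placeholder line name; adjust for your own hardware.
with nidaqmx.Task() as do_task:
    do_task.do_channels.add_do_chan("Dev1/port0/line0")

    # No DAQmx Timing configuration: every write updates the line
    # immediately under software timing ("on demand").
    do_task.write(True)   # drive the line high
    # ... do other work ...
    do_task.write(False)  # drive the line low
```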
Please let us know if you have any more questions.
08-09-2011 02:27 PM
Thank you for the quick reply. I attached example code from my project. The code is mostly about file recording (and a bit messy), but it also demonstrates my problem.
I am trying to control a switch using my DO, but I cannot make the program run.
08-09-2011 02:58 PM
Using continuous samples with a sample clock seems a bit odd when you are using 1Line 1Point. Use Finite Samples. Of course, having your DAQmx Write in the loop when you are never changing the data seems a bit odd as well. Just set the value outside the loop and forget the sample clock.
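For what it's worth, here is a rough sketch of the finite-samples variant using the nidaqmx Python API rather than LabVIEW; the device and line names are placeholders and the waveform is just an example:

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType

# Placeholder device/line names; adjust for your own hardware.
with nidaqmx.Task() as do_task:
    do_task.do_channels.add_do_chan("Dev1/port0/line0")

    # Finite hardware-timed generation: exactly 8 samples at 1 kHz.
    do_task.timing.cfg_samp_clk_timing(
        rate=1000.0,
        sample_mode=AcquisitionType.FINITE,
        samps_per_chan=8,
    )

    do_task.write([True, False] * 4, auto_start=True)
    do_task.wait_until_done()
```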
08-09-2011 03:04 PM
Sorry.
This is only an example to show the procedure I need to deal with.
The DO data will definitely change inside the loops in the real situation.
Should I use Finite Samples in the DAQmx Timing VI?
Thank you for the reply.
08-09-2011 03:09 PM
The example is based on Multi-Function-Synch Dig Read Write With Counter.vi
If there are better examples available, I would be very happy to try them.
Thanks.
08-09-2011 04:49 PM
In your code, it looks like you have a counter output task generating a continuous pulse train, a continuous AI acquisition, and a continuous DO generation synchronized to that pulse train. In your loop, you are acquiring AI data and updating the digital lines every iteration. From your code example, that's all the information I can gather. Did I miss anything?
I have a few more questions for you: What is your AI acquisition's relationship to the DO generation in your application? Do you just need to toggle a digital switch based on the data from the AI acquisition every loop iteration, or do you want to generate a digital waveform every loop iteration? Does the digital waveform change every loop iteration, or does it stay the same?
If you just need to toggle a digital switch based on the AI data every loop iteration, you can do what Dennis Knutson suggested above. In this case, you don't even need to synchronize the continuous AI acquisition with the continuous DO generation; you can simply remove the DAQmx Timing (Sample Clock) VI from your DO generation task to make your code example run.
If you actually need to synchronize a digital waveform with your AI acquisition and want to write a digital waveform every loop iteration, you want to use the DAQmx Write VI that writes multiple samples. You also need to pre-write samples to the DO buffer before you start your DO generation task.
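Purely as an illustration of the pre-write idea (not in LabVIEW), a sketch with the nidaqmx Python API might look like the following; the device, line, and counter-terminal names are placeholders, and the waveform is kept constant here only to keep the example short:

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType, RegenerationMode

# Placeholder names; adjust for your hardware and your counter output terminal.
with nidaqmx.Task() as do_task:
    do_task.do_channels.add_do_chan("Dev1/port0/line0")

    # Clock the DO generation from the counter's pulse train so it stays
    # synchronized with the AI task driven by the same clock.
    do_task.timing.cfg_samp_clk_timing(
        rate=1000.0,
        source="/Dev1/Ctr0InternalOutput",
        sample_mode=AcquisitionType.CONTINUOUS,
        samps_per_chan=1000,
    )

    # Disallow regeneration because new data is written every iteration.
    do_task.out_stream.regen_mode = RegenerationMode.DONT_ALLOW_REGENERATION

    # Pre-write one waveform's worth of samples before starting the task,
    # so the buffer holds more than one sample when generation begins.
    waveform = [True, False] * 50
    do_task.write(waveform, auto_start=False)
    do_task.start()

    for _ in range(10):
        # In a real application the waveform would be recomputed each
        # iteration; this write appends it to the output buffer.
        do_task.write(waveform)

    do_task.stop()
```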
Does this resolve your issue? Please let us know.
It seems like you are new to the DAQmx API. NI.com has many great tutorials on programming with DAQmx that can help you get familiar with the different functions of the DAQmx API. I listed a few links below:
08-09-2011 08:35 PM
Thank you for the kind help. I am new to LabVIEW myself, and it is good to know all the details.