04-14-2025 04:58 AM
Hello,
I have a USB-6343 DAQ and I am trying to perform simultaneous AO and AI at the fastest possible rate.
For my application (which is implementing a Pound-Drever-Hall loop):
I managed to implement this by setting up "finite samples" tasks for the AO and AI and starting and stopping them at every iteration of the loop. The issue is that the start/stop takes ~30 ms, which is much larger than the actual acquisition (0.5 ms at a 200 kHz clock), and that is limiting the loop time.
To improve this I am experimenting with running a continuous AO with Sample Clock:Underflow Behavior set to "Pause until Data Available". This massively improves the loop time, but I am now struggling with how to synchronize the AI, to make sure that I read exactly the 100 samples acquired while the AO was running. I looked into the possibility of using a trigger produced by the AO when new samples are available, but I couldn't find any option for that.
I would appreciate any suggestion on how to achieve this, or whether there is a different solution to the problem I described.
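For context, the continuous-AO experiment described above can be sketched roughly as follows. This is a minimal sketch assuming the nidaqmx Python package; the device name "Dev1", channel names, and rates are illustrative assumptions, not taken from the original post:

```python
# Sketch (assumptions: nidaqmx Python package, device "Dev1",
# 200 kHz sample clock, 100-sample bursts).

RATE = 200_000   # Hz, sample clock
N_SAMPLES = 100  # samples per burst

def burst_duration_s(n_samples=N_SAMPLES, rate=RATE):
    """Time one burst occupies on the wire: 100 samples / 200 kHz = 0.5 ms."""
    return n_samples / rate

def make_paused_ao_task():
    # Deferred import: nidaqmx needs the NI driver and hardware to be useful.
    import nidaqmx
    from nidaqmx.constants import AcquisitionType, UnderflowBehavior
    ao = nidaqmx.Task()
    ao.ao_channels.add_ao_voltage_chan("Dev1/ao0")
    ao.timing.cfg_samp_clk_timing(RATE, sample_mode=AcquisitionType.CONTINUOUS)
    # Pause the sample clock when the FIFO runs dry instead of erroring out,
    # so the task keeps running between bursts.
    ao.timing.samp_clk_underflow_behavior = \
        UnderflowBehavior.PAUSE_UNTIL_DATA_AVAILABLE
    return ao
```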
04-16-2025 06:39 AM - edited 04-16-2025 06:40 AM
Hi cbevil,
Instead of restarting the tasks, you could use "retriggering" to reduce the delay between loop cycles. With retriggering, you define a trigger (e.g. a PFI line) for your AI and AO to be triggered by. This trigger should be generated each time one of your cycles (reading samples, calculating the new offset) is completed, to start a new cycle. The advantage of retriggering is that you will not have to start/stop your tasks in every loop cycle, only at the beginning and end of your program.
Retriggering is not supported by all USB DAQs, but it is by your USB-6343.
I am currently looking into how this could be achieved using the DAQmx driver in Python (to check feasibility, and also for my own curiosity).
Best regards
Leonard
04-16-2025 07:38 AM - edited 04-16-2025 07:42 AM
Your mention of the Sample Clock:Underflow Behavior property was intriguing, as it's a property I've never familiarized myself with. But I suspect you were right to assess that you wouldn't be able to reliably correlate your AI samples to your intermittent AO outputs.
Can you talk us through some more details about your system and your requirements? How exactly is the ~30 msec loop time a problem for you? Does it affect system behavior or is it simply an inefficiency in your test that you'd like to address?
Here are a couple of breadcrumbs for you in the meantime, ideas you can research further:
1. Use DAQmx Control Task to "commit" the task before your loop. This will speed up your subsequent stops & restarts.
2. Share a sample clock to keep AO and AI correlated in hardware, configure AO buffer(s) to be small to reduce latency, and make both AO and AI be continuous tasks. (Note that it can get tricky to accomplish low latency while keeping your data correlated if your AO task ever regenerates from your task buffer.)
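The two ideas above might look roughly like this in Python with the nidaqmx package. This is a hedged sketch, not a complete implementation: the device name "Dev1", the channels, rates, and sample counts are illustrative assumptions.

```python
# Sketch of both ideas (nidaqmx Python package assumed; "Dev1" and all
# channel names/rates are illustrative, not from the original posts).

def ao_buffer_latency_s(buffer_samples, rate_hz):
    """Worst-case wait a new output value spends behind an AO buffer:
    e.g. a 200-sample buffer at 200 kHz adds up to 1 ms of latency."""
    return buffer_samples / rate_hz

def make_committed_finite_ai(rate=200_000, n_samples=100):
    """Idea 1: commit the task before the loop so later stop()/start()
    cycles skip the expensive reserve/program steps."""
    import nidaqmx  # deferred: needs the NI driver and hardware
    from nidaqmx.constants import AcquisitionType, TaskMode
    ai = nidaqmx.Task()
    ai.ai_channels.add_ai_voltage_chan("Dev1/ai0")
    ai.timing.cfg_samp_clk_timing(rate, sample_mode=AcquisitionType.FINITE,
                                  samps_per_chan=n_samples)
    ai.control(TaskMode.TASK_COMMIT)  # "DAQmx Control Task" -> commit
    return ai

def make_shared_clock_tasks(rate=200_000):
    """Idea 2: continuous AO and AI slaved to one sample clock, so the
    two streams stay correlated sample-for-sample in hardware."""
    import nidaqmx
    from nidaqmx.constants import AcquisitionType
    ao = nidaqmx.Task()
    ao.ao_channels.add_ao_voltage_chan("Dev1/ao0")
    ao.timing.cfg_samp_clk_timing(rate, sample_mode=AcquisitionType.CONTINUOUS)
    ai = nidaqmx.Task()
    ai.ai_channels.add_ai_voltage_chan("Dev1/ai0")
    # Drive AI from the AO sample clock terminal (start AI before AO so
    # no clock edges are missed).
    ai.timing.cfg_samp_clk_timing(rate, source="/Dev1/ao/SampleClock",
                                  sample_mode=AcquisitionType.CONTINUOUS)
    return ao, ai
```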
-Kevin P
04-16-2025 07:58 AM
Hi Leonard,
thanks a lot for your suggestion!
I was also considering this option, but I am not sure how to send a trigger from software without setting up a task (which I guess would have the same issue of starting and stopping every time).
Did you have something specific in mind?
Best regards,
Carlo
04-16-2025 08:54 AM
Hi Kevin,
thanks a lot for your reply!
Regarding Sample Clock:Underflow Behavior, I was hoping there would be a signal on the board that is low when the FIFO is empty and high otherwise, which I could then use to trigger the AI, but I couldn't find anything like that.
The 30 msec loop time is a problem for me because the dynamics of the system I am trying to lock to its minimum with the Pound-Drever-Hall loop are faster (on the few-msec scale). Therefore the system has already drifted away from the minimum by the time I reach the next iteration.
Regarding your other suggestions:
Best regards,
Carlo
04-16-2025 09:36 AM - edited 04-16-2025 09:38 AM
To be honest, I don't think there's much hope of "getting there from here", if we define "there" as a low-latency control loop based on the phase of a high bandwidth system response and "here" as restarting finite tasks while working across USB under Windows.
If your system's dynamic response to a changed output stimulus dissipates within a handful of msec, I think your *only* hope with USB and Windows is to make both tasks continuous, and then get clever with code to manage the problem you found when regeneration is enabled. You'd also need to put in some work to get low latency from the AO task which will require small buffers and almost certainly also require regeneration to be enabled.
I don't know enough about your signals, your system, or your method for determining phase to try to walk through all the details. Would you be able to loop your AO signal back to an AI channel that you include in your AI task? That would give you a time-correlated data stream of stimulus & response to use for evaluating phase. (Detail note: you may need to query the task for the ConvertClock.Rate because *some* of the apparent phase difference will be caused by the time delay inherent in AI multiplexing.)
-Kevin P
04-16-2025 02:52 PM - edited 04-16-2025 03:02 PM
Hi cbevil,
You could generate the signal from a separate application. I suggest the following setup:
- one script with the simultaneous AI & AO, as well as the AI processing
- the AI & AO are triggered when rising edge appears on PFI0
- one script generating the trigger signal (output over digital output line)
- a physical connection between that digital output line and PFI0
You need to be careful with the rate of the trigger signal. You would need to configure it so that there is enough time between rising edges for the AI processing, as well as the read (for AI) and write (for AO). For that you would need to figure out how long those operations take. Due to the non-real-time conditions on your Windows PC, your AI processing might get interrupted, so it can always happen that you miss one or more rising edges. Would this be a problem for your scenario?
I attached a Python example of the first script that I suggested above (with some comments).
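For reference, the two scripts described above might be sketched roughly like this. This is a hedged illustration, not the attached example: the device name "Dev1", the use of PFI0 and port0/line0, and the rates/sample counts are all assumptions.

```python
# Illustrative sketch of the retriggered setup (nidaqmx Python package;
# "Dev1", PFI0, port0/line0, rates and sample counts are assumptions).

def min_trigger_period_s(n_samples, rate_hz, processing_s):
    """Lower bound on the spacing between trigger edges: the burst itself
    plus the host-side read/write/processing time."""
    return n_samples / rate_hz + processing_s

def make_retriggered_ai_ao(rate=200_000, n_samples=100):
    """Script 1: finite AI and AO that re-arm on every rising edge of PFI0."""
    import nidaqmx  # deferred: needs the NI driver and hardware
    from nidaqmx.constants import AcquisitionType
    ai = nidaqmx.Task()
    ai.ai_channels.add_ai_voltage_chan("Dev1/ai0")
    ai.timing.cfg_samp_clk_timing(rate, sample_mode=AcquisitionType.FINITE,
                                  samps_per_chan=n_samples)
    ai.triggers.start_trigger.cfg_dig_edge_start_trig("/Dev1/PFI0")
    ai.triggers.start_trigger.retriggerable = True

    ao = nidaqmx.Task()
    ao.ao_channels.add_ao_voltage_chan("Dev1/ao0")
    ao.timing.cfg_samp_clk_timing(rate, sample_mode=AcquisitionType.FINITE,
                                  samps_per_chan=n_samples)
    ao.triggers.start_trigger.cfg_dig_edge_start_trig("/Dev1/PFI0")
    ao.triggers.start_trigger.retriggerable = True
    return ai, ao

def pulse_trigger_line():
    """Script 2: software-timed pulse on a DO line physically wired to PFI0."""
    import nidaqmx
    with nidaqmx.Task() as do:
        do.do_channels.add_do_chan("Dev1/port0/line0")
        do.write(True)   # rising edge fires the retriggered tasks
        do.write(False)  # return low, ready for the next cycle
```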
As pointed out by Kevin_Price, USB might be a bottleneck leading to long execution times in the read calls (for AI). This would naturally increase the time to execute one loop. For your type of application, a cRIO with RT/FPGA processing would be better.
Best regards
Leonard