05-12-2025 02:18 PM
Hello,
I am acquiring data at 100 MS/s through a DAQ card.
They are pulse traces received back.
When I use Write to Text File or other standard file I/O, I get lag and some samples are missed.
The same happens with TDMS used in the same loop.
Can someone please guide me on how to log such data?
It is generated inside a while loop.
Thanks
05-12-2025 02:32 PM
Have you tried DAQmx logging? Which card are you using?
If DAQmx logging is not fast enough to capture your data, consider logging only transitions. Depending on the density of the pulses, logging only transitions could dramatically decimate the amount of data that needs to be written to disk.
Also, please help the community help you by attaching your code.
05-12-2025 02:58 PM
Are you using a proper program architecture for high speed data collection like a Producer/Consumer?
05-12-2025 03:07 PM
@jpderek wrote:
Hello,
I am acquiring data at 100 MS/s through a DAQ card.
They are pulse traces received back.
When I use Write to Text File or other standard file I/O, I get lag and some samples are missed.
The same happens with TDMS used in the same loop.
Can someone please guide me on how to log such data?
It is generated inside a while loop.
Thanks
As far as I know, there are no DAQ cards that can do 100 MS/s. Are you using NI-Scope? You may be able to reach those rates for continuous acquisition with NI-Scope. TDMS should work for that, but you need to use the Advanced TDMS functions, write raw data with the scaling information included, and write in a multiple of the disk sector size. You will need a second loop for this. In addition, you will need a RAID array to keep up with that speed.
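As a language-agnostic illustration of the sector-alignment point above (a Python sketch, not LabVIEW; the 512-byte sector size and 16-bit sample width are assumptions, so check your actual disk and digitizer), padding each raw chunk up to a whole number of sectors looks like this:

```python
# Sketch: size raw sample chunks to a whole number of disk sectors
# before writing. Both constants below are ASSUMPTIONS for this
# example -- query your actual disk sector size and sample width.
SECTOR_SIZE = 512          # bytes per sector (assumed)
BYTES_PER_SAMPLE = 2       # 16-bit raw samples (assumed)

def padded_size(n_samples: int) -> int:
    """Smallest multiple of SECTOR_SIZE that holds n_samples raw samples."""
    raw = n_samples * BYTES_PER_SAMPLE
    return ((raw + SECTOR_SIZE - 1) // SECTOR_SIZE) * SECTOR_SIZE

print(padded_size(1000))   # 2000 raw bytes -> 2048 bytes (4 sectors)
```

Writing only sector-aligned chunk sizes lets the OS bypass read-modify-write cycles, which matters at streaming rates like these.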
05-13-2025 07:25 AM
The fastest I could find (at a quick glance) was 20 MS/s:
PXIe-4481 - NI
05-13-2025 10:47 AM
Thanks for your kind response.
I am using a Gage DAQ card.
Will DAQmx work for this, or is it NI-card-specific?
I will share the code, but it only initializes the DAQ card; when data is acquired in the while loop, it is saved through File I/O functions.
Thanks
05-13-2025 10:57 AM
@jpderek wrote:
Thanks for your kind response.
I am using a Gage DAQ card.
Will DAQmx work for this, or is it NI-card-specific?
I will share the code, but it only initializes the DAQ card; when data is acquired in the while loop, it is saved through File I/O functions.
Thanks
DAQmx will NOT work with this card. The advice is pretty much the same as before.
05-13-2025 11:41 AM
I just read your post and will try the Producer/Consumer (Events) example from the template.
For the consumer side, which method of file writing should I use?
Will this not be the same as writing in the while loop of the data acquisition?
Thanks
05-13-2025 11:49 AM - edited 05-13-2025 11:53 AM
@jpderek wrote:
I just read your post and will try the Producer/Consumer (Events) example from the template.
For the consumer side, which method of file writing should I use?
Will this not be the same as writing in the while loop of the data acquisition?
Thanks
TBH: it should not matter, because the benefit of the Producer/Consumer pattern is the Queue between the loops. The Queue is a buffer that lets the DAQ (producer) loop run at full speed, while the consumer loop dequeues, analyzes, displays, and saves data to disk without affecting the DAQ loop's timing.
That said, with large amounts of data you could still run into memory issues if the Queue grows out of hand before data can be written to disk. In that case, faster formats like TDMS are useful.
05-13-2025 11:54 AM
For the consumer side, which method of file writing should I use?
TDMS or flat binary file, but TDMS would be better.
Will this not be the same as writing in the while loop of the data acquisition?
No.
Assume you download 1 second of data per loop iteration, so each second you download 100 MS. If the file write is in the same loop, you have 1 second to save that data before the next chunk arrives. If saving takes less than 1 s, it can stay in the same loop, but this is not typical. A second loop absorbs any slowdowns. But, as said earlier, if the second loop is too slow, you will eventually run out of memory.
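To put a number on the time budget above (a back-of-envelope sketch; the 16-bit sample width is an assumption, since Gage digitizers vary in resolution):

```python
# Back-of-envelope disk budget for 100 MS/s continuous streaming.
# 16 bits per sample is an ASSUMPTION -- check your digitizer.
sample_rate = 100_000_000        # samples per second
bytes_per_sample = 2             # assumed 16-bit raw samples
rate_bytes = sample_rate * bytes_per_sample
print(rate_bytes / 1e6, "MB/s")  # 200.0 MB/s sustained to disk
```

That is 200 MB/s sustained, every second, indefinitely, which is why the earlier replies point at raw binary/TDMS formats and RAID rather than text files.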