04-02-2021 05:03 AM - edited 04-02-2021 05:07 AM
Hi,
I'm currently working on a project that acquires an enormous amount of data.
During the acquisition, the data is logged in multiple TDMS files. At the end of the acquisition, I need to average the data from the TDMS logging files every N samples and split the data into multiple files that correspond to different devices.
The strategy I used to accomplish this task is (a rough code sketch follows the list):
- open the first TDMS logging file
- average the samples
- split the logging file's data into 256 files
...
- open the next TDMS logging file
- average the samples
- split the data and append it to the 256 created files
...
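For illustration only, here is a minimal sketch of that per-file pass in Python using the npTDMS library and NumPy; LabVIEW is graphical, so this is just a textual analogue of the loop above. The averaging window N, the output file names, and the one-output-file-per-channel mapping are assumptions, not the actual project code; adapt them to your real 256-device layout.

```python
import numpy as np
from nptdms import TdmsFile

N = 1000  # hypothetical averaging window (samples per average)

def block_average(samples, n):
    """Average every n consecutive samples (trailing remainder dropped)."""
    usable = len(samples) // n * n
    return samples[:usable].reshape(-1, n).mean(axis=1)

def process_log_file(log_path):
    """One pass over a single TDMS logging file: read, average, append."""
    tdms_file = TdmsFile.read(log_path)
    for group in tdms_file.groups():
        for channel in group.channels():
            averaged = block_average(channel[:], N)
            # Hypothetical mapping: one output file per group/channel pair.
            out_path = f"device_{group.name}_{channel.name}.csv"
            with open(out_path, "a") as f:
                np.savetxt(f, averaged)
```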
I tried two different approaches: saving the final data into CSV files or into other TDMS files.
I expected the second method to be faster, but appending data to existing TDMS files seems much slower.
Why is that? Do you have any tips for optimizing this data processing?
For example, when I acquire 1xx logging files of 1.5 GB each, the logging-files-to-TDMS process takes 3 times longer than the logging-files-to-CSV process.
Thanks for your help
04-02-2021 07:03 AM
One thing I see that could improve things is to remove your inner FOR loop where you are writing the new TDMS file. You may also have to remove the Transpose 2D Array to write the data correctly. That way, all of the data is written in one write instead of many, which might also help the defragmentation process.
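To make the "one write instead of many" idea concrete, here is a rough textual analogue using Python's npTDMS library; in LabVIEW the equivalent is a single TDMS Write call fed the whole 2D array. Each TDMS write appends a segment with its own metadata block, so many small per-channel writes produce many small segments, which is plausibly what fragments the file and slows down your appends. The channel names and array sizes below are made up for illustration.

```python
import numpy as np
from nptdms import TdmsWriter, ChannelObject

data = np.random.rand(256, 10_000)  # hypothetical: 256 devices x samples

# Build one ChannelObject per device so all channels share a segment.
channels = [
    ChannelObject("averaged", f"device_{i}", data[i])
    for i in range(data.shape[0])
]

with TdmsWriter("output.tdms") as writer:
    # One segment containing all channels: a single metadata block,
    # instead of 256 segments each carrying its own metadata.
    writer.write_segment(channels)
```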