11-09-2015 02:27 PM
I meant around a billion.
11-09-2015 02:30 PM
How about this: while reading the chunks of TDMS files for processing and plotting, what if I plot just one chunk of data, erase the processed data, and keep building the plot by erasing the previous chunk of processed data? Can this be done in an efficient way so that I can avoid concatenating in the for loop?
11-09-2015 02:34 PM
Create a FOR loop where you read 1000 samples at a time and take the mean. Auto-index at the right loop boundary. Now you have an array that is 1000x smaller than the original data and corresponds to a 10 kHz sampling rate, which is plenty for a 1.5 kHz signal. Take the FFT using a df corresponding to the reduced dt. Modify as needed.
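(For reference, here is a minimal text-language sketch of that block-averaging decimation in Python/NumPy; the thread itself is about LabVIEW, and the sample rate and decimation factor are taken from the numbers discussed above.)

```python
import numpy as np

def decimate_by_mean(samples, factor=1000):
    """Block-average decimation: average each group of `factor`
    consecutive samples, mimicking the auto-indexed FOR loop."""
    n = (len(samples) // factor) * factor    # drop any ragged tail
    return samples[:n].reshape(-1, factor).mean(axis=1)

# 1E9 samples at 10 MHz -> ~1E6 samples at an effective 10 kHz
fs_in = 10e6
factor = 1000
fs_out = fs_in / factor                      # 10 kHz, Nyquist 5 kHz
```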
11-09-2015 02:36 PM
@onairwithash wrote:
How about this: while reading the chunks of TDMS files for processing and plotting, what if I plot just one chunk of data, erase the processed data, and keep building the plot by erasing the previous chunk of processed data? Can this be done in an efficient way so that I can avoid concatenating in the for loop?
You never said anything about "plotting". Once you create indicators and graphs, memory use will at least triple.
I would suggest doing all computations without involving front panel objects.
11-09-2015 06:22 PM
Let's take a step back and try to get to the root of the problem.
What does each line in the file represent with regard to your physical measurement? How many data points are in each line?
When you convert the position data to velocity data, how do you manage the large data set size issue?
After you get the velocity data do you need to retain the original position data?
Lynn
11-10-2015 11:58 AM
Each line represents an encoder position of a motor. There is just one data point in each line.
I split the huge file into chunks of files, read them one at a time, do the velocity computation, and write the result back into the same chunk of TDMS file.
After I get the velocity data, I no longer need the position data.
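(A hedged sketch of that chunked workflow in Python, assuming the nptdms library; the group and channel names are hypothetical, and the result is written to a separate output file rather than back into the same chunk, for simplicity.)

```python
import numpy as np
from nptdms import TdmsFile, TdmsWriter, ChannelObject

FS = 10e6           # sample rate of the position data, per the thread
GROUP = "Data"      # hypothetical group/channel names
POS_CH = "Position"

def position_chunk_to_velocity(in_path, out_path):
    """Read one chunk file, differentiate position to velocity,
    and write the velocity to a new TDMS file."""
    pos = TdmsFile.read(in_path)[GROUP][POS_CH][:]
    vel = np.gradient(pos) * FS              # d(position)/dt
    with TdmsWriter(out_path) as w:
        w.write_segment([ChannelObject(GROUP, "Velocity", vel)])
```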
11-10-2015 03:17 PM
Thank you for the information.
Essentially you then have velocity data sampled at 10 MHz.
You only care about velocity components at frequencies below 1500 Hz.
Consider this: If you decimate the data by a factor of 1000, then the remaining data is the equivalent of sampling at 10 kHz (5 kHz Nyquist frequency). If you average 1000 points of the 10 MHz data and save the average value, this becomes your decimated data. The array will be ~1 megasample.
I would probably include the decimation in the velocity calculation and never create the large files unless you think you will need the high frequency data later. Think about that two or three times before deciding to discard the data because once it is gone you can never recover it.
Note: When doing Fourier-transform-based analysis, the frequency resolution is df = fs/N, where fs is the sampling frequency and N is the number of samples analyzed. Higher-speed sampling does not improve frequency resolution unless you also analyze proportionally larger data sets. If your reduced data has fs = 10 kHz and N = 1E6, then df = 0.01 Hz. The original data had fs = 10 MHz and N = 1E9, for the same df = 0.01 Hz.
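(A quick numeric check of df = fs/N for both data sets, plus a hedged sketch of an FFT on the decimated velocity; the random array is a placeholder for real data.)

```python
import numpy as np

# Frequency resolution df = fs / N is identical for both data sets
print(10e3 / 1e6)    # reduced data:  10 kHz / 1E6  -> 0.01 Hz
print(10e6 / 1e9)    # original data: 10 MHz / 1E9  -> 0.01 Hz

# FFT of the decimated velocity; the frequency axis uses the reduced dt
fs = 10e3
vel = np.random.randn(1_000_000)             # placeholder for real data
spectrum = np.abs(np.fft.rfft(vel))
freqs = np.fft.rfftfreq(len(vel), d=1/fs)    # bins spaced df = fs/N apart
```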
Lynn
11-10-2015 03:40 PM
@johnsold wrote:
If you average 1000 points of the 10 MHz data and save the average value, this becomes your decimated data.
That's exactly what I was suggesting. 😄
11-10-2015 03:42 PM
Cool...I will do the decimation and let you guys know...
11-10-2015 03:54 PM
I like to give credit where it is due. Altenbach suggested that earlier; I probably saw it when he posted it but did not recall it when I posted the decimation suggestion.
Lynn