
Decimate a 1 GB array to meaningful data and apply a filter

Solved!

I meant around a billion. 

Message 11 of 21

How about this: while reading the chunks of TDMS files for processing and plotting, I plot just one chunk of data, discard the processed data, and keep building the plot chunk by chunk, erasing the previous chunk of processing data as I go. Can this be done efficiently so that I can avoid concatenating data in the FOR loop?

Message 12 of 21
Solution
Accepted by onairwithash

Create a FOR loop that reads 1000 samples at a time and takes the mean. Auto-index the result at the loop boundary. Now you have an array that is 1000× smaller than the original data and corresponds to a 10 kHz sampling rate, which is plenty for a 1.5 kHz signal. Take the FFT using a df corresponding to the reduced dt. Modify as needed.
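
In text form, the approach looks roughly like this. A minimal NumPy sketch rather than LabVIEW G, with synthetic stand-in data in x; the 10 MHz rate, 1000-sample blocks, and reduced-rate FFT are the ones discussed in this thread.

import numpy as np

fs = 10e6     # original sampling rate: 10 MHz
block = 1000  # average 1000 samples at a time

# x stands in for one chunk of raw samples; in the real VI this
# would come from TDMS Read (synthetic data here).
x = np.random.randn(1_000_000)

# Block-average: the text equivalent of the FOR loop with an
# auto-indexing output tunnel.
n = x.size // block
decimated = x[:n * block].reshape(n, block).mean(axis=1)

fs_dec = fs / block                      # 10 kHz, Nyquist 5 kHz
spectrum = abs(np.fft.rfft(decimated))
freqs = np.fft.rfftfreq(decimated.size, d=1.0 / fs_dec)  # bin spacing df

A side benefit: averaging each 1000-sample block acts as a crude low-pass filter, which reduces (though does not eliminate) aliasing of content above 5 kHz.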

Message 13 of 21

@onairwithash wrote:

How about this: while reading the chunks of TDMS files for processing and plotting, I plot just one chunk of data, discard the processed data, and keep building the plot chunk by chunk, erasing the previous chunk of processing data as I go. Can this be done efficiently so that I can avoid concatenating data in the FOR loop?


You never said anything about "plotting". Once you create indicators and graphs, memory use will at least triple.

I would suggest doing all computations without involving front panel objects.

Message 14 of 21

Let's take a step back and try to get to the root of the problem.

 

What does each line in the file represent with regard to your physical measurement? How many data points are in each line?

 

When you convert the position data to velocity data, how do you manage the large data set size issue?

 

After you get the velocity data do you need to retain the original position data?

 

Lynn

Message 15 of 21

Each line represents an encoder position of a motor. There is just one data point in each line.

I split the huge file into chunks, read them one at a time, do the velocity computation, and write the result back into the same chunk of the TDMS file.

After I get the velocity data, I no longer need the position data. 

Message 16 of 21

Thank you for the information. 

 

Essentially you then have velocity data sampled at 10 MHz.

 

You only care about velocity components at frequencies below 1500 Hz.

 

Consider this: If you decimate the data by a factor of 1000, then the remaining data is the equivalent of sampling at 10 kHz (5 kHz Nyquist frequency). If you average 1000 points of the 10 MHz data and save the average value, this becomes your decimated data. The resulting array will be ~1 megasample.

 

I would probably include the decimation in the velocity calculation and never create the large files, unless you think you will need the high-frequency data later. Think about that two or three times before deciding to discard the data, because once it is gone you can never recover it.
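
A rough sketch of that one-pass idea, again in NumPy rather than G. Here read_chunk is a hypothetical stand-in for TDMS Read (it returns a synthetic position record); the 10 MHz rate and factor-of-1000 decimation are the ones from this thread.

import numpy as np

fs = 10e6      # encoder position sampled at 10 MHz
block = 1000   # decimation factor -> 10 kHz output rate
n_chunks = 4   # however many chunks the file was split into

def read_chunk(i, size=1_000_000):
    # Hypothetical stand-in for TDMS Read: returns chunk i of the
    # encoder-position record (synthetic random walk here).
    rng = np.random.default_rng(i)
    return np.cumsum(rng.integers(-1, 2, size).astype(float))

out = []
last_pos = None   # carry the last sample across chunks so the
for i in range(n_chunks):  # difference at the boundary is not skipped
    pos = read_chunk(i)
    if last_pos is not None:
        pos = np.concatenate(([last_pos], pos))
    last_pos = pos[-1]

    vel = np.diff(pos) * fs          # velocity in counts/second

    n = vel.size // block            # decimate by block-averaging;
    # any leftover samples (< block) are dropped for simplicity
    out.append(vel[:n * block].reshape(n, block).mean(axis=1))

velocity_10khz = np.concatenate(out)  # megasamples instead of a billion

The large position record never exists in memory as one array; only the decimated velocity is accumulated.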

 

Note: When doing Fourier-transform-based analysis, the frequency resolution is df = fs/N, where fs is the sampling frequency and N is the number of samples analyzed. Higher-speed sampling only gives you better frequency resolution if you use larger data sets. If your reduced data has fs = 10 kHz and N = 1E6, then df = 0.01 Hz. The original data had fs = 10 MHz and N = 1E9, for the same df = 0.01 Hz.
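
As a quick numeric check of df = fs/N (NumPy again; the numbers are the ones above):

import numpy as np

df_reduced = 10e3 / 1e6   # fs = 10 kHz, N = 1e6 -> 0.01 Hz
df_original = 10e6 / 1e9  # fs = 10 MHz, N = 1e9 -> 0.01 Hz, the same

# The FFT bin spacing agrees: rfftfreq(N, d=1/fs) steps by fs/N.
step = np.fft.rfftfreq(1_000_000, d=1.0 / 10e3)[1]
assert abs(step - df_reduced) < 1e-12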

 

Lynn

Message 17 of 21

@johnsold wrote:

If you average 1000 points of the 10 MHz data and save the average value, this becomes your decimated data.


That's exactly what I was suggesting. 😄

Message 18 of 21

Cool...I will do the decimation and let you guys know...

Message 19 of 21

I like to give credit where it is due. Altenbach suggested that earlier; I probably saw it when he posted it but did not recall it when I posted the decimation suggestion.

 

Lynn

Message 20 of 21