LabVIEW

Slow-running heart rate variability VI — please help, my thesis is due in 3 days! :(

Hi there, I'm having a big issue with a VI I have created to detect heart rate variability (HRV) from ECG and PPG signals, to see which signal is more accurate with regard to HRV.

I initially recorded 25 minutes' worth of data and saved it into a text file with two columns: the first is the PPG data, the second the ECG. There are at least 1,000,000 rows of data.

 

 

I am now feeding this data into a VI which should give me results such as RMSSD, pNN50, Poincaré plots, histograms, etc. The trouble I'm having is that the VI starts off fast but then becomes incredibly slow; I left it running for over 10 hours and it had hardly read any data.

Not being much of a LabVIEW expert, I don't know how to fix this. I would be most grateful if someone could help me find my error, as my thesis is due in 3 days and I'm stuck on this last part. I have attached my VI below. Please, please help me! 😞

Message 1 of 4

The most likely cause of the slowdown is the use of Build Array inside the loop. This causes frequent memory reallocations, and as the arrays grow you might even get out-of-memory errors. Even though the arrays are not as large as the available memory, an array must occupy contiguous locations in memory. Frequent reallocations as the array grows can fragment the available memory space to the point where there is no longer a contiguous space large enough.

 

The fix: Preallocate a space large enough for the data outside the loop and use Replace Array Subset inside the loop.
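LabVIEW is graphical, but the same pattern exists in text languages. A minimal Python/NumPy sketch of the two approaches (the array sizes here are illustrative, not taken from the attached VI):

```python
import numpy as np

n = 1_000_000
data = np.random.rand(n)  # stand-in for the ECG/PPG samples

# Slow pattern: growing the array inside the loop (analogous to
# Build Array in LabVIEW) reallocates and copies repeatedly.
grown = np.empty(0)
for i in range(1000):  # only 1000 iterations so this stays quick
    grown = np.append(grown, data[i])

# Fast pattern: preallocate once outside the loop, then overwrite
# in place (analogous to Initialize Array + Replace Array Subset).
result = np.empty(n)
for i in range(n):
    result[i] = data[i]
```

The second loop touches each element exactly once and never reallocates, which is why the preallocate-and-replace idiom scales to millions of points.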

 

Other comments:

- Do not wire N and use an autoindexing tunnel on a for loop at the same time. Use one or the other, usually autoindexing. 

- What is the purpose of getting an Array Subset which omits the first element and then subtracting the original array? Subtracting arrays of different lengths may not do what you expect. Rotating the array might be faster.

- What is the purpose of taking the absolute value of every element in an array and only using the last element?

- You could plot all the data outside the loop on graphs.  No need to write to a chart once for each point in the data set.

- Peak Detector.vi will work on arrays of data and produce arrays of peaks. This might be faster than the point by point version.

- I suspect that you could do all the calculations once after you find the RR intervals rather than repeating them on each iteration of the loop.  The interim results are not very meaningful anyway.
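To illustrate the last point in a text language: once the full array of RR intervals exists, statistics such as RMSSD and pNN50 are each a single pass over it, so there is no need to recompute them inside the loop. A hedged Python sketch (the 50 ms threshold is the standard pNN50 definition; the interval units are assumed to be milliseconds):

```python
import numpy as np

def hrv_stats(rr_ms):
    """Compute RMSSD and pNN50 once from an array of RR intervals (ms)."""
    rr = np.asarray(rr_ms, dtype=float)
    diffs = np.diff(rr)                          # successive differences
    rmssd = np.sqrt(np.mean(diffs ** 2))         # root mean square of diffs
    pnn50 = np.mean(np.abs(diffs) > 50.0) * 100  # % of |diffs| above 50 ms
    return rmssd, pnn50

# Perfectly regular beats give RMSSD = 0 and pNN50 = 0.
rmssd, pnn50 = hrv_stats([800, 800, 800, 800])
```

Run this once, after the loop that finds the RR intervals, rather than on every iteration.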

 

Lynn

Message 2 of 4

If you do not know LabVIEW, why did you make it a central part of your thesis? Do you know any other programming language, such as MATLAB, which can read your data file? Could you read it into Excel?

 

With three days left until your thesis is due, this is not the time to try to learn LabVIEW. Are you expected to do all of your thesis work yourself, or can you collaborate with a fellow student to get this part accomplished? What does your thesis advisor suggest?

 

Bob Schor

Message 3 of 4

First question: does the code produce the correct result, just slowly?

 

So you have a million data points and you process them one point at a time in a FOR loop, updating several XY graphs and charts and building arrays. Most indicators get overwritten on each iteration.

 

You are heavily filtering the data (twice!) and taking the derivative twice (i.e., the second derivative). You are doing point-by-point peak finding on 50 points every time a new point is added, meaning that you mostly re-analyze the same data (differing by only one point each iteration). That seems inefficient. Wouldn't you detect most peaks 50 times?

 

Since your data is filtered, maybe it would be sufficient to decimate the data before the loop processing. How correlated are adjacent points?
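As an illustration of decimation in a text language (the sample rate and the factor of 10 here are assumptions; pick a factor based on how oversampled the filtered signal actually is):

```python
import numpy as np

fs = 1000      # assumed original sample rate, Hz
factor = 10    # assumed decimation factor

# 25 minutes of a 1 Hz test tone standing in for the filtered signal.
t = np.arange(25 * 60 * fs) / fs
signal = np.sin(2 * np.pi * 1.0 * t)

# Keep every 10th sample. Because the data has already been low-pass
# filtered, simple striding can stand in for a proper decimator here.
decimated = signal[::factor]
```

Decimating before the loop cuts the work of every downstream stage by the same factor.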

 

Have you analyzed which part of the code is slowest? What is the current execution time ("slow" is subjective), and what speed would you like to have instead?

 

Things like the histogram only need to be done exactly once, at the end. It seems pointless to redo them over and over, a million times in a row, when most of the data has not changed. The same problem applies to all the graphs.

 

You seem to take the difference between adjacent points in the generated array. Again, you are repeating the same calculation over and over. Wouldn't it be much more efficient to keep the array of pairwise differences in a shift register, use a scalar shift register to retain the immediately previous point, subtract the previous point from the new one, and append that single difference to the array instead?
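In a text language, the shift-register pattern for incremental differences looks like this (a minimal Python sketch; the variable names and sample values are mine):

```python
# Carry the previous point and the growing list of differences across
# iterations, the way LabVIEW shift registers would, instead of
# re-computing all pairwise differences on every loop pass.
points = [800.0, 812.0, 795.0, 805.0]  # stand-in RR intervals (ms)

diffs = []          # "array" shift register
prev = points[0]    # scalar shift register holding the previous point
for p in points[1:]:
    diffs.append(p - prev)  # exactly one new difference per iteration
    prev = p
```

Each iteration does O(1) work instead of re-diffing the whole array, so the loop cost stays flat as the data grows.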

Message 4 of 4