‎11-27-2022 03:40 PM
The point is that if you fit your original (sparse) data to the correct model, you get correct estimates for the parameters (and parameter errors, if you know the magnitude of the noise in the data!). Once you have a reliable set of parameters, they can be used to calculate the curve at any timing resolution you like. (Even femtosecond, if you wish!) 😄
A fit with only 25 points should complete in well under a millisecond.
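To make this concrete, here is a minimal sketch of the fit-then-resample idea in Python with SciPy. The exponential-decay model, sampling interval, and noise level are stand-in assumptions, not the actual model from this thread; the point is only that the parameters from a 25-point fit let you evaluate the curve on as fine a grid as you want.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical model -- replace with the correct model for your data.
def model(t, a, tau):
    return a * np.exp(-t / tau)

rng = np.random.default_rng(0)
t_sparse = np.linspace(0, 36, 25)  # 25 points, roughly 1.5 s spacing
y_sparse = model(t_sparse, 2.0, 8.0) + rng.normal(0, 0.02, t_sparse.size)

# Fit the sparse, noisy data; pcov gives the parameter error estimates.
popt, pcov = curve_fit(model, t_sparse, y_sparse, p0=[1.0, 5.0])
perr = np.sqrt(np.diag(pcov))

# The fitted parameters can then be evaluated at any resolution you like.
t_fine = np.linspace(0, 36, 100_000)
y_fine = model(t_fine, *popt)
```

Note that `pcov` only yields meaningful absolute parameter errors when the noise magnitude is known (or supplied via `sigma=` with `absolute_sigma=True`).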
A spline is NOT a correct interpolation because it does not agree with your model. A spline is forced through every point, even a point that is way off because of noise.
Taking the derivative adds more massaging to the data AND amplifies all errors; taking the derivative of a spline through sparse data will gloss over this. Do you have the correct formula for the raw data that should be fit? Do you have a link to the theory behind all this?
‎11-28-2022 07:47 AM
Yes, I understand your points. It is the derivative of the original data that I must fit to a Gamma variate (the model in theory), and I have revised the program to fit the derivative of the original data sampled at 1.5 seconds. Then I can use the fit parameters to generate the Gamma variate and get my increased time resolution. The goodness of fit is correct then, too. I have also considered integrating the Gamma variate model and fitting that to the original data, but have not tried that yet.
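For reference, a sketch of that approach in Python: fit a Gamma variate to sparse derivative data, then evaluate the fitted curve on a dense grid. The functional form below (a common parameterization with onset time `t0`, shape `alpha`, and scale `beta`) and all numeric values are assumptions for illustration; check the form against your theory reference.

```python
import numpy as np
from scipy.optimize import curve_fit

# One common Gamma-variate form: A * (t - t0)^alpha * exp(-(t - t0)/beta), zero before t0.
def gamma_variate(t, A, t0, alpha, beta):
    dt = np.clip(t - t0, 0, None)
    return A * dt**alpha * np.exp(-dt / beta)

# Hypothetical sparse "derivative of original data", sampled every 1.5 s.
rng = np.random.default_rng(1)
t = np.arange(0, 60, 1.5)
dydt = gamma_variate(t, 1.0, 5.0, 2.0, 4.0) + rng.normal(0, 0.05, t.size)

popt, pcov = curve_fit(gamma_variate, t, dydt, p0=[0.5, 4.0, 1.5, 3.0])

# Evaluate the fitted model on a dense grid for the increased time resolution.
t_fine = np.arange(0, 60, 0.01)
fit_fine = gamma_variate(t_fine, *popt)
```

The integrated-model alternative mentioned above would work the same way, just with the (analytically or numerically) integrated Gamma variate as the model function and the original data as the target.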
‎11-28-2022 07:50 AM
Yes, thanks. I see your point and have revised the program accordingly: fit the original data, then use the fit parameters to calculate the curve at a higher time resolution.
‎12-04-2022 09:33 AM
As I said, fitting with fewer than 10% of the points (25 instead of 360) will speed things up dramatically. In addition, you can probably also calculate the analytical derivatives inside the model, which will give you another significant speed boost.
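In SciPy terms, supplying the analytical derivatives means passing a Jacobian function via the `jac=` argument of `curve_fit`, so the solver skips the numerical finite-difference step. A minimal sketch with an assumed exponential model (not the actual model from this thread):

```python
import numpy as np
from scipy.optimize import curve_fit

def model(t, a, tau):
    return a * np.exp(-t / tau)

# Analytical Jacobian: one column per parameter, d(model)/da and d(model)/dtau.
def jac(t, a, tau):
    e = np.exp(-t / tau)
    return np.column_stack([e, a * t * e / tau**2])

rng = np.random.default_rng(2)
t = np.linspace(0, 30, 25)
y = model(t, 3.0, 6.0) + rng.normal(0, 0.01, t.size)

# jac= replaces the finite-difference derivative estimate with the exact one.
popt, _ = curve_fit(model, t, y, p0=[2.0, 4.0], jac=jac)
```

The analytical Jacobian is not only faster (no extra model evaluations per parameter) but also numerically cleaner, which can improve convergence near the solution.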