11-02-2021 11:28 AM
I have an application in which the samples for the Standard Deviation PtByPt VI arrive irregularly (time intervals vary by factors of two to four), but I want an averaging time that is roughly constant. The rate is unpredictable in advance.
Naively, this would require adjusting the sample length on the fly to match.
Fortunately, I don't need an accurate answer; the routine is just used to determine if the data has stabilized.
Does anyone know what will happen if the sample length is changed on the fly, without reinitializing the Standard Deviation PtByPt VI? Does the routine do approximately the right thing?
One can look at the block diagram of Standard Deviation PtByPt, but its workings are obscure.
11-02-2021 11:32 AM
Set the sample length to zero. The SD will then be calculated over all the available data, whatever the actual count.
Initialize when you need to start over.
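In Python terms (a minimal sketch of the behavior, not the actual block diagram; the class and method names are just illustrative), a zero sample length acts roughly like this cumulative point-by-point calculation:

import math

class CumulativeStd:
    # Rough analogue of Standard Deviation PtByPt with sample length 0:
    # the SD of everything seen since initialization, updated point by
    # point via Welford's algorithm.
    def __init__(self):
        self.initialize()

    def initialize(self):
        # Start over: all accumulated data are discarded.
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations

    def add(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return math.sqrt(self.m2 / self.n)  # population SD so far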
11-02-2021 11:49 AM
@Joel wrote:
Does anyone know what will happen if the sample length is changed on the fly, without reinitializing the Standard Deviation PtByPt VI? Does the routine do approximately the right thing?
There's no time factor involved at all. Standard Deviation PtByPt simply calculates the Standard Deviation of the samples.
So it does the right thing, just not what you want.
I don't think there's a standard way to calculate a standard deviation over time, but I could be wrong.
If you do know how you want to calculate this, we can help.
11-02-2021 11:52 AM
Unfortunately this doesn't work for my application. The routine is initialized when the data stream changes state, and there is an immediate large-scale drift in the data; I am trying to determine when this drift ends. If I keep all the data, this initial drift will never drop out entirely. Its contribution will eventually become small, of course, but only long after the drift has ended.
11-02-2021 12:14 PM
You can run the Mean PtByPt VI in a separate loop where you read the current value (either changed or unchanged) at regular intervals, e.g. using a local variable (ugh!).
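In Python (purely illustrative; read_latest stands in for reading the local variable, and the period and window are made-up numbers), the idea is that sampling at a fixed period makes a fixed-length buffer span a fixed time window:

import statistics
import time
from collections import deque

def monitor(read_latest, period_s=0.1, window_s=3.0):
    # Poll the latest value at a fixed period, so a fixed-length
    # buffer always spans (roughly) a fixed time window.
    buf = deque(maxlen=int(window_s / period_s))
    while True:
        buf.append(read_latest())          # changed or unchanged
        if len(buf) > 1:
            print(statistics.pstdev(buf))  # SD over the last ~window_s
        time.sleep(period_s)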
11-02-2021 12:28 PM
You can keep an array of all timestamps, and an array of all values. All in your own PtByPt VI, or a FGV, or in a class's private data, it doesn't really matter for the principle.
When you add a point, check (with the array of timestamps) if there are points that are 'obsolete', and remove them. Then, calculate the stdev on the remaining points with "Std Deviation and Variance.vi".
There's no way to (mis)use Standard Deviation PtByPt for this; its size is set at init.
I'm not sure if there is a way to do a weighted stdev, where larger dt values count more heavily. Not sure if that's what you're looking for.
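In Python (a minimal sketch; the window length and names are placeholders, and in LabVIEW the two arrays would live in your own PtByPt VI, FGV, or class data as described above):

import statistics

class TimeWindowedStd:
    # Keep every (timestamp, value) pair, drop the obsolete ones,
    # then compute the SD of whatever remains in the window.
    def __init__(self, window_s):
        self.window_s = window_s
        self.times = []
        self.values = []

    def add(self, t, x):
        self.times.append(t)
        self.values.append(x)
        # Remove points that have fallen out of the time window.
        while self.times[0] < t - self.window_s:
            self.times.pop(0)
            self.values.pop(0)
        # Stand-in for "Std Deviation and Variance.vi" on what remains.
        return statistics.pstdev(self.values) if len(self.values) > 1 else 0.0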
11-02-2021 01:21 PM
The time factor comes from the sample length. The VI maintains a queue, but it doesn't appear to have a mechanism to change the queue length except when it initializes.
I guess I am going to have to write something myself if I want a responsive routine.
11-03-2021 02:28 AM
@Joel wrote:
The time factor comes from the sample length. The VI maintains a queue, but it doesn't appear to have a mechanism to change the queue length except when it initializes.
I guess I am going to have to write something myself if I want a responsive routine.
The key point is that you can initialize multiple times, for example every 30 seconds. When you initialize, old data are discarded.
Of course, your code is responsible for tracking the elapsed time. You need to keep the time of the last initialization in memory (e.g. in a shift register or, better, in a feedback node). When the elapsed time exceeds your preset interval, initialize again and send the new value to the SD PtByPt VI.
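A minimal Python sketch of that pattern (sd_ptbypt is a hypothetical stand-in for the Standard Deviation PtByPt VI, 30 s is just the example preset, and a local variable here plays the role of the feedback node):

import time

REINIT_PERIOD_S = 30.0  # example preset from above

def process(samples, sd_ptbypt):
    last_init = time.monotonic()  # remembers the last initialization
    for x in samples:
        if time.monotonic() - last_init >= REINIT_PERIOD_S:
            last_init = time.monotonic()
            sd = sd_ptbypt(x, initialize=True)   # old data discarded
        else:
            sd = sd_ptbypt(x, initialize=False)
        print(sd)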