02-02-2016 05:06 AM
Maybe someone could explain to me the behaviour I observe when using the "Autoscale Delay" property of a graph or chart in LabVIEW.
According to the help, it should set the time delay before a scale contraction takes place but have no effect on a scale expansion.
It seems to work, kind of, but weirdly. An example is included. If we monitor the max and min values over time, we see that we mostly get normal expansions and contractions. I have even added a "0" constant to my display in order to remove the lower value from the autoscale equation.
If, however, I decrease the magnitude of the data beyond some unknown delta, then the graph delays its autoscale as expected. What kind of comparison is being done in the background such that small deltas are NOT affected by the delay (and this is actually the case I want to manage) whereas larger ones are?
Example: if I change the scale of my random data from 8 to anywhere between 7.0 and 8, it doesn't seem to wait at all before contracting the scale, whereas if I change the scale from 8 to 6.9 it waits. Thinking it might be linked to the values being in single digits, I tried changing the scale from 1000 to 920; the scaling was performed immediately (ignoring the delay setting). Changing to 800 waited.
A change in the data scale from 1000 to 980 also results in an immediate scale contraction, even though the new maximum is BELOW the current maximum of the scale.
Doing the same thing but going from 1000 to 800 causes the autoscale contraction to wait as advertised.
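To lay the numbers out side by side, here is a throwaway Python sketch (not part of my VI). It just computes the ratio of the new data maximum to the old one for each case I tried, assuming the old scale maximum equals the old data maximum, which LabVIEW's "nice number" rounding may well break:

```python
# Observed cases: (label, old data max, new data max, observed behaviour).
# "immediate" = scale contracted right away, "delayed" = Autoscale Delay was honoured.
cases = [
    ("8 -> 7.0",    8.0,    7.0,   "immediate"),
    ("8 -> 6.9",    8.0,    6.9,   "delayed"),
    ("1000 -> 980", 1000.0, 980.0, "immediate"),
    ("1000 -> 920", 1000.0, 920.0, "immediate"),
    ("1000 -> 800", 1000.0, 800.0, "delayed"),
]

for label, old_max, new_max, observed in cases:
    ratio = new_max / old_max  # how far the new maximum drops relative to the old one
    print(f"{label:13s} ratio = {ratio:.3f} -> {observed}")
```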
Is the function in the background doing only an approximate check using the exponent of the scale max and min, or what?
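Here is that guess made explicit as a Python sketch. This is pure speculation about what might be happening internally, not LabVIEW's actual code, and the function names are made up:

```python
import math

def same_order_of_magnitude(old_max: float, new_max: float) -> bool:
    """Hypothetical check: do both maxima share the same base-10 exponent?"""
    return math.floor(math.log10(abs(old_max))) == math.floor(math.log10(abs(new_max)))

def should_delay_contraction(old_max: float, new_max: float) -> bool:
    # Hypothesis: the delay is only honoured when the order of magnitude drops;
    # small contractions within the same decade are applied immediately.
    return new_max < old_max and not same_order_of_magnitude(old_max, new_max)

for old_max, new_max in [(8.0, 7.0), (8.0, 6.9), (1000.0, 920.0), (1000.0, 800.0)]:
    print(old_max, "->", new_max, "delay?", should_delay_contraction(old_max, new_max))
```

Run against my cases above, this naive version doesn't reproduce what I observe (it would delay 1000 to 920 but not 8 to 6.9), so if an exponent comparison is involved it must be more subtle than this.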
Shane