ha noi,
I am a little confused as to what is going on in the VI you posted, but I think I understand what you are trying to achieve based on the visual description and original image you posted. I will see if I can put together an example that you can adapt for your application, but I'm not totally sure about a few of the finer details. For example, are you always reading data in a constant chunk size (such as 12 elements per read, as in your explanation picture)?
Also, supposing there is a constant chunk size of 12:
On the first "period" where each bar represents one data point, a chunk is going to add 12 bars to the graph.
When each bar is representing 2 data points, each chunk will add 6 bars.
When bars are being scaled by 4, each chunk will add 3 bars.
But what happens after that? When we start scaling by a factor of 8, a chunk size of 12 isn't evenly divisible: each chunk represents 1.5 bars of the graph (12 / 8). So what I was thinking is that you would use the average of the first 8 data points for the first bar, and then the remaining 4 data points to make up a second bar. When the next chunk of data is read, the bars would be recalculated so that there would be 3 bars (24 total data points / 8 = 3 bars). It would continue this way until the next scaling was made to combine 16 points per bar.
When this happened, things would continue in the same way. The first chunk would still only be 12 points, so it would make up one bar. After the second chunk there would be 24 data points, so a total of 2 bars (16 in the first bar, 8 in the next).
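To make that concrete, here's a rough Python sketch of the binning part (just because it's easier to type out than a block diagram; the function name and example values are placeholders I made up, not anything from your VI). The idea is that every "scale" consecutive points get averaged into one bar, and a leftover partial group still becomes a bar:

```python
def points_to_bars(points, scale):
    """Average every 'scale' consecutive points into one bar.
    A leftover partial group still becomes a bar (average of what's there)."""
    bars = []
    for start in range(0, len(points), scale):
        group = points[start:start + scale]
        bars.append(sum(group) / len(group))
    return bars

# Example with a chunk size of 12 and a scale factor of 8:
# 12 points -> 2 bars (first bar averages 8 points, second averages the last 4).
# After a second chunk, 24 points -> exactly 3 full bars.
print(points_to_bars(list(range(12)), 8))   # 2 bars
print(points_to_bars(list(range(24)), 8))   # 3 bars
```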
This method would be fairly scalable, I think, and wouldn't require a constant chunk size. The only thing is that all of the raw data for the second half of the graph would need to be stored, with the bars recalculated each iteration until the next "divide down" is required. Once that data has been scaled into the first half of the graph, the raw data could be discarded and only the bar values for the first half would need to be stored.
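And here's a sketch of the bookkeeping I had in mind for that, reusing the points_to_bars helper from the sketch above. Again, the class name and the graph width of 100 bars are just assumptions for illustration; in LabVIEW you would do this with a while loop and shift registers (or a functional global) rather than a class:

```python
class ScrollingBarHistory:
    def __init__(self, graph_width=100):
        self.graph_width = graph_width   # max bars the graph can show (assumption)
        self.first_half_bars = []        # bars that are already "locked in"
        self.raw_points = []             # raw data behind the second-half bars
        self.scale = 1                   # raw points represented by each bar

    def add_chunk(self, chunk):
        self.raw_points.extend(chunk)
        # Recalculate the second-half bars from the stored raw data each iteration.
        second_half = points_to_bars(self.raw_points, self.scale)

        if len(self.first_half_bars) + len(second_half) >= self.graph_width:
            # Divide down: everything currently on the graph becomes the new first
            # half at double the scale, and the raw data can then be discarded.
            self.first_half_bars = (points_to_bars(self.first_half_bars, 2) +
                                    points_to_bars(self.raw_points, self.scale * 2))
            self.raw_points = []
            self.scale *= 2
            second_half = []

        return self.first_half_bars + second_half   # the values to plot
```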
So, any further information or clarification would definitely be helpful!