LabVIEW Idea Exchange

ChrisLudwig

Large Datasets XY Graph (with autobuffering and display envelope decimating)

Status: New

I am extending an old idea, but the implementation is different from the OP's, so I made this a new idea:

https://forums.ni.com/t5/LabVIEW-Idea-Exchange/Decimation-feature-built-into-the-graph-indicators/id...

 

What I would want is an XY graph with automatic disk buffering and on-screen decimation.  Imagine this: I drop a super duper large datasets XY graph.  Then I start sending data in chunks of XY pairs to the graph (updating the graph at 10 Hz while acquisition is running at 5000+ Hz).  We are acquiring lots of high-rate data.  The user wants to see XX seconds on screen, probably 60 seconds, 120 seconds, or maybe 10 or 30 minutes, whatever.  That standard plot width in time is defined as a property of the plot.  So now data flows in and is buffered to a temp TDMS file on disk, with only the last XX seconds of data showing on the graph.  The user can specify a file location for the plot buffers in the plot properties (read-only at runtime).
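
A minimal sketch of that buffering idea, in Python rather than LabVIEW, just to make the mechanics concrete: full-rate XY pairs are appended to a disk buffer while only the last XX seconds stay in RAM for display. The class, file handling, and names here are illustrative assumptions, with a plain binary file standing in for the temp TDMS file.

```python
import numpy as np

class BufferedXYGraph:
    """Hypothetical helper: append full-rate data to disk, keep a rolling
    display window in RAM (the 'standard plot width in time')."""

    def __init__(self, buffer_path, window_s=60.0):
        self.window_s = window_s                # seconds shown on screen
        self.disk = open(buffer_path, "ab")     # stand-in for the temp TDMS file
        self.t_ram = np.empty(0)
        self.y_ram = np.empty(0)

    def append(self, t_chunk, y_chunk):
        # 1) persist the full-rate chunk to the disk buffer
        np.column_stack((t_chunk, y_chunk)).astype(np.float64).tofile(self.disk)
        # 2) keep only the last window_s seconds in RAM for the on-screen plot
        self.t_ram = np.concatenate((self.t_ram, t_chunk))
        self.y_ram = np.concatenate((self.y_ram, y_chunk))
        keep = self.t_ram >= (self.t_ram[-1] - self.window_s)
        self.t_ram, self.y_ram = self.t_ram[keep], self.y_ram[keep]
        return self.t_ram, self.y_ram           # what the graph would display
```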

 

We decimate the incoming data as follows:

  • Calculate the maximum possible pixel width of the graph for the largest single attached monitor
  • Divide the standard display width in time by the max pixel width to calculate the decimation interval
  • Buffer incoming data in RAM and calculate the min and max value over the time interval that corresponds to one pixel width.  Write both the full-rate data to the temp TDMS and the time-stamped min and max values at the decimation interval (a rough sketch of this min/max step follows this list)
  • Plot a vertical line filling from the min to max value at each decimation interval
  • Incoming data will always be decimated at the standard rate, with both the decimated data and the full-rate data saved to file
  • In most cases, the user will only watch data streaming to the XY graph without interaction.  In some cases, they may grab an X scroll bar and scroll back in time.  In that case the graph displays the previously decimated values so that disk reads and processing are minimized for the scroll-back request.
  • If the user pauses the graph update, they can zoom in on X.  In that case, the graph would rapidly re-zoom on the decimated envelope of data.  In the background, the raw data will be read from the TDMS and re-decimated for the current graph X range and pixel width, and the now less-decimated data will replace the prior decimated envelope on screen.  The user can carry on zooming in this manner until there is at least one vertical line of pixels for every data point, at which point the user sees individual points rather than an envelope between the min and max values.
  • Temp TDMS files are cleared when the graph is closed. 
  • The developer can opt to clear out the specified temp location on each launch in case a file was left on disk due to a crash.
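
To make the min/max step above concrete, here is a rough Python sketch (illustrative only; the function and variable names are mine, not an existing API). It bins the buffered samples into intervals of one pixel width in time and returns the time-stamped min and max per interval:

```python
import numpy as np

def minmax_envelope(t, y, window_s, max_pixel_width):
    """Decimate (t, y) to one min/max pair per pixel-width time interval."""
    dt = window_s / max_pixel_width                 # decimation interval (s)
    bins = np.floor((t - t[0]) / dt).astype(int)    # pixel column per sample
    t_dec, y_min, y_max = [], [], []
    for b in np.unique(bins):
        sel = bins == b
        t_dec.append(t[sel][0])                     # time stamp of the interval
        y_min.append(y[sel].min())                  # envelope bottom
        y_max.append(y[sel].max())                  # envelope top
    return np.array(t_dec), np.array(y_min), np.array(y_max)

# Each (t_dec[i], y_min[i])..(t_dec[i], y_max[i]) pair becomes one vertical
# line of pixels on screen; the full-rate data still goes to the TDMS buffer.
```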

This arrangement would allow unlimited zooming and graphing of large datasets without writing excessive data to the UI indicator or trying to hold excessive data in RAM.  It would allow a user to scroll back over days of data easily and safely.  The user would also have access to view the high rate data should they need to. 

 

14 Comments
JimChretz
Active Participant

I was thinking of making a library that would do exactly what you describe, but it's a lot of work, so yes, I hope NI makes it!

Mads
Active Participant

If LabVIEW 2025 gets just one new feature, I hope it is this one (hopefully it will have many more, but that would be a change from what we have seen in recent years). The need to compensate for the poor performance of the XY graph itself, by keeping a separate buffer for the full XY set and logic to feed new portions of that data into the display when zooming, is a fundamental issue developers should not have to deal with.

The functionality and look of the graphs and charts in general are core to the type of applications made in LabVIEW, so making them top-notch is important for the success of LabVIEW in general.

wiebe@CARYA
Knight of NI

Decimation on arbitrary (X, Y) graph data would be a waste of time (both for the CPU and the developer).

 

The data you're describing is actually (Time, Y) data. Then decimation makes sense and can greatly speed things up. I've implemented this (over and over again).

 

Knowing that X is actually Time, and thus consists of increasing values, is required to do proper decimation.

 

If X isn't guaranteed to have increasing values, you'd want to decimate in both X and Y directions, and that would usually make the decimation more expensive than the drawing. You should probably draw to an image buffer and use the image overlay to draw the buffer in the graph. Add only new data to the buffer, and redraw the entire buffer only when rescaling (or disallow rescaling).
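
A small sketch of that precondition, in Python for illustration (the helper names are assumptions, not any LabVIEW or NI API): decimate along X only when X is actually non-decreasing, otherwise fall back to drawing the raw data (e.g. into an image buffer).

```python
import numpy as np

def can_decimate_in_x(x):
    # X-direction decimation only makes sense when the "XY" data is really
    # (Time, Y) data, i.e. X is non-decreasing.
    return x.size > 1 and np.all(np.diff(x) >= 0)

# Hypothetical usage:
# if can_decimate_in_x(x):
#     t_dec, y_lo, y_hi = minmax_envelope(x, y, x[-1] - x[0], pixel_width)
# else:
#     draw_raw(x, y)   # placeholder for the image-buffer approach
```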

 

So I think we'd first need a (Time, Y) graph, or a property to signal this special characteristic of the data to an (X, Y) graph.

 

I guess this covers a thing or two (thanks Mads) Re: Why are charts and graphs so difficult? - NI Community 

Intaris
Proven Zealot

I agree with Wiebe that decimating a true XY dataset requires every action to access every datapoint* in order to find out if it has an element in the visible area (* aside from caching information on data regions).

 

Decimating a time series is far far easier computationally. I don't know about "more expensive than the drawing", but it's certainly a lot less benefit.

Mads
Active Participant

Decimation at one or multiple levels (one is at the rendering level, e.g. the one this article describes: https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z0000019YLKSA2&l=en-NO) is not really the topic here (that's just an implementation detail). The topic is how to get XY graphs that can handle larger data sets smoothly without making people address the issue with their own code over and over (as you say you have), and with the many added benefits of having the functionality integrated into the indicator itself (filling in new points when zooming, etc.).

I would love for the graphs (time-series graphs, if you prefer) to have an in-built navigator control item you could make visible, just like they have legends and cursors:

[image attachment: Mads_0-1718618596314.png (navigator control example)]

 

We have this in-house already as an XControl (and there is an example at ni.com as well, from a competition held many years ago, but I could not find it just now), but there are limitations to it that could be eliminated if the functionality were built into the indicator instead. Currently we have applications where this had to be solved by using a .NET control instead. LabVIEW should at least be able to match that performance with the in-built controls.

 

wiebe@CARYA
Knight of NI

>Decimating a time series is far far easier computationally. I don't know about "more expensive than the drawing", but it's certainly a lot less benefit.

 

I guess I should have said: "more expensive than only drawing"... If each (X, Y) point is drawn on a new pixel, the decimation doesn't help the drawing but costs a lot. So it will hurt, not help. Simply drawing the data will be less expensive.

 

Even for time series there are situations where decimating doesn't help and simply wastes resources.

 

If the (X, Y) graph (or, AFAIC, a new (T, Y) graph) gets built-in decimation (and I think it should), it should be optional. Maybe even switchable at run time. Even testing whether the data is eligible for decimation is expensive.
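
One way to make that decision optional and cheap, as a hedged sketch (the threshold value is an assumption, not a measured number):

```python
def decimation_worthwhile(n_points, pixel_width, threshold=4):
    # Only decimate when there are clearly more samples than pixel columns;
    # otherwise the min/max bookkeeping costs more than the drawing it saves.
    return n_points > threshold * pixel_width
```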

Intaris
Proven Zealot

If the linked LabVIEW article is supposed to suggest this is already built into LabVIEW, since which version would that be? Because I certainly see HUGE performance benefits of doing the decimation myself.

Mads
Active Participant

The linked article is just about the decimation used for plot rendering, and that has probably always been there; I just referred to it to illustrate how decimation *is* used without us noticing it.

The type of decimation we add ourselves to prevent the graphs from getting too sluggish (operating with two data sets: one that is actually fed to the indicator, and one that holds the full set and is used to repopulate the indicator when needed) is, as you have observed, obviously not in there already. If that were added to the internal logic of the graph, we should see those performance benefits.
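
For reference, that two-data-set pattern might look roughly like this in Python (a sketch under my own naming; `minmax_envelope` is the earlier illustrative function from this thread, not a LabVIEW or NI API):

```python
import numpy as np

class TwoBufferGraph:
    """Keep the full set in one buffer; feed the indicator only a decimated
    slice of the currently visible X range."""

    def __init__(self, t_full, y_full, pixel_width):
        self.t_full, self.y_full = t_full, y_full
        self.pixel_width = pixel_width

    def on_zoom(self, x_min, x_max):
        sel = (self.t_full >= x_min) & (self.t_full <= x_max)
        t_vis, y_vis = self.t_full[sel], self.y_full[sel]
        if t_vis.size <= self.pixel_width:        # few enough points: plot raw
            return t_vis, y_vis, y_vis
        return minmax_envelope(t_vis, y_vis, x_max - x_min, self.pixel_width)
```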

wiebe@CARYA
Knight of NI

>I certainly see HUGE performance benefits of doing the decimation myself.

 

That probably doesn't apply to XY Graphs, only to ('constant dT') Graphs. The article mentions that decimation is used in Graphs, and we all read that as applying to all graphs, including the XY Graph, but I'm sure it doesn't. Or it's done in a way that actually makes things slower. Or it's all made up, and there's no decimation at all.

 

A constant dT would be even easier to decimate than an increasing T... So I bet that if there's Graph decimation, it's only in plain Graphs, not XY Graphs.

 

EDIT: Closer examination seems to prove this. 7M random points in a graph is faster. 7M random points in an XY Graph is slow. >20M means trouble.

Intaris
Proven Zealot

I certainly don't think it's always been there. We are admittedly using an old version of LabVIEW, but our own implementation of decimation on a waveform graph with 8 datasets yields night-and-day results when using our decimation routine. We output at most 3x the pixel width of the visible area of the graph as data points (there's a reason for 3x). Our responsiveness is much, much better like this. So for waveform graphs in LV 2015, I don't see how there could be any sort of effective decimation going on; the performance without it is awful.
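
As an illustration only (the exact reason for the 3x factor is Intaris', not reproduced here), the point budget works out to something like this sketch:

```python
import numpy as np

def stride_for_target(n_visible, pixel_width, factor=3):
    # Keep at most `factor` x pixel-width points of the visible area.
    target = factor * pixel_width
    return max(1, int(np.ceil(n_visible / target)))

# e.g. 5,000,000 visible samples on a 1000-pixel-wide graph -> stride of 1667,
# leaving roughly 3000 points for the indicator to draw.
```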