LabVIEW Idea Exchange

Mads

Decimation feature built into the graph indicators

Status: New

Create an XY Graph and feed it a time-stamped XY plot with some hundred thousand points...and you have yourself a very sluggish and possibly crash-ready application. The regular graph can take a bit more data, but still has its limits. Having 100k points to display is quite common (in my case it is most often months of 1-second data).

 

The idea could be formulated as just "improve how graphs handle large data sets"...but how that would be done depends a bit on which optimizations the graph code is open to. The most effective solution, however, would probably be to do what you currently have to write yourself: surrounding decimation logic.

 

So my suggestion is to add a built-in decimation feature, where you can choose to have it applied automatically when needed - or when you say it is needed - and possibly with a few different decimation methods (min-max, every Nth point, etc.). The automatic mode should be on by default, making the problem virtually invisible to the novice user.
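To make it concrete, here is a rough textual sketch (in Python, since the diagram itself is graphical) of the kind of min-max decimation you currently have to wire around the graph yourself. The function name and the point budget are just placeholders for illustration, not a proposed API:

```python
# Illustrative min-max decimation; assumes the input is already sorted by X.
# decimate_min_max and max_points are placeholder names, not an NI API.

def decimate_min_max(x, y, max_points=2000):
    """Reduce (x, y) to roughly max_points while keeping the visual envelope.

    The data is split into buckets; from each bucket only the minimum and
    maximum Y values (with their X positions) are kept, so peaks and dips
    survive the reduction.
    """
    n = len(x)
    if n <= max_points:
        return list(x), list(y)

    n_buckets = max_points // 2          # each bucket contributes 2 points
    bucket_size = n / n_buckets
    xd, yd = [], []
    for b in range(n_buckets):
        start = int(b * bucket_size)
        stop = min(int((b + 1) * bucket_size), n)
        if start >= stop:
            continue
        ys = y[start:stop]
        i_min = start + ys.index(min(ys))
        i_max = start + ys.index(max(ys))
        # keep the two extremes in chronological order
        for i in sorted((i_min, i_max)):
            xd.append(x[i])
            yd.append(y[i])
    return xd, yd
```

The point of min-max per bucket (rather than plain every-Nth-point decimation) is that spikes and dips stay visible after the reduction.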

 

A big advantage of doing it within the graph is that it will (should) integrate fully with the other features of the graph - like zooming, cursors etc.

7 Comments
JÞB
Knight of NI

NOOOOOO!

 

"--------------------------------------------------------------------------\

 

Create an XY Graph and feed it a time stamped XY plot with some hundred thousand points...and you have yourself a very sluggish and possibly crash-ready application

---------------------------------------------------------------------------------

\

Is expected behavior.  NEVER lose data, NEVER infer what the programmer menat to say, do what I said not what I meant to say. I expect to find my own bugs and learn to manage large data sets within the application I am running.

 


"Should be" isn't "Is" -Jay
Mads
Active Participant

Jeff,

 

1. First of all, this would be an option, so your argument about expected behaviour is not valid. If you prefer your expected behaviour of a crash, then feel free to have one and build lots of code to manage it, instead of having a checkbox that gets it handled automatically (when needed) and seamlessly integrated with the graph's other features. (I would prefer the feature to let us programmatically set a threshold; that limit would decide at what number of data points the algorithm kicks in.)

 

2. Secondly, you are already not seeing all the data you feed to the graph, and that is with data sets that are seemingly handled OK. The display resolution limits how many points can actually be drawn. Unfortunately, this reduction is currently done at a level that gives you the reduction, but not the ability to run smoothly. I am suggesting that it is moved up one level, so that it is (optionally) handled in a crash-free way within the graph itself.

 

3. LabVIEW does not necessarily aim to please the programmer who likes to find his own bugs. Would a scientist or engineer from another discipline *expect* LabVIEW to crash - and be glad it did not give them an option to avoid writing surrounding code to achieve what they need to do? I think not. (Come to think of it, I'm one of those too, even though I appreciate a good bug hunt! It's about access to functionality, and not spending time on things that we could get for free once NI has done it for us!)

 

4. LabVIEW, like most technology, aims to exceed your expectations. ;) Ten years ago you would have expected the graph to crash with just a fraction of the points it can handle now. For LabVIEW to succeed it has to offer you something even better tomorrow - and preferably even before progress on the hardware side allows it to do so (in a lazy fashion)...

Brian_Powell
Active Participant

I think this needs a bit more explanation.

 

I think you are talking about just decimating data for display.  If you put the graph on the connector pane and called this VI as a subVI, would you want all the data or the decimated data?  I suspect you want all the data, but please confirm.

 

It would be interesting to break down the performance problem into whether it's a drawing issue or a data copying issue.

 

The last time I looked at the graph code (a few years ago, so things might have changed), we decimated the data when we rendered the graph.  So if 1000 points land on the same pixel (or line), we only draw the pixel (or line) once.  So we try to avoid having the drawing be the slow part, but that doesn't mean we've succeeded.
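In spirit, that per-pixel reduction is something like the following sketch. This is only an illustration of the idea, not the actual graph code; the function and parameter names are made up:

```python
# Rough sketch of per-pixel decimation at render time: many data points that
# map to the same screen pixel get drawn only once.

def points_to_draw(x, y, x_min, x_max, y_min, y_max, width_px, height_px):
    """Return the unique set of (column, row) pixels the plot would touch.

    Assumes x_max > x_min and y_max > y_min.
    """
    drawn = set()
    for xi, yi in zip(x, y):
        if not (x_min <= xi <= x_max and y_min <= yi <= y_max):
            continue                      # outside the visible area
        col = int((xi - x_min) / (x_max - x_min) * (width_px - 1))
        row = int((yi - y_min) / (y_max - y_min) * (height_px - 1))
        drawn.add((col, row))             # duplicates collapse here
    return drawn
```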

 

If you replaced the graph with an array and ran the VI, is it still slow?  If so, then the issue is with data copying.  In that case, I'd suggest decimating the data yourself on the diagram, rather than have an indicator do it.

 

Finally, the issue might be with the way data is handled internal to the graph--e.g., perhaps we aren't decimating efficiently between the front panel operate data and what gets drawn on the screen.

 

Mads
Active Participant

Yes, I am only talking about the display.

 

If you made a local variable of the graph and read it somewhere, you should still get the full data set; otherwise you could have just used a regular decimation function on it prior to feeding it into the graph.

 

There is no problem if the data is fed to an array indicator, only if it is in a graph.

Mads
Active Participant

Not that anyone should ever use a local with such a large array. The point is that the data is still there, and the graph will "feed" on it when it needs new data points to display, e.g. when the user has zoomed in.
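Conceptually, I picture something like this happening on a zoom. This is a purely hypothetical sketch (visible_slice is a made-up name, and it reuses the min-max decimation sketch from the idea post):

```python
import bisect

def visible_slice(x, y, x_left, x_right):
    """Return only the points inside the current zoom window (x assumed sorted)."""
    lo = bisect.bisect_left(x, x_left)
    hi = bisect.bisect_right(x, x_right)
    return x[lo:hi], y[lo:hi]

# On a zoom event the graph could do something like:
#   xv, yv = visible_slice(x_full, y_full, axis_x_min, axis_x_max)
#   xd, yd = decimate_min_max(xv, yv)    # reuse the min-max sketch above
#   redraw(xd, yd)
```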

ChrisLudwig
Member

I really like this idea, but I had a pretty different idea of how it should be implemented. I therefore created my own idea. Feel free to review that idea and upvote it or comment if you like it. I have no idea how to handle a local variable in my case, as the graphed data would actually be an array of X and Ymin/Ymax pairs. So data could flow into a local in one format (raw data), but a read from the local would be in another format (enveloped data). So, that would certainly be strange.
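Roughly, the two shapes would look like this (purely illustrative; the field names are made up):

```python
# The raw shape written to the local vs. the enveloped shape it would read back as.

raw = {
    "x": [0.00, 0.01, 0.02, 0.03],   # timestamps
    "y": [1.2, 5.7, -0.3, 2.4],      # one sample per timestamp
}

enveloped = {
    "x":     [0.015],                # one representative X per bucket
    "y_min": [-0.3],                 # lowest Y in that bucket
    "y_max": [5.7],                  # highest Y in that bucket
}
```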

 

https://forums.ni.com/t5/LabVIEW-Idea-Exchange/Large-Datasets-XY-Graph-with-autobuffering-and-displa...

Mads
Active Participant

I really do not care much about how the improved performance of XY graphs is implemented; that is up to NI to choose. We should simply be able to wire the data into an XY graph and have it handle it well even though the data set is large. Today they become sluggish very quickly. You can already get proper (read: much, much better) performance today if you use a .NET alternative. The native graphs should (of course) be (at least) just as good.

 

Your request for a way to handle incremental changes to the graph data is good (instead of having to write the full set each time...although better performance would alleviate the issue somewhat) - but I would then prefer to just have it as a mode for the XY graph where you say whether it should accumulate the data...and then perhaps have some methods and properties to manage it. Charts already add new data like this, so you could say it would be like an XY chart (which is cumbersome to simulate with today's chart), but instead of actually having such a chart it would just be a mode for the XY graph.
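As a sketch of what such an incremental mode could keep internally - purely hypothetical, the class and parameter names are made up, and it reuses the decimate_min_max sketch from the idea post:

```python
class IncrementalXYBuffer:
    """Hypothetical buffer behind an 'XY chart'-style mode of the XY graph."""

    def __init__(self, redraw_every=1000, max_display_points=2000):
        self.x, self.y = [], []
        self.redraw_every = redraw_every
        self.max_display_points = max_display_points
        self._since_redraw = 0

    def append(self, new_x, new_y):
        """Add new points; return a decimated copy when a redraw is due, else None."""
        self.x.extend(new_x)
        self.y.extend(new_y)
        self._since_redraw += len(new_x)
        if self._since_redraw >= self.redraw_every:
            self._since_redraw = 0
            # decimate_min_max is the min-max sketch from the idea post
            return decimate_min_max(self.x, self.y, self.max_display_points)
        return None
```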