
Workaround for displaying a very long period of time in a chart.

Nope...no misconception here.

I was only wondering if you were somehow getting more information than you needed.

Just trying to figure out a workaround for, or the reasoning behind, your memory problems.

0 Kudos
Message 11 of 25
(1,992 Views)
Marc,

Your monitor cannot display more data points than your chart has pixels, so somewhere some compression or other processing takes place. LabVIEW, the OS, and the video driver may each handle having more data points than pixels differently. If you limit the number of points you write to the chart so that it does not exceed the number of pixels, it should not slow things down much.

The largest chart I can fit on my screen is just over 1500 pixels wide. If each of your 32 channels has one pixel per column on the chart you have 48,000 points. This is compared to the 552,960,000 samples you take over 96 hours (17,280,000 per channel). Regardless of what you want to see, those 17 million samples will be reduced to about 1500 pixels by the time it appears on the chart. Save all the data to a file (or multiple files) and display a reduced set.

I recommend that you figure out what characteristic of the data is most important for the user to see and display that. 1500 pixels spread over 96 hours results in one pixel every 3.84 minutes. Average or median? Min or max? Some other statistical measure? Perhaps two charts, one with the past hour and another with the entire run, would be appropriate. Or one chart could be configured to "zoom in" on some portion of the data under user control. To zoom in on old data would require reading some of the data back from the files.
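To make the arithmetic above concrete, here is a rough sketch in Python/NumPy (LabVIEW diagrams can't be pasted here, and the function name is hypothetical) of binning one channel down to one value per pixel, with the reducing statistic left as a choice:

```python
import numpy as np

def reduce_for_display(samples, n_pixels, reducer=np.mean):
    """Reduce a long 1-D record to one value per display pixel.

    reducer can be np.mean, np.median, np.min, np.max, etc.
    Trailing samples that don't fill a whole bin are dropped.
    """
    bin_size = len(samples) // n_pixels
    trimmed = samples[: bin_size * n_pixels]
    bins = trimmed.reshape(n_pixels, bin_size)
    return reducer(bins, axis=1)

# 96 hours at 50 S/s on one channel -> 17,280,000 samples,
# reduced to 1500 points (one per pixel column)
data = np.random.randn(17_280_000)
display = reduce_for_display(data, 1500, reducer=np.mean)
```

Swapping `reducer` between `np.mean`, `np.min`, and `np.max` answers the "average or min/max?" question experimentally: plot each and see which view the user actually needs.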

Lynn
Message 12 of 25
(1,994 Views)
Thank you. I have seen information regarding this; I need to look into it more. I'm pretty sure there are VIs that deal with what you are describing, but I forget what they're called. I just wish there were a way for me to keep a running chart that was only saved as a graphic, so that I wouldn't have to program around it.
0 Kudos
Message 13 of 25
(1,988 Views)
Here's another question. For a waveform chart, the history length is given in waveforms. Let's say I set this to 1000. Does this take my scan rate into account? Does it check the scan rate of whatever is connected to it before running, in order to determine how much memory to allocate?
0 Kudos
Message 14 of 25
(1,983 Views)
Marc,

LabVIEW will not examine your data acquisition settings before running your code in order to size the memory allocation. For waveforms wired into a chart, I believe the linked knowledgebase article addresses this question for you. I hope this is useful. Thanks,

Mike D.
0 Kudos
Message 15 of 25
(1,967 Views)
Marc,

There is no trivial way to do what you want to do in LabVIEW (or most other languages, for that matter).  However, it is possible with a bit of care.  If you write your subVIs correctly, it will be fairly easy to maintain.  There are several points you will have to address.
  1. You want to use the Waveform Graph, not the Waveform Chart.  This will allow you total programmatic control over your waveform buffer.  Each time you want to display, you write the entire plotted set of data to the graph.  Note that you will be plotting a decimated version of your data, but that decimated version will change every time you add data to your buffer.
  2. You need to create an efficient waveform buffer which keeps a single copy of your data and allows you to decimate it without producing lots of extra copies.  Depending on your needs, this could be a LabVIEW 2 global, a single-element queue containing an array, or a disk buffer.
  3. When you display, decimate the data so that you plot no more than two or three times as many points as the pixel width of your graph.
This is an advanced topic, so don't be dismayed if it seems a bit much.  Read the tutorial on handling large data sets in LabVIEW (Managing Large Data Sets in LabVIEW) for examples of most of the concepts I mentioned above.  If you still need more help, let us know.
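Since LabVIEW block diagrams can't be shown in a text post, here is a loose Python sketch of the single-element-queue buffer idea from point 2 (all names are hypothetical, and LabVIEW's in-place queue semantics are only approximated):

```python
import queue

import numpy as np

# A single-element queue holding the one master copy of the acquisition
# buffer, loosely mimicking LabVIEW's single-element-queue pattern:
# whoever dequeues the array has exclusive access until it is enqueued
# again, so concurrent callers cannot force extra full copies.
buf = queue.Queue(maxsize=1)
buf.put(np.empty(0))

def append_samples(new_block):
    data = buf.get()                      # take exclusive ownership
    data = np.concatenate([data, new_block])
    buf.put(data)                         # hand the buffer back

def decimated_view(n_points):
    data = buf.get()
    step = max(1, len(data) // n_points)  # keep roughly n_points samples
    view = data[::step].copy()            # small copy, for plotting only
    buf.put(data)
    return view

append_samples(np.arange(10_000.0))
plot_data = decimated_view(1500)
```

Only the small decimated array is ever copied out for display; the full record stays in one place, which is the point of the pattern.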
Message 16 of 25
(1,943 Views)

I think you pasted the wrong link. You linked to this thread.

I didn't think a chart would be that smart, but then how would it figure out how much memory to allocate? If the history length is 10 waveforms, but it doesn't know how many data points are in a waveform, it won't be able to figure out how much to allocate. Right?

0 Kudos
Message 17 of 25
(1,939 Views)
The link works fine for me...
 
And I am a little confused, much like Marc appears to be, about the selection of charts/graphs for this type of problem.
0 Kudos
Message 18 of 25
(1,931 Views)

Thanks DFGray. However, I do need a chart because I'm using stacked plots. I realize what I have to do; I'm just figuring out how to do it now. The hard part is that a test could be as short as 1 minute or as long as 4 days. Each waveform is 1 second long, so sometimes a waveform might be ~20 pixels wide, and sometimes it won't even fill 1 pixel. If a waveform is only a pixel wide, that pixel has to represent the minimum and maximum values during that period of time.
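The min/max-per-pixel requirement can be sketched like this in Python/NumPy (illustrative only; in LabVIEW the equivalent would be a decimation subVI feeding two plots per channel):

```python
import numpy as np

def minmax_envelope(samples, n_pixels):
    """Collapse a long record into per-pixel (min, max) pairs, so a
    pixel that covers many 1-second waveforms still shows the full
    excursion of the signal during that interval."""
    bin_size = max(1, len(samples) // n_pixels)
    n_bins = len(samples) // bin_size          # may slightly exceed n_pixels
    bins = samples[: n_bins * bin_size].reshape(n_bins, bin_size)
    return bins.min(axis=1), bins.max(axis=1)

# A 4-day test produces millions of samples per channel; plotting lo
# and hi as two overlaid plots makes each pixel column span the true
# range of the data it represents.
lo, hi = minmax_envelope(np.sin(np.linspace(0, 100, 1_000_000)), 1500)
```

For a 1-minute test `bin_size` collapses to 1 and the envelope degenerates to the raw data, so the same code covers both extremes of test length.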

This is turning out to be a lot more work than I want it to be, especially since the program is done and working. Now they just want it to work for long tests. Thanks for the help though, I'll read that article.

0 Kudos
Message 19 of 25
(1,930 Views)
Steve, I meant the link Duffman posted. DFGray replied as I was writing, sorry for the confusion.
0 Kudos
Message 20 of 25
(1,924 Views)