12-16-2010 07:35 AM
Hello,
I have a problem:
in my VI there are about 10 XY graphs on screen.
The problem is that after some time (and a number of registered points), the sample time I set at the beginning of the test starts to increase, because the CPU is not able to process and refresh, in another while loop, the amount of points in the graphs.
I verified this: when I hide the graphs on screen, the sample time returns to its original value.
My question is this:
is it possible in LabVIEW to dedicate a fixed part of the CPU to a particular while loop,
in order to give that loop priority and guarantee it is executed under any condition?
12-16-2010 07:49 AM
Hi Lesterino,
your problem is most likely not the priority of a while loop, but the amount of data collected and displayed in a graph...
If you limit the amount of data, your timing problems will vanish! It's senseless to display more than ~1000 points in a graph!
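To make the idea concrete (Python standing in for the block diagram, since LabVIEW is graphical; the function name and the simple striding strategy are my own), thin the data to a fixed budget of points before wiring it to the graph:

```python
def decimate(xs, ys, max_points=1000):
    """Thin a plot to at most max_points samples by keeping
    every k-th point, so the graph never grows past the budget."""
    n = len(xs)
    if n <= max_points:
        return xs, ys
    step = -(-n // max_points)   # ceiling division
    return xs[::step], ys[::step]

# 10 hours of 1 s samples (36000 points) shrink to 1000 on screen
xs = list(range(36000))
ys = [0.5 * x for x in xs]
small_x, small_y = decimate(xs, ys)
```

A plain stride loses detail in bursty signals; a min/max-per-bucket variant preserves peaks, but the point is the same: the graph only ever holds a bounded array.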
12-16-2010 08:04 AM
You can control which CPU the code runs on by placing it inside a Timed Sequence structure and using that to dictate which CPU it executes on.
Updating the UI at greater than 30 Hz does not make sense. If you are updating it more often than that, reduce the update rate.
You can "defer front panel updates" before you update a graph or chart, then undefer afterwards (search this site for Defer.FPUpdate or something like that).
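The 30 Hz cap is a general UI pattern, not LabVIEW-specific. As a hedged sketch (Python standing in for the block diagram, all names mine): consume data as fast as it arrives, but redraw at a fixed maximum rate.

```python
import time

def run_display_loop(get_data, draw, max_rate_hz=30.0, duration_s=0.3):
    """Consume data as fast as it arrives, but call draw() at most
    max_rate_hz times per second (hypothetical callbacks)."""
    min_interval = 1.0 / max_rate_hz
    last_draw = -min_interval    # ensures the very first sample is drawn
    draws = 0
    start = time.monotonic()
    while time.monotonic() - start < duration_s:
        data = get_data()                  # data may arrive far faster than 30 Hz
        now = time.monotonic()
        if now - last_draw >= min_interval:
            draw(data)                     # redraw only at the capped rate
            last_draw = now
            draws += 1
    return draws

# The acquisition side runs flat out; draws stays near 30 Hz * 0.3 s.
count = run_display_loop(get_data=lambda: 42, draw=lambda d: None)
```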
Ben
12-16-2010 10:54 AM
Lesterino wrote: is it possible in LabVIEW to dedicate a fixed part of the CPU to a particular while loop,
in order to give that loop priority and guarantee it is executed under any condition?
As Ben mentioned, timed loops can be assigned to a specific CPU core. This also means you need a multicore CPU. You cannot dedicate "part" of a CPU to a certain task. Also, be aware that on a multipurpose OS, nothing is guaranteed. If you need a truly deterministic loop, you need LabVIEW RT.
In my opinion, you are looking in the wrong place for a solution to your problem. If you hammer 10 XY graphs with near-infinite amounts of ever-growing data, you are simply not doing it right. Most likely, you have constant memory reallocations of your data structures. What good are graphs if they should not be updated because other parts of the code need the CPU?
Are all graphs visible at the same time? How big are they?
How much data is in the xy graphs? Can you show us some code?
12-16-2010 05:11 PM
Hello Altenbach, Ben and GerdW
thanks for your answers.
I know about the possibility of dedicating a core of a multicore CPU to a timed loop, but I was hoping for a more general solution.
I cannot clear the graphs: I need to see all the points from 0 to x hours.
I have 12 XY graphs in a full-screen window; the window contains only the graphs. The graphs have no grids and only one series of points each.
I need to record for many hours. I have a sample time of 1 s; after only 1 hour the real sample time increases to 1.8 to 2 seconds.
I have an Intel 3.2 GHz CPU and 2 GB of RAM, but I tested this condition on other PCs with the same results.
I am sure you will tell me that there are far too many samples, but I ask you: what is the best solution?
Probably, for this type of application, after a considerable number of samples it would be fine if LabVIEW put an image in place of the past points, or something like that. In that case LabVIEW would not process any past data but only display, for example, a JPG on the left side of the graph area! This is only an extreme idea for the next release! 🙂
12-17-2010 06:53 AM
@Lesterino wrote:
I need to see all the points from 0 to x hours. [...] I am sure you will tell me that there are far too many samples, but I ask you: what is the best solution?
Your question is one of the classic LV questions ("There is nothing new under the sun." Ecclesiastes?) and has been answered many times already. In fact, I have a Tag Cloud devoted to LabVIEW performance-related threads that can be found here.
Of particular interest is Dr Grey's white paper on optimizing for large data sets (search this site for "Large Data Sets"), where he discusses a number of techniques that can help you.
If after reviewing those threads you still have trouble, post back and we will take it from there.
Ben
12-17-2010 08:22 AM - edited 12-17-2010 08:23 AM
Thanks for the kind words, Ben.
Thanks for the kind words, Ben.
As has been mentioned before, plotting more points to a graph than you have pixels to display them is rather pointless. However, you are using an XY graph, not a waveform graph, so decimating this data for display differs a bit from the algorithm and code given in the Managing Large Data Sets in LabVIEW tutorial.
To decimate for XY: at each point, determine whether the next point will go outside the current pixel location. If so, add it to the plot; if not, drop it. You can also cull any points that are offscreen due to zooming. This is most easily done by adding the first point that is offscreen, then adding a NaN data set to break the graph, then adding the point before the first next point that is on-screen. This ensures your lines will be drawn correctly onto the screen. Check for single points going offscreen, and do not add the NaN set in that case.
I have attached a VI set that implements this (LabVIEW 8.2.1). The top-level VI is xyDec_DecimateForXYPlot.vi. The inputs are a reference to the XY graph and the X and Y arrays of data. The outputs are the original inputs and the decimated data. Two notes on the code:
Decimation of unevenly sampled waveforms plotted on an XY graph is left as an exercise for the reader.
A variant of the standard waveform decimation in the tutorial that uses time instead of number of points to determine the decimation interval works well; I may write a post on this early next year.
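Since the VI attachment cannot be reproduced here, this is a hedged Python sketch of the per-pixel rule described above (the pixel mapping is my own choice, and the NaN breaks for offscreen runs are omitted for brevity):

```python
import math

def decimate_xy(xs, ys, x_min, x_max, width_px):
    """Keep a point only when it moves to a new horizontal pixel.

    Mirrors the rule in the post: if the next point stays inside the
    current pixel column, drop it; if it leaves, keep it. Vertical
    pixel checks and the offscreen NaN breaks are not shown.
    """
    if not xs:
        return [], []
    scale = width_px / (x_max - x_min)
    out_x, out_y = [xs[0]], [ys[0]]
    last_px = math.floor((xs[0] - x_min) * scale)
    for x, y in zip(xs[1:], ys[1:]):
        px = math.floor((x - x_min) * scale)
        if px != last_px:      # point lands in a new pixel column: keep it
            out_x.append(x)
            out_y.append(y)
            last_px = px
    return out_x, out_y

# 10000 one-second samples drawn on a 500-pixel-wide graph:
# the output holds at most one point per pixel column.
xs = list(range(10000))
ys = [math.sin(t / 100.0) for t in xs]
dec_x, dec_y = decimate_xy(xs, ys, 0.0, 10000.0, 500)
```

The key property is that the output size is bounded by the graph width in pixels, no matter how many hours of samples go in.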
Good luck!
12-17-2010 10:27 AM
Another option would be to map the data into a 2D array and show it as an intensity graph. Now the memory footprint is constant and you can draw an infinite number of points over time.
Decide on a useful x and y resolution, initialize a 2D array of zeroes, then replace elements with 1 as data appears. For multiple plots, replace with other integers, one for each plot.
As a starting point, have a look at my old example discussed here. Your situation is even simpler, because you would use a constant z and not a 2D histogram where bins accumulate over time.
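A hedged Python sketch of that mapping (the dimensions and names are arbitrary choices of mine): a fixed-size 2D array whose cells are overwritten, so memory never grows no matter how long the test runs.

```python
def make_intensity_map(width, height):
    """Fixed-size 2D array of zeroes; its memory footprint never grows."""
    return [[0] * width for _ in range(height)]

def plot_point(grid, x, y, x_range, y_range, plot_id=1):
    """Map one (x, y) sample to a cell and mark it with plot_id.

    Out-of-range samples are ignored. Different plots would use
    different integer ids, as suggested in the post.
    """
    height, width = len(grid), len(grid[0])
    x_min, x_max = x_range
    y_min, y_max = y_range
    col = int((x - x_min) / (x_max - x_min) * (width - 1))
    row = int((y - y_min) / (y_max - y_min) * (height - 1))
    if 0 <= col < width and 0 <= row < height:
        grid[row][col] = plot_id

# Hours of 1 s samples cost no extra memory: cells are just overwritten.
grid = make_intensity_map(640, 480)
for t in range(36000):
    plot_point(grid, t % 3600, (t * 7) % 100, (0, 3600), (0, 100))
```

The trade-off versus an XY graph is that you lose the exact sample values and draw order; you only see which cells were ever hit.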
12-20-2010 02:18 AM
Hello Dr Grey,
I think your attachment is very interesting.
How can I use it with an XY graph that has two (or more) signals?
In this case I have the same X but different Y.
12-20-2010 09:27 AM
Another option would be to map the data into a 2D array and show it as an intensity graph. Now the memory footprint is constant and you can draw an infinite number of points over time.
It's a possibility, but how can I use it instead of an XY graph?
I don't understand where I need to put the X and Y arrays.
thanks