08-31-2009 04:55 PM
OK, I've tracked down some of this behavior, finally.
The CPU usage is my own fault. Although I don't understand why popping up a selector changed it, it resulted from an old debugging file.
The debugging code, which performed an open / write-at-end / close sequence for every debugging line, was intended for a specific purpose where that behavior was fine.
Now it's being used more often, and doing that 6-7 times per second really eats up the CPU.
So I have changed to a cached debugger, which stores the debugging lines in memory and writes the file once at shutdown.
That has reduced the CPU usage to 2-4%, back where I would expect.
It has eliminated the "stalls" where the chart wouldn't scroll, the "tick" marks in the data, and the issues with gridlines not being drawn correctly.
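The cached-debugger fix is easy to see in text form. Here is a minimal sketch in Python (LabVIEW itself is graphical, so this is only an illustration of the pattern; the class name and file name are made up):

```python
import atexit

class CachedDebugLogger:
    """Buffer debug lines in memory; write the file once at shutdown."""

    def __init__(self, path):
        self.path = path
        self.lines = []
        atexit.register(self.flush)  # write everything out on shutdown

    def log(self, message):
        # No file I/O here -- just append to the in-memory cache.
        self.lines.append(message)

    def flush(self):
        if not self.lines:
            return
        # One open/write/close for the whole run, instead of one per line.
        with open(self.path, "a") as f:
            f.write("\n".join(self.lines) + "\n")
        self.lines = []

# Hypothetical usage:
dbg = CachedDebugLogger("debug.log")
dbg.log('Unit 0: received ""')
dbg.log('Unit 0: Received "MRAT 0,4,122,5"')
```

The design point is simply that the per-line cost drops to a list append; all the filesystem overhead is paid once instead of several times per second.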
I cannot make it do the overlapped drawing thing, either.
Although I have found my way out of this hole, I still consider it a LabVIEW bug, because there should be no way for a chart to display such anomalies.
Apparently, CPU usage elsewhere makes the chart more susceptible to this problem.
I was thinking that the graph errors were CAUSING the CPU usage, when it's really the other way round.
Blog for (mostly LabVIEW) programmers: Tips And Tricks
09-01-2009 07:07 AM
CoastalMaineBird wrote:OK, I've tracked down some of this behavior, finally.
...
So, I have changed to a cache debugger, where it stores the debugging lines and writes the file once when shut down.
That has reduced the CPU usage to 2-4%, back where I would expect.
It has eliminated the "stalls" where the chart wouldn't scroll, the "tick" marks in the data, and the issues with gridlines not being drawn correctly.
...
I suspect you are on to something now with the interaction of the file I/O with this issue. All of my apps log data so that is a common factor for my observations of this issue.
Additionally, I have never tried to recreate this issue while logging was running.
Ben
09-01-2009 07:20 AM
For me, the debugging log is an optional feature that is switched on and off. Only yesterday did I realize that logging was ON.
I'm not sure if it's file I/O in particular, or CPU hoggage in general that triggers the misbehavior. I suppose I could construct a test to find out, but I'm not that interested, since 8.6 is a dead end anyway.
09-01-2009 10:18 AM
09-01-2009 10:28 AM
I've not tried in 2009.
My logging was simply:
Open given text file
Write Line at End.
Close Text File.
The line written was either 'Unit 0: received ""' or 'Unit 0: Received "MRAT 0,4,122,5"', or something similar, having to do with alarms on a TCP/IP instrument.
The problem was that it was running 5-7 times per second, and the file had grown to 4-5 MB in size.
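In text form, that per-line pattern looks roughly like this (a Python stand-in for the LabVIEW file primitives; the file name and messages just mirror the examples above). Every call pays for a full open and close, which is what adds up at 5-7 calls per second against a multi-megabyte file:

```python
def log_line(path, message):
    # Open given text file, write line at end, close -- on EVERY call.
    f = open(path, "a")      # append mode opens and seeks to end of file
    f.write(message + "\n")  # write line at end
    f.close()                # close text file

log_line("alarms.log", 'Unit 0: received ""')
log_line("alarms.log", 'Unit 0: Received "MRAT 0,4,122,5"')
```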
09-01-2009 10:31 AM
None of my apps have been moved to LV 2009 yet, so I can't comment on that.
In the LV 8.2 app I was logging to tab-delimited files.
In the LV 8.6 app I was logging to raw binary while the test was running.
Ben
09-02-2009 04:29 PM - edited 09-02-2009 04:29 PM
I have been trying to recreate the issue in 2009, to no avail. I have a garbage text file over 11 MB in size to which I am writing garbage text each iteration of the loop. I am also simulating the signals from two sources and writing them to a chart. I have also tried causing my hard drive to flail by loading multiple large programs at once with the VI running, but even if it hiccups, the chart doesn't seem to lose its place.
I'm posting a VI snippet in case you see something that I might not be recreating.
09-02-2009 04:58 PM
Well, my situation is busier.
I have frames of 150 channels (DBL) coming in at 10 Hz via TCP from a PXI box.
At 2 Hz, I also have smaller (20-byte) packets coming in from PXI.
Every 5th frame is selected and triggers an UPDATE DISPLAY event, where I pick out four channels and update the two charts.
Also, I have sporadic alarm reports coming in from 1-4 other TCP devices.
And I'm running a separate LabVIEW program which is collecting 64 channels of cDAQ data at 10 Hz, and sending it TCP to the PXI for collation into the big frame.
And the receiver for the alarms was opening the file, writing a line (even if it was "Received: nothing"), and closing the file, 5-7 times per second.
So I had a bit more stress than your situation.
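The "every 5th frame" step above is plain decimation of the display rate; a minimal sketch, assuming 150-channel frames arriving at 10 Hz (the frame contents and counts here are invented):

```python
def select_display_frames(frames, decimation=5):
    """Yield every Nth frame; the caller would fire an UPDATE DISPLAY
    event for each one and pick out the channels of interest."""
    for i, frame in enumerate(frames):
        if i % decimation == 0:
            yield frame

# 10 Hz of 150-channel frames in -> 2 Hz of chart updates out.
frames = [[float(i)] * 150 for i in range(20)]
display_frames = list(select_display_frames(frames))
```

Decimating before the display event keeps the chart-update rate fixed no matter how fast the acquisition runs.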
09-03-2009 06:04 AM
Oh, I forgot:
In my case, I have a simulator running which simulates the four gas analyzers. That means I have four TCP links from the host to the simulators carrying sporadic alarm data, and four UDP links sending 80-100 character packets at 10 Hz from the host to the PXI.
Still, the CPU normally is running at 5%, without the open file-write file-close file part.
09-03-2009 07:06 AM
That is too easy on the machine.
The 8.6 app I last saw this issue on was capturing about 70 AI channels and a couple of counters. They were pushed off to two queues, one for logging and the other for GUI updates.
So break up your example into a producer that drives multiple consumers, and make sure the logging is happening during the acquisition (open files once and keep appending).
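A text-based sketch of that producer/multiple-consumer split, using Python threads and queues in place of LabVIEW loops and queues (all names and sizes are hypothetical); note that the logging consumer opens its file once and keeps appending for the whole run:

```python
import queue
import threading

def producer(acq_q, n_frames=50):
    # Stand-in for the DAQ read loop: push each frame to the consumers.
    for i in range(n_frames):
        acq_q.put([float(i)] * 70)   # ~70 AI channels per frame
    acq_q.put(None)                  # sentinel: acquisition finished

def fan_out(acq_q, log_q, gui_q):
    # Split the acquisition stream into one queue per consumer.
    while True:
        frame = acq_q.get()
        log_q.put(frame)
        gui_q.put(frame)
        if frame is None:
            break

def logger(log_q, path):
    # Open the file ONCE and keep appending while acquisition runs.
    with open(path, "a") as f:
        while True:
            frame = log_q.get()
            if frame is None:
                break
            f.write("\t".join(str(x) for x in frame) + "\n")

def gui(gui_q, chart):
    while True:
        frame = gui_q.get()
        if frame is None:
            break
        chart.append(frame[0])  # stand-in for a chart update

acq_q, log_q, gui_q = queue.Queue(), queue.Queue(), queue.Queue()
chart = []
threads = [
    threading.Thread(target=producer, args=(acq_q,)),
    threading.Thread(target=fan_out, args=(acq_q, log_q, gui_q)),
    threading.Thread(target=logger, args=(log_q, "acq.log")),
    threading.Thread(target=gui, args=(gui_q, chart)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The point of the split is that slow file I/O in the logging consumer can never block the acquisition or the chart updates; the queues absorb the jitter.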
But come to think of it...
That app had a simulation mode built-in so ...
Send me an e-mail and I'll post up a zip to the FTP site (too large for NI e-mail to accept).
Ben