LabVIEW

Scan backlogs and CPU usage - what's happening here?

I have been experimenting with the "Cont Acq&Chart (Async Occurrence).vi" example, and I would be interested to know if anyone can explain my results:

I noticed that with the example VI, I can cause the scan backlog to increase by moving windows around - the interesting thing is that when the backlog increases, it always reduces back down to the nearest integer multiple of the number of scans to read. So, in my case I have a scan rate of 200 and a number of scans = 20, and if the backlog increases to anything less than 20, it will reduce back to zero. Once it gets above 20, however, it only reduces back down to 20. The same applies for 40, 60, etc.

So, I modified the VI so that AI Read reads the number of scans entered, plus the scan backlog from the last iteration of the loop, on the grounds that this would get rid of the backlog. And so it does - but here is where it gets more interesting: each time I cause a backlog to build up by moving windows around the screen (in my case Task Manager on the Performance tab), the CPU usage jumps by 20 or 30% and *settles* at the new level. Do this a few times and you can soon get to 100%. This does *not* happen with the unmodified VI - its CPU usage drops back to its previous level (<10%).
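
In Python-flavoured pseudocode (hypothetical names, not LabVIEW), the change amounts to this:

```python
# A minimal sketch of the modification described above: each loop iteration asks
# AI Read for the normal chunk plus whatever backlog the previous read reported.
def scans_to_request(scans_to_read: int, previous_backlog: int) -> int:
    return scans_to_read + previous_backlog

print(scans_to_request(20, 0))    # 20 while nothing is backlogged
print(scans_to_request(20, 14))   # 34 right after windows have been dragged around
```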

Can anyone explain a) why the backlog sticks at a multiple of the number of scans, and b) why the CPU usage goes up (and stays up) if the backlog is requested?

Ian
Message 1 of 12
I have seen this before. It has to do with the processor getting busy painting windows while dragging.

Try this:
1. Right-click on your desktop >> Properties
2. Click on the "Appearance" tab, then "Effects"
3. Uncheck "Show window contents while dragging"

Try running your VI now.

ARafiq
Message 2 of 12
Perhaps I wasn't clear in my original post - I fully expect the CPU usage to go up and scan backlogs to occur when I move windows around (this is just a way to generate a temporary increase in CPU load), so I'm not trying to avoid that. What I'm interested in is that the CPU usage *stays* high after moving windows around, and it's LabVIEW that is using up the processor time.

I think that trying to catch up with the scan backlog is causing the AI Read to get out of sync and start using up CPU time - this appears to be confirmed by profiling, where the execution time of AI Read keeps going up.

I'm trying to understand exactly what happens with buffering and DAQ occurrences so that I can tailor the catch-up mechanism correctly.

Ian
Message 3 of 12
Please post a copy of your modified VI.

Are you changing the input to the DAQ Occurrence Config VI?

I think you are getting multiple occurrences firing, which is getting you ahead of the actual data acquisition.

Ben
Retired Senior Automation Systems Architect with Data Science Automation | LabVIEW Champion | Knight of NI and Prepper
Message 4 of 12
Hi, here's the modified example in LV 7, plus a saved-as LV 6.1 version (I hope). I'm not changing the input to the Occurrence Config VI.

The changes are a case structure to select between various experiments on the number of scans to read (including the original), and a chart to see what's happening. (I'm using a 6052E connected to an SCXI chassis.)

Sorry about the delay replying - I've been away for a couple of days.

Ian
Message 5 of 12
I believe the answer to this question lies in the way LabVIEW handles occurrences with respect to DAQ.

In this VI, every time the buffer fills with a multiple of "number of scans to read at a time", the occurrence is set, allowing the AI Read VI to read data out of the buffer. If the scan backlog is less than the "number of scans to read at a time", the Wait on Occurrence VI sees the next occurrence happen, and reads all the data out of the buffer to catch up.

For example, let's say 20 is the number of scans to read at a time. You're moving windows around, and the scan backlog goes to 14. This means that the AI Read didn't get to execute *exactly* when the occurrence happened (when there were 20 scans in the buffer--now there are 34), so it read 20 scans out of the buffer, and there were 14 left. The next occurrence is set after 6 more scans come in, so the Wait on Occurrence VI runs almost immediately--and the AI Read VI again reads 20 scans out of the buffer, clearing the backlog.

Now, let's try this again, but this time the backlog goes above 20. Since the occurrence is set every time another 20 scans enter the buffer, the Wait on Occurrence VI actually misses an occurrence being set. The occurrence is not a buffer: once it is set, it has to be read by the Wait on Occurrence VI before it is cleared and can be set again. So it has been set twice by the DAQ occurrence, but only read once by the Wait on Occurrence VI, and one of those notifications is lost. Therefore, there will always be at least 20 scans in the backlog, since that occurrence was skipped.
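
Here is a rough Python model of that flag behaviour - not LabVIEW code, and the class and method names are purely illustrative - just to show why a set/clear flag drops the second notification:

```python
class Occurrence:
    """A set/clear flag: setting it twice before a wait still counts as one."""
    def __init__(self):
        self._set = False

    def set(self):
        self._set = True           # a second set() before wait() is simply lost

    def wait(self):
        fired = self._set
        self._set = False          # reading the occurrence clears it
        return fired

occ = Occurrence()
occ.set()            # 20 scans arrive -> occurrence set
occ.set()            # another 20 arrive before the loop runs -> set again, not queued
print(occ.wait())    # True  -> the loop wakes up once...
print(occ.wait())    # False -> ...the second notification is gone, so 20 scans
                     #          stay stranded in the backlog
```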

When using this VI in modified form, I usually have it monitor the backlog. If it gets above the "number of scans to read at a time", on the next iteration I'll have the AI Read VI read ALL the data in the buffer, just to clear out the backlog and keep it from doing exactly what you're seeing.

Hope that made sense.

Mark
Message 6 of 12
Ian,

I think I answered your first question in a reply to your posted file attachments. However, I can explain b) as well.

In my experience, it's not a good idea to combine DAQ occurrences with a varying "number of scans to read" input on the AI Read VI. Here's why!

First, the reason for the DAQ occurrence is to free up the CPU while waiting for data. If you take out the DAQ occurrence, and just run continuous acquisition, you'll find that it takes a LOT of CPU time to perform the same acquisition. Why? Because if the AI Read VI is called before the data is available in the buffer, it will take up the whole CPU while waiting for it. Therefore, if you use the DAQ occurrence, you don't call the AI Read VI until you're sure the amount of data you want to read from AI Read is actually IN the buffer. That way, you don't tie up the CPU.
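
As a generic analogy (plain Python, nothing here is an NI-DAQ call), the difference between waiting on a notification and calling the read before the data exists looks roughly like this:

```python
import threading
import time

data_ready = threading.Event()   # stands in for the DAQ occurrence

def producer():
    time.sleep(0.5)              # pretend the buffer is still filling
    data_ready.set()             # "occurrence fires": the chunk is now available

threading.Thread(target=producer).start()
data_ready.wait()                # sleeps (~0% CPU) until the producer signals
print("read the chunk after a cheap wait")

# The no-occurrence case is the equivalent of spinning until the data shows up,
# which is what eats the CPU:
#
#   while not data_ready.is_set():
#       pass                     # busy-wait: burns a whole core
```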

Now, if you change the number of scans the AI Read VI reads with respect to the amount of data that triggers the occurrence, this is where you have problems. Let me give you an example, and see if you concur.

Let's assume that the number of scans to read is 20, and you're setting your occurrence every 20 scans. Everything is peachy, no backlogs. Now move some windows around. Let's say you see a backlog of 10. On the next iteration, the occurrence is set when 10 more scans enter the buffer, so there are 20 scans in the buffer. Under normal circumstances (meaning you're always reading 20 scans out of the buffer in AI Read), it would clear this right up--no problem. But now you've got (number of scans to read + scan backlog) as the input to AI Read, so it's waiting for 30 scans! So it ties up the CPU a bit, waiting for 10 more scans to become available, since there were only 20 when the occurrence happened...

So 30 scans become available, and it resumes. 10 scans later, the occurrence happens--it thinks there's 20 in there again. See how the AI Read and the occurrence get out of sequence? It can get all screwed up. Pretty soon, after doing this several times, your occurrence and AI Read are so out of whack, you're pretty much running without the occurrence at all--the AI Read keeps having to wait for the data to be available.

That's why, as I suggested earlier, I check the backlog to see if it's greater than the number of scans to read. If it is, read the entire buffer ONCE (put in a -1 as the number of scans to read), and then go back to the original number of scans to read. This will keep your AI Read and occurrence synched up.
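
In Python-style pseudocode (illustrative only - the real logic lives around AI Read on the diagram), the rule is simply:

```python
def scans_to_request(backlog: int, scans_per_occurrence: int = 20) -> int:
    """If the backlog has grown past one occurrence's worth of data, read the
    whole buffer once (-1); otherwise read the normal chunk so AI Read stays
    in step with the occurrence."""
    if backlog > scans_per_occurrence:
        return -1                        # -1 = read everything available, once
    return scans_per_occurrence

print(scans_to_request(14))   # 20 -> a backlog under one chunk clears by itself
print(scans_to_request(34))   # -1 -> one full-buffer read, then back to 20
```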

Mark
Message 7 of 12
This is the direction I was headed.

Thanks Mark!

Ben
Retired Senior Automation Systems Architect with Data Science Automation | LabVIEW Champion | Knight of NI and Prepper
Message 8 of 12
Hi Mark,

Thanks for your very succinct answer(s). I definitely agree with your analysis - having thought about what was going on (and knocking out a test VI to see how occurrences actually work), I'm sure you are right about occurrences being missed, and the high CPU usage being due to AI Read waiting for scans.

It eventually occurred to me (no pun intended) that this would only happen when AI Read is called with a number of scans to read greater than what is available - so I arrived at calling AI Read with the number of scans to read = 0, in order to retrieve how many are available, and then *if* this is > 0, reading all of them. This seems to work reliably all the time, and is very similar to your suggestion.
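
As a rough sketch (assumed names, not real NI-DAQ calls), the logic I ended up with looks like this:

```python
def drain_backlog(ai_read):
    """ai_read(n) stands in for AI Read; here n = 0 just reports the backlog."""
    available = ai_read(0)            # ask for 0 scans -> how many are buffered?
    if available > 0:
        return ai_read(available)     # only request what is already there, so
                                      # the read never blocks waiting for data
    return []

# Tiny fake AI Read for illustration: 35 scans already sitting in the buffer.
buffered = list(range(35))

def fake_ai_read(n):
    if n == 0:
        return len(buffered)                          # query only
    taken, buffered[:] = buffered[:n], buffered[n:]   # hand back n scans
    return taken

print(len(drain_backlog(fake_ai_read)))   # 35 -> whole backlog cleared in one read
```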

Ian
Message 9 of 12
Mark,

I am having the same problem reading six 6071E cards synchronized with the RTSI 0 line. Processor usage would gradually creep up to 100% and bog down the data processing portion of my program. I am operating at 500 Hz and reading 250 scans every 500 msec. I set one occurrence using the last card to be read. I tried your suggestion, but switching in a -1 to AI Read when AI Start is set to 0 (for continuous acquisition) causes AI Read's 'scans to read' input to default to 100 scans. Now I'm REALLY backlogging.

Todd
Message 10 of 12