LabVIEW


Scan backlogs and CPU usage - what's happening here?

How are you using the occurrence to read the data from all 6 cards?

Mark
Message 11 of 12
Mark,

Since my application requires that the channel configuration may be any combination of 1 to 6 cards, I sort the channel list to get all the channels in order. Then I build an array of channel names that contains n channel names, separated by commas, per array element.
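The grouping step can be sketched in text-based code (the real implementation is LabVIEW, so this is just the logic, not the VI). The "card/channel" naming scheme and the channel names below are assumptions for illustration:

```python
# Rough sketch of the channel-list preparation: sort the full channel list,
# then build one comma-separated string per card, so each array element
# holds all of that card's channels.
from itertools import groupby

def build_card_channel_array(channels):
    """Group sorted channel names by card; one comma-joined entry per card."""
    ordered = sorted(channels)
    card_of = lambda name: name.split("/")[0]  # assumed "card/channel" naming
    return [",".join(group) for _, group in groupby(ordered, key=card_of)]

# Example: channels from two cards, listed out of order
chans = ["dev2/ch1", "dev1/ch0", "dev1/ch1", "dev2/ch0"]
print(build_card_channel_array(chans))  # ['dev1/ch0,dev1/ch1', 'dev2/ch0,dev2/ch1']
```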

I split the array at index 1: the first card, whichever it is, becomes the master, and the remaining cards are configured as slaves in a for loop.

Since all cards are triggered simultaneously from the master using the RTSI line, I look at my channel-list array and use the card with the most channels as the one to generate the occurrence. Once this configuration is done, the task IDs are passed into a while loop which runs until data acquisition is stopped. Inside the while loop I call AI Read in a for loop driven by the task ID array.
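As a runnable Python sketch of that control flow (again, the real code is LabVIEW with the NI-DAQ VIs; `config_card` and `ai_read` below are hypothetical stubs standing in for AI Config/Start and AI Read):

```python
# Sketch of the acquisition structure: first array element is the master,
# the rest are slaves; each loop iteration reads every card's task in turn.

def config_card(channels, role):
    """Stand-in for AI Config/AI Start; returns a fake task descriptor."""
    return {"channels": channels, "role": role}

def ai_read(task, scans):
    """Stand-in for AI Read; returns (data, backlog)."""
    return [0.0] * scans, 0

def acquire(card_channel_array, scans_per_read, iterations=3):
    master, *slaves = card_channel_array            # split the array at index 1
    tasks = [config_card(master, "master")]         # master drives the RTSI line
    tasks += [config_card(s, "slave") for s in slaves]
    reads = 0
    for _ in range(iterations):                     # stands in for the while loop
        for task in tasks:                          # AI Read per card, by task ID
            data, backlog = ai_read(task, scans_per_read)
            reads += 1
    return reads

print(acquire(["dev1/ch0,dev1/ch1", "dev2/ch0"], 100))  # 6: 2 cards x 3 iterations
```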

Since a -1 would not work in continuous acquisition mode, I wired the backlog array to a shift register and add the backlog number to the nominal desired scans on the next iteration. This seems to work well: the first couple of iterations show some backlog scans, but then things stabilize. I think, however, that unless the backlog gets significant, I might be better off leaving it alone. I noticed that at a 200 Hz scan rate with 6 cards running 32 channels each, the backlog array showed 0, 1, 2, 3, 4, 5, which is most likely due to the time it takes to read each card. If I just leave the backlog alone, the data will be time-synced.
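The shift-register compensation amounts to: scans requested this iteration = nominal scans + last iteration's backlog. A minimal sketch, with made-up backlog numbers:

```python
# Sketch of the shift-register backlog compensation: each iteration requests
# the nominal scan count plus the backlog reported on the previous read.

def compensated_scan_counts(nominal, backlogs):
    """Given per-iteration backlog readings, return the scans requested each time."""
    requests = []
    carried = 0                      # shift-register value, initialized to 0
    for b in backlogs:
        requests.append(nominal + carried)
        carried = b                  # this read's backlog feeds the next iteration
    return requests

# e.g. 200 nominal scans, backlogs settling to zero after a few iterations
print(compensated_scan_counts(200, [5, 2, 0, 0]))  # [200, 205, 202, 200]
```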

I can still cause the processor to load up, but it takes a lot of screen movement and window open/close. Even at 100%, I am not bogging down the other processes. I just ran a test for 36 hours at 500 Hz with 144 channels, and my processor usage stayed at 20%.

Todd
Message 12 of 12