Hello,
I am using DAQmx with an M Series PCI card and Measurement Studio 7.x with VB.NET. I am continuously sampling voltage and comparing incoming values against a threshold. When my application runs, it pegs the CPU at 100%. I have also tested the application with all of the callback code commented out (except the calls that keep the acquisition going), in other words all of the code that checks the samples and performs the mathematical operations. What I am left with is a callback that should take the CPU an extremely short amount of time to execute, much faster in fact than the card can acquire samples. It still pegs the CPU at 100%. From various posts on this forum and a KB article, I gather that the way DAQmx polls the card (even in yield mode) eats up any CPU time available. This is entirely unacceptable for the applications that I, and several other developers using NI's DAQmx and their compatible products, am implementing. I must be able to run various algorithms on the incoming data sets, as well as manage a database, dump reports to the main form, and write log files in real time, all of which require precious CPU time and all of which run in the same program that handles the continuous acquisition. On top of this, the application will run for approximately three months at a time. It will melt my CPU if it runs at 100% the whole time.
Right now I am testing my application, and the task seems to stall out on me after a couple of days. When this happens I check Task Manager: max.exe is using some CPU time and what seems like a lot of memory for a process I never explicitly launched. My application itself uses a fairly flat amount of memory (it never increases or decreases very much or very often) and gets most of the CPU time. Visual Studio .NET gets a segment of CPU time and uses its typical amount of memory, except when I have a large amount of debug information printing out (now commented out). While I was running with debug output, I noticed that the counts of acquired and processed samples for that task stopped incrementing. I put a breakpoint at the beginning of my callback function and it does get hit. I step through until I reach a line that checks the total number of acquired samples on the task stream, and my function exits prematurely. Has anyone seen something like this before?
I have seen similar topics posted here (about 100% CPU usage with continuous acquisition), but never a real solution. Most replies amount to "it is meant to do that so you get the quickest possible response when samples are ready." I have also seen it posted here (http://forums.ni.com/ni/board/message?board.id=232&message.id=426&requireLogin=False) that running the same acquisition in a MAX test panel does not peg the CPU, because a sleep call was added somewhere (I am guessing in the thread that handles the polling). Why is there no way for a programmer to sacrifice a little responsiveness to save some CPU time? I know this has bothered other DAQmx users, some of whom I work with and others I have read about on this forum. I would like to free up some amount of CPU time, if not as a reserve in case my algorithm suddenly needs more processing power, then simply to spare my client's CPU from being destroyed by three months of acquisition with the CPU pegged.
What can I do to free up CPU usage? Should I put my callback functions in their own prioritized threads to make sure they execute quickly enough? As experts on NI's Measurement Studio and DAQmx, what do you recommend?
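For reference, here is a minimal sketch of the kind of asynchronous read loop I am describing, using Measurement Studio's AnalogMultiChannelReader. The channel name, rates, buffer sizes, and the Thread.Sleep throttle are all illustrative assumptions, not my actual code, and whether a sleep in the callback actually relieves the driver's internal polling thread is exactly what I am unsure about:

```vbnet
Imports System.Threading
Imports NationalInstruments.DAQmx

Module ContinuousAcquisitionSketch
    Private acqTask As Task
    Private reader As AnalogMultiChannelReader

    Sub StartAcquisition()
        acqTask = New Task()
        ' "Dev1/ai0" and the ranges below are placeholders.
        acqTask.AIChannels.CreateVoltageChannel("Dev1/ai0", "", _
            AITerminalConfiguration.Differential, -10.0, 10.0, AIVoltageUnits.Volts)
        acqTask.Timing.ConfigureSampleClock("", 1000.0, _
            SampleClockActiveEdge.Rising, SampleQuantityMode.ContinuousSamples, 1000)

        reader = New AnalogMultiChannelReader(acqTask.Stream)
        ' Marshal callbacks through the UI thread's synchronization context.
        reader.SynchronizeCallbacks = True
        reader.BeginReadMultiSample(1000, AddressOf OnDataReady, Nothing)
    End Sub

    Sub OnDataReady(ByVal ar As IAsyncResult)
        Try
            Dim data(,) As Double = reader.EndReadMultiSample(ar)

            ' ... threshold checks, database work, logging, etc. ...

            ' Hypothetical throttle: trade a little latency for idle CPU
            ' time. This yields the callback thread, but it is unclear to
            ' me whether it touches the driver's polling at all.
            Thread.Sleep(10)

            ' Re-arm the read to keep the acquisition going.
            reader.BeginReadMultiSample(1000, AddressOf OnDataReady, Nothing)
        Catch ex As DaqException
            acqTask.Dispose()
        End Try
    End Sub
End Module
```

If sleeping here is the wrong place, where should the yield go so that the hardware buffer does not overflow during the pause?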
CLA, CCVID, Certified Instructor