09-21-2006 03:09 PM
You got it. An async timer is started and executed in a thread separate from the main thread. Not only does this keep the main thread from blocking your data acquisition, it also keeps your acquisition from blocking your user interface.
If your main thread needs periodic access to the data from the acquisition thread, CVI has tools to make that easy too. Look at thread-safe queues (TSQs) and thread-safe variables.
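For the TSQ route, the skeleton looks something like this (typed from memory, so double-check the exact CmtNewTSQ / CmtWriteTSQData / CmtReadTSQData prototypes in the Utility Library help; the queue size here is arbitrary):
#include <utility.h>

static CmtTSQHandle dataQueue = 0;

/* Create a queue that holds up to 1000 doubles (call once at startup). */
void CreateDataQueue (void)
{
    CmtNewTSQ (1000, sizeof (double), 0, &dataQueue);
}

/* Producer side: called from the acquisition (timer) thread. */
void QueueSample (double sample)
{
    CmtWriteTSQData (dataQueue, &sample, 1, 0, NULL);
}

/* Consumer side: called from the main thread; copies out up to maxSamples
   without blocking (timeout of 0) and returns how many items were read. */
int DrainSamples (double samples[], int maxSamples)
{
    return CmtReadTSQData (dataQueue, samples, maxSamples, 0, 0);
}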
One key feature of CVI is that your timer callback can update indicators on your main panel directly from the timer thread. As long as both threads do not try to write to the same indicator, you won't have any problems. Another tip to get good performance is to use SetCtrlAttribute() with the ATTR_CTRL_VAL attribute when updating a large group of indicator values, then call ProcessDrawEvents() at the end of the routine. This avoids multiple screen redraw cycles and greatly improves screen-update performance when using very fast timer loops.
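The update portion of a timer callback would then look something like this minimal sketch (the panel handle and control IDs are placeholders for double-valued numerics defined in your own .uir):
#include <userint.h>

/* Write three values with ATTR_CTRL_VAL (which defers drawing), then force
   a single redraw pass for the whole group. */
void UpdateReadouts (int panel, int ctrl1, int ctrl2, int ctrl3,
                     double v1, double v2, double v3)
{
    SetCtrlAttribute (panel, ctrl1, ATTR_CTRL_VAL, v1);
    SetCtrlAttribute (panel, ctrl2, ATTR_CTRL_VAL, v2);
    SetCtrlAttribute (panel, ctrl3, ATTR_CTRL_VAL, v3);
    ProcessDrawEvents ();   /* one screen update instead of three */
}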
09-22-2006 11:46 AM
Thanks mvr.
I have already implemented an async timer (30 ms) in my code. The timer works well. However, I have run into another problem.
In my async timer (30 ms) callback, I draw my DAQ data on a graph (not vs. time) that was built in my main-thread user interface. In my main thread (the DAQ routine), I call ProcessSystemEvents to respond to "stop" and "pause" events from the user interface. I noticed the DAQ rate still dropped a lot even when using the async timer, and it drops more if I set the timer interval lower, such as 10 ms. I found that ProcessSystemEvents in my main thread is what lowers the DAQ rate. It seems to me that whenever the async timer callback draws DAQ data on the graph, ProcessSystemEvents in the main thread responds to it, which lowers the DAQ rate. But if I move ProcessSystemEvents, or add ProcessDrawEvents, to my timer callback, the graph won't be updated until the timer is stopped. How can I resolve this problem? Thanks.
09-25-2006 07:53 AM
There are a lot of possible causes, but if I understand your application correctly, in the timer callback you read the DAQ data and write it to the user interface on a graph control that was constructed in the main thread. RunUserInterface() is also called from, and running in, the main thread.
You said:
>>In my main thread (the DAQ routine), I call ProcessSystemEvents to respond
>>to "stop" and "pause" events from the user interface.
What "DAQ routine" is running in the main thread? Are these the event callbacks that start, stop, and pause the timer callback running in the secondary async timer thread that does the DAQ read?
You said:
>>I noticed the DAQ rate still dropped a lot even when using the async timer,
>>and it drops more if I set the timer interval lower, such as 10 ms.
I think you are saying that as you increase the timer firing rate from every 30 ms to every 10 ms, the number of DAQ acquisitions is less than you expect. Do you have any idea how long it takes a DAQ read cycle to complete on your hardware? This can be a bit difficult to determine if you don't have the right tools. One method is to set a DIO bit as you start the DAQ read cycle and clear it when the read statement completes. You can look at the bit with an oscilloscope and get a pretty good idea of what your hardware timing is. Just don't forget to subtract the overhead of the calls to the DIO routines by setting up and measuring a test case that only toggles the DIO bits. Another method is to use the Windows high-performance timers to try to "profile" the execution speed of your routine.
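A rough sketch of that second method, using the Win32 high-resolution counter (the Sleep() call is just a stand-in for your actual DAQ read):
#include <windows.h>
#include <stdio.h>

void ProfileDaqRead (void)
{
    LARGE_INTEGER freq, start, stop;

    QueryPerformanceFrequency (&freq);   /* counter ticks per second */
    QueryPerformanceCounter (&start);

    Sleep (5);                           /* replace with your DAQ read call */

    QueryPerformanceCounter (&stop);
    printf ("DAQ read took %.3f ms\n",
            (stop.QuadPart - start.QuadPart) * 1000.0 / (double) freq.QuadPart);
}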
Another possible bottleneck on your system could be the video card. Older hardware can be very inefficient at updating the video screen and will block inside the main thread if you are updating the user interface at a very high rate; if your video card is relatively new, this should not be an issue. You can also have problems if you have a lot going on in your main thread that blocks the display calls you are making from the secondary thread. Where I am going with this is that it sounds like your timer callback takes longer than 10 ms to execute.
Adding a thread keeps the routines in the different threads from blocking each other. Windows takes care of giving each thread its own slice of time, but it does not make "more time" on the CPU. If you have a group of routines that take 15 ms to execute in total, splitting them across different threads will not make them execute any quicker.
There are a couple of approaches you can take here:
If the bottleneck is the video card: it is unlikely that you need to update the screen at a 100 Hz rate (10 ms interval), so you may be able to reduce the video update load on the CPU by simply not outputting every DAQ data block to the user interface (see the decimation sketch after these two approaches).
Your DAQ read routine may be inefficient. What kind of DAQ card are you getting the data from? If you can strip your code down to just the main thread, its graph, and the timer callback parts that read the DAQ card and write to the graph, and post it here so that I can run it, I can probably give you more input. You may also get the benefit of the experience of the other "Gurus" in this forum.
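Here is a minimal sketch of the decimation idea from the first approach. It assumes a CVI project that already includes ansi_c.h and userint.h, a panel handle (panelHandle) loaded by the main thread, and a graph control ID (PANEL_GRAPH) from your .uir; ReadOneSample() is just a stand-in for your real DAQ read:
#define UPDATE_DIVIDER 10            /* draw only every 10th acquisition */

extern int panelHandle;              /* panel loaded by the main thread */

static double ReadOneSample (void)
{
    return rand () / 4.0;            /* stand-in for your real DAQ read */
}

int CVICALLBACK DaqTimer (int reserved, int timerId, int event,
                          void *callbackData, int eventData1, int eventData2)
{
    static int tick = 0;
    static double sampleIndex = 0.0;
    double sample;

    sample = ReadOneSample ();       /* acquire on every timer tick */
    sampleIndex += 1.0;

    if (++tick >= UPDATE_DIVIDER)    /* ...but update the display less often */
    {
        tick = 0;
        PlotPoint (panelHandle, PANEL_GRAPH, sampleIndex, sample,
                   VAL_EMPTY_SQUARE, VAL_RED);
    }
    return 0;
}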
One other note: running a timer below about 10 ms on current-generation hardware can be unreliable, and Windows XP itself is not the ideal OS for this kind of requirement. How much jitter in the timer you can stand depends on your specific application and how accurate you really need it to be, but better (more deterministic) performance can be obtained from one of the real-time OS flavors like CVI Real-Time.
09-25-2006 01:47 PM
Thanks.
To find out whether the problem is a slow video card, I set the timer interval to 1 s for displaying the graph. I also displayed two of the DAQ values in numeric boxes. The graph cannot be updated at all; however, one of the numeric boxes updates continuously. I also wrote the following program to test my video card speed.
#include "asynctmr.h"
#include <ansi_c.h>
#include <cvirte.h>
#include <userint.h>
#include "timer.h"
static int panelHandle;
int CVICALLBACK MyTimer (int reserved, int timerId, int event, void *callbackData, int eventData1, int eventData2);
int main (int argc, char *argv[])
{
    if (InitCVIRTE (0, argv, 0) == 0)
        return -1;    /* out of memory */
    if ((panelHandle = LoadPanel (0, "timer.uir", PANEL)) < 0)
        return -1;
    DisplayPanel (panelHandle);
    RunUserInterface ();
    DiscardPanel (panelHandle);
    return 0;
}
int CVICALLBACK Ok (int panel, int control, int event,
                    void *callbackData, int eventData1, int eventData2)
{
    switch (event)
    {
        case EVENT_COMMIT:
            /* start a 10 ms async timer that fires MyTimer */
            NewAsyncTimer (0.010, -1, 1, MyTimer, 0);
            /* block the main thread on purpose; the display must now be
               updated entirely from the timer thread */
            while (1);
            break;
    }
    return 0;
}
int CVICALLBACK MyTimer (int reserved, int timerId, int event, void *callbackData, int eventData1, int eventData2)
{
    double x1, x2, x3, y;

    /* arbitrary test values to display */
    x1 = rand () / 1.0;
    x2 = rand () / 2.0;
    x3 = rand () / 3.0;
    y = rand () / 4.0;
    SetCtrlVal (panelHandle, PANEL_NUMERIC, x1);
    SetCtrlVal (panelHandle, PANEL_NUMERIC_2, x2);
    SetCtrlVal (panelHandle, PANEL_NUMERIC_3, x3);
    PlotPoint (panelHandle, PANEL_GRAPH, x1, y, VAL_EMPTY_SQUARE, VAL_RED);
    return 1;
}
The program displays x1, x2, and x3 in the numeric boxes and draws the plot on the graph in real time. Does this demonstrate that my video card is fast enough to display the data in a timely way? I feel the main problem is still the function "ProcessSystemEvents". If I put it in my main-thread DAQ routine, the graph and all the numeric boxes update correctly, even though my DAQ rate drops. Are there any other possibilities that would make my main thread block the timer callback from updating the user interface?
Thanks.
09-26-2006 08:32 AM - edited 09-26-2006 08:32 AM
I ran your sample program on a CVI 7.1 system with similar results to what you saw:
The three numeric controls update without issue, even when the main thread has a callback locked in a while loop or calling Delay(). But the graph does not behave the same way when the main thread is blocked. The way the sample code is written, you get a new plot for each point; on a graph with a legend displayed, this adds a new legend entry for each point. When the main thread is blocked, the graph still updates without issue, but the legend does not and remains frozen. This is not a problem unless the main thread is blocked.
So this sample program has proven that a callback can execute in another thread and output data to a display created in the main thread. So far so good.
>>Does this demonstrate that my video card is fast enough to display the data in a timely way?
If it looks like it is updating correctly, it is probably fine. But we can check this and a more important question at the same time: you need to know whether your timer is firing at the right interval, and fortunately there is an easy way to check this by looking at eventData2 of the timer callback. This is a pointer to the time elapsed since the last timer event (accurate to about 1 ms). Add a few lines to your code, start execution, then stop and look at timeStamps in the debugger:
#define SAMPLE_COUNT 25

static double timeStamps[SAMPLE_COUNT];

int CVICALLBACK MyTimer (int reserved, int timerId, int event, void *callbackData, int eventData1, int eventData2)
{
    double x1, x2, x3, y;
    static int i = 0;

    x1 = rand () / 1.0;
    x2 = rand () / 2.0;
    x3 = rand () / 3.0;
    y = rand () / 4.0;
    SetCtrlVal (panelHandle, PANEL_NUMERIC, x1);
    SetCtrlVal (panelHandle, PANEL_NUMERIC_2, x2);
    SetCtrlVal (panelHandle, PANEL_NUMERIC_3, x3);
    PlotPoint (panelHandle, PANEL_GRAPH, x1, y, VAL_EMPTY_SQUARE, VAL_RED);

    /* record the interval since the previous timer event for the
       first SAMPLE_COUNT firings */
    if (i < SAMPLE_COUNT)
        timeStamps[i++] = *(double *) eventData2;

    return 0;
}
If each interval is about 0.010 (the 10 ms you asked for) then you have no problems. Even if it is a little worse, you will probably be OK once you recompile your code in Release mode (which normally executes much faster).
When you describe your program, set to execute on 1 s intervals, as updating only one of the numerics and not the graph, it sounds like the main thread is totally blocked. You also mention:
>>I feel the main problem is still the function "ProcessSystemEvents".
>>If I put it in my main-thread DAQ routine, the graph and all the numeric
>>boxes update correctly, even though my DAQ rate drops.
If your DAQ acquisition is in the main thread, you are not getting much benefit from the async timer. You should not need to add ProcessSystemEvents() in your secondary thread at all, and if you need to add it anywhere in your main thread, then that point is probably a bottleneck in your code. Are you acquiring data through NIDAQmx, legacy NIDAQ, or some other interface? If you can add some more of your code to the sample above, maybe we can see where the problem is occurring.
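To make that concrete, here is a rough sketch of the structure I am suggesting, meant to drop into the sample program above (panelHandle and PANEL_NUMERIC come from that sample, and AcquireSample() is just a stand-in for your real DAQ read):
/* The main thread only runs RunUserInterface() and its event callbacks;
   the async timer thread does the DAQ read and the display update, and
   no ProcessSystemEvents() call is needed anywhere. */

static volatile int paused = 0;   /* written by main-thread callbacks, read by the timer thread */

static double AcquireSample (void)
{
    return rand () / 4.0;         /* stand-in for your real DAQ read */
}

int CVICALLBACK PauseCB (int panel, int control, int event,
                         void *callbackData, int eventData1, int eventData2)
{
    if (event == EVENT_COMMIT)
        paused = !paused;         /* toggled from the UI, no polling loop needed */
    return 0;
}

int CVICALLBACK DaqTimerCB (int reserved, int timerId, int event,
                            void *callbackData, int eventData1, int eventData2)
{
    if (!paused)
        SetCtrlVal (panelHandle, PANEL_NUMERIC, AcquireSample ());
    return 0;
}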
Message Edited by mvr on 09-26-2006 08:35 AM