Counter/Timer


DMA / IRQ methods of data transfer

Hi,

I have a couple of questions regarding DMA and programmed-I/O data transfer. I have managed to get by without knowing the answers, but I am interested in resolving them.

I use a DAQCard 6062E / 6036E and PCI-MIO-16XE-50. I have written a CVI application that employs buffered period measurement and, at other times, buffered event counting to log pulses coming from gas meters. At the same time I acquire analogue data on a number of channels, sometimes continuously, other times in blocks.

I have noticed two things:
a) With the DAQCards (which do not support DMA) I get a -10920 error ("pulses may have been lost due to speed limitations of your system") at very moderate pulse rates (~70 Hz) with the default analogue transfer method, which is to interrupt on every half-full FIFO. If I force this to interrupt on every analogue sample, then I can acquire much faster pulse rates (~kHz). I am confused by this, as I would expect the default of one interrupt per half-FIFO to give the CPU more time to service the counter buffer (which I assume is a separate hardware buffer on the card; I would also like to know how big these FIFO buffers are, if anyone knows).

b) The second problem relates to the PCI card. The default is to use DMA for data transfer, but again I get the -10920 error fairly often (though randomly) in the same sort of application, unless I change to interrupt-driven transfers. My guess here is that the CPU is not relinquishing the bus to the DMA controller often enough for it to service the FIFO buffer.

As I have said, I have worked around these problems but would like to know more if anyone can help.

Thanks

Jamie
Message 1 of 5
(4,318 Views)
Jamie,

I'll try to help.

I have experimented with this a bit and I have found that:

For the PCI case, I get better performance with an interrupt condition of half-FIFO-full for the AI operation. DMAs gave me even better performance.

Setting the AI transfer condition to interrupt on every sample quickly generates an overflow error on my counter or on my AI operation. I also experimented with leaving the counter as a DMA-based transfer versus having it use interrupts.

It does not seem to match the behavior you were seeing, but maybe we can brainstorm on this to see if we can get extra information that may help to clarify the issue. Some details that might help:

- There is not really a dedicated FIFO for the counter operations on MIO boards. In practice, you might say that it is 2 samples deep.

- The buffer size you are using for both operations (counters and AI) is really important, as well as the ratio to the number of samples you read per iteration in your application. The more samples you read at a time, the more efficient the operation becomes.

- In order to benchmark this properly you need to use a parallel-loop architecture in your application: a single application running parallel loops, each of which reads from its circular buffer.

- The current NI-DAQ driver is not a fully multithreaded driver. This means that at certain times the driver might temporarily lock access to the hardware resource. I have seen this behavior before and it might be the cause of what you are seeing. We are coming out with a new driver that should improve multithreading performance.

- Additional information on the systems running the applications might help. Low processor speed or low-memory conditions can affect the way a transfer behaves. The AI_Read and Buffer_CTR_Read functions normally poll for data, taking your CPU usage to 100%, which affects the ability of the CPU to respond to certain interrupts. One idea would be to introduce small delays within the reading loops to lower the CPU usage. The catch is that you are limited by the Windows timer resolution (~1 ms), so each read must span at least that long:

(number of samples to read at a time) / (sample rate) > 1 ms

I am not yet certain about what could be causing the behavior but maybe the comments above might help.

Let me know,

Alejandro
Message 2 of 5
Alejandro,

Thanks for your excellent answer.

The problem may lie in the fact that I have not employed multi-threading in my application. The method I am using is as follows:

1) Set up counter operation (ND_BUFFERED_PERIOD_MEASURE)
2) Use a callback timer to generate EVENT_TIMER_TICKs. On each tick: a) sample an analogue channel using DAQ_Op(), scale it using DAQ_VScale() and average it using Mean(), then call ProcessSystemEvents() before sampling another channel; b) process all available points in the counter buffer using GPCTR_Read_Buffer().

This is probably not a good method. I should probably have the counter operation and analogue operations and User Interface running in different threads. Would that be the reason for my problems?

Regards


Jamie Fraser
Message 3 of 5
Jamie,

Your suggestions are correct. There are things you can try which would improve performance:

- The Counter operation seems to be properly configured. Whether finite (single-buffered) or continuous (double-buffered), the most efficient way to do things is to configure the counter operation once, start it and then read data from the buffer in a loop.

- The Analog Input operation could be improved. DAQ_Op is a synchronous function that configures the board every time it is called (higher overhead). The function starts an AI operation every time and polls until all the requested samples are available. Once the data is available, the function returns and clears the board's configuration. A more efficient way would be to configure the AI operation once using lower-level functions, start it, and then read from it within a loop. If the continuity of the data is not important, you might enable overwriting of data so you don't get overflow errors.

- Using multiple threads at the application level will allow you to have two parallel loops running at the same time. Putting each of these threads to sleep (where possible) will save CPU time and improve performance. I attach a Word document that shows the general scheme of such an application. For the specific function calls you need, refer to the NI-DAQ User Manual; it has flowcharts of the functions you must use, especially for the Analog Input case.

I hope this helps,

Alejandro
Message 4 of 5
Hi,

As a closing note to this thread for anyone experiencing similar problems, the source of this effect is described below. Perhaps an NI engineer will correct any technical inaccuracies in my statements.

Although I was originally not using multi-threading, I re-wrote my application to be multi-threaded, which brought many benefits but did not solve the problem with pulse counting. I was still getting the -10920 error at quite low pulse frequencies.

The problem was in the way I had coded my routines (the discussion below refers to the multi-threaded solution). I was setting up an analogue DAQ and a buffered period measurement in separate threads, but was then looping at full speed in each thread and calling DAQ_DB_HalfReady() and GPCTR_Watch() at EACH LOOP ITERATION! I would then transfer/process data once enough data had been transferred from the DAQ device by NI-DAQ.

The problem seems to have been that I was entering the NI-DAQ DLL so frequently with my calls to the NI-DAQ functions that it did not have sufficient free time to service the counter interrupts before the counter's hardware save registers were overwritten.

Putting a short Delay() between each NI-DAQ function call allowed the background data transfer process to run fast enough to count much higher pulse frequencies.

If this seems obvious to most people I apologise, but it threw me for quite a while. This experience was with NI-DAQ 6.9.3; from what I have read of NI-DAQ 7, this problem would probably not occur, as the new driver is fully multi-threaded.

Jamie Fraser
Message 5 of 5