Fwalker,
I'll add a little more information to what Preston said concerning DMA. As for how big a FIFO you can create on the RT side, it's hard to put a firm number on it without knowing a lot of other variables; it's like asking how large my application can be. The loose answer is: as much RAM as is available after the drivers, the application, and any other data have been loaded into memory.
However, I would guess the reason you're asking is to prevent the FIFO from overflowing. Increasing the buffer size will not necessarily help an application: if the FIFO is filling up, you're just delaying the inevitable, and it's better to manage the data flow in the way the code is written. By default, the host buffer is two times the size of the FPGA buffer, with a minimum of 10,000 elements. In general you should set it to at least two times the Number of Elements you plan to read at a time; four times that size generally works well, and anything more is really unneeded.
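If you want to set the host-side depth yourself, here's a rough C sketch of the idea using the FPGA Interface C API (the same configuration you'd do in LabVIEW with the FIFO Configure method; the "MyFpgaVi" header, FIFO constant, "RIO0" resource name, and READ_BLOCK value are all made up for this example):

    /* Sketch: size the host DMA buffer at 4x the block read per iteration. */
    #include "NiFpga.h"
    #include "NiFpga_MyFpgaVi.h"   /* hypothetical generated header */

    #define READ_BLOCK 10000       /* elements per DMA FIFO read */

    int main(void)
    {
        NiFpga_Session session;
        NiFpga_Status status = NiFpga_Initialize();
        status = NiFpga_Open(NiFpga_MyFpgaVi_Bitfile,
                             NiFpga_MyFpgaVi_Signature, "RIO0", 0, &session);
        /* Host buffer: at least 2x the per-read block; 4x is plenty. */
        status = NiFpga_ConfigureFifo(session,
                                      NiFpga_MyFpgaVi_TargetToHostFifoU32_FIFO,
                                      4 * READ_BLOCK);
        /* ... start the FPGA VI, read data, then clean up ... */
        NiFpga_Close(session, 0);
        NiFpga_Finalize();
        return status;
    }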
Each DMA transaction has overhead, so reading larger blocks of data is typically better. The DMA FIFO.Read function automatically waits until the Number of Elements you requested becomes available, which minimizes processor usage. However, CPU usage may increase if the data is coming in at a slower rate, because the heuristics the DMA API uses to decide when to sleep and when to poll depend on the amount of data requested and the number of elements still outstanding; if the request is small, the API may keep spinning and drive up CPU usage. It's better to use some mechanism to ensure data is available before you read, rather than relying on the blocking behavior of the host DMA Read node. I manage this by using interrupts, timed loops, polling by reading 0 elements, or scheduling followed by polling.
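For example, polling by reading 0 elements looks like this, continuing the C sketch above (in LabVIEW the equivalent is wiring 0 to Number of Elements on the FIFO Read node and checking Elements Remaining):

    /* Continuing the sketch: poll cheaply, then do one fixed-size read. */
    uint32_t data[READ_BLOCK];
    size_t remaining = 0;
    for (;;) {
        /* A 0-element read returns immediately and reports what's queued. */
        NiFpga_ReadFifoU32(session, NiFpga_MyFpgaVi_TargetToHostFifoU32_FIFO,
                           NULL, 0, 0, &remaining);
        if (remaining >= READ_BLOCK) {
            /* Enough data is waiting, so this read completes without spinning. */
            NiFpga_ReadFifoU32(session, NiFpga_MyFpgaVi_TargetToHostFifoU32_FIFO,
                               data, READ_BLOCK, 0, &remaining);
            /* ... process data ... */
        } else {
            usleep(1000);   /* sleep (needs <unistd.h>) instead of spinning */
        }
    }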
Using interrupts with DMA works really well when data is sent infrequently, since an IRQ adds little overhead. Using the Elements Remaining indicator to poll and then read that number of elements is not recommended, because it defeats optimizations built into the API, but for simple applications it does work well, as Preston suggested. Passing the number of elements remaining through a shift register to the next iteration's read is okay, but it has high overhead when the number of elements is small; combining it with a sleep in the loop keeps the processor burden low.
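Here's what the interrupt approach looks like in the same C sketch, assuming the FPGA VI asserts IRQ 0 once READ_BLOCK elements have been queued (the IRQ number and timeout are arbitrary choices for this example):

    /* Continuing the sketch: sleep until the FPGA asserts IRQ 0, then read. */
    NiFpga_IrqContext irqContext;
    NiFpga_ReserveIrqContext(session, &irqContext);
    uint32_t irqsAsserted = 0;
    NiFpga_Bool timedOut = NiFpga_False;
    NiFpga_WaitOnIrqs(session, irqContext, NiFpga_Irq_0,
                      5000 /* ms timeout */, &irqsAsserted, &timedOut);
    if (!timedOut) {
        uint32_t data[READ_BLOCK];
        size_t remaining;
        /* The IRQ means a full block is queued, so this read is immediate. */
        NiFpga_ReadFifoU32(session, NiFpga_MyFpgaVi_TargetToHostFifoU32_FIFO,
                           data, READ_BLOCK, 0, &remaining);
        NiFpga_AcknowledgeIrqs(session, irqsAsserted);
    }
    NiFpga_UnreserveIrqContext(session, irqContext);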
Hope that helps a little,
Bassett