
DMA "Acquire Read Region" method

I've been meaning to ask this question for a while now.

 

I just came across the FPGA method mentioned in the title of this post.  I understand it gives a DVR pointing to the actual memory range for a DMA transfer but I have some questions regarding the implementation.

 

Does the function only return the DVR once the requested memory space is available AND the requested number of elements are already present?

 

Can this be used to speed up DMA transfer between loops?

 

Shane

 

Edit: What I mean by speeding up transfers between loops is the following: I would like to have the DMA read outside my time-critical loop on RT and pass the data in via an RT FIFO or queue, but when I try to do this with a standard DMA FIFO read I get rather poor performance and a lot of jitter.  I am hoping this new method might be a step towards improving that situation.
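To make the idea concrete, here is a rough sketch in plain C of the structure I'm after (pthreads and a simple ring buffer standing in for the RT FIFO; all names are made up and this is obviously not the LabVIEW code):

```c
#include <pthread.h>
#include <stdatomic.h>
#include <string.h>

#define BLOCK 512            /* elements fetched per DMA read (example value) */
#define SLOTS 8              /* ring depth, power of two                      */

typedef struct { double data[BLOCK]; } block_t;

static block_t     ring[SLOTS];
static atomic_uint head = 0, tail = 0;   /* producer advances head, consumer advances tail */

/* stand-in for the blocking DMA FIFO read */
static void dma_fifo_read(double *dst, size_t n) { memset(dst, 0, n * sizeof *dst); }

/* runs on its own core: does the slow, blocking read outside the RT loop */
static void *dma_reader(void *arg)
{
    (void)arg;
    for (;;) {
        unsigned h = atomic_load(&head);
        if (h - atomic_load(&tail) == SLOTS)       /* ring full: skip (or block) */
            continue;
        dma_fifo_read(ring[h % SLOTS].data, BLOCK);
        atomic_store(&head, h + 1);
    }
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, dma_reader, NULL);

    for (int i = 0; i < 1000; i++) {               /* the time-critical loop          */
        unsigned tl = atomic_load(&tail);
        if (atomic_load(&head) != tl) {            /* non-blocking availability check */
            block_t *blk = &ring[tl % SLOTS];
            /* ... use blk->data for this iteration's calculations ... */
            atomic_store(&tail, tl + 1);
        }
        /* ... rest of the iteration ... */
    }
    return 0;
}
```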

Message 1 of 4

It has been a while since I used this function, but I believe it will return a region with the number of elements specified unless it times out, in which case it will give you a region with whatever elements were available when the timeout triggered.

 

I still don't understand your comment on transferring data between loops. I would assume that reading the points from the DMA FIFO directly within your time-critical loop would introduce less jitter than going through an intermediate FIFO, because it means fewer copies of the data. And the Read Region methods give you a way to not introduce a copy at all.
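Very roughly, the difference in C terms (this is just an illustration of the copy vs. zero-copy idea, not the actual RIO driver API; all names are made up):

```c
#include <stddef.h>
#include <stdio.h>
#include <string.h>

#define N 8
static double dma_buffer[N] = {1, 2, 3, 4, 5, 6, 7, 8};   /* stand-in for the DMA buffer */

/* hypothetical copy-based read: the driver copies into the caller's array */
static void fifo_read(double *dst, size_t n) { memcpy(dst, dma_buffer, n * sizeof *dst); }

/* hypothetical region read: no copy, the caller gets a view into the buffer */
static const double *acquire_read_region(size_t n) { (void)n; return dma_buffer; }
static void release_region(const double *r) { (void)r; }   /* like deleting the DVR */

static double sum(const double *x, size_t n)
{
    double s = 0;
    for (size_t i = 0; i < n; i++) s += x[i];
    return s;
}

int main(void)
{
    double local[N];
    fifo_read(local, N);                        /* extra copy of N elements     */
    printf("copy read:   %g\n", sum(local, N));

    const double *r = acquire_read_region(N);   /* zero-copy view (DVR-like)    */
    printf("region read: %g\n", sum(r, N));
    release_region(r);                          /* release as soon as possible  */
    return 0;
}
```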

Message 2 of 4

There is method to my insanity.

 

We have an application where we are running an RT loop at 20 kHz including DMA transfers (25 kHz is possible, but with more jitter).  Looking at the RT Execution Trace Toolkit, we see that the DMA transfers are actually making up a rather large portion of our loop times.  The idea was to offload the actual DMA transfer to a separate CPU core in order to allow for a data transfer method with a lower minimum runtime.

 

In our case the DMA transfers are taking approximately 10 us per call, which ends up limiting our maximum loop rate "artificially".  We have DMA transfers in both directions, so in essence we lose 20 us per iteration to the DMA transfers.  Of course, having the DMA transfer in the timed loop is great for jitter, but the maximum loop rate is lower than it theoretically could be if the data transfer between RT and FPGA were faster.

 

I had a system up and running with the DMA transfers offloaded to a separate CPU core, but the jitter was too high.  I was hoping the ability to pass a DVR instead might help things in this regard.

 

Is there any way of offloading the DMA transfers in this way, essentially "pipelining" them to allow for higher RT loop rates, but without introducing some nasty jitter?
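For what it's worth, the kind of pipelining I'm picturing looks roughly like this in plain C (double-buffering, with the blocking read for the next block overlapped with processing of the current one; names are made up):

```c
#include <pthread.h>
#include <string.h>

#define N 512                              /* elements per transfer (example)        */

static double bufs[2][N];                  /* two buffers: one in flight, one in use */

/* stand-ins for the blocking DMA FIFO read/write calls */
static void dma_read (double *dst)       { memset(dst, 0, sizeof(double) * N); }
static void dma_write(const double *src) { (void)src; }

static void *read_next(void *arg)          /* worker: fetch the next block */
{
    dma_read((double *)arg);
    return NULL;
}

static void process(const double *in, double *out) { memcpy(out, in, sizeof(double) * N); }

int main(void)
{
    double result[N];
    int cur = 0;
    dma_read(bufs[cur]);                    /* prime the pipeline */

    for (int i = 0; i < 1000; i++) {        /* RT loop: data is one iteration "old" */
        pthread_t t;
        int nxt = 1 - cur;
        /* NOTE: in a real RT system this would be a persistent worker pinned to its
           own core, not a thread created per iteration; kept simple for the sketch. */
        pthread_create(&t, NULL, read_next, bufs[nxt]);  /* next read in flight */

        process(bufs[cur], result);          /* work on the block we already have */
        dma_write(result);                   /* return path to the FPGA           */

        pthread_join(t, NULL);               /* next block ready for the next pass */
        cur = nxt;
    }
    return 0;
}
```

The obvious trade-off is one extra iteration of latency on the data.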

 

Shane.

Message 3 of 4

I just revisited this topic today.

 

I replaced our standard DMA read with the "Acquire Read Region" version in order to try to shave a few us off our loop time.

 

It achieved the opposite.  My FIFO read timing went from 9 us to 12 us.  My understanding was that LESS work should be going on here, not more.

 

The rest of the code is unchanged; I utilise an IPE (In Place Element structure) to read individual elements of the DMA data through the DVR when processing, instead of addressing the array directly as before.  I also destroy the DVR immediately after processing.
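In C terms, what the loop is doing now is roughly this (illustrative names only, not the real API):

```c
#include <stddef.h>
#include <stdio.h>

#define N 512
static double dma_buffer[N];                /* stand-in for the DMA transfer buffer */

/* hypothetical equivalents of the Acquire Read Region / delete DVR calls */
static const double *acquire_read_region(size_t n) { (void)n; return dma_buffer; }
static void release_region(const double *r) { (void)r; }

int main(void)
{
    const double *r = acquire_read_region(N);   /* DVR to the region, no copy       */

    double acc = 0;
    for (size_t i = 0; i < N; i++)               /* IPE-style in-place element reads */
        acc += r[i];

    release_region(r);                           /* destroy the DVR straight away    */
    printf("%g\n", acc);
    return 0;
}
```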

 

I don't understand why it's taking longer.

 

LV 2012 SP1, PXI-e-8115, FPGA 7965R

Message 4 of 4