Real-Time Measurement and Control


cRIO dropping data samples as 0's

I have been grappling with this problem on my own for far too long. I think it's time to join the forum!

 

I am attempting to sample data from a 9201 module in a cRIO 9074 and display it on a PC. The problem is that, depending on how much data is removed at a time from the buffer between the RT target and the PC, some samples are dropped and returned as 0's. But below some critical batch size, no samples are dropped. The basic architecture is FPGA -> DMA FIFO -> RT -> Shared Variable -> PC. I suspect the issue is misuse of the array index and the data array at the host, but I'm not sure. I attached the relevant code snippets. Here are two graphs that help illustrate the dropped samples (or lack thereof):

 

[Images: 3x100.png, 3x1000.png]

 

The module is being fed with a 0.1 Hz sine wave. On the top, the cRIO is sending data to the host in batches of 100 samples x 3 channels at a time. On the bottom, the cRIO is sending data to the host in batches of 1000 samples x 3 channels at a time. The part that makes me think it involves array handling is that when I feed AI 0, it appears in the graph Channel 1, and when I feed AI 2, it appears in the graph Channel 0, which is not at all what I expected!

 

Can anyone help me with this problem? I'm not understanding why changing the size of shared variable reads causes data loss.

Message 1 of 9

Hello Logan,

 

It looks to me like this issue is related to FIFO interleaving. To avoid array manipulation issues, you need to read the data out of the FIFO in a multiple of 3, and you need to avoid any overflows. If the FIFO overflows, the ordering of the data gets scrambled, and you won't know whether you're reading x, y, or z. The same thing happens if you read out of the FPGA FIFO in a count that isn't a multiple of three.
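This interleaving effect can be sketched outside LabVIEW. The Python snippet below is purely illustrative (the real transfer happens inside the DMA FIFO), but it shows how a read count that isn't a multiple of the channel count rotates the channels, which matches the AI 0 / AI 2 swap reported above:

```python
# Sketch of why DMA FIFO reads must be a multiple of the channel count.
# The FPGA writes samples interleaved as ch0, ch1, ch2, ch0, ch1, ch2, ...
CHANNELS = 3

def deinterleave(block):
    """Split a flat interleaved block into per-channel lists."""
    return [block[i::CHANNELS] for i in range(CHANNELS)]

# Interleaved stream: sample k of channel c encoded as c*100 + k
stream = [c * 100 + k for k in range(4) for c in range(CHANNELS)]

# Reading a multiple of 3 keeps channels aligned:
ch0, ch1, ch2 = deinterleave(stream[:6])
print(ch0)  # [0, 1] -- channel 0 samples, as expected

# Reading a non-multiple (say 4) leaves the next read starting mid-frame,
# so every "channel" is rotated from then on:
leftover = stream[4:]
ch0_bad, _, _ = deinterleave(leftover[:6])
print(ch0_bad)  # [101, 102] -- channel-1 data lands in the channel-0 slot
```

The same rotation happens after an overflow, because an unknown number of elements is discarded mid-frame.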

 

Also, you're sending an array of data up to your host program, but it looks like you're reading only two elements at a time from that FIFO. How big is the array coming from the FIFO-enabled shared variable going to the host PC? Realistically you should be splitting your data into x, y and z values before sending it up to the host side, to avoid interleaving issues if items get dropped. It's possible that your host FIFO isn't configured to handle 3000 elements, in which case you could just expand the size of the FIFO and that would probably solve the issue. But I would recommend splitting the array on the real-time side anyway, since it's more deterministic. 

Colden
Message 2 of 9

I set the multiplier default to 3, but I think I'll use a constant instead to be certain I can't change it by mistake. The depth of the FPGA -> RT FIFO is currently at 8191, and the number of elements remaining never approaches that number.

 

As far as reading only two elements at a time from the shared variable, I arbitrarily chose two channels to make an XY plot. When I added a third plot and index array with row index 2, I got a flat line, not even noise. Which leads me to the shared variable FIFO, which was configured as a Single Element FIFO of only two elements! I can't really test changes over the weekend, but unless I find otherwise it looks like at least part of the problem is fixed.

 

As far as splitting the array in the RT target, how would that affect CPU overhead? The RT CPU usage has consistently been somewhere above 90%, varying somewhat with buffer read size. However, the only other task on the RT host is a timed loop of lower priority that executes two scan engine samples and two shared variable writes at a rate of once per second. Could this CPU usage be a symptom of shared variable FIFO troubles as well? (I followed the perennial advice of reducing the DMA FIFO read timeout all the way to 0 to no avail.)

 

Logan H

Message 3 of 9

Thanks for your help, increasing the width of the network variable array did in fact allow all the channels of data to be sent across the network. I also paid closer attention and found that occasionally data would be dropped and the waveforms would change channels spontaneously, so I separated the channels into separate variables at the RT processor like Colden suggested.

 

But here's the rub: now it looks like only the first sample of each DMA FIFO bulk read gets passed along:

 

[Image: FirstDMAFIFOSamples.png]

The shared variables have a FIFO depth of 1024, samples are being read 1000 x 3 at a time, and the same effect occurs both with and without network buffering. The same critical-value effect is appearing: if the FPGA polls the AIN module quickly enough, or the DMA FIFO reads occur quickly enough, the waveform is continuous. I think this causes each sample to be the first in a batch of one, but this seems to be more of the same problem. Do some loops need to be synchronized, or would it make sense to use a bare TCP connection instead of shared variables?

 

I really appreciate any help or advice.

- Logan H

Message 4 of 9

Hello LoganH,

 

Thanks for the post, I have a few questions to get a better understanding of what may be causing this behavior:

How fast are you sampling data on the RT and FPGA sides, and what loop rates are you running?

What are the values of your elements remaining on your RT side?

What values are the timeouts set to?

Would it be possible for you to post your project files so that we can take a closer look at your application?

 

Thanks, Paul-B

Applications Engineer
National Instruments
Message 5 of 9

Your CPU usage is way too high, and this may be part or all of your problem.

 

I don't know what you've got the timeout set to on the DMA FIFO read from the FPGA, but if it's anything other than zero you're in trouble. NI's implementation of the DMA FIFO read function is completely barking: while it waits, it consumes 100% of the CPU, causing low-priority tasks like network traffic to stop working properly. You need to call the FIFO read function with 'timeout' and 'number of elements' set to zero, then compare the 'elements remaining' output to your desired number. Once it is >= that number, call the FIFO read again with a timeout of zero and your desired number of elements. In the meantime, call a wait function to free up the CPU.
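This polling pattern can be sketched in Python. Here `fifo.read(num_elements, timeout)` is a hypothetical stand-in for the LabVIEW FPGA FIFO Read method (which returns the data plus an "elements remaining" count); the point is only the control flow, not the API:

```python
import time

BLOCK = 3000          # desired elements per read (1000 samples x 3 channels)
POLL_SLEEP_S = 0.005  # wait to free the CPU between polls

def read_block(fifo):
    """Zero-timeout polling read: never block inside the FIFO read itself."""
    while True:
        # "Faux" read: 0 elements, 0 timeout -- only queries elements remaining.
        _, remaining = fifo.read(0, 0)
        if remaining >= BLOCK:
            # Enough data is buffered; this read returns immediately.
            data, _ = fifo.read(BLOCK, 0)
            return data
        # Not enough data yet: sleep instead of spinning at 100% CPU.
        time.sleep(POLL_SLEEP_S)
```

The sleep in the not-ready branch is what frees the CPU; the FIFO read call itself never waits.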

 

The application engineer will be able to provide you with a link to the explanation of this "expected behaviour". 

Message 6 of 9

Paul-B, here is more of the information you are looking for:

The sample rate is 1-10 kHz per channel (3-30 kHz aggregate)

The loop rate that produced the images in my above post was 10 ms on the RT target

All DMA FIFO related timeouts are set to 0 ms

 

As far as elements remaining and loop rates, the images produced above were with an RT target loop rate of 10 ms, and the elements remaining would build from zero until a read happened and fall down again. I attached some project files with extra code disabled (BTW this is my first attempt at posting a project on the forum, so if there's a problem please let me know), but with some different behavior. The attached files seem to provide sensible data for all three channels, but require a specific RT loop rate, DMA FIFO read size, and FPGA loop rate (50 ms, 500 x 3 samples, and 100 us respectively) and cause the number of remaining elements to be constant. This may work because data is produced and consumed at exactly the same rate, but I thought the entire point of a FIFO was to decouple different loops from timing differences and jitter.
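As a rough sanity check on these numbers (my arithmetic, not figures from the thread), the quoted rates relate to the 8191-element FIFO depth like this:

```python
# Headroom check for the quoted rates (illustrative arithmetic only).
sample_rate_hz = 10_000          # per channel, upper end of the 1-10 kHz range
channels = 3
fifo_depth = 8191                # FPGA -> RT DMA FIFO depth (elements)
read_size = 1000 * channels      # elements consumed per RT-side read
rt_loop_s = 0.010                # 10 ms RT loop

# Elements arriving in the FIFO during one RT loop iteration:
produced_per_loop = sample_rate_hz * channels * rt_loop_s
print(produced_per_loop)  # 300.0 -- so a 3000-element read only succeeds
                          # about once every 10 loop iterations

# Time until the FIFO overflows if the RT loop stalls:
fill_time_s = fifo_depth / (sample_rate_hz * channels)
print(round(fill_time_s, 3))  # 0.273 s of headroom at the full 30 kHz aggregate
```

So at the top of the quoted range the FIFO only buffers about a quarter of a second, which is not much margin if shared variable writes or network traffic ever stall the RT loop.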

 

Spruce, at one point today I was able to get CPU usage down to ~40%, but it has since returned to high 80's/low 90's even with a 0 timeout. Am I understanding your advice correctly: call a "faux" FIFO read, compare the elements remaining output with your desired number, and once it reaches that number, call a FIFO read "for real?" I thought this was the normal behavior for a FIFO read with a timeout of 0; could you elaborate on the difference?

 

Thanks for all the help!

Logan H

Message 7 of 9

Take a look here:

 

http://digital.ni.com/public.nsf/allkb/583DDFF1829F51C1862575AA007AC792?OpenDocument

 

In that example you just need to add a call to a wait function in the false case. I'd choose a value of about twice your sample period. The rate at which data appears in your DMA FIFO will then dictate the loop speed, no need for a timed loop.

 

I suspect that shared variables are probably not helping your cause; you'd be much better off packetising the data and using a network stream to get it to the PC:

 

http://www.ni.com/white-paper/12267/en
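Conceptually, packetising might look like the sketch below. The packet layout is invented purely for illustration (it is not the Network Streams wire format): a small header, then each channel's samples as a contiguous block, so the receiver can never misalign channels the way an interleaved stream can.

```python
import struct

# Hypothetical packet format: header (channels, samples per channel),
# then each channel's samples back-to-back as little-endian doubles.
HEADER = struct.Struct("<II")

def packetise(channels_data):
    """Serialise per-channel sample lists into one self-describing packet."""
    n_ch = len(channels_data)
    n_samp = len(channels_data[0])
    payload = b"".join(struct.pack(f"<{n_samp}d", *ch) for ch in channels_data)
    return HEADER.pack(n_ch, n_samp) + payload

def unpacketise(packet):
    """Recover the per-channel sample lists from a packet."""
    n_ch, n_samp = HEADER.unpack_from(packet)
    body = packet[HEADER.size:]
    step = n_samp * 8  # bytes per channel block
    return [list(struct.unpack(f"<{n_samp}d", body[i * step:(i + 1) * step]))
            for i in range(n_ch)]
```

Because each packet carries its own channel count and batch size, a dropped packet loses data but can never rotate the channels.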

 

In your FPGA vi the sample rate you have selected for the ADC should dictate the loop rate, again no need for timed loops (or wait functions, in this case).

 

Message 8 of 9

I implemented the DMA FIFO read as Spruce suggested, and the last vestiges of shared variables are gone and replaced with data streams. RT CPU usage is down to a consistent ~35% as well!

 

Spruce, when you say:


@Spruce wrote:

 

In your FPGA vi the sample rate you have selected for the ADC should dictate the loop rate, again no need for timed loops (or wait functions, in this case).

 


I don't fully understand what you mean. Is "Sample Rate" some kind of FPGA I/O property?

 

There also seems to be some interplay between DMA FIFO size, sample rate, and data stream buffer sizes that tends to fill up the DMA FIFO and drop entire DMA FIFO reads (as evidenced by large discontinuities) despite low resource utilization. On top of that, there can be significant latency that comes and goes. Is there a simple fix that I might be missing in my inexperience with data streams? If not, I think it might be time to let this thread lie.

Or at least start a new one in a more general forum...

 

Thank you to everyone who has assisted!

Logan H

Message 9 of 9