04-07-2020 09:44 AM - edited 04-07-2020 10:56 AM
Hi everyone, I hope this is the right group for the issue I'm about to describe.
In essence: I have a problem with FPGA DMA FIFOs when I use a Target to Host FIFO (sending data from the FPGA to RT).
To explain the issue as simply as possible, here I'm posting a small example with an FPGA VI that writes one data point every 200 microseconds and an RT VI that reads 1000 data points.
FPGA
RT
When I run the RT code, it should read 1000 data points every 200 ms. However, as you can see in the following chart from the RT VI, the elapsed time periodically goes up to 201 ms:
This is something I have never experienced before, even in other projects.
Does anyone have clues about this behavior?
An interesting thing to notice: if I change the number of data points to read or the frequency of the FPGA writer, the RT side still shows the same periodic 1 ms excess over the mean loop time, even though the loop time itself changes. For example, if I read 500 data points I see 100 ms with peaks of 101 ms.
Thanks in advance!!!
04-07-2020 10:57 AM
Hi Mattia88,
This behavior doesn't necessarily appear unexpected to me (though it still could be, depending on what you find). You're expecting that the code will execute in exactly 200 milliseconds every time on both sides, including the transfer of the data. While in general that's a reasonable assumption, you will see jitter and other small timing differences. That 1 millisecond difference is likely an example of this, and you might actually be seeing it swing in both directions depending on rounding. More on that later.
This can happen for a number of reasons in this case:
It's also worth noting that this timing difference only affects when the RT OS side gets the data. The actual acquisition should still be occurring "on time" on the FPGA (which you can benchmark as well).
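As a rough, hypothetical illustration of the rounding point above (this is not the LabVIEW code, and the 200.07 ms figure is an assumed number): if the true loop period is slightly above the 200 ms target but you benchmark it with a millisecond-resolution tick count, the measured deltas sit at 200 ms most of the time with a 201 ms value appearing periodically as the sub-millisecond error accumulates:

```python
# Hypothetical sketch: a loop whose true period is 200.07 ms (the 200 ms
# target plus a small assumed overhead), benchmarked with a
# millisecond-resolution tick count.
true_period_us = 200_070                               # assumed true period, µs

# timestamps as a millisecond-resolution timer would report them
ticks_ms = [(i * true_period_us) // 1000 for i in range(20)]
deltas = [b - a for a, b in zip(ticks_ms, ticks_ms[1:])]

print(deltas)   # mostly 200, with a periodic 201 as the sub-ms error builds up
```

At microsecond resolution the same loop would show the small constant excess directly instead of this rounding artifact.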
A couple of other comments (with the acknowledgement that your code is just a simple case so you may be doing things differently in a normal application):
04-07-2020 02:22 PM
Thanks for the response.
However, I have some points to make:
04-07-2020 02:51 PM
- The RT code has plenty of time to run. Between two consecutive read operations we have 200 ms, which should compensate for code jitter inside the loop.
While this may be true, I would point out that your Real-Time code has to wait at least 200 milliseconds before the data is available, and that doesn't account for the time it takes to transfer or read the data. I would generally expect the next iteration to be faster (to "catch up" and compensate for the jitter), but as I've pointed out, you're only looking at millisecond accuracy, so that's not necessarily visible.
- If I set a Timed Loop in RT to run at 200 ms, the behavior is the following: the cycle runs in 200 ms, but the read alternates between 1000 and 999 points. So it seems that 200 ms is sometimes not enough for 1000 points to be available, which is strange because the FPGA writes exactly one point every 200 microseconds.
This is more concerning. The FPGA takes 200 ms to produce the 1000 points, but that does not include the latency of the DMA transfers or the additional operations performed in RT to read the data from the driver. However, I would expect the count to bounce between 999 and 1001 if you're reading everything in the FIFO each Timed Loop cycle, so it's a bit confusing that it isn't, especially if you've been able to benchmark the FPGA loop and confirm the timing there.
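A toy model of this effect (the latency and jitter numbers are made up, not a claim about the actual driver): if each element becomes visible to the host a short, slightly jittery time after the FPGA writes it, a Timed Loop that reads everything available every 200 ms sees counts that hover around 1000 rather than hitting it exactly:

```python
import random

random.seed(0)

write_period_us = 200           # FPGA writes one element every 200 µs
read_period_us = 200_000        # RT Timed Loop period
base_latency_us = 300           # assumed constant DMA/driver latency
jitter_us = 150                 # assumed per-element transfer jitter

# time at which each element becomes visible on the host side
visible = [k * write_period_us + base_latency_us + random.randint(0, jitter_us)
           for k in range(10_000)]

reads, already_read = [], 0
for n in range(1, 11):                        # ten Timed Loop iterations
    now = n * read_period_us
    available = sum(v <= now for v in visible)
    reads.append(available - already_read)    # elements read this cycle
    already_read = available

print(reads)   # counts land near 1000, but not always exactly 1000
```

Because the assumed jitter is smaller than the 200 µs write period, each read can only be off by an element or two, which is the 999/1000/1001-style bouncing described above.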
- I used the exact same procedure last year and the behavior was different (I remember it working as I expected, without those 1 ms delays). Could it be caused by a new version of the RT Linux OS or the FPGA firmware?
It's possible, assuming that everything else was the same.
Overall, I'd recommend reaching out to NI Support through official channels if this performance difference is a concern for your application. If you can provide a sample reproducing project, it's much quicker for them to take a look and drive things further.
04-07-2020 03:10 PM - edited 04-07-2020 04:47 PM
Yes.
I remember that in previous tests (last year) the timing was as you described: the cycle ran at 200 ms with rare peaks at 201 ms immediately followed by 199 ms (catching up).
The behaviour I'm seeing now is more problematic: delays that are always above 200 ms will cause serious drift over long runs.
EDIT:
Doing the timing check in microseconds, I got this:
There seems to be a somewhat random pattern, but with an offset (the mean value is above 200 000 microseconds).
There is a more or less constant excess of about 70 microseconds.
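Back-of-the-envelope only, but a constant ~70 µs excess per 200 ms iteration compounds quickly, which is why this matters for long acquisitions:

```python
# Rough drift estimate from a ~70 µs/iteration excess over a 200 ms loop
# (illustrative arithmetic, not a measurement of the actual system).
offset_us = 70                                   # excess per loop iteration
period_ms = 200                                  # nominal loop period

iters_per_hour = 3600 * 1000 // period_ms        # 18 000 iterations per hour
drift_s_per_hour = iters_per_hour * offset_us / 1e6

print(drift_s_per_hour)                          # about 1.26 s of drift per hour
```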
04-07-2020 07:13 PM
After a bit of work and cursing, I think I solved it.
The problem is related to the NI software installed on the cRIO. After re-installing the recommended software set (NI CompactRIO 18.5, January 2019) without any custom modifications, the issue disappeared.
I think it was the version of the Real-Time OS or the NI-RIO driver that was causing the issue (maybe something related to compatibility between that software and the FPGA firmware).
Now it works as expected.
Thanks for the support!!!
04-08-2020 10:33 AM
Hi Mattia88,
Glad to hear you got it working!
Can you describe your reproducing case? I'd be interested in:
After talking internally with some people, the lack of "catch-up" is concerning. While that problem wasn't evident in the way the RT-side benchmarking looked originally, the measurements you made of the actual transfers with a Timed Loop seem worth at least filing a bug to follow up on.