NI Linux Real-Time Discussions


Synchronizing FPGA with Real-Time

Solved!

Hi everyone, I hope this is the right group for the issue I'm about to describe.

 

In essence: I have a problem with FPGA DMA FIFOs when I use a Target-to-Host FIFO (sending data from the FPGA to RT).

 

To explain the issue as simply as possible, here is a minimal example: an FPGA VI that writes one data point every 200 microseconds and an RT VI that reads 1000 data points at a time.

 

FPGA

Mattia88_0-1586270140722.png 

 

RT

Mattia88_1-1586270221315.png

 

When I run the RT code, it should read 1000 data points every 200 ms. However, as you can see in the following chart from the RT VI, the elapsed time periodically jumps to 201 ms:

 

Mattia88_2-1586270557769.png

 

This is something I have never experienced before, even in other projects.

Does anyone have clues about this behavior?

 

An interesting thing to notice: if I change the number of data points to read or the frequency of the FPGA writer, the RT side shows the same periodic 1 ms excess over the average loop time, even though the loop time itself changes. For example, if I read 500 data points I see 100 ms with peaks of 101 ms.
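
Just to be explicit about the arithmetic I expect, here is a plain-Python sketch (illustrative only, since I can't paste the VI as text; the constants match my example):

```python
# Sketch (not the actual LabVIEW code): the expected read time scales with
# the number of points requested from the DMA FIFO.
WRITE_PERIOD_US = 200  # the FPGA writes one data point every 200 microseconds

def expected_read_time_ms(points_per_read: int) -> float:
    """Time for the FPGA to produce one full block of points."""
    return points_per_read * WRITE_PERIOD_US / 1000.0

for n in (1000, 500):
    print(f"{n} points -> {expected_read_time_ms(n):.0f} ms per read")
# 1000 points -> 200 ms per read
# 500 points -> 100 ms per read
```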

 

Thanks in advance!!!

 

 

Message 1 of 7

Hi Mattia88,

 

This behavior isn't necessarily unexpected to me (though it still could be, depending on what you find). You're expecting the code to execute in exactly 200 milliseconds every time, on both sides, including the transfer of the data. While that's a reasonable assumption in general, you will see jitter and other small timing differences. That 1 millisecond difference is likely an example of this, and you might actually be seeing it swing in both directions depending on rounding. More on that later.

 

This can happen for a number of reasons in this case:

  • Maybe the shared bus between the FPGA and the processor is slightly busier at some times, so it takes longer for the data to become available. 
  • Maybe there's a process in the Linux RT OS that periodically runs and temporarily blocks the loop from running, either because the processor is unavailable or because that process has a higher priority.
  • Maybe your timing is typically 199.5-200.4 milliseconds and occasionally creeps higher. This wouldn't be accurately reflected if you're benchmarking at the millisecond level rather than the microsecond level, but it would still result in a true "average" timing of 200 milliseconds on the RT side (the sketch after this list illustrates the rounding effect).
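
To illustrate that last point with made-up numbers (a quick Python sketch, not a measurement): sub-millisecond jitter around a true 200.0 ms mean can show up as occasional 201 ms (and 199 ms) readings once the benchmark only reports whole milliseconds.

```python
# Hypothetical numbers, not measurements: the displayed ms-resolution values
# bounce between 199 and 201 even though the true mean is 200.0 ms.
true_times_ms = [199.6, 200.3, 199.8, 200.7, 199.9, 200.6, 199.7, 199.4]

displayed = [round(t) for t in true_times_ms]    # what a ms-level benchmark shows
print(displayed)                                 # [200, 200, 200, 201, 200, 201, 200, 199]
print(sum(true_times_ms) / len(true_times_ms))   # ~200.0 -> the true mean is still 200 ms
```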

It's also worth noting that this timing difference only affects when the RT OS side receives the data. The actual acquisition should still be occurring "on time" on the FPGA (which you can benchmark as well).

 

A couple of other comments (with the acknowledgement that your code is just a simple case so you may be doing things differently in a normal application):

Charlie J.
National Instruments
Message 2 of 7

Thanks for the response.

However, I have some points to make:

  • RT code has plenty of time to run. Between two consecutive read operations we have 200 ms, which should compensate for code jitter inside the loop.
  • If I set a timed loop in RT to run at 200 ms, the behavior is the following: the cycle runs in 200 ms, but the read alternates between 1000 and 999 points. So it seems that 200 ms is sometimes not enough for 1000 points to become available, which is strange, because the FPGA writes exactly one point every 200 microseconds.
  • I used the exact same procedure last year and the behavior was different (I remember it working as I expected, without those 1 ms delays). Could this be caused by a new version of the RT Linux OS or the FPGA firmware?
Message 3 of 7

  • RT code has plenty of time to run. Between two consecutive read operations we have 200 ms, which should compensate for code jitter inside the loop.

While this may be true, I would point out that your Real-Time code has to wait at least 200 milliseconds before the data is available. That doesn't account for the time it takes to transfer or read the data. I would generally expect the next iteration to be faster (to "catch up" and compensate for the jitter), but as I've pointed out, you're only looking at millisecond accuracy, so that isn't necessarily visible.
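
To show what I mean by catch-up, here's a simplified timing model (plain Python; the overhead numbers are assumptions, not your actual code):

```python
# Simplified model: block k of 1000 points is ready at exactly k * 200 ms,
# and a blocking read returns when the block is ready plus some RT-side
# overhead. A one-off 1 ms stall is absorbed by the next iteration because
# the FPGA kept producing in the meantime.
BLOCK_MS = 200.0
overhead_ms = [0.1, 0.1, 1.1, 0.1, 0.1]    # third read hits a 1 ms OS stall

t_done, prev = [], 0.0
for k, ovh in enumerate(overhead_ms, start=1):
    ready = k * BLOCK_MS                   # when 1000 points are available
    prev = max(ready, prev) + ovh          # blocking read returns here
    t_done.append(prev)

iteration_times = [b - a for a, b in zip([0.0] + t_done, t_done)]
print([f"{t:.1f}" for t in iteration_times])
# ['200.1', '200.0', '201.0', '199.0', '200.0'] <- the slow iteration is
# immediately followed by a fast one (catch-up)
```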

 

  • If I set a timed loop in RT to run at 200 ms, the behavior is the following: the cycle runs in 200 ms, but the read alternates between 1000 and 999 points. So it seems that 200 ms is sometimes not enough for 1000 points to become available, which is strange, because the FPGA writes exactly one point every 200 microseconds.

This is more concerning. The FPGA takes 200 ms to produce the 1000 points, but that does not include the latency of the DMA transfer or the additional operations performed in RT to read the data from the driver. However, if you're reading everything in the FIFO each Timed Loop cycle, I would expect the count to bounce between 999 and 1001, so it's a bit confusing that it isn't, especially if you've been able to benchmark the FPGA loop and confirm the timing there.
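
Here's a sketch of the bouncing I'd expect when reading everything available each cycle (illustrative Python; the read-time jitter values are assumptions):

```python
# Model: the FPGA writes one point every 200 us and a 200 ms timed loop
# reads everything currently in the FIFO. A read that lands slightly late
# picks up an extra point; the next on-time read picks up one fewer.
PROD_PERIOD_US = 200

def produced(t_us: int) -> int:
    return t_us // PROD_PERIOD_US          # points written by time t

read_times_us = [200_000, 400_300, 600_000, 800_000, 1_000_300, 1_200_000]
consumed, counts = 0, []
for t in read_times_us:
    avail = produced(t) - consumed         # read all available points
    counts.append(avail)
    consumed += avail

print(counts)  # [1000, 1001, 999, 1000, 1001, 999]
```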

 

  • I used the exact same procedure last year and the behavior was different (I remember it working as I expected, without those 1 ms delays). Could this be caused by a new version of the RT Linux OS or the FPGA firmware?

It's possible, assuming that everything else was the same. 

 

Overall, I'd recommend reaching out to NI Support through official channels if this performance difference is a concern for your application. If you can provide a sample project that reproduces the issue, it's much quicker for them to take a look and drive things further.

Charlie J.
National Instruments
Message 4 of 7

Yes.

I remember that in previous tests (last year) the timing was as you have described: the cycle ran at 200 ms, with rare peaks at 201 ms immediately followed by 199 ms (catching up).

 

The behavior I'm seeing now is a bit problematic. These delays, always above 200 ms, will cause a serious drift over long acquisitions.
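
Back-of-the-envelope, with my own rough numbers:

```python
# If every 200 ms block actually takes 201 ms to read, the reader falls
# ~1 ms further behind the FPGA per iteration, so the FIFO backlog grows
# until it eventually overflows.
BLOCK_MS, EXCESS_MS = 200, 1
blocks_per_hour = (3600 * 1000) // BLOCK_MS    # 18000 blocks produced per hour
drift_s = blocks_per_hour * EXCESS_MS / 1000   # ~18 s behind after one hour
print(f"~{drift_s:.0f} s of drift per hour")
```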

 

EDIT:

Checking the timing in microseconds, I got this:

Mattia88_0-1586296114794.png

 

 

There seems to be a somewhat random pattern, but with an offset (the mean value is above 200000 microseconds).

There is a more or less constant jitter of about 70 microseconds.
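
For reference, this is the kind of summary I'm computing from the microsecond log (a sketch with placeholder numbers, not the real measurements):

```python
# Summarize iteration times from the microsecond-resolution log.
# Placeholder values below; the real log has many more points.
from statistics import mean, pstdev

iter_times_us = [200_065, 200_081, 200_032, 200_098, 200_060, 200_074]

offset_us = mean(iter_times_us) - 200_000            # systematic excess over 200 ms
spread_us = max(iter_times_us) - min(iter_times_us)  # peak-to-peak jitter
print(f"offset ~{offset_us:.0f} us, spread ~{spread_us} us, "
      f"stdev ~{pstdev(iter_times_us):.0f} us")
```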

Message 5 of 7
Solution
Accepted by topic author Mattia88

After a bit of work and cursing, I think I solved it.

The problem is related to the NI software installed on the cRIO. After I reinstalled the recommended software set (NI CompactRIO 18.5 - January 2019) without any custom modifications, the issue went away.
I think the version of NI Real-Time or NI-RIO was causing the issue (it may be related to compatibility between those packages and the FPGA firmware).

 

Now it works as expected.

 

Mattia88_0-1586304693040.png

 

Thanks for the support!!!

 

Message 6 of 7

Hi Mattia88,

 

Glad to hear you got it working!

Can you describe your reproducing case? I'd be interested in:

  • What software were you using on the Host and cRIO?
    • E.g., what custom modifications did you make?
  • What does the project configuration look like (since the target and FPGA configurations would be in there)?
  • Did you have to rebuild your bitfile before the behavior went away? Or were you using the same bitfile?

After talking internally with some people, I think the lack of "catch-up" is concerning. While that problem wasn't evident in how the RT-side benchmarking looked originally, the measurements you made of the actual transfers and with the Timed Loop seem worth at least filing a bug to follow up on.

Charlie J.
National Instruments
Message 7 of 7