FPGA I/O pin delay


I have a question concerning an FPGA project I am developing.
I am using an NI PCI-7813R Digital RIO as the target with FPGA Module 8.6.1, and I am implementing an SPI bus master.
The logic is a state machine inside a Single-Cycle Timed Loop, roughly patterned on one of your IP examples.
The loop runs at 20 MHz, which produces a 10 MHz clock for the SPI bus data.
Outgoing 8-bit byte data is delivered by a FIFO from the host, and likewise the returned 8-bit byte data goes back to the host through another FIFO.
I have no problem sending data: all the chip-select and data pulses are generated on my desired clock edges,
and the output is textbook clean as viewed on a scope / logic analyzer.
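As a minimal plain-Python sketch (LabVIEW is graphical, so this is only an analogy, with illustrative names): a 20 MHz SCTL yields a 10 MHz SPI clock because the loop toggles SCLK once per iteration, so one full clock period spans two 50 ns ticks.

```python
def sctl_iterations(n_ticks, sclk=0):
    """Model an SCTL that toggles SCLK once per 50 ns iteration."""
    levels = []
    for _ in range(n_ticks):
        sclk ^= 1          # one toggle per SCTL tick
        levels.append(sclk)
    return levels

print(sctl_iterations(8))  # [1, 0, 1, 0, 1, 0, 1, 0] -- period = 2 ticks = 100 ns
```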

 

When I read data from an I/O pin, however, I see unexplained behaviour: the data lags by two read operations.
Reading the I/O pin at the specified edge of the clock I generate,
I found the data was shifted right by two bits, i.e. delayed, relative to what appears on the scope / logic analyzer.
As a workaround, I perform two extra I/O pin reads in the gap time between data bytes;
the scope confirms there are no clock edges and no valid data on the pin during those two reads.
With that change, the received data matches the transmitted data perfectly.
I can only assume there is some sort of pipeline or inherent delay in I/O read operations (at the higher clock rates).
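The two-read lag can be reproduced with a small model (a hypothetical sketch, not NI code) in which every pin read passes through a chain of registers, each delaying the sampled value by one SCTL tick:

```python
from collections import deque

def sampled_reads(pin_values, sync_stages=2, reset_level=0):
    """Simulate per-tick pin reads through a chain of register stages."""
    pipeline = deque([reset_level] * sync_stages)
    reads = []
    for level in pin_values:
        pipeline.append(level)            # pin value enters the first stage
        reads.append(pipeline.popleft())  # the read sees the oldest stage
    return reads

bits_on_wire = [1, 0, 1, 1, 0, 0, 1, 0]
print(sampled_reads(bits_on_wire))  # [0, 0, 1, 0, 1, 1, 0, 0] -- two-tick lag
```

With `sync_stages=1` the same model reproduces the single-bit shift seen earlier in development.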

 

I suspect that something in the optimization performed during compilation of the SCTL structure is causing this.
Earlier in development I had found the data shifted by only one bit position;
I believe that was at a slower overall clock rate.

 

I also ran the same state-machine logic in a conventional While Loop with an FPGA Wait, producing a much slower system,
and found there was no delay at all.

 

I can find nothing in the configuration of the I/O pins that might affect this (I have turned off arbitration).
Likewise I can find nothing in the documentation that alludes to this behaviour, including the
LabVIEW 8.6.1 FPGA Module Known Issues list (http://digital.ni.com/public.nsf/allkb/F6B5DAFBC1A8A22D8625752F00611AFF).

 

I am preparing to deploy the code with the workaround because it seems reliable.
But I am at a loss to explain (in my documentation of the code) why it is necessary,
or how to fix it if the compiler behaviour changes.
Do you have any suggestions?

Solution
Accepted by Paul_Conaway
I think what you are running into is the number of synchronization registers used with the digital I/O.  If you right-click the I/O item in the project and open its properties page, you should see an option for the number of synchronizing registers for output data and output enable.  These are global settings for that I/O item and affect all I/O Nodes writing to it.  Similarly, if you right-click an I/O Node on the block diagram and select properties, you should see a setting for the number of synchronizing registers for read.  That setting is specific to that instance of the node and can be configured differently for each node on the diagram.

The net effect is that each synchronization register delays the signal by one clock tick.  These registers are inserted to prevent metastability issues and ensure you always have valid signal levels when the I/O is sampled on the clock edge.  This is an issue any time the producer and consumer of the signal are running off different clocks or at different clock rates.

If the external device driving your digital inputs is running synchronous to the clock you're producing, you may be able to eliminate the sync registers.  However, you should do some analysis of the propagation delays of the signal between the two devices and ensure all setup and hold times are still being met before doing so.  In the end, I think the easier and more robust solution will be to compensate for the synchronization delays in your code, as you're already doing.  I hope this helps to clarify things.
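Compensating in code amounts to discarding the first stale reads of each byte frame. A hypothetical host-side sketch (the names and the stub pin are illustrative, not NI API), assuming two synchronization registers on the read path:

```python
def spi_read_byte(read_pin, sync_stages=2):
    """Read 8 SPI bits through synchronized I/O, MSB first.

    Issues sync_stages extra reads so the valid bits drain out of the
    synchronization pipeline; the first reads return stale pipeline data.
    """
    samples = [read_pin() for _ in range(8 + sync_stages)]
    bits = samples[sync_stages:]   # drop the stale leading samples
    value = 0
    for bit in bits:
        value = (value << 1) | bit
    return value

# Stub pin: 0b10110010 as seen through a two-stage synchronizer.
pipeline = [0, 0, 1, 0, 1, 1, 0, 0, 1, 0]
read_pin = iter(pipeline).__next__
print(bin(spi_read_byte(read_pin)))  # 0b10110010
```

The advantage of this approach over removing the registers is that it keeps the metastability protection while remaining correct as long as the register count is known.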