09-04-2017 04:57 AM
I was investigating higher-than-expected resource usage in some FPGA code I'm working on, and I was of the opinion that it was too register-heavy. I had made some optimisations, and I'm usually relatively good at estimating how much resource a given change saves. The caveat is that my main source of information is a very old document for LV 8.6 and Virtex-II targets. I can't find a link to it anywhere any more, but I have a local copy I've been using as a reference for quite some time.
I'm using LV 2015 on a Virtex 5.
I went looking for the code that was costing me approximately 1k registers more than expected and came up blank, until I realised I had added a couple of 64-bit DMA channels. Some quick tests showed that a single 64-bit Host-to-Target DMA channel costs 1.1k registers, 1.1k LUTs and 1 BRAM. That seems excessive. For reference, a 16-bit DMA costs 830 registers and 780 LUTs (and 1 BRAM, of course) according to my tests, whereas the old Virtex-II reference document says an I16 DMA costs 412 registers and 595 LUTs (and 1 BRAM). The cost seems to have grown significantly over the years.
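To put the growth in perspective, here's a quick back-of-the-envelope tally of the figures above (the per-channel numbers are from my tests on LV 2015 / Virtex-5 and the old LV 8.6 / Virtex-II document; treat this as a rough comparison, not official data):

```python
# Per-channel DMA FIFO costs: width/source -> (registers, LUTs); each also uses 1 BRAM.
costs = {
    "U64 (measured, LV 2015 / Virtex-5)":    (1100, 1100),
    "I16 (measured, LV 2015 / Virtex-5)":    (830,  780),
    "I16 (documented, LV 8.6 / Virtex-II)":  (412,  595),
}

for name, (regs, luts) in costs.items():
    print(f"{name}: {regs} registers, {luts} LUTs (+1 BRAM)")

# Growth of the I16 case between the two references:
regs_growth = 830 / 412   # ~2.0x more registers
luts_growth = 780 / 595   # ~1.3x more LUTs
print(f"I16 growth: {regs_growth:.1f}x registers, {luts_growth:.1f}x LUTs")
```

So even for the same I16 channel, the register cost has roughly doubled between the two tool/target generations, which is why a couple of 64-bit channels eat fabric so quickly.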
So before I start crying into my coffee over lost FPGA fabric: is there an FPGA wizard at NI who can explain why DMAs have become more expensive over time? Is it a LabVIEW thing or an FPGA target thing? And are there any newer guidelines on roughly what each operation costs for more current software and hardware?