09-23-2011 05:05 AM
(My first post here)
I'm trying to optimize my FPGA code for size and found tips on the web.
I try to implement the following tip mentioned on http://zone.ni.com/reference/en-XX/help/371599B-01/lvfpgaconcepts/using_small_data_opt/
Use the smallest data type possible to decrease the size and increase the speed of an FPGA VI. For example, the Index Array function uses a 32-bit integer as the default for the first index input. If the array you wire to this input contains less than 256 elements, you can change the representation of the input from a 32-bit integer to an 8-bit integer.
I had already used the smallest data type wherever possible, but the index inputs of some arrays are still 32-bit and could, according to this text, be optimized, because they hold far fewer than 256 elements.
But how can I do this?
I cannot find a menu option to change the representation of the index input of the Index Array function.
Greetings, Johan
09-23-2011 05:41 AM
Hi Johan,
I think the document is suggesting that you can right-click a numeric constant that is wired into the index input of the Index Array function and select Representation > U8.
Regards,
Steve
09-23-2011 06:17 AM
Thank you Stephen.
I have tried that, but then I get a coercion dot on the input of the Index Array function.
I was told that such a coercion dot will cost extra FPGA resources, so I wonder whether that solution actually reduces the size of my code at all.
Greetings, Johan.
09-23-2011 08:27 AM
You can use the "To Unsigned Short Integer" function if you want an explicit conversion instead of an implicit coercion. As far as FPGA resources go, I'd be more concerned about the arrays themselves than a few coercion dots.
@Johan Mulderij wrote:Thank you Stephen.
I have tried that, but then I get a coercion dot on the input of the Index Array function.
I was told that such a coercion dot will cost extra FPGA resources, so I wonder whether that solution actually reduces the size of my code at all.
Greetings, Johan.