12-03-2013 11:50 AM
Hi all,
I'm having some accuracy issues with the high throughput inverse tangent. I'm using it in my LabVIEW FPGA code but am troubleshooting in the My Computer environment. Basically, compared to the non-high-throughput inverse tangent, the high throughput one is much less accurate--sometimes off by 2 degrees.
I've attached a screenshot of a sample run where I use both inverse tangents to compute the inverse tangent of 40/40 and convert to degrees. The answer I seek is 45 degrees.
Have you guys also experienced these inaccuracies with the high throughput inverse tangent? Any solutions? I'll need a solution for my FPGA. I feel like I'm almost better off computing the ratio and then feeding that into a LUT for the arctan.
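In case it's unclear what I mean, here's a quick mock-up of the ratio + LUT idea in plain Python (just a sketch of the concept, not my FPGA code--the atan2_lut_deg name and the 10-bit table size are arbitrary choices):

import math

LUT_BITS = 10  # table resolution; 2**10 entries covering ratios in [0, 1]
LUT = [math.degrees(math.atan(i / (2 ** LUT_BITS - 1))) for i in range(2 ** LUT_BITS)]

def atan2_lut_deg(y, x):
    # Look up the first-octant angle, then fold the other octants onto it.
    ax, ay = abs(x), abs(y)
    small, big = (ay, ax) if ay <= ax else (ax, ay)
    idx = round(small / big * (2 ** LUT_BITS - 1)) if big else 0
    a = LUT[idx]                 # angle in [0, 45] degrees
    if ay > ax:
        a = 90.0 - a             # fold across the 45-degree line
    if x < 0:
        a = 180.0 - a            # fold across the y-axis
    return -a if y < 0 else a

print(atan2_lut_deg(40, 40))     # ~45.0

Only the 0-45 degree octant needs to be stored; symmetry covers the rest, so the table stays small.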
Thanks
12-03-2013 06:56 PM
Jeffrey,
I have not used FPGA but I suspect that the issue is the limited size of the fixed-point representation of the data.
Lynn
12-03-2013 10:05 PM
I agree that the resolution of the input is causing the issue you are seeing. If you change the inputs to 24,16 instead of 16,16, you will get better results. If your data is 16 bits, this is an easy conversion before the arctan.
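To see why the word length matters, here is a rough Python model of a fixed-point CORDIC arctan. This is only a sketch of the technique, not NI's actual implementation--the quantize/cordic_atan_deg names, the truncate-by-default rounding, and the normalization step are all my assumptions--so it won't reproduce your exact 2-degree error, but the accuracy tracks the internal bit count the same way:

import math

def quantize(v, frac_bits, mode="truncate"):
    # Clamp a value to frac_bits fractional bits, like a fixed-point register.
    # "truncate" drops the low bits; "round" is round-half-up.
    s = 2.0 ** frac_bits
    if mode == "round":
        return math.floor(v * s + 0.5) / s
    return math.floor(v * s) / s

def cordic_atan_deg(y, x, word_bits, mode="truncate"):
    # Vectoring-mode CORDIC arctan (assumes x > 0): rotate the vector (x, y)
    # toward the x-axis and accumulate the applied angle in z. Every
    # intermediate is quantized to word_bits fractional bits, standing in
    # for the internal word length of the hardware datapath.
    m = max(abs(x), abs(y))
    x, y = x / m, y / m                  # normalize so intermediates stay bounded
    z = 0.0
    for i in range(word_bits):           # roughly one bit of accuracy per iteration
        d = 1.0 if y >= 0 else -1.0
        x, y = (quantize(x + d * y * 2.0 ** -i, word_bits, mode),
                quantize(y - d * x * 2.0 ** -i, word_bits, mode))
        z = quantize(z + d * math.atan(2.0 ** -i), word_bits, mode)
    return math.degrees(z)

for bits in (16, 26):
    print(bits, "bits ->", cordic_atan_deg(40.0, 40.0, bits))

The error shrinks as the internal word grows, which is exactly the effect of widening the inputs.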
12-04-2013 09:15 AM
Thanks Lynn and Stu,
I thought the function would use a fixed internal resolution and wouldn't be dependent on the input representation, but it looks like it adapts to my input. Going from +/-16,16 to +/-24,16 improved things dramatically. I just didn't want to use a larger representation than necessary for the input, since it wastes a little bit of space...
Thanks again!
12-04-2013 09:55 AM
Then you should mark Stu's message as the solution to your problem rather than your own thank you message.
First you will need to go to the options menu to the upper right of your message and unmark it as the solution.
12-04-2013 10:37 AM
Yah. I originally clicked the wrong solution. Fixed.
Thanks
12-09-2013 01:53 PM
Hi Jeffrey,
The root problem is actually in the default choice for the Internal word length (on the CORDIC Details tab of the configuration page). That value is set to match the input width rather than padding it for increased accuracy. This is a known issue, where the defaults were originally chosen to emphasize resource usage over accuracy.
Making the input types wider is just tricking that parameter's default to change from 16 to 26. You can get the same result by leaving the input types at 16 bits and simply changing the internal word length to 26. This may or may not save a few LUTs for this function, but in general is a more efficient practice. Whenever you're adjusting that parameter for accuracy, I would also recommend changing the rounding mode to round-half-up to get another 1/2 bit of accuracy at nominal cost. That doesn't affect the result for your particular test case, but it does have an impact on the worst-case error over all inputs.
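For anyone who wants to play with these two knobs outside LabVIEW, the toy CORDIC model posted earlier in this thread can be reused (again, only a sketch, not the shipping implementation--run it after that snippet so quantize and cordic_atan_deg are defined):

def worst_error_deg(bits, mode, trials=500):
    # Sweep angles across the first quadrant and track the worst arctan error.
    worst = 0.0
    for k in range(trials):
        t = (k + 0.5) / trials * 89.0    # degrees, avoiding 0 and 90 exactly
        y, x = math.sin(math.radians(t)), math.cos(math.radians(t))
        worst = max(worst, abs(cordic_atan_deg(y, x, bits, mode) - t))
    return worst

for bits, mode in ((16, "truncate"), (26, "truncate"), (26, "round")):
    print(bits, mode, "-> worst error: %.6f deg" % worst_error_deg(bits, mode))

A single point like 40/40 can land close to 45 degrees either way; it's the worst case over the sweep that shows the extra half bit from round-half-up.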