LabVIEW


high throughput inverse tangent (2 input) inaccurate

Solved!

Hi all,

 

I'm having some accuracy issues with the High Throughput Inverse Tangent. I'm using it in my LabVIEW FPGA code but am troubleshooting in the My Computer environment. Compared to the non-high-throughput inverse tangent, the high throughput one is much less accurate, sometimes off by 2 degrees.

 

I've attached a screenshot of a sample run where I use both inverse tangents to compute the arctangent of 40/40 and convert the result to degrees. The answer I'm looking for is 45 degrees.

 

Have you guys also experienced these inaccuracies with the high throughput inverse tangent? Any solutions? I'll need a solution for my FPGA. I feel like I'm almost better off computing the ratio and then feeding that into a LUT for the arctan.
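For reference, the LUT fallback I have in mind would look something like this rough host-side Python sketch. The 10-bit table, the nearest-entry lookup, and the single-octant restriction are all placeholder choices just to illustrate the idea:

```python
import math

# Hypothetical sizing: a 10-bit-address table of atan(r) for ratios r in [0, 1].
LUT_BITS = 10
LUT_SIZE = 1 << LUT_BITS
LUT = [math.atan(i / (LUT_SIZE - 1)) for i in range(LUT_SIZE)]

def lut_atan_first_octant(y, x):
    """Nearest-entry LUT arctangent for 0 <= y <= x (first octant only).

    A full atan2 replacement would fold the other seven octants onto this
    one before the lookup and un-fold the resulting angle afterwards.
    """
    if x == 0:
        return 0.0
    idx = round((y / x) * (LUT_SIZE - 1))  # quantize the ratio to a table index
    return LUT[idx]

print(math.degrees(lut_atan_first_octant(40, 40)))  # ~45.0
```

The catch on the FPGA is the divide needed to form the ratio, which I assume is exactly what the CORDIC-based function is avoiding.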

 

Thanks

 

--
Jeffrey Lee
Message 1 of 7

Jeffrey,

 

I have not used FPGA but I suspect that the issue is the limited size of the fixed point representation of the data.

 

Lynn

Message 2 of 7
Solution
Accepted by topic author jblee

I agree that the resolution of the input is causing the issue you are seeing. If you change the inputs to 24,16 instead of 16,16, you will get better results. If your data is 16 bits, this is an easy conversion before the arctan.

Stu
Message 3 of 7

Thanks Lynn and Stu,

 

 

I thought the function would automatically use whatever fixed resolution it had internally and wouldn't depend on the input representation. It looks like it adapts to my input. Going from +/-16,16 to +/-24,16 improved things dramatically. I just didn't want to use a larger representation than necessary for the input, since that wastes a little bit of space...

Thanks again!

 

 

--
Jeffrey Lee
Message 4 of 7

Then you should mark Stu's message as the solution to your problem rather than your own thank you message.

 

First you will need to go to the options menu to the upper right of your message and unmark it as the solution.

Message 5 of 7

Yah. I originally clicked the wrong solution. Fixed.


Thanks

 

--
Jeffrey Lee
Message 6 of 7

Hi Jeffrey,

 

The root problem is actually in the default choice for the Internal word length (on the CORDIC Details tab of the configuration page). That value is set to match the input width rather than padding it for increased accuracy. This is a known issue, where the defaults were originally chosen to emphasize resource usage over accuracy.

 

Making the input types wider is just tricking that parameter's default to change from 16 to 26. You can get the same result by leaving the input types at 16 bits and simply changing the internal word length to 26. This may or may not save a few LUTs for this function, but in general is a more efficient practice. Whenever you're adjusting that parameter for accuracy, I would also recommend changing the rounding mode to round-half-up to get another 1/2 bit of accuracy at nominal cost. That doesn't affect the result for your particular test case, but it does have an impact on the worst-case error over all inputs.
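To make the effect of that parameter concrete, here is a rough behavioral model in Python of a vectoring-mode CORDIC arctangent. It is only a sketch, not the actual implementation of the High Throughput Inverse Tangent: the iteration count, the exact datapath widths, and the way truncation and round-half-up are modeled are all assumptions. Even so, it shows an error of almost 2 degrees for the 40/40 case when the internal x/y registers carry no extra bits, with the error essentially disappearing once guard bits are added:

```python
import math

def cordic_atan2(y, x, guard_bits, iters=16, round_half_up=False):
    """Behavioral model of a fixed-point vectoring-mode CORDIC atan2.

    y, x: non-negative integers (first quadrant only, for brevity).
    guard_bits: extra fractional bits carried in the internal x/y registers;
        0 loosely models an internal word length equal to the input width,
        10 loosely models widening it from 16 to 26 bits.
    The angle accumulator is kept in floating point to isolate the effect
    of the x/y datapath width.
    """
    X = x << guard_bits
    Y = y << guard_bits
    z = 0.0
    for i in range(iters):
        # Shift-and-add datapath. Python's >> floors toward minus infinity,
        # which matches truncating a two's-complement value.
        half = (1 << i) >> 1 if round_half_up else 0
        dx = (Y + half) >> i
        dy = (X + half) >> i
        if Y >= 0:
            X, Y, z = X + dx, Y - dy, z + math.atan(2.0 ** -i)
        else:
            X, Y, z = X - dx, Y + dy, z - math.atan(2.0 ** -i)
    return z

for guard in (0, 10):
    deg = math.degrees(cordic_atan2(40, 40, guard))
    print(f"guard_bits={guard:2d}  atan2(40, 40) ~ {deg:.3f} deg")
```

In this model the 40/40 case comes out a couple of degrees low with no guard bits and is back to 45 degrees (within the iteration limit) with 10 extra internal bits, which mirrors what Jeffrey saw when he widened the input type.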

Message 7 of 7