LabWindows/CVI

Are there math lib differences between debug and release versions of an executable?

When I compiled my project in the 'release' configuration, a device reported the error "Set value out of range". I recompiled the project in the 'debug' configuration and it worked again. So I looked at the values that were calculated during the program run, and there were differences. The nominal value is 0.02, but I got:
 
19.999 999 552 965 170 0 E-3   in release and
19.999 999 552 965 160 7 E-3   in debug mode
 
This difference is not big, but where does it come from?
Message 1 of 4
Hello lutzhoerl,

I have a few questions:
1. Which version of CVI are you using?
2. Have you seen this behavior on other CVI versions (if available)?
3. Could you post a small application that shows this behavior?

Bye
Daniel
NIG
Message 2 of 4
Hello Daniel
1. I am using CVI 8.1.0 (271).
2. No, but I did not check other versions.
3. Yes. Here it is:
 
#include <stdio.h>
#include <math.h>
int main(void)
{
   double a, b;

   a = 0.0199999995529651700;      // a value in Watt
   b = 10.0 * log10(a * 1000.0);   // same value in dBm
   printf("%20.18e   -->  %20.18e\n", a, b);
   return 0;
}
On my machine this outputs:
1.999999955296517112e-02   -->  1.301029985956743218e+01
in DEBUG and
 
1.999999955296517112e-02   -->  1.301029985956743040e+01
in RELEASE mode.
 
 
Thank you for any hint on how to avoid this.
Best regards
Lutz
Message 3 of 4
One idea for the cause of the differences:
The double floating-point type uses 64-bit values in memory, but the FPU may internally use 80 bits for its calculations. In debug mode the compiler may have to copy an intermediate result from an FPU register to memory to make that value visible to the debugger, and it has to round it to 64-bit precision to do so.
You would have to analyze the generated assembler code to verify that.
 
Message 4 of 4