LabVIEW


Unexpected math outcomes with expression node

Thanks for the explanation, Hunter. The most troubling part then is:


Hunter wrote: 

 

But when you do the step all at once, it maintains the double precision accuracy internally, and gives the mathematically correct answer, 2.

 

 

 

Why would it do the math at double precision when I used single precision throughout? Is this standard? Would it do double math at quad precision?
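For concreteness, the effect can be reproduced outside LabVIEW. The exact formula from the original post isn't shown in this excerpt, so `ceil(-log10(x))` with `x = 0.01` is an illustrative stand-in; `to_sgl` uses `struct` to emulate a coercion to SGL:

```python
import math
import struct

def to_sgl(x: float) -> float:
    # round a double to the nearest IEEE single, like a coercion to SGL
    return struct.unpack('f', struct.pack('f', x))[0]

x = 0.01
in_double = math.ceil(-math.log10(x))          # double precision all the way through
in_single = math.ceil(-math.log10(to_sgl(x)))  # nearest SGL to 0.01 sits just below it
```

With the DBL input the result is the mathematically expected 2; with the coerced SGL input the log comes out a hair above 2 and the ceiling pushes it to 3, which matches the surprise being discussed.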

 

CLED (2016)
Message 11 of 19

That is a nice formula.  Try this to avoid your issue.

 

a: Convert the SGL to a decimal string (now you and the processor are looking at the same stuff)

b: Split the string at "."

c: Search the string after the split for the first non-zero character (regex "[1-9]"); this function returns the index where the match was found

d: Increment the index and you have your answer!
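A minimal Python sketch of steps a–d; the function name and the 15-digit format width are assumptions, not anything from the original VI:

```python
import re

def first_sig_place(x: float) -> int:
    """Decimal place of the first significant digit (hypothetical helper)."""
    s = f"{x:.15f}"                   # a: convert to a decimal string
    frac = s.split(".")[1]            # b: split the string at "."
    m = re.search(r"[1-9]", frac)     # c: first non-zero character after the split
    return m.start() + 1 if m else 0  # d: increment the index
```

Because the string already reflects the rounded value the processor is holding, there is no second rounding step to go wrong.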

 

 

 


"Should be" isn't "Is" -Jay
Message 12 of 19
Sounds a bit too CPU-intensive, no? Definitely easier to look at, though.
CLED (2016)
Message 13 of 19
It avoids rounding errors because it depends on the exact behavior that confuses us poor humans. The CPU is on salary. The code developer could spend weeks validating the conversion exception handling and still miss one.

"Should be" isn't "Is" -Jay
Message 14 of 19

The assumption is that you are using a data type for size reasons or because it needs to be compatible with something, not because you want it to be inaccurate. So it gets the most accurate value and approximates it when it converts back to SGL.

 

-Hunter

Message 15 of 19
Another possibility: one could also create an array like (0, 10^-n, 10^(-n+1), ..., 1), then use Threshold 1D Array and compute N - floor(index).
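A rough Python sketch of that lookup, using `bisect` to stand in for the integer part of Threshold 1D Array's interpolated index; the choice N = n + 1 is an assumption that makes the arithmetic come out right for the array layout below:

```python
from bisect import bisect_right

def first_sig_place_lookup(x: float, n: int = 12) -> int:
    # thresholds (0, 10^-n, 10^(-n+1), ..., 1): n + 2 elements total
    arr = [0.0] + [10.0 ** k for k in range(-n, 1)]
    idx = bisect_right(arr, x) - 1   # floor of the interpolated threshold index
    return (n + 1) - idx             # N - floor(index), with N = n + 1 (assumed)
```

For x = 0.01 this lands on the 10^-2 threshold and returns 2, agreeing with the string-based approach above.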
CLED (2016)
Message 16 of 19

Hunter wrote:

The assumption is that you are using a data type for size reasons or because it needs to be compatible with something, not because you want it to be inaccurate. So it gets the most accurate value and approximates it when it converts back to SGL.

 

-Hunter


Personally, I'd rather have the option to do the math in 32-bit precision than not have the option. You can always convert to a double beforehand, pass it into the expression node, and convert back to a single afterward. Hypothetically, maybe the user wants the smaller memory footprint, or maybe the user is trying to make things more CPU efficient.

 

CLED (2016)
Message 17 of 19

It is a misconception (for 32- or 64-bit processors) that doing the calculation in single precision would be more efficient for the CPU. On a 32-bit processor, no matter how small your data type is, all mathematical operations are executed on 32-bit numbers (32-bit ALU); that is how the processor is physically built. The internal pipelines are all 32 bits wide. It actually has to add a step to truncate down to 24 bits when you cram the result into a single at the end. If you were using 16-bit numbers, some processors could optimize by doing two 16-bit operations in one 32-bit cycle, but this won't work for 24 bits. Using single precision won't improve your processor time; however, it can improve memory usage and data size. If you wanted to do true 16-bit math, it would add an extra step between each step.

 

-Hunter
Message 18 of 19
Good information, thanks.
CLED (2016)
Message 19 of 19