Convert binary bytes to Labview Extended Precision type



ecc83 wrote:
As for the endian format, I have set the read binary when reading the 10 bytes
for both big and little endian, and there was no difference....go figure.

Of course there will be no difference if you read bytes (e.g. as a string or a U8 array). Endianness is only defined for multibyte representations.
 
In this case, you could just invert the padded U8 array before casting to EXT.
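A textual sketch of that trick, in Python rather than G since LabVIEW diagrams can't be pasted here (illustration only, not a VI from this thread):

```python
import struct

# 2200.0 written as a little-endian double, as a Delphi program might store it
raw = struct.pack("<d", 2200.0)

# Reading those same bytes big-endian gives a meaningless value...
wrong = struct.unpack(">d", raw)[0]

# ...but reversing the byte array first recovers the number -- the
# equivalent of Reverse 1D Array on the U8 array before the type cast.
right = struct.unpack(">d", raw[::-1])[0]
```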
Message 11 of 22
In LabVIEW, the array you posted will never come out as 2200, because the exponent field is 0. In IEEE 754 the exponent is biased; for extended precision the bias is 2^14 - 1 = 16383, so an exponent field of 0 denotes a denormal with an effective exponent of 1 - 16383 = -16382. So your value will be very close to 0.

Have you checked the wiki page on IEEE floating point?
But if you check the internet (and the links provided by the wiki), you will see the extended data type isn't clearly defined in IEEE 754, and the bias value isn't stated in the standard.
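The biased-exponent mechanics can be checked in a few lines of Python (chosen only because LabVIEW diagrams can't be pasted as text; the function below is an illustration, not code from this thread), decoding a double field by field with the bias applied explicitly:

```python
import struct

def decode_double(b):
    """Decode 8 big-endian bytes as an IEEE 754 double, field by field."""
    bits = int.from_bytes(b, "big")
    sign = -1.0 if bits >> 63 else 1.0
    exponent = (bits >> 52) & 0x7FF          # 11-bit exponent, bias 1023
    fraction = bits & ((1 << 52) - 1)        # 52-bit fraction
    # (Inf/NaN, exponent 0x7FF, omitted for brevity)
    if exponent == 0:                        # exponent field 0 => denormal, no hidden 1
        return sign * fraction * 2.0 ** (1 - 1023 - 52)
    return sign * (fraction + (1 << 52)) * 2.0 ** (exponent - 1023 - 52)

# Cross-check against the library encoding of 2200.0
assert decode_double(struct.pack(">d", 2200.0)) == 2200.0
```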

Ton
Free Code Capture Tool! Version 2.1.3 with comments, web-upload, back-save and snippets!
Nederlandse LabVIEW user groep www.lvug.nl
My LabVIEW Ideas

LabVIEW, programming like it should be!
Message 12 of 22
Hi,

Please check this:

http://en.wikipedia.org/wiki/Endianness

According to this, endianness refers to byte-level ordering within a single value, not to arrays of bytes. I have written and implemented routines in C using this same approach.
LabVIEW's "Read Binary" VI applies the chosen byte order (big or little endian) to each value it reads, whether a single byte or as many bytes as you wire in.

Manipulating the array's byte order...does nothing useful.

Gary.
Message 13 of 22
Hi TonP:


Yes, you are quite right with your assessment!

At this point I cannot be certain what is happening, because the data structure I am reading from was written using a structure defined in Delphi. The total structure size is 128 bytes, and the first 2 fields are a double and a word (12 bytes). I wire a cluster with those 2 types into the Read Binary VI and they come out as expected. Next, I read in an array of 10 bytes, and what I pasted in a previous posting is typical when the actual value is about 2200.00.
I know the data reading is correct because the rest of the structure is fixed field sizes known to LabVIEW (doubles/singles/bytes) and all values are coming out correctly. (File size - header size) / 128 comes out to an even integer, so the block sizes are correct. I guess what I am trying to point out is that the 10 bytes are correct for that extended field! Also, Delphi's documentation states that 10 bytes is the size of an extended precision value.
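For what it's worth, the layout described above can be sketched with Python's struct module; the field names, the assumed 2 bytes of alignment padding, and the offset of the Extended field are guesses for illustration, not facts taken from the actual file:

```python
import struct

RECORD_SIZE = 128

def parse_record(record: bytes):
    """Pull the first fields out of one 128-byte Delphi record."""
    assert len(record) == RECORD_SIZE
    # A little-endian double and a word; the poster reports these two
    # fields occupy 12 bytes, which suggests 2 bytes of padding follow.
    dbl, word = struct.unpack_from("<dH", record, 0)
    ext_bytes = record[12:22]   # the raw 10-byte Extended field (offset assumed)
    return dbl, word, ext_bytes

# A stand-in record: 2200.0, the word 7, then zero padding
demo = struct.pack("<dH", 2200.0, 7) + bytes(RECORD_SIZE - 10)
```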

I have made a test where I created an array of 4 bytes, wired it to a typecast to single precision, and the value comes out to a specific value.

I also created an array of 8 bytes and did the same for a double, and the value comes out of the typecast correctly. The actual hex values for the input came from a couple of different websites that have value-to-hex byte converters for single and double precision. I guess I need to study these to understand what is happening and why the 2 arrays have very different hex values wired into the typecast yet result in the same value (2028.1234).
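The "very different hex values" are expected: single and double precision use different exponent biases and fraction widths, so the same number encodes to different bytes. A quick Python illustration of this (mirroring the web converters, not the original VI):

```python
import struct

value = 2028.1234
single = struct.pack(">f", value)   # 4 bytes: 1 sign, 8 exponent (bias 127), 23 fraction
double = struct.pack(">d", value)   # 8 bytes: 1 sign, 11 exponent (bias 1023), 52 fraction

# Different hex, same value (to single precision's ~7 significant digits):
assert single.hex() != double.hex()[:8]
assert abs(struct.unpack(">f", single)[0] - value) < 1e-3
assert struct.unpack(">d", double)[0] == value
```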

I guess the real question is understanding how the compiler reads in and manipulates those bytes, for example in the Delphi program. This could only be understood and seen at the assembler level, which would provide the insight necessary to solve the mystery.


Thanks for your comments and time,
Gary.







Message 14 of 22
Hi all...my silly mistake:

I stated earlier that Delphi used a "standard IEEE 10-byte storage"; I was quite incorrect on that. Delphi is on the freak-show list: the 10-byte number is Delphi's own implementation, and it appears they should have used the extra 6 bytes to land on an even byte boundary. Some sites on Delphi even state "...be careful when using this format as it is not compatible across other platforms and compilers..."

I eat the humblest of pie, and NI is absolutely right to use 16 bytes for extended precision. I will curl up under the table now, in the fetal position... :(

Thanks,
Gary.
Message 15 of 22

Hi Gary,

I haven't given up yet - have you?  I came across the following snippet.  It's very interesting because the exponent appears last in the 10-byte sequence, treated as 5 Word (2-byte) pairs.  Another interesting thing is that the bit pattern of LabVIEW's mantissa for 2200, with the "hidden 1" added back, is:

100010011000 (Exp = 11),

and the most significant bits, starting with first non-zero, when representing your data:

100010001110

Interesting, since we're not expecting exactly 2200...

Still, 184 and 30 are hard to interpret as an exponent, even after subtracting an extended-precision "bias" of 16383. With a 15-bit exponent we really want to see a 0x40 (or 0xC0 when the sign bit is set) in one of the exponent bytes, else the exponent starts to look crazy, so the hunt continues...
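To make the hunt concrete, here is a sketch in Python (my own construction, assuming the standard x87 layout: bias 16383, explicit integer bit, little-endian byte order) of the 10 bytes 2200 should occupy, i.e., the pattern to search the records for:

```python
import math

value = 2200.0
mantissa, exp = math.frexp(value)        # value = mantissa * 2**exp, mantissa in [0.5, 1)
biased = 16383 + exp - 1                 # x87 stores 1.xxx..., so the exponent is exp - 1
significand = int(mantissa * (1 << 64))  # 64-bit significand, explicit top bit set
raw = significand.to_bytes(8, "little") + biased.to_bytes(2, "little")

# The exponent word for 2200 is 0x400A -- a 0x40 should appear in the last byte.
```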

I came across a couple of links (like this one) that refer to a "double-extended" format using a minimum of 79 bits, which may yet describe your data.

Cheers.

 

"Inside every large program is a small program struggling to get out." (attributed to Tony Hoare)
Message 16 of 22

Here's a VI that produces 2190.70 from your data by scaling each byte according to its position in the input array.  In other words, it doesn't treat the array as representing an "IEEE" float - so this VI is just a curiosity.

I don't think Delphi is "freak"ish because it uses a 10-byte format - 10 bytes seems to be the standard for "Extended" precision, though "Extended" precision wasn't defined well in IEEE-related docs I managed to find.  "The Intel Microprocessors" Third Edition, by Barry B Brey (copyright 1994) describes the 10-byte format as a "temporary form" used by processors to help retain precision while doing single and double-precision math.  This "Extended" format uses an explicit "1" bit in the MSBit of the mantissa and many descriptions of "Extended Precision" found on the web describe it, but LabVIEW is different...

LabVIEW employs the "hidden 1" format (as is proper for SGLs and DBLs). It makes sense, because the "temporary" Extended of old - with a fixed '1' in the mantissa - wasn't designed to be used by the programmer, but LabVIEW's Extended Precision is.  Now, I wonder if LabVIEW employs its own "Extended Extended" precision, behind the scenes, to maintain precision during Extended computations - could be that's what the extra 6 bytes are for...

If your data really represents the bits of a float - complete with sign, exponent, and mantissa sections - then it's probably possible to massage the bits and turn them into a LabVIEW Extended-precision float.  But it's been difficult to find the exponent bits in the data presented so far!  Can you share a few more records?
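If the data does turn out to be plain x87/Delphi Extended, the massaging is mechanical. Here is a hypothetical Python decoder for the 10-byte format (assuming bias 16383, explicit integer bit, little-endian byte order), which could be ported to LabVIEW primitives:

```python
def decode_x87_extended(b: bytes) -> float:
    """Decode a 10-byte little-endian x87 80-bit extended value.

    Layout: 64-bit significand with an explicit integer bit, then a
    16-bit word holding the sign bit and a 15-bit exponent (bias 16383).
    Result precision is limited to a Python double (53-bit mantissa).
    """
    if len(b) != 10:
        raise ValueError("expected exactly 10 bytes")
    significand = int.from_bytes(b[:8], "little")
    sign_exp = int.from_bytes(b[8:], "little")
    sign = -1.0 if sign_exp & 0x8000 else 1.0
    exponent = sign_exp & 0x7FFF
    if exponent == 0x7FFF:                        # infinity or NaN
        frac = significand & ((1 << 63) - 1)
        return sign * float("inf") if frac == 0 else float("nan")
    if exponent == 0:                             # denormal (underflows to 0.0 here)
        return sign * significand * 2.0 ** (1 - 16383 - 63)
    return sign * significand * 2.0 ** (exponent - 16383 - 63)

# 2200.0 in this format: significand 0x8980000000000000, exponent word 0x400A
assert decode_x87_extended(bytes.fromhex("00000000000080890a40")) == 2200.0
```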

Cheers!

"Inside every large program is a small program struggling to get out." (attributed to Tony Hoare)
Message 17 of 22


tbd wrote:

Now, I wonder if LabVIEW employs it's own "Extended Extended" precision, behind the scenes, to maintain precision during Extended computations - could be that's what the extra 6 bytes are for...


When LV supported Solaris, it needed a 128-bit representation because that was what the platform supported, and it probably just stayed around for compatibility (and byte alignment?). I'm assuming that LV simply uses the processor's built-in extended mode, and any data LV has above its level of resolution is ignored. You might wish to read this.

___________________
Try to take over the world!
Message 18 of 22

Perhaps you could learn something too, tst.

In the following PostScript file, W. Kahan describes how IEEE-754 extendeds don't employ the hidden 1, for historical reasons. LabVIEW does employ the hidden 1, so if there's such a thing as IEEE-754 compliant, LabVIEW's EXTs aren't, exactly.

www.cs.berkeley.edu/~wkahan/ieee754status/ieee754.ps

The following PDF from Intel describes the three FP types of its FPU - the "extended real" type uses an explicit "integer" bit in the MSBit of the mantissa, unlike LabVIEW.
http://developer.intel.com/design/pentium/manuals/24319001.pdf

I don't know what format Brian meant when he said "LabVIEW uses an extended precision" FP and uses "an extended format internally".  I don't think he said LabVIEW uses precisely the same format as the FPU - just that NI uses the FPU in its extended mode when processing LabVIEW's EXTs.

Regardless of what Brian said or meant, LabVIEW's EXT doesn't comply with the description of IEEE-754 "Extended precision" or Intel's FPU "Extended Real".

(... perhaps Delphi's 10-byte extended did. ;) )

"Inside every large program is a small program struggling to get out." (attributed to Tony Hoare)
Message 19 of 22


tbd wrote:

Perhaps you could learn something too, tst.


I definitely could, but I'm probably not going to, because it's right at about this point that I enter "boring class mode" (glazed-over eyes, music running through my head, etc.). You've pretty much got everything I know about the subject in my last post.

___________________
Try to take over the world!
Message 20 of 22