LabWindows/CVI


Floating point number handling bug

I think there is a bug in the way that LabWindows/CVI handles floating point numbers, both single and double precision. I have a binary stream coming from a Vector Network Analyser which I can output in Single Precision Binary (32-bit) or Double Precision Binary (64-bit), IEEE-754 in both cases. The lines below simplify the problem from a longer array. Taking double precision first: the few lines shown below give a result of 2.936788244924931E+194, when the actual decimal conversion is -48.9121589 (see this conversion page: http://babbage.cs.qc.edu/IEEE-754/64bit.html).

void main (void)
{
    char    Input[] = "C04874C1A0000000";   /* let the compiler size the array so the '\0' fits */
    double  Result;

    Scan (Input, "%s>%f", &Result);
}

Following an example, I tried the line below. This gave a result of 2.1738120934444362E-71, when the actual decimal conversion is -48.9121589.

 

void main (void)
{
    char    Input[] = "C04874C1A0000000";   /* let the compiler size the array so the '\0' fits */
    double  Result;

    Scan (Input, "%1f[z]>%f", &Result);
}
 
I then tried single precision, which is actually my desired format. This gives a result of 1.8364521E-39, when the actual decimal conversion is -48.9121589 (see the conversion page linked above).
 
void main (void)
{
    char    Input[] = "C243A60D";   /* let the compiler size the array so the '\0' fits */
    float   Result;

    Scan (Input, "%s>%f[b4]", &Result);
}
 
If  the error of my ways can be explained, I would be grateful. 
Message 1 of 8

I think you may be expecting more of the Scan function than it is capable of. It would be happy with this conversion:

 

char Input [16] = "-48.9121589";
float Result;

Scan (Input, "%s>%f", &Result);

 

but your ASCII representation of a binary code corresponding to a floating point number will need more manipulation.

 

JR

 

Message 2 of 8

Hello,

 

a string of hexadecimal digits is not interpreted as a number in ANSI C. It's a mere string.

 

If you have binary data coming in from the vector analyzer, then you can convert each 8-byte group into a double number directly. Example below:

unsigned char buffer[]={0x00,0x00,0x00,0xA0,0xC1,0x74,0x48,0xC0};    // Binary representation of double -48.9121589
double *dptr=(double*)buffer;                                        // Direct conversion to double type 

 

The pointer dptr now points to a double value of -48.912158966064453.

 

If your vector analyzer sends the data in text format, you would need to first convert the data to raw binary and then to double.

 

Hope this helps.

LDP 

Message 3 of 8

Thanks ldp, the lines are helpful, but it would appear that the IEEE-754 bytes normally arrive in the reverse of the order you specified. You said:

 

"If you have binary data coming in from the vector analyzer, then you can convert each 8-byte group into a double number directly. Example below:

unsigned char buffer[]={0x00,0x00,0x00,0xA0,0xC1,0x74,0x48,0xC0};    // Binary representation of double -48.9121589
double *dptr=(double*)buffer;                                        // Direct conversion to double type "

 

The double precision data arrives in C04874C1A0000000 order (see the attached double precision data file). If I have to re-order the bytes, it will be more efficient to stick to the ASCII representation, which I have working fine.

 

Message 4 of 8

Hello again,

 

since your vector analyzer sends you the raw data in big-endian order, you can use a simple loop to read it in.

As an example, I just wrote this in the Interactive Window and it works great:

 

// Buffer with raw binary data for a double in big-endian order
static unsigned char    buffer[]={0xC0,0x48,0x74,0xC1,0xA0,0x00,0x00,0x00};
static unsigned char    *buff=buffer;

// To simulate reading raw data
#define ReadNextByte()    (*buff++)


static int                i;
static unsigned __int64    number=0;
static double            dbl;

for(i=0;i<8;++i) {
    number <<= 8;
    number |= ReadNextByte();
}
dbl = *(double*)&number;

 

The result was again -48.912158966064453.

 

Reading big-endian data is usually easier because you can always shift the accumulated result left by 8 bits and OR in the next byte; since only bitwise operators are involved, it is also fast.

Reading in little-endian order data would be slightly slower but definitely not as slow as converting ASCII to double.

 

Hope this helps.

LDP

Message 5 of 8

Thanks, I can see that this works and it's great, but I am having some data-type issues in trying to apply it to my array of data, as shown in the text file attached to my previous response. Single precision would still be my preference and I will attach a file of its binary. If I enclose your loop in a for loop that repeats for every amplitude value, e.g.

 

static char BufferRd[457];
static unsigned char    *buff=buffer;
// To simulate reading raw data
#define ReadNextByte()    *buff++;


static int                i, j;
static unsigned __int64    number=0;
static double            dbl;
double   SweepData[113];

 

for(j=0;j<113;j++) {

        *buff = BufferRd[j];

    for(i=0;i<8;++i) {
        number <<= 8;
        number |= ReadNextByte();
    }

    SweepData[j] = *(double*)&number;

}

This has compilation errors because of my data-type issues, but I would greatly appreciate your thoughts, thanks. kazinoz.

Message 6 of 8

You are still confusing a binary number with its ASCII representation. ldp's example works with raw, single-byte binary values. For each such value, you seem to have three coded ASCII bytes in your data stream. So to get to ldp's example single-byte value of 0xC0, you first need to process the three-byte string "C0 ". Clearly not impossible, but it does require a bit more thought.

 

JR

Message 7 of 8

The code should have these few extra lines to strip off the ASCII at the start: the stream begins with a '#', then a number that identifies how many ASCII characters follow, and then the binary stream.

static char BufferRd[457], BufferB[453];
static unsigned char    *buff=buffer;
// To simulate reading raw data
#define ReadNextByte()    *buff++;


static int    i, j, dummy[6] = {0,0,0,0,0,0};
static unsigned __int64    number=0;
static double            dbl;
double   SweepData[113];

 

Scan (BufferRd, "%s>#%i[w5]%s", dummy, BufferB);  // Strips off start of array ('dummy' can be used or discarded as needed)
for(j=0;j<113;j++) {

        *buff = BufferB[j];

    for(i=0;i<8;++i) {
        number <<= 8;
        number |= ReadNextByte();
    }

    SweepData[j] = *(double*)&number;

}

The ASCII start to this stream has contributed to my data-type problem.

 

Message 8 of 8