08-14-2008 08:58 PM
I think there is a bug in the way that LabWindows/CVI handles floating point
numbers, both single and double precision. Double precision first: the few
lines shown below give a
result of 2.936788244924931E+194, when the
actual decimal conversion is -48.9121589 (see this conversion page: http://babbage.cs.qc.edu/IEEE-754/64bit.html).
I have a binary stream coming from a Vector Network Analyser which I can output
in single-precision binary (32-bit) or double-precision binary (64-bit)
IEEE-754 format. The lines below simplify the problem from a longer array.
void main (void)
{
    char Input[17] = "C04874C1A0000000";  // 16 characters plus the terminating NUL
    double Result;
    Scan (Input, "%s>%f", &Result);
}
Following an example I tried the line below. This gave a result of 2.1738120934444362E-71 when the actual decimal conversion is -48.9121589.
08-15-2008 04:12 AM
I think you may be expecting more of the Scan function than it is capable of. It would be happy with this conversion:
char Input [16] = "-48.9121589";
float Result;
Scan (Input, "%s>%f", &Result);
but your ASCII representation of a binary code corresponding to a floating point number will need more manipulation.
JR
08-15-2008 11:43 AM
Hello,
a string of hexadecimal digits is not a valid textual representation of a number in ANSI C; it is just a string.
If you have binary data coming in from the vector analyzer, then you can convert each 8-byte group into a double number directly. Example below:
unsigned char buffer[]={0x00,0x00,0x00,0xA0,0xC1,0x74,0x48,0xC0}; // Binary representation of double -48.9121589
double *dptr=(double*)buffer; // Direct conversion to double type
The pointer dptr now points to a double value of -48.912158966064453.
If your vector analyzer sends the data in text format, you would need to first convert the data to raw binary and then to double.
Hope this helps.
LDP
08-17-2008 07:23 PM
Thanks ldp, the lines are helpful, but it would appear that the bytes in the IEEE-754 stream normally arrive in the reverse of the order you specified. You said:
"If you have binary data coming in from the vector analyzer, then you can convert each 8-byte group into a double number directly. Example below:
unsigned char buffer[]={0x00,0x00,0x00,0xA0,0xC1,0x74,0x48,0xC0}; // Binary representation of double -48.9121589
double *dptr=(double*)buffer; // Direct conversion to double type "
The double precision data is in C04874C1A0000000 order, see attached double precision data file. If I have to re-order the bytes, it will be more efficient to stick to the ASCII representation which I have working fine.
08-18-2008 12:54 PM
Hello again,
since your vector analyzer sends you the raw data in big-endian order, you can use a simple loop to read it in.
As an example, I just wrote this in the Interactive Window and it works great:
// Buffer with raw binary data for a double in big-endian order
static unsigned char buffer[] = {0xC0, 0x48, 0x74, 0xC1, 0xA0, 0x00, 0x00, 0x00};
static unsigned char *buff = buffer;
// To simulate reading raw data (no trailing semicolon in the macro,
// so it expands cleanly inside an expression)
#define ReadNextByte() (*buff++)
static int i;
static unsigned __int64 number = 0;
static double dbl;
for (i = 0; i < 8; ++i) {
    number <<= 8;
    number |= ReadNextByte();
}
dbl = *(double*)&number;
The result was again -48.912158966064453.
Reading big-endian data is usually easier because you can always shift the result left by 8 bits and OR in the next byte, which is fast since only bitwise operators are used.
Reading little-endian data would be slightly slower, but definitely not as slow as converting ASCII to double.
Hope this helps.
LDP
08-18-2008 10:21 PM
Thanks, I can see that this works and it's great, but I am having some data-type issues in trying to apply it to my array of data, as shown in the attached text file of my previous response. Single precision would still be my preference, and I will attach a file of the binary for that. If I enclose your loop in a for loop that repeats for every amplitude value, e.g.
static char BufferRd[457];
static unsigned char *buff = buffer;
// To simulate reading raw data
#define ReadNextByte() *buff++;
static int i, j;
static unsigned __int64 number = 0;
static double dbl;
double SweepData[113];
for (j = 0; j < 113; j++) {
    *buff = BufferRd[j];
    for (i = 0; i < 8; ++i) {
        number <<= 8;
        number |= ReadNextByte();
    }
    SweepData[j] = *(double*)&number;
}
This has errors in compilation because of my data type issues, but I would greatly appreciate your thoughts, thanks. kazinoz.
08-19-2008 03:29 AM
You are still confusing a binary number with its ASCII representation. ldp's example works with raw, single byte binary values. For each such value, you seem to have 3 coded ASCII bytes in your data stream. So to get to ldp's example single byte value of 0xC0, you need to process the three byte string "C0 " first. Clearly not impossible, but it does require a bit more thought.
JR
08-19-2008 06:56 PM
The code should have these few extra lines to strip off the ASCII at the start: the stream begins with a '#', then a number that identifies how many ASCII characters follow, and then the binary stream.
static char BufferRd[457], BufferB[453];
static unsigned char *buff = buffer;
// To simulate reading raw data
#define ReadNextByte() *buff++;
static int i, j, dummy[6] = {0,0,0,0,0,0};
static unsigned __int64 number = 0;
static double dbl;
double SweepData[113];
Scan (BufferRd, "%s>#%i[w5]%s", dummy, BufferB); // Strips off start of array ('dummy' can be used or discarded as needed)
for (j = 0; j < 113; j++) {
    *buff = BufferB[j];
    for (i = 0; i < 8; ++i) {
        number <<= 8;
        number |= ReadNextByte();
    }
    SweepData[j] = *(double*)&number;
}
The ASCII start to this stream has contributed to my data-type problem.