10-03-2021 07:30 AM - edited 10-03-2021 07:31 AM
I am trying to write C code from a LabVIEW VI.
Red colored blocks are FTDI write and read. I have done the write and read operation.
The output of read is as follows
Now I have to perform the type cast and byte swapping functions to decode the data.
But I am unable to understand what the Type Cast block does.
Can anyone help me decode this data?
10-03-2021 09:13 AM - edited 10-06-2021 03:05 PM
What's the datatype of the blue array coming out of the read function? (pictures are a poor way to show LabVIEW code! Who wrote it? looks very clunky!)
Typecast just re-interprets the raw bits as a different datatype. Just display the before and after in binary padded to the number of bits in the datatype to see what's happening.
I am a graphical programmer and cannot help you further. I am sure somebody will help you find the correct way to do it in text code.
10-03-2021 10:47 AM
If you want to know what a function (such as TypeCast) does, you can put it on a Block Diagram, right-click it, and choose "Help". Be sure to read the Detailed Help. Note that TypeCast doesn't change the "raw bits" in memory -- it just "repackages them" as though they are a different datatype. This is a section from the Help:
You can use this function with an array of scalars or an array of clusters of scalars. For example, if you typecast an array of four 16-bit integers to an array of 32-bit integers, the output array contains two elements, each formed from the bits of pairs of elements from the input array. If the input array does not contain enough bytes to form a whole number of output elements, LabVIEW omits the final elements of the input array.
Bob Schor
10-06-2021 06:03 AM - edited 10-06-2021 06:06 AM
In terms of C programming, the Typecast, when run on Little Endian hardware (all Intel CPUs), is really similar to a C pointer cast, but with a caveat: LabVIEW always interprets the data stream as being in Big Endian format and performs the corresponding byte swapping when run on an Intel CPU.
Basically:
uint8_t input_data_stream[100] = {.......};
uint16_t *output_data;

void SwapBytesU16(uint16_t *data, int32_t len)
{
    for (int32_t i = 0; i < len; i++)
        data[i] = (uint16_t)(((data[i] & 0xFF) << 8) | ((data[i] & 0xFF00) >> 8));
}

uint16_t* LabVIEWTypecastToU16(uint8_t *data, int32_t len)
{
    uint16_t *ptr = (uint16_t*)data;
#if LittleEndian
    SwapBytesU16(ptr, len / sizeof(uint16_t));
#endif
    return ptr;
}
Since you are executing this VI on Windows, LabVIEW runs on a Little Endian CPU: the Typecast byte-swaps the data, and the following Byte Swap then reverses that swapping again. It would have been more efficient to use the Unflatten function, which lets you select the Endianness of the byte stream and avoids the double byte swapping altogether. Your LabVIEW code would also not work properly on Big Endian hardware (admittedly hard to get your hands on nowadays), but with the Unflatten function and an explicit Little Endian selection it would work there too.
Basically, for your case, with your C code running on a Little Endian machine, you just want a simple cast (or a reinterpret_cast if you compile as C++):
uint16_t *output_data = (uint16_t*)input_data_stream;