04-13-2007 10:14 PM
04-14-2007 03:52 AM
This interprets the stream as an array of 32-bit integers and then byte- and word-swaps each 32-bit element. After that, the 32-bit integer is converted to a floating-point value and scaled by dividing by 65536.
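To make that concrete, here is a minimal C sketch of that approach, assuming a little-endian host such as x86; the helper names swap_bytes_and_words() and convert_buffer() are made up for illustration and are not from the original CVI code:

/* Minimal sketch, assuming a little-endian host (e.g. x86):
   read each 32-bit element straight from the byte stream, reverse its
   byte order by swapping bytes and words, then convert to floating
   point and divide by 65536. Helper names are illustrative only. */
#include <stdint.h>
#include <string.h>

static uint32_t swap_bytes_and_words(uint32_t v)
{
    /* swap the bytes inside each 16-bit word ... */
    v = ((v & 0x00ff00ffu) << 8) | ((v & 0xff00ff00u) >> 8);
    /* ... then swap the two 16-bit words */
    return (v << 16) | (v >> 16);
}

static void convert_buffer(const unsigned char *buf, double *waveform, int count)
{
    int x;
    for (x = 0; x < count; x++) {
        uint32_t raw;
        int32_t  next;
        memcpy(&raw, buf + 4 * x, 4);             /* native little-endian read   */
        next = (int32_t)swap_bytes_and_words(raw);
        waveform[x] = (double)next / 65536.0;     /* same scaling as the CVI code */
    }
}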
@DTR wrote:
I am reading a binary string from a UDP port and am trying to convert it to numeric data. A colleague is doing this in CVI and I am just trying to convert his code into LabVIEW code. Below is the CVI code:

next = (0x00ff0000 & (buf[4*x+1] << 16)) | (0xff000000 & (buf[4*x+0] << 24)) |
       (0x000000ff & (buf[4*x+3]))       | (0x0000ff00 & (buf[4*x+2] << 8));
ff = (double)next;   // *16.0;
g_waveform[x] = ff / 65536.0;

Attached is a picture of my LabVIEW code. I am not getting the correct data. Can anyone see an obvious problem?
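For what it's worth, the posted expression can be checked on its own in a small standalone C program; the sample bytes below are made up for illustration (they encode 0x00018000, i.e. 1.5 in 16.16 fixed point):

#include <stdio.h>

int main(void)
{
    /* made-up sample: big-endian bytes of 0x00018000 = 1.5 in 16.16 fixed point */
    unsigned char buf[] = { 0x00, 0x01, 0x80, 0x00 };
    double g_waveform[1];
    double ff;
    int next;
    int x = 0;

    /* same expression as in the CVI snippet above */
    next = (0x00ff0000 & (buf[4*x+1] << 16)) | (0xff000000 & (buf[4*x+0] << 24)) |
           (0x000000ff & (buf[4*x+3]))       | (0x0000ff00 & (buf[4*x+2] << 8));
    ff = (double)next;            /* *16.0; */
    g_waveform[x] = ff / 65536.0;

    printf("g_waveform[0] = %f\n", g_waveform[x]);   /* prints 1.500000 */
    return 0;
}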
04-14-2007 07:59 AM - edited 04-14-2007 07:59 AM
