First you convert the waveform data to an array of doubles. That is OK. Then you convert the doubles to 16-bit integers. That will truncate the decimal, but since you multiplied by 1000 first, I assume you are accounting for that, so that is OK too. Then you make your fatal error in wiring the I16 array to the Byte Array to String function. See that little red dot on the left side of the Byte Array to String function? That coercion dot tells you each I16 is being chopped down to a U8. Since you only have one integer in your 16-bit array, you end up with only one byte in the coerced byte array.
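To see why that single byte is wrong, here is a rough Python sketch of the same conversion chain. This is only an illustration of the data types involved, not LabVIEW code, and the 1.234 sample value is made up:

```python
import struct

sample = 1.234                    # one waveform point (made-up value)
scaled_i16 = int(sample * 1000)   # 1234 after scaling and truncating the decimal

# The full 16-bit value occupies two bytes...
print(struct.pack('>h', scaled_i16).hex())    # '04d2'

# ...but coercing each I16 down to a U8 keeps only the low byte
coerced_u8 = scaled_i16 & 0xFF
print(coerced_u8, bytes([coerced_u8]).hex())  # 210 'd2'
```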
I recommend the Flatten to String function instead. It has the option (in newer LabVIEW versions) to change the byte order (LabVIEW defaults to big-endian, but Intel x86 is little-endian) and to prepend the array size or not. Since you seem to want just a single number, you do not want the array size.
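For reference, here is roughly what the flattened bytes look like for that same value, again sketched in Python just to show the byte layout. The 4-byte length header shown below assumes the typical big-endian I32 size prefix; the exact behavior depends on your LabVIEW version and the function's settings:

```python
import struct

value = 1234                                 # the scaled I16 from above

big_endian    = struct.pack('>h', value)     # b'\x04\xd2' -- LabVIEW's default byte order
little_endian = struct.pack('<h', value)     # b'\xd2\x04' -- what an Intel/x86 reader expects

# With "prepend array size" enabled, a 1-element I16 array gains a length header
with_size = struct.pack('>i', 1) + big_endian   # b'\x00\x00\x00\x01\x04\xd2'

print(big_endian.hex(), little_endian.hex(), with_size.hex())
```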
See attached example.
Dan Press
PrimeTest Automation