I have a double-precision floating-point number that I want to convert into a byte array. Right now I'm under the impression that I have to convert the double to an integer first, then to a byte array. The only problem is, my double is quite large (it requires 48 bits) and there is no 48-bit integer in LabVIEW. But the main question is: is there a way to convert a double to a byte array? Thanks!
You can do this with one function by using the 'Type Cast' node (Advanced >> Data Manipulation). This is a little more flexible than the flatten function because you can cast directly to your desired type, bypassing the string conversion. You also have more flexibility in choosing your resulting type: you can convert to an array or cluster of numbers (integer or float) as well as strings. You can cast most basic LabVIEW types to/from almost any other LabVIEW type. See the attached for an example.
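To make the Type Cast behavior concrete, here is a small Python sketch (Python stands in for the LabVIEW diagram logic; the function name is mine). Type Cast reinterprets the 8 raw IEEE 754 bytes of the double rather than converting its numeric value:

```python
import struct

def type_cast_double(x: float) -> list[int]:
    """Reinterpret a double's 8 raw IEEE 754 bytes, big-endian,
    analogous to wiring a DBL into LabVIEW's Type Cast with a U8 array type."""
    return list(struct.pack(">d", x))

print(type_cast_double(10.0))  # [0x40, 0x24, 0, 0, 0, 0, 0, 0]
```

Note the first two bytes, 0x40 and 0x24: that is the IEEE 754 bit pattern of 10.0, not the integer value 10.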
Thanks for the suggestion. However, I don't think I stated clearly what I needed. Say I have a double with a value of 10.00. When I use the data manipulation method, I get a byte array with values 40 (hex) and 24 (hex), but what I wanted was a byte array with the value A (hex). So in essence, convert a double to int, then to byte array. But to do that would require a 48-bit int. Any suggestions? Thanks.
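For clarity, the value-based conversion being asked for can be sketched in Python (a stand-in for the LabVIEW diagram; the function name and the repeated divide-by-256 split are my illustration, assuming a non-negative value that fits in 48 bits):

```python
def double_to_uint48_bytes(x: float) -> list[int]:
    """Round a double to its integer value, then split that value into
    6 big-endian bytes by repeated division by 256 (unsigned only)."""
    n = int(round(x))
    if not (0 <= n < 2**48):
        raise ValueError("value does not fit in 48 unsigned bits")
    out = []
    for _ in range(6):
        n, low = divmod(n, 256)  # peel off the lowest byte each pass
        out.append(low)
    return out[::-1]             # reorder to big-endian

print(double_to_uint48_bytes(10.0))  # [0, 0, 0, 0, 0, 0x0A]
```

So 10.00 becomes a byte array ending in 0x0A, rather than the 0x40 0x24 ... bit pattern that Type Cast produces.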
> So in essence, convert a double to int, then to byte array. But to do that would require a 48-bit int. Any suggestions? Thanks.
Try converting to a string, or use flatten to string, then convert the string to a byte array. Because of endianness, you may need to reverse the byte order depending on what you want.
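The flatten-then-convert route, and the byte reversal mentioned for endianness, can be pictured like this in Python (LabVIEW's Flatten To String emits big-endian bytes by default; `struct` stands in for it here):

```python
import struct

flat = struct.pack(">d", 10.0)   # like Flatten To String: 8 big-endian bytes
as_bytes = list(flat)            # like String To Byte Array
little = as_bytes[::-1]          # reverse the order for a little-endian consumer
```

Whether you need the reversal depends entirely on what the downstream device or protocol expects.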
Thanks so much. It worked perfectly. I have another section where I have to do the same thing, but the integer is now a 48-bit TWO'S COMPLEMENT number. I can't just divide by 256 anymore, since I have to worry about negation and sign handling. Can LabVIEW help with anything like this? Thanks.
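One way to see the two's-complement case (sketched in Python as before, with a function name of my choosing): masking the signed value with 2**48 - 1 maps negative numbers onto their two's-complement bit pattern, so no explicit negation logic is needed.

```python
def int48_to_twos_complement_bytes(n: int) -> list[int]:
    """Encode a signed value as 6 big-endian two's-complement bytes.
    The mask folds negative n into the range 0 .. 2**48 - 1."""
    if not (-2**47 <= n < 2**47):
        raise ValueError("out of signed 48-bit range")
    return list((n & (2**48 - 1)).to_bytes(6, "big"))

print(int48_to_twos_complement_bytes(-1))  # [0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF]
```

For example, -2 encodes as FF FF FF FF FF FE, while non-negative values come out the same as in the unsigned case.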