To convert a floating-point value to its binary equivalent, all you have to do is use the Flatten To String function. The tricky part is getting the application on the other end to recognize the format. As soon as you start writing binary data, you have to start worrying about what processor the end user's machine is using, because it makes a difference in how the bitstream is interpreted (big-endian vs. little-endian).
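Since LabVIEW code is graphical, here is a quick sketch in Python's `struct` module of what the endianness issue looks like at the byte level. The same double produces the same 8 bytes in opposite orders, so the writer and reader must agree on byte order up front (LabVIEW's Flatten To String emits big-endian by default):

```python
import struct

value = 3.14159

# Big-endian ("network order") -- what LabVIEW's Flatten To String emits by default.
big = struct.pack('>d', value)
# Little-endian -- the native order on x86 machines.
little = struct.pack('<d', value)

print(big.hex())
print(little.hex())
print(big == little[::-1])  # same 8 bytes, opposite order -> True
```

If the reader unpacks with the wrong order, it gets a perfectly valid but wildly wrong number, which is why this bug can be hard to spot.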
Remember also that LV timestamps (which in the wider world are very nonstandard) have to be represented as either a double-precision float or a U32 number. In the Microsoft world, a slightly more standard representation of time is the number of days since midnight, Jan 1, 1900--a value that can be represented in a single-precision float.
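As a rough sketch of the conversion (in Python for illustration; the function name is mine): LabVIEW timestamps count seconds from its Jan 1, 1904 UTC epoch, so shifting them onto a days-since-1900 scale is just an epoch offset plus a seconds-to-days divide:

```python
from datetime import datetime, timezone

# LabVIEW's epoch is Jan 1, 1904 (UTC); the Microsoft-style serial date
# counts days from midnight, Jan 1, 1900.
LV_EPOCH = datetime(1904, 1, 1, tzinfo=timezone.utc)
MS_EPOCH = datetime(1900, 1, 1, tzinfo=timezone.utc)

def lv_seconds_to_ms_days(lv_seconds: float) -> float:
    """Convert a LabVIEW timestamp (seconds since 1904) to days since 1900."""
    offset_days = (LV_EPOCH - MS_EPOCH).days  # 1460 (1900 is not a leap year)
    return offset_days + lv_seconds / 86400.0
```

So a LabVIEW timestamp of 0 lands on day 1460 of the 1900-based scale, and each 86,400 seconds adds one day.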
There are also datalogging examples that demonstrate writing binary files, though those tend to use very low-level binary representations--they aren't even scaled to floats. This provides the most compact storage, since no number takes more than 2 bytes (16 bits), and the fastest as well, since you aren't taking the time to scale the data in LV.
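To make the size argument concrete, here is a small sketch (again in Python; the scale factor and range are assumed for illustration) of storing raw 16-bit readings instead of 8-byte doubles--a 4x saving, with scaling deferred to read time:

```python
import struct

# Raw ADC counts that fit in a signed 16-bit integer (I16).
samples = [0, 1234, -1234, 32767, -32768]

# Pack as big-endian I16s: 2 bytes per sample instead of 8 for a double.
packed = struct.pack('>%dh' % len(samples), *samples)
print(len(packed))  # -> 10

# The reader applies the scale only when it needs engineering units.
# Assumed here: a +/-10 V full-scale range; the real factor must be stored
# alongside the data (e.g. in a small file header).
scale = 10.0 / 32768.0
volts = [n * scale for n in struct.unpack('>%dh' % len(samples), packed)]
```

The trade-off is that the file is meaningless without the scale factor, so it has to travel with the data.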
Mike...