09-10-2008 08:41 AM
plit string wrote: By your explanation, I did set up the instrument with a corrected 64-bit reading (since LabVIEW does not have the 32-bit type, only DOUBLE),
LabVIEW does have a 32-bit real: it's called SGL. See LabVIEW Numeric Data Types.
09-10-2008 10:46 AM
plit string wrote: By your explanation, I did set up the instrument with a corrected 64-bit reading (since LabVIEW does not have the 32-bit type, only DOUBLE), so I used 8 bytes, and then it worked perfectly with the casting you suggested
Well, if your instrument puts out 4-byte strings, as you claimed in the first message of this thread, I don't see how you can suddenly use 8 bytes (DBL) and still get a meaningful result.
If you look at the image in my example, you'll see that the indicator says SGL. This is a 4-byte floating-point number, as apparently required. All you need to do is right-click the orange diagram constant, select "Representation...SGL", and do the same for all other relevant terminals as needed.
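To see why the byte count matters, here is a minimal sketch in Python (since LabVIEW is graphical) of what the type cast is doing. The byte string and big-endian byte order are assumptions for illustration; check your instrument's manual for its actual byte order:

```python
import struct

# Hypothetical 4-byte string from the instrument, encoding the
# 32-bit (SGL) value 3.14 in big-endian byte order.
raw = struct.pack('>f', 3.14)
assert len(raw) == 4

# Casting the 4 bytes as a 32-bit float recovers the value
# (this is what LabVIEW's Type Cast to an SGL constant does).
value = struct.unpack('>f', raw)[0]
print(round(value, 2))  # 3.14

# Trying to read the same 4 bytes as a 64-bit (DBL) float fails,
# because an 8-byte buffer is required and only 4 bytes exist.
try:
    struct.unpack('>d', raw)
except struct.error:
    print("cannot unpack 4 bytes as a DBL")
```

The same logic applies in LabVIEW: the type you wire to Type Cast must match the byte length of the incoming string, which is why a 4-byte string requires SGL, not DBL.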
These are all extremely basic things, and I would urge you to do a few more tutorials before diving into a more serious project. 🙂
You might also want to read, for example, How LabVIEW Stores Data in Memory.