That decision was made before I started at NI... I would suppose it was made because it generally makes things MUCH easier for the user. It is easier to read a value of 0.2 Volts than to read a raw value of 0x45 (note: I am making the numbers up, and the scaled value may not correspond to the voltage mentioned above). If the user received the 0x45, they would have to scale it themselves with the formula EUV = Data/65535 * (maxScale - minScale) + minScale, which means keeping track of the range on each and every tag. Then, any time they tried to write an analog value, they would have to calculate the data to write as Data = (DesiredEUV - minScale)/(maxScale - minScale) * 65535, again having to track the min and max scale values for every single point. Consider that a FieldPoint system composed solely of serial modules can have 25 network modules on the same serial port, each with 9 I/O modules, each with 16 channels: that is potentially 3600 separate points whose scale min & max you would have to track... and since you can have multiple serial ports...
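To make that concrete, here is a minimal sketch in C of the two conversions described above, assuming a 16-bit unsigned raw count and a made-up -10 V to +10 V channel range; the function names and limits are illustrations only, not the actual FieldPoint driver API.

    /* Sketch of the scaling math, assuming a 16-bit raw value and a
       hypothetical channel range of -10 V to +10 V. */
    #include <stdio.h>
    #include <stdint.h>

    /* Raw count -> engineering-unit value (EUV) */
    static double raw_to_euv(uint16_t raw, double minScale, double maxScale)
    {
        return (double)raw / 65535.0 * (maxScale - minScale) + minScale;
    }

    /* Desired engineering-unit value -> raw count to write */
    static uint16_t euv_to_raw(double euv, double minScale, double maxScale)
    {
        return (uint16_t)((euv - minScale) / (maxScale - minScale) * 65535.0 + 0.5);
    }

    int main(void)
    {
        double minScale = -10.0, maxScale = 10.0;  /* made-up range for one channel */
        printf("raw 0x0045 -> %f V\n", raw_to_euv(0x0045, minScale, maxScale));
        printf("0.2 V -> raw 0x%04X\n", euv_to_raw(0.2, minScale, maxScale));
        return 0;
    }

The point is that the user would have to carry the correct minScale/maxScale pair for every one of those thousands of channels to get these conversions right, which is exactly what the server's scaling saves them from doing.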
Aaron
LabVIEW Champion, CLA, CPI