Thanks Chris, that helped me figure out how to optimize my code. And the "show buffer allocations" tool is pretty neat. After all this time, I somehow missed that.
My question about multiplying floats was kind of a side curiosity. At some point I read or heard someone say that instead of performing mathematical operations or LabVIEW functions on floating-point datatypes, you could convert them to integers to improve speed or efficiency. You would multiply the decimal values by some constant, convert them to integers, do the operations, and then convert back to float and divide by the same constant. The hypothesis was that the operations would run faster or more efficiently on integers than on floats, and the multiplication/division would give you back the same value you started with. Is there any plausibility to this?
edit: this is assuming you have a lot of operations, enough to justify the datatype switching
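Just to make sure I'm describing the idea clearly, here's a rough sketch in Python (not LabVIEW) of the scaling trick I mean. The scale constant and values are just illustrative. One wrinkle I noticed while writing it out: addition works directly on the scaled values, but multiplication needs an extra divide by the scale, since both operands carry it.

```python
SCALE = 1000  # arbitrary scaling constant for this example

a, b = 3.25, 1.5

# Convert to scaled integers
ia, ib = round(a * SCALE), round(b * SCALE)  # 3250, 1500

# Addition: both values carry the same scale, so just add
sum_fixed = ia + ib
print(sum_fixed / SCALE)  # → 4.75

# Multiplication: the product carries SCALE twice, so divide once
prod_fixed = (ia * ib) // SCALE
print(prod_fixed / SCALE)  # → 4.875  (matches 3.25 * 1.5)
```

This is basically fixed-point arithmetic, as I understand it. Whether it actually beats the FPU on modern hardware is exactly what I'm asking about.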
Message Edited by kaufman on 06-28-2006 10:01 AM