Hi all,
I have a very processor- and memory-intensive application that acquires data, filters it, and displays it (64 channels at 30 kHz each). I think I could improve performance by using single-precision rather than double-precision floats, but Measurement Studio's functions (plotting, filtering) seem to rely on doubles. Is there any way to avoid unnecessary casting, or at least to cast efficiently? The data is acquired as 16-bit integers to speed up writing to disk. So far, I've been converting the ints to doubles with the device scaling coefficients, then using the NI filter and plotting methods.
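For reference, here's roughly what my conversion step looks like today, as a plain C++ sketch (not any particular NI API call; the function name, coefficient layout, and polynomial form are just placeholders for illustration):

```cpp
// Sketch of the current int16 -> double conversion: raw samples from one
// channel are scaled to engineering units with per-channel polynomial
// scaling coefficients (coeffs[0] + coeffs[1]*x + coeffs[2]*x^2 + ...).
#include <cstddef>
#include <cstdint>
#include <vector>

std::vector<double> ScaleToDouble(const std::vector<int16_t>& raw,
                                  const std::vector<double>& coeffs)
{
    std::vector<double> scaled(raw.size());
    for (std::size_t i = 0; i < raw.size(); ++i)
    {
        double x = static_cast<double>(raw[i]);
        double y = 0.0;
        // Evaluate the scaling polynomial with Horner's method.
        for (std::size_t c = coeffs.size(); c-- > 0; )
            y = y * x + coeffs[c];
        scaled[i] = y;
    }
    return scaled;
}
```

The resulting double arrays are what I then hand to the NI filter and plotting methods, which is where I'd like to avoid the double-precision round trip if possible.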
Thanks,
John