
Single Precision version of Analysis Tools

I have an application that requires the use of single-precision (float) math when performing a number of functions within LabVIEW (e.g., chirp pattern, uniform white noise, auto power spectrum). The reason is that we are using parts of the program to compare against an embedded DSP component that uses single-precision floating-point math. Almost all of these VIs in LabVIEW end up calling external code that appears to require double-precision (double) inputs and outputs.

One obvious solution would be to simply wire the single-precision inputs and outputs to the supplied VIs (or, alternatively, convert the inputs and outputs to the desired format). Unfortunately, there are a few issues with that approach:

1. The low-level math is performed at double precision, so the output of a given operation may not match what single-precision math would produce (a rough sketch follows this list).

2. Because most of the outputs from (as well as some of the inputs to) the external code components are large 1D arrays, converting the arrays from double to single precision (or single to double on the way in) adds substantial overhead.
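
To make the first point concrete, here is a rough C sketch (the values are purely illustrative) of how the same accumulation diverges between single and double precision:

    /* Rough illustration of issue 1: the identical summation carried out in
       float and in double drifts apart once the running sum grows large
       relative to the terms being added. */
    #include <stdio.h>

    int main(void)
    {
        float  sum_f = 0.0f;
        double sum_d = 0.0;

        for (long i = 0; i < 10000000; i++) {   /* ten million samples        */
            sum_f += 0.1f;                      /* single-precision (DSP-like) */
            sum_d += 0.1f;                      /* same inputs, double math    */
        }

        printf("float  accumulation: %f\n", sum_f);
        printf("double accumulation: %f\n", sum_d);
        /* The two totals differ noticeably, so a double-based analysis routine
           cannot be expected to reproduce the DSP's single-precision result
           bit for bit. */
        return 0;
    }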

My questions are as follows:

1. Is there any way to recompile these external code modules to use single-precision variables? The source code is not provided to users, so I guess National Instruments would have to do this.

2. I haven't used the pt-by-pt elements included within LabVIEW that much -- I've noticed that much more of that code is G-based (as opposed to calling external routines). I could attempt to recast all of those routines, but this application is not for real-time use, and I would assume the overhead issues would be on par with those I am currently experiencing. More importantly, there are elements of the "standard" analysis tools (e.g., the chirp signal) that are not included in the pt-by-pt family.

Any suggestions would be appreciated. Obviously, I would prefer not to have to reverse-engineer the external code that has already been written for these tools.


Stephen Applebaum, PhD
Message 1 of 5
Stephen,

Even if National Instruments recompiled the external code, it would not make a difference. Intel-based processors use an 80-bit floating-point unit for floating-point math. Even if you used single precision, the numbers would be promoted, the math done at extended precision, and the result truncated back. Your best solution may be to write the software that does the math you need yourself. I am sorry that I do not have any better news than this.

If you don't want to see as much overhead with coercion, try doing the calculations in smaller chunks.

Randy Hoskin
Applications Engineer
National Instruments
http://www.ni.com/ask
Message 2 of 5
Randy-

I agree with you that, when the goal is true emulation of 32-bit operations on hardware with an n-bit register (n = 64, 80, 128), using standard code with float rather than double declarations may not make a difference (it depends on the compiler settings and the platform; for instance, see http://msdn.microsoft.com/library/default.asp?url=/library/en-us/vccore98/HTML/_core_.2f.op.asp for the Visual C++ switch that changes the compiler's floating-point behavior on Intel machines). That is a specialized requirement that one really can't expect National Instruments to address.
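
For what it's worth, on 32-bit Windows the same effect can also be approximated at run time by dropping the x87 precision control to a 24-bit mantissa via the MSVC runtime's _controlfp(); whether intermediates still live in 80-bit registers depends on the code the compiler generates, so this is only an approximation:

    /* Sketch: forcing the x87 FPU to round results to a 24-bit mantissa on
       32-bit Windows/MSVC via _controlfp() from <float.h>.  The exponent
       range stays extended, so this approximates -- rather than exactly
       matches -- true single-precision hardware. */
    #include <float.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned int old = _controlfp(0, 0);     /* read current control word */
        _controlfp(_PC_24, _MCW_PC);             /* round to 24-bit precision */

        float x = 1.0f, y = 3.0f;
        printf("1/3 rounded as single: %.9g\n", x / y);

        _controlfp(old, _MCW_PC);                /* restore previous setting  */
        return 0;
    }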

However, the issue that I am mainly trying to address is execution speed and unnecessary overhead. There are a number of signal generation and analysis routines (as opposed to the simulation case referred to above) that we've written for these fairly large arrays of single-precision numbers -- the type of register used to hold intermediate results is not critical, as a small amount of roundoff error is fine.

My experience, both within LabVIEW with G-only code (i.e., no external code routines whose type declarations can't be modified) and with other programming languages in general, is that the execution speed of a given mathematical routine on an array of floats is usually substantially faster than on an array of doubles, due primarily to memory I/O. So, in the case at hand, we are taking two potential performance hits (sketched in C terms after the list below):

a. the conversion of the array from float to double (and double back to float at the end) in LabVIEW

b. the use of double arrays (the overhead associated with dealing with a structure that is twice as large as its float counterpart) inside the external code
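
In C terms, the wrapping we are effectively forced into looks something like the following (analyse_dbl() is just a hypothetical stand-in for a double-based lvanlys-style routine); the two O(n) conversion loops and the doubled working-set size are pure overhead for our purposes:

    /* Sketch of hits (a) and (b): calling a double-based routine with float
       data costs two full-array conversions plus buffers twice the size of
       the float data.  analyse_dbl() is a hypothetical placeholder. */
    #include <stdlib.h>
    #include <stddef.h>

    static void analyse_dbl(const double *in, double *out, size_t n)
    {
        for (size_t i = 0; i < n; i++)          /* placeholder computation    */
            out[i] = in[i] * 2.0;
    }

    void analyse_from_float(const float *in, float *out, size_t n)
    {
        double *tmp_in  = malloc(n * sizeof *tmp_in);   /* 2x float footprint */
        double *tmp_out = malloc(n * sizeof *tmp_out);

        for (size_t i = 0; i < n; i++)          /* hit (a): float -> double   */
            tmp_in[i] = in[i];

        analyse_dbl(tmp_in, tmp_out, n);        /* hit (b): double-width work */

        for (size_t i = 0; i < n; i++)          /* hit (a): double -> float   */
            out[i] = (float)tmp_out[i];

        free(tmp_in);
        free(tmp_out);
    }

    int main(void)
    {
        float in[4] = { 1.5f, 2.5f, 3.5f, 4.5f }, out[4];
        analyse_from_float(in, out, 4);
        return 0;
    }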

I believe there could be a significant speed increase for this type of application (i.e., when large arrays of single-precision data are involved) if single-precision versions of certain elements of the analysis tools were available. This is something from which a number of LabVIEW users might benefit.

Is there any way that you can provide single-precision components of the lvanlys.dll library for the following functions:

White (Uniform White Noise.vi)
Spectrum (Power Spectrum.vi)

and, to a lesser extent,

ChirpCIN (Chirp Pattern.vi)
SineWaveCIN (Sine Wave.vi)
TriangleWaveCIN (Triangle Wave.vi)
SquareWaveCIN (Square Wave.vi)
RealFFTH (Real FFT.vi)

Obviously, any other library routines that these components call would also need to be modified for single-precision inputs/outputs.
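
In the meantime, the best stopgap I can see (along the lines of Randy's suggestion) is to compile float-native routines into a DLL and call them through the Call Library Function Node. A rough sketch of such an export follows; the function name, parameter list, and linear congruential generator are all my own placeholders, and the output makes no attempt to match the sequence produced by Uniform White Noise.vi:

    /* uniform_white_f.c -- hypothetical float-native DLL export, intended to
       be called from LabVIEW via a Call Library Function Node. */
    #include <stdint.h>

    #ifdef _WIN32
    #define DLL_EXPORT __declspec(dllexport)
    #else
    #define DLL_EXPORT
    #endif

    /* Fill 'samples' with n uniformly distributed floats in
       [-amplitude, +amplitude), working entirely in single precision. */
    DLL_EXPORT void uniform_white_f(float *samples, int32_t n,
                                    float amplitude, uint32_t *seed)
    {
        uint32_t state = *seed;
        for (int32_t i = 0; i < n; i++) {
            state = state * 1664525u + 1013904223u;   /* simple 32-bit LCG  */
            /* Map the 32-bit state to [-1, 1), then scale by the amplitude. */
            samples[i] = amplitude * ((float)state / 2147483648.0f - 1.0f);
        }
        *seed = state;   /* hand the state back so later calls continue it  */
    }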

I would, of course, let you know how much of a (hopefully substantial) improvement in execution speed such code produces, if you're interested.

Thanks.

Stephen Applebaum, PhD
Message 3 of 5
Dear Stephen and Randy,

I am very interested in having linear algebra VIs that call the single-precision BLAS functions from the Intel Math Kernel Library -- so much so that I may attempt to produce my own VIs that call the library. I'm betting there are other users out there who would like these as well.

It comes down to raw performance. The 32-bit operations have a theoretical performance advantage of 3 to 4 times over their 64-bit counterparts on a Pentium 4. As I understand it, these operations have been optimized for geometric computations in video games. (I think these are the SSE and SSE2 instructions.)
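
As a starting point, the wrapper VIs I have in mind would sit on top of calls like the one sketched below (standard CBLAS interface; with MKL the header and link line differ, and the row-major layout here is simply my choice for the example):

    /* Minimal single-precision matrix multiply through the CBLAS interface
       (SGEMM): C = alpha*A*B + beta*C, with all data held as 32-bit floats. */
    #include <stdio.h>
    #include <cblas.h>   /* with Intel MKL, include <mkl.h> and link its libs */

    int main(void)
    {
        /* 2x3 * 3x2 -> 2x2, row-major storage throughout */
        float A[6] = { 1, 2, 3,
                       4, 5, 6 };
        float B[6] = { 7,  8,
                       9, 10,
                      11, 12 };
        float C[4] = { 0, 0,
                       0, 0 };

        cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    2, 2, 3,          /* M, N, K       */
                    1.0f, A, 3,       /* alpha, A, lda */
                    B, 2,             /* B, ldb        */
                    0.0f, C, 2);      /* beta,  C, ldc */

        printf("C = [%g %g; %g %g]\n", C[0], C[1], C[2], C[3]);
        return 0;
    }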

P.S. Stephen: Are you the Stephen I know from UCSD?

-- Syrus Nemat-Nasser, Ph.D.
Message 4 of 5
Hi Syrus,

Since this is not a currently available LabVIEW feature and there are no plans to add such functionality in the near future, I would suggest that you file a product suggestion for it by going to www.ni.com and selecting Contact NI.

Regards,
Ankita
Message 5 of 5