LabVIEW

Performance and data types: which to use?

Hi All,

I am wondering what data type to use and how each affects memory and speed.

1. What is the difference (if any) between using SGL, DBL, INT, etc.? Looking at the LabVIEW help, there seems to be a range of 8-256 bits of storage depending on the data type. Is it basically a matter of choosing the one with the smallest storage that can fit the data?

2. I currently have a cluster flowing through subVIs. The cluster contains the start time (or frequency), the delta t (or delta f), and the array of data (about 500-5000 elements). I tried to use the waveform datatype, but it couldn't handle a delta t of 2 nanoseconds (500 MHz signal). Am I OK using the cluster, or should I separate the components and pass them along individually? What data type should I use for each of the components?

Thanks

0 Kudos
Message 1 of 9
(3,737 Views)

There are three main issues to consider.

  1. Range and accuracy. If you need a very high level of accuracy, then you will need to use the extended data type or even create your own, although that's unlikely.
  2. Memory. Yes, SGL takes less than DBL, but unless you're dealing with really huge amounts of data this won't matter.
  3. Coercion. Most built-in functions work on DBL. If you wire a SGL into them, it will be coerced, possibly creating a copy of the data and increasing your memory usage.
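
The precision side of the SGL-versus-DBL trade-off can be illustrated outside LabVIEW. This Python sketch (standard library only) round-trips a value through IEEE 754 32-bit single precision, the same representation that underlies LabVIEW's SGL; the numbers are the 2 ns interval from the question:

```python
import struct

def to_sgl(x: float) -> float:
    """Round-trip a Python float (IEEE double, like LabVIEW DBL)
    through 32-bit single precision (like LabVIEW SGL)."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

dt = 2e-9                      # 2 ns sample interval (500 MHz)
dt_sgl = to_sgl(dt)
rel_err = abs(dt_sgl - dt) / dt
print(rel_err)                 # nonzero: SGL keeps only ~7 significant digits

# Memory: a SGL array is half the size of a DBL array, but for
# 5000 elements that is only 20 KB vs 40 KB.
print(5000 * 4, 5000 * 8)
```

Either type represents 2e-9 to far more accuracy than any scope measures, which is why the range/accuracy question rarely forces a choice by itself.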

To sum it up, most of the time it is best to use the default DBL. It's highly unlikely you'll need one of the others.

As for your second question, it sounds to me like the data is a single organism, so I would say you should leave it in the cluster, but that really depends on whether the functions need it or not and whether you're constantly bundling and unbundling the cluster. Note that 5000 elements is far from being a large array and you shouldn't have any problems handling it.

As for the timing unit, if you really only have 5000 elements (that's 10 microseconds of data?), then you should have no problem using a U32 with a nanosecond as the base unit. That gives you the ability to measure more than 4 seconds.
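
The arithmetic behind that suggestion, sketched in Python (the numbers are the ones from the thread):

```python
U32_MAX = 2**32 - 1            # largest value a U32 can hold
max_span_s = U32_MAX * 1e-9    # with 1 ns as the base unit
print(max_span_s)              # ≈ 4.295 s of measurable range

record_s = 5000 * 2e-9         # 5000 samples at dt = 2 ns
print(record_s)                # ≈ 1e-5 s, i.e. 10 microseconds of data
```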


___________________
Try to take over the world!
0 Kudos
Message 2 of 9
(3,720 Views)
The answer is: It depends...

Many of the analysis functions use DBL as their input and output datatype. If you are using these, then using DBL throughout avoids the need for conversion. The waveform datatype carries timing information, so it is useful for things like time-domain to frequency-domain transformations.
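
The waveform datatype is essentially a cluster of a start time t0, a sample interval dt (stored as a DBL), and the data array Y. As a rough textual analogue only (the class name and fields here are illustrative, not a LabVIEW API), a Python sketch:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Waveform:
    """Rough analogue of LabVIEW's waveform cluster: start time t0,
    sample interval dt, and the data array Y. dt is a DBL, so a
    2 ns interval is representable without trouble."""
    t0: float
    dt: float
    y: List[float] = field(default_factory=list)

    def time_of(self, i: int) -> float:
        # Time of sample i relative to the start of the record.
        return self.t0 + i * self.dt

wf = Waveform(t0=0.0, dt=2e-9, y=[0.0] * 5000)
print(wf.time_of(4999))   # last sample lands near 9.998 µs
```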

For high-speed data acquisition and streaming to disk, use unscaled binary output from the DAQ device and binary files.

Other cases may lead to other solutions.

In general, avoid making data copies, avoid type coercions, preallocate space for arrays, and use Replace Array Subset rather than Build Array.
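
The preallocate-and-replace advice can be sketched in Python terms (the lists here just stand in for LabVIEW arrays; in LabVIEW the two styles correspond to Build Array in a loop versus Initialize Array plus Replace Array Subset):

```python
n = 5000

# "Build Array in a loop" style: the array grows on every iteration,
# which in LabVIEW forces repeated reallocation and copying.
grown = []
for i in range(n):
    grown.append(i * 2e-9)

# "Initialize + Replace" style: allocate once up front, write in place.
pre = [0.0] * n
for i in range(n):
    pre[i] = i * 2e-9

print(grown == pre)   # True: same result, without repeated reallocation
```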

Search the archives for related topics. These kinds of things get discussed a lot.

Lynn
0 Kudos
Message 3 of 9
(3,720 Views)
Lynn and tst make some very good points. I'm curious about your statement that the waveform data type is having trouble with a dt of 2 nanoseconds. The waveform datatype is really nothing more than a cluster and the dt is a dbl. What makes you think the waveform datatype was giving you problems?
0 Kudos
Message 4 of 9
(3,714 Views)
Hey Dennis,

The problem with the waveform is that it uses a time stamp, and as far as I can figure out it only goes down to milliseconds. I need to go down to nanoseconds. This is to define the start time.

I hope that's right.

Phil
0 Kudos
Message 5 of 9
(3,707 Views)
The time stamp is 64-bit if I remember correctly, so it's better than a DBL. Your start time is pretty much limited by the accuracy of the Windows clock, and there you only have 1 ms resolution. How are you getting the start time to nanosecond resolution?
0 Kudos
Message 6 of 9
(3,700 Views)
Hi Dennis,

I get the data from a LeCroy 500 MHz scope. It consists of an array with a known delta t. I take the acquired waveform and chop a bit out, and I need to know where that bit is in relation to the whole waveform so that I can plot it over the waveform (basically highlighting the data that is selected). I know how many data points there are and I know the delta t, so I can work out the start time in nanoseconds. The cluster can go straight to a waveform graph and display correctly.
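
That start-time calculation reduces to index arithmetic; a minimal Python sketch (the index and segment length are made-up values, and working in integer nanoseconds sidesteps floating-point representation questions entirely):

```python
dt_ns = 2            # sample interval in nanoseconds (500 MHz scope)
start_index = 1200   # hypothetical first sample of the chopped segment
seg_len = 300        # hypothetical number of samples in the segment

t0_ns = start_index * dt_ns                   # segment start, relative to record
t_end_ns = (start_index + seg_len - 1) * dt_ns  # segment end
print(t0_ns, t_end_ns)   # 2400 2998
```

Feeding t0 (converted to seconds), dt, and the segment array to a waveform graph then overlays the highlighted bit at the right position.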

Do I take a performance hit converting from the cluster to individual components? I do this in subVIs for things like FFTs.

Regards, Phil.
0 Kudos
Message 7 of 9
(3,696 Views)
Okay, I think I understand. You don't want the absolute time that the timestamp gives you; you just want relative time? I guess the performance hit would depend on how often you bundle/unbundle. The analysis functions require a waveform data type as an input, so it would probably be more efficient to use your cluster only for the waveform display and pass the waveform to all of the other functions.
0 Kudos
Message 8 of 9
(3,688 Views)
Yeah, I'm not interested in the time that the waveform was captured (although that might be useful, hmm...). The time relative to the waveform is what I need. I tend to use lower-level functions for things like FFT that just require an array rather than the waveform ones.
0 Kudos
Message 9 of 9
(3,678 Views)