LabVIEW


Help design a new OpenG VI, Average 1D Array

A different idea:

How much speed does it cost to check the max and min value of the input array to determine if an overflow can even happen?

I guess it may be cheaper than switching to EXT.



worth testing?

Shane.


Message Edited by Intaris on 04-26-2008 05:10 PM
0 Kudos
Message 11 of 14
(1,392 Views)

You only attempt to solve part of the problem so far. Checking the range is more complicated than that: if a single value is at the limit, the average will be fine, but if all values are near the limit, the sum can still overflow or underflow. I am really not sure how much code we want to throw at all this. Maybe it is sufficient to document the shortcomings in the help and keep things lean and fast. 😉
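A quick numeric sketch of that failure mode (Python here, purely as an illustration; Python floats are IEEE doubles, like LabVIEW's DBL): every element is individually representable, yet the plain sum overflows to infinity, while a running (incremental) mean, which never forms the large sum, stays correct.

```python
import math

data = [1.5e308] * 4           # each value is a valid DBL

# Naive SUM/N: the sum overflows to inf before the division.
naive = sum(data) / len(data)
print(naive)                   # inf

# Running mean: update the mean one element at a time.
mean = 0.0
for i, x in enumerate(data, start=1):
    mean += (x - mean) / i
print(mean)                    # 1.5e+308
```

This is the kind of case where a smarter mean gives the right answer even though SUM/N does not.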

For DBL we can use the "mean.vi" from the statistics palette, which apparently (according to the other thread) overflows less and is very similar in speed to a plain SUM/N. It can be used wherever we average a 1D array. It could even be used on 2D arrays by, e.g., first averaging each column and then averaging the remaining 1D array of column means.
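The 2D idea can be sketched like this (plain Python as a stand-in for the block diagram; `mean1d` plays the role of the 1D averaging VI, and the names are mine, not the VI's). Note this two-stage scheme equals the grand mean only because all columns of a rectangular 2D array have the same length.

```python
def mean1d(xs):
    """Running mean of a 1D list (stands in for the 1D Average VI)."""
    m = 0.0
    for i, x in enumerate(xs, start=1):
        m += (x - m) / i
    return m

def mean2d(rows):
    """Average a 2D array: average each column first,
    then average the resulting 1D array of column means."""
    cols = list(zip(*rows))                 # transpose: columns as tuples
    return mean1d([mean1d(c) for c in cols])

print(mean2d([[1.0, 2.0], [3.0, 4.0]]))    # 2.5
```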

EXT is only needed for EXT or I64/U64 inputs. Does anyone know whether polymorphic VIs can interact with the "output configuration" (see e.g. the properties of the multiply node in LabVIEW 8.5)? It would be nice to be able to manually specify a DBL output even for U64 inputs.

In addition, the function should also work for complex data. Sometimes we want to average vectors! 🙂
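Averaging complex data needs no separate algorithm: the same running-mean update works componentwise (sketched here with Python's built-in complex type, again just as an illustration of the math, not of any VI).

```python
def cmean(zs):
    """Running mean over complex values; the update is identical
    to the real case, applied to both components at once."""
    m = 0 + 0j
    for i, z in enumerate(zs, start=1):
        m += (z - m) / i
    return m

print(cmean([1 + 1j, 3 + 3j]))   # (2+2j)
```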



Message Edited by altenbach on 04-26-2008 04:21 PM
0 Kudos
Message 12 of 14
(1,379 Views)
Altenbach,

maybe you should look closer at the code.....

It checks whether the maximum value is smaller than the maximum representable value divided by the number of elements in the array.

This means that ALL of the values could be pretty high, but if they're all smaller than Max/N, then overflow is not possible. It may of course be necessary to set the "maximum" a little below the theoretical maximum due to rounding problems, but this should work, no? Even if it is a bit conservative: using this method, a single large value could force the EXT version without the sum of the array being anywhere close to the real maximum of the DBL type.
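As I read it, the guard is: if max(|x|) < DBL_MAX / N, the sum cannot overflow. A minimal sketch of that check (Python; the function name is mine, not the VI's):

```python
import sys

def sum_is_safe(data):
    """Conservative guard: if every element's magnitude is below
    DBL_MAX / N, the running sum can never exceed DBL_MAX."""
    n = len(data)
    if n == 0:
        return True
    limit = sys.float_info.max / n
    return max(abs(x) for x in data) < limit

safe = [1.0, 2.0, 3.0]
risky = [1.5e308, 1.0]        # one large value trips the guard
print(sum_is_safe(safe))      # True
print(sum_is_safe(risky))     # False
```

This also shows the conservatism being discussed: `risky` sums to 1.5e308, well within DBL range, yet the single large element flags it as unsafe.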

Shane.
0 Kudos
Message 13 of 14
(1,363 Views)
Thanks Shane!

Intaris wrote:
maybe you should look closer at the code.....

Yeah, maybe I should. I don't work well with pictures. 😮 You should always attach some real code. 🙂


Intaris wrote:
Even if it is a bit conservative.  Using this method, a single large value could force the EXT version without the sum of the array being anywhere close to the real maximum of the DBL type.

That's right. For example, the following data is safe while it is flagged as unsafe. 🙂
 

My suggestion is to use the NI mean for everything DBL (and below). It is fast, does not need any voodoo code, and gives the correct result even in cases such as the following, where SUM/N overflows as DBL. It seems silly to duplicate existing code with something less capable and more complicated. 🙂 From what I can tell, mean is included in LabVIEW Base (there is no warning in the help). It even has error handling included for free.
 
 
Now if you want to adapt your code for the EXT version, you are welcome to. There is a nonzero chance that it will overflow at some point where the result is still in the valid EXT range. 😮


Message Edited by altenbach on 04-27-2008 09:32 AM
0 Kudos
Message 14 of 14
(1,341 Views)