I'll explain my application before my question. I am taking chunks (usually 15x15 elements, though that may change) of a picture (a 2D array of U32), where each value is a 24-bit number in the standard RGB format. I find the 25% of cells in the chunk with the highest luminance values (0.33R + 0.59G + 0.11B), then average the R, G, and B values for those cells.
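To make the math concrete (my real code is a LabVIEW VI, so treat this as a rough sketch in Python/NumPy, and note I'm assuming the pixels are packed as 0x00RRGGBB):

```python
import numpy as np

def luminance(pixels: np.ndarray) -> np.ndarray:
    """Unpack 0x00RRGGBB U32 pixels and compute 0.33R + 0.59G + 0.11B."""
    r = (pixels >> 16) & 0xFF
    g = (pixels >> 8) & 0xFF
    b = pixels & 0xFF
    return 0.33 * r + 0.59 * g + 0.11 * b
```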
My question is: what is the most efficient way to find the top 25% of luminance values and then grab and average the corresponding R, G, and B values? My initial thought is to break out the R, G, and B values while everything is still in a 2D array, calculate the luminance from those 2D arrays, then take 15x15 chunks and find the top 25% using something like the VI attached. Is there a better way to do this?

I've also been thinking about avoiding the Index Array calls for each of R, G, and B by creating a 15x15 mask of 1's and 0's (1's corresponding to the top-25% luminance values), multiplying the R, G, and B matrices by the mask, then using the sum operator and dividing by the known number of cells in the top 25%. Any other insights? Efficiency is important here; the pictures can be 5 MP.
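Here is roughly what I mean by the mask idea, again just a sketch in Python/NumPy rather than the actual VI (the function name and the tie handling are placeholders for illustration):

```python
import numpy as np

def chunk_top25_mean(r, g, b, lum):
    """Average R, G, B over the cells whose luminance is in the top 25% of one chunk.

    r, g, b, lum are 2D float arrays for a single chunk (e.g. 15x15).
    """
    n_keep = lum.size // 4                        # cells in the top 25% (56 for 15x15)
    threshold = np.partition(lum.ravel(), -n_keep)[-n_keep]
    mask = (lum >= threshold).astype(float)       # 1's where luminance is in the top 25%
    count = mask.sum()                            # can exceed n_keep if there are ties
    return ((r * mask).sum() / count,
            (g * mask).sum() / count,
            (b * mask).sum() / count)
```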
Michael