11-19-2008 10:29 AM
Branson,
Could you please save the two VIs so that they are compatible with LabVIEW 7.1? Thanks.
Regards
Anderson
11-19-2008 10:35 AM
Bruce,
Yes, you are right! I will shift the pixels back and trim the image, but could you briefly describe why the image quality would be poor if the color planes were shifted?
The first thing I tried, before converting to the U64 cluster, was to use replace color channel to combine the images. However, all I got was a black image. Supposedly it does output RGB64, but I'm not sure what I was doing wrong.
Regards,
Anderson
11-19-2008 11:17 AM
Think about what would happen if you separated the RGB planes of a good quality color image, then shifted one plane left one pixel and one plane down one pixel, then reassembled the color planes. It would be like a TV with bad synchronization of the color channels. Most of the picture would look okay, but there would be rainbow edges. Things would get a little blurry without proper alignment.
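If it helps to see it in code form, here is a rough NumPy sketch (purely illustrative, not the IMAQ code) of that misalignment experiment:

import numpy as np

def misalign(rgb):
    # rgb: any H x W x 3 color image as a NumPy array.
    # Shift the red plane one pixel left and the blue plane one pixel
    # down, then reassemble -- edges in the result show color fringes.
    out = rgb.copy()
    out[..., 0] = np.roll(rgb[..., 0], -1, axis=1)  # red: one pixel left
    out[..., 2] = np.roll(rgb[..., 2], 1, axis=0)   # blue: one pixel down
    return out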
Also think about why you are using 16 bit color in the first place, instead of 8 bit. You are trying to get a more accurate measurement of the color values, but you aren't aligning the correct pixels. You lose the benefit of 16 bit color when this happens. You would be better off converting the image to U8 and using the standard Bayer conversion routines.
If you are trying to do measurements, the shifted images would also introduce some confusion. Which edge do you use: R, G, or B? If you convert to grayscale, it will still be difficult to determine a proper edge. Color (Bayer) images are hard enough to deal with to begin with; why make it even more difficult by having the color planes out of alignment?
Bruce
11-19-2008 12:04 PM
It looks like IMAQ ReplaceColorPlane would do the job. You would need to wire in a U64 color image for the destination, and I16 images for each of the color planes. Check the error output to see if it tells you anything important.
For testing, I would start with a U64 color image. Extract the color planes, then combine them into a new image. Maybe for fun, swap some of the color planes just to make sure you are getting a new image. This would help you figure out how replace color plane works without having to deal with Bayer conversions. It could be a problem with your Bayer conversion that is propagating into the color plane combination.
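If a text sketch helps, the round-trip test looks roughly like this in NumPy (illustrative only; in LabVIEW the same steps would be done with the IMAQ extract/replace color plane VIs on a U64 image):

import numpy as np

def roundtrip_test(rgb64):
    # rgb64: H x W x 3 array of 16-bit-per-channel color data.
    r, g, b = rgb64[..., 0], rgb64[..., 1], rgb64[..., 2]
    # Recombining the planes unchanged should reproduce the original image.
    rebuilt = np.stack([r, g, b], axis=-1)
    assert np.array_equal(rebuilt, rgb64)
    # Swapping R and B "for fun" should give a visibly different image,
    # which proves the recombination step is really doing something.
    return np.stack([b, g, r], axis=-1)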
Bruce
11-19-2008 02:08 PM
Bruce,
Thank you for the fast reply. I've finished building the VI for making the Bayer image masks and am continuing to put the program together as you described. However, I'm having trouble understanding why you chose the blur convolution filter for R and B but not for G. Could you quickly explain this?
Also, the blur convolution is
1 2 1
2 4 2
1 2 1
What kind of filter is this?? Should there be a negative sign on the 4?
0 1 0
1 4 1
0 1 0
Sorry for all the questions. I really appreciate your help. If you're ever in the NJ/NY area, I'll buy you a couple of beers. 🙂
Regards,
Anderson
11-19-2008 02:46 PM
The convolutions fill in the empty cells of each color plane by averaging. Look at the fill pattern for the different colors. For R and B, there is one pixel in every other column, and every other row is skipped entirely. For G, there are twice as many pixels: one in every other column, with the pattern shifted by one pixel on the next row. Neglecting the proper alignment, the patterns are as follows:
R and B:
X 0 X 0
0 0 0 0
X 0 X 0
0 0 0 0
G:
X 0 X 0
0 X 0 X
X 0 X 0
0 X 0 X
Try out my convolution kernel in a few different positions on these fill patterns. When you are centered on a non-zero (X) pixel, it is multiplied by 4 and all the other coefficients land on zero pixels, so dividing by 4 gives you the original pixel back.
If you are centered between two or four X pixels, the kernel averages them: either two neighbors weighted 2 each, or four neighbors weighted 1 each, divided by 4. In every case, you either end up with the original pixel or an average of the nearest pixels.
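If it helps to see the arithmetic, here is a rough NumPy/SciPy sketch of the same interpolation on the R/B fill pattern (illustrative only; in LabVIEW this would be the convolution applied to the masked 16-bit plane):

import numpy as np
from scipy.ndimage import convolve

kernel = np.array([[1, 2, 1],
                   [2, 4, 2],
                   [1, 2, 1]])

# Sparse R/B-style plane: one pixel every other row and column, zeros elsewhere.
plane = np.zeros((5, 5), dtype=np.int64)
plane[::2, ::2] = 100   # the "X" pixels from the fill pattern above

filled = convolve(plane, kernel, mode='constant') // 4
# On an X pixel:         4*100/4             = 100 (original value back)
# Between two X pixels:  (2*100 + 2*100)/4   = 100 (average of the pair)
# Between four X pixels: (100+100+100+100)/4 = 100 (average of the four)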
Bruce
11-19-2008 06:53 PM
Hello Bruce,
I believe I have created a working VI. I've tested it in my setup and everything looks alright.
It's a bit slow at startup because it has to generate the masking matrix, and it gets really slow once you get up to 4 MP. But once it's over that hurdle, it works just fine.
There was a slight issue with the convolution divisor, which caused a problem with balancing the color channels. I know that using 4/4/4 [RGB] is supposed to be a nearest-neighbor approach, but 2/5/2 seems to work better. It appears to scale each 3x3 block according to the "real" pixels in the matrix so that the output color channels are more even in intensity.
Color gain was implemented via IMAQ Multiply after the convolution filter.
If there's anything I've done in error or could be optimized, please let me know. Once again, thanks... Couldn't have done it without you!
Regards
Anderson
11-19-2008 10:14 PM - edited 11-19-2008 10:16 PM
I might have a faster way to create the templates. I would start with a full-size image (IMAQ SetImageSize) and fill it with ones (IMAQ Fill), then replace every other row and column with zeroes (IMAQ SetRowCol). Odd rows and columns give you one template; even rows and columns give you the other. After creating the R and B templates, I would add them together and invert to get the G template. This would eliminate the continuous reallocation of memory that your current version uses.
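Roughly, the templates amount to this (a NumPy sketch for illustration only; the LabVIEW version would use the image VIs above, and the row/column parity below assumes an RGGB-style layout, so adjust it to match your camera):

import numpy as np

h, w = 4, 6                      # image size (use your sensor dimensions)
rows = np.arange(h)[:, None]     # column vector of row indices
cols = np.arange(w)[None, :]     # row vector of column indices

r_mask = ((rows % 2 == 0) & (cols % 2 == 0)).astype(np.uint16)  # even rows and columns
b_mask = ((rows % 2 == 1) & (cols % 2 == 1)).astype(np.uint16)  # odd rows and columns
g_mask = 1 - (r_mask + b_mask)   # everything that is neither R nor B

# Each masked plane is then just raw_bayer * mask for that color.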
I'm not surprised that the color gain is so far off. The sensitivities of the different color channels are typically very different, and using different divisors is the easiest way to do a color correction. You shouldn't need the IMAQ Multiply as well; just adjust your divisor to include the gain.
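In code terms, folding the gain into the divisor looks something like this (a NumPy/SciPy sketch, illustrative only; the gain is a placeholder you would set per channel):

import numpy as np
from scipy.ndimage import convolve

kernel = np.array([[1, 2, 1],
                   [2, 4, 2],
                   [1, 2, 1]])

def interpolate_plane(masked_plane, gain):
    # masked_plane: sparse 16-bit plane (zeros where this color wasn't sampled)
    # gain: per-channel color-balance gain; dividing by (4 / gain) is the
    # same as dividing by 4 and then multiplying by the gain, in one pass.
    divisor = 4.0 / gain
    return convolve(masked_plane.astype(np.float64), kernel, mode='constant') / divisor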
Bruce