11-10-2008 09:04 PM
Hi~
I'm using LabWindows/CVI 8.5 and NI Vision to do image processing. When I read the "IMAQ Vision for LabWindows/CVI Reference Manual", I found that the imaqBCGTransform function only supports 8-bit images, while I have to do the BCG transform on a 16-bit image. Is there any solution that lets the BCGTransform function process a 16-bit image? Or is there any other function which could do brightness, contrast and gamma correction on a 16-bit image?
Thanks a lot!
11-11-2008 01:53 AM
Hi,
You can use the imaqUserLookup function instead of imaqBCGTransform to get the same result for 16-bit images. All you need is to generate an appropriate lookup table according to the BCG values.
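As a minimal C sketch of what this amounts to, assuming you can get at the raw 16-bit pixel buffer (the function name and the raw-pointer access below are illustrative, not NI Vision API calls):

```c
#include <stdint.h>
#include <stddef.h>

/* Apply a user-supplied 65536-entry lookup table to a raw 16-bit
 * pixel buffer. This only illustrates what a user-lookup call does
 * internally; in CVI you would pass your table to the library's
 * lookup function instead of looping yourself. */
void apply_lut_u16(uint16_t *pixels, size_t count, const uint16_t *lut)
{
    for (size_t i = 0; i < count; ++i)
        pixels[i] = lut[pixels[i]];
}
```

The whole BCG behavior then lives in how the table is filled; the loop itself never changes.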
Andrey.
11-11-2008 02:33 AM
11-11-2008 02:53 AM - edited 11-11-2008 02:54 AM
Unfortunately I can't give you a ready-to-use example for CVI 8.5, only a LabVIEW example.
I will post screenshots here; hopefully they will be helpful for you. Anyway, the functions are absolutely the same in CVI and LabVIEW. Or maybe you have LabVIEW too...
In the example below I create an 8-bit gradient image, then perform the BCG transform, then use the result for LUT generation, then apply this LUT to the 16-bit image:

Of course, for a full-range 16-bit image you need to calculate a LUT with 65536 elements.
And the result:

Andrey.
03-25-2010 04:16 AM
Of course, for a full-range 16-bit image you need to calculate a LUT with 65536 elements.
Andrey.
I have tried several things, but I always lose a lot of information when I generate a LUT with 65536 elements after a BCG lookup. There are indeed 65536 elements, but they correspond to only 256 distinct values, whatever the range is.
Could you explain a bit more about the LUT in the case of a full-range 16-bit image?
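That information loss follows from how an 8-bit BCG result gets expanded: every block of 256 consecutive 16-bit inputs shares one 8-bit table entry. A sketch of that expansion (illustrative names, not NI Vision code):

```c
#include <stdint.h>

/* Expand a 256-entry (8-bit) table to 65536 entries by index scaling.
 * Every block of 256 consecutive 16-bit inputs maps through the same
 * 8-bit entry, so at most 256 distinct output values can appear,
 * which is exactly the information loss described above. */
void expand_lut_8_to_16(const uint8_t *lut8, uint16_t *lut16)
{
    for (int i = 0; i < 65536; ++i)
        lut16[i] = (uint16_t)(lut8[i >> 8] * 257); /* 257 = 65535/255 */
}
```

No matter how the 256-entry table was computed, the expanded table can never hold more than 256 levels.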
Thanks in advance
Chris
03-25-2010 04:26 AM
The problem is that only U8 images are supported by the BCG lookup, so it can't be used for LUT generation in your case. You should create the LUT with your own code. This table should have 65536 elements with the corresponding gray values (which depend on the brightness, contrast and gamma values).
Andrey.
03-25-2010 04:32 AM
That was exactly what I was editing... and ran out of time!
I was trying to say that if we knew the equation of the BCG transformation, the dimension wouldn't be a limitation anymore. We could calculate the 2D array corresponding to the new image after the BCG transform.
I think that is what you are saying too: generating a LUT ourselves, taking the BCG values into account. I have managed to write an equation that works only for gamma = 1.
Any ideas?
Chris
03-25-2010 04:43 AM
03-25-2010 12:35 PM - edited 03-25-2010 12:36 PM
Thank you for the gamma equation... I forgot to multiply by the range (nice factor of 65536!).
I have tried to add the brightness and contrast as defined by LabVIEW to your equation, and it gave something interesting (see VI) and accurate for 8-bit images. That is why I added a switch to toggle between 8- and 16-bit images, generating the corresponding LUTs.
The negative point is that the brightness has to be adapted too: it varies between 0 and 2, corresponding to 0-255 or 0-65535 depending on which type of image you take.
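One way to express that adaptation, as a sketch (the [0, 2] convention is taken from the description above; the helper name and the linear mapping are assumptions):

```c
/* Map a bit-depth-independent brightness setting in [0, 2] onto the
 * pixel range of the target image (255.0 for 8-bit, 65535.0 for
 * 16-bit), so 1.0 lands at mid-range. Illustrative helper, not an
 * NI Vision function. */
double scaled_brightness(double setting, double range)
{
    return setting * range / 2.0;
}
```

Keeping the setting in normalized units and scaling only at the end means the same front-panel value behaves consistently for both image types.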
Don't hesitate to criticize the code or the maths.
PS: I was disappointed to find so little information in the NI Vision Concepts Manual, which describes and justifies what Vision does. For the BCG lookup function there is no description, making it more difficult to adapt to our specific needs. I would have hoped BCG were as well documented as kernels, for example.
Chris
03-25-2010 12:42 PM
I forgot to display the 2 images with the LUT... but I don't have the time right now, sorry 🙂