Machine Vision

2-tap color camera with Bayer on FPGA framegrabber

Hi,

Could you please help me configure a 2-tap color linescan camera to work correctly with an FPGA framegrabber so that I get a color image? After installing the drivers I reviewed all 4 available examples in the Example Finder. My camera supports only 2-tap 8-bit encoding (no 1-tap). In the Example Finder there is only one example for color Bayer decoding, and unfortunately it is for 1 tap only. I have successfully built a working 2-tap monochrome setup, which is fine, but I was then unable to merge the Example Finder project ("1-Tap 8-Bit Camera with Bayer.lvproj") with my working 2-tap monochrome project.

 

I am just looking for a simple, demonstrative example that connects a 2-tap camera, the FPGA framegrabber and LabVIEW so that I can see a color image.

 

Best regards

David

 

Message 1 of 8

Hi David and welcome to NI Forums!

 

Try the method described here.

 

Kind regards:

Andrew Valko
National Instruments Hungary
Message 2 of 8

Hi,

thanks for the reply. I have successfully tried this method, and I even got an 8-tap 8-bit setup working. However, this is only for monochrome pictures. I am now interested in how to process the pixel values into color images at the FPGA level. There is one example, C:\Program Files\National Instruments\LabVIEW 2012\examples\Vision-RIO\1-Tap 8-bit Camera with Bayer\1-Tap 8-Bit Camera with Bayer (FPGA).vi, which uses a special CLIP to get a Bayer-decoded image at the FPGA level, but only for 1 tap. I have tried to modify it into a working 2-tap example with Bayer decoding, but without success. I do not even know whether the example really works with 1 tap, since I do not have a 1-tap camera. I would be very glad for any comments on this.

 

David

Message 3 of 8

Dear David,

 

Unfortunately, there is no ready-made example for your application goal, but since you have both the Bayer decoding CLIP and a working acquisition set up, you should not have too much trouble putting it all together.

In Bayer filter cameras, the monochromatic image supplies the following color channels:

[Image: Bayer1.png, the Bayer color filter pattern]
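To make it concrete what the Bayer decoding has to produce from such a mosaic, here is a minimal host-side sketch in Python, assuming the common RGGB layout. This only illustrates the principle; it says nothing about the internals of the NI CLIP.

# Illustrative only: a naive 2x2-cell demosaic for an RGGB Bayer mosaic.
# 'raw' holds h*w 8-bit values in natural (1-tap) order.
def demosaic_rggb(raw, h, w):
    rgb = [[0, 0, 0] for _ in range(h * w)]
    for y in range(0, h - 1, 2):
        for x in range(0, w - 1, 2):
            r  = raw[y * w + x]            # R: even row, even column
            g1 = raw[y * w + x + 1]        # G: even row, odd column
            g2 = raw[(y + 1) * w + x]      # G: odd row, even column
            b  = raw[(y + 1) * w + x + 1]  # B: odd row, odd column
            g = (g1 + g2) // 2
            for dy in (0, 1):              # replicate one RGB value over the 2x2 cell
                for dx in (0, 1):
                    rgb[(y + dy) * w + (x + dx)] = [r, g, b]
    return rgb

A real decoder interpolates the missing colour values from neighbouring pixels instead of replicating them, so it has to buffer at least one extra line of the image before it can produce output.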

Now, if you had a one-tap connection, these pixels would simply be supplied one after another, row by row, from the top left to the bottom right. With two taps there are more possibilities, though: the pixel values might come from two rows next to each other, from two columns, or from two distinct parts of the image. A few examples:

[Image: Basler2.gif, example two-tap readout geometries]

So if the first pixel is x, and the image dimensions are h*w, your second pixel (the second tap) can be x+1, x+w, x+(w/2), x+(h*w/2), or something else. For this, you have to consult the specifications of your camera.
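Just to spell the index arithmetic out, here is a small Python sketch. The geometry names are made up purely for the illustration; the real mapping has to come from the camera manual.

# For a pixel at linear index x in an h*w image, where does the pixel on the
# second tap land? Geometry names are illustrative, not from any camera spec.
def tap1_index(x, w, h, geometry):
    if geometry == "adjacent_pixels":   # taps carry neighbouring pixels of one row
        return x + 1
    if geometry == "adjacent_rows":     # taps carry the same column of two rows
        return x + w
    if geometry == "half_line":         # taps carry the left and right half of a row
        return x + w // 2
    if geometry == "half_frame":        # taps carry the top and bottom half of the frame
        return x + (h * w) // 2
    raise ValueError("unknown tap geometry")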

 

Once you know what is actually coming in on the camera lines, you have a few options. What I would do is try realigning the samples on the FPGA (for example, write all the pixels into a block of memory and then read the result out sequentially). This basically simulates a one-tap camera for the CLIP, so you do not have to write the reconstruction/interpolation yourself; you can use the CLIP provided with the example.
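As a host-side model of that realignment (on the FPGA this would be a block memory written from both taps and read out sequentially; the two geometries are again just examples):

# Rebuild the natural 1-tap pixel order of one row from a two-tap readout.
def realign_adjacent(tap0, tap1):
    # taps deliver neighbouring pixels of the same row: interleave them
    row = []
    for a, b in zip(tap0, tap1):
        row.extend((a, b))
    return row

def realign_half_line(tap0, tap1):
    # tap 0 delivers the left half of the row, tap 1 the right half:
    # write both halves to memory, then read them out back to back
    return list(tap0) + list(tap1)

Feeding the realigned stream into the Bayer CLIP one pixel at a time then looks exactly like the 1-tap example.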

 

Kind regards:

Andrew Valko
National Instruments Hungary
Message 4 of 8

Hi,

thanks for the reply. I know that my camera uses the pattern:

RGRG etc

GBGB etc

I also know that the pixels are read out like this (picture 3 in the first row): first tap pair: first pixel of row 1 + second pixel of row 1; second tap pair: third pixel of row 1 + fourth pixel of row 1; and so on (i.e. sequentially).

I have tried several scenarios of modifying the example. The one I expected the most from was this: when the pixels are acquired (i.e. 2 pixels at the same time), I first send the first pixel to the CLIP, and in the same loop iteration I also send the second pixel. In the example it looks as if the CLIP's "Pixel In" handles the buffering automatically, because the CLIP keeps taking in pixels until "RGB Output Valid" goes true, meaning that the RGB decoding is done (in each iteration of the 100 MHz loop the program, in parallel with the pixel readout, also checks whether "RGB Output Valid" has been set to true, so it looks to me as if the memory is a natural part of the CLIP). I think it would help me to know three things:

1) Has anybody used the 1-tap example, as provided, with an RGB 1-tap camera? Right now it looks to me like there may be a problem in this CLIP.

2) The source code of the "Bayer decoding" CLIP, since I believe it could help me very much (in particular an exact description of the parameters "Pixel In", "Write Enable", "New Frame Pulse", etc.). I was not able to find more information about it.

3) Whether it is necessary to modify the framegrabber configuration file (*.icd), which gives the framegrabber information about the camera (on the camera side the configuration is done via CCT+ serial-port communication within Camera Link), or whether this config file is unnecessary in the case of an FPGA framegrabber (with a non-FPGA framegrabber this file is required).

 

David

Message 5 of 8

Hello,

I was trying to modify the "color camera with Bayer on FPGA" example too, but I was not successful. Anyway, I can handle my task with a monochrome camera.

So I am asking whether you have any experience with the centroid example (1-tap camera with centroid on FPGA). I have modified it for a 2-tap camera, but the values the centroid function gives back are incorrect. Actually, it doesn't count anything: instead of the centre of my white dot on a black background, I get the centre of the image. The image resolution is 2040x1086 and the centroid function gives me 1020x543. The white dot is at approximately position 400x300. I cannot use probes on the FPGA target, so I don't know where exactly the problem is.
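To show what I expect the function to compute, here is roughly how I would do a thresholded centroid on the host (the threshold of 128 is just a guess for my image; the FPGA example presumably does something equivalent with fixed-point accumulators):

# Host-side sketch of a thresholded centroid over an h*w 8-bit image.
def centroid(pixels, w, h, threshold=128):
    sx = sy = count = 0
    for y in range(h):
        for x in range(w):
            if pixels[y * w + x] > threshold:
                sx += x
                sy += y
                count += 1
    if count == 0:
        return (None, None)   # nothing above threshold
    return (sx / count, sy / count)

If the accumulation effectively runs over every pixel (for example because the threshold never triggers anything), the result collapses to the middle of the image, which is exactly the 1020x543 I am seeing, so maybe my 2-tap modification feeds the pixels into the function incorrectly.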

 

I used the same centroid function in a non-FPGA VI, modified it a little, and it works correctly.

 

If you can give me some suggestions on what to do, I will be very glad.

 

Thank you

Best Regards

 

Filip

Message 6 of 8

Hi,

unfortunately I do not have experience with the centroid example. Anyway, I got the color camera working, since I am now using the debayering functions directly in the camera. The big problem with the example NI provides is that they are unable to provide documentation for the CLIP functions used.

 

David

Message 7 of 8

Hi Volnas,

 

Did you ever figure out how to get the centroid VI working on the FPGA for a multi-tap camera?

 

Thanks

Message 8 of 8