LabVIEW


Mapping images from different cameras

Hi everyone! 

I was wondering if you could give me a hand. I have to map two images. They are acquired at the same time from different cameras, and these cameras have different sensor sizes (1024x1028 & 1024x1024), so there will be an area in the bigger sensor that has no correspondence in the other.

I have to develop an algorithm to map both images. I have thought of creating a tool that asks the user to click some points in both images, in order to create a correspondence relationship. I was reading some maths about it, but I think I'm ending up with overly difficult solutions.

 

Do you have any idea how to solve this?

 

Thanks in advance, Maite. 

Message 1 of 8

Your two images are almost the same size.  By removing 4 rows of pixels from the 1024 x 1028 (?? curious, maybe it has a side border?) image, you would have identical sizes.  That would be my first suggestion, particularly if the magnification of the two images is such that the pixels of the same scene would then overlap.
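Not LabVIEW, but as a rough illustration of the cropping idea, assuming the extra four pixels lie along one axis and the frame is available as a 2-D array (the array name and the 2+2 split are made up):

```python
import numpy as np

# Placeholder for the larger sensor's frame, 1028 rows x 1024 columns
# (the orientation is an assumption; swap the axes if it is the other way round).
img_big = np.zeros((1028, 1024), dtype=np.uint16)

# Drop 2 rows from the top and 2 from the bottom to match the 1024 x 1024 sensor.
img_cropped = img_big[2:-2, :]
assert img_cropped.shape == (1024, 1024)
```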

 

Otherwise, there is a Resample VI in IMAQ that will (a) take a fair amount of processing time and (b) attempt to smoothly "average" over the image, pixel by pixel, to interpolate it up or down to whatever resolution you specify.
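For what it's worth, the same kind of interpolated resampling outside LabVIEW looks roughly like this Python/OpenCV sketch (the array and target size are made up; this is only an illustration of the concept, not the IMAQ Resample VI itself):

```python
import numpy as np
import cv2

# Made-up source frame, 1028 x 1024; resample it to 1024 x 1024
# with bilinear interpolation (cv2.resize takes the target size as (width, height)).
src = np.zeros((1028, 1024), dtype=np.uint16)
dst = cv2.resize(src, (1024, 1024), interpolation=cv2.INTER_LINEAR)
print(dst.shape)   # (1024, 1024)
```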

 

Bob Schor

Message 2 of 8

Sorry, I made a mistake: the bigger one is 1024x1280.

I'm looking to establish a correspondence between the pixels of the two images rather than resample, because the cameras are capturing different information from the same experiment. I need to know the correspondence in order to compare the different behaviour of the particles involved (one is a CCD camera and the other is a fluorescence camera), and the things you see are a bit different.

Also, in the future the fluorescence camera will be split in two, to get information from two fluorophores, so the size of its sensor will be reduced to 602x1024. But that will come later; what I'm trying to explain is that I need to correspond points from one image to the other, with some of them previously established by the user.

I don't know if this is a bit messy :s Sorry about that.

Thanks for your quick answer 🙂 I'm a bit stuck on this.

Message 3 of 8

I would place one or several easily identifiable objects in the field of view.

For example:

  • Place 2 small square objects in diagonal corners.
  • Automatically search for these objects in both images, obtaining two reference points and the distance between them.
  • Map. If the cameras are located near each other, linear rescaling could be OK (see the sketch below). Otherwise, 3D mapping is needed.
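Not LabVIEW code, but here is a minimal Python sketch of the linear rescaling from two reference points (all coordinates are invented; in practice they would come from pattern matching on the two square markers):

```python
import numpy as np

# Hypothetical positions (x, y) of the two square markers found in each image.
p1_a, p2_a = np.array([30.0, 40.0]), np.array([990.0, 1000.0])   # camera A
p1_b, p2_b = np.array([25.0, 55.0]), np.array([995.0, 1240.0])   # camera B

# Per-axis linear rescaling: x_b = sx*x_a + tx,  y_b = sy*y_a + ty
scale  = (p2_b - p1_b) / (p2_a - p1_a)
offset = p1_b - scale * p1_a

def map_a_to_b(pt):
    """Map a pixel coordinate from image A into image B's coordinates."""
    return scale * np.asarray(pt) + offset

print(map_a_to_b([512.0, 512.0]))
```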
_____________________________________
www.azinterface.net - Interface-based multiple inheritance for LabVIEW OOP
Message 4 of 8

Thanks, I will try it this way 🙂

Message 5 of 8

Hmm, I didn't consider that the cameras were not seeing the same "scene" (i.e. optically splitting the light path so that both cameras' orientation with respect to the subject was the same, the only differences being possibly magnification and translation, both of which could be minimized, leaving only resolution, which I tried to finesse).  OK, so it's a full-on Image Transformation problem.  _Y_ gives you a way to build in registration -- you may want to use multiple (three or four) Reference Points.  Can you say Projective Transformation?
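Again, not LabVIEW, but a small Python/OpenCV sketch of fitting a projective transformation (homography) from user-clicked reference points; at least four point pairs are needed, and the coordinates below are invented just to show the call pattern:

```python
import numpy as np
import cv2

# Hypothetical user-clicked correspondences (x, y), in the same order in both images.
pts_ccd  = np.float32([[100, 120], [900, 110], [910, 980], [ 95, 990]])
pts_fluo = np.float32([[ 80, 100], [920, 105], [930, 995], [ 70, 1000]])

# Fit the 3x3 homography H so that [x', y', 1]^T ~ H [x, y, 1]^T.
H, _ = cv2.findHomography(pts_ccd, pts_fluo, method=0)

# Map an arbitrary CCD pixel into the fluorescence image's coordinates.
pt = np.float32([[[512, 512]]])              # shape (1, 1, 2), as OpenCV expects
print(cv2.perspectiveTransform(pt, H).ravel())

# Or warp the whole CCD frame into the fluorescence camera's coordinates:
# warped = cv2.warpPerspective(ccd_image, H, (fluo_width, fluo_height))
```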

 

Bob Schor

Message 6 of 8

Thanks a lot for your answer, Bob!!

It is the same scene, but one is an ordinary image and the other is a fluorescence image, and the focal plane is different, so I think an image transformation is what I'm looking for. I will study projective transformations and let you know how it goes. Thanks again for your time!

 

Maite.

Message 7 of 8

I'm implementing the projective transformation, and I think it will work fine, thanks!!!

Message 8 of 8