Student Projects


3-D reconstruction of an orange peel

Contact Information:


University: University of Salerno (Italy) – M.S. Degree in Electronic Engineering

Team Member(s): Antonella Finamore, Antonio De Caro, Paolo Sacco, Piero Castelluccio

Faculty Advisors: Dr. Alfredo Paolillo, http://www.misure.unisa.it

Email Address: finamore.antonella@gmail.com

Project Information

Title: 3-D reconstruction of an orange peel


Description


The objective of this work is to reconstruct the surface of a spherical object from two images of the same object under measurement, using stereo visual reconstruction. The work comprises three essential parts:

           1. Camera calibration

           2. Constraint search and pattern matching (based on the epipolar line and the spherical geometry of the object)

           3. 3-D reconstruction

Products

Software: NI LabVIEW 8.6

Camera: Canon EOS


Project Challenge


We developed a LabVIEW application that reconstructs the surface of a spherical object (an orange, in our case) starting from two images obtained with a single camera. The camera is placed in a fixed position, while the test item sits on a rotating plate. The idea is to treat two images of the object, acquired respectively before and after a known rotation, as two images acquired by two different cameras, as in a stereo-vision measurement system. The rotation angle α is chosen by the user; in this experiment, given the size of our test item (an orange), α is set to 15°. A further challenge concerns the calibration of the two views, since the parameters describing the second view are calculated from the parameters of the first view, which are measured experimentally. The main challenge is to set up an automatic stereo reconstruction process using only one camera: the images taken from two different angles are needed to evaluate depth and to obtain the 3-D space coordinates.

The application described in the following, developed as a project work for the course “Misure basate sulla Visione” (“Vision-based Measurement”), could be a starting basis for a procedure for the evaluation of fruit aging (useful in the food industry), which could be developed in the future.


Project work

Measurement station


Fig.1: Measurement station

Software flowchart



Fig.2: Software flowchart

Camera calibration and rotation


The theory behind the algorithms described in the following can hardly be summarized in a few pages, so interested readers are referred to the books and papers listed in the “References” section at the end of this document.

The first step of the project is the evaluation of a relationship between m (pixel coordinates) and M (absolute coordinates), using DLT calibration. This relationship can be expressed through a 3×4 matrix P, called the camera matrix: m̃ = P M̃, where m̃ and M̃ are the homogeneous versions of m and M. P has to be evaluated with a calibration procedure, which requires processing images of a known target. For our application, a cylindrical target object was built as a reference for camera calibration. The target bears 15 circular black aims, whose centroids, in pixel coordinates, are detected with the count-objects function (IMAQ Count Objects 2.vi). The black circular aims are numbered using a binary code associated with the surrounding angular sectors (see Fig. 3).


Fig. 3: Calibration target
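As a rough illustration of the DLT step (a NumPy sketch under our own naming, not the authors' LabVIEW VI), the camera matrix P can be estimated from n ≥ 6 correspondences between the target's absolute coordinates and the detected pixel centroids by homogeneous least squares:

```python
import numpy as np

def dlt_camera_matrix(M, m):
    """Estimate the 3x4 camera matrix P from n >= 6 correspondences
    between 3-D points M (n x 3) and pixel points m (n x 2), using
    the Direct Linear Transform: each correspondence contributes two
    rows of a homogeneous system A p = 0, solved via SVD."""
    A = []
    for (X, Y, Z), (u, v) in zip(M, m):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    # The solution (up to scale) is the right singular vector
    # associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)
```

Since P is recovered only up to scale, it is checked by reprojecting the calibration points and comparing them with the detected centroids.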

DLT calibration evaluates the camera matrix P for the first angulation (α = 0°); P is then decomposed into:


               ·   K: matrix of intrinsic parameters

               ·   R: rotation matrix

               ·   t: translation vector


The camera matrix of the second view, PL, is calculated from the matrix of the first view, PR, and from the known rotation angle α, by imposing a rotation around a fixed axis (y) on the original rotation matrix (Fig. 4):

RL = Ry · RR


Fig. 4: Target and camera frames
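Numerically, the composition above can be sketched as follows. Note that whether Ry multiplies on the left or on the right depends on whether R maps world-to-camera or camera-to-world coordinates; this sketch (our own, not the authors' VI) assumes R maps target coordinates to camera coordinates, so the object's rotation about the target-frame y axis composes on the target side and the translation is unchanged:

```python
import numpy as np

def rot_y(alpha_deg):
    """Rotation matrix for an angle alpha (degrees) about the y axis."""
    a = np.deg2rad(alpha_deg)
    return np.array([[np.cos(a), 0.0, np.sin(a)],
                     [0.0,       1.0, 0.0      ],
                     [-np.sin(a), 0.0, np.cos(a)]])

def second_view_matrix(K, R_R, t_R, alpha_deg):
    """Camera matrix of the second (virtual) view: the object rotates
    by alpha about the target-frame y axis, so the rotation composes
    on the target side while K and the translation stay fixed."""
    R_L = R_R @ rot_y(alpha_deg)
    return K @ np.hstack([R_L, t_R.reshape(3, 1)])
```

With this convention, projecting a target point through the second-view matrix gives the same pixel as projecting the physically rotated point through the first-view matrix, which is exactly the virtual-stereo idea of the setup.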


Point mapping


In this step, a common area, visible in both stereo images, is selected and sampled. The common surface is the only area that can be reconstructed. The color images are decomposed into the luminance, saturation and hue planes. Several tests showed that the saturation plane is the best choice to maximize pattern recognition. A surface sector is shown in Fig. 5:



Fig. 5: Reconstruction area
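For illustration, extracting the saturation plane of an RGB image can be sketched with Python's standard colorsys HLS conversion (the authors used LabVIEW's color-plane extraction; the function name here is our own):

```python
import colorsys
import numpy as np

def saturation_plane(rgb):
    """Return the saturation plane (HSL model) of an RGB image given
    as a float array of shape (h, w, 3) with values in [0, 1]."""
    h, w, _ = rgb.shape
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            # colorsys returns (hue, lightness, saturation)
            _, _, s = colorsys.rgb_to_hls(*rgb[i, j])
            out[i, j] = s
    return out
```

On a saturated subject like an orange against a dull background, this plane tends to be more stable than luminance under illumination changes, which is consistent with the tests reported above.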


Constraints and correspondence search


The three-dimensional coordinates of a given point can be reconstructed only after finding the accurate correspondence between the pixel coordinates in one image and the pixel coordinates in the other image after the rotation. Depth is evaluated by triangulating a point of interest in the left image with the corresponding point in the right image. The corresponding point is found using a pattern matching function, but only after limiting the search area with two constraints (in order to eliminate false matches):


· Epipolar constraint: given a point mL in the left image (circled in Fig. 6), there is a line (the epipolar line, shown in red in Fig. 6) which can be evaluated from the two camera matrices and on which the point corresponding to mL must lie.

· Geometric constraint: approximating the object with a sphere and knowing the imposed rotation angle α, the expected abscissa (green line in Fig. 6) at which the point appears in the right image can be estimated with some geometrical calculations.


The intersection of these two constraints gives quite a good estimate of the expected location of the corresponding point in the right image, which is then refined with the pattern matching VIs.


Fig. 6: Epipolar and geometric constraints
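The epipolar line can be computed from the two camera matrices through the fundamental matrix, F = [e2]× P2 P1⁺, where e2 is the epipole in the second image (see Hartley and Zisserman [1], Ch. 9). A NumPy sketch under our own naming, not the authors' VI:

```python
import numpy as np

def fundamental_from_cameras(P1, P2):
    """Fundamental matrix F of two camera matrices, such that a point
    m1 (homogeneous pixel coordinates) in image 1 has epipolar line
    l2 = F @ m1 in image 2, and m2.T @ F @ m1 = 0 for true matches."""
    # Camera centre of P1: the null vector of P1.
    _, _, Vt = np.linalg.svd(P1)
    C = Vt[-1]
    e2 = P2 @ C                          # epipole in image 2
    e2x = np.array([[0.0, -e2[2], e2[1]],
                    [e2[2], 0.0, -e2[0]],
                    [-e2[1], e2[0], 0.0]])  # cross-product matrix
    return e2x @ P2 @ np.linalg.pinv(P1)
```

Restricting the pattern search to a band around l2 (intersected with the geometric constraint above) is what removes most false matches.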

3D Reconstruction


The final step is stereo reconstruction, where the left–right pairs are triangulated. Fig. 7 shows a result obtained with a 3D graphical indicator of LabVIEW. In some areas there are gaps in the cloud of points, since for those points the results of the stereo matching were less reliable.
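The triangulation of one left–right pair can be sketched with the standard linear (DLT) method discussed by Hartley and Sturm [4]; again a NumPy illustration with our own naming, not the authors' VI:

```python
import numpy as np

def triangulate(P1, P2, m1, m2):
    """Linear (DLT) triangulation of a corresponding pixel pair
    (m1 in view 1, m2 in view 2, inhomogeneous pixel coordinates)
    into a 3-D point. Each view contributes two rows of a homogeneous
    system A X = 0, solved via SVD."""
    A = np.vstack([m1[0] * P1[2] - P1[0],
                   m1[1] * P1[2] - P1[1],
                   m2[0] * P2[2] - P2[0],
                   m2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]    # de-homogenise
```

Applying this to every matched pair in the common area yields the cloud of points of Fig. 7.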

The software also allows the user to save the xyz coordinates of the reconstructed image points to a text file. In addition, by fitting a sphere to the reconstructed points with the method of least squares, the diameter of the orange in cm was also measured.
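The diameter measurement can be reproduced with a linear least-squares sphere fit: from |x − c|² = r² one gets x·x = 2 c·x + k with k = r² − c·c, which is linear in c and k. A sketch with synthetic data (our own, not the authors' implementation):

```python
import numpy as np

def fit_sphere(pts):
    """Least-squares sphere fit to an (n, 3) point cloud.
    Solves the linear system 2 c . x + k = x . x for the centre c
    and k = r^2 - c . c, then recovers the radius r.
    Returns (centre, diameter)."""
    A = np.hstack([2.0 * pts, np.ones((len(pts), 1))])
    b = (pts ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    c, k = sol[:3], sol[3]
    r = np.sqrt(k + c @ c)
    return c, 2.0 * r
```

Because only part of the peel is reconstructed, the fit uses a surface patch rather than a full sphere, which is exactly the situation the linearised formulation handles without an initial guess.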

As a future development, more than one rotation angle can be applied and the different measured surface patches merged in order to reconstruct the entire surface of the object. Fig. 8 shows a rendering of the reconstructed part of the surface, obtained with commercial software.


Fig.7: Point reconstruction


Fig. 8: Surface mesh

References


[1] R. Hartley, A. Zisserman, "Multiple View Geometry in Computer Vision", 2nd Ed., Cambridge University Press, 2004

[2] R.C. Gonzalez, R.E. Woods, "Digital Image Processing", 2nd Ed., Prentice Hall, 2002

[3] A. Paolillo, "Appunti di Misure basate sulla Visione" ("Vision-based Measurement" lecture notes), available online at http://www.misure.unisa.it/courses/mbv

[4] R.I. Hartley, P. Sturm, "Triangulation", Computer Vision and Image Understanding, Vol. 68, No. 2, pp. 146–157, 1997

Comments
ec01:

The project is interesting. Have you developed all the VIs for working with epipolar geometry?
The LabVIEW stereo-vision examples only present binocular stereo vision.

Piero_Castelluc:

We have developed all the VIs for working with epipolar geometry.

We did not use binocular stereo vision.
