09-06-2013 09:23 AM
I have a thresholded image and I need to find the real area of the black region. I am new to LabVIEW, please guide me. I have attached the image. :)
09-06-2013 09:59 AM - edited 09-06-2013 10:04 AM
Hello,
Your attached image is not really thresholded (you probably made a screenshot, right?). A basic thresholded image is binned into two classes, so that the pixels have values of 1 (foreground) and 0 (background).
I have attached an example of how to calculate the area of the black pixels (value = 0). It simply loops through every pixel, checking its value and adding 1 to a counter whenever the value equals 0.
The downside is that this is somewhat time-consuming.
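In text form, the pixel loop in the attached VI might look like this (a minimal Python/NumPy sketch, assuming the thresholded image is a 2-D uint8 array with values 0 and 255):

```python
import numpy as np

# Hypothetical thresholded image: 0 = black (foreground), 255 = background.
img = np.array([[0, 255,   0],
                [255, 0, 255],
                [0, 255, 255]], dtype=np.uint8)

# Per-pixel loop, as in the attached VI: add 1 for every zero-valued pixel.
area_px = 0
for row in img:
    for value in row:
        if value == 0:
            area_px += 1

print(area_px)  # 4 black pixels
```

In practice, the vectorized `np.count_nonzero(img == 0)` gives the same count without the explicit loop, which addresses the speed concern.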
Hope it helps.
Best regards,
K
09-06-2013 10:39 AM
How about computing a histogram, extracting the count of the 0-intensity pixels, and then converting to real-world area by multiplying the 0 count by the pixel calibration?
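That suggestion can be sketched in a few lines (a hedged NumPy sketch; the calibration factor below is an assumed example value, not from the original post):

```python
import numpy as np

# Hypothetical thresholded image: 0 = black, 255 = background.
img = np.array([[0, 255, 255],
                [0,   0, 255]], dtype=np.uint8)

# Two-bin histogram of a thresholded image: counts of 0s and 255s.
counts, _ = np.histogram(img, bins=2, range=(0, 256))
black_px = counts[0]            # number of pixels with intensity 0

# Assumed calibration: each pixel covers 0.05 mm x 0.05 mm.
mm_per_px = 0.05
area_mm2 = black_px * mm_per_px ** 2
print(area_mm2)  # 3 pixels * 0.0025 mm^2 = 0.0075 mm^2
```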
-AK2DM
09-06-2013 11:26 AM
09-07-2013 12:06 AM
Hi,
sorry for the inconvenience. Can you elaborate on how exactly to perform that histogram and get the real area values?
it will be very useful for me...
thanks a lot 🙂
09-07-2013 01:46 AM - edited 09-07-2013 02:10 AM
Hello,
use IMAQ Histogram.vi and wire the image reference. Since your image is thresholded, specify the number of classes as 2. Use Unbundle By Name (Cluster palette) on the "Histogram Report" output and select "histogram". This array will contain two values: the first (index 0) is the number of pixels equal to zero, the second is the number of pixels equal to 255.
Just use Index Array and extract the first value: your area in pixels. You need to calibrate the system in order to get real-world values. Take a look at the calibration examples already included; NI Vision also has a calibration training interface. You could probably use the imaging equation to calculate the size:
I/f = O/d,
where I is the size of the image, f the focal length, O the object size, and d the distance from the object.
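As a worked example of the imaging equation rearranged to O = I·d/f (all numbers below are assumed, not from the original post):

```python
# Hedged sketch of the imaging equation I/f = O/d, solved for the object size O.
f_mm = 25.0         # focal length (assumed)
d_mm = 500.0        # object distance (assumed)
px_size_mm = 0.005  # sensor pixel pitch (assumed)

side_px = 100                # black region spans 100 pixels on the sensor
I_mm = side_px * px_size_mm  # image size on the sensor
O_mm = I_mm * d_mm / f_mm    # real-world size of the object
print(O_mm)  # 10.0 mm
```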
Best regards,
K
09-07-2013 02:44 AM
Hi,
it is very kind of you, thanks for your suggestion. I am trying your idea. As I am new, I am a little slow; if I get it right, I will tell you.
Thanks, Mr. Klemen 🙂
09-07-2013 04:51 AM - edited 09-07-2013 04:52 AM
Hello,
me again. I think I should also give the following information, which I left out before:
If you want accurate measurements, you need to determine the distance between the camera and the object, and you obtain this distance by performing a calibration. Use a calibration grid placed in the same position as your object and ensure perpendicularity as much as possible (optical axis perpendicular to the calibration grid). Then use a program to calibrate your system and obtain the translation vector and rotation matrix that transform the camera coordinate system to the calibration-grid coordinate system (I suggest the Camera Calibration Toolbox for MATLAB, where this can be obtained quickly). After you (actually, the calibration algorithm) have calculated your translation vector (Tcg) and rotation matrix (Rcg), you need to solve the following equation:
So here you have a set of three equations and three unknowns, which can be easily solved.
zc is then the distance from the camera to the calibration grid. Remember that this only applies if the object is placed in the same position as the calibration grid.
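The equation image from the original post is not reproduced here, but the standard form maps a grid point Pg into camera coordinates as Pc = Rcg·Pg + Tcg. A minimal NumPy sketch, assuming illustrative values for Rcg and Tcg:

```python
import numpy as np

# Assumed calibration results (illustrative values only).
Rcg = np.eye(3)                      # rotation: camera roughly parallel to grid
Tcg = np.array([10.0, -5.0, 480.0])  # translation [mm]

Pg = np.array([0.0, 0.0, 0.0])       # origin of the calibration grid
Pc = Rcg @ Pg + Tcg                  # grid origin in camera coordinates

zc = Pc[2]  # distance from the camera to the grid plane [mm]
print(zc)   # 480.0
```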
Of course, you can measure the approximate distance with a tape measure; I don't know what accuracy you need.
I hope this helps.
Best regards,
K
09-07-2013 06:29 AM
Hi Klemen,
it's me again. I am not aware of any camera details.
Can you say how to extract the count of the black pixels and then convert to real-world area by multiplying the 0 count by the pixel calibration, using a histogram?
Sorry, man, I can't get this thing right.
09-07-2013 12:05 PM
Hello,
You have already calculated the number of black pixels using a histogram, right? That is the simple part.
This is how I would approach the problem:
OPTION 1: Calculate the linear magnification of your system using the equation M = f/(f-d) = hi/ho, where f is the focal length [mm] and d is the distance of the object from the lens [mm]. You can calculate the magnification by placing an object of known height (ho) at the distance d and counting the illuminated pixels (hi); then multiply the pixels by the physical pixel size in mm. Once the magnification is known, you can always calculate the object size from the number of illuminated pixels, but only for the specified distance d: if the distance changes, the magnification changes as well. If your area covers, let's say, 50 pixels, arrange it in 50 columns and 1 row. Calculate the object size first using the 50 pixels and second using 1 pixel (if the pixel size is different in the two directions, take this into account in the calculation). Then multiply the two calculated object sizes to get the area.
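OPTION 1 as a numeric sketch (all values are assumed for illustration; the magnification is measured from a known object rather than from f and d, as the paragraph above suggests):

```python
# OPTION 1 sketch: magnification from a known object at a fixed distance d.
px_size_mm = 0.005  # physical pixel size (assumed)
ho_mm = 10.0        # known object height placed at distance d (assumed)
hi_px = 400         # pixels the known object illuminates (assumed)

hi_mm = hi_px * px_size_mm  # image height on the sensor
M = hi_mm / ho_mm           # linear magnification at this distance

# Later, an unknown black area of 50 pixels, "arranged" as 50 columns x 1 row:
area_px = 50
OX_mm = (area_px * px_size_mm) / M  # object extent in X
OY_mm = (1 * px_size_mm) / M        # object extent in Y
area_mm2 = OX_mm * OY_mm
print(area_mm2)  # 1.25 mm * 0.025 mm = 0.03125 mm^2
```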
OPTION 2: Calibrate the camera system in order to obtain fx and fy [pix]. Calibration is really simple: use the NI Calibration Training Interface. Calculate the actual focal length in both directions:
fY[mm] = sensor_height[mm] * fy[pix]/sensor_height[pix]
fX[mm] = sensor_width[mm] * fx[pix]/sensor_width[pix]
Let's say that the area of the pixels on your image is Ap. Let's also say that we "arrange" it in Ap columns and 1 row. Calculate the actual object size in X direction:
OX[mm] = Ap * Pw[mm] * d[mm]/fX[mm]
and in the Y direction:
OY[mm] = 1 * Ph[mm] * d[mm]/fY[mm]
where Pw is the sensor element size in the horizontal direction and Ph in the vertical direction [mm].
Then calculate the real-world area by OX*OY [mm^2].
You also need the distance d here - the previous post describes how to calculate this.
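OPTION 2 put into numbers (sensor size, resolution, calibrated fx/fy, and distance d below are all assumed example values):

```python
# OPTION 2 sketch: real-world area from calibrated focal lengths.
sensor_w_mm, sensor_h_mm = 4.8, 3.6   # sensor size (assumed)
sensor_w_px, sensor_h_px = 960, 720   # resolution (assumed)
fx_px, fy_px = 1200.0, 1200.0         # from calibration (assumed)
d_mm = 480.0                          # camera-to-object distance (assumed)

Pw_mm = sensor_w_mm / sensor_w_px     # sensor element size, horizontal
Ph_mm = sensor_h_mm / sensor_h_px     # sensor element size, vertical

fX_mm = sensor_w_mm * fx_px / sensor_w_px  # actual focal length, X
fY_mm = sensor_h_mm * fy_px / sensor_h_px  # actual focal length, Y

Ap = 50                              # black area in pixels
OX_mm = Ap * Pw_mm * d_mm / fX_mm    # object size in X (Ap columns)
OY_mm = 1 * Ph_mm * d_mm / fY_mm     # object size in Y (1 row)
area_mm2 = OX_mm * OY_mm
print(area_mm2)  # 20 mm * 0.4 mm = 8.0 mm^2
```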
Also, I hope someone with more experience can comment on what I wrote above. Maybe there is a simpler solution? I am also learning as I go, so any comment would be appreciated.
Hope this helps you get started.
Best regards,
K