08-23-2013 08:02 AM
Hello,
I am trying to set up stereo vision with LabVIEW. I am a novice at stereo and I would appreciate some help.
I am using two cheap USB cameras (640x480 px, f = 3.85 mm, baseline d = 80 mm).
So far I have calibrated the two cameras.
Then I calibrated the stereo system:
The calibration quality and rectification quality are >0.9, so I think all is good so far...
The distance Z is 300-500 mm, so I calculated the number of disparities as 16.
The window size is set to 7.
and from images:
this is what i get
Problems start when I try to calculate depth... The result is exactly the same as the disparity image.
My goal is to create a 3D model from the depth image and overlay a texture on it.
Can I do that by using rectified stereo images?
I have attached the images that I've used for calibration, the calibrated images, and sample images for disparity.
Please help.
And if I made a mistake, please correct me 🙂
08-24-2013 06:12 AM - edited 08-24-2013 06:20 AM
Hello,
Can you please provide the original calibration grid images (non-thresholded)? Also, I looked at the "calibrated images" folder and the images look like this (never mind the border):
left
right
What exactly is this?
Best regards,
K
08-24-2013 11:04 AM
Thanks for looking into that.
As far as I understand, these calibrated images contain information about the camera and perspective; LabVIEW stores that in PNG files, and it is used later to calibrate the stereo system. I have no idea where the pixel data in the calibrated images comes from.
New set of images attached.
Best regards
pawhan11
08-26-2013 12:25 AM
Hello,
I am sorry I did not ask for this before, but please tell me the vertical and horizontal distance between the points on the calibration grid (real-world units, best in mm).
Best regards,
K
08-26-2013 01:47 AM
15 mm, both vertically and horizontally.
Best regards,
pawhan11
08-26-2013 07:23 AM - edited 08-26-2013 07:24 AM
Hello,
I have tried to calibrate with the grid images you have given, and the quality is indeed greater than 0.95. So the calibration should be OK, judging by the quality, reprojection error, etc.
How did you calculate the number of disparities as 16? That is a small number for such a close object scene.
Quickly calculating:
image width: 640 pix
fx: 950 pix
Z: 500 mm
stereo resolution = 760 pix
stereo focal length = 1130 pix
disparity = 1130 pix * 80 mm / 500 mm ≈ 180 pix
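That estimate can be checked with plain arithmetic. The sketch below uses the numbers from this post (stereo focal length 1130 px, baseline 80 mm, working range 300-500 mm) and rounds the largest disparity in the scene up to a multiple of 16, as block matchers typically require:

```python
import math

# Values from the calculation above (the working range is from the
# first post); disparity d = f_stereo * baseline / Z.
f_stereo = 1130.0            # stereo focal length, px
baseline = 80.0              # mm
z_near, z_far = 300.0, 500.0 # mm, working range of the scene

d_far = f_stereo * baseline / z_far    # smallest disparity (farthest point)
d_near = f_stereo * baseline / z_near  # largest disparity (nearest point)

# numDisparities must cover the largest disparity, rounded up to a
# multiple of 16.
num_disp = 16 * math.ceil(d_near / 16)

print(round(d_far, 1), round(d_near, 1), num_disp)  # -> 180.8 301.3 304
```

So even the farthest object at 500 mm needs roughly 180 px of disparity range — far more than 16.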
Anyway, I still cannot get a proper depth image with your images (by the way, the disparity image is inversely proportional to the depth, as the equation Z = f*b/d tells you). Also, the window size basically determines the precision (detail) and noise: a small window gives better detail but higher noise; a big window gives less detail and less noise.
I've lost so many hours trying to evaluate the NI stereo library, but I still cannot say anything about it. I will try once again (hopefully this weekend) to set up my own scene and calibrate the system in order to obtain the depth image. In the meantime, I suggest you try to follow the instructions that are posted here. Maybe this procedure can help you calibrate your system to obtain a somewhat normal depth image.
Sorry I could not be more helpful at this time. If I find out anything else, I will post it here. Please do the same, since I am very eager to see some results.
Best regards,
K
08-27-2013 08:52 AM
Thanks for looking into that.
I think I've managed to make it work; at least I get some acceptable depth values, thanks to this article:
http://zone.ni.com/reference/en-XX/help/370281U-01/imaqvision/stereo_vision_faq/
Klemen, on your blog you have overlaid a texture on a depth map.
Could you give me some pointers on how to do that?
08-28-2013 08:12 AM
Hello,
I have overlaid the texture on a point cloud, yes. I have an OCX control to do this in LV, but unfortunately I cannot share the code with you.
Alternatively, you can use the 3D picture control in LV. Please see the attached example (saved for LV2012). I took the 3D stereo example from LabVIEW (the cubes with letters).
Beware that it is poorly coded and very messy, but it should give you a basic idea. You should be able to overlay the texture over the points in the point cloud.
Please tell me if it works for you.
Anyway, if you have a texture image that is aligned with the depth data, you can perform any object detection on the 2D image and then extract the 3D information from the detected coordinate position.
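As a concrete illustration of that last point, here is a minimal back-projection sketch in Python. The intrinsics (fx, fy, cx, cy) are illustrative assumptions for a 640x480 rectified camera, not values from this thread: a pixel detected in the aligned texture image is lifted to 3D camera coordinates using its depth value.

```python
# Illustrative rectified intrinsics (assumed, not from this thread):
fx, fy = 1130.0, 1130.0  # focal lengths in px
cx, cy = 320.0, 240.0    # principal point for a 640x480 image

def pixel_to_3d(u, v, depth_mm):
    """Back-project pixel (u, v) with depth Z into camera coordinates.

    Uses the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
    """
    z = float(depth_mm)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return x, y, z

# Example: a feature detected at the image centre, 400 mm away,
# lies on the optical axis.
print(pixel_to_3d(320, 240, 400.0))  # -> (0.0, 0.0, 400.0)
```

So the 2D detection runs on the texture image, and only the winning (u, v) is looked up in the depth map and back-projected.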
Sorry for the messy post, but I have to be somewhere real quick.
I will get back to you if you have any additional questions.
Best regards,
K
09-23-2013 11:35 AM - edited 09-23-2013 11:39 AM
If you (or anyone else) are still interested in a 3D viewer, I have created a DLL that can be used in LabVIEW, based on the PCL visualization library. It takes the spatial coordinates X, Y, Z and RGB data to represent a point cloud in 3D space.
The dll can be found here:
https://decibel.ni.com/content/blogs/kl3m3n
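For anyone preparing XYZ + RGB data for a PCL-based viewer like the one above: PCL's PointXYZRGB type packs the colour into a single 32-bit field. The exact interface of the DLL is not shown in this thread, so treat the following as an assumption about the data layout, not its API — a small sketch of the packing:

```python
def pack_rgb(r, g, b):
    """Pack 8-bit R, G, B channels into one integer, PCL PointXYZRGB style."""
    return (int(r) << 16) | (int(g) << 8) | int(b)

def unpack_rgb(packed):
    """Recover the 8-bit R, G, B channels from the packed value."""
    return (packed >> 16) & 0xFF, (packed >> 8) & 0xFF, packed & 0xFF

packed = pack_rgb(200, 100, 50)
print(unpack_rgb(packed))  # -> (200, 100, 50)
```

Each point would then be four values: X, Y, Z, and the packed colour.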
Best regards,
K
11-20-2013 03:52 PM
Pawhan - can you share your code? I hope to give the stereo vision example a try in the next few days and am looking for a good working example.