10-24-2011 12:31 PM
Hello World
In this post I would like to deal with the topic of stereo vision for my application, which is to detect an object of interest and obtain its 3D coordinates. I have succeeded with one camera, and with two cameras the problem gets a little more interesting. Computing the stereo vision itself should be OK, as I have seen a tutorial on the NI site, but I need a guarantee that both cameras will process the information simultaneously. This is where I would need your advice.
At the moment my 2 identical cameras are on order, so I guess in the meantime I could run through my current code with all of you. This code detects a circular 1 pence coin and obtains its 2D coordinates. I took great advantage of the Vision Assistant block and the camera configuration block.
Small issues:
1)
When the VI is run, the live camera output shows overlays for more than one circle. You can verify this because the X,Y position of the circle and the bounding box keep changing; the circle parameters change roughly once every 2 seconds.
Is it the way I built my vision script? I performed geometric matching of the template and, in case the camera missed it, I also added a circle detection. How can I reduce this effect?
1b) Do you think I am doing too much in the script?
I actually tried a smarter algorithm: subtracting a background image (everything without the penny) from the image with the penny in it. This worked very well, and afterwards I performed a circle detection.
However, it failed when the penny was on the blue line, because the line was subtracted from the image along with the background. So I scrapped it, since my end goal is to detect a ball in a game of football and determine whether it crossed the line; if the ball is not detected whenever part of it touches the line, that algorithm is no use to me. However, unless you can suggest something to add on to the current one, I may go back to this algorithm...
2)
A friend told me I should program this without using the vision block and the camera block. Would doing so significantly increase the speed at which my object is detected? In the end I want to track a football, which can travel at around 60 mph on average, so the vision processing needs to be really fast. Based on this, would you agree that I should avoid the vision block in my attached zip?
3)
When performing the stereo vision from my current code, could I simply copy the camera configuration block and the same Vision Assistant block,
and then wire the camera 1 circle coordinates and the camera 2 circle coordinates into a block that performs the stereo vision?
In the stereo vision block I would only need to use the formula:
D = b*f/d, where b is the distance between the cameras (known), d is the disparity, i.e. the difference in position of the same object between the two cameras (known), and f is the focal length of the camera. Actually I don't know how to get f; maybe you could tell me how to obtain it.
From this we would know the depth, and thus we would have X, Y, Z coordinates!
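As a sanity check, the depth formula can be sketched in a few lines of Python (not from the thread; the function name and the example numbers are made up):

```python
def depth_from_disparity(baseline_m, focal_px, disparity_px):
    """Depth Z = b * f / d for a rectified stereo pair.

    baseline_m:   distance between the two camera centres (metres)
    focal_px:     focal length expressed in pixels
    disparity_px: x_left - x_right for the same point (pixels)
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return baseline_m * focal_px / disparity_px

# Hypothetical numbers: 10 cm baseline, 800 px focal length, 40 px disparity
print(depth_from_disparity(0.10, 800, 40))  # 2.0 metres
```

Note that f here must be in pixels (not millimetres) so that it cancels against the pixel-valued disparity.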
Above are 3 queries, which have subqueries. Sorry for that.
I hope to hear from you soon.
Saqib
PS: In the vision block you can see the current algorithm work on most of my images. The images you should import are inside the images folder, and the template you should use is the "good penny", if you wish to see what I mean. And make sure you configure it to use your own camera...
10-25-2011 09:38 PM
@saqib_zahir wrote: f = focal length of camera - actually I don't know how to get f? Maybe you could tell me how to obtain it.
What model cameras are you using?
10-26-2011 03:58 AM
Hi, thanks for the reply. It is a basic webcam called:
Creative Vista Plus webcam (see below)
The front of the camera has a focus ring, which I assume also affects the focal length. However, I am surprised they don't specify the focal length on the camera itself, on the packaging, etc. I tried searching on the internet and I cannot seem to find it.
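For what it's worth, if the spec sheet never turns up: stereo actually needs the focal length in pixels, and that can be estimated from one calibration shot of an object of known width at a known distance, using the pinhole model f = (P × D) / W. A hedged Python sketch (the football width and distance below are made-up numbers; for a rigorous value, a full camera calibration such as OpenCV's `calibrateCamera` is the standard route):

```python
def focal_length_px(pixel_width, distance_m, real_width_m):
    """Estimate focal length in pixels from a single calibration shot:
    image an object of known width W at a known distance D, measure its
    width P in pixels, then f = (P * D) / W (pinhole camera model)."""
    return pixel_width * distance_m / real_width_m

# Hypothetical: a 0.22 m football, 2 m from the lens, spans 88 px in the image
print(focal_length_px(88, 2.0, 0.22))  # ~800 px
```

Because this f is already in pixels, it plugs straight into D = b*f/d with a pixel-valued disparity.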
10-26-2011 03:23 PM
I will phone the company directly tomorrow and see what they say. To make things worse, I also ordered 2 Logitech cameras (still on order) that don't specify the focal length either. Looks like I have a busy day calling companies tomorrow.
Meanwhile, a short question: how can I obtain the speed of the object I am interested in?
E.g., a round coin moves in the live camera feed, and suppose I can obtain its 2D coordinates, updated continuously as the object moves. I can compute the distance between one point and the next, but how on earth do I get the time it has taken from the webcam?
e.g. v = sqrt((Δx)^2 + (Δy)^2) / Δt
Is there a block in LabVIEW I can drop in to start a timer, read it at any time, and let it run continuously? Or, even simpler, can I not access the time from the webcam directly? Or do I have to start a timer whenever the program is run?
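On the timing question, one host-side approach (a sketch, not a LabVIEW answer): timestamp each detection as it arrives using a monotonic clock, then divide displacement by elapsed time. In Python it might look like this (the optional `t` argument exists only so the maths can be exercised offline):

```python
import math
import time

class SpeedTracker:
    """Estimate object speed from successive (x, y) detections.
    Coordinates are in pixels here; multiply the result by a
    pixels-to-metres scale factor for a real-world speed."""

    def __init__(self):
        self.prev = None  # (x, y, timestamp) of the last detection

    def update(self, x, y, t=None):
        # monotonic clock: unaffected by wall-clock adjustments
        now = time.monotonic() if t is None else t
        speed = None
        if self.prev is not None:
            px, py, pt = self.prev
            dt = now - pt
            if dt > 0:
                speed = math.hypot(x - px, y - py) / dt  # pixels per second
        self.prev = (x, y, now)
        return speed
```

Calling `update` once per processed frame gives the frame-to-frame speed; note this measures when the frame was *processed*, not when the sensor exposed it, so latency jitter adds noise.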
Thanks. PS: it would be great if you could comment on my original script too, even if it means asking me more questions...
10-26-2011 05:30 PM
Guys, I solved one sub-problem and another one appeared.
I solved the issue I had when the circular object was on the line.
I first loaded the image of the ball plus background and extracted the blue colour plane; I stored this in buffer 1. Then I loaded the background from file, extracted its blue colour plane, and stored the result in buffer 2. I then do a subtraction:
buffer 2 - buffer 1, so anything on the line (that the ball covers) is left behind. I store this result in buffer 3.
Then I load buffer 1 and do the subtraction:
buffer 1 - buffer 2 = the circular shape is left behind, but the image bits over the line are subtracted (since the line is included in the background).
Then I add buffer 3 to this picture and I get my full ball! I then do thresholding, fill in some holes, and then shape detection.
I therefore have my ball radius and its 2D coordinates 🙂
BUT
It's a very long algorithm. It runs at 2 frames per second on average, which is too slow for my application.
Any suggestions for speeding up this algorithm would be greatly appreciated.
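For reference, the buffer pipeline described above fits in a few NumPy array operations, which may also hint at where the time goes. This is a sketch under assumptions (the blue plane carries the line, and the threshold of 40 is a guess to tune), not the original Vision Assistant script:

```python
import numpy as np

def find_ball(scene_bgr, background_bgr, thresh=40):
    """Sketch of the buffer-subtraction pipeline from the post.
    Arrays are (H, W, 3) uint8 in BGR order; returns (x, y, radius)
    in pixels, or None if nothing survives the threshold."""
    buf1 = scene_bgr[:, :, 0].astype(np.int16)       # blue plane, ball present
    buf2 = background_bgr[:, :, 0].astype(np.int16)  # blue plane, background only

    buf3 = np.clip(buf2 - buf1, 0, 255)  # line pixels the ball hides
    ball = np.clip(buf1 - buf2, 0, 255)  # ball minus the bits over the line
    full = np.clip(ball + buf3, 0, 255)  # recombined, whole ball

    mask = full > thresh
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    radius = np.sqrt(mask.sum() / np.pi)  # radius of an equal-area circle
    return xs.mean(), ys.mean(), radius
```

Since each step is a whole-array operation, 2 fps suggests the overhead is in acquisition or per-step image copies rather than the arithmetic itself; fusing the steps as above is one way to test that.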
Saqib
PS: Please open the zip, use Vision Assistant, open my enclosed script, and load the images. You will see the algorithm works - for the correct images, that is!
10-29-2011 02:58 PM
Hello everybody,
My cameras are on their way, but in the meantime I hit a major problem...
I tried the same algorithm (the background subtraction) I recently posted in Vision Assistant, and it does not work well with a real football on some carpet (at my home). I would have thought the algorithm would work, but it didn't. The thresholding was a problem, as it did not fully outline a circle. I am very disappointed at the moment, because it means all my work on the penny will not carry over to a football, and I have to go back to the beginning again.
Any advice would be greatly appreciated.
Saqib
11-02-2011 11:12 PM
Have you tried different threshold values?
11-03-2011 08:55 AM
1) Yes, I tried thresholding on the above image. Try it; it is difficult on this ball. I always end up with a deformed blob and its centre x, y position, but it is not good enough. Are there any vision techniques I can exploit other than background subtraction? Pattern matching works, but on camera the ball can appear big or small, so pattern matching will not work. Geometric matching resolved the size issue, but it takes too much processing time. Help, guys...
2) I got my cameras now, and to see if they worked simultaneously, I built a VI.
I had a while loop with a stop control, and in the while loop I simply put two acquisition blocks (each correctly configured to its own Logitech webcam),
and for the image out I used an indicator to see it. It worked the first time, but after clicking the stop button I got continuous error messages saying "Time ran out...", and every time I click run it keeps outputting this error. Why, guys?
3) I emailed the manufacturers for the focal length; when I get this information, I will have a go at the stereo vision. I will ask more on this when I get the information.
Thanks
Saqib
11-03-2011 08:59 AM
4) Also, how can I capture the time from the webcams in LabVIEW? (to work out the speed)