Hello everyone,
I've been trying to code a Global Localization problem using a Kinect to simulate a Laser Range Finder. However, I cannot perform a good Scan Match between what the Kinect sees and a Blueprint of an office unless I first translate from pixel coordinates to real-world coordinates.
When I looked for a solution, most of what I found is for C# users, who simply use the "Skeletal Map", which gives you the real-world coordinates of the pixels. Using trigonometry you can get the same thing: given the Kinect's horizontal FOV angle theta and the depth value of the object you are trying to measure, the width of the visible frame at that depth would be
W = tan(theta / 2) * h * 2
Where:
- W = width of the field of view at depth h
- theta = horizontal field-of-view angle (60 degrees)
- h = depth value (distance from the sensor)
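
To make the math concrete, here is a minimal sketch in Python of the per-pixel mapping I mean (it should translate directly to a LabVIEW Formula Node). The 60-degree horizontal FOV is the value from the formula above; the 640x480 resolution, the 45-degree vertical FOV, and the centered principal point are assumptions you would replace with your sensor's actual values.

```python
import math

# Assumed Kinect depth-image parameters -- replace with your sensor's values.
FRAME_W, FRAME_H = 640, 480         # depth image resolution in pixels (assumed)
HFOV = math.radians(60.0)           # horizontal FOV, from the formula above
VFOV = math.radians(45.0)           # vertical FOV (assumed)

# Pinhole model: half the image width subtends half the FOV,
# so the focal length in pixels is f = (W/2) / tan(theta/2).
FX = (FRAME_W / 2.0) / math.tan(HFOV / 2.0)
FY = (FRAME_H / 2.0) / math.tan(VFOV / 2.0)
CX, CY = FRAME_W / 2.0, FRAME_H / 2.0   # principal point assumed at image center

def frame_width_at_depth(h):
    """Width of the visible frame at depth h: W = 2 * h * tan(theta / 2)."""
    return 2.0 * h * math.tan(HFOV / 2.0)

def pixel_to_world(u, v, depth):
    """Map depth pixel (u, v) with depth in meters to camera-frame
    coordinates: X right, Y down, Z forward (meters)."""
    x = (u - CX) * depth / FX
    y = (v - CY) * depth / FY
    return x, y, depth

# Simulated laser beam: convert one column of a depth-image row
# to a bearing angle and a range.
def column_to_beam(u, depth):
    angle = math.atan2(u - CX, FX)      # bearing of column u, in radians
    rng = depth / math.cos(angle)       # depth is measured along the optical axis
    return angle, rng
```

With that, each column of one depth-image row becomes one beam of the simulated LRF, which gives the scan matcher metric ranges to compare against the blueprint.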
This sounds plausible, but I would like to know if anyone has dealt with this problem before and has a solution implemented in LabVIEW, because I don't want to reinvent the wheel.
Thanks in advance.