University: Georgia Institute of Technology
Team Member(s): Lisa Hicks and Valerie Bazie
Faculty Advisors: Fumin Zhang
Email Address: lhicks7@gatech.edu and vbazie3@gatech.edu
Title: EKF-based SLAM and Obstacle Avoidance Robotics Environment Simulation
Description:
Using the LabVIEW Robotics Environment Simulator, we have implemented an Extended Kalman Filter (EKF) based Simultaneous Localization and Mapping (SLAM) algorithm. Our project also includes an obstacle avoidance controller that navigates the robot around obstacles. Figure 1 shows the test environment used. The robot used in the simulation is the NI Starter Kit 2.0, equipped with a U-blox 5 series GPS/IMU, Hokuyo URG Series LIDAR, and Axis M1011 camera.
Figure 1: Robotics Environment Simulator Screen-shot
Products:
NI LabVIEW 2012
NI LabVIEW Robotics Module - Robotics Environment Simulator
NI Starter Kit 2.0
U-blox 5 series GPS/IMU
Hokuyo URG Series LIDAR
Axis M1011 Camera
The Challenge:
SLAM is a popular, yet difficult to implement, technique used to map an unknown environment while tracking a robot’s location within that environment. Although noisy sensor data are useful for creating a rough estimate of an environment, passing all sensor output through a Kalman filter is an advantageous way to reduce inaccuracies. Since most robot dynamics are nonlinear, we use an Extended Kalman Filter rather than a standard Kalman filter. We implemented an EKF-based SLAM algorithm in order to navigate through a channel while avoiding obstacles in the middle of the channel. This complex algorithm involves several operations. First, we detect potential obstacles; this step uses LIDAR data to determine the location of a potential object and camera data to determine the color of the object. Then, we accurately determine the location of the object by filtering the different sensor data through an Extended Kalman Filter. Finally, using a drive controller, the robot has to successfully navigate through the channel.
The Solution:
Object Detection:
LIDAR Object Extraction:
To identify whether an object is detected, we must analyze the difference between consecutive data points from one scan of LIDAR data. For objects in the simulation, two parameters must be considered: (1) the distance threshold, r, and (2) the angle span, θ. The distance threshold is applied to the difference in distance between two consecutive data points, and the angle span is defined as the difference in angle between the beginning and end of an object. These parameters vary depending on the environment and must be set experimentally. An illustration of LIDAR data can be seen in Figure 2. A large difference in distance represents a gap in the data, denoting the beginning or end of a potential object. Since noise is to be expected in the LIDAR data, we must further inspect the data to confirm that a potential object satisfies both parameters. To do this, a feature-label pattern recognition method is used: each point of a LIDAR scan is given a numerical feature value from 1 to 6. This can be seen in Figure 2, where each data point is labeled in red with its corresponding feature number.
Figure 2: LIDAR Data with points labeled in red
Figure 3 shows the flow of the data labeling method.
Figure 3: Pattern Matching Flow Chart
Once the data points of an entire LIDAR scan are labeled, a pattern recognition algorithm is used to identify where actual objects are located in the scan. The template used to identify an object consists of a sequence of features represented by (6, 1, 2, 2, ..., 2, 6). In Figure 2, the data represented by (6, 1, 2, 2, 6) is an object, while the data (6, 1, 5) is considered noise, or an object too far away.
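As a rough, text-based sketch of this extraction step (the actual implementation is a LabVIEW VI; this simplified version collapses the six feature labels into a gap/body distinction, and the function and parameter names are hypothetical):

    def extract_objects(scan, r_threshold, min_angle_span):
        """Sketch of the LIDAR object extraction described above.
        scan is a list of (angle, distance) points from one LIDAR sweep;
        r_threshold and min_angle_span are the experimentally tuned
        distance-threshold and angle-span parameters."""
        objects = []
        start = None
        for i in range(1, len(scan)):
            gap = abs(scan[i][1] - scan[i - 1][1]) > r_threshold  # large range jump
            if gap and start is None:
                start = i                                    # gap opens a candidate object
            elif gap and start is not None:
                span = abs(scan[i - 1][0] - scan[start][0])  # angle covered by the run
                if span >= min_angle_span:
                    objects.append((start, i - 1))           # wide enough: keep as object
                # a narrower run is treated as noise or an object too far away
                start = i
        return objects

In the full implementation, the (6, 1, 2, ..., 2, 6) template plays the role of the gap/body test above.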
Color Object Extraction:
Image processing is performed using the National Instruments Vision Assistant software.
Once an initial image is obtained, several separate buffered images are saved, one for each color of object sought. For each color, the same process is repeated from the saved original image. First, blocks of color are detected by setting acceptance thresholds in the Hue Saturation Intensity (HSI) color space. HSI is used instead of the Red Green Blue (RGB) space because HSI has a lower dependence on background lighting. This filter produces a binary image consisting of passed and not-passed pixels. If there is a large amount of natural noise close in color to the buoy, the erode function is used. Erode removes textured surfaces by shrinking them significantly faster than smooth surfaces; this is useful since natural surfaces tend to be more textured than buoys. Next, a filter on the area of each blob is applied to all images, and any blob smaller than a predetermined threshold is rejected. Each of the remaining blobs is bounded by a rectangle. The result is shown in Figure 4.
Figure 4: Filtered Camera Data with boxes around detected objects.
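As an illustrative stand-in for the pipeline just described (we use OpenCV-style calls purely for sketching; OpenCV thresholds in HSV rather than HSI, and the bounds and area threshold below are placeholder values, not the project's):

    import cv2
    import numpy as np

    def find_color_blobs(bgr_image, lower_hsv, upper_hsv, min_area=200):
        """Threshold, erode, area-filter, and box the blobs of one color."""
        hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)        # color-space conversion
        mask = cv2.inRange(hsv, lower_hsv, upper_hsv)            # binary pass/fail image
        mask = cv2.erode(mask, np.ones((3, 3), np.uint8))        # shrink textured noise
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        return [cv2.boundingRect(c) for c in contours
                if cv2.contourArea(c) >= min_area]               # (x, y, w, h) boxes

    # Example call for a red-ish buoy color (placeholder bounds):
    # boxes = find_color_blobs(frame, np.array([0, 120, 70]), np.array([10, 255, 255]))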
The location of the center of the bottom edge of each rectangle is exported along with the color from the filter. This data is then combined with the camera properties (angle, height, field of view, and resolution) to approximate the direction to each blob. This direction is used to match each object in the camera's field of view with the objects detected by the LIDAR.
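The direction computation itself is a small geometric step; a minimal sketch, assuming the optical axis points at the image center and that degrees map linearly to pixels (the variable names and the small-angle approximation are ours, not the project's):

    def pixel_to_bearing(pixel_x, image_width, horizontal_fov_deg):
        """Approximate bearing of a blob (degrees, positive to the left) from the
        column of its bounding box's bottom-center."""
        offset = (image_width / 2.0) - pixel_x       # pixels left of the image center
        return offset * (horizontal_fov_deg / image_width)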
SLAM:
To use EKF-SLAM, we must create a state-space model for the system. The state-space model consists of a state equation and an observation equation, whose inputs are a state vector x and an input vector u. We use the position and heading of the robot, as well as the coordinates of each detected buoy, to construct the state vector. The input vector is composed of the linear and angular velocity of the robot. At each time step, sensor data is used to update both equations, which incorporate zero-mean Gaussian random noise vectors. Figure 5 shows the x and u vectors, along with the state and output equations.
Figure 5: State Vector x, Input Vector u, state and output equations
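The state and output equations themselves are given in Figure 5. As a hedged sketch of how the prediction step of such a model could look in code, assuming the standard unicycle motion model for the robot pose and static landmarks (the function name and NumPy formulation are ours, not the project's):

    import numpy as np

    def ekf_predict(x, P, u, dt, Q):
        """One EKF prediction step. x = [x_r, y_r, theta_r, x_1, y_1, ...],
        u = [v, w] (linear and angular velocity), P is the state covariance,
        Q the process-noise covariance, dt the sample time."""
        v, w = u
        theta = x[2]
        x = x.copy()
        x[0] += v * dt * np.cos(theta)               # robot x
        x[1] += v * dt * np.sin(theta)               # robot y
        x[2] += w * dt                               # robot heading
        F = np.eye(len(x))                           # motion-model Jacobian:
        F[0, 2] = -v * dt * np.sin(theta)            # landmarks are static, so only
        F[1, 2] =  v * dt * np.cos(theta)            # the pose block differs from I
        P = F @ P @ F.T + Q
        return x, P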
The GPS provides the latitude and longitude of the robot, which are then converted into Cartesian coordinates. The IMU yaw reading represents a true north magnetic bearing, which is used directly as the vehicle heading. To compute the coordinates of objects, we use angle and distance data from the LIDAR, along with the Cartesian coordinates and heading of the robot. By sending the GPS, IMU, and LIDAR data through an EKF, we reduce the amount of error associated with each sensor, which allows us to accurately map each detected object.
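A minimal sketch of this object-coordinate computation, assuming the LIDAR bearing is measured relative to the robot's heading and that the heading has already been converted into the map frame (the angle conventions are an assumption on our part):

    import math

    def landmark_position(robot_x, robot_y, robot_heading, lidar_range, lidar_bearing):
        """Project one LIDAR return (range, bearing relative to the robot) into
        global Cartesian coordinates using the robot's estimated pose."""
        angle = robot_heading + lidar_bearing
        return (robot_x + lidar_range * math.cos(angle),
                robot_y + lidar_range * math.sin(angle))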
Navigation Controller:
The navigation controller used is based on a curve tracking algorithm. Curve tracking is a fairly common control approach for autonomous vehicles, in which the robot moves along a predefined curve or line. The first step in this algorithm is to select the objects located in front of the robot. These objects are then sorted by distance, and their colors are determined by comparing the camera and LIDAR data. A LIDAR detection and a camera detection are matched if the object's angle determined from the camera is within a certain tolerance of the angle measured by the LIDAR. In this application the controller uses a constant linear velocity and an angular velocity determined by the equation in Figure 6.
Figure 6: Angular Velocity used in Navigation controller
Here k1 is the curvature of the curve; ϕ is the counterclockwise angle from the robot to the curve; ρ is the distance between the robot and the tangent to the curve; ρ₀ is the desired separation between the robot and the object; and μ and κ are the differential and proportional gains, respectively. The curve tracking algorithm applied in the channel navigation depends on the color of each detected object; therefore, there are seven different cases.
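A minimal sketch of the matching and sorting step described above (the angle tolerance, the data layouts, and the placeholder for the Figure 6 control law are our assumptions):

    ANGLE_TOLERANCE_DEG = 5.0     # camera/LIDAR association tolerance; must be tuned

    def match_and_sort(lidar_objects, camera_objects):
        """lidar_objects: list of (bearing_deg, distance) in front of the robot;
        camera_objects: list of (bearing_deg, color). Returns (distance,
        bearing_deg, color) tuples sorted nearest-first."""
        matched = []
        for l_bearing, distance in lidar_objects:
            for c_bearing, color in camera_objects:
                if abs(l_bearing - c_bearing) <= ANGLE_TOLERANCE_DEG:
                    matched.append((distance, l_bearing, color))
                    break
        return sorted(matched)

    # The color of the nearest matched object then selects one of the seven cases,
    # and the angular velocity is computed from the Figure 6 control law (not shown).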
Hi, I noticed that there is no Throttle Values.ctl file and no Steering Values.ctl file.
When I run the pond environment simulation, the robot just moves forward.
I also think the LIDAR may not be working, since I cannot see the local map.
I think some dependencies are missing?
I am doing a robotics project where we need a SLAM algorithm, and we are using a LIDAR. We need something like gmapping in ROS, but I am new to LabVIEW, so I have not been able to find many useful libraries yet.
Hi Lisa and Valerie,
I am preparing for NI ARC 2014 and I love this document.
Regarding the mapping and path finding, could you please give some hints on how to read your code?
It is a huge project to me, and I am not sure which VI I should study first.
Thank you for sharing and for your help.
Thanks!