GMapping | ROS with Webots | Robotic Software PicoDegree | Part 4 | Best mapping package

Soft illusion
10.1K views · 3 years ago
0:15 Introduction
2:35 Glimpse of GMapping
3:59 Implementation
8:57 Start GMapping
10:52 Mistake
12:50 Localization
14:50 GPS and IMU
15:51 Localization Node (base_link)
17:59 Lidar link
19:58 Map server package
22:58 GMapping parameters

#Gmapping #SLAM #Localization #Mapping

Introduction video to Localization, Mapping, SLAM, GMapping: What is GMapping? | Theory | ROS | E...

The master launch file of the Robotics Picodegree launches the following:
1. Home Webots world, which contains a home and our stark robot. (1st video)
2. Robot_description - the robot URDF or xacro file describing the different frames and links of the robot. (2nd video)
3. Teleop - drives the robot's wheels and actuators from keyboard keys, and publishes the dynamic transforms of the continuous joints on the robot, such as the linear joint, the camera, etc. (3rd video)
4. Mapping (present video) - a localization node that publishes the base_link and lidar link transforms required as inputs to gmapping; afterwards we load the saved map, in YAML format, using the map_server package.
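A master launch file along these lines would bring up all four pieces. This is a minimal sketch; the package, file, and node names are assumptions for illustration, not the exact ones used in the series (only `gmapping`/`slam_gmapping` is the real SLAM node):

```xml
<launch>
  <!-- 1. Webots home world with the robot (assumed package/launch names) -->
  <include file="$(find stark_webots)/launch/home_world.launch" />

  <!-- 2. Robot description: load the URDF/xacro onto the parameter server -->
  <param name="robot_description"
         command="$(find xacro)/xacro $(find stark_description)/urdf/stark.xacro" />

  <!-- 3. Teleop: keyboard control + dynamic joint transforms -->
  <node pkg="stark_teleop" type="teleop_node.py" name="teleop" output="screen" />

  <!-- 4. Mapping: localization node feeding transforms into gmapping -->
  <node pkg="stark_localization" type="localization_node.py" name="localization" />
  <node pkg="gmapping" type="slam_gmapping" name="slam_gmapping" />
</launch>
```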

What does the gmapping map topic actually represent?
As the robot moves around, the map continues to build. This is a 2D occupancy grid: the environment is represented as a regular grid of cells, where the value of each cell encodes its state as free, occupied, or undefined, i.e. unmapped. Here, the greenish-grey cells have unknown occupancy values, the white cells are free, and the black cells are occupied.
The occupancy value of a cell is determined probabilistically, using the laser data to estimate the distances from the lidar to surrounding objects. Every time a new measurement is made, the cell value is updated with a Bayesian approach. The resulting model can be used directly as a map of the environment in navigation tasks such as path planning, obstacle avoidance, and pose estimation. The main advantages of this method are that it is easy to construct and can be made as accurate as necessary.
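The Bayesian cell update described above is usually implemented in log-odds form, so each new laser observation becomes a simple addition. A minimal sketch (the update constants are illustrative assumptions, not gmapping's actual values):

```python
import math

# Log-odds increments for a single laser reading (illustrative values).
L_OCC = math.log(0.7 / 0.3)   # cell observed as occupied (laser hit)
L_FREE = math.log(0.3 / 0.7)  # cell observed as free (ray passed through)
L_PRIOR = 0.0                 # unknown cell: probability 0.5

def update_cell(log_odds, hit):
    """Bayesian update of one grid cell given a new laser observation."""
    return log_odds + (L_OCC if hit else L_FREE)

def probability(log_odds):
    """Convert log-odds back to an occupancy probability in [0, 1]."""
    return 1.0 - 1.0 / (1.0 + math.exp(log_odds))

# A cell starts unknown (p = 0.5); three consecutive 'hit' readings
# push its occupancy probability toward 1.
cell = L_PRIOR
for _ in range(3):
    cell = update_cell(cell, hit=True)
print(round(probability(cell), 3))  # → 0.927
```

Representing the cell as log-odds keeps the update numerically stable and makes repeated measurements cheap; the probability is only recovered when the map is rendered or thresholded into free/occupied/unknown.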

How do we provide a pose estimate of the robot in Webots?
Robot localization algorithms use information from different sensors, which range from relative to absolute positioning measurements. Relative measurements come from sensors such as wheel odometry and the IMU, whose readings are accumulated incrementally, in conjunction with the robot's motion model, to find the robot's current location. Though these methods are very precise, wheel slippage and drift cause the error to grow over time.
Absolute measurements are more direct and mostly come from sensors that estimate distance by calculating the phase difference between the transmitted and the reflected wave; GPS also falls into this category. These methods are independent of previous location estimates, so the error does not grow unbounded as in the previous case.
Each sensor has its own limitations. That is why choosing sensors that complement each other for the application, together with good sensor fusion techniques, is a critical decision.
A simple Google search will show that there are several approaches, such as Kalman filters, particle filters, Markov localization, etc., for fusing sensor information into an estimate of the robot's position and orientation within the map. These techniques use probabilistic algorithms to deal with problems such as noisy observations and sensor aliasing, producing not just an estimate of the robot's pose but also a measure of the uncertainty/confidence associated with that estimate. In our project we use the GPS and IMU sensors available in Webots, mounted on the robot, to provide the position and orientation of the robot respectively.
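In 2D, the core of such a localization node reduces to packaging the GPS position and the IMU yaw into the translation and quaternion that a tf broadcaster would publish for base_link. A minimal, ROS-free sketch of that arithmetic (function names and the dict layout are illustrative, not the node's actual API):

```python
import math

def yaw_to_quaternion(yaw):
    """Quaternion (x, y, z, w) for a rotation of `yaw` radians about the z-axis."""
    return (0.0, 0.0, math.sin(yaw / 2.0), math.cos(yaw / 2.0))

def make_transform(gps_xy, imu_yaw):
    """Translation + rotation a tf broadcaster would publish for base_link."""
    x, y = gps_xy
    return {"translation": (x, y, 0.0),
            "rotation": yaw_to_quaternion(imu_yaw)}

# Robot at (2.0, 1.5), facing 90 degrees to the left:
tf = make_transform((2.0, 1.5), math.pi / 2)
print(tf["rotation"])  # 90° about z → roughly (0, 0, 0.707, 0.707)
```

In the actual node this dictionary would be copied into a `geometry_msgs/TransformStamped` and sent with a tf broadcaster at each sensor update.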

During the process of mapping:
1. It’s best to complete the mapping process with the lidar at a fixed location with respect to the base_link; of course, you can change this static transform later once the map is made.
2. While performing SLAM, picking the robot up and moving it without the wheels actually turning is a big problem. The odometry data stays constant, leading to a mismatch between where the robot actually is and where it thinks it is, and therefore to wrong pose estimation or localization.
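Once the map looks complete, it can be saved with `rosrun map_server map_saver -f home_map`, which writes an occupancy image (`home_map.pgm`) plus a YAML descriptor that map_server later reloads. A typical descriptor looks like this (the values shown are illustrative defaults, not the ones from this project):

```yaml
image: home_map.pgm          # occupancy image: white = free, black = occupied
resolution: 0.05             # metres per pixel
origin: [-10.0, -10.0, 0.0]  # pose of the lower-left pixel in the map frame [x, y, yaw]
negate: 0                    # 0 = standard colour convention
occupied_thresh: 0.65        # pixels darker than this count as occupied
free_thresh: 0.196           # pixels lighter than this count as free
```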

As you experiment with different SLAM algorithms, you will realise that these techniques come with many challenges. Even small errors, such as odometry drift, can have large effects on later position estimates. To build a consistent map, the robot has to establish correspondences between present and past positions, which is particularly difficult when closing a loop. Factors such as wheel slippage, robot speed, and map update frequency can all have varying effects on the map output, and tuning the different parameters affects not just the time taken but also the accuracy of the generated map.
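The gmapping parameters from the chapter list are set on the slam_gmapping node, and a handful of them drive the trade-offs just described. A hedged sketch (these are real slam_gmapping parameters, but the values are illustrative, not tuned for this robot):

```xml
<node pkg="gmapping" type="slam_gmapping" name="slam_gmapping">
  <param name="map_update_interval" value="5.0" />  <!-- seconds between map updates -->
  <param name="maxUrange" value="10.0" />           <!-- usable laser range (m) -->
  <param name="particles" value="30" />             <!-- particle-filter size -->
  <param name="linearUpdate" value="0.5" />         <!-- process a scan every 0.5 m -->
  <param name="angularUpdate" value="0.25" />       <!-- ...or every 0.25 rad -->
  <param name="delta" value="0.05" />               <!-- map resolution (m/cell) -->
</node>
```

Lower `delta` and more `particles` generally improve accuracy at the cost of CPU time; larger update thresholds make mapping cheaper but coarser.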
Published 3 years ago, on 1400/06/24.