
Lidar with TurtleBot


ROS and SLAM


The purpose of this project is to explore and map an unknown area. We use packages we created earlier in the term, build several new ones, and rely on the gmapping package to construct the map. The project consists of three phases. In phase one, the robot calculates the C-space of the map, finds a frontier to explore, plans an optimal path to that frontier, and navigates there while avoiding any potential obstacles. It repeats this sequence until the map is complete, then saves the map to a file. In phase two, the robot uses the saved map to navigate back to its starting position, again avoiding any obstacles in its path. Finally, in phase three, the robot navigates to a 'goal' position specified by the user, once more avoiding obstacles and using the previously stored map to traverse the maze.


As specified in the introduction, this project has three phases, each requiring the implementation of various packages or code elements. Some phases shared pieces of the puzzle, while others introduced new ones. The following is a breakdown of those elements and how they were implemented.


Our team began the project in the simulator, which allowed rapid prototyping and architecture development without the noise and errors that commonly occur in real-world conditions. We used it to implement our first frontier algorithm, rapidly change and evaluate different A* heuristics, and build confidence in the mapping process before moving on to an actual robot with messier LIDAR scans and potential network issues. All of phase 1 was completed on the simulated map before we moved to a robot.

Transition to the Real-World Robot

Upon completing phase 1 in simulation, our team moved to testing on the actual robot across various fields. The big adjustment was modifying the odometry functions to use the transform published by gmapping. The robot would then consistently re-localize inside the map, which helped significantly with avoiding obstacles. The generated paths remained the same, but the robot would now re-adjust itself onto the path based on gmapping's localization. This aided the transition significantly, as Gazebo never accounted for odometry drift and error.

Frontier Centroid Calculation

OpenCV was used to process frontiers and then compute their centroids. To detect frontiers, the map image from gmapping was converted to grayscale, and Canny edge detection revealed the outline of the explored region, a combination of walls and frontier. This edge image was then compared against the original map with the following logic: any edge pixel that is not a wall is a frontier. The frontier pixels were then dilated and eroded, and tiny frontiers (ten pixels or fewer) were discarded. Initially the frontiers were dilated heavily to account for small blips in gmapping's output, but this proved detrimental: frontiers that should have been separate merged together, throwing off the centroid locations. These values were tuned continuously throughout testing, since real-world gmapping output differed heavily from the simulator's. Each frontier was saved as a separate array, and its centroid was computed as the average of its x and y values. The centroids were passed to the path planner node, each weighted by the number of pixels in its frontier.
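As a sketch of the underlying idea (not our actual OpenCV pipeline): a frontier cell can equivalently be found as a free cell touching unknown space, grouped by flood fill, filtered by size, and reduced to a centroid with a per-frontier pixel-count weight. The occupancy values below are hypothetical conventions, not the exact ones we used.

```python
from collections import deque

FREE, UNKNOWN = 0, -1  # hypothetical occupancy-grid conventions


def find_frontier_centroids(grid, min_size=10):
    """Return ((row, col) centroid, pixel-count weight) for each frontier.

    A frontier cell is free space with at least one unknown 8-neighbor.
    Connected frontier cells are grouped by BFS flood fill, and groups
    smaller than `min_size` pixels are discarded as blips.
    """
    rows, cols = len(grid), len(grid[0])

    def neighbors(r, c):
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr or dc:
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols:
                        yield nr, nc

    # Frontier = free cells touching unknown space.
    frontier = {(r, c) for r in range(rows) for c in range(cols)
                if grid[r][c] == FREE
                and any(grid[nr][nc] == UNKNOWN for nr, nc in neighbors(r, c))}

    centroids, seen = [], set()
    for cell in frontier:
        if cell in seen:
            continue
        # Flood fill one connected frontier region.
        group, queue = [], deque([cell])
        seen.add(cell)
        while queue:
            r, c = queue.popleft()
            group.append((r, c))
            for n in neighbors(r, c):
                if n in frontier and n not in seen:
                    seen.add(n)
                    queue.append(n)
        if len(group) >= min_size:  # neglect tiny frontiers
            cy = sum(r for r, _ in group) / len(group)
            cx = sum(c for _, c in group) / len(group)
            centroids.append(((cy, cx), len(group)))
    return centroids
```

In our pipeline the dilation and erosion steps served the same blip-smoothing role that the `min_size` filter plays here.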


Right off the shelf, gmapping does a good job of building the map and localizing the robot within the LIDAR readings. However, several of the package's parameters were tailored to better fit the robot's needs. One was the frequency of map updates: by default, gmapping returns values once every 0.05 seconds, which was unnecessary for our robot and in some cases too much. We lowered this to once every 2 seconds. Additionally, the position and angle tolerances were raised so that gmapping would adjust the robot's position less often, speeding up the mapping process. Various other parameters were tuned to ensure consistency and robustness.
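A launch fragment along these lines captures that tuning. The parameter names (`map_update_interval`, `linearUpdate`, `angularUpdate`) are real gmapping parameters, but the values shown are illustrative, not our final tuned configuration.

```xml
<launch>
  <node pkg="gmapping" type="slam_gmapping" name="slam_gmapping">
    <!-- update the map once every 2 s -->
    <param name="map_update_interval" value="2.0"/>
    <!-- raise the linear/angular thresholds so the pose estimate
         is adjusted less often (values illustrative) -->
    <param name="linearUpdate" value="0.5"/>
    <param name="angularUpdate" value="0.5"/>
  </node>
</launch>
```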

C-Space Calculation

We mainly reused the C-space padding from lab 3, in which we looped through each cell's neighbors and padded any detected obstacle by 3 grid cells. For reference, our grid cell size is 2.25 cm, so the C-space padding (6.75 cm) was lower than the width of the robot.
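A minimal sketch of that padding loop, assuming a 2D occupancy grid with a hypothetical wall value of 100:

```python
def pad_cspace(grid, radius=3, wall=100):
    """Dilate obstacles by `radius` grid cells to build the C-space.

    With 2.25 cm cells, radius 3 pads each obstacle by 6.75 cm,
    deliberately less than the robot's width (narrow-gap rationale
    is discussed in the report).
    """
    rows, cols = len(grid), len(grid[0])
    padded = [row[:] for row in grid]  # copy so the source map is untouched
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != wall:
                continue
            # Mark every cell within `radius` cells of this wall as occupied.
            for nr in range(max(0, r - radius), min(rows, r + radius + 1)):
                for nc in range(max(0, c - radius), min(cols, c + radius + 1)):
                    padded[nr][nc] = wall
    return padded
```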

A* Heuristic and Implementation

Our A* heuristic used a linear combination that included the Euclidean distance between the robot and the goal, which decreased processing time, especially on longer routes. We also penalized turns and wall-adjacent cells to increase turning accuracy and decrease collision probability, using a "neighbors of 24" approach that made tiles exponentially less desirable the closer they sat to a wall. Finally, after the robot completes 20% of its route, it re-localizes itself and recalculates the path.
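A sketch of grid A* with those three ingredients follows. The cost constants are illustrative guesses, not our tuned values, and the "neighbors of 24" is interpreted here as scanning the 24 cells of the 5x5 block around a candidate cell for walls.

```python
import heapq
import itertools
import math


def astar(grid, start, goal, wall=100, turn_cost=0.5, wall_cost=2.0, wall_radius=2):
    """Grid A*: Euclidean heuristic, turn penalty, wall-proximity penalty."""
    rows, cols = len(grid), len(grid[0])

    def near_wall(r, c):
        # Chebyshev distance to the nearest wall, if any lies within
        # `wall_radius` cells; radius 2 scans the 5x5 block (24 neighbors).
        for d in range(1, wall_radius + 1):
            for nr in range(r - d, r + d + 1):
                for nc in range(c - d, c + d + 1):
                    if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == wall:
                        return d
        return None

    def h(cell):
        return math.hypot(cell[0] - goal[0], cell[1] - goal[1])

    moves = [(-1, 0), (1, 0), (0, -1), (0, 1),
             (-1, -1), (-1, 1), (1, -1), (1, 1)]
    tie = itertools.count()  # tie-breaker so the heap never compares paths
    open_heap = [(h(start), 0.0, next(tie), start, None, [start])]
    best = {}
    while open_heap:
        _, g, _, cell, heading, path = heapq.heappop(open_heap)
        if cell == goal:
            return path
        if best.get((cell, heading), math.inf) <= g:
            continue
        best[(cell, heading)] = g
        for dr, dc in moves:
            nr, nc = cell[0] + dr, cell[1] + dc
            if not (0 <= nr < rows and 0 <= nc < cols) or grid[nr][nc] == wall:
                continue
            step = math.hypot(dr, dc)
            if heading is not None and (dr, dc) != heading:
                step += turn_cost  # penalize changes of heading
            d = near_wall(nr, nc)
            if d is not None:
                step += wall_cost / 2 ** (d - 1)  # exponentially worse near walls
            ng = g + step
            heapq.heappush(open_heap, (ng + h((nr, nc)), ng, next(tie),
                                       (nr, nc), (dr, dc), path + [(nr, nc)]))
    return None  # no path exists
```

Expanding states keyed on (cell, heading) rather than cell alone keeps the turn penalty admissible for paths that reach the same cell from different directions.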

AMCL Localization

AMCL was used for global localization. The robot rotated until the pose covariance dropped below a threshold; this usually required multiple rotations to localize properly. After localizing, the robot moved to the next state, where it awaited a 2D Navigation Goal from RViz (a phase three requirement).
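The convergence check can be sketched as a small helper over the flattened 6x6 covariance matrix AMCL publishes (`geometry_msgs/PoseWithCovarianceStamped` on `amcl_pose`); the thresholds here are illustrative, not our tuned values.

```python
def is_localized(covariance, xy_tol=0.05, yaw_tol=0.1):
    """Decide whether AMCL's pose estimate has converged.

    `covariance` is the 36-element row-major 6x6 matrix from a
    PoseWithCovarianceStamped message; only the x, y, and yaw
    variances on the diagonal matter for a planar robot.
    """
    var_x = covariance[0]    # row 0, col 0
    var_y = covariance[7]    # row 1, col 1
    var_yaw = covariance[35]  # row 5, col 5
    return var_x < xy_tol and var_y < xy_tol and var_yaw < yaw_tol
```

The rotate-until-localized loop then simply keeps spinning while this returns False.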


While two collisions were observed in the final demonstration of the robot, all three phases were successfully executed roughly within the timeframe given. Phase three exceeded the time frame because of a collision observed at the very end. In a typical case, this would require a retrial of phase three, but we explained to the SAs that the robot could handle it. In fact, our phase three code had a nifty piece of logic: if the robot fell off course or lost localization covariance, it would recalibrate and return to its intended path. This is exactly what occurred, and the robot successfully, and somewhat miraculously, made it to its destination.

Below is an image of the map our robot generated in simulation.

Figure 1: Map generated in SIM

An example of AMCL working is shown here. The arrows represent the surviving particles in the filter, whose average gives the robot's pose. In this example the covariance is low and the robot is well localized.

Figure 2: Localization by AMCL


While the methodology explained the various puzzle pieces required for the success of this project, it avoided discussing the dozens of hours spent fumbling through bugs and errors within the code and the process.

AMCL Functionality

At first glance, AMCL seems elementary: launch it and let it do its magic. In practice, however, a few more pieces need to be in place for it to work its "magic". First was the discovery of the mysterious empty service call. If simply launched, AMCL will tell you rather quickly, and with great certainty (covariance of ~0.3), that it has found the robot's position. It will be laughably wrong. We discovered late one night that AMCL requires a call to an empty service before it will actually localize. A few tweaks to the AMCL parameters later, it became very powerful, producing covariances down to 0.04.

A* Accuracy

While the A* heuristic was tuned rather well, the algorithm depends on an equally good map to function. In phase one this map was constantly being updated, and in phase three the localization could always drift. We therefore made A* recalculate its route every 20% of the route: once the robot had traversed 20% of a given path, it would update its odometry, whether through AMCL or gmapping, and re-run A*, producing a new path. In phase one, this also continuously updated the target centroid and the path to it.
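The 20% trigger can be sketched as follows, assuming the path is a list of waypoints and progress is measured by the waypoint nearest the current pose (a simplification of the actual bookkeeping):

```python
import math


def progress_fraction(path, pose):
    """Fraction of `path` completed, by the index of the nearest waypoint."""
    nearest = min(range(len(path)),
                  key=lambda i: math.hypot(path[i][0] - pose[0],
                                           path[i][1] - pose[1]))
    return nearest / (len(path) - 1)


def should_replan(path, pose, threshold=0.2):
    """True once the robot has covered `threshold` of the current path."""
    return progress_fraction(path, pose) >= threshold
```

When `should_replan` fires, the node refreshes the robot's pose from AMCL or gmapping and re-runs A* from that pose.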


For our C-space, we used padding below the width of the robot to ensure that small spaces that were technically navigable weren't excluded from the possible paths in the map. This worked together with the A* wall-avoidance heuristic and kept the map pathable.


In this project we learned how to lose our minds and watch the world burn as the robot continued to see through walls and forget they were there. More seriously, we learned how to implement SLAM, combining everything we learned this term into one program that let the robot use LIDAR and several algorithms to map an unknown space and localize itself within the map it recorded and stored. We also learned how to make the robot travel between set points within that map while avoiding any obstacles it encountered. Designing a multi-node ROS system that works both in simulation and in the real world was a big takeaway, especially the trial-and-error tuning involved. Our team developed methods to collaborate and keep making strong progress even after tens of hours of debugging. We're excited to have found success in the project and to have completed all three phases properly.