One of my longest-running projects was the implementation of a Simultaneous Localization and Mapping (SLAM) algorithm. A SLAM algorithm is used by a robot to map out and learn its environment based on sensor data it obtains while traveling through an unknown area. For my project I used a simulated robot and environment.
The simulator is where you create the environment, which takes the form of an enclosed maze. To create a maze, you first enter its dimensions; after that, you can use the editing features to customize it. For a maze to be usable, a target (shown as a yellow block) and a robot (a purple triangle) must be placed somewhere within the environment. There can only be one of each in the maze, and any attempt to place a second target or robot will simply move the existing one to the new location.
Walls and blocks can also be added to and removed from the maze, though they are not required. Any maze created with the simulator can be saved and used or edited later. The final thing you can do is set the sensor range of the robot; note that the robot has more than one sensor, and a single range setting applies to all of them. Once you have set up the maze to your liking, all you need to do is upload the robot code you want, synchronize it with the simulator, and press run.
All simulated robots share the same simulated 'body'; the customization comes in the form of how the SLAM algorithm works. The robot has four sensors: a forward sensor, two side sensors, and a target sensor. The forward and side sensors allow the robot to detect obstacles. The target sensor (as the name implies) lets the robot know whether the target is within sensor range and how far away it is, though it won't tell the robot where the target is relative to its current location. Unlike the other three sensors, the target sensor can see through obstacles. That means the robot may need to figure out that it should move away from the target in order to get around an obstacle and reach its final goal.
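To make the sensor model concrete, here is a minimal sketch in Python (the project's actual language and interface are not stated; all names here are hypothetical). It captures the key asymmetry described above: the obstacle sensors report distances in specific directions, while the target sensor reports only a range, with no bearing, and ignores walls.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SensorReadings:
    """Hypothetical snapshot of one sensing step; field names are illustrative."""
    forward_dist: Optional[float]  # distance to nearest obstacle ahead, None if none in range
    left_dist: Optional[float]
    right_dist: Optional[float]
    target_dist: Optional[float]   # range only -- no bearing, and it sees through walls

def target_blocked(r: SensorReadings) -> bool:
    """An obstacle directly ahead that is closer than the target suggests the
    straight-line approach is blocked, so a detour may be required."""
    return (r.target_dist is not None
            and r.forward_dist is not None
            and r.forward_dist < r.target_dist)
```

This is only one way a controller might notice that the obvious "drive toward the target" move will fail; because the target sensor sees through walls, a short target distance alone says nothing about reachability.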
My version of a SLAM algorithm records the data it collects in four matrices that represent the four quadrants of a two-dimensional graph, with the origin at the robot's starting point. These matrices store the locations of walls, obstacles, and the target (once it has been sensed and its position relative to the robot worked out), the open areas of the map the robot can travel to, unknowns (due to the robot's inability to see past obstacles), and, most importantly, the robot's past positions. The simulator is designed so that you can watch the map being generated as the robot explores and records its environment.
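The four-quadrant layout can be sketched as follows (a Python illustration under my own assumptions; the original cell encoding and growth strategy are not specified). The idea is that since the robot starts at the origin and can travel in any direction, each quadrant gets its own grid that grows on demand, and a coordinate's signs pick the quadrant:

```python
# Illustrative cell codes -- the original encoding is not specified.
UNKNOWN, OPEN, WALL, VISITED, TARGET = 0, 1, 2, 3, 4

class QuadrantMap:
    """Four growable 2-D grids, one per quadrant; origin = robot's start."""

    def __init__(self):
        # One nested-list grid per quadrant: (+x,+y), (-x,+y), (-x,-y), (+x,-y)
        self.grids = [[[UNKNOWN]] for _ in range(4)]

    @staticmethod
    def _locate(x, y):
        """Map signed world coordinates to (quadrant, col, row) indices."""
        if x >= 0 and y >= 0:
            return 0, x, y
        if x < 0 and y >= 0:
            return 1, -x - 1, y
        if x < 0 and y < 0:
            return 2, -x - 1, -y - 1
        return 3, x, -y - 1

    def set(self, x, y, value):
        q, col, row = self._locate(x, y)
        g = self.grids[q]
        while len(g) <= row:                 # grow rows on demand
            g.append([UNKNOWN] * len(g[0]))
        if len(g[0]) <= col:                 # grow columns on demand
            for r in g:
                r.extend([UNKNOWN] * (col + 1 - len(r)))
        g[row][col] = value

    def get(self, x, y):
        q, col, row = self._locate(x, y)
        g = self.grids[q]
        if row < len(g) and col < len(g[0]):
            return g[row][col]
        return UNKNOWN                       # anything unrecorded is unknown
```

Storing quadrants separately avoids negative indices and lets the map expand in any direction without re-centering; a single grid with an offset origin would work just as well.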
This project was originally something I did for fun during my summer vacations, starting in the summer of 2011, and I worked on it for about 2-3 years before submitting it as my class project for Cognitive Robotics.
When I last left off, there was a bug that needed to be fixed: depending on where the robot started, it could get stuck in an infinite loop. The solution is conceptually simple, and it was what I had planned to work on next even before I became aware of this problem: I need to write the algorithms that let the robot look at the path it has taken so far, together with the data it has recorded, and plan a route that gets it out of the loop. The idea sounds simple, but it will most likely be the most challenging part of this project.
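One common way to break out of such a loop (a sketch of a standard technique, not necessarily the approach this project will take) is to run a breadth-first search over the recorded map from the robot's current cell to the nearest frontier, i.e. a known-open cell that borders unexplored space. The names and cell codes below are illustrative:

```python
from collections import deque

# Illustrative cell codes; a real map would come from the recorded matrices.
UNKNOWN, OPEN, WALL, VISITED = 0, 1, 2, 3

def route_to_frontier(grid, start):
    """Breadth-first search from `start` to the nearest cell that touches
    unexplored space; returns the path as a list of (row, col) cells."""
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if not (0 <= nr < rows and 0 <= nc < cols):
                continue
            if grid[nr][nc] == UNKNOWN:
                # Found the frontier: walk the parent links back to start.
                path = [(r, c)]
                while parent[path[-1]] is not None:
                    path.append(parent[path[-1]])
                return list(reversed(path))
            if grid[nr][nc] in (OPEN, VISITED) and (nr, nc) not in parent:
                parent[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None  # map fully explored, or frontier unreachable
```

Because BFS always finds a shortest path through known-traversable cells, following it guarantees the robot ends up somewhere new, which is exactly what a loop-stuck robot needs, even if that route temporarily leads away from the target.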