Team One/Final Paper


Maslab 2013 Team 1 Final Paper

Team 1: Daniel Gonzalez, Nanu Roitman, Rodrigo Gomes, Tyler Hamer



Overall Strategy

Our group focused first on finalizing the mechanical design and building a robot that would behave consistently and do what we told it to do. After that was done, we programmed it to try to solve the proposed challenge (pick up balls and drop them in the scoring tower or over a wall into the other playing field).

We achieved our mechanical goals: the robot behaved consistently and could do everything the challenge required. It could score over walls, detect and pick up balls reliably, and locate itself with minimal error thanks to very precise encoders. Our low-level strategy relied heavily on the encoders: they drove controllers that made the robot move straight and rotate in place, as well as controllers that sent the robot to waypoints (locations relative to the robot's position and angle), all of which worked very well.
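As an illustration of how encoder-driven control like this can be structured, here is a minimal Python sketch (the tick counts, wheel geometry, and gains are hypothetical stand-ins, not our robot's actual values): dead-reckoning odometry from the two wheel encoders, plus a proportional controller that turns the current pose estimate and a target waypoint into left/right motor commands.

 import math
 
 # Hypothetical constants; the real values depend on the wheels and encoders used.
 TICKS_PER_INCH = 100.0   # encoder ticks per inch of wheel travel
 WHEEL_BASE = 10.0        # distance between the wheels, in inches
 
 class Odometry:
     """Dead-reckoning pose estimate built from left/right encoder ticks."""
     def __init__(self):
         self.x = self.y = self.theta = 0.0
 
     def update(self, d_left_ticks, d_right_ticks):
         d_left = d_left_ticks / TICKS_PER_INCH
         d_right = d_right_ticks / TICKS_PER_INCH
         d_center = (d_left + d_right) / 2.0
         d_theta = (d_right - d_left) / WHEEL_BASE
         # Integrate along the average heading over this small step.
         self.x += d_center * math.cos(self.theta + d_theta / 2.0)
         self.y += d_center * math.sin(self.theta + d_theta / 2.0)
         self.theta += d_theta
 
 def waypoint_command(pose, goal_x, goal_y, k_forward=0.5, k_turn=2.0):
     """Proportional controller steering toward a waypoint given the current pose."""
     dx, dy = goal_x - pose.x, goal_y - pose.y
     distance = math.hypot(dx, dy)
     heading_error = math.atan2(dy, dx) - pose.theta
     # Wrap the heading error to [-pi, pi] so the robot turns the short way around.
     heading_error = math.atan2(math.sin(heading_error), math.cos(heading_error))
     forward = k_forward * distance
     turn = k_turn * heading_error
     return forward - turn, forward + turn   # left and right motor commands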

However, the encoders also turned out to be one of our biggest problems, and relying on them so heavily cost us a lot of time getting them to work correctly and figuring out why they sometimes failed. It turned out that putting the motors under heavy load made the encoders stop working properly: they would stop counting or count backwards. We believe the large current drawn by the motor turned it into a powerful electromagnet that interfered with the encoder, but we are not sure. This was an issue whenever the robot got stuck while moving fast. Our ultimate solution was to program the robot to avoid collisions aggressively, which we managed to do very successfully, and to force it to move slowly.

As for higher-level strategy, we originally planned something sophisticated: the robot would represent its state according to beliefs it held about the world (such as how long remained before the round ended or how many balls it was holding) and cache a policy translating the robot's state into an action. That policy would be precomputed by a recursive search that assigned a reward metric to each action and tried to maximize the sum of the rewards.
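This policy precomputation was never finished, but the idea can be sketched with a toy model. The state, actions, and reward numbers below are entirely hypothetical stand-ins; the sketch only shows the recursion that maximizes the sum of rewards and caches the result.

 from functools import lru_cache
 
 # Toy state model: state = (balls_held, steps_left). The actions and rewards are
 # hypothetical; the real state description was never finalized.
 ACTIONS = ("explore", "collect_ball", "score")
 
 def transition(state, action):
     balls, t = state
     if action == "collect_ball":
         return (balls + 1, t - 1)
     if action == "score":
         return (0, t - 1)
     return (balls, t - 1)
 
 def reward(state, action):
     balls, _ = state
     if action == "score":
         return 3 * balls        # scoring pays off per ball held
     if action == "collect_ball":
         return 1
     return 0
 
 @lru_cache(maxsize=None)
 def best_value(state):
     """Best achievable sum of rewards from this state (cached recursive lookahead)."""
     _, steps_left = state
     if steps_left == 0:
         return 0
     return max(reward(state, a) + best_value(transition(state, a)) for a in ACTIONS)
 
 def best_action(state):
     """The policy: the action maximizing immediate reward plus the best future value."""
     return max(ACTIONS, key=lambda a: reward(state, a) + best_value(transition(state, a)))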

The robot was capable of mapping and planning, which allowed us to give it a large set of useful actions and capabilities (such as remembering the position of a scoring wall and planning a route back to it, or creating plans that prioritize visiting unexplored places). In the end, we barely used those capabilities because of time constraints: mapping was used to decide whether a ball was reachable, and planning was used to go to unexplored places. We were unable to program enough actions or devise a good state description for the robot, and ended up implementing a simple strategy: follow walls until the robot sees either a ball or a scoring wall; when that happens, head for the ball or scoring wall (the latter only if the robot believes it is holding a ball or there are fewer than 30 seconds left) and pick it up or score.

This simple strategy was effective in theory, but for lack of testing time it ended up failing on overlooked implementation details. Our biggest problems were how long it took the robot to decide whether a ball could be reached (the robot sometimes got stuck trying to reach an "unreachable" ball that was too close to a wall, so we used mapping to let it ignore such balls, but that computation took time we did not account for, and the robot would lose track of the ball before it started heading for it), and the sloppy way the scoring behavior was implemented (the robot would drive toward the scoring wall but assume that the first wall it reached was the scoring wall and drop its balls over it; in practice it often stopped prematurely on the way and put balls over other walls, losing points).

To test the robot's software, we developed a simulator near the end of Maslab. It offered an advantage, but unfortunately arrived too late to make a big difference, and we still had to test on the real robot, since the simulator made assumptions that were only approximately correct.

Overall, we are very proud of what we achieved, because the robot behaves very well; we only wish we had had more time to test the higher-level strategy.



Mechanical Design and Sensors

Mechanical Design

Sensors

The robot had a complete set of sensing capabilities:

 *  Encoders: allowed the robot to know where it was with 1 or 2 inches of error, even after 3 minutes (the encoders were good enough that an IMU was not necessary).
 *  Distance sensors: an array of 5 infrared distance sensors, exhaustively calibrated, at angles of -90, -45, 0, 45, and 90 degrees allowed the robot to avoid walls very well and to map the field (a rough calibration sketch follows this list).
 *  Camera: the camera allowed us to identify scoring walls and balls very reliably, and was calibrated specifically for the lighting in the final competition area, although it behaved very well in other lighting conditions. The camera's measurements were good enough to calculate ball positions with 1 or 2 inches of error at distances up to 16".
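The IR calibration mentioned above amounts to mapping raw sensor readings to distances. A minimal sketch of that mapping is a lookup table with linear interpolation; the calibration numbers below are purely illustrative, not our measured values.

 import bisect
 
 # Hypothetical calibration table for one IR sensor: (raw reading, distance in inches).
 # Sharp-style IR sensors read higher at closer range, so readings descend with distance.
 CALIBRATION = sorted([(600, 4.0), (480, 6.0), (380, 8.0), (300, 10.0),
                       (250, 12.0), (200, 16.0), (160, 20.0)])
 
 def ir_distance(raw):
     """Convert a raw sensor reading to inches by linear interpolation over the table."""
     readings = [r for r, _ in CALIBRATION]
     i = bisect.bisect_left(readings, raw)
     if i == 0:
         return CALIBRATION[0][1]      # reading below the table: clamp to farthest entry
     if i == len(CALIBRATION):
         return CALIBRATION[-1][1]     # reading above the table: clamp to closest entry
     (r0, d0), (r1, d1) = CALIBRATION[i - 1], CALIBRATION[i]
     return d0 + (d1 - d0) * (raw - r0) / (r1 - r0)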


Software Design


Controllers


Vision

Mapping

The strategy used for mapping was a very simple one, based on ideas taught in 6.01 (Introduction to Electrical Engineering and Computer Science): represent the world as a 2D grid where each cell is a "Bayesian wall", that is, each cell holds a belief about whether it is a wall or not. We extended this to also include scoring walls and balls (although, due to time constraints, these capabilities were not used in the final competition). The belief was updated from sensor data: every time a distance sensor reported a wall at a specific point, a straight line was traced from the center of the robot to that point. Every grid cell along that line had its probability of being a wall lowered, and the end cell had its probability raised, according to Bayes' rule and a rough model of the sensors (the probability that a wall is present given that the sensor reports a wall there was set to 99%, and the probability that a wall is present given that the sensor reports no wall there was set to 1%; these probabilities were found empirically).
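A minimal sketch of this kind of update follows. The grid dimensions, cell indexing, and class names are assumptions for illustration; the 99%/1% sensor model values are the ones quoted above.

 import math
 
 P_HIT = 0.99    # P(sensor reports wall | cell is a wall), found empirically
 P_FALSE = 0.01  # P(sensor reports wall | cell is not a wall)
 
 class OccupancyGrid:
     """2D grid of wall beliefs, updated with Bayes' rule along each sensor ray."""
     def __init__(self, width, height, cell_size, prior=0.5):
         self.cell_size = cell_size
         self.belief = [[prior] * width for _ in range(height)]
 
     def _bayes(self, prior, saw_wall):
         # Posterior probability that the cell is a wall given one observation.
         likelihood_wall = P_HIT if saw_wall else (1 - P_HIT)
         likelihood_free = P_FALSE if saw_wall else (1 - P_FALSE)
         num = likelihood_wall * prior
         return num / (num + likelihood_free * (1 - prior))
 
     def update_ray(self, robot_x, robot_y, hit_x, hit_y):
         """Lower the wall belief along the line from the robot to the detected point,
         and raise it at the end cell, as described above."""
         steps = max(1, int(math.hypot(hit_x - robot_x, hit_y - robot_y) / self.cell_size))
         for i in range(steps + 1):
             x = robot_x + (hit_x - robot_x) * i / steps
             y = robot_y + (hit_y - robot_y) * i / steps
             col, row = int(x / self.cell_size), int(y / self.cell_size)
             if 0 <= row < len(self.belief) and 0 <= col < len(self.belief[0]):
                 saw_wall = (i == steps)   # only the final cell counts as a wall detection
                 self.belief[row][col] = self._bayes(self.belief[row][col], saw_wall)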

Planning

Although it barely got used, we implemented path planning for the robot. One issue was that, even though the planner was heavily optimized for speed, it generally took a second or two to create a plan, which could sometimes cause problems. Planning happened in configuration space: the set of positions that a reference point on the robot can occupy without the robot being in collision. For example, if we choose that point to be the center of the robot and the robot is round with radius r (the easiest case), then the configuration space is the set of points at least r away from any wall. For maximum performance, the planning strategy was greedy (best-first) search, which builds a path by always expanding the unvisited point closest to the goal, followed by smoothing: if two points connect without collision, there is no reason to keep the points in between on the path. Smoothing is especially useful when planning over grid cells, since it eliminates issues with points being too close together. Configuration-space planning was also used to keep the robot from getting stuck trying to reach unreachable balls, since it would classify them as such and the robot would ignore them.
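A rough sketch of the greedy best-first search and smoothing steps over a grid of free (configuration-space) cells follows. The helper names and the 4-connected grid are assumptions for illustration, not our exact implementation.

 import heapq, math
 
 def neighbors(cell, free):
     """4-connected neighbors of a grid cell that lie in the free (collision-free) set."""
     x, y = cell
     for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
         if nxt in free:
             yield nxt
 
 def greedy_best_first(start, goal, free):
     """Always expand the cell closest to the goal; returns a cell path or None."""
     frontier = [(0.0, start)]
     came_from = {start: None}
     while frontier:
         _, current = heapq.heappop(frontier)
         if current == goal:
             path = []
             while current is not None:
                 path.append(current)
                 current = came_from[current]
             return path[::-1]
         for nxt in neighbors(current, free):
             if nxt not in came_from:
                 came_from[nxt] = current
                 heapq.heappush(frontier, (math.dist(nxt, goal), nxt))
     return None   # goal unreachable, e.g. a ball too close to a wall
 
 def line_is_free(a, b, free):
     """Coarsely sample the straight segment between two cells and check for collisions."""
     steps = max(1, int(math.dist(a, b)) * 2)
     return all((round(a[0] + (b[0] - a[0]) * i / steps),
                 round(a[1] + (b[1] - a[1]) * i / steps)) in free for i in range(steps + 1))
 
 def smooth(path, free):
     """Drop intermediate cells whenever two points connect without collision."""
     smoothed = [path[0]]
     i = 0
     while i < len(path) - 1:
         j = len(path) - 1
         while j > i + 1 and not line_is_free(path[i], path[j], free):
             j -= 1
         smoothed.append(path[j])
         i = j
     return smoothed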

Getting it all together

So far we have described the basic capabilities of the robot; they come together at a higher level in state-machine controllers that combine them. Although we originally intended to build a logical state description that would allow the robot to make "smart" decisions, that was not implemented due to time constraints. Instead, a simple state-machine strategy was implemented using three high-level controllers: wall following, picking up balls, and scoring. The state machine was as follows (picture):
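A minimal sketch of that state machine's transition logic is below. The predicate names are hypothetical; the 30-second rule is the one described in the strategy section.

 WALL_FOLLOW, PICK_UP_BALL, SCORE = "wall_follow", "pick_up_ball", "score"
 
 def next_state(state, ball_visible, scoring_wall_visible, holding_ball, seconds_left):
     """Wall-follow until a ball or scoring wall is seen, then act on it."""
     if state == WALL_FOLLOW:
         if ball_visible:
             return PICK_UP_BALL
         if scoring_wall_visible and (holding_ball or seconds_left < 30):
             return SCORE
         return WALL_FOLLOW
     if state == PICK_UP_BALL:
         # Return to wall following once the ball is picked up or lost from view.
         return WALL_FOLLOW if not ball_visible else PICK_UP_BALL
     if state == SCORE:
         # Return to wall following once the scoring wall is no longer in view.
         return WALL_FOLLOW if not scoring_wall_visible else SCORE
     return WALL_FOLLOW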


Overall Performance

The team worked well together and we believe our performance was very good overall. We managed to design and build a robot, both mechanically and controller-wise, that did exactly what we told it to do. The final robot was mechanically stable (hard to topple over and structurally sound) and had good controllers (capable of moving forward, rotating in place, and moving to relative waypoints thanks to the encoders on the motors, and of avoiding walls consistently thanks to the IR distance sensors). The only part that needed more work was the high-level strategy, which we did not have enough time to develop and test: most or all of the issues found with our basic strategy could have been caught with more testing on the playing field, and given enough time we could have developed and tested a more sophisticated strategy. We also developed a simulator that could have made it possible to test the high-level strategy had we had it earlier in the month (although it would not have replaced real-world testing). Overall, we think we performed really well given the time constraints, and we are proud that we did not just hack a robot together but made a well-designed and mechanically sound one with very good controllers (it also serves as a neat toy that catches our balls and returns them, a bit like bowling).


Conclusions

MASLAB was a fun learning experience where we got the opportunity to work in a balanced team on an interesting project, on an "impossible" (MIT-style) schedule. The best moment of the whole month was when we decided to just work out the kinks and do our best to get a robot ready for the final competition, and then actually managed to do it in a day. We recommend that other teams try to stick to their schedules and focus on finishing the design and prototyping early and often, so the coders can test controllers and high-level functions. Even if the final robot is only built in the last week, the coders should be able to test on a real robot throughout the whole month. A simulator is also a really helpful tool, if implemented early. Overall, MASLAB was a really fun experience; we feel like we learned a lot and did something we can be proud of.
