Team Three/Final Paper
From Maslab 2011
After analyzing the scoring methods and looking into previous contests, we decided to implement a simple but high-risk strategy: our robot would explore the maze, find and capture balls, then score them over the yellow wall. This means that if for some reason the yellow wall is not found, we have a high probability of losing. As a result, scoring in the goals is treated as a backup plan, which means the mechanical design of the robot must be robust enough for both scoring methods. The robot should indiscriminately pick up balls of both colors and store them until the time of dispatch.
Mechanical Design and Sensors
Many interesting mechanical designs were discussed, including a catapult, an elevator lift, a fork lift, a four-bar linkage, and a spinning wheel. We wanted to make a robot that had not been made before in Maslab, was simple to construct, and was fun to watch. The idea of a waterwheel feeding a waterpark-style slide came into being. To further simplify the design, a rotating arm controlled by a servo replaced the waterwheel.
The final design, sensor placement, and work flow of the robot are as follows:
1) To accommodate all the necessary components, the robot has three horizontal layers. The bottom layer contains the battery, wheels, and motors. The second layer holds the Eee PC. The top layer mounts the uOrc board and the slide. A circular front face connects all three layers and is used to capture and guide balls.
2) The robot, with two-wheel drive in the middle, explores the contest area. Two caster wheels, one in the back and one in the front, provide additional balance. The caster wheels are of different heights to help the robot overcome bumps on the carpet.
3) With a long-range IR sensor mounted on the front face of the robot and two short-range IR sensors mounted diagonally on the sides, the robot can perform functions such as wall following and getting out of a large room through a small door.
4) A belt of bump sensors is mounted on the bottom layer to help the robot avoid walls.
5) When the robot sees a ball with the camera mounted on its front face, it drives toward it. The break-beam sensor near the opening on the front face lets the robot know when a ball has entered its mouth. The arm is then triggered to scoop the ball up and dump it into the slide. The ball rolls down the slide until it comes to a stop at the exit/drawbridge.
6) Finally, when the camera finds the yellow wall, the robot drives toward it. Once both bump sensors on the front face are triggered, a servo lets the drawbridge down, allowing the balls to roll out under gravity.
7) The bumper in the front is made into a mustache, and the exit/drawbridge into a monocle. With the addition of a black top hat, Monsieur Robot is complete.
Software
The brains of Monsieur Robot were developed in Java over 4 weeks. After settling on the overall robot design and strategy, it was time to start writing the software. In the end, an intelligence was developed that, although not quite self-aware, still managed to maneuver itself around the field.
From the beginning, we decided to make Monsieur a state machine. This seemed the easiest to program and the most efficient method for collecting and scoring balls. However, to gain an edge, we knew that transitions between the states had to be strategic and would often exit a state before its conclusion. Come final competition, our robot had three distinct states:
- Exploring: Consists of two sub-states, StraightExplore and SpinExplore. In StraightExplore, the robot moves forward, attempting to keep its original angle while avoiding walls. In this way, it will go straight, but also wall follow if it comes in contact with a wall. In SpinExplore, the robot turns 2*Math.PI, using its long-range IR sensor to find the direction with the greatest open distance. It combines this with knowledge of its original direction to choose a direction in which to continue exploring. These exploring states alternate with each other until the robot finds something of interest; the state then changes to the corresponding action.
- CollectBall: If the robot is not full of balls and is not exclusively looking for walls (the last 20 seconds), then upon seeing a ball the robot changes into the CollectBall state. During this state the robot uses a dual PID system (angle and distance control) to move toward the ball. At the point where the ball is too close to be seen, the robot moves forward blindly until the ball triggers its break-beam sensor. The robot then actuates its lift arm to store the ball in its ramp hopper. Upon completion, the robot returns to the exploring state.
- ScoreWall: If the robot believes it has collected a ball and it sees a yellow wall, it enters the ScoreWall state. This state uses dual PID to move toward the center of the yellow wall as seen by the camera. Once the yellow wall has reached a certain height and width in the camera frame (i.e., it is close and wide enough), the robot charges forward. Using its two front bump sensors to align itself, the robot then opens its ramp for a hard-coded amount of time, allowing the balls to fall on the opponent's side. It then taunts the opponent for luck.
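The transitions above can be sketched as a small transition function. This is a hypothetical reconstruction, not the team's actual code: the state names follow the paper, while the sensor flags and `nextState()` method are assumptions for illustration.

```java
// State names taken from the paper; everything else is an illustrative sketch.
enum RobotState { EXPLORING, COLLECT_BALL, SCORE_WALL }

class BehaviorLoop {
    RobotState state = RobotState.EXPLORING;

    // Placeholder sensor/strategy flags, set elsewhere by sensor code.
    boolean seesBall, seesYellowWall, hasBall, lastTwentySeconds;

    // Compute the next state from the current state and sensor flags.
    RobotState nextState() {
        switch (state) {
            case EXPLORING:
                // Balls win over walls unless we are in the end-game window.
                if (seesBall && !hasBall && !lastTwentySeconds) return RobotState.COLLECT_BALL;
                if (hasBall && seesYellowWall)                  return RobotState.SCORE_WALL;
                return RobotState.EXPLORING;
            case COLLECT_BALL:
                // Return to exploring once the ball is stored in the hopper.
                return hasBall ? RobotState.EXPLORING : RobotState.COLLECT_BALL;
            case SCORE_WALL:
                // After dumping, go back to exploring.
                return RobotState.EXPLORING;
        }
        return state;
    }
}
```

The point of keeping transitions in one function is that "exit a state before its conclusion" becomes easy: any flag change is picked up on the next pass through the loop.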
Of course, these states only represent high-level behavior. Behind the scenes, PIDController, VisionHandler, and Timer do all of the dirty work.
- PIDController runs in a separate thread in order to maintain smooth movement alongside the behavior code and camera processing. It is activated by requesting a turn(angle) or a straightMove(distance); these can be combined to move in a curve. None of its methods are blocking, but programs can wait until it reaches its angle or distance thresholds by checking an isRunning() method. It uses the system time to compute the integral and derivative terms and implements optional low-level wall following using the IR sensors. It never directly interfaces with the camera -- the behavior code always passes camera coordinates to the PIDController.
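The core of such a controller is a single update step that uses the system clock for the time delta, as the paper describes. A minimal sketch, assuming millisecond timestamps and made-up gain values; the class and method names here are illustrative, not the team's API:

```java
// Minimal PID update using wall-clock time for dt (hypothetical sketch).
class Pid {
    final double kp, ki, kd;   // proportional, integral, derivative gains
    double integral, lastError;
    long lastTime = -1;        // -1 marks "no previous sample yet"

    Pid(double kp, double ki, double kd) {
        this.kp = kp;
        this.ki = ki;
        this.kd = kd;
    }

    // error: setpoint minus measurement; nowMillis: e.g. System.currentTimeMillis()
    double update(double error, long nowMillis) {
        if (lastTime < 0) {              // first sample: no history to difference
            lastTime = nowMillis;
            lastError = error;
        }
        double dt = (nowMillis - lastTime) / 1000.0;  // seconds
        integral += error * dt;
        double derivative = dt > 0 ? (error - lastError) / dt : 0;
        lastError = error;
        lastTime = nowMillis;
        return kp * error + ki * integral + kd * derivative;
    }
}
```

Two instances of such a loop, one fed an angle error and one a distance error, give the "dual PID" behavior used in CollectBall and ScoreWall.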
- VisionHandler runs un-threaded, only capturing on demand. VisionHandler has a getObjects() method that returns all the objects (type, coordinate, and shape info) in a List. Color recognition was implemented with hard-coded HSV ranges. Auto white balance and exposure were disabled for consistent color values. Objects were found using a recursive solid-color area function. These were then typed as wall-tops, balls, or yellow walls for the behavior code to use in decision-making. Typing uses shape proportions (height, width), shape area, and density (points/area). Shapes with sufficiently small areas, shapes above the blue wall line, and shapes within goals are filtered out.
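The "solid color area function" is a connected-component flood fill over the pixels that passed the HSV range test. A sketch of that step, operating on a boolean mask; the paper's version is recursive, while this one uses an explicit stack (same logic, but safe from stack overflow on large blobs). The class and method names are assumptions:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Flood fill to measure the area of one solid-color blob (illustrative sketch).
// mask[r][c] is true where the pixel matched a hard-coded HSV range.
class BlobFinder {
    static int blobArea(boolean[][] mask, int startRow, int startCol) {
        int rows = mask.length, cols = mask[0].length;
        boolean[][] seen = new boolean[rows][cols];
        Deque<int[]> stack = new ArrayDeque<>();
        stack.push(new int[]{startRow, startCol});
        int area = 0;
        while (!stack.isEmpty()) {
            int[] p = stack.pop();
            int r = p[0], c = p[1];
            if (r < 0 || r >= rows || c < 0 || c >= cols) continue; // off-image
            if (seen[r][c] || !mask[r][c]) continue;                // visited or wrong color
            seen[r][c] = true;
            area++;
            // Visit the 4-connected neighbors.
            stack.push(new int[]{r + 1, c});
            stack.push(new int[]{r - 1, c});
            stack.push(new int[]{r, c + 1});
            stack.push(new int[]{r, c - 1});
        }
        return area;
    }
}
```

Once a blob's area, bounding box, and pixel count are known, the proportion, area, and density tests described above classify it as a wall-top, ball, or yellow wall.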
- Timer handles keeping track of the game time and killing the JVM when time is up, bringing the robot to a stop. The behavior code also uses Timer's getTimeRemaining() method to make strategic decisions.
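The Timer's bookkeeping amounts to remembering a start time and clamping the remainder at zero. A minimal sketch, assuming a 180-second game and millisecond timestamps; both are assumptions, as is every name except getTimeRemaining(), which the paper mentions:

```java
// Hypothetical game-clock sketch. Only getTimeRemaining() comes from the paper.
class GameTimer {
    final long startMillis;
    final long gameLengthMillis;

    GameTimer(long startMillis, long gameLengthSeconds) {
        this.startMillis = startMillis;
        this.gameLengthMillis = gameLengthSeconds * 1000;
    }

    // Milliseconds left in the game, never negative.
    long getTimeRemaining(long nowMillis) {
        return Math.max(0, gameLengthMillis - (nowMillis - startMillis));
    }

    boolean timeUp(long nowMillis) {
        // The real Timer would stop the motors and kill the JVM here.
        return getTimeRemaining(nowMillis) == 0;
    }
}
```

Passing the current time in as a parameter (rather than calling System.currentTimeMillis() inside) keeps the clock logic testable.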
Many test methods were developed to observe individual actions, object detection, and PID control performance. Code for goal-scoring and barcode detection was partially developed but later abandoned to hone basic functionality. Java's audio package allowed us to taunt our opponent with clips from Monty Python's The Holy Grail, which was paramount in winning the audience's favor.