# Team Three/Final Paper

### Overall Strategy

After analyzing the scoring methods and looking into previous contests, we decided to implement a simple but high-risk strategy: our robot should explore the maze, find and capture balls, then score them over the yellow wall. This means that if for some reason the yellow wall is not found, we have a high probability of losing. As a result, scoring in the goals is considered a backup plan, which means the mechanical design of the robot must be robust enough to support both scoring methods. For detailed strategies on exploring, collecting balls, and scoring, see the Software Design section.

### Software Design

The brains of Monsieur Robot were developed in Java over 4 weeks. After settling on the overall robot design and strategy, it was time to start writing the software. In the end, we developed an intelligence that, although not quite self-aware, still managed to maneuver itself around the field.

From the beginning, we decided to make Monsieur a state machine. This seemed the easiest to program and the most efficient method for collecting and scoring balls. However, to gain an edge, we knew that transitions between states had to be very strategic, often exiting a state before its conclusion. Come the final competition, our robot had three distinct states:

• Exploring: Consists of two sub-states, StraightExplore and SpinExplore. In StraightExplore, the robot moves forward, attempting to keep its original heading while avoiding walls. In this way, it goes straight, but will also wall-follow if it comes in contact with one. In SpinExplore, the robot turns 2*Math.PI radians (a full rotation), using its long-range IR sensors to find the most open direction. It combines this with knowledge of its original heading to choose a direction in which to continue exploring. These two sub-states alternate until the robot finds something of interest, at which point the state changes to the corresponding action.
• CollectBall: If the robot is not full of balls and is not in wall-only mode (the last 20 seconds), then upon seeing a ball it changes into the CollectBall state. During this state the robot uses a dual PID system (angle and distance control) to move towards the ball. Once the ball is too close to be seen, the robot moves forward blindly until the ball triggers its breakbeam sensor. The robot then actuates its lift arm to store the ball in its ramp hopper. Upon completion, the robot returns to the exploring state.
• ScoreWall: If the robot believes it has collected a ball and it sees a yellow wall, it enters the ScoreWall state. This state uses dual PID to move towards the center of the yellow wall as seen by the camera. Once the yellow wall has reached a certain height and width on the camera screen (i.e., it is close and wide enough), the robot charges forward. Using its two front bump sensors to align itself, the robot then opens its ramp for a hard-coded amount of time, allowing the balls to fall on the opponent's side. It then taunts the opponent for luck.
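The transition logic described in the three states above can be sketched as a small state machine. This is a minimal illustration, not the actual competition code: the boolean sensor summaries passed to `step()` (whether a ball or yellow wall is visible, whether the hopper is full, and so on) are hypothetical stand-ins for the real sensor handlers.

```java
// Sketch of the three-state behavior loop. Transition conditions follow
// the text: score when carrying a ball and seeing yellow, collect when a
// ball is visible (unless full or in the last 20 seconds), else explore.
class BehaviorLoop {
    enum State { EXPLORE, COLLECT_BALL, SCORE_WALL }

    State state = State.EXPLORE;

    // One pass of the decision logic. All arguments are hypothetical
    // sensor summaries; real code would read them from hardware handlers.
    State step(boolean seesBall, boolean seesYellowWall,
               boolean carryingBalls, boolean hopperFull,
               double secondsRemaining) {
        boolean wallsOnly = secondsRemaining <= 20; // endgame: walls only
        switch (state) {
            case EXPLORE:
                if (carryingBalls && seesYellowWall) {
                    state = State.SCORE_WALL;
                } else if (seesBall && !hopperFull && !wallsOnly) {
                    state = State.COLLECT_BALL;
                }
                break;
            case COLLECT_BALL:
                // Once the lift arm has stored the ball, resume exploring.
                if (carryingBalls) state = State.EXPLORE;
                break;
            case SCORE_WALL:
                // After the ramp dumps the balls, resume exploring.
                if (!carryingBalls) state = State.EXPLORE;
                break;
        }
        return state;
    }
}
```

Because `step()` is re-evaluated every loop iteration, a state can be interrupted before its conclusion, which is exactly the "strategic early exit" behavior the strategy calls for.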

Of course, these states only represent high-level behavior. Behind the scenes, PIDController, VisionHandler, and Timer do all of the dirty work.

• PIDController runs in a separate thread in order to maintain smooth movement alongside behavior code and camera processing. It is activated by requesting a turn(angle) or a straightMove(distance); these can be combined to move in a curve. None of its methods block, but programs can wait until it reaches its angle or distance thresholds by polling an isRunning() method. It uses the system time to calculate the integral and derivative terms and implements optional low-level wall following using the IR sensors. It never directly interfaces with the camera -- the behavior code always passes camera coordinates to the PIDController.
• VisionHandler runs unthreaded, capturing frames only on demand. VisionHandler has a getObjects() method that returns all the objects (type, coordinate, and shape info) in a List. Color recognition was implemented with hard-coded HSV ranges; auto white balance and exposure were disabled for consistent color values. Objects were found using a recursive solid-color area function, then typed as wall-tops, balls, or yellow walls for the behavior code to use in decision-making. Typing uses shape proportions (height, width), shape area, and density (points/area). Shapes with sufficiently small areas, shapes above the blue wall line, and shapes within goals are filtered out.
• Timer handles keeping track of the game time and killing the JVM when time is up, bringing the robot to a stop. The behavior code also uses Timer's getTimeRemaining() method to make strategic decisions.
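The non-blocking PID design described above can be sketched as follows. This is a simplified illustration under assumptions: the gains, the completion threshold, and the `updateMeasurement()`/`step()` interface are hypothetical, and the real code would command motors inside the loop.

```java
// Sketch of a non-blocking PID runner: turn() and straightMove() record a
// setpoint and return immediately; a background thread chases it, and
// callers poll isRunning() to learn when the threshold is reached.
class PIDController implements Runnable {
    private final double kP, kI, kD;
    private volatile double setpoint, measurement;
    private volatile boolean running = false;
    private double integral, lastError;
    private long lastTime;

    PIDController(double kP, double kI, double kD) {
        this.kP = kP; this.kI = kI; this.kD = kD;
    }

    // Non-blocking entry points, as in the text.
    synchronized void turn(double angle)          { start(angle); }
    synchronized void straightMove(double dist)   { start(dist); }

    private void start(double target) {
        setpoint = target;
        integral = 0;
        lastError = target - measurement;
        lastTime = System.nanoTime();
        running = true;
    }

    boolean isRunning() { return running; }

    // Real code would feed this from encoders or gyro readings.
    void updateMeasurement(double value) { measurement = value; }

    // One PID step; returns the control output for the motors.
    double step() {
        long now = System.nanoTime();
        double dt = (now - lastTime) / 1e9;   // system-time-based I and D
        lastTime = now;
        double error = setpoint - measurement;
        integral += error * dt;
        double derivative = dt > 0 ? (error - lastError) / dt : 0;
        lastError = error;
        if (Math.abs(error) < 0.01) running = false;  // hypothetical threshold
        return kP * error + kI * integral + kD * derivative;
    }

    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            if (running) step();  // real code sends this output to the motors
            try { Thread.sleep(10); } catch (InterruptedException e) { return; }
        }
    }
}
```

Running the controller on its own thread (`new Thread(pid).start()`) lets the behavior code and camera processing proceed while movement stays smooth, matching the design described above.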

Many test methods were developed to observe individual actions, object detection, and PID control performance. Code for goal-scoring and barcode detection was partially developed but later abandoned to hone basic functionality. Java's audio package allowed us to taunt our opponent with clips from Monty Python and the Holy Grail, which was paramount in winning the audience's favor.
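The recursive solid-color area function mentioned in the VisionHandler description amounts to a flood fill over a color-match mask. The sketch below illustrates the idea under assumptions: the `boolean[][]` mask stands in for pixels matching a hard-coded HSV range, and the area filter mirrors the small-shape filtering described above; names like `Blob` and `findBlobs` are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the recursive region-growing search: each blob records its
// pixel count and bounding box, from which proportions and density
// (points/area) can be computed for typing shapes.
class BlobFinder {
    static class Blob {
        int area = 0;
        int minX = Integer.MAX_VALUE, maxX = -1;
        int minY = Integer.MAX_VALUE, maxY = -1;

        // Density = matched points divided by bounding-box area.
        double density() {
            int w = maxX - minX + 1, h = maxY - minY + 1;
            return (double) area / (w * h);
        }
    }

    // Recursively grow a region of matching pixels (4-connectivity).
    static void grow(boolean[][] mask, boolean[][] seen, int x, int y, Blob b) {
        if (y < 0 || y >= mask.length || x < 0 || x >= mask[0].length) return;
        if (seen[y][x] || !mask[y][x]) return;
        seen[y][x] = true;
        b.area++;
        b.minX = Math.min(b.minX, x); b.maxX = Math.max(b.maxX, x);
        b.minY = Math.min(b.minY, y); b.maxY = Math.max(b.maxY, y);
        grow(mask, seen, x + 1, y, b); grow(mask, seen, x - 1, y, b);
        grow(mask, seen, x, y + 1, b); grow(mask, seen, x, y - 1, b);
    }

    // Scan the mask and collect every blob at least minArea pixels large.
    static List<Blob> findBlobs(boolean[][] mask, int minArea) {
        boolean[][] seen = new boolean[mask.length][mask[0].length];
        List<Blob> blobs = new ArrayList<>();
        for (int y = 0; y < mask.length; y++)
            for (int x = 0; x < mask[0].length; x++)
                if (mask[y][x] && !seen[y][x]) {
                    Blob b = new Blob();
                    grow(mask, seen, x, y, b);
                    if (b.area >= minArea) blobs.add(b); // filter tiny noise
                }
        return blobs;
    }
}
```

In practice the resulting areas, proportions, and densities would feed the typing step that labels each blob a wall-top, ball, or yellow wall.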