Team Eleven/Final Paper

From Maslab 2011


MASLab Final Report

Workload Breakdown

Kristen - Sensor/Controls/Behavior

William - Vision/Behavior

Tim - Mechanical Design


Table of Contents

Overall Strategy
Mechanical Design
Sensor/Actuators
Software Architecture

    RobotMain
    Stop
    BallGrabber
    Navigator
         getState
         forward
         rightTurn
         leftTurn
         rightPID
         leftPID
         bumpLeft
         bumpRight
    WallScorer
    Data 
    ImageProcessor
    Conclusion

reFuses's Design

Overall Strategy - WIN!

We decided to put the balls over the wall. We wanted to be able to hold a number of balls and reliably capture and deposit them over the yellow wall. For the general behavior, we wanted the robot to spin in place every so often to look for balls and walls, collect any balls it found, and deposit the balls over any wall it found. If, however, neither balls nor walls were in the robot's sight when it spun, the robot would wall follow for a certain amount of time before spinning and looking again. In addition, the robot needed to be awesome and fast.


Mechanical Design

In past years, the primary objective of MASLAB has been collecting balls and moving them to a designated scoring area. This year the competition changed slightly and allowed balls to be scored either by placing them in a designated goal or by placing them over a specified wall and onto the opponent's playing field. The second scoring option is by design more complex, and so makes the design process more drawn out. For our robot, we decided that the additional points gained by placing balls over the wall outweighed the additional complexity and design time. With this strategy we moved the design forward and broke it into two main components: the chassis and the ball handling system.

Least Critical Module

The least critical module (LCM) of our robot was the chassis. Its primary role was to act as the skeleton upon which the robot hardware would be attached. It was composed of laser-cut acrylic sheets connected with a tongue-and-groove system and held together by 4-40 screws and epoxy at the joints. The details and features of the chassis were not finalized up front; instead the design evolved through iterations. This let us independently modify the drive system, the ball collection system, and the required sensor mountings. As the final week approached we brought all of those systems together and laser cut a brand new chassis that incorporated all of the design changes we had settled on in previous iterations.

Most Critical Module

The most critical module (MCM) of the mechanical structure was the ball handling system, which was adapted from MASLAB robots of previous years. First, we used a revolving rubber-band hub collection mechanism, a design used both by past competitors and by a few teams in this year's competition. This mechanism allows for rapid ball collection and moves the collected balls into a passive storage container. It seemed like an ideal solution for this year's competition, but because we were aiming to place the balls over a wall, we decided we would have to modify the rolling rubber-band hub. We determined that by using the collection mechanism to also elevate the balls to a height of 8 inches we could solve two problems with one solution. By increasing the diameter of the revolving rubber-band hub and placing it inside a static hub, we created a planetary gear system: the rubber-band hub acts as the sun gear, the stationary hub acts as the annulus, and the balls being collected act as the planet gears. As a secondary aspect, we borrowed the capture and release mechanism from other groups who used gravity to drop the balls over the game wall. Since the ball collection system already brings the balls to a height of 8 inches, it allowed us to create a sloped storage container for the balls. By attaching a servo-controlled door at the lowest point of the sloped storage chamber we provided an outlet channel for the stored balls. When activated, the servo lowered the trap door, allowing the balls to roll over the field wall and onto the opponent's game space.
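As a rough sketch of the kinematics of this arrangement (treating a ball as a planet rolling without slip between the driven inner hub, radius r_hub, and the fixed outer hub, radius r_annulus; these symbols are ours for illustration, not measured values), the ball centers orbit at

    \omega_{carrier} = \omega_{hub} \cdot \frac{r_{hub}}{r_{hub} + r_{annulus}}

so the balls are carried up and over at a fraction of the hub speed set by the two radii.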

The Final Robot

Pictures 1 and 2 show the overall shape and form of our robot; both the front and the back are shown for clarity.

Pictures 1 & 2: Credit to Sam Range for his photography; (left) back view, (right) front view.

In the picture on the left you can see the back of the robot: the large amount of tape used to hold the electrical components in place, the 12V battery, the rear bump sensors, and the servo-controlled door that is lowered to release the balls over the field wall. On the right you can see the front of the robot: the camera, the front bump sensors, the MCM ball collection mechanism, and the computer used to process all the data. In all, the robot weighed about 10 lbs (depending on the number of balls on board) and covered about one square foot in area.


Sensor/Actuators

Our mechanical design required an extra drive motor and a servo as additional actuators beyond the wheel motors. The drive motor was used to spin the roller and the servo to open the trapdoor, which together cost 7 + 5 points. We wanted to follow the wall, for which we used the long-range IR sensors. The long IRs were chosen because of the speed at which the robot approached walls: the longest distance the short-range IRs could read was too close, and the robot did not have enough time to turn, so it would consistently hit the wall. We needed the gyro to turn a set amount reliably while looking for balls, as well as to turn 180 degrees and score. The camera was obviously needed to identify the balls and yellow walls. The bump sensors were needed to line up safely with the yellow wall, as well as to avoid getting stuck. The front bump sensors, specifically, protected against the robot approaching a wall so closely that the front IR sensor went out of range and returned an incorrect value. In hindsight, it may have been useful to add a laser motion sensor to see whether we were actually moving, as we tended to get caught without necessarily hitting a bump sensor.

Sensors/Actuators Used and Corresponding Sensor Points

   0 pts     4 bump sensors (two front, two back)
   12 pts    3 long IR sensors (one left, one front, one right)
   0 pts     1 camera
   0 pts     1 gyro
   7 pts     1 extra drive motor
   5 pts     1 servo
24 total pts < max 30 pts

Software Architecture

The software had three main parts: the behavior (RobotMain), the orc board interface (Data), and the image processing (ImageProcessor).

RobotMain

RobotMain was the main class that created all the other classes, initialized the gyro, and started the behavior finite state machine (FSM) when the power button was pushed. The behavior FSM consisted of a Stop, BallGrabber, WallScorer, and Navigator state. Each state was given access to Data, which allowed it to reach the orc board and thus all the sensors and actuators.
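As an illustration of this structure, here is a minimal sketch of what such a top-level behavior FSM could look like; the class names, method names, and stand-in Data class are assumptions for illustration, not our actual competition code.

    // Illustrative sketch of the top-level behavior FSM described above.
    // State names mirror the report; the Data class below is a placeholder.
    public class RobotMainSketch {

        enum State { STOP, BALL_GRABBER, WALL_SCORER, NAVIGATOR, DONE }

        public static void main(String[] args) {
            Data data = new Data();          // hypothetical orc-board interface
            State state = State.STOP;        // start by stopping and looking around

            while (state != State.DONE && !data.timeComplete()) {
                switch (state) {
                    case STOP:         state = runStop(data);        break;
                    case BALL_GRABBER: state = runBallGrabber(data); break;
                    case WALL_SCORER:  state = runWallScorer(data);  break;
                    case NAVIGATOR:    state = runNavigator(data);   break;
                    default:           state = State.DONE;           break;
                }
            }
            data.stopMotors();
        }

        // Each state runs until it decides on a transition, then returns it.
        static State runStop(Data d)        { /* spin, take pictures, pick next state */ return State.NAVIGATOR; }
        static State runBallGrabber(Data d) { /* PID toward ball until captured */       return State.STOP; }
        static State runWallScorer(Data d)  { /* approach wall, turn, dump balls */      return State.STOP; }
        static State runNavigator(Data d)   { /* wall follow or wander */                return State.STOP; }

        // Minimal stand-in for the team's Data class so the sketch compiles and terminates.
        static class Data {
            private int ticks = 0;
            boolean timeComplete() { return ++ticks > 100; }   // stand-in for the match timer
            void stopMotors() { }
        }
    }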

Stop

The Stop state was the central state in the state machine and was used in transitioning between the other states. Upon entering this state, the robot would stop and rotate in place a certain number of times, capturing and analyzing a picture to find balls and walls after each turn. It would then transition to the appropriate state according to what was revealed by the picture. Generally, if it found a ball, it would go into the BallGrabber state. If it found a wall and decided it wanted to score, it went into WallScorer. And if it didn't find anything after the X turns, it would go into Navigator. This state also enabled us to completely decouple our strategy from the rest of the code, since we had a separate function to determine when to score: whenever we found a wall in a picture, we would call that function, which returned true if we wanted to score and false otherwise.

BallGrabber

When the robot entered the BallGrabber state, it continuously took pictures and processed them with ImageProcessor. The software took the angle error calculated by ImageProcessor and ran a PID controller on it. It continued in this loop until the camera no longer saw the ball, at which point the robot drove forward for one second to ensure that the ball made it into the ball collector. Data contains a ball count, which the code increments at this point. If at any time the robot lost sight of the ball, it entered the Stop state again.

As our strategy was to go as fast as possible, the PID controller worked in such a way that when the robot wanted to move left, the left motor would slow down while the right motor continued at its maximum; the reverse applied for turning right. Since the camera's field of view is fairly narrow, this worked well because the error was never very large. Unfortunately, the controller was not fast enough for a couple of balls that were just barely in the camera's line of sight, but this limitation wasn't much of a hindrance. The two motors were close enough in speed that using the same gains for both the left and right motors worked well.
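A small sketch of this "slow one side, keep the other at maximum" idea follows; the gain, PWM range, and method names are illustrative assumptions rather than our actual values.

    // Sketch of the asymmetric drive controller described above: the side we
    // want to turn toward is slowed in proportion to the angle error while the
    // other side stays at full speed. Gain and motor interface are assumed.
    public class BallChasePidSketch {

        static final double MAX_PWM = 1.0;   // full forward command
        static final double KP = 0.8;        // proportional gain (assumed)

        /** angleError > 0 means the ball is to the right, < 0 means to the left. */
        static double[] drive(double angleErrorRad) {
            double correction = Math.min(MAX_PWM, KP * Math.abs(angleErrorRad));
            double left = MAX_PWM, right = MAX_PWM;
            if (angleErrorRad < 0) {
                left -= correction;    // ball to the left: slow the left side
            } else {
                right -= correction;   // ball to the right: slow the right side
            }
            return new double[] { left, right };
        }

        public static void main(String[] args) {
            double[] cmd = drive(-0.15);   // ball slightly to the left
            System.out.printf("left=%.2f right=%.2f%n", cmd[0], cmd[1]);
        }
    }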

BallGrabber had a timeout that triggered a subroutine contained in Data. This subroutine is also used in Stop and Navigator when a bump sensor is hit; it essentially backs the robot up and turns. After this subroutine finished, the FSM entered Navigator.

Because our robot's mechanical design limited its ability to pick up balls next to walls, this state also checked whether the robot had attempted to pick up a ball three times and hit a bump sensor every time; if so, the behavior FSM entered the Navigator state and went looking for other balls, and three balls were subtracted from the collected-ball count in Data.

Navigator

The Navigator state is itself another FSM, containing eight states: getState, forward, leftTurn, rightTurn, leftPID, rightPID, bumpLeft, and bumpRight.

getState: When the behavior FSM enters the Navigator state, it polls all of the IR sensors. If the front sensor reads less than some minimum, the robot enters leftTurn or rightTurn depending on which side sensor reads something closer. If the left or right IR sensor is within a certain range, it enters leftPID or rightPID correspondingly. If all of the sensors read far away, it goes forward. If either of the bump sensors is pressed, it enters the corresponding state, bumpLeft or bumpRight.

forward: This state makes the robot continue to go forward until one of the sensors is within range, at which point it enters the same state as described in getState.
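The dispatch logic in getState can be condensed into a sketch like the following; the thresholds, sensor accessors, and turn-direction convention are assumptions for illustration, not our calibrated numbers.

    // Condensed sketch of the Navigator's getState dispatch described above.
    // Threshold values and sensor readings are assumed for illustration.
    public class NavigatorDispatchSketch {

        enum NavState { FORWARD, LEFT_TURN, RIGHT_TURN, LEFT_PID, RIGHT_PID, BUMP_LEFT, BUMP_RIGHT }

        static final double FRONT_MIN = 0.25;    // meters, assumed
        static final double SIDE_FOLLOW = 0.40;  // start wall-following inside this range, assumed

        static NavState getState(double leftIr, double frontIr, double rightIr,
                                 boolean bumpLeft, boolean bumpRight) {
            if (bumpLeft)  return NavState.BUMP_LEFT;    // bump sensors take priority
            if (bumpRight) return NavState.BUMP_RIGHT;
            if (frontIr < FRONT_MIN) {
                // Wall ahead: turn away from whichever side is closer.
                return (leftIr < rightIr) ? NavState.RIGHT_TURN : NavState.LEFT_TURN;
            }
            if (leftIr < SIDE_FOLLOW)  return NavState.LEFT_PID;   // follow wall on the left
            if (rightIr < SIDE_FOLLOW) return NavState.RIGHT_PID;  // follow wall on the right
            return NavState.FORWARD;                               // nothing nearby
        }

        public static void main(String[] args) {
            System.out.println(getState(1.2, 0.9, 0.3, false, false)); // prints RIGHT_PID
        }
    }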

leftTurn: In this state the robot continues to turn until the front sensor reads greater than a certain number and the right sensor reads greater than a certain number, at which point it enters rightPID. If a bump sensor is hit, the Navigator FSM will also exit to the appropriate bump state.

rightTurn: This state behaves in the same way as leftTurn only reversed.

leftPID: In this state the robot wall follows using the left IR sensor. When the robot enters this state, left_dist is set to the IR's current reading. The error is then calculated from the current IR reading and left_dist. This controller uses similar logic to the BallGrabber controller, slowing down the appropriate motor. This technique gave quite a bit of leeway in the PID controller and allowed for the noise in the IR sensors. To exit this state, the front IR sensor needs to read less than a certain minimum, in which case the robot enters a right turn. It will also exit to the turn state if the right sensor returns less than the minimum, and it will always exit if a bump sensor is hit.
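A sketch of this wall-following idea, with the reading taken on entry used as the setpoint, might look like the following; the gain and distances are illustrative assumptions.

    // Sketch of the left-wall-following behavior described above: the distance
    // read when entering the state becomes the setpoint, and the error slows
    // one side of the drive. Gain and numbers are assumed.
    public class WallFollowSketch {

        static final double MAX_PWM = 1.0;
        static final double KP = 2.0;           // proportional gain (assumed)

        private final double setpoint;          // left_dist captured on entry

        WallFollowSketch(double initialLeftIr) {
            this.setpoint = initialLeftIr;
        }

        /** Returns {left, right} motor commands for the current left IR reading. */
        double[] step(double leftIr) {
            double error = leftIr - setpoint;   // positive: drifting away from the wall
            double correction = Math.min(MAX_PWM, Math.abs(KP * error));
            double left = MAX_PWM, right = MAX_PWM;
            if (error > 0) {
                left -= correction;             // drifting away: ease back toward the wall
            } else {
                right -= correction;            // too close: ease away from the wall
            }
            return new double[] { left, right };
        }

        public static void main(String[] args) {
            WallFollowSketch follower = new WallFollowSketch(0.35);
            double[] cmd = follower.step(0.45); // drifted 10 cm away from the wall
            System.out.printf("left=%.2f right=%.2f%n", cmd[0], cmd[1]);
        }
    }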

rightPID: This state behaves in the same way as leftPID, only reversed.

bumpLeft: This state calls a subroutine contained in Data. Originally, the subroutine lived only in Navigator, but it is useful for other states as well, so it was moved to Data so that all main states could access it. It is the same subroutine described in the timeout of BallGrabber: essentially, the robot backs up and turns.

bumpRight: This state behaves in the same way as bumpLeft, only reversed.


All of the individual Navigator states contained a timeout. In the event of a timeout, the state switches to the forward state. If the forward state times out then it switches to left or right turn. There is also a Navigator timeout. The Navigator timeout moves the behavior state to Stop where the robot stops and looks around.

WallScorer

The WallScorer consists of an approach PID controller, a 180-degree turn, a back-up, and a stop-and-drop of the trapdoor.

The PID controller is exactly the same as the BallGrabber PID. The error is returned from ImageProcessor in the same units as in BallGrabber, except that it is measured to the center of the wall. This PID controller runs in a while loop until both front bump sensors are pressed.

When both front bump sensors are pushed, the robot turns 180 degrees, as determined by the gyro. There is no controller in this loop; instead, a while loop keeps the robot turning as long as the gyro reading is less than the value corresponding to 180 degrees. It worked well despite the lack of a controller.
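A sketch of that open-loop turn loop is shown below; the gyro and drive interfaces, the polling period, and the turn command are assumptions for illustration, not our actual orc board API.

    // Sketch of the open-loop 180-degree turn described above: spin in place
    // until the integrated gyro angle crosses the target. Interfaces assumed.
    public class GyroTurnSketch {

        interface Gyro  { double headingDegrees(); }     // integrated heading, assumed API
        interface Drive { void spinInPlace(double pwm); void stop(); }

        static void turn180(Gyro gyro, Drive drive) throws InterruptedException {
            double start = gyro.headingDegrees();
            double target = 180.0;
            drive.spinInPlace(0.6);                      // fixed turn command, no controller
            while (Math.abs(gyro.headingDegrees() - start) < target) {
                Thread.sleep(10);                        // poll the gyro periodically
            }
            drive.stop();
        }

        public static void main(String[] args) throws InterruptedException {
            // Tiny simulation: the heading advances a fixed amount per poll.
            final double[] heading = {0.0};
            Gyro gyro = () -> heading[0] += 9.0;         // ~20 polls to reach 180 degrees
            Drive drive = new Drive() {
                public void spinInPlace(double pwm) { System.out.println("spinning at " + pwm); }
                public void stop()                  { System.out.println("stopped"); }
            };
            turn180(gyro, drive);
        }
    }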

After it has turned a sufficient amount, the robot backs up. It continues backing up until it either times out, which it did most of the time, or both back bump sensors hit, which indicates perfect alignment. At this point, the robot lowers its trapdoor and waits while the balls fall out. After this, the ball count in Data is set to zero and the main state changes to Stop, where the robot looks around for more balls.

There are a couple of different timeout features in this state. First, if the robot times out before both of its front bump sensors are pushed, it assumes that it is stuck and backs up and turns (the subroutine in Data). Second, if the robot thinks it has arrived at the wall (i.e. both bump sensors pressed), it checks whether the amount of wall it sees is large enough for it actually to be the wall; otherwise, it assumes it is stuck and backs up and turns using the same subroutine in Data.

If WallScorer times out in any portion after the two front bump sensors have hit the wall, it continues on to the next step.

It would have been slightly better if, when one bump sensor hit while approaching the wall, that side's wheels spun backwards slightly while the other side's wheels went at full force.

Data

The Data class contains all interfacing to the orc board. It allows all other states to interface with the orc board through get, set, and update functions for all sensors and actuators. Data also contains global functions and variables that need to be known across all states. The most notable are the avoidBumpLeft() and avoidBumpRight() methods, which simply back up and turn 90 degrees. This turn is based on the gyro, though it didn't need to be; we did it that way because we already had the capability, and because the actual amount turned otherwise changed considerably when the battery was low. This class also kept track of time: TimeCompete() was checked in every while loop. This approach allowed us to avoid threading; however, if it wasn't checked in some while loop and the robot happened to get caught in that loop, the robot wouldn't stop. In hindsight, a thread that only checked the time and stopped the main thread when the time was complete would have been more efficient and reliable. We were concerned about the processing power of the eePC and whether we would notice a change if we introduced another thread, but I do not think that this would have been a problem.
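For reference, here is a minimal sketch of the watchdog-thread alternative described above; the names and the match length are assumptions, not taken from our code.

    // Minimal sketch of a timer-watchdog thread: a small daemon thread flips a
    // flag when the match time is up, so behavior loops only need to check a
    // boolean instead of calling a time function everywhere. Names assumed.
    import java.util.concurrent.atomic.AtomicBoolean;

    public class MatchTimerSketch {

        static final long MATCH_MILLIS = 3 * 60 * 1000;   // assumed match length
        static final AtomicBoolean timeUp = new AtomicBoolean(false);

        static void startWatchdog() {
            Thread watchdog = new Thread(() -> {
                try {
                    Thread.sleep(MATCH_MILLIS);
                } catch (InterruptedException ignored) { }
                timeUp.set(true);                          // every loop sees this flag
            });
            watchdog.setDaemon(true);                      // won't keep the JVM alive
            watchdog.start();
        }

        public static void main(String[] args) {
            startWatchdog();
            while (!timeUp.get()) {
                // ... run the behavior FSM here ...
                break;                                     // placeholder so the demo exits
            }
            // stop motors here
        }
    }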

ImageProcessor

Our objective was to obtain the maximum amount of information from each picture, even if that meant that it would take longer to analyze them. For example, we wanted to make sure that, if there was a ball in the picture, then it would be recognized, no matter how small it was. Similarly, we aimed at never confusing a yellow goal with a yellow wall, although that meant having a slower image analysis on average.

The first step was to convert the image to HSV values to simplify the process of deciding the color of each pixel. Being in HSV made this relatively easy, as we only needed to find upper and lower bounds on the hue, saturation, and value for each color we considered. We then used one-pass connected component analysis to find the various objects in the image. This consisted of finding all the clusters of pixels of the same color, starting at the bottom of the picture and working our way up. Going in this direction allowed us to apply blue-line filtering: the tops of the walls have a blue line, which lets us determine whether objects are inside the bounds of the field. We therefore ignored any pixels above that blue line. We then had multiple methods to accept or reject the remaining clusters as the objects we wanted.
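A condensed sketch of the per-pixel classification and the blue-line filtering might look like the following; all the HSV bounds are illustrative assumptions rather than our calibrated thresholds.

    // Sketch of HSV color classification and blue-line filtering described
    // above. All threshold values are illustrative assumptions.
    public class HsvClassifySketch {

        enum Color { RED, GREEN, YELLOW, BLUE, WHITE, BLACK, OTHER }

        /** h in [0, 360), s and v in [0, 1]. */
        static Color classify(double h, double s, double v) {
            if (v < 0.15)             return Color.BLACK;
            if (s < 0.15 && v > 0.85) return Color.WHITE;
            if (s < 0.3)              return Color.OTHER;
            if (h < 15 || h > 345)    return Color.RED;
            if (h > 45 && h < 75)     return Color.YELLOW;
            if (h > 90 && h < 150)    return Color.GREEN;
            if (h > 200 && h < 250)   return Color.BLUE;
            return Color.OTHER;
        }

        /**
         * Blue-line filtering: for each column, scan from the bottom of the
         * image upward and record the row of the first blue pixel; anything
         * above that row is outside the field and is ignored.
         */
        static int[] blueLineCutoff(Color[][] image) {      // image[row][col], row 0 = top
            int height = image.length, width = image[0].length;
            int[] cutoff = new int[width];                   // smallest row still considered
            for (int col = 0; col < width; col++) {
                cutoff[col] = 0;                             // default: keep the whole column
                for (int row = height - 1; row >= 0; row--) {
                    if (image[row][col] == Color.BLUE) {
                        cutoff[col] = row;                   // ignore rows above this one
                        break;
                    }
                }
            }
            return cutoff;
        }

        public static void main(String[] args) {
            System.out.println(classify(120, 0.8, 0.6));     // prints GREEN
        }
    }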

For the balls, we first had to find a red or green cluster. To make sure that it was a ball, and not simply an error, we could have simply required the size of the cluster to be large. However, this would not have made it possible to see balls that were further away, a feature which was critical in our overall strategy. We therefore added a test which ensured that the cluster was roughly round by looking at the pixels on the perimeter of the cluster.
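One way such a perimeter-based roundness test could look is sketched below; the tolerance value is an assumption for illustration, not our actual criterion.

    // Sketch of a roundness test based on perimeter pixels: compare the spread
    // of perimeter-pixel distances from the cluster centroid. A circle has
    // nearly equal distances; elongated blobs do not. Tolerance is assumed.
    import java.util.List;

    public class RoundnessSketch {

        /** points are {x, y} perimeter pixels of one connected cluster. */
        static boolean looksRound(List<int[]> perimeter, double tolerance) {
            double cx = 0, cy = 0;
            for (int[] p : perimeter) { cx += p[0]; cy += p[1]; }
            cx /= perimeter.size();
            cy /= perimeter.size();

            double minR = Double.MAX_VALUE, maxR = 0;
            for (int[] p : perimeter) {
                double r = Math.hypot(p[0] - cx, p[1] - cy);
                minR = Math.min(minR, r);
                maxR = Math.max(maxR, r);
            }
            // For a circle, the min and max radius are nearly equal.
            return maxR > 0 && (minR / maxR) > tolerance;
        }

        public static void main(String[] args) {
            // Perimeter of a rough diamond-shaped blob around (2, 2).
            List<int[]> diamond = List.of(
                    new int[]{2, 0}, new int[]{3, 1}, new int[]{4, 2}, new int[]{3, 3},
                    new int[]{2, 4}, new int[]{1, 3}, new int[]{0, 2}, new int[]{1, 1});
            System.out.println(looksRound(diamond, 0.6));    // prints true
        }
    }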

For the yellow walls, the main task was to distinguish them from goals. A goal has a black hole in the middle with yellow on the sides and above it, and a thin strip of white above the yellow. Therefore, we simply checked that the number of black pixels in the area delimited by the yellow was smaller than some threshold. However, in some images where the goal was at a large angle, the number of black pixels was very small, so we also added a test which checked whether there were white pixels on top of the yellow. With these two tests, we were able to distinguish the two perfectly.
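A sketch of these two checks, the black-pixel fraction and the white strip above the yellow, is shown below; the thresholds and the simplified color labels are assumptions for illustration.

    // Sketch of the goal-vs-wall tests described above: (1) fraction of black
    // pixels inside the yellow cluster's bounding box, and (2) presence of
    // white pixels directly above the yellow. Thresholds are assumed.
    public class GoalVsWallSketch {

        // simplified labels: 0 = other, 1 = yellow, 2 = black, 3 = white
        static boolean isGoal(int[][] img, int top, int bottom, int left, int right) {
            int black = 0, total = 0, whiteAbove = 0;
            for (int r = top; r <= bottom; r++) {
                for (int c = left; c <= right; c++) {
                    total++;
                    if (img[r][c] == 2) black++;
                }
            }
            // White-strip check: look a few rows above the yellow region.
            for (int r = Math.max(0, top - 5); r < top; r++) {
                for (int c = left; c <= right; c++) {
                    if (img[r][c] == 3) whiteAbove++;
                }
            }
            double blackFraction = (double) black / total;
            return blackFraction > 0.10 || whiteAbove > 20;   // assumed thresholds
        }

        public static void main(String[] args) {
            int[][] img = new int[10][10];
            for (int[] row : img) java.util.Arrays.fill(row, 1);   // all yellow
            for (int r = 4; r <= 6; r++)
                for (int c = 4; c <= 6; c++) img[r][c] = 2;        // black hole in the middle
            System.out.println(isGoal(img, 2, 9, 0, 9));           // prints true: looks like a goal
        }
    }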

Ultimately, the image processor would calculate the angle needed to turn to get to the closest ball and wall in the picture.
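A sketch of how such a pixel-to-angle conversion could be done under a simple pinhole-camera assumption is shown below; the resolution and field of view are assumed values, not our camera's actual parameters.

    // Sketch of converting a target's pixel column into the turn angle handed
    // to the PID controllers described earlier. Camera parameters are assumed.
    public class PixelToAngleSketch {

        static final int IMAGE_WIDTH_PX = 320;        // assumed camera resolution
        static final double HORIZ_FOV_DEG = 50.0;     // assumed horizontal field of view

        /** Positive result means the target is to the right of the image center. */
        static double angleToTargetDegrees(int targetColumnPx) {
            double offsetPx = targetColumnPx - IMAGE_WIDTH_PX / 2.0;
            double degreesPerPixel = HORIZ_FOV_DEG / IMAGE_WIDTH_PX;  // small-angle approximation
            return offsetPx * degreesPerPixel;
        }

        public static void main(String[] args) {
            System.out.println(angleToTargetDegrees(240));  // ~12.5 degrees to the right
        }
    }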


Conclusion

In conclusion, we did fairly well and placed third. We had a couple of problems with the robot getting caught, which could have been improved upon. Our major failure mode was the orc board, which we managed to short five different times. One of those times was in competition, which prevented us from competing in the final round.
