
Team One/Final Paper

From Maslab 2011

   The competitive design of MASLAB 2011 presented a unique set of challenges.  Our team decided that a more interesting experience could be had by trying to shoot balls over the walls rather than into goals.  Our initial plan was to lift balls up and shoot them over the wall from between two high-speed rollers.  The robot would wander around the map following a right-hand rule, disengaging to pick up balls as it saw them.  While ambitious, we were confident that we could synchronize each system of the robot.  As time progressed, the scope of our mechanical and software design changed to integrate more simply with the map.
   At a basic level, our robot consisted of a conveyor belt attached to an elevated gate.  Two large wheels provided drive and tank-style steering while the weight of the robot balanced between four caster wheels.  Our ORC board and battery were mounted on top of the robot while the laptop was mounted in the middle.  The bottom layer of the robot functioned as a collector: balls could be run over from the front side of the robot and were funneled back to a vertical conveyor belt.  This belt sandwiched balls between a rolling cloth-and-rubber-band belt and a foam backing.  Upon reaching the second story of the robot, balls would roll down into a gated area.  After the robot butted up against a wall, the gate could be opened, releasing the balls across the wall.
   The bulk of our robot was constructed from acrylic, with some wood support.  Our team made liberal use of the laser cutter to build the frame, bumper buttons, and second-story ball-release gate.  Wood was inserted to brace our bumpers and serve as axles for the ball-lifting rollers.  The roller was powered by a stepper motor, which offered the torque necessary to lift the balls and withstand tension from the rubber bands.  In order to overcome bumps on the map floor and the added torque loss from the larger wheels, our drive motors were heavily geared.  The top ball-release gate was a simple servo which lifted and dropped a wooden pole.
   Input relied on a combination of touch, IR, and visual sensors.  A webcam was mounted on the front side of the lower level, below the laptop.  On the ground level, two rounded shoulder buttons relayed data about physical contact with the world and provided a simple means of determining whether we were perpendicular to a wall.  The robot also sported two infrared sensors that fed data into the wander code: one was placed on the front right of the robot, while a center-facing sensor was mounted to the left center of the robot.  A break-beam sensor was implemented to count captured balls, but was later removed for simplicity.
   Our code was designed from the top down: we wanted to parallelize our code development as much as possible (and we recommend this, as it keeps each programmer working on an encapsulated system), so we chose to implement a logic-level splitting system.

Essentially, sensor data is fed into a high-level analyzer (an FSM) that combines data and uncertainty to estimate the robot's current state, and from this estimate generates a suitable low-level Behavior. These low-level Behaviors included Wander, Shoot, Evade, Gather, and Stop (with Stop being triggered by a global time-out). Each low-level Behavior is permitted to update motor values according to the desired behavior (e.g. run our roller motors when we want to collect balls, but not when we're running away from a wall). We agreed that each individual low-level behavior call should not block if at all possible, because the main loop blocks until the low-level behaviors complete. The faster each sub-routine executed, the faster we could process data.
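As a minimal sketch of this split (all type names here are illustrative stand-ins, not our actual classes):

    // Minimal sketch of the logic-level split described above; names are
    // illustrative, not our actual MASLAB classes.
    interface SensorData {}                       // snapshot of touch, IR, and camera input

    interface Sensors { SensorData poll(); }

    interface Motors {
        void setDrive(double left, double right);
        void stop();
    }

    interface Behavior {
        // Must return quickly: the main loop blocks until this call completes.
        void step(SensorData data, Motors motors);
    }

    interface FSM {
        // Combines sensor data and uncertainty into a state estimate and
        // returns the low-level Behavior appropriate for that state.
        Behavior update(SensorData data);
    }

    class MainLoop {
        public static void run(FSM fsm, Sensors sensors, Motors motors, long timeoutMs) {
            long start = System.currentTimeMillis();
            while (System.currentTimeMillis() - start < timeoutMs) {
                SensorData data = sensors.poll();     // gather fresh input
                fsm.update(data).step(data, motors);  // one non-blocking behavior step
            }
            motors.stop();                            // global time-out: Stop behavior
        }
    }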

Data was acquired from the two touch sensors mounted to the front bumper, a front-facing IR sensor, a right-facing IR sensor, and the main camera. Processing a single camera frame at 160x120 pixels in Java proved to be infeasibly slow (this was later determined to be an issue with BufferedImage performance on machines without graphics cards; more details can be found at http://www.jhlabs.com/ip/managed_images.html ), so we went in search of image processing libraries. Ultimately, we decided on a JNI'd implementation of OpenCV, a well-known computer vision library written in C. Note to future teams: if you're going to use JavaCV, the JNI wrapper for OpenCV, make sure that you compile OpenCV WITHOUT SSE instructions.

Our image processing pipeline acts on an image as follows:

1) Blur the image with a Gaussian blur to remove obvious noise.

2) Convert the image to HSV and apply individual band-pass filters to each channel to generate color masks (red/green, yellow, and blue). This was accomplished using thresholds generated from test pictures.

3) Apply morphological erosion and dilation to the image with a 5x5 disc structuring element. This has the effect of removing the inevitable noise from carpet fibers in the blue channel.

4) Apply a blue-line filter to the image, as described in the vision tutorial.

5) Find the contours in each channel to perform connected component labelling.

6) In the red/green channel, report the discovery of a Ball if and only if we find a relatively circular contour above a minimum area.

7) In the yellow channel, report the discovery of a Wall if and only if the convex hull of the contour has area approximately equal to the area of the contour. Report the discovery of a Goal if and only if the convex hull area is significantly larger than the internal area.

8) If the centroid of any reported object lies in the top third of the image, ignore it. This deals with blue-line filtering failures, as well as other strange corner cases.
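As a sketch of the ball-detection path (steps 1-3, 5, and 6; the blue-line filter is omitted), here is the idea written against the modern org.opencv Java bindings rather than the 2011-era JavaCV wrapper we actually used; all threshold values below are placeholders, since ours came from test pictures:

    import java.util.ArrayList;
    import java.util.List;
    import org.opencv.core.*;
    import org.opencv.imgproc.Imgproc;

    // Illustrative ball-detection sketch; requires loading the OpenCV native
    // library first via System.loadLibrary(Core.NATIVE_LIBRARY_NAME).
    public class VisionSketch {
        public static List<MatOfPoint> findBallContours(Mat bgrFrame) {
            // 1) Gaussian blur to remove obvious noise.
            Mat blurred = new Mat();
            Imgproc.GaussianBlur(bgrFrame, blurred, new Size(5, 5), 0);

            // 2) Convert to HSV and band-pass to get a red mask
            //    (the yellow and blue masks are built the same way).
            Mat hsv = new Mat();
            Imgproc.cvtColor(blurred, hsv, Imgproc.COLOR_BGR2HSV);
            Mat mask = new Mat();
            Core.inRange(hsv, new Scalar(0, 100, 100), new Scalar(10, 255, 255), mask);

            // 3) Erode then dilate with a 5x5 elliptical structuring element
            //    to knock out carpet-fiber noise.
            Mat disc = Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, new Size(5, 5));
            Imgproc.erode(mask, mask, disc);
            Imgproc.dilate(mask, mask, disc);

            // 5) Find contours (connected components) in the mask.
            List<MatOfPoint> contours = new ArrayList<>();
            Imgproc.findContours(mask, contours, new Mat(),
                                 Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

            // 6) Keep only relatively circular contours above a minimum area.
            List<MatOfPoint> balls = new ArrayList<>();
            for (MatOfPoint c : contours) {
                double area = Imgproc.contourArea(c);
                double perim = Imgproc.arcLength(new MatOfPoint2f(c.toArray()), true);
                double circularity = 4 * Math.PI * area / (perim * perim);
                if (area > 50 && circularity > 0.6) {   // placeholder thresholds
                    balls.add(c);
                }
            }
            return balls;
        }
    }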

   The highlevellogic package contains the FSM class. The FSM class's functionality can generally be described by the following transition table, where each cell gives the number of the next state:
    Event        Wander (0)   Ball Gathering (1)   End (2)   wallAway (3)   yellowWallScore (4)
    t > 3 min        2                2               2           2                 2
    touch            3                3               2           3                 4
    SeesBall         1                1               2           1                 4
    SeesGoal         4                4               2           4                 4

There are a couple of exceptions. If we remain in the Wander state for 40 updates (about 5 seconds), we go into the wallAway state. If we have been in the Ball Gathering state for more than 5 updates (about 0.5 seconds), then we will remain in Ball Gathering for another 5 updates. Lastly, if we remain in the yellowWallScore state for longer than 40 updates, then we go into wallAway.
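A sketch of this logic follows; the state names match the table, but the priority ordering among simultaneous events is an assumption made for illustration, not necessarily our exact code:

    // Sketch of the transition table above plus its exceptions.
    enum State { WANDER, BALL_GATHERING, END, WALL_AWAY, YELLOW_WALL_SCORE }

    class FsmSketch {
        private State state = State.WANDER;
        private int ticksInState = 0;   // roughly 8-10 updates per second

        State update(boolean timedOut, boolean touch, boolean seesBall, boolean seesGoal) {
            State next = state;
            if (timedOut || state == State.END) {
                next = State.END;                             // t > 3 min stops everything
            } else if (state == State.BALL_GATHERING
                       && ticksInState > 5 && ticksInState <= 10) {
                next = State.BALL_GATHERING;                  // latch: commit to gathering
            } else if (state == State.YELLOW_WALL_SCORE) {
                next = ticksInState > 40 ? State.WALL_AWAY    // stuck scoring: back off
                                         : State.YELLOW_WALL_SCORE;
            } else if (seesGoal) {
                next = State.YELLOW_WALL_SCORE;
            } else if (touch) {
                next = State.WALL_AWAY;
            } else if (seesBall) {
                next = State.BALL_GATHERING;
            } else if (state == State.WANDER && ticksInState > 40) {
                next = State.WALL_AWAY;                       // stuck wandering: back off
            }
            ticksInState = (next == state) ? ticksInState + 1 : 0;
            state = next;
            return state;
        }
    }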

   The lowlevellogic package contains the classes WanderingState, ShootingState, EvadingState, and CapturingState. WanderingState uses the IR sensor data to right-hand-rule around the field. ShootingState uses a PI controller to turn towards a wall and then drive forward. EvadingState backs up and turns left. CapturingState uses a PI controller to turn towards a ball and then drive forward; if both touch sensors are depressed, we lift the gate to release stored balls.
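The PI turn used by ShootingState and CapturingState can be sketched as follows; the error term (horizontal offset of the target in the camera image) and the gains are illustrative assumptions, not our tuned values:

    // Sketch of a PI controller for turning toward a vision target.
    class PiController {
        private final double kP, kI;   // proportional and integral gains (placeholders)
        private double integral = 0;

        PiController(double kP, double kI) { this.kP = kP; this.kI = kI; }

        // error: horizontal offset of the target from image center, in [-1, 1]
        double correction(double error, double dt) {
            integral += error * dt;
            return kP * error + kI * integral;
        }
    }

    // Usage: bias the wheel speeds by the correction so the robot turns
    // toward the target while driving forward.
    //   double turn = pi.correction(targetOffset, dt);
    //   motors.setDrive(forward + turn, forward - turn);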

The physicalObjects package contains the classes Ball, Goal and Wall. These classes are useful because they store the relevant information gathered from vision processing.
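A minimal sketch of one such holder; the fields are assumptions about what our vision pipeline reported, not our exact members:

    // Illustrative physicalObjects data holder.
    class Ball {
        final double centroidX, centroidY; // image coordinates of the contour centroid
        final double area;                 // contour area, a rough proxy for distance

        Ball(double centroidX, double centroidY, double area) {
            this.centroidX = centroidX;
            this.centroidY = centroidY;
            this.area = area;
        }
    }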

   While most systems worked individually, proper ball flow proved critical.  While our code, drive mechanism, and ball gate worked well, the failure of the rollers prevented us from scoring.  Our robot was effectively able to wander and possess balls.  Another area which proved problematic was touch sensor placement: on multiple occasions our robot's touch sensors failed to engage, and once this caught our robot inside a goal.  Furthermore, the decision to use two larger wheels made motion harder on an uneven field.
   When structuring work, teams should not have the mechanical and coding sides work independently.  When the two sides of development do not work hand in hand, one side can begin to outpace the other.  The need to have the robot mechanically developed early on cannot be overstated.  The whole team should attempt to work together to create the core of the robot before diverging into fine tuning and coding.
   Furthermore, teams who try to implement a roller system should avoid rubber bands as a conveyor mechanism.  Instead, try to get a large belt such that balls are unable to slip through and adequate tension exists with which to grab balls.  We recommend making the conveyor as short as possible in order to require the least work from the driving motor.
   As always, teams should enjoy their work and try to learn as much as possible during the month.