
Team Four/Final Paper

From Maslab 2013


Overall Strategy

Our strategy for this game was to collect as many balls as possible and then score all of them in the last minute by shooting them into the tower or over the yellow wall, whichever the robot saw first. We built the robot tall enough to score into the blue tower for 50 points in case it did aim for the tower. Originally we planned to aim only for the blue tower, because we thought the middle tower would have a large capacity; when we found out the blue tier could only fit four or five balls, we started aiming for the yellow wall as well, since that offered more points than scoring in the purple tier would. If we had had enough time, we would have optimized the code to score balls in the top two tiers and send the rest over the wall, but we never got that running.

We also anticipated that our robot would have a much larger capacity than the other robots, which ended up being roughly true, so our strategy involved pressing the cyan button to put as many balls on the field as possible. Clearing the field entirely also seemed like a good strategy, but we figured it would be difficult to do reliably, especially if a ball sat in a relatively isolated spot the robot never reached. Even though our mechanisms could theoretically find every ball, doing so might have taken a long time. Since the button released its balls into a small area, it would not have been difficult, in theory, to pick up all of those balls and then move on.

Mechanical Design

Drive Train

Our drive system used a simple differential drive on a circular chassis, which allowed the robot to turn in place without hitting any walls or knocking balls away. Putting the drive wheels in the middle of each side allowed the robot to turn about the center of its base, simplifying odometry and trajectory planning significantly. Instead of the low-powered, poorly-constructed motors from the kit, we obtained two 12V gear head motors from the staff to power the wheels. These motors (the standard kit motors from previous years) had much more power and allowed us to drive reliably without an additional gear box.

In order to maximize the amount of traction we could get on the carpet, we water-jetted two ridiculously dangerous wheels with spikes along their edges. However, because the field was changed to black foam tiles near Mock 3 (which our wheels would have destroyed), we put tape around the circumference covering the spikes, and that seemed to do the trick. However, the robot sometimes got stuck in small kinks on the field (like where the tiles came together) and didn't always move terribly well. If we had more time, new wheels would have been made to roll better on the field while maintaining enough traction.

Ball Collection System

Our ball collection mechanism consisted of a rubber band roller, a small sheet-metal ramp, and a funnel. This type of roller and ramp design seemed to work well for most teams in the past, but it was actually the system we had the most trouble with. With the first iteration of our design the roller was too low to the ground, causing the balls to jam the roller. In fact, this ruined our first roller motor. In our second design the roller was too high and the rubber bands didn't engage the balls enough and couldn’t get them up the ramp. The third time we designed the roller we finally got the height about right, but we had to add zip-ties to help grab the balls and funnel them into the lift mechanism. Unfortunately, though the zip-ties helped us out, they also jammed occasionally when picking up the balls, stalling out the motor. Additionally, the roller seemed to stall when picking up multiple balls at once. To compensate for this, we used current sensing on the roller motor to allow the code to detect when it was jammed. Once the robot confirmed that the roller motor was stalled, it would drive it in reverse, clearing the jam and spitting the balls back out. This meant we lost possession of whatever balls were in the collection mechanism, but they were almost always recovered.
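
As a rough sketch of that jam-recovery logic (not our actual code), the loop below reverses the roller once the measured current stays high for a short time; the threshold, the timings, and the read_roller_current / set_roller_speed helpers are hypothetical stand-ins for our motor interface.

  import time

  STALL_CURRENT = 1.5      # amps; hypothetical threshold, tuned empirically
  STALL_TIME = 0.3         # seconds the current must stay high before we call it a jam
  REVERSE_TIME = 1.0       # how long to spit the balls back out

  def run_roller(read_roller_current, set_roller_speed):
      """Spin the roller forward, reversing briefly whenever it stalls."""
      stall_start = None
      while True:
          set_roller_speed(1.0)                   # full speed forward
          if read_roller_current() > STALL_CURRENT:
              if stall_start is None:
                  stall_start = time.time()
              elif time.time() - stall_start > STALL_TIME:
                  # Confirmed jam: reverse to clear it, then resume forward
                  set_roller_speed(-1.0)
                  time.sleep(REVERSE_TIME)
                  stall_start = None
          else:
              stall_start = None
          time.sleep(0.02)                        # ~50 Hz polling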

Ball Lift System

The next most challenging mechanism to design was our ball lift and hopper. Early in the design process, we realized that most teams in the past had to design a lift and a storage mechanism separately, which took up lots of room on the robot and slowed down the whole process of collection and scoring. Our mechanism functioned as both a lift and a hopper, which freed up space and optimized the ball handling process of our robot.

The mechanism worked by using a long central brush, powered by a gear head motor, to knock a ball up a concentric spiral ramp made of wire. When a ball reached the top of the ramp, it was blocked by a servo arm that extended down into its path. As the brush continued to spin, the bristles would repeatedly push the ball against this gate servo, effectively pinning it at the top of the spiral. Because the brush kept spinning, it could lift additional balls until each one stopped against the ball ahead of it, building up a chain of balls along the spiral ramp. Once the gate was opened, the entire chain was pushed up at once, spilling the contents out the front of the tower and into whatever goal the robot was pointed at.

To manufacture the spiral, we bent ⅛” aluminum wire around large PVC mandrels on a lathe in [very] low gear. We performed some springback calculations (using helical curvature) to make sure the spirals would come off the mandrel at the correct diameter. The tower supports were cut from 1/32” steel and bent on a brake in Edgerton, then hole-punched and riveted together. Small spiral supports (which connected the spiral wire ramp to the tower) were cut out of ⅛” acrylic sheet on a laser cutter, along with the lid. The brush was simply made from door sweeps cut to length and bolted to a square aluminum shaft. This shaft was turned down to 5/16” on each end to fit into a bushing and to accommodate the gear box at the top.

Shooting Bridge

The bridge at the top of the tower guided balls from the spiral into the tower goal. We designed it to be long enough to reach the central tier of the pyramid from any angle, and deployed it with a simple four bar linkage. This linkage was actuated by a servo fixed on the lid of the robot.


Sensors

In addition to the camera, we used two bump sensors in the front, one short-range IR sensor, and one long-range IR sensor. Ideally we could have used more sensors, but what we had was satisfactory. The short-range IR sensor, positioned on the left side of the robot and pointed forward at 45 degrees, was used for wall-following; the long-range IR sensor, positioned on the right front and angled 45 degrees to the right, was used to prevent the robot from crashing into walls in front of it during wall-following; and the bump sensors were used in all states to tell the robot when it had crashed into a wall. The bump sensors also told us when the robot had hit the wall or tower while trying to score, but we realized a bit too late that this didn't guarantee we were actually at our target.

IR Sensors

The IR sensors were mounted at approximately 45-degree angles so that they could provide as much information about the robot's surroundings as possible. Not only do diagonal IR sensors cover more of the area around the robot than a single spot, they can also reveal "orientation perturbations." If an IR sensor pointed at the wall at a 90-degree angle, it could not tell whether the robot was tilted toward or away from the wall, because it would read a larger distance either way. Also, since the short-range IR sensor on the side faced forwards, it gave us information about the wall closer to the front of the robot, so the robot could react in a timely manner.

Even though our long-range IR sensor only detected obstacles in front of the robot, having it at a 45-degree angle was crucial since we were only using one: angled, it could pick up obstacles off to the side instead of only those directly ahead. However, since we were wall-following on the right, that IR sensor ideally would have been on the left, so we could also get information about walls turning away from us. Even so, PID correction on the robot's distance from the wall was effective enough for wall-following.
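
A minimal sketch of that wall-following correction, assuming a hypothetical read_side_ir() that returns the angled sensor's distance in centimeters, a drive(left, right) motor command, and a should_stop() check; the target distance and gains are illustrative, not our tuned values.

  TARGET_DIST = 20.0   # desired distance from the wall in cm (illustrative)
  KP, KD = 0.02, 0.01  # illustrative gains

  def wall_follow(read_side_ir, drive, should_stop):
      """Steer so the angled IR reading stays near TARGET_DIST (wall assumed on the right)."""
      last_err = 0.0
      while not should_stop():
          err = read_side_ir() - TARGET_DIST          # positive means too far from the wall
          correction = KP * err + KD * (err - last_err)
          last_err = err
          base = 0.5                                  # base forward speed
          # Too far from the wall: speed up the left wheel to turn back toward it
          drive(base + correction, base - correction)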

In situations where the walls bent inwards at an angle similar to the angle of the long-range IR sensor, the robot still managed to avoid the wall (thanks to the forward-facing short-range IR sensor on the side), but it was a bit close for comfort. In situations like these, it would have been useful to have two long-range IR sensors, both pointing inwards at 45-degree angles.

Bump Sensors

The bump sensors could have been improved if they had been better integrated into the mechanical design; otherwise they are difficult to make reliable. If they are not fixed firmly to the robot, they can be pushed without being triggered. Most also provide very little coverage, making it a matter of chance whether the robot actually hits the bump sensor when it hits a wall. We tried adding extensions to our whisker bump sensors and repositioned them many, many times. Only after many iterations, and after securely attaching the sensors to the robot the day before the competition, did they work reasonably well (until we accidentally broke one off right before our final match >.>). Note that bump sensors need a pull-down resistor (a resistor from the signal line to ground) so the input reads a defined value when the switch is open.

Software

Architecture

In the final code that the robot ran, we had separate modules for processing vision, interacting with the Arduino and the various sensors, and running the finite state machine (FSM).

Vision

All of our vision code relied on the OpenCV library, which proved to be adequate for our purposes. The general method for extracting colored objects from single images was as follows:

  1. Convert the image from RGB to HSV color space.
  2. Extract all pixels in a certain color range using cv2.inRange().
  3. Use morphological opening to remove noise (erosion with cv2.erode() followed by dilation with cv2.dilate()).
  4. Extract contours in binary image using cv2.findContours().
  5. Select the largest contour by area and calculate the centroid (or center of the bounding box).
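
A condensed sketch of those steps using OpenCV's Python bindings (the HSV bounds are placeholders; ours came from the calibration file described below):

  import cv2
  import numpy as np

  def find_largest_blob(frame_bgr, lower_hsv, upper_hsv):
      """Return the centroid (x, y) of the largest region in the given HSV range, or None."""
      hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)                   # 1. convert to HSV (OpenCV frames are BGR)
      mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))  # 2. threshold on the color range
      kernel = np.ones((5, 5), np.uint8)
      mask = cv2.dilate(cv2.erode(mask, kernel), kernel)                 # 3. morphological opening to remove noise
      contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,            # 4. extract contours
                                     cv2.CHAIN_APPROX_SIMPLE)
      if not contours:
          return None
      biggest = max(contours, key=cv2.contourArea)                       # 5. largest contour by area
      m = cv2.moments(biggest)
      if m["m00"] == 0:
          return None
      return (m["m10"] / m["m00"], m["m01"] / m["m00"])                  # centroid of the blob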

An early approach involved fitting circles using a Hough transform, but this was too slow for our needs. This contour approach was considerably faster.

There is also a bug in OpenCV 2.4.3 which causes cv2.findContours() to crash: see http://code.opencv.org/issues/2611 for the fix (this involves re-compiling the source code). OpenCV 2.4.2 should work fine though.

For calibration, we wrote a dedicated calibration module with a point-and-click interface that extends the color range of the specified color. All HSV color ranges were then exported to a file that was automatically imported by the vision module running on the robot.
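
The core idea of the calibration tool can be sketched roughly as follows: each click widens the stored HSV range to include the clicked pixel, and the result is written to a file for the vision module to load. The window handling and the file name/format here are simplified stand-ins for our actual module.

  import cv2
  import json
  import numpy as np

  lower = np.array([255, 255, 255])   # start with an empty range
  upper = np.array([0, 0, 0])

  def on_click(event, x, y, flags, param):
      """Expand the HSV range to cover the clicked pixel."""
      global lower, upper
      if event == cv2.EVENT_LBUTTONDOWN:
          hsv = cv2.cvtColor(param["frame"], cv2.COLOR_BGR2HSV)
          px = hsv[y, x]
          lower = np.minimum(lower, px)
          upper = np.maximum(upper, px)
          print("range:", lower.tolist(), upper.tolist())

  def calibrate(color_name, frame):
      cv2.namedWindow("calibrate")
      cv2.setMouseCallback("calibrate", on_click, {"frame": frame})
      while cv2.waitKey(30) != 27:                 # press Esc to finish
          cv2.imshow("calibrate", frame)
      with open("colors.json", "w") as f:          # exported file read by the vision module
          json.dump({color_name: [lower.tolist(), upper.tolist()]}, f)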

FSM

Our finite state machine had two basic states: the exploring/ball-collection state and the scoring state. Within these two basic states, we had "substates" that controlled most of the robot's behavior. We first sketched out our state machine, which essentially looked like this... [insert picture of state machine]

In summary, the robot first looked around for balls by turning in place; if it saw any, it would chase them down. If not, it would wall-follow until it saw another ball or until 15 seconds had passed, after which it would scan the field again. If the robot saw the button, it would aim for it as if it were a ball and ram into it four times in a row (it turned out there was a 20-second delay on the button, so it would have been more effective to collect the released balls first). The robot continued doing this for the first two minutes. In the third minute, it searched for the yellow wall or the yellow box in the tower, aligned itself, and released the balls sitting in the tower. After trying to score, it would look around for any remaining balls (to find the ones that missed the tower or failed to go over the wall) and then try to shoot again with 10 seconds remaining.
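
Stripped to its skeleton, that flow looks roughly like the sketch below. The robot methods (spin_and_find_ball, wall_follow, and so on) are hypothetical placeholders for our actual modules, the timing constants are approximate, and the missed-ball sweep and final re-shoot are omitted for brevity.

  import time

  GAME_LENGTH = 180.0        # three-minute match
  SCORE_AT = 120.0           # switch to scoring for the last minute

  def run_fsm(robot):
      start = time.time()
      state = "SCAN"
      while time.time() - start < GAME_LENGTH:
          elapsed = time.time() - start
          if elapsed > SCORE_AT and state in ("SCAN", "CHASE", "WALL_FOLLOW"):
              state = "FIND_GOAL"                       # last minute: go score
          if state == "SCAN":
              state = "CHASE" if robot.spin_and_find_ball() else "WALL_FOLLOW"
          elif state == "CHASE":
              robot.drive_to_ball()
              state = "SCAN"
          elif state == "WALL_FOLLOW":
              # Follow the wall until a ball is seen or 15 s pass, then scan again
              robot.wall_follow(timeout=15.0)
              state = "SCAN"
          elif state == "FIND_GOAL":
              robot.find_yellow_target()                # yellow wall or the tower's yellow box
              state = "ALIGN_AND_DUMP"
          elif state == "ALIGN_AND_DUMP":
              robot.align_and_drive_until_bump()
              robot.open_gate()                         # release the stored balls
              state = "SCAN"                            # the time check above sends us back to scoring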

We threaded all of our sensor reads so they would be updated continuously, setting flags as true or false that the state machine would then read. At first some of us were afraid that having so many threads would slow down processing, but it turned out not to affect it much. On the other hand, it didn't seem to give us much of an advantage either, except that we didn't have to hard-code sensor checks into every state.
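
Each sensor thread boiled down to something like the following sketch, where read_bump stands in for the actual query to the Arduino:

  import threading
  import time

  class SensorFlags(object):
      """Shared flags written by sensor threads and read by the FSM."""
      def __init__(self):
          self.bumped = False

  def bump_watcher(flags, read_bump, period=0.02):
      """Poll the bump sensor ~50 times a second and latch the result in the shared flags."""
      while True:
          flags.bumped = read_bump()
          time.sleep(period)

  flags = SensorFlags()
  t = threading.Thread(target=bump_watcher, args=(flags, lambda: False))  # placeholder read function
  t.daemon = True     # don't keep the program alive after the match ends
  t.start()
  # ... elsewhere, the FSM simply checks flags.bumped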

Scoring

Because the tower also had a yellow section we could see and align to, we decided to search only for yellow when scoring. Unfortunately, that meant the robot was more likely to go for the yellow wall because of its larger size, but as long as we had collected enough balls that was acceptable.

To align to the tower or the yellow wall, we simply used PID control to point at the center of the yellow target, then drove toward it until a bump sensor was triggered. Unfortunately, relying only on the bump sensors meant that the robot couldn't bump into any other wall on the way, or it would dump the balls prematurely.
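
Roughly, the alignment loop looked like the sketch below: the error is how far the target's centroid sits from the image center, and the output is a differential turn command. The gains, image width, and helper functions are placeholders rather than our real code.

  IMAGE_WIDTH = 320      # pixels (illustrative)
  KP = 0.004             # illustrative proportional gain

  def align_and_approach(find_target_x, drive, bumped):
      """Turn toward the target while creeping forward until a bump sensor fires."""
      while not bumped():
          x = find_target_x()              # centroid x of the yellow target, or None
          if x is None:
              drive(0.3, -0.3)             # lost it: rotate in place to reacquire
              continue
          err = x - IMAGE_WIDTH / 2.0      # positive if the target is to the right
          drive(0.4 + KP * err, 0.4 - KP * err)
      drive(0.0, 0.0)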

Modularity

The advantage of keeping the sensor code, vision code, and finite state machine separate was modularity in the robot's behavior. This was particularly useful for the mock competitions: we could keep working on code for the final competition while still having a robot that functioned minimally even when not everything was working.

We also wrote simple test modules, which would allow us to test our sensors and motors quickly. That proved to be extremely useful before and during the competition to make sure everything was functioning.
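
Each test module was only a few lines. A representative sketch (with hypothetical set_speed / read_value hooks into our hardware interface) just exercises one actuator or prints one sensor stream so a bad connection shows up immediately:

  import time

  def test_motor(set_speed, seconds=2.0):
      """Spin one motor forward, then backward, so a dead or reversed motor is obvious."""
      set_speed(0.5)
      time.sleep(seconds)
      set_speed(-0.5)
      time.sleep(seconds)
      set_speed(0.0)

  def test_sensor(read_value, seconds=5.0):
      """Print a sensor's raw readings for a few seconds to check wiring and noise."""
      end = time.time() + seconds
      while time.time() < end:
          print(read_value())
          time.sleep(0.1)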

Placing in the Mock Competitions

Our mock competition performance was sub-par at best, mainly because we engineers didn't have our act together ahead of time. Throughout the first three weeks we had a lot of mechanical problems to overcome, but the problem that held us back the most was the repeated failure of our roller mechanism. Because we didn't have a working rubber band roller, we couldn't test all of our systems as one integrated package until the seeding competition, which meant that even though our helix worked well the first time, we couldn't properly test its function.

During the final competition, we got to second place mainly through luck. We won most of our matches by malfunctioning a little less than our opponents did, even though our robot had seemed to work so well in testing.

Problems Encountered

Mechanical Design

We encountered many different and unique problems with the mechanical design of our robot, mostly because we had to jump straight into construction and had no time to prototype every mechanism.

The biggest issue we faced was with our rubber band roller. It had to be a very specific diameter and height off the ground in order to engage the balls with enough friction to move them up the ramp, but not so much friction that the motor stalled out. In addition, the ramp had to be bent perfectly in order to keep the balls at a constant distance from the roller at all points in the mechanism.

Getting all of this working took a lot of fiddling and many iterations. In truth, however, our biggest fault was in our assumption that the design would work the first time because it had worked for every other team in the past. We built most of the robot before we tested the roller, and then had to redesign the mechanism to better engage the balls while still fitting in a space designed for the older version. If we had spent a few hours with a prototype roller before we designed everything else, this problem could have been prevented. Note to future teams: the mentality of “oh, this mechanism worked for everyone else, so I don’t really need to prototype it” is a terrible one. Don’t do that.

In addition, we ran into a few problems with assembly and fit. On more than one occasion we had to disassemble half of the base to swap out one part or to screw in a single bolt. The design was relatively modular, but it would have been improved greatly if one of the designers had done an assembly run-through in SolidWorks. In addition, the T-bolt slots were designed to fit ¼” acrylic, when in reality the acrylic we were given was 6 mm. The fits still worked, but they were not as neat as they could have been.

Electronics

5V Regulator

Because we got a pin stuck in the 5V power source on the Arduino, we had to power all our sensors from the 5V regulator. Since a single regulator sourcing that much current would overheat and shut off, we used two 5V regulators and split the sensor loads between them as evenly as possible. Comments online advised against simply wiring two 5V regulators' outputs in parallel (branching out into two regulators and then joining the two branches back into one), because even a small difference between the regulator output voltages would cause one regulator to source nearly all the current, defeating the point of having two regulators at all.

Arduino Connections

We soldered connections onto the Arduino shield in order to make better, more reliable, and more permanent electrical connections from our sensors to the Arduino than a breadboard would allow. We decided to do this because our hardware had not been functioning reliably. In the mock competition earlier that day, we had rogue motors, which turned out to be PWM and direction pins plugged in backwards (this happened while we were trying to figure out why another motor was misbehaving because of a code issue). Later in the day, we couldn't get one of our motors to run at all. While trying to track down what we thought was a bad connection in the motor controller, we accidentally fried the motor controller with the output wire from the 5V regulator, which had been disconnected to keep the regulator from overheating (the battery should have been disconnected, but wasn't). So, in addition to adding a second 5V regulator in parallel with the one we already had, we wanted to finalize our wiring. Since we had gotten everything to work at least once before, we figured we knew how to do it all and would not have too much trouble.

However, along with its benefits, soldering brought its own problems, mostly resulting from inexpert soldering. For example, sometimes joints you thought were soldered were actually not connected, and sometimes connections you thought were reliable somehow came apart even though they had been working earlier. Even worse, a sensor that had been working, and that still read the expected voltages on the board, sometimes stopped getting its signal through to the Arduino pin it was connected to. We only "solved" this last issue by moving the sensor to a completely different pin. If you choose to solder, the multimeter will be your best friend.

Code

It turns out that, for some reason, you can't use a digital pin and an analog pin with the same number at the same time. This appears to be a bug in the Arduino firmware provided by the staff, and hopefully it will be corrected soon.

During the Competition

Unfortunately, our robot didn't perform as well during the competition as we would have liked, or as it had in testing. One of the most noticeable problems we encountered during the competition but not during practice runs was visual noise from the audience. We didn't filter the vision to ignore everything above the blue line, though we probably should have. We had previously adjusted the camera so that it couldn't see over the six-inch wall, but when we reattached it, it seems to have been able to see a bit higher. At first we thought the vision code would be robust enough to withstand a little noise, but we didn't consider situations where it only sees objects off the field, like a red box or an audience member's bright red sweater.
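
One simple fix, which we did not get in before the competition, would have been to black out every image row above where the top of the wall (the blue line) appears before running the color extraction. The cutoff row below is a placeholder that would have to be measured for the actual camera mount.

  WALL_TOP_ROW = 90    # image row of the top of the six-inch wall; placeholder value

  def mask_above_wall(frame):
      """Zero out everything above the wall line so off-field colors are ignored."""
      cropped = frame.copy()
      cropped[:WALL_TOP_ROW, :] = 0
      return cropped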

Hindsight is 20/20

Considering we only had a functioning robot within the last 48 hours of MASLAB, we think we did pretty well! We attribute this to our design and to having our code mostly written by the end of the month, even though we were delayed in testing it. Obviously, with more time to test the code, tune the PID constants, and improve our localization strategies, we would have done better.

Therefore, we would suggest deciding on a strategy as quickly as possible, setting up a finite state machine as quickly as possible, and getting started on the code as quickly as possible, even if the robot hasn't been completely built yet. That way you can definitively decide what sensors you will need, the mechanical engineers can design for them, and the coders can code for them. Also, as the check-offs suggest, getting the vision done early is a good idea too.

Additionally, making things as modular as possible helps a lot. For example, our alignment to the tower, the yellow wall, and the cyan button was exactly the same as the method we used for ball following and alignment. We simply had to calibrate the vision to detect a certain color, and then we could use PID to align ourselves with the target. Because our PID method was relatively general, we could use it for following and for all of the alignment just by setting the error term and the constants for each situation. This made it easy to do a fairly wide range of tasks while keeping things simple. Though the results weren't perfect, they were effective.
