Team Five/Final Paper - Maslab 2014

Benny and Jets lost, but not because of a catastrophic mechanical failure or a fatal bug in our code. We lost because friction and gravity had a fight, and friction won.


Introduction

Three of the four members of our team took 6.270 as freshmen, and we learned a few valuable lessons that stayed with us through senior year:

1. Don't try to do everything.
2. A lot of it comes down to driving and navigation.
3. Never, ever stop moving.

Our entire strategy, reflected in the mechanical design, electronics, and code behind our robot, really boils down to these three simple points. During the first couple of days of Maslab, we decided that our strategy would focus on scoring green balls into the upper port of the reactor and sending red balls over the wall to the opponent's side of the field. Scoring in unique reactors was considered unnecessary, and trying to extract balls from the silo was deemed superfluous. This game plan greatly simplified matters because our robot effectively required a single skill set: gather balls, raise them, and drop them in the required locations.

While the mechanical systems in the robot were obviously a very important part of being able to fulfill the above goals, none of it would be possible unless the robot could locate balls and drive to them. The presence of the kit-bot allowed our coding team to begin working on navigation from Day 1, but we were well aware that they needed the final version of the robot in order to carry out realistic testing. As such, an emphasis was placed upon getting the robot built as quickly as possible from a mechanical perspective, with issues such as robustness being addressed as a team once the basic shell of the robot had been given to the coders to work with.

Even if our code crashed and our entire strategy was lost, a robot that was blindly wandering the map was infinitely more likely to accidentally pick up a ball than one that was sitting hopelessly in one spot. As such, we decided early on that a great deal of emphasis would be placed on creating timeouts that would prevent the robot from getting hung up attempting to complete a single action. A timeout effectively instructs the robot to stop trying to carry out a certain activity after a designated period of time, which means that a robot that is fruitlessly trying to accelerate into a wall should never be stuck in that position for very long.
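
As a rough illustration of the idea, here is a minimal Python sketch of such a timeout wrapper (the names are illustrative, not taken from our code):

    import time

    # Give up on an action after a deadline, so the robot never stays
    # hung up on a single behavior (names are illustrative).
    def run_with_timeout(action_step, timeout_s):
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            if action_step():      # returns True once the action completes
                return True
        return False               # timed out; caller moves on to something else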

The Mechanical Side

Once we had decided upon a strategy, we needed to create hardware that would bring the plan to life. The first step involved the gathering of balls. We turned to conventional wisdom for this, using a roller to funnel balls into the underbody of the robot. The design of the roller was simple: it consisted of two laser-cut 'gear' plates, set 5 inches apart on a wooden shaft. Rubber bands spanned the length between the gear plates, and the shaft was mounted to a 120:1 Pololu motor. The balls were swept up a shallow ramp that placed them about an inch above the ground, and a funnel built into the ramp guided the balls towards the spiral lift.

The spiral lift was arguably the mechanical component that required the most troubleshooting. Strategies used in previous years seemed to show that it was the most compact, reliable way of raising balls, but creating it took a fair bit of effort. We tried several concepts, including wrapping clear laboratory tubing around a central shaft and cutting a spiral shape out of insulation tubing around a central shaft. In the end, however, we settled for bending a welding rod into a spiral shape and attaching it to a wooden dowel at the top and bottom. The rod still wobbled considerably, which we remedied by tying it to the central shaft with copper wire. Ultimately, the spiral proved to be highly reliable, raising countless balls without a hitch. It suffered a brief failure in the first round of the competition, but only long after the result of the round had been confirmed.

Once the balls made it to the top of the spiral, we needed to sort them by color so that we could put them where they needed to go. We opted to put the spiral at the center of our robot so the balls would not have to pass all the way under the bot to reach it. This meant that, in order to have room to sort and store all the balls, we had to send them backwards and then back forwards. In the very back we put our sorter, and then had symmetric chutes for the red and green balls to separately roll up to the front. Towards the front, the balls were held back by simple servos. Past the servos, the chutes angled in to reconnect at the front middle of the bot. This way, both red and green balls would fall from the same spot, making the coding easier: we simply needed to square up to the target, and the balls, whether red or green, would come out the middle.

In order to get the balls to roll where we wanted them to, we needed the surfaces to slant down in the desired directions. This meant the balls entered the chutes a few inches higher than the exit point, which led to some interesting geometric challenges: the exit had to be high enough for the reactor slot, while the top had to be low enough for the spiral. We also had to maintain a steep enough angle for the balls to roll down consistently. We found that with the 4-degree angle we chose, the balls would reliably roll if already in motion, but would occasionally stick if sitting still (behind the servo stop, they might not roll even when the servo moved away). To combat this, we had the servo stick up at an angle instead of straight up; it would then move past the middle and then down. This motion pushed the balls back slightly to agitate them and encourage them to roll down. In testing, this worked fine. However, new balls were spray-painted the day of the competition, and we did not get a chance to test with them. Due to the fresh paint, these were considerably stickier and did not roll nearly as well. As a result, several red balls that we caught decided not to roll out of the chute.

The main goal for the mechanical team should be to get a working prototype as quickly as possible. There are a lot of bugs that need to be worked out on the coding end, so giving them something to work with as early as possible is very valuable.


Software architecture/Sample code

High Level

We had three processes: the low-level control and sensor processing, the vision system, and the high-level decision-making system. The low-level system is described in a later section; this section is devoted to the latter two.

Vision System

The vision processing consisted of the following steps:

1. Take in a camera frame.
2. Resize the image to 1/4 the original size.
3. Convert the image to HSV color space and apply thresholds to extract different colors (red and green for balls, teal for reactors, etc.).
4. Clean the thresholded image to get rid of noise. This was accomplished with this function in OpenCV: org.opencv.imgproc.Imgproc.morphologyEx(Mat src, Mat dst, Imgproc.MORPH_OPEN, Mat kernel) (see the sketch after this list).
5. Find the walls in the image.
6. Look for balls and reactors beneath the wall height (this helps avoid being distracted by colors outside the playing field).
7. Determine the direction and distance to the balls and reactors found.
8. Publish that information to the main high-level decision-making process.
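
A minimal Python/OpenCV sketch of steps 2-4 for green (our actual system was written in Java, and the HSV bounds here are illustrative placeholders, not our tuned values):

    import cv2
    import numpy as np

    def threshold_green(frame):
        small = cv2.resize(frame, None, fx=0.25, fy=0.25)       # step 2: 1/4 size
        hsv = cv2.cvtColor(small, cv2.COLOR_BGR2HSV)            # step 3: HSV space
        mask = cv2.inRange(hsv, (40, 80, 80), (80, 255, 255))   # threshold "green"
        kernel = np.ones((5, 5), np.uint8)
        # step 4: morphological opening cleans speckle noise out of the mask
        return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)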

The vision system was written in Java and used OpenCV for vision processing. It had capabilities for detecting QR codes as well, using Google's ZXing project, but the reactors didn't have them, so we didn't bother. The vision system ran at about 10-15 fps.

High-level Decision Making

We implemented a two-level state machine. The upper level had four main states: search, follow balls and reactors, score, and avoid obstacles. Each upper-level state had a lower-level state machine associated with it.

The search state consisted of spinning in a circle, finding the most open direction, and driving in that direction until we hit a wall. At that point we backed up from the wall and repeated the process.

The follow state processed the vision info and drove to the closest ball or reactor.

The score state lined up the robot with the reactor (retrying if the attempt didn't work out, as determined by vision info) and released the balls.

The avoid state backed up and turned, based on the information from the IR sensors.

The high level process was written in Python.
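
As a rough illustration, the upper level looked something like the following sketch (the transition conditions are simplified, and each state's lower-level machine is omitted):

    from collections import namedtuple

    # Simplified vision summary; field names are illustrative.
    Vision = namedtuple("Vision", "sees_target aligned_with_reactor balls_released")

    class HighLevel:
        def __init__(self):
            self.state = "search"

        def step(self, vision, obstacle):
            if obstacle:                                  # avoid preempts everything else
                self.state = "avoid"
            elif self.state == "search" and vision.sees_target:
                self.state = "follow"
            elif self.state == "follow" and vision.aligned_with_reactor:
                self.state = "score"
            elif self.state == "score" and vision.balls_released:
                self.state = "search"
            elif self.state == "avoid":
                self.state = "search"                     # done backing up and turning
            # the caller then runs the lower-level machine for self.state
            return self.state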

Communication Between Systems

The two systems ran as separate processes, so to communicate between them we simply published vision info to a socket and allowed the high-level process to consume that data and make decisions based on it. It is important to note that the multi-process communication didn't work when it was run from IDLE on the Python side.
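
A sketch of the Python consumer side, assuming newline-delimited JSON messages (the actual wire format we used may have differed):

    import json
    import socket

    def vision_updates(host="localhost", port=5000):    # hypothetical port
        sock = socket.create_connection((host, port))
        buf = b""
        while True:
            buf += sock.recv(4096)
            while b"\n" in buf:                          # one JSON message per line
                line, buf = buf.split(b"\n", 1)
                yield json.loads(line)                   # e.g. {"type": "ball", "bearing": ...}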


Low Level

The low level code provided an interface layer between the high level code and the real world. To do this, we had to pick a suite of sensors and actuators that gave us a picture of the environment around us and a means to respond appropriately.

Sensors

Distance: In order to measure distance we had a number of options: ultrasonic sensors, short-range IR sensors, and long-range IR sensors. The short-range IR sensors are digital sensors that trip whenever an object comes within about 10 centimeters, with a hysteresis of about 5 mm. We also found that they would pick up random noise when they were >75 cm from a wall. The long-range IR sensors have an operating range of 15-150 cm. Below 15 cm, the sensor gives a reading that is farther away than the true distance (e.g., if you are 10 cm away, the sensor will say that you are 25 cm away).

The ultrasonic sensors have a range of 2 cm to 400 cm. Unfortunately, we found them to be very sensitive to electrical noise. In particular, running the wires near a motor caused a large amount of garbage data to be recorded. They also seemed prone to burning out at random.

After hopelessly trying to get the sonars working correctly, we decided to go with a combination of short range and long range IR sensors. We only used the IR sensors to make “near or far” decisions and we never used the exact distance. The short range IR functioned as “bump” sensors.
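 
Since we only ever made "near or far" decisions, the distance logic reduced to something like this sketch (the threshold is illustrative, not our calibrated value):

    LONG_IR_NEAR = 400   # raw ADC threshold, hypothetical value

    def wall_is_near(long_ir_raw, short_ir_tripped):
        # Short-range IR acts as a digital "bump" sensor; the long-range
        # IR reading is compared against one threshold, never linearized.
        return short_ir_tripped or long_ir_raw > LONG_IR_NEAR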

Encoders: We used the encoders to measure the angular displacement of the wheels. We initially tried to implement a dead-reckoning routine, but we found that at close to zero speed, the encoders would tick much faster than we expected them to. To work around this issue, we would occasionally reset the accumulated distance traveled so that the error wouldn't continue integrating.

Gyro: The gyroscope was perhaps the best-functioning sensor we had. When standing still, the gyro would measure rotation rates below 0.75 deg/s. Using this information, we decided to ignore the gyroscope when it reported a rotation rate below 1 deg/s and to integrate the angular rate when it was above that threshold. This worked out very well for us.
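
In effect, the integration looked like this sketch (written in Python for clarity; the real routine ran in the maple firmware):

    GYRO_DEADBAND = 1.0   # deg/s; stationary drift stayed below 0.75 deg/s
    DT = 0.005            # 200 Hz loop period

    def integrate_gyro(angle_deg, rate_dps):
        # Ignore readings inside the deadband so drift doesn't accumulate.
        if abs(rate_dps) >= GYRO_DEADBAND:
            angle_deg += rate_dps * DT
        return angle_deg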

Color Sensor: We bought a TCS34725 color sensor from Adafruit. It communicates over I2C, was easy to figure out, and detected colors reliably. One issue we found was that the color sensor would lock up due to EMI from the motors and motor controllers. This, coupled with the fact that the I2C library we were using was written in a blocking manner, meant that when the color sensor refused to communicate, the maple would lock up and no longer receive commands from the tablet.

We mitigated the risk of this happening by having a second maple that would read the color sensor and drive the sorting servo. These components were then powered off a battery pack composed of AA batteries. We also added an NPN transistor that allowed the second maple to toggle the ground for the color sensor on and off. Finally, we enabled the hardware watchdog so that if the color sensor ever caused the maple to hang, it would automatically reset both the maple and the color sensor.

Actuators

Servos: We used three servos on our robot. One servo sorted the balls based on color. The other two actuated the gates that held back the red and green balls. We used the built-in servo library. Make sure that you choose pins that are on the same timer so that you don't use more timers than you need.

Motors: We used four motors on our robot. Two of them were used as drive motors, one drove our screw, and the last one drove our roller.

Low-level Code

The firmware on the maple ran at 200 Hz, with information about the state of the sensors being transmitted at 10 Hz to the tablet. Extreme care was taken so that no blocking operation occurred inside the loop. Each sensor and actuator driver exposed two main functions, void xxx_init(void) and void xxx_periodic(void). This consistent interface allowed us to easily add and remove sensors and actuators as we iterated through our design.
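
The pattern is sketched below in Python for readability (the real firmware was C on the maple; driver and function names are illustrative):

    import time

    class Driver:
        """Mirrors our C convention of xxx_init() / xxx_periodic()."""
        def init(self): pass
        def periodic(self): pass              # must never block

    def run(drivers, send_sensor_state, hz=200, telemetry_divisor=20):
        for d in drivers:
            d.init()                          # each driver's init function
        period = 1.0 / hz
        next_t = time.monotonic()
        tick = 0
        while True:
            for d in drivers:
                d.periodic()                  # non-blocking periodic work
            if tick % telemetry_divisor == 0: # 200 Hz / 20 = 10 Hz telemetry
                send_sensor_state()
            tick += 1
            next_t += period
            time.sleep(max(0.0, next_t - time.monotonic()))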

Interface

The communication between the tablet and the maple happened over USB as an emulated serial port. We defined a protocol that was robust to framing errors and other kinds of communication problems.

We defined start, end, and escape characters and implemented byte stuffing. We also included a basic checksum for good measure. In the body of the message, we defined three fields: a command, a length, and a number of arguments. The communication interface on both ends was implemented using callbacks, which made it very easy to add new commands. By the end of MASLAB we had defined well over 30 commands, although we only ended up using a small subset of them.
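
In sketch form, the framing looked roughly like this (the byte values and header layout here are illustrative assumptions, not our actual constants):

    START, END, ESC = 0x7E, 0x7F, 0x7D         # illustrative delimiter bytes

    def encode_frame(command, args):
        # Header fields: command, payload length, argument count, followed
        # by the payload and a simple additive checksum, all byte-stuffed.
        payload = b"".join(args)               # args: list of byte strings
        body = bytes([command, len(payload), len(args)]) + payload
        checksum = sum(body) & 0xFF
        out = bytearray([START])
        for b in body + bytes([checksum]):
            if b in (START, END, ESC):
                out += bytes([ESC, b ^ 0x20])  # escape reserved bytes
            else:
                out.append(b)
        out.append(END)
        return bytes(out)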

One thing we discovered is that SerialUSB is implemented in a blocking manner. This meant that although the angle-rate integration worked correctly when the maple was connected to a Mac, the tablet serviced the maple much less frequently, so the angle was not being calculated accurately. The solution was to scale the time increment we integrated with by a fudge factor. To find this fudge factor, we would rotate the robot by 360 physical degrees and then divide by the reading from the maple. For example, if the maple reported that it had rotated 260 degrees after the robot spun 360, our fudge factor was 360/260 ≈ 1.385. In the firmware, we took our nominal dt (5 ms) and multiplied it by this fudge factor, after which the robot correctly reported its angular position.

Code Repo

All our code is housed in this GitHub repo: https://github.com/araju/maslab-2014

Things you learned and could have done better

We tried several strategies before arriving at our final robot, and we learned a lot with each attempt.

First, we attempted to do localization on the GPU. This consisted of grabbing distance information from the sensors and sending it to a localization process, which output a list of possible locations for the robot. The information was stored as an image, which allowed it to be processed on the GPU. The RGB image had a different set of information stored in each color channel: each (x, y) pixel represented an (X, Y) position in real-world coordinates, the red channel represented robot orientation, green represented the prior confidence associated with each position, and blue represented the distance from each position to the closest wall. Unfortunately, the procedure we tried was not robust to noise and could not adequately figure out where we were and what orientation we were at once noise was added to the system.

We briefly tried wall-following, but realized that the sonar sensors did not work very well for the task, so we ended up choosing a much simpler approach that better guaranteed we would at least keep moving, which is the key to success.

Always incorporate some form of randomness into your strategy. In one of our rounds, our robot got stuck against a reactor because we assumed that backing up would always be an appropriate response to getting stuck. This is not the case, and we learned that the hard way.

Take calculated risks, and know when to bail on your code, mechanism, or strategy. We spent an entire week trying to get an accurate state estimation running on the maple and a localization routine running on the tablet. It did not work out, and we were able to recover because we realized that although we had the capability to make it work, we wouldn't have enough time to have a functioning robot by the competition date.

Advice for Next year

- First off, we strongly suggest a double-elimination bracket. We would not like the contest to be decided by a freak accident in which an otherwise-reliable robot suffers a strange failure, and considering the number of things that can go wrong, double elimination would give every robot a better chance to show what it can truly do. Additionally, a lot of the teams worked extremely hard on their robots and deserve to see their creations perform at least twice; a month of hard work is worth at least a few extra minutes of time. Also, seeding done at any time before impound can prove to be next to worthless by the actual competition date, so anything that takes the emphasis off initial seeds is a good idea. It was truly unfortunate that one of the top two robots was inevitably going to be knocked out in the second round, and this could easily be prevented in the future.

- We were part of the very first game of the tournament and were taken aback when BotClient failed to provide us with a start signal. Obviously, our robot did not start, and we had to manually edit the code using the tablet, by which time the round had finished. While we realize that there are time constraints, there was really no reason our robot should risk losing and being instantly knocked out of the tournament through absolutely no fault of our own.

- Advice for next year's competitors: do not try to map. Wall-follow or, better yet, don't even wall-follow. Spend your time making sure that your robot never gets stuck. A robot that never gets stuck will always have more opportunities to score.
