https://maslab.mit.edu/2011/w/index.php?title=Special:NewPages&feed=atom&hideliu=&hidepatrolled=&hidebots=&hideredirs=&limit=20&namespace=0
Maslab 2011 - New pages [en] 2024-03-28T08:42:47Z From Maslab 2011 MediaWiki 1.16.0

https://maslab.mit.edu/2011/wiki/Team_Nine/Final_Paper
Team Nine/Final Paper 2011-02-01T02:54:45Z<p>Red Lion: /* Photo Gallery */</p>
<hr />
<div>== Suggestions for Future Teams ==<br />
1. Start as early as you can. Getting stuff done over the summer means you can relax more and not pull all-nighters during IAP.<br />
<br />
2. Read past years' wikis and the tutorials. They should point you in the right direction.<br />
<br />
3. Get used to CAD for the meches, and get used to Java and the Ubuntu command line for the programmers. It helps a lot to be able to build straight from a CAD model, and to be able to manipulate your computer without a GUI to save processing power (which will be quite limited).<br />
<br />
4. Design a modular robot. If you have funky systems (like omniwheel drive) that need work from programmers, build a chassis that can be taken off, so they can use it to test code while the meches continue working on the rest of the robot. We didn’t do this until it was too late and therefore got no (literally zero) testing before the final competition.<br />
<br />
5. Lay out a schedule and STICK TO IT (we had one, but we kept pushing it back)<br />
<br />
6. Don’t try to thread acrylic unless you never plan on removing the screw: either the acrylic will crack or the threads will strip after two uses. Go for nuts and bolts; they spread out the load better anyway.<br />
<br />
7. Keep your nuts and bolts in easy-to-reach places. Ours were hidden behind motors and rollers, which made deconstruction tough despite our modularity.<br />
<br />
8. Don’t epoxy until you are SURE your design is FINAL<br />
<br />
9. Be careful of weight. If you use heavy materials like acrylic, sheet metal, and PVC, surprise, your robot will be heavy. Sacrificing smooth looks to cut weight is the smart move here.<br />
<br />
10. Use a mouse for odometry, or better yet two, so you can recover x, y, and theta coordinates.<br />
<br />
11. The gyros are pretty bad and the encoders are worse, while the mice work great, so try to compensate. Another workaround for the gyros is a compass, but we didn’t fool around with those.<br />
<br />
12. Back up your code somewhere other than the repository. We lost the last week or so of our code on competition day when it was all mysteriously deleted. Not that it really mattered, since we had so many other problems, but be careful.<br />
<br />
13. Finally, KEEP YOUR DESIGN SIMPLE. As a team with a majority of meches, we got a little carried away on the design, and our brainstorming sessions led to a huge, heavy, super-complicated robot. It was pretty cool, but the coders didn’t have enough time to test, and the code was very complicated anyway.<br />
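The two-mouse odometry idea from tip 10 can be sketched as follows. This is an illustrative reconstruction, not code any team actually ran; the baseline B, the mounting geometry (mice at (0, +B/2) and (0, -B/2) in the robot frame, axes aligned with the robot's), and all names are assumptions.<br />

```java
// Hypothetical sketch: fusing two optical mice into a full (x, y, theta) pose.
public class TwoMouseOdometry {
    static final double B = 0.20;   // baseline between the two mice, meters (assumed)
    double x, y, theta;             // integrated global pose

    /** Integrate one pair of per-step mouse deltas (robot-frame meters). */
    void update(double dx1, double dy1, double dx2, double dy2) {
        double dTheta = (dx2 - dx1) / B;  // opposite x-motion of the mice = rotation
        double dx = (dx1 + dx2) / 2;      // pure translation = average of the mice
        double dy = (dy1 + dy2) / 2;
        // rotate the robot-frame translation into the global frame, then turn
        x += dx * Math.cos(theta) - dy * Math.sin(theta);
        y += dx * Math.sin(theta) + dy * Math.cos(theta);
        theta += dTheta;
    }
}
```

A single mouse cannot distinguish rotating about its own position from translating; the second mouse disambiguates, which is why two mice recover theta as well as x and y.<br />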
<br />
== Overall Design ==<br />
<br />
Originally intending to build as simple a robot as possible, long nights of throwing ideas back and forth left us with quite the complicated design. From our chain-linked, vertically mounted drive train to our LIDAR-style mapping system (a.k.a. two IR sensors mounted on servos), our design quickly became more of a task than we could probably handle. Still, in the end we ended up with a pretty freaking cool robot.<br />
<br />
'''Strategy'''<br />
Voltron is a robot that only goes for putting balls over the wall. Its basic principle is that it has a rubber-band roller in the front to scoop up balls and a large roller in the back to lift balls up to the second level. In hindsight, one roller could have performed both tasks, but hey, hindsight is 20/20. When the balls get up to the second level they are held in a collection bay until the robot drives up to the yellow wall; then the gate lowers and the balls are released. The gate is long (6") to allow a factor of safety in aligning with the wall. Other features are two IR sensors mounted on servos that can map out 180 degrees around the robot without the robot itself having to turn. It is basically a poor man's LIDAR. We also have 4 omni wheels that allow our robot to move and turn in any direction, letting us both strafe and rotate. Finally, we have a gyro to align the data from the LIDAR, two bump sensors to determine when we have hit the yellow wall, and an optical mouse for odometry (we originally wanted two, but we didn't have time to install the second, never mind code for it).<br />
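The "poor man's LIDAR" boils down to converting each servo angle plus IR range reading into a point in the robot frame, which is just a polar-to-Cartesian conversion. A minimal sketch (class and method names, and units, are assumptions for illustration):<br />

```java
// Hypothetical helper: one servo-swept IR reading becomes a 2-D point
// in the robot frame.
public class IrSweep {
    /** Servo angle in radians (0 = straight ahead), range in meters. */
    static double[] toPoint(double servoAngleRad, double rangeMeters) {
        return new double[]{rangeMeters * Math.cos(servoAngleRad),
                            rangeMeters * Math.sin(servoAngleRad)};
    }
}
```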
<br />
'''Materials'''<br />
<br />
Our robot was constructed almost entirely of acrylic and sheet metal, along with other parts we had ordered, including sprockets, chains, steel axles, and angle stock. We chose acrylic because it looks hella cool and we thought Voltron deserved to look good. We wanted to be able to see the ball over its entire journey through the bowels of our robot, so acrylic made sense. It's also very easy to laser-cut and drill through; the only problems were that it was kind of heavy, pretty brittle, and in short supply. We got lucky in that we found extra acrylic lying about, but we couldn't always count on being able to replace broken acrylic pieces. With the extreme weight of our robot from the acrylic and sheet metal, we decided that the gears running the wheels and rollers should be metal instead of plastic so they wouldn't strip. This turned out to be quite unnecessary, as I don't believe any other team had stripping problems, and our robot became that much heavier (and more expensive). As you can imagine, we had a very heavy robot.<br />
<br />
'''Sensors'''<br />
<br />
Outside of the typical uOrc board, camera, IR sensors, bump sensors, servos, and 2 or 3 motors, our design required an optical mouse for odometry and 3 additional motors, and with that an Arduino. Each of these components had to be mounted, of course, and we did so by creating separate sheet-metal cases for the Arduino, eeePC, and uOrc board, and mounting the servos and IR sensors, as well as the camera, on a sensor board located underneath the dock where we collected balls. The 4 drive motors were all located above their respective omni wheels and connected via chains. <br />
<br />
'''Vision'''<br />
<br />
In the end, we opted for a very simple approach with our vision code: frames were taken as quickly as possible, and each pixel was scanned and sorted by color. Neighboring pixels of like color were then clustered into blobs, and blobs were treated as viable objects: red and green ones were balls, blue ones were the lines atop the walls, and yellow ones were goals and scoring walls. This simple approach was relatively fast and could reliably pick up walls and balls; we accelerated it further by down-sampling the image (that is, checking only every other pixel, or every fourth). The biggest issue we had was localization using the camera; the accuracy we achieved using things like blob size and offset within the image was not high enough to be useful, varying by as much as a meter while the robot sat still.<br />
Our final vision code was dramatically simpler than our original intention. Although we initially implemented several advanced features, including Hough transforms and confidence calculations, they were all ultimately removed in the interests of speed and because they didn't offer any particular advantage in the context of the contest. Red or green meant ball, yellow meant goal, and blue meant the top of a wall – anything more complex was largely irrelevant. <br />
That was one of the major lessons learned during MASLAB: however cool or interesting a feature may be, if it doesn't contribute to performing in the contest, it probably isn't a good idea. There's nothing wrong with trying exciting new things, but the time constraints mean that you'll have to pick and choose which exciting things you do – and generally, the ones you want are the ones that will help you win.<br />
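The classify-then-cluster pass described above can be sketched like this. This is a reconstruction under assumed names, not the team's actual code; it shows the same idea of scanning every STRIDE-th pixel and growing blobs from like-colored neighbors with a flood fill.<br />

```java
// Hypothetical sketch: count color blobs in a pre-classified image,
// down-sampled by a fixed stride for speed.
import java.util.ArrayDeque;

public class BlobFinder {
    static final int STRIDE = 2;   // down-sampling: check every other pixel

    /** Count blobs of the given color class; classes is indexed [y][x]. */
    static int countBlobs(int[][] classes, int target) {
        int h = classes.length, w = classes[0].length;
        boolean[][] seen = new boolean[h][w];
        int blobs = 0;
        for (int y = 0; y < h; y += STRIDE)
            for (int x = 0; x < w; x += STRIDE)
                if (classes[y][x] == target && !seen[y][x]) {
                    blobs++;
                    flood(classes, seen, x, y, target);
                }
        return blobs;
    }

    static void flood(int[][] c, boolean[][] seen, int x0, int y0, int target) {
        ArrayDeque<int[]> stack = new ArrayDeque<>();
        stack.push(new int[]{x0, y0});
        while (!stack.isEmpty()) {
            int[] p = stack.pop();
            int x = p[0], y = p[1];
            if (x < 0 || y < 0 || y >= c.length || x >= c[0].length) continue;
            if (seen[y][x] || c[y][x] != target) continue;
            seen[y][x] = true;
            // neighbors at the sampling stride, matching the down-sampled scan
            stack.push(new int[]{x + STRIDE, y});
            stack.push(new int[]{x - STRIDE, y});
            stack.push(new int[]{x, y + STRIDE});
            stack.push(new int[]{x, y - STRIDE});
        }
    }
}
```

A real pipeline would also record each blob's pixel count and centroid to filter out noise and estimate bearing, but the skeleton is the same.<br />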
<br />
'''Omniwheel Drive'''<br />
<br />
The same lesson applies to our drive train. We chose to build a four-wheel omni-wheel system for enhanced mobility; four wheels were chosen because of the natural symmetry of the robot, and because of size constraints, the motors were mounted above the wheels and connected by chains. Although we were successful in building this complex drive train, it was plagued with issues – the weight of the robot meant the wheels needed to be mounted on steel shafts to avoid excessive flexing, the low wheel-base meant the robot occasionally dragged on the floor, and the number of motors meant we could not use the orc board to control the drive system. <br />
As it turns out, the omni-wheel system didn't even offer us much of an advantage; because of the nature of the contest, robots were almost forced to use 'turn and go' navigation, rather than executing the complex paths the omni-wheels would allow. Had we used a conventional two-motor drive train, we could have avoided a lot of work with minimal impact on practical mobility, and used the extra time to implement things that could have greatly improved our performance in the contest.<br />
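For context, the extra math an omni-wheel base demands over "turn and go" is itself small; the mixing from a desired body velocity to the four wheel speeds looks roughly like the following. This is a sketch under an assumed geometry (wheels at 0, 90, 180, and 270 degrees around the chassis, rolling tangentially), not the team's actual drive code, and R and all names are made up.<br />

```java
// Hypothetical four-omni-wheel mixer: body velocity (vx, vy) and spin w
// become individual wheel surface speeds.
public class OmniMixer {
    static final double R = 0.15;  // wheel distance from robot center, m (assumed)

    static double[] mix(double vx, double vy, double w) {
        double[] angles = {0, Math.PI / 2, Math.PI, 3 * Math.PI / 2};
        double[] speeds = new double[4];
        for (int i = 0; i < 4; i++) {
            // each wheel drives along its tangent direction plus the spin term
            speeds[i] = -vx * Math.sin(angles[i]) + vy * Math.cos(angles[i]) + w * R;
        }
        return speeds;
    }
}
```

With a conventional two-motor drive this whole function collapses to "left = forward + turn, right = forward - turn", which is part of why the simpler drive train costs so little in practice.<br />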
<br />
'''Modularity'''<br />
<br />
The final point I would like to make about our robot is its modularity. I would definitely recommend this to future Maslab teams when designing. Originally our robot wasn't modular at all; our first build took us over an hour to finish (check out a time lapse of our original build here: http://www.youtube.com/watch?v=8_epFGdMhL8). Eventually we cut our robot down and added more brackets and supports to make a modular design. Our robot then consisted of three main components that could be separated with ease for quick fixes and changes, and we cut our construction time by over 50%. The three components were our left and right wings and our center console. The left and right wings were composed of the front and rear wheel assemblies, each of which consisted of a motor mount, a wheel mount, and a chain-and-sprocket system. These could be bolted onto the sides of the main compartment, which was 7" wide and flat on both sides. This helped us dramatically when working on our robot, and it ended up being one of the proud points of our design.<br />
<br />
However, despite our modularity, it ended up being too little, too late. An ideal modular system would have the wheels on one module, the sensors on another, and the ball-collecting apparatus on a third. That way the coders could have been testing omniwheel code and sensor code while we finished the build. As it turned out, the coders didn't get the fully functioning robot until about 6 days before the final competition, which certainly was cutting it close. The problems with weight and with the robot dragging on the floor, and our failure to identify these problems, meant that the coders never finished their code, and our robot ended up driving straight and blind at the final competition. Moral of the story: make your designs modular so everyone can be working at once.<br />
<br />
== Final Outcome==<br />
<br />
Unfortunately our robot did not come together like we had intended. On the day of the final competition we simply did not have a robot ready for the task at hand. In the preliminary round, we lost both 3-minute sub-rounds, scoring zero points. Come the second round we stepped our game up a bit, collecting one ball and displacing another. After dropping the one ball we collected, however, we still ended up with zero points. We ultimately came in second to last place. Despite our limited success, both my team and I can certainly agree that the experience was one filled with unmatched learning opportunities. It is an entirely hands-on project that will force those who are not experienced in the field to learn new skills they will undoubtedly be able to apply in the future. I recommend this to anyone interested. Voltron force, out!<br />
<br />
== Photo Gallery ==<br />
[[Image:robot1.jpg |thumb|200px|CAD View 1]]<br />
[[Image:robot2.jpg |thumb|200px|CAD View 2]]<br />
[[Image:robot3.jpg |thumb|200px|CAD View 3]]<br />
[[Image:isometric.jpg |thumb|200px|Robot View 1]]<br />
[[Image:frontview.jpg |thumb|200px|Robot View 2]]<br />
[[Image:backview.jpg |thumb|200px|Robot View 3]]<br />
[[Image:lasercut.jpg |thumb|200px|Lasercutting dxf]]<br />
[[Image:construction.jpg |thumb|200px|Mid construction]]<br />
[[Image:table.jpg |thumb|200px|Our workspace]]<br />
[[Image:firstdrive.jpg |thumb|200px|Maiden voyage]]<br />
[[Image:prindle.jpg |thumb|200px|Swag Master Flex working hard]]</div>Green Lion

https://maslab.mit.edu/2011/wiki/FinalScores
FinalScores 2011-02-01T00:26:49Z<p>Yichen: /* Seeding Rank */</p>
<hr />
<div>The final competition is a double-elimination tournament seeded two days before the final competition. Teams that seed well are given byes in certain rounds. In order to keep the final competition a reasonable length, several rounds are run a few hours before the final competition. The final runs are completed in front of an audience, and every team runs at least once.<br />
<br />
Read about each team in the<br />
[http://web.mit.edu/6.186/2011/Lectures/Maslab2011Program.pdf Competition Program]<br />
<br />
== Seeding Rank ==<br />
The seeding ranks of the teams are listed below. Ties are broken by number of balls displaced and then by robot weight (the lightest robot seeds higher).<br />
<br />
#Team 2 (75 points, 15 balls displaced)<br />
#Team 13 (23 points, 19 balls displaced)<br />
#Team 3 (12 points, 9 balls displaced)<br />
#Team 11 (11 points, 6 balls displaced)<br />
#Team 7A (5 points, 8 balls displaced)<br />
#Team 1 (2 points, 2 balls displaced)<br />
#Team 10 (2 points, 2 balls displaced)<br />
#Team 7B (1 point, 1 ball displaced)<br />
#Team 6 (0 points, 0 balls displaced)<br />
#Team 9 (0 points, 0 balls displaced)<br />
<br />
== Tournament Rounds ==<br />
The final tournament consisted of 23 rounds (some of which are byes) with a winners bracket and a losers bracket. <br><br />
[[File:TournamentRounds.png]]<br />
[[File:FinalRounds.png ]]<br />
<br />
== Scores ==<br />
The scores for each of the rounds above are listed below. For rounds with byes, the scores are marked with zeros. Round 22 started with a robot failure, so only one robot ran at first. When it became apparent that the other robot was not fixable in time, the working robot was advanced. <br><br />
[[File:Scores.png]]</div>Yichen

https://maslab.mit.edu/2011/wiki/Team_Thirteen/Final_Paper
Team Thirteen/Final Paper 2011-01-31T20:31:01Z<p>Rhan: </p>
<hr />
<div>'''overall strategy'''<br />
<br />
Two weeks into Maslab, January 14th, we decided to radically change our strategy and robot from a simpler design for goal-scoring to a complex and roller-intensive launching mechanism for scoring over walls.<br />
<br />
Given the nature of this year’s competition, there was a clear dichotomy in design plan: namely, a goal scoring mechanism versus a wall scoring mechanism. No team built a robot that could do both. Logically, if one could construct a sound design for scoring over walls, it was entirely advantageous to do so and entirely disadvantageous to try scoring in goals.<br />
<br />
We originally planned to only score goals, for easier mechanical design. However, we realized that while our exploring and ball-collecting abilities were on par with (if not better than) those of our competitors, we could only hope to earn 4 points per ball as opposed to their 6.<br />
<br />
'''mechanical design and sensors'''<br />
<br />
Review of previous literature emphasized a few key points in mechanical design. Robustness, a small (round) footprint, and as few complicated moving parts as possible, all figured prominently in the design of previous winners.<br />
<br />
Our first robot fit the criteria. Although it was more square than round, it was essentially two plates of acrylic with supporting walls. The motors directly drove the two wheels mounted on either side, and a roller mechanism in front sucked balls into the body of the robot where a ramp reliably funneled the balls into the back. A servo attached to a sheet of metal opened and closed to score balls into goals. The only flaw was that we were unable to score balls over walls.<br />
<br />
A.W.E.S.O.M-O was a far more complicated design. It was larger, elliptical, and had a launching mechanism consisting of home-cut gears turning rollers, which brought one ball at a time up a ramp and, ideally, over the wall. Though perfectly sound in theory, the practical implementation of this design was limited by the unevenness of the gears, which consistently jammed. Eventually, we settled on a single belt that ran over the rollers (reducing the number of gears which had to synchronize and turn), which seemed to work slightly better at propelling the balls up the ramp.<br />
<br />
When deciding the lift mechanism, we had to consider whether we wanted a ground-level or wall-level hopper; in retrospect, a wall-level hopper would have made scoring easier and more reliable. At the time, that was vetoed in favour of a LIDAR system (with long-range IR sensors) we had planned to implement.<br />
<br />
Ultimately we did not construct our LIDAR because mapping became less necessary given our reliable wall following and navigation. Our navigation system was remarkable considering that we only used four short-range IR sensors. Two were mounted diagonally at the front left and front right, scanning for obstacles ahead in case the robot wanted to “turnLeft”, “hallFollow”, or “wallEnd”. The other sensor was mounted on the right side, toward the back, and was used to maintain distance from the wall in our “hugRight” wall-following state and to detect a “wallEnd” scenario.<br />
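The core of a "hugRight" state is just proportional steering on the error between the right-side IR reading and a setpoint distance. The following is an illustrative sketch, not the team's actual code; the setpoint, gain, speeds, and names are all assumptions.<br />

```java
// Hypothetical right-wall follower: steer proportionally to the
// distance error reported by the right-side IR sensor.
public class HugRight {
    static final double TARGET = 0.15;   // desired wall distance, m (assumed)
    static final double KP = 2.0;        // steering gain (assumed)
    static final double BASE = 0.3;      // cruise speed (assumed)

    /** Returns {leftSpeed, rightSpeed} from the right IR distance in meters. */
    static double[] wheelSpeeds(double rightDist) {
        double turn = KP * (rightDist - TARGET);  // too far from wall: turn right
        return new double[]{BASE + turn, BASE - turn};
    }
}
```

A derivative term on the same error (making it PD) damps the oscillation this bare P controller tends to produce along long walls.<br />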
<br />
The camera was only used for ball detection and goal detection. We originally considered implementing stereo vision with two cameras but the unreliable FPS output (ranging from 7 to 30 FPS for no apparent reason) was a deterrent. We also briefly toyed with quadphase encoders on our first robot, but the getDistance() method returned irrational and nonsensical readings. Although we had dead reckoning and velocity inputs on our first robot, and they were more convenient to use, we were ultimately satisfied with the quality of A.W.E.S.O.M-O’s navigation as it was.<br />
<br />
'''software design'''<br />
<br />
In terms of software, our robot was quite simple and robust. Our robot, A.W.E.S.O.M-O, had a default state of wall following, but would break that behavior and start to approach a ball or yellow wall if one was detected. Once an object of interest was detected (ball or wall), the robot would approach it using a simple PD control loop. After each ball collected or wall scored, the robot would enter a scan state that would break upon seeing a ball. With this behavior we hoped to explore the entire field, and in practice we seemed to do so.<br />
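A PD loop like the one mentioned above can be sketched in a few lines; this is an illustrative reconstruction (gains and names assumed, not the team's actual values), driving the turn command from the bearing error to the target.<br />

```java
// Hypothetical PD controller: turn command from a bearing error in radians.
public class PDController {
    final double kP, kD;
    double lastError;

    PDController(double kP, double kD) { this.kP = kP; this.kD = kD; }

    /** dt is the loop period in seconds; positive output = turn left. */
    double update(double error, double dt) {
        double derivative = (error - lastError) / dt;  // damps overshoot
        lastError = error;
        return kP * error + kD * derivative;
    }
}
```

The derivative term is what keeps the robot from oscillating around the target the way a bare proportional controller would.<br />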
<br />
To make programming the robot easier for all members of the team, we created HAL (the part of the robot that developed consciousness). HAL is short for hardware abstraction layer, and it served as a communication link between the state machine and the robot. HAL handled things like overriding the watchdog, velocity-to-PWM conversions, voltage-to-distance conversions for rangefinders, and all other robot inputs and outputs.<br />
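As an example of what one such HAL conversion can look like: Sharp-style IR rangefinders have an output voltage that falls off roughly as a power law of distance, so a common approach is to fit d = a * v^b to a few calibration points. The constants and names below are made up for illustration, not the team's calibration.<br />

```java
// Hypothetical voltage-to-distance conversion for an IR rangefinder,
// using an assumed power-law fit d = A * v^B.
public class IrRangefinder {
    static final double A = 0.25, B = -1.1;   // placeholder calibration constants

    /** Convert a sensor voltage to an approximate distance in meters. */
    static double voltsToMeters(double v) {
        return A * Math.pow(v, B);
    }
}
```

Keeping conversions like this behind the HAL means the state machine reasons in meters and never sees raw ADC voltages.<br />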
<br />
Since our robot relied heavily on wall following, we decided to make our wall-follow state a state machine as well. The wall-follow state machine had the following 4 states, which handled any scenario possible in the Maslab world: “hugRight”, “turnLeft”, “hallFollow”, and “wallEnd”. In the end we did not implement hall follow, as our hug-right state stayed close enough to the wall that we deemed it unnecessary.<br />
<br />
The goals presented a slight problem for our wall-following code. Although the edges were yellow, the goal itself was black, and the camera interpreted that as empty space. Therefore, the robot tried to turn into the goal every time. Eventually we added a panic response: if the camera saw a goal 12 inches or closer, the robot would abruptly veer away from it. Impressively, if the robot had been wall following, it would veer back on course after panicking.<br />
<br />
We also had stall detection implemented in vision. If the frame did not change enough over a certain time, the robot would assume it was stuck and run a freak-out function. This feature was a little buggy: when the robot traversed a large enough room, the subsequent frames would be similar enough to make the robot think it was stalled, even though it was crossing the room as intended.<br />
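Frame-difference stall detection of this kind can be sketched as below. This is a reconstruction with assumed thresholds and names, not the team's actual code; the bug they describe corresponds to the mean difference dipping under the threshold even while moving, which is why pairing this with odometry is more robust.<br />

```java
// Hypothetical stall detector: declare a stall when successive grayscale
// frames barely differ for long enough.
public class StallDetector {
    static final double DIFF_THRESHOLD = 4.0;  // mean per-pixel change (assumed)
    static final int STALL_FRAMES = 30;        // consecutive quiet frames (assumed)
    int quietFrames;

    /** Feed one pair of consecutive frames; true once a stall is declared. */
    boolean update(int[] prev, int[] cur) {
        long sum = 0;
        for (int i = 0; i < cur.length; i++) sum += Math.abs(cur[i] - prev[i]);
        double mean = (double) sum / cur.length;
        quietFrames = (mean < DIFF_THRESHOLD) ? quietFrames + 1 : 0;
        return quietFrames >= STALL_FRAMES;
    }
}
```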
<br />
'''overall performance'''<br />
<br />
During the competition our robot performed admirably. Despite two burnt motors and a broken camera we placed 4th. In terms of software, we benefited greatly from our freak out function. Like other teams, our robot was sometimes confused by the field and stuck in a rut. However, having a freak out function allowed us to essentially “reset” what the robot was thinking and resume activity. Our mechanical design was the weakest point in our robot, with the scoring mechanism jamming or malfunctioning almost every round.<br />
<br />
'''conclusions/suggestions for future teams'''<br />
<br />
* DO NOT CUT YOUR OWN GEARS!!! Despite being warned by multiple people we tried it anyway and paid the price by having our gear train jam at the worst possible times.<br />
<br />
* On a mechanical note, simpler is better. The more quickly and easily a robot can be put together / taken apart, the more likely one can fix unexpected last-minute issues.<br />
<br />
* Use SVN or some form of version control. Two weeks into Maslab our eeePC kernel panicked and we almost lost all of our code. Luckily we got it back but we still lost 2 days of work time.<br />
<br />
* Plan your state machine and software architecture. Because we had planned everything out we were able to write and test each state separately before putting everything together. This allowed us to find bugs faster, and helped us maintain clean, easily changeable code.<br />
<br />
* Gearing down the motors is not that beneficial; the motors provided are powerful enough to drive most robots, and speed becomes a critical factor toward the end of the competition. Faster robots collect more balls and explore more of the field.</div>Rhan

https://maslab.mit.edu/2011/wiki/Team_One/Final_Paper
Team One/Final Paper 2011-01-31T04:07:52Z<p>Allanm: </p>
<hr />
<div> '''<nowiki> Magnetometer, but I hardly</nowiki>'''<br />
<br />
<nowiki> The competitive design of MASLAB 2011 presented a unique set of challenges. Our team decided that a more interesting experience could be had by trying to shoot balls over the walls rather than into goals. Our initial plan was to lift balls up and shoot them between two high-speed rollers over the wall. The robot would wander around the map following a right-hand rule while disengaging to pick up balls as it saw them. While ambitious, we were confident that we could synchronize each system of the robot. As time progressed, the scope of our mechanical and digital robot changed to more simply integrate with the map.<br />
At a basic level, our robot entailed a conveyor belt attached to an elevated gate. Two large wheels provided drive and tank-style steering, while the weight of the robot balanced on four caster wheels. Our orc board and battery were mounted on top of the bot while the laptop was mounted in the middle. The bottom layer of the robot functioned as a collector: balls could be run over from the front side of the robot and were funneled back to a vertical conveyor belt. This belt sandwiched balls between a rolling cloth-and-rubber-band belt and a foam backing. Upon reaching the second story of the robot, balls would roll down into a gated area. After butting up against a wall, the gate could be opened, releasing the balls across the wall.</nowiki><br />
<br />
<nowiki> The bulk of our robot was constructed from acrylic, with some wood support. Our team made liberal use of the laser cutter to build a frame, bumper buttons, and the second-story ball-release gate. Wood was inserted to brace our bumpers and serve as axles for the ball-lifting rollers. The roller was powered by a stepper motor, which offered the torque necessary to lift the balls and withstand tension from the rubber bands. In order to overcome bumps on the map floor and the added torque loss from the larger wheels, our driving motors were heavily geared. The top ball-release gate was a simple servo which lifted and dropped a wooden pole.</nowiki><br />
<br />
<nowiki> Input relied on a combination of touch, IR, and visual sensors. A webcam was mounted on the front side of the lower level, below the laptop. On the ground level, two rounded shoulder buttons relayed data about physical contact with the world and provided a simple means of determining if we were perpendicular to a wall. The robot also sported two infrared sensors to feed data into the wander code. One sensor was placed on the front right of the robot while a center-facing sensor was mounted to the left center of the robot. A break-beam sensor was implemented to count captured balls, but was later removed for simplicity. </nowiki> <br />
<br />
<nowiki><br />
Our code was designed from the top down: we wanted to parallelize our code design as much as possible (and this is recommended, as it keeps each programmer working on an encapsulated system), so we chose to implement a logic-level splitting system.</nowiki><br />
<br />
<nowiki>Essentially, sensor data is fed into a high-level analyzer (FSM) that collects both data and uncertainty to estimate the robot’s current state, and from this estimate generates a suitable low-level Behavior. These low-level Behaviors included Wander, Shoot, Evade, Gather, and Stop (with Stop being triggered by a global time-out). Each low-level Behavior is permitted to update motor values according to the desired behavior (i.e. run our roller motors when we want to collect balls, but not when we’re running away from a wall). We agreed that each individual low-level behavior call should not block if at all possible, because the main loop blocks until the low-level behaviors are complete. The faster the simple sub-routines execute, the faster we can process data.</nowiki><br />
<nowiki><br />
Data was acquired from the two touch sensors mounted to the front bumper, a front-facing IR sensor, a right-facing IR sensor, and the main camera. Processing a single camera frame at 160x120 pixels in Java proved to be unfeasibly slow (this was later determined to be an issue with BufferedImage performance on machines without graphics cards; more details can be found HERE: http://www.jhlabs.com/ip/managed_images.html ) so we went in search of image processing libraries. Ultimately, we decided on a JNI’d implementation of OpenCV, a well-known computer vision library written in C. Note to future teams: if you’re going to use JavaCV, the JNI wrapper to OpenCV, make sure that you compile OpenCV WITHOUT SSE instructions.</nowiki><br />
<br />
Our image processing pipeline acts on an image as follows:<br />
1) Blur the image with a Gaussian blur to remove obvious noise<br />
2) Convert the image to HSV, and apply individual band-pass filters to each channel to generate various color-masks (red/green, yellow, and blue). This was accomplished using thresholds generated from test pictures.<br />
3) Apply morphological erosion and dilation to the image with a 5x5 disc structuring element. This has the effect of removing the inevitable noise from various carpet fibers in the blue channel.<br />
4) Apply a blue-line filter to the image, as described in the vision tutorial.<br />
5) Find the contours in each channel to perform connected component labelling.<br />
6) In the red/green channel, report the discovery of a Ball if and only if we find a relatively circular contour above a minimum area.<br />
7) In the yellow channel, report the discovery of a wall if and only if the convex hull of the contour has area approximately equal to the area of the contour. Report the discovery of a goal if and only if the convex hull area is significantly larger than the internal area.<br />
8) If the centroid of any reported object is above the top third of the screen, ignore it. This deals with blue-line filtering failures, as well as other strange corner cases.<br />
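To make step 3 concrete, here is morphological erosion sketched in plain Java. The real pipeline used OpenCV's erode with a 5x5 disc structuring element; this illustrative version uses a 3x3 square neighborhood and hypothetical names, but the principle (a pixel survives only if its whole neighborhood is set, so isolated noise pixels vanish) is the same.<br />

```java
// Hypothetical plain-Java erosion on a binary mask, indexed [y][x].
public class Morphology {
    /** A pixel survives erosion only if its entire 3x3 neighborhood is set. */
    static boolean[][] erode(boolean[][] in) {
        int h = in.length, w = in[0].length;
        boolean[][] out = new boolean[h][w];
        for (int y = 1; y < h - 1; y++)
            for (int x = 1; x < w - 1; x++) {
                boolean keep = true;
                for (int dy = -1; dy <= 1 && keep; dy++)
                    for (int dx = -1; dx <= 1 && keep; dx++)
                        keep = in[y + dy][x + dx];
                out[y][x] = keep;   // border pixels are simply cleared
            }
        return out;
    }
}
```

The matching dilation (a pixel is set if any neighbor is set) then restores the surviving regions to roughly their original size, which is the erode-then-dilate "opening" the pipeline relies on.<br />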
<br />
The highlevellogic package contains the FSM class. The FSM class’ transition function can generally be described by the table below (rows are events, columns are current states, and each entry is the next state):<br />
* Event \ State: Wander (0), Ball Gathering (1), End (2), wallAway (3), yellowWallScore (4)<br />
* t > 3 min: 2, 2, 2, 2, 2<br />
* touch: 3, 3, 2, 3, 4<br />
* SeesBall: 1, 1, 2, 1, 4<br />
* SeesGoal: 4, 4, 2, 4, 4<br />
<br />
<nowiki>There are a couple of exceptions. If we remain in the wander state for 40 states (about 5 seconds), we go into the wallAway state. If we have been in the Ball Gathering state for more than 5 states (about 0.5 seconds), then we will remain in Ball Gathering for another 5 states. Lastly, if we remain in the yellowWallScore state for longer than 40 states, then we will go into wallAway.</nowiki><br />
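The transition table above can be written down directly as code. This is a reconstruction, not the team's FSM class; the event priority order (timeout first, then touch, then ball, then goal) is an assumption, since the table does not say what happens when several events fire at once.<br />

```java
// Hypothetical encoding of the FSM transition table (state numbers as in
// the table); the timer/hysteresis exceptions are omitted for brevity.
public class Fsm {
    static final int WANDER = 0, GATHER = 1, END = 2, WALL_AWAY = 3, SCORE = 4;

    static int next(int state, boolean timeUp, boolean touch,
                    boolean seesBall, boolean seesGoal) {
        if (timeUp) return END;                         // t > 3 min row: always 2
        if (state == END) return END;                   // End column is absorbing
        if (touch)    return (state == SCORE) ? SCORE : WALL_AWAY;
        if (seesBall) return (state == SCORE) ? SCORE : GATHER;
        if (seesGoal) return SCORE;
        return state;                                   // no event: stay put
    }
}
```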
<br />
<nowiki> The lowlevellogic package contains the classes WanderingState, ShootingState, EvadingState and CapturingState. WanderingState uses the IR sensor data to right hand rule around the field. ShootingState uses a PI controller to turn towards a wall and then drive forward. EvadingState backs up and turns left. CapturingState uses a PI controller to turn towards a ball and then drive forward, if both touch sensors are depressed then we lift the gate to release stored balls.</nowiki><br />
<br />
<nowiki>The physicalObjects package contains the classes Ball, Goal and Wall. These classes are useful because they store the relevant information gathered from vision processing.</nowiki><br />
<br />
<nowiki> While most systems worked, it proved critical to ensure proper ball flow. While our code, drive mechanism, and ball gate worked well, the failure of the rollers prevented us from scoring. Our robot was effectively able to wander and possess balls. Another area which proved problematic was the touch sensor placement: on multiple occasions our robot’s touch sensors failed to engage, and this once caught our robot inside a goal. Furthermore, the decision to use two larger wheels made motion harder on an uneven field.</nowiki><br />
<br />
<nowiki> When structuring work, teams should not have the mechanical and coding sides work independently. When the two sides of development do not work hand in hand, one side can begin to outpace the other. The need to have the robot mechanically developed early on cannot be overstated. The whole team should attempt to work together to create the core of the robot before diverging into fine-tuning and coding.</nowiki><br />
<br />
<nowiki> Furthermore, teams that try to implement a roller system should avoid rubber bands as a conveyor mechanism. Instead, try to get a large belt such that balls are unable to slip through and adequate tension exists with which to grab balls. We recommend making the conveyor as short as possible in order to require the least work from the turning motor.</nowiki><br />
<nowiki><br />
As always, teams should enjoy their work and try to learn as much as possible during the month.</nowiki></div>Allanm

https://maslab.mit.edu/2011/wiki/Team_Seven/Final_Paper
Team Seven/Final Paper 2011-01-31T03:53:19Z<p>Rafacb: Created page with "== MASLAB 2011: Final Paper == Roberto Meléndez, Christian X. Segura, Rafael Crespo, and Javier E. Ramos This paper serves as an introduction and technical description of our..."</p>
<hr />
<div>== MASLAB 2011: Final Paper ==<br />
<br />
Roberto Meléndez, Christian X. Segura, Rafael Crespo, and Javier E. Ramos<br />
<br />
<br />
This paper serves as an introduction and technical description of our robot,<br />
the Mighty Duck. The paper discusses: overall strategy, mechanical design and<br />
sensors, software design, overall performance, and future suggestions. It is<br />
informal, but as complete as possible to provide guidance to future Maslab teams.<br />
<br />
<br />
== Overall Strategy ==<br />
<br />
In terms of the general and high-level approach to the challenge, our team decided<br />
to develop a simple, yet very maneuverable robot. This was mainly because we had<br />
only one programmer and three MechEs. We decided to score by throwing the balls<br />
over the fence. Our approach was mechanically simple and designed to be easy to assemble.<br />
<br />
From a high-level design standpoint, our robot was driven by two geared motors<br />
with a caster wheel in the back. Our mechanical strategy was to collect balls by<br />
pinching them between sets of rubber bands placed in parallel. Our robot was<br />
designed to place the balls above the fence separating the teams, to score the<br />
maximum number of points possible.<br />
<br />
<br />
== Mechanical Design ==<br />
<br />
As mentioned above, our robot employed a rubber-band basket mechanism to<br />
collect the balls. The basket was attached to rotational arms that raised and<br />
lowered the basket on top of the balls. As the basket came down, the balls got<br />
pinched between the rubber-bands placed on the underside of the basket. The<br />
basket would then rotate up and let the balls drop through the back of the robot<br />
using the computer as a ramp. The strategy was to collect the balls and then have<br />
the robot back up to the fence. After the robot was properly aligned, the robot<br />
would then deposit the balls.<br />
<br />
[[File:robot.jpg]]<br />
<br />
Also, we designed and built our own wheels, to benefit from improved traction over<br />
the stock ones. The robot employed a servo to rotate the arm and most of the<br />
weight was placed close to the ground for improved stability. The circular design of<br />
the base prevented the robot from getting caught in corners.<br />
<br />
It is important to note that our team built two robots. The first robot was built<br />
during the first two-week period, and the second during the last week of the<br />
competition. Our decision to build the second robot was based on the fact that we<br />
could not score the maximum number of points with the first one: it could only<br />
dribble balls and place them in the floor goals. Even though we built a second<br />
robot, the first served as a coding and prototyping base for the team. It is<br />
important to choose a strategy from the beginning and stick to it; it was very<br />
costly for our team to start building a new robot.<br />
<br />
<br />
== Code ==<br />
<br />
The code written by our team was focused on simplicity. The team had only one<br />
programmer, which meant his time had to be split among vision, navigation, and<br />
the overall logic of the code; this was probably the main reason why the robots<br />
ended up underperforming.<br />
<br />
<br />
== State Machine ==<br />
<br />
The state machines used for both of our robots were similar. As the Mechanical<br />
Design section shows, one of our robots could only go up to balls and pick them<br />
up, while the other had the ability to grab them and then throw them over a<br />
wall.<br />
<br />
The first robot we built had only the ability to store balls and score in floor goals.<br />
Since the scoring mechanism was not very reliable, it would only try to score<br />
during the last 40 seconds of the match. Even though the robot did not score<br />
consistently, the navigation and ball-grabbing actions were consistent and<br />
efficient. Here is a rough version of the state machine:<br />
<br />
[[File:statemachine1.jpg]]<br />
<br />
The second robot was a bit more complex in its behavior. It was capable of<br />
scoring over walls with some success, which is why we designed a state machine<br />
that prioritized ball grabbing only if the robot did not have any balls already<br />
stored, and scoring otherwise. Since we did not have time to detect accurately<br />
whether the robot had in fact grabbed a ball, we made the robot try to score<br />
when the time limit was approaching. Here is what the state machine looked like:<br />
<br />
[[File:statemachine2.jpg]]<br />
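The behavior described above (grab only while nothing is stored, score otherwise, and force a scoring run near the end of the match) can be sketched as a small transition function. This is our own illustrative reconstruction, not the team's code; the state names, match length, and endgame window are assumptions.

```java
// Hypothetical sketch of the second robot's behavior FSM. All constants
// and names are invented for illustration.
public class BehaviorFsm {
    enum State { STOP, BALL_GRABBER, WALL_SCORER }

    static final long MATCH_MS = 180_000;   // assumed 3-minute match
    static final long ENDGAME_MS = 40_000;  // assumed endgame window

    static State next(State s, int ballsStored, boolean seesBall,
                      boolean seesWall, long elapsedMs) {
        boolean endgame = elapsedMs > MATCH_MS - ENDGAME_MS;
        // Near the time limit, dump whatever we have at the first wall seen.
        if (endgame && ballsStored > 0 && seesWall) return State.WALL_SCORER;
        // Prioritize grabbing only while nothing is stored.
        if (ballsStored == 0 && seesBall) return State.BALL_GRABBER;
        if (ballsStored > 0 && seesWall) return State.WALL_SCORER;
        return State.STOP;                  // spin in place and look again
    }
}
```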
<br />
<br />
== Vision ==<br />
<br />
One of the team’s biggest flaws was the vision code. As suggested by the<br />
TAs, we tested and made the robot’s vision an early priority. Sadly, we<br />
never got it to work consistently or had enough time to test it in real<br />
action. It is important to point out that even if the pictures are taken in<br />
26-100 itself, once you start testing your code while the robot moves,<br />
the results become less precise.<br />
<br />
The code consisted of looking for balls, walls, and goals. The robot could<br />
easily tell balls apart from the other elements in the picture, since they<br />
were either red or green, but distinguishing goals from walls was inconsistent,<br />
since both were yellow. The differences between a goal and a wall were minor,<br />
so it is very important to set some time aside just to tackle that problem.<br />
It is also crucial that your vision code be efficient. We managed to process<br />
images with only one pass over the picture’s pixels, which allowed our robot<br />
to make decisions at a rapid rate.<br />
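A single-pass color count like the one described might look like the following. This is a minimal sketch under assumed conditions: packed 24-bit RGB pixels and crude dominance thresholds rather than the team's actual calibration.

```java
// Illustrative one-pass pixel classification: every pixel is visited
// exactly once and binned as red, green, or other. Thresholds are
// invented, not the team's values.
public class OnePassVision {
    // returns { redCount, greenCount }
    static int[] countBallPixels(int[] rgb) {
        int red = 0, green = 0;
        for (int p : rgb) {                       // single iteration
            int r = (p >> 16) & 0xFF, g = (p >> 8) & 0xFF, b = p & 0xFF;
            if (r > 150 && r > 2 * g && r > 2 * b) red++;
            else if (g > 150 && g > 2 * r && g > 2 * b) green++;
        }
        return new int[] { red, green };
    }
}
```

In practice these counts would feed a connected-component or centroid step; the point is that all color decisions happen in one sweep.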
<br />
<br />
== Navigation ==<br />
<br />
For the navigation part, we did not use any fancy PID controller. We went<br />
for simplicity and efficiency, and that is what we got. We used three<br />
short-range IR sensors and did a simple wall follow to navigate the map.<br />
It did remarkably well and explored most maps to completion. The only problem<br />
with this strategy is that if one of the sensors fails, your robot will not<br />
navigate correctly, which is what happened to us in the final competition.<br />
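A wall follower of this kind reduces to a few range comparisons per loop. The sketch below is a guess at the shape of such logic, not the team's code: it assumes readings in centimeters, a right-wall follow, and made-up thresholds.

```java
// Bang-bang wall follower on three IR readings (left, front, right, in cm).
// Side followed and all distances are illustrative assumptions.
public class WallFollower {
    enum Move { FORWARD, TURN_LEFT, TURN_RIGHT }

    static Move step(double left, double front, double right) {
        if (front < 20) {                        // wall ahead: turn toward open side
            return left > right ? Move.TURN_LEFT : Move.TURN_RIGHT;
        }
        if (right < 10) return Move.TURN_LEFT;   // too close to followed wall
        if (right > 30) return Move.TURN_RIGHT;  // drifting away: hug the wall
        return Move.FORWARD;
    }
}
```

Note the failure mode the text mentions: if, say, the right sensor dies and reads "far," this logic steers the robot into the wall it thinks it has lost.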
<br />
<br />
== Results and Recommendations ==<br />
<br />
It is highly recommended that you give your coder(s) time to do their work<br />
and that you have at least two coders. Our team lacked manpower in this<br />
department, and our performance suffered because we did not have time to<br />
perfect either the vision or the navigation code. We would also recommend<br />
writing very simple navigation code; no PID controller is necessary unless<br />
you need precise movements. The gyro is not very useful (open loop worked<br />
better than the gyro in our case…), so don’t use it! Finally, prioritize<br />
the vision code! Without good vision code, your robot will not work well!<br />
<br />
<br />
== Final Conclusion and Remarks ==<br />
<br />
The robot performed remarkably well during the tests, given the time constraints<br />
(one week to build a new robot), but did not perform as expected during the final<br />
competition, which was a bit disappointing. The main suggestion we have for<br />
future teams is to build early, leaving plenty of time for testing and tweaking.<br />
Choose a strategy early, ideally before the competition, and implement it as fast<br />
as possible. Also, many things can be done before the competition, such as<br />
writing code, making parts, and deciding on a navigation strategy.</div>Rafacbhttps://maslab.mit.edu/2011/wiki/Team_Eleven/Final_PaperTeam Eleven/Final Paper2011-01-31T02:28:29Z
<hr />
<div>== '''MASLab Final Report''' ==<br />
<br />
== Workload Breakdown ==<br />
Kristen- Sensor/Controls/Behavior<br />
<br />
William- Vision/ Behavior<br />
<br />
Tim- Mechanical Design<br />
<br />
<br />
== Table of Contents ==<br />
<br />
Overall Strategy<br />
Mechanical Design<br />
Sensor/Actuators<br />
Software Architecture<br />
RobotMain<br />
Stop<br />
BallGrabber<br />
Navigator<br />
getState<br />
forward<br />
rightTurn<br />
leftTurn<br />
rightPID<br />
leftPID<br />
bumpLeft<br />
bumpRight<br />
WallScorer<br />
Data <br />
ImageProcessor<br />
Conclusion<br />
<br />
----<br />
<br />
== reFuses's Design ==<br />
<br />
'''Overall Strategy - WIN! ''' <br />
<br />
We decided to put the balls over the walls. We wanted to be able to hold a number of balls and reliably capture and deposit them over the yellow wall. For the general behavior, we wanted the robot to spin every so often to look for balls and walls, collect the balls it found, and deposit them over the walls it found. If, however, neither balls nor walls were in the robot’s sight when it spun, the robot would attempt to wall follow for a certain amount of time before spinning and looking again. In addition, the robot needed to be awesome and fast.<br />
<br />
----<br />
<br />
'''Mechanical Design'''<br />
In past years, the primary component of MASLAB has centered on collecting balls and moving them to a designated scoring area. This year the competition changed slightly, allowing teams to score either by placing balls in a designated goal or by putting them over a specified wall and onto the opponent’s playing field. This secondary scoring option is by design more complex, and so makes the design process a bit more drawn out. For our robot, we determined that the additional points gained by placing balls over the wall outweighed the additional complexity and design time. With this strategy we moved the design forward and broke it into two main components: the chassis and the ball handling system.<br />
<br />
Least Critical Module<br />
The least critical module, LCM, of our robot was the chassis. Its primary role was to act as the skeleton upon which the robot hardware would be attached. It was composed of laser-cut acrylic sheeting connected in a tongue-and-groove system. The sheets were held together by 4-40 screws and epoxy at the joints. The details and features of the chassis were not finalized initially; instead the design evolved through iterations. This let us independently modify the drive system, ball collection system, and the required sensor mountings. As the final week approached we brought all those systems together and laser cut a brand new chassis incorporating all the design changes required by the previous iterations.<br />
<br />
Most Critical Module<br />
The most critical module, MCM, of the mechanical structure was the ball handling system, which was borrowed from MASLAB robots of previous years. First, we utilized a revolving rubber-band hub collection mechanism used both by past competitors and by a few teams in this year’s competition. This mechanism allows rapid ball collection and moves the balls into a passive storage container. It seemed an ideal solution to this year’s competition, but because we were aiming to place the balls over a wall, we decided to modify the rolling rubber-band hub. By using the collection mechanism to also elevate the balls to a height of 8 inches, we could solve two problems with one solution. By increasing the diameter of the revolving rubber-band hub and placing it inside a static hub, we created an epicyclic (planetary) gear system: the rubber-band hub acts as the sun gear, the stationary hub as the annulus, and the collected balls as planet gears. As a secondary aspect, we borrowed the capture-and-release mechanism from other groups who used gravity to drop the balls over the game wall. The ball collection system already brings the balls to a height of 8 inches, which allowed for a sloped storage container. By attaching a servo-controlled door at the lowest point of the sloped storage chamber we provided an outlet channel for the stored balls. Upon operation the servo lowered a trap door, allowing the balls to travel over the field wall and onto the opponent’s game space.<br />
<br />
The Final Robot<br />
As a whole system, Pictures 1 & 2 show the exact shape and form of our robot; both the front and back are shown for clarity.<br />
<br />
Pictures 1 & 2: Credit to Sam Range for his photography, (left) back view, (right) front view.<br />
In the picture on the left you can see the back of the robot, with a large amount of tape holding the electrical components in place, along with the 12V battery, the rear bump sensors, and the servo-controlled door that was lowered to let the balls over the field wall.<br />
On the right you can see the front of the robot; pictured are the camera, front bump sensors, the MCM ball collection mechanism, and the computer used to process all the data. In all, the robot came to about 10 lbs (depending on the number of balls) and covered about one square foot in area.<br />
<br />
----<br />
<br />
'''Sensors/Actuators''' <br />
<br />
Our mechanical design required an extra drive motor and a servo as additional actuators beyond the wheel motors. The drive motor spun the roller and the servo opened the trapdoor, which together cost 7 + 5 sensor points. We wanted to follow walls, for which we used the long-range IR sensors. These were chosen because of the speed at which the robot approached: the maximum distance on the short-range IRs was too close, and the robot did not have enough time to turn and consistently avoid hitting the wall. We needed the gyro to turn a set angle reliably while looking for balls, and to turn 180 degrees to score. The camera was obviously needed to identify the balls and yellow walls. The bump sensors were needed to line up safely with the yellow wall, as well as to avoid getting stuck. The front bump sensors, specifically, protected against the robot approaching a wall so closely that the front IR sensor went out of range and returned an incorrect value. In hindsight, it may have been useful to use a laser motion sensor to tell whether we were actually moving, as we tended to get caught without necessarily hitting a bump sensor.<br />
<br />
'''Sensors/Acutators Used and Corresponding Sensor Points''' <br />
<br />
0 pts: 4 bump sensors (two front, two back)<br />
12 pts: 3 long-range IR sensors (one left, one front, one right)<br />
0 pts: 1 camera<br />
0 pts: gyro<br />
7 pts: extra drive motor<br />
5 pts: servo<br />
24 total pts < max 30 pts<br />
<br />
'''Software Architecture''' <br />
<br />
The software had three main parts, the behavior (RobotMain), the orc interface (Data), and the image processing (ImageProcessor). <br />
<br />
'''''RobotMain''''' <br />
<br />
RobotMain was the main class that created all the other classes, initialized the gyro, and started the behavior finite state machine (FSM) when the power button was pushed. The behavior FSM consisted of Stop, BallGrabber, WallScorer, and Navigator states. Each state was passed Data, which allowed it to access the orc board and thus all the sensors and actuators.<br />
<br />
'''''Stop'''''<br />
<br />
The Stop state was the central state in the state machine and was used for transitioning between the other states. Upon entering this state, the robot would stop and rotate in place a set number of times, capturing and analyzing a picture after each turn to find balls and walls. It would then transition to the appropriate state according to what the picture revealed. Generally, if it found a ball, it went into the BallGrabber state; if it found a wall and decided it wanted to score, it went into WallScorer; and if it found nothing after the X number of turns, it went into Navigator. This state also let us completely decouple our strategy from the rest of the code, as we had a separate function to decide when to score: whenever we found a wall in a picture, we called that function, which returned true if we wanted to score and false otherwise.<br />
<br />
'''''BallGrabber''''' <br />
<br />
When the robot entered the BallGrabber state, it continuously took pictures and processed them with ImageProcessor. The software took the angle error calculated by ImageProcessor and ran a PID controller on it. It continued in this loop until the camera no longer saw the ball, at which point the robot continued forward for one second to ensure that the ball made it into the ball collector. Data contains a ball count, which the code incremented at this point. If at any time the robot lost sight of the ball, it entered the Stop state again.<br />
<br />
As our strategy was to go as fast as possible, the PID controller worked in such a way that when the robot wanted to move left, the left motor would slow and the right motor would continue at its maximum, and vice versa. As the camera’s field of view is quite narrow, this worked fairly well, since the error was never very big. Unfortunately, the controller was not fast enough for a couple of balls that were just at the edge of the camera’s line of sight, but this limitation wasn’t much of a hindrance. The two motors ran at close enough speeds that using the same gains for both the left and right motors worked well. <br />
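The steering scheme described above (one motor slows in proportion to the error while the other stays at full speed) can be written as a few lines. This is a sketch under assumed units and gains, not the team's controller; it shows only the proportional term.

```java
// Proportional steering toward a ball: slow the motor on the turn side,
// keep the other at maximum. MAX and KP are illustrative values.
public class BallChaser {
    static final double MAX = 1.0, KP = 2.0;

    // angleError in radians; negative means the ball is to the left.
    static double[] motorSpeeds(double angleError) {
        double slow = Math.max(0, MAX - KP * Math.abs(angleError));
        return angleError < 0
            ? new double[] { slow, MAX }   // slow left motor to turn left
            : new double[] { MAX, slow };  // slow right motor to turn right
    }
}
```

Because the camera's field of view is narrow, the error stays small, so even a pure P term like this tracks well without integral or derivative terms.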
<br />
BallGrabber had a timeout that triggered a subroutine contained in Data; the same subroutine is used in Stop and Navigator when a bump sensor is hit, and essentially backs up and turns. After this subroutine finished, the FSM entered Navigator.<br />
<br />
As our robot’s mechanical design limited its ability to pick up balls next to walls, this state contained a rule: if the robot had attempted to pick up a ball three times and hit a bump sensor every time, the behavior FSM entered the Navigator state and went looking for other balls. It also subtracted three balls from the collected-ball count in Data.<br />
<br />
'''''Navigator ''''' <br />
<br />
The Navigator state is another FSM which contains 8 states, getState, forward, leftTurn, rightTurn, leftPID, rightPID, bumpLeft, and bumpRight.<br />
<br />
''getState'': When the behavior FSM enters the Navigator state, it polls all of the IR sensors. If the front sensor reads less than some minimum, the robot enters leftTurn or rightTurn depending on which side sensor reads closer. If one of the left or right IR sensors is within a certain range, it enters leftPID or rightPID correspondingly. If all of the sensors read far away, it goes forward. If either of the bump sensors is pressed, it enters the corresponding state, bumpLeft or bumpRight.<br />
''forward'': This state makes the robot continue forward until one of the sensors comes within range, at which point it transitions as described in getState.<br />
<br />
''leftTurn'': In this state the robot continues to turn until both the front sensor and the right sensor read greater than certain thresholds, at which point it enters rightPID. If a bump sensor is hit, the Navigator FSM will also exit to the appropriate bump state.<br />
<br />
''rightTurn'': This state behaves in the same way as leftTurn only reversed.<br />
<br />
''leftPID'': In this state the robot wall follows using the left IR sensor. When the computer enters this state, left_dist is set to the IR’s current reading. The error is then calculated from the current IR reading and left_dist. This controller used logic similar to the BallGrabber’s, slowing down the appropriate motor. The technique gave quite a bit of leeway in the PID controller and tolerated the noise in the IR sensors. To exit this state, the front IR sensor must read less than a certain minimum, in which case the robot enters a right turn. It will also exit to a turn state if the right sensor reads less than the minimum, and it will always exit if a bump sensor is hit. <br />
<br />
''rightPID'': This state behaves in the same way as leftPID, only reversed.<br />
<br />
''bumpLeft'': This state calls a subroutine contained in Data. Originally the subroutine lived only in Navigator, but it proved useful in other states as well, so it was moved to Data so that all main states could access it. It is the same one described in the BallGrabber timeout: essentially the robot backs up and turns.<br />
<br />
'' bumpRight:'' This state behaves in the same way as bumpLeft, only reversed.<br />
<br />
<br />
All of the individual Navigator states contained a timeout. In the event of a timeout, the state switches to forward. If the forward state times out, it switches to a left or right turn. There is also an overall Navigator timeout, which moves the behavior state to Stop, where the robot stops and looks around. <br />
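The getState dispatch described above is essentially a priority list of sensor checks. The following is a hypothetical reconstruction; the thresholds and the assumption that readings are distances in centimeters are ours, not the team's.

```java
// Dispatch for the Navigator sub-FSM: bumps win, then an obstacle ahead
// forces a turn toward the more open side, then a nearby side wall
// selects wall-following. All thresholds are invented.
public class NavDispatch {
    enum NavState { FORWARD, LEFT_TURN, RIGHT_TURN, LEFT_PID, RIGHT_PID,
                    BUMP_LEFT, BUMP_RIGHT }

    static NavState getState(double left, double front, double right,
                             boolean bumpL, boolean bumpR) {
        if (bumpL) return NavState.BUMP_LEFT;
        if (bumpR) return NavState.BUMP_RIGHT;
        if (front < 25)                              // wall ahead
            return left < right ? NavState.RIGHT_TURN : NavState.LEFT_TURN;
        if (left < 40)  return NavState.LEFT_PID;    // follow left wall
        if (right < 40) return NavState.RIGHT_PID;   // follow right wall
        return NavState.FORWARD;
    }
}
```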
<br />
'''WallScorer''' <br />
<br />
The WallScorer contains an approach PID controller, a 180 degree turn, a back up, and a stop and drop trapdoor.<br />
<br />
The PID controller is exactly the same as the BallGrabber PID. The error is returned from ImageProcessor in the same units as in BallGrabber, except measured to the center of the wall. This PID controller runs in a while loop until both front bump sensors are pressed. <br />
<br />
When both front bump sensors are pushed, the robot turns 180 degrees, as measured by the gyro. There is no controller in this loop; instead, a simple while loop keeps turning as long as the gyro reads less than the value corresponding to 180 degrees. It worked well despite the lack of a controller.<br />
<br />
After it has turned far enough, the robot backs up. It continues backing until it either times out (which it did most of the time) or both back bump sensors hit, which indicated perfect alignment. At this point the robot lowers its trapdoor and waits while the balls fall out. After this, the ball count in Data is set to zero and the main state changes to Stop, where the robot looks around for more balls.<br />
<br />
There are a couple of different timeout features in this state. First, if the robot times out before both of its front sensors are pushed, it assumes it is stuck and backs up and turns (the subroutine in Data). If the robot thinks it has arrived at the wall (i.e., both bump sensors pressed), it checks whether the amount of yellow visible is large enough to actually be the wall; otherwise it assumes it is stuck and backs up and turns, using the same subroutine in Data. <br />
<br />
If WallScorer times out in any portion after the two front sensors hit the wall, it simply continues on to the next step.<br />
<br />
It would have been slightly better if, when approaching the wall, the wheels on the side whose bump sensor hit first spun backwards slightly while the other side’s wheel went full force.<br />
<br />
'''Data''' <br />
<br />
The Data class contains all interfacing with the orcboard. It allows all other states to access the orcboard, with get, set, and update functions for all sensors and actuators. Data also contains global functions and variables that need to be known across all states. The most notable are the avoidBumpLeft() and avoidBumpRight() methods, which simply back up and turn 90 degrees. This turn is based off the gyro, but it didn’t need to be; it was done simply because we already had the capability, and because the actual amount turned changed considerably when the battery was low. This class also kept track of time: TimeCompete() was checked in every while loop. This allowed us to avoid threading; however, if it wasn’t checked in some loop and the robot happened to get caught in that loop, the robot wouldn’t stop. In hindsight, a thread that only checked the time and stopped the main thread when time was up would have been more efficient and reliable. We were concerned about the processing power of the eePC and whether we would notice a change if we introduced another thread, but I do not think this would have been a problem.<br />
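The watchdog-thread alternative suggested in the paragraph above could look like this. It is a sketch with invented names: a daemon thread sleeps for the match duration and then raises a flag that the control loops poll, so no loop can accidentally outlive the match.

```java
// Match-end watchdog: one background thread flips a volatile flag when
// time is up, instead of every control loop checking the clock itself.
// Class and method names are illustrative.
public class MatchTimer {
    private volatile boolean done = false;

    public boolean isDone() { return done; }

    public void start(long matchMillis) {
        Thread watchdog = new Thread(() -> {
            try { Thread.sleep(matchMillis); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            done = true;                       // control loops poll this flag
        });
        watchdog.setDaemon(true);              // never keeps the JVM alive
        watchdog.start();
    }
}
```

A control loop then runs `while (!timer.isDone()) { ... }`, which stops even if some inner loop forgot its own time check.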
<br />
'''ImageProcessor'''<br />
<br />
Our objective was to obtain the maximum amount of information from each picture, even if that meant that it would take longer to analyze them. For example, we wanted to make sure that, if there was a ball in the picture, then it would be recognized, no matter how small it was. Similarly, we aimed at never confusing a yellow goal with a yellow wall, although that meant having a slower image analysis on average.<br />
<br />
The first step was to convert the image to HSV values so as to simplify deciding the color of each pixel. Being in HSV made this relatively easy, as we only needed upper and lower bounds on hue, saturation, and value for each color we considered. We then used one-pass connected component analysis to find the various objects in the images. This basically consisted of finding all the clusters of pixels of the same color, starting at the bottom of the picture and making our way up. Going in this direction allowed us to apply blue-line filtering: the tops of the walls have a blue line, which enables us to tell whether objects are inside the bounds of the field. We therefore ignored any pixels above that blue line. We then had multiple methods to accept or reject the remaining clusters as wanted objects.<br />
<br />
For the balls, we first had to find a red or green cluster. To make sure it was a ball, and not simply an error, we could have simply required the cluster to be large. However, this would have made it impossible to see balls that were further away, a feature which was critical to our overall strategy. We therefore added a test which ensured that the cluster was roughly round by looking at the pixels on its perimeter.<br />
<br />
For the yellow walls, the main task was to distinguish them from goals. A goal has a black hole in the middle with yellow on the sides and above it, and a thin strip of white above the yellow. Therefore, we simply checked that the number of black pixels in the area delimited by the yellow was smaller than some threshold. However, in some images where the goal was at a large angle, the number of black pixels was very small, so we also added a test which checked whether there were white pixels on top of the yellow. With these two tests, we were able to distinguish the two perfectly.<br />
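The two goal-vs-wall tests just described combine into a single predicate. This sketch assumes the counts come from some prior connected-component pass; the one-tenth threshold and all names are our own illustrations.

```java
// A yellow region is judged a goal if it encloses enough black pixels
// (the hole) or has a white strip above it; otherwise it is a wall.
// Threshold is invented for illustration.
public class GoalClassifier {
    static boolean isGoal(int blackInside, int regionArea, boolean whiteAbove) {
        boolean enoughBlack = blackInside > regionArea / 10;
        return enoughBlack || whiteAbove;   // either cue marks a goal
    }
}
```

The second cue is what rescues the steep-angle case: the black hole nearly vanishes, but the white strip above the yellow is still visible.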
<br />
Ultimately, the image processor would calculate the angle needed to turn to get to the closest ball and wall in the picture.<br />
<br />
----<br />
<br />
'''Conclusion''' <br />
<br />
In conclusion, we did fairly well and placed third. We had a couple of problems with getting caught, which could have been improved upon. Our major failure mode was the orc board, which we managed to short five different times. One of those times was in competition, which prevented us from competing in the final round.</div>Krandershttps://maslab.mit.edu/2011/wiki/Team_Ten/Final_PaperTeam Ten/Final Paper2011-01-30T21:26:30Z
<hr />
<div>Members:<br />
<br />
Alex Teuffer<br />
Voitek Wojciech Musial<br />
Youyou Ma<br />
Arvin Shahbazi Moghaddam<br />
<br />
<font color="red">Future MASLAB participants: skip to the bottom for tips</font><br />
<br />
<b>Overall Strategy/Gameplay Goals</b><br />
<br />
The strategy of our team was to keep the robot as simple as possible. We believe that a simple design, with the fewest things that could go wrong, is the best. We based the design of our robot on these principles, but its overall shape was determined by our gameplay goals. <br><br />
Our goal for this competition was to score over the yellow walls and essentially ignore the possibility of scoring in the mouseholes. We decided to pursue this strategy because it let us concentrate on a single high-scoring opportunity. By choosing to build a robot adept at throwing balls over the opposing team's wall, we could keep the design relatively simple while still being able to score high. <br />
<br />
<b>General Preliminary Timeline - none of it actually followed</b><br />
<br />
Week 1 - Finish preliminary robot design and prototype with some simple camera analysis<br><br />
Week 2 - Perfect ball collection mechanism and mapping algorithms <br><br />
Week 3 - Make it better and more reliable without adding too much complexity to any system. <br><br />
Week 4 - Fail week. Leaves time to resort to old working code and have something functional.<br />
<br />
<b>Design</b><br />
<br />
Our robot consists of an underside guiding system, a middle floor, and a ball 'basket' on the top floor. <br />
<br />
The underside guiding system is essentially two aluminum plates placed at an angle with respect to each other, so that they funnel balls that pass through a one-way gate of lightweight cardboard into the screw lifting mechanism. This had to be planned with precision to make sure the two plates did not disturb the two wheel motors or the mice, which we used as substitutes for encoders. The gate itself was one-way and made solely of cardboard and aluminum wire; it was very simple, light, and sturdy, and could even capture balls against walls.<br />
<br />
Our robot had two-wheel drive and was further supported by two casters in the front. The motors were held parallel to the ground with zip ties, since the weight of the robot was enough to bend the wheel axes at an angle to the ground. <br />
<br />
The bottom floor, which held our battery, GPU, uorc board, and hard drive, and which also supported the motors, casters, front gate, underside guiding system, top floor, and screw mechanism, was made of simple pegboard. The pegboard was sturdy enough not to bend under the weight of all these parts and had the added bonus of built-in holes through which we could pass wires.<br />
<br />
The mouse encoders were attached to the pegboard with a spring suspension, which held the mice with epoxy glue. <br />
<br />
The screw mechanism was held above the ground by two aluminum supports attached to the back end of the pegboard. This caused us much trouble because we could not easily adjust the height of the mechanism, which was necessary for getting over bumps in the playing field. In the end, we raised the entire robot by wrapping double-sided tape around the wheels, thereby increasing their radius. <br />
<br />
The screw mechanism consisted of three polyester shafts, one of which held the screw. The two shafts not attached to the screw supported each ball as it was pushed up by the screw until it reached a spring wrapped around the middle shaft. The ball would be pushed through the spring by the screw and then use the spring essentially as a slide into the top-floor 'basket'.<br />
<br />
The top-floor basket had an acrylic base on which two aluminum walls funneled the balls into a servo-operated gate, which opened and closed on a hinge made at Edgerton. <br />
<br />
<b>Machining</b><br />
<br />
Our robot was mostly put together in the MASLAB 5th-floor lab. The Archimedes screw was cut out of a PVC pipe using a saber saw; this was done in the Edgerton student machine shop. The lathe there was used to slim down the shaft that held the screw so that it could fit into the gears that turned it to pick up balls. Beyond this, the most intensive piece of machining was the assembly of the hinge that held the second-floor servo gate open, which was also done in the Edgerton student shop.<br />
<br />
<b>Software & Strategy</b><br />
<br />
We initially intended to pursue an (overly) ambitious software effort: stereoscopic vision. We used two cameras mounted vertically, one on top of the other. With this choice, the alignment of features across the two cameras becomes straightforward: they have the same horizontal position. The video streams were captured in C using the OpenCV v4l2 driver. The following algorithms were run on the raw images to obtain a high-level description of the camera scene:<br>
- Gaussian blur (implemented as 4 separable passes)<br><br />
- RGB to HSL conversion<br><br />
- convolution with a Sobel kernel<br><br />
- hysteresis thresholding<br><br />
- separation of edges into wall and ball edges based on edge-pixel adjacency<br><br />
- aggregate image statistics (count of pixels falling within predefined hue/sat/lum regions of interest)<br><br />
- RANSAC ball fitting (run on ball edges)<br><br />
- line parametrization (run on blue-tape wall edges)<br><br />
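The ball-fitting step can be sketched as follows. This is an illustrative Java version (our actual implementation was in C/CUDA, and all names here are hypothetical): repeatedly sample three edge pixels, construct the circle through them, and keep the candidate that explains the most edge pixels.

```java
import java.util.List;
import java.util.Random;

class RansacCircle {
    /** Returns {cx, cy, r} of the best circle found, or null. */
    public static double[] fit(List<double[]> pts, int iters, double tol) {
        Random rng = new Random(42);          // fixed seed for repeatability
        double[] best = null;
        int bestInliers = 0;
        for (int i = 0; i < iters; i++) {
            double[] c = circleFrom3(pts.get(rng.nextInt(pts.size())),
                                     pts.get(rng.nextInt(pts.size())),
                                     pts.get(rng.nextInt(pts.size())));
            if (c == null) continue;          // degenerate (collinear/duplicate) sample
            int inliers = 0;
            for (double[] p : pts)            // count edge pixels lying near the candidate circle
                if (Math.abs(Math.hypot(p[0] - c[0], p[1] - c[1]) - c[2]) < tol) inliers++;
            if (inliers > bestInliers) { bestInliers = inliers; best = c; }
        }
        return best;
    }

    /** Circumscribed circle of three points, or null if they are (nearly) collinear. */
    static double[] circleFrom3(double[] a, double[] b, double[] c) {
        double d = 2 * (a[0] * (b[1] - c[1]) + b[0] * (c[1] - a[1]) + c[0] * (a[1] - b[1]));
        if (Math.abs(d) < 1e-9) return null;
        double a2 = a[0] * a[0] + a[1] * a[1];
        double b2 = b[0] * b[0] + b[1] * b[1];
        double c2 = c[0] * c[0] + c[1] * c[1];
        double ux = (a2 * (b[1] - c[1]) + b2 * (c[1] - a[1]) + c2 * (a[1] - b[1])) / d;
        double uy = (a2 * (c[0] - b[0]) + b2 * (a[0] - c[0]) + c2 * (b[0] - a[0])) / d;
        return new double[]{ux, uy, Math.hypot(a[0] - ux, a[1] - uy)};
    }
}
```

A few hundred iterations with a tolerance of a pixel or two is typically enough on a clean ball edge.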
<br />
For speed, the above algorithms were implemented in NVIDIA CUDA and run on a GeForce 9400M GPU. We managed to achieve 12 FPS overall (capture and processing from both cameras).<br />
<br />
Camera calibration proved to be the greatest challenge. The position and orientation of each camera is described by 6 parameters (3 spatial coordinates and 3 angles), so a total of 12 parameters must be accurately measured to reconstruct the absolute 3D position of a feature. Since we only cared about the positions of 3D features relative to the robot, 6 parameters sufficed to describe the position and orientation of one camera relative to the other. We attempted to take camera data of objects whose true 3D positions we measured by hand, and used that ground truth to fit the 6 parameters. This approach proved unsuccessful -- the fits did not converge. We then tried a more academic approach outlined here (http://www.peterhillman.org.uk/downloads/whitepapers/calibration.pdf), to no avail. We finally gave up on accurate camera readings and eyeballed the parameters...<br />
<br />
Due to inaccurate distance reconstruction, random noise, and artifacts of the line-fitting algorithm, the features (balls and walls) reconstructed from a pair of camera frames (top and bottom camera) were not reliable enough to be used for robot navigation. I attempted to collect and average features across consecutive capture frames. This worked moderately well provided the robot sat stationary -- the reconstruction error was distance- and angle-dependent, so any change in robot position or orientation, even if measured accurately with the optical mice, would introduce error into the matching of features across consecutive frames and would, generally, ruin our efforts. A possible solution would involve: <br><br />
- analytical modelling of the camera error due to imprecise calibration<br><br />
- brute-force error correction: measure the error for a grid of points in 3D and correct the positions of reconstructed features. <br />
<br />
Unfortunately, we ran out of time to successfully pursue the 3D vision approach.<br />
<br />
Having stubbornly tried to make the stereo vision work, we realized 4 days before impounding that we needed a different strategy. We then equipped the robot with bump sensors and made it bounce between walls, occasionally looking around for balls using the stereo vision code. We ran out of time to make the code more sophisticated and robust. <br />
<br />
<b>Odometry</b><br />
<br />
We initially attempted to build and use the proposed optical encoders. Much to our dissatisfaction, the circuit turned out to be unreliable and of very poor resolution. We then decided to use two optical mice mounted on a spring suspension to ensure constant contact with the floor. The two mice measured the robot's position and angular orientation very accurately (random error of +/- 0.02 rad on the angle and +/- 4 cm on position per meter of distance covered). The mice needed very accurate calibration, though, and would often get de-calibrated by minute changes in their relative position. Also, in order to read raw mouse data we mounted the mice with custom udev rules, which in turn required re-plugging the mice every time the computer booted. Forgetting this caveat caused our robot to go out of control during one of the final competition rounds. <br />
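The two-mouse pose update can be sketched roughly as below. This is an illustrative Java approximation, assuming the mice are mounted symmetrically left and right of center, a known baseline apart, with their y-axes pointing forward; our actual code and mounting details differed.

```java
class MouseOdometry {
    public double x, y, theta;     // robot pose in the world frame (m, m, rad)
    private final double baseline; // distance between the two mice (m) -- an assumed mounting

    public MouseOdometry(double baseline) { this.baseline = baseline; }

    /**
     * One update from per-frame mouse displacements (already scaled from
     * counts to metres). Small-motion approximation.
     */
    public void update(double dxL, double dyL, double dxR, double dyR) {
        double dTheta = (dyR - dyL) / baseline; // differential forward motion => rotation
        double fwd = (dyL + dyR) / 2;           // mean forward displacement
        double lat = (dxL + dxR) / 2;           // mean lateral displacement (slip)
        double mid = theta + dTheta / 2;        // integrate about the mid-step heading
        x += fwd * Math.cos(mid) - lat * Math.sin(mid);
        y += fwd * Math.sin(mid) + lat * Math.cos(mid);
        theta += dTheta;
    }
}
```

This also shows why calibration mattered so much: any error in the measured baseline scales directly into the heading estimate.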
<br />
<br />
<b>Final Run and thoughts:</b><br />
<br />
Despite our robot's inability to score, during the final contest it somehow managed to attempt a scoring run, even though it had no balls on board yet. It also worked quite well, despite our fears of what might happen. However, halfway through our last round, the batteries died and the computer turned off.<br />
<br />
So, thoughts for the future:<br><br />
- don't overcomplicate your strategy. I (the coder) prioritized solving the problem of 3D vision, neglecting the very important aspect of robot behaviour until the very last days. Bad, bad, bad idea. <br><br />
- if your solution involves a custom motherboard, 12 cells of car-engine batteries, 100% more cameras than allowed, and you're not using the parts that most of the teams use --- YOU'RE DOING SOMETHING WRONG, and you'll suffer hardware problems that no one will help you with. Either take it or die.<br><br />
- get backup batteries!!<br><br />
- have behavior code first!! Even if it's simple and you plan on doing something different -- have something working at all before you attempt a more sophisticated approach. The last thing you want is to be writing your robot behaviour code during the last week before the competition.<br><br />
- build a quick, sturdy robot in the first week so your coder has something to tinker with. (Try laser cutting; though we didn't do that, it seems quite efficient.)<br><br />
- focus on the main aspects of the robot and get it more or less working; only then go for the minute details!<br><br />
- try not to take another class / full-time job alongside MASLAB... your life will turn into a stream of misery. <br><br />
- never give up!<br></div>

Team Three/Final Paper (2011-01-30, last edited by Cloitre)
<hr />
<div><h3>Overall Strategy</h3><br />
<br><br />
<br />
After analyzing the scoring methods and looking into previous contests, we decided to implement a simple but high-risk strategy: our robot should explore the maze, find and capture balls, then score them over the yellow wall. This means that if for some reason the yellow wall is not found, we have a high probability of losing. As a result, scoring in the goals is considered a back-up plan, which means the mechanical design of the robot must be robust enough for both scoring methods. The robot should indiscriminately pick up balls of both colors and store them until the time of dispatch. <br />
<br />
<br><hr><br><br />
<h3>Mechanical Design and Sensors</h3><br />
<br><br />
<br />
Many interesting mechanical designs were discussed, including a catapult, an elevator lift, a fork lift, a four-bar linkage, and a spinning wheel. We wanted to make a robot that had not been made before in MASLAB, was simple to construct, and was fun to watch. Thus the idea of a waterwheel with a waterpark slide came into being. To further simplify the design, a rotating arm controlled by a servo replaced the waterwheel. <br />
<br />
The final design, sensor placement, and workflow of the robot are as follows:<br />
<br />
1) To accommodate all the necessary components, three horizontal layers are built. The bottom layer contains the battery, wheels and motors. The second layer is for the Eee PC. The top layer is used to mount the uOrc board and the slide. A circular front face connects all three layers and is used to capture/guide balls. <br />
<br />
2) The robot, with two-wheel drive in the middle, explores the contest area. Two caster wheels, one in the back and one in the front, provide additional balance. The caster wheels are of different heights to help the robot overcome bumps on the carpet.<br />
<br />
3) With a long-range IR sensor mounted on the front face of the robot and two short-range IR sensors mounted diagonally on the sides, the robot can perform functions such as wall following and getting out of a large room through a small door.<br />
<br />
4) A belt of bump sensors is mounted on the bottom layer to help the robot avoid walls. <br />
<br />
5) When the robot sees a ball with the camera mounted on its front face, it drives toward it. A break-beam sensor near the opening on the front face lets the robot know when a ball has entered its mouth. The arm is then triggered to scoop the ball up and dump it into the slide. The ball rolls down the slide until it comes to a stop at the exit/drawbridge. <br />
<br />
6) The camera then finds the yellow wall, and the robot drives toward it. Once both bump sensors on the front face are triggered, a servo lets the drawbridge down, allowing the balls to roll out under gravity.<br />
<br />
7) The bumper in the front is made into a mustache, and the exit/drawbridge into a monocle. Finally with the addition of a black top hat, Monsieur Robot is complete.<br />
<br />
<br><hr><br><br />
<h3>Building the robot</h3><br />
<br><br />
<br />
In order to achieve all the objectives we set in our first brainstorming session, we CADed the robot to make sure every component would fit in Monsieur Robot. The software we used was SolidWorks, which allows you to create parts and assemble them. More importantly, it can produce .dwg files that are compatible with a laser cutter. Laser-cutting acrylic sheets was then the obvious choice for its convenience.<br />
<br />
Pros and cons of the acrylic sheet technology:<br />
<br />
Pros:<br />
<br />
<ul><br />
<li> You can create complex shapes, the laser cutter can deal with it and provide a fair tolerance on the dimensions.<br />
<li> It is easy to drill and tap, even into the edge of a sheet. We used 4-40 screws in a 1/4-inch-thick sheet.<br />
<li> It's cheap. One sheet that's 36x24 inches costs around 30 dollars.<br />
</ul><br />
<br />
Cons:<br />
<br />
<ul><br />
<li> It is brittle. You need to be careful about the load you apply to it, particularly when the load is carried by a screw tapped into the edge of a sheet.<br />
<li> This technology makes it easy to create shapes in 2D, but not in 3D. To create the ramp for storing the balls, we had to assemble 10 parts.<br />
</ul><br />
<br />
<br />
<br><hr><br><br />
<h3>Software Design</h3><br />
<br><br />
<br />
The brains of Monsieur Robot were developed using Java in 4 weeks. After coming to overall robot design and strategy conclusions, it was time to get started writing the software. In the end, an intelligence was developed that, although not quite self-aware, still managed to maneuver itself around the field. <br />
<br />
From the beginning, we decided to make Monsieur a state machine. This seemed the easiest to program and the most efficient method for collecting and scoring balls. However, to gain an edge, we knew that transitions between the states had to be strategic, often exiting a state before its conclusion. Come the final competition, our robot had three distinct states:<br />
<br />
<ul><br />
<li>Exploring: Consists of two sub-states, StraightExplore and SpinExplore. In StraightExplore, the robot moves forward, attempting to keep its original angle while avoiding walls. In this way, it will go straight, but will also wall follow if it comes in contact with a wall. In SpinExplore, the robot turns 2*Math.PI, using its long-range IR sensors to find the direction with the most open distance. It combines this with knowledge of its original direction to choose a direction in which to continue exploring. These exploring states alternate until the robot finds something of interest. If at any point during these two states an object of interest is found, the state changes to the corresponding action. </li><br />
<li>CollectBall: If the robot is not full of balls and is not only looking for walls (last 20 seconds), then upon seeing a ball the robot will change into the CollectBall state. During this state the robot uses a dual PID system to move towards the ball (angle and distance control). At the point where the ball is too close to be seen, the robot moves forward blindly until the ball triggers its breakbeam sensor. Then the robot actuates its lift arm to store the ball in its ramp hopper. Upon completion the robot returns to exploring state.</li><br />
<li>ScoreWall: If the robot believes it has collected a ball and it sees a yellow wall, it enters the ScoreWall state. This state uses dual PID to move towards the center of the yellow wall as seen by the camera. Once the yellow wall has reached a certain height and width in the camera frame (i.e., it is close and wide enough), the robot charges forward. Using its 2 front bump sensors to align itself, the robot then opens its ramp for a hard-coded amount of time, allowing the balls to fall on the opponent's side. It then taunts the opponent for luck.</li><br />
</ul><br />
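The transitions described above might be organized roughly as in the sketch below. This is illustrative only -- the names are simplified and the conditions are approximations of our actual code, not the real API.

```java
class BehaviorFsm {
    public enum State { EXPLORING, COLLECT_BALL, SCORE_WALL }
    private State state = State.EXPLORING;

    /** One decision step; the inputs mirror the conditions described above. */
    public State step(boolean seesBall, boolean seesYellowWall, boolean hasBall,
                      boolean hopperFull, double secondsLeft) {
        boolean wallsOnly = secondsLeft < 20;   // last 20 seconds: only look for walls
        switch (state) {
            case EXPLORING:
                if (hasBall && seesYellowWall) state = State.SCORE_WALL;
                else if (seesBall && !hopperFull && !wallsOnly) state = State.COLLECT_BALL;
                break;
            case COLLECT_BALL:
                if (!seesBall) state = State.EXPLORING; // ball stored (or lost): explore again
                break;
            case SCORE_WALL:
                if (!seesYellowWall) state = State.EXPLORING; // scored (or lost the wall)
                break;
        }
        return state;
    }
}
```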
<br />
Of course, these states only represent high-level behavior. Behind the scenes, PIDController, VisionHandler, and Timer do all of the dirty work. <br />
<br />
<ul><br />
<li>PIDController runs in a separate thread in order to maintain smooth movement alongside behavior code and camera processing. It is activated by requesting a turn(angle) or a straightMove(distance). These can be combined to move in a curve. None of its methods are blocking, but programs can wait until it reaches its angle or distance thresholds by checking an isRunning() method. It uses the system time to calculate integral and derivative functions and implements optional low-level wall following using the IR sensors. It never directly interfaces with the camera -- the behavior code always passes camera coordinates to the PIDController. </li><br />
<li>VisionHandler runs un-threaded, capturing only on demand. It has a getObjects() method that returns all the objects (type, coordinate, and shape info) in a List. Color recognition was implemented with hard-coded HSV ranges; auto white balance and exposure were disabled for consistent color values. Objects were found using a recursive solid-color area function and then typed as wall-tops, balls, or yellow walls for the behavior code to use in decision-making. Typing uses shape proportions (height, width), shape area, and density (points/area). Shapes with sufficiently small areas, shapes above the blue wall line, and shapes within goals are filtered out.</li><br />
<li>Timer handles keeping track of the game time and killing the JVM when time is up, bringing the robot to a stop. The behavior code also uses Timer's getTimeRemaining() method to make strategic decisions.</li><br />
</ul><br />
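The time-based update at the heart of such a PIDController can be sketched as below (illustrative only; the real class also handled threading, wall following, and the isRunning() thresholds).

```java
class Pid {
    private final double kp, ki, kd;
    private double integral, prevError;
    private long prevTimeMs = -1;   // -1 until the first sample arrives

    public Pid(double kp, double ki, double kd) { this.kp = kp; this.ki = ki; this.kd = kd; }

    /** Classic PID update, using the system clock to compute dt as described above. */
    public double update(double error, long nowMs) {
        double out = kp * error;
        if (prevTimeMs >= 0) {
            double dt = (nowMs - prevTimeMs) / 1000.0;
            if (dt > 0) {
                integral += error * dt;                               // integral term
                out += ki * integral + kd * (error - prevError) / dt; // + derivative term
            }
        }
        prevError = error;
        prevTimeMs = nowMs;
        return out; // e.g. a differential wheel-speed command for turn(angle)
    }
}
```

turn(angle) and straightMove(distance) can then each feed their own error signal (angle error, distance error) into an instance of this loop.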
<br />
Many test methods were developed to observe individual actions, object detection, and PID control performance. Code for goal-scoring and barcode detection was partially developed but later abandoned to hone basic functionality. Java's audio package allowed us to taunt our opponent with clips from Monty Python and the Holy Grail, which was paramount in winning the audience's favor.<br />
<br />
<br />
<br />
<br><hr><br><br />
<h3>Suggestions to Future Teams</h3><br />
<br><br />
<br />
<br />
<ul><br />
<li>Mechanical: Design for bumps early on -- they can have drastic effects on your design.</li><br />
<li>Software & Electrical: Make sure fundamental features are adequate before moving on to more complex behaviors. For example, make sure the PID controller works very well before trying to use it to follow walls, and test all fundamental features extensively before building on them. Spend time making strong cables and organizing wires well -- it will save you time later on. Read your analog sensor outputs and write them down somewhere for later reference. Strategically place your camera and tell your Mech E's the required position -- some heights and angles are better than others. Test in all sorts of lighting conditions -- many things can change the lighting during the competition. Make use of Java's audio library, not only for taunting, but also for debugging purposes. It is often easier to understand than LEDs if the robot simply says: "exploring" or "collecting ball". </li><br />
</ul></div>

Team Two/Final Paper (2011-01-26, last edited by Lbarnes: Electric Issues)
<hr />
<div>[[Image: Putzputzbanner.png]]<br />
<br />
==Overview==<br />
[[File:front.jpg|thumb||right|Little putzputz won 1st place!]]<br />
[[File:finalshot.jpg|thumb||right|100px||CAD model finalized before IAP]]<br />
<br />
Putzputz is the result of 3 weeks of Asian parenting. We pushed hard for her to learn to explore her world and redesign it as she sees fit. Early on, she showed a clear aptitude for fetching balls, and we worked hard to teach her to face all her challenges head on. Aside from her occasional temper tantrums, little Putzputz has grown up so quickly and has made us very proud.<br />
<br />
In 26 days, as a team of 4 undergraduate engineering students, we designed, built, programmed, and relentlessly tested a fully autonomous robot capable of robustly finding balls and scoring them over walls. Our strategy was simple: go fast; score balls if you can; find and pick up balls if you can. Nothing better to do? Wander around until you see something of interest. Don't ever get stuck, don't ever jam. Random erratic behavior is better than being stuck. <br />
<br />
Mechanically, our robot had a circular footprint, which helped it maneuver, and a compact design that allowed for a flexible yet robust platform for the sensors and software. Every sensor we selected played an important role in the operation of our robot; nothing was extraneous. Software-wise, our robot was driven almost entirely by vision with layers and layers of behaviors and redundant checks to ensure she continued to run in any situation.<br />
<br />
The following details our mechanical, electrical, and software design choices, along with our testing framework, issues we came across, and our tips for future teams. We also want to give a huge thank you to the MASLAB staff for a wonderful adventure!<br />
<br />
<br />
__TOC__<br />
<br />
==Team Members==<br />
<br />
*Leighton Barnes - Course 18, 2013 - Focused on sensor design and electrical work. Instrumental in debugging robot in all disciplines.<br />
*Cathy Wu - Course 6-2, 2012 - Focused on major software components: vision, testing suite, multi-threading, ball collection and scoring behavior. Managed the team and made sure things got done. <br />
*Stanislav Nikolov - Course 6-2, 2011 - Focused on major software components: overall architecture, wall following, control, and stuck detection.<br />
*Dan Fourie - Course 2, 2012 - Focused on mechanical design. Got things done extremely quickly.<br />
<br />
==Mechanical Design==<br />
<br />
Our robot was designed for robustness and reliability. The robot serves as a reliable platform for the vision and control software systems. As such, it should be sturdy, constructed quickly, have extremely low mechanical failure rates, be able to withstand hours of testing, and be robust to positioning errors. The robot was designed with CAD to account for all components, to ensure optimum packing, and to facilitate fabrication with the laser cutter. Our team was fortunate enough to have 24 hour access to a laser cutter and waterjet, which made rapid assembly and adjustments possible. <br />
The robot's structural members were built primarily from 1/4" acrylic sheet. It utilizes a rubber-band roller powered by a DC motor to collect balls, and a 4-bar-linkage hopper, actuated by another DC motor, to get balls over the wall. DC gear motors drive no-slip wheels. The robot underwent brutal testing and survived severe battering valiantly.<br />
<br />
===Drive System===<br />
<br />
The high level design of the robot's drive system consists of three structural boxes secured together in a line. The boxes are incorporated within a 14" circle to ease navigation. The two outside boxes contain the direct-drive motors, which are mounted to aluminum plates for strength. The central box forms the majority of the rest of the robot's structure and primarily contains the hopper. The three boxes are fastened together with steel brackets (to leverage the powers of the laser cutter and to avoid excessive tapping) and locknuts (to ensure the final assembly did not disassemble).<br />
<br />
Toothed no-slip wheels were chosen to minimize slipping on the playing field carpet. This choice proved effective both in increasing the speed of the robot and in stalling the drive motors to provide current feedback for stuck detection. The wheels were not perfectly no-slip, however, and did not stall in all cases, even though the initial design had counted on stalling to obviate the need for bump sensors. The wheels were cut from 1/8" aluminum plate on an abrasive waterjet machine.<br />
<br />
Steel hubs were precision machined to provide a stiff, reliable coupling between the motor shafts and the wheels. They also allowed the wheels to be placed as close as possible to the motor in order to decrease bending torque on the gear boxes.<br />
<br />
===Electronics Mounting===<br />
<br />
Electronics were mounted to the robot with the goals of rigidity, interchangeability, and adjustability where necessary.<br />
<br />
The EeePC was completely disassembled in order to determine the best way to securely mount it to the robot. It was decided to remove the extraneous monitor and keyboard, but to retain the hard, white motherboard shell to protect the sensitive components. While other teams utilized tape or Velcro, our netbook is bolted to an acrylic plate and shock mounted (with foam padding) to an angled back plate. <br />
<br />
In addition, the Orc Board was secured to its own acrylic plate and provided with a protective cover to ward off balls possibly fired from the other side.<br />
<br />
The webcam was removed from its plastic housing and the PCB was potted in epoxy and attached to an acrylic backing plate. These adjustments saved an enormous amount of space and allowed the camera to be positioned in the ideal location on the robot. The camera angle was also adjustable which proved valuable in eliminating the need for blue line filtering.<br />
<br />
The bump sensor suite covering the front 160 degrees of the robot was an addition to the initial design. The need for immediate and precise digital feedback about the robot's surroundings became clear after initial testing showed that good obstacle avoidance using IR sensors alone was difficult to achieve. Each of the five bump sensors is made from a strip of spring steel and a small snap-action switch. The extended levers created by the strips provide a larger area of contact and also protect the switches themselves from damage. In addition to bump detection, the left and right bump sensors aid in aligning with a wall.<br />
<br />
A tiny limit switch is triggered at both the up and down limits of the hopper mechanism to signal the motor to stop.<br />
<br />
===Scoring Mechanism===<br />
<br />
Our scoring mechanism was designed to lift balls from low in the robot to high and well beyond the yellow wall as efficiently and as smoothly as possible. 4-bar synthesis was used to generate a linkage that would move the hopper from a tilted back low position to a tilted forward position over the wall. The leading edge of the hopper extended more than an inch above and three inches beyond the top edge of the wall. This large tolerance in scoring positioning proved invaluable in getting balls over the wall from less than ideal orientations. The all metal parts of the hopper provided durability and compact construction. <br />
<br />
Having arrived at this mechanism, and constrained by footprint and form limitations, the rest of the components fell in place around it. <br />
<br />
The tried and true rubber band roller was used for picking up balls. <br />
<br />
<br />
<!--<br />
<gallery><br />
File:Mech1_s.jpg|stage 1<br />
File:mech2_s.jpg|stage 2<br />
File:mech3_s.jpg|stage 3<br />
</gallery><br />
--><br />
<br />
<br />
[[File:Mech1_s.jpg|thumb|none|574px||stage 1]]<br />
[[File:mech2_s.jpg|thumb|none|574px||stage 2]]<br />
[[File:mech3_s.jpg|thumb|none|574px||stage 3]]<br />
<br />
==Electrical Design and Sensors==<br />
<br />
<br />
===Motor Controllers===<br />
<br />
Because our robot design required four motors (2 drive motors, 1 to pick up balls, and 1 to score them) and our Orc Board only features three H-bridges, we had to design an additional circuit to control the last motor. The motor that drives the front roller to pick up balls only had to spin in one direction, so we chose it as the one to be driven by this additional controller.<br />
<br />
Our first attempt at this additional controller was just a 40N10 power FET whose gate was driven by a digital out of the Orc Board (with a protection diode across the motor, of course). As we learned from this first attempt, the digital out of the Orc Board sits somewhere around 3.7V instead of the nominal 5V, which could barely overcome the 2-4V gate threshold voltage of the FET (or of any other power FET we had on hand). Instead of spending the time to build a gate driver to get around this problem, we tried an L298 H-bridge package instead. This worked with the logic-level voltage provided by the Orc Board, although we stuck to one-directional capability in favor of using the standard four protection diodes instead of one.<br />
<br />
===Batteries===<br />
Throughout the build period, Dan and Leighton continued to investigate different battery options and even constructed multiple kinds of battery packs. Input from previous teams' papers suggested that a high-voltage (18V or so) NiCd pack from a cordless power tool was the ideal battery: this type of pack is lighter and has a much higher power density than the standard lead-acid pack. In past years, the increased voltage and power density such a pack offers would have been a huge plus, but this year we were given adequately powerful drive motors even when driven at the standard 12V. We found that our NiCd packs, at 1.7Ah, ran down much too quickly, while the standard lead-acid pack, rated at 8.7Ah, could last through hours of testing on end. We also briefly tested a pack of four 3.3V A123 cells, which seemed like it could have been the perfect choice: rated at 2.2Ah, the lightest of them all, and able to dump as much power as we wanted on demand. But it was a pain to charge -- we had access to a charger, but not one we could take with us anywhere.<br />
<br />
===Sensor Choice===<br />
<br />
'''Bump Sensors'''<br />
<br />
In the end our bot had 5 bump sensors in an arc across its front. We had originally only planned for two in the front to help align while scoring over the wall, but we realized late in the game that bump sensors are an effective tool for dealing with any obstacle. They are free, and there is no reason any bot shouldn't be covered with them.<br />
<br />
'''Break-Beam Sensor'''<br />
<br />
We implemented a break-beam sensor just beneath the roller so that we could detect when we picked up a ball. The sensor was just an IR LED on one side of the bot and a phototransistor in series with a 1 Mohm resistor on the other side. We then measured the voltage across the 1 Mohm resistor with an analog-in port on the Orc Board and compared it to a threshold in software. If we had bothered to tune the resistor so that the signal read approximately 0 or 5V depending on whether the beam was broken, we could just as easily have used a digital-in port.<br />
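In software, the comparison reduces to a one-line threshold on the ADC reading, roughly as below (the threshold value here is illustrative; ours was tuned empirically).

```java
class BreakBeam {
    // ~2.5V on a 10-bit, 5V analog-in; an illustrative value -- tune for your sensor.
    static final int THRESHOLD = 512;

    /**
     * True when a ball interrupts the beam: the phototransistor stops conducting,
     * so the voltage measured across the series resistor drops.
     */
    public static boolean beamBroken(int adcCounts) {
        return adcCounts < THRESHOLD;
    }
}
```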
<br />
'''Encoders'''<br />
<br />
The encoders that MASLAB gives you are unreliable and low resolution. We deliberated for some time on how to replace them. Good, high-resolution optical encoders can easily cost $35 each, which we were unwilling to spend. We ended up using little break-beam packages as gear-tooth sensors on our wheels. While this theoretically gave us 120 ticks/revolution and the sensor responded quickly and accurately, there were a couple of problems. First, there was no quadrature encoding, so we were forced to assume that the wheels were turning the way we commanded them to. The biggest problem, though, was that with so many threads running, the software didn't sample the signal fast enough to catch every tick. In the end, we didn't really end up using our encoders.<br />
<br />
'''IR Range-Finders'''<br />
<br />
We used 3 long range IR sensors in an arc across the front and one short range IR sensor on each side. The idea was to detect obstacles from far away but still have accurate short range readings for wall-following. The short range sensors were much easier to deal with as there is no noticeable dead zone for short distances and out of range readings can be easily filtered out.<br />
<br />
==Software Design==<br />
<br />
===Overview===<br />
<br />
Our software architecture emphasized simplicity and modularity. For the operation of our robot, we used a simple state machine that was mainly driven by a focus on speed and vision. Within each state, we also performed stuck detection and also additional actions if bump sensors were triggered. <br />
<br />
We wrote classes that abstracted out each and every type of sensor we used and we forked a thread for each type to record and process readings. During a run, there are about 10 threads running. <br />
<br />
On top of abstracting out sensors, we also abstracted out everything else, including images, color statistics, and the on buttons, and had a function for just about everything. This is perhaps excessive, and in the end, we had over 9000 lines of code, but it also came in useful again and again. During our numerous testing sessions, we were able to easily fix most issues because all the functions were already available.<br />
<br />
In addition, instead of trying to predict all kinds of situations our robot could be in, we interspersed our code base with the use of randomness and heuristics. For example, if we don't know whether to turn left or right, we will sometimes randomly generate a direction. If we don't know how much we've turned since the last iteration through a loop, we will make a reasonable guess. <br />
<br />
===State Machine and Robot Behaviors===<br />
<br />
We used a simple state machine design that heavily relied on vision. By default, the robot spins in place scanning the surroundings for balls or scoring walls. Detecting respective objects allows the robot to transition into its ball fetching or scoring behaviors. A timeout into a wall following behavior allows us to roam into new regions to find more objects. All behaviors default to scanning for objects. <br />
<br />
Our behavior for obtaining a ball involves lining up to the ball, getting closer to the ball, and then charging it for a short duration. We charge to make up for the complete lack of information when we are too close to a ball for the camera to be useful. This has worked well for us, since it is fairly accurate and also captures balls quickly. <br />
<br />
Our behavior for scoring involves lining up to a scoring wall, moving towards it until the appropriate bump sensors trigger, extending the hopper to dump the balls, and retracting the hopper. We stop the roller while moving the hopper to prevent balls from getting stuck underneath it. The bump sensors sometimes took several tries to trigger properly, and we often had problems with the robot thinking it was no longer at a scoring wall because the camera was so close that it could only see the blue tape on the wall.<br />
<br />
===Time and Ball Count===<br />
<br />
We leveraged some time and ball count information to help robot performance. In the first 30 seconds of a round, our robot does not attempt to score, so that it can collect the easy balls. When the hopper is full of balls, the robot will stop looking for balls and focus instead on scoring. In addition, each ball that the robot obtains allows it to wall follow for more time. The idea is that with fewer balls on the field, the robot should be given more time to explore in order to increase its likelihood of finding new things.<br />
<br />
===Vision===<br />
<br />
At a high level, we forked a thread for the camera that continuously takes an image, processes it, saves statistics about various parts of the image, occasionally publishes the image to BotClient, and repeats. We worked with images in the HSV color space and focused on speed instead of accuracy or detail. That is, we processed as many images as possible and did as little processing on each image as we could. The processing steps were downsampling 3x, converting from RGB to HSV, and generating statistics on the various colors in the image. Although we implemented many of the fancier image processing techniques (e.g. smoothing, edge detection, connected component labeling), we decided that the higher quality information was not worth slowing down our image processing. Instead, we focused our vision efforts on preprocessing, multi-threading, and various other performance optimizations. In the end, our camera thread processed images at between 14 and 31 frames per second, depending on how much color was in an image. (Disclaimer: The staff claims that this rate may not be accurate.)<br />
<br />
To avoid converting each image from RGB to HSV color space using the slow conversion algorithm provided by the MASLAB API, we allocate a 256x256x256 array at the start of each run that maps every RGB combination to its HSV value. Each image is then converted to HSV format using this lookup table, which cuts the image conversion time roughly in half. Building the array takes less than 4 seconds, since it is read in from a serialized form on disk. <br />
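<br />
A minimal sketch of the lookup-table idea, using java.awt.Color.RGBtoHSB as a stand-in for the MASLAB API's converter (our table was built once and serialized to disk; this version just computes it directly):<br />
<br />
```java
import java.awt.Color;

// Sketch of the RGB->HSV lookup table. One int per RGB triple packs H, S, V
// as bytes, so the full table is 256^3 ints (~64 MB).
public class HsvLut {
    private final int[] table = new int[256 * 256 * 256];

    public HsvLut() {
        float[] hsv = new float[3];
        for (int r = 0; r < 256; r++)
            for (int g = 0; g < 256; g++)
                for (int b = 0; b < 256; b++) {
                    Color.RGBtoHSB(r, g, b, hsv);
                    int h = (int) (hsv[0] * 255);
                    int s = (int) (hsv[1] * 255);
                    int v = (int) (hsv[2] * 255);
                    table[(r << 16) | (g << 8) | b] = (h << 16) | (s << 8) | v;
                }
    }

    // O(1) per-pixel lookup instead of recomputing the conversion.
    public int hsv(int rgb) {
        return table[rgb & 0xFFFFFF];
    }
}
```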
<br />
Mapping from the HSV color space to a color happens in a separate stage with the use of different HSV thresholds for each color. The two sets of color mappings are separate since the thresholds for each color could be different from day to day. The colors we handled were red, green, yellow, and black. Due to our camera placement and angle, we avoided the need to handle blue.<br />
<br />
To determine the thresholds for each individual color, we wrote a user-friendly color calibration utility that we used to adjust to different lighting situations. We place the robot in front of something of the color (e.g. yellow wall) we want to calibrate, start the utility, select the color (e.g. yellow), wait a few seconds, and check BotClient to see if we like the new calibration. The idea is very simple. After studying some images, we found that hue is resilient to lighting changes; it is saturation and value that change. Therefore, we pre-determined the hue values for each color separately based on a sample of images. The utility then takes an image and does two passes through it. First, it collects all pixels within the hue thresholds. In the second pass, it utilizes connected component labeling and generates statistics based on the largest component of the appropriate color to determine reasonable thresholds for saturation and value. Finally, the utility uses the new thresholds to process additional images so that we can move the robot around and evaluate the calibration. The utility did not handle black, but the thresholds for black were fairly straightforward.<br />
<br />
To reduce the number of pixels to process per image, we downsampled 3x. With fewer pixels to work with and less resilience to noise, downsampling really pushed our color calibration utility to its limits. 3x is probably the limit for our image processing code; if we did more filtering, more downsampling could be feasible. <br />
<br />
Yellow occurs in two places on the field: on scoring walls and along scoring goals. Both use the same yellow, but scoring walls are favorable to us while goals are to be avoided. Our solution is very simple. We observed that the inside of the goal itself appears black, but that there is usually very little black along scoring walls. Thus, we will only approach yellow that does not also have black. One complication is that when the staff re-introduced bar codes into the field, the black on a bar code sometimes made a scoring wall look like a goal to the robot. The upsides of our solution are its simplicity and its ability to prevent our robot from approaching goals from far away thinking that they are scoring walls. The downside is that goal detection sometimes has false positives, as mentioned before. Using a ratio of black to yellow or using connected component labeling could make goal detection more robust, but we decided to sacrifice accuracy for less processing.<br />
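<br />
The yellow-but-not-black test reduces to a tiny predicate; the thresholds below are placeholders, not our calibrated values:<br />
<br />
```java
// Sketch of the scoring-wall vs. goal test: approach yellow only when the
// same image region shows essentially no black. Thresholds are assumed.
public class GoalFilter {
    // yellowFrac/blackFrac: fractions of image pixels classified as each color.
    public static boolean isScoringWall(double yellowFrac, double blackFrac) {
        double minYellow = 0.05;  // enough yellow to be worth approaching
        double maxBlack  = 0.01;  // goals show black inside the opening
        return yellowFrac > minYellow && blackFrac < maxBlack;
    }
}
```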
<br />
To reduce unnecessary processing, we publish an image to BotClient only once per 1.5 seconds. This has several benefits. First, the overhead of publishing to BotClient is decreased. Additionally, we draw over images so that the audience (or the engineer) can tell what the robot sees, but this is actually pretty slow. Having to manipulate images only occasionally cut the average per-image processing time by 72 to 109 msec.<br />
<br />
===Control===<br />
<br />
We used PID position control for aligning with balls and P position control for wall following. We also used open-loop velocity control and abstracted out directly setting PWMs in two ways. The first abstraction was a drive method that takes a particular direction. The second abstraction was a setVelocity method that takes a forward velocity in m/s and a rotational velocity in rad/s. For the second abstraction, we came up with an approximate piece-wise linear model relating wheel velocity and PWM so that we could deal with velocities in a more user-friendly way.<br />
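<br />
A hedged sketch of the setVelocity abstraction, with made-up calibration points standing in for our measured piecewise-linear model:<br />
<br />
```java
// Sketch of the velocity abstraction: a piecewise-linear map from wheel
// speed (m/s) to PWM. The breakpoints and wheel base below are illustrative,
// not our measured values.
public class Drive {
    // (speed m/s, pwm) calibration points, sorted by speed.
    private static final double[][] CAL = { {0.0, 0.0}, {0.1, 0.25}, {0.3, 0.5}, {0.6, 1.0} };
    static final double WHEEL_BASE = 0.25;  // assumed track width in meters

    // Interpolate a PWM for a signed wheel speed.
    static double pwmFor(double speed) {
        double s = Math.abs(speed), sign = Math.signum(speed);
        for (int i = 1; i < CAL.length; i++) {
            if (s <= CAL[i][0]) {
                double t = (s - CAL[i - 1][0]) / (CAL[i][0] - CAL[i - 1][0]);
                return sign * (CAL[i - 1][1] + t * (CAL[i][1] - CAL[i - 1][1]));
            }
        }
        return sign * 1.0;  // saturate
    }

    // Forward velocity v (m/s) and rotational velocity w (rad/s) -> wheel PWMs.
    public static double[] setVelocity(double v, double w) {
        double left = v - w * WHEEL_BASE / 2, right = v + w * WHEEL_BASE / 2;
        return new double[] { pwmFor(left), pwmFor(right) };
    }
}
```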
<br />
We considered doing closed-loop velocity control using wheel encoders, but we found that open loop control was good enough to carry out our behaviors, and that it was tricky to estimate tick rates from somewhat noisy encoders. We experimented instead with scaling the PWMs supplied to each wheel to get the robot to drive slightly straighter, since it would veer off to one side.<br />
<br />
===Wall following===<br />
<br />
====Overview====<br />
We used a proportional controller to stay at a fixed distance from, and roughly parallel to, the wall. Each side of the robot had an IR sensor perpendicular to the wall and an IR sensor at roughly 45 degrees to the wall. If one of the side sensors is in range of a wall, we start wall following. This allowed us to start following walls from far away as well as to stay farther from the wall when wall following, which gave us more opportunities to see balls and walls. We set a desired distance for each of the two IR sensors. The robot moves forward at a constant velocity, and each in-range IR comes up with a desired rotational velocity by multiplying a gain with the difference between actual and desired distances. The desired rotational velocities are then averaged, and the average is commanded to the motors. We did not explicitly calculate the angle to the wall and try to remain parallel, but the distance control implicitly took care of that.<br />
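<br />
The controller can be sketched as follows (the gain and setpoints are illustrative, and out-of-range IRs are represented as NaN):<br />
<br />
```java
// Sketch of the wall-following proportional controller: each in-range IR
// proposes a rotational velocity, and we command the average. Constants are
// assumed values, not our tuned ones.
public class WallFollower {
    static final double DESIRED_PERP = 0.20;  // m, perpendicular IR setpoint
    static final double DESIRED_DIAG = 0.28;  // m, 45-degree IR setpoint
    static final double KP = 4.0;             // rad/s per meter of error

    // Returns the commanded rotational velocity (rad/s). NaN readings mark
    // out-of-range IRs and are skipped; sign flips for wall-on-left vs. right.
    public static double omega(double perpDist, double diagDist, int sign) {
        double sum = 0;
        int n = 0;
        if (!Double.isNaN(perpDist)) { sum += KP * (perpDist - DESIRED_PERP); n++; }
        if (!Double.isNaN(diagDist)) { sum += KP * (diagDist - DESIRED_DIAG); n++; }
        return n == 0 ? 0 : sign * (sum / n);
    }
}
```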
<br />
====IR Sensor Calibration====<br />
We found that the raw IR readings did not correspond to the correct distances. We manually calibrated them and came up with linear models for the short-range and long-range IRs, as well as the applicable ranges where the models are valid. The resulting transformed readings were roughly accurate distances in meters, which was good enough for us to work directly with distances rather than try to guess the corresponding raw IR readings.<br />
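<br />
The calibration reduces to a linear model with a validity window; the coefficients below are placeholders rather than our fitted values:<br />
<br />
```java
// Sketch of the IR calibration: meters = a * raw + b within a valid raw
// window; outside it we report NaN ("garbage reading"). Placeholder numbers.
public class IrModel {
    final double a, b;            // linear fit coefficients
    final double rawMin, rawMax;  // applicable raw-reading range

    IrModel(double a, double b, double rawMin, double rawMax) {
        this.a = a; this.b = b; this.rawMin = rawMin; this.rawMax = rawMax;
    }

    // Transform a raw reading into an approximate distance in meters.
    double meters(double raw) {
        if (raw < rawMin || raw > rawMax) return Double.NaN;  // out of range
        return a * raw + b;
    }
}
```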
<br />
====IR Filtering====<br />
IR sensors have limited effective ranges. We found that short-range IRs start giving garbage readings above roughly 0.25 m, and long-range IRs start giving garbage readings below roughly 0.17 m. Garbage readings for the short-range IR were always either too low for the robot to ever experience them or higher than 0.25 m, so we could safely trust any reading between 0.1 m (the minimum short-range reading we could get with the way the short-range IRs were mounted) and 0.25 m.<br />
<br />
Unlike the short-range IR, whose sets of garbage readings and good readings are effectively disjoint, long-range IRs are a bit more tricky. If a long-range IR is less than about 0.17 m from an obstacle, it will start producing readings as high as 0.60 m. Thus it is difficult to tell whether we are too close to an obstacle or actually at 0.60 m. We considered long-range IR readings above 0.45 m out of range (and in the case of wall following, 0.75 m, since we would lose walls too often). Thus, when we were somewhat far from the wall but still capable of wall following, we would get garbage readings from the short-range IR, but our long-range IR would be in range and we would get closer to the wall. If we got too close, we would start getting garbage readings on the long-range IR sensor and good readings on the short-range sensor, which would push us away from the wall.<br />
<br />
===Stuck detection===<br />
<br />
If your robot gets stuck and doesn't do anything about it, you're in trouble. If you have implemented timeouts, eventually your robot will switch states and possibly get unstuck. But what if you are really stuck, and your timeout behavior is inadequate at getting you unstuck? Even if it is adequate, waiting until the timeout to do something wastes precious time when you only have three minutes for a run. We wanted to very quickly detect being stuck (e.g. in a second or less) and to do something drastic to get unstuck, while not being too sensitive and generating false positives. This was a challenge. This section provides an overview of our final method, as well as some details on how we got there.<br />
<br />
==== Motor Current ====<br />
<br />
If the wheels are stalled while being commanded to move, then the current through the motors increases. Based on this principle, it is possible to detect if the robot is stuck. We briefly tried using single sensor readings combined with various threshold schemes (constant, and a step function depending on PWM). However, motor current is noisy and has transient peaks (Figure 1) when the PWMs change abruptly, which happens all the time in a typical run. This resulted in many false positives. Ultimately, we filtered motor current in three ways to detect being stuck:<br />
<br />
*Long sliding window average<br />
*Short sliding window average<br />
*Consecutive timesteps above threshold<br />
<br />
'''Figure 1:''' Motor current for a robot driving forward at PWMs of 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0 for 1500 ms each. Note the transient spikes.<br />
<br />
[[Image:Rampup.gif]]<br />
<br />
We used the long sliding window to filter out high frequency noise (Figure 3). We then required the current filtered with the long window to be above a threshold for a minimum number of consecutive timesteps in order to ignore brief peaks and look for sustained peaks, like the one in Figure 2. If those conditions were met, we would check the short window average to see if it was still above threshold. If it is, then we determine that we are stuck. If it isn't, we decide that we've very recently managed to get ourselves unstuck, but the long window still thinks we're stuck due to its longer memory. If we just used the short window, we would be far too sensitive. If we just used the long window, we might have already gotten unstuck by the time we decide that we are. By using the long window statistics as a prerequisite to checking short window statistics, we are more robust to noise on the one hand, and avoid sluggish memory on the other. This way, we are able to decide quickly if we are stuck, and avoid false alarms.<br />
<br />
'''Figure 2:''' Motor current for a robot that drives forward for a bit and then gets stalled.<br />
<br />
[[Image:Stuck.gif]]<br />
<br />
'''Figure 3:''' Motor current (filtered and unfiltered) for a robot that drives forward for a bit and then gets stalled.<br />
<br />
[[Image:Stuck-filtered.gif]]<br />
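<br />
A compact sketch of the three filters working together (window sizes, persistence count, and current threshold are illustrative, not our tuned values):<br />
<br />
```java
// Sketch of the three-filter stuck detector: long-window average, persistence
// above threshold, then a short-window confirmation. Constants are assumed.
public class StuckDetector {
    static final int LONG_N = 50, SHORT_N = 10, MIN_CONSEC = 30;
    static final double THRESHOLD = 1.5;  // amps, assumed

    private final double[] buf = new double[LONG_N];
    private int count = 0;        // total samples seen
    private int consecutive = 0;  // timesteps the long average stayed above threshold

    // Feed one motor-current sample; returns true once we decide we're stuck.
    public boolean update(double current) {
        buf[count % LONG_N] = current;
        count++;
        if (count < LONG_N) return false;  // wait for a full long window

        double longAvg = avg(LONG_N), shortAvg = avg(SHORT_N);
        consecutive = (longAvg > THRESHOLD) ? consecutive + 1 : 0;

        // Long window + persistence rejects transient spikes; the short window
        // confirms we haven't just gotten ourselves unstuck.
        return consecutive >= MIN_CONSEC && shortAvg > THRESHOLD;
    }

    private double avg(int n) {  // mean of the n most recent samples
        double s = 0;
        for (int i = 1; i <= n; i++) s += buf[(count - i) % LONG_N];
        return s / n;
    }
}
```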
<br />
====Encoders====<br />
We considered using encoder data to see if we are stuck. We thought this might come in handy if we are stuck at a low velocity and our motor current is not high enough to trigger stuck detection. Our encoder data turned out to be noisy and tricky to deal with, surprisingly because of the software and not the hardware. It turned out that the thread gathering encoder data was not being visited often enough to adequately sample ticks. We ended up not using encoders and instead relying on timeouts for handling situations where we are stuck at low velocities and the motor current is not high enough to indicate being stuck.<br />
<br />
==Testing==<br />
===Testing suite===<br />
<br />
We wrote a light testing suite consisting of 29 classes that each tested some functionality of the robot (various sensors, color calibration, raising and lowering the hopper). These tests made it very easy to check that all parts of our robot were still actually working, and they served as wonderful regression tests whenever we made an electrical or mechanical change to the robot. We ran a number of these tests before every competition, and they simplified debugging tremendously. They were great tools for the members of our team who did not write code for the robot. Additionally, the tests served as templates for how to use the various classes that were written, which was useful for the other developers.<br />
<br />
===LED debugging===<br />
<br />
We attached LEDs to our robot, one for each detected color: red, green, and yellow. If the robot was going for a red ball, the red LED would light up. If the robot was going for a yellow wall, the yellow LED would light up. This was a great way to effortlessly see whether our robot was operating in the correct state and whether our color calibration was off -- instead of trying to read screen output and watch the robot, we could just stalk our robot during test runs.<br />
<br />
In addition, at the start of a run, the yellow LED indicates that the robot is ready to be started. The robot also does a victory light dance every time she scores.<br />
<br />
===Logging===<br />
<br />
We found it useful to log a number of things, especially when debugging stuck detection. We logged motor current and encoder tick values and statistics and plotted them after each run. This was instrumental in understanding how the data looks and using it to detect when we are stuck.<br />
<br />
Towards the beginning, we used a utility to log images taken from the webcam for color training purposes. They were instrumental in determining the hue thresholds for the various colors that we used.<br />
<br />
===Live Parameter Loading===<br />
Our testing was made easier by having a config file from which parameters can be loaded in real time while the robot is running. This made it very quick and easy to tweak gains, thresholds, and other parameters. We could come up with the right gains and thresholds for various behaviors in a single run.<br />
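<br />
A minimal sketch of such a live-reloading config (our actual utility differed in details; the key=value format here is an assumption):<br />
<br />
```java
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

// Sketch of live parameter loading: re-read a key=value config file whenever
// its modification time changes, so gains can be tweaked mid-run.
public class LiveConfig {
    private final File file;
    private long lastModified = 0;
    private Properties props = new Properties();

    public LiveConfig(String path) {
        this.file = new File(path);
    }

    // Call each control-loop iteration; cheap unless the file changed.
    public synchronized void refresh() {
        long mtime = file.lastModified();
        if (mtime == lastModified) return;
        Properties fresh = new Properties();
        try (FileInputStream in = new FileInputStream(file)) {
            fresh.load(in);
            props = fresh;
            lastModified = mtime;
        } catch (IOException e) {
            // keep the old parameters if the file is mid-write or missing
        }
    }

    public double getDouble(String key, double fallback) {
        String v = props.getProperty(key);
        return v == null ? fallback : Double.parseDouble(v.trim());
    }
}
```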
<br />
===Mechanical Issues===<br />
<br />
The most serious mechanical issue that arose in testing concerned balls jamming as the hopper rose to score. This jamming was unacceptable because it often prevented reliable scoring for the rest of the run. The situation could have been hacked around by limiting the number of balls allowed in the hopper, but restricting the performance of the robot was counterproductive, and a more complete solution was required. We found that adjusting the position of the roller motor by adding another gear to the train, as well as shifting the camera mount forward, eliminated jamming, but dropped balls below the hopper, jamming it again on the way down. A flap, coupled to the upward movement of the hopper, was added to restrict the movement of dropped balls to the inside edge of the roller, where they could be picked up again when the hopper descended. This fix increased our ball capacity to eight. <br />
<br />
Along with this fix, the diameter of the roller was increased by a quarter inch, further improving the robot's ball collection capabilities.<br />
<br />
As we were unfortunately not able to provide our EeePC with a static IP address, it was necessary to repeatedly access the PC directly in order to display the dynamic IP address. As our monitor was removed, this had to be accomplished via an external monitor. In its initial position, the PC was mounted too low for easy access to the VGA port. To fix the problem, a new back plate was cut, which raised the PC position while concurrently freeing up space for the heavy SLA battery to move forward.<br />
<br />
The steel pins used to transmit torque on all of our shafts repeatedly fell out. This was a necessary evil because they had to remain temporary in order to facilitate changes and part replacements. They were fully press fit in and epoxied for the final competition.<br />
<br />
One of our drive motors failed on the day of seeding. This was quickly replaced and the robot continued to function properly. Another motor was immediately acquired and installed to forestall the possibility of the other drive motor failing during competition.<br />
<br />
At all times, possible mechanical failure modes were examined and countermeasures were developed to cancel their effects.<br />
<br />
===Electric Issues===<br />
We had several electrical connections break while rewiring because the single-conductor wire from the lab is unnecessarily stiff and torques solder joints. Find some good stranded wire from a different source. This will allow you to route wires compactly without risk of breaking connections.<br />
<br />
The Orc Board can only sample at 400 Hz for analog in and 1 kHz for the fast digital in. Keep this in mind while attempting to implement high-resolution encoders or any other sensor that responds on the millisecond time scale.<br />
<br />
We needed a 5V source to power our fourth motor. PWMing 12V off of the battery could have been risky for the 3V motor (we didn't have a data sheet). It turns out the 5V rail that goes to the I/O is directly from the buck converter and can source a couple amps. This can be used to power lower voltage motors.<br />
<br />
===Software Issues===<br />
<br />
====Multi-threading====<br />
<br />
We ran into two issues. First, we would sometimes read unfinished (mid-computation) statistics from the vision thread. Second, after we fixed that, our PID controller would always overshoot with the data from the vision thread. Debugging multi-threading is tricky, and Cathy spent a good few days tracking down concurrent programming issues. The problem arose from the fact that the vision thread is slow relative to the main state machine thread. It is difficult to generate nearly continuous commands to the robot from discontinuous and discrete vision data. <br />
<br />
We used a few techniques to combat these issues. First, always store and compute vision statistics in different variables. That is, do not overwrite the area variable until a new area has been computed completely; otherwise, the state machine thread that pings the vision thread for this data will almost always read something in mid-computation (read: wrong). The trade-off is that the data that is read will almost always be old, but never by more than 70 msec. Second, to combat the fact that the vision thread appears very discrete (read: slow) to the state machine thread when generating output from its PID controllers, we realized that accumulating error (the I term) and using the same derivative term for 70 msec (the D term) caused our robot to overshoot a lot. Thus, we only updated the I term when pinging the vision thread gave us a new statistic. Additionally, we figured that the derivative term should decrease over time if our robot was doing the right thing, so we applied exponential back-off and decayed the D term on each iteration through the state machine until a new statistic was generated. <br />
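<br />
Putting the two fixes together, a sketch of the consumer side (gains, decay factor, and class names are made up for illustration): statistics are swapped in atomically only when complete, the I term accumulates only on fresh data, and the D term decays between vision updates.<br />
<br />
```java
// Sketch of double-buffered vision statistics feeding a PID loop that runs
// much faster than the vision thread. Gains and names are illustrative.
public class VisionPid {
    // The vision thread publishes only finished snapshots, never partial ones.
    public static class Stats {
        final double error;
        final long seq;  // increments with each completed vision frame
        Stats(double error, long seq) { this.error = error; this.seq = seq; }
    }

    private volatile Stats latest = new Stats(0, 0);

    public void publish(double error, long seq) {
        latest = new Stats(error, seq);  // atomic reference swap
    }

    static final double KP = 1.0, KI = 0.1, KD = 0.5, DECAY = 0.7;  // assumed
    private long lastSeq = 0;
    private double integral = 0, derivative = 0, lastError = 0;

    // Called every state-machine iteration (much faster than vision updates).
    public double command() {
        Stats s = latest;
        if (s.seq != lastSeq) {       // fresh vision data
            integral += s.error;      // accumulate I only on new statistics
            derivative = s.error - lastError;
            lastError = s.error;
            lastSeq = s.seq;
        } else {
            derivative *= DECAY;      // decay D while data is stale
        }
        return KP * s.error + KI * integral + KD * derivative;
    }
}
```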
<br />
====Ball capacity and jamming====<br />
<br />
Ball capacity was the last problem we dealt with as a team. On the software side, we dealt with jamming by capping the ball capacity of the robot at 8 balls. At this point, the robot will stop the roller and look exclusively for scoring walls. The hopper itself had a capacity of 5 or 6, so often, a few extra balls will stick against the roller while the hopper goes up and down to score. We also coded the break beam sensor that kept track of balls entering the hopper to accumulate on the off transition (when the ball stops breaking the beam, as opposed to when it starts breaking the beam). This was not entirely reliable, but enabled our robot to sometimes score twice in a row because, immediately after scoring, the extra balls would be pushed into the hopper. <br />
<br />
====Wireless====<br />
<br />
Not exactly a software issue, but the wireless situation in the 6.01 lab was terrible. We found it impossible to test reliably there (apparently even more so than the other teams), so we opted to test on our hall instead. The downside to this was that we did not have the real field pieces to work with during most of our testing time. The upside is that, since we didn't know what we would be dealing with on the actual field, our code was pretty robust in the end. We patched this issue by setting up test fields in 26-100 for a day before the seeding tournament and by testing rigorously during the mock competitions. <br />
<br />
====Wall Following Issues====<br />
<br />
Our biggest problems with wall following were caused by leaky abstractions. Leaky abstractions are abstractions that make assumptions about a situation and ignore unnecessary details, only to fail when the assumptions are incorrect and the details actually matter. <br />
<br />
We had a function that determined whether a direction was free or blocked, and used two sets of thresholds, one for short range and one for long range IRs. We used that to build other functions, such as whether we are following a wall, and on which side. Unilaterally applying the same notions of "free" and "blocked", with the same exact thresholds, for all behaviors was a mistake and led to headaches. <br />
<br />
For example, to determine if we were following a wall, we checked to see if either one of each pair of side IRs was "blocked". If we got too far away from the wall, such that both IRs on a side were "free", then we would detect that we had lost the wall and try to turn in an arc toward it in order to go around what we perceived as a corner. However, the thresholds for determining "blocked" and "free" were set with obstacle avoidance in mind, not wall following, and were relatively low. We would often lose the wall not because the wall ended but because we got too far away. This would trigger the corner-rounding behavior, which, since we were far from the wall, would make the robot keep driving in a circle until it timed out. It took surprisingly long to find this since we didn't question these low level abstractions.<br />
<br />
==Performance==<br />
<br />
* Day 8: '''First place''' in the first mock competition with 4 points. Runners-up had 1 point. Our robot spins around, aligns to balls, and charges at them. <br />
* Day 12: '''First place''' in the second mock competition with 21 points. Runners-up had 6 points. Our robot can now wall follow and score over walls.<br />
* Day 17: '''Second place''' in the third mock competition with 46 points. We lost to Team 3, who had 49 points. Our robot now sees better, moves around better, and gets unstuck sometimes.<br />
* Day 23: '''Seeded first''' in the seeding tournament with 75 points. Runners-up had 23 points. Our robot does not jam, almost never gets stuck, and does some smart things. We essentially froze our code at this point.<br />
* Day 26: '''Won''' the semifinals in the main tournament with 138 vs 54 points. '''Won''' the finals in the main tournament with 106 vs 56 points.<br />
<br />
==Suggestions==<br />
*Form a team early and commit to doing MASLAB for all of IAP. We formed our team before the start of the school year. <br />
<br />
*Have a well-balanced team. It's important to cover all the bases: software, mechanical, and electrical. Our 2 software + 1 mechanical + 1 electrical combination balanced us very well.<br />
<br />
*Work really really hard and stay motivated. We pulled endless all-nighters and never gave up. We continued to pester the staff mailing list with questions and even took a day to set up some legit practice fields in 26-100 and test before the seeding tournament.<br />
<br />
*Start before IAP and aim to have most of everything done in the first 2 weeks of IAP. Because we did most of the design before IAP, we managed to have a functional robot (not the pegbot) by the first mock competition, which helped us out greatly. We also were able to spend the last week and a half making fixes for various edge cases and had time to just polish up things. <br />
<br />
*Don't focus too much on the pegbot and the checkpoints. We had at most 1 or 2 people deal with each of the checkpoints, so the rest of the team could focus on machining the actual robot or designing the software framework. Our pegbot was scrapped in less than a week.<br />
<br />
*Test often and relentlessly. You'll find something wrong with your robot every time.<br />
<br />
*Redundancy is the name of the game. It is difficult to anticipate all the possible ways your robot can mess up. We had triple (or more) layers of failsafe behavior for some situations. For example, if we hit an obstacle we would rely on bump sensors, motor current peaks, and timing out to detect if we're stuck and escape (we had planned to have a fourth layer using encoder data, but it was tricky to get the right thresholds and didn't pan out in the time we had).<br />
<br />
*Beware of the leaky abstraction. Abstractions come about by making assumptions so that you can ignore unnecessary details. Having never been a robot, it is truly difficult to make assumptions about how the robot experiences the world with its sensors. Think carefully about specific situations and come up with tailor-made constants and behaviors. Avoid unilaterally using notions like "near" and "far" or "free" and "blocked" for example --- it really depends on the behavior what "near" and "far" mean. See the section on wall following.<br />
<br />
*Do not neglect mechanical design. Robots are crippled every year in the final competition because something breaks, not because their behavior is poor. Software can recover; physically broken things cannot. Do not use cardboard, do not use glue, do not use velcro or tape. Try not to use zipties. Be precise. Bash your robot into walls excessively. Fix anything that breaks with double strength.<br />
<br />
*On that note, do not neglect software design either. Many of our software fixes were trivial because of the infrastructure and abstractions we had set up. Different behavior needed? Define a state for it, and specify the state transitions. Different sensor variants? Plug 'em in, the main application doesn't care. <br />
<br />
*Testing will take up the vast majority of your time. Set up tools to make effective use of that time. We had a utility for loading parameters from a file in real time during a test run, and were able to iterate extremely quickly.<br />
<br />
==Photos==<br />
<br />
[[File:left.jpg|left]]<br />
[[File:right.jpg|right side]]<br />
[[File:back.jpg|back]]<br />
[[File:top.jpg|top]]<br />
[[File:IMG_1452s.jpg|testing on Putz]]<br />
[[File:IMG_1479s.jpg|testing in 6.01 lab]]<br />
[[File:mIMG_1516s.jpg|late night testing in 26-100]]<br />
<br />
==Video==<br />
<br />
[http://www.youtube.com/watch?v=1kvLC3O37OM| Final - Third Run (Final)]<br />
<br />
[http://www.youtube.com/watch?v=080yuAgGR7o| Final - Third Run (Final), Green Side]<br />
<br />
[http://www.youtube.com/watch?v=rry8o8ZYR9o| Final - Second Run (Winner's Bracket Final)]<br />
<br />
[http://www.youtube.com/watch?v=Wdug2pDay0M| Final - Second Run (Winner's Bracket Final), Green Side]<br />
<br />
[http://www.youtube.com/watch?v=bM4al-Xn8XQ| Final - Second Run (Winner's Bracket Final), Red Side]<br />
<br />
[http://www.youtube.com/watch?v=Y7vOot70TOc| Final - First Run, Green Side]<br />
<br />
[http://www.youtube.com/watch?v=5mp4Zuqz7n4| Maslab 2011 Teaser Trailer]<br />
<br />
[http://www.youtube.com/watch?v=RxiwAiYOfsk| Mock 3]<br />
<br />
[http://www.youtube.com/watch?v=plR4XmozLJo| Mock 2]<br />
<br />
[http://www.youtube.com/watch?v=yarssg8vlDA| Mock 1]</div>
Team Six/Final Paper
<hr />
<div>__TOC__<br />
<br />
==Overall Strategy==<br />
<br />
Our initial overall strategy was as follows:<br />
<br />
'''Robot algorithm''': The algorithm runs as long as the timer reads less than 3 minutes (180 seconds). The first ball seen has its color noted and saved in a variable called our_color. As soon as this is established, search the map for goals. A goal is detected as follows: if a yellow wall is seen, drive up to it and use the IR sensor to determine whether or not the depth of the wall varies along its length. If the wall does vary, save the location of the goal in a list called goals_loc[]. Once the number of known goals is greater than 2 (or more than 30 seconds have elapsed), begin looking for balls. Whenever a ball is found, look for the nearest goal and transfer the ball to that goal; do this as long as the timer has not passed 2 minutes. After two minutes, whenever a ball is found, if the distance between the ball and the nearest known wall on the opponent's side is less than the distance to the nearest known goal, save the distance and calculate a random number between 0 and 1, saved as rand_n. If rand_n > e^-(d_togoal - d_towall), throw the ball over the wall; otherwise, throw the ball into the goal. Stop after 3 minutes.<br />
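<br />
The probabilistic throw decision at the end of the algorithm can be sketched directly (the class and method names here are ours, for illustration):<br />
<br />
```java
import java.util.Random;

// Sketch of the endgame decision rule: with distance-to-goal d_togoal and
// distance-to-wall d_towall, throw over the wall when a uniform random draw
// exceeds e^-(d_togoal - d_towall). Names are illustrative.
public class EndgameChoice {
    // true = throw over the wall, false = throw into the goal
    public static boolean throwOverWall(double dToGoal, double dToWall, Random rng) {
        double randN = rng.nextDouble();  // uniform in [0, 1)
        return randN > Math.exp(-(dToGoal - dToWall));
    }
}
```

Note that when the goal is at least as close as the wall, the exponential is at least 1, so the draw can never exceed it and the robot always scores in the goal; the farther the goal is relative to the wall, the likelier a wall throw becomes.<br />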
<br />
'''Robot strategy''': We decided that we are stronger on the course 6 side of the spectrum than the course 2 side of the spectrum. We're going to keep our robot design relatively simple, with a conveyor belt and accompanying pinball-machine-like doors to pull balls into the robot and drop them into a compartment. On the side, we will have a door that opens when told so that we can drop all our balls into the goal. We decided not to drop balls onto the other side in order to keep our robot simple so that we can focus on our code.<br />
<br />
'''Time-allotment strategy''': <br />
* Michael - Since Maslab is his main IAP commitment, he'll code in the evenings and possibly during the day.<br />
* Shawn - Work until 4pm every day, code at night with Michael.<br />
* Piper - Work on building between lecture (or late mornings when lecture isn't happening) and 7pm daily.<br />
* Xavier - Work on building between lecture (or late mornings when lecture isn't happening) and 7pm daily. <br />
<br />
<br />
==Mechanical Design and Sensors==<br />
<br />
We came to realize that our original robot design was problematic. The conveyor belt roller would have to be very small so that the ball would be pulled up it rather than pulled away, and the doors pushing the ball onto the roller would have to be well synchronized. We decided to start from scratch and came up with something we believed would work much better. Our new design was not only easier to build and more predictable in function, but it also allowed us to score over the wall.<br />
<br />
We have a scoop with a slanted arm leading down, lined with teeth to scoop up a ball when the robot drives into the ball. Two motors raise the scoop so that the ball rolls back into our collection box. We had some issues with not having enough torque before, so we added long metal bars sticking out from our scoop to provide the leverage needed to raise it. Under our collection box, we keep our motor, Orc board, and computer. The collection box itself is slanted towards an escape hatch in the back. This hatch was initially designed to be a drawbridge, but that design became harder once we decided to use a servo to open and close it, so it is just a plate of metal. Since the escape hatch is over the wall, our balls will be able to fall into the other team's field. We will do this with all our balls instead of scoring in goals. <br />
<br />
We have two bump sensors at the back of our robot (that is, where the escape hatch is, not the side with the scoop and teeth). This way, when our robot backs up, it will be able to detect when it hits the wall. Our camera faces front, through our scoop (which is composed of plexiglass). On each side, we have two IR sensors that are used to detect the robot's angle to the wall.<br />
<br />
==Software Design==<br />
In general, our software was fairly simple. It consisted primarily of two threads: one carrying out image processing, and the other carrying out the actual motion of the robot. Apart from both threads was a class (public class Commander) which contained the code to actually execute the movements of the robot. That is to say, Commander contained the lowest-level code for how the robot moves. <br />
<br />
The image processing thread worked with the class ImageScanner. As the name implies, ImageScanner scans the image and makes available to the other thread the positions of various points of interest. The method ImageScanner.analyze() is at the heart of the class and calls four different submethods that all work in the same way. These methods (find_red_blob(), find_blue_blob(), etc.) identify the regions of the image that contain their respective colors. When a region is found, its center of mass and extremities are recorded and made available to the other thread. To see the output of the method visually, we can uncomment the method ImageScanner.proc() and then forward the result to a BotClient.<br />
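The kind of blob scan described above can be sketched as follows. This is a minimal illustrative example, not the team's actual code: the method name findRedBlob, the packed-RGB image format, and the threshold values are all assumptions.

```java
// Hypothetical sketch of a color-blob scan like the one ImageScanner performs.
// Assumes a packed 0xRRGGBB image; thresholds are illustrative placeholders.
public class BlobSketch {
    // Returns {centerX, centerY, pixelCount} of pixels whose red channel dominates,
    // or {-1, -1, 0} if no red region is found.
    static int[] findRedBlob(int[][] rgb, int width, int height) {
        long sumX = 0, sumY = 0;
        int count = 0;
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                int p = rgb[y][x];
                int r = (p >> 16) & 0xFF, g = (p >> 8) & 0xFF, b = p & 0xFF;
                if (r > 120 && r > 2 * g && r > 2 * b) { // crude "red" test
                    sumX += x;
                    sumY += y;
                    count++;
                }
            }
        }
        if (count == 0) return new int[] {-1, -1, 0};
        // Center of mass of the matching pixels.
        return new int[] {(int) (sumX / count), (int) (sumY / count), count};
    }

    public static void main(String[] args) {
        int[][] img = new int[4][4];      // all black
        img[1][2] = 0xFF0000;             // one bright red pixel at (x=2, y=1)
        int[] blob = findRedBlob(img, 4, 4);
        System.out.println(blob[0] + "," + blob[1] + "," + blob[2]); // 2,1,1
    }
}
```

Tracking extremities (min/max x and y of matching pixels) would follow the same loop structure.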
<br />
The finite state machine that comprises the main navigation thread is fairly simple. It starts in a state called "SCAN" and essentially takes several scans of the course in front of it. If it sees a ball, it changes to the state "FOLLOW_FOOBALL" (FOO being either RED or GREEN). In this state, it uses a simple proportional controller to zero in on the ball: whenever the ball is left of center, the robot turns slightly left, and likewise on the right. Once the ball is centered, the robot charges forward at full speed and attempts to pick it up in the state "PICKUP_BALL". That state involves two smaller threads originating in Commander that allow the robot to move backwards slightly while lifting the scoop. The actual scoring mechanism was never implemented, but methods similar to "SCAN" and "FOLLOW_REDBALL" were hypothesized for the search for and approach to goals.<br />
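The proportional steering step above can be sketched like this. The gain, image width, and class name are assumptions for illustration; the real values would come from the camera resolution and tuning, and the real code would feed the result to Commander.

```java
// Minimal sketch of the proportional controller used while following a ball.
// KP and IMAGE_WIDTH are assumed constants, not the team's actual values.
public class FollowSketch {
    static final double KP = 0.005;     // steering gain (placeholder)
    static final int IMAGE_WIDTH = 320; // camera image width in pixels (assumed)

    // Negative result = turn left, positive = turn right, near zero = charge forward.
    static double steer(int ballCenterX) {
        int error = ballCenterX - IMAGE_WIDTH / 2; // pixels off-center
        return KP * error;
    }

    public static void main(String[] args) {
        System.out.println(steer(160)); // ball centered -> 0.0
        System.out.println(steer(60));  // ball on the left -> negative (turn left)
    }
}
```

The controller's output would typically be added to one wheel speed and subtracted from the other, so a centered ball yields straight-line driving.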
<br />
==Overall Performance==<br />
<br />
Our robot's final performance was non-ideal (we were one of the first groups knocked out). We ended up having scoop and code problems that didn't appear until the night before the competition, and we weren't able to fix them in time. Overall, we had a physical robot prepared, but it was too late to get our code working with it.<br />
<br />
==Conclusions==<br />
<br />
Our team came in with the purpose of learning more about building and coding, having a lot of fun (and sleep deprivation) doing it, and not worrying about being the most competitive team. For the most part, this worked out really well (other than a couple of panicked moments where we were behind on a checkpoint or two and wondered if we should drop out - the staff helped get us up to speed and this ended up not being a problem). Overall, we walked away with what we wanted to get out of MASlab, which is really awesome.<br />
<br />
We ended up deviating from our original scheduling plan quite a bit. Instead of having our strongest coders (Michael and Shawn) code and our builders (Wings and Xavier) build, we had a fuzzy division of labor that arose naturally. After discussing designs, Xavier would go build, and Wings would maintain the journal and keep a to-do list for our robot. Wings and Shawn would also solder and construct other parts (e.g., at the Edgerton Center). Michael worked on the code. With both the code and the building, we constantly sanity-checked each other and proposed alternative ideas, coming to a group agreement every step of the way. <br />
<br />
However, we ended up not being as effective with our code. Michael spent a lot of time coding, but because the robot wasn't built fast enough and we weren't absolutely clear on how the code and robot should interact, we ran into a lot of problems when trying to tie everything together. With more time, we would've been able to debug enough to have our robot work as intended. In the end, however, we weren't able to do this, and defaulted to the plan of just having the robot drive around and scoop up balls instead of using our scoring mechanism.<br />
<br />
But as stated before, we are very happy with what we got out of MASlab. We learned to build, code, and work as a team. No doubt we'll carry on the awesome lessons we learned here :)<br />
<br />
==Suggestions for future teams==<br />
<br />
When forming a team, make sure that everyone is on the same page as far as what they want to get out of MASlab, how competitive they want to be, and what their time commitments are expected to be. Also, there's absolutely no harm in thinking about ideas over winter break! It certainly means building can start earlier. Which brings us to...<br />
<br />
Build early, build often, and allot more time for this than you think you need! Our main problem was that we were all very new to building, and things tended to go wrong often. You never know when the laser cutter will stop working, you'll probably have to remachine parts several times over, and you will find ''tons'' wrong with your robot that you didn't even think about. Building early means that you can make more mistakes and don't have to be afraid of them. A physical robot is ''very'' useful for debugging, so getting one to your coders early is incredibly valuable.</div>Wingshttps://maslab.mit.edu/2011/wiki/Team_Seven/AssignmentsTeam Seven/Assignments2011-01-18T17:43:26Z<p>Rafacb: /* Image Processing Sample Pictures */</p>
<hr />
<div><div style="text-align: center;"><br />
<br />
== Image Processing Sample Pictures ==<br />
[[File:capture3.png]] [[File:capture3result.jpg]]<br />
<br />
[[File:capture9.png]] [[File:capture9result.jpg]]<br />
<br />
These two pictures were used for early testing; they were not actual photos taken by our camera in 26-100.<br />
<br />
[[File:capture11.jpg]]<br />
[[File:capture11result.jpg]]<br />
<br />
This is the result of an actual photo taken by the team.<br />
<br />
</div><br />
<br />
<!-- Centralize page: <div style="text-align: center;"> --></div>Rafacbhttps://maslab.mit.edu/2011/wiki/Team_ThreeTeam Three2011-01-11T00:14:05Z<p>Cookies: </p>
<hr />
<div>Current Team Name: We-Ski!<br />
<br />
Current Robot Name: Monsieur Robot<br />
<br />
== Team and Our RoBot Naming Options==<br />
<br />
Still thinking about what to call ourselves. It should match the team spirit, the principles of the game, and the personality of the robot.<br />
<br />
Right now, the ideas include <br />
* Screwed (so we can say we are screwed)<br />
* We've Got Balls<br />
* 必胜 (Chinese, pronounced bisheng, means definitely wins)<br />
* neeu (actually the word new, but conceived from the last letter of our last names)<br />
* 一生懸命 (Japanese, pronounced isshoukenmei, means with all one's might)<br />
<br />
<br />
Our little RoBot puppy needs an awesome name too! So far we have:<br />
* Solene (French, OTZ to Audren, and it would work if we have a solenoid on board)<br />
* Nuts (screws and nuts, anyone?)<br />
* RoBot<br />
* Sprite<br />
* Yum! (don't forget to click your tongue when you get to the exclamation point)<br />
* Hovercraft Wanna Be<br />
* The Clock<br />
* One Up (the little green mushroom shaped robot, gives you one point when you eat it in Super Mario)<br />
* Onion With Layers<br />
<br />
== More Information About Our Robot ==<br />
[http://maslab.mit.edu/2011/wiki/Team_Three/Journal Journal]<br />
<br />
[http://maslab.mit.edu/2011/wiki/Team_Three/Assignments Assignments]<br />
<br />
[http://maslab.mit.edu/2011/wiki/Team_Three/Final_Paper Final Paper]</div>StInfinitihttps://maslab.mit.edu/2011/wiki/Team_TwoTeam Two2011-01-10T06:59:56Z<p>Dfourie: /* PUTZ PUTZ */</p>
<hr />
<div><br />
<br />
<br />
=== PUTZ PUTZ ===<br />
<br />
Journal: [http://maslab.mit.edu/2011/wiki/Team_Two/Journal]<br />
<br />
<br />
Final Paper: [http://maslab.mit.edu/2011/wiki/Team_Two/Final_Paper]</div>Dfouriehttps://maslab.mit.edu/2011/wiki/Team_ElevenTeam Eleven2011-01-09T23:39:29Z<p>Kiarash: </p>
<hr />
<div>[[File:How-to-draw-a-smiley-nerd.jpg]]<br />
<br />
<br />
Team members:<br />
<br />
<br />
Kiarash Adl, William Souillard-Mandar, Tim Robertson, Kristen Anderson</div>Kiarashhttps://maslab.mit.edu/2011/wiki/Team_Eleven/JournalTeam Eleven/Journal2011-01-09T21:46:50Z<p>Kranders: </p>
<hr />
<div>Day 1, Jan 3<br />
<br />
*team gets to know each other better <br />
*thinking about the idea<br />
<br />
Day 2, Jan 4<br />
<br />
*Mechanical design finalized<br />
*Some ideas for AI<br />
<br />
Day 3, Jan 5<br />
<br />
*Robot moves <br />
*working on the design and the code<br />
<br />
Day 4, Jan 6<br />
<br />
*Vision code works<br />
<br />
Day 5, Jan 7<br />
<br />
Day 6, Jan 8<br />
<br />
Day 7, Jan 9<br />
<br />
*Group meeting present: William, Kristen, Kiarash, Gil <br />
*New software architecture<br />
<br />
Jan 13,14,15: Stupid Laser cutter</div>Kiarashhttps://maslab.mit.edu/2011/wiki/Team_Three/AssignmentsTeam Three/Assignments2011-01-09T00:07:16Z<p>Cookies: </p>
<hr />
<div>== Maslab RoBot Build Schedule ==<br />
<br />
<table style="background:#D9FADD" border="1" cellpadding="2" cellspacing="0"><br />
<br />
<tr valign="top"><br />
<th width="100px" style="background:#93FAA0">Sunday</th><br />
<th width="135px" style="background:#93FAA0">Monday</th><br />
<th width="135px" style="background:#93FAA0">Tuesday</th><br />
<th width="135px" style="background:#93FAA0">Wednesday</th><br />
<th width="135px" style="background:#93FAA0">Thursday</th><br />
<th width="135px" style="background:#93FAA0">Friday</th><br />
<th width="100px" style="background:#93FAA0">Saturday</th><br />
</tr><br />
<br />
<tr valign="top"><br />
<td><b>1/2</b><br />
<table border="1"> <tr> <td><p> Welcome to IAP@MIT </p> </td> </tr> <br />
<tr> <td><p> Team 3 Presents: </p><br />
<p> Audren Cloitre</p> <p>Stephanie Lin</p> <p>Faye Wu</p> <p>James White</p> </td> </tr> </table><br />
</td><br />
<br />
<td><b>1/3</b><br />
<table style="background:#FE9B96" border="1"> <tr> <td style="background:#FEDFDD"><p> Checkpoint 1 </p> </td> </tr> </table><br />
<table style="background:#93EEF7" border="1"> <tr> <td style="background:#D7F4F7"><p> Build Pegbot</p> <p>Brainstorm Strategy and Robot Functionality</p> </td> </tr> </table><br />
<table style="background:#D8FD95" border="1"> <tr> <td style="background:#F1FDDC"><p> uOrcBoard Intro</p> </td> </tr> </table><br />
</td><br />
<br />
<td><b>1/4</b><br />
<table style="background:#FE9B96" border="1"> <tr> <td style="background:#FEDFDD"><p> Checkpoint 2 </p> </td> </tr> </table><br />
<table style="background:#93EEF7" border="1"> <tr> <td style="background:#D7F4F7"><p>Decide on Strategy and Robot Design</p> <p>CAD Day 1 of 5</p></td> </tr> </table><br />
<table style="background:#D8FD95" border="1"> <tr> <td style="background:#F1FDDC"><p> Bump Sensor, Encoder and Robot Reaction to Feedback </p> </td> </tr> </table><br />
</td><br />
<br />
<td><b>1/5</b><br />
<table style="background:#FE9B96" border="1"> <tr> <td style="background:#FEDFDD"><p> Checkpoint 3 </p> </td> </tr> </table><br />
<table style="background:#93EEF7" border="1"> <tr> <td style="background:#D7F4F7"><p> CAD Day 2 of 5 </p> </td> </tr> </table><br />
<table style="background:#D8FD95" border="1"> <tr> <td style="background:#F1FDDC"><p> Camera and Camera Vision </p> <p>Optimize Encoder </p> </td> </tr> </table><br />
<table border="1"> <tr> <td><p>Clean Lab@10pm</p> </td> </tr> </table><br />
</td><br />
<br />
<td><b>1/6</b><br />
<table style="background:#FE9B96" border="1"> <tr> <td style="background:#FEDFDD"><p> Checkpoint 4 </p> </td> </tr> </table><br />
<table style="background:#93EEF7" border="1"> <tr> <td style="background:#D7F4F7"><p> CAD Day 3 of 5 </p> </td> </tr> </table><br />
<table style="background:#D8FD95" border="1"> <tr> <td style="background:#F1FDDC"><p> Encoders, Gyro and PID Controller </p> </td> </tr> </table><br />
</td><br />
<br />
<td><b>1/7</b><br />
<table style="background:#FE9B96" border="1"> <tr> <td style="background:#FEDFDD"><p> Checkpoint 5 </p> </td> </tr> </table><br />
<table style="background:#93EEF7" border="1"> <tr> <td style="background:#D7F4F7"><p> CAD Day 4 of 5 </p> </td> </tr> </table><br />
<table style="background:#D8FD95" border="1"> <tr> <td style="background:#F1FDDC"><p> Main FSM and Structure of Code </p> <p>More PID</p> </td> </tr> </table><br />
</td><br />
<br />
<td><b>1/8</b><br />
<table style="background:#93EEF7" border="1"> <tr> <td style="background:#D7F4F7"><p> CAD Day 5 of 5 </p> </td> </tr> </table><br />
<table style="background:#D8FD95" border="1"> <tr> <td style="background:#F1FDDC"><p> Behavior 1 & 2 </p> <p>More PID</p></td> </tr> </table><br />
</td><br />
</tr><br />
<br />
<!--WEEK2--><br />
<tr valign="top"><br />
<td><b>1/9</b><br />
<table style="background:#FE9B96" border="1"> <tr> <td style="background:#FEDFDD"><p> CAD Model Complete </p> </td> </tr> </table><br />
<table style="background:#D8FD95" border="1"> <tr> <td style="background:#F1FDDC"><p> Behaviors 3 & 4 </p> </td> </tr> </table><br />
</td><br />
<br />
<td><b>1/10</b><br />
<table style="background:#FE9B96" border="1"> <tr> <td style="background:#FEDFDD"><p> Checkpoint 6 </p> <p> Mock Competition 1</p> </td> </tr> </table><br />
<table style="background:#93EEF7" border="1"> <tr> <td style="background:#D7F4F7"><p> Machining Day 1 of 3</p> </td> </tr> </table><br />
<table style="background:#D8FD95" border="1"> <tr> <td style="background:#F1FDDC"><p> Behaviors 5 & 6 </p> <p>Vision Code Improvement</p></td> </tr> </table><br />
</td><br />
<br />
<td><b>1/11</b><br />
<table style="background:#93EEF7" border="1"> <tr> <td style="background:#D7F4F7"><p>Machining Day 2 of 3</p> </td> </tr> </table><br />
<table style="background:#D8FD95" border="1"> <tr> <td style="background:#F1FDDC"><p> Behavior 7 & 8</p> <p>Vision Code Improvement</p></td> </tr> </table><br />
</td><br />
<br />
<br />
<td><b>1/12</b><br />
<table style="background:#93EEF7" border="1"> <tr> <td style="background:#D7F4F7"><p> Machining Day 3 of 3</p> </td> </tr> </table><br />
<table style="background:#D8FD95" border="1"> <tr> <td style="background:#F1FDDC"><p> Navigation Done</p> <p>Vision Code Improvement</p></td> </tr> </table><br />
</td><br />
<br />
<td><b>1/13</b><br />
<table style="background:#FE9B96" border="1"> <tr> <td style="background:#FEDFDD"><p> Final Robot Frame Built </p> </td> </tr> </table><br />
<table style="background:#D8FD95" border="1"> <tr> <td style="background:#F1FDDC"><p> Navigation Improvement</p> <p>Vision Code Improvement</p></td> </tr> </table><br />
</td><br />
<br />
<td><b>1/14</b><br />
<table style="background:#FE9B96" border="1"> <tr> <td style="background:#FEDFDD"><p> Checkpoint 7 </p> <p> Mock Competition 2</p></td> </tr> </table><br />
</td><br />
<br />
<td><b>1/15</b><br />
<table border="1"> <tr> <td><p> MIT Mystery Hunt </p></td> </tr> </table><br />
</td><br />
</tr><br />
<br />
<!--WEEK3--><br />
<tr valign="top"><br />
<td><b>1/16</b><br />
<table border="1"> <tr> <td><p> MIT Mystery Hunt </p></td> </tr> </table><br />
<table style="background:#FE9B96" border="1"> <tr> <td style="background:#FEDFDD"><p> Mechanical Improvement </p> </td> </tr> </table><br />
<table style="background:#D8FD95" border="1"> <tr> <td style="background:#F1FDDC"><p> Behavior Improvement</p><p>Sensor Calibration</p></td> </tr> </table><br />
</td><br />
<br />
<td><b>1/17</b><br />
<table style="background:#FE9B96" border="1"> <tr> <td style="background:#FEDFDD"><p> Mechanical Improvement </p> </td> </tr> </table><br />
<table style="background:#D8FD95" border="1"> <tr> <td style="background:#F1FDDC"><p> Behavior Improvement</p><p>Sensor Calibration</p></td> </tr> </table><br />
</td><br />
<br />
<td><b>1/18</b><br />
<table style="background:#FE9B96" border="1"> <tr> <td style="background:#FEDFDD"><p> Mechanical Improvement </p> </td> </tr> </table><br />
<table style="background:#D8FD95" border="1"> <tr> <td style="background:#F1FDDC"><p> Behavior Improvement</p><p>Sensor Calibration</p></td> </tr> </table><br />
</td><br />
<br />
<td><b>1/19</b><br />
<table style="background:#FE9B96" border="1"> <tr> <td style="background:#FEDFDD"><p> Checkpoint 8 </p> <p> Mock Competition 3</p></td> </tr> </table><br />
<table border="1"> <tr> <td><p>Sponsor Dinner</p> </td> </tr> </table><br />
<table border="1"> <tr> <td><p>CleanLab@10pm</p> </td> </tr> </table><br />
</td><br />
<br />
<td><b>1/20</b><br />
<table style="background:#D8FD95" border="1"> <tr> <td style="background:#F1FDDC"><p> Test and Adapt</p></td> </tr> </table><br />
</td><br />
<br />
<td><b>1/21</b><br />
<table style="background:#D8FD95" border="1"> <tr> <td style="background:#F1FDDC"><p> Test and Adapt</p></td> </tr> </table><br />
</td><br />
<br />
<td><b>1/22</b><br />
<table border="1"> <tr> <td><p> GSC Ski Trip </p></td> </tr> </table><br />
</td><br />
</tr><br />
<br />
<!--WEEK4--><br />
<tr valign="top"><br />
<td><b>1/23</b><br />
<table border="1"> <tr> <td><p> GSC Ski Trip </p></td> </tr> </table><br />
</td><br />
<br />
<td><b>1/24</b><br />
<table border="1"> <tr> <td><p> GSC Ski Trip </p></td> </tr> </table><br />
</td><br />
<br />
<td><b>1/25</b><br />
<table style="background:#FE9B96" border="1"> <tr> <td style="background:#FEDFDD"><p>Seeding</p></td> </tr> </table><br />
</td><br />
<br />
<td><b>1/26</b><br />
<table style="background:#D8FD95" border="1"> <tr> <td style="background:#F1FDDC"><p> Test and Adapt</p></td> </tr> </table><br />
</td><br />
<br />
<td><b>1/27</b><br />
<table style="background:#D8FD95" border="1"> <tr> <td style="background:#F1FDDC"><p> Test and Adapt</p><p> Final Code check</p></td> </tr> </table><br />
<table style="background:#FE9B96" border="1"> <tr> <td style="background:#FEDFDD"><p>Robot Impound at 5</p></td> </tr> </table><br />
</td><br />
<br />
<td><b>1/28</b><br />
<table style="background:#FE9B96" border="1"> <tr> <td style="background:#FEDFDD"><p> Final Competition </p></td> </tr> </table><br />
<table border="1"> <tr> <td><p>Competition Tear Down</p> </td> </tr> </table><br />
</td><br />
<br />
<td><b>1/29</b><br />
<table border="1"> <tr> <td><p>Clean Lab</p> </td> </tr> </table><br />
</td><br />
</tr><br />
<br />
</table><br />
<br />
== Maslab RoBot Hardware Design ==<br />
<br />
=== Handdrawn Design ===<br />
<br />
=== CAD Design (preliminary) ===<br />
<table> <br />
<tr> <td> [[Image:Isometric.PNG|x250px|alt="CAD Drawing Isometric"]] </td> <br />
<td> [[Image:Front.PNG|x250px|alt="CAD Drawing Front"]] </td> <br />
<td> [[Image:Left.PNG|x250px|alt="CAD Drawing Profile"]] </td> </tr> <br />
<br />
<tr align="center"> <td> Isometric View </td> <td> Front View </td> <td> Profile View </td> </tr><br />
</table><br />
=== CAD Design (final) ===<br />
[[Image:Final CAD.JPG|alt="CAD Drawing Final Isometric"]] <br />
=== Photographs ===<br />
<br />
== Maslab RoBot Software Architecture ==<br />
<br />
== Maslab RoBot Strategy ==<br />
<br />
<table cellpadding="5" cellspacing="5" rules="none" border="1"> <tr> <td><br />
[[Image:BlackBox.jpg|alt=Black Box]]<br />
</td> </tr><br />
<tr> <td align=center width=100px>Our strategy lies hidden behind a black box, waiting to be revealed.</td> </tr> </table></div>Linschttps://maslab.mit.edu/2011/wiki/Team_SixTeam Six2011-01-08T00:32:24Z<p>Xavier: added team members names</p>
<hr />
<div>Team SLAMBA<br />
<br />
Michael Olague<br />
<br />
Piper "Wings" Hunt<br />
<br />
Shawn Westerdale<br />
<br />
Xavier Jackson</div>Xavierhttps://maslab.mit.edu/2011/wiki/Build.xmlBuild.xml2011-01-06T20:25:06Z<p>Maslab: </p>
<hr />
<div><pre><br />
<project name="ant-tutorial" default="build" basedir="."><br />
<!-- CHANGE THESE THREE VALUES FOR AUTOMATIC UPLOAD --><br />
<property name="robotIP" value="18.62.31.60"/><br />
<property name="destDir" value="/home/maslab/code"/><br />
<property name="username" value="maslab"/><br />
<property name="binDir" value="bin"/><br />
<property name="srcDir" value="src"/><br />
<br />
<target name="build"><br />
<!-- This does deep dependency checking on class files --><br />
<depend srcdir="${srcDir}" destdir="${binDir}" cache="depcache" closure="true"/><br />
<!-- This compiles all the java --><br />
<javac srcdir="${srcDir}" destdir="${binDir}" includes="**/*.java" debug="true" classpath="lib/maslab.jar:lib/orc.jar"/><br />
</target><br />
<!-- Clean everything --><br />
<target name="clean"><br />
<delete><br />
<fileset dir="${binDir}" includes="**/*.class"/><br />
<fileset dir="${binDir}" includes="**/*~" defaultexcludes="no"/><br />
</delete><br />
</target><br />
<!-- Upload files to robot --><br />
<target name="upload" depends="build"><br />
<exec executable="rsync"><br />
<arg line="-e ssh -avr ${binDir} ${username}@${robotIP}:${destDir}"/><br />
</exec><br />
</target><br />
</project><br />
</pre></div>Maslabhttps://maslab.mit.edu/2011/wiki/Team_One/AssignmentsTeam One/Assignments2011-01-05T02:20:34Z<p>Eronsis: </p>
<hr />
<div>Scoring Strategy<br />
<br />
Upon activation, the bot will begin randomly driving around, picking up balls with a roller as it drives over them and holding them in a storage area. It will not differentiate between our balls and the opposing team's balls, because it is advantageous to us to have both colors on the other side of the yellow wall. While collecting balls, the robot will watch for yellow walls even though it does not intend to shoot yet. If it sees one, it will attempt to record the wall's location using a magnetometer (digital compass) and odometry, so it can return to face the wall more quickly when the time comes.<br />
<br />
As balls enter, they trip a switch so the robot can keep track of how many it is holding. When it has picked up about 3 balls (exact quantity subject to change), it will switch to searching for a yellow wall to launch the balls over using a flywheel. The robot will measure the distance to the wall with an IR sensor and adjust the flywheel speed to ensure clearing the wall with minimal risk of overshooting. After expending its ammo, the robot will reenter search-and-gather mode.<br />
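One simple way to realize the distance-to-speed adjustment described above is linear interpolation between two calibrated endpoints. This is a sketch only: every constant here is a placeholder that would have to be replaced by empirical calibration against the actual flywheel and wall height.

```java
// Hypothetical distance-to-flywheel-speed mapping; all constants are assumed,
// and a real robot would calibrate this table experimentally.
public class FlywheelSketch {
    static final double MIN_DIST = 0.3, MAX_DIST = 2.0;   // IR sensor range in meters (assumed)
    static final double MIN_SPEED = 0.4, MAX_SPEED = 1.0; // motor command fraction (assumed)

    // Clamp the measured distance to the calibrated range, then interpolate linearly.
    static double flywheelSpeed(double distMeters) {
        double d = Math.max(MIN_DIST, Math.min(MAX_DIST, distMeters));
        double t = (d - MIN_DIST) / (MAX_DIST - MIN_DIST); // 0 at near wall, 1 at far wall
        return MIN_SPEED + t * (MAX_SPEED - MIN_SPEED);
    }

    public static void main(String[] args) {
        System.out.println(flywheelSpeed(0.3)); // nearest calibrated wall -> 0.4
        System.out.println(flywheelSpeed(2.0)); // farthest calibrated wall -> max speed
    }
}
```

A piecewise lookup table with more calibration points would be a natural next step if a straight line turns out not to match the launcher's physics.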
<br />
[[Image:harvester.png]]<br />
<br />
'''Team Calendar'''<br />
<br />
[[Image:Team Schedule.png]]<br />
<br />
'''Software Thread Structure'''<br />
<br />
[[Image:Software FSM.png]]</div>Eronsishttps://maslab.mit.edu/2011/wiki/Team_TenTeam Ten2011-01-04T20:54:49Z<p>Maslab10: </p>
<hr />
<div>Composed of: Arvin Shahbazi Moghaddam, Wojciech Musial, Tongji Li, Alex Teuffer<br />
<br />
Our Epic Journal! [http://maslab.mit.edu/2011/wiki/Team_Ten/Journal]<br />
or the assignments[http://maslab.mit.edu/2011/wiki/Team_Ten/Assignments]...</div>Maslab10https://maslab.mit.edu/2011/wiki/Team_Ten/AssignmentsTeam Ten/Assignments2011-01-04T19:56:31Z<p>Maslab10: </p>
<hr />
<div>'''January 04, 2011'''<br />
'''Tuesday 14:20'''<br />
'''Assignment 2'''<br />
----<br />
<br />
''Strategy'':<br />
So far, we've decided to concentrate on getting all the balls over the wall. However, we do have some other potential strategies about scoring in the goals that might be considered as well.<br />
<br />
''Software Design'': Our robot's software will consist of several Java classes. One class will process the optical and other sensor data from the camera and sensors, giving the robot a sense of its surroundings: it will "tell" the robot the distance to the surrounding walls as well as to the balls in sight. A control class will communicate with a driving class to decide the robot's movements based on what the sensors and camera detect. The robot will pick up balls, then decide each ball's color and whether it should score it or keep it. A timer will also run to control which strategy our robot uses (the strategy varies with time).<br />
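The timer-controlled strategy switch mentioned above could be sketched as follows. The class, state names, match length, and threshold are all assumptions made for illustration; the real schedule would depend on the actual contest rules and tuning.

```java
// Minimal sketch of a timer-driven strategy switch (all names and times assumed).
public class StrategySketch {
    enum Strategy { COLLECT, DUMP }

    // Assuming a ~3-minute match: collect balls early, dump them over the wall late.
    static Strategy pick(long elapsedMillis) {
        return elapsedMillis < 150_000 ? Strategy.COLLECT : Strategy.DUMP;
    }

    public static void main(String[] args) {
        System.out.println(pick(10_000));  // early in the match -> COLLECT
        System.out.println(pick(170_000)); // late in the match -> DUMP
    }
}
```

In a real control loop, the chosen Strategy would gate which states of the navigation FSM are allowed to run.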
<br />
''Mechanical Outline'': Our robot will have three levels. The first level will be the entrance for the balls, which will be collected using a motor turning a horizontal spindle of rubber bands mounted across the front of the robot. The spindle will push the balls onto a very short, low ramp, after which they will roll into a channel leading to a conveyor belt. This conveyor belt will be housed in a vertical half-pipe and will drop the balls onto the third level of our robot. The third level is essentially a topless box, inclined so that balls tend to roll to the front of the robot. The very front edge of the third level will have a door like a pickup truck's tailgate, which lies flat to let out all of the balls that were collected.<br />
We want to keep the battery on the lowest level of the robot so that we can keep our center of mass closer to the ground. The camera and computer will be attached to the second level.<br />
<br />
''Schedule'' [[File:Schedule.jpg]]</div>Maslab10