Team Four/Final Paper


Maslab Team 4 Final Report

Code architecture

Following good software engineering practice, the code for Lolbot is split into three separate programs. First, there is the low-level behavior, written in C++ and running on the Arduino. The Arduino is responsible for small tasks, such as driving in a straight line, turning 360 degrees, or approaching a ball given a delta x. It interfaces with the other programs running on the computer over a simple serial text protocol, which made it extremely easy to test these individual tasks.
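
The exact command set was specific to our robot, but the computer side of such a protocol is tiny. As a minimal sketch in Python using pyserial (the port, baud rate, and command names here are illustrative, not our actual ones):

    import serial

    # Open the serial link to the Arduino. Port and baud rate are
    # placeholders; use whatever your board enumerates as.
    arduino = serial.Serial("/dev/ttyUSB0", 115200, timeout=1.0)

    def send_command(cmd):
        # Send one text command and read the Arduino's one-line reply.
        arduino.write((cmd + "\n").encode("ascii"))
        return arduino.readline().decode("ascii").strip()

    # Hypothetical commands: drive straight, turn in place, approach a
    # ball at a given horizontal offset reported by the vision program.
    send_command("STRAIGHT 100")
    send_command("TURN 360")
    send_command("APPROACH -15")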

The second part is the vision program, written in C++ and running on the EeePC. We chose C++ for the vision code for efficiency reasons, since this year's default language of Python did not seem fast enough for live vision processing. The vision program talks to the main control program over another simple serial text protocol.

The final part is the main control program, written in Python and running on the EeePC. The main control program is responsible for turning the low-level output of the Arduino and the vision program into clever behavior. Writing the main control program in Python was a wise choice, as it made it possible to describe fairly finicky behaviors in short, readable code.

Low-level behavior

Making a Maslab robot perform its task of finding, collecting, and scoring balls is difficult, as illustrated by how few robots managed to score during this year's final competition. But the individual tasks (such as going straight or aiming for a ball) are all fairly approachable. The situation becomes a lot more complicated when the simple tasks have to be combined, and it is only then that their flaws become apparent, as errors accumulate. We decided early on to make sure that these basic tasks all worked reliably and could be tested individually. This turned out to be very valuable: in one match during the final competition our vision system broke, but thanks to our well-implemented wall-following code on the Arduino we still managed to collect balls and win the match.

Our wall-following code:

    if (front_acc >= 400 || right_acc >= 450) {
        // too close in front or on the right: turn left in place
        hardware.set_motor_speed(L, -60);
        hardware.set_motor_speed(R, 60);
    } else if (right_acc >= 250) {
        // wall in range: go straight, steering proportionally to
        // keep the right-side reading near 350
        double p = right_acc - 350;
        hardware.set_motor_speed(L, 90 - p * .2);
        hardware.set_motor_speed(R, 90 + p * .2);
    } else {
        // no wall within range on the right: arc right to find one
        hardware.set_motor_speed(L, 100);
        hardware.set_motor_speed(R, 40);
    }

Vision

Our vision was fairly naive in the end. We converted the images to HSV and used hard-coded thresholds to detect colors. Since the colors of the field were all extremely distinct, we decided not to use any calibration and instead simply filtered out everything above the blue wall line. In the end, this worked fairly well, but it made testing in different lighting conditions quite a bit trickier. We also ended up not using the depth data from the Kinect, mostly because we had more pressing issues to work on. Using the Kinect to detect balls that are hard to collect (because they are pressed against walls) is an interesting plan for the future.
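
Our actual vision code was in C++, but the core idea fits in a few lines. As a sketch in Python with OpenCV (the threshold values are placeholders, not our tuned ones):

    import cv2
    import numpy as np

    def red_ball_mask(bgr_frame):
        # Hue is far more stable under lighting changes than raw RGB.
        hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)

        # Red wraps around hue 0 on OpenCV's 0-179 hue scale, so we
        # combine two ranges. These numbers are illustrative only.
        lower = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255))
        upper = cv2.inRange(hsv, (170, 120, 70), (179, 255, 255))
        mask = cv2.bitwise_or(lower, upper)

        # Discard everything above the blue wall line: in each column,
        # find the lowest blue pixel and zero out the mask above it.
        blue = cv2.inRange(hsv, (100, 150, 50), (130, 255, 255))
        ys, xs = np.nonzero(blue)
        for x in np.unique(xs):
            mask[:ys[xs == x].max(), x] = 0
        return mask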

To debug our vision code, we installed Dropbox on the EeePC and saved a raw and a processed camera capture every 30 seconds or so. This way, we could see what the robot was seeing, and the saved captures became an excellent test suite for changes to the vision code.
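
The saving logic itself is only a few lines. A sketch (the Dropbox directory path is made up):

    import time
    import cv2

    DEBUG_DIR = "/home/maslab/Dropbox/robot-debug"  # synced by Dropbox
    SAVE_INTERVAL = 30.0                            # seconds
    last_save = 0.0

    def maybe_save_debug(raw, processed):
        # Write a raw and a processed frame roughly every 30 seconds;
        # Dropbox then syncs them off the robot for us to look at.
        global last_save
        now = time.time()
        if now - last_save >= SAVE_INTERVAL:
            stamp = int(now)
            cv2.imwrite("%s/%d_raw.png" % (DEBUG_DIR, stamp), raw)
            cv2.imwrite("%s/%d_proc.png" % (DEBUG_DIR, stamp), processed)
            last_save = now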

In support of text-based protocols

Often, parts of our robot would break, or our robot would inexplicably mess up a simple task such as aligning to the yellow wall. The separation of high- and low-level behavior was very useful here: instead of running a complete match and hoping that the problem would show up, we could simply tell the Arduino to align to a wall over and over again. Likewise, when we first added encoders to our robot, we could simply look at the text sent from the Arduino to the computer to see if the data was reasonable. Our simple text-based protocol became an excellent debugging feature.
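
For example, instead of running a full match, a few throwaway lines were enough to stress one behavior in isolation (the command name is hypothetical, reusing the send_command helper sketched earlier):

    # Repeat a single low-level task forever and watch the replies; any
    # misbehavior shows up immediately in the text stream.
    while True:
        print(send_command("ALIGN_YELLOW"))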

High-level behavior

The overall high-level behavior of our robot was one big state machine written in Python. We had a basic version of this code running during the second mock competition (in the form of one giant while loop), and the logic changed fairly little after that. The most important change was the addition of many timeouts: whenever a state failed or had been tried too many times recently, the code would simply refuse to run it (e.g., a ball capture) and would instead explore the maze a bit more.
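
A simplified sketch of that timeout bookkeeping (the state names, retry limit, and window length are illustrative):

    import time

    failure_times = {}  # state name -> timestamps of recent failures

    def record_failure(state):
        failure_times.setdefault(state, []).append(time.time())

    def failed_too_often(state, limit=3, window=30.0):
        # Forget failures older than the window, then check the count.
        now = time.time()
        recent = [t for t in failure_times.get(state, []) if now - t < window]
        failure_times[state] = recent
        return len(recent) >= limit

    # In the main loop: refuse to retry a flaky state, explore instead.
    if failed_too_often("capture_ball"):
        state = "explore"
    else:
        state = "capture_ball"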

Fun numbers

642 lines (1443 words) of Python code
654 lines (2609 words) of C++ vision code
1007 lines (2283 words) of C++ Arduino code

Mechanical design

The mechanical design of Lolbot was developed with robustness, ease of manufacturing, and simplicity in mind. Our thought was that if anything failed, it had better be the software. We decided on a four-bar linkage lifting a box as the scoring mechanism and a rubber-band roller as the collection mechanism, because of their simplicity and the fact that they had been tested by many past robots, including putzputz (the 2011 champion). The rubber-band rollers rolled the balls up a ramp and into the collection box. For getting the balls over the purple line, we used another rubber-band roller. The base of Lolbot was circular so we would be less likely to get stuck. To further deal with getting stuck, we mounted bump sensors all along the front and sides of the robot and used rotary encoders to determine whether the wheels had moved a reasonable amount in a given time. The wheels themselves were spiked to give them more friction on the carpet.
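
The encoder-based stuck check boils down to a few lines. As a sketch in Python (the read_ticks helper and the thresholds are hypothetical):

    import time

    WINDOW = 1.0    # seconds between encoder readings (illustrative)
    MIN_TICKS = 20  # minimum expected wheel movement in that window

    def wheels_stuck(read_ticks):
        # read_ticks() is a hypothetical helper that returns the
        # cumulative encoder count reported over the serial link.
        before = read_ticks()
        time.sleep(WINDOW)
        return abs(read_ticks() - before) < MIN_TICKS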


The body of Lolbot is made from acrylic, and the pieces were assembled with T-nuts and L-brackets. The four-bar linkage and box were made from sheet metal, with the box bent into shape. Sensors that we decided on early all had dedicated mounts on the robot. The bump sensors and one IR sensor were attached with a combination of super glue, zip ties, tape, and hot glue; they proved to be a considerable pain in the butt, as they would always get knocked out of place. In fact, by the last day most parts of our robot were super-glued together.


The layout of Lolbot's components was not optimal. We placed both the battery and the laptop in the back of the robot. Luckily, they were very heavy; otherwise, Lolbot would have toppled over every time it tried to score. Our Kinect was mounted inconveniently high on the robot. The original reason for this was so we could see the other side of the field while scoring and use that information to our advantage, but there was not enough time to code that. Thus the Kinect being high was just a nuisance, because it reduced how much the robot could see of its immediate surroundings. The top rubber-band shooter was also annoying: it was above wall height, so we thought it would not interfere with navigation at all. However, we forgot about pillars, and ramming into pillars and getting stuck became a very large problem when Lolbot was trying to navigate. The box was also not ideal. The way it was mounted gave it a tilt, which caused it to get stuck on the walls when it was full of balls; we solved this problem by cutting out a large section of the box's wall. Sometimes balls would fall out of the box as it was being lifted and get stuck under it. We solved this by attaching a piece of sheet metal to the bottom of the box with fishing line, which for the most part prevented fallen balls from getting stuck under the box.


The mechanical design of our robot could have been improved greatly, but we chose not to take the risk of building a new robot. Instead, we stuck with a robot that we knew mostly worked and whose failure modes we all knew.


WORD TO FUTURE MASLAB TEAMS:

Don't play around with silly optical encoders. You can get super sweet rotary encoders from McMaster-Carr for like 70 cents.
