Team Three/Final Paper


Team 3 Final Paper: The Tools and Albi Story

Overall Strategy

Our focus was on taking advantage of the rule that allowed the launching of balls: we would collect and launch one ball at a time. The strategy was therefore relatively simple: find a ball, find the yellow wall, launch the ball. Unfortunately, we ran into several problems in the implementation that we were not able to solve by the competition.

First, we were never able to reliably detect when we had loaded a ball into the catapult. Our plan hinged on an LED-photosensor combination, but we couldn't get our hands on a photosensor. Second, we ended up going with one camera because of the sensor point budget, yet we collected balls and fired in opposite directions, which meant we had to precisely calibrate our robot to turn 180 degrees before firing off a ball. Third, the best camera placement for collecting balls is low, so that blue-line filtering is less necessary, while the best placement for finding the yellow wall is high, so that the wall's direction can be detected no matter where the robot is; we were forced into a compromise between the two, which didn't do as good a job at either task. Finally, and most cripplingly, the range of our catapult was never quite enough for it to be a serious delivery method. We achieved a 4-to-5-foot range (enough to clear a wall) in standalone testing, but when we drove the catapult through a motor controller instead of a simple switch, the range dropped to roughly 3 feet, which isn't really enough to gain efficiency. Using relays would have solved the problem, but by the time we realized this it was too late to switch over.

Ultimately, our strategy held promise, but we failed to overcome several technical challenges.


Mechanical Design and Sensors



Software Design

The software matched the simplicity of our hardware design, and used a state machine. There were six states: FindBall, SeekBall, GetBall, FindYellow, SeekYellow, and Shoot.
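
At the top level, this amounted to a loop dispatching on the current state. The sketch below is written in Java for readability; as noted in the conclusions, our Java port fell through, so everything here other than the six state names is an illustrative stub rather than our actual source.

    public class StateMachine {
        enum State { FIND_BALL, SEEK_BALL, GET_BALL, FIND_YELLOW, SEEK_YELLOW, SHOOT }

        private State state = State.FIND_BALL;

        public void run() {
            while (true) {
                switch (state) {
                    case FIND_BALL:   state = findBall();   break;
                    case SEEK_BALL:   state = seekBall();   break;
                    case GET_BALL:    state = getBall();    break;
                    case FIND_YELLOW: state = findYellow(); break;
                    case SEEK_YELLOW: state = seekYellow(); break;
                    case SHOOT:       state = shoot();      break;
                }
            }
        }

        // Stubs; each real handler picks its successor as described below.
        private State findBall()   { return State.GET_BALL; }
        private State seekBall()   { return State.FIND_BALL; }
        private State getBall()    { return State.FIND_YELLOW; }
        private State findYellow() { return State.SHOOT; }
        private State seekYellow() { return State.SHOOT; }
        private State shoot()      { return State.FIND_BALL; }
    }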

The FindBall technique rotated the robot through a full circle, in six steps, looking for a ball at each heading. If a ball was found, the machine transitioned to the GetBall state; otherwise, it transitioned to SeekBall.
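
In sketch form (turnDegrees and cameraSeesBall are hypothetical wrappers over our drivetrain and vision code, not real method names from our source):

    public class FindBallSketch {
        static final int STEPS = 6;

        /** Scan the full circle in six steps; true means a ball was spotted. */
        static boolean scanForBall() {
            for (int i = 0; i < STEPS; i++) {
                if (cameraSeesBall()) {
                    return true;               // -> GetBall
                }
                turnDegrees(360 / STEPS);      // rotate 60 degrees and look again
            }
            return false;                      // full circle, no ball -> SeekBall
        }

        static boolean cameraSeesBall() { return false; }  // vision stub
        static void turnDegrees(int degrees) {}            // drivetrain stub
    }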

GetBall used a PID controller to drive the robot toward the closest detected ball. Unfortunately, we did not manage to acquire a photosensor, which (paired with an LED) would have told us whether or not we had actually captured the ball. Thus, our only option was to assume we had captured it, and transition to FindYellow.
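
A generic PID sketch of the sort we used: the error is the ball's horizontal offset from the image center, and the output is a differential correction applied to the wheel speeds. The gains and every name here are illustrative, not our tuned values.

    public class SteeringPid {
        private final double kp, ki, kd;
        private double integral = 0.0, lastError = 0.0;

        public SteeringPid(double kp, double ki, double kd) {
            this.kp = kp;
            this.ki = ki;
            this.kd = kd;
        }

        /** error: ball's offset from image center (pixels); dt: seconds. */
        public double update(double error, double dt) {
            integral += error * dt;
            double derivative = (error - lastError) / dt;
            lastError = error;
            return kp * error + ki * integral + kd * derivative;
        }

        // Typical use, once per camera frame:
        //   double turn = pid.update(ballX - imageCenterX, dt);
        //   setLeftMotor(base + turn);
        //   setRightMotor(base - turn);
    }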

SeekBall randomly wandered around the field. We attempted to implement wall-following, but were not able to get it working. In its place, we just used six bump sensors with aluminum bars between them, covering the entire front 180 degrees of the robot, which was fairly effective in preventing jams.
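
A sketch of that wander-and-recover behavior (all hardware calls are hypothetical stubs; here positive turnDegrees means a left turn):

    import java.util.Random;

    public class SeekBallSketch {
        static final Random rng = new Random();

        /** One step of the random wander, with bump recovery. */
        static void wanderStep() {
            int hit = firstPressedBumper();          // 0..5 left-to-right, -1 if none
            if (hit >= 0) {
                driveStraight(-0.5);                 // back away from the obstacle
                turnDegrees(hit < 3 ? -60 : 60);     // turn away from the side that hit
            } else if (rng.nextDouble() < 0.1) {
                turnDegrees(rng.nextInt(91) - 45);   // occasional random heading change
            } else {
                driveStraight(0.5);                  // otherwise keep driving forward
            }
        }

        static int firstPressedBumper() { return -1; }  // bump-sensor stub
        static void driveStraight(double speed) {}      // drivetrain stub
        static void turnDegrees(int degrees) {}         // drivetrain stub
    }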

FindYellow copied FindBall, except it looked for the yellow wall instead. It transitioned to Shoot if it found yellow, or to SeekYellow if it did not.

SeekYellow wandered the field randomly, for a set period of 20 seconds, looking for yellow. If it didn't find the wall in that time, it transitioned to Shoot anyway and fired in a random direction. This was, of course, suboptimal, but we didn't want to get stuck holding a ball.
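
The timeout logic, sketched (cameraSeesYellow and wanderStep are stand-ins for our vision and drive code):

    public class SeekYellowSketch {
        static final long TIMEOUT_MS = 20_000;   // the 20-second cap

        /** Wander until yellow is seen or time runs out; either way we shoot next. */
        static boolean seekYellow() {
            long deadline = System.currentTimeMillis() + TIMEOUT_MS;
            while (System.currentTimeMillis() < deadline) {
                if (cameraSeesYellow()) {
                    return true;    // found the wall: Shoot will aim at it
                }
                wanderStep();       // same random wander as SeekBall
            }
            return false;           // timed out: Shoot fires in a random direction
        }

        static boolean cameraSeesYellow() { return false; }  // vision stub
        static void wanderStep() {}                          // drivetrain stub
    }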

Shoot found the yellow wall, turned 180 degrees away from it so that the rear-facing catapult would fire toward it, and then activated the catapult.
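
In outline (every call here is a hypothetical stub; the 180-degree turn is the dead-reckoned one mentioned above, which had to be calibrated by hand):

    public class ShootSketch {
        static void shoot() {
            alignToYellowWall();   // small turns until yellow is centered in the image
            turnDegrees(180);      // catapult faces backward, away from the camera
            fireCatapult();        // pulse the launch motor
        }

        static void alignToYellowWall() {}   // vision/steering stub
        static void turnDegrees(int deg) {}  // dead-reckoned turn, calibrated by hand
        static void fireCatapult() {}        // motor-controller stub
    }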

Overall Performance



Conclusions/suggestions for other teams

A rather unfortunate mistake was that we attempted to rewrite the software interface in Java. We are all aero-astro majors, and Java is the language most of us are most comfortable with. Unfortunately, we were forced to abandon the port, as Java did not play nicely with OpenCV (the vision library) on Ubuntu. We lost two weeks to that fiasco.
