
Team Ten/Final Paper

From Maslab 2011


Members:

Alex Teuffer, Wojciech "Voitek" Musial, Youyou Ma, Arvin Shahbazi Moghaddam

Future Maslab participants: scan to the bottom for tips.

Overall Strategy/Gameplay Goals

The strategy of our team was to keep the robot as simple as possible: we believe that a simple design, with the fewest things that can go wrong, is the best one. We based the robot's design on this principle, but its overall shape was determined by our gameplay goals.
Our goal for this competition was to score over the yellow walls and essentially ignore the possibility of scoring through the mouseholes. We chose this strategy because it let us concentrate on a single high-scoring opportunity: by building a robot adept at throwing balls over the opposing team's wall, we could keep the design relatively simple while still scoring high.

General Preliminary Timeline (none of which was actually followed)

Week 1 - Finish preliminary robot design and prototype with some simple camera analysis
Week 2 - Perfect ball collection mechanism and mapping algorithms
Week 3 - Make it better and more reliable without adding too much complexity to any system.
Week 4 - Fail week. Leaves time to fall back on old, working code and still have something functional.

Design

Our robot consists of an underside guiding system, a middle floor, and a ball 'basket' on the top floor.

The underside guiding system is essentially two aluminum plates placed at an angle to each other so that they funnel balls passing through a one-way gate into the screw lifting mechanism. This had to be planned precisely to make sure the two plates did not disturb the two wheel motors or the mice, which we used as a substitute for encoders. The gate itself was a one-way door made solely of lightweight cardboard and aluminum wire. It was very simple, light, and sturdy, and could even capture balls pressed against walls.

Our robot had two-wheel drive and was further supported by two casters in the front. The motors were held parallel to the ground with zip ties, since the weight of the robot was enough to bend the wheel axles to an angle with the ground.

The bottom floor, made of simple pegboard, held our battery, GPU, uOrc board, and hard drive, and also supported the motors, casters, front gate, underside guiding system, screw mechanism, and top floor. The pegboard was sturdy enough not to bend under the weight of all these parts and had the additional bonus of built-in holes through which we could pass wires.

The mouse encoders were attached to the pegboard by a spring suspension, to which the mice themselves were fixed with epoxy glue.

The screw mechanism was held above the ground by two aluminum supports attached to the back end of the pegboard. This caused us a lot of trouble, because we could not easily adjust the mechanism's height, which was necessary for getting over bumps in the playing field. In the end, we raised the entire robot by wrapping double-sided tape around the wheels, thereby increasing their radius.

The screw mechanism consisted of three polyester shafts, one of which held the screw. The two shafts not attached to the screw supported the ball as the screw pushed it up, until it reached a spring wrapped around the middle shaft. The screw would push the ball through the spring, which then served essentially as a slide into the top-floor 'basket'.

The top-floor basket had an acrylic base, on which stood two aluminum walls that funneled the balls into a servo-operated gate that opened and closed on a hinge made at the Edgerton shop.

Machining

Our robot was mostly put together in the Maslab 5th-floor lab. The Archimedes screw was cut out of a PVC pipe using a saber saw in the Edgerton student machine shop. The lathe there was used to slim down the shaft holding the screw so that it could fit into the gears that turned it to pick up balls. Beyond this, the most intensive piece of machining was the assembly of the hinge that held the second-floor servo gate open, which was also done in the Edgerton student shop.

Software & Strategy

We initially intended to pursue an (overly) ambitious stereoscopic vision effort. We used two cameras mounted vertically, one on top of the other. With this choice, aligning features across the two cameras becomes straightforward -- they share the same horizontal position. The video streams were captured in C using OpenCV's V4L2 capture backend. The following algorithms were run on the raw images to obtain a high-level description of the camera scene (a sketch of one of these stages follows the list):
- Gaussian blur (implemented as 4 separable passes)
- RGB to HSL conversion
- convolution with a Sobel kernel
- hysteresis thresholding
- separation of edges into wall and ball edges based on edge-pixel adjacency
- aggregate image statistics (counts of pixels falling within predefined hue/saturation/luminance regions of interest)
- RANSAC ball fitting (run on the ball edges)
- line parametrization (run on the blue-tape wall edges)
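
To illustrate one of these stages, here is a minimal sketch of RANSAC ball fitting in the spirit of what we ran on the ball edges: repeatedly sample three edge pixels, fit the unique circle through them, and keep the candidate with the most edge pixels near its circumference. The names, tolerance, and iteration count are illustrative, not our actual contest code.

    #include <math.h>
    #include <stdlib.h>

    typedef struct { float x, y; } pt;
    typedef struct { float cx, cy, r; } circle;

    /* Circle through three points via the circumcircle formula. */
    static int circle_from_3(pt a, pt b, pt c, circle *out) {
        float d = 2.0f * (a.x*(b.y - c.y) + b.x*(c.y - a.y) + c.x*(a.y - b.y));
        if (fabsf(d) < 1e-6f) return 0;      /* collinear or repeated points */
        float a2 = a.x*a.x + a.y*a.y;
        float b2 = b.x*b.x + b.y*b.y;
        float c2 = c.x*c.x + c.y*c.y;
        out->cx = (a2*(b.y - c.y) + b2*(c.y - a.y) + c2*(a.y - b.y)) / d;
        out->cy = (a2*(c.x - b.x) + b2*(a.x - c.x) + c2*(b.x - a.x)) / d;
        out->r  = hypotf(a.x - out->cx, a.y - out->cy);
        return 1;
    }

    /* RANSAC: best circle among `iters` random 3-point samples;
     * an edge pixel is an inlier if it lies within `tol` of the rim. */
    circle ransac_ball(const pt *edges, int n, int iters, float tol) {
        circle best = {0, 0, 0};
        int best_inliers = 0;
        for (int it = 0; it < iters && n >= 3; it++) {
            pt a = edges[rand() % n], b = edges[rand() % n], c = edges[rand() % n];
            circle cand;
            if (!circle_from_3(a, b, c, &cand)) continue;
            int inliers = 0;
            for (int i = 0; i < n; i++) {
                float dist = hypotf(edges[i].x - cand.cx, edges[i].y - cand.cy);
                if (fabsf(dist - cand.r) < tol) inliers++;
            }
            if (inliers > best_inliers) { best_inliers = inliers; best = cand; }
        }
        return best;
    }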

For speed, the above algorithms were implemented in NVIDIA CUDA and run on a GeForce 9400M GPU. We managed an overall 12 FPS (capture and processing from both cameras).
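For a flavor of what the CUDA port looked like, below is a hedged sketch of one separable Gaussian blur pass (the horizontal one; the vertical pass swaps the roles of x and y). The kernel radius, image layout, and names are assumptions for illustration; the weights would be normalized on the host and copied in with cudaMemcpyToSymbol.

    #define RADIUS 3
    __constant__ float g_weights[2 * RADIUS + 1];  /* normalized Gaussian taps */

    __global__ void blur_horizontal(const unsigned char *src, unsigned char *dst,
                                    int width, int height) {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= width || y >= height) return;

        float acc = 0.0f;
        for (int k = -RADIUS; k <= RADIUS; k++) {
            int xx = min(max(x + k, 0), width - 1);  /* clamp at image border */
            acc += g_weights[k + RADIUS] * src[y * width + xx];
        }
        dst[y * width + x] = (unsigned char)(acc + 0.5f);
    }

    /* Typical launch: dim3 block(16, 16);
     * dim3 grid((width + 15) / 16, (height + 15) / 16); */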

Camera calibration proved to pose the greatest challenge. The position and orientation of each camera is described by 6 parameters (3 spatial coordinates and 3 angles), so a total of 12 parameters must be accurately measured to reconstruct the absolute position of a feature in 3D. Assuming we only care about the position of 3D features relative to the robot, we need 6 parameters describing the position and orientation of one camera relative to the other. We attempted to record camera data of objects whose true 3D positions we had measured by hand, and used that ground truth to fit these 6 parameters. This approach proved unsuccessful -- the fits did not converge. We then tried a more academic approach outlined here (http://www.peterhillman.org.uk/downloads/whitepapers/calibration.pdf), to no avail. Finally we gave up on the accuracy of the camera readings and eyeballed the parameters...
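For reference, this is why the relative parameters matter so much: once the cameras are calibrated (and, ideally, rectified so their image planes are parallel), depth follows from the standard pinhole relation z = f * b / d, where f is the focal length in pixels, b the vertical baseline between the cameras, and d the vertical disparity of a matched feature. A minimal sketch, assuming an already-rectified vertical rig:

    #include <math.h>

    /* Returns depth in the baseline's units, or -1 on degenerate disparity.
     * y_top/y_bottom are the feature's row coordinates in each camera. */
    float depth_from_vertical_disparity(float y_top, float y_bottom,
                                        float focal_px, float baseline_m) {
        float d = y_bottom - y_top;          /* vertical disparity, pixels */
        if (fabsf(d) < 1e-3f) return -1.0f;  /* feature at (near) infinity */
        return focal_px * baseline_m / d;
    }

Any error in the 6 relative parameters shows up as a disparity bias, which is why our eyeballed calibration produced distance- and angle-dependent reconstruction error.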

Due to the lack of accuracy in distance reconstruction, random noise, and artifacts of the line-fitting algorithm, the features (balls and walls) reconstructed from a pair of camera frames (top and bottom camera) were not reliable enough to be used for robot navigation. I attempted to collect and average features across consecutive capture frames. This worked moderately well provided the robot stayed stationary -- the error in camera reconstruction was distance- and angle-dependent, so any change in the robot's position and orientation, even if measured accurately with the optical mice, would introduce error into the matching of features across consecutive frames and would, generally, screw all our efforts up. A possible solution would involve (a sketch of the frame-to-frame matching step follows this list):
- analytical modelling of the camera error due to imprecise calibration
- brute-force error correction: measure the error for a grid of points in 3D and correct the positions of reconstructed features accordingly.
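To make the frame-to-frame matching concrete, here is a minimal sketch of the reprojection step described above, assuming the odometry delta (dx, dy, dtheta) since the previous frame is known from the mice: a feature seen in the previous robot frame is re-expressed in the current frame before averaging. The struct and names are illustrative.

    #include <math.h>

    typedef struct { float x, y; } feat;

    /* Transform a feature from the previous robot frame into the current
     * one, given the robot's translation (dx, dy) and rotation dtheta
     * since then (both expressed in the previous robot frame). */
    feat reproject(feat f, float dx, float dy, float dtheta) {
        float c = cosf(-dtheta), s = sinf(-dtheta);
        feat out;
        out.x = c * (f.x - dx) - s * (f.y - dy);
        out.y = s * (f.x - dx) + c * (f.y - dy);
        return out;
    }

The catch is exactly the one described above: f.x and f.y come from the camera reconstruction, so calibration error corrupts the reprojection even when the odometry delta itself is accurate.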

Unfortunately, we ran out of time to successfully pursue the 3D vision approach.

After stubbornly trying to make the stereo vision work, we realized, 4 days before impounding, that we needed a different strategy. We equipped the robot with bump sensors and made it bounce between walls, occasionally stopping to look around for balls using the stereo vision code. We ran out of time to make this code more sophisticated and robust.
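A minimal sketch of this kind of fallback behavior as a small state machine: drive until a bump sensor fires, back off, turn a random amount, and periodically stop to scan for balls. The motor and sensor calls are hypothetical placeholders, not the actual uOrc API we used.

    #include <stdlib.h>

    void set_motors(float left, float right);  /* placeholder motor command */
    int  bump_pressed(void);                   /* placeholder bump sensor read */
    int  ball_visible(void);                   /* placeholder vision query */
    void drive_to_ball(void);                  /* placeholder approach routine */
    void sleep_ms(int ms);                     /* placeholder delay */

    enum state { DRIVE, BACKUP, TURN, SCAN };

    void bounce_loop(void) {
        enum state s = DRIVE;
        int timer = 0;
        for (;;) {
            switch (s) {
            case DRIVE:
                set_motors(0.5f, 0.5f);                  /* forward */
                if (bump_pressed()) { s = BACKUP; timer = 20; }
                else if (++timer > 300) { s = SCAN; timer = 0; }
                break;
            case BACKUP:
                set_motors(-0.4f, -0.4f);                /* reverse briefly */
                if (--timer <= 0) { s = TURN; timer = 10 + rand() % 30; }
                break;
            case TURN:
                set_motors(0.4f, -0.4f);                 /* spin a random amount */
                if (--timer <= 0) { s = DRIVE; timer = 0; }
                break;
            case SCAN:
                set_motors(0.0f, 0.0f);                  /* stop and look around */
                if (ball_visible()) drive_to_ball();
                s = DRIVE;
                break;
            }
            sleep_ms(20);                                /* ~50 Hz control loop */
        }
    }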

Odometry

We initially attempted to build and use the proposed optical encoders. Much to our dissatisfaction, the circuit turned out to be unreliable and of very poor resolution. We then decided to use two optical mice mounted on a spring suspension to ensure constant contact with the floor. The two mice measured the robot's position as well as its angular orientation very accurately (random error of +-0.02 rad on the angle and +-4 cm on position over a meter of distance covered). The mice needed very accurate calibration, though, and would often become de-calibrated by minute changes in their relative position. Also, in order to read the raw mouse data we mounted the mice with custom udev rules, which in turn required re-plugging the mice every time the computer booted. Forgetting this caveat caused our robot to go out of control during one of the final competition rounds.
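The two-mouse math is simple for small per-step motions: with the mice mounted a known distance apart, one on each side of the robot's centerline, the difference of their forward displacements gives the rotation and their average gives the translation of the midpoint. A minimal sketch, with an assumed mouse separation and readings already converted from counts to meters:

    #include <math.h>

    #define MOUSE_SEP 0.20f  /* distance between the mice, meters (assumed) */

    typedef struct { float x, y, theta; } pose;

    /* Integrate one pair of mouse displacement readings into the robot's
     * world pose. Mice sit at (0, +MOUSE_SEP/2) and (0, -MOUSE_SEP/2) in
     * the robot frame, x pointing forward; (lx, ly) is the left mouse's
     * displacement and (rx, ry) the right mouse's. */
    void integrate_mice(pose *p, float lx, float ly, float rx, float ry) {
        float dtheta = (rx - lx) / MOUSE_SEP;  /* rotation from x-difference */
        float dx = 0.5f * (lx + rx);           /* midpoint motion, robot frame */
        float dy = 0.5f * (ly + ry);
        /* Rotate the body-frame displacement into the world frame. */
        p->x += dx * cosf(p->theta) - dy * sinf(p->theta);
        p->y += dx * sinf(p->theta) + dy * cosf(p->theta);
        p->theta += dtheta;
    }

The accuracy figures above depended on MOUSE_SEP and the mice's alignment being known exactly, which is why tiny shifts in their relative position kept de-calibrating the system.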


Final Run and thoughts:

So despite our robot's inability to score, during the final contest it somehow managed to attempt a scoring maneuver, even though it had not collected any balls yet. It also worked quite well, despite what we were afraid might happen. However, halfway through our last round, the batteries died and the computer turned off.

So thoughts for the future:
- don't overcomplicate your strategy. I (the coder) prioritized solving the problem of 3D vision, neglecting the very important aspect of robot behaviour until the very last days. Bad, bad, bad idea.
- if your solution involves a custom motherboard, 12 cells of car-engine batteries, and 100% more cameras than allowed, and you're not using the parts most teams use --- YOU'RE DOING SOMETHING WRONG, and you'll suffer hardware problems that no one will help you with. Either take it or die.
- get backup batteries!!
- have behaviour code first!! Even if it's simple and you plan on doing something different -- have something working at all before you attempt a more sophisticated approach. The last thing you want is to be writing your robot behaviour code during the last week before the competition.
- build a quick, sustainable robot in the first week so your coder has something to tinker with. (Try laser cutting; we didn't, but it seems quite efficient.)
- focus on the main aspects of the robot and have it more or less working; only then go for the minute details!
- try not to take another class / full-time job alongside Maslab... your life will turn into a stream of misery.
- never give up!
