Team Ten/Final Paper - Maslab 2011


Revision as of 05:46, 31 January 2011

Members:

Alex Teuffer, Voitek (Wojciech) Musial, Youyou Ma, Arvin Shahbazi Moghaddam


Overall Strategy/Gameplay Goals

Our team's strategy was to keep the robot as simple as possible: we believe a simple design with the fewest things that can go wrong is the best one. We based the robot's design on this principle, but its overall shape was determined by our gameplay goals.
Our goal for this competition was to score over the yellow walls and essentially ignore the possibility of scoring through the mouseholes. We chose this strategy because it let us concentrate on a single high-scoring opportunity: by building a robot adept at throwing balls over the opposing team's wall, we could keep the design relatively simple while still scoring high.

General Preliminary Timeline (none of it was actually followed)

Week 1 - Finish preliminary robot design and prototype with some simple camera analysis
Week 2 - Perfect ball collection mechanism and mapping algorithms
Week 3 - Make it better and more reliable without adding too much complexity to any system.
Week 4 - Fail week: leaves time to fall back to old, working code and still have something functional.

Design

Our robot consists of an underside guiding system, a middle floor, and a ball 'basket' on the top floor.

The underside guiding system is essentially two aluminum plates angled toward each other so that they funnel balls passing through a one-way gate, made of lightweight cardboard, into the screw lifting mechanism. This had to be planned precisely to ensure the two plates did not interfere with the two wheel motors or with the mice we used as a substitute for encoders. The gate itself was made only of cardboard and aluminum wire; it was simple, light, and sturdy, and could even capture balls pinned against walls.

Our robot had two-wheel drive and was further supported by two casters in the front. The motors were held parallel to the ground with zip ties, since the weight of the robot was enough to bend the wheel axles at an angle to the ground.

The bottom floor, which held our battery, GPU, uOrc board, and hard drive, and which also supported the motors, casters, front gate, underside guiding system, screw mechanism, and top floor, was made of simple pegboard. The pegboard was sturdy enough not to bend under the weight of all these parts and had the additional bonus of built-in holes through which we could pass wires.

The mouse encoders were attached to the pegboard with a spring suspension, which held the mice with epoxy glue.

The screw mechanism was held above the ground by two aluminum supports attached to the back end of the pegboard. This caused us a lot of trouble because we could not easily adjust the mechanism's height, which was necessary for getting over bumps in the playing field. In the end, we raised the entire robot by wrapping double-sided tape around the wheels, thereby increasing their radius.

The screw mechanism consisted of three polyester shafts, one of which held the screw. The two shafts not attached to the screw supported each ball as the screw pushed it up, until the ball reached a spring wrapped around the middle shaft. The screw pushed the ball through the spring, which then served essentially as a slide into the top-floor 'basket'.

The top-floor basket had an acrylic base on which two aluminum walls funneled the balls into a servo-operated gate that opened and closed on a hinge made at Edgerton.

Machining

Our robot was mostly put together in the Maslab 5th-floor lab. The Archimedes screw was cut out of a PVC pipe with a saber saw in the Edgerton student machine shop. The lathe in the Edgerton shop was used to slim down the shaft that held the screw so it could fit into the gears that turn it to pick up balls. Beyond that, the most intensive machining task was assembling the hinge that held the second-floor servo gate open; this was also done in the Edgerton student shop.

Software & Strategy

We initially intended to pursue an (overly) ambitious software effort: stereoscopic vision. We used two cameras mounted vertically, one on top of the other. With this choice, aligning features across the two cameras becomes straightforward -- they have the same horizontal position. The video streams were captured in C using OpenCV's V4L2 driver. The following algorithms were run on the raw images to obtain a high-level description of the camera scene:
- Gaussian blur (implemented as 4 separable passes)
- RGB to HSL conversion
- convolution with a Sobel kernel
- hysteresis thresholding
- separation of edges into wall and ball edges based on edge-pixel adjacency
- aggregate image statistics (count of pixels falling within predefined hue/sat/lum regions of interest)
- RANSAC ball fitting (run on the ball edges)
- line parametrization (run on the blue-tape wall edges)
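With the cameras stacked vertically, a matched feature appears in the same image column in both views, and its distance follows from the vertical disparity via the standard pinhole relation Z = f·B/d. A minimal sketch of this step (the focal length, baseline, and pixel values below are illustrative, not the team's calibration):

```python
def stereo_depth(v_top, v_bottom, focal_px, baseline_m):
    """Depth from vertical disparity, for two vertically stacked cameras.

    v_top, v_bottom: vertical pixel coordinate of the same feature in the
    top and bottom images (same column, by construction of the rig).
    focal_px: focal length in pixels; baseline_m: camera separation in metres.
    """
    disparity = v_bottom - v_top              # pixels
    if disparity <= 0:
        raise ValueError("feature must have positive vertical disparity")
    return focal_px * baseline_m / disparity  # metres

# Illustrative numbers: f = 600 px, baseline = 10 cm, disparity = 12 px
# gives a depth of roughly 5 m.
z = stereo_depth(100, 112, 600.0, 0.10)
```

The same relation also shows why calibration matters so much: any error in the assumed focal length or baseline scales directly into the reconstructed distance.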

For speed, the above algorithms were implemented in NVIDIA CUDA and run on a GeForce 9400M GPU. We managed to achieve 12 FPS overall (capture and processing from both cameras).
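The Gaussian blur in the list above is separable: a 2D blur factors into a horizontal and a vertical 1D pass, which cuts the per-pixel work from O(r²) to O(r) and maps well onto a GPU. A pure-Python sketch of the idea (edge handling by clamping is my choice here; the team's actual CUDA kernels are not reproduced in this write-up):

```python
import math

def gaussian_kernel(sigma, radius):
    """Normalized 1D Gaussian kernel of width 2*radius + 1."""
    k = [math.exp(-x * x / (2 * sigma * sigma))
         for x in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def blur_1d(row, kernel):
    """Convolve one row with the kernel, clamping indices at the edges."""
    r = len(kernel) // 2
    out = []
    for i in range(len(row)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - r, 0), len(row) - 1)
            acc += w * row[idx]
        out.append(acc)
    return out

def separable_blur(img, sigma=1.5, radius=3):
    """2D Gaussian blur as two 1D passes: all rows, then all columns."""
    k = gaussian_kernel(sigma, radius)
    rows = [blur_1d(row, k) for row in img]
    cols = [blur_1d(col, k) for col in map(list, zip(*rows))]
    return list(map(list, zip(*cols)))  # transpose back
```

On a GPU each output pixel of each pass is independent, so both passes parallelize trivially across threads.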

Camera calibration proved to be the greatest challenge. The position and orientation of each camera is described by 6 parameters (3 spatial coordinates and 3 angles), so a total of 12 parameters must be measured accurately to reconstruct the absolute position of a feature in 3D. Since we only cared about feature positions relative to the robot, we needed just the 6 parameters describing the position and orientation of one camera relative to the other. We attempted to record camera data for objects whose true 3D positions we had measured by hand, and to use that ground truth to fit these parameters. The approach proved unsuccessful -- the fits did not converge. We then tried a more academic approach, outlined at http://www.peterhillman.org.uk/downloads/whitepapers/calibration.pdf, to no avail. Finally we gave up on accurate camera readings and eyeballed the parameters...

Due to inaccurate distance reconstruction, random noise, and artifacts of the line-fitting algorithm, the features (balls and walls) reconstructed from a pair of camera frames (top and bottom) were not reliable enough for robot navigation. I attempted to collect and average features across consecutive capture frames. This worked moderately well as long as the robot sat stationary: the reconstruction error was distance- and angle-dependent, so any change in the robot's position or orientation, even one measured accurately with the optical mice, introduced error into the matching of features across consecutive frames and generally ruined our efforts. A possible solution would involve:
- analytical modelling of the camera error due to imprecise calibration
- brute-force error correction: measure the error for a grid of points in 3D and correct the positions of reconstructed features accordingly.
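The second idea could look roughly like this, assuming the error varies smoothly in space so that a nearest-grid-point lookup is a reasonable correction (a sketch only; all names are hypothetical, and a real version would interpolate between grid points):

```python
# Brute-force error correction sketch: measure reconstruction error at a
# grid of known 3D points, then shift each reconstructed feature by the
# error of its nearest grid point.

def build_error_table(grid_truth, grid_measured):
    """grid_truth/grid_measured: parallel lists of (x, y, z) tuples for
    the same physical points, measured by hand vs. by the cameras."""
    return [(m, tuple(t_i - m_i for t_i, m_i in zip(t, m)))
            for t, m in zip(grid_truth, grid_measured)]

def correct(point, error_table):
    """Correct a reconstructed point by its nearest grid point's error."""
    def dist2(a, b):
        return sum((a_i - b_i) ** 2 for a_i, b_i in zip(a, b))
    anchor, err = min(error_table, key=lambda entry: dist2(entry[0], point))
    return tuple(p_i + e_i for p_i, e_i in zip(point, err))
```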

Unfortunately, we ran out of time to successfully pursue the 3D vision approach.

Having stubbornly tried to make stereo vision work, we realized 4 days before impounding that we needed a different strategy. We equipped the robot with bump sensors and made it bounce between walls, occasionally looking around for balls using the stereo vision code. We ran out of time to make this code more sophisticated and robust.
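The fallback behaviour can be sketched as a small state machine. The states and transitions below are my reconstruction of the description above, not the team's actual code, and the sensor/motor interfaces are omitted:

```python
# "Bounce between walls" behaviour as a tiny state machine.
# next_state is called once per control tick with the latest sensor inputs.

def next_state(state, bumped, ball_visible):
    """Return the next behaviour state given the current sensor readings."""
    if state == "DRIVE":
        if bumped:
            return "BACK_UP"    # hit a wall: reverse away from it
        if ball_visible:
            return "APPROACH"   # stereo code spotted a ball
        return "DRIVE"
    if state == "BACK_UP":
        return "TURN"           # after reversing, turn to a new heading
    if state == "TURN":
        return "SCAN"           # occasionally look around for balls
    if state == "SCAN":
        return "APPROACH" if ball_visible else "DRIVE"
    if state == "APPROACH":
        return "APPROACH" if ball_visible else "DRIVE"
    raise ValueError("unknown state: " + state)
```

The appeal of this structure is that each state maps to one simple motor command, so the robot degrades gracefully when the vision data is unreliable.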

Odometry

We initially attempted to build and use the proposed optical encoders. Much to our dissatisfaction, the circuit turned out to be unreliable and of very poor resolution. We then decided to use two optical mice mounted on a spring suspension to ensure constant contact with the floor. Together the two mice measured the robot's position and angular orientation very accurately (random error of +-0.02 rad on the angle and +-4 cm in position per meter of distance covered). The mice did, however, need very accurate calibration, and they would often become de-calibrated by minute changes in their relative position. Also, to read the raw mouse data we mounted the mice with custom udev rules, which in turn required re-plugging the mice every time the computer booted. Forgetting this caveat caused our robot to go out of control during one of the final competition rounds.
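For small motions, two mouse readings over-determine the robot's rigid-body motion, so its rotation and translation can be recovered in closed form. A sketch of the small-angle math (the mounting geometry and exact fitting method are illustrative; the team's actual code is not reproduced here):

```python
# Two-mouse odometry, small-angle rigid-body model:
#   displacement of a point at r is  d(r) = T + dtheta * perp(r),
# where perp(x, y) = (-y, x), T is the robot translation, and dtheta
# the rotation per timestep. Two mice give four equations for the
# three unknowns (dtheta, Tx, Ty).

def odometry_update(d1, d2, r1, r2):
    """Recover (dtheta, (Tx, Ty)) from two mouse displacement readings.

    d1, d2: (dx, dy) displacements reported by each mouse, robot frame.
    r1, r2: fixed mounting positions of the mice in the robot frame.
    """
    bx, by = r1[0] - r2[0], r1[1] - r2[1]      # baseline between the mice
    # d1 - d2 = dtheta * perp(r1 - r2): project onto perp(baseline)
    ddx, ddy = d1[0] - d2[0], d1[1] - d2[1]
    dtheta = (ddx * -by + ddy * bx) / (bx * bx + by * by)
    # average displacement = T + dtheta * perp(centroid of the mice)
    cx, cy = (r1[0] + r2[0]) / 2.0, (r1[1] + r2[1]) / 2.0
    ax, ay = (d1[0] + d2[0]) / 2.0, (d1[1] + d2[1]) / 2.0
    return dtheta, (ax + dtheta * cy, ay - dtheta * cx)
```

This also makes the calibration sensitivity concrete: dtheta is divided by the squared mouse baseline, so a small error in the assumed mounting positions directly scales the recovered heading.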


Final Run and Thoughts

Despite our robot's inability to score in testing, during the final contest it somehow managed to attempt a score, even though it had no balls on board yet. It also ran quite well despite everything we were afraid might go wrong. However, halfway through our last round the batteries died and the motherboard shut off. Thoughts for the future:
- Don't overcomplicate your strategy. I (the coder) prioritized solving the problem of 3D vision and neglected the very important aspect of robot behaviour until the very last days. Bad, bad, bad idea.
- Get backup batteries!!
- Have behavior code first, so you won't be left writing all of it in the last week.
- Even if you are the coder, understanding how the code behaves with the hardware is probably more important (since that's how you understand what you're coding).
- Build a quick, sustainable robot in the first week so your coder has something to tinker with. (Try laser cutting; we didn't, but it seems quite efficient.)
- Focus on the main aspects of the robot, get them working, then go for the minute details!
- Try not to take another class alongside Maslab... life will be difficult.
- Never give up!
