Team Nine/Final Paper



Overall Design

We originally intended to build as simple a robot as possible, but long nights of throwing ideas back and forth left us with quite the complicated design. From our chain-linked, vertically mounted drive train to our makeshift lidar mapping system (a.k.a. two IR sensors mounted on servos), our design quickly became more of a task than we were probably able to handle. Still, in the end we ended up with a pretty freaking cool robot.
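
A word on that "lidar": each scan amounted to sweeping a servo through its range and taking one IR distance reading per angle. The sketch below shows the idea under assumed interfaces; set_servo_angle and read_ir_distance are placeholders for the real hardware calls (ours went through the uOrcBoard), and the angular range, step, and settle time are illustrative.

import math
import time

def sweep_scan(set_servo_angle, read_ir_distance,
               start_deg=0, end_deg=180, step_deg=5):
    """One 'lidar' sweep: step the servo across its range and take an
    IR range reading at each angle.  set_servo_angle and
    read_ir_distance stand in for the real hardware calls.
    Returns a list of (angle_radians, distance_metres) points."""
    scan = []
    for deg in range(start_deg, end_deg + 1, step_deg):
        set_servo_angle(deg)
        time.sleep(0.05)  # let the servo settle before reading
        scan.append((math.radians(deg), read_ir_distance()))
    return scan

def to_xy(scan):
    """Project a polar scan into robot-frame x/y points for mapping."""
    return [(d * math.cos(a), d * math.sin(a)) for a, d in scan]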

Materials

Our robot was constructed almost entirely of acrylic and sheet metal, plus the other parts we had ordered, including sprockets, chains, steel axles, and angle stock. You can probably guess that we had a very heavy robot.

Sensors

Beyond the typical uOrcBoard, camera, IR sensors, bump sensors, servos, and two or three motors, our design required an optical mouse for odometry, three additional drive motors, and, to control them, an Arduino. Each of these components had to be mounted, of course; we did so by creating separate sheet-metal cases for the Arduino, EeePC, and uOrcBoard, and by mounting the servos, IR sensors, and camera on a sensor board located underneath the dock where we collected balls. The four drive motors were each located above their respective omni wheel and connected via chains.
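
Since the optical mouse is an unusual sensor choice, here is roughly what using one for odometry involves. This is a sketch rather than our actual code: it assumes a Linux machine (like the EeePC) exposing the mouse as a raw PS/2 stream at /dev/input/mice, and the counts-per-metre constant is a made-up placeholder that has to be calibrated for the real mouse and floor surface.

import struct

# Placeholder calibration: sensor counts per metre of travel.
# Measure this for the actual mouse and surface before trusting it.
COUNTS_PER_METRE = 40000.0

class MouseOdometry:
    """Integrates raw optical-mouse deltas into x/y displacement."""

    def __init__(self, device="/dev/input/mice"):
        self.dev = open(device, "rb")
        self.x = 0.0  # metres, in the mouse's own frame
        self.y = 0.0

    def update(self):
        # Each PS/2 mouse packet is 3 bytes: button flags, then
        # signed dx and dy counts since the last packet.
        _flags, dx, dy = struct.unpack("Bbb", self.dev.read(3))
        self.x += dx / COUNTS_PER_METRE
        self.y += dy / COUNTS_PER_METRE
        return self.x, self.y

One caveat worth knowing: a single mouse measures displacement in the robot's own rotating frame, so you still need a heading estimate (a gyro, or a second mouse) to integrate a world-frame position.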

Vision

In the end, we opted for a very simple approach with our vision code: frames were grabbed as quickly as possible, and each pixel was scanned and sorted by color. Neighboring pixels of the same color class were then grouped into blobs, and blobs were treated as viable objects: red and green ones were balls, blue ones were the lines atop the walls, and yellow ones were goals and scoring walls. This simple approach was relatively fast and could reliably pick up walls and balls; we accelerated it further by down-sampling the image (that is, checking only every other pixel, or every fourth). The biggest issue we had was localization using the camera; the accuracy we achieved using cues like blob size and offset within the image was not high enough to be useful, varying by as much as a meter while the robot sat still.

Our final vision code was dramatically simpler than our original intention. Although we initially implemented several advanced features, including Hough transforms and confidence calculations, they were all ultimately removed in the interest of speed and because they didn't offer any particular advantage in the context of the contest. Red or green meant ball, yellow meant goal, and blue meant the top of a wall – anything more complex was largely irrelevant.

That was one of the major lessons learned during Maslab: however cool or interesting a feature may be, if it doesn't contribute to performing in the contest, it probably isn't a good idea. There's nothing wrong with trying exciting new things, but the time constraints mean that you'll have to pick and choose which exciting things you do – and generally, the ones you want are the ones that will help you win.
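
To give a sense of how little was left, the whole pipeline amounted to something like the sketch below. This is a reconstruction rather than our original code: the RGB thresholds, the STEP down-sampling constant, and the function names are illustrative stand-ins (the real cut-offs were tuned by hand and aren't recorded here).

import numpy as np

STEP = 2  # down-sampling: examine every other pixel in each direction

def classify(pixel):
    """Rough RGB thresholds; the values are illustrative only."""
    r, g, b = int(pixel[0]), int(pixel[1]), int(pixel[2])
    if r > 150 and g < 90 and b < 90:
        return "red"     # red ball
    if g > 150 and r < 90 and b < 90:
        return "green"   # green ball
    if b > 150 and r < 90:
        return "blue"    # line atop a wall
    if r > 150 and g > 150 and b < 90:
        return "yellow"  # goal / scoring wall
    return None

def find_blobs(frame):
    """Group neighbouring same-colour samples into blobs by flood fill.
    frame is an HxWx3 RGB array (e.g. a numpy image)."""
    h, w, _ = frame.shape
    labels = {}  # (row, col) -> colour, over the down-sampled grid
    for y in range(0, h, STEP):
        for x in range(0, w, STEP):
            c = classify(frame[y, x])
            if c:
                labels[(y, x)] = c
    blobs, seen = [], set()
    for start, colour in labels.items():
        if start in seen:
            continue
        seen.add(start)
        stack, blob = [start], []
        while stack:
            y, x = stack.pop()
            blob.append((y, x))
            for nbr in ((y - STEP, x), (y + STEP, x),
                        (y, x - STEP), (y, x + STEP)):
                if labels.get(nbr) == colour and nbr not in seen:
                    seen.add(nbr)
                    stack.append(nbr)
        blobs.append((colour, blob))
    return blobs

The down-sampling falls out almost for free here: scanning every STEP-th pixel cuts the per-frame work by a factor of STEP squared while leaving blobs of any useful size detectable.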

Omniwheel Drive

The same lesson applies to our drive train. We chose to build a four-wheel omni-wheel system for enhanced mobility; four wheels suited the natural symmetry of the robot, and, because of size constraints, the motors were mounted above the wheels and connected by chains. Although we were successful in building this complex drive train, it was plagued with issues – the weight of the robot meant the wheels needed to be mounted on steel shafts to avoid excessive flexing, the low ground clearance meant the robot occasionally dragged on the floor, and the number of motors meant we could not use the uOrcBoard to control the drive system.

As it turns out, the omni-wheel system didn't even offer us much of an advantage; because of the nature of the contest, robots were almost forced to use 'turn and go' navigation, rather than executing the complex paths the omni wheels would allow. Had we used a conventional two-motor drive train, we could have avoided a lot of work with minimal impact on practical mobility, and used the extra time to implement things that could have greatly improved our performance in the contest.
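
For reference, the holonomic mixing an X-configuration omni drive gives you is easy to write down. The sketch below is a generic mixer, not our actual motor code (which ran on the Arduino); the wheel angles assume the usual 45-degree X layout, and the wheel and robot radii are folded into the omega term.

import math

# Assumed geometry: four omni wheels at 45, 135, 225, and 315 degrees
# around the robot centre, each driving along its tangent direction.
WHEEL_ANGLES = [math.radians(a) for a in (45, 135, 225, 315)]

def mix(vx, vy, omega):
    """Map a body-frame velocity command (vx, vy, rotation rate) to
    four wheel speeds.  Units are arbitrary; a real controller would
    scale these into motor PWM values."""
    return [-vx * math.sin(a) + vy * math.cos(a) + omega
            for a in WHEEL_ANGLES]

# 'Turn and go' navigation only ever uses two of the three degrees
# of freedom:
spin = mix(0.0, 0.0, 1.0)   # rotate in place
drive = mix(0.0, 1.0, 0.0)  # drive straight ahead

The irony is visible in the last two lines: 'turn and go' only ever exercises the spin and drive cases, which a plain two-motor differential drive handles with half the motors and none of the chains.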

Modularity

The final point I would like to make about our robot is its modularity, which I would definitely recommend to future Maslab teams when designing. Our robot consisted of three main components that could be separated with ease for quick fixes and changes: the left and right wings, and the center console. This helped us dramatically when working on the robot, and it ended up being one of the proud points of our design.

Final Outcome

Unfortunately, our robot did not come together like we had intended; on the day of the final competition we simply did not have a robot ready for the task at hand. In the preliminary round we lost both three-minute sub-rounds, scoring zero points. Come the second round we stepped our game up a bit, collecting one ball and displacing another. After dropping the one ball we collected, however, we still ended up with zero points, and we ultimately tied for last place. Despite our limited success, my team and I can certainly agree that the experience was one filled with unmatched learning opportunities. It is an entirely hands-on project that will force those who are not experienced in the field to learn new skills they will undoubtedly be able to apply in the future. I recommend it to anyone interested. Voltron Force, out!
