Team Three/Final Paper

From Maslab 2013

Pre-Competition and TL;DR

Before Maslab even started, our team read the final papers from past years’ winners. This is so useful it feels like cheating. Valuable things we learned from past years:

  • Long-range IR sensors apparently suck, so we didn’t even bother to try them
  • Short-range IRs work better oriented vertically (we didn’t verify this)
  • Sometimes bad motors have unusually low impedance when they stall, which may damage motor controllers
  • CAD everything; see what works and what doesn’t before you build the real thing
  • Consider putting capacitors everywhere (although we never had any problems)
  • Circular robots don’t get stuck; for the love of god, build a circular robot

If you’re going to read this for quick take-aways, how about this:

  • Ultrasonic sensors are amazing
  • A large plastic slider works better than any caster, and it’s easier to mount (glue)
  • Wire as neatly as possible from the start. Small investment up front, huge payoff for debugging and reliability
  • Direct drive where you can, because gears are hard and stuff
  • Build up iteratively: avoid long periods of time without a working robot. This applies to both software and mechanical components.
  • Don’t tell anyone, but just P is usually good enough (we never used that fancy I and D business, even though we implemented it in our PID class).
  • Read past years’ papers, because it’s like cheating.

Choosing a Strategy

It’s important to choose a strategy first and build a robot for that particular strategy. Every strategy has some compromises you have to make, so trying to do everything usually isn’t effective.

This game had several scoring mechanisms:

  • 5 points per ball collected
  • 20 points for putting a ball over the wall
  • 15, 30, and 50 points for scoring in the bottom, middle, and top of the tower
  • 200 points for clearing the field

We considered various strategies and decided on scoring on the top tower. This is clearly the strategy with the highest possible score, but also the most technically challenging. Consider the scoring possibilities for the various strategies: with only the 8 balls originally on the field, scoring 4 on the top and 4 in the middle gives 320 points. To achieve this by scoring over the wall requires 16 balls, and scoring it in the middle tower takes 11 balls; both require hitting the button at least once and collecting a lot more balls, which is actually more difficult to pull off. Clearing the field is an unreliable strategy because an unfortunately placed ball or a strategically minded opponent can easily stop you.
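For concreteness, here is the arithmetic behind those comparisons (a quick check, not code we ran):

```python
import math

# Points per ball for each scoring method (from the rules above).
TOP, MIDDLE, WALL = 50, 30, 20

# Our plan: 4 balls in the top and 4 in the middle of the tower.
print(4 * TOP + 4 * MIDDLE)     # 320 points from the 8 starting balls

# Matching 320 points with the other methods takes many more balls:
print(math.ceil(320 / WALL))    # 16 balls over the wall
print(math.ceil(320 / MIDDLE))  # 11 balls in the middle tower
```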

There are of course compromises. Scoring on the top tower means having a tall robot, which could potentially be top-heavy and unstable, and also requires the most precise alignment and scoring mechanism. Because we assumed tower alignment and scoring would be a time-consuming process, we wanted to collect balls for the majority of the round and only score at the end. This strategy demanded a large ball-carrying capacity. But overall these compromises seemed minimal in comparison to the scoring advantage.

Mechanical Design

Brainstorming

Soon after the competition was announced we sketched out some possible designs.

A helical ramp with a central brush (familiar to those of you who did FIRST in 2009) was one of the first designs we considered. We had difficulty figuring out how everything (motors, battery, laptop, etc.) would fit in this design, and we were also concerned about the load on the motor driving the brush with multiple balls. For these reasons we didn’t go with it. However, Team 4 had a great implementation of this design; their robot in general is very well made, so definitely check out their journal/writeup.

A conveyor belt was another design we considered. We had a couple of concerns with a conveyor: one was motor load (if we had a bunch of balls on the conveyor at once), and a 20” long conveyor would also require additional rollers and careful tensioning. Finally, we weren’t sure how well it would handle ball variation. Again, another team built a very solid implementation of this design: Team 6 built a very robust and reliable conveyor belt robot, although their conveyor was only half the length needed to score on the top tower.

Finally we settled on an Archimedes screw design similar to 2012’s winner, Team 7; this mechanism is also often used in rolling ball sculptures. See http://www.youtube.com/watch?v=IUOjhrQ774I for a good example. This design is compact, applies little load to the motor, and can be easily extended to any height, so we went with it.

The Chassis

Our chassis was just acrylic plates sandwiched between two acrylic circles about 3 inches apart. The whole thing was a typical tab-slot construction with t-slots. This worked really well and was easy to disassemble and reassemble. When you design something like this, though, take a few things into consideration:

  • Use asymmetric slots so you don’t put it together wrong
  • Make sure the nut and the bolt will be accessible once you stuff your robot with parts (we had some really annoying t-slots)
  • Make holes for routing wires
  • Note that removing anything from the inside means taking off an entire plate
  • Mount your wheels like Team 4 did; the way we did it made it a huge pain to access the wheels and motors

We eventually cut the tabs down so that the outermost plates could slide out without taking apart the whole chassis.

We also got some nice low-profile button-head cap bolts from McMaster for use on the bottom of the chassis to preserve ground clearance.

The Drive Train

For our drive train we direct drove 3.5” wheels off 29:1 reduction Pololu 37D motors. This gave a stall torque of 110 oz-in with a free-spin speed of 350 rpm, so roughly 65 oz-in and 175 rpm at peak efficiency. On a 3.5” wheel this comes out to about 2.6 feet per second (a very good pace) and an acceleration of 3.5 feet per second squared for a 20 pound robot (hitting top speed in under a second). These specs seemed reasonable to us, and as mentioned earlier, direct drive is the best.
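The speed figure follows directly from wheel circumference; a quick sanity check (illustrative, not code from the robot):

```python
import math

rpm = 175.0          # motor speed near peak efficiency (quoted above)
wheel_dia_in = 3.5   # wheel diameter in inches

# Linear speed = revolutions per second * wheel circumference.
speed_fps = (rpm / 60.0) * (math.pi * wheel_dia_in) / 12.0
print(f"{speed_fps:.2f} ft/s")  # ~2.67 ft/s, matching the ~2.6 ft/s above
```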

On the undriven end of things we had a big plastic slider, about .375”x1.5”x4” (found in a scrap pile somewhere; it looks like it was cut from a 3” solid plastic round), located at the very back of the robot. We were very concerned about tipping when stopping and accelerating, so we put the battery and laptop in the back over the slider. However, with a 20” tall robot this didn’t eliminate tipping on hard stops. But really, when you’re stopping, who cares if the front of your robot catches a little? As long as everything is in order while you’re driving and accelerating you’re in good shape.

The Ball-Collector

The ball collector was very similar to the rubber band roller design, which seems to go back almost a decade. It works. We burned out a motor at one point because it jammed, so we replaced it with a motor similar to our drive motor; that thing is a monster and will never jam. A few things to note: for best results the ball should always press into the roller by at least .25” against the floor or the ramp, and the ball should encounter the roller before it encounters the ramp. Our ramp was made from 1/32” aluminum sheet, which is great to work with: it is easy to bend by hand and doesn’t fatigue, but still holds its shape well. We referred to it as “magic metal.”

The Screw Lift

Our screw was made from ⅛” steel rod. We bent it around PVC pipe, much like http://www.home-workshop.com/spirals.htm, and secured it to an aluminum rod through some holes and with some 20 gauge wire. It seems best to have the ball ride on the outside of the screw with two posts to keep it in place. Take some care to figure out the best positioning and sizing of the screw and posts, and be careful in the intake design to avoid jamming. We direct drove the spiral with a 130:1 Pololu 37D man-motor, which we suspect would have pulled the top of our robot through the bottom given the opportunity.

Hopper and Ramp

Our hopper was huge and could carry 24 balls, not that it ever came to that. It was mounted on top of bent 1/16” aluminum sheet metal columns, and was itself made from a 1/16” water-jet aluminum sheet. This assembly was massively overbuilt, and even though it was relatively light there was still opportunity to reduce weight here (which we didn’t find necessary). The whole thing was sloped and ended with a big ramp. The ramp acted as a gate, and was powered by a single servo. The servo alone wasn’t strong enough so we janked together a little rubber band solution that provided some assistance in returning the ramp to the closed position. It all worked better than expected.

Hardware

Sensors

Although we intended to use a lot of sensors, we ended up using only six ultrasound sensors and the camera. The ultrasound sensors provided accurate data (cm accuracy up to a few meters) but could not handle wide angles. The short-distance IRs provided reliable data only up to ~20 inches, and their response was far from linear; the transfer function also varied from IR sensor to IR sensor, which was annoying to deal with. We modified the staff firmware to handle ultrasounds and ended up being the only team using them. The ultrasound sensors are very good when they are working, but ours tended to stop working periodically, so we had to constantly check that they were working properly.

Note: the ultrasounds use a function called pulseIn to time the length of the incoming pulse, which turns off interrupts. The new Arduino library uses interrupts for servo PWM but not for motor PWM, so to solve the problem we treated servos as motors in our software.

Motors

Electronics

Software

Vision Code

The vision module is written in C++ OpenCV, after quick benchmarking led us to believe that it was faster than Python OpenCV. The interface between C++ and Python works by building a collection of C++ files into a Python library, which allows Python code to call specified individual functions written in C++. The vision code ran on its own core, using the Python multiprocessing module; this was the only case of parallelism in our code.

One problem with the Python-C++ separation was that we never found an easy way to display raw or processed OpenCV video output while simultaneously running the Python robot control code. The HighGUI namedWindow() function creates a window but doesn’t return a handle to it, and destroys the window as soon as you leave scope. Thus, namedWindow() and imshow() need to be called in the same subroutine, which can’t happen on the C++ end because, loosely speaking, namedWindow() is called in vision.setup() and imshow() is called in vision.step(). To display the images on the Python end, we would have needed the Python code to receive and handle a pointer to an image matrix, and we never got around to doing this. We compromised by writing our calibration routine in pure C++, so we could observe video output while calibrating the camera, and relying on console logging of object detections while running the actual control software.

The core of the vision code is completely straightforward HSV-space thresholding and blob detection. (We were planning on trying out other color spaces like Lab or Luv, which apparently improve upon HSV’s property of representing perceived color distance as actual distance in the color space, but we never got around to it.) We kept track of color ranges for the following: red ball, green ball, purple goal-tower-bottom, yellow goal-tower-middle, blue goal-tower-top, cyan button, and blue wall stripe. In the calibration step, we set the six boundary parameters for each color (min/max for each of hue/sat/val) by observing a binary image of thresholded camera output and adjusting to taste. We get contours off of the thresholded image for each color (except wall stripe), with a reasonable minimum contour size cutoff, and keep track of the center coordinates, area, and bottom-point coordinates for each contour blob.

In addition, we keep track of two flags for each blob: isBehindWall and isInGoal. To make each of these determinations, we check a certain rectangular region for the presence of pixels within certain ranges. The region for each blob is a few pixels wide and extends from the center of the blob to the bottom of the image. If this region in the “wall stripe” thresholded binary image contains any white pixels, then the blob is behind a wall; we decide if it is behind a goal by checking the goal tower binary images instead of the wall stripe one.
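Our pipeline was C++, but the same thresholding-and-contours core looks roughly like this in Python OpenCV (a minimal sketch; the color bounds and names are placeholders, not our calibrated values):

```python
import cv2
import numpy as np

# Placeholder HSV bounds for one color (min/max hue, sat, val); the real
# values came from the calibration routine described above.
RED_LO = np.array([0, 120, 80])
RED_HI = np.array([10, 255, 255])
MIN_CONTOUR_AREA = 50  # minimum blob size cutoff

def find_blobs(bgr_frame):
    """Threshold one color in HSV space and return (center, area, bottom)."""
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, RED_LO, RED_HI)
    # OpenCV 4 returns (contours, hierarchy)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    blobs = []
    for c in contours:
        area = cv2.contourArea(c)
        if area < MIN_CONTOUR_AREA:
            continue
        x, y, w, h = cv2.boundingRect(c)
        blobs.append(((x + w // 2, y + h // 2),  # center
                      area,
                      (x + w // 2, y + h)))      # bottom point
    return blobs
```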

The C++ module ultimately transmits a list of tuples, each tuple representing an object and containing its type and associated data. A Python file wraps the C++ module, runs it in its own process, and provides getters for object locations, sizes, and so forth.

Software Architecture

Our software was designed to simplify debugging and development. It was composed of several components (implemented as singleton objects) in a chain, each processing information from the preceding step and providing it to the next. These were as follows:

  • VisionWrapper: C++ vision code running on a separate core. Does object detection.
  • DataCollection: collects input from the camera as well as sensor data from the Arduino. Applies basic processing such as converting sensor readings to distances from the center of the robot.
  • StateEstimation: takes DataCollection input, further processes it, and provides a useful interface to the data, for example getNearestBall and getWallPosition.
  • GoalPlanning: uses the estimated state to choose a high-level goal and a target.
  • MovePlanning: uses the estimated state, goal, and target to pick a low-level state (wall-follow, approach target, etc.), handles PID loops, and sends drive, spinner, helix, or ramp commands to Control.
  • Control: actuates motors.
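A skeletal version of that chain (component names from the list above; the method names and loop are our illustration, not our exact code):

```python
# Minimal stubs of the pipeline, wired together in a chain.

class DataCollection:
    def update(self):
        """Pull camera objects and Arduino sensor readings."""

class StateEstimation:
    def __init__(self, data):
        self.data = data

    def update(self):
        """Aggregate readings behind getNearestBall(), getWallPosition(), etc."""

class GoalPlanning:
    def __init__(self, state):
        self.state = state

    def update(self):
        """Choose a high-level goal (Hunt or Score) and a target."""

class MovePlanning:
    def __init__(self, state, goals):
        self.state, self.goals = state, goals

    def step(self):
        """Run the movement state machine; return actuator commands."""
        return {}

class Control:
    def actuate(self, command):
        """Send drive, spinner, helix, and ramp commands to the hardware."""

data = DataCollection()
state = StateEstimation(data)
goals = GoalPlanning(state)
moves = MovePlanning(state, goals)
control = Control()

for _ in range(10):  # the real loop ran until the round ended
    data.update()
    state.update()
    goals.update()
    control.actuate(moves.step())
```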

Data Collection

This class was mostly boilerplate. Each sensor type was implemented as a class and provided a way to take a sample, retrieve the value and perform basic processing.

State Estimation

In the end, StateEstimation did a lot less work than we expected. It mostly just aggregated distance sensor information and camera information behind a convenient interface. The most important function here was getWallPositionFromTwoSensors. This function would look at the two sensors provided as arguments and output an estimate of the angle and distance of the wall, using trig and the known positions of the sensors (see the sketch below). Built on top of this was a function which chose the most important pair of sensors; we considered the most important sensor to be the one nearest to a wall, with the front sensors weighted more heavily than the others. This function was used heavily by the wall-following algorithm.
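The trig is simple: two parallel range sensors a known distance apart see the wall at different ranges, and the difference gives the wall angle. A minimal sketch under those assumptions (the function name and layout are our illustration, not the real getWallPositionFromTwoSensors):

```python
import math

def wall_from_two_sensors(r1, r2, baseline):
    """Estimate wall angle and distance from two parallel range sensors.

    r1, r2:   readings from two sensors pointing in the same direction
    baseline: distance between the sensors along the robot body

    Assumes the wall is flat and both sensors see the same wall.
    """
    # Positive angle means the wall tilts away toward sensor 2.
    angle = math.atan2(r2 - r1, baseline)
    # Perpendicular distance from the midpoint between the sensors.
    distance = ((r1 + r2) / 2.0) * math.cos(angle)
    return angle, distance

# Example: sensors 10 cm apart reading 30 cm and 34 cm.
print(wall_from_two_sensors(30.0, 34.0, 10.0))  # ~(0.38 rad, ~29.7 cm)
```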

Another useful function was the collision distance function. This would look at all of the range sensors and estimate how far the robot could drive straight before it would hit a wall. The estimate was rough: it assumed walls were always perpendicular to the sensor and that a collision would occur at the edge of the robot nearest the sensor, and it returned the minimum over all the sensors. While this wasn’t particularly accurate, it gave a good idea of how far the bot was from a collision.
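A minimal sketch of that idea (the names and per-sensor edge offsets are our illustration):

```python
def collision_distance(readings, edge_offsets):
    """Rough distance the robot can drive before hitting something.

    readings:     range reading per sensor
    edge_offsets: distance from each sensor to the nearest robot edge

    Assumes each wall is perpendicular to its sensor and takes the
    worst case over all sensors.
    """
    return min(r - edge for r, edge in zip(readings, edge_offsets))

print(collision_distance([50.0, 80.0, 35.0], [10.0, 10.0, 12.0]))  # 23.0
```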

StateEstimation, however, was initially designed with much loftier goals. One simple thing we could have implemented was interpolation between camera frames: the FPS of the camera was capped at 30, but we could have used the IMU or an encoder pair, with their higher sampling frequencies, to provide a more accurate estimate of field object positions between frames. We never got around to it.

Finally, this is where the mapping magic would have happened. Combining sensor data into an estimate of the robot’s position on the field and the configuration of field components would have lived here, or, if it got too big and complicated, would have been pushed off into a Mapping class tightly coupled to StateEstimation.


Goal Planning

This is where the very high-level goal planning happened. At the competition our robot had only two possible goals: Hunt and Score. When the robot starts, or finishes scoring, it enters Hunt mode and hunts balls for the next 45 seconds before switching to Score mode, where it stays until it locates the tower and scores (a sketch of this timer logic is below). This class also handled choosing targets: based on information from StateEstimation and the current goal, the robot would choose the nearest ball or the tower. A next step from here would have been a Hit Button goal and the ability to select the button as a target.
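A sketch of that two-goal timer (the 45-second constant is from above; the class shape and names are our illustration):

```python
import time

HUNT, SCORE = "hunt", "score"
HUNT_SECONDS = 45  # hunt balls this long before trying to score

class GoalPlanning:
    def __init__(self):
        self.goal = HUNT
        self.hunt_started = time.time()

    def update(self, just_scored):
        """Flip between Hunt and Score based on the timer and scoring."""
        if self.goal == HUNT and time.time() - self.hunt_started > HUNT_SECONDS:
            self.goal = SCORE
        elif self.goal == SCORE and just_scored:
            self.goal = HUNT
            self.hunt_started = time.time()
```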


Move Planning

This was the component most akin to the state machines most robots had. However, by separating movements from high-level goals we were able to make it a very simple state machine with only six states and very simple transitions. Each movement was a state in the state machine and implemented the Movement interface, with transition and move functions. First transition() would be called, which would return the next movement object based on the goal and state (often itself), then move(), which would do the fancy controls business. The Movement superclass implemented automatic timeouts that forced a transition to the TimeoutRun movement; subclasses could choose the timeout time. Here are the movements we implemented (a sketch of the Movement superclass follows the list):

  • WallFollow: Clockwise wall-following (the starting and “default” state) using PID on estimated wall position. This used two PID loops: one on the estimated distance to the wall with a fixed target distance, and one on the angle with the goal of being parallel. In order to avoid obstacles and make tight corners, this movement also used the estimated collision distance to scale movement speed, slowing down and even backing up if necessary. This caused some potential steady states in tight corners where the robot was not moving and the distance and angle PID loops were fighting. With some tuning and maybe a few tricks we could probably have fixed this, but we didn’t have time. If you watch the competition you’ll see our robot get stuck on walls a few times; though it may look as though the robot is against a wall, in some cases it is stuck near the wall, too scared to approach it and wanting to neither turn nor move. This state transitions to ApproachTarget if GoalPlanning selects a target, and we had it timing out after 15 seconds.
  • ApproachTarget: Drive towards the target chosen by GoalPlanning (with PID correction on angle), slowing down near the target to minimize the risk of missing it. We attempted to include collision avoidance in here, but we’re not really sure if it worked. If there is no valid target, transition back to WallFollow. If the target is within a certain threshold distance and angle (depending on target type), transition to either CaptureBall or AlignWithTower.
  • CaptureBall: Go through a sequence of movements that involve driving forward and wiggling a bit for a set time, then return to WallFollow.
  • AlignWithTower: Use a PID loop to center the top of the tower. This sequence lasts a set number of seconds (six when we ran it in competition), calling for slow forward movement for the first three seconds and only rotation for the last three. If the camera loses the top of the tower before the time limit, return to WallFollow; else, transition to Score. It’s worth noting that the alignment worked exceptionally well, getting as close as possible and hitting the tower dead center something like 90% of the time at competition.
  • Score: Trigger the servo to lower the scoring ramp, wait a few seconds, spin slowly to the side for a few seconds to let any balls that didn’t fit on the top of the tower fall to the middle, raise the ramp, and return to WallFollow.
  • TimeoutRun: Drive backwards for a second and turn a bit, then return to WallFollow. We could have done something smarter with sensors, but though it typically took a few tries, it was effective in eventually getting the robot unstuck.
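Here is a minimal sketch of the Movement pattern described above (class names from the text; the timeout mechanics and commands are our reconstruction):

```python
import time
from types import SimpleNamespace

class Movement:
    """One state in the move-planning state machine."""
    TIMEOUT = 15.0  # seconds; subclasses override

    def __init__(self):
        self.started = time.time()

    def transition(self, goal, state):
        """Return the next Movement to run (often self); timeouts win."""
        if time.time() - self.started > self.TIMEOUT:
            return TimeoutRun()
        return self.next_movement(goal, state)

    def next_movement(self, goal, state):
        return self

    def move(self, goal, state):
        """Do the fancy controls business; return actuator commands."""
        return {}

class WallFollow(Movement):
    def next_movement(self, goal, state):
        # Hand off to ApproachTarget once GoalPlanning has picked a target.
        return ApproachTarget() if goal.target else self

    def move(self, goal, state):
        # The two PID loops (wall distance and wall angle) would go here.
        return {"left": 0.5, "right": 0.5}

class ApproachTarget(Movement):
    def move(self, goal, state):
        # Drive at the target with PID correction on angle; slow down near it.
        return {"left": 0.6, "right": 0.6}

class TimeoutRun(Movement):
    TIMEOUT = 2.0

    def next_movement(self, goal, state):
        # Back up briefly, then fall back to wall following.
        return self if time.time() - self.started < 1.0 else WallFollow()

    def move(self, goal, state):
        return {"left": -0.5, "right": -0.3}  # back up and turn a bit

# One tick: wall-follow until a target appears.
goal = SimpleNamespace(target=None)
m = WallFollow()
m = m.transition(goal, state=None)
print(type(m).__name__, m.move(goal, state=None))
```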

Control

This basically just abstracted away controlling the motors and servos. It should also have included features to protect the motors from excessive acceleration, but again, we didn’t get around to it.
