Team One/Final Paper



Introduction

Our overall goal was to learn a lot — none of us had any significant experience with this type of robotics, and we wanted to see what it was about.

Our basic strategy for the game was simply to stack blocks on the floor; the extra effort required to place the blocks on the middle table didn’t seem worth it. Originally, we had a very simple clamp and pick robot. The idea was that the robot would find a block, lower the clamp, and clamp onto the block. We would then lift up the clamp and deposit the block on top of another block, continuing in this manner until we had built a stack.

This turned out to be somewhat of a pain to code: how do you know exactly where to put a block so that it ends up on top of another one? How can you tell what height the block needs to go to? So we decided to modify our strategy somewhat, and stack blocks on an onboard tower on the robot instead. At the end of the game, the tower would open, leaving us with a completed stack. Unfortunately, we decided this a little too late and never got it working.


Software

As newbie programmers with little experience in other languages, we decided to use C++ in order to take full advantage of the example code provided. Our method of programming mostly consisted of testing out the example code and finding ways to combine the separate programs in a somewhat intelligent manner. For example, the example code gave us the logic to read the gyroscope, but we had to develop the logic to modify the drive based on that reading.

We decided to organize the code into two separate threads: one for the drive logic, including reading the gyroscope and the ultrasonic/IR sensors, and one to execute the vision code. This would theoretically allow the sensors to be read frequently enough while still running the robot at an acceptable speed.
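As a rough illustration, the two-thread layout could look like the sketch below. This is a minimal sketch rather than our actual code: the loop bodies are just comments, and the shared targetAngle variable and the 3-minute match timer are assumptions for illustration.

    #include <atomic>
    #include <chrono>
    #include <thread>

    std::atomic<bool> running{true};
    std::atomic<double> targetAngle{0.0};  // written by vision, read by drive

    void driveLoop() {
        while (running) {
            // Read the gyro and ultrasonic/IR sensors, run the PID
            // toward targetAngle, and set the motor powers.
            std::this_thread::sleep_for(std::chrono::milliseconds(10));
        }
    }

    void visionLoop() {
        while (running) {
            // Grab a camera frame, find the blob, update targetAngle.
            std::this_thread::sleep_for(std::chrono::milliseconds(100));
        }
    }

    int main() {
        std::thread drive(driveLoop);
        std::thread vision(visionLoop);
        std::this_thread::sleep_for(std::chrono::minutes(3));  // match length
        running = false;
        drive.join();
        vision.join();
    }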

Drive control:

We went for the simplest drive logic possible: essentially, we would drive forwards until we hit a wall and then turn some arbitrary number of degrees. We used the gyroscope for PID control to ensure the robot drove straight, and the ultrasonic sensor to ensure the robot could sense the walls.
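The drive-straight part of that loop is just a PID on the gyro heading. The sketch below is illustrative only: readGyroAngle() and setMotors() are hypothetical stand-ins for the example code's interfaces, and the gains and base power are untuned placeholders.

    #include <algorithm>

    // Hypothetical stand-ins for the example code's sensor/motor interfaces.
    double readGyroAngle() { return 0.0; }        // stub: heading in degrees
    void setMotors(double left, double right) {}  // stub: power in [-1, 1]

    struct Pid {
        double kp, ki, kd;
        double integral = 0.0, prevError = 0.0;
        double step(double error, double dt) {
            integral += error * dt;
            double derivative = (error - prevError) / dt;
            prevError = error;
            return kp * error + ki * integral + kd * derivative;
        }
    };

    // Called every drive-loop iteration while no wall is in range.
    void driveStraight(Pid& pid, double headingSetpoint, double dt) {
        double error = headingSetpoint - readGyroAngle();
        double turn = pid.step(error, dt);
        const double base = 0.5;  // forward power, untuned
        setMotors(std::clamp(base + turn, -1.0, 1.0),
                  std::clamp(base - turn, -1.0, 1.0));
    }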

This worked fairly well; however, the range of our ultrasonic sensor was a little short, and we kept getting stuck at walls. We think an ideal way to fix this would have been timeouts (i.e., if you've been stuck for x seconds, just back up and turn), but by the end of IAP we didn't quite have enough time to incorporate this.
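The timeout itself would only take a few lines. A minimal sketch, assuming hypothetical isMoving() and backUpAndTurn() helpers:

    #include <chrono>

    using Clock = std::chrono::steady_clock;

    // Hypothetical helpers for illustration.
    bool isMoving() { return true; }  // stub: e.g. compare encoder deltas
    void backUpAndTurn() {}           // stub: reverse, then rotate away

    // Call once per drive-loop iteration.
    void checkStuck(Clock::time_point& lastProgress) {
        if (isMoving()) {
            lastProgress = Clock::now();
        } else if (Clock::now() - lastProgress > std::chrono::seconds(3)) {
            backUpAndTurn();  // stuck too long: unstick ourselves
            lastProgress = Clock::now();
        }
    }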

Vision:

We were using OpenCV to sense the red and green blobs on the field. Essentially, our logic was to capture a frame, downsize it, convert it to HSV, and use blob detection to get the location of the blob. We would then feed the angle the blob made with the robot into our PID controller, causing the robot to turn towards the blob.
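Sketched with OpenCV's C++ API, that pipeline looks roughly like this; the HSV bounds, blob-size threshold, and 60-degree field of view are illustrative values, not our tuned ones.

    #include <opencv2/opencv.hpp>

    int main() {
        cv::VideoCapture cap(0);
        cv::Mat frame, small, hsv, mask;
        while (cap.read(frame)) {
            cv::resize(frame, small, cv::Size(320, 240));  // downsize
            cv::cvtColor(small, hsv, cv::COLOR_BGR2HSV);   // convert to HSV
            cv::inRange(hsv, cv::Scalar(50, 100, 100),     // green bounds
                        cv::Scalar(70, 255, 255), mask);
            cv::Moments m = cv::moments(mask, true);
            if (m.m00 > 500) {              // enough pixels to be a blob
                double cx = m.m10 / m.m00;  // blob centroid, x pixel
                // Pixel offset -> angle, assuming ~60 degree horizontal FOV.
                double angle = (cx - small.cols / 2.0) / small.cols * 60.0;
                // 'angle' is the heading error fed to the drive PID.
                (void)angle;
            }
        }
    }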

Unfortunately, our HSV values for red were a little off, meaning that the robot was detecting slightly red things as “this is the red you were looking for” red and turning towards them. However, with a little more tuning, we’re fairly confident that we can detect and drive towards red blocks.
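One common pitfall worth mentioning (we can't say for sure it was our problem) is that red straddles hue 0 on OpenCV's 0-179 hue scale, so a single inRange() call can't cover it; the usual fix is to combine two hue bands. The bounds below are illustrative:

    #include <opencv2/opencv.hpp>

    // Red needs two hue bands, one at each end of the hue axis.
    cv::Mat redMask(const cv::Mat& hsv) {
        cv::Mat low, high;
        cv::inRange(hsv, cv::Scalar(0, 120, 80),
                    cv::Scalar(10, 255, 255), low);    // reds near hue 0
        cv::inRange(hsv, cv::Scalar(170, 120, 80),
                    cv::Scalar(179, 255, 255), high);  // reds near hue 179
        return low | high;  // union of the two bands
    }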


Hardware

Advice for Next Year:

- Put timeouts in your code; this way you don’t have to worry about unreliable sensors

- Label your wires clearly; shorts are no fun
