Team Six/Final Paper

Introduction

foo

Mechanical/Design

foo

Electronics

foo

Programming

For our robot, we used Python with OpenCV and the provided Arduino library to interface with the sensors and motors. In the end, our main program was fairly simple: a looping search algorithm that used vision (and IR) to detect objects, processed the resulting list of objects to make a decision, and set the motors to the corresponding state. The program ran four threads for multitasking: a timer, a music player, a ball counter (which returns the current number of balls in the container on the robot), and the main thread (everything else). Our robot used only position-based tracking for ball and wall following, because implementing PI or PID control did not make our program any more effective. Mapping was discussed in the initial stages of our program design, but it was never implemented.
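A minimal sketch of this structure follows. It is not our original source: the get_objects and set_motors callbacks are hypothetical stand-ins for the vision/IR processing and the Arduino motor commands, and the 180-second match length is illustrative.

```python
import threading
import time

class MatchTimer(threading.Thread):
    """Background timer thread: sets a flag when the match is over."""
    def __init__(self, duration):
        threading.Thread.__init__(self)
        self.daemon = True
        self.expired = False
        self.duration = duration

    def run(self):
        time.sleep(self.duration)
        self.expired = True

def main_loop(get_objects, set_motors, match_length=180.0):
    """Sense, decide, act: one pass per camera frame until time runs out."""
    timer = MatchTimer(match_length)
    timer.start()
    while not timer.expired:
        objects = get_objects()                    # vision + IR detections
        if objects.get("ball"):
            set_motors("chase", objects["ball"])   # drive toward the nearest ball
        elif objects.get("wall"):
            set_motors("avoid", objects["wall"])   # turn away from the wall
        else:
            set_motors("spin", None)               # rotate in place to search
```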

Vision

Using the OpenCV (version 2.1) libraries, we located objects with color thresholds. A problem arose with the lighting conditions of different rooms and varying sunlight. Our first solution was to convert all RGB values to HSV values, so that in theory a wide brightness (V) threshold alone could compensate for different lighting. We attempted to create a "universal" HSV range that would work in most settings; as it turns out, however, under varying types of lights the map objects produce slightly different color values. Our team discussed using Hough transforms to detect features in the image, or Haar cascades, to build a more robust vision module. We found problems with both methods: our vision programmer had never been exposed to either form of feature detection, and he had a simpler solution.
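The thresholding step itself is short. The sketch below uses the modern cv2 interface for readability (our code ran on OpenCV 2.1's older API), and the red-ball threshold values are illustrative only; note the deliberately wide V range.

```python
import cv2
import numpy as np

def hsv_mask(frame_bgr, lo, hi):
    """Convert a BGR frame to HSV and keep only pixels inside the threshold."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    return cv2.inRange(hsv, np.array(lo, np.uint8), np.array(hi, np.uint8))

frame = np.zeros((240, 320, 3), np.uint8)             # stand-in for a camera frame
mask = hsv_mask(frame, (0, 120, 40), (10, 255, 255))  # wide V range for lighting
```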

Instead of changing our vision code, we wrote a program to calibrate our vision using a simple GUI, storing the data to a file for future use. The program displays a live feed from the camera with a small circle drawn in the center. The object is lined up within the circle, and the pixels inside the circle are analyzed for their minimum and maximum HSV values. These values are saved to a file for future reference, or until they are updated again. This way, calibrating the camera takes very little time, and no one has to edit the HSV value files by hand.
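The core of such a calibration pass can be sketched as below. This is an assumption-laden reconstruction, not our GUI code: the circle radius, file name, and file format are hypothetical, and it again uses the modern cv2 interface.

```python
import cv2
import numpy as np

def calibrate(frame_bgr, radius=20, path="hsv_threshold.txt"):
    """Sample HSV pixels inside a centered circle; store per-channel min/max."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    h, w = hsv.shape[:2]
    mask = np.zeros((h, w), np.uint8)
    cv2.circle(mask, (w // 2, h // 2), radius, 255, -1)  # filled sampling circle
    samples = hsv[mask == 255]                           # N x 3 array of HSV pixels
    lo, hi = samples.min(axis=0), samples.max(axis=0)
    np.savetxt(path, np.vstack([lo, hi]), fmt="%d")      # persist for later runs
    return lo, hi
```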

The main object detection program is called "rectangulate.py": it takes an image, an HSV threshold, and a blur factor (for a Gaussian blur) and detects objects. It converts the image to a single-channel (grayscale) mask, filtering out any colors outside the thresholds, then finds connected pixels and draws contour lines around those "blobs". Because of the uneven color of the ground, a single pixel or a small group of pixels could pass the threshold at random points in the image, so those small contours were filtered out. Finally, the program calculated bounding rectangles around the largest contours and returned those objects. "rectangulate.py" was used by other helper classes: ball.py for finding the largest ball or group of balls, wall.py for finding a yellow wall, and button.py for finding the teal button, though this last one was not used in the final program.
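A compact sketch of that pipeline is shown below. The minimum contour area is a hypothetical noise floor, the function body uses the modern cv2 API rather than our OpenCV 2.1 code, and the blur kernel size must be odd.

```python
import cv2
import numpy as np

MIN_AREA = 50  # hypothetical noise floor, in pixels, for filtering floor speckle

def rectangulate(frame_bgr, lo, hi, blur=5):
    """Blur, threshold to a single-channel mask, and box the surviving blobs."""
    blurred = cv2.GaussianBlur(frame_bgr, (blur, blur), 0)
    hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lo, np.uint8), np.array(hi, np.uint8))
    res = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = res[-2]  # works across OpenCV versions returning 2 or 3 values
    big = [c for c in contours if cv2.contourArea(c) > MIN_AREA]
    # Largest blobs first; each box is an (x, y, width, height) tuple
    return [cv2.boundingRect(c)
            for c in sorted(big, key=cv2.contourArea, reverse=True)]
```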

Omni Drive

foo
