Team Six/Final Paper


Introduction

foo

Mechanical/Design

foo

Electronics

foo

Programming

For our robot, we used Python with OpenCV and the provided Arduino library to work with the sensors and motors. In the end, our main program was rather simple: it ran a simple looping search algorithm that used the vision (and IR) data to detect objects, processed the list of detected objects to make a decision, and then set the motors to the corresponding state. Our program did use four threads for multitasking: a timer, a music player, a ball counter (which returns the current number of balls in the container on the robot), and the main thread (everything else). However, our robot only used position-based tracking for ball/wall following, because implementing PI or PID control did not add any efficiency to our program. Furthermore, though mapping was discussed in the initial stages of our program design, it was not implemented.
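A minimal sketch of this loop-and-threads structure is shown below. The helper functions (find_objects, choose_action, set_motors, count_balls) are hypothetical stand-ins for the real sensing and motor code, and the music-player thread is omitted; this is an illustration of the structure, not the team's actual program.

<pre>
import threading
import time

# --- Placeholder helpers (stand-ins for the real robot code) ---
def find_objects():
    return []          # would return detections from the camera and IR sensors

def count_balls():
    return 0           # would read the ball-counting sensor in the container

def choose_action(objects, balls):
    return 'search'    # would pick a motor state from the detected objects

def set_motors(action):
    pass               # would send the chosen speeds to the Arduino

# --- Thread bodies ---
def timer(state, limit_s=180):
    time.sleep(limit_s)
    state['done'] = True           # round is over; the main loop exits

def ball_counter(state):
    while not state['done']:
        state['balls'] = count_balls()
        time.sleep(0.1)

# --- Main thread: the simple looping search ---
def main():
    state = {'done': False, 'balls': 0}
    for target in (timer, ball_counter):
        t = threading.Thread(target=target, args=(state,))
        t.daemon = True
        t.start()

    while not state['done']:
        objects = find_objects()                          # sense
        action = choose_action(objects, state['balls'])   # decide
        set_motors(action)                                # act (position-based, no PID)
        time.sleep(0.05)

if __name__ == '__main__':
    main()
</pre>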

Vision

Using the OpenCV (version 2.1) libraries, we were able to use color thresholds to locate objects. A small problem arose when dealing with the lighting conditions of different rooms and with varying sunlight. Our first solution was to convert all RGB values into HSV values, so that in theory a wide brightness (V) threshold could compensate for different lighting. We attempted to create a "universal" HSV-value range that would work in most settings. However, as it turns out, different types of lights produce slightly different color values for the different map objects. Our team discussed using Hough transforms to detect features in the image, or Haar cascades, to create a more robust vision module. However, we found problems with both methods, including the fact that our vision programmer had never been exposed to either method of feature detection, and that he had a simpler solution.
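For reference, the sketch below shows HSV thresholding of a single frame. It uses the newer cv2 Python API rather than the OpenCV 2.1 bindings described here, and the HSV bounds are made-up example values, not calibrated ones.

<pre>
import cv2
import numpy as np

LOWER = np.array([0, 120, 70])     # example lower (H, S, V) bound
UPPER = np.array([10, 255, 255])   # example upper (H, S, V) bound

cap = cv2.VideoCapture(0)
ret, frame = cap.read()
if ret:
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)   # convert BGR -> HSV
    mask = cv2.inRange(hsv, LOWER, UPPER)          # 255 where the pixel is in range
    # Centroid of the matching pixels gives a rough target position.
    m = cv2.moments(mask, binaryImage=True)
    if m['m00'] > 0:
        cx, cy = m['m10'] / m['m00'], m['m01'] / m['m00']
        print('object centered at', cx, cy)
cap.release()
</pre>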

Instead of changing our vision code, we made a program to calibrate our vision using a simple GUI, and stored the data to a file for future use. This code displays a live feed from the camera with a small circle in the center. One lines up the object with the circle and, after a key is pressed, the HSV values of the pixels within the circle are analyzed for their minimum and maximum values. These values are stored to a file so that the vision code can load them later.
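A rough sketch of this calibration idea follows, again using the newer cv2 API; the key bindings, circle radius, and output file name are arbitrary choices for illustration, not the team's actual tool.

<pre>
import cv2
import numpy as np

RADIUS = 10
cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    h, w = frame.shape[:2]
    center = (w // 2, h // 2)
    display = frame.copy()
    cv2.circle(display, center, RADIUS, (0, 255, 0), 2)   # aiming circle
    cv2.imshow('calibrate', display)

    key = cv2.waitKey(30) & 0xFF
    if key == ord('c'):
        # Sample only the pixels inside the circle.
        mask = np.zeros((h, w), dtype=np.uint8)
        cv2.circle(mask, center, RADIUS, 255, -1)
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        pixels = hsv[mask == 255]                # N x 3 array of H, S, V samples
        lo, hi = pixels.min(axis=0), pixels.max(axis=0)
        np.savetxt('thresholds.txt', np.vstack([lo, hi]), fmt='%d')
        print('saved HSV range', lo, hi)
    elif key == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
</pre>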

Omni Drive

foo
