Team Five/Final Paper

From Maslab 2015

Team Five Daniel Mendelson, Garrett Watson, Steven Homberg, and Scott Viteri

The Inner Workings of Our Robot, Nathan Landman Junior

Mechanical:

For the mechanical part of Landy, the central question was how to pick up the blocks. The most reasonable idea seemed to be lifting the blocks by their middle hole. We settled on this method quickly, but it revealed another dilemma, unique to non-spheres: orientation matters. We couldn't just stick a rod into any part of the block; we had to aim for the hole. Since it's difficult for a robot to orient itself relative to a block, we needed a mechanical, passive way to shift the blocks into the right orientation. The first and most effective method we found was to cut a channel into our circular baseplate for the blocks to flow into. When making the channel, we had to be careful that blocks couldn't get stuck inside it and prevent Landy from picking them up. It took a few iterations of the baseplate to get this just right, so that we didn't need to approach at one specific angle: blocks would funnel into the channel faithfully every time, without jamming, over a range of approach angles. To find the right shape, we threw blocks on the ground and pushed the plate forward, trying to simulate every possible way a block could fall so that we'd be safe no matter how the blocks landed. After making the baseplate such that we would always know a block's orientation, the next task was lifting it up. Although many methods were possible, they all essentially simulated a conveyor belt motion, so we bought a conveyor belt. Having ordered the conveyor belt, we needed a material sturdy enough to pick up a block, yet flexible enough that it wouldn't get caught while rotating under the chassis. The answer: zip ties. The solution to most things. Thus the blocks were able to rise. The last mechanical issue was the method for sorting: we needed a way for a block to exit the conveyor and fall into a sorter.
We also needed the hoppers to sit mostly inside the baseplate's footprint so that they wouldn't catch on walls while turning. Neither problem was really too hard: we extended a sheet-metal plate out from the conveyor walls for the block to slide onto, and had the sorter rotate toward the middle of the robot to drop the block straight into the hoppers.

Electrical:

Overall wiring: For first prototypes and for testing out sensors, we made generous use of jumper wires (we bought a little kit on the internet). It was way easier using those than cutting wires to length every time we wanted to make a change. We ended up using a PWM shield on top of the Edison to drive the servos and motor controllers. We also needed many more GPIO pins than the Edison could offer (the gyroscope alone took 4, and the photo-sensors and contact sensors took up a lot of pins between them). To solve this, we bought a mux shield, extending our GPIO count to 48 pins (way more than enough).

Power wiring: For power, we connected the main battery in series with a 10 amp fuse and a kill switch. We connected the motor controllers directly to the main power bus, and powered the Edison's PWM shield off a DC-DC converter. If you're powering servos off of the PWM shield, give the shield its own supply for best performance. All of our sensors were powered off of the Edison.

Sensors: A variety of sensors allow the robot to receive feedback from its environment and behave intelligently. The most important are the analog short-range infrared range sensors, used extensively in wall-following, our main exploration mechanism. Two face the left side, one at the front and one at the back, letting the robot measure its distance from the wall as well as its approximate angle to the wall from the difference between the two sensors' readings. Another analog short-range IR sensor and a digital ultra-short-range IR sensor are mounted on the front to determine when the robot must turn to avoid hitting a wall. Another navigation sensor is the limit-switch bumper on the right side of the bot, which serves as a fallback for the IRs if the robot drifts too close to the wall for accurate readings or if the robot somehow gets stuck. On the bottom of the robot is a strip of 3 IR reflectance sensors that detect when the bot crosses the tape line of a home base, so the robot can tell whether it is in position to score enemy-colored stacks. The final significant sensor on the bot was the color sensor used for sorting. We tried many different sensors to detect the color of a block, including an I2C color sensor from Adafruit, an IR reflectance sensor, and a custom circuit with a photoresistor and a red LED. However, we were unable to make any of them distinguish the color of a block with high reliability, because for each sensor there were reading values that could be produced by either a red or a green block.
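The front/back side-sensor geometry above reduces to a little trigonometry: if the two IRs read different distances, the robot is angled relative to the wall. Here is a minimal sketch of that estimate; the class name, method names, and the 0.15 m sensor spacing are illustrative, not taken from our actual code.

```java
// Sketch of the two-sensor wall distance/angle estimate.
// SENSOR_SPACING_M (front-to-back IR spacing) is an assumed value.
public class WallEstimate {
    static final double SENSOR_SPACING_M = 0.15;

    // Angle to the wall in radians: 0 means driving parallel;
    // positive means the nose is angled toward the wall.
    static double angleToWall(double frontDistM, double backDistM) {
        return Math.atan2(backDistM - frontDistM, SENSOR_SPACING_M);
    }

    // Approximate perpendicular distance: average of the two readings.
    static double distanceToWall(double frontDistM, double backDistM) {
        return (frontDistM + backDistM) / 2.0;
    }

    public static void main(String[] args) {
        System.out.println(angleToWall(0.20, 0.20));    // 0.0 (parallel)
        System.out.println(distanceToWall(0.20, 0.30)); // 0.25
    }
}
```

A wall-follower can then run a simple feedback loop on these two numbers, steering to hold the angle near zero and the distance near a setpoint.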

Software:

Jmraa: We decided Java would be the best option for developing our main software because our team members had the most experience with it, but the mraa library for communicating with the Edison Arduino breakout is written in C and wrapped only for C++. As a result, we needed to wrap the library for Java using JNI in order to actually use the inputs and outputs. The Java wrapper we created, which we named jmraa, mirrors the structure of the C++ library. It consists of a number of Java objects with native methods corresponding to the methods of the analogous C++ objects, as well as a "long" field which in fact holds the pointer to the C++ object used to make the actual calls. The relevant wrapped objects were I2c, Spi, Gpio, and Aio.
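The "long field holds the C++ pointer" idiom looks roughly like the sketch below. This is an illustration of the pattern, not the actual jmraa source: the method names are representative, and the real class would also call System.loadLibrary in a static initializer to load the compiled JNI side (omitted here since it cannot run without that native library).

```java
// Illustrative shape of one jmraa-style wrapper class (not verbatim jmraa code).
class Gpio {
    // Holds the address of the underlying C++ mraa Gpio object.
    // Every native call below forwards to the object this points to.
    private long nativePtr;

    public Gpio(int pin) { nativePtr = init(pin); }

    // Implemented in C via JNI; constructs the C++ object and returns its address.
    private native long init(int pin);

    public native void dir(int direction); // configure pin as input or output
    public native int read();              // read the pin's current value
    public native void write(int value);   // drive the pin high or low
}
```

On the C side, each of these functions casts the stored long back to the C++ pointer and calls the matching mraa method, which is why the Java objects can mirror the C++ API one-to-one.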

Vision: To grab the camera image, we used OpenCV. Once we had the image, we did all the processing "manually," without any OpenCV tools. First we filtered the image, classifying each pixel as "Blue," "Red," "Green," or "Background." From there, we did connected-component analysis (a.k.a. blob labeling) using a standard two-pass algorithm (Wikipedia has a great explanation of it). Once the blobs were labeled, we analyzed them to find the location and dimensions of each blob. Finally, we wrote some wrapper functions that give *very* rough estimates of the distance and heading toward an observed blob.
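For reference, here is a minimal sketch of the standard two-pass labeling algorithm mentioned above, using 4-connectivity and union-find for label equivalences. It operates on a boolean mask for a single color class (e.g. all "Red" pixels); the class and method names are illustrative, not our actual code.

```java
// Two-pass connected-component ("blob") labeling on a binary mask.
class BlobLabel {

    // Returns per-pixel blob ids; 0 means background.
    static int[][] label(boolean[][] mask) {
        int h = mask.length, w = mask[0].length;
        int[][] labels = new int[h][w];
        int[] parent = new int[h * w + 1]; // union-find over provisional labels
        for (int i = 0; i < parent.length; i++) parent[i] = i;
        int next = 1;

        // Pass 1: assign provisional labels from the up/left neighbors,
        // recording an equivalence when both neighbors are labeled.
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                if (!mask[y][x]) continue;
                int up   = (y > 0) ? labels[y - 1][x] : 0;
                int left = (x > 0) ? labels[y][x - 1] : 0;
                if (up == 0 && left == 0) {
                    labels[y][x] = next++;             // start a new blob
                } else if (up != 0 && left != 0) {
                    labels[y][x] = Math.min(up, left);
                    union(parent, up, left);           // two arms touch: merge them
                } else {
                    labels[y][x] = Math.max(up, left); // copy the one labeled neighbor
                }
            }
        }

        // Pass 2: replace each provisional label with its equivalence-class root.
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                if (labels[y][x] != 0) labels[y][x] = find(parent, labels[y][x]);
        return labels;
    }

    static int find(int[] p, int a) {
        while (p[a] != a) a = p[a] = p[p[a]]; // path halving
        return a;
    }

    static void union(int[] p, int a, int b) { p[find(p, a)] = find(p, b); }
}
```

From the labeled image, accumulating the pixel count and bounding box per blob id gives the location and dimensions that the later distance/heading estimates need.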

Architecture: The Java program was built on a state-machine architecture. The basic idea was to read each sensor once per cycle into a class accessible by the robot main. The states would read these sensor values and could transition to a different state depending on either the sensor inputs or simple timeouts. Each cycle, the robot main ran the current state, which set a collection of robot output variables, and the robot main then translated those variables into action on the actual robot. There were certain nuances to this, such as modifying output variables without going through the states, in order to enable certain robot actions at all times, regardless of state.
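The read-sensors, step-state, apply-outputs cycle described above can be sketched like this. All class names, field names, and the 0.10 m threshold are illustrative, not our actual code.

```java
// Sensor snapshot, filled in once per cycle by the robot main.
class Sensors { double frontIrMeters; boolean bumperPressed; }

// Desired outputs; the robot main applies these to the hardware each cycle.
class Outputs { double leftMotor; double rightMotor; } // -1.0 .. 1.0

interface State {
    // Reads the shared sensor snapshot, sets desired outputs,
    // and returns the next state (often itself).
    State step(Sensors s, Outputs out);
}

class WallFollow implements State {
    public State step(Sensors s, Outputs out) {
        if (s.bumperPressed || s.frontIrMeters < 0.10) return new TurnAway();
        out.leftMotor = 0.5;   // drive forward along the wall
        out.rightMotor = 0.5;
        return this;
    }
}

class TurnAway implements State {
    public State step(Sensors s, Outputs out) {
        out.leftMotor = -0.3;  // rotate in place, away from the obstacle
        out.rightMotor = 0.3;
        boolean clear = !s.bumperPressed && s.frontIrMeters >= 0.10;
        return clear ? new WallFollow() : this;
    }
}

// Robot main, once per cycle (pseudocode):
//   readAllSensorsInto(sensors);
//   current = current.step(sensors, outputs);
//   applyToMotors(outputs);
```

Timeouts fit the same shape: a state records its entry time and returns a different state once enough cycles have elapsed, and always-on behaviors can write to Outputs directly in the main loop, bypassing the states.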
