Team Six/Final Paper
Janky, the cocoabot, won the MASLAB 2015 competition. It had a very simple design, and its code was very modular.
The whole code can be found in our GitHub repository: 
Language and Libraries
The whole code was written in C++11, with the exception of the code running on our Arduino, which was responsible for communicating with the color sensor and was written in C.
We first decided to use C++ because most of the example code for the sensors was in C++ and no one in the group felt inclined to use JNI to interface with the hardware. Later, we moved to C++11 to make use of the thread support library ("<thread>") and the date and time utilities ("<chrono>").
We used OpenCV for our computer vision code. We also used standard C++ and POSIX headers: "string", "unistd", "cmath", "cstdio", "fstream", "iostream", and "signal.h", besides container headers such as "vector".
We decided that modularity and easy integration were very important for our design. Because of this, we made three decisions about the code that later helped us considerably:
- Multithreaded code
The code would be multithreaded, so that it could be debugged more easily and timeouts would be simpler to implement.
- Config file 
We would have a main config file holding all the configuration of the robot: the pin numbers used by the sensors, proportional constants and gains, definitions of what was and was not present on the robot, and so on. There was a considerable amount of macro preprocessing in the code for convenience: by changing a 0 to a 1, we could add new members to our sensors module or change how they were updated.
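A central config header of this kind might look like the following minimal sketch. All names and values here are illustrative, not the team's actual file; the point is that flipping one macro recompiles different members into the sensors module.

```cpp
// Hypothetical central config header. Flipping HAS_GYRO from 0 to 1 would
// compile the gyroscope member (and its update) into the sensors module.
#define HAS_GYRO 0           // 1 if the gyroscope is mounted
#define LEFT_IR_PIN 2        // analog pin of the left short-range IR (made up)
#define ANGLE_KP 1.8         // proportional gain for the angle controller (made up)
#define MAX_MOTOR_POWER 0.6  // safety cap on normalized motor power (made up)

struct Sensors {
#if HAS_GYRO
    double gyro_rate = 0.0;  // only exists when the gyro is present
#endif
    double left_ir = 0.0;

    void update() {
        // The real hardware reads would go here; stubbed for illustration.
#if HAS_GYRO
        gyro_rate = 0.0;
#endif
    }
};
```

The same macros then feed gains and pin numbers to every other module, so retuning the robot meant editing one file.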
- Single main function with arguments
From early in development, we tried to avoid writing code that could not easily be reused. Instead, we had a single main function in the source that took parameters, and those parameters would redirect the flow of the main function to run the tests. This ensured that all the code could be compiled together, and made it easy to implement tests that used multiple modules of the robot.
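The idea can be sketched as follows. The mode names and test functions are hypothetical; the real program's `main()` would simply forward `argc`/`argv` to the dispatcher.

```cpp
// Minimal sketch of a single entry point that routes to tests based on
// its first argument, so every test compiles together with the code base.
#include <cstdio>
#include <cstring>

int testSensors()    { std::puts("sensors test");     return 0; }
int testWallFollow() { std::puts("wall-follow test"); return 0; }
int runCompetition() { std::puts("competition run");  return 0; }

// Invoked as: ./robot sensors | ./robot wallfollow | ./robot
int dispatch(int argc, char** argv) {
    if (argc > 1 && std::strcmp(argv[1], "sensors") == 0)
        return testSensors();
    if (argc > 1 && std::strcmp(argv[1], "wallfollow") == 0)
        return testWallFollow();
    return runCompetition();  // default: full competition behavior
}
```

`main(int argc, char** argv)` would just `return dispatch(argc, argv);`, so adding a new integration test is one more branch, not a new executable.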
Everything in the code had unit tests to show that it worked on the robot, as well as simple integration tests to show that it didn't conflict with other parts of the code. This made it easy to debug when something went wrong.
From the beginning of development we had a Makefile that compiled all the code together, so that we wouldn't have to waste time later figuring out which .o files were missing for our code to compile.
We had 10 threads running on the robot: a high-level decision thread, a sensors thread, a motor control thread, a servo control thread, an actuator thread, a logging thread, an image processing thread, two threads listening for encoder interrupts, and a process running on an Arduino for the color sensor.
Data was shared among the threads through shared memory (public class member variables, getters and setters).
Sensors thread
Our sensors thread (or sensors module) updated the data from all our sensors and applied a simple filter to the acquired data. Its update rate was considerably higher than that of any other thread, since all of them depended on the sensor data. This thread was also responsible for updating the elapsed time (in microseconds), which the rest of the robot used to calculate speeds and task durations.
All the sensors were private members of the main class of this thread, and the other threads could only access the sensor data by reading this class's public variables. A pointer to the sensors module object was passed in the constructor of many of the other classes.
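A minimal sketch of this sharing pattern is below. The names are illustrative, and the `std::atomic` wrappers are my addition (the original text only says public variables were read directly); the structure matches the description: the sensors module owns the hardware privately and publishes filtered values, while other modules receive a pointer at construction.

```cpp
// Sensors module publishes filtered readings; consumers hold a pointer to it.
#include <atomic>

class SensorsModule {
public:
    // Public snapshot of the latest filtered readings (atomics are an
    // assumption for safe cross-thread reads, not from the original code).
    std::atomic<double> frontDistance{0.0};   // inches, hypothetical units
    std::atomic<long long> microsElapsed{0};

    void updateOnce(double rawFront, long long nowMicros) {
        // Simple exponential filter as a stand-in for the real filtering.
        double prev = frontDistance.load();
        frontDistance.store(0.8 * prev + 0.2 * rawFront);
        microsElapsed.store(nowMicros);
    }
};

class MotorControl {
public:
    explicit MotorControl(SensorsModule* s) : sensors(s) {}
    bool obstacleAhead() const { return sensors->frontDistance.load() < 5.0; }
private:
    SensorsModule* sensors;  // read-only access to the shared sensor data
};
```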
Actuators thread
Communication with all our servos and motors went through a servo shield. One problem with this was that we could only communicate with one motor or servo at a time (even though the delay was very small). We had a thread whose only job was to write values to the servo shield, so that our main code wouldn't have to wait for this communication to happen.
This thread also helped with the problem that the clock rate of the Edison board was not constant, probably because of noise from the DC converters. Some of our writes to the motors and servos would fail, and having a thread writing values to them non-stop made sure we were resilient to those failed writes, even if only 10% of our writes succeeded.
This thread held 5 pointers to the desired values to be written to the motors and servos; those values were set, in our case, by our servo control thread and motor control thread.
It implemented a maximum power threshold for the motors, so that the robot could never go above a certain speed set in the config file, which made our robot safe to test.
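The structure of that loop might look like the sketch below (names and the cap value are illustrative, not the actual code): the actuator thread dereferences the control threads' target variables on every pass, clamping motor power to the configured maximum, so a failed write is simply retried on the next iteration.

```cpp
#include <algorithm>

constexpr double MAX_POWER = 0.6;  // hypothetical cap from the config file

// Clamp a requested normalized power into [-MAX_POWER, MAX_POWER].
double clampPower(double requested) {
    return std::max(-MAX_POWER, std::min(MAX_POWER, requested));
}

struct ActuatorThread {
    // Pointers into the motor/servo control threads' target variables.
    const double* leftPower  = nullptr;
    const double* rightPower = nullptr;
    const double* armAngle   = nullptr;

    // One pass of the write loop. On the real robot these values go to the
    // servo shield; a write may fail, but the same targets are written again
    // immediately on the next pass, so failures are harmless.
    void writeOnce() {
        lastLeft  = clampPower(*leftPower);
        lastRight = clampPower(*rightPower);
        lastArm   = *armAngle;
    }
    double lastLeft = 0, lastRight = 0, lastArm = 0;
};
```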
Motor control thread
This thread was responsible for deciding what power the actuator thread should write to the motors. It implemented:
- PD controller on angle, so that we could set a desired heading for the robot and make it turn quickly and efficiently, drive straight, or make dynamic turns.
- Proportional control on position, so that the robot could effectively move a given distance.
- Torque limiter for the wheels. We used the approximation that torque is roughly proportional to the difference between the normalized wheel speed and the normalized power set to the motors, and limited this value. This ensured that the wheels never slipped on the floor and improved our distance error from 10% to less than 5%.
- Minimum power threshold. There is a minimum amount of power you have to apply to the motors before the wheels start moving. Since we were using a proportional gain, without this threshold our robot would have been incapable of moving very small distances or turning very small angles. It improved our position error from ~5 inches to <0.5 inch, and our angle error from 5 degrees to <1 degree. It had the side effect that the robot never came fully to rest; it would always shake a few tenths of a degree from left to right trying to fix its angle.
- Position, angle, and angular speed tolerances. To keep the robot from shaking all the time, we had angle and position error tolerances, so that once the error got small enough, the robot would stop trying to correct itself.
This thread had a pointer to the sensors class, and it didn't communicate directly with the actuators; the actuator thread pointed to this class's variables and read from them.
It accepted only two instructions, "how much to move?" and "how much to turn?", which made a great abstraction layer for the high-level threads.
See here: 
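The angle side of this controller, with the minimum-power threshold and the error tolerance, can be sketched as follows. All gains and limits here are illustrative placeholders, not the team's tuned values.

```cpp
#include <cmath>

struct AnglePD {
    double kp = 2.0, kd = 0.5;  // hypothetical PD gains
    double minPower = 0.08;     // below this, the wheels don't actually move
    double tolerance = 1.0;     // degrees; close enough, stop correcting

    // errorDeg: desired heading minus current heading.
    // errorRate: derivative of that error (deg/s), for the D term.
    double turnPower(double errorDeg, double errorRate) const {
        if (std::fabs(errorDeg) < tolerance)
            return 0.0;  // inside tolerance: don't shake around the target
        double p = kp * errorDeg + kd * errorRate;
        // Bump tiny commands up to the minimum power that moves the wheels.
        if (std::fabs(p) < minPower)
            p = (p < 0 ? -minPower : minPower);
        return p;
    }
};
```

The same shape, with a proportional gain on encoder distance plus the torque limiter, covers the "how much to move?" half of the interface.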
Servo control thread
The servo control thread is very similar to the motor control thread, but much less complex. It held variables containing the desired servo angles, which were read by the actuator class. It could set the sorting mechanism to sweep mode, raise or lower the robot arm, and hook or unhook a block.
It provided an abstraction layer that freed the higher-level classes from having to deal with exact servo angles.
See here: 
Encoder threads
Some sensors had to use their own thread. In the case of the encoders, the thread was responsible only for listening to the rising and falling edges of pins in order to keep track of how much each wheel had spun. Other sensors that used their own threads, but that we decided not to use, were the gyroscope and the ultrasonic sensors.
See here: lines 18 and 21
Color sensor (Arduino)
It was hard to make the color sensor work with the Edison because of the lack of native libraries, so instead we ran it on an Arduino, which communicated with the Edison by setting a pin high or low according to the color of the cube detected.
Logging thread
Our logger appended information from all the modules (sensors, high level, motor control, servo control, image processing, etc.) to a file that could be read later for debugging. It also published the last picture taken by the robot to a server (python -m SimpleHTTPServer 80) running on the robot, which let us see what the robot was seeing. It also saved all the transitions that our state machine was making.
Our logger class was not thread safe, and it was never intended to be, since a messy or partially broken log file didn't matter. See here: 
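A deliberately simple logger of this kind might look like the sketch below. The file name and API are illustrative; the key design choice from the text is preserved: no locking at all, because an occasionally interleaved line is an acceptable price for simplicity.

```cpp
#include <fstream>
#include <string>

class Logger {
public:
    explicit Logger(const std::string& path) : out(path, std::ios::app) {}

    // Intentionally not thread safe: two threads logging at once may
    // interleave lines, which is fine for post-run debugging.
    void log(const std::string& module, const std::string& msg) {
        out << "[" << module << "] " << msg << "\n";
        out.flush();  // keep the file useful even if the robot crashes
    }

private:
    std::ofstream out;
};
```

Typical use would be one shared instance, e.g. `logger.log("state", "WALL_FOLLOW -> GRAB_CUBE");` from the state machine.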
Image Processing thread
This thread took images from the camera, removed the image data above the walls, detected blocks, and reported their position and angle with an error of at most 2 inches and 4 degrees for blocks up to 30 inches away. It could also detect blocks that were farther away, but the errors were slightly larger, and the robot couldn't drive with such precision anyway.
It implemented 3 functions that could be used by the high-level decision thread:
It also averaged the data across multiple frames to increase the precision of the position and angle of the block found. See here:
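The multi-frame averaging could be as simple as the sketch below (window size and field names are illustrative): keep the last few single-frame detections in a ring and report their mean, which smooths per-frame noise in position and angle.

```cpp
#include <cstddef>
#include <deque>

struct Detection {
    double x_inches = 0.0;   // lateral offset of the block, hypothetical units
    double angle_deg = 0.0;  // bearing to the block
};

class DetectionAverager {
public:
    explicit DetectionAverager(std::size_t window = 5) : window(window) {}

    // Feed the newest single-frame detection; returns the running average
    // over the last `window` frames.
    Detection add(const Detection& d) {
        recent.push_back(d);
        if (recent.size() > window) recent.pop_front();
        Detection avg;
        for (const Detection& r : recent) {
            avg.x_inches  += r.x_inches;
            avg.angle_deg += r.angle_deg;
        }
        avg.x_inches  /= recent.size();
        avg.angle_deg /= recent.size();
        return avg;
    }

private:
    std::size_t window;
    std::deque<Detection> recent;
};
```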
High-level decision thread
This thread was a main function running a state machine that implemented procedures (which could themselves have internal states). It had access to all the other threads, but didn't communicate directly with the actuator thread. It implemented timeouts on the procedures and was, simply put, the brain of the robot. See here: and here: 
Classes and implementations
We had 40 classes, which made the code extremely modular. Many classes provided a higher degree of abstraction over certain parts of the robot. For example, even though each sensor had its own class, no class besides the sensors module called them directly.
The states of the state machine were implemented as subclasses of a main class that implemented all the main procedures, such as wall following. The procedures contained static variables, such as the time they started performing each action or how many times they had tried to perform it. Since the procedures were functions of the superclass, their static variables were shared among all the subclasses. One problem with this approach was the bugs we had with virtual destructors, but they were fixed.
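The shared-static-state trick works because a function-local `static` belongs to the function, not to any instance, so every subclass calling the inherited procedure touches the same counter. A minimal sketch (names are illustrative):

```cpp
#include <string>

class ProcedureBase {
public:
    virtual ~ProcedureBase() = default;  // the virtual destructor the team
                                         // had bugs with until it was added
    virtual std::string name() const = 0;

protected:
    // Function-local static: one counter shared by every state that calls
    // this inherited procedure, regardless of which subclass it is.
    int wallFollowAttempts() {
        static int attempts = 0;
        return ++attempts;
    }
};

class WallFollowState : public ProcedureBase {
public:
    std::string name() const override { return "WALL_FOLLOW"; }
    int step() { return wallFollowAttempts(); }
};

class GrabCubeState : public ProcedureBase {
public:
    std::string name() const override { return "GRAB_CUBE"; }
    int step() { return wallFollowAttempts(); }  // same counter as above
};
```

Calling `step()` on different state objects keeps incrementing one shared attempt count, which is exactly what the timers and retry counters relied on.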
A lot of the code we implemented was not used by the final robot: a map class with planning to move the robot around the field more effectively, a particle filter to localize the robot, gyroscope sensor code, ultrasonic sensor code, and a purple line detector. Overall it was probably more than 5000 lines of code. Here you can see the mapping and particle filter classes: 
Advice specific for software
- Integrate your code from the beginning: being able to move and being able to localize cubes does not mean you can move while localizing cubes. Integrating code is hard, so start doing it from the beginning.
- Multithreading makes your code easier to reason about and more modular. Consider it a way of minimizing the complexity of your code, not of maximizing it.
- More classes = better. The higher the degree of abstraction you can reach when making high-level decisions, the easier they will be to write.
- TIMEOUTS. I can't stress how important they are. If you don't write them, your robot will be stuck doing the same thing over and over again.
- Test, test, and test. Write tests that use only a sensor. Write tests that integrate the sensor with a motor. Write tests that integrate both of them with the state machine, etc. Tests will make your life a lot easier.
- Main function that takes arguments. It makes creating tests that use the whole code base a lot easier.
- IDE. Use an IDE; it will make you more productive and help you make fewer mistakes. A good one for C++ is Qt Creator.
- GIT. Use git to share code among computers. Never commit executables, .o files, or code that doesn't compile.
- GDB and GDBSERVER. When compiling C and C++ code, add the -g flag, put a breakpoint in main, and debug your code using GDB.
- Localization is fun, but it will not work. Learn how to do it anyway for fun, but you have neither the processing power nor good enough sensors to localize yourself. Wall following will be the way to go.
In the end, we used: 1 webcam, 2 encoders, 2 short-range IR sensors, 1 ultra-short-range Sharp IR sensor, and 1 color sensor.
The driving control was reliable enough that we could use the encoders to detect angles (see software).
The short-range IRs were used for wall following: one detected walls on the left and one in front.
The webcam was used for computer vision and was located on the right side of the robot, opposite the side we wall-followed on.
The ultra-short-range Sharp IR was used to detect whether we had actually caught a cube, or had picked up cubes by accident.
The color sensor sorted the cubes by color.
We had a DC converter for our sensors, a separate one for our sorting and hook servos, and a separate one just for the arm servo.
All our power wires were attached very securely; we didn't use a breadboard, we used components made specifically for power connections.
We had lots of problems with wire organization, and wires kept coming loose. It was not until the middle of IAP that we decided to spend ~10 hours non-stop fixing the wires. Believe me, it was worth it.
Keep things simple. We tried to minimize the number of moving parts.
Our robot was made mostly out of cardboard. Every time we needed an extra part, we would cut it out of cardboard and hot-glue it to the robot.
Things you learned and could have done better
- If something hasn't been working correctly for a long time, try something new.
- The DC converters we were using were really bad. Buy a better DC converter for your sensors (something that outputs only a single voltage, i.e., is not adjustable) with a low current rating (no need to spend extra money), and a separate DC converter with a slightly higher current rating for the servos (this way you won't blow them up by drawing too much current).
- If possible, put fuses on each DC converter.