Circuit Threes ⚡3️⃣ (Team 3)
- Sameen Ahmad
- Matthew Kim (ghost member)
- Jennet Zamanova (ghost member)
- Amir White
- Phoebe Xu
- Angela Chen
- Sebastian De Jesus
Final Summary
Strategy
We are going for scoring strategy 2, which is to intake as many blocks as possible.
Design
Rhubarb Mark.2 (codename: Cucumber)
Rhubarb uses entrapption stars to intake blocks and a ramp to automatically stack the blocks he collects.
Software
We used the ROS2 framework to program Rhubarb. Rhubarb uses a camera mounted on its left side to see blocks. Images are processed by a node that calculates the appropriate speed of each wheel to either turn the robot or drive it forward. Another node takes in these speeds and writes them to the motors.
Vision: We used the ROS2 v4l2_camera package to get images from the camera. We then used cv_bridge to convert the ROS2 image into an OpenCV image for further processing. Using CV functions, we determine the center pixel coordinates of a block in the robot's vision, which we then pass through a homography matrix to get the real-life x and y distance from the camera. Using this information, we calculate the angle between the center of the block and the center of the robot, and the distance to the block. If the block is in the center, we want to drive forward; if it is not, we want to turn. Using PID calculations, we determine the appropriate speeds to write to the left and right motors to achieve either of the respective moves. If the robot does not see any block, we tell the motors to turn right.
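As a concrete illustration of the homography step, here is a minimal sketch. The matrix H is a placeholder for our calibrated pixel-to-floor homography, and the helper name is hypothetical:

```python
import math

import cv2
import numpy as np

# Placeholder: our calibrated pixel -> floor-plane homography goes here.
H = np.eye(3)

def block_angle_and_distance(cx: float, cy: float) -> tuple[float, float]:
    """Map a block's center pixel (cx, cy) to (angle, distance) in the robot frame."""
    pixel = np.array([[[cx, cy]]], dtype=np.float64)
    x, y = cv2.perspectiveTransform(pixel, H)[0, 0]  # real-world offsets on the floor
    angle = math.atan2(x, y)       # radians off the camera's forward axis
    distance = math.hypot(x, y)    # straight-line distance to the block
    return angle, distance
```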
Driving: The driving node takes in the left and right motor speeds and uses the FeedbackMotor class on the Teensy to drive the robot. This node also contains the intake functionality. The robot is unable to see about 7 centimeters in front of itself due to the placement of the camera. We want to start intaking (spinning the entrapption stars) when the block is 5 centimeters in front of the robot. We use the distance sensor to determine the distance to the closest object directly in front of the robot, presumably the block: if it is less than 5 centimeters, we write to the entrapption star motors to intake the block, and if it is less than 7 centimeters, we write to the wheel motors to drive forward a little bit.
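A minimal sketch of that trigger logic; the sensor-read and motor-write helpers are hypothetical stand-ins for our actual driver code, while the 5 cm / 7 cm thresholds come straight from the paragraph above:

```python
INTAKE_DIST_CM = 5.0  # start spinning the entrapption stars here
BLIND_DIST_CM = 7.0   # camera can't see closer than this; creep forward

def intake_step(read_distance_cm, spin_intake, creep_forward):
    """One control-loop tick: read the IR sensor and react."""
    d = read_distance_cm()
    if d < INTAKE_DIST_CM:
        spin_intake()     # block is at the mouth: run the intake motors
    if d < BLIND_DIST_CM:
        creep_forward()   # inside the camera's blind zone: nudge ahead
```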
Documentation
Team Roles
Amir - Mechanical, Strategy, Team Management, (Electrical)
Sameen - Mechanical, Strategy, Lawyer, (Electrical)
Phoebe - Software, Strategy, Team Management, (Electrical)
Angela Chen - Software, Strategy
Sebastian de Jesus - Mechanical
Jennet - Software
Matthew Kim - Mechanical, Electrical, Software
What Are We Working On?
Amir: ~~Organizing Team Wiki, Using ROS2 to drive a predetermined path, Use encoders to determine position and velocity of wheels.~~ Designing ramp element to store and deposit cubes.
Sameen: How to create new messages containing useful information, Develop base implementation of PID controllers as a Teensy package. Assuming we know the position of a block, orient the robot to align with it. Assuming we know the horizontal distance from the block, drive so that we are a set distance from it (use dummy variables).
Phoebe: Using CV to isolate field elements (red blocks, green blocks, blue tape), IR distance sensor
Angela: Learning to merge CV with ROS2 to drive robot, Better implementation of the camera.
Seb: Designing intake mechanism and chassis
Matthew: Considering alternate intake mechanism for robot design
Team 3 password: pjams333
Teensy Pinout
wires g:2-5 (brown, purple, gray, white, black) [GND, DR1, PWM1, DR2, PWM2]
Fundamental Requirements
- Drive (done 01/09/24)
- Use CV to distinguish between our blocks vs. the other team's (color detection) (done 01/11/24)
- Knock over towers of blocks
- Drive towards a block after identifying it (kinda)
- Incorporate distance sensors into detection code & electronics
- Incorporate motor encoders into code
- Write a control loop using encoders/sensors/camera to approach a block
- Manipulate blocks to move around the field
- Design and fabricate intake device
- Integrate intake into electrical system and motor controllers
- Code !
- Release blocks
- Program to eject blocks while driving backwards
Scoring More Points
- Lift blocks
- Orient blocks/robot above another block
- Release (potentially different than simple ground release)
Pictures of Rhubarb from 01/09/24
Key Dimensions
- Blocks are 2x2" with a hole in the center of 1" diameter
- Platform is 4" tall; 2' long
Current picture of the field; 01/10/24
Sensors
Logitech C270 HD Webcam (720p): 1280 by 720 pixels
Short Range IR distance sensor: 4 cm - 30 cm (prob just need short and long range for distance sensors???)
- https://global.sharp/products/device/lineup/data/pdf/datasheet/gp2y0a41sk_e.pdf
- make sure sensor does not receive direct light (maybe put a box around sensor to block out external light)
- make sure lens isn't dirty
- Absolute max ratings: supply voltage (-0.3 to 7 V), output terminal voltage (-0.3 to 0.3 V)
- output terminal voltage range 0.25 - 0.55 V with 0.4 V average
- output voltage difference range 1.95 - 2.55 V with 2.25 V average
- average current 12 mA (max 22 mA)
- rough formula: L = 10/V - 0.42, where L is the distance to a reflective object in cm and V is the output voltage (only works for 3.5 cm to 40 cm; does NOT work for distances < 3.5 cm). A conversion sketch follows this list.
- adc_pin = 38
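A sketch of that conversion, assuming 10-bit analog reads at a 3.3 V full scale (check against the Teensy's actual ADC configuration):

```python
ADC_PIN = 38
ADC_MAX = 1023   # assumed 10-bit reads
V_FULL = 3.3     # assumed full-scale voltage

def ir_distance_cm(raw: int) -> float | None:
    """Raw ADC reading -> distance in cm via L = 10/V - 0.42."""
    voltage = raw / ADC_MAX * V_FULL
    if voltage <= 0:
        return None                    # no signal; avoid divide-by-zero
    distance = 10.0 / voltage - 0.42
    # The formula only holds for ~3.5-40 cm; outside that, don't trust it.
    return distance if 3.5 <= distance <= 40.0 else None
```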
Long Range IR distance sensor: 20 - 150 cm
Ultra Short IR distance sensor: 2 - 10 cm
Time of flight distance sensor: ~up to 200 cm
Color sensor: RGB and clear light sensor has IR blocking filter
Microswitches: lever arm is 16.3 mm; body is 20 by 6.4 by 10.2 mm
Motor Encoder (tracks wheel position): 64 counts per revolution, gear ratio of 50:1
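For converting raw encoder readings, a quick sketch, assuming the 64 counts are measured on the motor shaft before the 50:1 gearbox:

```python
COUNTS_PER_MOTOR_REV = 64
GEAR_RATIO = 50  # motor revolutions per wheel revolution
COUNTS_PER_WHEEL_REV = COUNTS_PER_MOTOR_REV * GEAR_RATIO  # 3200

def wheel_velocity_rps(delta_counts: int, dt: float) -> float:
    """Wheel speed in revolutions/second from counts seen over dt seconds."""
    return (delta_counts / COUNTS_PER_WHEEL_REV) / dt
```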
Pins!!
Encoders: Vcc 3.3V
Feedback Motor (DIR, PWM, ENCA, ENCB):
- Left (1) --> FeedbackMotor(self.tamp, 16, 15, 31, 32, True)
- Right (2) --> FeedbackMotor(self.tamp, 14, 13, 35, 36, True)
Cytron Motor Driver: Orange wire (B+), Purple wire (B-)
Smaller Cytron: DIR 6 (white -> purple), PWM 5 (yellow -> gray), NC nothing, GND (black -> white -> black)
Servo: Vcc 5V(red), GND(brown), Signal 23 (orange)
IR Sensor: Vcc 5V (red), GND (black), Signal 22 (yellow)
Github / ROS2
Useful Links
Basic Commands
git add --all
--> stages all changes in the directory
git reset
--> undoes the current add
git reset --hard
--> deletes all the files and changes added since the previous commit
git commit -m "[message]"
--> commits changes to the local repo
git branch [branch name]
--> creates a new branch on the commit tree
git checkout [branch name]
--> switches branches
git merge [branch]
--> merges a branch with the master
git diff [1st commit name] [2nd commit name]
--> finds the differences between two commits
git blame [file name]
--> shows who made the changes to a file
git clone [repo]
--> copies a remote repo onto your machine
git status
--> shows the status of files (modified, staged, etc.)
git log
--> shows all the recent commits/messages
Terminal Commands
source install/local_setup.bash
colcon build
--> both should be run each time you start a session or update code, to set up the environment correctly
cat [file name]
--> lets you read a file
touch [file name]
--> creates a new file in the current directory
mkdir -p [folder name]
--> creates a new folder in the current directory
ros2 run/launch [package] [node/launch]
--> runs a node/launch file
ros2 topic echo [topic]
--> prints the topic's messages to the terminal
vim [file]
--> opens a basic text editor for quick fixes to code (press "i" to enter insert mode and start editing; when done, hit Esc then ":x" to save and quit)
code .
--> opens VS Code
SSH into NUC
- On VS Code get the extension Remote Development
- Open command window (Ctrl+Shift+P) and search for Remote-SSH: Add new host
- Enter ssh -X team3@[ipaddress]
- look up the IP address on Poll Me Maybe (make sure the NUC is turned on)
- Then connect to the SSH host from the command window
- Now navigate the files until you reach our team folder: click the search bar and go to the team-3 folder
- To run: open your VM and navigate to VS Code. git pull to get the latest updates, checkout your branch, and get cooking
- While running, occasionally run
ps
in the terminal to make sure sketches aren't running in the background. If you see more than ps and bash, then run
kill -9 [pid]
on each extraneous process
ROS2 Commands
ros2 topic list
--> outputs a list of topics running in the package
ros2 topic echo [topic name]
--> outputs the messages that the topic handles
ros2 topic info [topic name]
--> outputs the type of message handled by the topic, the publisher count, and the subscriber count
ros2 interface show [type of topic message]
--> outputs the variables of the message that is relayed, which can then be modified through the class
self.[publisher name] = self.create_publisher([message type], [topic name], 10)
self.[subscriber name] = self.create_subscription([message type], [topic name], self.[callback name], 10)
(a minimal node using these calls is sketched at the end of this section)
- all dependencies must be added to package.xml
- all nodes in a package (including both publishers and subscribers) must be added as a new entry point in setup.py as [any node name] = [package name].[file name containing the node]:main
ros2 run [package_name] [entry_point]
--> runs the node
ros2 pkg create [package name] --build-type ament_python (or ament_cmake)
- always run colcon build and source ~/.bashrc after creating a node
- creating a publisher in ros2
- creating a subscriber in ros2
- To run multiple lines at once, write a .sh file that contains all the lines you want to run. Use && between commands you want to run consecutively and & between ones you want to run simultaneously. Make sure to include #!/bin/bash at the top of the file. To run it, all you do is type the [FILE PATH] into the terminal. Check out run_echo.sh in amir_branch for an example!
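As a minimal, self-contained example of the publisher/subscriber pattern above (the node and topic names here are placeholders):

```python
import rclpy
from rclpy.node import Node
from std_msgs.msg import String

class Listener(Node):
    def __init__(self):
        super().__init__('listener')
        # create_subscription(message type, topic name, callback, queue depth)
        self.sub = self.create_subscription(String, 'chatter', self.on_msg, 10)

    def on_msg(self, msg: String):
        self.get_logger().info(f'heard: {msg.data}')

def main():
    rclpy.init()
    rclpy.spin(Listener())
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```

This would be registered in setup.py as listener = [package name].[file name]:main, then run with ros2 run [package name] listener.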
ROS2 + CV
- https://wiki.ros.org/vision_opencv
- https://wiki.ros.org/cv_bridge/Tutorials/ConvertingBetweenROSImagesAndOpenCVImagesPython
Design
Currently, we do not know where/how the blocks should be placed. To optimize our points, our robot should be able to carry as many blocks as possible. If the scoring zone is on the ground/low and does not require precise placement, we can use a front-facing claw/grabber mechanism with a 2-DOF servo mechanism and a dump truck design.
More difficult designs would involve high placement and/or precise placement. For high placement, an elevator/vertical conveyor mechanism should suffice. For precise placement, a last-in-first-out (LIFO, i.e. stack-style) design would make the cubes the most accessible. As soon as we know the exact zoning mechanism, the apparatuses can be designed. Until then, we should discuss precise logistics. The only other mechanical designs are basic mounts, which can easily be achieved with 3D printing, drilling holes, and standoffs/washers/spacers.
Module Mechanical Ideas:
- Vacuum/sticky arm to intake cubes and store them in the robot
- Compliant wheels/stars to intake cubes
- 4-bar linkage lift system (so intake remains parallel to ground)
- Cover our bot in blue tape to mess up the other teams' camera system
Strategy:
- 1: Destroy and steal all opponent blocks and stacks with long extensible arm.
- 2: Scoop blocks from center platform every time a stack is detected and move onto our platform
- 3: Spend 2 min trying to create stacks and the last minute just trying to scoop up as many as possible
- 4: Spend all the time trying to create stacks on the ground away from the center platform (prioritizing opponent team blocks)
- 5: Dump truck, just try to collect as many blocks as possible: Color invariant
- 6: For last 30 seconds, knock out enemy stacks by launching/long arm and build stack on center platform
- 7: Stack blocks outside of field??
Potential Purchases
- Compliant Wheels: for taking in blocks. $5.50
- SpinTake: for taking in blocks. $6.25
- Entrapption Star: for intake system: $13.00 (Purchased 2 on 01/11/24)
- Acrylic/Aluminum to make 4-bar linkage (ask Seb)
- Torsional Spring (anywhere; pretty cheap): For skewering blocks
- Weights: For helping with gravity-assisted skewering
Upgrades, People
Rhubarb Mark.2 (codename: Cucumber)
After hitting the gym, our little Rhuby had a growth spurt: entrapption stars to intake blocks, a ramp to automatically stack the blocks he collects, and finally a mounted servo that can change the pivot angle of the ramp from stacking to release mode.
Setting up local repo on your VM
Git and Code Setup
I would recommend doing everything on your virtual machine, because some of the UI popups (like the camera and WASD driving windows) don't work on macOS.
Download VScode on your virtual machine following the instructions on this link: https://linuxiac.com/install-visual-studio-code-on-ubuntu-22-04/?fbclid=IwAR0DqftaDWjG47rpH5rO1qvtW7YGSDgLl8fiYrjcwvA1xs6U9bRgQ5zwvEk
Also download ROS2 following the ROS2 tutorial.
Remote Repo: https://github.mit.edu/maslab-2024/team-3. Local repos exist on the NUC and on your VM. When you change code, the changes stay only on your machine (or the NUC) until you commit the files and push (to the remote repo). At that point, all the changes you made will be on the remote repo. Then, for other people to see your changes, they will need to pull from the remote repo.
Options for editing code:
- Edit directly on the NUC
- To do this, complete the SSH into NUC steps (scroll up) and start editing! This option is good if you want to test stuff quickly on the robot. Be careful with more complex tasks, because the code on this computer is the final version that we will be using.
- Edit in your local repo on your Virtual Machine
- You will need to have VS Code and ROS2 downloaded on your virtual machine
- Set up an SSH key in your github.mit.edu account
- Follow this tutorial up until the hardware security key part: https://docs.github.com/en/authentication/connecting-to-github-with-ssh/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent
- Open up the public key file (the path will be listed in the terminal) and copy it
- Go to your mit github account > Settings > SSH keys> New SSH Key
- Paste into Key, name it whatever
- Go to the remote repo and clone via SSH (git clone git@github.mit.edu:maslab-2024/team-3.git)
Working in the repo: branches. All development should be done on branches; the master branch should ideally always be a working copy of the code. When developing a new feature, make a new branch. Pull regularly from the remote branch to keep your code up to date. Once you have your feature working, push it to its respective branch on the remote repo (usually just git push if you are on the correct branch). Submit a pull request, review merge conflicts, maybe get another opinion, and then merge with master. Everyone at that point should pull from the remote repo again.
Software Plan
Publisher Node: publishes images.
Subscriber Node: uses the image to determine the center of the block. If the center is within a certain distance of the center of the camera (we can make this error tolerance proportional to the distance from the block, i.e. allow further blocks more error), drive towards the block, with velocity proportional to distance (use the feedback_motor library to set the velocity). If the center of the block is too far from the center of the camera, find the angle needed to turn and use the feedback_motor library to turn. Repeat. Once within a certain distance of the robot and with the block centered, scoop up the block.
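A sketch of that turn-vs-drive decision; all gains and the distance-proportional pixel tolerance are made-up placeholders to tune on the robot:

```python
KP_TURN = 0.01       # speed per pixel of horizontal error (placeholder)
KP_DRIVE = 0.05      # speed per cm of remaining distance (placeholder)
TOL_PX_PER_CM = 0.5  # allowed pixel error grows with distance to the block

def wheel_speeds(pixel_error: float, distance_cm: float) -> tuple[float, float]:
    """Return (left, right) wheel speeds from the block's horizontal pixel error."""
    if abs(pixel_error) <= TOL_PX_PER_CM * distance_cm:
        v = KP_DRIVE * distance_cm   # centered enough: drive, slowing as we near
        return v, v
    turn = KP_TURN * pixel_error     # off-center: spin toward the block
    return turn, -turn
```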
Questions: How should we structure the nodes? Should the publisher node publish the whole image and then have all processing done in the subscriber node?
We will want to implement a state machine that determines which sets of moves to perform based on the current state of the robot. For example, if seeBlock == False
then we will want to turn while looking for blocks. If we see an untouched stack, we will want to knock it over. If we see blocks of our color on the ground, we will want to pick them up. For picking up, the robot will first use the camera to determine distance; then the IR sensor will let us be more accurate once we get close enough to intake the block. However, we have to be careful to avoid fallen blocks of the opposite color. We should also maintain a global variable that keeps track of how many blocks are in our ramp.
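One way that state machine could be sketched (the states and transition conditions here are illustrative, not final):

```python
from enum import Enum, auto

class State(Enum):
    SEARCH = auto()    # no block in view: turn in place
    KNOCK = auto()     # untouched stack spotted: knock it over
    APPROACH = auto()  # block of our color seen: drive toward it
    INTAKE = auto()    # IR sensor says it's close: spin the intake

def next_state(see_block: bool, see_stack: bool, ir_close: bool) -> State:
    """Pick the next behavior from the current sensor readings."""
    if see_stack:
        return State.KNOCK
    if not see_block:
        return State.SEARCH
    return State.INTAKE if ir_close else State.APPROACH
```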
Packages to use: v4l2_camera, apriltag_ros
Schedule:
1/10: Have CV detect/classify blocks
1/11: Test CV, finalize robot design, decide + order BOM
1/12: Mock competition
1/22:
- Sameen: Laser cut parts
- Phoebe: Debug CV publisher, test distance sensor code
1/23:
- Sameen: test v4l2 package
1/24:
- Sameen: debug feedback motor, test homography code (take pixel locations), implement feedback motor for distance and angle, test feedback motor for angle and fine-tune PID calcs
- Phoebe: continue to debug CV stuff, implement distance sensor code, implement encoder position code (?)
1/25:
- complete testing base program
1/26:
- Finish building the base robot
- Finish the base program
1/29:
- Finish building the base robot
- test CV and tweak color thresholds
- check homography outputs and scale to account for camera's new position
- debug, debug, debug
1/30:
- fine tune PID values
- get movement of robot towards blocks working?
1/31:
- implement scoring code
2/1:
- debug, debug, debug