Winning Alliance Captain - Colorado State Championship
Division Winning Alliance Captain - FIRST World Championship Ochoa Division
Think Award - FIRST World Championship Ochoa Division
Winning Alliance Captain - FIRST World Championship
Finalist Alliance Captain - Maryland Tech Invitational
Software Award [name?] - Maryland Tech Invitational
Winning Alliance Captain - Chicago Robotics Invitational Premier Event
Golden Bean Award - Chicago Robotics Invitational Premier Event
The Up-A-Creek Robotics 2024-2025 Into the Deep robot, E-CLIPSE, is the world's most successful clip-bot. It is designed to automatically turn samples into specimens using 9 clips loaded into the robot at the beginning of the autonomous period. At Worlds it ran a 10-specimen autonomous, using vision to pick up samples and convert them into specimens without ever moving away from the submersible. Teleop featured a similar automated routine.
E-CLIPSE continues to use our standard Mecanum drive sandwiched between four custom-machined aluminum side panels. However, because so little driving was needed this year, we used the SparkFun Optical Tracking Odometry Sensor instead of our typical two-wheel odometry.
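As a rough idea of what that change looks like in software, here is a minimal Kotlin sketch using the FTC SDK's SparkFunOTOS driver; the hardware-map name ("otos") and the mounting offset are placeholder values, not our actual configuration.

    import com.qualcomm.hardware.sparkfun.SparkFunOTOS
    import com.qualcomm.robotcore.eventloop.opmode.LinearOpMode
    import com.qualcomm.robotcore.eventloop.opmode.TeleOp
    import org.firstinspires.ftc.robotcore.external.navigation.AngleUnit
    import org.firstinspires.ftc.robotcore.external.navigation.DistanceUnit

    @TeleOp(name = "OTOS Sketch")
    class OtosSketch : LinearOpMode() {
        override fun runOpMode() {
            // "otos" is whatever name the sensor has in the robot configuration.
            val otos = hardwareMap.get(SparkFunOTOS::class.java, "otos")

            otos.setLinearUnit(DistanceUnit.INCH)
            otos.setAngularUnit(AngleUnit.DEGREES)
            // Offset of the sensor from robot center (placeholder values).
            otos.setOffset(SparkFunOTOS.Pose2D(0.0, 2.0, 0.0))
            otos.calibrateImu()
            otos.resetTracking()

            waitForStart()
            while (opModeIsActive()) {
                val pose = otos.position   // field-relative x, y, heading
                telemetry.addData("x (in)", pose.x)
                telemetry.addData("y (in)", pose.y)
                telemetry.addData("heading (deg)", pose.h)
                telemetry.update()
            }
        }
    }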
The intake picks up samples by grabbing them from the inside edge. The gripper is cam-based, allowing a large amount of gripping force, and it can rotate to pick up samples in any orientation. It is mounted to a four-bar on top of a turret, and the whole mechanism rides on a linear slide system. This lets the intake reach a significant portion of the close half of the submersible.
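To make that reach concrete, here is a small Kotlin sketch of the kind of inverse kinematics involved. The geometry (a carriage moving along one axis with the four-bar holding the gripper at a fixed radius) is a simplification assumed for illustration, not our exact mechanism code.

    import kotlin.math.abs
    import kotlin.math.atan2
    import kotlin.math.sqrt

    /** A slide carriage position plus a turret angle (radians). */
    data class IntakeTarget(val slidePos: Double, val turretAngleRad: Double)

    /**
     * Hypothetical inverse kinematics: the carriage moves along the x-axis,
     * the turret sits on the carriage, and the four-bar holds the gripper at
     * a fixed radius `reach` from the turret axis.
     */
    fun solveIntakeIk(targetX: Double, targetY: Double, reach: Double): IntakeTarget? {
        // Points farther sideways than the arm can reach are unreachable.
        if (abs(targetY) > reach) return null
        // Park the carriage so the target sits exactly one `reach` away,
        // choosing the solution on the near side of the target.
        val slidePos = targetX - sqrt(reach * reach - targetY * targetY)
        val turretAngleRad = atan2(targetY, targetX - slidePos)
        return IntakeTarget(slidePos, turretAngleRad)
    }

A real implementation would also clamp the carriage position to the physical slide travel and account for the turret's own mounting offset.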
A 120° wide-angle webcam on the front of the robot views the whole submersible, and our software finds the optimal sample to pick up. The intake then moves to that sample and picks it up, verifying the pickup with a custom stall sensor.
To detect a successful sample pickup with as few wires as possible running to the intake, we developed a custom stall sensor that measures servo current. The sensor uses a small 0.25 Ω sense resistor in series with the servo. An INA169 chip measures and amplifies the voltage drop across the resistor to get the current, and this is fed into a comparator that compares it against a threshold set by a potentiometer. The sensor outputs a digital signal that is quick and easy for the Control Hub to read.
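Per the INA169 datasheet, the output voltage is V_out = I_servo × R_sense × R_L / 1 kΩ, so the potentiometer threshold maps directly to a servo current. On the software side the sensor is just a digital input; below is a minimal Kotlin sketch, assuming the comparator output is wired to a digital port named "intakeStall" and reads high during a stall.

    import com.qualcomm.robotcore.hardware.DigitalChannel
    import com.qualcomm.robotcore.hardware.HardwareMap

    /**
     * Minimal wrapper around the stall sensor's digital output.
     * Assumes the comparator output is connected to a Control Hub digital port
     * configured as "intakeStall", and that it reads HIGH when the servo
     * current exceeds the potentiometer-set threshold.
     */
    class StallSensor(hardwareMap: HardwareMap, name: String = "intakeStall") {
        private val channel = hardwareMap.get(DigitalChannel::class.java, name).apply {
            mode = DigitalChannel.Mode.INPUT
        }

        /** True when the servo is drawing stall-level current, i.e. it has gripped something. */
        val isStalled: Boolean
            get() = channel.state
    }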
The intake subsystem is mounted on a car-antenna-powered linear slide. This system is fast, precise, and compact, and doesn't use any belts or strings. It uses two Misumi slides driven by the antenna system. The antenna is a smooth-moving telescoping rod system with a toothed nylon cord inside; because of the small diameter of the tubes, the cord doesn't buckle. The cord meshes with a gear on the drive motor, acting like a flexible rack and pinion.
The transfer is where three major subsystems come together: the intake, the clipper, and the depositor. The intake swings into the transfer spot. At this point in the sequence, the depositor sits under the transfer spot, pushing a lever. When the depositor comes up to grab the sample, it releases the lever, which moves a wall into place to help secure the sample for clipping.
The entire robot was designed around the clipper, as it is central to our strategy. At the beginning of the autonomous period (and at least once more during teleop), the robot picks up 9 clips from the wall and stores them inside the clip claw. The claw is coated in Teflon so the clipping mechanism can easily remove a clip, and it is cam-based, allowing a secure grip without stalling the servo. The actual clipper is a triangular gripper that uses foam to passively grip the clip; it is attached to a servo that rotates to attach the clip. This whole system rides on a linear slide built from two aluminum rods and powered by a rack and pinion driven by a 5-turn goBILDA Super Speed servo. The linear system lets the clipper pick up clips from any of the nine storage locations, move to the depositor for clipping, and move backward out of the way after clipping.
The clipper has two main sequences: the prepare sequence and the clipping sequence. During the prepare sequence, the clip claw loosens slightly and the clipper slides to the location of the next clip. The clip sticks onto the triangular spike and is pulled out of storage when the clipper slides back out. The slide then moves backward, rotates the clip slightly, and pushes it forward into the Reseater, which makes sure the clip is correctly seated on the clipper spike.
During the clipping sequence, the depositor holds the sample in place between two walls. The clipper mechanism, loaded with a clip from the prepare sequence, moves into place, also putting a brace lid over the sample. It then rotates the clip onto the sample using an Axon Max servo and pulls back out of the clip, allowing the depositor to remove the completed specimen and score it on a chamber.
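Sequences like this are where our Kotlin coroutine structure (described further below) pays off. The following is a hedged sketch of how a clipping cycle could be written as a suspending function; the interfaces, setpoints, and delay are placeholders, not our tuned values.

    import kotlinx.coroutines.delay

    // Placeholder subsystem interfaces; the real code wraps FTC servos and slides.
    interface ClipperSlide { suspend fun goTo(positionMm: Double) }
    interface ClipSpike { suspend fun rotateTo(angleDeg: Double) }
    interface Depositor {
        suspend fun holdSampleForClipping()
        suspend fun scoreOnChamber()
    }

    // Placeholder setpoints, not real tuned values.
    const val CLIP_POSITION_MM = 0.0
    const val RETRACTED_POSITION_MM = 0.0
    const val CLIP_ON_ANGLE_DEG = 0.0
    const val CLIP_OFF_ANGLE_DEG = 0.0

    /** One clipping cycle, assuming a clip was already loaded by the prepare sequence. */
    suspend fun clipSequence(slide: ClipperSlide, spike: ClipSpike, depositor: Depositor) {
        depositor.holdSampleForClipping()    // pinch the sample between the two walls
        slide.goTo(CLIP_POSITION_MM)         // bring the loaded clipper and brace lid into place
        spike.rotateTo(CLIP_ON_ANGLE_DEG)    // rotate the clip onto the sample
        delay(150)                           // placeholder settle time
        spike.rotateTo(CLIP_OFF_ANGLE_DEG)   // back the spike out of the clip
        slide.goTo(RETRACTED_POSITION_MM)    // clear the way for the depositor
        depositor.scoreOnChamber()           // take the finished specimen to a chamber
    }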
The depositor grabs the sample/specimen from the outside at one end. This lets it easily take the sample from the intake while leaving the other end free for clipping. It is cam-based, allowing a secure grip without stalling the servo. It has a wrist servo and an elbow servo, giving it the degrees of freedom needed to score on both the high and low chambers. The depositor is mounted on a single two-stage, continuous, belt-driven lift.
The level-two climb was added after the state championship, before Worlds. It is spring-loaded and unfolds when released by a servo. It braces between the high and low bars and spools in a string, tilting the robot up so it hangs off the outside of the submersible.
This year, the challenge was to detect 3.5 in × 1.5 in × 1.5 in blocks colored blue, yellow, or red. For vision, we used a 120° camera, which let us see the entire submersible. Our intake needs the exact x, y coordinates and the orientation of each block, so we decided to convert the camera image into a bird's-eye view. For all of our algorithms we used OpenCV, an open-source library containing a large collection of optimized algorithms for computer vision, image processing, and machine learning. To get the top view, we took the four corners of the submersible and computed a homography that maps them to a rectangle, equalizing distances and giving us a top-down view of the submersible. Next, we found the color of the blocks by thresholding the image, which highlights the regions of interest in white and leaves everything else darker/grayscale. We then extracted the contours (outlines) of these white areas and applied template matching and edge detection to them, which gave us the exact x, y coordinate and orientation of each block. From there, we used inverse kinematics, which is how our intake knows where to go, to send the intake to precisely pick up the chosen block. This allowed us to run fully autonomous cycles in under 2.3 seconds, giving us a competitive edge in matches.
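As an illustration of those steps, here is a simplified Kotlin sketch using OpenCV's Java bindings; the corner points, HSV bounds, output size, and area cutoff are placeholder calibration values, and the real pipeline adds the template matching and edge detection described above.

    import org.opencv.core.*
    import org.opencv.imgproc.Imgproc

    /** Center (in top-view pixels) and orientation of one detected sample. */
    data class DetectedSample(val center: Point, val angleDeg: Double)

    /**
     * Simplified single-frame pipeline: warp to a top view, threshold one color,
     * and report each blob's center and orientation.
     */
    fun findSamples(frame: Mat, submersibleCorners: MatOfPoint2f,
                    lowerHsv: Scalar, upperHsv: Scalar): List<DetectedSample> {
        // 1. Homography from the four submersible corners to a flat top view.
        val topViewSize = Size(640.0, 360.0)                     // placeholder output size
        val topCorners = MatOfPoint2f(
            Point(0.0, 0.0), Point(topViewSize.width, 0.0),
            Point(topViewSize.width, topViewSize.height), Point(0.0, topViewSize.height))
        val homography = Imgproc.getPerspectiveTransform(submersibleCorners, topCorners)
        val topView = Mat()
        Imgproc.warpPerspective(frame, topView, homography, topViewSize)

        // 2. Color threshold: the target color becomes white, everything else black.
        val hsv = Mat()
        Imgproc.cvtColor(topView, hsv, Imgproc.COLOR_RGB2HSV)    // assumes an RGB frame
        val mask = Mat()
        Core.inRange(hsv, lowerHsv, upperHsv, mask)

        // 3. Contours of the white regions -> oriented bounding boxes.
        val contours = ArrayList<MatOfPoint>()
        Imgproc.findContours(mask, contours, Mat(), Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE)
        return contours
            .filter { Imgproc.contourArea(it) > 500.0 }          // placeholder area cutoff
            .map {
                val rect = Imgproc.minAreaRect(MatOfPoint2f(*it.toArray()))
                DetectedSample(rect.center, rect.angle)
            }
    }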
Because we used a clipping mechanism in addition to the usual mechanisms, we needed every moving piece to work together with as much precision as possible. State machines and Kotlin coroutines were essential for timing the different mechanisms so they work together without wasting time and without interfering with each other.
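As a hedged illustration of that pattern (not our actual robot code), the sketch below overlaps the two routines that use separate hardware and only starts the step that depends on both once they have finished; the function names are placeholders.

    import kotlinx.coroutines.coroutineScope
    import kotlinx.coroutines.joinAll
    import kotlinx.coroutines.launch

    // Placeholder suspend functions standing in for the real subsystem routines.
    suspend fun intakeSampleAndTransfer() { /* vision pickup, then swing into the transfer spot */ }
    suspend fun prepareNextClip() { /* loosen the claw, slide to the next clip, reseat it */ }
    suspend fun clipAndScoreSpecimen() { /* hold, clip, retract, score on a chamber */ }

    /** One cycle: independent routines overlap, the dependent step waits for both. */
    suspend fun runCycle() = coroutineScope {
        // Intake pickup and clip preparation use separate hardware, so they run concurrently.
        joinAll(
            launch { intakeSampleAndTransfer() },
            launch { prepareNextClip() }
        )
        // Clipping needs a transferred sample and a loaded clip, so it runs only after both finish.
        clipAndScoreSpecimen()
    }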