
ADDRAR IV
Autonomous shore-launched retrieval robot.
- Status: Active (final demo phase)
- Role: Vision · Tracking · Control · Firmware
- Duration: Sept 2025 – May 2026
- Stack: Raspberry Pi · ONNX · Python · Custom firmware · PD control
Overview
ADDRAR IV is a tethered autonomous robot built to retrieve floating debris from open water. It detects targets with a custom-trained vision model running on a Raspberry Pi, tracks them through frame-to-frame motion, and steers itself toward them with a closed-loop control system that talks to custom firmware on the motor controller.
I owned the vision pipeline, the tracking layer, the steering controller, and the embedded software end to end. This page is the case study from a year of iteration.
The problem
Marina debris is everywhere and almost nobody picks it up. The interesting engineering question is whether a small autonomous platform can do the job from shore — no GPS waypoints in the middle of the lake, no human in the cab, just a vision system and a propulsion stack reacting in real time.
The hard constraint is the environment. Water reflects everything, light changes minute to minute, debris drifts, the camera shakes, the tether snags. A classifier that works in a dry lab fails the first time a wave catches the sun. So the model and the control system have to be robust to motion, glare, and ambiguity — and they have to make decisions fast enough to keep up with a target that keeps moving.

What I built
The platform splits cleanly into four layers. A camera-and-Pi vision node runs the ONNX classifier at ~50 fps. A multi-target tracker turns per-frame detections into stable identities across time. A PD steering controller takes the active target's position relative to the boat and produces a heading correction. And custom motor firmware translates that correction into differential thrust, with a TURN_HARD logic block that eliminates the reverse-and-replan loops the earlier firmware kept falling into.
Everything talks over serial. The Pi is the brain, the motor controller is the body, and the tether is the umbilical that brings power in and telemetry out.
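The TURN_HARD idea can be sketched as a thrust mixer: small heading corrections bias differential thrust while the boat keeps moving forward, and large corrections trigger a pivot in place instead of a reverse-and-replan. This is a minimal illustration, assuming a normalized correction signal; the threshold and thrust constants are made up for the sketch, not the real firmware values.

```python
# Sketch of differential-thrust mixing with a hard-turn cutoff.
# TURN_HARD_THRESHOLD and BASE_THRUST are illustrative assumptions.
TURN_HARD_THRESHOLD = 0.6  # |correction| above this triggers a pivot turn
BASE_THRUST = 0.5          # nominal forward thrust, normalized to [0, 1]

def mix_thrust(correction: float) -> tuple[float, float]:
    """Map a signed heading correction in [-1, 1] to (left, right) thrust."""
    if abs(correction) > TURN_HARD_THRESHOLD:
        # TURN_HARD: spin in place toward the target instead of backing off.
        sign = 1.0 if correction > 0 else -1.0
        return (sign * BASE_THRUST, -sign * BASE_THRUST)
    # Normal case: bias forward thrust toward the needed heading change.
    return (BASE_THRUST + 0.5 * correction, BASE_THRUST - 0.5 * correction)
```

The payoff of the cutoff is that the boat never has to stop and replan: past the threshold it commits to the turn in one motion.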
Results
- 76.9% true positive rate
- 4.2% land false-positive rate
- ~50 fps on-device inference
- 19 fixes shipped in the 5/6 rebuild
How it works
The detector is an FP32 ONNX export of a custom-trained YOLO-class model. I evaluated quantized variants but the FP32 demo model is the deploy artifact — quantization hurt recall on the small-target class enough that it broke the rest of the pipeline downstream.
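The frame prep that feeds a detector like this is a fixed tensor transform: crop, resize, normalize, reorder. A minimal sketch, assuming a 320 px square input, a center crop, and a nearest-neighbor resize; the real pipeline's sizes and normalization may differ.

```python
import numpy as np

def preprocess(frame: np.ndarray, size: int = 320) -> np.ndarray:
    """Turn an HxWx3 uint8 frame into the NCHW float32 tensor an ONNX
    session expects. Input size and resize method are illustrative."""
    h, w, _ = frame.shape
    s = min(h, w)
    top, left = (h - s) // 2, (w - s) // 2
    square = frame[top:top + s, left:left + s]       # center-crop to square
    idx = np.linspace(0, s - 1, size).astype(int)
    small = square[idx][:, idx]                      # nearest-neighbor resize
    x = small.astype(np.float32) / 255.0             # scale to [0, 1]
    return np.transpose(x, (2, 0, 1))[None]          # HWC -> NCHW, add batch
```

The resulting tensor goes straight into the ONNX session's input; keeping this step cheap matters, since it runs per frame alongside inference on the same Pi.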
The tracker is the part that took the most iteration. The flicker problem — a target popping in and out across frames — was killing the controller, because every flicker reset the steering input and the boat would oscillate. I rebuilt the tracker around a hit-streak heuristic that holds an identity through short gaps and rejects single-frame spurious detections. That eliminated the oscillation almost entirely.
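The hit-streak logic reduces to two counters per identity. A minimal single-target sketch, where MIN_HITS, MAX_MISSES, and the one-track simplification are all assumptions for illustration:

```python
# An identity only becomes "active" after MIN_HITS consecutive detections,
# and a trusted identity survives up to MAX_MISSES blank frames.
MIN_HITS = 3     # consecutive detections before the track is trusted
MAX_MISSES = 5   # blank frames a trusted track survives

class Track:
    def __init__(self):
        self.hits = 0
        self.misses = 0
        self.active = False
        self.pos = None

    def update(self, detection):
        """detection is an (x, y) target position, or None for a blank frame.
        Returns the position the controller should steer on, or None."""
        if detection is not None:
            self.pos = detection
            self.hits += 1
            self.misses = 0
            if self.hits >= MIN_HITS:
                self.active = True       # promoted: controller may steer on it
        else:
            self.misses += 1
            self.hits = 0
            if self.misses > MAX_MISSES:
                self.active = False      # gap too long: drop the identity
                self.pos = None
        return self.pos if self.active else None
```

Note the asymmetry: a one-frame spurious detection never reaches the controller (hits stay below MIN_HITS), while a one-frame dropout keeps feeding the last known position, so the steering input never resets mid-approach.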
Steering is a vanilla PD controller. The proportional term reacts to current bearing error, the derivative term damps overshoot. It works because the rebuilt tracker keeps the input clean.
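In sketch form, the loop is a few lines; the gains and the radians-in, normalized-correction-out convention are assumptions, not the tuned values:

```python
# Vanilla PD steering: proportional term reacts to current bearing error,
# derivative term damps overshoot. Gains are illustrative, not tuned values.
class PDSteering:
    def __init__(self, kp: float = 1.2, kd: float = 0.4):
        self.kp, self.kd = kp, kd
        self.prev_error = 0.0

    def step(self, bearing_error: float, dt: float) -> float:
        """bearing_error: signed angle to the active target (radians).
        Returns a heading correction for the thrust mixer."""
        derivative = (bearing_error - self.prev_error) / dt
        self.prev_error = bearing_error
        return self.kp * bearing_error + self.kd * derivative
```

There is no integral term on purpose: the target drifts and the boat never station-keeps, so steady-state error is not worth the windup risk.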

“The flicker fix was the single biggest unlock — once the tracker stopped lying to the controller, the controller stopped doing dumb things.”
What's next
Tightening the model on edge cases — glare off the water and partial occlusions still trip recall. The plan is one more round of targeted data collection on the failures and a fresh training pass.
On the control side, the open question is whether to layer a smarter search behavior on top of PD steering when no target is in view. Right now the boat holds heading; a slow lawnmower sweep would catch off-frame targets faster.