Vision-Guided Intelligent Waste Sorter
12/10/2024 Rayhan Al-Rabby
#computer-vision
#open-cv
#robotics
Building a rover that classifies recyclables with YOLOv2, OpenCV, and an NCS2 accelerator while staying deployment-ready.
During a group studio course in 2024 I led the perception stack for an intelligent waste sorter. We wanted something practical for campus facilities, so we paired reliable detection with straightforward, affordable hardware.
Technical highlights
- Trained YOLOv2 on a curated image set of campus recyclables and compressed it to run on an Intel NCS2. The accelerator kept inference under 60 ms per frame, fast enough to avoid motor stalls.
- Built the classification service in Python with OpenCV, handing the predicted labels off to a Raspberry Pi that drove the actuation lane.
- Reached 92% detection precision and 90% classification accuracy after adding domain-specific augmentations (wet cans, crushed cardboard, etc.).
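Detector output has to be filtered before labels reach the actuation lane: low-confidence boxes are dropped and overlapping boxes for the same object are merged. The post-processing we used isn't shown here, but a minimal confidence-threshold plus non-maximum-suppression pass (standard for YOLO-family detectors) looks roughly like this; the thresholds are illustrative defaults, not the project's tuned values:

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, all [x1, y1, x2, y2]."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, score_thresh=0.5, iou_thresh=0.45):
    """Return indices of boxes kept after thresholding and suppression."""
    keep = []
    order = np.argsort(scores)[::-1]            # highest confidence first
    order = order[scores[order] >= score_thresh]
    while order.size:
        best = order[0]
        keep.append(int(best))
        rest = order[1:]
        # Drop remaining boxes that overlap the kept box too much
        order = rest[iou(boxes[best], boxes[rest]) < iou_thresh]
    return keep
```

Whatever survives this pass becomes the single label-per-object message forwarded downstream.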
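The domain-specific augmentations mattered because bins produce conditions that clean training photos don't: glare off wet cans, shadows, deformed packaging. As one hedged example of the style of augmentation involved (this is a sketch, not the project's actual pipeline), a photometric jitter that simulates glare and shadow can be written in a few lines:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def jitter_brightness_contrast(img, max_shift=40, contrast_range=(0.7, 1.3)):
    """Randomly scale contrast and shift brightness of an HxWx3 uint8 image.

    Mimics specular glare on wet cans (bright shift) and bin shadow
    (dark shift); ranges here are illustrative, not tuned values.
    """
    alpha = rng.uniform(*contrast_range)          # contrast gain
    beta = rng.uniform(-max_shift, max_shift)     # brightness offset
    out = img.astype(np.float32) * alpha + beta
    return np.clip(out, 0, 255).astype(np.uint8)
```

Geometric augmentations (crush-like warps for cardboard) follow the same pattern with affine transforms instead of pixel arithmetic.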
Lessons carried forward
This project taught me the value of making prototypes observable. We piped all camera frames, confidence scores, and motor states to a lightweight dashboard in Power BI so the facilities team could trust the system. That mindset now guides how I document robotics experiments and why I keep building analytics hooks into every embedded project.
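The observability hook above does not need to be elaborate. A minimal sketch of the idea, assuming a JSON-lines event log that a dashboard tool like Power BI could ingest (the field names here are illustrative, not the project's actual schema):

```python
import io
import json
import time

def log_event(stream, label, confidence, motor_state):
    """Append one JSON line per classification event to a writable stream.

    A dashboard can poll the resulting file; keeping one flat record per
    event makes the log trivial to chart and filter.
    """
    record = {
        "ts": time.time(),
        "label": label,
        "confidence": round(float(confidence), 3),
        "motor_state": motor_state,
    }
    stream.write(json.dumps(record) + "\n")
    return record

# Usage: in production this would be an open file; a StringIO works for demo.
buf = io.StringIO()
log_event(buf, "aluminum_can", 0.92, "idle")
```

Emitting every decision as a plain, timestamped record is what let the facilities team audit the sorter without reading any code.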