Robot Control using 20 Lines of C++ Code and Classical Vision Tools

Darius Burschka · Aug 24, 2024

About This Presentation

The system uses OpenCV and an Intel Neural Compute Stick (Neurostick) on a Raspberry Pi 4 to perform all of the localization and motion planning. The Neurostick (the blue USB device in the image) runs YOLO.
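
A minimal sketch of what this looks like in practice, assuming a Darknet-style YOLO model and an OpenCV build with OpenVINO support; the model files, input size, and camera index are placeholders, not the exact configuration from the presentation:

// Sketch: run YOLO on the Neural Compute Stick (Myriad) via OpenCV's DNN module.
#include <opencv2/opencv.hpp>
#include <opencv2/dnn.hpp>
#include <vector>

int main() {
    // Hypothetical model files; the slides do not state which YOLO variant is used.
    cv::dnn::Net net = cv::dnn::readNetFromDarknet("yolov3-tiny.cfg", "yolov3-tiny.weights");

    // Route inference to the Intel Neural Compute Stick (Myriad VPU)
    // through the OpenVINO Inference Engine backend.
    net.setPreferableBackend(cv::dnn::DNN_BACKEND_INFERENCE_ENGINE);
    net.setPreferableTarget(cv::dnn::DNN_TARGET_MYRIAD);

    cv::VideoCapture cap(0);              // global or wrist camera stream (assumed index)
    cv::Mat frame;
    while (cap.read(frame)) {
        // YOLO expects a square, normalized blob; 416x416 is a common input size.
        cv::Mat blob = cv::dnn::blobFromImage(frame, 1.0 / 255.0, cv::Size(416, 416),
                                              cv::Scalar(), true, false);
        net.setInput(blob);
        std::vector<cv::Mat> outs;
        net.forward(outs, net.getUnconnectedOutLayersNames());
        // ... parse "outs" into boxes/scores and hand the attention regions to the planner
    }
    return 0;
}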


Slide Content

Darius Burschka
Machine Vision and Perception Group (MVP)
Department of Computer Science
Technische Universität München
Hardware Fault-Tolerant Distributed Scene Analysis for Manipulation Tasks

System Overview
Franka's Panda manipulators will be used (just ordered)
- Global camera: Intel RealSense D435
- Wrist camera with IR
- Edge system with Myriad (camera + robot control)
- CUDA-enabled system (object and human pose recognition, 3D)

PointCloud-based Recognition using RealSense
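
A minimal sketch of the acquisition side, assuming librealsense2 with the D435; the stream resolution, frame count, and the downstream segmentation step are assumptions rather than the exact code behind the slide:

// Sketch: acquire a 3D point cloud from the RealSense D435 with librealsense2.
#include <librealsense2/rs.hpp>
#include <iostream>

int main() {
    rs2::pipeline pipe;
    rs2::config cfg;
    cfg.enable_stream(RS2_STREAM_DEPTH, 640, 480, RS2_FORMAT_Z16, 30);
    pipe.start(cfg);

    rs2::pointcloud pc;
    for (int i = 0; i < 30; ++i) {                 // let auto-exposure settle
        rs2::frameset frames = pipe.wait_for_frames();
        rs2::depth_frame depth = frames.get_depth_frame();

        // Deproject the depth image into an organized point cloud.
        rs2::points points = pc.calculate(depth);
        std::cout << "points: " << points.size() << std::endl;
        // ... segment/cluster the cloud here to recognize objects by shape
    }
    return 0;
}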

Appearance-based Recognition
Hierarchical object recognition (new for embedded applications); a sketch of the pipeline follows the list.
1. YOLO for attention
2a. Appearance-based human pose (multi-skeleton) estimation
2b. Clipping of object attention regions
3. Depth augmentation from the 3D point cloud on the "scene server"
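
A minimal sketch of steps 1, 2b, and 3, assuming the YOLO boxes have already been parsed from the detector output and that a registered depth image (CV_32F, in meters) is available locally; the "scene server" interface is abstracted away, and the struct and function names are hypothetical:

// Sketch: clip YOLO attention regions and augment each with a robust depth value.
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

struct AttentionRegion {
    cv::Rect roi;       // clipped 2D attention region from YOLO (step 2b)
    float depth_m;      // depth augmentation: median depth inside the ROI (step 3)
};

std::vector<AttentionRegion> augmentWithDepth(const std::vector<cv::Rect>& yoloBoxes,
                                              const cv::Mat& depthMeters) {
    std::vector<AttentionRegion> out;
    const cv::Rect imageBounds(0, 0, depthMeters.cols, depthMeters.rows);
    for (const cv::Rect& box : yoloBoxes) {
        cv::Rect roi = box & imageBounds;          // clip the attention region to the image
        if (roi.area() == 0) continue;
        cv::Mat patch = depthMeters(roi);

        // Median of valid depths is more robust than the mean against sensor holes.
        std::vector<float> vals;
        for (int y = 0; y < patch.rows; ++y)
            for (int x = 0; x < patch.cols; ++x) {
                float d = patch.at<float>(y, x);
                if (d > 0.f) vals.push_back(d);
            }
        if (vals.empty()) continue;
        std::nth_element(vals.begin(), vals.begin() + vals.size() / 2, vals.end());
        out.push_back({roi, vals[vals.size() / 2]});
    }
    return out;
}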