UNIVERSITÀ DI PISA
Faculty of Engineering
Master's Degree in Automation Engineering
Master's thesis
Candidate:
Lorenzo Peppoloni
Supervisors:
Prof. Carlo Alberto Avizzano
Ing. Emanuele Ruffaldi
Co-examiner:
Prof. Antonio Bicchi
Programming by Demonstration of a robotic
manipulator: acquisition and control
Graduation session of 20/12/2011
Academic year 2011/2012
Summary
The goal of this work is to propose a system capable of learning and performing complex tasks based on perceptual stimuli. The system learns from demonstrations and maps task execution onto its own operational space, so as to execute the tasks in an equivalent way. It can manage and balance its physical limits in terms of obstructions, manipulation, and vision, coping with dynamically changing environment elements and different contextual situations.
In particular, we tackled the following problems:
• management and integration of streams coming from different sensors (two cameras, two lasers, position encoder readings, and odometry information) to generate context and integrated navigation information,
• management of task information in relation to the robot's performance and its constraints, such as workspace, manipulation, and obstacles,
• integration with a semantic architecture able to optimize the execution processes.
Contents
1 State of the Art 9
1.1 Programming by Demonstration . . . 9
1.1.1 Introduction . . . 9
1.1.2 A quick historical overview . . . 10
1.1.3 Engineering-oriented Approach . . . 11
1.1.3.1 Symbolic Encoding . . . 12
1.1.3.2 Trajectory Encoding . . . 12
1.1.3.3 Incremental Teaching Methods . . . 13
1.1.3.4 Human-Robot Interaction in PbD . . . 13
1.1.3.5 Other Learning Techniques . . . 14
1.1.4 Open Issues . . . 14
1.2 Related works . . . 15
1.2.1 Goal-Directed Imitation in a Humanoid Robot, Calinon et al. [29] . . . 15
1.2.2 Learning of Gestures by Imitation in a Humanoid Robot, Calinon et al. [30] . . . 16
1.2.3 On learning, representing and generalizing a task in a humanoid robot, Calinon et al. [12] . . . 17
1.2.4 Towards Automated Models of Activities of Daily Life, Tenorth et al. [31] . . . 18
1.2.5 Our goal . . . 19
2 ROS 20
2.1 What is ROS? . . . 20
2.2 ROS features . . . 21
2.3 ROS example . . . 23
2.3.1 Tracking the human . . . 24
2.3.2 Sending commands to the youBot . . . 26
2.4 How it works . . . 27
2.4.1 ROS Filesystem Level . . . 27
2.4.2 ROS Computation Graph Level . . . 28
2.4.3 ROS Community Level . . . 30
2.4.4 Names . . . 31
2.4.4.1 Graph Resource Names . . . 31
2.4.4.2 Package Resource Names . . . 32
2.4.5 Higher-Level Concepts . . . 33
2.4.5.1 What is /tf? . . . 33
3 The Microsoft Kinect 35
3.1 Device overview . . . 35
3.2 Open source drivers . . . 36
3.2.1 Third-party development . . . 37
3.3 Kinect and ROS . . . 37
3.3.1 openni camera . . . 37
3.3.1.1 Dynamically Reconfigurable Settings . . . 38
3.3.2 openni tracker . . . 39
4 KUKA youBot 41
4.1 The youBot Hardware . . . 42
4.2 The youBot Programming API . . . 45
4.3 PID regulation . . . 46
4.3.1 Current PID regulation . . . 46
4.3.2 Velocity PID regulation . . . 48
4.3.3 Position PID regulation . . . 49
4.3.4 Parameter sets for PID regulation . . . 50
4.4 The youBot and ROS . . . 51
4.4.1 The ROS wrapper . . . 51
4.4.2 The youBot model . . . 52
5 System Architecture 53
5.1 Hardware overview . . . 54
5.2 Work environment . . . 55
5.3 System design . . . 57
5.3.1 Learning Phase . . . 58
5.3.2 Execution Phase . . . 59
5.4 ROS implementation . . . 60
5.4.1 tf . . . 62
6 Motion, localization and autonomous navigation 65
6.1 Differential motion control . . . 66
6.1.1 User assisted differential control . . . 67
6.1.2 Vision assisted differential control . . . 68
6.2 Robot localization . . . 72
6.2.1 AMCL localization . . . 73
6.2.1.1 ROS amcl package algorithm . . . 74
6.2.1.2 Experimental test . . . 80
6.2.2 Markers system . . . 81
6.3 Obstacle detection and avoidance . . . 84
6.3.1 The obstacle management algorithm . . . 84
6.4 Path planning . . . 87
6.4.1 Dynamic Window Approach . . . 87
6.4.2 Implementation . . . 89
6.5 Human tracking and following . . . 92
7 Grasp and affordances 94
7.1 Introduction . . . 94
7.2 The youBot arm . . . 94
7.2.1 Overview . . . 94
7.2.2 The arm kinematics . . . 96
7.2.2.1 Forward Kinematics . . . 97
7.2.2.2 Inverse Kinematics . . . 99
7.2.2.3 Implementation . . . 103
7.3 Grasp and manipulation . . . 104
7.3.1 Grasp modeling and procedures . . . 104
7.3.2 Grasp study . . . 104
7.3.3 Object manipulation . . . 106
7.3.4 Implementation . . . 108
8 Conclusions 113