
1.1.3.1 Symbolic Encoding


Academic year: 2021


Full text


Contents

1 State of Art 9

1.1 Programming by Demonstration . . . . 9

1.1.1 Introduction . . . . 9

1.1.2 A quick historical overview . . . . 10

1.1.3 Engineering-oriented Approach . . . . 12

1.1.3.1 Symbolic Encoding . . . . 12

1.1.3.2 Trajectory Encoding . . . . 13

1.1.3.3 Incremental Teaching Methods . . . . 14

1.1.3.4 Human-Robot Interaction in PbD . . . . 15

1.1.3.5 Other Learning Techniques . . . . 15

1.1.4 Open Issues . . . . 16

1.2 Related works . . . . 17

1.2.1 Goal-Directed Imitation in a Humanoid Robot, Calinon et al. [29] . . . . 17

1.2.2 Learning of Gestures by Imitation in a Humanoid Robot, Calinon et al. [30] . . . . 18

1.2.3 On learning, representing and generalizing a task in a humanoid robot, Calinon et al. [12] . . . . 21

1.2.4 Towards Automated Models of Activities of Daily Life, Tenorth et al. [31] . . . . 22

1.2.5 Our goal . . . . 23

2 ROS 24

2.1 What is ROS? . . . . 24

2.2 ROS features . . . . 25

2.3 ROS example . . . . 27

2.3.1 Tracking the human . . . . 28

2.3.2 Sending commands to the youbot . . . . 30


2.4 How it works . . . . 31

2.4.1 ROS Filesystem Level . . . . 32

2.4.2 ROS Computation Graph Level . . . . 32

2.4.3 ROS Community Level . . . . 36

2.4.4 Names . . . . 36

2.4.4.1 Graph Resource Names . . . . 36

2.4.4.2 Package Resource Names . . . . 38

2.4.5 Higher-Level Concepts . . . . 39

2.4.5.1 What is /tf? . . . . 39

3 The Microsoft Kinect 41

3.1 Device overview . . . . 41

3.2 Open source drivers . . . . 43

3.2.1 Third-party development . . . . 43

3.3 Kinect and ROS . . . . 44

3.3.1 openni_camera . . . . 44

3.3.1.1 Dynamically Reconfigurable Settings . . . . . 45

3.3.2 openni_tracker . . . . 46

4 KUKA Youbot 48

4.1 The Youbot Hardware . . . . 49

4.2 The Youbot Programming API . . . . 53

4.3 PID regulation . . . . 54

4.3.1 Current PID regulation . . . . 54

4.3.2 Velocity PID regulation . . . . 56

4.3.3 Position PID regulation . . . . 57

4.3.4 Parameter sets for PID regulation . . . . 59

4.4 The Youbot and ROS . . . . 59

4.4.1 The ROS wrapper . . . . 59

4.4.2 The youbot model . . . . 61

5 System Architecture 62

5.1 Hardware overview . . . . 63

5.2 Work environment . . . . 64

5.3 System design . . . . 67

5.3.1 Learning Phase . . . . 68

5.3.2 Execution Phase . . . . 69

5.4 ROS implementation . . . . 70


5.4.1 tf . . . . 74

6 Vision 77

6.1 Initial situation . . . . 78

6.1.1 openni_camera and openni_tracker [57] and [58] . . . . 78

6.1.2 kinect_aux [59] . . . . 78

6.1.3 ar_kinect [60] . . . . 78

6.1.4 household_objects_database [62] . . . . 79

6.1.5 tabletop_object_detector [63] . . . . 80

6.2 Final situation . . . . 83

6.2.1 Changes to existing packages . . . . 84

6.2.2 New packages . . . . 86

6.2.2.1 nodone . . . . 86

6.3 Conclusions and improvements for the future . . . . 87

7 Cognition 88

7.1 Initial situation . . . . 89

7.1.1 pocketsphinx [66] . . . . 89

7.1.2 sound_play [69] . . . . 89

7.2 Final situation . . . . 90

7.2.1 Changes to existing packages . . . . 90

7.2.2 New packages . . . . 91

7.2.2.1 wake_up . . . . 91

7.2.2.2 tracker . . . . 92

7.2.2.3 action_manager . . . . 96

7.3 Conclusions and improvements for the future . . . . 99

8 Conclusions 100


List of Figures

1.1 Primitive actions to attain subgoals . . . . 10

1.2 Top: demonstration of an ipsilateral (top-left) and contralateral (top-right) motion of the right arm. Bottom: reproduction of the motion candidate with lowest cost function, by using only statistics to extract the goals. . . . . 17

1.3 Demonstration (left column) and reproduction (right column) of different tasks: waving goodbye (1st line), knocking on a door (2nd line), and drinking (3rd line). . . . 20

1.4 Teaching through kinesthetics for the 3 experiments conducted. Chess Task: Grabbing and moving a chess piece two squares forward. Bucket Task: Grabbing and bringing a bucket to a specific position. Sugar Task: Grabbing a piece of sugar and bringing it to the mouth, using either the right or left hand. . 21

1.5 Abstraction from raw sensor data to description of a daily schedule. . . . . 22

1.6 The picture shows the trajectory of the right hand during five table-setting episodes performed by two different persons. . . . 23

2.1 What is ROS? . . . . 24

2.2 Releases . . . . 25

2.3 ROS Universe . . . . 26

2.4 Node structure . . . . 28

2.5 Users seen by the Kinect RGB sensor and output of the tracker node 29

2.6 Frames tree . . . . 30

2.7 Screenshots to drive the youBot forward and backward . . . . . 31

2.8 ROS at running time . . . . 33

2.9 ROS basic concepts . . . . 34

2.10 Youbot frames in rviz . . . . 40


3.1 Microsoft Kinect® . . . . 41

3.2 Kinect frames tree . . . . 45

3.3 Users tracking . . . . 46

3.4 Psi Pose . . . . 47

4.1 The KUKA Youbot . . . . 48

4.2 The KUKA youbot base . . . . 50

4.3 General characteristics of the youBot platform . . . . 50

4.4 The KUKA Youbot arm . . . . 51

4.5 Joints workspace . . . . 52

4.6 General characteristics of the youBot arm . . . . 52

4.7 Cascaded PID regulation . . . . 54

4.8 Current PID regulation . . . . 55

4.9 Current PID parameter description . . . . 55

4.10 Velocity PID regulation . . . . 56

4.11 Parameter description for velocity PID regulation . . . . 57

4.12 Positioning PID regulation . . . . 58

4.13 Position PID parameter description . . . . 58

4.14 Transition between PID parameter sets . . . . 59

4.15 The KUKA youbot API and the ROS framework . . . . 60

4.16 The youBot model in rviz . . . . 61

5.1 YouBot modified . . . . 63

5.2 Overview of the gripper . . . . 63

5.3 Work environment (left view) . . . . 65

5.4 Work environment (right view) . . . . 65

5.5 2D map of the work environment . . . . 66

5.6 3D mesh of the bowl (top-left), glass (top-right) and plate (bottom) used in the work . . . . 66

5.7 Functional architecture of the system . . . . 67

5.8 Schematic description of a demonstration . . . . 69

5.9 Schematic description of an execution . . . . 70

5.10 Communication between the vision and the navigation and grasp . . 71

5.11 System frames . . . . 75

6.1 Position of the markers in the map (red circles) . . . . 79

6.2 Position of the markers on the table (red circles) . . . . 79

6.3 Examples of used markers . . . . 79


6.4 Point cloud from the camera of a table with three objects . . . 81

6.5 Objects extracted from the previous scene of point cloud . . . . 82

6.6 Image of a table and three objects . . . . 82

6.7 Detection result: table plane has been detected (note the yellow contour). Objects have been segmented (note the different color point clouds superimposed on them). The bottle and the glass have also been recognized (as shown by the cyan mesh further superimposed on them) . . . . 83

6.8 Final situation of vision system . . . . 84

7.1 Final situation of modeling system . . . . 90

7.2 Starting point of the robot (highlighted in red) . . . . 92

7.3 An example of the communication protocol between human and robot . . . . 93

7.4 Learning algorithm . . . . 95

7.5 Example of a file for the set up table task . . . . 96

7.6 Example of a file for the clear table task . . . . 96

7.7 Execution algorithm . . . . 98

8.1 Screenshots of a part of the learning phase of a set up table action; in particular, the robot goes to see the initial state of the table and the sink. . . . 102

8.2 Screenshots of a part of the learning phase of a set up table action; in particular, the user performs a place action of a bowl on the table and the robot receives the command to watch the table to see what has happened. . . . 103


List of Tables

1.1 Advantages and drawbacks of representing a skill at a symbolic/trajectory level . . . . 12

3.1 Kinect Sensor overview . . . . 43

4.1 The onboard PC characteristics . . . . 49

5.1 The ASUS N53S characteristics . . . . 64

5.2 ROS packages used for Vision and Modeling . . . . 72

5.3 ROS packages used for Navigation, Grasp and Control . . . . 73

5.4 Common ROS packages used . . . . 74
