Chapter 8
Conclusions
The system developed is capable of:
• learning and executing complex everyday tasks from a single human demonstration; the learning phase is based entirely on perception, from object positions to the environmental changes made by the human
• creating a context from perception by integrating multiple sensor streams: camera, laser scan, and encoder readings
• building semantic knowledge of the observed task
• executing tasks in an equivalent way, adapting human actions to the robot's own physicality through proprioception and limit analysis
• autonomously completing the learned task, except for some failure cases (Sect. 6.3.2)
• recovering to some extent from failure situations during the execution phase
In the following, two series of screenshots from two different execution phases are presented.
In the first, the robot sets up the table by moving a bowl that is already on the table but in the wrong position; in the second, a glass is taken from the sink and placed in the correct position on the table.
Both executions start with a preliminary autonomous observation of both the table and the sink to determine the initial context and thereby optimize the task planning.