UNIVERSITÀ DI PISA

DOTTORATO DI RICERCA IN INGEGNERIA DELL’INFORMAZIONE

PRIMITIVE BASED HIERARCHICAL PLANNING FOR HUMANOID ROBOTS

DOCTORAL THESIS

Author

Alessandro Settimi

Tutors

Prof. Lucia Pallottino Prof. Antonio Bicchi

Reviewers

Prof. Emilio Frazzoli Prof. Timothy Bretl

The Coordinator of the PhD Program

Prof. Marco Luise

Pisa, November 2016 XXIX


Acknowledgements

Thanks to the Centro di ricerca E. Piaggio, University of Pisa, and the Department of Advanced Robotics, Italian Institute of Technology. I would like to thank the European projects Walk-Man [1], Soma [2], PacMan [3] and SoftHands [4] for the generous funding of my work.


Summary

The key to robust and reliable usage of humanoid robots in varied and unstructured environments lies in their planning and control capabilities. Moreover, different levels of autonomy are required, depending on the scenario and the task set to be accomplished. In general, higher level planning is executed first, and each action (at lower levels) is subsequently planned. Nevertheless, in case of specific tasks where human decision is still preferred, or in case of robot misbehavior, remote control of the robot is needed.

In this thesis a hierarchical planning and control framework for humanoid robots is presented. The purpose of the proposed algorithms is to allow transitions from gross to fine motor skills and to act at different levels, such as the joint, Cartesian, primitive and task levels.

Primitive and task level planning algorithms based on control primitives have also been implemented for both mobile manipulators and humanoid robots. Several examples of motion primitives are proposed for both robotic platforms.

Furthermore, a Motion Description Language is used for both the formal validation of the obtained plans and for the generation of consistent complex plans starting from previously defined motion primitives.

In case of failure of one of the planning levels, a human can remotely intervene to avoid undesired robot behaviors, substituting the planner through the proposed Pilot Interface. This is a Graphical User Interface that acts as a natural connection between the different planning levels and the human, who in case of necessity can decide the correct level of autonomy of the robot (and hence the level of planning) to accomplish a task.

Finally, both the planning algorithms and the Pilot Interface have been experimentally validated and applied on different robots and scenarios.

Sommario

Planning and control are fundamental for the use of humanoid robots in unstructured environments, and the level of autonomy employed generally depends on the situation at hand. Usually, high level algorithms are executed first, and the individual lower level actions are then planned accordingly.

In this thesis a hierarchical planning and control framework for humanoid robots is proposed. The goal of the developed algorithms is to allow planning to move from gross motor skills to fine motor skills. In other words, given the different planning levels (joint, Cartesian, primitive, task), more or less precise algorithms can be applied depending on the situation. If a planning algorithm fails, an operator can intervene remotely to avoid undesired robot behaviors, replacing the planning process through the proposed pilot interface. The Motion Description Language is used for the formal validation of the obtained results and for the creation of new motion primitives starting from pre-existing ones. Planning algorithms at the primitive and task levels, based on motion primitives, have been implemented for both humanoid robots and mobile manipulators. Several examples of motion primitives for both these types of robot are presented in this work.

A graphical pilot interface (Pilot Interface) has been developed to act as a bridge between the different planning levels and the human operator, who in case of necessity can change the level of autonomy of the robot (and hence of planning), intervening directly to complete a particular task.

The proposed planning algorithms and the pilot interface have been tested on several occasions with different robots.


List of publications

International Journals

1. N. G. Tsagarakis, D. G. Caldwell, A. Bicchi, F. Negrello, M. Garabini, W. Choi, L. Baccelliere, V. G. Loc, J. Noorden, M. Catalano, M. Ferrati, L. Muratore, A. Margan, L. Natale, E. Mingo, H. Dallali, A. Settimi, A. Rocchi, V. Varricchio, L. Pallottino, C. Pavan, A. Ajoudani, J. Lee, and P. Kryczka. WALK-MAN: A High Performance Humanoid Platform for Realistic Environments. Journal of Field Robotics (JFR), 2016.

2. Ferrati, M., Settimi, A., Muratore, L., Tsagarakis, N., Natale, L., and Pallottino, L. (2016). The Walk-Man Robot Software Architecture. Frontiers in Robotics and AI, 3, 25.

3. Marino, H., Ferrati, M., Settimi, A., Rosales, C., and Gabiccini, M. (2016). On the Problem of Moving Objects With Autonomous Robots: A Unifying High-Level Planning Approach. IEEE Robotics and Automation Letters, 1(1), 469-476. IEEE.

International Conferences/Workshops with Peer Review

1. Settimi A and Caporale D and Kryczka P and Ferrati M and Pallottino L. Motion Primitive Based Random Planning for Loco-Manipulation Tasks. In the 16th IEEE-RAS International Conference on Humanoid Robots (Humanoids), 2016, IEEE (in press).

2. Ferrati, M., Nardi, S., Settimi, A., Marino, H., and Pallottino, L. Multi-object handling for robotic manufacturing. In the 42nd Annual Conference of the IEEE Industrial Electronics Society IECON 2016, IEEE (in press).

3. Ajoudani, A., Lee, J., Rocchi, A., Ferrati, M., Hoffman, E. M., Settimi, A., and Tsagarakis, N. G. (2014, November). A manipulation framework for compliant humanoid COMAN: Application to a valve turning task. In 2014 IEEE-RAS International Conference on Humanoid Robots (pp. 664-670). IEEE.

4. Lee, J., Ajoudani, A., Hoffman, E. M., Rocchi, A., Settimi, A., Ferrati, M., and Caldwell, D. G. (2014, November). Upper-body impedance control with variable stiffness for a door opening task. In 2014 IEEE-RAS International Conference on Humanoid Robots (pp. 713-719). IEEE.

5. Settimi, A., Pavan, C., Varricchio, V., Ferrati, M., Hoffman, E. M., Rocchi, A., and Bicchi, A. (2014, May). A modular approach for remote operation of humanoid robots in search and rescue scenarios. In International Workshop on Modelling and Simulation for Autonomous Systems (pp. 192-205). Springer International Publishing.

6. Ferrati, M., Settimi, A., and Pallottino, L. (2014, May). ASCARI: A Component Based Simulator for Distributed Mobile Robot Systems. In International Workshop on Modelling and Simulation for Autonomous Systems (pp. 152-163). Springer International Publishing.

7. Hoffman, E. M., Traversaro, S., Rocchi, A., Ferrati, M., Settimi, A., Romano, F., and Tsagarakis, N. G. (2014, May). Yarp based plugins for gazebo simulator. In International Workshop on Modelling and Simulation for Autonomous Systems (pp. 333-346). Springer International Publishing.

8. Settimi, A., and Pallottino, L. (2013, December). A subgradient based algorithm for distributed task assignment for heterogeneous mobile robots. In 52nd IEEE Conference on Decision and Control (pp. 3665-3670). IEEE.


Contents

Introduction
    Motivation
    Contribution
    Outline

1 A hierarchical planning framework for humanoid robots.
    1.1 Joint Level
    1.2 Cartesian Level
    1.3 Primitive Level
    1.4 Task Level

2 Motion primitives based sample based planning.
    2.1 Motion Primitive Formalization
    2.2 The P-Search* algorithm
        2.2.1 P-Search*
        2.2.2 LocalRRT*
    2.3 Probabilistic Completeness and Asymptotic Optimality
    2.4 Results
    2.5 Primitives examples

3 Motion Description Language for primitive based planning.
    3.1 The primitive based MDL - P–MDL

4 The Walk-Man Pilot Interface.
    4.1 The need for semi-autonomy
        4.1.1 Motion Description Languages
        4.1.2 Semi-autonomy via MDLs
    4.2 Software Architecture
    4.3 Implementation
    4.4 GUI Components
    4.5 Advanced Features
    4.6 Applications
        4.6.1 Valve Turning Task
        4.6.2 Door Task
        4.6.3 Drive Task
        4.6.4 Loco–manipulation Task
        4.6.5 Whole–body multi–contact Control Task

Conclusions

A Resources

Introduction

Motivation

Humanoid robots are now commonly used in research laboratories, and may eventually become a common part of everyday life as well. Currently, an application of such platforms which is a point of keen interest for the research community is their exploitation to help organizations such as firefighters, civil defense corps, police, etc.

This is indeed the basic motivation of the Walk-Man project [1], whose aim is to replace human intervention with a humanoid robot whenever the situation could endanger people's lives. The infamous Fukushima accident^1 motivated the 2015 Darpa Robotics Challenge (DRC)^2, where different robots, including humanoid ones, competed to show the possible application of these machines in such scenarios. Robots were required to move in an environment meant to be a replica of the disastrous one, accomplishing various tasks. The challenge highlighted both the achievements of the robotic research community and the features that are still to be improved. The participation in the DRC has moreover been one of the first milestones of the Walk-Man project.

In such unstructured and unpredictable scenarios the robot(s) will be sent into the dangerous zones, monitored by one or more remote operators, in a supervised autonomy fashion. Because of the possibly poor communication between the robot and the human operator, a greater degree of autonomy with respect to classical tele–operation methods is required.

^1 http://spectrum.ieee.org/automaton/robotics/industrial-robots/fukushima-robot-operator-diaries
^2 http://archive.darpa.mil/roboticschallenge

To do this, the planning level cannot be limited to solving problems at the robot joint level; on the contrary, it has to provide solutions for problems of increasing level up to task level planning, where the environment surrounding the robot is considered.

Dealing with the high number of degrees of freedom (DoF) of a humanoid robot and with the interactions with the environment results in a complex problem affected by the so-called curse of dimensionality. This problem has led to many techniques to overcome such intractable complexity, and many approaches have been proposed to simplify the problem. In the various works, although the techniques used are substantially different, the basic idea is the same: simplify the problem to lower its dimension and then solve it.

The hierarchical planning approach proposed in this work relies on the same idea of simplifying the problem by decreasing its dimensionality.

Contribution

In this work a hierarchical planning framework based on motion primitives is proposed. A motion primitive is an independent entity that can represent any planning and control policy, such as walking on flat terrain, single and dual–arm manipulation, multi–contact planning and even more complex behaviors. Such behaviors can also be characterized by a lower level planning part and a control part. The planning part can work either at the joint level or at the Cartesian level, while the associated control law necessarily acts at the joint level. In fact, a motion planner can either plan contacts in the Cartesian space and then compute the joint motion necessary to achieve the desired end–effector poses (a two stage planning), or it can plan directly in the joint space, which is usually more computationally expensive (due to the greater number of DoF). In this work the first two levels of planning considered are the joint one and the Cartesian one.

The third level of planning is based on primitives. A motion primitive is a basic control entity that can represent any control law, such as walking on flat terrain, turning a valve, opening a door, etc. A sample–based planning algorithm (namely P-Search∗) that combines the motion primitives is presented here, with the peculiarity of combining different control laws in parallel when possible. This aspect is important when dealing with platforms capable of interacting in different ways with the surrounding environment, such as humanoids and mobile manipulators. Both these kinds of robots are able to move in the environment and to interact with various objects. Combining such capabilities in a parallel way (manipulation and locomotion at the same time) increases the overall performance with respect to a sequential approach.

Finally, the highest level of planning works in the task space. The introduced planning algorithm integrates the computational advantages of Motion Description Languages (MDL) to find plans in complex environments by performing a simple shortest path algorithm on the graph representing the scenario. The result of this planning phase is given to P-Search∗ to compose the primitives needed to perform the given tasks. Another important aspect that contributes to increasing performance is that the peculiarities of the motion primitives are used to inform the developed algorithm during the sampling phase. In particular, each primitive is characterized by a trigger condition, indicating when it is reasonable to use it.

Depending on the planning level used, the robotic platform is hence characterized by a specific degree of autonomy; for this reason a proper operator interface (namely the Pilot Interface (PI)) has been designed. This is the natural connection between the operator and the planning framework, since the pilot can operate at whatever level he wants, from the joint level to the task one, intervening when necessary. As an example, the operator can either run the task level planner once the scene is defined to execute the desired tasks, or he can act at the joint level to solve situations the planner can't handle.

Outline

In this section, the contents of each Chapter are described, and the connections among them are highlighted.

Chapter 1: A hierarchical planning framework for humanoid robots. In this chapter a hierarchical planning framework applied to humanoid robots is proposed. The four different planning and control levels identified are:

1. Joint level,
2. Cartesian level,
3. Primitive level,
4. Task level.

Chapter 2: Motion primitives based sample based planning.

In this chapter a planner working at the primitive level is proposed, together with a formal definition of primitives. Asymptotic optimality and probabilistic completeness are demonstrated for the proposed algorithm (namely P-Search∗). Simulated and real experiments are reported and different exemplary primitives are described.

Chapter 3: Motion Description Language for primitive based planning.

The task level planning is addressed in this chapter. Given the environment representation, a graph representing the task connections is generated. Once a simple shortest path algorithm is executed on the graph, the selected primitives are used by P-Search∗ to find the final solution to be executed by the robot.

Chapter 4: The Walk-Man Pilot Interface.

In a supervised autonomy scenario, the pilot of the robot can run planners at the different levels or act directly at these levels. To allow this, a user-friendly and reconfigurable pilot interface has been developed. Its description can be found in this chapter.

CHAPTER 1

A hierarchical planning framework for humanoid robots.

Hierarchical frameworks (some examples can be found in [5–8]) are needed when dealing with complex systems characterized by a large number of degrees of freedom. This is indeed the case of humanoid robots, where the number of joints, together with the interaction with the environment, can lead to problems that are intractable to solve. In [9–15] different methods to decrease the problem complexity have been investigated; the same idea has been implemented using motion primitives in [11, 16, 17]. In this work a novel motion primitive based hierarchical planning framework is proposed. In Figure 1.1 the hierarchical structure of the planning framework is depicted, highlighting the different levels used for planning.

The proposed hierarchical framework allows transitions between gross and fine motor skills, using either Motion Description Languages to generate task level plans to be refined by the lower level planners, or ad-hoc joint level whole-body planning if the problem cannot be solved otherwise.


Figure 1.1: The general structure of the proposed hierarchical planning framework.

1.1 Joint Level

The joint level planning is the lowest one, and since the planning algorithms have to deal directly with the robot joints, planning at this level can be affected by the curse of dimensionality. In fact, planning for a humanoid robot directly at the joint level can be very computationally expensive, in particular when dealing with sample based methods. Sample based methods such as [18–20] are usually used when dealing with constrained problems for which an analytic solution does not exist. For example, the Walk-Man robot (see Figure 1.2 and [21] for more details) is a 31-DoF robot, which leads to sampling in R^31 for a joint level sample based whole–body planner.

An example of a joint level primitive proposed in the Walk-Man project and characterized also by a planning phase can be found in [22], where joint level planning under Cartesian constraints has been proposed. That work addresses the problem of sample based planning for robotic arms when dealing with constraints that limit the search space. The basic idea is to relax these constraints for the planning phase and enforce them during the task execution by regulating the contact forces on the end–effectors.

Figure 1.2: The Walk-Man robot. It has 31 DoFs: 7 per arm, 6 per leg, 3 in the torso and 2 in the head.

1.2 Cartesian Level

Planning in the Cartesian space allows the robot kinematics to be left out when trying to find a solution to a given problem (such as walking or multi–contact planning). The various end–effector positions are found using some assumptions and following different heuristics, and kinematic checks are performed on these poses to evaluate the plan feasibility. Particular classes of such planning approaches are footstep planning algorithms (see [23, 24] for examples) and multi–contact planning approaches (see [25] for examples). These two kinds of planners are also under investigation in the Walk-Man project, see Figure 1.3.

Figure 1.3: On the left, the result of a footstep planning algorithm applied to the kinematic model of the Walk-Man robot in a staircase scenario. On the right, the possible result of multi–contact planning applied using a whole–body motion primitive.


1.3 Primitive Level

Motion primitives are considered as stable control laws that can be applied to the considered system. A primitive based planning algorithm is meant to combine these primitives to achieve a certain goal (such as combining locomotion and manipulation, as studied in [26]). Examples of primitive based approaches are [11, 16, 17].

Primitives can always be combined sequentially and, when it does not jeopardize stability, also in parallel. In Chapter 2 a formal definition of the considered motion primitives and the primitive based P-Search∗ algorithm are explained in detail.

1.4 Task Level

Finally, the last considered level of planning is the task one. In this case the environment surrounding the robot is translated into a graph using a Motion Description Language, and a simple search on it is performed in order to find the motion primitives to use to solve the given problem.

In Chapter 3 the Motion Description Language used is described, together with the graph generation and its use to find the primitives needed by P-Search∗ to plan in the primitive space.

CHAPTER 2

Motion primitives based sample based planning.

The text of this chapter is adapted from:

[27] Settimi A, Caporale D, Kryczka P, Ferrati M, and Pallottino L. Motion primitive based random planning for loco-manipulation tasks. In Humanoid Robots (Humanoids), 2016 16th IEEE-RAS International Conference on. IEEE, 2016. In press

[28] Arash Ajoudani, Jinoh Lee, Alessio Rocchi, Mirko Ferrati, Enrico Mingo Hoffman, Alessandro Settimi, Darwin G Caldwell, Antonio Bicchi, and Nikos G Tsagarakis. A manipulation framework for compliant humanoid COMAN: Application to a valve turning task. In Humanoid Robots (Humanoids), 2014 14th IEEE-RAS International Conference on, pages 664–670. IEEE, 2014

[29] Jinoh Lee, Arash Ajoudani, Enrico Mingo Hoffman, Alessio Rocchi, Alessandro Settimi, Mirko Ferrati, Antonio Bicchi, Nikolaos G Tsagarakis, and Darwin G Caldwell. Upper-body impedance control with variable stiffness for a door opening task. In Humanoid Robots (Humanoids), 2014 14th IEEE-RAS International Conference on, pages 713–719. IEEE, 2014

One of the main problems when planning the motion of complex robotic systems is the curse of dimensionality. In the case of robots able to move in and interact with the environment, such as humanoid robots (see Fig. 2.1 for an example) and mobile manipulators, the problem is even worse. For loco–manipulation tasks, several approaches have been proposed in the literature with the goal of reducing such complexity.

The majority of reduction techniques may be classified as whole–body ([9–13]) and primitive based planning ([16, 17]).

It is well known that whole–body planning considers the whole dynamical system, resulting in high dimensional, numerically intractable problems. Sample based approaches, which usually perform better than other classical approaches (such as potential fields, exact and approximated decomposition methods, see e.g. [30]) when dealing with problems for which an exact solution is not provided, are however prohibitive in higher dimensional spaces. The main idea behind whole–body approaches is the limitation of the solution search space based on system and task constraints. For example, in [9, 10] the sampling procedure is biased on constrained submanifolds of the configuration space of lower dimension.

A whole–body contact sampling algorithm has been proposed in [11], where the transitions between contacts are planned to switch from one stance (a robot configuration in space) to another, hence reducing the complexity of the problem. In [12] the whole–body Jacobian based method Stack of Tasks is used to tackle the problem of grasping an object while walking using visual servoing. Once the priorities of the tasks (i.e., balancing, locomotion, manipulation) are defined, the developed controller tries to execute lower priority tasks while guaranteeing execution of higher priority ones. This is done by trying to execute lower priority tasks in the nullspace of the higher priority tasks. Similarly, in [13], a Jacobian based approach is used to decouple the manipulation from the locomotion tasks.

A completely different approach is to develop highly performing and robust control laws for particular task executions that do not fully exploit the robot potentiality. For example, advanced control laws are available to perform object manipulation or to walk on rough terrains. Indeed, a variety of non integrated behaviors is available while there is still a lack of complex whole–body controls. A remarkable approach is hence to exploit and integrate such available control laws to create complex behaviors. In this way manipulation, locomotion, balancing, and other control laws can be combined without dramatically increasing the complexity of the problem. Such techniques are known as primitive based approaches. In [16], high-quality motion primitives are exploited in order to avoid sampling in the whole configuration space. Contact planners, such as those proposed in [11, 16], can be seen as particular whole–body primitives that can be used with other control laws in the herein proposed approach.

Figure 2.1: The Walk-Man robot walking and manipulating an object in the Gazebo simulation environment.

Figure 2.2: The P-Search∗ basic idea. The task is to go from the start position to the goal one, loading an object placed in the environment. The object is placed over a table. In subfigures 1(a-b-c-d) a humanoid robot approaches an object, stops, loads it, and then starts moving again. Subfigures 2(a-b-c-d) show the same task executed without stopping, but just slowing down near the object.


In [17], the possible actions (walking, reaching and pushing) are considered separately and executed in a sequential way, i.e., each action is performed only after the previous one is terminated.

Similarly to the Stack of Tasks proposed in [12], the methods in [31] exploit motion primitives in a hierarchical framework where the priority of the various primitives has to be chosen in order to execute a certain task. System stability and safety depend on these priorities, whose choice is hence critical and task dependent.

In this chapter a primitive-based approach to combine primitives is proposed. The implementation shown here is derived from RRT∗ [19]. With respect to RRT [32] and other random graph based algorithms, such as Informed RRT [33], the sampling space is not changed with heuristics, but the information available from the primitives design is used to structure a sampling space with desirable properties. Differently from other similar methods, given a set of primitives the proposed technique can combine them in parallel: it aims at finding plans that use more than one primitive simultaneously, without jeopardizing the stability of the system. The method is able to jointly use different primitives that were not, in principle, designed to be combined. With respect to the literature, the proposed approach is independent of how primitives are designed and executed. Only basic assumptions on the stability and robustness of the considered control laws are required. The requested properties on the primitives make the proposed approach, named P-Search∗ in the following, independent from the robotic platform it is applied to (such as humanoids, mobile manipulators, mobile wheeled robots). Motion primitives for humanoid robots may involve manipulation or locomotion, and may use different control approaches. Some examples of manipulation primitives are: turn a valve, open a door, grasp an object, bi-manually manipulate an object; locomotion primitives can represent dynamic walking, static walking, crawling gaits, stairs climbing.

P-Search∗ will be proved to be able to provide complex plans in which primitives are combined. For example, let us consider a humanoid robot on which two primitives have been developed: a locomotion primitive to move in the environment, and a manipulation primitive to grab objects with the hand. In Fig. 2.2(1a–1e) the two primitives are used sequentially: the robot stops in front of the object, grasps it and then moves away. An idea of the P-Search∗ outcome is reported in Fig. 2.2(2a–2e), where the robot slows down without stopping its motion to safely catch the object while guaranteeing stability. The obtained behaviour is not possible with a sequential combination of the primitives, nor with a direct parallel combination of the primitives, which can easily produce unstable behaviours. Hence a whole–body control approach (a single complex primitive) or a P-Search∗-like algorithm should be used.

In Section 2.1 the class of considered systems is characterized, and primitives and primitive image spaces are formally defined. In Section 2.2 the P-Search∗ algorithm is described, and it is proved to be probabilistically complete and asymptotically optimal in Section 2.3. Finally, experimental results are reported in Section 2.4 to show the validity of the approach.

2.1 Motion Primitive Formalization

Consider the dynamical system Σ expressed by the equations

\begin{cases} \dot{x}(t) = f(x(t), u(t)) \\ y(t) = h(x(t)) \end{cases} \qquad (2.1)

where x ∈ X ⊆ R^n is the state of the system, u ∈ U ⊆ R^m is the control vector, y ∈ χ ⊆ R^p is the output vector, and f : R^n × R^m → X and h : R^n → χ are respectively the dynamics and the output transformations of the system. The addressed problem is finding a trajectory between two given states minimizing a given cost function, such as time of execution or energy efficiency. Control inputs associated to the obtained trajectory must also be determined.

Definition 1. Let us define as π(q, C, χ, σ, ξ, T) a generic motion primitive, a 6-tuple with:

• q ∈ Q: the parameters that characterize the primitive;

• χ: the image space of the primitive, which corresponds to the image space of the dynamical system output function;

• σ : X × Q → χ: the steering function of the primitive, a set–valued function based on the system dynamics from the primitive space to the image space; it can be a map on (0, 1)^d, with d ≥ 2;

• C : R≥0 × X × Q → R: the cost function associated with the primitive;

• T ∈ R≥0: the duration of the execution of the primitive;

• ξ = ρ(t, y), ρ : R≥0 × χ → Ξ = {0, 1}: a trigger that enables the execution of the primitive, where t is the time variable.

Let P be the finite set of the generic primitives. The parameters q, C, χ and σ characterize the control law u associated to the motion primitive. A similar definition of a simpler control entity can be found in [34], where an atom (u, ξ, t) is characterized by a control law, a trigger and an execution time. The reader should consider the motion primitives as generic, atomic, control laws. These primitives are standalone by construction but can be combined using the proposed approach if certain stability conditions hold.

In case of complex robotic systems such as humanoids or mobile manipulators, two approaches can be used to determine a motion primitive for the whole system: a highly complex (whole–body) primitive based approach or a concatenation of simpler primitives defined on decoupled subsystems of lower dimension. The first approach suffers from the curse of dimensionality, hence several simplifications are usually made to tackle the problem, obtaining motion primitives that do not fully exploit the system potentiality. In the latter case, for example, a decoupled locomotion and manipulation approach does not take advantage of the loco–manipulation capabilities of the robotic platform.
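As a concrete illustration, the 6-tuple of Definition 1 maps almost one-to-one onto a plain data structure. The sketch below (in Python) is only an illustrative container written for this text: the class name MotionPrimitive, the field names and the use of callables for σ, C and ξ are choices made here, not the thesis implementation.

from dataclasses import dataclass
from typing import Callable, Sequence, Tuple

@dataclass
class MotionPrimitive:
    """A generic motion primitive pi(q, C, chi, sigma, xi, T), mirroring Definition 1."""
    name: str
    q: dict                                      # parameters that characterize the primitive
    chi_bounds: Sequence[Tuple[float, float]]    # bounds of the image space chi, one (lo, hi) pair per dimension
    steer: Callable                              # sigma: steering function, maps (z_from, z_to) to a path in chi
    cost: Callable                               # C: cost associated with a path (e.g. execution time)
    duration: float                              # T: duration of the execution of the primitive
    trigger: Callable                            # xi = rho(t, y): returns True when the primitive may be executed

    @property
    def dim(self) -> int:
        # dimension d of the image space, later used e.g. in the Near radius of eq. (2.3)
        return len(self.chi_bounds)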

For this reason, systems that can be partitioned into N coupled subsystems will be considered, with x = [x′_1 x′_2 ... x′_N], u = [u′_1 u′_2 ... u′_N] and y = [y′_1 y′_2 ... y′_N], where the dynamics can be expressed as:

\begin{cases} \dot{x}_1 = f_1(x, u_1), & y_1 = h_1(x) \in \chi_1 \\ \dot{x}_2 = f_2(x, u_2), & y_2 = h_2(x) \in \chi_2 \\ \quad \vdots & \\ \dot{x}_N = f_N(x, u_N), & y_N = h_N(x) \in \chi_N \end{cases} \qquad (2.2)

It is worth noting that, even if each subsystem Σ_i = { ẋ_i = f_i(x, u_i), y_i = h_i(x) ∈ χ_i } depends on a different control variable, it also depends on the whole state variable x ∈ R^n. A motion primitive π_i is defined for each subsystem Σ_i, since it has direct effects on Σ_i and indirect effects on all the other subsystems Σ_j with j ≠ i, through the evolution of the state variable x. It is worth noting that the primitives set associated to a system can also involve only a subset of its degrees of freedom.

The main idea of the proposed approach is to use the motion primitives π_i for the N subsystems Σ_i as local steering functions in classical sample based planning algorithms to obtain a plan for the whole system Σ. One of the basic assumptions on the motion primitive π_i is that the associated subsystem remains stable when the controls are null (null controls generate steady motions or trim trajectories, see [35]). Moreover, feasibility conditions on the primitives concurrency must be carefully checked (e.g., in the case of humanoid robots, a check on the ZMP condition [36] can be used for this purpose).

One of the main strengths of this approach is that it does not sample in the whole state space X of Σ, as in RRT, RRT∗, PRM [20], but only in each primitive image space, thus reducing dimensionality. In fact, the P-Search∗ algorithm samples on the union of the primitive image spaces, which correspond to submanifolds of the configuration space. Our implementation is derived from RRT∗, but it could be based on any of the aforementioned planning algorithms. In fact, the idea is to modify these algorithms to reduce the search space and increase the overall performance. Indeed, from a direct comparison of the uninformed algorithms with their primitive based versions we expect a reduced planning time, due to the reduced search space volume. Such comparisons have not been investigated in this work.

A tree T, whose vertices are points z ∈ χ, will iteratively grow on the union of the primitive image spaces, which must hence be connected. Indeed, in order to connect samples on an image space χ_j to the growing tree, χ_j must intersect at least one of the image spaces on which the tree lies. In other words, given the set {z_i}_1^k ∈ T of samples in the tree, it is possible to sample in the image space χ_j only if there exists z_h ∈ T lying on an image space χ_h with χ_j ∩ χ_h ≠ ∅.

At each iteration i, there exists a non-empty set of active primitives (whose trigger conditions ξ_i are verified), denoted as P_A, and sampling is done in the union of their image spaces, denoted as χ_A. It is worth noting that sampling in each image space is allowed after a trigger condition ξ_i has been activated, which corresponds to connecting a sample in χ_i ∩ χ_A to the tree T. For example, a grasp primitive is not activated until the object to be grasped is sufficiently close.
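The activation rule just described (sampling in χ_i is allowed only once the trigger ξ_i has fired, e.g. a grasp primitive only near the object) amounts to a simple filter over the primitive set. A minimal sketch, assuming for illustration that each primitive object exposes a membership test contains and a trigger callable (hypothetical names, not the thesis API):

def active_primitives(primitives, t, z_new):
    """Return the set P_A of primitives that can be sampled at time t from the node z_new.

    Each primitive is assumed (for this sketch) to expose:
      contains(z): True if z lies in the primitive image space chi_i,
      trigger(t, y): the trigger xi_i = rho(t, y) of Definition 1.
    """
    return [p for p in primitives if p.contains(z_new) and p.trigger(t, z_new)]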

Remark 1. Let P = (V, E) be the graph whose nodes in V are associated to motion primitives π_i. An edge in E between nodes π_i and π_j exists if χ_i ∩ χ_j ≠ ∅. A motion planning problem can be solved with the primitive-based sampling algorithm P-Search∗ only if the graph P is connected. Indeed, a tree T connecting any start and goal configurations can be constructed based on samples only if the union of the primitive image spaces is connected. This condition corresponds to a connected graph P.

This means that a planning problem can be solved by P-Search∗ if and only if the start and goal states can be connected using the available motion primitives. Moreover, this implies that a continuous trajectory exists connecting χ_i and χ_j. As a clarifying example, let us consider a robot that must turn a valve positioned on cluttered terrain. If the available motion primitives do not include a locomotion control law capable of crossing such type of terrain, there is no way that the planning problem can be solved.
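The solvability condition of Remark 1 can be checked before planning with a plain reachability test on the primitive graph P. The sketch below is a minimal illustration: the predicate spaces_intersect, which decides whether two image spaces overlap, is a user-supplied helper assumed here, not part of the thesis code.

from collections import deque

def primitive_graph_connected(primitives, spaces_intersect):
    """True if the graph P = (V, E) of Remark 1 is connected.

    primitives: list of primitive identifiers (the nodes V).
    spaces_intersect(a, b): True if the image spaces chi_a and chi_b intersect (the edges E).
    """
    if not primitives:
        return True
    visited = {primitives[0]}
    queue = deque([primitives[0]])
    while queue:                                  # breadth-first search over P
        current = queue.popleft()
        for other in primitives:
            if other not in visited and spaces_intersect(current, other):
                visited.add(other)
                queue.append(other)
    return len(visited) == len(primitives)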

To clarify the previous statements, let us consider the problem of moving a mobile robot in the environment and interacting with an object through manipulation, thanks to one or more robotic arms equipped with hands or grippers. For this problem the full state space is (x, y, θ, v, q_1, ..., q_N), where (x, y) is the position on the plane, θ is the heading of the robot, v is the robot walking speed and q_i, i = 1, ..., N are the arm joint coordinates.

For this purpose two primitives have been defined:

L: the locomotion primitive, to move the robot on the XY plane, characterized by the following parameters:

• q_L = ∅,

• χ_L ∋ [x y θ v]^T,

• σ_L, an optimization routine that, applied on a simplified dynamics, minimizes the time variable t subject to state and control constraints and returns the robot desired trajectory,

• C_L = t,

• T_L, the duration of execution of the steering function σ_L,

• ξ_L = 0 until a sample lying in χ_L is added to T.

M: the manipulation primitive, to move the end-effector towards the object and manipulate it, characterized by the following parameters:

• q_M = o, where o is the object pose,

• χ_M ∋ [x y τ]^T,

• σ_M, the inverse kinematics of the robotic arm, giving the desired joint values corresponding to a certain value of o and τ,

• C_M = t,

• T_M, the duration of execution of the steering function σ_M,

• ξ_M = 1 when ‖o − r‖ ≤ δ, with r the pose of the robot and δ > 0.


τ can be set as the rate of completion of the manipulation task in terms of distance between the end–effector and the object. For a grasping task τ = 1 means that the end–effector is gripping the object, and τ is equal to 0 when the end–effector is beyond the maximum distance to perform a successful grasping. This is formalized by the M primitive trigger condition.

Note that, with the proposed approach, it is possible to sample in a 4-dimensional space for the locomotion primitive and in a 3-dimensional space for the manipulation one, instead of sampling in the full (4 + N)-dimensional configuration space of the robot, as τ is used to sample only one scalar instead of the entire joint space of the robotic arm. This is even more convenient when dealing with a higher number of DoFs. This is one of the advantages of the method: the trajectory generator computes the joint values corresponding to the value of τ, and the P-Search∗ algorithm checks for feasible primitive compositions.
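For the L and M primitives of this example, the reduced sampling spaces and the trigger ξ_M can be written down directly. The snippet below is only illustrative: the numeric bounds and the threshold delta are placeholder values chosen here, not parameters from the thesis experiments.

import math
import random

# chi_L contains [x, y, theta, v]; chi_M contains [x, y, tau].
CHI_L_BOUNDS = [(-5.0, 5.0), (-5.0, 5.0), (-math.pi, math.pi), (0.0, 0.5)]   # placeholder bounds
CHI_M_BOUNDS = [(-5.0, 5.0), (-5.0, 5.0), (0.0, 1.0)]

def sample(bounds):
    # uniform sample in the (obstacle-free) image space
    return [random.uniform(lo, hi) for lo, hi in bounds]

def trigger_M(robot_xy, object_xy, delta=1.0):
    # xi_M = 1 when ||o - r|| <= delta: manipulation becomes available only near the object
    return math.dist(robot_xy, object_xy) <= delta

# 4 + 3 scalars are sampled instead of the full (4 + N)-dimensional configuration space:
z_L = sample(CHI_L_BOUNDS)
z_M = sample(CHI_M_BOUNDS)
print(trigger_M(z_L[:2], object_xy=[1.0, 2.0]))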

Figure 2.3: The image spaces representation of a generic loco–manipulation task.

In Fig. 2.3 the image spaces related to the proposed scenario are depicted; note that they can represent any generic loco–manipulation task. Let us suppose that the problem is to reach a location where a manipulation task is executed (e.g. grasping an object) and then to come back close to the starting position. As can be seen in the figure, χ_L is represented both for τ = 0 and τ = 1, and the tree connecting the starting point to the goal is made by a combination of L primitives (solid lines) and M primitives (dotted lines). It is worth noting that dotted lines can involve both locomotion and manipulation, resulting in a loco–manipulation behaviour. Also, note that performing a manipulation action with the M primitive while moving in the (x, y) plane is allowed only when a sample in χ_L is connected to a sample in χ_M which had nonzero walking speed v. This is coherent with the idea that the manipulation primitive does not affect the locomotion image space directly, but indirectly through the effect of a constant control (trim trajectory) on the locomotion primitive.

2.2 The P-Search* algorithm

In this Section, the P-Search∗ planning algorithm is described; it connects initial and goal samples z_I and z_G respectively, which lie in the obstacle-free region of two possibly different image spaces, for example from a standing robot in a given position to a pose in which the robot holds an object in its hands.

As in many other planning algorithms, alternative versions could be obtained by growing the tree T starting from the goal and back–chaining the primitives to the start or, alternatively, starting two branches from the beginning and the end and trying to connect them (see [18]). Here, only the forward growth is described.

Let χ_{i,obs} be the obstacle region in the primitive image space, such that χ_i \ χ_{i,obs} is an open set. In the following, for notational convenience, the generic obstacle free image space χ_{i,free} is referred to as χ_i.

The initial sample z_I is assumed to be in a feasible condition, which means that there exists an active primitive π_i ∈ V with an associated image space χ_I and z_I ∈ χ_I.

2.2.1 P-Search*

The P-Search∗ algorithm, given initial and goal samples z_I, z_G, provides a tree T whose vertices are points z ∈ χ. Given two vertices z_i, z_j ∈ χ_k, an edge (z_i, z_j) is an instantiation of the primitive π_k ∈ V that steers z_i toward z_j in χ_k.

Line 1: An empty tree T is created and populated with the initial point.

ActivePrimitives: Given the newly added point z_new, the ActivePrimitives function computes all the primitives π_j such that z_new ∈ χ_j and such that z_new satisfies the π_j trigger condition ξ_j. The set of such primitives is returned and stored in P_A.

Primitive Sampling: The SamplePrimitive function returns a random primitive from the set P_A.

Algorithm 1: T ← P-Search∗(z_I, z_G)

Data: P = (V, E), z_I, z_G
Result: T, a tree whose vertices are points z ∈ χ. Given two vertices z_i, z_j ∈ χ_k, an edge (z_i, z_j) is an instantiation of the primitive π_k ∈ V that steers z_i toward z_j in χ_k.

1: T ← InsertNode(∅, z_I, T);
2: z_new ← z_I;
3: for i = 1 to N do
4:     P_A ← ActivePrimitives(z_new);
5:     χ_i ← SamplePrimitive(P_A);
6:     z_rand ← Sample(χ_i);
7:     (z_new, T) ← LocalRRT∗(χ_i, z_rand, T);
8: return T;

Sampling: The Sample function takes as argument the obstacle free image space χ_k from where the random sample z ∈ χ_k is extracted.

LocalRRT*: Given an image space χ_k, a sample z ∈ χ_k and the tree T, the LocalRRT∗ algorithm tries to add the sample z to the tree. A detailed description of the procedure is reported in Section 2.2.2. The outcome of this function is a node z_new and an updated tree T where z_new has been added if a collision–free path in χ_k from an existing node in T to z_new is found. Moreover, the path is inserted in the tree set of edges with its associated cost.
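Putting the pieces above together, the outer loop of Algorithm 1 can be sketched as follows. This is only an abstract rendering written for this text, not the MATLAB implementation released with the thesis: the callables active_primitives, sample_in and local_rrt_star stand in for the routines of the same name, and the tree is stored in a simple dictionary.

import random

def p_search_star(z_init, n_iterations, active_primitives, sample_in, local_rrt_star):
    """Abstract rendering of Algorithm 1 (P-Search*).

    active_primitives(z): primitives whose image space contains z and whose trigger is satisfied at z.
    sample_in(primitive): a random sample drawn in the primitive's obstacle-free image space.
    local_rrt_star(primitive, z_rand, tree): Algorithm 2; returns (z_new, tree), with z_new = None on failure.
    """
    tree = {"nodes": [z_init], "edges": []}          # T <- InsertNode(empty, z_I, T)
    z_new = z_init
    for _ in range(n_iterations):
        p_active = active_primitives(z_new)          # P_A <- ActivePrimitives(z_new); non-empty since z_I is feasible
        primitive = random.choice(p_active)          # SamplePrimitive(P_A)
        z_rand = sample_in(primitive)                # Sample(chi_i)
        z_candidate, tree = local_rrt_star(primitive, z_rand, tree)
        if z_candidate is not None:                  # on failure, keep sampling from the last inserted node
            z_new = z_candidate
    return tree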

2.2.2 LocalRRT*

The procedure LocalRRT∗ is now described in detail.

Algorithm 2: (z_new, T) ← LocalRRT∗(χ_k, z, T)

z_nearest ← Nearest(χ_k, z, T);
(z_new, x_new) ← Steer(z_nearest, z);
if Unfeasible(x_new) then
    return (−, T);
if ObstacleFree(x_new) then
    Z_near ← Near(z_new, T);
    z_min ← ChooseParent(T, Z_near, z_nearest, z_new);
    T ← InsertNode(z_min, z_new, T);
    T ← Rewire(T, Z_near, z_min, z_new);
    return (z_new, T);
else
    return (−, T);

Nearest: Given an image space χ_k, a sample z ∈ χ_k and the tree T, the Nearest function returns the vertex z_nearest ∈ χ_k of the tree which is the nearest to the random point z. By construction, for each point in χ_k there exists at least a vertex of T inside the same image space.

Steer: Given the sample node z and the nearest vertex z_nearest ∈ T, the Steer function first computes a point z_new between z_nearest and z. Moreover, it attempts to connect z_nearest with z_new. Note that the function is image space–dependent, hence it attempts to connect two points inside the same image space χ_k. If any, the path between points z_nearest and z_new is returned. The Steer function is the implementation of the steering function σ of the primitive π_k in Definition 1.

Unfeasible: the function Unfeasible checks whether the trajectory x_new computed by the function Steer violates additional conditions such as kinematic or dynamic constraints (e.g., ZMP stability conditions). Such function guarantees that the parallel primitives composition does not lead to unstable behaviours.

ObstacleFree: a check on the trajectory returned by the Steer function is performed by the function ObstacleFree. In case of failure the sample z_new is discarded, otherwise the Near routine is called.

Near: the Near function returns the set Z_near of the tree nodes that are in a ball centered at the current sample z_new, with radius dependent on the current cardinality of the nodes of the tree (see [19] for details). Such nodes must lie in one of the image spaces z_new lies on. The radius of the ball is computed as in [37] and it is reported in (2.3), where d is the dimension of the search space associated to the considered primitive and n is the current number of samples in such space.

r = \min\left\{ \gamma \left( \frac{\log(n)}{n} \right)^{\frac{1}{d}},\ \eta \right\} \qquad (2.3)

η and γ are tuning variables, and to guarantee asymptotic optimality the following must hold:

\gamma > \left( 2 \left( 1 + \frac{1}{d} \right) \right)^{\frac{1}{d}} \left( \frac{\mu(X_{free})}{\zeta_d} \right)^{\frac{1}{d}}, \qquad (2.4)

where ζ_d is equal to the volume of the unit ball in the d-dimensional Euclidean space and μ(X_free) is the Lebesgue measure of the obstacle–free space.
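A direct transcription of (2.3) and (2.4) is shown below; the free-space measure mu_free, the bound eta and the safety factor used to satisfy the strict inequality are inputs that the user must supply (the values used in the thesis experiments are not reported here).

import math

def unit_ball_volume(d):
    # zeta_d: Lebesgue measure of the unit ball in the d-dimensional Euclidean space
    return math.pi ** (d / 2.0) / math.gamma(d / 2.0 + 1.0)

def near_radius(n, d, mu_free, eta, safety=1.01):
    """Radius of the Near ball, eq. (2.3), with gamma chosen to satisfy eq. (2.4).

    n: current number of samples in the search space of the considered primitive.
    d: dimension of that search space.
    mu_free: Lebesgue measure of the obstacle-free space.
    eta: upper bound on the radius (tuning variable).
    """
    gamma = safety * (2.0 * (1.0 + 1.0 / d)) ** (1.0 / d) * (mu_free / unit_ball_volume(d)) ** (1.0 / d)
    if n < 2:
        return eta                                  # log(1) = 0 would give a zero radius
    return min(gamma * (math.log(n) / n) ** (1.0 / d), eta)

# Example: 100 samples in a 4-dimensional locomotion image space (placeholder numbers)
print(near_radius(n=100, d=4, mu_free=50.0, eta=1.0))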

ChooseParent: the set Z_near is used to choose the node in T that will be the parent of z_new. The candidate connections are obtained by applying the Steer function from any node in Z_near toward z_new; z_min is the starting point of the minimum cost path.

Rewire: this function performs in the same manner as in [19], considering all the possible primitives for rewiring. The rewiring procedure allows adding and removing arcs in the tree to obtain lower cost paths from the start point z_I to existing nodes in Z_near through z_new.
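The extension step of Algorithm 2 can then be outlined as below. This is an illustrative outline under simplifying assumptions made here: the tree is a dictionary of parents and costs with nodes stored as tuples, distances are Euclidean, collision checks during rewiring are omitted, and steer, unfeasible and obstacle_free are user-supplied callables standing in for the routines described above.

import math

def local_rrt_star(z_rand, tree, steer, unfeasible, obstacle_free, radius):
    """Illustrative outline of Algorithm 2 (LocalRRT*).

    tree = {"parent": {node: parent}, "cost": {node: cost_from_root}}, nodes as tuples.
    steer(z_from, z_to) returns (z_new, x_new): the new point and the connecting trajectory.
    """
    nodes = list(tree["parent"].keys())
    z_nearest = min(nodes, key=lambda z: math.dist(z, z_rand))        # Nearest
    z_new, x_new = steer(z_nearest, z_rand)                           # Steer
    if unfeasible(x_new) or not obstacle_free(x_new):
        return None, tree                                             # sample discarded
    z_near = [z for z in nodes if math.dist(z, z_new) <= radius]      # Near
    z_min = min(z_near or [z_nearest],                                # ChooseParent: minimum cost-from-root
                key=lambda z: tree["cost"][z] + math.dist(z, z_new))
    tree["parent"][z_new] = z_min                                     # InsertNode
    tree["cost"][z_new] = tree["cost"][z_min] + math.dist(z_min, z_new)
    for z in z_near:                                                  # Rewire through z_new when cheaper
        new_cost = tree["cost"][z_new] + math.dist(z_new, z)
        if new_cost < tree["cost"][z]:
            tree["parent"][z] = z_new
            tree["cost"][z] = new_cost
    return z_new, tree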

2.3 Probabilistic Completeness and Asymptotic Optimality

Here an investigation of the properties of asymptotic optimality and probabilistic completeness is performed; these properties are fundamental for sample-based algorithms to achieve optimal results. Since the algorithm is implemented as a modified version of RRT∗ [19], it retains many properties from previous work, and they can be used to demonstrate the aforementioned properties.

It is well known that probabilistic completeness holds for the RRT and RRT* algorithms [37, 38]. Here the same result is provided for P-Search∗. In what follows, µ indicates the probability measure over a sampling space χ. The probabilistic completeness and optimality properties of RRT∗ follow from the property of robust feasibility of paths (i.e., obtained paths are at least at a δ > 0 distance from the obstacles). The robust feasibility property is verified also by the P-Search∗ algorithm through a proper implementation of the ObstacleFree function (if a sample or a path is closer than δ to an obstacle, it is discarded). Hence the following properties can be proved.

Theorem 1. If the primitive graph P is connected, then P-Search∗ is probabilistically complete.

Proof. It must be noticed that, using just one primitive, the algorithm reduces to RRT∗, since the whole sampling space coincides with the image space of that primitive, χ_I. Hence, given any two samples in the same image space, the probability of finding a path between the samples by applying P-Search∗ tends to 1.

Consider now a pair of samples (z_i, z_j) in different image spaces, χ_i and χ_j. Since the primitive graph P is connected, there always exists a sequence of pairwise intersecting image spaces from χ_i to χ_j. By the probabilistic completeness on each image space, the thesis holds. □

In the Results section of [19] a rear wheel–steered non–holonomic ground vehicle is used to validate the RRT∗ planning algorithm. If the proposed method were applied to this example, the search space would be equal to the image space of the motion primitive (control law) that allows the robot to move in the environment.


In this case we considered only one primitive, but it must be noticed that more than one motion primitive could be defined for the considered system, such as translating on the plane and rotating on the spot. As a matter of fact, the definition of motion primitives is a key point of the proposed method: it has to be performed by the user and, at the current time, there is no automatic procedure for motion primitive definition.

Theorem 2. P-Search∗ is asymptotically optimal.

Proof. Asymptotic optimality of RRT∗ follows from the choice of the radius r used in the RRT∗ Near function. Each primitive image space has an associated search radius (see eq. (2.3)). Since the number of primitives is finite, it is sufficient to consider the largest radius while implementing the P-Search∗ function Near. □

It must be noticed that the aforementioned properties of probabilistic completeness and asymptotic optimality strongly depend on the primitives defined for the system. The cost achievable by the computed solution is limited by the fact that the user could select primitives that do not fully exploit the capabilities of the robot. Moreover, since the primitive properties are used to inform the proposed sampling procedure, the search space is reduced with respect to the one used in standard methods (e.g. RRT∗). In the end, the proposed method finds the best possible solution exploring the reduced search space given by the selected primitives.

2.4 Results

Both simulation and experimental tests have been conducted on a mobile manipulator (consisting of an iRobot Create 2 mobile platform, with an OWI-535 Robotic Arm (4 joints) mounted on board) and on the simulated model of the humanoid robot Walk-Man [39], demonstrating the capability of the approach on different robots with their own primitives (see Fig. 2.4). P-Search∗ has been implemented in MATLAB to validate the effectiveness of the algorithm when performing off-line planning for the aforementioned robots.

The developed code for the P-Search∗ implementation is available online^1, together with all the open source code related to the Walk-Man controllers^2 and the mobile manipulator ones^3.

The faced problem consists of a loco–manipulation task, where a mobile robot moves in the environment while interacting with an object. To achieve this, a manipulation primitive and a locomotion primitive are needed. Although the aforementioned primitives are equivalent in terms of planning, the implemented control laws are different and depend, of course, on the robot type. More details about the primitive related control laws are given in the following.

^1 https://github.com/CentroEPiaggio/primitives
^2 https://gitlab.robotology.eu/walkman-drc
^3 https://github.com/MirkoFerrati/irobotcreate2ros

Figure 2.4: The robotic platforms used in the experiments.

Mobile Manipulator - Locomotion: Given the plan solution's waypoints (x, y, θ, v, τ), in case of activation of the L primitive, x, y, θ and v are used to generate the mobile robot trajectory, optimizing the execution time for a unicycle model.

Mobile Manipulator - Manipulation: Given the plan solution's waypoints (x, y, θ, v, τ), in case of activation of the M primitive, τ is used to generate the desired end–effector position with respect to the target object pose; then Inverse Kinematics is applied to obtain the desired joint values.

Humanoid Robot - Locomotion: In this case x, y, θ and v are used to generate the foot placement and feet trajectories necessary to arrive at the desired position. Based on the foot placement the ZMP reference trajectory is generated and used by the Preview Controller [40] to compute the COM reference trajectory. During execution an implementation of the feedback controller described in [41] is employed to additionally stabilize the robot around the precalculated pattern.


Humanoid Robot - Manipulation: the manipulation primitive is analogous to the one of the mobile manipulator, except for the kinematics of the arm. The Walk-Man robot arm has 7 DoF, thus redundancy can be exploited.


Figure 2.5: Snapshots of the performed loco–manipulation experiment.

In Fig. 2.5 an experiment for the mobile manipulator involving an obstacle is reported. The robot approaches the object using the locomotion primitive (Fig. 2.5a), then it slows down and keeps this primitive active at constant velocity while using the manipulation primitive to grab the object (Fig. 2.5b, 2.5c). At the end, the manipulation primitive is maintained at constant (null) velocity and the locomotion primitive is used to bring the object to the desired position (Fig. 2.5d).

In Fig. 2.6 the corresponding image spaces seen from above are reported. The obstacle is reported as the grey zone in the sampling space.

Fig. 2.7 shows the sampling space of the related experiment, with the primitive image spaces. To perform the task, the tree grows from the locomotion primitive image space with τ = 0 to the one at τ = 1, passing through the manipulation primitive image space, represented as a cuboid. The black line represents the final plan generated for the robot.


Figure 2.6: Sampling space of experiment depicted in Fig.2.5. The obstacle has been enlarged to take the robot dimension into account.

The same primitives have also been implemented to perform complex loco–manipulation tasks using the Walk-Man robot in the Gazebo simulation environment. Here the task consists of removing an object while the robot is walking and avoiding obstacles. The same experiment has been performed on the real robot (see Figure 2.9). In Figure 2.10 another experiment using a box instead of a bar is reported; in this case the involved primitives are the same, but the manipulation one has been chosen with different parameters since the task requires a different arm motion.

The representation of the environment is reported in Fig. 2.11, where the green, blue, and black dots represent respectively the starting position, the object to manipulate and the goal position.

In Fig. 2.12 the plan found by P-Search∗ is shown. The manipulation task accomplishment is represented by the tree growing along the vertical axis; in fact, the start position is at (0, 0, 0) while the goal position is at (1.5, 0, 1). As depicted in Fig. 2.8, the robot executes the task similarly to the mobile manipulator: Walk-Man walks to reach the object, then it starts to manipulate it while walking until the object is removed from the robot's way and it can safely cross the passage.


Figure 2.7: 3D view of the sampling space of experiment depicted in Fig. 2.5.


Figure 2.8: Walk-Man robot performing a loco–manipulation experiment in the Gazebo simulation environment. The robot first walks to the object, and then starts to raise it to make it slide away while walking until it is removed.

2.5 Primitives examples

The motion primitives are single control entities meant to be combined to generate a desired behavior. Such control laws for humanoid robots are usually developed for locomotion or for manipulation purposes, but they can be parametrized and used as motion primitives to be combined, obtaining loco–manipulation tasks.

Figure 2.9: The Walk-Man robot performing a loco–manipulation experiment. The robot first walks to the object, and then starts to raise it to make it slide away while walking until it is removed.

During the Walk-Man project, the robotic platform has been used in the Darpa Robotics Challenge and other events where several different tasks have been performed. At the current moment, other tasks have been performed only in the simulation environment and will be tested on the real robot in the near future. Examples of used motion primitives are:

• turn a valve;
• open a door;
• walk on a flat terrain;
• remove an object from the way of the robot;
• drive a vehicle.

Figure 2.10: Walk-Man robot performing a loco–manipulation experiment. The robot first walks to the object, and then starts to push it away while walking until it is removed.


Figure 2.11: Sampling space of loco–manipulation experiment reported in Fig. 2.8.

For the valve turning primitive [28], the manipulation module first calculates two sets of desired grasping positions and orientations for the right and left hand, with a small offset distance from the valve chassis. Consequently, the valve turning task is broken down into four basic motions: reaching (Reach), approaching to grasp the valve (Grasp), rotating the valve (Rotate) and releasing/moving away from the valve (Disengage). The transition between the phases of the task is achieved using a state machine paradigm and can be controlled or interfered with by the user.

The Reaching action utilizes the right and left arm end-effector Jacobians and implements a closed loop inverse kinematics (CLIK) algorithm to reach the desired left and right hand positions. Smooth point-to-point trajectories for both hands are generated by the trajectory generation module using a fifth order nonlinear function. Once the SoftHands reach the desired positions, the Grasp primitive makes them approach the valve and grasp it firmly. Consequently, the circular trajectories (using the developed trajectory generation module) are built for both hands, taking the valve data into account, and realized. The rotation angle can be re-defined using the pilot interface.
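The phase sequencing described above can be captured by a very small state machine. The sketch below is purely schematic: the phase names follow the text, while the completion test and the operator-override hook are hypothetical stand-ins for the real state machine and pilot interface hooks.

PHASES = ["Reach", "Grasp", "Rotate", "Disengage", "Done"]

def valve_task_step(phase, phase_completed, operator_override=None):
    """One tick of a schematic state machine for the valve turning primitive.

    phase_completed(phase): True when the current basic motion has finished.
    operator_override: optional phase name forced by the pilot, who can interfere with the transitions.
    """
    if operator_override in PHASES:
        return operator_override
    if phase_completed(phase):
        return PHASES[min(PHASES.index(phase) + 1, len(PHASES) - 1)]
    return phase                                   # keep executing the current basic motion

# Example: advance from Reach once both hands are at the pre-grasp poses
print(valve_task_step("Reach", phase_completed=lambda p: True))       # -> Grasp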


Figure 2.12: 3D view of the sampling space of the experiment depicted in Fig. 2.8.

In the snapshots in Figure 2.13 all the steps of a valve-turning task executed by the COMAN robot [42] are reported. In Figure 2.14 the same task is executed by the Walk-Man robot.

For the door opening (see [29]), the door handle data is identified from the 3D point cloud of the scene, and the basic motions Grasping, Turning, Opening, Supporting, and Ungrasping composing the door opening motion primitive are executed. In Figures 2.15 and 2.16 the door opening task executed by the COMAN robot in the lab and by the Walk-Man robot during the DRC is reported, respectively.

The vehicle driving primitive has been realized with two simple basic motions: steering the wheel and accelerating. In Figure 2.17 the Walk-Man robot driving a utility vehicle during the Darpa Robotics Challenge is reported. Given the size of the robot, it was positioned in the middle of the seats, using the left leg to accelerate and the left arm to steer the wheel.

More complex motion primitives can be obtained through lower level planning procedures. Examples of such primitives are footstep planner and multicontact planner approaches. The contact poses are generated by the planner, while the motions are executed with the sequential use of either the walk on a flat terrain primitive or a whole–body cartesian control law. During the Walk-Man project a footstep planner (see Figure 2.18) has been developed and it is currently being extended to a multicontact planner to solve tasks such as the one shown in Figure 2.19.

Figure 2.13: The valve rotating primitive executed by the COMAN robot.

Figure 2.14: The valve rotating primitive executed by the Walk-Man robot.

Figure 2.15: The door opening primitive executed by the COMAN robot.

The idea of such a planner is to generate contact switchings for the humanoid robot in order to move it to the desired position and configuration. In particular, the contact state of the robot (each contact point either in touch or not) is represented by binary strings, and only strings that differ by one bit can be connected to generate a transition between the related configurations.
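A minimal sketch of this representation is given below, assuming four contacts (two feet and two hands); each configuration is a binary string and two configurations are connected only if their strings differ by exactly one bit, i.e. a single contact is made or broken.

```python
from itertools import product

# Contact state encoded as a binary string (1 = in contact, 0 = free);
# four contacts are assumed here: [left_foot, right_foot, left_hand, right_hand].
def neighbors(state: str):
    """States reachable by making or breaking exactly one contact."""
    for i in range(len(state)):
        flipped = "1" if state[i] == "0" else "0"
        yield state[:i] + flipped + state[i + 1:]

# Transition graph over all contact states: edges only between strings at
# Hamming distance one, as required for a valid contact switch.
states = ["".join(bits) for bits in product("01", repeat=4)]
graph = {s: list(neighbors(s)) for s in states}

print(graph["1100"])  # double support with free hands: ['0100', '1000', '1110', '1101']
```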


Figure 2.16: The door opening primitive executed by the Walk-Man robot during the Darpa Robotics Challenge.

Figure 2.17: The driving primitive executed by the Walk-Man robot during the Darpa Robotics Challenge.


Figure 2.19: Walk-Man robot performing a multi–contact whole–body obstacle crossing experiment.


CHAPTER 3

Motion Description Language for primitive based planning.

The introduced primitive based planning framework is meant to work in a hierarchical way. In particular, the planning procedures must be able to use fine or gross motor skills depending on the situation. As an example, when the environment is characterized by a flat ground the robot can walk using standard techniques based on the Zero Moment Point (ZMP); from the planning point of view these methods merely require the robot position and orientation on the plane (as a unicycle). On the other hand, when the terrain is uneven or unstable, a static walking can be performed; hence a contact planner must be used, lowering the planning level. In the first case the computational advantages of Motion Description Languages will be exploited, in the latter an ad–hoc whole–body planning is necessary.

Motion Description Languages [34, 43–45] allow complex plans for robots to be formally defined through the combination of pre–defined control laws. In the literature different ways of combining basic control entities are introduced, and different rules guarantee feasibility and stability conditions. In the Process Algebra domain (see [46, 47]) the composition of atomic actions is addressed, leading to complex processes (behaviors).


Following the same ideas, a primitive based Motion Description Language (namely, P–MDL) is proposed here to compose stable motion primitives into new complex whole–body behaviors. These represent the words of the considered language, and can be combined by means of different operators that will be introduced in Sec. 3.1.

Motion primitives may be used to accomplish a complex mission, such as those necessary to find survivors in a disaster scenario or to help a human in emergency operations as those considered in the Walk-Man project. For the sake of clarity, let us refer to scenarios such as those reported in Figure 3.1, where the robot must accomplish several sequenced tasks (Figure 3.1a) or can choose a particular sequence of tasks to perform the mission (Figure 3.1b). In the first scenario the robot must traverse a corridor blocked by a (blue) pipe that must be removed; then a (red) box in the middle of the room must be displaced. As a third step the robot must traverse a corridor in which a (black) hole prevents it from walking: in this case the (grey) lateral desks (or support surfaces) must be used to overcome the hole without falling. Finally, the robot must remove the (orange and purple) objects to eventually turn the (green) valve. The second scenario, instead, consists in turning three (green) valves that are located in a room behind two (brown) locked doors. To unlock the doors the robot must first push the red button on the black support, and then it can choose from which door to enter the valve room. The task sequences necessary to perform the mission can be formally defined as sequences of activation of different control primitives, and a planning algorithm can be used to compute a feasible sequence subject to temporal or logic constraints.

3.1 The primitive based MDL - P–MDL

The collection of developed primitives composes a library that can be used by a higher level planner or by the robot operator (as occurred during the DRC) to create more complex behaviors. For example, the task execution order can be planned at a higher level taking into account the motion primitives' peculiarities, such as stability properties, time of accomplishment, etc. Moreover, the higher level planner must also rely on visual information about the environment, such as object detection and affordances.

Recalling the given definition of motion primitives, the trigger of a motion primitive π is the key to allowing parallel primitive execution and even more complex behaviors. For example, a manipulation primitive is not activated until the robot is sufficiently close to the object to manipulate. Such activation, based on space or time conditions, is the responsibility of the trigger conditions.


(a) An obstructed corridor scenario: the tasks to execute are to remove an object from the way, push a heavy object to one side, use support surfaces as hand supports to cross an uneven terrain, and turn the green valve after the objects in front of it have been removed.

(b) A more complex scenario: here the robot has to push a button to unlock the two doors in front of it. Once in the second room, the robot has to rotate the three valves before exiting through the third door.

Figure 3.1: Case study scenarios.

Trigger conditions can also be used to schedule the order of task execution. Moreover, the available primitives can grow in number in complex scenarios, leading to a drastic increase of the computational cost of primitive based motion planners such as P–Search∗. Hence the Motion Description Language is used as a tool to formalize possible designs of such trigger conditions and to limit the growth of active primitives during the system evolution. First order logic operators and linear temporal logic (LTL) operators are used by P–MDL to express the different conditions.

Let us consider, for example, the scenario depicted in Figure 3.2: a complex sequence of tasks that is impractical to solve when planning in the whole robot state space. This is one of the reasons why a primitive based approach has been proposed.

In particular the following primitives have been chosen to solve the mission:

• Locomote in the environment on a flat terrain: L,
• Remove an object from the way: RO,
• Push an object out of the way: PB,
• Cross the gap using horizontal surfaces for hand support: CP,
• Remove a heavy object using a whole-body grasp: WG,
• Remove light objects: RL,
• Turn the valve: TV.

Figure 3.2: A sequence of complex tasks. From right to left: remove an object from the way while walking, push a heavy object to one side to surpass it, cross a non–walkable area by using support surfaces to place the hands, remove light and heavy objects and turn a valve.

The goal of the task can be represented as:

ξ_G = Ω_RO P Ω_PB P Ω_CP P Ω_WG P Ω_RL P Ω_TV

It must be noticed that at the current moment there is no automatic way to generate these goal conditions: the user must specify them, depending on the desired results.
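One possible way to read ξ_G is as a chain of precedence constraints over task-completion events; the sketch below checks such an ordering on a recorded execution trace. The trace format and the helper function are assumptions introduced only for this illustration.

```python
# Check of the "precedes" (P) chain in xi_G over a trace of task-completion
# events; the trace format is an assumption made for this sketch.
TASK_ORDER = ["RO", "PB", "CP", "WG", "RL", "TV"]

def satisfies_goal(trace):
    """True iff every task completes and each one completes before the next."""
    times = {}
    for t, task in enumerate(trace):
        times.setdefault(task, t)      # first completion time of each task
    if any(task not in times for task in TASK_ORDER):
        return False
    return all(times[a] < times[b] for a, b in zip(TASK_ORDER, TASK_ORDER[1:]))

print(satisfies_goal(["RO", "PB", "CP", "WG", "RL", "TV"]))  # True
print(satisfies_goal(["PB", "RO", "CP", "WG", "RL", "TV"]))  # False
```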

Using P–MDL, the requested task execution can be represented by the following modified trigger conditions, each referred to a sub–task:

ξ_L = 1
ξ_RO = ξ_M^RO
ξ_PB = ξ_M^PB ∧ Ω_RO
ξ_CP = ξ_M^CP ∧ Ω_PB
ξ_WG = ξ_M^WG ∧ Ω_CP
ξ_RL = ξ_M^RL ∧ Ω_WG
ξ_TV = ξ_M^TV ∧ Ω_RL

The choice of having the trigger condition ξ_L = 1 is to permanently have the locomotion primitive available.


The ξ_M^i condition with respect to a generic manipulation task i is defined as follows:

ξ_M^i = { 1 if ‖o_i − r‖ ≤ δ_i,
          0 otherwise },   for a given δ_i > 0

and basically means that the robot (in pose r) has to be sufficiently close to the object location o_i to execute the particular manipulation task.

With the Ω_i condition the accomplishment of the i-th task is represented. In particular, Ω_i = 1 corresponds to a performed task. Such a condition can be used to disable active primitives, reducing their total number.
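The trigger design above can be sketched as follows: ξ_M^i is a proximity test on the robot pose, Ω_i a completion flag, and each task trigger is the conjunction of its proximity condition with the completion of its predecessor. Object positions, thresholds and the data layout are assumptions made for illustration, not the thesis implementation.

```python
import numpy as np

def xi_M(robot_pos, obj_pos, delta):
    """xi_M^i = 1 iff the robot is within delta of the object location o_i."""
    return np.linalg.norm(np.asarray(obj_pos) - np.asarray(robot_pos)) <= delta

def triggers(robot_pos, objects, done):
    """objects: task -> (position, delta); done: task -> Omega_i completion flag."""
    order = ["RO", "PB", "CP", "WG", "RL", "TV"]
    xi = {"L": True}                   # locomotion is always triggered
    prev_done = True                   # the first task has no predecessor
    for task in order:
        pos, delta = objects[task]
        xi[task] = bool(xi_M(robot_pos, pos, delta)) and prev_done
        prev_done = done[task]
    return xi

# Example: robot next to the first object, nothing accomplished yet.
objects = {t: ((i + 1.0, 0.0), 0.5) for i, t in enumerate(["RO", "PB", "CP", "WG", "RL", "TV"])}
done = {t: False for t in objects}
print(triggers((1.0, 0.0), objects, done))  # only 'L' and 'RO' are triggered
```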

Finally, ξ_G defines the aim of the mission that the robot must execute: in this case the robot must execute all six tasks to complete the mission. However, simply putting the tasks in logical conjunction with each other would lead to an arbitrary order and a combinatorial number of possible choices. Although the optimal path could possibly lead to the intuitive, sequential execution of the tasks, it is better to explicitly express their temporal dependency: the LTL operator P (precedes) has been used to express the fact that, from left to right, the tasks must be executed in sequence.

The considered scenario leads to a search space of dimension 10, since the locomotion image space is given by (x, y, θ, v) (robot position, orientation and linear velocity) and each of the six manipulation tasks i is represented by a completion rate τ_i.
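As a hedged illustration, the 10-dimensional sampling space can be assembled by concatenating the locomotion image space and the six completion rates; the sampling ranges below are arbitrary placeholders, not values used in the experiments.

```python
import numpy as np

# Sampling space: locomotion image space (x, y, theta, v) plus one completion
# rate tau_i per manipulation task (RO, PB, CP, WG, RL, TV). Ranges are illustrative.
def sample_state(rng):
    x, y = rng.uniform(-5.0, 5.0, size=2)     # planar position
    theta = rng.uniform(-np.pi, np.pi)        # orientation
    v = rng.uniform(0.0, 0.5)                 # linear velocity
    taus = rng.uniform(0.0, 1.0, size=6)      # task completion rates
    return np.concatenate(([x, y, theta, v], taus))

print(sample_state(np.random.default_rng(0)).shape)  # (10,)
```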

Once the set of trigger conditions is chosen, a sequence of motion primitives is straightforwardly obtained (for example, trigger condition ξ_PB represents that the "Push Box" primitive is activated only after the "Remove Object" task has been performed). The obtained motion primitive sequence can be used by the P–Search∗ algorithm (in particular in the definition of the P–Search∗ primitive graph) while sampling in the primitive image spaces. Using P–Search∗ a simultaneous execution of primitives is obtained when possible. Of course, the simultaneous execution of manipulation and locomotion actions can be performed only if the stability of the robot platform is guaranteed. Moreover, the stability of a parallel combination of motion primitives depends on the subset of the state space the primitives use (a locomotion primitive can be composed in a parallel fashion with a manipulation one, but potentially not with a whole-body motion one). For example, with the proposed approach, a task that requires walking before and after its execution (to approach and leave the task location) can be executed while walking, guaranteeing stability.

In more complex scenarios the number of possible sequences can easily grow; in these cases, using a formal approach in the characterization of the sequences
