Academic year: 2021

(1)

CONTROL ARCHITECTURE

FOR NATURAL MOTION IN

SOFT ROBOTS

Cosimo Della Santina

(2)

Human Body at Work

Introduction

catastrophic scenarios. In Fig. 1 some of the tasks that are commonly executed in such situations are shown.

Figure 1: The collection of pictures shows (from the top left to the bottom right) firemen and people of the Civil Protection at work: hitting the top of a roof with an axe, hitting a wall with a hammer, unloading heavy objects, using a water pump, carrying objects while climbing stairs, removing an obstacle, climbing on a challenging terrain, rescuing people on the snow, opening passages by removing debris of a collapsed building. In the execution of these tasks humans exploit some intrinsic features of their body, such as power, robustness, explosive energy, efficiency, et cetera.

The robotic research community does not underestimate the importance of the challenge of realising a robot able to help or substitute human operators that accomplish such dangerous tasks when these events happen. One of the most important initiatives born from this need is the DARPA Robotics Challenge [W21]: a worldwide competition whose goal is:

• Open breaches in walls and roofs [HIGH PEAK POWER]
• Sustain hours of operations [HIGH EFFICIENCY]
• Work in dangerous scenarios [HIGH ROBUSTNESS]

Human performance makes the difference in real life.

(3)

Soft Robotics: embed a key feature of the human body in robots.

Each human joint has, at least, two muscles. Co-contraction allows for SOFT and STRONG behavior.

[Diagrams: two actuator schemes, Constant Passive Impedance and Variable Passive Impedance, each with input u, rotor inertia, compliant covering and link inertia; the variable-impedance scheme is labelled "Soft Robotics".]

(6)

The Problem we want to face

In humans, body and brain constitute a very effective synergistic system, but this is not the case in soft robots.

Due to their complex, nonlinear, hard-to-model dynamics, the control of Soft Robots is an arduous task.

(7)

Existing methods for soft robots control

Due to their complex, nonlinear, hard-to-model dynamics, the control of Soft Robots is an arduous task.

Control approaches:

• Model-based techniques (e.g. feedback linearization [Palli et al. 2008], optimal control [Garabini et al. 2011]) have the strong drawback of requiring an accurate model identification process, which is hard to accomplish and time consuming.

• Model-free techniques are promising but still confined to specific tasks (e.g. induction [Lakatos et al. 2013] or damping [Petit et al. 2014] of oscillations).

(8)

GOAL

The purpose of this work is to design a model-free algorithm capable of controlling a nonlinear unknown system, such as a soft robot, taking inspiration from how the central nervous system (CNS hereafter) manages the musculoskeletal system according to state-of-the-art Motor Control theories.

(9)

A quick introduction

Motor Control

(10)

Motor Control

Mark Latash: "[Motor Control] can be defined as an area of science exploring how the nervous system interacts with the rest of the body and the environment in order to produce purposeful, […]"

(11)

Motor Control

Two problems faced by the CNS are:

• Unknown nonlinear dynamics.
• DoF redundancy.

(12)

Complex Nonlinear Dynamics

Neuron spiking (Izhikevich model):

$\dot{v} = 0.04 v^2 + 5v + 140 - u + I$
$\dot{u} = a(bv - u)$
if $v \geq 30\ \mathrm{mV}$, then $(v, u) \leftarrow (c, u + d)$

Rigid-body dynamics:

$B(\theta)\ddot{\theta} + C(\theta, \dot{\theta})\dot{\theta} + G(\theta) = \tau$

The human body is a complex system, with strong nonlinearities at every level. Muscle force generation:

$f = \rho\,(e^{\delta A} - 1)$
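As an illustration of the first model above, here is a minimal forward-Euler simulation of the Izhikevich neuron. The parameter values are the standard regular-spiking ones from Izhikevich's paper, an assumption, not values taken from this work:

```python
# Izhikevich neuron, forward-Euler integration. a, b, c, d are the standard
# regular-spiking parameters (an assumption, not from this thesis).
a, b, c, d = 0.02, 0.2, -65.0, 8.0
I = 10.0                      # constant input current
dt, steps = 0.5, 2000         # time step [ms], number of steps (1000 ms total)

v, u = c, b * c               # resting initial conditions
spikes = 0
for _ in range(steps):
    v += dt * (0.04 * v**2 + 5.0 * v + 140.0 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:             # spike detected: reset (v, u) <- (c, u + d)
        v, u = c, u + d
        spikes += 1
# under constant drive the neuron fires repeatedly (spikes > 0)
```

Even this two-state model shows the kind of strong nonlinearity (quadratic drift plus discontinuous reset) the slide refers to.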

(13)

DoF Redundancy

Nikolai Bernstein: "It is clear that the basic difficulties for co-ordination consist precisely in the extreme abundance of degrees of freedom, with which the [nervous] center is not at first in a position to deal."

(14)

DoF Redundancy

The human body is characterized by a complex, highly redundant structure. Various types of redundancy:

• Anatomical DoFs: the number of joints exceeds the task DoFs, and the number of muscles exceeds the number of joints.

• Kinematic DoFs: infinite joint trajectories can achieve the same task, or simply perform the same end-effector point-to-point movement.

• Neurophysiological DoFs: each muscle consists of hundreds of motor units, activated by moto-neurons that can spike with different frequencies (hundreds of variables).

(17)

Similarity with Soft Robots

Nonlinearities:

• It is well known that robotic chains are nonlinear.
• Soft Robots (especially VSA) have hard-to-model dynamics.
• Antagonistic VSA present a dynamic behavior similar to human muscles (same equilibrium point and stiffness).

(18)

Similarity with Soft Robots

Types of DoF:

• Anatomical (number of motors per joint ≥ 1).
• Kinematic.

(19)

Roadmap

Understanding how the CNS works is still an open problem, and no univocal theory exists (e.g. internal model vs. equilibrium point).

Rather than trying to replicate an unknown structure, this work discusses how to replicate the CNS functionalities for which largely accepted objective data exist.

This will be done both for the nonlinearity problem (dynamic inversion) and for the DoF problem, on two levels of control.

The behaviors we want to replicate will be presented.

(20)

A possible control architecture

From Motor Control

To Motion Control

(21)

MC-inspired control architecture

The proposed control architecture is organized in two levels:

• Low level (hereinafter LL): to perform dynamic inversion.
• High level (hereinafter HL): to perform DoF abundance management.

(22)

Purpose: dynamic inversion of the unknown system.

(23)

Behaviours we want to reproduce

The CNS presents three peculiar characteristics in learning new movements:

1. Learning by repetition: how to invert an unknown dynamics over a trajectory after force field (FF) introduction.

Images taken from [Shadmehr et al. 1994].

(24)

Behaviours we want to reproduce

2. Aftereffect over a learned trajectory: by removing the force field, subjects present deformations of the trajectory specular to the initial deformation due to the force field introduction. This behavior is called mirror-image aftereffect.

(25)

Behaviours we want to reproduce

3. Aftereffect over trajectories not learned: by removing the force field, subjects exhibit deformations of the trajectories both in the learned trajectory and in a new one.

[Panels: FF introduction, learning. Image taken from [Gandolfo et al. 1996].]

(26)

How to reproduce these behaviours: existing theories

Internal models are able to explain all three behaviors [Wolpert et al. 1998].

Application in robotics: [Schaal et al. 2010] and [Nguyen-Tuong et al. 2008] review the control of robots through model inversion. Gaussian Process Regression (GPR) is shown to be the only technique sufficiently accurate to be used for control purposes. Despite this, GPR was considered practically inapplicable due to its computational cost, which explodes with the growth of the number of regression points.

(27)

Learning by Repetition in Control: Iterative Learning Control (ILC)

Learning by repetition is naturally interpretable in terms of ILC:

$u_{i+1}(t) = u_i(t) + r_e(t, i)$
$r_e(t, i) = ILC_{PID}(t, i) + PID(t, i + 1)$

For a more extended introduction and proofs of convergence see the thesis.
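A minimal numerical sketch of the P-type ILC idea behind the update above, assuming a memoryless toy nonlinear plant, an illustrative trajectory and gain (none of these are the thesis setup): the feedforward u is refined trial after trial from the tracking error, and the error shrinks over repetitions.

```python
import numpy as np

# Toy memoryless nonlinear plant, unknown to the controller: an illustrative
# stand-in for the robot dynamics, not the system used in this work.
def plant(u):
    return np.tanh(u)

x_des = 0.9 * np.sin(np.linspace(0.0, np.pi, 50))   # desired trajectory (assumption)
u = np.zeros_like(x_des)                            # feedforward, refined by repetition

max_errs = []
for trial in range(30):
    e = x_des - plant(u)            # tracking error of this repetition
    u = u + 1.0 * e                 # P-type ILC update: u_{i+1} = u_i + gamma * e_i
    max_errs.append(np.abs(e).max())
# max_errs decreases over trials: learning by repetition
```

The contraction factor per sample is |1 - gamma * dplant/du| < 1 here, which is what makes the repetitions converge.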

(28)

Iterative Learning Control: simulative results

Learning by repetition is naturally interpretable in terms of ILC.

[Plots: angle evolutions over iterations (0, 25 and 50 trials), and angular error evolution, with an uncertainty inserted at the 50th iteration.]

(29)

Aftereffect is explained by ILC

[Plots: final postures and time evolutions.]

(30)

ILC and third behaviour

ILC is a local method and cannot explain the aftereffect over unknown trajectories. The information obtained during learning needs to be capitalized in some way.

Existing methods contemplate the independent estimation of a complete model of the system (e.g. [Purwin et al. (2009)], [Arif et al. (2000)]).

The limitations of the complete-model-estimation approach were already explained in the previous slides: a novel method combining generalization, accuracy and acceptable computational cost is needed.

(31)

No free lunch: From Inverse Functional To Inverse Function

[Diagram: SYS maps U(t) to X(t).]

Defining $I : C^0([0, t_f]) \to C^0([0, t_f])$ as the system inverse functional, it is possible to assert that an ILC execution returns a couple $\langle x, I(x) \rangle$.

Idea: use a regression technique in order to estimate $I$ from the set of couples $\langle x, I(x) \rangle$ already learned.

Problem: making a regression of a functional is a really complex task.

Define:
• a parameterization of a subspace of $C^0([0, t_f])$, $B : \mathbb{R}^p \to C^0([0, t_f])$;
• a time discretization $S : C^0([0, t_f]) \to \mathbb{R}^d$.

Then the inverse functional can be reduced to an inverse function.

(32)

Possible choices for the subspace

Papers (e.g. [Friedman et al. (2009)], [Flash et al. (1985)], [Wann et al. (1988)]) show that many human movements minimize the jerk:

$J = \int \left( \frac{\partial^3 x}{\partial t^3} \right)^2 dt$

It is proved that trajectories that minimize the jerk are 5th-order polynomial splines.

Monic 5th-order polynomials with two constraints, which reduce the space dimension to 3 and ensure that the juxtaposition is $C^2$, are adopted:

$\ddot{x}(t)\big|_{t = 0,\ t_{fin}} = 0$
$x_{fin} = x_{in} + \dot{x}_{fin}\, t_{fin}$

A good choice, but in practice a smaller parameterization can be better.
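For the rest-to-rest special case (zero boundary velocity and acceleration) the minimum-jerk trajectory has a well-known closed form, sketched below. This only illustrates the quintic profile; it is not the exact constrained parameterization adopted in the thesis:

```python
import numpy as np

# Classic rest-to-rest minimum-jerk profile between x0 and xf over [0, T]:
# x(s) = x0 + (xf - x0) * (10 s^3 - 15 s^4 + 6 s^5), with s = t / T.
def min_jerk(x0, xf, T, n=201):
    t = np.linspace(0.0, T, n)
    s = t / T
    x = x0 + (xf - x0) * (10 * s**3 - 15 * s**4 + 6 * s**5)
    return t, x

t, x = min_jerk(0.0, np.pi / 4, 2.0)   # illustrative endpoints and duration
v = np.gradient(x, t)                  # numerical velocity
# position boundary conditions hold exactly; velocity vanishes at both ends
```

The vanishing boundary velocity and acceleration are what make juxtaposed segments smooth, the property the slide's constraints enforce.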

(33)

Inversion Function Example

Composing the maps gives $S \circ I \circ B : \mathbb{R}^3 \to \mathbb{R}^{22}$: the parameter vector $\pi = [x_{in}, \dot{x}_{in}, \dot{x}_{fin}]$ is mapped by $B$ into a trajectory $x(t)$, by the inverse functional $I$ into the control $U(t)$, and by the discretization $S$ into the samples $U[k]$.

[Diagram: $\pi \to B \to x(t) \to I \to U(t) \to S \to U[k]$.]

(34)

Low Level Control Scheme

GPR can be used for the map regression. The regressed map is used to estimate the control for new trajectories. The remaining errors can be corrected with further iterations of the ILC algorithm. The new result of ILC can then be used for map regression.
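A minimal Gaussian Process Regression sketch in plain NumPy (RBF kernel; the toy one-dimensional map and all hyperparameters are illustrative assumptions): this is the kind of regressor that can interpolate the inverse map from the couples produced by ILC.

```python
import numpy as np

def rbf(A, B, ls=0.15):
    # squared-exponential kernel between the rows of A and B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def gpr_fit(X, Y, noise=1e-4):
    # posterior weights alpha = (K + noise * I)^{-1} Y
    K = rbf(X, X) + noise * np.eye(len(X))
    return np.linalg.solve(K, Y)

def gpr_predict(Xs, X, alpha):
    return rbf(Xs, X) @ alpha       # GP posterior mean at the query points

# toy stand-in for the inverse map: 1-D input instead of pi in R^3
X = np.linspace(0.0, 1.0, 10)[:, None]
Y = np.sin(2 * np.pi * X[:, 0])
alpha = gpr_fit(X, Y)
pred = gpr_predict(np.array([[1.0 / 3.0]]), X, alpha)[0]
# pred is close to sin(2*pi/3), the true map value at the query
```

Fitting costs O(n^3) in the number n of regression points, which is exactly the computational-cost issue the previous slide mentions for GPR.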

(35)

Proposed controller and aftereffect on unknown trajectories

The system directly experiences two of the seven trajectories.

[Panels: force field introduction; direct learning, with aftereffect (final postures are perfectly reached); aftereffect due to generalization on the trajectories not learned (trajectories are deformed: final postures are not perfectly reached); re-learning, with low residual aftereffect.]

(36)

Inverse Map: simulative results

10000 simulations of random tasks for an RR manipulator in the subspace, with and without a map learned from 32 limit cases.

With inverse map: mean error 0.0596 rad/sec, variance 0.0013 rad²/sec².
Without inverse map: mean error 0.5644 rad/sec, variance 0.1137 rad²/sec².

Other performance results will be given in the global execution and in the thesis.

(37)

Experimental Testbed: qbmoves

Qbmoves (VSA) aim at human-like performance in terms of "smooth movements, shock absorption, safety improvements and performance improvements". Qbmoves are based on agonistic/antagonistic actuation.

Quotes are taken from the Natural Machine Motion Initiative website: naturalmotioninitiative.org

(38)

Experiments Plan

Three types of experiment are performed:

1. PID-ILC can invert the qbmoves arm dynamics: the ILC algorithm is applied to the arm with three different values of the stiffness preset.

2. PID-ILC can adapt to exogenous uncertainty: the arm grasps a bottle full of water and the ILC algorithm learns a trajectory. The bottle is then emptied and the algorithm is used for the inversion of the exogenous uncertainty represented by the different inertia of the object.

3. Regression techniques performances: the map is regressed from only two trajectories.

(39)

Application to the qbmoves: ILC

Experiment 1: 30 iterations of the ILC algorithm for the tracking of the 5th-order polynomial with the described constraints and:

$x_{in} = 0$, $\dot{x}_{in} = 0$, $x_{fin} = \pi/4$, $T = 2\ \mathrm{sec}$
• Stiffness preset = 35

[Views: front and top. Plot: total evolution error [rad s] vs. iteration.]

(40)

Application to the qbmoves: ILC with different values of stiffness

[Plots: total evolution error [rad s] vs. iteration, for the different stiffness presets.]

(41)

Application to the qbmoves: Learning by repetition

[Photos: final reference configuration vs. final position with unexpected uncertainty, and final re-learned configuration.]

(42)

Application to the qbmoves: Learning by repetition

[Panels: the bottle is emptied; learning by repetition.]

(43)

Application to the qbmoves: Testing the map

The map is regressed from only two trajectories ($t_{fin} = 2\ \mathrm{sec}$, $x_{in} = (0, 0)$, $\dot{x}_{in} = (0, 0)$):

$x_{fin} = (0,\ \pi/4)$
$x_{fin} = (3\pi/8,\ \pi/4)$

It is then tested on $x_{fin} = (\pi/4,\ \pi/4)$ and $x_{fin} = (\pi/6,\ \pi/4)$.

The regressed map reduces the error of the trajectories by about 80% (7 trials).

[Photos: final reference positions.]

(44)

Purpose: DoFs abundance management.

(45)

LL Control performs an abstraction

System, largely unknown and nonlinear:

$x_{k+1} = D(x_k, \pi_k) + \phi(x_k, \pi_k)$

Controlled system, known, with a dynamics that depends on the chosen subspace:

$x_{k+1} = x_k + T\, \pi_k^{[3]} + \phi(x_k, \pi_k)$

(46)

LL Control performs an abstraction

The control variables are the base parameters: $\pi = [x_{in}, \dot{x}_{in}, \dot{x}_{fin}]$.

Example of global evolution with the slope pointed out:

$\pi = (0,\ 0,\ 0.3142\ \mathrm{rad/s})$
$\pi = (-0.2094,\ -0.1571,\ 0.3142\ \mathrm{rad/s})$
$\pi = (0.7854,\ 0.3142,\ -1.5708\ \mathrm{rad/s})$

The role of the High Level Controller is to choose the sequence of $\pi_k$.

(47)

Task definition

It is supposed that a higher level of control specifies a task to be accomplished. A task will be defined as a cost function and a set of constraints. An example:

• Task description: hammering a point.
• Task definition:
  • Cost function: $\lVert directKinematic(q_{fin}) - p_{ee} \rVert$
  • Constraints: $q_{min} < q < q_{max}$, $J \dot{q}_{fin} = v_{max}$

See the thesis for a more detailed treatment.

(48)

High Level Controller definition

The High Level Control is defined by an optimization problem and by an algorithm that solves it. The optimization problem can be formulated as:

$\min_{\pi, x}\ \lVert h(\boldsymbol{x}) - \boldsymbol{y} \rVert_Q + \lVert \boldsymbol{\Delta\pi} \rVert_R$

subject to:

$\delta_x(\boldsymbol{x}) \leq \boldsymbol{x}_{limit}$
$\delta_u(\boldsymbol{\pi}) \leq \boldsymbol{\pi}_{limit}$
$x_{k+1} = x_k + T \pi_k$
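A minimal numerical sketch of the unconstrained core of this problem for a scalar toy case with $h(x) = x$ (the horizon, weight and reference below are illustrative assumptions): substituting the integrator dynamics turns the problem into a linear least-squares in the parameter sequence.

```python
import numpy as np

# minimize sum_k (x_k - y_k)^2 + r * sum_k (Delta pi_k)^2
# subject to x_{k+1} = x_k + T * pi_k   (scalar toy case, x_0 = 0)
T, N, r = 0.1, 20, 1e-3
y = np.sin(np.linspace(0.0, np.pi, N))     # desired output evolution (assumption)

L = T * np.tril(np.ones((N, N)), k=-1)     # x = L @ pi, i.e. x_k = T * sum_{j<k} pi_j
D = np.eye(N) - np.eye(N, k=-1)            # first-difference operator, Delta pi_k
A = np.vstack([L, np.sqrt(r) * D])         # stack tracking and smoothness terms
b = np.concatenate([y, np.zeros(N)])
pi = np.linalg.lstsq(A, b, rcond=None)[0]  # optimal parameter sequence
x = L @ pi                                 # resulting state evolution tracks y
```

In the thesis formulation the nonlinear output map and the hard constraints make this an actual constrained program rather than a plain least-squares; this sketch only shows the structure.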

(49)

HL Control vs classical techniques

This resolution of redundancy is much more general compared to classical pseudo-inversion techniques. It makes it possible to consider:

• Nonlinear hard constraints
• Actuation costs
• General cost functions (e.g. being at a point at a certain time, as in the previous blacksmith example)

(50)

Feedback Approaches

Two algorithms that can be used are:

• Pre-solving the problem and controlling the system over $x_{opt}$ through a P controller (dead beat).
• Recalculating the optimum on-line: MPC approach.

Both work on the same optimization problem:

$\min_{\pi, x}\ \lVert h(\boldsymbol{x}) - \boldsymbol{y} \rVert_Q + \lVert \boldsymbol{\Delta\pi} \rVert_R$
subject to $\delta_x(\boldsymbol{x}) \leq \boldsymbol{x}_{limit}$, $\delta_u(\boldsymbol{\pi}) \leq \boldsymbol{\pi}_{limit}$, $x_{k+1} = x_k + T \pi_k$

(51)

Problem

The quality of the task execution is strongly affected by the accuracy of the learned low level map. An RR arm is controlled with the pre-solving technique (one joint evolution in the figures):

• Fine map (many regression points collected) -> low tracking error.
• Rough map (few regression points collected) -> high tracking error.

[Plots: joint evolution split into the portions $\pi_1, \dots, \pi_5$, for the fine and for the rough map.]

(52)

Idea: Merging High and Low Level Control

If a new task is not properly executed, then the accuracy of the map should be improved.

[Flowchart: Task -> Task Execution -> if $\exists i : err_i > err_{th}$ (YES) -> Sub-Task Learning, the portion of the low level controller that is improved -> back to Task Execution; otherwise (NO) -> complete. Here $err_i$ is the error of the portion related to $\pi[i]$.]

With this algorithm the Low Level map is learned in a task-oriented way: most of the points will be collected in the portions of the subspace that are more used in the tasks, balancing the trade-off between map dimension and accuracy.
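The flowchart above can be sketched as a loop. All names and the toy error model are illustrative assumptions, not the thesis implementation:

```python
def execute(n_portions, regression_points):
    # toy stand-in: the error of portion i shrinks as more points are stored for it
    return [1.0 / (1 + regression_points.get(i, 0)) for i in range(n_portions)]

def learn_subtask(i, regression_points):
    # refine the low-level map on portion i (e.g. one more ILC run + map regression)
    updated = dict(regression_points)
    updated[i] = updated.get(i, 0) + 1
    return updated

def run_task(n_portions, err_th, max_rounds=50):
    points = {}                                   # task-oriented low-level map
    for _ in range(max_rounds):
        errs = execute(n_portions, points)
        bad = [i for i, e in enumerate(errs) if e > err_th]
        if not bad:                               # every portion below threshold
            return points
        for i in bad:                             # learn only the failing portions
            points = learn_subtask(i, points)
    return points

final_map = run_task(n_portions=5, err_th=0.2)
```

Because only the failing portions trigger learning, regression points accumulate exactly where the executed tasks need accuracy, which is the task-oriented trade-off the slide describes.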

(55)

Numerical example

Example description:

• The controlled system is a model of a VSA RR arm.
• 20 tasks are executed successively.
• Each task consists in moving the arm such that $\lVert h(x) - y_j \rVert$ is minimized, where $h(x) = a_1 \cos(x_1) + a_2 \cos(x_1 + x_2)$ and $y_j$ is the desired evolution of task $j$.
• Joint limits are considered.
• No initial map is present.

$\min_{\pi, x}\ \lVert a_1 \cos(\boldsymbol{x}_1) + a_2 \cos(\boldsymbol{x}_1 + \boldsymbol{x}_2) - \boldsymbol{y} \rVert + \lVert \boldsymbol{\Delta\pi} \rVert$
subject to $\boldsymbol{x} \leq \pi/2$

(56)

Simulation Results: learning performances

Using the merged algorithm, the map converges to a complete representation of the inverse system (no more learning is needed).

(57)

Simulation Results: output tracking performances

[Plot: mean error during task execution.]

The MPC approach presents better performances because the re-optimization at each iteration makes it possible to fully exploit the task redundancies. E.g. if the system moves on an $\tilde{x}$ different from the desired $x$, but such that $h(\tilde{x}) = h(x)$, the P controller corrects the deviation even though the task output is unaffected.

MPC is hardly applicable in mechanical systems due to their high bandwidth requirements. This architecture makes it possible to use it, requiring the HL algorithm execution only between low level executions.

(58)

Ensuring task-specific covariation

The task, executed many times with small variations in the initial conditions, shows a high variability in the joint evolutions while maintaining the task performances.

[Plots: trajectories in joint space (angles [rad] vs. time [sec]) and in task space (output $y$ [m] vs. time [sec]).]

$\min_{\pi, x}\ \lVert a_1 \cos(\boldsymbol{x}_1) + a_2 \cos(\boldsymbol{x}_1 + \boldsymbol{x}_2) - \boldsymbol{y} \rVert + \lVert \boldsymbol{\Delta\pi} \rVert$
subject to $\boldsymbol{x} \leq \pi/2$, $x_{k+1} = x_k + T \pi_k$

(59)

High Level Control behaves as a Synergy

The modern definition of synergy, given in [M. Latash 2010], is: «[…] a hypothetical neural mechanism that ensures task-specific co-variation of elemental variables providing for desired stability properties of an important output (performance) variable».

$V_{good} > V_{bad}$ indicates that a task synergy exists; $V_{good} \cong V_{bad}$ indicates that a task synergy does not exist.

[Diagram: locus of configurations that meet the task.]

(60)

High Level Control behaves as a Synergy

Let $h(x)$ be the "important output variable". According to this definition, the high level control discussed here presents $V_{good} \gg V_{bad}$ in the configuration space, showing a synergy-like behaviour.
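A minimal uncontrolled-manifold-style check of this claim for the example output $h(x) = a_1 \cos(x_1) + a_2 \cos(x_1 + x_2)$ (the synthetic trial data and all numbers are illustrative assumptions): variance along the task null space ($V_{good}$, which leaves $h$ unchanged) is compared with variance orthogonal to it ($V_{bad}$).

```python
import numpy as np

# Task output h(x) = a1*cos(x1) + a2*cos(x1+x2); Jacobian at a configuration x0.
a1, a2 = 1.0, 1.0
x0 = np.array([0.5, 0.8])
J = np.array([-a1 * np.sin(x0[0]) - a2 * np.sin(x0[0] + x0[1]),
              -a2 * np.sin(x0[0] + x0[1])])
task_dir = J / np.linalg.norm(J)                  # variability here changes h: "bad"
null_dir = np.array([-task_dir[1], task_dir[0]])  # variability here keeps h: "good"

# Synthetic repeated trials: broad spread along the null space, narrow across it.
rng = np.random.default_rng(0)
X = x0 + np.outer(rng.normal(0.0, 0.2, 500), null_dir) \
       + rng.normal(0.0, 0.02, (500, 2))
dX = X - X.mean(axis=0)
V_good = np.mean((dX @ null_dir) ** 2)
V_bad = np.mean((dX @ task_dir) ** 2)
# V_good >> V_bad: joint-space variability co-varies so that h(x) stays stable
```

This is the decomposition behind the $V_{good}$ / $V_{bad}$ comparison: only the component of joint variance orthogonal to the task null space perturbs the performance variable.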

(61)

Conclusions

In this work:

• A new algorithm that makes it possible to control uncertain nonlinear Lagrangian systems was developed.

• The algorithm ensures some characteristics of human natural movement:
  • Learning by repetition.
  • Aftereffect on known and unknown trajectories.
  • Capability of ensuring task-specific co-variation in task execution.

• Simulations and experiments were carried out to evaluate both the performances and the similarities with Motor Control.

(62)

Presentation Bibliography

 A.V. Alexandrov, A.A. Frolov, J. Massion (2000) «Biomechanical analysis of movement strategies in human forward trunk bending. I. Modeling»
 N. Amann, D.H. Owens, E. Rogers «Iterative learning control for discrete-time systems with exponential rate of convergence»
 M. Arif, T. Ishihara, H. Inooka (2000) «Incorporation of experience in iterative learning controllers using locally weighted learning»
 C.G. Atkeson, A.W. Moore, S. Schaal (1996) «Locally Weighted Learning for Control»
 J.L. Emken, R. Benitez, A. Sideris, J.E. Bobrow, D.J. Reinkensmeyer (2007) «Motor Adaptation as a Greedy Optimization of Error and Effort»
 J. Friedman, T. Flash (2009) «Trajectory of the index finger during grasping»
 T. Flash, N. Hogan (1985) «The coordination of arm movements: an experimentally confirmed mathematical model»
 F. Gandolfo, F.A. Mussa-Ivaldi, E. Bizzi (1996) «Motor learning by field approximation»
 M. Garabini, A. Passaglia, F. Belo, P. Salaris, A. Bicchi (2011) «Optimality principles in variable stiffness control: The VSA hammer»
 J.R. Lackner, P. Dizio (1998) «Gravitational Force Background Level Affects Adaptation to Coriolis Force Perturbations of Reaching Movements»
 D. Lakatos, F. Petit, A. Albu-Schaffer (2013) «Nonlinear oscillations for cyclic movements in variable impedance actuated robotic arms»
 M. Latash (2010) «Motor Synergies and the Equilibrium-Point Hypothesis»
 M. Latash (2010) «Stages in Learning Motor Synergies: A view based on the equilibrium-point hypothesis»
 M. Latash (2013) «The bliss (not the problem) of motor abundance (not redundancy)»
 M. Latash (2013) «Fundamentals of Motor Control»
 D. Nguyen-Tuong, J. Peters (2008) «Learning inverse dynamics: a comparison»
 G. Palli, C. Melchiorri, A. De Luca (2008) «On the feedback linearization of robots with variable joint stiffness»
 F. Petit, C. Ott, A. Albu-Schaffer (2014) «A Model-Free Approach to Vibration Suppression for Intrinsically Elastic Robots»
 O. Purwin, R. D'Andrea (2009) «Performing aggressive maneuvers using iterative learning control»
 C.E. Rasmussen, C.K.I. Williams (2006) «Gaussian Processes for Machine Learning»
 R. Shadmehr, F.A. Mussa-Ivaldi (1994) «Adaptive Representation of Dynamics during Learning of a Motor Task»
 J.P. Scholz, G. Schoner (1998) «The uncontrolled manifold concept: identifying control variables for a functional task»
 E. Todorov, M.I. Jordan (2002) «Optimal feedback control as a theory of motor coordination»
 J. Wann, I. Nimmo-Smith, A.M. Wing (1988) «Relation between velocity and curvature in movement: Equivalence and divergence between a power law and a minimum-jerk model»
 D.M. Wolpert, R.C. Miall, M. Kawato (1998) «Internal models in the cerebellum»
