
Master's Degree in Robotics and Automation Engineering

Master's Thesis

State Estimation Design for COMAN Robot

Candidate: Salvatore Alberto Buccellato

Supervisor: Prof. Lorenzo Pollini
Tutor: Dr. Hamed Razavi


Contents

1 Introduction
1.1 Problem Definition
1.2 State of the art

2 COMAN Robot
2.1 Hardware
2.2 Control

3 Extended Kalman Filter
3.1 Standard Formulation
3.2 Quaternion Formulation
3.2.1 General Concepts
3.2.2 Approximation
3.2.3 Quaternion Integration

4 Point Foot Algorithm
4.1 Prediction Model
4.2 Measurement Model
4.3 Discrete Implementation
4.4 Observability Analysis
4.4.1 Nonlinear Analysis
4.4.2 EKF Analysis
4.5 Parameters
4.5.1 Initial values
4.5.2 Initial covariance
4.5.3 Feet covariance

5 Flat Foot EKF
5.1 Prediction Model
5.2 Measurement Model
5.2.1 Discrete Implementation
5.3 Observability Analysis
5.4 Parameters

6 IMU Calibration

7 Motion Capture Results

8 Controller Improvement

9 Conclusions

A Code Guidance
A.1 Control.cc
A.2 Walking.cc
A.3 state estimation.cc
A.4 acc integration.cc
A.5 or integration.cc
A.6 motionless detect.cc
A.7 rotateIMU.cc
A.8 rot vector.cc
A.9 skew.cc
A.10 kin initialize.cpp
A.11 forward kinematics.cpp
A.12 kinematics.cpp
A.13 compute matrix.cpp
A.14 compute x.cpp

List of Figures

1 Example of COMAN in a collaborative task
2 Measured velocity in a walking gait at 0.2 m/s
3 Measured position from the foot in a walking gait at 0.2 m/s
4 Position deriving from IMU integration
5 Bloesch ETH quadruped robot
6 Signal flow diagram for the complete joint torque aided 6 DOF odometry implemented on the DLR Crawler
7 Combined state estimation for Atlas
8 CoM reconstruction in [1]
9 Linear Inverted Pendulum Model
10 Simple example of a moving horizon estimator
11 COMAN robot
12 CogIMon different scenarios
13 COMAN leg: the compliant joints are well visible
14 COMAN control scheme: the state estimator is crucial
15 Backlash in the left foot: the left foot, despite being on the ground during a forward movement, rotates around its forward axis
16 3DM-GX3-25 Datasheet
17 Motion Capture System
18 Y displacement for exp8
19 X displacement for exp8
20 Total displacement for exp8
21 Predicted angles for exp8
22 Y displacement for exp9
23 X displacement for exp9
24 Total displacement for exp9
25 Predicted angles for exp9
26 Y displacement for exp10
27 X displacement for exp10
28 Total displacement for exp10
29 Predicted angles for exp10
30 Predicted angles for exp7 with point foot
31 Predicted angles for exp7 with flat foot
32 Left foot position with respect to the IMU
33 Right foot position with respect to the IMU
34 Estimated velocity in x direction

Abstract

The main objective of this work is to design and implement a state estimator for the humanoid robot COMAN. First we tested a state estimator offline in the Matlab environment, on data recorded from a walking gait. The data came from the IMU and the leg kinematics.

Subsequently, we compared the results with those coming from the motion capture system and implemented the estimator in the online controller developed in C++. Most of the challenges were found on the hardware side: the backlash present in the joints reduces precision and requires a careful tuning of the Kalman Filter used for the estimation.

Two different approaches were followed: point foot and flat foot. We compared the results with the data coming from the motion capture in both scenarios.

In conclusion, the whole work was tested on the robot to see how it affects the controller.

1 Introduction

The problem of indoor navigation for legged robots has been widely studied in the literature. This kind of navigation is an essential tool for a robot to know its pose and location in an indoor environment, making it independent of external navigation sources. However, one of the greatest challenges of these estimation tools lies in the hardware setup, which makes every robot unique in its application. This kind of state estimation will essentially serve two main purposes:

• Vision navigation and SLAM;
• Controller improvements.

Both will be essential in the further development of the COMAN robot. While the vision system is still under development, the effects on the controller can be achieved through a better estimation of the robot pose and of the feet position.

We will first introduce the problem with its associated mathematics and give a quick glance at the state of the art. There are many different algorithms in use, each with its own peculiarities, and we want to know more about them. Then we will develop an algorithm based on our needs, keeping in mind what we learned from the literature, in order to test it on the COMAN robot and compare the results with real world tracking.

Figure 1: Example of COMAN in a collaborative task

1.1 Problem Definition

The problem of data fusion arises from an inconsistency in the data coming from the kinematics. Indeed, if we look closely at the current output in Figure 2 we can observe two important things:

• Very noisy data;
• Incorrect displacement and velocity in the x direction.

These have bad effects on the controller. As we will see later, the controller used on the COMAN is mostly based on the Euler angle representation of the orientation and on the measured distance between the two feet. Bad measurements of these quantities can lead to unstable or poorly robust control. On the other hand, IMU data can easily drift, as we can see in Figure 4. With strapdown integration it is almost impossible to avoid drift in the results without relying on another source of data (the kinematics, in this case).


Figure 2: Measured velocity in a walking gait at 0.2 m/s

Figure 3: Measured position from the foot in a walking gait at 0.2 m/s

Using a data fusion algorithm will allow us to have a stable and reliable measurement of the desired quantities, also improving the overall control of the robot.

Figure 4: Position deriving from IMU integration

1.2 State of the art

The problem of state estimation for legged robots has been widely addressed in the literature in recent years. This allows every robot to have a more stable and reliable estimate to use in the controller and for vision purposes, also inside other filters.

A first approach to state estimation for a hexapod was developed by Lin, Komsuoglu and Koditschek in [2], where no assumption is made about the gaits. It yields better results than using just the raw data, but still shows large errors in the end.

A more recent approach to legged state estimation is the work by Bloesch and Hutter in [3]. It introduces an EKF for a quadruped robot with the only assumption of a point contact between the feet and the ground. This algorithm will be the initial foundation for this work. The article reports a 10% divergence on the estimated position, while good observability properties are maintained. Later the same authors developed another algorithm based on an Unscented Kalman Filter, published in [4]. This filter gives better results through the usage of an outlier detector, which allows wrong corrections caused by outliers in the data to be avoided.

Figure 5: Bloesch ETH quadruped robot

In 2013 another EKF based estimator was used for the DLR Crawler in the work by Gorner and Stelzer [5]. This approach takes into account estimates from the IMU and the leg kinematics in order to have a better prediction of the orientation angles. However, several problems persisted with this approach, with difficulties in detecting fast movements and drifts in some components of the estimated angles.

Another interesting approach (that will be thoroughly discussed in this thesis) is the one developed in [6]. The main idea of this work is to add a hypothesis of planar contact between the feet and the ground. This provides one more measurement and a more stable estimation. This approach has also been implemented in [7] on the Cassie robot, with good results on the estimation.

In the work from Xinjilefu and Atkeson [8] another way of implementing the EKF for humanoid state estimation is developed. In this paper a decoupled state estimator was developed for the Atlas robot. This approach uses data coming from the joint encoders, the IMU and the joint torques. It is a full body approach where the joint positions are also estimated, leading to improved results, similar to the ones achieved with the (undisclosed) Boston Dynamics state estimator.

Figure 6: Signal flow diagram for the complete joint torque aided 6 DOF odometry implemented on the DLR Crawler

The last approach that we will see about this kind of usage of the EKF is developed by Fallon, Antone, Roy and Teller [9]. Here a simple EKF is used for the control variables, while the position is corrected through the usage of a LIDAR and stereo vision. This approach is illustrated in Figure 7. This method has been further developed in [10] with stereo fusion and a receding horizon footstep planning. Another common approach is to use the dynamics based on the non-linear Zero Moment Point (ZMP) formulation, coupling the dynamic behavior in the frontal and lateral planes and fusing information from the sensors. This approach has been followed in [11], where an EKF was developed using the ZMP and combining it with data coming from the IMU and the leg kinematics. What is really interesting about this paper is that observability is never lost under any circumstance.

Figure 7: Combined state estimation for Atlas

The ZMP formulation is also used in [1]. Here the proposed algorithm is based on force and torque sensors placed on the feet and on the kinematic model of the robot (thus, not using the IMU data). The estimator is not based on a Kalman filter but on complementary filtering, which achieves good results apart from a bias in the z direction (Figure 8). This work has been extended in [12] with a more complex model based on the contact wrench central axis, implemented in a complementary filter similar to the one used in [1].

Figure 8: CoM reconstruction in [1]

Another interesting approach is used in [13], where only the IMU data is used to estimate the contact forces and the center of mass position. This however relies on a very accurate dynamic model of the robot and can be difficult to implement on many systems.

A very famous model for the humanoid dynamics is the Linear Inverted Pendulum Model (LIPM, Figure 9). Despite its simplicity this model can lead to very good results, as shown in [14], where the estimation obtained through the LIPM is used for fall detection and prevention.

Figure 9: Linear Inverted Pendulum Model

In the paper written by Bae and Oh [15] a novel approach is presented. The estimation is performed through a moving horizon estimator (Figure 10) that can partially cope with non-Gaussian modeling errors. It uses a simple model (LIPM) but can accurately estimate the pose of the robot.

Figure 10: Simple example of a moving horizon estimator

In [16] the authors presented a state estimator based on a modified UKF that uses whole-body multi-contact dynamics. This is a very computationally expensive approach that has been implemented by splitting the measurement and prediction parts, with a lag on the former. This allows only minimal latency to be added to the control loop.

2 COMAN Robot

The COMAN humanoid robot [17] is being developed within the AMARSI European project. The development of the COMAN body exploits the use of actuation systems with passive compliance.

The goal is:

• to reduce the distinction between plant and controller that is typical in traditional control engineering, fully exploiting complex body properties;
• to simplify perception, control and learning, and to explore how compliance can be exploited for safer human-robot interaction, reduced energy consumption, simplified control, and faster and more aggressive learning.

The COMAN robot is also part of the project CogIMon [18]. The CogIMon project aims at a step-change in human-robot interaction toward the systemic integration of robust, dependable interaction capabilities for teams of humans and compliant robots, in particular the compliant humanoid COMAN.

2.1 Hardware

The COMAN robot is 95 cm tall, weighs 31 kg and has 25 DOF. Its mechanical components are made from titanium alloy, stainless steel and aluminium alloy, giving it good physical robustness. Its modular joint design uses brushless, frameless DC motors, Harmonic Drive gears and series elastic elements. Leg, waist and shoulder joints have a peak torque capability of 55 Nm. Custom torque sensors are integrated into every joint to enable active stiffness control, and 6-DOF sensors are included at the ankle to measure ground reaction forces. COMAN can walk and balance using inertial sensors in the pelvis and chest, and its series elastic joint design makes it robust against impacts and external disturbances. COMAN is fully power autonomous.

Figure 11: COMAN robot

The torso contains a dual core Pentium PC104, an onboard battery and a battery management system giving up to 2 1/2 hours of continuous operation.

2.2 Control

The role played by the state estimator in the COMAN control [19] is crucial. Every variable passes through the state estimation before being processed by the controller (Figure 14). As we can see in the control scheme, the state estimator gets as inputs:

• the feet sensors (F, T);
• the IMU data (accelerometers, gyroscopes).

Figure 12: CogIMon different scenarios

Figure 13: COMAN leg: the compliant joints are well visible

Figure 14: COMAN control scheme: the state estimator is crucial

These inputs reach the estimator substantially raw (or at most pass through an average filter). The main outputs of the state estimator for the control are:

• Position of the robot;
• Orientation of the robot;
• Position of the stance foot with respect to the pelvis (where the IMU is located).

3 Extended Kalman Filter

The Extended Kalman Filter (EKF) is one of the most popular methods implemented in navigation systems, as we have seen in the state of the art section. This filter is the nonlinear version of the more popular Kalman Filter: the state is linearized around the current mean and covariance. In this project a first order approximation will be used (except for the position, which will be integrated at second order).

3.1 Standard Formulation

In the EKF the state and output functions can be nonlinear (the only hypothesis is that they are differentiable). We will follow this notation:

x_k = f(x_{k-1}, u_k) + w_k;  (1)
y_k = h(x_k) + v_k.  (2)

Where the variables are defined as follows:

• x_k is the state at time k;
• u_k is the input at time k;
• y_k is the measurement at time k;
• w_k is the process noise;
• v_k is the observation noise.

This is valid for the non-linear model of the system. It allows us to make a prediction of the current state based on the state at k − 1 and the current input. Remember that the noises must be of the form:

E[w_k] = 0;  (3)
E[v_k] = 0;  (4)
E[w_k w_j^T] = 0 \;\; \text{for} \;\; k \neq j,  (5)

i.e. zero mean Gaussian errors, with process covariance Q. We can expand f(x, u) and h(x) in Taylor series in order to approximate the prediction. The EKF is mainly composed of two steps:

• Prediction step;
• Update step.

Naming P_k the covariance matrix of the state, Q_k the covariance matrix of the process and R_k the covariance matrix of the measurement, the prediction step is:

\hat{x}_{k|k-1} = f(\hat{x}_{k-1|k-1}, u_k);  (6)
P_{k|k-1} = F_k \, P_{k-1|k-1} \, F_k^T + Q_k,  (7)

where F_k is the Jacobian of the state function f. We now have a predicted state along with its predicted covariance.

Naming y_k the innovation residual, S_k the innovation covariance, K_k the Kalman gain, H_k the Jacobian of h and z_k the measurement vector, we can now perform the update step:

y_k = z_k - h(\hat{x}_{k|k-1});  (8)
S_k = H_k \, P_{k|k-1} \, H_k^T + R_k;  (9)
K_k = P_{k|k-1} \, H_k^T \, S_k^{-1};  (10)
\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k \, y_k;  (11)
P_{k|k} = (I - K_k H_k) \, P_{k|k-1}.  (12)

In these equations some variables are denoted with k|k−1: this notation means that the variable is an "a priori" estimate. If the variable is denoted with k|k, it is an "a posteriori" estimate, i.e. after the measurement correction.

The role of the Kalman gain K_k is to weight the prediction and the measurement correction according to their respective uncertainties.
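To make the two steps concrete, here is a minimal generic EKF iteration sketched in C++ with Eigen; the model callbacks f and h and the Jacobians F, H are assumptions supplied by the caller, not the COMAN implementation:

```cpp
#include <Eigen/Dense>
#include <functional>

using Eigen::MatrixXd;
using Eigen::VectorXd;

// One EKF iteration implementing equations (6)-(12).
struct EKF {
    VectorXd x;  // state estimate
    MatrixXd P;  // state covariance
    MatrixXd Q;  // process noise covariance
    MatrixXd R;  // measurement noise covariance

    void step(const VectorXd& u, const VectorXd& z,
              const std::function<VectorXd(const VectorXd&, const VectorXd&)>& f,
              const std::function<VectorXd(const VectorXd&)>& h,
              const MatrixXd& F, const MatrixXd& H) {
        // Prediction step, eqs. (6)-(7)
        x = f(x, u);
        P = F * P * F.transpose() + Q;
        // Update step, eqs. (8)-(12)
        VectorXd y = z - h(x);                        // innovation residual
        MatrixXd S = H * P * H.transpose() + R;       // innovation covariance
        MatrixXd K = P * H.transpose() * S.inverse(); // Kalman gain
        x = x + K * y;
        MatrixXd I = MatrixXd::Identity(x.size(), x.size());
        P = (I - K * H) * P;
    }
};
```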

3.2 Quaternion Formulation

In the Kalman Filter that has been developed, the orientation is represented as a quaternion, used as developed in [16].

3.2.1 General Concepts

This allows a compact formulation for the rotation matrices. Throughout this work the scalar part of the quaternion is the last component, so the quaternion is defined as follows:

q = q_1 i + q_2 j + q_3 k + q_4  (13)

For a rotation of an angle θ about the unit axis \hat{k} = [k_x, k_y, k_z]^T the vector part is:

q_{1:3} = \begin{bmatrix} k_x \sin(\theta/2) \\ k_y \sin(\theta/2) \\ k_z \sin(\theta/2) \end{bmatrix}  (14)

while for the scalar part we have:

q_4 = \cos(\theta/2).  (15)

The inverse of the quaternion is defined as:

q^{-1} = \begin{bmatrix} -\hat{k}\sin(\theta/2) \\ q_4 \end{bmatrix}  (16)

The symbol ⊗ represents the quaternion multiplication, which is defined as follows:

p \otimes q = \begin{bmatrix} q_4 p_1 + q_3 p_2 - q_2 p_3 + q_1 p_4 \\ -q_3 p_1 + q_4 p_2 + q_1 p_3 + q_2 p_4 \\ q_2 p_1 - q_1 p_2 + q_4 p_3 + q_3 p_4 \\ -q_1 p_1 - q_2 p_2 - q_3 p_3 + q_4 p_4 \end{bmatrix}  (17)
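For reference, equation (17) translates directly into a small C++ helper; the array layout q = [q1, q2, q3, q4] with q4 as scalar part is an assumption of this sketch:

```cpp
#include <array>

// Quaternion stored as [q1, q2, q3, q4], with q4 the scalar part.
using Quat = std::array<double, 4>;

// Quaternion product p (x) q, component by component as in eq. (17).
Quat quatMultiply(const Quat& p, const Quat& q) {
    return {
        q[3]*p[0] + q[2]*p[1] - q[1]*p[2] + q[0]*p[3],
       -q[2]*p[0] + q[3]*p[1] + q[0]*p[2] + q[1]*p[3],
        q[1]*p[0] - q[0]*p[1] + q[3]*p[2] + q[2]*p[3],
       -q[0]*p[0] - q[1]*p[1] - q[2]*p[2] + q[3]*p[3]
    };
}
```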

3.2.2 Approximation

In the case of a small rotation δq it is possible to use a small angle approximation for the quaternion. In this case we have:

\delta\bar{q} = \begin{bmatrix} \delta q \\ \delta q_4 \end{bmatrix} = \begin{bmatrix} \hat{k}\sin(\delta\theta/2) \\ \cos(\delta\theta/2) \end{bmatrix} \approx \begin{bmatrix} \frac{1}{2}\delta\theta \\ 1 \end{bmatrix}  (18)

This leads to the well known formulation for the rotation matrix:

C(\delta\bar{q}) = I - [\delta\theta \times]  (19)

We can thus interpret the quaternion q as the rotation between two frames, and we describe it as follows:

C = \exp(-[\hat{k}\theta \times])  (20)

3.2.3 Quaternion Integration

In order to deal with the quantities in the filter (gyroscope rates and orientation) we have to understand how to integrate the quaternion to update the orientation at each step. We can write the orientation derivative as follows:

\dot{\bar{q}}(t) = \frac{1}{2}\Omega(\omega)\,\bar{q}(t)  (21)

where Ω(ω) stands for:

\Omega(\omega) = \begin{bmatrix} 0 & \omega_z & -\omega_y & \omega_x \\ -\omega_z & 0 & \omega_x & \omega_y \\ \omega_y & -\omega_x & 0 & \omega_z \\ -\omega_x & -\omega_y & -\omega_z & 0 \end{bmatrix}  (22)

The main assumption is a zeroth order quaternion integrator, i.e. ω_k remains constant over Δt. With this assumption we can write:

\bar{q}(t_{k+1}) = \begin{bmatrix} \frac{\omega}{|\omega|}\sin\!\big(\frac{|\omega|}{2}\Delta t\big) \\ \cos\!\big(\frac{|\omega|}{2}\Delta t\big) \end{bmatrix} \otimes \bar{q}(t_k)  (23)
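A sketch of the zeroth order integrator of equation (23), reusing the quatMultiply helper above; the guard for very small rates is an implementation assumption to avoid division by zero:

```cpp
#include <cmath>

// Zeroth order quaternion integration, eq. (23): rotate q_k by the
// constant rate w = [wx, wy, wz] held over a timestep dt.
Quat integrateQuat(const Quat& qk, const std::array<double, 3>& w, double dt) {
    double n = std::sqrt(w[0]*w[0] + w[1]*w[1] + w[2]*w[2]);
    if (n < 1e-12) return qk;          // negligible rotation: keep q_k
    double half = 0.5 * n * dt;
    double s = std::sin(half) / n;     // sin(|w| dt / 2) / |w|
    Quat dq = { w[0]*s, w[1]*s, w[2]*s, std::cos(half) };
    return quatMultiply(dq, qk);       // dq (x) q_k
}
```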

4 Point Foot Algorithm

Now we have to define the state, the inputs and the measurements of the Extended Kalman Filter. Remember that we have data coming from the IMU and from the leg kinematics: the former will be used as input while the latter will be used as measurement (similarly to [3]).

4.1 Prediction Model

The states used in our model are described below:

x = [r, v, q, p_{left}, p_{right}, b_f, b_\omega]^T.  (24)

Where:

• r is the position of the IMU;
• v is its velocity;
• q is the quaternion representing a rotation from world to body frame;
• p is the world position of the foot (respectively left and right);
• b_f and b_ω are the accelerometer and gyroscope biases.

The derivatives can be computed as:

\dot{r} = v;  (25)
\dot{v} = C^T(\tilde{f} - b_f - w_f) + g;  (26)
\dot{q} = \frac{1}{2}\begin{bmatrix} \tilde{\omega} - b_\omega - w_\omega \\ 0 \end{bmatrix} \otimes q;  (27)
\dot{p}_i = C^T w_{p,i};  (28)
\dot{b}_f = w_{b_f};  (29)
\dot{b}_\omega = w_{b_\omega}.  (30)

Here C represents the rotation matrix given by the quaternion q, while the w quantities are the noise terms associated with the process; ⊗ denotes the quaternion product. The inputs are represented by f̃ and ω̃, which are the raw acceleration and angular rate sensed by the IMU.

4.2 Measurement Model

The measurement is a function of the measured joint angles and the kinematic model of the leg. This model has been extracted from the URDF file provided by IIT (the original manufacturer of COMAN). Through the application Robotran we extracted C++ code that computes the actual pose and distance of the robot with respect to its feet.

The measurement vector will look like this:

s_p = C(p - r) + n_p;  (31)

where n_p is the noise vector (combining uncertainty on the encoders and on the kinematic model).

4.3 Discrete Implementation

The model presented before must now be implemented in discretized form on the robot. We can assume a zero-order hold on the IMU data over a given timestep Δt:

\hat{r}^-_{k+1} = \hat{r}^+_k + \Delta t\,\hat{v}^+_k + \frac{\Delta t^2}{2}\big(\hat{C}^{+T}_k \hat{f}_k + g\big);  (32)
\hat{v}^-_{k+1} = \hat{v}^+_k + \Delta t\big(\hat{C}^{+T}_k \hat{f}_k + g\big);  (33)
\hat{q}^-_{k+1} = \exp(\Delta t\,\hat{\omega}_k) \otimes \hat{q}^+_k;  (34)
\hat{p}^-_{k+1} = \hat{p}^+_k;  (35)
\hat{b}^-_{f,k+1} = \hat{b}^+_{f,k};  (36)
\hat{b}^-_{\omega,k+1} = \hat{b}^+_{\omega,k};  (37)

in which

\exp(\omega) = \begin{bmatrix} \cos(\|\omega\|/2) \\ \sin(\|\omega\|/2)\,\omega/\|\omega\| \end{bmatrix}.  (38)

This notation (exponential mapping) allows a rotation increment to be composed with the previous quaternion. In the previous equations we have:

\hat{f}_k = \tilde{f} - \hat{b}^+_{f,k};  (39)
\hat{\omega}_k = \tilde{\omega} - \hat{b}^+_{\omega,k}.  (40)

It can be noted that the infinitesimal rotation is computed directly in the base frame, using strap-down techniques to obtain the final orientation. Last, the measurement model can be written as:

\hat{s}_{p,k} = \hat{C}^-_k\big(\hat{p}^-_k - \hat{r}^-_k\big).  (41)
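A condensed sketch of the prediction step (32)-(40) with Eigen is shown below; the state layout, the gravity constant and the side of the quaternion composition are illustrative assumptions rather than the thesis code:

```cpp
#include <Eigen/Dense>
#include <Eigen/Geometry>

using Eigen::Matrix3d;
using Eigen::Quaterniond;
using Eigen::Vector3d;

struct PointFootState {
    Vector3d r, v;      // IMU position and velocity in the world frame
    Quaterniond q;      // world-to-body orientation
    Vector3d pL, pR;    // feet positions in the world frame
    Vector3d bf, bw;    // accelerometer and gyroscope biases
};

// Prediction step, eqs. (32)-(40): zero-order hold on the IMU data over dt.
void predict(PointFootState& s, const Vector3d& f_raw,
             const Vector3d& w_raw, double dt) {
    const Vector3d g(0.0, 0.0, -9.81);        // assumed gravity vector
    const Vector3d f = f_raw - s.bf;          // eq. (39)
    const Vector3d w = w_raw - s.bw;          // eq. (40)
    const Matrix3d CT = s.q.toRotationMatrix().transpose(); // body-to-world
    const Vector3d a = CT * f + g;
    s.r += dt * s.v + 0.5 * dt * dt * a;      // eq. (32)
    s.v += dt * a;                            // eq. (33)
    // eq. (34): compose the rotation increment exp(dt*w) with q
    const double angle = w.norm() * dt;
    const Vector3d axis = (w.norm() > 1e-12) ? Vector3d(w.normalized())
                                             : Vector3d::UnitZ();
    s.q = (s.q * Quaterniond(Eigen::AngleAxisd(angle, axis))).normalized();
    // eqs. (35)-(37): feet positions and biases are kept constant
}
```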

Now the update part has to be implemented. The dynamics has to be linearized in order to perform this step. Thus, we get the linearized error model:

\dot{\delta r} = \delta v;  (42)
\dot{\delta v} = -C^T f^\times \delta\psi - C^T \delta b_f - C^T w_f;  (43)
\dot{\delta\psi} = -\omega^\times \delta\psi - \delta b_\omega - w_\omega;  (44)
\dot{\delta p} = C^T w_p;  (45)
\dot{\delta b}_f = w_{b_f};  (46)
\dot{\delta b}_\omega = w_{b_\omega}.  (47)

The inputs coming from the IMU are assumed to be bias compensated as in (39) and (40). With this notation v^\times denotes the skew-symmetric matrix associated with the vector v. Writing the linearized system in the form \dot{\delta x} = F_c \delta x + L_c w we have:

F_c = \begin{bmatrix}
0 & I & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & -C^T f^\times & 0 & 0 & -C^T & 0 \\
0 & 0 & -\omega^\times & 0 & 0 & 0 & -I \\
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix}  (48)

and:

L_c = \mathrm{diag}\{0, -C^T, -I, C^T, C^T, I, I\}.  (49)

The linearized measurement model can be written as:

\delta s_p = -C\,\delta r + (C(p - r))^\times \delta\psi + C\,\delta p + n_p.  (50)

Defining \delta y = H_c \delta x + v we have:

H_c = \begin{bmatrix} -C & 0 & (C(p-r))^\times & C & 0 & 0 & 0 \end{bmatrix}  (51)

Now this model has to be discretized. We will assume a zero-order hold on the inputs over the interval Δt. This gives:

Q_k = F_k L_c Q_c L_c^T F_k^T \, \Delta t;  (52)

F_k = \begin{bmatrix}
I & I\Delta t & 0 & 0 & 0 & 0 & 0 \\
0 & I & -C_k^T f_k^\times \Delta t & 0 & 0 & -C_k^T \Delta t & 0 \\
0 & 0 & I - \omega_k^\times \Delta t & 0 & 0 & 0 & -I\Delta t \\
0 & 0 & 0 & I & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & I & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & I & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & I
\end{bmatrix}  (53)

H_k = \begin{bmatrix} -C_k & 0 & (C_k(p_k - r_k))^\times & C_k & C_k & 0 & 0 \end{bmatrix}  (54)

4.4 Observability Analysis

In order to introduce the concept of nonlinear observability [20] we consider a system of the form:

\dot{x} = f(x, u)  (55)

with x(0) = x_0, x ∈ R^n and u ∈ U ⊂ R^m. There are p outputs:

y_j = h_j(x)  (56)

with j = 1, 2, ..., p. We consider all the functions analytic and we want to reconstruct the state x(t) knowing:

• the system model;
• the inputs u(t) given to the system in a given interval t ∈ [0, T];
• the corresponding outputs in the same interval.

Since the solution of the differential equation (55) is unique, it would be enough to know the value of the state at any time during the evolution and then proceed by integration. It won't be possible to reconstruct x(t), however, if there are two states that generate identical outputs under the same inputs.

Two initial states x_1 and x_2 are not discernible in the interval [0, T] if, ∀ u ∈ U, the trajectories starting from x_1 and x_2 give the same output over the whole interval.

We will analyze the local discernibility between two close states x_1 and x_2; we will consider only trajectories that evolve close to the initial state during the whole time interval. A system is locally observable in x_0 if in the neighborhood of x_0 there is no point that is not discernible from x_0.

The local observability of a nonlinear system can be deduced from the corresponding linearized system; this however is only a sufficient condition, not a necessary one.

Thus, we will need some more instruments, helpful in the study of nonlinear observability. For the i-th output we can write the Taylor expansion:

y_i(x, u, t) = \sum_{k=0}^{\infty} \left. \frac{d^k y_i(x, u, t)}{dt^k} \right|_{t=0} \frac{t^k}{k!}  (57)

Writing out the first derivatives we have:

y_i = h_i  (58)

\dot{y}_i = \frac{\partial h_i}{\partial x}\dot{x} = \frac{\partial h_i}{\partial x} f + \sum_j \frac{\partial h_i}{\partial x} g_j u_j = L_f h_i + \sum_j L_{g_j} h_i u_j  (59)

\ddot{y}_i = \frac{\partial}{\partial x}\Big( L_f h_i + \sum_j L_{g_j} h_i u_j \Big)\Big( f + \sum_j g_j u_j \Big) + \sum_j L_{g_j} h_i \dot{u}_j  (60)
= L_f^2 h_i + \sum_j L_f L_{g_j} h_i u_j + \sum_j L_{g_j} L_f h_i u_j + \sum_k \sum_j L_{g_k} L_{g_j} h_i u_j u_k + \sum_j L_{g_j} h_i \dot{u}_j  (61)

and so on. Going up to the second order we can write:

y_i(x, u, t) = h_i(x) + L_f h(x)\,t + L_g h(x) u(0)\,t + L_f^2 h(x)\frac{t^2}{2} + L_f L_g h(x) u(0)\frac{t^2}{2} + L_g L_f h(x) u(0)\frac{t^2}{2} + L_g^2 h(x) u^2(0)\frac{t^2}{2} + L_g h(x)\dot{u}(0)\frac{t^2}{2}  (62)

In (62) there are combinations of the inputs with functions of x, collected in the observation space:

O = \{ h(x), L_f h(x), L_{g_1} h(x), \ldots, L_{g_j} h(x), L_{g_j} L_{g_k} h(x), \ldots \}  (63)

We have to compare this output with the one we would obtain starting from (x + δx), i.e. y_i(x + δx, u, t). The difference between the two output functions is:

\delta y = y(x + \delta x, u, t) - y(x, u, t) = dh(x)\,\delta x + dL_f h(x)\,t\,\delta x + dL_g h(x) u(0)\,t\,\delta x + dL_f L_g h(x) u(0)\frac{t^2}{2}\delta x + \ldots  (64)

From (64) we have that, under the hypothesis δx ≠ 0, x is not discernible from x + δx iff:

\begin{bmatrix} dh(x) \\ dL_f h(x) \\ dL_g h(x) \\ dL_f L_g h(x) \\ \vdots \end{bmatrix} \delta x = \Theta(x)\,\delta x = 0  (65)

We can now compute the observability codistribution:

dO = \mathrm{span}\big( dh(x), dL_f h(x), dL_g h(x), dL_f L_g h(x), \ldots \big)  (66)

A theorem guarantees that if dim(dO) = n, with n the state space dimension, then the system is locally observable.

4.4.1 Nonlinear Analysis

In order to assess the observability of the state estimation (and with it the limitations of this state estimator) we perform a nonlinear observability analysis [20], as is done in [3]. The coordinates used are robocentric (otherwise the analysis would not be analytically tractable). Denoting the current operating state as x*, we introduce the coordinate transformation:

z := \begin{bmatrix} s_1 \\ s_2 \\ \bar{v} \\ \bar{b}_\omega \\ \bar{q} \\ \bar{b}_f \\ \bar{r} \end{bmatrix} = \begin{bmatrix} C(p_1 - r) \\ C(p_2 - r) \\ Cv \\ b_\omega - b_\omega^* \\ q \otimes q^{*-1} \\ b_f - b_f^* \\ Cr \end{bmatrix}  (67)

The order is chosen for the sake of having a nice row echelon result. The prediction and measurement equations are transformed as follows:

\dot{z} = \begin{bmatrix} (\omega - \bar{b}_\omega)^\times s_1 - \bar{v} \\ (\omega - \bar{b}_\omega)^\times s_2 - \bar{v} \\ (\omega - \bar{b}_\omega)^\times \bar{v} + f - \bar{b}_f + \bar{C}C^* g \\ 0 \\ \Omega(\omega - \bar{b}_\omega)\,\bar{q} \\ 0 \\ (\omega - \bar{b}_\omega)^\times \bar{r} + \bar{v} \end{bmatrix}  (68)

As we know from nonlinear observability theory, we need to compute the Lie derivatives in order to evaluate the observability matrix. The result of this operation is shown below:

O = \begin{bmatrix}
I & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & I & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & -I & 0 & 0 & s_1^\times & 0 \\
0 & 0 & 0 & I & -2(Cg)^\times & O_1 & 0 \\
0 & 0 & 0 & 0 & 2\omega^\times(Cg)^\times & O_2 & 0 \\
0 & 0 & 0 & 0 & 0 & \Delta s_{i,j}^\times & 0 \\
0 & 0 & 0 & 0 & 0 & \Delta s_{i,j}^\times\omega^\times & 0 \\
0 & 0 & 0 & 0 & 0 & \Delta s_{i,j}^\times\omega^\times\omega^\times & 0 \\
0 & 0 & 0 & 0 & 0 & O_3 & 0 \\
0 & 0 & 0 & 0 & 0 & \vdots & 0
\end{bmatrix}  (69)

with O_{1,2,3} defined as follows:

O_1 = -s_1^\times\omega^\times - 2(Cv)^\times;  (70)

O_2 = (s_1^\times\omega^\times + 3(Cv)^\times)\,\omega^\times - \omega^\times(s_1^\times\omega^\times + 2(Cv)^\times) - (Cg)^\times - 2f^\times;  (71)

O_3 = \omega^\times\big((s_1^\times\omega^\times\omega^\times + 5(Cv)^\times)\,\omega^\times - 4f^\times - 3(Cg)^\times\big) - \big((s_1^\times\omega^\times\omega^\times + 4(Cv)^\times)\,\omega^\times - 3f^\times - 2(Cg)^\times\big)\,\omega^\times - 4\,\omega^\times(Cv)\,\omega^T.  (72)

The first thing that we can notice about this observability matrix is that the following states are always unobservable:

• x absolute position;
• y absolute position;
• z absolute position;
• absolute yaw.

This could be anticipated, since we are not using any global reference. That means that each of these variables will be referred to the initial position and pose.

However, this is not the only thing that can be deduced from the observability matrix. Conducting a more thorough analysis we can see that there are some singularities in which we have rank drops. These do not appear if the biases are dropped from the states, showing that these unobservabilities affect the biases alone. The cases in which we observe a rank drop in the observability matrix are shown below:

• ω · Cg = 0 with at least one foot in contact brings a rank loss of at least 1;
• ω = 0 with at least one foot in contact brings a rank loss of at least 2;
• ω = 0 and f = 0 gives a rank loss of 3.

These results are critical for the usage of this kind of filter, as we will see next.

4.4.2 EKF Analysis

The need for an observability analysis of the Extended Kalman Filter has been addressed by Huang et al. in [21]. It can be shown that the observability characteristics of the linearized filter can differ from those of the nonlinear system. These differences can come from linearization or from noise effects.

However, the effect of noise can be very evident when compared with the noiseless case. As shown in [3] we can compute the local observability matrix for a discrete system as follows:

M = \begin{bmatrix} H_k \\ H_{k+1}F_k \\ H_{k+2}F_{k+1}F_k \end{bmatrix}  (73)
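As an illustration, the rank of (73) can be checked numerically with Eigen; the matrix sizes and the full-pivot LU rank test are assumptions of this sketch:

```cpp
#include <Eigen/Dense>

using Eigen::MatrixXd;

// Stack the local observability matrix of eq. (73) and return its rank.
// All H blocks are assumed to have the same number of rows, and their
// column count must equal the state dimension (the columns of Fk).
int localObservabilityRank(const MatrixXd& Hk,  const MatrixXd& Hk1,
                           const MatrixXd& Hk2, const MatrixXd& Fk,
                           const MatrixXd& Fk1) {
    MatrixXd M(3 * Hk.rows(), Fk.cols());
    M << Hk,
         Hk1 * Fk,
         Hk2 * Fk1 * Fk;
    // Full-pivot LU gives a numerically robust rank estimate.
    Eigen::FullPivLU<MatrixXd> lu(M);
    return static_cast<int>(lu.rank());
}
```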

In order to have an observability constrained Kalman Filter [21], the unobservable space of the nonlinear system must be the same as that of the linearized one. For the noiseless case the two unobservable spaces coincide. However, when noise is introduced, the following quantities must be considered in order to have consistency between the two spaces:

r_{k+1} = r_k + \Delta t\,v_k + \frac{\Delta t^2}{2}\big(C_k^T f_{k,1} + g\big);  (74)
v_{k+1} = v_k + \Delta t\big(C_k^T f_{k,2} + g\big);  (75)
q_{k+1} = \zeta(\omega_k) \otimes q_k;  (76)
p_{k+1} = p_k.  (77)

It can be noted that the substantial difference is given by the two different parameters f_{k,1} and f_{k,2}. We can describe these quantities as follows:

f_{k,1} = C_k\Big(\frac{r_{k+1} - r_k - \Delta t\,v_k}{0.5\,\Delta t^2} - g\Big);  (78)
f_{k,2} = C_k\Big(\frac{v_{k+1} - v_k}{\Delta t} - g\Big).  (79)

This yields the required result: consistency between the two unobservable spaces.

4.5 Parameters

With this kind of filter, many of the parameters depend on the robot and its hardware. We start with the Q matrix, the covariance matrix of the process noise. The following parameters (to be intended as multiplied by a 3x3 identity matrix) were selected for this matrix:

Table 1: Q Parameters

Q_f = 6.084·10^−7   Q_ω = 2.74·10^−7   Q_bf = 10^−8   Q_bω = 3.82·10^−8

These values have been taken from the IMU datasheet by squaring the noise values of the accelerometers and gyroscopes. We will come back to the values of Q_p later.
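As a check of the "square the datasheet noise" rule used for Table 1, the gyroscope entry can be reproduced directly from the noise density in the datasheet of Figure 16 (a back-of-the-envelope verification, not taken from the thesis):

Q_\omega = \Big(0.03\,\tfrac{\mathrm{deg/s}}{\sqrt{\mathrm{Hz}}}\cdot\tfrac{\pi}{180}\Big)^2 \approx \big(5.24\cdot10^{-4}\big)^2 \approx 2.74\cdot10^{-7},

which matches the Q_ω entry of Table 1; the accelerometer value follows analogously from the 80 µg/√Hz noise density.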

We have to set the values for x(0) and P (0). As for the x(0) parameters we will follow the subsequent approach.

4.5.1 Initial values

For the position values we initialize r_x and r_y to 0. The r_z value is initialized to the z value coming from the right leg kinematics, setting the initial distance from the floor. The velocities are all initialized to zero.

For the initialization of the orientation we have to determine the pitch and roll angles of the robot (the absolute yaw is not observable without a magnetometer). One possible way to do so is to look at the accelerations and align this vector with gravity. However, this alignment has to be done at a moment when the robot is not moving.

We decided to store the previous 500 samples of accelerometer data. Then, computing the variance of the data (summing up the variances of all three directions), we can tell whether the robot has been moving in that time span. This method can be summarized as follows:

if motionless_flag == false
    get new_acc
    if size(vec) < 500
        vec = [vec, new_acc]
    else
        var = variance(vec)
        if var < 2·10^−6
            motionless_flag = true
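A runnable C++ sketch of this motionless detector is given below; the 500-sample window and the 2·10^−6 threshold come from the text, while the rolling-buffer policy is an assumption:

```cpp
#include <array>
#include <deque>

// Variance-based stillness detector: collects accelerometer samples and
// declares the robot motionless when the summed per-axis variance of the
// last 500 samples falls below the threshold used in the thesis.
class MotionlessDetector {
    std::deque<std::array<double, 3>> buf_;
    static constexpr std::size_t kWindow = 500;
    static constexpr double kThreshold = 2e-6;
public:
    bool update(const std::array<double, 3>& acc) {
        buf_.push_back(acc);
        if (buf_.size() > kWindow) buf_.pop_front();  // keep last 500 samples
        if (buf_.size() < kWindow) return false;
        double var_sum = 0.0;
        for (int axis = 0; axis < 3; ++axis) {
            double mean = 0.0;
            for (const auto& s : buf_) mean += s[axis];
            mean /= buf_.size();
            double var = 0.0;
            for (const auto& s : buf_) var += (s[axis] - mean) * (s[axis] - mean);
            var_sum += var / buf_.size();
        }
        return var_sum < kThreshold;  // true: robot considered motionless
    }
};
```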

For the initialization of the biases the IMU was removed from the robot and calibrated on a flat surface. We attached the IMU to a parallelepiped-shaped box and examined the various results (we should see a vector like (1, 0, 0) every time for the accelerometers and (0, 0, 0) for the gyroscopes). Analyzing the mean of each experiment we initialized the biases as follows:

Table 2: Bias Parameters

b_fx = 0.0011   b_fy = 0.0120   b_fz = −0.0016   b_ωx = 0.0066   b_ωy = 0.011   b_ωz = −0.005

The feet values are initialized to zero for the right leg and from the kinematics for the left leg.

4.5.2 Initial covariance

The matrix P(0) represents the initial covariance. A good rule of thumb is to use the values of Q as these initial values. As for P_p(0) we still use the same value as Q_p, which is described in the next section. The values of P(0) are:

Table 3: P(0) initialization

P_r(0)   P_v(0)   P_ψ(0)   P_bf(0)   P_bω(0)

4.5.3 Feet covariance

As stated before, the feet covariances deserve special attention. We cannot take these values for granted: they are tuned parameters of the filter.

Figure 15: Backlash in the left foot: the left foot, despite being on the ground during a forward movement, rotates around its forward axis

Observing the robot behaviour in Figure 15, we can see a different backlash in the kinematics of the two legs. The left foot seems to be more affected by kinematic errors. To account for this, we associate a higher measurement covariance with the kinematics of this leg than the one used for the right leg.

Another aspect to handle is that the filter can trust the kinematics of a foot only while that foot is on the ground (if not, the assumption \dot{p}_i = C^T w_{p,i} does not hold anymore). One way to handle this is to set a force threshold on the feet sensors in order to decide whether or not the foot is on the ground. Based on this we can "exclude" a foot from the state by setting a very high covariance on that state. We can summarize this approach as follows:

if F_left,z > 0.45·F_total,z
    Q_p,left = diag(2·10^−7, 5·10^−7, 10^−6)
    R_p,left = diag(1.25·10^−3, 5·10^−4, 2.5·10^−6)
else
    Q_p,left = diag(10^10)
    R_p,left = diag(10^10)

if F_right,z > 0.45·F_total,z
    Q_p,right = diag(2·10^−7, 5·10^−7, 10^−6)
    R_p,right = diag(5·10^−4, 2·10^−4, 10^−6)
else
    Q_p,right = diag(10^10)
    R_p,right = diag(10^10)
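The same gating can be sketched compactly in C++; the threshold and diagonal values come from the listing above, while the Eigen-based containers are illustrative:

```cpp
#include <Eigen/Dense>

using Eigen::Matrix3d;
using Eigen::Vector3d;

// Covariance gating for one foot: trust the kinematic measurement only
// when the foot carries at least 45% of the total vertical force,
// otherwise "exclude" the foot with a huge covariance.
void gateFootCovariance(double Fz_foot, double Fz_total,
                        const Vector3d& q_diag, const Vector3d& r_diag,
                        Matrix3d& Qp, Matrix3d& Rp) {
    if (Fz_foot > 0.45 * Fz_total) {
        Qp = q_diag.asDiagonal();          // stance foot: nominal covariance
        Rp = r_diag.asDiagonal();
    } else {
        Qp = Matrix3d::Identity() * 1e10;  // swing foot: effectively ignored
        Rp = Matrix3d::Identity() * 1e10;
    }
}
```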

5 Flat Foot EKF

Due to the observability problems shown in the previous approach, we can make one more assumption. Following [6], we consider a flat-foot contact, resulting in a more stable estimate of the states. This filter still makes no assumptions about the gait, but adds a stronger constraint that reduces the unobservable conditions.

As before, the inputs come from the IMU data, relying on accelerometer and gyroscope measurements as the primary sources of data. The measurement update is then computed through the leg kinematics, but with the further constraint of the flat foot assumption, computing also the orientation of the feet and not only their position.

5.1 Prediction Model

The states are the same as before, with the addition of the world orientation of the feet, denoted z:

x = [r, v, q, p_{left}, p_{right}, b_f, b_\omega, z]^T.  (80)

Where, as before:

• r is the position of the IMU;
• v is its velocity;
• q is the quaternion representing a rotation from world to body frame;
• p is the world position of the foot (respectively left and right);
• b_f and b_ω are the accelerometer and gyroscope biases;
• z is the quaternion representing the world orientation of the foot.

The derivatives can then be computed as:

\dot{r} = v;  (81)
\dot{v} = C^T(\tilde{f} - b_f - w_f) + g;  (82)
\dot{q} = \frac{1}{2}\begin{bmatrix} \tilde{\omega} - b_\omega - w_\omega \\ 0 \end{bmatrix} \otimes q;  (83)
\dot{p}_i = C^T w_{p,i};  (84)
\dot{b}_f = w_{b_f};  (85)
\dot{b}_\omega = w_{b_\omega};  (86)
\dot{z} = \frac{1}{2}\begin{bmatrix} \omega_z \\ 0 \end{bmatrix} \otimes z.  (87)

The inputs are represented by f̃ and ω̃, which are the raw acceleration and angular rate sensed by the IMU.

5.2 Measurement Model

The measurement is a function of the measured joint angles and the kinematic model of the leg. Here the transformation matrix is given by the C++ code extracted by Robotran, providing both the position and the orientation of the feet with respect to the body. This is compared with the measurement vector, computed as shown below:

s_p = C(p - r) + n_p;  \qquad s_z = \exp(n_q) \otimes q \otimes z^{-1},  (88)

where:

• n_p is the position noise vector (combining uncertainty on the encoders and on the kinematic model);
• n_q is the orientation noise vector.

Both noise terms constitute tuning parameters for the filter, as shown in the parameters section.

5.2.1 Discrete Implementation

The model presented before must now be implemented in discretized form on the robot. We can still assume a zero-order hold on the IMU data over a given timestep Δt:

\hat{r}^-_{k+1} = \hat{r}^+_k + \Delta t\,\hat{v}^+_k + \frac{\Delta t^2}{2}\big(\hat{C}^{+T}_k \hat{f}_k + g\big);  (89)
\hat{v}^-_{k+1} = \hat{v}^+_k + \Delta t\big(\hat{C}^{+T}_k \hat{f}_k + g\big);  (90)
\hat{q}^-_{k+1} = \exp(\Delta t\,\hat{\omega}_k) \otimes \hat{q}^+_k;  (91)
\hat{p}^-_{k+1} = \hat{p}^+_k;  (92)
\hat{b}^-_{f,k+1} = \hat{b}^+_{f,k};  (93)
\hat{b}^-_{\omega,k+1} = \hat{b}^+_{\omega,k};  (94)
\hat{z}^-_{k+1} = \hat{z}^+_k;  (95)

in which

\exp(\omega) = \begin{bmatrix} \cos(\|\omega\|/2) \\ \sin(\|\omega\|/2)\,\omega/\|\omega\| \end{bmatrix}.  (96)

In the previous equations we have:

\hat{f}_k = \tilde{f} - \hat{b}^+_{f,k};  (97)
\hat{\omega}_k = \tilde{\omega} - \hat{b}^+_{\omega,k}.  (98)

The infinitesimal rotation is still computed directly in the base frame, using strap-down techniques to obtain the final orientation. Last, the measurement model can be written as:

\hat{s}_{p,k} = \hat{C}^-_k\big(\hat{p}^-_k - \hat{r}^-_k\big);  (99)
\hat{s}_{z,k} = \hat{q}^-_k \otimes \big(\hat{z}^-_k\big)^{-1}.  (100)

Going to the linearized model we get:

\dot{\delta r} = \delta v;  (101)
\dot{\delta v} = -C^T f^\times \delta\psi - C^T \delta b_f - C^T w_f;  (102)
\dot{\delta\psi} = -\omega^\times \delta\psi - \delta b_\omega - w_\omega;  (103)
\dot{\delta p} = C^T w_p;  (104)
\dot{\delta b}_f = w_{b_f};  (105)
\dot{\delta b}_\omega = w_{b_\omega};  (106)
\dot{\delta z} = w_z.  (107)

The inputs coming from the IMU are assumed to be bias compensated as in (97) and (98). With this notation v^\times denotes the skew-symmetric matrix associated with the vector v. Writing the linearized system in the form \dot{\delta x} = F_c \delta x + L_c w we have:

F_c = \begin{bmatrix}
0 & I & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & -C^T f^\times & 0 & 0 & -C^T & 0 & 0 \\
0 & 0 & -\omega^\times & 0 & 0 & 0 & -I & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix}  (108)

and:

L_c = \mathrm{diag}\{0, -C^T, -I, C^T, C^T, I, I, I\}.  (109)

The linearized measurement model can be written as:

\delta s_p = -C\,\delta r + (C(p - r))^\times \delta\psi + C\,\delta p + n_p;  (110)

\delta s_z = \delta\psi - C[q \otimes z^{-1}]\,\delta\phi_z + n_q;  (111)

where C[m] is the rotation matrix associated with the quaternion m and δφ_z is the foot orientation error. Defining \delta y = H_c \delta x + v we have:

H_c = \begin{bmatrix}
-C & 0 & (C(p-r))^\times & C & 0 & 0 & 0 & 0 \\
0 & 0 & I & 0 & 0 & 0 & 0 & -C[q \otimes z^{-1}]
\end{bmatrix}  (112)

Now this model has to be discretized. We will assume a zero-order hold on the inputs over the interval Δt. This gives:

Q_k = F_k L_c Q_c L_c^T F_k^T \, \Delta t;  (113)

F_k = \begin{bmatrix}
I & I\Delta t & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & I & -C_k^T f_k^\times \Delta t & 0 & 0 & -C_k^T \Delta t & 0 & 0 \\
0 & 0 & I - \omega_k^\times \Delta t & 0 & 0 & 0 & -I\Delta t & 0 \\
0 & 0 & 0 & I & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & I & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & I & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & I & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & I
\end{bmatrix}  (114)

H_k = \begin{bmatrix}
-C_k & 0 & (C_k(p_k - r_k))^\times & C_k & C_k & 0 & 0 & 0 \\
0 & 0 & I & 0 & 0 & 0 & 0 & -C[q_k \otimes z_k^{-1}]
\end{bmatrix}  (115)

5.3 Observability Analysis

An observability analysis similar to the one performed for the first filter was carried out. As before, the unobservable states are the absolute positions and the absolute yaw. However, without including the biases the unobservable space remains of dimension 4, with no other singularities. For the observability considerations we distinguish two main cases: single foot contact and two feet contact. The analysis is developed in [6] and gives these results:

• Single foot contact:
  – if ω = 0 we have a rank loss of 3;
  – if ω ⊥ Cg we have a rank loss of 1;
  – if ω ∥ Cg the rank loss is at least 1 (up to 3);
  – if ω is neither parallel nor perpendicular to Cg the rank loss is 0.

• Two feet contact:
  – if ω = 0 and 2a + g ∦ Δp the rank loss is 2;
  – if ω ⊥ Cg the rank loss is 1;
  – if ω ∥ Cg and g ∥ Δp the rank loss is 1;
  – otherwise the rank loss is 0.

5.4 Parameters

As for the point foot algorithm, we define the tuning parameters of the state estimation:

if F_left,z > 0.45·F_total,z
    Q_p,left = diag(2·10^−7, 5·10^−7, 10^−6, 10^−4, 10^−4, 10^−2)
    R_p,left = diag(1.25·10^−3, 5·10^−4, 2.5·10^−6, 1.5·10^−4, 1.5·10^−4, 1.5·10^−2)
else
    Q_p,left = diag(10^10)
    R_p,left = diag(10^10)

if F_right,z > 0.45·F_total,z
    Q_p,right = diag(2·10^−7, 5·10^−7, 10^−6, 10^−4, 10^−4, 10^−2)
    R_p,right = diag(5·10^−4, 2·10^−4, 10^−6, 10^−4, 10^−4, 10^−2)
else
    Q_p,right = diag(10^10)
    R_p,right = diag(10^10)

6 IMU Calibration

The IMU mounted on the robot is a 3DM-GX3-25 from MicroStrain. The calibration of the biases has been done as follows: the IMU was attached to a cubical solid, which was then rotated around every axis, resulting in 6 measurements. We had to measure two main quantities:

• the accelerometer biases;
• the gyroscope biases.

The first has been calculated by taking the mean of the two measurements on opposite sides of the cube. This compensates for possible misalignments of the solid with the floor. The gyroscope biases have instead been calculated as the mean of every measurement: they should be zero in any condition, and this yields good results, as shown in the state estimation.
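The opposite-face averaging can be written out explicitly. Denoting by m⁺ and m⁻ the readings of one accelerometer axis pointing up and then down (a sketch of the standard six-face procedure, not taken verbatim from the thesis):

m^{+} = s\,g + b_f, \qquad m^{-} = -s\,g + b_f \;\;\Rightarrow\;\; b_f = \frac{m^{+} + m^{-}}{2},

since gravity enters the two readings with opposite sign while the bias enters with the same sign; averaging therefore cancels gravity (and any small tilt of the box) to first order.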

Figure 16: 3DM-GX3-25 datasheet (LORD MicroStrain). The quantities relevant for the filter tuning are the noise densities: 80 µg/√Hz for the accelerometers and 0.03 °/sec/√Hz for the gyroscopes.

7 Motion Capture Results

In order to evaluate the performance of our navigation algorithm, we compared its output with the results coming from a motion capture system (Figure 17). We realized three experiments, comparing the predicted position with the one coming from the motion capture. The results are adjusted with an initial yaw for each experiment (the absolute yaw is not available without a magnetometer).

Figure 17: Motion Capture System

Figure 18: Y displacement for exp8

Figure 19: X displacement for exp8

Figure 20: Total displacement for exp8

Figure 21: Predicted angles for exp8

Figure 22: Y displacement for exp9

Figure 23: X displacement for exp9

Figure 24: Total displacement for exp9

Figure 25: Predicted angles for exp9

Figure 26: Y displacement for exp10

Figure 27: X displacement for exp10

Figure 28: Total displacement for exp10

Figure 29: Predicted angles for exp10

The main difference between the two algorithms can be found when dealing with less dynamic gaits. A good example is the experiment named exp7 (unfortunately there are no motion capture data for this experiment). What can be noticed is that without the flat foot assumption the estimation becomes unstable when standing in place.

Figure 30: Predicted angles for exp7 with point foot

Figure 31: Predicted angles for exp7 with flat foot

Taking a closer look at the variables that will be part of the controller in the next section, we get the following results:

Figure 32: Left foot position with respect to the IMU

Figure 33: Right foot position with respect to the IMU

Figure 34: Estimated velocity in x direction

8 Controller Improvement

The state estimation has finally been tested online on the robot, as described in Section 2. The robot was stable, achieving the desired gaits without showing problems. The position part of the state estimation will instead be used for the vision system being developed at the BioRob Lab.

9 Conclusions

As shown, the flat foot assumption improves the results on the robot. The observability improvement leads to more stable results and prevents divergence in the estimation. This is especially important for less dynamic gaits (such as the one where the robot stops and stands in place for a while). However, the results could be further improved with additional inputs. For example, a good idea could be to put two IMUs on the feet and measure the actual orientation of the feet (these being the main source of error), also measuring the error in the model when the feet are not flat on the ground.

The results are reliable, with good outcomes on both the motion capture comparison and the controller. More tests can be conducted with a vision system and with a controller that relies on the position, in order to understand if these data are good for that kind of application.

A Code Guidance

A.1 Control.cc

In this file most of the parameters are initialized. While other parameters are set elsewhere, here the tuning parameters of P_0 can be modified in order to improve the results.

A.2 Walking.cc

This is the main file for the testing of the state estimation. Here the output and the input files are determined in order to test different experiments of the COMAN.

A.3 state estimation.cc

This is the main file of the state estimation algorithm. Here all the remaining variables are initialized, and it is part of the iterative loop during control. From this file all the other files listed below are called; every change made to the other files takes effect through this script.

A.4 acc integration.cc

In this file the integration of the acceleration is performed. The states at step k are imported in order to perform the prediction phase.

A.5 or integration.cc

This file is used for the integration of the rotational quantities. The integration is performed through the usage of quaternions, as shown in the dedicated section. This allows a compact version of the rotation quantities to be passed through the phases of the estimation.

A.6 motionless detect.cc

This is the file used to initialize the orientation of the robot. Here the data coming from the accelerometer are stored and their variance is computed in order to detect a moment when the robot is not moving.

A.7 rotateIMU.cc

This script is meant to correct known modeling errors in the position of the IMU with respect to the pelvis. This is not necessary in this application, but it can be useful for non-modeled errors.

A.8 rot vector.cc

After motionless detect has acted, this script computes a rotation matrix between the g vector and the actual vector coming out of the accelerometers. This gives a first estimate of the initial orientation of the IMU with respect to the navigation frame.

A.9 skew.cc

This file computes the skew-symmetric matrix derived from a vector.

A.10 kin initialize.cpp

This file initializes the positions of the two feet with respect to the pelvis. This is crucial because these positions are meant to be fixed while not walking.

A.11 forward kinematics.cpp

This file contains the extracted forward kinematics model of the COMAN, generated through the usage of Robotran. It computes the orientation and the position of the feet with respect to the body frame. These quantities are used as measurements for the Kalman Filter, as shown before.

A.12 kinematics.cpp

In this file all the calculations of the difference between the measured and the predicted quantities are evaluated.

A.13 compute matrix.cpp

Arguably the most important part of the code: here are all the remaining tuning parameters of the filter. All the matrices of the update part of the Kalman Filter are computed in this script. The check for singularities also resides here.

A.14 compute x.cpp

Here the new state is computed through the calculations already done in the previous file. This is the last step of the Kalman Filter.

References

[1] Justin Carpentier, Mehdi Benallegue, Nicolas Mansard, and Jean-Paul Laumond. A kinematics-dynamics based estimator of the center of mass position for anthropomorphic system - a complementary filtering approach. 2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids), 2015.

[2] Pei-Chun Lin, H. Komsuoglu, and D. E. Koditschek. Sensor data fusion for body state estimation in a hexapod robot with dynamical gaits. IEEE Transactions on Robotics, 22(5):932–943, 2006.

[3] Michael Bloesch, Marco Hutter, Mark Hoepflinger, Stefan Leutenegger, Christian Gehring, C. David Remy, and Roland Siegwart. State estimation for legged robots - consistent fusion of leg kinematics and IMU. Robotics: Science and Systems VIII, Sep 2012.

[4] Michael Bloesch, Christian Gehring, Peter Fankhauser, Marco Hutter, Mark A. Hoepflinger, and Roland Siegwart. State estimation for legged robots on unstable and slippery terrain. 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2013.

[5] Martin Gorner and Annett Stelzer. A leg proprioception based 6 dof odometry for statically stable walking robots. Autonomous Robots, 34(4):311–326, 2013.

[6] Nicholas Rotella, Michael Bloesch, Ludovic Righetti, and Stefan Schaal. State estimation for a humanoid robot. 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2014.

[7] Ross Hartley, Josh Mangelson, Lu Gan, Maani Ghaffari Jadidi, Jeffrey M. Walls, Ryan M. Eustice, and Jessy W. Grizzle. Legged robot state-estimation through combined forward kinematic and preintegrated contact factors.


[8] X Xinjilefu, Siyuan Feng, Weiwei Huang, and Christopher G. Atkeson. Decoupled state estimation for humanoids using full-body dynamics. 2014 IEEE International Conference on Robotics and Automation (ICRA), 2014.

[9] Maurice F. Fallon, Matthew Antone, Nicholas Roy, and Seth Teller. Drift-free humanoid state estimation fusing kinematic, inertial and lidar sensing. 2014 IEEE-RAS International Conference on Humanoid Robots, 2014.

[10] Maurice F. Fallon, Pat Marion, Robin Deits, Thomas Whelan, Matthew Antone, John Mcdonald, and Russ Tedrake. Continuous humanoid locomotion over uneven terrain using stereo fusion. 2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids), 2015.

[11] Stylianos Piperakis and Panos Trahanias. Non-linear ZMP based state estimation for humanoid robot locomotion. 2016 IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids), 2016.

[12] Justin Carpentier, Mehdi Benallegue, Nicolas Mansard, and Jean-Paul Laumond. Center-of-mass estimation for a polyarticulated system in contact - a spectral approach. IEEE Transactions on Robotics, 32(4):810–822, 2016.

[13] Alexis Mifsud, Mehdi Benallegue, and Florent Lamiraux. Estimation of contact forces and floating base kinematics of a humanoid robot using only inertial measurement units. 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2015.

[14] X. Xinjilefu, Siyuan Feng, and Christopher G. Atkeson. Center of mass estimator for humanoids and its application in modelling error compensation, fall detection and prevention. 2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids), 2015.

[15] Hyoin Bae, Jaesung Oh, Hyobin Jeong, and Jun-Ho Oh. A new state estimation framework for humanoids based on a moving horizon estimator. IFAC-PapersOnLine, 50(1):3793–3799, 2017.

[16] Kendall Lowrey, Jeremy Dao, and Emanuel Todorov. Real-time state estimation with whole-body multi-contact dynamics: A modified ukf approach. 2016 IEEE-RAS 16th International Conference on Humanoid Robots (Humanoids), 2016.

[17] AMARSI European Project COMAN. https://www.amarsi-project.eu/coman.

[18] CogIMon Project. https://cogimon.eu.

[19] Hamed Razavi, Salman Faraji, and Auke Ijspeert. From standing balance to walking: A single control structure for a continuum of gaits.

[20] Robert Hermann and Arthur J. Krener. Nonlinear controllability and observability. IEEE Transactions on Automatic Control, 22(5):728–740, 1977.

[21] Guoquan P. Huang, Nikolas Trawny, Anastasios I. Mourikis, and Stergios I. Roumeliotis. Observability-based consistent EKF estimators for multi-robot cooperative localization, Sep 2010.
