
Based on the results of the previous sections, we can use SDA to compute the solution to a Lur’e equation. The resulting algorithm is reported as Algorithm 16. The explicit computation of (a possible choice of) K and L is typically not needed in applications. If the two matrices are needed, they can be computed using the fact that

\[
\begin{bmatrix} A^T X + XA + Q & XB + C \\ (XB + C)^T & R \end{bmatrix}
= \begin{bmatrix} K^T \\ L^T \end{bmatrix} \begin{bmatrix} K & L \end{bmatrix}.
\]
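Should K and L be required explicitly, one possible way to extract them is to factor the positive semidefinite matrix on the left-hand side. The following Matlab sketch illustrates only this factorization step, under the block conventions of the displayed equation above (which are our reading of it); it assumes that A, B, C, Q, R and a computed solution X are already available in the workspace, and it is not the procedure used in the rest of the chapter.

    % Hedged sketch: recover one possible choice of K and L from a computed
    % solution X by factoring the (positive semidefinite) left-hand side as
    % [K L]'*[K L].  Block conventions as in the displayed equation above.
    n = size(A, 1);  m = size(B, 2);
    M = [A'*X + X*A + Q,  X*B + C;
         (X*B + C)',      R      ];
    M = (M + M')/2;                  % enforce symmetry against roundoff
    [V, D] = eig(M);
    d = max(diag(D), 0);             % clip small negative eigenvalues
    W = V * diag(sqrt(d));           % M is (approximately) W*W'
    KL = W';                         % so that M = KL'*KL
    K = KL(:, 1:n);                  % block multiplying the state variables
    L = KL(:, n+1:n+m);              % block multiplying the input variables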

While we have presented SDA in its application to nonsymmetric Riccati equations, the algorithm was originally developed for discrete- and continuous-time Riccati equations in control theory [HCL05, CFL05]. One of its main advantages in that setting is that it preserves the symplectic structure of the SSF-I pencil.

As the matrix H associated with a continuous-time algebraic Riccati equation is Hamiltonian, its Cayley transform is symplectic. Therefore, in control theory problems SDA needs to work only with symplectic matrices and pencils [CFL05]. An easy check shows that a pencil in SSF-I is symplectic if and only if $E = F^T$, $G = G^T$ and $H = H^T$. If this property holds, then all the iterates generated by SDA are symplectic as well, i.e.,

\[
E_k = F_k^T, \qquad G_k = G_k^T, \qquad H_k = H_k^T
\]

for all k ≥ 0. In particular, at each step the two matrices $F_{k+1}$ and $E_{k+1}$ are the transpose of each other; therefore, only the first needs to be computed. Similarly, the computation and inversion of $I - H_k G_k$ can be avoided, since this matrix is the transpose of $I - G_k H_k$, which is already computed and inverted during the algorithm. If the inversion is performed implicitly (e.g., via an LU factorization), the picture does not change.
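To make the savings concrete, the following Matlab sketch shows one doubling step that exploits the symplectic structure. It assumes the standard SDA recursion for a pencil in SSF-I (as in [CFL05]) and is meant only as an illustration, not as the implementation used in our experiments: $F_k$ is never formed, and the system with $I - H_k G_k$ is solved through the transpose of $I - G_k H_k$.

    function [E, G, H] = sda_symplectic_step(E, G, H)
    % One SDA step under the symplectic structure E_k = F_k^T, G_k = G_k^T,
    % H_k = H_k^T (hedged sketch of the standard SSF-I recursion).
        n  = size(E, 1);
        W  = eye(n) - G*H;            % I - G_k H_k; note (I - H_k G_k) = W'
        WE = W \ E;                   % (I - G_k H_k)^{-1} E_k
        WG = W \ G;                   % (I - G_k H_k)^{-1} G_k
        Gnew = G + E * WG * E';       % G_{k+1} = G_k + E_k (I - G_k H_k)^{-1} G_k E_k^T
        Hnew = H + E' * (W' \ (H*E)); % H_{k+1} = H_k + E_k^T (I - H_k G_k)^{-1} H_k E_k
        Enew = E * WE;                % E_{k+1} = E_k (I - G_k H_k)^{-1} E_k
        % re-symmetrize to damp roundoff; F_{k+1} is simply Enew'
        G = (Gnew + Gnew')/2;  H = (Hnew + Hnew')/2;  E = Enew;
    end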

In particular, this implies that the symplectic eigensymmetry is preserved during the iteration. This property is crucial, since the eigenvalue condition number for structured perturbations can be substantially lower than the one for unstructured perturbations [KKT06].

Therefore, in the presence of Hamiltonian or symplectic eigensymmetry, structure-preserving algorithms usually yield much better results than unstructured ones [BKM05].

Choice of the parameter in the Cayley transform

The accuracy of the computed solution also depends on an appropriate choice of γ. Two goals have to be considered:

1. the matrix to invert in (10.9) should be well-conditioned;

2. the condition number of the resulting problem, i.e., the separation between the stable and unstable subspaces of the Cayley-transformed pencil (10.11), should not be too small.

While the impact of the first factor is easy to measure, the second one poses a more significant problem, since there are no a priori estimates for the conditioning of a subspace separation problem. Nevertheless, we may try to understand how the choice of the parameter γ of the Cayley transform affects the conditioning. The conditioning of the stable subspace depends on the separation between the restrictions of the operator to the semi-stable and semi-unstable invariant subspaces [GVL96, Section 7.2]; roughly speaking, this separation must be small when the eigenvalues are close to the unit circle (see [GVL96, Problem P7.2.10], and [IP10] for a similar, more precise estimate for the nonsymmetric problem). The eigenvalues of the transformed pencil are given by $\frac{\lambda - \gamma}{\lambda + \gamma} = 1 - \frac{2\gamma}{\lambda + \gamma}$; thus they tend to cluster around 1 for small values of γ, which is undesirable. The eigenvalue closest to 1 is the one for which $\lambda + \gamma$ has the largest modulus; we may take $\rho(A) + \gamma$ as a crude approximation, which can be further approximated loosely with $\lVert A \rVert_1 + \gamma$. Therefore, as a heuristic to choose a reasonable value of γ, we may look for a small value of

\[
f(\gamma) = \max\left\{ \operatorname{condest}\bigl(\begin{bmatrix} E_1 & A_2 \end{bmatrix}\bigr),\ \frac{\lVert A \rVert_1 + \gamma}{2\gamma} \right\},
\]

where condest(·) is the 1-norm condition number estimate computed by Matlab's condest function. Following the strategy of [CFL05], we chose to perform five steps of the golden section search method [LY08] on f(γ), in order to obtain a reasonably good value of the objective function without devoting too much time to this ancillary computation. A sketch of this heuristic is reported below.
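The following Matlab fragment sketches the heuristic just described. The search interval for γ and the helper building the matrix to be inverted in (10.9) are placeholders introduced for illustration only; the actual interval and matrix depend on the problem data.

    % Hedged sketch of the choice of gamma: five golden-section steps on f.
    % 'matrix_to_invert' is a hypothetical helper standing for the matrix
    % that has to be inverted in (10.9); it is not defined in this sketch.
    f = @(g) max(condest(matrix_to_invert(g)), (norm(A, 1) + g) / (2*g));
    a = 1e-3;  b = 1e3;               % assumed search bracket for gamma
    r = (sqrt(5) - 1)/2;              % inverse golden ratio
    c = b - r*(b - a);  d = a + r*(b - a);
    for iter = 1:5
        if f(c) < f(d)
            b = d;                    % minimum lies in [a, d]
        else
            a = c;                    % minimum lies in [c, b]
        end
        c = b - r*(b - a);  d = a + r*(b - a);
    end
    gamma = (a + b)/2;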

However, we point out that a different f(γ) is used in [CFL05], which apparently takes into account only the first of our two goals. Their choice of the function is based on an error analysis of their formulas for the starting blocks of SDA. Due to Theorem 6.25, this part of the error analysis can be simplified to checking $\operatorname{condest}\bigl(\begin{bmatrix} E_1 & A_2 \end{bmatrix}\bigr)$.

10.5 Numerical experiments

We have implemented Algorithm 16 (SDA-L) using Matlab, and tested it on the following test problems.

P1 a Lur’e equation with a random stable matrix $A \in \mathbb{R}^{n \times n}$, a random B, C = B, Q = 0 and R the m × m matrix with all entries equal to 1, so that rk(R) = 1. Namely, B was generated with the Matlab command B=rand(n,m), while to ensure a stable A we used the following sequence of commands: V=randn(n); W=randn(n); A=-V*V'-W+W';

P2 a set of problems motivated by real-world examples, taken with some modifications from the benchmark set carex [BLM95a]. Namely, we took Examples 3 to 6, which arise from various applications and vary in size and numerical characteristics, and changed the value of R to obtain a singular problem. In the original versions of all examples, R is the identity matrix of appropriate size; we simply replaced its (1, 1) entry with 0.

P3 a highly ill-conditioned high-index problem with m = 1, $A = I_n + N_n$, $B = e_n$ (the last vector of the canonical basis of $\mathbb{R}^n$), C = −B, R = 0, Q = −tridiag(1, 2, 1). Such a problem corresponds to a Kronecker chain of length 2n + 1 associated with an infinite eigenvalue, and its canonical semi-stable solution is X = I. Notice that the conditioning of the invariant subspace problem in this case is $\varepsilon^{1/(2n+1)}$, for an unstructured perturbation of the input data of the order of the machine precision $\varepsilon$ [GLR06, Section 16.5]. A possible Matlab construction of this problem is sketched after this list.
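For concreteness, the data of P3 can be assembled in Matlab as in the following sketch. The reading of $N_n$ as the nilpotent Jordan block (ones on the superdiagonal) is our assumption from the notation above.

    % Hedged sketch of the construction of problem P3 (n is the chosen size).
    n  = 8;  m = 1;
    Nn = diag(ones(n-1, 1), 1);       % assumed: nilpotent Jordan block
    A  = eye(n) + Nn;
    B  = zeros(n, 1);  B(n) = 1;      % e_n, last canonical basis vector
    C  = -B;
    R  = 0;
    Q  = -full(gallery('tridiag', n, 1, 2, 1));   % -tridiag(1, 2, 1)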

The results of SDA-L are compared to those of a regularization method like the one described in (10.3), for different values of the regularization parameter ε. After the regularization, the equations are solved using Algorithm 8 after a Cayley transform with the same parameter γ (R+S), or with the matrix sign method with determinant scaling [Meh91, Hig08] (R+N). We point out that the control toolbox of Matlab contains a command gcare that solves a so-called generalized continuous-time algebraic Riccati equation based on a pencil in a form apparently equivalent to that in (1.13). In fact, this command is not designed to deal with a singular R, nor with eigenvalues numerically on the imaginary axis. Therefore, when applied to nearly all of the following experiments, this command fails, reporting the presence of eigenvalues too close to the imaginary axis.

For the problem P3, where the analytical solution X = I is known, we reported the forward error with respect to it.

For P1 and P2, for which no analytical solution is available, we computed instead the relative residual of the Lur’e equations in matrix form (see (10.13)). The results are reported in Tables 10.1, 10.2 and 10.3. A star (⋆) in the tables denotes convergence failure.

We see that in all the experiments our solution method achieves better results than the ones based on regularization.


Table 10.1: Relative residual for P1

n     m    SDA-L      R+S (ε = 10⁻⁶)   R+S (ε = 10⁻⁸)   R+S (ε = 10⁻¹²)   R+N (ε = 10⁻⁸)
10    3    1×10⁻¹⁵    2×10⁻⁸           4×10⁻¹⁰          3×10⁻⁶            3×10⁻¹⁰
50    5    3×10⁻¹⁴    8×10⁻⁹           1×10⁻⁷           2×10⁻¹            4×10⁻¹⁰
500   10   7×10⁻¹⁴    2×10⁻⁹           1×10⁻⁷           1×10⁻¹            ⋆

Table 10.2: Relative residual for P2

Problem number   SDA-L      R+S (ε = 10⁻⁶)   R+S (ε = 10⁻⁸)   R+S (ε = 10⁻¹²)   R+N (ε = 10⁻⁸)
3                6×10⁻¹⁶    1×10⁻⁷           9×10⁻¹⁰          8×10⁻⁶            1×10⁻⁹
4                9×10⁻¹⁶    6×10⁻⁷           6×10⁻⁹           2×10⁻⁷            6×10⁻⁹
5                6×10⁻¹⁵    3×10⁻⁷           1×10⁻⁹           3×10⁻⁸            1×10⁻⁹
6                2×10⁻¹⁵    6×10⁻¹²          2×10⁻¹²          1×10⁻¹¹           4×10⁻¹³

Table 10.3: Forward error for P3

n    SDA-L     R+S (ε = 10⁻⁶)   R+S (ε = 10⁻⁸)   R+S (ε = 10⁻¹²)   R+N (ε = 10⁻⁸)
1    1×10⁻⁸    1×10⁻³           1×10⁻⁴           1×10⁻⁶            1×10⁻⁴
2    5×10⁻⁵    3×10⁻²           1×10⁻²           3×10⁻²            ⋆
3    2×10⁻³    1×10⁻¹           5×10⁻²           2×10⁰             ⋆
4    1×10⁻²    4×10⁻¹           1×10⁻¹           8×10⁻¹            ⋆
5    6×10⁻²    1×10⁰            4×10⁻¹           2×10⁰             ⋆

10.6 Conclusions and research lines

In this chapter we have presented a new numerical method for the solution of Lur’e matrix equations. Unlike previous methods based on regularization, this approach allows one to solve the original equation without introducing any artificial perturbation. The method is described in a work in preparation [PR].

The first step of this approach is applying a Cayley transform to convert the problem to an equivalent discrete-time pencil. In this new form, the infinite eigenvalues can easily be deflated, reducing the problem to a discrete-time algebraic Riccati equation with eigenvalues on the unit circle. For the solution of this latter equation, the structure-preserving doubling algorithm was chosen, due to its good behavior in the presence of eigenvalues on the unit circle, as proved by the convergence results in [HL09]. Direct methods, such as the symplectic eigensolvers presented in [Fas00], could also be used for the solution of the deflated DARE.

The numerical experiments confirm the effectiveness of our new approach for regular matrix pencils. It is not clear whether the same method can be adapted to work in cases in which the pencil (1.13) is not regular, which may indeed happen in the context of Lur’e equations. Another open issue is finding a way to exploit the low-rank structure of Q (when present). These further developments are currently under investigation.

Chapter 11

Generalized SDA

11.1 Introduction

The traditional hypotheses needed for the implementation of SDA are that the two Riccati equations

\[
Q + A^T X + XA - XGX = 0, \qquad YQY + YA^T + AY - G = 0
\]

have, respectively, a stabilizing and an anti-stabilizing solution. This is equivalent to saying that the Hamiltonian

\[
H = \begin{bmatrix} A & -G \\ -Q & -A^T \end{bmatrix}
\]

has stable and unstable invariant subspaces of the form

\[
U = \begin{bmatrix} U^{(1)} \\ U^{(2)} \end{bmatrix}, \qquad
V = \begin{bmatrix} V^{(1)} \\ V^{(2)} \end{bmatrix},
\]

with $U^{(1)}$ and $V^{(2)}$ nonsingular. If this property holds, we can form the desired solutions to the two Riccati equations as $X = U^{(2)} (U^{(1)})^{-1}$, $Y = V^{(1)} (V^{(2)})^{-1}$.
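As an illustration of this characterization (and not of SDA itself), X and Y can be obtained from the invariant subspaces of H through Matlab's ordered Schur decomposition, as in the following sketch; A, G, Q are assumed square of size n.

    % Hedged sketch: X and Y from the stable/unstable invariant subspaces of H.
    n = size(A, 1);
    H = [A, -G; -Q, -A'];
    [Z, T] = schur(H, 'real');
    [Zs, ~] = ordschur(Z, T, 'lhp');  % stable (left half-plane) eigenvalues first
    U1 = Zs(1:n, 1:n);  U2 = Zs(n+1:end, 1:n);
    X  = U2 / U1;                     % X = U^(2) (U^(1))^{-1}
    [Zu, ~] = ordschur(Z, T, 'rhp');  % unstable eigenvalues first
    V1 = Zu(1:n, 1:n);  V2 = Zu(n+1:end, 1:n);
    Y  = V1 / V2;                     % Y = V^(1) (V^(2))^{-1}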

The case in which H has eigenvalues on the imaginary axis arises in applications as well [FD87, CG89]; in this case, one looks for the unique semi-stable (semi-unstable) Lagrangian subspace U (V). SDA converges in this case too, as has recently been proved in [HL09], but its convergence rate drops from quadratic to linear.

In the case in which one or both of the blocks $U^{(1)}$ and $V^{(2)}$ are ill-conditioned, the corresponding Riccati solution X or Y becomes very large; the quantities appearing in the algorithm grow as well, and ill-conditioning in the intermediate steps often leads to poor performance of the algorithm. In fact, the algorithm uses the initial data of the control problem only in its initialization; thus, inaccuracy in one of the intermediate steps leads to a loss of accuracy that cannot be recovered in later steps.

In this chapter, we address this situation, and propose a generalization of SDA that aims to attain more accuracy in these problematic cases, at the expense of a larger cost per step.

Numerical examples are provided that show how our SDA variant achieves better results on several border-case problems.
