### CG method

Assume that A, the coefficient matrix of our system Ax = b, is positive definite (recall that A and b are real). In the general scheme choose H = A, d_0 = r_0 = b − Ax_0, and d_k = r_k + β_{k−1} d_{k−1}, k = 1, 2, . . . (where r_k = b − Ax_k), with β_{k−1} chosen so that

(d_k, d_{k−1})_A = 0

(d_k conjugate to d_{k−1}). The algorithm we obtain is the following:

x_0 ∈ R^n, r_0 = b − Ax_0, d_0 = r_0. For k = 0, 1, . . . {

    τ_k = d_k^T r_k / d_k^T A d_k

    x_{k+1} = x_k + τ_k d_k

    r_{k+1} = b − Ax_{k+1} = r_k − τ_k A d_k

    β_k = − r_{k+1}^T A d_k / d_k^T A d_k

    d_{k+1} = r_{k+1} + β_k d_k

}

This is known as the Conjugate Gradient (CG) algorithm.
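For concreteness, here is a minimal numpy sketch of the algorithm above (the function name `cg`, the tolerance and the toy test system are our choices, not part of the text):

```python
import numpy as np

def cg(A, b, x0, tol=1e-12):
    """CG as in the text: tau_k = d^T r / d^T A d, beta_k = -r_{k+1}^T A d / d^T A d."""
    x = np.asarray(x0, dtype=float).copy()
    r = b - A @ x
    d = r.copy()
    for _ in range(len(b)):            # at most n steps (see the First main result)
        if np.linalg.norm(r) < tol:
            break
        Ad = A @ d
        tau = (d @ r) / (d @ Ad)       # step length tau_k
        x = x + tau * d
        r = r - tau * Ad               # r_{k+1} = r_k - tau_k A d_k
        beta = -(r @ Ad) / (d @ Ad)    # enforces (d_{k+1}, d_k)_A = 0
        d = r + beta * d
    return x

# toy pd system: tridiag(-1, 2, -1)
n = 6
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = cg(A, b, np.zeros(n))
```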

Remarks. Note that

0 = (x − x_{k+1})^T H d_k = (x − x_{k+1})^T A d_k = r_{k+1}^T d_k,   r_{k+1}^T d_{k+1} = ‖r_{k+1}‖_2^2.

As a consequence, if at step s we have r_s = b − Ax_s ≠ 0, then d_s ≠ 0, τ_s is well defined and nonzero . . .: the algorithm works.

If r_0, . . ., r_{m−1} are non null and r_m = 0, then β_{m−1} = 0, d_m = 0 and τ_m cannot be defined, but this does not matter since x_m = A^{−1}b. This situation does indeed occur: there exists m ≤ n = the order of A such that r_m = 0 (see below).

Alternative expressions for τ_k and β_k hold:

τ_k = d_k^T r_k / d_k^T A d_k = r_k^T r_k / d_k^T A d_k

(since d_k = r_k + β_{k−1} d_{k−1} and r_k^T d_{k−1} = 0), and

β_k = − r_{k+1}^T A d_k / d_k^T A d_k = − r_{k+1}^T τ_k^{−1}(r_k − r_{k+1}) / d_k^T τ_k^{−1}(r_k − r_{k+1}) = (− r_{k+1}^T r_k + r_{k+1}^T r_{k+1}) / d_k^T r_k = r_{k+1}^T r_{k+1} / r_k^T r_k.

The latter identity uses the result

r_{k+1}^T r_k = 0

(the residual at step k + 1 is orthogonal to the residual at step k, exactly as in the Gradient method), which is not obvious:

r_{k+1}^T r_k = r_{k+1}^T (d_k − β_{k−1} d_{k−1}) = −β_{k−1} r_{k+1}^T d_{k−1} = −β_{k−1} (r_k − τ_k A d_k)^T d_{k−1} = β_{k−1} τ_k d_k^T A d_{k−1} = 0.

First main result: If r_0, r_1, . . . , r_p are non null, then

d_l^T A d_j = 0,   r_l^T r_j = 0,   0 ≤ j < l ≤ p.

That is, each new residual (search direction) is orthogonal (conjugate) to all previous residuals (search directions). As a consequence, the residual r_m must be null for some m ≤ n, or, equivalently, CG finds the solution of Ax = b in at most n steps.
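The First main result can also be observed numerically: running the recurrences on a random pd matrix and storing all residuals and directions, the inner products r_l^T r_j and d_l^T A d_j (j < l) vanish up to rounding. A sketch (the matrix, seed and tolerances are our choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)           # a well-conditioned pd matrix
b = rng.standard_normal(n)

x = np.zeros(n)
r = b - A @ x
d = r.copy()
R, D = [r.copy()], [d.copy()]         # store all residuals / directions
for _ in range(n):
    Ad = A @ d
    tau = (r @ r) / (d @ Ad)
    x = x + tau * d
    r_new = r - tau * Ad
    beta = (r_new @ r_new) / (r @ r)  # alternative expression for beta_k
    d = r_new + beta * d
    r = r_new
    R.append(r.copy())
    D.append(d.copy())
    if np.linalg.norm(r) < 1e-12:
        break

# largest violation of orthogonality / A-conjugacy over all pairs j < l
max_rr = max(abs(R[l] @ R[j]) for l in range(1, len(R)) for j in range(l))
max_dAd = max(abs(D[l] @ A @ D[j]) for l in range(1, len(D)) for j in range(l))
```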

Proof. . . .

A useful representation of the residuals. If r_0, r_1, . . . , r_{k−1} are non null, then there exist polynomials s_k(λ), q_k(λ) such that

r_k = s_k(A) r_0,   d_k = q_k(A) r_0,

s_k(λ) = (−1)^k τ_0 τ_1 · · · τ_{k−1} λ^k + . . . + 1,   τ_0 τ_1 · · · τ_{k−1} ≠ 0.

Proof (by induction). The equality r_0 = s_0(A) r_0 holds if s_0(λ) = 1; d_0 = q_0(A) r_0 holds if q_0(λ) = 1. Moreover,

r_{k+1} = r_k − τ_k A d_k = s_k(A) r_0 − τ_k A q_k(A) r_0 = s_{k+1}(A) r_0

if s_{k+1}(λ) = s_k(λ) − τ_k λ q_k(λ), and

d_{k+1} = r_{k+1} + β_k d_k = s_{k+1}(A) r_0 + β_k q_k(A) r_0 = q_{k+1}(A) r_0

if q_{k+1}(λ) = s_{k+1}(λ) + β_k q_k(λ). Finally, since

s_{k+1}(λ) = s_k(λ) − τ_k λ (s_k(λ) + β_{k−1} q_{k−1}(λ)),

the coefficient of λ^{k+1} in s_{k+1}(λ) is −τ_k times the coefficient of λ^k in s_k(λ). Thus, by the inductive assumption, it must be (−1)^{k+1} τ_0 τ_1 · · · τ_{k−1} τ_k. Also, the coefficient of λ^0 in s_{k+1}(λ) is equal to the coefficient of λ^0 in s_k(λ), which is 1 by the inductive assumption.

Second main result: r_k = 0 for some k ≤ #{distinct eigenvalues of A}.

Proof. Let µ_1, µ_2, . . . , µ_m be the distinct eigenvalues of A (m ≤ n = order of A). Assume that CG requires more than m steps to converge. Then the vectors r_0, r_1, . . . , r_m are non null and, by the First main result, orthogonal (⇒ linearly independent). Let V be an orthonormal matrix whose columns are eigenvectors of A; thus V^T = V^{−1} and AV = VD for D diagonal with the eigenvalues of A as diagonal entries. Observe that there is a degree-m polynomial which is null in A:

∏_{j=1}^m (A − µ_j I) = ∏_{j=1}^m (VDV^T − µ_j I) = ∏_{j=1}^m V(D − µ_j I)V^T = V [∏_{j=1}^m (D − µ_j I)] V^T = 0.

As a consequence the matrices A^0 = I, A, . . ., A^m are linearly dependent. But this implies that the dimension of the space

S_{m+1}(r_0) = Span {r_0, Ar_0, . . . , A^m r_0} = Span {r_0, r_1, . . . , r_m}

is smaller than m + 1, which is absurd. It follows that one of the vectors r_i, i = 0, . . . , m, must be null.
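A quick numerical illustration of the Second main result (the size n = 10, the eigenvalues 1, 2, 3 and the random data are our choices): a pd matrix with m = 3 distinct eigenvalues is solved by CG in 3 steps, up to rounding.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random orthogonal eigenvectors
eigs = np.array([1.0, 2.0, 3.0] * 3 + [1.0])       # only m = 3 distinct eigenvalues
A = Q @ np.diag(eigs) @ Q.T                        # pd, 3 distinct eigenvalues
b = rng.standard_normal(n)

x = np.zeros(n)
r = b - A @ x
d = r.copy()
for _ in range(3):                                 # m = 3 steps suffice
    Ad = A @ d
    tau = (r @ r) / (d @ Ad)
    x = x + tau * d
    r_new = r - tau * Ad
    beta = (r_new @ r_new) / (r @ r)
    d = r_new + beta * d
    r = r_new

residual = np.linalg.norm(b - A @ x)               # should be at rounding level
```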

Let Π^1_k be the set of all polynomials of degree exactly k whose graph passes through (0, 1). We now see that the polynomial s_k(λ) in the expression r_k = s_k(A) r_0 is a very particular polynomial in the class Π^1_k: it makes the norm of the vector p_k(A) r_0, p_k ∈ Π^1_k, minimum (for a suitable choice of the norm). This result lets us give estimates of the rate of convergence of CG that are as precise as our knowledge of the location of the eigenvalues of A. For example, if it is known that the eigenvalues of A cluster around 1, then CG must converge with a superlinear rate of convergence (see toe 1a).
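The clustering phenomenon is easy to observe numerically; a sketch, with eigenvalues drawn tightly around 1 (the size, spread, seed and stopping tolerance are our choices):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200
eigs = 1.0 + 1e-3 * rng.standard_normal(n)       # spectrum tightly clustered around 1
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(eigs) @ Q.T                      # pd (all eigenvalues near 1)
b = rng.standard_normal(n)

x = np.zeros(n)
r = b.copy()
d = r.copy()
steps = 0
while np.linalg.norm(r) > 1e-10 * np.linalg.norm(b):
    Ad = A @ d
    tau = (r @ r) / (d @ Ad)
    x = x + tau * d
    r_new = r - tau * Ad
    beta = (r_new @ r_new) / (r @ r)
    d = r_new + beta * d
    r = r_new
    steps += 1
# convergence takes far fewer than n = 200 steps
```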

Notice that r_k = s_k(A) r_0 = r_0 + ĥ_k for a particular vector ĥ_k in the space M = Span {Ar_0, A^2 r_0, . . . , A^k r_0}. Take a generic vector h_k in this space. Then

‖r_0 + h_k‖²_{A^{−1}} = ‖r_0 + ĥ_k + h_k − ĥ_k‖²_{A^{−1}} = ‖r_0 + ĥ_k‖²_{A^{−1}} + ‖h_k − ĥ_k‖²_{A^{−1}} + 2 (r_0 + ĥ_k, h_k − ĥ_k)_{A^{−1}}.

Now observe that the latter inner product is null: in fact, for j = 0, . . . , k − 1, 0 = r_k^T r_j = r_k^T A^{−1} A r_j = (r_k, A r_j)_{A^{−1}}, that is, r_k is A^{−1}-orthogonal to the space Span {Ar_0, Ar_1, . . . , Ar_{k−1}}, and this space is exactly M. The thesis follows since h_k − ĥ_k ∈ M. So we have:

‖r_0 + h_k‖²_{A^{−1}} = ‖r_0 + ĥ_k‖²_{A^{−1}} + ‖h_k − ĥ_k‖²_{A^{−1}} ≥ ‖r_0 + ĥ_k‖²_{A^{−1}}.

In other words,

‖r_k‖²_{A^{−1}} = ‖r_0 + ĥ_k‖²_{A^{−1}} = min {‖r_0 + h_k‖²_{A^{−1}} : h_k ∈ M} = min {‖p_k(A) r_0‖²_{A^{−1}} : p_k ∈ Π^1_k}.   (m)

Comparison with GMRES. Notice that for any h_k ∈ M we have

r_0 + h_k = b − A(x_0 + z),   z = −A^{−1} h_k ∈ S_k(r_0) = Span {r_0, Ar_0, . . . , A^{k−1} r_0}.

Thus, the vector x_k generated by the CG method is of the type x_0 + ẑ where ẑ solves the problem

‖b − A(x_0 + ẑ)‖_{A^{−1}} = min {‖b − A(x_0 + z)‖_{A^{−1}} : z ∈ S_k(r_0)}   (p)

(S_k(r_0) is known as a Krylov space). GMRES is a method able to solve Ax = b in at most n steps under the only assumption det(A) ≠ 0. (Like CG, GMRES must be used as an iterative method in order to be competitive, i.e. fewer than n steps must be sufficient to give a good approximation of x.) In the k-th step of GMRES a vector x_k of the type x_0 + ẑ is defined, where ẑ solves exactly the problem (p), but with the Euclidean norm in place of the A^{−1}-norm. So, CG is a minimal residual algorithm different from GMRES|_{A pd}.
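The optimality property (m)/(p) can be verified directly: after k CG steps from x_0 = 0, the A^{−1}-norm of r_k coincides with the minimum of ‖b − Az‖_{A^{−1}} over the Krylov space, computed here through the normal equations K^T A K y = K^T b (the test matrix, seed and k are our choices):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 12
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)                 # pd
b = rng.standard_normal(n)
Ainv = np.linalg.inv(A)

def norm_Ainv(v):
    """The A^{-1}-norm ||v||_{A^{-1}}."""
    return np.sqrt(v @ Ainv @ v)

# k steps of CG from x0 = 0 (so r0 = b)
k = 4
x = np.zeros(n)
r = b.copy()
d = r.copy()
for _ in range(k):
    Ad = A @ d
    tau = (r @ r) / (d @ Ad)
    x = x + tau * d
    r_new = r - tau * Ad
    beta = (r_new @ r_new) / (r @ r)
    d = r_new + beta * d
    r = r_new

# minimize ||b - Az||_{A^{-1}} over z in S_k(r0) = Span{b, Ab, ..., A^{k-1}b}:
# normal equations K^T A K y = K^T b (columns normalized for conditioning)
K = np.column_stack([np.linalg.matrix_power(A, j) @ b for j in range(k)])
K = K / np.linalg.norm(K, axis=0)
y = np.linalg.solve(K.T @ A @ K, K.T @ b)
r_opt = b - A @ (K @ y)
```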

It is easy to see that the condition (m) can be rewritten as follows:

‖x − x_k‖²_A = min_{p_k ∈ Π^1_k} ‖p_k(A)(x − x_0)‖²_A.

Now we give a bound for the quantity ‖p_k(A)(x − x_0)‖²_A, p_k ∈ Π^1_k, which can be evaluated if (besides A, b) some information about the location of the eigenvalues λ_i of A is also given. Let v_i ≠ 0 be such that A v_i = λ_i v_i, v_i^T v_j = δ_{ij}. Then, writing x − x_0 = Σ_i α_i v_i,

‖p_k(A)(x − x_0)‖²_A = (x − x_0)^T A p_k(A)² (x − x_0) = (Σ_i α_i v_i)^T Σ_i α_i A p_k(λ_i)² v_i

= (Σ_i α_i v_i)^T Σ_i α_i λ_i p_k(λ_i)² v_i = Σ_i α_i² λ_i p_k(λ_i)² ≤ max_i |p_k(λ_i)|² ‖x − x_0‖²_A.

So, we obtain the following

Third main result: If x_k is the k-th vector generated by CG when applied to solve the pd linear system Ax = b, then

‖x − x_k‖²_A = min_{p_k ∈ Π^1_k} ‖p_k(A)(x − x_0)‖²_A ≤ max_i |p_k(λ_i)|² ‖x − x_0‖²_A,   ∀ p_k ∈ Π^1_k.

So, if S ⊂ R, p_k ∈ Π^1_k and M_k ∈ R are known such that λ_i ∈ S ∀ i and |p_k(λ)| ≤ M_k ∀ λ ∈ S, then ‖x − x_k‖_A ≤ M_k ‖x − x_0‖_A.

Let us see two applications of the latter result. As consequences of the first application we observe that CG (considered as an iterative method) has a linear rate of convergence, is in general faster than G, and is competitive (for instance with direct methods) if λ_max and λ_min are comparable. However, as a consequence of the second application, the latter condition is not necessary: the rate of convergence of CG remains high (so CG remains competitive) if most of the eigenvalues are in [λ_min, λ̂] with λ_min and λ̂ comparable. Further useful applications of the Third main result hold. In particular, as a consequence of one of these (see toe 1a), it can be stated that CG has a superlinear rate of convergence if most of the eigenvalues of A are in the interval S = [1 − ε, 1 + ε] (. . .).

(1) S = [λ_min, λ_max],

p_k(x) = T_k((λ_max + λ_min − 2x)/(λ_max − λ_min)) / T_k((λ_max + λ_min)/(λ_max − λ_min))

(T_k the k-th Chebyshev polynomial of the first kind) ⇒

‖x − x_k‖_A < 2 ((√µ₂(A) − 1)/(√µ₂(A) + 1))^k ‖x − x_0‖_A,   µ₂(A) = λ_max/λ_min.

(2)

S = [λ_min, λ̂] ∪ {λ_i : λ_i > λ̂},   r_λ̂ = #{i : λ_i > λ̂},

p_k(x) = ∏_{i: λ_i > λ̂} (1 − x/λ_i) · T_{k−r_λ̂}((λ̂ + λ_min − 2x)/(λ̂ − λ_min)) / T_{k−r_λ̂}((λ̂ + λ_min)/(λ̂ − λ_min))

⇒ ‖x − x_k‖_A < 2 ((√(λ̂/λ_min) − 1)/(√(λ̂/λ_min) + 1))^{k − r_λ̂} ‖x − x_0‖_A,   k ≥ r_λ̂.
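A numerical check of the bound in application (1): a matrix with spectrum in [1, 10], so µ₂(A) = 10 (the size, seed and slack tolerance are our choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20
eigs = np.linspace(1.0, 10.0, n)                 # lambda_min = 1, lambda_max = 10
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(eigs) @ Q.T
b = rng.standard_normal(n)
x_star = np.linalg.solve(A, b)

def err_A(v):
    """Energy norm ||v - x||_A of the error."""
    e = v - x_star
    return np.sqrt(e @ A @ e)

kappa = 10.0                                     # mu_2(A) = lambda_max / lambda_min
rho = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)

x = np.zeros(n)
r = b.copy()
d = r.copy()
e0 = err_A(x)
bound_holds = True
for k in range(1, n + 1):
    Ad = A @ d
    tau = (r @ r) / (d @ Ad)
    x = x + tau * d
    r_new = r - tau * Ad
    beta = (r_new @ r_new) / (r @ r)
    d = r_new + beta * d
    r = r_new
    if err_A(x) > 2 * rho**k * e0 + 1e-10:       # bound of application (1), with slack
        bound_holds = False
    if np.linalg.norm(r) < 1e-12:
        break
```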

The applications (1) and (2) of the Third main result suggest an idea. When λ_min and λ_max are not comparable and the eigenvalues of A are uniformly distributed in the interval [λ_min, λ_max] (in this case all n steps of CG are required in order to give a good approximation of x), replace the given system Ax = b with an equivalent system Ãx̃ = b̃, Ã = E^{−1}AE^{−T}, x̃ = E^T x, b̃ = E^{−1}b, det(E) ≠ 0, where the matrix E is such that µ₂(Ã) < µ₂(A) and Ã has one of the following properties:

• µ₂(Ã) << µ₂(A)

• Ã has far fewer distinct eigenvalues than A

• Ã has its eigenvalues much more clustered (around 1) than A.

Then apply CG to Ãx̃ = b̃.

If such a matrix E can be found, then the pd matrix P = EE^T is called a preconditioner.

Note that E^{−T}ÃE^T = P^{−1}A, so one could look directly for a pd matrix P such that the (real, positive) eigenvalues of P^{−1}A have the required properties.

For example, in order to obtain something of the type P^{−1}A ≈ I (which would result in a very large increase of the CG rate of convergence) one could choose P as an approximation of A. We shall see that applying CG to Ãx̃ = b̃ requires, at each step, a surplus of computation: solving a system of the type P z = h_k. This computation must not make CG slow; in other words, P must be a matrix of lower complexity than A. Also notice that E_1 and E_2, E_1 ≠ E_2, E_1E_1^T = E_2E_2^T, define matrices Ã_1 = E_1^{−1}AE_1^{−T} and Ã_2 = E_2^{−1}AE_2^{−T}, Ã_1 ≠ Ã_2, with the same spectrum. For this reason one prefers to call P, rather than E, the preconditioner.

A final remark. The vector x = A^{−1}b we are looking for is also the minimum point of the function F(z) = (1/2) z^T A z − z^T b. Analogously, x̃ = Ã^{−1}b̃ is the minimum point of the function F̃(z) = (1/2) z^T Ã z − z^T b̃. The preconditioning technique replaces the (sections of the) contours of F with the more spherical (sections of the) contours of F̃, and this results in a more efficient minimization when using gradient-type methods.
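The characterization of x as the minimum point of F can be checked directly: F(z) − F(x) = (1/2)‖z − x‖²_A ≥ 0. A sketch (the random pd matrix and the perturbations are our choices):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 5
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)                  # pd
b = rng.standard_normal(n)
x_star = np.linalg.solve(A, b)               # x = A^{-1} b

def F(z):
    """The quadratic functional F(z) = (1/2) z^T A z - z^T b."""
    return 0.5 * z @ A @ z - z @ b

# F(z) - F(x) = (1/2) ||z - x||_A^2 >= 0: every perturbation increases F
ok = all(F(x_star + 0.1 * rng.standard_normal(n)) >= F(x_star)
         for _ in range(100))
```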

Let us write the preconditioned version of the CG algorithm, well defined once A, b and the preconditioner P are given.

Let us apply CG to the system Ãx̃ = b̃:

x̃_0 ∈ R^n, r̃_0 = b̃ − Ãx̃_0, d̃_0 = r̃_0. For k = 0, 1, . . . {

    τ̃_k = r̃_k^T r̃_k / d̃_k^T Ã d̃_k

    x̃_{k+1} = x̃_k + τ̃_k d̃_k

    r̃_{k+1} = b̃ − Ãx̃_{k+1} = r̃_k − τ̃_k Ã d̃_k

    β̃_k = r̃_{k+1}^T r̃_{k+1} / r̃_k^T r̃_k

    d̃_{k+1} = r̃_{k+1} + β̃_k d̃_k

}

Note that the convergence rate of the sequence {x̃_k} can be evaluated by using the following results:

‖x̃ − x̃_k‖_Ã < 2 ((√µ₂(Ã) − 1)/(√µ₂(Ã) + 1))^k ‖x̃ − x̃_0‖_Ã,   µ₂(Ã) = λ̃_max/λ̃_min,

‖x̃ − x̃_k‖_Ã < 2 ((√(λ̃̂/λ̃_min) − 1)/(√(λ̃̂/λ̃_min) + 1))^{k − r_λ̃̂} ‖x̃ − x̃_0‖_Ã,   k ≥ r_λ̃̂

(λ̃̂ being the analogue for Ã of λ̂): if µ₂(Ã) << µ₂(A), or Ã has most of its eigenvalues λ̃_i in [λ̃_min, λ̃̂] with λ̃̂/λ̃_min << λ_max/λ_min, then x̃_k → x̃ = E^T x with a greater rate than x_k → x.

Now we obtain each row of the preconditioned CG method. Define x_k = E^{−T}x̃_k, r_k = b − Ax_k, and d_k = E^{−T}d̃_k. Then

r̃_k = b̃ − Ãx̃_k = E^{−1}b − E^{−1}AE^{−T}(E^T x_k) = E^{−1}r_k = E^T E^{−T}E^{−1}r_k = E^T h_k,   h_k = P^{−1}r_k,

r̃_k^T r̃_k = r_k^T E^{−T}E^{−1}r_k = r_k^T h_k,   d̃_k^T Ã d̃_k = d̃_k^T E^{−1}AE^{−T} d̃_k = d_k^T A d_k. Thus

τ̃_k = r_k^T h_k / d_k^T A d_k.   (row1)

Moreover, we have

x̃_{k+1} = E^T x_{k+1} = E^T x_k + τ̃_k E^T d_k ⇒ x_{k+1} = x_k + τ̃_k d_k,   (row2)

r̃_{k+1} = E^{−1}r_{k+1} = E^{−1}r_k − τ̃_k E^{−1}AE^{−T}E^T d_k ⇒ r_{k+1} = r_k − τ̃_k A d_k,   (row3)

β̃_k = r_{k+1}^T h_{k+1} / r_k^T h_k   (row4)

(row3.5: h_{k+1} = P^{−1}r_{k+1}),

d̃_{k+1} = E^T d_{k+1} = E^T h_{k+1} + β̃_k E^T d_k ⇒ d_{k+1} = h_{k+1} + β̃_k d_k.   (row5)

Finally, in order to initialize the algorithm, set:

x_0 = E^{−T}x̃_0,   r_0 = b − Ax_0,   d_0 = E^{−T}d̃_0 = E^{−T}r̃_0 = E^{−T}E^T h_0 = h_0.   (row0)

Regarding the convergence rate of the sequence {x_k}, generated by the algorithm row0 and, for k = 0, 1, . . ., rows 1, 2, 3, 3.5, 4, 5, note that

‖x̃_k − x̃‖²_Ã = (x̃_k − x̃)^T Ã (x̃_k − x̃) = (E^T x_k − E^T x)^T E^{−1}AE^{−T} (E^T x_k − E^T x) = (x_k − x)^T A (x_k − x) = ‖x_k − x‖²_A.

Thus the bounds for ‖x̃ − x̃_k‖_Ã obtained above can be rewritten as follows:

‖x_k − x‖_A / ‖x_0 − x‖_A ≤ 2 ((√µ₂(Ã) − 1)/(√µ₂(Ã) + 1))^k,   µ₂(Ã) = λ̃_max/λ̃_min,

‖x_k − x‖_A / ‖x_0 − x‖_A ≤ 2 ((√(λ̃̂/λ̃_min) − 1)/(√(λ̃̂/λ̃_min) + 1))^{k − r_λ̃̂},   k ≥ r_λ̃̂.
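Putting rows 0–5 together gives the usual preconditioned CG loop. A numpy sketch (the dense solve for P z = h and the Jacobi choice P = diag(A) are illustrative assumptions only; in practice P is chosen so that P z = h is cheap):

```python
import numpy as np

def pcg(A, b, P, x0, tol=1e-10, max_iter=None):
    """Preconditioned CG assembled from rows 0-5 of the derivation."""
    x = np.asarray(x0, dtype=float).copy()
    r = b - A @ x                       # (row0)
    h = np.linalg.solve(P, r)           # h_0 = P^{-1} r_0
    d = h.copy()                        # d_0 = h_0
    rh = r @ h
    if max_iter is None:
        max_iter = 10 * len(b)
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol:
            break
        Ad = A @ d
        tau = rh / (d @ Ad)             # (row1)
        x = x + tau * d                 # (row2)
        r = r - tau * Ad                # (row3)
        h = np.linalg.solve(P, r)       # (row3.5)
        rh_new = r @ h
        beta = rh_new / rh              # (row4)
        d = h + beta * d                # (row5)
        rh = rh_new
    return x

# example with a Jacobi (diagonal) preconditioner -- an illustrative choice
rng = np.random.default_rng(3)
n = 30
M = rng.standard_normal((n, n))
A = M @ M.T + np.diag(np.arange(1.0, n + 1) ** 2)   # pd, badly scaled diagonal
P = np.diag(np.diag(A))
b = rng.standard_normal(n)
x = pcg(A, b, P, np.zeros(n))
```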

### Circulant and τ matrices and best approximations

Find the nearest circulant matrix (in the Frobenius norm) to a symmetric 4 × 4 Toeplitz matrix A = [t_{|i−j|}]^4_{i,j=1} and call it Â. Write Â in the particular case t_0 = 2, t_1 = −1, t_2 = t_3 = 0. Extend to the n × n case, noting that the first row of Â can be computed in O(n) arithmetic operations.

The matrix Â is the circulant matrix whose first row is

[t_0   (3t_1 + t_3)/4   t_2   (3t_1 + t_3)/4].

So, in the particular case,

Â = (3/4) · [  8/3   −1    0   −1
               −1   8/3   −1    0
                0   −1   8/3   −1
               −1    0   −1   8/3 ].
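For the n × n case, the first row of the nearest circulant is given by c_k = ((n − k) t_k + k t_{n−k})/n, k = 0, . . . , n − 1, which costs O(n) and is consistent with the 4 × 4 row above. A sketch checking the particular case (function names are ours):

```python
import numpy as np

def nearest_circulant_row(t):
    """First row c of the circulant minimizing the Frobenius distance from the
    symmetric Toeplitz [t_{|i-j|}]: c_k = ((n-k) t_k + k t_{n-k}) / n, O(n) ops."""
    t = np.asarray(t, dtype=float)
    n = len(t)
    k = np.arange(n)
    return ((n - k) * t + k * t[(n - k) % n]) / n

def circulant(c):
    """Circulant matrix with first row c: entry (i, j) = c[(j - i) mod n]."""
    n = len(c)
    return np.array([[c[(j - i) % n] for j in range(n)] for i in range(n)])

# particular case t0 = 2, t1 = -1, t2 = t3 = 0 from the text
c = nearest_circulant_row([2.0, -1.0, 0.0, 0.0])
A_hat = circulant(c)
```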

Find the nearest τ matrix (in the Frobenius norm) to a symmetric 4 × 4 Toeplitz matrix A = [t_{|i−j|}]^4_{i,j=1} and call it Â. Write Â in the particular case t_0 = 2, t_1 = −1, t_2 = t_3 = 0. Extend to the n × n case, noting that the first row of Â can be computed in O(n) arithmetic operations.

The algebra τ. We recall that τ is the set of all polynomials in the Toeplitz matrix X = [t_{|i−j|}]^n_{i,j=1}, t_1 = 1, t_i = 0 for i ≠ 1. For n = 4 and n = 5 a basis for τ is:

J_1 = I,

J_2 =
  0 1 0 0
  1 0 1 0
  0 1 0 1
  0 0 1 0

J_3 =
  0 0 1 0
  0 1 0 1
  1 0 1 0
  0 1 0 0

J_4 = J;

and, for n = 5,

J_1 = I,

J_2 =
  0 1 0 0 0
  1 0 1 0 0
  0 1 0 1 0
  0 0 1 0 1
  0 0 0 1 0

J_3 =
  0 0 1 0 0
  0 1 0 1 0
  1 0 1 0 1
  0 1 0 1 0
  0 0 1 0 0

J_4 =
  0 0 0 1 0
  0 0 1 0 1
  0 1 0 1 0
  1 0 1 0 0
  0 1 0 0 0

J_5 = J.

From these cases one easily deduces a basis for τ for generic n.

An n × n matrix A belongs to the space τ iff AX = XA, iff its entries satisfy the following cross-sum conditions:

a_{i,j−1} + a_{i,j+1} = a_{i−1,j} + a_{i+1,j},   1 ≤ i, j ≤ n,

where a_{0,j} = a_{n+1,j} = a_{i,0} = a_{i,n+1} = 0, 1 ≤ i, j ≤ n.

Matrices from τ are diagonalized by the (unitary) sine transform; in fact

XS = S diag(2 cos(jπ/(n + 1)), j = 1, . . . , n),   S = √(2/(n + 1)) [sin(ijπ/(n + 1))]^n_{i,j=1}.
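The diagonalization by the sine transform is easy to verify numerically (the size n = 6 is an arbitrary choice):

```python
import numpy as np

n = 6
j = np.arange(1, n + 1)
# unitary sine transform S = sqrt(2/(n+1)) [sin(ij pi/(n+1))]
S = np.sqrt(2.0 / (n + 1)) * np.sin(np.outer(j, j) * np.pi / (n + 1))

# X = the generator of tau: Toeplitz with t_1 = 1, t_i = 0 otherwise
X = np.eye(n, k=1) + np.eye(n, k=-1)

D = S.T @ X @ S   # should equal diag(2 cos(j pi/(n+1)))
```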

It can be shown that the eigenvalues of Â^{−1}A cluster around 1 if Â is the best τ approximation (in the Frobenius norm) of A and A is a symmetric Toeplitz matrix satisfying the hypothesis . . . (the same as in the circulant case).

Given a 5 × 5 symmetric Toeplitz matrix A = [t_{|i−j|}]^5_{i,j=1}, write it in the form:

A = B + [ 0 · · · 0
          ⋮   C   ⋮
          0 · · · 0 ]

where B and C are τ matrices of order 5 and 3, respectively. Extend to the n × n case.

### How the Richardson method arises

Perron theorem. Consider the following Cauchy problem:

dx(t)/dt = h(x(t)), t > 0,   x(0) = x_0,

where h is a vector-valued function, h : R^n → R^n, such that h(x̂) = 0, x̂ ∈ R^n, and h ∈ C^1 in a neighbourhood of x̂. Let J_h be the Jacobian matrix of h,

J_h = [ ∂h_1/∂x_1   · · ·   ∂h_1/∂x_n
            ⋮                    ⋮
        ∂h_n/∂x_1   · · ·   ∂h_n/∂x_n ]