
Lecture on Numerical Analysis

Dr.-Ing. Michael Dumbser

01 / 12 / 2008


Linear Algebra

A standard task in linear algebra is finding the solution of a system of linear equations of the form

$$A \cdot X = B$$

The solution to this task is given by the so-called Gauß-algorithm:

Define a new matrix C as

$$C = \left( A \mid B \right)$$

i.e. the augmented matrix whose entries $C_{ij}$ consist of the columns of A followed by the columns of B.

Now the matrix C is modified by a sequence of operations on its rows to transform its left part into the unit matrix.

The admissible row-operations are:

(1) addition / subtraction of multiples of rows
(2) multiplication / division of a row by a constant factor
(3) exchange of rows


The Gauß-Algorithm I

In detail, the Gauß-algorithm for the N x N matrix A and the N x M matrix B proceeds as follows:

Loop over all rows. The loop index i runs from 1 to N

(1) check the coefficient C(i,i) on the diagonal of the row.

(2) IF it is zero, look for the next row j with row index j > i which has a nonzero element in column i.

IF there is no such row, then the matrix is singular. EXIT.

ELSE exchange rows i and j in the matrix C.

(3) Divide the whole row i by C(i,i). Its first non-zero entry is now equal to 1.

(4) Subtract a suitable multiple of row i from all rows j > i in order to eliminate all entries in column i of those rows j.

end loop

Part 1 is a forward loop, which transforms the matrix A into a normalized upper triangular form.


The Gauß-Algorithm II

The second part of the Gauß-algorithm then proceeds as follows:

Loop over all rows. The loop index i runs from N to 1

Subtract a suitable multiple of row i from all rows j < i in order to eliminate all entries in column i of those rows j.

end loop

Part 2 is a very simple backward loop, which transforms the matrix A into the unit matrix.

All operations have been performed on the augmented matrix C. The type of row-operations is completely determined by the left part of matrix C, i.e. by the original matrix A. The right part, i.e. matrix B, is transformed with the same operations in a passive way.

At the end of the Gauß-algorithm, in the right part of the matrix C, i.e. where B was located initially, we find the solution X of the equation system.

To compute the inverse of a matrix using the Gauß-algorithm, the right hand side B must be chosen as the N x N unit matrix; the right part of C then contains the inverse of A at the end.
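A minimal MATLAB sketch of this procedure might look as follows; the exact zero test on the pivot follows the slide, while a robust implementation would compare |C(i,i)| against a small tolerance:

```matlab
function X = gauss( A, B )
% GAUSS  Solve A*X = B by the Gauss algorithm on the augmented matrix C = [A B].
N = size(A,1);
C = [A, B];                              % augmented matrix C = (A | B)
% Part 1: forward loop -> normalized upper triangular form
for i = 1:N
    if C(i,i) == 0                       % (1)-(2): zero pivot, search a row j > i
        j = i + find( C(i+1:N,i) ~= 0, 1 );
        if isempty(j)
            error('Matrix is singular.');
        end
        C([i j],:) = C([j i],:);         % exchange rows i and j
    end
    C(i,:) = C(i,:) / C(i,i);            % (3): divide row i by C(i,i)
    for j = i+1:N                        % (4): eliminate column i below row i
        C(j,:) = C(j,:) - C(j,i) * C(i,:);
    end
end
% Part 2: backward loop -> transform the left part into the unit matrix
for i = N:-1:2
    for j = i-1:-1:1
        C(j,:) = C(j,:) - C(j,i) * C(i,:);
    end
end
X = C(:, N+1:end);                       % right part now contains the solution X
end
```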


Exercise 1

Write a MATLAB function gauss.m that accepts a general N x N matrix A and a general N x M matrix B as input arguments and that returns the N x M matrix X that solves the matrix equation $A \cdot X = B$.

(1) Use the Gauß algorithm to solve the equation system

(2) Use the Gauß algorithm to compute the inverse of a random 5 x 5 matrix A generated with the MATLAB command rand as follows:

A = rand(5).

Hint: to generate a 5x5 unit matrix, use the following MATLAB command:

B = eye(5).

[The concrete equation system for part (1) is given on the original slide; its coefficients are not recoverable from this transcript.]
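For part (2), a short test along the following lines is possible (assuming the gauss.m sketch above; inv is used only for comparison):

```matlab
A = rand(5);           % random 5 x 5 matrix
B = eye(5);            % 5 x 5 unit matrix
Ainv = gauss(A, B);    % with B = I, the Gauss algorithm returns the inverse of A
norm(Ainv - inv(A))    % should be close to machine accuracy
```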


A Special Case of Gauß-Elimination: The Thomas-Algorithm

In the case where the matrix A is tridiagonal, i.e. with the special structure

$$A = \begin{pmatrix}
b_1 & c_1 & 0 & \cdots & 0 & 0 \\
a_2 & b_2 & c_2 & \ddots & & 0 \\
0 & a_3 & b_3 & c_3 & \ddots & \vdots \\
\vdots & \ddots & \ddots & \ddots & \ddots & 0 \\
0 & & \ddots & a_{N-1} & b_{N-1} & c_{N-1} \\
0 & 0 & \cdots & 0 & a_N & b_N
\end{pmatrix}$$

there is a very fast and efficient special case of the Gauß-algorithm, called the Thomas-algorithm.

Since it is a variation of the Gauß-algorithm, the Thomas algorithm is a direct method for solving general linear tridiagonal systems. Like the original Gauß-algorithm, it proceeds in two stages: one forward elimination and one back substitution.


The Thomas-Algorithm

Part I: Forward elimination.

$$c_1 := c_1 / b_1\,; \qquad d_1 := d_1 / b_1$$

$$c_i := \frac{c_i}{b_i - a_i\, c_{i-1}}\,; \qquad d_i := \frac{d_i - a_i\, d_{i-1}}{b_i - a_i\, c_{i-1}}\,, \qquad i = 2 \ldots N$$

Part II: Back substitution

$$x_N = d_N\,; \qquad x_i = d_i - c_i\, x_{i+1}\,, \qquad i = N-1 \ldots 1$$
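A possible MATLAB realization of these two stages (written here as Thomas.m, in line with Exercise 2 below; c and d are overwritten in place):

```matlab
function x = Thomas( a, b, c, d )
% THOMAS  Solve a tridiagonal system A*x = d.
% a: lower diagonal, b: diagonal, c: upper diagonal (a(1) and c(N) are unused).
N = length(d);
% Part I: forward elimination
c(1) = c(1) / b(1);
d(1) = d(1) / b(1);
for i = 2:N
    m    = b(i) - a(i) * c(i-1);
    c(i) = c(i) / m;
    d(i) = ( d(i) - a(i) * d(i-1) ) / m;
end
% Part II: back substitution
x    = zeros(N,1);
x(N) = d(N);
for i = N-1:-1:1
    x(i) = d(i) - c(i) * x(i+1);
end
end
```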


Exercise 2

Write a MATLAB function Thomas.m that implements the Thomas algorithm and that accepts as input four vectors of equal length a, b, c and d, where a contains the lower diagonal elements, b the diagonal elements, c the upper diagonal elements, and d the right hand side of the linear equation system Ax = d. The output of Thomas.m is the solution vector x of this system.

To test your code, proceed in the following steps:

(1) Generate the vectors a,b,c and d randomly using the MATLAB command rand.

(2) Assemble the full matrix A using the vectors a, b and c.

(3) Solve the system Ax = d directly using MATLAB.

(4) Solve the system Ax = d using the function call x = Thomas(a,b,c,d).
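A possible test script along these lines (N = 10 and the diagonal shift are assumed choices, not part of the exercise text):

```matlab
N = 10;
a = rand(N,1); c = rand(N,1);          % (1) random off-diagonals
b = rand(N,1) + 2;                     % diagonal shifted to avoid near-zero pivots (a choice)
d = rand(N,1);
A = diag(b) + diag(a(2:N),-1) + diag(c(1:N-1),1);   % (2) assemble the full matrix
x_direct = A \ d;                      % (3) direct MATLAB solution
x_thomas = Thomas(a,b,c,d);            % (4) Thomas algorithm
norm(x_direct - x_thomas)              % should be close to machine accuracy
```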


The Conjugate Gradient Method

For large matrices A, the number of operations of the Gauß algorithm grows with $N^3$. For large but sparse matrices (where many entries are zero), it may be more efficient to use an iterative scheme for the solution of the system. For symmetric, positive definite matrices A, a very efficient algorithm can be constructed: the so-called conjugate gradient (CG) method, which we have already seen for nonlinear, multi-variate optimization.

We start with some definitions:

(1) A matrix A is symmetric positive definite, if

$$A^T = A \qquad \textrm{and} \qquad x^T A\, x > 0 \quad \forall\, x \neq 0$$

(2) Two vectors u and v are called conjugate with respect to the matrix A if

$$u^T A\, v = 0$$

(3) If we knew N conjugate directions $p_j$ ($j = 1 \ldots N$), we could write the solution x of $A \cdot x = b$ as

$$x = \alpha_1 p_1 + \alpha_2 p_2 + \ldots + \alpha_N p_N$$

The coefficients $\alpha_j$ are given by:

$$\alpha_j = \frac{p_j^T\, b}{p_j^T A\, p_j}$$

(4) The term

$$r = b - A \cdot x$$

is called the residual.
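As a small illustration of definitions (1) and (2), the following MATLAB fragment checks both properties for an example matrix chosen here (not from the slide):

```matlab
A = [4 1; 1 3];                    % a small example matrix (chosen arbitrarily)
issym = isequal(A, A');            % definition (1): A^T = A
lam   = eig(A);                    % x'*A*x > 0 for all x ~= 0  <=>  all eigenvalues positive
isspd = issym && all(lam > 0)      % true: A is symmetric positive definite
u = [1; 0]; v = [1; -4];           % definition (2): u and v are conjugate w.r.t. A
u' * A * v                         % = 0, since 4*1 + 1*(-4) = 0
```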


The Conjugate Gradient Method

The idea of the CG method is to minimize the functional

$$g(x) = x^T A\, x - 2\, b^T x$$

Its gradient is:

$$\nabla g(x) = 2\, \left( A \cdot x - b \right) = -2\, r$$

Using the steepest descent method in direction p, we obtain the minimum of g starting from $x^k$ as follows:

$$x = x^k + \alpha\, p\,, \qquad \frac{\partial g}{\partial \alpha} = p^T \nabla g(x) = 0$$

$$p^T \left( A \left( x^k + \alpha\, p \right) - b \right) = 0 \qquad \Rightarrow \qquad \alpha = \frac{p^T \left( b - A\, x^k \right)}{p^T A\, p} = \frac{p^T r^k}{p^T A\, p}$$


The Conjugate Gradient Method

A preliminary version of the conjugate gradient method now reads as follows:

Initialization: $x^0 = 0$, $\quad r^0 = b - A \cdot x^0$, $\quad p^0 = r^0$

DO k = 0 ... N-1

  $v^k = A \cdot p^k$

  $\alpha^k = \dfrac{(p^k)^T r^k}{(p^k)^T v^k}$  (1D minimization along direction $p^k$)

  $x^{k+1} = x^k + \alpha^k p^k$  (move point x into the minimum along direction $p^k$)

  $r^{k+1} = b - A \cdot x^{k+1} = r^k - \alpha^k v^k$  (compute new residual = negative gradient = steepest descent direction)

  $p^{k+1} = r^{k+1} - \displaystyle\sum_{j=0}^{k} \frac{(p^j)^T A\, r^{k+1}}{(p^j)^T A\, p^j}\, p^j$

ENDDO

Do not move in the steepest descent direction, but compute a new conjugate direction $p^{k+1}$ that takes into account all the previous search directions. As in nonlinear minimization, this guarantees faster convergence.

From definition (3) we know that the algorithm will terminate with the exact solution after at most N iterations.


The Conjugate Gradient Method

After some algebra, the final conjugate gradient method is given as follows:

Initialization: $x^0 = 0$, $\quad r^0 = b - A \cdot x^0$, $\quad p^0 = r^0$

DO k = 0 ... N-1

  $v^k = A \cdot p^k$

  $\alpha^k = \dfrac{(r^k)^T r^k}{(p^k)^T v^k}$

  $x^{k+1} = x^k + \alpha^k p^k$

  $r^{k+1} = r^k - \alpha^k v^k$

  $\beta^k = \dfrac{(r^{k+1})^T r^{k+1}}{(r^k)^T r^k}$

  $p^{k+1} = r^{k+1} + \beta^k p^k$

ENDDO


The GMRES Method of Saad and Schultz (1986)

The GMRES method of Saad and Schultz minimizes the residual norm

$$\left\| g(x) \right\|_2 = \left\| A \cdot x - b \right\|_2$$

over approximations of the form $x = x^0 + \sum_i \alpha_i\, v^i$.

Initialization: $r^0 = b - A \cdot x^0$, $\quad v^1 = r^0 / \| r^0 \|_2$, $\quad \beta_1 := \| r^0 \|_2$

for j = [1:N]

  $m = A \cdot v^j$

  $h_{ij} = (v^i)^T m\,, \quad i = 1 \ldots j$

  $w = m - \displaystyle\sum_{i=1}^{j} h_{ij}\, v^i$

  $h_{j+1,j} = \| w \|_2$

  for i = [1:j-1]  (apply the previous Givens rotations to the new column of h)

    $p = c_i\, h_{ij} + s_i\, h_{i+1,j}$

    $q = -s_i\, h_{ij} + c_i\, h_{i+1,j}$

    $h_{ij} := p\,, \qquad h_{i+1,j} := q$

  end

  $c_j = h_{jj} \Big/ \sqrt{h_{jj}^2 + h_{j+1,j}^2}\,, \qquad s_j = h_{j+1,j} \Big/ \sqrt{h_{jj}^2 + h_{j+1,j}^2}$  (new Givens rotation eliminating $h_{j+1,j}$)

  $h_{jj} := c_j\, h_{jj} + s_j\, h_{j+1,j}$

  $\beta_{j+1} := -s_j\, \beta_j\,, \qquad \beta_j := c_j\, \beta_j$

  if( $|\beta_{j+1}|$ > tol )  (see next page)


The GMRES Method of Saad and Schultz (1986)

...continuation of the previous page

for j = [1:N]

  ...

  if( $|\beta_{j+1}|$ > tol )

    $v^{j+1} = w / h_{j+1,j}$  (normalize the new Krylov basis vector and continue)

  else  (the residual is small enough: solve the triangular system by back substitution and exit)

    for i = [j:-1:1]

      $\alpha_i = \left( \beta_i - \displaystyle\sum_{k=i+1}^{j} h_{ik}\, \alpha_k \right) \Big/ h_{ii}$

    end

    $x = x^0 + \displaystyle\sum_{i=1}^{j} \alpha_i\, v^i$

  end

end

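The method can be cross-checked against MATLAB's built-in gmres function; the test matrix, tolerance and iteration count below are arbitrary illustrative choices:

```matlab
N = 50;
A = diag(2*ones(N,1)) + diag(-ones(N-1,1),1) + diag(-ones(N-1,1),-1);  % tridiagonal test matrix
b = rand(N,1);
x = gmres(A, b, [], 1e-10, N);   % no restart, tolerance 1e-10, at most N iterations
norm(A*x - b)                    % residual norm, should be below the tolerance
```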

Exercise 3

Write a MATLAB function CG.m that solves the system $A \cdot x = b$ with the conjugate gradient method.

The function should check whether A satisfies the necessary symmetry property for the CG method.

Solve the system with

$$A = \begin{pmatrix} 2 & 1 & 0 \\ 1 & 4 & 5 \\ 0 & 5 & 10 \end{pmatrix}\,, \qquad b = \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix}$$
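A possible test, assuming the CG.m sketch given after the final CG algorithm above:

```matlab
A = [2 1 0; 1 4 5; 0 5 10];
b = [1; 2; 3];
x = CG(A, b);
norm(A*x - b)      % should be close to machine accuracy
norm(x - A\b)      % comparison with MATLAB's direct solver
```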


The Linear Least-Squares Method

In the case where we have more equations than unknowns in the linear system

$$A \cdot x = b$$

we call the equation system an overdetermined system. Typical applications for such overdetermined systems can be found in data analysis (linear regression), where so-called best-fit curves have to be computed, e.g. from observations or experimental data.

In this case, the matrix is no longer an N x N square matrix, but an M x N rectangular matrix with M > N. In general, such systems do not have a solution in the original sense of the equation, since in general $A \cdot x - b \neq 0$. However, according to ideas of Adrien-Marie Legendre and Carl Friedrich Gauß, we can try to find the best possible solution of the system by minimizing the following goal function:

$$g(x) = r^T r = \left( A \cdot x - b \right)^T \left( A \cdot x - b \right) = \left\| A \cdot x - b \right\|_2^2$$

(17)

M. Dumbser 17 / 16

The Linear Least-Squares Method

The minimum of g(x) can be computed analytically using the differential criterion for a minimum:

$$g(x) = x^T A^T A\, x - x^T A^T b - b^T A\, x + b^T b$$

$$\frac{\partial g}{\partial x} = 2\, A^T A\, x - 2\, A^T b = 0$$

The least-squares solution of the overdetermined equation system can therefore be found by solving the so-called normal equation

$$A^T A\, x = A^T b \qquad \Leftrightarrow \qquad x = \left( A^T A \right)^{-1} A^T b$$

The matrix $\left( A^T A \right)^{-1} A^T$ is called the pseudo-inverse of the M x N matrix A.
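In MATLAB, the normal equation can be solved directly; the sizes below are example values. (For badly conditioned A, the backslash operator, which uses an orthogonal factorization, is numerically preferable to forming A'*A explicitly.)

```matlab
M = 100; N = 3;            % example sizes: more equations than unknowns
A = rand(M,N);
b = rand(M,1);
x = (A'*A) \ (A'*b);       % least-squares solution via the normal equation
norm(x - A\b)              % backslash yields the same least-squares solution
```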


Exercise 4

Write a MATLAB script LSQ.m that computes the coefficients of the parabola for an oblique throw from position (x0, y0) with horizontal and vertical speed (vx, vy). The gravitation constant is g = 9.81.

The "measurements" are available in the form of points (xi, yi) that are generated from the physical parabola of the throw plus a random (positive and negative) perturbation of amplitude A.

Use the linear Least-Squares Method to solve the problem, directly solving the normal equation of the problem.

Plot the "measured data" as well as the parabola obtained from the least-squares method in the same figure.

Compare the exact coefficients of the parabola, obtained from the physics of an oblique throw, with the coefficients obtained by the least-squares method, as a function of the number of measurement points.
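A possible sketch of LSQ.m; the initial position, speeds, perturbation amplitude (called Amp here to avoid a clash with the matrix A) and number of points are assumed values, not given on the slide:

```matlab
% LSQ.m - least-squares fit of the parabola of an oblique throw
g   = 9.81;
x0  = 0; y0 = 0; vx = 10; vy = 15;    % initial position and speeds (assumed values)
Amp = 0.5;                            % perturbation amplitude (assumed value)
n   = 50;                             % number of "measurements" (assumed value)
% exact parabola: y = y0 + (vy/vx)*(x-x0) - g/(2*vx^2)*(x-x0)^2
xi = linspace(x0, x0 + 2*vx*vy/g, n)';          % sample up to the landing point
yi = y0 + (vy/vx)*(xi-x0) - g/(2*vx^2)*(xi-x0).^2 ...
     + Amp*(2*rand(n,1) - 1);                   % random perturbation in [-Amp, Amp]
% least-squares fit of y = c(1) + c(2)*x + c(3)*x^2 via the normal equation
V = [ones(n,1), xi, xi.^2];
c = (V'*V) \ (V'*yi);
% exact coefficients from the physics of the throw, for comparison
cex = [ y0 - (vy/vx)*x0 - g/(2*vx^2)*x0^2;
        vy/vx + g*x0/vx^2;
        -g/(2*vx^2) ];
plot(xi, yi, 'o', xi, V*c, '-');
legend('measured data','least-squares parabola');
disp([c cex])                         % fitted vs. exact coefficients
```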


Carl Friedrich Gauß

German mathematician, astronomer and physicist (30/04/1777 – 23/02/1855)

Significant contributions to

• Linear algebra (Gauß algorithm, Least Squares Method)

• Statistics (Gaussian normal probability distribution)

• Potential theory and differential calculus (Gauß divergence theorem)

• Numerical analysis (Gaussian quadrature rules)

Already during his lifetime, he was called the "prince of mathematics".

He produced more than 250 scientific contributions to his research fields.
