**4 Root Finding**

**4.3 Fixed Point Iteration**

Most of the techniques we will discuss are iterative in nature, and the first
one is called the fixed point iteration scheme. Instead of looking for a zero of
the function f(x), it determines a fixed point of an equivalent equation. The
equation is rewritten in the form x = g(x), so that when x is substituted into the
function g(x) it returns the value x, hence the nomenclature fixed point. The
conversion of the equation f(x) = 0 into one of the form x = g(x) is not
always straightforward and can certainly be done in many ways. For instance
the previous example, for which f(x) = x - 2 sin x^2, could be rewritten as
x = 2 sin x^2 (where g(x) = 2 sin x^2) or x = √(sin^{-1}(x/2)). Although the former
version might be more intuitive, there are many examples in which this choice
is not so clear. For instance consider the quadratic f(x) = x^2 + 2x - 3, which could
be manipulated to give x = (3 - x^2)/2 or x = √(3 - 2x). Neither of these
two forms seems to have any advantage over the other. Further, the
equation could be written as x = 3/(x + 2), and so on. Before we try to resolve
this issue, let us see how this is implemented as a numerical scheme.

We rewrite the equation as

x_{n+1} = g(x_n),    n = 0, 1, · · · ,

which is a recursion formula. It starts with an initial guess, namely x_0 (which
may be determined graphically or by other means).

Just about the simplest code for this purpose would be

x0 = 1;
for j=1:10
    x0 = g(x0);
end

where we have a routine g.m which defines the function g(x) and x0 = 1 is a
suitable initial guess. This runs through the iterative process ten times. This
kind of code presupposes that it is going to work and will converge in ten steps
(or an appropriate number). You will quickly learn that you cannot rely on this
kind of thing.

We need to pause here to think about what we mean by convergence. We could
work out the value of the function f(x) at the current iterate as a check. We
would require this to be less than a certain tolerance (the accuracy to which
we would expect to know the answer). Notice that this is slightly different from
knowing the root to within a certain tolerance.
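To make this distinction concrete, here is a short Python sketch (Python rather than MATLAB, purely for illustration) of the residual test |f(x_n)| < tolerance applied to the running example f(x) = x - 2 sin x^2; the starting guess 0.1 and the tolerance are illustrative choices, not values from the text:

```python
import math

def f(x):
    return x - 2 * math.sin(x ** 2)

def g(x):
    return 2 * math.sin(x ** 2)

# Fixed point iteration, stopping when the residual |f(x_n)| is small.
# Note this bounds the function value, not the error in the root itself.
x = 0.1              # illustrative starting guess near the origin
for _ in range(100):
    if abs(f(x)) < 1e-4:
        break
    x = g(x)
```

A small residual need not mean the iterate is equally close to the root; the two are related through the slope of f near the root.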

Alternatively, in this case the code may be deemed to be successful when
the difference between x_{n+1} and x_n is less than a certain tolerance. In this case
we have

x_{n+1} ≈ x_n  ⇒  x_n ≈ g(x_n)  ⇒  f(x_n) ≈ 0.

This tolerance reflects how well we want to know the answer, and the parameter
maxits is how many times we are prepared to perform the iterations. This is
to eliminate problems with cases which do not converge and hence cause infinite
loops. The code we shall use for this is:


%
% fixed.m
%
function [answer,iflag] = fixed(g,xinit)
global tolerance maxits
iflag = 0;
iterations = 0;
xnext = feval(g,xinit);
while (iterations<maxits) & abs(xnext-xinit)>tolerance
    iterations = iterations + 1;
    xinit = xnext;
    xnext = feval(g, xinit);
end
if iterations == maxits
    iflag = -1;
    answer = NaN;
else
    iflag = iterations;
    answer = xnext;
end
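For readers following along in another language, the same logic can be sketched in Python; here tolerance and maxits are passed as ordinary arguments rather than globals, and the default values are illustrative rather than taken from the text:

```python
import math

def fixed(g, xinit, tolerance=1e-4, maxits=100):
    """Fixed point iteration x_{n+1} = g(x_n).

    Returns (answer, iflag): iflag is the iteration count on
    success and -1 on failure, mirroring the MATLAB routine."""
    iterations = 0
    xnext = g(xinit)
    while iterations < maxits and abs(xnext - xinit) > tolerance:
        iterations += 1
        xinit = xnext
        xnext = g(xinit)
    if iterations == maxits:
        return float('nan'), -1
    return xnext, iterations

# Illustrative call: iterate g(x) = 2*sin(x^2) from a guess near the origin
root, iflag = fixed(lambda x: 2 * math.sin(x ** 2), 0.1)
```

Passing the parameters as arguments instead of globals makes the routine self-contained, at the cost of the longer argument list the text mentions.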


%
% eqn.m
%
function [g] = eqn(x)
g = 2*sin(x.^2);

and the main function

global tolerance maxits
tolerance = 1e-4;
maxits = 100;
[root,iflag] = fixed('eqn',1);
if iflag == -1
    disp('Root finding failed')
else
    disp([' Root = ' num2str(root) ...
          ' found in ' num2str(iflag) ' iterations'])
end

We have exploited the new command global, which allows variables to be
used by any routine which contains a global statement referring to the same
variables, or a subset of them. This is useful when a quantity (or group of
quantities) is unlikely to change, and it saves on long argument lists. We have
also used the number 1e-4, which is 1 × 10^{-4} (refer to page 7).

The code above is run and gives the result ≈ 1 × 10^{-14} after only four
iterations. In order to identify the other roots (near 1/2 and 3/2) we must use
guesses close to these values. Starting around 0.5 (for instance with guesses of
0.4 or 0.6) leads to the code finding the root at zero again. Starting close to the
other root has a variety of outcomes: starting below it produces the zero root,
whereas starting above it leads to the iterations diverging. The reason for this is linked

to the derivative of g(x). In fact the errors are multiplied by |g'(x)| at each
iteration, and if this value is greater than one the method will fail. In this case
we recall that g(x) = 2 sin x^2, so that g'(x) = 4x cos x^2, which when x = 1/2
equals 1.9378 and when x = 3/2 gives -3.7690. Near the origin the derivative is
very small and consequently the technique works well.
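These derivative values can be confirmed with a few lines of Python (an illustrative check, not part of the MATLAB listings):

```python
import math

# g(x) = 2*sin(x^2) gives g'(x) = 4*x*cos(x^2); the size of |g'|
# near a root determines whether the iteration converges there.
def gprime(x):
    return 4 * x * math.cos(x ** 2)

values = {x: gprime(x) for x in (0.0, 0.5, 1.5)}
# g' is 0 at the origin, about 1.94 near x = 1/2 and about -3.77
# near x = 3/2, so only the root at zero attracts the iterates.
```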

It appears this method is only able to find one root and is thus not very useful. However, at this stage we could change the code eqn.m to


%
% eqn.m
%
function [g] = eqn(x)
g = sqrt(abs(asin(x/2)));

This form of the code is a simple alternative way of writing the equation f(x) =
0 as given above. We use the function abs to ensure that the square root
function will not lead to imaginary quantities^{1}. In this case the code finds the
root near x = 1/2 but again fails to obtain the other one. There may well be
a form of the function which obtains this root, but any method which requires
this amount of manipulation and user interaction is not suitable for detailed
calculations. There are many flaws with this method and consequently it is not
widely used for one-dimensional problems. However, there are some
higher-dimensional (and more complex) problems for which it is the only feasible
option.
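As a quick check that the rearranged form does converge to the root near 1/2, one can iterate it directly; this Python sketch is illustrative and starts from the arbitrary guess x = 1:

```python
import math

def g2(x):
    # x = sqrt(asin(x/2)); abs() guards against the argument of
    # sqrt going (numerically) negative, as in the MATLAB version
    return math.sqrt(abs(math.asin(x / 2)))

x = 1.0
for _ in range(100):
    x = g2(x)
# x now satisfies x = 2*sin(x^2), i.e. it is the root near 1/2
# that the first rearrangement could not reach
```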

We shall now discuss how this method can be improved upon: instead of using substitution into a function, the next method uses the fact that a zero corresponds to a change in sign.

**4.4 Bisection**

In the previous example we used an initial estimate for a root: here we shall use a bracketing interval. Our initial assumption is that this interval contains a single root. In order to check this hypothesis we shall build in checks at each stage.

If the initial interval is between x = a and x = b, we know that f(a) and
f(b) are of different signs (for the interval to contain a single root), in which
case their product must be negative. As the name of the method suggests, we
bisect the interval and define the point c = (b + a)/2. We can now evaluate the
function to obtain f(c).

1 To evaluate the arcsine we have used the MATLAB command asin; as you might expect, there are also acos and atan, as mentioned previously.

[Figure: the bisection method; the interval from a to b is halved at c = (a + b)/2.]

This must be either positive or negative (or, if we are very lucky, zero), in
which case f(c) will have the same sign as either f(a) or f(b). The new interval
can then be defined by c and whichever end of the previous interval represents
a change in sign of the function.

A simple version of a code to use this method in an interval [a, b] would be

a = 1; b = 5;
for j = 1:10
    c = (a+b)/2;
    if f(c)*f(a) > 0
        a = c;
    else
        b = c;
    end
end

where in this case a = 1, b = 5 and the function f(x) is defined in a routine f.m.

This method is perfectly adequate if you know that the method will converge
in ten steps, that the interval [a, b] actually contains a root, and that no future
iterate coincides exactly with a root (see Task 4.8).

If we wish to be slightly more careful we would use the codes


%
% bisect.m
%
function [answer,iflag] = bisect(fun,a,b)
global tolerance maxits
iflag = 0;
iterations = 0;
f_a = feval(fun,a);
f_b = feval(fun,b);
while ((f_a*f_b<0) & iterations<maxits) & (b-a)>tolerance
    iterations = iterations + 1;
    c = (b+a)/2;
    f_c = feval(fun,c);
    if f_c*f_a<0
        b = c; f_b = f_c;
    elseif f_b*f_c<0
        a = c; f_a = f_c;
    else
        % the midpoint is exactly a root, so stop here
        iflag = 1; answer = c;
        return
    end
end
switch iterations
    case maxits
        iflag = -1; answer = NaN;
    case 0
        iflag = -2; answer = NaN;
    otherwise
        iflag = iterations; answer = c;
end
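The same algorithm can be expressed compactly in Python; again tolerance and maxits become arguments, and the returned iflag mimics the MATLAB convention (this is an illustrative translation, not code from the text):

```python
import math

def bisect(fun, a, b, tolerance=1e-4, maxits=30):
    """Bisection on [a, b]; returns (answer, iflag) where iflag is
    the iteration count, -1 on failure and -2 for a bad bracket."""
    f_a, f_b = fun(a), fun(b)
    if f_a * f_b >= 0:
        return float('nan'), -2      # no sign change: bad initial interval
    iterations = 0
    while iterations < maxits and (b - a) > tolerance:
        iterations += 1
        c = (a + b) / 2
        f_c = fun(c)
        if f_c == 0:
            return c, iterations     # landed exactly on a root
        if f_c * f_a < 0:
            b, f_b = c, f_c          # root lies in the left half
        else:
            a, f_a = c, f_c          # root lies in the right half
    if (b - a) > tolerance:
        return float('nan'), -1      # ran out of iterations
    return (a + b) / 2, iterations

# Illustrative call on f(x) = x - 2*sin(x^2), bracketing the root near 1/2
root, iflag = bisect(lambda x: x - 2 * math.sin(x ** 2), 0.4, 0.6)
```

One small departure from the MATLAB listing: an iterate that lands exactly on a root returns immediately rather than continuing to loop.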


%
% func.m
%
function [f] = func(x)
f = x - 2*sin(x.^2);

and the main controlling function


%
% mbisect.m
%
global tolerance maxits
tolerance = 1e-4;
maxits = 30;
xlower = 0.4;
xupper = 0.6;
[root,iflag] = bisect('func',xlower,xupper);
switch iflag
    case -1
        disp('Root finding failed')
    case -2
        disp('Initial range does not only contain one root')
    otherwise
        disp([' Root = ' num2str(root) ...
              ' found in ' num2str(iflag) ' iterations'])
end

This method is guaranteed to work provided only one root lies in the relevant
interval and the function is continuous. It may work if there are three roots, but
this is not recommended: in fact it appears to work provided there are an odd
number of roots, because each iteration may discard a number of roots which is
guaranteed to be even. We note that the length of the interval after n iterations
is (b - a)/2^n. Hence if we wish to know the root to within a given tolerance,
we can work out how many iterations we need to perform. For instance, if the
required tolerance is ε, we find that we need

n > (1/ln 2) ln((b - a)/ε).

**Example 4.2** To determine a root of a continuous function f(x) between zero
and one, given that f(0)f(1) < 0, to within 1 × 10^{-4} requires n > ln(10^4)/ln 2 ≈
13.28: in other words, fourteen iterations.
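This iteration count is easily computed; the small Python helper below is illustrative:

```python
import math

def iterations_needed(a, b, eps):
    # number of halvings n so that the bracket (b - a)/2**n is
    # within eps, from n > ln((b - a)/eps) / ln 2
    return math.ceil(math.log((b - a) / eps) / math.log(2))

n = iterations_needed(0.0, 1.0, 1e-4)   # the case in Example 4.2
```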

We shall now discuss other methods for solving equations of the form f(x) =
0 which exploit the form of the derivative (or at least an approximation to it)
as well as the function.