An Introduction to Numerical Analysis
Solutions to Exercises

Endre Süli and David F. Mayers
University of Oxford

CAMBRIDGE UNIVERSITY PRESS
Cambridge, New York, Port Chester, Melbourne, Sydney
Solution to Exercise 1.1

The fixed points are the solutions of $q(x) \equiv x^2 - 2x + c = 0$. Evidently $q(0) = c > 0$, $q(1) = c - 1 < 0$ and $q(x) \to +\infty$ as $x \to \infty$, showing that $0 < \xi_1 < 1 < \xi_2$. Now
$$x_{n+1} - \xi_1 = \tfrac12(x_n^2 + c) - \tfrac12(\xi_1^2 + c),$$
so by subtraction
$$x_{n+1} - \xi_1 = \tfrac12(x_n^2 - \xi_1^2) = \tfrac12(x_n + \xi_1)(x_n - \xi_1).$$
It follows that if $|x_n + \xi_1| < 2$, then $|x_{n+1} - \xi_1| < |x_n - \xi_1|$. Now $\xi_1 + \xi_2 = 2$, so if $0 \le x_n < \xi_2$ then $x_n + \xi_1 < 2$, and evidently $x_n + \xi_1 > 0$. Hence $x_{n+1}$ is closer to $\xi_1$ than was $x_n$, so also $0 \le x_{n+1} < \xi_2$. An induction argument then shows that each $x_n$ satisfies $0 \le x_n < \xi_2$ and
$$|x_{n+1} - \xi_1| = \frac{x_n + \xi_1}{2}\,|x_n - \xi_1| < |x_n - \xi_1|;$$
since the factors $\tfrac12(x_n + \xi_1)$ remain bounded away from 1, $x_n \to \xi_1$. Now $x_{n+1}$ is independent of the sign of $x_n$, and is therefore also independent of the sign of $x_0$, and it follows that $x_n \to \xi_1$ for all $x_0$ such that $-\xi_2 < x_0 < \xi_2$.

The same argument shows that if $x_0 > \xi_2$ then $x_{n+1} > x_n > \xi_2$, and so $x_n \to +\infty$. As before this means also that $x_n \to +\infty$ if $x_0 < -\xi_2$. If $x_0 = \xi_2$ then of course $x_n = \xi_2$ for all $n > 0$; if $x_0 = -\xi_2$, then $x_1 = \xi_2$, and again $x_n = \xi_2$ for all $n > 0$.
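The behaviour above is easy to check numerically. A minimal sketch of the iteration $x_{n+1} = \tfrac12(x_n^2 + c)$, assuming the concrete value $c = 0.75$ (my choice, not from the exercise), for which $\xi_1 = 0.5$ and $\xi_2 = 1.5$:

```python
def iterate(x, c, n):
    # fixed-point iteration x_{k+1} = (x_k^2 + c) / 2
    for _ in range(n):
        x = (x * x + c) / 2.0
    return x

c = 0.75   # q(x) = x^2 - 2x + 0.75 has roots xi1 = 0.5 and xi2 = 1.5
# any start strictly inside (-xi2, xi2) converges to xi1 = 0.5
for x0 in (-1.4, 0.0, 1.4):
    print(round(iterate(x0, c, 200), 12))  # 0.5 each time
# starting exactly at -xi2 jumps to xi2 and stays there
print(iterate(-1.5, c, 1))  # 1.5
```

The convergence is linear with asymptotic factor $\tfrac12(\xi_1 + \xi_1) = \xi_1 = 0.5$, consistent with the derivation above.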
Solution to Exercise 1.2

Here $f(x) = e^x - x - 2$. Since $f'(x) = e^x - 1$ and $f''(x) = e^x$, we have $f'(x) > 0$ and $f''(x) > 0$ for all $x > 0$. It therefore follows from Theorem 1.9 that if $x_0 > 0$ then Newton's method converges to the positive root. Similarly $f'(x) < 0$ and $f''(x) > 0$ in $(-\infty, 0)$, and the same argument shows that the method converges to the negative root if $x_0 < 0$. If $x_0 = 0$ the method fails, as $f'(0) = 0$, and $x_1$ does not exist.

For this function $f$, Newton's method gives
$$x_{n+1} = x_n - \frac{\exp(x_n) - x_n - 2}{\exp(x_n) - 1} = x_n - \frac{1 - (x_n + 2)\exp(-x_n)}{1 - \exp(-x_n)},$$
so that when $x_n$ is large and positive $x_{n+1} \approx x_n - 1$; in fact, $e^{-100}$ is very small indeed. In the same way, when $x_n$ is large and negative, say $x_n = -100$,
$$x_{n+1} = x_n - \frac{\exp(x_n) - x_n - 2}{\exp(x_n) - 1} \approx x_n - \frac{-x_n - 2}{-1} = -2.$$
Hence when $x_0 = 100$, the first few members of the sequence are $100, 99, 98, \ldots$; after 98 iterations $x_n$ gets close to the positive root, and convergence then becomes quadratic and rapid. About 100 iterations are required to give an accurate value for the root. However, when $x_0 = -100$, $x_1$ is very close to $-2$, and is therefore very close to the negative root. Three, or possibly four, iterations should then give the value of the root to six decimal places.
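A short sketch confirming the two regimes (the tolerance and iteration cap are my choices, not from the solution):

```python
import math

def newton(f, fprime, x, tol=1e-12, maxit=200):
    # Newton's method; returns (approximate root, iterations used)
    for k in range(1, maxit + 1):
        dx = f(x) / fprime(x)
        x -= dx
        if abs(dx) < tol:
            return x, k
    return x, maxit

f = lambda x: math.exp(x) - x - 2.0
fp = lambda x: math.exp(x) - 1.0

xpos, kpos = newton(f, fp, 100.0)    # slow walk down: roughly 100 steps
xneg, kneg = newton(f, fp, -100.0)   # one step lands near -2, then fast
print(round(xpos, 6))  # 1.146193
print(round(xneg, 6))  # -1.841406
print(kpos > 90, kneg < 10)  # True True
```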
Solution to Exercise 1.3

Newton's method is
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}.$$
To avoid calculating the derivative we might consider approximating it by
$$f'(x_n) \approx \frac{f(x_n + \delta) - f(x_n)}{\delta},$$
where $\delta$ is small. The given iteration uses this approximation with $\delta = f(x_n)$; if $x_n$ is close to a root then we might expect $f(x_n)$ to be small.

Writing $\varepsilon = x_n - \xi$, and evaluating $f'$ and $f''$ at $x = \xi$, we have
$$f(x_n) = f(\xi) + \varepsilon f'(\xi) + \tfrac12\varepsilon^2 f''(\xi) + O(\varepsilon^3) = \varepsilon f' + \tfrac12\varepsilon^2 f'' + O(\varepsilon^3).$$
Then
$$f(x_n + f(x_n)) - f(x_n) = f(\xi + \varepsilon + \varepsilon f' + \tfrac12\varepsilon^2 f'') - f(\xi + \varepsilon)
= \varepsilon (f')^2 + \varepsilon^2\left[\tfrac32 f' f'' + \tfrac12 (f')^2 f''\right] + O(\varepsilon^3).$$
Hence
$$x_{n+1} = x_n - \frac{[f(x_n)]^2}{f(x_n + f(x_n)) - f(x_n)},$$
and a short calculation gives
$$x_{n+1} - \xi = \varepsilon^2\,\frac{f''\,(1 + f')}{2f'} + O(\varepsilon^3).$$
This shows that if $|x_0 - \xi|$ is sufficiently small the iteration converges quadratically. The analysis here requires that $f'''$ is continuous in a neighbourhood of $\xi$, to justify the terms $O(\varepsilon^3)$; a more careful analysis might relax this to require only the continuity of $f''$.

The leading term in $x_{n+1} - \xi$ is very similar to that in Newton's method, which gives $\varepsilon^2 f''/(2f')$, but with the additional factor $1 + f'$. The convergence of this method, starting from a point close to a root, is therefore very similar to Newton's method. But if $x_0$ is some way from the root, $f(x_n)$ will not be small, the approximation to the derivative $f'(x_n)$ is very poor, and the behaviour may be very different. For the example $f(x) = e^x - x - 2$, starting from $x_0 = 1$, $10$ and $-10$, we find

n    x_n (x_0 = 1)    x_n (x_0 = 10)    x_n (x_0 = -10)
0    1.000000         10.000000         -10.000000
1    1.205792         10.000000          -1.862331
2    1.153859         10.000000          -1.841412
3    1.146328         10.000000          -1.841406
4    1.146193         10.000000          -1.841406
5    1.146193         10.000000

The convergence from $x_0 = 1$ is satisfactory. Starting from $x_0 = -10$ we get similar behaviour to Newton's method, an immediate step to $x_1$ quite close to $-2$, and then rapid convergence to the negative root. However, starting from $x_0 = 10$ gives a quite different result. This time $f(x_0)$ is roughly 20000 (which is not small), and $f(x_0 + f(x_0))$ is about $10^{9500}$; the difference between $x_0$ and $x_1$ is excessively small. Although the iteration converges, the rate of convergence is so slow that for any practical purpose it is virtually stationary. Even starting from $x_0 = 3$, many thousands of iterations are required for convergence.
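The first column of the table can be reproduced directly. A sketch of the derivative-free iteration (note that from $x_0 = 10$ the inner evaluation $f(x_0 + f(x_0))$ would overflow a double, which is the practical face of the $10^{9500}$ remark above):

```python
import math

def step(f, x):
    # x_{n+1} = x_n - f(x_n)^2 / (f(x_n + f(x_n)) - f(x_n))
    fx = f(x)
    return x - fx * fx / (f(x + fx) - fx)

f = lambda x: math.exp(x) - x - 2.0

x = 1.0
seq = [x]
for _ in range(5):
    x = step(f, x)
    seq.append(x)
print([round(v, 6) for v in seq])
# [1.0, 1.205792, 1.153859, 1.146328, 1.146193, 1.146193]
# from x0 = 10: f(x0) ~ 2e4, so f(x0 + f(x0)) = exp(~22024) overflows;
# in exact arithmetic the correction is absurdly small and the
# iteration is effectively stationary
```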
Solution to Exercise 1.4

The number of correct figures in the approximation $x_n$ to the root $\xi$ is
$$D_n = \text{integer part of } \{-\log_{10}|\xi - x_n|\}.$$
From (1.24), for Newton's method we have
$$\frac{|\xi - x_{n+1}|}{|\xi - x_n|^2} \to \frac{|f''(\xi)|}{2|f'(\xi)|}.$$
Hence
$$D_{n+1} \approx 2D_n - B, \qquad B = \log_{10}\frac{|f''(\xi)|}{2|f'(\xi)|}.$$
If $B$ is small then $D_{n+1}$ is close to $2D_n$, but if $B$ is significantly larger than 1 then $D_{n+1}$ may be smaller than this.

In the example,
$$f(x) = e^x - x - 1.000000005, \qquad f'(x) = e^x - 1, \qquad f''(x) = e^x,$$
and $\xi \approx 0.0001$. Hence
$$B = \log_{10}\frac{e^{0.0001}}{2(e^{0.0001} - 1)} \approx 3.7,$$
so the number of significant figures in the next iteration is about $2k - 4$, not $2k$. Starting from $x_0 = 0.0005$ the results of Newton's method are

n    x_n                  correct decimal places
0    0.000500000000000    3
1    0.000260018333542    3
2    0.000149241714302    4
3    0.000108122910746    5
4    0.000100303597745    6
5    0.000099998797906    9
6    0.000099998333362    14

where the last column shows the number of correct decimal places. The root is $\xi = 0.000099998333361$ to 15 decimal places. The number of correct figures at each step is in close agreement with $D_{n+1} \approx 2D_n - 4$.
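The table can be reproduced in double precision, a sketch assuming the constant $1.000000005$ as above; writing $f$ via `expm1` (so $f(x) = (e^x - 1) - x - 5\times10^{-9}$) avoids the catastrophic cancellation that a literal `exp(x) - x - 1.000000005` would suffer near the root:

```python
import math

f = lambda x: math.expm1(x) - x - 5e-9   # e^x - x - 1.000000005
fp = lambda x: math.expm1(x)             # f'(x) = e^x - 1

xi = 0.000099998333361    # the root, correct to 15 decimal places
B = math.log10(math.exp(xi) / (2.0 * math.expm1(xi)))
print(round(B, 1))  # 3.7

x = 0.0005
for n in range(7):
    err = max(abs(x - xi), 1e-16)        # guard against log10(0)
    print(n, "%.15f" % x, int(-math.log10(err)))
    x -= f(x) / fp(x)
```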
Solution to Exercise 1.5

From (1.23) we have
$$\xi - x_{n+1} = -\frac{(\xi - x_n)^2\, f''(\eta_n)}{2f'(x_n)}.$$
Now $f'(\xi) = 0$, so by the Mean Value Theorem
$$f'(x_n) = f'(x_n) - f'(\xi) = (x_n - \xi)\,f''(\chi_n)$$
for some value of $\chi_n$ between $\xi$ and $x_n$. Hence
$$\xi - x_{n+1} = \frac{(\xi - x_n)\,f''(\eta_n)}{2 f''(\chi_n)}.$$
Now $|f''(\eta_n)| < M$ and $|f''(\chi_n)| > m$, and so
$$|\xi - x_{n+1}| < K\,|\xi - x_n|, \qquad K = \frac{M}{2m} < 1.$$
Hence if $x_0$ lies in the given interval, all the $x_n$ lie in the interval, and $x_n \to \xi$. Then $\eta_n \to \xi$, $\chi_n \to \xi$, so $f''(\eta_n) \to f''(\xi)$ and $f''(\chi_n) \to f''(\xi)$. This shows that
$$\frac{\xi - x_{n+1}}{\xi - x_n} \to \tfrac12,$$
and convergence is linear, with asymptotic rate of convergence $\ln 2$.

For the example $f(x) = e^x - 1 - x$ we have $f(0) = 0$, $f'(0) = 0$. Starting from $x_0 = 1$, Newton's method gives

n:   0      1      2      3      4      5      6      7      8      9      10
x_n: 1.000  0.582  0.319  0.168  0.086  0.044  0.022  0.011  0.006  0.003  0.001

showing $x_n$ reducing by a factor close to $\tfrac12$ at each step.
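This table is easy to regenerate; a sketch using `expm1` for accuracy near the double root:

```python
import math

f = lambda x: math.expm1(x) - x     # e^x - 1 - x: double root at 0
fp = lambda x: math.expm1(x)        # f'(x) = e^x - 1

xs = [1.0]
for _ in range(10):
    xs.append(xs[-1] - f(xs[-1]) / fp(xs[-1]))
print([round(v, 3) for v in xs])
# [1.0, 0.582, 0.319, 0.168, 0.086, 0.044, 0.022, 0.011, 0.006, 0.003, 0.001]
ratios = [xs[k + 1] / xs[k] for k in range(10)]
print(round(ratios[-1], 3))  # close to 0.5, the linear rate derived above
```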
Solution to Exercise 1.6

When $f(\xi) = f'(\xi) = f''(\xi) = 0$ we get from the definition of Newton's method, provided that $f'''$ is continuous in some neighbourhood of $\xi$,
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} = x_n - \frac{\tfrac16 (x_n - \xi)^3\, f'''(\eta_n)}{\tfrac12 (x_n - \xi)^2\, f'''(\chi_n)},$$
so that
$$\xi - x_{n+1} = (\xi - x_n)\left[1 - \frac{f'''(\eta_n)}{3 f'''(\chi_n)}\right].$$
If we now assume that in the neighbourhood $[\xi - k, \xi + k]$ of the root
$$0 < m < |f'''(x)| < M, \qquad \text{where } M < 3m,$$
then the bracket lies strictly between 0 and 1, and
$$|\xi - x_{n+1}| < K\,|\xi - x_n|, \qquad K = 1 - \frac{m}{3M} < 1.$$
Hence if $x_0$ is in this neighbourhood, all the $x_n$ lie in the neighbourhood, and Newton's method converges to $\xi$. Also,
$$\frac{\xi - x_{n+1}}{\xi - x_n} \to \frac23,$$
so that convergence is linear, with asymptotic rate of convergence $\ln(3/2)$.
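For the model case $f(x) = x^3$ (a triple root, where $f''' \equiv 6$ so $\eta_n$ and $\chi_n$ drop out) the factor is exactly $2/3$ at every step. A minimal sketch:

```python
f = lambda x: x ** 3          # f = f' = f'' = 0 at the triple root 0
fp = lambda x: 3.0 * x ** 2

x = 1.0
for _ in range(5):
    x -= f(x) / fp(x)         # each step multiplies the error by 2/3
print(round(x, 6))            # (2/3)^5 ~ 0.131687
```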
Solution to Exercise 1.7

The proof follows closely the proof of Theorem 1.9. From (1.23) it follows that $x_{n+1} < \xi$, provided that $x_n$ lies in the interval $I = [X, \xi]$. Since $f$ is monotonic increasing and $f(\xi) = 0$, $f(x) < 0$ in $I$. Hence if $x_0 \in I$ the sequence $(x_n)$ lies in $I$, and is monotonic increasing. As it is bounded above by $\xi$, it converges; since $\xi$ is the only root of $f(x) = 0$ in $I$, the sequence converges to $\xi$. Since $f''$ is continuous it follows that
$$\frac{\xi - x_{n+1}}{(\xi - x_n)^2} = -\frac{f''(\eta_n)}{2f'(x_n)} \to -\frac{f''(\xi)}{2f'(\xi)},$$
so that convergence is quadratic.
Solution to Exercise 1.8

Neglecting terms of second order in $\varepsilon$ we get
$$x_0 = 1 + \varepsilon,\quad x_1 = -1 + \varepsilon,\quad x_2 = \tfrac12\varepsilon,\quad x_3 = -1 - \varepsilon,\quad x_4 = -1 + \varepsilon,\quad x_5 = -1.$$
Although this value of $x_5$ is not exact, it is clear that for sufficiently small $\varepsilon$ the sequence converges to $-1$.

With $x_0$ and $x_1$ interchanged, the value of $x_2$ is of course the same, but $x_3$ and subsequent values are different:
$$x_0 = -1 + \varepsilon,\quad x_1 = 1 + \varepsilon,\quad x_2 = \tfrac12\varepsilon,\quad x_3 = 1 - \varepsilon,\quad x_4 = 1 + \varepsilon,\quad x_5 = 1.$$
Solution to Exercise 1.9

The function $\varphi$ has the form
$$\varphi(x_{n-1}, x_n) = \frac{x_{n-1} f(x_n) - x_n f(x_{n-1}) - \xi\,[f(x_n) - f(x_{n-1})]}{(x_{n-1} - \xi)(x_n - \xi)\,[f(x_n) - f(x_{n-1})]},$$
so that $\varphi(x_{n-1}, x_n) = (x_{n+1} - \xi)/[(x_{n-1} - \xi)(x_n - \xi)]$. In the limit as $x_n \to \xi$ both numerator and denominator tend to zero, so we apply l'Hôpital's rule, differentiating with respect to $x_n$ and using $f(\xi) = 0$, to give
$$\lim_{x_n \to \xi} \varphi(x_{n-1}, x_n) = \frac{(x_{n-1} - \xi)\,f'(\xi) - f(x_{n-1})}{-(x_{n-1} - \xi)\,f(x_{n-1})} \;\equiv\; \psi(x_{n-1}).$$
In the limit as $x_{n-1} \to \xi$ the numerator and denominator of $\psi(x_{n-1})$ both tend to zero, so again we use l'Hôpital's rule to give
$$\lim_{x_{n-1}\to\xi} \psi(x_{n-1}) = \lim_{x_{n-1}\to\xi} \frac{f'(\xi) - f'(x_{n-1})}{-\left[f(x_{n-1}) + (x_{n-1} - \xi)\,f'(x_{n-1})\right]}.$$
We must now use l'Hôpital's rule again, to give finally
$$\lim_{x_{n-1}\to\xi} \psi(x_{n-1}) = \lim_{x_{n-1}\to\xi} \frac{-f''(x_{n-1})}{-\left[2f'(x_{n-1}) + (x_{n-1} - \xi)\,f''(x_{n-1})\right]} = \frac{f''(\xi)}{2f'(\xi)}.$$
Now the limit of $\varphi$ does not depend on the way in which $x_{n-1}$ and $x_n$ tend to $\xi$, so finally we have
$$\frac{x_{n+1} - \xi}{(x_{n-1} - \xi)(x_n - \xi)} \to \frac{f''(\xi)}{2f'(\xi)}.$$
Now assume that
$$\frac{x_{n+1} - \xi}{(x_n - \xi)^q} \to A;$$
then also $(x_n - \xi)/(x_{n-1} - \xi)^q \to A$, so that
$$(x_{n-1} - \xi) \sim A^{-1/q}\,(x_n - \xi)^{1/q},$$
and therefore
$$x_{n+1} - \xi \sim \frac{f''(\xi)}{2f'(\xi)}\,A^{-1/q}\,(x_n - \xi)^{1 + 1/q}.$$
Comparing with the previous limit, we require
$$q = 1 + \frac1q \qquad\text{and}\qquad A^{1 + 1/q} = \frac{f''(\xi)}{2f'(\xi)}.$$
This gives a quadratic equation for $q$, namely $q^2 - q - 1 = 0$, and since we clearly require that $q$ is positive we obtain $q = \tfrac12(1 + \sqrt5)$, giving the required result.
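The key limit $e_{n+1}/(e_n e_{n-1}) \to f''(\xi)/(2f'(\xi))$ can be observed directly. A sketch for the secant method on $f(x) = e^x - x - 2$ (the starting values $1.0$ and $2.0$ are my choice); the golden-ratio order $q = \tfrac12(1+\sqrt5)$ then follows as in the argument above:

```python
import math

f = lambda x: math.exp(x) - x - 2.0
fp = lambda x: math.exp(x) - 1.0

xi = 2.0                       # compute the positive root by Newton
for _ in range(60):
    xi -= f(xi) / fp(xi)

def secant(x0, x1, n):
    xs = [x0, x1]
    for _ in range(n):
        f0, f1 = f(xs[-2]), f(xs[-1])
        xs.append(xs[-1] - f1 * (xs[-1] - xs[-2]) / (f1 - f0))
    return xs

xs = secant(1.0, 2.0, 7)
errs = [abs(x - xi) for x in xs]
ratios = [errs[k + 1] / (errs[k] * errs[k - 1]) for k in range(1, 7)]
print([round(r, 3) for r in ratios])   # tends to f''(xi) / (2 f'(xi))
C = math.exp(xi) / (2.0 * fp(xi))
print(round(C, 3))  # 0.733
```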
Solution to Exercise 1.10

Fig. 1.6 shows a typical situation with $f''(x) > 0$, so the graph of $f$ lies below the line $PQ$. Here $P$ and $Q$ are the points corresponding to $u_n$ and $v_n$, and $R$ is the point corresponding to $x_{n+1}$, so that $f(x_{n+1}) < 0$. Hence in the next iteration $u_{n+1} = x_{n+1}$ and $v_{n+1} = v_n$. The same picture applies to the next step, and again $v_{n+2} = v_{n+1}$, and so on. Thus if $f'' > 0$ in $[u_N, v_N]$, and $f(u_N) < 0 < f(v_N)$, then $v_n = v_N$ for all $n \ge N$. If on the other hand $f(u_N) > 0$ and $f(v_N) < 0$ we see in the same way that $u_n = u_N$ for all $n \ge N$. Similar results are easily deduced if $f'' < 0$ in $[u_N, v_N]$; it is only necessary to replace $f$ by the function $-f$.

Now returning to the situation in Fig. 1.6, the point $v_n$ remains fixed, and the points $u_n$ are monotonically increasing. Hence the sequence $(u_n)$ is monotonically increasing for $n \ge N$, is bounded above by $v_N$, and is therefore convergent to the unique solution of $f(x) = 0$ in the interval $[u_N, v_N]$. In the general situation, one end of the interval $[u_n, v_n]$ eventually remains fixed, and the other end converges to the root.

Write $u_n = \xi + \delta$; then
$$\frac{u_{n+1} - \xi}{\delta} = \frac{(\xi + \delta)\,f(v_N) - v_N f(\xi + \delta) - \xi\,[f(v_N) - f(\xi + \delta)]}{\delta\,[f(v_N) - f(\xi + \delta)]}.$$
In the limit as $\delta \to 0$ the numerator and denominator both tend to zero, so we apply l'Hôpital's rule to give
$$\lim_{\delta\to0} \frac{u_{n+1} - \xi}{\delta} = \lim_{\delta\to0} \frac{f(v_N) - v_N f'(\xi + \delta) + \xi f'(\xi + \delta)}{f(v_N) - f(\xi + \delta) - \delta f'(\xi + \delta)} = \frac{f(v_N) - (v_N - \xi)\,f'(\xi)}{f(v_N)}.$$
Hence the sequence $(u_n)$ converges linearly to $\xi$, and the asymptotic rate of convergence is
$$-\ln\left[1 - \frac{(v_N - \xi)\,f'(\xi)}{f(v_N)}\right].$$
Since $f(\xi) = 0$, the Mean Value Theorem gives $f(v_N) = (v_N - \xi)\,f'(\eta_N)$ for some $\eta_N$ between $\xi$ and $v_N$, so this may also be written
$$-\ln\left[1 - \frac{f'(\xi)}{f'(\eta_N)}\right].$$
Evidently the closer $v_N$ is to the root $\xi$, the closer $f'(\eta_N)$ is to $f'(\xi)$, and the more rapidly the iteration converges.

Asymptotically this method converges more slowly than the standard secant method. Its advantage is that if $f(u_0)$ and $f(v_0)$ have opposite signs the iteration is guaranteed to converge to a root lying in $[u_0, v_0]$; the method is therefore robust. However, it is easy to draw a situation where $v_0$ is far from $\xi$, and where the bisection method is likely to be more efficient.
Solution to Exercise 1.11

The sequence $(x_n)$ converges to the two-cycle $a, b$ if $x_{2n} \to a$ and $x_{2n+1} \to b$, or equivalently with $a$ and $b$ interchanged. So $a$ and $b$ are fixed points of the composite iteration $x_{n+1} = h(x_n)$, where $h(x) = g(g(x))$, and we define a stable two-cycle to be one which corresponds to a stable fixed point of $h$. Now
$$h'(x) = g'(g(x))\,g'(x);$$
if $|h'(a)| < 1$ the fixed point $a$ of $h$ is stable; since $g(a) = b$ it follows that if $|g'(a)\,g'(b)| < 1$ then the two-cycle $a, b$ is stable. In the same way, if $|g'(a)\,g'(b)| > 1$ then the two-cycle is not stable.

For Newton's method $x_{n+1} = x_n - f(x_n)/f'(x_n)$, and the corresponding function $g$ is defined by $g(x) = x - f(x)/f'(x)$. In this case
$$g'(x) = \frac{f(x)\,f''(x)}{[f'(x)]^2}.$$
Hence, if
$$\left|\frac{f(a)\,f''(a)}{[f'(a)]^2}\cdot\frac{f(b)\,f''(b)}{[f'(b)]^2}\right| < 1,$$
the two-cycle is stable.

Newton's method for the solution of $x^3 - x = 0$ has the two-cycle $a, -a$ if
$$a - \frac{a^3 - a}{3a^2 - 1} = -a \qquad\text{and}\qquad -a - \frac{-a^3 + a}{3a^2 - 1} = a.$$
These equations have the solution
$$a = \frac{1}{\sqrt5}.$$
Here $f'(a) = 3a^2 - 1 = -2/5$ and $f''(a) = 6a = 6/\sqrt5$. So
$$\frac{f(a)\,f''(a)}{[f'(a)]^2}\cdot\frac{f(-a)\,f''(-a)}{[f'(-a)]^2} = 36,$$
and the two-cycle is not stable.
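The cycle and its instability are easy to verify numerically; a sketch (the perturbation size $10^{-8}$ is my choice):

```python
import math

f = lambda x: x ** 3 - x
fp = lambda x: 3.0 * x ** 2 - 1.0
g = lambda x: x - f(x) / fp(x)       # one Newton step

a = 1.0 / math.sqrt(5.0)
print(abs(g(a) + a) < 1e-12)         # True: g(a) = -a
print(abs(g(g(a)) - a) < 1e-12)      # True: g(g(a)) = a, a two-cycle

# stability factor g'(a) g'(-a), with g' = f f'' / (f')^2
s = lambda t: f(t) * 6.0 * t / fp(t) ** 2
print(round(s(a) * s(-a), 6))        # 36.0, so the cycle is unstable

# a tiny perturbation is amplified by ~36 every two steps
x = a + 1e-8
for _ in range(6):
    x = g(x)
print(abs(x - a) > 1e-5)             # True: the orbit has left the cycle
```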
Solution to Exercise 2.1

Multiplication of $A$ by $Q$ on the right reverses the order of the columns of $A$, and multiplication on the left reverses the order of the rows. Hence, writing $B = QAQ$,
$$B_{i,j} = A_{n+1-i,\,n+1-j}.$$
If $L$ is lower triangular, then $L_{ij} = 0$ whenever $i < j$. Hence $(QLQ)_{ij} = 0$ whenever $n+1-i < n+1-j$, that is when $i > j$, thus showing that $QLQ$ is upper triangular.

Now suppose that $A$ is a general $n \times n$ matrix, that $B = QAQ$, and that $B$ can be written as $B = L_1U_1$, where $L_1$ is unit lower triangular and $U_1$ is upper triangular. Then
$$A = QBQ = QL_1U_1Q = (QL_1Q)(QU_1Q),$$
since $Q^2 = I$, the unit matrix. Now $QL_1Q$ is unit upper triangular and $QU_1Q$ is lower triangular, so we have the required form $A = UL$ with $U = QL_1Q$ and $L = QU_1Q$.

This factorisation is possible if we can write $B = L_1U_1$, and this can be done if all the leading principal submatrices of $B$ are nonsingular. The leading principal submatrices of $B$ are the "trailing" principal submatrices of $A$, so the required condition is that all the trailing principal submatrices of $A$ are nonsingular. The factorisation does not exist, for example, if
$$A = \begin{pmatrix} 2 & 1 \\ 0 & 0 \end{pmatrix},$$
since the corresponding matrix $B$ has $B_{11} = 0$.
Solution to Exercise 2.2

We can write $A = LU^*$, where $L$ is unit lower triangular and $U^*$ is upper triangular. Now let $D$ be the diagonal matrix whose diagonal elements are the diagonal elements of $U^*$, so that $d_{ii} = u^*_{ii}$, and define $U = D^{-1}U^*$; then $A = LDU$ as required, since $U$ is unit upper triangular. The given condition on $A$ ensures that $u^*_{ii} \ne 0$ for $i = 1, \ldots, n-1$.

The procedure needs to be modified slightly when $u^*_{nn} = 0$, since then $D$ is singular. All that is necessary is to define $U = (D^*)^{-1}U^*$, where $D^*$ is the matrix $D$ but with the last diagonal element replaced by 1, and then to replace $U_{nn}$ by 1; since $d_{nn} = 0$, the product $LDU$ is unaffected.

If the factorisation $A = LU^*$ is known, we can use this procedure to find $D$ and $U$ such that $A = LDU$. Then $A^T = U^TDL^T$, which can be written
$$A^T = (U^T)(DL^T),$$
where $U^T$ is unit lower triangular and $DL^T$ is upper triangular. This is therefore the required factorisation of $A^T$.
Solution to Exercise 2.3

Suppose that the required result is true for $n = k$, so that any nonsingular $k \times k$ matrix can be written as $PA = LU$; this is obviously true for $k = 1$. Now consider any nonsingular $(n+1) \times (n+1)$ matrix $A$, partitioned according to the first row and column. We locate an element in the first column with largest magnitude (any one of them if there is more than one); if it lies in row $r$ we interchange rows 1 and $r$. We can then write
$$P^{(1)}A = \begin{pmatrix} \alpha & \mathbf{w}^T \\ \mathbf{p} & B \end{pmatrix} = \begin{pmatrix} 1 & \mathbf{0}^T \\ \mathbf{m} & I \end{pmatrix}\begin{pmatrix} \alpha & \mathbf{w}^T \\ \mathbf{0} & C \end{pmatrix},$$
where $\alpha$ is the largest element in the first column of $A$. Writing out the product we find that
$$\alpha\,\mathbf{m} = \mathbf{p}, \qquad \mathbf{m}\mathbf{w}^T + C = B.$$
This gives
$$\mathbf{m} = \frac1\alpha\,\mathbf{p}, \qquad C = B - \frac1\alpha\,\mathbf{p}\mathbf{w}^T.$$
Note that if $\alpha = 0$ all the elements of the first column of $A$ would be zero, contradicting our assumption that $A$ is nonsingular. Now $\det(P^{(1)}A) = \alpha\,\det(C)$, so the matrix $C$ is also nonsingular, and as it is an $n \times n$ matrix we can use the inductive hypothesis to write $P^{(n)}C = L^{(n)}U^{(n)}$. It is then easy to see that
$$\begin{pmatrix} 1 & \mathbf{0}^T \\ \mathbf{0} & P^{(n)} \end{pmatrix} P^{(1)}A = \begin{pmatrix} 1 & \mathbf{0}^T \\ P^{(n)}\mathbf{m} & L^{(n)} \end{pmatrix}\begin{pmatrix} \alpha & \mathbf{w}^T \\ \mathbf{0} & U^{(n)} \end{pmatrix}.$$
Now defining the permutation matrix
$$P = \begin{pmatrix} 1 & \mathbf{0}^T \\ \mathbf{0} & P^{(n)} \end{pmatrix} P^{(1)},$$
we obtain
$$PA = \begin{pmatrix} 1 & \mathbf{0}^T \\ P^{(n)}\mathbf{m} & L^{(n)} \end{pmatrix}\begin{pmatrix} \alpha & \mathbf{w}^T \\ \mathbf{0} & U^{(n)} \end{pmatrix},$$
which is the required factorisation of $A$. The theorem therefore holds for every matrix of order $n+1$, and the induction is complete.

Consider the matrix
$$A = \begin{pmatrix} 0 & 1 \\ 0 & 1 \end{pmatrix}$$
and attempt to write it in the form
$$A = \begin{pmatrix} p & 0 \\ q & r \end{pmatrix}\begin{pmatrix} 1 & s \\ 0 & 1 \end{pmatrix}.$$
This would require
$$p = 0, \qquad ps = 1, \qquad q = 0, \qquad qs + r = 1,$$
where the first two equations are clearly incompatible. The only possible permutation matrix $P$ interchanges the rows, and the factorisation of $PA$ is obviously still impossible.
Solution to Exercise 2.4

Partitioning the matrices by the first $k$ rows and columns, the equation $L\mathbf{y} = \mathbf{b}$ becomes
$$\begin{pmatrix} L_1 & O \\ C & L_2 \end{pmatrix}\begin{pmatrix} \mathbf{y}_1 \\ \mathbf{y}_2 \end{pmatrix} = \begin{pmatrix} \mathbf{0} \\ \boldsymbol{\beta} \end{pmatrix},$$
where we have used the fact that the first $k$ entries of $\mathbf{b}$ are zero. Multiplying out this equation gives
$$L_1\mathbf{y}_1 = \mathbf{0}, \qquad C\mathbf{y}_1 + L_2\mathbf{y}_2 = \boldsymbol{\beta},$$
from which it is clear that $\mathbf{y}_1 = \mathbf{0}$, since $L_1$ and $L_2$ are nonsingular. Hence the first $k$ entries of $\mathbf{y}$ are zero.

Column $j$ of the inverse of $L$ is $\mathbf{y}^{(j)}$, the solution of $L\mathbf{y}^{(j)} = \mathbf{e}^{(j)}$, where $\mathbf{e}^{(j)}$ is column $j$ of the unit matrix and has its only nonzero element in row $j$. Hence the first $j-1$ elements of $\mathbf{e}^{(j)}$ are zero, and by what we have just proved the first $j-1$ elements of $\mathbf{y}^{(j)}$ are zero. Thus in each column of the inverse all the elements above the diagonal element are zero, and the inverse is lower triangular.
Solution to Exercise 2.5

The operations on the matrix $B$ are: (i) multiplying all the elements in a row by a scalar; (ii) adding a multiple of one row to another. Each of these is equivalent to multiplying on the left by a nonsingular matrix. Evidently the effect of the successive operations (i) is that the diagonal elements in the first $n$ columns each become 1, and the effect of the successive operations (ii) is that all the off-diagonal elements become zero. Hence the final result in the first $n$ columns is the unit matrix, so the combined effect of all these operations is equivalent to multiplying $B$ on the left by $A^{-1}$, and the final result in the last $n$ columns is the inverse $A^{-1}$.

The diagonal element $b_{rr}$ used at any stage is the determinant of the leading principal $r \times r$ submatrix of $A$, multiplied by each of the preceding scaling factors $1/b_{jj}$. Hence if all the leading principal submatrices of $A$ are nonsingular, none of the diagonal elements $b_{jj}$ used at any stage is zero.

At each stage, the scaling of the elements in row $j$ requires $2n$ multiplications. The calculation of each new element $b_{ik} - b_{ij}b_{jk}$ involves one multiplication, and there are $2n(n-1)$ such elements, as row $j$ is not involved. Thus each stage requires $2n^2$ multiplications, and there are $n$ stages, giving a total of $2n^3$ multiplications. However, at stage $j$ the first $j-1$ columns and the last $n-j+1$ columns are columns of the unit matrix. Hence the scaling of row $j$ involves only $n$ nonzero elements, and in the calculation of the new elements $b_{ik}$ half of the factors $b_{jk}$ are zero. This reduces the total number of multiplications from $2n^3$ to $n^3$.
Solution to Exercise 2.6

The initial matrix $B$ is
$$B = \left(\begin{array}{ccc|ccc} 2 & 4 & 2 & 1 & 0 & 0 \\ 1 & 0 & 3 & 0 & 1 & 0 \\ 3 & 1 & 2 & 0 & 0 & 1 \end{array}\right).$$
After the successive stages of the calculation the matrix $B$ becomes
$$\left(\begin{array}{ccc|ccc} 1 & 2 & 1 & 1/2 & 0 & 0 \\ 0 & -2 & 2 & -1/2 & 1 & 0 \\ 0 & -5 & -1 & -3/2 & 0 & 1 \end{array}\right),$$
$$\left(\begin{array}{ccc|ccc} 1 & 0 & 3 & 0 & 1 & 0 \\ 0 & 1 & -1 & 1/4 & -1/2 & 0 \\ 0 & 0 & -6 & -1/4 & -5/2 & 1 \end{array}\right),$$
$$\left(\begin{array}{ccc|ccc} 1 & 0 & 0 & -1/8 & -1/4 & 1/2 \\ 0 & 1 & 0 & 7/24 & -1/12 & -1/6 \\ 0 & 0 & 1 & 1/24 & 5/12 & -1/6 \end{array}\right).$$
The inverse matrix $A^{-1}$ consists of the last three columns of this.
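The elimination above can be mechanised. A sketch of the Gauss-Jordan procedure (no pivoting, exact rational arithmetic via `fractions`, which is my choice for checking the entries):

```python
from fractions import Fraction

def gauss_jordan_inverse(A):
    # forms [A | I] and reduces the left half to the identity
    n = len(A)
    B = [[Fraction(A[i][j]) for j in range(n)] +
         [Fraction(1 if i == j else 0) for j in range(n)] for i in range(n)]
    for j in range(n):
        piv = B[j][j]                 # assumed nonzero (no pivoting here)
        B[j] = [v / piv for v in B[j]]
        for i in range(n):
            if i != j:
                fac = B[i][j]
                B[i] = [B[i][k] - fac * B[j][k] for k in range(2 * n)]
    return [row[n:] for row in B]

A = [[2, 4, 2], [1, 0, 3], [3, 1, 2]]
inv = gauss_jordan_inverse(A)
print(inv[0])  # [Fraction(-1, 8), Fraction(-1, 4), Fraction(1, 2)]
```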
Solution to Exercise 2.7

$$\sum_{i=1}^n |(A\mathbf{x})_i| = \sum_{i=1}^n \left|\sum_{j=1}^n a_{ij}x_j\right|
\le \sum_{i=1}^n \sum_{j=1}^n |a_{ij}|\,|x_j|
= \sum_{j=1}^n |x_j| \sum_{i=1}^n |a_{ij}|
\le C\sum_{j=1}^n |x_j| = C\,\|\mathbf{x}\|_1.$$
Now choose
$$C = \max_{j=1,\ldots,n} \sum_{i=1}^n |a_{ij}|;$$
then evidently
$$\sum_{i=1}^n |a_{ij}| \le C \quad\text{for every } j.$$
Let $k$ be a value of $j$ for which this maximum is attained, and define $\mathbf{x}$ to be the vector whose only nonzero element is a 1 in position $k$. Then
$$\sum_{i=1}^n |(A\mathbf{x})_i| = \sum_{i=1}^n |a_{ik}| = C = C\,\|\mathbf{x}\|_1,$$
so that $\|A\mathbf{x}\|_1 = C\,\|\mathbf{x}\|_1$, giving the required result $\|A\|_1 = C$.
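A small sketch of the result (the matrix is my example, not from the exercise): the 1-norm is the maximum absolute column sum, and the bound is attained at a unit coordinate vector.

```python
def one_norm(A):
    # subordinate 1-norm: maximum absolute column sum
    m, n = len(A), len(A[0])
    return max(sum(abs(A[i][j]) for i in range(m)) for j in range(n))

A = [[1.0, -7.0], [2.0, 3.0]]
print(one_norm(A))  # 10.0: column 2 attains the maximum

x = [0.0, 1.0]      # e_k with k the maximising column
Ax = [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]
print(sum(abs(v) for v in Ax))  # 10.0, so ||Ax||_1 = ||A||_1 ||x||_1
```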
Solution to Exercise 2.8

(i) Write $\sigma = \|\mathbf{v}\|_2$, so that $v_1^2 + \cdots + v_n^2 = \sigma^2$. It is then clear that $|v_j| \le \sigma$ for $j = 1, \ldots, n$, and so $\|\mathbf{v}\|_\infty \le \|\mathbf{v}\|_2$. Equality is attained by any vector $\mathbf{v}$ which has a single nonzero element. Now write $\mu = \|\mathbf{v}\|_\infty$, so that $|v_j| \le \mu$ for all $j$. Then
$$\|\mathbf{v}\|_2^2 = v_1^2 + \cdots + v_n^2 \le \mu^2 + \cdots + \mu^2 = n\mu^2,$$
which is the required result $\|\mathbf{v}\|_2 \le \sqrt{n}\,\|\mathbf{v}\|_\infty$; in this case equality is attained by any vector $\mathbf{v}$ in which all the elements have the same magnitude.

(ii) From the definition of $\|A\|_\infty$,
$$\|A\|_\infty = \max_{\mathbf{x}\ne\mathbf{0}} \frac{\|A\mathbf{x}\|_\infty}{\|\mathbf{x}\|_\infty}.$$
Choose $\mathbf{v}$ to be a vector for which this maximum is attained. Then
$$\|A\|_\infty\,\|\mathbf{v}\|_\infty = \|A\mathbf{v}\|_\infty \le \|A\mathbf{v}\|_2 \le \|A\|_2\,\|\mathbf{v}\|_2 \le \|A\|_2\,\sqrt{n}\,\|\mathbf{v}\|_\infty,$$
using the inequalities from (i). Division by $\|\mathbf{v}\|_\infty$ gives the required result.

To attain equality, we require equality throughout the argument. This means that $A\mathbf{v}$ must have a single nonzero element, that all the elements of $\mathbf{v}$ must have the same magnitude, and that $A\mathbf{v}$ must have its maximum possible size in both the 2-norm and the $\infty$-norm; thus $\mathbf{v}$ must be an eigenvector of $A^TA$. An example is
$$A = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, \qquad \mathbf{v} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}.$$
Evidently the row sums are 2, so that $\|A\|_\infty = 2$. It is easy to see that $A^TA = 2I$, which has both eigenvalues equal to 2. Hence $\|A\|_2 = \sqrt2$, as required.

For the second inequality, choose $\mathbf{u}$ to be a vector which attains the maximum possible $\|A\mathbf{u}\|_2$. Then in the same way
$$\|A\|_2\,\|\mathbf{u}\|_2 = \|A\mathbf{u}\|_2 \le \sqrt{m}\,\|A\mathbf{u}\|_\infty \le \sqrt{m}\,\|A\|_\infty\,\|\mathbf{u}\|_\infty \le \sqrt{m}\,\|A\|_\infty\,\|\mathbf{u}\|_2,$$
and the result follows by division by $\|\mathbf{u}\|_2$. The argument follows as above, but we have to note that the vector $A\mathbf{u}$ has $m$ elements. To attain equality throughout we now require that $\mathbf{u}$ has a single nonzero element, that all the elements of $A\mathbf{u}$ have the same magnitude, and again that $A\mathbf{u}$ must have maximum size in both the 2-norm and the $\infty$-norm; thus $\mathbf{u}$ must again be an eigenvector of $A^TA$. A rather trivial example has
$$A = \begin{pmatrix} 1 \\ 1 \end{pmatrix},$$
which is a $2 \times 1$ matrix. Clearly $\|A\|_\infty = 1$ and $A^TA = 2I$, so that $\|A\|_2 = \sqrt2$.
Solution to Exercise 2.9

We know that $\|A\|_2^2$ is the largest eigenvalue of $A^TA$, which is $\lambda_n$. In the same way $\|A^{-1}\|_2^2$ is the largest eigenvalue of $A^{-T}A^{-1} = (AA^T)^{-1}$, and the eigenvalues of this matrix are the reciprocals of the eigenvalues of $AA^T$. Now the eigenvalues of $AA^T$ are the same as the eigenvalues of $A^TA$, so that $\|A^{-1}\|_2^2 = 1/\lambda_1$, and hence $\kappa_2(A) = (\lambda_n/\lambda_1)^{1/2}$.

Now if $Q$ is orthogonal, then $Q^TQ = I$, and all the eigenvalues of $I$ are equal to 1. Hence by the result just proved, $\kappa_2(Q) = 1$. Conversely, if $\kappa_2(A) = 1$, then the largest and smallest eigenvalues of $A^TA$ must be equal, so all the eigenvalues are equal, to $\lambda$ say; hence $A^TA = \lambda I$. The matrix $A^TA$ is positive definite, so $\lambda > 0$, and writing
$$Q = \frac{1}{\sqrt\lambda}\,A$$
we have shown that $Q^TQ = I$, so that $Q$ is orthogonal, as required.
Solution to Exercise 2.10

Let $\lambda$ be an eigenvalue of $A^TA$ and let $\mathbf{x} \ne \mathbf{0}$ be an associated eigenvector. Then $A^TA\mathbf{x} = \lambda\mathbf{x}$, and therefore
$$\|A\mathbf{x}\|_2^2 = \mathbf{x}^TA^TA\mathbf{x} = \lambda\,\mathbf{x}^T\mathbf{x} = \lambda\,\|\mathbf{x}\|_2^2.$$
Hence $\lambda$ is a nonnegative real number. Now let $\|\cdot\|$ be a vector norm on $\mathbb{R}^n$ and let $\|\cdot\|$ also denote the associated subordinate matrix norm on $\mathbb{R}^{n\times n}$. Then,
$$|\lambda|\,\|\mathbf{x}\| = \|\lambda\mathbf{x}\| = \|A^TA\mathbf{x}\| \le \|A^TA\|\,\|\mathbf{x}\| \le \|A^T\|\,\|A\|\,\|\mathbf{x}\|.$$
Since $\mathbf{x} \ne \mathbf{0}$, it follows that $0 \le \lambda \le \|A^T\|\,\|A\|$ for each eigenvalue $\lambda$ of $A^TA$. By Theorem 2.9, we then have that
$$\|A\|_2^2 \le \|A^T\|\,\|A\|$$
for any subordinate matrix norm $\|\cdot\|$. For example, with $\|\cdot\| = \|\cdot\|_\infty$, on noting that $\|A^T\|_\infty = \|A\|_1$, we conclude that
$$\|A\|_2^2 \le \|A\|_1\,\|A\|_\infty,$$
as required.
Solution to Exercise 2.11

On multiplying $A$ on the left by $A^T$, we deduce that
$$A^TA = \begin{pmatrix}
n & 1 & 1 & 1 & \cdots & 1 \\
1 & 1 & 0 & 0 & \cdots & 0 \\
1 & 0 & 1 & 0 & \cdots & 0 \\
1 & 0 & 0 & 1 & \cdots & 0 \\
\vdots & & & & \ddots & \vdots \\
1 & 0 & 0 & 0 & \cdots & 1
\end{pmatrix}.$$
Writing the eigenvalue problem $A^TA\mathbf{x} = \lambda\mathbf{x}$ in expanded form then gives
$$n x_1 + x_2 + x_3 + x_4 + \cdots + x_n = \lambda x_1,$$
$$x_1 + x_j = \lambda x_j, \qquad j = 2, \ldots, n.$$
We observe that $\lambda = 1$ is an eigenvalue corresponding to the $n-2$ eigenvectors of $A^TA$ of the form $\mathbf{x} = (x_1, x_2, \ldots, x_n)^T$ with $x_1 = 0$ and $x_2 + \cdots + x_n = 0$. The two remaining eigenvectors of $A^TA$ are of the form $\mathbf{x} = (x_1, x_2, x_2, \ldots, x_2)^T$, where $x_1$ and $x_2$ are nonzero real numbers, and are found by solving the linear system
$$n x_1 + (n-1)x_2 = \lambda x_1, \qquad x_1 + x_2 = \lambda x_2,$$
which has a nontrivial solution when
$$\det\begin{pmatrix} n - \lambda & n - 1 \\ 1 & 1 - \lambda \end{pmatrix} = \lambda^2 - (n+1)\lambda + 1 = 0,$$
i.e., when
$$\lambda = \tfrac12 (n+1)\left[1 \pm \sqrt{1 - \tfrac{4}{(n+1)^2}}\right].$$
By Theorem 2.9, we then have that
$$\|A\|_2 = \left\{\tfrac12 (n+1)\left[1 + \sqrt{1 - \tfrac{4}{(n+1)^2}}\right]\right\}^{1/2}.$$
Solution to Exercise 2.12

If $I - B$ is singular, there is a nonzero vector $\mathbf{x}$ such that
$$(I - B)\mathbf{x} = \mathbf{0}, \qquad\text{so that}\qquad \mathbf{x} = B\mathbf{x}.$$
Hence
$$\|\mathbf{x}\| = \|B\mathbf{x}\| \le \|B\|\,\|\mathbf{x}\|,$$
and so $\|B\| \ge 1$. It then follows that if $\|A\| < 1$, then $I - A$ is not singular. From the relation
$$(I - A)(I - A)^{-1} = I$$
it follows that
$$(I - A)^{-1} = I + A(I - A)^{-1},$$
and so
$$\|(I - A)^{-1}\| \le \|I\| + \|A(I - A)^{-1}\| \le 1 + \|A\|\,\|(I - A)^{-1}\|.$$
Thus
$$(1 - \|A\|)\,\|(I - A)^{-1}\| \le 1,$$
giving the required result.
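The bound $\|(I-A)^{-1}\| \le 1/(1 - \|A\|)$ can be checked on a concrete matrix; a sketch in the $\infty$-norm with a $2\times2$ example of my choosing:

```python
def inf_norm(M):
    # subordinate infinity-norm: maximum absolute row sum
    return max(sum(abs(v) for v in row) for row in M)

def inv2(M):
    # inverse of a 2 x 2 matrix
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[0.25, -0.25], [0.1, 0.25]]                 # ||A||_inf = 0.5 < 1
I_minus_A = [[1.0 - 0.25, 0.25], [-0.1, 1.0 - 0.25]]
lhs = inf_norm(inv2(I_minus_A))
rhs = 1.0 / (1.0 - inf_norm(A))
print(round(lhs, 4), rhs)  # 1.7021 2.0 -- the bound holds
```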
Solution to Exercise 2.13

From $A\mathbf{x} = \mathbf{b}$ and $(A + \delta A)(\mathbf{x} + \delta\mathbf{x}) = \mathbf{b}$ it follows that
$$A\,\delta\mathbf{x} + \delta A\,\mathbf{x} + \delta A\,\delta\mathbf{x} = \mathbf{0}.$$
Then
$$(I + A^{-1}\delta A)\,\delta\mathbf{x} = -A^{-1}\delta A\,\mathbf{x},$$
or
$$\delta\mathbf{x} = -(I + A^{-1}\delta A)^{-1}A^{-1}\delta A\,\mathbf{x},$$
and so
$$\|\delta\mathbf{x}\| \le \|(I + A^{-1}\delta A)^{-1}\|\,\|A^{-1}\delta A\|\,\|\mathbf{x}\|.$$
Applying the result of Exercise 2.12 we get, provided that $\|A^{-1}\delta A\| < 1$,
$$\frac{\|\delta\mathbf{x}\|}{\|\mathbf{x}\|} \le \frac{\|A^{-1}\delta A\|}{1 - \|A^{-1}\delta A\|},$$
which is the required result.
Solution to Exercise 2.14

Choose $\mathbf{x}$ to be an eigenvector of $A^TA$ with eigenvalue $\lambda$, and $\delta\mathbf{x}$ to be an eigenvector with eigenvalue $\mu$. Then with $\mathbf{b} = A\mathbf{x}$,
$$\mathbf{b}^T\mathbf{b} = \mathbf{x}^TA^TA\mathbf{x} = \lambda\,\mathbf{x}^T\mathbf{x}, \qquad\text{so that}\qquad \|\mathbf{b}\|_2^2 = \lambda\,\|\mathbf{x}\|_2^2.$$
In the same way, with $\delta\mathbf{b} = A\,\delta\mathbf{x}$, $\|\delta\mathbf{b}\|_2^2 = \mu\,\|\delta\mathbf{x}\|_2^2$, and so
$$\frac{\|\delta\mathbf{x}\|_2}{\|\mathbf{x}\|_2} = \left(\frac{\lambda}{\mu}\right)^{1/2}\frac{\|\delta\mathbf{b}\|_2}{\|\mathbf{b}\|_2}.$$
Now choose $\mathbf{x}$ so that $\lambda = \lambda_n$, the largest eigenvalue of $A^TA$, and $\delta\mathbf{x}$ so that $\mu = \lambda_1$, the smallest eigenvalue. Then
$$\left(\frac{\lambda_n}{\lambda_1}\right)^{1/2} = \kappa_2(A),$$
and equality is achieved as required, with $\mathbf{b} = A\mathbf{x}$ and $\delta\mathbf{b} = A\,\delta\mathbf{x}$.
Solution to Exercise 2.15

Following the notation of Theorem 2.12, we write
$$A = (\mathbf{a} \;\; A_n), \qquad \mathbf{a} = \begin{pmatrix} 9 \\ 12 \\ 0 \end{pmatrix}, \qquad A_n = \begin{pmatrix} 6 \\ 8 \\ 20 \end{pmatrix},$$
so that
$$\sqrt{\mathbf{a}^T\mathbf{a}} = 15 \qquad\text{and}\qquad \mathbf{q} = \frac{\mathbf{a}}{15} = \begin{pmatrix} 3/5 \\ 4/5 \\ 0 \end{pmatrix}.$$
Then
$$\mathbf{r}^T = \mathbf{q}^TA_n = (3/5 \;\; 4/5 \;\; 0)\begin{pmatrix} 6 \\ 8 \\ 20 \end{pmatrix} = 10,$$
and
$$Q_nR_n = A_n - \mathbf{q}\,\mathbf{r}^T = \begin{pmatrix} 6 \\ 8 \\ 20 \end{pmatrix} - 10\begin{pmatrix} 3/5 \\ 4/5 \\ 0 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 20 \end{pmatrix},$$
so that
$$Q_n = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} \qquad\text{and}\qquad R_n = 20.$$
The required QR decomposition is therefore
$$\begin{pmatrix} 9 & 6 \\ 12 & 8 \\ 0 & 20 \end{pmatrix} = \begin{pmatrix} 3/5 & 0 \\ 4/5 & 0 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} 15 & 10 \\ 0 & 20 \end{pmatrix}.$$
The required least squares solution is then given by solving $R\mathbf{x} = Q^T\mathbf{b}$, or
$$\begin{pmatrix} 15 & 10 \\ 0 & 20 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 3/5 & 4/5 & 0 \\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} 300 \\ 600 \\ 900 \end{pmatrix} = \begin{pmatrix} 660 \\ 900 \end{pmatrix},$$
which gives $x_2 = 45$ and then $15x_1 = 660 - 450$, so $x_1 = 14$.
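The whole computation above can be retraced step by step; a sketch of the Gram-Schmidt style QR and the back-substitution:

```python
import math

a1 = [9.0, 12.0, 0.0]          # first column of A
a2 = [6.0, 8.0, 20.0]          # second column of A
b = [300.0, 600.0, 900.0]

dot = lambda u, v: sum(x * y for x, y in zip(u, v))

r11 = math.sqrt(dot(a1, a1))                   # 15
q1 = [x / r11 for x in a1]                     # (3/5, 4/5, 0)
r12 = dot(q1, a2)                              # 10
w = [a2[i] - r12 * q1[i] for i in range(3)]    # (0, 0, 20)
r22 = math.sqrt(dot(w, w))                     # 20
q2 = [x / r22 for x in w]

# back-substitution for R x = Q^T b
c1, c2 = dot(q1, b), dot(q2, b)                # 660, 900
x2 = c2 / r22
x1 = (c1 - r12 * x2) / r11
print(round(x1, 6), round(x2, 6))  # 14.0 45.0
```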
Solution to Exercise 3.1

Using the relations (3.2) and (3.3) we find in succession
$$l_{11} = 2, \quad l_{21} = 3, \quad l_{31} = 1, \quad l_{22} = 1, \quad l_{32} = 0, \quad l_{33} = 2.$$
Hence $A = LL^T$ where
$$L = \begin{pmatrix} 2 & 0 & 0 \\ 3 & 1 & 0 \\ 1 & 0 & 2 \end{pmatrix}.$$
As in the solution to Exercise 1 we nd that A = L LT , where 0 1 0 0 1 L = @ 2 1 0 A: 2 1 1 Writing the system of equations Ax = b, or L LT x = b, in the form L y = b; LT x = y; we nd in succession y = 4 y = 1 y = 1; 1 2 3
and then
which is the required solution.
x3 x2 x1
= 2 = 0 = 1;
Solution to Exercise 3.3

In the notation of equations (3.7) this example has
$$a_i = -1, \; i = 2, \ldots, n; \qquad b_i = 2, \; i = 1, \ldots, n; \qquad c_i = -1, \; i = 1, \ldots, n-1.$$
The elements $l_j$ and $u_j$ are determined uniquely by (3.7), with $u_1 = b_1 = 2$. Hence if we find expressions which satisfy all these equations then they are the values of $l_j$ and $u_j$. Suppose now that $l_j = -(j-1)/j$. Then the first equation, $l_j = a_j/u_{j-1} = -1/u_{j-1}$, requires that
$$u_{j-1} = -1/l_j = j/(j-1), \qquad\text{i.e.}\qquad u_j = (j+1)/j.$$
This also satisfies $u_1 = 2$, as required. The second of equations (3.7) is also satisfied, as
$$b_j - l_jc_{j-1} = 2 + l_j = 2 - (j-1)/j = (j+1)/j = u_j.$$
Hence all the equations are satisfied, and
$$l_j = -(j-1)/j, \; j = 2, \ldots, n; \qquad u_j = (j+1)/j, \; j = 1, \ldots, n.$$
To find the determinant, note that $\det(T) = \det(L)\det(U)$, and $\det(L) = 1$ since $L$ is unit lower triangular. Then
$$\det(U) = u_1u_2\cdots u_n = \frac21\cdot\frac32\cdots\frac{n+1}{n} = n+1.$$
Hence $\det(T) = n+1$.
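The closed-form factors can be confirmed by running the recurrence itself; a minimal sketch for $n = 6$ (the size is my choice):

```python
def tridiag_lu(n):
    # LU of the n x n matrix with b_i = 2 on the diagonal, a_i = c_i = -1
    l, u = [0.0], [2.0]               # l[0] is unused padding; u_1 = 2
    for j in range(2, n + 1):
        lj = -1.0 / u[-1]             # l_j = a_j / u_{j-1}
        u.append(2.0 - lj * (-1.0))   # u_j = b_j - l_j c_{j-1}
        l.append(lj)
    return l, u

l, u = tridiag_lu(6)
print([round(v, 4) for v in l[1:]])   # l_j = -(j-1)/j
print([round(v, 4) for v in u])       # u_j = (j+1)/j
det = 1.0
for v in u:
    det *= v
print(round(det, 6))  # n + 1 = 7
```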
Solution to Exercise 3.4

Column $j$ of the matrix $T$ is the vector $\mathbf{c}^{(j)}$, with the nonzero elements
$$c^{(j)}_{j-1} = -1, \qquad c^{(j)}_j = 2, \qquad c^{(j)}_{j+1} = -1,$$
with obvious modifications when $j = 1$ or $j = n$. Hence the scalar product of $\mathbf{c}^{(j)}$ and $\mathbf{v}^{(k)}$ is
$$M_{kj} = -v^{(k)}_{j-1} + 2v^{(k)}_j - v^{(k)}_{j+1}.$$
Since $v^{(k)}_i$ is a linear function of $i$, for $i = 1, \ldots, k$ and for $i = k, \ldots, n$, it is clear that $M_{kj} = 0$ when $j \ne k$. This is also true in the special cases $j = 1$ and $j = n$, since we can extend the definition of $v^{(k)}_j$ by the same linear functions, which gives $v^{(k)}_0 = 0$ and $v^{(k)}_{n+1} = 0$. The two linear functions give the same result when $j = k$. To find $M_{kk}$ a simple calculation gives
$$M_{kk} = -(k-1)(n+1-k) + 2k(n+1-k) - k(n+1-k-1) = n+1.$$
Hence the scalar product of the vector $\mathbf{v}^{(k)}/(n+1)$ with each column of $T$ is column $k$ of the unit matrix, so that $\mathbf{v}^{(k)}/(n+1)$ is column $k$ of the matrix $T^{-1}$. This shows that the elements of the inverse are
$$T^{-1}_{ik} = \begin{cases} \dfrac{i\,(n+1-k)}{n+1}, & i \le k, \\[2mm] \dfrac{k\,(n+1-i)}{n+1}, & i \ge k. \end{cases}$$
This matrix is clearly symmetric. All the elements of the matrix are positive, and the sum of the elements in row $i$ is
$$\sum_{k=1}^{i} \frac{k\,(n+1-i)}{n+1} + \sum_{k=i+1}^{n} \frac{i\,(n+1-k)}{n+1}
= \frac{n+1-i}{n+1}\cdot\frac{i(i+1)}{2} + \frac{i}{n+1}\cdot\frac{(n-i)(n-i+1)}{2}
= \frac{i\,(n+1-i)}{2}.$$
The $\infty$-norm of $T^{-1}$ is the maximum row sum; its value depends on whether $n$ is odd or even. If $n$ is odd, the maximum is attained when $i = (n+1)/2$, and is $(n+1)^2/8$. If $n$ is even, the maximum is attained when $i = n/2$ and also when $i = n/2 + 1$; the maximum is $n(n+2)/8$.

Evidently the $\infty$-norm of $T$ is 4, so finally the condition number is
$$\kappa_\infty(T) = \begin{cases} \tfrac12\,(n+1)^2, & n \text{ odd}, \\ \tfrac12\,n(n+2), & n \text{ even}. \end{cases}$$
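The formula for $T^{-1}$ and the resulting condition number can be verified mechanically; a sketch for $n = 5$ (the size is my choice):

```python
def t_inverse_entry(i, k, n):
    # (T^{-1})_{ik} = i(n+1-k)/(n+1) for i <= k, symmetric otherwise
    if i > k:
        i, k = k, i
    return i * (n + 1 - k) / (n + 1)

def t_entry(i, j):
    # T = tridiag(-1, 2, -1)
    return 2.0 if i == j else (-1.0 if abs(i - j) == 1 else 0.0)

n = 5
# check that T * T^{-1} = I
for i in range(1, n + 1):
    for j in range(1, n + 1):
        s = sum(t_entry(i, k) * t_inverse_entry(k, j, n)
                for k in range(1, n + 1))
        assert abs(s - (1.0 if i == j else 0.0)) < 1e-12

# condition number in the infinity norm: ||T||_inf = 4
rowsum = lambda i: sum(t_inverse_entry(i, k, n) for k in range(1, n + 1))
norm_inv = max(rowsum(i) for i in range(1, n + 1))
print(4.0 * norm_inv)  # (n+1)^2 / 2 = 18.0 for odd n = 5
```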
Solution to Exercise 3.5

With the notation of Theorem 3.4 we can now write
$$|u_j| \ge |b_j| - |a_j|\,\frac{|c_{j-1}|}{|u_{j-1}|} \ge |b_j| - |a_j| \ge |c_j| > 0.$$
Note that the inequality "$> |c_j|$" in the proof of Theorem 3.4 now becomes "$\ge |c_j|$", but we can still deduce that $u_j \ne 0$, since $c_j$ is not zero.

The matrix
$$T = \begin{pmatrix} 1 & 1 & 0 \\ 1 & 1 & 0 \\ 0 & 1 & 1 \end{pmatrix}$$
satisfies the given conditions, except that $c_2 = 0$. It is obviously singular, since the first two rows are identical. This also means that the leading $2\times2$ principal submatrix is singular, and so the LU factorisation does not exist.
Solution to Exercise 3.6

The proof is by induction on the order of the matrix; we suppose that the given result is true for every $n \times n$ tridiagonal matrix, and consider an $(n+1) \times (n+1)$ matrix $T$ in the standard notation
$$T = \begin{pmatrix} b_1 & c_1 \\ a_2 & b_2 & c_2 \\ & a_3 & b_3 & c_3 \\ & & \ddots & \ddots & \ddots \end{pmatrix}.$$
We interchange the first two rows, if $|a_2| > |b_1|$, to get the largest element in the first column onto the diagonal. The result will be
$$PT = \begin{pmatrix} b_1^* & c_1^* & d_1^* \\ a_2^* & b_2^* & c_2^* \\ & a_3 & b_3 & c_3 \\ & & \ddots & \ddots & \ddots \end{pmatrix},$$
where the permutation matrix $P$ may be the unit matrix, and then $d_1^* = 0$. We can now write $PT = L_1U_1$, where
$$L_1 = \begin{pmatrix} 1 \\ l_2 & 1 \\ & & 1 \\ & & & \ddots \end{pmatrix}, \qquad
U_1 = \begin{pmatrix} b_1^* & c_1^* & d_1^* \\ & u_2 & v_2 \\ & a_3 & b_3 & c_3 \\ & & \ddots & \ddots & \ddots \end{pmatrix},$$
and
$$l_2 = a_2^*/b_1^*, \qquad u_2 = b_2^* - l_2c_1^*, \qquad v_2 = c_2^* - l_2d_1^*.$$
In the special case where $b_1^* = a_2^* = 0$, so that the first column of $T$ is entirely zero and the matrix is singular, we simply take $P = I$, $L_1 = I$ and $U_1 = T$.

In the matrix $U_1$ the $n \times n$ submatrix consisting of the last $n$ rows and columns has the same form as $T$, and so by the induction hypothesis we can write
$$P_n\begin{pmatrix} u_2 & v_2 \\ a_3 & b_3 & c_3 \\ & \ddots & \ddots & \ddots \end{pmatrix} = L_nU_n.$$
This shows that
$$\begin{pmatrix} 1 & \mathbf{0}^T \\ \mathbf{0} & P_n \end{pmatrix} PT
= \begin{pmatrix} 1 & \mathbf{0}^T \\ P_n\mathbf{l}_2 & L_n \end{pmatrix}
\begin{pmatrix} b_1^* & (c_1^*,\; d_1^*,\; 0,\, \ldots) \\ \mathbf{0} & U_n \end{pmatrix},$$
where $\mathbf{l}_2 = (l_2, 0, 0, \ldots, 0)^T$. Thus the required result is true for all $(n+1) \times (n+1)$ matrices. Since it is obviously true for any $1 \times 1$ matrix, the induction is complete.
Solution to Exercise 3.7

This is an informal induction proof. The elements of $L$ and $U$ are determined by (2.19). Suppose that we are calculating the elements $l_{ij}$ for a fixed $i$, with $j = 1, \ldots, i-1$, and we are about to calculate $l_{ir}$, where $r < i - p$. Then $b_{ir} = 0$, since $B$ is Band$(p, q)$. Now
$$l_{ir} = \frac{1}{u_{rr}}\left\{ b_{ir} - \sum_{k=1}^{r-1} l_{ik}u_{kr} \right\}.$$
Thus if $l_{i1} = l_{i2} = \cdots = l_{i,r-1} = 0$ it follows that $l_{ir} = 0$ also. Evidently this holds when $r = 1$, so an induction argument shows that $l_{ik} = 0$ for all $k < i - p$. Hence $L$ is Band$(p, 0)$.

The argument for the matrix $U$ is similar. Calculating the elements $u_{ir}$ in order, suppose we are about to calculate $u_{ir}$, where $r > i + q$. Then $b_{ir} = 0$, and if
$$u_{1r} = u_{2r} = \cdots = u_{i-1,r} = 0$$
it follows that $u_{ir} = 0$, since
$$u_{ir} = b_{ir} - \sum_{k=1}^{i-1} l_{ik}u_{kr}.$$
Moreover it is clear that $u_{1r} = 0$ when $r > 1 + q$, so the same induction argument shows that $u_{kr} = 0$ for all $k < r - q$. Hence $U$ is Band$(0, q)$.
Solution to Exercise 3.8

With the usual notation, the equations determining the elements of $L$ and $U$ include

$$
a_{11}\, l_{21} = a_{21} \qquad\text{and}\qquad u_{24} = a_{24} - l_{21} a_{14} .
$$

Now we are given that $a_{24} = 0$, but in general $l_{21}$ and $a_{14}$ are not zero. Hence in general $u_{24}$ is not zero. In the same way

$$
a_{11}\, l_{41} = a_{41} \qquad\text{and}\qquad l_{42} = \frac{1}{u_{22}} \left( a_{42} - l_{41} u_{12} \right) .
$$

Here $a_{42} = 0$, but there is no reason why $l_{41}$ or $u_{12}$ should be zero. Although a subdiagonal of $A$ is entirely zero, there is no reason why any of the elements of the same subdiagonal of $L$ should be zero.
Solution to Exercise 4.1

Suppose that $g$ is a contraction in the $\infty$-norm, as in (4.5). Observe that

$$
\|g(x) - g(y)\|_p \le n^{1/p}\, \|g(x) - g(y)\|_\infty \le L\, n^{1/p}\, \|x - y\|_\infty \le L\, n^{1/p}\, \|x - y\|_p .
$$

Therefore, if $L < n^{-1/p}$, then $g$ is a contraction in the $p$-norm.
Solution to Exercise 4.2

Substituting for $x_1$ we find

$$
(7x_2 + 25)^2 + x_2^2 - 25 = 0,
$$

or

$$
50 x_2^2 + 350 x_2 + 600 = 0,
$$

with solutions $x_2 = -3$ and $-4$. The corresponding values of $x_1$ are $4$ and $-3$, giving the two solutions $(4, -3)^T$ and $(-3, -4)^T$. The Jacobian matrix of $f$ is

$$
\begin{pmatrix} 2x_1 & 2x_2 \\ 1 & -7 \end{pmatrix} .
$$

At the first solution $(4, -3)^T$ this is

$$
\begin{pmatrix} 8 & -6 \\ 1 & -7 \end{pmatrix},
$$

and at $(-3, -4)^T$ it is

$$
\begin{pmatrix} -6 & -8 \\ 1 & -7 \end{pmatrix} .
$$

Clearly the condition is not satisfied in either case. If we change the sign of $f_2$ the solutions remain the same, but the signs of the elements in the second row of the Jacobian matrix are changed. The condition is now satisfied at the first solution $(4, -3)^T$, as the matrix becomes

$$
\begin{pmatrix} 8 & -6 \\ -1 & 7 \end{pmatrix} .
$$

If we also replace $f_1$ by $-f_1$ the solutions are still the same, and the Jacobian matrix becomes

$$
\begin{pmatrix} -2x_1 & -2x_2 \\ -1 & 7 \end{pmatrix} .
$$

At the second solution $(-3, -4)^T$ this is

$$
\begin{pmatrix} 6 & 8 \\ -1 & 7 \end{pmatrix} .
$$

The relaxation parameter $\lambda = 1/7$ will give convergence in both cases.
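As a quick sanity check — a sketch assuming the system recovered from the algebra above, $f_1 = x_1^2 + x_2^2 - 25$ and $f_2 = x_1 - 7x_2 - 25$ — the two roots can be verified directly:

```python
def f(x1, x2):
    # assumed system: a circle of radius 5 and the line x1 - 7*x2 = 25
    return (x1 ** 2 + x2 ** 2 - 25.0, x1 - 7.0 * x2 - 25.0)

for sol in [(4.0, -3.0), (-3.0, -4.0)]:
    r1, r2 = f(*sol)
    assert r1 == 0.0 and r2 == 0.0   # both residuals vanish exactly
```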
Solution to Exercise 4.3

Clearly $\varphi(0) = g(u)$ and $\varphi(1) = g(v)$. The function $\varphi$ is differentiable, and

$$
\varphi'(t) = g'((1-t)u + tv)\,(v - u) .
$$

Hence by applying the Mean Value Theorem to $\varphi$, there is a real number $\theta$ in $(0,1)$ such that

$$
\varphi(1) - \varphi(0) = \varphi'(\theta),
$$

which gives

$$
g(v) - g(u) = (v - u)\, g'((1-\theta)u + \theta v) .
$$

The result now follows by defining $\eta = (1-\theta)u + \theta v$, which lies on the line between $u$ and $v$, and is therefore in $\Omega$, since $\Omega$ is convex.

Since $|g'(\xi)| < 1$ there is a value of $\delta$ such that

$$
|g'(z)| \le k = \tfrac12\, [\,1 + |g'(\xi)|\,] < 1
$$

for all $z$ such that $|z - \xi| \le \delta$. Convergence of the iteration follows in the usual way, since

$$
|z_{n+1} - \xi| = |g(z_n) - g(\xi)| = |z_n - \xi|\; |g'(\eta_n)| \le k\, |z_n - \xi| .
$$

Hence $|z_n - \xi| \le k^n |z_0 - \xi|$ provided that $|z_0 - \xi| \le \delta$, and the sequence $(z_n)$ converges to $\xi$.
Solution to Exercise 4.4

The iteration is in component form

$$
x_1^{(n+1)} = g_1(x^{(n)}) = u(x_1^{(n)}, x_2^{(n)}), \qquad
x_2^{(n+1)} = g_2(x^{(n)}) = v(x_1^{(n)}, x_2^{(n)}),
$$

and the complex iteration $z_{n+1} = g(z_n)$ gives

$$
x_1^{(n+1)} + \imath\, x_2^{(n+1)} = u(x_1^{(n)}, x_2^{(n)}) + \imath\, v(x_1^{(n)}, x_2^{(n)}),
$$

with $\imath = \sqrt{-1}$; these are obviously identical. The condition $|g'(\zeta)| < 1$ gives

$$
u_x^2 + v_x^2 < 1,
$$

evaluated at the fixed point $\zeta = \alpha + \imath \beta$. In its real form a sufficient condition for convergence is $\|J(\alpha, \beta)\|_\infty < 1$, and the Jacobian matrix is

$$
J = \begin{pmatrix} u_x & u_y \\ v_x & v_y \end{pmatrix} .
$$

Using the Cauchy–Riemann relations $u_x = v_y$, $u_y = -v_x$, this gives the sufficient condition

$$
|u_x| + |v_x| < 1 .
$$

This is a more restrictive condition than that obtained from the complex form, as it leads to

$$
u_x^2 + v_x^2 < 1 - 2\,|u_x v_x| .
$$
Solution to Exercise 4.5

Clearly $g_1(1,1) = 1$ and $g_2(1,1) = 1$, so that $(1,1)$ is a fixed point. The Jacobian matrix is

$$
J = \frac{2}{3} \begin{pmatrix} x_1 & -x_2 \\ x_2 & x_1 \end{pmatrix} .
$$

Evidently at the fixed point $\|J\|_\infty = \tfrac43 > 1$, so the sufficient condition for convergence is not satisfied. However, as in Exercise 4 this iteration corresponds to complex iteration with the function

$$
g(z) = \tfrac13 (z^2 + 3 + \imath),
$$

as can be seen by writing down the real and imaginary parts. Then $g'(z) = \tfrac23 z$, and at the fixed point

$$
|g'(1 + \imath)| = \tfrac23\, |1 + \imath| = \tfrac23 \sqrt{2} < 1,
$$

so the iteration converges.
Solution to Exercise 4.6

In component form the function $g(x) = x - K(x)\, f(x)$ is

$$
g_i(x) = x_i - \sum_{r=1}^{k} K_{ir}(x)\, f_r(x) .
$$

Differentiating with respect to $x_j$ gives the $(i,j)$ element of the Jacobian matrix of $g$ as

$$
\frac{\partial g_i}{\partial x_j}
= \delta_{ij} - \frac{\partial}{\partial x_j} \sum_{r=1}^{k} K_{ir}(x)\, f_r(x)
= \delta_{ij} - \sum_{r=1}^{k} \frac{\partial K_{ir}}{\partial x_j}\, f_r - \sum_{r=1}^{k} K_{ir}\, \frac{\partial f_r}{\partial x_j}
= \delta_{ij} - \sum_{r=1}^{k} \frac{\partial K_{ir}}{\partial x_j}\, f_r - \sum_{r=1}^{k} K_{ir}\, J_{rj},
$$

all evaluated at the point $x$. When we evaluate this at the point $\xi$, we know that $f(\xi) = 0$, so that $f_r = 0$ for each value of $r$. Moreover, $K$ is the inverse of the Jacobian matrix, $J$, of $f$, so that

$$
\sum_{r=1}^{k} K_{ir}\, J_{rj} = \delta_{ij} .
$$

Hence all the elements of the Jacobian matrix of $g$ vanish at the point $\xi$.
Solution to Exercise 4.7

Evidently at a solution $x_1 = x_2$, and $2x_1^2 = 2$. Hence there are two solutions, $(1,1)^T$ and $(-1,-1)^T$. The Jacobian matrix of $f$ is

$$
J = \begin{pmatrix} 2x_1 & 2x_2 \\ 1 & -1 \end{pmatrix},
$$

and its inverse is

$$
J^{-1} = \frac{1}{2(x_1 + x_2)} \begin{pmatrix} 1 & 2x_2 \\ 1 & -2x_1 \end{pmatrix} .
$$

Hence Newton's method gives

$$
\begin{pmatrix} x_1^{(1)} \\ x_2^{(1)} \end{pmatrix}
= \begin{pmatrix} x_1^{(0)} \\ x_2^{(0)} \end{pmatrix}
- \frac{1}{2(x_1^{(0)} + x_2^{(0)})}
\begin{pmatrix} 1 & 2x_2^{(0)} \\ 1 & -2x_1^{(0)} \end{pmatrix}
\begin{pmatrix} (x_1^{(0)})^2 + (x_2^{(0)})^2 - 2 \\ x_1^{(0)} - x_2^{(0)} \end{pmatrix}
= \frac{1}{2(x_1^{(0)} + x_2^{(0)})}
\begin{pmatrix} (x_1^{(0)})^2 + (x_2^{(0)})^2 + 2 \\ (x_1^{(0)})^2 + (x_2^{(0)})^2 + 2 \end{pmatrix} .
$$

Thus $x_2^{(n)} = x_1^{(n)}$ for all positive values of $n$, and if we write $x^{(n)} = x_1^{(n)} = x_2^{(n)}$ the iteration becomes

$$
x^{(n+1)} = \frac{(x^{(n)})^2 + 1}{2\, x^{(n)}} .
$$

Evidently if $x^{(1)} > 0$ then $x^{(n)} \ge 1$ for all positive $n$. Moreover

$$
x^{(n+1)} - 1 = \frac{x^{(n)} - 1}{2\, x^{(n)}}\, (x^{(n)} - 1),
$$

and $0 < x - 1 < 2x$ when $x > 1$. Hence $x^{(n)} \to 1$ as $n \to \infty$. It then follows from

$$
x^{(n+1)} - 1 = \frac{1}{2\, x^{(n)}}\, (x^{(n)} - 1)^2
$$

that

$$
\frac{x^{(n+1)} - 1}{(x^{(n)} - 1)^2} \to \frac12 \quad\text{as } n \to \infty,
$$

so that convergence is quadratic. A trivial modification of the argument shows that if $x_1^{(0)} + x_2^{(0)} < 0$ then the iteration converges quadratically to the solution $(-1,-1)^T$.
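The behaviour derived above is easy to reproduce. The sketch below assumes the system consistent with the Jacobian used above, $f_1 = x_1^2 + x_2^2 - 2$ and $f_2 = x_1 - x_2$; it applies the explicit Newton step, checks that the two components coincide after one step, and that the error ratio approaches $\tfrac12$ (quadratic convergence).

```python
def newton_step(x1, x2):
    # J = [[2*x1, 2*x2], [1, -1]];  J^{-1} = [[1, 2*x2], [1, -2*x1]] / (2*(x1 + x2))
    f1 = x1 * x1 + x2 * x2 - 2.0
    f2 = x1 - x2
    d = 2.0 * (x1 + x2)
    return x1 - (f1 + 2.0 * x2 * f2) / d, x2 - (f1 - 2.0 * x1 * f2) / d

x1, x2 = 2.0, 0.5
x1, x2 = newton_step(x1, x2)
assert abs(x1 - x2) < 1e-14            # components agree after one step

errs = []
for _ in range(5):
    errs.append(abs(x1 - 1.0))
    x1, x2 = newton_step(x1, x2)

assert abs(x1 - 1.0) < 1e-12           # converged to (1, 1)
# quadratic convergence: e_{n+1} / e_n^2 is close to 1/(2 x) ~ 1/2 near the root
assert 0.4 < errs[2] / errs[1] ** 2 < 0.6
```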
Solution to Exercise 4.8

We say that $(x^{(k)})$ converges to $\xi$ linearly if (4.19) holds with $q = 1$, $0 < \mu < 1$ and $\varepsilon_k = \|x^{(k)} - \xi\|_\infty$. The rate of convergence is defined as $\log_{10}(1/\mu) = -\log_{10}\mu$.

The Jacobian matrix of $f$ is

$$
J = \begin{pmatrix} 2x_1 & 2x_2 \\ 1 & 1 \end{pmatrix},
$$

whose inverse is

$$
J^{-1} = \frac{1}{2(x_1 - x_2)} \begin{pmatrix} 1 & -2x_2 \\ -1 & 2x_1 \end{pmatrix},
$$

provided that $x_1 \ne x_2$. Newton's method then gives, provided that $x_1^{(n)} \ne x_2^{(n)}$,

$$
\begin{pmatrix} x_1^{(n+1)} \\ x_2^{(n+1)} \end{pmatrix}
= \begin{pmatrix} x_1^{(n)} \\ x_2^{(n)} \end{pmatrix}
- \frac{1}{2(x_1^{(n)} - x_2^{(n)})}
\begin{pmatrix} 1 & -2x_2^{(n)} \\ -1 & 2x_1^{(n)} \end{pmatrix}
\begin{pmatrix} (x_1^{(n)})^2 + (x_2^{(n)})^2 - 2 \\ x_1^{(n)} + x_2^{(n)} - 2 \end{pmatrix},
$$

and a simple calculation then shows that $x_1^{(n+1)} + x_2^{(n+1)} = 2$. When $x_1^{(0)} = 1 + \alpha$, $x_2^{(0)} = 1 - \alpha$ we find that

$$
x_1^{(1)} = 1 + \tfrac12\alpha, \qquad x_2^{(1)} = 1 - \tfrac12\alpha .
$$

[These are exact values, not approximations for small $\alpha$.] We have shown that for any $x^{(0)}$, provided $x_1^{(0)} \ne x_2^{(0)}$, the result of the first iteration satisfies $x_1^{(1)} + x_2^{(1)} = 2$. Hence we can write $x_1^{(1)} = 1 + \beta$, $x_2^{(1)} = 1 - \beta$. Then

$$
x_1^{(n)} = 1 + \beta/2^{\,n-1}, \qquad x_2^{(n)} = 1 - \beta/2^{\,n-1}, \qquad n \ge 1;
$$

this shows that $(x^{(n)})$ converges linearly to $(1,1)^T$, with $\mu = \tfrac12$, i.e. with rate of convergence $\log_{10} 2$. Convergence is not quadratic, because the Jacobian matrix of $f$ is singular at the limit point $(1,1)^T$.
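A short sketch (assuming the system consistent with the Jacobian above, $f_1 = x_1^2 + x_2^2 - 2$ and $f_2 = x_1 + x_2 - 2$) shows the two facts just derived: the first Newton step lands on the line $x_1 + x_2 = 2$, and thereafter the error is exactly halved each iteration.

```python
def newton_step(x1, x2):
    # J = [[2*x1, 2*x2], [1, 1]];  J^{-1} = [[1, -2*x2], [-1, 2*x1]] / (2*(x1 - x2))
    f1 = x1 * x1 + x2 * x2 - 2.0
    f2 = x1 + x2 - 2.0
    d = 2.0 * (x1 - x2)
    return x1 - (f1 - 2.0 * x2 * f2) / d, x2 - (-f1 + 2.0 * x1 * f2) / d

x1, x2 = 3.0, 0.5
x1, x2 = newton_step(x1, x2)
assert abs(x1 + x2 - 2.0) < 1e-12      # first step reaches the line x1 + x2 = 2

for _ in range(8):
    e_before = abs(x1 - 1.0)
    x1, x2 = newton_step(x1, x2)
    # linear convergence with ratio exactly 1/2
    assert abs(abs(x1 - 1.0) / e_before - 0.5) < 1e-9
```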
Solution to Exercise 4.9

With the given value of $z$ we get

$$
z + 2 = (2k + \tfrac12)\pi\,\imath + \ln[(2k + \tfrac12)\pi] + \varepsilon + 2
= e^z = (2k + \tfrac12)\pi\,\imath\, e^{\varepsilon} .
$$

Hence

$$
\varepsilon = \ln\!\left( 1 - \imath\, \frac{\ln[(2k + \tfrac12)\pi] + \varepsilon + 2}{(2k + \tfrac12)\pi} \right) .
$$

Now $|\ln(1 + \imath t)| < |t|$ for all nonzero real values of $t$, so that

$$
|\varepsilon| < \frac{\ln[(2k + \tfrac12)\pi] + |\varepsilon| + 2}{(2k + \tfrac12)\pi},
$$

which gives

$$
|\varepsilon| < \frac{\ln[(2k + \tfrac12)\pi] + 2}{(2k + \tfrac12)\pi - 1} .
$$

This is enough to give the required result, since for sufficiently large $k$ the other terms in the numerator and denominator become negligible, and $|\varepsilon|$ is bounded in the limit by $\ln k/(2\pi k)$. To give a formal analysis, we obviously find in the denominator that

$$
(2k + \tfrac12)\pi - 1 > 2k\pi .
$$

For the numerator, we note that $1 < \ln k$ for $k \ge 3$, and so

$$
\ln[(2k + \tfrac12)\pi] + 2
< \ln[(2k + \tfrac12)\pi] + 2\ln k
= \ln k + \ln[(2 + \tfrac{1}{2k})\pi] + 2\ln k
\le 3\ln k + \ln(3\pi)
< [\,3 + \ln(3\pi)\,]\ln k .
$$

Finally

$$
|\varepsilon| < C\, \frac{\ln k}{k}, \qquad\text{where}\qquad C = \frac{3 + \ln(3\pi)}{2\pi} .
$$
Solution to Exercise 5.1

(i) Lemma 5.2. For the Householder matrix

$$
H = I - \frac{2}{v^T v}\, v v^T,
$$

which is clearly symmetric, we have

$$
H H^T = H^2 = (I - \alpha\, v v^T)(I - \alpha\, v v^T)
= I - 2\alpha\, v v^T + \alpha^2\, v (v^T v) v^T
= I + \alpha(\alpha\, v^T v - 2)\, v v^T = I,
$$

where we have written $\alpha = 2/(v^T v)$. Hence $H$ is orthogonal.

(ii) Lemma 5.3. Since $H_k$ is a Householder matrix,

$$
H_k = I - \frac{2}{v_k^T v_k}\, v_k v_k^T
$$

for some vector $v_k$. Now define the vector $v$, with $n$ elements, partitioned as

$$
v = \begin{pmatrix} \mathbf{0} \\ v_k \end{pmatrix},
$$

so that the first $n - k$ elements of $v$ are zero. Then $v^T v = v_k^T v_k$, and

$$
I - \frac{2}{v^T v}\, v v^T
= \begin{pmatrix} I_{n-k} & 0 \\ 0 & I_k \end{pmatrix}
- \frac{2}{v^T v} \begin{pmatrix} \mathbf{0} \\ v_k \end{pmatrix} (\mathbf{0}^T \;\; v_k^T)
= \begin{pmatrix} I_{n-k} & 0 \\ 0 & H_k \end{pmatrix} = A,
$$

so that $A$ is a Householder matrix.
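Lemma 5.2 is simple to check numerically. The sketch below builds $H = I - 2vv^T/(v^Tv)$ for an arbitrary test vector (the vector is an illustrative assumption) and asserts symmetry and $H^2 = I$, which together give orthogonality.

```python
def householder(v):
    """H = I - 2 v v^T / (v^T v), as a dense list-of-lists matrix."""
    n = len(v)
    vv = sum(x * x for x in v)
    return [[(1.0 if i == j else 0.0) - 2.0 * v[i] * v[j] / vv
             for j in range(n)] for i in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

v = [1.0, -2.0, 0.5, 3.0]
H = householder(v)
HH = matmul(H, H)

assert all(abs(H[i][j] - H[j][i]) < 1e-12 for i in range(4) for j in range(4))
assert all(abs(HH[i][j] - (1.0 if i == j else 0.0)) < 1e-12
           for i in range(4) for j in range(4))
```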
Solution to Exercise 5.2

Since this is a $4 \times 4$ matrix, two Householder transformations are required. In the first transformation the Householder vector is $v = (0, 4, 2, 2)^T$. The result of the first transformation is the matrix

$$
\begin{pmatrix}
2 & -3 & 0 & 0 \\
-3 & 1 & 3 & 4 \\
0 & 3 & 3 & 9 \\
0 & 4 & 9 & 2
\end{pmatrix} .
$$

In the second transformation the Householder vector is $v = (0, 0, 8, 4)^T$, and the result is

$$
\begin{pmatrix}
2 & -3 & 0 & 0 \\
-3 & 1 & -5 & 0 \\
0 & -5 & 11 & 3 \\
0 & 0 & 3 & -6
\end{pmatrix} .
$$

This is the required tridiagonal form. Note that the first row and column, and the leading $2 \times 2$ principal minor, are unaffected by the second transformation.
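The second transformation can be replayed directly — a sketch using the matrices as reconstructed above (the signs depend on the sign convention chosen for the Householder vectors): forming $H T_1 H$ with $v = (0,0,8,4)^T$ must reproduce the tridiagonal result.

```python
def householder(v):
    n = len(v)
    vv = sum(x * x for x in v)
    return [[(1.0 if i == j else 0.0) - 2.0 * v[i] * v[j] / vv
             for j in range(n)] for i in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

H = householder([0.0, 0.0, 8.0, 4.0])
T1 = [[2.0, -3.0, 0.0, 0.0],
      [-3.0, 1.0, 3.0, 4.0],
      [0.0, 3.0, 3.0, 9.0],
      [0.0, 4.0, 9.0, 2.0]]
T2 = matmul(H, matmul(T1, H))   # similarity transformation H T1 H

expected = [[2.0, -3.0, 0.0, 0.0],
            [-3.0, 1.0, -5.0, 0.0],
            [0.0, -5.0, 11.0, 3.0],
            [0.0, 0.0, 3.0, -6.0]]
assert all(abs(T2[i][j] - expected[i][j]) < 1e-12
           for i in range(4) for j in range(4))
```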
Solution to Exercise 5.3

Taking $\lambda = 0$, the corresponding Sturm sequence is

$$
\begin{aligned}
p_0 &= 1 \\
p_1 &= 3 - \lambda = 3 \\
p_2 &= (2 - \lambda)(3) - 1 = 5 \\
p_3 &= (4 - \lambda)(5) - (4)(3) = 8 \\
p_4 &= (1 - \lambda)(8) - \alpha^2 (5) = 8 - 5\alpha^2 .
\end{aligned}
$$

If $5\alpha^2 < 8$ there are 4 agreements in sign, while if $5\alpha^2 > 8$ there are 3 agreements in sign. Taking $\lambda = 1$, the corresponding Sturm sequence is

$$
\begin{aligned}
p_0 &= 1 \\
p_1 &= 3 - \lambda = 2 \\
p_2 &= (2 - \lambda)(2) - 1 = 1 \\
p_3 &= (4 - \lambda)(1) - (4)(2) = -5 \\
p_4 &= (1 - \lambda)(-5) - \alpha^2 (1) = -\alpha^2 .
\end{aligned}
$$

In this case there are always three agreements in sign. Hence if $5\alpha^2 > 8$ the number of agreements in sign is the same, so no eigenvalues lie in $(0, 1)$, while if $5\alpha^2 < 8$ there is one more agreement in sign for $\lambda = 0$ than for $\lambda = 1$, so there is exactly one eigenvalue in $(0, 1)$. If $5\alpha^2 = 8$ then there is an eigenvalue at $0$.
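The sign-agreement count can be mechanised. This sketch assumes the tridiagonal matrix implied by the recurrence above (diagonal entries $3, 2, 4, 1$; off-diagonal entries $1, 2, \alpha$ — an assumption recovered from the arithmetic), and reproduces the counts for a value with $5\alpha^2 < 8$.

```python
def sturm_sequence(lam, alpha):
    # recurrence used above: diagonal (3, 2, 4, 1), off-diagonals (1, 2, alpha)
    p = [1.0, 3.0 - lam]
    p.append((2.0 - lam) * p[1] - 1.0 * p[0])
    p.append((4.0 - lam) * p[2] - 4.0 * p[1])
    p.append((1.0 - lam) * p[3] - alpha * alpha * p[2])
    return p

def sign_agreements(p):
    return sum(1 for a, b in zip(p, p[1:]) if (a > 0) == (b > 0))

alpha = 1.0                      # then 5*alpha^2 = 5 < 8
assert sturm_sequence(0.0, alpha) == [1.0, 3.0, 5.0, 8.0, 3.0]
assert sign_agreements(sturm_sequence(0.0, alpha)) == 4
assert sign_agreements(sturm_sequence(1.0, alpha)) == 3
# one more agreement at 0 than at 1: exactly one eigenvalue in (0, 1)
```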
Solution to Exercise 5.4

If $H x = c\, y$, where $c$ is a scalar, then

$$
x^T H^T H x = c^2\, y^T y;
$$

but since $H$ is a Householder matrix, $H^T H = I$, and so

$$
c^2 = \frac{x^T x}{y^T y} .
$$

Writing

$$
H = I - \frac{2}{v^T v}\, v v^T,
$$

this shows that

$$
x - \frac{2\,(v^T x)}{v^T v}\, v = c\, y .
$$

Hence $v = \gamma\,(x - c y)$, where $c$ is known, and $\gamma$ is some scalar. Now $H$ is unaffected by multiplying $v$ by an arbitrary nonzero scalar, and the required Householder matrix is $H = H(v)$, where $v = x - c y$ and

$$
c = \pm\sqrt{\frac{x^T x}{y^T y}} .
$$

There are clearly two possible such Householder matrices.
Solution to Exercise 5.5

From

$$
(D + \varepsilon A)(e + \varepsilon u) = (\lambda + \varepsilon \mu)(e + \varepsilon u)
$$

we find, by equating powers of $\varepsilon^0$ and $\varepsilon^1$, that

$$
D e = \lambda e \qquad\text{and}\qquad D u + A e = \lambda u + \mu e .
$$

Since $D$ is diagonal it follows from the first of these that $\lambda = d_{jj}$ for some $j$, and that $e$ is the unit vector whose only nonzero element is $e_j = 1$. From the second equation,

$$
d_{kk} u_k + a_{kj} = d_{jj} u_k + \mu\, \delta_{kj}, \qquad k = 1, \ldots, n .
$$

Taking $k = j$ we see at once that $\mu = a_{jj}$. For $k \ne j$ it also follows that

$$
u_k = \frac{a_{kj}}{d_{jj} - d_{kk}} .
$$

Since the eigenvector $e + \varepsilon u$ has to be normalised, we have

$$
(e^T + \varepsilon u^T)(e + \varepsilon u) = 1 .
$$

Comparing coefficients of powers of $\varepsilon$, we first see that $e^T e = 1$, which already holds, and then $e^T u = 0$. This means that $u_j = 0$.
Solution to Exercise 5.6

Multiplying out the partitioned equation and equating powers of $\varepsilon$ gives, for the leading term,

$$
d_1 e = \lambda e, \qquad D_{n-k} f = \lambda f .
$$

From the first of these we see that $\lambda = d_1$; since none of the elements of the diagonal matrix $D_{n-k}$ are equal to $d_1$, the second equation shows that $f = 0$. Now equating coefficients of $\varepsilon^1$ we get

$$
A_1 e + d_1 u + A_2 f = \lambda u + \mu e, \qquad
A_2^T e + A_3 f + D_{n-k} v = \lambda v + \mu f .
$$

Making use of the facts that $\lambda = d_1$ and $f = 0$, the first of these equations shows that $\mu$ is an eigenvalue of $A_1$ with eigenvector $e$, and the second shows that

$$
A_2^T e + D_{n-k} v = \lambda v .
$$

This equation determines $v$, since the matrix $D_{n-k} - \lambda I$ is not singular. Now equating coefficients of $\varepsilon^2$ we get

$$
d_1 x + A_1 u + A_2 v = \lambda x + \mu u + \nu e, \qquad
A_2^T u + D_{n-k} y + A_3 v = \lambda y + \mu v + \nu f .
$$

The first of these reduces to

$$
(A_1 - \mu I)\, u = \nu e - A_2 v .
$$

Determination of $u$ from this equation is not quite straightforward, since the matrix $A_1 - \mu I$ is singular. However, if we know the eigenvalues $\lambda_j$ and corresponding eigenvectors $w_j$ of $A_1$ we can write

$$
u = \sum_j \alpha_j w_j .
$$

If we number these eigenvalues so that $\mu = \lambda_1$ and $e = w_1$, we see that

$$
\sum_j (\lambda_j - \lambda_1)\, \alpha_j w_j = \nu w_1 - A_2 v .
$$

Multiplying by $w_i^T$ we obtain

$$
(\lambda_i - \lambda_1)\, \alpha_i = -\, w_i^T A_2 v, \qquad i \ne 1 .
$$

This determines $\alpha_i$ for $i \ne 1$, since we have assumed that the eigenvalues $\lambda_j$ are distinct. The equation does not determine the coefficient $\alpha_1$; as in Exercise 5 this is given by the requirement that the eigenvector of $A$ is normalised. Writing down this condition and comparing coefficients of $\varepsilon$ we see that $e^T u = 0$, thus showing that $\alpha_1 = 0$.
Solution to Exercise 5.7

Since $A - \lambda I = QR$ and $Q$ is orthogonal, $Q^T(A - \lambda I) = R$. Hence

$$
B = RQ + \lambda I = Q^T (A - \lambda I) Q + \lambda I = Q^T A Q - \lambda\, Q^T Q + \lambda I = Q^T A Q .
$$

Hence $B$ is an orthogonal transformation of $A$. Also

$$
B^T = (Q^T A Q)^T = Q^T A^T Q = Q^T A Q = B
$$

since $A$ is symmetric. Hence $B^T = B$, so that $B$ is symmetric.

We build up the matrix $Q$ as a product of plane rotation matrices $R_{p,p+1}(\varphi_p)$ as in (5.34), with $p = 1, \ldots, n-1$. The first of these rotations, with $p = 1$, replaces rows 1 and 2 of the matrix $A - \lambda I$ with linear combinations of these two rows, such that the new $(2,1)$ element is zero. Since the matrix is tridiagonal the new element $(1,3)$ will in general be nonzero. The second rotation, with $p = 2$, carries out a similar operation on rows 2 and 3; in the result the element $(3,2)$ becomes zero, and $(2,4)$ may be nonzero. We now form the matrix $RQ$; this involves taking the matrix $R$ and applying the same sequence of plane rotations on the right, but with each rotation transposed. Since $R$ is upper triangular, and the first rotation operates on columns 1 and 2, a single nonzero element appears below the diagonal at position $(2,1)$. In the same way the second rotation operates on columns 2 and 3, and introduces a nonzero element at $(3,2)$. Hence finally the only nonzero elements in the matrix $B$ below the diagonal are in positions $(i+1, i)$, for $i = 1, \ldots, n-1$. But we have just shown that $B$ is symmetric; hence $B$ is also tridiagonal.
Solution to Exercise 5.8

For this matrix the shift $\mu = a_{nn} = 0$; we see that

$$
A = QR = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} .
$$

Evidently $Q$ is orthogonal and $R$ is upper triangular. It is then easy to verify that $B = RQ = A$. Hence in successive iterations of the QR algorithm all the matrices $A^{(k)}$ are the same as $A$, so they do not converge to a diagonal matrix.
Solution to Exercise 5.9

We use the shift $\lambda = a_{n,n} = 10$, so that

$$
A - \lambda I = \begin{pmatrix} 3 & 4 \\ 4 & 0 \end{pmatrix} .
$$

The QR decomposition of this matrix is

$$
A - \lambda I = QR
= \begin{pmatrix} \tfrac35 & -\tfrac45 \\[2pt] \tfrac45 & \tfrac35 \end{pmatrix}
\begin{pmatrix} 5 & \tfrac{12}{5} \\[2pt] 0 & -\tfrac{16}{5} \end{pmatrix} .
$$

Then the new matrix is

$$
RQ + \lambda I
= \begin{pmatrix} 5 & \tfrac{12}{5} \\[2pt] 0 & -\tfrac{16}{5} \end{pmatrix}
\begin{pmatrix} \tfrac35 & -\tfrac45 \\[2pt] \tfrac45 & \tfrac35 \end{pmatrix}
+ 10\, I
= \begin{pmatrix} \tfrac{373}{25} & -\tfrac{64}{25} \\[2pt] -\tfrac{64}{25} & \tfrac{202}{25} \end{pmatrix} .
$$
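The arithmetic of this shifted QR step is small enough to verify directly — a sketch with the factors written out as decimals:

```python
Q = [[0.6, -0.8], [0.8, 0.6]]          # [[3/5, -4/5], [4/5, 3/5]]
R = [[5.0, 2.4], [0.0, -3.2]]          # [[5, 12/5], [0, -16/5]]
shift = 10.0

def mat2(A, B):
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

QR = mat2(Q, R)                         # must reproduce A - 10 I
assert all(abs(QR[i][j] - [[3.0, 4.0], [4.0, 0.0]][i][j]) < 1e-12
           for i in range(2) for j in range(2))

B = mat2(R, Q)
B = [[B[i][j] + (shift if i == j else 0.0) for j in range(2)] for i in range(2)]
expected = [[373.0 / 25.0, -64.0 / 25.0], [-64.0 / 25.0, 202.0 / 25.0]]
assert all(abs(B[i][j] - expected[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```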
Solution to Exercise 6.1

The inequalities follow immediately from Theorem 6.2, with $n = 1$, $x_0 = -1$ and $x_1 = 1$. As an example we need a function whose second derivative is constant, so that $f''(\xi) = M_2$ for any value of $\xi$; thus we choose $f(x) = x^2$, and $M_2 = 2$. Since $f(1) = f(-1) = 1$ the interpolation polynomial of degree 1 is $p_1(x) = 1$, and at $x = 0$

$$
|f(0) - p_1(0)| = |0 - 1| = 1 = M_2/2,
$$

as required.
Solution to Exercise 6.2

(i) The interpolation polynomial is

$$
p_1(x) = \frac{a - x}{a}\, 0 + \frac{x}{a}\, a^3 = a^2 x .
$$

The difference is

$$
f(x) - p_1(x) = x^3 - a^2 x = x(x - a)(x + a) .
$$

Theorem 6.2 gives

$$
f(x) - p_1(x) = \frac{x(x - a)}{2!}\, f''(\xi) = 3\xi\, x(x - a) .
$$

Comparing these we see that $\xi = (x + a)/3$.

(ii) The same calculation for $f(x) = (2x - a)^4$ gives

$$
p_1(x) = \frac{a - x}{a}\, (-a)^4 + \frac{x}{a}\, (2a - a)^4 = a^4,
$$

and the difference is

$$
f(x) - p_1(x) = (2x - a)^4 - a^4 = 8x(x - a)(2x^2 - 2ax + a^2) .
$$

Comparing with

$$
\frac{x(x - a)}{2!}\, f''(\xi) = \frac{x(x - a)}{2}\, 48 (2\xi - a)^2 = 24\, x(x - a)(2\xi - a)^2,
$$

we see that there are two values of $\xi$, given by

$$
(2\xi - a)^2 = \frac{2x^2 - 2ax + a^2}{3}, \qquad
\xi = \frac{a}{2} \pm \left[ \frac{2x^2 - 2ax + a^2}{12} \right]^{1/2} .
$$
Solution to Exercise 6.3

We know that

$$
q(x_i) = y_i, \quad i = 0, \ldots, n, \qquad\text{and}\qquad r(x_i) = y_i, \quad i = 1, \ldots, n+1 .
$$

Now suppose that $1 \le j \le n$; then $q(x_j) = r(x_j) = y_j$. Hence

$$
p(x_j) = \frac{(x_j - x_0)\, r(x_j) - (x_j - x_{n+1})\, q(x_j)}{x_{n+1} - x_0}
= \frac{(x_j - x_0)\, y_j - (x_j - x_{n+1})\, y_j}{x_{n+1} - x_0} = y_j .
$$

Clearly

$$
p(x_0) = \frac{(x_0 - x_0)\, r(x_0) - (x_0 - x_{n+1})\, q(x_0)}{x_{n+1} - x_0} = q(x_0) = y_0,
$$

and in the same way $p(x_{n+1}) = y_{n+1}$. Hence $p(x_j) = y_j$, $j = 0, \ldots, n+1$, showing that $p(x)$ is the Lagrange interpolation polynomial for the points $\{(x_i, y_i) : i = 0, \ldots, n+1\}$.
Solution to Exercise 6.4

We find that

$$
\pi_{n+1}(1 - 1/n) = (2 - 1/n)(2 - 3/n) \cdots (1/n)(-1/n)
= \frac{(2n-1)(2n-3) \cdots (1)\,(-1)}{n^{n+1}}
= -\frac{1}{n^{n+1}}\, \frac{(2n)!}{2 \cdot 4 \cdots (2n)}
= -\frac{(2n)!}{n^{n+1}\, 2^n\, n!} .
$$

Substitution of Stirling's formula (cf. also Chapter 2, Section 2.1) gives

$$
|\pi_{n+1}(1 - 1/n)| \sim \frac{1}{n^{n+1}\, 2^n} \cdot
\frac{\sqrt{4\pi n}\,(2n)^{2n} e^{-2n}}{\sqrt{2\pi n}\; n^n e^{-n}}
= \frac{2^{1/2}\, 2^{2n}\, n^{2n}\, e^{-2n}}{n^{n+1}\, 2^n\, n^n\, e^{-n}}
= 2^{\,n + 1/2}\, \frac{e^{-n}}{n}, \qquad n \to \infty,
$$

as required.
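The asymptotic form can be checked numerically in logarithms (to avoid overflow), using the log-gamma function; the difference of logarithms should shrink like $O(1/n)$:

```python
from math import lgamma, log

def log_exact(n):
    # log of (2n)! / (2^n n! n^(n+1)), the magnitude computed above
    return lgamma(2 * n + 1) - n * log(2.0) - lgamma(n + 1) - (n + 1) * log(n)

def log_asymptotic(n):
    # log of 2^(n + 1/2) e^(-n) / n
    return (n + 0.5) * log(2.0) - n - log(n)

assert abs(log_exact(100) - log_asymptotic(100)) < 1e-3
assert abs(log_exact(1000) - log_asymptotic(1000)) < 1e-4
```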
Solution to Exercise 6.5

Suppose that there exists $q_{2n+1} \in P_{2n+1}$, different from $p_{2n+1}$ and also having these properties. Define $r = p_{2n+1} - q_{2n+1}$; then $r(x_i) = 0$, $i = 0, \ldots, n$. By Rolle's Theorem there exist $n$ points $\xi_j$, $j = 1, \ldots, n$, one between each consecutive pair of $x_i$, at which $r'(\xi_j) = 0$. Applying Rolle's Theorem again, there exist $n-1$ points $\eta_k$, $k = 1, \ldots, n-1$, at which $r''(\eta_k) = 0$. But we also know that $r''(x_i) = 0$, $i = 0, \ldots, n$, making a total of $2n$ points at which $r''$ vanishes. However, the proof breaks down at this point, since we cannot be sure that each $\eta_k$ is distinct from all the points $x_i$; all we know is that $\eta_k$ lies between $x_{k-1}$ and $x_{k+1}$, and it is possible that $\eta_k = x_k$.

Suppose that $p_5(x) = c_0 + c_1 x + c_2 x^2 + c_3 x^3 + c_4 x^4 + c_5 x^5$. The given conditions lead to the equations

$$
\begin{aligned}
c_0 - c_1 + c_2 - c_3 + c_4 - c_5 &= 1 \\
c_0 &= 0 \\
c_0 + c_1 + c_2 + c_3 + c_4 + c_5 &= 1 \\
2c_2 - 6c_3 + 12c_4 - 20c_5 &= 0 \\
2c_2 &= 0 \\
2c_2 + 6c_3 + 12c_4 + 20c_5 &= 0 .
\end{aligned}
$$

Adding equations 1 and 3 gives $2c_0 + 2c_2 + 2c_4 = 2$; using equations 2 and 5, this shows that $c_4 = 1$. Adding equations 4 and 6 gives

$$
4c_2 + 24c_4 = 0,
$$

which is clearly inconsistent. Hence this system of equations has no solution.

With the change $p_5(-1) = -1$, adding equations 1 and 3 now gives

$$
2c_0 + 2c_2 + 2c_4 = 0,
$$

so the same argument now leads to $c_0 = c_2 = c_4 = 0$. The equations now only require that

$$
c_1 + c_3 + c_5 = 1, \qquad 6c_3 + 20c_5 = 0 .
$$

Hence $c_5$ can be chosen arbitrarily, and then $c_3 = -\tfrac{10}{3} c_5$ and $c_1 = 1 + \tfrac{7}{3} c_5$. A general form of the polynomial satisfying the given conditions is

$$
p_5(x) = x + \lambda\, (7x - 10x^3 + 3x^5),
$$

where $\lambda$ is arbitrary.
Solution to Exercise 6.6

The polynomial $l_0(x)$ must satisfy the conditions

$$
l_0(x_0) = 1; \qquad l_0(x_i) = l_0'(x_i) = 0, \quad i = 1, \ldots, n .
$$

This shows that

$$
l_0(x) = \prod_{i=1}^{n} \frac{(x - x_i)^2}{(x_0 - x_i)^2} .
$$

The polynomial $h_i(x)$ must satisfy

$$
h_i(x_j) = \delta_{ij}, \quad j = 0, \ldots, n; \qquad h_i'(x_j) = 0, \quad j = 1, \ldots, n .
$$

It must therefore contain the factors $(x - x_j)^2$, $j = 1, \ldots, n$, $j \ne i$, and $(x - x_0)$. There must be one other linear factor, and so

$$
h_i(x) = \left( 1 + \alpha (x - x_i) \right) \frac{x - x_0}{x_i - x_0}\, [L_i(x)]^2,
$$

where

$$
L_i(x) = \prod_{j=1, j \ne i}^{n} \frac{x - x_j}{x_i - x_j},
$$

and the value of $\alpha$ is determined by the condition $h_i'(x_i) = 0$. It is easy to see that

$$
h_i'(x_i) = \alpha + \frac{1}{x_i - x_0} + 2 L_i'(x_i),
$$

and so

$$
\alpha = -\frac{1}{x_i - x_0} - 2 L_i'(x_i) .
$$

In the same way the polynomial $k_i(x)$ must satisfy

$$
k_i(x_j) = 0, \quad j = 0, \ldots, n, \qquad\text{and}\qquad k_i'(x_j) = \delta_{ij}, \quad j = 1, \ldots, n .
$$

It is easy to see that

$$
k_i(x) = (x - x_i)\, \frac{x - x_0}{x_i - x_0}\, [L_i(x)]^2 .
$$

Each of these polynomials is of degree $2n$.

As in the proof of Theorem 6.4, we consider the function

$$
\psi(t) = f(t) - p_{2n}(t) - \frac{f(x) - p_{2n}(x)}{\pi(x)}\, \pi(t),
$$

where now

$$
\pi(x) = (x - x_0) \prod_{i=1}^{n} (x - x_i)^2 .
$$

If $x$ is distinct from all of the $x_j$, then $\psi(t)$ vanishes at all the points $x_j$, and at $x$. Hence by Rolle's Theorem $\psi'(t)$ vanishes at $n+1$ points lying between them; $\psi'(t)$ also vanishes at $x_j$, $j = 1, \ldots, n$. This means that $\psi'(t)$ vanishes at $2n+1$ distinct points, so by repeated applications of Rolle's Theorem we see that $\psi^{(2n+1)}(\xi) = 0$ for some $\xi$ in $(a, b)$. This gives the required result.
Solution to Exercise 6.7

The expressions are a natural extension of those in Exercise 6. We now define

$$
L_i(x) = \prod_{j=1, j \ne i}^{n-1} \frac{x - x_j}{x_i - x_j},
$$

and we see that

$$
l_0(x) = L_0(x)^2\, \frac{x - x_n}{x_0 - x_n}, \qquad
l_n(x) = L_n(x)^2\, \frac{x - x_0}{x_n - x_0} .
$$

These are both polynomials of degree $2n-1$, and satisfy the conditions we require. The polynomials $h_i$ and $k_i$ are given by

$$
h_i(x) = \left[ 1 + \alpha (x - x_i) \right] \frac{(x - x_0)(x - x_n)}{(x_i - x_0)(x_i - x_n)}\, [L_i(x)]^2,
$$

where

$$
\alpha = -\frac{1}{x_i - x_0} - \frac{1}{x_i - x_n} - 2 L_i'(x_i),
$$

and

$$
k_i(x) = (x - x_i)\, \frac{(x - x_0)(x - x_n)}{(x_i - x_0)(x_i - x_n)}\, [L_i(x)]^2 .
$$

The error bound is obtained as in Exercise 6 by considering the function

$$
\psi(t) = f(t) - p_{2n-1}(t) - \frac{f(x) - p_{2n-1}(x)}{\pi(x)}\, \pi(t),
$$

where now

$$
\pi(x) = (x - x_0)(x - x_n) \prod_{i=1}^{n-1} (x - x_i)^2 .
$$
Solution to Exercise 6.8

The zeros of the polynomial are at $-2, -1, 0, 1, 2$ and $3$. These are symmetric about the centre point $\theta = \tfrac12$, and the polynomial has even degree; hence it is symmetric about the point $\theta = \tfrac12$. Since it does not vanish in the interval $(0, 1)$ the maximum there is attained at the midpoint, $\theta = \tfrac12$. The maximum value is

$$
|q(\tfrac12)| = \tfrac52 \cdot \tfrac32 \cdot \tfrac12 \cdot \tfrac12 \cdot \tfrac32 \cdot \tfrac52 = \tfrac{225}{64} .
$$

Write $x = x_k + \theta \pi/8$, where $k$ is an integer and $0 \le \theta < 1$. Then writing $u(x) = p(\theta)$, where $p(\theta)$ is the Lagrange interpolation polynomial defined by the interpolation points $(j, f(x_{k+j}))$, $j = -2, \ldots, 3$, the difference between $f(x)$ and $u(x)$ is just the error in the Lagrange polynomial, which is

$$
\sin x - u(x) = \frac{\prod_j (\theta - j)}{6!}\, g^{VI}(\eta),
$$

where $-2 < \eta < 3$, and $g^{VI}$ denotes the 6th derivative of the function $g(\theta) = f(x_k + \theta\pi/8)$ with respect to $\theta$. Hence

$$
g^{VI}(\eta) = (\pi/8)^6\, f^{VI}(\zeta),
$$

where $x_{k-2} < \zeta < x_{k+3}$. Now $f(x) = \sin x$, so

$$
|g^{VI}(\eta)| \le (\pi/8)^6 .
$$

Hence, using the above bound on the polynomial,

$$
|\sin x - u(x)| = \left| \frac{\prod_j (\theta - j)}{6!}\, g^{VI}(\eta) \right|
\le \frac{225}{64} \cdot \frac{1}{6!} \left( \frac{\pi}{8} \right)^6 \le 0.000018 .
$$
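The bound is easy to confirm experimentally. The sketch below interpolates $\sin$ at the six points $j\pi/8$, $j = -2, \ldots, 3$ (taking $x_k = 0$ for illustration), and checks that the worst error on $[0, \pi/8]$ stays below the bound $0.000018$:

```python
from math import sin, pi

h = pi / 8
nodes = [j * h for j in range(-2, 4)]   # six interpolation points, x_k = 0

def interp(x):
    """Lagrange interpolant of sin through the six nodes."""
    total = 0.0
    for i, xi in enumerate(nodes):
        li = 1.0
        for j, xj in enumerate(nodes):
            if j != i:
                li *= (x - xj) / (xi - xj)
        total += li * sin(xi)
    return total

worst = max(abs(sin(t * h) - interp(t * h))
            for t in [m / 1000.0 for m in range(1001)])
assert worst < 1.8e-5
```

The observed worst error is in fact somewhat smaller than the bound, since $|f^{VI}| = |\sin|$ is below 1 on the relevant range.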
Solution to Exercise 6.9

Write $\varphi_j(x) = \prod_{r=0, r \ne j}^{n-1} (x - x_r)$. In the interpolation polynomial the coefficient of $f(x_j)$, where $0 \le j \le n-1$, is

$$
\prod_{r=0, r \ne j}^{n-1} \frac{x - x_r}{x_j - x_r}\;
\prod_{r=0}^{n-1} \frac{x - x_r - \varepsilon}{x_j - x_r - \varepsilon}
= \frac{x - x_j - \varepsilon}{-\varepsilon}\;
\frac{\varphi_j(x)\, \varphi_j(x - \varepsilon)}{\varphi_j(x_j)\, \varphi_j(x_j - \varepsilon)} .
$$

In the same way the coefficient of $f(x_{n+j}) = f(x_j + \varepsilon)$ is

$$
\prod_{r=0}^{n-1} \frac{x - x_r}{x_j + \varepsilon - x_r}\;
\prod_{r=0, r \ne j}^{n-1} \frac{x - x_r - \varepsilon}{x_j - x_r}
= \frac{x - x_j}{\varepsilon}\;
\frac{\varphi_j(x)\, \varphi_j(x - \varepsilon)}{\varphi_j(x_j + \varepsilon)\, \varphi_j(x_j)} .
$$

Adding these two terms gives the required expression. In the limit as $\varepsilon \to 0$ it is clear that

$$
\varphi_j(x - \varepsilon) \to \varphi_j(x) .
$$

The required limit can therefore be written

$$
\frac{[\varphi_j(x)]^2}{\varphi_j(x_j)}\, \lim_{\varepsilon \to 0} \frac{G(\varepsilon)}{\varepsilon}
= \frac{[\varphi_j(x)]^2}{\varphi_j(x_j)}\, G'(0),
$$

where

$$
G(\varepsilon) = \frac{x - x_j}{\varphi_j(x_j + \varepsilon)}\, f(x_j + \varepsilon)
- \frac{x - x_j - \varepsilon}{\varphi_j(x_j - \varepsilon)}\, f(x_j),
$$

since it is clear that $G(0) = 0$. Now

$$
G'(0) = \frac{x - x_j}{\varphi_j(x_j)}\, f'(x_j)
- (x - x_j)\, \frac{\varphi_j'(x_j)}{[\varphi_j(x_j)]^2}\, f(x_j)
+ \frac{1}{\varphi_j(x_j)}\, f(x_j)
- (x - x_j)\, \frac{\varphi_j'(x_j)}{[\varphi_j(x_j)]^2}\, f(x_j) .
$$

In the notation of (6.14) we have

$$
L_j(x) = \frac{\varphi_j(x)}{\varphi_j(x_j)}, \qquad
L_j'(x) = \frac{\varphi_j'(x)}{\varphi_j(x_j)} .
$$

A little algebraic simplification then shows that as $\varepsilon \to 0$ the terms involving $f(x_j)$ and $f(x_{n+j})$ tend to

$$
H_j(x)\, f(x_j) + K_j(x)\, f'(x_j),
$$

which are the corresponding terms in the Hermite interpolation polynomial.
Solution to Exercise 6.10

For $f(x) = x^5$, since $f(0) = 0$ and $f'(0) = 0$, there is no contribution to the interpolation polynomial from these two terms. In the notation of (6.14) we find that

$$
L_1(x) = \frac{x}{a}, \qquad
K_1(x) = \frac{x^2}{a^2}\,(x - a), \qquad
H_1(x) = \frac{x^2}{a^2} \left( 1 - \frac{2}{a}(x - a) \right) .
$$

Hence

$$
p_3(x) = \frac{x^2}{a^2} \left( 1 - \frac{2}{a}(x - a) \right) a^5 + \frac{x^2}{a^2}\,(x - a)\, 5a^4
= 3a^2 x^3 - 2a^3 x^2 .
$$

The error is therefore

$$
x^5 - p_3(x) = x^5 - 3a^2 x^3 + 2a^3 x^2 = x^2 (x - a)^2 (x + 2a)
= \frac{x^2 (x - a)^2}{4!}\, 5!\,\xi,
$$

where $\xi = (x + 2a)/5$. This is the required result, since $f^{IV}(\xi) = 5!\,\xi$.
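Both the interpolation conditions and the error identity can be checked by direct evaluation — a sketch with an arbitrary choice of $a$:

```python
a = 2.0   # arbitrary nonzero choice of a

def p3(x):
    # the Hermite interpolant derived above: 3 a^2 x^3 - 2 a^3 x^2
    return 3.0 * a**2 * x**3 - 2.0 * a**3 * x**2

def dp3(x):
    return 9.0 * a**2 * x**2 - 4.0 * a**3 * x

# Hermite conditions for f(x) = x^5 at x = 0 and x = a
assert p3(0.0) == 0.0 and dp3(0.0) == 0.0
assert abs(p3(a) - a**5) < 1e-9
assert abs(dp3(a) - 5.0 * a**4) < 1e-9        # (x^5)' = 5 x^4 at x = a

# the error identity x^5 - p3(x) = x^2 (x - a)^2 (x + 2a)
for x in [-1.0, 0.3, 1.7, 2.5]:
    assert abs((x**5 - p3(x)) - x**2 * (x - a)**2 * (x + 2.0 * a)) < 1e-9
```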
Solution to Exercise 6.11

Since $f(z)$ is holomorphic in $D$, the only poles of $g(z)$ are the points where the denominator vanishes, namely the point $z = x$ and the points $z = x_j$, $j = 0, \ldots, n$. Since they are distinct, they are all simple poles. Hence the residue at $z = x$ is

$$
R_x = \lim_{z \to x} (z - x)\, g(z) = \lim_{z \to x} f(z) \prod_{k=0}^{n} \frac{x - x_k}{z - x_k} = f(x) .
$$

In the same way the residue at $z = x_j$ is

$$
R_j = \lim_{z \to x_j} (z - x_j)\, g(z)
= f(x_j)\, \frac{x - x_j}{x_j - x} \prod_{k=0, k \ne j}^{n} \frac{x - x_k}{x_j - x_k}
= -L_j(x)\, f(x_j) .
$$

Hence the sum of the residues is just $f(x) - p_n(x)$. The Cauchy Residue Theorem then gives the required result.

The contour $C$ consists of two semicircles of radius $K$, joined by two straight lines of length $b - a$. Hence the length of the contour is $2\pi K + 2(b - a)$, and

$$
\left| \oint_C g(z)\, dz \right| \le \int_C |g(z)|\, |dz| \le [\,2\pi K + 2(b - a)\,]\, G,
$$

provided that $|g(z)| \le G$ for all $z$ on $C$. Now for all $z$ on $C$ we know that $|z - x| \ge K$ and $|z - x_j| \ge K$, and since $x$ and all the $x_j$ lie in $[a, b]$ we also know that $|x - x_j| \le b - a$. Hence

$$
|g(z)| \le M\, \frac{(b - a)^{n+1}}{K^{n+2}}, \qquad z \in C .
$$

Using the result of the Residue Theorem, this shows that

$$
|f(x) - p_n(x)| < \frac{(b - a + K)\, M}{K} \left( \frac{b - a}{K} \right)^{n+1},
$$

as required. Since $K > b - a$, the right-hand side tends to zero as $n \to \infty$, which is the condition for $p_n$ to converge uniformly to $f$, for $x \in [a, b]$.

The function $f(z)$ has poles at $z = \pm\imath$. For the interval $[a, b]$ we require that the corresponding contour $C$ does not contain these poles; this requires that the distance from $\pm\imath$ to any point on $[a, b]$ must be greater than $b - a$. For the symmetric interval $[-a, a]$ the closest point to $\pm\imath$ is $x = 0$, with distance 1. Hence the length of the interval must be less than 1, and we require that $a < 1/2$. This is the condition required by the above proof, so it is a sufficient condition, but may not be necessary. Evidently it is not satisfied by $[-5, 5]$.
Solution to Exercise 6.12

Expanding $f(h)$ and $f(-h)$ about the point $x = 0$, we get

$$
f(h) = f(0) + h f'(0) + \tfrac12 h^2 f''(0) + \tfrac16 h^3 f'''(\eta_1), \qquad
f(-h) = f(0) - h f'(0) + \tfrac12 h^2 f''(0) - \tfrac16 h^3 f'''(\eta_2),
$$

where $\eta_1 \in (0, h)$ and $\eta_2 \in (-h, 0)$. Hence

$$
f(h) - f(-h) = 2h f'(0) + \tfrac16 h^3 \left( f'''(\eta_1) + f'''(\eta_2) \right) .
$$

The continuity of $f'''$ on $[-h, h]$ implies the existence of $\eta$ in $(-h, h)$ such that $\tfrac12 (f'''(\eta_1) + f'''(\eta_2)) = f'''(\eta)$. Therefore

$$
\frac{f(h) - f(-h)}{2h} - f'(0) = \tfrac16 h^2 f'''(\eta),
$$

and hence

$$
E(h) = \tfrac16 h^2 f'''(\eta) + \frac{\varepsilon_1 - \varepsilon_2}{2h} .
$$

Taking the absolute value of both sides, and bounding $|f'''(\eta)|$ by $M_3$ and $|\varepsilon_1|$ and $|\varepsilon_2|$ by $\varepsilon$, we have that

$$
|E(h)| \le \tfrac16 h^2 M_3 + \frac{\varepsilon}{h} .
$$

Let us consider the right-hand side of this inequality as a function of $h > 0$; clearly, it is a positive and convex function which tends to $+\infty$ as $h \to +0$ or $h \to +\infty$. Setting the derivative of $h \mapsto \tfrac16 h^2 M_3 + \varepsilon/h$ to zero yields

$$
\tfrac13 h M_3 - \frac{\varepsilon}{h^2} = 0,
$$

and therefore

$$
h = \left( \frac{3\varepsilon}{M_3} \right)^{1/3}
$$

gives the value of $h$ for which the bound on $|E(h)|$ is minimized.
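The minimisation step can be sanity-checked numerically: with the bound written as a function, the value $h^* = (3\varepsilon/M_3)^{1/3}$ should beat any nearby competitor.

```python
def error_bound(h, M3, eps):
    # the bound |E(h)| <= h^2 M3 / 6 + eps / h derived above
    return h * h * M3 / 6.0 + eps / h

M3, eps = 2.0, 1e-10                     # illustrative values
h_star = (3.0 * eps / M3) ** (1.0 / 3.0)

for h in [h_star / 3.0, h_star / 2.0, 2.0 * h_star, 3.0 * h_star]:
    assert error_bound(h_star, M3, eps) < error_bound(h, M3, eps)
```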
Solution to Exercise 7.1

The weight $w_k$ is defined by

$$
w_k = \int_a^b L_k(x)\, dx = \int_a^b \prod_{j=0, j \ne k}^{n} \frac{x - x_j}{x_k - x_j}\, dx .
$$

Now if $x_k = a + kh$ then $x_{n-k} = a + (n - k)h = b - kh = a + b - x_k$. Making the change of variable $y = a + b - x$ we get

$$
w_k = \int_a^b \prod_{j=0, j \ne k}^{n} \frac{(a + b - y) - x_j}{x_k - x_j}\, dy
= \int_a^b \prod_{j=0, j \ne k}^{n} \frac{x_{n-j} - y}{x_{n-j} - x_{n-k}}\, dy
= \int_a^b \prod_{i=0, i \ne n-k}^{n} \frac{y - x_i}{x_{n-k} - x_i}\, dy
= w_{n-k},
$$

where we have replaced $j$ by $n - i$ in the last product.
Solution to Exercise 7.2

The Newton–Cotes formula using $n+1$ points is exact for every polynomial of degree $n$. Now suppose that $n$ is even and consider the polynomial

$$
q_{n+1}(x) = [\,x - (a + b)/2\,]^{\,n+1} .
$$

This is a polynomial of odd degree, and is antisymmetric about the midpoint of the interval $[a, b]$. Hence

$$
\int_a^b q_{n+1}(x)\, dx = 0 .
$$

The Newton–Cotes approximation to this integral is

$$
\sum_{k=0}^{n} w_k\, q_{n+1}(x_k) .
$$

Now Exercise 1 has shown that $w_{n-k} = w_k$, and the antisymmetry of $q_{n+1}$ shows that $q_{n+1}(x_k) = -q_{n+1}(x_{n-k})$. Hence the Newton–Cotes formula also gives zero, so it is exact for the polynomial $q_{n+1}$. Any polynomial of degree $n+1$ may be written

$$
p_{n+1} = c\, q_{n+1} + p_n,
$$

where $c$ is a constant and $p_n$ is a polynomial of degree $n$. Hence the Newton–Cotes formula is also exact for $p_{n+1}$.
Solution to Exercise 7.3

It is clear that a quadrature formula is exact for all polynomials of degree $n$ if, and only if, it is exact for the particular functions $f(x) = x^r$, $r = 0, \ldots, n$. The given formula is exact for the function $x^r$ provided that

$$
\int_{-1}^{1} x^r\, dx = w_0\, (-\alpha)^r + w_1\, \alpha^r,
$$

that is, if

$$
\frac{1 - (-1)^{r+1}}{r + 1} = \alpha^r\, [\,(-1)^r w_0 + w_1\,] .
$$

To be exact for polynomials of degree 1 we must therefore satisfy this equation for $r = 0$ and $r = 1$. This gives

$$
2 = w_0 + w_1, \qquad 0 = \alpha\, [\,-w_0 + w_1\,] .
$$

Since $\alpha$ is not zero, these give at once $w_0 = w_1 = 1$. For the formula to be exact for all polynomials of degree 2 we also require $\alpha$ to satisfy the equation with $r = 2$, giving

$$
\tfrac23 = \alpha^2\, [\,w_0 + w_1\,] = 2\alpha^2 .
$$

As $\alpha > 0$ this requires the unique value $\alpha = 1/\sqrt3$. The same equation with $r = 3$ gives

$$
0 = \alpha^3\, [\,-w_0 + w_1\,],
$$

which is satisfied, so when $\alpha = 1/\sqrt3$ the formula is exact for $x^r$, $r = 0, 1, 2, 3$. It is therefore exact for every polynomial of degree 3. [Note that it is also exact for the function $x^r$ for every odd integer $r$.]
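This two-point rule (the two-point Gauss rule) is easily tested on monomials:

```python
from math import sqrt

alpha = 1.0 / sqrt(3.0)

def quad(f):
    return f(-alpha) + f(alpha)          # w0 = w1 = 1

def exact(r):
    # integral of x^r over [-1, 1]
    return (1.0 - (-1.0) ** (r + 1)) / (r + 1)

for r in range(4):                       # exact up to degree 3
    assert abs(quad(lambda x: x ** r) - exact(r)) < 1e-12

# degree 4 is not integrated exactly: the rule gives 2/9, the integral is 2/5
assert abs(quad(lambda x: x ** 4) - exact(4)) > 0.1
```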
Solution to Exercise 7.4

As in Exercise 3 the formula is exact for all polynomials of degree 3 if, and only if, it is exact for the functions $x^r$, $r = 0, 1, 2, 3$. This requires that

$$
\begin{aligned}
2 &= w_0 + w_1 + w_2 + w_3 \\
0 &= -w_0 - \tfrac13 w_1 + \tfrac13 w_2 + w_3 \\
\tfrac23 &= w_0 + \tfrac19 w_1 + \tfrac19 w_2 + w_3 \\
0 &= -w_0 - \tfrac1{27} w_1 + \tfrac1{27} w_2 + w_3 .
\end{aligned}
$$

From the 2nd and 4th of these equations it is easy to see that $w_0 = w_3$ and $w_1 = w_2$; this also follows from Exercise 1. The 1st and 3rd equations then become

$$
2 = 2w_0 + 2w_1, \qquad \tfrac23 = 2w_0 + \tfrac29 w_1 .
$$

This leads to $w_0 = w_3 = \tfrac14$, $w_1 = w_2 = \tfrac34$.
Solution to Exercise 7.5

By symmetry the integral is zero for every odd power of $x$, and both formulae also give the result zero. Hence the difference is zero for the functions $x$, $x^3$, $x^5$. Both formulae give the correct result for all polynomials of degree 3; hence we only have to consider the functions $x^4$ and $x^6$.

For $x^4$ the integral has the value $2/5$, and a simple calculation shows that the two formulae give the results (i) $2/3$ and (ii) $14/27$. Hence the errors are

$$
x^4: \quad \text{(i)}\ \tfrac{4}{15}, \qquad \text{(ii)}\ \tfrac{16}{135} .
$$

Similarly for $x^6$ the integral is $2/7$, and the two formulae give the results (i) $2/3$ and (ii) $122/243$, giving for the errors

$$
x^6: \quad \text{(i)}\ \tfrac{8}{21}, \qquad \text{(ii)}\ \tfrac{368}{1701} .
$$

Hence for the polynomial $p_5 = c_0 + c_1 x + c_2 x^2 + c_3 x^3 + c_4 x^4 + c_5 x^5$ the respective errors are (i) $\tfrac{4}{15} c_4$ and (ii) $\tfrac{16}{135} c_4$, so the second is more accurate, by a factor $\tfrac94$. For the polynomial

$$
p_6 = c_0 + c_1 x + c_2 x^2 + c_3 x^3 + c_4 x^4 + c_5 x^5 + c_6 x^6
$$

the respective errors are (i) $\tfrac{4}{15} c_4 + \tfrac{8}{21} c_6$ and (ii) $\tfrac{16}{135} c_4 + \tfrac{368}{1701} c_6$. Hence if we choose a polynomial for which $c_4 = -\tfrac{10}{7} c_6$ the error in Simpson's rule is zero, but the error in formula (ii) is not.
Solution to Exercise 7.6

Simple algebra shows that the errors in the approximation by the trapezium rule of the integrals $\int_0^1 x^4\, dx$ and $\int_0^1 x^5\, dx$ are $3/10$ and $1/3$ respectively. Similarly the errors in the approximation by Simpson's rule of the same integrals are $1/120$ and $1/48$ respectively. Hence the errors in the approximation of

$$
\int_0^1 (x^5 - C x^4)\, dx
$$

by the trapezium rule and Simpson's rule are

$$
\tfrac13 - \tfrac3{10} C \quad \text{(trapezium rule)}, \qquad
\tfrac1{48} - \tfrac1{120} C \quad \text{(Simpson's rule)} .
$$

The trapezium rule gives the correct value of this integral when $C = 10/9$. Moreover the trapezium rule gives a more accurate result than Simpson's rule for this integral when

$$
\left| \tfrac13 - \tfrac3{10} C \right| < \left| \tfrac1{48} - \tfrac1{120} C \right| .
$$

$\ldots > M$, as required. For the 2-norm we find that

$$
\|f\|_2^2 = \int_a^b \frac{4M^2}{1 + k^2 (x - c)^2}\, dx
= \frac{4M^2}{k} \left[ \tan^{-1} k(b - c) - \tan^{-1} k(a - c) \right]
\le \frac{4\pi M^2}{k},
$$

and we have the required property if $k = 8\pi M^2/\varepsilon^2$.

(ii b) Another function with the required properties is

$$
f(x) = \begin{cases}
\dfrac{2M}{\delta}\,(a + \delta - x), & a \le x \le a + \delta, \\[6pt]
0, & a + \delta \le x \le b .
\end{cases}
$$

Then, as before, $\|f\|_\infty = 2M > M$. Now

$$
\|f\|_2^2 = \frac{4M^2}{\delta^2} \int_a^{a+\delta} (a + \delta - x)^2\, dx = \frac{4M^2 \delta}{3},
$$

and we have the required property if $\delta = 3\varepsilon^2/(8M^2)$.
Solution of Exercise 8.2

Suppose that $\hat p_n$ is the minimax polynomial of degree $n$ for the even function $f$ on $[-a, a]$. Then there is a sequence of $n+2$ points $x_j$ at which

$$
f(x_j) - \hat p_n(x_j) = (-1)^j E, \qquad j = 1, \ldots, n+2,
$$

where $|E| = \|f - \hat p_n\|_\infty$. Now define the polynomial $q_n$ by $q_n(x) = \hat p_n(-x)$; then

$$
f(-x_j) - q_n(-x_j) = (-1)^j E, \qquad j = 1, \ldots, n+2,
$$

since $f$ is an even function. Also

$$
\|f - q_n\|_\infty = \max |f(x) - q_n(x)| = \max |f(-x) - \hat p_n(-x)| = \|f - \hat p_n\|_\infty,
$$

since the interval $[-a, a]$ is symmetric. Thus the points $(-x_j)$, $j = n+2, n+1, \ldots, 1$, form a sequence of critical points for the approximation $q_n$, and $q_n$ is therefore a minimax polynomial for $f$. But the minimax approximation is unique, so that $q_n \equiv \hat p_n$, and $\hat p_n$ is an even polynomial.

This means in particular that the minimax polynomial of degree $2n+1$ is an even polynomial, and the coefficient of $x^{2n+1}$ is zero. It is therefore identical to $\hat p_{2n}$, which is thus the minimax polynomial of degree $2n+1$ as well as being the minimax polynomial of degree $2n$. The minimax polynomial $\hat p_{2n}$ has a sequence of $2n+3$ critical points. The proof has also shown that the critical points are symmetrical in the interval $[-a, a]$; if $x_j$ is a critical point, so is $-x_j$.
Solution to Exercise 8.3

If $f$ is an odd function its minimax polynomial is an odd polynomial; the minimax polynomial approximation of degree $2n+1$ is therefore also the minimax approximation of degree $2n+2$. The proof follows the same lines as that of Exercise 2. With the same notation, define $q_n(x) = -\hat p_n(-x)$; then

$$
f(-x_j) - q_n(-x_j) = (-1)^{j+1} E, \qquad j = 1, \ldots, n+2,
$$

since $f$ is an odd function. Thus the points $(-x_j)$, $j = n+2, n+1, \ldots, 1$, form a sequence of critical points for the approximation $q_n$ to the function $f$, and $q_n$ is therefore a minimax polynomial for $f$. But the minimax approximation is unique, so that $q_n \equiv \hat p_n$, and $\hat p_n$ is an odd polynomial.

This means in particular that the minimax polynomial of even degree $2n+2$ is an odd polynomial, and the coefficient of $x^{2n+2}$ is zero. It is therefore identical to $\hat p_{2n+1}$, which is thus the minimax polynomial of degree $2n+2$ as well as of degree $2n+1$. The minimax polynomial $\hat p_{2n+1}$ has a sequence of $2n+4$ critical points. The proof has also shown that the critical points are symmetrical in the interval $[-a, a]$; if $x_j$ is a critical point, so is $-x_j$.
Solution to Exercise 8.4
(i) Since $g$ is an odd function and the interval is symmetric, the approximation will be an odd polynomial, so take $p_1(x) = c_1x$. There will be four alternating maxima and minima, symmetric in the interval, including the points $\pm 1$; the other two will be internal extrema, which we take to be $\pm\xi$. The condition that $\pm\xi$ are extrema of the error $\sin x - c_1x$ gives
$$c_1 - \cos\xi = 0,$$
and the condition that the magnitudes of the extrema are equal gives
$$c_1 - \sin 1 = E, \qquad \sin\xi - c_1\xi = E.$$
From these equations it is easy to deduce that
$$(1+\xi)\cos\xi - \sin 1 - \sin\xi = 0.$$
This equation for $\xi$ has exactly one root in $(0,1)$; it is easy to see that the left-hand side is monotonic decreasing in $(0,1)$. Having determined this value, we see at once that $c_1 = \cos\xi$ and $E = \cos\xi - \sin 1$. The numerical values are $\xi = 0.4937$, $c_1 = 0.8806$ and $E = 0.0391$, giving
$$p_1(x) = 0.8806\,x, \qquad \|g - p_1\|_\infty = 0.0391.$$

(ii) Since $h(x) = \cos(x^2)$ is an even function and the interval is symmetric, the approximation will be an even polynomial, so take $p_2(x) = c_0 + c_2x^2$. There will be five alternating maxima and minima, symmetric in the interval, including the points $-1$, $0$ and $1$; the other two will be internal extrema, which we take at $\pm\eta$. The condition that these points are extrema gives
$$2c_2\eta + 2\eta\sin(\eta^2) = 0,$$
and the condition that the magnitudes of the maxima and minima are all equal gives
$$c_0 + c_2 - \cos 1 = E, \qquad \cos(\eta^2) - c_0 - c_2\eta^2 = E, \qquad c_0 - 1 = E.$$
From these we obtain in succession
$$c_2 = \cos 1 - 1, \qquad \sin(\eta^2) = 1 - \cos 1, \qquad E = \tfrac12\big\{c_2(1-\eta^2) + \cos(\eta^2) - \cos 1\big\}, \qquad c_0 = 1 + E.$$
The numerical values are
$$p_2(x) = 1.0538 - 0.4597\,x^2, \qquad \|\cos(x^2) - p_2\|_\infty = E = 0.0538,$$
with $\eta = 0.6911$.
Solution to Exercise 8.5
Since $p_n$ is a polynomial, it is a continuous function. Suppose that $p_n(0) = A$. In the interval $[-\delta, \delta]$, where $\delta$ is an arbitrarily small positive number, there will be points $x$ at which $H(x) = 1$, and points at which $H(x) = -1$. At such points $|H(x) - p_n(x)|$ will be arbitrarily close to $|A - 1|$ and $|A + 1|$ respectively. Hence $\|H - p_n\|_\infty$ cannot be smaller than $|A - 1|$ or than $|A + 1|$. If $A$ is nonzero, one of these quantities will be greater than 1, and if $A = 0$ we shall still have $\|H - p_n\|_\infty \ge 1$.

By this argument the polynomial of degree zero of best approximation to $H$ on $[-1,1]$ must have $p_0(0) = 0$. It is therefore the zero polynomial, $p_0(x) \equiv 0$, and it is unique.

The polynomial of best approximation of degree 1 must also have $p_1(0) = 0$, so it must have the form $p_1(x) = c_1x$. Then the difference $e(x) = H(x) - c_1x$ is antisymmetric about $x = 0$, so that $e(-x) = -e(x)$. The difference attains its extreme values at $\pm 1$ and near 0: $|e(1)| = |1 - c_1|$, and $|e(x)|$ takes values arbitrarily close to 1 close to $x = 0$. Hence $\|H - p_1\|_\infty = 1$ provided that $|1 - c_1| \le 1$; the polynomial of best approximation of degree 1 is therefore not unique. Any polynomial $p_1(x) = c_1x$ with $0 \le c_1 \le 2$ is a polynomial of best approximation.
Solution to Exercise 8.6
It is evidently very easy to construct a function $f$ which is zero at each of the points $t_i$, $i = 1,\dots,k$, but is not identically zero; any polynomial which has these points among its zeros, for example, is such a function. For such a function $Z_k(f) = 0$, but $f$ is not identically zero. Hence $Z_k(\cdot)$ does not satisfy the first of the axioms for a norm. However, if $p_n$ is a polynomial of degree $n < k$ and $Z_k(p_n) = 0$, then the polynomial $p_n$ must vanish at the $k$ points $t_j$, so it must vanish identically. Thus $Z_k(\cdot)$ is a norm on the space of polynomials of degree $n$ if $k > n$; it is easy to see that the other axioms hold.

Suppose that the polynomial $q_1$ satisfies the given conditions
$$f(0) - q_1(0) = -\big[f(\tfrac12) - q_1(\tfrac12)\big] = f(1) - q_1(1).$$
If $\tilde q_1$ is a polynomial of degree 1 which gives a smaller value for $Z_3$, then $Z_3(f - \tilde q_1) < Z_3(f - q_1)$; hence $\tilde q_1(x) - q_1(x)$ must be positive at $t_1$ and $t_3$, and negative at $t_2$. This is impossible for a polynomial of degree 1. Hence $q_1$ is the polynomial which minimises $Z_3(f - p_1)$ over all polynomials $p_1$ of degree 1.

Writing $q_1(x) = \alpha + \beta x$, the conditions for $f(x) = e^x$ give
$$1 - \alpha = E, \qquad -\big(e^{1/2} - \alpha - \tfrac12\beta\big) = E, \qquad e - \alpha - \beta = E.$$
These easily lead to
$$\beta = e - 1, \qquad \alpha = \tfrac34 + \tfrac12\sqrt{e} - \tfrac14 e.$$
The error is
$$Z_3(f - q_1) = E = \tfrac14 - \tfrac12\sqrt{e} + \tfrac14 e = \tfrac14\big(\sqrt e - 1\big)^2.$$

A straightforward, but tedious, approach to the case $k = 4$ is to solve each of the four problems obtained by choosing three out of the four points. In each case, having constructed the polynomial approximation $p_1$, evaluate $Z_4(f - p_1)$; the required approximation is the one which gives the least value of this quantity.

Alternatively, choose three of the four points and construct the polynomial $p_1$ which minimises $Z_3(f - p_1)$. Evaluate $|f(t) - p_1(t)|$ at the point $t$ which was omitted. If this value does not exceed $Z_3(f - p_1)$, then $p_1$ is the required approximation. If it does exceed $Z_3(f - p_1)$, replace one of the three chosen points, one for which $f(t_j) - p_1(t_j)$ has the same sign as $f(t) - p_1(t)$, by $t$, and repeat the process. Continue this repeated choice of three of the four points until the required polynomial is reached.
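The equioscillation on the three points can be confirmed directly. A small Python sketch (assumes, as above, $f(x) = e^x$ and the derived values of $\alpha$, $\beta$, $E$):

```python
import math

e = math.e
beta = e - 1.0
alpha = 0.75 + 0.5 * math.sqrt(e) - 0.25 * e
E = 0.25 * (1.0 - 2.0 * math.sqrt(e) + e)    # = (sqrt(e) - 1)^2 / 4

q = lambda x: alpha + beta * x
# the error e^x - q(x) equioscillates on the three points 0, 1/2, 1
r = [math.exp(t) - q(t) for t in (0.0, 0.5, 1.0)]
print(r)    # approximately [E, -E, E]
```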
Solution to Exercise 8.7
Since $A$ is a fixed nonzero constant, the problem of finding the polynomial of best approximation to $f(x) \equiv 0$ by polynomials of the given form is equivalent to choosing the coefficients to minimise $\|0 - p_n\|_\infty$, which is the same as requiring to minimise
$$\Big\|x^n + \frac1A\,q_{n-1}\Big\|_\infty,$$
where $q_{n-1}$ collects the free lower-order terms. This is just the problem of approximating the function $x^n$ by a polynomial of lower degree; hence we should choose the coefficients so that
$$x^n + \frac1A\,q_{n-1}(x) = 2^{1-n}\,T_n(x),$$
where $T_n$ is the Chebyshev polynomial of degree $n$. The formal result is
$$p_n(x) = \frac{A}{2^{n-1}}\,T_n(x).$$
Solution to Exercise 8.8
Clearly,
$$f(x) = a_{n+1}\Big[x^{n+1} + \sum_{k=0}^{n}\frac{a_k}{a_{n+1}}\,x^k\Big].$$
We seek the minimax polynomial $p_n \in \mathcal P_n$ for $f$ on the interval $[-1,1]$ in the form
$$p_n(x) = \sum_{k=0}^n b_kx^k.$$
Thus
$$f(x) - p_n(x) = a_{n+1}\Big[x^{n+1} - \sum_{k=0}^n \frac{b_k - a_k}{a_{n+1}}\,x^k\Big].$$
According to Theorem 8.6, the $\|\cdot\|_\infty$ norm of the right-hand side is smallest when
$$x^{n+1} - \sum_{k=0}^n \frac{b_k - a_k}{a_{n+1}}\,x^k = 2^{-n}\,T_{n+1}(x).$$
Therefore, the required minimax polynomial for $f$ is
$$p_n(x) = f(x) - a_{n+1}\,2^{-n}\,T_{n+1}(x).$$
Solution to Exercise 8.9
The minimax polynomial $p_1$ must be such that $f(x) - p_1(x)$ has three alternating extrema in $[-2,1]$. Since $f(x) = |x|$ is convex, two of these extrema are at the ends $-2$ and $1$, and the other must clearly be at $0$. Graphically, the line $p_1$ must be parallel to the chord joining $(-2, f(-2))$ and $(1, f(1))$, which has slope $-\tfrac13$. Thus
$$p_1(x) = c_0 - \tfrac13x.$$
The alternating extrema are then
$$f(-2) - p_1(-2) = 2 - \big(c_0 + \tfrac23\big) = \tfrac43 - c_0,$$
$$f(0) - p_1(0) = 0 - c_0 = -c_0,$$
$$f(1) - p_1(1) = 1 - \big(c_0 - \tfrac13\big) = \tfrac43 - c_0.$$
These have the same magnitude with alternating signs if $\tfrac43 - c_0 = c_0$, so that $c_0 = \tfrac23$; the minimax polynomial is
$$p_1(x) = \tfrac23 - \tfrac13x, \qquad \|f - p_1\|_\infty = \tfrac23.$$
Solution to Exercise 8.10
A standard trigonometric relation gives
$$\cos n\theta\,\cos\theta = \tfrac12\big[\cos(n+1)\theta + \cos(n-1)\theta\big].$$
Writing $x = \cos\theta$ and $T_n(x) = \cos n\theta$ gives (a): $T_{n+1}(x) = 2x\,T_n(x) - T_{n-1}(x)$.

Suppose that (b) is true for polynomials of degree up to and including $n$. Then it follows from (a) that $T_{n+1}$ is a polynomial of degree $n+1$ with leading coefficient $2^n$; hence (b) follows by induction, since $T_0 = 1$ and $T_1(x) = x$.

Evidently $T_0(x) = 1$ is an even function and $T_1(x) = x$ is an odd function. Then (c) follows by induction, using (a).

The zeros of $T_n(x)$ are $x_j = \cos\theta_j$, where $\theta_j$ is a zero of $\cos n\theta$. Evidently
$$\cos n\theta_j = 0 \quad\text{for}\quad \theta_j = \frac{(2j+1)\pi}{2n}, \qquad j = 0,\dots,n-1,$$
giving (d); these values of $x_j$ are distinct, and lie in $(-1,1)$.

Part (e) is obvious, since $|T_n(x)| = |\cos n\theta| \le 1$, provided that $|x| \le 1$ to ensure that $\theta$ is real. Part (f) follows from the fact that $\cos n\theta = \pm1$ when $\theta = k\pi/n$, $k = 0,\dots,n$, so that $T_n$ attains the extreme values $(-1)^k$ at the points $x_k = \cos(k\pi/n)$.
Solution to Exercise 8.11
There are many possible examples, most easily when $f$ is an oscillatory function. For example, consider
$$f(x) = \sin\frac{3\pi(x-a)}{b-a}.$$
This function attains its maxima and minima, all of magnitude 1, at the points
$$x_j = a + \frac{j}{6}(b-a), \qquad j = 1, 3, 5.$$
Hence the polynomial $p_1$ of degree 1 of best approximation to $f$ on $[a,b]$ is $p_1(x) \equiv 0$, and none of the three critical points is equal to $a$ or $b$.
Solution to Exercise 8.12
By expanding the binomial we see that
$$(1 - x + tx)^n = \sum_{k=0}^n p_{nk}(x)\,t^k.$$
Substituting $t = 1$ shows at once that
$$1 = \sum_{k=0}^n p_{nk}(x).$$
Differentiate with respect to $t$, giving
$$nx(1 - x + tx)^{n-1} = \sum_{k=0}^n k\,p_{nk}(x)\,t^{k-1};$$
substituting $t = 1$ then gives
$$nx = \sum_{k=0}^n k\,p_{nk}(x).$$
Differentiate again with respect to $t$, giving
$$n(n-1)x^2(1 - x + tx)^{n-2} = \sum_{k=0}^n k(k-1)\,p_{nk}(x)\,t^{k-2};$$
substituting $t = 1$ then gives
$$n(n-1)x^2 = \sum_{k=0}^n k(k-1)\,p_{nk}(x).$$
Now
$$\sum_{k=0}^n (x - k/n)^2\,p_{nk}(x) = x^2\sum_{k=0}^n p_{nk}(x) - \frac{2x}{n}\sum_{k=0}^n k\,p_{nk}(x) + \frac1{n^2}\sum_{k=0}^n k^2\,p_{nk}(x)$$
$$= x^2 - \frac{2x}{n}\,nx + \frac1{n^2}\big[n(n-1)x^2 + nx\big] = \frac{x(1-x)}{n}.$$

From the definition of $p_n(x)$, and using the facts that each $p_{nk}(x) \ge 0$ when $0 \le x \le 1$ and $\sum_{k=0}^n p_{nk}(x) = 1$, we find
$$f(x) - p_n(x) = \sum_{k=0}^n \big[f(x) - f(k/n)\big]p_{nk}(x),$$
and
$$|f(x) - p_n(x)| \le \sum_{k=0}^n |f(x) - f(k/n)|\,p_{nk}(x).$$
Split the sum into $\sum_1$, over those $k$ for which $|x - k/n| < \delta$, and $\sum_2$, over those $k$ for which $|x - k/n| \ge \delta$. Then
$$\sum_1 |f(x) - f(k/n)|\,p_{nk}(x) < \sum_1 (\varepsilon/2)\,p_{nk}(x) \le (\varepsilon/2)\sum_{k=0}^n p_{nk}(x) = \varepsilon/2.$$
For the other sum,
$$\sum_2 |f(x) - f(k/n)|\,p_{nk}(x) \le 2M\sum_2 p_{nk}(x) \le \frac{2M}{\delta^2}\sum_2 (x - k/n)^2\,p_{nk}(x)$$
$$\le \frac{2M}{\delta^2}\sum_{k=0}^n (x - k/n)^2\,p_{nk}(x) = \frac{2M}{\delta^2}\,\frac{x(1-x)}{n} \le \frac{2M}{\delta^2}\,\frac1{4n} = \frac{M}{2\delta^2 n},$$
since all the terms in the sums are non-negative, and $0 \le x(1-x) \le 1/4$. Thus if we choose $N_0 = M/(\delta^2\varepsilon)$ we obtain, for $n \ge N_0$,
$$\sum_2 |f(x) - f(k/n)|\,p_{nk}(x) \le \varepsilon/2.$$
Finally, adding together these two sums we obtain
$$|f(x) - p_n(x)| < \varepsilon/2 + \varepsilon/2 = \varepsilon \qquad\text{when } n \ge N_0.$$
Since the value of $N_0$ does not depend on the particular value of $x$ chosen, this inequality holds for all $x$ in $[0,1]$.
Solution to Exercise 9.1
We start with $\varphi_0(x) = 1$, and use the Gram–Schmidt process with the weight function $w(x) = \ln(1/x)$ on $[0,1]$. It is useful to note that
$$\int_0^1 \ln(1/x)\,x^k\,dx = \frac1{(k+1)^2}.$$
Writing $\varphi_1(x) = x - a_0\varphi_0(x)$ we need
$$a_0 = \frac{\int_0^1 \ln(1/x)\,x\,\varphi_0(x)\,dx}{\int_0^1 \ln(1/x)\,[\varphi_0(x)]^2\,dx} = \frac{1/4}{1} = \frac14,$$
and so $\varphi_1(x) = x - \tfrac14$. Now writing $\varphi_2(x) = x^2 - b_0\varphi_0(x) - b_1\varphi_1(x)$ we find in the same way that
$$b_0 = \tfrac19, \qquad b_1 = \tfrac57.$$
Hence
$$\varphi_2(x) = x^2 - \tfrac19 - \tfrac57\big(x - \tfrac14\big) = x^2 - \tfrac57x + \tfrac{17}{252}.$$
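Because the moments $\int_0^1 \ln(1/x)\,x^k\,dx = 1/(k+1)^2$ are rational, the orthogonality of $\varphi_0$, $\varphi_1$, $\varphi_2$ can be verified exactly. A Python sketch using rational arithmetic:

```python
from fractions import Fraction as F

moment = lambda k: F(1, (k + 1) ** 2)   # integral of ln(1/x) * x^k over [0, 1]

def inner(p, q):
    # <p, q> with weight ln(1/x); p, q are coefficient lists [a0, a1, ...]
    return sum(a * b * moment(i + j)
               for i, a in enumerate(p) for j, b in enumerate(q))

phi0 = [F(1)]
phi1 = [F(-1, 4), F(1)]
phi2 = [F(17, 252), F(-5, 7), F(1)]
print(inner(phi0, phi1), inner(phi0, phi2), inner(phi1, phi2))   # 0 0 0
```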
Solution to Exercise 9.2
Since the polynomials $\varphi_j$ are orthogonal on the interval $[-1,1]$ we know that
$$\int_{-1}^1 \varphi_i(t)\,\varphi_j(t)\,dt = 0, \qquad i \ne j.$$
In this integral we make the change of variable
$$t = \frac{2x - a - b}{b - a}, \qquad x = \tfrac12\big[(b-a)t + a + b\big],$$
and it becomes
$$\int_a^b \varphi_i\Big(\frac{2x-a-b}{b-a}\Big)\,\varphi_j\Big(\frac{2x-a-b}{b-a}\Big)\,\frac{2}{b-a}\,dx = 0.$$
This shows that the new polynomials form an orthogonal system on the interval $[a,b]$. From the Legendre polynomials
$$\varphi_0(t) = 1, \qquad \varphi_1(t) = t, \qquad \varphi_2(t) = t^2 - \tfrac13,$$
we write $t = 2x - 1$ and get the orthogonal polynomials on $[0,1]$ in the form
$$\varphi_0(x) = 1, \qquad \varphi_1(x) = 2x - 1, \qquad \varphi_2(x) = (2x-1)^2 - \tfrac13 = 4x^2 - 4x + \tfrac23.$$
The different normalisation from the polynomials in Example 9.5 is unimportant.
Solution to Exercise 9.3
We are given that
$$\int_0^1 x^\alpha\,\varphi_i(x)\,\varphi_j(x)\,dx = 0, \qquad i \ne j.$$
Making the change of variable $x = t/b$ this becomes
$$\int_0^b t^\alpha\,\varphi_i(t/b)\,\varphi_j(t/b)\,\frac{dt}{b^{\alpha+1}} = 0, \qquad i \ne j.$$
This shows that the polynomials $\psi_j$ defined by $\psi_j(x) = \varphi_j(x/b)$, $j = 0, 1, \dots$, are orthogonal over the interval $[0,b]$ with the weight function $w(x) = x^\alpha$.
Solution to Exercise 9.4
Suppose that the required result is true for some value of $k$, with $0 \le k < n$; that is,
$$\frac{d^k}{dx^k}(1-x^2)^n = (1-x^2)^{n-k}\,q_k(x),$$
where $q_k$ is a polynomial of degree $k$. Differentiation then gives
$$\frac{d^{k+1}}{dx^{k+1}}(1-x^2)^n = \frac{d}{dx}\Big[(1-x^2)^{n-k}q_k(x)\Big]$$
$$= -2(n-k)x(1-x^2)^{n-k-1}q_k(x) + (1-x^2)^{n-k}q_k'(x)$$
$$= (1-x^2)^{n-k-1}\big[-2(n-k)x\,q_k(x) + (1-x^2)\,q_k'(x)\big]$$
$$= (1-x^2)^{n-k-1}\,q_{k+1}(x),$$
which is of the required form with $k$ replaced by $k+1$. Since the result is trivially true for $k = 0$, it is true for all $k$ such that $0 \le k < n$. This means that every derivative of order less than $n$ of the function $(1-x^2)^n$ has the factor $(1-x^2)$, and therefore vanishes at $x = \pm1$.

Write $D$ for $d/dx$, and suppose that $0 \le i < j$. Then by integrating by parts,
$$\int_{-1}^1 \varphi_i(x)\,\varphi_j(x)\,dx = \int_{-1}^1 D^i(1-x^2)^i\,D^j(1-x^2)^j\,dx = -\int_{-1}^1 D^{i+1}(1-x^2)^i\,D^{j-1}(1-x^2)^j\,dx,$$
since we have proved that the boundary terms, which involve derivatives of $(1-x^2)^j$ of order less than $j$, vanish at $\pm1$. The process of integration by parts can be repeated until we find that
$$\int_{-1}^1 \varphi_i(x)\,\varphi_j(x)\,dx = (-1)^i\int_{-1}^1 D^{2i}(1-x^2)^i\,D^{j-i}(1-x^2)^j\,dx.$$
This integral is zero, since $D^{2i}(1-x^2)^i$ is a constant, and the function $D^{j-i-1}(1-x^2)^j$ vanishes at $\pm1$ because $0 \le j-i-1 < j$. The polynomials $\varphi_i$ and $\varphi_j$ are therefore orthogonal, as required.

Taking $j = 0, 1, 2, 3$ we get
$$\varphi_0(x) = 1,$$
$$\varphi_1(x) = D(1-x^2) = -2x,$$
$$\varphi_2(x) = D^2(1-x^2)^2 = D(-4x + 4x^3) = -4 + 12x^2,$$
$$\varphi_3(x) = D^3(1-x^2)^3 = D^2(-6x + 12x^3 - 6x^5) = D(-6 + 36x^2 - 30x^4) = 72x - 120x^3.$$
These are the same, apart from constant scale factors, as the polynomials given in Example 9.6.
Solution to Exercise 9.5
This is very similar to Exercise 9.4, and we shall only give an outline solution. Suppose that
$$D^k\big(x^je^{-x}\big) = x^{j-k}\,q_k(x)\,e^{-x},$$
where $q_k$ is a polynomial of degree $k$. Then by differentiation
$$D^{k+1}\big(x^je^{-x}\big) = e^{-x}\big[(j-k)x^{j-k-1}q_k(x) + x^{j-k}q_k'(x) - x^{j-k}q_k(x)\big] = x^{j-k-1}\,q_{k+1}(x)\,e^{-x},$$
and the first result follows by induction. To prove that the polynomials $\varphi_j(x) = e^xD^j(x^je^{-x})$ form an orthogonal system, write
$$\int_0^\infty e^{-x}\,\varphi_i(x)\,\varphi_j(x)\,dx = \int_0^\infty D^i\big(x^ie^{-x}\big)\,\varphi_j(x)\,dx,$$
and the orthogonality follows by repeated integration by parts, as in Exercise 9.4. The first members of the sequence are
$$\varphi_0(x) = 1,$$
$$\varphi_1(x) = e^xD(xe^{-x}) = 1 - x,$$
$$\varphi_2(x) = e^xD\big[(2x - x^2)e^{-x}\big] = 2 - 4x + x^2,$$
$$\varphi_3(x) = e^xD^2\big[(3x^2 - x^3)e^{-x}\big] = e^xD\big[(6x - 6x^2 + x^3)e^{-x}\big] = 6 - 18x + 9x^2 - x^3.$$
Solution to Exercise 9.6
Evidently $\varphi_{j+1}(x) - C_jx\varphi_j(x)$ is in general a polynomial of degree $j+1$, but if $C_j$ is chosen to be equal to the ratio of the leading coefficients of $\varphi_{j+1}$ and $\varphi_j$ then the coefficient of the leading term is zero, and the result is a polynomial of degree $j$ only. Hence it can be expressed as a linear combination of the polynomials $\varphi_k$ in the form
$$\varphi_{j+1}(x) - C_jx\varphi_j(x) = \sum_{k=0}^j \alpha_{j,k}\,\varphi_k(x),$$
where
$$\alpha_{j,k} = \frac1{A_k}\int_a^b w(x)\big[\varphi_{j+1}(x) - C_jx\varphi_j(x)\big]\varphi_k(x)\,dx, \qquad A_k = \int_a^b w(x)\,[\varphi_k(x)]^2\,dx.$$
Now $k \le j$ in the sum, and so
$$\int_a^b w(x)\,\varphi_{j+1}(x)\,\varphi_k(x)\,dx = 0.$$
Moreover $\varphi_j$ is orthogonal to every polynomial of lower degree, and $x\varphi_k(x)$ has degree $k+1$, so
$$\int_a^b w(x)\,\varphi_j(x)\,x\,\varphi_k(x)\,dx = 0, \qquad k + 1 < j.$$
These two equations show that $\alpha_{j,k} = 0$ for $k = 0,\dots,j-2$. Hence
$$\varphi_{j+1}(x) - (C_jx + D_j)\varphi_j(x) + E_j\varphi_{j-1}(x) = 0, \qquad j > 0,$$
where $D_j = \alpha_{j,j}$ and $E_j = -\alpha_{j,j-1}$.
Solution to Exercise 9.7
Since $C_j$ is the ratio of the leading coefficients of the polynomials $\varphi_{j+1}$ and $\varphi_j$, which are both positive, $C_j$ is also positive. We saw in Exercise 9.6 that
$$\alpha_{j,j-1} = \frac1{A_{j-1}}\int_a^b w(x)\big[\varphi_{j+1}(x) - C_jx\varphi_j(x)\big]\varphi_{j-1}(x)\,dx = -\frac{C_j}{A_{j-1}}\int_a^b w(x)\,x\,\varphi_j(x)\,\varphi_{j-1}(x)\,dx,$$
since $\varphi_{j+1}$ and $\varphi_{j-1}$ are orthogonal. Now the same argument as in Exercise 9.6, but with $j$ replaced by $j-1$, shows that $\varphi_j(x) - C_{j-1}x\varphi_{j-1}(x)$ is a polynomial of degree $j-1$; it is therefore orthogonal to $\varphi_j$. This shows that
$$\int_a^b w(x)\big[\varphi_j(x) - C_{j-1}x\varphi_{j-1}(x)\big]\varphi_j(x)\,dx = 0.$$
Hence
$$\int_a^b w(x)\,x\,\varphi_j(x)\,\varphi_{j-1}(x)\,dx = \frac1{C_{j-1}}\int_a^b w(x)\,[\varphi_j(x)]^2\,dx,$$
which is positive. Hence $E_j = -\alpha_{j,j-1}$ is positive.

The proof of the interlacing property follows closely the proof of Theorem 5.8; it proceeds by induction. Suppose that the zeros of $\varphi_j$ and $\varphi_{j-1}$ interlace, and that $\alpha$ and $\beta$ are two consecutive zeros of $\varphi_j$. Then the recurrence relation of Exercise 9.6 gives
$$\varphi_{j+1}(\alpha) = -E_j\varphi_{j-1}(\alpha), \qquad \varphi_{j+1}(\beta) = -E_j\varphi_{j-1}(\beta).$$
But there is exactly one zero of $\varphi_{j-1}$ between $\alpha$ and $\beta$, so that $\varphi_{j-1}(\alpha)$ and $\varphi_{j-1}(\beta)$ have opposite signs. Hence $\varphi_{j+1}(\alpha)$ and $\varphi_{j+1}(\beta)$ also have opposite signs, and there is a zero of $\varphi_{j+1}$ between $\alpha$ and $\beta$. This has located at least $j-1$ zeros of $\varphi_{j+1}$.

Now suppose that $\beta^*$ is the largest zero of $\varphi_j$; then $\beta^*$ is greater than all the zeros of $\varphi_{j-1}$, and $\varphi_{j-1}(\beta^*) > 0$, since the leading coefficient of each of the polynomials is positive. Hence $\varphi_{j+1}(\beta^*) < 0$, and, since $\varphi_{j+1}(x) \to +\infty$ as $x \to +\infty$, there is a zero of $\varphi_{j+1}$ greater than $\beta^*$. By a similar argument $\varphi_{j+1}$ has a zero which is smaller than the smallest zero of $\varphi_j$. This has now located all the zeros of $\varphi_{j+1}$, and shows that the zeros of $\varphi_{j+1}$ and $\varphi_j$ interlace. To start the induction, the same argument shows that the zero of $\varphi_1$ lies between the zeros of $\varphi_2$.
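The interlacing can be observed concretely for the monic Legendre polynomials, whose recurrence coefficients are $\beta_k = k^2/(4k^2-1)$. A Python sketch (zeros are located by bisection on a fine grid; this is an illustration, not part of the proof):

```python
def phi(j, x):
    # monic Legendre polynomials: phi_{k+1} = x*phi_k - beta_k*phi_{k-1}
    p0, p1 = 1.0, x
    for k in range(1, j):
        p0, p1 = p1, x * p1 - (k * k / (4.0 * k * k - 1.0)) * p0
    return p0 if j == 0 else p1

def zeros(j, grid=4001):
    zs = []
    xs = [-1.0 + 2.0 * i / grid for i in range(grid + 1)]
    for a, b in zip(xs, xs[1:]):
        if phi(j, a) * phi(j, b) < 0.0:           # sign change: bisect
            for _ in range(60):
                m = 0.5 * (a + b)
                if phi(j, a) * phi(j, m) <= 0.0:
                    b = m
                else:
                    a = m
            zs.append(0.5 * (a + b))
    return zs

z3, z4 = zeros(3), zeros(4)
# strict interlacing: z4[0] < z3[0] < z4[1] < z3[1] < ...
assert all(z4[i] < z3[i] < z4[i + 1] for i in range(3))
print(z3, z4)
```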
Solution to Exercise 9.8
We can write
$$\varphi_{n+1}(x) = c^{n+1}_{n+1}x^{n+1} - q_n(x),$$
where $c^{n+1}_{n+1}$ is the leading coefficient of $\varphi_{n+1}$ and $q_n$ is a polynomial of degree $n$. Now the best polynomial approximation of degree $n$ to $x^{n+1}$ in the 2-norm is determined by the condition that $x^{n+1} - p_n(x)$ is orthogonal to $\varphi_j$, for $j = 0,\dots,n$. But the above equation shows that
$$x^{n+1} - \frac{q_n(x)}{c^{n+1}_{n+1}} = \frac{\varphi_{n+1}(x)}{c^{n+1}_{n+1}},$$
which clearly satisfies this orthogonality condition. Hence the best polynomial approximation is
$$p_n(x) = x^{n+1} - \frac{\varphi_{n+1}(x)}{c^{n+1}_{n+1}} = \frac{q_n(x)}{c^{n+1}_{n+1}},$$
and the expression $\|x^{n+1} - p_n\|_2 = \|\varphi_{n+1}\|_2 / c^{n+1}_{n+1}$ for the 2-norm of the difference follows immediately.

Using $w(x) = 1$ on $[-1,1]$ we know that
$$\varphi_3(x) = x^3 - \tfrac35x,$$
so the best approximation to $x^3$ is the polynomial $\tfrac35x$. The norm of the error is given by
$$\|x^3 - p_2\|_2^2 = \int_{-1}^1 \big[x^3 - \tfrac35x\big]^2\,dx = \tfrac{8}{175}.$$
Notice that in this example $c^j_j = 1$ for every $j$.
Solution to Exercise 9.9
The proof is by induction, but we shall not fill in all the details. The polynomial $\varphi_k$ is constructed by writing
$$\varphi_k(x) = x^k - a_0\varphi_0(x) - \dots - a_{k-1}\varphi_{k-1}(x), \qquad a_j = \frac{\int_{-a}^a w(x)\,x^k\,\varphi_j(x)\,dx}{\int_{-a}^a w(x)\,[\varphi_j(x)]^2\,dx}.$$
Suppose that the required result is true for polynomials of degree less than $k$. Since $w$ is an even function, $a_j$ is zero whenever $j$ and $k$ have opposite parity, for then the integrand $w(x)\,x^k\varphi_j(x)$ is an odd function. Thus in a polynomial of odd degree all the even-degree coefficients are zero, and in a polynomial of even degree all the odd-degree coefficients are zero. This gives the required result.

The coefficients $\gamma_j$ are given by
$$\gamma_j = \frac{\int_{-a}^a w(x)\,f(x)\,\varphi_j(x)\,dx}{\int_{-a}^a w(x)\,[\varphi_j(x)]^2\,dx}.$$
Evidently $\gamma_j$ is zero if $f$ is an even function and $j$ is odd, since we have just shown that $\varphi_j$ is then an odd function. Similarly $\gamma_j$ is zero if $f$ is an odd function and $j$ is even.
Solution to Exercise 9.10
By definition $H$ is an odd function, so by the results of Exercise 9.9 the polynomial of best approximation of degree 0 is just $p_0(x) = 0$, and the polynomials of best approximation of degrees 1 and 2 are the same. Moreover $p_1$ has the form
$$p_1(x) = \gamma_1\varphi_1(x).$$
The orthogonal polynomials in this case are the Legendre polynomials, so $\varphi_1(x) = x$. Hence
$$\gamma_1 = \frac{\int_{-1}^1 H(x)\,x\,dx}{\int_{-1}^1 x^2\,dx} = \frac{2\int_0^1 x\,dx}{2/3} = \frac{1}{2/3} = \frac32.$$
Hence the polynomials of best approximation are
$$p_1(x) = p_2(x) = \tfrac32x.$$
Solution to Exercise 10.1
We saw in Exercise 9.1 that the sequence of orthogonal polynomials for the weight function $\ln(1/x)$ on $[0,1]$ begins with
$$\varphi_0(x) = 1, \qquad \varphi_1(x) = x - \tfrac14, \qquad \varphi_2(x) = x^2 - \tfrac57x + \tfrac{17}{252}.$$
For $n = 0$ there is just one quadrature point, the zero of $\varphi_1$, which is at $\tfrac14$. The corresponding quadrature weight is
$$W_0 = \int_0^1 \ln(1/x)\,dx = 1.$$
For $n = 1$ the two quadrature points are the zeros of $\varphi_2$, which are
$$x_0 = \tfrac5{14} - \tfrac{\sqrt{106}}{42}, \qquad x_1 = \tfrac5{14} + \tfrac{\sqrt{106}}{42}.$$
The corresponding weights are
$$W_0 = \int_0^1 \ln(1/x)\,\frac{(x - x_1)^2}{(x_0 - x_1)^2}\,dx = \tfrac12 + \tfrac{9}{424}\sqrt{106}$$
and
$$W_1 = \tfrac12 - \tfrac{9}{424}\sqrt{106}.$$
Note that in this case a good deal of heavy algebra is saved by determining the weights by direct construction, requiring the quadrature formula to be exact for polynomials of degrees 0 and 1. This leads to the two equations
$$W_0 + W_1 = \int_0^1 \ln(1/x)\,dx = 1,$$
$$W_0x_0 + W_1x_1 = \int_0^1 \ln(1/x)\,x\,dx = \tfrac14.$$
Solution of these equations gives the same values as above for $W_0$ and $W_1$.
Solution to Exercise 10.2
The Gauss quadrature formula
$$\int_a^b w(x)f(x)\,dx \approx \sum_{k=0}^n W_kf(x_k)$$
is exact when $f$ is any polynomial of degree $2n+1$. It is therefore exact for the Lagrange basis polynomial $L_k$, which has degree $n$. Since $L_k(x_k) = 1$ and $L_k(x_j) = 0$ for $j \ne k$, this shows that
$$\int_a^b w(x)\,L_k(x)\,dx = W_k.$$
Solution to Exercise 10.3
The first two of the sequence of orthogonal polynomials for the weight function $w(x) = x$ on the interval $[0,1]$ are easily found to be $\varphi_0(x) = 1$ and $\varphi_1(x) = x - c$, where
$$c = \frac{\int_0^1 x\cdot x\,dx}{\int_0^1 x\,dx} = \frac{1/3}{1/2} = \frac23.$$
Hence the Gauss quadrature formula for $n = 0$ with this weight function on $[0,1]$ has quadrature point $2/3$ and weight
$$W_0 = \int_0^1 x\,dx = \tfrac12.$$
The error of this quadrature formula, for a function with a continuous second derivative, is given by Theorem 10.1 as
$$\frac{f''(\xi)}{2!}\int_0^1 x\Big(x - \tfrac23\Big)^2dx = \tfrac1{72}f''(\xi).$$
This gives the required result.
Solution to Exercise 10.4
We know that the Chebyshev polynomials $T_r$ form an orthogonal system with weight function $w(x) = (1-x^2)^{-1/2}$ on the interval $[-1,1]$. Hence the quadrature points $x_j$ are the zeros of $T_{n+1}$, which are
$$x_j = \cos\frac{(2j+1)\pi}{2n+2}, \qquad j = 0,\dots,n.$$
Suppose that for some value of $n$
$$\sum_{j=0}^n \cos(2j+1)\theta = \frac{\sin(2n+2)\theta}{2\sin\theta}.$$
Then
$$\sum_{j=0}^{n+1} \cos(2j+1)\theta = \frac{\sin(2n+2)\theta}{2\sin\theta} + \cos(2n+3)\theta = \frac{\sin(2n+2)\theta + 2\sin\theta\cos(2n+3)\theta}{2\sin\theta} = \frac{\sin(2n+4)\theta}{2\sin\theta},$$
so the same result is true with $n$ replaced by $n+1$. The result is trivially true for $n = 0$, so it holds for all positive integers $n$. If $\theta = p\pi$, where $p$ is an integer, then instead
$$\sum_{j=0}^n \cos(2j+1)p\pi = \sum_{j=0}^n (-1)^p = (-1)^p(n+1).$$
Hence, taking $\theta = k\pi/(2n+2)$,
$$\sum_{j=0}^n \cos\frac{k(2j+1)\pi}{2n+2} = \begin{cases}0, & k = 1,\dots,n,\\ n+1, & k = 0,\end{cases}$$
which means that
$$\sum_{j=0}^n T_k(x_j) = 0, \qquad k = 1,\dots,n.$$
But $T_k$ is orthogonal to $T_0$ for $k > 0$, so
$$\int_{-1}^1 (1-x^2)^{-1/2}\,T_k(x)\,dx = 0, \qquad k = 1,\dots,n.$$
Moreover
$$\sum_{j=0}^n T_0(x_j) = n+1 \qquad\text{and}\qquad \int_{-1}^1 (1-x^2)^{-1/2}\,T_0(x)\,dx = \pi.$$
Putting these together we find that
$$\frac{\pi}{n+1}\sum_{j=0}^n T_k(x_j) = \int_{-1}^1 (1-x^2)^{-1/2}\,T_k(x)\,dx, \qquad k = 0,\dots,n.$$
Thus the quadrature formula with weight function $w(x) = (1-x^2)^{-1/2}$ on the interval $[-1,1]$, with quadrature points and weights
$$x_j = \cos\frac{(2j+1)\pi}{2n+2}, \qquad W_j = \frac{\pi}{n+1}, \qquad j = 0,\dots,n,$$
is exact for the polynomials $T_k$, $k = 0,\dots,n$. It is therefore exact for every polynomial of degree $n$, and because of the choice of the quadrature points $x_j$ it is also exact for every polynomial of degree $2n+1$.
Solution to Exercise 10.5
Using the same notation as in Section 10.5, write
$$\pi(x) = \prod_{k=1}^n (x - x_k)^2.$$
This is evidently a polynomial of degree $2n$, and hence the quadrature formula
$$\int_a^b w(x)\,\pi(x)\,dx = W_0\pi(a) + \sum_{k=1}^n W_k\pi(x_k)$$
is exact. But by the definition $\pi(x_k) = 0$, $k = 1,\dots,n$, so that
$$\int_a^b w(x)\,\pi(x)\,dx = W_0\pi(a).$$
Since $w(x)\pi(x)$ is positive on $[a,b]$ (apart from isolated zeros) and $\pi(a) > 0$, it follows that $W_0 > 0$.
Solution to Exercise 10.6
By integration by parts,
$$\int_0^\infty e^{-x}x\,L_j'(x)\,p_r(x)\,dx = \Big[e^{-x}x\,p_r(x)\,L_j(x)\Big]_0^\infty - \int_0^\infty L_j(x)\,\frac{d}{dx}\big[e^{-x}x\,p_r(x)\big]\,dx$$
$$= -\int_0^\infty e^{-x}L_j(x)\big[{-x}p_r(x) + p_r(x) + xp_r'(x)\big]\,dx$$
$$= \int_0^\infty e^{-x}x\,L_j(x)\,p_r(x)\,dx - \int_0^\infty e^{-x}L_j(x)\big[p_r(x) + xp_r'(x)\big]\,dx.$$
Since $p_r(x) + xp_r'(x)$ is a polynomial of degree $r$, and $r < j$, the last term is zero by the orthogonality properties of the Laguerre polynomials $L_j$. This gives the required result,
$$\int_0^\infty e^{-x}x\,\big[L_j(x) - L_j'(x)\big]\,p_r(x)\,dx = 0, \qquad r < j,$$
and shows that the polynomials defined by $\varphi_j(x) = L_j(x) - L_j'(x)$ form an orthogonal system for the weight function $w(x) = xe^{-x}$ on the interval $[0,\infty)$. The Radau quadrature formula (10.27) thus gives
$$\int_0^\infty e^{-x}p_{2n}(x)\,dx = W_0\,p_{2n}(0) + \sum_{k=1}^n W_k\,p_{2n}(x_k),$$
where the quadrature points $x_k$, $k = 1,\dots,n$, are the zeros of the polynomial $L_n(x) - L_n'(x)$.

From Exercise 9.5 we have $L_1(x) = 1 - x$, so that $L_1(x) - L_1'(x) = 2 - x$, which gives the quadrature point $x_1 = 2$. For the Gauss quadrature formula with weight function $w(x) = xe^{-x}$ the corresponding weight is
$$\overline W_1 = \int_0^\infty xe^{-x}\,dx = 1.$$
Hence, using (10.28), we have
$$W_1 = \overline W_1/x_1 = \tfrac12, \qquad W_0 = 1 - \tfrac12 = \tfrac12.$$
Therefore
$$\int_0^\infty e^{-x}p_2(x)\,dx = \tfrac12\,p_2(0) + \tfrac12\,p_2(2).$$
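The one-interior-point Radau rule just derived can be checked against the moments $\int_0^\infty e^{-x}x^k\,dx = k!$, for which it must be exact through degree $2n = 2$. A Python sketch:

```python
# the rule (1/2) p(0) + (1/2) p(2) should reproduce the moments 0!, 1!, 2!
radau = lambda p: 0.5 * p(0.0) + 0.5 * p(2.0)
for k, fact in enumerate([1.0, 1.0, 2.0]):
    assert radau(lambda x: x ** k) == fact
# degree 3 is not integrated exactly: the moment is 3! = 6, the rule gives 4
print(radau(lambda x: x ** 3))   # 4.0
```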
If we divide the polynomial p n (x) by the polynomial (x a)(b x), which has degree 2, the quotient is a polynomial of degree 2n 3, and the remainder (x) has degree 1. This remainder can then be written in the form (x) = r(x a) + s(x b). Using a similar notation to that of Section 10.5, we now write 2
Z b
Z b
Z b
(x)dx = w(x)(x a)(b x)q n (x)dx+ w(x)(x)dx: a a Now let xk ; Wk; k = 1; : : : ; n 1 be the quadrature points and weights respectively for a Gauss quadrature formula using the modi ed weight function w(x)(x a)(b x) over the interval [a; b]. This modi ed weight function is non-negative on [a; b], and the quadrature formula is exact for every polynomial of degree 2n 3. Hence a
w(x)p2n
1
Z b
so that Z b
a
a
w(x)p2n
1
2
w(x)(x a)(b x)q2n
1
(x)dx =
nX1
k=1 k Z b
+ Now
Z b
and
a
w(x)(x)dx = r
x
Z b
a
a
3
(x) =
3
nX1 k=1
Wk p a)(b xk ) 2n
1
3
(xk );
(xk )
nX1
Wk (xk k=1
w(x)(x)dx
w(x)(x a)dx + s
(xk ) (xk a)(b xk )
Wk q2n
Z b
a
(xk ) a)(b xk )
w(x)(b x)dx;
s + : b xk xk a It follows from the de nitions of (x); r and s that r = p2n
r
=
(b)=(b a); s = p n (a)=(b a): We have therefore constructed the Lobatto quadrature formula Z b
a
1
w(x)f (x)dx = W0 f (a) +
2
nX1 k=1
1
Wk f (xk ) + Wn f (b);
124 which is exact when f is any polynomial of degree 2n 1. The quadrature points are xk = xk , and the weights are Wk Wk = (xk a)(b xk ) ; k = 1; : : : ; n 1; and Z b nX W = w(x)(b x)dx Wk (b xk ); 1
0
a
Wn =
k=1
Z b
a
w(x)(x a)dx
nX1 k=1
Wk (xk
a):
The weights wk ; k = 1; : : : ; n 1 are clearly positive, since the weights Wk are positive. To show that W0 is positive, we apply the quadrature formula to the polynomial
P (x) = (b x)
nY1 k=1
(x
xk )2 :
This is a polynomial of degree 2n 1, so the quadrature formula is exact. The polynomial P (x) vanishes at the points xk ; k = 1; : : : ; n 1, and at b, so we nd that Z b
a
w(x)P (x)dx = W0 P (a):
This shows that W 0, since P (x) and w(x) are non-negative on [a; b]. The proof that Wn is positive is similar. 0
Solution to Exercise 10.8
We use direct construction, using the condition that the quadrature formula must be exact for all polynomials of degree 3. This gives the conditions
$$A_0 + A_1 + A_2 = \int_{-1}^1 dx = 2,$$
$$-A_0 + A_1x_1 + A_2 = \int_{-1}^1 x\,dx = 0,$$
$$A_0 + A_1x_1^2 + A_2 = \int_{-1}^1 x^2\,dx = \tfrac23,$$
$$-A_0 + A_1x_1^3 + A_2 = \int_{-1}^1 x^3\,dx = 0.$$
From the second and fourth equations we find that $A_1x_1(1 - x_1^2) = 0$, so that either $x_1 = 0$ or $x_1 = \pm1$ or $A_1 = 0$. Since the quadrature points must be distinct we reject the possibility that $x_1 = \pm1$. From the first and third equations we find that
$$A_1(1 - x_1^2) = \tfrac43,$$
so that $A_1$ cannot be zero. We therefore have $x_1 = 0$ and $A_1 = \tfrac43$. It is then easy to find that $A_0 = A_2 = \tfrac13$, and the quadrature formula is Simpson's rule.
Solution to Exercise 10.9
Using the given notation, and writing $c = b - a$,
$$I(m) = \frac{c}{m}\Big[\tfrac12f(a) + f\big(a + \tfrac1m c\big) + \dots + f\big(a + \tfrac{m-1}m c\big) + \tfrac12f(b)\Big],$$
$$M(m) = \frac{c}{m}\Big[f\big(a + \tfrac1{2m}c\big) + f\big(a + \tfrac3{2m}c\big) + \dots + f\big(a + \tfrac{2m-1}{2m}c\big)\Big],$$
$$S(m) = \frac{c}{6m}\Big[f(a) + 4f\big(a + \tfrac1{2m}c\big) + 2f\big(a + \tfrac1m c\big) + 4f\big(a + \tfrac3{2m}c\big) + \dots + 4f\big(a + \tfrac{2m-1}{2m}c\big) + f(b)\Big].$$
Hence
$$2I(2m) - I(m) = \frac{c}{m}\Big[f(a) + 2f\big(a + \tfrac1{2m}c\big) + 2f\big(a + \tfrac1m c\big) + \dots + 2f\big(a + \tfrac{2m-1}{2m}c\big) + f(b)\Big]\cdot\tfrac12\cdot 2 - I(m)$$
$$= \frac{c}{m}\Big[f\big(a + \tfrac1{2m}c\big) + \dots + f\big(a + \tfrac{2m-1}{2m}c\big)\Big] = M(m),$$
since subtracting $I(m)$ removes exactly the contributions of the points $a + \tfrac jm c$, $j = 0,\dots,m$, at their weights in $I(m)$. In the same way, comparing coefficients point by point,
$$\tfrac43I(2m) - \tfrac13I(m) = \frac{c}{6m}\Big[f(a) + 4f\big(a + \tfrac1{2m}c\big) + 2f\big(a + \tfrac1m c\big) + \dots + 4f\big(a + \tfrac{2m-1}{2m}c\big) + f(b)\Big] = S(m).$$
Finally, using these two relations,
$$\tfrac23M(m) + \tfrac13I(m) = \tfrac23\big[2I(2m) - I(m)\big] + \tfrac13I(m) = \tfrac43I(2m) - \tfrac13I(m) = S(m).$$
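The three identities are algebraic in the function values, so they hold to rounding error for any integrand. A Python sketch with $f = \exp$ on $[0,1]$:

```python
import math

def trap(f, a, b, m):    # I(m): composite trapezium rule
    h = (b - a) / m
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, m)) + 0.5 * f(b))

def mid(f, a, b, m):     # M(m): composite midpoint rule
    h = (b - a) / m
    return h * sum(f(a + (i + 0.5) * h) for i in range(m))

def simp(f, a, b, m):    # S(m): composite Simpson rule on m subintervals
    h = (b - a) / m
    return sum(h / 6 * (f(a + i * h) + 4 * f(a + (i + 0.5) * h) + f(a + (i + 1) * h))
               for i in range(m))

f, a, b, m = math.exp, 0.0, 1.0, 8
d1 = 2 * trap(f, a, b, 2 * m) - trap(f, a, b, m) - mid(f, a, b, m)
d2 = 4 / 3 * trap(f, a, b, 2 * m) - 1 / 3 * trap(f, a, b, m) - simp(f, a, b, m)
d3 = 2 / 3 * mid(f, a, b, m) + 1 / 3 * trap(f, a, b, m) - simp(f, a, b, m)
print(d1, d2, d3)    # all at rounding-error level
```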
Solution to Exercise 11.1
Suppose that on the interval $[a,b]$ the knots are at $a = x_0 < x_1 < \dots < x_m = b$. On each of the $m$ intervals $[x_i, x_{i+1}]$ the spline is a polynomial of degree $n$; the spline is therefore determined by $m(n+1)$ independent coefficients. On each of these intervals the value of the polynomial is given at the two ends; this gives $2m$ interpolation conditions. At each of the internal knots $x_1,\dots,x_{m-1}$ the derivatives of orders $1, 2, \dots, n-1$ must be continuous; this gives $(m-1)(n-1)$ smoothness conditions. The number of additional conditions required to define the spline uniquely is therefore
$$m(n+1) - 2m - (m-1)(n-1) = n - 1.$$
(i) According to Theorem 11.1 applied on the interval [xi f (x) sL(x) = f 00 ( )(x xi )(x xi ) ; x 2 [xi But f 00(x) 0, so sL(x) f (x). 1 2
1
1
; xi ], 1
; xi ] :
(ii) Similar to part (i), but using Theorem 6.4 with n = 1. (iii) According to De nition 11.2, the natural cubic spline must satisfy the end conditions s00(x ) = s00(xm ) = 0. The polynomial f does not in general satisfy these conditions, so s and f are not in general identical. 2
0
2
2
129 Solution to Exercise 11.3
The quantities i are determined by (11.7), which in this case become h(i + 4i + i ) = (6=h)f(i + 1) h 2i h + (i 1) h g = 36ih ; i = 1; : : : ; m 1; together with = m = 0. Substituting f 00(xi ) = 6ih for i we see at once that the equations are satis ed. Moreover = 0 as required, but f 00(xm ) = 6mh = 6 6= 0, so the nal equation is not satis ed. Hence these values do not satisfy all the equations, and s is not identical to f . However, if the two additional equations are replaced by = f 00(0) and m = f 00 (1) then all the equations determining i are satis ed. Since the system of equations is nonsingular this means that i = f 00(ih); i = 0; : : : ; m; hence s and f are identical, 1
3 3
+1
3 3
3
2
0
0
2
0
2
3
130 Solution to Exercise 11.4
By de nition
kf sk
X
= kf
2 2
Z
=
1 0
k 'k k22 X
[f (x)
k
To minimise this we require that @ @j
which gives
Z
X
1
2 [f (x)
k
0
k 'k (x)]2 dx:
kf sk = 0; 2 2
k 'k (x)]'j (x)dx = 0;
j = 0; : : : ; m:
This yields the required system of equations. Now 'k (x) is nonzero only on the interval [xk Z
1
1
; xk+1 ], and so
if ji j j > 1: Hence the matrix A is tridiagonal. The diagonal elements are given by 0
Aii
= = = =
Z
1 0
'i (x)'j (x) dx = 0
['i (x)] dx 2
Z (i+1)h
i
(
Z ih
i
2
1)
2
2
h
(
h
['i (x)] dx h (x (i 1)h) dx + Z h
1)
Z
1 0
t2 dt + h
i
( +1)
ih
Z
1 0
t2 dt
h
(x (i + 1)h) dx h 2
2
= 32 h; i = 1; : : : ; m 1 after an obvious change of variable. At the two ends the same argument shows that 1 A ; = Am;m = h: 3 For the nonzero o-diagonal elements we get in the same way 00
Ai;i+1
=
Z
1 0
'i (x)'i+1 (x) dx
= =
Z (i+1)h
h
ih
h
Z
1 0
= 61 h;
and in the same way
(i + 1)h
131 x x ih : dx h
(1 t)tdt i = 0; : : : ; m
1
Ai 1;i = h; 6
1;
i = 1; : : : ; m:
The matrix A is evidently symmetric. Hence all the elements of A are non-negative, and 2 h = Aii > Ai;i + Ai;i = 1 h + 1 h = 1 h; i = 1; : : : ; m 1; 3 6 6 3 with A ; > A ; and Am;m + Am;m : These are the conditions required for the success of the Thomas algorithm. 1
00
+1
01
1
132 Solution to Exercise 11.5
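The system is exactly the setting for the Thomas algorithm. A Python sketch for $f(x) = x$, whose least-squares fit must reproduce $x$ at the knots; the right-hand side entries $b_0 = h^2/6$, $b_i = ih^2$ and $b_m = h/2 - h^2/6$ are computed by direct integration:

```python
def thomas(sub, diag, sup, rhs):
    # Solve a tridiagonal system by forward elimination and back substitution.
    n = len(diag)
    d, r = diag[:], rhs[:]
    for i in range(1, n):
        w = sub[i - 1] / d[i - 1]
        d[i] -= w * sup[i - 1]
        r[i] -= w * r[i - 1]
    x = [0.0] * n
    x[-1] = r[-1] / d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (r[i] - sup[i] * x[i + 1]) / d[i]
    return x

m = 10
h = 1.0 / m
diag = [h / 3] + [2 * h / 3] * (m - 1) + [h / 3]
sub = [h / 6] * m
sup = [h / 6] * m
b = [h * h / 6] + [i * h * h for i in range(1, m)] + [h / 2 - h * h / 6]
gamma = thomas(sub, diag, sup, b)
# the least-squares fit reproduces f(x) = x at the knots
assert all(abs(gamma[i] - i * h) < 1e-10 for i in range(m + 1))
print(gamma[:4])
```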
The elements of the vector b are Z bi = x'i (x) dx Z Z ih x (i 1)h dx + = x 1
0
(
i
h
h
Z
1)
ih
Z
1
i
( +1)
h
x
(i + 1)h h
x
dx
1
= h (i 1 + t) t dt + h (i + 1) t) t dt = ih ; i = 1; : : : ; m 1: In the same way Z b =h (1 t)t dt = 61 h : Substituting the values i = ih we get X 1 Aij jh = h [(i 1) + 4i + (i + 1)] 6 2
2
0
0
2
0
1
2
2
0
2
j
= =
and
ih2 bi ;
= 16 h [0 + 1] j = 16 h = b: Thus the quantities i = ih satisfy the system of equations, and the solution is therefore s(xi ) = i = xi . When f (x) = x we nd in the same way that X
A0;j jh
2
2
0
2
bi
= = =
Z
1
0
x2 'i (x) dx
Z
1
Z
(j 1 + t) t dt + h h (i + 1=6): h3 3
0 2
2
1
3 0
(j + 1 t ) t d t 2
133
Using k = (kh) + Ch gives X 1 Aij j = h [(i 1) + 4i + (i + 1) + 6C ] 6 j = h [i + 1=3 + C ]; i = 1; : : : ; m 1; so the equations are satis ed if h [i + 1=3 + C ] = bi = h [i + 1=6] if C = 1=6. A similar argument shows that this value of C also satis es the equations for i = 0 and i = m. Hence the solution of the system of equations is k = (kh) h =6, and so s(xk ) = k = f (xk ) h =6. 2
2
3
3
3
2
2
2
2
2
3
i.e.,
2
2
2
2
134 Solution to Exercise 11.6
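The claimed solution $\gamma_k = (kh)^2 - h^2/6$ can be verified exactly against the interior normal equations in rational arithmetic. A Python sketch:

```python
from fractions import Fraction as F

m = 8
h = F(1, m)
# interior normal equations for f(x) = x^2:
#   (h/6) g[i-1] + (2h/3) g[i] + (h/6) g[i+1] = b_i = h^3 (i^2 + 1/6)
g = [(k * h) ** 2 - h ** 2 / 6 for k in range(m + 1)]
for i in range(1, m):
    lhs = h / 6 * g[i - 1] + 2 * h / 3 * g[i] + h / 6 * g[i + 1]
    assert lhs == h ** 3 * (i * i + F(1, 6))
print("interior equations satisfied exactly")
```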
If x a then (x a)n = 0 and (x a)n = 0; if x > a then (x a) = (x a). So in both cases (x a)n (x a) = (x a)n : From the de nition nX n + 1 k [(n + 2)h x]Sn (x h) = ( 1) [(n + 2)h x](x h k +1 +
+
+
+1 +
+
+1
=
k=0 nX +2 k=1
( 1)k
giving xSn (x) + [(n + 2)h x]Sn (x h) nX = ( 1)k n +k 1 x +1
k=1
+
n+1
0
xn+ x + (
1)n
1
n+1 k 1
1
n+1 k 1
n+1 n+1
[(n + 2)h
[(n + 2)h
x]
[(n + 2)h
kh)n+
x](x kh)n+ ;
(x
x ](x
kh)n+
(n + 2)h)n
+
nX = ( 1)k n +k 2 (x kh)n + xn + ( 1)n (x (n + 2)h)n k = Sn (x); where we have used the additive property of the binomial coeÆcient n+1 n+1 n+2 + = : k k 1 k +1
+1 +
+1 +
+2
=1
+1
Now to show that Sn(x) 0; it is clear that S (x) 0 for all x. Suppose that, for some positive integer k, Sk (x) 0 for all x. Notice that Sk (x) = 0 when x 0, and when x (k +1)h. It then follows that xSk (x) 0, and that [(k + 2)h x]Sk (x) 0. Thus Sk (x) 0 for all x, and the required result follows by induction. 1
+1
+1 +
135 Solution to Exercise 11.7
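Both the recurrence and the non-negativity are easy to probe numerically from the defining sum. A Python sketch (the knot spacing $h$ is an arbitrary choice):

```python
import math

h = 0.5

def S(n, x):
    # S_n(x) = sum_{k=0}^{n+1} (-1)^k C(n+1, k) (x - kh)_+^n
    return sum((-1) ** k * math.comb(n + 1, k) * max(x - k * h, 0.0) ** n
               for k in range(n + 2))

for n in (1, 2, 3):
    for x in [0.07 * i for i in range(-10, 60)]:
        lhs = S(n + 1, x)
        rhs = x * S(n, x) + ((n + 2) * h - x) * S(n, x - h)
        assert abs(lhs - rhs) < 1e-9       # the recurrence
        assert S(n, x) >= -1e-12           # non-negativity
print("recurrence and non-negativity verified on the sample grid")
```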
The proof is by induction, starting from the fact that S (x) is clearly symmetric. Now suppose that, for some positive integer n, Sn(x) is symmetric. Then use of the recurrence relation from Exercise 6 shows that Sn ((n + 2)h=2 + x) = [(n + 2)h=2 + x]Sn ((n + 2)h=2 + x) +[(n + 2)h (n + 2)h=2 x]Sn ((n + 2)h=2 + x = [(n + 2)h=2 + x]Sn((n + 2)h=2 + x) +[(n + 2)h=2 x]Sn (nh=2 + x) = [(n + 2)h=2 + x]Sn((n + 1)h=2 + x + h=2) +[(n + 2)h=2 x]Sn ((n + 1)h + x h=2): Using the symmetry of Sn(x) we now see that changing the sign of x merely interchanges the two terms in this expression, leaving the sum unchanged. Hence Sn (x) is symmetric, and so by induction every Sn (x) is symmetric. 1
+1
+1
h)
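The recurrence, non-negativity and symmetry established in Exercises 11.6 and 11.7 are easy to check numerically. The sketch below (the helper name `S` is ours, with $h = 1$) evaluates $S_n$ directly from the defining sum of truncated powers.

```python
from math import comb

def S(n, x):
    """S_n(x) = sum_{k=0}^{n+1} (-1)^k C(n+1,k) (x - k)_+^n, taking h = 1."""
    return sum((-1) ** k * comb(n + 1, k) * max(x - k, 0.0) ** n
               for k in range(n + 2))

xs = [0.01 * i for i in range(-50, 451)]   # sample points covering the supports
# recurrence S_{n+1}(x) = x S_n(x) + ((n+2)h - x) S_n(x - h) from Exercise 11.6
rec_err = max(abs(S(n + 1, x) - (x * S(n, x) + ((n + 2) - x) * S(n, x - 1.0)))
              for n in (1, 2, 3) for x in xs)
min_val = min(S(n, x) for n in (1, 2, 3) for x in xs)   # non-negativity
# symmetry about the midpoint (n+1)h/2 of the support, from Exercise 11.7
sym_err = max(abs(S(n, (n + 1) / 2 + t) - S(n, (n + 1) / 2 - t))
              for n in (1, 2, 3) for t in xs)
```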
Solution to Exercise 12.1

a)
\[
|f(x,y) - f(x,z)| = 2x^{-4}\,|y - z| \le 2\,|y - z|
\]
for all $x \in [1,\infty)$ and all $y,z \in \mathbb{R}$. The Lipschitz constant is therefore $L = 2$.

b) By the Mean Value Theorem,
\[
|f(x,y) - f(x,z)| = \Big|\frac{\partial f}{\partial y}(x,\eta)\Big|\,|y - z|,
\]
where $\eta$ lies between $y$ and $z$. In our case,
\[
\Big|\frac{\partial f}{\partial y}(x,\eta)\Big| = e^{-x^2}\,\frac{1}{1+\eta^2} \le \frac1e\cdot 1 = \frac1e
\]
for all $x \in [1,\infty)$ and all $y,z \in \mathbb{R}$. Therefore, the Lipschitz constant is $L = 1/e$.

c)
\[
|f(x,y) - f(x,z)| = 2\big(1 + e^{-|x|}\big)\left|\frac{y}{1+y^2} - \frac{z}{1+z^2}\right|
= 2\big(1 + e^{-|x|}\big)\,\frac{|1 - yz|}{(1+y^2)(1+z^2)}\,|y - z|
\le 2\cdot 2\cdot 1\cdot |y - z|
\]
for all $x \in (-\infty,\infty)$ and all $y,z \in \mathbb{R}$. The Lipschitz constant is therefore $L = 4$.
Solution to Exercise 12.2

Observe that $y(x) \equiv 0$ is the trivial solution of the initial value problem. To find nontrivial solutions we separate the variables (assuming now that $y \ne 0$):
\[
y^{-2m/(2m+1)}\,\mathrm{d}y = \mathrm{d}x.
\]
Integrating this yields
\[
y(x) = \left(\frac{x + C}{2m+1}\right)^{2m+1},
\]
which is a solution of the differential equation $y' = y^{2m/(2m+1)}$ for any choice of the constant $C$. Hence,
\[
y_b(x) = \begin{cases}
\left(\dfrac{x+b}{2m+1}\right)^{2m+1}, & x \le -b,\\[2mm]
0, & x \in [-b, b],\\[2mm]
\left(\dfrac{x-b}{2m+1}\right)^{2m+1}, & x \ge b,
\end{cases}
\]
is a solution of the given initial value problem for any $b \ge 0$. Thus we have infinitely many solutions to the initial value problem. This does not contradict Picard's Theorem, since $f(x,y) = y^{2m/(2m+1)}$ does not satisfy a Lipschitz condition in any neighbourhood of a point $(x,0)$, for any $x \in \mathbb{R}$. This can be seen by showing that the opposite of the Lipschitz condition holds: for any $L > 0$ there exists $(x,y) \ne (x,0)$ such that
\[
|f(x,y) - f(x,0)| = \big|y^{2m/(2m+1)}\big| = |y|^{2m/(2m+1)} > L\,|y|.
\]
Such is, for example, any $(x,y)$ with $x \in \mathbb{R}$ and
\[
0 < |y| < \frac{1}{L^{2m+1}}.
\]
Solution to Exercise 12.3

The solution to the initial value problem is
\[
y(x) = 1 + \frac{p+q}{p}\,(e^{px} - 1).
\]
Picard's iteration has the form: $y_0(x) \equiv 1$,
\[
y_{n+1}(x) = 1 + \int_0^x \big(p\,y_n(t) + q\big)\,\mathrm{d}t = 1 + qx + p\int_0^x y_n(t)\,\mathrm{d}t, \qquad n = 0,1,2,\dots
\]
The function $y_0$ is a polynomial of degree 0, and therefore, by induction, $y_n$ is a polynomial of degree $n$. In particular,
\begin{align*}
y_0(x) &\equiv 1,\\
y_1(x) &= 1 + (p+q)x,\\
y_2(x) &= 1 + (p+q)x + p(p+q)\,\frac{x^2}{2!},\\
y_3(x) &= 1 + (p+q)x + p(p+q)\,\frac{x^2}{2!} + p^2(p+q)\,\frac{x^3}{3!},
\end{align*}
etc., and in general
\[
y_n(x) = 1 + \frac{p+q}{p}\left(px + \frac{(px)^2}{2!} + \dots + \frac{(px)^n}{n!}\right)
= 1 + \frac{p+q}{p}\left[\sum_{k=0}^n \frac{(px)^k}{k!} - 1\right].
\]
Passing to the limit as $n \to \infty$, we have that
\[
\lim_{n\to\infty} y_n(x) = 1 + \frac{p+q}{p}\,(e^{px} - 1) = y(x),
\]
as required.
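The Picard iterates above can be reproduced in exact polynomial arithmetic. The sketch below (the helper names `picard_iterates` and `horner` are ours) builds $y_n$ by integrating the previous iterate term by term, exactly as in the recursion, and compares with the closed-form solution.

```python
from math import exp

def picard_iterates(p, q, n):
    """Coefficient list of the degree-n Picard iterate for y' = p*y + q, y(0) = 1,
    via y_{k+1}(x) = 1 + q*x + p * \int_0^x y_k(t) dt (exact integration)."""
    c = [1.0]                                           # y_0(x) = 1
    for _ in range(n):
        integ = [0.0] + [ck / (k + 1) for k, ck in enumerate(c)]
        nxt = [p * a for a in integ]
        nxt[0] += 1.0
        nxt[1] += q
        c = nxt
    return c

def horner(c, x):
    v = 0.0
    for coef in reversed(c):
        v = v * x + coef
    return v

p, q, x = 2.0, 1.0, 0.5
approx = horner(picard_iterates(p, q, 25), x)
exact = 1.0 + (p + q) / p * (exp(p * x) - 1.0)
err = abs(approx - exact)
```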
Solution to Exercise 12.4

Euler's method for the initial value problem has the form
\[
y_{n+1} = y_n + h\,y_n^{1/5}, \qquad n = 0,1,2,\dots, \qquad y_0 = 0.
\]
Hence $y_n = 0$ for all $n \ge 0$. Therefore Euler's method approximates the trivial solution $y(x) \equiv 0$, rather than $y(x) = (4x/5)^{5/4}$.

The implicit Euler method for the initial value problem is
\[
y_{n+1} = y_n + h\,y_{n+1}^{1/5}, \qquad n = 0,1,2,\dots, \qquad y_0 = 0.
\]
Put $y_n = (C_n h)^{5/4}$; then,
\[
C_{n+1}^{5/4} - C_n^{5/4} = C_{n+1}^{1/4}, \qquad n \ge 0.
\]
$y_0 = 0$ implies that $C_0 = 0$. Thus, $C_1^{5/4} = C_1^{1/4}$, i.e. $C_1^{1/4}(C_1 - 1) = 0$, which means that either $C_1 = 0$ or $C_1 = 1$. Taking $C_1 = 1$, we shall prove by induction the existence of $C_n > 1$, for all $n \ge 2$, such that $y_n = (C_n h)^{5/4}$ is a solution of the implicit Euler scheme. We begin by observing that
\[
C_{n+1}^{1/4}\Big(\big(C_{n+1}^{1/4}\big)^4 - 1\Big) = \big(C_n^{1/4}\big)^5.
\]
Putting $t = C_{n+1}^{1/4}$, this gives the polynomial equation
\[
p(t) \equiv t\,(t^4 - 1) - \big(C_n^{1/4}\big)^5 = 0.
\]
Suppose that $C_n \ge 1$ for some $n \ge 1$ (this is certainly true for $n = 1$). As $p(1) < 0$ and $\lim_{t\to+\infty} p(t) = +\infty$, the polynomial $p$ has a root, $t_*$ say, in the interval $(1,\infty)$; therefore, $C_{n+1} = t_*^4 > 1$. By induction, then, $C_n > 1$ for all $n \ge 2$.
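The existence argument above is constructive, and can be sketched numerically (helper name `implicit_euler_constants` is ours): take the nontrivial branch $C_1 = 1$ and, at each step, solve $p(t) = t(t^4-1) - C_n^{5/4} = 0$ for $t = C_{n+1}^{1/4}$ by bisection on $(1,\infty)$.

```python
def implicit_euler_constants(nsteps):
    """Follow the nontrivial branch of implicit Euler for y' = y^(1/5), y(0) = 0,
    via the substitution y_n = (C_n h)^(5/4)."""
    C = [0.0, 1.0]                         # C_0 = 0 and the nontrivial choice C_1 = 1
    for _ in range(nsteps - 1):
        rhs = C[-1] ** 1.25                # (C_n^(1/4))^5
        lo, hi = 1.0, (rhs + 2.0) ** 0.2 + 1.0   # p(lo) <= 0 < p(hi)
        for _ in range(200):               # bisection on p(t) = t^5 - t - rhs
            mid = 0.5 * (lo + hi)
            if mid ** 5 - mid - rhs <= 0.0:
                lo = mid
            else:
                hi = mid
        C.append(lo ** 4)                  # C_{n+1} = t_*^4 > 1
    return C

C = implicit_euler_constants(20)
```

Since $y(x) = (4x/5)^{5/4}$, one expects $C_n \approx 4n/5$ for large $n$, which the iterates indeed approach.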
Solution to Exercise 12.5

The exact solution of the initial value problem is
\[
y(x) = \tfrac12\,x^2 e^{-5x}.
\]
The Euler approximation is
\[
y_{n+1} = y_n + h\big(x_n e^{-5x_n} - 5y_n\big), \qquad n = 0,1,2,\dots, \qquad y_0 = 0.
\]
Clearly, $y_1 = 0$, $y_2 = h^2 e^{-5h}$, and by induction
\[
y_{n+1} = h^2 e^{-5h} \sum_{k=0}^{n-1} (k+1)\,\big(e^{-5h}\big)^k\,(1 - 5h)^{n-1-k}.
\]
Taking $n = N - 1$, where $h = 1/N$, we have
\[
y_N = e^{-5/N}\,\frac{1}{N^2}\,\sum_{k=0}^{N-2} (k+1)\,\big(e^{-5/N}\big)^k\,\Big(1 - \frac5N\Big)^{N-2-k}.
\]
Since $1 - 5/N = e^{-5/N} + O(N^{-2})$, each product $(e^{-5/N})^k (1 - 5/N)^{N-2-k}$ converges to $e^{-5}$, uniformly in $k$, as $N \to \infty$. Passing to the limit,
\[
\lim_{N\to\infty} y_N = e^{-5}\,\lim_{N\to\infty}\frac{1}{N^2}\sum_{k=0}^{N-2}(k+1)
= e^{-5}\,\lim_{N\to\infty}\frac{N(N-1)}{2N^2} = \tfrac12\,e^{-5} = y(1),
\]
as required.
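The convergence $y_N \to \tfrac12 e^{-5} = y(1)$ can be observed directly; the sketch below (helper name `euler12_5` is ours) runs Euler's method on $[0,1]$ for a few values of $N$.

```python
from math import exp

def euler12_5(N):
    """Euler's method for y' = x*exp(-5x) - 5y, y(0) = 0, on [0, 1] with h = 1/N."""
    h, y = 1.0 / N, 0.0
    for n in range(N):
        x = n * h
        y += h * (x * exp(-5.0 * x) - 5.0 * y)
    return y

target = 0.5 * exp(-5.0)          # y(1) = (1/2) e^{-5}
errs = [abs(euler12_5(N) - target) for N in (100, 200, 400)]
```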
Solution to Exercise 12.6

a) The truncation error of Euler's method is
\[
T_n = \frac{y(x_{n+1}) - y(x_n)}{h} - f(x_n, y(x_n)) = \frac{y(x_{n+1}) - y(x_n)}{h} - y'(x_n) = \tfrac12 h\,y''(\xi_n)
\]
for some $\xi_n \in (x_n, x_{n+1})$. In our case, $y' = \ln\ln(4 + y^2)$. Therefore,
\[
y'' = \frac{\mathrm{d}}{\mathrm{d}x}\,\ln\ln(4+y^2) = \frac{1}{\ln(4+y^2)}\,\frac{2y\,y'}{4+y^2}
= \frac{\ln\ln(4+y^2)}{\ln(4+y^2)}\,\frac{2y}{4+y^2},
\]
and hence, since $\big|2y/(4+y^2)\big| \le \tfrac12$ and $0 \le \ln\ln(4+y^2)/\ln(4+y^2) \le 1$, we have $|y''(x)| \le \tfrac12$. Thus, $|T_n| \le \tfrac14 h$.

b) We have
\begin{align*}
y_{n+1} &= y_n + h f(x_n, y_n),\\
y(x_{n+1}) &= y(x_n) + h f(x_n, y(x_n)) + h T_n.
\end{align*}
By subtraction and using a Lipschitz condition for $f(x,y) = \ln\ln(4+y^2)$, with Lipschitz constant $L$, and letting $e_n = y(x_n) - y_n$, we have $e_0 = 0$ and
\[
|e_{n+1}| \le |e_n| + hL|e_n| + h|T_n|, \qquad n = 0,1,\dots,N-1.
\]
To calculate the Lipschitz constant of $f$, note that by the Mean Value Theorem,
\[
|f(x,y) - f(x,z)| = \Big|\frac{\partial f}{\partial y}(x,\eta)\Big|\,|y - z|
\]
for some $\eta$ which lies between $y$ and $z$. In our case,
\[
\frac{\partial f}{\partial y} = \frac{1}{\ln(4+y^2)}\,\frac{2y}{4+y^2},
\qquad\text{and therefore}\qquad
\left|\frac{\partial f}{\partial y}\right| \le \frac{1}{\ln 4}\cdot\frac12.
\]
Hence, $L = 1/(2\ln 4)$.

c) From part b) we have $e_0 = 0$ and $|e_{n+1}| \le (1 + hL)|e_n| + h|T_n|$. Therefore, letting $T$ be such that $|T_n| \le T$ for all $n \ge 0$ (e.g., $T = \tfrac14 h$),
\begin{align*}
|e_1| &\le hT,\\
|e_2| &\le (1+hL)\,hT + hT,\\
|e_3| &\le (1+hL)^2\,hT + (1+hL)\,hT + hT,
\end{align*}
etc., so that
\[
|e_n| \le \big[(1+hL)^{n-1} + \dots + 1\big]\,hT = hT\,\frac{(1+hL)^n - 1}{(1+hL) - 1}
= \frac{T}{L}\big[(1+hL)^n - 1\big] \le \frac{T}{L}\big[(e^{hL})^n - 1\big] = \frac{T}{L}\big(e^{nhL} - 1\big) \le \frac{T}{L}\big(e^{L} - 1\big),
\]
as in our case $nh \le Nh = 1$ for all $n = 0,1,\dots,N$. This gives
\[
\max_{0\le n\le N} |y(x_n) - y_n| \le \frac{h}{4}\,(2\ln 4)\big(e^{1/(2\ln 4)} - 1\big) = h\cdot 0.30103067384\dots,
\]
which is less than $10^{-4}$ provided that $N = 1/h \ge 3011 = N_0$.
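The bound in part c) can be checked against an actual Euler run; the sketch below assumes, for illustration only, the initial value $y(0) = 1$ (the bound derived above does not depend on the initial value), and uses Euler on a 100-times finer grid as the reference solution. The helper names are ours.

```python
from math import log

def f126(y):
    """f(x, y) = ln(ln(4 + y^2)); it does not depend on x."""
    return log(log(4.0 + y * y))

def euler126(N, y0=1.0):
    """Euler approximations y_0, ..., y_N on [0, 1] with h = 1/N."""
    h, ys = 1.0 / N, [y0]
    for _ in range(N):
        ys.append(ys[-1] + h * f126(ys[-1]))
    return ys

N = 3011
coarse = euler126(N)
fine = euler126(100 * N)                     # reference on a 100x finer grid
max_err = max(abs(coarse[n] - fine[100 * n]) for n in range(N + 1))
bound = 0.30103067384 / N                    # h * 0.30103... from part c)
```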
Solution to Exercise 12.7

The truncation error is
\[
T_n = \frac{y(x_{n+1}) - y(x_n)}{h} - \tfrac12\big(f(x_{n+1}, y(x_{n+1})) + f(x_n, y(x_n))\big).
\]
Using repeated integration by parts,
\begin{align*}
\int_{x_n}^{x_{n+1}} (x - x_n)(x - x_{n+1})\,y'''(x)\,\mathrm{d}x
&= \big[(x - x_n)(x - x_{n+1})\,y''(x)\big]_{x=x_n}^{x=x_{n+1}} - \int_{x_n}^{x_{n+1}} (2x - x_n - x_{n+1})\,y''(x)\,\mathrm{d}x\\
&= -\int_{x_n}^{x_{n+1}} (2x - x_n - x_{n+1})\,y''(x)\,\mathrm{d}x\\
&= -\big[(2x - x_n - x_{n+1})\,y'(x)\big]_{x=x_n}^{x=x_{n+1}} + \int_{x_n}^{x_{n+1}} 2y'(x)\,\mathrm{d}x\\
&= -h\,y'(x_{n+1}) - h\,y'(x_n) + 2\big[y(x_{n+1}) - y(x_n)\big].
\end{align*}
Therefore, noting that $(x - x_n)(x - x_{n+1}) = -(x_{n+1} - x)(x - x_n)$,
\[
y(x_{n+1}) - y(x_n) = \frac{h}{2}\big[y'(x_n) + y'(x_{n+1})\big] - \frac12\int_{x_n}^{x_{n+1}} (x_{n+1} - x)(x - x_n)\,y'''(x)\,\mathrm{d}x.
\]
Using the Integral Mean Value Theorem on the right-hand side,
\[
y(x_{n+1}) - y(x_n) = \frac{h}{2}\big[y'(x_n) + y'(x_{n+1})\big] - \frac12\,y'''(\eta_n)\int_{x_n}^{x_{n+1}} (x_{n+1} - x)(x - x_n)\,\mathrm{d}x
\]
with $\eta_n \in (x_n, x_{n+1})$. Now, using the change of variable $x = x_n + sh$,
\[
\int_{x_n}^{x_{n+1}} (x_{n+1} - x)(x - x_n)\,\mathrm{d}x = h^3\int_0^1 (1 - s)s\,\mathrm{d}s = h^3\int_0^1 (s - s^2)\,\mathrm{d}s = \tfrac16 h^3.
\]
Thus, we have
\[
y(x_{n+1}) - y(x_n) = \frac{h}{2}\big[f(x_n, y(x_n)) + f(x_{n+1}, y(x_{n+1}))\big] - \tfrac1{12} h^3\,y'''(\eta_n),
\]
with $\eta_n \in (x_n, x_{n+1})$. Hence,
\[
T_n = -\tfrac1{12} h^2\,y'''(\eta_n).
\]
In particular, $|T_n| \le \tfrac1{12} h^2 M_3$, where $M_3 = \max_{\zeta\in\mathbb{R}} |y'''(\zeta)|$.

To derive a bound on the global error $e_n = y(x_n) - y_n$, note that
\begin{align*}
y(x_{n+1}) &= y(x_n) + \frac{h}{2}\big[f(x_n, y(x_n)) + f(x_{n+1}, y(x_{n+1}))\big] + hT_n,\\
y_{n+1} &= y_n + \frac{h}{2}\big[f(x_n, y_n) + f(x_{n+1}, y_{n+1})\big].
\end{align*}
Subtracting and using a Lipschitz condition on $f$, we have
\[
|e_{n+1}| \le |e_n| + \frac{h}{2}\big(L|e_n| + L|e_{n+1}|\big) + h|T_n|
\le |e_n| + \frac{h}{2}\big(L|e_n| + L|e_{n+1}|\big) + \tfrac1{12} h^3 M_3.
\]
This can be rewritten as follows:
\[
\big(1 - \tfrac12 hL\big)|e_{n+1}| \le \big(1 + \tfrac12 hL\big)|e_n| + \tfrac1{12} h^3 M_3, \qquad n = 0,1,2,\dots,
\]
with $e_0 = 0$ (assuming that $0 < h < 2/L$). By induction, this implies that
\[
|e_n| \le \frac{h^2 M_3}{12L}\left[\left(\frac{1 + \tfrac12 hL}{1 - \tfrac12 hL}\right)^n - 1\right].
\]
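The $O(h^2)$ error bound can be illustrated on a problem where the implicit step solves in closed form; the sketch below (helper name `trapezium` is ours) applies the trapezium rule method to $y' = \lambda y$, $y(0) = 1$, and checks that halving $h$ divides the error at $x = 1$ by about four.

```python
from math import exp

def trapezium(lam, N):
    """Trapezium-rule method for y' = lam*y, y(0) = 1, on [0, 1] with h = 1/N.
    For this linear f the implicit relation gives
    y_{n+1} = y_n * (1 + h*lam/2) / (1 - h*lam/2)."""
    h, y = 1.0 / N, 1.0
    for _ in range(N):
        y *= (1.0 + 0.5 * h * lam) / (1.0 - 0.5 * h * lam)
    return y

errs = [abs(trapezium(-1.0, N) - exp(-1.0)) for N in (50, 100, 200)]
ratios = [errs[i] / errs[i + 1] for i in (0, 1)]   # should approach 4
```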
Solution to Exercise 12.8

We shall perform a Taylor series expansion of the truncation error:
\begin{align*}
T_n &= \frac{y(x_{n+1}) - y(x_n)}{h} - \tfrac12\big[f(x_n, y(x_n)) + f(x_{n+1}, y(x_n) + hf(x_n, y(x_n)))\big]\\
&= \frac{y(x_{n+1}) - y(x_n)}{h} - \tfrac12\big[y'(x_n) + f(x_{n+1}, y(x_n) + hy'(x_n))\big]\\
&= y'(x_n) + \tfrac12 h\,y''(x_n) + \tfrac16 h^2\,y'''(x_n) + O(h^3)\\
&\quad - \tfrac12\big\{y'(x_n) + f(x_n, y(x_n)) + hf_x(x_n, y(x_n)) + hy'(x_n)f_y(x_n, y(x_n))\\
&\qquad\quad + \tfrac12\big[h^2 f_{xx}(x_n, y(x_n)) + 2h^2 y'(x_n) f_{xy}(x_n, y(x_n)) + h^2 (y'(x_n))^2 f_{yy}(x_n, y(x_n))\big]\big\} + O(h^3)\\
&= h^2\Big[\tfrac16\,f_y\,(f_x + f_y f) - \tfrac1{12}\big(f_{xx} + 2f_{xy}f + f_{yy}f^2\big)\Big]\Big|_{x=x_n} + O(h^3),
\end{align*}
where, in the transition to the last line, we used that
\begin{align*}
y' &= f,\\
y'' &= f_x + f_y y' = f_x + f_y f,\\
y''' &= f_{xx} + f_{xy}y' + f_y y'' + (f_{xy} + f_{yy}y')\,y' = f_{xx} + 2f_{xy}f + f_x f_y + f(f_y)^2 + f_{yy}f^2.
\end{align*}
Solution to Exercise 12.9

The classical 4th-order Runge–Kutta method is
\[
y_{n+1} = y_n + \tfrac16 h\,(k_1 + 2k_2 + 2k_3 + k_4),
\]
where, in our case,
\begin{align*}
k_1 &= f(x_n, y_n) = y_n,\\
k_2 &= f\big(x_n + \tfrac12 h,\; y_n + \tfrac12 h k_1\big) = \big(1 + \tfrac12 h\big)\,y_n,\\
k_3 &= f\big(x_n + \tfrac12 h,\; y_n + \tfrac12 h k_2\big) = \big(1 + \tfrac12 h + \tfrac14 h^2\big)\,y_n,\\
k_4 &= f\big(x_n + h,\; y_n + h k_3\big) = \big(1 + h + \tfrac12 h^2 + \tfrac14 h^3\big)\,y_n.
\end{align*}
Therefore,
\[
y_{n+1} = \big(1 + h + \tfrac12 h^2 + \tfrac16 h^3 + \tfrac1{24} h^4\big)\,y_n.
\]
On the other hand, for the exact solution,
\[
y(x_{n+1}) = e^{x_{n+1}} = e^{x_n}\,e^{h} = e^{h}\,y(x_n),
\]
and therefore,
\[
y(x_{n+1}) = \big(1 + h + \tfrac12 h^2 + \tfrac16 h^3 + \tfrac1{24} h^4 + \dots\big)\,y(x_n).
\]
The factor multiplying $y_n$ in the numerical method coincides with the factor multiplying $y(x_n)$ in the exact solution up to and including the term of order $h^4$; the difference is $O(h^5)$.
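The amplification factor computed above can be confirmed exactly: one RK4 step for $y' = y$ from $y = 1$ must reproduce the truncated exponential series. A minimal sketch (helper name `rk4_step` is ours):

```python
def rk4_step(f, x, y, h):
    """One step of the classical 4th-order Runge-Kutta method."""
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h / 2 * k1)
    k3 = f(x + h / 2, y + h / 2 * k2)
    k4 = f(x + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

h = 0.3
factor = rk4_step(lambda x, y: y, 0.0, 1.0, h)     # y' = y, one step from y = 1
poly = 1 + h + h**2 / 2 + h**3 / 6 + h**4 / 24     # truncated exponential series
```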
Solution to Exercise 12.10

Taylor series expansion of the truncation error yields
\[
T_n = (1 - \alpha - \beta)\,y'(x_n) + \big(\tfrac12 - \beta\gamma\big)h\,y''(x_n) + \tfrac16 h^2\,y'''(x_n)
- \tfrac12\beta\gamma^2 h^2\,\big(f_{xx} + 2y'f_{xy} + (y')^2 f_{yy}\big)\big|_{x=x_n} + O(h^3).
\]
For consistency, we require that $\alpha + \beta = 1$. For second-order accuracy, we demand, in addition, that $\beta\gamma = \tfrac12$.

Suppose that $\alpha + \beta = 1$ and $\beta\gamma = \tfrac12$, and apply the resulting numerical method to $y' = y$, $y(0) = 1$, whose exact solution is $y(x) = e^x$. Then,
\[
T_n = \tfrac16 h^2 e^{x_n} - \tfrac12\beta\gamma^2 h^2\,(0 + 0 + 0) = \tfrac16 h^2 e^{x_n},
\]
which cannot be made equal to zero for any choice of $\alpha$, $\beta$ and $\gamma$. Therefore, there is no choice of these parameters for which the order of the method exceeds 2.

Suppose that $\alpha + \beta = 1$ and $\beta\gamma = \tfrac12$, and apply the method to $y' = -\lambda y$, $y(0) = 1$, where $\lambda > 0$. Thus, $y_0 = 1$ and
\[
y_{n+1} = \big(1 - \lambda h + \tfrac12(\lambda h)^2\big)\,y_n, \qquad n = 0,1,\dots
\]
Therefore,
\[
y_n = \big(1 - \lambda h + \tfrac12(\lambda h)^2\big)^n, \qquad n = 0,1,\dots
\]
The sequence $(y_n)$ is bounded if, and only if, $\big|1 - \lambda h + \tfrac12(\lambda h)^2\big| \le 1$, which holds if, and only if, $0 < \lambda h \le 2$. Now, assuming this restriction on $h$,
\begin{align*}
y(x_n) - y_n &= e^{-\lambda x_n} - \big(1 - \lambda h + \tfrac12(\lambda h)^2\big)^n
= \big(e^{-\lambda h}\big)^n - \big(1 - \lambda h + \tfrac12(\lambda h)^2\big)^n\\
&= \Big[e^{-\lambda h} - \big(1 - \lambda h + \tfrac12(\lambda h)^2\big)\Big]
\Big[\big(e^{-\lambda h}\big)^{n-1} + \big(e^{-\lambda h}\big)^{n-2}\big(1 - \lambda h + \tfrac12(\lambda h)^2\big) + \dots + \big(1 - \lambda h + \tfrac12(\lambda h)^2\big)^{n-1}\Big].
\end{align*}
Each of the $n$ terms in the second bracket is bounded by 1, and therefore
\[
\Big|\big(e^{-\lambda h}\big)^{n-1} + \big(e^{-\lambda h}\big)^{n-2}\big(1 - \lambda h + \tfrac12(\lambda h)^2\big) + \dots + \big(1 - \lambda h + \tfrac12(\lambda h)^2\big)^{n-1}\Big| \le n.
\]
On the other hand (for example, by the Leibniz criterion for the remainder in a convergent alternating series),
\[
\Big|e^{-\lambda h} - \big(1 - \lambda h + \tfrac12(\lambda h)^2\big)\Big| \le \tfrac16(\lambda h)^3,
\]
and therefore,
\[
|y(x_n) - y_n| \le \tfrac16(\lambda h)^3\,n = \tfrac16\,\lambda^3 h^2\,x_n,
\]
where $x_n = nh$, $n \ge 0$.
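The final error bound is easy to verify along an actual run of the recursion $y_{n+1} = (1 - \lambda h + \tfrac12(\lambda h)^2)\,y_n$; a minimal sketch (parameter values are ours, chosen so that $\lambda h \le 2$):

```python
from math import exp

lam, h, nmax = 1.5, 0.05, 200          # lam*h = 0.075 <= 2, so iterates stay bounded
r = 1.0 - lam * h + 0.5 * (lam * h) ** 2   # one-step factor of the method
y, ok = 1.0, True
for n in range(1, nmax + 1):
    y *= r
    bound = (lam * h) ** 3 * n / 6.0   # = lam^3 h^2 x_n / 6
    ok = ok and abs(exp(-lam * n * h) - y) <= bound + 1e-15
```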
Solution to Exercise 12.11

For this method, $\alpha_3 = 1$, $\alpha_2 = a$, $\alpha_1 = -a$, $\alpha_0 = -1$, $\beta_3 = 0$, $\beta_2 = b$, $\beta_1 = b$, $\beta_0 = 0$. Therefore,
\begin{align*}
C_0 &= 0,\\
C_1 &= a + 3 - 2b,\\
C_2 &= \tfrac{3a + 9}{2} - 3b,\\
C_3 &= \tfrac{7a + 27}{6} - \tfrac{5b}{2},\\
C_4 &= \tfrac{5a + 27}{8} - \tfrac{3b}{2},\\
C_5 &= \tfrac{31a + 243}{120} - \tfrac{17b}{24}.
\end{align*}
Setting $C_1 = 0$ implies that $2b - a = 3$, and since $C_2 = \tfrac32(a + 3 - 2b)$, then $C_2 = 0$ also. Setting $C_3 = 0$ implies that $15b - 7a = 27$. Solving the linear system $2b - a = 3$, $15b - 7a = 27$ gives $a = 9$, $b = 6$; with this choice, we have $C_1 = C_2 = C_3 = C_4 = 0$, and also $C_5 = 1/10 \ne 0$. Therefore, with $a = 9$, $b = 6$, the method is 4th-order accurate.

For such $a$ and $b$, the first characteristic polynomial of the method is
\[
\rho(z) = z^3 + 9(z^2 - z) - 1 = (z - 1)(z^2 + 10z + 1),
\]
which has the roots $z_1 = 1$, $z_{2,3} = -5 \pm \sqrt{24}$. Since $|{-5} - \sqrt{24}| > 1$, the Root Condition is violated by $z_3$, and the method is not zero-stable.
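The order conditions above can be recomputed in exact rational arithmetic; a sketch (helper name `order_coeffs` is ours), using $C_0 = \sum_j \alpha_j$ and $C_q = \sum_j j^q\alpha_j/q! - \sum_j j^{q-1}\beta_j/(q-1)!$:

```python
from fractions import Fraction as F
from math import sqrt

def order_coeffs(a, b):
    """C_0..C_5 for y_{n+3} + a y_{n+2} - a y_{n+1} - y_n = h b (f_{n+2} + f_{n+1})."""
    alpha = {0: F(-1), 1: -a, 2: a, 3: F(1)}
    beta = {0: F(0), 1: b, 2: b, 3: F(0)}
    fact = [1, 1, 2, 6, 24, 120]
    Cs = [sum(alpha.values())]
    for q in range(1, 6):
        Cs.append(sum(F(j) ** q * alpha[j] for j in alpha) / fact[q]
                  - sum(F(j) ** (q - 1) * beta[j] for j in beta) / fact[q - 1])
    return Cs

Cs = order_coeffs(F(9), F(6))
spurious = -5.0 - sqrt(24.0)    # root of z^2 + 10z + 1 from rho(z) = (z-1)(z^2+10z+1)
```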
Solution to Exercise 12.12

For this method, $\alpha_3 = 1$, $\alpha_2 = 0$, $\alpha_1 = b$, $\alpha_0 = a$, $\beta_3 = 0$, $\beta_2 = 1$, $\beta_1 = 0$, $\beta_0 = 0$. These give
\[
C_0 = 1 + b + a, \qquad C_1 = 2 + b.
\]
Setting $C_0 = 0$, $C_1 = 0$, to achieve consistency of the method, implies that $a = 1$, $b = -2$. Next,
\[
C_2 = \frac{9 + b}{2} - 2 = \frac72 - 2 = \frac32 \ne 0.
\]
Therefore, the method is consistent and exactly first-order accurate.

To investigate the zero-stability of the method for $a = 1$ and $b = -2$, consider the first characteristic polynomial of the method:
\[
\rho(z) = z^3 - 2z + 1 = (z - 1)(z^2 + z - 1).
\]
This has roots
\[
z_1 = 1, \qquad z_{2,3} = \tfrac12\big({-1} \pm \sqrt5\big),
\]
one of which ($z_3 = \tfrac12(-1 - \sqrt5)$) is outside the closed unit disc. By the Root Condition, the method is not zero-stable. Further, by Dahlquist's Equivalence Theorem the method is not convergent.

Let us apply the method to $y' = 0$, $y(0) = 1$, whose exact solution is $y(x) \equiv 1$. Then,
\[
y_{n+3} - 2y_{n+1} + y_n = 0.
\]
The general solution of this third-order linear recurrence relation is
\[
y_n = A\cdot 1^n + B\left(\frac{-1 + \sqrt5}{2}\right)^n + C\left(\frac{-1 - \sqrt5}{2}\right)^n, \qquad n \ge 0,
\]
where $A$, $B$, $C$ are arbitrary constants. For suitable starting values, $A = 0$, $B = 0$, $C = 1$, and therefore
\[
y_n = (-1)^n\left(\frac{1 + \sqrt5}{2}\right)^n, \qquad n \ge 0,
\]
which means that the numerical solution oscillates between positive and negative values whose absolute value increases exponentially with $n$, exhibiting `zero-instability'. Clearly, the solution bears no resemblance to the exact solution $y(x) \equiv 1$.
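The zero-instability is visible as soon as the starting values are not exact: with $y_0 = y_1 = y_2 = 1$ the recurrence reproduces $y_n \equiv 1$ exactly, but an arbitrarily small perturbation of one starting value excites the growing mode $\tfrac12(-1-\sqrt5)$. A sketch (helper name `run` is ours):

```python
def run(perturb):
    """Iterate y_{n+3} = 2 y_{n+1} - y_n (the method applied to y' = 0)."""
    y = [1.0, 1.0, 1.0 + perturb]   # starting values, last one perturbed
    for _ in range(100):
        y.append(2.0 * y[-2] - y[-3])
    return y

clean = run(0.0)      # exact starting values: stays at 1 forever
noisy = run(1e-10)    # tiny perturbation: explodes with oscillating sign
```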
Solution to Exercise 12.13

The first characteristic polynomial of the method is $\rho(z) = z^2 - a$, $a > 0$, whose two roots are $z = \pm a^{1/2}$. To ensure zero-stability (using the Root Condition), we need to assume that $0 < a \le 1$.

To explore the accuracy of the method, note that $\alpha_2 = 1$, $\alpha_1 = 0$, $\alpha_0 = -a$, $\beta_2 = \tfrac13$, $\beta_1 = \tfrac43$, $\beta_0 = \tfrac13$. Therefore,
\[
C_0 = 1 - a = 0 \qquad\text{if and only if}\qquad a = 1.
\]
With $a = 1$ we also have $C_1 = C_2 = C_3 = C_4 = 0$, while $C_5 = -\tfrac1{90}$. Therefore, the method is 4th-order accurate. Since the method is zero-stable and 4th-order accurate (in particular it is consistent) when $a = 1$, it follows from Dahlquist's Equivalence Theorem that it is 4th-order convergent when $a = 1$.
Solution to Exercise 12.14

Let us explore the zero-stability of the methods.

a) $\rho(z) = z - 1$. This has only one root, $z = 1$. By the Root Condition, the method is zero-stable.

b) $\rho(z) = z^2 + z - 2$. This has roots $z_1 = 1$ and $z_2 = -2$. The second root violates the Root Condition; the method is not zero-stable.

c) $\rho(z) = z^2 - 1$. This has roots $z_{1,2} = \pm1$. By the Root Condition the method is zero-stable.

d) $\rho(z) = z^2 - z = z(z - 1)$, whose roots are $z_1 = 0$ and $z_2 = 1$, so the method is zero-stable.

e) $\rho(z) = z^2 - z = z(z - 1)$, so the method is zero-stable.

Let us explore the absolute stability of the methods a) and c).

a) The stability polynomial of the method is $\pi(z; \bar h) = z - 1 - \lambda h$, whose only root is $z = 1 + \lambda h$. For the method to be absolutely stable, it is necessary and sufficient that $|z| < 1$, i.e. $-1 < 1 + \lambda h < 1$. Hence, the method is absolutely stable if, and only if, $0 < h < 2/(-\lambda)$.

c) Here $\pi(z; \bar h) = z^2 - 1 - \tfrac13\lambda h\,(z^2 + 4z + 1)$. Now $\pi(z; \bar h) = 0$ if, and only if,
\[
z^2 - \frac{4\lambda h}{3 - \lambda h}\,z - \frac{3 + \lambda h}{3 - \lambda h} = 0.
\]
Note that since $\lambda < 0$ and $h > 0$, we have $3 - \lambda h \ne 0$. Given a quadratic polynomial of the form $z^2 - az + b$, for both roots to lie in the open interval $(-1, 1)$ it is necessary and sufficient that the polynomial is positive at $z = 1$ and $z = -1$, that it is negative at the stationary point $z = a/2$ (i.e., $a^2/4 - a^2/2 + b = b - a^2/4 < 0$), and that $-1 < a/2 < 1$ (i.e., $-2 < a < 2$). In our case, the first two requirements yield
\[
1 - \frac{4\lambda h}{3 - \lambda h} - \frac{3 + \lambda h}{3 - \lambda h} > 0, \quad\text{i.e.}\quad \frac{-6\lambda h}{3 - \lambda h} > 0,
\qquad\text{and}\qquad
1 + \frac{4\lambda h}{3 - \lambda h} - \frac{3 + \lambda h}{3 - \lambda h} > 0, \quad\text{i.e.}\quad \frac{2\lambda h}{3 - \lambda h} > 0,
\]
with $\lambda < 0$. These inequalities yield the contradictory requirements that $3 - \lambda h > 0$ and $3 - \lambda h < 0$. The method is not absolutely stable for any value of $h > 0$.
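The lack of absolute stability of method c) shows up in practice as a slowly growing oscillation. A sketch (problem and parameter values are ours), applying the method to $y' = \lambda y$ with $\lambda = -1$ and solving the implicit relation $(1 - \bar h/3)\,y_{n+2} = (1 + \bar h/3)\,y_n + \tfrac43\bar h\,y_{n+1}$ at each step:

```python
from math import exp

lam, h, nsteps = -1.0, 0.1, 500
H = lam * h
y = [1.0, exp(H)]                 # two exact starting values
for _ in range(nsteps):
    y.append(((1.0 + H / 3.0) * y[-2] + (4.0 * H / 3.0) * y[-1])
             / (1.0 - H / 3.0))
# The exact solution at the end point is exp(lam * (nsteps + 1) * h), which is
# negligibly small; the numerical solution is instead dominated by the growing,
# sign-alternating spurious mode.
```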
Solution to Exercise 12.15

For this method, $\alpha_2 = 1$, $\alpha_1 = -(1+a)$, $\alpha_0 = 1$, $\beta_2 = (3-a)/4$, $\beta_1 = 0$, $\beta_0 = (1-3a)/4$. With these values, $C_0 = 1 - a$, which is equal to zero if, and only if, $a = 1$; for $a \ne 1$ the method is not consistent. Let us suppose that $a = 1$. Then, $C_1 = C_2 = C_3 = 0$ and $C_4 = -1/12 \ne 0$, which means that the method is 3rd-order accurate when $a = 1$.

To check zero-stability, for $a = 1$, we consider the first characteristic polynomial $\rho(z) = z^2 - 2z + 1 = (z-1)^2$. This has a double root $z = 1$ on the unit circle. The Root Condition implies that the method is not zero-stable.

For absolute stability with $a = 1$, consider the stability polynomial
\[
\pi(z; \bar h) = z^2 - 2z + 1 - \tfrac12\bar h\,(z^2 - 1).
\]
Since $\rho(1) = 0$ and $\sigma(1) = \tfrac12(1^2 - 1) = 0$, we have $\pi(1; \bar h) = 0$ for every $\bar h$, so $z = 1$ is always a root. Consequently there is no $h > 0$ for which both roots lie strictly inside the unit disc: the method is not absolutely stable for any $h > 0$.
Solution to Exercise 12.16

We seek a method of the form
\[
a\,y_{n+2} + b\,y_{n+1} + c\,y_n = h f_{n+2}.
\]
For this method, $\alpha_2 = a$, $\alpha_1 = b$, $\alpha_0 = c$, $\beta_2 = 1$, $\beta_1 = 0$, $\beta_0 = 0$. We demand that
\begin{align*}
C_0 &= a + b + c = 0,\\
C_1 &= 2a + b - 1 = 0,\\
C_2 &= 2a + \tfrac12 b - 2 = 0.
\end{align*}
Solving the resulting linear system for $a$, $b$, $c$ gives
\[
a = \tfrac32, \qquad b = -2, \qquad c = \tfrac12.
\]
This choice ensures that the method is 2nd-order accurate. The first characteristic polynomial of the resulting method is
\[
\rho(z) = \tfrac32 z^2 - 2z + \tfrac12,
\]
which has roots $z_1 = 1$ and $z_2 = 1/3$. By the Root Condition the method is zero-stable, and Dahlquist's Equivalence Theorem implies that the method is 2nd-order convergent.

The stability polynomial of the method is
\[
\pi(z; \bar h) = \big(\tfrac32 - \bar h\big)z^2 - 2z + \tfrac12, \qquad \bar h = \lambda h.
\]
For $\bar h < 0$ the leading coefficient $A = \tfrac32 - \bar h$ is positive, and
\[
\pi(1; \bar h) = -\bar h > 0, \qquad \pi(-1; \bar h) = 4 - \bar h > 0, \qquad \pi(0; \bar h) = \tfrac12 > 0,
\]
while the stationary point $z_* = 1/A$ lies in $(0, \tfrac23)$. If the roots are real then, since their sum $2/A$ and product $\tfrac12/A$ are positive, both roots are positive; and since $\pi(1; \bar h) > 0$ while $z_* < 1$ lies between the roots, both are less than 1. If the roots are complex, the square of their common modulus is $\tfrac12/A = 1/(3 - 2\bar h) < 1$. In either case both roots lie strictly inside the unit disc. The method is therefore absolutely stable for all $\bar h \in (-\infty, 0)$.
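The method constructed here is the two-step backward differentiation formula (BDF2), and its unbounded stability interval is exactly what makes it useful for stiff problems. A sketch (parameter values are ours): on $y' = \lambda y$ with $\lambda h = -100$, far outside Euler's stability interval, the BDF2 iterates decay while Euler's explode.

```python
from math import exp

lam, h = -1000.0, 0.1               # h*lam = -100
y = [1.0, exp(lam * h)]             # exact value for the second starting value
for _ in range(100):
    # (3/2) y_{n+2} - 2 y_{n+1} + (1/2) y_n = h*lam*y_{n+2}, solved for y_{n+2}
    y.append((2.0 * y[-1] - 0.5 * y[-2]) / (1.5 - lam * h))
euler_size = abs((1.0 + lam * h) ** 100)   # Euler's factor |1 + h*lam|^n blows up
```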
Solution to Exercise 12.17

The $\theta$-method has stability polynomial
\[
\pi(z; \bar h) = z - 1 - \bar h\,\big[(1 - \theta) + \theta z\big],
\]
whose only root is
\[
z = \frac{1 + \bar h\,(1 - \theta)}{1 - \bar h\,\theta},
\]
where $\bar h = \lambda h$ and $\lambda = a + \imath b$ is a complex number with negative real part, $a < 0$. Now,
\[
|z|^2 = \frac{\big(1 + ah(1-\theta)\big)^2 + b^2h^2(1-\theta)^2}{\big(1 - ah\theta\big)^2 + b^2h^2\theta^2}.
\]
We have $|z| < 1$ if, and only if,
\[
\frac{2a}{a^2 + b^2} < h\,(2\theta - 1).
\]
For the method to be A-stable we need this to be true for all $\lambda$ with negative real part $a$, which is the case if, and only if, $2\theta - 1 \ge 0$, that is, when $\theta \ge \tfrac12$.
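The criterion $\theta \ge \tfrac12$ can be probed numerically through the amplification factor; a sketch (helper name `growth` is ours, sample values chosen by us):

```python
def growth(theta, lam, h):
    """Root of the theta-method stability polynomial for y' = lam*y."""
    H = lam * h
    return (1.0 + H * (1.0 - theta)) / (1.0 - H * theta)

lams = [complex(a, b) for a in (-0.1, -1.0, -10.0) for b in (0.0, 1.0, -5.0)]
hs = (0.01, 1.0, 100.0)
stable_half = all(abs(growth(0.5, lam, h)) < 1.0 for lam in lams for h in hs)
stable_one = all(abs(growth(1.0, lam, h)) < 1.0 for lam in lams for h in hs)
# theta = 1/4 < 1/2 violates the criterion, e.g. for lam*h = -10:
violation = abs(growth(0.25, -10.0, 1.0)) > 1.0
```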
Solution to Exercise 12.18

The quadratic Lagrange interpolation polynomial of $y$ is
\[
p_2(x) = \frac{(x - x_{n+1})(x - x_{n+2})}{(x_n - x_{n+1})(x_n - x_{n+2})}\,y(x_n)
+ \frac{(x - x_n)(x - x_{n+2})}{(x_{n+1} - x_n)(x_{n+1} - x_{n+2})}\,y(x_{n+1})
+ \frac{(x - x_n)(x - x_{n+1})}{(x_{n+2} - x_n)(x_{n+2} - x_{n+1})}\,y(x_{n+2}).
\]
Differentiation of this gives
\[
p_2'(x_{n+2}) = \frac{1}{2h}\big[3y(x_{n+2}) - 4y(x_{n+1}) + y(x_n)\big].
\]
By expanding $y(x_{n+1})$ and $y(x_n)$ into Taylor series about $x_{n+2}$ we get
\[
p_2'(x_{n+2}) = y'(x_{n+2}) + O(h^2).
\]
The truncation error of BDF2 is
\[
T_n = \frac{3y(x_{n+2}) - 4y(x_{n+1}) + y(x_n)}{2h} - f(x_{n+2}, y(x_{n+2})).
\]
Noting that the last term is equal to $y'(x_{n+2})$ and expanding each of $y(x_{n+1})$ and $y(x_n)$ into a Taylor series about $x_{n+2}$, we get
\[
T_n = \tfrac13 h^2\,y'''(\eta_n) - \tfrac23 h^2\,y'''(\zeta_n),
\]
where $\eta_n \in (x_{n+1}, x_{n+2})$ and $\zeta_n \in (x_n, x_{n+2})$. Therefore $T_n = O(h^2)$.
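The $O(h^2)$ behaviour of the 3-point backward difference can be confirmed on a smooth test function; a sketch (helper name `bdf2_diff` is ours), with $y = e^x$, where the error should shrink by a factor of about four when $h$ is halved:

```python
from math import exp

def bdf2_diff(h, x=0.0):
    """(3y(x+2h) - 4y(x+h) + y(x)) / (2h) - y'(x+2h) for y = exp:
    the BDF2 difference error at x + 2h."""
    return (3.0 * exp(x + 2 * h) - 4.0 * exp(x + h) + exp(x)) / (2.0 * h) \
        - exp(x + 2 * h)

e1, e2 = abs(bdf2_diff(0.02)), abs(bdf2_diff(0.01))
ratio = e1 / e2     # approaches 4 as h -> 0, confirming second-order accuracy
```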
Solution to Exercise 12.19

The implicit Runge–Kutta method has the form
\[
y_{n+1} = y_n + h\,(b_1 k_1 + b_2 k_2),
\]
where, in our case,
\begin{align*}
k_1 &= f(x_n + c_1 h,\; y_n + a_{11}k_1 h + a_{12}k_2 h),\\
k_2 &= f(x_n + c_2 h,\; y_n + a_{21}k_1 h + a_{22}k_2 h).
\end{align*}
When applied to $y' = \lambda y$, we get
\[
k_1 = \lambda\,(y_n + a_{11}k_1 h + a_{12}k_2 h), \qquad
k_2 = \lambda\,(y_n + a_{21}k_1 h + a_{22}k_2 h).
\]
Rearranging this,
\begin{align*}
(1 - \lambda h a_{11})\,k_1 - \lambda h a_{12}\,k_2 &= \lambda y_n,\\
-\lambda h a_{21}\,k_1 + (1 - \lambda h a_{22})\,k_2 &= \lambda y_n.
\end{align*}
The matrix of this linear system is $I - \lambda h A$, and the corresponding determinant is $\Delta = \det(I - \lambda h A)$. In expanded form,
\[
\Delta = 1 - \lambda h\,(a_{11} + a_{22}) + (\lambda h)^2\,(a_{11}a_{22} - a_{12}a_{21}).
\]
Let us suppose that $\Delta \ne 0$. Solving the linear system for $k_1$ and $k_2$ gives
\[
k_1 = \frac{\lambda y_n\big(1 + \lambda h\,(a_{12} - a_{22})\big)}{\Delta}, \qquad
k_2 = \frac{\lambda y_n\big(1 + \lambda h\,(a_{21} - a_{11})\big)}{\Delta}.
\]
For the method with the given Butcher tableau,
\[
c_1 = \tfrac16\big(3 - \sqrt3\big), \qquad c_2 = \tfrac16\big(3 + \sqrt3\big), \qquad b_1 = b_2 = \tfrac12,
\]
\[
a_{11} = \tfrac14, \qquad a_{12} = \tfrac1{12}\big(3 - 2\sqrt3\big), \qquad
a_{21} = \tfrac1{12}\big(3 + 2\sqrt3\big), \qquad a_{22} = \tfrac14.
\]
Therefore,
\[
k_1 = \lambda y_n\Big(1 - \tfrac{\sqrt3}{6}\lambda h\Big)\Big/\Delta, \qquad
k_2 = \lambda y_n\Big(1 + \tfrac{\sqrt3}{6}\lambda h\Big)\Big/\Delta,
\]
which then gives
\[
y_{n+1} = \Big(1 + \frac{\lambda h}{\Delta}\Big)\,y_n, \qquad\text{with}\qquad
\Delta = 1 - \tfrac12\lambda h + \tfrac1{12}(\lambda h)^2.
\]
Thus,
\[
y_{n+1} = \frac{1 + \tfrac12\lambda h + \tfrac1{12}(\lambda h)^2}{1 - \tfrac12\lambda h + \tfrac1{12}(\lambda h)^2}\,y_n.
\]
Writing the quadratic polynomials in the numerator and the denominator in factorised form, we have
\[
y_{n+1} = \frac{(\lambda h + 3 + \imath\sqrt3)(\lambda h + 3 - \imath\sqrt3)}{(\lambda h - 3 - \imath\sqrt3)(\lambda h - 3 + \imath\sqrt3)}\,y_n.
\]
The numerical solution will exhibit (exponential) decay for complex $\lambda = a + \imath b$ with negative real part, $a < 0$, provided that
\[
\left|\frac{(\lambda h + 3 + \imath\sqrt3)(\lambda h + 3 - \imath\sqrt3)}{(\lambda h - 3 - \imath\sqrt3)(\lambda h - 3 + \imath\sqrt3)}\right| < 1.
\]
Writing $p = 3 + \imath\sqrt3$, this can be written as follows:
\[
\big|(\lambda h + p)(\lambda h + \bar p)\big| < \big|(\lambda h - p)(\lambda h - \bar p)\big|.
\]
The square of the expression on the left-hand side, divided by the square of the right-hand side, can be rewritten as
\[
\frac{|\lambda h + p|^2\,|\lambda h + \bar p|^2}{|\lambda h - p|^2\,|\lambda h - \bar p|^2}
= \frac{(A + \mathrm{Re}\,X)(A + \mathrm{Re}\,X')}{(A - \mathrm{Re}\,X)(A - \mathrm{Re}\,X')},
\]
where $A = |\lambda h|^2 + |p|^2$, $X = 2\bar p\,\lambda h$ and $X' = 2p\,\lambda h$, since $|\lambda h \pm p|^2 = A \pm \mathrm{Re}\,X$ and $|\lambda h \pm \bar p|^2 = A \pm \mathrm{Re}\,X'$. Now, each of $A \pm \mathrm{Re}\,X$ and $A \pm \mathrm{Re}\,X'$ is a squared modulus and is therefore nonnegative, while
\[
\mathrm{Re}\,X + \mathrm{Re}\,X' = 2\,\mathrm{Re}\big((p + \bar p)\,\lambda h\big) = 12ha < 0.
\]
We deduce that
\[
(A + \mathrm{Re}\,X)(A + \mathrm{Re}\,X') - (A - \mathrm{Re}\,X)(A - \mathrm{Re}\,X') = 2A\,(\mathrm{Re}\,X + \mathrm{Re}\,X') < 0,
\]
so the ratio above is less than 1. Hence $|y_{n+1}| < |y_n|$ whenever $\mathrm{Re}\,\lambda < 0$: the method is A-stable.
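The growth factor derived above is a rational function of $\bar h = \lambda h$ (the $(2,2)$ Padé approximant of the exponential), so the A-stability conclusion and the 4th-order agreement with $e^{\bar h}$ can both be spot-checked directly; a sketch (helper name `R` and sample points are ours):

```python
from cmath import exp as cexp

def R(H):
    """Growth factor of the 2-stage Gauss method applied to y' = lam*y,
    with H = lam*h: R(H) = (1 + H/2 + H^2/12) / (1 - H/2 + H^2/12)."""
    return (1.0 + H / 2.0 + H * H / 12.0) / (1.0 - H / 2.0 + H * H / 12.0)

samples = [complex(a, b) for a in (-0.01, -1.0, -10.0, -1000.0)
           for b in (0.0, 3.0, -50.0)]
all_decay = all(abs(R(H)) < 1.0 for H in samples)   # |R| < 1 in the left half-plane
pade_err = abs(R(0.1) - cexp(0.1))                  # O(H^5) agreement with exp
```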