[solution] Linear Algebra 2nd (Kwak, Hong) Birkhauser
Selected Answers and Hints

Chapter 1 Problems

1.2 (1) Inconsistent. (2) (x1, x2, x3, x4) = (−1 − 4t, 6 − 2t, 2 − 3t, t) for any t ∈ R.
1.3 (1) (x, y, z) = (t, −t, t). (3) (w, x, y, z) = (2, 0, 1, 3).
1.4 (1) b1 + b2 − b3 = 0. (2) For any bi's.
1.7 a = −17/2, b = 13/2, c = 13/4, d = −2.
1.9 Consider the matrices A = [2 4; 3 6], B = [2 1; 3 4], C = [8 7; 0 1].
1.10 Compare the diagonal entries of AA^T and A^T A.
1.12 (1) Infinitely many for a = 4, exactly one for a ≠ ±4, and none for a = −4. (2) Infinitely many for a = 2, none for a = −3, and exactly one otherwise.
1.14 (3) I = I^T = (AA^{-1})^T = (A^{-1})^T A^T means by definition (A^T)^{-1} = (A^{-1})^T.
1.17 Any permutation on n objects can be obtained by a finite number of interchanges of two objects.
1.21 Consider the case that some di is zero.
1.22 x = 2, y = 3, z = 1.
1.23 No, in general. Yes, if the system is consistent.
1.24 L = [1 0 0; −1 1 0; 0 −1 1], U = [1 −1 0; 0 1 −1; 0 0 1].
1.25 (1) Consider the (i, j)-entries of AB for i < j. (2) A can be written as a product of lower triangular elementary matrices.
1.26 L = [1 0 0; −1/2 1 0; 0 −2/3 1], D = [2 0 0; 0 3/2 0; 0 0 4/3], U = [1 −1/2 0; 0 1 −2/3; 0 0 1].
1.27 There are four possibilities for P.
1.28 (1) I1 = 0.5, I2 = 6, I3 = 0.55. (2) I1 = 0, I2 = I3 = 1, I4 = I5 = 5.
1.30 x = k(0.35, 0.40, 0.25)^T for k > 0.
1.31 A = [0.0 0.1 0.8; 0.4 0.7 0.1; 0.5 0.0 0.1] with d = (90, 10, 30)^T.
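The point of Problem 1.9 is that left cancellation fails for a singular A. With the entries grouped as reconstructed here (the grouping is an assumption about the garbled print), AB = AC even though B ≠ C:

```python
import numpy as np

# Problem 1.9 (entry grouping reconstructed): A is singular, so AB = AC
# does not force B = C.
A = np.array([[2, 4], [3, 6]])
B = np.array([[2, 1], [3, 4]])
C = np.array([[8, 7], [0, 1]])

print(abs(np.linalg.det(A)) < 1e-9)      # True: A is not invertible
print(np.array_equal(A @ B, A @ C))      # True: same product ...
print(np.array_equal(B, C))              # False: ... from different factors
```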
Exercises 1.10

1.1 Row-echelon forms are A, B, D, F. Reduced row-echelon forms are A, B, F.
1.2 (1) [1 −3 2 1 2; 0 0 1 −1/4 3/4; 0 0 0 0 0; 0 0 0 0 0].
1.3 (1) [1 −3 0 3/2 1/2; 0 0 1 −1/4 3/4; 0 0 0 0 0; 0 0 0 0 0].
1.4 (1) x1 = 0, x2 = 1, x3 = −1, x4 = 2. (2) x = 17/2, y = 3, z = −4.
1.5 (1) and (2).
1.6 For any bi's.
1.7 b1 − 2b2 + 5b3 ≠ 0.
1.8 (1) Take x to be the transpose of each row vector of A.
1.10 Try it with several kinds of diagonal matrices for B.
1.11 A^k = [1 2k 3k(k − 1); 0 1 3k; 0 0 1].
1.13 See Problem 1.11.
1.15 (1) A^{-1}AB = B. (2) A^{-1}AC = C = A + I.
1.16 a = 0, c^{-1} = b ≠ 0.
1.17 A^{-1} = [1 −1 0 0; 0 1/2 −1/2 0; 0 0 1/3 −1/3; 0 0 0 1/4], B^{-1} = [13/8 −1/2 −1/8 1/2; −15/8 … 3/8 …; 5/4 0 −1/4 …; …].
1.18 A^{-1} = (1/15)[8 −19 2; 1 −23 4; 4 −2 1].
1.21 (1) x = A^{-1}b = [1/3 1/6 1/6; −4/3 −5/3 4/3; −1/3 −2/3 1/3](2, 5, 7)^T = (8/3, −5/3, −5/3)^T.
1.22 (1) A = [1 0; 4 1][2 0; 0 3][1 1/2; 0 1] = LDU. (2) L = A, D = U = I.
1.23 (1) A = 1 0 0 / 1 0 0 / 1 2 3 / 0 0 1 1 / 2 1 0 / 0 2 3 / 1 1 0 / 0 −1 / 0 0 1. (2) [1 0; b/a 1][a 0; 0 d − b²/a][1 b/a; 0 1].
1.24 c = [2 −1 3]^T, x = [4 2 3]^T.
1.25 (2) A = [1 0 0; 1 1 0; 1 1 1][1 0 0; 0 3 0; 0 0 2][1 1 1; 0 1 4/3; 0 0 1].
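The closed form claimed in Exercise 1.11 can be verified numerically. The base matrix A below is inferred from the formula at k = 1 (an assumption, since the exercise statement is not reproduced here):

```python
import numpy as np

# Exercise 1.11: closed form for A^k, with A taken as the k = 1 case.
A = np.array([[1, 2, 0],
              [0, 1, 3],
              [0, 0, 1]])

def A_pow(k):
    # Claimed closed form: upper triangular with entries 2k, 3k, 3k(k - 1).
    return np.array([[1, 2*k, 3*k*(k - 1)],
                     [0, 1,   3*k],
                     [0, 0,   1]])

ok = all(np.array_equal(np.linalg.matrix_power(A, k), A_pow(k)) for k in range(1, 6))
print(ok)  # True: the formula matches A^k for k = 1..5
```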
1.26 (1) (A^k)^{-1} = (A^{-1})^k. (2) A^{n−1} = 0 if A ∈ M_{n×n}. (3) (I − A)(I + A + · · · + A^{k−1}) = I − A^k.
1.27 (1) A = [1 1; 0 0]. (2) A = A^{-1}A² = A^{-1}A = I.
1.28 Exactly seven of them are true. (8) If AB has the (right) inverse C, then A^{-1} = BC. (10) Consider a permutation matrix [0 1; 1 0].
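The telescoping identity in 1.26 (3) is easy to confirm on a sample matrix (the matrix below is arbitrary, not from the text):

```python
import numpy as np

# Exercise 1.26 (3): (I - A)(I + A + ... + A^(k-1)) = I - A^k.
A = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 1.0],
              [2.0, 1.0, 0.0]])
I = np.eye(3)
k = 5
geometric_sum = sum(np.linalg.matrix_power(A, i) for i in range(k))  # I + A + ... + A^(k-1)
lhs = (I - A) @ geometric_sum
rhs = I - np.linalg.matrix_power(A, k)
print(np.allclose(lhs, rhs))  # True
```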
Chapter 2 Problems

2.2 (2), (4).
2.3 (1), (2), (4).
2.5 See Problem 1.11.
2.6 For any square matrix A, A = (A + A^T)/2 + (A − A^T)/2.
2.7 Note that any vector v in W is of the form a1x1 + a2x2 + · · · + amxm, which is a vector in U.
2.10 tr(AB − BA) = 0.
2.12 Linearly dependent.
2.13 Any basis for W must be a basis for V already, by Corollary 2.13.
2.15 (1) n − 1, (2) n(n + 1)/2, (3) n(n − 1)/2.
2.17 63a + 39b − 13c + 5d = 0.
2.18 If b1, . . ., bn denote the column vectors of B, then AB = [Ab1 · · · Abn].
2.19 Consider the matrix A from Example 2.19.
2.20 (1) rank = 3, nullity = 1. (2) rank = 2, nullity = 2.
2.21 Ax = b has a solution if and only if b ∈ C(A).
2.22 A^{-1}(AB) = B implies rank B = rank A^{-1}(AB) ≤ rank(AB).
2.24 By (2) of Theorem 2.24 and Corollary 2.21, a matrix A of rank r must have an invertible submatrix C of rank r. By (1) of the same theorem, the rank of C must be the largest.
2.25 dim(V + W) = 4 and dim(V ∩ W) = 1. A basis for V is {(1, 0, 0, 0), (0, −1, 1, 0), (0, −1, 0, 1)}, for W: {(−1, 1, 0, 0), (0, 0, 2, 1)}, and for V ∩ W: {(3, −3, 2, 1)}. Thus, dim(V + W) = 4 means V + W = R^4 and any basis for R^4 works for V + W.
2.28 A = 1 1 a 1 / 0 0 b 0 / 0 2 0 2 −1 2 / 1 1 1 1 / 0 4 d 0 / 1 2 3, and c = A^{-1}(2, 4, …) = (1, …).
Exercises 2.11

2.1 Consider 0(1, 1).
2.5 (1), (4).
2.6 No.
2.7 (1) p(x) = −p1(x) + 3p2(x) − 2p3(x).
2.11 {(1, 1, 0), (1, 0, 1)}.
2.12 2.
2.13 Consider {e^j = {a_i}_{i=1}^∞} where a_i = 1 if i = j and a_i = 0 otherwise.
2.14 (1) 0 = c1Ab1 + · · · + cpAbp = A(c1b1 + · · · + cpbp) implies c1b1 + · · · + cpbp = 0 since N(A) = 0, and this also implies ci = 0 for all i = 1, . . . , p since the columns of B are linearly independent. (2) B has a right inverse. (3) and (4): Look at (1) and (2) above.
2.15 (1) {(−5, 3, 1)}. (2) 3.
2.16 5!, and dependent.
2.17 (1) R(A) = ⟨(1, 2, 0, 3), (0, 0, 1, 2)⟩, C(A) = ⟨(5, 0, 1), (0, 5, 2)⟩, N(A) = ⟨(−2, 1, 0, 0), (−3, 0, −2, 1)⟩. (2) R(B) = ⟨(1, 1, −2, 2), (0, 2, 1, −5), (0, 0, 0, 1)⟩, C(B) = ⟨(1, −2, 0), (0, 1, 1), (0, 0, 1)⟩, N(B) = ⟨(5, −1, 2, 0)⟩.
2.18 rank = 2 when x = −3, rank = 3 when x ≠ −3.
2.20 Since uv^T = u[v1 · · · vn] = [v1u · · · vnu], each column vector of uv^T is of the form viu; that is, u spans the column space. Conversely, if A is of rank 1, then the column space is spanned by any one column of A, say the first column u of A, and the remaining columns are of the form viu, i = 2, . . . , n. Take v = [1 v2 · · · vn]^T. Then one can easily see that A = uv^T.
2.21 Four of them are true.
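The rank-1 characterization in Exercise 2.20 can be illustrated in both directions; the vectors below are a sample, not from the text:

```python
import numpy as np

# Exercise 2.20: a matrix has rank 1 iff it is an outer product u v^T.
u = np.array([[1.0], [2.0], [-1.0]])
v = np.array([[3.0], [0.5], [2.0], [1.0]])
A = u @ v.T                        # 3x4 outer product

print(np.linalg.matrix_rank(A))    # 1
# Conversely, recover a factorization from the rank-1 matrix itself:
u2 = A[:, [0]] / A[0, 0]           # first column, rescaled (assumes A[0,0] != 0)
v2 = A[[0], :].T                   # first row as a column
print(np.allclose(A, u2 @ v2.T))   # True
```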
Chapter 3 Problems

3.1 To show W is a subspace, see Theorem 3.2. Let Eij be the matrix with 1 at the (i, j)-th position and 0 elsewhere. Let Fk be the matrix with 1 at the (k, k)-th position, −1 at the (n, n)-th position and 0 elsewhere. Then the set {Eij, Fk : 1 ≤ i ≠ j ≤ n, k = 1, . . . , n − 1} is a basis for W. Thus dim W = n² − 1.
3.2 tr(AB) = Σ_{i=1}^n Σ_{k=1}^m a_{ik}b_{ki} = Σ_{k=1}^m Σ_{i=1}^n b_{ki}a_{ik} = tr(BA).
3.3 [0 1; 1 0], since it simply interchanges the coordinates x and y.
3.4 If yes, (2, 1) = T(−6, −2, 0) = −2T(3, 1, 0) = (−2, −2), a contradiction.
3.5 If a1v1 + a2v2 + · · · + akvk = 0, then 0 = T(a1v1 + a2v2 + · · · + akvk) = a1w1 + a2w2 + · · · + akwk implies ai = 0 for i = 1, . . . , k.
3.6 (1) If T(x) = T(y), then S ◦ T(x) = S ◦ T(y) implies x = y. (4) They are invertible.
3.7 (1) T(x) = T(y) if and only if T(x − y) = 0, i.e., x − y ∈ Ker(T) = {0}.
(2) Let {v1, . . . , vn} be a basis for V. If T is one-to-one, then the set {T(v1), . . . , T(vn)} is linearly independent, as the proof of Theorem 3.7 shows, and Corollary 2.13 shows it is a basis for V. Thus, any y ∈ V can be written as y = Σ_{i=1}^n ai T(vi) = T(Σ_{i=1}^n ai vi). Set x = Σ_{i=1}^n ai vi ∈ V. Then clearly T(x) = y, so T is onto. If T is onto, then for each i = 1, . . . , n there exists x^i ∈ V such that T(x^i) = vi. The set {x^1, . . . , x^n} is linearly independent in V, since if Σ ai x^i = 0, then 0 = T(Σ ai x^i) = Σ ai T(x^i) = Σ ai vi implies ai = 0 for all i = 1, . . . , n; thus it is a basis by Corollary 2.13 again. If T(x) = 0 for x = Σ ai x^i ∈ V, then 0 = T(x) = Σ ai T(x^i) = Σ ai vi implies ai = 0 for all i = 1, . . . , n, that is, x = 0. Thus Ker(T) = {0}.
3.8 Use the rotation R_{π/3} and the reflection [1 0; 0 −1] about the x-axis.
3.9 (1) (5, 2, 3). (2) (2, 3, 0).
3.12 (1) [T]_α = [2 −3 4; 5 −1 2; 4 7 0], [T]_β = [0 7 4; 2 −1 5; 4 −3 2].
3.13 [T]^β_α = [1 2 0 0; 1 0 −3 1; 0 2 3 4].
3.15 [S + T]_α = [3 0 0; 2 2 3; 2 3 3], [T ◦ S]_α = [3 2 0; 3 3 3; 6 5 3].
3.16 [S]^β_α = [1 −1 0; 1 1 0; 0 0 1], [T]_α = [2 3 0; 0 3 6; 0 0 4].
3.17 (2) [T]^β_α = [1 0; −1 1], [T^{-1}]^α_β = [1 0; 1 1].
3.18 [Id]^α_β = 5 −2 −3 / 7 1 4 / 3 −1 …, [Id]^β_α = 3 5 −10 / 2 2 1 1 1 1 −2 1.
3.19 [T]_α = [1 2 1; 0 −1 0; 1 0 4], [T]_β = [1 4 5; −1 −2 −6; 1 1 5].
3.20 Write B = Q^{-1}AQ for some invertible matrix Q. (1) det B = det(Q^{-1}AQ) = det Q^{-1} det A det Q = det A. (2) tr(B) = tr(Q^{-1}AQ) = tr(QQ^{-1}A) = tr(A) (see Problem 3.2). (3) Use Problem 2.21.
3.22 α* = {f1(x, y, z) = x − (1/2)y, f2(x, y, z) = (1/2)y, f3(x, y, z) = −x + z}.
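The similarity invariants proved in Problem 3.20 (det and trace) are easy to confirm numerically on a random conjugation:

```python
import numpy as np

# Problem 3.20: similar matrices B = Q^{-1} A Q share determinant and trace.
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
Q = rng.standard_normal((3, 3))          # almost surely invertible
B = np.linalg.inv(Q) @ A @ Q

print(np.isclose(np.linalg.det(B), np.linalg.det(A)))  # True
print(np.isclose(np.trace(B), np.trace(A)))            # True
```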
Exercises 3.10

3.1 (2).
3.2 ax³ + bx² + ax + c.
3.5 (1) Consider the decomposition v = (v + T(v))/2 + (v − T(v))/2.
3.6 (1) {(x, (3/2)x, 2x) ∈ R³ : x ∈ R}. (2) T^{-1}(r, s, t) = ((1/2)r, 2r − s, 7r − 3s − t).
3.7 (1) Since T ◦ S is one-to-one from V into V, T ◦ S is also onto, and so T is onto. Moreover, if S(u) = S(v), then T ◦ S(u) = T ◦ S(v) implies u = v. Thus S is one-to-one, and so onto. This implies T is one-to-one: if T(u) = T(v), then there exist x and y such that S(x) = u and S(y) = v. Thus T ◦ S(x) = T ◦ S(y) implies x = y, and so u = S(x) = S(y) = v.
3.8 Note that T cannot be one-to-one and S cannot be onto.
3.9 [5 4 −6 18; −4 −3 −2 0; 0 0 1 −12; 0 0 0 1].
3.12 (1) [−1/3 2/3; −5/3 1/3].
3.13 (1) [0 2; 3 −1], (2) [3 −4; 1 5].
3.14 (1) T(1, 0, 0) = (4, 0), T(1, 1, 0) = (1, 3), T(1, 1, 1) = (4, 3). (2) T(x, y, z) = (4x − 2y + z, y + 2z).
3.16 (1) [1 0; 0 2], (4) [0 0 1; 1 0 0; 0 1 0].
3.17 (1) P = [1 1 0; 0 1 1; 0 0 1], (2) Q = [1 −1 1; 0 1 −1; 0 0 1] = P^{-1}.
3.18 Use the trace.
3.19 (1) [−7 −33 −13; 4 19 8].
3.20 (2) [5 1; 1 2], (4) [−2/3 1/3 4/3; 2/3 −1/3 −1/3; 7/3 −2/3 −8/3].
3.25 [T]_α = [0 2 1; −1 4 1; 1 0 1] = ([T*]_{α*})^T.
3.26 (1) [1 1; −1 0]. (2) [T]^β_α = [0 −3 1 −1; 2 1 2 1].
3.27 N(T) = {0}, C(T) = ⟨(2, 1, 0, 1), (1, 1, 1, 1), (4, 2, 2, 3)⟩, [T]^β_α = [1 0 2; 1 0 0; −1 0 −1; 1 1 3].
3.29 p1(x) = 1 + x − (1/2)x², p2(x) = −1/6 + (1/2)x², p3(x) = −1/3 + x − (1/2)x².
3.30 Three of them are false.
Chapter 4 Problems

4.4 (1) −27, (2) 0, (3) (1 − x⁴)³.
4.8 (1) −14. (2) 0.
4.9 See Example 4.6, and use mathematical induction on n.
4.10 Find the cofactor expansion along the first row first, and then compute the cofactor expansion along the first column of each n × n submatrix (in the second step, use the proof of Cramer's rule).
4.15 If A = 0, then clearly adj A = 0. Otherwise, use A · adj A = (det A)I.
4.16 Use adj A · adj(adj A) = det(adj A) I.
4.17 (1) x1 = 4, x2 = 1, x3 = −2. (2) x = 5/2, y = 5/6, z = 10/3.
4.18 The solution of the system Id(x) = x is xi = det Ci / det I = det A.
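The identity A · adj A = (det A)I, on which 4.15 rests, can be checked with a small cofactor implementation (the matrix is a sample, not from the text):

```python
import numpy as np

# Problem 4.15 uses A . adj(A) = (det A) I.
def adjugate(A):
    n = A.shape[0]
    C = np.zeros_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T  # adjugate = transpose of the cofactor matrix

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
print(np.allclose(A @ adjugate(A), np.linalg.det(A) * np.eye(3)))  # True
```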
Exercises 4.5

4.1 k = 0 or 2.
4.2 It is not necessary to compute A² or A³.
4.3 −37.
4.4 (1) det A = (−1)^{n−1}(n − 1). (2) 0.
4.5 −2, 0, 1, 4.
4.6 Consider Σ_{σ∈Sn} a_{1σ(1)} · · · a_{nσ(n)}.
4.7 (1) 1, (2) 24. (3) x1 = 1, x2 = −1, x3 = 2, x4 = −2.
4.8 (2) x = (3, 0, 4/11)^T.
4.9 k = 0 or ±1.
4.10 x = (−5, 1, 2, 3)^T.
4.11 x = 3, y = −1, z = 2.
4.12 (3) A11 = −2, A12 = 7, A13 = −8, A33 = 3.
4.13 A^{-1} = (1/72)[−3 5 9; 18 −6 18; 6 14 −18].
4.16 (1) adj(A) = [2 −7 −6; 1 −7 −3; −4 7 5], det(A) = −7, det(adj(A)) = 49, A^{-1} = −(1/7) adj(A). (2) adj(A) = [1 1 −1; −10 4 2; 7 −3 −1], det A = 2, det(adj(A)) = 4, A^{-1} = (1/2) adj(A).
4.17 Note that (AB)^{-1} = B^{-1}A^{-1} and A^{-1} = adj(A)/det A. (The reader may also try to prove this equality for non-invertible matrices.)
4.19 If we set A = [1 3; 3 1], then the area is (1/2)|det A| = 4.
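The area computation in 4.19 is a one-liner:

```python
import numpy as np

# Exercise 4.19: the triangle spanned by the rows of A = [[1, 3], [3, 1]]
# has area (1/2)|det A| = (1/2)|1 - 9| = 4.
A = np.array([[1, 3], [3, 1]])
area = 0.5 * abs(np.linalg.det(A))
print(round(area, 10))  # 4.0
```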
4.20 If we set A = [1 2; 1 2; 2 1], then the area is (1/2)√det(A^T A) = 3√2/2.
4.21 Use det A = Σ_{σ∈Sn} sgn(σ) a_{1σ(1)} · · · a_{nσ(n)}. In fact, suppose B is a k × k matrix, and a permutation σ ∈ Sn sends some number i ≤ k into {k + 1, . . . , n}; then there is an ℓ ≥ k + 1 such that σ(ℓ) ≤ k. Thus a_{ℓσ(ℓ)} = 0, and so sgn(σ) a_{1σ(1)} · · · a_{ℓσ(ℓ)} · · · a_{nσ(n)} = 0. Therefore the only terms that do not vanish are those for σ : {1, . . . , k} → {1, . . . , k}. But then σ : {k + 1, . . . , n} → {k + 1, . . . , n}, i.e., σ = σ1σ2 with sgn(σ) = sgn(σ1)sgn(σ2). Hence,
det A = (Σ_{σ1∈Sk} sgn(σ1) a_{1σ1(1)} · · · a_{kσ1(k)}) · (Σ_{σ2∈S_{n−k}} sgn(σ2) a_{k+1,σ2(k+1)} · · · a_{n,σ2(n)}) = det B det D.
4.22 Multiply [I 0; B I] to the right.
4.23 vol(T(B)) = |det(A)| vol(B) for the matrix representation A of T. Clearly C = AB, so vol(P(C)) = |det(AB)| = |det A||det B| = |det A| vol(P(B)).
4.24 Exactly seven of them are true. (4) (cIn − A)^T = cIn − A^T. (10) See Exercise 2.20: det(uv^T) = v1 · · · vn det([u · · · u]) = 0. (13) Consider [1 0 1; 1 1 0; 0 1 1].
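The block-triangular determinant formula proved in 4.21 can be confirmed numerically with sample blocks (not from the text):

```python
import numpy as np

# Problem 4.21: for a block upper triangular matrix [[B, C], [0, D]],
# det = det(B) det(D).
B = np.array([[1.0, 2.0], [3.0, 5.0]])
C = np.array([[7.0, 1.0], [0.0, 2.0]])
D = np.array([[2.0, 1.0], [1.0, 2.0]])
M = np.block([[B, C], [np.zeros((2, 2)), D]])

print(np.isclose(np.linalg.det(M), np.linalg.det(B) * np.linalg.det(D)))  # True
```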
Chapter 5 Problems

5.1 Note that ⟨x, x⟩ = ax1² + 2cx1x2 + bx2² = a(x1 + (c/a)x2)² + ((ab − c²)/a)x2² > 0 for all x = (x1, x2) ≠ 0. For x = (1, 0), we get a > 0. For x = (−c/a, 1), we get ab − c² > 0. The converse is easy from the equation above.
5.2 ⟨x, y⟩² = ⟨x, x⟩⟨y, y⟩ if and only if ‖tx + y‖² = ⟨x, x⟩t² + 2⟨x, y⟩t + ⟨y, y⟩ = 0 has a repeated real root t0.
5.3 (4) Compute the square of both sides and use the Cauchy–Schwarz inequality.
5.5 ⟨f, g⟩ = ∫₀¹ f(x)g(x) dx defines an inner product on C[0, 1]. Use the Cauchy–Schwarz inequality or Problem 5.3.
5.6 (1) (1/√6)(2, 1, −1), (2) (1/√61)(6, 4, −3).
5.7 (1): Orthogonal, (2) and (3): None, (4): Orthonormal.
5.10 {1, √3(2x − 1), √5(6x² − 6x + 1)}.
5.12 (1) is just the definition, and use (1) to prove (2).
5.14 Proj_W(p) = (4/3, 3/5, −1/3).
5.16 P^T = (P^T P)^T = P^T P = P, and P = P^T P = P².
5.17 For x ∈ R^m, x = ⟨v1, x⟩v1 + · · · + ⟨vm, x⟩vm = (v1v1^T)x + · · · + (vm vm^T)x.
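The three polynomials listed in 5.10 are the result of Gram–Schmidt applied to 1, x, x² under ⟨f, g⟩ = ∫₀¹ f g dx; their orthonormality can be verified by exact polynomial integration:

```python
import numpy as np
from numpy.polynomial import Polynomial as P

# Problem 5.10: {1, sqrt(3)(2x-1), sqrt(5)(6x^2-6x+1)} is orthonormal in P2
# under <f, g> = integral_0^1 f(x) g(x) dx.
basis = [P([1.0]),
         np.sqrt(3) * P([-1.0, 2.0]),          # sqrt(3)(2x - 1)
         np.sqrt(5) * P([1.0, -6.0, 6.0])]     # sqrt(5)(6x^2 - 6x + 1)

def inner(f, g):
    h = (f * g).integ()        # antiderivative of the product
    return h(1.0) - h(0.0)     # integral over [0, 1]

G = np.array([[inner(f, g) for g in basis] for f in basis])
print(np.allclose(G, np.eye(3)))  # True: the Gram matrix is the identity
```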
5.18 The null space of the matrix [2 1 2 …; −1 −1 1 …] is x = t[1 −1 1 0]^T + s[−4 1 0 1]^T for t, s ∈ R.
5.20 R(A)^⊥ = N(A).
5.21 x = (1, −1, 0) + t(2, 1, −1) for any number t.
5.23 For A = [v1 v2] = [1 2; 2 1], the two columns are linearly independent.
5.24 P = (1/3)[2 1 1; 1 2 −1; 1 −1 2].
5.26 −1/6 + x.
5.27 x = (A^T A)^{-1}A^T b = (s0, v0, (1/2)g)^T = (−0.4, 0.35, 16.1)^T.
5.29 (1) r = 1/√2, s = 1/√6, a = −1/√3, b = 1/√3, c = 1/√3.
5.30 Extend {v1, . . . , vm} to an orthonormal basis {v1, . . . , vm, . . . , vn}. Then ‖x‖² = Σ_{i=1}^m |⟨x, vi⟩|² + Σ_{j=m+1}^n |⟨x, vj⟩|².
5.31 (1) orthogonal. (2) not orthogonal.
5.32 Let A = QR = Q′R′ be two decompositions of A. Then Q^T Q′ = RR′^{-1}, which is an upper triangular and orthogonal matrix. Since (Q^T Q′)^T = (Q^T Q′)^{-1} = (RR′^{-1})^{-1} = R′R^{-1} is both upper and lower triangular, Q^T Q′ is diagonal and orthogonal, so that Q^T Q′ = D = diag[di] with di = ±1, i.e., Q′ = QD, or ui′ = ±ui for each i ≥ 1. Since c1 = b′11u1′ = b11u1 with b′11, b11 > 0, both u1′ and u1 are unit vectors in the direction of c1, so u1′ = u1. Assume u^{j−1}′ = u^{j−1} and u^{j}′ = −u^{j}. Then u^{j} becomes a linear combination of u1, . . . , u^{j−1}, since cj = b1ju1 + · · · + bjju^j = b′1ju1′ + · · · + b′jju^j′. Thus u^j′ = u^j, or dj = 1, for all j ≥ 1, so that D = Id. Thus Q = Q′, and then R = R′ follows.
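The uniqueness argument in Problem 5.32 can be illustrated numerically: once the diagonal of R is normalized to be positive, numpy's QR factor agrees with the one produced by classical Gram–Schmidt (the matrix A below is a sample):

```python
import numpy as np

# Problem 5.32: QR with positive diagonal in R is unique.
A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])

Q, R = np.linalg.qr(A)
S = np.diag(np.sign(np.diag(R)))   # sign flips so that diag(R) > 0
Q, R = Q @ S, S @ R

# Classical Gram-Schmidt on the columns of A:
q1 = A[:, 0] / np.linalg.norm(A[:, 0])
w = A[:, 1] - (q1 @ A[:, 1]) * q1
q2 = w / np.linalg.norm(w)
Q_gs = np.column_stack([q1, q2])

print(np.allclose(Q, Q_gs))        # True: same orthonormal factor
print(np.allclose(Q @ R, A))       # True
```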
Exercises 5.12

5.1 Inner products are (2), (4), (5).
5.2 For the last condition of the definition, note that ⟨A, A⟩ = tr(A^T A) = Σ_{i,j} a_{ij}² = 0 if and only if a_{ij} = 0 for all i, j.
5.4 (1) k = 3.
5.5 (3) ‖f‖ = ‖g‖ = √(1/2). The angle is 0 if n = m, π/2 if n ≠ m.
5.6 Use the Cauchy–Schwarz inequality and Problem 5.2 with x = (a1, · · · , an) and y = (1, · · · , 1) in (R^n, ·).
5.7 (1) (−37/4, 19/3). (2) If ⟨h, g⟩ = h(a/3 + b/2 + c) = 0 with h ≠ 0 a constant and g(x) = ax² + bx + c, then (a, b, c) is on the plane a/3 + b/2 + c = 0 in R³.
5.10 (1) (1/2)v2, (2) (3/2)v2.
5.12 Orthogonal: (4). Nonorthogonal: (1), (2), (3).
5.16 Use induction on n. If n = 1, then A has only one column c1 and det(A^T A) is simply the square of the length of c1. Assume the claim is true for n − 1. Let B (m × (n − 1)) be the submatrix of A with the first column c1 removed, so that A = [c1 B], and let C (m × n) = [a B], where a = c1 − p and p = Proj_W(c1) = a2c2 + · · · + ancn ∈ W for some ai's, where W = C(B). Then a is clearly orthogonal to c2, . . . , cn and to p. Claim: det(A^T A) = det(C^T C) = ‖a‖² det(B^T B) = ‖a‖² vol(P(B))² = vol(P(A))². In fact,
det(A^T A) = det [a^T a + p^T p, p^T B; B^T p, B^T B]
= det [a^T a, 0; B^T p, B^T B] + det [p^T p, p^T B; B^T p, B^T B]
= det [a^T a, 0; 0, B^T B] + det [p^T p, p^T B; B^T p, B^T B].
Since p is a linear combination of the columns of B, one can easily verify that det [p^T p, p^T B; B^T p, B^T B] = 0, so that det(A^T A) = det(C^T C) = ‖a‖² det(B^T B). This also shows that the volume is independent of the choice of c1 at the beginning.
5.17 Let A = [1 0 0; 0 1 0; 0 2 1]. Then the volume of the tetrahedron is (1/3!)√det(A^T A) = 1/6.
5.19 Ax = b has a solution for every b ∈ R^m if k = m. It has infinitely many solutions if nullity = n − k = n − m > 0.
5.20 The line is a subspace with an orthonormal basis (1/√2)(1, 1), or is the column space of A = (1/√2)[1 1]^T.
5.21 Find a least squares solution of [1 0; 1 1; 1 2; 1 3][a; b] = (1, 3, 4, 4)^T for (a, b) in y = a + bx. Then y = x + 3/2.
5.22 Follow Exercise 5.21 with A = [1 −1 1 −1; 1 0 0 0; 1 1 1 1; 1 2 4 8; 1 3 9 27]. Then y = 2x³ − 4x² + 3x − 5.
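The least squares fit in Exercise 5.21 can be reproduced via the normal equations; the data points (0,1), (1,3), (2,4), (3,4) are reconstructed from the printed numbers (an assumption), and they do yield the line y = x + 3/2:

```python
import numpy as np

# Exercise 5.21: least squares line y = a + bx through (0,1), (1,3), (2,4), (3,4).
xs = np.array([0.0, 1.0, 2.0, 3.0])
b = np.array([1.0, 3.0, 4.0, 4.0])
A = np.column_stack([np.ones_like(xs), xs])        # columns: 1, x

a_hat, b_hat = np.linalg.solve(A.T @ A, A.T @ b)   # normal equations
print(np.allclose([a_hat, b_hat], [1.5, 1.0]))     # True: y = x + 3/2
```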
5.25 (1) Let h(x) = (1/2)(f(x) + f(−x)) and g(x) = (1/2)(f(x) − f(−x)). Then f = h + g. (2) For f ∈ U and g ∈ V, ⟨f, g⟩ = ∫_{−1}^{1} f(x)g(x) dx = −∫_{1}^{−1} f(−t)g(−t) dt = −∫_{−1}^{1} f(t)g(t) dt = −⟨f, g⟩, by the change of variable x = −t. (3) Expand the length in the inner product.
5.26 A^T A = [1, sin θ cos θ; sin θ cos θ, cos² θ] = [1, 0; sin θ cos θ, 1][1, 0; 0, cos⁴ θ][1, sin θ cos θ; 0, 1].
A = QR = [sin θ, cos θ; cos θ, − sin θ][1, sin θ cos θ; 0, cos² θ].
5.27 A^T A = I and det A^T = det A imply det A = ±1. The matrix A = [cos θ, sin θ; sin θ, − cos θ] is orthogonal with det A = −1.
5.28 Six of them are true. (1) Consider (1, 0) and (−1, 0). (2) Consider the two subspaces U and W of R³ spanned by e1 and e2, respectively. (3) The set of column vectors of a permutation matrix P is just {e1, . . . , en}, which is a set of orthonormal vectors.
Chapter 6 Problems

6.3 Zero is an eigenvalue of AB if and only if AB is singular, if and only if BA is singular, if and only if zero is an eigenvalue of BA. Let λ be a nonzero eigenvalue of AB with (AB)x = λx for a nonzero vector x. Then the vector Bx is not zero, since λ ≠ 0, but (BA)(Bx) = B(λx) = λ(Bx). This means that Bx is an eigenvector of BA belonging to the eigenvalue λ, and λ is an eigenvalue of BA. Similarly, any nonzero eigenvalue of BA is also an eigenvalue of AB.
6.4 Consider the matrices [1 1; 0 1] and [1 0; 0 1].
6.5 Check with A = [1 1; 0 1].
6.6 If A is invertible, then AB = A(BA)A^{-1}.
6.7 (1) Use det A = λ1 · · · λn. (2) Ax = λx if and only if x = λA^{-1}x.
6.8 (1) If Q = [x1 x2 x3] diagonalizes A, then the diagonal matrix must be λI and AQ = QλI. Expand this equation and compare the corresponding columns of the equation to find a contradiction to the invertibility of Q.
6.9 Q = [2 3; 1 2], D = [2 0; 0 3]. Then A = QDQ^{-1} = [−1 6; −2 6].
6.10 (1) The eigenvalues of A are 1, 1, −3, and their associated eigenvectors are (1, 1, 0), (−1, 0, 1) and (1, 3, 1), respectively. (2) If f(x) = x^{10} + x^7 + 5x, then f(1), f(1) and f(−3) are the eigenvalues of A^{10} + A^7 + 5A.
6.11 Note that [a_{n+1}; a_n; a_{n−1}] = [2 1 −2; 1 0 0; 0 1 0][a_n; a_{n−1}; a_{n−2}]. The eigenvalues are 1, 2, −1, and the eigenvectors are (1, 1, 1), (4, 2, 1) and (1, −1, 1), respectively. It turns out that a_n = 2 − (2/3)(−1)^n − (1/3)2^n.
6.12 Write the characteristic polynomial as f(x) = x^k − a1x^{k−1} − · · · − a_{k−1}x − a_k = (x − λ)^m g(x), where g(λ) ≠ 0. Then clearly f(λ) = f′(λ) = · · · = f^{(m−1)}(λ) = 0. For n ≥ k, let f1(x) = x^{n−k}f(x) = x^n − a1x^{n−1} − · · · − a_kx^{n−k}. Then one can easily show that f2(λ) = λf1′(λ) = nλ^n − a1(n − 1)λ^{n−1} − · · · − a_k(n − k)λ^{n−k} = 0, since f1′(λ) = (n − k)λ^{n−k−1}f(λ) + λ^{n−k}f′(λ) = 0. Inductively,
f_m(λ) = λf′_{m−1}(λ) = λ²f″_{m−2}(λ) + λf′_{m−2}(λ) = 0,
and f_m(λ) = n^{m−1}λ^n − a1(n − 1)^{m−1}λ^{n−1} − · · · − a_k(n − k)^{m−1}λ^{n−k}.
Thus, x_n = λ^n, nλ^n, . . . , n^{m−1}λ^n are m solutions. It is not hard to show that they are linearly independent.
6.15 The eigenvalues are 0, 0.4, and 1, and their eigenvectors are (1, 4, −5), (1, 0, −1) and (3, 2, 5), respectively.
6.16 For (1), use (A + B)^k = Σ_{i=0}^k (k choose i) A^i B^{k−i} if AB = BA. For (2) and (3), use the definition of e^A. Use (1) for (4).
6.17 Note that e^{(A^T)} = (e^A)^T by definition (thus, if A is symmetric, so is e^A), and use (4).
6.18 Write A = 2I + N with N = [0 3 0; 0 0 3; 0 0 0]. Then N³ = 0.
6.19 y1 = c1e^{2x} − (1/4)c2e^{−3x}; y2 = c1e^{2x} + c2e^{−3x}.
6.20 y1 = −c2e^{2x} + c3e^{3x}, y2 = c1e^x + 2c2e^{2x} − c3e^{3x}, y3 = 2c2e^{2x} − c3e^{3x}; and y1 = e^{2x} − 2e^{3x}, y2 = e^x − 2e^{2x} + 2e^{3x}, y3 = −2e^{2x} + 2e^{3x}.
6.21 (1) (3e^t − 2e^{−t}, e^{−t}), (2) (2 − e^{−t}, e^{−t}).
6.22 With the basis α = {1, x, x²}, [T]_α = A = [1 0 0; 0 2 0; 0 0 3].
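The diagonalization in Problem 6.9 can be reproduced directly; the columns of Q are then eigenvectors of A for the eigenvalues 2 and 3:

```python
import numpy as np

# Problem 6.9: Q = [[2, 3], [1, 2]], D = diag(2, 3) give A = Q D Q^{-1}.
Q = np.array([[2.0, 3.0], [1.0, 2.0]])
D = np.diag([2.0, 3.0])
A = Q @ D @ np.linalg.inv(Q)

print(np.allclose(A, [[-1.0, 6.0], [-2.0, 6.0]]))  # True
print(np.allclose(A @ Q[:, 0], 2 * Q[:, 0]))       # True: eigenvector for 2
print(np.allclose(A @ Q[:, 1], 3 * Q[:, 1]))       # True: eigenvector for 3
```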
6.23 With the standard basis for M_{2×2}(R): α = {E11 = [1 0; 0 0], E12 = [0 1; 0 0], E21 = [0 0; 1 0], E22 = [0 0; 0 1]}, [T]_α = A = [1 1 0 1; 1 1 1 0; 0 1 1 1; 1 0 1 1]. The eigenvalues are 3, 1, 1, −1, and their associated eigenvectors are (1, 1, 1, 1), (−1, 0, 1, 0), (0, −1, 0, 1), and (−1, 1, −1, 1), respectively.
6.24 With respect to the standard basis α, [T]_α = [4 0 1; 2 3 2; 1 0 4], with eigenvalues 3, 3, 5 and eigenvectors (0, 1, 0), (−1, 0, 1) and (1, 2, 1), respectively.
Exercises 6.6

6.1 (4) 0 of multiplicity 3, 4 of multiplicity 1. Eigenvectors are ei − e_{i+1} for 1 ≤ i ≤ 3 and Σ_{i=1}^4 ei.
6.2 f(λ) = (λ + 2)(λ² − 8λ + 15), λ1 = −2, λ2 = 3, λ3 = 5, x1 = (−35, 12, 19), x2 = (0, 3, 1), x3 = (0, 1, 1).
6.4 {v} is a basis for N(A), and {u, w} is a basis for C(A).
6.5 Note that the order in the product doesn't matter, and any eigenvector of A is killed by B. Since the eigenvalues are all different, the eigenvectors belonging to 1, 2, 3 form a basis. Thus B = 0; that is, B has only the zero eigenvalue, and all vectors are eigenvectors of B.
6.7 A = QDQ^{-1} = 1 −2 −1 1 4 −1 / 1 2 1 2 7.
6.8 Note that R^n = W ⊕ Ker(P), and P(w) = w for w ∈ W and P(v) = 0 for v ∈ Ker(P). Thus, the eigenspace belonging to λ = 1 is W, and that belonging to λ = 0 is Ker(P).
6.9 For any w ∈ R^n, Aw = u(v^T w) = (v · w)u. Thus Au = (v · u)u, so u is an eigenvector belonging to the eigenvalue λ = v · u. The other eigenvectors are those in v^⊥, with eigenvalue zero. Thus, A has either two eigenspaces — E(λ), 1-dimensional and spanned by u, and E(0) = v^⊥ — if v · u ≠ 0, or just the one eigenvalue 0 if v · u = 0.
6.10 λv = Av = A²v = λ²v implies λ(λ − 1) = 0.
6.12 Use tr(A) = λ1 + · · · + λn = a11 + · · · + a_{nn}.
6.13 (1) If k = 1, clearly x1 ∈ U. Suppose the claim is true for k, and x1 + · · · + xk + x_{k+1} = u ∈ U with xi ∈ E_{λi}(A). Then, from
A(x1 + · · · + x_{k+1}) = λ1x1 + · · · + λkxk + λ_{k+1}x_{k+1} = Au = ū ∈ U,
λ_{k+1}x1 + · · · + λ_{k+1}xk + λ_{k+1}x_{k+1} = λ_{k+1}u ∈ U,
we get (λ1 − λ_{k+1})x1 + · · · + (λk − λ_{k+1})xk = ū − λ_{k+1}u ∈ U. Thus by induction all the xi's are in U.
(2) Write R^n = E_{λ1}(A) ⊕ · · · ⊕ E_{λk}(A). Then the subspaces U ∩ E_{λi}(A), for i = 1, . . . , k, span U, since any basis vector u in U is of the form u = x1 + · · · + xk with xi ∈ U ∩ E_{λi}(A) by (1). Thus U = (U ∩ E_{λ1}(A)) ⊕ · · · ⊕ (U ∩ E_{λk}(A)).
6.14 A = QD1Q^{-1} and B = QD2Q^{-1} imply AB = BA, since D1D2 = D2D1. Conversely, suppose AB = BA. If Ax = λix with x ∈ E_{λi}(A), then ABx = BAx = λiBx implies Bx ∈ E_{λi}(A). That is, each eigenspace E_{λi}(A) is invariant under B, so that the restriction of B to E_{λi}(A) is diagonalized.
6.16 With respect to the basis α = {1, x, x²}, [T]_α = [1 0 1; 0 1 1; 1 1 0]. The eigenvalues are 2, 1, −1 and the eigenvectors are (1, 1, 1), (−1, 1, 0) and (1, 1, −2), respectively.
6.19 Eigenvalues are 1, 1, 2 and eigenvectors are (1, 0, 0), (0, 1, 2) and (1, 2, 3). A^{10}x = (1025, 2050, 3076).
6.20 Clearly, a0 = 1, a1 = 2 and a2 = 3. Inductively, one can easily see that the sequence {an : n ≥ 1} is a Fibonacci sequence: a_{n+1} = an + a_{n−1}. In fact, in {1, 2, . . . , n}, the subsets with the required property may be counted as those for the set without n, plus those for the set without both n and n − 1 (to each member of this second class just add n).
6.21 One can easily check that det An = det A_{n−1} − det A_{n−2}. Set an = det An, so that an = a_{n−1} − a_{n−2}. Together with a_{n−1} = a_{n−1}, we obtain a matrix equation:
xn = [an; a_{n−1}] = [1 −1; 1 0][a_{n−1}; a_{n−2}] = Ax_{n−1} = A^{n−1}x1,
with a1 = 1 and a2 = 0. Using the eigenvalues might make the computation messy. Instead, one can use the Cayley–Hamilton Theorem 8.13: since the characteristic polynomial of A is λ² − λ + 1, A² − A + I = 0 holds. Thus A³ = A² − A = −I, so A⁶ = I. One can now easily compute an modulo 6.
6.22 The characteristic equation is λ² − xλ − 0.18 = 0. Since λ = 1 is a solution, x = 0.82. The eigenvalues are then 1 and −0.18, and the eigenvectors are (−0.3, −1) and (1, −0.6).
6.23 (1) e^A = [e, e − 1; 0, 1].
6.24 The initial status in 1985 is x0 = (x0, y0, z0) = (0.4, 0.2, 0.4), where x, y, z represent the percentages of large, medium, and small car owners. In 1995, the status is x1 = (x1, y1, z1) = Ax0 with A = [0.7 0.1 0; 0.3 0.7 0.1; 0 0.2 0.9]. Thus, in 2025, the status is x4 = A⁴x0. The eigenvalues are 0.5, 0.8, and 1, whose eigenvectors are (−0.41, 0.82, −0.41), (0.47, 0.47, −0.94), and (−0.17, −0.52, −1.04), respectively.
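The Cayley–Hamilton shortcut used in Exercise 6.21 is easy to confirm: A satisfies its own characteristic polynomial t² − t + 1, hence A³ = −I and A⁶ = I:

```python
import numpy as np

# Exercise 6.21: A = [[1, -1], [1, 0]] has characteristic polynomial
# t^2 - t + 1, so A^2 - A + I = 0, hence A^3 = -I and A^6 = I.
A = np.array([[1, -1], [1, 0]])
I = np.eye(2, dtype=int)

print(np.array_equal(A @ A - A + I, np.zeros((2, 2))))   # True
print(np.array_equal(np.linalg.matrix_power(A, 3), -I))  # True
print(np.array_equal(np.linalg.matrix_power(A, 6), I))   # True
```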
6.27 (1) y1(x) = −2e^{2(1−x)} + 4e^{2(x−1)}, y2(x) = −e^{2(1−x)} + 2e^{2(x−1)}, y3(x) = 2e^{2(1−x)} − 2e^{2(x−1)}. (2) y1(x) = e^{2x}(cos x − sin x), y2(x) = 2e^{2x} sin x.
6.28 y1 = 0, y2 = 2e^{2t}, y3 = e^{2t}.
6.29 (1) f(λ) = λ³ − 10λ² + 28λ − 24; eigenvalues are 6, 2, 2, and eigenvectors are (1, 2, 1), (−1, 1, 0) and (−1, 0, 1). (2) f(λ) = (λ − 1)(λ² − 6λ + 9); eigenvalues are 1, 3, 3, and eigenvectors are (2, −1, 1), (1, 1, 0) and (1, 0, 1).
6.30 Three of them are true:
(1) For A = [1 0; 0 1], B = Q^{-1}AQ means that B = [1 0; 0 1] = A.
(2) Consider A = [1 1/2; 0 2] and B = [1 1; 0 1/2].
(3) Consider [0 1; 1 0]. (4) Consider [1 0; 0 0]. (5) Consider [1 1; 0 1].
(6) If A is similar to I + A, then they have the same eigenvalues, so that tr(A) = tr(I + A) = n + tr(A), which cannot hold.
(7) tr(A + B) = tr(A) + tr(B).
Chapter 7 Problems

7.1 (1) u · v = u^T v̄ = Σ_i ui v̄i, which is the conjugate of Σ_i vi ūi = v · u. (3) (ku) · v = Σ_i kui v̄i = k Σ_i ui v̄i = k(u · v). (4) u · u = Σ_i |ui|² ≥ 0, and u · u = 0 if and only if ui = 0 for all i.
7.2 (1) If x = 0, clear. Suppose x ≠ 0 ≠ y. For any scalar k, 0 ≤ ⟨x − ky, x − kyi⟩ = ⟨x, x⟩ − k̄⟨x, y⟩ − k⟨y, x⟩ + kk̄⟨y, y⟩. Let k = ⟨y, x⟩/⟨y, y⟩ to obtain ⟨x, x⟩⟨y, y⟩ − |⟨x, y⟩|² ≥ 0. Note that equality holds if and only if x = ky for some scalar k. (2) Expand ‖x + y‖² = ⟨x + y, x + y⟩ and use (1).
7.3 Suppose that x and y are linearly independent, and consider a linear dependence a(x + y) + b(x − y) = 0 of x + y and x − y. Then 0 = (a + b)x + (a − b)y. Since x and y are linearly independent, we have a + b = 0 and a − b = 0, which are possible only for a = 0 = b. Thus x + y and x − y are linearly independent. Conversely, if x + y and x − y are linearly independent, then a linear dependence ax + by = 0 of x and y gives (1/2)(a + b)(x + y) + (1/2)(a − b)(x − y) = 0. Thus we get a = 0 = b, and x and y are linearly independent.
7.4 (1) Eigenvalues are 0, 0, 2 and their eigenvectors are (1, 0, −i) and (0, 1, 0), respectively. (2) Eigenvalues are 3, (1 + √5)/2, (1 − √5)/2, and their eigenvectors are (1, −i, (1 − i)/2), (((√5 − 3)/2)i, 1, ((1 + √5)/2)(1 + i)), and (−((√5 + 3)/2)i, 1, ((1 − √5)/2)(1 + i)), respectively.
7.5 Refer to the real case.
7.6 (AB)^H = conj((AB)^T) = B̄^T Ā^T = B^H A^H.
7.7 (A^H)(A^{-1})^H = (A^{-1}A)^H = I.
7.8 The determinant is just the product of the eigenvalues, and a Hermitian matrix has only real eigenvalues.
7.9 See Exercise 6.9.
7.10 To prove (3) directly, show that λ(x · y) = μ(x · y) by using the fact that A^H x = −μx when Ax = μx.
7.11 A^H = B^H + (iC)^H = B^T − iC^T = −B − iC = −A.
7.12 ±AB = (AB)^H = B^H A^H = (±B)(±A) = BA; + if they are Hermitian, − if they are skew-Hermitian.
7.13 Note that det U^H = conj(det U), and 1 = det I = det(U^H U) = |det U|².
7.16 Since A^{-1} = A^H, (AB)^H(AB) = I.
7.17 Hermitian means the diagonal entries are real, and diagonality implies the off-diagonal entries are zero. Unitary means the diagonal entries must be ±1.
7.18 (1) If U = [1/√3 + … i/√6 … 1/√2 …; …], U^{-1}AU = [3 0 0; 0 … ; … −3]. (2) If U = [−(2/5)i, (2/5) + (1/5)i, (6/25) + (8/25)i; …], U^{-1}AU = [−1 0 0; 0 2i 1; 0 0 2i].
7.20 (4) Normal, with eigenvalues 1 ± i, so that it is unitarily diagonalizable but not orthogonally.
7.22 This is a normal matrix. From a direct computation, one can find the eigenvalues, 1 − i, 1 − i and 1 + 2i, and the corresponding eigenvectors (−1, 0, 1), (−1, 1, 0) and (1, 1, 1), respectively, which are not orthogonal. But by an orthonormalization, one can obtain a unitary transition matrix, so that A is unitarily diagonalizable.
7.23 A^H A = (H1 − H2)(H1 + H2) = (H1 + H2)(H1 − H2) = AA^H if and only if H1H2 − H2H1 = 0.
7.24 In each subproblem, one direction is already proven in the theorems. For the other direction, suppose that U^H AU = D for a unitary matrix U and a diagonal matrix D. (1) and (2): If all the eigenvalues of A are real (or purely imaginary), then the diagonal entries of D are all real (or purely imaginary). Thus D^H = ±D, so that A is Hermitian (or skew-Hermitian). (3) The diagonal entries of D satisfy |λ| = 1. Thus D^H = D^{-1}, and A^H = U D^{-1} U^H = A^{-1}.
7.25 Q = (1/√6)[√3 −√2 −1; 0 √2 −2; √3 √2 1].
7.26 (1) A = (1/2)[1 −1; −1 1] + (3/2)[1 1; 1 1].
(2) B = ((3 + 2√6)/6)[…] + ((3 − 2√6)/6)[…], with projection entries (1 ± √6)(2 ± i)/5, (6 ± …)/25, … as printed.
7.27 Let A = λ1P1 + · · · + λkPk be the spectral decomposition of A. Then A^H = λ̄1P1 + · · · + λ̄kPk = λ1P1 + · · · + λkPk = A.
7.28 Take the Lagrange polynomials fi such that fi(λj) = δij associated with the λi's as in Section 2.10.2. Then, by Corollary 7.13, fi(A) = fi(λ1)P1 + · · · + fi(λk)Pk = δi1P1 + · · · + δikPk = Pi.
7.29 (1) A = [−1; 2; 2] = 3[−1/3; 2/3; 2/3][1], so A⁺ = [1](1/3)[−1/3 2/3 2/3] = [−1/9 2/9 2/9].
(2) B = [1/√6 … −1/√2 …; …][√3 0; 0 1/√2; …]; B⁺ follows as in (1).
(3) C⁺ = [1/2 0; 1/2 0].
7.31 By elementary row operations on the blocks, and then column operations,
det B = det [X −Y; Y X] = det [X + iY, −Y + iX; Y, X] = det [X + iY, 0; Y, X − iY] = det A det Ā = |det A|².
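The pseudoinverse computed in Problem 7.29 (1) is just A^T/‖A‖² for a single column, and it agrees with numpy's SVD-based pseudoinverse:

```python
import numpy as np

# Problem 7.29 (1): for the column A = [-1, 2, 2]^T,
# A^+ = A^T / ||A||^2 = [-1/9, 2/9, 2/9].
A = np.array([[-1.0], [2.0], [2.0]])
A_plus = A.T / (A.T @ A)                 # ||A||^2 = 9

print(np.allclose(A_plus, [[-1/9, 2/9, 2/9]]))   # True
print(np.allclose(A_plus, np.linalg.pinv(A)))    # True
print(np.allclose(A_plus @ A, [[1.0]]))          # True: A^+ A = I_1
```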
Exercises 7.10

7.1 (1) √6, (2) 4.
7.4 (1) λ = i, x = t(1, −2 − i); λ = −i, x = t(1, −2 + i). (2) λ = 1, x = t(i, 1); λ = −1, x = t(−i, 1). (3) Eigenvalues are 2, 2 + i, 2 − i, and eigenvectors are (0, −1, 1), (1, −(2 + i)/5, 1), (1, −(2 − i)/5, 1). (4) Eigenvalues are 0, −1, 2, and eigenvectors are (1, 0, −1), (1, −i, 1), (1, 2i, 1).
7.6 A + cI is invertible if det(A + cI) ≠ 0. However, for any matrix A, det(A + cI) = 0 for some complex c, since a complex polynomial always has a (complex) root. For the real matrix [cos θ, − sin θ; sin θ, cos θ], A + rI is invertible for every real number r, since A has no real eigenvalues.
7.7 (1) (1/√2)[1 i; i 1], (2) (1/√3)[1 1−i; 1+i −1] … 2i 0 2 1−i.
7.10 (2) Q = (1/√2)[1 1; 1 −1], with entries i, −1 + i as printed.
7.12 (1) Unitary; the eigenvalues are {1, i}. (2) Orthogonal; {cos θ + i sin θ, cos θ − i sin θ}, where θ = cos^{-1}(0.6). (3) Hermitian; {1, 1 + √2, 1 − √2}.
7.13 (1) Since the eigenvalues of a skew-Hermitian matrix must always be purely imaginary, 1 cannot be an eigenvalue.
(2) Note that, for a skew-Hermitian matrix A, (e^A)^H = e^{A^H} = e^{−A} = (e^A)^{-1}.
7.14 det(U − λI) = det((U − λI)^T) = det(U^T − λI).
7.15 U = (1/√2)[1 −1; 1 1], D = U^H AU = [2 + i, 0; 0, 2 − i].
7.17 (1) Let λ1, . . . , λk be the distinct eigenvalues of A, and A = λ1P1 + · · · + λkPk the spectral decomposition of A. Then A^H = λ̄1P1 + · · · + λ̄kPk. Problem 7.28 shows that Pi = fi(A), where the fi's are the Lagrange polynomials associated with the λi's as in Section 2.10.2. Then
A^H = Σ_{i=1}^k λ̄iPi = Σ_{i=1}^k λ̄ifi(A) = g(A),
P¯ where g = λ i fi . The converse is clear. (2) Clear from (1) since AH = g(A). 7.18 (See Exercise 6.14.) Since AB = BA, each Eλi (A) is B-invariant. Since B is normal, B H = g(B) for some polynomial g. Thus each Eλi (A) is both B and B H invariant. So the restriction of B on Eλi (A) is normal, since B is normal. That is, A and B have orthonormal eigenvectors in Eλi (A) ∩ Eµj (B). 7.19 (2) The characteristic polynomial of W is f (λ) = λn − 1. (3) The eigenvalues of A are, for k = 0, 1, 2, . . . , n − 1, n X λk = ai ω (i−1)k = a1 + a2 ω k + · · · + an ω (n−1)k . i=1
(4) The characteristic polynomial of B is f (λ) = (λ − n + 1)(λ + 1)n−1 . 7.20 The eigenvalues are 1, 1, 4,√and the orthonormal eigenvectors are ( √12 , − √12 , 0), (− √16 , − √16 , √23 ) and ( √13 , √13 , √13 ). Therefore, 2 −1 −1 1 1 1 1 4 −1 2 −1 + 1 1 1 . A= 3 3 1 1 1 −1 −1 2
7.21 If λ is an eigenvalue of A, then λ^n is an eigenvalue of A^n. Thus, if A^n = 0, then λ^n = 0, so λ = 0. Conversely, by Schur's lemma, A is similar to an upper triangular matrix whose diagonal entries are the eigenvalues, which are all zero by assumption. It is then easy to conclude that A is nilpotent.
7.22 Note that A⁺b = x_r is the unique optimal least squares solution in R(A) = R(U). Since, for any b ∈ R^m,
A^T A (U^T (UU^T)^{−1} (L^T L)^{−1} L^T) b = (U^T L^T L U)(U^T (UU^T)^{−1} (L^T L)^{−1} L^T) b = U^T (L^T L)(UU^T)(UU^T)^{−1}(L^T L)^{−1} L^T b = U^T L^T b = A^T b,
U^T (UU^T)^{−1} (L^T L)^{−1} L^T b is also a least squares solution. Moreover, it lies in R(A) = R(U), since U^T times a column vector is a linear combination of the rows of U, which form a basis for R(A). Therefore, it is the optimal solution, so that U^T (UU^T)^{−1} (L^T L)^{−1} L^T b = A⁺b. That is, U^T (UU^T)^{−1} (L^T L)^{−1} L^T = A⁺.
7.23 Nine of them are true. (2) Consider [cos θ  −sin θ; sin θ  cos θ] with θ ≠ kπ. (3) Since rank(A) = rank(A^H A) = rank(AA^H), λ = 0 may be an eigenvalue of both A^H A and AA^H, with the same multiplicity. Now suppose λ ≠ 0 is an eigenvalue of A^H A with eigenvector x. Then Ax ≠ 0, and (AA^H)Ax = A(A^H A)x = λAx implies λ is also an eigenvalue of AA^H, with eigenvector Ax. The converse is the same. Hence, A^H A and AA^H have the same eigenvalues. (4) Consider [1  1; 0  2]. (5) If A is symmetric, then it is orthogonally diagonalizable. Moreover, if A is nilpotent, then all its eigenvalues must be zero, since otherwise it could not be nilpotent. Thus the diagonal matrix is the zero matrix, and so is A. (6) and (7) A permutation matrix is an orthogonal matrix, but need not be symmetric. (8) If a nonzero nilpotent matrix N were Hermitian, then U^{−1}NU = D, where U is unitary and D is a diagonal matrix whose diagonal entries are not all zero. Thus D^k ≠ 0 for all k ≥ 1, that is, N^k ≠ 0 for all k ≥ 1, contradicting nilpotency.
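The pseudoinverse formula established in 7.22 can be sketched in code. The full-rank factorization A = LU below is an assumed example (not from the book); the point is that U^T (UU^T)^{−1} (L^T L)^{−1} L^T agrees with the Moore–Penrose pseudoinverse:

```python
import numpy as np

# Sketch for 7.22: if A = L @ U is a full-rank factorization (L is m x r with
# independent columns, U is r x n with independent rows), then
#   A+ = U^T (U U^T)^{-1} (L^T L)^{-1} L^T.
# The factors below are an assumed example.
L = np.array([[1.0, 0.0], [2.0, 1.0], [0.0, 1.0]])   # 3 x 2, rank 2
U = np.array([[1.0, 0.0, 1.0], [0.0, 1.0, -1.0]])    # 2 x 3, rank 2
A = L @ U

A_plus = U.T @ np.linalg.inv(U @ U.T) @ np.linalg.inv(L.T @ L) @ L.T
assert np.allclose(A_plus, np.linalg.pinv(A))        # matches Moore-Penrose
assert np.allclose(A @ A_plus @ A, A)                # a Penrose condition
```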
(10) There is an invertible matrix Q such that A = Q^{−1}DQ. Thus, det(A + iI) = det(D + iI) ≠ 0. (11) Consider A = [1  −1; 2  −1]. (12) Modify (10).
Chapter 8 Problems
8.2 (2) [4  1  0; 0  4  1; 0  0  4], (3) [2  0  0  0; 0  2  0  0; 0  0  1  1; 0  0  0  1].
8.3 (1) For λ = −1, x1 = (−2, 0, 1), x2 = (0, 1, 1); for λ = 0, x1 = (−1, 1, 1). (2) For λ = 1, x1 = (−2, 0, 1), x2 = (5/2, 1/2, 0); for λ = −1, x1 = (−9, −1, 1).
8.5 The eigenvalue is −1, of multiplicity 3, with only one linearly independent eigenvector (1, 0, 3). The solution is
y(t) = (y1(t), y2(t), y3(t)) = e^{−t} (−1 − 5t + 2t², −1 + 4t, 1 − 15t + 6t²).
8.6 For any u, v ∈ C,
|u + v|² = (u + v)(ū + v̄) = |u|² + 2ℜ(u v̄) + |v|² ≤ |u|² + 2|u v̄| + |v|² = |u|² + 2|u||v| + |v|² = (|u| + |v|)².
Equality holds ⇔ |u v̄| = ℜ(u v̄), i.e., u v̄ = ℜ(u v̄) ∈ R with u v̄ ≥ 0 ⇔ u = |u|z and v = |v|z for some z = e^{iθ}.
8.7 Consider the Jordan canonical form Q^{−1} A^T Q = J of A^T. Taking the transpose of this equation gives P^{−1} A P = J^T, where P = (Q^T)^{−1}. Let P̄ be the matrix obtained from P by reversing the order of the column vectors in each group corresponding to a Jordan block. Then it is easy to see that P̄^{−1} A P̄ = J = Q^{−1} A^T Q. That is, A and A^T have the same Jordan canonical form J, which means that the corresponding eigenspaces have the same dimensions.
8.8 See Problem 6.2.
8.9 Let λ1, ..., λn be the eigenvalues of A. Then f(λ) = det(λI − A) = (λ − λ1) ··· (λ − λn).
Thus, f(B) = (B − λ1 I_m) ··· (B − λn I_m) is non-singular if and only if each B − λi I_m, i = 1, ..., n, is non-singular, that is, none of the λi is an eigenvalue of B.
8.10 The characteristic polynomial of A is f(λ) = (λ − 1)(λ − 2)², and the remainder is 104A² − 228A + 138I = [14  0  84; 0  98  0; 0  0  98].
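The criterion in 8.9 can be sketched numerically. The diagonal matrices below are assumed examples chosen so that one pair shares an eigenvalue and the other does not:

```python
import numpy as np

# Sketch for 8.9: if f is the characteristic polynomial of A, then f(B) is
# singular exactly when A and B share an eigenvalue.
A = np.diag([1.0, 2.0])            # eigenvalues 1, 2
B_shared = np.diag([2.0, 5.0])     # shares the eigenvalue 2 with A
B_disjoint = np.diag([3.0, 5.0])   # shares no eigenvalue with A

def f_of(B, A):
    # f(B) = (B - lambda_1 I) ... (B - lambda_n I), lambda_i eigenvalues of A
    out = np.eye(len(B))
    for lam in np.linalg.eigvals(A):
        out = out @ (B - lam * np.eye(len(B)))
    return out

assert abs(np.linalg.det(f_of(B_shared, A))) < 1e-9      # singular
assert abs(np.linalg.det(f_of(B_disjoint, A))) > 1e-9    # non-singular
```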
Exercises 8.5
8.1 Find the Jordan canonical form of A as Q^{−1}AQ = J. Since A is nonsingular, all the diagonal entries λi of J, being the eigenvalues of A, are nonzero. Hence each Jordan block Jj of J is invertible. Now one can easily show that (Q^{−1}AQ)^{−1} = Q^{−1}A^{−1}Q = J^{−1}, which gives the Jordan form of A^{−1}, whose Jordan blocks are of the form Jj^{−1}.
8.3 (x, y) = (1/2)(4 + i, i).
8.4 (1) Use [3  1; 1  3] = [1  1; −1  1] [2  0; 0  4] [1/2  −1/2; 1/2  1/2]. (2) y(t) = 2e^{4t} (1/√2, 1/√2) − 2e^{2t} (−1/√2, 1/√2).
8.5 (1) Use the Jordan decomposition A = QJQ^{−1} of the coefficient matrix. (2) y1(t) = −2e^{2(1−t)} + 4e^{2(t−1)}, y2(t) = −e^{2(1−t)} + 2e^{2(t−1)}, y3(t) = 2e^{2(1−t)} − 2e^{2(t−1)}.
8.6 y1(t) = 2(t − 1)e^t, y2(t) = −2te^t, y3(t) = (2t − 1)e^t.
8.8 (1) (a − d)² + 4bc ≠ 0, or A = aI.
8.9 (1) t² + t − 11, (2) t² + 2t + 13, (3) (t − 1)(t² − 2t − 5).
8.10 (3) A^{−1} = [1  0  −1; 0  1/2  −1/2; 0  0  1].
8.11 (2) [5  −22  101; 0  27  −60; 0  0  87].
8.12 See the solution of Problem 8.7.
8.14 (3) Use (2) and the proof of Theorem 8.11.
(4) Use (3), Theorem 8.11, and (1) of Section 8.3.
8.15 (2) A^k = [1  0  k; 0  2^k  2^k − 1; 0  0  1].
8.16 Four of them are true.
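The closed form in 8.15(2) determines A itself by setting k = 1, so it can be checked by repeated multiplication — a quick sketch:

```python
import numpy as np

# Sketch verifying 8.15(2): with A = [[1,0,1],[0,2,1],[0,0,1]] (the k = 1
# case of the formula), repeated multiplication reproduces
#   A^k = [[1, 0, k], [0, 2^k, 2^k - 1], [0, 0, 1]].
A = np.array([[1, 0, 1],
              [0, 2, 1],
              [0, 0, 1]])

P = np.eye(3, dtype=int)           # exact integer arithmetic
for k in range(1, 11):
    P = P @ A
    expected = np.array([[1, 0, k],
                         [0, 2**k, 2**k - 1],
                         [0, 0, 1]])
    assert (P == expected).all()
```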
Chapter 9 Problems
9.1 (1) [9  3  −4; 3  −1  1; −4  1  4], (2) [0  1  1; 1  0  1; 1  1  0], (3) [2  1  1  0; 1  1  0  −5; 1  0  −1  2; 0  −5  2  −1].
9.3 (1) The eigenvalues of A are 1, 2, 11. (2) The eigenvalues are 17, 0, −3, so it is a hyperbolic cylinder. (3) A is singular and the linear form is present; thus the graph is a parabola.
9.5 B, with the eigenvalues 2, 2 + √2, and 2 − √2.
9.7 The determinant is the product of the eigenvalues.
9.9 (1) is indefinite; (2) and (3) are positive definite.
9.10 (1) local minimum, (2) saddle point.
9.12 (2) b11 = b14 = b41 = b44 = 1, all others are zero.
9.14 Let D be a diagonal matrix, and let D′ be obtained from D by interchanging two diagonal entries dii and djj, i ≠ j. Let P be the permutation matrix interchanging the i-th and j-th rows. Then PDP^T = D′.
9.15 Count the number of distinct inertias (p, q, k). For n, the number of inertias with p = i is n − i + 1.
9.16 (3) index = 2, signature = 1, and rank = 3.
9.17 Note that the maximum value of R(x) is the maximum eigenvalue of A, and similarly for the minimum value.
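The Rayleigh-quotient fact used in 9.17 can be checked numerically — a sketch with a random symmetric matrix (an assumed example, not from the book):

```python
import numpy as np

# Sketch for 9.17: the Rayleigh quotient R(x) = x^T A x / x^T x of a symmetric
# matrix always lies between the smallest and largest eigenvalues, and attains
# them at the corresponding eigenvectors.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
A = (A + A.T) / 2                      # symmetrize

eigvals, eigvecs = np.linalg.eigh(A)   # eigenvalues in ascending order
X = rng.standard_normal((1000, 4))     # random sample vectors
R = np.einsum('ij,jk,ik->i', X, A, X) / (X * X).sum(axis=1)
assert eigvals[0] - 1e-9 <= R.min() and R.max() <= eigvals[-1] + 1e-9

# The extremes are attained at the (unit) eigenvectors:
vmin, vmax = eigvecs[:, 0], eigvecs[:, -1]
assert np.isclose(vmin @ A @ vmin, eigvals[0])
assert np.isclose(vmax @ A @ vmax, eigvals[-1])
```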
9.18 max = 7/2 at ±(1/√2, 1/√2); min = 1/2 at ±(1/√2, −1/√2).
9.19 (1) max = 4 at ±(1/√6)(1, 1, 2), min = −2 at ±(1/√3)(−1, −1, 1); (2) max = 3 at ±(1/√6)(2, 1, 1), min = 0 at ±(1/√3)(1, −1, −1).
9.22 If u ∈ U ∩ W, then u = αx + βy ∈ W for some scalars α and β. Since x, y ∈ U, b(u, x) = b(u, y) = 0. But b(u, x) = βb(y, x) = −β and b(u, y) = αb(x, y) = α, so α = β = 0 and u = 0.
9.23 Let c(x, y) = (1/2)(b(x, y) + b(y, x)) and d(x, y) = (1/2)(b(x, y) − b(y, x)). Then b = c + d.
Exercises 9.13
9.1 (1) [1  2; 2  3], (3) [3  −2  0; −2  −2  −4; 0  −4  −3], (4) [7  −8; 5  0].
9.3 (2) {(2, 1, 2), (−1, −2, 2), (1, 0, 0)}.
9.4 (i) If a = 0 = c, then λi = ±b, so the conic section is a hyperbola. (ii) Since we assumed that b ≠ 0, the discriminant (a − c)² + 4b² > 0. By the symmetry of the equation in x and y, we may assume that a − c ≥ 0. If a − c = 0, then λi = a ± b. Thus the conic section is an ellipse if λ1λ2 = a² − b² > 0, or a hyperbola if a² − b² < 0. If λ1λ2 = a² − b² = 0, then it is a parabola when λ1 ≠ 0 and e′ ≠ 0, or a line or two lines in the other cases. If a − c > 0, let r² = (a − c)² + 4b². Then λi = ((a + c) ± r)/2 for i = 1, 2. Hence, 4λ1λ2 = (a + c)² − r² = 4(ac − b²). Thus the conic section is an ellipse if det A = ac − b² > 0, or a hyperbola if det A = ac − b² < 0. If det A = ac − b² = 0, it is a parabola, or a line or two lines, depending on the possible values of d′, e′, and the eigenvalues.
9.6 If λ is an eigenvalue of A, then λ² and 1/λ are eigenvalues of A² and A^{−1}, respectively.
9.8 Note that x^T(A + B)x = x^T Ax + x^T Bx.
9.10 (1) Q = (1/√2) [1  1; 1  −1]. The form is indefinite, with eigenvalues λ = 5 and λ = −1.
9.11 (1) A = [3  9; 2  0], (2) B = [2  −1; 0  6], (3) Q = [1  2; 1  −1].
9.15 (2) The signature is 1, the index is 2, and the rank is 3.
9.18 (2) The point (1, π) is a critical point, and the Hessian is [1  1; 1  −1]. Hence f(1, π) is a local maximum.
Seven of them are true. (5) Consider the bilinear form b(x, y) = x1y1 − x2y2 on R². (7) The identity I is congruent to k²I for all k ∈ R. (8) See (7). (9) Consider the bilinear form b(x, y) = x1y2. Its matrix Q = [0  1; 0  0] is not diagonalizable.
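The determinant-based conic classification worked out above (ellipse, hyperbola, or parabolic/degenerate case according to the sign of ac − b²) can be sketched as a small helper; the sample coefficients are arbitrary examples:

```python
# Sketch of the classification above: for a x^2 + 2b xy + c y^2 + ... = const
# with b != 0, the sign of det [[a, b], [b, c]] = ac - b^2 separates the cases.
def conic_type(a: float, b: float, c: float) -> str:
    d = a * c - b * b          # determinant of the symmetric form matrix
    if d > 0:
        return "ellipse"
    if d < 0:
        return "hyperbola"
    return "parabola or degenerate"

assert conic_type(2, 1, 2) == "ellipse"                  # det = 3 > 0
assert conic_type(1, 2, 1) == "hyperbola"                # det = -3 < 0
assert conic_type(1, 1, 1) == "parabola or degenerate"   # det = 0
```

As the solution notes, the degenerate subcases (a single line or a pair of lines) depend additionally on the linear terms d′, e′, so the zero-determinant branch cannot be refined from a, b, c alone.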