(Solution Manual) Contemporary Linear Algebra by Howard Anton, Robert C. Busby


CHAPTER 2

Systems of Linear Equations

EXERCISE SET 2.1

1. (a) and (c) are linear. (b) is not linear due to the x1x3 term. (d) is not linear due to the x1^(3/2) term.

2. (a) and (d) are linear. (b) is not linear because of the xyz term. (c) is not linear because of the x^(3/5) term.

3. (a) is linear. (b) is linear if k ≠ 0. (c) is linear only if k = 1.

4. (a) is linear. (b) is linear if m ≠ 0. (c) is linear only if m = 1.

5. (a), (d), and (e) are solutions; these sets of values satisfy all three equations. (b) and (c) are not solutions.

6. (b), (d), and (e) are solutions; these sets of values satisfy all three equations. (a) and (c) are not solutions.

7. The three lines intersect at the point (1, 0) (see figure). The values x = 1, y = 0 satisfy all three equations, and this is the unique solution of the system.

The augmented matrix of the system is

[1   2 | 1]
[2   1 | 2]
[3  -3 | 3]

Add -2 times row 1 to row 2 and add -3 times row 1 to row 3:

[1   2 | 1]
[0  -3 | 0]
[0  -9 | 0]

Multiply row 2 by -1/3 and add 9 times the new row 2 to row 3:

[1   2 | 1]
[0   1 | 0]
[0   0 | 0]

From the last row we see that the system is redundant (reduces to only two equations). From the second row we see that y = 0 and, from back substitution, it follows that x = 1 - 2y = 1.
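The elimination above can be checked numerically. A minimal NumPy sketch, with the system x + 2y = 1, 2x + y = 2, 3x - 3y = 3 reconstructed from the row operations described in the text:

```python
import numpy as np

# Row-reduce the augmented matrix of Exercise 7's system
#   x + 2y = 1,  2x + y = 2,  3x - 3y = 3
# (coefficients reconstructed from the stated row operations).
M = np.array([[1.0,  2.0, 1.0],
              [2.0,  1.0, 2.0],
              [3.0, -3.0, 3.0]])

M[1] -= 2 * M[0]     # add -2 times row 1 to row 2  -> [0, -3, 0]
M[2] -= 3 * M[0]     # add -3 times row 1 to row 3  -> [0, -9, 0]
M[1] /= -3.0         # multiply row 2 by -1/3       -> [0,  1, 0]
M[2] += 9 * M[1]     # add 9 times new row 2 to row 3 -> [0, 0, 0]

y = M[1, 2]          # second row gives y = 0
x = M[0, 2] - 2 * y  # back substitution: x = 1 - 2y = 1
```

As in the text, the last row becomes all zeros (the third equation is redundant), row 2 gives y = 0, and back substitution gives x = 1.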


8. The three lines do not intersect in a common point (see figure). This system has no solution.

The reduced row echelon form of the augmented matrix (details omitted) has a last row corresponding to the equation 0 = 1, so the system is inconsistent.

9. (a)

The solution set of the equation 7x - 5y = 3 can be described parametrically by (for example) solving the equation for x in terms of y and then making y into a parameter. This leads to x = (3 + 5t)/7, y = t, where -∞ < t < ∞.

(b) The solution set of 3x1 - 5x2 + 4x3 = 7 can be described by solving the equation for x1 in terms of x2 and x3, then making x2 and x3 into parameters. This leads to x1 = (7 + 5s - 4t)/3, x2 = s, x3 = t, where -∞ < s, t < ∞.

(c) The solution set of -8x1 + 2x2 - 5x3 + 6x4 = 1 can be described by (for example) solving the equation for x2 in terms of x1, x3, and x4, then making x1, x3, and x4 into parameters. This leads to x1 = r, x2 = (1 + 8r + 5s - 6t)/2, x3 = s, x4 = t, where -∞ < r, s, t < ∞.
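A parametric description like the ones above can be verified by substituting it back into the equation; it must hold identically in the parameters. A short check (plain Python, a few sample parameter values):

```python
# Check of Exercise 9(a)-(b): the parametric solution satisfies the
# original equation for every value of the parameters.
def check_9a(t):
    # 7x - 5y = 3 with x = (3 + 5t)/7, y = t
    x, y = (3 + 5 * t) / 7.0, t
    return abs(7 * x - 5 * y - 3) < 1e-9

def check_9b(s, t):
    # 3x1 - 5x2 + 4x3 = 7 with x1 = (7 + 5s - 4t)/3, x2 = s, x3 = t
    x1 = (7 + 5 * s - 4 * t) / 3.0
    return abs(3 * x1 - 5 * s + 4 * t - 7) < 1e-9

assert all(check_9a(t) for t in (-3.0, 0.0, 1.5, 10.0))
assert all(check_9b(s, t) for s in (-2.0, 0.0, 3.0) for t in (-1.0, 0.5, 4.0))
```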

30. The augmented matrix [A | b] can be row reduced until the coefficient part is in echelon form; the last row then has zeros in every coefficient column, and the system is consistent if and only if the remaining entry in the augmented column (a combination of b1, b2, b3) is zero.

31. The augmented matrix can be row reduced to a form whose last row is [0  0 | 2b1 - b2 + b3]; thus the system is consistent if and only if 2b1 - b2 + b3 = 0.

32. The augmented matrix for this system can be row reduced (details omitted), and from this it follows that the system Ax = b is consistent if and only if the components of b satisfy the equations -b1 + b3 + b4 = 0 and -2b1 + b2 + b4 = 0. The general solution of this homogeneous system can be written as b1 = s + t, b2 = 2s + t, b3 = s, b4 = t. Thus the original system Ax = b is consistent if and only if the vector b is of the form

b = s[1, 2, 1, 0]^T + t[1, 1, 0, 1]^T
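The consistency conditions -b1 + b3 + b4 = 0 and -2b1 + b2 + b4 = 0 and their general solution can be checked numerically:

```python
import numpy as np

# Conditions on b for Ax = b to be consistent (Exercise 32):
#   -b1      + b3 + b4 = 0
#  -2b1 + b2      + b4 = 0
# Every b of the form s*(1,2,1,0) + t*(1,1,0,1) should satisfy both.
C = np.array([[-1.0, 0.0, 1.0, 1.0],
              [-2.0, 1.0, 0.0, 1.0]])
v1 = np.array([1.0, 2.0, 1.0, 0.0])
v2 = np.array([1.0, 1.0, 0.0, 1.0])

for s in (-3.0, 0.0, 2.0):
    for t in (-1.0, 0.0, 5.0):
        b = s * v1 + t * v2
        assert np.allclose(C @ b, 0)   # both conditions hold
```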

33. The matrix A can be reduced to a row echelon form R by the following sequence of row operations:

(1) Interchange rows 1 and 2.
(2) Add 2 times row 1 to row 3.
(3) Add -1 times row 2 to row 3.

It follows that R = E3 E2 E1 A, where E1, E2, E3 are the elementary matrices corresponding to these three row operations.
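The relation R = E3 E2 E1 A can be illustrated numerically. The matrix A below is a hypothetical example (the matrix in this copy of the text is illegible); the mechanism is independent of the particular A:

```python
import numpy as np

# The three row operations above, written as elementary matrices.
A = np.array([[0.0, 1.0, 7.0],
              [1.0, 3.0, -5.0],
              [-2.0, -5.0, 13.0]])   # hypothetical example matrix

E1 = np.eye(3)[[1, 0, 2]]            # (1) interchange rows 1 and 2
E2 = np.eye(3); E2[2, 0] = 2.0       # (2) add 2 times row 1 to row 3
E3 = np.eye(3); E3[2, 1] = -1.0      # (3) add -1 times row 2 to row 3

R = E3 @ E2 @ E1 @ A

# Same result as applying the operations directly to the rows of A:
B = A[[1, 0, 2]].copy()
B[2] += 2 * B[0]
B[2] -= B[1]
assert np.allclose(R, B)
assert np.allclose(R, np.triu(R))    # R is in row echelon form
```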

this matrix can be reduced to a row echelon form having an entry 2 + 2λ, so the matrix is singular if and only if λ = -1. Thus the given vectors are linearly dependent iff λ = 1/2 or λ = -1.

29.

(a) Suppose S = {v1, v2, v3} is a linearly independent set. Note first that none of these vectors can be equal to 0 (otherwise S would be linearly dependent), and so each of the sets {v1}, {v2}, and {v3} is linearly independent. Suppose then that T is a 2-element subset of S, e.g. T = {v1, v2}. If T is linearly dependent then there are scalars c1 and c2, not both zero, such that c1 v1 + c2 v2 = 0. But, if this were true, then c1 v1 + c2 v2 + 0 v3 = 0 would be a nontrivial linear relationship among the vectors v1, v2, v3, and so S would be linearly dependent. Thus T = {v1, v2} is linearly independent. The same argument applies to any 2-element subset of S. Thus if S is linearly independent, then each of its nonempty subsets is linearly independent.

(b) If S = {v1, v2, v3} is linearly dependent, then there are scalars c1, c2, and c3, not all zero, such that c1 v1 + c2 v2 + c3 v3 = 0. Thus, for any vector v in R^n, we have c1 v1 + c2 v2 + c3 v3 + 0 v = 0, and this is a nontrivial linear relationship among the vectors v1, v2, v3, v. This shows that if S = {v1, v2, v3} is linearly dependent then so is T = {v1, v2, v3, v} for any v.

30. The arguments used in Exercise 29 can easily be adapted to this more general situation.

31. (u - v) + (v - w) + (w - u) = 0; thus the vectors u - v, v - w, and w - u form a linearly dependent set.

32. First note that the relationship between the vectors u, v and s, g can be written as

[u]   [ 1     0.06] [s]
[v] = [ 0.12  1   ] [g]

and so the inverse relationship is given by

[s]   [ 1     0.06]^(-1) [u]        1     [ 1     -0.06] [u]
[g] = [ 0.12  1   ]      [v]  =  ------  [-0.12   1   ] [v]
                                 0.9928

Parts (a), (b), (c) of the problem can now be answered as follows:

(a) s = (1/0.9928)(u - 0.06v)
(b) g = (1/0.9928)(-0.12u + v)

33. (a) No. It is not closed under either addition or scalar multiplication.

(b) P876 = 0.38c + 0.59m + 0.73y + 0.07k
    P216 = 0.83m + 0.34y + 0.47k
    P328 = c + 0.47y + 0.30k

(c) (1/2)(P876 + P216) corresponds to the CMYK vector (0.19, 0.71, 0.535, 0.27).

34. (a), (b) In each case the weights are equal: each ki = 1/n, where n is the number of vectors being averaged.
(c) The components of x = (1/4)v1 + (1/4)v2 + (1/2)v3 represent the average scores for the students if the Test 3 grade (the final exam?) is weighted twice as heavily as Test 1 and Test 2.

35. (a) k1 = k2 = k3 = k4 = 1/4
(b) k1 = k2 = k3 = k4 = k5 = 1/5
(c) The components of x = (1/3)(r1 + r2 + r3) represent the average total population of Philadelphia, Bucks, and Delaware counties in each of the sampled years.

36. v = sum_{k=1}^{n} v_k

DISCUSSION AND DISCOVERY

D1. (a) Two nonzero vectors will span R^2 if and only if they do not lie on a line.
(b) Three nonzero vectors will span R^3 if and only if they do not lie in a plane.

D2. (a) Two vectors in R^n will span a plane if and only if they are nonzero and not scalar multiples of one another.
(b) Two vectors in R^n will span a line if and only if they are not both zero and one is a scalar multiple of the other.
(c) span{u} = span{v} if and only if one of the vectors u and v is a scalar multiple of the other.

D3. (a) Yes. If three nonzero vectors are mutually orthogonal then none of them lies in the plane spanned by the other two; thus the three are linearly independent.
(b) Suppose the vectors v1, v2, and v3 are nonzero and mutually orthogonal; thus vi · vi = ||vi||^2 > 0 for i = 1, 2, 3 and vi · vj = 0 for i ≠ j. To prove they are linearly independent we must show that if c1 v1 + c2 v2 + c3 v3 = 0 then c1 = c2 = c3 = 0. This follows from the fact that if c1 v1 + c2 v2 + c3 v3 = 0, then

ci ||vi||^2 = vi · (c1 v1 + c2 v2 + c3 v3) = vi · 0 = 0

for i = 1, 2, 3.
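The argument in D3 can be illustrated numerically with three mutually orthogonal vectors (chosen here for illustration):

```python
import numpy as np

# Three nonzero, mutually orthogonal vectors in R^3 are linearly
# independent, so the matrix with them as columns has rank 3.
v1 = np.array([1.0, 1.0, 0.0])
v2 = np.array([1.0, -1.0, 2.0])
v3 = np.array([1.0, -1.0, -1.0])

assert v1 @ v2 == 0 and v1 @ v3 == 0 and v2 @ v3 == 0   # orthogonality
assert np.linalg.matrix_rank(np.column_stack([v1, v2, v3])) == 3

# The key identity: c_i * ||v_i||^2 = v_i . (c1 v1 + c2 v2 + c3 v3)
c = np.array([0.7, -1.2, 3.0])
w = c[0] * v1 + c[1] * v2 + c[2] * v3
assert np.allclose([v1 @ w, v2 @ w, v3 @ w],
                   c * np.array([v1 @ v1, v2 @ v2, v3 @ v3]))
```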

D4. The vectors in the first figure are linearly independent since none of them lies in the plane spanned by the other two (none of them can be expressed as a linear combination of the other two). The vectors in the second figure are linearly dependent since v3 = v1 + v2.

D5. This set is closed under scalar multiplication, but not under addition. For example, the vectors u = (1, 2) and v = (-2, -1) correspond to points in the set, but u + v = (-1, 1) does not.

D6. (a) False. For example, two of the vectors may lie on a line (so one is a scalar multiple of the other), but the third vector may not lie on this same line and therefore cannot be expressed as a linear combination of the other two.
(b) False. The set of all linear combinations of two vectors can be {0} (if both are 0), a line (if one is a scalar multiple of the other), or a plane (if they are linearly independent).
(c) False. For example, v and w might be linearly dependent (scalar multiples of each other). [But it is true that if {v, w} is a linearly independent set, and if u cannot be expressed as a linear combination of v and w, then {u, v, w} is a linearly independent set.]
(d) True. See Example 9.
(e) True. If c1(k v1) + c2(k v2) + c3(k v3) = 0, then k(c1 v1 + c2 v2 + c3 v3) = 0. Thus, since k ≠ 0, it follows that c1 v1 + c2 v2 + c3 v3 = 0 and so c1 = c2 = c3 = 0.

D7. (a) False. The set {u, ku} is always a linearly dependent set.
(b) False. This statement is true for a homogeneous system (b = 0), but not for a non-homogeneous system. [The solution space of a non-homogeneous linear system is a translated subspace.]

Chapter 3

(c) True. If W is a subspace, then W is closed under scalar multiplication and addition, and so span(W) = W.
(d) False. For example, if S1 = {(1, 0), (0, 1)} and S2 = {(1, 1), (0, 1)}, then span(S1) = span(S2) = R^2, but S1 ≠ S2.

D8. Since span(S) is a subspace (already closed under scalar multiplication and addition), we have span(span(S)) = span(S).

WORKING WITH PROOFS

P1. Let θ be the angle between u and w, and let φ be the angle between v and w; we will show that θ = φ. Write k = ||u|| and l = ||v||, so that w = lu + kv. First recall that u · w = ||u|| ||w|| cos θ, and so u · w = k ||w|| cos θ. Similarly, v · w = l ||w|| cos φ. On the other hand we have

u · w = u · (lu + kv) = l(u · u) + k(u · v) = lk^2 + k(u · v)

and so k ||w|| cos θ = lk^2 + k(u · v), i.e. ||w|| cos θ = lk + (u · v). A similar calculation shows that l ||w|| cos φ = l(u · v) + kl^2, i.e. ||w|| cos φ = (u · v) + kl. Thus ||w|| cos θ = ||w|| cos φ. It follows that cos θ = cos φ and θ = φ.

P2. If x belongs to W1 ∩ W2 and k is a scalar, then kx also belongs to W1 ∩ W2 since both W1 and W2 are subspaces. Similarly, if x1 and x2 belong to W1 ∩ W2, then x1 + x2 belongs to W1 ∩ W2. Thus W1 ∩ W2 is closed under scalar multiplication and addition, i.e. W1 ∩ W2 is a subspace.

P3. First we show that W1 + W2 is closed under scalar multiplication: Suppose z = x + y where x is in W1 and y is in W2. Then, for any scalar k, we have kz = k(x + y) = kx + ky, where kx is in W1 and ky is in W2 (since W1 and W2 are subspaces); thus kz is in W1 + W2. Next we show that W1 + W2 is closed under addition: Suppose z1 = x1 + y1 and z2 = x2 + y2, where x1 and x2 are in W1 and y1 and y2 are in W2. Then z1 + z2 = (x1 + y1) + (x2 + y2) = (x1 + x2) + (y1 + y2), where x1 + x2 is in W1 and y1 + y2 is in W2 (since W1 and W2 are subspaces); thus z1 + z2 is in W1 + W2.

EXERCISE SET 3.5

1. (a) The reduced row echelon form of the augmented matrix of the homogeneous system leads to a general solution, which can be written in column vector form.

(c) From (a) and (b), a general solution of the nonhomogeneous system is given by adding a particular solution of the nonhomogeneous system to the general solution of the homogeneous system from (a).

(d) The reduced row echelon form of the augmented matrix of the nonhomogeneous system leads to a general solution that is related to the one in part (c) by a change of variable.
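The structure used in part (c) — every solution of Ax = b is one particular solution plus a solution of Ax = 0 — can be sketched numerically. The system below is a hypothetical underdetermined example:

```python
import numpy as np

# particular solution + homogeneous solution = general solution of Ax = b
A = np.array([[1.0, 2.0, -1.0],
              [2.0, 4.0, 1.0]])      # hypothetical consistent system
b = np.array([3.0, 9.0])

x_p = np.linalg.lstsq(A, b, rcond=None)[0]   # one particular solution
assert np.allclose(A @ x_p, b)

h = np.array([-2.0, 1.0, 0.0])               # nonzero solution of Ax = 0
assert np.allclose(A @ h, 0)

# particular + any homogeneous multiple is again a solution
for t in (-5.0, 0.0, 2.5):
    assert np.allclose(A @ (x_p + t * h), b)
```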

2. (a) The plane corresponds to an equation of the form ax + by + cz = d.
(e) False. A vector x is orthogonal to row(A) if and only if x is a solution of the homogeneous system Ax = 0.

WORKING WITH PROOFS

P1. Suppose that Ax = 0 has only the trivial solution and that Ax = b is consistent. If x1 and x2 are two solutions of Ax = b, then A(x1 - x2) = Ax1 - Ax2 = b - b = 0 and so x1 - x2 = 0, i.e. x1 = x2. Thus, if Ax = 0 has only the trivial solution, the system Ax = b is either inconsistent or has exactly one solution.

P2. Suppose that Ax = 0 has infinitely many solutions and that Ax = b is consistent. Let x0 be any solution of Ax = b. Then, for any solution w of Ax = 0, we have A(x0 + w) = Ax0 + Aw = b + 0 = b. Thus, if Ax = 0 has infinitely many solutions, the system Ax = b is either inconsistent or has infinitely many solutions. Conversely, if Ax = b has at most one solution, then Ax = 0 has only the trivial solution.

P3. If x1 is a solution of Ax = b, then every solution of Ax = b can be written as x1 + w, where w is a solution of Ax = 0; conversely, each such vector x1 + w is a solution of Ax = b.

10. Using the given LU-decomposition, the system Ax = b is solved by first solving Ly = b for y (forward substitution) and then solving Ux = y for x (back substitution).

11. Let e1, e2, e3 be the standard unit vectors in R^3. The jth column xj of the matrix A^(-1) is obtained by solving Ax = ej for x = xj. Using the given LU-decomposition, we do this by first solving Ly = ej for y = yj, and then solving Ux = yj for x = xj.

(1) Computation of x1: The system Ly = e1 is

3y1             = 1
2y1 + 4y2       = 0
-4y1 - y2 + 2y3 = 0

from which we obtain y1 = 1/3, y2 = -1/6, y3 = 7/12. Then the system Ux = y1 is solved by back substitution to obtain x1.

(2) Computation of x2: The system Ly = e2 is

3y1             = 0
2y1 + 4y2       = 1
-4y1 - y2 + 2y3 = 0

from which we obtain y1 = 0, y2 = 1/4, y3 = 1/8. Then the system Ux = y2 is solved by back substitution to obtain x2.

(3) Computation of x3: The system Ly = e3 is

3y1             = 0
2y1 + 4y2       = 0
-4y1 - y2 + 2y3 = 1

from which we obtain y1 = 0, y2 = 0, y3 = 1/2. Then the system Ux = y3 is solved by back substitution to obtain x3.

Finally, as a result of these computations, we conclude that A^(-1) = [x1  x2  x3].
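The column-by-column computation above can be sketched in NumPy. The factor L is read off from the forward-substitution systems in the text; the factor U is illegible in this copy, so a unit upper triangular stand-in is used for illustration:

```python
import numpy as np

# Compute A^{-1} column-by-column from an LU-decomposition:
# solve L y = e_j, then U x = y.
L = np.array([[3.0, 0.0, 0.0],
              [2.0, 4.0, 0.0],
              [-4.0, -1.0, 2.0]])    # from the text's systems
U = np.array([[1.0, -2.0, -1.0],
              [0.0, 1.0, 2.0],
              [0.0, 0.0, 1.0]])      # illustrative stand-in
A = L @ U

cols = []
for j in range(3):
    e = np.zeros(3)
    e[j] = 1.0
    y = np.linalg.solve(L, e)        # forward substitution: L y = e_j
    x = np.linalg.solve(U, y)        # back substitution:    U x = y
    cols.append(x)
A_inv = np.column_stack(cols)

assert np.allclose(A @ A_inv, np.eye(3))
# The y-vector for e_2 matches the text: y = (0, 1/4, 1/8).
assert np.allclose(np.linalg.solve(L, [0.0, 1.0, 0.0]), [0.0, 0.25, 0.125])
```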

12. Let e1, e2, e3 be the standard unit vectors in R^3. The jth column xj of the matrix A^(-1) is obtained by solving the system Ax = ej for x = xj. Using the given LU-decomposition, we do this by first solving Ly = ej for y = yj, and then solving Ux = yj for x = xj; the columns obtained in this way form A^(-1) = [x1  x2  x3].

13. The matrix A can be reduced to a row echelon form U by a sequence of row operations in which each leading entry is used to eliminate the entries below it. Recording the multiplier for each of these operations leads to the factorization A = LU, where L is lower triangular with the multipliers below its diagonal. If instead we prefer a factorization with a unit lower triangular factor, the diagonal entries can be shifted into the other factor.

Using the given decomposition, the system P^(-1)Ax = P^(-1)b can be written as LUx = P^(-1)b. The solution of Ly = P^(-1)b is y1 = 3, y2 = 0, y3 = 0, and the solution of Ux = y (and of Ax = b) then follows by back substitution.

19. If we interchange rows 2 and 3 of A, then the resulting matrix can be reduced to row echelon form without any further row interchanges. This is equivalent to first multiplying A on the left by the corresponding permutation matrix P. The reduction of PA to row echelon form then proceeds without interchanges and corresponds to an LU-decomposition PA = LU of the matrix PA, or to the PLU-decomposition A = P^(-1)LU of A. Note that, since P^(-1) = P, this decomposition can also be written as A = PLU.

The system Ax = b is equivalent to PAx = Pb and, using the LU-decomposition obtained above, this can be written as LUx = Pb. Finally, the solution of Ly = Pb is found by forward substitution (y2 = 2, y3 = 3), and the solution of Ux = y (and of Ax = b) follows by back substitution.
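The PLU mechanism of Exercise 19 can be sketched numerically. The matrix A below is a hypothetical example (the one in this copy is illegible); it is chosen so that elimination hits a zero pivot unless rows 2 and 3 are first interchanged:

```python
import numpy as np

# Swap rows 2 and 3 via P, factor PA = LU, and recover A = P L U
# (valid here because P^{-1} = P for a single interchange).
A = np.array([[3.0, -1.0, 0.0],
              [6.0, -2.0, 1.0],
              [0.0, 2.0, -1.0]])    # hypothetical example
P = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])     # interchanges rows 2 and 3

PA = P @ A
L = np.eye(3)
U = PA.copy()
for k in range(3):                   # Doolittle elimination, no pivoting needed
    for i in range(k + 1, 3):
        L[i, k] = U[i, k] / U[k, k]
        U[i] -= L[i, k] * U[k]

assert np.allclose(PA, L @ U)        # PA = LU
assert np.allclose(A, P @ L @ U)     # A = P L U
```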

20. If we interchange rows 1 and 2 of A, then the resulting matrix can be reduced to row echelon form without any further row interchanges. This is equivalent to first multiplying A on the left by the corresponding permutation matrix P. The LU-decomposition of the matrix PA then yields the corresponding PLU-decomposition of A; since P^(-1) = P, this decomposition can also be written as A = PLU.

det(A) = (λ - 2)(λ + 4) + 5 = λ^2 + 2λ - 3 = (λ - 1)(λ + 3); thus det(A) = 0 if and only if λ = 1 or λ = -3.

14. λ = 1, λ = 3, or λ = -2.

15. det(A) = (λ - 1)(λ + 1). Thus det(A) = 0 if and only if λ = 1 or λ = -1.

16. λ = 2 or λ = 5.

17. We have

| x   -1 |
| 3  1-x |  =  x(1 - x) + 3  =  -x^2 + x + 3

and the second (3x3) determinant equals

(x(x - 5) + 0 - 18) - (-3x - 18 + 0) = x^2 - 2x

Thus the given equation is satisfied if and only if -x^2 + x + 3 = x^2 - 2x, i.e. if 2x^2 - 3x - 3 = 0. The roots of this quadratic equation are x = (3 ± √33)/4.
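A quick numerical check that x = (3 ± √33)/4 are the roots, and that each root makes the two determinant expressions equal:

```python
from math import isclose, sqrt

# Roots of 2x^2 - 3x - 3 = 0, where -x^2 + x + 3 = x^2 - 2x.
for x in ((3 + sqrt(33)) / 4, (3 - sqrt(33)) / 4):
    assert isclose(2 * x**2 - 3 * x - 3, 0.0, abs_tol=1e-12)
    assert isclose(-x**2 + x + 3, x**2 - 2 * x, abs_tol=1e-12)
```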

18. y = 3

19.-20. Each of these determinants can be evaluated by inspection: for a triangular matrix the determinant is the product of the diagonal entries. The values include (1)(1)(2)(3) = 6, (1)(2)(3)(4) = 24, (-3)(2)(-1)(3) = 18, and (1)(-1)(1) = -1.

21. M11 = 29,  C11 = 29     M12 = 21,   C12 = -21    M13 = 27,  C13 = 27
    M21 = -11, C21 = 11     M22 = 13,   C22 = 13     M23 = -5,  C23 = 5
    M31 = -19, C31 = -19    M32 = -19,  C32 = 19     M33 = 19,  C33 = 19

Chapter 4

22. M11 = 6,  C11 = 6      M12 = 12,  C12 = -12    M13 = 3,  C13 = 3
    M21 = 2,  C21 = -2     M22 = 4,   C22 = 4      M23 = 1,  C23 = -1
    M31 = 0,  C31 = 0      M32 = 0,   C32 = 0      M33 = 0,  C33 = 0

23. (a) M13 = (0 + 0 + 12) - (12 + 0 + 0) = 0, C13 = 0
(b) M23 = (8 - 56 + 24) - (24 + 56 - 8) = -96, C23 = 96
(c) M22 = (0 + 56 + 72) - (0 + 8 + 168) = -48, C22 = -48
(d) M21 = (0 + 14 + 18) - (0 + 2 - 42) = 72, C21 = -72

24. (a) M32 = -30, C32 = 30
(b) M44 = 13, C44 = 13
(c) M41 = -1, C41 = 1
(d) M24 = 0, C24 = 0

25. (a) det(A) = (1)C11 + (-2)C12 + (3)C13 = (1)(29) + (-2)(-21) + (3)(27) = 152
(b) det(A) = (1)C11 + (6)C21 + (-3)C31 = (1)(29) + (6)(11) + (-3)(-19) = 152
(c) det(A) = (6)C21 + (7)C22 + (-1)C23 = (6)(11) + (7)(13) + (-1)(5) = 152
(d) det(A) = (-2)C12 + (7)C22 + (1)C32 = (-2)(-21) + (7)(13) + (1)(19) = 152
(e) det(A) = (-3)C31 + (1)C32 + (4)C33 = (-3)(-19) + (1)(19) + (4)(19) = 152
(f) det(A) = (3)C13 + (-1)C23 + (4)C33 = (3)(27) + (-1)(5) + (4)(19) = 152

26. (a) det(A) = (1)C11 + (1)C12 + (2)C13 = (1)(6) + (1)(-12) + (2)(3) = 0
(b) det(A) = (1)C11 + (3)C21 + (0)C31 = (1)(6) + (3)(-2) + (0)(0) = 0
(c) det(A) = (3)C21 + (3)C22 + (6)C23 = (3)(-2) + (3)(4) + (6)(-1) = 0
(d) det(A) = (1)C12 + (3)C22 + (1)C32 = (1)(-12) + (3)(4) + (1)(0) = 0
(e) det(A) = (0)C31 + (1)C32 + (4)C33 = (0)(0) + (1)(0) + (4)(0) = 0
(f) det(A) = (2)C13 + (6)C23 + (4)C33 = (2)(3) + (6)(-1) + (4)(0) = 0
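The matrix used in Exercise 25 can be reconstructed from the expansion coefficients shown: its rows are (1, -2, 3), (6, 7, -1), (-3, 1, 4). A NumPy check that every row and column expansion gives det(A) = 152:

```python
import numpy as np

# Cofactor expansion along every row and column of the Exercise 25 matrix.
A = np.array([[1.0, -2.0, 3.0],
              [6.0, 7.0, -1.0],
              [-3.0, 1.0, 4.0]])

def cofactor(M, i, j):
    minor = np.delete(np.delete(M, i, axis=0), j, axis=1)
    return (-1) ** (i + j) * np.linalg.det(minor)

C = np.array([[cofactor(A, i, j) for j in range(3)] for i in range(3)])
assert np.allclose(C, [[29, -21, 27], [11, 13, 5], [-19, 19, 19]])

for i in range(3):                      # expansion along row i
    assert np.isclose(A[i] @ C[i], 152)
for j in range(3):                      # expansion along column j
    assert np.isclose(A[:, j] @ C[:, j], 152)
assert np.isclose(np.linalg.det(A), 152)
```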

27. Using column 2: det(A) = (5)(-15 + 7) = -40

28. Using row 2: det(A) = (1)(-18) + (-4)(12) = -66

30. det(A) = k^3 - 8k^2 - 10k + 95

31. Expanding along column 3: det(A) = (-3)(128) - (3)(-48) = -240

32. det(A) = 0

33. By expanding along the third column, we have

| sinθ          -cosθ         0 |        | sinθ  -cosθ |
| cosθ           sinθ         0 |  = (1) | cosθ   sinθ |  = sin^2 θ + cos^2 θ = 1
| sinθ-cosθ   sinθ+cosθ   1 |

for all values of θ.

34. Computing the products AB and BA and comparing corresponding entries shows that AB = BA if and only if ae + bf = bd + ce; this is equivalent to the vanishing of a 2x2 determinant formed from the entries of A and B.

36. If A = [a  b; c  d], then tr(A) = a + d and tr(A^2) = a^2 + 2bc + d^2, so

(1/2) | tr(A)      1    |
      | tr(A^2)  tr(A)  |  =  (1/2)[(tr A)^2 - tr(A^2)]  =  (1/2)[(a + d)^2 - (a^2 + 2bc + d^2)]  =  ad - bc  =  det(A)

λ = 2 or λ = -2.

(a) Since det(A^T) = det(A), we have det(A^T A) = det(A^T) det(A) = (det(A))^2 = det(A) det(A^T) = det(A A^T).
(b) Since det(A^T A) = (det(A))^2, it follows that det(A^T A) = 0 if and only if det(A) = 0. Thus, from Theorem 4.2.4, A^T A is invertible if and only if A is invertible.

36. det(A^(-1) B A) = det(A^(-1)) det(B) det(A) = det(B)

37.

||x||^2 ||y||^2 - (x · y)^2 = (x1^2 + x2^2 + x3^2)(y1^2 + y2^2 + y3^2) - (x1y1 + x2y2 + x3y3)^2
                            = (x1y2 - x2y1)^2 + (x1y3 - x3y1)^2 + (x2y3 - x3y2)^2

as can be checked by expanding both sides (the cross terms -2x1y2x2y1, -2x1y3x3y1, -2x2y3x3y2 match up).

38. (a) We have det(A) = 5 - 4 = 1 and det(B) = 3 - 6 = -3; thus det(M) = (1)(-3) = -3.
(b) We have det(A) = 2(5 - 4) = 2 and det(B) = (3)(1)(-4) = -12; thus det(M) = (2)(-12) = -24.

39. (a), (b) Each determinant is evaluated by using row operations to produce a block-triangular form and then taking the product of the determinants of the diagonal blocks; in part (b) this gives det(M) = -1080.

DISCUSSION AND DISCOVERY

D1. The matrices are singular if and only if the corresponding determinants are zero. This leads to a system of equations from which the values of s and t follow.

D2. Since det(AB) = det(A) det(B) = det(B) det(A) = det(BA), it is always true that det(AB) = det(BA).

D3. If A or B is not invertible then either det(A) = 0 or det(B) = 0 (or both). It follows that det(AB) = det(A) det(B) = 0; thus AB is not invertible.


D4. For convenience call the given matrix A_n. If n = 2 or 3, then A_n can be reduced to the identity matrix by interchanging the first and last rows. Thus det(A_n) = -1 if n = 2 or 3. If n = 4 or 5, then two row interchanges are required to reduce A_n to the identity (interchange the first and last rows, then interchange the second and next-to-last rows). Thus det(A_n) = +1 if n = 4 or 5. This pattern continues and can be summarized as follows:

det(A_2k) = det(A_2k+1) = -1   for k = 1, 3, 5, ...
det(A_2k) = det(A_2k+1) = +1   for k = 2, 4, 6, ...

D5. If A is skew-symmetric, then det(A) = det(A^T) = det(-A) = (-1)^n det(A), where n is the size of A. It follows that if A is a skew-symmetric matrix of odd order, then det(A) = -det(A) and so det(A) = 0.

D6. Let A be an n x n matrix, and let B be the matrix that results when the rows of A are written in reverse order. Then the matrix B can be reduced to A by a series of row interchanges. If n = 2 or 3, then only one interchange is needed and so det(B) = -det(A). If n = 4 or 5, then two interchanges are required and so det(B) = +det(A). This pattern continues:

det(B) = -det(A)   for n = 2k or 2k + 1 where k is odd
det(B) = +det(A)   for n = 2k or 2k + 1 where k is even

D7. (a) False. For example, if A = I = I_2, then det(I + A) = det(2I) = 4, whereas 1 + det(A) = 2.
(b) True. From Theorem 4.2.5 it follows that det(A^n) = (det(A))^n for every n = 1, 2, 3, ....
(c) False. From Theorem 4.2.3(c), we have det(3A) = 3^n det(A), where n is the size of A. Thus the statement is false except when n = 1 or det(A) = 0.
(d) True. If det(A) = 0, the matrix is singular and so the system Ax = 0 has infinitely many solutions.

D8. (a) True. If A is invertible, then det(A) ≠ 0. Since det(ABA) = det(A) det(B) det(A), it follows that if A is invertible and det(ABA) = 0, then det(B) = 0.
(b) True. If A = A^(-1), then since det(A^(-1)) = 1/det(A), it follows that (det(A))^2 = 1 and so det(A) = ±1.
(c) True. If the reduced row echelon form of A has a row of zeros, then A is not invertible.
(d) True. Since det(A^T) = det(A), it follows that det(AA^T) = det(A) det(A^T) = (det(A))^2 ≥ 0.
(e) True. If det(A) ≠ 0 then A is invertible, and an invertible matrix can always be written as a product of elementary matrices.

D9. If A = A^2, then det(A) = det(A^2) = (det(A))^2 and so det(A) = 0 or det(A) = 1. If A = A^3, then det(A) = det(A^3) = (det(A))^3 and so det(A) = 0 or det(A) = ±1.

Each elementary product of this matrix must include a factor that comes from the 3 x 3 block of zeros on the upper right. Thus all of the elementary products are zero. It follows that det(A) = 0, no matter what values are assigned to the starred quantities.

D11. This permutation of the columns of an n x n matrix A can be attained via a sequence of n - 1 column interchanges which successively move the first column to the right by one position (i.e. interchange columns 1 and 2, then interchange columns 2 and 3, etc.). Thus the determinant of the resulting matrix is equal to (-1)^(n-1) det(A).


WORKING WITH PROOFS

P1. If x = [x1 ... xn]^T and y = [y1 ... yn]^T then, using cofactor expansions along the jth column, we have

det(G) = (x1 + y1)C1j + (x2 + y2)C2j + ... + (xn + yn)Cnj
       = (x1 C1j + x2 C2j + ... + xn Cnj) + (y1 C1j + y2 C2j + ... + yn Cnj)
       = det(A) + det(B)

where A and B are the matrices whose jth columns are x and y, respectively.

P2. Suppose A is a square matrix, and B is the matrix that is obtained from A by adding k times the ith row to the jth row. Then, expanding along the jth row of B, we have

det(B) = (aj1 + k ai1)Cj1 + (aj2 + k ai2)Cj2 + ... + (ajn + k ain)Cjn = det(A) + k det(C)

where C is the matrix obtained from A by replacing the jth row by a copy of the ith row. Since C has two identical rows, it follows that det(C) = 0, and so det(B) = det(A).

EXERCISE SET 4.3

1. Computing the matrix of cofactors C from A gives adj(A) = C^T; since det(A) = (2)(-3) + (5)(3) + (5)(-2) = -1, we have A^(-1) = (1/det(A)) adj(A) = -adj(A).

3. Computing the matrix of cofactors C from A gives adj(A) = C^T; since det(A) = (2)(2) + (-3)(0) + (5)(0) = 4, we have A^(-1) = (1/4) adj(A).

4. Similarly, A^(-1) = (1/det(A)) adj(A).

5. By Cramer's rule, x1 = 18/16 = 9/8 and x2 = 26/16 = 13/8.
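The adjugate formula A^(-1) = (1/det(A)) adj(A) used in Exercises 1-4 can be sketched numerically; the matrix below is a hypothetical invertible example, since the book's matrices are illegible in this copy:

```python
import numpy as np

# A^{-1} = adj(A)/det(A), where adj(A) is the transpose of the cofactor matrix.
A = np.array([[2.0, 5.0, 5.0],
              [1.0, 1.0, 0.0],
              [1.0, 2.0, 1.0]])     # hypothetical example, det(A) = 2

def cofactor(M, i, j):
    minor = np.delete(np.delete(M, i, axis=0), j, axis=1)
    return (-1) ** (i + j) * np.linalg.det(minor)

C = np.array([[cofactor(A, i, j) for j in range(3)] for i in range(3)])
adj_A = C.T
det_A = np.linalg.det(A)
assert not np.isclose(det_A, 0)

A_inv = adj_A / det_A
assert np.allclose(A @ A_inv, np.eye(3))
```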


6.-11. Each of these systems is solved by Cramer's rule: xj = det(Aj)/det(A), where Aj is the matrix obtained from the coefficient matrix A by replacing its jth column with the right-hand-side vector b. For example, in Exercise 6 the quotient for x is x = -153/-51 = 3, and in Exercise 8 the common denominator is det(A) = -423, giving x1 = -2115/-423 = 5, x3 = -1269/-423 = 3, and x4 = -3384/-423 = 8.
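Cramer's rule itself is easy to sketch in NumPy; the 3x3 system below is a hypothetical example, and the rule reproduces the solution from np.linalg.solve:

```python
import numpy as np

# Cramer's rule: x_j = det(A_j)/det(A), with A_j = A having column j
# replaced by b.
A = np.array([[2.0, 1.0, -1.0],
              [1.0, 3.0, 2.0],
              [1.0, -1.0, 2.0]])    # hypothetical example, det(A) = 20
b = np.array([1.0, 13.0, 5.0])

det_A = np.linalg.det(A)
x = np.empty(3)
for j in range(3):
    Aj = A.copy()
    Aj[:, j] = b                    # replace column j by b
    x[j] = np.linalg.det(Aj) / det_A

assert np.allclose(A @ x, b)
assert np.allclose(x, np.linalg.solve(A, b))
```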

15. The coefficient matrix has det(A) = k(k - 4); thus the system has a unique solution if and only if k ≠ 0 and k ≠ 4.

16. Since det(A) = cos^2 θ + sin^2 θ = 1, the matrix A is invertible for all values of θ.

17. det(A) is nonzero, and hence A is invertible, precisely when α ≠ π/2 + nπ.

EXERCISE SET 4.4

6. (a) λ = 0 is the only eigenvalue; it has algebraic multiplicity 2.
(b) λ = 4 and λ = -4 are eigenvalues; each has algebraic multiplicity 1.
(c) λ = 1 is the only eigenvalue; it has algebraic multiplicity 2.

7. (a) The characteristic equation is λ^3 - 6λ^2 + 11λ - 6 = (λ - 1)(λ - 2)(λ - 3) = 0. Thus λ = 1, λ = 2, and λ = 3 are eigenvalues; each has algebraic multiplicity 1.
(b) The characteristic equation is λ^3 - 4λ^2 + 4λ = λ(λ - 2)^2 = 0. Thus λ = 0 and λ = 2 are eigenvalues; λ = 0 has algebraic multiplicity 1, and λ = 2 has multiplicity 2.
(c) The characteristic equation is λ^3 - λ^2 - 8λ + 12 = (λ + 3)(λ - 2)^2 = 0. Thus λ = -3 and λ = 2 are eigenvalues; λ = -3 has multiplicity 1, and λ = 2 has multiplicity 2.

8. (a) The characteristic equation is λ^3 + 2λ^2 + λ = λ(λ + 1)^2 = 0. Thus λ = 0 is an eigenvalue of multiplicity 1, and λ = -1 is an eigenvalue of multiplicity 2.
(b) The characteristic equation is λ^3 - 6λ^2 + 12λ - 8 = (λ - 2)^3 = 0; thus λ = 2 is an eigenvalue of multiplicity 3.
(c) The characteristic equation is λ^3 - 2λ^2 - 15λ + 36 = (λ + 4)(λ - 3)^2 = 0; thus λ = -4 is an eigenvalue of multiplicity 1, and λ = 3 is an eigenvalue of multiplicity 2.

9. (a) The eigenspace corresponding to λ = -1 is found by solving the system (λI - A)x = 0 with λ = -1. This yields the general solution x = t, y = 2t; thus the eigenspace consists of all vectors of the form t[1; 2]. Geometrically, this is the line y = 2x in the xy-plane. The eigenspace for the other eigenvalue yields the general solution x = 0, y = t; it consists of all vectors of the form t[0; 1], which is the line x = 0 (the y-axis).

(b) The eigenspace corresponding to λ = 3 is found in the same way; solving the corresponding system yields the general solution x = 3t, y = 2t, so the eigenspace consists of all vectors of the form t[3; 2].

(c) The eigenspace corresponding to λ = 2 is found by solving the system [0  0; -1  0][x; y] = [0; 0]. This yields the general solution x = 0, y = t; thus the eigenspace consists of all vectors of the form t[0; 1]. Geometrically, this is the line x = 0.

10. (a) The eigenspace corresponding to λ = 4 is a line through the origin, and the eigenspace corresponding to λ = -4 is likewise a line through the origin.
(b) The eigenspace corresponding to λ = 0 is spanned by two linearly independent vectors; this is the entire xy-plane.
(c) The eigenspace corresponding to λ = 1 is the line x = 0.

11. (a) The eigenspace corresponding to λ = 1 is obtained by solving the appropriate system; this yields the general solution x = 0, y = t, z = 0, so the eigenspace consists of all vectors of the form t[0; 1; 0]. This corresponds to a line through the origin (the y-axis) in R^3. Similarly, the eigenspaces corresponding to λ = 2 and λ = 3 each consist of all scalar multiples of a single eigenvector, and the latter is the line through the origin and the point (5, 1, 3).


12. (a) The eigenspaces are found in the same way as in Exercise 11.

13. (a) The characteristic polynomial is p(λ) = (λ + 1)(λ - 5); the eigenvalues are λ = -1 and λ = 5.
(b) The characteristic polynomial is p(λ) = (λ - 3)(λ - 7)(λ - 1); the eigenvalues are λ = 3, λ = 7, and λ = 1.
(c) The characteristic polynomial has a repeated linear factor; one eigenvalue occurs with multiplicity 2, and λ = 1 is among the remaining eigenvalues.

14. Two examples are triangular matrices having the required eigenvalues as their diagonal entries.

15. Using the block diagonal structure, the characteristic polynomial of the given matrix is

p(λ) = [(λ - 2)(λ - 6) + 3][(λ + 2)(λ - 2) - 5] = (λ^2 - 8λ + 15)(λ^2 - 9) = (λ - 5)(λ - 3)^2(λ + 3)

Thus the eigenvalues are λ = 5, λ = 3 (with multiplicity 2), and λ = -3.

16. Using the block triangular structure, the characteristic polynomial of the given matrix is

p(λ) = λ^2 (λ + 2)(λ - 1)

Thus the eigenvalues of B are λ = 0 (with multiplicity 2), λ = -2, and λ = 1.

17. The characteristic polynomial of A is

p(λ) = det(λI − A) = (λ + 1)(λ − 1)²

thus the eigenvalues are λ = −1 and λ = 1 (with multiplicity 2). The eigenspace corresponding to λ = −1 is obtained by solving the system (−I − A)x = 0 and consists of all scalar multiples of a single eigenvector. Similarly, the eigenspace corresponding to λ = 1 is obtained by solving (I − A)x = 0, which has the general solution x = t, y = −t − s, z = s, or (in vector form)

(x, y, z) = s(0, −1, 1) + t(1, −1, 0)

The eigenvalues of A²⁵ are λ = (−1)²⁵ = −1 and λ = (1)²⁵ = 1. Corresponding eigenvectors are the same as above.

18. The eigenvalues of A are λ = 1, λ = 1/2, λ = 0, and λ = 2. The eigenvalues of A⁹ are λ = (1)⁹ = 1, λ = (1/2)⁹ = 1/512, λ = (0)⁹ = 0, and λ = (2)⁹ = 512. Corresponding eigenvectors are the same as those of A.

19. The characteristic polynomial of A is p(λ) = λ³ − λ² − 5λ − 3 = (λ − 3)(λ + 1)²; thus the eigenvalues are λ₁ = 3, λ₂ = −1, λ₃ = −1. We have det(A) = 3 and tr(A) = 1. Thus det(A) = 3 = (3)(−1)(−1) = λ₁λ₂λ₃ and tr(A) = 1 = (3) + (−1) + (−1) = λ₁ + λ₂ + λ₃.

20. The characteristic polynomial of A is p(λ) = λ³ − 6λ² + 12λ − 8 = (λ − 2)³; thus the eigenvalues are λ₁ = λ₂ = λ₃ = 2. We have det(A) = 8 and tr(A) = 6. Thus det(A) = 8 = (2)(2)(2) = λ₁λ₂λ₃ and tr(A) = 6 = (2) + (2) + (2) = λ₁ + λ₂ + λ₃.
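The relations det(A) = λ₁λ₂λ₃ and tr(A) = λ₁ + λ₂ + λ₃ used in Exercises 19 and 20 can be checked numerically. The sketch below uses a hypothetical lower triangular matrix chosen to have eigenvalues 3, −1, −1 (it is not the matrix from the exercise, whose entries are not given here):

```python
# Hypothetical triangular matrix with eigenvalues 3, -1, -1 (its diagonal entries);
# chosen only to illustrate det = product and trace = sum of eigenvalues.
A = [[3, 0, 0],
     [1, -1, 0],
     [0, 2, -1]]

def det3(M):
    # cofactor expansion along the first row of a 3x3 matrix
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

eigenvalues = [3, -1, -1]                 # diagonal entries of a triangular matrix
trace = A[0][0] + A[1][1] + A[2][2]

product = 1
for lam in eigenvalues:
    product *= lam

assert det3(A) == product                 # det(A) = product of eigenvalues
assert trace == sum(eigenvalues)          # tr(A) = sum of eigenvalues
```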

21. The eigenvalues are λ = 0 and λ = 5; the corresponding eigenspaces are the perpendicular lines y = −x/2 and y = 2x.

22. The eigenvalues are λ = 2 and λ = −1; the corresponding eigenspaces are the perpendicular lines y = (√2/2)x and y = −√2 x.


23. The characteristic polynomial of A is p(λ) = λ² − (b + 1)λ + (b − 6a), so A has the stated eigenvalues if and only if p(4) = p(−3) = 0. This leads to the equations

6a − 4b = 12
6a + 3b = 12

from which we conclude that a = 2 and b = 0.

25. The characteristic polynomial of A is p(λ) = λ² − (b + 3)λ + (3b − 2a), so A has the stated eigenvalues if and only if p(2) = p(5) = 0. This leads to the equations

−2a + b = 2
a + b = 5

from which we conclude that a = 1 and b = 4.

26. The characteristic polynomial of A is p(λ) = (λ − 3)(λ² − 2λx + x² − 4). Note that the second factor in this polynomial cannot have a double root (for any value of x) since (−2x)² − 4(x² − 4) = 16 ≠ 0. Thus the only possible repeated eigenvalue of A is λ = 3, and this occurs if and only if λ = 3 is a root of the second factor of p(λ), i.e. if and only if 9 − 6x + x² − 4 = 0. The roots of this quadratic equation are x = 1 and x = 5. For these values of x, λ = 3 is an eigenvalue of multiplicity 2.

27. If A² = I, then A(x + Ax) = Ax + A²x = Ax + x = x + Ax; thus y = x + Ax is an eigenvector of A corresponding to λ = 1. Similarly, z = x − Ax is an eigenvector of A corresponding to λ = −1.

28. According to Theorem 4.4.8, the characteristic polynomial of A can be expressed as

p(λ) = (λ − λ₁)^m₁ (λ − λ₂)^m₂ ··· (λ − λ_k)^m_k

where λ₁, λ₂, ..., λ_k are the distinct eigenvalues of A and m₁ + m₂ + ··· + m_k = n. The constant term in this polynomial is p(0). On the other hand, p(0) = det(−A) = (−1)ⁿ det(A); thus the constant term of p(λ) is (−1)ⁿ det(A).

29. (a) Using Formula (22), the characteristic equation of A is λ² − (a + d)λ + (ad − bc) = 0. This is a quadratic equation with discriminant

(a + d)² − 4(ad − bc) = a² + 2ad + d² − 4ad + 4bc = (a − d)² + 4bc

Thus the eigenvalues of A are given by λ = ½[(a + d) ± √((a − d)² + 4bc)].
(b) If (a − d)² + 4bc > 0 then, from (a), the characteristic equation has two distinct real roots.
(c) If (a − d)² + 4bc = 0 then, from (a), there is one real eigenvalue (of multiplicity 2).
(d) If (a − d)² + 4bc < 0 then, from (a), there are no real eigenvalues.
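The eigenvalue formula from part (a) is easy to verify numerically. The sketch below uses an arbitrary sample 2 × 2 matrix (not one from the text) and checks the computed roots against the characteristic polynomial:

```python
import math

# arbitrary sample 2x2 matrix [[a, b], [c, d]] -- illustration only
a, b, c, d = 2.0, 1.0, 3.0, 4.0

disc = (a - d) ** 2 + 4 * b * c                 # discriminant from part (a)
lam1 = ((a + d) + math.sqrt(disc)) / 2
lam2 = ((a + d) - math.sqrt(disc)) / 2

def p(lam):
    # characteristic polynomial: lam^2 - (a+d) lam + (ad - bc)
    return lam ** 2 - (a + d) * lam + (a * d - b * c)

assert abs(p(lam1)) < 1e-12 and abs(p(lam2)) < 1e-12
assert abs(lam1 + lam2 - (a + d)) < 1e-12            # sum of roots = trace
assert abs(lam1 * lam2 - (a * d - b * c)) < 1e-12    # product of roots = determinant
```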


30. If (a − d)² + 4bc > 0, we have two distinct real eigenvalues λ₁ and λ₂. The corresponding eigenvectors are obtained by solving the homogeneous system

(λᵢ − a)x₁ − b x₂ = 0
−c x₁ + (λᵢ − d)x₂ = 0

Since λᵢ is an eigenvalue this system is redundant, and (using the first equation) a general solution is given by

x₁ = t,  x₂ = ((λᵢ − a)/b) t

Finally, setting t = −b, we see that (−b, a − λᵢ) is an eigenvector corresponding to λ = λᵢ.

31. If the characteristic polynomial of A is p(λ) = λ² + 3λ − 4 = (λ − 1)(λ + 4), then the eigenvalues of A are λ₁ = 1 and λ₂ = −4.
(a) From Exercise P3 below, A⁻¹ has eigenvalues λ₁ = 1 and λ₂ = −1/4.
(b) From (a), together with Theorem 4.4.6, it follows that A⁻³ has eigenvalues λ₁ = (1)³ = 1 and λ₂ = (−1/4)³ = −1/64.
(c) From P4 below, A − 4I has eigenvalues λ₁ = 1 − 4 = −3 and λ₂ = −4 − 4 = −8.
(d) From P5 below, 5A has eigenvalues λ₁ = 5 and λ₂ = −20.
(e) From P2(a) below, the eigenvalues of Aᵀ are the same as those of A.

32. If Ax = λx, where x ≠ 0, then (Ax)·x = (λx)·x = λ(x·x) = λ‖x‖²; thus λ = ((Ax)·x)/‖x‖².

(a) The characteristic polynomial of the matrix C is

Add .>. l ime.-> the

p(A) =

= det ( >.J -

C) ""

()

0

-I

,\

0 0 0

0

0

0

- 1

0

>..

A 0

- 1

p( ,\}

= -3 and ,\2 = - 4 - 4 = - 8.

A=

(i1:1?t.

co Ct

Cz A+ Cn-

1

row to the first row, then expand by cofactors a.long the first column: 0

A2

- 1

0

Co+ CtA

>.

0 0

0

Ct

0

-1

.>.

0

cz

0

0

0

- 1

..\ +

)..'2

0

-1

..\

0 0

0

0

-1

co

+Ci A

Cz

=

C:n - 1

..\+ Cn - 1

Add A2 times t.he secvnd row to the first row, then expand by cofactors along the first column.

p(.>.)

=

0

Az

- 1

).

0 0

co+ Ct >. + cz>..2

,\2

C2

0

co + C)A+ C2A 2

= 0

0

- 1

..\ + c,._l

0

-1

A+ Cn-l

Chapter4

146

Continuing in this fashion for n - 2 s teps, we obtain

= 1,\n-l

co + Ct A + c2.\

-1

(b) The matrix C

=

[l

1 0

+ · ·· + Cn.-2>-.n- 21

>-+Cn-t

= Co+ CtA 0 0

2

+ C2 A 2 + · · · + Cn-2A

11

00 -2]3 hasp(>.) = 2 - 3>. + o

-1

1

5

-

2
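The companion-matrix result of Exercise 33 can be spot-checked by evaluating det(λI − C) at a few values of λ. The sketch below uses hypothetical coefficients c₀ = 2, c₁ = −3, c₂ = 5 (chosen only for illustration, not taken from part (b), whose printed coefficients are not fully legible):

```python
# companion matrix of p(lam) = lam^3 + c2*lam^2 + c1*lam + c0,
# with hypothetical coefficients chosen for illustration
c0, c1, c2 = 2, -3, 5
C = [[0, 0, -c0],
     [1, 0, -c1],
     [0, 1, -c2]]

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def char_poly_at(lam):
    # det(lam*I - C), evaluated entrywise
    M = [[(lam - C[i][j]) if i == j else -C[i][j] for j in range(3)]
         for i in range(3)]
    return det3(M)

for lam in (-2, -1, 0, 1, 2, 3):
    assert char_poly_at(lam) == lam**3 + c2 * lam**2 + c1 * lam + c0
```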

DISCUSSION AND DISCOVERY

D7. (b) False. If λ is an eigenvalue of A, then λ² is an eigenvalue of A²; thus (λ²I − A²)x = 0 has nontrivial solutions.
(c) False. If λ = 0 is an eigenvalue of A, then the system Ax = 0 has nontrivial solutions; thus A is not invertible and so the row vectors and column vectors of A are linearly dependent. [The statement becomes true if "independent" is replaced by "dependent".]
(d) False. For example, a (non-symmetric) triangular matrix with diagonal entries 1 and 2 has eigenvalues λ = 1 and λ = 2. [But it is true that a symmetric matrix has real eigenvalues.]

D8. (a) False. For example, any invertible 2 × 2 matrix has reduced row echelon form I, but need not have the same eigenvalues as I.
(b) True. We have A(x₁ + x₂) = λ₁x₁ + λ₂x₂ and, if λ₁ ≠ λ₂, it can be shown (since x₁ and x₂ must be linearly independent) that λ₁x₁ + λ₂x₂ ≠ β(x₁ + x₂) for any value of β.

(c) True. The characteristic polynomial of A is a cubic polynomial, and every cubic polynomial has at least one real root.
(d) True. If p(λ) = λⁿ + ⋯ + 1, then det(A) = (−1)ⁿ p(0) = ±1 ≠ 0; thus A is invertible.

WORKING WITH PROOFS

P1. If A = [a  b; c  d], then

A² = [a² + bc   ab + bd]      and      tr(A)A = (a + d)[a  b] = [a² + ad   ab + bd]
     [ca + dc   cb + d²]                       [c  d]   [ac + dc   ad + d²]

thus

A² − tr(A)A = [bc − ad      0   ] = −det(A)I
              [   0      cb − ad]

and so p(A) = A² − tr(A)A + det(A)I = 0.

P2. (a) Using previously established properties, we have

det(λI − Aᵀ) = det((λI − A)ᵀ) = det(λI − A)

Thus A and Aᵀ have the same characteristic polynomial.
(b) The eigenvalues are 2 and 3 in each case. The eigenspace of A corresponding to λ = 2 is obtained by solving (2I − A)x = 0, and the eigenspace of Aᵀ corresponding to λ = 2 is obtained by solving (2I − Aᵀ)x = 0. The eigenspace of A corresponds to the line y = 2x, whereas the eigenspace of Aᵀ corresponds to y = 0. Similarly, for λ = 3, the eigenspace of A corresponds to x = 0, whereas the eigenspace of Aᵀ corresponds to a different line. Thus A and Aᵀ have the same eigenvalues but, in general, different eigenspaces.

P3. Suppose that Ax = λx where x ≠ 0 and A is invertible. Then x = A⁻¹Ax = A⁻¹(λx) = λA⁻¹x and, since λ ≠ 0 (because A is invertible), it follows that A⁻¹x = (1/λ)x. Thus 1/λ is an eigenvalue of A⁻¹ and x is a corresponding eigenvector.

P4. Suppose that Ax = λx where x ≠ 0. Then (A − sI)x = Ax − sx = λx − sx = (λ − s)x; thus λ − s is an eigenvalue of A − sI and x is a corresponding eigenvector.

P5. Similarly, if Ax = λx where x ≠ 0, then (sA)x = s(Ax) = (sλ)x; thus sλ is an eigenvalue of sA.

P6. If the matrix A = [a  b; c  d] is symmetric, then c = b and so (a − d)² + 4bc = (a − d)² + 4b² ≥ 0. In the case that A has a repeated eigenvalue, we must have (a − d)² + 4b² = 0 and so a = d and b = 0. Thus the only symmetric 2 × 2 matrices with repeated eigenvalues are those of the form A = aI. Such a matrix has λ = a as its only eigenvalue, and the corresponding eigenspace is all of R². If (a − d)² + 4b² > 0, then A has two distinct real eigenvalues λ₁ and λ₂ given by

λ₁ = ½[(a + d) + √((a − d)² + 4b²)],  λ₂ = ½[(a + d) − √((a − d)² + 4b²)]

with corresponding eigenvectors x₁ and x₂. The eigenspaces correspond to the lines y = m₁x and y = m₂x where mⱼ = (λⱼ − a)/b, j = 1, 2. Since

(a − λ₁)(a − λ₂) = (½[(a − d) + √((a − d)² + 4b²)])(½[(a − d) − √((a − d)² + 4b²)]) = ¼[(a − d)² − ((a − d)² + 4b²)] = −b²

we have m₁m₂ = (λ₁ − a)(λ₂ − a)/b² = −1; thus the eigenspaces correspond to perpendicular lines. This proves part (b) of Theorem 4.4.11. Note. It is not possible to have (a − d)² + 4b² < 0; thus the eigenvalues of a 2 × 2 symmetric matrix must necessarily be real.

P7. Suppose that Ax = λx and Bx = x. Then ABx = A(Bx) = A(x) = λx and BAx = B(Ax) = B(λx) = λx. Thus λ is an eigenvalue of both AB and BA, and x is a corresponding eigenvector.
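The perpendicularity argument in P6 can be confirmed numerically. The sketch below uses an arbitrary symmetric matrix (not one from the text), computes the two eigenvalues, forms the eigenvector slopes mⱼ = (λⱼ − a)/b, and checks m₁m₂ = −1:

```python
import math

# arbitrary symmetric 2x2 matrix [[a, b], [b, d]] with b != 0 -- illustration only
a, b, d = 1.0, 2.0, 4.0

disc = (a - d) ** 2 + 4 * b * b        # always >= 0 for a symmetric matrix
lam1 = ((a + d) + math.sqrt(disc)) / 2
lam2 = ((a + d) - math.sqrt(disc)) / 2

m1 = (lam1 - a) / b    # slope of the eigenspace line y = m1*x
m2 = (lam2 - a) / b    # slope of the eigenspace line y = m2*x

assert abs(m1 * m2 + 1) < 1e-12        # perpendicular lines: m1*m2 = -1
```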

CHAPTER 6

Linear Transformations

EXERCISE SET 6.1

1.

(a) T_A: R² → R³; domain = R², codomain = R³
(b) T_A: R³ → R²; domain = R³, codomain = R²
(c) T_A: R³ → R³; domain = R³, codomain = R³

3. The domain of T is R², the codomain of T is R³, and T(1, −2) = (−1, 2, 3).

4. The domain of T is R³, the codomain of T is R², and T(0, −1, 4) = (−2, 2).

6. In both parts, T(x) = Ax is computed by matrix multiplication; in part (a), for example, one component of T(x) is −2x₁ + x₂ + 4x₃.

7. (a) We have T(x) = b if and only if x is a solution of the linear system Ax = b, where A is the standard matrix of T. The reduced row echelon form of the augmented matrix of this system has a zero row; it follows that the system has a one-parameter family of solutions, and any vector x in this family satisfies T(x) = b.

The remaining parts are computed in the same way, by multiplying the given vectors by the appropriate standard matrix.

24. The standard matrix for reflection of R² about the line y = mx through the origin is

H = (1/(1 + m²)) [1 − m²    2m  ] = [cos²θ − sin²θ   2 sin θ cos θ ] = [cos 2θ    sin 2θ]
                 [  2m    m² − 1]   [2 sin θ cos θ   sin²θ − cos²θ ]   [sin 2θ   −cos 2θ]

where cos θ = 1/√(1 + m²) and sin θ = m/√(1 + m²). The angles occurring in the preceding parts are θ = 3π/4 (135°) and θ = π/8 (22.5°).



32. (a) We have m = 2; thus H = H_L = (1/5)[−3  4; 4  3], and the reflection of x = (x, y) about the line y = 2x is H(x) = (1/5)(−3x + 4y, 4x + 3y).
(b) We have m = 2; thus P = P_L = (1/5)[1  2; 2  4], and P(x) = (1/5)(x + 2y, 2x + 4y).

33. (a) We have m = 3; thus H = H_L = (1/10)[−8  6; 6  8], and the reflection of x = (x, y) about the line y = 3x is given by H(x) = (1/10)(−8x + 6y, 6x + 8y).
(b) We have m = 3; thus P = P_L = (1/10)[1  3; 3  9], and P(x) = (1/10)(x + 3y, 3x + 9y).
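The matrices in Exercises 32–33 come from the general formulas H = (1/(1+m²))[1−m², 2m; 2m, m²−1] and P = (1/(1+m²))[1, m; m, m²]. A short sketch in exact arithmetic verifies the m = 3 case together with the defining identities H² = I and P² = P:

```python
from fractions import Fraction as F

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def reflection(m):
    # H_L for the line y = m*x
    s = F(1, 1 + m * m)
    return [[s * (1 - m * m), s * 2 * m],
            [s * 2 * m, s * (m * m - 1)]]

def projection(m):
    # P_L for the line y = m*x
    s = F(1, 1 + m * m)
    return [[s * 1, s * m],
            [s * m, s * m * m]]

H = reflection(3)
P = projection(3)

assert H == [[F(-8, 10), F(6, 10)], [F(6, 10), F(8, 10)]]   # (1/10)[[-8, 6], [6, 8]]
assert P == [[F(1, 10), F(3, 10)], [F(3, 10), F(9, 10)]]    # (1/10)[[1, 3], [3, 9]]
assert matmul(H, H) == [[1, 0], [0, 1]]   # reflecting twice is the identity
assert matmul(P, P) == P                  # projecting twice changes nothing
```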

34. If T is defined by the formula T(x, y) = (0, 0), then T(cx, cy) = (0, 0) = c(0, 0) = cT(x, y) and T(x₁ + x₂, y₁ + y₂) = (0, 0) = (0, 0) + (0, 0) = T(x₁, y₁) + T(x₂, y₂); thus T is linear.

If T is defined by T(x, y) = (1, 1), then T(2x, 2y) = (1, 1) ≠ 2(1, 1) = 2T(x, y) and T(x₁ + x₂, y₁ + y₂) = (1, 1) ≠ (1, 1) + (1, 1) = T(x₁, y₁) + T(x₂, y₂); thus T is neither homogeneous nor additive.

36. The given equations can be written in matrix form as w = Ax; thus the transformation is the matrix operator with standard matrix A, and the image of the given line is found by substituting its parametric equations into w = Ax.

37.–38. In each part, [T] is the matrix whose columns are the images of the standard unit vectors, i.e. [T] = [T(e₁)  T(e₂) ⋯].

The eigenvalues are λ = 1 and λ = −1, with the corresponding eigenspaces given (respectively) by one-parameter families of vectors t v₁ and t v₂, where −∞ < t < ∞.

D3. From familiar trigonometric identities, we have

A = [cos 2θ   −sin 2θ] = R_2θ
    [sin 2θ    cos 2θ]

Thus multiplication by A corresponds to rotation about the origin through the angle 2θ.

D4. If R_θ = [cos θ  −sin θ; sin θ  cos θ], then

Aᵀ = R_θᵀ = [ cos θ   sin θ] = [cos(−θ)   −sin(−θ)] = R_(−θ)
            [−sin θ   cos θ]   [sin(−θ)    cos(−θ)]

Thus multiplication by Aᵀ corresponds to rotation through the angle −θ.

D5. Since T(0) = x₀ ≠ 0, this transformation is not linear. Geometrically, it corresponds to a rotation followed by a translation.

D6. If b = 0, then f is both additive and homogeneous. If b ≠ 0, then f is neither additive nor homogeneous.

D7. Since T is linear, we have T(x₀ + tv) = T(x₀) + tT(v). Thus, if T(v) ≠ 0, the image of the line x = x₀ + tv is the line y = y₀ + tw where y₀ = T(x₀) and w = T(v). If T(v) = 0, then the image of x = x₀ + tv is the point y₀ = T(x₀).

EXERCISE SET 6.2

1. A direct computation shows AᵀA = I; thus A is orthogonal and A⁻¹ = Aᵀ.

2. A direct computation shows AᵀA = I; thus A is orthogonal and A⁻¹ = Aᵀ.

3. A direct computation shows AᵀA = I; thus A is orthogonal and A⁻¹ = Aᵀ.

4. A direct computation shows AᵀA = I; thus A is orthogonal and A⁻¹ = Aᵀ.

5. (a) AᵀA = I; thus A is orthogonal. We have det(A) = 1, and A = R_θ where θ = 3π/4. Thus multiplication by A corresponds to counterclockwise rotation about the origin through the angle 3π/4.
(b) AᵀA = I; thus A is orthogonal. We have det(A) = −1, and A = H_θ where θ = π/8. Thus multiplication by A corresponds to reflection about the line through the origin making the angle π/8 with the positive x-axis.

6. (a) AᵀA = I; thus A is orthogonal. We have det(A) = 1, so A = R_θ and multiplication by A is a rotation.
(b) AᵀA = I; thus A is orthogonal. We have det(A) = −1, so A = H_θ and multiplication by A is a reflection.

7.–8. In each part the requested matrix A is obtained from the standard rotation or reflection formulas for the indicated angle.

9. (a) Expansion in the x-direction with factor 2. (b) Contraction. (c) Shear in the x-direction with factor 4. (d) Shear in the y-direction with factor −4.

10. (a) Compression in the y-direction. (b) Dilation with factor 8. (c) Shear in the x-direction with factor −3. (d) Shear in the y-direction with factor 3.

11.–17. In each case the standard matrix of T is determined by its action on the standard unit vectors: the columns of [T] are the images T(e₁), T(e₂) (and T(e₃) when the domain is R³), so that [T] = [T(e₁)  T(e₂) ⋯].

A vector b = (b₁, b₂, b₃, b₄) is in W if and only if 3b₂ − 2b₄ = 0.

Solution 3. In Exercise 12 we found that the vector u = (0, 3, 0, −2) forms a basis for W⊥. Thus b = (b₁, b₂, b₃, b₄) is in W = (W⊥)⊥ if and only if u·b = 3b₂ − 2b₄ = 0.

23. The augmented matrix [v₁  v₂  v₃  v₄ | b₁ | b₂ | b₃] is row reduced. From its reduced row echelon form we conclude that the vectors b₁ and b₂ lie in span{v₁, v₂, v₃, v₄}, but b₃ does not.

24. Similarly, the augmented matrix [v₁  v₂  v₃  v₄ | b₁ | b₂ | b₃] is row reduced, and from its reduced row echelon form we conclude that the vectors b₁ and b₂ lie in span{v₁, v₂, v₃, v₄}, but b₃ does not.


25. The reduced row echelon form of the matrix A is

R = [1 3 0 4 0 0]
    [0 0 1 2 0 0]
    [0 0 0 0 1 0]
    [0 0 0 0 0 1]

Thus the vectors r₁ = (1, 3, 0, 4, 0, 0), r₂ = (0, 0, 1, 2, 0, 0), r₃ = (0, 0, 0, 0, 1, 0), r₄ = (0, 0, 0, 0, 0, 1) form a basis for the row space of A. We also conclude from an inspection of R that the null space of A (solutions of Ax = 0) consists of vectors of the form

x = s(−3, 1, 0, 0, 0, 0) + t(−4, 0, −2, 1, 0, 0)

Thus the vectors n₁ = (−3, 1, 0, 0, 0, 0) and n₂ = (−4, 0, −2, 1, 0, 0) form a basis for the null space of A. It is easy to check that rᵢ · nⱼ = 0 for all i, j. Thus row(A) and null(A) are orthogonal subspaces of R⁶.

26. The

reduced row echelon form of the matrix A has three nonzero rows r₁ = (1, 0, 0, *), r₂ = (0, 1, 0, *), and r₃ = (0, 0, 1, *), which form a basis for the row space of A. From an inspection of R, the null space of A consists of all scalar multiples of a single vector n₁; it is easy to check that rᵢ · n₁ = 0 for each i, so row(A) and null(A) are orthogonal subspaces of R⁴.

20. The subspace W⊥ = row(A)⊥ consists of all scalar multiples of a single vector, and dim(W) + dim(W⊥) = 2 + 1 = 3.

21. The subspace W, consisting of all vectors of the form x = t(2, −1, −3), has dimension 1. The subspace W⊥ is the plane consisting of all vectors x = (x, y, z) which are orthogonal to (2, −1, −3), i.e., which satisfy the equation 2x − y − 3z = 0. A general solution of the latter is given by

x = (s, 2s − 3t, t) = s(1, 2, 0) + t(0, −3, 1)

where −∞ < s, t < ∞. Thus the vectors (1, 2, 0) and (0, −3, 1) form a basis for W⊥, and we have dim(W) + dim(W⊥) = 1 + 2 = 3.
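The orthogonality checks in these exercises are mechanical dot products. Using the bases found in Exercise 25 for row(A) and null(A), a few lines confirm rᵢ · nⱼ = 0 for every pair:

```python
# basis vectors for row(A) and null(A) found in Exercise 25
r = [(1, 3, 0, 4, 0, 0),
     (0, 0, 1, 2, 0, 0),
     (0, 0, 0, 0, 1, 0),
     (0, 0, 0, 0, 0, 1)]
n = [(-3, 1, 0, 0, 0, 0),
     (-4, 0, -2, 1, 0, 0)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# every row-space basis vector is orthogonal to every null-space basis vector
for ri in r:
    for nj in n:
        assert dot(ri, nj) == 0
```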

22. If u and v are nonzero column vectors and A = uvᵀ, then Ax = (uvᵀ)x = u(vᵀx) = (v·x)u. Thus Ax = 0 if and only if v·x = 0, i.e., if and only if x is orthogonal to v. This shows that ker(T) = v⊥. Similarly, the range of T consists of all vectors of the form Ax = (v·x)u, and so ran(T) = span{u}.

23. If Ax = λx for some nonzero vector x, then A²x = A(Ax) = λAx = λ²x. On the other hand, since A² = (u·v)A, we have A²x = (u·v)Ax = (u·v)λx. Thus λ² = (u·v)λ, and if λ ≠ 0 it follows that λ = u·v.
(c) If A is any square matrix, then I − A fails to be invertible if and only if there is a nonzero vector x such that (I − A)x = 0; this is equivalent to saying that λ = 1 is an eigenvalue of A. Thus if A = uvᵀ is a rank 1 matrix for which I − A is invertible, we must have u·v ≠ 1 and A² ≠ A.

DISCUSSION AND DISCOVERY

D1. (a) True. For example, if A is m × n where m > n (more rows than columns) then the rows of A form a set of m vectors in Rⁿ and must therefore be linearly dependent. On the other hand, if m < n then the columns of A must be linearly dependent.
(b) False. If the additional row is a linear combination of the existing rows, then the rank will not be increased.
(c) False. For example, if m = 1 then rank(A) = 1 and nullity(A) = n − 1.
(d) True. Such a matrix must have rank less than n; thus nullity(A) = n − rank(A) ≥ 1.
(e) False. If Ax = b is inconsistent for some b then A is not invertible and so Ax = 0 has nontrivial solutions; thus nullity(A) ≥ 1.
(f) True. We must have rank(A) + nullity(A) = 3; thus it is not possible to have rank(A) and nullity(A) both equal to 1.

D2. If A is m × n, then Aᵀ is n × m and so, by Theorem 7.4.1, we have rank(Aᵀ) + nullity(Aᵀ) = m.

D3. If A is a 3 × 5 matrix, then rank(A) ≤ 3 and so nullity(A) = 5 − rank(A) ≥ 2.

14. The augmented matrix of Ax = b can be reduced to a form in which the last row corresponds to the equation 0 = b₁ − 2b₂ + b₃. Thus the system Ax = b is either inconsistent (if b₁ − 2b₂ + b₃ ≠ 0), or has exactly one solution (if b₁ − 2b₂ + b₃ = 0). The latter includes the case b₁ = b₂ = b₃ = 0; thus the system Ax = 0 has only the trivial solution.

15.

If A

= [- )

then A r A 2

[

I

6 12 12 24

It is clear from inspection t hnt the rows of A and .4.T A

].

are multiples of the single vector u = (1,2). Thus row(A) = row(Ar A) is the !-dimensional space consisting of all scalar multiples of u . Similarly, null (A) = null(AT A) is the !-dimensional space consisting of all vectors v in R2 which are orthogonal to u , i e all V('ctors of the form v = s( -2, 1).

16. The reduced row echelon form of A is [1  0  7; 0  1  −6], and the reduced row echelon form of AᵀA is the same. Thus row(A) = row(AᵀA) is the 2-dimensional space consisting of all linear combinations of the vectors u₁ = (1, 0, 7) and u₂ = (0, 1, −6). Similarly, null(A) = null(AᵀA) is the 1-dimensional space consisting of all vectors v in R³ which are orthogonal to both u₁ and u₂, i.e., all vectors of the form v = s(−7, 6, 1).

17.

of all linear comb1ne.tions of the vect ors u 1 = (1, 0, 7) and u 2 = {0, I , - 6). Thus null(A) =null( AT A) 1s the 1-dimtlnsional space consisting of aJI vectors v in R3 which n.re o rthogonal to both u 1 and u 2, i e, all \'CClor of the form v = s( - 7,6, 1) 17.

augmented ma.trix of the system Ax = b can be rel!\lo.:ed to

-3

)

0 0 0 0

bt b2 - bl

1

0 0 0

b3 - ·lb2

b4

+

+ 3bl

2b, b5- 8b2 + 7bt

thus the system will be inconsistent unless (b 1 , b2 , b3 , b4 , b5 ) the equations b3 = -3bt -1 4b2, b,. = 2btbs = -?b1 + where b1 can assume any values. 18.

The augmented matrix of the system Ax = b can be reduced to a form whose last row corresponds to the equation 0 = b₃ − b₂ − b₁; thus the system is consistent if and only if b₃ = b₂ + b₁ and, in this case, there will be infinitely many solutions.

DISCUSSION AND DISCOVERY

D1. If A is a 7 × 5 matrix with rank 3, then Aᵀ also has rank 3; thus dim(row(Aᵀ)) = dim(col(Aᵀ)) = 3 and dim(null(Aᵀ)) = 7 − 3 = 4.

D2. If A has rank k then, from Theorems 7.5.2 and 7.5.9, we have dim(row(AᵀA)) = rank(AᵀA) = rank(A) = k and dim(row(AAᵀ)) = rank(AAᵀ) = rank(A) = k.

D3. If Aᵀx = 0 has only the trivial solution then, from Theorem 7.5.11, A has full row rank. Thus, if A is m × n, we must have n ≥ m and dim(row(A)) = dim(col(A)) = m.

D4. (a) False. The row space and column space always have the same dimension.
(b) False. It is always true that rank(A) = rank(Aᵀ), whether A is square or not.
(c) True. Under these assumptions, the system Ax = b is consistent (for any b) and so the matrices A and [A | b] have the same rank.
(d) True. If an m × n matrix A has full row rank and full column rank, then m = dim(row(A)) = rank(A) = dim(col(A)) = n.
(e) True. If AᵀA and AAᵀ are both invertible then, from Theorem 7.5.10, A has full column rank and full row rank; thus A is square.
(f) True. The rank of a 3 × 3 matrix is 0, 1, 2, or 3 and the corresponding nullity is 3, 2, 1, or 0.

D5. (a) The solutions of the system are given by x = (b − s − t, s, t) where −∞ < s, t < ∞. This does not violate Theorem 7.5.7(b).
(b) The solutions can be expressed as (b, 0, 0) + s(−1, 1, 0) + t(−1, 0, 1), where (b, 0, 0) is a particular solution and s(−1, 1, 0) + t(−1, 0, 1) is a general solution of the corresponding homogeneous system.

D6. (a) If A is 3 × 5, then the columns of A are a set of five vectors in R³ and thus are linearly dependent.
(b) If A is 5 × 3, then the rows of A are a set of five vectors in R³ and thus are linearly dependent.
(c) If A is m × n, with m ≠ n, then either the columns of A are linearly dependent or the rows of A are linearly dependent (or both).

WORKING WITH PROOFS

P1. From Theorem 7.5.8(a) we have null(AᵀA) = null(A). Thus if A is m × n, then AᵀA is n × n and so rank(AᵀA) = n − nullity(AᵀA) = n − nullity(A) = rank(A). Similarly, null(AAᵀ) = null(Aᵀ) and so rank(AAᵀ) = m − nullity(AAᵀ) = m − nullity(Aᵀ) = rank(Aᵀ) = rank(A).

P2. As above, we have rank(AᵀA) = n − nullity(AᵀA) = n − nullity(A) = rank(A).

P3. (a) Since null(AᵀA) = null(A), we have row(AᵀA) = null(AᵀA)⊥ = null(A)⊥ = row(A).
(b) Since AᵀA is symmetric, we have col(AᵀA) = row(AᵀA) = row(A) = col(Aᵀ).

P4. If A is m × n where m < n, then the columns of A form a set of n vectors in Rᵐ and thus are linearly dependent. Similarly, if m > n, then the rows of A form a set of m vectors in Rⁿ and thus are linearly dependent.

P5. If rank(A²) = rank(A) then dim(null(A²)) = n − rank(A²) = n − rank(A) = dim(null(A)) and, since null(A) is contained in null(A²), it follows that null(A²) = null(A).

EXERCISE SET 7.7

We note from inspection that P is symmetric. It is also apparent that P has rank 1, since each of its rows is a scalar multiple of a. Finally, it is easy to check that P² = P, and so P is idempotent.

15. Let M be the matrix having a₁ and a₂ as its columns. Then

MᵀM = [13  −9]
      [−9  26]

and, from Theorem 7.7.5, the standard matrix for the orthogonal projection of R³ onto W = span{a₁, a₂} is given by

P = M(MᵀM)⁻¹Mᵀ = (1/257) [113  −84   96]
                         [−84  208   56]
                         [ 96   56  193]

We note from inspection that the matrix P is symmetric. The reduced row echelon form of P has two nonzero rows, and from this we conclude that P has rank 2. Finally, it is easy to check that P² = P, and so P is idempotent.
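The three properties just checked (symmetric, rank 2, idempotent) can be confirmed in exact rational arithmetic for the matrix P = (1/257)[113, −84, 96; −84, 208, 56; 96, 56, 193] as reconstructed above; note that for a projection the trace equals the dimension of the range, which gives the rank directly:

```python
from fractions import Fraction as F

P = [[F(113, 257), F(-84, 257), F(96, 257)],
     [F(-84, 257), F(208, 257), F(56, 257)],
     [F(96, 257), F(56, 257), F(193, 257)]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# symmetric
assert all(P[i][j] == P[j][i] for i in range(3) for j in range(3))
# idempotent: P^2 = P
assert matmul(P, P) == P
# trace of a projection = dimension of its range (here rank 2)
assert sum(P[i][i] for i in range(3)) == 2
```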


16. Let M be the matrix having a₁ and a₂ as its columns. Then MᵀM is invertible, and the standard matrix for the orthogonal projection of R³ onto W = span{a₁, a₂} is given by P = M(MᵀM)⁻¹Mᵀ. From inspection we see that P is symmetric. The reduced row echelon form of P has two nonzero rows, and from this we conclude that P has rank 2. Finally, it is easy to check that P² = P.

17. The standard matrix for the orthogonal projection of R³ onto the xz-plane is

P = [1 0 0]
    [0 0 0]
    [0 0 1]

This agrees with the following computation using Formula (21): let M = [1 0; 0 0; 0 1]; then MᵀM = I and M(MᵀM)⁻¹Mᵀ = MMᵀ = P.

18. The standard matrix for the orthogonal projection of R³ onto the yz-plane is

P = [0 0 0]
    [0 1 0]
    [0 0 1]

This agrees with the corresponding computation using Formula (21).

19. We proceed as in Example 6. The general solution of the equation x + y + z = 0 can be written as

x = s(−1, 1, 0) + t(−1, 0, 1)

and so the two column vectors on the right form a basis for the plane. If M is the 3 × 2 matrix having these vectors as its columns, then MᵀM = [2  1; 1  2], (MᵀM)⁻¹ = (1/3)[2  −1; −1  2], and the standard matrix of the orthogonal projection onto the plane is

P = M(MᵀM)⁻¹Mᵀ = (1/3) [ 2  −1  −1]
                       [−1   2  −1]
                       [−1  −1   2]

The orthogonal projection of the vector v = (2, 4, −1) onto the plane is

Pv = (1/3) (1, 7, −8)

20. The general solution of the equation 2x − y + 3z = 0 can be written as

x = s(1, 2, 0) + t(0, 3, 1)

and so the two column vectors on the right form a basis for the plane. If M is the 3 × 2 matrix having these vectors as its columns, then MᵀM = [5  6; 6  10], (MᵀM)⁻¹ = (1/14)[10  −6; −6  5], and the standard matrix of the orthogonal projection onto the plane is

P = M(MᵀM)⁻¹Mᵀ = (1/14) [10   2  −6]
                        [ 2  13   3]
                        [−6   3   5]

The orthogonal projection of the vector v = (2, 4, −1) onto the plane is

Pv = (1/14) (34, 53, −5)
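An independent way to check the matrix P in Exercise 20 is the complementary formula P = I − nnᵀ/‖n‖², where n = (2, −1, 3) is a normal vector of the plane. A sketch in exact arithmetic, which also reproduces Pv:

```python
from fractions import Fraction as F

n = (2, -1, 3)
nn = sum(c * c for c in n)   # ||n||^2 = 14

# P = I - n n^T / ||n||^2  (projection onto the plane with normal n)
P = [[F((1 if i == j else 0) * nn - n[i] * n[j], nn) for j in range(3)]
     for i in range(3)]

expected = [[F(10, 14), F(2, 14), F(-6, 14)],
            [F(2, 14), F(13, 14), F(3, 14)],
            [F(-6, 14), F(3, 14), F(5, 14)]]
assert P == expected

# projecting v = (2, 4, -1) gives (1/14)(34, 53, -5), as computed above
v = (2, 4, -1)
Pv = [sum(P[i][j] * v[j] for j in range(3)) for i in range(3)]
assert Pv == [F(34, 14), F(53, 14), F(-5, 14)]
```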

21. Let A be the matrix having the given vectors as its columns. The reduced row echelon form of A identifies the pivot columns, and the corresponding vectors form a basis for the subspace they span.

D9. Suppose λ is an eigenvalue of A with corresponding eigenvector x (x ≠ 0). Then A²x = A(Ax) = A(λx) = λ²x. On the other hand, since A² = A, we have A²x = Ax = λx. Since x ≠ 0, it follows that λ² = λ and so λ = 0 or λ = 1.

D10. Using calculus: The reduced row echelon form of [A | b] is

[1 0 3 | 7]
[0 1 1 | 3]
[0 0 0 | 0]

thus the general solution of Ax = b is x = (7 − 3t, 3 − t, t) where −∞ < t < ∞. We have

‖x‖² = (7 − 3t)² + (3 − t)² + t² = 58 − 48t + 11t²

and so the solution vector of smallest length corresponds to (d/dt)[‖x‖²] = −48 + 22t = 0, i.e., to t = 24/11. We conclude that

x_row = (7 − 72/11, 3 − 24/11, 24/11) = (5/11, 9/11, 24/11)

Using an orthogonal projection: The solution x_row is equal to the orthogonal projection of any particular solution of Ax = b, e.g., x = (7, 3, 0), onto the row space of A. From the row reduction above, the vectors v₁ = (1, 0, 3) and v₂ = (0, 1, 1) form a basis for the row space of A. Let B be the 3 × 2 matrix having these vectors as its columns. Then BᵀB = [10  3; 3  2], and the standard matrix for the orthogonal projection of R³ onto W = row(A) is

P = B(BᵀB)⁻¹Bᵀ = (1/11) [ 2  −3   3]
                        [−3  10   1]
                        [ 3   1  10]

Finally, in agreement with the calculus solution, we have x_row = Px = (1/11)(5, 9, 24).
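Both routes in D10 take only a few lines to reproduce; the sketch below minimizes ‖x(t)‖² at the vertex of the quadratic and compares with the projection matrix P computed above:

```python
from fractions import Fraction as F

# general solution of Ax = b from the row reduction: x(t) = (7 - 3t, 3 - t, t)
def x(t):
    return (7 - 3 * t, 3 - t, t)

# ||x(t)||^2 = 58 - 48t + 11t^2 has its vertex at t = 48/22 = 24/11
t_min = F(24, 11)
x_row = x(t_min)
assert x_row == (F(5, 11), F(9, 11), F(24, 11))

norm2 = lambda v: sum(c * c for c in v)
assert norm2(x(t_min)) < norm2(x(t_min + F(1, 100)))   # nearby t gives a longer vector
assert norm2(x(t_min)) < norm2(x(t_min - F(1, 100)))

# projection route: project the particular solution (7, 3, 0) onto row(A)
P = [[F(2, 11), F(-3, 11), F(3, 11)],
     [F(-3, 11), F(10, 11), F(1, 11)],
     [F(3, 11), F(1, 11), F(10, 11)]]
x0 = (7, 3, 0)
Px0 = tuple(sum(P[i][j] * x0[j] for j in range(3)) for i in range(3))
assert Px0 == x_row   # both methods agree
```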


D11. The rows of R form a basis for the row space of A, and G = Rᵀ has these vectors as its columns. Thus, from Theorem 7.7.5, G(GᵀG)⁻¹Gᵀ is the standard matrix for the orthogonal projection of Rⁿ onto W = row(A).

WORKING WITH PROOFS

P1. If x and y are vectors in Rⁿ and if α and β are scalars, then a·(αx + βy) = α(a·x) + β(a·y). Thus

T(αx + βy) = [a·(αx + βy)/‖a‖²] a = α [ (a·x)/‖a‖² ] a + β [ (a·y)/‖a‖² ] a = αT(x) + βT(y)

which shows that T is linear.

P2. If b = ta, then bᵀb = b·b = (ta)·(ta) = t²(a·a) = t²aᵀa and (similarly) bbᵀ = t²aaᵀ; thus

(1/bᵀb) bbᵀ = (1/t²aᵀa) t²aaᵀ = (1/aᵀa) aaᵀ

P3. Let P be a symmetric n × n matrix that is idempotent and has rank k. Then W = col(P) is a k-dimensional subspace of Rⁿ. We will show that P is the standard matrix for the orthogonal projection of Rⁿ onto W, i.e., that Px = proj_W x for all x in Rⁿ. To this end, we first note that Px belongs to W and that

x = Px + (x − Px) = Px + (I − P)x

To show that Px = proj_W x it suffices (from Theorem 7.7.4) to show that (I − P)x belongs to W⊥, and since W = col(P) = ran(P), this is equivalent to showing that Py · (I − P)x = 0 for all y in Rⁿ. Finally, since Pᵀ = P = P² (P is symmetric and idempotent), we have P(I − P) = P − P² = P − P = 0 and so

Py · (I − P)x = (Py)ᵀ(I − P)x = yᵀP(I − P)x = 0

for every x and y in Rⁿ. This completes the proof.

EXERCISE SET 7.8

1. First we note that the columns of A are linearly independent since they are not scalar multiples of each other; thus A has full column rank. It follows from Theorem 7.8.3(b) that the system Ax = b has a unique least squares solution given by

x̂ = (AᵀA)⁻¹Aᵀb = (1/11)(20, −8)

The least squares error vector is

b − Ax̂ = (1/11)(−6, −27, 15)

and it is easy to check that this vector is in fact orthogonal to each of the columns of A. For example,

(b − Ax̂) · c₁(A) = (1/11)[(−6)(1) + (−27)(2) + (15)(4)] = 0

2. The columns of A are linearly independent and so A has full column rank. Thus the system Ax = b has a unique least squares solution given by x̂ = (AᵀA)⁻¹Aᵀb. The least squares error vector is b − Ax̂, and it is easy to check that this vector is orthogonal to each of the columns of A.

3.

From Exercise 1, the least squares solution of Ax = b is x̂ = (1/11)(20, −8); thus

Ax̂ = (1/11)(28, 16, 40)

On the other hand, the standard matrix for the orthogonal projection of R³ onto col(A) is

P = A(AᵀA)⁻¹Aᵀ = A [21  25]⁻¹ Aᵀ
                   [25  35]

and a direct computation shows that

proj_col(A) b = Pb = (1/11)(28, 16, 40) = Ax̂
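The computations of Exercises 1 and 3 can be reproduced end to end. The sketch below takes A = [1, −1; 2, 3; 4, 5] and b = (2, −1, 5) — a reading of the garbled printed matrices that is consistent with every number that survives in the solutions (AᵀA = [21, 25; 25, 35], error vector (1/11)(−6, −27, 15), Ax̂ = (1/11)(28, 16, 40)) — so treat these data as a reconstruction rather than the book's certified values:

```python
from fractions import Fraction as F

A = [[1, -1], [2, 3], [4, 5]]   # reconstructed from the printed solution
b = [2, -1, 5]                  # reconstructed from the printed solution

# normal equations: (A^T A) xhat = A^T b
AtA = [[sum(A[k][i] * A[k][j] for k in range(3)) for j in range(2)] for i in range(2)]
Atb = [sum(A[k][i] * b[k] for k in range(3)) for i in range(2)]
assert AtA == [[21, 25], [25, 35]]

det = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]   # = 110
xhat = [F(AtA[1][1] * Atb[0] - AtA[0][1] * Atb[1], det),
        F(-AtA[1][0] * Atb[0] + AtA[0][0] * Atb[1], det)]
assert xhat == [F(20, 11), F(-8, 11)]

Axhat = [A[k][0] * xhat[0] + A[k][1] * xhat[1] for k in range(3)]
err = [b[k] - Axhat[k] for k in range(3)]
assert err == [F(-6, 11), F(-27, 11), F(15, 11)]

# the error vector is orthogonal to each column of A
for j in range(2):
    assert sum(err[k] * A[k][j] for k in range(3)) == 0
```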

4. From Exercise 2, the least squares solution of Ax = b is x̂. On the other hand, the standard matrix of the orthogonal projection onto col(A) is P = A(AᵀA)⁻¹Aᵀ, and a direct computation confirms that proj_col(A) b = Pb = Ax̂.

5. The least squares solutions of Ax = b are obtained by solving the associated normal system AᵀAx = Aᵀb, which is

[24  8] [x₁] = [12]
[ 8  6] [x₂]   [ 8]

Since the matrix on the left is nonsingular, this system has the unique solution

x = [24  8]⁻¹ [12] = (1/80) [ 6  −8] [12] = [1/10]
    [ 8  6]   [ 8]          [−8  24] [ 8]   [ 6/5]

The error vector is b − Ax, and the least squares error is ‖b − Ax‖.

6. The least squares solutions of Ax = b are obtained by solving the normal system AᵀAx = Aᵀb. Here the coefficient matrix AᵀA is singular, and this (redundant) system has infinitely many solutions given by

x = x₀ + t(−3, 1)

where x₀ is any particular solution. The error vector b − Ax is the same for every least squares solution, and the least squares error is ‖b − Ax‖.

7. The least squares solutions of Ax = b are obtained by solving the normal system AᵀAx = Aᵀb. The augmented matrix of this system reduces to a form with one free variable; thus there are infinitely many least squares solutions. The error vector is b − Ax, and the least squares error is ‖b − Ax‖.

8. The least squares solutions of Ax = b are obtained by solving AᵀAx = Aᵀb. The augmented matrix of this system likewise reduces to a form with one free variable; thus there are infinitely many least squares solutions, each with the same error vector b − Ax and least squares error ‖b − Ax‖.

9. The linear model for the given data is Mv = y, where M is the design matrix and y is the vector of observed values. The least squares solution is obtained by solving the normal system MᵀMv = Mᵀy.

EXERCISE SET 7.9

27. Let v₁ = w₁ = (3, 1) and v₂ = w₂ − (w₂·v₁/‖v₁‖²)v₁. Then {v₁, v₂} is an orthogonal basis for R², and the vectors q₁ = v₁/‖v₁‖ = (3/√10, 1/√10) and q₂ = v₂/‖v₂‖ form an orthonormal basis for R².

28. Let v₁ = w₁ = (1, 0) and v₂ = w₂ − (w₂·v₁/‖v₁‖²)v₁ = (3, −5) − 3(1, 0) = (0, −5). Then q₁ = v₁/‖v₁‖ = (1, 0) and q₂ = v₂/‖v₂‖ = (0, −1) form an orthonormal basis for R².

29. Let v₁ = w₁ = (1, 1, 1), v₂ = w₂ − (w₂·v₁/‖v₁‖²)v₁ = (−1, 1, 0) − 0(1, 1, 1) = (−1, 1, 0), and v₃ = w₃ − (w₃·v₁/‖v₁‖²)v₁ − (w₃·v₂/‖v₂‖²)v₂. Then {v₁, v₂, v₃} is an orthogonal basis for R³, and the vectors qᵢ = vᵢ/‖vᵢ‖ form an orthonormal basis for R³.

30. Let v₁ = w₁ = (1, 0, 0), v₂ = w₂ − (w₂·v₁/‖v₁‖²)v₁ = (3, 7, −2) − 3(1, 0, 0) = (0, 7, −2), and v₃ = w₃ − (w₃·v₁/‖v₁‖²)v₁ − (w₃·v₂/‖v₂‖²)v₂. Then {v₁, v₂, v₃} is an orthogonal basis for R³, and the vectors

q₁ = v₁/‖v₁‖ = (1, 0, 0),  q₂ = v₂/‖v₂‖ = (0, 7/√53, −2/√53),  q₃ = v₃/‖v₃‖

form an orthonormal basis for R³.

Chapter 7

250

31. Let v1 = w1 = (0, 2, 1, 0), v2 = w2 - (w2·v1/||v1||^2)v1 = (1, -1, 0, 0) - (-2/5)(0, 2, 1, 0) = (1, -1/5, 2/5, 0), and let v3 and v4 be obtained similarly by subtracting from w3 and w4 their projections onto the preceding vi. Then {v1, v2, v3, v4} is an orthogonal basis for R^4, and the vectors q1 = v1/||v1|| = (0, 2/√5, 1/√5, 0), q2 = v2/||v2||, q3 = v3/||v3||, and q4 = v4/||v4|| form an orthonormal basis for R^4.

32. Let v1 = w1 = (1, 2, 1, 0), v2 = w2 - (w2·v1/||v1||^2)v1 = (1, 1, 2, 0) - (5/6)(1, 2, 1, 0) = (1/6, -2/3, 7/6, 0), and let v3 and v4 be obtained similarly. Then {v1, v2, v3, v4} is an orthogonal basis for R^4, and the vectors qi = vi/||vi|| form an orthonormal basis for R^4.

33. The vectors w1, w2, and w3 = (0, 0, 1) are mutually orthogonal unit vectors and therefore form an orthonormal basis for R^3.

EXERCISE SET 7.9

34. Let A be the 2 x 4 matrix having the vectors w1 and w2 as its rows. Then row(A) = span{w1, w2} and null(A) = span{w1, w2}⊥. A basis for null(A) can be found by solving the linear system Ax = 0: writing the general solution of this system in terms of its two free variables yields vectors w3 and w4 that form a basis for span{w1, w2}⊥, so B = {w1, w2, w3, w4} is a basis for R^4. Note also that, in addition to being orthogonal to w1 and w2, the vectors w3 and w4 can be normalized (applying the Gram-Schmidt process if necessary) to produce an orthonormal basis {q1, q2, q3, q4} for R^4.

35.

Note that w3 = w1 + w2. Thus the subspace W spanned by the given vectors is 2-dimensional with basis {w1, w2}. Let v1 = w1 = (0, 1, 2) and v2 = w2 - (w2·v1/||v1||^2)v1 = (-1, 0, 1) - (2/5)(0, 1, 2) = (-1, -2/5, 1/5). Then {v1, v2} is an orthogonal basis for W, and the vectors u1 = v1/||v1|| = (0, 1/√5, 2/√5) and u2 = v2/||v2|| form an orthonormal basis for W.

36.

Note that w4 = w1 - w2 + w3. Thus the subspace W spanned by the given vectors is 3-dimensional with basis {w1, w2, w3}. Let v1 = w1 = (-1, 2, 4, 7),

v2 = w2 - (w2·v1/||v1||^2)v1 = (-3, 0, 4, -2) - (5/70)(-1, 2, 4, 7) = (-41/14, -1/7, 26/7, -5/2)

v3 = w3 - (w3·v1/||v1||^2)v1 - (w3·v2/||v2||^2)v2 = (9876/2005, 3768/2005, 5891/2005, -3032/2005)

Then {v1, v2, v3} is an orthogonal basis for W, and the vectors

u1 = v1/||v1||,  u2 = v2/||v2|| = (-41/√5614, -2/√5614, 52/√5614, -35/√5614),  u3 = v3/||v3|| = (9876/√155630105, 3768/√155630105, 5891/√155630105, -3032/√155630105)

form an orthonormal basis for W.
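The arithmetic in Exercise 36 is easy to mistype, so here is an exact-rational check of v2 and v3 using Python's fractions module (assuming the reconstructed inputs w1, w2, w3):

```python
from fractions import Fraction

def proj_coeff(w, v):
    # exact (w . v) / (v . v)
    num = sum(Fraction(a) * b for a, b in zip(w, v))
    den = sum(b * b for b in v)
    return num / den

w1 = [-1, 2, 4, 7]; w2 = [-3, 0, 4, -2]; w3 = [2, 2, 7, -3]
v1 = [Fraction(a) for a in w1]
v2 = [a - proj_coeff(w2, v1) * b for a, b in zip(w2, v1)]
v3 = [a - proj_coeff(w3, v1) * b - proj_coeff(w3, v2) * c
      for a, b, c in zip(w3, v1, v2)]

print([str(f) for f in v2])  # ['-41/14', '-1/7', '26/7', '-5/2']
print([str(f) for f in v3])  # ['9876/2005', '3768/2005', '5891/2005', '-3032/2005']
```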

37. Note that u1 and u2 are orthonormal vectors. Thus the orthogonal projection of w = (1, 2, 3) onto the subspace W spanned by these two vectors is given by

w1 = proj_W w = (w·u1)u1 + (w·u2)u2

and the component of w orthogonal to W is w2 = w - w1 = (1, 2, 3) - proj_W w.


38. First we find an orthonormal basis {q1, q2} for W by applying the Gram-Schmidt process to {u1, u2}: let v1 = u1 = (-1, 0, 1, 2) and v2 = u2 - (u2·v1/||v1||^2)v1, and let q1 = v1/||v1||, q2 = v2/||v2||. Then {q1, q2} is an orthonormal basis for W, and the orthogonal projection of w = (-1, 2, 6, 0) onto W is given by w1 = (w·q1)q1 + (w·q2)q2, while the component of w orthogonal to W is w2 = w - w1 = (-1, 2, 6, 0) - w1.

39.

If w = (a, b, c), then the vector u = (1/√(a^2 + b^2 + c^2))(a, b, c) is an orthonormal basis for the 1-dimensional subspace W spanned by w. Thus, using Formula (6), the standard matrix for the orthogonal projection of R^3 onto W is

P = u^T u = (1/(a^2 + b^2 + c^2)) [a^2 ab ac; ab b^2 bc; ac bc c^2]
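The projection matrix of Exercise 39 can be formed and sanity-checked in a few lines; every orthogonal projection matrix must be symmetric and idempotent. The vector w = (1, 2, 2) below is an illustrative choice, not one taken from the text:

```python
import numpy as np

def projection_matrix(w):
    # Standard matrix of the orthogonal projection onto span{w}: (1/w.w) w w^T.
    w = np.asarray(w, dtype=float)
    return np.outer(w, w) / w.dot(w)

P = projection_matrix([1, 2, 2])                    # illustrative w = (a, b, c)
print(np.allclose(P, P.T), np.allclose(P @ P, P))   # True True
print(P @ np.array([1.0, 2.0, 2.0]))                # w projects to itself
```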

DISCUSSION AND DISCOVERY

D1. If a and b are nonzero, then u1 = (1, 0, a) and u2 = (0, 1, b) form a basis for the plane z = ax + by, and application of the Gram-Schmidt process to these vectors yields an orthonormal basis {q1, q2} for the plane.

D2. (a) span{v1} = span{w1}, span{v1, v2} = span{w1, w2}, and span{v1, v2, v3} = span{w1, w2, w3}. (b) v3 is orthogonal to span{w1, w2}.

D3. If the vectors w1, w2, ..., wk are linearly dependent, then at least one of the vectors in the list is a linear combination of the previous ones. If wj is a linear combination of w1, w2, ..., wj-1, then, when applying the Gram-Schmidt process at the jth step, the vector vj will be 0.

D4. If A has orthonormal columns, then AA^T is the standard matrix for the orthogonal projection onto the column space of A.

D5. (a) col(M) = col(P). (b) Find an orthonormal basis for col(P) and use these vectors as the columns of the matrix M. (c) No. Any orthonormal basis for col(P) can be used to form the columns of M.


D6. (a) True. Any orthonormal set of vectors is linearly independent.
(b) False. An orthogonal set may contain 0. However, it is true that any orthogonal set of nonzero vectors is linearly independent.
(c) False. Strictly speaking, the subspace {0} has no basis, hence no orthonormal basis. However, it is true that any nonzero subspace has an orthonormal basis.
(d) True. The vector q3 is orthogonal to the subspace span{w1, w2}.

WORKING WITH PROOFS

P1. If {v1, v2, ..., vk} is an orthogonal basis for W, then {v1/||v1||, v2/||v2||, ..., vk/||vk||} is an orthonormal basis. Thus, using part (a), the orthogonal projection of a vector x on W can be expressed as

proj_W x = (x · v1/||v1||)(v1/||v1||) + (x · v2/||v2||)(v2/||v2||) + ··· + (x · vk/||vk||)(vk/||vk||) = (x·v1/||v1||^2)v1 + (x·v2/||v2||^2)v2 + ··· + (x·vk/||vk||^2)vk

P2. If A is symmetric and idempotent, then A is the standard matrix of an orthogonal projection operator; namely, the orthogonal projection of R^n onto W = col(A). Thus A = UU^T where U is any n x k matrix whose column vectors form an orthonormal basis for W.

P3. We must prove that vj ∈ span{w1, w2, ..., wj} for each j = 1, 2, .... The proof is by induction on j.

Step 1. Since v1 = w1, we have v1 ∈ span{w1}; thus the statement is true for j = 1.

Step 2 (induction step). Suppose the statement is true for the integers k = 1, 2, ..., j. Then, since v1 ∈ span{w1}, v2 ∈ span{w1, w2}, ..., and vj ∈ span{w1, w2, ..., wj}, it follows that vj+1 ∈ span{w1, w2, ..., wj, wj+1}. Thus if the statement is true for each of the integers k = 1, 2, ..., j then it is also true for k = j + 1.

These two steps complete the proof by induction.

EXERCISE SET 7.10

1. The column vectors of the matrix A are w1 = (1, 2) and w2 = (-1, 3). Application of the Gram-Schmidt process to these vectors yields

q1 = (1/√5, 2/√5),  q2 = (-2/√5, 1/√5)

We have w1 = (w1·q1)q1 = √5 q1 and w2 = (w2·q1)q1 + (w2·q2)q2 = √5 q1 + √5 q2. Thus application of Formula (3) yields the following QR-decomposition of A:

A = [1 -1; 2 3] = [1/√5 -2/√5; 2/√5 1/√5] [√5 √5; 0 √5] = QR
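The Gram-Schmidt QR computation of Exercise 1 can be reproduced mechanically; this sketch builds Q column by column and records r_ij = w_j · q_i:

```python
import numpy as np

# Gram-Schmidt QR of the Exercise 1 matrix: Q = [q1 q2], r_ij = w_j . q_i.
A = np.array([[1.0, -1.0],
              [2.0,  3.0]])
Q = np.zeros_like(A)
R = np.zeros((2, 2))
for j in range(2):
    v = A[:, j].copy()
    for i in range(j):
        R[i, j] = Q[:, i] @ A[:, j]   # coefficient of q_i in w_j
        v -= R[i, j] * Q[:, i]        # subtract the projection
    R[j, j] = np.linalg.norm(v)
    Q[:, j] = v / R[j, j]

print(np.round(R / np.sqrt(5), 6))  # [[1. 1.] [0. 1.]], i.e. R = sqrt(5)*[[1,1],[0,1]]
```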


2. Application of the Gram-Schmidt process to the column vectors w1 = (1, 0, 1) and w2 = (2, 1, 4) of A yields

q1 = (1/√2, 0, 1/√2),  q2 = (-1/√3, 1/√3, 1/√3)

We have w1 = √2 q1 and w2 = 3√2 q1 + √3 q2. This yields the following QR-decomposition of A:

A = [1 2; 0 1; 1 4] = [1/√2 -1/√3; 0 1/√3; 1/√2 1/√3] [√2 3√2; 0 √3] = QR

3.

Application of the Gram-Schmidt process to the column vectors w1 and w2 of A yields orthonormal vectors q1 and q2. We have w1 = (w1·q1)q1 = 3q1 and w2 = (w2·q1)q1 + (w2·q2)q2, and this yields the QR-decomposition A = QR with Q = [q1 q2] and R = [3 w2·q1; 0 w2·q2] upper triangular.

4. Application of the Gram-Schmidt process to the column vectors w1, w2, w3 of A yields

q1 = (1/√2, 0, 1/√2),  q2 = (-1/√3, 1/√3, 1/√3),  q3 = (1/√6, 2/√6, -1/√6)

We have w1 = √2 q1, w2 = √2 q1 + √3 q2, and w3 = √2 q1 - (1/√3) q2 + (2√6/3) q3. This yields the following QR-decomposition of A:

A = [1 0 2; 0 1 1; 1 2 0] = [1/√2 -1/√3 1/√6; 0 1/√3 2/√6; 1/√2 1/√3 -1/√6] [√2 √2 √2; 0 √3 -1/√3; 0 0 2√6/3] = QR

5. Application of the Gram-Schmidt process to the column vectors w1, w2, w3 of A yields orthonormal vectors q1, q2, q3. We have w1 = √2 q1, w2 = (w2·q1)q1 + (w2·q2)q2, and w3 = √2 q1 + (w3·q2)q2 + (w3·q3)q3; this yields the QR-decomposition A = QR with Q = [q1 q2 q3] and upper triangular R whose (i, j) entry is wj·qi.

6. Application of the Gram-Schmidt process to the column vectors w1, w2, w3 of A yields orthonormal vectors q1, q2, q3. We have w1 = 2q1, w2 = -q1 + q2, and w3 = (w3·q1)q1 + (w3·q2)q2 + (w3·q3)q3; this yields the corresponding QR-decomposition A = QR.

7. From Exercise 3, we have A = QR. Thus the normal system for Ax = b can be expressed as Rx = Q^T b. Solving this upper triangular system by back substitution yields the least squares solution x2, then x1.

8. From Exercise 4, we have A = QR with the Q and R computed there. Thus the normal system for Ax = b can be expressed as Rx = Q^T b. Solving this system by back substitution yields x3, then x2, then x1. Note that, in this example, the system Ax = b is consistent, so the least squares solution is its exact solution.

9.

From Exercise 5, we have A = QR. Thus the normal system for Ax = b can be expressed as Rx = Q^T b. Solving this system by back substitution yields x3 = 16, x2 = -5, x1 = -8. Note that, in this example, the system Ax = b is consistent and this is its exact solution.
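The Rx = Q^T b back-substitution procedure used in Exercises 7-10 can be sketched as follows. The matrix A is the Exercise 4 matrix; the right-hand side b is illustrative, not taken from the text:

```python
import numpy as np

# Least squares via QR: solve R x = Q^T b by back substitution.
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0],
              [1.0, 2.0, 0.0]])
b = np.array([1.0, 2.0, 3.0])     # illustrative right-hand side

Q, R = np.linalg.qr(A)            # A = QR, R upper triangular
y = Q.T @ b
x = np.zeros(3)
for i in (2, 1, 0):               # back substitution, bottom row first
    x[i] = (y[i] - R[i, i+1:] @ x[i+1:]) / R[i, i]

print(np.allclose(A @ x, b))      # True: A is invertible here, so Ax = b exactly
```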

10. From Exercise 6, we have A = QR. Thus the normal system for Ax = b can be expressed as Rx = Q^T b. Solving this system by back substitution yields x3 = -2 and then, continuing upward, x2 and x1.

11. The plane 2x - y + 3z = 0 corresponds to a⊥ where a = (2, -1, 3). Thus, writing a as a column vector, the standard matrix for the reflection of R^3 about the plane is

H = I - (2/a^T a) aa^T = I - (2/14)[4 -2 6; -2 1 -3; 6 -3 9] = (1/7)[3 2 -6; 2 6 3; -6 3 -2]

and the reflection of the vector b = (1, 2, 2) about that plane is given, in column form, by Hb = (-5/7, 20/7, -4/7)^T.

12. The plane x + y - 4z = 0 corresponds to a⊥ where a = (1, 1, -4). Thus, writing a as a column vector, the standard matrix for the reflection of R^3 about the plane is

H = I - (2/a^T a) aa^T = I - (2/18)[1 1 -4; 1 1 -4; -4 -4 16] = (1/9)[8 -1 4; -1 8 4; 4 4 -7]

and the reflection of the vector b = (1, 0, 1) about that plane is given, in column form, by Hb = (4/3, 1/3, -1/3)^T.
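The Householder formula H = I - (2/a^T a) aa^T from Exercises 11-12 is easy to verify numerically; the check below uses the Exercise 11 data and confirms that H sends the normal vector a to -a and reflects b = (1, 2, 2) as computed above:

```python
import numpy as np

def reflection_about_plane(a):
    # Householder matrix H = I - (2/a^T a) a a^T for the reflection about a-perp.
    a = np.asarray(a, dtype=float)
    return np.eye(a.size) - (2.0 / a.dot(a)) * np.outer(a, a)

H = reflection_about_plane([2, -1, 3])   # Exercise 11: the plane 2x - y + 3z = 0
print(H @ np.array([2.0, -1.0, 3.0]))    # the normal is sent to its negative
print(H @ np.array([1.0, 2.0, 2.0]))     # reflection of b = (1, 2, 2)
```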

13.-15. In each case the Householder matrix for the reflection about a⊥ is obtained by substituting the given (column) vector a into the formula H = I - (2/a^T a) aa^T.


16. Similarly, H = I - (2/a^T a) aa^T, computed for the given vector a.

17. (a) Let a = v - w = (3, 4) - (5, 0) = (-2, 4). Then H = I - (2/a^T a) aa^T is the Householder matrix for the reflection about a⊥, and Hv = w.

(b) Let a = v - w = (3, 4) - (0, 5) = (3, -1). Then H = I - (2/a^T a) aa^T is the Householder matrix for the reflection about a⊥, and Hv = w.
(c) Let a = v - w. Then the appropriate Householder matrix is H = I - (2/a^T a) aa^T, and Hv = w.

18. (a) Let a = v - w = (1, 1) - (√2, 0) = (1 - √2, 1). Then the appropriate Householder matrix is

H = I - (2/a^T a) aa^T = [√2/2 √2/2; √2/2 -√2/2]

(b) Let a = v - w = (1, 1) - (0, √2) = (1, 1 - √2). Then the appropriate Householder matrix is

H = I - (2/a^T a) aa^T = [-√2/2 √2/2; √2/2 √2/2]
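The construction in Exercises 17-18, taking a = v - w for vectors of equal length and reflecting about a⊥, can be verified directly; the check below uses the Exercise 17(b) data:

```python
import numpy as np

def householder_taking(v, w):
    # With a = v - w (and ||v|| = ||w||), H = I - (2/a^T a) a a^T satisfies Hv = w.
    a = np.asarray(v, dtype=float) - np.asarray(w, dtype=float)
    return np.eye(a.size) - (2.0 / a.dot(a)) * np.outer(a, a)

H = householder_taking([3, 4], [0, 5])   # Exercise 17(b): a = (3, -1)
print(H @ np.array([3.0, 4.0]))          # [0. 5.]
```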

( ] [:r:] = r- 72 [x:)= [5]' then [;r] = [cose4") -sin( sm( ft) cos(

(b)

If

(a)

If [:]

(b) If

(:r] =

=[

11

ll

2

3 4

ll

z

r::J =

then

[:J

=

3 ") 4

ll

rJ [:]-= rJ -rJ -rJ =

=

=

* nl l

nl,

29. (a) If [:] = then [::] =[-::(;) (b) rr x'] [ 1] lhen [x] ;: [ =

30

Ul·

(b) If[::]= 31.

We have [::]

=

'

{i]·

(a)

=

=[

1

-

=

[

0

z'

0

1

-3

[-t]. -3

[t :-:][:] -rl nl u:!l [: ] =u: !l Ul = =

r;] m

and

=

0

=

=

:

[! :-!] [: ]

[! r -;)[-t r m

Thus

-4 .:.::] •I

DISCUSSION AND DISCOVERY

D2. (a) Let B = {v1, v2, v3}, where v1 = (1, 1, 0), v2 = (1, 0, 2), and v3 = (0, 2, 1) correspond to the column vectors of the matrix P. Then, from Theorem 7.11.8, P is the transition matrix from B to the standard basis S = {e1, e2, e3}.
(b) If P is the transition matrix from S = {e1, e2, e3} to B = {w1, w2, w3}, then e1 = w1 + w2, e2 = w1 + 2w3, and e3 = 2w2 + w3. Solving these vector equations for w1, w2, and w3 in terms of e1, e2, and e3 shows that w1, w2, and w3 correspond to the column vectors of the matrix P^{-1}.

WORKING WITH PROOFS

f v 1, v, , v 3}

D3 . If 8 v1

==-

267

(1, 1, 1 ) ,

D4. If v = a1u1 + a2u2 + ··· + anun and w = b1u1 + b2u2 + ··· + bnun, then

cv = ca1u1 + ca2u2 + ··· + canun
v + w = (a1u1 + a2u2 + ··· + anun) + (b1u1 + b2u2 + ··· + bnun) = (a1 + b1)u1 + (a2 + b2)u2 + ··· + (an + bn)un

Thus (cv)_B = (ca1, ca2, ..., can) = c(a1, ..., an) = c(v)_B and (v + w)_B = (a1 + b1, ..., an + bn) = (a1, a2, ..., an) + (b1, b2, ..., bn) = (v)_B + (w)_B.

CHAPTER 8

Diagonalization

EXERCISE SET 8.1

1.-2. In each case one computes [x]_B and [Tx]_B directly, forms [T]_B = [[Tv1]_B [Tv2]_B], and then notes that [Tx]_B = [T]_B [x]_B, which is Formula (7).

3.

Let P = P_{S←B} where S = {e1, e2} is the standard basis. Then P = [[v1]_S [v2]_S], and a direct computation confirms that P[T]_B P^{-1} = [T].

4. Let P = P_{S←B} where S = {e1, e2} is the standard basis. Then P = [[v1]_S [v2]_S], and again P[T]_B P^{-1} = [T].

5. For every vector x in R^3, we express x as a linear combination of the basis vectors v1, v2, v3 to obtain [x]_B, and express Tx similarly to obtain [Tx]_B. Finally, we note that [Tx]_B = [T]_B [x]_B, which is Formula (7).

6.

X =

= Hx,

have

- fz, + !x,)

nl

+ tlx, +

+ {,x,)

m

+ (-jx, + lx,-!x,)

and

[T] F = [[Tv,]o [7'v,]o [Tv, ]!,] = [ Finally, we note that

-y [ 12 7

-8

which is Formula (7) .


Chapter 8

7. Let P = P_{S←B} where S is the standard basis. Then P = [[v1]_S [v2]_S [v3]_S], and a direct computation confirms that P[T]_B P^{-1} = [T].

8. Let P = P_{S←B} where S is the standard basis. Then P = [[v1]_S [v2]_S [v3]_S], and again P[T]_B P^{-1} = [T].

9.-10. Computing Tv1 and Tv2 and expressing the results in terms of the basis B = {v1, v2} gives the columns of [T]_B; expressing them in terms of B' = {v1', v2'} gives the columns of [T]_B'.

11.-12. The equation P[T]_B P^{-1} = [T]_B' is equivalent to [T]_B = P^{-1}[T]_B' P, where P = P_{B←B'}; this is confirmed by direct computation using the matrices found in Exercises 9 and 10.

13.-14. The standard matrix is [T], and [T]_B is as found in Exercises 9 and 10; these matrices are related by the equation P[T]_B P^{-1} = [T], where P = [v1 v2].

15.-16. (a) For every x in R^2 (respectively R^3), [x]_B and [Tx]_B are computed by expressing x and Tx in terms of the basis vectors. (b) In agreement with Formula (7), [Tx]_B = [T]_B [x]_B.

17.-19. For every vector x, [x]_B and [Tx]_B' are computed by expressing x and Tx in terms of the respective bases, and [T]_B',B = [[Tv1]_B' [Tv2]_B' ···]. Finally, in agreement with Formula (26), [Tx]_B' = [T]_B',B [x]_B.

EXERCISE SET 8.2

1.-2. In each case rank(A) ≠ rank(B) (for example, rank(A) = 1 and rank(B) = 2); thus A and B are not similar, since similar matrices have the same rank.

5. The eigenvalues are λ = -3 (multiplicity 1), λ = -1 (multiplicity 3), and λ = 8 (multiplicity 7). The eigenspace corresponding to λ = -3 has dimension 1; the eigenspace corresponding to λ = -1 has dimension 1, 2, or 3; and the eigenspace corresponding to λ = 8 has dimension 1, 2, 3, 4, 5, 6, or 7.

6.

(a) The matrix is 5 x 5 with eigenvalues λ = 0 (multiplicity 1), λ = 1 (multiplicity 1), λ = -2 (multiplicity 1), and λ = 3 (multiplicity 2). The eigenspaces corresponding to λ = 0, λ = 1, and λ = -2 each have dimension 1. The eigenspace corresponding to λ = 3 has dimension 1 or 2. (b) The matrix is 6 x 6 with eigenvalues λ = 0 (multiplicity 2), λ = 6 (multiplicity 1), and λ = 2 (multiplicity 3). The eigenspace corresponding to λ = 6 has dimension 1; the eigenspace corresponding to λ = 0 has dimension 1 or 2; and the eigenspace corresponding to λ = 2 has dimension 1, 2, or 3.

7. Since A is triangular, its characteristic polynomial is p(λ) = (λ - 1)(λ - 1)(λ - 2) = (λ - 1)^2(λ - 2). Thus the eigenvalues of A are λ = 1 and λ = 2, with algebraic multiplicities 2 and 1 respectively. The eigenspace corresponding to λ = 1 is the solution space of the system (I - A)x = 0, which is

The general solution of this system is x = t v for a single vector v; thus the eigenspace is 1-dimensional, and so λ = 1 has geometric multiplicity 1. The eigenspace corresponding to λ = 2 is the solution space of the system (2I - A)x = 0, which is

The solution space of this system is x = s w for a single vector w; thus the eigenspace is 1-dimensional and so λ = 2 also has geometric multiplicity 1.

8. The eigenvalues of A are λ = 1, λ = 3, and λ = 5, each with algebraic multiplicity 1 and geometric multiplicity 1.

9. The characteristic polynomial of A is p(λ) = det(λI - A) = (λ - 5)^2(λ - 3). Thus the eigenvalues of A are λ = 5 and λ = 3, with algebraic multiplicities 2 and 1 respectively. The eigenspace corresponding to λ = 5 is the solution space of the system (5I - A)x = 0; the general solution of this system is x = t v for a single vector v, so the eigenspace is 1-dimensional and λ = 5 has geometric multiplicity 1.

10. The characteristic polynomial of A is p(λ) = (λ + 1)(λ - 3)^2. Thus the eigenvalues of A are λ = -1 and λ = 3, with algebraic multiplicities 1 and 2 respectively. The eigenspace corresponding to λ = -1 is 1-dimensional, and so λ = -1 has geometric multiplicity 1. The eigenspace corresponding to λ = 3 is the solution space of the system (3I - A)x = 0; the general solution of this system is x = s u + t v, so the eigenspace is 2-dimensional and λ = 3 has geometric multiplicity 2.

11. The characteristic polynomial of A is p(λ) = λ^3 + 3λ^2 = λ^2(λ + 3); thus the eigenvalues are λ = 0 and λ = -3, with algebraic multiplicities 2 and 1 respectively. The rank of the matrix

0I - A = -A is clearly 1, since each of its rows is a scalar multiple of the 1st row. Thus nullity(0I - A) = 3 - 1 = 2, and this is the geometric multiplicity of λ = 0. On the other hand, the matrix

-3I - A has rank 2, since its reduced row echelon form has two nonzero rows. Thus nullity(-3I - A) = 3 - 2 = 1, and this is the geometric multiplicity of λ = -3.

12. The characteristic polynomial of A is (λ - 1)(λ^2 - 2λ + 2); thus λ = 1 is the only real eigenvalue of A. The reduced row echelon form of I - A has two nonzero rows; thus the rank of I - A is 2 and the geometric multiplicity of λ = 1 is nullity(I - A) = 3 - 2 = 1.

13. The characteristic polynomial of A is p(λ) = λ^3 - 11λ^2 + 39λ - 45 = (λ - 5)(λ - 3)^2; thus the eigenvalues are λ = 5 and λ = 3, with algebraic multiplicities 1 and 2 respectively.

14. The characteristic polynomial of A is p(λ) = (λ + 2)(λ - 1)^2; thus the eigenvalues are λ = -2 and λ = 1, with algebraic multiplicities 1 and 2 respectively.

15. The characteristic polynomial of A is p(λ) = λ^2 - 3λ + 2 = (λ - 1)(λ - 2); thus A has two distinct eigenvalues, λ = 1 and λ = 2. The eigenspace corresponding to λ = 1 is obtained by solving the system (I - A)x = 0; the general solution of this system is x = t [4/5; 1]. Thus,


taking t = 5, we see that p1 = [4; 5] is an eigenvector for λ = 1. Similarly, p2 = [3; 4] is an eigenvector for λ = 2. Finally, the matrix P = [p1 p2] = [4 3; 5 4] has the property that

P^{-1}AP = [4 -3; -5 4][-14 12; -20 17][4 3; 5 4] = [1 0; 0 2]
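The Exercise 15 diagonalization can be confirmed numerically (matrices as reconstructed from the garbled display):

```python
import numpy as np

# Exercise 15 check: P^{-1} A P should be diag(1, 2).
A = np.array([[-14.0, 12.0],
              [-20.0, 17.0]])
P = np.array([[4.0, 3.0],
              [5.0, 4.0]])

D = np.linalg.inv(P) @ A @ P
print(np.round(D))  # diag(1, 2), up to rounding
```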

16. The characteristic polynomial of A is (λ - 1)(λ + 1); thus A has two distinct eigenvalues, λ1 = 1 and λ2 = -1. With corresponding eigenvectors p1 and p2, the matrix P = [p1 p2] has the property that P^{-1}AP = [1 0; 0 -1].

17. The characteristic polynomial of A is p(λ) = λ(λ - 1)(λ - 2); thus A has three distinct eigenvalues, λ = 0, λ = 1, and λ = 2. Solving (0I - A)x = 0, (I - A)x = 0, and (2I - A)x = 0 yields an eigenvector for each eigenvalue, and the matrix P having these eigenvectors as its columns has the property that P^{-1}AP = diag(0, 1, 2).

18. The characteristic polynomial of A is p(λ) = (λ - 2)(λ - 3)^2; thus the eigenvalues of A are λ = 2 and λ = 3. There is one linearly independent eigenvector v1 corresponding to λ = 2 and there are two linearly independent eigenvectors v2, v3 corresponding to λ = 3. The matrix P = [v1 v2 v3] has the property that P^{-1}AP = diag(2, 3, 3).

19. The characteristic polynomial of A is p(λ) = λ^3 - 6λ^2 + 11λ - 6 = (λ - 1)(λ - 2)(λ - 3); thus A has three distinct eigenvalues, λ1 = 1, λ2 = 2, and λ3 = 3. Corresponding eigenvectors v1, v2, v3 are found by solving (λI - A)x = 0 for each eigenvalue. Thus A is diagonalizable, and P = [v1 v2 v3] has the property that P^{-1}AP = diag(1, 2, 3).

Note. The diagonalizing matrix P is not unique; it depends on the choice of eigenvectors. This is just one possibility.

20. The characteristic polynomial of A is p(λ) = λ^3 - 4λ^2 + 5λ - 2 = (λ - 2)(λ - 1)^2; thus A has two distinct eigenvalues, λ = 2 and λ = 1. The general solution of (2I - A)x = 0 is x = t u for a single vector u, which shows that the eigenspace corresponding to λ = 2 has dimension 1. Similarly, the general solution of (I - A)x = 0 is x = s v, which shows that the eigenspace corresponding to λ = 1 also

21. The characteristic polynomial of A is p(>.) = (>. - 5)3 ; thus A has one eigenvalue, >. = 5, which has algebraic multipUcity 3. The eigenspace corresponding to >.. = 5 is obtained by solving the system

(51- A)x = 0, which is

H: [:;]

=

m

The genml solution of this system is

X-

t

which shows that t ht:' eigenspacP has dimension 1, i.P , Lhe eigenvalue has geometric mulliplicity 1 lL follows that. A is nnt. since thn sum of t.hr gcomelrif multiplicil.it!S vf its is less than 3 22. The characteristic polynomtaJ nf A is

eigenspace corresponding to A

0

>.2 (>.- 1}; thus the eigenvalues of A are \

hM dimension 2. and the vecto'5 v, = [

hMis lot this space. The "'""' ' '

=

23

=

v,

1 The

[:] fotm a

mfotms a ba.. =

:

v2

=

I.

u! v 3)

has the property that

The characteristic polynomial of A is p(λ) = (λ + 2)^2(λ - 3)^2; thus A has two eigenvalues, λ = -2 and λ = 3, each of which has algebraic multiplicity 2. The eigenspace corresponding to λ = -2 is obtained by solving the system (-2I - A)x = 0. The general solution of this system is x = r u + s v, which shows that the eigenspace has dimension 2, i.e., that the eigenvalue λ = -2 has geometric multiplicity 2. On the other hand, the general solution of (3I - A)x = 0 is x = t w, and so λ = 3 has geometric multiplicity 1. It follows that A is not diagonalizable, since the sum of the geometric multiplicities of its eigenvalues is less than 4.


24. The characteristic polynomial of A is p(λ) = (λ + 2)^2(λ - 3)^2; thus A has two eigenvalues, λ = -2 and λ = 3, each of algebraic multiplicity 2; A is diagonalizable if and only if each of the corresponding eigenspaces has dimension 2.

25. The characteristic polynomial of A = [a b; c d] is p(λ) = λ^2 - (a + d)λ + (ad - bc), and the discriminant of this polynomial is (a + d)^2 - 4(ad - bc) = (a - d)^2 + 4bc.
(a) If (a - d)^2 + 4bc > 0, then p(λ) has two distinct real roots; thus A is diagonalizable since it has two distinct eigenvalues.
(b) If (a - d)^2 + 4bc < 0, then p(λ) has no real roots; thus A has no real eigenvalues and is not diagonalizable.

DISCUSSION AND DISCOVERY

D1. The matrices A and B are not similar, since rank(A) = 1 and rank(B) = 2.

D2. (a) True. We have A = P^{-1}AP where P = I.
(b) True. If A is similar to B and B is similar to C, then there are invertible matrices P1 and P2 such that A = P1^{-1}BP1 and B = P2^{-1}CP2. It follows that A = P1^{-1}(P2^{-1}CP2)P1 = (P2P1)^{-1}C(P2P1); thus A is similar to C.
(c) True. If A = P^{-1}BP, then A^{-1} = (P^{-1}BP)^{-1} = P^{-1}B^{-1}(P^{-1})^{-1} = P^{-1}B^{-1}P.
(d) False. This statement does not guarantee that there are enough linearly independent eigenvectors; a matrix can have a single real eigenvalue and still fail to be diagonalizable.

D3. (a)

False. For examplr, 1 -

is diagonalizahle

(b) False. For example, if P^{-1}AP is a diagonal matrix then so is Q^{-1}AQ where Q = 2P. The diagonalizing matrix (if it exists) is not unique!
(c) True. Vectors from different eigenspaces correspond to different eigenvalues and are therefore linearly independent. In the situation described, {v1, v2, v3} is a linearly independent set.
(d) True. If an invertible matrix A is similar to a diagonal matrix D, then D must also be invertible; thus D has nonzero diagonal entries, and D^{-1} is the diagonal matrix whose diagonal entries are the reciprocals of the corresponding entries of D. Finally, if P is an invertible matrix such that P^{-1}AP = D, then P^{-1}A^{-1}P = (P^{-1}AP)^{-1} = D^{-1}, and so A^{-1} is similar to D^{-1}.


(e) True. The vectors in a basis are linearly independent; thus A has n linearly independent eigenvectors.

D4.

(a) A is a 6 x 6 matrix. (b) The eigenspace corresponding to λ = 1 has dimension 1. The eigenspace corresponding to λ = 3 has dimension 1 or 2. The eigenspace corresponding to λ = 4 has dimension 1, 2, or 3. (c) If A is diagonalizable, then the eigenspaces corresponding to λ = 1, λ = 3, and λ = 4 have dimensions 1, 2, and 3 respectively. (d) These vectors must correspond to the eigenvalue λ = 4.

D5.

(a) If λ1 has geometric multiplicity 2 and λ2 has geometric multiplicity 3, then λ3 must have geometric multiplicity 1. Thus the sum of the geometric multiplicities is 6 and so A is diagonalizable. (b) In this case the matrix is not diagonalizable, since the sum of the geometric multiplicities of the eigenvalues is less than 6. (c) The matrix may or may not be diagonalizable. The geometric multiplicity of λ3 must be 1 or 2. If the geometric multiplicity of λ3 is 2, then the matrix is diagonalizable; if it is 1, then the matrix is not diagonalizable.

WORKING WITH PROOFS

P1. If A and B are similar, then there is an invertible matrix P such that A = P^{-1}BP. Thus PA = BP and so, using the result of the cited exercise, we have rank(A) = rank(PA) = rank(BP) = rank(B) and nullity(A) = nullity(PA) = nullity(BP) = nullity(B).

P2. If A and B are similar, then there is an invertible matrix P such that A = P^{-1}BP. Thus, using part (e) of Theorem 3.2.12, we have tr(A) = tr(P^{-1}BP) = tr(P^{-1}(BP)) = tr((BP)P^{-1}) = tr(B).

P3. If x ≠ 0 and Ax = λx then, since P is invertible and C = P^{-1}AP, we have C(P^{-1}x) = P^{-1}Ax = P^{-1}(λx) = λ(P^{-1}x), with P^{-1}x ≠ 0. Thus P^{-1}x is an eigenvector of C corresponding to λ.

P4. If A and B are similar, then there is an invertible matrix P such that A = P^{-1}BP. We will prove, by induction, that A^k = P^{-1}B^k P (and thus that A^k and B^k are similar) for every positive integer k.

Step 1. The fact that A^1 = A = P^{-1}BP = P^{-1}B^1 P is given.

Step 2 (induction step). If A^k = P^{-1}B^k P, then A^{k+1} = A^k A = (P^{-1}B^k P)(P^{-1}BP) = P^{-1}B^{k+1} P.

These two steps complete the proof by induction.

EXERCISE SET 8.3

1. The characteristic polynomial of A is p(λ) = λ^2 - 5λ = λ(λ - 5). Thus the eigenvalues of A are λ = 0 and λ = 5, and each of the eigenspaces has dimension 1.

2. The characteristic polynomial of A is p(λ) = λ^3 - 27λ - 54 = (λ - 6)(λ + 3)^2. Thus the eigenvalues of A are λ = 6 and λ = -3. The eigenspace corresponding to λ = 6 has dimension 1, and the eigenspace corresponding to λ = -3 has dimension 2.

3. The characteristic polynomial of A is p(λ) = λ^3 - 3λ^2 = λ^2(λ - 3). Thus the eigenvalues of A are λ = 0 and λ = 3. The eigenspace corresponding to λ = 0 has dimension 2, and the eigenspace corresponding to λ = 3 has dimension 1.

4. The characteristic polynomial of A is p(λ) = λ^3 - 9λ^2 + 15λ - 7 = (λ - 7)(λ - 1)^2. Thus the eigenvalues of A are λ = 7 and λ = 1. The eigenspace corresponding to λ = 7 has dimension 1, and the eigenspace corresponding to λ = 1 has dimension 2.

5. The general solution of the system (0I - A)x = 0 is x = r v1 + s v2; thus the vectors v1 and v2 form a basis for the eigenspace corresponding to λ = 0. Similarly, the vector v3 forms a basis for the eigenspace corresponding to λ = 3. Since v3 is orthogonal to both v1 and v2, it follows that the two eigenspaces are orthogonal.


6.-9. In each of these exercises the general solution of (λI - A)x = 0 for each eigenvalue λ yields a basis for the corresponding eigenspace. Application of the Gram-Schmidt process to each basis (e.g., to {v1} and to {v2, v3}) yields orthonormal bases for the eigenspaces, and the orthogonal matrix P having these orthonormal eigenvectors as its columns has the property that P^T AP = D is diagonal.

Note. The diagonalizing matrix P is not unique; it depends on the choice of bases for the eigenspaces. This is just one possibility.


10. The characteristac polynomial of A is p(>.) = >.3 + 28>. 2 - 1175>.- 3750 = (>. + 3)(>.- 25)(>. +50); thus the eigenvalues of A are >. 1= -3, >.2 = 25, an.3 = -50. Corresponding eigenvectors are

=

v,

[!],

v,

=[

i],

and v,

=

m.

"!' mutually orthogonal, and the orthogonal

These vc-eto.) = \ 3 and

> = 2.

( 2/ - A )x

The genoral •oluUon or (OJ - .4)x

= 0 is x

""respondin• to I

>.

I[

l]·

=0

Thus the vedors v,

0. o>nd >he vector v, =

2>.2

-

=

[l]

is

n

-g4

0 25

=

5

=D

0

= ).2 (>,- 2); t.hus the eigf'nvalues of A are>.= 0 X

=T

m

m+

and v,

s [ -:]' and the general soluUon

l

= [-:

or

form a basis £or the eigenspace

£mms a basos £or the eigenspace eonesponding to

= 2 . These VPctors are mutually orthogonal, and t.he orthogonal matrix 1

-72

Vt

F =

[

I

llvdl

72

72

0

0

hn.-, the propt>rtv 1hnl

n [ 12.

1

V2

72 0

72

12. The characteristic polynomial of A is computed and factored; a basis for each eigenspace is obtained by solving (λI − A)x = 0, and normalizing the (mutually orthogonal) basis vectors yields an orthogonal matrix P with PᵀAP = D.

13. The characteristic polynomial of A factors as λ(λ − 2)(λ − 4); thus the eigenvalues of A are λ = 0, λ = 2, and λ = 4, and bases for the eigenspaces are obtained by solving (λI − A)x = 0 for each eigenvalue. Normalizing these vectors yields an orthogonal matrix P with PᵀAP = D = diag(0, 2, 4).

14. The characteristic polynomial of A is p(λ) = λ⁴ − 1250λ² + 390625 = (λ − 25)²(λ + 25)²; thus the eigenvalues of A are λ = 25 and λ = −25, each with a two-dimensional eigenspace. Orthogonal bases for the two eigenspaces are obtained from the general solutions of (25I − A)x = 0 and (−25I − A)x = 0, and normalizing these vectors yields an orthogonal matrix P with PᵀAP = D = diag(25, 25, −25, −25).

15.-17. In each case A has eigenvalues λ₁, λ₂, … with corresponding normalized eigenvectors u₁, u₂, … (for Exercise 16, λ₁ = 2 and λ₂ = 1), and the spectral decomposition of A is

A = λ₁u₁u₁ᵀ + λ₂u₂u₂ᵀ + ⋯

Note. The spectral decomposition is not unique. It depends on the choice of bases for the eigenspaces. This is just one possibility.
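A spectral decomposition can be assembled directly from the output of `eigh`; the rank-one terms λᵢuᵢuᵢᵀ sum back to A. The matrix here is a hypothetical stand-in:

```python
import numpy as np

# Hypothetical symmetric matrix (not the textbook's); eigenvalues 1 and 3.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

lam, U = np.linalg.eigh(A)     # columns of U are orthonormal eigenvectors
terms = [lam[i] * np.outer(U[:, i], U[:, i]) for i in range(len(lam))]

# The rank-one projections weighted by the eigenvalues rebuild A.
print(np.allclose(sum(terms), A))   # True
```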

18. As in Exercises 15-17, the spectral decomposition of A is A = λ₁u₁u₁ᵀ + λ₂u₂u₂ᵀ + λ₃u₃u₃ᵀ, where the uᵢ are orthonormal eigenvectors of A.

19. The matrix A has eigenvalues λ = −1 and λ = 2, with corresponding eigenvectors [−1; 1] and [−3; 2]. Thus the matrix P = [−1 −3; 1 2] has the property that P⁻¹AP = D = diag(−1, 2), and it follows that

A¹⁰ = PD¹⁰P⁻¹ = [−1 −3; 1 2][1 0; 0 1024][2 3; −1 −1] = [3070 3069; −2046 −2045]
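The power computation in Exercise 19 can be replayed in numpy. Note that the eigendata above determine A itself as PDP⁻¹ = [8 9; −6 −7]:

```python
import numpy as np

# Exercise 19: eigenvalues -1 and 2 with eigenvectors (-1,1) and (-3,2),
# so A = P D P^(-1) = [[8, 9], [-6, -7]].
P = np.array([[-1.0, -3.0],
              [ 1.0,  2.0]])
D = np.diag([-1.0, 2.0])
A = P @ D @ np.linalg.inv(P)

# A^10 = P D^10 P^(-1); only the diagonal factor needs to be powered.
A10 = P @ np.diag(np.diag(D) ** 10) @ np.linalg.inv(P)

print(np.allclose(A10, [[3070, 3069], [-2046, -2045]]))   # True
print(np.allclose(A10, np.linalg.matrix_power(A, 10)))    # True
```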


20. The matrix A has eigenvalues λ = 2 and λ = −2, with corresponding eigenvectors forming the columns of a matrix P with P⁻¹AP = D = diag(2, −2). It follows that A¹⁰ = PD¹⁰P⁻¹ = P(1024 I)P⁻¹ = [1024 0; 0 1024].

21.-22. Similarly, each matrix A is diagonalized as A = PDP⁻¹, and the required power is computed as Aᵏ = PDᵏP⁻¹.

23. (a) The characteristic polynomial of A is p(λ) = λ³ − 6λ² + 12λ − 8 = (λ − 2)³. Computing successive powers of A and substituting directly shows that

A³ − 6A² + 12A − 8I = 0

which shows that A satisfies its characteristic equation, i.e., that p(A) = 0.

(b) Since A³ = 6A² − 12A + 8I, we have A⁴ = 6A³ − 12A² + 8A = 24A² − 64A + 48I.

(c) Since A³ − 6A² + 12A − 8I = 0, we have A(A² − 6A + 12I) = 8I and A⁻¹ = (1/8)(A² − 6A + 12I).
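The Cayley-Hamilton manipulation in Exercise 23 can be verified numerically; the matrix below is a hypothetical stand-in with the same characteristic polynomial (λ − 2)³:

```python
import numpy as np

# Hypothetical matrix (not the textbook's) with characteristic polynomial
# p(lam) = (lam - 2)^3 = lam^3 - 6 lam^2 + 12 lam - 8.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 2.0]])
I = np.eye(3)

A2 = A @ A
A3 = A2 @ A

# Cayley-Hamilton: A satisfies its own characteristic equation.
print(np.allclose(A3 - 6 * A2 + 12 * A - 8 * I, 0))   # True

# Rearranging gives the inverse: A^(-1) = (1/8)(A^2 - 6A + 12I).
A_inv = (A2 - 6 * A + 12 * I) / 8
print(np.allclose(A @ A_inv, I))                       # True
```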

24. (a) The characteristic polynomial of A is p(λ) = λ³ − λ² − λ + 1. Computing successive powers of A, we have

A² = [−5 0 6; −3 1 3; −4 0 5][−5 0 6; −3 1 3; −4 0 5] = [1 0 0; 0 1 0; 0 0 1] = I  and  A³ = A²A = A;

thus A³ − A² − A + I = A − I − A + I = 0, which shows that A satisfies its characteristic equation, i.e., that p(A) = 0.

(b) Since A³ = A, we have A⁴ = AA³ = A² = I.

(c) Since A² = I, we have A⁻¹ = A.
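The matrix of Exercise 24 is its own inverse, which is quick to confirm:

```python
import numpy as np

# The matrix from Exercise 24; its square is the identity, so A^(-1) = A.
A = np.array([[-5.0, 0.0, 6.0],
              [-3.0, 1.0, 3.0],
              [-4.0, 0.0, 5.0]])
I = np.eye(3)

print(np.allclose(A @ A, I))               # True: A^2 = I
print(np.allclose(np.linalg.inv(A), A))    # True: A is its own inverse

# A also satisfies p(lam) = lam^3 - lam^2 - lam + 1, as shown above.
print(np.allclose(A @ A @ A - A @ A - A + I, 0))   # True
```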

25.-28. In each case, the orthogonal diagonalization PᵀAP = D obtained in an earlier exercise (Exercises 7-10, respectively) gives A = PDPᵀ, and therefore

e^{tA} = Pe^{tD}Pᵀ

where e^{tD} is the diagonal matrix whose diagonal entries are e^{λᵢt}. For example, when the eigenvalues are λ = 2 and λ = −4, the entries of e^{tA} are linear combinations of e^{2t} and e^{−4t}, such as (e^{2t} + 5e^{−4t})/6 and (2e^{2t} − 2e^{−4t})/6; and in Exercise 28 (eigenvalues −3, 25, and −50, from Exercise 10) the entries are combinations of e^{−3t}, e^{25t}, and e^{−50t}.
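The formula e^{tA} = Pe^{tD}Pᵀ is easy to implement for a symmetric matrix; the example matrix is hypothetical, chosen to have the eigenvalues 2 and −4 that appear above:

```python
import numpy as np

# Hypothetical symmetric matrix with eigenvalues -1 +/- 3 = 2 and -4
# (not the textbook's exercise matrix).
A = np.array([[-1.0, 3.0],
              [ 3.0, -1.0]])

def expm_sym(A, t):
    # e^{tA} = P e^{tD} P^T, using the orthogonal diagonalization from eigh.
    lam, P = np.linalg.eigh(A)
    return P @ np.diag(np.exp(lam * t)) @ P.T

# Sanity checks: e^{0A} = I, and d/dt e^{tA} at t = 0 is A.
print(np.allclose(expm_sym(A, 0.0), np.eye(2)))   # True
```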

29. Note that sin(πA) = P sin(πD)Pᵀ, and each diagonal entry of sin(πD) has the form sin(kπ) = 0 for an integer k (e.g., sin(2π) = 0 and sin(−4π) = 0); thus sin(πA) = 0.

P1. Orthogonally similar matrices represent the same linear operator relative to different orthonormal bases. Conversely, suppose that A = [T]_B and C = [T]_{B′}, where T: Rⁿ → Rⁿ is a linear operator and B, B′ are orthonormal bases for Rⁿ. If P = P_{B′→B}, then P is an orthogonal matrix and C = [T]_{B′} = Pᵀ[T]_B P = PᵀAP. Thus A and C are orthogonally similar.

P2. Suppose A = c₁u₁u₁ᵀ + c₂u₂u₂ᵀ + ⋯ + cₙuₙuₙᵀ, where {u₁, u₂, …, uₙ} is an orthonormal basis for Rⁿ. Since (uⱼuⱼᵀ)ᵀ = uⱼuⱼᵀ, it follows that Aᵀ = A; thus A is symmetric. Furthermore, since uᵢᵀuⱼ = uᵢ · uⱼ = δᵢⱼ, we have

Auⱼ = (c₁u₁u₁ᵀ + c₂u₂u₂ᵀ + ⋯ + cₙuₙuₙᵀ)uⱼ = Σᵢ cᵢuᵢuᵢᵀuⱼ = cⱼuⱼ

for each j = 1, 2, …, n. Thus c₁, c₂, …, cₙ are eigenvalues of A.

P3. The spectral decomposition A = λ₁u₁u₁ᵀ + λ₂u₂u₂ᵀ + ⋯ + λₙuₙuₙᵀ is equivalent to A = PDPᵀ, where P = [u₁ | u₂ | ⋯ | uₙ] and D = diag(λ₁, λ₂, …, λₙ); thus

f(A) = Pf(D)Pᵀ = P diag(f(λ₁), f(λ₂), …, f(λₙ))Pᵀ = f(λ₁)u₁u₁ᵀ + f(λ₂)u₂u₂ᵀ + ⋯ + f(λₙ)uₙuₙᵀ
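The formula f(A) = P diag(f(λ₁), …, f(λₙ))Pᵀ from P3 is directly computable; applying f(x) = sin(πx) to a hypothetical matrix with integer eigenvalues reproduces the conclusion of Exercise 29:

```python
import numpy as np

# Hypothetical symmetric matrix with integer eigenvalues 3 and -1
# (not the textbook's).
A = np.array([[1.0, 2.0],
              [2.0, 1.0]])

def apply_fn(A, f):
    # f(A) = P diag(f(lam_1), ..., f(lam_n)) P^T for symmetric A.
    lam, P = np.linalg.eigh(A)
    return P @ np.diag(f(lam)) @ P.T

# sin(pi*A) = 0 because sin(k*pi) = 0 for every integer eigenvalue k.
print(np.allclose(apply_fn(A, lambda x: np.sin(np.pi * x)), 0))   # True
```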

P4. (a) Suppose A is a symmetric matrix, and λ₀ is an eigenvalue of A having geometric multiplicity k. Let W be the eigenspace corresponding to λ₀. Choose an orthonormal basis {u₁, u₂, …, u_k} for W, extend it to an orthonormal basis B = {u₁, u₂, …, u_k, u_{k+1}, …, uₙ} for Rⁿ, and let P be the orthogonal matrix having the vectors of B as its columns. Then, as shown in Exercise P6(b) of Section 8.2, the product AP can be written as AP = P[λ₀I_k X; 0 Y]. Since P is orthogonal, we have PᵀAP = [λ₀I_k X; 0 Y], and since PᵀAP is a symmetric matrix, it follows that X = 0.

(b) Since A is similar to C = [λ₀I_k 0; 0 Y], A has the same characteristic polynomial as C, namely

(λ − λ₀)ᵏ det(λI_{n−k} − Y) = (λ − λ₀)ᵏ p_Y(λ)

where p_Y(λ) is the characteristic polynomial of Y. We will now prove that p_Y(λ₀) ≠ 0 and thus that the algebraic multiplicity of λ₀ is exactly k. The proof is by contradiction: Suppose p_Y(λ₀) = 0, i.e., that λ₀ is an eigenvalue of the matrix Y. Then there is a nonzero vector y in R^{n−k} such that Yy = λ₀y. Let x = [0; y] be the vector in Rⁿ whose first k components are 0 and whose last n − k components are those of y. Then Cx = λ₀x, and so x is an eigenvector of C corresponding to λ₀. Since AP = PC, it follows that Px is an eigenvector of A corresponding to λ₀. But note that e₁, …, e_k are also eigenvectors of C corresponding to λ₀, and that {e₁, …, e_k, x} is a linearly independent set. It follows that {Pe₁, …, Pe_k, Px} is a linearly independent set of eigenvectors of A corresponding to λ₀. But this implies that the geometric multiplicity of λ₀ is greater than k, a contradiction!

(c) It follows from part (b) that the sum of the dimensions of the eigenspaces of A is equal to n; thus A is diagonalizable. Furthermore, since A is symmetric, the eigenspaces corresponding to different eigenvalues are orthogonal. Thus we can form an orthonormal basis for Rⁿ by choosing an orthonormal basis for each of the eigenspaces and joining them together. Since the sum of the dimensions is n, this will be an orthonormal basis consisting of eigenvectors of A. Thus A is orthogonally diagonalizable.

EXERCISE SET 8.4

1. (a) The quadratic form can be expressed in matrix notation as Q = xᵀAx, where A is the symmetric matrix whose diagonal entries are the coefficients of the squared terms and whose off-diagonal entries are half the coefficients of the cross-product terms; for one of the given forms this yields A = [4 −3; −3 −9].

5. The quadratic form Q = 2x₁² − 2x₁x₂ + 2x₂² can be expressed in matrix notation as Q = xᵀAx with A = [2 −1; −1 2]. The matrix A has eigenvalues λ₁ = 1 and λ₂ = 3, with corresponding eigenvectors v₁ = [1; 1] and v₂ = [−1; 1], respectively. Thus the matrix

P = (1/√2)[1 −1; 1 1]

orthogonally diagonalizes A, and the change of variable x = Py eliminates the cross-product term in Q: in the new variables, Q = y₁² + 3y₂². Note that the inverse relationship between x and y is y = Pᵀx.

6. The conic can be expressed in matrix notation as xᵀAx = 1. The eigenvalues of A are λ₁ = 1 and λ₂ = 3, with corresponding eigenvectors v₁ and v₂, and the matrix P = [v₁ v₂] orthogonally diagonalizes A. Note that det(P) = 1, so P is a rotation matrix. The equation of the conic in the rotated x′y′-coordinate system is x′² + 3y′² = 1; thus the conic is an ellipse. The angle of rotation satisfies cos θ = 1/√2 and sin θ = −1/√2; thus θ = −45°.
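The change of variable in Exercise 5 can be checked numerically: in the y-coordinates the form has no cross term and its coefficients are the eigenvalues of A.

```python
import numpy as np

# Q = 2x1^2 - 2 x1 x2 + 2x2^2 = x^T A x, diagonalized by A's eigenvectors.
A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])

lam, P = np.linalg.eigh(A)     # lam = [1, 3]; columns of P are orthonormal
print(np.round(lam, 10))

# In y-coordinates Q = 1*y1^2 + 3*y2^2: check at a sample point.
rng = np.random.default_rng(0)
x = rng.standard_normal(2)
y = P.T @ x                    # inverse relationship y = P^T x
Q_x = x @ A @ x
Q_y = lam[0] * y[0]**2 + lam[1] * y[1]**2
print(np.isclose(Q_x, Q_y))    # True
```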

17. (a) The eigenvalues of A are λ = 1 and λ = 2; thus A is positive definite.
(b) negative definite  (c) indefinite  (d) positive semidefinite  (e) negative semidefinite

18. (a) The eigenvalues of A are λ = 2 and λ = −5; thus A is indefinite.
(b) negative definite  (c) positive definite  (d) negative semidefinite  (e) positive semidefinite

19. We have Q = x₁² + x₂² > 0 for (x₁, x₂) ≠ (0, 0); thus Q is positive definite.

20. negative definite

21. We have Q = (x₁ − x₂)² ≥ 0 for all (x₁, x₂), with Q = 0 when x₁ = x₂; thus Q is positive semidefinite.

22. negative semidefinite

23. We have Q = x₁² − x₂², so Q > 0 for x₁ ≠ 0, x₂ = 0 and Q < 0 for x₁ = 0, x₂ ≠ 0; thus Q is indefinite.

24. indefinite

25. (a) The eigenvalues of the matrix A = [5 −2; −2 5] are λ = 3 and λ = 7; thus A is positive definite. Since the determinants of the principal submatrices, |5| = 5 and det(A) = 21, are positive, we reach the same conclusion using Theorem 8.4.5.
(b) The eigenvalues of A are λ = 1, λ = 3, and λ = 5; thus A is positive definite. The determinants of the principal submatrices are 2, 3, and 15; since these are all positive, we reach the same conclusion using Theorem 8.4.5.

26. (a) The eigenvalues of the matrix A = [2 1; 1 2] are λ = 1 and λ = 3; thus A is positive definite. Since |2| = 2 and det(A) = 3 are positive, we reach the same conclusion using Theorem 8.4.5.
(b) The eigenvalues of A are λ = 1, λ = 3, and λ = 4; thus A is positive definite. The determinants of the principal submatrices are 3, 5, and 12; since these are all positive, we reach the same conclusion using Theorem 8.4.5.

27. (a) The matrix A has eigenvalues λ₁ = 3 and λ₂ = 7, with corresponding eigenvectors v₁ and v₂. The matrix P = [v₁/‖v₁‖  v₂/‖v₂‖] orthogonally diagonalizes A, and the matrix

B = P diag(√3, √7) Pᵀ

has the property that B² = A.
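The eigenvalue-sign test used in Exercises 17-26 is mechanical to automate:

```python
import numpy as np

# Classify a symmetric matrix's definiteness from its eigenvalue signs;
# the example matrix is the one from Exercise 25(a).
def classify(A, tol=1e-12):
    lam = np.linalg.eigvalsh(A)
    if np.all(lam > tol):   return "positive definite"
    if np.all(lam < -tol):  return "negative definite"
    if np.all(lam >= -tol): return "positive semidefinite"
    if np.all(lam <= tol):  return "negative semidefinite"
    return "indefinite"

A = np.array([[5.0, -2.0],
              [-2.0, 5.0]])
print(classify(A))   # positive definite (eigenvalues 3 and 7)
```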

(b) The matrix A is orthogonally diagonalized in the same way: with P the matrix of normalized eigenvectors and D the diagonal matrix of eigenvalues, the matrix B = P√D Pᵀ has the property that B² = A.

28. In each part, A is orthogonally diagonalized as A = PDPᵀ with nonnegative diagonal entries in D, and B = P√D Pᵀ satisfies B² = A.

Finally, if A is a symmetric matrix with nonnegative eigenvalues, there is an orthogonal change of variable x = Py for which xᵀAx = yᵀDy = λ₁y₁² + λ₂y₂², where λ₁ and λ₂ are the eigenvalues of A. Since λ₁ and λ₂ are nonnegative, it follows that xᵀAx ≥ 0 for every vector x in Rⁿ.
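The square-root construction B = P√D Pᵀ of Exercises 27-28 is a few lines in numpy; the example uses the positive definite matrix [5 −2; −2 5] with eigenvalues 3 and 7:

```python
import numpy as np

# Symmetric square root: A = P D P^T with nonnegative eigenvalues gives
# B = P sqrt(D) P^T with B^2 = A.
A = np.array([[5.0, -2.0],
              [-2.0, 5.0]])    # eigenvalues 3 and 7

lam, P = np.linalg.eigh(A)
B = P @ np.diag(np.sqrt(lam)) @ P.T

print(np.allclose(B @ B, A))   # True
print(np.allclose(B, B.T))     # True: B is symmetric
```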

EXERCISE SET 8.5

1. (a) The first partial derivatives of f are f_x(x, y) = 4y − 4x³ and f_y(x, y) = 4x − 4y³. To find the critical points we set f_x and f_y equal to zero. This yields the equations y = x³ and x = y³. From this we conclude that y = y⁹, and so y = 0 or y = ±1. Since x = y³, the corresponding values of x are x = 0 and x = ±1, respectively. Thus there are three critical points: (0, 0), (1, 1), and (−1, −1).

(b) The Hessian matrix is

H(x, y) = [f_xx(x, y)  f_xy(x, y); f_yx(x, y)  f_yy(x, y)] = [−12x²  4; 4  −12y²]

Evaluating this matrix at the critical points of f yields

H(0, 0) = [0 4; 4 0]  and  H(1, 1) = H(−1, −1) = [−12 4; 4 −12]

The eigenvalues of H(0, 0) are λ = ±4; thus the matrix H(0, 0) is indefinite and so f has a saddle point at (0, 0). The eigenvalues of H(1, 1) = H(−1, −1) are λ = −8 and λ = −16; thus the matrix is negative definite and so f has a relative maximum at (1, 1) and at (−1, −1).

2. (a) The first partial derivatives of f are f_x(x, y) = 3x² − 6y and f_y(x, y) = −6x − 3y². To find the critical points we set f_x and f_y equal to zero. This yields the equations y = x²/2 and x = −y²/2. From this we conclude that y = y⁴/8, and so y = 0 or y = 2. The corresponding values of x are x = 0 and x = −2, respectively. Thus there are two critical points: (0, 0) and (−2, 2).

(b) The Hessian matrix is H(x, y) = [6x −6; −6 −6y]. The eigenvalues of H(0, 0) = [0 −6; −6 0] are λ = ±6; this matrix is indefinite and so f has a saddle point at (0, 0). The eigenvalues of H(−2, 2) = [−12 −6; −6 −12] are λ = −6 and λ = −18; this matrix is negative definite and so f has a relative maximum at (−2, 2).
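The second-derivative test from Exercise 1 can be replayed numerically. The Hessian below is the one stated above (the underlying function, recovered from the given partials up to a constant, is f(x, y) = 4xy − x⁴ − y⁴):

```python
import numpy as np

# Hessian of f(x, y) = 4xy - x^4 - y^4 (Exercise 1).
def hessian(x, y):
    return np.array([[-12 * x**2, 4.0],
                     [4.0, -12 * y**2]])

for (x, y) in [(0.0, 0.0), (1.0, 1.0), (-1.0, -1.0)]:
    lam = np.linalg.eigvalsh(hessian(x, y))
    if np.all(lam < 0):
        kind = "relative maximum"
    elif np.all(lam > 0):
        kind = "relative minimum"
    else:
        kind = "saddle point"
    print((x, y), np.round(lam, 6), kind)
# (0,0): eigenvalues -4, 4 -> saddle point
# (1,1) and (-1,-1): eigenvalues -16, -8 -> relative maximum
```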

13. The constraint equation 4x² + 8y² = 16 can be rewritten as (x/2)² + (y/√2)² = 1. Thus, with the change of variable (x, y) = (2x′, √2 y′), the problem is to find the extreme values of z = xy = 2√2 x′y′ subject to x′² + y′² = 1. Note that z = 2√2 x′y′ can be expressed as z = x′ᵀAx′ with A = [0 √2; √2 0]. The eigenvalues of A are λ₁ = √2 and λ₂ = −√2, with corresponding (normalized) eigenvectors v₁ = (1/√2)[1; 1] and v₂ = (1/√2)[−1; 1]. Thus the constrained maximum is z = √2, occurring at (x′, y′) = ±(1/√2, 1/√2), i.e., (x, y) = ±(√2, 1). Similarly, the constrained minimum is z = −√2, occurring at (x′, y′) = ±(−1/√2, 1/√2), i.e., (x, y) = ±(−√2, 1).

14. The constraint can likewise be rewritten as x′² + y′² = 1 by an appropriate scaling of the variables. Expressing the resulting quadratic form as z = x′ᵀAx′, the constrained maximum and minimum of z are the largest and smallest eigenvalues of A, attained at the corresponding unit eigenvectors.

15. The level curve corresponding to the constrained maximum is the hyperbola 5x² − y² = 5; it touches the unit circle at (x, y) = (±1, 0). The level curve corresponding to the constrained minimum is the hyperbola 5x² − y² = −1; it touches the unit circle at (x, y) = (0, ±1).

16. The level curve corresponding to the constrained maximum is the hyperbola xy = 1/2; it touches the unit circle at (x, y) = ±(1/√2, 1/√2). The level curve corresponding to the constrained minimum is the hyperbola xy = −1/2; it touches the unit circle at (x, y) = ±(−1/√2, 1/√2).

17. The area of the inscribed rectangle is z = 4xy, where (x, y) is the corner point that lies in the first quadrant. Our problem is to find the maximum value of z = 4xy subject to x ≥ 0, y ≥ 0, and the given ellipse constraint, which can be rewritten as x′² + y′² = 1 by scaling the variables. In terms of the variables x′ and y′, the problem is to find the maximum value of z = 20x′y′ subject to x′² + y′² = 1, x′ ≥ 0, y′ ≥ 0. Note that z = x′ᵀAx′ where A = [0 10; 10 0]. The largest eigenvalue of A is λ = 10, with corresponding (normalized) eigenvector (1/√2)[1; 1]. Thus the maximum area is z = 10, and this occurs when (x′, y′) = (1/√2, 1/√2).
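The principle behind Exercises 13-17, that the extreme values of x′ᵀAx′ on the unit circle are the extreme eigenvalues of A, checks out numerically for the form of Exercise 13:

```python
import numpy as np

# z = 2*sqrt(2) x'y' = x'^T A x' on the unit circle (Exercise 13).
A = np.array([[0.0, np.sqrt(2.0)],
              [np.sqrt(2.0), 0.0]])

lam, V = np.linalg.eigh(A)       # ascending: [-sqrt(2), sqrt(2)]
print(np.round(lam, 6))

# Compare against a dense sample of unit vectors.
t = np.linspace(0.0, 2.0 * np.pi, 10001)
u = np.stack([np.cos(t), np.sin(t)])
z = np.einsum('it,ij,jt->t', u, A, u)
print(np.isclose(z.max(), lam[-1], atol=1e-6))   # True
print(np.isclose(z.min(), lam[0],  atol=1e-6))   # True
```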

18. Our problem is to find the extreme values of z = 4x² − 4xy + y² subject to x² + y² = 25. Setting x = 5x′ and y = 5y′, this is equivalent to finding the extreme values of z = 100x′² − 100x′y′ + 25y′² subject to x′² + y′² = 1. Note that z = x′ᵀAx′ where A = [100 −50; −50 25]. The eigenvalues of A are λ₁ = 125 and λ₂ = 0, with corresponding (normalized) eigenvectors v₁ = (1/√5)[−2; 1] and v₂ = (1/√5)[1; 2]. Thus the maximum temperature encountered by the ant is z = 125, and this occurs at (x′, y′) = ±(−2/√5, 1/√5), i.e., (x, y) = ±(−2√5, √5). The minimum temperature encountered is z = 0, and this occurs at (x′, y′) = ±(1/√5, 2/√5), i.e., (x, y) = ±(√5, 2√5).

DISCUSSION AND DISCOVERY

D1. (a) We have f_x(x, y) = 4x³ and f_y(x, y) = 4y³; thus f has a critical point at (0, 0). Similarly, g_x(x, y) = 4x³ and g_y(x, y) = −4y³, and so g has a critical point at (0, 0). The Hessian matrices for f and g are

H_f(x, y) = [12x² 0; 0 12y²]  and  H_g(x, y) = [12x² 0; 0 −12y²]

respectively. Since H_f(0, 0) = H_g(0, 0) = 0, the second derivative test is inconclusive in both cases.

(b) It is clear that f has a relative minimum at (0, 0) since f(0, 0) = 0 and f(x, y) = x⁴ + y⁴ is strictly positive at all other points (x, y). In contrast, we have g(0, 0) = 0, g(x, 0) = x⁴ > 0 for x ≠ 0, and g(0, y) = −y⁴ < 0 for y ≠ 0. Thus g has a saddle point at (0, 0).

D2. The eigenvalues of H are λ = 6 and λ = −2. Thus H is indefinite, and so the critical points of f (if any) are saddle points. Starting from f_xx(x, y) = f_yy(x, y) = 2 and f_yx(x, y) = f_xy(x, y) = 4, it follows, using partial integration, that the quadratic form f is f(x, y) = x² + 4xy + y². This function has one critical point (a saddle), which is located at the origin.

D3. If x is a unit eigenvector corresponding to λ, then q(x) = xᵀAx = xᵀ(λx) = λ(xᵀx) = λ(1) = λ.

WORKING WITH PROOFS Pl. First. note that, as in 03, we have u r,Au m = m and u r,A u M = M On the otl.er hand, since u m and liM are orthogon al, we have = = M(u;: u M) = M (O) = 0 and ur,Au m = 0. It follows that if X r

x TcAXc=

j .f-_":r., UM, then 1

(M-e) M-m u rmAum+O+O+ (e-m ) J\f-m

1

u MAUt.t=

(M-e M -m )m + (e-m) M- m M=c

EXERCISE SET 8.6

1. The characteristic polynomial of AᵀA is λ²(λ − 5); thus σ₁ = √5 is the only nonzero singular value of A.

2. The eigenvalues of AᵀA are λ₁ = 16 and λ₂ = 9; thus σ₁ = √16 = 4 and σ₂ = √9 = 3.

3. The eigenvalues of AᵀA are λ₁ = 5 and λ₂ = 5 (i.e., λ = 5 is an eigenvalue of multiplicity 2); thus the singular values of A are σ₁ = √5 and σ₂ = √5.

4. The eigenvalues of AᵀA are λ₁ = 4 and λ₂ = 1; thus the singular values of A are σ₁ = √4 = 2 and σ₂ = √1 = 1.

5.-6. In each case, unit eigenvectors v₁ and v₂ of AᵀA are found, the singular values are σᵢ = √λᵢ, and the left singular vectors are uᵢ = (1/σᵢ)Avᵢ; the matrices U = [u₁ u₂], Σ = diag(σ₁, σ₂), and V = [v₁ v₂] then yield the singular value decomposition A = UΣVᵀ.

7. The eigenvalues of AᵀA = [18 18; 18 18] are λ₁ = 36 and λ₂ = 0, with corresponding unit eigenvectors v₁ = (1/√2)[1; 1] and v₂ = (1/√2)[−1; 1]. The only nonzero singular value of A is σ₁ = √36 = 6, and u₁ = (1/σ₁)Av₁ = (1/√2)[1; 1]. The vector u₂ must be chosen so that {u₁, u₂} is an orthonormal basis for R², e.g., u₂ = (1/√2)[−1; 1]. This results in the following singular value decomposition:

A = [3 3; 3 3] = [1/√2 −1/√2; 1/√2 1/√2][6 0; 0 0][1/√2 1/√2; −1/√2 1/√2] = UΣVᵀ

8. The eigenvalues of AᵀA are λ₁ = 16 and λ₂ = 9, with corresponding unit eigenvectors v₁ and v₂; thus the singular values of A are σ₁ = 4 and σ₂ = 3. Setting uᵢ = (1/σᵢ)Avᵢ yields the singular value decomposition A = UΣVᵀ.

9. Here a third vector u₃ must be chosen so that {u₁, u₂, u₃} is an orthonormal basis for R³; this results in a singular value decomposition A = UΣVᵀ in which Σ has a row of zeros.

For the eigenvalue-multiplicity exercise, the characteristic polynomial factors as (λ + 1)(λ − 3)²; thus λ = −1 is an eigenvalue of multiplicity 1 and λ = 3 is an eigenvalue of multiplicity 2. The vector v₁ forms a basis for the eigenspace corresponding to λ = −1, and the vectors v₂ and v₃ form a basis for the eigenspace corresponding to λ = 3.

Finally, if P is an orthogonal projection matrix of rank k, then PᵀP = P² = P has eigenvalues λ = 1 (with multiplicity k) and λ = 0 (with multiplicity n − k). Thus the singular values of P are σ₁ = 1, σ₂ = 1, …, σ_k = 1.
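The hand computation in Exercise 7 (singular values from the eigenvalues of AᵀA) agrees with numpy's SVD routine:

```python
import numpy as np

# Exercise 7's matrix: singular values are the square roots of the
# eigenvalues of A^T A.
A = np.array([[3.0, 3.0],
              [3.0, 3.0]])

lam = np.linalg.eigvalsh(A.T @ A)            # ascending: 0 and 36
sigmas = np.sqrt(np.maximum(lam[::-1], 0.0)) # guard tiny negative round-off
print(np.round(sigmas, 10))                  # [6. 0.]

U, s, Vt = np.linalg.svd(A)
print(np.allclose(A, U @ np.diag(s) @ Vt))   # True
print(np.isclose(s[0], 6.0))                 # True
```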

EXERCISE SET 8.7

1. We have AᵀA = [3 4][3; 4] = [25]; thus A⁺ = (AᵀA)⁻¹Aᵀ = (1/25)[3 4].

2. We have AᵀA = [6 6; 6 11]; thus the pseudoinverse of A is A⁺ = (AᵀA)⁻¹Aᵀ, where (AᵀA)⁻¹ = (1/30)[11 −6; −6 6].

3.-4. Similarly, A⁺ = (AᵀA)⁻¹Aᵀ in each case.

5. With A = [3; 4], so that A⁺ = [3/25 4/25]:

(a) AA⁺A = A.

(b) A⁺AA⁺ = A⁺.

(c) AA⁺ = (1/25)[9 12; 12 16] is symmetric; thus (AA⁺)ᵀ = AA⁺.

(d) A⁺A = [1] is symmetric; thus (A⁺A)ᵀ = A⁺A.

(e) The eigenvalues of AAᵀ = [9 12; 12 16] are λ₁ = 25 and λ₂ = 0, with corresponding unit eigenvectors v₁ = (1/5)[3; 4] and v₂ = (1/5)[−4; 3], respectively. The only singular value of Aᵀ is σ₁ = 5, and u₁ = (1/σ₁)Aᵀv₁ = [1]. This results in the singular value decomposition Aᵀ = UΣVᵀ with U = [1], Σ = [5 0], and V = [v₁ v₂]. The corresponding reduced singular value decomposition is Aᵀ = [3 4] = [1][5]((1/5)[3 4]) = U₁Σ₁V₁ᵀ, and from this we obtain (Aᵀ)⁺ = V₁Σ₁⁻¹U₁ᵀ = (1/5)[3; 4](1/5)[1] = [3/25; 4/25] = (A⁺)ᵀ.

(f) The eigenvalues of (A⁺)ᵀA⁺ are λ₁ = 1/25 and λ₂ = 0, with corresponding unit eigenvectors v₁ = (1/5)[3; 4] and v₂ = (1/5)[−4; 3], respectively. The only singular value of A⁺ is σ₁ = 1/5. The corresponding reduced singular value decomposition is A⁺ = [3/25 4/25] = [1][1/5]((1/5)[3 4]) = U₁Σ₁V₁ᵀ, and from this we obtain (A⁺)⁺ = V₁Σ₁⁻¹U₁ᵀ = (1/5)[3; 4](5)[1] = [3; 4] = A.
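The pseudoinverse formula A⁺ = (AᵀA)⁻¹Aᵀ used above (valid when A has full column rank) matches numpy's general-purpose `pinv`, and the Penrose identities of Exercise 5 hold:

```python
import numpy as np

# A = [3; 4] from Exercises 1 and 5; full column rank, so
# A+ = (A^T A)^(-1) A^T.
A = np.array([[3.0],
              [4.0]])

A_plus = np.linalg.inv(A.T @ A) @ A.T
print(A_plus)                                   # [[0.12 0.16]] = [3/25 4/25]
print(np.allclose(A_plus, np.linalg.pinv(A)))   # True

# Penrose conditions (a) and (b) from Exercise 5:
print(np.allclose(A @ A_plus @ A, A))           # True
print(np.allclose(A_plus @ A @ A_plus, A_plus)) # True
```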

6. (a) AA⁺A = A.

(b) A⁺AA⁺ = A⁺.

(c) AA⁺ is symmetric; thus (AA⁺)ᵀ = AA⁺.

(d) A⁺A is symmetric; thus (A⁺A)ᵀ = A⁺A.

(e) The eigenvalues of AAᵀ are λ₁ = 15, λ₂ = 2, and λ₃ = 0, with corresponding unit eigenvectors v₁, v₂, and v₃. The singular values of Aᵀ are σ₁ = √15 and σ₂ = √2. Setting uᵢ = (1/σᵢ)Aᵀvᵢ yields the singular value decomposition Aᵀ = UΣVᵀ, and the corresponding reduced decomposition gives (Aᵀ)⁺ = V₁Σ₁⁻¹U₁ᵀ = (A⁺)ᵀ.

(f) The nonzero eigenvalues of (A⁺)ᵀA⁺ are 1/15 and 1/2, with corresponding unit eigenvectors; proceeding as in Exercise 5(f), the reduced singular value decomposition of A⁺ gives (A⁺)⁺ = A.