Linear Algebra Solutions
Instructor's Solutions Manual for
ELEMENTARY LINEAR ALGEBRA, Sixth Edition
Ron Larson
Houghton Mifflin Harcourt Publishing Company
Boston / New York

Publisher: Richard Stratton
Senior Sponsoring Editor: Cathy Cantin
Senior Marketing Manager: Jennifer Jones
Assistant Editor: Janine Tangney
Associate Project Editor: Jill Clark
Editorial Assistant: Amy Haines
Copyright © 2009 by Houghton Mifflin Harcourt Publishing Company. All rights reserved. Houghton Mifflin Harcourt Publishing Company hereby grants you permission to reproduce the Houghton Mifflin Harcourt Publishing material contained in this work in classroom quantities, solely for use with the accompanying Houghton Mifflin Harcourt Publishing textbook. All reproductions must include the Houghton Mifflin Harcourt Publishing copyright notice, and no fee may be collected except to cover the cost of duplication. If you wish to make any other use of this material, including reproducing or transmitting the material or portions thereof in any form or by any electronic or mechanical means including any information storage or retrieval system, you must obtain prior written permission from Houghton Mifflin Harcourt Publishing Company, unless such use is expressly permitted by federal copyright law. If you wish to reproduce material acknowledging a rights holder other than Houghton Mifflin Harcourt Publishing Company, you must obtain permission from the rights holder. Address inquiries to College Permissions, Houghton Mifflin Harcourt Publishing Company, 222 Berkeley Street, Boston, MA 02116-3764.
ISBN 13: 978-0-547-19855-2 ISBN 10: 0-547-19855-8
PREFACE

This Student Solutions Manual is designed as a supplement to Elementary Linear Algebra, Sixth Edition, by Ron Larson and David C. Falvo. All references to chapters, theorems, and exercises relate to the main text. Solutions to every odd-numbered exercise in the text are given with all essential algebraic steps included.

Although this supplement is not a substitute for good study habits, it can be valuable when incorporated into a well-planned course of study. We have made every effort to see that the solutions are correct. However, we would appreciate hearing about any errors or other suggestions for improvement.

Good luck with your study of elementary linear algebra.

Ron Larson
Larson Texts, Inc.
CONTENTS
Chapter 1  Systems of Linear Equations
Chapter 2  Matrices
Chapter 3  Determinants
Chapter 4  Vector Spaces
Chapter 5  Inner Product Spaces
Chapter 6  Linear Transformations
Chapter 7  Eigenvalues and Eigenvectors
CHAPTER 1
Systems of Linear Equations

Section 1.1  Introduction to Systems of Linear Equations
Section 1.2  Gaussian Elimination and Gauss-Jordan Elimination
Section 1.3  Applications of Systems of Linear Equations
Review Exercises
CHAPTER 1
Systems of Linear Equations

Section 1.1  Introduction to Systems of Linear Equations

1. Because the equation is in the form a1x + a2y = b, it is linear in the variables x and y.

3. Because the equation cannot be written in the form a1x + a2y = b, it is not linear in the variables x and y.

5. Because the equation cannot be written in the form a1x + a2y = b, it is not linear in the variables x and y.
7. Choosing y as the free variable, let y = t and obtain
2x − 4t = 0
2x = 4t
x = 2t.
So, you can describe the solution set as x = 2t and y = t, where t is any real number.

9. Choosing y and z as the free variables, let y = s and z = t, and obtain x + s + t = 1, or x = 1 − s − t. So, you can describe the solution set as x = 1 − s − t, y = s, and z = t, where s and t are any real numbers.

11. From Equation 2 you have x2 = 3. Substituting this value into Equation 1 produces x1 − 3 = 2, or x1 = 5. So, the system has exactly one solution: x1 = 5 and x2 = 3.

13. From Equation 3 you can conclude that z = 0. Substituting this value into Equation 2 produces
2y + 0 = 3
y = 3/2.
Finally, by substituting y = 3/2 and z = 0 into Equation 1, you obtain x − 3/2 − 0 = 0, or x = 3/2. So, the system has exactly one solution: x = 3/2, y = 3/2, and z = 0.

15. Begin by rewriting the system in row-echelon form.
The equations are interchanged.
2x1 + x2 = 0
5x1 + 2x2 + x3 = 0
The first equation is multiplied by 1/2.
x1 + (1/2)x2 = 0
5x1 + 2x2 + x3 = 0
Adding −5 times the first equation to the second equation produces a new second equation.
x1 + (1/2)x2 = 0
−(1/2)x2 + x3 = 0
The second equation is multiplied by −2.
x1 + (1/2)x2 = 0
x2 − 2x3 = 0
To represent the solutions, choose x3 to be the free variable and represent it by the parameter t. Because x2 = 2x3 and x1 = −(1/2)x2, you can describe the solution set as x1 = −t, x2 = 2t, x3 = t, where t is any real number.

17. The system is
2x + y = 4
x − y = 2.
Adding the first equation to the second equation produces a new second equation, 3x = 6, or x = 2. So, y = 0, and the solution is x = 2, y = 0. This is the point where the two lines intersect.
(Graph: the lines 2x + y = 4 and x − y = 2 intersect at (2, 0).)
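The parametric answers above are easy to sanity-check by substitution. A minimal sketch, assuming the Exercise 15 system is 2x1 + x2 = 0 and 5x1 + 2x2 + x3 = 0 as the elimination steps suggest (the helper name is ours, not the text's):

```python
# Hypothetical helper: the parametric solution claimed for Exercise 15.
def ex15_solution(t):
    return (-t, 2 * t, t)

# Every value of the parameter t must satisfy both (assumed) equations.
for t in (-3.0, 0.0, 1.5, 7.0):
    x1, x2, x3 = ex15_solution(t)
    assert 2 * x1 + x2 == 0
    assert 5 * x1 + 2 * x2 + x3 == 0
```

The same loop, with the equations swapped in, checks any of the one-parameter families in this section.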
19. The system is
x − y = 1
−2x + 2y = 5.
Adding 2 times the first equation to the second equation produces a new second equation.
x − y = 1
0 = 7
Because the second equation is a false statement, you can conclude that the original system of equations has no solution. Geometrically, the two lines are parallel.
(Graph: the lines x − y = 1 and −2x + 2y = 5 are parallel.)

21. The system is
3x − 5y = 7
2x + y = 9.
Adding the first equation to 5 times the second equation produces a new second equation, 13x = 52, or x = 4. So, 2(4) + y = 9, or y = 1, and the solution is: x = 4, y = 1. This is the point where the two lines intersect.
(Graph: the lines 3x − 5y = 7 and 2x + y = 9 intersect at (4, 1).)

23. The system is
2x − y = 5
5x − y = 11.
Subtracting the first equation from the second equation produces a new second equation, 3x = 6, or x = 2. So, 2(2) − y = 5, or y = −1, and the solution is: x = 2, y = −1. This is the point where the two lines intersect.
(Graph: the lines 2x − y = 5 and 5x − y = 11 intersect at (2, −1).)

25. The system is
(x + 3)/4 + (y − 1)/3 = 1
2x − y = 12.
Multiplying the first equation by 12 produces a new first equation.
3x + 4y = 7
2x − y = 12
Adding the first equation to 4 times the second equation produces a new second equation, 11x = 55, or x = 5. So, 2(5) − y = 12, or y = −2, and the solution is: x = 5, y = −2. This is the point where the two lines intersect.
(Graph: the lines (x + 3)/4 + (y − 1)/3 = 1 and 2x − y = 12 intersect at (5, −2).)

27. The system is
0.05x − 0.03y = 0.07
0.07x + 0.02y = 0.16.
Multiplying the first equation by 200 and the second equation by 300 produces new equations.
10x − 6y = 14
21x + 6y = 48
Adding the first equation to the second equation produces a new second equation, 31x = 62, or x = 2. So, 10(2) − 6y = 14, or y = 1, and the solution is: x = 2, y = 1. This is the point where the two lines intersect.
(Graph: the lines 0.05x − 0.03y = 0.07 and 0.07x + 0.02y = 0.16 intersect at (2, 1).)
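Each 2 × 2 system in Exercises 19–27 can be cross-checked with Cramer's rule, where a zero determinant flags the parallel or coincident cases. A rough sketch (the function name is ours, not the text's):

```python
def solve_2x2(a, b, c, d, e, f):
    """Solve a*x + b*y = e and c*x + d*y = f by Cramer's rule.

    Returns (x, y), or None when the determinant is zero
    (parallel or coincident lines)."""
    det = a * d - b * c
    if det == 0:
        return None
    return ((e * d - b * f) / det, (a * f - e * c) / det)

assert solve_2x2(3, -5, 2, 1, 7, 9) == (4.0, 1.0)     # Exercise 21
assert solve_2x2(2, -1, 5, -1, 5, 11) == (2.0, -1.0)  # Exercise 23
assert solve_2x2(1, -1, -2, 2, 1, 5) is None          # Exercise 19: parallel
```

Cramer's rule is used here only as an independent check; the text's elimination steps remain the intended method.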
29. The system is
x/4 + y/6 = 1
x − y = 3.
Adding 6 times the first equation to the second equation produces a new second equation, (5/2)x = 9, or x = 18/5. So, 18/5 − y = 3, or y = 3/5, and the solution is: x = 18/5, y = 3/5. This is the point where the two lines intersect.
(Graph: the lines x/4 + y/6 = 1 and x − y = 3 intersect at (18/5, 3/5).)

31. (a) The system is
−3x − y = 3
6x + 2y = 1.
(Graph: the two lines are parallel.)
(b) The system is inconsistent.

33. (a) The system is
2x − 8y = 3
(1/2)x + y = 0.
(Graph: the two lines intersect near (1/2, −1/4).)
(b) The system is consistent.
(c) The solution is approximately x = 1/2, y = −1/4.
(d) Adding −1/4 times the first equation to the second equation produces the new second equation 3y = −3/4, or y = −1/4. So, x = 1/2, and the solution is x = 1/2, y = −1/4.
(e) The solutions in (c) and (d) are the same.

35. (a) The system is
4x − 8y = 9
0.8x − 1.6y = 1.8.
(Graph: the two lines coincide.)
(b) The system is consistent.
(c) There are infinitely many solutions.
(d) The second equation is the result of multiplying both sides of the first equation by 0.2. A parametric representation of the solution set is given by x = 9/4 + 2t, y = t, where t is any real number.
(e) The solutions in (c) and (d) are consistent.

37. Adding −3 times the first equation to the second equation produces a new second equation.
x1 − x2 = 0
x2 = −1
Now, using back-substitution you can conclude that the system has exactly one solution: x1 = −1 and x2 = −1.

39. Interchanging the two equations produces the system
u + 2v = 120
2u + v = 120.
Adding −2 times the first equation to the second equation produces a new second equation.
u + 2v = 120
−3v = −120
Solving the second equation, you have v = 40. Substituting this value into the first equation gives u + 80 = 120, or u = 40. So, the system has exactly one solution: u = 40 and v = 40.

41. Dividing the first equation by 9 produces a new first equation.
x − (1/3)y = −1/9
(1/5)x + (2/5)y = −1/3
Adding −1/5 times the first equation to the second equation produces a new second equation.
x − (1/3)y = −1/9
(7/15)y = −14/45
Multiplying the second equation by 15/7 produces a new second equation.
x − (1/3)y = −1/9
y = −2/3
Now, using back-substitution, you can substitute y = −2/3 into the first equation to obtain x + 2/9 = −1/9, or x = −1/3. So, you can conclude that the system has exactly one solution: x = −1/3 and y = −2/3.
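The fraction arithmetic in Exercise 41 can be replayed exactly. A small sketch, assuming the original system is 9x − 3y = −1 and (1/5)x + (2/5)y = −1/3 as inferred from the elimination steps:

```python
from fractions import Fraction as F

# Exercise 41 with exact arithmetic (system as reconstructed above).
# After dividing the first equation by 9:  x - y/3 = -1/9
# Adding -1/5 times it to the second:      (7/15)y = -14/45
y = F(-14, 45) / F(7, 15)
x = F(-1, 9) + F(1, 3) * y
assert y == F(-2, 3) and x == F(-1, 3)

# Both (assumed) original equations hold exactly.
assert 9 * x - 3 * y == -1
assert F(1, 5) * x + F(2, 5) * y == F(-1, 3)
```

Using `fractions.Fraction` avoids the rounding that floating point would introduce in 7/15 and −14/45.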
43. To begin, change the form of the first equation.
(1/2)x + (1/3)y = 23/6
x − 2y = 5
Multiplying the first equation by 2 yields a new first equation.
x + (2/3)y = 23/3
x − 2y = 5
Subtracting the first equation from the second equation yields a new second equation.
x + (2/3)y = 23/3
−(8/3)y = −8/3
Dividing the second equation by −8/3 yields a new second equation.
x + (2/3)y = 23/3
y = 1
Now, using back-substitution you can conclude that the system has exactly one solution: x = 7 and y = 1.

45. Multiplying the first equation by 50 and the second equation by 100 produces a new system.
x1 − 2.5x2 = −9.5
3x1 + 4x2 = 52
Adding −3 times the first equation to the second equation produces a new second equation.
x1 − 2.5x2 = −9.5
11.5x2 = 80.5
Now, using back-substitution, you can conclude that the system has exactly one solution: x1 = 8 and x2 = 7.

47. Adding −2 times the first equation to the second equation yields a new second equation.
x + y + z = 6
−3y − z = −9
3x − z = 0
Adding −3 times the first equation to the third equation yields a new third equation.
x + y + z = 6
−3y − z = −9
−3y − 4z = −18
Dividing the second equation by −3 yields a new second equation.
x + y + z = 6
y + (1/3)z = 3
−3y − 4z = −18
Adding 3 times the second equation to the third equation yields a new third equation.
x + y + z = 6
y + (1/3)z = 3
−3z = −9
Dividing the third equation by −3 yields a new third equation.
x + y + z = 6
y + (1/3)z = 3
z = 3
Now, using back-substitution you can conclude that the system has exactly one solution: x = 1, y = 2, and z = 3.

49. Dividing the first equation by 3 yields a new first equation.
x1 − (2/3)x2 + (4/3)x3 = 1/3
x1 + x2 − 2x3 = 3
2x1 − 3x2 + 6x3 = 8
Subtracting the first equation from the second equation yields a new second equation.
x1 − (2/3)x2 + (4/3)x3 = 1/3
(5/3)x2 − (10/3)x3 = 8/3
2x1 − 3x2 + 6x3 = 8
Adding −2 times the first equation to the third equation yields a new third equation.
x1 − (2/3)x2 + (4/3)x3 = 1/3
(5/3)x2 − (10/3)x3 = 8/3
−(5/3)x2 + (10/3)x3 = 22/3
At this point you should recognize that Equations 2 and 3 cannot both be satisfied (adding them gives 0 = 10). So, the original system of equations has no solution.
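The elimination in Exercises 43–49 can be replayed mechanically with exact rational arithmetic. A small sketch (`gauss_solve` is our name; the Exercise 47 system is taken as x + y + z = 6, 2x − y + z = 3, 3x − z = 0, as inferred from the steps above):

```python
from fractions import Fraction as F

def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with back-substitution.

    A sketch for the small square systems in this section; assumes a
    nonzero pivot is always available by row interchange."""
    n = len(A)
    M = [[F(v) for v in row] + [F(bi)] for row, bi in zip(A, b)]
    for i in range(n):
        p = next(r for r in range(i, n) if M[r][i] != 0)
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            M[r] = [a - f * c for a, c in zip(M[r], M[i])]
    x = [F(0)] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# Exercise 47 (assumed system): x + y + z = 6, 2x - y + z = 3, 3x - z = 0
assert gauss_solve([[1, 1, 1], [2, -1, 1], [3, 0, -1]], [6, 3, 0]) == [1, 2, 3]
```

The `next(...)` pivot search would raise `StopIteration` on a singular system such as Exercise 49, which is one way to detect that no unique solution exists.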
51. Dividing the first equation by 2 yields a new first equation.
x1 + (1/2)x2 − (3/2)x3 = 2
4x1 + 2x3 = 10
−2x1 + 3x2 − 13x3 = −8
Adding −4 times the first equation to the second equation produces a new second equation.
x1 + (1/2)x2 − (3/2)x3 = 2
−2x2 + 8x3 = 2
−2x1 + 3x2 − 13x3 = −8
Adding 2 times the first equation to the third equation produces a new third equation.
x1 + (1/2)x2 − (3/2)x3 = 2
−2x2 + 8x3 = 2
4x2 − 16x3 = −4
Dividing the second equation by −2 yields a new second equation.
x1 + (1/2)x2 − (3/2)x3 = 2
x2 − 4x3 = −1
4x2 − 16x3 = −4
Adding −4 times the second equation to the third equation produces a new third equation.
x1 + (1/2)x2 − (3/2)x3 = 2
x2 − 4x3 = −1
0 = 0
Adding −1/2 times the second equation to the first equation produces a new first equation.
x1 + (1/2)x3 = 5/2
x2 − 4x3 = −1
Choosing x3 = t as the free variable, you can describe the solution as x1 = 5/2 − (1/2)t, x2 = 4t − 1, and x3 = t, where t is any real number.

53. Adding −5 times the first equation to the second equation yields a new second equation.
x − 3y + 2z = 18
0 = −72
Because the second equation is a false statement, you can conclude that the original system of equations has no solution.

55. Adding −2 times the first equation to the second, 3 times the first equation to the third, and −1 times the first equation to the fourth, produces
x + y + z + w = 6
y − 2z − 3w = −12
7y + 4z + 5w = 22
y − 2z = −6.
Adding −7 times the second equation to the third, and −1 times the second equation to the fourth, produces
x + y + z + w = 6
y − 2z − 3w = −12
18z + 26w = 106
3w = 6.
Using back-substitution, you find the original system has exactly one solution: x = 1, y = 0, z = 3, and w = 2.

Answers may vary slightly for Exercises 57–63.

57. Using a computer software program or graphing utility, you obtain x1 = −15, x2 = 40, x3 = 45, x4 = −75.

59. Using a computer software program or graphing utility, you obtain x = −1.2, y = −0.6, z = 2.4.

61. Using a computer software program or graphing utility, you obtain x1 = 1/5, x2 = −5/4, x3 = 1/2.

63. Using a computer software program or graphing utility, you obtain x = 6.8813, y = −163.3111, z = −210.2915, w = −59.2913.
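As a spot-check, the claimed solution of Exercise 55, (x, y, z, w) = (1, 0, 3, 2), can be substituted into the triangular system produced by the elimination above. Treat the exact coefficients as an assumption pieced together from the steps:

```python
# Substitute the claimed solution of Exercise 55 into the reduced system
# (a consistency check, not a derivation; coefficients as reconstructed).
x, y, z, w = 1, 0, 3, 2
assert x + y + z + w == 6
assert y - 2 * z - 3 * w == -12
assert 18 * z + 26 * w == 106
assert 3 * w == 6
```

Since row operations preserve the solution set, passing these checks also confirms the solution against the original system.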
65. x = y = z = 0 is clearly a solution.
Dividing the first equation by 4 yields a new first equation.
x + (3/4)y + (17/4)z = 0
5x + 4y + 22z = 0
4x + 2y + 19z = 0
Adding −5 times the first equation to the second equation yields a new second equation.
x + (3/4)y + (17/4)z = 0
(1/4)y + (3/4)z = 0
4x + 2y + 19z = 0
Adding −4 times the first equation to the third equation yields a new third equation.
x + (3/4)y + (17/4)z = 0
(1/4)y + (3/4)z = 0
−y + 2z = 0
Multiplying the second equation by 4 yields a new second equation.
x + (3/4)y + (17/4)z = 0
y + 3z = 0
−y + 2z = 0
Adding the second equation to the third equation yields a new third equation.
x + (3/4)y + (17/4)z = 0
y + 3z = 0
5z = 0
Dividing the third equation by 5 yields a new third equation.
x + (3/4)y + (17/4)z = 0
y + 3z = 0
z = 0
Now, using back-substitution, you can conclude that the system has exactly one solution: x = 0, y = 0, and z = 0.

67. x = y = z = 0 is clearly a solution.
Dividing the first equation by 5 yields a new first equation.
x + y − (1/5)z = 0
10x + 5y + 2z = 0
5x + 15y − 9z = 0
Adding −10 times the first equation to the second equation yields a new second equation.
x + y − (1/5)z = 0
−5y + 4z = 0
5x + 15y − 9z = 0
Adding −5 times the first equation to the third equation yields a new third equation.
x + y − (1/5)z = 0
−5y + 4z = 0
10y − 8z = 0
Dividing the second equation by −5 yields a new second equation.
x + y − (1/5)z = 0
y − (4/5)z = 0
10y − 8z = 0
Adding −10 times the second equation to the third equation yields a new third equation.
x + y − (1/5)z = 0
y − (4/5)z = 0
0 = 0
Adding −1 times the second equation to the first equation yields a new first equation.
x + (3/5)z = 0
y − (4/5)z = 0
Choosing z = t as the free variable, you find the solution to be x = −(3/5)t, y = (4/5)t, and z = t, where t is any real number.

69. (a) True. You can describe the entire solution set using a parametric representation. For
ax + by = c
(with a ≠ 0), choosing y = t as the free variable gives the solution x = c/a − (b/a)t, y = t, where t is any real number.
(b) False. For example, consider the system
x1 + x2 + x3 = 1
x1 + x2 + x3 = 2
which is an inconsistent system.
(c) False. A consistent system may have only one solution.
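Exercise 67's one-parameter family can be substituted back into the original homogeneous system. A quick check, assuming the system is 5x + 5y − z = 0, 10x + 5y + 2z = 0, 5x + 15y − 9z = 0 as reconstructed from the elimination steps:

```python
# For every t, (x, y, z) = (-3t/5, 4t/5, t) should satisfy all three
# (assumed) equations of Exercise 67.
for t in (-10, 0, 5, 35):
    x, y, z = -3 * t / 5, 4 * t / 5, t
    assert 5 * x + 5 * y - z == 0
    assert 10 * x + 5 * y + 2 * z == 0
    assert 5 * x + 15 * y - 9 * z == 0
```

The chosen t values are multiples of 5 where helpful, so the floating-point arithmetic stays exact.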
71. Because x1 = t and x2 = 3t − 4 = 3x1 − 4, one answer is the system
3x1 − x2 = 4
−3x1 + x2 = −4.
Letting x2 = t, you get x1 = (4 + t)/3 = 4/3 + t/3.

73. Substituting A = 1/x and B = 1/y into the original system yields
12A − 12B = 7
3A + 4B = 0.
Reduce this system to row-echelon form. Dividing the first equation by 12 yields a new first equation.
A − B = 7/12
3A + 4B = 0
Adding −3 times the first equation to the second equation yields a new second equation.
A − B = 7/12
7B = −7/4
Dividing the second equation by 7 yields a new second equation.
A − B = 7/12
B = −1/4
So, A = 1/3 and B = −1/4. Because A = 1/x and B = 1/y, the solution of the original system of equations is: x = 3 and y = −4.

75. Substituting A = 1/x, B = 1/y, and C = 1/z into the original system yields
2A + B − 3C = 4
4A + 2C = 10
−2A + 3B − 13C = −8.
Reduce this system to row-echelon form.
A + (1/2)B − (3/2)C = 2
−2B + 8C = 2
4B − 16C = −4
A + (1/2)B − (3/2)C = 2
B − 4C = −1
Letting C = t be the free variable, you have C = t, B = 4t − 1, and A = (−t + 5)/2. So, the solution of the original problem is
x = 2/(5 − t), y = 1/(4t − 1), z = 1/t, where t ≠ 5, 1/4, 0.

77. Reduce the system to row-echelon form. Dividing the first equation by cos θ yields a new first equation.
x + (sin θ/cos θ)y = 1/cos θ
(−sin θ)x + (cos θ)y = 0
Multiplying the first equation by sin θ and adding it to the second equation yields a new second equation.
x + (sin θ/cos θ)y = 1/cos θ
(1/cos θ)y = sin θ/cos θ
(Because sin²θ/cos θ + cos θ = (sin²θ + cos²θ)/cos θ = 1/cos θ.)
Multiplying the second equation by cos θ yields a new second equation.
x + (sin θ/cos θ)y = 1/cos θ
y = sin θ
Substituting y = sin θ into the first equation yields
x + (sin θ/cos θ)sin θ = 1/cos θ
x = (1 − sin²θ)/cos θ = cos²θ/cos θ = cos θ.
So, the solution of the original system of equations is: x = cos θ and y = sin θ.

79. For this system to have an infinite number of solutions, both equations need to be equivalent. Multiply the second equation by −2.
4x + ky = 6
−2kx − 2y = 6
So, when k = −2 the system will have an infinite number of solutions.

81. Reduce the system to row-echelon form.
x + ky = 0
(1 − k²)y = 0
If 1 − k² ≠ 0, then y = 0, and therefore x = 0. So, if 1 − k² ≠ 0, that is, if k ≠ ±1, the system will have exactly one solution.
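Exercise 77's answer can be spot-checked numerically at a few angles. This assumes the original system is (cos θ)x + (sin θ)y = 1 and (−sin θ)x + (cos θ)y = 0, as the elimination steps indicate:

```python
import math

# Claimed solution of Exercise 77: x = cos(t), y = sin(t).
# Floating point, so compare against a small tolerance.
for t in (0.3, 1.0, 2.5):
    x, y = math.cos(t), math.sin(t)
    assert abs(math.cos(t) * x + math.sin(t) * y - 1) < 1e-12
    assert abs(-math.sin(t) * x + math.cos(t) * y) < 1e-12
```

The first assertion is just cos²t + sin²t = 1 in disguise, which mirrors the identity the written solution relies on.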
83. To begin, reduce the system to row-echelon form.
x + 2y + kz = 6
(8 − 3k)z = −14
This system will have no solution if 8 − 3k = 0, that is, if k = 8/3.

85. Reducing the system to row-echelon form, you have
x + y + kz = 3
(k − 1)y + (1 − k)z = −1
(1 − k)y + (1 − k²)z = 1 − 3k.
Adding the second equation to the third equation produces a new third equation.
x + y + kz = 3
(k − 1)y + (1 − k)z = −1
(−k² − k + 2)z = −3k
If −k² − k + 2 = 0, then there is no solution. So, if k = 1 or k = −2, there is not a unique solution.

87. (a) All three of the lines will intersect in exactly one point (corresponding to the solution point).
(b) All three of the lines will coincide (every point on these lines is a solution point).
(c) The three lines have no common point.

89. Answers vary. (Hint: Choose three different values for x and solve the resulting system of linear equations in the variables a, b, and c.)

91. The original system is
x − 4y = −3
5x − 6y = 13.
Adding −5 times the first equation to the second equation produces
x − 4y = −3
14y = 28.
Dividing the second equation by 14 produces
x − 4y = −3
y = 2.
Adding 4 times the second equation to the first equation produces
x = 5
y = 2.
(Graphs: the pair of lines is drawn at each stage. At each step, the lines always intersect at (5, 2), which is the solution to the system of equations.)

93. Solve each equation for y.
y = (1/100)x + 2
y = (1/99)x − 2
The graphs are misleading because, while they appear parallel, when the equations are solved for y they have slightly different slopes.
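Exercise 93's point can be made concrete: the lines are not parallel, and exact arithmetic shows just how far away their intersection is (which is why a small viewing window makes them look parallel):

```python
from fractions import Fraction as F

# y = x/100 + 2 and y = x/99 - 2 meet where x/99 - x/100 = 4.
x = F(4) / (F(1, 99) - F(1, 100))
y = x / 100 + 2
assert x == 39600 and y == 398
# The second line gives the same y at that x.
assert x / 99 - 2 == y
```

So the intersection point (39600, 398) lies far outside any ordinary plotting window.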
Section 1.2  Gaussian Elimination and Gauss-Jordan Elimination

1. Because the matrix has 3 rows and 3 columns, it has size 3 × 3.

3. Because the matrix has 2 rows and 4 columns, it has size 2 × 4.

5. Because the matrix has 1 row and 5 columns, it has size 1 × 5.

7. Because the matrix has 4 rows and 5 columns, it has size 4 × 5.

9. The matrix satisfies all three conditions in the definition of row-echelon form. Moreover, because each column that has a leading 1 (columns one and two) has zeros elsewhere, the matrix is in reduced row-echelon form.

11. Because the matrix has two non-zero rows without leading 1's, it is not in row-echelon form.

13. Because the matrix has a non-zero row without a leading 1, it is not in row-echelon form.

15. Because the matrix is in reduced row-echelon form, convert back to a system of linear equations.
x1 = 0
x2 = 2
So, the solution is: x1 = 0 and x2 = 2.
17. Because the matrix is in row-echelon form, convert back to a system of linear equations.
x1 − x2 = 3
x2 − 2x3 = 1
x3 = −1
Solve this system by back-substitution.
x2 − 2(−1) = 1
x2 = −1
Substituting x2 = −1 into equation 1,
x1 − (−1) = 3
x1 = 2.
So, the solution is: x1 = 2, x2 = −1, and x3 = −1.

19. Interchange the first and second rows.
[1 −1 1 0]
[2 1 −1 3]
[0 1 2 1]
Interchange the second and third rows.
[1 −1 1 0]
[0 1 2 1]
[2 1 −1 3]
Add −2 times the first row to the third row to produce a new third row.
[1 −1 1 0]
[0 1 2 1]
[0 3 −3 3]
Add −3 times the second row to the third row to produce a new third row.
[1 −1 1 0]
[0 1 2 1]
[0 0 −9 0]
Divide the third row by −9 to produce a new third row.
[1 −1 1 0]
[0 1 2 1]
[0 0 1 0]
Convert back to a system of linear equations.
x1 − x2 + x3 = 0
x2 + 2x3 = 1
x3 = 0
Solve this system by back-substitution.
x2 = 1 − 2x3 = 1 − 2(0) = 1
x1 = x2 − x3 = 1 − 0 = 1
So, the solution is: x1 = 1, x2 = 1, and x3 = 0.

21. Because the matrix is in row-echelon form, convert back to a system of linear equations.
x1 + 2x2 + x4 = 4
x2 + 2x3 + x4 = 3
x3 + 2x4 = 1
x4 = 4
Solve this system by back-substitution.
x3 = 1 − 2x4 = 1 − 2(4) = −7
x2 = 3 − 2x3 − x4 = 3 − 2(−7) − 4 = 13
x1 = 4 − 2x2 − x4 = 4 − 2(13) − 4 = −26
So, the solution is: x1 = −26, x2 = 13, x3 = −7, and x4 = 4.

23. The augmented matrix for this system is
[1 2 7]
[2 1 8].
Adding −2 times the first row to the second row yields a new second row.
[1 2 7]
[0 −3 −6]
Dividing the second row by −3 yields a new second row.
[1 2 7]
[0 1 2]
Converting back to a system of linear equations produces
x + 2y = 7
y = 2.
Finally, using back-substitution you find that x = 3 and y = 2.

25. The augmented matrix for this system is
[−1 2 1.5]
[ 2 −4 3].
Gaussian elimination produces the following.
[−1 2 1.5] ⇒ [1 −2 −3/2] ⇒ [1 −2 −3/2]
[ 2 −4 3]    [2 −4 3]      [0 0 6]
Because the second row of this matrix corresponds to the equation 0 = 6, you can conclude that the original system has no solution.
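The two row operations of Exercise 23 can be replayed exactly with rational arithmetic, which makes the back-substitution mechanical:

```python
from fractions import Fraction as F

# Exercise 23 on the augmented matrix [[1, 2, 7], [2, 1, 8]],
# using the same two row operations as the written solution.
M = [[F(1), F(2), F(7)], [F(2), F(1), F(8)]]
M[1] = [a - 2 * b for a, b in zip(M[1], M[0])]   # R2 <- R2 - 2*R1
M[1] = [a / F(-3) for a in M[1]]                 # R2 <- R2 / (-3)
y = M[1][2]
x = M[0][2] - M[0][1] * y
assert (x, y) == (3, 2)
```

After the two operations, the matrix is [[1, 2, 7], [0, 1, 2]], exactly the row-echelon form shown above.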
Section 1.2 27. The augmented matrix for this system is ⎡−3 5 −22⎤ ⎢ ⎥ 4⎥. ⎢ 3 4 ⎢ 4 −8 32⎥ ⎣ ⎦
Dividing the first row by −3 yields a new first row. ⎡ 1 − 5 22 ⎤ 3 3 ⎢ ⎥ 4 4⎥ ⎢3 ⎢4 −8 32⎥ ⎣ ⎦
Adding −3 times the first row to the second row yields a new second row. 22 ⎤ 3
⎡1 ⎢ ⎥ 9 −18⎥ ⎢0 ⎢4 −8 32⎥ ⎣ ⎦ − 53
Adding −4 times the first row to the third row yields a new third row. 22 ⎤ ⎡1 − 5 3 3 ⎢ ⎥ 9 −18⎥ ⎢0 ⎢0 − 4 8⎥ 3 3⎦ ⎣
Dividing the second row by 9 yields a new second row. ⎡ 1 − 5 22 ⎤ 3 3 ⎢ ⎥ 1 −2⎥ ⎢0 ⎢0 − 4 8⎥ 3 3⎦ ⎣
Adding
4 3
times the second row to the third row
yields a new third row. ⎡ 1 − 5 22 ⎤ 3 3 ⎢ ⎥ 1 −2⎥ ⎢0 ⎢0 0 0⎥⎦ ⎣
Converting back to a system of linear equations produces x −
5y 3
=
22 3
y = −2.
Finally, using back-substitution you find that the solution is: x = 4 and y = −2.
Gaussian Elimination and Gauss-Jordan Elimination
11
29. The augmented matrix for this system is ⎡ 1 0 −3 −2⎤ ⎢ 3 1 −2 5⎥⎥. ⎢ ⎢⎣2 2 1 4⎥⎦ Gaussian elimination produces the following. ⎡ 1 0 −3 −2⎤ ⎡ 1 0 −3 −2⎤ ⎡ 1 0 −3 −2⎤ ⎢ 3 1 −2 5⎥ ⇒ ⎢0 1 7 11⎥ ⇒ ⎢0 1 7 11⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢⎣2 2 1 4⎥⎦ ⎢⎣0 2 7 8⎥⎦ ⎢⎣0 0 −7 −14⎥⎦ Back substitution now yields x3 = 2
x2 = 11 − 7 x3 = 11 − (7)2 = −3 x1 = −2 + 3 x3 = −2 + 3( 2) = 4.
So, the solution is: x1 = 4, x2 = −3, and x3 = 2. 31. The augmented matrix for this system is ⎡ 1 1 −5 3⎤ ⎢ 1 0 −2 1⎥. ⎢ ⎥ ⎣⎢2 −1 −1 0⎥⎦ Subtracting the first row from the second row yields a new second row. ⎡ 1 1 −5 3⎤ ⎢0 −1 3 −2⎥ ⎢ ⎥ ⎢⎣2 −1 −1 0⎥⎦ Adding −2 times the first row to the third row yields a new third row. 3⎤ ⎡ 1 1 −5 ⎢0 −1 3 −2⎥ ⎢ ⎥ ⎢⎣0 −3 9 −6⎥⎦ Multiplying the second row by −1 yields a new second row. 3⎤ ⎡ 1 1 −5 ⎢0 1 −3 2⎥⎥ ⎢ ⎢⎣0 −3 9 −6⎥⎦ Adding 3 times the second row to the third row yields a new third row. ⎡ 1 1 −5 3⎤ ⎢0 1 −3 2⎥ ⎢ ⎥ ⎣⎢0 0 0 0⎥⎦ Adding −1 times the second row to the first row yields a new first row.
⎡ 1 0 −2 1⎤ ⎢0 1 −3 2⎥ ⎢ ⎥ ⎢⎣0 0 0 0⎥⎦ Converting back to a system of linear equations produces x1 − 2 x3 = 1 x2 − 3 x3 = 2. Finally, choosing x3 = t as the free variable you can describe the solution as x1 = 1 + 2t , x2 = 2 + 3t , and x3 = t , where t is any real number. Copyright © Houghton Mifflin Harcourt Publishing Company. All rights reserved.
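Exercise 31's one-parameter family can be substituted back into the original system, whose augmented rows are [1 1 −5 | 3], [1 0 −2 | 1], and [2 −1 −1 | 0]:

```python
# For every t, (x1, x2, x3) = (1 + 2t, 2 + 3t, t) should satisfy
# all three equations of Exercise 31.
for t in (-2, 0, 1, 4):
    x1, x2, x3 = 1 + 2 * t, 2 + 3 * t, t
    assert x1 + x2 - 5 * x3 == 3
    assert x1 - 2 * x3 == 1
    assert 2 * x1 - x2 - x3 == 0
```

Because every coefficient of t cancels in each equation, the checks pass for all t, confirming an infinite solution set.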
33. The augmented matrix for this system is
[4 12 −7 −20 22]
[3 9 −5 −28 30].
Dividing the first row by 4 yields a new first row.
[1 3 −7/4 −5 11/2]
[3 9 −5 −28 30]
Adding −3 times the first row to the second row yields a new second row.
[1 3 −7/4 −5 11/2]
[0 0 1/4 −13 27/2]
Multiplying the second row by 4 yields a new second row.
[1 3 −7/4 −5 11/2]
[0 0 1 −52 54]
Converting back to a system of linear equations produces
x + 3y − (7/4)z − 5w = 11/2
z − 52w = 54.
Choosing y = s and w = t as the free variables, you can describe the solution as x = 100 − 3s + 96t, y = s, z = 54 + 52t, and w = t, where s and t are any real numbers.

35. The augmented matrix for this system is
[ 3 3 12 6]
[ 1 1 4 2]
[ 2 5 20 10]
[−1 2 8 4].
Dividing the first row by 3 and eliminating the first column produces
[1 1 4 2]
[0 0 0 0]
[0 3 12 6]
[0 3 12 6].
Interchanging rows, dividing the new second row by 3, and eliminating gives the reduced row-echelon form
[1 0 0 0]
[0 1 4 2]
[0 0 0 0]
[0 0 0 0].
Letting z = t be the free variable, the solution is: x = 0, y = 2 − 4t, z = t, where t is any real number.

37. Using a computer software program or graphing utility, the augmented matrix reduces to
[1 0 0 −0.5278 23.5361]
[0 1 0 −4.1111 18.5444]
[0 0 1 −2.1389 7.4306].
Letting x4 = t be the free variable, you obtain
x1 = 23.5361 + 0.5278t
x2 = 18.5444 + 4.1111t
x3 = 7.4306 + 2.1389t
x4 = t, where t is any real number.

39. Using a computer software program or graphing utility, the augmented matrix reduces to
[1 0 0 0 0 2]
[0 1 0 0 0 −2]
[0 0 1 0 0 3]
[0 0 0 1 0 −5]
[0 0 0 0 1 1].
So, the solution is: x1 = 2, x2 = −2, x3 = 3, x4 = −5, and x5 = 1.

41. Using a computer software program or graphing utility, the augmented matrix reduces to
[1 0 0 0 0 0 1]
[0 1 0 0 0 0 −2]
[0 0 1 0 0 0 6]
[0 0 0 1 0 0 −3]
[0 0 0 0 1 0 −4]
[0 0 0 0 0 1 3].
So, the solution is: x1 = 1, x2 = −2, x3 = 6, x4 = −3, x5 = −4, and x6 = 3.

43. The corresponding system of equations is
x1 = 0
x2 + x3 = 0
0 = 0.
Letting x3 = t be the free variable, the solution is: x1 = 0, x2 = −t, x3 = t, where t is any real number.

45. The corresponding system of equations is
x1 + x4 = 0
x3 = 0
0 = 0.
Letting x4 = t and x2 = s be the free variables, the solution is: x1 = −t, x2 = s, x3 = 0, x4 = t, where s and t are any real numbers.
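Exercise 33's two-parameter family can likewise be substituted back into the original pair of equations, 4x + 12y − 7z − 20w = 22 and 3x + 9y − 5z − 28w = 30, read off the augmented matrix above:

```python
# For every s and t, (x, y, z, w) = (100 - 3s + 96t, s, 54 + 52t, t)
# should satisfy both equations of Exercise 33.
for s in (0, 2, -5):
    for t in (0, -1, 3):
        x, y, z, w = 100 - 3 * s + 96 * t, s, 54 + 52 * t, t
        assert 4 * x + 12 * y - 7 * z - 20 * w == 22
        assert 3 * x + 9 * y - 5 * z - 28 * w == 30
```

Each parameter's coefficient cancels in both equations (for example, 4·96 − 7·52 − 20 = 0), which is exactly what makes s and t free.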
47. (a) Because the augmented matrix has two rows and three columns, there are two equations and two variables.
(b) Gaussian elimination produces the following.
[ 1 k 2] ⇒ [1 k 2]        ⇒ [1 k 2]
[−3 4 1]   [0 4 + 3k 7]     [0 1 7/(4 + 3k)]
The system is consistent if 4 + 3k ≠ 0, or if k ≠ −4/3.
(c) Because the augmented matrix has two rows and four columns, there are two equations and three variables.
(d) Gaussian elimination produces the following.
[ 1 k 2 0] ⇒ [1 k 2 0]      ⇒ [1 k 2 0]
[−3 4 1 0]   [0 4 + 3k 7 0]   [0 1 7/(4 + 3k) 0]
This requires 4 + 3k ≠ 0, or k ≠ −4/3. But if −4/3 is substituted for k in the original matrix, Gaussian elimination produces the following.
[ 1 −4/3 2 0] ⇒ [1 −4/3 2 0]
[−3 4 1 0]      [0 0 7 0]
The system is still consistent. So, the original system is consistent for all real k.

49. Begin by forming the augmented matrix for the system
[1 1 0 2]
[0 1 1 2]
[1 0 1 2]
[a b c 0].
Then use Gauss-Jordan elimination as follows. Subtracting the first row from the third row, and adding −a times the first row to the fourth row, gives
[1 1 0 2]
[0 1 1 2]
[0 −1 1 0]
[0 b − a c −2a].
Adding the second row to the third row, and (a − b) times the second row to the fourth row, gives
[1 1 0 2]
[0 1 1 2]
[0 0 2 2]
[0 0 a − b + c −2b].
Dividing the third row by 2, back-substituting to clear the third and second columns, and simplifying the fourth row gives
[1 0 0 1]
[0 1 0 1]
[0 0 1 1]
[0 0 0 a + b + c].
Converting back to a system of linear equations:
x = 1
y = 1
z = 1
0 = a + b + c.
The system (a) will have a unique solution if a + b + c = 0, (b) will have no solution if a + b + c ≠ 0, and (c) cannot have an infinite number of solutions.
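Exercise 47(b) can be checked by solving the reduced system symbolically for a sample k and testing the boundary value. A sketch with exact arithmetic (the function name is ours):

```python
from fractions import Fraction as F

# Exercise 47(b): x + k*y = 2, -3x + 4y = 1, consistent iff 4 + 3k != 0.
def solve_47(k):
    det = 4 + 3 * k
    if det == 0:
        return None            # the rows conflict at k = -4/3... for part (b)
    y = F(7, 1) / det          # from the elimination step (4 + 3k)y = 7
    x = 2 - k * y
    return x, y

x, y = solve_47(F(1))          # sample k = 1
assert x + y == 2 and -3 * x + 4 * y == 1
assert solve_47(F(-4, 3)) is None
```

Note that part (d) shows the homogeneous right-hand side behaves differently: with zeros on the right, even k = −4/3 leaves the system consistent.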
51. Solve each pair of equations by Gaussian elimination as follows.
(a) Equations 1 and 2: 5 ⎡1 0 ⎡4 −2 5 16⎤ 6 ⎢ ⎥ ⇒ ⎢ 5 1 0 0⎦ ⎢⎣0 1 − 6 ⎣1
8⎤ 3 ⎥ − 83 ⎥⎦
⇒ x =
8 3
− 56 t ,
y = − 83 + 56 t , z = t
(b) Equations 1 and 3: 11 ⎡1 0 ⎡ 4 −2 5 16⎤ 14 ⎢ ⎥ ⇒ ⎢ 13 ⎢⎣0 1 − 14 ⎣−1 −3 2 6⎦
36 ⎤ 14 ⎥ 40 − 14 ⎦⎥
⇒ x =
18 7
−
11 t , 14
+ y = − 20 7
13 t , 14
z = t
(c) Equations 2 and 3: ⎡ 1 1 0 0⎤ ⎡1 0 1 3 ⎤ ⎢ ⎥ ⇒ ⎢ ⎥ ⇒ x = 3 − t, ⎣−1 −3 2 6⎦ ⎣0 1 −1 −3⎦ y = −3 + t , z = t (d) Each of these systems has an infinite number of solutions. 53. Use Gauss-Jordan elimination as follows.
⎡ 1 2⎤ ⎡1 2⎤ ⎡1 2⎤ ⎡1 0⎤ ⎢ ⎥ ⇒ ⎢ ⎥ ⇒ ⎢ ⎥ ⇒ ⎢ ⎥ − 1 2 0 4 0 1 ⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎣0 1⎦ 55. Begin by finding all possible first rows.
[0
0], [0 1], [1 0], [1 k ].
For each of these, examine the possible second rows. ⎡0 0⎤ ⎡0 1⎤ ⎡1 0⎤ ⎡1 k ⎤ ⎢ ⎥, ⎢ ⎥, ⎢ ⎥, ⎢ ⎥ ⎣0 0⎦ ⎣0 0⎦ ⎣0 1⎦ ⎣0 0⎦ These represent all possible 2 × 2 reduced row-echelon matrices. 57. (a) True. In the notation m × n, m is the number of
rows of the matrix. So, a 6 × 3 matrix has six rows.
(b) True. On page 19, after Example 4, the sentence reads, "It can be shown that every matrix is row-equivalent to a matrix in row-echelon form."
(c) False. Consider the row-echelon form
⎡1 0 0 0 0⎤
⎢0 1 0 0 1⎥
⎢0 0 1 0 2⎥
⎣0 0 0 1 3⎦
which gives the solution x1 = 0, x2 = 1, x3 = 2, and x4 = 3.
(d) True. Theorem 1.1 states that if a homogeneous system has fewer equations than variables, then it must have an infinite number of solutions.
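The small reductions in Exercises 51–55 can be reproduced with a short exact-arithmetic routine. This is our own sketch, not part of the manual; it uses Fraction so no rounding occurs:

```python
from fractions import Fraction

def rref(m):
    """Gauss-Jordan elimination over the rationals; returns the RREF of m."""
    m = [[Fraction(x) for x in row] for row in m]
    nrows, ncols = len(m), len(m[0])
    r = 0
    for c in range(ncols):
        pivot = next((i for i in range(r, nrows) if m[i][c] != 0), None)
        if pivot is None:
            continue                      # no pivot in this column
        m[r], m[pivot] = m[pivot], m[r]   # row interchange
        m[r] = [x / m[r][c] for x in m[r]]
        for i in range(nrows):            # clear the column above and below
            if i != r and m[i][c] != 0:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        r += 1
        if r == nrows:
            break
    return m

# Exercise 53: the 2 x 2 matrix reduces to the identity.
print(rref([[1, 2], [-1, 2]]) == [[1, 0], [0, 1]])  # True
```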
59. First, a and c cannot both be zero. So, assume a ≠ 0, and use row reduction as follows.
⎡a b⎤    ⎡a      b      ⎤    ⎡a    b    ⎤
⎣c d⎦ ⇒ ⎣0 −(cb)/a + d⎦ ⇒ ⎣0 ad − bc⎦
So, ad − bc ≠ 0. Similarly, if c ≠ 0, interchange rows and proceed as above. So, the original matrix is row-equivalent to the identity if and only if ad − bc ≠ 0.
61. Form the augmented matrix for this system
⎡λ − 2   1   0⎤
⎣  1   λ − 2 0⎦
and reduce the system using elementary row operations.
⎡  1   λ − 2 0⎤    ⎡1    λ − 2      0⎤
⎣λ − 2   1   0⎦ ⇒ ⎣0 λ² − 4λ + 3  0⎦
To have a nontrivial solution you must have
λ² − 4λ + 3 = 0
(λ − 1)(λ − 3) = 0.
So, if λ = 1 or λ = 3, the system will have nontrivial solutions.
63. To show that it is possible you need give only one example, such as
x1 + x2 + x3 = 0
x1 + x2 + x3 = 1
which has fewer equations than variables and obviously has no solution.
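The values found in Exercise 61 can be confirmed by a brute-force scan of the characteristic condition (a quick check of our own, not part of the original solution):

```python
# Nontrivial solutions of (lambda - 2)x + y = 0, x + (lambda - 2)y = 0 exist
# exactly when the eliminated system has a zero coefficient row, i.e. when
# lambda^2 - 4*lambda + 3 = 0.
roots = [lam for lam in range(-10, 11) if lam**2 - 4*lam + 3 == 0]
print(roots)  # [1, 3]
```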
65.
⎡a b⎤    ⎡a − c b − d⎤    ⎡a − c b − d⎤    ⎡−c −d⎤    ⎡c d⎤
⎣c d⎦ ⇒ ⎣  c     d  ⎦ ⇒ ⎣  a     b  ⎦ ⇒ ⎣ a   b⎦ ⇒ ⎣a b⎦
The rows have been interchanged. In general, the second and third elementary row operations can be used in this manner to interchange two rows of a matrix. So, the first elementary row operation is, in fact, redundant.
67. When a system of linear equations is inconsistent, the row-echelon form of the corresponding augmented matrix will have a row that is all zeros except for the last entry.
69. A matrix in reduced row-echelon form has zeros above each row's leading 1, which may not be true for a matrix in row-echelon form.
Section 1.3 Applications of Systems of Linear Equations
1. (a) Because there are three points, choose a second-degree polynomial, p(x) = a0 + a1x + a2x². Then substitute x = 2, 3, and 4 into p(x) and equate the results to y = 5, 2, and 5, respectively.
a0 + a1(2) + a2(2)² = a0 + 2a1 + 4a2 = 5
a0 + a1(3) + a2(3)² = a0 + 3a1 + 9a2 = 2
a0 + a1(4) + a2(4)² = a0 + 4a1 + 16a2 = 5
Form the augmented matrix
⎡1 2 4  5⎤
⎢1 3 9  2⎥
⎣1 4 16 5⎦
and use Gauss-Jordan elimination to obtain the equivalent reduced row-echelon matrix
⎡1 0 0  29⎤
⎢0 1 0 −18⎥
⎣0 0 1   3⎦
So, p(x) = 29 − 18x + 3x².
(b) The graph of p passes through (2, 5), (3, 2), and (4, 5).
3. (a) Because there are three points, choose a second-degree polynomial, p(x) = a0 + a1x + a2x². Then substitute x = 2, 3, and 5 into p(x) and equate the results to y = 4, 6, and 10, respectively.
a0 + a1(2) + a2(2)² = a0 + 2a1 + 4a2 = 4
a0 + a1(3) + a2(3)² = a0 + 3a1 + 9a2 = 6
a0 + a1(5) + a2(5)² = a0 + 5a1 + 25a2 = 10
Use Gauss-Jordan elimination on the augmented matrix for this system.
⎡1 2 4   4⎤    ⎡1 0 0 0⎤
⎢1 3 9   6⎥ ⇒ ⎢0 1 0 2⎥
⎣1 5 25 10⎦    ⎣0 0 1 0⎦
So, p(x) = 2x.
(b) The graph of p passes through (2, 4), (3, 6), and (5, 10).
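The curve fitting in Exercises 1 and 3 amounts to solving a small Vandermonde system, which can be done exactly. The helper below is our own sketch, not the text's method:

```python
from fractions import Fraction

def fit_poly(points):
    """Coefficients [a0, a1, ..., a_{n-1}] of the interpolating polynomial
    through n points, by Gauss-Jordan elimination on the Vandermonde system."""
    n = len(points)
    A = [[Fraction(x) ** j for j in range(n)] + [Fraction(y)] for x, y in points]
    for c in range(n):
        p = next(r for r in range(c, n) if A[r][c] != 0)  # pivot row
        A[c], A[p] = A[p], A[c]
        A[c] = [v / A[c][c] for v in A[c]]
        for r in range(n):
            if r != c and A[r][c] != 0:
                A[r] = [u - A[r][c] * v for u, v in zip(A[r], A[c])]
    return [row[-1] for row in A]

print(fit_poly([(2, 5), (3, 2), (4, 5)]) == [29, -18, 3])  # Exercise 1: True
print(fit_poly([(2, 4), (3, 6), (5, 10)]) == [0, 2, 0])    # Exercise 3: True
```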
5. (a) Using the translation z = x − 2007, the points (z, y) are (−1, 5), (0, 7), and (1, 12). Because there are three points, choose a second-degree polynomial p(z) = a0 + a1z + a2z². Then substitute z = −1, 0, and 1 into p(z) and equate the results to y = 5, 7, and 12, respectively.
a0 + a1(−1) + a2(−1)² = a0 − a1 + a2 = 5
a0 + a1(0) + a2(0)² = a0 = 7
a0 + a1(1) + a2(1)² = a0 + a1 + a2 = 12
Use Gauss-Jordan elimination on the augmented matrix for this system.
⎡1 −1 1  5⎤    ⎡1 0 0   7⎤
⎢1  0 0  7⎥ ⇒ ⎢0 1 0 7/2⎥
⎣1  1 1 12⎦    ⎣0 0 1 3/2⎦
So, p(z) = 7 + (7/2)z + (3/2)z². Letting z = x − 2007, you have
p(x) = 7 + (7/2)(x − 2007) + (3/2)(x − 2007)².
(b) The graph of p passes through (−1, 5), (0, 7), and (1, 12), corresponding to the years 2006, 2007, and 2008.
7. Choose a fourth-degree polynomial and substitute x = 1, 2, 3, and 4 into p(x) = a0 + a1x + a2x² + a3x³ + a4x⁴. However, when you substitute x = 3 into p(x) and equate it to y = 2 and y = 3, you get the contradictory equations
a0 + 3a1 + 9a2 + 27a3 + 81a4 = 2
a0 + 3a1 + 9a2 + 27a3 + 81a4 = 3
and must conclude that the system containing these two equations will have no solution. Also, y is not a function of x because the x-value of 3 is repeated. By similar reasoning, you cannot choose p(y) = b0 + b1y + b2y² + b3y³ + b4y⁴ because y = 1 corresponds to both x = 1 and x = 2.
9. Letting p(x) = a0 + a1x + a2x², substitute x = 0, 2, and 4 into p(x) and equate the results to y = 1, 3, and 5, respectively.
a0 + a1(0) + a2(0)² = a0 = 1
a0 + a1(2) + a2(2)² = a0 + 2a1 + 4a2 = 3
a0 + a1(4) + a2(4)² = a0 + 4a1 + 16a2 = 5
Use Gauss-Jordan elimination on the augmented matrix for this system.
⎡1 0 0  1⎤    ⎡1 0 0 1⎤
⎢1 2 4  3⎥ ⇒ ⎢0 1 0 1⎥
⎣1 4 16 5⎦    ⎣0 0 1 0⎦
So, p(x) = 1 + x. The graphs of y = 1/p(x) = 1/(1 + x) and of the function y = 1 − (7/15)x + (1/15)x² both pass through (0, 1), (2, 1/3), and (4, 1/5).
11. To begin, substitute x = −1 and x = 1 into p(x) = a0 + a1x + a2x² + a3x³ and equate the results to y = 2 and y = −2, respectively.
a0 − a1 + a2 − a3 = 2
a0 + a1 + a2 + a3 = −2
Then, differentiate p, yielding p′(x) = a1 + 2a2x + 3a3x². Substitute x = −1 and x = 1 into p′(x) and equate the results to 0.
a1 − 2a2 + 3a3 = 0
a1 + 2a2 + 3a3 = 0
Combining these four equations into one system and forming the augmented matrix, you obtain
⎡1 −1  1 −1  2⎤
⎢1  1  1  1 −2⎥
⎢0  1 −2  3  0⎥
⎣0  1  2  3  0⎦
Use Gauss-Jordan elimination to find the equivalent reduced row-echelon matrix
⎡1 0 0 0  0⎤
⎢0 1 0 0 −3⎥
⎢0 0 1 0  0⎥
⎣0 0 0 1  1⎦
So, p(x) = −3x + x³. The graph of y = p(x) passes through (−1, 2) and (1, −2).
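As a sanity check on Exercise 11 (not part of the original solution), the cubic and its derivative can be evaluated directly:

```python
def p(x):
    return -3 * x + x ** 3   # polynomial found in Exercise 11

def dp(x):
    return -3 + 3 * x ** 2   # its derivative

print(p(-1), p(1), dp(-1), dp(1))  # 2 -2 0 0
```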
13. (a) Because you are given three points, choose a second-degree polynomial, p(x) = a0 + a1x + a2x². Because the x-values are large, use the translation z = x − 1990 to obtain (−10, 227), (0, 249), and (10, 281). Substituting the given points into p(z) produces the following system of linear equations.
a0 + (−10)a1 + (−10)²a2 = a0 − 10a1 + 100a2 = 227
a0 + (0)a1 + (0)²a2 = a0 = 249
a0 + (10)a1 + (10)²a2 = a0 + 10a1 + 100a2 = 281
Form the augmented matrix
⎡1 −10 100 227⎤
⎢1   0   0 249⎥
⎣1  10 100 281⎦
and use Gauss-Jordan elimination to obtain the equivalent reduced row-echelon matrix
⎡1 0 0  249⎤
⎢0 1 0  2.7⎥
⎣0 0 1 0.05⎦
So, p(z) = 249 + 2.7z + 0.05z² and p(x) = 249 + 2.7(x − 1990) + 0.05(x − 1990)².
(b) To predict the population in 2010 and 2020, substitute these values into p(x).
p(2010) = 249 + 2.7(20) + 0.05(20)² = 323 million
p(2020) = 249 + 2.7(30) + 0.05(30)² = 375 million
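The predictions in Exercise 13 follow from evaluating the translated polynomial; exact rationals keep the arithmetic clean (our own check, not part of the original solution):

```python
from fractions import Fraction

def p(x):
    z = x - 1990                      # translation z = x - 1990
    return 249 + Fraction(27, 10) * z + Fraction(1, 20) * z ** 2

# Reproduces the three data points and the two predictions.
print(p(1980), p(1990), p(2000), p(2010), p(2020))  # 227 249 281 323 375
```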
15. (a) Letting z = x − 2000, the four points are (1, 10,003), (3, 10,526), (5, 12,715), and (7, 14,410). Let p(z) = a0 + a1z + a2z² + a3z³.
a0 + a1(1) + a2(1)² + a3(1)³ = a0 + a1 + a2 + a3 = 10,003
a0 + a1(3) + a2(3)² + a3(3)³ = a0 + 3a1 + 9a2 + 27a3 = 10,526
a0 + a1(5) + a2(5)² + a3(5)³ = a0 + 5a1 + 25a2 + 125a3 = 12,715
a0 + a1(7) + a2(7)² + a3(7)³ = a0 + 7a1 + 49a2 + 343a3 = 14,410
(b) Use Gauss-Jordan elimination to solve the system.
⎡1 1  1   1 10,003⎤    ⎡1 0 0 0 11,041.25⎤
⎢1 3  9  27 10,526⎥ ⇒ ⎢0 1 0 0   −1606.5⎥
⎢1 5 25 125 12,715⎥    ⎢0 0 1 0    613.25⎥
⎣1 7 49 343 14,410⎦    ⎣0 0 0 1       −45⎦
So, p(z) = 11,041.25 − 1606.5z + 613.25z² − 45z³. Letting z = x − 2000,
p(x) = 11,041.25 − 1606.5(x − 2000) + 613.25(x − 2000)² − 45(x − 2000)³.
Because the actual net profit increased each year from 2000 to 2007 except for 2006, and the predicted values decrease each year after 2008, the solution does not produce a reasonable model for predicting future net profits.
17. Choosing a second-degree polynomial approximation, p(x) = a0 + a1x + a2x², substitute x = 0, π/2, and π into p(x) and equate the results to y = 0, 1, and 0, respectively.
a0 = 0
a0 + (π/2)a1 + (π²/4)a2 = 1
a0 + πa1 + π²a2 = 0
Then form the augmented matrix
⎡1  0    0   0⎤
⎢1 π/2 π²/4  1⎥
⎣1  π   π²   0⎦
and use Gauss-Jordan elimination to obtain the equivalent reduced row-echelon matrix
⎡1 0 0    0 ⎤
⎢0 1 0  4/π ⎥
⎣0 0 1 −4/π²⎦
So, p(x) = (4/π)x − (4/π²)x² = (4/π²)(πx − x²).
Furthermore,
sin(π/3) ≈ p(π/3) = (4/π²)[π(π/3) − (π/3)²] = (4/π²)(2π²/9) = 8/9 ≈ 0.889.
Note that sin(π/3) = 0.866 to three significant digits.
19. (i) p(0) = a0 + (0)a1 + (0)²a2 = a0 = 0
(ii) p(−1) = a0 + (−1)a1 + (−1)²a2 = a0 − a1 + a2 = 0
(iii) p(1) = a0 + (1)a1 + (1)²a2 = a0 + a1 + a2 = 0
From the augmented matrix
⎡1 −1 1 0⎤
⎢1  0 0 0⎥
⎣1  1 1 0⎦
use Gauss-Jordan elimination to obtain the equivalent reduced row-echelon matrix
⎡1 0 0 0⎤
⎢0 1 0 0⎥
⎣0 0 1 0⎦
So, a0 = 0, a1 = 0, and a2 = 0.
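The quadratic approximation to sine from Exercise 17 can be compared with the true value directly (a quick numerical check, not part of the original solution):

```python
import math

def p(x):
    # p(x) = (4/pi^2)(pi*x - x^2), the quadratic through (0,0), (pi/2,1), (pi,0)
    return (4 / math.pi ** 2) * (math.pi * x - x ** 2)

print(round(p(math.pi / 3), 3))         # 0.889  (= 8/9)
print(round(math.sin(math.pi / 3), 3))  # 0.866
```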
21. (a) Each of the network's six junctions gives rise to a linear equation, as shown below (input = output).
600 = x1 + x3
x1 = x2 + x4
x2 + x5 = 500
x3 + x6 = 600
x4 + x7 = x6
500 = x5 + x7
Rearrange these equations, form the augmented matrix, and use Gauss-Jordan elimination.
⎡1  0 1  0 0  0 0 600⎤
⎢1 −1 0 −1 0  0 0   0⎥
⎢0  1 0  0 1  0 0 500⎥
⎢0  0 1  0 0  1 0 600⎥
⎢0  0 0  1 0 −1 1   0⎥
⎣0  0 0  0 1  0 1 500⎦
⇒
⎡1 0 0 0 0 −1  0   0⎤
⎢0 1 0 0 0  0 −1   0⎥
⎢0 0 1 0 0  1  0 600⎥
⎢0 0 0 1 0 −1  1   0⎥
⎢0 0 0 0 1  0  1 500⎥
⎣0 0 0 0 0  0  0   0⎦
Letting x7 = t and x6 = s be the free variables, you have
x1 = s
x2 = t
x3 = 600 − s
x4 = s − t
x5 = 500 − t
x6 = s
x7 = t,
where s and t are any real numbers.
(b) If x6 = x7 = 0, then the solution is x1 = 0, x2 = 0, x3 = 600, x4 = 0, x5 = 500, x6 = 0, x7 = 0.
(c) If x5 = 1000 and x6 = 0, then the solution is x1 = 0, x2 = −500, x3 = 600, x4 = 500, x5 = 1000, x6 = 0, and x7 = −500.
23. (a) Each of the network's four junctions gives rise to a linear equation, as shown below (input = output).
200 + x2 = x1
x4 = x2 + 100
x3 = x4 + 200
x1 + 100 = x3
Rearranging these equations and forming the augmented matrix, you obtain
⎡1 −1  0  0  200⎤
⎢0  1  0 −1 −100⎥
⎢0  0  1 −1  200⎥
⎣1  0 −1  0 −100⎦
Gauss-Jordan elimination produces the matrix
⎡1 0 0 −1  100⎤
⎢0 1 0 −1 −100⎥
⎢0 0 1 −1  200⎥
⎣0 0 0  0    0⎦
Letting x4 = t, you have x1 = 100 + t, x2 = −100 + t, x3 = 200 + t, and x4 = t, where t is any real number.
(b) When x4 = t = 0, then x1 = 100, x2 = −100, and x3 = 200.
(c) When x4 = t = 100, then x1 = 200, x2 = 0, and x3 = 300.
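The parametric solution of Exercise 21 can be verified against all six junction equations (the helper names below are our own):

```python
def flows(s, t):
    """The two-parameter family of solutions found in Exercise 21."""
    return {'x1': s, 'x2': t, 'x3': 600 - s, 'x4': s - t,
            'x5': 500 - t, 'x6': s, 'x7': t}

def balanced(f):
    """input = output at each of the six junctions"""
    return (600 == f['x1'] + f['x3'] and
            f['x1'] == f['x2'] + f['x4'] and
            f['x2'] + f['x5'] == 500 and
            f['x3'] + f['x6'] == 600 and
            f['x4'] + f['x7'] == f['x6'] and
            500 == f['x5'] + f['x7'])

print(all(balanced(flows(s, t)) for s in (0, 100, 600) for t in (-500, 0, 250)))  # True
```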
25. Applying Kirchhoff's first law to either junction produces
I1 + I3 = I2
and applying the second law to the two paths produces
R1I1 + R2I2 = 4I1 + 3I2 = 3
R2I2 + R3I3 = 3I2 + I3 = 4.
Rearrange these equations, form the augmented matrix, and use Gauss-Jordan elimination.
⎡1 −1 1 0⎤    ⎡1 0 0 0⎤
⎢4  3 0 3⎥ ⇒ ⎢0 1 0 1⎥
⎣0  3 1 4⎦    ⎣0 0 1 1⎦
So, I1 = 0, I2 = 1, and I3 = 1.
27. (a) To find the general solution, let A have a volts and B have b volts. Applying Kirchhoff's first law to either junction produces
I1 + I3 = I2
and applying the second law to the two paths produces
R1I1 + R2I2 = I1 + 2I2 = a
R2I2 + R3I3 = 2I2 + 4I3 = b.
Rearrange these three equations and form the augmented matrix.
⎡1 −1 1 0⎤
⎢1  2 0 a⎥
⎣0  2 4 b⎦
Gauss-Jordan elimination produces the matrix
⎡1 0 0  (3a − b)/7 ⎤
⎢0 1 0 (4a + b)/14 ⎥
⎣0 0 1 (3b − 2a)/14⎦
(b) When a = 2 and b = 6, then I1 = 0, I2 = 1, I3 = 1.
When a = 5 and b = 8, then I1 = 1, I2 = 2, I3 = 1.
29.
4x² / [(x + 1)²(x − 1)] = A/(x − 1) + B/(x + 1) + C/(x + 1)²
4x² = A(x + 1)² + B(x + 1)(x − 1) + C(x − 1)
4x² = Ax² + 2Ax + A + Bx² − B + Cx − C
4x² = (A + B)x² + (2A + C)x + (A − B − C)
So,
A + B = 4
2A + C = 0
A − B − C = 0.
Use Gauss-Jordan elimination to solve the system.
⎡1  1  0 4⎤    ⎡1 0 0  1⎤
⎢2  0  1 0⎥ ⇒ ⎢0 1 0  3⎥
⎣1 −1 −1 0⎦    ⎣0 0 1 −2⎦
The solution is: A = 1, B = 3, and C = −2. So,
4x² / [(x + 1)²(x − 1)] = 1/(x − 1) + 3/(x + 1) − 2/(x + 1)².
31.
(20 − x²) / [(x + 2)(x − 2)²] = A/(x + 2) + B/(x − 2) + C/(x − 2)²
20 − x² = A(x − 2)² + B(x + 2)(x − 2) + C(x + 2)
20 − x² = Ax² − 4Ax + 4A + Bx² − 4B + Cx + 2C
20 − x² = (A + B)x² + (−4A + C)x + (4A − 4B + 2C)
So,
A + B = −1
−4A + C = 0
4A − 4B + 2C = 20.
Use Gauss-Jordan elimination to solve the system.
⎡ 1  1 0 −1⎤    ⎡1 0 0  1⎤
⎢−4  0 1  0⎥ ⇒ ⎢0 1 0 −2⎥
⎣ 4 −4 2 20⎦    ⎣0 0 1  4⎦
The solution is: A = 1, B = −2, and C = 4. So,
(20 − x²) / [(x + 2)(x − 2)²] = 1/(x + 2) − 2/(x − 2) + 4/(x − 2)².
33. Use Gauss-Jordan elimination to solve the system.
⎡2 0 1 0⎤    ⎡1 0 0  2⎤
⎢0 2 1 0⎥ ⇒ ⎢0 1 0  2⎥
⎣1 1 0 4⎦    ⎣0 0 1 −4⎦
So, x = 2, y = 2, and λ = −4.
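Decompositions such as the one in Exercise 31 are easy to spot-check by evaluating both sides away from the poles (a quick numerical check of our own, not part of the original solution):

```python
def lhs(x):
    return (20 - x ** 2) / ((x + 2) * (x - 2) ** 2)

def rhs(x):
    # partial fractions found in Exercise 31
    return 1 / (x + 2) - 2 / (x - 2) + 4 / (x - 2) ** 2

print(all(abs(lhs(x) - rhs(x)) < 1e-9 for x in (-1.0, 0.5, 3.0, 10.0)))  # True
```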
35. Let x1 = number of touchdowns, x2 = number of extra-point kicks, and x3 = number of field goals.
x1 + x2 + x3 = 13
6x1 + x2 + 3x3 = 46
x2 − x3 = 0
Use Gauss-Jordan elimination to solve the system.
⎡1 1  1 13⎤    ⎡1 0 0 5⎤
⎢6 1  3 46⎥ ⇒ ⎢0 1 0 4⎥
⎣0 1 −1  0⎦    ⎣0 0 1 4⎦
Because x1 = 5, x2 = 4, and x3 = 4, there were 5 touchdowns, 4 extra-point kicks, and 4 field goals.
Review Exercises for Chapter 1
1. Because the equation cannot be written in the form a1x + a2y = b, it is not linear in the variables x and y.
3. Because the equation is in the form a1x + a2y = b, it is linear in the variables x and y.
5. Because the equation cannot be written in the form a1x + a2y = b, it is not linear in the variables x and y.
7. Because the equation is in the form a1x + a2y = b, it is linear in the variables x and y.
9. Choosing y and z as the free variables and letting y = s and z = t, you have
−4x + 2s − 6t = 1
−4x = 1 − 2s + 6t
x = −1/4 + (1/2)s − (3/2)t.
So, the solution set can be described as x = −1/4 + (1/2)s − (3/2)t, y = s, z = t, where s and t are real numbers.
11. Row reduce the augmented matrix for this system.
⎡1  1 2⎤    ⎡1  1  2⎤    ⎡1 1   2⎤    ⎡1 0 1/2⎤
⎣3 −1 0⎦ ⇒ ⎣0 −4 −6⎦ ⇒ ⎣0 1 3/2⎦ ⇒ ⎣0 1 3/2⎦
Converting back to a linear system, the solution is: x = 1/2 and y = 3/2.
13. Rearrange the equations as shown below.
x − y = −4
2x − 3y = 0
Row reduce the augmented matrix for this system.
⎡1 −1 −4⎤    ⎡1 −1 −4⎤    ⎡1 −1 −4⎤    ⎡1 0 −12⎤
⎣2 −3  0⎦ ⇒ ⎣0 −1  8⎦ ⇒ ⎣0  1 −8⎦ ⇒ ⎣0 1  −8⎦
Converting back to a linear system, the solution is: x = −12 and y = −8.
15. Row reduce the augmented matrix for this system.
⎡1 1 0⎤    ⎡1  1 0⎤    ⎡1 1 0⎤    ⎡1 0 0⎤
⎣2 1 0⎦ ⇒ ⎣0 −1 0⎦ ⇒ ⎣0 1 0⎦ ⇒ ⎣0 1 0⎦
Converting back to a linear system, the solution is: x = 0 and y = 0.
17. The augmented matrix for this system is
⎡ 1 −1 9⎤
⎣−1  1 1⎦
which is equivalent to the reduced row-echelon matrix
⎡1 −1 0⎤
⎣0  0 1⎦
Because the second row corresponds to 0 = 1, which is a false statement, you can conclude that the system has no solution.
19. Multiplying both equations by 100 and forming the augmented matrix produces
⎡20 30 14⎤
⎣40 50 20⎦
Use Gauss-Jordan elimination as shown below.
⎡ 1 3/2 7/10⎤    ⎡1  3/2 7/10⎤    ⎡1 3/2 7/10⎤    ⎡1 0 −1/2⎤
⎣40  50   20⎦ ⇒ ⎣0 −10   −8 ⎦ ⇒ ⎣0  1   4/5⎦ ⇒ ⎣0 1  4/5⎦
So, the solution is: x1 = −1/2 and x2 = 4/5.
21. Expanding the second equation, 3x + 2y = 0, the augmented matrix for this system is
⎡1/2 −1/3 0⎤
⎣ 3    2  0⎦
which is equivalent to the reduced row-echelon matrix
⎡1 0 0⎤
⎣0 1 0⎦
So, the solution is: x = 0 and y = 0.
23. Because the matrix has 2 rows and 3 columns, it has size 2 × 3.
25. This matrix has the characteristic stair-step pattern of leading 1's, so it is in row-echelon form. However, the leading 1 in row three (in column four) has 1's above it, so the matrix is not in reduced row-echelon form.
27. Because the first row begins with −1, this matrix is not in row-echelon form.
29. This matrix corresponds to the system
x1 + 2x2 = 0
x3 = 0.
Choosing x2 = t as the free variable, you can describe the solution as x1 = −2t, x2 = t, and x3 = 0, where t is a real number.
31. The augmented matrix for this system is
⎡−1 1 2  1⎤
⎢ 2 3 1 −2⎥
⎣ 5 4 2  4⎦
which is equivalent to the reduced row-echelon matrix
⎡1 0 0  2⎤
⎢0 1 0 −3⎥
⎣0 0 1  3⎦
So, the solution is: x = 2, y = −3, and z = 3.
33. Use Gauss-Jordan elimination on the augmented matrix.
⎡ 2 3  3  3⎤    ⎡1 0 0  1/2⎤
⎢ 6 6 12 13⎥ ⇒ ⎢0 1 0 −1/3⎥
⎣12 9 −1  2⎦    ⎣0 0 1    1⎦
So, x = 1/2, y = −1/3, and z = 1.
35. The augmented matrix for this system is
⎡ 1 −2  1 −6⎤
⎢ 2 −3  0 −7⎥
⎣−1  3 −3 11⎦
which is equivalent to the reduced row-echelon matrix
⎡1 0 −3 4⎤
⎢0 1 −2 5⎥
⎣0 0  0 0⎦
Choosing z = t as the free variable, you find that the solution set can be described by x = 4 + 3t, y = 5 + 2t, and z = t, where t is a real number.
37. Use Gauss-Jordan elimination on the augmented matrix for this system.
⎡2  1 2 4⎤    ⎡1 0  2 3/2⎤
⎢2  2 0 5⎥ ⇒ ⎢0 1 −2   1⎥
⎣2 −1 6 2⎦    ⎣0 0  0   0⎦
So, the solution is: x = 3/2 − 2t, y = 1 + 2t, z = t, where t is any real number.
39. The augmented matrix for this system is
⎡ 2  1 1  2 −1⎤
⎢ 5 −2 1 −3  0⎥
⎢−1  3 2  2  1⎥
⎣ 3  2 3 −5 12⎦
which is equivalent to the reduced row-echelon matrix
⎡1 0 0 0  1⎤
⎢0 1 0 0  4⎥
⎢0 0 1 0 −3⎥
⎣0 0 0 1 −2⎦
So, the solution is: x1 = 1, x2 = 4, x3 = −3, and x4 = −2.
41. Using a graphing utility, the augmented matrix reduces to
⎡1 0 0 0  1⎤
⎢0 1 0 0  0⎥
⎢0 0 1 0  4⎥
⎣0 0 0 1 −2⎦
So, the solution is: x = 1, y = 0, z = 4, and w = −2.
43. Using a graphing utility, the augmented matrix reduces to
⎡1 0 0 0⎤
⎢0 1 4 2⎥
⎢0 0 0 0⎥
⎣0 0 0 0⎦
Choosing z = t as the free variable, you find that the solution set can be described by x = 0, y = 2 − 4t, and z = t, where t is a real number.
45. Using a graphing utility, the augmented matrix reduces to
⎡1 0  2 0 0⎤
⎢0 1 −1 0 0⎥
⎣0 0  0 1 0⎦
Choosing z = t as the free variable, you find that the solution set can be described by x = −2t, y = t, z = t, and w = 0, where t is a real number.
47. Use Gauss-Jordan elimination on the augmented matrix.
⎡ 1 −2 −8 0⎤    ⎡1 0 0 0⎤
⎢ 3  2  0 0⎥ ⇒ ⎢0 1 0 0⎥
⎣−1  1  7 0⎦    ⎣0 0 1 0⎦
So, the solution is x1 = x2 = x3 = 0.
49. The augmented matrix for this system is
⎡2  −8 4 0⎤
⎢3 −10 7 0⎥
⎣0  10 5 0⎦
which is equivalent to the reduced row-echelon matrix
⎡1 0   4 0⎤
⎢0 1 1/2 0⎥
⎣0 0   0 0⎦
Choosing x3 = t as the free variable, you find that the solution set can be described by x1 = −4t, x2 = −(1/2)t, and x3 = t, where t is a real number.
51. Forming the augmented matrix
⎡k 1 0⎤
⎣1 k 1⎦
and using Gauss-Jordan elimination, you obtain
⎡1 k 1⎤    ⎡1    k     1⎤    ⎡1 0 −1/(k² − 1)⎤
⎣k 1 0⎦ ⇒ ⎣0 1 − k²  −k⎦ ⇒ ⎣0 1  k/(k² − 1)⎦,  k² − 1 ≠ 0.
So, the system is inconsistent if k = ±1.
53. Row reduce the augmented matrix.
⎡1 2  3⎤    ⎡1    2       3   ⎤
⎣a b −9⎦ ⇒ ⎣0 b − 2a  −9 − 3a⎦
(a) There will be no solution if b − 2a = 0 and −9 − 3a ≠ 0. That is, if b = 2a and a ≠ −3.
(b) There will be exactly one solution if b ≠ 2a.
(c) There will be an infinite number of solutions if b − 2a = 0 and −9 − 3a = 0. That is, if a = −3 and b = −6.
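The inconsistency condition in Exercise 51 can be scanned directly (our own brute-force sketch, not part of the original solution):

```python
def consistent(k):
    # kx + y = 0, x + ky = 1: elimination leaves (1 - k^2) y = -k,
    # so the system is consistent exactly when 1 - k^2 != 0.
    return 1 - k * k != 0

print([k for k in (-2, -1, 0, 1, 2) if not consistent(k)])  # [-1, 1]
```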
55. You can show that two matrices of the same size are row-equivalent if they both row reduce to the same matrix. The two given matrices are row-equivalent because each is row-equivalent to the identity matrix.
57. Adding a multiple of row one to each row yields the following matrix.
⎡1      2          3      ⋯       n      ⎤
⎢0     −n        −2n      ⋯   −(n − 1)n ⎥
⎢0    −2n        −4n      ⋯  −2(n − 1)n ⎥
⎢⋮      ⋮          ⋮               ⋮      ⎥
⎣0 −(n − 1)n −2(n − 1)n   ⋯ −(n − 1)(n − 1)n⎦
Every row below row two is a multiple of row two. Therefore, reduce these rows to zeros.
⎡1  2   3  ⋯     n     ⎤
⎢0 −n −2n  ⋯ −(n − 1)n ⎥
⎢0  0   0  ⋯     0     ⎥
⎢⋮  ⋮   ⋮        ⋮     ⎥
⎣0  0   0  ⋯     0     ⎦
Dividing row two by −n yields a new second row.
⎡1 2 3 ⋯   n  ⎤
⎢0 1 2 ⋯ n − 1⎥
⎢0 0 0 ⋯   0  ⎥
⎢⋮ ⋮ ⋮     ⋮  ⎥
⎣0 0 0 ⋯   0  ⎦
Adding −2 times row two to row one yields a new first row.
⎡1 0 −1 ⋯ 2 − n⎤
⎢0 1  2 ⋯ n − 1⎥
⎢0 0  0 ⋯   0  ⎥
⎢⋮ ⋮  ⋮      ⋮  ⎥
⎣0 0  0 ⋯   0  ⎦
This matrix is in reduced row-echelon form.
59. (a) False. See page 3, following Example 2.
(b) True. See page 5, Example 4(b).
61. (a) Let x1 = number of three-point baskets, x2 = number of two-point baskets, and x3 = number of one-point free throws.
3x1 + 2x2 + x3 = 59
3x1 − x2 = 0
x2 − x3 = 1
(b) Use Gauss-Jordan elimination to solve the system.
⎡3  2  1 59⎤    ⎡1 0 0  5⎤
⎢3 −1  0  0⎥ ⇒ ⎢0 1 0 15⎥
⎣0  1 −1  1⎦    ⎣0 0 1 14⎦
Because x1 = 5, x2 = 15, and x3 = 14, there were 5 three-point baskets, 15 two-point baskets, and 14 one-point free throws.
63.
(3x² − 3x − 2) / [(x + 2)(x − 2)²] = A/(x + 2) + B/(x − 2) + C/(x − 2)²
3x² − 3x − 2 = A(x − 2)² + B(x + 2)(x − 2) + C(x + 2)
3x² − 3x − 2 = Ax² − 4Ax + 4A + Bx² − 4B + Cx + 2C
3x² − 3x − 2 = (A + B)x² + (−4A + C)x + (4A − 4B + 2C)
So,
A + B = 3
−4A + C = −3
4A − 4B + 2C = −2.
Use Gauss-Jordan elimination to solve the system.
⎡ 1  1 0  3⎤    ⎡1 0 0 1⎤
⎢−4  0 1 −3⎥ ⇒ ⎢0 1 0 2⎥
⎣ 4 −4 2 −2⎦    ⎣0 0 1 1⎦
The solution is: A = 1, B = 2, and C = 1. So,
(3x² − 3x − 2) / [(x + 2)(x − 2)²] = 1/(x + 2) + 2/(x − 2) + 1/(x − 2)².
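The totals in Exercise 61 check out by direct substitution (a quick check, not part of the original solution):

```python
x1, x2, x3 = 5, 15, 14   # solution found in Exercise 61
print(3 * x1 + 2 * x2 + x3 == 59)  # first equation: True
print(3 * x1 - x2 == 0)            # second equation: True
print(x2 - x3 == 1)                # third equation: True
```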
65. (a) Because there are three points, choose a second-degree polynomial, p(x) = a0 + a1x + a2x². By substituting the values at each point into this equation, you obtain the system
a0 + 2a1 + 4a2 = 5
a0 + 3a1 + 9a2 = 0
a0 + 4a1 + 16a2 = 20.
Forming the augmented matrix
⎡1 2  4  5⎤
⎢1 3  9  0⎥
⎣1 4 16 20⎦
and using Gauss-Jordan elimination, you obtain
⎡1 0 0     90⎤
⎢0 1 0 −135/2⎥
⎣0 0 1   25/2⎦
So, p(x) = 90 − (135/2)x + (25/2)x².
(b) The graph of p passes through (2, 5), (3, 0), and (4, 20).
67. Establish the first year as x = 0 and substitute the values at each point into p(x) = a0 + a1x + a2x² to obtain the system
a0 = 50
a0 + a1 + a2 = 60
a0 + 2a1 + 4a2 = 75.
Forming the augmented matrix
⎡1 0 0 50⎤
⎢1 1 1 60⎥
⎣1 2 4 75⎦
and using Gauss-Jordan elimination, you obtain
⎡1 0 0   50⎤
⎢0 1 0 15/2⎥
⎣0 0 1  5/2⎦
So, p(x) = 50 + (15/2)x + (5/2)x². To predict the sales in the fourth year, evaluate p(x) when x = 3.
p(3) = 50 + (15/2)(3) + (5/2)(3)² = $95
69. (a) There are three points: (0, 80), (4, 68), and (80, 30). Because you are given three points, choose a second-degree polynomial, p(x) = a0 + a1x + a2x². Substituting the given points into p(x) produces the following system of linear equations.
a0 + (0)a1 + (0)²a2 = a0 = 80
a0 + (4)a1 + (4)²a2 = a0 + 4a1 + 16a2 = 68
a0 + (80)a1 + (80)²a2 = a0 + 80a1 + 6400a2 = 30
(b) Form the augmented matrix
⎡1  0    0 80⎤
⎢1  4   16 68⎥
⎣1 80 6400 30⎦
and use Gauss-Jordan elimination to obtain the equivalent reduced row-echelon matrix
⎡1 0 0    80⎤
⎢0 1 0 −25/8⎥
⎣0 0 1  1/32⎦
So, p(x) = 80 − (25/8)x + (1/32)x².
(c) The graphing utility gives a0 = 80, a1 = −25/8, and a2 = 1/32. In other words, p(x) = 80 − (25/8)x + (1/32)x².
(d) The results of (b) and (c) are the same.
(e) There is precisely one polynomial function of degree n − 1 (or less) that fits n distinct points.
71. Applying Kirchhoff's first law to either junction produces
I1 + I3 = I2
and applying the second law to the two paths produces
R1I1 + R2I2 = 3I1 + 4I2 = 3
R2I2 + R3I3 = 4I2 + 2I3 = 2.
Rearrange these equations, form the augmented matrix, and use Gauss-Jordan elimination.
⎡1 −1 1 0⎤    ⎡1 0 0 5/13⎤
⎢3  4 0 3⎥ ⇒ ⎢0 1 0 6/13⎥
⎣0  4 2 2⎦    ⎣0 0 1 1/13⎦
So, the solution is I1 = 5/13, I2 = 6/13, and I3 = 1/13.
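The currents in Review Exercise 71 satisfy both of Kirchhoff's laws, as a quick exact check confirms (not part of the original solution):

```python
from fractions import Fraction as F

I1, I2, I3 = F(5, 13), F(6, 13), F(1, 13)   # currents found in Exercise 71
print(I1 + I3 == I2)         # first law at either junction: True
print(3 * I1 + 4 * I2 == 3)  # second law, first path: True
print(4 * I2 + 2 * I3 == 2)  # second law, second path: True
```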
C H A P T E R 1
Systems of Linear Equations
Section 1.1 Introduction to Systems of Linear Equations ........ 2
Section 1.2 Gaussian Elimination and Gauss-Jordan Elimination ........ 8
Section 1.3 Applications of Systems of Linear Equations ........ 13
Review Exercises ........ 19
Project Solutions ........ 24
C H A P T E R 1
Systems of Linear Equations
Section 1.1 Introduction to Systems of Linear Equations
2. Because the term xy cannot be rewritten as ax + by for any real numbers a and b, the equation cannot be written in the form a1x + a2y = b. So, this equation is not linear in the variables x and y.
4. Because the terms x² and y² cannot be rewritten as ax + by for any real numbers a and b, the equation cannot be written in the form a1x + a2y = b. So, this equation is not linear in the variables x and y.
6. Because the equation is in the form a1x + a2y = b, it is linear in the variables x and y.
8. Choosing y as the free variable, let y = t and obtain
3x − (1/2)t = 9
3x = 9 + (1/2)t
x = 3 + (1/6)t.
So, you can describe the solution set as x = 3 + (1/6)t and y = t, where t is any real number.
10. Choosing x2 and x3 as the free variables, let x3 = t and x2 = s and obtain 13x1 − 26s + 39t = 13. Dividing this equation by 13 you obtain
x1 − 2s + 3t = 1
x1 = 1 + 2s − 3t.
So, you can describe the solution set as x1 = 1 + 2s − 3t, x2 = s, and x3 = t, where t and s are any real numbers.
12. From Equation 2 you have x2 = 3. Substituting this value into Equation 1 produces 2x1 − 12 = 6, or x1 = 9. So, the system has exactly one solution: x1 = 9 and x2 = 3.
14. From Equation 3 you conclude that z = 2. Substituting this value into Equation 2 produces 2y + 2 = 6, or y = 2. Finally, substituting y = 2 and z = 2 into Equation 1, you obtain x − 2 = 4, or x = 6. So, the system has exactly one solution: x = 6, y = 2, and z = 2.
16. From the second equation you have x2 = 0. Substituting this value into Equation 1 produces x1 + x3 = 0. Choosing x3 as the free variable, you have x3 = t and obtain x1 + t = 0, or x1 = −t. So, you can describe the solution set as x1 = −t, x2 = 0, and x3 = t.
18. The graphs of x + 3y = 2 and −x + 2y = 3 intersect at (−1, 1).
x + 3y = 2
−x + 2y = 3
Adding the first equation to the second equation produces a new second equation, 5y = 5, or y = 1. So, x = 2 − 3y = 2 − 3(1), and the solution is: x = −1, y = 1. This is the point where the two lines intersect.
20. The graphs of (1/2)x − (1/3)y = 1 and −2x + (4/3)y = −4 coincide. Multiplying the first equation by 2 produces a new first equation.
x − (2/3)y = 2
−2x + (4/3)y = −4
Adding 2 times the first equation to the second equation produces a new second equation.
x − (2/3)y = 2
0 = 0
Choosing y = t as the free variable, you obtain x = (2/3)t + 2. So, you can describe the solution set as x = (2/3)t + 2 and y = t, where t is any real number.
22. The graphs of −x + 3y = 17 and 4x + 3y = 7 intersect at a single point.
−x + 3y = 17
4x + 3y = 7
Subtracting the first equation from the second equation produces a new second equation, 5x = −10, or x = −2. So, 4(−2) + 3y = 7, or y = 5, and the solution is: x = −2, y = 5. This is the point where the two lines intersect.
24. The graphs of x − 5y = 21 and 6x + 5y = 21 intersect at a single point.
x − 5y = 21
6x + 5y = 21
Adding the first equation to the second equation produces a new second equation, 7x = 42, or x = 6. So, 6 − 5y = 21, or y = −3, and the solution is: x = 6, y = −3. This is the point where the two lines intersect.
26. The graphs of (x − 1)/2 + (y + 2)/3 = 4 and x − 2y = 5 intersect at a single point. Multiplying the first equation by 6 produces a new first equation.
3x + 2y = 23
x − 2y = 5
Adding the first equation to the second equation produces a new second equation, 4x = 28, or x = 7. So, 7 − 2y = 5, or y = 1, and the solution is: x = 7, y = 1. This is the point where the two lines intersect.
28. The graphs of 0.2x − 0.5y = −27.8 and 0.3x + 0.4y = 68.7 intersect at a single point. Multiplying the first equation by 40 and the second equation by 50 produces new equations.
8x − 20y = −1112
15x + 20y = 3435
Adding the first equation to the second equation produces a new second equation, 23x = 2323, or x = 101. So, 8(101) − 20y = −1112, or y = 96, and the solution is: x = 101, y = 96. This is the point where the two lines intersect.
30. The graphs of (2/3)x + (1/6)y = 2/3 and 4x + y = 4 coincide.
(2/3)x + (1/6)y = 2/3
4x + y = 4
Adding −6 times the first equation to the second equation produces a new second equation, 0 = 0. Choosing x = t as the free variable, you obtain y = 4 − 4t. So, you can describe the solution as x = t and y = 4 − 4t, where t is any real number.
32. (a) The graphs of 4x − 5y = 3 and −8x + 10y = 14 are parallel lines.
(b) This system is inconsistent, because you see two parallel lines on the graph of the system.
(c) Because the system is inconsistent, you cannot approximate the solution.
(d) Adding 2 times the first equation to the second equation you obtain 0 = 20, which is a false statement. This system has no solution.
(e) You obtained the same answer both geometrically and algebraically.
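Solutions like the one in Exercise 28 can be substituted back into the original decimal equations (a quick check of our own; the tolerance guards against floating-point rounding):

```python
x, y = 101, 96   # solution found in Exercise 28
print(abs(0.2 * x - 0.5 * y - (-27.8)) < 1e-9)  # True
print(abs(0.3 * x + 0.4 * y - 68.7) < 1e-9)     # True
```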
Chapter 1   Systems of Linear Equations

34. (a) 9x − 4y = 5
(1/2)x + (1/3)y = 0
(b) The two lines corresponding to the two equations intersect at a point, so this system has a unique solution.
(c) x ≈ 1/3, y ≈ −1/2
(d) Adding −18 times the second equation to the first equation you obtain −10y = 5, or y = −1/2. Substituting y = −1/2 into the first equation you obtain 9x = 3, or x = 1/3. The solution is: x = 1/3 and y = −1/2.
(e) You obtained the same answer both geometrically and algebraically.

36. (a) −5.3x + 2.1y = 1.25
15.9x − 6.3y = −3.75
(b) Because each equation has the same line as its graph, there are infinitely many solutions.
(c) All solutions of this system lie on the line y = (53/21)x + 25/42. So, let x = t; then the solution set is x = t, y = (53/21)t + 25/42, for any real number t.
(d) Adding 3 times the first equation to the second equation you obtain
−5.3x + 2.1y = 1.25
0 = 0.
Choosing x = t as the free variable, you obtain 2.1y = 5.3t + 1.25, or 21y = 53t + 12.5, so y = (53/21)t + 25/42. So, the solution set is x = t, y = (53/21)t + 25/42, for any real number t.
(e) You obtained the same answer both geometrically and algebraically.

38. Adding −2 times the first equation to the second equation produces a new second equation.
3x + 2y = 2
0 = 10
Because the second equation is a false statement, the original system of equations has no solution.

40. Adding −6 times the first equation to the second equation produces a new second equation.
x1 − 2x2 = 0
14x2 = 0
Now, using back-substitution, the system has exactly one solution: x1 = 0 and x2 = 0.

42. Multiplying the first equation by 3/2 produces a new first equation.
x1 + (1/4)x2 = 0
4x1 + x2 = 0
Adding −4 times the first equation to the second equation produces a new second equation.
x1 + (1/4)x2 = 0
0 = 0
Choosing x2 = t as the free variable, you obtain x1 = −(1/4)t. So, you can describe the solution set as x1 = −(1/4)t and x2 = t, where t is any real number.
44. To begin, change the form of the first equation.
x1/4 + x2/3 = 7/12
2x1 − x2 = 12
Multiplying the first equation by 4 yields a new first equation.
x1 + (4/3)x2 = 7/3
2x1 − x2 = 12
Adding −2 times the first equation to the second equation produces a new second equation.
x1 + (4/3)x2 = 7/3
−(11/3)x2 = 22/3
Multiplying the second equation by −3/11 yields a new second equation.
x1 + (4/3)x2 = 7/3
x2 = −2
Now, using back-substitution, the system has exactly one solution: x1 = 5 and x2 = −2.

46. Multiplying the first equation by 20 and the second equation by 100 produces a new system.
x1 − 0.6x2 = 4.2
7x1 + 2x2 = 17
Adding −7 times the first equation to the second equation produces a new second equation.
x1 − 0.6x2 = 4.2
6.2x2 = −12.4
Now, using back-substitution, the system has exactly one solution: x1 = 3 and x2 = −2.
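Eliminations like the one in Exercise 46 are easy to verify with exact rational arithmetic. A minimal Python sketch (a supplementary check, not part of the printed solution):

```python
# Check of Exercise 46: the scaled system
#   x1 - 0.6 x2 = 4.2
#   7 x1 + 2 x2 = 17
# should have the unique solution x1 = 3, x2 = -2.
from fractions import Fraction as F

# Augmented rows with exact entries (0.6 = 3/5, 4.2 = 21/5).
r1 = [F(1), F(-3, 5), F(21, 5)]
r2 = [F(7), F(2), F(17)]

# Add -7 times the first row to the second, as in the solution.
r2 = [b - 7 * a for a, b in zip(r1, r2)]

x2 = r2[2] / r2[1]        # 6.2 x2 = -12.4  ->  x2 = -2
x1 = r1[2] - r1[1] * x2   # x1 = 4.2 + 0.6 x2 = 3
print(x1, x2)  # 3 -2
```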
48. Adding the first equation to the second equation yields a new second equation.
x + y + z = 2
4y + 3z = 10
4x + y = 4
Adding −4 times the first equation to the third equation yields a new third equation.
x + y + z = 2
4y + 3z = 10
−3y − 4z = −4
Dividing the second equation by 4 yields a new second equation.
x + y + z = 2
y + (3/4)z = 5/2
−3y − 4z = −4
Adding 3 times the second equation to the third equation yields a new third equation.
x + y + z = 2
y + (3/4)z = 5/2
−(7/4)z = 7/2
Multiplying the third equation by −4/7 yields a new third equation.
x + y + z = 2
y + (3/4)z = 5/2
z = −2
Now, using back-substitution, the system has exactly one solution: x = 0, y = 4, and z = −2.

50. Interchanging the first and third equations yields a new system.
x1 − 11x2 + 4x3 = 3
2x1 + 4x2 − x3 = 7
5x1 − 3x2 + 2x3 = 3
Adding −2 times the first equation to the second equation yields a new second equation.
x1 − 11x2 + 4x3 = 3
26x2 − 9x3 = 1
5x1 − 3x2 + 2x3 = 3
Adding −5 times the first equation to the third equation yields a new third equation.
x1 − 11x2 + 4x3 = 3
26x2 − 9x3 = 1
52x2 − 18x3 = −12
At this point you realize that Equations 2 and 3 cannot both be satisfied. So, the original system of equations has no solution.

52. Adding −4 times the first equation to the second equation and adding −2 times the first equation to the third equation produces new second and third equations.
x1 − 2x2 + 5x3 = 2
8x2 − 16x3 = −8
8x2 − 16x3 = −8
The third equation can be disregarded because it is the same as the second one. Dividing the second equation by 8 yields
x1 − 2x2 + 5x3 = 2
x2 − 2x3 = −1.
Adding 2 times the second equation to the first equation yields
x1 + x3 = 0
x2 − 2x3 = −1.
Choosing x3 as a free variable and letting x3 = t, you can describe the solution as x1 = −t, x2 = 2t − 1, and x3 = t, where t is any real number.

54. Adding −3 times the first equation to the second equation produces a new second equation.
x1 + 4x3 = 13
−2x2 − 15x3 = −45
Choosing x3 = t as the free variable, you can describe the solution as x1 = 13 − 4t, x2 = 45/2 − (15/2)t, and x3 = t, where t is any real number.
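Back-substitution on the triangular form reached in Exercise 48 can be redone mechanically; a short supplementary Python check (the triangular system is the one read from the solution):

```python
# Check of Exercise 48: after elimination the system is triangular,
#   x + y + z = 2,   y + (3/4) z = 5/2,   z = -2,
# and back-substitution should give (x, y, z) = (0, 4, -2).
from fractions import Fraction as F

z = F(-2)
y = F(5, 2) - F(3, 4) * z
x = F(2) - y - z
print(x, y, z)  # 0 4 -2
```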
56. Adding −2 times the first equation to the fourth equation yields
x1 + 3x4 = 4
2x2 − x3 − x4 = 0
3x2 − 2x4 = 1
−x2 + 4x3 − 6x4 = −3.
Multiplying the fourth equation by −1, and interchanging it with the second equation, yields
x1 + 3x4 = 4
x2 − 4x3 + 6x4 = 3
3x2 − 2x4 = 1
2x2 − x3 − x4 = 0.
Adding −3 times the second equation to the third, and −2 times the second equation to the fourth, produces
x1 + 3x4 = 4
x2 − 4x3 + 6x4 = 3
12x3 − 20x4 = −8
7x3 − 13x4 = −6.
Dividing the third equation by 12 yields
x1 + 3x4 = 4
x2 − 4x3 + 6x4 = 3
x3 − (5/3)x4 = −2/3
7x3 − 13x4 = −6.
Adding −7 times the third equation to the fourth yields
x1 + 3x4 = 4
x2 − 4x3 + 6x4 = 3
x3 − (5/3)x4 = −2/3
−(4/3)x4 = −4/3.
Using back-substitution, the original system has exactly one solution: x1 = 1, x2 = 1, x3 = 1, and x4 = 1.

Answers may vary slightly for Exercises 58–64.

58. Using a computer software program or graphing utility, you obtain x = 10, y = −20, z = 40, w = −12.

60. Using a computer software program or graphing utility, you obtain x = 0.8, y = 1.2, z = −2.4.

62. Using a computer software program or graphing utility, you obtain x1 = 0.6, x2 = −0.5, x3 = 0.8.

64. Using a computer software program or graphing utility, you obtain x = 6.8813, y = 163.3111, z = 210.2915, w = 59.2913.

66. x = y = z = 0 is clearly a solution. Dividing the first equation by 2 produces
x + (3/2)y = 0
4x + 3y − z = 0
8x + 3y + 3z = 0.
Adding −4 times the first equation to the second equation, and −8 times the first equation to the third, yields
x + (3/2)y = 0
−3y − z = 0
−9y + 3z = 0.
Adding −3 times the second equation to the third equation yields
x + (3/2)y = 0
−3y − z = 0
6z = 0.
Using back-substitution you conclude there is exactly one solution: x = y = z = 0.

68. x = y = z = 0 is clearly a solution. Dividing the first equation by 12 yields
x + (5/12)y + (1/12)z = 0
12x + 4y − z = 0.
Adding −12 times the first equation to the second yields
x + (5/12)y + (1/12)z = 0
−y − 2z = 0.
Letting z = t be the free variable, you can describe the solution as x = (3/4)t, y = −2t, z = t, where t is any real number.

70. (a) False. Any system of linear equations is either consistent (it has exactly one solution or infinitely many solutions) or inconsistent (it has no solution). This result is stated on page 6 of the text, and will be proved later in Theorem 2.5.
(b) True. See the definition on page 7 of the text.
(c) False. Consider the following system of three linear equations with two variables.
2x + y = −3
−6x − 3y = 9
x = 1
The solution to this system is: x = 1, y = −5.

72. Because x1 = t and x2 = s, you can write x3 = 3 + s − t = 3 + x2 − x1. One system could be
x1 − x2 + x3 = 3
−x1 + x2 − x3 = −3.
Letting x1 = t and x2 = s be the free variables, you can describe the solution as x1 = t, x2 = s, x3 = 3 + s − t, where t and s are any real numbers.
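Parametrized solution families like the one in Exercise 68 can be checked by substituting back into the system (here cleared of fractions). A supplementary Python sketch:

```python
# Check of Exercise 68: the family x = (3/4)t, y = -2t, z = t should
# satisfy the homogeneous system (multiplied through by 12)
#   12x + 5y + z = 0
#   12x + 4y - z = 0
from fractions import Fraction as F

for t in (F(1), F(4), F(-8, 3)):
    x, y, z = F(3, 4) * t, -2 * t, t
    assert 12 * x + 5 * y + z == 0
    assert 12 * x + 4 * y - z == 0
print("every t gives a solution")
```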
Section 1.1 1 1 and B = into the original system y x
74. Substituting A =
Introduction to Systems of Linear Equations 1 1 1 , B = , and C = into the y x z original system yields
76. Substituting A =
yields 2 A + 3B = 3A − 4B = −
0
2A +
25. 6
2A +
= −1
B + 3C = 0.
Reduce the system to row-echelon form.
0
2A +
25 9 A − 12 B = − 2 8 A + 12 B = 0
B − 2C = 5
3A − 4B
= −1 5C = −5
3A −
25 = − 2
17 A
B − 2C = 5
3A − 4B
Reduce the system to row-echelon form. 8 A + 12 B =
7
=
4B
−1
−11B + 6C = −17 5C = −5
25 25 1 So, A = − and B = . Because A = and 34 x 51 1 B = , the solution of the original system of equations y 34 51 is: x = − and y = . 25 25
So, C = −1. Using back-substitution, −11B + 6( −1) = −17, or B = 1 and 3 A − 4(1) = −1, or A = 1. Because A = 1 x, B = 1 y, and C = 1 z , the solution of the original system of equations is: x = 1, y = 1, and z = −1.
78. Multiplying the first equation by sin θ and the second by cos θ produces
cos θ ) x + (sin 2 θ ) y = sin θ
(sin θ
−(sin θ cos θ ) x + (cos 2 θ ) y = cos θ Adding these two equations yields
(sin 2 θ
+ cos 2 θ ) y = sin θ + cos θ y = sin θ + cos θ .
So,
(cos θ ) x x =
+ (sin θ ) y = (cos θ ) x + sin θ (sin θ + cos θ ) = 1 and
(1 − sin 2 θ
− sin θ cos θ )
cos θ
=
(cos2 θ
− sin θ cos θ ) cos θ
= cos θ − sin θ .
Finally, the solution is x = cos θ − sin θ and y = cos θ + sin θ . 80. Interchange the two equations and row reduce. x −
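The trigonometric solution in Exercise 78 holds for every θ with cos θ ≠ 0; a quick numeric spot-check in Python (supplementary, using the system as inferred from the elimination steps):

```python
# Spot-check of Exercise 78: x = cos t - sin t, y = cos t + sin t should
# satisfy the system
#   (cos t) x + (sin t) y = 1
#   (-sin t) x + (cos t) y = 1
import math

for t in (0.3, 0.7, 2.0):  # arbitrary test angles
    x = math.cos(t) - math.sin(t)
    y = math.cos(t) + math.sin(t)
    assert abs(math.cos(t) * x + math.sin(t) * y - 1) < 1e-12
    assert abs(-math.sin(t) * x + math.cos(t) * y - 1) < 1e-12
print("solution verified")
```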
80. Interchange the two equations and row reduce.
x − (3/2)y = −6
kx + y = 4
x − (3/2)y = −6
((3/2)k + 1)y = 4 + 6k
So, if k = −2/3, there will be an infinite number of solutions.

82. Reduce the system.
x + ky = 2
(1 − k²)y = 4 − 2k
If k = ±1, there will be no solution.

84. Interchange the first two equations and row reduce.
x + y + z = 0
ky + 2kz = 4k
−3y − z = 1
If k = 0, then there is an infinite number of solutions. Otherwise,
x + y + z = 0
y + 2z = 4
5z = 13.
Because this system has exactly one solution, the answer is all k ≠ 0.

86. Reducing the system to row-echelon form produces
x + 5y + z = 0
y − 2z = 0
(a − 10)y + (b − 2)z = c
x + 5y + z = 0
y − 2z = 0
(2a + b − 22)z = c.
So, you see that
(a) if 2a + b − 22 ≠ 0, then there is exactly one solution.
(b) if 2a + b − 22 = 0 and c = 0, then there is an infinite number of solutions.
(c) if 2a + b − 22 = 0 and c ≠ 0, there is no solution.

88. If c1 = c2 = c3 = 0, then the system is consistent because x = y = 0 is a solution.

90. Multiplying the first equation by c, and the second by a, produces
acx + bcy = ec
acx + ady = af.
Subtracting the first equation from the second yields
acx + bcy = ec
(ad − bc)y = af − ec.
So, there is a unique solution if ad − bc ≠ 0.

92. The two lines coincide: 2x − 3y = 7. Letting y = t, x = (7 + 3t)/2, for any real number t. The graph does not change.

94. 21x − 20y = 0
13x − 12y = 120
Subtracting 5 times the second equation from 3 times the first equation produces a new first equation, −2x = −600, or x = 300. So, 21(300) − 20y = 0, or y = 315, and the solution is: x = 300, y = 315. The graphs are misleading because they appear to be parallel, but they actually intersect at (300, 315).
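The point of Exercise 94, two nearly parallel lines that nevertheless intersect, can be confirmed exactly (a supplementary check):

```python
# Check of Exercise 94: the lines 21x - 20y = 0 and 13x - 12y = 120
# appear parallel in a typical viewing window but meet at (300, 315).
from fractions import Fraction as F

x, y = 300, 315
assert 21 * x - 20 * y == 0
assert 13 * x - 12 * y == 120

# The slopes 21/20 and 13/12 are close but not equal.
assert F(21, 20) != F(13, 12)
print("intersection:", (x, y))
```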
Section 1.2 Gaussian Elimination and Gauss-Jordan Elimination

2. Because the matrix has 2 rows and 4 columns, it has size 2 × 4.

4. Because the matrix has 1 row and 5 columns, it has size 1 × 5.

6. Because the matrix has 1 row and 1 column, it has size 1 × 1.

8. Because the matrix has 4 rows and 1 column, it has size 4 × 1.

10. Because the leading 1 in the first row is not farther to the left than the leading 1 in the second row, the matrix is not in row-echelon form.

12. The matrix satisfies all three conditions in the definition of row-echelon form. However, because the third column does not have zeros above the leading 1 in the third row, the matrix is not in reduced row-echelon form.

14. The matrix satisfies all three conditions in the definition of row-echelon form. Moreover, because each column that has a leading 1 (columns one and four) has zeros elsewhere, the matrix is in reduced row-echelon form.

16. Because the matrix is in reduced row-echelon form, you can convert back to a system of linear equations:
x1 = 2
x2 = 3.

18. Because the matrix is in row-echelon form, you can convert back to a system of linear equations:
x1 + 2x2 + x3 = 0
x3 = −1.
Using back-substitution, you have x3 = −1. Letting x2 = t be the free variable, you can describe the solution as x1 = 1 − 2t, x2 = t, and x3 = −1, where t is any real number.
20. Gaussian elimination produces the following sequence of augmented matrices.
[2 1 1 0; 1 −2 1 −2; 1 0 1 0]
⇒ [1 0 1 0; 1 −2 1 −2; 2 1 1 0]
⇒ [1 0 1 0; 0 −2 0 −2; 2 1 1 0]
⇒ [1 0 1 0; 0 1 0 1; 2 1 1 0]
⇒ [1 0 1 0; 0 1 0 1; 0 1 −1 0]
⇒ [1 0 1 0; 0 1 0 1; 0 0 1 1]
Because the matrix is in row-echelon form, convert back to a system of linear equations.
x1 + x3 = 0
x2 = 1
x3 = 1
By back-substitution, x1 = −x3 = −1. So, the solution is: x1 = −1, x2 = 1, and x3 = 1.

22. Because the fourth row of this matrix corresponds to the equation 0 = 2, there is no solution to the linear system.

24. The augmented matrix for this system is
[2 6 16; −2 −6 −16].
Use Gauss-Jordan elimination as follows.
[2 6 16; −2 −6 −16] ⇒ [1 3 8; −2 −6 −16] ⇒ [1 3 8; 0 0 0]
Converting back to a system of linear equations, you have x + 3y = 8. Choosing y = t as the free variable, you can describe the solution as x = 8 − 3t and y = t, where t is any real number.

26. The augmented matrix for this system is
[2 −1 −0.1; 3 2 1.6].
Gaussian elimination produces the following.
[2 −1 −0.1; 3 2 1.6] ⇒ [1 −1/2 −1/20; 3 2 8/5] ⇒ [1 −1/2 −1/20; 0 7/2 7/4] ⇒ [1 −1/2 −1/20; 0 1 1/2] ⇒ [1 0 1/5; 0 1 1/2]
Converting back to a system of equations, the solution is: x = 1/5 and y = 1/2.

28. The augmented matrix for this system is
[1 2 0; 1 1 6; 3 −2 8].
Gaussian elimination produces the following.
[1 2 0; 1 1 6; 3 −2 8] ⇒ [1 2 0; 0 −1 6; 0 −8 8] ⇒ [1 2 0; 0 1 −6; 0 −8 8] ⇒ [1 2 0; 0 1 −6; 0 0 −40]
Because the third row corresponds to the equation 0 = −40, you conclude that the system has no solution.

30. The augmented matrix for this system is
[2 −1 3 24; 0 2 −1 14; 7 −5 0 6].
Gaussian elimination produces the following.
[2 −1 3 24; 0 2 −1 14; 7 −5 0 6]
⇒ [1 −1/2 3/2 12; 0 2 −1 14; 7 −5 0 6]
⇒ [1 −1/2 3/2 12; 0 2 −1 14; 0 −3/2 −21/2 −78]
⇒ [1 −1/2 3/2 12; 0 1 −1/2 7; 0 0 −45/4 −135/2]
Back-substitution now yields
x3 = 6
x2 = 7 + (1/2)x3 = 7 + (1/2)(6) = 10
x1 = 12 − (3/2)x3 + (1/2)x2 = 12 − (3/2)(6) + (1/2)(10) = 8.
So, the solution is: x1 = 8, x2 = 10, and x3 = 6.
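Row reductions like the one in Exercise 30 can be automated. The sketch below is a generic Gauss-Jordan routine in exact rational arithmetic (a supplementary illustration, not the manual's hand computation), applied to that exercise's augmented matrix:

```python
# Gauss-Jordan elimination with exact fractions, applied to the
# augmented matrix of Exercise 30; expected solution x1=8, x2=10, x3=6.
from fractions import Fraction as F

M = [[F(2), F(-1), F(3), F(24)],
     [F(0), F(2), F(-1), F(14)],
     [F(7), F(-5), F(0), F(6)]]

n = len(M)
for i in range(n):
    # Find a nonzero pivot in column i and move it into row i.
    p = next(r for r in range(i, n) if M[r][i] != 0)
    M[i], M[p] = M[p], M[i]
    M[i] = [v / M[i][i] for v in M[i]]        # scale to a leading 1
    for r in range(n):                        # clear the rest of column i
        if r != i:
            M[r] = [a - M[r][i] * b for a, b in zip(M[r], M[i])]

x = [row[-1] for row in M]
print(x)  # [Fraction(8, 1), Fraction(10, 1), Fraction(6, 1)]
```

Because every entry stays a `Fraction`, the result is exact rather than a floating-point approximation.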
32. The augmented matrix for this system is
[1 2 1 8; −3 −6 −3 −21].
Gaussian elimination produces the following matrix.
[1 2 1 8; 0 0 0 3]
Because the second row corresponds to the equation 0 = 3, there is no solution to the original system.

34. The augmented matrix for this system is
[2 0 3 3; 4 −3 7 5; 8 −9 15 10].
Gaussian elimination produces the following.
[2 0 3 3; 4 −3 7 5; 8 −9 15 10]
⇒ [1 0 3/2 3/2; 4 −3 7 5; 8 −9 15 10]
⇒ [1 0 3/2 3/2; 0 −3 1 −1; 0 −9 3 −2]
⇒ [1 0 3/2 3/2; 0 1 −1/3 1/3; 0 0 0 1]
Because the third row corresponds to the equation 0 = 1, there is no solution to the original system.

36. The augmented matrix for this system is
[2 1 −1 2 −6; 3 4 0 1 1; 1 5 2 6 −3; 5 2 −1 −1 3].
Gaussian elimination produces the following.
[1 5 2 6 −3; 3 4 0 1 1; 2 1 −1 2 −6; 5 2 −1 −1 3]
⇒ [1 5 2 6 −3; 0 −11 −6 −17 10; 0 −9 −5 −10 0; 0 −23 −11 −31 18]
⇒ [1 5 2 6 −3; 0 1 6/11 17/11 −10/11; 0 −9 −5 −10 0; 0 −23 −11 −31 18]
⇒ [1 5 2 6 −3; 0 1 6/11 17/11 −10/11; 0 0 −1/11 43/11 −90/11; 0 0 17/11 50/11 −32/11]
⇒ [1 5 2 6 −3; 0 1 6/11 17/11 −10/11; 0 0 1 −43 90; 0 0 17/11 50/11 −32/11]
⇒ [1 5 2 6 −3; 0 1 6/11 17/11 −10/11; 0 0 1 −43 90; 0 0 0 781/11 −1562/11]
⇒ [1 5 2 6 −3; 0 1 6/11 17/11 −10/11; 0 0 1 −43 90; 0 0 0 1 −2]
Back-substitution now yields
w = −2
z = 90 + 43w = 90 + 43(−2) = 4
y = −10/11 − (6/11)z − (17/11)w = −10/11 − (6/11)(4) − (17/11)(−2) = 0
x = −3 − 5y − 2z − 6w = −3 − 5(0) − 2(4) − 6(−2) = 1.
So, the solution is: x = 1, y = 0, z = 4, and w = −2.
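For a 4 × 4 system like Exercise 36, a numeric solver makes a quick supplementary check of the hand computation:

```python
# Check of Exercise 36: solving the original 4x4 system should give
# (x, y, z, w) = (1, 0, 4, -2).
import numpy as np

A = np.array([[2.0, 1, -1, 2],
              [3, 4, 0, 1],
              [1, 5, 2, 6],
              [5, 2, -1, -1]])
b = np.array([-6.0, 1, -3, 3])
sol = np.linalg.solve(A, b)
assert np.allclose(sol, [1, 0, 4, -2])
print(np.round(sol, 10))
```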
38. Using a computer software program or graphing utility, you obtain
x = 14.3629, y = 32.7569, z = 28.6356.

40. Using a computer software program or graphing utility, you obtain
x1 = 2, x2 = −1, x3 = 3, x5 = 1.

42. Using a computer software program or graphing utility, you obtain
x1 = 1, x2 = −1, x3 = 2, x4 = 0, x5 = −2, x6 = 1.

44. The corresponding equations are
x1 = 0
x2 + x3 = 0.
Choosing x4 = t and x3 = s as the free variables, you can describe the solution as x1 = 0, x2 = −s, x3 = s, and x4 = t, where s and t are any real numbers.

46. The corresponding equation is 0 = 0, and there are 3 free variables. So, x1 = t, x2 = s, x3 = r, where t, s, and r are any real numbers.

48. (a) If A is the augmented matrix of a system of linear equations, then the number of equations in this system is three, because it is equal to the number of rows of the augmented matrix. The number of variables is two, because it is equal to the number of columns of the augmented matrix minus one.
(b) Using Gaussian elimination on the augmented matrix of the system, you have the following.
[2 −1 3; −4 2 k; 4 −2 6] ⇒ [2 −1 3; 0 0 k + 6; 0 0 0]
This system is consistent if and only if k + 6 = 0, so k = −6.
(c) If A is the coefficient matrix of a system of linear equations, then the number of equations is three, because it is equal to the number of rows of the coefficient matrix. The number of variables is also three, because it is equal to the number of columns of the coefficient matrix.
(d) Using Gaussian elimination on A, you obtain the following coefficient matrix of an equivalent system.
[1 −1/2 3/2; 0 0 k + 6; 0 0 0]
Because the homogeneous system is always consistent, the homogeneous system with the coefficient matrix A is consistent for any value of k.

50. Using Gaussian elimination on the augmented matrix, you have the following.
[1 1 0 0; 0 1 1 0; 1 0 1 0; a b c 0]
⇒ [1 1 0 0; 0 1 1 0; 0 −1 1 0; 0 b − a c 0]
⇒ [1 1 0 0; 0 1 1 0; 0 0 2 0; 0 0 a − b + c 0]
⇒ [1 1 0 0; 0 1 1 0; 0 0 1 0; 0 0 0 0]
From this row-reduced matrix you see that the original system has a unique solution.

52. Because the system composed of equations 1 and 2 is consistent, but has a free variable, this system must have an infinite number of solutions.
54. Use Gauss-Jordan elimination as follows.
[1 2 3; 4 5 6; 7 8 9] ⇒ [1 2 3; 0 −3 −6; 0 −6 −12] ⇒ [1 2 3; 0 1 2; 0 0 0] ⇒ [1 0 −1; 0 1 2; 0 0 0]

56. Begin by finding all possible first rows:
[0 0 0], [0 0 1], [0 1 0], [0 1 a], [1 0 0], [1 0 a], [1 a b], [1 a 0],
where a and b are nonzero real numbers. For each of these, examine the possible remaining rows.
[0 0 0; 0 0 0; 0 0 0], [0 0 1; 0 0 0; 0 0 0], [0 1 0; 0 0 0; 0 0 0], [0 1 0; 0 0 1; 0 0 0],
[0 1 a; 0 0 0; 0 0 0], [1 0 0; 0 0 0; 0 0 0], [1 0 0; 0 1 0; 0 0 0], [1 0 0; 0 1 0; 0 0 1],
[1 0 0; 0 0 1; 0 0 0], [1 0 0; 0 1 a; 0 0 0], [1 a 0; 0 0 0; 0 0 0], [1 a 0; 0 0 1; 0 0 0],
[1 a b; 0 0 0; 0 0 0], [1 0 a; 0 0 0; 0 0 0], [1 0 a; 0 1 0; 0 0 0]
58. (a) False. A 4 × 7 matrix has 4 rows and 7 columns.
(b) True. The reduced row-echelon form of a given matrix is unique, while the row-echelon form is not. (See also Exercise 64 of this section.)
(c) True. See Theorem 1.1 on page 25.
(d) False. Multiplying a row by a nonzero constant is one of the elementary row operations. However, multiplying a row of a matrix by the constant c = 0 is not an elementary row operation. (This would change the system by eliminating the equation corresponding to this row.)

60. First you need a ≠ 0 or c ≠ 0. If a ≠ 0, then you have
[a b; c d] ⇒ [a b; 0 −(cb)/a + d] ⇒ [a b; 0 ad − bc].
So, ad − bc = 0 and b = 0, which implies that d = 0. If c ≠ 0, then you interchange rows and proceed.
[a b; c d] ⇒ [c d; 0 −(ad)/c + b] ⇒ [c d; 0 ad − bc]
Again, ad − bc = 0 and d = 0, which implies that b = 0. In conclusion, [a b; c d] is row equivalent to [1 0; 0 0] if and only if b = d = 0, and a ≠ 0 or c ≠ 0.

62. Row reduce the augmented matrix for this system.
[λ − 1 2 0; 1 λ 0] ⇒ [1 λ 0; λ − 1 2 0] ⇒ [1 λ 0; 0 −λ² + λ + 2 0]
To have a nontrivial solution you must have
λ² − λ − 2 = 0
(λ − 2)(λ + 1) = 0.
So, if λ = −1 or λ = 2, the system will have nontrivial solutions.
Section 1.3 64. No, the echelon form is not unique. For instance, ⎡1 0⎤ ⎡1 2⎤ ⎥. The reduced row echelon form is ⎢ ⎥ and ⎢ ⎣0 1⎦ ⎣0 1⎦ unique.
Applications of Systems of Linear Equations
13
66. Answers will vary. Sample answer: Because the third row consists of all zeros, choose a third equation that is a multiple of one of the other two equations. x + 3 z = −2 y + 4z = 1 2 y + 8z = 2
68. When a system of linear equations has infinitely many solutions, the row-echelon form of the corresponding augmented matrix will have a row that is all zeros.
Section 1.3 Applications of Systems of Linear Equations

2. (a) Because there are three points, choose a second-degree polynomial, p(x) = a0 + a1x + a2x². Then substitute x = 2, 3, and 4 into p(x) and equate the results to y = 4, 4, and 4, respectively.
a0 + a1(2) + a2(2)² = a0 + 2a1 + 4a2 = 4
a0 + a1(3) + a2(3)² = a0 + 3a1 + 9a2 = 4
a0 + a1(4) + a2(4)² = a0 + 4a1 + 16a2 = 4
Use Gauss-Jordan elimination on the augmented matrix for this system.
[1 2 4 4; 1 3 9 4; 1 4 16 4] ⇒ [1 0 0 4; 0 1 0 0; 0 0 1 0]
So, p(x) = 4.
(b) The graph of p is the horizontal line through (2, 4), (3, 4), and (4, 4).
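Fitting systems like the one in Exercise 2 are Vandermonde systems, which numpy can set up directly; a supplementary sketch:

```python
# Check of Section 1.3, Exercise 2: the quadratic through (2, 4), (3, 4),
# and (4, 4) is the constant polynomial p(x) = 4.
import numpy as np

xs = np.array([2.0, 3.0, 4.0])
ys = np.array([4.0, 4.0, 4.0])
V = np.vander(xs, 3, increasing=True)   # rows [1, x, x^2]
a0, a1, a2 = np.linalg.solve(V, ys)
assert np.allclose([a0, a1, a2], [4, 0, 0])
print("p(x) = 4")
```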
4. (a) Because there are four points, choose a third-degree polynomial, p(x) = a0 + a1x + a2x² + a3x³. Then substitute x = −1, 0, 1, and 4 into p(x) and equate the results to y = 3, 0, 1, and 58, respectively.
a0 + a1(−1) + a2(−1)² + a3(−1)³ = a0 − a1 + a2 − a3 = 3
a0 + a1(0) + a2(0)² + a3(0)³ = a0 = 0
a0 + a1(1) + a2(1)² + a3(1)³ = a0 + a1 + a2 + a3 = 1
a0 + a1(4) + a2(4)² + a3(4)³ = a0 + 4a1 + 16a2 + 64a3 = 58
Use Gauss-Jordan elimination on the augmented matrix for this system.
[1 −1 1 −1 3; 1 0 0 0 0; 1 1 1 1 1; 1 4 16 64 58] ⇒ [1 0 0 0 0; 0 1 0 0 −3/2; 0 0 1 0 2; 0 0 0 1 1/2]
So, p(x) = −(3/2)x + 2x² + (1/2)x³.
(b) The graph of p passes through (−1, 3), (0, 0), (1, 1), and (4, 58).
6. (a) Using the translation z = x − 2005, the points (z, y) are (0, 150), (1, 180), (2, 240), and (3, 360). Because there are four points, choose a third-degree polynomial p(z) = a0 + a1z + a2z² + a3z³. Then substitute z = 0, 1, 2, and 3 into p(z) and equate the results to y = 150, 180, 240, and 360, respectively.
a0 + a1(0) + a2(0)² + a3(0)³ = a0 = 150
a0 + a1(1) + a2(1)² + a3(1)³ = a0 + a1 + a2 + a3 = 180
a0 + a1(2) + a2(2)² + a3(2)³ = a0 + 2a1 + 4a2 + 8a3 = 240
a0 + a1(3) + a2(3)² + a3(3)³ = a0 + 3a1 + 9a2 + 27a3 = 360
Use Gauss-Jordan elimination on the augmented matrix for this system.
[1 0 0 0 150; 1 1 1 1 180; 1 2 4 8 240; 1 3 9 27 360] ⇒ [1 0 0 0 150; 0 1 0 0 25; 0 0 1 0 0; 0 0 0 1 5]
So, p(z) = 150 + 25z + 5z³. Letting z = x − 2005, you have p(x) = 150 + 25(x − 2005) + 5(x − 2005)³.
(b) The graph of p passes through (0, 150), (1, 180), (2, 240), and (3, 360), corresponding to the years 2005 through 2008.
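The coefficients found in Exercise 6 can be recomputed and re-evaluated at the data points; a supplementary Python sketch:

```python
# Check of Exercise 6: the cubic through (0, 150), (1, 180), (2, 240),
# (3, 360) should be p(z) = 150 + 25 z + 5 z^3 (the z^2 coefficient is 0).
import numpy as np

z = np.array([0.0, 1, 2, 3])
y = np.array([150.0, 180, 240, 360])
V = np.vander(z, 4, increasing=True)    # rows [1, z, z^2, z^3]
a = np.linalg.solve(V, y)
assert np.allclose(a, [150, 25, 0, 5])

def p(t):
    return a[0] + a[1] * t + a[2] * t**2 + a[3] * t**3

# The fitted cubic reproduces every data point.
assert np.allclose([p(t) for t in z], y)
print("p(z) = 150 + 25z + 5z^3")
```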
8. Letting p(x) = a0 + a1x + a2x², substitute x = 0, 2, and 4 into p(x) and equate the results to y = 1, 1/3, and 1/5, respectively.
a0 + a1(0) + a2(0)² = a0 = 1
a0 + a1(2) + a2(2)² = a0 + 2a1 + 4a2 = 1/3
a0 + a1(4) + a2(4)² = a0 + 4a1 + 16a2 = 1/5
Use Gauss-Jordan elimination on the augmented matrix for this system.
[1 0 0 1; 1 2 4 1/3; 1 4 16 1/5] ⇒ [1 0 0 1; 0 1 0 −7/15; 0 0 1 1/15]
So, p(x) = 1 − (7/15)x + (1/15)x².
10. Let p(x) = a0 + a1x + a2x² be the equation of the parabola. Because the parabola passes through the points (0, 1) and (1/2, 1/2), you have
a0 + a1(0) + a2(0)² = a0 = 1
a0 + a1(1/2) + a2(1/2)² = a0 + (1/2)a1 + (1/4)a2 = 1/2.
Because p(x) has a horizontal tangent at (1/2, 1/2), the derivative of p(x), p′(x) = a1 + 2a2x, equals zero when x = 1/2. So, you have a third linear equation,
a1 + 2a2(1/2) = a1 + a2 = 0.
Use Gauss-Jordan elimination on the augmented matrix for this linear system.
[1 0 0 1; 1 1/2 1/4 1/2; 0 1 1 0] ⇒ [1 0 0 1; 0 1 0 −2; 0 0 1 2]
So, p(x) = 1 − 2x + 2x². The graph of p passes through (0, 1) and (1/2, 1/2).

12. Assume that the equation of the circle is x² + ax + y² + by − c = 0. Because each of the given points lies on the circle, you have the following linear equations.
(1)² + a(1) + (3)² + b(3) − c = a + 3b − c + 10 = 0
(−2)² + a(−2) + (6)² + b(6) − c = −2a + 6b − c + 40 = 0
(4)² + a(4) + (2)² + b(2) − c = 4a + 2b − c + 20 = 0
Use Gauss-Jordan elimination on the system.
[1 3 −1 −10; −2 6 −1 −40; 4 2 −1 −20] ⇒ [1 0 0 −10; 0 1 0 −20; 0 0 1 −60]
So, the equation of the circle is x² − 10x + y² − 20y + 60 = 0, or (x − 5)² + (y − 10)² = 65.
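The circle-fitting system in Exercise 12 is linear in a, b, and c, so a solver can confirm the result; a supplementary sketch:

```python
# Check of Exercise 12: the circle x^2 + a x + y^2 + b y - c = 0 through
# (1, 3), (-2, 6), (4, 2) should have a = -10, b = -20, c = -60,
# i.e. (x - 5)^2 + (y - 10)^2 = 65.
import numpy as np

pts = [(1, 3), (-2, 6), (4, 2)]
A = np.array([[x, y, -1.0] for x, y in pts])
rhs = np.array([-float(x * x + y * y) for x, y in pts])
a, b, c = np.linalg.solve(A, rhs)
assert np.allclose([a, b, c], [-10, -20, -60])
for x, y in pts:  # each point lies on the completed-square circle
    assert abs((x - 5) ** 2 + (y - 10) ** 2 - 65) < 1e-9
print("circle: (x-5)^2 + (y-10)^2 = 65")
```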
14. (a) Letting z = (x − 1920)/10, the four data points are (0, 106), (1, 123), (2, 132), and (3, 151). Let p(z) = a0 + a1z + a2z² + a3z³.
a0 = 106
a0 + a1 + a2 + a3 = 123
a0 + 2a1 + 4a2 + 8a3 = 132
a0 + 3a1 + 9a2 + 27a3 = 151
The solution to this system is a0 = 106, a1 = 27, a2 = −13, a3 = 3. So, the cubic polynomial is p(z) = 106 + 27z − 13z² + 3z³. Because z = (x − 1920)/10,
p(x) = 106 + 27((x − 1920)/10) − 13((x − 1920)/10)² + 3((x − 1920)/10)³.
(b) To estimate the population in 1960, let x = 1960. Then p(1960) = 106 + 27(4) − 13(4)² + 3(4)³ = 198 million.
The actual population was 179 million in 1960. 16. (a) Letting z = x − 2003, the four points are ( −2, 217.8), (0, 256.3), ( 2, 312.4), and ( 4, 377.0).
Let p(z) = a0 + a1z + a2z² + a3z³.
a0 + a1(−2) + a2(−2)² + a3(−2)³ = a0 − 2a1 + 4a2 − 8a3 = 217.8
a0 + a1(0) + a2(0)² + a3(0)³ = a0 = 256.3
a0 + a1(2) + a2(2)² + a3(2)³ = a0 + 2a1 + 4a2 + 8a3 = 312.4
a0 + a1(4) + a2(4)² + a3(4)³ = a0 + 4a1 + 16a2 + 64a3 = 377.0
(b) Use Gauss-Jordan elimination to solve the system.
[1 −2 4 −8 217.8; 1 0 0 0 256.3; 1 2 4 8 312.4; 1 4 16 64 377.0] ⇒ [1 0 0 0 256.3; 0 1 0 0 24.4083; 0 0 1 0 2.2; 0 0 0 1 −0.189583]
So, p(z) = 256.3 + 24.4083z + 2.2z² − 0.189583z³. Letting z = x − 2003,
p(x) = 256.3 + 24.4083(x − 2003) + 2.2(x − 2003)² − 0.189583(x − 2003)³.
Predicted values after 2007 continue to increase, so the solution produces a reasonable model for predicting future sales.

18. Choosing a second-degree polynomial approximation p(x) = a0 + a1x + a2x², substitute x = 1, 2, and 4 into p(x) and equate the results to y = 0, 1, and 2, respectively.
a0 + a1 + a2 = 0
a0 + 2a1 + 4a2 = 1
a0 + 4a1 + 16a2 = 2
The solution to this system is a0 = −4/3, a1 = 3/2, and a2 = −1/6. So, p(x) = −4/3 + (3/2)x − (1/6)x².
Finally, to estimate log2 3, calculate p(3) = −4/3 + (3/2)(3) − (1/6)(3)² = 5/3.
20. Let
p1(x) = a0 + a1x + a2x² + … + a(n−1)x^(n−1) and
p2(x) = b0 + b1x + b2x² + … + b(n−1)x^(n−1)
be two different polynomials that pass through the n given points. The polynomial
p1(x) − p2(x) = (a0 − b0) + (a1 − b1)x + (a2 − b2)x² + … + (a(n−1) − b(n−1))x^(n−1)
is zero for these n values of x. Because a nonzero polynomial of degree n − 1 or less can have at most n − 1 distinct zeros, p1(x) − p2(x) must be the zero polynomial. So a0 = b0, a1 = b1, a2 = b2, …, a(n−1) = b(n−1). Therefore, there is only one polynomial function of degree n − 1 (or less) whose graph passes through n points in the plane with distinct x-coordinates.

22. (a) Each of the network's four junctions gives rise to a linear equation, as shown below.
input = output
300 = x1 + x2
x1 + x3 = x4 + 150
x2 + 200 = x3 + x5
x4 + x5 = 350
Reorganize these equations, form the augmented matrix, and use Gauss-Jordan elimination.
[1 1 0 0 0 300; 1 0 1 −1 0 150; 0 1 −1 0 −1 −200; 0 0 0 1 1 350] ⇒ [1 0 1 0 1 500; 0 1 −1 0 −1 −200; 0 0 0 1 1 350; 0 0 0 0 0 0]
Letting x5 = t and x3 = s be the free variables, you have
x1 = 500 − s − t
x2 = −200 + s + t
x3 = s
x4 = 350 − t
x5 = t.
(b) If x2 = 200 and x3 = 50, then you have s = 50 and t = 350. So, the solution is: x1 = 100, x2 = 200, x3 = 50, x4 = 0, x5 = 350.
(c) If x2 = 150 and x3 = 0, then you have s = 0 and t = 350. So, the solution is: x1 = 150, x2 = 150, x3 = 0, x4 = 0, x5 = 350.

24. (a) Each of the network's four junctions gives rise to a linear equation, as shown below.
input = output
400 + x2 = x1
x1 + x3 = x4 + 600
300 = x2 + x3 + x5
x4 + x5 = 100
Reorganize these equations, form the augmented matrix, and use Gauss-Jordan elimination.
[1 −1 0 0 0 400; 1 0 1 −1 0 600; 0 1 1 0 1 300; 0 0 0 1 1 100] ⇒ [1 0 1 0 1 700; 0 1 1 0 1 300; 0 0 0 1 1 100; 0 0 0 0 0 0]
Letting x5 = t and x3 = s be the free variables, you can describe the solution as
x1 = 700 − s − t
x2 = 300 − s − t
x3 = s
x4 = 100 − t
x5 = t, where t and s are any real numbers.
(b) If x3 = 0 and x5 = 100, then the solution is: x1 = 600, x2 = 200, x3 = 0, x4 = 0, x5 = 100.
(c) If x3 = x5 = 100, then the solution is: x1 = 500, x2 = 100, x3 = 100, x4 = 0, x5 = 100.
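A network-flow parametrization like the one in Exercise 24 can be verified by substituting it back into the junction equations; a supplementary sketch:

```python
# Check of Exercise 24: the parametrized network flows
#   x1 = 700 - s - t, x2 = 300 - s - t, x3 = s, x4 = 100 - t, x5 = t
# balance all four junctions for any choice of s and t.
def flows(s, t):
    return 700 - s - t, 300 - s - t, s, 100 - t, t

for s, t in [(0, 100), (100, 100), (25, 40)]:
    x1, x2, x3, x4, x5 = flows(s, t)
    assert 400 + x2 == x1            # junction 1: input = output
    assert x1 + x3 == x4 + 600       # junction 2
    assert 300 == x2 + x3 + x5       # junction 3
    assert x4 + x5 == 100            # junction 4
print("all junctions balanced")
```

With s = 0 and t = 100 the function reproduces the flows listed in part (b).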
26. Applying Kirchhoff's first law to either junction produces
I1 + I3 = I2
and applying the second law to the two paths produces
R1I1 + R2I2 = 4I1 + I2 = 16
R2I2 + R3I3 = I2 + 4I3 = 8.
Rearrange these equations, form the augmented matrix, and use Gauss-Jordan elimination.
[1 −1 1 0; 4 1 0 16; 0 1 4 8] ⇒ [1 0 0 3; 0 1 0 4; 0 0 1 1]
So, the solution is: I1 = 3, I2 = 4, and I3 = 1.

28. Applying Kirchhoff's first law to three of the four junctions produces
I1 + I3 = I2
I1 + I4 = I2
I3 + I6 = I5
and applying the second law to the three paths produces
R1I1 + R2I2 = 3I1 + 2I2 = 14
R2I2 + R4I4 + R5I5 + R3I3 = 2I2 + 2I4 + I5 + 4I3 = 25
R5I5 + R6I6 = I5 + I6 = 8.
Rearrange these equations, form the augmented matrix, and use Gauss-Jordan elimination.
[1 −1 1 0 0 0 0; 1 −1 0 1 0 0 0; 0 0 1 0 −1 1 0; 3 2 0 0 0 0 14; 0 2 4 2 1 0 25; 0 0 0 0 1 1 8] ⇒ [1 0 0 0 0 0 2; 0 1 0 0 0 0 4; 0 0 1 0 0 0 2; 0 0 0 1 0 0 2; 0 0 0 0 1 0 5; 0 0 0 0 0 1 3]
So, the solution is: I1 = 2, I2 = 4, I3 = 2, I4 = 2, I5 = 5, and I6 = 3.

30.
8x²/((x − 1)²(x + 1)) = A/(x + 1) + B/(x − 1) + C/(x − 1)²
8x² = A(x − 1)² + B(x − 1)(x + 1) + C(x + 1)
8x² = Ax² − 2Ax + A + Bx² − B + Cx + C
8x² = (A + B)x² + (−2A + C)x + A − B + C
So,
A + B = 8
−2A + C = 0
A − B + C = 0.
Use Gauss-Jordan elimination to solve the system.
[1 1 0 8; −2 0 1 0; 1 −1 1 0] ⇒ [1 0 0 2; 0 1 0 6; 0 0 1 4]
The solution is: A = 2, B = 6, and C = 4. So,
8x²/((x − 1)²(x + 1)) = 2/(x + 1) + 6/(x − 1) + 4/(x − 1)².
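Partial-fraction coefficients such as those in Exercise 30 can be verified by testing the polynomial identity at sample points; a supplementary sketch:

```python
# Check of Exercise 30: the coefficients A = 2, B = 6, C = 4 make
#   8 x^2 = A (x-1)^2 + B (x-1)(x+1) + C (x+1)
# an identity; verify at several sample points.
A, B, C = 2, 6, 4
for x in (0, 2, 5, -3):
    assert 8 * x * x == A * (x - 1) ** 2 + B * (x - 1) * (x + 1) + C * (x + 1)
print("partial-fraction identity holds")
```

Agreement at more points than the degree of the polynomials forces the identity to hold everywhere.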
Review Exercises for Chapter 1
32.
3x 2 − 7 x − 12
(x
+ 4)( x − 4)
2
19
A B C + + x + 4 x − 4 ( x − 4)2
=
3x 2 − 7 x − 12 = A( x − 4) + B( x + 4)( x − 4) + C ( x + 4) 2
3x 2 − 7 x − 12 = Ax 2 − 8 Ax + 16 A + Bx 2 − 16 B + Cx + 4C 3x 2 − 7 x − 12 = ( A + B ) x 2 + ( −8 A + C ) x + 16 A − 16 B + 4C
So,
A + −8 A
=
B
3
+ C = −7
16 A − 16 B + 4C = −12. Use Gauss-Jordan elimination to solve the system. 1 0 3⎤ ⎡ 1 ⎡ 1 0 0 1⎤ ⎢ ⎥ ⎢ ⎥ − 8 0 1 − 7 ⇒ ⎢ ⎥ ⎢0 1 0 2⎥ ⎢ 16 −16 4 −12⎥ ⎢0 0 1 1⎥ ⎣ ⎦ ⎣ ⎦ The solution is: A = 1, B = 2, and C = 1. So,
3x 2 − 7 x − 12
( x + 4)( x − 4)
2
=
1 2 1 + + x + 4 x − 4 ( x − 4)2
34. Use Gauss-Jordan elimination to solve the system. ⎡0 2 2 −2⎤ ⎡ 1 0 0 25⎤ ⎢ ⎥ ⎢ ⎥ ⎢2 0 1 −1⎥ ⇒ ⎢0 1 0 50⎥ ⎢2 1 0 100⎥ ⎢0 0 1 −51⎥ ⎣ ⎦ ⎣ ⎦
So, x = 25, y = 50, and λ = −51.
36. Let x = number of touchdowns, y = number of extra-point kicks, and z = number of field goals.
6x + y + 3z = 55
x − y = 0
x − 3z = 1
Use Gauss-Jordan elimination to solve the system.
[6 1 3 55; 1 −1 0 0; 1 0 −3 1] ⇒ [1 0 0 7; 0 1 0 7; 0 0 1 2]
Because x = 7, y = 7, and z = 2, there were 7 touchdowns, 7 extra-point kicks, and 2 field goals.
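The same scoring system can be checked numerically; this is a hedged sketch assuming NumPy, not the manual's method:

```python
# Solve the Exercise 36 scoring system: touchdowns x, extra points y, field goals z.
import numpy as np

A = np.array([[6, 1, 3],
              [1, -1, 0],
              [1, 0, -3]], dtype=float)
b = np.array([55, 0, 1], dtype=float)

x, y, z = np.linalg.solve(A, b)
print(x, y, z)  # approximately 7, 7, 2
```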
Review Exercises for Chapter 1

2. Because the equation cannot be written in the form a1x + a2y = b, it is not linear in the variables x and y.
4. Because the equation is in the form a1x + a2y = b, it is linear in the variables x and y.
6. Because the equation cannot be written in the form a1x + a2y = b, it is not linear in the variables x and y.
8. Because the equation is in the form a1x + a2y = b, it is linear in the variables x and y.
10. Choosing x2 and x3 as the free variables and letting x2 = s and x3 = t , you have
3x1 + 2s − 4t = 0
3x1 = −2s + 4t
x1 = (1/3)(−2s + 4t)
So, the solution set can be described as x1 = −(2/3)s + (4/3)t, x2 = s, and x3 = t, where s and t are real numbers.

12. Row reduce the augmented matrix for this system.
[1 1 −1; 3 2 0] ⇒ [1 1 −1; 0 −1 3] ⇒ [1 1 −1; 0 1 −3] ⇒ [1 0 2; 0 1 −3]
Converting back to a linear system, the solution is x = 2 and y = −3.
Chapter 1
Systems of Linear Equations
14. Rearrange the equations, form the augmented matrix, and row reduce.
[1 −1 3; −4 1 −10] ⇒ [1 −1 3; 0 −3 2] ⇒ [1 −1 3; 0 1 −2/3] ⇒ [1 0 7/3; 0 1 −2/3]
Converting back to a linear system, you obtain the solution x = 7/3 and y = −2/3.
16. Rearrange the equations, form the augmented matrix, and row reduce.
[4 1 0; −1 1 0] ⇒ [1 −1 0; 4 1 0] ⇒ [1 −1 0; 0 5 0] ⇒ [1 0 0; 0 1 0]
Converting back to a linear system, the solution is x = y = 0.

18. Row reduce the augmented matrix for this system.
[40 30 24; 20 15 −14] ⇒ [1 3/4 3/5; 20 15 −14] ⇒ [1 3/4 3/5; 0 0 −26]
Because the second row corresponds to the false statement 0 = −26, the system has no solution.

20. Multiplying both equations by 100 and forming the augmented matrix produces
[20 −10 7; 40 −50 −1].
Gauss-Jordan elimination yields the following.
[1 −1/2 7/20; 40 −50 −1] ⇒ [1 −1/2 7/20; 0 −30 −15] ⇒ [1 −1/2 7/20; 0 1 1/2] ⇒ [1 0 3/5; 0 1 1/2]
So, the solution is: x = 3/5 and y = 1/2.

22. Use Gauss-Jordan elimination on the augmented matrix, which reduces to
[1 0 −3; 0 1 7].
So, the solution is: x = −3, y = 7.

24. Because the matrix has 3 rows and 2 columns, it has size 3 × 2.

26. The matrix satisfies all three conditions in the definition of row-echelon form. Because each column that has a leading 1 (columns 1 and 4) has zeros elsewhere, the matrix is in reduced row-echelon form.

28. The matrix satisfies all three conditions in the definition of row-echelon form. Because each column that has a leading 1 (columns 2 and 3) has zeros elsewhere, the matrix is in reduced row-echelon form.

30. This matrix corresponds to the system
x1 + 2x2 + 3x3 = 0
0 = 1.
Because the second equation is impossible, the system has no solution.

32. Use Gauss-Jordan elimination on the augmented matrix.
[2 3 1 10; 2 −3 −3 22; 4 −2 3 −2] ⇒ [1 0 0 5; 0 1 0 2; 0 0 1 −6]
So, the solution is: x = 5, y = 2, and z = −6.

34. Use Gauss-Jordan elimination on the augmented matrix.
[2 0 6 −9; 3 2 11 −16; 3 −1 7 −11] ⇒ [1 0 0 −3/4; 0 1 0 0; 0 0 1 −5/4]
So, the solution is: x = −3/4, y = 0, and z = −5/4.

36. Use Gauss-Jordan elimination on the augmented matrix.
[1 2 6 1; 2 5 15 4; 3 1 3 −6] ⇒ [1 2 6 1; 0 1 3 2; 0 0 0 1]
Because the third row corresponds to the false statement 0 = 1, there is no solution.

38. Use Gauss-Jordan elimination on the augmented matrix.
[2 5 −19 34; 3 8 −31 54] ⇒ [1 0 3 2; 0 1 −5 6]
So, the solution is: x1 = 2 − 3t, x2 = 6 + 5t, and x3 = t, where t is any real number.

40. Use Gauss-Jordan elimination on the augmented matrix, which reduces to
[1 0 0 0 0 2; 0 1 0 0 0 0; 0 0 1 0 0 4; 0 0 0 1 0 −1; 0 0 0 0 1 2].
So, the solution is: x1 = 2, x2 = 0, x3 = 4, x4 = −1, x5 = 2.
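The Gauss-Jordan reductions used throughout these exercises can be sketched as a short routine. This is a minimal illustration assuming NumPy, applied to the Exercise 38 augmented matrix; it is not the manual's own code:

```python
# A minimal Gauss-Jordan reduction with partial pivoting.
import numpy as np

def rref(M, tol=1e-12):
    """Return the reduced row-echelon form of M as a float array."""
    M = M.astype(float).copy()
    rows, cols = M.shape
    r = 0
    for c in range(cols):
        if r == rows:
            break
        pivot = r + np.argmax(np.abs(M[r:, c]))  # choose the largest pivot
        if abs(M[pivot, c]) < tol:
            continue                              # no pivot in this column
        M[[r, pivot]] = M[[pivot, r]]             # swap rows
        M[r] /= M[r, c]                           # normalize the pivot row
        for i in range(rows):
            if i != r:
                M[i] -= M[i, c] * M[r]            # eliminate above and below
        r += 1
    return M

aug = np.array([[2, 5, -19, 34],
                [3, 8, -31, 54]])
print(rref(aug))  # [[1, 0, 3, 2], [0, 1, -5, 6]]
```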
42. Using a graphing utility, the augmented matrix reduces to
[1 5 0 0; 0 0 1 0; 0 0 0 1; 0 0 0 0].
The system is inconsistent, so there is no solution.

44. Using a graphing utility, the augmented matrix reduces to
[1 0 0 0 −407/98; 0 1 0 0 −335/49; 0 0 1 0 −3/2; 0 0 0 1 743/98].
So, the solution is: x = −407/98, y = −335/49, z = −3/2, and w = 743/98.

46. Using a graphing utility, the augmented matrix reduces to
[1 0 0 1.5 0; 0 1 0 0.5 0; 0 0 1 0.5 0].
Letting w = t, you have x = −1.5t, y = −0.5t, z = −0.5t, w = t, where t is any real number.

48. Use Gauss-Jordan elimination on the augmented matrix.
[2 4 −7 0; 1 −3 9 0; 6 0 9 0] ⇒ [1 0 3/2 0; 0 1 −5/2 0; 0 0 0 0]
Letting x3 = t be the free variable, you have x1 = −(3/2)t, x2 = (5/2)t, and x3 = t, where t is any real number.

50. Use Gauss-Jordan elimination on the augmented matrix.
[1 3 5 0; 1 4 1/2 0] ⇒ [1 0 37/2 0; 0 1 −9/2 0]
Letting x3 = t be the free variable, you have x1 = −(37/2)t, x2 = (9/2)t, and x3 = t.
52. Use Gaussian elimination on the augmented matrix.
[1 −1 2 0; −1 1 −1 0; 1 k 1 0] ⇒ [1 −1 2 0; 0 0 1 0; 0 k + 1 −1 0] ⇒ [1 −1 2 0; 0 k + 1 −1 0; 0 0 1 0]
So, there will be exactly one solution (the trivial solution x = y = z = 0) if and only if k ≠ −1.

54. Form the augmented matrix for the system
[2 −1 1 a; 1 1 2 b; 0 3 3 c]
and use Gaussian elimination to reduce the matrix to row-echelon form.
[1 −1/2 1/2 a/2; 1 1 2 b; 0 3 3 c] ⇒ [1 −1/2 1/2 a/2; 0 3/2 3/2 b − a/2; 0 3 3 c]
⇒ [1 −1/2 1/2 a/2; 0 1 1 (2b − a)/3; 0 3 3 c] ⇒ [1 −1/2 1/2 a/2; 0 1 1 (2b − a)/3; 0 0 0 c − 2b + a]
(a) If c − 2b + a ≠ 0, then the system has no solution.
(b) The system cannot have exactly one solution.
(c) If c − 2b + a = 0, then the system has infinitely many solutions.
56. Find all possible first rows.
[0
0 0], [0 0 1], [0 1 0], [0 1 a], [1 0 0], [1 a 0], [1 a b], [1 0 a]
where a and b are nonzero real numbers. For each of these examine the possible second rows. ⎡0 0 0⎤ ⎡0 0 1⎤ ⎡0 1 0⎤ ⎡0 1 0⎤ ⎢ ⎥, ⎢ ⎥, ⎢ ⎥, ⎢ ⎥, ⎣0 0 0⎦ ⎣0 0 0⎦ ⎣0 0 0⎦ ⎣0 0 1⎦ ⎡0 1 a⎤ ⎢ ⎥, ⎣0 0 0⎦
⎡1 0 0⎤ ⎢ ⎥, ⎣0 0 0⎦
⎡1 0 0⎤ ⎢ ⎥, ⎣0 1 0⎦
⎡1 0 0⎤ ⎢ ⎥, ⎣0 0 1⎦
⎡1 0 0⎤ ⎢ ⎥, ⎣0 1 a⎦
⎡1 a 0⎤ ⎢ ⎥, ⎣0 0 0⎦
⎡1 a 0⎤ ⎢ ⎥, ⎣0 0 1⎦
⎡1 a b⎤ ⎢ ⎥, ⎣0 0 0⎦
⎡1 0 a⎤ ⎢ ⎥, ⎣0 0 0⎦
⎡1 0 a⎤ ⎢ ⎥ ⎣0 1 0⎦
58. Use Gaussian elimination on the augmented matrix.
[λ + 2 −2 3 0; −2 λ − 1 6 0; 1 2 λ 0] ⇒ [1 2 λ 0; 0 λ + 3 2λ + 6 0; 0 −2λ − 6 −λ² − 2λ + 3 0]
⇒ [1 2 λ 0; 0 λ + 3 2λ + 6 0; 0 0 −(λ² − 2λ − 15) 0]
So, you need λ² − 2λ − 15 = (λ − 5)(λ + 3) = 0, which implies λ = 5 or λ = −3.

60. (a) True. A homogeneous system of linear equations is always consistent, because there is always a trivial solution, i.e., when all variables are equal to zero. See Theorem 1.1 on page 25.
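The λ values found in Exercise 58 can be checked numerically; for a homogeneous system to have nontrivial solutions, the coefficient matrix must be singular. This is a hedged check assuming NumPy, not part of the manual:

```python
# For λ = 5 and λ = -3 the Exercise 58 coefficient matrix should be singular.
import numpy as np

def coeff(lam):
    return np.array([[lam + 2, -2, 3],
                     [-2, lam - 1, 6],
                     [1, 2, lam]], dtype=float)

for lam in (5.0, -3.0):
    print(lam, np.linalg.det(coeff(lam)))  # determinant is (numerically) 0 for both
```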
(b) False. Consider for example the following system (with three variables and two equations).
x+ y − z = 2 −2 x − 2 y + 2 z = 1. It is easy to see that this system has no solutions. 62. (a) Let x = number of touchdowns, y = number of extra-point kicks, and z = number of field goals.
6x + y + 3z = 45
x − y = 0
x − 6z = 0
(b) Use Gauss-Jordan elimination to solve the system. ⎡6 1 3 45⎤ ⎡ 1 0 0 6⎤ ⎢ ⎥ ⎢ ⎥ 1 1 0 0 − ⇒ ⎢ ⎥ ⎢0 1 0 6⎥ ⎢ 1 0 −6 0⎥ ⎢0 0 1 1⎥ ⎣ ⎦ ⎣ ⎦
Because x = 6, y = 6, and z = 1, there were 6 touchdowns, 6 extra-point kicks, and 1 field goal.
64.
(3x² + 3x − 2) / ((x + 1)²(x − 1)) = A/(x + 1) + B/(x − 1) + C/(x + 1)²
3x² + 3x − 2 = A(x + 1)(x − 1) + B(x + 1)² + C(x − 1)
3x² + 3x − 2 = Ax² − A + Bx² + 2Bx + B + Cx − C
3x² + 3x − 2 = (A + B)x² + (2B + C)x + (−A + B − C)
So,
A + B = 3
2B + C = 3
−A + B − C = −2.
Use Gauss-Jordan elimination to solve the system.
[1 1 0 3; 0 2 1 3; −1 1 −1 −2] ⇒ [1 0 0 2; 0 1 0 1; 0 0 1 1]
The solution is: A = 2, B = 1, and C = 1. So,
(3x² + 3x − 2) / ((x + 1)²(x − 1)) = 2/(x + 1) + 1/(x − 1) + 1/(x + 1)².
66. (a) Because there are four points, choose a third-degree polynomial, p(x) = a0 + a1x + a2x² + a3x³. By substituting the values at each point into this equation, you obtain the system
a0 − a1 + a2 − a3 = −1
a0 = 0
a0 + a1 + a2 + a3 = 1
a0 + 2a1 + 4a2 + 8a3 = 4.
Use Gauss-Jordan elimination on the augmented matrix.
[1 −1 1 −1 −1; 1 0 0 0 0; 1 1 1 1 1; 1 2 4 8 4] ⇒ [1 0 0 0 0; 0 1 0 0 2/3; 0 0 1 0 0; 0 0 0 1 1/3]
So, p(x) = (2/3)x + (1/3)x³.
(b) The graph of p(x) passes through the four points (−1, −1), (0, 0), (1, 1), and (2, 4).
68. Substituting the points, (1, 0), (2, 0), (3, 0), and (4, 0) into the polynomial p( x) yields the system
a0 + a1 +
a2 +
a3 = 0
a0 + 2a1 + 4a2 + 8a3 = 0 a0 + 3a1 + 9a2 + 27 a3 = 0 a0 + 4a1 + 16a2 + 64a3 = 0. Gaussian elimination shows that the only solution is a0 = a1 = a2 = a3 = 0.
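The Exercise 68 coefficient matrix is a Vandermonde matrix in the nodes 1, 2, 3, 4; distinct nodes make it invertible, so the homogeneous system has only the trivial solution. A hedged check assuming NumPy:

```python
# Rank of the Vandermonde matrix for nodes 1, 2, 3, 4.
import numpy as np

nodes = np.array([1.0, 2.0, 3.0, 4.0])
V = np.vander(nodes, increasing=True)   # rows are [1, x, x^2, x^3]

print(np.linalg.matrix_rank(V))  # 4, so V a = 0 forces a = 0
```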
70. You are looking for a quadratic function y = ax² + bx + c such that the points (0, 40), (6, 73), and (12, 52) lie on its graph. The system of equations to fit the data to a quadratic polynomial is
c = 40
36a + 6b + c = 73
144a + 12b + c = 52.
Substituting c = 40 into the second and the third equations, you obtain
36a + 6b = 33 or 12a + 2b = 11
144a + 12b = 12 or 12a + b = 1.
Adding −1 times the second equation to the first equation you obtain b = 10, so a = −9/12 = −3/4. If you use a graphing utility to graph
y = −0.75x² + 10x + 40,
you will see that the data points (0, 40), (6, 73), and (12, 52) lie on the graph. On page 29 it is stated that if there are n points with distinct x-coordinates, then there is precisely one polynomial function of degree n − 1 (or less) that fits these points.

72. (a) First find the equations corresponding to each node in the network (input = output):
x1 + 200 = x2 + x4
x6 + 100 = x1 + x3
x2 + x3 = x5 + 300
x4 + x5 = x6
Rearranging this system and forming the augmented matrix, you have
[1 −1 0 −1 0 0 −200; 1 0 1 0 0 −1 100; 0 1 1 0 −1 0 300; 0 0 0 1 1 −1 0].
The equivalent reduced row-echelon matrix is
[1 0 1 0 0 −1 100; 0 1 1 0 −1 0 300; 0 0 0 1 1 −1 0; 0 0 0 0 0 0 0].
Choosing x3 = r, x5 = s, and x6 = t as the free variables, you obtain
x1 = 100 − r + t
x2 = 300 − r + s
x4 = −s + t,
where r, s, and t are any real numbers.
(b) When x3 = 100 = r, x5 = 50 = s, and x6 = 50 = t, you have
x1 = 100 − 100 + 50 = 50
x2 = 300 − 100 + 50 = 250
x4 = −50 + 50 = 0.
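The parametric network-flow solution of Exercise 72 can be plugged back into the four node equations for several choices of the free variables. A hedged check assuming NumPy:

```python
# Verify the Exercise 72 parametric solution against the node equations.
import numpy as np

def flows(r, s, t):
    """Return (x1, ..., x6) from the free variables x3 = r, x5 = s, x6 = t."""
    x1 = 100 - r + t
    x2 = 300 - r + s
    x4 = -s + t
    return np.array([x1, x2, r, x4, s, t], dtype=float)

A = np.array([[1, -1, 0, -1, 0, 0],
              [1, 0, 1, 0, 0, -1],
              [0, 1, 1, 0, -1, 0],
              [0, 0, 0, 1, 1, -1]], dtype=float)
b = np.array([-200, 100, 300, 0], dtype=float)

for r, s, t in [(100, 50, 50), (0, 0, 0), (10, 20, 30)]:
    print(np.allclose(A @ flows(r, s, t), b))  # True for every choice
```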
Project Solutions for Chapter 1

1 Graphing Linear Equations
1. [2 −1 3; a b 6] ⇒ [1 −1/2 3/2; 0 b + (1/2)a 6 − (3/2)a]
(a) Unique solution if b + (1/2)a ≠ 0. For instance, a = b = 2.
(b) Infinite number of solutions if b + (1/2)a = 0 and 6 − (3/2)a = 0 ⇒ a = 4 and b = −2.
(c) No solution if b + (1/2)a = 0 and 6 − (3/2)a ≠ 0 ⇒ a ≠ 4 and b = −(1/2)a. For instance, a = 2, b = −1.
(d) Graphing each pair of lines:
(a) 2x − y = 3 and 2x + 2y = 6 intersect in a single point;
(b) 2x − y = 3 and 4x − 2y = 6 are the same line;
(c) 2x − y = 3 and 2x − y = 6 are parallel.
(The answers are not unique.)

2. (a) x + y + z = 0
       y + z = 1
           z = 2
(b) x + y + z = 0
    x + y + z = 0
    x − y − z = 0
(c) x + y + z = 0
    x + y + z = 1
    x − y − z = 0
(The answers are not unique.) There are other configurations, such as three mutually parallel planes, or three planes that intersect pairwise in lines.

2 Underdetermined and Overdetermined Systems of Equations
1. Yes, x + y = 2 is a consistent underdetermined system.
2. Yes,
x + y = 2
2x + 2y = 4
3x + 3y = 6
is a consistent, overdetermined system.
3. Yes,
x + y + z = 1
x + y + z = 2
is an inconsistent underdetermined system.
4. Yes,
x + y = 1
x + y = 2
x + y = 3
is an inconsistent overdetermined system.
5. In general, a linear system with more equations than variables would probably be inconsistent. Here is an intuitive reason: each variable represents a degree of freedom, while each equation gives a condition that, in general, reduces the number of degrees of freedom by one. If there are more equations (conditions) than variables (degrees of freedom), then there are too many conditions for the system to be consistent, so you expect such a system to be inconsistent in general. But, as Exercise 2 shows, this is not always true.
6. In general, a linear system with more variables than equations would probably be consistent. As in Exercise 5, the intuitive explanation is as follows: each variable represents a degree of freedom, and each equation represents a condition that takes away one degree of freedom. If there are more variables than equations, in general, you would expect a solution. But, as Exercise 3 shows, this is not always true.
CHAPTER 2
Matrices
Section 2.1 Operations with Matrices .....................................................................27
Section 2.2 Properties of Matrix Operations...........................................................33
Section 2.3 The Inverse of a Matrix........................................................................38
Section 2.4 Elementary Matrices.............................................................................42
Section 2.5 Applications of Matrix Operations ......................................................47
Review Exercises ..........................................................................................................54
Section 2.1 Operations with Matrices ⎡ 1 −1⎤ ⎡ 2 −1⎤ ⎡1 + 2 −1 − 1⎤ ⎡3 −2⎤ 1. (a) A + B = ⎢ ⎥ + ⎢ ⎥ = ⎢ ⎥ = ⎢ ⎥ ⎣2 −1⎦ ⎣−1 8⎦ ⎣2 − 1 −1 + 8⎦ ⎣1 7⎦ ⎡ 1 −1⎤ ⎡ 2 −1⎤ ⎡1 − 2 −1 + 1⎤ ⎡−1 0⎤ (b) A − B = ⎢ ⎥ −⎢ ⎥ = ⎢ ⎥ = ⎢ ⎥ ⎣2 −1⎦ ⎣−1 8⎦ ⎣2 + 1 −1 − 8⎦ ⎣ 3 −9⎦
(c) 2A = 2[1 −1; 2 −1] = [2 −2; 4 −2]
(d) 2A − B = [2 −2; 4 −2] − [2 −1; −1 8] = [0 −1; 5 −10]
(e) B + (1/2)A = [2 −1; −1 8] + (1/2)[1 −1; 2 −1] = [2 −1; −1 8] + [1/2 −1/2; 1 −1/2] = [5/2 −3/2; 0 15/2]
⎡ 6 −1⎤ ⎡ 1 4⎤ ⎡ 6 + 1 −1 + 4⎤ ⎡ 7 3⎤ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ 3. (a) A + B = ⎢ 2 4⎥ + ⎢−1 5⎥ = ⎢2 + ( −1) 4 + 5⎥ = ⎢ 1 9⎥ ⎢−3 5⎥ ⎢ 1 10⎥ ⎢ −3 + 1 5 + 10⎥ ⎢−2 15⎥ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦
⎡ 6 −1⎤ ⎡ 1 4⎤ ⎡ 6 − 1 −1 − 4⎤ ⎡ 5 −5⎤ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ (b) A − B = ⎢ 2 4⎥ − ⎢−1 5⎥ = ⎢2 − ( −1) 4 − 5⎥ = ⎢ 3 −1⎥ ⎢−3 5⎥ ⎢ 1 10⎥ ⎢ −3 − 1 5 − 10⎥ ⎢−4 −5⎥ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦
⎡ 2(6) 2( −1)⎤ ⎡ 6 −1⎤ ⎡ 12 −2⎤ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ (c) 2 A = 2 ⎢ 2 4⎥ = ⎢ 2( 2) 2( 4)⎥ = ⎢ 4 8⎥ ⎢2( −3) 2(5)⎥ ⎢−3 5⎥ ⎢−6 10⎥ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎡ 12 −2⎤ ⎡ 1 4⎤ ⎡ 12 − 1 −2 − 4⎤ ⎡ 11 −6⎤ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ 8 − 5⎥ = ⎢ 5 3⎥ (d) 2 A − B = ⎢ 4 8⎥ − ⎢−1 5⎥ = ⎢4 − ( −1) ⎢−6 10⎥ ⎢ 1 10⎥ ⎢ −6 − 1 10 − 10⎥ ⎢−7 0⎥⎦ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎣
(e) B + (1/2)A = [1 4; −1 5; 1 10] + (1/2)[6 −1; 2 4; −3 5] = [1 4; −1 5; 1 10] + [3 −1/2; 1 2; −3/2 5/2] = [4 7/2; 0 7; −1/2 25/2]
⎡3 2 −1⎤ ⎡0 2 1⎤ ⎡3 + 0 2 + 2 −1 + 1⎤ ⎡ 3 4 0⎤ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ 5. (a) A + B = ⎢2 4 5⎥ + ⎢5 4 2⎥ = ⎢2 + 5 4 + 4 5 + 2⎥ = ⎢7 8 7⎥ ⎢0 1 2⎥ ⎢2 1 0⎥ ⎢0 + 2 1 + 1 2 + 0⎥ ⎢2 2 2⎥ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎡ 3 2 −1⎤ ⎡0 2 1⎤ ⎡3 − 0 2 − 2 −1 − 1⎤ ⎡ 3 0 −2⎤ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ 3⎥ (b) A − B = ⎢2 4 5⎥ − ⎢5 4 2⎥ = ⎢2 − 5 4 − 4 5 − 2⎥ = ⎢−3 0 ⎢0 1 2⎥ ⎢2 1 0⎥ ⎢0 − 2 1 − 1 2 − 0⎥ ⎢−2 0 2⎥ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦
⎡ 2(3) 2( 2) 2( −1)⎤ ⎡3 2 −1⎤ ⎡6 4 −2⎤ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ (c) 2 A = 2 ⎢2 4 5⎥ = ⎢2( 2) 2( 4) 2(5)⎥ = ⎢4 8 10⎥ ⎢2(0) 2(1) 2( 2)⎥ ⎢0 1 2⎥ ⎢0 2 4⎥ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦
⎡ 3 2 −1⎤ ⎡0 2 1⎤ ⎡6 4 −2⎤ ⎡0 2 1⎤ ⎡ 6 2 −3⎤ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ (d) 2 A − B = 2 ⎢2 4 5⎥ − ⎢5 4 2⎥ = ⎢4 8 10⎥ − ⎢5 4 2⎥ = ⎢ −1 4 8⎥ ⎢0 1 2⎥ ⎢2 1 0⎥ ⎢0 2 4⎥ ⎢2 1 0⎥ ⎢−2 1 4⎥ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦
(e) B + (1/2)A = [0 2 1; 5 4 2; 2 1 0] + (1/2)[3 2 −1; 2 4 5; 0 1 2] = [0 2 1; 5 4 2; 2 1 0] + [3/2 1 −1/2; 1 2 5/2; 0 1/2 1] = [3/2 3 1/2; 6 6 9/2; 2 3/2 1]
7. (a) c21 = 2a21 − 3b21 = 2( −3) − 3(0) = −6
(b) c13 = 2a13 − 3b13 = 2( 4) − 3(−7) = 29 9. Expanding both sides of the equation produces
⎡4 x 4 y⎤ ⎡ 2 y + 8 2 z + 2 x⎤ ⎢ ⎥ = ⎢ ⎥. z − 4 4 ⎣ ⎦ ⎣−2 x + 10 2 − 2 x ⎦ By setting corresponding entries equal to each other, you obtain four equations. 4x = 2 y + 8 ⇒ 4y =
2z + 2x ⇒ 2x − 4 y + 2z = 0
4 z = −2 x + 10 ⇒ −4 =
4x − 2 y = 8 2 x + 4 z = 10
2 − 2x ⇒
2x = 6
Gauss-Jordan elimination produces x = 3, y = 2, and z = 1.
⎡ 1( 2) + 2( −1) 1( −1) + 2(8)⎤ ⎡ 1 2⎤⎡ 2 −1⎤ ⎡0 15⎤ 11. (a) AB = ⎢ ⎥ = ⎢ ⎥⎢ ⎥ = ⎢ ⎥ ⎢⎣4( 2) + 2( −1) 4( −1) + 2(8)⎥⎦ ⎣4 2⎦⎣−1 8⎦ ⎣6 12⎦ ⎡2(1) + ( −1)( 4) 2( 2) + (−1)( 2)⎤ ⎡ 2 −1⎤⎡ 1 2⎤ ⎡−2 2⎤ (b) BA = ⎢ ⎥ = ⎢ ⎥⎢ ⎥ = ⎢ ⎥ 1 1 8 4 1 2 8 2 − + − + 1 8 4 2 − ( ) ( ) ( ) ( ) ⎢⎣ ⎣ ⎦⎣ ⎦ ⎣ 31 14⎦ ⎦⎥ 13. (a) AB is not defined because A is 3 × 2 and B is 3 × 3.
⎡0 −1 0⎤ ⎡ 2 1⎤ ⎢ ⎥⎢ ⎥ (b) BA = ⎢4 0 2⎥ ⎢−3 4⎥ ⎢8 −1 7⎥ ⎢ 1 6⎥ ⎣ ⎦⎣ ⎦ ⎡0( 2) + ( −1)( −3) + 0(1) 0(1) + ( −1)( 4) + 0(6)⎤ ⎡ 3 −4⎤ ⎢ ⎥ ⎢ ⎥ 4(1) + 0( 4) + 2(6)⎥ = ⎢10 16⎥ = ⎢ 4( 2) + 0( −3) + 2(1) ⎢8( 2) + ( −1)( −3) + 7(1) 8(1) + (−1)( 4) + 7(6)⎥ ⎢26 46⎥ ⎣ ⎦ ⎣ ⎦
⎡ −1(1) + 3(0) −1( 2) + 3(7)⎤ ⎡−1 3⎤ ⎡−1 19⎤ ⎢ ⎥ ⎢ ⎥ ⎡ 1 2⎤ ⎢ ⎥ 15. (a) AB = ⎢ 4 −5⎥ ⎢ ⎥ = ⎢4(1) + ( −5)(0) 4( 2) + ( −5)(7)⎥ = ⎢ 4 −27⎥ ⎢ 0(1) + 2(0) ⎢ ⎥ ⎣0 7⎦ ⎢ ⎥ 0( 2) + 2(7)⎥⎦ ⎣ 0 2⎦ ⎣ 0 14⎦ ⎣ (b) BA is not defined because B is a 2 × 2 matrix and A is a 3 × 2 matrix. ⎡ 6(10) 6(12)⎤ ⎡ 6⎤ ⎡ 60 72⎤ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ 2 10 2 12 − − − 2 ( ) ( ) ⎢ ⎥ ⎢−20 −24⎥ 17. (a) AB = ⎢ ⎥ [10 12] = ⎢ = ⎢ 1⎥ ⎢ 10 12⎥ 1 10 1(12)⎥ ⎢ ( ) ⎥ ⎢ ⎥ ⎢ ⎥ ⎢⎣ 6(10) 6(12)⎥⎦ ⎢⎣ 6⎥⎦ ⎢⎣ 60 72⎥⎦ (b) BA is not defined because B is a 1 × 2 matrix and A is a 4 × 1 matrix.
19. Use a graphing utility or computer software program.
5 6 1 8⎤ ⎡ 5 −2 ⎢ ⎥ ⎢ 0 5 5 −1 −3 8⎥ ⎢ 6 −8 −1 4 7 −9⎥ ⎥ (a) 2 A + B = ⎢ ⎢5 0 9 2 3 3⎥ ⎢ ⎥ 1 −7 −8 6 8⎥ ⎢12 ⎢ 5 2 10 −10 −6 −5⎥ ⎣ ⎦ 8 −13 ⎡ 1 ⎢ 7 13 1 − ⎢ ⎢ −3 −3 −10 (b) 3B − A = ⎢ ⎢ 1 7 6 ⎢ ⎢ 1 −4 −7 ⎢ 1 −8 9 ⎣ ⎡ 2 −2 −5 ⎢ 10 ⎢ 6 −27 ⎢ 1 22 −34 (c) AB = ⎢ ⎢ 4 −4 −11 ⎢ ⎢ 8 −4 −11 ⎢ −2 −11 −30 ⎣
11
3
11
−2
−2
0
6
2
4
11
−2
−11
−2
−3
−2
−21
15
37
1
2
9 −10 10
3
21 −21 ⎡ 8 16 ⎢ 15 19 20 6 − ⎢ ⎢ −4 0 −12 −2 (d) BA = ⎢ 3 ⎢ 16 −6 12 ⎢ 9 1 −26 ⎢ 20 ⎢−10 −26 3 33 ⎣
−8 6 −5 11 −6 9
3⎤ ⎥ 3⎥ 1⎥ ⎥ −5⎥ ⎥ 3⎥ −1⎥⎦
−8⎤ ⎥ 1⎥ 7⎥ ⎥ −1⎥ ⎥ 17⎥ 9⎥⎦ 28⎤ ⎥ −8⎥ 11⎥ ⎥ 6⎥ ⎥ 23⎥ −33⎥⎦
21. A + B is defined and has size 3 × 4 because A and B have size 3 × 4. 23.
1D 2
is defined and has size 4 × 2 because D has size 4 × 2.
25. AC is defined. Because A has size 3 × 4 and C has size 4 × 2, the size of AC is 3 × 2. 27. E − 2 A is not defined because E and 2A have different sizes. 29. In matrix form Ax = b, the system is
⎡ −1 ⎢ ⎣−2
1⎤ ⎡ x1 ⎤ ⎡4⎤ ⎥ ⎢ ⎥ = ⎢ ⎥. 1⎦ ⎣ x2 ⎦ ⎣0⎦
Use Gauss-Jordan elimination on the augmented matrix. ⎡ −1 ⎢ ⎣−2
1 1
4⎤ ⎡ 1 0 4⎤ ⎥ ⇒ ⎢ ⎥ 0⎦ ⎣0 1 8⎦
⎡ x1 ⎤ ⎡4⎤ So, the solution is ⎢ ⎥ = ⎢ ⎥. x ⎣ 2⎦ ⎣8⎦
31. In matrix form Ax = b, the system is
[−2 −3; 6 1][x1; x2] = [−4; −36].
Use Gauss-Jordan elimination on the augmented matrix.
[−2 −3 −4; 6 1 −36] ⇒ [1 0 −7; 0 1 6]
So, the solution is [x1; x2] = [−7; 6].

33. In matrix form Ax = b, the system is
[1 −2 3; −1 3 −1; 2 −5 5][x1; x2; x3] = [9; −6; 17].
Use Gauss-Jordan elimination on the augmented matrix.
[1 −2 3 9; −1 3 −1 −6; 2 −5 5 17] ⇒ [1 0 0 1; 0 1 0 −1; 0 0 1 2]
So, the solution is [x1; x2; x3] = [1; −1; 2].

35. In matrix form Ax = b, the system is
[1 −5 2; −3 1 −1; 0 −2 5][x1; x2; x3] = [−20; 8; −16].
Use Gauss-Jordan elimination on the augmented matrix.
[1 −5 2 −20; −3 1 −1 8; 0 −2 5 −16] ⇒ [1 0 0 −1; 0 1 0 3; 0 0 1 −2]
So, the solution is [x1; x2; x3] = [−1; 3; −2].
37. Expanding the left side of the equation produces
[1 2; 3 5]A = [1 2; 3 5][a11 a12; a21 a22] = [a11 + 2a21, a12 + 2a22; 3a11 + 5a21, 3a12 + 5a22] = [1 0; 0 1],
from which you obtain the system
a11 + 2a21 = 1
a12 + 2a22 = 0
3a11 + 5a21 = 0
3a12 + 5a22 = 1.
Solving by Gauss-Jordan elimination yields a11 = −5, a12 = 2, a21 = 3, and a22 = −1. So, you have
A = [−5 2; 3 −1].

39. Expand the left side of the matrix equation.
[1 2; 3 4][a b; c d] = [6 3; 19 2]
[a + 2c, b + 2d; 3a + 4c, 3b + 4d] = [6 3; 19 2]
By setting corresponding entries equal to each other, you obtain four equations.
a + 2c = 6
b + 2d = 3
3a + 4c = 19
3b + 4d = 2
Gauss-Jordan elimination produces a = 7, b = −4, c = −1/2, and d = 7/2.
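The Exercise 39 equation AX = C can also be solved numerically, since X = A⁻¹C and np.linalg.solve handles the right-hand side column by column. A hedged sketch assuming NumPy:

```python
# Solve the matrix equation A X = C from Exercise 39.
import numpy as np

A = np.array([[1, 2], [3, 4]], dtype=float)
C = np.array([[6, 3], [19, 2]], dtype=float)

X = np.linalg.solve(A, C)
print(X)  # approximately [[7, -4], [-0.5, 3.5]]
```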
41. Expand AB = BA as follows. ⎡w x⎤ ⎡ 1 1⎤ ⎡ 1 1⎤ ⎡w x⎤ ⎢ ⎥⎢ ⎥ = ⎢ ⎥⎢ ⎥ ⎣ y z ⎦ ⎣−1 1⎦ ⎣−1 1⎦ ⎣ y z⎦ x + z⎤ ⎡w − x w + x⎤ ⎡ w+ y ⎢ ⎥ = ⎢ ⎥ y − z y + z − w + y − x + z⎦ ⎣ ⎦ ⎣ This yields the system of equations − x − y
= 0
w − z = 0
w − z = 0
x + y = 0.
Using Gauss-Jordan elimination you can solve this system to obtain w = t , x = − s, y = s, and z = t , where s and t are any real numbers. So, w = z and x = − y.
⎡−1 0 0⎤ ⎡−1 0 0⎤ ⎢ ⎥⎢ ⎥ 43. AA = ⎢ 0 2 0⎥ ⎢ 0 2 0⎥ ⎢ 0 0 3⎥ ⎢ 0 0 3⎥ ⎣ ⎦⎣ ⎦ ⎡−1( −1) + 0(0) + 0(0) −1(0) + 0( 2) + 0(0) −1(0) + 0(0) + 0(3)⎤ ⎢ ⎥ = ⎢ 0( −1) + 2(0) + 0(0) 0(0) + 2( 2) + 0(0) 0(0) + 2(0) + 0(3)⎥ ⎢ 0( −1) + 0(0) + 3(0) 0(0) + 0( 2) + 3(0) 0(0) + 0(0) + 3(3)⎥ ⎣ ⎦ ⎡ 1 0 0⎤ ⎢ ⎥ = ⎢0 4 0⎥ ⎢0 0 9⎥ ⎣ ⎦ 0⎤ ⎡2 0⎤ ⎡−5 0⎤ ⎡−10 45. AB = ⎢ ⎥⎢ ⎥ = ⎢ ⎥ − − 0 3 0 4 0 12 ⎣ ⎦⎣ ⎦ ⎣ ⎦ 0⎤ ⎡−5 0⎤ ⎡2 0⎤ ⎡−10 BA = ⎢ ⎥⎢ ⎥ = ⎢ ⎥ ⎣ 0 4⎦ ⎣0 −3⎦ ⎣ 0 −12⎦ 47. Let A and B be diagonal matrices of sizes n × n. ⎡n ⎤ Then, AB = ⎡⎣cij ⎤⎦ = ⎢∑ aik bkj ⎥ ⎣ k =1 ⎦
where cij = 0 if i ≠ j , and cii = aii bii otherwise. The entries of BA are exactly the same. 49. The trace is the sum of the elements on the main diagonal.
1 + ( −2) + 3 = 2. 51. The trace is the sum of the elements on the main diagonal. 1+1+1+1 = 4
53. (a) Tr(A + B) = Tr([aij + bij]) = Σ (aii + bii) = Σ aii + Σ bii = Tr(A) + Tr(B), where each sum runs over i = 1, …, n.
(b) Tr(cA) = Tr([caij]) = Σ caii = c Σ aii = c Tr(A), where each sum runs over i = 1, …, n.
⎡ a11 a12 ⎤ 55. Let A = ⎢ ⎥, ⎣a21 a22 ⎦ then the given matrix equation expands to ⎡a11 + a21 a12 + a22 ⎤ ⎡ 1 0⎤ ⎢ ⎥ = ⎢ ⎥. ⎣a11 + a21 a12 + a22 ⎦ ⎣0 1⎦ Because a11 + a21 = 1 and a11 + a21 = 0 cannot both be true, conclude that there is no solution. ⎡i 2 0⎤ ⎡ i 0⎤ ⎡ i 0⎤ ⎡−1 0⎤ = = ⎢ 57. (a) A2 = ⎢ ⎢ ⎥⎢ ⎥ ⎥ 2⎥ ⎢⎣ 0 i ⎥⎦ ⎣0 i⎦ ⎣0 i⎦ ⎣ 0 −1⎦ ⎡−1 0⎤ ⎡ i 0⎤ ⎡−i 0⎤ A3 = A2 A = ⎢ ⎥⎢ ⎥ = ⎢ ⎥ ⎣ 0 −1⎦ ⎣0 i⎦ ⎣ 0 −i ⎦ ⎡−i 2 0⎤ ⎡−i 0⎤ ⎡ i 0⎤ ⎡ 1 0⎤ A4 = A3 A = ⎢ = ⎢ ⎥⎢ ⎥ = ⎢ ⎥ 2⎥ i i 0 − 0 i 0 − ⎢⎣ ⎥⎦ ⎣ ⎦⎣ ⎦ ⎣0 1⎦ A2 , A3 , and A4 are diagonal matrices with diagonal entries i 2 , i 3 , and i 4 respectively.
⎡0 −i⎤ ⎡0 −i⎤ ⎡ 1 0⎤ 4 (b) B 2 = ⎢ ⎥⎢ ⎥ = ⎢ ⎥ = A = I 0 0 i i ⎣ ⎦⎣ ⎦ ⎣0 1⎦ 59. Assume that A is an m × n matrix and B is a p × q matrix. Because AB is defined, you have that n = p and AB is m × q. Because BA is defined, you have that m = q and so AB is an m × m square matrix. Likewise, because BA is defined, m = q and so BA is p × n. Because AB is defined, you have n = p. Therefore BA is an n × n square matrix.
61. Let AB = ⎡⎣cij ⎤⎦ , where cij =
n
∑ aik bkj . If the i th row of A has all zero entries, then aik
= 0 for k = 1, … , n. So, cij = 0 for
k =1
all j = 1, … , n, and the i th row of AB has all zero entries. To show the converse is not true consider ⎡ 2 1⎤ ⎡ 1 1⎤ ⎡0 0⎤ AB = ⎢ ⎥⎢ ⎥ = ⎢ ⎥. ⎣−2 −1⎦ ⎣−2 −2⎦ ⎣0 0⎦ ⎡70 50 25⎤ ⎡84 60 30⎤ 63. 1.2 ⎢ ⎥ = ⎢ ⎥ ⎣35 100 70⎦ ⎣42 120 84⎦ ⎡125 100 75⎤ 65. BA = [3.50 6.00]⎢ ⎥ = [1037.5 1400 1012.5] ⎣100 175 125⎦ The entries of BA represent the profit for both crops at each of the three outlets.
67. (a) True. On page 51, “ … for the product of two matrices to be defined, the number of columns of the first matrix must equal the number of rows of the second matrix.” (b) True. On page 55, “ … the system Ax = b is consistent if and only if b can be expressed as … a linear combination, where the coefficients of the linear combination are a solution of the system.” ⎡0.75 0.15 0.10⎤ ⎡0.75 0.15 0.10⎤ ⎡0.6225 0.2425 0.135⎤ ⎢ ⎥⎢ ⎥ ⎢ ⎥ 0.47 0.20⎥ 69. PP = ⎢0.20 0.60 0.20⎥ ⎢0.20 0.60 0.20⎥ = ⎢ 0.33 ⎢0.30 0.40 0.30⎥ ⎢0.30 0.40 0.30⎥ ⎢ 0.395 0.405 0.20⎥ ⎣ ⎦⎣ ⎦ ⎣ ⎦
This product represents the changes in party affiliation after two elections.
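The Exercise 69 two-election transition matrix is just P squared; np.linalg.matrix_power computes such repeated products. A hedged sketch assuming NumPy:

```python
# Exercise 69: two-step transition matrix of the party-affiliation chain.
import numpy as np

P = np.array([[0.75, 0.15, 0.10],
              [0.20, 0.60, 0.20],
              [0.30, 0.40, 0.30]])

P2 = np.linalg.matrix_power(P, 2)  # same as P @ P
print(P2)
```

Each row of P2 still sums to 1, as it must for a stochastic matrix.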
71. AB = [1 2 0 0; 0 1 0 0; 0 0 2 1][1 2 0; −1 1 0; 0 0 1; 0 0 3] = [−1 4 0; −1 1 0; 0 0 5]
75. The augmented matrix row reduces as follows. ⎡ 1 1 −5 3⎤ ⎡ 1 0 0 1⎤ ⎢ ⎥ ⎢ ⎥ − ⇒ 1 0 1 1 ⎢ ⎥ ⎢0 1 0 2⎥ ⎢2 −1 −1 0⎥ ⎢0 0 1 0⎥ ⎣ ⎦ ⎣ ⎦ ⎡3⎤ ⎡ 1⎤ ⎡ 1⎤ ⎡−5⎤ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ So, b = ⎢ 1⎥ = 1⎢ 1⎥ + 2 ⎢ 0⎥ + 0 ⎢ −1⎥. ⎢0⎥ ⎢2⎥ ⎢−1⎥ ⎢ −1⎥ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦
73. The augmented matrix row reduces as follows. 3⎤ ⎡1 −1 2 −1⎤ ⎡ 1 −1 0 ⎢ ⎥ ⇒ ⎢ ⎥ 3 3 1 7 0 0 1 2 − − ⎣ ⎦ ⎣ ⎦ There are an infinite number of solutions. Answers will vary. Sample answer: x3 = −2, x2 = 0, x1 = 3. So, ⎡−1⎤ ⎡1⎤ ⎡ −1⎤ ⎡2⎤ b = ⎢ ⎥ = 3⎢ ⎥ + 0⎢ ⎥ − 2⎢ ⎥. ⎣ 7⎦ ⎣3⎦ ⎣−3⎦ ⎣ 1⎦
Section 2.2 Properties of Matrix Operations ⎡1 2⎤ ⎡ 0 1⎤ ⎡3 6⎤ ⎡0 −4⎤ ⎡ 3 2⎤ 1. 3⎢ ⎥ + ( −4) ⎢ ⎥ = ⎢ ⎥ + ⎢ ⎥ = ⎢ ⎥ ⎣3 4⎦ ⎣−1 2⎦ ⎣9 12⎦ ⎣4 −8⎦ ⎣13 4⎦ ⎡ 0 1⎤ ⎡ 0 1⎤ ⎡ 0 −12⎤ 3. ab( B ) = (3)( −4) ⎢ ⎥ = ( −12) ⎢ ⎥ = ⎢ ⎥ − 1 2 − 1 2 ⎣ ⎦ ⎣ ⎦ ⎣12 −24⎦ ⎛ ⎡1 2⎤ ⎡ 0 1⎤ ⎞ ⎡ 1 1⎤ ⎡ 7 7⎤ 5. ⎡⎣3 − ( − 4)⎤⎦ ⎜⎜ ⎢ ⎥ − ⎢ ⎥ ⎟⎟ = 7 ⎢ ⎥ = ⎢ ⎥ 3 4 − 1 2 4 2 ⎦ ⎣ ⎦⎠ ⎣ ⎦ ⎣28 14⎦ ⎝⎣
7. (a) 3X + 2A = B
3X + [−8 0; 2 −10; −6 4] = [1 2; −2 1; 4 4]
3X = [9 2; −4 11; 10 0]
X = [3 2/3; −4/3 11/3; 10/3 0]
(b) 2A − 5B = 3X
0⎤ ⎡ 5 10⎤ ⎡−8 ⎢ ⎥ ⎢ ⎥ ⎢ 2 −10⎥ − ⎢−10 5⎥ = 3 X ⎢−6 4⎥⎦ ⎢⎣ 20 20⎥⎦ ⎣ ⎡−13 −10⎤ ⎢ ⎥ ⎢ 12 −15⎥ = 3 X ⎢−26 −16⎥ ⎣ ⎦ ⎡ − 13 − 10 ⎤ 3 ⎢ 3 ⎥ ⎢ 4 −5⎥ = X ⎢− 26 − 16 ⎥ 3⎦ ⎣ 3 (c) X − 3 A + 2 B = O
X = 3 A − 2B 0⎤ ⎡ 2 4⎤ ⎡−12 ⎢ ⎥ ⎢ ⎥ = ⎢ 3 −15⎥ − ⎢−4 2⎥ ⎢ −9 6⎥⎦ ⎢⎣ 8 8⎥⎦ ⎣ ⎡−14 −4⎤ ⎢ ⎥ = ⎢ 7 −17⎥ ⎢−17 −2⎥ ⎣ ⎦ (d) 6 X − 4 A − 3B = 0 6 X = 4 A + 3B 0⎤ ⎡ 3 6⎤ ⎡−16 ⎢ ⎥ ⎢ ⎥ 6 X = ⎢ 4 −20⎥ + ⎢−6 3⎥ ⎢−12 8⎥⎦ ⎢⎣ 12 12⎥⎦ ⎣ 6⎤ ⎡−13 ⎢ ⎥ 6 X = ⎢ −2 −17⎥ ⎢ 0 20⎥ ⎣ ⎦ ⎡− 13 1⎤ ⎢ 61 ⎥ 17 X = ⎢ −3 − 6 ⎥ ⎢ 0 10 ⎥ 3⎦ ⎣ 1 −1⎤ ⎡ 1 3⎤ ⎛ ⎡ 0 1⎤ ⎡ 1 2 3⎤ ⎞ ⎡ 1 3⎤ ⎡ 0 ⎡−3 −5 −10⎤ 9. B(CA) = ⎢ ⎥ ⎜⎜ ⎢ ⎥⎢ ⎥ ⎟⎟ = ⎢ ⎥⎢ ⎥ = ⎢ ⎥ ⎣−1 2⎦ ⎝ ⎣−1 0⎦ ⎣0 1 −1⎦ ⎠ ⎣−1 2⎦ ⎣−1 −2 −3⎦ ⎣−2 −5 −5⎦ ⎛ ⎡ 1 3⎤ ⎡ 0 1⎤ ⎞ ⎡ 1 2 3⎤ ⎡ 1 4⎤⎡ 1 2 3⎤ ⎡ 1 6 −1⎤ 11. ⎜⎜ ⎢ ⎥ + ⎢ ⎥ ⎟⎟ ⎢ ⎥ = ⎢ ⎥⎢ ⎥ = ⎢ ⎥ ⎣−2 2⎦⎣0 1 −1⎦ ⎣−2 −2 −8⎦ ⎝ ⎣−1 2⎦ ⎣−1 0⎦ ⎠ ⎣0 1 −1⎦ ⎛ ⎡ 1 3⎤ ⎞⎛ ⎡ 0 1⎤ ⎡ 0 1⎤ ⎞ ⎡−2 −6⎤ ⎡ 0 2⎤ ⎡12 −4⎤ 13. ⎜ −2 ⎢ ⎥ ⎟⎜ ⎜ ⎟⎜ ⎢−1 0⎥ + ⎢−1 0⎥ ⎟⎟ = ⎢ 2 −4⎥ ⎢−2 0⎥ = ⎢ 8 4⎥ − 1 2 ⎦ ⎠⎝ ⎣ ⎦ ⎣ ⎦⎠ ⎣ ⎦⎣ ⎦ ⎣ ⎦ ⎝ ⎣ ⎡0 1⎤ ⎡2 3⎤ ⎡2 3⎤ ⎡1 0⎤ ⎡2 3⎤ 15. AC = ⎢ ⎥⎢ ⎥ = ⎢ ⎥ = ⎢ ⎥⎢ ⎥ = BC , but A ≠ B. 0 1 2 3 2 3 ⎣ ⎦⎣ ⎦ ⎣ ⎦ ⎣1 0⎦ ⎣2 3⎦ ⎡ 3 3⎤ ⎡ 1 −1⎤ ⎡0 0⎤ 17. AB = ⎢ ⎥⎢ ⎥ = ⎢ ⎥ , but A ≠ 0 and B ≠ 0. 4 4 − 1 1 ⎣ ⎦⎣ ⎦ ⎣0 0⎦ ⎡ 1 2⎤ ⎡ 1 2⎤ ⎡ 1 0⎤ 19. A2 = ⎢ ⎥⎢ ⎥ = ⎢ ⎥ = I − − 0 1 0 1 ⎣ ⎦⎣ ⎦ ⎣0 1⎦ Copyright © Houghton Mifflin Harcourt Publishing Company. All rights reserved.
21. [1 2; 0 −1]([1 0; 0 1] + [1 2; 0 −1]) = [1 2; 0 −1][2 2; 0 0] = [2 2; 0 0]

23. (a) Aᵀ = ([4 2 1; 0 2 −1])ᵀ = [4 0; 2 2; 1 −1]
(b) AᵀA = [4 0; 2 2; 1 −1][4 2 1; 0 2 −1] = [16 8 4; 8 8 0; 4 0 2]
(c) AAᵀ = [4 2 1; 0 2 −1][4 0; 2 2; 1 −1] = [21 3; 3 5]

25. (a) Aᵀ = ([2 1 −3; 1 4 1; 0 2 1])ᵀ = [2 1 0; 1 4 2; −3 1 1]
(b) AᵀA = [2 1 0; 1 4 2; −3 1 1][2 1 −3; 1 4 1; 0 2 1] = [5 6 −5; 6 21 3; −5 3 11]
(c) AAᵀ = [2 1 −3; 1 4 1; 0 2 1][2 1 0; 1 4 2; −3 1 1] = [14 3 −1; 3 18 9; −1 9 5]
27. (a) Aᵀ = ([0 −4 3 2; 8 4 0 1; −2 3 5 1; 0 0 −3 2])ᵀ = [0 8 −2 0; −4 4 3 0; 3 0 5 −3; 2 1 1 2]
(b) AᵀA = [0 8 −2 0; −4 4 3 0; 3 0 5 −3; 2 1 1 2][0 −4 3 2; 8 4 0 1; −2 3 5 1; 0 0 −3 2] = [68 26 −10 6; 26 41 3 −1; −10 3 43 5; 6 −1 5 10]
(c) AAᵀ = [0 −4 3 2; 8 4 0 1; −2 3 5 1; 0 0 −3 2][0 8 −2 0; −4 4 3 0; 3 0 5 −3; 2 1 1 2] = [29 −14 5 −5; −14 81 −3 2; 5 −3 39 −13; −5 2 −13 13]
29. In general, AB ≠ BA for matrices. So, ( A + B )( A − B) = A2 + BA − AB − B 2 ≠ A2 − B 2 . T
31. (AB)ᵀ = ([−1 1 −2; 2 0 1][−3 0; 1 2; 1 −1])ᵀ = ([2 4; −5 −1])ᵀ = [2 −5; 4 −1]
BᵀAᵀ = [−3 1 1; 0 2 −1][−1 2; 1 0; −2 1] = [2 −5; 4 −1]

33. (AB)ᵀ = ([2 1; 0 1; −2 1][2 3 1; 0 4 −1])ᵀ = ([4 10 1; 0 4 −1; −4 −2 −3])ᵀ = [4 0 −4; 10 4 −2; 1 −1 −3]
BᵀAᵀ = [2 0; 3 4; 1 −1][2 0 −2; 1 1 1] = [4 0 −4; 10 4 −2; 1 −1 −3]
35. (a) True. See Theorem 2.1, part 1. (b) True. See Theorem 2.3, part 1. (c) False. See Theorem 2.6, part 4, or Example 9. (d) True. See Example 10.
⎡ 1⎤ ⎡ 1⎤ ⎡ 2⎤ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ 37. (a) aX + bY = a ⎢0⎥ + b ⎢ 1⎥ = ⎢−1⎥ ⎢ 1⎥ ⎢0⎥ ⎢ 3⎥ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦
This matrix equation yields the linear system a + b = 2 b = −1 = 3.
a
The only solution to this system is: a = 3 and b = −1. ⎡ 1⎤ ⎡ 1⎤ ⎡1⎤ ⎢ ⎥ ⎢ ⎥ ⎢⎥ (b) aX + bY = a ⎢0⎥ + b ⎢ 1⎥ = ⎢1⎥ ⎢ 1⎥ ⎢0⎥ ⎢1⎥ ⎣ ⎦ ⎣ ⎦ ⎣⎦
This matrix equation yields the linear system a + b =1 b =1 = 1.
a
The system is inconsistent. So, no values of a and b will satisfy the equation. ⎡ 1⎤ ⎡ 1⎤ ⎡1⎤ ⎡0⎤ ⎢ ⎥ ⎢ ⎥ ⎢⎥ ⎢ ⎥ (c) aX + bY + cW = a ⎢0⎥ + b ⎢ 1⎥ + c ⎢1⎥ = ⎢0⎥ ⎢ 1⎥ ⎢0⎥ ⎢1⎥ ⎢0⎥ ⎣ ⎦ ⎣ ⎦ ⎣⎦ ⎣ ⎦
This matrix equation yields the linear system a + b+ c = 0 b+ c = 0 + c = 0.
a
Then a = −c, so b = 0. Then c = 0, so a = b = c = 0. ⎡ 1⎤ ⎡ 1⎤ ⎡ 2⎤ ⎡0⎤ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ (d) aX + bY + cZ = a ⎢0⎥ + b ⎢ 1⎥ + c ⎢−1⎥ = ⎢0⎥ ⎢ 1⎥ ⎢0⎥ ⎢ 3⎥ ⎢0⎥ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦
This matrix equation yields the linear system a + b + 2c = 0 b− c = 0 a
+ 3c = 0.
Using Gauss-Jordan elimination the solution is a = −3t , b = t and c = t , where t is any real number. If t = 1, then a = −3, b = 1, and c = 1.
39. A¹⁹ = [1¹⁹ 0 0; 0 (−1)¹⁹ 0; 0 0 1¹⁹] = [1 0 0; 0 −1 0; 0 0 1]

41. There are four possibilities such that A² = B, namely
A = [3 0; 0 2], A = [3 0; 0 −2], A = [−3 0; 0 2], and A = [−3 0; 0 −2].

43. f(A) = A² − 5A + 2I = [2 0; 4 5][2 0; 4 5] − 5[2 0; 4 5] + 2[1 0; 0 1] = [4 0; 28 25] − [10 0; 20 25] + [2 0; 0 2] = [−4 0; 8 2]
45. f(A) = A² − 3A + 2I = ⎡ 2 1⎤⎡ 2 1⎤ − ⎡ 6 3⎤ + ⎡2 0⎤ = ⎡ 3  2⎤ − ⎡ 6 3⎤ + ⎡2 0⎤ = ⎡−1 −1⎤
                          ⎣−1 0⎦⎣−1 0⎦   ⎣−3 0⎦   ⎣0 2⎦   ⎣−2 −1⎦   ⎣−3 0⎦   ⎣0 2⎦   ⎣ 1  1⎦
47. A + (B + C) = [aᵢⱼ] + ([bᵢⱼ] + [cᵢⱼ]) = ([aᵢⱼ] + [bᵢⱼ]) + [cᵢⱼ] = (A + B) + C

49. 1A = 1[aᵢⱼ] = [1aᵢⱼ] = [aᵢⱼ] = A

51. (1) A + Oₘₙ = [aᵢⱼ] + [0] = [aᵢⱼ + 0] = [aᵢⱼ] = A
(2) A + (−A) = [aᵢⱼ] + [−aᵢⱼ] = [aᵢⱼ + (−aᵢⱼ)] = [0] = Oₘₙ
(3) Let cA = Oₘₙ and suppose c ≠ 0. Then Oₘₙ = c⁻¹Oₘₙ = c⁻¹(cA) = (c⁻¹c)A = A, and so A = Oₘₙ. If c = 0, then 0A = [0 · aᵢⱼ] = [0] = Oₘₙ.

57. Because Aᵀ = ⎡0 −2⎤ = −⎡ 0 2⎤ = −A, the matrix is skew-symmetric.
                ⎣2  0⎦    ⎣−2 0⎦
53. (1) The entry in the ith row and jth column of AIₙ is aᵢ₁0 + ⋯ + aᵢⱼ1 + ⋯ + aᵢₙ0 = aᵢⱼ.
(2) The entry in the ith row and jth column of IₘA is 0a₁ⱼ + ⋯ + 1aᵢⱼ + ⋯ + 0aₙⱼ = aᵢⱼ.

55. (AAᵀ)ᵀ = (Aᵀ)ᵀAᵀ = AAᵀ, which implies that AAᵀ is symmetric. Similarly, (AᵀA)ᵀ = Aᵀ(Aᵀ)ᵀ = AᵀA, which implies that AᵀA is symmetric.

59. Because Aᵀ = ⎡0 2 1⎤ = A, the matrix is symmetric.
                ⎢2 0 3⎥
                ⎣1 3 0⎦

61. Because Aᵀ = −A, the diagonal element aᵢᵢ satisfies aᵢᵢ = −aᵢᵢ, or aᵢᵢ = 0.
63. (a) If A is a square matrix of order n, then Aᵀ is a square matrix of order n whose ijth entry is aⱼᵢ. Now form the sum and scalar multiple ½(A + Aᵀ); its ijth entry is ½(aᵢⱼ + aⱼᵢ). For the matrix A + Aᵀ, the ijth entry is equal to the jith entry for all i ≠ j. So, ½(A + Aᵀ) is symmetric.
(b) Use the matrices in part (a). The ijth entry of ½(A − Aᵀ) is ½(aᵢⱼ − aⱼᵢ), so for the matrix A − Aᵀ the ijth entry is the negative of the jith entry for all i ≠ j, and each diagonal entry is aᵢᵢ − aᵢᵢ = 0. So, ½(A − Aᵀ) is skew-symmetric.
(c) For any square matrix A of order n, let B = ½(A + Aᵀ) and C = ½(A − Aᵀ). By part (a), B is symmetric. By part (b), C is skew-symmetric. And A = ½(A + Aᵀ) + ½(A − Aᵀ) = B + C, as desired.
(d) For the given A, Aᵀ = ⎡2 −3 4⎤. Using the notation of part (c),
                         ⎢5  6 1⎥
                         ⎣3  0 1⎦
B + C = ½(A + Aᵀ) + ½(A − Aᵀ) = ⎡  2   1 7/2⎤   ⎡  0   4 −1/2⎤   ⎡ 2 5 3⎤
                                ⎢  1   6 1/2⎥ + ⎢ −4   0 −1/2⎥ = ⎢−3 6 0⎥ = A
                                ⎣7/2 1/2   1⎦   ⎣1/2 1/2    0⎦   ⎣ 4 1 1⎦
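The decomposition of part (c) is easy to compute directly; a minimal sketch in plain Python (Fractions keep the halves exact; the helper names are my own):

```python
# Split a square matrix into symmetric and skew-symmetric parts,
# B = (A + A^T)/2 and C = (A - A^T)/2, as in Exercise 63.
from fractions import Fraction

def transpose(A):
    return [list(col) for col in zip(*A)]

def sym_skew_parts(A):
    n = len(A)
    At = transpose(A)
    B = [[Fraction(A[i][j] + At[i][j], 2) for j in range(n)] for i in range(n)]
    C = [[Fraction(A[i][j] - At[i][j], 2) for j in range(n)] for i in range(n)]
    return B, C

A = [[2, 5, 3], [-3, 6, 0], [4, 1, 1]]   # the matrix of part (d)
B, C = sym_skew_parts(A)
```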
65. (a) Answers will vary. Sample answer: ⎡0 1⎤⎡−1 1⎤ = ⎡ 1 0⎤
                                          ⎣1 0⎦⎣ 1 0⎦   ⎣−1 1⎦
(b) Let A and B be symmetric. If AB = BA, then (AB)ᵀ = BᵀAᵀ = BA = AB, and AB is symmetric.
If (AB)ᵀ = AB, then AB = (AB)ᵀ = BᵀAᵀ = BA, and AB = BA.
Section 2.3 The Inverse of a Matrix

1. AB = ⎡1 2⎤⎡ −2    1⎤ = ⎡1 0⎤
        ⎣3 4⎦⎣3/2 −1/2⎦   ⎣0 1⎦
   BA = ⎡ −2    1⎤⎡1 2⎤ = ⎡1 0⎤
        ⎣3/2 −1/2⎦⎣3 4⎦   ⎣0 1⎦

3. AB = ⎡−2  2 3⎤       ⎡−4 −5 3⎤         ⎡3 0 0⎤   ⎡1 0 0⎤
        ⎢ 1 −1 0⎥ (1/3) ⎢−4 −8 3⎥ = (1/3) ⎢0 3 0⎥ = ⎢0 1 0⎥
        ⎣ 0  1 4⎦       ⎣ 1  2 0⎦         ⎣0 0 3⎦   ⎣0 0 1⎦
   BA = (1/3) ⎡−4 −5 3⎤ ⎡−2  2 3⎤         ⎡3 0 0⎤   ⎡1 0 0⎤
              ⎢−4 −8 3⎥ ⎢ 1 −1 0⎥ = (1/3) ⎢0 3 0⎥ = ⎢0 1 0⎥
              ⎣ 1  2 0⎦ ⎣ 0  1 4⎦         ⎣0 0 3⎦   ⎣0 0 1⎦
5. Use the formula A⁻¹ = (1/(ad − bc)) ⎡ d −b⎤, where A = ⎡a b⎤ = ⎡1 2⎤.
                                       ⎣−c  a⎦            ⎣c d⎦   ⎣3 7⎦
So, the inverse is
A⁻¹ = (1/((1)(7) − (2)(3))) ⎡ 7 −2⎤ = ⎡ 7 −2⎤.
                            ⎣−3  1⎦   ⎣−3  1⎦

7. Use the formula A⁻¹ = (1/(ad − bc)) ⎡ d −b⎤, where A = ⎡a b⎤ = ⎡−7  33⎤.
                                       ⎣−c  a⎦            ⎣c d⎦   ⎣ 4 −19⎦
So, the inverse is
A⁻¹ = (1/((−7)(−19) − (33)(4))) ⎡−19 −33⎤ = ⎡−19 −33⎤.
                                ⎣ −4  −7⎦   ⎣ −4  −7⎦

9. Adjoin the identity matrix to form
[A I] = ⎡1 1 1 | 1 0 0⎤
        ⎢3 5 4 | 0 1 0⎥
        ⎣3 6 5 | 0 0 1⎦.
Using elementary row operations, rewrite this matrix in reduced row-echelon form, [I A⁻¹]. Therefore, the inverse is
A⁻¹ = ⎡ 1  1 −1⎤
      ⎢−3  2 −1⎥
      ⎣ 3 −3  2⎦.
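The adjoin-and-reduce procedure of Exercise 9 is Gauss-Jordan elimination on [A | I]; a minimal sketch in plain Python (exact Fractions; the function name is my own):

```python
# Invert a matrix by Gauss-Jordan elimination on the augmented block [A | I].
# Raises ValueError when no pivot can be found, i.e. A is singular.
from fractions import Fraction

def inverse(A):
    n = len(A)
    # Adjoin the identity matrix to form [A | I].
    M = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # Find a nonzero pivot and swap it into place.
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            raise ValueError("matrix is singular")
        M[col], M[pivot] = M[pivot], M[col]
        # Normalize the pivot row, then clear the rest of the column.
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                factor = M[r][col]
                M[r] = [x - factor * y for x, y in zip(M[r], M[col])]
    return [row[n:] for row in M]
```

On the matrix of Exercise 9 this reproduces the inverse above, and on the matrix of Exercise 11 it reports singularity.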
11. Adjoin the identity matrix to form
[A I] = ⎡1  2  −1 | 1 0 0⎤
        ⎢3  7 −10 | 0 1 0⎥
        ⎣7 16 −21 | 0 0 1⎦.
Using elementary row operations, you cannot form the identity matrix on the left side:
⎡1 0 13 | 0 −16  7⎤
⎢0 1 −7 | 0   7 −3⎥
⎣0 0  0 | 1   2 −1⎦.
Therefore, the matrix is singular and has no inverse.
13. Adjoin the identity matrix to form
[A I] = ⎡ 1 1 2 | 1 0 0⎤
        ⎢ 3 1 0 | 0 1 0⎥
        ⎣−2 0 3 | 0 0 1⎦.
Using elementary row operations, rewrite this matrix in reduced row-echelon form, [I A⁻¹]. Therefore, the inverse is
A⁻¹ = ⎡−3/2  3/2  1⎤
      ⎢ 9/2 −7/2 −3⎥
      ⎣  −1    1  1⎦.

15. Adjoin the identity matrix to form
[A I] = ⎡ 0.1 0.2 0.3 | 1 0 0⎤
        ⎢−0.3 0.2 0.2 | 0 1 0⎥
        ⎣ 0.5 0.5 0.5 | 0 0 1⎦.
Using elementary row operations, reduce the matrix to [I A⁻¹]. Therefore, the inverse is
A⁻¹ = ⎡  0 −2  0.8⎤
      ⎢−10  4  4.4⎥
      ⎣ 10 −2 −3.2⎦.

17. Adjoin the identity matrix to form
[A I] = ⎡1 0 0 | 1 0 0⎤
        ⎢3 4 0 | 0 1 0⎥
        ⎣2 5 5 | 0 0 1⎦.
Using elementary row operations, rewrite this matrix in reduced row-echelon form. Therefore, the inverse is
A⁻¹ = ⎡   1    0   0⎤
      ⎢−3/4  1/4   0⎥
      ⎣7/20 −1/4 1/5⎦.

19. Adjoin the identity matrix to form
[A I] = ⎡−8 0 0  0 | 1 0 0 0⎤
        ⎢ 0 1 0  0 | 0 1 0 0⎥
        ⎢ 0 0 0  0 | 0 0 1 0⎥
        ⎣ 0 0 0 −5 | 0 0 0 1⎦.
Using elementary row operations, you cannot form the identity matrix on the left side (the third row is zero). Therefore, the matrix A is singular and has no inverse.

21. Use a graphing utility or a computer software program. The inverse is
A⁻¹ = ⎡−24  7  1 −2⎤
      ⎢−10  3  0 −1⎥
      ⎢−29  7  3 −2⎥
      ⎣ 12 −3 −1  1⎦.

23. Using a graphing utility or a computer software program, you find that the matrix is singular and therefore has no inverse.

25. The coefficient matrix for each system is A = ⎡1  2⎤ and the formula for the inverse of a 2 × 2 matrix produces
                                                 ⎣1 −2⎦
A⁻¹ = −(1/4) ⎡−2 −2⎤ = ⎡1/2  1/2⎤.
             ⎣−1  1⎦   ⎣1/4 −1/4⎦
(a) x = A⁻¹b = ⎡1/2  1/2⎤⎡−1⎤ = ⎡ 1⎤
               ⎣1/4 −1/4⎦⎣ 3⎦   ⎣−1⎦
The solution is: x = 1 and y = −1.
(b) x = A⁻¹b = ⎡1/2  1/2⎤⎡10⎤ = ⎡2⎤
               ⎣1/4 −1/4⎦⎣−6⎦   ⎣4⎦
The solution is: x = 2 and y = 4.
(c) x = A⁻¹b = ⎡1/2  1/2⎤⎡−3⎤ = ⎡−3/2⎤
               ⎣1/4 −1/4⎦⎣ 0⎦   ⎣−3/4⎦
The solution is: x = −3/2 and y = −3/4.
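Reusing one inverse for several right-hand sides, as in Exercise 25, can be sketched in plain Python (exact arithmetic via the standard library's Fraction; the helper name is my own):

```python
# Solve a 2 x 2 system Ax = b via x = A^(-1) b, using the 2 x 2 inverse formula.
from fractions import Fraction

def solve_2x2(A, b):
    a, bb, c, d = A[0][0], A[0][1], A[1][0], A[1][1]
    det = a * d - bb * c            # assumed nonzero
    inv = [[Fraction(d, det), Fraction(-bb, det)],
           [Fraction(-c, det), Fraction(a, det)]]
    return [inv[0][0] * b[0] + inv[0][1] * b[1],
            inv[1][0] * b[0] + inv[1][1] * b[1]]

A = [[1, 2], [1, -2]]   # the coefficient matrix of Exercise 25
```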
27. The coefficient matrix for each system is
A = ⎡1  2  1⎤.
    ⎢1  2 −1⎥
    ⎣1 −2  1⎦
Using the algorithm to invert a matrix, you find that the inverse is
A⁻¹ = ⎡  0  1/2  1/2⎤.
      ⎢1/4    0 −1/4⎥
      ⎣1/2 −1/2    0⎦
(a) x = A⁻¹b = ⎡  0  1/2  1/2⎤⎡ 2⎤   ⎡ 1⎤
               ⎢1/4    0 −1/4⎥⎢ 4⎥ = ⎢ 1⎥
               ⎣1/2 −1/2    0⎦⎣−2⎦   ⎣−1⎦
The solution is: x₁ = 1, x₂ = 1, and x₃ = −1.
(b) x = A⁻¹b = ⎡  0  1/2  1/2⎤⎡ 1⎤   ⎡ 0⎤
               ⎢1/4    0 −1/4⎥⎢ 3⎥ = ⎢ 1⎥
               ⎣1/2 −1/2    0⎦⎣−3⎦   ⎣−1⎦
The solution is: x₁ = 0, x₂ = 1, and x₃ = −1.

29. Using a graphing utility or a computer software program, you have Ax = b, where
A = ⎡1  2 −1  3 −1⎤        ⎡−3⎤
    ⎢1 −3  1  2 −1⎥        ⎢−3⎥
    ⎢2  1  1 −3  1⎥,  b =  ⎢ 6⎥,
    ⎢1 −1  2  1 −1⎥        ⎢ 2⎥
    ⎣1  1 −1  2  1⎦        ⎣−3⎦
and x = A⁻¹b = [0, 1, 2, −1, 0]ᵀ. The solution is: x₁ = 0, x₂ = 1, x₃ = 2, x₄ = −1, and x₅ = 0.

31. Using a graphing utility or a computer software program, you have Ax = b, where
A = ⎡ 2 −3  1 −2  1 −4⎤        ⎡ 20⎤
    ⎢ 3  1 −4 −1  1  2⎥        ⎢−16⎥
    ⎢ 4  1 −3  4 −1  2⎥,  b =  ⎢−12⎥,
    ⎢−5 −1  4  2 −5  3⎥        ⎢ −2⎥
    ⎢ 1  1 −3  4 −3  1⎥        ⎢−15⎥
    ⎣ 3 −1  2 −3  2 −6⎦        ⎣ 25⎦
and x = A⁻¹b = [1, −2, 3, 0, 1, −2]ᵀ. The solution is: x₁ = 1, x₂ = −2, x₃ = 3, x₄ = 0, x₅ = 1, x₆ = −2.

33. Apply the same identities as in Exercise 35 to the given 3 × 3 inverses: (a) (AB)⁻¹ = B⁻¹A⁻¹, (b) (Aᵀ)⁻¹ = (A⁻¹)ᵀ, (c) A⁻² = (A⁻¹)², and (d) (2A)⁻¹ = ½A⁻¹.

35. (a) (AB)⁻¹ = B⁻¹A⁻¹ = ⎡7 −3⎤⎡ 2 5⎤ = ⎡35 17⎤
                          ⎣2  0⎦⎣−7 6⎦   ⎣ 4 10⎦
(b) (Aᵀ)⁻¹ = (A⁻¹)ᵀ = ⎡2 −7⎤
                      ⎣5  6⎦
(c) A⁻² = (A⁻¹)² = ⎡ 2 5⎤⎡ 2 5⎤ = ⎡−31 40⎤
                   ⎣−7 6⎦⎣−7 6⎦   ⎣−56  1⎦
(d) (2A)⁻¹ = ½A⁻¹ = ½ ⎡ 2 5⎤ = ⎡   1 5/2⎤
                      ⎣−7 6⎦   ⎣−7/2   3⎦
37. Using the formula for the inverse of a 2 × 2 matrix, you have
A⁻¹ = (1/(2x − 9)) ⎡−3 −x⎤.
                   ⎣ 2  3⎦
Letting A⁻¹ = A, you find that 1/(2x − 9) = −1. So, x = 4.

39. The matrix ⎡ 4  x⎤ will be singular if ad − bc = (4)(−3) − (x)(−2) = 0, which implies that 2x = 12, or x = 6.
               ⎣−2 −3⎦

41. First find 2A.
2A = [(2A)⁻¹]⁻¹ = (1/(4 − 6)) ⎡ 4 −2⎤ = ⎡ −2    1⎤
                              ⎣−3  1⎦   ⎣3/2 −1/2⎦
Then divide by 2 to obtain
A = (1/2) ⎡ −2    1⎤ = ⎡ −1  1/2⎤.
          ⎣3/2 −1/2⎦   ⎣3/4 −1/4⎦

43. Using the formula for the inverse of a 2 × 2 matrix, you have
A⁻¹ = (1/(sin²θ + cos²θ)) ⎡sin θ −cos θ⎤ = ⎡sin θ −cos θ⎤.
                          ⎣cos θ  sin θ⎦   ⎣cos θ  sin θ⎦

45. (a) True. See Theorem 2.7. (b) True. See Theorem 2.10, part 1. (c) False. See Theorem 2.9. (d) True. See "Finding the Inverse of a Matrix by Gauss-Jordan Elimination," part 2, on page 76.

47. Use mathematical induction. The property is clearly true if k = 1. Suppose the property is true for k = n, and consider the case for k = n + 1:
(Aⁿ⁺¹)⁻¹ = (AAⁿ)⁻¹ = (Aⁿ)⁻¹A⁻¹ = (A⁻¹ ⋯ A⁻¹)A⁻¹   (n factors of A⁻¹)
         = A⁻¹ ⋯ A⁻¹   (n + 1 factors of A⁻¹).

49. Let A be symmetric and nonsingular. Then Aᵀ = A and (A⁻¹)ᵀ = (Aᵀ)⁻¹ = A⁻¹. Therefore A⁻¹ is symmetric.

51. (I − 2A)(I − 2A) = I² − 2IA − 2AI + 4A² = I − 4A + 4A² = I − 4A + 4A = I (because A² = A).
So, (I − 2A)⁻¹ = I − 2A.

53. Because A is invertible, you can multiply both sides of the equation AB = O by A⁻¹ to obtain the following.
A⁻¹(AB) = A⁻¹O
(A⁻¹A)B = O
B = O

55. No. For instance, ⎡1 0⎤ + ⎡−1  0⎤ = ⎡0 0⎤.
                      ⎣0 1⎦   ⎣ 0 −1⎦   ⎣0 0⎦

57. To find the inverses, take the reciprocals of the diagonal entries.
(a) A⁻¹ = ⎡−1   0   0⎤    (b) A⁻¹ = ⎡2 0 0⎤
          ⎢ 0 1/3   0⎥              ⎢0 3 0⎥
          ⎣ 0   0 1/2⎦              ⎣0 0 4⎦
59. (a) Let H = I − 2uuᵀ, where uᵀu = 1. Then
Hᵀ = (I − 2uuᵀ)ᵀ = Iᵀ − 2(uuᵀ)ᵀ = I − 2(uᵀ)ᵀuᵀ = I − 2uuᵀ = H.
So, H is symmetric. Furthermore,
HH = (I − 2uuᵀ)(I − 2uuᵀ) = I − 4uuᵀ + 4(uuᵀ)² = I − 4uuᵀ + 4uuᵀ = I,
because (uuᵀ)(uuᵀ) = u(uᵀu)uᵀ = uuᵀ. So, H is nonsingular.
(b) uᵀu = [√2/2  √2/2  0] ⎡√2/2⎤ = 2/4 + 2/4 + 0 = 1.
                          ⎢√2/2⎥
                          ⎣   0⎦
H = I₃ − 2uuᵀ = ⎡1 0 0⎤     ⎡1/2 1/2 0⎤   ⎡ 0 −1 0⎤
                ⎢0 1 0⎥ − 2 ⎢1/2 1/2 0⎥ = ⎢−1  0 0⎥
                ⎣0 0 1⎦     ⎣  0   0 0⎦   ⎣ 0  0 1⎦

61. If P is nonsingular, then P⁻¹ exists. Use matrix multiplication to solve for A. Because all of the matrices are n × n matrices,
AP = PD
APP⁻¹ = PDP⁻¹
A = PDP⁻¹.
No, it is not necessarily true that A = D.

63. Answers will vary. Sample answer:
A = ⎡ 1 0⎤ or A = ⎡1 0⎤.
    ⎣−1 0⎦        ⎣1 0⎦
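The claims of part (a) can be checked numerically for the u of part (b); a minimal sketch in plain Python (floats, so comparisons use a tolerance; names are my own):

```python
# Build the Householder matrix H = I - 2 u u^T of Exercise 59(b).
import math

def householder(u):
    n = len(u)
    return [[(1.0 if i == j else 0.0) - 2.0 * u[i] * u[j] for j in range(n)]
            for i in range(n)]

u = [math.sqrt(2) / 2, math.sqrt(2) / 2, 0.0]   # u^T u = 1
H = householder(u)
```

H comes out (up to rounding) as the matrix with rows (0, −1, 0), (−1, 0, 0), (0, 0, 1), and it is symmetric with H² = I.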
Section 2.4 Elementary Matrices 1. The matrix is elementary. It can be obtained by multiplying the second row of I 2 by 2. 3. The matrix is elementary. Two times the first row was added to the second row. 5. The matrix is not elementary. The first row was multiplied by 2 and the second and third rows were interchanged. 7. This matrix is elementary. It can be obtained by multiplying the second row of I 4 by −5, and adding the result to the third row.
9. B is obtained by interchanging the first and third rows of A. So, ⎡0 0 1⎤ ⎢ ⎥ E = ⎢0 1 0⎥. ⎢1 0 0⎥ ⎣ ⎦
11. A is obtained by interchanging the first and third rows of B. So, ⎡0 0 1⎤ ⎢ ⎥ E = ⎢0 1 0⎥. ⎢1 0 0⎥ ⎣ ⎦
13. To obtain the inverse matrix, reverse the elementary row operation that produced it. So, interchange the first and second rows of I₂ to obtain
E⁻¹ = ⎡0 1⎤.
      ⎣1 0⎦
15. To obtain the inverse matrix, reverse the elementary row operation that produced it. So, interchange the first and third rows to obtain ⎡0 0 1⎤ ⎢ ⎥ E −1 = ⎢0 1 0⎥. ⎢1 0 0⎥ ⎣ ⎦
17. To obtain the inverse matrix, reverse the elementary row operation that produced it. So, divide the first row by k to obtain
E⁻¹ = ⎡1/k 0⎤, k ≠ 0.
      ⎣  0 1⎦
19. To obtain the inverse matrix, reverse the elementary row operation that produced it. So, interchange the second and third rows to obtain
E⁻¹ = ⎡1 0 0⎤.
      ⎢0 0 1⎥
      ⎣0 1 0⎦

21. Find a sequence of elementary row operations that can be used to rewrite A in reduced row-echelon form.
⎡1  0⎤   R1 ↔ R2           E1 = ⎡0 1⎤
⎣3 −2⎦                          ⎣1 0⎦
⎡1  0⎤   R2 − 3R1 → R2     E2 = ⎡ 1 0⎤
⎣0 −2⎦                          ⎣−3 1⎦
⎡1 0⎤    (−1/2)R2 → R2     E3 = ⎡1    0⎤
⎣0 1⎦                           ⎣0 −1/2⎦
Use the elementary matrices to find the inverse.
A⁻¹ = E3E2E1 = ⎡1    0⎤⎡ 1 0⎤⎡0 1⎤ = ⎡  1    0⎤⎡0 1⎤ = ⎡   0   1⎤
               ⎣0 −1/2⎦⎣−3 1⎦⎣1 0⎦   ⎣3/2 −1/2⎦⎣1 0⎦   ⎣−1/2 3/2⎦

23. Find a sequence of elementary row operations that can be used to rewrite A in reduced row-echelon form.
⎡1 0   −1⎤   (1/6)R2 → R2         E1 = ⎡1   0 0⎤
⎢0 1 −1/6⎥                             ⎢0 1/6 0⎥
⎣0 0    4⎦                             ⎣0   0 1⎦
⎡1 0   −1⎤   (1/4)R3 → R3         E2 = ⎡1 0   0⎤
⎢0 1 −1/6⎥                             ⎢0 1   0⎥
⎣0 0    1⎦                             ⎣0 0 1/4⎦
⎡1 0    0⎤   R1 + R3 → R1         E3 = ⎡1 0 1⎤
⎢0 1 −1/6⎥                             ⎢0 1 0⎥
⎣0 0    1⎦                             ⎣0 0 1⎦
⎡1 0 0⎤      R2 + (1/6)R3 → R2    E4 = ⎡1 0   0⎤
⎢0 1 0⎥                                ⎢0 1 1/6⎥
⎣0 0 1⎦                                ⎣0 0   1⎦
Use the elementary matrices to find the inverse.
A⁻¹ = E4E3E2E1 = ⎡1   0  1/4⎤
                 ⎢0 1/6 1/24⎥
                 ⎣0   0  1/4⎦
For Exercises 25–31, answers will vary. Sample answers are shown below. 25. Matrix
Elementary Row Operation
Elementary Matrix
⎡ 1 2⎤ ⎢ ⎥ ⎣0 −2⎦
Add −1 times row one to row two.
⎡ 1 E1 = ⎢ ⎣−1
⎡ 1 0⎤ ⎢ ⎥ ⎣0 −2⎦
Add row two to row one.
⎡ 1 1⎤ E2 = ⎢ ⎥ ⎣0 1⎦
⎡ 1 0⎤ ⎢ ⎥ ⎣0 1⎦
Divide row two by −2.
0⎤ ⎡1 E3 = ⎢ 1⎥ − 0 2⎥ ⎣⎢ ⎦
0⎤ ⎥ 1⎦
Because E3 E2 E1 A = I 2 , factor A as follows. ⎡1 0⎤ ⎡ 1 −1⎤ ⎡ 1 0⎤ A = E1−1E2−1E3−1 = ⎢ ⎥⎢ ⎥⎢ ⎥ ⎣1 1⎦ ⎣0 1⎦ ⎣0 −2⎦ Note that this factorization is not unique. For example, another factorization is ⎡0 1⎤ ⎡1 0⎤ ⎡ 1 0⎤ A = ⎢ ⎥⎢ ⎥⎢ ⎥. ⎣ 1 0⎦ ⎣1 1⎦ ⎣0 2⎦
27. Matrix
Elementary Row Operation
Elementary Matrix
⎡1 0⎤ ⎢ ⎥ ⎣3 −1⎦
Add ( −1) times row two to row one.
⎡ 1 −1⎤ E1 = ⎢ ⎥ ⎣0 1⎦
⎡ 1 0⎤ ⎢ ⎥ ⎣0 −1⎦
Add −3 times row one to row two.
⎡ 1 0⎤ E2 = ⎢ ⎥ ⎣−3 1⎦
⎡ 1 0⎤ ⎢ ⎥ ⎣0 1⎦
Multiply row two by −1.
⎡ 1 0⎤ E3 = ⎢ ⎥ ⎣0 −1⎦
Because E3 E2 E1 A = I 2 , one way to factor A is as follows. ⎡ 1 1⎤ ⎡1 0⎤ ⎡ 1 0⎤ A = E1−1E2−1E3−1 = ⎢ ⎥⎢ ⎥⎢ ⎥ ⎣0 1⎦ ⎣3 1⎦ ⎣0 −1⎦
29. Matrix
Elementary Row Operation
Elementary Matrix
⎡ 1 −2 0⎤ ⎢ ⎥ 1 0⎥ ⎢0 ⎢0 0 1⎥ ⎣ ⎦
Add row one to row two.
⎡ 1 0 0⎤ ⎢ ⎥ E1 = ⎢ 1 1 0⎥ ⎢0 0 1⎥ ⎣ ⎦
⎡ 1 0 0⎤ ⎢ ⎥ ⎢0 1 0⎥ ⎢0 0 1⎥ ⎣ ⎦
Add 2 times row two to row one.
⎡ 1 2 0⎤ ⎢ ⎥ E2 = ⎢0 1 0⎥ ⎢0 0 1⎥ ⎣ ⎦
Because E2 E1 A = I 3 , one way to factor A is A =
E1−1E2−1
⎡ 1 0 0⎤ ⎡ 1 −2 0⎤ ⎢ ⎥⎢ ⎥ = ⎢−1 1 0⎥ ⎢0 1 0⎥. ⎢ 0 0 1⎥ ⎢0 0 1⎥ ⎣ ⎦⎣ ⎦
31. Find a sequence of elementary row operations that can be used to rewrite A in reduced row-echelon form. ⎡1 ⎢ ⎢0 ⎢0 ⎢ ⎣⎢0
0
1⎤ ⎥ 1 −3 0⎥ − R2 → R2 0 2 0⎥ ⎥ 0 1 −1⎥⎦
⎡1 0 ⎢ 0 −1 E1 = ⎢ ⎢0 0 ⎢ ⎢⎣0 0
⎡1 ⎢ ⎢0 ⎢0 ⎢ ⎣⎢0
0
1⎤ ⎥ 1 −3 0⎥ 0 1 0⎥ ⎥ 0 1 −1⎥⎦
⎡1 ⎢ 0 E2 = ⎢ ⎢0 ⎢ ⎣⎢0
0 0 0⎤ ⎥ 1 0 0⎥ 0 12 0⎥ ⎥ 0 0 1⎦⎥
⎡1 ⎢ ⎢0 ⎢0 ⎢ ⎣⎢0
0
1⎤ ⎥ 1 −3 0⎥ 0 1 0⎥ ⎥ 0 −1 1⎥⎦ − R4 → R4
⎡1 ⎢ 0 E3 = ⎢ ⎢0 ⎢ ⎣⎢0
0 0
⎡1 ⎢ ⎢0 ⎢0 ⎢ ⎣⎢0
0
1⎤ ⎥ 1 −3 0⎥ 0 1 0⎥ ⎥ 0 0 1⎦⎥ R4 + R3 → R4
⎡1 ⎢ 0 E4 = ⎢ ⎢0 ⎢ ⎢⎣0
0 0 0⎤ ⎥ 1 0 0⎥ 0 1 0⎥ ⎥ 0 1 1⎥⎦
⎡1 ⎢ ⎢0 ⎢0 ⎢ ⎢⎣0
0 0 1⎤ ⎥ 1 0 0⎥ 0 1 0⎥ ⎥ 0 0 1⎥⎦
⎡1 ⎢ 0 E5 = ⎢ ⎢0 ⎢ ⎣⎢0
0 0 0⎤ ⎥ 1 3 0⎥ 0 1 0⎥ ⎥ 0 0 1⎦⎥
⎡1 ⎢ ⎢0 ⎢0 ⎢ ⎢⎣0
0 0 0⎤ ⎥ 1 0 0⎥ 0 1 0⎥ ⎥ 0 0 1⎥⎦
⎡1 ⎢ 0 E6 = ⎢ ⎢0 ⎢ ⎣⎢0
0 0 −1⎤ ⎥ 1 0 0⎥ 0 1 0⎥ ⎥ 0 0 1⎥⎦
0
0
( 12 )R
3
→ R3
0
0
R2 + 3R3 → R2
R1 − R4 → R1
0 0⎤ ⎥ 0 0⎥ 1 0⎥ ⎥ 0 1⎥⎦
0⎤ ⎥ 0⎥ 0 1 0⎥ ⎥ 0 0 −1⎥⎦ 1 0
So, one way to factor A is A = E1−1E2−1E3−1E4−1E5−1E6−1 ⎡1 0 ⎢ 0 −1 = ⎢ ⎢0 0 ⎢ ⎣⎢0 0
0 0⎤ ⎡ 1 ⎥⎢ 0 0⎥ ⎢0 1 0⎥ ⎢0 ⎥⎢ 0 1⎦⎥ ⎣⎢0
0 0 0⎤ ⎡ 1 ⎥⎢ 1 0 0⎥ ⎢0 0 2 0⎥ ⎢0 ⎥⎢ 0 0 1⎦⎥ ⎣⎢0
0⎤ ⎡ 1 ⎥⎢ 1 0 0⎥ ⎢0 0 1 0⎥ ⎢0 ⎥⎢ 0 0 −1⎥⎦ ⎢⎣0
0 0
0 0⎤ ⎡ 1 ⎥⎢ 1 0 0⎥ ⎢0 0 1 0⎥ ⎢0 ⎥⎢ 0 −1 1⎥⎦ ⎢⎣0 0
0 0⎤⎡ 1 ⎥⎢ 1 −3 0⎥ ⎢0 0 1 0⎥ ⎢0 ⎥⎢ 0 0 1⎥⎦⎣⎢0 0
0 0 1⎤ ⎥ 1 0 0⎥ . 0 1 0⎥ ⎥ 0 0 1⎥⎦
33. (a) True. See “Remark” following the “Definition of an Elementary Matrix” on page 87. (b) False. Multiplication of a matrix by a scalar is not a single elementary row operation so it cannot be represented by a corresponding elementary matrix. (c) True. See “Definition of Row Equivalence,” on page 90. (d) True. See Theorem 2.13.
35. (a) EA has the same rows as A except the two rows that are interchanged in E will be interchanged in EA. (b) Multiplying a matrix on the left by E interchanges the same two rows that are interchanged from I n in E. So, multiplying E by itself interchanges the rows twice and so E 2 = I n .
37. A⁻¹ = ⎡1 0 0⎤⁻¹ ⎡1 0 0⎤⁻¹ ⎡1 a 0⎤⁻¹
          ⎢0 1 0⎥   ⎢b 1 0⎥   ⎢0 1 0⎥
          ⎣0 0 c⎦   ⎣0 0 1⎦   ⎣0 0 1⎦
        = ⎡1 0   0⎤⎡ 1 0 0⎤⎡1 −a 0⎤   ⎡ 1     −a   0⎤
          ⎢0 1   0⎥⎢−b 1 0⎥⎢0  1 0⎥ = ⎢−b 1 + ab   0⎥.
          ⎣0 0 1/c⎦⎣ 0 0 1⎦⎣0  0 1⎦   ⎣ 0      0 1/c⎦
⎡ 1 0⎤ ⎡ 1 1⎤ ⎡ 1 1⎤ 39. No. For example, ⎢ ⎥⎢ ⎥ = ⎢ ⎥ , which is not elementary. ⎣2 1⎦ ⎣0 1⎦ ⎣2 3⎦
For Exercises 41– 45, answers will vary. Sample answers are shown below. 41. Because the matrix is lower triangular, an LUfactorization is ⎡ 1 0⎤⎡ 1 0⎤ ⎢ ⎥⎢ ⎥ ⎣−2 1⎦⎣0 1⎦
43. Matrix
Elementary Matrix
⎡ 3 0 1⎤ ⎢ ⎥ ⎢ 6 1 1⎥ = A ⎢−3 1 0⎥ ⎣ ⎦ ⎡ 3 0 1⎤ ⎢ ⎥ ⎢ 0 1 −1⎥ ⎢−3 1 0⎥ ⎣ ⎦
⎡ 1 0 0⎤ ⎢ ⎥ E1 = ⎢−2 1 0⎥ ⎢ 0 0 1⎥ ⎣ ⎦
⎡3 0 1⎤ ⎢ ⎥ ⎢0 1 −1⎥ ⎢0 1 1⎥ ⎣ ⎦
⎡ 1 0 0⎤ ⎢ ⎥ E2 = ⎢0 1 0⎥ ⎢ 1 0 1⎥ ⎣ ⎦
⎡3 0 1⎤ ⎢ ⎥ ⎢0 1 −1⎥ = U ⎢0 0 2⎥ ⎣ ⎦
⎡ 1 0 0⎤ ⎢ ⎥ E3 = ⎢0 1 0⎥ ⎢0 −1 1⎥ ⎣ ⎦
E3 E2 E1 A = U ⎡ 1 0 0⎤ ⎡3 0 1⎤ ⎢ ⎥⎢ ⎥ −1 −1 −1 A = E1 E2 E3 U = ⎢ 2 1 0⎥ ⎢0 1 −1⎥ = LU ⎢−1 1 1⎥ ⎢0 0 2⎥ ⎣ ⎦⎣ ⎦
45. (a) Matrix
Elementary Matrix
⎡ 2 1 0⎤ ⎢ ⎥ ⎢ 0 1 −1⎥ = A ⎢−2 1 1⎥ ⎣ ⎦ ⎡2 1 0⎤ ⎢ ⎥ ⎢0 1 −1⎥ ⎢0 2 1⎥ ⎣ ⎦
⎡ 1 0 0⎤ ⎢ ⎥ E1 = ⎢0 1 0⎥ ⎢ 1 0 1⎥ ⎣ ⎦
⎡2 1 0⎤ ⎢ ⎥ ⎢0 1 −1⎥ = U ⎢0 0 3⎥ ⎣ ⎦
⎡ 1 0 0⎤ ⎢ ⎥ 1 0⎥ E2 = ⎢0 ⎢0 −2 1⎥ ⎣ ⎦
E2 E1 A = U A =
E1−1E2−1U
⎡ 1 0 0⎤⎡2 1 0⎤ ⎢ ⎥⎢ ⎥ = ⎢ 0 1 0⎥⎢0 1 −1⎥ = LU ⎢−1 2 1⎥⎢0 0 3⎥ ⎣ ⎦⎣ ⎦
⎡ 1 0 0⎤ ⎡ y1 ⎤ ⎡ 1⎤ ⎢ ⎥⎢ ⎥ ⎢ ⎥ (b) Ly = b : ⎢ 0 1 0⎥ ⎢ y2 ⎥ = ⎢ 2⎥ ⎢−1 2 1⎥ ⎢ y3 ⎥ ⎢−2⎥ ⎣ ⎦⎣ ⎦ ⎣ ⎦
y₁ = 1, y₂ = 2, and −y₁ + 2y₂ + y₃ = −2 ⇒ y₃ = −5
(c) Ux = y:
⎡2 1  0⎤⎡x₁⎤   ⎡ 1⎤
⎢0 1 −1⎥⎢x₂⎥ = ⎢ 2⎥
⎣0 0  3⎦⎣x₃⎦   ⎣−5⎦
x₃ = −5/3, x₂ − x₃ = 2 ⇒ x₂ = 1/3, and 2x₁ + x₂ = 1 ⇒ x₁ = 1/3. So, the solution to the system Ax = b is x₁ = 1/3, x₂ = 1/3, x₃ = −5/3.
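The two-stage solve above (forward substitution for Ly = b, then back substitution for Ux = y) can be sketched in plain Python; Fractions keep the thirds exact, and the helper name is my own:

```python
# Solve Ax = b from an LU factorization: forward substitution, then back
# substitution.  Works for any nonsingular triangular L and U.
from fractions import Fraction

def lu_solve(L, U, b):
    n = len(b)
    y = [Fraction(0)] * n
    for i in range(n):                      # forward substitution for Ly = b
        y[i] = (Fraction(b[i]) - sum(L[i][j] * y[j] for j in range(i))) / L[i][i]
    x = [Fraction(0)] * n
    for i in reversed(range(n)):            # back substitution for Ux = y
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

L = [[1, 0, 0], [0, 1, 0], [-1, 2, 1]]     # factors from Exercise 45
U = [[2, 1, 0], [0, 1, -1], [0, 0, 3]]
b = [1, 2, -2]
```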
47. You could first factor the matrix A = LU . Then for each right hand side bi , solve Ly = b i and Ux = y. 2
⎡ 1 0⎤ ⎡ 1 0⎤ 49. A2 = ⎢ ⎥ = ⎢ ⎥ = A. 0 0 ⎣ ⎦ ⎣0 0⎦
Because A2 = A, A is idempotent.
51. A² = ⎡ 2  3⎤⎡ 2  3⎤ = ⎡1 0⎤ ≠ A.
         ⎣−1 −2⎦⎣−1 −2⎦   ⎣0 1⎦
Because A² ≠ A, A is not idempotent.

53. A² = ⎡0 0 1⎤⎡0 0 1⎤ = ⎡1 0 0⎤ ≠ A.
         ⎢0 1 0⎥⎢0 1 0⎥   ⎢0 1 0⎥
         ⎣1 0 0⎦⎣1 0 0⎦   ⎣0 0 1⎦
Because A² ≠ A, A is not idempotent.

55. Begin by finding A².
A² = ⎡1 0⎤⎡1 0⎤ = ⎡       1  0⎤
     ⎣a b⎦⎣a b⎦   ⎣a(1 + b) b²⎦
Setting A² = A yields the equations a = a(1 + b) and b = b². The second equation is satisfied when b = 1 or b = 0. If b = 1, then a = 0, and if b = 0, then a can be any real number.

57. Because A is idempotent and invertible, you have
A² = A
A⁻¹A² = A⁻¹A
(A⁻¹A)A = I
A = I.

59. (AB)² = (AB)(AB) = A(BA)B = A(AB)B = (AA)(BB) = AB.
So, (AB)² = AB, and AB is idempotent.

61. A = Eₖ ⋯ E₂E₁B for E₁, …, Eₖ elementary, and B = Fₜ ⋯ F₂F₁C for F₁, …, Fₜ elementary. Then
A = Eₖ ⋯ E₂E₁B = (Eₖ ⋯ E₂E₁)(Fₜ ⋯ F₂F₁)C,
which shows that A is row equivalent to C.
Section 2.5 Applications of Matrix Operations 1. The matrix is not stochastic because every entry of a stochastic matrix must satisfy the inequality 0 ≤ aij ≤ 1. 3. This matrix is stochastic because each entry is between 0 and 1, and each column adds up to 1. 5. The matrix is stochastic because 0 ≤ aij ≤ 1 and each
column adds up to 1. 7. Form the matrix representing the given transition probabilities. Let A represent people who purchased the product and B represent people who did not.
From A
B
⎡0.80 0.30⎤ A⎫ P = ⎢ ⎥ ⎬ To ⎣0.20 0.70⎦ B⎭ The state matrix representing the current population is ⎡100⎤ A X = ⎢ ⎥ ⎣900⎦ B The state matrix for next month is ⎡0.80 0.30⎤ ⎡100⎤ ⎡350⎤ PX = ⎢ ⎥ ⎢ ⎥ = ⎢ ⎥. ⎣0.20 0.70⎦ ⎣900⎦ ⎣650⎦ The state matrix for the month after next is ⎡0.80 0.30⎤⎡350⎤ ⎡475⎤ P( PX ) = ⎢ ⎥⎢ ⎥ = ⎢ ⎥. ⎣0.20 0.70⎦⎣650⎦ ⎣525⎦ So, next month 350 people will purchase the product. In two months 475 people will purchase the product.
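The repeated multiplication PX, P(PX), … is a one-line step in code; a minimal sketch in plain Python (the helper name is my own):

```python
# One step of the Markov chain of Exercise 7: X_next = P X.
def step(P, X):
    return [sum(P[i][j] * X[j] for j in range(len(X))) for i in range(len(P))]

P = [[0.80, 0.30], [0.20, 0.70]]   # transition matrix
X = [100, 900]                     # current state
X1 = step(P, X)                    # state after one month
X2 = step(P, X1)                   # state after two months
```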
9. Form the matrix representing the given transition probabilities. Let N represent nonsmokers, S0 represent those who smoke one pack or less, and S1 represent those who smoke more than one pack.
N
From S0
S1
⎡0.93 0.10 0.05⎤ N ⎫ ⎢ ⎥ ⎪ P = ⎢0.05 0.80 0.10⎥ S0 ⎬ To ⎢0.02 0.10 0.85⎥ S1 ⎪ ⎣ ⎦ ⎭
The state matrix representing the current population is ⎡5000⎤ N ⎢ ⎥ X = ⎢2500⎥ S0 ⎢2500⎥ S1 ⎣ ⎦
The state matrix for the next month is ⎡0.93 0.10 0.05⎤ ⎡5000⎤ ⎡5025⎤ ⎢ ⎥⎢ ⎥ ⎢ ⎥ PX = ⎢0.05 0.80 0.10⎥ ⎢2500⎥ = ⎢2500⎥. ⎢0.02 0.10 0.85⎥ ⎢2500⎥ ⎢2475⎥ ⎣ ⎦⎣ ⎦ ⎣ ⎦
The state matrix for the month after next is ⎡0.93 0.10 0.05⎤⎡5025⎤ ⎡5047 ⎤ ⎢ ⎥⎢ ⎥ ⎢ ⎥ P( PX ) = ⎢0.05 0.80 0.10⎥⎢2500⎥ = ⎢2498.75⎥. ⎢0.02 0.10 0.85⎥⎢2475⎥ ⎢2454.25⎥ ⎣ ⎦⎣ ⎦ ⎣ ⎦
So, next month the population will be grouped as follows: 5025 nonsmokers, 2500 smokers of one pack or less per day, and 2475 smokers of more than one pack per day. In two months the population will be grouped as follows: 5047 nonsmokers, 2499 smokers of one pack or less per day, and 2454 smokers of more than one pack per day.
11. Form the matrix representing the given transition probabilities. Let A represent an hour or more of TV and B less than one hour. From A B
⎡0 0.25⎤ A⎫ P = ⎢ ⎥ ⎬ To ⎣ 1 0.75⎦ B⎭ ⎡100⎤ The state matrix representing the current distribution is X = ⎢ ⎥. ⎣100⎦ ⎡0 0.25⎤ ⎡100⎤ ⎡ 25⎤ The state matrix for one day later is PX = ⎢ ⎥ ⎢ ⎥ = ⎢ ⎥. 1 0.75 100 ⎣ ⎦⎣ ⎦ ⎣175⎦ ⎡0 0.25⎤ ⎡ 25⎤ ⎡ 44⎤ In two days the state matrix is P( PX ) = ⎢ ⎥ ⎢ ⎥ ≈ ⎢ ⎥. ⎣ 1 0.75⎦ ⎣175⎦ ⎣156⎦ ⎡ 40⎤ In thirty days, the state matrix will be P 30 X = ⎢ ⎥. ⎣160⎦ So, 25 will watch TV for an hour or more tomorrow, 44 the day after tomorrow, and 40 in thirty days. 13. Because the columns in a stochastic matrix add up to 1, you can represent two stochastic matrices as
13. Because the columns in a stochastic matrix add up to 1, you can represent two stochastic matrices as
P = ⎡    a     b⎤ and Q = ⎡    c     d⎤.
    ⎣1 − a 1 − b⎦         ⎣1 − c 1 − d⎦
Then
PQ = ⎡ac + b(1 − c)             ad + b(1 − d)            ⎤ = ⎡ac + b − bc        ad + b − bd      ⎤.
     ⎣c(1 − a) + (1 − b)(1 − c) d(1 − a) + (1 − b)(1 − d)⎦   ⎣1 − (ac + b − bc)  1 − (ad + b − bd)⎦
The columns of PQ add up to 1, and the entries are nonnegative because those of P and Q are nonnegative. So, PQ is stochastic.
S
[19
E
L
5 12]
L
[12
_
C
O
0 3]
[15
S
O
L
14 19]
N
[15
12 9]
I
D A
T
E D
_
[4
20]
[5
0]
1
4
Multiplying each uncoded row matrix on the right by A yields the coded row matrices ⎡ 1 −1 0⎤ 19 5 12 19 5 12 A = [ ] [ ]⎢⎢ 1 0 −1⎥⎥ = [−48 5 31] ⎢−6 2 3⎥ ⎣ ⎦
[12 [15 [15 [4 [5
0 3] A = [−6 −6 9] 14 19] A = [−85 23 43] 12 9] A = [−27 3 15] 1 20] A = [−115 36 59] 4 0] A = [9 −5 −4]
So, the coded message is −48, 5, 31, − 6, − 6, 9, − 85, 23, 43, − 27, 3, 15, −115, 36, 59, 9, − 5, − 4.
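The row-matrix-times-A encoding step of Exercise 15 can be sketched in plain Python (the helper name is my own):

```python
# Multiply an uncoded row matrix on the right by the key A, as in Exercise 15.
def encode_row(row, A):
    return [sum(row[k] * A[k][j] for k in range(len(row)))
            for j in range(len(A[0]))]

A = [[1, -1, 0], [1, 0, -1], [-6, 2, 3]]   # the 3 x 3 key used above
```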
17. Divide the message into pairs of letters and form the uncoded row matrices.
C O   M E   _ H   O M   E _   S O   O N
[3
15]
[13 5]
[0
8]
[15
13]
[5
0]
[19
15]
[15
N 14]
Multiplying each uncoded row matrix on the right by A yields the coded row matrices. ⎡1 2⎤ [3 15]⎢ ⎥ = [48 81] ⎣3 5⎦ ⎡1 2⎤ ⎥ = [28 51] ⎣3 5⎦
[13 5]⎢
⎡1 2⎤ ⎥ = [24 40] ⎣3 5⎦
[0 8]⎢ [15 [5 [19
⎡1 2⎤ 13]⎢ ⎥ = [54 95] ⎣3 5⎦ ⎡1 2⎤ 0]⎢ ⎥ = [5 10] ⎣3 5⎦ ⎡1 2⎤ 15]⎢ ⎥ = [64 113] ⎣3 5⎦
⎡1 2⎤ 14]⎢ ⎥ = [57 100] ⎣3 5⎦ So, the coded message is 48, 81, 28, 51, 24, 40, 54, 95, 5, 10, 64, 113, 57, 100.
[15
⎡−5 2⎤ 19. Find A−1 = ⎢ ⎥ ⎣ 3 −1⎦ and multiply each coded row matrix on the right by A−1 to find the associated uncoded row matrix.
[11
⎡−5 2⎤ 21]⎢ ⎥ = [8 1] ⇒ H, A ⎣ 3 −1⎦
[64
⎡−5 2⎤ 112]⎢ ⎥ = [16 16] ⇒ P, P ⎣ 3 −1⎦
[25
⎡−5 2⎤ 50]⎢ ⎥ = [25 0] ⇒ Y, _ ⎣ 3 −1⎦
[29
⎡−5 2⎤ 53]⎢ ⎥ = [14 5] ⇒ N, E ⎣ 3 −1⎦
[23
⎡−5 2⎤ 46]⎢ ⎥ = [23 0] ⇒ W, _ ⎣ 3 −1⎦
[40
⎡−5 2⎤ 75]⎢ ⎥ = [25 5] ⇒ Y, E ⎣ 3 −1⎦
[55
⎡−5 2⎤ 92]⎢ ⎥ = [1 18] ⇒ A, R ⎣ 3 −1⎦
So, the message is HAPPY_NEW_YEAR.
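Decoding reverses the multiplication, using A⁻¹ and the 1–26 letter code (0 for a space); a minimal sketch in plain Python (the helper name is my own):

```python
# Multiply a coded 1 x 2 row matrix on the right by A^(-1) and map the
# numbers back to letters, as in Exercise 19.
def decode_row(row, A_inv):
    nums = [sum(row[k] * A_inv[k][j] for k in range(2)) for j in range(2)]
    return "".join(" " if n == 0 else chr(ord("A") + n - 1) for n in nums)

A_inv = [[-5, 2], [3, -1]]   # the inverse key used above
```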
21. Find
A⁻¹ = ⎡−13  6  4⎤
      ⎢ 12 −5 −3⎥
      ⎣ −5  2  1⎦
and multiply each coded row matrix on the right by A⁻¹ to find the associated uncoded row matrix.
[13 [−1 [3 [4 [−5 [4
19 10] A −33 −2
−1
⎡−13 ⎢ = [13 19 10]⎢ 12 ⎢ −5 ⎣
− 77] A−1 = −14] A
−1
− 9] A
=
−1
=
−25 − 47] A−1
=
−1
=
1 1
− 9] A
[2 [7 [5 [0 [5
6 −5 2
5 18] ⇒ B, 0 1 1 1
4⎤ ⎥ −3⎥ = [9 3 5] ⇒ I, C, E 1⎥⎦ E,
R
4] ⇒ G, _,
D
4] ⇒ E, 8] ⇒ _,
4] ⇒ E,
A, D A, H A, D
The message is ICEBERG_DEAD_AHEAD. ⎡a b⎤ 23. Let a A−1 = ⎢ ⎥ and find that ⎣c d ⎦ _ R ⎡a b⎤ [−18 −18]⎢ ⎥ = [0 18] ⎣c d ⎦ ⎡− ⎣ 18( a + c) − 18(b + d )⎤⎦ = [0 18]. So, c = − a and d = −1 − b. Using these values you find that O N ⎡a b ⎤ [1 16]⎢ ⎥ = [15 14] ⎣⎢− a −(1 + b)⎥⎦
[−15a
−15b − 16] = [15 14].
So, a = −1, b = −2, c = 1, and d = 1. Using the matrix ⎡−1 −2⎤ A− 1 = ⎢ ⎥ ⎣ 1 1⎦ multiply each coded row matrix to yield the uncoded row matrices
[13 5], [5
20], [0 13], [5 0], [20 15], [14 9], [7 8], [20 0], [0 18], [15 14].
This corresponds to the message MEET_ME_TONIGHT_RON.
Section 2.5 ⎡ 1 0 2⎤ ⎡−3 2 2⎤ ⎢ ⎥ ⎢ ⎥ −1 25. For A = ⎢2 −1 1⎥ you have A = ⎢−4 2 3⎥. ⎢0 1 2⎥ ⎢ ⎥ ⎣ 2 −1 −1⎦ ⎣ ⎦
−14 29] A−1 = −15 62] A
−1
[0 [16 [13 [18 [8 [5 [22 [20 [23 [23 [12 [12 [25 [18 [5 [5
=
3 38] A−1 =
20 76] A
−1
=
− 5 21] A−1 =
− 7 32] A
−1
=
9 77] A−1 =
− 8 48] A−1 = − 5 51] A
−1
1 26] A
−1
=
3 79] A−1 = =
−22 49] A−1 = −19 69] A−1 = 8 67] A
−1
=
−11 27] A−1 =
−18 28] A
−1
=
19 20
5] ⇒
_,
S,
User Coal Steel ⎡0.10 0.20⎤ Coal⎫ D = ⎢ ⎬ Supplier ⎥ ⎣0.80 0.10⎦ Steel⎭
5] ⇒ P,
T,
E E
0 20] ⇒ R,
_,
T
5
E,
_
5] ⇒ E,
L,
E
5 14] ⇒ V,
E,
N
8
0] ⇒ T,
H,
_
0] ⇒ W, E,
_
9 12] ⇒ W,
I,
L
0
1] ⇒ L,
_,
A
0] ⇒ Y,
S,
12
5
23 19
0] ⇒ H,
18
⎡ 0.90 −0.20⎤ ⎡10,000 ⎤ ⎢ ⎥X = ⎢ ⎥. 0.80 0.90 − ⎣ ⎦ ⎣20,000⎦ Solve this system by using Gauss-Jordan elimination to ⎡20,000⎤ obtain X = ⎢ ⎥. ⎣40,000⎦
29. From the given matrix D, form the linear system X = DX + E , which can be written as
(I
_
E, M
2] ⇒ E, M, B
0] ⇒ E,
R,
− D) X = E
⎡ 0.6 −0.5 −0.5⎤ ⎡1000⎤ ⎢ ⎥ ⎢ ⎥ 1 −0.3⎥ X = ⎢1000⎥. ⎢ −0.3 ⎢−0.2 −0.2 1.0⎥ ⎢1000⎥ ⎣ ⎦ ⎣ ⎦
1] ⇒ L, W, A
5 13] ⇒ R, 13
The equation X = DX + E may be rewritten in the form ( I − D) X = E; that is,
E
5] ⇒ M, B,
2
27. Use the given information to find D.
Multiply each coded row matrix on the right by A−1 to find the associated uncoded row matrix.
[38 [56 [17 [18 [18 [29 [32 [36 [33 [41 [12 [58 [63 [28 [31 [41
⎡8622.0⎤ ⎢ ⎥ Solving this system, X = ⎢4685.0⎥. ⎢3661.4⎥ ⎣ ⎦
_
The message is _SEPTEMBER_THE_ELEVENTH_ WE_WILL_ALWAYS_REMEMBER_
31. (a) The line that best fits the given points is shown on the graph. y 4 3
(2, 3)
2
(−2, 0)
(0, 1) x
−1
1
2
⎡0⎤ ⎡1 −2⎤ ⎢ ⎥ ⎢ ⎥ (b) Using the matrices X = ⎢1 0⎥ and Y = ⎢ 1⎥ , you have ⎢1 2⎥ ⎢3⎥ ⎣ ⎦ ⎣ ⎦ ⎡1 −2⎤ ⎡ 1 1 1⎤ ⎢ ⎡3 0⎤ ⎥ XT X = ⎢ ⎥ ⎢1 0⎥ = ⎢ ⎥. ⎣−2 0 2⎦ ⎢ ⎣0 8⎦ ⎥ ⎣1 2⎦ ⎡0⎤ ⎡ 1 1 1⎤ ⎢ ⎥ ⎡4⎤ X Y = ⎢ ⎥ ⎢ 1⎥ = ⎢ ⎥ ⎣−2 0 2⎦ ⎢ ⎥ ⎣6⎦ ⎣3⎦ T
A = (XᵀX)⁻¹XᵀY = ⎡1/3   0⎤⎡4⎤ = ⎡4/3⎤
                 ⎣  0 1/8⎦⎣6⎦   ⎣3/4⎦
So, the least squares regression line is y = (3/4)x + 4/3.
(c) Solving Y = XA + E for E,
E = Y − XA = ⎡0⎤   ⎡1 −2⎤⎡4/3⎤   ⎡ 1/6⎤
             ⎢1⎥ − ⎢1  0⎥⎣3/4⎦ = ⎢−1/3⎥.
             ⎣3⎦   ⎣1  2⎦        ⎣ 1/6⎦
So, the sum of the squared error is
EᵀE = [1/6 −1/3 1/6] ⎡ 1/6⎤ = 1/6.
                     ⎢−1/3⎥
                     ⎣ 1/6⎦
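For a straight-line fit, the normal-equation computation A = (XᵀX)⁻¹XᵀY reduces to closed-form sums; a minimal sketch in plain Python under that assumption (exact Fractions; names are my own):

```python
# Least squares line y = a0 + a1 x from the normal equations, written out
# in terms of the sums that make up X^T X and X^T Y.
from fractions import Fraction

def least_squares_line(points):
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    det = n * sxx - sx * sx                  # det(X^T X), assumed nonzero
    a0 = Fraction(sxx * sy - sx * sxy, det)  # intercept
    a1 = Fraction(n * sxy - sx * sy, det)    # slope
    return a0, a1

points = [(-2, 0), (0, 1), (2, 3)]   # the data of Exercise 31
```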
33. (a) The line that best fits the given points is shown on the graph. y 4 3
(0, 4) (1, 3)
2 1 −1
(1, 1)
(2, 0)
1
2
x
3
⎡1 ⎢ 1 (b) Using the matrices X = ⎢ ⎢1 ⎢ ⎢⎣1 ⎡1 ⎢ 1 1 1 1 ⎡ ⎤ ⎢1 XT X = ⎢ ⎥⎢ ⎣0 1 1 2⎦ ⎢1 ⎣⎢1
0⎤ ⎡4⎤ ⎥ ⎢ ⎥ 3 1⎥ and Y = ⎢ ⎥ , you have ⎥ ⎢ 1⎥ 1 ⎥ ⎢ ⎥ ⎢⎣0⎥⎦ 2⎥⎦
0⎤ ⎥ 1⎥ ⎡4 4⎤ = ⎢ ⎥. ⎥ 1 4 6⎦ ⎣ ⎥ 2⎦⎥
⎡4⎤ ⎢ ⎥ ⎡ 1 1 1 1⎤ ⎢3⎥ ⎡8⎤ X TY = ⎢ ⎥⎢ ⎥ = ⎢ ⎥ 0 1 1 2 1 ⎣ ⎦⎢ ⎥ ⎣4⎦ ⎣⎢0⎦⎥ A = ( X T X ) X TY = −1
1⎡ 8⎢
6 −4⎤⎡8⎤ ⎡ 4⎤ ⎥⎢ ⎥ = ⎢ ⎥ 4⎦⎣4⎦ ⎣−2⎦
⎣−4
So, the least squares regression line is y = 4 − 2 x. (c) Solving Y = XA + E for E, ⎡4⎤ ⎡1 0⎤ ⎡4⎤ ⎡4⎤ ⎡ 0⎤ ⎢ 3⎥ ⎢1 1⎥ 4 ⎢ 3⎥ ⎢2⎥ ⎢ ⎥ ⎥ ⎡ ⎤ = ⎢ ⎥ − ⎢ ⎥ = ⎢ 1⎥. E = Y − XA = ⎢ ⎥ − ⎢ ⎢ 1⎥ ⎢1 1⎥ ⎢⎣−2⎥⎦ ⎢ 1⎥ ⎢2⎥ ⎢−1⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ 0 1 2 0 0 ⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎣ 0⎦ So, the sum of the squared error is ⎡ 0⎤ ⎢ ⎥ 1 E T E = [0 1 −1 0]⎢ ⎥ = 2. ⎢−1⎥ ⎢ ⎥ ⎣⎢ 0⎦⎥
35. Using the matrices
⎡3 3⎤ ⎡5⎤ T XT X = ⎢ ⎥ and X Y = ⎢ ⎥ 3 5 ⎣ ⎦ ⎣9⎦ −1
( X TY ) =
53
37. Using the matrices ⎡1 −2⎤ ⎡0⎤ ⎢ ⎥ ⎢ ⎥ 1 1 −1⎥ and Y = ⎢ ⎥ you have X = ⎢ ⎢1 0⎥ ⎢ 1⎥ ⎢ ⎥ ⎢ ⎥ ⎢⎣1 1⎥⎦ ⎢⎣2⎥⎦
⎡1 0⎤ ⎡0⎤ ⎢ ⎥ ⎢ ⎥ X = ⎢1 1⎥ and Y = ⎢ 1⎥ you have ⎢1 2⎥ ⎢4⎥ ⎣ ⎦ ⎣ ⎦
A = (X T X )
Applications of Matrix Operations
⎡4⎤ ⎡ 4 −2⎤ T XT X = ⎢ ⎥ and X Y = ⎢ ⎥ ⎣−2 6⎦ ⎣ 1⎦
⎡− 1 ⎤ ⎢ 3 ⎥. ⎢⎣ 2⎥⎦
−1 ⎡0.3 0.1⎤⎡4⎤ ⎡1.3⎤ A = ( X T X ) X TY = ⎢ ⎥⎢ ⎥ = ⎢ ⎥. ⎣0.1 0.2⎦⎣ 1⎦ ⎣0.6⎦ So, the least squares regression line is y = 0.6 x + 1.3.
So, the least squares regression line is y = 2 x − 13.
39. Using the four given points, the matrices X and Y are ⎡1 −5⎤ ⎡ 1⎤ ⎢ ⎥ ⎢ ⎥ 1 1⎥ 3 X = ⎢ and Y = ⎢ ⎥. This means that ⎢1 2⎥ ⎢3⎥ ⎢ ⎥ ⎢ ⎥ ⎢⎣1 2⎥⎦ ⎢⎣5⎥⎦ ⎡1⎤ ⎡1 −5⎤ ⎢ ⎥ ⎢ ⎥ 1 1 1 1 1 1 1 1 1 1 4 0 ⎡ ⎤ ⎡ ⎤ ⎡ ⎤ ⎢3⎥ = ⎡12⎤. ⎢ ⎥ = T = XT X = ⎢ and X Y ⎥⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥ ⎥ ⎢ ⎣−5 1 2 2⎦ ⎢1 2⎥ ⎣0 34⎦ ⎣−5 1 2 2⎦ ⎢3⎥ ⎣14⎦ ⎢⎣1 2⎥⎦ ⎢⎣5⎥⎦ Now, using ( X T X ) to find the coefficient matrix A, you have −1
A = (X X ) T
−1
−1 ⎡ 14 ⎡4 0⎤ ⎡12⎤ X Y = ⎢ ⎥ ⎢ ⎥ = ⎢ ⎣0 34⎦ ⎣14⎦ ⎢⎣ 0
T
So, the least squares regression line is y =
0⎤ ⎡12⎤
1 ⎥ ⎢14⎥ 34 ⎥ ⎦⎣ ⎦ 7 x 17
⎡ 3⎤ = ⎢ 7 ⎥. ⎣⎢17 ⎦⎥
+ 3 = 0.412 x + 3.
41. Using the five given points, the matrices X and Y are
    X = [1 −5; 1 −1; 1 3; 1 7; 1 5] and Y = [10; 8; 6; 4; 5]. This means that
    X^T X = [5 9; 9 109] and X^T Y = [33; 13].
    Now, using (X^T X)^-1 to find the coefficient matrix A, you have
    A = (X^T X)^-1 X^T Y = (1/464)[109 −9; −9 5][33; 13] = [15/2; −1/2].
    So, the least squares regression line is y = −(1/2)x + 15/2 = −0.5x + 7.5.
43. (a) Using the matrices X = [1 3.00; 1 3.25; 1 3.50] and Y = [4500; 3750; 3300], you have
    X^T X = [3 9.75; 9.75 31.8125] and X^T Y = [11,550; 37,237.5].
    Now, using (X^T X)^-1 to find the coefficient matrix A, you have
    A = (X^T X)^-1 X^T Y = (1/0.375)[31.8125 −9.75; −9.75 3][11,550; 37,237.5] = (1/0.375)[4368.75; −900] = [11,650; −2400].
    So, the least squares regression line is y = 11,650 − 2400x.
    (b) When x = 3.40, y = 11,650 − 2400(3.40) = 3490. So, the demand is 3490 gallons.

45. (a) Using the matrices X = [1 0; 1 1; 1 2; 1 3; 1 4] and Y = [221.5; 230.4; 229.6; 231.4; 237.2] you have
    X^T X = [5 10; 10 30] and X^T Y = [1150.1; 2332.6].
    Now, using (X^T X)^-1 to find the coefficient matrix A, you have
    A = (X^T X)^-1 X^T Y = (1/50)[30 −10; −10 5][1150.1; 2332.6] = (1/50)[11,177; 162] = [223.54; 3.24].
    So, the least squares regression line is y = 3.24t + 223.54.
    (b) Using a graphing utility with L1 = {0, 1, 2, 3, 4} and L2 = {221.5, 230.4, 229.6, 231.4, 237.2} gives the same least squares regression line: y = 3.24x + 223.54.
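The "graphing utility" check in part (b) can be reproduced with NumPy's `polyfit`, which fits the same least squares line; a sketch with this exercise's data:

```python
import numpy as np

t = np.array([0, 1, 2, 3, 4], dtype=float)
y = np.array([221.5, 230.4, 229.6, 231.4, 237.2])

# polyfit with degree 1 returns [slope, intercept] of the least squares line
slope, intercept = np.polyfit(t, y, 1)

print(round(slope, 2), round(intercept, 2))  # 3.24 223.54
```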
Review Exercises for Chapter 2 ⎡2 1 0⎤ ⎡5 3 −6⎤ ⎡2 1 0⎤ ⎡15 9 −18⎤ ⎡−13 −8 18⎤ 1. ⎢ ⎥ − 3⎢ ⎥ = ⎢ ⎥ − ⎢ ⎥ = ⎢ ⎥ ⎣0 5 −4⎦ ⎣0 −2 5⎦ ⎣0 5 −4⎦ ⎣ 0 −6 15⎦ ⎣ 0 11 −19⎦ ⎡ 1(6) + 2( 4) 1( −2) + 2(0) 1(8) + 2(0)⎤ ⎡ 1 2⎤ ⎡14 −2 8⎤ ⎢ ⎥ ⎢ ⎥ ⎡6 −2 8⎤ ⎢ ⎥ 3. ⎢5 −4⎥ ⎢ ⎥ = ⎢5(6) − 4( 4) 5( −2) − 4(0) 5(8) − 4(0)⎥ = ⎢14 −10 40⎥ 4 0 0 ⎣ ⎦ ⎢6(6) + 0( 4) 6( −2) + 0(0) 6(8) + 0(0)⎥ ⎢6 0⎥ ⎢36 −12 48⎥ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎡1( 4) 1( −3) + 3(3) 1( 2) + 3( −1) + 2( 2)⎤ 3⎤ ⎡ 1 3 2⎤ ⎡4 −3 2⎤ ⎡4 6 ⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥ 5. ⎢0 2 −4⎥ ⎢0 3 −1⎥ = ⎢ 0 2(3) 2( −1) + ( −4)( 2) ⎥ = ⎢0 6 −10⎥ ⎢ 0 ⎥ ⎢0 0 ⎢0 0 3⎥⎦ ⎢⎣0 0 2⎥⎦ 0 3( 2) 6⎥⎦ ⎣ ⎣ ⎣ ⎦
7. Multiplying the left side of the equation yields
   [5x + 4y; −x + y] = [2; −22].
   So, the corresponding system of linear equations is
   5x + 4y = 2
   −x + y = −22.

9. Multiplying the left side of the equation yields
   [x2 − 2x3; −x1 + 3x2 + x3; 2x1 − 2x2 + 4x3] = [−1; 0; 2].
   So, the corresponding system of linear equations is
   x2 − 2x3 = −1
   −x1 + 3x2 + x3 = 0
   2x1 − 2x2 + 4x3 = 2.

11. Letting A = [2 −1; 3 2], x = [x; y], and b = [5; −4], the given system can be written in matrix form Ax = b:
    [2 −1; 3 2][x; y] = [5; −4].

13. Letting A = [2 3 1; 2 −3 −3; 4 −2 3], x = [x1; x2; x3], and b = [10; 22; −2], the given system can be written in matrix form Ax = b:
    [2 3 1; 2 −3 −3; 4 −2 3][x1; x2; x3] = [10; 22; −2].

15. A^T = [1 0; 2 1; −3 2],
    A^T A = [1 0; 2 1; −3 2][1 2 −3; 0 1 2] = [1 2 −3; 2 5 −4; −3 −4 13]
    AA^T = [1 2 −3; 0 1 2][1 0; 2 1; −3 2] = [14 −4; −4 5]

17. A^T = [1 3 −1]
    AA^T = [1; 3; −1][1 3 −1] = [1 3 −1; 3 9 −3; −1 −3 1]
    A^T A = [1 3 −1][1; 3; −1] = [11]

19. Use the formula for the inverse of a 2 × 2 matrix.
    A^-1 = (1/(ad − bc))[d −b; −c a] = (1/(3(−1) − (−1)(2)))[−1 1; −2 3] = [1 −1; 2 −3]

21. Begin by adjoining the identity matrix to the given matrix.
    [A  I] = [2 3 1 | 1 0 0; 2 −3 −3 | 0 1 0; 4 0 3 | 0 0 1]
    This matrix reduces to
    [I  A^-1] = [1 0 0 | 3/20 3/20 1/10; 0 1 0 | 3/10 −1/30 −2/15; 0 0 1 | −1/5 −1/5 1/5].
    So, the inverse matrix is
    A^-1 = [3/20 3/20 1/10; 3/10 −1/30 −2/15; −1/5 −1/5 1/5].

23. Ax = b:
    [5 4; −1 1][x1; x2] = [2; −22]
    Because
    A^-1 = (1/(5(1) − 4(−1)))[1 −4; 1 5] = [1/9 −4/9; 1/9 5/9]
    solve the equation Ax = b as follows.
    x = A^-1 b = [1/9 −4/9; 1/9 5/9][2; −22] = [10; −12]
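The adjoin-and-reduce procedure of Exercise 21 can be carried out mechanically. A minimal Gauss-Jordan sketch (partial pivoting only, no other refinements) applied to that exercise's matrix:

```python
import numpy as np

def inverse_by_gauss_jordan(A):
    """Reduce [A | I] to [I | A^-1] with elementary row operations."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])
    for i in range(n):
        # Swap in the largest available pivot, then normalize the pivot row
        p = i + int(np.argmax(np.abs(M[i:, i])))
        M[[i, p]] = M[[p, i]]
        M[i] /= M[i, i]
        # Eliminate the pivot column from every other row
        for r in range(n):
            if r != i:
                M[r] -= M[r, i] * M[i]
    return M[:, n:]

A = np.array([[2, 3, 1], [2, -3, -3], [4, 0, 3]])
Ainv = inverse_by_gauss_jordan(A)
print(np.allclose(Ainv, np.linalg.inv(A)))  # True
```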
25. Ax = b:
    [−1 1 2; 2 3 1; 5 4 2][x1; x2; x3] = [1; −2; 4]
    Using Gauss-Jordan elimination, you find that
    A^-1 = [−2/15 −2/5 1/3; −1/15 4/5 −1/3; 7/15 −3/5 1/3].
    So, solve the equation Ax = b as follows.
    x = A^-1 b = [−2/15 −2/5 1/3; −1/15 4/5 −1/3; 7/15 −3/5 1/3][1; −2; 4] = [2; −3; 3]

27. Because (3A)^-1 = [4 −1; 2 3], you can use the formula for the inverse of a 2 × 2 matrix to obtain
    3A = [4 −1; 2 3]^-1 = (1/(4(3) − (−1)(2)))[3 1; −2 4] = (1/14)[3 1; −2 4].
    So, A = (1/42)[3 1; −2 4] = [1/14 1/42; −1/21 2/21].

29. A is nonsingular if and only if the second row is not a multiple of the first. That is, A is nonsingular if and only if x ≠ −3.
    Alternatively, you could use the formula for the inverse of a 2 × 2 matrix to show that A is nonsingular if ad − bc = 3(−1) − 1(x) ≠ 0. That is, x ≠ −3.

31. Because the given matrix represents the addition of 4 times the third row to the first row of I3, reverse the operation and subtract 4 times the third row from the first row.
    E^-1 = [1 0 −4; 0 1 0; 0 0 1]
For Exercises 33–39, answers will vary. Sample answers are shown below.

33. Begin by finding a sequence of elementary row operations that can be used to write A in reduced row-echelon form.

    Matrix           Elementary Row Operation                   Elementary Matrix
    [1 3/2; 0 1]     Divide row one by 2.                       E1 = [1/2 0; 0 1]
    [1 0; 0 1]       Subtract 3/2 times row two from row one.   E2 = [1 −3/2; 0 1]

    So, factor A as follows.
    A = E1^-1 E2^-1 = [2 0; 0 1][1 3/2; 0 1]
35. Begin by finding a sequence of elementary row operations to write A in reduced row-echelon form.

    Matrix                   Elementary Row Operation               Elementary Matrix
    [1 0 1; 0 1 −2; 0 0 1]   Multiply row three by 1/4.             E1 = [1 0 0; 0 1 0; 0 0 1/4]
    [1 0 1; 0 1 0; 0 0 1]    Add two times row three to row two.    E2 = [1 0 0; 0 1 2; 0 0 1]
    [1 0 0; 0 1 0; 0 0 1]    Add −1 times row three to row one.     E3 = [1 0 −1; 0 1 0; 0 0 1]

    So, factor A as follows.
    A = E1^-1 E2^-1 E3^-1 = [1 0 0; 0 1 0; 0 0 4][1 0 0; 0 1 −2; 0 0 1][1 0 1; 0 1 0; 0 0 1].
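The factorization in Exercise 35 is easy to verify by multiplying the three inverse elementary matrices back together; a quick NumPy check:

```python
import numpy as np

E1_inv = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 4]])   # undo: multiply row 3 by 1/4
E2_inv = np.array([[1, 0, 0], [0, 1, -2], [0, 0, 1]])  # undo: add 2*(row 3) to row 2
E3_inv = np.array([[1, 0, 1], [0, 1, 0], [0, 0, 1]])   # undo: add -1*(row 3) to row 1

# Reassembling A from its elementary factors gives [1 0 1; 0 1 -2; 0 0 4]
A = E1_inv @ E2_inv @ E3_inv
print(A)
```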
37. Let A = [a b; c d]; then A^2 = [a b; c d][a b; c d] = [a^2 + bc  b(a + d); c(a + d)  cb + d^2] = [1 0; 0 1].
    So, many answers are possible: [1 0; 0 1], [−1 0; 0 −1], [−1 0; 0 1], etc.

39. Let A = [a b; c d]; then A^2 = [a^2 + bc  b(a + d); c(a + d)  bc + d^2].
    Solving A^2 = A gives the system of nonlinear equations
    a^2 + bc = a
    d^2 + bc = d
    b(a + d) = b
    c(a + d) = c.
    From this system, conclude that any of the following matrices are solutions to the equation A^2 = A.
    [0 0; 0 0], [0 0; t 1], [0 t; 0 1], [1 0; t 0], [1 t; 0 0], [1 0; 0 1]
41. (a) Letting W = aX + bY + cZ yields the system of linear equations
    a − b + 3c = 3
    2a + 4c = 2
    3b − c = −4
    a + 2b + 2c = −1
    which has the solution: a = −1, b = −1, c = 1.
    (b) Letting Z = aX + bY yields the system of linear equations
    a − b = 3
    2a = 4
    3b = −1
    a + 2b = 2
    which has no solution.
43. Because (A^-1 + B^-1)(A^-1 + B^-1)^-1 = I whenever (A^-1 + B^-1)^-1 exists, it is sufficient to show that (A^-1 + B^-1)(A(A + B)^-1 B) = I.

    (A^-1 + B^-1)(A(A + B)^-1 B) = A^-1 A(A + B)^-1 B + B^-1 A(A + B)^-1 B
                                 = (A + B)^-1 B + B^-1 A(A + B)^-1 B
                                 = (I + B^-1 A)(A + B)^-1 B
                                 = (B^-1 B + B^-1 A)(A + B)^-1 B
                                 = B^-1 (B + A)(A + B)^-1 B
                                 = B^-1 (A + B)(A + B)^-1 B
                                 = B^-1 I B
                                 = B^-1 B = I.

    Therefore, (A^-1 + B^-1)^-1 = A(A + B)^-1 B.
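Exercise 43's identity can be sanity-checked numerically on any pair of invertible matrices whose sum is also invertible; a sketch with arbitrarily chosen matrices:

```python
import numpy as np

inv = np.linalg.inv
A = np.array([[2.0, 1.0], [0.0, 3.0]])
B = np.array([[1.0, 0.0], [1.0, 2.0]])

left = inv(inv(A) + inv(B))   # (A^-1 + B^-1)^-1
right = A @ inv(A + B) @ B    # A (A + B)^-1 B

print(np.allclose(left, right))  # True
```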
45. Answers will vary. Sample answer:

    Matrix              Elementary Matrix
    [2 5; 6 14] = A
    [2 5; 0 −1] = U     E = [1 0; −3 1]

    EA = U, so
    A = E^-1 U = [1 0; 3 1][2 5; 0 −1] = LU.
47. Matrix                      Elementary Matrix
    [1 0 1; 2 1 2; 3 2 6] = A
    [1 0 1; 0 1 0; 3 2 6]       E1 = [1 0 0; −2 1 0; 0 0 1]
    [1 0 1; 0 1 0; 0 2 3]       E2 = [1 0 0; 0 1 0; −3 0 1]
    [1 0 1; 0 1 0; 0 0 3] = U   E3 = [1 0 0; 0 1 0; 0 −2 1]

    E3 E2 E1 A = U, so
    A = E1^-1 E2^-1 E3^-1 U = [1 0 0; 2 1 0; 3 2 1][1 0 1; 0 1 0; 0 0 3] = LU.

    Ly = b: [1 0 0; 2 1 0; 3 2 1][y1; y2; y3] = [3; 7; 8] ⇒ y = [3; 1; −3]
    Ux = y: [1 0 1; 0 1 0; 0 0 3][x1; x2; x3] = [3; 1; −3] ⇒ x = [4; 1; −1]

    So, x = 4, y = 1, and z = −1.
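The two-stage solve in Exercise 47 (forward-substitute Ly = b, then back-substitute Ux = y) can be written out directly; a sketch using that exercise's factors:

```python
import numpy as np

L = np.array([[1, 0, 0], [2, 1, 0], [3, 2, 1]], dtype=float)
U = np.array([[1, 0, 1], [0, 1, 0], [0, 0, 3]], dtype=float)
b = np.array([3, 7, 8], dtype=float)

# Forward substitution: solve Ly = b (L is unit lower triangular)
y = np.zeros(3)
for i in range(3):
    y[i] = b[i] - L[i, :i] @ y[:i]

# Back substitution: solve Ux = y
x = np.zeros(3)
for i in reversed(range(3)):
    x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]

print(y)  # [ 3.  1. -3.]
print(x)  # [ 4.  1. -1.]
```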
49. (a) False. See Theorem 2.1, part 1, page 61. (b) True. See Theorem 2.6, part 2, page 68.
51. (a) False. The matrix [1 0; 0 0] is not invertible.
    (b) False. See Exercise 55, page 72.

53. (a) AB = [580 840 320; 560 420 160; 860 1020 540][3.05 0.05; 3.15 0.08; 3.25 0.10] = [5455 128.2; 3551 77.6; 7591 178.6]
    This matrix shows the total sales of gas each day in the first column and the total profit each day in the second column.
    (b) The gasoline sales profit for Friday through Sunday is the sum of the entries in the second column of AB, 128.2 + 77.6 + 178.6 = $384.40.

55. (a) B = [2 1/2 3], where the entries give the number of 20-minute periods spent bicycling, jogging, and walking, respectively.
    (b) BA = [2 1/2 3][109 136; 127 159; 64 79] = [473.5 588.5]
    (c) The matrix BA represents the number of calories each person burned during the exercises.

57. The given matrix is not stochastic because the entries in columns two and three do not add up to one.

59. PX = [1/2 1/4; 1/2 3/4][128; 64] = [80; 112]
    P^2 X = P[80; 112] = [68; 124]
    P^3 X = P[68; 124] = [65; 127]
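Repeated application of the transition matrix, as in Exercise 59, is just iterated matrix-vector multiplication; a sketch:

```python
import numpy as np

P = np.array([[0.5, 0.25], [0.5, 0.75]])  # transition matrix
X = np.array([128.0, 64.0])               # initial state

state = X
for k in range(1, 4):
    state = P @ state  # advance one step
    print(k, state)
# 1 [ 80. 112.]
# 2 [ 68. 124.]
# 3 [ 65. 127.]
```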
61. Begin by forming the matrix of transition probabilities (column j gives the probabilities of moving from Region j to Regions 1, 2, and 3).
    P = [0.85 0.15 0.10; 0.10 0.80 0.10; 0.05 0.05 0.80]
    (a) The population in each region after one year is given by
    PX = [0.85 0.15 0.10; 0.10 0.80 0.10; 0.05 0.05 0.80][100,000; 100,000; 100,000] = [110,000; 100,000; 90,000]
    so Region 1 has 110,000 people, Region 2 has 100,000, and Region 3 has 90,000.
    (b) The population in each region after three years is given by
    P^3 X = [0.665375 0.322375 0.2435; 0.219 0.562 0.219; 0.115625 0.115625 0.5375][100,000; 100,000; 100,000] = [123,125; 100,000; 76,875]
    so Region 1 has 123,125 people, Region 2 has 100,000, and Region 3 has 76,875.
63. The uncoded row matrices are the pairs
    O N       E _      I F      _ B      Y _       L A       N D
    [15 14],  [5 0],   [9 6],   [0 2],   [25 0],   [12 1],   [14 4].
    Multiplying each 1 × 2 matrix on the right by A yields the coded row matrices
    [103 44], [25 10], [57 24], [4 2], [125 50], [62 25], [78 32].
    So, the coded message is 103, 44, 25, 10, 57, 24, 4, 2, 125, 50, 62, 25, 78, 32.
65. You can find A^-1 to be [3 2; 4 3], and the coded row matrices are
    [−45 34], [36 −24], [−43 37], [−23 22], [−37 29], [57 −38], [−39 31].
    Multiplying each coded row matrix on the right by A^-1 yields the uncoded row matrices
    [1 12],  [12 0],  [19 25],  [19 20],  [5 13],  [19 0],  [7 15]
    A L      L _      S Y       S T       E M      S _      G O
    The decoded message is ALL_SYSTEMS_GO.

67. You can find A^-1 to be [−2 −1 0; 0 1 1; −5 −3 −1], and the coded row matrices are
    [58 −3 −25], [−48 28 19], [−40 13 13], [−98 39 39], [118 −25 −48], [28 −14 −14].
    Multiplying each coded row matrix on the right by A^-1 yields the uncoded row matrices
    [9 14 22],  [1 19 9],  [15 14 0],  [1 20 0],  [4 1 23],  [14 0 0]
    I N V       A S I      O N _       A T _      D A W      N _ _
    So, the message is INVASION_AT_DAWN.

69. Find A^-1 = [−1 −10 −8; −1 −6 −5; 0 −1 −1], and multiply each coded row matrix on the right by A^-1 to find the associated uncoded row matrix.
    [−2 2 5] A^-1 = [0 3 1] ⇒ _, C, A
    [39 −53 −72] A^-1 = [14 0 25] ⇒ N, _, Y
    [−6 −9 93] A^-1 = [15 21 0] ⇒ O, U, _
    [4 −12 27] A^-1 = [8 5 1] ⇒ H, E, A
    [31 −49 −16] A^-1 = [18 0 13] ⇒ R, _, M
    [19 −24 −46] A^-1 = [5 0 14] ⇒ E, _, N
    [−8 −7 99] A^-1 = [15 23 0] ⇒ O, W, _
    The message is _CAN_YOU_HEAR_ME_NOW_.
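The decoding loop used in Exercises 65–69 (multiply each coded row by A^-1, then map 0 to an underscore and 1–26 to A–Z) can be sketched as follows, using Exercise 69's inverse matrix:

```python
import numpy as np

A_inv = np.array([[-1, -10, -8], [-1, -6, -5], [0, -1, -1]])

coded = [[-2, 2, 5], [39, -53, -72], [-6, -9, 93], [4, -12, 27],
         [31, -49, -16], [19, -24, -46], [-8, -7, 99]]

# Index 0 is the underscore; indices 1..26 are A..Z
alphabet = "_ABCDEFGHIJKLMNOPQRSTUVWXYZ"

message = ""
for row in coded:
    decoded = np.array(row) @ A_inv  # row vector times A^-1
    message += "".join(alphabet[n] for n in decoded)

print(message)  # _CAN_YOU_HEAR_ME_NOW_
```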
71. First find the input-output matrix D (user industries across the top, supplier industries down the side).
    D = [0.20 0.50; 0.30 0.10]
    Then solve the equation X = DX + E for X to obtain (I − D)X = E, which corresponds to solving the augmented matrix
    [0.80 −0.50 | 40,000; −0.30 0.90 | 80,000].
    The solution to this system gives you X ≈ [133,333; 133,333].
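Solving the Leontief system (I − D)X = E in Exercise 71 numerically:

```python
import numpy as np

D = np.array([[0.20, 0.50], [0.30, 0.10]])  # input-output matrix
E = np.array([40_000.0, 80_000.0])          # external demand

X = np.linalg.solve(np.eye(2) - D, E)       # total output: X = DX + E
print(np.round(X))  # [133333. 133333.]
```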
73. Using the matrices
    X = [1 1; 1 2; 1 3] and Y = [5; 4; 2] you have
    X^T X = [3 6; 6 14] and X^T Y = [11; 19]
    so
    A = (X^T X)^-1 X^T Y = [7/3 −1; −1 1/2][11; 19] = [20/3; −3/2].
    So, the least squares regression line is y = −(3/2)x + 20/3.

75. Using the matrices
    X = [1 −2; 1 −1; 1 0; 1 1; 1 2] and Y = [4; 2; 1; −2; −3] you have
    X^T X = [5 0; 0 10] and X^T Y = [2; −18]
    so
    A = (X^T X)^-1 X^T Y = [1/5 0; 0 1/10][2; −18] = [0.4; −1.8].
    So, the least squares regression line is y = −1.8x + 0.4, or y = −(9/5)x + 2/5.

77. (a) Begin by finding the matrices X and Y.
    X = [1 1.0; 1 1.5; 1 2.0; 1 2.5] and Y = [32; 41; 48; 53]. Then
    X^T X = [4 7; 7 13.5] and X^T Y = [174; 322].
    The matrix of coefficients is
    A = (X^T X)^-1 X^T Y = (1/5)[13.5 −7; −7 4][174; 322] = [19; 14].
    So, the least squares regression line is y = 19 + 14x.
    (b) When x = 1.6 (160 kilograms per square kilometer), y = 19 + 14(1.6) = 41.4 (kilograms per square kilometer).

79. (a) Using the matrices
    X = [1 0; 1 1; 1 2; 1 3; 1 4; 1 5] and Y = [30.37; 32.87; 34.71; 36.59; 38.14; 39.63] you have
    X^T X = [6 15; 15 55] and X^T Y = [212.31; 562.77].
    Now, using (X^T X)^-1 to find the coefficient matrix A, you have
    A = (X^T X)^-1 X^T Y = (1/105)[55 −15; −15 6][212.31; 562.77] = (1/105)[3235.5; 191.97] ≈ [30.81; 1.828].
    So, the least squares regression line is y = 1.828x + 30.81.
    (b) Using a graphing utility with L1 = {0, 1, 2, 3, 4, 5} and L2 = {30.37, 32.87, 34.71, 36.59, 38.14, 39.63} gives the same least squares regression line: y = 1.828x + 30.81.
    (c) Year       2000   2001   2002   2003   2004   2005
        Actual     30.37  32.87  34.71  36.59  38.14  39.63
        Estimated  30.81  32.64  34.47  36.29  38.12  39.95
    The estimated values are close to the actual values.
    (d) The average monthly rate in 2010 is y = 1.828(10) + 30.81 = $49.09.
    (e) 51 = 1.828x + 30.81
        20.19 = 1.828x
        11 ≈ x
    The average monthly rate will be $51.00 in 2011.
81. (a) Using the matrices
    X = [1 0; 1 1; 1 2; 1 3; 1 4; 1 5] and Y = [1.8; 2.1; 2.3; 2.4; 2.3; 2.5] you have
    X^T X = [6 15; 15 55] and X^T Y = [13.4; 35.6].
    Now, using (X^T X)^-1 to find the coefficient matrix A, you have
    A = (X^T X)^-1 X^T Y = (1/105)[55 −15; −15 6][13.4; 35.6] = (1/105)[203; 12.6] ≈ [1.93; 0.12].
    So, the least squares regression line is y = 0.12x + 1.9.
    (b) Using a graphing utility with L1 = {0, 1, 2, 3, 4, 5} and L2 = {1.8, 2.1, 2.3, 2.4, 2.3, 2.5} gives the same least squares regression line: y = 0.12x + 1.9.
    (c) Year       2000  2001  2002  2003  2004  2005
        Actual     1.8   2.1   2.3   2.4   2.3   2.5
        Estimated  1.9   2.0   2.1   2.3   2.4   2.5
    The estimated values are close to the actual values.
    (d) The average salary in 2010 is y = 0.12(10) + 1.9 = $3.1 million.
    (e) 3.7 = 0.12x + 1.9
        1.8 = 0.12x
        15 = x
    The average salary will be $3.7 million in 2015.
C H A P T E R  2   Matrices

Section 2.1   Operations with Matrices ....................................27
Section 2.2   Properties of Matrix Operations .............................33
Section 2.3   The Inverse of a Matrix .....................................38
Section 2.4   Elementary Matrices .........................................42
Section 2.5   Applications of Matrix Operations ...........................48
Review Exercises ..........................................................54
Project Solutions .........................................................62
Section 2.1   Operations with Matrices

2. (a) A + B = [1 2; 2 1] + [−3 −2; 4 2] = [−2 0; 6 3]
   (b) A − B = [1 2; 2 1] − [−3 −2; 4 2] = [4 4; −2 −1]
   (c) 2A = [2 4; 4 2]
   (d) 2A − B = [2 4; 4 2] − [−3 −2; 4 2] = [5 6; 0 0]
   (e) B + (1/2)A = [−3 −2; 4 2] + [1/2 1; 1 1/2] = [−5/2 −1; 5 5/2]

4. (a) A + B = [2 1 1; −1 −1 4] + [2 −3 4; −3 1 −2] = [4 −2 5; −4 0 2]
   (b) A − B = [0 4 −3; 2 −2 6]
   (c) 2A = [4 2 2; −2 −2 8]
   (d) 2A − B = [4 2 2; −2 −2 8] − [2 −3 4; −3 1 −2] = [2 5 −2; 1 −3 10]
   (e) B + (1/2)A = [2 −3 4; −3 1 −2] + [1 1/2 1/2; −1/2 −1/2 2] = [3 −5/2 9/2; −7/2 1/2 0]

6. (a) A + B = [2 3 4; 0 1 −1; 2 0 1] + [0 6 2; 4 1 0; −1 2 4] = [2 9 6; 4 2 −1; 1 2 5]
   (b) A − B = [2 −3 2; −4 0 −1; 3 −2 −3]
   (c) 2A = [4 6 8; 0 2 −2; 4 0 2]
   (d) 2A − B = [4 6 8; 0 2 −2; 4 0 2] − [0 6 2; 4 1 0; −1 2 4] = [4 0 6; −4 1 −2; 5 −2 −2]
   (e) B + (1/2)A = [0 6 2; 4 1 0; −1 2 4] + [1 3/2 2; 0 1/2 −1/2; 1 0 1/2] = [1 15/2 4; 4 3/2 −1/2; 0 2 9/2]

8. (a) c23 = 5a23 + 2b23 = 5(2) + 2(11) = 32
   (b) c32 = 5a32 + 2b32 = 5(1) + 2(4) = 13

10. Simplifying the right side of the equation produces
    [w x; y x] = [−4 + 2y  3 + 2w; 2 + 2z  −1 + 2x].
    By setting corresponding entries equal to each other, you obtain four equations.
    w = −4 + 2y      −2y + w = −4
    x = 3 + 2w   ⇒   x − 2w = 3
    y = 2 + 2z       y − 2z = 2
    x = −1 + 2x      x = 1
    The solution to this linear system is: x = 1, y = 3/2, z = −1/4, and w = −1.
12. (a) AB = [1 −1 7; 2 −1 8; 3 1 −1][1 1 2; 2 1 1; 1 −3 2] = [6 −21 15; 8 −23 19; 4 7 5]
    (b) BA = [1 1 2; 2 1 1; 1 −3 2][1 −1 7; 2 −1 8; 3 1 −1] = [9 0 13; 7 −2 21; 1 4 −19]

14. (a) AB = [3 2 1][2; 3; 0] = [3(2) + 2(3) + 1(0)] = [12]
    (b) BA = [2; 3; 0][3 2 1] = [6 4 2; 9 6 3; 0 0 0]

16. (a) AB = [0 −1 0; 4 0 2; 8 −1 7][2; −3; 1] = [3; 10; 26]
    (b) BA is not defined because B is 3 × 1 and A is 3 × 3.

18. (a) AB is not defined because A is 2 × 5 and B is 2 × 2.
    (b) BA = [1 6; 4 2][1 0 3 −2 4; 6 13 8 −17 20] = [37 78 51 −104 124; 16 26 28 −42 56]
(a) 2A + B is undefined. (b) 3B − A is undefined. (c) AB is undefined. (d) Using a graphing utility or computer software program, you have 1 18 11 4⎤ ⎡ 10 ⎢ ⎥ − 5 3 − 25 4 11 ⎢ ⎥ ⎢ −2 2 19 −1 −15⎥ ⎥. BA = ⎢ ⎢ 10 −15 8 1 6⎥ ⎢ ⎥ ⎢ −3 −5 −6 −2 −17⎥ ⎢−18 9 2 −8 −11⎥⎦ ⎣ 22. C + E is not defined because C and E have different sizes. 24. −4A is defined and has size 3 × 4 because A has size 3 × 4. 26. BE is defined. Because B has size 3 × 4 and E has size 4 × 3, the size of BE is 3 × 3. 28. 2 D + C is defined and has size 4 × 2 because 2D and C have size 4 × 2. 30. In matrix form Ax = b, the system is
⎡2 3⎤⎡ x1 ⎤ ⎡ 5⎤ ⎢ ⎥⎢ ⎥ = ⎢ ⎥. ⎣ 1 4⎦⎣ x2 ⎦ ⎣10⎦ Use Gauss-Jordan elimination on the augmented matrix. ⎡2 3 5⎤ ⎡ 1 0 −2⎤ ⎢ ⎥ ⇒ ⎢ ⎥ ⎣ 1 4 10⎦ ⎣0 1 3⎦
⎡−4 9⎤ ⎡ x1 ⎤ ⎡−13⎤ ⎢ ⎥ ⎢ ⎥ = ⎢ ⎥. ⎣ 1 −3⎦ ⎣ x2 ⎦ ⎣ 12⎦ Use Gauss-Jordan elimination on the augmented matrix.
29
34. In matrix form Ax = b, the system is ⎡ 1 1 −3⎤ ⎡ x1 ⎤ ⎡−1⎤ ⎢ ⎥⎢ ⎥ ⎢ ⎥ ⎢−1 2 0⎥ ⎢ x2 ⎥ = ⎢ 1⎥. ⎢ 1 −1 1⎥ ⎢ x3 ⎥ ⎢ 2⎥ ⎣ ⎦⎣ ⎦ ⎣ ⎦
Use Gauss-Jordan elimination on the augmented matrix. ⎡ 1 0 0 2⎤ ⎡ 1 1 −3 −1⎤ ⎢ ⎥ ⎢ ⎥ 3 ⎢−1 2 0 1⎥ ⇒ ⎢0 1 0 2 ⎥ ⎢0 0 1 3 ⎥ ⎢ 1 −1 1 2⎥ ⎣ ⎦ 2⎦ ⎣ ⎡ 2⎤ ⎡ x1 ⎤ ⎢ ⎥ ⎢ ⎥ So, the solution is ⎢ x2 ⎥ = ⎢ 32 ⎥. ⎢3⎥ ⎢ x3 ⎥ ⎣ ⎦ ⎣2⎦
36. In matrix form Ax = b, the system is ⎡ 1 −1 4⎤ ⎡ x1 ⎤ ⎡ 17⎤ ⎢ ⎥⎢ ⎥ ⎢ ⎥ ⎢ 1 3 0⎥ ⎢ x2 ⎥ = ⎢−11⎥. ⎢0 −6 5⎥ ⎢ x3 ⎥ ⎢ 40⎥ ⎣ ⎦⎣ ⎦ ⎣ ⎦
Use Gauss-Jordan elimination on the augmented matrix. ⎡ 1 −1 4 17⎤ ⎡ 1 0 0 4⎤ ⎢ ⎥ ⎢ ⎥ ⎢ 1 3 0 −11⎥ ⇒ ⎢0 1 0 −5⎥ ⎢0 −6 5 40⎥ ⎢0 0 1 2⎥ ⎣ ⎦ ⎣ ⎦ ⎡ x1 ⎤ ⎡ 4⎤ ⎢ ⎥ ⎢ ⎥ So, the solution is ⎢ x2 ⎥ = ⎢−5⎥. ⎢ x3 ⎥ ⎢ 2⎥ ⎣ ⎦ ⎣ ⎦
38. Expanding the left side of the equation produces
⎡2 −1⎤ ⎡2 −1⎤ ⎡ a11 a12 ⎤ ⎢ ⎥A = ⎢ ⎥⎢ ⎥ ⎣3 −2⎦ ⎣3 −2⎦ ⎣a21 a22 ⎦ ⎡ 2a11 − a21 2a12 − a22 ⎤ ⎡ 1 0⎤ = ⎢ ⎥ = ⎢ ⎥ 3 2 3 2 a a a a − − 21 12 22 ⎦ ⎣ 11 ⎣0 1⎦
⎡ x1 ⎤ ⎡−2⎤ So, the solution is ⎢ ⎥ = ⎢ ⎥. ⎣ x2 ⎦ ⎣ 3⎦ 32. In matrix form Ax = b, the system is
Operations with Matrices
and you obtain the system − a21
2a11
− a22 = 0
2a12 − 2a21
3a11 3a12
= 1 = 0 − 2a22 = 1.
⎡ 1 0 −23⎤ ⎡−4 9 −13⎤ ⎢ ⎥ ⇒ ⎢0 1 − 35 ⎥ ⎢⎣ ⎣ 1 −3 12⎦ 3⎥ ⎦
Solving by Gauss-Jordan elimination yields
⎡−23⎤ ⎡ x1 ⎤ So, the solution is ⎢ ⎥ = ⎢ 35 ⎥. x ⎣ 2⎦ ⎣⎢− 3 ⎥⎦
⎡2 −1⎤ So, you have A = ⎢ ⎥. ⎣3 −2⎦
a11 = 2, a12 = −1, a21 = 3, and a22 = −2.
Copyright © Houghton Mifflin Harcourt Publishing Company. All rights reserved.
40. Expanding the left side of the matrix equation produces
    [a b; c d][2 1; 3 1] = [2a + 3b  a + b; 2c + 3d  c + d] = [3 17; 4 −1].
    You obtain two systems of linear equations (one involving a and b and the other involving c and d).
    2a + 3b = 3
    a + b = 17
    and
    2c + 3d = 4
    c + d = −1.
    Solving by Gauss-Jordan elimination yields a = 48, b = −31, c = −7, and d = 6.
−sin α ⎤ ⎡cos β ⎥⎢ cos α ⎦ ⎣ sin β
−sin β ⎤ ⎡ cos α cos β − sin α sin β ⎥⎢ cos β ⎦ ⎢⎣sin α cos β + cos α sin β
cos α ( −sin β ) − sin α cos β ⎤ ⎥ sin α ( −sin β ) + cos α cos β ⎥⎦
⎡cos β BA = ⎢ ⎣ sin β
−sin β ⎤ ⎡cos α ⎥⎢ cos β ⎦ ⎣ sin α
−sin α ⎤ ⎡cos β cos α − sin β sin α ⎥⎢ cos α ⎦ ⎣⎢sin β cos α + cos β sin α
cos β ( −sin α ) − sin β cos α ⎤ ⎥ sin β ( −sin α ) + cos β cos α ⎦⎥
⎡cos(α + β ) −sin (α + β )⎤ So, you see that AB = BA = ⎢ ⎥. ⎢⎣ sin (α + β ) cos(α + β )⎥⎦ ⎡2 0 0⎤ ⎡2 0 0⎤ ⎡4 0 0⎤ ⎢ ⎥⎢ ⎥ ⎢ ⎥ 44. AA = ⎢0 −3 0⎥ ⎢0 −3 0⎥ = ⎢0 9 0⎥ ⎢0 0 0⎥ ⎢0 0 0⎥ ⎢0 0 0⎥ ⎣ ⎦⎣ ⎦ ⎣ ⎦
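The identity in Exercise 42 (composing two plane rotations gives the rotation through α + β, in either order) checks out numerically for any pair of angles; a sketch:

```python
import numpy as np

def R(theta):
    """2-D rotation matrix through angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

a, b = 0.7, 1.9
print(np.allclose(R(a) @ R(b), R(a + b)))   # True
print(np.allclose(R(a) @ R(b), R(b) @ R(a)))  # True
```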
⎡3( −7) + 0 + 0 0 + 0 + 0 0 + 0 + 0⎤ 0 0⎤ ⎡3 0 0⎤ ⎡−7 0 0⎤ ⎡−21 ⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥ 0 + 0 + 0 0 + ( −5)4 + 0 0 + 0 + 0⎥ = ⎢ 0 −20 0⎥. 46. AB = ⎢0 −5 0⎥ ⎢ 0 4 0⎥ = ⎢ ⎢ ⎢0 0 0⎥ ⎢ 0 0 12⎥ ⎢ 0 0+0+0 0 + 0 + 0 0 + 0 + 0⎥⎦ 0 0⎥⎦ ⎣ ⎦⎣ ⎦ ⎣ ⎣ Similarly, 0 0⎤ ⎡−21 ⎢ ⎥ BA = ⎢ 0 −20 0⎥. ⎢ 0 0 0⎥⎦ ⎣ 0 0⎤ ⎡ b11 b12 ⎡a11 ⎢ ⎥⎢ 0⎥ ⎢b21 b22 48. (a) AB = ⎢ 0 a22 ⎢ 0 0 a33 ⎥⎦ ⎢⎣b31 b32 ⎣
b13 ⎤ ⎡ a11b11 a11b12 ⎥ ⎢ b23 ⎥ = ⎢a22b21 a22b22 ⎢ a33b31 a33b32 b33 ⎥⎦ ⎣
a11b13 ⎤ ⎥ a22b23 ⎥ a33b33 ⎥⎦
The ith row of B has been multiplied by aii , the ith diagonal entry of A. ⎡ b11 b12 ⎢ (b) BA = ⎢b21 b22 ⎢b31 b32 ⎣
b13 ⎤ ⎡a11 0 0⎤ ⎡ a11b11 a22b12 ⎥⎢ ⎥ ⎢ 0⎥ = ⎢a11b21 a22b22 b23 ⎥ ⎢ 0 a22 ⎢a11b31 a22b32 0 a33 ⎥⎦ b33 ⎥⎦ ⎢⎣ 0 ⎣
a33b13 ⎤ ⎥ a33b23 ⎥ a33b33 ⎥⎦
The ith column of B has been multiplied by aii , the ith diagonal entry of A. (c) If a11 = a22 = a33 , then AB = a11B = BA. 50. The trace is the sum of the elements on the main diagonal.
1 + 1 + 1 = 3. 52. The trace is the sum of the elements on the main diagonal.
1 + 0 + 2 + ( −3) = 0
54. Let AB = [c_ij], where c_ij = Σ_{k=1}^{n} a_ik b_kj. Then
    Tr(AB) = Σ_{i=1}^{n} c_ii = Σ_{i=1}^{n} Σ_{k=1}^{n} a_ik b_ki.
    Similarly, if BA = [d_ij] with d_ij = Σ_{k=1}^{n} b_ik a_kj, then
    Tr(BA) = Σ_{i=1}^{n} d_ii = Σ_{i=1}^{n} Σ_{k=1}^{n} b_ik a_ki = Tr(AB).
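The index computation in Exercise 54 says Tr(AB) = Tr(BA) for any n × n matrices; a quick numeric check on arbitrary integer matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 5, size=(4, 4))
B = rng.integers(-5, 5, size=(4, 4))

# The traces agree even though AB and BA generally differ
print(np.trace(A @ B) == np.trace(B @ A))  # True
```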
56. Let A = [a11 a12; a21 a22] and B = [b11 b12; b21 b22]. Then the matrix equation AB − BA = [1 0; 0 1] implies, for the diagonal entries, that
    a11b11 + a12b21 − b11a11 − b12a21 = a12b21 − b12a21 = 1
    a21b12 + a22b22 − b21a12 − b22a22 = a21b12 − b21a12 = 1
    which is impossible, because the two left sides are negatives of each other. So, the original equation has no solution.
58. Assume that A is an m × n matrix and B is a p × q matrix. Because the product AB is defined, you know that n = p. Moreover, because AB is square, you know that m = q. Therefore, B must be of order n × m, which implies that the product BA is defined.

60. Let rows s and t be identical in the matrix A. So, a_sj = a_tj for j = 1, …, n. Let AB = [c_ij], where c_ij = Σ_{k=1}^{n} a_ik b_kj. Then c_sj = Σ_{k=1}^{n} a_sk b_kj and c_tj = Σ_{k=1}^{n} a_tk b_kj. Because a_sk = a_tk for k = 1, …, n, rows s and t of AB are the same.

62. (a) AT = [0 −1; 1 0][1 2 3; 1 4 2] = [−1 −4 −2; 1 2 3]
    AAT = [0 −1; 1 0][−1 −4 −2; 1 2 3] = [−1 −2 −3; −1 −4 −2]
    [Figure: the three triangles whose vertices are the columns of T, AT, and AAT, namely (1, 1), (2, 4), (3, 2); (−1, 1), (−4, 2), (−2, 3); and (−1, −1), (−2, −4), (−3, −2).]
    The transformation matrix A rotates the triangle about the origin in a counterclockwise direction through 90°.
    (b) Given the triangle associated with AAT, the transformation that would produce the triangle associated with AT would be a rotation about the origin of 90° in a clockwise direction. Another such rotation would produce the triangle associated with T.
64. 1.1[100 90 70 30; 40 20 60 60] = [110 99 77 33; 44 22 66 66]
66. (a) Use scalar multiplication to find L.
    L = (2/3)C = (2/3)[627 681; 135 150] = [418 454; 90 100]
    (b) Use matrix addition to find M.
    M = C − L = [627 681; 135 150] − [418 454; 90 100] = [209 227; 45 50]

68. (a) True. The number of elements in a row of the first matrix must be equal to the number of elements in a column of the second matrix. See page 51 of the text.
    (b) True. See page 53 of the text.
70. (a) Multiply the matrix for 2005 by 100/288,131. This produces a matrix giving the information as percents of the total population.
    A = (100/288,131)[12,607 34,418 6286; 16,131 41,395 7177; 26,728 63,911 11,689; 5306 12,679 2020; 12,524 30,741 4519]
      ≈ [4.38 11.95 2.18; 5.60 14.37 2.49; 9.28 22.18 4.06; 1.84 4.40 0.70; 4.35 10.67 1.57]
    Multiply the matrix for 2015 by 100/321,609. This produces a matrix giving the information as percents of the total population.
    B = (100/321,609)[12,441 35,289 8835; 16,363 42,250 9955; 29,373 73,496 17,572; 5263 14,231 3337; 12,826 33,292 7086]
      ≈ [3.87 10.97 2.75; 5.09 13.14 3.10; 9.13 22.85 5.46; 1.64 4.42 1.04; 3.99 10.35 2.20]
    (b) B − A ≈ [−0.51 −0.98 0.57; −0.51 −1.23 0.61; −0.15 0.67 1.40; −0.20 0.02 0.34; −0.36 −0.32 0.63]
    (c) The 65+ age group is projected to show relative growth from 2005 to 2015 over all regions because its column in B − A contains all positive percents.
72. AB = [0 0 1 0; 0 0 0 1; −1 0 0 0; 0 −1 0 0][1 2 3 4; 5 6 7 8; 1 2 3 4; 5 6 7 8] = [1 2 3 4; 5 6 7 8; −1 −2 −3 −4; −5 −6 −7 −8]

74. The augmented matrix row reduces as follows.
    [1 2 4 | 1; −1 0 2 | 3; 0 1 3 | 2] ⇒ [1 0 −2 | −3; 0 1 3 | 2; 0 0 0 | 0]
    There are an infinite number of solutions. For example, x3 = 0, x2 = 2, x1 = −3.
    So, b = [1; 3; 2] = −3[1; −1; 0] + 2[2; 0; 1] + 0[4; 2; 3].
76. The augmented matrix row reduces as follows.
    [−3 5 | −22; 3 4 | 4; 4 −8 | 32] ⇒ [1 −3 | 10; 0 9 | −18; 0 −4 | 8] ⇒ [1 −3 | 10; 0 1 | −2; 0 1 | −2] ⇒ [1 0 | 4; 0 1 | −2; 0 0 | 0]
    So,
    [−22; 4; 32] = 4[−3; 3; 4] + (−2)[5; 4; −8].
Section 2.2 Properties of Matrix Operations ⎡1 2⎤ ⎡ 0 1⎤ ⎡ 1 3⎤ 2. A + B = ⎢ ⎥ + ⎢ ⎥ = ⎢ ⎥ ⎣3 4⎦ ⎣−1 2⎦ ⎣2 6⎦ ⎡ 0 1⎤ ⎡ 0 1⎤ ⎡0 −1⎤ 4. ( a + b) B = (3 + ( −4)) ⎢ ⎥ = (−1) ⎢ ⎥ = ⎢ ⎥ ⎣−1 2⎦ ⎣−1 2⎦ ⎣ 1 −2⎦ ⎡0 0⎤ ⎡0 0⎤ ⎡0 0⎤ 6. ( ab)0 = (3)( −4) ⎢ ⎥ = (−12) ⎢ ⎥ = ⎢ ⎥ 0 0 0 0 ⎣ ⎦ ⎣ ⎦ ⎣0 0⎦ ⎡−6 −3⎤ ⎡ 0 6⎤ ⎡ 6 −9⎤ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ 8. (a) X = 3 A − 2 B = ⎢ 3 0⎥ − ⎢ 4 0⎥ = ⎢−1 0⎥ ⎢ 9 −12⎥ ⎢−8 −2⎥ ⎢17 −10⎥ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦
(b) 2 X = 2 A − B ⎡−4 −2⎤ ⎡ 0 3⎤ ⎢ ⎥ ⎢ ⎥ 2 X = ⎢ 2 0⎥ − ⎢ 2 0⎥ ⎢ ⎥ ⎢ ⎥ ⎣ 6 −8⎦ ⎣−4 −1⎦ ⎡−4 −5⎤ ⎢ ⎥ 2X = ⎢ 0 0⎥ ⎢ 10 −7⎥ ⎣ ⎦ ⎡−2 − 5 ⎤ 2 ⎢ ⎥ X = ⎢ 0 0⎥ ⎢ 5 − 7⎥ 2⎦ ⎣
(c)
2 X + 3A = B
(d)
⎡−6 −3⎤ ⎡ 0 3⎤ ⎢ ⎥ ⎢ ⎥ 2X + ⎢ 3 0⎥ = ⎢ 2 0⎥ ⎢ 9 −12⎥ ⎢−4 −1⎥ ⎣ ⎦ ⎣ ⎦ ⎡ 6 6⎤ ⎢ ⎥ 2 X = ⎢ −1 0⎥ ⎢−13 11⎥ ⎣ ⎦ ⎡ 3 ⎢ X = ⎢ − 12 ⎢− 13 ⎣ 2
3⎤ ⎥ 0⎥ 11 ⎥ 2⎦
2 A + 4 B = −2 X ⎡−4 −2⎤ ⎡ 0 12⎤ ⎢ ⎥ ⎢ ⎥ ⎢ 2 0⎥ + ⎢ 8 0⎥ = −2 X ⎢ 6 −8⎥ ⎢−16 −4⎥ ⎣ ⎦ ⎣ ⎦ ⎡ −4 10⎤ ⎢ ⎥ 0⎥ = −2 X ⎢ 10 ⎢−10 −12⎥ ⎣ ⎦ ⎡ 2 −5⎤ ⎢ ⎥ ⎢−5 0⎥ = X ⎢ 5 6⎥ ⎣ ⎦
Chapter 2 Matrices

18. AB = ⎡2 4⎤ ⎡   1  −2⎤ = ⎡0 0⎤ = O, but A ≠ O and B ≠ O.
         ⎣2 4⎦ ⎣−1/2   1⎦   ⎣0 0⎦
⎡ 0 1⎤ ⎛ ⎡ 1 3⎤ ⎡ 0 1⎤ ⎞ 10. C ( BC ) = ⎢ ⎥ ⎜⎜ ⎢ ⎥⎢ ⎥ ⎟⎟ ⎣−1 0⎦ ⎝ ⎣−1 2⎦ ⎣−1 0⎦ ⎠ ⎡ 0 1⎤ ⎡−3 1⎤ ⎡−2 −1⎤ = ⎢ ⎥⎢ ⎥ = ⎢ ⎥ ⎣−1 0⎦ ⎣−2 −1⎦ ⎣ 3 −1⎦
⎡ 1 2⎤ ⎡ 1 2⎤ ⎡ 1 0⎤ 20. A2 = ⎢ ⎥⎢ ⎥ = ⎢ ⎥ = I2. ⎣0 −1⎦ ⎣0 −1⎦ ⎣0 1⎦
⎡ 1 3⎤ ⎛ ⎡ 0 1⎤ ⎡0 0⎤ ⎞ 12. B(C + O) = ⎢ ⎥ ⎜⎜ ⎢ ⎥ + ⎢ ⎥ ⎟⎟ ⎣−1 2⎦ ⎝ ⎣−1 0⎦ ⎣0 0⎦ ⎠
2 ⎡ 1 0⎤ So, A4 = ( A2 ) = I 22 = I 2 = ⎢ ⎥. ⎣0 1⎦
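The pattern in Exercise 20 — A² = I₂, so every even power of A is the identity — is easy to confirm numerically. A quick sketch (assuming NumPy; the text does not prescribe a tool):

```python
import numpy as np

A = np.array([[1, 2], [0, -1]])

# A squared is the identity, so A^4 = (A^2)^2 is also the identity
A2 = A @ A
A4 = np.linalg.matrix_power(A, 4)
print(np.array_equal(A2, np.eye(2, dtype=int)))  # True
print(np.array_equal(A4, np.eye(2, dtype=int)))  # True
```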
⎡ 1 3⎤ ⎡ 0 1⎤ ⎡ −3 1⎤ = ⎢ ⎥⎢ ⎥ = ⎢ ⎥ ⎣−1 2⎦ ⎣−1 0⎦ ⎣−2 −1⎦
⎡ 1 2⎤ ⎡ 1 0⎤ ⎡ 1 2⎤ 22. A + IA = ⎢ ⎥ + ⎢ ⎥⎢ ⎥ ⎣0 −1⎦ ⎣0 1⎦ ⎣0 −1⎦
⎡ 1 3⎤ ⎛ ⎡ 1 2 3⎤ ⎞ 14. B(cA) = ⎢ ⎥ ⎜⎜ ( −2) ⎢ ⎥ ⎟⎟ − 1 2 ⎣ ⎦⎝ ⎣0 1 −1⎦ ⎠
⎡ 1 2⎤ ⎡ 1 2⎤ ⎡2 4⎤ = ⎢ ⎥ + ⎢ ⎥ = ⎢ ⎥ − − 0 1 0 1 ⎣ ⎦ ⎣ ⎦ ⎣0 −2⎦
= ⎡ 1 3⎤ ⎡−2 −4 −6⎤ = ⎡−2 −10  0⎤
  ⎣−1 2⎦ ⎣ 0 −2  2⎦   ⎣ 2   0 10⎦
⎡ 1 −1⎤ ⎢ ⎥ 24. (a) AT = ⎢3 4⎥ ⎢0 −2⎥ ⎣ ⎦
⎡ 1 2 3⎤ ⎡0 0 0⎤ ⎡12 −6 9⎤ ⎢ ⎥⎢ ⎥ ⎢ ⎥ 16. AC = ⎢0 5 4⎥ ⎢0 0 0⎥ = ⎢16 −8 12⎥ ⎢3 −2 1⎥ ⎢4 −2 3⎥ ⎢ 4 −2 3⎥ ⎣ ⎦⎣ ⎦ ⎣ ⎦
T
T
⎡ 1 3 0⎤ = ⎢ ⎥ ⎣−1 4 −2⎦
⎡ 1 −1⎤ ⎡ 1 3 0⎤ ⎢ ⎡10 11⎤ ⎥ (b) A A = ⎢ ⎥ ⎢3 4⎥ = ⎢ ⎥ 1 4 2 − − ⎣ ⎦⎢ ⎣11 21⎦ ⎥ 0 2 − ⎣ ⎦ T
⎡ 4 −6 3⎤ ⎡0 0 0⎤ ⎢ ⎥⎢ ⎥ = ⎢ 5 4 4⎥ ⎢0 0 0⎥ = BC ⎢−1 0 1⎥ ⎢4 −2 3⎥ ⎣ ⎦⎣ ⎦ But A ≠ B.
⎡−7 11 12⎤ ⎢ ⎥ T 26. (a) A = ⎢ 4 −3 1⎥ ⎢ 6 −1 3⎥ ⎣ ⎦
⎡0 0⎤ ⎢ ⎥ = O ⎣0 0⎦
⎡ 1 −1⎤ ⎡ 2 −1 2⎤ ⎢ ⎥ ⎡ 1 3 0⎤ ⎢ ⎥ (c) AA = ⎢3 4⎥ ⎢ ⎥ = ⎢−1 25 −8⎥ ⎢0 −2⎥ ⎣−1 4 −2⎦ ⎢ 2 −8 4⎥ ⎣ ⎦ ⎣ ⎦ T
⎡−7 4 6⎤ ⎢ ⎥ = ⎢ 11 −3 −1⎥ ⎢ 12 1 3⎥⎦ ⎣
⎡−7 4 6⎤ ⎡−7 11 12⎤ ⎡ 101 −95 −62⎤ ⎢ ⎥⎢ ⎥ ⎢ ⎥ (b) A A = ⎢ 11 −3 −1⎥ ⎢ 4 −3 1⎥ = ⎢−95 131 126⎥ ⎢ 12 ⎢−62 126 154⎥ 1 3⎥⎦ ⎢⎣ 6 −1 3⎥⎦ ⎣ ⎣ ⎦ T
⎡−7 11 12⎤ ⎡−7 4 6⎤ ⎡ 314 −49 −17⎤ ⎢ ⎥⎢ ⎥⎢ ⎥ (c) AAT = ⎢ 4 −3 1⎥ ⎢ 11 −3 −1⎥ ⎢−49 26 30⎥ ⎢ 6 −1 3⎥ ⎢ 12 1 3⎥⎦ ⎢⎣−17 30 46⎥⎦ ⎣ ⎦⎣
⎡ 4 −3 2 0⎤ ⎢ ⎥ 2 0 11 −1⎥ 28. (a) AT = ⎢ ⎢14 −2 12 −9⎥ ⎢ ⎥ ⎣⎢ 6 8 −5 4⎥⎦
T
⎡ 4 2 14 6⎤ ⎢ ⎥ −3 0 −2 8⎥ = ⎢ ⎢ 2 11 12 −5⎥ ⎢ ⎥ ⎢⎣ 0 −1 −9 4⎥⎦
8 168 −104⎤ ⎡ 4 2 14 6⎤ ⎡ 4 −3 2 0⎤ ⎡ 252 ⎢ ⎥⎢ ⎥ ⎢ ⎥ 3 0 2 8 2 0 11 1 8 77 50⎥ − − − −70 ⎥⎢ ⎥ = ⎢ (b) AT A = ⎢ ⎢ 2 11 12 −5⎥ ⎢14 −2 12 −9⎥ ⎢ 168 −70 294 −139⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥ 98⎥⎦ ⎢− ⎣⎢ 0 −1 −9 4⎥⎦ ⎢⎣ 6 8 −5 4⎥⎦ ⎣ 104 50 −139 ⎡ 4 −3 2 0⎤ ⎡ 4 2 ⎢ ⎥⎢ 2 0 11 −1⎥ ⎢−3 0 (c) AAT = ⎢ ⎢14 −2 12 −9⎥ ⎢ 2 11 ⎢ ⎥⎢ ⎢⎣ 6 8 −5 4⎥⎦ ⎢⎣ 0 −1
6⎤ ⎡ 29 30 86 −10⎤ ⎥ ⎢ ⎥ 8⎥ 30 126 169 −47⎥ = ⎢ ⎢ 86 169 425 −28⎥ 12 −5⎥ ⎥ ⎢ ⎥ −9 4⎥⎦ ⎢− ⎣ 10 −47 −28 141⎥⎦ 14
−2
30. In general, AB ≠ BA for matrices. T
32.
( AB)T
⎛ ⎡ 1 2⎤ ⎡−3 −1⎤ ⎞ = ⎜⎜ ⎢ ⎥⎢ ⎥ ⎟⎟ ⎝ ⎣0 −2⎦ ⎣ 2 1⎦ ⎠ T
⎡−3 −1⎤ ⎡ 1 2⎤ BT AT = ⎢ ⎥ ⎢ ⎥ ⎣ 2 1⎦ ⎣0 −2⎦
T
⎡ 1 1⎤ = ⎢ ⎥ ⎣−4 −2⎦
34.
( AB)
⎛ ⎡2 1 −1⎤ ⎡ 1 0 −1⎤ ⎞ ⎜⎢ ⎥⎢ ⎥⎟ = ⎜ ⎢0 1 3⎥ ⎢2 1 −2⎥ ⎟ ⎜ ⎢4 0 2⎥ ⎢0 1 3⎥ ⎟ ⎦⎣ ⎦⎠ ⎝⎣ T
⎡1 −4⎤ = ⎢ ⎥ ⎣1 −2⎦
⎡−3 2⎤ ⎡ 1 0⎤ ⎡1 −4⎤ = ⎢ ⎥⎢ ⎥ = ⎢ ⎥ 1 1 2 2 − − ⎣ ⎦⎣ ⎦ ⎣1 −2⎦ T
T
T
⎡ 1 0 −1⎤ ⎡2 1 −1⎤ ⎢ ⎥ ⎢ ⎥ T T B A = ⎢2 1 −2⎥ ⎢0 1 3⎥ ⎢ ⎥ ⎢ ⎥ ⎣0 1 3⎦ ⎣4 0 2⎦
T
⎡4 0 −7⎤ ⎢ ⎥ = ⎢2 4 7⎥ ⎢4 2 2⎥⎦ ⎣
T
⎡ 4 2 4⎤ ⎢ ⎥ = ⎢ 0 4 2⎥ ⎢−7 7 2⎥ ⎣ ⎦
⎡ 4 2 4⎤ ⎡ 1 2 0⎤ ⎡ 2 0 4⎤ ⎢ ⎥ ⎢ ⎥⎢ ⎥ 1 1⎥ ⎢ 1 1 0⎥ = ⎢ 0 4 2⎥ = ⎢ 0 ⎢−7 7 2⎥ ⎢ ⎥⎢ ⎥ ⎣ ⎦ ⎣−1 −2 3⎦ ⎣−1 3 2⎦
36. (a) False. In general, for n × n matrices A and B it is not true that AB = BA. For example, let
    A = ⎡1 1⎤, B = ⎡1 0⎤. Then AB = ⎡2 0⎤ ≠ ⎡1 1⎤ = BA.
        ⎣0 0⎦      ⎣1 0⎦           ⎣0 0⎦   ⎣1 1⎦
(b) True. Every matrix A has an additive inverse, namely −A = (−1)A. See Theorem 2.2(2) on page 62.
(c) False. Let
    A = ⎡1 1⎤, B = ⎡1 0⎤, C = ⎡2 0⎤. Then AB = ⎡2 0⎤ = AC, but B ≠ C.
        ⎣0 0⎦      ⎣1 0⎦      ⎣0 0⎦           ⎣0 0⎦
(d) True. See Theorem 2.6(2) on page 68.
38. (a) Z = aX + bY ⎡ 1⎤ ⎡ 1⎤ ⎡ 1⎤ ⎡ a +b ⎤ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢4⎥ = a ⎢2⎥ + b ⎢0⎥ = ⎢ 2a ⎥ ⎢4⎥ ⎢3⎥ ⎢2⎥ ⎢3a + 2b⎥ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦ Solving the linear system obtained from this matrix equation, you obtain a = 2 and b = −1. So, Z = 2X − Y. (b) W = aX + bY ⎡0⎤ ⎡1⎤ ⎡1⎤ ⎡ a +b ⎤ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢0⎥ = a ⎢2⎥ + b ⎢0⎥ = ⎢ 2a ⎥ ⎢1⎥ ⎢3⎥ ⎢2⎥ ⎢3a + 2b⎥ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦ It follows from a + b = 0 and 2a = 0 that a, b must both be zero, but this is impossible because 3a + 2b should be 1. aX + bY + cW = O (c) ⎡1⎤ ⎡1⎤ ⎡0⎤ ⎡0⎤ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ a ⎢2⎥ + b ⎢0⎥ + c ⎢0⎥ = ⎢0⎥ ⎢3⎥ ⎢2⎥ ⎢1⎥ ⎢0⎥ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦ This matrix equation yields the linear system a + b = 0 2a
= 0
3a + 2b + c = 0
which has the unique solution a = b = c = 0. ⎡1⎤ ⎡1⎤ ⎡1⎤ ⎡0⎤ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ (d) aX + bY + cZ = a ⎢2⎥ + b ⎢0⎥ + c ⎢4⎥ = ⎢0⎥ ⎢3⎥ ⎢2⎥ ⎢4⎥ ⎢0⎥ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦ This matrix equation yields the linear system a + b+ c = 0 2a
+ 4c = 0
3a + 2b + 4c = 0.
Solving this system using Gauss-Jordan elimination, you find that there are an infinite number of solutions: a = −2t , b = t , and c = t. For instance, a = −2, b = c = 1.
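The dependence found in Exercise 38(d) can be verified directly: with a = −2, b = c = 1 the combination vanishes, which also reflects part (a)'s relation Z = 2X − Y. A short check (NumPy assumed, not part of the text):

```python
import numpy as np

X = np.array([1, 2, 3])
Y = np.array([1, 0, 2])
Z = np.array([1, 4, 4])

# Part (a): Z = 2X - Y, so a = -2, b = c = 1 gives aX + bY + cZ = 0
assert np.array_equal(2 * X - Y, Z)
assert np.array_equal(-2 * X + Y + Z, np.zeros(3, dtype=int))
print("dependence verified")
```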
40. A20
⎡(1)20 ⎢ = ⎢ 0 ⎢ ⎢ 0 ⎣
0
(−1)
20
0
0 ⎤ ⎡1 0 0⎤ ⎥ ⎢ ⎥ 0 ⎥ = ⎢0 1 0⎥ ⎥ ⎢ ⎥ 20 (1) ⎥⎦ ⎣0 0 1⎦
⎡23 ⎡8 0 0⎤ ⎢ ⎢ ⎥ 42. Because A3 = ⎢0 −1 0⎥ = ⎢ 0 ⎢ ⎢0 0 27⎥ ⎢⎣ 0 ⎣ ⎦
0
(−1) 0
3
0 ⎤ ⎡2 0 0⎤ ⎥ ⎢ ⎥ 0 ⎥ , you have A = ⎢0 −1 0⎥. ⎥ ⎢0 0 3⎥ 3 ⎣ ⎦ (3) ⎥⎦
2
⎡5 4⎤ ⎡5 4⎤ ⎡1 0⎤ 44. f ( A) = ⎢ ⎥ − 7⎢ ⎥ + 6⎢ ⎥ ⎣1 2⎦ ⎣1 2⎦ ⎣0 1⎦ ⎡29 28⎤ ⎡35 28⎤ ⎡6 0⎤ = ⎢ ⎥ −⎢ ⎥ + ⎢ ⎥ ⎣ 7 8⎦ ⎣ 7 14⎦ ⎣0 6⎦ ⎡0 0⎤ = ⎢ ⎥ ⎣0 0⎦ 3
2
⎡ 2 1 −1⎤ ⎡ 2 1 −1⎤ ⎡ 2 1 −1⎤ ⎡1 0 0⎤ 46. f ( A) = ⎢⎢ 1 0 2⎥⎥ − 2 ⎢⎢ 1 0 2⎥⎥ + 5⎢⎢ 1 0 2⎥⎥ − 10 ⎢⎢0 1 0⎥⎥ ⎢−1 1 3⎥ ⎢−1 1 3⎥ ⎢−1 1 3⎥ ⎢0 0 1⎥ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦ 2
⎡ 2 1 −1⎤⎡ 2 1 −1⎤ ⎡ 2 1 −1⎤⎡ 2 1 −1⎤ ⎡10 5 −5⎤ ⎡10 0 0⎤ ⎢ ⎥⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ ⎥ ⎢ ⎥ = ⎢ 1 0 2⎥⎢ 1 0 2⎥ − 2 ⎢ 1 0 2⎥⎢ 1 0 2⎥ + ⎢ 5 0 10⎥ − ⎢ 0 10 0⎥ ⎢−1 1 3⎥⎢−1 1 3⎥ ⎢−1 1 3⎥⎢−1 1 3⎥ ⎢−5 5 15⎥ ⎢ 0 0 10⎥ ⎣ ⎦⎣ ⎦ ⎣ ⎦⎣ ⎦ ⎣ ⎦ ⎣ ⎦ 5 −5⎤ ⎡ 2 1 −1⎤⎡ 6 1 −3⎤ ⎡ 6 1 −3⎤ ⎡ 0 ⎢ ⎥⎢ ⎥ ⎢ ⎥ ⎢ ⎥ = ⎢ 1 0 2⎥⎢ 0 3 5⎥ − 2 ⎢ 0 3 5⎥ + ⎢ 5 −10 10⎥ ⎢−1 1 3⎥⎢−4 2 12⎥ ⎢−4 2 12⎥ ⎢−5 5 5⎥⎦ ⎣ ⎦⎣ ⎦ ⎣ ⎦ ⎣ 5 −5⎤ ⎡ 16 3 −13⎤ ⎡ 12 2 −6⎤ ⎡ 0 ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ 21⎥ − ⎢ 0 6 10⎥ + ⎢ 5 −10 10⎥ = ⎢ −2 5 ⎢−18 8 44⎥ ⎢−8 4 24⎥ ⎢−5 5 5⎦⎥ ⎣ ⎦ ⎣ ⎦ ⎣ 6 −12⎤ ⎡ 4 ⎢ ⎥ = ⎢ 3 −11 21⎥ ⎢−15 9 25⎥⎦ ⎣
48. (cd ) A = (cd ) ⎡⎣aij ⎤⎦ = ⎡⎣(cd )aij ⎤⎦ = ⎡⎣c( daij )⎤⎦ = c ⎡⎣daij ⎤⎦ = c( dA) 50. (c + d ) A = (c + d ) ⎡⎣aij ⎤⎦ = ⎡⎣(c + d )aij ⎤⎦ = ⎡⎣caij + daij ⎤⎦ = ⎡⎣caij ⎤⎦ + ⎡⎣daij ⎤⎦ = c ⎡⎣aij ⎤⎦ + d ⎡⎣aij ⎤⎦ = cA + dA
52. (a) To show that A( BC ) = ( AB)C , compare the ijth entries in matrices on both sides of this equality. Assume that A has size n × p, B has size p × r and C has size r × m. Then the entry in kth row and jth column of BC is
∑ l =1 bkl clj , Therefore r
the entry in ith row and jth column of A(BC) is p
r
k =1
l =1
∑ aik ∑ bkl clj
=
∑ aik bkl clj . k, l
The entry in the ith row and jth column of (AB)C is
∑ k =1 aik bkl for each l p
So, dil = r
∑ l =1 dil clj , where r
dil is the entry of AB in ith row and lth column.
= 1, …, r. So, the ijth entry of ( AB)C is
p
∑ ∑ aik bkl clj =∑aik bkl cij. i =1 k =1
k, l
Because all corresponding entries of A(BC) and (AB)C are equal and both matrices are of the same size ( n × m) you conclude that A( BC ) = ( AB)C. (b) The entry in the ith row and jth column of ( A + B )C is ( ail + bil )c1 j + ( ai 2 + bi 2 )c2 j + entry in the ith row and jth column of AC + BC is ( ai1c1 j +
+ aincnj ) + (bi1c1 j +
+ ( ain + bin )cnj , whereas the + bincnj ), which are equal by the
distributive law for real numbers. (c) The entry in the ith row and jth column of c( AB) is c ⎡⎣ai1b1 j + ai 2b2 j +
(cai1 )b1 j
+ ainbnj ⎤⎦. The corresponding entry for (cA) B is
+ (cain )bnj . And the corresponding entry for A(cB) is ai1 (cb1 j ) + ai 2 (cb2 j ) +
+ (cai 2 )b2 j +
+ ain (cbnj ).
Because these three expressions are equal, you have shown that c( AB) = (cA) B = A(cB). 54. (2)
(3)
(
( A + B)
T
= ⎡⎣aij ⎤⎦ + ⎡⎣bij ⎤⎦
(
(cA)T
= c ⎡⎣aij ⎤⎦
)
T
)
T
= ⎡⎣aij + bij ⎤⎦
T
= ⎡⎣ca ji ⎤⎦ = c ⎡⎣a ji ⎤⎦ = c( AT )
= ⎡⎣caij ⎤⎦
T
= ⎡⎣a ji + b ji ⎤⎦ = ⎡⎣a ji ⎤⎦ + ⎡⎣b ji ⎤⎦ = AT + BT
(4) The entry in the ith row and jth column of ( AB) is a j1b1i + a j 2b2i + T
and jth column of B A is b1i a j1 + b2i a j 2 + T
T
a jnbni . On the other hand, the entry in the ith row
+ bni a jn , which is the same.
⎡1 0 ⎤ ⎡0 1⎤ 56. Many examples are possible. For instance, A = ⎢ ⎥ and B = ⎢ ⎥. ⎣1 −1⎦ ⎣1 0⎦
Then, ( AB)
T
⎡ 0 1⎤ = ⎢ ⎥ ⎣−1 1⎦
T
⎡0 −1⎤ ⎡1 1 ⎤ ⎡0 1⎤ ⎡ 1 1⎤ T T = ⎢ ⎥⎢ ⎥ = ⎢ ⎥ , which are not equal. ⎥ , while A B = ⎢ 0 1 1 0 − 1 1 ⎣ ⎦⎣ ⎦ ⎣−1 0⎦ ⎣ ⎦
58. Because A = AT, this matrix is symmetric.
60. Because −A = AT, this matrix is skew-symmetric.
62. If AT = −A and BT = −B, then (A + B)T = AT + BT = −A − B = −(A + B), which implies that A + B is skew-symmetric.
64. (A − AT)T = AT − (AT)T = AT − A = −(A − AT), which implies that A − AT is skew-symmetric.
⎡0 1⎤ 66. (a) An example of a 2 × 2 matrix of the given form is A2 = ⎢ ⎥. ⎣0 0⎦ ⎡0 1 2⎤ ⎢ ⎥ An example of a 3 × 3 matrix of the given form is A3 = ⎢0 0 3⎥. ⎢0 0 0⎥ ⎣ ⎦
Chapter 2 Matrices ⎡0 0⎤ (b) A22 = ⎢ ⎥ ⎣0 0⎦ ⎡0 0 3⎤ ⎡0 0 0⎤ ⎢ ⎥ ⎢ ⎥ A32 = ⎢0 0 0⎥ and A33 = ⎢0 0 0⎥ ⎢0 0 0⎥ ⎢0 0 0⎥ ⎣ ⎦ ⎣ ⎦
(c) The conjecture is that if A is a 4 × 4 matrix of the given form, then A4 is the 4 × 4 zero matrix. A graphing utility shows this to be true. (d) If A is an n × n matrix of the given form, then An is the n × n zero matrix.
Section 2.3 The Inverse of a Matrix ⎡ 1 −1⎤ ⎡ 53 2. AB = ⎢ ⎥⎢ 2 ⎣2 3⎦ ⎢⎣− 5 ⎡ 3 BA = ⎢ 25 ⎣⎢− 5
1⎤ 1 5 ⎡ ⎥ 1 ⎢2 ⎥⎣ 5⎦
1⎤ 5 ⎥ 1 5⎥ ⎦
⎡1 0⎤ = ⎢ ⎥ ⎣0 1⎦
−1⎤ ⎡1 0⎤ ⎥ = ⎢ ⎥ 3⎦ ⎣0 1⎦
⎡ 2 −17 11⎤⎡ 1 1 2⎤ ⎡ 1 0 0⎤ ⎢ ⎥⎢ ⎥ ⎢ ⎥ 4. AB = ⎢−1 11 −7⎥⎢2 4 −3⎥ = ⎢0 1 0⎥ ⎢ 0 ⎥ ⎢0 0 1⎥ 3 −2⎥⎢ ⎣ ⎦⎣3 6 −5⎦ ⎣ ⎦ ⎡ 1 1 2⎤⎡ 2 −17 11⎤ ⎡ 1 0 0⎤ ⎢ ⎥⎢ ⎥ ⎢ ⎥ 11 −7⎥ = ⎢0 1 0⎥ BA = ⎢2 4 −3⎥⎢−1 ⎢3 6 −5⎥⎢ 3 ⎢0 0 1⎥ 6 −2⎥⎦ ⎣ ⎦⎣ ⎣ ⎦
6. Use the formula A− 1 =
⎡ d −b⎤ 1 ⎢ ⎥, ad − bc ⎣−c a⎦
10. Adjoin the identity matrix to form 2 ⎡ 1 2 ⎢ [A I] = ⎢ 3 7 9 ⎢−1 −4 −7 ⎣
Using elementary row operations, reduce the matrix as follows. ⎡⎣I
⎡1 0 0 ⎢ A ⎤⎦ = ⎢0 1 0 ⎢0 0 1 ⎣ −1
⎡10 5 −7 [ A I ] = ⎢⎢−5 1 4 ⎢ 3 2 −2 ⎣
⎡1 0 0 ⎢ ⎤ A ⎦ = ⎢0 1 0 ⎢0 0 1 ⎣
⎡⎣I
So, the inverse is
Therefore, the inverse is
8. Use the formula A− 1 =
⎡ d −b⎤ 1 ⎢ ⎥, ad − bc ⎣−c a⎦
where ⎡a b⎤ ⎡−1 1⎤ A = ⎢ ⎥ = ⎢ ⎥ ⎣c d ⎦ ⎣ 3 −3⎦ you see that ad − bc = ( −1)( −3) − (1)(3) = 0. So, the matrix has no inverse.
4⎤ ⎥ 12 −5 −3⎥ −5 2 1⎥⎦ 6
1 0 0⎤ ⎥ 0 1 0⎥. 0 0 1⎥⎦
Using elementary row operations, reduce the matrix as follows.
⎡a b⎤ ⎡ 1 −2⎤ A = ⎢ ⎥ = ⎢ ⎥. ⎣c d ⎦ ⎣2 −3⎦
A− 1
−13
12. Adjoin the identity matrix to form
where
⎡−3 2⎤ ⎡−3 2⎤ 1 = . ⎢ ⎥ = ⎢ (1)(−3) − (−2)(2) ⎣−2 1⎦ ⎣−2 1⎥⎦
1 0 0⎤ ⎥ 0 1 0⎥. 0 0 1⎥⎦
A
−1
−1
−10 −4 27⎤ ⎥ 2 1 −5⎥ −13 −5 35⎥⎦
⎡−10 −4 27⎤ ⎢ ⎥ 1 −5⎥. = ⎢ 2 ⎢−13 −5 35⎥ ⎣ ⎦
14. Adjoin the identity matrix to form ⎡ 3 2 5 = [ A I ] ⎢⎢ 2 2 4 ⎢−4 4 0 ⎣
1 0 0⎤ ⎥ 0 1 0⎥. 0 0 1⎥⎦
Using elementary row operations, you cannot form the identity matrix on the left side. Therefore, the matrix has no inverse.
16. Adjoin the identity matrix to form ⎡⎣I
⎡2 0 0 ⎢ A−1 ⎤⎦ = ⎢0 3 0 ⎢0 0 5 ⎣
1 0 0⎤ ⎥ 1 0⎥. 0 0 1⎥⎦ 0
Using elementary row operations, reduce the matrix as follows. ⎡1 0 0 ⎢ [ A I ] = ⎢0 1 0 ⎢0 0 1 ⎣
0 0⎤ ⎥ 0 13 0⎥ 0 0 15 ⎥⎦ 1 2
Therefore, the inverse is A− 1
⎡ 1 0 0⎤ ⎢2 ⎥ = ⎢ 0 13 0⎥. ⎢0 0 1 ⎥ 5⎦ ⎣
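Exercise 16's result — the inverse of a diagonal matrix is the diagonal matrix of reciprocals — checks out numerically. A brief sketch (NumPy assumed):

```python
import numpy as np

D = np.diag([2.0, 3.0, 5.0])

# The inverse of diag(2, 3, 5) is diag(1/2, 1/3, 1/5)
D_inv = np.linalg.inv(D)
print(np.allclose(D_inv, np.diag([1/2, 1/3, 1/5])))  # True
```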
⎡1 0 0 [ A I ] = ⎢⎢3 0 0 ⎢2 5 5 ⎣
1 0 0⎤ ⎥ 0 1 0⎥ 0 0 1⎥⎦
Using elementary row operations, you cannot form the identity matrix on the left side. Therefore, the matrix has no inverse.
20. Adjoin the identity matrix to form 0
0 0
2
0 0
0 −2 0 0
0 3
1 0 0 0⎤ ⎥ 0 1 0 0⎥ . 0 0 1 0⎥ ⎥ 0 0 0 1⎦⎥
Using elementary row operations, reduce the matrix as follows.
⎡⎣I
⎡1 ⎢ ⎢0 A−1 ⎤⎦ = ⎢ 0 ⎢ ⎢⎣0
0 0 0 1 0 0 0 1 0 0 0 1
Therefore, the inverse is
A− 1
22. Adjoin the identity matrix to form ⎡4 ⎢ 2 [ A I ] = ⎢⎢ 0 ⎢ ⎣⎢3
8 −7
14
5 −4
6
2
1 0 0 0⎤ ⎥ 0 1 0 0⎥ . 0 0 1 0⎥ ⎥ 0 0 0 1⎥⎦
1 −7
6 −5 10
Using elementary row operations, reduce the matrix as follows. ⎡1 ⎢ 0 [ A I ] = ⎢⎢ 0 ⎢ ⎢⎣0
27 −10
0 0 0 1 0 0
−16
0 1 0
−17
0 0 1
−7
4 −29⎤ ⎥ 5 −2 18⎥ 4 −2 20⎥ ⎥ 2 −1 8⎥⎦
Therefore the inverse is
18. Adjoin the identity matrix to form
⎡1 ⎢ 0 A I = [ ] ⎢⎢ 0 ⎢ ⎣⎢0
0 0⎤ ⎡1 0 ⎢ 1 ⎥ 0 0⎥ ⎢0 = ⎢ 2 . 0 0 − 12 0⎥ ⎢ ⎥ ⎢⎣0 0 0 13 ⎥⎦
0 0⎤ ⎥ 0 12 0 0⎥ 0 0 − 12 0⎥ ⎥ 0 0 0 13 ⎥⎦ 1 0
A− 1
⎡ 27 −10 4 −29⎤ ⎢ ⎥ −16 5 −2 18⎥ = ⎢ . ⎢−17 4 −2 20⎥ ⎢ ⎥ 2 −1 8⎥⎦ ⎣⎢ −7
24. Adjoin the identity matrix to form ⎡1 ⎢ 0 [ A I ] = ⎢⎢ 0 ⎢ ⎢⎣0
3 −2 0 2
4 6
0 −2 1 0
0 5
1 0 0 0⎤ ⎥ 0 1 0 0⎥ . 0 0 1 0⎥ ⎥ 0 0 0 1⎥⎦
Using elementary row operations, reduce the matrix as follows.
⎡⎣I
⎡1 ⎢ 0 A−1 ⎤⎦ = ⎢ ⎢0 ⎢ ⎣⎢0
0 0 0
1 −1.5
1 0 0
0
0 1 0
0
0 0 1
0
−4
2.6⎤ ⎥ 1 −0.8⎥ . 0 −0.5 0.1⎥ ⎥ 0 0 0.2⎦⎥
0.5
Therefore, the inverse is
A− 1
−4 2.6⎤ ⎡ 1 −1.5 ⎢ ⎥ 0 0.5 1 −0.8⎥ = ⎢ . ⎢0 0 −0.5 0.1⎥ ⎢ ⎥ ⎢⎣0 0 0 0.2⎥⎦
26. The coefficient matrix for each system is
A = ⎡2 −1⎤
    ⎣2  1⎦
and the formula for the inverse of a 2 × 2 matrix produces
A−1 = (1/4)⎡ 1 1⎤ = ⎡ 1/4 1/4⎤.
           ⎣−2 2⎦   ⎣−1/2 1/2⎦
(a) x = A−1b = ⎡ 1/4 1/4⎤⎡−3⎤ = ⎡1⎤
               ⎣−1/2 1/2⎦⎣ 7⎦   ⎣5⎦
The solution is: x = 1 and y = 5.
(b) x = A−1b = ⎡ 1/4 1/4⎤⎡−1⎤ = ⎡−1⎤
               ⎣−1/2 1/2⎦⎣−3⎦   ⎣−1⎦
The solution is: x = −1 and y = −1.
(c) x = A−1b = ⎡ 1/4 1/4⎤⎡ 6⎤ = ⎡4⎤
               ⎣−1/2 1/2⎦⎣10⎦   ⎣2⎦
The solution is: x = 4 and y = 2.
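The point of Exercise 26 is that one inverse serves every right-hand side. A sketch of that idea (NumPy assumed; in production code `np.linalg.solve` per right-hand side is usually preferred for numerical stability):

```python
import numpy as np

A = np.array([[2.0, -1.0], [2.0, 1.0]])
A_inv = np.linalg.inv(A)   # computed once, reused for every right-hand side

solutions = []
for b in ([-3, 7], [-1, -3], [6, 10]):
    solutions.append(A_inv @ np.array(b, dtype=float))

print(solutions)  # x = (1, 5), (-1, -1), (4, 2)
```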
28. The coefficient matrix for each system is ⎡1 1 −2⎤ ⎢ ⎥ A = ⎢1 −2 1⎥. ⎢1 −1 −1⎥ ⎣ ⎦ Using the algorithm to invert a matrix, you find that the inverse is A
−1
⎡ 1 1 −1⎤ ⎢ ⎥ = ⎢ 23 13 −1⎥. ⎢ 1 2 −1⎥ ⎣3 3 ⎦
⎡ 1 1 −1⎤ ⎡ 0⎤ ⎡1⎤ ⎢2 1 ⎥⎢ ⎥ ⎢⎥ −1 = = − = A b 1⎥ ⎢ 0⎥ (a) x ⎢3 3 ⎢1⎥ ⎢ 1 2 −1⎥ ⎢−1⎥ ⎢1⎥ ⎣⎦ ⎣3 3 ⎦⎣ ⎦
The solution is: x1 = 1, x2 = 1, and x3 = 1. ⎡ 1 1 −1⎤ ⎡−1⎤ ⎡ 1⎤ ⎢ ⎥⎢ ⎥ ⎢ ⎥ (b) x = A−1b = ⎢ 23 13 −1⎥ ⎢ 2⎥ = ⎢0⎥ ⎢ 1 2 −1⎥ ⎢ 0⎥ ⎢ 1⎥ ⎣ ⎦ ⎣3 3 ⎦⎣ ⎦
The solution is: x1 = 1, x2 = 0, and x3 = 1.
30. Using a graphing utility or computer software program, you have Ax = b ⎡ 1⎤ ⎢ ⎥ ⎢ 2⎥ −1 x = A b = ⎢−1⎥ ⎢ ⎥ ⎢ 0⎥ ⎢ ⎥ ⎣ 1⎦
32. Using a graphing utility or computer software program, you have Ax = b ⎡−1⎤ ⎢ ⎥ ⎢ 2⎥ ⎢ 1⎥ x = A−1b = ⎢ ⎥ ⎢ 3⎥ ⎢ ⎥ ⎢ 0⎥ ⎢ 1⎥ ⎣ ⎦
where ⎡ 4 −2 4 2 −5 −1⎤ ⎡ x1 ⎤ ⎢ ⎥ ⎢ ⎥ ⎢ 3 6 −5 −6 3 3⎥ ⎢ x2 ⎥ ⎢ 2 −3 ⎢x ⎥ 1 3 −1 −2⎥ ⎥, x = ⎢ 3 ⎥ , and A = ⎢ ⎢−1 4 −4 −6 2 4⎥ ⎢ x4 ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ 3 −1 5 2 −3 −5⎥ ⎢ x5 ⎥ ⎢−2 ⎥ ⎢x ⎥ − − 3 4 6 1 2 ⎣ ⎦ ⎣ 6⎦ ⎡ 1⎤ ⎢ ⎥ ⎢ −11⎥ ⎢ 0⎥ ⎥. b = ⎢ ⎢ −9⎥ ⎢ ⎥ ⎢ 1⎥ ⎢−12⎥ ⎣ ⎦ The solution is: x1 = −1, x2 = 2, x3 = 1, x4 = 3, x5 = 0, x6 = 1. 34. (a)
( AB)
(b)
( AT )
(c)
( A)
(d)
(2A)
−1
−1
−2
−1
2 ⎤⎡− 2 ⎡5 11 = B −1 A−1 = ⎢11 ⎥⎢ 73 3 1 ⎢⎣11 − 11⎦⎣ ⎥⎢ 7
⎡− 2 = ⎢ 73 ⎣⎢ 7
= ( A−1 )
T
⎡− 2 2 = ( A−1 ) = ⎢ 73 ⎣⎢ 7 =
1 A−1 2
=
⎡− 72 1 2⎢ 3 ⎢⎣ 7
1⎤ 7 ⎥ 2 ⎥ 7⎦
T
⎡− 2 = ⎢ 17 ⎣⎢ 7
1 ⎤ ⎡− 2 7 ⎥ ⎢ 17 2 ⎥ ⎣⎢ 7 7⎦ 1⎤ 7 ⎥ 2 7⎥ ⎦
1⎤ 7 ⎥ 2 ⎥ 7⎦
3⎤ 7 ⎥ 2 ⎥ 7⎦
⎡− 1 = ⎢ 37 ⎢⎣ 14
=
−4 9⎤ ⎥ ⎣−9 1⎦
1 ⎡ 77 ⎢ 3⎤ 7 ⎥ 2 ⎥ 7⎦
⎡ 1 0⎤ = ⎢7 1 ⎥ ⎣⎢ 0 7 ⎥⎦ 1⎤ 14 ⎥ 1 7⎥ ⎦
where ⎡1 ⎢ ⎢2 A = ⎢1 ⎢ ⎢2 ⎢ ⎣3
1 −1 1
1
1 −1 1
4
1
1
3 −1⎤ ⎡ x1 ⎤ ⎡ 3⎤ ⎢ ⎥ ⎢ ⎥ ⎥ 1 1⎥ ⎢ 4⎥ ⎢ x2 ⎥ ⎢ 3⎥. ⎢ ⎥ ⎥ = x , and b = , x 2 −1 ⎢ ⎥ ⎢ 3⎥ ⎥ ⎢−1⎥ ⎢ x4 ⎥ 1 −1⎥ ⎢ ⎥ ⎢ ⎥ ⎥ −2 1⎦ ⎣ 5⎦ ⎣ x5 ⎦
The solution is: x1 = 1, x2 = 2, x3 = −1, x4 = 0, and x5 = 1.
36. (a)
( AB)
(b)
( AT )
−1
= B A
−1
= ( A−1 )
−1
−1
⎡ 6 5 −3⎤⎡ 1 −4 2⎤ ⎡−6 −25 24⎤ ⎢ ⎥⎢ ⎥ ⎢ ⎥ = ⎢−2 4 −1⎥⎢0 1 3⎥ = ⎢−6 10 7⎥ ⎢ 1 3 4⎥⎢4 2 1⎥ ⎢17 7 15⎥⎦ ⎣ ⎦⎣ ⎦ ⎣ ⎡ 1 −4 2⎤ ⎢ ⎥ 1 3⎥ = ⎢0 ⎢4 2 1⎥ ⎣ ⎦
T
T
⎡ 1 0 4⎤ ⎢ ⎥ = ⎢−4 1 2⎥ ⎢ 2 3 1⎥ ⎣ ⎦
⎡ 1 −4 2⎤⎡ 1 −4 2⎤ ⎡ 9 −4 −8⎤ 2 ⎢ ⎥⎢ ⎥ ⎢ ⎥ (c) A−2 = ( A−1 ) = ⎢0 1 3⎥⎢0 1 3⎥ = ⎢12 7 6⎥ ⎢4 2 1⎥⎢4 2 1⎥ ⎢ 8 −12 15⎥ ⎣ ⎦⎣ ⎦ ⎣ ⎦
(d)
(2 A)
−1
=
1 A−1 2
⎡ 12 −2 1⎤ ⎡ 1 −4 2⎤ ⎢ ⎢ ⎥ 3⎥ 1 1 3⎥ = ⎢ 0 = 12 ⎢0 2 2⎥ ⎢2 ⎢4 2 1⎥ 1 12 ⎥⎦ ⎣ ⎦ ⎣
46. (a) True. If A1 , A2 , A3 , A4 are invertible 7 × 7 matrices,
38. The inverse of A is given by
A− 1 =
then B = A1 A2 A3 A4 is also an invertible 7 × 7
1 ⎡−2 − x⎤ ⎢ ⎥. x − 4 ⎣ 1 2⎦
Letting A−1 = A, you find that
matrix with inverse B −1 = A4−1 A3−1 A2−1 A1−1 , by Theorem 2.9 on page 81 and induction.
1/(x − 4) = −1.
(b) True. ( A−1 )
T
So, x = 3. ⎡ x 2⎤ 40. The matrix ⎢ ⎥ will be singular if ⎣−3 4⎦ ad − bc = ( x)( 4) − ( −3)( 2) = 0, which implies that 4 x = −6 or x = − 32 . 42. First find 4A. −1 4 A = ⎡( 4 A) ⎤ ⎣ ⎦
−1
Then, multiply by A =
1 4
(4 A)
=
= 1 4
⎡1 1 8 4⎢ 3 ⎣⎢16
⎡ 18 1 ⎡2 −4⎤ ⎢ ⎥ = ⎢3 4 + 12 ⎣3 2⎦ ⎢⎣16
− 14 ⎤ ⎥. 1 ⎥ 8⎦
to obtain ⎡1 − 14 ⎤ ⎥ = ⎢ 32 3 1 ⎥ ⎢⎣ 64 8⎦
1⎤ − 16 ⎥. 1 ⎥ 32 ⎦
44. Using the formula for the inverse of a 2 × 2 matrix, you have
A− 1 =
⎡ sec θ = ⎢ ⎣− tan θ
− tan θ ⎤ ⎥. sec θ ⎦
−1
(c) False. For example consider the matrix ⎡ 1 1⎤ ⎢ ⎥ which is not invertible, but ⎣0 0⎦ 1 ⋅ 1 − 0 ⋅ 0 = 1 ≠ 0. (d) False. If A is a square matrix then the system Ax = b has a unique solution if and only if A is a nonsingular matrix. 48. AT ( A−1 )
T
= ( A−1 A)
T
( A−1 ) AT = ( AA−1 ) T −1 So, ( A−1 ) = ( AT ) . T
T
= I nT = I n , and = I nT = I n .
50. Because C is invertible, you can multiply both sides of the equation CA = CB by C −1 on the left to obtain the following. C −1 (CA) = C −1 (CB)
(C −1C ) A = (C −1C ) B
⎡ d −b⎤ 1 ⎢ ⎥ ad − bc ⎣−c a⎦
⎡ sec θ 1 = ⎢ sec 2 θ − tan 2 θ ⎣− tan θ
= ( AT ) by Theorem 2.8(4) on page 79.
− tan θ ⎤ ⎥ sec θ ⎦
IA = IB A = B
52. Because ABC = I , A is invertible and A−1 = BC.
So, ABC A = A and BC A = I . So, B −1 = CA.
54. Let A2 = A and suppose A is nonsingular. Then, A−1 exists, and you have the following.
A−1 ( A2 ) = A−1 A
( A−1 A) A =
I
A = I 56. A has an inverse if aii ≠ 0 for all i = 1 … n and
A
−1
⎡1 ⎢a ⎢ 11 ⎢ ⎢0 = ⎢ ⎢ ⎢ ⎢ ⎢0 ⎣
0 1 a22
0
⎤ 0 ⎥ ⎥ ⎥ 0 … 0 ⎥ ⎥. ⎥ ⎥ 1 ⎥ 0 ann ⎥⎦ 0 …
58. A = ⎡ 1 2⎤
        ⎣−2 1⎦
(a) A2 − 2A + 5I = ⎡−3  4⎤ − ⎡ 2 4⎤ + ⎡5 0⎤ = ⎡0 0⎤
                   ⎣−4 −3⎦   ⎣−4 2⎦   ⎣0 5⎦   ⎣0 0⎦
(b) A((1/5)(2I − A)) = (1/5)(2A − A2) = (1/5)(5I) = I. Similarly, ((1/5)(2I − A))A = I. Or,
(1/5)(2I − A) = (1/5)⎡1 −2⎤ = A−1 directly.
                     ⎣2  1⎦
(c) The calculation in part (b) did not depend on the entries of A.
60. Let C be the inverse of ( I − AB ), i.e., C = ( I − AB) . Then C ( I − AB ) = ( I − AB)C = I . Consider the matrix −1
I + BCA. Claim that this matrix is the inverse of I − BA. To check this claim show that
(I
+ BCA)( I − BA) = ( I − BA)( I + BCA) = I .
First show (I − BA)(I + BCA) = I − BA + BCA − BABCA = I − BA + B(C − ABC)A = I − BA + B((I − AB)C)A = I − BA + BA = I.
Similarly, show (I + BCA)(I − BA) = I.
62. Let A, D, P be n × n matrices. Suppose P −1 AP = D. Then P( P −1 AP) P −1 = PDP −1 , so A = PDP −1. It is not necessary that
⎡2 1⎤ ⎡2 0⎤ ⎡ 1 1⎤ A = D. For example, let A = ⎢ ⎥, D = ⎢ ⎥, P = ⎢ ⎥. ⎣0 1⎦ ⎣0 1⎦ ⎣0 −1⎦ ⎡2 1⎤ −1 It is easy to check that AP = PD = ⎢ ⎥ , so you have P AP = D, but A ≠ D. ⎣0 −1⎦
Section 2.4 Elementary Matrices 2. This matrix is not elementary, because it is not square. 4. This matrix is elementary. It can be obtained by interchanging the two rows of I 2 .
6. This matrix is elementary. It can be obtained by multiplying the first row of I 3 by 2, and adding the result to the third row. 8. This matrix is not elementary, because two elementary row operations are required to obtain it from I 4 .
10. C is obtained by adding the third row of A to the first row. So, ⎡ 1 0 1⎤ ⎢ ⎥ E = ⎢0 1 0⎥. ⎢0 0 1⎥ ⎣ ⎦
⎡ 1 0 −1⎤ ⎢ ⎥ E = ⎢0 1 0⎥. ⎢0 0 1⎥ ⎣ ⎦
14. To obtain the inverse matrix, reverse the elementary row operation that produced it. So,
E −1
⎡ 1 0⎤ = ⎢ 5 ⎥. ⎣⎢0 1⎦⎥
16. To obtain the inverse matrix, Reverse the elementary row operation that produced it. So, add 3 times the second row to the third row to obtain ⎡ 1 0 0⎤ ⎢ ⎥ E −1 = ⎢0 1 0⎥. ⎢0 3 1⎥ ⎣ ⎦
18. To obtain the inverse matrix, reverse the elementary row operation that produced it. So, divide the first row of I 3 by k to obtain
E −1
⎡1 ⎤ ⎢ k 0 0⎥ ⎥ , k ≠ 0. = ⎢ ⎢0 1 0 ⎥ ⎢ ⎥ ⎣0 0 1 ⎦
20. To obtain the inverse matrix, reverse the elementary row operation that produced it. So, add − k times the third row to the second row to obtain
E −1
⎡1 ⎢ 0 = ⎢ ⎢0 ⎢ ⎢⎣0
0
0 0⎤ ⎥ 0⎥ . 1 0⎥ ⎥ 0 1⎥⎦
1 −k 0 0
22. Find a sequence of elementary row operations that can be used to rewrite A in reduced row-echelon form.
⎡1 0⎤ ⎢ ⎥ ⎣1 1⎦
12. A is obtained by adding −1 times the third row of C to the first row. So,
( 12 )R
→ R1
1
⎡ 1 0⎤ E1 = ⎢ 2 ⎥ ⎣⎢ 0 1⎥⎦ ⎡ 1 0⎤ E2 = ⎢ ⎥ ⎣−1 1⎦
⎡ 1 0⎤ ⎢ ⎥ ⎣0 1⎦ R2 − R1 → R2
Use the elementary matrices to find the inverse. ⎡ 12 ⎡ 1 0⎤ ⎡ 12 0⎤ A−1 = E2 E1 = ⎢ ⎥ = ⎢ 1 ⎥⎢ ⎢⎣− 2 ⎣−1 1⎦ ⎢⎣ 0 1⎥⎦
0⎤ ⎥ 1⎦⎥
24. Find a sequence of elementary row operations that can be used to rewrite A in reduced row-echelon form. ⎡ 1 0 −2⎤ ⎢ 1⎥ ⎢0 1 2 ⎥ ⎢0 0 1⎥⎦ ⎣
( )R 1 2
2
⎡ 1 0 0⎤ ⎢ ⎥ E1 = ⎢0 12 0⎥ ⎢0 0 1⎥ ⎣ ⎦
→ R2
⎡ 1 0 0⎤ R1 + 2 R3 → R1 ⎢ 1⎥ ⎢0 1 2 ⎥ ⎢0 0 1⎥ ⎣ ⎦
⎡ 1 0 2⎤ ⎢ ⎥ E2 = ⎢0 1 0⎥ ⎢0 0 1⎥ ⎣ ⎦
⎡ 1 0 0⎤ ⎢ 1⎥ ⎢0 1 2 ⎥ ⎢0 0 1⎥ ⎣ ⎦
0⎤ ⎡1 0 ⎢ ⎥ E3 = ⎢0 1 − 12 ⎥ ⎢0 0 1⎥⎦ ⎣
R2 −
( )R 1 2
3
→ R2
Use the elementary matrices to find the inverse. A−1 = E3 E2 E1 0⎤⎡ 1 0 2⎤⎡ 1 0 0⎤ ⎡1 0 ⎢ ⎥⎢ ⎥⎢ ⎥ = ⎢0 1 − 12 ⎥⎢0 1 0⎥⎢0 12 0⎥ ⎢0 0 ⎥⎢ ⎥ 1⎥⎢ ⎣ ⎦⎣0 0 1⎦⎣0 0 1⎦ 2⎤⎡ 1 0 0⎤ ⎡1 0 ⎢ ⎥⎢ ⎥ = ⎢0 1 − 12 ⎥⎢0 12 0⎥ ⎢0 0 ⎥ 1⎥⎢ ⎣ ⎦⎣0 0 1⎦ 2⎤ ⎡1 0 ⎢ 1 ⎥ = ⎢0 2 − 12 ⎥ ⎢0 0 1⎥⎦ ⎣ ⎡0 1⎤ 26. The matrix A = ⎢ ⎥ is itself an elementary matrix, ⎣ 1 0⎦ so the factorization is ⎡0 1⎤ A = ⎢ ⎥. ⎣ 1 0⎦
⎡ 1 1⎤ 28. Reduce the matrix A = ⎢ ⎥ as follows. ⎣2 1⎦ Matrix
Elementary Row Operation
Elementary Matrix
⎡ 1 1⎤ ⎢ ⎥ ⎣0 −1⎦
−2 times row one to row two
⎡ 1 0⎤ E1 = ⎢ ⎥ ⎣−2 1⎦
⎡ 1 1⎤ ⎢ ⎥ ⎣0 1⎦
−1 times row two
⎡ 1 0⎤ E2 = ⎢ ⎥ ⎣0 −1⎦
⎡ 1 0⎤ ⎢ ⎥ ⎣0 1⎦
−1 times row two to row one
⎡ 1 −1⎤ E3 = ⎢ ⎥ ⎣0 1⎦
So, one way to factor A is ⎡ 1 0⎤⎡ 1 0⎤⎡ 1 1⎤ A = E1−1E2−1E3−1 = ⎢ ⎥⎢ ⎥⎢ ⎥. ⎣2 1⎦⎣0 −1⎦⎣0 1⎦ ⎡ 1 2 3⎤ ⎢ ⎥ 30. Reduce the matrix A = ⎢2 5 6⎥ as follows. ⎢ 1 3 4⎥ ⎣ ⎦
Elementary Row Operation
Elementary Matrix
−2 times row one to row two
⎡ 1 0 0⎤ ⎢ ⎥ E1 = ⎢−2 1 0⎥ ⎢ 0 0 1⎥ ⎣ ⎦
−1 times row one to row three
⎡ 1 0 0⎤ ⎢ ⎥ E2 = ⎢ 0 1 0⎥ ⎢−1 0 1⎥ ⎣ ⎦
⎡ 1 2 3⎤ ⎢ ⎥ ⎢0 1 0⎥ ⎢0 0 1⎥ ⎣ ⎦
−1 times row two to row three
⎡ 1 0 0⎤ ⎢ ⎥ E3 = ⎢0 1 0⎥ ⎢0 −1 1⎥ ⎣ ⎦
⎡ 1 2 0⎤ ⎢ ⎥ ⎢0 1 0⎥ ⎢0 0 1⎥ ⎣ ⎦
−3 times row three to row one
⎡ 1 0 −3⎤ ⎢ ⎥ E4 = ⎢0 1 0⎥ ⎢0 0 1⎥⎦ ⎣
−2 times row two to row one
⎡ 1 −2 0⎤ ⎢ ⎥ E5 = ⎢0 1 0⎥ ⎢0 0 1⎥ ⎣ ⎦
Matrix ⎡ 1 2 3⎤ ⎢ ⎥ ⎢0 1 0⎥ ⎢ 1 3 4⎥ ⎣ ⎦ ⎡ 1 2 3⎤ ⎢ ⎥ ⎢0 1 0⎥ ⎢0 1 1⎥ ⎣ ⎦
⎡ 1 0 0⎤ ⎢ ⎥ ⎢0 1 0⎥ ⎢0 0 1⎥ ⎣ ⎦
So, one way to factor A is A = E1−1E2−1E3−1E4−1E5−1 = ⎡ 1 0 0⎤⎡ 1 0 0⎤⎡ 1 0 0⎤⎡ 1 0 3⎤⎡ 1 2 0⎤ ⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥ ⎢2 1 0⎥⎢0 1 0⎥⎢0 1 0⎥⎢0 1 0⎥⎢0 1 0⎥. ⎢0 0 1⎥⎢ 1 0 1⎥⎢0 1 1⎥⎢0 0 1⎥⎢0 0 1⎥ ⎣ ⎦⎣ ⎦⎣ ⎦⎣ ⎦⎣ ⎦
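The factorization in Exercise 30 can be verified by multiplying the five inverse elementary matrices back together. A sketch (NumPy assumed):

```python
import numpy as np

A = np.array([[1, 2, 3], [2, 5, 6], [1, 3, 4]])

# Inverses of the elementary matrices from Exercise 30, in order
factors = [
    np.array([[1, 0, 0], [2, 1, 0], [0, 0, 1]]),  # E1^{-1}
    np.array([[1, 0, 0], [0, 1, 0], [1, 0, 1]]),  # E2^{-1}
    np.array([[1, 0, 0], [0, 1, 0], [0, 1, 1]]),  # E3^{-1}
    np.array([[1, 0, 3], [0, 1, 0], [0, 0, 1]]),  # E4^{-1}
    np.array([[1, 2, 0], [0, 1, 0], [0, 0, 1]]),  # E5^{-1}
]

product = np.linalg.multi_dot(factors)
print(np.array_equal(product, A))  # True
```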
32. Find a sequence of elementary row operations that can be used to rewrite A in reduced row-echelon form. ⎡1 ⎢ ⎢0 ⎢0 ⎢ ⎢⎣ 1 ⎡1 ⎢ ⎢0 ⎢0 ⎢ ⎢ ⎣0 ⎡1 ⎢ ⎢0 ⎢0 ⎢ ⎢⎣0
0
0
1⎤ 2
0
0
1⎤ 2
1
0
0
0
⎥ 1 0 1⎥ 0 −1 2⎥ ⎥ 0 0 −2⎥⎦
( 14 ) R
→ R1
1
⎡ 14 0 0 0⎤ ⎢ ⎥ 0 1 0 0⎥ E1 = ⎢⎢ 0 0 1 0⎥ ⎢ ⎥ ⎢⎣ 0 0 0 1⎥⎦ ⎡ 1 ⎢ 0 E2 = ⎢ ⎢ 0 ⎢ ⎣⎢−1
⎥ 1⎥ 0 −1 2⎥ ⎥ 0 0 − 52 ⎦⎥ R4 − R1 → R4 1⎤ 2
⎥ 1 0 1⎥ 0 −1 2⎥ ⎥ 0 0 1⎥⎦
⎡1 ⎢ ⎢0 ⎢0 ⎢ ⎢⎣0
⎥ 1⎥ 0 1 −2⎥ ⎥ 0 0 1⎥⎦
⎡1 ⎢ ⎢0 ⎢0 ⎢ ⎣⎢0
0⎤ ⎥ 1 0 1⎥ 0 1 −2⎥ ⎥ 0 0 1⎦⎥
⎡1 ⎢ ⎢0 ⎢0 ⎢ ⎣⎢0
0⎤ ⎥ 1 0 0⎥ 0 1 −2⎥ ⎥ 0 0 1⎦⎥
⎡1 ⎢ ⎢0 ⎢0 ⎢ ⎣⎢0
0 0 0⎤ ⎥ 1 0 0⎥ 0 1 0⎥ ⎥ 0 0 1⎦⎥
0 0
(− 52 )R
4
→ R4
1⎤ 2
1 0
0 0
− R3 → R3 R1 −
( 12 )R
4
→ R1
0 0
R2 − R4 → R2
R3 + 2 R4 → R3
0 0 0⎤ ⎥ 1 0 0⎥ 0 1 0⎥ ⎥ 0 0 1⎥⎦
⎡1 ⎢ ⎢0 E3 = ⎢ 0 ⎢ ⎢⎣0
0 0
0⎤ ⎥ 1 0 0⎥ 0 1 0⎥ ⎥ 0 0 − 52 ⎥⎦
⎡1 ⎢ 0 E4 = ⎢ ⎢0 ⎢ ⎢⎣0
0
⎡1 ⎢ 0 E5 = ⎢⎢ 0 ⎢ ⎢⎣0
0 0 − 12 ⎤ ⎥ 1 0 0⎥ 0 1 0⎥ ⎥ 0 0 1⎥⎦
⎡1 ⎢ 0 E6 = ⎢ ⎢0 ⎢ ⎢⎣0
0⎤ ⎥ 1 0 −1⎥ 0 1 0⎥ ⎥ 0 0 1⎥⎦
⎡1 ⎢ 0 E7 = ⎢ ⎢0 ⎢ ⎣⎢0
0 0 0⎤ ⎥ 1 0 0⎥ 0 1 2⎥ ⎥ 0 0 1⎦⎥
0 0⎤ ⎥ 0 0⎥ 0 −1 0⎥ ⎥ 0 0 1⎥⎦ 1
0 0
So, one way to factor A is A = E1−1E2−1E3−1E4−1E5−1E6−1E7−1 ⎡4 ⎢ 0 = ⎢ ⎢0 ⎢ ⎢⎣0
0 0 0⎤⎡ 1 ⎥⎢ 1 0 0⎥⎢0 0 1 0⎥⎢0 ⎥⎢ 0 0 1⎥⎢ ⎦⎣ 1
0 0 0⎤ ⎡ 1 ⎥⎢ 1 0 0⎥ ⎢0 0 1 0⎥ ⎢0 ⎥⎢ 0 0 1⎥⎦ ⎢⎣0
0⎤ ⎡ 1 ⎥ 1 0 0⎥ ⎢⎢0 0 1 0⎥ ⎢0 ⎥⎢ 0 0 − 52 ⎥⎦ ⎢⎣0 0 0
0 0⎤ ⎡ 1 ⎥⎢ 1 0 0⎥ ⎢0 0 −1 0⎥ ⎢0 ⎥⎢ 0 0 1⎥⎦ ⎢⎣0 0
1 ⎤⎡1 2
0 0 0⎤⎡ 1 ⎥⎢ ⎥⎢ 1 0 0⎥ ⎢0 1 0 1⎥⎢0 0 1 0⎥⎥ ⎢0 0 1 0⎥⎢0 ⎢ ⎥⎢ 0 0 1⎥⎦ ⎢⎣0 0 0 1⎥⎢ ⎦⎣0
0 0
0⎤ ⎥ 0⎥ . 0 1 −2⎥ ⎥ 0 0 1⎥⎦ 0 0
1 0
34. (a) False. It is impossible to obtain the zero matrix by applying any elementary row operation to the identity matrix.
(b) True. See the definition of row equivalence on page 90. (c) True. If A = E1E2 … Ek , where each Ei is an elementary matrix, then A is invertible (because every elementary matrix is) and A−1 = Ek−1 … E2−1E1−1. (d) True. See equivalent conditions (2) and (3) of Theorem 2.15 on page 93. 36. (a) EA and A have the same rows, except that the corresponding row in A is multiplied by c.
(b) E 2 is obtained by multiplying the corresponding row in I by c 2 .
38. First factor A as a product of elementary matrices. ⎡ 1 0 0⎤⎡ 1 0 0⎤⎡ 1 0 0⎤ ⎢ ⎥⎢ ⎥⎢ ⎥ A = E1−1E2−1E3−1 = ⎢0 1 0⎥⎢0 1 0⎥⎢0 1 0⎥ ⎢a 0 1⎥⎢0 b 1⎥⎢0 0 c⎥ ⎣ ⎦⎣ ⎦⎣ ⎦
So, A−1 = ( E1−1E2−1E3−1 )
−1
= E3 E2 E1.
0 ⎡1 0 0 ⎤ ⎡ 1 ⎢ ⎥ ⎡ 1 0 0⎤⎡ 1 0 0⎤ ⎢ 0 1 0 0 1 ⎢ ⎥⎢ ⎥ ⎢ ⎥ 0 1 0⎥⎢ 0 1 0⎥ = ⎢ ⎢ ⎢ ⎥ ⎢ 1 ⎢ a b ⎥ − ⎢0 0 ⎥ ⎣0 −b 1⎥⎢ ⎢− ⎦⎣− a 0 1⎦ c⎦ c ⎣ ⎣ c
0⎤ ⎥ 0⎥ . 1⎥ ⎥ c⎦
⎡ 1 2⎤ ⎡ 1 0⎤ ⎡2 2⎤ 40. No. For example ⎢ ⎥ + ⎢ ⎥ = ⎢ ⎥ is not elementary. ⎣0 1⎦ ⎣2 1⎦ ⎣2 2⎦ 42. Matrix
Elementary Matrix
⎡−2 1⎤ ⎢ ⎥ = A ⎣−6 4⎦ ⎡−2 1⎤ ⎢ ⎥ =U ⎣ 0 1⎦
⎡ 1 0⎤ E1 = ⎢ ⎥ ⎣−3 1⎦
⎡1 0⎤⎡−2 1⎤ E1 A = U ⇒ A = E1−1U = ⎢ ⎥⎢ ⎥ = LU ⎣3 1⎦⎣ 0 1⎦ 44. Matrix
Elementary Matrix
⎡ 2 0 0⎤ ⎢ ⎥ ⎢ 0 −3 1⎥ = A ⎢10 12 3⎥ ⎣ ⎦ ⎡2 0 0⎤ ⎢ ⎥ ⎢0 −3 1⎥ ⎢0 12 3⎥ ⎣ ⎦
⎡ 1 0 0⎤ ⎢ ⎥ E1 = ⎢ 0 1 0⎥ ⎢−5 0 1⎥ ⎣ ⎦
⎡2 0 0⎤ ⎢ ⎥ ⎢0 −3 1⎥ = U ⎢0 0 7⎥ ⎣ ⎦
⎡ 1 0 0⎤ ⎢ ⎥ E2 = ⎢0 1 0⎥ ⎢0 4 1⎥ ⎣ ⎦
⎡ 1 0 0⎤⎡2 0 0⎤ ⎢ ⎥⎢ ⎥ E2 E1 A = U ⇒ A = E1−1E2−1U = ⎢0 1 0⎥⎢0 −3 1⎥ = LU ⎢5 −4 1⎥⎢0 0 7⎥ ⎣ ⎦⎣ ⎦
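The LU-factorization in Exercise 44 can be confirmed by checking that L (unit lower triangular) times U (upper triangular) recovers A. A sketch (NumPy assumed):

```python
import numpy as np

A = np.array([[2, 0, 0], [0, -3, 1], [10, 12, 3]])
L = np.array([[1, 0, 0], [0, 1, 0], [5, -4, 1]])
U = np.array([[2, 0, 0], [0, -3, 1], [0, 0, 7]])

# L is unit lower triangular, U is upper triangular, and LU recovers A
print(np.array_equal(L @ U, A))  # True
```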
46. (a) Matrix
Elementary Matrix
⎡ 2 0 0 0⎤ ⎢ ⎥ ⎢−2 1 −1 0⎥ = A ⎢ 6 2 1 0⎥ ⎢ ⎥ ⎣⎢ 0 0 0 −1⎥⎦ ⎡2 0 0 0⎤ ⎢ ⎥ ⎢0 1 −1 0⎥ ⎢6 2 1 0⎥ ⎢ ⎥ ⎢⎣0 0 0 −1⎥⎦
⎡1 ⎢ 1 E1 = ⎢ ⎢0 ⎢ ⎢⎣0
⎡2 0 0 0⎤ ⎢ ⎥ ⎢0 1 −1 0⎥ ⎢0 2 1 0⎥ ⎢ ⎥ ⎢⎣0 0 0 −1⎥⎦
⎡ 1 ⎢ 0 E2 = ⎢⎢ −3 ⎢ ⎢⎣ 0
⎡2 ⎢ ⎢0 ⎢0 ⎢ ⎣⎢0
⎡1 0 ⎢ 0 1 E3 = ⎢ ⎢0 −2 ⎢ ⎣⎢0 0
0⎤ ⎥ 1 −1 0⎥ =U 0 3 0⎥ ⎥ 0 0 −1⎥⎦ 0
0
0 0 0⎤ ⎥ 1 0 0⎥ 0 1 0⎥ ⎥ 0 0 1⎥⎦ 0 0 0⎤ ⎥ 1 0 0⎥ 0 1 0⎥ ⎥ 0 0 1⎥⎦
⎡ 1 ⎢ −1 E3 E2 E1 A = U ⇒ A = E1−1E2−1E3−1U = ⎢ ⎢ 3 ⎢ ⎢⎣ 0 ⎡ 1 ⎢ −1 (b) Ly = b: ⎢ ⎢ 3 ⎢ ⎢⎣ 0
0 0⎤ ⎥ 0 0⎥ 1 0⎥ ⎥ 0 1⎦⎥ 0 0 0⎤⎡2 ⎥⎢ 1 0 0⎥⎢0 2 1 0⎥⎢0 ⎥⎢ 0 0 1⎥⎢ ⎦⎣0
0⎤ ⎥ 1 −1 0⎥ = LU 0 3 0⎥ ⎥ 0 0 −1⎥⎦ 0
0
0 0 0⎤ ⎡ y1 ⎤ ⎡ 4⎤ ⎥⎢ ⎥ ⎢ ⎥ 1 0 0⎥ ⎢ y2 ⎥ −4 = ⎢ ⎥ ⎥ ⎢ ⎥ ⎢ 2 1 0 y3 15⎥ ⎥⎢ ⎥ ⎢ ⎥ 0 0 1⎥⎦ ⎢⎣ y4 ⎥⎦ ⎢− ⎣ 1⎥⎦
y1 = 4, − y1 + y2 = −4 ⇒ y2 = 0, 3 y1 + 2 y2 + y3 = 15 ⇒ y3 = 3, and y4 = −1. ⎡2 ⎢ 0 (c) Ux = y : ⎢ ⎢0 ⎢ ⎢⎣0
0
0⎤⎡ x1 ⎤ ⎡ 4⎤ ⎥⎢ ⎥ ⎢ ⎥ 0⎥⎢ x2 ⎥ 0 = ⎢ ⎥ ⎥⎢ ⎥ ⎢ 3 0 x3 3⎥ ⎥⎢ ⎥ ⎢ ⎥ 0 −1⎥⎢ ⎢− ⎦⎣ x4 ⎥⎦ ⎣ 1⎥⎦ 0
1 −1 0 0
x4 = 1, x3 = 1, x2 − x3 = 0 ⇒ x2 = 1, and x1 = 2. So, the solution to the system Ax = b is: x1 = 2, x2 = x3 = x4 = 1. ⎡0 1⎤ ⎡a 0⎤⎡d 48. (a) Suppose A = ⎢ ⎥ = ⎢ ⎥⎢ ⎣ 1 0⎦ ⎣b c⎦⎣ 0
e⎤ ⎡ad ⎥ = ⎢ f⎦ ⎣bd
⎤ ⎥. be + cf ⎦ ae
Because 0 = ad , either a = 0 or d = 0. If a = 0, then ae = 0 = 1, which is impossible. If d = 0, then bd = 0 = 1, which is impossible.
(b) Consider the following LU-factorization. ⎡a b⎤ ⎡ 1 0⎤⎡ y z⎤ A = ⎢ ⎥ = ⎢ ⎥⎢ ⎥ = LU ⎣c d ⎦ ⎣ x 1⎦⎣ 0 w⎦ Then y = a, xy = xa = c ⇒ x =
c . a
⎛c⎞ Also, z = b, and xz + w = d ⇒ w = d − ⎜ ⎟b. ⎝a⎠ b ⎤ ⎡a 0⎤⎡a ⎡a b⎤ ⎢ ⎥⎢ ⎥ The factorization is ⎢ cb , a ≠ 0. ⎥ = ⎢ c ⎥⎢ 1 0 d − ⎥ ⎣c d ⎦ a ⎦⎥ ⎣⎢ a ⎥⎢ ⎦⎣ ⎡0 1⎤⎡0 1⎤ ⎡ 1 0⎤ 50. A2 = ⎢ ⎥⎢ ⎥ = ⎢ ⎥ ≠ A. ⎣ 1 0⎦⎣ 1 0⎦ ⎣0 1⎦
58. Assume A is idempotent. Then
A2 = A
( A2 )
= AT
( AT AT )
= AT
T
Because A2 ≠ A, A is not idempotent. ⎡2 3⎤⎡2 3⎤ ⎡7 12⎤ 52. A2 = ⎢ ⎥⎢ ⎥ = ⎢ ⎥. 1 2 1 2 ⎣ ⎦⎣ ⎦ ⎣4 7⎦ Because A2 ≠ A, A is not idempotent. ⎡0 1 0⎤⎡0 1 0⎤ ⎡ 1 0 0⎤ ⎢ ⎥⎢ ⎥ ⎢ ⎥ 54. A2 = ⎢ 1 0 0⎥⎢ 1 0 0⎥ = ⎢0 1 0⎥. ⎢0 0 1⎥⎢0 0 1⎥ ⎢0 0 1⎥ ⎣ ⎦⎣ ⎦ ⎣ ⎦
Because A2 ≠ A, A is not idempotent. ⎡ a2 0⎤ ⎡a 0⎤⎡a 0⎤ 56. A = ⎢ . ⎥⎢ ⎥ = ⎢ 2⎥ b c b c + ab bc c ⎢ ⎥⎦ ⎣ ⎦⎣ ⎦ ⎣
which means that AT is idempotent. Now assume AT is idempotent. Then AT AT = AT
( AT AT )
T
c = c 2 . If a = a 2 , there are two cases to consider.
(i) a = 0: If c = 0, then b = 0 ⇒ a = b = c = 0 is a solution. If c = 1, then b can be any number t ⇒ a = 0, c = 1, b = t is a solution. (ii) a = 1: If c = 0, then b can be any number t
⇒ a = 1, c = 0, b = t is a solution. If c = 1, then b = 0 ⇒ a = c = 1, b = 0 is a solution. So, the possible matrices are
T
AA = A which means that A is idempotent. 60. If A is row equivalent to B, then A = Ek
2
In order for A2 to equal A, you need a = a 2 and
= ( AT )
E2 E1B
where E1 , …, Ek are elementary matrices. So, B = E1−1E2−1 Ek−1 A which shows that B is row equivalent to A. 62. If B is row equivalent to A, then
B = Ek
E2 E1 A
where E1 , …, Ek are elementary matrices. Because elementary matrices are nonsingular, B −1 = ( Ek
E1 A)
−1
= A−1E1−1
Ek−1
which shows that B is also nonsingular.
⎡0 0⎤ ⎡0 0⎤ ⎡1 0⎤ ⎡ 1 0⎤ ⎢ ⎥, ⎢ ⎥, ⎢ ⎥, ⎢ ⎥. ⎣0 0⎦ ⎣ t 1⎦ ⎣t 0⎦ ⎣0 1⎦
Section 2.5 Applications of Matrix Operations 2. This matrix is not stochastic because every entry in a stochastic matrix must satisfy 0 ≤ aij ≤ 1.
4. This matrix is stochastic because each entry is between 0 and 1, and each column adds up to 1. 6. This matrix is stochastic because each entry is between 0 and 1, and each column adds up to 1.
8. Form the matrix representing the given transition probabilities. Let A represent infected mice and B noninfected mice. From A B
⎡0.2 0.1⎤ A⎫ P = ⎢ ⎥ ⎬ To ⎣0.8 0.9⎦ B⎭ The state matrix representing the current population is ⎡100⎤ A X = ⎢ ⎥ . ⎣900⎦ B The state matrix for next week is ⎡0.2 0.1⎤⎡100⎤ ⎡110⎤ PX = ⎢ ⎥⎢ ⎥ = ⎢ ⎥. ⎣0.8 0.9⎦⎣900⎦ ⎣890⎦ The state matrix for the week after next is ⎡0.2 0.1⎤⎡110⎤ ⎡111⎤ P( PX ) = ⎢ ⎥⎢ ⎥ = ⎢ ⎥. ⎣0.8 0.9⎦⎣890⎦ ⎣889⎦ So, next week 110 will be infected, while in 2 weeks 111 will be infected. 10. Form the matrix representing the given transition probabilities. Let A represents users of brand A, B users of brand B, and N users of neither brands. From A B N ⎡0.75 0.15 0.10⎤ A⎫ ⎪ P = ⎢⎢0.20 0.75 0.15⎥⎥ B⎬ To ⎢⎣0.05 0.10 0.75⎥⎦ N ⎭⎪
The state matrix representing the current product usage is ⎡20,000⎤ A X = ⎢⎢30,000⎥⎥ B ⎢⎣50,000⎥⎦ N
The state matrix for next month is ⎡0.75 0.15 0.10⎤⎡20,000⎤ ⎡24,500⎤ ⎢ ⎥⎢ ⎥ PX = ⎢0.20 0.75 0.15⎥⎢30,000⎥ = ⎢⎢34,000⎥⎥. ⎢⎣0.05 0.10 0.75⎥⎢ ⎢⎣ 41,500⎥⎦ ⎦⎣50,000⎥⎦
Similarly, the state matrices for the following two months are ⎡24,500⎤ ⎡27,625⎤ P( PX ) = P ⎢⎢34,000⎥⎥ = ⎢⎢36,625⎥⎥ and ⎢⎣ 41,500⎥⎦ ⎢⎣35,750⎥⎦ P( P( PX ))
12. First find ⎡0.6 0.1 0.1⎤ ⎡100⎤ ⎡150⎤ ⎢ ⎥⎢ ⎥ ⎢ ⎥ PX = ⎢0.2 0.7 0.1⎥ ⎢100⎥ = ⎢170⎥ , and ⎢0.2 0.2 0.8⎥ ⎢800⎥ ⎢680⎥ ⎣ ⎦⎣ ⎦ ⎣ ⎦ ⎡ 175⎤ ⎢ ⎥ P 2 X = P( PX ) = ⎢217⎥. ⎢ 608⎥ ⎣ ⎦ ⎡ 187.5⎤ ⎢ ⎥ Continuing, you have P 3 X = ⎢247.7⎥. ⎢ 564.8⎥ ⎣ ⎦ ⎡200⎤ ⎢ ⎥ Finally, the steady state matrix for P is ⎢300⎥ because ⎢500⎥ ⎣ ⎦ ⎡0.6 0.1 0.1⎤⎡200⎤ ⎡200⎤ ⎢ ⎥⎢ ⎥ ⎢ ⎥ ⎢0.2 0.7 0.1⎥⎢300⎥ = ⎢300⎥. ⎢0.2 0.2 0.8⎥⎢500⎥ ⎢500⎥ ⎣ ⎦⎣ ⎦ ⎣ ⎦
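The convergence in Exercise 12 can be demonstrated by iterating the transition matrix; repeated application of P drives the state toward the steady state, which P leaves fixed. A sketch (NumPy assumed; the text says only "graphing utility or computer software program"):

```python
import numpy as np

P = np.array([[0.6, 0.1, 0.1],
              [0.2, 0.7, 0.1],
              [0.2, 0.2, 0.8]])
X = np.array([100.0, 100.0, 800.0])

# Repeatedly applying P drives the state toward the steady state [200, 300, 500]
for _ in range(100):
    X = P @ X

print(np.allclose(X, [200, 300, 500]))                    # True
print(np.allclose(P @ [200, 300, 500], [200, 300, 500]))  # True: PX = X
```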
14. Let
P = ⎡  a      b  ⎤
    ⎣1 − a  1 − b⎦
be a 2 × 2 stochastic matrix, and consider the system of equations PX = X:
⎡  a      b  ⎤⎡x1⎤ = ⎡x1⎤
⎣1 − a  1 − b⎦⎣x2⎦   ⎣x2⎦
you have
ax1 + bx2 = x1
(1 − a)x1 + (1 − b)x2 = x2
or
(a − 1)x1 + bx2 = 0
(1 − a)x1 − bx2 = 0.
Letting x1 = b and x2 = 1 − a, you have a 2 × 1 state matrix X satisfying PX = X:
X = ⎡  b  ⎤.
    ⎣1 − a⎦
So, the next month’s users will be grouped as follows: 24,500 for brand A, 34,000 brand B, and 41,500 neither. In two months the distribution will be 27,625 brand A, 36,625 brand B, and 35,750 neither. Finally, in three months the distribution will be 29,788 brand A, 38,356 brand B, and 31,856 neither.
Copyright © Houghton Mifflin Harcourt Publishing Company. All rights reserved.
Chapter 2 Matrices

16. Divide the message into groups of three and form the uncoded matrices.

 P  L  E     A  S  E    _  S  E     N  D  _     M   O   N     E  Y  _
[16 12 5]   [1 19 5]   [0 19 5]   [14  4 0]   [13 15 14]   [5 25 0]

Multiplying each uncoded row matrix on the right by A yields the following coded row matrices.

[16 12 5]A = [16 12 5][ 4  2  1] = [43 6 9]
                      [−3 −3 −1]
                      [ 3  2  1]

[1 19 5]A   = [−38 −45 −13]
[0 19 5]A   = [−42 −47 −14]
[14 4 0]A   = [44 16 10]
[13 15 14]A = [49 9 12]
[5 25 0]A   = [−55 −65 −20]

So, the coded message is 43, 6, 9, −38, −45, −13, −42, −47, −14, 44, 16, 10, 49, 9, 12, −55, −65, −20.
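The block encoding of Exercise 16 can be sketched as a short routine. The letter-to-number map (_ = 0, A = 1, …, Z = 26) is the one the text uses; the helper name `encode` is ours.

```python
import numpy as np

# Encoding matrix A from Exercise 16.
A = np.array([[ 4,  2,  1],
              [-3, -3, -1],
              [ 3,  2,  1]])

def encode(message, A):
    """Map '_'->0, 'A'->1, ..., pad to a block multiple, multiply row blocks by A."""
    nums = [0 if ch == '_' else ord(ch) - ord('A') + 1 for ch in message]
    n = A.shape[0]
    nums += [0] * (-len(nums) % n)          # pad the final block with blanks
    blocks = np.array(nums).reshape(-1, n)  # one uncoded row matrix per row
    return (blocks @ A).ravel().tolist()

coded = encode("PLEASE_SEND_MONEY_", A)
print(coded)   # begins 43, 6, 9, -38, -45, -13, ... as in the text
```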
18. Divide the message into groups of four and form the uncoded matrices.

 H  E  L  P     _  I  S  _     C  O  M  I     N  G  _  _
[8 5 12 16]   [0  9 19  0]   [3 15 13 9]   [14 7  0  0]

Multiplying each uncoded row matrix on the right by A yields the coded row matrices.

[8 5 12 16]A = [8 5 12 16][−2  3 −1 −1] = [15 33 −23 −43]
                          [−1  1  1  1]
                          [−1 −1  1  2]
                          [ 3  1 −2 −4]

[0 9 19 0]A  = [−28 −10 28 47]
[3 15 13 9]A = [−7 20 7 2]
[14 7 0 0]A  = [−35 49 −7 −7]

So, the coded message is 15, 33, −23, −43, −28, −10, 28, 47, −7, 20, 7, 2, −35, 49, −7, −7.

20. Find

A⁻¹ = [−4  3]
      [ 3 −2]

and multiply each coded row matrix on the right by A⁻¹ to find the associated uncoded row matrix.

[85 120]A⁻¹ = [20 15] ⇒ T, O
[6 8]A⁻¹    = [0 2]   ⇒ _, B
[10 15]A⁻¹  = [5 0]   ⇒ E, _
[84 117]A⁻¹ = [15 18] ⇒ O, R
[42 56]A⁻¹  = [0 14]  ⇒ _, N
[90 125]A⁻¹ = [15 20] ⇒ O, T
[60 80]A⁻¹  = [0 20]  ⇒ _, T
[30 45]A⁻¹  = [15 0]  ⇒ O, _
[19 26]A⁻¹  = [2 5]   ⇒ B, E

So, the message is TO_BE_OR_NOT_TO_BE.
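Decoding, as in Exercise 20, is the same block multiplication carried out with A⁻¹; a minimal sketch (the helper name `decode` is ours):

```python
import numpy as np

# Inverse of the encoding matrix from Exercise 20.
A_inv = np.array([[-4,  3],
                  [ 3, -2]])

def decode(coded, A_inv):
    """Multiply each coded row pair by A^-1 and map 0->'_', 1->'A', ..."""
    blocks = np.array(coded).reshape(-1, 2)
    nums = (blocks @ A_inv).ravel()
    return ''.join('_' if n == 0 else chr(n + ord('A') - 1) for n in nums)

coded = [85, 120, 6, 8, 10, 15, 84, 117, 42, 56,
         90, 125, 60, 80, 30, 45, 19, 26]
print(decode(coded, A_inv))   # TO_BE_OR_NOT_TO_BE
```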
Section 2.5  Applications of Matrix Operations

22. Find

A⁻¹ = [11   2  −8]
      [ 4   1  −3]
      [−8  −1   6]

and multiply each coded row matrix on the right by A⁻¹ to find the associated uncoded row matrix.

[112 −140 83]A⁻¹ = [8 1 22]  ⇒ H, A, V
[19 −25 13]A⁻¹   = [5 0 1]   ⇒ E, _, A
[72 −76 61]A⁻¹   = [0 7 18]  ⇒ _, G, R
[95 −118 71]A⁻¹  = [5 1 20]  ⇒ E, A, T
[20 21 38]A⁻¹    = [0 23 5]  ⇒ _, W, E
[35 −23 36]A⁻¹   = [5 11 5]  ⇒ E, K, E
[42 −48 32]A⁻¹   = [14 4 0]  ⇒ N, D, _

The message is HAVE_A_GREAT_WEEKEND_.

24. Let A⁻¹ = [a b; c d] and use the known pieces of the message, _S = [0 19] and UE = [21 5], to find that

[−19 −19][a b] = [0 19]   and   [37 16][a b] = [21 5].
          [c d]                         [c d]

This produces a system of 4 equations:

−19a − 19c = 0
−19b − 19d = 19
 37a + 16c = 21
 37b + 16d = 5.

Solving this system, you find a = 1, b = 1, c = −1, and d = −2. So,

A⁻¹ = [ 1   1]
      [−1  −2].

Multiply each coded row matrix on the right by A⁻¹ to yield the uncoded row matrices

[3 1], [14 3], [5 12], [0 15], [18 4], [5 18], [19 0], [0 19], [21 5].

This corresponds to the message CANCEL_ORDERS_SUE.

26. (a) You have

[45 −35][w x] = [10 15]   and   [38 −30][w x] = [8 14].
        [y z]                           [y z]

So,

45w − 35y = 10     45x − 35z = 15
38w − 30y = 8      38x − 30z = 14.

Solving these two systems gives w = y = 1 and x = −2, z = −3. So,

A⁻¹ = [1  −2]
      [1  −3].

(b) Decoding, you have

[45 −35]A⁻¹ = [10 15] ⇒ J, O
[38 −30]A⁻¹ = [8 14]  ⇒ H, N
[18 −18]A⁻¹ = [0 18]  ⇒ _, R
[35 −30]A⁻¹ = [5 20]  ⇒ E, T
[81 −60]A⁻¹ = [21 18] ⇒ U, R
[42 −28]A⁻¹ = [14 0]  ⇒ N, _
[75 −55]A⁻¹ = [20 15] ⇒ T, O
[2 −2]A⁻¹   = [0 2]   ⇒ _, B
[22 −21]A⁻¹ = [1 19]  ⇒ A, S
[15 −10]A⁻¹ = [5 0]   ⇒ E, _

The message is JOHN_RETURN_TO_BASE_.
28. Use the given information to find D.

        A    B
D = [0.3  0.4]  A
    [0.4  0.2]  B

The equation X = DX + E may be written in the form (I − D)X = E; that is,

[ 0.7  −0.4]X = [50,000]
[−0.4   0.8]    [30,000]

Solving the system, X = [130,000]
                        [102,500].

30. From the given matrix D, form the linear system X = DX + E, which can be written as (I − D)X = E:

[ 0.8  −0.4  −0.4]    [5000]
[−0.4   0.8  −0.2]X = [2000]
[   0  −0.2   0.8]    [8000]

Solving this system, X = [21,875]
                         [17,000]
                         [14,250].

32. (a) The line that best fits the given points (−3, 0), (−1, 1), (1, 1), (3, 2) is shown on the graph.

(b) Using the matrices

X = [1 −3]             [0]
    [1 −1]   and   Y = [1]
    [1  1]             [1]
    [1  3]             [2]

you have

XᵀX = [4  0]   and   XᵀY = [4]
      [0 20]               [6]

A = (XᵀX)⁻¹(XᵀY) = [1/4   0  ][4] = [  1 ]
                   [ 0   1/20][6]   [3/10].

So, the least squares regression line is y = (3/10)x + 1.

(c) Solving Y = XA + E for E, you have

E = Y − XA = [−0.1]
             [ 0.3]
             [−0.3]
             [ 0.1]

and the sum of the squares error is EᵀE = 0.2.

34. (a) The line that best fits the given points (1, 0), (2, 0), (3, 0), (3, 1), (4, 1), (4, 2), (5, 2), (6, 2) is shown on the graph.

(b) Using the matrices

X = [1 1]             [0]
    [1 2]             [0]
    [1 3]             [0]
    [1 3]   and   Y = [1]
    [1 4]             [1]
    [1 4]             [2]
    [1 5]             [2]
    [1 6]             [2]

you have

XᵀX = [ 8  28]   and   XᵀY = [ 8]
      [28 116]               [37]

A = (XᵀX)⁻¹(XᵀY) = [−3/4]
                   [ 1/2].

So, the least squares regression line is y = (1/2)x − 3/4.

(c) Solving Y = XA + E for E, you have

Eᵀ = [1/4  −1/4  −3/4  1/4  −1/4  3/4  1/4  −1/4]

and the sum of the squares error is EᵀE = 1.5.

36. Using the matrices

X = [1 1]             [0]
    [1 3]   and   Y = [3]
    [1 5]             [6]

you have

XᵀX = [1 1 1][1 1] = [3  9]
      [1 3 5][1 3]   [9 35]
             [1 5]

XᵀY = [1 1 1][0] = [ 9]
      [1 3 5][3]   [39]
             [6]

A = (XᵀX)⁻¹(XᵀY) = [−3/2]
                   [ 3/2].

So, the least squares regression line is y = (3/2)x − 3/2.
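The input–output systems in Exercises 28 and 30, X = DX + E, amount to one linear solve of (I − D)X = E. A NumPy sketch on the Exercise 30 data follows; the matrix D shown is inferred from the I − D that appears in the solution.

```python
import numpy as np

# Exercise 30: input-output demand matrix D and external demand E.
D = np.array([[0.2, 0.4, 0.4],
              [0.4, 0.2, 0.2],
              [0.0, 0.2, 0.2]])
E = np.array([5000.0, 2000.0, 8000.0])

# X = DX + E  <=>  (I - D)X = E
X = np.linalg.solve(np.eye(3) - D, E)
print(np.round(X))   # approximately [21875, 17000, 14250]
```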
38. Using the matrices

X = [1  0]             [ 6]
    [1  4]             [ 3]
    [1  5]   and   Y = [ 0]
    [1  8]             [−4]
    [1 10]             [−5]

you have

XᵀX = [ 5  27]   and   XᵀY = [  0]
      [27 205]               [−70]

A = (XᵀX)⁻¹(XᵀY) = (1/296)[205 −27][  0] = (1/296)[1890]
                          [−27   5][−70]          [−350].

So, the least squares regression line is y = −(175/148)x + 945/148.

40. Using the matrices

X = [1 −3]             [4]
    [1 −1]   and   Y = [2]
    [1  1]             [1]
    [1  3]             [0]

you have

XᵀX = [4  0]   and   XᵀY = [  7]
      [0 20]               [−13]

A = (XᵀX)⁻¹(XᵀY) = [1/4   0  ][  7] = [  7/4 ]
                   [ 0   1/20][−13]   [−13/20].

So, the least squares regression line is y = −0.65x + 1.75.

42. Using the matrices

X = [1 −4]             [−1]
    [1 −2]   and   Y = [ 0]
    [1  2]             [ 4]
    [1  4]             [ 5]

you have

XᵀX = [4  0]   and   XᵀY = [ 8]
      [0 40]               [32]

A = (XᵀX)⁻¹(XᵀY) = [1/4   0  ][ 8] = [ 2 ]
                   [ 0   1/40][32]   [0.8].

So, the least squares regression line is y = 0.8x + 2.

44. (a) Using the matrices

X = [1 25]             [82]
    [1 30]   and   Y = [75]
    [1 35]             [67]
    [1 40]             [55]

you have

XᵀX = [  4  130]   and   XᵀY = [ 279]
      [130 4350]               [8845]

and

A = (XᵀX)⁻¹(XᵀY) = [127.6]
                   [−1.78].

So, the least squares regression line is y = −1.78x + 127.6.

(b) When x = 32.95, y = −1.78(32.95) + 127.6 = 68.95 ≈ 69.
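All of the regression exercises use the same normal-equation formula A = (XᵀX)⁻¹XᵀY. It can be sketched in a few lines, shown here on the Exercise 44 data; the helper name is ours.

```python
import numpy as np

def least_squares_line(xs, ys):
    """Solve the normal equations (X^T X) A = X^T Y for A = [intercept, slope]."""
    X = np.column_stack([np.ones(len(xs)), xs])   # column of 1s, then the x data
    Y = np.asarray(ys, dtype=float)
    return np.linalg.solve(X.T @ X, X.T @ Y)

# Exercise 44 data.
b, m = least_squares_line([25, 30, 35, 40], [82, 75, 67, 55])
print(m, b)            # about -1.78 and 127.6
print(m * 32.95 + b)   # about 68.95
```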
46. (a) Using the matrices

X = [1 100]             [75]
    [1 120]   and   Y = [68]
    [1 140]             [55]

you have

XᵀX = [  1   1   1][1 100] = [  3    360]
      [100 120 140][1 120]   [360 44,000]
                   [1 140]

XᵀY = [   198]
      [23,360]

A = (XᵀX)⁻¹(XᵀY) = (1/2400)[44,000 −360][   198] = (1/2400)[302,400] = [ 126]
                           [  −360    3][23,360]           [ −1200]   [−0.5].

So, the least squares regression line is y = −0.5x + 126.

(b) The graph shows this line through the points (100, 75), (120, 68), and (140, 55).

(c) Number (x)         100   120   140
    Percent (y)         75    68    55
    Model percent (y)   76    66    56

(d) When x = 170, y = −0.5(170) + 126 = 41%.

(e) When y = 40%, you have 40 = −0.5x + 126 and, therefore, x = 172.
Review Exercises for Chapter 2

2. −2[1  2]    [7 1]   [ −2 −4]   [56  8]   [54  4]
     [5 −4] + 8[1 2] = [−10  8] + [ 8 16] = [−2 24]
     [6  0]    [1 4]   [−12  0]   [ 8 32]   [−4 32]

4. [1  5][6 −2 8] = [1(6) + 5(4)  1(−2) + 5(0)  1(8) + 5(0)] = [26 −2  8]
   [2 −4][4  0 0]   [2(6) − 4(4)  2(−2) − 4(0)  2(8) − 4(0)]   [−4 −4 16]

6. [2 1][ 4 2]   [−2 4]   [ 5  5]   [−2 4]   [ 3  9]
   [6 0][−3 1] + [ 0 4] = [24 12] + [ 0 4] = [24 16]

8. Multiplying the left side of the equation yields

[2x −  y]   [ 5]
[3x + 4y] = [−2].

So, the corresponding system of linear equations is

2x −  y =  5
3x + 4y = −2.

10. Multiplying the left side of the equation yields

[2x −  y + 2z]   [ 0]
[3x + 2y +  z] = [−1]
[4x − 3y + 4z]   [−7].

So, the corresponding system of linear equations is

2x −  y + 2z =  0
3x + 2y +  z = −1
4x − 3y + 4z = −7.
12. Letting

A = [2 1],  x = [x],  and  b = [−8]
    [1 4]       [y]           [−4]

the given system can be written in matrix form Ax = b:

[2 1][x] = [−8]
[1 4][y]   [−4].

14. Letting

A = [−3 −1  1]        [x1]           [ 0]
    [ 2  4 −5],  x = [x2],  and  b = [−3]
    [ 1 −2  3]        [x3]           [ 1]

the given system can be written in matrix form Ax = b.

16. Aᵀ = [ 3 2]
         [−1 0]

AAᵀ = [3 −1][ 3 2] = [10 6]        AᵀA = [ 3 2][3 −1] = [13 −3]
      [2  0][−1 0]   [ 6 4]              [−1 0][2  0]   [−3  1]

18. A = [1 −2 −3]

AAᵀ = [1 −2 −3][ 1] = [14]         AᵀA = [ 1][1 −2 −3] = [ 1 −2 −3]
               [−2]                      [−2]            [−2  4  6]
               [−3]                      [−3]            [−3  6  9]

20. From the formula

A⁻¹ = (1/(ad − bc))[ d −b]
                   [−c  a]

you see that ad − bc = 4(2) − (−1)(−8) = 0, and so the matrix has no inverse.

22. Begin by adjoining the identity matrix to the given matrix.

[A  I] = [1 1 1 1 | 1 0 0 0]
         [0 1 1 1 | 0 1 0 0]
         [0 0 1 1 | 0 0 1 0]
         [0 0 0 1 | 0 0 0 1]

This matrix reduces to

[I  A⁻¹] = [1 0 0 0 | 1 −1  0  0]
           [0 1 0 0 | 0  1 −1  0]
           [0 0 1 0 | 0  0  1 −1]
           [0 0 0 1 | 0  0  0  1]

So, the inverse matrix is

A⁻¹ = [1 −1  0  0]
      [0  1 −1  0]
      [0  0  1 −1]
      [0  0  0  1].

24.   A      x      b
    [3 2][x1] = [ 1]
    [1 4][x2]   [−3]

Because A⁻¹ = [ 2/5   −1/5 ], solve the equation Ax = b as follows:
              [−1/10   3/10]

x = A⁻¹b = [ 2/5   −1/5 ][ 1] = [ 1]
           [−1/10   3/10][−3]   [−1]

26.   A        x      b
    [1  1 2][x1]   [ 0]
    [1 −1 1][x2] = [−1]
    [2  1 1][x3]   [ 2]

Using Gauss-Jordan elimination, you find that

A⁻¹ = [−2/5   1/5   3/5]
      [ 1/5  −3/5   1/5]
      [ 3/5   1/5  −2/5].

Solve the equation Ax = b as follows:

x = A⁻¹b = [−2/5   1/5   3/5][ 0]   [ 1]
           [ 1/5  −3/5   1/5][−1] = [ 1]
           [ 3/5   1/5  −2/5][ 2]   [−1]

28. Because (2A)⁻¹ = [2 4], you can use the formula for the inverse of a 2 × 2 matrix to obtain
                     [0 1]

2A = [2 4]⁻¹ = (1/(2 − 0))[1 −4] = (1/2)[1 −4]
     [0 1]                [0  2]        [0  2]

So, A = (1/4)[1 −4] = [1/4  −1 ]
             [0  2]   [ 0   1/2].
30. The matrix [2 x]
               [1 4]

will be nonsingular if ad − bc = (2)(4) − (1)(x) ≠ 0, which implies that x ≠ 8.

32. Because the given matrix represents 6 times the second row, the inverse will be 1/6 times the second row:

[1   0   0]
[0  1/6  0]
[0   0   1]
For Exercises 34 and 36, answers will vary. Sample answers are shown below. 34. Begin by finding a sequence of elementary row operations to write A in reduced row-echelon form.
Matrix
Elementary Row Operation
Elementary Matrix
⎡ 1 −4⎤ ⎢ ⎥ ⎣−3 13⎦
Interchange 2 rows.
⎡0 1⎤ E1 = ⎢ ⎥ ⎣ 1 0⎦
⎡ 1 −4⎤ ⎢ ⎥ 1⎦ ⎣0
Add 3 times row 1 to row 2.
⎡1 0⎤ E2 = ⎢ ⎥ ⎣3 1⎦
⎡ 1 0⎤ ⎢ ⎥ ⎣0 1⎦
Add 4 times row 2 to row 1.
⎡ 1 4⎤ E3 = ⎢ ⎥ ⎣0 1⎦
Then, you can factor A as follows. ⎡0 1⎤ ⎡ 1 0⎤ ⎡ 1 −4⎤ A = E1−1E2−1E3−1 = ⎢ ⎥⎢ ⎥⎢ ⎥ 1⎦ ⎣ 1 0⎦ ⎣−3 1⎦ ⎣0 36. Begin by finding a sequence of elementary row operations to write A in reduced row-echelon form. Matrix Elementary Row Operation Elementary Matrix ⎡ 1 0 2⎤ ⎢ ⎥ ⎢0 2 0⎥ ⎢ 1 0 3⎥ ⎣ ⎦ ⎡ 1 0 2⎤ ⎢ ⎥ ⎢0 2 0⎥ ⎢0 0 1⎥ ⎣ ⎦ ⎡ 1 0 0⎤ ⎢ ⎥ ⎢0 2 0⎥ ⎢0 0 1⎥ ⎣ ⎦ ⎡ 1 0 0⎤ ⎢ ⎥ ⎢0 1 0⎥ ⎢0 0 1⎥ ⎣ ⎦
1 3
times row one.
⎡ 13 0 0⎤ ⎢ ⎥ E1 = ⎢0 1 0⎥ ⎢0 0 1⎥ ⎣ ⎦
Add −1 times row one to row three.
⎡ 1 0 0⎤ ⎢ ⎥ E2 = ⎢ 0 1 0⎥ ⎢−1 0 1⎥ ⎣ ⎦
Add −2 times row three to row one.
⎡ 1 0 −2⎤ ⎢ ⎥ E3 = ⎢0 1 0⎥ ⎢0 0 1⎥⎦ ⎣
1 2
times row two.
⎡ 1 0 0⎤ ⎢ ⎥ E4 = ⎢0 12 0⎥ ⎢0 0 1⎥ ⎣ ⎦
So, you can factor A as follows. A =
E1−1E2−1E3−1E4−1
⎡3 0 0⎤ ⎡ 1 0 0⎤ ⎡ 1 0 2⎤ ⎡ 1 0 0⎤ ⎢ ⎥⎢ ⎥⎢ ⎥⎢ ⎥ = ⎢0 1 0⎥ ⎢0 1 0⎥ ⎢0 1 0⎥ ⎢0 2 0⎥ ⎢0 0 1⎥ ⎢ 1 0 1⎥ ⎢0 0 1⎥ ⎢0 0 1⎥ ⎣ ⎦⎣ ⎦⎣ ⎦⎣ ⎦
⎡a b⎤ 38. Letting A = ⎢ ⎥ , you have ⎣c d ⎦ ⎡a 2 + bc ab + bd ⎤ ⎡a b⎤ ⎡a b⎤ ⎡0 0⎤ = = ⎢ A2 = ⎢ ⎢ ⎥⎢ ⎥ ⎥ 2⎥ ⎢⎣ac + dc cb + d ⎥⎦ ⎣c d ⎦ ⎣c d ⎦ ⎣0 0⎦ So, many answers are possible. ⎡0 0⎤ ⎡0 1⎤ ⎢ ⎥, ⎢ ⎥ , etc. ⎣0 0⎦ ⎣0 0⎦
40. There are many possible answers.
⎡0 1⎤ ⎡ 1 0⎤ ⎡0 1⎤⎡ 1 0⎤ ⎡0 0⎤ A = ⎢ ⎥, B = ⎢ ⎥ ⇒ AB = ⎢ ⎥⎢ ⎥ = ⎢ ⎥ = O ⎣0 0⎦ ⎣0 0⎦ ⎣0 0⎦⎣0 0⎦ ⎣0 0⎦ ⎡ 1 0⎤ ⎡0 1⎤ ⎡0 1⎤ But, BA = ⎢ ⎥⎢ ⎥ = ⎢ ⎥ ≠ O. ⎣0 0⎦ ⎣0 0⎦ ⎣0 0⎦ 42. If aX + bY + cZ = O, then
⎡ 1⎤ ⎡−1⎤ ⎡ 3⎤ ⎡0⎤ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ 2 0 4 0 a ⎢ ⎥ + b ⎢ ⎥ + c ⎢ ⎥ = ⎢ ⎥, ⎢0⎥ ⎢ 3⎥ ⎢−1⎥ ⎢0⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢⎣ 2⎥⎦ ⎣⎢ 1⎦⎥ ⎣⎢ 2⎦⎥ ⎣⎢0⎦⎥ which yields the system of equations
a − b + 3c = 0 + 4c = 0
2a
3b − c = 0 a + 2b + 2c = 0.
Solving this homogeneous system, the only solution is a = b = c = 0. 44. No, this is not true. For example, let
⎡3 1⎤ ⎡2 −3⎤ ⎡ 1 0⎤ A = ⎢ ⎥. ⎥, B = ⎢ ⎥ and C = ⎢ ⎣1 2⎦ ⎣2 −5⎦ ⎣0 −4⎦ ⎡3 −4⎤ Then, AC = ⎢ ⎥ = CB, but A ≠ B. ⎣1 −8⎦ 46. Matrix
Elementary Matrix
⎡1 1 1⎤ ⎢ ⎥ ⎢1 2 2⎥ = A ⎢1 2 3⎥ ⎣ ⎦ ⎡ 1 1 1⎤ ⎢ ⎥ ⎢0 1 1⎥ ⎢ 1 2 3⎥ ⎣ ⎦
⎡ 1 0 0⎤ ⎢ ⎥ E1 = ⎢−1 1 0⎥ ⎢ 0 0 1⎥ ⎣ ⎦
⎡ 1 1 1⎤ ⎢ ⎥ ⎢0 1 1⎥ ⎢0 1 2⎥ ⎣ ⎦
⎡ 1 0 0⎤ ⎢ ⎥ E2 = ⎢ 0 1 0⎥ ⎢−1 0 1⎥ ⎣ ⎦
⎡ 1 1 1⎤ ⎢ ⎥ ⎢0 1 1⎥ = U ⎢0 0 1⎥ ⎣ ⎦
⎡ 1 0 0⎤ ⎢ ⎥ E3 = ⎢0 1 0⎥ ⎢0 −1 1⎥ ⎣ ⎦
E3 E2 E1 A = U ⇒ A =
E1−1E2−1E3−1U
⎡1 0 0⎤ ⎡ 1 1 1⎤ ⎢ ⎥⎢ ⎥ = ⎢1 1 0⎥ ⎢0 1 1⎥ = LU ⎢1 1 1⎥ ⎢0 0 1⎥ ⎣ ⎦⎣ ⎦
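The factorization in Exercise 46 is easy to verify by multiplying the factors back together:

```python
import numpy as np

# L and U from Exercise 46, and the original matrix A.
L = np.array([[1, 0, 0],
              [1, 1, 0],
              [1, 1, 1]])
U = np.array([[1, 1, 1],
              [0, 1, 1],
              [0, 0, 1]])
A = np.array([[1, 1, 1],
              [1, 2, 2],
              [1, 2, 3]])
print(np.array_equal(L @ U, A))   # True
```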
48. Matrix                          Elementary Matrix

[2 1  1 −1]
[0 3  1 −1] = A
[0 0 −2  0]
[2 1  1 −2]

[2 1  1 −1]                         [ 1 0 0 0]
[0 3  1 −1] = U                 E = [ 0 1 0 0]
[0 0 −2  0]                         [ 0 0 1 0]
[0 0  0 −1]                         [−1 0 0 1]

EA = U  ⇒  A = [1 0 0 0][2 1  1 −1]
               [0 1 0 0][0 3  1 −1] = LU
               [0 0 1 0][0 0 −2  0]
               [1 0 0 1][0 0  0 −1]

Ly = b:  [1 0 0 0][y1]   [ 7]            [ 7]
         [0 1 0 0][y2] = [−3]   ⇒  y =   [−3]
         [0 0 1 0][y3]   [ 2]            [ 2]
         [1 0 0 1][y4]   [ 8]            [ 1]

Ux = y:  [2 1  1 −1][x1]   [ 7]            [ 4]
         [0 3  1 −1][x2] = [−3]   ⇒  x =   [−1]
         [0 0 −2  0][x3]   [ 2]            [−1]
         [0 0  0 −1][x4]   [ 1]            [−1]
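Given A = LU as in Exercise 48, Ax = b reduces to forward substitution on Ly = b followed by back substitution on Ux = y; a minimal NumPy sketch:

```python
import numpy as np

# L, U, and b from Exercise 48.
L = np.array([[1., 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1]])
U = np.array([[2., 1,  1, -1],
              [0, 3,  1, -1],
              [0, 0, -2,  0],
              [0, 0,  0, -1]])
b = np.array([7., -3, 2, 8])

# Forward substitution: L y = b (L has a unit diagonal).
y = np.zeros(4)
for i in range(4):
    y[i] = b[i] - L[i, :i] @ y[:i]

# Back substitution: U x = y.
x = np.zeros(4)
for i in range(3, -1, -1):
    x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]

print(x)   # [ 4. -1. -1. -1.]
```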
50. (a) False. The product of a 2 × 3 matrix and a 3 × 5 matrix is a 2 × 5 matrix.
(b) True. See Theorem 2.6(4) on page 68. 52. (a) True. ( ABA−1 ) = ( ABA−1 )( ABA−1 ) = AB( A−1 A) BA−1 = ABIBA−1 = AB 2 A−1. 2
⎡ 1 0⎤ (b) False. Let A = ⎢ ⎥ ⎣0 1⎦
and
⎡−1 0⎤ B = ⎢ ⎥. ⎣ 0 −1⎦
⎡0 0⎤ Then A + B = ⎢ ⎥. ⎣0 0⎦ A + B is a singular matrix, while both A and B are nonsingular matrices. ⎡40 64 52⎤ ⎡3.32 1.32⎤ ⎡501.12 169.12⎤ ⎢ ⎥⎢ ⎥ ⎢ ⎥ 54. (a) AB = ⎢60 82 76⎥ ⎢3.22 1.07⎥ = ⎢700.36 236.86⎥ ⎢76 96 84⎥ ⎢3.12 0.92⎥ ⎢823.52 280.32⎥ ⎣ ⎦⎣ ⎦ ⎣ ⎦
This matrix shows the total sales of milk each day in the first column and total profit each day in the second column. (b) The profit for Friday through Sunday is the sum of the elements in the second column of AB, 169.12 + 236.86 + 280.32 = $686.30.
56. (a) In matrix B, grading system 1 counts each midterm as 25% of the grade and the final exam as 50% of the grade. Grading system 2 counts each midterm as 20% of the grade and the final exam as 60% of the grade.
(b) AB = [78 82 80]                [80    80  ]
         [84 88 85][0.25 0.20]     [85.5  85.4]
         [92 93 90][0.25 0.20]  =  [91.25 91  ]
         [88 86 90][0.50 0.60]     [88.5  88.8]
         [74 78 80]                [78    78.4]
         [96 95 98]                [96.75 97  ]
⎡ B B⎤ ⎢ ⎥ ⎢ B B⎥ ⎢A A⎥ ⎥ (c) ⎢ ⎢ B B⎥ ⎢ ⎥ ⎢ C C⎥ ⎢A A⎥ ⎣ ⎦ 58. This matrix is stochastic because 0 ≤ aij ≤ 1 and each column adds up to 1. ⎡0.6 0.2 0.0⎤ ⎡1000⎤ ⎡ 800⎤ ⎢ ⎥⎢ ⎥ ⎢ ⎥ 60. PX = ⎢0.2 0.7 0.1⎥ ⎢1000⎥ = ⎢1000⎥; ⎢0.2 0.1 0.9⎥ ⎢1000⎥ ⎢1200⎥ ⎣ ⎦⎣ ⎦ ⎣ ⎦ ⎡ 800⎤ ⎡ 680⎤ ⎢ ⎥ ⎢ ⎥ P X = P ⎢1000⎥ = ⎢ 980⎥; ⎢1200⎥ ⎢1340⎥ ⎣ ⎦ ⎣ ⎦ 2
⎡ 680⎤ ⎡ 604⎤ ⎢ ⎥ ⎢ ⎥ P 3 X = P ⎢ 980⎥ = ⎢ 956⎥. ⎢1340⎥ ⎢1440⎥ ⎣ ⎦ ⎣ ⎦
62. If you continue the computation in Exercise 61, you find that the steady state is ⎡140,000⎤ ⎢ ⎥ X = ⎢100,000⎥ ⎢ 60,000⎥ ⎣ ⎦
which can be verified by calculating PX = X.

64. The uncoded row matrices are

 B  E  A     M  _  M     E  _  U     P  _  S     C   O   T     T  Y  _
[2  5  1]  [13 0 13]   [5 0 21]   [16 0 19]   [3 15 20]    [20 25 0]

Multiplying each 1 × 3 matrix on the right by A yields the coded row matrices

[17 6 20]  [0 0 13]  [−32 −16 −43]  [−6 −3 7]  [11 −2 −3]  [115 45 155]

So, the coded message is 17, 6, 20, 0, 0, 13, −32, −16, −43, −6, −3, 7, 11, −2, −3, 115, 45, 155.
66. Find A⁻¹ to be

A⁻¹ = [−3 −4]
      [ 1  1]

and the coded row matrices are

[11 52], [−8 −9], [−13 −39], [5 20], [12 56], [5 20], [−2 7], [9 41], [25 100].

Multiplying each coded row matrix on the right by A⁻¹ yields the uncoded row matrices

 S  H     O  W     _  M     E  _     T  H     E  _     M   O     N  E     Y  _
[19 8]  [15 23]  [0 13]   [5 0]   [20 8]   [5 0]   [13 15]  [14 5]   [25 0]

So, the message is SHOW_ME_THE_MONEY_.

68. Find A⁻¹ to be

A⁻¹ = [−40 16  9]
      [ 13 −5 −3]
      [  5 −2 −1]

and the coded row matrices are

[23 20 132], [54 128 102], [32 21 203], [6 10 23], [21 15 129], [36 46 173], [29 72 45].

Multiplying each coded row matrix on the right by A⁻¹ yields the uncoded row matrices

 _  D  O     N  T  _     H  A  V     E  _  A     _  C  O     W  _  M     A  N  _
[0 4 15]   [14 20 0]   [8 1 22]   [5 0 1]    [0 3 15]   [23 0 13]   [1 14 0]

So, the message is _DONT_HAVE_A_COW_MAN_.
70. Find A⁻¹ to be

A⁻¹ = [4/13   2/13   1/13]
      [8/13  −9/13   2/13]
      [5/13  −4/13  −2/13]

and multiply each coded row matrix on the right by A⁻¹ to find the associated uncoded row matrix.

[66 27 −31]A⁻¹ = [25 1 14]  ⇒ Y, A, N
[37 5 −9]A⁻¹   = [11 5 5]   ⇒ K, E, E
[61 46 −73]A⁻¹ = [19 0 23]  ⇒ S, _, W
[46 −14 9]A⁻¹  = [9 14 0]   ⇒ I, N, _
[94 21 −49]A⁻¹ = [23 15 18] ⇒ W, O, R
[32 −4 12]A⁻¹  = [12 4 0]   ⇒ L, D, _
[66 31 −53]A⁻¹ = [19 5 18]  ⇒ S, E, R
[47 33 −67]A⁻¹ = [9 5 19]   ⇒ I, E, S

The message is YANKEES_WIN_WORLD_SERIES.
Copyright © Houghton Mifflin Harcourt Publishing Company. All rights reserved.
72. Solve the equation X = DX + E for X to obtain (I − D)X = E, which corresponds to solving the augmented matrix

[ 0.9  −0.3  −0.2 | 3000]
[   0   0.8  −0.3 | 3500]
[−0.4  −0.1   0.9 | 8500]

The solution to this system is

X = [10,000]
    [10,000]
    [15,000].

74. Using the matrices

X = [1 2]             [1]
    [1 3]             [3]
    [1 4]   and   Y = [2]
    [1 5]             [4]
    [1 6]             [4]

you have

XᵀX = [ 5 20]   and   XᵀY = [14]
      [20 90]               [63]

A = (XᵀX)⁻¹(XᵀY) = [ 1.8  −0.4][14] = [ 0 ]
                   [−0.4   0.1][63]   [0.7].

So, the least squares regression line is y = 0.7x.

76. Using the matrices

X = [1 1]             [1]
    [1 1]             [3]
    [1 1]   and   Y = [2]
    [1 1]             [4]
    [1 2]             [5]

you have

XᵀX = [5 6]   and   XᵀY = [15]
      [6 8]               [20]

A = (XᵀX)⁻¹(XᵀY) = [   2  −1.5][15] = [ 0 ]
                   [−1.5  1.25][20]   [2.5].

So, the least squares regression line is y = 2.5x.

78. (a) Using the matrices

X = [1 1]             [177.1]
    [1 2]             [179.9]
    [1 3]   and   Y = [184.0]
    [1 4]             [188.9]
    [1 5]             [195.3]

you have

XᵀX = [ 5 15]   and   XᵀY = [925.2]
      [15 55]               [ 2821]

Now, using (XᵀX)⁻¹ to find the coefficient matrix A, you have

A = (XᵀX)⁻¹XᵀY = (1/50)[ 55 −15][925.2] = (1/50)[8571] = [171.42]
                       [−15   5][ 2821]         [ 227]   [  4.54].

So, the least squares regression line is y = 4.54x + 171.42.

(b) The CPI in 2010 is y = 4.54(10) + 171.42 = 216.82. The CPI in 2015 is y = 4.54(15) + 171.42 = 239.52.
80. (a) Using the matrices

X = [1 0]             [109.5]
    [1 1]             [128.3]
    [1 2]   and   Y = [140.8]
    [1 3]             [158.7]
    [1 4]             [182.1]
    [1 5]             [207.9]

you have

XᵀX = [ 6 15]   and   XᵀY = [ 927.3]
      [15 55]               [2653.9]

Now, using (XᵀX)⁻¹ to find the coefficient matrix A, you have

A = (XᵀX)⁻¹XᵀY = (1/105)[ 55 −15][ 927.3] = (1/105)[11,193 ] = [106.6]
                        [−15   6][2653.9]          [2013.9]   [19.18].

So, the least squares regression line is y = 19.18x + 106.6.

(b) Using a graphing utility with L1 = {0, 1, 2, 3, 4, 5} and L2 = {109.5, 128.3, 140.8, 158.7, 182.1, 207.9} gives the same least squares regression line: y = 19.18x + 106.6.

(c) Year       2000   2001   2002   2003   2004   2005
    Actual     109.5  128.3  140.8  158.7  182.1  207.9
    Estimated  106.6  125.8  145.0  164.1  183.3  202.5

The estimated values are close to the actual values.

(d) The number of subscribers in 2010 is y = 19.18(10) + 106.6 = 298.4 million.

(e) 260 = 19.18x + 106.6
    153.4 = 19.18x
        8 ≈ x

The number of subscribers will be 260 million in 2008.
Project Solutions for Chapter 2

1  Exploring Matrix Multiplication

1. Test 1 seems to be the more difficult. The averages were: Test 1 average = 75, Test 2 average = 85.5.

2. Anna, David, Chris, Bruce

3. M[1; 0] represents the scores on the first test, and M[0; 1] represents the scores on the second test.

4. [1 0 0 0]M represents Anna's scores, and [0 0 1 0]M represents Chris's scores.

5. M[1; 1] represents the sum of the test scores for each student, and (1/2)M[1; 1] represents each student's average.

6. [1 1 1 1]M represents the sum of the scores on each test, and (1/4)[1 1 1 1]M represents the average on each test.

7. [1 1 1 1]M[1; 1] represents the overall point total for all students on all tests.

8. (1/8)[1 1 1 1]M[1; 1] = 80.25, the overall average.
9. M[1.1; 1.0]

2  Nilpotent Matrices

1. Because A² ≠ O and A³ = O, the index is 3.

2. (a) Nilpotent of index 2
   (b) Not nilpotent
   (c) Nilpotent of index 2
   (d) Not nilpotent
   (e) Nilpotent of index 2
   (f) Nilpotent of index 3

3. [0 0 1]              [0 1 1]
   [0 0 0] index 2;     [0 0 1] index 3
   [0 0 0]              [0 0 0]

4. [0 0 0 1]            [0 1 1 1]
   [0 0 0 0] index 2;   [0 0 1 1] index 4
   [0 0 0 0]            [0 0 0 1]
   [0 0 0 0]            [0 0 0 0]

5. [0 0 1 1]            [0 1 1 1 1]
   [0 0 0 1] index 3;   [0 0 1 1 1]
   [0 0 0 0]            [0 0 0 1 1] index 5
   [0 0 0 0]            [0 0 0 0 1]
                        [0 0 0 0 0]

6. No. If A is nilpotent and invertible, then Aᵏ = O for some k and Aᵏ⁻¹ ≠ O. So,

A⁻¹A = I  ⇒  O = A⁻¹Aᵏ = (A⁻¹A)Aᵏ⁻¹ = IAᵏ⁻¹ ≠ O,

which is impossible.

7. If A is nilpotent of index k, then (Aᵏ)ᵀ = (Aᵀ)ᵏ = O, but (Aᵀ)ᵏ⁻¹ = (Aᵏ⁻¹)ᵀ ≠ O, which shows that Aᵀ is nilpotent with the same index.

8. Let A be nilpotent of index k. Then

(I − A)(Aᵏ⁻¹ + Aᵏ⁻² + ⋯ + A² + A + I) = I − Aᵏ = I,

which shows that Aᵏ⁻¹ + Aᵏ⁻² + ⋯ + A² + A + I is the inverse of I − A.
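Property 8 can be checked on a concrete strictly upper triangular matrix (our example choice), which is nilpotent of index 3:

```python
import numpy as np

# A strictly upper triangular matrix is nilpotent; here A^3 = O (index 3).
A = np.array([[0, 1, 1],
              [0, 0, 1],
              [0, 0, 0]])
I = np.eye(3, dtype=int)
assert np.array_equal(np.linalg.matrix_power(A, 3), np.zeros((3, 3), dtype=int))

# (I - A)(I + A + A^2) = I - A^3 = I
inv = I + A + A @ A
print((I - A) @ inv)   # the identity matrix
```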
C H A P T E R  3   Determinants

Section 3.1  The Determinant of a Matrix ..............................................................64
Section 3.2  Evaluation of a Determinant Using Elementary Operations .................69
Section 3.3  Properties of Determinants .................................................................71
Section 3.4  Introduction to Eigenvalues ................................................................76
Section 3.5  Applications of Determinants ..............................................................80
Review Exercises ..................................................................................................86
Cumulative Test ....................................................................................................93
C H A P T E R  3   Determinants

Section 3.1  The Determinant of a Matrix

1. The determinant of a matrix of order 1 is the entry of the matrix. So, det[1] = 1.

3. |2 1| = 2(4) − 3(1) = 5
   |3 4|

5. | 5 2| = 5(3) − (−6)(2) = 27
   |−6 3|

7. |−7   6| = −7(3) − (1/2)(6) = −24
   |1/2  3|

9. |2 6| = 2(3) − 0(6) = 6
   |0 3|

11. |λ − 3    2  | = (λ − 3)(λ − 1) − 4(2) = λ² − 4λ − 5
    |  4    λ − 1|

13. (a) The minors of the matrix are shown below.

    M11 = |4| = 4     M12 = |3| = 3
    M21 = |2| = 2     M22 = |1| = 1

    (b) The cofactors of the matrix are shown below.

    C11 = (−1)²M11 = 4      C12 = (−1)³M12 = −3
    C21 = (−1)³M21 = −2     C22 = (−1)⁴M22 = 1

15. (a) The minors of the matrix are shown below.

    M11 = | 5 6| = 23    M12 = |4 6| = −8     M13 = |4  5| = −22
          |−3 1|               |2 1|                |2 −3|

    M21 = | 2 1| = 5     M22 = |−3 1| = −5    M23 = |−3  2| = 5
          |−3 1|               | 2 1|                | 2 −3|

    M31 = |2 1| = 7      M32 = |−3 1| = −22   M33 = |−3 2| = −23
          |5 6|                | 4 6|                | 4 5|

    (b) The cofactors of the matrix are shown below.

    C11 = (−1)²M11 = 23    C12 = (−1)³M12 = 8     C13 = (−1)⁴M13 = −22
    C21 = (−1)³M21 = −5    C22 = (−1)⁴M22 = −5    C23 = (−1)⁵M23 = −5
    C31 = (−1)⁴M31 = 7     C32 = (−1)⁵M32 = 22    C33 = (−1)⁶M33 = −23

17. (a) You found the cofactors of the matrix in Exercise 15. Now find the determinant by expanding along the second row.

    |−3  2 1|
    | 4  5 6| = 4C21 + 5C22 + 6C23 = 4(−5) + 5(−5) + 6(−5) = −75
    | 2 −3 1|

    (b) Expanding along the second column,

    |−3  2 1|
    | 4  5 6| = 2C12 + 5C22 − 3C32 = 2(8) + 5(−5) − 3(22) = −75.
    | 2 −3 1|
19. Expand along the second row because it has a zero.

| 1 4 −2|
| 3 2  0| = −3|4 −2| + 2| 1 −2| − 0| 1 4| = −3(20) + 2(1) = −58
|−1 4  3|     |4  3|    |−1  3|    |−1 4|

21. Expand along the first column because it has two zeros.

|2 4  6|
|0 3  1| = 2|3  1| − 0|4  6| + 0|4 6| = 2(−15) = −30
|0 0 −5|    |0 −5|    |0 −5|    |3 1|

23. Expand along the first row.

| 0.1 0.2 0.3|
|−0.3 0.2 0.2| = 0.1|0.2 0.2| − 0.2|−0.3 0.2| + 0.3|−0.3 0.2|
| 0.5 0.4 0.4|      |0.4 0.4|      | 0.5 0.4|      | 0.5 0.4|

= 0.1(0) − 0.2(−0.22) + 0.3(−0.22) = −0.022

25. Use the third row because it has a zero.

|x  y 1|
|2  3 1| = −(−1)|x 1| + 1|x y| = (x − 2) + (3x − 2y) = 4x − 2y − 2
|0 −1 1|        |2 1|    |2 3|
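The cofactor expansions in this section can be written as a small recursive routine. This is a teaching sketch of ours (numpy.linalg.det is the practical tool), checked against Exercises 19 and 21:

```python
import numpy as np

def det(M):
    """Determinant by cofactor expansion along the first row (teaching sketch)."""
    M = np.asarray(M, dtype=float)
    if M.shape == (1, 1):
        return M[0, 0]
    total = 0.0
    for j in range(M.shape[1]):
        minor = np.delete(M[1:], j, axis=1)   # delete row 0 and column j
        total += (-1) ** j * M[0, j] * det(minor)
    return total

print(det([[2, 4, 6], [0, 3, 1], [0, 0, -5]]))     # Exercise 21: -30.0
print(det([[1, 4, -2], [3, 2, 0], [-1, 4, 3]]))    # Exercise 19: -58.0
```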
1 5 0
1
2 7 6
4 6
4 12
3 7 7
2 6 2
4
1 −2
2
6
1 5 1 = 2 3 7 7
5 1 7 7
−7
1 1 3 7
4 = 6
1 −2
2
5 1 7 7
−6
1 1 3 7
3
0 6
4 − 4 2 −3 4 1 −2 2
2
+ 6
1 5
+ 2
−3 4 −2 2
− 2
4 12 −2
2
+1
4 12 −3
4
= 6( 2) − 2(32) + 52 = 0 0 6
2 −3 4 = 3 1 −2 2
−3 4 −2 2
−0
2 4 1 2
+ 6
2 −3 1 −2
= 3( 2) + 6( −1)
3 7
= 0 So,
= −20 1 5 1 = 2
4 12
1 −2
2 −3
3
= 2( 28) − 7( 4) + 6( −8) 2 6 2
6
= 5 2 −3
4 12
3 7 7
The 3 × 3 determinants are:
2 7 6
6
0 2 −3
= 6 1 5 1 −3 1 5 1
3 7 0 7
3 7 7
0
The determinants of the two 3 × 3 matrices are:
27. Expand along the third column because it has two zeros.
2 7 3 6
5 3
0
= x − 2 + 3x − 2 y
2 6 6 2
29. Use the first column because it has two zeros.
5 3
0
1 5
4 6
4 12
3 7
0 2 −3
4
1 −2
2
= 2( 28) − 6( 4) + 2( −8)
0
6 = 5(0) − 4(0) = 0.
= 16 So, the determinant of the original matrix is
6( −20) − 3(16) = −168.
31. Using the first row you have
x
y
z
21 −15
w
24
30
−10
24 −32
18
−40
22
−15
24
30
= w 24 −32
32 −35
22
32 −35
21
24
18 − x −10 −32 −40
21 −15
30
18 + y −10
32 −35
−40
21 −15
30
18 − z −10
24
22 −35
−40
24
24 −32 . 22
32
The determinants of the 3 × 3 matrices are: −15
24
30
24 −32
18 = −15
−32
32 −35
22
18
32 −35
− 24
24
18
+ 30
22 −35
24 −32 22
32
= −15(544) − 24( −1236) + 30(1472) = 65,664 21
24
30
−10 −32 −40
18 = 21
32 −35
−32
18
32 −35
− 24
−10
18
−40 −35
+ 30
−10 −32 −40
32
= 21(544) − 24(1070) + 30( −1600) = −62,256 21 −15
30
−10
24
18 = 21
−40
22 −35
24
18
22 −35
+ 15
−10
18
−40 −35
+ 30
−10 24 −40 22
= 21( −1236) + 15(1070) + 30(740) = 12,294 21 −15
24
−10
24 −32 = 21
−40
22
32
24 −32 22
35
+ 15
−10 −32 −40
32
+ 24
−10 24 −40 22
= 21(1472) + 15( −1600) + 24(740) = 24,672
w So,
x
y
z
21 −15
24
30
−10
24 −32
18
−40
22
= 65,664 w + 62,256 x + 12,294 y − 24,672 z
32 −35
33. Expand along the first column, and then along the first column of the 4 × 4 matrix.
5 2 0 0 −2 0
1 4 3
1 4 3 2
2
0 0 2 6
3 = 5
0 0 3 4
1
0 0 0 0
2
0 2 6 3 0 3 4
1
0 0 0 2
5(1) 3 4
1 = 5( 2)
0 0 2
2 6 3 4
4 3 37.
−4 = −43.5
2 −2
3
1
0 0 2
1 −5 − 14
35. 4
2 6 3 = 5(1) 3 4
Now expand along the third row, and obtain 2 6 3
1 2
2
5
1 6 −1
2
−3 2 6
1
4
5
= −1098
3 −2
= 10( −10) = −100.
39.
1
2
2
−1
0
1 2 −2 −3
1
1
4 −1
0
3 2
1
2 0 −2
3 −2 3
2
1 −2 3
1
2
−1
2
3
1
1
0 2
53.
(x
x 2 − 3x − 4 = 0
(x
55.
λ +2 2
6
0
0 0
2
2
0 0
0 −1
= 5(0)( 2)( −1) = 0
2
−2 ± 12 2
=
−2 ± 2 3 2
λ
2
3
0
λ
1
λ +1 2 1
λ
= λ (λ 2 + λ − 2) = λ (λ + 2)(λ − 1)
2
0 0
2 7
0 = ( −1)(3)( 2)(5)(1) = −30
0 0
0 5 −1
0 0
0 0
The determinant is zero when λ (λ + 2)(λ − 1) = 0. So, λ = 0, − 2, 1.
1 59.
(b) True. See the first line, page 124. (c) False. See "Definition of Minors and Cofactors of a Matrix," page 124.
61.
4u −1 −1 2v
x2 + 5x + 4 = 0 + 4)( x + 1) = 0 x = −4, −1
+ 1)( x − 2) − 1( −2) = 0 x2 − x − 2 + 2 = 0 x2 − x = 0 x( x − 1) = 0 x = 0, 1
= ( 4u )( 2v) − ( −1)( −1) = 8uv − 1
e2 x
e3 x
2e 2 x
3e3 x
= e 2 x (3e3 x ) − 2e 2 x (e3 x ) = 3e5 x − 2e5 x = e5 x
+ 3)( x + 2) − 1( 2) = 0 x2 + 5x + 6 − 2 = 0
(x
=
0
47. (a) False. See "Definition of the Determinant of a 2 × 2 Matrix," page 123.
51.
22 − 4(1)( −2)
57. 0 λ + 1 2 = λ
1 −3
0 3 −4 5
(x
= (λ + 2)(λ ) − 1( 2) = λ 2 + 2λ − 2
2(1)
= −1 ±
45. The determinant of a triangular matrix is the product of the elements on its main diagonal.
(x
−2 ±
λ =
43. The determinant of a triangular matrix is the product of the elements on its main diagonal.
49.
λ
1
The determinant is zero when λ 2 + 2λ − 2 = 0. Use the Quadratic Formula to find λ .
−3 7 2
0 0
− 4)( x + 1) = 0 x = 4, −1
4 6 0 = −2(6)( 2) = −24
−1 4
− 1)( x − 2) − 3( 2) = 0
= 329
−2 0 0
2
67
x 2 − 3x + 2 − 6 = 0
41. The determinant of a triangular matrix is the product of the elements on its main diagonal.
5 8 −4
The Determinant of a Matrix
63.
x ln x 1x
1
= x(1 x) − 1(ln x) = 1 − ln x
65. Expanding along the first row, the determinant of a 4 × 4 matrix involves four 3 × 3 determinants. Each of these 3 × 3 determinants requires 6 triple products. So, there are 4(6) = 24 quadruple products. 67. Evaluating the left side yields
w x y
= wz − xy.
z
Evaluating the right side yields −
y
z
w x
= −( xy − wz ) = wz − xy.
69. Evaluating the left side yields
w x y
= wz − xy.
z
Evaluating the right side yields
w x + cw z + cy
y
= w( z + cy ) − y ( x + cw) = wz + cwy − xy − cwy = wz − xy.
71. Evaluating the left side yields
1 x
x2
1 y
y2 =
1 z
z2
y
y2
z
z2
−
x
x2
z
z2
+
x
x2
y
y2
= yz 2 − y 2 z − ( xz 2 − x 2 z ) + xy 2 − x 2 y = xy 2 − xz 2 + yz 2 − x 2 y + x 2 z − y 2 z. Expanding the right side yields
(y
− x)( z − x)( z − y ) = ( yz − xy − xz + x 2 )( z − y ) = yz 2 − y 2 z − xyz + xy 2 − xz 2 + xyz + x 2 z − x 2 y = xy 2 − xz 2 + yz 2 − x 2 y + x 2 z − y 2 z.
73. Expanding the determinant along the first row yields 1
1
a
b
a2
b2
1
c = (bc 2 − cb 2 ) − ( ac 2 − ca 2 ) + ( ab 2 − a 2b) = bc 2 + ca 2 + ab 2 − ba 2 − ac 2 − cb 2 . c2
Expanding the right side yields
(a
− b)(b − c)(c − a) = ( ab − b 2 − ac + bc)(c − a) = abc − cb 2 − ac 2 + bc 2 − ba 2 + ab 2 + ca 2 − abc = bc 2 + ca 2 + ab 2 − ba 2 − ac 2 − cb 2 .
75. (a) Expanding along the first row,
x −1
0 c x b = x
0 −1 a
x b
+ c
−1 a
−1
x
0 −1
= x( ax + b) + c(1) = ax 2 + bx + c.
(b) The right column contains the coefficients a, b, c. So, x
0
0 d
−1
x
0
c
0 −1
x
b
0
0 −1 a
x = x −1
0 c
0
x b + 1 −1
0 −1 a
0 d x
b
0 −1 a
= x( ax 2 + bx + c) + d = ax 3 + bx 2 + cx + d .
Section 3.2 Evaluation of a Determinant Using Elementary Operations

1. Because the first row is a multiple of the second row, the determinant is zero.

3. Because the second row is composed of all zeros, the determinant is zero.

5. Because the second and third columns are interchanged, the sign of the determinant is changed.

7. Because 5 has been factored out of the first row, the first determinant is 5 times the second one.

9. Because 4 has been factored out of the second column, and 3 factored out of the third column, the first determinant is 12 times the second one.

11. Because each row in the matrix on the left is divided by 5 to yield the matrix on the right, the determinant of the matrix on the left is 5³ times the determinant of the matrix on the right.

13. Because a multiple of the first row of the matrix on the left was added to the second row to produce the matrix on the right, the determinants are equal.

15. Because a multiple of the first row of the matrix on the left was added to the second row to produce the matrix on the right, the determinants are equal.

17. Because the second row of the matrix on the left was multiplied by (−1), the sign of the determinant is changed.

19. Because the sixth column is a multiple of the first column, the determinant is zero.

21. Expand by cofactors along the second column; the determinant reduces to 3 − 4 = −1.
A graphing utility or computer software program produces the same determinant, −1.

23. Rewrite the matrix in triangular form. After row reduction, the diagonal entries are 1, 1, −1, and −19, so the determinant is 1(1)(−1)(−19) = 19.
A graphing utility or computer software program produces the same determinant, 19.

25. |1 7 −3; 1 3 1; 4 8 1| = |1 7 −3; 0 −4 4; 0 −20 13| = |1 7 −3; 0 −4 4; 0 0 −7| = 1(−4)(−7) = 28

27. |2 −1 −1; 1 3 2; 1 1 3| = −|1 3 2; 2 −1 −1; 1 1 3| = −|1 3 2; 0 −7 −5; 0 −2 1| = (−1)(−7 − 10) = 17

29. Using elementary row operations to create zeros in the third column,
|4 3 −2; 5 4 1; −2 3 4| = |14 11 0; 5 4 1; −22 −13 0| = (−1)|14 11; −22 −13| = (−1)[14(−13) − (−22)(11)] = (−1)(60) = −60

31. Using elementary row operations to create zeros in the third column and then expanding, the determinant is 1(−105 + 328) = 223.

33. Using elementary row and column operations and then expanding, the determinant is 3[(−3)(260) + 4(200 − 117)] = −1344.
35. Using elementary row operations,
|1 −2 7 9; 3 −4 5 5; …| = 2|1 −2 7 9; 0 1 −8 −11; 0 12 −20 −28; 0 13 −25 −34|
= 2|1 −2 7 9; 0 1 −8 −11; 0 0 76 104; 0 0 79 109|
= 2(1)(1)|76 104; 79 109| = 2[76(109) − 79(104)] = 136
37. Using elementary row operations, the determinant reduces to
2|−3 2 2; −24 −28 −17; −25 0 0| = 50(−56 + 34) = −1100
39. (a) True. See Theorem 3.3, part 1, page 134.
(b) True. See Theorem 3.3, part 3, page 134.
(c) True. See Theorem 3.4, part 2, page 136.

41. |1 0 0; 0 k 0; 0 0 1| = k|1 0 0; 0 1 0; 0 0 1| = k

43. |0 1 0; 1 0 0; 0 0 1| = −|1 0 0; 0 1 0; 0 0 1| = −1

45. |1 0 0; k 1 0; 0 0 1| = |1 0 0; 0 1 0; 0 0 1| = 1
47. Expand the two determinants on the left along the first column:
a₁₁|a₂₂ a₂₃; a₃₂ a₃₃| − a₂₁|a₁₂ a₁₃; a₃₂ a₃₃| + a₃₁|a₁₂ a₁₃; a₂₂ a₂₃| + b₁₁|a₂₂ a₂₃; a₃₂ a₃₃| − b₂₁|a₁₂ a₁₃; a₃₂ a₃₃| + b₃₁|a₁₂ a₁₃; a₂₂ a₂₃|
= (a₁₁ + b₁₁)|a₂₂ a₂₃; a₃₂ a₃₃| − (a₂₁ + b₂₁)|a₁₂ a₁₃; a₃₂ a₃₃| + (a₃₁ + b₃₁)|a₁₂ a₁₃; a₂₂ a₂₃|
= |a₁₁ + b₁₁  a₁₂  a₁₃; a₂₁ + b₂₁  a₂₂  a₂₃; a₃₁ + b₃₁  a₃₂  a₃₃|

49. |cos θ  −sin θ; sin θ  cos θ| = cos θ(cos θ) − (−sin θ)(sin θ) = cos²θ + sin²θ = 1

51. |sin θ  1; 1  sin θ| = (sin θ)(sin θ) − 1(1) = sin²θ − 1 = −cos²θ

53. Expanding along the second column,
|cos x  0  sin x; sin x  0  −cos x; sin x − cos x  1  sin x − cos x| = −1|cos x  sin x; sin x  −cos x| = −1(−cos²x − sin²x) = cos²x + sin²x = 1
The value of the determinant is 1 for all x, and therefore there is no value of x such that the determinant has a value of zero.

55. If B is obtained from A by multiplying the ith row of A by a nonzero constant c, then expanding along that row,
det(B) = ca_i1 C_i1 + … + ca_in C_in = c(a_i1 C_i1 + … + a_in C_in) = c det(A).
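Exercise 55's row-scaling property, det(B) = c det(A), can be spot-checked numerically. The matrix below is arbitrary (an illustration, not from the text):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 10.0]])
c = 5.0

B = A.copy()
B[1] *= c  # multiply the second row of A by c

# det(B) should equal c * det(A)
print(np.isclose(np.linalg.det(B), c * np.linalg.det(A)))  # True
```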
Section 3.3 Properties of Determinants

1. (a) |A| = |−2 1; 4 −2| = 0
(b) |B| = |1 1; 0 −1| = −1
(c) AB = [−2 1; 4 −2][1 1; 0 −1] = [−2 −3; 4 6]
(d) |AB| = |−2 −3; 4 6| = 0
Notice that |A||B| = 0(−1) = 0 = |AB|.

3. (a) |A| = |−1 2 1; 1 0 1; 0 1 0| = 2
(b) |B| = |−1 0 0; 0 2 0; 0 0 3| = −6
(c) AB = [−1 2 1; 1 0 1; 0 1 0][−1 0 0; 0 2 0; 0 0 3] = [1 4 3; −1 0 3; 0 2 0]
(d) |AB| = |1 4 3; −1 0 3; 0 2 0| = −12
Notice that |A||B| = 2(−6) = −12 = |AB|.
5. (a) Using elementary row operations, |A| = 3.
(b) Similarly, |B| = 6.
(c) AB = [6 3 −2 2; 2 1 0 −1; 9 4 −3 8; 8 5 −4 5]
(d) |AB| = 18
Notice that |A||B| = 3 ⋅ 6 = 18 = |AB|.

7. |A| = 4(−11) = −44

9. |A| = (−27)(−2) = 54

11. (a) |A| = |1 0 1; −1 2 1; 0 1 1| = 0
(b) |B| = |−1 0 2; 0 1 2; 1 1 1| = −1
(c) A + B = [0 0 3; −1 3 3; 1 2 2] and |A + B| = −15
Notice that |A| + |B| = 0 + (−1) = −1 ≠ |A + B|.

13. (a) |A| = |−1 1; 2 0| = −2
(b) |B| = |1 −1; −2 0| = −2
(c) A + B = [0 0; 0 0] and |A + B| = 0
Notice that |A| + |B| = −2 + (−2) = −4 ≠ |A + B|.

15. First observe that |A| = |6 −11; 4 −5| = 14.
(a) |Aᵀ| = |A| = 14
(b) |A²| = |A||A| = |A|² = 196
(c) |AAᵀ| = |A||Aᵀ| = 14(14) = 196
(d) |2A| = 2²|A| = 4(14) = 56
(e) |A⁻¹| = 1/|A| = 1/14
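The contrast between the multiplicative property |AB| = |A||B| and the failure of additivity in Exercises 11–13 can be verified numerically; the matrices below are those of Exercise 13:

```python
import numpy as np

A = np.array([[-1.0, 1.0], [2.0, 0.0]])
B = np.array([[1.0, -1.0], [-2.0, 0.0]])

# Determinants are multiplicative over the product ...
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))

# ... but not additive over the sum: |A| + |B| = -4, while |A + B| = 0.
print(np.linalg.det(A) + np.linalg.det(B), np.linalg.det(A + B))
```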
17. First observe that |A| = 29.
(a) |Aᵀ| = |A| = 29
(b) |A²| = |A||A| = 29² = 841
(c) |AAᵀ| = |A||Aᵀ| = 29(29) = 841
(d) |2A| = 2³|A| = 8(29) = 232
(e) |A⁻¹| = 1/|A| = 1/29

19. First observe that |A| = |4 2; −1 5| = 22.
(a) Aᵀ = [4 −1; 2 5], so |Aᵀ| = 22.
(b) A² = [14 18; −9 23], so |A²| = 484 = |A|².
(c) |AAᵀ| = |A||Aᵀ| = 22(22) = 484
(d) 2A = [8 4; −2 10], so |2A| = 88 = 2²|A|.
(e) A⁻¹ = [5/22 −1/11; 1/22 2/11], so |A⁻¹| = 1/22 = 1/|A|.

21. First observe that |A| = −115.
(a) |Aᵀ| = |A| = −115
(b) |A²| = |A|² = 13,225
(c) |AAᵀ| = |A||Aᵀ| = 13,225
(d) |2A| = 2⁴|A| = −1840
(e) |A⁻¹| = 1/|A| = −1/115

23. (a) |BA| = |B||A| = 2 ⋅ 4 = 8
(b) |B²| = |B|² = 2² = 4
(c) |2A| = 2⁴|A| = 16 ⋅ 4 = 64
(d) |(AB)ᵀ| = |BᵀAᵀ| = |Bᵀ||Aᵀ| = |B||A| = 2 ⋅ 4 = 8. Equivalently, |(AB)ᵀ| = |AB| = |A||B| = 4 ⋅ 2 = 8.
(e) |B⁻¹| = 1/|B| = 1/2

25. (a) |AB| = |A||B| = −5(3) = −15
(b) |A³| = |A|³ = (−5)³ = −125
(c) |3B| = 3⁴|B| = 81(3) = 243
(d) |(AB)ᵀ| = |AB| = −15
(e) |A⁻¹| = 1/|A| = −1/5

27. Because the determinant of the matrix is 0, the matrix is singular.

29. Because the determinant of the matrix is 195 ≠ 0, the matrix is nonsingular.

31. Because the determinant of the matrix is nonzero, the matrix is nonsingular.

33. Because the determinant of the matrix is 0, the matrix is singular.
35. A⁻¹ = (1/5)[4 −3; −1 2] = [4/5 −3/5; −1/5 2/5]
|A⁻¹| = (4/5)(2/5) − (−1/5)(−3/5) = 8/25 − 3/25 = 1/5
Notice that |A| = 5, so |A⁻¹| = 1/|A| = 1/5.

37. A⁻¹ = [−2 2 −1; 1/2 0 −1/2; 3/2 −1 1/2]
Expanding, |A⁻¹| = −1/2.
Notice that |A| = −2, so |A⁻¹| = 1/|A| = −1/2.

39. A⁻¹ = [−1/8 −5/8 7/8 0; 5/12 5/12 −1/4 −1/3; 3/8 7/8 −5/8 0; 1/2 1/2 −1/2 0]
Expanding along the fourth column, |A⁻¹| = 1/24.
Notice that |A| = 24, so |A⁻¹| = 1/|A| = 1/24.
41. The coefficient matrix of the system is
[1 −1 1; 2 −1 1; 3 −2 2].
Because the determinant of this matrix is zero, the system does not have a unique solution.

43. The coefficient matrix of the system has determinant 115, which is not zero, so the system has a unique solution.

45. Find the values of k necessary to make A singular by setting |A| = 0.
|A| = |k − 1  3; 2  k − 2| = (k − 1)(k − 2) − 6 = k² − 3k − 4 = (k − 4)(k + 1) = 0
So, |A| = 0 when k = −1, 4.

47. Find the value of k necessary to make A singular by setting |A| = 0.
|A| = |1 0 3; 2 −1 0; 4 2 k| = 1(−k) + 3(8) = 0
So, k = 24.

49. AB = I implies that |AB| = |A||B| = |I| = 1. So, both |A| and |B| must be nonzero, because their product is 1.

51. Let
A = [1 0; 0 0] and B = [0 1; 0 0].
Then |A| + |B| = 0 + 0 = 0, and |A + B| = |1 1; 0 0| = 0.
(The answer is not unique.)

53. For each i, i = 1, 2, …, n, the ith row of A can be written as
a_i1, a_i2, …, a_i,n−1, −Σⱼ₌₁ⁿ⁻¹ a_ij.
Therefore, the last column can be reduced to all zeros by adding the other columns of A to it. Because A can be reduced to a matrix with a column of zeros, |A| = 0.

55. Let det(A) = x and det(A⁻¹) = y. First note that
xy = det(A) ⋅ det(A⁻¹) = det(AA⁻¹) = det(I) = 1.
Assume that all of the entries of A and A⁻¹ are integers. Because a determinant is a sum of products of entries of a matrix, x = det(A) and y = det(A⁻¹) are integers. Therefore it must be that x and y are each ±1, because these are the only integer solutions to xy = 1.

57. P⁻¹AP ≠ A in general. For example, if
P = [1 2; 3 5], P⁻¹ = [−5 2; 3 −1], A = [2 1; −1 0],
then P⁻¹AP = [−27 −49; 16 29] ≠ A.
However, the determinants |A| and |P⁻¹AP| are equal:
|P⁻¹AP| = |P⁻¹||A||P| = |P⁻¹||P||A| = (1/|P|)|P||A| = |A|.

59. (a) False. See Theorem 3.6, page 144.
(b) True. See Theorem 3.8, page 146.
(c) True. See "Equivalent Conditions for a Nonsingular Matrix," parts 1 and 2, page 147.

61. Let A be an n × n matrix satisfying Aᵀ = −A. Then
|A| = |Aᵀ| = |−A| = (−1)ⁿ|A|.
So, when n is odd, |A| = −|A|, which forces |A| = 0.

63. The inverse of this matrix is
[0 1; 1 0]⁻¹ = [0 1; 1 0].
Because Aᵀ = A⁻¹, [0 1; 1 0] is orthogonal.

65. Because the matrix does not have an inverse (its determinant is 0), it is not orthogonal.
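In Exercise 45 the values of k that make A singular are the roots of the characteristic polynomial k² − 3k − 4; a quick numerical check (illustrative only):

```python
import numpy as np

# |A| = (k - 1)(k - 2) - 6 = k^2 - 3k - 4; A is singular where this is zero.
ks = np.roots([1.0, -3.0, -4.0])
print(sorted(round(k) for k in ks.real))  # [-1, 4]
```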
67. The inverse of this elementary matrix is
A⁻¹ = [1 0 0; 0 0 1; 0 1 0].

69. If Aᵀ = A⁻¹, then |Aᵀ| = |A⁻¹|, and so
1 = |I| = |AA⁻¹| = |A||A⁻¹| = |A||Aᵀ| = |A|² ⇒ |A| = ±1.

71. A = [2/3 −2/3 1/3; 2/3 1/3 −2/3; 1/3 2/3 2/3]
Using a graphing utility you have
(a), (b) A⁻¹ = [2/3 2/3 1/3; −2/3 1/3 2/3; 1/3 −2/3 2/3] = Aᵀ
Because A⁻¹ = Aᵀ, the matrix is orthogonal.
(c) As shown in Exercise 69, if A is an orthogonal matrix then |A| = ±1. For this given A you have |A| = 1. Because A⁻¹ = Aᵀ, A is an orthogonal matrix.

73. |SB| = |S||B| = 0|B| = 0 ⇒ SB is singular.
Section 3.4 Introduction to Eigenvalues

1. Ax₁ = [1 2; 0 −3][1; 0] = [1; 0] = 1[1; 0] = λ₁x₁
Ax₂ = [1 2; 0 −3][−1; 2] = [3; −6] = −3[−1; 2] = λ₂x₂

3. Ax₁ = [1 1 1; 0 1 0; 1 1 1][1; 0; 1] = [2; 0; 2] = 2[1; 0; 1] = λ₁x₁
Ax₂ = [1 1 1; 0 1 0; 1 1 1][−1; 0; 1] = [0; 0; 0] = 0[−1; 0; 1] = λ₂x₂
Ax₃ = [1 1 1; 0 1 0; 1 1 1][−1; 1; −1] = [−1; 1; −1] = 1[−1; 1; −1] = λ₃x₃

5. (a) λI − A = [λ 0; 0 λ] − [4 −5; 2 −3] = [λ − 4  5; −2  λ + 3]
|λI − A| = λ² − λ − 2
The characteristic equation is λ² − λ − 2 = 0.
(b) Solve the characteristic equation.
λ² − λ − 2 = (λ − 2)(λ + 1) = 0, so λ = 2, −1.
The eigenvalues are λ₁ = 2 and λ₂ = −1.
(c) For λ₁ = 2: [−2 5; −2 5] ⇒ [−2 5; 0 0]
The corresponding eigenvectors are nonzero scalar multiples of [5; 2].
For λ₂ = −1: [−5 5; −2 2] ⇒ [1 −1; 0 0]
The corresponding eigenvectors are nonzero scalar multiples of [1; 1].
7. (a) λI − A = [λ 0; 0 λ] − [2 1; 3 0] = [λ − 2  −1; −3  λ]
|λI − A| = λ² − 2λ − 3
The characteristic equation is λ² − 2λ − 3 = 0.
(b) λ² − 2λ − 3 = (λ − 3)(λ + 1) = 0, so λ = 3, −1.
The eigenvalues are λ₁ = 3 and λ₂ = −1.
(c) For λ₁ = 3: [1 −1; −3 3] ⇒ [1 −1; 0 0]
The corresponding eigenvectors are the nonzero multiples of [1; 1].
For λ₂ = −1: [−3 −1; −3 −1] ⇒ [1 1/3; 0 0]
The corresponding eigenvectors are the nonzero multiples of [−1; 3].

9. (a) λI − A = [λ 0; 0 λ] − [−2 4; 2 5] = [λ + 2  −4; −2  λ − 5]
|λI − A| = λ² − 3λ − 18
The characteristic equation is λ² − 3λ − 18 = 0.
(b) λ² − 3λ − 18 = (λ − 6)(λ + 3) = 0, so λ = 6, −3.
The eigenvalues are λ₁ = 6 and λ₂ = −3.
(c) For λ₁ = 6: [8 −4; −2 1] ⇒ [1 −1/2; 0 0]
The corresponding eigenvectors are the nonzero multiples of [1; 2].
For λ₂ = −3: [−1 −4; −2 −8] ⇒ [1 4; 0 0]
The corresponding eigenvectors are the nonzero multiples of [−4; 1].

11. (a) λI − A = |λ − 1  1  1; −1  λ − 3  −1; 3  −1  λ + 1|
= (λ − 1)(λ² − 2λ − 4) + 1(λ + 2) + 3(2 − λ) = λ³ − 3λ² − 4λ + 12
The characteristic equation is λ³ − 3λ² − 4λ + 12 = 0.
(b) λ³ − 3λ² − 4λ + 12 = (λ − 2)(λ + 2)(λ − 3) = 0
The eigenvalues are λ₁ = 2, λ₂ = −2, λ₃ = 3.
(c) For λ₁ = 2: [1 1 1; −1 −1 −1; 3 −1 3] ⇒ [1 1 1; 0 −4 0; 0 0 0] ⇒ [1 0 1; 0 1 0; 0 0 0]
The corresponding eigenvectors are the nonzero multiples of [−1; 0; 1].
For λ₂ = −2: [−3 1 1; −1 −5 −1; 3 −1 −1] ⇒ [1 5 1; 0 16 4; 0 0 0] ⇒ [1 5 1; 0 4 1; 0 0 0]
The corresponding eigenvectors are the nonzero multiples of [1; −1; 4].
For λ₃ = 3: [2 1 1; −1 0 −1; 3 −1 4] ⇒ [1 0 1; 0 1 −1; 0 0 0]
The corresponding eigenvectors are the nonzero multiples of [−1; 1; 1].
13. (a) λI − A = [λ 0 0; 0 λ 0; 0 0 λ] − [1 2 1; 0 1 0; 4 0 1] = |λ − 1  −2  −1; 0  λ − 1  0; −4  0  λ − 1|
= (λ − 1)(λ² − 2λ + 1) − 4(λ − 1) = λ³ − 3λ² − λ + 3
The characteristic equation is λ³ − 3λ² − λ + 3 = 0.
(b) λ³ − 3λ² − λ + 3 = λ²(λ − 3) − 1(λ − 3) = (λ² − 1)(λ − 3) = 0
The eigenvalues are λ₁ = −1, λ₂ = 1, λ₃ = 3.
(c) For λ₁ = −1: [−2 −2 −1; 0 −2 0; −4 0 −2] ⇒ [1 0 1/2; 0 1 0; 0 0 0]
The corresponding eigenvectors are the nonzero multiples of [−1; 0; 2].
For λ₂ = 1: [0 −2 −1; 0 0 0; −4 0 0] ⇒ [1 0 0; 0 1 1/2; 0 0 0]
The corresponding eigenvectors are the nonzero multiples of [0; −1; 2].
For λ₃ = 3: [2 −2 −1; 0 2 0; −4 0 2] ⇒ [1 0 −1/2; 0 1 0; 0 0 0]
The corresponding eigenvectors are the nonzero multiples of [1; 0; 2].
⎡ 2 5⎤ 15. Using a graphing utility or computer software program with A = ⎢ ⎥ produces the eigenvalues {1 −3}. ⎣−1 −4⎦ So, λ1 = 1 and λ2 = −3. ⎡−1 −5⎤ ⎡ 1 5⎤ ⎥ ⇒ ⎢ ⎥ ⎣ 1 5⎦ ⎣0 0⎦
and
⎡−5⎤ x1 = ⎢ ⎥ ⎣ 1⎦
⎡−5 −5⎤ ⎡ 1 1⎤ ⎥ ⇒ ⎢ ⎥ and 1 1 ⎣ ⎦ ⎣0 0⎦
⎡−1⎤ x2 = ⎢ ⎥ ⎣ 1⎦
λ1 = 1: ⎢
λ2 = −3: ⎢
Copyright © Houghton Mifflin Harcourt Publishing Company. All rights reserved.
Section 3.4
Introduction to Eigenvalues
79
⎡4 −2 −2⎤ 17. Using a graphing utility or computer software program with A = ⎢⎢0 1 0⎥⎥ produces the eigenvalues {3 2 1}. ⎢⎣ 1 0 1⎥⎦ So, λ1 = 3, λ2 = 2, and λ3 = 1.
⎡−1 2 2⎤
⎡ 1 0 −2⎤ 0⎥⎥ ⎣⎢0 0 0⎦⎥
and
⎡2⎤ x1 = ⎢⎢0⎥⎥ ⎣⎢ 1⎦⎥
⎡ 1 0 −1⎤ 0⎥⎥ ⎢⎣0 0 0⎥⎦
and
⎡ 1⎤ x 2 = ⎢⎢0⎥⎥ ⎢⎣ 1⎥⎦
and
⎡ 0⎤ x3 = ⎢⎢−1⎥⎥ ⎢⎣ 1⎥⎦
λ1 = 3: ⎢⎢ 0 2 0⎥⎥ ⇒ ⎢⎢0 1 ⎣⎢−1 0 2⎥⎦
⎡−2 2 2⎤
λ2 = 2: ⎢⎢ 0 1 0⎥⎥ ⇒ ⎢⎢0 1 ⎢⎣ −1 0
1⎥⎦
⎡−3 2 2⎤
⎡ 1 0 0⎤
λ3 = 1: ⎢⎢ 0 0 0⎥⎥ ⇒ ⎢⎢0 1 1⎥⎥ ⎢⎣ −1 0 0⎥⎦
⎢⎣0 0 0⎥⎦
⎡ 1 0 −1⎤ ⎢ ⎥ 19. Using a graphing utility or computer software program with A = ⎢0 −2 0⎥ produces the eigenvalues {1 − 2}. ⎢0 −2 −2⎥ ⎣ ⎦
So, λ1 = 1 and λ2 = −2. ⎡0 0 1⎤ ⎡0 1 0⎤ ⎡ 1⎤ ⎢ ⎥ ⎢ ⎥ λ1 = 1: ⎢0 1 0⎥ ⇒ ⎢0 0 1⎥ and x1 = ⎢⎢0⎥⎥ ⎢⎣0 2 1⎥⎦ ⎢⎣0 0 0⎥⎦ ⎢⎣0⎥⎦ ⎡ 1 0 − 13 ⎤ ⎡−3 0 1⎤ ⎡ 1⎤ ⎢ ⎥ ⎢ ⎥ λ2 = −2: ⎢ 0 0 0⎥ ⇒ ⎢0 1 0⎥ and x 2 = ⎢⎢0⎥⎥ ⎢0 0 ⎢⎣ 0 2 0⎥⎦ ⎢⎣3⎥⎦ 0⎥⎥ ⎣⎢ ⎦
⎡3 0 ⎢0 −1 21. Using a graphing utility or computer software program with A = ⎢ ⎢0 0 ⎢ ⎣0 0 So, λ1 = 5, λ2 = −3, λ3 = 3, and λ4 = −1.
⎡2 ⎢0 λ1 = 5: ⎢ ⎢0 ⎢ ⎣0
0 6 0 0
⎡−6 ⎢ 0 λ2 = −3: ⎢ ⎢ 0 ⎢ ⎣ 0 ⎡0 0 ⎢0 4 λ3 = 3: ⎢ ⎢0 0 ⎢ ⎣0 0 ⎡−4 ⎢ 0 λ4 = −1: ⎢ ⎢ 0 ⎢ ⎣ 0
0 0⎤ 0 0⎥⎥ produces the eigenvalues {5 −3 3 −1}. 2 5⎥ ⎥ 3 0⎦
0⎤ ⎡1 0 0 0⎤ ⎡0⎤ ⎢0 1 0 ⎥ ⎥ ⎢ ⎥ 0 0 0⎥ ⎥ and x = ⎢0⎥ ⇒ ⎢ 1 5 ⎢0 0 1 − ⎥ ⎢5⎥ 3 −5⎥ 3 ⎢ ⎥ ⎥ ⎢ ⎥ −3 5⎦ 0⎦⎥ ⎢⎣0 0 0 ⎣3⎦ 0 0 0⎤ ⎡ 1 0 0 0⎤ ⎡ 0⎤ ⎥ ⎢ ⎥ ⎢ 0⎥ 0 1 0 0⎥ −2 0 0⎥ and x 2 = ⎢ ⎥ ⇒ ⎢ ⎢0 0 1 1⎥ ⎢−1⎥ 0 −5 −5⎥ ⎥ ⎢ ⎥ ⎢ ⎥ 0 −3 −3⎦ 0 0 0 0 ⎣ ⎦ ⎣ 1⎦ 0 0⎤ ⎡0 1 0 0⎤ ⎡ 1⎤ ⎢0 0 1 0⎥ ⎢ ⎥ 0 0⎥⎥ ⎥ and x3 = ⎢0⎥ ⇒ ⎢ ⎢0 0 0 1⎥ ⎢0⎥ 1 −5⎥ ⎥ ⎢ ⎥ ⎢ ⎥ −3 3⎦ 0 0 0 0 ⎣ ⎦ ⎣0⎦ 0 0 0⎤ ⎡ 1 0 0 0⎤ ⎡0⎤ ⎢0 0 1 0⎥ ⎢ ⎥ 0 0 0⎥⎥ ⎥ and x 4 = ⎢ 1⎥ ⇒ ⎢ ⎢0 0 0 1⎥ ⎢0⎥ 0 −3 −5⎥ ⎥ ⎢ ⎥ ⎢ ⎥ 0 −3 −1⎦ ⎣0 0 0 0⎦ ⎣0⎦ 0
Copyright © Houghton Mifflin Harcourt Publishing Company. All rights reserved.
80
Chapter 3
Determinants
⎡ 1 0 1 0⎤ ⎢ ⎥ 0 −2 0 0⎥ produces the eigenvalues {−2 −1 1 3}. 23. Using a graphing utility or computer software program with A = ⎢ ⎢0 0 2 1⎥ ⎢ ⎥ ⎢⎣0 0 3 0⎥⎦ So, λ1 = −2, λ2 = −1, λ3 = 1, and λ4 = 3. ⎡−3 ⎢ 0 λ1 = −2: ⎢⎢ 0 ⎢ ⎢⎣ 0
0
−1
0⎤ ⎡1 ⎥ ⎢ 0 0 0⎥ 0 ⇒ ⎢ ⎢0 0 −4 −1⎥ ⎥ ⎢ 0 −3 −2⎥⎦ ⎢⎣0
0 0 0⎤ ⎡0⎤ ⎥ ⎢ ⎥ 0 1 0⎥ 1 and x1 = ⎢ ⎥ ⎢0⎥ 0 0 1⎥ ⎥ ⎢ ⎥ 0 0 0⎥⎦ ⎢⎣0⎥⎦
⎡−2 ⎢ 0 λ2 = −1: ⎢⎢ 0 ⎢ ⎣⎢ 0
0 −1
0 0 − 16 ⎤ ⎡ 1⎤ ⎥ ⎢ ⎥ 1 0 0⎥ ⎢ 0⎥ x and = 2 ⎢−2⎥ 1⎥ 0 1 3⎥ ⎢ ⎥ ⎢⎣ 6⎥⎦ 0 0 0⎥⎦
0⎤ ⎡1 ⎢ ⎥ 1 0 0⎥ 0 ⇒ ⎢⎢ 0 −3 −1⎥ ⎢0 ⎥ ⎢⎣0 0 −3 −1⎥⎦
⎡0 ⎢ 0 λ3 = 1: ⎢⎢ 0 ⎢ ⎣⎢0
0 −1
0⎤ ⎡0 ⎥ ⎢ 3 0 0⎥ 0 ⇒ ⎢ ⎢0 0 −1 −1⎥ ⎥ ⎢ 0 −3 1⎥⎦ ⎢⎣0
1 0 0⎤ ⎡ 1⎤ ⎥ ⎢ ⎥ 0 1 0⎥ 0 and x3 = ⎢ ⎥ ⎢0⎥ 0 0 1⎥ ⎥ ⎢ ⎥ 0 0 0⎥⎦ ⎢⎣0⎥⎦
⎡2 ⎢ 0 λ4 = 3: ⎢⎢ 0 ⎢ ⎣⎢0
0 −1
0 0 − 12 ⎤ ⎡ 1⎤ ⎥ ⎢ ⎥ 1 0 0⎥ ⎢0⎥ and x = 4 ⎥ ⎢ 2⎥ 0 1 −1⎥ ⎢ ⎥ ⎢⎣2⎥⎦ 0 0 0⎥⎦
0⎤ ⎡1 ⎢ ⎥ 5 0 0⎥ 0 ⇒ ⎢⎢ 0 1 −1⎥ ⎢0 ⎥ ⎢⎣0 0 −3 3⎥⎦
25. (a) False. The statement should read "... any nonzero multiple ...". See paragraph following Example 1 on page 153. (b) False. Eigenvalues are solutions to the characteristic equation λ I − A = 0.
Section 3.5 Applications of Determinants

1. The matrix of cofactors is
[4 −3; −2 1].
So, the adjoint of A is
adj(A) = [4 −3; −2 1]ᵀ = [4 −2; −3 1].
Because |A| = −2, the inverse of A is
A⁻¹ = (1/|A|)adj(A) = −(1/2)[4 −2; −3 1] = [−2 1; 3/2 −1/2].

3. The matrix of cofactors is
[0 0 0; 0 −12 4; 0 −6 2].
So, the adjoint of A is
adj(A) = [0 0 0; 0 −12 −6; 0 4 2].
Because row 3 of A is a multiple of row 2, the determinant is zero, and A has no inverse.
5. The matrix of cofactors is
[−7 2 2; −12 3 3; 13 −5 −2].
So, the adjoint is
adj(A) = [−7 −12 13; 2 3 −5; 2 3 −2].
Because |A| = −3, the inverse of A is
A⁻¹ = (1/|A|)adj(A) = −(1/3)[−7 −12 13; 2 3 −5; 2 3 −2] = [7/3 4 −13/3; −2/3 −1 5/3; −2/3 −1 2/3].
7. The matrix of cofactors is
[7 7 −4 2; 1 1 2 −1; 9 0 −9 9; −13 −4 10 −5].
So, the adjoint of A is
adj(A) = [7 1 9 −13; 7 1 0 −4; −4 2 −9 10; 2 −1 9 −5].
Because det(A) = 9, the inverse of A is
A⁻¹ = (1/9)adj(A) = [7/9 1/9 1 −13/9; 7/9 1/9 0 −4/9; −4/9 2/9 −1 10/9; 2/9 −1/9 1 −5/9].
9. If all the entries of A are integers, then so are those of the adjoint of A. Because A⁻¹ = (1/|A|)adj(A) and |A| = 1, the entries of A⁻¹ must be integers.

11. Because adj(A) = |A|A⁻¹,
|adj(A)| = ||A|A⁻¹| = |A|ⁿ|A⁻¹| = |A|ⁿ(1/|A|) = |A|ⁿ⁻¹.

13. Computing both determinants gives |adj(A)| = −2 = |A|. So, |adj(A)| = |A|.

15. Because adj(A⁻¹) = |A⁻¹|A and
(adj(A))⁻¹ = (|A|A⁻¹)⁻¹ = (1/|A|)A = |A⁻¹|A,
you have adj(A⁻¹) = (adj(A))⁻¹.
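Exercise 11's identity |adj(A)| = |A|ⁿ⁻¹ can be spot-checked numerically. The matrix below is arbitrary (not from the text), and the adjoint is obtained from adj(A) = |A|A⁻¹:

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0],
              [4.0, 0.0, 1.0]])
n = A.shape[0]

det_A = np.linalg.det(A)          # 25 for this matrix
adj_A = det_A * np.linalg.inv(A)  # adj(A) = |A| A^(-1)

# |adj(A)| = |A|^(n-1), and A adj(A) = |A| I.
print(np.isclose(np.linalg.det(adj_A), det_A ** (n - 1)))  # True
print(np.allclose(A @ adj_A, det_A * np.eye(n)))           # True
```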
17. The coefficient matrix is
A = [1 2; −1 1], and |A| = 3.
Because |A| ≠ 0, you can use Cramer's Rule. Replace column one with the column of constants to obtain
A₁ = [5 2; 1 1], |A₁| = 3.
Similarly, replace column two with the column of constants to obtain
A₂ = [1 5; −1 1], |A₂| = 6.
Then solve for x₁ and x₂.
x₁ = |A₁|/|A| = 3/3 = 1
x₂ = |A₂|/|A| = 6/3 = 2

19. The coefficient matrix is
A = [3 4; 5 3], and |A| = −11.
Because |A| ≠ 0, you can use Cramer's Rule.
A₁ = [−2 4; 4 3], |A₁| = −22
A₂ = [3 −2; 5 4], |A₂| = 22
The solution is
x₁ = |A₁|/|A| = −22/(−11) = 2
x₂ = |A₂|/|A| = 22/(−11) = −2.

21. The coefficient matrix is
A = [20 8; 12 −24], and |A| = −576.
Because |A| ≠ 0, you can use Cramer's Rule.
A₁ = [11 8; 21 −24], |A₁| = −432
A₂ = [20 11; 12 21], |A₂| = 288
The solution is
x₁ = |A₁|/|A| = −432/(−576) = 3/4
x₂ = |A₂|/|A| = 288/(−576) = −1/2.
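The Cramer's Rule procedure of Exercises 17–21 can be written as a short helper; the function name is illustrative, and the check below uses Exercise 19's system:

```python
import numpy as np

def cramer(A, b):
    """Solve Ax = b by Cramer's Rule (requires a nonzero determinant)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    d = np.linalg.det(A)
    if np.isclose(d, 0.0):
        raise ValueError("Cramer's Rule does not apply: |A| = 0")
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b          # replace column i with the constants
        x[i] = np.linalg.det(Ai) / d
    return x

# Exercise 19: |A| = -11, |A1| = -22, |A2| = 22.
print(cramer([[3, 4], [5, 3]], [-2, 4]))  # approximately [ 2. -2.]
```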
23. The coefficient matrix is
A = [−0.4 0.8; 2 −4], and |A| = 0.
Because |A| = 0, Cramer's Rule cannot be applied. (The system does not have a solution.)

25. The coefficient matrix is
A = [3 6; 6 12], and |A| = 0.
Because |A| = 0, Cramer's Rule cannot be applied. (The system has an infinite number of solutions.)
27. The coefficient matrix is
A = [4 −1 −1; 2 2 3; 5 −2 −2], and |A| = 3.
Because |A| ≠ 0, you can use Cramer's Rule.
A₁ = [1 −1 −1; 10 2 3; −1 −2 −2], |A₁| = 3
A₂ = [4 1 −1; 2 10 3; 5 −1 −2], |A₂| = 3
A₃ = [4 −1 1; 2 2 10; 5 −2 −1], |A₃| = 6
The solution is x₁ = 3/3 = 1, x₂ = 3/3 = 1, x₃ = 6/3 = 2.

29. The coefficient matrix is
A = [3 4 4; 4 −4 6; 6 −6 0], and |A| = 252.
Because |A| ≠ 0, you can use Cramer's Rule.
A₁ = [11 4 4; 11 −4 6; 3 −6 0], |A₁| = 252
A₂ = [3 11 4; 4 11 6; 6 3 0], |A₂| = 126
A₃ = [3 4 11; 4 −4 11; 6 −6 3], |A₃| = 378
The solution is x₁ = 252/252 = 1, x₂ = 126/252 = 1/2, x₃ = 378/252 = 3/2.

31. The coefficient matrix is
A = [3 3 5; 3 5 9; 5 9 17], and |A| = 4.
Because |A| ≠ 0, you can use Cramer's Rule.
A₁ = [1 3 5; 2 5 9; 4 9 17], |A₁| = 0
A₂ = [3 1 5; 3 2 9; 5 4 17], |A₂| = −2
A₃ = [3 3 1; 3 5 2; 5 9 4], |A₃| = 2
The solution is x₁ = 0/4 = 0, x₂ = −2/4 = −1/2, x₃ = 2/4 = 1/2.

33. The coefficient matrix is
A = [−0.4 0.8; 2 −4].
Using a graphing utility or a computer software program, |A| = 0. Cramer's Rule does not apply because the coefficient matrix has a determinant of zero.

35. The coefficient matrix is
A = [−1/4 3/8; 3/2 3/4].
Using a graphing utility or a computer software program, |A| = −3/4.
A₁ = [−2 3/8; −12 3/4]
Using a graphing utility or a computer software program, |A₁| = 3.
So, x₁ = |A₁|/|A| = 3 ÷ (−3/4) = −4.
37. The coefficient matrix is
A = [4 −1 1; 2 2 3; 5 −2 6].
Using a graphing utility or a computer software program, |A| = 55.
A₁ = [−5 −1 1; 10 2 3; 1 −2 6]
Using a graphing utility or a computer software program, |A₁| = −55. So, x₁ = |A₁|/|A| = −55/55 = −1.

39. The coefficient matrix is
A = [3 −2 1; −4 1 −3; 1 −5 1].
Using a graphing utility or a computer software program, |A| = −25.
A₁ = [−29 −2 1; 37 1 −3; −24 −5 1]
Using a graphing utility or a computer software program, |A₁| = 175. So, x₁ = |A₁|/|A| = 175/(−25) = −7.

41. Using a graphing utility or a computer software program, the determinant of the 4 × 4 coefficient matrix is |A| = 36, and replacing the first column with the column of constants gives |A₁| = 180. So, x₁ = |A₁|/|A| = 180/36 = 5.

43. The coefficient matrix is
A = [k  1 − k; 1 − k  k], and |A| = k² − (1 − k)² = 2k − 1.
Replacing the ith column of A with the column of constants yields Aᵢ:
A₁ = [1  1 − k; 3  k], |A₁| = 4k − 3
A₂ = [k  1; 1 − k  3], |A₂| = 4k − 1
The solution is
x = |A₁|/|A| = (4k − 3)/(2k − 1)
y = |A₂|/|A| = (4k − 1)/(2k − 1).
Notice that when k = 1/2, |A| = 2k − 1 = 0 and the system will be inconsistent.

45. Use the formula for area as follows.
Area = ±(1/2)|x₁ y₁ 1; x₂ y₂ 1; x₃ y₃ 1|
Because
|0 0 1; 2 0 1; 0 3 1| = 6,
the area is (1/2)(6) = 3.

47. Use the formula for area as follows.
Area = ±(1/2)|x₁ y₁ 1; x₂ y₂ 1; x₃ y₃ 1| = ±(1/2)|−1 2 1; 2 2 1; −2 4 1| = ±(1/2)(6) = 3

49. Use the fact that
|x₁ y₁ 1; x₂ y₂ 1; x₃ y₃ 1| = |1 2 1; 3 4 1; 5 6 1| = 0
to determine that the three points are collinear.

51. Use the fact that
|x₁ y₁ 1; x₂ y₂ 1; x₃ y₃ 1| = |−2 5 1; 0 −1 1; 3 −9 1| = 2 ≠ 0
to determine that the three points are not collinear.
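The area formula used in Exercises 45–51 translates directly into code; the helper below is illustrative and checks Exercise 45's triangle with vertices (0, 0), (2, 0), and (0, 3):

```python
import numpy as np

def triangle_area(p1, p2, p3):
    """Area = (1/2)|det| of the 3x3 matrix with rows (x_i, y_i, 1)."""
    M = np.array([[p1[0], p1[1], 1.0],
                  [p2[0], p2[1], 1.0],
                  [p3[0], p3[1], 1.0]])
    return abs(np.linalg.det(M)) / 2.0

print(triangle_area((0, 0), (2, 0), (0, 3)))
# A zero area signals collinear points, as in Exercise 49.
print(triangle_area((1, 2), (3, 4), (5, 6)))
```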
53. Use the equation
|x y 1; x₁ y₁ 1; x₂ y₂ 1| = 0
to find the equation of the line. So,
|x y 1; 0 0 1; 3 4 1| = 3y − 4x = 0.

55. Find the equation as follows.
0 = |x y 1; x₁ y₁ 1; x₂ y₂ 1| = |x y 1; −2 3 1; −2 −4 1| = 7x + 14
So, an equation for the line is x = −2.

57. Use the formula for volume as follows.
Volume = ±(1/6)|x₁ y₁ z₁ 1; x₂ y₂ z₂ 1; x₃ y₃ z₃ 1; x₄ y₄ z₄ 1|
Because
|1 0 0 1; 0 1 0 1; 0 0 1 1; 1 1 1 1| = −2,
the volume of the tetrahedron is −(1/6)(−2) = 1/3.

59. Use the formula for volume as follows.
Volume = ±(1/6)|x₁ y₁ z₁ 1; x₂ y₂ z₂ 1; x₃ y₃ z₃ 1; x₄ y₄ z₄ 1| = ±(1/6)|3 −1 1 1; 4 −4 4 1; 1 1 1 1; 0 0 1 1| = ±(1/6)(−12) = 2

61. Use the fact that
|x₁ y₁ z₁ 1; x₂ y₂ z₂ 1; x₃ y₃ z₃ 1; x₄ y₄ z₄ 1| = 28 ≠ 0
to determine that the four points are not coplanar.

63. Use the fact that
|x₁ y₁ z₁ 1; x₂ y₂ z₂ 1; x₃ y₃ z₃ 1; x₄ y₄ z₄ 1| = 0
to determine that the four points are coplanar.

65. Use the equation
|x y z 1; x₁ y₁ z₁ 1; x₂ y₂ z₂ 1; x₃ y₃ z₃ 1| = 0
to find the equation of the plane. So,
|x y z 1; 1 −2 1 1; −1 −1 7 1; 2 −1 3 1| = 0, or 4x − 10y + 3z = 27.
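The volume formula in Exercises 57–59 can be checked the same way; the helper below is illustrative and uses the tetrahedron of Exercise 57, with vertices (1, 0, 0), (0, 1, 0), (0, 0, 1), and (1, 1, 1):

```python
import numpy as np

def tetra_volume(pts):
    """Volume = (1/6)|det| of the 4x4 matrix with rows (x, y, z, 1)."""
    M = np.array([[x, y, z, 1.0] for (x, y, z) in pts])
    return abs(np.linalg.det(M)) / 6.0

# |det| = 2 here, so the volume is 1/3.
v = tetra_volume([(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)])
print(v)  # approximately 0.3333
```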
67. Find the equation as follows.
0 = |x y z 1; x₁ y₁ z₁ 1; x₂ y₂ z₂ 1; x₃ y₃ z₃ 1| = |x y z 1; 0 0 0 1; 1 −1 0 1; 0 1 −1 1| = x + y + z
So, an equation for the plane is x + y + z = 0.
69. The given use of Cramer's Rule to solve for y is not correct. The numerator and denominator have been reversed. The determinant of the coefficient matrix should be in the denominator.

71. Cramer's Rule was used correctly.

73. (a) 49a + 7b + c = 4380
64a + 8b + c = 4439
81a + 9b + c = 4524
(b) The coefficient matrix is
A = [49 7 1; 64 8 1; 81 9 1] and |A| = −2.
Also, A₁ = [4380 7 1; 4439 8 1; 4524 9 1] and |A₁| = −26,
A₂ = [49 4380 1; 64 4439 1; 81 4524 1] and |A₂| = 272,
A₃ = [49 7 4380; 64 8 4439; 81 9 4524] and |A₃| = −9390.
So, a = −26/(−2) = 13, b = 272/(−2) = −136, and c = −9390/(−2) = 4695.
(c) [Graph of the resulting quadratic on the window 0 ≤ x ≤ 10, 4300 ≤ y ≤ 4600.]
(d) The function fits the data exactly.
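The 3 × 3 system in Exercise 73 can also be solved directly; the check below (illustrative, not part of the text) recovers a = 13, b = −136, c = 4695:

```python
import numpy as np

A = np.array([[49.0, 7.0, 1.0],
              [64.0, 8.0, 1.0],
              [81.0, 9.0, 1.0]])
y = np.array([4380.0, 4439.0, 4524.0])

a, b, c = np.linalg.solve(A, y)
print(int(round(a)), int(round(b)), int(round(c)))  # 13 -136 4695
```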
Review Exercises for Chapter 3

1. Using the formula for the determinant of a 2 × 2 matrix,
|4 −1; 2 2| = 4(2) − 2(−1) = 10.

3. Using the formula for the determinant of a 2 × 2 matrix,
|−3 1; 6 −2| = (−3)(−2) − 6(1) = 0.
5. Expansion by cofactors along the first column produces
|1 4 −2; 0 −3 1; 1 1 −1| = 1|−3 1; 1 −1| + 1|4 −2; −3 1| = 1(2) + (−2) = 0.

7. The determinant of a diagonal matrix is the product of the entries along the main diagonal.
|−2 0 0; 0 −3 0; 0 0 −1| = (−2)(−3)(−1) = −6
9. Expansion by cofactors along the first column produces
|−3 6 9; 9 12 −3; 0 15 −6| = −3|12 −3; 15 −6| − 9|6 9; 15 −6| + 0|6 9; 12 −3| = 81 + 1539 + 0 = 1620.
11. Expand by cofactors along the second column to evaluate the determinant.

13. Applying elementary row operations and then expanding by cofactors, the determinant evaluates to 2(−25 + 66) = 82.
15. Applying elementary row operations to create zeros and then expanding by cofactors, the determinant evaluates to (−1)(1) = −1.
17. The determinant of a diagonal matrix is the product of its main diagonal entries. So,
|−1 0 0 0 0; 0 −1 0 0 0; 0 0 −1 0 0; 0 0 0 −1 0; 0 0 0 0 −1| = (−1)⁵ = −1.

19. Because the second row is a multiple of the first row, the determinant is zero.

21. Because −4 has been factored out of the second column, and 3 factored out of the third column, the first determinant is −12 times the second one.

23. First find |A| = −12.
(a) |Aᵀ| = |A| = −12
(b) |A³| = |A|³ = (−12)³ = −1728
(c) |AᵀA| = |Aᵀ||A| = −12(−12) = 144
(d) |5A| = 5²|A| = 25(−12) = −300

25. (a) |A| = |−1 2; 0 1| = −1
(b) |B| = |3 4; 2 1| = −5
(c) AB = [−1 2; 0 1][3 4; 2 1] = [1 −2; 2 1]
(d) |AB| = 5
Notice that |A||B| = (−1)(−5) = 5 = |AB|.

27. A⁻¹ = (1/6)[4 1; −2 1] = [2/3 1/6; −1/3 1/6]
|A⁻¹| = (2/3)(1/6) − (1/6)(−1/3) = 1/9 + 1/18 = 1/6
Notice that |A| = 6, so |A⁻¹| = 1/|A| = 1/6.

29. Computing the inverse and then its determinant gives |A⁻¹| = −1/20.
Notice that |A| = −20, so |A⁻¹| = 1/|A| = −1/20.

31. Computing the inverse and then its determinant gives |A⁻¹| = −1/10.
Notice that |A| = 1(0 − 24) + 1(12 + 2) = −10, so |A⁻¹| = 1/|A| = −1/10.
⎡ 1⎤ ⎢1 1 ⎢ 3⎥ ⎥ ⎢ 1⎥ ⇒ ⎢0 1 ⎥ ⎢ 7⎥ ⎢0 0 3 ⎦⎥ ⎢⎣ 1 1 1 1 5⎛ 1 ⎞ ⎛1⎞ ⎛ 1⎞ − 2⎜ ⎟ = − , and x1 = − ⎜ ⎟ − 1⎜ − ⎟ So, x3 = , x2 = 2 2 2 2 3 3 2 ⎝ ⎠ ⎝ ⎠ ⎝ 2⎠
5 ⎡ 5 1⎤ ⎡ ⎢1 1 3 1 1 ⎡3 3 5 1⎤ ⎢ ⎢ 3 3⎥ ⎢ ⎥ ⎥ ⇒ ⎢0 2 4 33. (a) ⎢3 5 9 2⎥ ⇒ ⎢ ⎢3 5 9 2⎥ ⎢ ⎢5 9 17 4⎥ 26 ⎢ ⎥ ⎣ ⎦ ⎢0 4 ⎣5 9 17 4 ⎦ 3 ⎣⎢
1 1 = − . 10 A
1⎤ 3⎥ ⎥ 1⎥ 2 2⎥ ⎥ 1⎥ 1 2 ⎥⎦
5 3
= 0.
Copyright © Houghton Mifflin Harcourt Publishing Company. All rights reserved.
Review Exercises for Chapter 3 5 ⎡ ⎢1 1 3 ⎢ (b) ⎢0 1 2 ⎢ ⎢ ⎢0 0 1 ⎣⎢
89
1⎤ 1 1⎤ ⎡ ⎡1 0 0 0 ⎤ ⎡1 0 0 0 ⎤ ⎢1 0 − 3 − 6⎥ 3⎥ ⎢ ⎥ ⎢ ⎥ ⎥ ⎢ ⎥ ⎢0 1 2 1 ⎥ ⎢0 1 0 − 1 ⎥ 1⎥ 1⎥ ⎢ 2 ⇒ 0 1 ⇒ ⎢ 2⎥ ⇒ ⎢ 2⎥ ⎢ 2⎥ 2⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ 1 1⎥ 1⎥ 1⎥ ⎢0 0 1 ⎥ ⎢0 0 1 ⎥ ⎢0 0 1 2⎦ 2⎦ ⎣ ⎣ ⎢⎣ 2 ⎥⎦ 2 ⎦⎥
So, x1 = 0, x2 = − 12 , and x3 = 12 . (c) The coefficient matrix is
⎡3 3 5⎤ A = ⎢⎢3 5 9⎥⎥ and ⎢⎣5 9 17⎥⎦
A = 4.
⎡ 1 3 5⎤ Also, A1 = ⎢⎢2 5 9⎥⎥ and ⎢⎣4 9 17⎥⎦ ⎡3 1 5⎤ A2 = ⎢⎢3 2 9⎥⎥ and ⎢⎣5 4 17⎥⎦ ⎡3 3 A3 = ⎢⎢3 5 ⎢⎣5 9 0 So, x1 = = 0, x2 4
1⎤ 2⎥⎥ 4⎥⎦
=
and
A1 = 0,
A2 = −2,
A3 = 2.
−2 1 2 1 = − , and x3 = = . 4 2 4 2
35. (a) Gaussian elimination:
[1 2 -1 | -7; 2 -2 -2 | -8; -1 3 4 | 8] => [1 2 -1 | -7; 0 -6 0 | 6; 0 5 3 | 1] => [1 2 -1 | -7; 0 1 0 | -1; 0 0 1 | 2]
So, x3 = 2, x2 = -1, and x1 = -7 + 1(2) - 2(-1) = -3.
(b) Gauss-Jordan elimination continues:
[1 2 -1 | -7; 0 1 0 | -1; 0 0 1 | 2] => [1 0 -1 | -5; 0 1 0 | -1; 0 0 1 | 2] => [1 0 0 | -3; 0 1 0 | -1; 0 0 1 | 2]
So, x1 = -3, x2 = -1, and x3 = 2.
(c) The coefficient matrix is A = [1 2 -1; 2 -2 -2; -1 3 4] and |A| = -18. Also
A1 = [-7 2 -1; -8 -2 -2; 8 3 4], A2 = [1 -7 -1; 2 -8 -2; -1 8 4], A3 = [1 2 -7; 2 -2 -8; -1 3 8],
with |A1| = 54, |A2| = 18, and |A3| = -36. So,
x1 = 54/(-18) = -3, x2 = 18/(-18) = -1, and x3 = -36/(-18) = 2.
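Computations like the Cramer's rule calculation in Exercise 35(c) are easy to spot-check numerically. The following sketch (not part of the manual) redoes it with NumPy:

```python
import numpy as np

# Coefficient matrix and right-hand side of Exercise 35.
A = np.array([[1.0, 2.0, -1.0],
              [2.0, -2.0, -2.0],
              [-1.0, 3.0, 4.0]])
b = np.array([-7.0, -8.0, 8.0])

# Cramer's rule: x_i = det(A_i)/det(A), where A_i is A with
# column i replaced by b.
det_A = np.linalg.det(A)
x = np.empty(3)
for i in range(3):
    Ai = A.copy()
    Ai[:, i] = b
    x[i] = np.linalg.det(Ai) / det_A

assert np.isclose(det_A, -18.0)
assert np.allclose(x, [-3.0, -1.0, 2.0])
```

The same solution, of course, comes out of `np.linalg.solve(A, b)` directly; Cramer's rule is used here only because it mirrors the hand computation.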
37. Because the determinant of the coefficient matrix is
|5 4; -1 1| = 5(1) - 4(-1) = 9 != 0,
the system has a unique solution.

39. Because the determinant of the coefficient matrix is
|-1 1 2; 2 3 1; 5 4 2| = -15 != 0,
the system has a unique solution.

41. Because the determinant of the coefficient matrix is
|1 2 6; 2 5 15; 3 1 3| = 0,
the system does not have a unique solution.

43. (a) False. See "Definitions of Minors and Cofactors of a Matrix," page 124.
(b) False. See Theorem 3.3, part 1, page 134.
(c) False. See Theorem 3.9, page 148.
45. Using the fact that |cA| = c^n|A|, where A is an n x n matrix,
|4A| = 4^3|A| = 64(2) = 128.

47. Expand the determinant on the left along the third row:

|a11 a12 a13; a21 a22 a23; a31+c31 a32+c32 a33+c33|
= (a31 + c31)|a12 a13; a22 a23| - (a32 + c32)|a11 a13; a21 a23| + (a33 + c33)|a11 a12; a21 a22|
= [a31|a12 a13; a22 a23| - a32|a11 a13; a21 a23| + a33|a11 a12; a21 a22|]
  + [c31|a12 a13; a22 a23| - c32|a11 a13; a21 a23| + c33|a11 a12; a21 a22|].

The first, third, and fifth terms in this sum correspond to the determinant
|a11 a12 a13; a21 a22 a23; a31 a32 a33|
expanded along the third row. Similarly, the second, fourth, and sixth terms correspond to the determinant
|a11 a12 a13; a21 a22 a23; c31 c32 c33|
expanded along the third row.

49. Each row consists of n - 1 ones and one element equal to 1 - n. The sum of the elements in each row is then
(n - 1)(1) + (1 - n) = 0.
In Section 3.3, Exercise 53, you showed that a matrix whose rows each add up to zero has a determinant of zero. So, the determinant of this matrix is zero.

51. |lambda*I - A| = |lambda+3 -10; -5 lambda-2| = lambda^2 + lambda - 56 = (lambda + 8)(lambda - 7)

lambda = -8: [-5 -10; -5 -10] => [1 2; 0 0] and x1 = (-2, 1)
lambda = 7: [10 -10; -5 5] => [1 -1; 0 0] and x2 = (1, 1)
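The eigenvalue computation in Exercise 51 can be checked numerically; the matrix A below is recovered from the printed lambda*I - A, so it is an inferred reconstruction:

```python
import numpy as np

# From lambda*I - A = [[l+3, -10], [-5, l-2]], A = [[-3, 10], [5, 2]].
A = np.array([[-3.0, 10.0],
              [5.0, 2.0]])

vals, vecs = np.linalg.eig(A)
order = np.argsort(vals)                # eigenvalues in increasing order
assert np.allclose(vals[order], [-8.0, 7.0])

# The eigenvectors should be multiples of (-2, 1) and (1, 1).
v1, v2 = vecs[:, order[0]], vecs[:, order[1]]
assert np.isclose(v1[0] / v1[1], -2.0)
assert np.isclose(v2[0] / v2[1], 1.0)
```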
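The row-additivity property of determinants proved in Exercise 47 also holds numerically for arbitrary matrices; a small randomized sketch (not part of the manual):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.integers(-5, 6, size=(3, 3)).astype(float)
c3 = rng.integers(-5, 6, size=3).astype(float)

# lhs: matrix a with (third row of a) + c3 as its third row.
lhs = a.copy()
lhs[2] = a[2] + c3

# rhs2: matrix a with c3 alone as its third row.
rhs2 = a.copy()
rhs2[2] = c3

# The determinant is additive in any single row.
assert np.isclose(np.linalg.det(lhs),
                  np.linalg.det(a) + np.linalg.det(rhs2))
```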
53. |lambda*I - A| = |lambda-1 0 0; 2 lambda-3 0; 0 0 lambda-4| = (lambda - 1)(lambda - 3)(lambda - 4)

lambda = 1: [0 0 0; 2 -2 0; 0 0 -3] => [1 -1 0; 0 0 1; 0 0 0] and x1 = (1, 1, 0)
lambda = 3: [2 0 0; 2 0 0; 0 0 -1] => [1 0 0; 0 0 1; 0 0 0] and x2 = (0, 1, 0)
lambda = 4: [3 0 0; 2 1 0; 0 0 0] => [1 0 0; 0 1 0; 0 0 0] and x3 = (0, 0, 1)

55. By definition of the Jacobian,
J(u, v) = |dx/du dx/dv; dy/du dy/dv| = |1/2 1/2; 1/2 -1/2| = -1/4 - 1/4 = -1/2.

57. By definition of the Jacobian, expanding the 3 x 3 determinant (whose last row is 2vw, 2uw, 2uv) gives
J(u, v, w) = 2uv(-1/4 - 1/4) = -uv.

59. Row reduction is generally preferred for matrices with few zeros. For a matrix with many zeros, it is often easier to expand along a row or column having many zeros.

61. The matrix of cofactors is given by [1 2; -1 0]. So, the adjoint is
adj [0 1; -2 1] = [1 -1; 2 0].

63. The determinant of the coefficient matrix is
|0.2 -0.1; 0.4 -0.5| = -0.06 != 0.
So, the system has a unique solution. Form the matrices A1 and A2 and find their determinants:
A1 = [0.07 -0.1; -0.01 -0.5], |A1| = -0.036
A2 = [0.2 0.07; 0.4 -0.01], |A2| = -0.03
So,
x = |A1|/|A| = -0.036/(-0.06) = 0.6
y = |A2|/|A| = -0.03/(-0.06) = 0.5.

65. The determinant of the coefficient matrix is
|2 3 3; 6 6 12; 12 9 -1| = 168 != 0.
So, the system has a unique solution. Using Cramer's Rule,
A1 = [3 3 3; 13 6 12; 2 9 -1], |A1| = 84
A2 = [2 3 3; 6 13 12; 12 2 -1], |A2| = -56
A3 = [2 3 3; 6 6 13; 12 9 2], |A3| = 168.
So,
x1 = |A1|/|A| = 84/168 = 1/2
x2 = |A2|/|A| = -56/168 = -1/3
x3 = |A3|/|A| = 168/168 = 1.
67. (a) 100a + 10b + c = 308.9
400a + 20b + c = 335.8
900a + 30b + c = 363.6
(b) The coefficient matrix is
A = [100 10 1; 400 20 1; 900 30 1] and |A| = -2000.
Also,
A1 = [308.9 10 1; 335.8 20 1; 363.6 30 1], |A1| = -9
A2 = [100 308.9 1; 400 335.8 1; 900 363.6 1], |A2| = -5110
A3 = [100 10 308.9; 400 20 335.8; 900 30 363.6], |A3| = -565,800.
So, a = -9/(-2000) = 0.0045, b = -5110/(-2000) = 2.555, and c = -565,800/(-2000) = 282.9.
(c) (Graph: y = 0.0045x^2 + 2.555x + 282.9 plotted with the data points.)
(d) The function fits the data exactly.

69. The formula for area yields
Area = +/-(1/2) |x1 y1 1; x2 y2 1; x3 y3 1| = +/-(1/2) |1 0 1; 5 0 1; 5 8 1| = +/-(1/2)(-8)(1 - 5) = 16.

71. Use the equation
|x y 1; x1 y1 1; x2 y2 1| = 0
to find the equation of the line:
|x y 1; -4 0 1; 4 4 1| = -4x + 8y - 16 = 0, or x - 2y = -4.

73. The equation of the plane is given by the equation
|x y z 1; 0 0 0 1; 1 0 3 1; 0 3 4 1| = 0.
Expanding by cofactors along the second row yields
|x y z; 1 0 3; 0 3 4| = 0,
or 9x + 4y - 3z = 0.

75. (a) False. See Theorem 3.11, page 163.
(b) False. See "Test for Collinear Points in the xy-Plane," page 165.
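The determinant area formula of Exercise 69 is easy to verify numerically. In the sketch below the vertices are those recovered from the garbled display above, so treat them as an assumption; the area value 16 matches the exercise either way:

```python
import numpy as np

# Area of a triangle via Exercise 69's formula:
# Area = +/-(1/2) det [[x1, y1, 1], [x2, y2, 1], [x3, y3, 1]].
def tri_area(p1, p2, p3):
    m = np.array([[p1[0], p1[1], 1.0],
                  [p2[0], p2[1], 1.0],
                  [p3[0], p3[1], 1.0]])
    return abs(np.linalg.det(m)) / 2.0

# Right triangle with legs 4 and 8 (vertices assumed from the text).
assert np.isclose(tri_area((1, 0), (5, 0), (5, 8)), 16.0)
```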
Cumulative Test for Chapters 1-3

1. Interchange the first equation and the third equation.
x1 + x2 + x3 = -3
2x1 - 3x2 + 2x3 = 9
4x1 + x2 - 3x3 = 11
Adding -2 times the first equation to the second equation produces a new second equation.
x1 + x2 + x3 = -3
-5x2 = 15
4x1 + x2 - 3x3 = 11
Adding -4 times the first equation to the third equation produces a new third equation.
x1 + x2 + x3 = -3
-5x2 = 15
-3x2 - 7x3 = 23
Dividing the second equation by -5 produces a new second equation.
x1 + x2 + x3 = -3
x2 = -3
-3x2 - 7x3 = 23
Adding 3 times the second equation to the third equation produces a new third equation.
x1 + x2 + x3 = -3
x2 = -3
-7x3 = 14
Dividing the third equation by -7 produces a new third equation.
x1 + x2 + x3 = -3
x2 = -3
x3 = -2
Using back-substitution, the answers are found to be x1 = 2, x2 = -3, and x3 = -2.

2. [0 1 -1 0 | 2; 1 0 2 -1 | 0; 1 2 0 -1 | 4] => [1 0 2 -1 | 0; 0 1 -1 0 | 2; 0 0 0 0 | 0]
Letting x3 = t and x4 = s:
x1 = s - 2t, x2 = 2 + t, x3 = t, x4 = s

3. [1 2 1 -2; 0 0 2 -4; -2 -4 1 -2] => [1 2 0 0; 0 0 1 -2; 0 0 0 0]
Letting x2 = s and x4 = t:
x1 = -2s, x2 = s, x3 = 2t, x4 = t

4. [1 2 -1 | 3; -1 -1 1 | 2; -1 1 1 | k] => [1 2 -1 | 3; 0 1 0 | 5; 0 3 0 | 3+k] => [1 2 -1 | 3; 0 1 0 | 5; 0 0 0 | -12+k]
k = 12 (for a consistent system)

5. BA = [12.50 9.00 21.50][200 300; 600 350; 250 400] = [13,275.00 15,500.00]
This product represents the total value of the three products sent to the two warehouses.
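The elimination in problem 1 can be confirmed with a direct solver; the system below is the reordered one reconstructed above (the right-hand sides are inferred from the elimination steps):

```python
import numpy as np

# Reordered system from Cumulative Test problem 1:
#   x1 + x2 + x3 = -3,  2x1 - 3x2 + 2x3 = 9,  4x1 + x2 - 3x3 = 11.
A = np.array([[1.0, 1.0, 1.0],
              [2.0, -3.0, 2.0],
              [4.0, 1.0, -3.0]])
b = np.array([-3.0, 9.0, 11.0])

x = np.linalg.solve(A, b)
assert np.allclose(x, [2.0, -3.0, -2.0])
```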
6. 2A - B = [-2 2; 4 6] - [x 2; y 5] = [1 0; 0 1] requires
-2 - x = 1 and 4 - y = 0,
so x = -3 and y = 4.

7. A^T A = [17 22 27; 22 29 36; 27 36 45]

8. (a) [-2 3; 4 6]^(-1) = -(1/24)[6 -3; -4 -2] = [-1/4 1/8; 1/6 1/12]
(b) [-2 3; 3 6]^(-1) = -(1/21)[6 -3; -3 -2] = [-2/7 1/7; 1/7 2/21]

9. [1 1 0; -3 6 5; 0 1 0]^(-1) = [1 0 -1; 0 0 1; 3/5 1/5 -9/5]

10. Row reduce and record the elementary matrices:
[2 -4; 1 0] => [1 0; 2 -4] => [1 0; 0 -4] => [1 0; 0 1]
So,
A = [0 1; 1 0][1 0; 2 1][1 0; 0 -4].
(The answer is not unique.)

11. Because the fourth row already has two zeros, choose it for cofactor expansion. An additional zero can be created by adding 4 times the first column to the fourth column. Because the first column of the resulting 3 x 3 determinant already has a zero, choose it for the next cofactor expansion; an additional zero can be created by adding -1 times the first row to the third row. The determinant reduces to -(38 - 4) = -34.

12. (a) |A| = 14
(b) |B| = -10
(c) |AB| = |A||B| = -140
(d) |A^(-1)| = 1/|A| = 1/14

13. (a) |A^T| = |A| = 7
(b) |A^(-1)| = 1/|A| = 1/7
(c) |3A| = 3^4|A| = 81(7) = 567
(d) |A^3| = |A|^3 = 7^3 = 343

14. The matrix of cofactors is
[-4 1 2; 10 3 -5; -7 -1 -2],
so the adjoint of A is its transpose,
adj(A) = [-4 10 -7; 1 3 -1; 2 -5 -2].
Because |A| = -11, the inverse of A is
A^(-1) = (1/|A|) adj(A) = -(1/11)[-4 10 -7; 1 3 -1; 2 -5 -2] = [4/11 -10/11 7/11; -1/11 -3/11 1/11; -2/11 5/11 2/11].

15. a[1; 0; 1] + b[1; 1; 0] + c[0; 1; 1] = [1; 2; 3]
The solution of this system is a = 1, b = 0, and c = 2. (The answer is not unique.)

16. a - b + c = 2
c = 1
4a + 2b + c = 6
The solution of this system is a = 7/6, b = 1/6, and c = 1, so
y = (7/6)x^2 + (1/6)x + 1.
(Graph: the parabola through the points (-1, 2), (0, 1), and (2, 6).)
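Problem 14 never displays A itself, only adj(A) and |A| = -11. Since adj(A) = |A| * A^(-1) for an invertible matrix, A can be recovered and the printed pieces cross-checked; this is a consistency sketch, not part of the manual:

```python
import numpy as np

# Adjoint and determinant as given in problem 14.
adj = np.array([[-4.0, 10.0, -7.0],
                [1.0, 3.0, -1.0],
                [2.0, -5.0, -2.0]])
det_A = -11.0

# Recover A from adj(A) = det(A) * inv(A)  =>  A = det(A) * inv(adj(A)).
A = det_A * np.linalg.inv(adj)

assert np.isclose(np.linalg.det(A), det_A)          # |A| really is -11
assert np.allclose(det_A * np.linalg.inv(A), adj)   # adj(A) = |A| * A^(-1)
```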
17. Find the equation using
|x y 1; x1 y1 1; x2 y2 1| = |x y 1; 1 4 1; 5 -2 1| = x(6) - y(-4) + 1(-22) = 6x + 4y - 22.
An equation of the line is 6x + 4y - 22 = 0, or 3x + 2y = 11.

18. Use the formula for area.
Area = +/-(1/2) |x1 y1 1; x2 y2 1; x3 y3 1| = +/-(1/2) |3 1 1; 7 1 1; 7 9 1| = +/-(1/2) |3 1 1; 4 0 0; 7 9 1| = +/-(1/2)(-4)(-8) = 16

19. |lambda*I - A| = |lambda-1 -4 -6; -1 lambda-2 -2; 1 2 lambda+4|
= (lambda - 1)(lambda^2 + 2*lambda - 8 + 4) + 1(-4*lambda - 16 + 12) + 1(8 + 6*lambda - 12)
= lambda^3 + lambda^2 - 6*lambda + 4 - 4*lambda - 4 + 6*lambda - 4
= lambda^3 + lambda^2 - 4*lambda - 4
Because
lambda^3 + lambda^2 - 4*lambda - 4 = lambda^2(lambda + 1) - 4(lambda + 1) = (lambda^2 - 4)(lambda + 1) = 0,
the eigenvalues are lambda1 = -2, lambda2 = -1, and lambda3 = 2.
For lambda1 = -2: [-3 -4 -6; -1 -4 -2; 1 2 2] => [1 0 2; 0 1 0; 0 0 0] and x1 = (-2, 0, 1)
For lambda2 = -1: [-2 -4 -6; -1 -3 -2; 1 2 3] => [1 0 5; 0 1 -1; 0 0 0] and x2 = (-5, 1, 1)
For lambda3 = 2: [1 -4 -6; -1 0 -2; 1 2 6] => [1 0 2; 0 1 2; 0 0 0] and x3 = (-2, -2, 1)

20. No. C could be singular. For example, with A = [0 0; 0 1], B = [0 1; 0 0], and C = [0 1; 0 0],
AC = [0 0; 0 1][0 1; 0 0] = [0 0; 0 0] = [0 1; 0 0][0 1; 0 0] = BC,
but A != B.

21. (B^T B)^T = B^T (B^T)^T = B^T B, so B^T B is symmetric.
22. Let B, C be inverses of A. Then
B = (CA)B = C(AB) = C.

23. (a) A is row equivalent to B if there exist elementary matrices E1, ..., Ek such that A = Ek ... E1 B.
(b) A row equivalent to B => A = Ek ... E1 B (E1, ..., Ek elementary).
B row equivalent to C => B = Fl ... F1 C (F1, ..., Fl elementary).
Then A = Ek ... E1 (Fl ... F1) C => A is row equivalent to C.
CHAPTER 3 Determinants

Section 3.1 The Determinant of a Matrix
Section 3.2 Evaluation of a Determinant Using Elementary Operations
Section 3.3 Properties of Determinants
Section 3.4 Introduction to Eigenvalues
Section 3.5 Applications of Determinants
Review Exercises
Project Solutions
CHAPTER 3 Determinants

Section 3.1 The Determinant of a Matrix

2. The determinant of a matrix of order 1 is the entry in the matrix. So, det[-3] = -3.

4. |-3 1; 5 2| = -3(2) - 5(1) = -11

6. |2 -2; 4 3| = 2(3) - 4(-2) = 14

8. |1/3 5; 4 -9| = (1/3)(-9) - 5(4) = -23

10. |2 -3; -6 9| = 2(9) - (-6)(-3) = 0

12. |lambda-2 0; 4 lambda-4| = (lambda - 2)(lambda - 4) - 4(0) = lambda^2 - 6*lambda + 8

14. (a) The minors of the matrix are as follows.
M11 = |1| = 1, M12 = |2| = 2, M21 = |0| = 0, M22 = |-1| = -1
(b) The cofactors of the matrix are as follows.
C11 = M11 = 1, C12 = -M12 = -2, C21 = -M21 = 0, C22 = M22 = -1
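The cofactor (Laplace) expansions used throughout this section can be sketched as a small recursive routine; this is an illustration of the technique, not the manual's own notation, using Exercise 6 and Exercise 22 as spot checks:

```python
def det(m):
    """Determinant by recursive cofactor expansion along the first row."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j.
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

assert det([[2, -2], [4, 3]]) == 14                      # Exercise 6
assert det([[-3, 0, 0], [7, 11, 0], [1, 2, 2]]) == -66   # Exercise 22
```

The recursion costs O(n!) operations, which is exactly why Section 3.2 develops row reduction as the practical alternative.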
16. (a) The minors of the matrix are shown below.
M11 = |3 1; -7 -8| = -17    M12 = |6 1; 4 -8| = -52    M13 = |6 3; 4 -7| = -54
M21 = |4 2; -7 -8| = -18    M22 = |-3 2; 4 -8| = 16    M23 = |-3 4; 4 -7| = 5
M31 = |4 2; 3 1| = -2       M32 = |-3 2; 6 1| = -15    M33 = |-3 4; 6 3| = -33
(b) The cofactors of the matrix are as follows.
C11 = -17, C12 = 52, C13 = -54
C21 = 18, C22 = 16, C23 = -5
C31 = -2, C32 = 15, C33 = -33

18. (a) You found the cofactors of the matrix in Exercise 16. Now find the determinant by expanding along the third row.
|-3 4 2; 6 3 1; 4 -7 -8| = 4C31 - 7C32 - 8C33 = 4(-2) - 7(15) - 8(-33) = 151
(b) Expand along the first column.
|-3 4 2; 6 3 1; 4 -7 -8| = -3C11 + 6C21 + 4C31 = -3(-17) + 6(18) + 4(-2) = 151
20. Expand along the third row because it has a zero.
|2 -1 3; 1 4 4; 1 0 2| = 1|-1 3; 4 4| - 0|2 3; 1 4| + 2|2 -1; 1 4| = 1(-16) + 2(9) = 2
22. Expand along the first row because it has two zeros.
|-3 0 0; 7 11 0; 1 2 2| = -3|11 0; 2 2| - 0|7 0; 1 2| + 0|7 11; 1 2| = -3(22) = -66

24. Expand along the first row.
|-0.4 0.4 0.3; 0.2 0.2 0.2; 0.3 0.2 0.2|
= -0.4|0.2 0.2; 0.2 0.2| - 0.4|0.2 0.2; 0.3 0.2| + 0.3|0.2 0.2; 0.3 0.2|
= -0.4(0) - 0.4(-0.02) + 0.3(-0.02) = 0.002

26. Expand along the first row.
|x y 1; -2 -2 1; 1 5 1| = x|-2 1; 5 1| - y|-2 1; 1 1| + 1|-2 -2; 1 5| = x(-7) - y(-3) + (-8) = -7x + 3y - 8
28. Expand along the third row because it consists entirely of zeros. So, the determinant of the matrix is zero.

30. Expand along the first row, because it has two zeros.
|3 0 7 0; 2 6 11 12; 4 1 -1 2; 1 5 2 10| = 3|6 11 12; 1 -1 2; 5 2 10| + 7|2 6 12; 4 1 2; 1 5 10|
The determinants of the 3 x 3 matrices are:
|6 11 12; 1 -1 2; 5 2 10| = 6(-10 - 4) - 11(10 - 10) + 12(2 + 5) = -84 + 84 = 0
|2 6 12; 4 1 2; 1 5 10| = 2(10 - 10) - 6(40 - 2) + 12(20 - 1) = 0
So, the determinant of the original matrix is 3(0) + 7(0) = 0.
32. Expand along the first row.
|w x y z; 10 15 -25 30; -30 20 -15 -10; 30 35 -25 -40|
= w|15 -25 30; 20 -15 -10; 35 -25 -40| - x|10 -25 30; -30 -15 -10; 30 -25 -40| + y|10 15 30; -30 20 -10; 30 35 -40| - z|10 15 -25; -30 20 -15; 30 35 -25|
The determinants of the 3 x 3 matrices are:
|15 -25 30; 20 -15 -10; 35 -25 -40| = 15(600 - 250) + 25(-800 + 350) + 30(-500 + 525) = 5250 - 11,250 + 750 = -5250
|10 -25 30; -30 -15 -10; 30 -25 -40| = 10(600 - 250) + 25(1200 + 300) + 30(750 + 450) = 3500 + 37,500 + 36,000 = 77,000
|10 15 30; -30 20 -10; 30 35 -40| = 10(-800 + 350) - 15(1200 + 300) + 30(-1050 - 600) = -4500 - 22,500 - 49,500 = -76,500
|10 15 -25; -30 20 -15; 30 35 -25| = 10(-500 + 525) - 15(750 + 450) - 25(-1050 - 600) = 250 - 18,000 + 41,250 = 23,500
So, the determinant is -5250w - 77,000x - 76,500y - 23,500z.

34. Expand along the second row because it consists entirely of zeros. So, the determinant of the matrix is zero.
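The symbolic expansion in Exercise 32 can be spot-checked by plugging numbers into the first row; the matrix below is the one reconstructed above, so treat its entries as an inference from the garbled display:

```python
import numpy as np

# Exercise 32 claims det = -5250*w - 77000*x - 76500*y - 23500*z.
def m(w, x, y, z):
    return np.array([[w, x, y, z],
                     [10.0, 15.0, -25.0, 30.0],
                     [-30.0, 20.0, -15.0, -10.0],
                     [30.0, 35.0, -25.0, -40.0]])

for w, x, y, z in [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (2, -1, 3, 5)]:
    expected = -5250 * w - 77000 * x - 76500 * y - 23500 * z
    assert np.isclose(np.linalg.det(m(w, x, y, z)), expected)
```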
36. Using a graphing utility or a computer software program, the 3 x 3 determinant evaluates to -0.175.

38. Using a graphing utility or a computer software program, the determinant evaluates to -45.

40. Using a graphing utility or a computer software program, the determinant evaluates to 420,246.

42. The determinant of a triangular matrix is the product of the elements on the main diagonal.
|5 0 0; 0 6 0; 0 0 -3| = 5(6)(-3) = -90
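The triangular-matrix rule used in Exercise 42 is trivial to confirm numerically (a quick check, not part of the manual):

```python
import numpy as np

# Determinant of a triangular (here diagonal) matrix equals the
# product of its main-diagonal entries.
A = np.diag([5.0, 6.0, -3.0])
assert np.isclose(np.linalg.det(A), 5 * 6 * (-3))   # -90
```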
44. The determinant of a triangular matrix is the product of the elements on the main diagonal. Here the product of the diagonal entries is 4(1/2)(3)(-2) = -12.

46. The determinant of a triangular matrix is the product of the elements on the main diagonal. Here the product of the diagonal entries is (7)(1/4)(2)(-1)(-2) = 7.

48. (a) False. The determinant of a triangular matrix is equal to the product, not the sum, of the entries on the main diagonal. For example, if A = [1 0; 0 2], then det(A) = 2 != 3 = 1 + 2.
(b) True. See Theorem 3.1 on page 126.
(c) True. This is because in a cofactor expansion each cofactor gets multiplied by the corresponding entry. If this entry is zero, the product would be zero independent of the value of the cofactor.

50. (x - 2)(x) - (-3)(-1) = 0
x^2 - 2x - 3 = 0
(x - 3)(x + 1) = 0
x = 3, -1

52. (x + 3)(x - 1) - (-4)(1) = 0
x^2 + 2x - 3 + 4 = 0
x^2 + 2x + 1 = 0
(x + 1)^2 = 0
x = -1

54. (x - 2)(x) - (-3)(-1) = 0
x^2 - 2x - 3 = 0
(x - 3)(x + 1) = 0
x = 3, -1

56. The determinant is
(lambda - 1)(lambda - 3) - 4(1) = lambda^2 - 4*lambda + 3 - 4 = lambda^2 - 4*lambda - 1.
The determinant is zero when lambda^2 - 4*lambda - 1 = 0. Use the Quadratic Formula to find lambda:
lambda = (4 +/- sqrt(16 + 4))/2 = (4 +/- 2*sqrt(5))/2 = 2 +/- sqrt(5)

58. Expanding gives
lambda(lambda^2 - 2*lambda - 6) + 1(0 - 2*lambda) = lambda^3 - 2*lambda^2 - 8*lambda = lambda(lambda^2 - 2*lambda - 8) = lambda(lambda - 4)(lambda + 2).
The determinant is zero when lambda(lambda - 4)(lambda + 2) = 0. So, lambda = 0, 4, -2.

60. |3x^2 -3y^2; 1 1| = (3x^2)(1) - 1(-3y^2) = 3x^2 + 3y^2

62. |e^(-x) x e^(-x); -e^(-x) (1-x)e^(-x)| = (e^(-x))(1 - x)e^(-x) - (-e^(-x))(x e^(-x)) = (1 - x)e^(-2x) + x e^(-2x) = (1 - x + x)e^(-2x) = e^(-2x)

64. |x x ln x; 1 1 + ln x| = x(1 + ln x) - 1(x ln x) = x + x ln x - x ln x = x

66. The system of linear equations will have a unique solution if and only if the coefficient matrix is invertible. Using the formula for the inverse of a 2 x 2 matrix, this is equivalent to ad - bc != 0. So,
ad - bc = |a b; c d| != 0
is the required condition for the system to have a unique solution.
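The 2 x 2 condition from Exercise 66 can be packaged as a one-line test; the sample values come from determinants computed elsewhere in this manual:

```python
# A 2x2 system ax + by = e, cx + dy = f has a unique solution
# exactly when the coefficient determinant ad - bc is nonzero.
def has_unique_solution(a, b, c, d):
    return a * d - b * c != 0

assert has_unique_solution(5, 4, -1, 1)       # determinant 9 (Review Ex. 37)
assert not has_unique_solution(2, -3, -6, 9)  # determinant 0 (Exercise 10)
```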
68. Evaluating the left side yields
|w cx; y cz| = cwz - cxy.
Evaluating the right side yields
c|w x; y z| = c(wz - xy) = cwz - cxy.
So, the two determinants are equal.

70. Evaluating the left side yields
|w x; cw cx| = cwx - cwx = 0.

72. Evaluating the left side yields
|a+b a a; a a+b a; a a a+b|
= (a + b)|a+b a; a a+b| - a|a a; a a+b| + a|a a+b; a a|
= (a + b)[(a + b)^2 - a^2] - a[a(a + b) - a^2] + a[a^2 - a(a + b)]
= (a + b)(2ab + b^2) - a(ab) + a(-ab)
= 2a^2 b + 3ab^2 + b^3 - a^2 b - a^2 b
= b^3 + 3ab^2 = b^2(3a + b).
74. Expand the left side of the equation along the first row.
|1 1 1; a b c; a^3 b^3 c^3| = 1|b c; b^3 c^3| - 1|a c; a^3 c^3| + 1|a b; a^3 b^3|
= bc^3 - b^3 c - ac^3 + a^3 c + ab^3 - a^3 b
= b(c^3 - a^3) + b^3(a - c) + ac(a^2 - c^2)
= (c - a)[bc^2 + abc + ba^2 - b^3 - a^2 c - ac^2]
= (c - a)[c^2(b - a) + ac(b - a) + b(a - b)(a + b)]
= (c - a)(b - a)[c^2 + ac - ab - b^2]
= (c - a)(b - a)[(c - b)(c + b) + a(c - b)]
= (c - a)(b - a)(c - b)(c + b + a)
= (a - b)(b - c)(c - a)(a + b + c)

76. If you expand along the row of zeros, you see that the determinant is zero.
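Exercise 74's factorization can be spot-checked numerically with exact integer arithmetic (a quick illustration, not part of the manual):

```python
# 3x3 determinant by direct cofactor expansion along the first row.
def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

# |1 1 1; a b c; a^3 b^3 c^3| = (a-b)(b-c)(c-a)(a+b+c)
for a, b, c in [(1, 2, 3), (2, -1, 4), (0, 5, -3)]:
    lhs = det3([[1, 1, 1], [a, b, c], [a**3, b**3, c**3]])
    rhs = (a - b) * (b - c) * (c - a) * (a + b + c)
    assert lhs == rhs
```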
Section 3.2 Evaluation of a Determinant Using Elementary Operations

2. Because the second row is a multiple of the first row, the determinant is zero.

4. Because the first and third rows are the same, the determinant is zero.

6. Because the first and third rows are interchanged, the sign of the determinant is changed.

8. Because 2 has been factored out of the first column, the first determinant is 2 times the second one.

10. Because 2 has been factored out of the second column and 3 factored out of the third column, the first determinant is 6 times the second one.

12. Because 6 has been factored out of each row, the first determinant is 64 times the second one.

14. Because a multiple of the first row of the matrix on the left was added to the second row to produce the matrix on the right, the determinants are equal.

16. Because a multiple of the second column of the matrix on the left was added to the third column to produce the matrix on the right, the determinants are equal.
18. Because the second and third rows are interchanged, the sign of the determinant is changed.

20. Because the sixth column is a multiple of the second column, the determinant is zero.

22. Expand by cofactors along the second row. After row reduction the value is 2(1 - 2) = -2. A graphing utility or a computer software program gives the same determinant, -2.

24. Row reduction produces an entire row of zeros, so the determinant is 0. A graphing utility or a computer software program gives the same determinant, 0.

26. Using elementary row operations to reduce to triangular form, the determinant is 1(-3)(2) = -6.

28. After interchanging rows (which changes the sign) and reducing, the determinant is -26.

30. Reducing to triangular form gives 3(-5)(8) = -120.

32. Row reduction produces a row of zeros, so the determinant is 0.

34. Factoring common multiples out of the rows and then expanding by cofactors gives (-110)(27 + 7) = -3740.

36. Creating zeros in the first column and expanding by cofactors repeatedly gives 7(15 + 1048) = 7441.
38. Use elementary row operations to introduce zeros, factoring out common multiples as you go. The 5 x 5 determinant reduces to
-10|0 1 -1; 4 7 -8; -11 6 7| = -10[(-1)(28 - 88) - 1(24 + 77)] = 410.
40. (a) False. Adding a multiple of one row to another does not change the value of the determinant.
(b) True. See page 135.
(c) True. In this case you can transform the matrix into one with a row of zeros, which has zero determinant, as can be seen by expanding by cofactors along that row. You achieve this transformation by adding a multiple of one row to another, which does not change the determinant.

42. |1 0 0; 0 1 0; 0 0 k| = k|1 0 0; 0 1 0; 0 0 1| = k

44. |1 0 0; 0 1 0; 0 k 1| = |1 0 0; 0 1 0; 0 0 1| = 1, because adding -k times the second row to the third row does not change the determinant.

46. Interchanging two rows of the identity matrix changes the sign of the determinant, so the determinant is -|1 0 0; 0 1 0; 0 0 1| = -1.

48. Row reduction followed by cofactor expansion gives
|1+a 1 1; 1 1+b 1; 1 1 1+c| = ab + ac + bc + abc = abc(ab + ac + bc + abc)/(abc) = abc(1 + 1/a + 1/b + 1/c).

50. |sec t tan t; tan t sec t| = sec^2 t - tan^2 t = 1

52. |sec t 1; 1 sec t| = (sec t)(sec t) - 1(1) = sec^2 t - 1 = tan^2 t
54. Suppose B is obtained from A by adding a multiple of a row of A to another row of A. More specifically, suppose c times the jth row of A is added to the ith row of A. Expand det B along this ith row:
det B = (a_i1 + c a_j1)C_i1 + ... + (a_in + c a_jn)C_in = [a_i1 C_i1 + ... + a_in C_in] + [c a_j1 C_i1 + ... + c a_jn C_in]
The first bracketed expression is det A, so prove that the second bracketed expression is zero. Use mathematical induction. For n = 2 (assuming i = 2 and j = 1),
c a11 C21 + c a12 C22 = det [a11 a12; c a11 c a12] = 0 (because row 2 is a multiple of row 1).
Assuming the expression is true for n - 1,
c a_j1 C_i1 + ... + c a_jn C_in = 0
by expanding along any row different from i and j and applying the induction hypothesis.

56. Use the information given in Table 3.1 on page 138. Cofactor expansion would cost
(3,628,799)(0.001) + (6,235,300)(0.003) = $22,334.70.
Row reduction would cost much less:
(288)(0.001) + (339)(0.003) = $1.30.
Section 3.3 Properties of Determinants

2. (a) |A| = |1 2; 2 4| = 0
(b) |B| = |-1 2; 3 0| = -6
(c) AB = [1 2; 2 4][-1 2; 3 0] = [5 2; 10 4]
(d) |AB| = |5 2; 10 4| = 0
Notice that |A||B| = 0(-6) = 0 = |AB|.

4. (a) |A| = |2 0 1; 1 -1 2; 3 1 0| = 0
(b) |B| = |2 -1 4; 0 1 3; 3 -2 1| = -7
(c) AB = [2 0 1; 1 -1 2; 3 1 0][2 -1 4; 0 1 3; 3 -2 1] = [7 -4 9; 8 -6 3; 6 -2 15]
(d) |AB| = |7 -4 9; 8 -6 3; 6 -2 15| = 0
Notice that |A||B| = 0(-7) = 0 = |AB|.
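The multiplicative property |AB| = |A||B| that organizes this section is easy to confirm numerically, here with the matrices of Exercise 4:

```python
import numpy as np

# det(AB) = det(A) det(B), illustrated with Exercise 4 of Section 3.3.
A = np.array([[2.0, 0.0, 1.0],
              [1.0, -1.0, 2.0],
              [3.0, 1.0, 0.0]])
B = np.array([[2.0, -1.0, 4.0],
              [0.0, 1.0, 3.0],
              [3.0, -2.0, 1.0]])

assert np.isclose(np.linalg.det(A), 0.0)
assert np.isclose(np.linalg.det(B), -7.0)
assert np.isclose(np.linalg.det(A @ B),
                  np.linalg.det(A) * np.linalg.det(B))
```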
6. (a) |A| = |3 2 4 0; 1 -1 2 1; 0 0 3 1; -1 1 1 0| = 0
(Row reduction produces a row of zeros, because the sum of rows two and four equals row three.)
(b) |B| = |4 2 -1 0; 1 1 2 -1; 0 0 2 1; -1 0 0 0| = 9
(c) AB = [14 8 9 2; 2 1 1 3; -1 0 6 3; -3 -1 5 0]
(d) |AB| = 0 (row reduction of AB also produces a row of zeros)
Notice that |A||B| = 0 * 9 = 0 = |AB|.
8. |A| = |5 15; 10 -20| = 5^2|1 3; 2 -4| = 5^2(-10) = -250

10. Factor 4 out of each row:
|A| = |4 16 0; 12 -8 8; 16 20 -4| = 4^3|1 4 0; 3 -2 2; 4 5 -1| = 4^3|1 4 0; 11 8 0; 4 5 -1| = (-64)(-36) = 2304

12. (a) |A| = |0 1 2; 1 -1 0; 2 1 1| = 5
(b) |B| = |0 1 -1; 2 1 1; 0 1 1| = -4
(c) A + B = [0 2 1; 3 0 1; 2 2 2] and |A + B| = -2
Notice that |A| + |B| = 5 + (-4) = 1 != |A + B|.

14. (a) |A| = |1 -2; 1 0| = 2
(b) |B| = |3 -2; 0 0| = 0
(c) A + B = [1 -2; 1 0] + [3 -2; 0 0] = [4 -4; 1 0] and |A + B| = 4
Notice that |A| + |B| = 2 + 0 = 2 != |A + B|.

16. First obtain |A| = -74.
(a) |A^T| = |A| = -74
(b) |A^2| = |A||A| = (-74)^2 = 5476
(c) |A A^T| = |A||A^T| = (-74)(-74) = 5476
(d) |2A| = 2^2|A| = 4(-74) = -296
(e) |A^(-1)| = 1/|A| = -1/74
18. First obtain
|A| = |1 5 4; 0 -6 2; 0 0 -3| = 1(-6)(-3) = 18.
(a) |A^T| = |A| = 18
(b) |A^2| = |A||A| = 18^2 = 324
(c) |A A^T| = |A||A^T| = (18)(18) = 324
(d) |2A| = 2^3|A| = 8(18) = 144
(e) |A^(-1)| = 1/|A| = 1/18
20. (a) |A| = |-2 4; 6 8| = -16 - 24 = -40
(b) |A^T| = |A| = -40
(c) |A^2| = |A|^2 = 1600
(d) |2A| = 2^2|A| = -160
(e) |A^(-1)| = 1/|A| = -1/40

22. (a) |A| = -312
(b) |A^T| = |A| = -312
(c) |A^2| = |A|^2 = 97,344
(d) |2A| = 2^4|A| = -4992
(e) |A^(-1)| = 1/|A| = -1/312

24. (a) |AB| = |A||B| = 10(12) = 120
(b) |A^4| = |A|^4 = 10^4 = 10,000
(c) |2B| = 2^3|B| = 2^3(12) = 96
(d) |(AB)^T| = |AB| = 120
(e) |A^(-1)| = 1/|A| = 1/10

26. (a) |BA| = |B||A| = 5(-2) = -10
(b) |B^4| = |B|^4 = 5^4 = 625
(c) |2A| = 2^3|A| = 2^3(-2) = -16
(d) |(AB)^T| = |AB| = |A||B| = -10
(e) |B^(-1)| = 1/|B| = 1/5

28. Because the determinant of the matrix is -21 != 0, the matrix is nonsingular.

30. Because the determinant of the matrix is 30 != 0, the matrix is nonsingular.

32. Because the determinant of the matrix is 0, the matrix is singular.

34. Because the determinant of the matrix is 0.015 != 0, the matrix is nonsingular.

36. A^(-1) = (1/6)[2 2; -2 1] = [1/3 1/3; -1/3 1/6], so
|A^(-1)| = (1/3)(1/6) - (1/3)(-1/3) = 1/18 + 1/9 = 1/6.
Notice that |A| = 6, so |A^(-1)| = 1/|A| = 1/6.

38. Evaluating the determinant of A^(-1) directly gives |A^(-1)| = -1/2. Notice that |A| = -2, so |A^(-1)| = 1/|A| = -1/2.
40. Expanding the determinant of the 4 x 4 matrix A^(-1) by cofactors gives |A^(-1)| = 1/2. Notice that |A| = 2, so |A^(-1)| = 1/|A| = 1/2.
42. The coefficient matrix of the system is
[1 1 -1; 2 -1 1; 3 -2 2].
Because the determinant of this matrix is zero, the system does not have a unique solution.

44. The coefficient matrix of the system is
[1 -1 -1 -1; 1 1 -1 -1; 1 1 1 -1; 1 1 1 1].
Because the determinant of this matrix is 8, and not zero, the system has a unique solution.

46. Find the values of k that make A singular by setting |A| = 0.
|A| = |k-1 2; 2 k+2| = (k - 1)(k + 2) - 4 = k^2 + k - 6 = (k + 3)(k - 2) = 0,
which implies that k = -3 or k = 2.

48. Find the values of k that make A singular by setting |A| = 0. Using the second column in the cofactor expansion, you have
|A| = |1 k 2; -2 0 -k; 3 1 -4| = -k|-2 -k; 3 -4| - 1|1 2; -2 -k| = -k(8 + 3k) - (-k + 4) = -3k^2 - 7k - 4 = -(3k + 4)(k + 1).
So, |A| = 0 implies that k = -4/3 or k = -1.

50. Given that AB is singular, |AB| = |A||B| = 0. So, either |A| or |B| must be zero, which implies that either A or B is singular.
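The singular values of k found in Exercise 48 can be confirmed numerically; the matrix A(k) is the one recovered from the cofactor expansion above:

```python
import numpy as np

# Exercise 48: A(k) is singular exactly when k = -4/3 or k = -1.
def A(k):
    return np.array([[1.0, k, 2.0],
                     [-2.0, 0.0, -k],
                     [3.0, 1.0, -4.0]])

for k in (-4.0 / 3.0, -1.0):
    assert np.isclose(np.linalg.det(A(k)), 0.0)

# A value of k not on the list gives a nonsingular matrix.
assert not np.isclose(np.linalg.det(A(2.0)), 0.0)
```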
52. Expand the determinant on the left:
|a+b a a; a a+b a; a a a+b|
= (a + b)[(a + b)^2 - a^2] - a[(a + b)a - a^2] + a[a^2 - a(a + b)]
= (a + b)(2ab + b^2) - a(ab) + a(-ab)
= 2a^2 b + ab^2 + 2ab^2 + b^3 - 2a^2 b
= b^2(3a + b).
54. Because the rows of A all add up to zero, you have 2
−1 −1
A = −3
1
0 −2
2
2 = −3
−1 0 1 0 = 0.
0 −2 0
2
56. Calculating the determinant of A by expanding along the first row is equivalent to calculating the determinant of Aᵀ by expanding along the first column. Because the determinant of a matrix can be found by expanding along any row or column, you see that |A| = |Aᵀ|.

58. |A¹⁰| = |A|¹⁰ = 0 ⇒ |A| = 0 ⇒ A is singular.

60. (a) False. Let A = [[1, 0],[0, 1]] and B = [[−1, 0],[0, −1]]. Then det(A) = det(B) = 1 ≠ 0 = det(A + B).
(b) True. See Theorem 3.9 and Exercise 56 of this section for the proof.
(c) True. See page 147 for equivalent conditions for nonsingular matrices and Theorem 3.7 on page 145.

62. If the order of A is odd, then (−1)ⁿ = −1, and the result of Exercise 61 implies that |A| = −|A|, or |A| = 0.

64. Because A⁻¹ = [[1/2, −1/2],[−1/2, −1/2]] and Aᵀ = [[1, −1],[−1, −1]], A⁻¹ ≠ Aᵀ and this matrix is not orthogonal.

66. Because A⁻¹ = Aᵀ, this matrix is orthogonal.

68. Because A⁻¹ = Aᵀ, this matrix is orthogonal.

70. A = [[3/5, 0, −4/5],[0, 1, 0],[4/5, 0, 3/5]]
(a), (b) Using a graphing calculator or a computer software program, A⁻¹ = [[3/5, 0, 4/5],[0, 1, 0],[−4/5, 0, 3/5]] = Aᵀ.
(c) As shown in Exercise 69, if A is an orthogonal matrix, then |A| = ±1. For this given A, you have |A| = 1. Because A⁻¹ = Aᵀ, A is an orthogonal matrix.

72. Let A be an idempotent matrix, and let x = det(A). Then A² = A, so det(A²) = det(A). You also have det(A²) = det(A · A) = det(A) · det(A) = det(A)². So x = det(A) is a real number such that x² = x. Solving the last equation for x, you obtain x = 0 or x = 1.

74. det[[A11, A12],[0, A22]] = (det A11)(det A22)
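The orthogonality test used in the exercises above (A⁻¹ = Aᵀ, equivalently AAᵀ = I) is easy to check numerically. A minimal sketch in Python, using the exercise-70 matrix and exact rational arithmetic (the helper name is ours, not the text's):

```python
from fractions import Fraction as F

def times_transpose(A):
    """Return A @ A^T for a square matrix given as a list of rows."""
    n = len(A)
    return [[sum(A[i][k] * A[j][k] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Exercise 70's matrix: A A^T equals the identity, so A is orthogonal.
A = [[F(3, 5), 0, F(-4, 5)],
     [0,       1, 0],
     [F(4, 5), 0, F(3, 5)]]
AAT = times_transpose(A)
```

Using `Fraction` avoids the floating-point round-off that would make an exact comparison against the identity unreliable.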
Section 3.4 Introduction to Eigenvalues

2. Ax1 = [[4, 3],[1, 2]][3, 1]ᵀ = [15, 5]ᵀ = 5[3, 1]ᵀ = λ1x1
Ax2 = [[4, 3],[1, 2]][−1, 1]ᵀ = [−1, 1]ᵀ = λ2x2

4. Ax1 = [[1, −2, 1],[0, 1, 4],[0, 0, 2]][1, 0, 0]ᵀ = [1, 0, 0]ᵀ = λ1x1
Ax2 = [[1, −2, 1],[0, 1, 4],[0, 0, 2]][−7, 4, 1]ᵀ = [−14, 8, 2]ᵀ = 2[−7, 4, 1]ᵀ = λ2x2

6. (a) The characteristic equation of A is
|λI − A| = |λ−2 −2; −2 λ−2| = (λ − 2)(λ − 2) − 4 = λ² − 4λ.
(b) Solve the characteristic equation: λ² − 4λ = λ(λ − 4) = 0. The eigenvalues are λ1 = 0 and λ2 = 4.
(c) For λ1 = 0: [[−2, −2],[−2, −2]] ⇒ [[1, 1],[0, 0]]. The corresponding eigenvectors are the nonzero multiples of [1, −1]ᵀ.
For λ2 = 4: [[2, −2],[−2, 2]] ⇒ [[1, −1],[0, 0]]. The corresponding eigenvectors are the nonzero multiples of [1, 1]ᵀ.
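For any 2 × 2 matrix the characteristic polynomial is λ² − tr(A)λ + det(A), so the eigenvalues follow from the quadratic formula. A quick sketch (a hypothetical helper, not part of the text) that reproduces exercise 6:

```python
import math

def eigen2x2(a, b, c, d):
    """Real eigenvalues of [[a, b], [c, d]] from lambda^2 - tr*lambda + det = 0."""
    tr = a + d
    det = a * d - b * c
    disc = math.sqrt(tr * tr - 4 * det)  # assumes real eigenvalues
    return (tr - disc) / 2, (tr + disc) / 2

lam1, lam2 = eigen2x2(2, 2, 2, 2)  # exercise 6: eigenvalues 0 and 4
```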
8. (a) The characteristic equation of A is
|λI − A| = |λ−2 −5; −4 λ−3| = (λ − 2)(λ − 3) − 20 = λ² − 5λ − 14.
(b) Solve the characteristic equation: λ² − 5λ − 14 = (λ − 7)(λ + 2) = 0. The eigenvalues are λ1 = 7 and λ2 = −2.
(c) For λ1 = 7: [[5, −5],[−4, 4]] ⇒ [[1, −1],[0, 0]]. The corresponding eigenvectors are the nonzero multiples of [1, 1]ᵀ.
For λ2 = −2: [[−4, −5],[−4, −5]] ⇒ [[−4, −5],[0, 0]]. The corresponding eigenvectors are the nonzero multiples of [5, −4]ᵀ.

10. (a) The characteristic equation of A is
|λI − A| = |λ−3 1; −5 λ+3| = (λ − 3)(λ + 3) + 5 = λ² − 4.
(b) Solve the characteristic equation: λ² − 4 = (λ − 2)(λ + 2) = 0. The eigenvalues are λ1 = 2 and λ2 = −2.
(c) For λ1 = 2: [[−1, 1],[−5, 5]] ⇒ [[−1, 1],[0, 0]]. The corresponding eigenvectors are the nonzero multiples of [1, 1]ᵀ.
For λ2 = −2: [[−5, 1],[−5, 1]] ⇒ [[−5, 1],[0, 0]]. The corresponding eigenvectors are the nonzero multiples of [1, 5]ᵀ.
12. (a) The characteristic equation of A is
|λI − A| = |λ−2 0 −1; 0 λ−3 −4; 0 0 λ−1| = (λ − 2)(λ − 3)(λ − 1).
(b) Solve the characteristic equation: (λ − 2)(λ − 3)(λ − 1) = 0. The eigenvalues are λ1 = 2, λ2 = 3, and λ3 = 1.
(c) For λ1 = 2: [[0, 0, −1],[0, −1, −4],[0, 0, 1]] ⇒ [[0, 1, 0],[0, 0, 1],[0, 0, 0]]. The corresponding eigenvectors are the nonzero multiples of [1, 0, 0]ᵀ.
For λ2 = 3: [[1, 0, −1],[0, 0, −4],[0, 0, 2]] ⇒ [[1, 0, 0],[0, 0, 1],[0, 0, 0]]. The corresponding eigenvectors are the nonzero multiples of [0, 1, 0]ᵀ.
For λ3 = 1: [[−1, 0, −1],[0, −2, −4],[0, 0, 0]] ⇒ [[1, 0, 1],[0, 1, 2],[0, 0, 0]]. The corresponding eigenvectors are the nonzero multiples of [−1, −2, 1]ᵀ.
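A cofactor-expansion determinant makes it easy to confirm that each claimed eigenvalue is a root of det(λI − A). A sketch for the exercise-12 matrix (helper names are ours):

```python
def det3(m):
    """3x3 determinant by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

A = [[2, 0, 1], [0, 3, 4], [0, 0, 1]]  # exercise 12 (triangular)

def char_poly(lam):
    """Evaluate det(lam*I - A)."""
    m = [[(lam - A[r][c]) if r == c else -A[r][c] for c in range(3)]
         for r in range(3)]
    return det3(m)

roots = [lam for lam in (2, 3, 1) if char_poly(lam) == 0]
```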
14. (a) |λI − A| = |λ−1 0 −1; 0 λ+1 0; −2 −1 λ+1| = (λ + 1)(λ² − 3) = λ³ + λ² − 3λ − 3
The characteristic equation is λ³ + λ² − 3λ − 3 = 0.
(b) Solve the characteristic equation: (λ + 1)(λ² − 3) = 0. The eigenvalues are λ1 = −1, λ2 = −√3, and λ3 = √3.
(c) For λ1 = −1: [[−2, 0, −1],[0, 0, 0],[−2, −1, 0]] ⇒ [[1, 0, 1/2],[0, 1, −1],[0, 0, 0]]. The corresponding eigenvectors are the nonzero multiples of [−1, 2, 2]ᵀ.
For λ2 = −√3: [[−√3−1, 0, −1],[0, −√3+1, 0],[−2, −1, −√3+1]] ⇒ [[1, 0, (√3−1)/2],[0, 1, 0],[0, 0, 0]]. The corresponding eigenvectors are the nonzero multiples of [−(√3−1)/2, 0, 1]ᵀ.
For λ3 = √3: [[√3−1, 0, −1],[0, √3+1, 0],[−2, −1, √3+1]] ⇒ [[1, 0, −(√3+1)/2],[0, 1, 0],[0, 0, 0]]. The corresponding eigenvectors are the nonzero multiples of [(√3+1)/2, 0, 1]ᵀ.
⎡ 3 + 1⎤ ⎢ ⎥ 2 ⎥ The corresponding eigenvectors are the nonzero multiples of ⎢ . ⎢ 0 ⎥ ⎢ ⎥ ⎢⎣ 1 ⎥⎦ 16. Using a graphing utility or a computer software program for
3⎤ ⎡ 4 A = ⎢ ⎥ 3 2 − − ⎣ ⎦ gives the eigenvalues {1 1}. So, λ1 = λ 2 = 1. ⎡−3 −3⎤ ⎡ 1 1⎤ ⎡ 1⎤ ⎥ ⇒ ⎢ ⎥ ⇒ x = ⎢ ⎥ 3 3 0 0 ⎣ ⎦ ⎣ ⎦ ⎣−1⎦
λ = 1: ⎢
18. Using a graphing utility or a computer software program for A = [[4, 0, 0],[0, 0, −3],[0, −2, 1]] gives the eigenvalues {3, −2, 4}. So λ1 = 3, λ2 = −2, and λ3 = 4.
λ1 = 3: [[−1, 0, 0],[0, 3, 3],[0, 2, 2]] ⇒ [[1, 0, 0],[0, 1, 1],[0, 0, 0]] ⇒ x1 = [0, 1, −1]ᵀ
λ2 = −2: [[−6, 0, 0],[0, −2, 3],[0, 2, −3]] ⇒ [[1, 0, 0],[0, 1, −1.5],[0, 0, 0]] ⇒ x2 = [0, 3, 2]ᵀ
λ3 = 4: [[0, 0, 0],[0, 4, 3],[0, 2, 3]] ⇒ [[0, 1, 0],[0, 0, 1],[0, 0, 0]] ⇒ x3 = [1, 0, 0]ᵀ
20. Using a graphing utility or a computer software program for A = [[1, 1, 0],[0, −2, 1],[0, −2, 2]] gives the eigenvalues {−√2, √2, 1}. So λ1 = −√2, λ2 = √2, and λ3 = 1.
λ1 = −√2: row-reducing λ1I − A gives x1 = [−√2/2, (2+√2)/2, 1]ᵀ.
λ2 = √2: row-reducing λ2I − A gives x2 = [√2/2, (2−√2)/2, 1]ᵀ.
λ3 = 1: [[0, −1, 0],[0, 3, −1],[0, 2, −1]] ⇒ [[0, 1, 0],[0, 0, 1],[0, 0, 0]] ⇒ x3 = [1, 0, 0]ᵀ
22. Using a graphing utility or a computer software program for A = [[1, 0, 2, 3],[0, 2, 0, 0],[0, 0, 1, 3],[0, −1, 3, 1]] gives the eigenvalues {1, 4, −2, 2}. So λ1 = 1, λ2 = 4, λ3 = −2, and λ4 = 2.
λ1 = 1: row reduction gives x1 = [1, 0, 0, 0]ᵀ.
λ2 = 4: row reduction gives x2 = [5, 0, 3, 3]ᵀ.
λ3 = −2: row reduction gives x3 = [1, 0, 3, −3]ᵀ.
λ4 = 2: row reduction gives x4 = [9, 8, 3, 1]ᵀ.
24. Using a graphing utility or a computer software program for A = [[2, 0, −1, −1],[0, 2, 1, 0],[0, −1, 0, 0],[0, 0, 2, 0]] gives the eigenvalues {0, 1, 2}. So λ1 = 0, λ2 = 1, and λ3 = 2.
λ1 = 0: row reduction gives x1 = [1, 0, 0, 2]ᵀ.
λ2 = 1: row reduction gives x2 = [3, −1, 1, 2]ᵀ.
λ3 = 2: row reduction gives x3 = [1, 0, 0, 0]ᵀ.
26. (a) True. The characteristic equation is
|λI − A| = |λ−2 1; −1 λ| = λ(λ − 2) + 1 = λ² − 2λ + 1 = (λ − 1)²,
which implies that λ1 = λ2 = 1.
(b) True. The characteristic equation is
|λI − A| = |λ−4 2; 1 λ| = λ(λ − 4) − 2 = λ² − 4λ − 2,
which implies that λ1 = 2 + √6 and λ2 = 2 − √6.
Section 3.5 Applications of Determinants

2. The matrix of cofactors is [[4, −0],[−0, −1]] = [[4, 0],[0, −1]]. So the adjoint of A is
adj(A) = [[4, 0],[0, −1]]ᵀ = [[4, 0],[0, −1]].
Because |A| = −4, the inverse of A is
A⁻¹ = (1/|A|) adj(A) = −(1/4)[[4, 0],[0, −1]] = [[−1, 0],[0, 1/4]].
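The 2 × 2 adjoint formula A⁻¹ = adj(A)/|A| can be sketched directly; exercise 2's diagonal matrix makes a convenient check (the helper name is ours, not the text's):

```python
from fractions import Fraction as F

def inverse_by_adjoint(a, b, c, d):
    """Inverse of [[a, b], [c, d]] via adj(A)/det(A); adj is [[d, -b], [-c, a]]."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular")
    return [[F(d, det), F(-b, det)],
            [F(-c, det), F(a, det)]]

Ainv = inverse_by_adjoint(-1, 0, 0, 4)  # exercise 2's A, with |A| = -4
```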
4. The matrix of cofactors is [[4, −2, −2],[2, −4, 2],[−5, 1, 1]]. So the adjoint of A is
adj(A) = [[4, 2, −5],[−2, −4, 1],[−2, 2, 1]].
Because |A| = −6, the inverse of A is
A⁻¹ = (1/|A|) adj(A) = [[−2/3, −1/3, 5/6],[1/3, 2/3, −1/6],[1/3, −1/3, −1/6]].

6. The matrix of cofactors is [[−1, −1, 1],[1, 1, −1],[1, 1, −1]]. So the adjoint of A is
adj(A) = [[−1, 1, 1],[−1, 1, 1],[1, −1, −1]].
Because det(A) = 0, the matrix A has no inverse.

8. The matrix of cofactors is [[−1, −1, −1, 2],[−1, −1, 2, −1],[−1, 2, −1, −1],[2, −1, −1, −1]]. So the adjoint of A is
adj(A) = [[−1, −1, −1, 2],[−1, −1, 2, −1],[−1, 2, −1, −1],[2, −1, −1, −1]].
Because det(A) = −3, the inverse of A is
A⁻¹ = (1/|A|) adj(A) = [[1/3, 1/3, 1/3, −2/3],[1/3, 1/3, −2/3, 1/3],[1/3, −2/3, 1/3, 1/3],[−2/3, 1/3, 1/3, 1/3]].
10. Following the proof of Theorem 3.10, you have A adj(A) = |A| I. Now, if A is not invertible, then |A| = 0, and A adj(A) is the zero matrix.

12. You have
adj(adj(A)) = adj(|A| A⁻¹) = det(|A| A⁻¹)(|A| A⁻¹)⁻¹ = |A|ⁿ |A|⁻¹ (1/|A|)A = |A|ⁿ⁻² A.

14. A = [[−1, 3],[1, 2]] ⇒ adj(A) = [[2, −3],[−1, −1]] ⇒
adj(adj(A)) = [[−1, 3],[1, 2]] = |A|⁰ [[−1, 3],[1, 2]].
So adj(adj(A)) = |A|ⁿ⁻² A.

16. Illustrate the formula adj(A⁻¹) = [adj(A)]⁻¹ in the case
A = [[1, 3],[1, 2]] ⇒ A⁻¹ = [[−2, 3],[1, −1]] and adj(A⁻¹) = [[−1, −3],[−1, −2]].
On the other hand, adj(A) = [[2, −3],[−1, 1]] and (adj(A))⁻¹ = [[−1, −3],[−1, −2]].

18. The coefficient matrix is A = [[2, −1],[3, 2]], where |A| = 7. Because |A| ≠ 0, you can use Cramer's Rule. Replace each column with the column of constants to obtain
A1 = [[−10, −1],[−1, 2]], |A1| = −21, and A2 = [[2, −10],[3, −1]], |A2| = 28.
Now solve for x1 and x2: x1 = |A1|/|A| = −21/7 = −3 and x2 = |A2|/|A| = 28/7 = 4.

20. The coefficient matrix is A = [[18, 12],[30, 24]], where |A| = 72. Because |A| ≠ 0, you can use Cramer's Rule:
A1 = [[13, 12],[23, 24]], |A1| = 36, and A2 = [[18, 13],[30, 23]], |A2| = 24.
The solution is x1 = 36/72 = 1/2 and x2 = 24/72 = 1/3.

22. The coefficient matrix is A = [[13, −6],[26, −12]], where |A| = 0. Because |A| = 0, Cramer's Rule cannot be applied. (The system does not have a solution.)

24. The coefficient matrix is A = [[−0.4, 0.8],[0.2, 0.3]], where |A| = −0.28. Because |A| ≠ 0, you can use Cramer's Rule:
A1 = [[1.6, 0.8],[0.6, 0.3]], |A1| = 0, and A2 = [[−0.4, 1.6],[0.2, 0.6]], |A2| = −0.56.
The solution is x1 = |A1|/|A| = 0/−0.28 = 0 and x2 = |A2|/|A| = −0.56/−0.28 = 2.

26. The coefficient matrix of the system is A = [[3, 2],[2, 10]], where |A| = 26. Because |A| ≠ 0, you can use Cramer's Rule:
A1 = [[1, 2],[6, 10]], |A1| = −2, and A2 = [[3, 1],[2, 6]], |A2| = 16.
The solution is x1 = −2/26 = −1/13 and x2 = 16/26 = 8/13.
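Cramer's Rule for a 2 × 2 system replaces one column of A at a time with the constant vector; a sketch (helper name ours) that reproduces exercise 18:

```python
from fractions import Fraction as F

def cramer2(a11, a12, a21, a22, b1, b2):
    """Solve the 2x2 system A x = b by Cramer's Rule."""
    det_a = a11 * a22 - a12 * a21   # |A|, assumed nonzero
    det_1 = b1 * a22 - a12 * b2     # |A1|: column 1 replaced by b
    det_2 = a11 * b2 - b1 * a21     # |A2|: column 2 replaced by b
    return F(det_1, det_a), F(det_2, det_a)

# Exercise 18: 2*x1 - x2 = -10 and 3*x1 + 2*x2 = -1.
x1, x2 = cramer2(2, -1, 3, 2, -10, -1)
```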
28. The coefficient matrix is A = [[4, −2, 3],[2, 2, 5],[8, −5, −2]], where |A| = −82. Because |A| ≠ 0, you can use Cramer's Rule:
A1 = [[−2, −2, 3],[16, 2, 5],[4, −5, −2]], |A1| = −410,
A2 = [[4, −2, 3],[2, 16, 5],[8, 4, −2]], |A2| = −656,
A3 = [[4, −2, −2],[2, 2, 16],[8, −5, 4]], |A3| = 164.
The solution is x1 = −410/−82 = 5, x2 = −656/−82 = 8, and x3 = 164/−82 = −2.

30. The coefficient matrix is A = [[14, −21, −7],[−4, 2, −2],[56, −21, 7]], where |A| = 1568. Because |A| ≠ 0, you can use Cramer's Rule: |A1| = 1568, |A2| = 3136, and |A3| = −1568. The solution is
x1 = 1568/1568 = 1, x2 = 3136/1568 = 2, and x3 = −1568/1568 = −1.

32. The coefficient matrix is A = [[2, 3, 5],[3, 5, 9],[5, 9, 17]], where |A| = 0. Because |A| = 0, Cramer's Rule cannot be applied.

34. The coefficient matrix is A = [[0.2, −0.6],[−1, 1.4]]. Using a graphing utility or a computer software program, |A| = −0.32. With A1 = [[2.4, −0.6],[−8.8, 1.4]], |A1| = −1.92. So x1 = |A1|/|A| = −1.92/−0.32 = 6.

36. The coefficient matrix is A = [[5/6, −1],[4/3, −7/2]]. Using a graphing utility or a computer software program, |A| = −19/12. With A1 = [[−20, −1],[−51, −7/2]], |A1| = 19. So x1 = |A1|/|A| = 19 ÷ (−19/12) = −12.

38. The coefficient matrix is A = [[5, −3, 2],[2, 2, −3],[1, −7, 8]]. Using a graphing utility or a computer software program, |A| = 0. Cramer's Rule does not apply because the coefficient matrix has a determinant of zero.

40. The coefficient matrix is A = [[−8, 7, −10],[12, 3, −5],[15, −9, 2]]. Using a graphing utility or a computer software program, |A| = 1149. With A1 = [[−151, 7, −10],[86, 3, −5],[187, −9, 2]], |A1| = 11,490. So x1 = |A1|/|A| = 11,490/1149 = 10.

42. The coefficient matrix is A = [[−1, −1, 0, 1],[3, 5, 5, 0],[0, 0, 2, 1],[−2, −3, −3, 0]]. Using a graphing utility or a computer software program, |A| = 1. With A1 = [[−8, −1, 0, 1],[24, 5, 5, 0],[−6, 0, 2, 1],[−15, −3, −3, 0]], |A1| = 3. So x1 = |A1|/|A| = 3/1 = 3.
44. Draw the altitude from vertex C to side c; then from trigonometry, c = a cos B + b cos A. Similarly, the other two equations follow by using the other altitudes. Now use Cramer's Rule to solve for cos C in this system of three equations: the coefficient determinant is −2abc, and replacing the cos C column with the constants gives c(c² − a² − b²), so
cos C = (a² + b² − c²)/(2ab).
That is, 2ab cos C = a² + b² − c², which is the Law of Cosines c² = a² + b² − 2ab cos C.

46. Use the formula for area as follows:
Area = ±(1/2) det[[1, 1, 1],[2, 4, 1],[4, 2, 1]] = ±(1/2)(−8) = 4.

48. Use the formula for area as follows:
Area = ±(1/2) det[[1, 1, 1],[−1, 1, 1],[0, −2, 1]] = ±(1/2)(6) = 3.

50. Use the fact that
det[[−1, 0, 1],[1, 1, 1],[3, 3, 1]] = 2 ≠ 0
to determine that the three points are not collinear.

52. Use the fact that the determinant det[[x1, y1, 1],[x2, y2, 1],[x3, y3, 1]] formed from the three given points equals 0 to determine that the three points are collinear.

54. Find the equation as follows:
0 = det[[x, y, 1],[−4, 7, 1],[2, 4, 1]] = 3x + 6y − 30.
So, an equation for the line is x + 2y = 10.

56. Find the equation as follows:
0 = det[[x, y, 1],[1, 4, 1],[3, 4, 1]] = 2y − 8.
So, an equation for the line is y = 4.

58. Use the formula for volume: Volume = ±(1/6) times the determinant of the 4 × 4 matrix whose rows are the vertex coordinates, each augmented with a 1. Here the determinant is 3, so Volume = (1/6)(3) = 1/2.
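Exercises 46–52 all reduce to the same 3 × 3 determinant built from the points; a sketch (helper names are ours, not the text's):

```python
def points_det(p1, p2, p3):
    """det([[x1, y1, 1], [x2, y2, 1], [x3, y3, 1]]), expanded along the first row."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return x1 * (y2 - y3) - y1 * (x2 - x3) + (x2 * y3 - x3 * y2)

def triangle_area(p1, p2, p3):
    """Half the absolute value of the points determinant."""
    return abs(points_det(p1, p2, p3)) / 2

def collinear(p1, p2, p3):
    """The three points are collinear exactly when the determinant is zero."""
    return points_det(p1, p2, p3) == 0

area46 = triangle_area((1, 1), (2, 4), (4, 2))    # exercise 46
area48 = triangle_area((1, 1), (-1, 1), (0, -2))  # exercise 48
flat50 = collinear((-1, 0), (1, 1), (3, 3))       # exercise 50
```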
60. Use the formula for volume with vertices (0, 0, 0), (3, 0, 0), (0, 2, 0), and (1, 1, 4):
Volume = ±(1/6) det[[0, 0, 0, 1],[3, 0, 0, 1],[0, 2, 0, 1],[1, 1, 4, 1]] = ±(1/6)(24) = 4.

62. Use the fact that the determinant of the 4 × 4 matrix formed from the four points (each row a point augmented with a 1) equals 0 to determine that the four points are coplanar.

64. Use the fact that the corresponding determinant for the four given points equals −1 ≠ 0 to determine that the four points are not coplanar.

66. Find the equation as follows:
0 = det[[x, y, z, 1],[x1, y1, z1, 1],[x2, y2, z2, 1],[x3, y3, z3, 1]].
Expanding along the first row gives 4x − 2y − 2z − 2 = 0, or 2x − y − z = 1.

68. Find the equation as follows:
0 = det[[x, y, z, 1],[1, 2, 7, 1],[4, 4, 2, 1],[3, 3, 4, 1]].
Expanding along the first row gives −x − y − z + 10 = 0, or x + y + z = 10.

70. Cramer's Rule was not used correctly to solve for z; the given setup solves for x, not z.
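The plane-through-three-points determinant in exercises 66 and 68 can be expanded along its first row in code, yielding the plane's coefficients directly; a sketch (helper names ours):

```python
def det3(m):
    """3x3 determinant by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def plane_through(p1, p2, p3):
    """Coefficients (A, B, C, D) with A*x + B*y + C*z + D = 0, from the 4x4
    determinant expanded along its first row (cofactor signs +, -, +, -)."""
    (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = p1, p2, p3
    A = det3([[y1, z1, 1], [y2, z2, 1], [y3, z3, 1]])
    B = -det3([[x1, z1, 1], [x2, z2, 1], [x3, z3, 1]])
    C = det3([[x1, y1, 1], [x2, y2, 1], [x3, y3, 1]])
    D = -det3([[x1, y1, z1], [x2, y2, z2], [x3, y3, z3]])
    return A, B, C, D

# Exercise 68: the points (1,2,7), (4,4,2), (3,3,4) lie on x + y + z = 10.
A, B, C, D = plane_through((1, 2, 7), (4, 4, 2), (3, 3, 4))
```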
72. (a) 9a + 3b + c = 158.7
16a + 4b + c = 182.1
25a + 5b + c = 207.9
(b) The coefficient matrix is A = [[9, 3, 1],[16, 4, 1],[25, 5, 1]] and |A| = −2. Also
A1 = [[158.7, 3, 1],[182.1, 4, 1],[207.9, 5, 1]] and |A1| = −2.4,
A2 = [[9, 158.7, 1],[16, 182.1, 1],[25, 207.9, 1]] and |A2| = −30,
A3 = [[9, 3, 158.7],[16, 4, 182.1],[25, 5, 207.9]] and |A3| = −205.8.
So a = −2.4/−2 = 1.2, b = −30/−2 = 15, and c = −205.8/−2 = 102.9.
(c) (Graph omitted.)
(d) The function fits the data exactly.
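The 3 × 3 Cramer computation in part (b) can be replayed exactly with rational arithmetic; a sketch (not the text's code):

```python
from fractions import Fraction as F

def det3(m):
    """3x3 determinant by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def cramer3(A, b):
    """Solve the 3x3 system A x = b by Cramer's Rule."""
    d = det3(A)
    xs = []
    for j in range(3):
        Aj = [row[:] for row in A]
        for i in range(3):
            Aj[i][j] = b[i]          # replace column j with the constants
        xs.append(det3(Aj) / d)
    return xs

A = [[9, 3, 1], [16, 4, 1], [25, 5, 1]]
b = [F("158.7"), F("182.1"), F("207.9")]
a_coef, b_coef, c_coef = cramer3(A, b)   # exercise 72: 1.2, 15, 102.9
```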
Review Exercises for Chapter 3

2. Using the formula for the determinant of a 2 × 2 matrix, you have
|0 −3; 1 2| = 0(2) − (1)(−3) = 3.

4. Using the formula for the determinant of a 2 × 2 matrix, you have
|−2 0; 0 3| = (−2)(3) − (0)(0) = −6.

6. The determinant of a triangular matrix is the product of the entries along the main diagonal: 5(−1)(1) = −5.

8. The determinant is 0, because the matrix has a column of zeros.

10. Factoring 3 out of each row of the 3 × 3 determinant gives 3³ = 27 times the reduced determinant; expanding then gives 27(9 − 42) = −891.
12. The determinant of a triangular matrix is the product of its diagonal entries. So, the determinant equals 2(1)(3)(−1) = −6.

14. Reducing the 4 × 4 determinant by elementary row operations and then expanding by cofactors gives 9 + 2(−7) = −5.

16. Reducing the 4 × 4 determinant by elementary row operations and then expanding gives −(72 − 84) = 12.

18. The 5 × 5 matrix has 2s along the counterdiagonal and zeros elsewhere. Expanding repeatedly along the first row gives (−8)(−4) = 32.

20. Because the second and third columns are interchanged, the sign of the determinant is changed.

22. Because a multiple of the first row of the matrix on the left was added to the second row to produce the matrix on the right, the determinants are equal.

24. Computing with the given matrices: |A| = 27 and |B| = −5 (by cofactor expansion), AB = [[1, 6, 12],[4, 15, 27],[7, 6, 15]], and |AB| = −135. Notice that |A||B| = |AB| = −135.

26. First find |A| = −1. Then:
(a) |Aᵀ| = |A| = −1
(b) |A³| = |A|³ = (−1)³ = −1
(c) |AᵀA| = |Aᵀ||A| = (−1)(−1) = 1
(d) |5A| = 5³|A| = 125(−1) = −125

28. (a) Expanding by cofactors gives |A| = −31.
(b) |A⁻¹| = 1/|A| = −1/31
30. A⁻¹ = (1/74)[[7, −2],[2, 10]] = [[7/74, −1/37],[1/37, 5/37]], so
|A⁻¹| = (7/74)(5/37) − (−1/37)(1/37) = 35/2738 + 1/1369 = 1/74.
Notice that |A| = 74. So, |A⁻¹| = 1/|A| = 1/74.
32. A⁻¹ = [[−2/3, 1/6, 0],[−2/3, 1/6, −1],[1/2, 0, 1/2]], and expanding by cofactors gives |A⁻¹| = −1/12.
Notice that |A| = 1(8 − 8) − (−1)(−8 − 4) = −12. So, |A⁻¹| = 1/|A| = −1/12.
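The identity |A⁻¹| = 1/|A| used in exercises 30 and 32 can be demonstrated on any invertible 2 × 2 matrix; a sketch with a hypothetical matrix of our choosing:

```python
from fractions import Fraction as F

def det2(m):
    """2x2 determinant."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def inv2(m):
    """Inverse of a 2x2 matrix via the adjoint formula."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[F(d, det), F(-b, det)], [F(-c, det), F(a, det)]]

A = [[2, 7], [1, 5]]          # hypothetical example with |A| = 3
lhs = det2(inv2(A))           # |A^{-1}|
rhs = F(1, det2(A))           # 1/|A|
```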
34. (a) Gaussian elimination on the augmented matrix gives
[[2, 1, 2, 6],[−1, 2, −3, 0],[3, 2, −1, 6]] → [[1, −2, 3, 0],[0, 1, −4/5, 6/5],[0, 0, 1, 1]].
So x3 = 1, x2 = 6/5 + (4/5)(1) = 2, and x1 = 0 − 3(1) + 2(2) = 1.
(b) Continuing to Gauss-Jordan form gives [[1, 0, 0, 1],[0, 1, 0, 2],[0, 0, 1, 1]]. So x1 = 1, x2 = 2, and x3 = 1.
(c) The coefficient matrix is A = [[2, 1, 2],[−1, 2, −3],[3, 2, −1]] and |A| = −18. Also
A1 = [[6, 1, 2],[0, 2, −3],[6, 2, −1]] and |A1| = −18,
A2 = [[2, 6, 2],[−1, 0, −3],[3, 6, −1]] and |A2| = −36,
A3 = [[2, 1, 6],[−1, 2, 0],[3, 2, 6]] and |A3| = −18.
So x1 = −18/−18 = 1, x2 = −36/−18 = 2, and x3 = −18/−18 = 1.
36. (a) Gaussian elimination on the augmented matrix gives
[[2, 3, 5, 4],[3, 5, 9, 7],[5, 9, 13, 17]] → [[1, 3/2, 5/2, 2],[0, 1, 3, 2],[0, 0, 1, −1]].
So x3 = −1, x2 = 2 − 3(−1) = 5, and x1 = 2 − (5/2)(−1) − (3/2)(5) = −3.
(b) Continuing to Gauss-Jordan form gives [[1, 0, 0, −3],[0, 1, 0, 5],[0, 0, 1, −1]]. So x1 = −3, x2 = 5, and x3 = −1.
(c) The coefficient matrix is A = [[2, 3, 5],[3, 5, 9],[5, 9, 13]] and |A| = −4. Also
A1 = [[4, 3, 5],[7, 5, 9],[17, 9, 13]] and |A1| = 12,
A2 = [[2, 4, 5],[3, 7, 9],[5, 17, 13]] and |A2| = −20,
A3 = [[2, 3, 4],[3, 5, 7],[5, 9, 17]] and |A3| = 4.
So x1 = 12/−4 = −3, x2 = −20/−4 = 5, and x3 = 4/−4 = −1.
38. Because the determinant of the coefficient matrix is
|2 −5; 3 −7| = 1 ≠ 0,
the system has a unique solution.

40. Because the determinant of the coefficient matrix is
|2 3 1; 2 −3 −3; 8 6 0| = 0,
the system does not have a unique solution.

42. Because the determinant of the 5 × 5 coefficient matrix is −896 ≠ 0, the system has a unique solution.
44. (a) True. If either A or B is singular, then det(A) or det(B) is zero (Theorem 3.7), but then det(AB) = det(A)det(B) = 0 ≠ −1, which leads to a contradiction.
(b) False. det(2A) = 2³ det(A) = 8 · 5 = 40 ≠ 10.
(c) False. Let A and B be the 3 × 3 identity matrix I3. Then det(A) = det(B) = det(I3) = 1, but det(A + B) = det(2I3) = 2³ · 1 = 8, while det(A) + det(B) = 1 + 1 = 2.

46. Using the fact that |cA| = cⁿ|A|, where A is an n × n matrix, you obtain |2A| = 2⁴|A| = 16(−1) = −16.

48. Writing the determinant as a sum of two determinants (splitting one row into a sum) gives 5 + 5 = 10.

50. Factoring (1 − a) out of each row after elementary row operations, then expanding along the third row, gives
(1 − a)³(1(1) − 1(−1 − a − 1)) = (1 − a)³(a + 3).
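The scaling rule used in exercise 46, |cA| = cⁿ|A|, is easy to demonstrate on a small hypothetical matrix:

```python
def det2(m):
    """2x2 determinant."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

A = [[1, 2], [3, 5]]                       # hypothetical 2x2 example, |A| = -1
twoA = [[2 * x for x in row] for row in A]
# |cA| = c^n |A|; here c = 2 and n = 2, so the ratio should be 4.
ratio = det2(twoA) // det2(A)
```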
52. |λI − A| = |λ−5 −2; −4 λ−1| = (λ − 5)(λ − 1) − 8 = λ² − 6λ − 3
The eigenvalues are λ1 = 3 + 2√3 and λ2 = 3 − 2√3.
λ1 = 3 + 2√3: row-reducing [[−2 + 2√3, −2],[−4, 2 + 2√3]] gives [[1, −(1 + √3)/2],[0, 0]], so the corresponding eigenvectors are the nonzero multiples of [1 + √3, 2]ᵀ.
λ2 = 3 − 2√3: row-reducing [[−2 − 2√3, −2],[−4, 2 − 2√3]] gives [[1, −(1 − √3)/2],[0, 0]], so the corresponding eigenvectors are the nonzero multiples of [1 − √3, 2]ᵀ.
54. |λI − A| = |λ+3 0 −4; −2 λ−1 −1; 1 0 λ−1| = (λ + 3)(λ − 1)² + 4(λ − 1) = (λ − 1)(λ² + 2λ + 1) = (λ − 1)(λ + 1)²
The eigenvalues are λ1 = 1 and λ2 = λ3 = −1.
λ1 = 1: [[4, 0, −4],[−2, 0, −1],[1, 0, 0]] ⇒ [[1, 0, 0],[0, 0, 1],[0, 0, 0]]. The corresponding eigenvectors are the nonzero multiples of [0, 1, 0]ᵀ.
λ2 = λ3 = −1: [[2, 0, −4],[−2, −2, −1],[1, 0, −2]] ⇒ [[1, 0, −2],[0, 1, 2.5],[0, 0, 0]]. The corresponding eigenvectors are the nonzero multiples of [2, −2.5, 1]ᵀ.

56. J(u, v) = |∂x/∂u ∂x/∂v; ∂y/∂u ∂y/∂v| = |a b; c d| = ad − bc

58. J(u, v, w) = det[[1, −1, 1],[2v, 2u, 0],[1, 1, 1]] = 1(2u) + 1(2v) + 1(2v − 2u) = 4v

60. Because |B| ≠ 0, B⁻¹ exists, and you can let C = AB⁻¹; then A = CB and |C| = |AB⁻¹| = |A||B⁻¹| = |A|(1/|B|) = 1.

62. The matrix of cofactors is [[−1, 0, 0],[−1, −1, 0],[−3, −2, 1]]. So the adjoint is
adj([[1, −1, 1],[0, 1, 2],[0, 0, −1]]) = [[−1, −1, −3],[0, −1, −2],[0, 0, 1]].

64. The determinant of the coefficient matrix is |2 1; 3 −1| = −5 ≠ 0, so the system has a unique solution. Using Cramer's Rule with
A1 = [[0.3, 1],[−1.3, −1]], |A1| = 1.0, and A2 = [[2, 0.3],[3, −1.3]], |A2| = −3.5:
x = |A1|/|A| = 1/−5 = −0.2 and y = |A2|/|A| = −3.5/−5 = 0.7.

66. The determinant of the coefficient matrix is 0. So, Cramer's Rule does not apply.
68. (a) 49a + 7b + c = 296
64a + 8b + c = 308
81a + 9b + c = 321
(b) The coefficient matrix is A = [[49, 7, 1],[64, 8, 1],[81, 9, 1]] and |A| = −2. Also
A1 = [[296, 7, 1],[308, 8, 1],[321, 9, 1]] and |A1| = −1,
A2 = [[49, 296, 1],[64, 308, 1],[81, 321, 1]] and |A2| = −9,
A3 = [[49, 7, 296],[64, 8, 308],[81, 9, 321]] and |A3| = −480.
So a = −1/−2 = 0.5, b = −9/−2 = 4.5, and c = −480/−2 = 240.
(c) (Graph omitted.)
(d) The function fits the data exactly.

70. The formula for area yields
Area = ±(1/2) det[[−4, 0, 1],[4, 0, 1],[0, 6, 1]] = ±(1/2)(−6)(−4 − 4) = 24.

72. Use the equation det[[x, y, 1],[x1, y1, 1],[x2, y2, 1]] = 0 to find the equation of the line:
det[[x, y, 1],[6, −1, 1],[2, 5, 1]] = x(6) − y(−4) − 32 = 0, or 3x + 2y = 16.

74. The equation of the plane is given by
det[[x, y, z, 1],[x1, y1, z1, 1],[x2, y2, z2, 1],[x3, y3, z3, 1]] = 0.
With the points (0, 0, 0), (2, −1, 1), and (−3, 2, 5), expanding gives
det[[x, y, z],[2, −1, 1],[−3, 2, 5]] = x(−7) − y(13) + z(1) = 0.
So, the equation of the plane is 7x + 13y − z = 0.

76. (a) False. The transpose of the matrix of cofactors of A is called the adjoint matrix of A.
(b) False. Cramer's Rule requires the determinant of this matrix to be in the numerator; the denominator is always det(A), where A is the coefficient matrix of the system (assuming, of course, that it is nonsingular).
Project Solutions for Chapter 3

1 Eigenvalues and Stochastic Matrices
1. Px1 = P[7, 10, 4]ᵀ = [7, 10, 4]ᵀ
Px2 = P[0, −1, 1]ᵀ = [0, −.65, .65]ᵀ
Px3 = P[−2, 1, 1]ᵀ = [−1.1, .55, .55]ᵀ
2. S = [[7, 0, −2],[10, −1, 1],[4, 1, 1]] and S⁻¹PS = [[1, 0, 0],[0, .65, 0],[0, 0, .55]] = D.
The entries along the diagonal of D are the corresponding eigenvalues of P.

3. S⁻¹PS = D ⇒ PS = SD ⇒ P = SDS⁻¹. Then
Pⁿ = (SDS⁻¹)ⁿ = (SDS⁻¹)(SDS⁻¹)⋯(SDS⁻¹) = SDⁿS⁻¹.
For n = 10, D¹⁰ = diag(1, (.65)¹⁰, (.55)¹⁰), so
P¹⁰ = SD¹⁰S⁻¹ ≈ [[.335, .332, .332],[.473, .481, .468],[.192, .186, .200]]
and P¹⁰X ≈ [33,287, 47,147, 19,566]ᵀ.

2 The Cayley-Hamilton Theorem
1. |λI − A| = |λ−2 2; 2 λ+1| = λ² − λ − 6
A² − A − 6I = [[8, −2],[−2, 5]] − [[2, −2],[−2, −1]] − 6[[1, 0],[0, 1]] = [[0, 0],[0, 0]]
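The Cayley-Hamilton check p(A) = A² − A − 6I = O can be replayed by direct multiplication; a sketch (helper name ours):

```python
def mm2(X, Y):
    """2x2 matrix product."""
    return [[X[0][0] * Y[0][0] + X[0][1] * Y[1][0],
             X[0][0] * Y[0][1] + X[0][1] * Y[1][1]],
            [X[1][0] * Y[0][0] + X[1][1] * Y[1][0],
             X[1][0] * Y[0][1] + X[1][1] * Y[1][1]]]

A = [[2, -2], [-2, -1]]
A2 = mm2(A, A)
# p(A) = A^2 - A - 6I should be the zero matrix (Cayley-Hamilton).
pA = [[A2[i][j] - A[i][j] - (6 if i == j else 0) for j in range(2)]
      for i in range(2)]
```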
2. |λI − A| = |λ−6 0 −4; 2 λ−1 −3; −2 0 λ−4| = λ³ − 11λ² + 26λ − 16
A³ − 11A² + 26A − 16I = [[344, 0, 336],[−36, 1, −1],[168, 0, 176]] − 11[[44, 0, 40],[−8, 1, 7],[20, 0, 24]] + 26[[6, 0, 4],[−2, 1, 3],[2, 0, 4]] − 16[[1, 0, 0],[0, 1, 0],[0, 0, 1]] = [[0, 0, 0],[0, 0, 0],[0, 0, 0]].
3. |λI − A| = |λ−a −b; −c λ−d| = λ² − (a + d)λ + (ad − bc)
A² − (a + d)A + (ad − bc)I = [[a² + bc, ab + bd],[ac + dc, bc + d²]] − (a + d)[[a, b],[c, d]] + (ad − bc)[[1, 0],[0, 1]] = [[0, 0],[0, 0]]
4. (1/c0)(−Aⁿ⁻¹ − c_{n−1}Aⁿ⁻² − ⋯ − c2A − c1I)A = (1/c0)(−Aⁿ − c_{n−1}Aⁿ⁻¹ − ⋯ − c2A² − c1A) = (1/c0)(c0I) = I,
because c0I = −Aⁿ − c_{n−1}Aⁿ⁻¹ − ⋯ − c2A² − c1A from the equation p(A) = 0.
For A = [[1, 2],[3, 5]]: |λI − A| = |λ−1 −2; −3 λ−5| = λ² − 6λ − 1, so c0 = −1 and
A⁻¹ = (1/c0)(−A + 6I) = A − 6I = [[−5, 2],[3, −1]].
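The Cayley-Hamilton inverse A⁻¹ = A − 6I can be verified by direct multiplication; a sketch (helper name ours):

```python
def mm2(X, Y):
    """2x2 matrix product."""
    return [[X[0][0] * Y[0][0] + X[0][1] * Y[1][0],
             X[0][0] * Y[0][1] + X[0][1] * Y[1][1]],
            [X[1][0] * Y[0][0] + X[1][1] * Y[1][0],
             X[1][0] * Y[0][1] + X[1][1] * Y[1][1]]]

A = [[1, 2], [3, 5]]
Ainv = [[1 - 6, 2], [3, 5 - 6]]  # A - 6I, from lambda^2 - 6*lambda - 1 = 0
product = mm2(A, Ainv)           # should be the 2x2 identity
```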
5. (a) Because A² = 2A + I, you have A³ = 2A² + A = 2(2A + I) + A = 5A + 2I. So
A³ = 5[[3, −1],[2, −1]] + 2[[1, 0],[0, 1]] = [[17, −5],[10, −3]].
Similarly, A⁴ = 2A³ + A² = 2(5A + 2I) + (2A + I) = 12A + 5I. Therefore,
A⁴ = 12[[3, −1],[2, −1]] + 5[[1, 0],[0, 1]] = [[41, −12],[24, −7]].
Note: This approach is a lot more efficient, because you can calculate An without calculating all the previous powers of A. (b) First calculate the characteristic polynomial of A.
|λI − A| = det[[λ, 0, −1],[−2, λ−2, 1],[−1, 0, λ−2]] = λ³ − 4λ² + 3λ + 2.
By the Cayley-Hamilton Theorem, A³ − 4A² + 3A + 2I = O, or A³ = 4A² − 3A − 2I. Now you can express any positive power Aⁿ as a linear combination of A², A, and I. For example,
A⁴ = 4A³ − 3A² − 2A = 4(4A² − 3A − 2I) − 3A² − 2A = 13A² − 14A − 8I, and
A⁵ = 4A⁴ − 3A³ − 2A² = 4(13A² − 14A − 8I) − 3(4A² − 3A − 2I) − 2A² = 38A² − 47A − 26I.
Here A = [[0, 0, 1],[2, 2, −1],[1, 0, 2]] and A² = AA = [[1, 0, 2],[3, 4, −2],[2, 0, 5]].
With this method you can calculate A⁵ directly, without calculating A³ and A⁴ first:
A⁵ = 38A² − 47A − 26I = 38[[1, 0, 2],[3, 4, −2],[2, 0, 5]] − 47[[0, 0, 1],[2, 2, −1],[1, 0, 2]] − 26[[1, 0, 0],[0, 1, 0],[0, 0, 1]] = [[12, 0, 29],[20, 32, −29],[29, 0, 70]].
Similarly,
A⁴ = 13A² − 14A − 8I = [[5, 0, 12],[11, 16, −12],[12, 0, 29]] and
A³ = 4A² − 3A − 2I = [[2, 0, 5],[6, 8, −5],[5, 0, 12]].
CHAPTER 4 Vector Spaces
Section 4.1 Vectors in Rⁿ
Section 4.2 Vector Spaces
Section 4.3 Subspaces of Vector Spaces
Section 4.4 Spanning Sets and Linear Independence
Section 4.5 Basis and Dimension
Section 4.6 Rank of a Matrix and Systems of Linear Equations
Section 4.7 Coordinates and Change of Basis
Section 4.8 Applications of Vector Spaces
Review Exercises
C H A P T E R Vector Spaces
4
Section 4.1 Vectors in R n 1. v = ( 4, 5)
3.
11. v =
3 u 2
3 2
=
= −3,
(−3, ( 9 2
1 x 1
2
3
4
v=
5
4 3 2
−3
u
(2, −4)
−4 −5
x
− 5 − 4 − 3 − 2 −1
5.
y
13. v = u + 2w
1 x −5 −4 −3 −2 −1
1
= ( −2, 3) + 2( −3, − 2) = ( −2, 3) + ( −6, − 4)
−2
= ( −2 − 6, 3 − 4)
−3
= ( −8, −1)
−4
(−3, −4) −5
y
7. u + v = (1, 3) + ( 2, − 2) = (1 + 2, 3 − 2) = (3, 1)
(−2, 3) 4 (−8, −1)
u
2 1
v = u + 2w
−2
2
v
3
4
15. v =
1 2
=
1 2
=
5
(2, −2)
=
9. u + v = ( 2, − 3) + (−3, −1) = ( 2 − 3, − 3 − 1)
(3u
2w −2
1 2
(−9, 7)
)− 92 , 72 )
7 2
)
y
(− 6, 9) 8 4
v=
1
(− 3, −1)
(
= − 92 ,
3u
y
u+v
+ w)
(3(−2, 3) + (−3, − 2)) 1 (−6, 9) + (−3, − 2)) 2(
= ( −1, − 4)
v −1
x
(−6, −4) −4
(1, 3) u+v u (3, 1) x
−1 −1
2
−2
y
3
)
5
3 u 2
(−2, 3)
−2
4
9 2
y
y
−1 −1
(
(−2, 3)
x
1
2
u
1 2 (3u
−8
+ w)
(− 3, − 2) − 4
x
w
(2, − 3)
(−1, − 4)
Copyright © Houghton Mifflin Harcourt Publishing Company. All rights reserved.
97
17. (a) 2v = 2(2, 1) = (2(2), 2(1)) = (4, 2)
(b) −3v = −3(2, 1) = (−3(2), −3(1)) = (−6, −3)
(c) (1/2)v = (1/2)(2, 1) = ((1/2)(2), (1/2)(1)) = (1, 1/2)
(The accompanying graphs are omitted.)
19. u − v = (1, 2, 3) − ( 2, 2, −1) = ( −1, 0, 4) v − u = ( 2, 2, −1) − (1, 2, 3) = (1, 0, − 4)
21. 2u + 4 v − w = 2(1, 2, 3) + 4( 2, 2, −1) − ( 4, 0, − 4)
= (2, 4, 6) + (8, 8, −4) − (4, 0, −4) = (2 + 8 − 4, 4 + 8 − 0, 6 + (−4) − (−4)) = (6, 12, 6)

23. 2z − 3u = w implies that 2z = 3u + w, or z = (3/2)u + (1/2)w.
So, z = (3/2)(1, 2, 3) + (1/2)(4, 0, −4) = (3/2, 3, 9/2) + (2, 0, −2) = (7/2, 3, 5/2).
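The algebra in Exercise 23 can be checked numerically. This sketch is not part of the manual; it assumes NumPy is available and simply confirms that the candidate z satisfies the defining equation 2z − 3u = w.

```python
# A quick numerical check of Exercise 23 (not part of the text).
# 2z - 3u = w gives z = (3/2)u + (1/2)w.
import numpy as np

u = np.array([1, 2, 3])
w = np.array([4, 0, -4])

z = 1.5 * u + 0.5 * w                  # candidate solution (7/2, 3, 5/2)
print(z)                               # [3.5 3.  2.5]
print(np.allclose(2 * z - 3 * u, w))   # True: the defining equation holds
```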
25. (a) 2v = 2(1, 2, 2) = (2, 4, 4)
(b) −v = −(1, 2, 2) = (−1, −2, −2)
(c) (1/2)v = (1/2)(1, 2, 2) = (1/2, 1, 1)
(The accompanying graphs are omitted.)
27. (a) Because (−6, −4, 10) = −2(3, 2, −5), u is a scalar multiple of z.
(b) Because (2, 4/3, −10/3) = (2/3)(3, 2, −5), v is a scalar multiple of z.
(c) Because (6, 4, 10) ≠ c(3, 2, −5) for any c, w is not a scalar multiple of z.

29. (a) u − v = (4, 0, −3, 5) − (0, 2, 5, 4) = (4 − 0, 0 − 2, −3 − 5, 5 − 4) = (4, −2, −8, 1)
(b) 2(u + 3v) = 2[(4, 0, −3, 5) + 3(0, 2, 5, 4)] = 2[(4, 0, −3, 5) + (0, 6, 15, 12)] = 2(4 + 0, 0 + 6, −3 + 15, 5 + 12) = 2(4, 6, 12, 17) = (8, 12, 24, 34)
(c) 2v − u = 2(0, 2, 5, 4) − (4, 0, −3, 5) = (0, 4, 10, 8) − (4, 0, −3, 5) = (−4, 4, 13, 3)
31. (a) u − v = (−7, 0, 0, 0, 9) − (2, −3, −2, 3, 3) = (−9, 3, 2, −3, 6)
(b) 2(u + 3v) = 2[(−7, 0, 0, 0, 9) + 3(2, −3, −2, 3, 3)] = 2[(−7, 0, 0, 0, 9) + (6, −9, −6, 9, 9)] = 2(−1, −9, −6, 9, 18) = (−2, −18, −12, 18, 36)
(c) 2v − u = 2(2, −3, −2, 3, 3) − (−7, 0, 0, 0, 9) = (4, −6, −4, 6, 6) − (−7, 0, 0, 0, 9) = (11, −6, −4, 6, −3)

33. Using a graphing utility with u = (1, 2, −3, 1), v = (0, 2, −1, −2), and w = (2, −2, 1, 3), you have
(a) u + 2v = (1, 6, −5, −3)
(b) w − 3u = (−1, −8, 10, 0)
(c) 4v + (1/2)u − w = (−1.5, 11, −6.5, −10.5)
(d) (1/4)(3u + 2v − w) = (0.25, 3, −3, −1)

35. 2w = u − 3v implies that w = (1/2)u − (3/2)v = (1/2)(1, −1, 0, 1) − (3/2)(0, 2, 3, −1) = (1/2 − 0, −1/2 − 3, 0 − 9/2, 1/2 − (−3/2)) = (1/2, −7/2, −9/2, 2).

37. (1/2)w = 2u + 3v implies that w = 4u + 6v = 4(1, −1, 0, 1) + 6(0, 2, 3, −1) = (4, −4, 0, 4) + (0, 12, 18, −6) = (4, 8, 18, −2).

39. The equation au + bw = v
a(1, 2) + b(1, −1) = (2, 1)
yields the system
a + b = 2
2a − b = 1.
Solving this system produces a = 1 and b = 1. So, v = u + w.

41. The equation au + bw = v
a(1, 2) + b(1, −1) = (3, 0)
yields the system
a + b = 3
2a − b = 0.
Solving this system produces a = 1 and b = 2. So, v = u + 2w.

43. The equation au + bw = v
a(1, 2) + b(1, −1) = (−1, −2)
yields the system
a + b = −1
2a − b = −2.
Solving this system produces a = −1 and b = 0. So, v = −u.

45. 2u + v − 3w = 0 implies that w = (2/3)u + (1/3)v = (2/3)(0, 2, 7, 5) + (1/3)(−3, 1, 4, −8) = (0, 4/3, 14/3, 10/3) + (−1, 1/3, 4/3, −8/3) = (0 + (−1), 4/3 + 1/3, 14/3 + 4/3, 10/3 + (−8/3)) = (−1, 5/3, 6, 2/3).

47. The equation au1 + bu2 + cu3 = v
a(2, 3, 5) + b(1, 2, 4) + c(−2, 2, 3) = (10, 1, 4)
yields the system
2a + b − 2c = 10
3a + 2b + 2c = 1
5a + 4b + 3c = 4.
Solving this system produces a = 1, b = 2, and c = −3. So, v = u1 + 2u2 − 3u3.

49. The equation au1 + bu2 + cu3 = v
a(1, 1, 2, 2) + b(2, 3, 5, 6) + c(−3, 1, −4, 2) = (0, 5, 3, 0)
yields the system
a + 2b − 3c = 0
a + 3b + c = 5
2a + 5b − 4c = 3
2a + 6b + 2c = 0.
The second and fourth equations cannot both be true. So, the system has no solution. It is not possible to write v as a linear combination of u1, u2, and u3.
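Inconsistency of a system like the one in Exercise 49 can also be detected by a rank comparison: v lies in the span of u1, u2, u3 exactly when augmenting the coefficient matrix with v does not raise its rank. This sketch is not part of the manual and assumes NumPy.

```python
# A numerical check of Exercise 49 (not part of the text): the system
# is inconsistent because the augmented matrix has a larger rank than
# the coefficient matrix.
import numpy as np

A = np.array([[1, 2, -3],
              [1, 3,  1],
              [2, 5, -4],
              [2, 6,  2]])        # rows of the system; columns correspond to a, b, c
v = np.array([0, 5, 3, 0])

rank_A = np.linalg.matrix_rank(A)
rank_Av = np.linalg.matrix_rank(np.column_stack([A, v]))
print(rank_A, rank_Av)            # 3 4 -- ranks differ, so no solution exists
```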
51. Write a matrix using the given u1, u2, …, u5 as columns and augment this matrix with v as a column.
A = [1 1 0 2 0 5; 2 2 1 1 2 3; −3 0 1 −1 2 −11; 4 2 1 2 −1 11; −1 1 −4 1 −1 9]
The reduced row-echelon form of A is
[1 0 0 0 0 2; 0 1 0 0 0 1; 0 0 1 0 0 −2; 0 0 0 1 0 1; 0 0 0 0 1 −1].
So, v = 2u1 + u2 − 2u3 + u4 − u5. Verify the solution by showing that
2(1, 2, −3, 4, −1) + (1, 2, 0, 2, 1) − 2(0, 1, 1, 1, −4) + (2, 1, −1, 2, 1) − (0, 2, 2, −1, −1)
equals (5, 3, −11, 11, 9).

53. Write a matrix using the given u1, u2, …, u6 as columns and augment this matrix with v as a column.
A = [1 1 0 1 1 3 10; 2 −2 2 0 −2 2 30; −3 1 −1 3 1 1 −13; 4 −1 2 −4 −1 −2 14; −1 2 −1 1 2 3 −7; 2 1 −1 2 −3 0 27]
The reduced row-echelon form of A is
[1 0 0 0 0 0 5; 0 1 0 0 0 0 −1; 0 0 1 0 0 0 1; 0 0 0 1 0 0 2; 0 0 0 0 1 0 −5; 0 0 0 0 0 1 3].
So, v = 5u1 − u2 + u3 + 2u4 − 5u5 + 3u6. Verify the solution by showing that
5(1, 2, −3, 4, −1, 2) − (1, −2, 1, −1, 2, 1) + (0, 2, −1, 2, −1, −1) + 2(1, 0, 3, −4, 1, 2) − 5(1, −2, 1, −1, 2, 3) + 3(3, 2, 1, −2, 3, 0)
equals (10, 30, −13, 14, −7, 27).
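The column-matrix method of Exercises 51 and 53 can be carried out numerically: with the u's as columns of a square matrix, the coefficients are the solution of a single linear system. This sketch (not part of the manual, assuming NumPy) reproduces Exercise 51's coefficients.

```python
# Exercise 51 checked numerically (not part of the text): solve U c = v
# where the columns of U are u1, ..., u5.
import numpy as np

U = np.array([[ 1, 1,  0, 2,  0],
              [ 2, 2,  1, 1,  2],
              [-3, 0,  1, -1, 2],
              [ 4, 2,  1, 2, -1],
              [-1, 1, -4, 1, -1]])   # columns are u1, ..., u5
v = np.array([5, 3, -11, 11, 9])

c = np.linalg.solve(U, v)
print(np.round(c, 6))                # [ 2.  1. -2.  1. -1.]
print(np.allclose(U @ c, v))         # True: v = 2u1 + u2 - 2u3 + u4 - u5
```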
55. (a) True. See the discussion before "Definition of Vector Addition and Scalar Multiplication in R^n," page 183.
(b) False. The vector cv is |c| times as long as v, and it has the same direction as v if c is positive and the opposite direction if c is negative.
57. The equation av1 + bv2 + cv3 = 0
a(1, 0, 1) + b(−1, 1, 2) + c(0, 1, 4) = (0, 0, 0)
yields the homogeneous system
a − b = 0
b + c = 0
a + 2b + 4c = 0.
This system has only the trivial solution a = b = c = 0. So, you cannot find a nontrivial way of writing 0 as a combination of v1 , v 2 , and v 3 .
59. (1) u + v = (2, −1, 3, 6) + (1, 4, 0, 1) = (3, 3, 3, 7) is a vector in R^4.
(2) u + v = (2, −1, 3, 6) + (1, 4, 0, 1) = (3, 3, 3, 7) and v + u = (1, 4, 0, 1) + (2, −1, 3, 6) = (3, 3, 3, 7). So, u + v = v + u.
(3) (u + v) + w = [(2, −1, 3, 6) + (1, 4, 0, 1)] + (3, 0, 2, 0) = (3, 3, 3, 7) + (3, 0, 2, 0) = (6, 3, 5, 7) and
u + (v + w) = (2, −1, 3, 6) + [(1, 4, 0, 1) + (3, 0, 2, 0)] = (2, −1, 3, 6) + (4, 4, 2, 1) = (6, 3, 5, 7).
So, (u + v) + w = u + (v + w).
(4) u + 0 = (2, −1, 3, 6) + (0, 0, 0, 0) = (2, −1, 3, 6) = u
(5) u + (−u) = (2, −1, 3, 6) + (−2, 1, −3, −6) = (0, 0, 0, 0) = 0
(6) cu = 5(2, −1, 3, 6) = (10, −5, 15, 30) is a vector in R^4.
(7) c(u + v) = 5[(2, −1, 3, 6) + (1, 4, 0, 1)] = 5(3, 3, 3, 7) = (15, 15, 15, 35) and
cu + cv = 5(2, −1, 3, 6) + 5(1, 4, 0, 1) = (10, −5, 15, 30) + (5, 20, 0, 5) = (15, 15, 15, 35).
So, c(u + v) = cu + cv.
(8) (c + d)u = (5 + (−2))(2, −1, 3, 6) = 3(2, −1, 3, 6) = (6, −3, 9, 18) and
cu + du = 5(2, −1, 3, 6) + (−2)(2, −1, 3, 6) = (10, −5, 15, 30) + (−4, 2, −6, −12) = (6, −3, 9, 18).
So, (c + d)u = cu + du.
(9) c(du) = 5((−2)(2, −1, 3, 6)) = 5(−4, 2, −6, −12) = (−20, 10, −30, −60) and
(cd)u = (5(−2))(2, −1, 3, 6) = −10(2, −1, 3, 6) = (−20, 10, −30, −60).
So, c(du) = (cd)u.
(10) 1(u) = 1(2, −1, 3, 6) = (2, −1, 3, 6) = u
61. Prove the remaining eight properties. (1) u + v = (u1 , u2 ) + (v1 , v2 ) = (u1 + v1 , u2 + v2 ) is a vector in the plane. (2) u + v = (u1 , u2 ) + (v1 , v2 ) = (u1 + v1 , u2 + v2 ) = (v1 + u1 , v2 + u2 ) = (v1 , v2 ) + (u1 , u2 ) = v + u (4) u + 0 = (u1 , u2 ) + (0, 0) = (u1 + 0, u2 + 0) = (u1 , u2 ) = u (5) u + ( −u) = (u1 , u2 ) + (−u1 , − u2 ) = (u1 − u1 , u2 − u2 ) = (0, 0) = 0 (6) cu = c(u1 , u2 ) = (cu1 , cu2 ) is a vector in the plane. (7) c(u + v ) = c ⎡⎣(u1 , u2 ) + (v1 , v2 )⎤⎦ = c(u1 + v1 , u2 + v2 )
= (c(u1 + v1 ), c(u2 + v2 )) = (cu1 + cv1 , cu2 + cv2 ) = (cu1 , cu2 ) + (cv1 , cv2 ) = c(u1 , u2 ) + c(v1 , v2 ) = cu + cv
(9) c( du) = c( d (u1 , u2 )) = c( du1 , du2 ) = (cdu1 , cdu2 ) = (cd )(u1 , u2 ) = (cd )u (10) 1(u) = 1(u1 , u2 ) = (u1 , u2 ) = u 63. (a) Add −v to both sides (b) Associative property and Additive identity (c) Additive inverse (d) Commutative property (e) Additive identity
65. (a) Additive identity
(b) Distributive property
(c) Add −c0 to both sides
(d) Additive inverse and Associative property
(e) Additive inverse
(f) Additive identity
67. (a) Additive inverse
(b) Transitive property
(c) Add v to both sides
(d) Associative property
(e) Additive inverse
(f) Additive identity

69. [1 2 3; 7 8 9; 4 5 7] ⇒ [1 0 0; 0 1 0; 0 0 1]
No.

71. You can describe vector subtraction u − v geometrically: placing u and v tail to tail, u − v runs from the tip of v to the tip of u. (The accompanying diagram is omitted.) Or, write subtraction in terms of addition, u − v = u + (−1)v.
Section 4.2 Vector Spaces

1. The additive identity of R^4 is the vector (0, 0, 0, 0).

3. The additive identity of M2,3 is the 2 × 3 zero matrix [0 0 0; 0 0 0].

5. P3 is the set of all polynomials of degree less than or equal to 3. Its additive identity is 0x^3 + 0x^2 + 0x + 0 = 0.

7. In R^4, the additive inverse of (v1, v2, v3, v4) is (−v1, −v2, −v3, −v4).

9. M2,3 is the set of all 2 × 3 matrices. The additive inverse of [a11 a12 a13; a21 a22 a23] is −[a11 a12 a13; a21 a22 a23] = [−a11 −a12 −a13; −a21 −a22 −a23].

11. P3 is the set of all polynomials of degree less than or equal to 3. The additive inverse of a3x^3 + a2x^2 + a1x + a0 is −(a3x^3 + a2x^2 + a1x + a0) = −a3x^3 − a2x^2 − a1x − a0.

13. M4,6 with the standard operations is a vector space. All ten vector space axioms hold.

15. This set is not a vector space. Axiom 1 fails: the set is not closed under addition. For example, (−x^3 + 4x^2) + (x^3 + 2x) = 4x^2 + 2x is not a third-degree polynomial.

17. This set is not a vector space. Axiom 1 fails. For example, given f(x) = x and g(x) = −x, f(x) + g(x) = 0 is not of the form ax + b with a ≠ 0.

19. This set is not a vector space. The set is not closed under scalar multiplication. For example, (−1)(3, 2) = (−3, −2) is not in the set.

21. This set is a vector space. All ten vector space axioms hold.

23. This set is a vector space. All ten vector space axioms hold.

25. This set is not a vector space because it is not closed under addition. A counterexample is
[1 0; 0 0] + [0 0; 0 1] = [1 0; 0 1].
Each matrix on the left is singular, while the sum is nonsingular.

27. This set is a vector space. All ten vector space axioms hold.

29. (a) Axiom 8 fails. For example,
(1 + 2)(1, 1) = 3(1, 1) = (3, 1) (because c(x, y) = (cx, y)), but
1(1, 1) + 2(1, 1) = (1, 1) + (2, 1) = (3, 2).
So, R^2 is not a vector space with these operations.
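The axiom failure in Exercise 29(a) can be made concrete in a few lines of code. This sketch is not part of the manual; it implements the nonstandard scalar multiplication c(x, y) = (cx, y) and shows that (c + d)u and cu + du disagree.

```python
# Exercise 29(a)'s failure of Axiom 8 under the nonstandard scalar
# multiplication c(x, y) = (cx, y). Not part of the text.
def smul(c, v):
    x, y = v
    return (c * x, y)          # nonstandard: the y-component is unchanged

def add(u, v):
    return (u[0] + v[0], u[1] + v[1])   # standard addition

u = (1, 1)
lhs = smul(1 + 2, u)                # (c + d)u = (3, 1)
rhs = add(smul(1, u), smul(2, u))   # cu + du  = (3, 2)
print(lhs, rhs, lhs == rhs)         # (3, 1) (3, 2) False
```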
(b) Axiom 2 fails. For example,
(1, 2) + (2, 1) = (1, 0), but (2, 1) + (1, 2) = (2, 0).
So, R^2 is not a vector space with these operations.
(c) Axiom 6 fails. For example, (−1)(1, 1) = (√−1, √−1), which is not in R^2.
So, R^2 is not a vector space with these operations.

31. Verify the ten axioms in the definition of a vector space, writing u = [u1 u2; u3 u4], v = [v1 v2; v3 v4], and w = [w1 w2; w3 w4].
(1) u + v = [u1 + v1, u2 + v2; u3 + v3, u4 + v4] is in M2,2.
(2) u + v = [u1 + v1, u2 + v2; u3 + v3, u4 + v4] = [v1 + u1, v2 + u2; v3 + u3, v4 + u4] = v + u
(3) u + (v + w) = [u1 + (v1 + w1), u2 + (v2 + w2); u3 + (v3 + w3), u4 + (v4 + w4)] = [(u1 + v1) + w1, (u2 + v2) + w2; (u3 + v3) + w3, (u4 + v4) + w4] = (u + v) + w
(4) The zero vector is 0 = [0 0; 0 0], and u + 0 = [u1 + 0, u2 + 0; u3 + 0, u4 + 0] = u.
(5) For every u, you have −u = [−u1 −u2; −u3 −u4], and u + (−u) = [0 0; 0 0] = 0.
(6) cu = [cu1 cu2; cu3 cu4] is in M2,2.
(7) c(u + v) = [c(u1 + v1), c(u2 + v2); c(u3 + v3), c(u4 + v4)] = [cu1 + cv1, cu2 + cv2; cu3 + cv3, cu4 + cv4] = cu + cv
(8) (c + d)u = [(c + d)u1, (c + d)u2; (c + d)u3, (c + d)u4] = [cu1 + du1, cu2 + du2; cu3 + du3, cu4 + du4] = cu + du
(9) c(du) = [c(du1), c(du2); c(du3), c(du4)] = [(cd)u1, (cd)u2; (cd)u3, (cd)u4] = (cd)u
(10) 1(u) = [1u1, 1u2; 1u3, 1u4] = u

33. This set is not a vector space because Axiom 5 fails. The additive identity is (1, 1), and so (0, 0) has no additive inverse. Axioms 7 and 8 also fail.

35. (a) True. See the first paragraph of the section, page 191.
(b) False. See Example 6, page 195.
(c) False. With standard operations on R^2, the additive inverse axiom is not satisfied.

37. (a) Add −w to both sides
(b) Associative property
(c) Additive inverse
(d) Additive identity

39. (−1)v + 1(v) = (−1 + 1)v = 0v = 0. Also, −v + v = 0. So, (−1)v and −v are both additive inverses of v. Because the additive inverse of a vector is unique (Exercise 41), (−1)v = −v.

41. Let u be an element of the vector space V. Then −u is the additive inverse of u. Assume, to the contrary, that v is another additive inverse of u. Then
u + v = 0
−u + (u + v) = −u + 0
(−u + u) + v = −u + 0
0 + v = −u + 0
v = −u.
Section 4.3 Subspaces of Vector Spaces

1. Because W is nonempty and W ⊂ R^4, you need only check that W is closed under addition and scalar multiplication. Given (x1, x2, x3, 0) ∈ W and (y1, y2, y3, 0) ∈ W, it follows that
(x1, x2, x3, 0) + (y1, y2, y3, 0) = (x1 + y1, x2 + y2, x3 + y3, 0) ∈ W.
Furthermore, for any real number c and (x1, x2, x3, 0) ∈ W, it follows that
c(x1, x2, x3, 0) = (cx1, cx2, cx3, 0) ∈ W.
3. Because W is nonempty and W ⊂ M2,2, you need only check that W is closed under addition and scalar multiplication. Given [0 a1; b1 0] ∈ W and [0 a2; b2 0] ∈ W, it follows that
[0 a1; b1 0] + [0 a2; b2 0] = [0, a1 + a2; b1 + b2, 0] ∈ W.
Furthermore, for any real number c and [0 a; b 0] ∈ W, it follows that c[0 a; b 0] = [0 ca; cb 0] ∈ W.

5. Recall from calculus that continuity implies integrability, so W ⊂ V. Furthermore, because W is nonempty, you need only check that W is closed under addition and scalar multiplication. Given continuous functions f, g ∈ W, it follows that f + g is continuous, so f + g ∈ W. Also, for any real number c and for a continuous function f ∈ W, cf is continuous. So, cf ∈ W.
7. The vectors in W are of the form (a, b, −1). This set is not closed under addition or scalar multiplication. For example,
(0, 0, −1) + (0, 0, −1) = (0, 0, −2) ∉ W and 2(0, 0, −1) = (0, 0, −2) ∉ W.

9. This set is not closed under scalar multiplication. For example, √2(1, 1) = (√2, √2) ∉ W.

11. Consider f(x) = e^x, which is continuous and nonnegative, so f ∈ W. The function (−1)f = −f is negative. So, −f ∉ W, and W is not closed under scalar multiplication.

13. This set is not closed under scalar multiplication. For example, (−2)(1, 1, 1) = (−2, −2, −2) ∉ W.

15. This set is not closed under addition. For example,
[1 0; 0 0] + [0 0; 0 1] = [1 0; 0 1] ∉ W.

17. The vectors in W are of the form (a, a^3). This set is not closed under addition or scalar multiplication. For example,
(1, 1) + (2, 8) = (3, 9) ∉ W and 3(2, 8) = (6, 24) ∉ W.

19. This set is not a subspace of C(−∞, ∞) because it is not closed under scalar multiplication.

21. This set is a subspace of C(−∞, ∞) because it is closed under addition and scalar multiplication.

23. This set is a subspace of C(−∞, ∞) because it is closed under addition and scalar multiplication.

25. This set is a subspace of Mm,n because it is closed under addition and scalar multiplication.

27. This set is a subspace of Mm,n because it is closed under addition and scalar multiplication.
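The counterexamples in Exercises 7 through 17 all follow one pattern: exhibit a sum or a scalar multiple that leaves the set. A small helper like the one below (not part of the manual; the function name and sample choices are illustrative) can hunt for such witnesses. It tests only finitely many samples, so a pass is evidence while a failure is a genuine counterexample.

```python
# Not part of the text: a sampler that looks for closure counterexamples
# like those in Exercises 7-17. `member` is a membership predicate for
# the candidate set W.
def closed_on_samples(member, samples, scalars=(-2, -1, 0.5, 3)):
    for u in samples:
        for v in samples:
            s = tuple(a + b for a, b in zip(u, v))
            if not member(s):
                return False, ('add', u, v, s)       # addition fails
        for c in scalars:
            s = tuple(c * a for a in u)
            if not member(s):
                return False, ('scale', c, u, s)     # scaling fails
    return True, None

# Exercise 17: W = {(a, a**3)} is not closed under addition.
member = lambda v: abs(v[1] - v[0] ** 3) < 1e-9
ok, witness = closed_on_samples(member, [(1, 1), (2, 8)])
print(ok, witness)   # False ('add', (1, 1), (1, 1), (2, 2))
```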
29. This set is not a subspace because it is not closed under addition or scalar multiplication. 31. W is a subspace of R 3 , because it is nonempty and closed under addition and scalar multiplication.
33. Note that W ⊂ R^3 and W is nonempty. If (a1, b1, a1 + 2b1) and (a2, b2, a2 + 2b2) are vectors in W, then their sum
(a1, b1, a1 + 2b1) + (a2, b2, a2 + 2b2) = (a1 + a2, b1 + b2, (a1 + a2) + 2(b1 + b2))
is also in W. Furthermore, for any real number c and (a, b, a + 2b) in W,
c(a, b, a + 2b) = (ca, cb, ca + 2cb)
is in W. Because W is closed under addition and scalar multiplication, W is a subspace of R^3.
35. W is not a subspace of R^3 because it is not closed under addition or scalar multiplication. For example, (1, 1, 1) ∈ W, but
(1, 1, 1) + (1, 1, 1) = (2, 2, 2) ∉ W.
Or, 2(1, 1, 1) = (2, 2, 2) ∉ W.
37. (a) True. See "Remark," page 199.
(b) True. See Theorem 4.6, page 202.
(c) False. There may be elements of W that are not elements of U, or vice versa.

39. Let W be a nonempty subset of a vector space V. On the one hand, if W is a subspace of V, then for any scalars a, b and any vectors x, y ∈ W, ax ∈ W and by ∈ W, and so ax + by ∈ W. On the other hand, assume that ax + by is an element of W whenever a, b are scalars and x, y ∈ W. To show that W is a subspace, verify the closure axioms: if x, y ∈ W, then x + y ∈ W (by taking a = b = 1); and if a is a scalar, then ax ∈ W (by taking b = 0).

41. Assume A is a fixed 2 × 3 matrix. Assuming W is nonempty, let x ∈ W. Then Ax = [1; 2]. Now, let c be a nonzero scalar such that c ≠ 1. Then cx ∈ R^3 and
A(cx) = cAx = c[1; 2] = [c; 2c].
So, cx ∉ W. Therefore, W is not a subspace of R^3.

43. Let W be a subspace of the vector space V, and let 0_V be the zero vector of V and 0_W the zero vector of W. Because 0_W ∈ W ⊂ V, 0_W = 0_W + (−0_W) = 0_V. So, the zero vector in V is also the zero vector in W.
45. The set W is a nonempty subset of M2,2 (for instance, A ∈ W). To show closure, let X, Y ∈ W, so that AX = XA and AY = YA. Then
(X + Y)A = XA + YA = AX + AY = A(X + Y),
which shows X + Y ∈ W. Similarly, if c is a scalar, then
(cX)A = c(XA) = c(AX) = A(cX),
which shows cX ∈ W.

47. V + W is nonempty because 0 = 0 + 0 ∈ V + W. Let u1, u2 ∈ V + W. Then u1 = v1 + w1 and u2 = v2 + w2, where vi ∈ V and wi ∈ W. So,
u1 + u2 = (v1 + w1) + (v2 + w2) = (v1 + v2) + (w1 + w2) ∈ V + W.
For a scalar c, cu1 = c(v1 + w1) = cv1 + cw1 ∈ V + W.
If V = {(x, 0): x is a real number} and W = {(0, y): y is a real number}, then V + W = R^2.
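The closing remark of Exercise 47 can be made concrete: with V the x-axis and W the y-axis, every point of R^2 splits as a sum of a vector from V and one from W. This sketch is not part of the manual; the helper name is illustrative.

```python
# Not part of the text: decomposing an arbitrary point of R^2 into its
# V-part (x-axis) and W-part (y-axis), showing V + W = R^2.
def split(p):
    x, y = p
    return (x, 0.0), (0.0, y)    # the V-part and the W-part

v_part, w_part = split((3.0, -7.0))
print(v_part, w_part)                                 # (3.0, 0.0) (0.0, -7.0)
print(tuple(a + b for a, b in zip(v_part, w_part)))   # (3.0, -7.0)
```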
Section 4.4 Spanning Sets and Linear Independence

1. (a) Solving the equation
c1(2, −1, 3) + c2(5, 0, 4) = (1, 1, −1)
for c1 and c2 yields the system
2c1 + 5c2 = 1
−c1 = 1
3c1 + 4c2 = −1.
This system has no solution. So, u cannot be written as a linear combination of vectors in S.
(b) Proceed as in (a), substituting (8, −1/4, 27/4) for (1, 1, −1), which yields the system
2c1 + 5c2 = 8
−c1 = −1/4
3c1 + 4c2 = 27/4.
The solution to this system is c1 = 1/4 and c2 = 3/2. So, v can be written as a linear combination of vectors in S.
(c) Proceed as in (a), substituting (1, −8, 12) for (1, 1, −1), which yields the system
2c1 + 5c2 = 1
−c1 = −8
3c1 + 4c2 = 12.
The solution to this system is c1 = 8 and c2 = −3. So, w can be written as a linear combination of the vectors in S.
(d) Proceed as in (a), substituting (−1, −2, 2) for (1, 1, −1), which yields the system
2c1 + 5c2 = −1
−c1 = −2
3c1 + 4c2 = 2.
The solution of this system is c1 = 2 and c2 = −1. So, z can be written as a linear combination of vectors in S.
3. (a) Solving the equation
c1(2, 0, 7) + c2(2, 4, 5) + c3(2, −12, 13) = (−1, 5, −6)
for c1, c2, and c3 yields the system
2c1 + 2c2 + 2c3 = −1
4c2 − 12c3 = 5
7c1 + 5c2 + 13c3 = −6.
One solution is c1 = −7/4, c2 = 5/4, and c3 = 0. So, u can be written as a linear combination of vectors in S.
(b) Proceed as in (a), substituting (−3, 15, 18) for (−1, 5, −6), which yields the system
2c1 + 2c2 + 2c3 = −3
4c2 − 12c3 = 15
7c1 + 5c2 + 13c3 = 18.
This system has no solution. So, v cannot be written as a linear combination of vectors in S.
(c) Proceed as in (a), substituting (1/3, 4/3, 1/2) for (−1, 5, −6), which yields the system
2c1 + 2c2 + 2c3 = 1/3
4c2 − 12c3 = 4/3
7c1 + 5c2 + 13c3 = 1/2.
One solution is c1 = −1/6, c2 = 1/3, and c3 = 0. So, w can be written as a linear combination of vectors in S.
(d) Proceed as in (a), substituting (2, 20, −3) for (−1, 5, −6), which yields the system
2c1 + 2c2 + 2c3 = 2
4c2 − 12c3 = 20
7c1 + 5c2 + 13c3 = −3.
One solution is c1 = −4, c2 = 5, and c3 = 0. So, z can be written as a linear combination of vectors in S.

5. Let u = (u1, u2) be any vector in R^2. Solving the equation
c1(2, 1) + c2(−1, 2) = (u1, u2)
for c1 and c2 yields the system
2c1 − c2 = u1
c1 + 2c2 = u2.
This system has a unique solution because the determinant of the coefficient matrix is nonzero. So, S spans R^2.
7. Let u = (u1, u2) be any vector in R^2. Solving the equation
c1(5, 0) + c2(5, −4) = (u1, u2)
for c1 and c2 yields the system
5c1 + 5c2 = u1
−4c2 = u2.
This system has a unique solution because the determinant of the coefficient matrix is nonzero. So, S spans R^2.

9. S does not span R^2 because only vectors of the form t(−3, 5) are in span(S). For example, (0, 1) is not in span(S). S spans a line in R^2.

11. S does not span R^2 because only vectors of the form t(1, 3) are in span(S). For example, (0, 1) is not in span(S). S spans a line in R^2.

13. S does not span R^2 because only vectors of the form t(1, −2) are in span(S). For example, (0, 1) is not in span(S). S spans a line in R^2.

15. S spans R^2. Let u = (u1, u2) be any vector in R^2. Solving the equation
c1(−1, 4) + c2(4, −1) + c3(1, 1) = (u1, u2)
for c1, c2, and c3 yields the system
−c1 + 4c2 + c3 = u1
4c1 − c2 + c3 = u2.
This system is equivalent to
c1 − 4c2 − c3 = −u1
15c2 + 5c3 = 4u1 + u2.
So, for any u = (u1, u2) in R^2, you can take c3 = 0, c2 = (4u1 + u2)/15, and c1 = 4c2 − u1 = (u1 + 4u2)/15.

17. Let u = (u1, u2, u3) be any vector in R^3. Solving the equation
c1(4, 7, 3) + c2(−1, 2, 6) + c3(2, −3, 5) = (u1, u2, u3)
for c1, c2, and c3 yields the system
4c1 − c2 + 2c3 = u1
7c1 + 2c2 − 3c3 = u2
3c1 + 6c2 + 5c3 = u3.
This system has a unique solution because the determinant of the coefficient matrix is nonzero. So, S spans R^3.

19. This set does not span R^3. S spans a plane in R^3.

21. Let u = (u1, u2, u3) be any vector in R^3. Solving the equation
c1(1, −2, 0) + c2(0, 0, 1) + c3(−1, 2, 0) = (u1, u2, u3)
for c1, c2, and c3 yields the system
c1 − c3 = u1
−2c1 + 2c3 = u2
c2 = u3.
This system has an infinite number of solutions if u2 = −2u1; otherwise it has no solution. For instance, (1, 1, 1) is not in the span of S. So, S does not span R^3. The subspace spanned by S is
span(S) = {(a, −2a, b): a and b are any real numbers},
which is a plane in R^3.

23. Because (−2, 2) is not a scalar multiple of (3, 5), the set S is linearly independent.

25. This set is linearly dependent because 1(0, 0) + 0(1, −1) = (0, 0).

27. Because (1, −4, 1) is not a scalar multiple of (6, 3, 2), the set S is linearly independent.

29. Because these vectors are multiples of each other, the set S is linearly dependent.

31. From the vector equation
c1(−4, −3, 4) + c2(1, −2, 3) + c3(6, 0, 0) = 0
you obtain the homogeneous system
−4c1 + c2 + 6c3 = 0
−3c1 − 2c2 = 0
4c1 + 3c2 = 0.
This system has only the trivial solution c1 = c2 = c3 = 0. So, the set S is linearly independent.
33. From the vector equation
c1(4, −3, 6, 2) + c2(1, 8, 3, 1) + c3(3, −2, −1, 0) = (0, 0, 0, 0)
you obtain the homogeneous system
4c1 + c2 + 3c3 = 0
−3c1 + 8c2 − 2c3 = 0
6c1 + 3c2 − c3 = 0
2c1 + c2 = 0.
This system has only the trivial solution c1 = c2 = c3 = 0. So, the set S is linearly independent.

35. One example of a nontrivial linear combination of vectors in S whose sum is the zero vector is
2(3, 4) − 8(−1, 1) − 7(2, 0) = (0, 0).
Solving this equation for (2, 0) yields
(2, 0) = (2/7)(3, 4) − (8/7)(−1, 1).

37. One example of a nontrivial linear combination of vectors in S whose sum is the zero vector is
(1, 1, 1) − (1, 1, 0) − 0(0, 1, 1) − (0, 0, 1) = (0, 0, 0).
Solving this equation for (1, 1, 1) yields
(1, 1, 1) = (1, 1, 0) + (0, 0, 1) + 0(0, 1, 1).

39. (a) From the vector equation
c1(t, 1, 1) + c2(1, t, 1) + c3(1, 1, t) = (0, 0, 0)
you obtain the homogeneous system
tc1 + c2 + c3 = 0
c1 + tc2 + c3 = 0
c1 + c2 + tc3 = 0.
The coefficient matrix of this system will have a nonzero determinant if t^3 − 3t + 2 ≠ 0. So, the vectors will be linearly independent for all values of t other than t = −2 and t = 1.
(b) Proceeding as in (a), you obtain the homogeneous system
tc1 + c2 + c3 = 0
c1 + c3 = 0
c1 + c2 + 3tc3 = 0.
The coefficient matrix of this system will have a nonzero determinant if 2 − 4t ≠ 0. So, the vectors will be linearly independent for all values of t other than t = 1/2.

41. (a) From the vector equation
c1[2 −3; 4 1] + c2[0 5; 1 −2] = [6 −19; 10 7]
you obtain the linear system
2c1 = 6
−3c1 + 5c2 = −19
4c1 + c2 = 10
c1 − 2c2 = 7.
The solution to this system is c1 = 3 and c2 = −2. So,
[6 −19; 10 7] = 3[2 −3; 4 1] − 2[0 5; 1 −2] = 3A − 2B.
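The determinant condition of Exercise 39(a) can be spot-checked numerically. This sketch (not part of the manual, assuming NumPy) evaluates the determinant t^3 − 3t + 2 at the claimed roots t = 1 and t = −2 and at two other values.

```python
# A numerical spot-check of Exercise 39(a) (not part of the text).
# The determinant of the coefficient matrix is t^3 - 3t + 2, which
# vanishes exactly at t = 1 and t = -2.
import numpy as np

def det_at(t):
    A = np.array([[t, 1.0, 1.0],
                  [1.0, t, 1.0],
                  [1.0, 1.0, t]])
    return np.linalg.det(A)

for t in (1.0, -2.0, 0.0, 2.0):
    print(t, round(det_at(t), 6))
# t = 1 and t = -2 give determinant 0 (dependent vectors);
# t = 0 gives 2 and t = 2 gives 4 (independent vectors).
```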
(b) Proceeding as in (a), you obtain the system
2c1 = 6
−3c1 + 5c2 = 2
4c1 + c2 = 9
c1 − 2c2 = 11.
This system is inconsistent, and so the matrix is not a linear combination of A and B.
(c) Proceeding as in (a), you obtain
[−2 28; 1 −11] = −[2 −3; 4 1] + 5[0 5; 1 −2] = −A + 5B,
and so the matrix is a linear combination of A and B.
(d) Proceeding as in (a), you obtain the trivial combination
[0 0; 0 0] = 0[2 −3; 4 1] + 0[0 5; 1 −2] = 0A + 0B.

43. From the vector equation
c1(2 − x) + c2(2x − x^2) + c3(6 − 5x + x^2) = 0 + 0x + 0x^2
you obtain the homogeneous system
2c1 + 6c3 = 0
−c1 + 2c2 − 5c3 = 0
−c2 + c3 = 0.
This system has infinitely many solutions, for instance c1 = −3, c2 = 1, c3 = 1. So, S is linearly dependent.

45. From the vector equation
c1(x^2 + 3x + 1) + c2(2x^2 + x − 1) + c3(4x) = 0 + 0x + 0x^2
you obtain the homogeneous system
c1 − c2 = 0
3c1 + c2 + 4c3 = 0
c1 + 2c2 = 0.
This system has only the trivial solution. So, S is linearly independent.

47. S does not span P2 because only vectors of the form s(x^2) + t(1) are in span(S). For example, 1 + x + x^2 is not in span(S).

49. (a) Because (−2, 4) = −2(1, −2), S is linearly dependent.
(b) Because 2(1, −6, 2) = (2, −12, 4), S is linearly dependent.
(c) Because (0, 0) = 0(1, 0), S is linearly dependent.

51. Because the matrix [1 2 −1; 0 1 1; 2 5 −1] row reduces to [1 0 −3; 0 1 1; 0 0 0] and [−2 −6 0; 1 1 −2] row reduces to [1 0 −3; 0 1 1], you see that S1 and S2 span the same subspace. You could also verify this by showing that each vector in S1 is in the span of S2, and conversely, each vector in S2 is in the span of S1. For example, (1, 2, −1) = −(1/4)(−2, −6, 0) + (1/2)(1, 1, −2).

53. (a) False. See "Definition of Linear Dependence and Linear Independence," page 263.
(b) True. See the corollary to Theorem 4.8, page 218.
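The same-span claim in Exercise 51 can also be settled by ranks: two row sets span the same subspace when stacking them does not increase the rank of either one. This sketch is not part of the manual and assumes NumPy.

```python
# A rank-based check of Exercise 51 (not part of the text): S1 and S2
# span the same subspace of R^3.
import numpy as np

S1 = np.array([[1, 2, -1], [0, 1, 1], [2, 5, -1]])
S2 = np.array([[-2, -6, 0], [1, 1, -2]])

r1 = np.linalg.matrix_rank(S1)
r2 = np.linalg.matrix_rank(S2)
r_both = np.linalg.matrix_rank(np.vstack([S1, S2]))
print(r1, r2, r_both)        # 2 2 2
print(r1 == r2 == r_both)    # True: the two sets span the same subspace
```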
55. The matrix [1 1 1; 1 1 0; 1 0 0] row reduces to [1 0 0; 0 1 0; 0 0 1], which shows that the equation
c1(1, 1, 1) + c2(1, 1, 0) + c3(1, 0, 0) = (0, 0, 0)
has only the trivial solution. So, the three vectors are linearly independent. Furthermore, the vectors span R^3 because the coefficient matrix of the linear system
[1 1 1; 1 1 0; 1 0 0][c1; c2; c3] = [u1; u2; u3]
is nonsingular.

57. Let S be a set of linearly independent vectors and T ⊂ S. If T = {v1, …, vk} and T were linearly dependent, then there would exist constants c1, …, ck, not all zero, satisfying
c1v1 + ⋯ + ckvk = 0.
But each vi ∈ S, and S is linearly independent, which is impossible. So, T is linearly independent.

59. If a set of vectors {v1, v2, …} contains the zero vector, then
0 = 0v1 + ⋯ + 0vk + 1·0,
which implies that the set is linearly dependent.

61. If the set {v1, …, v(k−1)} spanned vk, then
vk = c1v1 + ⋯ + c(k−1)v(k−1)
for some scalars c1, …, c(k−1). So,
c1v1 + ⋯ + c(k−1)v(k−1) − vk = 0,
which is impossible because {v1, …, vk} is linearly independent.

63. Theorem 4.8 requires only that one of the vectors be a linear combination of the others. In this case, (−1, 0, 2) = 0(1, 2, 3) − (1, 0, −2), and so there is no contradiction.

65. Consider the vector equation
c1(u + v) + c2(u − v) = 0.
Regrouping, you have
(c1 + c2)u + (c1 − c2)v = 0.
Because u and v are linearly independent, c1 + c2 = 0 and c1 − c2 = 0. So, c1 = c2 = 0, and the vectors u + v and u − v are linearly independent.

67. On [0, 1], f2(x) = |x| = x = (1/3)(3x) = (1/3)f1(x), so {f1, f2} is linearly dependent on [0, 1]. On [−1, 1], f1 and f2 are not multiples of each other: f2(x) = |x| ≠ (1/3)(3x) for −1 ≤ x < 0. So {f1, f2} is linearly independent on [−1, 1]. (The accompanying graph of f1(x) = 3x and f2(x) = |x| is omitted.)

69. On the one hand, if u and v are linearly dependent, then there exist constants c1 and c2, not both zero, such that c1u + c2v = 0. Without loss of generality, you can assume c1 ≠ 0 and obtain u = −(c2/c1)v. On the other hand, if one vector is a scalar multiple of another, u = cv, then u − cv = 0, which implies that u and v are linearly dependent.
Section 4.5 Basis and Dimension 1. There are six vectors in the standard basis for R 6 . {(1, 0, 0, 0, 0, 0), (0, 1, 0, 0, 0, 0), (0, 0, 1, 0, 0, 0),
(0, 0, 0, 1, 0, 0), (0, 0, 0, 0, 1, 0), (0, 0, 0, 0, 0, 1)} 3. There are eight vectors in the standard basis.
{[1 0 0 0; 0 0 0 0], [0 1 0 0; 0 0 0 0], [0 0 1 0; 0 0 0 0], [0 0 0 1; 0 0 0 0],
[0 0 0 0; 1 0 0 0], [0 0 0 0; 0 1 0 0], [0 0 0 0; 0 0 1 0], [0 0 0 0; 0 0 0 1]}
5. There are five vectors in the standard basis for P4 .
{1, x, x^2, x^3, x^4}

7. A basis for R^2 can only have two vectors. Because S has three vectors, it is not a basis for R^2.
((0, 0) ∈ S ) and does not span
R 2 . For instance, (1, 1) ∉ span ( S ).
11. S is linearly dependent and does not span R 2 . For
instance, (1, 1) ∉ span ( S ). 13. S does not span R , although it is linearly independent.
For instance, (1, 1) ∉ span ( S ). 15. A basis for R 3 contains three linearly independent vectors. Because
−2(1, 3, 0) + ( 4, 1, 2) + ( −2, 5, − 2) = (0, 0, 0) S is linearly dependent and is, therefore, not a basis for R3. 17. S does not span R 3 , although it is linearly independent.
For instance, (0, 1, 0) ∉ span ( S ). 19. S is linearly dependent and does not span R 3 . For instance, (0, 0, 1) ∉ span ( S ). 21. A basis for P2 can have only three vectors. Because S has four vectors, it is not a basis for P2 . 23. S is not a basis because the vectors are linearly dependent.
−2(1 − x) + 3(1 − x
⎡ 1 0⎤ ⎡0 1⎤ ⎡ 1 0⎤ ⎡ 8 −4⎤ ⎡0 0⎤ 5⎢ ⎥ − 4⎢ ⎥ + 3⎢ ⎥ − ⎢ ⎥ = ⎢ ⎥ − 0 0 1 0 0 1 4 3 ⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎣0 0⎦ Also, S does not span M 2,2 . 29. Because {v1 , v 2} consists of exactly two linearly
independent vectors, it is a basis for R 2 . 31. Because v1 and v 2 are multiplies of each other, they do
not form a basis for R 2 . 33. Because v1 and v 2 are multiples of each other they do
2
2
27. S is not a basis because the vectors are linearly dependent.
) + (3 x
2
− 2 x − 1) = 0
25. A basis for M 2,2 must have four vectors. Because S has
only two vectors, it is not a basis for M 2,2 .
not form a basis for R 2 . 35. Because the vectors in S are not scalar multiples of one another, they are linearly independent. Because S consists of exactly two linearly independent vectors, it is a basis for R 2 . 37. To determine if the vectors in S are linearly independent, find the solution to
c1(1, 5, 3) + c2(0, 1, 2) + c3(0, 0, 6) = (0, 0, 0),
which corresponds to the system
  c1 = 0
  5c1 + c2 = 0
  3c1 + 2c2 + 6c3 = 0.
This system has only the trivial solution. So, S consists of exactly three linearly independent vectors and is, therefore, a basis for R³.

39. To determine whether the vectors in S are linearly independent, find the solution of
c1(0, 3, −2) + c2(4, 0, 3) + c3(−8, 15, −16) = (0, 0, 0),
which corresponds to the system
  4c2 − 8c3 = 0
  3c1 + 15c3 = 0
  −2c1 + 3c2 − 16c3 = 0.
Because this system has nontrivial solutions (for instance, c1 = −5, c2 = 2, and c3 = 1), the vectors are linearly dependent, and S is not a basis for R³.

41. To determine whether the vectors of S are linearly independent, find the solution of
c1(−1, 2, 0, 0) + c2(2, 0, −1, 0) + c3(3, 0, 0, 4) + c4(0, 0, 5, 0) = (0, 0, 0, 0),
which corresponds to the system
  −c1 + 2c2 + 3c3 = 0
  2c1 = 0
  −c2 + 5c4 = 0
  4c3 = 0.
This system has only the trivial solution. So, S consists of exactly four linearly independent vectors and is, therefore, a basis for R⁴.
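Linear-independence tests of this kind can be spot-checked numerically. The following is a sketch (not part of the original solutions), using the vectors of Exercise 41 as the columns of a matrix:

```python
import numpy as np

# Columns are the vectors of S from Exercise 41.
A = np.array([[-1,  2, 3, 0],
              [ 2,  0, 0, 0],
              [ 0, -1, 0, 5],
              [ 0,  0, 4, 0]], dtype=float)

# A square matrix of full rank means Ac = 0 has only the trivial solution,
# so the four vectors are linearly independent and form a basis for R^4.
rank = np.linalg.matrix_rank(A)
print(rank)  # 4
```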
43. Form the equation
c1[2 0; 0 3] + c2[1 4; 0 1] + c3[0 1; 3 2] + c4[0 1; 2 0] = [0 0; 0 0],
which yields the homogeneous system
  2c1 + c2 = 0
  4c2 + c3 + c4 = 0
  3c3 + 2c4 = 0
  3c1 + c2 + 2c3 = 0.
This system has only the trivial solution. So, S consists of exactly four linearly independent vectors and is, therefore, a basis for M2,2.

45. Form the equation
c1(t³ − 2t² + 1) + c2(t² − 4) + c3(t³ + 2t) + c4(5t) = 0 + 0t + 0t² + 0t³,
which yields the homogeneous system
  c1 + c3 = 0
  −2c1 + c2 = 0
  2c3 + 5c4 = 0
  c1 − 4c2 = 0.
This system has only the trivial solution. So, S consists of exactly four linearly independent vectors and is, therefore, a basis for P3.

47. Because a basis for P3 can contain only four vectors and S contains five vectors, S is not a basis for P3.

49. Form the equation c1(4, 3, 2) + c2(0, 3, 2) + c3(0, 0, 2) = (0, 0, 0), which yields the homogeneous system
  4c1 = 0
  3c1 + 3c2 = 0
  2c1 + 2c2 + 2c3 = 0.
This system has only the trivial solution, so S is a basis for R³. Solving the system
  4c1 = 8
  3c1 + 3c2 = 3
  2c1 + 2c2 + 2c3 = 8
yields c1 = 2, c2 = −1, and c3 = 3. So,
u = 2(4, 3, 2) − (0, 3, 2) + 3(0, 0, 2) = (8, 3, 8).

51. The set S contains the zero vector and is, therefore, linearly dependent:
1(0, 0, 0) + 0(1, 3, 4) + 0(6, 1, −2) = (0, 0, 0).
So, S is not a basis for R³.
53. Form the equation
c1(2/3, 5/2, 1) + c2(1, 3/2, 0) + c3(2, 12, 6) = (0, 0, 0),
which yields the homogeneous system
  (2/3)c1 + c2 + 2c3 = 0
  (5/2)c1 + (3/2)c2 + 12c3 = 0
  c1 + 6c3 = 0.
Because this system has nontrivial solutions (for instance, c1 = 6, c2 = −2, and c3 = −1), the vectors are linearly dependent. So, S is not a basis for R³.

55. Because a basis for R⁶ has six linearly independent vectors, the dimension of R⁶ is 6.

57. Because a basis for R has one linearly independent vector, the dimension of R is 1.

59. Because a basis for P7 has eight linearly independent vectors, the dimension of P7 is 8.

61. Because a basis for M2,3 has six linearly independent vectors, the dimension of M2,3 is 6.

63. One basis for D3,3 is
{[1 0 0; 0 0 0; 0 0 0], [0 0 0; 0 1 0; 0 0 0], [0 0 0; 0 0 0; 0 0 1]}.
Because a basis for D3,3 has 3 vectors, dim(D3,3) = 3.
65. The following subsets of two vectors form a basis for R²:
{(1, 0), (0, 1)}, {(1, 0), (1, 1)}, {(0, 1), (1, 1)}.

67. Add any vector that is not a multiple of (1, 1). For instance, the set {(1, 1), (1, 0)} is a basis for R².

69. (a) W is a line through the origin.
(b) A basis for W is {(2, 1)}.
(c) The dimension of W is 1.

71. (a) W is a line through the origin.
(b) A basis for W is {(2, 1, −1)}.
(c) The dimension of W is 1.

73. (a) A basis for W is {(2, 1, 0, 1), (−1, 0, 1, 0)}.
(b) The dimension of W is 2.

75. (a) A basis for W is {(0, 6, 1, −1)}.
(b) The dimension of W is 1.

77. (a) False. See the paragraph before "Definition of Dimension of a Vector Space," page 227.
(b) True. Find a set of n basis vectors in V that will span V and add any other vector.

79. Because the set S1 = {cv1, …, cvn} has n vectors, you only need to show that they are linearly independent. Consider the equation
a1(cv1) + a2(cv2) + ⋯ + an(cvn) = 0,
that is,
c(a1v1 + a2v2 + ⋯ + anvn) = 0,
which, because c ≠ 0, implies
a1v1 + a2v2 + ⋯ + anvn = 0.
Because {v1, …, vn} are linearly independent, the coefficients a1, …, an must all be zero. So, S1 is linearly independent.

81. Let W ⊂ V and dim(V) = n. Let w1, …, wk be a basis for W. Because W ⊂ V, the vectors w1, …, wk are linearly independent in V. If span(w1, …, wk) = V, then dim(W) = dim(V). If not, let v ∈ V, v ∉ W. Then dim(W) < dim(V).

83. (a) S1-basis: {(1, 0, 0), (1, 1, 0)}; dim(S1) = 2
S2-basis: {(0, 0, 1), (0, 1, 0)}; dim(S2) = 2
S1 ∩ S2-basis: {(0, 1, 0)}; dim(S1 ∩ S2) = 1
S1 + S2-basis: {(1, 0, 0), (0, 1, 0), (0, 0, 1)}; dim(S1 + S2) = 3
(Answers are not unique.)
(b) No, it is not possible, because two planes cannot intersect only at the origin.

85. If S spans V, you are done. If not, let v1 ∉ span(S), and consider the linearly independent set S1 = S ∪ {v1}. If S1 spans V, you are done. If not, let v2 ∉ span(S1) and continue as before. Because the vector space is finite-dimensional, this process will ultimately produce a basis of V containing S.
Section 4.6 Rank of a Matrix and Systems of Linear Equations

1. (a) Because this matrix row reduces to
[1 0; 0 1],
the rank of the matrix is 2.
(b) A basis for the row space is {(1, 0), (0, 1)}.
(c) Row-reducing the transpose of the original matrix produces the identity matrix again, and so a basis for the column space is {[1; 0], [0; 1]}.

3. (a) Because this matrix is row-reduced already, the rank is 1.
(b) A basis for the row space is {(1, 2, 3)}.
(c) A basis for the column space is {[1]}.

5. (a) Because this matrix row reduces to
[1 0 1/2; 0 1 −1/2],
the rank of the matrix is 2.
(b) A basis for the row space is {(1, 0, 1/2), (0, 1, −1/2)}.
(c) Row-reducing the transpose of the original matrix produces
[1 0; 0 1; 0 0].
So, a basis for the column space of the matrix is {[1; 0], [0; 1]}.

7. (a) Because this matrix row reduces to
[1 0 1/4; 0 1 3/2; 0 0 0],
the rank of the matrix is 2.
(b) A basis for the row space is {(1, 0, 1/4), (0, 1, 3/2)}.
(c) Row-reducing the transpose of the original matrix produces
[1 0 −2/5; 0 1 3/5; 0 0 0].
So, a basis for the column space is {[1; 0; −2/5], [0; 1; 3/5]}. Equivalently, a basis for the column space consists of columns 1 and 2 of the original matrix: {[4; 6; 2], [20; −5; −11]}.

9. (a) Because this matrix row reduces to
[1 2 −2 0; 0 0 0 1; 0 0 0 0],
the rank of the matrix is 2.
(b) A basis for the row space of the matrix is {(1, 2, −2, 0), (0, 0, 0, 1)}.
(c) Row-reducing the transpose of the original matrix produces
[1 0 19/7; 0 1 8/7; 0 0 0; 0 0 0].
So, a basis for the column space of the matrix is {[1; 0; 19/7], [0; 1; 8/7]}. Equivalently, a basis for the column space consists of columns 1 and 4 of the original matrix: {[−2; 3; −2], [5; −4; 9]}.
11. (a) Because this matrix row reduces to the 5 × 5 identity matrix I5, the rank of the matrix is 5.
(b) A basis for the row space is
{(1, 0, 0, 0, 0), (0, 1, 0, 0, 0), (0, 0, 1, 0, 0), (0, 0, 0, 1, 0), (0, 0, 0, 0, 1)}.
(c) Row-reducing the transpose of the original matrix also produces I5, so a basis for the column space is
{[1; 0; 0; 0; 0], [0; 1; 0; 0; 0], [0; 0; 1; 0; 0], [0; 0; 0; 1; 0], [0; 0; 0; 0; 1]}.
13. Use v1, v2, and v3 to form the rows of the matrix A. Then write A in row-echelon form.
A = [1 2 4; −1 3 4; 2 3 1] → B = [1 0 0; 0 1 0; 0 0 1]
So, the nonzero row vectors of B, w1 = (1, 0, 0), w2 = (0, 1, 0), and w3 = (0, 0, 1), form a basis for the row space of A. That is, they form a basis for the subspace spanned by S.

15. Use v1, v2, and v3 to form the rows of the matrix A. Then write A in row-echelon form.
A = [4 4 8; 1 1 2; 1 1 1] → B = [1 1 0; 0 0 1; 0 0 0]
So, the nonzero row vectors of B, w1 = (1, 1, 0) and w2 = (0, 0, 1), form a basis for the row space of A. That is, they form a basis for the subspace spanned by S.

17. Begin by forming the matrix whose rows are the vectors in S:
[2 9 −2 53; −3 2 3 −2; 8 −3 −8 17; 0 −3 0 15].
This matrix reduces to
[1 0 −1 0; 0 1 0 0; 0 0 0 1; 0 0 0 0].
So, a basis for span(S) is {(1, 0, −1, 0), (0, 1, 0, 0), (0, 0, 0, 1)}.

19. Form the matrix whose rows are the vectors in S, and then row-reduce:
[−3 2 5 28; −6 1 −8 −1; 14 −10 12 −10; 0 5 12 50] → [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 1].
So, a basis for span(S) is {(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)}.
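The row reductions in Exercises 13–19 can be reproduced programmatically. A sketch (the helper `rref` below is a hand-rolled assumption, not a library routine), applied to the matrix of Exercise 17:

```python
import numpy as np

def rref(A, tol=1e-10):
    """Reduced row-echelon form by Gauss-Jordan elimination with partial pivoting."""
    A = A.astype(float).copy()
    rows, cols = A.shape
    r = 0
    for c in range(cols):
        pivot = r + np.argmax(np.abs(A[r:, c]))   # best pivot in this column
        if abs(A[pivot, c]) < tol:
            continue                               # no pivot here; free column
        A[[r, pivot]] = A[[pivot, r]]              # swap pivot row into place
        A[r] = A[r] / A[r, c]                      # scale pivot to 1
        for i in range(rows):
            if i != r:
                A[i] -= A[i, c] * A[r]             # clear the rest of the column
        r += 1
        if r == rows:
            break
    return A

S = np.array([[ 2,  9, -2, 53],
              [-3,  2,  3, -2],
              [ 8, -3, -8, 17],
              [ 0, -3,  0, 15]])
R = rref(S)
print(R)
```

The nonzero rows of the result, (1, 0, −1, 0), (0, 1, 0, 0), and (0, 0, 0, 1), form the basis for span(S) found above.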
23. Solving the system Ax = 0 yields solutions of the form (−2s − 3t , s, t ), where s and t are any real numbers. The dimension of the solution space is 2, and a basis is {(−2, 1, 0), (−3, 0, 1)}.
25. Solving the system Ax = 0 yields solutions of the form (−3t, 0, t), where t is any real number. The dimension of the solution space is 1, and a basis is {(−3, 0, 1)}.

27. Solving the system Ax = 0 yields solutions of the form (−t, 2t, t), where t is any real number. The dimension of the solution space is 1, and a basis for the solution space is {(−1, 2, 1)}.

29. Solving the system Ax = 0 yields solutions of the form (−s + 2t, s − 2t, s, t), where s and t are any real numbers. The dimension of the solution space is 2, and a basis is {(2, −2, 0, 1), (−1, 1, 1, 0)}.

31. The only solution to the system Ax = 0 is the trivial solution. So, the solution space is {(0, 0, 0, 0)}, whose dimension is 0.
33. (a) This system yields solutions of the form (−t, −3t, 2t), where t is any real number, and a basis is {(−1, −3, 2)}.
(b) The dimension of the solution space is 1.

35. (a) This system yields solutions of the form (2s − 3t, s, t), where s and t are any real numbers, and a basis for the solution space is {(2, 1, 0), (−3, 0, 1)}.
(b) The dimension of the solution space is 2.

37. (a) This system yields solutions of the form (−4s − 3t, −s − (2/3)t, s, t), where s and t are any real numbers, and a basis is {(−4, −1, 1, 0), (−3, −2/3, 0, 1)}.
(b) The dimension of the solution space is 2.

39. (a) This system yields solutions of the form ((4/3)t, −(3/2)t, −t, t), where t is any real number, and a basis is {(4/3, −3/2, −1, 1)} or {(8, −9, −6, 6)}.
(b) The dimension of the solution space is 1.
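A solution-space basis can also be extracted numerically from the singular value decomposition. A sketch (the matrix below is hypothetical, since the exercises' coefficient matrices are not reproduced in these solutions):

```python
import numpy as np

def nullspace_basis(A, tol=1e-10):
    """Orthonormal basis for the solution space of Ax = 0, via the SVD."""
    u, s, vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))     # number of nonzero singular values
    return vt[rank:].T              # remaining right singular vectors span the null space

# Hypothetical 2x3 matrix of rank 1, so the null space has dimension 2.
A = np.array([[1., 2., 3.],
              [2., 4., 6.]])
N = nullspace_basis(A)
print(N.shape[1])               # dimension of the solution space: 2
print(np.allclose(A @ N, 0))    # every basis vector solves Ax = 0: True
```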
41. (a) The system Ax = b is consistent because its augmented matrix reduces to
[1 0 −2 3; 0 1 4 5; 0 0 0 0; 0 0 0 0].
(b) The solutions of Ax = b are of the form (3 + 2t, 5 − 4t, t), where t is any real number. That is,
x = t[2; −4; 1] + [3; 5; 0] = xh + xp,
where xh = t[2; −4; 1] and xp = [3; 5; 0].

43. (a) The system Ax = b is inconsistent because its augmented matrix reduces to
[1 0 4 2 0; 0 1 −2 4 0; 0 0 0 0 1].

45. (a) The system Ax = b is consistent because its augmented matrix reduces to
[1 2 0 0 −5 1; 0 0 1 0 6 2; 0 0 0 1 4 −3; 0 0 0 0 0 0].
(b) The solutions of the system are of the form
(1 − 2s + 5t, s, 2 − 6t, −3 − 4t, t),
where s and t are any real numbers. That is,
x = s[−2; 1; 0; 0; 0] + t[5; 0; −6; −4; 1] + [1; 0; 2; −3; 0],
where
xh = s[−2; 1; 0; 0; 0] + t[5; 0; −6; −4; 1] and xp = [1; 0; 2; −3; 0].

47. The vector b is in the column space of A if the equation Ax = b is consistent. Because Ax = b has the solution
x = [1; 2],
b is in the column space of A. Furthermore,
b = 1[−1; 4] + 2[2; 0] = [3; 4].

49. The vector b is in the column space of A if the equation Ax = b is consistent. Because Ax = b has the solution
x = [−5/4; 3/4; −1/2],
b is in the column space of A. Furthermore, b = −(5/4)a1 + (3/4)a2 − (1/2)a3, where a1, a2, and a3 are the columns of A.
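The consistency test of Exercise 47 amounts to solving Ax = b. A sketch using the same data (b is in the column space exactly when a solution exists):

```python
import numpy as np

# Columns of A are (-1, 4) and (2, 0), as in Exercise 47.
A = np.array([[-1., 2.],
              [ 4., 0.]])
b = np.array([3., 4.])

x = np.linalg.solve(A, b)
print(x)                      # [1. 2.]
print(np.allclose(A @ x, b))  # True, so b = 1*col1 + 2*col2
```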
51. The rank of the matrix is at most 3. So, the dimension of the row space is at most 3, and any four vectors in the row space must form a linearly dependent set. 53. Assume that A is an m × n matrix where n > m. Then the set of n column vectors of A are vectors in R m and must be linearly dependent. Similarly, if m > n, then
the set of m row vectors of A are vectors in R n , and must be linearly dependent.
55. (a) Let
A = [1 0; 0 1] and B = [0 1; 1 0]. Then A + B = [1 1; 1 1].
Note that rank(A) = rank(B) = 2 and rank(A + B) = 1.
(b) Let
A = [1 0; 0 0] and B = [0 1; 0 0]. Then A + B = [1 1; 0 0].
Note that rank(A) = rank(B) = 1 and rank(A + B) = 1.
(c) Let
A = [1 0; 0 0] and B = [0 0; 0 1]. Then A + B = [1 0; 0 1].
Note that rank(A) = rank(B) = 1 and rank(A + B) = 2.

57. (a) Because the row (or column) space has dimension no larger than the smaller of m and n, r ≤ m (because m < n).
(b) There are r vectors in a basis for the row space of A.
(c) There are r vectors in a basis for the column space of A.
(d) The row space of A is a subspace of Rⁿ.
(e) The column space of A is a subspace of Rᵐ.

59. Consider the first row of the product AB.
The first row of AB is
(a11b11 + ⋯ + a1nbn1, …, a11b1k + ⋯ + a1nbnk),
which can be expressed as
a11(b11, b12, …, b1k) + a12(b21, …, b2k) + ⋯ + a1n(bn1, bn2, …, bnk).
This means that the first row of AB is in the row space of B, and the same argument applies to the other rows of A. A similar argument can be used to show that the column vectors of AB are in the column space of A. The first column of AB is
[a11b11 + ⋯ + a1nbn1; …; am1b11 + ⋯ + amnbn1] = b11[a11; …; am1] + ⋯ + bn1[a1n; …; amn].
61. Let Ax = b be a system of linear equations in n variables.
(a) If rank(A) = rank([A b]) = n, then b is in the column space of A, and so Ax = b has a unique solution.
(b) If rank(A) = rank([A b]) < n, then b is in the column space of A and rank(A) < n, which implies that Ax = b has an infinite number of solutions.
(c) If rank(A) < rank([A b]), then b is not in the column space of A, and the system is inconsistent.

63. (a) True. See Theorem 4.13 on page 233.
(b) False. The dimension of the solution space of Ax = 0 for an m × n matrix of rank r is n − r. See Theorem 4.17 on page 241.

65. (a) True. The columns of A become the rows of the transpose AT, so the columns of A span the same space as the rows of AT.
(b) False. The elementary row operations on A do not change the linear dependency relationships of the columns of A but may change the column space of A.
67. (a) rank(A) = rank(B) = 3 and nullity(A) = n − r = 5 − 3 = 2.
(b) Choosing x3 = s and x5 = t as the free variables, you have
x1 = −s − t, x2 = 2s − 3t, x3 = s, x4 = 5t, x5 = t.
A basis for the nullspace is {(−1, 2, 1, 0, 0), (−1, −3, 0, 5, 1)}.
(c) A basis for the row space of A (which is equal to the row space of B) is
{(1, 0, 1, 0, 1), (0, 1, −2, 0, 3), (0, 0, 0, 1, −5)}.
(d) A basis for the column space of A (which is not the same as the column space of B) is
{(−2, 1, 3, 1), (−5, 3, 11, 7), (0, 1, 7, 5)}.
(e) Linearly dependent
(f) (i) and (iii) are linearly independent, while (ii) is linearly dependent.

69. (a) Ax = Bx ⇒ (A − B)x = 0 for all x ∈ Rⁿ ⇒ nullity(A − B) = n and rank(A − B) = 0.
(b) So, A − B = O ⇒ A = B.

71. Let A and B be two m × n row-equivalent matrices. The dependency relationships among the columns of A can be expressed in the form Ax = 0, and those of B in the form Bx = 0. Because A and B are row equivalent, Ax = 0 and Bx = 0 have the same solution sets, and therefore the same dependency relationships.
Section 4.7 Coordinates and Change of Basis

1. Because [x]B = [4; 1], you can write
x = 4(2, −1) + 1(0, 1) = (8, −3).
Moreover, because (8, −3) = 8(1, 0) − 3(0, 1), the coordinates of x relative to S are
[x]S = [8; −3].

3. Because [x]B = [2; 3; 1], you can write
x = 2(1, 0, 1) + 3(1, 1, 0) + 1(0, 1, 1) = (5, 4, 3).
Moreover, because (5, 4, 3) = 5(1, 0, 0) + 4(0, 1, 0) + 3(0, 0, 1), the coordinates of x relative to S are
[x]S = [5; 4; 3].

5. Because [x]B = [1; −2; 3; −1], you can write
x = 1(0, 0, 0, 1) − 2(0, 0, 1, 1) + 3(0, 1, 1, 1) − 1(1, 1, 1, 1) = (−1, 2, 0, 1),
which implies that the coordinates of x relative to the standard basis S are
[x]S = [−1; 2; 0; 1].

7. Begin by writing x as a linear combination of the vectors in B:
x = (12, 6) = c1(4, 0) + c2(0, 3).
Equating corresponding components yields the system
  4c1 = 12
  3c2 = 6.
The solution is c1 = 3 and c2 = 2. So, x = 3(4, 0) + 2(0, 3), and the coordinate vector of x relative to B is [x]B = [3; 2].
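Finding coordinates relative to a basis is just solving a linear system with the basis vectors as columns. A sketch mirroring Exercise 7:

```python
import numpy as np

# Basis vectors of B from Exercise 7 form the columns; solve B c = x.
B = np.array([[4., 0.],
              [0., 3.]])
x = np.array([12., 6.])

c = np.linalg.solve(B, x)   # coordinate vector [x]_B
print(c)  # [3. 2.]
```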
9. Begin by writing x as a linear combination of the vectors in B:
x = (3, 19, 2) = c1(8, 11, 0) + c2(7, 0, 10) + c3(1, 4, 6).
Equating corresponding components yields the system
  8c1 + 7c2 + c3 = 3
  11c1 + 4c3 = 19
  10c2 + 6c3 = 2.
The solution is c1 = 1, c2 = −1, and c3 = 2. So,
x = 1(8, 11, 0) + (−1)(7, 0, 10) + 2(1, 4, 6),
and the coordinate vector of x relative to B is [x]B = [1; −1; 2].

11. Begin by writing x as a linear combination of the vectors in B:
x = (11, 18, −7) = c1(4, 3, 3) + c2(−11, 0, 11) + c3(0, 9, 2).
Equating corresponding components yields the system
  4c1 − 11c2 = 11
  3c1 + 9c3 = 18
  3c1 + 11c2 + 2c3 = −7.
The solution is c1 = 0, c2 = −1, and c3 = 2. So,
x = (11, 18, −7) = 0(4, 3, 3) − 1(−11, 0, 11) + 2(0, 9, 2)
and [x]B = [0; −1; 2].

13. Begin by forming the matrix
[B′ B] = [1 0 6 | 1 0 0; 0 2 0 | 0 1 0; 0 8 12 | 0 0 1]
and then use Gauss-Jordan elimination to produce
[I3 | P⁻¹] = [1 0 0 | 1 2 −1/2; 0 1 0 | 0 1/2 0; 0 0 1 | 0 −1/3 1/12].
So, the transition matrix from B to B′ is
P⁻¹ = [1 2 −1/2; 0 1/2 0; 0 −1/3 1/12].

15. Begin by forming the matrix
[B′ B] = [1 0 | 2 −1; 0 1 | 4 3].
Because this matrix is already in the form [I2 | P⁻¹], the transition matrix from B to B′ is
P⁻¹ = [2 −1; 4 3].

17. Begin by forming the matrix
[B′ B] = [2 1 | 1 0; 4 3 | 0 1]
and then use Gauss-Jordan elimination to produce
[I2 | P⁻¹] = [1 0 | 3/2 −1/2; 0 1 | −2 1].
So, the transition matrix from B to B′ is
P⁻¹ = [3/2 −1/2; −2 1].

19. Begin by forming the matrix
[B′ B] = [2 −1 | 2 1; 1 2 | 5 2]
and then use Gauss-Jordan elimination to produce
[I2 | P⁻¹] = [1 0 | 9/5 4/5; 0 1 | 8/5 3/5].
So, the transition matrix from B to B′ is
P⁻¹ = [9/5 4/5; 8/5 3/5].

21. Begin by forming the matrix
[B′ B] = [1 0 0 | 1 1 1; 0 1 0 | 3 5 4; 0 0 1 | 3 6 5].
Because this matrix is already in the form [I3 | P⁻¹], the transition matrix from B to B′ is
P⁻¹ = [1 1 1; 3 5 4; 3 6 5].
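Row-reducing [B′ B] to [I P⁻¹] is equivalent to computing (B′)⁻¹B with the basis vectors as columns. A sketch using the matrices of Exercise 17, where B is the standard basis:

```python
import numpy as np

# Basis vectors of B' as columns (Exercise 17); B is the standard basis.
Bp = np.array([[2., 1.],
               [4., 3.]])
B  = np.eye(2)

# Gauss-Jordan reduction of [B' B] to [I P^-1] computes the same product.
P_inv = np.linalg.inv(Bp) @ B
print(P_inv)  # [[ 1.5 -0.5]
              #  [-2.   1. ]]
```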
23. Begin by forming the matrix
[B′ B] = [0 −2 1 | 1 −1 2; 2 1 1 | 2 2 4; 1 0 1 | 4 0 0]
and then use Gauss-Jordan elimination to produce
[I3 | P⁻¹] = [1 0 0 | −7 3 10; 0 1 0 | 5 −1 −6; 0 0 1 | 11 −3 −10].
So, the transition matrix from B to B′ is
P⁻¹ = [−7 3 10; 5 −1 −6; 11 −3 −10].

25. Begin by forming the matrix
[B′ B] = [1 −2 −1 −2 | 1 0 0 0; 3 −5 −2 −3 | 0 1 0 0; 2 −5 −2 −5 | 0 0 1 0; −1 4 4 11 | 0 0 0 1]
and then use Gauss-Jordan elimination to produce [I4 | P⁻¹]. So, the transition matrix from B to B′ is
P⁻¹ = [−24 7 1 −2; −10 3 0 −1; −29 7 3 −2; 12 −3 −1 1].

27. Begin by forming the matrix [B′ B] from the vectors of B′ and B, and then use Gauss-Jordan elimination to produce [I5 | P⁻¹]. The right-hand block P⁻¹ is the transition matrix from B to B′.

29. (a) [B′ B] = [−12 −4 | 1 −2; 0 4 | 3 −2] ⇒ [1 0 | −1/3 1/3; 0 1 | 3/4 −1/2] = [I | P⁻¹]
(b) [B B′] = [1 −2 | −12 −4; 3 −2 | 0 4] ⇒ [1 0 | 6 4; 0 1 | 9 4] = [I | P]
(c) P⁻¹P = [−1/3 1/3; 3/4 −1/2][6 4; 9 4] = [1 0; 0 1]
(d) [x]B = P[x]B′ = [6 4; 9 4][−1; 3] = [6; 3]
31. (a) [B′ B] = [2 1 0 | 1 0 1; 1 0 2 | 0 1 1; 1 0 1 | 2 3 1] ⇒ [1 0 0 | 4 5 1; 0 1 0 | −7 −10 −1; 0 0 1 | −2 −2 0] = [I | P⁻¹]
(b) [B B′] = [1 0 1 | 2 1 0; 0 1 1 | 1 0 2; 2 3 1 | 1 0 1] ⇒ [1 0 0 | 1/2 1/2 −5/4; 0 1 0 | −1/2 −1/2 3/4; 0 0 1 | 3/2 1/2 5/4] = [I | P]
(c) P⁻¹P = [4 5 1; −7 −10 −1; −2 −2 0][1/2 1/2 −5/4; −1/2 −1/2 3/4; 3/2 1/2 5/4] = [1 0 0; 0 1 0; 0 0 1]
(d) [x]B = P[x]B′ = [1/2 1/2 −5/4; −1/2 −1/2 3/4; 3/2 1/2 5/4][1; 2; −1] = [11/4; −9/4; 5/4]

33. (a) [B′ B] = [1 4 2 | 4 6 2; 0 2 5 | 2 −5 −1; 4 8 −2 | −4 −6 8] ⇒ [I3 | P⁻¹] with
P⁻¹ = [−48/5 −24 4/5; 4 10 1/2; −6/5 −5 −2/5].
(b) [B B′] = [4 6 2 | 1 4 2; 2 −5 −1 | 0 2 5; −4 −6 8 | 4 8 −2] ⇒ [I3 | P] with
P = [3/32 17/20 5/4; −1/16 −3/10 −1/2; 1/2 6/5 0].
(c) Using a graphing utility, you have PP⁻¹ = I.
(d) [x]B = P[x]B′ = P[1; −1; 2] = [279/160; −61/80; −7/10]

35. The standard basis in P2 is S = {1, x, x²}, and because
p = 4(1) + 11(x) + 1(x²),
it follows that [p]S = [4; 11; 1].

37. The standard basis in P2 is S = {1, x, x²}, and because
p = 1(1) + 5(x) − 2(x²),
it follows that [p]S = [1; 5; −2].

39. The standard basis in M3,1 is
S = {[1; 0; 0], [0; 1; 0], [0; 0; 1]},
and because
X = 0[1; 0; 0] + 3[0; 1; 0] + 2[0; 0; 1],
it follows that [X]S = [0; 3; 2].
41. The standard basis in M3,1 is
S = {[1; 0; 0], [0; 1; 0], [0; 0; 1]},
and because
X = 1[1; 0; 0] + 2[0; 1; 0] − 1[0; 0; 1],
it follows that [X]S = [1; 2; −1].

43. (a) False. See Theorem 4.20, page 253.
(b) True. See the discussion following Example 4, page 257.

45. If P is the transition matrix from B″ to B′, then P[x]B″ = [x]B′. If Q is the transition matrix from B′ to B, then Q[x]B′ = [x]B. So,
[x]B = Q[x]B′ = QP[x]B″,
which means that QP is the transition matrix from B″ to B.

47. If B is the standard basis, then
[B′ B] = [B′ I] ⇒ [I (B′)⁻¹]
shows that P⁻¹, the transition matrix from B to B′, is (B′)⁻¹. If B′ is the standard basis, then
[B′ B] = [I B]
shows that P⁻¹, the transition matrix from B to B′, is B.
Section 4.8 Applications of Vector Spaces 1. (a) If y = e x , then y′′ = e x and y′′ + y = 2e x ≠ 0. So, e x is not a solution to the equation.
(b) If y = sin x, then y′′ = −sin x and y′′ + y = 0. So, sin x is a solution to the equation. (c) If y = cos x, then y′′ = −cos x and y′′ + y = 0. So, cos x is a solution to the equation. (d) If y = sin x − cos x, then y′′ = −sin x + cos x and y′′ + y = 0. So, sin x − cos x is a solution to the equation. 3. (a) If y = e −2 x , then y′ = −2e −2 x and y′′ = 4e −2 x . So, y′′ + 4 y′ + 4 y = 4e −2 x + 4(−2e −2 x ) + 4(e −2 x ) = 0, and e −2 x is a solution.
(b) If y = xe −2 x , then y′ = (1 − 2 x)e −2 x and y′′ = ( 4 x − 4)e −2 x . So, y′′ + 4 y′ + 4 y = ( 4 x − 4)e−2 x + 4(1 − 2 x)e −2 x + 4 xe −2 x = 0, and xe −2 x is a solution.
(c) If y = x 2e −2 x , then y′ = ( 2 x − 2 x 2 )e −2 x and y′′ = ( 4 x 2 − 8 x + 2)e −2 x . So, y′′ + 4 y′ + 4 y = ( 4 x 2 − 8 x + 2)e −2 x + 4( 2 x − 2 x 2 )e −2 x + 4( x 2e −2 x ) ≠ 0, and x 2e −2 x is not a solution.
(d) If y = ( x + 2)e −2 x , then y′ = ( −3 − 2 x)e −2 x and y′′ = ( 4 + 4 x)e −2 x . So, y′′ + 4 y′ + 4 y = ( 4 + 4 x)e−2 x + 4( −3 − 2 x)e −2 x + 4( x + 2)e −2 x = 0, and ( x + 2)e −2 x is a solution.
5. (a) If y = 1/x², then y″ = 6/x⁴. So, x²y″ − 2y = x²(6/x⁴) − 2(1/x²) ≠ 0, and y = 1/x² is not a solution.
(b) If y = x², then y″ = 2. So, x²y″ − 2y = x²(2) − 2x² = 0, and y = x² is a solution.
(c) If y = e^(x²), then y″ = 4x²e^(x²) + 2e^(x²). So, x²y″ − 2y = x²(4x²e^(x²) + 2e^(x²)) − 2e^(x²) ≠ 0, and y = e^(x²) is not a solution.
(d) If y = e^(−x²), then y″ = 4x²e^(−x²) − 2e^(−x²). So, x²y″ − 2y = x²(4x²e^(−x²) − 2e^(−x²)) − 2e^(−x²) ≠ 0, and y = e^(−x²) is not a solution.
7. (a) If y = xe^(2x), then y′ = 2xe^(2x) + e^(2x) and y″ = 4xe^(2x) + 4e^(2x). So,
y″ − y′ − 2y = 4xe^(2x) + 4e^(2x) − (2xe^(2x) + e^(2x)) − 2(xe^(2x)) ≠ 0,
and y = xe^(2x) is not a solution.
(b) If y = 2e^(2x), then y′ = 4e^(2x) and y″ = 8e^(2x). So,
y″ − y′ − 2y = 8e^(2x) − 4e^(2x) − 2(2e^(2x)) = 0,
and y = 2e^(2x) is a solution.
(c) If y = 2e^(−2x), then y′ = −4e^(−2x) and y″ = 8e^(−2x). So,
y″ − y′ − 2y = 8e^(−2x) − (−4e^(−2x)) − 2(2e^(−2x)) ≠ 0,
and y = 2e^(−2x) is not a solution.
(d) If y = xe^(−x), then y′ = e^(−x) − xe^(−x) and y″ = xe^(−x) − 2e^(−x). So,
y″ − y′ − 2y = xe^(−x) − 2e^(−x) − (e^(−x) − xe^(−x)) − 2(xe^(−x)) ≠ 0,
and y = xe^(−x) is not a solution.

9. W(e^x, e^(−x)) = det[e^x e^(−x); e^x −e^(−x)] = −1 − 1 = −2

11. W(x, sin x, cos x) = det[x sin x cos x; 1 cos x −sin x; 0 −sin x −cos x]
= x(−cos²x − sin²x) − 1(−sin x cos x + sin x cos x) = −x

13. W(e^(−x), xe^(−x), (x + 3)e^(−x))
= det[e^(−x) xe^(−x) (x + 3)e^(−x); −e^(−x) (1 − x)e^(−x) (−x − 2)e^(−x); e^(−x) (x − 2)e^(−x) (x + 1)e^(−x)]
= e^(−3x) det[1 x x + 3; −1 1 − x −x − 2; 1 x − 2 x + 1]
= e^(−3x) det[1 x x + 3; 0 1 1; 0 −2 −2]
= 0

15. W(1, e^x, e^(2x)) = det[1 e^x e^(2x); 0 e^x 2e^(2x); 0 e^x 4e^(2x)] = 4e^(3x) − 2e^(3x) = 2e^(3x)

17. Because
W(sin x, cos x) = det[sin x cos x; cos x −sin x] = −sin²x − cos²x = −1 ≠ 0,
the set is linearly independent.

19. Because
W(e^(−2x), xe^(−2x), (2x + 1)e^(−2x))
= det[e^(−2x) xe^(−2x) (2x + 1)e^(−2x); −2e^(−2x) (1 − 2x)e^(−2x) −4xe^(−2x); 4e^(−2x) (4x − 4)e^(−2x) (8x − 4)e^(−2x)]
= e^(−6x) det[1 x 2x + 1; −2 1 − 2x −4x; 4 4x − 4 8x − 4]
= e^(−6x) det[1 x 2x + 1; 0 1 2; 0 −4 −8]
= 0,
the set is linearly dependent.

21. Because
W(2, −1 + 2 sin x, 1 + sin x) = det[2 −1 + 2 sin x 1 + sin x; 0 2 cos x cos x; 0 −2 sin x −sin x]
= −4 cos x sin x + 4 cos x sin x = 0,
the set is linearly dependent. Indeed,
(3/2)(2) + 1(−1 + 2 sin x) − 2(1 + sin x) = 0.
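The Wronskian computations above can be spot-checked numerically. A sketch for Exercise 17 (the derivatives are entered by hand, not computed symbolically):

```python
import numpy as np

def wronskian_sin_cos(x):
    # Rows: the functions (sin x, cos x) and their first derivatives.
    M = np.array([[np.sin(x),  np.cos(x)],
                  [np.cos(x), -np.sin(x)]])
    return np.linalg.det(M)

# W(sin x, cos x) = -sin^2 x - cos^2 x = -1 at every x,
# so the two functions are linearly independent.
for x in (0.0, 0.7, 2.5):
    print(round(wronskian_sin_cos(x), 6))  # -1.0 each time
```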
23. Note that e^(−x) + xe^(−x) is the sum of the first two expressions in the set. So, the set is linearly dependent.

25. From Exercise 17 you have a set of two linearly independent solutions. Because y″ + y = 0 is second order, it has a general solution of the form C1 sin x + C2 cos x.

27. From Exercise 20 you have a set of three linearly independent solutions. Because y‴ + y′ = 0 is third order, it has a general solution of the form C1 + C2 sin x + C3 cos x.

29. Clearly cos ax and sin ax satisfy the differential equation y″ + a²y = 0. Because W(cos ax, sin ax) = a ≠ 0, they are linearly independent. So, the general solution is
y = C1 cos ax + C2 sin ax.
31. First calculate the Wronskian of the two functions:
W(e^(ax), xe^(ax)) = det[e^(ax) xe^(ax); ae^(ax) (ax + 1)e^(ax)] = (ax + 1)e^(2ax) − axe^(2ax) = e^(2ax).
Because W(e^(ax), xe^(ax)) ≠ 0 and the functions are solutions to y″ − 2ay′ + a²y = 0, they are linearly independent.

33. No, this is not true. For instance, consider the nonhomogeneous differential equation y″ = 1. Two solutions are
y1 = x²/2 and y2 = x²/2 + 1,
but y1 + y2 is not a solution.

35. The graph of this equation is a parabola x = −y² with the vertex at the origin. The parabola opens to the left. (y² + x = 0; graph omitted.)
37. First rewrite the equation:
x²/16 + y²/4 = 1.
You see that this is the equation of an ellipse centered at the origin with major axis along the x-axis. (x² + 4y² − 16 = 0; graph omitted.)

39. First rewrite the equation:
x²/9 − y²/16 = 1.
The graph of this equation is a hyperbola centered at the origin with transverse axis along the x-axis. (x²/9 − y²/16 − 1 = 0; graph omitted.)

41. First complete the square to find the standard form:
(x − 1)² = 4(−2)(y + 2).
You see that this is the equation of a parabola with vertex at (1, −2) and opening downward. (x² − 2x + 8y + 17 = 0; graph omitted.)
Copyright © Houghton Mifflin Harcourt Publishing Company. All rights reserved.
126
Chapter 4
Vector Spaces
43. First complete the square to find the standard form.
(3 x
− 6) + (5 y − 5) = 0 2
49. First complete the square to find the standard form.
(y
2
− 5)
2
−
1
The graph of this equation is the single point ( 2, 1).
(x
+ 1)
2
=1
1 2
You see that this is the equation of a hyperbola centered at ( −1, 5) with transverse axis parallel to the y-axis.
y 2
y
10
(2, 1)
1
x 1
8
(− 1, 5)
2 2
9x 2 + 25y 2 − 36x − 50y + 61 = 0
x
45. First complete the square to find the standard form.
( x + 3)
( 13 )
2
−
2
( y − 5)
2
=1
1
You see that this is the equation of a hyperbola centered at ( −3, 5) with transverse axis parallel to the x-axis. y
(−3, 5)
−4 −2
2
4
2x 2 − y 2 + 4x + 10y − 22 = 0
51. First complete the square to find the standard form.
(x
+ 2) = −6( y − 1) 2
This is the equation of a parabola with vertex at ( −2, 1) and opening downward. y
8
1
6 4
−4
−3
−2
−1
2 −6
−4
−1 −2
x
−2
x
1
9x 2 − y 2 + 54x + 10y + 55 = 0
−3
x 2 + 4x + 6y − 2 = 0
47. First complete the square to find the standard form.
(x
+ 2) 2
2
2
+
(y
+ 4)
2
2
1
=1
You see that this is the equation of an ellipse centered at ( −2, − 4) with major axis parallel to the x-axis. y −5 −4 −3 −2 −1
x
−2 −4
x 2 + 4y 2 + 4x + 32y + 64 = 0
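Each completed square can be verified by expanding the standard form back to the original equation. A SymPy sketch (not part of the text) for Exercise 51:

```python
import sympy as sp

x, y = sp.symbols('x y')

# Standard form (x + 2)^2 = -6(y - 1) should expand back to
# the original equation x^2 + 4x + 6y - 2 = 0.
standard = (x + 2)**2 + 6*(y - 1)   # left side minus right side
original = x**2 + 4*x + 6*y - 2
residual = sp.expand(standard - original)
print(residual)  # 0
```

The same check works for every exercise above: the residual between the standard form and the original quadratic must be identically zero.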
53. Begin by finding the rotation angle θ, where
cot 2θ = (a − c)/b = (0 − 0)/1 = 0, implying that θ = π/4.
So, sin θ = 1/√2 and cos θ = 1/√2. By substituting
x = x′ cos θ − y′ sin θ = (1/√2)(x′ − y′)
and
y = x′ sin θ + y′ cos θ = (1/√2)(x′ + y′)
into xy + 1 = 0 and simplifying, you obtain
(x′)²/2 − (y′)²/2 + 1 = 0.
In standard form,
(y′)²/2 − (x′)²/2 = 1.
This is the equation of a hyperbola with transverse axis along the y′-axis (the x′y′-axes are the xy-axes rotated through 45°).
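The rotation substitution can be verified symbolically. The SymPy sketch below (not part of the text; xp and yp stand for x′ and y′) substitutes the 45° rotation into xy + 1 = 0 and confirms that no cross term survives.

```python
import sympy as sp

xp, yp = sp.symbols('xp yp')   # xp, yp stand for x' and y'
s = 1 / sp.sqrt(2)             # sin(pi/4) = cos(pi/4)
x = s * (xp - yp)
y = s * (xp + yp)

expr = sp.expand(x * y + 1)    # becomes xp**2/2 - yp**2/2 + 1
cross = expr.coeff(xp * yp)    # coefficient of the x'y' term
print(expr, cross)
```

The expanded result is the rotated equation (x′)²/2 − (y′)²/2 + 1 = 0 from the solution above, with cross-term coefficient 0.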
55. Begin by finding the rotation angle θ, where
cot 2θ = (a − c)/b = (4 − 4)/2 = 0 ⇒ θ = π/4.
So, sin θ = 1/√2 and cos θ = 1/√2. By substituting
x = x′ cos θ − y′ sin θ = (1/√2)(x′ − y′)
and
y = x′ sin θ + y′ cos θ = (1/√2)(x′ + y′)
into 4x² + 2xy + 4y² − 15 = 0 and simplifying, you obtain
(x′)²/3 + (y′)²/5 = 1,
which is an ellipse with major axis along the y′-axis (a 45° rotation).

59. Begin by finding the rotation angle θ, where
cot 2θ = (a − c)/b = (13 − 7)/(6√3) = 1/√3 ⇒ 2θ = π/3 ⇒ θ = π/6.
So, sin θ = 1/2 and cos θ = √3/2. By substituting
x = x′ cos θ − y′ sin θ = (√3/2)x′ − (1/2)y′
and
y = x′ sin θ + y′ cos θ = (1/2)x′ + (√3/2)y′
into 13x² + 6√3 xy + 7y² − 16 = 0 and simplifying, you obtain
(x′)² + (y′)²/4 = 1,
which is an ellipse with major axis along the y′-axis (a 30° rotation).
57. Begin by finding the rotation angle θ, where
cot 2θ = (5 − 5)/(−2) = 0, implying that θ = π/4.
So, sin θ = 1/√2 and cos θ = 1/√2. By substituting
x = x′ cos θ − y′ sin θ = (1/√2)(x′ − y′)
and
y = x′ sin θ + y′ cos θ = (1/√2)(x′ + y′)
into 5x² − 2xy + 5y² − 24 = 0 and simplifying, you obtain
4(x′)² + 6(y′)² − 24 = 0.
In standard form,
(x′)²/6 + (y′)²/4 = 1.
This is the equation of an ellipse with major axis along the x′-axis.

61. Begin by finding the rotation angle θ, where
cot 2θ = (1 − 3)/(2√3) = −1/√3, implying that θ = π/3.
So, sin θ = √3/2 and cos θ = 1/2. By substituting
x = x′ cos θ − y′ sin θ = (1/2)(x′ − √3 y′)
and
y = x′ sin θ + y′ cos θ = (1/2)(√3 x′ + y′)
into x² + 2√3 xy + 3y² − 2√3 x + 2y + 16 = 0 and simplifying, you obtain
4(x′)² + 4y′ + 16 = 0.
In standard form, y′ + 4 = −(x′)².
This is the equation of a parabola with axis along the y′-axis and vertex at (x′, y′) = (0, −4).
63. Begin by finding the rotation angle θ, where
cot 2θ = (1 − 1)/(−2) = 0, implying that θ = π/4.
So, sin θ = 1/√2 and cos θ = 1/√2. By substituting
x = x′ cos θ − y′ sin θ = (1/√2)(x′ − y′)
and
y = x′ sin θ + y′ cos θ = (1/√2)(x′ + y′)
into x² − 2xy + y² = 0 and simplifying, you obtain
2(y′)² = 0.
The graph of this equation is the line y′ = 0.

65. Begin by finding the rotation angle θ, where
cot 2θ = (1 − 1)/2 = 0, implying that θ = π/4.
So, sin θ = 1/√2 and cos θ = 1/√2. By substituting
x = (1/√2)(x′ − y′) and y = (1/√2)(x′ + y′)
into x² + 2xy + y² − 1 = 0 and simplifying, you obtain
2(x′)² − 1 = 0.
The graph of this equation is the two lines x′ = ±√2/2.

67. If θ = π/4, then sin θ = 1/√2 and cos θ = 1/√2. So,
x = x′ cos θ − y′ sin θ = (1/√2)(x′ − y′)
and
y = x′ sin θ + y′ cos θ = (1/√2)(x′ + y′).
Substituting these expressions for x and y into ax² + bxy + ay² + dx + ey + f = 0, you obtain
(a/2)(x′ − y′)² + (b/2)(x′ − y′)(x′ + y′) + (a/2)(x′ + y′)² + (d/√2)(x′ − y′) + (e/√2)(x′ + y′) + f = 0.
Expanding the first three terms, you see that the x′y′-term has been eliminated: the cross terms −ax′y′ and +ax′y′ from the two squares cancel, and (b/2)(x′ − y′)(x′ + y′) = (b/2)((x′)² − (y′)²) contains no x′y′-term.

69. Let
A = [a, b/2; b/2, c]
and assume |A| = ac − b²/4 ≠ 0. If a = 0, then
ax² + bxy + cy² = bxy + cy² = y(cy + bx) = 0,
which implies that y = 0 or y = −bx/c, the equations of two intersecting lines. On the other hand, if a ≠ 0, then you can divide ax² + bxy + cy² = 0 through by a to obtain
x² + (b/a)xy + (c/a)y² = x² + (b/a)xy + (b/(2a))²y² + (c/a)y² − (b/(2a))²y² = 0
⇒ (x + (b/(2a))y)² = ((b/(2a))² − c/a)y².
Because 4ac ≠ b², you can see that this last equation represents two intersecting lines.
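Exercise 67's claim, that θ = π/4 removes the cross term whenever the x² and y² coefficients are equal, can be checked with fully symbolic coefficients; a SymPy sketch (not part of the text; xp and yp stand for x′ and y′):

```python
import sympy as sp

a, b, d, e, f, xp, yp = sp.symbols('a b d e f xp yp')
s = 1 / sp.sqrt(2)          # sin(pi/4) = cos(pi/4)
x = s * (xp - yp)
y = s * (xp + yp)

# Conic with equal x^2 and y^2 coefficients, as in Exercise 67.
conic = a*x**2 + b*x*y + a*y**2 + d*x + e*y + f
cross = sp.expand(conic).coeff(xp * yp)
print(cross)  # 0
```

The coefficient of x′y′ vanishes identically, for every choice of a, b, d, e, and f.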
Review Exercises for Chapter 4
Review Exercises for Chapter 4 1. (a) u + v = ( −1, 2, 3) + (1, 0, 2) = ( −1 + 1, 2 + 0, 3 + 2) = (0, 2, 5) (b) 2 v = 2(1, 0, 2) = ( 2, 0, 4) (c) u − v = ( −1, 2, 3) − (1, 0, 2) = ( −1 − 1, 2 − 0, 3 − 2) = (−2, 2, 1) (d) 3u − 2 v = 3( −1, 2, 3) − 2(1, 0, 2) = ( −3, 6, 9) − ( 2, 0, 4) = ( −3 − 2, 6 − 0, 9 − 4) = ( −5, 6, 5)
3. (a) u + v = (3, −1, 2, 3) + (0, 2, 2, 1) = (3, 1, 4, 4) (b) 2 v = 2(0, 2, 2, 1) = (0, 4, 4, 2) (c) u − v = (3, −1, 2, 3) − (0, 2, 2, 1) = (3, − 3, 0, 2) (d) 3u − 2 v = 3(3, −1, 2, 3) − 2(0, 2, 2, 1) = (9, − 3, 6, 9) − (0, 4, 4, 2) = (9, − 7, 2, 7)
5. x = (1/2)u − (3/2)v − (1/2)w
     = (1/2)(1, −1, 2) − (3/2)(0, 2, 3) − (1/2)(0, 1, 1)
     = (1/2, −4, −4)

7. 5u − 2x = 3v + w
   −2x = −5u + 3v + w
   x = (5/2)u − (3/2)v − (1/2)w
     = (5/2)(1, −1, 2) − (3/2)(0, 2, 3) − (1/2)(0, 1, 1)
     = (5/2, −5/2, 5) − (0, 3, 9/2) − (0, 1/2, 1/2)
     = (5/2, −6, 0)

9. To write v as a linear combination of u₁, u₂, and u₃, solve the equation c₁u₁ + c₂u₂ + c₃u₃ = v for c₁, c₂, and c₃. This vector equation corresponds to the system of linear equations
   c₁ + 2c₂ + c₃ = 3
   −c₁ + 4c₂ + 2c₃ = 0
   2c₁ − 2c₂ − 4c₃ = −6.
The solution of this system is c₁ = 2, c₂ = −1, and c₃ = 3. So, v = 2u₁ − u₂ + 3u₃.

11. To write v as a linear combination of u₁, u₂, and u₃, solve the equation c₁u₁ + c₂u₂ + c₃u₃ = v for c₁, c₂, and c₃. This vector equation corresponds to the system of linear equations
   c₁ − c₂ = 1
   2c₁ − 2c₂ = 2
   3c₁ − 3c₂ + c₃ = 3
   4c₁ + 4c₂ + c₃ = 5.
The solution of this system is c₁ = 9/8, c₂ = 1/8, and c₃ = 0. So, v = (9/8)u₁ + (1/8)u₂.
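Coefficient systems like those in Exercises 9 and 11 can be solved numerically. A NumPy sketch (NumPy is not part of the text) for the Exercise 9 system:

```python
import numpy as np

# Rows are the three equations in c1, c2, c3 from Exercise 9.
A = np.array([[ 1,  2,  1],
              [-1,  4,  2],
              [ 2, -2, -4]], dtype=float)
v = np.array([3, 0, -6], dtype=float)

c = np.linalg.solve(A, v)
print(np.round(c, 10))  # [ 2. -1.  3.]
```

The computed coefficients reproduce v = 2u₁ − u₂ + 3u₃.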
13. The zero vector of M₃,₄ is the 3 × 4 zero matrix (every entry 0). The additive inverse of the matrix [aᵢⱼ] is [−aᵢⱼ]; that is, each entry is negated.

15. The zero vector is (0, 0, 0). The additive inverse of a vector (a₁, a₂, a₃) in R³ is (−a₁, −a₂, −a₃).
17. Because W = {(x, y) : x = 2y} is nonempty and W ⊂ R², you need only check that W is closed under addition and scalar multiplication. Because
(2x₁, x₁) + (2x₂, x₂) = (2(x₁ + x₂), x₁ + x₂) ∈ W
and
c(2x₁, x₁) = (2(cx₁), cx₁) ∈ W,
conclude that W is a subspace of R².

19. W is not a subspace of R². Although W = {(x, y) : y = ax, a is an integer} is nonempty and W ⊂ R², W fails closure under scalar multiplication: c(x₁, ax₁) = (cx₁, acx₁) need not be in W, because ac is not necessarily an integer.

21. Because W = {(x, 2x, 3x) : x is a real number} is nonempty and W ⊂ R³, you need only check that W is closed under addition and scalar multiplication. Because
(x₁, 2x₁, 3x₁) + (x₂, 2x₂, 3x₂) = (x₁ + x₂, 2(x₁ + x₂), 3(x₁ + x₂)) ∈ W
and
c(x₁, 2x₁, 3x₁) = (cx₁, 2(cx₁), 3(cx₁)) ∈ W,
conclude that W is a subspace of R³.

23. W is not a subspace of C[−1, 1]. For instance, f(x) = x − 1 and g(x) = −1 are in W, but their sum (f + g)(x) = x − 2 is not in W, because (f + g)(0) = −2 ≠ −1. So, W is not closed under addition (nor under scalar multiplication).

25. (a) The only vector in W is the zero vector. So, W is nonempty and W ⊂ R³. Furthermore, because W is closed under addition and scalar multiplication, it is a subspace of R³.
(b) W is not closed under addition or scalar multiplication, so it is not a subspace of R³. For example, (1, 0, 0) ∈ W, and yet 2(1, 0, 0) = (2, 0, 0) ∉ W.
27. (a) To find out whether S spans R 3 , form the vector equation c1 (1, − 5, 4) + c2 (11, 6, −1) + c3 ( 2, 3, 5) = (u1 , u2 , u3 ). This yields the system of linear equations c1 + 11c2 + 2c3 = u1 −5c1 + 6c2 + 3c3 = u2 4c1 −
c2 + 5c3 = u3.
This system has a unique solution for every (u1 , u2 , u3 ) because the determinant of the coefficient matrix is not zero. So, S spans R 3 . (b) Solving the same system in (a) with (u1 , u2 , u3 ) = (0, 0, 0) yields the trivial solution. So, S is linearly independent. (c) Because S is linearly independent and S spans R 3 , it is a basis for R 3 .
29. (a) To find out whether S spans R³, form the vector equation
c₁(−1/2, 3/4, −1) + c₂(5, 2, 3) + c₃(−4, 6, −8) = (u₁, u₂, u₃).
This yields the system
−(1/2)c₁ + 5c₂ − 4c₃ = u₁
(3/4)c₁ + 2c₂ + 6c₃ = u₂
−c₁ + 3c₂ − 8c₃ = u₃,
which is equivalent to the system
c₁ − 10c₂ + 8c₃ = −2u₁
7c₂ = 2u₁ − u₃
0 = −34u₁ + 28u₂ + 38u₃.
So, there are vectors (u₁, u₂, u₃) not spanned by S. For instance, (0, 0, 1) ∉ span(S).
(b) Solving the same system in (a) with (u₁, u₂, u₃) = (0, 0, 0) yields nontrivial solutions, for instance c₁ = −8, c₂ = 0, and c₃ = 1. So,
−8(−1/2, 3/4, −1) + 0(5, 2, 3) + 1(−4, 6, −8) = (0, 0, 0),
and S is linearly dependent.
(c) S is not a basis because it does not span R³, nor is it linearly independent.
31. (a) S spans R³ because the first three vectors in the set form the standard basis of R³.
(b) S is linearly dependent because the fourth vector is a linear combination of the first three: (−1, 2, −3) = −1(1, 0, 0) + 2(0, 1, 0) − 3(0, 0, 1).
(c) S is not a basis because it is not linearly independent.
33. S has four vectors, and dim P₃ = 4, so you need only check that S is linearly independent. Form the vector equation
c₁(1 − t) + c₂(2t + 3t²) + c₃(t² − 2t³) + c₄(2 + t³) = 0 + 0t + 0t² + 0t³,
which yields the homogeneous system of linear equations
c₁ + 2c₄ = 0
−c₁ + 2c₂ = 0
3c₂ + c₃ = 0
−2c₃ + c₄ = 0.
This system has only the trivial solution. So, S is linearly independent, and S is a basis for P₃.
35. S has four vectors, and dim M₂,₂ = 4, so you need only check that S is linearly independent. Form the vector equation
c₁[1, 0; 2, 3] + c₂[−2, 1; −1, 0] + c₃[3, 4; 2, 3] + c₄[−3, −3; 1, 3] = [0, 0; 0, 0],
which yields the homogeneous system of linear equations
c₁ − 2c₂ + 3c₃ − 3c₄ = 0
c₂ + 4c₃ − 3c₄ = 0
2c₁ − c₂ + 2c₃ + c₄ = 0
3c₁ + 3c₃ + 3c₄ = 0.
Because this system has nontrivial solutions, S is not a basis. For example, one solution is c₁ = 2, c₂ = 1, c₃ = −1, c₄ = −1:
2[1, 0; 2, 3] + [−2, 1; −1, 0] − [3, 4; 2, 3] − [−3, −3; 1, 3] = [0, 0; 0, 0].
37. (a) This system has solutions of the form (−2s − 3t, s, 4t, t), where s and t are any real numbers. A basis for the solution space is {(−2, 1, 0, 0), (−3, 0, 4, 1)}.
(b) The dimension of the solution space is 2, the number of vectors in a basis for the solution space.

39. (a) This system has solutions of the form ((2/7)s − t, (3/7)s, s, t), where s and t are any real numbers. A basis is {(2, 3, 7, 0), (−1, 0, 0, 1)}.
(b) The dimension of the solution space is 2, the number of vectors in a basis for the solution space.

41. The system given by Ax = 0 has solutions of the form (8t, 5t), where t is any real number. So, a basis for the solution space is {(8, 5)}. The rank of A is 1 (the number of nonzero row vectors in the reduced row-echelon matrix) and the nullity is 1. Note that rank(A) + nullity(A) = 1 + 1 = 2 = n.

43. The system given by Ax = 0 has solutions of the form (3s − t, −2t, s, t), where s and t are any real numbers. So, a basis for the solution space of Ax = 0 is {(3, 0, 1, 0), (−1, −2, 0, 1)}. The rank of A is 2 and the nullity of A is 2. Note that rank(A) + nullity(A) = 2 + 2 = 4 = n.

45. The system given by Ax = 0 has solutions of the form (4t, −2t, t), where t is any real number. So, a basis for the solution space is {(4, −2, 1)}. The rank of A is 2 and the nullity is 1. Note that rank(A) + nullity(A) = 2 + 1 = 3 = n.
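The rank-plus-nullity bookkeeping in Exercises 41–45 is easy to automate. The matrix below is a hypothetical example (the manual does not reprint A itself) whose reduced row-echelon form matches the solution pattern of Exercise 45; SymPy computes its rank and null space.

```python
import sympy as sp

# Hypothetical matrix whose null space is spanned by (4, -2, 1),
# consistent with the solution form (4t, -2t, t) in Exercise 45.
A = sp.Matrix([[1, 0, -4],
               [0, 1,  2]])

rank = A.rank()        # 2
null = A.nullspace()   # list with one basis vector
n = A.cols
print(rank, len(null), rank + len(null) == n)  # 2 1 True
```

The identity rank(A) + nullity(A) = n holds for any matrix with n columns.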
47. (a) Using Gauss-Jordan elimination, the matrix reduces to
[1, 0; 0, 1; 0, 0].
So, the rank is 2.
(b) A basis for the row space is {(1, 0), (0, 1)}.

49. (a) Because the matrix is already row-reduced, its rank is 1.
(b) A basis for the row space is {(1, −4, 0, 4)}.

51. (a) Using Gauss-Jordan elimination, the matrix reduces to the identity matrix I₃. So, the rank is 3.
(b) A basis for the row space is {(1, 0, 0), (0, 1, 0), (0, 0, 1)}.

53. Because [x]_B = (3, 5), write x as
x = 3(1, 1) + 5(−1, 1) = (−2, 8). Because (−2, 8) = −2(1, 0) + 8(0, 1), the coordinate vector of x relative to the standard basis is
[x]_S = (−2, 8).

55. Because [x]_B = (1/2, 1/2), write x as
x = (1/2)(1/2, 1/2) + (1/2)(1, 0) = (3/4, 1/4).
Because (3/4, 1/4) = (3/4)(1, 0) + (1/4)(0, 1), the coordinate vector of x relative to the standard basis is
[x]_S = (3/4, 1/4).

57. Because [x]_B = (2, 0, −1), write x as
x = 2(1, 0, 0) + 0(1, 1, 0) − 1(0, 1, 1) = (2, −1, −1).
Because (2, −1, −1) = 2(1, 0, 0) − 1(0, 1, 0) − 1(0, 0, 1), the coordinate vector of x relative to the standard basis is
[x]_S = (2, −1, −1).

59. To find [x]_B′ = (c₁, c₂), solve the equation
c₁(5, 0) + c₂(0, −8) = (2, 2).
The resulting system of linear equations is
5c₁ = 2
−8c₂ = 2.
So, c₁ = 2/5, c₂ = −1/4, and [x]_B′ = (2/5, −1/4).
61. To find [x]_B′ = (c₁, c₂, c₃), solve the equation
c₁(1, 2, 3) + c₂(1, 2, 0) + c₃(0, −6, 2) = (3, −3, 0).
The resulting system of linear equations is
c₁ + c₂ = 3
2c₁ + 2c₂ − 6c₃ = −3
3c₁ + 2c₃ = 0.
The solution to this system is c₁ = −1, c₂ = 4, c₃ = 3/2, and
[x]_B′ = (−1, 4, 3/2).
63. To find [x]_B′ = (c₁, c₂, c₃, c₄), solve the equation
c₁(9, −3, 15, 4) + c₂(−3, 0, 0, −1) + c₃(0, −5, 6, 8) + c₄(−3, 4, −2, 3) = (21, −5, 43, 14).
Forming the corresponding linear system, you find its solution to be c₁ = 3, c₂ = 1, c₃ = 0, and c₄ = 1. So,
[x]_B′ = (3, 1, 0, 1).
65. Begin by finding x relative to the standard basis:
x = 3(1, 1) + (−3)(−1, 1) = (6, 0).
Then solve for [x]_B′ = (c₁, c₂) by forming the equation
c₁(0, 1) + c₂(1, 2) = (6, 0).
The resulting system of linear equations is
c₂ = 6
c₁ + 2c₂ = 0.
The solution to this system is c₁ = −12 and c₂ = 6. So, [x]_B′ = (−12, 6).

67. Begin by finding x relative to the standard basis:
x = −(1, 0, 0) + 2(1, 1, 0) − 3(1, 1, 1) = (−2, −1, −3).
Then solve for [x]_B′ = (c₁, c₂, c₃) by forming the equation
c₁(0, 0, 1) + c₂(0, 1, 1) + c₃(1, 1, 1) = (−2, −1, −3).
The resulting system of linear equations is
c₃ = −2
c₂ + c₃ = −1
c₁ + c₂ + c₃ = −3.
The solution to this system is c₁ = −2, c₂ = 1, and c₃ = −2. So, [x]_B′ = (−2, 1, −2).
69. Begin by forming
[B′ B] = [1, 0 : 1, 3; 0, 1 : −1, 1].
Because this matrix is already in the form [I₂ : P⁻¹], you have
P⁻¹ = [1, 3; −1, 1].

71. Begin by forming
[B′ B] = [0, 0, 1 : 1, 0, 0; 0, 1, 0 : 0, 1, 0; 1, 0, 0 : 0, 0, 1].
Then use Gauss-Jordan elimination to obtain
[I₃ : P⁻¹] = [1, 0, 0 : 0, 0, 1; 0, 1, 0 : 0, 1, 0; 0, 0, 1 : 1, 0, 0].
So,
P⁻¹ = [0, 0, 1; 0, 1, 0; 1, 0, 0].

73. Begin by finding a basis for W. The polynomials in W must have x as a factor. Consequently, a polynomial in W is of the form
p = x(c₁ + c₂x + c₃x²) = c₁x + c₂x² + c₃x³.
A basis for W is {x, x², x³}. Similarly, the polynomials in U must have (x − 1) as a factor. A polynomial in U is of the form
p = (x − 1)(c₁ + c₂x + c₃x²) = c₁(x − 1) + c₂(x² − x) + c₃(x³ − x²).
So, a basis for U is {x − 1, x² − x, x³ − x²}. The intersection of W and U contains polynomials with both x and (x − 1) as factors. A polynomial in W ∩ U is of the form
p = x(x − 1)(c₁ + c₂x) = c₁(x² − x) + c₂(x³ − x²).
So, a basis for W ∩ U is {x² − x, x³ − x²}.

75. No. For example, the set {x² + x, x² − x, 1} is a basis for P₂.
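The Gauss-Jordan computation of P⁻¹ from [B′ B] amounts to solving B′P⁻¹ = B. A NumPy sketch (not part of the text) for Exercise 71:

```python
import numpy as np

# Columns of B' (Exercise 71); B is the standard basis.
Bprime = np.array([[0, 0, 1],
                   [0, 1, 0],
                   [1, 0, 0]], dtype=float)
B = np.eye(3)

# Solving B' @ P_inv = B is the same as reducing [B' | B] to [I | P^-1].
P_inv = np.linalg.solve(Bprime, B)
print(P_inv)
```

The result is the permutation matrix [0, 0, 1; 0, 1, 0; 1, 0, 0], matching the hand computation.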
77. Because W is a nonempty subset of V, you need only show that W is closed under addition and scalar multiplication. If ( x3 + x) p( x) and ( x3 + x)q( x) are in W, then ( x3 + x) p( x) + ( x3 + x)q( x) = ( x3 + x)( p( x) + q( x)) ∈ W . Finally, c( x3 + x) p( x) = ( x3 + x)(cp( x)) ∈ W . So, W is a subspace of P5 = V .
79. The row vectors of A are linearly dependent if and only if the rank of A is less than n, which is equivalent to the column vectors of A being linearly dependent.

81. (a) Consider the equation c₁f + c₂g = c₁x + c₂|x| = 0. If x = 1/2, then (1/2)c₁ + (1/2)c₂ = 0, while if x = −1/2, you obtain −(1/2)c₁ + (1/2)c₂ = 0. This implies that c₁ = c₂ = 0, and f and g are linearly independent on [−1, 1].
(b) On the interval [0, 1], f = g = x, and so they are linearly dependent.
83. (a) True. See discussion above “Definition of Vector Addition and Scalar Multiplication in R n , ” page 183. (b) False. See Theorem 4.3, part 2, page 186. (c) True. See “Definition of Vector Space” and the discussion following, page 191.
85. (a) True. See discussion under “Vectors in R n , ” page 183. (b) False. See “Definition of Vector Space,” part 4, page 191. (c) True. See discussion following “Summary of Important Vector Spaces,” page 194.
87. (a) Because y′ = 3e3 x and y′′ = 9e3 x , you have y′′ − y′ − 6 y = 9e3 x − 3e3 x − 6(e3 x ) = 0. Therefore, e3 x is a solution. (b) Because y′ = 2e 2 x and y′′ = 4e2 x , you have y′′ − y′ − 6 y = 4e 2 x − 2e 2 x − 6(e 2 x ) = −4e 2 x ≠ 0. Therefore, e2 x is not a solution.
(c) Because y′ = −3e −3 x and y′′ = 9e −3 x , you have y′′ − y′ − 6 y = 9e −3 x − (−3e −3 x ) − 6(e −3 x ) = 6e −3 x ≠ 0. Therefore, e−3 x is not a solution. (d) Because y′ = −2e −2 x and y′′ = 4e −2 x , you have y′′ − y′ − 6 y = 4e −2 x − (−2e−2 x ) − 6(e −2 x ) = 0. Therefore, e −2 x is a solution.
89. (a) Because y′ = −2e −2 x , you have y′ + 2 y = −2e −2 x + 2e −2 x = 0. Therefore, e −2 x is a solution. (b) Because y′ = e −2 x − 2 xe −2 x , you have y′ + 2 y = e −2 x − 2 xe −2 x + 2 xe−2 x = e −2 x ≠ 0. Therefore, xe −2 x is not a solution. (c) Because y′ = 2 xe − x − x 2e − x , you have y′ + 2 y = 2 xe − x − x 2e − x + 2 x 2e − x ≠ 0. Therefore, x 2e − x is not a solution. (d) Because y′ = 2e −2 x − 4 xe −2 x , you have y′ + 2 y = 2e −2 x − 4 xe −2 x + 2( 2 xe −2 x ) = 2e −2 x ≠ 0. Therefore, 2 xe −2 x is not a solution.
91. W(1, x, eˣ) is the determinant of the matrix with rows
[1, x, eˣ]
[0, 1, eˣ]
[0, 0, eˣ],
so W = eˣ. Because eˣ ≠ 0, the set is linearly independent.

93. W(1, sin 2x, cos 2x) is the determinant of the matrix with rows
[1, sin 2x, cos 2x]
[0, 2 cos 2x, −2 sin 2x]
[0, −4 sin 2x, −4 cos 2x],
so W = −8 cos²2x − 8 sin²2x = −8. Because −8 ≠ 0, the set is linearly independent.

95. The Wronskian of this set is
W(e^(−3x), xe^(−3x)) = det[e^(−3x), xe^(−3x); −3e^(−3x), (1 − 3x)e^(−3x)]
= (1 − 3x)e^(−6x) + 3xe^(−6x)
= e^(−6x).
Because W(e^(−3x), xe^(−3x)) = e^(−6x) ≠ 0, the set is linearly independent.

97. The Wronskian of this set is
W(eˣ, e^(2x), eˣ − e^(2x)) = det[eˣ, e^(2x), eˣ − e^(2x); eˣ, 2e^(2x), eˣ − 2e^(2x); eˣ, 4e^(2x), eˣ − 4e^(2x)] = 0.
Because the third column is the difference of the first two columns, the determinant is zero and the set is linearly dependent.

99. Begin by completing the square.
x² + y² − 4x + 2y − 4 = 0
(x² − 4x + 4) + (y² + 2y + 1) = 4 + 4 + 1
(x − 2)² + (y + 1)² = 9
This is the equation of a circle of radius √9 = 3, centered at (2, −1).

101. Begin by completing the square.
x² − y² + 2x − 3 = 0
(x² + 2x + 1) − y² = 3 + 1
(x + 1)² − y² = 4
(x + 1)²/2² − y²/2² = 1
This is the equation of a hyperbola with center (−1, 0).
(Each of the last two solutions includes a sketch of the curve in the xy-plane.)
103. Begin by completing the square.
2x² − 20x − y + 46 = 0
2(x² − 10x + 25) = y − 46 + 50
2(x − 5)² = y + 4
This is the equation of a parabola with vertex (5, −4).

105. Begin by completing the square.
4x² + y² + 32x + 4y + 63 = 0
4(x² + 8x + 16) + (y² + 4y + 4) = −63 + 64 + 4
4(x + 4)² + (y + 2)² = 5
This is the equation of an ellipse with center (−4, −2).

107. From the equation,
cot 2θ = (a − c)/b = (0 − 0)/1 = 0,
you find that the angle of rotation is θ = π/4. Therefore, sin θ = 1/√2 and cos θ = 1/√2. By substituting
x = x′ cos θ − y′ sin θ = (1/√2)(x′ − y′)
and
y = x′ sin θ + y′ cos θ = (1/√2)(x′ + y′)
into xy = 3, you obtain
(1/2)(x′)² − (1/2)(y′)² = 3.
In standard form,
(x′)²/6 − (y′)²/6 = 1,
which you can recognize as the equation of a hyperbola whose transverse axis is the x′-axis.

109. From the equation,
cot 2θ = (a − c)/b = (16 − 9)/(−24) = −7/24,
you find that the angle of rotation is θ ≈ −36.87°. Therefore, sin θ ≈ −0.6 and cos θ ≈ 0.8. By substituting
x = x′ cos θ − y′ sin θ = 0.8x′ + 0.6y′
and
y = x′ sin θ + y′ cos θ = −0.6x′ + 0.8y′
into 16x² − 24xy + 9y² − 60x − 80y + 100 = 0, you obtain
25(x′)² − 100y′ = −100.
In standard form,
(x′)² = 4(y′ − 1),
which you can recognize as the equation of a parabola with vertex at (x′, y′) = (0, 1).
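The rotation in Exercise 109 can be verified with the exact values sin θ = −3/5 and cos θ = 4/5; a SymPy sketch (not part of the text; xp and yp stand for x′ and y′):

```python
import sympy as sp

xp, yp = sp.symbols('xp yp')
s, c = sp.Rational(-3, 5), sp.Rational(4, 5)   # sin, cos of the rotation angle
x = c*xp - s*yp    # = 0.8 x' + 0.6 y'
y = s*xp + c*yp    # = -0.6 x' + 0.8 y'

conic = 16*x**2 - 24*x*y + 9*y**2 - 60*x - 80*y + 100
result = sp.expand(conic)
print(result)  # 25*xp**2 - 100*yp + 100
```

The cross term and the x′ linear term both vanish, leaving 25(x′)² − 100y′ + 100 = 0, exactly as derived above.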
CHAPTER 4  Vector Spaces

Section 4.1  Vectors in Rⁿ ............................................ 97
Section 4.2  Vector Spaces ............................................ 102
Section 4.3  Subspaces of Vector Spaces ............................... 106
Section 4.4  Spanning Sets and Linear Independence .................... 108
Section 4.5  Basis and Dimension ...................................... 112
Section 4.6  Rank of a Matrix and Systems of Linear Equations ......... 115
Section 4.7  Coordinates and Change of Basis .......................... 120
Section 4.8  Applications of Vector Spaces ............................ 124
Review Exercises ...................................................... 131
Project Solutions ..................................................... 138
CHAPTER 4
Vector Spaces

Section 4.1  Vectors in Rⁿ

2. v = (−6, 3)

4.–6. (The solutions are sketches of the given vectors in the plane; only the graphs appear in the manual.)

8. u + v = (−1, 4) + (4, −3) = (−1 + 4, 4 − 3) = (3, 1)

10. u + v = (4, −2) + (−2, −3) = (4 − 2, −2 − 3) = (2, −5)

12. v = u + w = (−2, 3) + (−3, −2) = (−5, 1)

14. v = −u + w = −(−2, 3) + (−3, −2) = (−1, −5)

16. v = u − 2w = (−2, 3) − 2(−3, −2) = (4, 7)

18. (a) 4v = 4(3, −2) = (12, −8)
(b) −(1/2)v = −(1/2)(3, −2) = (−3/2, 1)
(c) 0v = 0(3, −2) = (0, 0)
(Each of Exercises 8–18 includes a sketch of the resulting vector in the plane.)
20. u − v + 2w = (1, 2, 3) − (2, 2, −1) + 2(4, 0, −4) = (−1, 0, 4) + (8, 0, −8) = (7, 0, −4)

22. 5u − 3v − (1/2)w = 5(1, 2, 3) − 3(2, 2, −1) − (1/2)(4, 0, −4)
= (5, 10, 15) − (6, 6, −3) − (2, 0, −2)
= (−3, 4, 20)

24. 2u + v − w + 3z = 0 implies that
3z = −2u − v + w = (−2, −4, −6) − (2, 2, −1) + (4, 0, −4) = (0, −6, −9).
So, z = (1/3)(0, −6, −9) = (0, −2, −3).

26. (a) −v = −(2, 0, 1) = (−2, 0, −1)
(b) 2v = 2(2, 0, 1) = (4, 0, 2)
(c) (1/2)v = (1/2)(2, 0, 1) = (1, 0, 1/2)
(The solution includes a sketch of the three vectors in 3-space.)

28. (a) Because (6, −4, 9) ≠ c(1/2, −2/3, 3/4) for any c, u is not a scalar multiple of z.
(b) Because (−1, 4/3, −3/2) = −2(1/2, −2/3, 3/4), v is a scalar multiple of z.
(c) Because (12, 0, 9) ≠ c(1/2, −2/3, 3/4) for any c, w is not a scalar multiple of z.

30. (a) u − v = (0, 4, 3, 4, 4) − (6, 8, −3, 3, −5) = (−6, −4, 6, 1, 9)
(b) 2(u + 3v) = 2[(0, 4, 3, 4, 4) + 3(6, 8, −3, 3, −5)]
= 2[(0, 4, 3, 4, 4) + (18, 24, −9, 9, −15)]
= 2(18, 28, −6, 13, −11)
= (36, 56, −12, 26, −22)
(c) 2v − u = 2(6, 8, −3, 3, −5) − (0, 4, 3, 4, 4) = (12, 16, −6, 6, −10) − (0, 4, 3, 4, 4) = (12, 12, −9, 2, −14)

32. (a) u − v = (6, −5, 4, 3) − (−2, 5/3, −4/3, −1) = (8, −20/3, 16/3, 4)
(b) 2(u + 3v) = 2[(6, −5, 4, 3) + 3(−2, 5/3, −4/3, −1)] = 2[(6, −5, 4, 3) + (−6, 5, −4, −3)] = 2(0, 0, 0, 0) = (0, 0, 0, 0)
(c) 2v − u = 2(−2, 5/3, −4/3, −1) − (6, −5, 4, 3) = (−4, 10/3, −8/3, −2) − (6, −5, 4, 3) = (−10, 25/3, −20/3, −5)

34. Using a graphing utility with u = (1, 2, −3, 1), v = (0, 2, −1, −2), and w = (2, −2, 1, 3), you have:
(a) v + 3w = (6, −4, 2, 7)
(b) 2w − (1/2)u = (7/2, −5, 7/2, 11/2)
(c) 2u + w − 3v = (4, −4, −2, 11)

36. w + u = −v
w = −v − u = −(0, 2, 3, −1) − (1, −1, 0, 1) = (−1, −1, −3, 0)

38. w + 3v = −2u
w = −2u − 3v = −2(1, −1, 0, 1) − 3(0, 2, 3, −1) = (−2, 2, 0, −2) − (0, 6, 9, −3) = (−2, −4, −9, 1)

40. The equation au + bw = v,
a(1, 2) + b(1, −1) = (0, 3),
yields the system
a + b = 0
2a − b = 3.
Solving this system produces a = 1 and b = −1. So, v = u − w.
42. The equation au + bw = v,
a(1, 2) + b(1, −1) = (1, −1),
yields the system
a + b = 1
2a − b = −1.
Solving this system produces a = 0 and b = 1. So, v = 0u + 1w = w.

44. The equation au + bw = v,
a(1, 2) + b(1, −1) = (1, −4),
yields the system
a + b = 1
2a − b = −4.
Solving this system produces a = −1 and b = 2. So, v = −u + 2w.

46. 2u + v − 3w = 0
w = (2/3)u + (1/3)v
  = (2/3)(0, 0, −8, 1) + (1/3)(1, −8, 0, 7)
  = (0, 0, −16/3, 2/3) + (1/3, −8/3, 0, 7/3)
  = (1/3, −8/3, −16/3, 3)

48. The equation au₁ + bu₂ + cu₃ = v,
a(1, 3, 5) + b(2, −1, 3) + c(−3, 2, −4) = (−1, 7, 2),
yields the system
a + 2b − 3c = −1
3a − b + 2c = 7
5a + 3b − 4c = 2.
Solving this system, you discover that there is no solution. So, v cannot be written as a linear combination of u₁, u₂, and u₃.
50. The equation au₁ + bu₂ + cu₃ = v,
a(1, 3, 2, 1) + b(2, −2, −5, 4) + c(2, −1, 3, 6) = (2, 5, −4, 0),
yields the system
a + 2b + 2c = 2
3a − 2b − c = 5
2a − 5b + 3c = −4
a + 4b + 6c = 0.
Solving this system produces a = 2, b = 1, and c = −1. So, v = 2u₁ + u₂ − u₃.

52. Write a matrix using the given u₁, u₂, …, u₅ as columns and augment this matrix with v as a column:
A = [1, 2, 1, 0, 1 : 5; 1, 1, 2, 2, 1 : 8; −1, 2, 0, 0, 2 : 7; 2, −1, 1, 1, −1 : −2; 1, 1, 2, −4, 2 : 4].
The reduced row-echelon form of A is [I₅ : c] with last column c = (−1, 1, 2, 1, 2). So,
v = −u₁ + u₂ + 2u₃ + u₄ + 2u₅.
Verify the solution by showing that
−(1, 1, −1, 2, 1) + (2, 1, 2, −1, 1) + 2(1, 2, 0, 1, 2) + (0, 2, 0, 1, −4) + 2(1, 1, 2, −1, 2) = (5, 8, 7, −2, 4).
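The row reduction in Exercise 52 can be reproduced directly with SymPy's exact `rref` (SymPy is not part of the text):

```python
import sympy as sp

# Augmented matrix [u1 u2 u3 u4 u5 | v] from Exercise 52.
A = sp.Matrix([[ 1,  2, 1,  0,  1,  5],
               [ 1,  1, 2,  2,  1,  8],
               [-1,  2, 0,  0,  2,  7],
               [ 2, -1, 1,  1, -1, -2],
               [ 1,  1, 2, -4,  2,  4]])

R, pivots = A.rref()
coeffs = list(R[:, -1])   # last column of the reduced form
print(coeffs)  # [-1, 1, 2, 1, 2]
```

The last column of the reduced form gives the coefficients in v = −u₁ + u₂ + 2u₃ + u₄ + 2u₅.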
54. Write a matrix using the given u₁, u₂, …, u₆ as columns and augment this matrix with v as a column:
A = [1, 3, 1, 3, 1, 4 : 8; −3, −2, 1, −1, −2, 2 : 17; 4, 4, 1, 3, 1, −1 : −16; −5, −3, −1, −4, 5, 3 : 26; 2, −2, 4, 2, −3, −1 : 0; −1, 1, −1, 3, 4, 1 : −4].
The reduced row-echelon form of A is [I₆ : c] with last column c = (−1, −1, 2, −2, 0, 4). So,
v = (8, 17, −16, 26, 0, −4) = −u₁ − u₂ + 2u₃ − 2u₄ + 4u₆.
Verify the solution by showing that
−1(1, −3, 4, −5, 2, −1) − 1(3, −2, 4, −3, −2, 1) + 2(1, 1, 1, −1, 4, −1) − 2(3, −1, 3, −4, 2, 3) + 4(4, 2, −1, 3, −1, 1)
equals (8, 17, −16, 26, 0, −4).
56. (a) True. See page 183. (b) False. The zero vector is defined as an additive identity.
58. The equation av₁ + bv₂ + cv₃ = 0,
a(1, 0, 1) + b(−1, 1, 2) + c(0, 1, 3) = (0, 0, 0),
yields the homogeneous system
a − b = 0
b + c = 0
a + 2b + 3c = 0.
Solving this system gives a = −t, b = −t, and c = t, where t is any real number. Letting t = −1, you obtain a = 1, b = 1, c = −1, and so v₁ + v₂ − v₃ = 0.
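The dependence relation in Exercise 58 is a null-space computation: put v₁, v₂, v₃ in the columns of a matrix and find a nonzero vector it sends to zero. A SymPy sketch (not part of the text):

```python
import sympy as sp

# Columns are v1 = (1, 0, 1), v2 = (-1, 1, 2), v3 = (0, 1, 3).
V = sp.Matrix([[1, -1, 0],
               [0,  1, 1],
               [1,  2, 3]])

null = V.nullspace()
# Scale so the first entry is 1: gives coefficients (1, 1, -1),
# i.e., v1 + v2 - v3 = 0.
k = null[0] / null[0][0]
print(list(k))  # [1, 1, -1]
```

Any nonzero scalar multiple of (1, 1, −1) gives the same dependence relation.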
60. (1) u + v = ( 2, −1, 3) + (3, 4, 0) = (5, 3, 3) is a vector in R 3 . (2) u + v = ( 2, −1, 3) + (3, 4, 0) = (5, 3, 3) = (3, 4, 0) + ( 2, −1, 3) = v + u (3)
(u
+ v ) + w = ⎡⎣( 2, −1, 3) + (3, 4, 0)⎤⎦ + (7, 8, − 4) = (5, 3, 3) + (7, 8, − 4) = (12, 11, −1)
u + ( v + w ) = ( 2, −1, 3) + ⎡⎣(3, 4, 0) + (7, 8, − 4)⎤⎦ = ( 2, −1, 3) + (10, 12, − 4) = (12, 11, −1) So, (u + v ) + w = u + ( v + w ). (4) u + 0 = ( 2, −1, 3) + (0, 0, 0) = ( 2, −1, 3) = u (5) u + ( −u) = ( 2, −1, 3) + ( −2, 1, − 3) = (0, 0, 0) = 0 (6) cu = 2( 2, −1, 3) = ( 4, − 2, 6) is a vector in R 3 .
(7) c(u + v) = 2[(2, −1, 3) + (3, 4, 0)] = 2(5, 3, 3) = (10, 6, 6)
cu + cv = 2(2, −1, 3) + 2(3, 4, 0) = (4, −2, 6) + (6, 8, 0) = (10, 6, 6)
So, c(u + v) = cu + cv.
(8) (c + d)u = (2 + (−1))(2, −1, 3) = 1(2, −1, 3) = (2, −1, 3)
cu + du = 2(2, −1, 3) + (−1)(2, −1, 3) = (4, −2, 6) + (−2, 1, −3) = (2, −1, 3)
So, (c + d)u = cu + du.
(9) c(du) = 2((−1)(2, −1, 3)) = 2(−2, 1, −3) = (−4, 2, −6)
(cd)u = (2(−1))(2, −1, 3) = (−2)(2, −1, 3) = (−4, 2, −6)
So, c(du) = (cd)u.
(10) 1u = 1(2, −1, 3) = (2, −1, 3) = u

62. Prove each of the ten properties.
(1) u + v = (u1, …, un) + (v1, …, vn) = (u1 + v1, …, un + vn) is a vector in Rⁿ.
(2) u + v = (u1, …, un) + (v1, …, vn) = (u1 + v1, …, un + vn) = (v1 + u1, …, vn + un) = (v1, …, vn) + (u1, …, un) = v + u
(3) (u + v) + w = [(u1, …, un) + (v1, …, vn)] + (w1, …, wn)
= (u1 + v1, …, un + vn) + (w1, …, wn)
= ((u1 + v1) + w1, …, (un + vn) + wn)
= (u1 + (v1 + w1), …, un + (vn + wn))
= (u1, …, un) + (v1 + w1, …, vn + wn)
= (u1, …, un) + [(v1, …, vn) + (w1, …, wn)]
= u + (v + w)
(4) u + 0 = (u1, …, un) + (0, …, 0) = (u1 + 0, …, un + 0) = (u1, …, un) = u
(5) u + (−u) = (u1, …, un) + (−u1, …, −un) = (u1 − u1, …, un − un) = (0, …, 0) = 0
(6) cu = c(u1, …, un) = (cu1, …, cun) is a vector in Rⁿ.
(7) c(u + v) = c[(u1, …, un) + (v1, …, vn)] = c(u1 + v1, …, un + vn)
= (c(u1 + v1), …, c(un + vn))
= (cu1 + cv1, …, cun + cvn)
= (cu1, …, cun) + (cv1, …, cvn)
= c(u1, …, un) + c(v1, …, vn)
= cu + cv
(8) (c + d)u = (c + d)(u1, …, un) = ((c + d)u1, …, (c + d)un)
= (cu1 + du1, …, cun + dun)
= (cu1, …, cun) + (du1, …, dun)
= cu + du
(9) c(du) = c(d(u1, …, un)) = c(du1, …, dun) = (c(du1), …, c(dun)) = ((cd)u1, …, (cd)un) = (cd)(u1, …, un) = (cd)u
(10) 1u = 1(u1, …, un) = (1u1, …, 1un) = (u1, …, un) = u

64. Justification for each step:
(a) Use the fact that c + 0 = c for any real number c; so, in particular, 0 = 0 + 0.
(b) Use property 8 of Theorem 4.2.
(c) The fact that an equality remains an equality if you add the same vector to both sides. You also used property 6 of Theorem 4.2 to conclude that 0v is a vector in Rⁿ, so −0v is a vector in Rⁿ.
(d) Property 5 of Theorem 4.2 is applied to the left hand side. Property 3 of Theorem 4.2 is applied to the right hand side.
(e) Use properties 5 and 6 of Theorem 4.2.
(f) Use property 4 of Theorem 4.2.
66. Justification for each step.
(a) Equality remains if both sides are multiplied by a non-zero constant. Property 6 of Theorem 4.2 assures you that the results are still in Rⁿ.
(b) Use property 9 of Theorem 4.2 on the left side and property 4 of Theorem 4.3 (proved in Exercise 65) on the right side.
(c) Use the property of reals that states that the product of multiplicative inverses is 1.
(d) Use property 10 of Theorem 4.2.

68. The matrix [1 2 3; 7 8 9; 4 5 6] row reduces to [1 0 −1; 0 1 2; 0 0 0].
Yes: [3; 9; 6] = (−1)[1; 7; 4] + 2[2; 8; 5].
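The column combination asserted in Exercise 68 can be confirmed by direct arithmetic; a minimal Python sketch (not part of the original manual):

```python
# Columns of A = [[1,2,3],[7,8,9],[4,5,6]]; check col3 = -1*col1 + 2*col2.
col1, col2, col3 = (1, 7, 4), (2, 8, 5), (3, 9, 6)
check = tuple(-1*a + 2*b for a, b in zip(col1, col2))
print(check)  # (3, 9, 6)
```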
70. If b = x1a1 + ⋯ + xnan is a linear combination of the columns of A, then a solution to Ax = b is x = [x1; … ; xn].
The system Ax = b is inconsistent if b is not a linear combination of the columns of A.
Section 4.2 Vector Spaces

2. The additive identity of C(−∞, ∞) is the zero function, f(x) = 0.

4. The additive identity of M1,4 is the 1 × 4 zero matrix [0 0 0 0].

6. The additive identity of M2,2 is the 2 × 2 zero matrix [0 0; 0 0].

8. In C(−∞, ∞), the additive inverse of f(x) is −f(x).

10. In M1,4, the additive inverse of [v1 v2 v3 v4] is [−v1 −v2 −v3 −v4].

12. In M2,2, the additive inverse of [a b; c d] is [−a −b; −c −d].

14. M1,1 with the standard operations is a vector space. All ten vector space axioms hold.
16. This set is not a vector space. The set is not closed under addition or scalar multiplication. For example, (− x5 + x 4 ) + ( x5 − x3 ) = x 4 − x3 is not a fifth-degree polynomial. 18. The set is not a vector space. Axiom 1 fails. For example, given f ( x) = x 2 and g ( x) = − x 2 + x, f ( x) + g ( x) = x is not a quadratic function.
20. The set is not a vector space. Axiom 6 fails. A counterexample is −2(4, 1) = (−8, −2), which is not in the set because x < 0, y < 0.

22. The set is a vector space.

24. This set is not a vector space. The set is not closed under addition nor scalar multiplication. A counterexample is
[1 1; 1 1] + [1 1; 1 1] = [2 2; 2 2].
Each matrix on the left is in the set, but the sum is not in the set.

26. This set is not a vector space. The set is not closed under addition nor scalar multiplication. A counterexample is
[1 0; 0 1] + [1 0; 0 −1] = [2 0; 0 0].
Each matrix on the left is nonsingular, and the sum is not.

28. C[0, 1] is a vector space. All ten vector space axioms hold.

30. (a) Axiom 10 fails. For example, 1(2, 3, 4) = (2, 3, 0) ≠ (2, 3, 4).
(b) Axiom 4 fails because there is no zero vector. For instance, (2, 3, 4) + (x, y, z) = (0, 0, 0) ≠ (2, 3, 4) for all choices of (x, y, z).
(c) Axiom 7 fails. For example,
2[(1, 1, 1) + (1, 1, 1)] = 2(3, 3, 3) = (6, 6, 6)
2(1, 1, 1) + 2(1, 1, 1) = (2, 2, 2) + (2, 2, 2) = (5, 5, 5)
So, c(u + v) ≠ cu + cv.
(d) (x1, y1, z1) + (x2, y2, z2) = (x1 + x2 + 1, y1 + y2 + 1, z1 + z2 + 1)
c(x, y, z) = (cx + c − 1, cy + c − 1, cz + c − 1)
This is a vector space. Verify the 10 axioms.
(1) (x1, y1, z1) + (x2, y2, z2) ∈ R³
(2) (x1, y1, z1) + (x2, y2, z2) = (x1 + x2 + 1, y1 + y2 + 1, z1 + z2 + 1) = (x2 + x1 + 1, y2 + y1 + 1, z2 + z1 + 1) = (x2, y2, z2) + (x1, y1, z1)
(3) (x1, y1, z1) + [(x2, y2, z2) + (x3, y3, z3)] = (x1, y1, z1) + (x2 + x3 + 1, y2 + y3 + 1, z2 + z3 + 1)
= (x1 + (x2 + x3 + 1) + 1, y1 + (y2 + y3 + 1) + 1, z1 + (z2 + z3 + 1) + 1)
= ((x1 + x2 + 1) + x3 + 1, (y1 + y2 + 1) + y3 + 1, (z1 + z2 + 1) + z3 + 1)
= (x1 + x2 + 1, y1 + y2 + 1, z1 + z2 + 1) + (x3, y3, z3)
= [(x1, y1, z1) + (x2, y2, z2)] + (x3, y3, z3)
(4) 0 = (−1, −1, −1): (x, y, z) + (−1, −1, −1) = (x − 1 + 1, y − 1 + 1, z − 1 + 1) = (x, y, z)
(5) −(x, y, z) = (−x − 2, −y − 2, −z − 2):
(x, y, z) + (−(x, y, z)) = (x − x − 2 + 1, y − y − 2 + 1, z − z − 2 + 1) = (−1, −1, −1) = 0
(6) c(x, y, z) ∈ R³
(7) c((x1, y1, z1) + (x2, y2, z2)) = c(x1 + x2 + 1, y1 + y2 + 1, z1 + z2 + 1)
= (c(x1 + x2 + 1) + c − 1, c(y1 + y2 + 1) + c − 1, c(z1 + z2 + 1) + c − 1)
= (cx1 + c − 1 + cx2 + c − 1 + 1, cy1 + c − 1 + cy2 + c − 1 + 1, cz1 + c − 1 + cz2 + c − 1 + 1)
= (cx1 + c − 1, cy1 + c − 1, cz1 + c − 1) + (cx2 + c − 1, cy2 + c − 1, cz2 + c − 1)
= c(x1, y1, z1) + c(x2, y2, z2)
(8) (c + d)(x, y, z) = ((c + d)x + c + d − 1, (c + d)y + c + d − 1, (c + d)z + c + d − 1)
= (cx + c − 1 + dx + d − 1 + 1, cy + c − 1 + dy + d − 1 + 1, cz + c − 1 + dz + d − 1 + 1)
= (cx + c − 1, cy + c − 1, cz + c − 1) + (dx + d − 1, dy + d − 1, dz + d − 1)
= c(x, y, z) + d(x, y, z)
(9) c(d(x, y, z)) = c(dx + d − 1, dy + d − 1, dz + d − 1)
= (c(dx + d − 1) + c − 1, c(dy + d − 1) + c − 1, c(dz + d − 1) + c − 1)
= ((cd)x + cd − 1, (cd)y + cd − 1, (cd)z + cd − 1)
= (cd)(x, y, z)
(10) 1(x, y, z) = (1x + 1 − 1, 1y + 1 − 1, 1z + 1 − 1) = (x, y, z)
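The shifted operations of 30(d) lend themselves to quick numerical spot-checks; the following Python sketch (not part of the original manual; the sample vectors are arbitrary choices) tests axioms 4 and 7 at a few points:

```python
# Nonstandard operations from 30(d): componentwise shifted addition and scaling.
def add(u, v):
    return tuple(a + b + 1 for a, b in zip(u, v))

def smul(c, u):
    return tuple(c*a + c - 1 for a in u)

u, v = (2.0, -1.0, 3.0), (0.5, 4.0, -2.0)
zero = (-1.0, -1.0, -1.0)
assert add(u, zero) == u                                   # axiom 4
assert smul(2, add(u, v)) == add(smul(2, u), smul(2, v))   # axiom 7
print("axioms 4 and 7 hold for the sample vectors")
```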
Note: In general, if V is a vector space and a is a constant vector, then the set V together with the operations
u ⊕ v = (u + a) + (v + a) − a
c * u = c(u + a) − a
is also a vector space. Letting a = (1, 1, 1) ∈ R³ gives the above example.

32. Verify the ten axioms in the definition of vector space.
(1) u + v = (u, 2u) + (v, 2v) = (u + v, 2u + 2v) = (u + v, 2(u + v)) is in the set.
(2) u + v = (u, 2u) + (v, 2v) = (u + v, 2u + 2v) = (v + u, 2v + 2u) = (v, 2v) + (u, 2u) = v + u
(3) u + (v + w) = (u, 2u) + [(v, 2v) + (w, 2w)] = (u, 2u) + (v + w, 2v + 2w)
= (u + (v + w), 2u + (2v + 2w))
= ((u + v) + w, (2u + 2v) + 2w)
= (u + v, 2u + 2v) + (w, 2w)
= [(u, 2u) + (v, 2v)] + (w, 2w) = (u + v) + w
(4) The zero vector is 0 = (0, 0): u + 0 = (u, 2u) + (0, 0) = (u, 2u) = u.
(5) The additive inverse of (u, 2u) is (−u, −2u) = (−u, 2(−u)):
u + (−u) = (u, 2u) + (−u, 2(−u)) = (0, 0) = 0
(6) cu = c(u, 2u) = (cu, 2(cu)) is in the set.
(7) c(u + v) = c[(u, 2u) + (v, 2v)] = c(u + v, 2u + 2v)
= (c(u + v), c(2u + 2v))
= (cu + cv, c(2u) + c(2v))
= (cu, c(2u)) + (cv, c(2v))
= c(u, 2u) + c(v, 2v) = cu + cv
(8) (c + d)u = (c + d)(u, 2u) = ((c + d)u, (c + d)2u)
= (cu + du, c(2u) + d(2u))
= (cu, c(2u)) + (du, d(2u))
= c(u, 2u) + d(u, 2u) = cu + du
(9) c(du) = c(d(u, 2u)) = c(du, d(2u)) = (c(du), c(d(2u))) = ((cd)u, (cd)(2u)) = (cd)(u, 2u) = (cd)u
(10) 1(u) = 1(u, 2u) = (u, 2u) = u

34. Yes, V is a vector space. Verify the 10 axioms (V = the positive real numbers).
(1) x, y ∈ V ⇒ x + y = xy ∈ V
(2) x + y = xy = yx = y + x
(3) x + (y + z) = x + (yz) = x(yz) = (xy)z = (x + y)z = (x + y) + z
(4) x + 1 = x(1) = x = (1)x = 1 + x (the zero vector is 1)
(5) x + 1/x = x(1/x) = 1 (the additive inverse of x is 1/x)
(6) For c ∈ R, x ∈ V, cx = x^c ∈ V.
(7) c(x + y) = (x + y)^c = (xy)^c = x^c y^c = x^c + y^c = cx + cy
(8) (c + d)x = x^(c + d) = x^c x^d = x^c + x^d = cx + dx
(9) c(dx) = (dx)^c = (x^d)^c = x^(dc) = (dc)x = (cd)x
(10) 1x = x¹ = x

36. (a) True. For a set with two operations to be a vector space, all ten axioms must be satisfied. Therefore, if one of the axioms fails, then this set cannot be a vector space.
(b) False. The first axiom is not satisfied, because x + (1 − x) = 1 is not a polynomial of degree 1, but is a sum of polynomials of degree 1.
(c) True. This set is a vector space, because all ten vector space axioms hold.

38. Prove that 0v = 0 for any element v of a vector space V. First note that 0v is a vector in V by property 6 of the definition of a vector space. Because 0 = 0 + 0, you have 0v = (0 + 0)v = 0v + 0v. The last equality holds by property 8 of the definition of a vector space. Add (−0v) to both sides of the last equality to obtain 0v + (−0v) = (0v + 0v) + (−0v). Apply property 3 of the definition of a vector space to the right hand side to obtain 0v + (−0v) = 0v + (0v + (−0v)).
By property 5 of the definition of a vector space applied to both sides you see that 0 = 0 v + 0. But the right hand side is equal to 0v by property 4 of the definition of a vector space, and so you see that 0 = 0 v, as required.
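Exercise 34's multiplicative operations (V = the positive reals, with x + y defined as xy and cx as x^c) can also be spot-checked numerically; a minimal Python sketch with arbitrary sample values (not part of the original manual):

```python
def add(x, y):      # "vector addition" on V = positive reals
    return x * y

def smul(c, x):     # "scalar multiplication"
    return x ** c

x, y = 2.0, 8.0
assert add(x, 1.0) == x                                   # the zero vector is 1
assert add(x, 1.0 / x) == 1.0                             # additive inverse of x is 1/x
assert smul(3, add(x, y)) == add(smul(3, x), smul(3, y))  # c(x + y) = cx + cy
print("sampled axioms hold")
```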
40. Suppose, by way of contradiction, that there are two distinct additive identities 0 and u 0 . Consider the vector 0 + u 0 . On one hand this vector is equal to u 0 because 0 is an additive identity. On the other hand, this vector is also equal to 0 because u 0 is an additive identity. So in contradiction to the assumption that 0 and u 0 are distinct you obtain 0 = u 0 . This proves that the additive identity in a vector space is unique.
Section 4.3 Subspaces of Vector Spaces

2. Because W is nonempty and W ⊂ R³, you need only check that W is closed under addition and scalar multiplication. Given (x1, y1, 2x1 − 3y1) and (x2, y2, 2x2 − 3y2), it follows that
(x1, y1, 2x1 − 3y1) + (x2, y2, 2x2 − 3y2) = (x1 + x2, y1 + y2, 2(x1 + x2) − 3(y1 + y2)) ∈ W.
Furthermore, for any real number c and (x, y, 2x − 3y) ∈ W, it follows that c(x, y, 2x − 3y) = (cx, cy, 2(cx) − 3(cy)) ∈ W.
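Closure checks of this kind are easy to spot-test numerically; a minimal Python sketch for this W (not part of the original manual; the sample vectors are arbitrary):

```python
# W = {(x, y, 2x - 3y)}: membership test plus closure spot-check.
def in_W(v):
    x, y, z = v
    return z == 2*x - 3*y

u = (1, 2, 2*1 - 3*2)       # (1, 2, -4) in W
w = (3, -1, 2*3 - 3*(-1))   # (3, -1, 9) in W
s = tuple(a + b for a, b in zip(u, w))   # sum
m = tuple(5*a for a in u)                # scalar multiple
print(in_W(s), in_W(m))  # True True
```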
4. Because W is nonempty and W ⊂ M3,2, you need only check that W is closed under addition and scalar multiplication. Given
[a1 b1; a1 + b1 0; 0 c1] ∈ W and [a2 b2; a2 + b2 0; 0 c2] ∈ W,
it follows that
[a1 b1; a1 + b1 0; 0 c1] + [a2 b2; a2 + b2 0; 0 c2] = [a1 + a2 b1 + b2; (a1 + a2) + (b1 + b2) 0; 0 c1 + c2] ∈ W.
Furthermore, for any real number d,
d[a b; a + b 0; 0 c] = [da db; da + db 0; 0 dc] ∈ W.
6. Recall from calculus that differentiability implies continuity. So, W ⊂ V. Furthermore, because W is nonempty, you need only check that W is closed under addition and scalar multiplication. Given differentiable functions f and g on [0, 1], it follows that f + g is differentiable on [0, 1], and so f + g ∈ W. Also, for any real number c and for any differentiable function f ∈ W, cf is differentiable, and therefore cf ∈ W.

8. The vectors in W are of the form (a, 1). This set is not closed under addition or scalar multiplication. For example,
(2, 1) + (2, 1) = (4, 2) ∉ W and 2(2, 1) = (4, 2) ∉ W.

10. This set is not closed under scalar multiplication. For example, (1/2)(4, 3) = (2, 3/2) ∉ W.
12. This set is not closed under addition. For example, consider f ( x) = − x + 1 and g ( x) = x + 2, and
f(x) + g(x) = 3 ∉ W.

14. This set is not closed under addition. For example, (3, 4, 5) + (5, 12, 13) = (8, 16, 18) ∉ W.

16. This set is not closed under addition or scalar multiplication. For example,
[1 0; 0 1] + [1 0; 0 1] = [2 0; 0 2] ∉ W and 2[1 0; 0 1] = [2 0; 0 2] ∉ W.

18. The vectors in W are of the form (a, a²). This set is not closed under addition or scalar multiplication. For example, (3, 9) + (2, 4) = (5, 13) ∉ W and 2(3, 9) = (6, 18) ∉ W.
20. This set is a subspace of C(−∞, ∞) because it is closed under addition and scalar multiplication.

22. This set is a subspace of C(−∞, ∞) because it is closed under addition and scalar multiplication.

24. This set is not a subspace of C(−∞, ∞) because it is not closed under addition or scalar multiplication.

26. This set is not a subspace, because it is not closed under scalar multiplication.

28. This set is not a subspace, because it is not closed under addition.
30. This set is a subspace of Mm,n because it is closed under addition and scalar multiplication.

32. W is not a subspace of R³. For example, (0, 0, 4) ∈ W and (1, 1, 4) ∈ W, but (0, 0, 4) + (1, 1, 4) = (1, 1, 8) ∉ W, so W is not closed under addition.

34. W is a subspace of R³. Note first that W ⊂ R³ and W is nonempty. If (s1, s1 − t1, t1) and (s2, s2 − t2, t2) are in W, then their sum is also in W:
(s1, s1 − t1, t1) + (s2, s2 − t2, t2) = (s1 + s2, (s1 + s2) − (t1 + t2), t1 + t2) ∈ W.
Furthermore, if c is any real number, c(s1, s1 − t1, t1) = (cs1, cs1 − ct1, ct1) ∈ W.

36. W is not a subspace of R³. For example, (1, 1, 1) ∈ W, but the sum (1, 1, 1) + (1, 1, 1) = (2, 2, 2) ∉ W. So, W is not closed under addition.

38. (a) False. The zero subspace and the whole vector space are not proper subspaces, even though they are subspaces.
(b) True. Because W must itself be a vector space under the inherited operations, it must contain an additive identity.
(c) False. Let
W = {(x, 0) : x ∈ R} ⊂ R², U = {(0, y) : y ∈ R} ⊂ R².
Then the set W ∪ U is not closed under addition, because (1, 0), (0, 1) ∈ W ∪ U, but (1, 0) + (0, 1) = (1, 1) is not.

40. Because W is not empty (for example, x ∈ W) you need only check that W is closed under addition and scalar multiplication. Let
a1x + b1y + c1z ∈ W and a2x + b2y + c2z ∈ W.
Then
(a1x + b1y + c1z) + (a2x + b2y + c2z) = (a1 + a2)x + (b1 + b2)y + (c1 + c2)z ∈ W.
Similarly, if ax + by + cz ∈ W and d ∈ R, then d(ax + by + cz) = (da)x + (db)y + (dc)z ∈ W.

42. Because W is not empty you need only check that W is closed under addition and scalar multiplication. Let c ∈ R and x, y ∈ W. Then Ax = 0 and Ay = 0. So,
A(x + y) = Ax + Ay = 0 + 0 = 0 and A(cx) = cAx = c0 = 0.
Therefore, x + y ∈ W and cx ∈ W.

44. Let V = R². Consider W = {(x, 0) : x ∈ R} and U = {(0, y) : y ∈ R}. Then W ∪ U is not a subspace of V, because it is not closed under addition. Indeed, (1, 0), (0, 1) ∈ W ∪ U, but (1, 1) (which is the sum of these two vectors) is not.
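The null-space closure argument of Exercise 42 is easy to illustrate numerically; the matrix A and the null-space vectors below are arbitrary illustrative choices, not from the text:

```python
# A is a singular 2x2 example matrix; x and y both satisfy Av = 0.
A = [[1, 2], [2, 4]]

def matvec(A, x):
    return tuple(sum(a*b for a, b in zip(row, x)) for row in A)

x, y = (2, -1), (-4, 2)
assert matvec(A, x) == (0, 0) and matvec(A, y) == (0, 0)
s = tuple(a + b for a, b in zip(x, y))          # x + y
m = tuple(3*a for a in x)                       # 3x
print(matvec(A, s), matvec(A, m))  # (0, 0) (0, 0)
```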
46. S is a subspace of C[0, 1]. S is nonempty because the zero function is in S. If f1, f2 ∈ S, then
∫₀¹ (f1 + f2)(x) dx = ∫₀¹ [f1(x) + f2(x)] dx = ∫₀¹ f1(x) dx + ∫₀¹ f2(x) dx = 0 + 0 = 0 ⇒ f1 + f2 ∈ S.
If f ∈ S and c ∈ R, then
∫₀¹ (cf)(x) dx = ∫₀¹ cf(x) dx = c ∫₀¹ f(x) dx = c(0) = 0 ⇒ cf ∈ S.
So, S is closed under addition and scalar multiplication.
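The closure under addition can be illustrated with a numerical quadrature; f1 and f2 below are arbitrary sample functions with zero integral on [0, 1] (not from the text), and the integral is approximated with a midpoint rule:

```python
def integral01(f, n=10000):
    # Composite midpoint rule on [0, 1].
    h = 1.0 / n
    return sum(f((i + 0.5) * h) for i in range(n)) * h

f1 = lambda x: x - 0.5          # integrates to 0 on [0, 1]
f2 = lambda x: x*x - 1.0/3.0    # integrates to 0 on [0, 1]
total = integral01(lambda x: f1(x) + f2(x))
print(abs(total) < 1e-6)  # True: the sum also integrates to (approximately) 0
```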
So, S is closed under addition and scalar multiplication. 48. Let c be scalar and u ∈ V ∩ W . Then u ∈ V and u ∈ W , which are both subspaces. So, cu ∈ V and cu ∈ W , which implies that cu ∈ V ∩ W .
Section 4.4 Spanning Sets and Linear Independence

2. (a) Solving the equation
c1(1, 2, −2) + c2(2, −1, 1) = (1, −5, −5)
for c1 and c2 yields the system
c1 + 2c2 = 1
2c1 − c2 = −5
−2c1 + c2 = −5.
This system has no solution. So, u cannot be written as a linear combination of the vectors in S.
(b) Proceed as in (a), substituting (−2, −6, 6) for (1, −5, −5). So, the system to be solved is
c1 + 2c2 = −2
2c1 − c2 = −6
−2c1 + c2 = 6.
The solution to this system is c1 = −14/5 and c2 = 2/5. So, v can be written as a linear combination of the vectors in S.
(c) Proceed as in (a), substituting (−1, −22, 22) for (1, −5, −5). So, the system to be solved is
c1 + 2c2 = −1
2c1 − c2 = −22
−2c1 + c2 = 22.
The solution to this system is c1 = −9 and c2 = 4. So, w can be written as a linear combination of the vectors in S.
(d) Proceed as in (a), substituting (−4, −3, 3) for (1, −5, −5), which yields the system
c1 + 2c2 = −4
2c1 − c2 = −3
−2c1 + c2 = 3.
The solution of this system is c1 = −2 and c2 = −1. So, z can be written as a linear combination of vectors in S.
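The inconsistency in part (a) can be confirmed by solving the first two equations and testing the third; a minimal Python sketch using Cramer's rule (not part of the original manual):

```python
# Solve c1 + 2*c2 = 1 and 2*c1 - c2 = -5 by Cramer's rule,
# then test the third equation -2*c1 + c2 = -5.
det = 1*(-1) - 2*2              # determinant of [[1, 2], [2, -1]] = -5
c1 = (1*(-1) - 2*(-5)) / det    # replace first column by (1, -5)
c2 = (1*(-5) - 1*2) / det       # replace second column by (1, -5)
print(c1, c2)                   # -1.8 1.4
print(-2*c1 + c2 == -5)         # False: the system is inconsistent
```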
4. (a) Solving the equation
c1(6, −7, 8, 6) + c2(4, 6, −4, 1) = (−42, 113, −112, −60)
for c1 and c2 yields the system
6c1 + 4c2 = −42
−7c1 + 6c2 = 113
8c1 − 4c2 = −112
6c1 + c2 = −60.
The solution to this system is c1 = −11 and c2 = 6. So, u can be written as a linear combination of the vectors in S.
(b) Proceed as in (a), substituting (49/2, 99/4, −14, 19/2) for (−42, 113, −112, −60), which yields the system
6c1 + 4c2 = 49/2
−7c1 + 6c2 = 99/4
8c1 − 4c2 = −14
6c1 + c2 = 19/2.
The solution to this system is c1 = 3/4 and c2 = 5. So, v can be written as a linear combination of the vectors in S.
(c) Proceed as in (a), substituting (−4, −14, 27/2, 53/8) for (−42, 113, −112, −60), which yields the system
6c1 + 4c2 = −4
−7c1 + 6c2 = −14
8c1 − 4c2 = 27/2
6c1 + c2 = 53/8.
This system has no solution. So, w cannot be written as a linear combination of the vectors in S.
(d) Proceed as in (a), substituting (8, 4, −1, 17/4) for (−42, 113, −112, −60), which yields the system
6c1 + 4c2 = 8
−7c1 + 6c2 = 4
8c1 − 4c2 = −1
6c1 + c2 = 17/4.
The solution of this system is c1 = 1/2 and c2 = 5/4. So, z can be written as a linear combination of vectors in S.
6. Let u = (u1, u2) be any vector in R². Solving the equation
c1(1, −1) + c2(2, 1) = (u1, u2)
for c1 and c2 yields the system
c1 + 2c2 = u1
−c1 + c2 = u2.
The system has a unique solution because the determinant of the coefficient matrix is nonzero. So, S spans R².

8. Let u = (u1, u2) be any vector in R². Solving the equation
c1(2, 0) + c2(0, 1) = (u1, u2)
for c1 and c2 yields the system
2c1 = u1
c2 = u2.
The system has a unique solution because the determinant of the coefficient matrix is nonzero. So, S spans R².

10. S does not span R² because only vectors of the form t(1, 1) are in span(S). For example, (0, 1) is not in span(S). S spans a line in R².
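The determinant test used in Exercises 6 and 8 is easy to automate; a minimal Python sketch (not part of the original manual; the second, parallel pair is an arbitrary example in the spirit of Exercise 10):

```python
# Two vectors span R^2 exactly when the 2x2 determinant they form is nonzero.
def det2(u, v):
    return u[0]*v[1] - u[1]*v[0]

print(det2((1, -1), (2, 1)))   # 3: nonzero, so the pair from Exercise 6 spans R^2
print(det2((1, 1), (-2, -2)))  # 0: a parallel pair spans only a line
```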
12. S does not span R² because only vectors of the form t(1, 2) are in span(S). For example, (0, 1) is not in span(S). S spans a line in R².

14. Let u = (u1, u2) be any vector in R². Solving the equation
c1(0, 2) + c2(1, 4) = (u1, u2)
for c1 and c2 yields the system
c2 = u1
2c1 + 4c2 = u2.
The system has a unique solution because the determinant of the coefficient matrix is nonzero. So, S spans R².

16. Let u = (u1, u2) be any vector in R². Solving the equation
c1(−1, 2) + c2(2, −1) + c3(1, 1) = (u1, u2)
for c1, c2 and c3 yields the system
−c1 + 2c2 + c3 = u1
2c1 − c2 + c3 = u2.
This system is equivalent to
c1 − 2c2 − c3 = −u1
3c2 + 3c3 = 2u1 + u2.
So, for any u = (u1, u2) in R², you can take c3 = 0, c2 = (2u1 + u2)/3, and c1 = 2c2 − u1 = (u1 + 2u2)/3. So, S spans R².

18. Let u = (u1, u2, u3) be any vector in R³. Solving the equation
c1(6, 7, 6) + c2(3, 2, −4) + c3(1, −3, 2) = (u1, u2, u3)
for c1, c2 and c3 yields the system
6c1 + 3c2 + c3 = u1
7c1 + 2c2 − 3c3 = u2
6c1 − 4c2 + 2c3 = u3.
This system has a unique solution because the determinant of the coefficient matrix is nonzero. So, S spans R³.

20. Let u = (u1, u2, u3) be any vector in R³. Solving the equation
c1(1, 0, 1) + c2(1, 1, 0) + c3(0, 1, 1) = (u1, u2, u3)
for c1, c2 and c3 yields the system
c1 + c2 = u1
c2 + c3 = u2
c1 + c3 = u3.
This system has a unique solution because the determinant of the coefficient matrix is nonzero. So, S spans R³.

22. This set does not span R³. Notice that the third and fourth vectors are spanned by the first two:
(4, 0, 5) = 2(1, 0, 3) + (2, 0, −1)
(2, 0, 6) = 2(1, 0, 3).
So, S spans a plane in R³.

24. This set is linearly dependent because (−2, 4) + 2(1, −2) = (0, 0).

26. This set is linearly dependent because −3(1, 0) + (1, 1) + (2, −1) = (0, 0).

28. Because (−1, 3, 2) is not a scalar multiple of (6, 2, 1), the set S is linearly independent.

30. From the vector equation
c1(3/4, 5/2, 3/2) + c2(3, 4, 7/2) + c3(−3/2, 6, 2) = (0, 0, 0)
you obtain the homogeneous system
(3/4)c1 + 3c2 − (3/2)c3 = 0
(5/2)c1 + 4c2 + 6c3 = 0
(3/2)c1 + (7/2)c2 + 2c3 = 0.
This system has only the trivial solution c1 = c2 = c3 = 0. So, the set S is linearly independent.

32. Because the fourth vector is a linear combination of the first three, this set is linearly dependent:
(1, 5, −3) = (1, 0, 0) + (5/4)(0, 4, 0) + (1/2)(0, 0, −6).

34. From the vector equation
c1(0, 0, 0, 1) + c2(0, 0, 1, 1) + c3(0, 1, 1, 1) + c4(1, 1, 1, 1) = (0, 0, 0, 0)
you obtain the homogeneous system
c4 = 0
c3 + c4 = 0
c2 + c3 + c4 = 0
c1 + c2 + c3 + c4 = 0.
This system has only the trivial solution c1 = c2 = c3 = c4 = 0. So, the set S is linearly independent.
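The independence claim in Exercise 30 can be confirmed with an exact determinant over the rationals; a minimal Python sketch (not part of the original manual):

```python
from fractions import Fraction as F

def det3(m):
    # Cofactor expansion along the first row of a 3x3 matrix.
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

# Columns are the vectors (3/4, 5/2, 3/2), (3, 4, 7/2), (-3/2, 6, 2).
M = [[F(3, 4), F(3), F(-3, 2)],
     [F(5, 2), F(4), F(6)],
     [F(3, 2), F(7, 2), F(2)]]
print(det3(M))  # -15/8: nonzero, so the set is linearly independent
```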
36. One example of a nontrivial linear combination of vectors in S whose sum is the zero vector is
(2, 4) + 2(−1, −2) + 0(0, 6) = (0, 0).
Solving this equation for (2, 4) yields
(2, 4) = −2(−1, −2) + 0(0, 6).

38. One example of a nontrivial linear combination of vectors in S whose sum is the zero vector is
2(1, 2, 3, 4) − (1, 0, 1, 2) − (1, 4, 5, 6) = (0, 0, 0, 0).
Solving this equation for (1, 4, 5, 6) yields
(1, 4, 5, 6) = 2(1, 2, 3, 4) − (1, 0, 1, 2).

40. (a) From the vector equation
c1(t, 0, 0) + c2(0, 1, 0) + c3(0, 0, 1) = (0, 0, 0)
you obtain the homogeneous system
tc1 = 0
c2 = 0
c3 = 0.
Because c2 = c3 = 0, the set will be linearly independent if t ≠ 0.
(b) Proceeding as in (a), you obtain the homogeneous system
tc1 + tc2 + tc3 = 0
tc1 + c2 = 0
tc1 + c3 = 0.
The coefficient matrix will have nonzero determinant if 2t² − t ≠ 0. That is, the set will be linearly independent if t ≠ 0 and t ≠ 1/2.

42. From the vector equation
c1[1 −1; 4 5] + c2[4 3; −2 3] + c3[1 −8; 22 23] = [0 0; 0 0]
you obtain the homogeneous system
c1 + 4c2 + c3 = 0
−c1 + 3c2 − 8c3 = 0
4c1 − 2c2 + 22c3 = 0
5c1 + 3c2 + 23c3 = 0.
Because this system has only the trivial solution c1 = c2 = c3 = 0, the set of vectors is linearly independent.

44. From the vector equation c1(x² − 1) + c2(2x + 5) = 0 + 0x + 0x² you obtain the homogeneous system
−c1 + 5c2 = 0
2c2 = 0
c1 = 0.
This system has only the trivial solution. So, S is linearly independent.

46. From the vector equation c1(x²) + c2(x² + 1) = 0 + 0x + 0x² you obtain the homogeneous system
c2 = 0
c1 + c2 = 0.
This system has only the trivial solution. So, S is linearly independent.
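The parameter condition in Exercise 40(b) can be checked by evaluating the determinant of the coefficient matrix for sample values of t; a minimal Python sketch (not part of the original manual):

```python
# det of [[t, t, t], [t, 1, 0], [t, 0, 1]] expands to t - 2*t^2,
# which vanishes exactly at t = 0 and t = 1/2.
def det3(m):
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def coeff_matrix(t):
    return [[t, t, t], [t, 1, 0], [t, 0, 1]]

for t in (0.0, 0.5, 1.0, -2.0):
    print(t, det3(coeff_matrix(t)), t - 2*t*t)  # determinant matches t - 2t^2
```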
48. Let a0 + a1x + a2x² + a3x³ be any vector in P3. Solving the equation
c1(x² − 2x) + c2(x³ + 8) + c3(x³ − x²) + c4(x² − 4) = a0 + a1x + a2x² + a3x³
for c1, c2, c3 and c4 yields the system
c2 + c3 = a3
c1 − c3 + c4 = a2
−2c1 = a1
8c2 − 4c4 = a0.
This system has a unique solution because the determinant of the coefficient matrix is nonzero. So, S spans P3.

50. Let U be another subspace of V that contains S. To show that span(S) ⊂ U, let u ∈ span(S). Then u = c1v1 + ⋯ + ckvk, where vi ∈ S. So, vi ∈ U, because U contains S. Because U is a subspace, u ∈ U.

52. The matrix [0 0 2; 0 1 1; 1 1 1] row reduces to [1 0 0; 0 1 0; 0 0 1], and the matrix [1 1 2; 1 1 1; 1 2 1] row reduces to [1 0 0; 0 1 0; 0 0 1] as well. So, both sets of vectors span R³.
54. (a) False. A set is linearly dependent if and only if one of the vectors of this set can be written as a linear combination of the others.
(b) True. See the definition of spanning set of a vector space on page 209.

56. The matrix [1 3 0; 2 2 0; 3 1 1] row reduces to [1 0 0; 0 1 0; 0 0 1], which shows that the equation
c1(1, 2, 3) + c2(3, 2, 1) + c3(0, 0, 1) = (0, 0, 0)
only has the trivial solution. So, the three vectors are linearly independent. Furthermore, the vectors span R³ because the coefficient matrix of the linear system
[1 3 0; 2 2 0; 3 1 1][c1; c2; c3] = [u1; u2; u3]
is nonsingular.

58. If S1 is linearly dependent, then for some u1, …, un, v ∈ S1, v = c1u1 + ⋯ + cnun. So, in S2, you have v = c1u1 + ⋯ + cnun, which implies that S2 is linearly dependent.

60. Because {u1, …, un, v} is linearly dependent, there exist scalars c1, …, cn, c, not all zero, such that c1u1 + ⋯ + cnun + cv = 0. But c ≠ 0 because {u1, …, un} are linearly independent. So,
cv = −c1u1 − ⋯ − cnun ⇒ v = −(c1/c)u1 − ⋯ − (cn/c)un.

62. Suppose that vk = c1v1 + ⋯ + ck−1vk−1. For any vector u ∈ V,
u = d1v1 + ⋯ + dk−1vk−1 + dkvk
= d1v1 + ⋯ + dk−1vk−1 + dk(c1v1 + ⋯ + ck−1vk−1)
= (d1 + c1dk)v1 + ⋯ + (dk−1 + ck−1dk)vk−1,
which shows that u ∈ span(v1, …, vk−1).

64. A set consisting of just one vector is linearly independent if it is not the zero vector.

66. The vectors are linearly dependent because (v − u) + (w − v) + (u − w) = 0.

68. Consider
c1Av1 + c2Av2 + c3Av3 = 0
A(c1v1 + c2v2 + c3v3) = 0
A⁻¹A(c1v1 + c2v2 + c3v3) = A⁻¹0
c1v1 + c2v2 + c3v3 = 0.
Because {v1, v2, v3} are linearly independent, c1 = c2 = c3 = 0, proving that {Av1, Av2, Av3} are linearly independent. If A = 0, then {Av1, Av2, Av3} = {0} is linearly dependent.
Section 4.5 Basis and Dimension

2. There are four vectors in the standard basis for R⁴:
{(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)}.

4. There are four vectors in the standard basis for M4,1:
{[1; 0; 0; 0], [0; 1; 0; 0], [0; 0; 1; 0], [0; 0; 0; 1]}.

6. There are three vectors in the standard basis for P2: {1, x, x²}.

8. A basis for R² can only have two vectors. Because S has three vectors, it is not a basis for R².

10. S is linearly dependent and does not span R².

12. S is linearly dependent and does not span R².

14. S does not span R², although it is linearly independent.

16. A basis for R³ contains three linearly independent vectors. Because
−1(2, 1, −2) + (−2, −1, 2) + (4, 2, −4) = (0, 0, 0),
S is linearly dependent and is, therefore, not a basis for R³.

18. S does not span R³, although it is linearly independent.

20. S is not a basis, because it has too many vectors. A basis for R³ can only have three vectors.
22. S is not a basis because it has too many vectors. A basis for P2 can only have three vectors.

24. S is not a basis because the vectors are linearly dependent:
1(6x − 3) + 1(3x²) + 3(1 − 2x − x²) = 0.

26. A basis for M2,2 must have four vectors. Because S only has two vectors, it is not a basis for M2,2.

28. S does not span M2,2, although it is linearly independent.

30. Because {v1, v2} consists of exactly two linearly independent vectors, it is a basis for R².

32. Because {v1, v2} consists of exactly two linearly independent vectors, it is a basis for R².

34. Because {v1, v2} consists of exactly two linearly independent vectors, it is a basis for R².

36. Because the vectors in S are not scalar multiples of one another, they are linearly independent. Because S consists of exactly two linearly independent vectors, it is a basis for R².

38. S does not span R³, although it is linearly independent. So, S is not a basis for R³.

40. This set contains the zero vector, and is, therefore, linearly dependent:
1(0, 0, 0) + 0(1, 5, 6) + 0(6, 2, 1) = (0, 0, 0).
So, S is not a basis for R³.

42. To determine if the vectors of S are linearly independent, find the solution to
c1(1, 0, 0, 1) + c2(0, 2, 0, 2) + c3(1, 0, 1, 0) + c4(0, 2, 2, 0) = (0, 0, 0, 0).
Because the corresponding linear system has nontrivial solutions (for instance, c1 = 2, c2 = −1, c3 = −2, c4 = 1), the vectors are linearly dependent, and S is not a basis for R⁴.

44. Form the equation
c1[1 2; −5 4] + c2[2 −7; 6 2] + c3[4 −9; 11 12] + c4[12 −16; 17 42] = [0 0; 0 0]
which yields the homogeneous system
c1 + 2c2 + 4c3 + 12c4 = 0
2c1 − 7c2 − 9c3 − 16c4 = 0
−5c1 + 6c2 + 11c3 + 17c4 = 0
4c1 + 2c2 + 12c3 + 42c4 = 0.
Because this system has nontrivial solutions (for instance, c1 = 2, c2 = −1, c3 = 3 and c4 = −1), the set is linearly dependent, and is not a basis for M2,2.

46. Form the equation
c1(4t − t²) + c2(5 + t³) + c3(3t + 5) + c4(2t³ − 3t²) = 0
which yields the homogeneous system
c2 + 2c4 = 0
−c1 − 3c4 = 0
4c1 + 3c3 = 0
5c2 + 5c3 = 0.
This system has only the trivial solution. So, S consists of exactly four linearly independent vectors. Therefore, S is a basis for P3.
48. Form the equation
c1(t³ − 1) + c2(2t²) + c3(t + 3) + c4(5 + 2t + 2t² + t³) = 0
which yields the homogeneous system
c1 + c4 = 0
2c2 + 2c4 = 0
c3 + 2c4 = 0
−c1 + 3c3 + 5c4 = 0.
This system has nontrivial solutions (for instance, c1 = 1, c2 = 1, c3 = 2, and c4 = −1 ). Therefore, S is not a basis for P3 because the vectors are linearly dependent.
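The nontrivial solution can be verified by expanding the polynomial combination coefficient by coefficient; a minimal Python sketch (not part of the original manual):

```python
# Polynomials as coefficient lists [const, t, t^2, t^3].
polys = [
    [-1, 0, 0, 1],   # t^3 - 1
    [0, 0, 2, 0],    # 2t^2
    [3, 1, 0, 0],    # t + 3
    [5, 2, 2, 1],    # 5 + 2t + 2t^2 + t^3
]
coeffs = [1, 1, 2, -1]
combo = [sum(c * p[k] for c, p in zip(coeffs, polys)) for k in range(4)]
print(combo)  # [0, 0, 0, 0]: a nontrivial combination gives the zero polynomial
```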
50. Form the equation
   c1(1, 0, 0) + c2(1, 1, 0) + c3(1, 1, 1) = (0, 0, 0)
which yields the homogeneous system
   c1 + c2 + c3 = 0
        c2 + c3 = 0
             c3 = 0.
This system has only the trivial solution, so S is a basis for R³. Solving the system
   c1 + c2 + c3 = 8
        c2 + c3 = 3
             c3 = 8
yields c1 = 5, c2 = −5, and c3 = 8. So,
   u = 5(1, 0, 0) − 5(1, 1, 0) + 8(1, 1, 1) = (8, 3, 8).

52. The set S contains the zero vector, and is, therefore, linearly dependent.
   0(1, 0, 1) + 1(0, 0, 0) + 0(0, 1, 0) = (0, 0, 0)
So, S is not a basis for R³.

54. Form the equation
   c1(1, 4, 7) + c2(3, 0, 1) + c3(2, 1, 2) = (0, 0, 0)
which yields the homogeneous system
   c1 + 3c2 + 2c3 = 0
  4c1       +  c3 = 0
  7c1 +  c2 + 2c3 = 0.
This system has only the trivial solution, so S is a basis for R³. Solving the system
   c1 + 3c2 + 2c3 = 8
  4c1       +  c3 = 3
  7c1 +  c2 + 2c3 = 8
yields c1 = 1, c2 = 3, and c3 = −1. So,
   u = (8, 3, 8) = 1(1, 4, 7) + 3(3, 0, 1) − 1(2, 1, 2).

56. Because a basis for R⁴ has four linearly independent vectors, the dimension of R⁴ is 4.

58. Because a basis for R³ has three linearly independent vectors, the dimension of R³ is 3.
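The coordinate computations in Exercises 50 and 54 amount to solving a linear system whose coefficient columns are the basis vectors. A sketch with NumPy (variable names are ours):

```python
import numpy as np

# Exercise 50: basis {(1,0,0), (1,1,0), (1,1,1)}, u = (8, 3, 8)
B50 = np.array([[1.0, 1, 1], [0, 1, 1], [0, 0, 1]])  # basis vectors as columns
c50 = np.linalg.solve(B50, np.array([8.0, 3, 8]))
print(c50)  # c1 = 5, c2 = -5, c3 = 8

# Exercise 54: basis {(1,4,7), (3,0,1), (2,1,2)}, u = (8, 3, 8)
B54 = np.array([[1.0, 3, 2], [4, 0, 1], [7, 1, 2]])
c54 = np.linalg.solve(B54, np.array([8.0, 3, 8]))
print(c54)  # c1 = 1, c2 = 3, c3 = -1
```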
60. Because a basis for P4 has five linearly independent vectors, the dimension of P4 is 5.

62. Because a basis for M3,2 has six linearly independent vectors, the dimension of M3,2 is 6.

64. One basis for the space of all 3 × 3 symmetric matrices is
   { [1 0 0; 0 0 0; 0 0 0], [0 1 0; 1 0 0; 0 0 0], [0 0 1; 0 0 0; 1 0 0],
     [0 0 0; 0 1 0; 0 0 0], [0 0 0; 0 0 1; 0 1 0], [0 0 0; 0 0 0; 0 0 1] }.
Because this basis has 6 vectors, the dimension is 6.

66. Although there are four subsets of S that contain three vectors, only three of them are bases for R³.
{(1, 3, − 2), (−4, 1, 1), (2, 1, 1)}, {(1, 3, − 2), (−2, 7, − 3), (2, 1, 1)}, {(−4, 1, 1), (−2, 7, − 3), (2, 1, 1)} The set
{(1, 3, − 2), (−4, 1, 1), (−2, 7, − 3)} is linearly dependent.
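The subset test in Exercise 66 can be automated with a determinant check. A sketch (NumPy and itertools are our choice, not the text's):

```python
import numpy as np
from itertools import combinations

S = [(1, 3, -2), (-4, 1, 1), (-2, 7, -3), (2, 1, 1)]
results = {}
for subset in combinations(S, 3):
    d = np.linalg.det(np.array(subset, dtype=float))
    # A three-vector subset is a basis for R^3 exactly when this determinant is nonzero.
    results[subset] = abs(d) > 1e-9
print(sum(results.values()))  # 3 of the 4 subsets are bases
```

The one failing subset is {(1, 3, −2), (−4, 1, 1), (−2, 7, −3)}, matching the solution above.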
68. You can add any vector that is not in the span of S = {(1, 0, 2), (0, 1, 1)}. For instance, the set {(1, 0, 2), (0, 1, 1), (1, 0, 0)} is a basis for R³.
70. (a) W is a line through the origin (the y-axis) (b) A basis for W is
{(0, 1)}.
(c) The dimension of W is 1. 72. (a) W is a plane through the origin. (b) A basis for W is
{(2, 1, 0), (−1, 0, 1)}, obtained by
letting s = 1, t = 0, and then s = 0, t = 1. (c) The dimension of W is 2.

74. (a) A basis for W is {(5, −3, 1, 1)}.
(b) The dimension of W is 1. 76. (a) A basis for W is
{(1, 0, 1, 2), (4, 1, 0, −1)}.
(b) The dimension of W is 2. 78. (a) True. See Theorem 4.10 and definition of dimension on page 225. (b) False. A set of n − 1 vectors could be linearly dependent, for instance, they can all be multiples of each other. 80. Suppose that P had a finite basis B = { p1 , … , pn}. Let m be the maximum degree of all polynomials in B. Then the polynomial x m + 1 is not in the span of B, and so, P is not finite-dimensional.
82. (1) Let S = {v1 , … , v n} be a linearly independent set of vectors. Suppose, by way of contradiction, that S does not span V. Then there exists v ∈ V such that v ∉ span ( v1 , … , v n ). So, the set {v1 , … , v n , v} is linearly independent, which is impossible by Theorem 4.10. So, S does span V, and therefore is a basis. (2) Let S = {v1 , … , v n} span V. Suppose, by way of contradiction, that S is linearly dependent. Then, some v i ∈ S is a linear combination of the other vectors in S. Without loss of generality, you can assume that v n is a linear combination of
v1 , … , v n −1 , and therefore, {v1 , … , v n −1} spans V.
But, n − 1 vectors span a vector space of dimension at most n − 1, a contradiction. So, S is linearly independent, and therefore a basis.

84. Let the number of vectors in S be n. If S is linearly independent, then you are done. If not, some v ∈ S is a linear combination of the other vectors in S. Let S1 = S − {v}. Note that span(S) = span(S1) because v is a linear combination of vectors in S1. Now consider the spanning set S1. If S1 is linearly independent, you are done. If not, repeat the process of removing a vector that is a linear combination of the other vectors in S1 to obtain a spanning set S2, and continue in the same way. This process must terminate because the original set S is finite and each removal produces a spanning set with fewer vectors than the previous one. So, in at most n − 1 steps the process terminates, leaving a minimal spanning set, which is linearly independent and contained in S.

86. If a set {v1, …, vm} spans V where m < n, then by Exercise 84 you could reduce this set to a linearly independent spanning set of fewer than n vectors. But then dim V < n, a contradiction.
Section 4.6 Rank of a Matrix and Systems of Linear Equations

2. (a) Because this matrix row reduces to
      [1 2; 0 1] or [1 0; 0 1]
   the rank of the matrix is 2.
   (b) A basis for the row space of the matrix is {(1, 2), (0, 1)} (or {(1, 0), (0, 1)}).
   (c) Row-reducing the transpose of the original matrix produces
      [1 1/2; 0 1] or [1 0; 0 1].
   So, a basis for the column space of the matrix is {(1, 1/2), (0, 1)} (or {(1, 0), (0, 1)}).

4. (a) Because this matrix is row-reduced already, the rank is 1.
   (b) A basis for the row space is {(0, 1, −2)}.
   (c) A basis for the column space is {[1]}.
6. (a) Because this matrix row reduces to
      [1 0 3/2; 0 1 5/4]
   the rank of the matrix is 2.
   (b) A basis for the row space is {(1, 0, 3/2), (0, 1, 5/4)}.
   (c) Row-reducing the transpose of the original matrix produces
      [1 0; 0 1; 0 0].
   So, a basis for the column space of the matrix is {(1, 0), (0, 1)}.

8. (a) Because this matrix row reduces to
      [1 0 4/5; 0 1 1/5; 0 0 0]
   the rank of the matrix is 2.
   (b) A basis for the row space is {(1, 0, 4/5), (0, 1, 1/5)}.
   (c) Row-reducing the transpose of the original matrix produces
      [1 0 23/7; 0 1 2/7; 0 0 0].
   So, a basis for the column space of the matrix is {(1, 0, 23/7), (0, 1, 2/7)}, or the first 2 columns of the original matrix.

10. (a) Because this matrix row reduces to
      [1 2 0 3; 0 0 1 4; 0 0 0 0; 0 0 0 0]
    the rank of the matrix is 2.
    (b) A basis for the row space is {(1, 2, 0, 3), (0, 0, 1, 4)}.
    (c) Row-reducing the transpose of the original matrix produces
      [1 0 5/9 2/9; 0 1 −4/9 2/9; 0 0 0 0; 0 0 0 0].
    So, a basis for the column space is {(1, 0, 5/9, 2/9), (0, 1, −4/9, 2/9)}, or the first and third columns of the original matrix.
12. (a) Because this matrix row reduces to
      [1 0 0 0 0; 0 1 0 0 0; 0 0 1 0 0; 0 0 0 1 0; 0 0 0 0 1]
    the rank of the matrix is 5.
    (b) A basis for the row space is {(1, 0, 0, 0, 0), (0, 1, 0, 0, 0), (0, 0, 1, 0, 0), (0, 0, 0, 1, 0), (0, 0, 0, 0, 1)}.
    (c) Row-reducing the transpose of the original matrix produces the same 5 × 5 identity matrix. So, a basis for the column space is {(1, 0, 0, 0, 0), (0, 1, 0, 0, 0), (0, 0, 1, 0, 0), (0, 0, 0, 1, 0), (0, 0, 0, 0, 1)}.
14. Use v1, v2, and v3 to form the rows of matrix A. Then write A in row-echelon form.
   A = [4 2 −1; 1 2 −8; 0 1 2] → B = [1 0 0; 0 1 0; 0 0 1]
So, the nonzero row vectors of B, w1 = (1, 0, 0), w2 = (0, 1, 0), and w3 = (0, 0, 1), form a basis for the row space of A. That is, they form a basis for the subspace spanned by S.

16. Use v1, v2, and v3 to form the rows of matrix A. Then write A in row-echelon form.
   A = [1 2 2; −1 0 0; 1 1 1] → B = [1 0 0; 0 1 1; 0 0 0]
So, the nonzero row vectors of B, w1 = (1, 0, 0) and w2 = (0, 1, 1), form a basis for the row space of A. That is, they form a basis for the subspace spanned by S.

18. Begin by forming the matrix whose rows are the vectors in S.
   [6 −3 6 34; 3 −2 3 19; 8 3 −9 6; −2 0 6 −5]
This matrix reduces to
   [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 1].
So, a basis for span(S) is {(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)}. (span(S) = R⁴)

20. Form the matrix whose rows are the vectors in S, and then row-reduce.
   [2 5 −3 −2; −2 −3 2 −5; 1 3 −2 2; −1 −5 3 5] ⇒ [1 0 0 3; 0 1 0 −13; 0 0 1 −19; 0 0 0 0]
So, a basis for span(S) is {(1, 0, 0, 3), (0, 1, 0, −13), (0, 0, 1, −19)}.

22. Solving the system Ax = 0 yields solutions of the form (t, 2t), where t is any real number. The dimension of the solution space is 1, and a basis is {(1, 2)}.

24. Solving the system Ax = 0 yields solutions of the form (−4s − 2t, s, t), where s and t are any real numbers. The dimension of the solution space is 2, and a basis is {(−4, 1, 0), (−2, 0, 1)}.

26. Solving the system Ax = 0 yields solutions of the form (−4t, t, 0), where t is any real number. The dimension of the solution space is 1, and a basis is {(−4, 1, 0)}.

28. Solving the system Ax = 0 yields solutions of the form (2s − t, s, t), where s and t are any real numbers. The dimension of the solution space is 2, and a basis is {(−1, 0, 1), (2, 1, 0)}.

30. Solving the system Ax = 0 yields solutions of the form (2s − 5t, −s + t, s, t), where s and t are any real numbers. The dimension of the solution space is 2, and a basis is {(−5, 1, 0, 1), (2, −1, 1, 0)}.

32. The only solution to the system Ax = 0 is the trivial solution. So, the solution space is {(0, 0, 0, 0)}, whose dimension is 0.

34. (a) The only solution to this system is the trivial solution x = y = z = 0. So, the solution space is {(0, 0, 0)}.
    (b) The dimension of the solution space is 0.

36. (a) This system yields solutions of the form (4t − 2s, s, t), where s and t are any real numbers, and a basis for the solution space is {(−2, 1, 0), (4, 0, 1)}.
    (b) The dimension of the solution space is 2.

38. (a) This system yields solutions of the form ((5/8)t, −(15/8)t, (9/8)t, t), where t is any real number, and a basis for the solution space is {(5/8, −15/8, 9/8, 1)} or {(5, −15, 9, 8)}.
    (b) The dimension of the solution space is 1.

40. (a) This system yields solutions of the form (−t + 2s − r, −4t − 8s − (1/3)r, r, s, t), where r, s, and t are any real numbers, and a basis is {(−1, −4, 0, 0, 1), (2, −8, 0, 1, 0), (−1, −1/3, 1, 0, 0)}.
    (b) The dimension of the solution space is 3.
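The span computations in Exercises 18 and 20 can be cross-checked with a rank computation. A sketch using NumPy (our tooling, not the text's):

```python
import numpy as np

# Exercise 18: the four vectors span R^4 (full rank)
A18 = np.array([[6, -3, 6, 34],
                [3, -2, 3, 19],
                [8, 3, -9, 6],
                [-2, 0, 6, -5]], dtype=float)
print(np.linalg.matrix_rank(A18))  # 4, so span(S) = R^4

# Exercise 20: rank 3, so span(S) is a 3-dimensional subspace of R^4
A20 = np.array([[2, 5, -3, -2],
                [-2, -3, 2, -5],
                [1, 3, -2, 2],
                [-1, -5, 3, 5]], dtype=float)
print(np.linalg.matrix_rank(A20))  # 3
```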
42. (a) The system Ax = b is inconsistent because its augmented matrix reduces to
   [1 0 0 −2 0; 0 1 0 −1/2 0; 0 0 1 1/2 0; 0 0 0 0 1].

44. (a) The system Ax = b is consistent because its augmented matrix reduces to
   [1 −2 0 4; 0 0 1 0; 0 0 0 0].
    (b) The solutions of Ax = b are of the form (4 + 2t, t, 0), where t is any real number. That is,
   x = t[2; 1; 0] + [4; 0; 0],
where xh = t[2; 1; 0] and xp = [4; 0; 0].

46. (a) The system Ax = b is consistent because its augmented matrix reduces to
   [1 0 4 −5 6 0; 0 1 2 2 4 1; 0 0 0 0 0 0].
    (b) The solutions of the system are of the form
   (−6t + 5s − 4r, 1 − 4t − 2s − 2r, r, s, t),
where r, s, and t are any real numbers. That is,
   x = r[−4; −2; 1; 0; 0] + s[5; −2; 0; 1; 0] + t[−6; −4; 0; 0; 1] + [0; 1; 0; 0; 0],
where xh = r[−4; −2; 1; 0; 0] + s[5; −2; 0; 1; 0] + t[−6; −4; 0; 0; 1] and xp = [0; 1; 0; 0; 0].

48. The vector b is not in the column space of A because the linear system Ax = b is inconsistent.

50. The vector b is not in the column space of A because the linear system Ax = b is inconsistent.

52. The rank of the matrix is at most 3. So, the dimension of the column space is at most 3, and any four vectors in the column space must form a linearly dependent set.

54. Many examples are possible. For instance,
   [1 0; 0 0][0 0; 0 1] = [0 0; 0 0],
where the factors have rank 1 and rank 1, but the product has rank 0.

56. Let A = [aij] be an m × n matrix in row-echelon form. The nonzero row vectors r1, …, rk of A have the form (if the first column of A is not all zero)
   r1 = (e11, …, e1p, …, e1q, …)
   r2 = (0, …, 0, e2p, …, e2q, …)
   r3 = (0, …, 0, 0, …, 0, e3q, …),
and so forth, where e11, e2p, e3q denote leading ones. Then the equation
   c1r1 + c2r2 + ⋯ + ckrk = 0
implies that
   c1e11 = 0, c1e1p + c2e2p = 0, c1e1q + c2e2q + c3e3q = 0,
and so forth. You can conclude in turn that c1 = 0, c2 = 0, …, ck = 0, and so the row vectors are linearly independent.

58. Suppose that the three points are collinear. If they are on the same vertical line, then x1 = x2 = x3. So, the matrix has two equal columns, and its rank is less than 3. Similarly, if the three points lie on the nonvertical line y = mx + b, you have
   [x1 y1 1; x2 y2 1; x3 y3 1] = [x1 mx1+b 1; x2 mx2+b 1; x3 mx3+b 1].
Because the second column is a linear combination of the first and third columns, the determinant is zero, and the rank is less than 3. On the other hand, if the rank of the matrix
   [x1 y1 1; x2 y2 1; x3 y3 1]
is less than three, then the determinant is zero, which implies that the three points are collinear.
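Exercise 58's criterion is easy to check numerically. A sketch comparing a collinear triple with a non-collinear one (the helper name and sample points are ours):

```python
import numpy as np

def collinear(p1, p2, p3):
    """Three points are collinear iff the 3x3 matrix [[x, y, 1], ...] is singular."""
    M = np.array([[p1[0], p1[1], 1],
                  [p2[0], p2[1], 1],
                  [p3[0], p3[1], 1]], dtype=float)
    return abs(np.linalg.det(M)) < 1e-9

print(collinear((0, 0), (1, 2), (2, 4)))  # True  (all on y = 2x)
print(collinear((0, 0), (1, 0), (0, 1)))  # False
```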
60. For n = 2, [1 2; 3 4] has rank 2.
For n = 3, [1 2 3; 4 5 6; 7 8 9] has rank 2.
In general, for n ≥ 2, the rank is 2, because rows 3, …, n are linear combinations of the first two rows. For example, R3 = 2R2 − R1.

62. (a) True. The null space of A is also called the solution space of the system Ax = 0.
    (b) True. The null space of A is the solution space of the homogeneous system Ax = 0.

64. (a) False. See "Remark," page 234.
    (b) False. See Theorem 4.19, page 244.

66. (a) rank(A) = rank(B) = 3
       nullity(A) = n − rank(A) = 5 − 3 = 2
    (b) Matrix B represents the linear system of equations
       x1      + 3x3      − 4x5 = 0
            x2 −  x3      + 2x5 = 0
                       x4 − 2x5 = 0.
    Choosing the non-essential variables x3 and x5 as the free variables s and t produces the solution
       x1 = −3s + 4t, x2 = s − 2t, x3 = s, x4 = 2t, x5 = t.
    So, a basis for the null space of A is
       {[−3; 1; 1; 0; 0], [4; −2; 0; 2; 1]}.
    (c) By Theorem 4.14, the nonzero row vectors of B form a basis for the row space of A: {(1, 0, 3, 0, −4), (0, 1, −1, 0, 2), (0, 0, 0, 1, −2)}.
    (d) The columns of B with leading ones are the first, second, and fourth. The corresponding columns of A form a basis for the column space of A: {(1, 2, 3, 4), (2, 5, 7, 9), (0, 1, 2, −1)}.
    (e) Because the row-equivalent form B contains a row of zeros, the rows of A are linearly dependent.
    (f) Because a3 = 3a1 − a2, {a1, a2, a3} is linearly dependent. So, (i) and (iii) are linearly independent.

68. Let A be an m × n matrix.
    (a) If Ax = b is consistent for all vectors b, then the augmented matrix [A b] cannot row-reduce to [U b′] where the last row of U consists of all zeros. So, the rank of A is m. Conversely, if the rank of A is m, then rank(A) = rank([A b]) for all vectors b, which implies that Ax = b is consistent.
    (b) Ax is a linear combination x1a1 + ⋯ + xnan of the columns of A. So, Ax = x1a1 + ⋯ + xnan = 0 has only the trivial solution x1 = ⋯ = xn = 0 if and only if a1, …, an are linearly independent.
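Part (b) of Exercise 66 can be verified by multiplying the reduced matrix B quoted above by the claimed null-space basis vectors, and the rank-nullity count can be confirmed at the same time. A sketch:

```python
import numpy as np

B = np.array([[1, 0, 3, 0, -4],
              [0, 1, -1, 0, 2],
              [0, 0, 0, 1, -2]], dtype=float)
v1 = np.array([-3, 1, 1, 0, 0], dtype=float)
v2 = np.array([4, -2, 0, 2, 1], dtype=float)

print(B @ v1)  # zero vector: v1 is in the null space
print(B @ v2)  # zero vector: v2 is in the null space
# rank + nullity = number of columns: 3 + 2 = 5
print(np.linalg.matrix_rank(B))  # 3
```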
70. Let x ∈ N ( A) ⇒ Ax = 0 ⇒ AT Ax = 0 ⇒ x ∈ N ( AT A).
Section 4.7 Coordinates and Change of Basis

2. Because [x]B = [−2; 3], you can write
   x = −2(−1, 4) + 3(4, −1) = (14, −11),
which implies that the coordinates of x relative to the standard basis S are [x]S = [14; −11].

4. Because [x]B = [2; 0; 4], you can write
   x = 2(3/4, 5/2, 3/2) + 0(3, 4, 7/2) + 4(−3/2, 6, 2) = (−9/2, 29, 11),
which implies that the coordinates of x relative to the standard basis S are [x]S = [−9/2; 29; 11].

6. Because [x]B = [−2; 3; 4; 1], you can write
   x = −2(4, 0, 7, 3) + 3(0, 5, −1, −1) + 4(−3, 4, 2, 1) + 1(0, 1, 5, 0) = (−20, 32, −4, −5),
which implies that the coordinates of x relative to the standard basis S are [x]S = [−20; 32; −4; −5].
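Converting B-coordinates to standard coordinates, as in Exercises 2 through 6, is a matrix-vector product with the basis vectors as columns. A sketch for Exercise 6 (matrix name is ours):

```python
import numpy as np

# Columns of M are the vectors of B in Exercise 6.
M = np.array([[4, 0, -3, 0],
              [0, 5, 4, 1],
              [7, -1, 2, 5],
              [3, -1, 1, 0]], dtype=float)
x_B = np.array([-2, 3, 4, 1], dtype=float)
print(M @ x_B)  # the standard coordinates (-20, 32, -4, -5)
```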
8. Begin by writing x as a linear combination of the vectors in B.
   x = (−26, 32) = c1(−6, 7) + c2(4, −3)
Equating corresponding components yields the following system of linear equations.
   −6c1 + 4c2 = −26
    7c1 − 3c2 =  32
The solution of this system is c1 = 5 and c2 = 1. So, x = 5(−6, 7) + 1(4, −3) and [x]B = [5; 1].

10. Begin by writing x as a linear combination of the vectors in B.
   x = (3, −1/2, 8) = c1(3/2, 4, 1) + c2(3/4, 5/2, 0) + c3(1, 1/2, 2)
Equating corresponding components yields the following system of linear equations.
   (3/2)c1 + (3/4)c2 +      c3 =   3
      4c1  + (5/2)c2 + (1/2)c3 = −1/2
       c1            +     2c3 =   8
The solution of this system is c1 = 2, c2 = −4, and c3 = 3. So,
   x = 2(3/2, 4, 1) − 4(3/4, 5/2, 0) + 3(1, 1/2, 2) and [x]B = [2; −4; 3].
12. Begin by writing x as a linear combination of the vectors in B.
   x = (0, −20, 7, 15) = c1(9, −3, 15, 4) + c2(3, 0, 0, 1) + c3(0, −5, 6, 8) + c4(3, −4, 2, −3)
Equating corresponding components yields the following system of linear equations.
    9c1 + 3c2        + 3c4 =   0
   −3c1        − 5c3 − 4c4 = −20
   15c1        + 6c3 + 2c4 =   7
    4c1 +  c2  + 8c3 − 3c4 =  15
The solution of this system is c1 = −1, c2 = 1, c3 = 3, and c4 = 2. So,
   (0, −20, 7, 15) = −1(9, −3, 15, 4) + 1(3, 0, 0, 1) + 3(0, −5, 6, 8) + 2(3, −4, 2, −3), and [x]B = [−1; 1; 3; 2].
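Exercise 12's system can be solved directly by putting the basis vectors in the columns of a matrix. A sketch:

```python
import numpy as np

# Columns are the vectors of B in Exercise 12.
B = np.array([[9, 3, 0, 3],
              [-3, 0, -5, -4],
              [15, 0, 6, 2],
              [4, 1, 8, -3]], dtype=float)
x = np.array([0, -20, 7, 15], dtype=float)
c = np.linalg.solve(B, x)
print(c)  # c1 = -1, c2 = 1, c3 = 3, c4 = 2
```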
14. Begin by forming the matrix
   [B′ B] = [1 5 | 1 0; 1 6 | 0 1]
and then use Gauss-Jordan elimination to produce
   [I2 | P⁻¹] = [1 0 | 6 −5; 0 1 | −1 1].
So, the transition matrix from B to B′ is
   P⁻¹ = [6 −5; −1 1].

16. Begin by forming the matrix
   [B′ B] = [1 0 | 1 1; 0 1 | 1 0].
Because this matrix is already in the form [I2 | P⁻¹], you see that the transition matrix from B to B′ is
   P⁻¹ = [1 1; 1 0].

18. Begin by forming the matrix
   [B′ B] = [1 2 2 | 1 0 0; 3 7 9 | 0 1 0; −1 −4 −7 | 0 0 1]
and then use Gauss-Jordan elimination to produce
   [I3 | P⁻¹] = [1 0 0 | −13 6 4; 0 1 0 | 12 −5 −3; 0 0 1 | −5 2 1].
So, the transition matrix from B to B′ is
   P⁻¹ = [−13 6 4; 12 −5 −3; −5 2 1].

20. Begin by forming the matrix
   [B′ B] = [1 −1 | −2 3; 2 0 | 1 2]
and then use Gauss-Jordan elimination to produce
   [I2 | P⁻¹] = [1 0 | 1/2 1; 0 1 | 5/2 −2].
So, the transition matrix from B to B′ is
   P⁻¹ = [1/2 1; 5/2 −2].

22. Begin by forming the matrix
   [B′ B] = [1 0 0 | 2 0 −3; 0 1 0 | −1 2 2; 0 0 1 | 4 1 1].
Because this matrix is already in the form [I3 | P⁻¹], you see that the transition matrix from B to B′ is
   P⁻¹ = [2 0 −3; −1 2 2; 4 1 1].

24. Begin by forming the matrix
   [B′ B] = [1 0 −1 | 3 1 1; 1 1 4 | 2 1 2; −1 2 0 | 1 2 0]
and then use Gauss-Jordan elimination to produce
   [I3 | P⁻¹] = [1 0 0 | 27/11 8/11 12/11; 0 1 0 | 19/11 15/11 6/11; 0 0 1 | −6/11 −3/11 1/11].
So, the transition matrix from B to B′ is
   P⁻¹ = [27/11 8/11 12/11; 19/11 15/11 6/11; −6/11 −3/11 1/11].
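Equivalently, row-reducing [B′ B] computes (B′)⁻¹B with the basis vectors as columns. A sketch checking Exercises 14 and 20 (the explicit inverse is used only for illustration):

```python
import numpy as np

# Exercise 14: B' = {(1, 1), (5, 6)}, B = standard basis of R^2
P14 = np.linalg.inv(np.array([[1.0, 5], [1, 6]])) @ np.eye(2)
print(P14)  # the transition matrix [6 -5; -1 1]

# Exercise 20: B' = {(1, 2), (-1, 0)}, B = {(-2, 1), (3, 2)}
P20 = np.linalg.inv(np.array([[1.0, -1], [2, 0]])) @ np.array([[-2.0, 3], [1, 2]])
print(P20)  # the transition matrix [1/2 1; 5/2 -2]
```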
26. Begin by forming the matrix
   [B′ B] = [1 0 0 0 | 1 0 0 0; 0 1 0 0 | 1 1 0 0; 0 0 1 0 | 1 1 1 0; 0 0 0 1 | 1 1 1 1].
Because this matrix is already in the form [I4 | P⁻¹], you see that the transition matrix from B to B′ is
   P⁻¹ = [1 0 0 0; 1 1 0 0; 1 1 1 0; 1 1 1 1].

28. Begin by forming the matrix
   [B′ B] = [2 3 0 2 0; 4 −1 0 −1 1; −2 0 −2 2 2; 1 1 4 1 −3; 0 2 5 1 1 | I5]
and then use Gauss-Jordan elimination to produce [I5 | P⁻¹]. So, the transition matrix from B to B′ is
   P⁻¹ = [  12/157   32/157    5/314   10/157   −7/157;
            45/157  −37/157  −99/314  −41/157   13/157;
           −17/157    7/157    3/157   12/157   23/157;
            −1/157   47/314  287/628  103/314  −25/314;
            −4/157   31/314   49/628  −59/314   57/314 ].
30. (a) [B′ B] = [1 32 | 2 6; 1 31 | −2 3] ⇒ [1 0 | −126 −90; 0 1 | 4 3] = [I | P⁻¹], so
       P⁻¹ = [−126 −90; 4 3].
    (b) [B B′] = [2 6 | 1 32; −2 3 | 1 31] ⇒ [1 0 | −1/6 −5; 0 1 | 2/9 7] = [I | P], so
       P = [−1/6 −5; 2/9 7].
    (c) PP⁻¹ = [−1/6 −5; 2/9 7][−126 −90; 4 3] = [1 0; 0 1]
    (d) [x]B = P[x]B′ = [−1/6 −5; 2/9 7][2; −1] = [14/3; −59/9]
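Parts (c) and (d) of Exercise 30 can be verified numerically. A sketch:

```python
import numpy as np

P = np.array([[-1/6, -5], [2/9, 7]])
P_inv = np.array([[-126.0, -90], [4, 3]])

print(np.allclose(P @ P_inv, np.eye(2)))  # True: P and P_inv are inverses
print(P @ np.array([2.0, -1]))            # the coordinates [14/3, -59/9]
```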
32. (a) [B′ B] = [2 0 1 | 1 1 0; 2 1 0 | 1 −1 0; 0 1 1 | 1 1 1] ⇒ [I3 | P⁻¹], where
       P⁻¹ = [1/4 −1/4 −1/4; 1/2 −1/2 1/2; 1/2 3/2 1/2].
    (b) [B B′] = [1 1 0 | 2 0 1; 1 −1 0 | 2 1 0; 1 1 1 | 0 1 1] ⇒ [I3 | P], where
       P = [2 1/2 1/2; 0 −1/2 1/2; −2 1 0].
    (c) PP⁻¹ = [2 1/2 1/2; 0 −1/2 1/2; −2 1 0][1/4 −1/4 −1/4; 1/2 −1/2 1/2; 1/2 3/2 1/2] = [1 0 0; 0 1 0; 0 0 1] = I
    (d) [x]B = P[x]B′ = [2 1/2 1/2; 0 −1/2 1/2; −2 1 0][2; 3; 1] = [6; −1; −1]

34. (a) [B′ B] = [1 4 −2 | 1 2 −4; 2 1 5 | 3 −5 2; −2 −4 8 | 4 2 −6] ⇒ [I3 | P⁻¹], where
       P⁻¹ = [−11/16 −55/16 73/16; 25/32 45/32 −83/32; 23/32 3/32 −29/32].
    (b) [B B′] = [1 2 −4 | 1 4 −2; 3 −5 2 | 2 1 5; 4 2 −6 | −2 −4 8] ⇒ [I3 | P], where
       P = [−33/13 −86/13 80/13; −37/13 −85/13 57/13; −30/13 −77/13 55/13].
    (c) Using a graphing utility, you have PP⁻¹ = I.
    (d) [x]B = P[x]B′ = P[−1; 0; 2] = [193/13; 151/13; 140/13]

36. The standard basis in P2 is S = {1, x, x²} and because
   p = 13(1) + 114(x) + 3(x²),
it follows that [p]S = [13; 114; 3].

38. The standard basis in P2 is S = {1, x, x²} and because
   p = −2(1) − 3(x) + 4(x²),
it follows that [p]S = [−2; −3; 4].

40. The standard basis in M3,1 is
   S = {[1; 0; 0], [0; 1; 0], [0; 0; 1]}
and because
   X = 2[1; 0; 0] − 1[0; 1; 0] + 4[0; 0; 1],
it follows that [X]S = [2; −1; 4].
42. The standard basis in M3,1 is
   S = {[1; 0; 0], [0; 1; 0], [0; 0; 1]}
and because
   X = 1[1; 0; 0] + 0[0; 1; 0] − 4[0; 0; 1],
it follows that [X]S = [1; 0; −4].

44. (a) True. If P is the transition matrix from B to B′, then P[x]B = [x]B′. Multiplying both sides by P⁻¹, you see that [x]B = P⁻¹[x]B′, i.e., P⁻¹ is the transition matrix from B′ to B.
    (b) False. The transition matrix from B to B′ is obtained by row-reducing [B′ B], not [B B′].

46. Let P be the transition matrix from B′′ to B′ and let Q be the transition matrix from B′ to B. Then for any vector x the coordinate matrices with respect to these bases are related as follows.
   [x]B′ = P[x]B′′ and [x]B = Q[x]B′
Then the transition matrix from B′′ to B is QP because
   [x]B = Q[x]B′ = QP[x]B′′.
So the transition matrix from B to B′′, which is the inverse of the transition matrix from B′′ to B, is equal to
   (QP)⁻¹ = P⁻¹Q⁻¹.

48. Yes, if the bases are the same, B = B′.
Section 4.8 Applications of Vector Spaces

2. (a) If y = x, then y′ = 1, y′′ = 0, and y′′′ = 0. So, y′′′ + 3y′′ + 3y′ + y = 3 + x ≠ 0, and x is not a solution.
   (b) If y = e^x, then y′ = y′′ = y′′′ = e^x. So, y′′′ + 3y′′ + 3y′ + y = e^x + 3e^x + 3e^x + e^x ≠ 0, and e^x is not a solution.
   (c) If y = e^(−x), then y′ = −e^(−x), y′′ = e^(−x), and y′′′ = −e^(−x). So, y′′′ + 3y′′ + 3y′ + y = −e^(−x) + 3e^(−x) − 3e^(−x) + e^(−x) = 0, and e^(−x) is a solution.
   (d) If y = xe^(−x), then y′ = (1 − x)e^(−x), y′′ = (x − 2)e^(−x), and y′′′ = (3 − x)e^(−x). So, y′′′ + 3y′′ + 3y′ + y = (3 − x)e^(−x) + 3(x − 2)e^(−x) + 3(1 − x)e^(−x) + xe^(−x) = 0, and xe^(−x) is a solution.

4. (a) If y = 1, then y′ = y′′ = y′′′ = y′′′′ = 0. So, y′′′′ − 2y′′′ + y′′ = 0, and 1 is a solution.
   (b) If y = x, then y′ = 1 and y′′ = y′′′ = y′′′′ = 0. So, y′′′′ − 2y′′′ + y′′ = 0, and x is a solution.
   (c) If y = x², then y′ = 2x, y′′ = 2, and y′′′ = y′′′′ = 0. So, y′′′′ − 2y′′′ + y′′ = 2 ≠ 0, and x² is not a solution.
   (d) If y = e^x, then y′ = y′′ = y′′′ = y′′′′ = e^x. So, y′′′′ − 2y′′′ + y′′ = 0, and e^x is a solution.
6. (a) If y = x, then y′ = 1 and y′′ = 0. So, xy′′ + 2y′ = x(0) + 2(1) ≠ 0, and y = x is not a solution.
   (b) If y = 1/x, then y′ = −1/x² and y′′ = 2/x³. So, xy′′ + 2y′ = x(2/x³) + 2(−1/x²) = 0, and y = 1/x is a solution.
   (c) If y = xe^x, then y′ = xe^x + e^x and y′′ = xe^x + 2e^x. So, xy′′ + 2y′ = x(xe^x + 2e^x) + 2(xe^x + e^x) ≠ 0, and y = xe^x is not a solution.
   (d) If y = xe^(−x), then y′ = e^(−x) − xe^(−x) and y′′ = xe^(−x) − 2e^(−x). So, xy′′ + 2y′ = x(xe^(−x) − 2e^(−x)) + 2(e^(−x) − xe^(−x)) ≠ 0, and y = xe^(−x) is not a solution.
8. (a) If y = 3e^(x²), then y′ = 6xe^(x²). So, y′ − 2xy = 6xe^(x²) − 2x(3e^(x²)) = 0, and y = 3e^(x²) is a solution.
   (b) If y = xe^(x²), then y′ = 2x²e^(x²) + e^(x²). So, y′ − 2xy = 2x²e^(x²) + e^(x²) − 2x(xe^(x²)) ≠ 0, and y = xe^(x²) is not a solution.
   (c) If y = x²e^x, then y′ = x²e^x + 2xe^x. So, y′ − 2xy = x²e^x + 2xe^x − 2x(x²e^x) ≠ 0, and y = x²e^x is not a solution.
   (d) If y = xe^(−x), then y′ = e^(−x) − xe^(−x). So, y′ − 2xy = e^(−x) − xe^(−x) − 2x(xe^(−x)) ≠ 0, and y = xe^(−x) is not a solution.
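The substitution checks in Exercise 8 can be automated by comparing a numerical derivative with 2xy. A sketch using a central difference (the helper name, step size, and sample points are ours):

```python
import numpy as np

def residual(y, x, h=1e-6):
    """Approximate y'(x) - 2*x*y(x) with a central difference."""
    dy = (y(x + h) - y(x - h)) / (2 * h)
    return dy - 2 * x * y(x)

xs = np.linspace(-1.0, 1.0, 5)
sol = lambda x: 3 * np.exp(x**2)   # part (a): a solution, residual ~ 0
non = lambda x: x * np.exp(x**2)   # part (b): not a solution

print(max(abs(residual(sol, x)) for x in xs))  # tiny (numerical noise only)
print(max(abs(residual(non, x)) for x in xs))  # clearly nonzero
```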
10. W(e^(x²), e^(−x²)) = | e^(x²) e^(−x²); 2xe^(x²) −2xe^(−x²) | = −2x − 2x = −4x
12. W(x, −sin x, cos x) = | x −sin x cos x; 1 −cos x −sin x; 0 sin x −cos x | = x(cos²x + sin²x) = x

14. W(x, e^(−x), e^x) = | x e^(−x) e^x; 1 −e^(−x) e^x; 0 e^(−x) e^x | = −2x

16. W(x², e^x, x²e^x) = | x² e^x x²e^x; 2x e^x 2xe^x + x²e^x; 2 e^x 2e^x + 4xe^x + x²e^x | = 2x²(x − 3)e^(2x)

18. Because
   W(e^(−2x), xe^(−2x)) = | e^(−2x) xe^(−2x); −2e^(−2x) (1 − 2x)e^(−2x) | = e^(−4x) ≠ 0,
the set is linearly independent.

20. Because
   W(1, sin x, cos x) = | 1 sin x cos x; 0 cos x −sin x; 0 −sin x −cos x | = −cos²x − sin²x = −1 ≠ 0,
the set is linearly independent.
22. Because
   W(e^(−x), xe^(−x), x²e^(−x)) = | e^(−x) xe^(−x) x²e^(−x); −e^(−x) (1 − x)e^(−x) (2x − x²)e^(−x); e^(−x) (x − 2)e^(−x) (x² − 4x + 2)e^(−x) |
                                = e^(−3x) | 1 x x²; −1 1 − x 2x − x²; 1 x − 2 x² − 4x + 2 |
                                = e^(−3x) | 1 x x²; 0 1 2x; 0 −2 −4x + 2 |
                                = 2e^(−3x) ≠ 0,
the set is linearly independent.
24. Because
   W(1, x, e^x, xe^x) = | 1 x e^x xe^x; 0 1 e^x (x + 1)e^x; 0 0 e^x (x + 2)e^x; 0 0 e^x (x + 3)e^x |
                      = | e^x (x + 2)e^x; e^x (x + 3)e^x |
                      = e^(2x)(x + 3) − e^(2x)(x + 2) = e^(2x) ≠ 0,
the set is linearly independent.

26. From Exercise 18 you have a set of two linearly independent solutions. Because y′′ + 4y′ + 4y = 0 is second order, it has a general solution of the form C1e^(−2x) + C2xe^(−2x).

28. From Exercise 24 you have a set of four linearly independent solutions. Because y′′′′ − 2y′′′ + y′′ = 0 is fourth order, it has a general solution of the form C1 + C2x + C3e^x + C4xe^x.

30. First calculate the Wronskian of the two functions.
   W(e^(ax), e^(bx)) = | e^(ax) e^(bx); ae^(ax) be^(bx) | = (b − a)e^((a+b)x)
If a ≠ b, then W(e^(ax), e^(bx)) ≠ 0. Because e^(ax) and e^(bx) are solutions to y′′ − (a + b)y′ + aby = 0, the functions are linearly independent. On the other hand, if a = b, then e^(ax) = e^(bx), and the functions are linearly dependent.

32. First calculate the Wronskian.
   W(e^(ax) cos bx, e^(ax) sin bx) = | e^(ax) cos bx e^(ax) sin bx; e^(ax)(a cos bx − b sin bx) e^(ax)(a sin bx + b cos bx) | = be^(2ax) ≠ 0,
because b ≠ 0. Because these functions satisfy the differential equation y′′ − 2ay′ + (a² + b²)y = 0, they are linearly independent.

34. No, this is not true. For instance, consider the nonhomogeneous differential equation y′′ = 1. Clearly, y = x²/2 is a solution, whereas the scalar multiple 2(x²/2) is not.
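A Wronskian value can be spot-checked numerically by filling in the derivative rows at a point. For the set {1, x, e^x, xe^x} of Exercise 24, the Wronskian should equal e^(2x), which is 1 at x = 0. A sketch:

```python
import numpy as np

x = 0.0
ex = np.exp(x)
# Rows: the functions, then their first, second, and third derivatives, at x.
W = np.array([[1, x, ex, x * ex],
              [0, 1, ex, (x + 1) * ex],
              [0, 0, ex, (x + 2) * ex],
              [0, 0, ex, (x + 3) * ex]])
print(np.linalg.det(W))  # approximately 1 (= e^{2x} at x = 0)
```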
36. The graph of this equation is a parabola x = −y²/8 with the vertex at the origin. The parabola opens to the left. (Graph: y² + 8x = 0)

38. By rewriting the equation as
   x²/3 + y²/5 = 1,
you see that this is the equation of an ellipse centered at the origin with major axis falling along the y-axis. (Graph: 5x² + 3y² − 15 = 0)
40. The graph of this equation is a hyperbola
   x²/16 − y²/25 = 1
centered at the origin with transverse axis along the x-axis.

42. The graph of this equation is a parabola (y − 3)² = 4(x − 3) with the vertex at (3, 3). The parabola opens to the right. (Graph: y² − 6y − 4x + 21 = 0)

44. The graph of this equation is an ellipse
   (x − 1)²/(1/4) + y²/1 = 1
with the center at (1, 0). (Graph: 4x² + y² − 8x + 3 = 0)

46. The graph of this equation is a hyperbola
   (y − 1/2)²/2 − (x + 2)²/4 = 1
centered at (−2, 1/2) with a vertical transverse axis. (Graph: 4y² − 2x² − 4y − 8x − 15 = 0)

48. The graph of this equation is a circle (x − 3)² + y² = 1/4 with the center at (3, 0) and a radius of 1/2. (Graph: 4y² + 4x² − 24x + 35 = 0)

50. The graph of this equation is a hyperbola
   (x + 1/2)²/(1/4) − (y − 1)²/1 = 1
centered at (−1/2, 1) with a horizontal transverse axis. (Graph: 4x² − y² + 4x + 2y − 1 = 0)
52. Complete the square to find the standard form.
   (y + 3)² = 4(−2)(x + 2)
You see that this is the equation of a parabola with vertex at (−2, −3) and opening to the left. (Graph: y² + 8x + 6y + 25 = 0)

54. Begin by finding the rotation angle, θ, where
   cot 2θ = (a − c)/b = (0 − 0)/1 = 0 ⇒ θ = π/4.
So, sin θ = 1/√2 and cos θ = 1/√2. By substituting
   x = x′ cos θ − y′ sin θ = (1/√2)(x′ − y′)
and
   y = x′ sin θ + y′ cos θ = (1/√2)(x′ + y′)
into xy − 2 = 0 and simplifying, you obtain (x′)² − (y′)² − 4 = 0. In standard form
   (x′)²/4 − (y′)²/4 = 1.
This is the equation of a hyperbola with a transverse axis along the x′-axis.

56. Begin by finding the rotation angle, θ, where
   cot 2θ = (a − c)/b = (1 − 1)/2 = 0 ⇒ θ = π/4.
So, sin θ = 1/√2 and cos θ = 1/√2. By substituting
   x = (1/√2)(x′ − y′) and y = (1/√2)(x′ + y′)
into x² + 2xy + y² − 8x + 8y = 0 and simplifying, you obtain (x′)² = −4√2 y′, or y′ = −(1/(4√2))(x′)², which is a parabola.

58. Begin by finding the rotation angle, θ, where
   cot 2θ = (a − c)/b = (5 − 5)/(−6) = 0, implying that θ = π/4.
So, sin θ = 1/√2 and cos θ = 1/√2. By substituting
   x = (1/√2)(x′ − y′) and y = (1/√2)(x′ + y′)
into 5x² − 6xy + 5y² − 12 = 0 and simplifying, you obtain 2(x′)² + 8(y′)² − 12 = 0. In standard form
   (x′)²/6 + (y′)²/(3/2) = 1.
This is the equation of an ellipse centered at (0, 0) with major axis along the x′-axis.
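The rotated coefficients in Exercises 54 through 58 follow from the standard formulas a′ = a cos²θ + b cos θ sin θ + c sin²θ, b′ = b(cos²θ − sin²θ) + 2(c − a) sin θ cos θ, and c′ = a sin²θ − b sin θ cos θ + c cos²θ. A sketch checking Exercise 58 numerically (variable names are ours):

```python
import numpy as np

a, b, c = 5.0, -6.0, 5.0   # from 5x^2 - 6xy + 5y^2 - 12 = 0
t = np.pi / 4              # rotation angle theta
s, co = np.sin(t), np.cos(t)

a_new = a * co**2 + b * co * s + c * s**2
b_new = b * (co**2 - s**2) + 2 * (c - a) * s * co
c_new = a * s**2 - b * s * co + c * co**2

# a' = 2, b' = 0 (cross term eliminated), c' = 8: 2(x')^2 + 8(y')^2 = 12
print(a_new, b_new, c_new)
```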
60. Begin by finding the rotation angle, θ, where
cot 2θ = (a − c)/b = (3 − 1)/(−2√3) = −1/√3 ⇒ 2θ = 2π/3 ⇒ θ = π/3.
So, sin θ = √3/2 and cos θ = 1/2. By substituting
x = x′ cos θ − y′ sin θ = (1/2)x′ − (√3/2)y′
and
y = x′ sin θ + y′ cos θ = (√3/2)x′ + (1/2)y′
into 3x² − 2√3 xy + y² + 2x + 2√3 y = 0 and simplifying, you obtain x′ = −(y′)², which is a parabola. (Graph: x′y′-axes rotated 60° from the xy-axes.)
62. Begin by finding the rotation angle, θ, where
cot 2θ = (a − c)/b = (7 − 5)/(−2√3) = −1/√3 ⇒ 2θ = 2π/3 ⇒ θ = π/3.
So, sin θ = √3/2 and cos θ = 1/2. By substituting
x = x′ cos θ − y′ sin θ = (1/2)x′ − (√3/2)y′
and
y = x′ sin θ + y′ cos θ = (√3/2)x′ + (1/2)y′
into 7x² − 2√3 xy + 5y² = 16 and simplifying, you obtain
(x′)²/4 + (y′)²/2 = 1,
which is an ellipse with major axis along the x′-axis. (Graph: x′y′-axes rotated 60°.)

64. Begin by finding the rotation angle, θ, where
cot 2θ = (a − c)/b = (5 − 5)/(−2) = 0 ⇒ θ = π/4.
So, sin θ = 1/√2 and cos θ = 1/√2. By substituting
x = x′ cos θ − y′ sin θ = (1/√2)(x′ − y′)
and
y = x′ sin θ + y′ cos θ = (1/√2)(x′ + y′)
into 5x² − 2xy + 5y² = 0 and simplifying, you obtain 4(x′)² + 6(y′)² = 0, whose graph is a single point, (0, 0). (Graph: x′y′-axes rotated 45°.)
66. Begin by finding the rotation angle, θ, where
cot 2θ = (a − c)/b = (1 − 1)/(−10) = 0, implying that θ = π/4.
So, sin θ = 1/√2 and cos θ = 1/√2. By substituting
x = x′ cos θ − y′ sin θ = (1/√2)(x′ − y′)
and
y = x′ sin θ + y′ cos θ = (1/√2)(x′ + y′)
into x² − 10xy + y² = 0 and simplifying, you obtain 6(y′)² − 4(x′)² = 0.
The graph of this equation is the pair of lines y′ = ±(√6/3)x′. (Graph: x′y′-axes rotated 45°.)
68. Let θ satisfy cot 2θ = (a − c)/b. Substitute x = x′ cos θ − y′ sin θ and y = x′ sin θ + y′ cos θ into the equation
ax² + bxy + cy² + dx + ey + f = 0.
To show that the xy-term will be eliminated, analyze the first three terms under this substitution.
ax² + bxy + cy² = a(x′ cos θ − y′ sin θ)² + b(x′ cos θ − y′ sin θ)(x′ sin θ + y′ cos θ) + c(x′ sin θ + y′ cos θ)²
= a(x′)² cos²θ + a(y′)² sin²θ − 2ax′y′ cos θ sin θ
+ b(x′)² cos θ sin θ + bx′y′ cos²θ − bx′y′ sin²θ − b(y′)² cos θ sin θ
+ c(x′)² sin²θ + c(y′)² cos²θ + 2cx′y′ sin θ cos θ.
So, the new x′y′-terms are
−2ax′y′ cos θ sin θ + bx′y′(cos²θ − sin²θ) + 2cx′y′ sin θ cos θ
= x′y′[−a sin 2θ + b cos 2θ + c sin 2θ]
= −x′y′[(a − c) sin 2θ − b cos 2θ].
But cot 2θ = cos 2θ/sin 2θ = (a − c)/b ⇒ b cos 2θ = (a − c) sin 2θ, which shows that the coefficient is zero.

70. If
A = ⎡ a    b/2 ⎤
    ⎣ b/2  c   ⎦
and |A| = ac − b²/4 = 0, then b² = 4ac. Multiplying ax² + bxy + cy² = 0 by c, you have
acx² + bcxy + c²y² = 0 ⇒ (b²/4)x² + bcxy + c²y² = 0 ⇒ b²x² + 4bcxy + 4c²y² = 0 ⇒ (bx + 2cy)² = 0
⇒ bx + 2cy = 0, which is the equation of a line. (If c = 0, then b = 0 and the equation ax² = 0 gives the line x = 0.)
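The elimination proved in Exercise 68 is easy to confirm numerically. The sketch below is not part of the manual; the helper name `rotate_conic` is ours, and the sample coefficients come from the Exercise 62 conic 7x² − 2√3 xy + 5y² = 16.

```python
import math

def rotate_conic(a, b, c):
    # Pick theta with cot(2*theta) = (a - c)/b, i.e. tan(2*theta) = b/(a - c).
    theta = math.pi / 4 if a == c else math.atan2(b, a - c) / 2
    cs, sn = math.cos(theta), math.sin(theta)
    a_p = a*cs*cs + b*cs*sn + c*sn*sn          # coefficient of (x')^2
    b_p = b*(cs*cs - sn*sn) + 2*(c - a)*sn*cs  # coefficient of x'y' (should vanish)
    c_p = a*sn*sn - b*sn*cs + c*cs*cs          # coefficient of (y')^2
    return a_p, b_p, c_p

# Exercise 62: 7x^2 - 2*sqrt(3)*xy + 5y^2 = 16
a_p, b_p, c_p = rotate_conic(7, -2*math.sqrt(3), 5)
print(abs(b_p) < 1e-12)  # True: the cross term is eliminated
```

Note that rotation also preserves the determinant of the quadratic form, ac − b²/4, which gives an independent sanity check on the new coefficients.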
Review Exercises for Chapter 4
2. (a) u + v = (−1, 2, 1) + (0, 1, 1) = (−1, 3, 2)
(b) 2v = 2(0, 1, 1) = (0, 2, 2)
(c) u − v = (−1, 2, 1) − (0, 1, 1) = (−1, 1, 0)
(d) 3u − 2v = 3(−1, 2, 1) − 2(0, 1, 1) = (−3, 6, 3) − (0, 2, 2) = (−3, 4, 1)

4. (a) u + v = (0, 1, −1, 2) + (1, 0, 0, 2) = (1, 1, −1, 4)
(b) 2v = 2(1, 0, 0, 2) = (2, 0, 0, 4)
(c) u − v = (0, 1, −1, 2) − (1, 0, 0, 2) = (−1, 1, −1, 0)
(d) 3u − 2v = 3(0, 1, −1, 2) − 2(1, 0, 0, 2) = (0, 3, −3, 6) − (2, 0, 0, 4) = (−2, 3, −3, 2)

6. x = (1/3)[−2u + v − 2w]
= (1/3)[−2(1, −1, 2) + (0, 2, 3) − 2(0, 1, 1)]
= (1/3)[(−2, 2, −4) + (0, 0, 1)]
= (1/3)(−2, 2, −3) = (−2/3, 2/3, −1)

8. 2u + 3x = 2v − w
3x = −2u + 2v − w
x = −(2/3)u + (2/3)v − (1/3)w
= −(2/3)(1, −1, 2) + (2/3)(0, 2, 3) − (1/3)(0, 1, 1)
= (−2/3, 2/3, −4/3) + (0, 4/3, 2) − (0, 1/3, 1/3)
= (−2/3 + 0 − 0, 2/3 + 4/3 − 1/3, −4/3 + 2 − 1/3)
= (−2/3, 5/3, 1/3)

10. To write v as a linear combination of u1, u2, and u3, solve the equation
c1u1 + c2u2 + c3u3 = v
for c1, c2, and c3. This vector equation corresponds to the system
c1 − 2c2 + c3 = 4
3c1 + c2 = 5
2c1 = 4.
The solution of this system is c1 = 2, c2 = −1, and c3 = 0. So, v = 2u1 − u2.

12. To write v as a linear combination of u1, u2, and u3, solve the equation
c1u1 + c2u2 + c3u3 = v
for c1, c2, and c3. This vector equation corresponds to the system of linear equations
c1 − c2 = 4
−2c1 + 2c2 − c3 = −13
c1 + 3c2 − c3 = −5
c1 + 2c2 − c3 = −4.
The solution of this system is c1 = 3, c2 = −1, and c3 = 5. So, v = 3u1 − u2 + 5u3.
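The combination found in Exercise 12 can be checked directly. In the sketch below (not part of the manual), u1, u2, u3, and v are read off the coefficient columns and right-hand side of the Exercise 12 system; that reading is an assumption, since the manual prints only the system.

```python
# Assumed vectors, taken from the columns of the Exercise 12 system:
u1 = (1, -2, 1, 1)
u2 = (-1, 2, 3, 2)
u3 = (0, -1, -1, -1)
v  = (4, -13, -5, -4)

c1, c2, c3 = 3, -1, 5  # the solution found in the manual
combo = tuple(c1*a + c2*b + c3*c for a, b, c in zip(u1, u2, u3))
print(combo == v)  # True: v = 3u1 - u2 + 5u3
```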
14. The zero vector is the zero polynomial p(x) = 0. The additive inverse of a vector in P8 is
−(a0 + a1x + a2x² + ⋯ + a8x⁸) = −a0 − a1x − a2x² − ⋯ − a8x⁸.

16. The zero vector is
⎡0 0 0⎤
⎣0 0 0⎦.
The additive inverse of
⎡a11 a12 a13⎤      ⎡−a11 −a12 −a13⎤
⎣a21 a22 a23⎦  is  ⎣−a21 −a22 −a23⎦.

18. W is not a subspace of R². For instance, (2, 1) ∈ W and (3, 2) ∈ W, but their sum (5, 3) ∉ W. So, W is not closed under addition (nor scalar multiplication).

20. W is not a subspace of R². For instance, (1, 3) ∈ W and (2, 12) ∈ W, but their sum (3, 15) ∉ W. So, W is not closed under addition (nor scalar multiplication).

22. W is not a subspace of R³, because it is not closed under scalar multiplication. For instance, (1, 1, 1) ∈ W and −2 ∈ R, but −2(1, 1, 1) = (−2, −2, −2) ∉ W.

24. Because W is a nonempty subset of C[−1, 1], you need only check that W is closed under addition and scalar multiplication. If f and g are in W, then f(−1) = g(−1) = 0, and
(f + g)(−1) = f(−1) + g(−1) = 0, which implies that f + g ∈ W. Similarly, if c is a scalar, then (cf)(−1) = c·0 = 0, which implies that cf ∈ W. So, W is a subspace of C[−1, 1].
26. (a) W is a subspace of R³, because W is nonempty ((0, 0, 0) ∈ W) and W is closed under addition and scalar multiplication. If (x1, x2, x3) and (y1, y2, y3) are in W, then x1 + x2 + x3 = 0 and y1 + y2 + y3 = 0. Because
(x1, x2, x3) + (y1, y2, y3) = (x1 + y1, x2 + y2, x3 + y3)
satisfies (x1 + y1) + (x2 + y2) + (x3 + y3) = 0, W is closed under addition. Similarly, c(x1, x2, x3) = (cx1, cx2, cx3) satisfies cx1 + cx2 + cx3 = 0, showing that W is closed under scalar multiplication.
(b) W is not closed under addition or scalar multiplication, so it is not a subspace of R³. For example, (1, 0, 0) ∈ W, and yet 2(1, 0, 0) = (2, 0, 0) ∉ W.

28. (a) To find out whether S spans R³, form the vector equation
c1(4, 0, 1) + c2(0, −3, 2) + c3(5, 10, 0) = (u1, u2, u3).
This yields the system of equations
4c1 + 5c3 = u1
−3c2 + 10c3 = u2
c1 + 2c2 = u3.
This system has a unique solution for every (u1, u2, u3) because the determinant of the coefficient matrix is not zero. So, S spans R³.
(b) Solving the same system in (a) with (u1, u2, u3) = (0, 0, 0) yields the trivial solution. So, S is linearly independent.
(c) Because S is linearly independent and spans R³, it is a basis for R³.

30. (a) To find out whether S spans R³, form the vector equation
c1(2, 0, 1) + c2(2, −1, 1) + c3(4, 2, 0) = (u1, u2, u3).
This yields the system of linear equations
2c1 + 2c2 + 4c3 = u1
−c2 + 2c3 = u2
c1 + c2 = u3.
This system has a unique solution for every (u1, u2, u3) because the determinant of the coefficient matrix is not zero. So, S spans R³.
(b) Solving the same system in part (a) with (u1, u2, u3) = (0, 0, 0) yields the trivial solution. So, S is linearly independent.
(c) Because S is linearly independent and S spans R³, it is a basis for R³.

32. (a) The set S = {(1, 0, 0), (0, 1, 0), (0, 0, 1), (2, −1, 0)} spans R³ because any vector u = (u1, u2, u3) in R³ can be written as
u = u1(1, 0, 0) + u2(0, 1, 0) + u3(0, 0, 1) = (u1, u2, u3).
(b) S is not linearly independent because
2(1, 0, 0) − (0, 1, 0) + 0(0, 0, 1) = (2, −1, 0).
(c) S is not a basis for R³ because S is not linearly independent.

34. S has three vectors, so you only need to check that S is linearly independent. Form the vector equation
c1(1) + c2(t) + c3(1 + t²) = 0 + 0t + 0t²,
which yields the homogeneous system of linear equations
c1 + c3 = 0
c2 = 0
c3 = 0.
This system has only the trivial solution. So, S is linearly independent and S is a basis for P2.
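The determinant test used in Exercises 28 and 30 is easy to carry out in code. Below is a minimal sketch for Exercise 30; the helper `det3` is ours, not the text's, and it implements cofactor expansion along the first row.

```python
def det3(m):
    # cofactor expansion along the first row of a 3x3 matrix
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

# Columns are the vectors of S from Exercise 30: (2,0,1), (2,-1,1), (4,2,0)
A = [[2, 2, 4],
     [0, -1, 2],
     [1, 1, 0]]
print(det3(A))  # 4, nonzero, so S spans R^3 and is a basis
```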
36. S has four vectors, so you only need to check that S is linearly independent. Form the vector equation
c1 ⎡1 0⎤ + c2 ⎡−1 0⎤ + c3 ⎡2 1⎤ + c4 ⎡1 1⎤ = ⎡0 0⎤
   ⎣0 1⎦      ⎣ 1 1⎦      ⎣1 0⎦      ⎣0 1⎦   ⎣0 0⎦,
which yields the homogeneous system of linear equations
c1 − c2 + 2c3 + c4 = 0
c3 + c4 = 0
c2 + c3 = 0
c1 + c2 + c4 = 0.
This system has only the trivial solution. So, S is linearly independent and S is a basis for M2,2.

38. (a) This system has solutions of the form (3t, −(1/2)t, −4t, t), where t is any real number. A basis for the solution space is {(3, −1/2, −4, 1)}.
(b) The dimension of the solution space is 1, the number of vectors in the basis.

40. (a) This system has solutions of the form (0, −(3/2)t, −t, t), where t is any real number. A basis for the solution space is {(0, −3/2, −1, 1)}.
(b) The dimension of the solution space is 1, the number of vectors in a basis.

42. The system given by Ax = 0 has only the trivial solution (0, 0). So, the solution space is {(0, 0)}, which does not have a basis. The rank of A is 2 (the number of nonzero row vectors in the reduced row-echelon matrix) and the nullity is 0. Note that rank(A) + nullity(A) = 2 + 0 = 2 = n.

44. The system given by Ax = 0 has solutions of the form (2t, 5t, t, t), where t is any real number. So, a basis for the solution space of Ax = 0 is {(2, 5, 1, 1)}. The rank of A is 3 (the number of nonzero row vectors in the reduced row-echelon matrix) and the nullity of A is 1. Note that rank(A) + nullity(A) = 3 + 1 = 4 = n.

46. The system given by Ax = 0 has only the trivial solution (0, 0, 0, 0). So, the solution space is {(0, 0, 0, 0)}, which does not have a basis. The rank of A is 4 (the number of nonzero row vectors in the reduced row-echelon matrix) and the nullity is 0. Note that rank(A) + nullity(A) = 4 + 0 = 4 = n.

48. (a) Using Gauss-Jordan elimination, the matrix reduces to
⎡1 0 26/11⎤
⎢0 1  8/11⎥
⎣0 0    0 ⎦.
So, the rank is 2.
(b) A basis for the row space is {(1, 0, 26/11), (0, 1, 8/11)}.

50. (a) Because the matrix is already row reduced, its rank is 1.
(b) A basis for the row space is {(1, 2, −1)}.

52. (a) Using Gauss-Jordan elimination, the matrix reduces to
⎡1 0 0⎤
⎢0 1 0⎥
⎣0 0 1⎦.
So, the rank is 3.
(b) A basis for the row space is {(1, 0, 0), (0, 1, 0), (0, 0, 1)}.

54. Because [x]B = ⎡1⎤ ⎣1⎦, write x as x = 1(2, 0) + 1(3, 3) = (5, 3). Because (5, 3) = 5(1, 0) + 3(0, 1), the coordinate vector of x relative to the standard basis is [x]S = ⎡5⎤ ⎣3⎦.

56. Because [x]B = ⎡2⎤ ⎣1⎦, write x as x = 2(4, 2) + (1, −1) = (9, 3). Because (9, 3) = 9(1, 0) + 3(0, 1), the coordinate vector of x relative to the standard basis is [x]S = ⎡9⎤ ⎣3⎦.

58. Because [x]B = ⎡4 0 2⎤ᵀ, write x as x = 4(1, 0, 1) + 0(0, 1, 0) + 2(0, 1, 1) = (4, 2, 6). Because (4, 2, 6) = 4(1, 0, 0) + 2(0, 1, 0) + 6(0, 0, 1), the coordinate vector of x relative to the standard basis is [x]S = ⎡4 2 6⎤ᵀ.
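The coordinate conversions in Exercises 54-58 are each a single linear combination. A quick sketch for Exercise 58 (the code is ours, not the manual's):

```python
# Basis B and coordinate vector from Exercise 58
B = [(1, 0, 1), (0, 1, 0), (0, 1, 1)]
x_B = (4, 0, 2)

# x = 4(1,0,1) + 0(0,1,0) + 2(0,1,1)
x = tuple(sum(c*b[i] for c, b in zip(x_B, B)) for i in range(3))
print(x)  # (4, 2, 6)
```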
60. To find [x]B′ = ⎡c1⎤ ⎣c2⎦, solve the equation
c1(1, 1) + c2(0, −2) = (2, −1).
The resulting system of linear equations is
c1 = 2
c1 − 2c2 = −1.
So, c1 = 2, c2 = 3/2, and you have [x]B′ = ⎡2⎤ ⎣3/2⎦.

62. To find [x]B′ = ⎡c1 c2 c3⎤ᵀ, solve the equation
c1(1, 0, 0) + c2(0, 1, 0) + c3(1, 1, 1) = (4, −2, 9).
Forming the corresponding linear system, the solution is c1 = −5, c2 = −11, c3 = 9. So,
[x]B′ = ⎡−5 −11 9⎤ᵀ.

64. To find [x]B′ = ⎡c1 c2 c3 c4⎤ᵀ, solve the equation
c1(1, −1, 2, 1) + c2(1, 1, −4, 3) + c3(1, 2, 0, 3) + c4(1, 2, −2, 0) = (5, 3, −6, 2).
The resulting system of linear equations is
c1 + c2 + c3 + c4 = 5
−c1 + c2 + 2c3 + 2c4 = 3
2c1 − 4c2 − 2c4 = −6
c1 + 3c2 + 3c3 = 2.
So, c1 = 2, c2 = 1, c3 = −1, c4 = 3, and you have [x]B′ = ⎡2 1 −1 3⎤ᵀ.

66. Begin by finding x relative to the standard basis:
x = 2(1, 0) + 2(−1, 1) = (0, 2).
Then solve for [x]B′ = ⎡c1⎤ ⎣c2⎦ by forming the equation c1(1, 1) + c2(1, −1) = (0, 2). The resulting system of linear equations is
c1 + c2 = 0
c1 − c2 = 2.
The solution to this system is c1 = 1 and c2 = −1. So, you have [x]B′ = ⎡1⎤ ⎣−1⎦.

68. Begin by finding x relative to the standard basis:
x = 2(1, 1, −1) + 2(1, 1, 0) − (1, −1, 0) = (3, 5, −2).
Then solve for [x]B′ = ⎡c1 c2 c3⎤ᵀ by forming the equation
c1(1, −1, 2) + c2(2, 2, −1) + c3(2, 2, 2) = (3, 5, −2).
The resulting system of linear equations is
c1 + 2c2 + 2c3 = 3
−c1 + 2c2 + 2c3 = 5
2c1 − c2 + 2c3 = −2.
The solution to this system is c1 = −1, c2 = 4/3, and c3 = 2/3. So, you have
[x]B′ = ⎡−1 4/3 2/3⎤ᵀ.
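A sketch (ours, not the manual's) verifying the Exercise 68 coordinates with exact arithmetic; `fractions.Fraction` avoids round-off in the thirds.

```python
from fractions import Fraction as F

Bp = [(1, -1, 2), (2, 2, -1), (2, 2, 2)]  # basis B' from Exercise 68
c = (F(-1), F(4, 3), F(2, 3))             # the coordinates found above

# Recombine: x = c1*b1 + c2*b2 + c3*b3
x = tuple(sum(ci*b[i] for ci, b in zip(c, Bp)) for i in range(3))
print(tuple(int(v) for v in x))  # (3, 5, -2), i.e. x relative to the standard basis
```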
70. Begin by forming
[B′ B] = ⎡1 −1 | 1 3⎤
         ⎣2  0 | −1 1⎦.
Then use Gauss-Jordan elimination to obtain
[I2 | P⁻¹] = ⎡1 0 | −1/2  1/2⎤
             ⎣0 1 | −3/2 −5/2⎦.
So, P⁻¹ = ⎡−1/2  1/2⎤
          ⎣−3/2 −5/2⎦.

72. Begin by forming
[B′ B] = ⎡1 0 1 | 1 1 1⎤
         ⎢2 1 0 | 1 1 0⎥
         ⎣3 0 1 | 1 0 0⎦.
Then use Gauss-Jordan elimination to obtain
[I3 | P⁻¹] = ⎡1 0 0 | 0 −1/2 −1/2⎤
             ⎢0 1 0 | 1   2    1 ⎥
             ⎣0 0 1 | 1  3/2  3/2⎦.
So, P⁻¹ = ⎡0 −1/2 −1/2⎤
          ⎢1   2    1 ⎥
          ⎣1  3/2  3/2⎦.

74. (a) Because W is a nonempty subset of V, you need only check that W is closed under addition and scalar multiplication. If f, g ∈ W, then f′ = 3f and g′ = 3g. So,
(f + g)′ = f′ + g′ = 3f + 3g = 3(f + g),
which shows that f + g ∈ W. Finally, if c is a scalar, then (cf)′ = cf′ = c(3f) = 3(cf), which implies that cf ∈ W.
(b) U is not closed under addition nor scalar multiplication. For instance, let f = e^x − 1 ∈ U. Note that 2f = 2e^x − 2 ∉ U because
(2f)′ = 2e^x ≠ (2f) + 1 = 2e^x − 1.

76. Suppose, on the contrary, that A and B are linearly dependent. Then B = cA for some scalar c. So,
Bᵀ = (cA)ᵀ = cAᵀ = cA = B.
But Bᵀ = −B, which implies that B = −B. So, B = O, a contradiction.

78. To see if the given set is linearly independent, solve the equation
c1(v1 − v2) + c2(v2 − v3) + c3(v3 − v1) = 0v1 + 0v2 + 0v3,
which yields the homogeneous system of linear equations
c1 − c3 = 0
−c1 + c2 = 0
−c2 + c3 = 0.
This system has infinitely many solutions, so {v1 − v2, v2 − v3, v3 − v1} is linearly dependent.

80. S is a nonempty subset of Rⁿ, so you only need to show closure under addition and scalar multiplication. Let x, y ∈ S. Then Ax = λx and Ay = λy. So, A(x + y) = Ax + Ay = λx + λy = λ(x + y), which implies that x + y ∈ S. Finally, for any scalar c, A(cx) = c(Ax) = c(λx) = λ(cx), which implies that cx ∈ S.
If λ = 3, then solve for x in the equation Ax = λx = 3x, or Ax − 3x = 0, or (A − 3I3)x = 0:
⎡0 1  0⎤ ⎡x1⎤   ⎡0⎤
⎢0 0  0⎥ ⎢x2⎥ = ⎢0⎥.
⎣0 0 −2⎦ ⎣x3⎦   ⎣0⎦
The solution to this homogeneous system is x1 = t, x2 = 0, and x3 = 0, where t is any real number. So, a basis for S is {(1, 0, 0)}, and the dimension of S is 1.
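The eigenspace basis in Exercise 80 can be confirmed by multiplying it back through A. A minimal sketch (ours, not the manual's):

```python
A = [[3, 1, 0],
     [0, 3, 0],
     [0, 0, 1]]
x = (1, 0, 0)  # basis vector of the eigenspace found above

# Matrix-vector product A*x
Ax = tuple(sum(A[i][j]*x[j] for j in range(3)) for i in range(3))
print(Ax == tuple(3*v for v in x))  # True: Ax = 3x
```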
82. From Exercise 79, you see that a set of functions {f1, …, fn} can be linearly independent in C[a, b] and linearly dependent in C[c, d], where [a, b] and [c, d] are different intervals.

84. (a) False. This set is not closed under addition or scalar multiplication: (0, 1, 1) ∈ W, but 2(0, 1, 1) = (0, 2, 2) is not in W.
(b) True. See the definition of basis on page 221.
(c) False. For example, let A = I3 be the 3 × 3 identity matrix. It is invertible, the rows of A form the standard basis for R³, and, in particular, the rows of A are linearly independent.

86. (a) True. It is a nonempty subset of R², and it is closed under addition and scalar multiplication.
(b) True. See the definition of linear independence on page 213.
(c) False. These operations only preserve the linear relationships among the columns.

88. (a) Because y′ = y″ = y‴ = y′′′′ = e^x, you have y′′′′ − y = e^x − e^x = 0. Therefore, e^x is a solution.
(b) Because y′ = −e^(−x), y″ = e^(−x), y‴ = −e^(−x), and y′′′′ = e^(−x), you have y′′′′ − y = e^(−x) − e^(−x) = 0. Therefore, e^(−x) is a solution.
(c) Because y′ = −sin x, y″ = −cos x, y‴ = sin x, and y′′′′ = cos x, you have y′′′′ − y = cos x − cos x = 0. Therefore, cos x is a solution.
(d) Because y′ = cos x, y″ = −sin x, y‴ = −cos x, and y′′′′ = sin x, you have y′′′′ − y = sin x − sin x = 0. Therefore, sin x is a solution.

90. (a) Because y″ = −9 sin 3x − 9 cos 3x, you have
y″ + 9y = −9 sin 3x − 9 cos 3x + 9(sin 3x + cos 3x) = 0.
Therefore, sin 3x + cos 3x is a solution.
(b) Because y″ = −3 sin x − 3 cos x, you have
y″ + 9y = −3 sin x − 3 cos x + 9(3 sin x + 3 cos x) = 24 sin x + 24 cos x ≠ 0.
Therefore, 3 sin x + 3 cos x is not a solution.
(c) Because y″ = −9 sin 3x, you have y″ + 9y = −9 sin 3x + 9 sin 3x = 0. Therefore, sin 3x is a solution.
(d) Because y″ = −9 cos 3x, you have y″ + 9y = −9 cos 3x + 9 cos 3x = 0. Therefore, cos 3x is a solution.
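The substitution checks in Exercise 90 can be mirrored numerically by sampling y″ + 9y at a few points. The sketch below is ours, not the manual's; the sample points are arbitrary, and the second derivatives are supplied in closed form.

```python
import math

# Exercise 90(a): y = sin 3x + cos 3x, with y'' = -9 sin 3x - 9 cos 3x
y    = lambda x: math.sin(3*x) + math.cos(3*x)
y_pp = lambda x: -9*math.sin(3*x) - 9*math.cos(3*x)

# Exercise 90(b): y = 3 sin x + 3 cos x -- not a solution
z    = lambda x: 3*math.sin(x) + 3*math.cos(x)
z_pp = lambda x: -3*math.sin(x) - 3*math.cos(x)

pts = [0.0, 0.5, 1.0, 2.0]  # arbitrary sample points
print(all(abs(y_pp(t) + 9*y(t)) < 1e-12 for t in pts))  # True
print(all(abs(z_pp(t) + 9*z(t)) < 1e-12 for t in pts))  # False
```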
92. W(1, x, 2 + x) =
⎢1  x  2 + x⎥
⎢0  1    1  ⎥ = 0
⎢0  0    0  ⎥
Because the Wronskian is identically zero, the set is linearly dependent.

94. W(x, sin²x, cos²x) =
⎢x     sin²x          cos²x         ⎥
⎢1  2 sin x cos x  −2 sin x cos x   ⎥ = 4 cos²x − 2
⎢0   4 cos²x − 2    2 − 4 cos²x     ⎥
Because the Wronskian is not identically zero (for example, it equals 2 at x = 0), the set is linearly independent.
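A 2 × 2 Wronskian like those in Exercises 96 and 98 is just f·g′ − g·f′. The sketch below is ours; it evaluates the Wronskian of sin 2x and cos 2x (the Exercise 98 pair) at one point, with the derivatives supplied analytically.

```python
import math

def wronskian_2x2(f, fp, g, gp, x):
    # | f(x)   g(x)  |
    # | f'(x)  g'(x) |   (derivatives supplied analytically)
    return f(x)*gp(x) - g(x)*fp(x)

w = wronskian_2x2(lambda x: math.sin(2*x), lambda x: 2*math.cos(2*x),
                  lambda x: math.cos(2*x), lambda x: -2*math.sin(2*x), 0.3)
print(round(w, 12))  # -2.0, the same constant value at every x
```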
96. The Wronskian of this set is
W(e^(−3x), 3e^(−3x)) = ⎢  e^(−3x)   3e^(−3x)⎥ = −9e^(−6x) + 9e^(−6x) = 0.
                       ⎢−3e^(−3x)  −9e^(−3x)⎥
Because W(e^(−3x), 3e^(−3x)) = 0, the set is linearly dependent.

98. The Wronskian of this set is
W(sin 2x, cos 2x) = ⎢ sin 2x     cos 2x ⎥ = −2 sin²2x − 2 cos²2x = −2.
                    ⎢2 cos 2x  −2 sin 2x⎥
Because W(sin 2x, cos 2x) ≠ 0, the set is linearly independent.

100. Begin by completing the square.
9x² + 18x + 9y² − 18y = −14
9(x² + 2x + 1) + 9(y² − 2y + 1) = −14 + 9 + 9
(x + 1)² + (y − 1)² = 4/9
This is the equation of a circle centered at (−1, 1) with a radius of 2/3. (The original equation is 9x² + 9y² + 18x − 18y + 14 = 0.)

102. Begin by completing the square.
4x² + 8x − y² − 6y = −4
4(x² + 2x + 1) − (y² + 6y + 9) = −4 + 4 − 9
(y + 3)²/9 − (x + 1)²/(9/4) = 1
This is the equation of a hyperbola centered at (−1, −3). (The original equation is 4x² − y² + 8x − 6y + 4 = 0.)

104. y² − 4x − 4 = 0
y² = 4x + 4
y² = 4(x + 1)
This is the equation of a parabola with vertex (−1, 0).

106. Begin by completing the square.
16x² − 32x + 25y² − 50y = −16
16(x² − 2x + 1) + 25(y² − 2y + 1) = −16 + 16 + 25
(x − 1)²/(25/16) + (y − 1)² = 1
This is the equation of an ellipse centered at (1, 1). (The original equation is 16x² + 25y² − 32x − 50y + 16 = 0.)
108. From the equation
cot 2θ = (a − c)/b = (9 − 9)/4 = 0,
you find that the angle of rotation is θ = π/4. Therefore, sin θ = 1/√2 and cos θ = 1/√2. By substituting
x = x′ cos θ − y′ sin θ = (1/√2)(x′ − y′)
and
y = x′ sin θ + y′ cos θ = (1/√2)(x′ + y′)
into 9x² + 4xy + 9y² − 20 = 0, you obtain 11(x′)² + 7(y′)² = 20. In standard form,
(x′)²/(20/11) + (y′)²/(20/7) = 1,
which is the equation of an ellipse centered at (0, 0) with the major axis along the y′-axis. (Graph: x′y′-axes rotated 45°.)

110. From the equation
cot 2θ = (a − c)/b = (7 − 13)/(6√3) = −1/√3,
you find the angle of rotation to be θ = −π/6. Therefore, sin θ = −1/2 and cos θ = √3/2. By substituting
x = x′ cos θ − y′ sin θ = (1/2)(√3 x′ + y′)
and
y = x′ sin θ + y′ cos θ = (1/2)(√3 y′ − x′)
into 7x² + 6√3 xy + 13y² − 16 = 0, you obtain 4(x′)² + 16(y′)² = 16. In standard form,
(x′)²/4 + (y′)² = 1,
which is the equation of an ellipse with major axis along the x′-axis. (Graph: x′y′-axes rotated −30° from the xy-axes.)
Project Solutions for Chapter 4

1 Solutions to Linear Systems
1. Because (−2, −1, 1, 1) is a solution to Ax = 0, so is any multiple, such as −2(−2, −1, 1, 1) = (4, 2, −2, −2), because the solution space is a subspace.
2. The solutions to Ax = 0 form a subspace, so any linear combination 2x1 − 3x2 of solutions x1 and x2 is again a solution.
3. Let the first system be Ax = b1. Because it is consistent, b1 is in the column space of A. The second system is Ax = b2, and b2 is a multiple of b1, so it is in the column space of A as well. So, the second system is consistent.
4. 2x1 − 3x2 is not a solution (unless b = 0). The set of solutions to a nonhomogeneous system is not a subspace. If Ax1 = b and Ax2 = b, then
A(2x1 − 3x2) = 2Ax1 − 3Ax2 = 2b − 3b = −b ≠ b.
5. Yes; b1 and b2 are in the column space of A, therefore so is b1 + b2.
6. If rank A = rank[A b], then b is in the column space of A and the system is consistent. If rank A < rank[A b], then b is not in the column space and the system is inconsistent.
2 Direct Sum
1. U + W is nonempty because 0 = 0 + 0 ∈ U + W. Let u1 + w1 and u2 + w2 be in U + W. Then
(u1 + w1) + (u2 + w2) = (u1 + u2) + (w1 + w2) ∈ U + W and c(u1 + w1) = (cu1) + (cw1) ∈ U + W.
2. Basis for U: {(1, 0, 1), (0, 1, −1)}
Basis for W: {(1, 0, 1)}
Basis for Z: {(1, 1, 1)}
U + W = U because W ⊆ U.
U + Z = R³ because {(1, 0, 1), (0, 1, −1), (1, 1, 1)} is a basis for R³.
W + Z = span{(1, 0, 1), (1, 1, 1)} = span{(1, 0, 1), (0, 1, 0)}
U ⊕ Z and W ⊕ Z are direct sums.
3. Suppose u1 + w1 = u2 + w2, which implies u1 − u2 = w2 − w1. Because u1 − u2 ∈ U ∩ W and w2 − w1 ∈ U ∩ W, and U ∩ W = {0}, u1 = u2 and w1 = w2.
4. Let v ∈ V. Then v = u + w, u ∈ U, w ∈ W. Then
v = (c1u1 + ⋯ + ckuk) + (d1w1 + ⋯ + dmwm),
and v is in the span of {u1, …, uk, w1, …, wm}. To show that this set is linearly independent, suppose
c1u1 + ⋯ + ckuk + d1w1 + ⋯ + dmwm = 0
⇒ c1u1 + ⋯ + ckuk = −(d1w1 + ⋯ + dmwm).
But U ∩ W = {0} ⇒ c1u1 + ⋯ + ckuk = 0 and d1w1 + ⋯ + dmwm = 0.
Because {u1, …, uk} and {w1, …, wm} are linearly independent, c1 = ⋯ = ck = 0 and d1 = ⋯ = dm = 0.
5. Basis for U: {(1, 0, 0), (0, 0, 1)}
Basis for W: {(0, 1, 0), (0, 0, 1)}
U + W is spanned by {(1, 0, 0), (0, 0, 1), (0, 1, 0)} ⇒ U + W = R³. This is not a direct sum because (0, 0, 1) ∈ U ∩ W.
dim U = 2, dim W = 2, dim(U ∩ W) = 1, and
dim U + dim W = dim(U + W) + dim(U ∩ W)
2 + 2 = 3 + 1.
In general, dim U + dim W = dim(U + W) + dim(U ∩ W).
6. No. dim U + dim W = 2 + 2 = 4, so dim(U + W) + dim(U ∩ W) = 4. If the sum were direct, then dim(U ∩ W) = 0 and dim(U + W) = 4, which is impossible in R³.
C H A P T E R 5
Inner Product Spaces
Section 5.1 Length and Dot Product in Rⁿ
Section 5.2 Inner Product Spaces
Section 5.3 Orthonormal Bases: Gram-Schmidt Process
Section 5.4 Mathematical Models and Least Squares Analysis
Section 5.5 Applications of Inner Product Spaces
Review Exercises
Cumulative Test
Section 5.1 Length and Dot Product in Rⁿ

1. ‖v‖ = √(4² + 3²) = √25 = 5

3. ‖v‖ = √(1² + 2² + 2²) = √9 = 3

5. ‖v‖ = √(2² + 0² + (−5)² + 5²) = √54 = 3√6

7. (a) ‖u‖ = √((−1)² + (1/4)²) = √(17/16) = √17/4
(b) ‖v‖ = √(4² + (−1/8)²) = √(1025/64) = 5√41/8
(c) u + v = (3, 1/8), so ‖u + v‖ = √(3² + (1/8)²) = √(577/64) = √577/8

9. (a) ‖u‖ = √(0² + 4² + 3²) = √25 = 5
(b) ‖v‖ = √(1² + (−2)² + 1²) = √6
(c) u + v = (1, 2, 4), so ‖u + v‖ = √(1² + 2² + 4²) = √21

11. (a) ‖u‖ = √(0² + 1² + (−1)² + 2²) = √6
(b) ‖v‖ = √(1² + 1² + 3² + 0²) = √11
(c) u + v = (1, 2, 2, 2), so ‖u + v‖ = √(1² + 2² + 2² + 2²) = √13

13. (a) A unit vector v in the direction of u is given by
v = u/‖u‖ = (1/√(3² + 2² + (−5)²))(3, 2, −5) = (1/√38)(3, 2, −5) = (3/√38, 2/√38, −5/√38).
(b) A unit vector in the direction opposite that of u is given by
−v = (−3/√38, −2/√38, 5/√38).

15. (a) A unit vector v in the direction of u is given by
v = u/‖u‖ = (1/√((−5)² + 12²))(−5, 12) = (1/13)(−5, 12) = (−5/13, 12/13).
(b) A unit vector in the direction opposite that of u is given by
−v = −(−5/13, 12/13) = (5/13, −12/13).

17. (a) A unit vector v in the direction of u is given by
v = u/‖u‖ = (1/√(1² + 0² + 2² + 2²))(1, 0, 2, 2) = (1/3)(1, 0, 2, 2) = (1/3, 0, 2/3, 2/3).
(b) A unit vector in the direction opposite that of u is given by
−v = −(1/3, 0, 2/3, 2/3) = (−1/3, 0, −2/3, −2/3).

19. Solve the equation for c as follows.
‖c(1, 2, 3)‖ = 1
|c| ‖(1, 2, 3)‖ = 1
|c| = 1/‖(1, 2, 3)‖ = 1/√14
c = ±1/√14

21. First find a unit vector in the direction of u.
u/‖u‖ = (1/√(1² + 1²))(1, 1) = (1/√2, 1/√2)
Then v is four times this vector.
v = 4(1/√2, 1/√2) = (2√2, 2√2)
23. First find a unit vector in the direction of u.
u/‖u‖ = (1/√(3 + 9 + 0))(√3, 3, 0) = (1/(2√3))(√3, 3, 0) = (1/2, √3/2, 0)
Then v is twice this vector.
v = 2(1/2, √3/2, 0) = (1, √3, 0)

25. First find a unit vector in the direction of u.
u/‖u‖ = (1/√(0 + 4 + 1 + 1))(0, 2, 1, −1) = (1/√6)(0, 2, 1, −1) = (0, 2/√6, 1/√6, −1/√6)
Then v is three times this vector.
v = 3·(1/√6)(0, 2, 1, −1) = (0, 6/√6, 3/√6, −3/√6)

27. (a) Because v/‖v‖ is a unit vector in the direction of v,
u = (‖v‖/2)(v/‖v‖) = (1/2)v = (1/2)(8, 8, 6) = (4, 4, 3).
(b) Because −v/‖v‖ is a unit vector with direction opposite that of v,
u = (‖v‖/4)(−v/‖v‖) = −(1/4)v = −(1/4)(8, 8, 6) = (−2, −2, −3/2).
(c) Because −v/‖v‖ is a unit vector with direction opposite that of v,
u = 2‖v‖(−v/‖v‖) = −2v = −2(8, 8, 6) = (−16, −16, −12).

29. d(u, v) = ‖u − v‖ = ‖(2, −2)‖ = √(4 + 4) = 2√2

31. d(u, v) = ‖u − v‖ = ‖(2, −2, 2)‖ = √(2² + (−2)² + 2²) = 2√3

33. d(u, v) = ‖u − v‖ = ‖(−1, 1, −2, 4)‖ = √(1 + 1 + 4 + 16) = √22

35. (a) u · v = 3(2) + 4(−3) = 6 − 12 = −6
(b) u · u = 3(3) + 4(4) = 9 + 16 = 25
(c) ‖u‖² = u · u = 25
(d) (u · v)v = −6(2, −3) = (−12, 18)
(e) u · (5v) = 5(u · v) = 5(−6) = −30

37. (a) u · v = (−1)(1) + 1(−3) + (−2)(−2) = 0
(b) u · u = (−1)(−1) + 1(1) + (−2)(−2) = 6
(c) ‖u‖² = u · u = 6
(d) (u · v)v = 0(1, −3, −2) = (0, 0, 0) = 0
(e) u · (5v) = 5(u · v) = 5 · 0 = 0

39. (a) u · v = 4(0) + 0(2) + (−3)(5) + 5(4) = 5
(b) u · u = 4(4) + 0(0) + (−3)(−3) + 5(5) = 50
(c) ‖u‖² = u · u = 50
(d) (u · v)v = 5(0, 2, 5, 4) = (0, 10, 25, 20)
(e) u · (5v) = 5(u · v) = 5 · 5 = 25

41. (u + v) · (2u − v) = u · (2u − v) + v · (2u − v)
= 2u · u − u · v + 2v · u − v · v
= 2(u · u) + u · v − v · v
= 2(4) + (−5) − 10 = −7

43. (a) ‖u‖ = √(5² + (−12)²) = √169 = 13, ‖v‖ = √((−8)² + (−15)²) = √289 = 17
(b) v/‖v‖ = (−8/17, −15/17)
(c) −u/‖u‖ = (−5/13, 12/13)
(d) u · v = (5)(−8) + (−12)(−15) = −40 + 180 = 140
(e) u · u = (5)(5) + (−12)(−12) = 169
(f) v · v = (−8)(−8) + (−15)(−15) = 64 + 225 = 289

45. (a) ‖u‖ = √(5² + 12²) = √169 = 13, ‖v‖ = √((−12)² + 5²) = √169 = 13
(b) v/‖v‖ = (−12/13, 5/13)
(c) −u/‖u‖ = (−5/13, −12/13)
(d) u · v = (5)(−12) + (12)(5) = −60 + 60 = 0
(e) u · u = (5)(5) + (12)(12) = 169
(f) v · v = (−12)(−12) + (5)(5) = 144 + 25 = 169
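The expansion in Exercise 41 holds for any u and v. The sketch below (ours, not the manual's) spot-checks it on vectors chosen here, not taken from the text:

```python
def dot(u, v):
    return sum(a*b for a, b in zip(u, v))

u, v = (1, -2, 3), (4, 0, -1)  # arbitrary sample vectors
upv  = tuple(a + b for a, b in zip(u, v))    # u + v
tumv = tuple(2*a - b for a, b in zip(u, v))  # 2u - v

lhs = dot(upv, tumv)
rhs = 2*dot(u, u) + dot(u, v) - dot(v, v)
print(lhs == rhs)  # True
```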
47. (a) ‖u‖ = √(10² + (−24)²) = √676 = 26, ‖v‖ = √((−5)² + (−12)²) = √169 = 13
(b) v/‖v‖ = (−5/13, −12/13)
(c) −u/‖u‖ = (−10/26, 24/26) = (−5/13, 12/13)
(d) u · v = (10)(−5) + (−24)(−12) = −50 + 288 = 238
(e) u · u = (10)(10) + (−24)(−24) = 676
(f) v · v = (−5)(−5) + (−12)(−12) = 25 + 144 = 169

49. (a) ‖u‖ = √(0² + 5² + 12²) = √169 = 13, ‖v‖ = √(0² + (−5)² + (−12)²) = √169 = 13
(b) v/‖v‖ = (0, −5/13, −12/13)
(c) −u/‖u‖ = (0, −5/13, −12/13)
(d) u · v = (0)(0) + (5)(−5) + (12)(−12) = 0 − 25 − 144 = −169
(e) u · u = (0)(0) + (5)(5) + (12)(12) = 169
(f) v · v = (0)(0) + (−5)(−5) + (−12)(−12) = 0 + 25 + 144 = 169

51. (a) ‖u‖ = 1.0843, ‖v‖ = 0.3202
(b) v/‖v‖ = (0, 0.7809, 0.6247)
(c) −u/‖u‖ = (−0.9223, −0.1153, −0.3689)
(d) u · v = 0.1113
(e) u · u = 1.1756
(f) v · v = 0.1025

53. (a) ‖u‖ = 1.7321, ‖v‖ = 2
(b) v/‖v‖ = (−0.5, 0.7071, −0.5)
(c) −u/‖u‖ = (0, −0.5774, −0.8165)
(d) u · v = 0
(e) u · u = 3
(f) v · v = 4

55. (a) ‖u‖ = 3.7417, ‖v‖ = 3.7417
(b) (1/‖v‖)v = (0.5345, 0, 0.2673, 0.2673, 0.5345, −0.5345)
(c) −(1/‖u‖)u = (0, −0.5345, −0.5345, 0.2673, −0.2673, 0.5345)
(d) u · v = 7
(e) u · u = 14
(f) v · v = 14

57. (a) ‖u‖ = 3.7417, ‖v‖ = 4
(b) (1/‖v‖)v = (−1/4, 0, 1/4, 1/2, −1/2, 1/4, 1/4, −1/2)
(c) −(1/‖u‖)u = (0.2673, −0.2673, −0.5345, 0.2673, −0.2673, −0.2673, 0.5345, −0.2673)
(d) u · v = −4
(e) u · u = 14
(f) v · v = 16
59. You have
u · v = 3(2) + 4(−3) = −6,
‖u‖ = √(3² + 4²) = √25 = 5, and
‖v‖ = √(2² + (−3)²) = √13.
So,
|u · v| ≤ ‖u‖ ‖v‖
|−6| ≤ 5√13
6 ≤ 5√13 ≈ 18.03.

61. You have u · v = 1(1) + 1(−3) + (−2)(−2) = 2, ‖u‖ = √(1² + 1² + (−2)²) = √6, and ‖v‖ = √(1² + (−3)² + (−2)²) = √14. So,
|u · v| ≤ ‖u‖ ‖v‖
2 ≤ √6 √14 = 2√21 ≈ 9.17.

63. The cosine of the angle θ between u and v is given by
cos θ = (u · v)/(‖u‖ ‖v‖)
= (3(−2) + 1(4))/(√(3² + 1²) √((−2)² + 4²))
= −2/(√10 √20)
= −√2/10.
So, θ = cos⁻¹(−√2/10) ≈ 1.713 radians (98.13°).

65. The cosine of the angle θ between u and v is given by
cos θ = (u · v)/(‖u‖ ‖v‖)
= (cos(π/6) cos(3π/4) + sin(π/6) sin(3π/4))/(1 · 1)
= cos(π/6 − 3π/4)
= cos(−7π/12) = cos(7π/12).
So, θ = 7π/12 radians (105°).

67. The cosine of the angle θ between u and v is given by
cos θ = (u · v)/(‖u‖ ‖v‖)
= (1(2) + 1(1) + 1(−1))/(√(1² + 1² + 1²) √(2² + 1² + (−1)²))
= 2/(√3 √6)
= √2/3.
So, θ = cos⁻¹(√2/3) ≈ 1.080 radians (61.87°).

69. The cosine of the angle θ between u and v is given by
cos θ = (u · v)/(‖u‖ ‖v‖)
= (0(3) + 1(3) + 0(3) + 1(3))/(√(0² + 1² + 0² + 1²) √(3² + 3² + 3² + 3²))
= 6/(6√2)
= √2/2.
So, θ = cos⁻¹(√2/2) = π/4.

71. The cosine of the angle θ between u and v is given by
cos θ = (u · v)/(‖u‖ ‖v‖)
= (1(−1) + 3(4) + (−1)(5) + 2(−3) + 0(2))/(√(1² + 3² + (−1)² + 2² + 0²) √((−1)² + 4² + 5² + (−3)² + 2²))
= 0/(√15 √55) = 0.
So, θ = cos⁻¹(0) = π/2.
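The angle computations in Exercises 63-71 can be sketched in a few lines (the helper `angle` is ours, not the manual's):

```python
import math

def angle(u, v):
    dot = sum(a*b for a, b in zip(u, v))
    nu = math.sqrt(sum(a*a for a in u))
    nv = math.sqrt(sum(b*b for b in v))
    return math.acos(dot/(nu*nv))

# Exercise 67: u = (1, 1, 1), v = (2, 1, -1)
theta = angle((1, 1, 1), (2, 1, -1))
print(round(theta, 3), round(math.degrees(theta), 2))  # 1.08 61.87
```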
93. Because u ⋅ v = cos θ sin θ + sin θ ( −cos θ ) − 1(0)
u⋅v = 0
73.
(0, 5) ⋅ (v1, v2 )
= 0
= cos θ sin θ − cos θ sin θ = 0,
0v1 + 5v2 = 0
the vectors u and v are orthogonal.
v2 = 0
95. (a) False. See “Definition of Length of a Vector in R n ,” page 278.
So, v = (t , 0), where t is any real number.
(b) False. See “Definition of Dot Product in R n , ” page 282.
u⋅v = 0
75.
(−3, 2) ⋅ (v1, v2 )
= 0
97. (a)
−3v1 + 2v2 = 0
(b) u + (u ⋅ v ) is meaningless because u is a vector and
So, v = ( 2t , 3t ), where t is any real number.
u ⋅ v is a scalar.
u⋅v = 0
77.
(4, −1, 0) ⋅ (v1, v2 , v3 ) 4v1 + ( −1)v2 + 0v3
99. Because u + v = ( 4, 0) + (1, 1) = (5, 1), you have
= 0
u+ v ≤ u + v
= 0
(5, 1)
4v1 − v2 = 0 So, v = (t , 4t , s), where s and t are any real numbers.
(0, 1, 0, 0, 0) ⋅ (v1, v2 , v3 , v4 , v5 )
numbers.
( ) + 18( ) = 0, the vectors u and − 16
v are orthogonal. 83. Because v = −6u, the vectors are parallel.
85. Because u ⋅ v = 0(1) + 1( −2) + 0(0) = −2 ≠ 0, the vectors u and v are not orthogonal. Moreover, because one is not a scalar multiple of the other, they are not parallel. 87. Because
( ) + 5( ) + 1(0) + 0(1) =
u ⋅ v = −2
− 54
1 4
− 27 4
≠ 0, the
vectors u and v are not orthogonal. Moreover, because one is not a scalar multiple of the other, they are not parallel.
(
)
89. u = −2, 12 , −1, 3 and v =
( 32 , 1, − 52 , 0).
(
)
(
3 , 8
− 34 , 98 ,
≤
+ (1, 1)
2.
(−1, 1)
2 ≤
)
3.
Using a graphing utility or a computer software program you have u ⋅ v = −24.46875 ≠ 0. Because u ⋅ v ≠ 0, the vectors are not orthogonal. Because one is not a scalar multiple of the other, they are not parallel.
+ ( 2, 0)
2 + 2.
103. First note that u and v are orthogonal, because u ⋅ v = (1, −1) ⋅ (1, 1) = 0. Then note u+ v
2
= u
(2, 0)
2
= (1, −1)
2
2
+ v 2
(1, 1)
+
2
4 = 2 + 2.
105. First note that u and v are orthogonal, because u ⋅ v = (3, 4, − 2) ⋅ ( 4, − 3, 0) = 0. Then note u + v
2
= u
(7, 1, − 2)
2
=
2
+ v
(3, 4, − 2)
2 2
+
(4, − 3, 0)
2
54 = 29 + 25.
107. (a) If u ⋅ v = 0, then and θ =
Using a graphing utility or a computer software program you have u ⋅ v = 0. Because u ⋅ v = 0, the vectors are orthogonal. 91. u = − 34 , 32 , − 92 , − 6 and v =
26 ≤ 4 +
(1, 1)
So, v = ( r , 0, s, t , w), where r, s, t and w are any real
81. Because u ⋅ v = 2
(4, 0)
u+ v ≤ u + v
= 0
v2 = 0
3 2
≤
101. Because u + v = ( −1, 1) + ( 2, 0) = (1, 1), you have
u⋅v = 0
79.
u ⋅ v is meaningless because u ⋅ v is a scalar.
π 2
u⋅v 0 = . So, cos θ = 0 u v u v
, provided u ≠ 0 and v ≠ 0.
(b) If u ⋅ v > 0, then
u⋅v > 0. u v
So, cos θ > 0 and 0 ≤ θ < (c) If u ⋅ v < 0, then So, cos θ < 0, and
π 2
.
u⋅v < 0. u v
π 2
< θ ≤ π.
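The Pythagorean relation in Exercises 103–105 and the Triangle Inequality in Exercises 99–101 can be spot-checked numerically; a minimal sketch using the data from Exercises 105 and 99:

```python
import math

def dot(u, v): return sum(a * b for a, b in zip(u, v))
def norm(u): return math.sqrt(dot(u, u))

# Exercise 105: u and v are orthogonal, so ‖u + v‖² = ‖u‖² + ‖v‖²
u, v = (3, 4, -2), (4, -3, 0)
print(dot(u, v))                          # 0  → orthogonal
s = tuple(a + b for a, b in zip(u, v))    # u + v = (7, 1, -2)
print(dot(s, s), dot(u, u) + dot(v, v))   # 54 54

# Exercise 99: Triangle Inequality ‖u + v‖ ≤ ‖u‖ + ‖v‖  (√26 ≤ 4 + √2)
u, v = (4, 0), (1, 1)
print(norm((5, 1)) <= norm(u) + norm(v))  # True
```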
109. v = (v₁, v₂) = (8, 15), (v₂, −v₁) = (15, −8)
(8, 15) ⋅ (15, −8) = 8(15) + 15(−8) = 120 − 120 = 0
So, (v₂, −v₁) is orthogonal to v. Two more vectors orthogonal to v:
−1(15, −8) = (−15, 8): (8, 15) ⋅ (−15, 8) = 8(−15) + 15(8) = −120 + 120 = 0
3(15, −8) = (45, −24): (8, 15) ⋅ (45, −24) = 8(45) + 15(−24) = 360 − 360 = 0
(Answer is not unique.)

111. Let v = (t, t, t) be the diagonal of the cube, and u = (t, t, 0) the diagonal of one of its sides. Then
cos θ = (u ⋅ v)/(||u|| ||v||) = 2t²/((√2 t)(√3 t)) = 2/√6 = √6/3
and θ = cos⁻¹(√6/3) ≈ 35.26°.

113. Given u ⋅ v = 0 and u ⋅ w = 0,
u ⋅ (cv + dw) = u ⋅ (cv) + u ⋅ (dw) = c(u ⋅ v) + d(u ⋅ w) = c(0) + d(0) = 0.
So, u is orthogonal to cv + dw.

115. ||u + v||² + ||u − v||² = (u + v) ⋅ (u + v) + (u − v) ⋅ (u − v)
= (u ⋅ u + v ⋅ v + 2u ⋅ v) + (u ⋅ u + v ⋅ v − 2u ⋅ v)
= 2||u||² + 2||v||²

117. If u and v have the same direction, then u = cv, c > 0, and
||u + v|| = ||cv + v|| = ||(c + 1)v|| = (c + 1)||v|| = c||v|| + ||v|| = ||cv|| + ||v|| = ||u|| + ||v||.
On the other hand, if ||u + v|| = ||u|| + ||v||, then
||u + v||² = (u + v) ⋅ (u + v) = ||u||² + ||v||² + 2u ⋅ v = (||u|| + ||v||)² = ||u||² + ||v||² + 2||u|| ||v||
⇒ 2u ⋅ v = 2||u|| ||v|| ⇒ cos θ = (u ⋅ v)/(||u|| ||v||) = 1 ⇒ θ = 0 ⇒ u and v have the same direction.

119. Ax = 0 means that the dot product of each row of A with the column vector x is zero. So, x is orthogonal to the row vectors of A.

121. Property 1: u ⋅ v = uᵀv = (uᵀv)ᵀ = vᵀu = v ⋅ u
Property 2: u ⋅ (v + w) = uᵀ(v + w) = uᵀv + uᵀw = u ⋅ v + u ⋅ w
Property 3: c(u ⋅ v) = c(uᵀv) = (cu)ᵀv = (cu) ⋅ v and c(u ⋅ v) = c(uᵀv) = uᵀ(cv) = u ⋅ (cv)
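Exercise 109's construction — swap the components and negate one — always yields an orthogonal vector, as does any scalar multiple of it. A small sketch (not part of the printed solutions) using the same v = (8, 15):

```python
def dot(u, v): return sum(a * b for a, b in zip(u, v))

v = (8, 15)
w = (v[1], -v[0])        # (v₂, -v₁) = (15, -8) is orthogonal to v
print(dot(v, w))         # 0
# Any nonzero scalar multiple of w works too (the answer is not unique):
print(dot(v, (-15, 8)), dot(v, (45, -24)))   # 0 0
```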
Section 5.2 Inner Product Spaces

1. (a) ⟨u, v⟩ = u ⋅ v = 3(5) + 4(−12) = −33
(b) ||u|| = √⟨u, u⟩ = √(3(3) + 4(4)) = 5
(c) ||v|| = √⟨v, v⟩ = √(5(5) + (−12)(−12)) = √169 = 13
(d) d(u, v) = ||u − v|| = √⟨u − v, u − v⟩ = √((−2)(−2) + 16(16)) = √260 = 2√65

3. (a) ⟨u, v⟩ = 3u₁v₁ + u₂v₂ = 3(−4)(0) + 3(5) = 15
(b) ||u|| = √⟨u, u⟩ = √(3(−4)² + 3²) = √57
(c) ||v|| = √⟨v, v⟩ = √(3 ⋅ 0² + 5²) = 5
(d) d(u, v) = ||u − v|| = √(3(−4)² + (−2)²) = √52 = 2√13

5. (a) ⟨u, v⟩ = u ⋅ v = 0(9) + 9(−2) + 4(−4) = −34
(b) ||u|| = √⟨u, u⟩ = √(0² + 9² + 4²) = √97
(c) ||v|| = √⟨v, v⟩ = √(9² + (−2)² + (−4)²) = √101
(d) d(u, v) = ||u − v|| = ||(−9, 11, 8)|| = √(9² + 11² + 8²) = √266

7. (a) ⟨u, v⟩ = 2u₁v₁ + 3u₂v₂ + u₃v₃ = 2 ⋅ 8 ⋅ 8 + 3 ⋅ 0 ⋅ 3 + (−8) ⋅ 16 = 0
(b) ||u|| = √⟨u, u⟩ = √(2 ⋅ 8 ⋅ 8 + 3 ⋅ 0 ⋅ 0 + (−8)²) = √192 = 8√3
(c) ||v|| = √⟨v, v⟩ = √(2 ⋅ 8² + 3 ⋅ 3² + 16²) = √411
(d) d(u, v) = ||u − v|| = ||(0, −3, −24)|| = √(2 ⋅ 0² + 3 ⋅ (−3)² + (−24)²) = √603 = 3√67

9. (a) ⟨u, v⟩ = u ⋅ v = 2(2) + 0(2) + 1(0) + (−1)(1) = 3
(b) ||u|| = √⟨u, u⟩ = √(2² + 0² + 1² + (−1)²) = √6
(c) ||v|| = √⟨v, v⟩ = √(2² + 2² + 0² + 1²) = 3
(d) d(u, v) = ||u − v|| = ||(0, −2, 1, −2)|| = √(0² + (−2)² + 1² + (−2)²) = 3
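A weighted inner product such as the one in Exercise 7, ⟨u, v⟩ = 2u₁v₁ + 3u₂v₂ + u₃v₃, is just a componentwise sum with weights; a quick numerical sketch (not part of the printed solutions) reproduces parts (a)–(d):

```python
import math

def inner(u, v, w=(2, 3, 1)):
    """Weighted inner product ⟨u, v⟩ = 2u₁v₁ + 3u₂v₂ + u₃v₃ from Exercise 7."""
    return sum(c * a * b for c, a, b in zip(w, u, v))

u, v = (8, 0, -8), (8, 3, 16)
print(inner(u, v))                       # 0
print(math.sqrt(inner(u, u)))            # ‖u‖ = 8√3 ≈ 13.856
print(inner(v, v))                       # 411, so ‖v‖ = √411
d = tuple(a - b for a, b in zip(u, v))   # u - v = (0, -3, -24)
print(math.sqrt(inner(d, d)))            # d(u, v) = 3√67 ≈ 24.556
```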
11. (a) ⟨f, g⟩ = ∫[−1,1] f(x)g(x) dx = ∫[−1,1] x²(x² + 1) dx = ∫[−1,1] (x⁴ + x²) dx = [x⁵/5 + x³/3] from −1 to 1 = 16/15
(b) ||f||² = ⟨f, f⟩ = ∫[−1,1] (x²)² dx = [x⁵/5] from −1 to 1 = 2/5. So, ||f|| = √(2/5) = √10/5.
(c) ||g||² = ⟨g, g⟩ = ∫[−1,1] (x² + 1)² dx = ∫[−1,1] (x⁴ + 2x² + 1) dx = [x⁵/5 + 2x³/3 + x] from −1 to 1 = 56/15. So, ||g|| = √(56/15) = 2√210/15.
(d) Use the fact that d(f, g) = ||f − g||. Because f − g = x² − (x² + 1) = −1, you have
⟨f − g, f − g⟩ = ∫[−1,1] (−1)(−1) dx = [x] from −1 to 1 = 2.
So, d(f, g) = √⟨f − g, f − g⟩ = √2.

13. (a) ⟨f, g⟩ = ∫[−1,1] x eˣ dx = [(x − 1)eˣ] from −1 to 1 = 2/e
(b) ||f||² = ⟨f, f⟩ = ∫[−1,1] x² dx = [x³/3] from −1 to 1 = 2/3. So, ||f|| = √(2/3) = √6/3.
(c) ||g||² = ⟨g, g⟩ = ∫[−1,1] e²ˣ dx = [e²ˣ/2] from −1 to 1 = (e² − e⁻²)/2. So, ||g|| = √((e² − e⁻²)/2).
(d) Use the fact that d(f, g) = ||f − g||. Because f − g = x − eˣ, you have
⟨f − g, f − g⟩ = ∫[−1,1] (x − eˣ)² dx = ∫[−1,1] (x² − 2xeˣ + e²ˣ) dx = [x³/3 − 2(x − 1)eˣ + e²ˣ/2] from −1 to 1 = 2/3 − 4/e + e²/2 − 1/(2e²).
So, d(f, g) = √(2/3 − 4/e + e²/2 − 1/(2e²)).
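Function inner products like those in Exercises 11 and 13 can be checked without evaluating the antiderivative, by approximating the integral numerically. A minimal sketch (midpoint rule; the tolerance and step count are arbitrary choices, not from the text):

```python
import math

def inner(f, g, a=-1.0, b=1.0, n=200_000):
    """⟨f, g⟩ = ∫ f(x)g(x) dx on [a, b], midpoint rule (numerical sketch only)."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) * g(a + (i + 0.5) * h) for i in range(n)) * h

# Exercise 13(a): ⟨x, eˣ⟩ on [-1, 1] equals 2/e
print(round(inner(lambda x: x, math.exp), 6))            # 0.735759
print(round(2 / math.e, 6))                              # 0.735759
# Exercise 11(b): ‖x²‖² = 2/5
print(round(inner(lambda x: x * x, lambda x: x * x), 6)) # 0.4
```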
15. (a) ⟨f, g⟩ = ∫[−1,1] 1(3x² − 1) dx = [x³ − x] from −1 to 1 = (1 − 1) − (−1 + 1) = 0
(b) ||f||² = ⟨f, f⟩ = ∫[−1,1] 1 dx = [x] from −1 to 1 = 2 ⇒ ||f|| = √2
(c) ||g||² = ⟨g, g⟩ = ∫[−1,1] (3x² − 1)² dx = ∫[−1,1] (9x⁴ − 6x² + 1) dx = [9x⁵/5 − 2x³ + x] from −1 to 1 = 8/5. So, ||g|| = √(8/5) = 2√10/5.
(d) Use the fact that d(f, g) = ||f − g||. Because f − g = 1 − (3x² − 1) = 2 − 3x², you have
⟨f − g, f − g⟩ = ∫[−1,1] (2 − 3x²)² dx = ∫[−1,1] (9x⁴ − 12x² + 4) dx = [9x⁵/5 − 4x³ + 4x] from −1 to 1 = 18/5.
So, d(f, g) = √(18/5) = 3√10/5.

17. (a) ⟨A, B⟩ = 2(−1)(0) + 3(−2) + 4(1) + 2(−2)(1) = −6
(b) ||A||² = ⟨A, A⟩ = 2(−1)² + 3² + 4² + 2(−2)² = 35 ⇒ ||A|| = √35
(c) ||B||² = ⟨B, B⟩ = 2 ⋅ 0² + (−2)² + 1² + 2 ⋅ 1² = 7 ⇒ ||B|| = √7
(d) Use the fact that d(A, B) = ||A − B||.
⟨A − B, A − B⟩ = 2(−1)² + 5² + 3² + 2(−3)² = 54
d(A, B) = √⟨A − B, A − B⟩ = √54 = 3√6

19. (a) ⟨A, B⟩ = 2(1)(0) + (−1)(1) + (2)(−2) + 2(4)(0) = −5
(b) ||A||² = ⟨A, A⟩ = 2(1)² + (−1)² + (2)² + 2(4)² = 39 ⇒ ||A|| = √39
(c) ||B||² = ⟨B, B⟩ = 2(0)² + 1² + (−2)² + 0² = 5 ⇒ ||B|| = √5
(d) Use the fact that d(A, B) = ||A − B||.
⟨A − B, A − B⟩ = 2(1)² + (−2)² + 4² + 2(4)² = 54
d(A, B) = √⟨A − B, A − B⟩ = √54 = 3√6

21. (a) ⟨p, q⟩ = 1(0) + (−1)(1) + 3(−1) = −4
(b) ||p|| = √⟨p, p⟩ = √(1² + (−1)² + 3²) = √11
(c) ||q|| = √⟨q, q⟩ = √(0² + 1² + (−1)²) = √2
(d) d(p, q) = ||p − q|| = √⟨p − q, p − q⟩ = √(1² + (−2)² + 4²) = √21
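The matrix inner product used in Exercises 17–19, ⟨A, B⟩ = 2a₁₁b₁₁ + a₁₂b₁₂ + a₂₁b₂₁ + 2a₂₂b₂₂, is a weighted entrywise sum; a numerical sketch (the entries of A and B are read off Exercise 17's solution) reproduces −6, √35, √7, and 3√6:

```python
import math

def inner(A, B):
    """⟨A, B⟩ = 2a₁₁b₁₁ + a₁₂b₁₂ + a₂₁b₂₁ + 2a₂₂b₂₂ (Exercises 17-19)."""
    w = ((2, 1), (1, 2))
    return sum(w[i][j] * A[i][j] * B[i][j] for i in range(2) for j in range(2))

A = ((-1, 3), (4, -2))
B = ((0, -2), (1, 1))
print(inner(A, B))                    # -6
print(inner(A, A), inner(B, B))       # 35 7
D = tuple(tuple(a - b for a, b in zip(ra, rb)) for ra, rb in zip(A, B))
print(math.sqrt(inner(D, D)))         # d(A, B) = 3√6 ≈ 7.348
```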
23. (a) ⟨p, q⟩ = 1(1) + 0(0) + 1(−1) = 0
(b) ||p|| = √⟨p, p⟩ = √(1² + 0² + 1²) = √2
(c) ||q|| = √⟨q, q⟩ = √(1² + 0² + (−1)²) = √2
(d) d(p, q) = ||p − q|| = √⟨p − q, p − q⟩ = √(0² + 0² + 2²) = 2

25. Verify that the function ⟨u, v⟩ = 3u₁v₁ + u₂v₂ satisfies the four parts of the definition.
1. ⟨u, v⟩ = 3u₁v₁ + u₂v₂ = 3v₁u₁ + v₂u₂ = ⟨v, u⟩
2. ⟨u, v + w⟩ = 3u₁(v₁ + w₁) + u₂(v₂ + w₂) = (3u₁v₁ + u₂v₂) + (3u₁w₁ + u₂w₂) = ⟨u, v⟩ + ⟨u, w⟩
3. c⟨u, v⟩ = c(3u₁v₁ + u₂v₂) = 3(cu₁)v₁ + (cu₂)v₂ = ⟨cu, v⟩
4. ⟨v, v⟩ = 3v₁² + v₂² ≥ 0, and ⟨v, v⟩ = 0 if and only if v = (0, 0).

27. Verify that the function ⟨A, B⟩ = 2a₁₁b₁₁ + a₁₂b₁₂ + a₂₁b₂₁ + 2a₂₂b₂₂ satisfies the four parts of the definition.
1. ⟨A, B⟩ = 2a₁₁b₁₁ + a₁₂b₁₂ + a₂₁b₂₁ + 2a₂₂b₂₂ = 2b₁₁a₁₁ + b₁₂a₁₂ + b₂₁a₂₁ + 2b₂₂a₂₂ = ⟨B, A⟩
2. ⟨A, B + C⟩ = 2a₁₁(b₁₁ + c₁₁) + a₁₂(b₁₂ + c₁₂) + a₂₁(b₂₁ + c₂₁) + 2a₂₂(b₂₂ + c₂₂) = (2a₁₁b₁₁ + a₁₂b₁₂ + a₂₁b₂₁ + 2a₂₂b₂₂) + (2a₁₁c₁₁ + a₁₂c₁₂ + a₂₁c₂₁ + 2a₂₂c₂₂) = ⟨A, B⟩ + ⟨A, C⟩
3. c⟨A, B⟩ = c(2a₁₁b₁₁ + a₁₂b₁₂ + a₂₁b₂₁ + 2a₂₂b₂₂) = 2(ca₁₁)b₁₁ + (ca₁₂)b₁₂ + (ca₂₁)b₂₁ + 2(ca₂₂)b₂₂ = ⟨cA, B⟩
4. ⟨A, A⟩ = 2a₁₁² + a₁₂² + a₂₁² + 2a₂₂² ≥ 0, and ⟨A, A⟩ = 0 if and only if A is the 2 × 2 zero matrix.

29. The product ⟨u, v⟩ is not an inner product because nonzero vectors can have a norm of zero. For example, if v = (0, 1), then ⟨v, v⟩ = 0² = 0.

31. The product ⟨u, v⟩ is not an inner product because nonzero vectors can have a norm of zero. For example, if v = (1, 1), then ⟨(1, 1), (1, 1)⟩ = 0.

33. The product ⟨u, v⟩ is not an inner product because it is not distributive over addition. For example, if u = (1, 0), v = (1, 0), and w = (1, 0), then ⟨u, v + w⟩ = 1²(2²) + 0²(0²) = 4 while ⟨u, v⟩ + ⟨u, w⟩ = 1²(1²) + 0²(0²) + 1²(1²) + 0²(0²) = 2. So, ⟨u, v + w⟩ ≠ ⟨u, v⟩ + ⟨u, w⟩.

35. The product ⟨u, v⟩ is not an inner product because it is not commutative. For example, if u = (1, 2) and v = (2, 3), then ⟨u, v⟩ = 3(1)(3) − 2(2) = 5 while ⟨v, u⟩ = 3(2)(2) − 3(1) = 9.
37. Because
⟨u, v⟩/(||u|| ||v||) = (3(5) + 4(−12))/(√(3² + 4²) √(5² + (−12)²)) = −33/(5 ⋅ 13) = −33/65,
the angle between u and v is cos⁻¹(−33/65) ≈ 2.103 radians (120.51°).

39. Because
⟨u, v⟩/(||u|| ||v||) = (3(−4)(0) + 3(5))/(√(3(−4)² + 3²) √(3(0)² + 5²)) = 15/(√57 ⋅ 5) = 3/√57,
the angle between u and v is cos⁻¹(3/√57) ≈ 1.16 radians (66.59°).

41. Because ⟨u, v⟩ = 1(2) + 2(1)(−2) + 1(2) = 0, the angle between u and v is cos⁻¹(0) = π/2.

43. Because ⟨p, q⟩/(||p|| ||q||) = (1 − 1 + 1)/(√3 √3) = 1/3, the angle between p and q is cos⁻¹(1/3) ≈ 1.23 radians (70.53°).

45. Because ⟨f, g⟩ = ∫[−1,1] x³ dx = [x⁴/4] from −1 to 1 = 0, the angle between f and g is cos⁻¹(0) = π/2.

47. (a) To verify the Cauchy–Schwarz Inequality, observe
|⟨u, v⟩| ≤ ||u|| ||v||
|(5, 12) ⋅ (3, 4)| ≤ ||(5, 12)|| ||(3, 4)||
63 ≤ (13)(5) = 65.
(b) To verify the Triangle Inequality, observe
||u + v|| ≤ ||u|| + ||v||
||(8, 16)|| ≤ ||(5, 12)|| + ||(3, 4)||
√320 ≤ 13 + 5
8√5 ≤ 18.

49. (a) To verify the Cauchy–Schwarz Inequality, observe
|⟨u, v⟩| ≤ ||u|| ||v||
|(1, 0, 4) ⋅ (−5, 4, 1)| ≤ ||(1, 0, 4)|| ||(−5, 4, 1)||
1 ≤ √17 √42
1 ≤ √714.
(b) To verify the Triangle Inequality, observe
||u + v|| ≤ ||u|| + ||v||
||(−4, 4, 5)|| ≤ ||(1, 0, 4)|| + ||(−5, 4, 1)||
√57 ≤ √17 + √42
7.5498 ≤ 10.6038.

51. (a) To verify the Cauchy–Schwarz Inequality, observe
|⟨p, q⟩| ≤ ||p|| ||q||
|0(1) + 2(0) + 0(3)| ≤ (2)√10
0 ≤ 2√10.
(b) To verify the Triangle Inequality, observe
||p + q|| ≤ ||p|| + ||q||
||1 + 2x + 3x²|| ≤ 2 + √10
√14 ≤ 2 + √10
3.742 ≤ 5.162.

53. (a) To verify the Cauchy–Schwarz Inequality, observe
|⟨A, B⟩| ≤ ||A|| ||B||
|0(−3) + 3(1) + 2(4) + 1(3)| ≤ √14 √35
14 ≤ √14 √35.
(b) To verify the Triangle Inequality, observe
||A + B|| ≤ ||A|| + ||B||
√((−3)² + 4² + 6² + 4²) ≤ √14 + √35
√77 ≤ √14 + √35
8.775 ≤ 9.658.
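Verifications like Exercises 47–49 are easy to automate; a numerical sketch (not part of the printed solutions) with the vectors of Exercise 49:

```python
import math

def dot(u, v): return sum(a * b for a, b in zip(u, v))
def norm(u): return math.sqrt(dot(u, u))

u, v = (1, 0, 4), (-5, 4, 1)
# Cauchy-Schwarz: |u·v| ≤ ‖u‖‖v‖, i.e. 1 ≤ √714
print(abs(dot(u, v)) <= norm(u) * norm(v))    # True
# Triangle: ‖u + v‖ ≤ ‖u‖ + ‖v‖, i.e. √57 ≈ 7.5498 ≤ 10.6038
s = tuple(a + b for a, b in zip(u, v))
print(round(norm(s), 4), round(norm(u) + norm(v), 4))   # 7.5498 10.6038
```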
55. (a) To verify the Cauchy–Schwarz Inequality, compute
⟨f, g⟩ = ⟨sin x, cos x⟩ = ∫[−π,π] sin x cos x dx = [sin²x/2] from −π to π = 0
||f||² = ⟨sin x, sin x⟩ = ∫[−π,π] sin²x dx = ∫[−π,π] (1 − cos 2x)/2 dx = [x/2 − sin 2x/4] from −π to π = π ⇒ ||f|| = √π
||g||² = ⟨cos x, cos x⟩ = ∫[−π,π] cos²x dx = ∫[−π,π] (1 + cos 2x)/2 dx = [x/2 + sin 2x/4] from −π to π = π ⇒ ||g|| = √π
and observe that |⟨f, g⟩| ≤ ||f|| ||g||: 0 ≤ √π √π = π.
(b) To verify the Triangle Inequality, compute
||f + g||² = ⟨sin x + cos x, sin x + cos x⟩ = ∫[−π,π] (sin x + cos x)² dx
= ∫[−π,π] sin²x dx + ∫[−π,π] cos²x dx + 2∫[−π,π] sin x cos x dx = π + π + 0 ⇒ ||f + g|| = √(2π)
and observe that ||f + g|| ≤ ||f|| + ||g||: √(2π) ≤ √π + √π.

57. (a) To verify the Cauchy–Schwarz Inequality, compute
⟨f, g⟩ = ⟨x, eˣ⟩ = ∫[0,1] x eˣ dx = [eˣ(x − 1)] from 0 to 1 = 1
||f||² = ⟨x, x⟩ = ∫[0,1] x² dx = [x³/3] from 0 to 1 = 1/3 ⇒ ||f|| = √3/3
||g||² = ⟨eˣ, eˣ⟩ = ∫[0,1] e²ˣ dx = [e²ˣ/2] from 0 to 1 = e²/2 − 1/2 ⇒ ||g|| = √(e²/2 − 1/2)
and observe that |⟨f, g⟩| ≤ ||f|| ||g||: 1 ≤ (√3/3)√(e²/2 − 1/2) ≈ 1.032.
(b) To verify the Triangle Inequality, compute
||f + g||² = ⟨x + eˣ, x + eˣ⟩ = ∫[0,1] (x + eˣ)² dx = [e²ˣ/2 + 2eˣ(x − 1) + x³/3] from 0 to 1
= (e²/2 + 2e(0) + 1/3) − (1/2 + 2(1)(−1) + 0) = e²/2 + 11/6 ⇒ ||f + g|| = √(e²/2 + 11/6)
and observe that ||f + g|| ≤ ||f|| + ||g||: √(e²/2 + 11/6) ≤ √3/3 + √(e²/2 − 1/2), or 2.351 ≤ 2.364.
59. Because ⟨f, g⟩ = ∫[−π,π] cos x sin x dx = [½ sin²x] from −π to π = 0, f and g are orthogonal.

61. Because
⟨f, g⟩ = ∫[−1,1] x ⋅ ½(5x³ − 3x) dx = ½ ∫[−1,1] (5x⁴ − 3x²) dx = ½ [x⁵ − x³] from −1 to 1 = 0,
f and g are orthogonal.

63. (a) proj_v u = (⟨u, v⟩/⟨v, v⟩)v = ((1(2) + 2(1))/(2² + 1²))(2, 1) = (4/5)(2, 1) = (8/5, 4/5)
(b) proj_u v = (⟨v, u⟩/⟨u, u⟩)u = ((2(1) + 1(2))/(1² + 2²))(1, 2) = (4/5)(1, 2) = (4/5, 8/5)
(c) Sketch: u = (1, 2) and v = (2, 1), with proj_v u along v and proj_u v along u.

65. (a) proj_v u = (⟨u, v⟩/⟨v, v⟩)v = (((−1)(4) + 3(4))/(4(4) + 4(4)))(4, 4) = (1/4)(4, 4) = (1, 1)
(b) proj_u v = (⟨v, u⟩/⟨u, u⟩)u = ((4(−1) + 4(3))/((−1)(−1) + 3(3)))(−1, 3) = (4/5)(−1, 3) = (−4/5, 12/5)
(c) Sketch: u = (−1, 3) and v = (4, 4), with the two projections shown.

67. (a) proj_v u = (⟨u, v⟩/⟨v, v⟩)v = ((1(0) + 3(−1) + (−2)(1))/(0² + (−1)² + 1²))(0, −1, 1) = (−5/2)(0, −1, 1) = (0, 5/2, −5/2)
(b) proj_u v = (⟨v, u⟩/⟨u, u⟩)u = ((0(1) + (−1)(3) + 1(−2))/(1² + 3² + (−2)²))(1, 3, −2) = (−5/14)(1, 3, −2) = (−5/14, −15/14, 5/7)

69. (a) proj_v u = (⟨u, v⟩/⟨v, v⟩)v = ((0(−1) + 1(1) + 3(2) + (−6)(2))/((−1)(−1) + 1(1) + 2(2) + 2(2)))v = (−5/10)(−1, 1, 2, 2) = (1/2, −1/2, −1, −1)
(b) proj_u v = (⟨v, u⟩/⟨u, u⟩)u = (((−1)(0) + 1(1) + 2(3) + 2(−6))/(0² + 1(1) + 3(3) + (−6)(−6)))u = (−5/46)(0, 1, 3, −6) = (0, −5/46, −15/46, 15/23)
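The projection formula used in Exercises 63–69, proj_v u = (⟨u, v⟩/⟨v, v⟩)v, can be sketched in a few lines (a numerical check, not part of the printed solutions; the data are Exercise 63's):

```python
def dot(u, v): return sum(a * b for a, b in zip(u, v))

def proj(u, onto):
    """proj_v u = (⟨u, v⟩ / ⟨v, v⟩) v with the Euclidean inner product."""
    c = dot(u, onto) / dot(onto, onto)
    return tuple(c * b for b in onto)

# Exercise 63: u = (1, 2), v = (2, 1)
print(proj((1, 2), (2, 1)))   # (1.6, 0.8)  = (8/5, 4/5)
print(proj((2, 1), (1, 2)))   # (0.8, 1.6)  = (4/5, 8/5)
```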
71. The inner products ⟨f, g⟩ and ⟨g, g⟩ are as follows.
⟨f, g⟩ = ∫[−1,1] x dx = [x²/2] from −1 to 1 = 0
⟨g, g⟩ = ∫[−1,1] dx = [x] from −1 to 1 = 2
So, the projection of f onto g is proj_g f = (⟨f, g⟩/⟨g, g⟩)g = (0/2)(1) = 0.

73. The inner products ⟨f, g⟩ and ⟨g, g⟩ are as follows.
⟨f, g⟩ = ∫[0,1] x eˣ dx = [(x − 1)eˣ] from 0 to 1 = 0 + 1 = 1
⟨g, g⟩ = ∫[0,1] e²ˣ dx = [e²ˣ/2] from 0 to 1 = (e² − 1)/2
So, the projection of f onto g is proj_g f = (⟨f, g⟩/⟨g, g⟩)g = (1/((e² − 1)/2))eˣ = 2eˣ/(e² − 1).

75. The inner product ⟨f, g⟩ is
⟨f, g⟩ = ∫[−π,π] sin x cos x dx = [sin²x/2] from −π to π = 0.
So, the projection of f onto g is proj_g f = (⟨f, g⟩/⟨g, g⟩)g = (0/⟨g, g⟩)g = 0.

77. The inner products ⟨f, g⟩ and ⟨g, g⟩ are as follows.
⟨f, g⟩ = ∫[−π,π] x sin 2x dx = [sin 2x/4 − x cos 2x/2] from −π to π = −π
⟨g, g⟩ = ∫[−π,π] (sin 2x)² dx = [x/2 − sin 4x/8] from −π to π = π
So, the projection of f onto g is proj_g f = (⟨f, g⟩/⟨g, g⟩)g = (−π/π) sin 2x = −sin 2x.

79. (a) False. See the introduction to this section, page 292.
(b) True. See "Remark," page 301.

81. (a) ⟨u, v⟩ = 4(2) + 2(2)(−2) = 0 ⇒ u and v are orthogonal.
(b) The vectors are not orthogonal in the Euclidean sense.
(c) Sketch: u = (4, 2) and v = (2, −2).

83. Verify the four parts of the definition for the function ⟨u, v⟩ = c₁u₁v₁ + ⋯ + c_n u_n v_n = Σ cᵢuᵢvᵢ.
1. ⟨u, v⟩ = Σ cᵢuᵢvᵢ = Σ cᵢvᵢuᵢ = ⟨v, u⟩
2. ⟨u, v + w⟩ = Σ cᵢuᵢ(vᵢ + wᵢ) = Σ cᵢuᵢvᵢ + Σ cᵢuᵢwᵢ = ⟨u, v⟩ + ⟨u, w⟩
3. d⟨u, v⟩ = d Σ cᵢuᵢvᵢ = Σ cᵢ(duᵢ)vᵢ = ⟨du, v⟩
4. ⟨v, v⟩ = Σ cᵢvᵢ² ≥ 0, and ⟨v, v⟩ = 0 if and only if v = 0.

85. From the definition of inner product,
⟨u + v, w⟩ = ⟨w, u + v⟩ = ⟨w, u⟩ + ⟨w, v⟩ = ⟨u, w⟩ + ⟨v, w⟩.

87. (i) W⊥ = {v ∈ V : ⟨v, w⟩ = 0 for all w ∈ W} is nonempty because 0 ∈ W⊥.
(ii) Let v₁, v₂ ∈ W⊥. Then ⟨v₁, w⟩ = ⟨v₂, w⟩ = 0 for all w ∈ W. So, ⟨v₁ + v₂, w⟩ = ⟨v₁, w⟩ + ⟨v₂, w⟩ = 0 + 0 = 0 for all w ∈ W ⇒ v₁ + v₂ ∈ W⊥.
(iii) Let v ∈ W⊥ and c ∈ R. Then ⟨v, w⟩ = 0 for all w ∈ W, and ⟨cv, w⟩ = c⟨v, w⟩ = c(0) = 0 for all w ∈ W ⇒ cv ∈ W⊥.

89. (a) Let ⟨u, v⟩ be the Euclidean inner product on Rⁿ. Because ⟨u, v⟩ = uᵀv, it follows that
⟨Aᵀu, v⟩ = (Aᵀu)ᵀv = uᵀ(Aᵀ)ᵀv = uᵀAv = ⟨u, Av⟩.
(b) ⟨AᵀAu, u⟩ = (AᵀAu)ᵀu = uᵀAᵀAu = (Au)ᵀ(Au) = ⟨Au, Au⟩ = ||Au||²
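The identity in Exercise 89(b), ⟨AᵀAu, u⟩ = ||Au||², holds for any matrix A and vector u; a spot-check with an arbitrary A and u (chosen here for illustration, not taken from the text):

```python
def matvec(A, x):
    """Multiply a matrix (sequence of rows) by a vector."""
    return tuple(sum(a * b for a, b in zip(row, x)) for row in A)

def dot(u, v): return sum(a * b for a, b in zip(u, v))

A = ((1, 2), (3, 4), (5, 6))        # arbitrary 3×2 matrix
u = (-1, 2)                         # arbitrary vector
At = tuple(zip(*A))                 # transpose of A
Au = matvec(A, u)                   # Au = (3, 5, 7)
lhs = dot(matvec(At, Au), u)        # ⟨AᵀAu, u⟩
rhs = dot(Au, Au)                   # ‖Au‖²
print(lhs, rhs)                     # 83 83
```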
Section 5.3 Orthonormal Bases: Gram-Schmidt Process

1. The set is orthogonal because (2, −4) ⋅ (2, 1) = 2(2) − 4(1) = 0. However, the set is not orthonormal because ||(2, −4)|| = √(2² + (−4)²) = √20 ≠ 1.

3. The set is not orthogonal because (−4, 6) ⋅ (5, 0) = −4(5) + 6(0) = −20 ≠ 0.

5. The set is orthogonal because (3/5, 4/5) ⋅ (−4/5, 3/5) = (3/5)(−4/5) + (4/5)(3/5) = −12/25 + 12/25 = 0. Furthermore, the set is orthonormal because
||(3/5, 4/5)|| = √((3/5)² + (4/5)²) = 1
||(−4/5, 3/5)|| = √((−4/5)² + (3/5)²) = 1.

7. The set is orthogonal because
(4, −1, 1) ⋅ (−1, 0, 4) = −4 + 0 + 4 = 0
(4, −1, 1) ⋅ (−4, −17, −1) = −16 + 17 − 1 = 0
(−1, 0, 4) ⋅ (−4, −17, −1) = 4 + 0 − 4 = 0.
However, the set is not orthonormal because ||(4, −1, 1)|| = √(4² + (−1)² + 1²) = √18 ≠ 1.

9. The set is orthogonal because
(√2/2, 0, √2/2) ⋅ (−√6/6, √6/3, √6/6) = 0
(√2/2, 0, √2/2) ⋅ (√3/3, √3/3, −√3/3) = 0
(−√6/6, √6/3, √6/6) ⋅ (√3/3, √3/3, −√3/3) = 0.
Furthermore, the set is orthonormal because
||(√2/2, 0, √2/2)|| = √(1/2 + 0 + 1/2) = 1
||(−√6/6, √6/3, √6/6)|| = √(1/6 + 2/3 + 1/6) = 1
||(√3/3, √3/3, −√3/3)|| = √(1/3 + 1/3 + 1/3) = 1.

11. The set is orthogonal because (2, −5, −3) ⋅ (4, −2, 6) = 8 + 10 − 18 = 0. However, the set is not orthonormal because ||(2, −5, −3)|| = √(2² + (−5)² + (−3)²) = √38 ≠ 1.

13. The set is orthogonal because
(√2/2, 0, 0, √2/2) ⋅ (0, √2/2, √2/2, 0) = 0
(√2/2, 0, 0, √2/2) ⋅ (−1/2, 1/2, −1/2, 1/2) = 0
(0, √2/2, √2/2, 0) ⋅ (−1/2, 1/2, −1/2, 1/2) = 0.
Furthermore, the set is orthonormal because
||(√2/2, 0, 0, √2/2)|| = √(1/2 + 0 + 0 + 1/2) = 1
||(0, √2/2, √2/2, 0)|| = √(0 + 1/2 + 1/2 + 0) = 1
||(−1/2, 1/2, −1/2, 1/2)|| = √(1/4 + 1/4 + 1/4 + 1/4) = 1.

15. The set is orthogonal because (−1, 4) ⋅ (8, 2) = −8 + 8 = 0. However, the set is not orthonormal because ||(−1, 4)|| = √((−1)² + 4²) = √17 ≠ 1. So, normalize the set to produce an orthonormal set.
u₁ = v₁/||v₁|| = (1/√17)(−1, 4) = (−√17/17, 4√17/17)
u₂ = v₂/||v₂|| = (1/(2√17))(8, 2) = (4√17/17, √17/17)

17. The set is orthogonal because (√3, √3, √3) ⋅ (−√2, 0, √2) = −√6 + 0 + √6 = 0. However, the set is not orthonormal because ||(√3, √3, √3)|| = √((√3)² + (√3)² + (√3)²) = √9 = 3 ≠ 1. So, normalize the set to produce an orthonormal set.
u₁ = v₁/||v₁|| = (1/3)(√3, √3, √3) = (√3/3, √3/3, √3/3)
u₂ = v₂/||v₂|| = (1/2)(−√2, 0, √2) = (−√2/2, 0, √2/2)
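The checks in Exercises 1–17 (pairwise dot products zero, each norm one) can be bundled into a single predicate; a numerical sketch, not part of the printed solutions, using the sets from Exercises 5 and 1:

```python
def is_orthonormal(vectors, tol=1e-12):
    """True if all pairwise dot products are 0 and every norm is 1."""
    for i, u in enumerate(vectors):
        for j, v in enumerate(vectors):
            target = 1.0 if i == j else 0.0
            if abs(sum(a * b for a, b in zip(u, v)) - target) > tol:
                return False
    return True

# Exercise 5 is orthonormal; Exercise 1 is orthogonal but not orthonormal.
print(is_orthonormal([(3/5, 4/5), (-4/5, 3/5)]))   # True
print(is_orthonormal([(2, -4), (2, 1)]))           # False
```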
19. The set {1, x, x², x³} is orthogonal because ⟨1, x⟩ = 0, ⟨1, x²⟩ = 0, ⟨1, x³⟩ = 0, ⟨x, x²⟩ = 0, ⟨x, x³⟩ = 0, and ⟨x², x³⟩ = 0. Furthermore, the set is orthonormal because ||1|| = 1, ||x|| = 1, ||x²|| = 1, and ||x³|| = 1. So, {1, x, x², x³} is an orthonormal basis for P₃.

21. Use Theorem 5.11 to find the coordinates of x = (1, 2) relative to B.
(1, 2) ⋅ (−2√13/13, 3√13/13) = −2√13/13 + 6√13/13 = 4√13/13
(1, 2) ⋅ (3√13/13, 2√13/13) = 3√13/13 + 4√13/13 = 7√13/13
So, [x]_B = [4√13/13, 7√13/13]ᵀ.

23. Use Theorem 5.11 to find the coordinates of x = (2, −2, 1) relative to B.
(2, −2, 1) ⋅ (√10/10, 0, 3√10/10) = 2√10/10 + 3√10/10 = √10/2
(2, −2, 1) ⋅ (0, 1, 0) = −2
(2, −2, 1) ⋅ (−3√10/10, 0, √10/10) = −6√10/10 + √10/10 = −√10/2
So, [x]_B = [√10/2, −2, −√10/2]ᵀ.

25. Use Theorem 5.11 to find the coordinates of x = (5, 10, 15) relative to B.
(5, 10, 15) ⋅ (3/5, 4/5, 0) = 3 + 8 = 11
(5, 10, 15) ⋅ (−4/5, 3/5, 0) = −4 + 6 = 2
(5, 10, 15) ⋅ (0, 0, 1) = 15
So, [x]_B = [11, 2, 15]ᵀ.

27. First, orthogonalize each vector in B.
w₁ = v₁ = (3, 4)
w₂ = v₂ − (⟨v₂, w₁⟩/⟨w₁, w₁⟩)w₁ = (1, 0) − ((1(3) + 0(4))/(3² + 4²))(3, 4) = (1, 0) − (3/25)(3, 4) = (16/25, −12/25)
Then, normalize the vectors.
u₁ = w₁/||w₁|| = (1/√(3² + 4²))(3, 4) = (3/5, 4/5)
u₂ = w₂/||w₂|| = (1/√((16/25)² + (−12/25)²))(16/25, −12/25) = (4/5, −3/5)
So, the orthonormal basis is {(3/5, 4/5), (4/5, −3/5)}.
29. First, orthogonalize each vector in B.
w₁ = v₁ = (0, 1)
w₂ = v₂ − (⟨v₂, w₁⟩/⟨w₁, w₁⟩)w₁ = (2, 5) − ((2(0) + 5(1))/(0² + 1²))(0, 1) = (2, 5) − 5(0, 1) = (2, 0)
Then, normalize the vectors.
u₁ = w₁/||w₁|| = (0, 1)
u₂ = w₂/||w₂|| = (1/2)(2, 0) = (1, 0)
So, the orthonormal basis is {(0, 1), (1, 0)}.

31. Because vᵢ ⋅ v_j = 0 for i ≠ j, the given vectors are orthogonal. Normalize the vectors.
u₁ = v₁/||v₁|| = (1/3)(1, −2, 2) = (1/3, −2/3, 2/3)
u₂ = v₂/||v₂|| = (1/3)(2, 2, 1) = (2/3, 2/3, 1/3)
u₃ = v₃/||v₃|| = (1/3)(2, −1, −2) = (2/3, −1/3, −2/3)
So, the orthonormal basis is {(1/3, −2/3, 2/3), (2/3, 2/3, 1/3), (2/3, −1/3, −2/3)}.

33. First, orthogonalize each vector in B.
w₁ = v₁ = (4, −3, 0)
w₂ = v₂ − (⟨v₂, w₁⟩/⟨w₁, w₁⟩)w₁ = (1, 2, 0) − (−2/25)(4, −3, 0) = (33/25, 44/25, 0)
w₃ = v₃ − (⟨v₃, w₁⟩/⟨w₁, w₁⟩)w₁ − (⟨v₃, w₂⟩/⟨w₂, w₂⟩)w₂ = (0, 0, 4) − 0(4, −3, 0) − 0(33/25, 44/25, 0) = (0, 0, 4)
Then, normalize the vectors.
u₁ = w₁/||w₁|| = (1/5)(4, −3, 0) = (4/5, −3/5, 0)
u₂ = w₂/||w₂|| = (5/11)(33/25, 44/25, 0) = (3/5, 4/5, 0)
u₃ = w₃/||w₃|| = (1/4)(0, 0, 4) = (0, 0, 1)
So, the orthonormal basis is {(4/5, −3/5, 0), (3/5, 4/5, 0), (0, 0, 1)}.
35. First, orthogonalize each vector in B.
w₁ = v₁ = (0, 1, 1)
w₂ = v₂ − (⟨v₂, w₁⟩/⟨w₁, w₁⟩)w₁ = (1, 1, 0) − ((1(0) + 1(1) + 0(1))/(0² + 1² + 1²))(0, 1, 1) = (1, 1, 0) − (1/2)(0, 1, 1) = (1, 1/2, −1/2)
w₃ = v₃ − (⟨v₃, w₂⟩/⟨w₂, w₂⟩)w₂ − (⟨v₃, w₁⟩/⟨w₁, w₁⟩)w₁ = (1, 0, 1) − (1/3)(1, 1/2, −1/2) − (1/2)(0, 1, 1) = (2/3, −2/3, 2/3)
Then, normalize the vectors.
u₁ = w₁/||w₁|| = (1/√2)(0, 1, 1) = (0, √2/2, √2/2)
u₂ = w₂/||w₂|| = (√6/3)(1, 1/2, −1/2)·? — more precisely, ||w₂|| = √(1 + 1/4 + 1/4) = √6/2, so u₂ = (2/√6)(1, 1/2, −1/2) = (√6/3, √6/6, −√6/6)
u₃ = w₃/||w₃|| = (√3/2)(2/3, −2/3, 2/3) = (√3/3, −√3/3, √3/3)
So, the orthonormal basis is {(0, √2/2, √2/2), (√6/3, √6/6, −√6/6), (√3/3, −√3/3, √3/3)}.

37. Because there is just one vector, you simply need to normalize it.
u₁ = (1/√((−8)² + 3² + 5²))(−8, 3, 5) = (−4√2/7, 3√2/14, 5√2/14)
So, the orthonormal basis is {(−4√2/7, 3√2/14, 5√2/14)}.

39. First, orthogonalize each vector in B.
w₁ = v₁ = (3, 4, 0)
w₂ = v₂ − (⟨v₂, w₁⟩/⟨w₁, w₁⟩)w₁ = (1, 0, 0) − (3/25)(3, 4, 0) = (16/25, −12/25, 0)
Then, normalize the vectors.
u₁ = w₁/||w₁|| = (1/√(3² + 4² + 0²))(3, 4, 0) = (3/5, 4/5, 0)
u₂ = w₂/||w₂|| = (1/√((16/25)² + (−12/25)² + 0²))(16/25, −12/25, 0) = (4/5, −3/5, 0)
So, the orthonormal basis is {(3/5, 4/5, 0), (4/5, −3/5, 0)}.
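The Gram-Schmidt computations in Exercises 27–41 follow one recipe: subtract the projections onto the previously accepted vectors, then normalize. A minimal sketch (not part of the printed solutions) checked against the data of Exercise 33:

```python
import math

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors."""
    basis = []
    for v in vectors:
        w = list(v)
        for u in basis:  # subtract the projection of v onto each earlier u
            c = sum(a * b for a, b in zip(v, u))
            w = [wi - c * ui for wi, ui in zip(w, u)]
        n = math.sqrt(sum(x * x for x in w))
        basis.append([x / n for x in w])
    return basis

# Exercise 33: B = {(4, -3, 0), (1, 2, 0), (0, 0, 4)}
for u in gram_schmidt([(4, -3, 0), (1, 2, 0), (0, 0, 4)]):
    print([round(x, 4) for x in u])
# [0.8, -0.6, 0.0] then [0.6, 0.8, 0.0] then [0.0, 0.0, 1.0]
```

Because each accepted u is already normalized, the projection coefficient is simply ⟨v, u⟩ with no division.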
41. First, orthogonalize each vector in B.
w₁ = v₁ = (1, 2, −1, 0)
w₂ = v₂ − (⟨v₂, w₁⟩/⟨w₁, w₁⟩)w₁ = (2, 2, 0, 1) − ((2(1) + 2(2) + 0(−1) + 1(0))/(1² + 2² + (−1)² + 0²))(1, 2, −1, 0) = (2, 2, 0, 1) − (1, 2, −1, 0) = (1, 0, 1, 1)
w₃ = v₃ − (⟨v₃, w₂⟩/⟨w₂, w₂⟩)w₂ − (⟨v₃, w₁⟩/⟨w₁, w₁⟩)w₁ = (1, 1, −1, 0) − (0/3)(1, 0, 1, 1) − (4/6)(1, 2, −1, 0) = (1/3, −1/3, −1/3, 0)
Then, normalize the vectors.
u₁ = w₁/||w₁|| = (1/√6)(1, 2, −1, 0) = (√6/6, √6/3, −√6/6, 0)
u₂ = w₂/||w₂|| = (1/√3)(1, 0, 1, 1) = (√3/3, 0, √3/3, √3/3)
u₃ = w₃/||w₃|| = √3 (1/3, −1/3, −1/3, 0) = (√3/3, −√3/3, −√3/3, 0)
So, the orthonormal basis is {(√6/6, √6/3, −√6/6, 0), (√3/3, 0, √3/3, √3/3), (√3/3, −√3/3, −√3/3, 0)}.

43. ⟨x, 1⟩ = ∫[−1,1] x dx = [x²/2] from −1 to 1 = 0

45. ⟨x², 1⟩ = ∫[−1,1] x² dx = [x³/3] from −1 to 1 = 2/3

47. (a) True. See "Definition of Orthogonal and Orthonormal Sets," page 306.
(b) True. See Corollary to Theorem 5.10, page 310.
(c) False. See "Remark," page 316.

49. The solutions of the homogeneous system are of the form (3s, −2t, s, t), where s and t are any real numbers. So, a basis for the solution space is {(3, 0, 1, 0), (0, −2, 0, 1)}.
Orthogonalize this basis as follows.
w₁ = v₁ = (3, 0, 1, 0)
w₂ = v₂ − (⟨v₂, w₁⟩/⟨w₁, w₁⟩)w₁ = (0, −2, 0, 1) − ((0(3) + (−2)(0) + 0(1) + 1(0))/(3² + 0² + 1² + 0²))(3, 0, 1, 0) = (0, −2, 0, 1)
Then, normalize these vectors.
u₁ = w₁/||w₁|| = (1/√10)(3, 0, 1, 0) = (3√10/10, 0, √10/10, 0)
u₂ = w₂/||w₂|| = (1/√5)(0, −2, 0, 1) = (0, −2√5/5, 0, √5/5)
So, an orthonormal basis for the solution space is {(3√10/10, 0, √10/10, 0), (0, −2√5/5, 0, √5/5)}.
51. The solutions of the homogeneous system are of the form (s + t, 0, s, t), where s and t are any real numbers. So, a basis for the solution space is {(1, 0, 1, 0), (1, 0, 0, 1)}.
Orthogonalize this basis as follows.
w₁ = v₁ = (1, 0, 1, 0)
w₂ = v₂ − (⟨v₂, w₁⟩/⟨w₁, w₁⟩)w₁ = (1, 0, 0, 1) − (1/2)(1, 0, 1, 0) = (1/2, 0, −1/2, 1)
Then, normalize these vectors.
u₁ = w₁/||w₁|| = (1/√2)(1, 0, 1, 0) = (√2/2, 0, √2/2, 0)
u₂ = w₂/||w₂|| = (2/√6)(1/2, 0, −1/2, 1) = (√6/6, 0, −√6/6, √6/3)
So, an orthonormal basis for the solution space is {(√2/2, 0, √2/2, 0), (√6/6, 0, −√6/6, √6/3)}.

53. The solutions of the homogeneous system are of the form (−3s + 3t, s, t), where s and t are any real numbers. So, a basis for the solution space is {(−3, 1, 0), (3, 0, 1)}.
Orthogonalize this basis as follows.
w₁ = v₁ = (−3, 1, 0)
w₂ = v₂ − (⟨v₂, w₁⟩/⟨w₁, w₁⟩)w₁ = (3, 0, 1) + (9/10)(−3, 1, 0) = (3/10, 9/10, 1)
Then, normalize these vectors.
u₁ = w₁/||w₁|| = (1/√10)(−3, 1, 0) = (−3√10/10, √10/10, 0)
u₂ = w₂/||w₂|| = (10/√190)(3/10, 9/10, 1) = (3√190/190, 9√190/190, √190/19)
So, an orthonormal basis for the solution space is {(−3√10/10, √10/10, 0), (3√190/190, 9√190/190, √190/19)}.

55. Let
p(x) = (x² + 1)/√2 and q(x) = (x² + x − 1)/√3.
Then
⟨p, q⟩ = (1/√2)(−1/√3) + 0(1/√3) + (1/√2)(1/√3) = 0.
Furthermore,
||p|| = √((1/√2)² + 0² + (1/√2)²) = 1
||q|| = √((−1/√3)² + (1/√3)² + (1/√3)²) = 1.
So, {p, q} is an orthonormal set.
57. Let p₁(x) = x², p₂(x) = x² + 2x, and p₃(x) = x² + 2x + 1. Then, because ⟨p₁, p₂⟩ = 0(0) + 0(2) + 1(1) = 1 ≠ 0, the set is not orthogonal. Orthogonalize the set as follows.
w₁ = p₁ = x²
w₂ = p₂ − (⟨p₂, w₁⟩/⟨w₁, w₁⟩)w₁ = x² + 2x − ((0(0) + 2(0) + 1(1))/(0² + 0² + 1²))x² = x² + 2x − x² = 2x
w₃ = p₃ − (⟨p₃, w₂⟩/⟨w₂, w₂⟩)w₂ − (⟨p₃, w₁⟩/⟨w₁, w₁⟩)w₁ = x² + 2x + 1 − ((1(0) + 2(2) + 1(0))/(0² + 2² + 0²))(2x) − ((1(0) + 2(0) + 1(1))/(0² + 0² + 1²))x² = x² + 2x + 1 − 2x − x² = 1
Then, normalize the vectors.
u₁ = w₁/||w₁|| = x²
u₂ = w₂/||w₂|| = (1/2)(2x) = x
u₃ = w₃/||w₃|| = 1
So, the orthonormal set is {x², x, 1}.

59. Let p(x) = x² − 1 and q(x) = x − 1. Then, because ⟨p, q⟩ = 1 ≠ 0, the set is not orthogonal. Orthogonalize the set as follows.
w₁ = p = x² − 1
w₂ = q − (⟨q, w₁⟩/⟨w₁, w₁⟩)w₁ = (x − 1) − (1/2)(x² − 1) = −½x² + x − ½
Then, normalize the vectors.
u₁ = w₁/||w₁|| = (1/√2)(x² − 1) = (√2/2)(x² − 1)
u₂ = w₂/||w₂|| = (1/√(3/2))(−½x² + x − ½) = (√6/3)(−½x² + x − ½) = −(√6/6)(x² − 2x + 1)
So, the orthonormal set is {(√2/2)(x² − 1), −(√6/6)(x² − 2x + 1)}.

61. Begin by orthogonalizing the set with respect to the inner product ⟨u, v⟩ = 2u₁v₁ + u₂v₂.
w₁ = v₁ = (2, −1)
w₂ = v₂ − (⟨v₂, w₁⟩/⟨w₁, w₁⟩)w₁ = (−2, 10) − ((2(−2)(2) + 10(−1))/(2(2)² + (−1)²))(2, −1) = (−2, 10) + 2(2, −1) = (2, 8)
Then, normalize each vector.
u₁ = w₁/||w₁|| = (1/√(2(2)² + (−1)²))(2, −1) = (1/3)(2, −1) = (2/3, −1/3)
u₂ = w₂/||w₂|| = (1/√(2(2)² + 8²))(2, 8) = (1/√72)(2, 8) = (√2/6, 2√2/3)
So, an orthonormal basis using the given inner product is {(2/3, −1/3), (√2/6, 2√2/3)}.
63. For {u₁, u₂, …, u_n} an orthonormal basis for Rⁿ and v any vector in Rⁿ,
v = ⟨v, u₁⟩u₁ + ⟨v, u₂⟩u₂ + … + ⟨v, u_n⟩u_n.
Because uᵢ ⋅ u_j = 0 for i ≠ j, it follows that
||v||² = ⟨v, u₁⟩²||u₁||² + ⟨v, u₂⟩²||u₂||² + … + ⟨v, u_n⟩²||u_n||²
= ⟨v, u₁⟩²(1) + ⟨v, u₂⟩²(1) + … + ⟨v, u_n⟩²(1)
= |v ⋅ u₁|² + |v ⋅ u₂|² + … + |v ⋅ u_n|².

65. First prove that condition (a) implies (b). If P⁻¹ = Pᵀ, consider pᵢ, the ith row vector of P. Because PPᵀ = I_n, you have pᵢ ⋅ pᵢ = 1 and pᵢ ⋅ p_j = 0 for i ≠ j. So, the row vectors of P form an orthonormal basis for Rⁿ. (b) implies (c): if the row vectors of P form an orthonormal basis, then PPᵀ = I_n ⇒ PᵀP = I_n, which implies that the column vectors of P form an orthonormal basis. (c) implies (a): because the column vectors of P form an orthonormal basis, you have PᵀP = I_n, which implies that P⁻¹ = Pᵀ.

67. Note that v₁ and v₂ are orthogonal unit vectors. Furthermore, a vector (c₁, c₂, c₃, c₄) orthogonal to v₁ and v₂ satisfies the homogeneous system of linear equations
(1/√2)c₁ + (1/√2)c₃ = 0
−(1/√2)c₂ + (1/√2)c₄ = 0,
which has solutions of the form (−s, t, s, t), where s and t are any real numbers. A basis for the solution set is {(1, 0, −1, 0), (0, 1, 0, 1)}. Because (1, 0, −1, 0) and (0, 1, 0, 1) are already orthogonal, you simply normalize them to yield (1/√2, 0, −1/√2, 0) and (0, 1/√2, 0, 1/√2). So,
{(1/√2, 0, 1/√2, 0), (0, −1/√2, 0, 1/√2), (1/√2, 0, −1/√2, 0), (0, 1/√2, 0, 1/√2)}
is an orthonormal basis.

69. A = [1 1 −1; 0 2 1; 1 3 0] row reduces to [2 0 −3; 0 2 1; 0 0 0], and Aᵀ = [1 0 1; 1 2 3; −1 1 0] row reduces to [1 0 1; 0 1 1; 0 0 0].
N(A)-basis: {(3, −1, 2)}
N(Aᵀ)-basis: {(−1, −1, 1)}
R(A)-basis: {(1, 0, 1), (1, 2, 3)}
R(Aᵀ)-basis: {(1, 1, −1), (0, 2, 1)}
N(A) = R(Aᵀ)⊥ and N(Aᵀ) = R(A)⊥
71. A = [1 0 1; 1 1 1] ⇒ [1 0 1; 0 1 0]
AT = [1 1; 0 1; 1 1] ⇒ [1 0; 0 1; 0 0]
N(A)-basis: {(1, 0, −1)}
N(AT) = {(0, 0)}
R(A)-basis: {(1, 1), (0, 1)}   (R(A) = R^2)
R(AT)-basis: {(1, 0, 1), (1, 1, 1)}
N(A) = R(AT)⊥ and N(AT) = R(A)⊥

73. (a) The row space of A is the column space of AT, R(AT).
(b) Let x ∈ N(A) ⇒ Ax = 0 ⇒ x is orthogonal to all the rows of A ⇒ x is orthogonal to all the columns of AT ⇒ x ∈ R(AT)⊥.
(c) Let x ∈ R(AT)⊥ ⇒ x is orthogonal to each column vector of AT ⇒ Ax = 0 ⇒ x ∈ N(A). Combining this with part (b), N(A) = R(AT)⊥.
(d) Substitute AT for A in part (c).
Section 5.4 Mathematical Models and Least Squares Analysis

1. Not orthogonal: (0, 1, 1) ⋅ (−1, 2, 0) = 2 ≠ 0

3. Orthogonal: (1, 1, 1, 1) ⋅ (−1, 1, −1, 1) = (1, 1, 1, 1) ⋅ (0, 2, −2, 0) = 0

5. S = span((1, 0, 0), (0, 0, 1)) ⇒ S⊥ = span{(0, 1, 0)}   (The y-axis)

7. AT = [1 2 0 0; 0 1 0 1] ⇒ [1 0 0 −2; 0 1 0 1] ⇒ S⊥ = span{(0, 0, 1, 0), (2, −1, 0, 1)}

9. The orthogonal complement of S⊥ = span{(0, 0, 1, 0), (2, −1, 0, 1)} is (S⊥)⊥ = S = span((1, 2, 0, 0), (0, 1, 0, 1)).
11. An orthonormal basis for S is {u1, u2} = {(0, 0, −1/√2, 1/√2), (0, 1/√3, 1/√3, 1/√3)}.
projS v = (v ⋅ u1)u1 + (v ⋅ u2)u2 = 0u1 + (2/√3)u2 = (0, 2/3, 2/3, 2/3)
13. Use Gram-Schmidt to construct an orthonormal basis for S.
(0, 1, 1) − (1/2)(1, 0, 1) = (−1/2, 1, 1/2)
orthonormal basis: {(1/√2, 0, 1/√2), (−1/√6, 2/√6, 1/√6)}
projS v = (u1 ⋅ v)u1 + (u2 ⋅ v)u2 = (6/√2)u1 + (8/√6)u2 = (3, 0, 3) + (−4/3, 8/3, 4/3) = (5/3, 8/3, 13/3)
15. Use Gram-Schmidt to construct an orthonormal basis for the column space of A.
(2, 1, 1) − (3/2)(1, 0, 1) = (1/2, 1, −1/2)
orthonormal basis: {(1/√2, 0, 1/√2), (1/√6, 2/√6, −1/√6)}
projS b = (u1 ⋅ b)u1 + (u2 ⋅ b)u2 = (3/√2)u1 + (−3/√6)u2 = (3/2, 0, 3/2) + (−1/2, −1, 1/2) = (1, −1, 2)
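The Gram-Schmidt step in Exercise 15 can be sketched in code. This is an illustrative check (not part of the text), assuming the columns of A are (1, 0, 1) and (2, 1, 1) as in the solution above; it reproduces the orthonormal basis {(1/√2, 0, 1/√2), (1/√6, 2/√6, −1/√6)}.

```python
import math

# Hedged sketch of Gram-Schmidt for Exercise 15, assuming columns
# (1, 0, 1) and (2, 1, 1) of A (as in the solution above).
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors):
    ortho = []
    for v in vectors:
        w = list(v)
        for u in ortho:
            c = dot(w, u)                  # subtract projection onto u
            w = [wi - c * ui for wi, ui in zip(w, u)]
        norm = math.sqrt(dot(w, w))
        ortho.append(tuple(wi / norm for wi in w))
    return ortho

u1, u2 = gram_schmidt([(1, 0, 1), (2, 1, 1)])
r2, r6 = math.sqrt(2), math.sqrt(6)
assert max(abs(a - b) for a, b in zip(u1, (1/r2, 0, 1/r2))) < 1e-12
assert max(abs(a - b) for a, b in zip(u2, (1/r6, 2/r6, -1/r6))) < 1e-12
```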
17. A = [1 2 3; 0 1 0] ⇒ [1 0 3; 0 1 0]
AT = [1 0; 2 1; 3 0] ⇒ [1 0; 0 1; 0 0]
N(A)-basis: {(−3, 0, 1)}
N(AT) = {(0, 0)}
R(A)-basis: {(1, 0), (2, 1)}   (R(A) = R^2)
R(AT)-basis: {(1, 2, 3), (0, 1, 0)}

19. A = [1 0 0 1; 0 1 1 1; 1 1 1 2; 1 2 2 3] ⇒ [1 0 0 1; 0 1 1 1; 0 0 0 0; 0 0 0 0]
AT = [1 0 1 1; 0 1 1 2; 0 1 1 2; 1 1 2 3] ⇒ [1 0 1 1; 0 1 1 2; 0 0 0 0; 0 0 0 0]
N(A)-basis: {(0, −1, 1, 0), (−1, −1, 0, 1)}
N(AT)-basis: {(−1, −1, 1, 0), (−1, −2, 0, 1)}
R(A)-basis: {(1, 0, 1, 1), (0, 1, 1, 2)}
R(AT)-basis: {(1, 0, 0, 1), (0, 1, 1, 1)}

21. AT A = [2 1 1; 1 2 1][2 1; 1 2; 1 1] = [6 5; 5 6]
AT b = [2 1 1; 1 2 1](2, 0, −3) = (1, −1)
[6 5 | 1; 5 6 | −1] ⇒ [1 0 | 1; 0 1 | −1] ⇒ x = (1, −1)

23. AT A = [1 1 0 1; 0 1 1 1; 1 1 1 0][1 0 1; 1 1 1; 0 1 1; 1 1 0] = [3 2 2; 2 3 2; 2 2 3]
AT b = [1 1 0 1; 0 1 1 1; 1 1 1 0](4, −1, 0, 1) = (4, 0, 3)
[3 2 2 | 4; 2 3 2 | 0; 2 2 3 | 3] ⇒ [1 0 0 | 2; 0 1 0 | −2; 0 0 1 | 1] ⇒ x = (2, −2, 1)

25. AT A = [3 2 0; 2 6 1; 0 1 3], AT b = (1, 1, 1)
[3 2 0 | 1; 2 6 1 | 1; 0 1 3 | 1] ⇒ [1 0 0 | 1/3; 0 1 0 | 0; 0 0 1 | 1/3] ⇒ x = (1/3, 0, 1/3)

27. AT A = [1 1 1; −1 1 3][1 −1; 1 1; 1 3] = [3 3; 3 11]
AT b = [1 1 1; −1 1 3](1, 0, −3) = (−2, −10)
[3 3 | −2; 3 11 | −10] ⇒ [1 0 | 1/3; 0 1 | −1] ⇒ x = (1/3, −1)
line: y = 1/3 − x   (the graph shows this line through the data points (−1, 1), (1, 0), (3, −3))
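The normal-equation computations in Exercises 21-27 can be checked in code. This is a hedged sketch (not from the text) that solves the 2×2 normal equations by Cramer's rule for the Exercise 27 data (−1, 1), (1, 0), (3, −3); the solution should match the line y = 1/3 − x.

```python
# Hedged sketch (not from the text): solve the 2x2 normal equations
# A^T A x = A^T b by Cramer's rule for the Exercise 27 data.
pts = [(-1.0, 1.0), (1.0, 0.0), (3.0, -3.0)]
n = len(pts)
st = sum(t for t, _ in pts)                # sum of t
stt = sum(t * t for t, _ in pts)           # sum of t^2
sy = sum(y for _, y in pts)                # sum of y
sty = sum(t * y for t, y in pts)           # sum of t*y

# Normal equations: [n st; st stt](c0, c1) = (sy, sty)
det = n * stt - st * st
c0 = (sy * stt - st * sty) / det
c1 = (n * sty - st * sy) / det
assert abs(c0 - 1/3) < 1e-12 and abs(c1 + 1.0) < 1e-12
```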
29. AT A = [1 1 1 1; −2 −1 1 2][1 −2; 1 −1; 1 1; 1 2] = [4 0; 0 10]
AT b = [1 1 1 1; −2 −1 1 2](−1, 0, 0, 2) = (1, 6)
[4 0 | 1; 0 10 | 6] ⇒ [1 0 | 1/4; 0 1 | 3/5] ⇒ x = (1/4, 3/5)
line: y = 1/4 + (3/5)x

31. AT A = [1 1 1 1 1; −2 −1 0 1 2][1 −2; 1 −1; 1 0; 1 1; 1 2] = [5 0; 0 10]
AT b = [1 1 1 1 1; −2 −1 0 1 2](1, 2, 1, 2, 1) = (7, 0)
[5 0 | 7; 0 10 | 0] ⇒ [1 0 | 7/5; 0 1 | 0] ⇒ x = (7/5, 0)
line: y = 7/5

33. AT A = [1 1 1 1; 0 2 3 4; 0 4 9 16][1 0 0; 1 2 4; 1 3 9; 1 4 16] = [4 9 29; 9 29 99; 29 99 353]
AT b = [1 1 1 1; 0 2 3 4; 0 4 9 16](0, 2, 6, 12) = (20, 70, 254)
[4 9 29 | 20; 9 29 99 | 70; 29 99 353 | 254] ⇒ [1 0 0 | 0; 0 1 0 | −1; 0 0 1 | 1] ⇒ x = (0, −1, 1)
Quadratic Polynomial: y = x^2 − x

35. AT A = [1 1 1 1 1; −2 −1 0 1 2; 4 1 0 1 4][1 −2 4; 1 −1 1; 1 0 0; 1 1 1; 1 2 4] = [5 0 10; 0 10 0; 10 0 34]
AT b = [1 1 1 1 1; −2 −1 0 1 2; 4 1 0 1 4](0, 0, 1, 2, 5) = (8, 12, 22)
[5 0 10 | 8; 0 10 0 | 12; 10 0 34 | 22] ⇒ [1 0 0 | 26/35; 0 1 0 | 6/5; 0 0 1 | 3/7] ⇒ x = (26/35, 6/5, 3/7)
Quadratic Polynomial: y = 26/35 + (6/5)x + (3/7)x^2

37. Using a graphing utility, you find that the least squares cubic polynomial is the best fit for both companies.
Advanced Auto Parts: S = 2.859t^3 − 32.81t^2 + 492.0t + 2234
For 2010, t = 10, so S ≈ $6732 million.
Auto Zone: S = 8.444t^3 − 105.48t^2 + 578.6t + 4444
For 2010, t = 10, so S ≈ $8126 million.
39. Substitute the data points (−1, 6325), (0, 6505), (1, 6578), (2, 6668), (3, 6999), and (4, 7376) into the equation y = c0 + c1t + c2t^2 to obtain the following system.
c0 + c1(−1) + c2(−1)^2 = 6325
c0 + c1(0) + c2(0)^2 = 6505
c0 + c1(1) + c2(1)^2 = 6578
c0 + c1(2) + c2(2)^2 = 6668
c0 + c1(3) + c2(3)^2 = 6999
c0 + c1(4) + c2(4)^2 = 7376
This produces the least squares problem Ax = b:
[1 −1 1; 1 0 0; 1 1 1; 1 2 4; 1 3 9; 1 4 16](c0, c1, c2) = (6325, 6505, 6578, 6668, 6999, 7376).
The normal equations are AT Ax = AT b:
[6 9 31; 9 31 99; 31 99 355](c0, c1, c2) = (40,451, 64,090, 220,582)
and their solution is x = (c0, c1, c2) ≈ (6425, 87.0, 36.02).
So, the least squares regression quadratic polynomial is y = 6425 + 87.0t + 36.02t^2.

41. Use a graphing utility.
Least squares regression line: y = 4946.7t + 28,231 (r^2 ≈ 0.9869)
Least squares cubic regression polynomial: y = 2.416t^3 − 36.74t^2 + 4989.3t + 28,549 (r^2 ≈ 0.9871)
The cubic model is a better fit for the data.

43. (a) False. See discussion after Example 2, page 322.
(b) True. See "Definition of Direct Sum," page 323.
(c) True. See discussion preceding Example 7, page 328.

45. Let v ∈ S1 ∩ S2. Because v ∈ S1, v ⋅ x2 = 0 for all x2 ∈ S2. In particular, because v ∈ S2, v ⋅ v = 0, and so v = 0.

47. Let v ∈ R^n, v = v1 + v2, v1 ∈ S, v2 ∈ S⊥. Let {u1, …, ut} be an orthonormal basis for S. Then v = v1 + v2 = c1u1 + … + ctut + v2, ci ∈ R, and
v ⋅ ui = (c1u1 + … + ctut + v2) ⋅ ui = ci(ui ⋅ ui) = ci,
which shows that v1 = projS v = (v ⋅ u1)u1 + … + (v ⋅ ut)ut.

49. If A has orthonormal columns, then AT A = I and the normal equations become
AT Ax = AT b
x = AT b.
Section 5.5 Applications of Inner Product Spaces

1. j × i = det[i j k; 0 1 0; 1 0 0] = 0i − 0j − k = −k

3. j × k = det[i j k; 0 1 0; 0 0 1] = i − 0j + 0k = i
5. i × k = det[i j k; 1 0 0; 0 0 1] = 0i − j + 0k = −j

7. u × v = det[i j k; 0 1 −2; 1 −1 0] = −2i − 2j − k = (−2, −2, −1)
Furthermore, u × v = (−2, −2, −1) is orthogonal to both (0, 1, −2) and (1, −1, 0) because (−2, −2, −1) ⋅ (0, 1, −2) = 0 and (−2, −2, −1) ⋅ (1, −1, 0) = 0.

9. u × v = det[i j k; 12 −3 1; −2 5 1] = −8i − 14j + 54k = (−8, −14, 54)
Furthermore, u × v = (−8, −14, 54) is orthogonal to both (12, −3, 1) and (−2, 5, 1) because (−8, −14, 54) ⋅ (12, −3, 1) = 0 and (−8, −14, 54) ⋅ (−2, 5, 1) = 0.

11. u × v = det[i j k; 2 −3 1; 1 −2 1] = −i − j − k = (−1, −1, −1)
Furthermore, u × v = (−1, −1, −1) is orthogonal to both (2, −3, 1) and (1, −2, 1) because (−1, −1, −1) ⋅ (2, −3, 1) = 0 and (−1, −1, −1) ⋅ (1, −2, 1) = 0.

13. u × v = det[i j k; 0 1 6; 2 0 −1] = −i + 12j − 2k = (−1, 12, −2)
Furthermore, u × v = (−1, 12, −2) is orthogonal to both (0, 1, 6) and (2, 0, −1) because (−1, 12, −2) ⋅ (0, 1, 6) = 0 and (−1, 12, −2) ⋅ (2, 0, −1) = 0.

15. u × v = det[i j k; 1 1 1; 2 1 −1] = −2i + 3j − k = (−2, 3, −1)
Furthermore, u × v = (−2, 3, −1) is orthogonal to both (1, 1, 1) and (2, 1, −1) because (−2, 3, −1) ⋅ (1, 1, 1) = 0 and (−2, 3, −1) ⋅ (2, 1, −1) = 0.

17. Using a graphing utility:
w = u × v = (5, −4, −3)
Check if w is orthogonal to both u and v:
w ⋅ u = (5, −4, −3) ⋅ (1, 2, −1) = 5 − 8 + 3 = 0
w ⋅ v = (5, −4, −3) ⋅ (2, 1, 2) = 10 − 4 − 6 = 0

19. Using a graphing utility:
w = u × v = (2, −1, −1)
Check if w is orthogonal to both u and v:
w ⋅ u = (2, −1, −1) ⋅ (0, 1, −1) = −1 + 1 = 0
w ⋅ v = (2, −1, −1) ⋅ (1, 2, 0) = 2 − 2 = 0

21. Using a graphing utility:
w = u × v = (1, −1, −3)
Check if w is orthogonal to both u and v:
w ⋅ u = (1, −1, −3) ⋅ (2, −1, 1) = 2 + 1 − 3 = 0
w ⋅ v = (1, −1, −3) ⋅ (1, −2, 1) = 1 + 2 − 3 = 0

23. Using a graphing utility:
w = u × v = (1, −5, −3)
Check if w is orthogonal to both u and v:
w ⋅ u = (1, −5, −3) ⋅ (2, 1, −1) = 2 − 5 + 3 = 0
w ⋅ v = (1, −5, −3) ⋅ (1, −1, 2) = 1 + 5 − 6 = 0
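The component formula behind Exercises 7-23 is easy to sketch in code; the following is an illustrative check (not part of the text) using the Exercise 13 data u = (0, 1, 6), v = (2, 0, −1).

```python
# Hedged sketch (not from the text): the component formula for the
# cross product used throughout Exercises 7-23.
def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

u, v = (0, 1, 6), (2, 0, -1)
w = cross(u, v)
assert w == (-1, 12, -2)                   # matches Exercise 13
assert dot(w, u) == 0 and dot(w, v) == 0   # orthogonal to both factors
```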
25. Because u × v = det[i j k; 0 1 0; 0 1 1] = i = (1, 0, 0), the area of the parallelogram is ||u × v|| = ||i|| = 1.

27. Because u × v = det[i j k; 3 2 −1; 1 2 3] = 8i − 10j + 4k = (8, −10, 4), the area of the parallelogram is
||(8, −10, 4)|| = √(8^2 + (−10)^2 + 4^2) = √180 = 6√5.

29. (2, 3, 4) − (1, 1, 1) = (1, 2, 3)
(7, 7, 5) − (6, 5, 2) = (1, 2, 3)
(7, 7, 5) − (2, 3, 4) = (5, 4, 1)
(6, 5, 2) − (1, 1, 1) = (5, 4, 1)
u = (1, 2, 3) and v = (5, 4, 1)
Because u × v = det[i j k; 1 2 3; 5 4 1] = −10i + 14j − 6k = (−10, 14, −6), the area of the parallelogram is
||u × v|| = √((−10)^2 + 14^2 + (−6)^2) = √332 = 2√83.

31. Because v × w = det[i j k; 0 1 0; 0 0 1] = i = (1, 0, 0), the triple scalar product of u, v, and w is u ⋅ (v × w) = (1, 0, 0) ⋅ (1, 0, 0) = 1.

33. Because v × w = det[i j k; 2 1 0; 0 0 1] = i − 2j = (1, −2, 0), the triple scalar product of u, v, and w is u ⋅ (v × w) = (1, 1, 1) ⋅ (1, −2, 0) = −1.

35. The area of the base of the parallelepiped is ||v × w||. The height is ||proj_{v × w} u|| = |cos θ| ||u||, where cos θ = u ⋅ (v × w)/(||v × w|| ||u||). So,
volume = base × height = ||v × w|| ⋅ |u ⋅ (v × w)|/(||v × w|| ||u||) ⋅ ||u|| = |u ⋅ (v × w)|.

37. (3, 3, 0) − (1, 3, 5) = (2, 0, −5)
(3, 3, 0) − (−2, 0, 5) = (5, 3, −5)
Because u × v = det[i j k; 2 0 −5; 5 3 −5] = 15i − 15j + 6k = (15, −15, 6), the area of the triangle is
A = (1/2)||u × v|| = (1/2)√(15^2 + (−15)^2 + 6^2) = (1/2)√486 = 9√6/2.

39. Because v × w = det[i j k; 0 1 1; 1 0 1] = i + j − k = (1, 1, −1), the volume is given by u ⋅ (v × w) = (1, 1, 0) ⋅ (1, 1, −1) = 2.
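The volume computation in Exercise 39 can be reproduced numerically; this hedged sketch (not part of the text) evaluates the triple scalar product |u ⋅ (v × w)| for the same vectors.

```python
# Hedged sketch (not from the text): the triple scalar product gives the
# parallelepiped volume, checked on the Exercise 39 data.
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

u, v, w = (1, 1, 0), (0, 1, 1), (1, 0, 1)
assert cross(v, w) == (1, 1, -1)
assert abs(dot(u, cross(v, w))) == 2       # the volume found in Exercise 39
```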
41. u × (v + w) = det[i j k; u1 u2 u3; v1+w1 v2+w2 v3+w3]
= [u2(v3 + w3) − u3(v2 + w2)]i − [u1(v3 + w3) − u3(v1 + w1)]j + [u1(v2 + w2) − u2(v1 + w1)]k
= (u2v3 − v2u3)i − (u1v3 − u3v1)j + (u1v2 − u2v1)k + (u2w3 − u3w2)i − (u1w3 − u3w1)j + (u1w2 − u2w1)k
= det[i j k; u1 u2 u3; v1 v2 v3] + det[i j k; u1 u2 u3; w1 w2 w3]
= (u × v) + (u × w)

43. u × u = det[i j k; u1 u2 u3; u1 u2 u3] = 0, because two rows are the same.

45. Because u × v = (u2v3 − v2u3)i − (u1v3 − u3v1)j + (u1v2 − v1u2)k, you see that
u ⋅ (u × v) = (u1, u2, u3) ⋅ (u2v3 − v2u3, −u1v3 + u3v1, u1v2 − v1u2)
= u1u2v3 − u1v2u3 − u2u1v3 + u2u3v1 + u3u1v2 − u3v1u2 = 0,
which shows that u is orthogonal to u × v. A similar computation shows that v ⋅ (u × v) = 0. [Note that v ⋅ (u × v) = −v ⋅ (v × u) = 0 by the above with the roles of u and v reversed.]

47. You have the following equivalences.
u × v = 0 ⇔ ||u × v|| = 0 ⇔ ||u|| ||v|| sin θ = 0 (Theorem 5.18(2)) ⇔ sin θ = 0 ⇔ θ = 0 ⇔ u and v are parallel.

49. (a) u × (v × w) = u × det[i j k; v1 v2 v3; w1 w2 w3]
= u × [(v2w3 − w2v3)i − (v1w3 − w1v3)j + (v1w2 − v2w1)k]
= det[i j k; u1 u2 u3; (v2w3 − w2v3) (w1v3 − v1w3) (v1w2 − v2w1)]
= [u2(v1w2 − v2w1) − u3(w1v3 − v1w3)]i − [u1(v1w2 − v2w1) − u3(v2w3 − w2v3)]j + [u1(w1v3 − v1w3) − u2(v2w3 − w2v3)]k
= (u2w2v1 + u3w3v1 − u2v2w1 − u3v3w1, u1w1v2 + u3w3v2 − u1v1w2 − u3v3w2, u1w1v3 + u2w2v3 − u1v1w3 − u2v2w3)
= (u1w1 + u2w2 + u3w3)(v1, v2, v3) − (u1v1 + u2v2 + u3v3)(w1, w2, w3)
= (u ⋅ w)v − (u ⋅ v)w
(b) Let u = (1, 0, 0), v = (0, 1, 0), and w = (1, 1, 1). Then v × w = (1, 0, −1) and u × v = (0, 0, 1). So u × (v × w) = (1, 0, 0) × (1, 0, −1) = (0, 1, 0), while (u × v) × w = (0, 0, 1) × (1, 1, 1) = (−1, 1, 0), which are not equal.

51. (a) The standard basis for P1 is {1, x}. Applying the Gram-Schmidt orthonormalization process produces the orthonormal basis
B = {w1, w2} = {1/√3, (1/3)(2x − 5)}.
The least squares approximating function is given by g(x) = ⟨f, w1⟩w1 + ⟨f, w2⟩w2. Find the inner products
⟨f, w1⟩ = ∫₁⁴ √x (1/√3) dx = (2/(3√3))x^(3/2) ]₁⁴ = 14/(3√3)
⟨f, w2⟩ = ∫₁⁴ √x (1/3)(2x − 5) dx = [(4/15)x^(5/2) − (10/9)x^(3/2)]₁⁴ = 22/45
and conclude that
g(x) = ⟨f, w1⟩w1 + ⟨f, w2⟩w2 = (14/(3√3))(1/√3) + (22/45)(1/3)(2x − 5) = (44/135)x + 20/27 = (4/135)(25 + 11x).
(b) Graph of f and g on [0, 4.5].

53. (a) The standard basis for P1 is {1, x}. Applying the Gram-Schmidt orthonormalization process produces the orthonormal basis
B = {w1, w2} = {1, √3(2x − 1)}.
The least squares approximating function is then given by g(x) = ⟨f, w1⟩w1 + ⟨f, w2⟩w2. Find the inner products
⟨f, w1⟩ = ∫₀¹ e^(−2x) dx = −(1/2)e^(−2x) ]₀¹ = −(1/2)(e^(−2) − 1)
⟨f, w2⟩ = ∫₀¹ e^(−2x) √3(2x − 1) dx = −√3 x e^(−2x) ]₀¹ = −√3 e^(−2)
and conclude that
g(x) = ⟨f, w1⟩w1 + ⟨f, w2⟩w2 = −(1/2)(e^(−2) − 1) − √3 e^(−2) ⋅ √3(2x − 1) = −6e^(−2)x + (1/2)(5e^(−2) + 1) ≈ −0.812x + 0.8383.
(b) Graph of f and g on [0, 1].
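The inner-product integrals in Exercise 53 can be verified numerically. This is a hedged check (not part of the text), approximating ⟨f, w1⟩ = ∫₀¹ e^(−2x) dx with a midpoint Riemann sum and comparing against the closed form −(e^(−2) − 1)/2.

```python
import math

# Hedged numeric check (not part of the text): verify the inner product
# <f, w1> = -(e^(-2) - 1)/2 from Exercise 53, where f(x) = e^(-2x) and
# w1 = 1 on [0, 1], using a midpoint Riemann sum.
n = 20000
h = 1.0 / n
approx = sum(math.exp(-2 * (k + 0.5) * h) for k in range(n)) * h
exact = -(math.exp(-2) - 1) / 2            # the value computed above
assert abs(approx - exact) < 1e-6
```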
55. (a) The standard basis for P1 is {1, x}. Applying the Gram-Schmidt orthonormalization process produces the orthonormal basis
B = {w1, w2} = {√(2π)/π, (√(6π)/π^2)(4x − π)}.
The least squares approximating function is then given by g(x) = ⟨f, w1⟩w1 + ⟨f, w2⟩w2. Find the inner products
⟨f, w1⟩ = ∫₀^(π/2) (sin x)(√(2π)/π) dx = −(√(2π)/π) cos x ]₀^(π/2) = √(2π)/π
⟨f, w2⟩ = ∫₀^(π/2) (sin x)(√(6π)/π^2)(4x − π) dx = (√(6π)/π^2)[−4x cos x + 4 sin x + π cos x]₀^(π/2) = (√(6π)/π^2)(4 − π)
and conclude that
g(x) = ⟨f, w1⟩w1 + ⟨f, w2⟩w2 = 2/π + (6/π^3)(4 − π)(4x − π) = (24(4 − π)/π^3)x − 8(3 − π)/π^2 ≈ 0.6644x + 0.1148.
(b) Graph of f and g on [0, π/2].
57. (a) The standard basis for P2 is {1, x, x^2}. Applying the Gram-Schmidt orthonormalization process produces the orthonormal basis
B = {w1, w2, w3} = {1, √3(2x − 1), √5(6x^2 − 6x + 1)}.
The least squares approximating function for f is given by g(x) = ⟨f, w1⟩w1 + ⟨f, w2⟩w2 + ⟨f, w3⟩w3. Find the inner products
⟨f, w1⟩ = ∫₀¹ x^3 (1) dx = (1/4)x^4 ]₀¹ = 1/4
⟨f, w2⟩ = ∫₀¹ x^3 √3(2x − 1) dx = √3[(2/5)x^5 − (1/4)x^4]₀¹ = 3√3/20
⟨f, w3⟩ = ∫₀¹ x^3 √5(6x^2 − 6x + 1) dx = √5[x^6 − (6/5)x^5 + (1/4)x^4]₀¹ = √5/20
and conclude that
g(x) = ⟨f, w1⟩w1 + ⟨f, w2⟩w2 + ⟨f, w3⟩w3 = 1/4 + (3√3/20)√3(2x − 1) + (√5/20)√5(6x^2 − 6x + 1) = (3/2)x^2 − (3/5)x + 1/20.
(b) Graph of f and g on [0, 1].
59. (a) The standard basis for P2 is {1, x, x^2}. Applying the Gram-Schmidt orthonormalization process produces the orthonormal basis
B = {w1, w2, w3} = {1/√π, (√3/(π√π))(2x − π), (√5/(π^2 √π))(6x^2 − 6πx + π^2)}.
The least squares approximating function for f is given by g(x) = ⟨f, w1⟩w1 + ⟨f, w2⟩w2 + ⟨f, w3⟩w3. Find the inner products
⟨f, w1⟩ = ∫₀^π (sin x)(1/√π) dx = −(1/√π) cos x ]₀^π = 2/√π
⟨f, w2⟩ = ∫₀^π (sin x)(√3/(π√π))(2x − π) dx = (√3/(π√π))[−2x cos x + 2 sin x + π cos x]₀^π = 0
⟨f, w3⟩ = ∫₀^π (sin x)(√5/(π^2 √π))(6x^2 − 6πx + π^2) dx = (√5/(π^2 √π))[−(6x^2 − 6πx + π^2 − 12) cos x + (12x − 6π) sin x]₀^π = 2√5(π^2 − 12)/(π^2 √π)
and conclude that
g(x) = (2/√π)(1/√π) + (2√5(π^2 − 12)/(π^2 √π))(√5/(π^2 √π))(6x^2 − 6πx + π^2)
= 12(π^2 − 10)/π^3 − (60(π^2 − 12)/π^4)x + (60(π^2 − 12)/π^5)x^2 ≈ −0.0505 + 1.3122x − 0.4177x^2.
(b) Graph of f and g on [0, π].
61. (a) The standard basis for P2 is {1, x, x^2}. Applying the Gram-Schmidt orthonormalization process produces the orthonormal basis
B = {w1, w2, w3} = {1/√π, (2√3/π^(3/2))x, (6√5/π^(5/2))(x^2 − π^2/12)}.
The least squares approximating function is then given by g(x) = ⟨f, w1⟩w1 + ⟨f, w2⟩w2 + ⟨f, w3⟩w3. Find the inner products
⟨f, w1⟩ = ∫_(−π/2)^(π/2) (1/√π) cos x dx = (1/√π) sin x ]_(−π/2)^(π/2) = 2/√π
⟨f, w2⟩ = ∫_(−π/2)^(π/2) (2√3/π^(3/2)) x cos x dx = (2√3/π^(3/2))[cos x + x sin x]_(−π/2)^(π/2) = 0
⟨f, w3⟩ = ∫_(−π/2)^(π/2) (6√5/π^(5/2))(x^2 − π^2/12) cos x dx = [(√5/(2π^(5/2)))(12x^2 − π^2 − 24) sin x + (12√5/π^(5/2)) x cos x]_(−π/2)^(π/2) = 2√5(π^2 − 12)/π^(5/2)
and conclude that
g(x) = (2/√π)(1/√π) + (0)(2√3/π^(3/2))x + (2√5(π^2 − 12)/π^(5/2))(6√5/π^(5/2))(x^2 − π^2/12)
= ((60π^2 − 720)/π^5)x^2 + (60 − 3π^2)/π^3 ≈ −0.4177x^2 + 0.9802.
(b) Graph of f and g on [−π/2, π/2].
63. The third order Fourier approximation of f(x) = π − x is of the form
g(x) = a0/2 + a1 cos x + b1 sin x + a2 cos 2x + b2 sin 2x + a3 cos 3x + b3 sin 3x.
Find the coefficients as follows.
a0 = (1/π)∫₀^(2π) f(x) dx = (1/π)∫₀^(2π) (π − x) dx = −(1/(2π))(π − x)^2 ]₀^(2π) = 0
aj = (1/π)∫₀^(2π) f(x) cos jx dx = (1/π)∫₀^(2π) (π − x) cos jx dx = (1/π)[−(x/j) sin jx − (1/j^2) cos jx + (π/j) sin jx]₀^(2π) = 0, j = 1, 2, 3
bj = (1/π)∫₀^(2π) f(x) sin jx dx = (1/π)∫₀^(2π) (π − x) sin jx dx = (1/π)[(x/j) cos jx − (1/j^2) sin jx − (π/j) cos jx]₀^(2π) = 2/j, j = 1, 2, 3
So, the approximation is g(x) = 2 sin x + sin 2x + (2/3) sin 3x.
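The closed-form coefficients in Exercise 63 can be spot-checked numerically. This is a hedged sketch (not part of the text): it approximates the Fourier coefficients of f(x) = π − x on [0, 2π] with a midpoint Riemann sum and compares against aj = 0 and bj = 2/j.

```python
import math

# Hedged numeric check (not from the text): approximate the Fourier
# coefficients of f(x) = pi - x on [0, 2*pi] with a midpoint Riemann sum.
def coeff(weight, n=20000):
    """(1/pi) * integral of (pi - x) * weight(x) over [0, 2*pi]."""
    h = 2 * math.pi / n
    total = sum((math.pi - (k + 0.5) * h) * weight((k + 0.5) * h)
                for k in range(n))
    return total * h / math.pi

b1 = coeff(math.sin)                       # expect b1 = 2/1 = 2
a1 = coeff(math.cos)                       # expect a1 = 0
assert abs(b1 - 2.0) < 1e-3 and abs(a1) < 1e-3
```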
65. The third order Fourier approximation of f(x) = (x − π)^2 is of the form
g(x) = a0/2 + a1 cos x + b1 sin x + a2 cos 2x + b2 sin 2x + a3 cos 3x + b3 sin 3x.
Find the coefficients as follows.
a0 = (1/π)∫₀^(2π) (x − π)^2 dx = (1/(3π))(x − π)^3 ]₀^(2π) = 2π^2/3
aj = (1/π)∫₀^(2π) (x − π)^2 cos jx dx = (1/π)[(1/j)(x − π)^2 sin jx + (2/j^2)(x − π) cos jx − (2/j^3) sin jx]₀^(2π) = 4/j^2, j = 1, 2, 3
bj = (1/π)∫₀^(2π) (x − π)^2 sin jx dx = (1/π)[−(1/j)(x − π)^2 cos jx + (2/j^2)(x − π) sin jx + (2/j^3) cos jx]₀^(2π) = 0, j = 1, 2, 3
So, the approximation is g(x) = π^2/3 + 4 cos x + cos 2x + (4/9) cos 3x.
67. The first order Fourier approximation of f(x) = e^(−x) is of the form g(x) = a0/2 + a1 cos x + b1 sin x.
Find the coefficients as follows.
a0 = (1/π)∫₀^(2π) e^(−x) dx = −(1/π)e^(−x) ]₀^(2π) = (1 − e^(−2π))/π
a1 = (1/π)∫₀^(2π) e^(−x) cos x dx = (1/π)[−(1/2)e^(−x) cos x + (1/2)e^(−x) sin x]₀^(2π) = (1 − e^(−2π))/(2π)
b1 = (1/π)∫₀^(2π) e^(−x) sin x dx = (1/π)[−(1/2)e^(−x) sin x − (1/2)e^(−x) cos x]₀^(2π) = (1 − e^(−2π))/(2π)
So, the approximation is g(x) = (1/(2π))(1 − e^(−2π))(1 + cos x + sin x).
69. The first order Fourier approximation of f(x) = e^(−2x) is of the form g(x) = a0/2 + a1 cos x + b1 sin x.
Find the coefficients as follows.
a0 = (1/π)∫₀^(2π) e^(−2x) dx = −(1/(2π))e^(−2x) ]₀^(2π) = (1 − e^(−4π))/(2π)
a1 = (1/π)∫₀^(2π) e^(−2x) cos x dx = (1/π)[−(2/5)e^(−2x) cos x + (1/5)e^(−2x) sin x]₀^(2π) = 2(1 − e^(−4π))/(5π)
b1 = (1/π)∫₀^(2π) e^(−2x) sin x dx = (1/π)[−(2/5)e^(−2x) sin x − (1/5)e^(−2x) cos x]₀^(2π) = (1 − e^(−4π))/(5π)
So, the approximation is
g(x) = (1 − e^(−4π))/(4π) + (2(1 − e^(−4π))/(5π)) cos x + ((1 − e^(−4π))/(5π)) sin x
= ((1 − e^(−4π))/(20π))(5 + 8 cos x + 4 sin x).
71. The third order Fourier approximation of f(x) = 1 + x is of the form
g(x) = a0/2 + a1 cos x + b1 sin x + a2 cos 2x + b2 sin 2x + a3 cos 3x + b3 sin 3x.
Find the coefficients as follows.
a0 = (1/π)∫₀^(2π) (1 + x) dx = (1/π)(x + x^2/2) ]₀^(2π) = 2 + 2π
aj = (1/π)∫₀^(2π) (1 + x) cos jx dx = (1/π)[((1 + x)/j) sin jx + (1/j^2) cos jx]₀^(2π) = 0
bj = (1/π)∫₀^(2π) (1 + x) sin jx dx = (1/π)[−((1 + x)/j) cos jx + (1/j^2) sin jx]₀^(2π) = −2/j
So, the approximation is g(x) = (1 + π) − 2 sin x − sin 2x − (2/3) sin 3x.
73. Because f(x) = 2 sin x cos x = sin 2x, you see that the fourth order Fourier approximation is simply g(x) = sin 2x.

75. Because a0 = 0, aj = 0 (j = 1, 2, 3, …, n), and bj = 2/j (j = 1, 2, 3, …, n), the nth-order Fourier approximation is
g(x) = 2 sin x + sin 2x + (2/3) sin 3x + … + (2/n) sin nx = Σ_(j=1)^(n) (2/j) sin jx.
Review Exercises for Chapter 5

1. (a) ||u|| = √(1^2 + 2^2) = √5
(b) ||v|| = √(4^2 + 1^2) = √17
(c) u ⋅ v = 1(4) + 2(1) = 6
(d) d(u, v) = ||u − v|| = ||(−3, 1)|| = √((−3)^2 + 1^2) = √10

3. (a) ||u|| = √(2^2 + 1^2 + 1^2) = √6
(b) ||v|| = √(3^2 + 2^2 + (−1)^2) = √14
(c) u ⋅ v = 2(3) + 1(2) + 1(−1) = 7
(d) d(u, v) = ||u − v|| = ||(−1, −1, 2)|| = √6

5. (a) ||u|| = √(1^2 + (−2)^2 + 0^2 + 1^2) = √6
(b) ||v|| = √(1^2 + 1^2 + (−1)^2 + 0^2) = √3
(c) u ⋅ v = 1(1) + (−2)(1) + 0(−1) + 1(0) = −1
(d) d(u, v) = ||u − v|| = ||(0, −3, 1, 1)|| = √(0^2 + (−3)^2 + 1^2 + 1^2) = √11

7. (a) ||u|| = √(0^2 + 1^2 + (−1)^2 + 1^2 + 2^2) = √7
(b) ||v|| = √(0^2 + 1^2 + (−2)^2 + 1^2 + 1^2) = √7
(c) u ⋅ v = 0(0) + 1(1) + (−1)(−2) + 1(1) + 2(1) = 6
(d) d(u, v) = ||u − v|| = ||(0, 0, 1, 0, 1)|| = √(0^2 + 0^2 + 1^2 + 0^2 + 1^2) = √2

9. The norm of v is ||v|| = √(5^2 + 3^2 + (−2)^2) = √38.
So, a unit vector in the direction of v is u = (1/||v||)v = (1/√38)(5, 3, −2) = (5/√38, 3/√38, −2/√38).

11. The norm of v is ||v|| = √(1^2 + (−1)^2 + 2^2) = √6.
So, a unit vector in the direction of v is given by u = (1/||v||)v = (1/√6)(1, −1, 2) = (1/√6, −1/√6, 2/√6).
13. The cosine of the angle θ between u and v is given by
cos θ = (u ⋅ v)/(||u|| ||v||) = (2(−3) + 2(3))/(√(2^2 + 2^2) √((−3)^2 + 3^2)) = 0.
So, θ = π/2 radians (90˚).

15. The cosine of the angle θ between u and v is given by
cos θ = (u ⋅ v)/(||u|| ||v||) = (cos(3π/4) cos(2π/3) + sin(3π/4) sin(2π/3))/(√(cos^2(3π/4) + sin^2(3π/4)) ⋅ √(cos^2(2π/3) + sin^2(2π/3))) = cos(3π/4 − 2π/3) = cos(π/12).
So, θ = π/12 radians (15˚).
17. The cosine of the angle θ between u and v is given by
cos θ = (u ⋅ v)/(||u|| ||v||) = (10(−2) + (−5)(1) + 15(−3))/(√(10^2 + (−5)^2 + 15^2) √((−2)^2 + 1^2 + (−3)^2)) = −1.
So, θ = π radians (180˚).

19. The projection of u onto v is given by
proj_v u = ((u ⋅ v)/(v ⋅ v))v = ((2(1) + 4(−5))/(1^2 + (−5)^2))(1, −5) = −(9/13)(1, −5) = (−9/13, 45/13).

21. The projection of u onto v is given by
proj_v u = ((u ⋅ v)/(v ⋅ v))v = ((1(2) + 2(5))/(2^2 + 5^2))(2, 5) = (12/29)(2, 5) = (24/29, 60/29).

23. The projection of u onto v is given by
proj_v u = ((u ⋅ v)/(v ⋅ v))v = ((0(3) + (−1)(2) + 2(4))/(3^2 + 2^2 + 4^2))(3, 2, 4) = (6/29)(3, 2, 4) = (18/29, 12/29, 24/29).
⎛ 3⎞ ⎛ 1⎞ u, v = 2⎜ ⎟ + 2⎜ − ⎟ ( 2) + 3(1) (−1) = − 2 ⎝ 2⎠ ⎝ 2⎠
(b) d (u, v ) = u − v =
2
u − v, u − v =
2
2 3⎞ 3 ⎛ ⎛ 1 ⎞ 11 ⎜ 2 − ⎟ + 2⎜ − − 2 ⎟ + 3(1 − (−1)) = 2⎠ 2 ⎝ ⎝ 2 ⎠
27. Verify the Triangle Inequality as follows.
||u + v|| ≤ ||u|| + ||v||
||(7/2, 3/2, 0)|| ≤ ||u|| + ||v||
√((7/2)^2 + 2(3/2)^2 + 3(0)^2) ≤ √(2^2 + 2(−1/2)^2 + 3(1)^2) + √((3/2)^2 + 2(2)^2 + 3(−1)^2)
√67/2 ≤ √(15/2) + √53/2
4.093 ≤ 6.379
Verify the Cauchy-Schwarz Inequality as follows.
|⟨u, v⟩| ≤ ||u|| ||v||
|2(3/2) + 2(−1/2)(2) + 3(1)(−1)| ≤ (√30/2)(√53/2)
2 ≤ 9.969
29. A vector v = (v1 , v2 , v3 ) that is orthogonal to u = (0, − 4, 3) must satisfy the equation u ⋅ v = (0, − 4, 3) ⋅ (v1 , v2 , v3 ) = 0v1 − 4v2 + 3v3 = 0. This equation has solutions of the form v = ( s, 3t , 4t ), where s and t are any real numbers. 31. A vector v = (v1 , v2 , v3 , v4 ) that is orthogonal to u = (1, − 2, 2, 1) must satisfy the equation u ⋅ v = (1, − 2, 2, 1) ⋅ (v1 , v2 , v3 , v4 ) = v1 − 2v2 + 2v3 + v4 = 0. This equation has solutions of the form ( 2r − 2 s − t , r , s, t ), where r, s, and t are any real numbers.
33. First orthogonalize the vectors in B.
w1 = (1, 1)
w2 = (0, 1) − ((1(0) + 1(1))/(1^2 + 1^2))(1, 1) = (−1/2, 1/2)
Then normalize each vector.
u1 = (1/||w1||)w1 = (1/√2)(1, 1) = (1/√2, 1/√2)
u2 = (1/||w2||)w2 = √2(−1/2, 1/2) = (−1/√2, 1/√2)
So, an orthonormal basis for R^2 is {(1/√2, 1/√2), (−1/√2, 1/√2)}.

35. w1 = (0, 3, 4)
w2 = (1, 0, 0) − ((1(0) + 0(3) + 0(4))/(0^2 + 3^2 + 4^2))(0, 3, 4) = (1, 0, 0)
w3 = (1, 1, 0) − ((1(1) + 1(0) + 0(0))/(1^2 + 0^2 + 0^2))(1, 0, 0) − ((1(0) + 1(3) + 0(4))/(0^2 + 3^2 + 4^2))(0, 3, 4) = (0, 16/25, −12/25)
Then, normalize each vector.
u1 = (1/||w1||)w1 = (1/5)(0, 3, 4) = (0, 3/5, 4/5)
u2 = (1/||w2||)w2 = 1(1, 0, 0) = (1, 0, 0)
u3 = (1/||w3||)w3 = (5/4)(0, 16/25, −12/25) = (0, 4/5, −3/5)
So, an orthonormal basis for R^3 is {(0, 3/5, 4/5), (1, 0, 0), (0, 4/5, −3/5)}.
37. (a) To find x as a linear combination of the vectors in B, solve the vector equation c1(0, 2, −2) + c2(1, 0, −2) = (−1, 4, −2). This produces the system of linear equations
c2 = −1
2c1 = 4
−2c1 − 2c2 = −2
which has the solution c1 = 2 and c2 = −1. So, [x]B = (2, −1), and you can write (−1, 4, −2) = 2(0, 2, −2) − (1, 0, −2).
(b) To apply the Gram-Schmidt orthonormalization process, first orthogonalize each vector in B.
w1 = (0, 2, −2)
w2 = (1, 0, −2) − ((1(0) + 0(2) + (−2)(−2))/(0^2 + 2^2 + (−2)^2))(0, 2, −2) = (1, −1, −1)
Then normalize w1 and w2 as follows.
u1 = (1/||w1||)w1 = (1/(2√2))(0, 2, −2) = (0, 1/√2, −1/√2)
u2 = (1/||w2||)w2 = (1/√3)(1, −1, −1) = (1/√3, −1/√3, −1/√3)
So, B′ = {(0, 1/√2, −1/√2), (1/√3, −1/√3, −1/√3)}.
(c) To find x as a linear combination of the vectors in B′, solve the vector equation
c1(0, 1/√2, −1/√2) + c2(1/√3, −1/√3, −1/√3) = (−1, 4, −2).
This produces the system of linear equations
(1/√3)c2 = −1
(1/√2)c1 − (1/√3)c2 = 4
−(1/√2)c1 − (1/√3)c2 = −2
which has the solution c1 = 3√2 and c2 = −√3. So, [x]B′ = (3√2, −√3), and you can write
(−1, 4, −2) = 3√2(0, 1/√2, −1/√2) − √3(1/√3, −1/√3, −1/√3).
1
x3dx =
1 4⎤ 1 x = 4 ⎥⎦ 0 4
x 4 dx =
1 5⎤ 1 x ⎥ = , 5 ⎦0 5
(−1, 4, − 2) 39. (a)
f, g =
∫ 0 f ( x) g ( x) dx
1
1
∫0
=
)
3 , and you can write
(b) Because
g, g =
1
∫0
g ( x) g ( x) dx =
1
1
∫0
the norm of g is
g =
g, g =
1 = 5
1 . 5
(c) Because
f − g, f − g =
2 ∫ 0 ( x − x ) dx 1
2
1
1 1 ⎤ 1 ⎡1 , = ⎢ x3 − x 4 + x5 ⎥ = 2 5 ⎦0 30 ⎣3
the distance between f and g is
d( f , g) = f − g =
f − g, f − g =
1 = 30
1 . 30
(d) First orthogonalize the vectors.
w1 = f = x 1
w2 = g −
g , w1
w1 , w1
w1 = x − 2
∫ 0 x dx x 1 2 ∫ 0 x dx 3
= x2 −
3 x 4
Then, normalize each vector. Because 1
1 3⎤ 1 = x 3 ⎥⎦ 0 3
w1 , w1 =
1
∫0
x 2 dx =
w2 , w2 =
1
3 ⎞ 3 3⎤ 1 ⎛ 2 ⎡1 5 3 4 x ⎥ = ⎜ x − x ⎟ dx = ⎢ x − x + 4 ⎠ 8 16 ⎦ 0 80 ⎝ ⎣5
∫0
you have 1 u1 = w1 = w1 u2 =
1
2
3x
1 w 2 = 4 5x2 − 3 5x = w2
The orthonormal set is B′ =
{
3 x,
5 ( 4 x 2 − 3x).
}
5 ( 4 x 2 − 3 x) .
1
∫ −1
41. These functions are orthogonal because f , g =
1 − x 2 2 x 1 − x 2 dx =
3 ∫ −1 (2 x − 2 x )dx 1
43. Vectors in W are of the form ( − s − t , s, t ) where s and t are any real numbers. So, a basis for W is
177
1
⎡ x4 ⎤ = ⎢x2 − ⎥ = 0. 2 ⎦ −1 ⎣
{(−1, 0, 1), (−1, 1, 0)}.
Orthogonalize these vectors as follows. w1 = ( −1, 0, 1) w 2 = ( −1, 1, 0) −
−1( −1) + 1(0) + 0(1)
(−1)
2
+ 0 +1 2
2
(−1, 0, 1)
1⎞ ⎛ 1 = ⎜ − , 1, − ⎟ 2⎠ ⎝ 2
Finally, normalize w1 and w 2 to obtain
u1 =
1 w1 = w1
1 1 ⎞ ⎛ 1 (−1, 0, 1) = ⎜ − , 0, ⎟ 2 2 2⎠ ⎝
u2 =
1 w2 = w2
2 ⎛ 1 1⎞ ⎛ 1 2 1 ⎞ , ,− ⎜ − , 1, − ⎟ = ⎜ − ⎟. 2⎠ ⎝ 6⎝ 2 6 6 6⎠
⎧⎛ 1 1 ⎞ ⎛ 1 2 1 ⎞⎫ So, W ′ = ⎨⎜ − , 0, , ,− ⎟⎬. ⎟, ⎜ − 2 2 6 6 6 ⎠⎭ ⎠ ⎝ ⎩⎝
f, g =
45. (a)
1
∫ −1 x x 2
1
1 1 1 1 ⎤ dx = ln ( x 2 + 1)⎥ = ln 2 − ln 2 = 0 +1 2 2 2 ⎦ −1
(b) The vectors are orthogonal. (c) Because f , g = 0, it follows that
f, g ≤ f
g . u, v ≤ u
47. If u ≤ 1 and v ≤ 1, then the Cauchy-Schwarz Inequality implies that
v ≤ 1.
49. Let {v1 , … , v m} be a basis for V. You can extend this basis to one for R n . B = {v1 , … , v m , w m + 1 , … , w n} Now apply the Gram-Schmidt orthonormalization process to this basis, which results in the following basis for R n .
B′ = {u1 , … , u m , z m + 1 , … , z n} The first m vectors of B′ still span V. Therefore, any vector u ∈ R n is of the form
u = c1u1 +
+ cmu m + cm +1z m +1 +
+ cn z n = v + w
where v ∈ V and w is orthogonal to every vector in V.
51. First extend the set {u1 , … , u m} to an orthonormal basis for R n . B = {u1 , … , u m , u m + 1 , … , u n} If v is any vector in R n , you have v =
n
∑( v ⋅ ui )ui which implies that
v
2
= v, v =
i =1
53. If u and v are orthogonal, then u Furthermore, u
2
+ −v
2
= u
2
+ v
2
= u + v
2
2
+ v
2
= u − v , which gives u + v
2
+ v
2
+ 2 u, v = u
2
+ v
2
2
≥
i =1
m
∑( v ⋅ u i ) . 2
i =1
by the Pythagorean Theorem.
2
On the other hand, if u + v = u − v , then u + v, u + v
u
n
∑( v ⋅ ui )
2
2
= u − v
2
⇒ u + v = u − v.
2
= u − v, u − v , which implies that
− 2 u, v , or u, v = 0, and u and v are orthogonal.
55. S⊥ = N(Aᵀ); the orthogonal complement of S is the nullspace of Aᵀ.

[1 2 0; 2 1 −1] ⇒ [1 2 0; 0 −3 −1] ⇒ [1 0 −2/3; 0 1 1/3]

So, S⊥ is spanned by u = (2, −1, 3).

57. A = [0 1 0; 0 −3 0; 1 0 1] ⇒ [1 0 1; 0 1 0; 0 0 0]

Aᵀ = [0 0 1; 1 −3 0; 0 0 1] ⇒ [1 −3 0; 0 0 1; 0 0 0]

R(A)-basis: {(0, 0, 1), (1, −3, 0)}
R(Aᵀ)-basis: {(0, 1, 0), (1, 0, 1)}
N(A)-basis: {(1, 0, −1)}
N(Aᵀ)-basis: {(3, 1, 0)}
59. Use a graphing utility.

Least squares regression line: y = 88.4t + 1592 (r² ≈ 0.9619)
Least squares cubic regression polynomial: y = 1.778t³ − 5.82t² + 77.9t + 1603 (r² ≈ 0.9736)

The cubic model is a better fit. 2010: for t = 10, y ≈ $3578 million.

61. Substitute the data points (−1, 389.1), (0, 399.5), (1, 403.5), (2, 409.7), (3, 425.7), and (4, 446.4) into the equation y = c0 + c1t to obtain the following system.

c0 + c1(−1) = 389.1
c0 + c1(0) = 399.5
c0 + c1(1) = 403.5
c0 + c1(2) = 409.7
c0 + c1(3) = 425.7
c0 + c1(4) = 446.4

This produces the least squares problem Ax = b:

[1 −1; 1 0; 1 1; 1 2; 1 3; 1 4][c0; c1] = [389.1; 399.5; 403.5; 409.7; 425.7; 446.4]

The normal equations are AᵀAx = Aᵀb:

[6 9; 9 31][c0; c1] = [2473.9; 3896.5]

and the solution is x = [c0; c1] ≈ [396.4; 10.61]. So, the least squares regression line is y = 396.4 + 10.61t.
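The normal-equation solve in Exercise 61 is small enough to reproduce directly. A sketch of mine (not the manual's) that builds AᵀA and Aᵀb for a line fit and solves the 2×2 system by Cramer's rule:

```python
def fit_line(ts, ys):
    """Least squares line y = c0 + c1*t via the normal equations (A^T A)x = A^T b."""
    n = len(ts)
    st = sum(ts)                      # entries of A^T A
    stt = sum(t * t for t in ts)
    sy = sum(ys)                      # entries of A^T b
    sty = sum(t * y for t, y in zip(ts, ys))
    det = n * stt - st * st           # Cramer's rule on the 2x2 system
    c0 = (stt * sy - st * sty) / det
    c1 = (n * sty - st * sy) / det
    return c0, c1

ts = [-1, 0, 1, 2, 3, 4]
ys = [389.1, 399.5, 403.5, 409.7, 425.7, 446.4]
c0, c1 = fit_line(ts, ys)
# c0 ≈ 396.4 and c1 ≈ 10.61, matching the regression line above
```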
Review Exercises for Chapter 5

63. Substitute the data points (0, 431.4), (1, 748.8), (2, 1214.1), (3, 2165.1), (4, 3271.3), (5, 4552.4), (6, 5969.7), and (7, 7150.0) into the quadratic polynomial y = c0 + c1t + c2t², obtaining one equation c0 + c1t + c2t² = y for each data point.

This produces the least squares problem Ax = b, where the rows of A are (1, t, t²) for t = 0, 1, …, 7 and b = (431.4, 748.8, 1214.1, 2165.1, 3271.3, 4552.4, 5969.7, 7150.0).

The normal equations are AᵀAx = Aᵀb:

[8 28 140; 28 140 784; 140 784 4676][c0; c1; c2] = [25,502.8; 131,387.7; 756,501.1]

and the solution is x = [c0; c1; c2] ≈ [315.0; 365.26; 91.112]. So, the least squares regression quadratic polynomial is y = 315.0 + 365.26t + 91.112t².

65. Substitute the data points (0, 10,458.0), (1, 9589.8), (2, 9953.5), (3, 9745.4), (4, 10,472.0), (5, 11,598.0), (6, 12,670.0), and (7, 13,680.0) into the quadratic polynomial y = c0 + c1t + c2t² to obtain a similar system.

This produces the least squares problem Ax = b with the same coefficient matrix A and b = (10,458.0, 9589.8, 9953.5, 9745.4, 10,472.0, 11,598.0, 12,670.0, 13,680.0).

The normal equations are AᵀAx = Aᵀb:

[8 28 140; 28 140 784; 140 784 4676][c0; c1; c2] = [88,166.7; 330,391.0; 1,721,054.4]

and the solution is x = [c0; c1; c2] ≈ [10,265.4; −542.62; 151.692]. So, the least squares regression quadratic polynomial is y = 10,265.4 − 542.62t + 151.692t².
67. The cross product is

u × v = |i j k; 1 −1 1; 0 1 1| = −2i − j + k = (−2, −1, 1).

Furthermore, u × v is orthogonal to both u and v because

u · (u × v) = 1(−2) + (−1)(−1) + 1(1) = 0 and
v · (u × v) = 0(−2) + 1(−1) + 1(1) = 0.

69. The cross product is

u × v = |i j k; 2 0 −1; 1 1 −1| = i + j + 2k = (1, 1, 2).

Furthermore, u × v is orthogonal to both u and v because

u · (u × v) = 2(1) + 0(1) + (−1)(2) = 0 and
v · (u × v) = 1(1) + 1(1) + (−1)(2) = 0.

71. Because ‖u × v‖ = ‖u‖ ‖v‖ sin θ, you see that u and v are orthogonal if and only if sin θ = 1, which means ‖u × v‖ = ‖u‖ ‖v‖.

73. Because

v × w = |i j k; −1 −1 0; 3 4 −1| = i − j − k = (1, −1, −1),

the volume is |u · (v × w)| = |(1, 2, 1) · (1, −1, −1)| = |−2| = 2.

75. The standard basis for P1 is {1, x}. In the interval [0, 2], the Gram-Schmidt orthonormalization process yields the orthonormal basis {1/√2, √(3/2)(x − 1)}. Because

⟨f, w1⟩ = ∫₀² x³(1/√2) dx = 4/√2 and
⟨f, w2⟩ = ∫₀² x³ √(3/2)(x − 1) dx = √(3/2) ∫₀² (x⁴ − x³) dx = √(3/2)[x⁵/5 − x⁴/4]₀² = √(3/2)(32/5 − 4) = √(3/2)(12/5),

g is given by

g(x) = ⟨f, w1⟩w1 + ⟨f, w2⟩w2 = (4/√2)(1/√2) + √(3/2)(12/5) · √(3/2)(x − 1) = 2 + (3/2)(12/5)(x − 1) = (18/5)x − 8/5.

(The accompanying graph shows f and g.)

77. The standard basis for P1 is {1, x}. In the interval [0, π], the Gram-Schmidt orthonormalization process yields the orthonormal basis {1/√π, (√3/π^(3/2))(2x − π)}. Because

⟨f, w1⟩ = ∫₀^π sin x cos x (1/√π) dx = 0 and
⟨f, w2⟩ = ∫₀^π sin x cos x (√3/π^(3/2))(2x − π) dx = −√3/(2π^(1/2)),

g is given by

g(x) = ⟨f, w1⟩w1 + ⟨f, w2⟩w2 = 0(1/√π) + (−√3/(2π^(1/2)))(√3/π^(3/2))(2x − π) = −3x/π² + 3/(2π).

(The accompanying graph shows f and g.)
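The cross-product and triple-product computations in Exercises 67–73 follow one formula; a small Python sketch (my own, using the exercise data) makes the orthogonality and volume checks mechanical:

```python
def cross(u, v):
    """Cross product of two 3-vectors."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

u, v = (2, 0, -1), (1, 1, -1)          # Exercise 69
n = cross(u, v)                        # (1, 1, 2)
# n is orthogonal to both u and v: dot(u, n) == 0 and dot(v, n) == 0

# Exercise 73: volume of the parallelepiped = |u . (v x w)|
volume = abs(dot((1, 2, 1), cross((-1, -1, 0), (3, 4, -1))))   # 2
```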
79. The standard basis for P2 is {1, x, x²}. In the interval [1, 2], the Gram-Schmidt orthonormalization process yields the orthonormal basis

{1, 2√3(x − 3/2), 6√5(x² − 3x + 13/6)}.

Because

⟨f, w1⟩ = ∫₁² (1/x) dx = ln 2,
⟨f, w2⟩ = ∫₁² (1/x) 2√3(x − 3/2) dx = 2√3 ∫₁² (1 − 3/(2x)) dx = 2√3(1 − (3/2)ln 2), and
⟨f, w3⟩ = ∫₁² (1/x) 6√5(x² − 3x + 13/6) dx = 6√5 ∫₁² (x − 3 + 13/(6x)) dx = 6√5((13/6)ln 2 − 3/2),

g is given by

g(x) = ⟨f, w1⟩w1 + ⟨f, w2⟩w2 + ⟨f, w3⟩w3
     = ln 2 + 12(1 − (3/2)ln 2)(x − 3/2) + 180((13/6)ln 2 − 3/2)(x² − 3x + 13/6)
     ≈ 0.3274x² − 1.459x + 2.1175.

(The accompanying graph shows f and g.)

81. Find the coefficients as follows.

a0 = (1/π) ∫₋π^π f(x) dx = (1/π) ∫₋π^π x dx = 0

aj = (1/π) ∫₋π^π x cos(jx) dx = (1/π)[(1/j²)cos(jx) + (x/j)sin(jx)]₋π^π = 0, j = 1, 2, …

bj = (1/π) ∫₋π^π x sin(jx) dx = (1/π)[(1/j²)sin(jx) − (x/j)cos(jx)]₋π^π = −(2/j)cos(πj), j = 1, 2, …

So, the approximation is

g(x) = a0/2 + a1 cos x + a2 cos 2x + b1 sin x + b2 sin 2x = 2 sin x − sin 2x.
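The coefficient formulas in Exercise 81 can be checked numerically. A sketch of mine that approximates the Fourier coefficients of f(x) = x with the trapezoidal rule (the exact values are aj = 0 and bj = −(2/j)cos(πj)):

```python
import math

def fourier_coeffs(f, n, samples=20000):
    """Trapezoidal-rule Fourier coefficients of f on [-pi, pi]."""
    h = 2 * math.pi / samples
    xs = [-math.pi + i * h for i in range(samples + 1)]

    def integrate(g):
        vals = [g(x) for x in xs]
        return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

    a0 = integrate(f) / math.pi
    a = [integrate(lambda x, j=j: f(x) * math.cos(j * x)) / math.pi
         for j in range(1, n + 1)]
    b = [integrate(lambda x, j=j: f(x) * math.sin(j * x)) / math.pi
         for j in range(1, n + 1)]
    return a0, a, b

a0, a, b = fourier_coeffs(lambda x: x, 2)
# For f(x) = x: a0, a1, a2 are all 0, while b1 = 2 and b2 = -1
```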
83. (a) True. See note following Theorem 5.17, page 338. (b) True. See Theorem 5.18, part 3, page 339. (c) True. See discussion starting on page 346.
Cumulative Test for Chapters 4 and 5

1. (a) v + w = (1, −2) + (2, −5) = (3, −7)

(The accompanying sketch shows v = (1, −2), w = (2, −5), and v + w = (3, −7).)
(b) 3v = 3(1, −2) = (3, −6)

(c) 2v − 4w = 2(1, −2) − 4(2, −5) = (2, −4) − (8, −20) = (−6, 16)

(Sketches of 3v and of 2v − 4w accompany parts (b) and (c).)

2. [1 −1 0 | 2; 2 0 3 | 4; 0 1 0 | 1] ⇒ [1 0 0 | 3; 0 1 0 | 1; 0 0 1 | −2/3]

3(1, 2, 0) + (−1, 0, 1) − (2/3)(0, 3, 0) = (2, 4, 1)

3. Not closed under addition: [1 0; 0 0] + [0 0; 0 1] = [1 0; 0 1]

4. Let v = (v1, v1 + v2, v2, v2) and u = (u1, u1 + u2, u2, u2) be two vectors in W.

v + u = (v1 + u1, (v1 + v2) + (u1 + u2), v2 + u2, v2 + u2)
      = (v1 + u1, (v1 + u1) + (v2 + u2), v2 + u2, v2 + u2)
      = (x1, x1 + x2, x2, x2), where x1 = v1 + u1 and x2 = v2 + u2.

So, v + u is in W. Similarly,

cv = c(v1, v1 + v2, v2, v2) = (cv1, cv1 + cv2, cv2, cv2) = (x1, x1 + x2, x2, x2), where x1 = cv1 and x2 = cv2.

So, cv is in W, and you can conclude that W is a subspace of R⁴.

5. No: (1, 1, 1) + (1, 1, 1) = (2, 2, 2)

6. Yes, because the matrix whose columns are the given vectors row reduces to I.

7. (a) See definition, page 213.
(b) Linearly dependent

8. B = {[1 0 0; 0 0 0; 0 0 0], [0 1 0; 1 0 0; 0 0 0], [0 0 1; 0 0 0; 1 0 0], [0 0 0; 0 1 0; 0 0 0], [0 0 0; 0 0 1; 0 1 0], [0 0 0; 0 0 0; 0 0 1]}

Dimension: 6
9. (a) A set of vectors {v1, …, vn} in a vector space V is a basis for V if the set is linearly independent and spans V.
(b) Yes, because the set is linearly independent.

10. [1 1 0 0; −2 −2 0 0; 0 0 1 1; 1 1 0 0] ⇒ [1 1 0 0; 0 0 1 1; 0 0 0 0; 0 0 0 0]

x1 = −s, x2 = s, x3 = −t, x4 = t

Basis: {(−1, 1, 0, 0), (0, 0, −1, 1)}

11. [0 1 1 | 1; 1 1 0 | 2; 1 1 1 | −3] ⇒ [1 0 0 | −4; 0 1 0 | 6; 0 0 1 | −5]; [v]B = (−4, 6, −5)

12. [B′ B] ⇒ [I P⁻¹];

[1 1 0 | 2 1 0; 1 1 1 | 1 0 1; 2 1 2 | 0 0 1] ⇒ [1 0 0 | 0 1 −1; 0 1 0 | 2 0 1; 0 0 1 | −1 −1 1]

13. (a) ‖u‖ = √(1² + 0² + 2²) = √5
(b) ‖u − v‖ = ‖(3, −1, −1)‖ = √(3² + (−1)² + (−1)²) = √11
(c) u · v = 1(−2) + 0(1) + 2(3) = 4
(d) cos θ = (u · v)/(‖u‖ ‖v‖) = 4/(√5 √14) = 4/√70
θ = cos⁻¹(4/√70) ≈ 1.0723 radians (61.45°)

14. ∫₀¹ x²(x + 2) dx = [x⁴/4 + 2x³/3]₀¹ = 11/12

15. w1 = (2, 0, 0)
w2 = (1, 1, 1) − (1/2)(2, 0, 0) = (0, 1, 1)
w3 = (0, 1, 2) − 0(2, 0, 0) − (3/2)(0, 1, 1) = (0, −1/2, 1/2)

Normalize each vector.

u1 = w1/‖w1‖ = (1/2)(2, 0, 0) = (1, 0, 0)
u2 = w2/‖w2‖ = (√2/2)(0, 1, 1) = (0, √2/2, √2/2)
u3 = w3/‖w3‖ = √2(0, −1/2, 1/2) = (0, −√2/2, √2/2)

So, an orthonormal basis for R³ is {(1, 0, 0), (0, √2/2, √2/2), (0, −√2/2, √2/2)}.

16. projv u = ((u · v)/(v · v))v = (1/13)(−3, 2)

(The accompanying sketch shows u = (1, 2), v = (−3, 2), and projv u.)

17. A = [0 1 1 0; −1 0 0 1; 1 1 1 1] ⇒ [1 0 0 0; 0 1 1 0; 0 0 0 1]

Aᵀ = [0 −1 1; 1 0 1; 1 0 1; 0 1 1] ⇒ [1 0 0; 0 1 0; 0 0 1; 0 0 0]

R(A) = column space of A = R³
N(A)-basis: {(0, 1, −1, 0)}
R(Aᵀ)-basis: {(0, 1, 1, 0), (−1, 0, 0, 1), (1, 1, 1, 1)}
N(Aᵀ) = {0}

18. S⊥ = N(Aᵀ); [1 0 1; −1 1 0] ⇒ [1 0 1; 0 1 1] ⇒ S⊥ = span{(−1, −1, 1)}
19. 0v = (0 + 0)v = 0v + 0v
−0v + 0v = (−0v + 0v) + 0v
0 = 0v

20. Suppose c1x1 + ⋯ + cnxn + cy = 0.
If c = 0, then c1x1 + ⋯ + cnxn = 0, and the xi independent ⇒ ci = 0.
If c ≠ 0, then y = −(c1/c)x1 − ⋯ − (cn/c)xn, a contradiction.

21. Let v1, v2 ∈ W⊥. Then ⟨v1, w⟩ = 0 and ⟨v2, w⟩ = 0 for all w ∈ W, which gives v1 + v2 ∈ W⊥; likewise ⟨cv, w⟩ = 0 ⇒ cv ∈ W⊥. Because W⊥ is nonempty and closed under addition and scalar multiplication in V, it is a subspace.

22. Substitute the points (1, 1), (2, 0), and (5, −5) into the equation y = c0 + c1x to obtain the following system.

c0 + c1(1) = 1
c0 + c1(2) = 0
c0 + c1(5) = −5

This produces the least squares problem Ax = b:

[1 1; 1 2; 1 5][c0; c1] = [1; 0; −5]

The normal equations are AᵀAx = Aᵀb:

[3 8; 8 30][c0; c1] = [−4; −24]

and the solution is x = [c0; c1] = [36/13; −20/13]. So, the least squares regression line is y = 36/13 − (20/13)x ≈ 2.7 − 1.5x.

(The accompanying graph shows the points (1, 1), (2, 0), (5, −5) and the line.)

23. (a) rank A = 3
(b) The first 3 rows of A
(c) Columns 1, 3, 4 of A
(d) x1 = 2r − 3s − 2t, x2 = r, x3 = 5s + 3t, x4 = −s − 7t, x5 = s, x6 = t;
basis: {(2, 1, 0, 0, 0, 0), (−3, 0, 5, −1, 1, 0), (−2, 0, 3, −7, 0, 1)}
(e) No (f) No (g) Yes (h) No

24. ‖u + v‖ = ‖u − v‖ ⇔ ‖u + v‖² = ‖u − v‖²
⇔ (u + v) · (u + v) = (u − v) · (u − v)
⇔ u · u + 2u · v + v · v = u · u − 2u · v + v · v
⇔ 2u · v = −2u · v
⇔ u · v = 0
CHAPTER 5 Inner Product Spaces

Section 5.1  Length and Dot Product in Rⁿ
Section 5.2  Inner Product Spaces
Section 5.3  Orthonormal Bases: Gram-Schmidt Process
Section 5.4  Mathematical Models and Least Squares Analysis
Section 5.5  Applications of Inner Product Spaces
Review Exercises
Project Solutions
Section 5.1  Length and Dot Product in Rⁿ

2. ‖v‖ = √(0² + 1²) = √1 = 1

4. ‖v‖ = √(2² + 0² + 6²) = √40 = 2√10

6. ‖v‖ = √(2² + (−4)² + 5² + (−1)² + 1²) = √47

8. (a) ‖u‖ = √(1² + (1/2)²) = √(5/4) = √5/2
(b) ‖v‖ = √(2² + (−1/2)²) = √(17/4) = (1/2)√17
(c) u + v = (3, 0), so ‖u + v‖ = √(3² + 0²) = √9 = 3

10. (a) ‖u‖ = √(0² + 2² + (−2)²) = √8 = 2√2
(b) ‖v‖ = √(1² + 2² + 1²) = √6
(c) u + v = (1, 4, −1), so ‖u + v‖ = √(1² + 4² + (−1)²) = √18 = 3√2

12. (a) ‖u‖ = √(1² + 0² + 0² + 0²) = √1 = 1
(b) ‖v‖ = √(0² + 1² + 0² + 0²) = √1 = 1
(c) u + v = (1, 1, 0, 0), so ‖u + v‖ = √(1² + 1² + 0² + 0²) = √2

14. (a) A unit vector v in the direction of u is given by
v = u/‖u‖ = (1/√(1² + (−1)²))(1, −1) = (1/√2)(1, −1) = (1/√2, −1/√2).
(b) A unit vector in the direction opposite that of u is given by
−v = −(1/√2, −1/√2) = (−1/√2, 1/√2).

16. (a) A unit vector v in the direction of u is given by
v = u/‖u‖ = (1/√((−1)² + 3² + 4²))(−1, 3, 4) = (1/√26)(−1, 3, 4) = (−1/√26, 3/√26, 4/√26).
(b) A unit vector in the direction opposite that of u is given by
−v = (1/√26, −3/√26, −4/√26).

18. (a) A unit vector v in the direction of u is given by
v = u/‖u‖ = (1/√((−1)² + 1² + 2² + 0²))(−1, 1, 2, 0) = (1/√6)(−1, 1, 2, 0) = (−1/√6, 1/√6, 2/√6, 0).
(b) A unit vector in the direction opposite that of u is given by
−v = (1/√6, −1/√6, −2/√6, 0).

20. Solve the equation for c as follows.

‖c(2, 2, −1)‖ = 3
|c| √(2² + 2² + (−1)²) = 3
|c| √9 = 3
3|c| = 3 ⇒ c = ±1

22. First find a unit vector in the direction of u.

u/‖u‖ = (1/√((−1)² + 1²))(−1, 1) = (1/√2)(−1, 1) = (−1/√2, 1/√2)

Then v is four times this vector.

v = 4(u/‖u‖) = 4(−1/√2, 1/√2) = (−4/√2, 4/√2) = (−2√2, 2√2)
24. First find a unit vector in the direction of u.

u/‖u‖ = (1/√((−1)² + 2² + 1²))(−1, 2, 1) = (1/√6)(−1, 2, 1) = (−1/√6, 2/√6, 1/√6)

Then v is four times this vector.

v = 4(−1/√6, 2/√6, 1/√6) = (−4/√6, 8/√6, 4/√6)

26. First find a unit vector in the direction of u.

u/‖u‖ = (1/√(1² + (−1)² + 4² + 0²))(1, −1, 4, 0) = (1/(3√2))(1, −1, 4, 0) = (1/(3√2), −1/(3√2), 4/(3√2), 0)

Then v is twice this vector.

v = 2(1/(3√2), −1/(3√2), 4/(3√2), 0) = (2/(3√2), −2/(3√2), 8/(3√2), 0)

28. (a) Because v/‖v‖ is a unit vector in the direction of v, you have
u = (1/2)(−1, 3, 0, 4) = (−1/2, 3/2, 0, 2).
(b) Because −v/‖v‖ is a unit vector with direction opposite that of v, you have
u = −(1/4)(−1, 3, 0, 4) = (1/4, −3/4, 0, −1).
(c) Because −v/‖v‖ is a unit vector with direction opposite that of v,
u = 2‖v‖(−v/‖v‖) = −2v = −2(−1, 3, 0, 4) = (2, −6, 0, −8).

30. d(u, v) = ‖u − v‖ = ‖(−4, 3)‖ = √((−4)² + 3²) = √25 = 5

32. d(u, v) = ‖u − v‖ = ‖(2, −2, −1)‖ = √(2² + (−2)² + (−1)²) = √9 = 3

34. d(u, v) = ‖u − v‖ = ‖(−1, 0, −3, 0)‖ = √((−1)² + 0² + (−3)² + 0²) = √10

36. (a) u · v = −1(2) + 2(−2) = −2 − 4 = −6
(b) u · u = −1(−1) + 2(2) = 1 + 4 = 5
(c) ‖u‖² = u · u = 5
(d) (u · v)v = −6(2, −2) = (−12, 12)
(e) u · (5v) = 5(u · v) = 5(−6) = −30

38. (a) u · v = 2(0) − 1(2) + 1(−1) = 0 − 2 − 1 = −3
(b) u · u = 2(2) − 1(−1) + 1(1) = 4 + 1 + 1 = 6
(c) ‖u‖² = u · u = 6
(d) (u · v)v = −3(0, 2, −1) = (0, −6, 3)
(e) u · (5v) = 5(u · v) = 5(−3) = −15

40. (a) u · v = 0(6) + 4(8) + 3(−3) + 4(3) + 4(−5) = 15
(b) u · u = 0(0) + 4(4) + 3(3) + 4(4) + 4(4) = 57
(c) ‖u‖² = u · u = 57
(d) (u · v)v = 15(6, 8, −3, 3, −5) = (90, 120, −45, 45, −75)
(e) u · (5v) = 5(u · v) = 5 · 15 = 75

42. (3u − v) · (u − 3v) = 3u · (u − 3v) − v · (u − 3v)
= 3u · u − 9u · v − v · u + 3v · v
= 3u · u − 10u · v + 3v · v
= 3(8) − 10(7) + 3(6) = −28
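The expansion in Exercise 42 uses only the algebraic properties of the dot product, so the identity (3u − v) · (u − 3v) = 3u · u − 10u · v + 3v · v can be spot-checked with any concrete vectors. A quick Python sketch of mine (the sample vectors are arbitrary, not exercise data):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def sub(u, v):
    return [a - b for a, b in zip(u, v)]

def scale(c, u):
    return [c * a for a in u]

u, v = [1.0, 2.0, -1.0], [3.0, 0.5, 2.0]   # arbitrary sample vectors
lhs = dot(sub(scale(3, u), v), sub(u, scale(3, v)))
rhs = 3 * dot(u, u) - 10 * dot(u, v) + 3 * dot(v, v)
# lhs == rhs for any choice of u and v, confirming the expansion
```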
44. u = (3, −4) and v = (5, 12)
(a) ‖u‖ = √(9 + 16) = 5 and ‖v‖ = √(25 + 144) = 13
(b) v/‖v‖ = (5/13, 12/13)
(c) −u/‖u‖ = (−3/5, 4/5)
(d) u · v = 3 · 5 − 4 · 12 = 15 − 48 = −33
(e) u · u = ‖u‖² = 25
(f) v · v = ‖v‖² = 25 + 144 = 169

46. u = (3, −4) and v = (4, 3)
(a) ‖u‖ = √(9 + 16) = 5 and ‖v‖ = √(16 + 9) = 5
(b) v/‖v‖ = (4/5, 3/5)
(c) −u/‖u‖ = (−3/5, 4/5)
(d) u · v = 0
(e) u · u = ‖u‖² = 25
(f) v · v = ‖v‖² = 25

48. u = (9, 12) and v = (−7, 24)
(a) ‖u‖ = √(81 + 144) = 15 and ‖v‖ = √(49 + 576) = √625 = 25
(b) v/‖v‖ = (−7/25, 24/25)
(c) −u/‖u‖ = (−3/5, −4/5)
(d) u · v = −63 + 288 = 225
(e) u · u = ‖u‖² = 225
(f) v · v = ‖v‖² = 625

50. u = (3, 0, −4) and v = (−3, −4, 0)
(a) ‖u‖ = √(9 + 16) = 5 and ‖v‖ = √(9 + 16) = 5
(b) v/‖v‖ = (−3/5, −4/5, 0)
(c) −u/‖u‖ = (−3/5, 0, 4/5)
(d) u · v = −9
(e) u · u = ‖u‖² = 25
(f) v · v = ‖v‖² = 25

52. u = (−1, 1/2, 1/4) and v = (0, 1/4, −1/2)
(a) ‖u‖ ≈ 1.1456 and ‖v‖ ≈ 0.5590
(b) v/‖v‖ ≈ (0, 0.4472, −0.8944)
(c) −u/‖u‖ ≈ (0.8729, −0.4364, −0.2182)
(d) u · v = 0
(e) u · u = 1.3125
(f) v · v = 0.3125

54. u = (−1, √3, 2) and v = (√2, −1, −√2)
(a) ‖u‖ ≈ 2.8284 and ‖v‖ ≈ 2.2361
(b) v/‖v‖ ≈ (0.6325, −0.4472, −0.6325)
(c) −u/‖u‖ ≈ (0.3536, −0.6124, −0.7071)
(d) u · v ≈ −5.9747
(e) u · u = 8
(f) v · v = 5

56. u = (1, 2, 3, −2, −1, −3) and v = (−1, 0, 2, 1, 2, −3)
(a) ‖u‖ ≈ 5.2915 and ‖v‖ ≈ 4.3589
(b) v/‖v‖ ≈ (−0.2294, 0, 0.4588, 0.2294, 0.4588, −0.6882)
(c) −u/‖u‖ ≈ (−0.1890, −0.3780, −0.5669, 0.3780, 0.1890, 0.5669)
(d) u · v = 10
(e) u · u = ‖u‖² = 28
(f) v · v = ‖v‖² = 19

58. u = (3, −1, 2, 1, 0, 1, 2, −1) and v = (1, 2, 0, −1, 2, −2, 1, 0)
(a) ‖u‖ ≈ 4.5826 and ‖v‖ ≈ 3.8730
(b) v/‖v‖ ≈ (0.2582, 0.5164, 0, −0.2582, 0.5164, −0.5164, 0.2582, 0)
(c) −u/‖u‖ ≈ (−0.6547, 0.2182, −0.4364, −0.2182, 0, −0.2182, −0.4364, 0.2182)
(d) u · v = 0
(e) u · u = ‖u‖² = 21
(f) v · v = ‖v‖² = 15
60. You have u · v = −1(1) + 0(1) = −1, ‖u‖ = √((−1)² + 0²) = 1, and ‖v‖ = √(1² + 1²) = √2. So,

|u · v| ≤ ‖u‖ ‖v‖
|−1| ≤ 1 · √2
1 ≤ √2.

62. You have u · v = 1(0) − 1(1) + 0(−1) = −1, ‖u‖ = √(1² + (−1)² + 0²) = √2, and ‖v‖ = √(0² + 1² + (−1)²) = √2. So,

|u · v| ≤ ‖u‖ ‖v‖
|−1| ≤ √2 · √2
1 ≤ 2.

64. The cosine of the angle θ between u and v is given by

cos θ = (u · v)/(‖u‖ ‖v‖) = (2(2) − 1(0))/(√(2² + (−1)²) √(2² + 0²)) = 4/(2√5) = 2√5/5.

So, θ = cos⁻¹(2√5/5) ≈ 0.4636 radians (26.57°).

66. The cosine of the angle θ between u and v is given by

cos θ = (u · v)/(‖u‖ ‖v‖) = (cos(π/3)cos(π/4) + sin(π/3)sin(π/4))/(√(cos²(π/3) + sin²(π/3)) √(cos²(π/4) + sin²(π/4))) = cos(π/3 − π/4)/(1 · 1) = cos(π/12).

So, θ = π/12 radians (15°).
68. The cosine of the angle θ between u and v is given by

cos θ = (u · v)/(‖u‖ ‖v‖) = (2(−3) + 3(2) + 1(0))/(‖u‖ ‖v‖) = 0.

So, θ = π/2 radians (90°).

70. The cosine of the angle θ between u and v is given by

cos θ = (u · v)/(‖u‖ ‖v‖) = (1(−1) − 1(2) + 0(−1) + 1(0))/(√(1² + (−1)² + 0² + 1²) √((−1)² + 2² + (−1)² + 0²)) = −3/(√3 √6) = −√2/2.

So, θ = cos⁻¹(−√2/2) = 3π/4 radians (135°).

72. The cosine of the angle θ between u and v is given by

cos θ = (u · v)/(‖u‖ ‖v‖) = (1(1) − 1(0) + 1(−1) + 0(0) + 1(1))/(√(1² + (−1)² + 1² + 0² + 1²) √(1² + 0² + (−1)² + 0² + 1²)) = 1/(2√3) = √3/6.

So, θ = cos⁻¹(√3/6) ≈ 1.2780 radians (73.22°).

74. u · v = 0
(2, 7) · (v1, v2) = 0
2v1 + 7v2 = 0

So, v = (−7t, 2t), where t is any real number.

76. u · v = 0
(0, 0) · (v1, v2) = 0
0v1 + 0v2 = 0

So, v = (v1, v2) can be any vector in R².

78. u · v = 0
(2, −1, 1) · (v1, v2, v3) = 0
2v1 − v2 + v3 = 0

So, v = (t, s, −2t + s), where s and t are any real numbers.

80. u · v = 0
(0, 0, −1, 0) · (v1, v2, v3, v4) = 0
−v3 = 0 ⇒ v3 = 0

So, v = (r, s, 0, t), where r, s, and t are any real numbers.

82. Because u · v = (4, 3) · (1/2, −2/3) = 2 − 2 = 0, the vectors u and v are orthogonal.

84. Because u · v = 1(0) − 1(−1) = 1 ≠ 0, the vectors u and v are not orthogonal. Moreover, because one is not a scalar multiple of the other, they are not parallel.

86. Because u · v = 0(1) + 1(−2) + 6(−1) = −8 ≠ 0, the vectors u and v are not orthogonal. Moreover, because one is not a scalar multiple of the other, they are not parallel.

88. Because

u · v = 4(−2) + (3/2)(−3/4) + (−1)(1/2) + (1/2)(−1/4) = −39/4 ≠ 0,

the vectors are not orthogonal. Moreover, because one vector is a scalar multiple of the other, they are parallel.

90. Using a graphing utility or a computer software program, you have u · v = −3.75 ≠ 0. Because u · v ≠ 0, the vectors are not orthogonal. Because one is not a scalar multiple of the other, they are not parallel.

92. Using a graphing utility or a computer software program, you have u · v = 32/9 ≠ 0. Because u · v ≠ 0, the vectors are not orthogonal. Because one is not a scalar multiple of the other, they are not parallel.

94. Because

u · v = −sin θ sin θ + cos θ(−cos θ) + 1(0) = −(sin²θ + cos²θ) = −1 ≠ 0,

the vectors u and v are not orthogonal. Moreover, because one is not a scalar multiple of the other, they are not parallel.
96. (a) False. The unit vector in the direction of v is given by v/‖v‖.
(b) False. If u · v < 0, then the angle between them lies between π/2 and π, because cos θ < 0 ⇒ π/2 < θ < π.

98. (a) (u · v) · u is meaningless because u · v is a scalar.
(b) c · (u · v) is meaningless because c is a scalar, as well as u · v.

100. Because u + v = (1, 1, 1) + (0, −1, 2) = (1, 0, 3), you have

‖u + v‖ ≤ ‖u‖ + ‖v‖
‖(1, 0, 3)‖ ≤ ‖(1, 1, 1)‖ + ‖(0, −1, 2)‖
√10 ≤ √3 + √5.

102. Because u + v = (1, −1, 0) + (0, 1, 2) = (1, 0, 2), you have

‖u + v‖ ≤ ‖u‖ + ‖v‖
‖(1, 0, 2)‖ ≤ ‖(1, −1, 0)‖ + ‖(0, 1, 2)‖
√5 ≤ √2 + √5.

104. First note that u and v are orthogonal, because u · v = (3, −2) · (4, 6) = 0. Then note

‖u + v‖² = ‖u‖² + ‖v‖²
‖(7, 4)‖² = ‖(3, −2)‖² + ‖(4, 6)‖²
65 = 13 + 52
65 = 65.

106. First note that u and v are orthogonal, because u · v = (4, 1, −5) · (2, −3, 1) = 0. Then note

‖u + v‖² = ‖u‖² + ‖v‖²
‖(6, −2, −4)‖² = ‖(4, 1, −5)‖² + ‖(2, −3, 1)‖²
56 = 42 + 14
56 = 56.

108. v = (v1, v2) = (12, 5), (v2, −v1) = (5, −12)

(12, 5) · (5, −12) = 12(5) + 5(−12) = 60 − 60 = 0

So, (v2, −v1) is orthogonal to v. Any scalar multiple of (5, −12) is also orthogonal to v:

−1(5, −12) = (−5, 12): (12, 5) · (−5, 12) = 12(−5) + 5(12) = −60 + 60 = 0
3(5, −12) = (15, −36): (12, 5) · (15, −36) = 12(15) + 5(−36) = 180 − 180 = 0

(Answer is not unique.)

110. Let t = length of a side of the cube. The diagonal of the cube can be represented by the vector v = (t, t, t), and one side by the vector u = (t, 0, 0). So,

cos θ = (u · v)/(‖u‖ ‖v‖) = t²/(t(t√3)) = 1/√3 ⇒ θ = cos⁻¹(1/√3) ≈ 54.7°.

112. (u + v) · w = w · (u + v)    (Theorem 5.3, part 1)
= w · u + w · v                  (Theorem 5.3, part 2)
= u · w + v · w                  (Theorem 5.3, part 1)
114. (1/4)‖u + v‖² − (1/4)‖u − v‖² = (1/4)[(u + v) · (u + v) − (u − v) · (u − v)]
= (1/4)[u · u + 2u · v + v · v − (u · u − 2u · v + v · v)]
= (1/4)[4u · v]
= u · v
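Exercise 114 is the polarization identity for the dot product; a numeric spot check in Python (my sketch, with arbitrary sample vectors):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm_sq(u):
    return dot(u, u)

u, v = [2.0, -1.0, 3.0], [0.5, 4.0, -2.0]   # arbitrary sample vectors
plus = [a + b for a, b in zip(u, v)]
minus = [a - b for a, b in zip(u, v)]
lhs = 0.25 * norm_sq(plus) - 0.25 * norm_sq(minus)
# lhs equals dot(u, v) for every pair u, v
```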
116. Let u = (cos θ)i − (sin θ)j and v = (sin θ)i + (cos θ)j. Then

‖u‖ = √(cos²θ + sin²θ) = 1,
‖v‖ = √(sin²θ + cos²θ) = 1,

and u · v = cos θ sin θ − sin θ cos θ = 0. So, u and v are orthogonal unit vectors for any value of θ. (For θ = π/3, u = (1/2, −√3/2) and v = (√3/2, 1/2), as shown in the accompanying graph.)

118. u · v = (3240, 1450, 2235) · (2.22, 1.85, 3.25) = 3240(2.22) + 1450(1.85) + 2235(3.25) = 17,139.05

This represents the total dollar value for the three crops.

120. u · v = (1245, 2600) · (225, 275) = 1245(225) + 2600(275) = 995,125

The dot product u · v represents the profit the company would make if it sells all the units of each model of mountain bike.
Section 5.2  Inner Product Spaces

2. (a) ⟨u, v⟩ = u · v = 1(7) + 1(9) = 16
(b) ‖u‖ = √⟨u, u⟩ = √(1² + 1²) = √2
(c) ‖v‖ = √⟨v, v⟩ = √(7² + 9²) = √130
(d) d(u, v) = ‖u − v‖ = ‖(−6, −8)‖ = √((−6)² + (−8)²) = √100 = 10

4. (a) ⟨u, v⟩ = 0(−1) + 2(−6)(1) = −12
(b) ‖u‖ = √⟨u, u⟩ = √(0(0) + 2(−6)(−6)) = 6√2
(c) ‖v‖ = √⟨v, v⟩ = √((−1)(−1) + 2(1)(1)) = √3
(d) d(u, v) = ‖u − v‖ = ‖(1, −7)‖ = √(1(1) + 2(−7)(−7)) = √99 = 3√11

6. (a) ⟨u, v⟩ = u · v = 0(1) + 1(2) + 2(0) = 2
(b) ‖u‖ = √⟨u, u⟩ = √(0² + 1² + 2²) = √5
(c) ‖v‖ = √⟨v, v⟩ = √(1² + 2² + 0²) = √5
(d) d(u, v) = ‖u − v‖ = ‖(−1, −1, 2)‖ = √((−1)² + (−1)² + 2²) = √6
8. (a) ⟨u, v⟩ = 1(2) + 2(1)(5) + 1(2) = 14
(b) ‖u‖ = √⟨u, u⟩ = √(1² + 2(1)² + 1²) = 2
(c) ‖v‖ = √⟨v, v⟩ = √(2² + 2(5)² + 2²) = √58
(d) d(u, v) = ‖(1, 1, 1) − (2, 5, 2)‖ = ‖(−1, −4, −1)‖ = √((−1)² + 2(−4)² + (−1)²) = √34

10. (a) ⟨u, v⟩ = u · v = 1(2) + (−1)(1) + 2(0) + 0(−1) = 1
(b) ‖u‖ = √⟨u, u⟩ = √(1² + (−1)² + 2² + 0²) = √6
(c) ‖v‖ = √⟨v, v⟩ = √(2² + 1² + 0² + (−1)²) = √6
(d) d(u, v) = ‖u − v‖ = ‖(−1, −2, 2, 1)‖ = √((−1)² + (−2)² + 2² + 1²) = √10

12. (a) ⟨f, g⟩ = ∫₋₁¹ f(x)g(x) dx = ∫₋₁¹ (−x)(x² − x + 2) dx = ∫₋₁¹ (−x³ + x² − 2x) dx = [−x⁴/4 + x³/3 − x²]₋₁¹ = 2/3
(b) ‖f‖² = ⟨f, f⟩ = ∫₋₁¹ (−x)(−x) dx = [x³/3]₋₁¹ = 2/3, so ‖f‖ = √(2/3)
(c) ‖g‖² = ⟨g, g⟩ = ∫₋₁¹ (x² − x + 2)² dx = ∫₋₁¹ (x⁴ − 2x³ + 5x² − 4x + 4) dx = [x⁵/5 − x⁴/2 + 5x³/3 − 2x² + 4x]₋₁¹ = 176/15, so ‖g‖ = √(176/15)
(d) Use the fact that d(f, g) = ‖f − g‖. Because f − g = −x − (x² − x + 2) = −x² − 2, you have

⟨f − g, f − g⟩ = ∫₋₁¹ (x² + 2)² dx = ∫₋₁¹ (x⁴ + 4x² + 4) dx = [x⁵/5 + 4x³/3 + 4x]₋₁¹ = 166/15.

So, d(f, g) = √(166/15).

14. (a) ⟨f, g⟩ = ∫₋₁¹ xe⁻ˣ dx = [−e⁻ˣ(x + 1)]₋₁¹ = −2e⁻¹ + 0 = −2/e
(b) ‖f‖² = ∫₋₁¹ x² dx = [x³/3]₋₁¹ = 2/3, so ‖f‖ = √(2/3) = √6/3
(c) ‖g‖² = ∫₋₁¹ e⁻²ˣ dx = [−e⁻²ˣ/2]₋₁¹ = (−e⁻² + e²)/2, so ‖g‖ = √((−e⁻² + e²)/2)
(d) Use the fact that d(f, g) = ‖f − g‖. Because f − g = x − e⁻ˣ, you have

⟨f − g, f − g⟩ = ∫₋₁¹ (x − e⁻ˣ)² dx = ∫₋₁¹ (x² − 2xe⁻ˣ + e⁻²ˣ) dx = [x³/3 + 2e⁻ˣ(x + 1) − e⁻²ˣ/2]₋₁¹ = 2/3 + 4e⁻¹ − e⁻²/2 + e²/2.

So, d(f, g) = √(2/3 + 4e⁻¹ − e⁻²/2 + e²/2).

16. (a) ⟨f, g⟩ = ∫₋₁¹ (−1)(1 − 2x²) dx = ∫₋₁¹ (2x² − 1) dx = [2x³/3 − x]₋₁¹ = (2/3 − 1) − (−2/3 + 1) = −2/3
(b) ‖f‖² = ⟨f, f⟩ = ∫₋₁¹ (−1)² dx = [x]₋₁¹ = 2, so ‖f‖ = √2
(c) ‖g‖² = ⟨g, g⟩ = ∫₋₁¹ (1 − 2x²)² dx = ∫₋₁¹ (1 − 4x² + 4x⁴) dx = [x − 4x³/3 + 4x⁵/5]₋₁¹ = (1 − 4/3 + 4/5) − (−1 + 4/3 − 4/5) = 14/15, so ‖g‖ = √(14/15) = √210/15
(d) Use the fact that d(f, g) = ‖f − g‖. Because f − g = −1 − (1 − 2x²) = 2x² − 2, you have

⟨f − g, f − g⟩ = ∫₋₁¹ (2x² − 2)² dx = ∫₋₁¹ (4x⁴ − 8x² + 4) dx = [4x⁵/5 − 8x³/3 + 4x]₋₁¹ = 64/15.

So, d(f, g) = √(64/15) = 8√15/15.

18. (a) ⟨A, B⟩ = 2(1)(0) + (0)(1) + (0)(1) + 2(1)(0) = 0
(b) ‖A‖² = ⟨A, A⟩ = 2(1)² + 0² + 0² + 2(1)² = 4, so ‖A‖ = 2
(c) ‖B‖² = ⟨B, B⟩ = 2(0)² + 1² + 1² + 2(0)² = 2, so ‖B‖ = √2
(d) Use the fact that d(A, B) = ‖A − B‖. Because A − B = [1 −1; −1 1], you have

⟨A − B, A − B⟩ = 2(1)² + (−1)² + (−1)² + 2(1)² = 6.

So, d(A, B) = √6.
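The inner products in Exercises 12–16 are integrals on [−1, 1], so the closed-form values are easy to confirm by numeric quadrature. A Python sketch of mine using composite Simpson's rule:

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def inner(f, g):
    """Function inner product <f, g> = integral of f*g on [-1, 1]."""
    return simpson(lambda x: f(x) * g(x), -1.0, 1.0)

fg = inner(lambda x: x, lambda x: math.exp(-x))   # Exercise 14(a)
# fg ≈ -2/e ≈ -0.7358
```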
20. (a) ⟨A, B⟩ = 2(1)(1) + (0)(1) + (0)(0) + 2(−1)(−1) = 4
(b) ‖A‖² = ⟨A, A⟩ = 2(1)² + 0² + 0² + 2(−1)² = 4, so ‖A‖ = √4 = 2
(c) ‖B‖² = ⟨B, B⟩ = 2(1)² + 1² + 0² + 2(−1)² = 5, so ‖B‖ = √5
(d) Use the fact that d(A, B) = ‖A − B‖. Because A − B = [0 −1; 0 0], you have

⟨A − B, A − B⟩ = 2(0)² + (−1)² + 0² + 2(0)² = 1.

So, d(A, B) = √1 = 1.

22. (a) ⟨p, q⟩ = 1(1) + 1(0) + (1/2)(2) = 2
(b) ‖p‖² = ⟨p, p⟩ = 1² + 1² + (1/2)² = 9/4, so ‖p‖ = √(9/4) = 3/2
(c) ‖q‖² = ⟨q, q⟩ = 1² + 0² + 2² = 5, so ‖q‖ = √5
(d) Use the fact that d(p, q) = ‖p − q‖. Because p − q = x − (3/2)x², you have

⟨p − q, p − q⟩ = 0² + 1² + (−3/2)² = 13/4.

So, d(p, q) = √(13/4) = √13/2.

24. (a) ⟨p, q⟩ = 1(0) + (−2)(1) + (−1)(−1) = −1
(b) ‖p‖² = ⟨p, p⟩ = 1² + (−2)² + (−1)² = 6, so ‖p‖ = √6
(c) ‖q‖² = ⟨q, q⟩ = 0² + 1² + (−1)² = 2, so ‖q‖ = √2
(d) Use the fact that d(p, q) = ‖p − q‖. Because p − q = 1 − 3x, you have

⟨p − q, p − q⟩ = 1² + (−3)² + 0² = 10.

So, d(p, q) = √10.

26. Verify that the function ⟨u, v⟩ = 2u1v1 + 3u2v2 + u3v3 satisfies the four parts of the definition.

1. ⟨u, v⟩ = 2u1v1 + 3u2v2 + u3v3 = 2v1u1 + 3v2u2 + v3u3 = ⟨v, u⟩
2. ⟨u, v + w⟩ = 2u1(v1 + w1) + 3u2(v2 + w2) + u3(v3 + w3) = 2u1v1 + 3u2v2 + u3v3 + 2u1w1 + 3u2w2 + u3w3 = ⟨u, v⟩ + ⟨u, w⟩
3. c⟨u, v⟩ = c(2u1v1 + 3u2v2 + u3v3) = 2(cu1)v1 + 3(cu2)v2 + (cu3)v3 = ⟨cu, v⟩
4. ⟨v, v⟩ = 2v1² + 3v2² + v3² ≥ 0, and ⟨v, v⟩ = 0 if and only if v = (0, 0, 0).
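The axioms in Exercise 26 can also be spot-checked numerically for sample vectors. A small Python sketch of mine (the vectors and scalar are arbitrary test values):

```python
def ip(u, v, w=(2, 3, 1)):
    """Weighted inner product <u, v> = 2*u1*v1 + 3*u2*v2 + u3*v3."""
    return sum(wi * a * b for wi, a, b in zip(w, u, v))

u, v, x = (1, -2, 3), (4, 0, -1), (2, 2, 2)
c = 5
# Axiom 1 (symmetry):      ip(u, v) == ip(v, u)
# Axiom 2 (additivity):    ip(u, v + x) == ip(u, v) + ip(u, x)
# Axiom 3 (homogeneity):   c * ip(u, v) == ip(c * u, v)
# Axiom 4 (positivity):    ip(u, u) > 0 whenever u != 0
```

A check like this cannot prove the axioms (that requires the algebra above), but it catches candidate "inner products" that fail them, as in Exercises 30–36 below.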
28. Verify that the function p, q = p0 q0 + p1q1 + p2 q2 satisfies the four parts of the definition. Here, p( x) = p0 + p1x + p2 x 2 is a polynomial in P2 .
1.
p, q = p0 q0 + p1q1 + p2 q2 = q0 p0 + q1 p1 + q2 p2 = q, p
2.
p, q + r = p0 ( q0 + r0 ) + p1 ( q1 + r1 ) + p2 ( q2 + r2 ) = p0 q0 + p1q1 + p2 p2 + p0 r0 + p1r1 + p2 r2 = p, q + p , r
3. c p, q = c( p0 q0 + p1q1 + p2 q2 ) = (cp0 )q0 + (cp1 )q1 + (cp2 )q2 = cp, q 4.
p, p = p02 + p12 + p22 ≥ 0, and p, p = 0 if and only if p = 0.
30. The product u, v is not an inner product because Axiom 4 is not satisfied. For example, let v = (1, 1). Then v, v = −(1)(1) = −1, which is less than zero. 32. The product u, v is not an inner product because Axiom 4 is not satisfied. For example, let v = (1, 1). Then v, v = (1)(1) − 2(1)(1) = −1, which is less than zero. 34. The product u, v is not an inner product because Axiom 2, Axiom 3, Axiom 4 are not satisfied. For example, let u = (1, 1), v = (1, 2), w = ( 2, 0), and c = 2.
Axiom 2: Then u, v + w = (1) (3) − (1) ( 2) = 5 and 2
2
2
2
2 2 2 2 2 2 2 2 u, v + u, w = ⎡(1) (1) − (1) ( 2) ⎤ + ⎡(1) ( 2) − (1) (0) ⎤ = 1, which are not equal. ⎣ ⎦ ⎣ ⎦ 2 2 2 2 2 2 2 2 Axiom 3: Then c u, v = 2 ⎡(1) (1) − (1) ( 2) ⎤ = −6 and cu, v = ( 2) (1) − ( 2) ( 2) = −12, which are not equal. ⎣ ⎦
Axiom 4: Then v, v = (1) (1) − ( 2) ( 2) = −15, which is less than zero. 2
2
2
2
36. The product u, v is not an inner product because Axiom 1, and Axiom 2, are not satisfied. For example, let u = (1, 1), v = (1, 2), w = ( 2, 0), and c = 2.
Axiom 1: Then u, v = (1)(1) + (1)( 2) = 3 and v, u = (1)( 2) + (1)(1) = 3, which are not equal. Axiom 2: Then u, v + w = (1)(1) + (3)( 2) = 7 and u, v + u, w = ⎡⎣(1)(1) + (1)( 2)⎤⎦ + ⎡⎣(1)(1) + ( 2)(0)⎤⎦ = 4, which are not equal.
38. Because ⟨u, v⟩ = (2)(1/2) + (−1)(1) = 0, the angle between u and v is π/2.

40. Because ⟨u, v⟩ = 2(1/4)(2) + (−1)(1) = 0, the angle between u and v is π/2.

42. Because
cos θ = ⟨u, v⟩/(‖u‖ ‖v‖) = [(0)(1) + (1)(2) + (−1)(3)] / [√((0)² + (1)² + (−1)²) · √((1)² + (2)² + (3)²)]
      = −1/(√2 · √14) = −1/(2√7),
the angle between u and v is cos⁻¹(−1/(2√7)) ≈ 1.761 radians (100.89°).

44. Because
cos θ = ⟨p, q⟩/(‖p‖ ‖q‖) = [(1)(0) + 2(0)(1) + (1)(−1)] / [√((1)² + 2(0)² + (1)²) · √((0)² + 2(1)² + (−1)²)]
      = −1/(√2 · √3) = −1/√6,
the angle between p and q is cos⁻¹(−1/√6) ≈ 1.991 radians (114.09°).
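The angle computation in Exercise 42 can be reproduced numerically; this small sketch (the function name `angle` is my own) uses the standard inner product:

```python
import math

# Angle between u = (0, 1, -1) and v = (1, 2, 3) under the standard
# inner product, as in Exercise 42: cos(theta) = <u, v> / (||u|| ||v||).
def angle(u, v):
    dot = sum(a*b for a, b in zip(u, v))
    norm = lambda x: math.sqrt(sum(a*a for a in x))
    return math.acos(dot / (norm(u) * norm(v)))

theta = angle((0, 1, -1), (1, 2, 3))
print(round(theta, 3))                 # 1.761 radians
print(round(math.degrees(theta), 2))   # 100.89 degrees
```

For a weighted inner product (as in Exercise 44), the `dot` and `norm` computations would simply carry the weights.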
46. First compute
⟨f, g⟩ = ⟨1, x²⟩ = ∫₋₁¹ x² dx = [x³/3]₋₁¹ = 2/3
‖f‖² = ⟨1, 1⟩ = ∫₋₁¹ 1 dx = [x]₋₁¹ = 2  ⇒  ‖f‖ = √2
‖g‖² = ⟨x², x²⟩ = ∫₋₁¹ x⁴ dx = [x⁵/5]₋₁¹ = 2/5  ⇒  ‖g‖ = √(2/5).
So,
cos θ = ⟨f, g⟩/(‖f‖ ‖g‖) = (2/3)/(√2 · √(2/5)) = √5/3
and the angle between f and g is cos⁻¹(√5/3) ≈ 0.73 radians (41.81°).

48. (a) To verify the Cauchy-Schwarz Inequality |⟨u, v⟩| ≤ ‖u‖ ‖v‖, observe
|(−1)(1) + (1)(−1)| ≤ √((−1)² + (1)²) · √((1)² + (−1)²)
|−2| ≤ √2 · √2
2 ≤ 2.
(b) To verify the Triangle Inequality ‖u + v‖ ≤ ‖u‖ + ‖v‖, observe
√((0)² + (0)²) ≤ √((−1)² + (1)²) + √((1)² + (−1)²)
0 ≤ √2 + √2 = 2√2.
50. (a) To verify the Cauchy-Schwarz Inequality, observe
|⟨u, v⟩| = |(1)(1) + (0)(2) + (2)(0)| ≤ √((1)² + (0)² + (2)²) · √((1)² + (2)² + (0)²)
1 ≤ √5 · √5
1 ≤ 5.
(b) To verify the Triangle Inequality, observe
‖u + v‖ = √((2)² + (2)² + (2)²) ≤ √((1)² + (0)² + (2)²) + √((1)² + (2)² + (0)²)
√12 ≤ √5 + √5
2√3 ≤ 2√5.
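A numerical spot-check of Exercise 50 (not part of the text's solution) confirms both inequalities under the Euclidean inner product:

```python
import math

# Cauchy-Schwarz and triangle inequalities for u = (1, 0, 2) and
# v = (1, 2, 0), as in Exercise 50.
u, v = (1, 0, 2), (1, 2, 0)
dot = sum(a*b for a, b in zip(u, v))
norm = lambda x: math.sqrt(sum(a*a for a in x))
s = [a + b for a, b in zip(u, v)]   # u + v = (2, 2, 2)

assert abs(dot) <= norm(u) * norm(v)    # |<u, v>| = 1 <= 5
assert norm(s) <= norm(u) + norm(v)     # 2*sqrt(3) <= 2*sqrt(5)
```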
52. (a) To verify the Cauchy-Schwarz Inequality, observe
|⟨p, q⟩| = |(0)(1) + 2(1)(0) + (0)(−1)| ≤ √((0)² + 2(1)² + (0)²) · √((1)² + 2(0)² + (−1)²)
0 ≤ √2 · √2
0 ≤ 2.
(b) To verify the Triangle Inequality, observe
‖p + q‖ = √((1)² + 2(1)² + (−1)²) ≤ √((0)² + 2(1)² + (0)²) + √((1)² + 2(0)² + (−1)²)
√4 ≤ √2 + √2
2 ≤ 2√2.
54. (a) To verify the Cauchy-Schwarz Inequality, observe
|⟨A, B⟩| = |(0)(1) + (1)(1) + (2)(2) + (−1)(−2)| ≤ √((0)² + (1)² + (2)² + (−1)²) · √((1)² + (1)² + (2)² + (−2)²)
7 ≤ √6 · √10 = √60
7 ≤ 7.746.
(b) To verify the Triangle Inequality, observe
‖A + B‖ = √((1)² + (2)² + (4)² + (−3)²) ≤ √((0)² + (1)² + (2)² + (−1)²) + √((1)² + (1)² + (2)² + (−2)²)
√30 ≤ √6 + √10
5.477 ≤ 5.612.
56. (a) To verify the Cauchy-Schwarz Inequality, observe
⟨f, g⟩ = ⟨1, cos πx⟩ = ∫₀² cos πx dx = [(sin πx)/π]₀² = 0
‖f‖² = ⟨1, 1⟩ = ∫₀² 1 dx = [x]₀² = 2  ⇒  ‖f‖ = √2
‖g‖² = ⟨cos πx, cos πx⟩ = ∫₀² cos² πx dx = ∫₀² (1 + cos 2πx)/2 dx = [x/2 + (sin 2πx)/(4π)]₀² = 1  ⇒  ‖g‖ = 1
and observe that |⟨f, g⟩| ≤ ‖f‖ ‖g‖ becomes 0 ≤ √2(1).
(b) To verify the Triangle Inequality, observe
‖f + g‖² = ‖1 + cos πx‖² = ∫₀² (1 + cos πx)² dx = ∫₀² (1 + 2 cos πx + cos² πx) dx = 2 + 0 + 1 = 3
⇒ ‖f + g‖ = √3.
So,
‖f + g‖ ≤ ‖f‖ + ‖g‖
√3 ≤ √2 + 1.
58. (a) To verify the Cauchy-Schwarz Inequality, compute
⟨f, g⟩ = ⟨x, e⁻ˣ⟩ = ∫₀¹ xe⁻ˣ dx = [−e⁻ˣ(x + 1)]₀¹ = 1 − 2e⁻¹
‖f‖² = ⟨x, x⟩ = ∫₀¹ x² dx = [x³/3]₀¹ = 1/3  ⇒  ‖f‖ = √3/3
‖g‖² = ⟨e⁻ˣ, e⁻ˣ⟩ = ∫₀¹ e⁻²ˣ dx = [−e⁻²ˣ/2]₀¹ = −e⁻²/2 + 1/2  ⇒  ‖g‖ = √(1/2 − e⁻²/2)
and observe that |⟨f, g⟩| ≤ ‖f‖ ‖g‖:
1 − 2e⁻¹ ≤ (√3/3)√(1/2 − e⁻²/2)
0.264 ≤ 0.380.
(b) To verify the Triangle Inequality, compute
‖f + g‖² = ⟨x + e⁻ˣ, x + e⁻ˣ⟩ = ∫₀¹ (x + e⁻ˣ)² dx = [−2e⁻ˣ(x + 1) − e⁻²ˣ/2 + x³/3]₀¹
         = [−4e⁻¹ − e⁻²/2 + 1/3] − [−2 − 1/2] = −4e⁻¹ − e⁻²/2 + 17/6
⇒ ‖f + g‖ = √(−4e⁻¹ − e⁻²/2 + 17/6)
and observe that ‖f + g‖ ≤ ‖f‖ + ‖g‖:
√(−4e⁻¹ − e⁻²/2 + 17/6) ≤ √3/3 + √(1/2 − e⁻²/2)
1.138 ≤ 1.235.
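The integrals in Exercise 58 can be checked numerically; this sketch (a simple midpoint rule, with helper names of my own) approximates the inner product ⟨f, g⟩ = ∫₀¹ f(x)g(x) dx:

```python
import math

# Numerical spot-check of Exercise 58 using the inner product
# <f, g> = integral of f(x)*g(x) over [0, 1], via a midpoint rule.
def inner(f, g, n=100000):
    h = 1.0 / n
    return sum(f((i + 0.5) * h) * g((i + 0.5) * h) for i in range(n)) * h

f = lambda x: x
g = lambda x: math.exp(-x)

fg = inner(f, g)
norm_f = math.sqrt(inner(f, f))
norm_g = math.sqrt(inner(g, g))

assert abs(fg - (1 - 2/math.e)) < 1e-6   # <f, g> = 1 - 2e^{-1} ~ 0.264
assert abs(fg) <= norm_f * norm_g        # Cauchy-Schwarz: 0.264 <= 0.380
```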
60. The functions f(x) = x and g(x) = (1/2)(3x² − 1) are orthogonal because
⟨f, g⟩ = ∫₋₁¹ x · (1/2)(3x² − 1) dx = (1/2) ∫₋₁¹ (3x³ − x) dx = (1/2)[3x⁴/4 − x²/2]₋₁¹ = 0.

62. The functions f(x) = 1 and g(x) = cos(2nx) are orthogonal because
⟨f, g⟩ = ∫₀^π cos(2nx) dx = [(sin 2nx)/(2n)]₀^π = 0.
64. (a) proj_v u = (⟨u, v⟩/⟨v, v⟩) v = [(−1)(4) + (−2)(2)]/[(4)² + (2)²] (4, 2) = (−8/20)(4, 2) = (−8/5, −4/5)
(b) proj_u v = (⟨v, u⟩/⟨u, u⟩) u = [(4)(−1) + (2)(−2)]/[(−1)² + (−2)²] (−1, −2) = (−8/5)(−1, −2) = (8/5, 16/5)
(c) [Figure: u = (−1, −2) and v = (4, 2) sketched with proj_v u and proj_u v.]
66. (a) proj_v u = (⟨u, v⟩/⟨v, v⟩) v = [(2)(3) + (−2)(1)]/[(3)² + (1)²] (3, 1) = (4/10)(3, 1) = (6/5, 2/5)
(b) proj_u v = (⟨v, u⟩/⟨u, u⟩) u = [(3)(2) + (1)(−2)]/[(2)² + (−2)²] (2, −2) = (4/8)(2, −2) = (1, −1)
(c) [Figure: u = (2, −2) and v = (3, 1) sketched with proj_v u and proj_u v.]
68. (a) proj_v u = (⟨u, v⟩/⟨v, v⟩) v = [(1)(−1) + (2)(2) + (−1)(−1)]/[(−1)² + (2)² + (−1)²] (−1, 2, −1)
       = (4/6)(−1, 2, −1) = (−2/3, 4/3, −2/3)
(b) proj_u v = (⟨v, u⟩/⟨u, u⟩) u = [(−1)(1) + (2)(2) + (−1)(−1)]/[(1)² + (2)² + (−1)²] (1, 2, −1)
       = (4/6)(1, 2, −1) = (2/3, 4/3, −2/3)

70. (a) proj_v u = (⟨u, v⟩/⟨v, v⟩) v = [(−1)(2) + (4)(−1) + (−2)(2) + (3)(−1)]/[(2)² + (−1)² + (2)² + (−1)²] (2, −1, 2, −1)
       = (−13/10)(2, −1, 2, −1) = (−13/5, 13/10, −13/5, 13/10)
(b) proj_u v = (⟨v, u⟩/⟨u, u⟩) u = [(2)(−1) + (−1)(4) + (2)(−2) + (−1)(3)]/[(−1)² + (4)² + (−2)² + (3)²] (−1, 4, −2, 3)
       = (−13/30)(−1, 4, −2, 3) = (13/30, −26/15, 13/15, −13/10)
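The projection formula used in Exercises 64–70 is easy to implement; this sketch (with names of my own choosing) uses exact rational arithmetic on the data of Exercise 70(b):

```python
from fractions import Fraction

# Orthogonal projection proj_v(u) = (<u, v>/<v, v>) v under the
# standard inner product.
def proj(u, v):
    dot = lambda a, b: sum(Fraction(x) * y for x, y in zip(a, b))
    c = dot(u, v) / dot(v, v)
    return [c * x for x in v]

u = (-1, 4, -2, 3)
v = (2, -1, 2, -1)
print(proj(u, v))
# [Fraction(-13, 5), Fraction(13, 10), Fraction(-13, 5), Fraction(13, 10)]
```

Using `Fraction` keeps the answer in the same exact form as the manual's (−13/5, 13/10, −13/5, 13/10).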
72. The inner products ⟨f, g⟩ and ⟨g, g⟩ are as follows.
⟨f, g⟩ = ∫₋₁¹ (x³ − x)(2x − 1) dx = ∫₋₁¹ (2x⁴ − x³ − 2x² + x) dx = [2x⁵/5 − x⁴/4 − 2x³/3 + x²/2]₋₁¹ = −8/15
⟨g, g⟩ = ∫₋₁¹ (2x − 1)² dx = ∫₋₁¹ (4x² − 4x + 1) dx = [4x³/3 − 2x² + x]₋₁¹ = 14/3
So, the projection of f onto g is
proj_g f = (⟨f, g⟩/⟨g, g⟩) g = [(−8/15)/(14/3)](2x − 1) = −(4/35)(2x − 1).
74. The inner products ⟨f, g⟩ and ⟨g, g⟩ are as follows.
⟨f, g⟩ = ∫₀¹ xe⁻ˣ dx = [−e⁻ˣ(x + 1)]₀¹ = −2e⁻¹ + 1
⟨g, g⟩ = ∫₀¹ e⁻²ˣ dx = [−e⁻²ˣ/2]₀¹ = (−e⁻² + 1)/2 = (1 − e⁻²)/2
So, the projection of f onto g is
proj_g f = (⟨f, g⟩/⟨g, g⟩) g = [(−2e⁻¹ + 1)/((1 − e⁻²)/2)] e⁻ˣ = [(−4e⁻¹ + 2)/(1 − e⁻²)] e⁻ˣ.
76. The inner product ⟨f, g⟩ is
⟨f, g⟩ = ∫₋π^π sin 2x sin 3x dx = ∫₋π^π (1/2)(cos x − cos 5x) dx = (1/2)[sin x − (sin 5x)/5]₋π^π = 0,
which implies that proj_g f = 0.

78. The inner product ⟨f, g⟩ is
⟨f, g⟩ = ∫₋π^π x cos 2x dx = [(cos 2x)/4 + (x sin 2x)/2]₋π^π = 1/4 − 1/4 = 0,
which implies that proj_g f = 0.

80. (a) False. The norm of a vector u is defined as the square root of ⟨u, u⟩.
(b) False. The angle between av and v is zero if a > 0 and is π if a < 0.

82. ‖u + v‖² + ‖u − v‖² = ⟨u + v, u + v⟩ + ⟨u − v, u − v⟩
   = (⟨u, u⟩ + 2⟨u, v⟩ + ⟨v, v⟩) + (⟨u, u⟩ − 2⟨u, v⟩ + ⟨v, v⟩)
   = 2‖u‖² + 2‖v‖²

84. To prove that u − proj_v u is orthogonal to v, you calculate their inner product as follows:
⟨u − proj_v u, v⟩ = ⟨u, v⟩ − ⟨proj_v u, v⟩ = ⟨u, v⟩ − (⟨u, v⟩/⟨v, v⟩)⟨v, v⟩ = ⟨u, v⟩ − ⟨u, v⟩ = 0.
86. From the definition of inner product, you have ⟨u, cv⟩ = ⟨cv, u⟩ = c⟨v, u⟩ = c⟨u, v⟩.

88. Let W = {(c, 2c, 3c) : c ∈ R}. Then
W⊥ = {v ∈ R³ : v · (c, 2c, 3c) = 0} = {(x, y, z) ∈ R³ : (x, y, z) · (1, 2, 3) = 0}.
You need to solve x + 2y + 3z = 0. Choosing y and z as free variables, you obtain the solution x = −2t − 3s, y = t, z = s for any real numbers t and s. Therefore,
W⊥ = {t(−2, 1, 0) + s(−3, 0, 1) : t, s ∈ R} = span{(−2, 1, 0), (−3, 0, 1)}.
90. From Example 10, you have proj_v u = (2, 4, 0). So,
d(u, proj_v u) = ‖u − proj_v u‖ = ‖(4, −2, 4)‖ = √36 = 6.
Let x be any real number different from 2 = ⟨u, v⟩/⟨v, v⟩, so that xv ≠ proj_v u. You want to show that d(u, xv) > d(u, proj_v u):
d(u, xv) = √((6 − x)² + 4(1 − x)² + 16) = √(36 + 5x² − 20x + 20) = √(36 + 5(x − 2)²) > √36 = d(u, proj_v u)
if x ≠ 2 = ⟨u, v⟩/⟨v, v⟩.
Section 5.3 Orthonormal Bases: Gram-Schmidt Process

2. The set is orthogonal because
(3, −2) · (−4, −6) = 3(−4) − 2(−6) = 0.
However, the set is not orthonormal because
‖(3, −2)‖ = √(3² + (−2)²) = √13 ≠ 1.

4. The set is not orthogonal because
(11, 4) · (8, −3) = 88 − 12 = 76 ≠ 0.

6. The set is orthogonal because
(1, 2) · (−2/5, 1/5) = −2/5 + 2/5 = 0.
However, the set is not orthonormal because
‖(1, 2)‖ = √(1² + 2²) = √5 ≠ 1.

8. The set is orthogonal because
(2, −4, 2) · (0, 2, 4) = 0 − 8 + 8 = 0
(2, −4, 2) · (−10, −4, 2) = −20 + 16 + 4 = 0
(0, 2, 4) · (−10, −4, 2) = 0 − 8 + 8 = 0.
However, the set is not orthonormal because
‖(2, −4, 2)‖ = √(2² + (−4)² + 2²) = √24 ≠ 1.

10. The set is not orthogonal because
(√2/3, 0, −√2/6) · (0, 2√5/5, −√5/5) = √10/30 ≠ 0.

12. The set is orthogonal because
(−6, 3, 2, 1) · (2, 0, 6, 0) = −12 + 12 = 0.
However, the set is not orthonormal because
‖(−6, 3, 2, 1)‖ = √(36 + 9 + 4 + 1) = √50 ≠ 1.
14. The set is orthogonal because
(√10/10, 0, 0, 3√10/10) · (0, 0, 1, 0) = 0
(√10/10, 0, 0, 3√10/10) · (0, 1, 0, 0) = 0
(√10/10, 0, 0, 3√10/10) · (−3√10/10, 0, 0, √10/10) = −3/10 + 3/10 = 0
(0, 0, 1, 0) · (0, 1, 0, 0) = 0
(0, 0, 1, 0) · (−3√10/10, 0, 0, √10/10) = 0
(0, 1, 0, 0) · (−3√10/10, 0, 0, √10/10) = 0.
Furthermore, the set is orthonormal because
‖(√10/10, 0, 0, 3√10/10)‖ = √(1/10 + 9/10) = 1
‖(0, 0, 1, 0)‖ = 1
‖(0, 1, 0, 0)‖ = 1
‖(−3√10/10, 0, 0, √10/10)‖ = √(9/10 + 1/10) = 1.
16. The set is orthogonal because (2, −5) · (10, 4) = 20 − 20 = 0. The set is not orthonormal because
‖(2, −5)‖ = √(2² + (−5)²) = √29 ≠ 1.
So, normalize the set to produce an orthonormal set.
u1 = v1/‖v1‖ = (1/√29)(2, −5) = (2√29/29, −5√29/29)
u2 = v2/‖v2‖ = (1/(2√29))(10, 4) = (5√29/29, 2√29/29)
18. The set is orthogonal because
(−2/15, 1/15, 2/15) · (1/15, 2/15, 0) = −2/225 + 2/225 + 0 = 0.
The set is not orthonormal because
‖(−2/15, 1/15, 2/15)‖ = √((−2/15)² + (1/15)² + (2/15)²) = 1/5 ≠ 1.
So, normalize the set to produce an orthonormal set.
u1 = v1/‖v1‖ = 5(−2/15, 1/15, 2/15) = (−2/3, 1/3, 2/3)
u2 = v2/‖v2‖ = (15/√5)(1/15, 2/15, 0) = (√5/5, 2√5/5, 0)

20. The set {(sin θ, cos θ), (cos θ, −sin θ)} is orthogonal because
(sin θ, cos θ) · (cos θ, −sin θ) = sin θ cos θ − cos θ sin θ = 0.
Furthermore, the set is orthonormal because
‖(sin θ, cos θ)‖ = √(sin²θ + cos²θ) = 1
‖(cos θ, −sin θ)‖ = √(cos²θ + (−sin θ)²) = 1.
So, the set forms an orthonormal basis for R².
22. Use Theorem 5.11 to find the coordinates of x = (−3, 4) relative to B.
(−3, 4) · (√5/5, 2√5/5) = −3√5/5 + 8√5/5 = √5
(−3, 4) · (−2√5/5, √5/5) = 6√5/5 + 4√5/5 = 2√5
So, [x]_B = [√5, 2√5]ᵀ.

24. Use Theorem 5.11 to find the coordinates of x = (3, −5, 11) relative to B.
(3, −5, 11) · (1, 0, 0) = 3
(3, −5, 11) · (0, 1, 0) = −5
(3, −5, 11) · (0, 0, 1) = 11
So, [x]_B = [3, −5, 11]ᵀ.

26. Use Theorem 5.11 to find the coordinates of x = (2, −1, 4, 3) relative to B.
(2, −1, 4, 3) · (5/13, 0, 12/13, 0) = 10/13 + 48/13 = 58/13
(2, −1, 4, 3) · (0, 1, 0, 0) = −1
(2, −1, 4, 3) · (−12/13, 0, 5/13, 0) = −24/13 + 20/13 = −4/13
(2, −1, 4, 3) · (0, 0, 0, 1) = 3
So, [x]_B = [58/13, −1, −4/13, 3]ᵀ.
28. First, orthogonalize each vector in B.
w1 = v1 = (1, 2)
w2 = v2 − (⟨v2, w1⟩/⟨w1, w1⟩) w1 = (−1, 0) − [(−1)(1) + 0(2)]/(1² + 2²) (1, 2) = (−1, 0) + (1/5)(1, 2) = (−4/5, 2/5)
Then, normalize the vectors.
u1 = w1/‖w1‖ = (1/√5)(1, 2) = (√5/5, 2√5/5)
u2 = w2/‖w2‖ = (√5/2)(−4/5, 2/5) = (−2√5/5, √5/5)
So, the orthonormal basis is B′ = {(√5/5, 2√5/5), (−2√5/5, √5/5)}.
30. First, orthogonalize each vector in B.
w1 = v1 = (4, −3)
w2 = v2 − (⟨v2, w1⟩/⟨w1, w1⟩) w1 = (3, 2) − [3(4) + 2(−3)]/(4² + (−3)²) (4, −3) = (3, 2) − (6/25)(4, −3) = (51/25, 68/25)
Then, normalize the vectors.
u1 = w1/‖w1‖ = (1/5)(4, −3) = (4/5, −3/5)
u2 = w2/‖w2‖ = 1/√((51/25)² + (68/25)²) (51/25, 68/25) = (5/17)(51/25, 68/25) = (3/5, 4/5)
So, the orthonormal basis is B′ = {(4/5, −3/5), (3/5, 4/5)}.
32. First, orthogonalize each vector in B.
w1 = v1 = (1, 0, 0)
w2 = v2 − (⟨v2, w1⟩/⟨w1, w1⟩) w1 = (1, 1, 1) − (1/1)(1, 0, 0) = (0, 1, 1)
w3 = v3 − (⟨v3, w1⟩/⟨w1, w1⟩) w1 − (⟨v3, w2⟩/⟨w2, w2⟩) w2 = (1, 1, −1) − (1/1)(1, 0, 0) − (0/2)(0, 1, 1) = (0, 1, −1)
Then, normalize the vectors.
u1 = w1/‖w1‖ = (1, 0, 0)
u2 = w2/‖w2‖ = (1/√2)(0, 1, 1) = (0, 1/√2, 1/√2)
u3 = w3/‖w3‖ = (1/√2)(0, 1, −1) = (0, 1/√2, −1/√2)
So, the orthonormal basis is {(1, 0, 0), (0, 1/√2, 1/√2), (0, 1/√2, −1/√2)}.
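The Gram-Schmidt steps above can be sketched in code; this is a minimal illustration (function name of my own choosing), applied to the basis of Exercise 32:

```python
import math

# Classical Gram-Schmidt with normalization, standard inner product.
# Applied to {(1,0,0), (1,1,1), (1,1,-1)} from Exercise 32.
def gram_schmidt(vectors):
    dot = lambda a, b: sum(x*y for x, y in zip(a, b))
    basis = []
    for v in vectors:
        w = list(v)
        for u in basis:               # subtract projection onto each earlier u
            c = dot(v, u)             # u is already unit length
            w = [wi - c*ui for wi, ui in zip(w, u)]
        n = math.sqrt(dot(w, w))
        basis.append([wi / n for wi in w])
    return basis

B = gram_schmidt([(1, 0, 0), (1, 1, 1), (1, 1, -1)])
# B[0] = (1, 0, 0); B[1] is approximately (0, 1/sqrt(2), 1/sqrt(2));
# B[2] is approximately (0, 1/sqrt(2), -1/sqrt(2)).
```

A production implementation would also handle a nearly-dependent input vector (n close to zero), which this sketch omits.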
34. First, orthogonalize each vector in B.
w1 = v1 = (0, 1, 2)
w2 = v2 − (⟨v2, w1⟩/⟨w1, w1⟩) w1 = (2, 0, 0) − 0(0, 1, 2) = (2, 0, 0)
w3 = v3 − (⟨v3, w1⟩/⟨w1, w1⟩) w1 − (⟨v3, w2⟩/⟨w2, w2⟩) w2 = (1, 1, 1) − (3/5)(0, 1, 2) − (2/4)(2, 0, 0) = (0, 2/5, −1/5)
Then, normalize the vectors.
u1 = w1/‖w1‖ = (1/√5)(0, 1, 2) = (0, 1/√5, 2/√5)
u2 = w2/‖w2‖ = (1/2)(2, 0, 0) = (1, 0, 0)
u3 = w3/‖w3‖ = √5(0, 2/5, −1/5) = (0, 2/√5, −1/√5)
So, the orthonormal basis is {(0, 1/√5, 2/√5), (1, 0, 0), (0, 2/√5, −1/√5)}.
36. First, orthogonalize each vector in B.
w1 = v1 = (3, 4, 0, 0)
w2 = v2 − (⟨v2, w1⟩/⟨w1, w1⟩) w1 = (−1, 1, 0, 0) − (1/25)(3, 4, 0, 0) = (−28/25, 21/25, 0, 0)
w3 = v3 − (⟨v3, w1⟩/⟨w1, w1⟩) w1 − (⟨v3, w2⟩/⟨w2, w2⟩) w2
   = (2, 1, 0, −1) − (10/25)(3, 4, 0, 0) − [(−7/5)/(49/25)](−28/25, 21/25, 0, 0)
   = (2, 1, 0, −1) − (6/5, 8/5, 0, 0) + (−4/5, 3/5, 0, 0) = (0, 0, 0, −1)
w4 = v4 − (⟨v4, w1⟩/⟨w1, w1⟩) w1 − (⟨v4, w2⟩/⟨w2, w2⟩) w2 − (⟨v4, w3⟩/⟨w3, w3⟩) w3
   = (0, 1, 1, 0) − (4/25)(3, 4, 0, 0) − [(21/25)/(49/25)](−28/25, 21/25, 0, 0) − 0(0, 0, 0, −1)
   = (0, 1, 1, 0) − (12/25, 16/25, 0, 0) − (−12/25, 9/25, 0, 0) = (0, 0, 1, 0)
Then, normalize the vectors.
u1 = w1/‖w1‖ = (1/5)(3, 4, 0, 0) = (3/5, 4/5, 0, 0)
u2 = w2/‖w2‖ = (5/7)(−28/25, 21/25, 0, 0) = (−4/5, 3/5, 0, 0)
u3 = w3/‖w3‖ = (0, 0, 0, −1)
u4 = w4/‖w4‖ = (0, 0, 1, 0)
So, the orthonormal basis is {(3/5, 4/5, 0, 0), (−4/5, 3/5, 0, 0), (0, 0, 0, −1), (0, 0, 1, 0)}.
38. Because there is just one vector, you simply need to normalize it.
u1 = 1/√(4² + (−7)² + 6²) (4, −7, 6) = (1/√101)(4, −7, 6) = (4/√101, −7/√101, 6/√101)
40. First, orthogonalize each vector in B.
w1 = v1 = (1, 2, 0)
w2 = v2 − (⟨v2, w1⟩/⟨w1, w1⟩) w1 = (2, 0, −2) − (2/5)(1, 2, 0) = (8/5, −4/5, −2)
Then, normalize the vectors.
u1 = w1/‖w1‖ = (1/√5)(1, 2, 0) = (1/√5, 2/√5, 0)
u2 = w2/‖w2‖ = (√5/6)(8/5, −4/5, −2) = (4/(3√5), −2/(3√5), −√5/3)
So, the orthonormal basis is {(1/√5, 2/√5, 0), (4/(3√5), −2/(3√5), −√5/3)}.
42. First, orthogonalize each vector in B.
w1 = v1 = (7, 24, 0, 0)
w2 = v2 − (⟨v2, w1⟩/⟨w1, w1⟩) w1 = (0, 0, 1, 1) − 0(7, 24, 0, 0) = (0, 0, 1, 1)
w3 = v3 − (⟨v3, w1⟩/⟨w1, w1⟩) w1 − (⟨v3, w2⟩/⟨w2, w2⟩) w2 = (0, 0, 1, −2) − 0(7, 24, 0, 0) − (−1/2)(0, 0, 1, 1) = (0, 0, 3/2, −3/2)
Then, normalize the vectors.
u1 = w1/‖w1‖ = (1/25)(7, 24, 0, 0) = (7/25, 24/25, 0, 0)
u2 = w2/‖w2‖ = (1/√2)(0, 0, 1, 1) = (0, 0, 1/√2, 1/√2)
u3 = w3/‖w3‖ = (√2/3)(0, 0, 3/2, −3/2) = (0, 0, 1/√2, −1/√2)
So, the orthonormal basis is
{(7/25, 24/25, 0, 0), (0, 0, 1/√2, 1/√2), (0, 0, 1/√2, −1/√2)}.
44. ⟨1, 1⟩ = ∫₋₁¹ 1 dx = [x]₋₁¹ = 1 − (−1) = 2

46. ⟨x², x⟩ = ∫₋₁¹ x² · x dx = ∫₋₁¹ x³ dx = [x⁴/4]₋₁¹ = 1/4 − 1/4 = 0

48. (a) True. See definition on page 306.
(b) True. See Theorem 5.10 on page 309.
(c) True. See page 312.
50. The solutions of the homogeneous system are of the form (s + t, 2s + t, s, t), where s and t are any real numbers. So, a basis for the solution space is {(1, 2, 1, 0), (1, 1, 0, 1)}. Orthogonalize this basis as follows.
w1 = v1 = (1, 2, 1, 0)
w2 = v2 − (⟨v2, w1⟩/⟨w1, w1⟩) w1 = (1, 1, 0, 1) − (3/6)(1, 2, 1, 0) = (1/2, 0, −1/2, 1)
Then, normalize these vectors.
u1 = w1/‖w1‖ = (1/√6)(1, 2, 1, 0) = (1/√6, 2/√6, 1/√6, 0)
u2 = w2/‖w2‖ = (2/√6)(1/2, 0, −1/2, 1) = (1/√6, 0, −1/√6, 2/√6)
So, an orthonormal basis for the solution space is {(1/√6, 2/√6, 1/√6, 0), (1/√6, 0, −1/√6, 2/√6)}.
52. The solutions of the homogeneous system are of the form (−s − t, 0, s, t), where s and t are any real numbers. So, a basis for the solution space is {(−1, 0, 1, 0), (−1, 0, 0, 1)}. Orthogonalize this basis as follows.
w1 = v1 = (−1, 0, 1, 0)
w2 = v2 − (⟨v2, w1⟩/⟨w1, w1⟩) w1 = (−1, 0, 0, 1) − (1/2)(−1, 0, 1, 0) = (−1/2, 0, −1/2, 1)
Then, normalize these vectors.
u1 = w1/‖w1‖ = (1/√2)(−1, 0, 1, 0) = (−√2/2, 0, √2/2, 0)
u2 = w2/‖w2‖ = (2/√6)(−1/2, 0, −1/2, 1) = (−√6/6, 0, −√6/6, √6/3)
So, an orthonormal basis for the solution space is {(−√2/2, 0, √2/2, 0), (−√6/6, 0, −√6/6, √6/3)}.
54. The solutions of the homogeneous system are of the form (2s − t, s, t), where s and t are any real numbers. So, a basis for the solution space is {(2, 1, 0), (−1, 0, 1)}. Orthogonalize this basis as follows.
w1 = v1 = (2, 1, 0)
w2 = v2 − (⟨v2, w1⟩/⟨w1, w1⟩) w1 = (−1, 0, 1) + (2/5)(2, 1, 0) = (−1/5, 2/5, 1)
Then, normalize these vectors.
u1 = w1/‖w1‖ = (1/√5)(2, 1, 0) = (2√5/5, √5/5, 0)
u2 = w2/‖w2‖ = (5/√30)(−1/5, 2/5, 1) = (−√30/30, √30/15, √30/6)
So, an orthonormal basis for the solution space is {(2√5/5, √5/5, 0), (−√30/30, √30/15, √30/6)}.
56. Let p(x) = √2(x² − 1) and q(x) = √2(x² + x + 2). Because
⟨p, q⟩ = (√2)(√2) + (0)(√2) + (−√2)(2√2) = 2 + 0 − 4 = −2 ≠ 0,
the set is not orthogonal. Orthogonalize the set as follows.
w1 = p = √2(x² − 1)
w2 = q − (⟨q, w1⟩/⟨w1, w1⟩) w1 = √2(x² + x + 2) − (−2/4)√2(x² − 1) = (3√2/2)x² + √2 x + 3√2/2
Then, normalize the vectors.
u1 = w1/‖w1‖ = (1/2)√2(x² − 1) = (√2/2)x² − √2/2
u2 = w2/‖w2‖ = (1/√11)[(3√2/2)x² + √2 x + 3√2/2] = (3/√22)x² + (2/√22)x + 3/√22
So, the orthonormal set is {(√2/2)x² − √2/2, (3/√22)x² + (2/√22)x + 3/√22}.
58. The set {1, x, x²} of polynomials is orthonormal (see Example 2).

60. Let p1(x) = (−4x² + 3x)/5, p2(x) = (3x² + 4x)/5, and p3(x) = 1. Then
⟨p1, p2⟩ = −12/25 + 12/25 = 0, ⟨p1, p3⟩ = 0, and ⟨p2, p3⟩ = 0.
Furthermore,
‖p1‖ = √(9/25 + 16/25) = 1, ‖p2‖ = √(16/25 + 9/25) = 1, and ‖p3‖ = 1.
So, {p1, p2, p3} is an orthonormal set.

62. The set from Exercise 61 is not orthonormal using the Euclidean inner product because
‖(2/3, −1/3)‖ = √(4/9 + 1/9) = √5/3 ≠ 1.

64. Let v = c1v1 + ⋯ + cnvn be an arbitrary linear combination of vectors in S. Then
⟨w, v⟩ = ⟨w, c1v1 + ⋯ + cnvn⟩ = ⟨w, c1v1⟩ + ⋯ + ⟨w, cnvn⟩ = c1⟨w, v1⟩ + ⋯ + cn⟨w, vn⟩ = c1 · 0 + ⋯ + cn · 0 = 0.
Because c1, …, cn are arbitrary real numbers, you conclude that w is orthogonal to any linear combination of vectors in S.
Because c1 , … , cn are arbitrary real numbers, you conclude that w is orthogonal to any linear combination of vectors in S. 66. (a) You see that P
−1
= P
T
⎡−1 0 0⎤ ⎢ ⎥ = ⎢ 0 0 −1⎥. ⎢ 0 1 0⎥ ⎣ ⎦
Furthermore, the rows (and columns) of P form an orthonormal basis for R n .
(b) You see that
           [1/√2   1/√2  0]
P⁻¹ = Pᵀ = [1/√2  −1/√2  0]
           [0      0     1]
Furthermore, the rows (and columns) of P form an orthonormal basis for Rⁿ.

68. Because W⊥ is a nonempty subset of V, you only verify the closure axioms. Let v1, v2 ∈ W⊥. Then v1 · w = v2 · w = 0 for all w ∈ W. So,
(v1 + v2) · w = v1 · w + v2 · w = 0 + 0 = 0,
which implies that v1 + v2 ∈ W⊥. Finally, if c is a scalar, then
(cv1) · w = c(v1 · w) = c0 = 0,
which implies that cv1 ∈ W⊥.
Let v ∈ W ∩ W⊥. Then v · w = 0 for all w in W. In particular, because v ∈ W⊥, v · v = 0, which implies that v = 0.

70.
     [0  1 −1]       [0  1 −1]
A =  [0 −2  2]  ⇒    [0  0  0]
     [0 −1  1]       [0  0  0]

     [ 0  0  0]      [1 −2 −1]
Aᵀ = [ 1 −2 −1]  ⇒   [0  0  0]
     [−1  2  1]      [0  0  0]

N(A) = span{(1, 0, 0), (0, 1, 1)}
N(Aᵀ) = span{(2, 1, 0), (1, 0, 1)}
R(A) = span{(1, −2, −1)}
R(Aᵀ) = span{(0, 1, −1)}
N(A) = R(Aᵀ)⊥ and N(Aᵀ) = R(A)⊥

72.
     [ 0  0  1  2  0]       [1 −2  0  2  0]
A =  [ 1 −2  0  2  0]  ⇒    [0  0  1  2  0]
     [−1  2  1  0  0]       [0  0  0  0  1]
     [ 0  0  1  2  1]       [0  0  0  0  0]

     [0  1 −1  0]           [1  0  1  0]
     [0 −2  2  0]           [0  1 −1  0]
Aᵀ = [1  0  1  1]      ⇒    [0  0  0  1]
     [2  2  0  2]           [0  0  0  0]
     [0  0  0  1]           [0  0  0  0]

N(A) = span{(2, 1, 0, 0, 0), (−2, 0, −2, 1, 0)}
N(Aᵀ) = span{(1, −1, −1, 0)}
R(A) = span{(0, 1, −1, 0), (1, 0, 1, 1), (0, 0, 0, 1)}
R(Aᵀ) = span{(0, 0, 1, 2, 0), (1, −2, 0, 2, 0), (0, 0, 1, 2, 1)}
N(A) = R(Aᵀ)⊥ and N(Aᵀ) = R(A)⊥
Section 5.4 Mathematical Models and Least Squares Analysis

2. Orthogonal: (−3, 0, 1) · (2, 1, 6) = 0 and (−3, 0, 1) · (0, 1, 0) = 0

4. Not orthogonal: (0, 0, 1, −2) · (0, 1, −2, 2) = −6 ≠ 0

6. Because S = {[x, y, 0, 0, z]ᵀ}, S⊥ = span{(0, 0, 1, 0, 0)ᵀ, (0, 0, 0, 1, 0)ᵀ}.

8. Aᵀ = [0 1 −1 1] ⇒ S⊥ = span{(1, 0, 0, 0)ᵀ, (0, 1, 1, 0)ᵀ, (0, 1, 0, −1)ᵀ}

10. Taking the orthogonal complement twice returns the original subspace:
(S⊥)⊥ = S = span{(0, 1, −1, 0)ᵀ}.
12. Using the Gram-Schmidt process, an orthogonal basis for S is
{(−1/√5, 2/√5, 0, 0, 0)ᵀ, (0, 0, 1, 0, 0)ᵀ, (0, 0, 0, 1, 0)ᵀ}.

14. Using the Gram-Schmidt process, an orthonormal basis for S is
{(1/2, 1/2, 1/2, 1/2)ᵀ, (0, 1/√2, −1/√2, 0)ᵀ, (−1/2, 1/2, 1/2, −1/2)ᵀ}
and
projS v = (u1 · v)u1 + (u2 · v)u2 + (u3 · v)u3 = 5u1 − (1/√2)u2 + 0u3 = (5/2, 2, 3, 5/2)ᵀ.

16. Using the Gram-Schmidt process, an orthonormal basis for the column space is
{(0, 1/√2, 1/√2)ᵀ, (2/√6, −1/√6, 1/√6)ᵀ}
and
projS b = (u1 · b)u1 + (u2 · b)u2 = (7/√2)u1 − (1/√6)u2 = (−1/3, 11/3, 10/3)ᵀ.
18.
     [0 −1  1]       [1  0  2]
A =  [1  2  0]  ⇒    [0  1 −1]
     [1  1  1]       [0  0  0]

     [ 0  1  1]      [1  0  1]
Aᵀ = [−1  2  1]  ⇒   [0  1  1]
     [ 1  0  1]      [0  0  0]

N(A) = span{(−2, 1, 1)}
N(Aᵀ) = span{(−1, −1, 1)}
R(A) = span{(0, 1, 1), (−1, 2, 1)}
R(Aᵀ) = span{(0, −1, 1), (1, 2, 0)}
N(A) = R(Aᵀ)⊥ and N(Aᵀ) = R(A)⊥

20.
     [1  0 −1]       [1  0  0]
A =  [0 −1  1]  ⇒    [0  1  0]
     [1  1  0]       [0  0  1]
     [1  0  1]       [0  0  0]

     [ 1  0  1  1]      [1  0  0  0]
Aᵀ = [ 0 −1  1  0]  ⇒   [0  1  0  1]
     [−1  1  0  1]      [0  0  1  1]

N(A) = {(0, 0, 0)}
N(Aᵀ) = span{(0, −1, −1, 1)}
R(A) = span{(1, 0, 1, 1), (0, −1, 1, 0), (−1, 1, 0, 1)}
R(Aᵀ) = R³
N(A) = R(Aᵀ)⊥ and N(Aᵀ) = R(A)⊥

22.
                       [0  1]
AᵀA = [0 1 1; 1 0 2] · [1  0] = [2 2; 2 5]
                       [1  2]
Aᵀb = [0 1 1; 1 0 2] · (−1, −1, 3) = (2, 5)
[2 2 | 2]     [1 0 | 0]          [0]
[2 5 | 5] ⇒   [0 1 | 1]  ⇒  x =  [1]

24.
AᵀA = [3 0 3; 0 3 1; 3 1 4]
Aᵀb = [1 1 0 1; −1 1 1 0; 1 1 1 1] · (2, 1, 0, 2) = (5, −1, 5)
[3 0 3 |  5]      [1 0 0 |  7/6]          [ 7/6]
[0 3 1 | −1] ⇒    [0 1 0 | −1/2]  ⇒  x =  [−1/2]
[3 1 4 |  5]      [0 0 1 |  1/2]          [ 1/2]

26.
AᵀA = [6 4 0; 4 11 0; 0 0 4]
Aᵀb = [0 1 2 1 0; 2 1 1 1 2; 1 −1 0 1 −1] · (1, 0, 1, −1, 0) = (1, 2, 0)
[6  4 0 | 1]      [1 0 0 | 3/50]          [3/50]
[4 11 0 | 2] ⇒    [0 1 0 | 4/25]  ⇒  x =  [4/25]
[0  0 4 | 0]      [0 0 1 | 0   ]          [0   ]
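The normal-equation computation of Exercise 22 can be reproduced exactly; the following sketch (helper name of my own choosing) forms AᵀA and Aᵀb and solves the resulting 2×2 system by Cramer's rule:

```python
from fractions import Fraction

# Least squares via the normal equations (A^T A) x = A^T b, for the
# data of Exercise 22: A = [[0,1],[1,0],[1,2]], b = (-1, -1, 3).
def lstsq_2x2(A, b):
    m = len(A)
    ata = [[sum(A[k][i]*A[k][j] for k in range(m)) for j in range(2)]
           for i in range(2)]
    atb = [sum(A[k][i]*b[k] for k in range(m)) for i in range(2)]
    det = Fraction(ata[0][0]*ata[1][1] - ata[0][1]*ata[1][0])
    x0 = (atb[0]*ata[1][1] - ata[0][1]*atb[1]) / det   # Cramer's rule
    x1 = (ata[0][0]*atb[1] - atb[0]*ata[1][0]) / det
    return x0, x1

A = [[0, 1], [1, 0], [1, 2]]
b = [-1, -1, 3]
print(lstsq_2x2(A, b))  # (Fraction(0, 1), Fraction(1, 1)), i.e. x = (0, 1)
```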
28.
                       [1  1]
AᵀA = [1 1 1; 1 2 4] · [1  2] = [3 7; 7 21]
                       [1  4]
Aᵀb = [1 1 1; 1 2 4] · (1, 3, 5) = (9, 27)
[3  7 |  9]      [1 0 | 0  ]          [0  ]
[7 21 | 27] ⇒    [0 1 | 9/7]  ⇒  x =  [9/7]
Line: y = (9/7)x
[Figure: the line y = (9/7)x with the points (1, 1), (2, 3), (4, 5).]

30.
AᵀA = [1 1 1 1; −3 −2 0 1] · [1 −3; 1 −2; 1 0; 1 1] = [4 −4; −4 14]
Aᵀb = [1 1 1 1; −3 −2 0 1] · (−3, −2, 0, 2) = (−3, 15)
[ 4 −4 | −3]      [1 0 | 0.45]          [0.45]
[−4 14 | 15] ⇒    [0 1 | 1.2 ]  ⇒  x =  [1.2 ]
Line: y = 0.45 + 1.2x
[Figure: the line y = 1.2x + 0.45 with the points (−3, −3), (−2, −2), (0, 0), (1, 2).]

32.
AᵀA = [1 1 1 1 1; −2 −1 0 1 2] · [1 −2; 1 −1; 1 0; 1 1; 1 2] = [5 0; 0 10]
Aᵀb = [1 1 1 1 1; −2 −1 0 1 2] · (0, 2, 3, 5, 6) = (16, 15)
[5  0 | 16]      [1 0 | 3.2]          [3.2]
[0 10 | 15] ⇒    [0 1 | 1.5]  ⇒  x =  [1.5]
Line: y = 3.2 + 1.5x
[Figure: the line y = 3.2 + 1.5x with the points (−2, 0), (−1, 2), (0, 3), (1, 5), (2, 6).]

34.
AᵀA = [1 1 1 1; 0 1 2 3; 0 1 4 9] · [1 0 0; 1 1 1; 1 2 4; 1 3 9] = [4 6 14; 6 14 36; 14 36 98]
Aᵀb = [1 1 1 1; 0 1 2 3; 0 1 4 9] · (2, 3/2, 5/2, 4) = (10, 37/2, 95/2)
[ 4  6 14 | 10  ]      [1 0 0 | 39/20]          [39/20]
[ 6 14 36 | 37/2] ⇒    [0 1 0 | −4/5 ]  ⇒  x =  [−4/5 ]
[14 36 98 | 95/2]      [0 0 1 | 1/2  ]          [1/2  ]
Quadratic polynomial: y = 39/20 − (4/5)x + (1/2)x²
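The regression-line fits above can be reproduced with the closed-form normal-equation solution for y = c0 + c1x; this sketch (function name of my own choosing) uses the data of Exercise 32:

```python
# Least squares line y = c0 + c1*x via the normal equations, for the
# data of Exercise 32: (-2,0), (-1,2), (0,3), (1,5), (2,6).
def fit_line(points):
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x*x for x, _ in points)
    sxy = sum(x*y for x, y in points)
    det = n*sxx - sx*sx             # determinant of A^T A
    c0 = (sy*sxx - sx*sxy) / det
    c1 = (n*sxy - sx*sy) / det
    return c0, c1

print(fit_line([(-2, 0), (-1, 2), (0, 3), (1, 5), (2, 6)]))  # (3.2, 1.5)
```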
36.
AᵀA = [1 1 1 1 1; −2 −1 0 1 2; 4 1 0 1 4] · [1 −2 4; 1 −1 1; 1 0 0; 1 1 1; 1 2 4] = [5 0 10; 0 10 0; 10 0 34]
Aᵀb = [1 1 1 1 1; −2 −1 0 1 2; 4 1 0 1 4] · (6, 5, 7/2, 2, −1) = (31/2, −17, 27)
[ 5  0 10 | 31/2]      [1 0 0 | 257/70]          [257/70]
[ 0 10  0 | −17 ] ⇒    [0 1 0 | −17/10]  ⇒  x =  [−17/10]
[10  0 34 | 27  ]      [0 0 1 | −2/7  ]          [−2/7  ]
Quadratic polynomial: y = 257/70 − (17/10)x − (2/7)x²

38. Substitute the data points (1, 6337), (2, 6487), (3, 6627), and (4, 6635) into the equation y = c0 + c1t to obtain the following system.
c0 + c1(1) = 6337
c0 + c1(2) = 6487
c0 + c1(3) = 6627
c0 + c1(4) = 6635
This produces the least squares problem Ax = b:
[1 1]          [6337]
[1 2] [c0]     [6487]
[1 3] [c1]  =  [6627]
[1 4]          [6635]
The normal equations are AᵀAx = Aᵀb:
[ 4 10] [c0]   [26,086]
[10 30] [c1] = [65,732]
and the solution is x = [c0; c1] = [6263; 103.4].
So, the least squares regression line is y = 6263 + 103.4t.

40. Substitute the data points (0, 9600.6), (1, 6079.2), (2, 4171.3), (3, 3402.4), (4, 3649.7), (5, 3854.1), (6, 4075.0), and (7, 4310.0) into the quadratic polynomial y = c0 + c1t + c2t² to obtain the following system.
c0 + c1(0) + c2(0) = 9600.6
c0 + c1(1) + c2(1) = 6079.2
c0 + c1(2) + c2(4) = 4171.3
c0 + c1(3) + c2(9) = 3402.4
c0 + c1(4) + c2(16) = 3649.7
c0 + c1(5) + c2(25) = 3854.1
c0 + c1(6) + c2(36) = 4075.0
c0 + c1(7) + c2(49) = 4310.0
This produces the least squares problem Ax = b:
[1 0  0]          [9600.6]
[1 1  1]          [6079.2]
[1 2  4] [c0]     [4171.3]
[1 3  9] [c1]  =  [3402.4]
[1 4 16] [c2]     [3649.7]
[1 5 25]          [3854.1]
[1 6 36]          [4075.0]
[1 7 49]          [4310.0]
The normal equations are AᵀAx = Aᵀb:
[  8  28  140] [c0]   [ 39,142.3]
[ 28 140  784] [c1] = [113,118.3]
[140 784 4676] [c2]   [566,023.7]
and their solution is x = [c0; c1; c2] ≈ [8890.8; −2576.55; 286.855].
So, the least squares quadratic polynomial is y = 8890.8 − 2576.55t + 286.855t².

42. Use a graphing utility.
Least squares regression line: y = 27.80t + 170.2 (r² ≈ 0.88)
Least squares cubic regression polynomial: y = 0.4129t³ + 0.883t² + 13.59t + 155.4 (r² ≈ 0.98)
The cubic model is a better fit for the data.

44. (a) False. They are orthogonal subspaces of Rᵐ, not Rⁿ.
(b) True. See the "Definition of Orthogonal Complement" on page 322.
(c) True. See page 321 for the definition of the "Least Squares Problem."
46. Let S be a subspace of R n and S ⊥ its orthogonal complement. S ⊥ contains the zero vector. If v1 , v 2 ∈ S ⊥ , then for all w ∈ S,
( v1
+ v 2 ) ⋅ w = v1 ⋅ w + v 2 ⋅ w = 0 + 0 = 0 ⇒ v1 + v 2 ∈ S ⊥
and for any scalar c,
(cv1 ) ⋅ w
= c( v1 ⋅ w ) = c0 = 0 ⇒
cv1 ∈ S ⊥ .
48. Let x ∈ S1 ∩ S 2 , where R n = S1 ⊕ S 2 . Then x = v1 + v 2 , v1 ∈ S1 and v 2 ∈ S 2 . But, x ∈ S1 ⇒ x = x + 0, x ∈ S1 , 0 ∈ S 2 , and x ∈ S 2
⇒ x = 0 + x, 0 ∈ S1 , x ∈ S2 . So, x = 0 by the uniqueness of
direct sum representation.
Section 5.5 Applications of Inner Product Spaces

2. i × j = |i j k; 1 0 0; 0 1 0| = 0i − 0j + k = k

4. k × j = |i j k; 0 0 1; 0 1 0| = −i − 0j + 0k = −i

6. k × i = |i j k; 0 0 1; 1 0 0| = 0i + j + 0k = j

8. u × v = |i j k; −1 1 2; 0 1 −1| = −3i − j − k = (−3, −1, −1)
Furthermore, u × v = (−3, −1, −1) is orthogonal to both (−1, 1, 2) and (0, 1, −1) because
(−3, −1, −1) · (−1, 1, 2) = 0 and (−3, −1, −1) · (0, 1, −1) = 0.

10. u × v = |i j k; −2 1 1; 4 2 0| = −2i + 4j − 8k = (−2, 4, −8)
Furthermore, u × v = (−2, 4, −8) is orthogonal to both (−2, 1, 1) and (4, 2, 0) because
(−2, 4, −8) · (−2, 1, 1) = 0 and (−2, 4, −8) · (4, 2, 0) = 0.

12. u × v = |i j k; 4 1 0; 3 2 −2| = −2i + 8j + 5k = (−2, 8, 5)
Furthermore, u × v = (−2, 8, 5) is orthogonal to both (4, 1, 0) and (3, 2, −2) because
(−2, 8, 5) · (4, 1, 0) = 0 and (−2, 8, 5) · (3, 2, −2) = 0.

14. u × v = |i j k; 2 −1 1; 3 −1 0| = i + 3j + k = (1, 3, 1)
Furthermore, u × v = (1, 3, 1) is orthogonal to both (2, −1, 1) and (3, −1, 0) because
(1, 3, 1) · (2, −1, 1) = 0 and (1, 3, 1) · (3, −1, 0) = 0.

16. u × v = |i j k; 1 −2 1; −1 3 −2| = i + j + k = (1, 1, 1)
Furthermore, u × v = (1, 1, 1) is orthogonal to both (1, −2, 1) and (−1, 3, −2) because
(1, 1, 1) · (1, −2, 1) = 0 and (1, 1, 1) · (−1, 3, −2) = 0.

18. Using a graphing utility: w = u × v = (7, 1, 3)
Check that w is orthogonal to both u and v:
w · u = (7, 1, 3) · (1, 2, −3) = 7 + 2 − 9 = 0
w · v = (7, 1, 3) · (−1, 1, 2) = −7 + 1 + 6 = 0

20. Using a graphing utility: w = u × v = (6, 0, 0)
Check that w is orthogonal to both u and v:
w · u = (6, 0, 0) · (0, 1, −2) = 0
w · v = (6, 0, 0) · (0, 1, 4) = 0

22. Using a graphing utility: w = u × v = (0, 5, 5)
Check that w is orthogonal to both u and v:
w · u = (0, 5, 5) · (3, −1, 1) = 0 − 5 + 5 = 0
w · v = (0, 5, 5) · (2, 1, −1) = 0 + 5 − 5 = 0

24. Using a graphing utility: w = u × v = (−8, 16, −2)
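The cross-product computations above are mechanical; this small sketch (function names of my own choosing) reproduces Exercise 10 and its orthogonality check:

```python
# Cross product u x v and orthogonality check, as in Exercise 10:
# u = (-2, 1, 1), v = (4, 2, 0).
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

u, v = (-2, 1, 1), (4, 2, 0)
w = cross(u, v)
dot = lambda a, b: sum(x*y for x, y in zip(a, b))

print(w)                 # (-2, 4, -8)
assert dot(w, u) == 0    # w is orthogonal to u
assert dot(w, v) == 0    # w is orthogonal to v
```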
Copyright © Houghton Mifflin Harcourt Publishing Company. All rights reserved.
12. u × v = det[i j k; 4 1 0; 3 2 −2] = −2i + 8j + 5k = (−2, 8, 5)
Furthermore, u × v = (−2, 8, 5) is orthogonal to both (4, 1, 0) and (3, 2, −2) because (−2, 8, 5) · (4, 1, 0) = 0 and (−2, 8, 5) · (3, 2, −2) = 0.

14. u × v = det[i j k; 2 −1 1; 3 −1 0] = i + 3j + k = (1, 3, 1)
Furthermore, u × v = (1, 3, 1) is orthogonal to both (2, −1, 1) and (3, −1, 0) because (1, 3, 1) · (2, −1, 1) = 0 and (1, 3, 1) · (3, −1, 0) = 0.

16. u × v = det[i j k; 1 −2 1; −1 3 −2] = i + j + k = (1, 1, 1)
Furthermore, u × v = (1, 1, 1) is orthogonal to both (1, −2, 1) and (−1, 3, −2) because (1, 1, 1) · (1, −2, 1) = 0 and (1, 1, 1) · (−1, 3, −2) = 0.

18. Using a graphing utility: w = u × v = (7, 1, 3)
Check that w is orthogonal to both u and v:
w · u = (7, 1, 3) · (1, 2, −3) = 7 + 2 − 9 = 0
w · v = (7, 1, 3) · (−1, 1, 2) = −7 + 1 + 6 = 0

20. Using a graphing utility: w = u × v = (6, 0, 0)
Check that w is orthogonal to both u and v:
w · u = (6, 0, 0) · (0, 1, −2) = 0
w · v = (6, 0, 0) · (0, 1, 4) = 0

22. Using a graphing utility: w = u × v = (0, 5, 5)
Check that w is orthogonal to both u and v:
w · u = (0, 5, 5) · (3, −1, 1) = 0 − 5 + 5 = 0
w · v = (0, 5, 5) · (2, 1, −1) = 0 + 5 − 5 = 0

24. Using a graphing utility: w = u × v = (−8, 16, −2)
Check that w is orthogonal to both u and v:
w · u = (−8, 16, −2) · (4, 2, 0) = −32 + 32 + 0 = 0
w · v = (−8, 16, −2) · (1, 0, −4) = −8 + 0 + 8 = 0

26. Because u × v = −j + k = (0, −1, 1), the area of the parallelogram is
||u × v|| = ||(0, −1, 1)|| = √2.

28. Because u × v = 3k = (0, 0, 3), the area of the parallelogram is
||u × v|| = ||(0, 0, 3)|| = 3.

30. (5, 1, 4) − (2, −1, 1) = (3, 2, 3)
(3, 3, 4) − (0, 1, 1) = (3, 2, 3)
(3, 3, 4) − (5, 1, 4) = (−2, 2, 0)
(0, 1, 1) − (2, −1, 1) = (−2, 2, 0)
Take u = (3, 2, 3) and v = (−2, 2, 0). Because
u × v = det[i j k; 3 2 3; −2 2 0] = −6i − 6j + 10k,
the area of the parallelogram is
||u × v|| = √((−6)² + (−6)² + 10²) = √172 = 2√43.

32. Because
v × w = det[i j k; 0 −1 0; 0 0 1] = −i = (−1, 0, 0),
the triple scalar product of u, v, and w is
u · (v × w) = (−1, 0, 0) · (−1, 0, 0) = 1.

34. Because
v × w = det[i j k; 0 3 0; 0 0 1] = 3i = (3, 0, 0),
the triple scalar product is
u · (v × w) = (2, 0, 1) · (3, 0, 0) = 6.
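The parallelogram area of Exercise 30 and the triple scalar product of Exercise 34 can both be reproduced with a short script. This is an illustrative sketch, not part of the text:

```python
import math

def cross(u, v):
    """Cross product in R^3."""
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(a*b for a, b in zip(u, v))

# Exercise 30: area of the parallelogram with u = (3, 2, 3), v = (-2, 2, 0)
w = cross((3, 2, 3), (-2, 2, 0))       # (-6, -6, 10)
area = math.sqrt(dot(w, w))            # sqrt(172) = 2*sqrt(43)
print(round(area, 4))

# Exercise 34: triple scalar product u . (v x w)
triple = dot((2, 0, 1), cross((0, 3, 0), (0, 0, 1)))
print(triple)                          # 6
```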
36. Because
v × w = det[i j k; 0 1 1; 1 0 2] = (2, 1, −1),
the volume is given by
u · (v × w) = (1, 1, 0) · (2, 1, −1) = 1(2) + 1(1) + 0(−1) = 3.

38. (0, 1, 2) − (2, −3, 4) = (−2, 4, −2)
(0, 1, 2) − (−1, 2, 0) = (1, −1, 2)
Because
u × v = det[i j k; −2 4 −2; 1 −1 2] = 6i + 2j − 2k = (6, 2, −2),
the area of the triangle is
A = (1/2)||u × v|| = (1/2)√(6² + 2² + (−2)²) = (1/2)√44 = √11.

40. cu × v = det[i j k; cu1 cu2 cu3; v1 v2 v3] = c det[i j k; u1 u2 u3; v1 v2 v3] = c(u × v)
and, similarly,
c(u × v) = det[i j k; u1 u2 u3; cv1 cv2 cv3] = u × (cv).

42. u × 0 = det[i j k; u1 u2 u3; 0 0 0] = 0 = det[i j k; 0 0 0; u1 u2 u3] = 0 × u

44. u · (v × w) = u · det[i j k; v1 v2 v3; w1 w2 w3]
= (u1, u2, u3) · [(v2w3 − v3w2)i − (v1w3 − v3w1)j + (v1w2 − v2w1)k]
= u1(v2w3 − v3w2) − u2(v1w3 − v3w1) + u3(v1w2 − v2w1)
= (u2v3 − u3v2)w1 − (u1v3 − v1u3)w2 + (u1v2 − v1u2)w3
= det[i j k; u1 u2 u3; v1 v2 v3] · (w1, w2, w3)
= (u × v) · w

46. ||u|| ||v|| sin θ = ||u|| ||v|| √(1 − cos²θ)
= ||u|| ||v|| √(1 − (u · v)²/(||u||² ||v||²))
= √(||u||² ||v||² − (u · v)²)
= √((u1² + u2² + u3²)(v1² + v2² + v3²) − (u1v1 + u2v2 + u3v3)²)
= √((u2v3 − u3v2)² + (u3v1 − u1v3)² + (u1v2 − u2v1)²)
= ||u × v||

48. ||u × v||² = ||u||² ||v||² sin²θ = ||u||² ||v||² (1 − cos²θ)
= ||u||² ||v||² (1 − (u · v)²/(||u||² ||v||²))
= ||u||² ||v||² − (u · v)²
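Lagrange's identity ||u × v||² = ||u||² ||v||² − (u · v)², proved in Exercise 48, can be spot-checked numerically. The test vectors below are arbitrary choices of mine, not taken from the text:

```python
def cross(u, v):
    """Cross product in R^3."""
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(a*b for a, b in zip(u, v))

u, v = (1, -2, 4), (3, 0, -1)
w = cross(u, v)                              # (2, 13, 6)
lhs = dot(w, w)                              # ||u x v||^2
rhs = dot(u, u)*dot(v, v) - dot(u, v)**2     # ||u||^2 ||v||^2 - (u . v)^2
print(lhs, rhs)                              # 209 209
```

Because all arithmetic is over the integers, the two sides agree exactly, not merely to rounding error.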
50. (a) The standard basis for P1 is {1, x}. Applying the Gram-Schmidt orthonormalization process produces the orthonormal basis
B = {w1, w2} = {1, √3(2x − 1)}.
The least squares approximating function is then given by g(x) = ⟨f, w1⟩w1 + ⟨f, w2⟩w2. Find the inner products
⟨f, w1⟩ = ∫₀¹ x²(1) dx = [x³/3]₀¹ = 1/3
⟨f, w2⟩ = ∫₀¹ x²√3(2x − 1) dx = √3[x⁴/2 − x³/3]₀¹ = √3/6
and conclude that g is given by
g(x) = ⟨f, w1⟩w1 + ⟨f, w2⟩w2 = 1/3 + (√3/6)[√3(2x − 1)] = 1/3 + x − 1/2 = x − 1/6.
(b) Graphs of f and g omitted.

52. (a) The standard basis for P1 is {1, x}. Applying the Gram-Schmidt orthonormalization process produces the orthonormal basis
B = {w1, w2} = {1, √3(2x − 1)}.
The least squares approximating function is then given by g(x) = ⟨f, w1⟩w1 + ⟨f, w2⟩w2. Find the inner products
⟨f, w1⟩ = ∫₀¹ e^(2x) dx = [e^(2x)/2]₀¹ = (e² − 1)/2
⟨f, w2⟩ = ∫₀¹ e^(2x)√3(2x − 1) dx = √3[(x − 1)e^(2x)]₀¹ = √3
and conclude that
g(x) = (e² − 1)/2 + √3[√3(2x − 1)] = 6x + (e² − 7)/2 (≈ 6x + 0.1945).
(b) Graphs of f and g omitted.

54. (a) The standard basis for P1 is {1, x}. On [0, π], the Gram-Schmidt orthonormalization process produces the orthonormal basis
B = {w1, w2} = {1/√π, (√3/π^(3/2))(2x − π)}.
The least squares approximating function is given by g(x) = ⟨f, w1⟩w1 + ⟨f, w2⟩w2. Find the inner products
⟨f, w1⟩ = (1/√π)∫₀^π cos x dx = (1/√π)[sin x]₀^π = 0
⟨f, w2⟩ = (√3/π^(3/2))∫₀^π (2x − π)cos x dx = (√3/π^(3/2))[(2x − π)sin x + 2 cos x]₀^π = −4√3/π^(3/2)
and conclude that
g(x) = ⟨f, w1⟩w1 + ⟨f, w2⟩w2 = 0 + (−4√3/π^(3/2))(√3/π^(3/2))(2x − π) = (12/π³)(π − 2x).
(b) Graphs of f and g omitted.

56. (a) The standard basis for P1 is {1, x}. On [−π/2, π/2], the Gram-Schmidt orthonormalization process produces the orthonormal basis
B = {w1, w2} = {1/√π, (2√3/π^(3/2))x}.
The least squares approximating function is then given by g(x) = ⟨f, w1⟩w1 + ⟨f, w2⟩w2. Find the inner products
⟨f, w1⟩ = (1/√π)∫ from −π/2 to π/2 of sin x dx = (1/√π)[−cos x] = 0
⟨f, w2⟩ = (2√3/π^(3/2))∫ from −π/2 to π/2 of x sin x dx = (2√3/π^(3/2))[−x cos x + sin x] = 4√3/π^(3/2)
and conclude that
g(x) = 0 + (4√3/π^(3/2))(2√3/π^(3/2))x = (24/π³)x.
(b) Graphs of f and g omitted.
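In Exercise 52 the inner products were found by exact integration. They can also be approximated numerically to confirm that g(x) = 6x + (e² − 7)/2. The sketch below uses composite Simpson's rule, which is my choice of quadrature and is not part of the text:

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i*h)
    return s * h / 3

f  = lambda x: math.exp(2*x)
w1 = lambda x: 1.0                        # first orthonormal basis function
w2 = lambda x: math.sqrt(3)*(2*x - 1)     # second orthonormal basis function

c1 = simpson(lambda x: f(x)*w1(x), 0, 1)  # should be (e^2 - 1)/2
c2 = simpson(lambda x: f(x)*w2(x), 0, 1)  # should be sqrt(3)

# g(x) = c1*w1(x) + c2*w2(x), which reduces to 6x + (e^2 - 7)/2
g = lambda x: c1*w1(x) + c2*w2(x)
print(round(g(0.5), 4))                   # about 3.1945
```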
58. (a) The standard basis for P2 is {1, x, x²}. Applying the Gram-Schmidt orthonormalization process produces the orthonormal basis
B = {w1, w2, w3} = {1/√3, (1/3)(2x − 5), (2√5/(3√3))(x² − 5x + 11/2)}.
The least squares approximating function is then given by g(x) = ⟨f, w1⟩w1 + ⟨f, w2⟩w2 + ⟨f, w3⟩w3. Find the inner products
⟨f, w1⟩ = ∫₁⁴ (1/√3)√x dx = 14/(3√3) (see Exercise 51)
⟨f, w2⟩ = ∫₁⁴ (1/3)√x(2x − 5) dx = 22/45 (see Exercise 51)
⟨f, w3⟩ = (2√5/(3√3))∫₁⁴ (x^(5/2) − 5x^(3/2) + (11/2)x^(1/2)) dx = (2√5/(3√3))[(2/7)x^(7/2) − 2x^(5/2) + (11/3)x^(3/2)]₁⁴ = −2√5/(63√3)
and conclude that g is given by
g(x) = (14/(3√3))(1/√3) + (22/45)(1/3)(2x − 5) + (−2√5/(63√3))(2√5/(3√3))(x² − 5x + 11/2)
= 14/9 + (44x − 110)/135 − (20/567)(x² − 5x + 11/2)
= −(20/567)x² + (1424/2835)x + 310/567.
(b) Graphs of f and g omitted.
60. (a) The standard basis for P2 is {1, x, x²}. Applying the Gram-Schmidt orthonormalization process produces the orthonormal basis
B = {w1, w2, w3} = {1/√π, (2√3/π^(3/2))x, (6√5/π^(5/2))(x² − π²/12)}.
The least squares approximating function is given by g(x) = ⟨f, w1⟩w1 + ⟨f, w2⟩w2 + ⟨f, w3⟩w3. Find the inner products
⟨f, w1⟩ = (1/√π)∫ from −π/2 to π/2 of sin x dx = 0 (see Exercise 54)
⟨f, w2⟩ = (2√3/π^(3/2))∫ from −π/2 to π/2 of x sin x dx = 4√3/π^(3/2) (see Exercise 54)
⟨f, w3⟩ = (6√5/π^(5/2))∫ from −π/2 to π/2 of (x² − π²/12)sin x dx = 0 (the integrand is odd)
and conclude that
g(x) = 0 + (4√3/π^(3/2))(2√3/π^(3/2))x + 0 = (24/π³)x.
(b) Graphs of f and g omitted.
62. (a) The standard basis for P2 is {1, x, x²}. Applying the Gram-Schmidt orthonormalization process produces the orthonormal basis
B = {w1, w2, w3} = {1/√π, (√3/π^(3/2))(2x − π), (√5/π^(5/2))(6x² − 6πx + π²)}.
The least squares approximating function for f is given by g(x) = ⟨f, w1⟩w1 + ⟨f, w2⟩w2 + ⟨f, w3⟩w3. Find the inner products
⟨f, w1⟩ = (1/√π)∫₀^π cos x dx = (1/√π)[sin x]₀^π = 0
⟨f, w2⟩ = (√3/π^(3/2))∫₀^π (2x − π)cos x dx = (√3/π^(3/2))[(2x − π)sin x + 2 cos x]₀^π = −4√3/π^(3/2)
⟨f, w3⟩ = (√5/π^(5/2))∫₀^π (6x² − 6πx + π²)cos x dx = (√5/π^(5/2))[(6x² − 6πx + π² − 12)sin x + 6(2x − π)cos x]₀^π = 0
and conclude that
g(x) = (0)(1/√π) + (−4√3/π^(3/2))(√3/π^(3/2))(2x − π) + (0)(√5/π^(5/2))(6x² − 6πx + π²)
= −(12/π³)(2x − π)
= −(24/π³)x + 12/π² (≈ −0.77x + 1.2).
(b) Graphs of f and g omitted.
64. The fourth order Fourier approximation of f(x) = π − x is of the form
g(x) = a0/2 + a1 cos x + b1 sin x + a2 cos 2x + b2 sin 2x + a3 cos 3x + b3 sin 3x + a4 cos 4x + b4 sin 4x.
In Exercise 63, you determined a0 and the general form of the coefficients aj and bj:
a0 = 0
aj = 0, j = 1, 2, 3, …
bj = 2/j, j = 1, 2, 3, …
So, the approximation is
g(x) = 2 sin x + sin 2x + (2/3) sin 3x + (1/2) sin 4x.
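The coefficients aj = 0 and bj = 2/j can be confirmed by numerical integration over [0, 2π]. The quadrature routine below (composite Simpson's rule) is my own illustrative sketch, not part of the text:

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i*h)
    return s * h / 3

f = lambda x: math.pi - x

def a(j):
    """Fourier cosine coefficient of f on [0, 2*pi]."""
    return simpson(lambda x: f(x)*math.cos(j*x), 0, 2*math.pi) / math.pi

def b(j):
    """Fourier sine coefficient of f on [0, 2*pi]."""
    return simpson(lambda x: f(x)*math.sin(j*x), 0, 2*math.pi) / math.pi

for j in range(1, 5):
    print(j, round(a(j), 6), round(b(j), 6))   # a_j near 0, b_j near 2/j
```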
66. The fourth order Fourier approximation of f(x) = (x − π)² is of the form
g(x) = a0/2 + a1 cos x + b1 sin x + a2 cos 2x + b2 sin 2x + a3 cos 3x + b3 sin 3x + a4 cos 4x + b4 sin 4x.
In Exercise 65, you determined a0 and the general form of the coefficients aj and bj:
a0 = 2π²/3
aj = 4/j², j = 1, 2, …
bj = 0, j = 1, 2, …
So, the approximation is
g(x) = π²/3 + 4 cos x + cos 2x + (4/9) cos 3x + (1/4) cos 4x.
68. The second order Fourier approximation of f(x) = e^(−x) is of the form
g(x) = a0/2 + a1 cos x + b1 sin x + a2 cos 2x + b2 sin 2x.
In Exercise 67, you found that
a0 = (1 − e^(−2π))/π
a1 = (1 − e^(−2π))/(2π)
b1 = (1 − e^(−2π))/(2π).
So, you need to determine a2 and b2.
a2 = (1/π)∫₀^(2π) e^(−x) cos 2x dx = (1/π)[(1/5)(−e^(−x) cos 2x + 2e^(−x) sin 2x)]₀^(2π) = (1 − e^(−2π))/(5π)
b2 = (1/π)∫₀^(2π) e^(−x) sin 2x dx = (1/π)[(1/5)(−e^(−x) sin 2x − 2e^(−x) cos 2x)]₀^(2π) = 2(1 − e^(−2π))/(5π)
So, the approximation is
g(x) = (1 − e^(−2π))/(2π) + ((1 − e^(−2π))/(2π)) cos x + ((1 − e^(−2π))/(2π)) sin x + ((1 − e^(−2π))/(5π)) cos 2x + (2(1 − e^(−2π))/(5π)) sin 2x
= ((1 − e^(−2π))/(10π))(5 + 5 cos x + 5 sin x + 2 cos 2x + 4 sin 2x).
70. The second order Fourier approximation of f(x) = e^(−2x) is of the form
g(x) = a0/2 + a1 cos x + b1 sin x + a2 cos 2x + b2 sin 2x.
In Exercise 69, you found that
a0 = (1 − e^(−4π))/(2π)
a1 = 2(1 − e^(−4π))/(5π)
b1 = (1 − e^(−4π))/(5π).
So, you need to determine a2 and b2.
a2 = (1/π)∫₀^(2π) e^(−2x) cos 2x dx = (1/π)[(1/4)e^(−2x)(sin 2x − cos 2x)]₀^(2π) = (1 − e^(−4π))/(4π)
b2 = (1/π)∫₀^(2π) e^(−2x) sin 2x dx = (1/π)[−(1/4)e^(−2x)(sin 2x + cos 2x)]₀^(2π) = (1 − e^(−4π))/(4π)
So, the approximation is
g(x) = (1 − e^(−4π))/(4π) + (2(1 − e^(−4π))/(5π)) cos x + ((1 − e^(−4π))/(5π)) sin x + ((1 − e^(−4π))/(4π)) cos 2x + ((1 − e^(−4π))/(4π)) sin 2x
= ((1 − e^(−4π))/(20π))(5 + 8 cos x + 4 sin x + 5 cos 2x + 5 sin 2x).
72. The fourth order Fourier approximation of f(x) = 1 + x is of the form
g(x) = a0/2 + a1 cos x + b1 sin x + a2 cos 2x + b2 sin 2x + a3 cos 3x + b3 sin 3x + a4 cos 4x + b4 sin 4x.
In Exercise 71, you found that
a0 = 2 + 2π
aj = 0, j = 1, 2, …
bj = −2/j, j = 1, 2, …
So, the approximation is
g(x) = (1 + π) − 2 sin x − sin 2x − (2/3) sin 3x − (1/2) sin 4x.

74. Because f(x) = sin²x = 1/2 − (1/2)cos 2x, you see that the fourth order Fourier approximation is simply
g(x) = 1/2 − (1/2)cos 2x.
76. Because
a0 = 2π²/3, aj = 4/j² (j = 1, 2, …), bj = 0 (j = 1, 2, …),
the nth order Fourier approximation is
g(x) = π²/3 + 4 cos x + cos 2x + (4/9) cos 3x + (4/16) cos 4x + ⋯ + (4/n²) cos nx.
Review Exercises for Chapter 5

2. (a) ||u|| = √((−1)² + 2²) = √5
(b) ||v|| = √(2² + 3²) = √13
(c) u · v = −1(2) + 2(3) = 4
(d) d(u, v) = ||u − v|| = ||(−3, −1)|| = √((−3)² + (−1)²) = √10

4. u = (1, −1, 2) and v = (2, 3, 1)
||u|| = √(1² + (−1)² + 2²) = √6
||v|| = √(2² + 3² + 1²) = √14
u · v = 1(2) + (−1)(3) + 2(1) = 1
d(u, v) = ||u − v|| = ||(−1, −4, 1)|| = √((−1)² + (−4)² + 1²) = √18 = 3√2

6. (a) ||u|| = √(1² + (−2)² + 2² + 0²) = √9 = 3
(b) ||v|| = √(2² + (−1)² + 0² + 2²) = √9 = 3
(c) u · v = 1(2) + (−2)(−1) + 2(0) + 0(2) = 4
(d) d(u, v) = ||u − v|| = ||(−1, −1, 2, −2)|| = √((−1)² + (−1)² + 2² + (−2)²) = √10

8. (a) ||u|| = √(1² + (−1)² + 0² + 1² + 1²) = √4 = 2
(b) ||v|| = √(0² + 1² + (−2)² + 2² + 1²) = √10
(c) u · v = 1(0) + (−1)(1) + 0(−2) + 1(2) + 1(1) = 2
(d) d(u, v) = ||u − v|| = ||(1, −2, 2, −1, 0)|| = √(1² + (−2)² + 2² + (−1)² + 0²) = √10

10. The norm of v is ||v|| = √(1² + (−2)² + 1²) = √6.
So, a unit vector in the direction of v is
u = (1/||v||)v = (1/√6)(1, −2, 1) = (1/√6, −2/√6, 1/√6).

12. The norm of v is ||v|| = √(0² + 2² + (−1)²) = √5.
So, a unit vector in the direction of v is
u = (1/||v||)v = (1/√5)(0, 2, −1) = (0, 2/√5, −1/√5).

14. The cosine of the angle θ between u and v is given by
cos θ = (u · v)/(||u|| ||v||) = (1(0) + (−1)(1))/(√(1² + (−1)²) √(0² + 1²)) = −1/√2,
which implies that θ = cos⁻¹(−1/√2) = 3π/4 radians (135°).

16. The cosine of the angle θ between u and v is given by
cos θ = (u · v)/(||u|| ||v||)
= (cos(π/6)cos(5π/6) + sin(π/6)sin(5π/6)) / (√(cos²(π/6) + sin²(π/6)) √(cos²(5π/6) + sin²(5π/6)))
= ((√3/2)(−√3/2) + (1/2)(1/2))/(1 · 1) = −1/2,
which implies that θ = cos⁻¹(−1/2) = 2π/3 radians (120°).
18. The cosine of the angle θ between u and v is given by
cos θ = (u · v)/(||u|| ||v||) = (2 − 3)/(√10 √10) = −1/10,
which implies that θ = cos⁻¹(−1/10) ≈ 1.67 radians (95.7°).

20. The projection of u onto v is given by
proj_v u = ((u · v)/(v · v))v = ((2(0) + 3(4))/(0² + 4²))(0, 4) = (12/16)(0, 4) = (0, 3).

22. The projection of u onto v is given by
proj_v u = ((u · v)/(v · v))v = ((2(0) + 5(5))/(0² + 5²))(0, 5) = (25/25)(0, 5) = (0, 5).

24. The projection of u onto v is given by
proj_v u = ((u · v)/(v · v))v = ((1(0) + 2(2) + (−1)(3))/(0² + 2² + 3²))(0, 2, 3) = (1/13)(0, 2, 3) = (0, 2/13, 3/13).
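The projection formula proj_v u = ((u · v)/(v · v))v used in Exercises 20-24 translates directly into code. This sketch (not from the text) uses exact rational arithmetic so the answer matches the fractions above:

```python
from fractions import Fraction

def proj(u, v):
    """Orthogonal projection of u onto v, computed exactly."""
    u = [Fraction(x) for x in u]
    v = [Fraction(x) for x in v]
    c = sum(a*b for a, b in zip(u, v)) / sum(b*b for b in v)  # (u.v)/(v.v)
    return tuple(c*b for b in v)

# Exercise 24: u = (1, 2, -1), v = (0, 2, 3)
p = proj((1, 2, -1), (0, 2, 3))
print(p)   # (Fraction(0, 1), Fraction(2, 13), Fraction(3, 13))
```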
26. (a) ⟨u, v⟩ = 2(0)(4/3) + (3)(1) + 2(1/3)(−3) = 1
(b) d(u, v) = ||u − v|| = √⟨u − v, u − v⟩ = √(2(−4/3)² + 2² + 2(10/3)²) = √(268/9) = (2/3)√67

28. Verify the Triangle Inequality as follows.
||u + v|| ≤ ||u|| + ||v||
||(4/3, 4, −8/3)|| ≤ √(9 + 2(1/9)) + √(2(16/9) + 1 + 18)
√(2(4/3)² + 4² + 2(−8/3)²) ≤ 3.037 + 4.749
5.812 ≤ 7.786
Verify the Cauchy-Schwarz Inequality as follows.
|⟨u, v⟩| ≤ ||u|| ||v||
|(3)(1) + 2(1/3)(−3)| ≤ (3.037)(4.749)
1 ≤ 14.423
30. A vector v = (v1, v2, v3) that is orthogonal to u must satisfy the equation u · v = v1 − v2 + 2v3 = 0.
This equation has solutions of the form
v = (s, t, (1/2)t − (1/2)s), where s and t are any real numbers.

32. A vector v = (v1, v2, v3, v4) that is orthogonal to u must satisfy the equation u · v = 0v1 + v2 + 2v3 − v4 = 0.
This equation has solutions of the form
v = (r, s, (1/2)t − (1/2)s, t), where r, s, and t are any real numbers.

34. Orthogonalize the vectors in B.
w1 = (3, 4)
w2 = (1, 2) − (11/25)(3, 4) = (−8/25, 6/25)
Then normalize each vector.
u1 = (1/||w1||)w1 = (1/5)(3, 4) = (3/5, 4/5)
u2 = (1/||w2||)w2 = (5/2)(−8/25, 6/25) = (−4/5, 3/5)
So, an orthonormal basis for R² is {(3/5, 4/5), (−4/5, 3/5)}.
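The Gram-Schmidt computation in Exercise 34 can be sketched in a few lines of code. This is an illustrative implementation of mine, not taken from the text:

```python
import math

def dot(u, v):
    return sum(a*b for a, b in zip(u, v))

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors."""
    basis = []
    for v in vectors:
        w = list(v)
        for q in basis:
            c = dot(v, q)                          # projection coefficient
            w = [wi - c*qi for wi, qi in zip(w, q)]
        norm = math.sqrt(dot(w, w))
        basis.append([wi / norm for wi in w])
    return basis

u1, u2 = gram_schmidt([(3, 4), (1, 2)])
print(u1)   # [0.6, 0.8]
print(u2)   # approximately [-0.8, 0.6]
```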
36. Orthogonalize the vectors in B.
w1 = (0, 0, 2)
w2 = (0, 1, 1) − (2/4)(0, 0, 2) = (0, 1, 0)
w3 = (1, 1, 1) − (2/4)(0, 0, 2) − (1/1)(0, 1, 0) = (1, 0, 0)
Then normalize each vector to obtain the orthonormal basis for R³:
{(0, 0, 1), (0, 1, 0), (1, 0, 0)}.

38. (a) To find x = (−3, 4, 4) as a linear combination of the vectors in {(−1, 2, 2), (1, 0, 0)}, solve the vector equation
c1(−1, 2, 2) + c2(1, 0, 0) = (−3, 4, 4).
The solution of the corresponding system of equations is c1 = 2 and c2 = −1. So, [x]_B = (2, −1), and you can write
(−3, 4, 4) = 2(−1, 2, 2) − (1, 0, 0).
(b) To apply the Gram-Schmidt orthonormalization process, first orthogonalize each vector in B.
w1 = (−1, 2, 2)
w2 = (1, 0, 0) − (−1/9)(−1, 2, 2) = (8/9, 2/9, 2/9)
Then normalize w1 and w2 as follows.
u1 = (1/||w1||)w1 = (1/3)(−1, 2, 2) = (−1/3, 2/3, 2/3)
u2 = (1/||w2||)w2 = (3/(2√2))(8/9, 2/9, 2/9) = (4/(3√2), 1/(3√2), 1/(3√2))
So, B′ = {(−1/3, 2/3, 2/3), (4/(3√2), 1/(3√2), 1/(3√2))}.
(c) The coordinates of x relative to B′ are found by calculating
⟨x, u1⟩ = (−3, 4, 4) · (−1/3, 2/3, 2/3) = 19/3
⟨x, u2⟩ = (−3, 4, 4) · (4/(3√2), 1/(3√2), 1/(3√2)) = −4/(3√2).
So,
(−3, 4, 4) = (19/3)(−1/3, 2/3, 2/3) − (4/(3√2))(4/(3√2), 1/(3√2), 1/(3√2)).

40. (a) ⟨f, g⟩ = ∫₀¹ f(x)g(x) dx = ∫₀¹ (x + 2)(15x − 8) dx = ∫₀¹ (15x² + 22x − 16) dx = [5x³ + 11x² − 16x]₀¹ = 0
(b) ⟨−4f, g⟩ = −4⟨f, g⟩ = −4(0) = 0
(c) ||f||² = ⟨f, f⟩ = ∫₀¹ (x + 2)² dx = ∫₀¹ (x² + 4x + 4) dx = [x³/3 + 2x² + 4x]₀¹ = 19/3,
so ||f|| = √⟨f, f⟩ = √(19/3).
(d) Because f and g are already orthogonal, you only need to normalize them. You know ||f|| = √(19/3), so compute ||g||:
||g||² = ⟨g, g⟩ = ∫₀¹ (15x − 8)² dx = ∫₀¹ (225x² − 240x + 64) dx = [75x³ − 120x² + 64x]₀¹ = 19,
so ||g|| = √19. Then
u1 = (1/||f||)f = √(3/19)(x + 2)
u2 = (1/||g||)g = (1/√19)(15x − 8).
The orthonormal set is
B′ = {√(3/19)(x + 2), (1/√19)(15x − 8)}.

42. The set is already orthogonal, as shown in Example 3, Section 5.3. So, you need to normalize each vector and obtain the orthonormal set
{1/√(2π), (1/√π)cos x, (1/√π)sin x, …, (1/√π)cos nx, (1/√π)sin nx}.
44. The solution space of the homogeneous system consists of vectors of the form (−t, s, s, t), where s and t are any real numbers. So, a basis for the solution space is
B = {(−1, 0, 0, 1), (0, 1, 1, 0)}.
Because these vectors are orthogonal and their length is √2, you normalize them to obtain the orthonormal basis
{(−√2/2, 0, 0, √2/2), (0, √2/2, √2/2, 0)}.

46. (a) ⟨f, g⟩ = ∫₀¹ x · 4x² dx = [x⁴]₀¹ = 1
(b) The vectors are not orthogonal.
(c) Because ||f|| = 1/√3 and ||g|| = 4/√5, verify the Cauchy-Schwarz Inequality as follows:
|⟨f, g⟩| ≤ ||f|| ||g||
1 ≤ (1/√3)(4/√5) ≈ 1.0328.

48. Use the Triangle Inequality ||u + w|| ≤ ||u|| + ||w|| with w = v − u:
||u + w|| = ||u + (v − u)|| = ||v|| ≤ ||u|| + ||v − u||,
and so ||v|| − ||u|| ≤ ||v − u||. By symmetry, you also have ||u|| − ||v|| ≤ ||u − v|| = ||v − u||.
So, | ||u|| − ||v|| | ≤ ||u − v||. To complete the proof, first observe that the Triangle Inequality implies
||u − w|| ≤ ||u|| + ||−w|| = ||u|| + ||w||.
Letting w = u + v, you have ||u − w|| = ||u − (u + v)|| = ||−v|| = ||v|| ≤ ||u|| + ||u + v||, and so ||v|| − ||u|| ≤ ||u + v||.
Similarly, ||u|| − ||v|| ≤ ||u + v||, and so | ||u|| − ||v|| | ≤ ||u + v||. In conclusion,
| ||u|| − ||v|| | ≤ ||u ± v||.

50. Extend the V-basis B = {(0, 1, 0, 1), (0, 2, 0, 0)} to a basis of R⁴:
{(0, 1, 0, 1), (0, 2, 0, 0), (1, 0, 0, 0), (0, 0, 1, 0)}.
Now, (1, 1, 1, 1) = (0, 1, 0, 1) + (1, 0, 1, 0) = v + w, where v ∈ V and w is orthogonal to every vector in V.

52. (x1 + x2 + ⋯ + xn)² = (x1 + x2 + ⋯ + xn)(x1 + x2 + ⋯ + xn)
= (x1, …, xn) · (x1, …, xn) + (x2, …, xn, x1) · (x1, …, xn) + ⋯ + (xn, x1, …, xn−1) · (x1, …, xn)
≤ (x1² + ⋯ + xn²)^(1/2)(x1² + ⋯ + xn²)^(1/2) + (x2² + ⋯ + xn² + x1²)^(1/2)(x1² + ⋯ + xn²)^(1/2) + ⋯ + (xn² + x1² + ⋯ + xn−1²)^(1/2)(x1² + ⋯ + xn²)^(1/2)
= n(x1² + ⋯ + xn²)

54. Let {u1, u2, …, un} be a dependent set of vectors, and assume uk is a linear combination of u1, u2, …, uk−1, which are linearly independent. The Gram-Schmidt process will orthonormalize u1, …, uk−1, but then uk will be a linear combination of u1, …, uk−1, so the process produces wk = 0, which cannot be normalized.
56. An orthonormal basis for S is
{(0, −1/√2, 1/√2), (0, 1/√2, 1/√2)}.
proj_S v = (v · u1)u1 + (v · u2)u2
= (−√2)(0, −1/√2, 1/√2) + (−√2)(0, 1/√2, 1/√2)
= (0, 0, −2).

58. With A = [1 −2; 1 −1; 1 0; 1 1] and b = (2, 1, 1, 3):
AᵀA = [4 −2; −2 6]
Aᵀb = [7; −2]
AᵀAx = Aᵀb ⇒ x = [1.9; 0.3]
Least squares regression line: y = 0.3x + 1.9
(Graph omitted.)

60. Substitute the data points (0, 787), (1, 986), (2, 1180), (3, 1259), (4, 1331), and (5, 1440) into the equation y = c0 + c1t to obtain the following system.
c0 + c1(0) = 787
c0 + c1(1) = 986
c0 + c1(2) = 1180
c0 + c1(3) = 1259
c0 + c1(4) = 1331
c0 + c1(5) = 1440
This produces the least squares problem Ax = b, where the rows of A are (1, t) for t = 0, …, 5 and b = (787, 986, 1180, 1259, 1331, 1440).
The normal equations are
AᵀAx = Aᵀb
[6 15; 15 55][c0; c1] = [6983; 19,647]
and the solution is
x = [c0; c1] ≈ [851; 125.1].
So, the least squares regression line is y = 851 + 125.1t.
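Exercise 60's normal equations can be formed and solved directly from the data. The sketch below (mine, not the text's) builds the 2 × 2 system and solves it by Cramer's rule:

```python
ts = [0, 1, 2, 3, 4, 5]
ys = [787, 986, 1180, 1259, 1331, 1440]

n   = len(ts)
st  = sum(ts)                             # 15
stt = sum(t*t for t in ts)                # 55
sy  = sum(ys)                             # 6983
sty = sum(t*y for t, y in zip(ts, ys))    # 19647

# Normal equations: [n st; st stt][c0; c1] = [sy; sty]
det = n*stt - st*st                       # 105
c0 = (sy*stt - st*sty) / det              # about 851.05
c1 = (n*sty - st*sy) / det                # about 125.11
print(round(c0, 2), round(c1, 2))
```

The same pattern extends to the quadratic fits in Exercises 62 and 64, with a 3 × 3 normal system instead of 2 × 2.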
62. For the years 1996-2003: Substitute the data points (−4, 1101), (−3, 1130), (−2, 1182), (−1, 1243), (0, 1307), (1, 1381), (2, 1475), and (3, 1553) into the quadratic polynomial y = c0 + c1t + c2t², giving eight equations of the form c0 + c1t + c2t² = y.
This produces the least squares problem Ax = b, where the rows of A are (1, t, t²) for t = −4, …, 3.
The normal equations are
AᵀAx = Aᵀb
[8 −4 44; −4 44 −64; 44 −64 452][c0; c1; c2] = [10,372; −2411; 55,015]
and the solution is
x = [c0; c1; c2] ≈ [1307; 70.5; 4.43].
So, the least squares regression quadratic polynomial is y = 1307 + 70.5t + 4.43t².

For the years 2004-2007: Substitute the data points (4, 1308), (5, 1397), (6, 1495), and (7, 1610) into y = c0 + c1t + c2t², giving four equations.
This produces the least squares problem Ax = b, where the rows of A are (1, t, t²) for t = 4, 5, 6, 7.
The normal equations are
AᵀAx = Aᵀb
[4 22 126; 22 126 748; 126 748 4578][c0; c1; c2] = [5810; 32,457; 188,563]
and the solution is
x = [c0; c1; c2] ≈ [1089; 28.9; 6.50].
So, the least squares regression quadratic polynomial is y = 1089 + 28.9t + 6.50t².
64. Substitute the data points (2, 439.5), (3, 1465.9), (4, 3189.2), (5, 6138.6), (6, 10,604.9), and (7, 16,000.0) into the quadratic polynomial y = c0 + c1t + c2t², giving six equations of the form c0 + c1t + c2t² = y.
This produces the least squares problem Ax = b, where the rows of A are (1, t, t²) for t = 2, …, 7 and b = (439.5, 1465.9, 3189.2, 6138.6, 10,604.9, 16,000.0).
The normal equations are
AᵀAx = Aᵀb
[6 27 139; 27 139 783; 139 783 4675][c0; c1; c2] = [37,838.1; 224,355.9; 1,385,219.7]
and the solution is
x = [c0; c1; c2] ≈ [2556.1; −2183.34; 585.991].
So, the least squares regression quadratic polynomial is y = 2556.1 − 2183.34t + 585.991t².

66. u × v = det[i j k; 1 1 1; 1 0 0] = j − k = (0, 1, −1)
u × v is orthogonal to both u and v because
u · (u × v) = 1(0) + 1(1) + 1(−1) = 0 and
v · (u × v) = 1(0) + 0(1) + 0(−1) = 0.

68. u × v = det[i j k; 0 1 6; 1 −2 1] = 13i + 6j − k = (13, 6, −1)
u × v is orthogonal to both u and v because
u · (u × v) = (0, 1, 6) · (13, 6, −1) = 6 − 6 = 0 and
v · (u × v) = (1, −2, 1) · (13, 6, −1) = 13 − 12 − 1 = 0.

70. Because
u × v = det[i j k; 1 3 0; −1 0 2] = 6i − 2j + 3k = (6, −2, 3),
the area of the parallelogram is
||u × v|| = √(6² + (−2)² + 3²) = 7.

72. Because
v × w = det[i j k; 0 0 1; 0 1 0] = −i = (−1, 0, 0),
the volume is |u · (v × w)| = |1(−1) + 0(0) + 0(0)| = |−1| = 1.
74. The standard basis for P1 is {1, x}. On the interval C[−1, 1], the Gram-Schmidt orthonormalization process yields the orthonormal basis {√2/2, (√6/2)x}. The linear least squares approximating function is given by
g(x) = ⟨f, w1⟩w1 + ⟨f, w2⟩w2.
Because
⟨f, w1⟩ = ∫ from −1 to 1 of (√2/2)x³ dx = [(√2/8)x⁴] = 0
⟨f, w2⟩ = ∫ from −1 to 1 of (√6/2)x⁴ dx = [(√6/10)x⁵] = √6/5,
g is given by
g(x) = 0(√2/2) + (√6/5)((√6/2)x) = (3/5)x.
(Graph omitted.)
76. The standard basis for P1 is {1, x}. On the interval C[0, π/2], the Gram-Schmidt orthonormalization process yields the orthonormal basis
{√(2/π), (√(6π)/π²)(4x − π)}.
Because
⟨f, w1⟩ = ∫₀^(π/2) sin(2x)√(2/π) dx = √(2/π)[−(1/2)cos 2x]₀^(π/2) = √2/√π
⟨f, w2⟩ = ∫₀^(π/2) sin(2x)(√(6π)/π²)(4x − π) dx = 0,
g is given by
g(x) = ⟨f, w1⟩w1 + ⟨f, w2⟩w2 = (√2/√π)(√2/√π) = 2/π.
(Graph omitted.)
78. The standard basis for P2 is {1, x, x²}. On the interval C[0, 1], the Gram-Schmidt orthonormalization process yields the orthonormal basis
{1, √3(2x − 1), √5(6x² − 6x + 1)}.
Because
⟨f, w1⟩ = ∫₀¹ √x dx = 2/3
⟨f, w2⟩ = ∫₀¹ √x √3(2x − 1) dx = √3 ∫₀¹ (2x^(3/2) − x^(1/2)) dx = √3[(4/5)x^(5/2) − (2/3)x^(3/2)]₀¹ = 2√3/15
⟨f, w3⟩ = ∫₀¹ √x √5(6x² − 6x + 1) dx = √5 ∫₀¹ (6x^(5/2) − 6x^(3/2) + x^(1/2)) dx = √5[(12/7)x^(7/2) − (12/5)x^(5/2) + (2/3)x^(3/2)]₀¹ = −2√5/105,
g is given by
g(x) = ⟨f, w1⟩w1 + ⟨f, w2⟩w2 + ⟨f, w3⟩w3
= (2/3)(1) + (2√3/15)√3(2x − 1) + (−2√5/105)√5(6x² − 6x + 1)
= −(4/7)x² + (48/35)x + 6/35 = (2/35)(−10x² + 24x + 3).
(Graph omitted.)
80. Find the coefficients as follows.
a0 = (1/π)∫ from −π to π of x² dx = (1/π)[x³/3] = 2π²/3
a1 = (1/π)∫ from −π to π of x² cos x dx = (1/π)[x² sin x + 2x cos x − 2 sin x] = −4
b1 = (1/π)∫ from −π to π of x² sin x dx = (1/π)[−x² cos x + 2x sin x + 2 cos x] = 0
So, the approximation is
g(x) = a0/2 + a1 cos x + b1 sin x = π²/3 − 4 cos x.
82. (a) True. See Theorem 5.18, part 1, page 339. (b) False. See Theorem 5.17, part 1, page 338. (c) True. See discussion before Theorem 5.19, page 345.
Project Solutions for Chapter 5

1 The QR-Factorization
1. A = QR

2. (a) A = [1 1; 0 1; 1 0] = [.7071 .4082; 0 .8165; .7071 −.4082][1.4142 0.7071; 0 1.2247] = QR
(b) A = [1 0; 0 0; 1 1; 1 2] = [.5774 −.7071; 0 0; .5774 0; .5774 .7071][1.7321 1.7321; 0 1.4142] = QR
(c) A = [1 0 0; 1 1 0; 1 1 1] = [.5774 −.8165 0; .5774 .4082 −.7071; .5774 .4082 .7071][1.7321 1.1547 .5774; 0 .8165 .4082; 0 0 .7071] = QR
(d) A = [1 0 −1; 1 2 0; 1 2 0; 1 0 0] = [.5 −.5 −.7071; .5 .5 0; .5 .5 0; .5 −.5 .7071][2 2 −.5; 0 2 .5; 0 0 .7071] = QR

3. The normal equations simplify using A = QR as follows.
AᵀAx = Aᵀb
(QR)ᵀQRx = (QR)ᵀb
RᵀQᵀQRx = RᵀQᵀb
RᵀRx = RᵀQᵀb   (QᵀQ = I)
Rx = Qᵀb
Because R is upper triangular, only back-substitution is needed.

4. A = [1 1; 0 1; 1 0] = [.7071 .4082; 0 .8165; .7071 −.4082][1.4142 0.7071; 0 1.2247] = QR.
Rx = Qᵀb:
[1.4142 0.7071; 0 1.2247][x1; x2] = [.7071 0 .7071; .4082 .8165 −.4082][−1; 1; −1] = [−1.4142; 0.8165]
[x1; x2] = [−1.3333; 0.6667]
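Exercise 4's solve of Rx = Qᵀb can be reproduced end to end. The sketch below (mine, not the text's) computes a thin QR factorization by classical Gram-Schmidt and then applies back-substitution, exactly as the project suggests:

```python
import math

def qr(A):
    """Thin QR via classical Gram-Schmidt; A is a list of rows."""
    m, n = len(A), len(A[0])
    cols = [[A[i][j] for i in range(m)] for j in range(n)]
    Q, R = [], [[0.0]*n for _ in range(n)]
    for j, a in enumerate(cols):
        w = a[:]
        for k in range(j):
            R[k][j] = sum(qi*ai for qi, ai in zip(Q[k], a))
            w = [wi - R[k][j]*qi for wi, qi in zip(w, Q[k])]
        R[j][j] = math.sqrt(sum(wi*wi for wi in w))
        Q.append([wi / R[j][j] for wi in w])
    return Q, R   # Q is a list of orthonormal columns

def lstsq(A, b):
    """Least squares solution of Ax = b via Rx = Q^T b."""
    Q, R = qr(A)
    y = [sum(qi*bi for qi, bi in zip(q, b)) for q in Q]   # Q^T b
    n = len(y)
    x = [0.0]*n
    for i in range(n - 1, -1, -1):                        # back-substitution
        x[i] = (y[i] - sum(R[i][j]*x[j] for j in range(i + 1, n))) / R[i][i]
    return x

A = [[1, 1], [0, 1], [1, 0]]
b = [-1, 1, -1]
x = lstsq(A, b)
print([round(v, 4) for v in x])   # [-1.3333, 0.6667]
```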
2 Orthogonal Matrices and Change of Basis
1. P⁻¹ = [−1 2; −2 3] ≠ Pᵀ, so P is not orthogonal.

2. [cos θ −sin θ; sin θ cos θ]⁻¹ = [cos θ sin θ; −sin θ cos θ] = [cos θ −sin θ; sin θ cos θ]ᵀ

3. If P⁻¹ = Pᵀ, then PᵀP = I ⇒ the columns of P are pairwise orthogonal.

4. If P is orthogonal, then P⁻¹ = Pᵀ by definition of orthogonal matrix. Then (P⁻¹)⁻¹ = (Pᵀ)⁻¹ = (P⁻¹)ᵀ. The last equality holds because (Aᵀ)⁻¹ = (A⁻¹)ᵀ for any invertible matrix A. So, P⁻¹ is orthogonal.

5. No. For example, [1 0; 0 1] + [1 0; 0 1] = [2 0; 0 2] is not orthogonal. The product of orthogonal matrices is orthogonal: if P⁻¹ = Pᵀ and Q⁻¹ = Qᵀ, then (PQ)⁻¹ = Q⁻¹P⁻¹ = QᵀPᵀ = (PQ)ᵀ.

6. det P = ±1 because 1 = det I = det(PᵀP) = (det P)².

7. ||Px||² = (Px)ᵀPx = xᵀPᵀPx = xᵀx = ||x||², so ||Px|| = ||x||.

8. Let
P = [−2/√5 1/√5; 1/√5 2/√5]
be the change of basis matrix from B′ to B. Because P is orthogonal, lengths are preserved.

Copyright © Houghton Mifflin Harcourt Publishing Company. All rights reserved.
C H A P T E R 6 Linear Transformations
Section 6.1 Introduction to Linear Transformations
Section 6.2 The Kernel and Range of a Linear Transformation
Section 6.3 Matrices for Linear Transformations
Section 6.4 Transition Matrices and Similarity
Section 6.5 Applications of Linear Transformations
Review Exercises
C H A P T E R  6   Linear Transformations

Section 6.1   Introduction to Linear Transformations

1. (a) The image of v is T(3, −4) = (3 + (−4), 3 − (−4)) = (−1, 7).
   (b) If T(v₁, v₂) = (v₁ + v₂, v₁ − v₂) = (3, 19), then
           v₁ + v₂ = 3
           v₁ − v₂ = 19,
       which implies that v₁ = 11 and v₂ = −8. So, the preimage of w is (11, −8).

3. (a) The image of v is T(2, 3, 0) = (3 − 2, 2 + 3, 2(2)) = (1, 5, 4).
   (b) If T(v₁, v₂, v₃) = (v₂ − v₁, v₁ + v₂, 2v₁) = (−11, −1, 10), then
           v₂ − v₁ = −11
           v₁ + v₂ = −1
              2v₁ = 10,
       which implies that v₁ = 5 and v₂ = −6. So, the preimage of w is
       {(5, −6, t) : t is any real number}.

5. (a) The image of v is T(2, −3, −1) = (4(−3) − 2, 4(2) + 5(−3)) = (−14, −7).
   (b) If T(v₁, v₂, v₃) = (4v₂ − v₁, 4v₁ + 5v₂) = (3, 9), then
           −v₁ + 4v₂ = 3
           4v₁ + 5v₂ = 9,
       which implies that v₁ = 1, v₂ = 1, and v₃ = t, where t is any real number.
       So, the preimage of w is {(1, 1, t) : t is any real number}.

7. (a) The image of v is
           T(1, 1) = ((√2/2)(1) − (√2/2)(1), 1 + 1, 2(1) − 1) = (0, 2, 1).
   (b) If T(v₁, v₂) = ((√2/2)v₁ − (√2/2)v₂, v₁ + v₂, 2v₁ − v₂) = (−5√2, −2, −16), then
           (√2/2)v₁ − (√2/2)v₂ = −5√2
                     v₁ + v₂ = −2
                    2v₁ − v₂ = −16,
       which implies that v₁ = −6 and v₂ = 4. So, the preimage of w is (−6, 4).

9. T is not a linear transformation because it preserves neither addition nor scalar multiplication. For example,
       T(1, 1) + T(1, 1) = (1, 1) + (1, 1) = (2, 2) ≠ (2, 1) = T(2, 2).

11. T preserves addition:
        T(x₁, y₁, z₁) + T(x₂, y₂, z₂) = (x₁ + y₁, x₁ − y₁, z₁) + (x₂ + y₂, x₂ − y₂, z₂)
                                      = ((x₁ + x₂) + (y₁ + y₂), (x₁ + x₂) − (y₁ + y₂), z₁ + z₂)
                                      = T(x₁ + x₂, y₁ + y₂, z₁ + z₂).
    T preserves scalar multiplication:
        T(c(x, y, z)) = T(cx, cy, cz) = (cx + cy, cx − cy, cz) = c(x + y, x − y, z) = cT(x, y, z).
    Therefore, T is a linear transformation.
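Solutions like Exercises 9 and 11 hinge on the two defining conditions T(u + v) = T(u) + T(v) and T(cu) = cT(u). A minimal sketch (not from the text; the helper name `preserves_linearity` is made up) that spot-checks both conditions with exact rational arithmetic — a passing spot-check does not prove linearity, but a single failure disproves it:

```python
from fractions import Fraction

def preserves_linearity(T, samples, scalars):
    """Spot-check T(u + v) == T(u) + T(v) and T(c*u) == c*T(u)
    on every sampled pair of vectors and every sampled scalar."""
    add = lambda u, v: tuple(a + b for a, b in zip(u, v))
    scale = lambda c, u: tuple(c * a for a in u)
    for u in samples:
        for v in samples:
            if T(add(u, v)) != add(T(u), T(v)):
                return False
        for c in scalars:
            if T(scale(c, u)) != scale(c, T(u)):
                return False
    return True

# T1 is the linear map of Exercise 1; T9 is a nonlinear stand-in
# (the text does not give Exercise 9's formula, so this one is hypothetical).
T1 = lambda v: (v[0] + v[1], v[0] - v[1])
T9 = lambda v: (v[0], v[1] * v[1])   # squaring breaks additivity

samples = [(Fraction(1), Fraction(1)), (Fraction(3), Fraction(-4)), (Fraction(2), Fraction(5))]
scalars = [Fraction(2), Fraction(-3), Fraction(1, 2)]
```

Using `Fraction` keeps the equality tests exact, so a linear map never fails by rounding.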
13. T is not a linear transformation because it preserves neither addition nor scalar multiplication. For example,
        T(0, 1) + T(1, 0) = (0, 0, 1) + (1, 0, 0) = (1, 0, 1) ≠ (1, 1, 1) = T(1, 1).

15. T is not a linear transformation because it preserves neither addition nor scalar multiplication. For example, T(I₂) = 1 but T(2I₂) = 4 ≠ 2T(I₂).

17. Let A and B be two elements of M₃,₃ — two 3 × 3 matrices — let c be a scalar, and let
        P = [0 0 1]
            [0 1 0]
            [1 0 0]
    denote the fixed matrix defining T, so that T(X) = PX. First,
        T(A + B) = P(A + B) = PA + PB = T(A) + T(B)
    by Theorem 2.3, part 2. And
        T(cA) = P(cA) = c(PA) = cT(A)
    by Theorem 2.3, part 4. So, T is a linear transformation.

19. T preserves addition:
        T(A₁ + A₂) = (A₁ + A₂)ᵀ = A₁ᵀ + A₂ᵀ = T(A₁) + T(A₂).
    T preserves scalar multiplication:
        T(cA) = (cA)ᵀ = c(Aᵀ) = cT(A).
    Therefore, T is a linear transformation.

21. Let u = a₀ + a₁x + a₂x² and v = b₀ + b₁x + b₂x². Then
        T(u + v) = (a₀ + b₀) + (a₁ + b₁) + (a₂ + b₂) + [(a₁ + b₁) + (a₂ + b₂)]x + (a₂ + b₂)x²
                 = (a₀ + a₁ + a₂) + (b₀ + b₁ + b₂) + [(a₁ + a₂) + (b₁ + b₂)]x + (a₂ + b₂)x²
                 = T(u) + T(v),
    and
        T(cu) = ca₀ + ca₁ + ca₂ + (ca₁ + ca₂)x + ca₂x² = cT(u).
    T is a linear transformation.

23. Because (0, 3, −1) = 0(1, 0, 0) + 3(0, 1, 0) − (0, 0, 1), you can use Property 4 of Theorem 6.1 to write
        T(0, 3, −1) = 0 · T(1, 0, 0) + 3T(0, 1, 0) − T(0, 0, 1)
                    = (0, 0, 0) + 3(1, 3, −2) − (0, −2, 2)
                    = (3, 11, −8).

25. Because (2, −4, 1) = 2(1, 0, 0) − 4(0, 1, 0) + (0, 0, 1), you can use Property 4 of Theorem 6.1 to write
        T(2, −4, 1) = 2T(1, 0, 0) − 4T(0, 1, 0) + T(0, 0, 1)
                    = 2(2, 4, −1) − 4(1, 3, −2) + (0, −2, 2)
                    = (0, −6, 8).

27. Because (2, 1, 0) = 0(1, 1, 1) − (0, −1, 2) + 2(1, 0, 1), you can use Property 4 of Theorem 6.1 to write
        T(2, 1, 0) = 0 · T(1, 1, 1) − T(0, −1, 2) + 2T(1, 0, 1)
                   = (0, 0, 0) − (−3, 2, −1) + 2(1, 1, 0)
                   = (5, 0, 1).

29. Because (2, −1, 1) = −(3/2)(1, 1, 1) − (1/2)(0, −1, 2) + (7/2)(1, 0, 1), you can use Property 4 of Theorem 6.1 to write
        T(2, −1, 1) = −(3/2)T(1, 1, 1) − (1/2)T(0, −1, 2) + (7/2)T(1, 0, 1)
                    = −(3/2)(2, 0, −1) − (1/2)(−3, 2, −1) + (7/2)(1, 1, 0)
                    = (2, 5/2, 2).
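The pattern of Exercises 23–29 — solve for the coordinates of v in the given basis, then combine the known images — can be sketched in code. This is a hypothetical helper (not in the text) that uses Cramer's rule with exact fractions for the 3 × 3 coordinate system:

```python
from fractions import Fraction

def det3(m):
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def coords(basis, v):
    """Solve c1*b1 + c2*b2 + c3*b3 = v (basis vectors become columns)."""
    M = [[basis[j][i] for j in range(3)] for i in range(3)]
    D = det3(M)
    cs = []
    for k in range(3):
        Mk = [row[:] for row in M]
        for i in range(3):
            Mk[i][k] = v[i]          # replace column k with v (Cramer's rule)
        cs.append(Fraction(det3(Mk), D))
    return cs

# Exercise 29: basis vectors and their images under T, as given in the text.
basis  = [(1, 1, 1), (0, -1, 2), (1, 0, 1)]
images = [(2, 0, -1), (-3, 2, -1), (1, 1, 0)]
c = coords(basis, (2, -1, 1))
Tv = tuple(sum(ci * w[i] for ci, w in zip(c, images)) for i in range(3))
```

The coordinates reproduce −3/2, −1/2, 7/2, and the combination of images reproduces T(2, −1, 1) = (2, 5/2, 2).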
31. Because the matrix has four columns, the dimension of Rⁿ is 4. Because the matrix has three rows, the dimension of Rᵐ is 3. So, T : R⁴ → R³.

33. Because the matrix has five columns, the dimension of Rⁿ is 5. Because the matrix has two rows, the dimension of Rᵐ is 2. So, T : R⁵ → R².

35. Because the matrix has two columns, the dimension of Rⁿ is 2. Because the matrix has two rows, the dimension of Rᵐ is 2. So, T : R² → R².

37. (a) T(2, 4) = [ 1  2][2]   [10]
                 [−2  4][4] = [12] = (10, 12, 4)
                 [−2  2]      [ 4]
    (b) The preimage of (−1, 2, 2) is given by solving the equation
        T(v₁, v₂) = [ 1  2][v₁]   [−1]
                    [−2  4][v₂] = [ 2]
                    [−2  2]       [ 2]
        for v = (v₁, v₂). The equivalent system of linear equations
            v₁ + 2v₂ = −1
          −2v₁ + 4v₂ = 2
          −2v₁ + 2v₂ = 2
        has the solution v₁ = −1 and v₂ = 0. So, (−1, 0) is the preimage of (−1, 2, 2) under T.
    (c) Because the system of linear equations represented by the equation
        [ 1  2][v₁]   [1]
        [−2  4][v₂] = [1]
        [−2  2]       [1]
        has no solution, (1, 1, 1) has no preimage under T.

39. (a) T(1, 1, 1, 1) = [−1 0 0 0][1]   [−1]
                        [ 0 1 0 0][1] = [ 1] = (−1, 1, 2, 1).
                        [ 0 0 2 0][1]   [ 2]
                        [ 0 0 0 1][1]   [ 1]
    (b) The preimage of (1, 1, 1, 1) is determined by solving the equation
        T(v₁, v₂, v₃, v₄) = [−1 0 0 0][v₁]   [1]
                            [ 0 1 0 0][v₂] = [1]
                            [ 0 0 2 0][v₃]   [1]
                            [ 0 0 0 1][v₄]   [1]
        for v = (v₁, v₂, v₃, v₄). The equivalent system of linear equations has solution
        v₁ = −1, v₂ = 1, v₃ = 1/2, v₄ = 1. So, the preimage is (−1, 1, 1/2, 1).

41. (a) When θ = 45°, cos θ = sin θ = 1/√2, so
        T(4, 4) = (4(1/√2) − 4(1/√2), 4(1/√2) + 4(1/√2)) = (0, 4√2).
    (b) When θ = 30°, cos θ = √3/2 and sin θ = 1/2, so
        T(4, 4) = (4(√3/2) − 4(1/2), 4(1/2) + 4(√3/2)) = (2√3 − 2, 2√3 + 2).
    (c) When θ = 120°, cos θ = −1/2 and sin θ = √3/2, so
        T(5, 0) = (5(−1/2) − 0(√3/2), 5(√3/2) + 0(−1/2)) = (−5/2, 5√3/2).

43. True. Dₓ is a linear transformation and therefore preserves addition and scalar multiplication.

45. False. sin 2x ≠ 2 sin x in general.

47. If Dₓ(g(x)) = 2x + 1, then g(x) = x² + x + C.
49. If Dₓ(g(x)) = sin x, then g(x) = −cos x + C.

51. (a) T(3x² − 2) = ∫₀¹ (3x² − 2) dx = [x³ − 2x]₀¹ = −1
    (b) T(x³ − x⁵) = ∫₀¹ (x³ − x⁵) dx = [x⁴/4 − x⁶/6]₀¹ = 1/12
    (c) T(4x − 6) = ∫₀¹ (4x − 6) dx = [2x² − 6x]₀¹ = −4

53. First express (1, 0) in terms of (1, 1) and (1, −1): (1, 0) = (1/2)(1, 1) + (1/2)(1, −1). Then,
        T(1, 0) = T[(1/2)(1, 1) + (1/2)(1, −1)] = (1/2)T(1, 1) + (1/2)T(1, −1)
                = (1/2)(1, 0) + (1/2)(0, 1) = (1/2, 1/2).
    Similarly, express (0, 2) = 1(1, 1) − 1(1, −1). Then,
        T(0, 2) = T[(1, 1) − (1, −1)] = T(1, 1) − T(1, −1) = (1, 0) − (0, 1) = (1, −1).
55. T(2 − 6x + x²) = 2T(1) − 6T(x) + T(x²) = 2x − 6(1 + x) + (1 + x + x²) = −5 − 3x + x²

57. (a) True. See discussion before "Definition of a Linear Transformation," page 362.
    (b) False. cos(x₁ + x₂) ≠ cos x₁ + cos x₂ in general.
    (c) True. See Example 10, page 370.

59. (a) T(x, y) = T[x(1, 0) + y(0, 1)] = xT(1, 0) + yT(0, 1) = x(1, 0) + y(0, 0) = (x, 0)
    (b) T is the projection onto the x-axis.

61. (a) Because projᵥu = ((u · v)/(v · v))v and T(u) = projᵥu, you have
            T(x, y) = ((x(1) + y(1))/(1² + 1²))(1, 1) = ((x + y)/2, (x + y)/2).
    (b) From the result of part (a), with (x, y) = (5, 0), T(5, 0) = (5/2, 5/2).
    (c) From the result of part (a),
            T(u + w) = T[(x₁, y₁) + (x₂, y₂)] = T(x₁ + x₂, y₁ + y₂)
                     = ((x₁ + x₂ + y₁ + y₂)/2, (x₁ + x₂ + y₁ + y₂)/2)
                     = ((x₁ + y₁)/2, (x₁ + y₁)/2) + ((x₂ + y₂)/2, (x₂ + y₂)/2)
                     = T(x₁, y₁) + T(x₂, y₂) = T(u) + T(w).
        From the result of part (a),
            T(cu) = T[c(x, y)] = T(cx, cy) = ((cx + cy)/2, (cx + cy)/2)
                  = c((x + y)/2, (x + y)/2) = cT(x, y) = cT(u).

63. Observe that
        Au = [1/2 1/2][x]   [x/2 + y/2]
             [1/2 1/2][y] = [x/2 + y/2] = T(u).
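The projection formula of Exercise 61, projᵥu = ((u · v)/(v · v))v, can be sketched with exact arithmetic (helper name `proj` is made up, not from the text):

```python
from fractions import Fraction

def proj(u, v):
    """Projection of u onto v: ((u . v) / (v . v)) * v, over the rationals."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    c = Fraction(dot(u, v), dot(v, v))
    return tuple(c * x for x in v)
```

With v = (1, 1) this reproduces part (b): T(5, 0) = (5/2, 5/2).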
65. (a) Because T(0, 0) = (−h, −k) ≠ (0, 0), a translation cannot be a linear transformation.
    (b) T(0, 0) = (0 − 2, 0 + 1) = (−2, 1)
        T(2, −1) = (2 − 2, −1 + 1) = (0, 0)
        T(5, 4) = (5 − 2, 4 + 1) = (3, 5)
    (c) Because T(x, y) = (x − h, y − k) = (x, y) implies x − h = x and y − k = y, a translation has no fixed points.

67. There are many possible examples. For instance, let T : R³ → R³ be given by T(x, y, z) = (0, 0, 0). Then if {v₁, v₂, v₃} is any set of linearly independent vectors, their images T(v₁), T(v₂), T(v₃) form a linearly dependent set.

69. Let T(v) = v be the identity transformation. Because T(u + v) = u + v = T(u) + T(v) and T(cu) = cu = cT(u), T is a linear transformation.

71. T is a linear transformation because
        T(A + B) = (a₁₁ + b₁₁) + ⋯ + (aₙₙ + bₙₙ)
                 = (a₁₁ + ⋯ + aₙₙ) + (b₁₁ + ⋯ + bₙₙ)
                 = T(A) + T(B)
    and
        T(cA) = ca₁₁ + ⋯ + caₙₙ = c(a₁₁ + ⋯ + aₙₙ) = cT(A).

73. Let v = c₁v₁ + ⋯ + cₙvₙ be an arbitrary vector in V. Then,
        T(v) = T(c₁v₁ + ⋯ + cₙvₙ) = c₁T(v₁) + ⋯ + cₙT(vₙ) = 0 + ⋯ + 0 = 0.
Section 6.2   The Kernel and Range of a Linear Transformation

1. Because T sends every vector in R³ to the zero vector, the kernel is R³.

3. Solving the equation T(x, y, z, w) = (y, x, w, z) = (0, 0, 0, 0) yields the trivial solution x = y = z = w = 0. So, ker(T) = {(0, 0, 0, 0)}.

5. Solving the equation T(a₀ + a₁x + a₂x² + a₃x³) = a₀ = 0 yields solutions of the form a₀ = 0, with a₁, a₂, and a₃ any real numbers. So, ker(T) = {a₁x + a₂x² + a₃x³ : a₁, a₂, a₃ ∈ R}.

7. Solving the equation T(a₀ + a₁x + a₂x²) = a₁ + 2a₂x = 0 yields solutions of the form a₁ = a₂ = 0, with a₀ any real number. So, ker(T) = {a₀ : a₀ ∈ R}.

9. Solving the equation T(x, y) = (x + 2y, y − x) = (0, 0) yields the trivial solution x = y = 0. So, ker(T) = {(0, 0)}.

11. (a) Because
            T(v) = [1 2][v₁]   [0]
                   [3 4][v₂] = [0]
        has only the trivial solution v₁ = v₂ = 0, the kernel is {(0, 0)}.
    (b) Transpose A and find the equivalent reduced row-echelon form.
            Aᵀ = [1 3]  ⇒  [1 0]
                 [2 4]      [0 1]
        So, a basis for the range of A is {(1, 0), (0, 1)}.

13. (a) Because
            T(v) = [1 −1 2][v₁]   [0]
                   [0  1 2][v₂] = [0]
                           [v₃]
        has solutions of the form (−4t, −2t, t), where t is any real number, a basis for ker(T) is {(−4, −2, 1)}.
    (b) Transpose A and find the equivalent reduced row-echelon form.
            Aᵀ = [ 1 0]      [1 0]
                 [−1 1]  ⇒  [0 1]
                 [ 2 2]      [0 0]
        So, a basis for the range of A is {(1, 0), (0, 1)}.
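A kernel claim like Exercise 13(a) can be checked directly: the basis vector (−4, −2, 1), and every scalar multiple of it, must be sent to the zero vector by A. A minimal sketch:

```python
from fractions import Fraction

def matvec(A, v):
    """Multiply matrix A (nested lists) by vector v."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A = [[1, -1, 2], [0, 1, 2]]   # matrix of T from Exercise 13
k = [-4, -2, 1]               # claimed kernel basis vector
```

Both A·k and A·(tk) for an arbitrary rational t come out to the zero vector.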
15. (a) Because
            T(v) = [ 1  2][v₁]   [0]
                   [−1 −2][v₂] = [0]
                   [ 1  1]       [0]
        has only the trivial solution v₁ = v₂ = 0, the kernel is {(0, 0)}.
    (b) Transpose A and find the equivalent reduced row-echelon form.
            Aᵀ = [1 −1 1]  ⇒  [1 −1 0]
                 [2 −2 1]      [0  0 1]
        So, a basis for the range of A is {(1, −1, 0), (0, 0, 1)}.

17. (a) Because
            T(v) = [ 1  2 −1  4][v₁]   [0]
                   [ 3  1  2 −1][v₂] = [0]
                   [−4 −3 −1 −3][v₃]   [0]
                   [−1 −2  1  1][v₄]   [0]
        has solutions of the form (−t, t, t, 0), where t is any real number, a basis for ker(T) is {(−1, 1, 1, 0)}.
    (b) Transpose A and find the equivalent reduced row-echelon form.
            Aᵀ = [ 1  3 −4 −1]      [1 0 −1 0]
                 [ 2  1 −3 −2]  ⇒  [0 1 −1 0]
                 [−1  2 −1  1]      [0 0  0 1]
                 [ 4 −1 −3  1]      [0 0  0 0]
        So, a basis for the range of A is {(1, 0, −1, 0), (0, 1, −1, 0), (0, 0, 0, 1)}. Equivalently, you could use columns 1, 2, and 4 of the original matrix A.

19. (a) Because T(x) = 0 has only the trivial solution x = (0, 0), the kernel of T is {(0, 0)}.
    (b) nullity(T) = dim(ker(T)) = 0
    (c) Transpose A and find the equivalent reduced row-echelon form.
            Aᵀ = [−1 1]  ⇒  [1 0]
                 [ 1 1]      [0 1]
        So, range(T) = R².
    (d) rank(T) = dim(range(T)) = 2

21. (a) Because T(x) = 0 has only the trivial solution x = (0, 0), the kernel of T is {(0, 0)}.
    (b) nullity(T) = dim(ker(T)) = 0
    (c) Transpose A and find the equivalent reduced row-echelon form.
            Aᵀ = [ 5 1  1]  ⇒  [1 0  1/4]
                 [−3 1 −1]      [0 1 −1/4]
        So, range(T) = {(4s, 4t, s − t) : s, t ∈ R}.
    (d) rank(T) = dim(range(T)) = 2

23. (a) The kernel of T is given by the solution to the equation T(x) = 0. So,
        ker(T) = {(−11t, 6t, 4t) : t is any real number}.
    (b) nullity(T) = dim(ker(T)) = 1
    (c) Transpose A and find the equivalent reduced row-echelon form.
            Aᵀ = [ 0  4]      [1 0]
                 [−2  0]  ⇒  [0 1]
                 [ 3 11]      [0 0]
        So, range(T) = R².
    (d) rank(T) = dim(range(T)) = 2

25. (a) The kernel of T is given by the solution to the equation T(x) = 0. So, ker(T) = {(t, −3t) : t ∈ R}.
    (b) nullity(T) = dim(ker(T)) = 1
    (c) Transpose A and find its equivalent row-echelon form.
            Aᵀ = [9/10 3/10]  ⇒  [3 1]
                 [3/10 1/10]      [0 0]
        So, range(T) = {(3t, t) : t ∈ R}.
    (d) rank(T) = dim(range(T)) = 1
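The "transpose A and row-reduce" technique used throughout Exercises 11–25 can be sketched with a small reduced row-echelon routine over the rationals (a minimal sketch, not an industrial-strength implementation):

```python
from fractions import Fraction

def rref(M):
    """Reduced row-echelon form over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue                      # no pivot in this column
        M[r], M[pivot] = M[pivot], M[r]   # move pivot row up
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(rows):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
    return M

# Exercise 15(b): the nonzero rows of rref(A^T) span the range of T.
At = [[1, -1, 1], [2, -2, 1]]
```

Row-reducing this Aᵀ reproduces the basis {(1, −1, 0), (0, 0, 1)} found in the solution.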
27. (a) The kernel of T is given by the solution to the equation T(x) = 0. So,
        ker(T) = {(s + t, s, −2t) : s and t are real numbers}.
    (b) nullity(T) = dim(ker(T)) = 2
    (c) Transpose A and find the equivalent reduced row-echelon form.
            Aᵀ = [ 4/9 −4/9  2/9]      [1 −1 1/2]
                 [−4/9  4/9 −2/9]  ⇒  [0  0  0 ]
                 [ 2/9 −2/9  1/9]      [0  0  0 ]
        So, range(T) = {(2t, −2t, t) : t is any real number}.
    (d) rank(T) = dim(range(T)) = 1

29. (a) The kernel of T is given by the solution to the equation T(x) = 0. So,
        ker(T) = {(2s − t, t, 4s, −5s, s) : s, t ∈ R}.
    (b) nullity(T) = dim(ker(T)) = 2
    (c) Transpose A and find its equivalent row-echelon form.
            Aᵀ = [ 2  1  3  6]      [7 0 0  8]
                 [ 2  1  3  6]      [0 7 0 20]
                 [−3  1 −5 −2]  ⇒  [0 0 7  2]
                 [ 1  1  0  4]      [0 0 0  0]
                 [13 −1 14 16]      [0 0 0  0]
        So, range(T) = {(7r, 7s, 7t, 8r + 20s + 2t) : r, s, t ∈ R}. Equivalently, the range of T is spanned by columns 1, 3, and 4 of A.
    (d) rank(T) = dim(range(T)) = 3

31. Use Theorem 6.5 to find nullity(T).
        rank(T) + nullity(T) = dim(R³)
        nullity(T) = 3 − 2 = 1
    Because nullity(T) = dim(ker(T)) = 1, the kernel of T is a line in space. Furthermore, because rank(T) = dim(range(T)) = 2, the range of T is a plane in space.

33. Because rank(T) + nullity(T) = 3 and you are given rank(T) = 0, it follows that nullity(T) = 3. So, the kernel of T is all of R³, and the range is the single point {(0, 0, 0)}.

35. The preimage of (0, 0, 0) is {(0, 0, 0)}. So, nullity(T) = 0, and the rank of T is determined as follows.
        rank(T) + nullity(T) = dim(R³)
        rank(T) = 3 − 0 = 3
    The kernel of T is the single point (0, 0, 0). Because rank(T) = dim(range(T)) = 3, the range of T is R³.

37. The kernel of T is determined by solving
        T(x, y, z) = ((x + 2y + 2z)/9)(1, 2, 2) = (0, 0, 0),
    which implies that x + 2y + 2z = 0. So, the nullity of T is 2, and the kernel is a plane. The range of T is found by observing that rank(T) + nullity(T) = 3. That is, the range of T is 1-dimensional, a line in R³, and range(T) = {(t, 2t, 2t) : t ∈ R}.

39. rank(T) + nullity(T) = dim R⁴ ⇒ nullity(T) = 4 − 2 = 2

41. rank(T) + nullity(T) = dim R⁴ ⇒ nullity(T) = 4 − 0 = 4

43. (a) Zero: (0, 0, 0, 0); standard basis: {(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)}
    (b) Zero: the 4 × 1 zero column vector; standard basis: the four standard unit vectors written as 4 × 1 columns
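Exercises 29–41 all lean on Theorem 6.5 (rank–nullity): rank(T) + nullity(T) = n for T : Rⁿ → Rᵐ. A minimal sketch that computes the rank by forward elimination over the rationals and reads off the nullity:

```python
from fractions import Fraction

def rank(M):
    """Rank of M via forward elimination with exact fractions."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, len(M)):
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# Exercise 23's 2x3 matrix: rank 2, so nullity = 3 - 2 = 1 (the kernel is a line).
A = [[0, -2, 3], [4, 0, 11]]
```

The computed rank confirms the dimension bookkeeping in the solutions above.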
    (c) Zero: [0 0]   standard basis: {[1 0], [0 1], [0 0], [0 0]}
              [0 0]                    [0 0]  [0 0]  [1 0]  [0 1]
    (d) Zero: p(x) = 0; standard basis: {1, x, x², x³}
    (e) Zero: (0, 0, 0, 0, 0); standard basis: {(1, 0, 0, 0, 0), (0, 1, 0, 0, 0), (0, 0, 1, 0, 0), (0, 0, 0, 1, 0)}

45. Solve the equation T(p) = d/dx (a₀ + a₁x + a₂x² + a₃x³ + a₄x⁴) = 0, yielding p = a₀.
    So, ker(T) = {p(x) = a₀ : a₀ is a real number}. (The constant polynomials)

47. First compute T(u) = projᵥu for u = (x, y, z).
        T(u) = projᵥu = (((x, y, z) · (2, −1, 1))/((2, −1, 1) · (2, −1, 1)))(2, −1, 1)
             = ((2x − y + z)/6)(2, −1, 1)
    (a) Setting T(u) = 0, you have 2x − y + z = 0, so nullity(T) = 2 and rank(T) = 3 − 2 = 1.
    (b) A basis for the kernel of T is obtained by solving 2x − y + z = 0. Letting t = z and s = y, you have
            x = (1/2)(y − z) = (1/2)s − (1/2)t.
        So, a basis for ker(T) is {(1/2, 1, 0), (−1/2, 0, 1)}, or {(1, 2, 0), (1, 0, −2)}.

49. Because |A| = −1 ≠ 0, the homogeneous equation Ax = 0 has only the trivial solution. So, ker(T) = {(0, 0)} and T is one-to-one (by Theorem 6.6). Furthermore, because
        rank(T) = dim(R²) − nullity(T) = 2 − 0 = 2 = dim(R²),
    T is onto (by Theorem 6.7).

51. Because |A| = −1 ≠ 0, the homogeneous equation Ax = 0 has only the trivial solution. So, ker(T) = {(0, 0, 0)} and T is one-to-one (by Theorem 6.6). Furthermore, because
        rank(T) = dim(R³) − nullity(T) = 3 − 0 = 3 = dim(R³),
    T is onto (by Theorem 6.7).

53. (a) False. See "Definition of Kernel of a Linear Transformation," page 374.
    (b) False. See Theorem 6.4, page 378.
    (c) True. See discussion before Theorem 6.6, page 382.
    (d) True. See discussion before "Definition of Isomorphism," page 384.

55. (a) A is an n × n matrix and det(A) = det(Aᵀ) ≠ 0. So, the reduced row-echelon matrix equivalent to Aᵀ has n nonzero rows, and you can conclude that rank(T) = n.
    (b) A is an n × n matrix and det(A) = det(Aᵀ) = 0. So, the reduced row-echelon matrix equivalent to Aᵀ has at least one row of zeros, and you can conclude that rank(T) < n.

57. Theorem 6.9 tells you that if M_{m,n} and M_{j,k} are of the same dimension, then they are isomorphic. So, you can conclude that mn = jk.

59. From Theorem 6.5, rank(T) + nullity(T) = n = dimension of V. T is one-to-one if and only if nullity(T) = 0, if and only if rank(T) = dimension of V.

61. Although they are not the same, they have the same dimension (4) and are isomorphic.
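Exercises 49 and 51 use the determinant test: for square A, det(A) ≠ 0 forces Ax = 0 to have only the trivial solution, making T one-to-one (and, with rank–nullity, onto). The exercise matrices themselves are not reproduced in the solution, so the matrix below is a hypothetical stand-in with determinant −1:

```python
def det2(A):
    """Determinant of a 2x2 matrix given as nested lists."""
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

# A hypothetical 2x2 matrix with |A| = -1, so the associated T is
# one-to-one and onto by Theorems 6.6 and 6.7.
A = [[0, 1], [1, 0]]
```

Any matrix with a nonzero determinant passes the same test.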
Section 6.3   Matrices for Linear Transformations

1. Because
        T([1; 0]) = [1; 1]   and   T([0; 1]) = [2; −2],
   the standard matrix for T is
        A = [1  2]
            [1 −2].

3. Because
        T([1; 0]) = [2; 1; −4]   and   T([0; 1]) = [−3; −1; 1],
   the standard matrix for T is
        A = [ 2 −3]
            [ 1 −1]
            [−4  1].

5. Because T sends each of the four standard basis vectors of R⁴ to the zero vector, the standard matrix for T is the 4 × 4 zero matrix
        A = [0 0 0 0]
            [0 0 0 0]
            [0 0 0 0]
            [0 0 0 0].

7. Because
        T([1; 0; 0]) = [0; 4],   T([0; 1; 0]) = [−2; 0],   and   T([0; 0; 1]) = [3; 11],
   the standard matrix for T is
        A = [0 −2  3]
            [4  0 11].

9. Because
        T([1; 0; 0]) = [13; 6],   T([0; 1; 0]) = [−9; 5],   and   T([0; 0; 1]) = [4; −3],
   the standard matrix for T is
        A = [13 −9  4]
            [ 6  5 −3].
   So,
        T(v) = [13 −9  4][ 1]   [35]
               [ 6  5 −3][−2] = [−7]
                          [ 1]
   and T(1, −2, 1) = (35, −7).

11. Because
        T([1; 0; 0]) = [1; 1; −1],   T([0; 1; 0]) = [1; −1; 0],   and   T([0; 0; 1]) = [0; 0; 1],
    the standard matrix for T is
        A = [ 1  1 0]
            [ 1 −1 0]
            [−1  0 1].

13. Because
        T([1; 0]) = [1; 1; 2; 0]   and   T([0; 1]) = [1; −1; 0; 2],
    the standard matrix for T is
        A = [1  1]
            [1 −1]
            [2  0]
            [0  2].
    So,
        T(v) = A[3; −3] = [0; 6; 6; −6]
    and T(3, −3) = (0, 6, 6, −6).
15. Because
        T(e₁) = [1; 0],   T(e₂) = [1; 0],   T(e₃) = [0; 1],   and   T(e₄) = [0; 1],
    the standard matrix for T is
        A = [1 1 0 0]
            [0 0 1 1].
    So,
        T(v) = [1 1 0 0][ 1]   [0]
               [0 0 1 1][−1] = [0]
                        [ 1]
                        [−1]
    and T(1, −1, 1, −1) = (0, 0).

17. (a) The matrix of the reflection through the origin, T(x, y) = (−x, −y), is given by
            A = [T(1, 0)  T(0, 1)] = [−1  0]
                                     [ 0 −1].
    (b) The image of v = (3, 4) is given by
            Av = [−1  0][3]   [−3]
                 [ 0 −1][4] = [−4].
        So, T(3, 4) = (−3, −4).
    (c) The graph shows v = (3, 4) and its image T(v) = (−3, −4).

19. (a) The matrix of the reflection in the y-axis, T(x, y) = (−x, y), is given by
            A = [T(1, 0)  T(0, 1)] = [−1 0]
                                     [ 0 1].
    (b) The image of v = (2, −3) is given by
            Av = [−1 0][ 2]   [−2]
                 [ 0 1][−3] = [−3].
        So, T(2, −3) = (−2, −3).
    (c) The graph shows v = (2, −3) and its image T(v) = (−2, −3).

21. (a) The counterclockwise rotation of 135° in R² is given by
            T(x, y) = (cos(135°)x − sin(135°)y, sin(135°)x + cos(135°)y)
                    = (−(√2/2)x − (√2/2)y, (√2/2)x − (√2/2)y).
        So, the matrix is
            A = [T(1, 0)  T(0, 1)] = [−√2/2 −√2/2]
                                     [ √2/2 −√2/2].
    (b) The image of v = (4, 4) is given by
            Av = [−√2/2 −√2/2][4]   [−4√2]
                 [ √2/2 −√2/2][4] = [  0 ].
        So, T(4, 4) = (−4√2, 0).
    (c) The graph shows v = (4, 4) and its image T(v) = (−4√2, 0).
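The reflections of Exercises 17 and 19 are simply matrix–vector products with the standard matrices found above. A minimal sketch:

```python
def apply(A, v):
    """Apply matrix A (nested lists) to vector v."""
    return tuple(sum(a * x for a, x in zip(row, v)) for row in A)

A_origin = [[-1, 0], [0, -1]]   # Exercise 17: T(x, y) = (-x, -y)
A_yaxis  = [[-1, 0], [0, 1]]    # Exercise 19: T(x, y) = (-x, y)
```

Applying these reproduces T(3, 4) = (−3, −4) and T(2, −3) = (−2, −3).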
23. (a) The clockwise rotation of 60° is given by
            T(x, y) = (cos(−60°)x − sin(−60°)y, sin(−60°)x + cos(−60°)y)
                    = ((1/2)x + (√3/2)y, −(√3/2)x + (1/2)y).
        So, the matrix is
            A = [T(1, 0)  T(0, 1)] = [ 1/2   √3/2]
                                     [−√3/2  1/2 ].
    (b) The image of v = (1, 2) is given by
            Av = [ 1/2   √3/2][1]   [1/2 + √3]
                 [−√3/2  1/2 ][2] = [1 − √3/2].
        So, T(1, 2) = (1/2 + √3, 1 − √3/2).
    (c) The graph shows v = (1, 2) and its image T(v) = ((1 + 2√3)/2, (2 − √3)/2) under the −60° rotation.

25. (a) The standard matrix for T is
            A = [T(1, 0, 0)  T(0, 1, 0)  T(0, 0, 1)] = [1 0  0]
                                                       [0 1  0]
                                                       [0 0 −1].
    (b) The image of v = (3, 2, 2) is
            Av = [1 0  0][3]   [ 3]
                 [0 1  0][2] = [ 2]
                 [0 0 −1][2]   [−2].
        So, T(3, 2, 2) = (3, 2, −2).
    (c) The graph shows v = (3, 2, 2) and its image T(v) = (3, 2, −2).

27. (a) The standard matrix for T is
            A = [T(1, 0, 0)  T(0, 1, 0)  T(0, 0, 1)] = [1  0 0]
                                                       [0 −1 0]
                                                       [0  0 1].
    (b) The image of v = (1, 2, −1) is
            Av = [1  0 0][ 1]   [ 1]
                 [0 −1 0][ 2] = [−2]
                 [0  0 1][−1]   [−1].
        So, T(1, 2, −1) = (1, −2, −1).
    (c) The graph shows v = (1, 2, −1) and its image T(v) = (1, −2, −1).
29. (a) The counterclockwise rotation of 30° is given by
            T(x, y) = (cos(30°)x − sin(30°)y, sin(30°)x + cos(30°)y)
                    = ((√3/2)x − (1/2)y, (1/2)x + (√3/2)y).
        So, the matrix is
            A = [T(1, 0)  T(0, 1)] = [√3/2 −1/2]
                                     [ 1/2 √3/2].
    (b) The image of v = (1, 2) is
            Av = [√3/2 −1/2][1]   [√3/2 − 1]
                 [ 1/2 √3/2][2] = [1/2 + √3].
        So, T(1, 2) = (√3/2 − 1, 1/2 + √3).
    (c) The graph shows v = (1, 2) and its image T(v) = (√3/2 − 1, 1/2 + √3).

31. (a) The projection onto the vector w = (3, 1) is given by
            T(v) = proj_w v = ((3x + y)/10)(3, 1) = ((3/10)(3x + y), (1/10)(3x + y)).
        So, the matrix is
            A = [T(1, 0)  T(0, 1)] = [9/10 3/10]
                                     [3/10 1/10].
    (b) The image of v = (1, 4) is given by
            Av = [9/10 3/10][1]   [21/10]
                 [3/10 1/10][4] = [ 7/10].
        So, T(1, 4) = (21/10, 7/10).
    (c) The graph shows v = (1, 4) and its image T(v) = (21/10, 7/10).
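Exercise 31's matrix has a compact closed form: projecting onto w is multiplication by (1/(w · w)) w wᵀ. A minimal sketch with exact fractions:

```python
from fractions import Fraction

def projection_matrix(w):
    """Matrix of orthogonal projection onto w: (1 / (w . w)) * w w^T."""
    d = sum(x * x for x in w)
    return [[Fraction(wi * wj, d) for wj in w] for wi in w]

A = projection_matrix((3, 1))
Tv = tuple(sum(a * x for a, x in zip(row, (1, 4))) for row in A)
```

This reproduces both the matrix of entries 9/10, 3/10, 1/10 and the image T(1, 4) = (21/10, 7/10).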
33. (a) The reflection of a vector v through w is given by T(v) = 2 proj_w v − v:
            T(x, y) = 2((3x + y)/10)(3, 1) − (x, y) = ((4/5)x + (3/5)y, (3/5)x − (4/5)y).
        The standard matrix for T is
            A = [T(1, 0)  T(0, 1)] = [4/5  3/5]
                                     [3/5 −4/5].
    (b) The image of v = (1, 4) is
            Av = [4/5  3/5][1]   [ 16/5]
                 [3/5 −4/5][4] = [−13/5].
        So, T(1, 4) = (16/5, −13/5).
    (c) The graph shows v = (1, 4) and its image T(v) = (16/5, −13/5).

35. (a) The standard matrix for T is
            A = [2  3 −1]
                [3  0 −2]
                [2 −1  1].
    (b) The image of v = (1, 2, −1) is
            Av = [2  3 −1][ 1]   [ 9]
                 [3  0 −2][ 2] = [ 5]
                 [2 −1  1][−1]   [−1].
        So, T(1, 2, −1) = (9, 5, −1).
    (c) Using a graphing utility or a computer software program to perform the multiplication in part (b) gives the same result.

37. (a) The standard matrix for T is
            A = [1 −1 0  0]
                [0  0 1  0]
                [1  2 0 −1]
                [0  0 0  1].
    (b) The image of v = (1, 0, 1, −1) is
            Av = [1 −1 0  0][ 1]   [ 1]
                 [0  0 1  0][ 0] = [ 1]
                 [1  2 0 −1][ 1]   [ 2]
                 [0  0 0  1][−1]   [−1].
        So, T(1, 0, 1, −1) = (1, 1, 2, −1).
    (c) Using a graphing utility or a computer software program to perform the multiplication in part (b) gives the same result.

39. The standard matrices for T₁ and T₂ are
        A₁ = [1 −2]   and   A₂ = [2  0]
             [2  3]              [1 −1].
    The standard matrix for T = T₂ ∘ T₁ is
        A = A₂A₁ = [2  0][1 −2]   [ 2 −4]
                   [1 −1][2  3] = [−1 −5]
    and the standard matrix for T′ = T₁ ∘ T₂ is
        A′ = A₁A₂ = [1 −2][2  0]   [0  2]
                    [2  3][1 −1] = [7 −3].

41. The standard matrices for T₁ and T₂ are
        A₁ = [1 0 0]   and   A₂ = [0 0 0]
             [0 1 0]              [1 0 0]
             [0 0 1]              [0 0 0].
    The standard matrix for T = T₂ ∘ T₁ is
        A = A₂A₁ = A₂
    and the standard matrix for T′ = T₁ ∘ T₂ is
        A′ = A₁A₂ = A₂.
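Exercises 39–43 use the fact that the standard matrix of a composition T₂ ∘ T₁ is the product A₂A₁ (note the order). A minimal sketch checking Exercise 39:

```python
def matmul(A, B):
    """Product of matrices A and B given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A1 = [[1, -2], [2, 3]]   # standard matrix of T1
A2 = [[2, 0], [1, -1]]   # standard matrix of T2
```

A₂A₁ and A₁A₂ come out different, which is why T₂ ∘ T₁ and T₁ ∘ T₂ are different transformations.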
43. The standard matrices for T₁ and T₂ are
        A₁ = [−1  2]   and   A₂ = [1 −3 0]
             [ 1  1]              [3  0 1].
             [ 1 −1]
    The standard matrix for T = T₂ ∘ T₁ is
        A = A₂A₁ = [−4 −1]
                   [−2  5]
    and the standard matrix for T′ = T₁ ∘ T₂ is
        A′ = A₁A₂ = [ 5  3  2]
                    [ 4 −3  1]
                    [−2 −3 −1].

45. The standard matrix for T is
        A = [1  1]
            [1 −1].
    Because |A| = −2 ≠ 0, A is invertible.
        A⁻¹ = −(1/2)[−1 −1]   [1/2  1/2]
                    [−1  1] = [1/2 −1/2]
    So, T⁻¹(x, y) = ((1/2)x + (1/2)y, (1/2)x − (1/2)y).

47. The standard matrix for T is
        A = [1 0 0]
            [1 1 0]
            [1 1 1].
    Because |A| = 1 ≠ 0, A is invertible. Calculate A⁻¹ by Gauss-Jordan elimination,
        A⁻¹ = [ 1  0 0]
              [−1  1 0]
              [ 0 −1 1],
    and conclude that T⁻¹(x₁, x₂, x₃) = (x₁, −x₁ + x₂, −x₂ + x₃).

49. The standard matrix for T is
        A = [2 0]
            [0 0].
    Because |A| = 0, A is not invertible, and so T is not invertible.

51. The standard matrix for T is
        A = [1 1]
            [3 3].
    Because |A| = 0, A is not invertible, and so T is not invertible.

53. The standard matrix for T is
        A = [5 0]
            [0 5].
    Because |A| = 25 ≠ 0, A is invertible.
        A⁻¹ = [1/5  0 ]
              [ 0  1/5]
    So, T⁻¹(x, y) = (x/5, y/5).

55. The standard matrix for T is
        A = [1 −2 0 0]
            [0  1 0 0]
            [0  0 1 1]
            [0  0 1 0].
    Because |A| = −1 ≠ 0, A is invertible. Calculate A⁻¹ by Gauss-Jordan elimination,
        A⁻¹ = [1 2 0  0]
              [0 1 0  0]
              [0 0 0  1]
              [0 0 1 −1],
    and conclude that T⁻¹(x₁, x₂, x₃, x₄) = (x₁ + 2x₂, x₂, x₄, x₃ − x₄).

57. (a) The standard matrix for T is
            A′ = [1 1]
                 [1 0]
                 [0 1]
        and the image of v under T is
            A′v = [1 1][5]   [9]
                  [1 0][4] = [5]
                  [0 1]      [4].
        So, T(v) = (9, 5, 4).
    (b) The image of each vector in B is as follows.
            T(1, −1) = (0, 1, −1) = (1, 1, 0) + 0(0, 1, 1) − (1, 0, 1)
            T(0, 1) = (1, 0, 1) = 0(1, 1, 0) + 0(0, 1, 1) + (1, 0, 1)
        So, [T(1, −1)]_{B′} = [1; 0; −1] and [T(0, 1)]_{B′} = [0; 0; 1], which implies that
            A = [ 1 0]
                [ 0 0]
                [−1 1].
        Then, because [v]_B = [5; 9], you have
            [T(v)]_{B′} = A[v]_B = [ 1 0][5]   [5]
                                   [ 0 0][9] = [0]
                                   [−1 1]      [4].
        So, T(v) = 5(1, 1, 0) + 4(1, 0, 1) = (9, 5, 4).
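Exercise 45 inverts a 2 × 2 standard matrix; the adjugate formula A⁻¹ = (1/|A|)[d −b; −c a] is enough for that case. A minimal sketch with exact fractions:

```python
from fractions import Fraction

def inv2(A):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    d = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    assert d != 0, "T is not invertible"
    return [[Fraction(A[1][1], d), Fraction(-A[0][1], d)],
            [Fraction(-A[1][0], d), Fraction(A[0][0], d)]]

Ainv = inv2([[1, 1], [1, -1]])   # standard matrix of Exercise 45
```

The result matches A⁻¹ from the solution, giving T⁻¹(x, y) = (x/2 + y/2, x/2 − y/2).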
59. (a) The standard matrix for T is
            A′ = [1 −1  0]
                 [0  1 −1]
        and the image of v under T is
            A′v = [1 −1  0][ 1]   [−1]
                  [0  1 −1][ 2] = [ 5].
                           [−3]
        So, T(v) = (−1, 5).
    (b) The image of each vector in B is as follows.
            T(1, 1, 1) = (0, 0) = 0(1, 2) + 0(1, 1)
            T(1, 1, 0) = (0, 1) = (1, 2) − (1, 1)
            T(0, 1, 1) = (−1, 0) = (1, 2) − 2(1, 1)
        So, [T(1, 1, 1)]_{B′} = [0; 0], [T(1, 1, 0)]_{B′} = [1; −1], and [T(0, 1, 1)]_{B′} = [1; −2], which implies that
            A = [0  1  1]
                [0 −1 −2].
        Then, because [v]_B = [−4; 5; 1], you have
            [T(v)]_{B′} = A[v]_B = [0  1  1][−4]   [ 6]
                                   [0 −1 −2][ 5] = [−7].
                                            [ 1]
        So, T(v) = 6(1, 2) − 7(1, 1) = (−1, 5).

61. (a) The standard matrix for T is
            A′ = [2 0 0]
                 [1 1 0]
                 [0 1 1]
                 [1 0 1]
        and the image of v = (1, −5, 2) under T is
            A′v = [2 0 0][ 1]   [ 2]
                  [1 1 0][−5] = [−4]
                  [0 1 1][ 2]   [−3]
                  [1 0 1]       [ 3].
        So, T(v) = (2, −4, −3, 3).
    (b) Because
            T(2, 0, 1) = (4, 2, 1, 3) = 2(1, 0, 0, 1) + (0, 1, 0, 1) + (1, 0, 1, 0) + (1, 1, 0, 0)
            T(0, 2, 1) = (0, 2, 3, 1) = −2(1, 0, 0, 1) + 3(0, 1, 0, 1) + 3(1, 0, 1, 0) − (1, 1, 0, 0)
            T(1, 2, 1) = (2, 3, 3, 2) = −(1, 0, 0, 1) + 3(0, 1, 0, 1) + 3(1, 0, 1, 0),
        the matrix for T relative to B and B′ is
            A = [2 −2 −1]
                [1  3  3]
                [1  3  3]
                [1 −1  0].
        Because v = (1, −5, 2) = (9/2)(2, 0, 1) + (11/2)(0, 2, 1) − 8(1, 2, 1),
            [T(v)]_{B′} = A[v]_B = A[9/2; 11/2; −8] = [6; −3; −3; −1].
        So, T(1, −5, 2) = 6(1, 0, 0, 1) − 3(0, 1, 0, 1) − 3(1, 0, 1, 0) − (1, 1, 0, 0) = (2, −4, −3, 3).

63. (a) The standard matrix for T is
            A′ = [ 1 1  1]
                 [−1 0  2]
                 [ 0 2 −1]
        and the image of v = (4, −5, 10) under T is
            A′v = [ 1 1  1][ 4]   [  9]
                  [−1 0  2][−5] = [ 16]
                  [ 0 2 −1][10]   [−20].
        So, T(v) = (9, 16, −20).
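The two-step calculation in Exercise 59(b) — multiply [v]_B by the matrix of T relative to B and B′, then expand the result in B′ — can be sketched directly:

```python
# Exercise 59(b): columns of A are the B'-coordinates of the images T(b_i).
A = [[0, 1, 1], [0, -1, -2]]
v_B = [-4, 5, 1]                 # coordinates of v = (1, 2, -3) relative to B

# Step 1: coordinates of T(v) relative to B'.
Tv_Bp = [sum(a * c for a, c in zip(row, v_B)) for row in A]

# Step 2: expand in B' = {(1, 2), (1, 1)} to recover T(v) itself.
Bp = [(1, 2), (1, 1)]
Tv = tuple(sum(c * b[i] for c, b in zip(Tv_Bp, Bp)) for i in range(2))
```

The intermediate coordinates are (6, −7), and the expansion returns T(v) = (−1, 5), agreeing with part (a).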
(b) Because

T(2, 0, 1) = (3, 0, −1) = 2(1, 1, 1) + 1(1, 1, 0) − 3(0, 1, 1)
T(0, 2, 1) = (3, 2, 3) = 4(1, 1, 1) − (1, 1, 0) − (0, 1, 1)
T(1, 2, 1) = (4, 1, 3) = 6(1, 1, 1) − 2(1, 1, 0) − 3(0, 1, 1),

the matrix for T relative to B and B′ is

A = ⎡ 2  4  6⎤
    ⎢ 1 −1 −2⎥
    ⎣−3 −1 −3⎦

Because v = (4, −5, 10) = (25/2)(2, 0, 1) + (37/2)(0, 2, 1) − 21(1, 2, 1),

[T(v)]_B′ = A[v]_B = ⎡ 2  4  6⎤⎡25/2⎤   ⎡−27⎤
                     ⎢ 1 −1 −2⎥⎢37/2⎥ = ⎢ 36⎥
                     ⎣−3 −1 −3⎦⎣−21 ⎦   ⎣  7⎦

So, T(v) = −27(1, 1, 1) + 36(1, 1, 0) + 7(0, 1, 1) = (9, 16, −20).

65. The image of each vector in B is as follows.

T(1) = x,   T(x) = x²,   T(x²) = x³

So, the matrix of T relative to B and B′ is

A = ⎡0 0 0⎤
    ⎢1 0 0⎥
    ⎢0 1 0⎥
    ⎣0 0 1⎦

67. The image of each vector in B is as follows.

Dx(1) = 0 = 0(1) + 0x + 0eˣ + 0xeˣ
Dx(x) = 1 = 1 + 0x + 0eˣ + 0xeˣ
Dx(eˣ) = eˣ = 0(1) + 0x + eˣ + 0xeˣ
Dx(xeˣ) = eˣ + xeˣ = 0(1) + 0x + eˣ + xeˣ

So,

[Dx(1)]_B = [0; 0; 0; 0],  [Dx(x)]_B = [1; 0; 0; 0],  [Dx(eˣ)]_B = [0; 0; 1; 0],  [Dx(xeˣ)]_B = [0; 0; 1; 1],

which implies that

A = ⎡0 1 0 0⎤
    ⎢0 0 0 0⎥
    ⎢0 0 1 1⎥
    ⎣0 0 0 1⎦

69. Because 3x − 2xeˣ = 0(1) + 3(x) + 0(eˣ) − 2(xeˣ),

A[v]_B = ⎡0 1 0 0⎤⎡ 0⎤   ⎡ 3⎤
         ⎢0 0 0 0⎥⎢ 3⎥ = ⎢ 0⎥
         ⎢0 0 1 1⎥⎢ 0⎥   ⎢−2⎥
         ⎣0 0 0 1⎦⎣−2⎦   ⎣−2⎦

So, Dx(3x − 2xeˣ) = 3 − 2eˣ − 2xeˣ.

71. (a) The image of each vector in B is as follows.

T(1) = ∫₀ˣ dt = x = 0(1) + x + 0x² + 0x³ + 0x⁴
T(x) = ∫₀ˣ t dt = (1/2)x² = 0(1) + 0x + (1/2)x² + 0x³ + 0x⁴
T(x²) = ∫₀ˣ t² dt = (1/3)x³ = 0(1) + 0x + 0x² + (1/3)x³ + 0x⁴
T(x³) = ∫₀ˣ t³ dt = (1/4)x⁴ = 0(1) + 0x + 0x² + 0x³ + (1/4)x⁴

So,

A = ⎡0  0   0   0 ⎤
    ⎢1  0   0   0 ⎥
    ⎢0 1/2  0   0 ⎥
    ⎢0  0  1/3  0 ⎥
    ⎣0  0   0  1/4⎦

(b) The image of p(x) = 6 − 2x + 3x³ under T relative to the basis B′ is given by

A[p]_B = ⎡0  0   0   0 ⎤⎡ 6⎤   ⎡ 0 ⎤
         ⎢1  0   0   0 ⎥⎢−2⎥ = ⎢ 6 ⎥
         ⎢0 1/2  0   0 ⎥⎢ 0⎥   ⎢−1 ⎥
         ⎢0  0  1/3  0 ⎥⎣ 3⎦   ⎢ 0 ⎥
         ⎣0  0   0  1/4⎦       ⎣3/4⎦

So, T(p) = 0(1) + 6x − x² + 0x³ + (3/4)x⁴ = 6x − x² + (3/4)x⁴ = ∫₀ˣ p(t) dt.
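The two routes in Exercise 63 — the standard matrix, and the matrix relative to B and B′ — can be cross-checked numerically. This is a sketch assuming numpy is available (the manual itself uses no software); `A_rel` is recovered from the bases rather than entered by hand:

```python
import numpy as np

# Standard matrix of T (Exercise 63) and the test vector.
A_std = np.array([[1, 1, 1], [-1, 0, 2], [0, 2, -1]], dtype=float)
v = np.array([4, -5, 10], dtype=float)

# Bases B (domain) and B' (codomain), stored column-wise.
B = np.column_stack([(2, 0, 1), (0, 2, 1), (1, 2, 1)]).astype(float)
Bp = np.column_stack([(1, 1, 1), (1, 1, 0), (0, 1, 1)]).astype(float)

# Matrix of T relative to B and B': columns are [T(b_i)]_{B'}.
A_rel = np.linalg.solve(Bp, A_std @ B)

# Route 1: standard coordinates directly.
Tv_direct = A_std @ v                      # (9, 16, -20)

# Route 2: coordinates relative to B, then back out of B'.
v_B = np.linalg.solve(B, v)                # (25/2, 37/2, -21)
Tv_via_bases = Bp @ (A_rel @ v_B)

assert np.allclose(v_B, [12.5, 18.5, -21])
assert np.allclose(Tv_direct, Tv_via_bases)
```

Solving `Bp x = A_std @ B` column by column is exactly the hand computation of expressing each image in the basis B′.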
Chapter 6   Linear Transformations
73. (a) True. See the discussion under "Composition of Linear Transformations," pages 390–391.
(b) False. See Example 3, page 392.
(c) False. See Theorem 6.12, page 393.

75. The standard basis for M_{2,3} is

B = {[1 0 0; 0 0 0], [0 1 0; 0 0 0], [0 0 1; 0 0 0], [0 0 0; 1 0 0], [0 0 0; 0 1 0], [0 0 0; 0 0 1]}

and the standard basis for M_{3,2} is

B′ = {[1 0; 0 0; 0 0], [0 1; 0 0; 0 0], [0 0; 1 0; 0 0], [0 0; 0 1; 0 0], [0 0; 0 0; 1 0], [0 0; 0 0; 0 1]}

By finding the image of each vector in B, you can find A. Because T takes each matrix to its transpose,

T([1 0 0; 0 0 0]) = [1 0; 0 0; 0 0],   T([0 1 0; 0 0 0]) = [0 0; 1 0; 0 0],   T([0 0 1; 0 0 0]) = [0 0; 0 0; 1 0],
T([0 0 0; 1 0 0]) = [0 1; 0 0; 0 0],   T([0 0 0; 0 1 0]) = [0 0; 0 1; 0 0],   T([0 0 0; 0 0 1]) = [0 0; 0 0; 0 1]

So,

A = ⎡1 0 0 0 0 0⎤
    ⎢0 0 0 1 0 0⎥
    ⎢0 1 0 0 0 0⎥
    ⎢0 0 0 0 1 0⎥
    ⎢0 0 1 0 0 0⎥
    ⎣0 0 0 0 0 1⎦

77. Let (T2 ∘ T1)(u) = (T2 ∘ T1)(v). Then

T2(T1(u)) = T2(T1(v))
T1(u) = T1(v)     because T2 is one-to-one
u = v             because T1 is one-to-one

Because T2 ∘ T1 is one-to-one from V to V, it is also onto. The inverse is T1⁻¹ ∘ T2⁻¹ because

(T2 ∘ T1) ∘ (T1⁻¹ ∘ T2⁻¹) = T2 ∘ I ∘ T2⁻¹ = T2 ∘ T2⁻¹ = I.

79. Sometimes it is preferable to use a nonstandard basis. If A1 and A2 are the standard matrices for T1 and T2 respectively, then the standard matrix for T2 ∘ T1 is A2A1 and the standard matrix for T1⁻¹ ∘ T2⁻¹ is A1⁻¹A2⁻¹. Because

(A2A1)(A1⁻¹A2⁻¹) = A2(I)A2⁻¹ = A2A2⁻¹ = I,

you have that the inverse of T2 ∘ T1 is T1⁻¹ ∘ T2⁻¹.
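The 6 × 6 matrix found in Exercise 75 can be generated and tested mechanically. A small numpy sketch (numpy assumed; the row-major coordinate ordering below matches the standard bases listed above):

```python
import numpy as np

# Permutation matrix from Exercise 75: coordinates of a 2x3 matrix M
# (taken row by row) are sent to the coordinates of M^T (3x2, row by row).
A = np.zeros((6, 6))
for i in range(2):
    for j in range(3):
        A[j * 2 + i, i * 3 + j] = 1.0   # entry (i, j) lands at (j, i)

M = np.arange(1, 7, dtype=float).reshape(2, 3)   # an arbitrary 2x3 matrix
coords = M.reshape(-1)                           # [M]_B, row-major
transposed = (A @ coords).reshape(3, 2)          # [T(M)]_{B'} reassembled

assert np.array_equal(transposed, M.T)
```

Any 2 × 3 test matrix works here, since A acts on coordinates, not on a particular M.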
Section 6.4   Transition Matrices and Similarity

1. (a) The standard matrix for T is

A = ⎡ 2 −1⎤
    ⎣−1  1⎦

Furthermore, the transition matrix P from B′ to the standard basis B, and its inverse, are

P = ⎡ 1 0⎤   and   P⁻¹ = ⎡ 1   0 ⎤
    ⎣−2 3⎦               ⎣2/3 1/3⎦

Therefore, the matrix for T relative to B′ is

A′ = P⁻¹AP = ⎡ 1   0 ⎤⎡ 2 −1⎤⎡ 1 0⎤ = ⎡ 4  −3⎤
             ⎣2/3 1/3⎦⎣−1  1⎦⎣−2 3⎦   ⎣5/3 −1⎦

(b) Because A′ = P⁻¹AP, it follows that A and A′ are similar.

3. (a) The standard matrix for T is

A = ⎡1 1⎤
    ⎣0 4⎦

Furthermore, the transition matrix P from B′ to the standard basis B, and its inverse, are

P = ⎡−4  1⎤   and   P⁻¹ = ⎡−1/3 −1/3⎤
    ⎣ 1 −1⎦               ⎣−1/3 −4/3⎦

Therefore, the matrix for T relative to B′ is

A′ = P⁻¹AP = ⎡−1/3 −1/3⎤⎡1 1⎤⎡−4  1⎤ = ⎡ −1/3  4/3⎤
             ⎣−1/3 −4/3⎦⎣0 4⎦⎣ 1 −1⎦   ⎣−13/3 16/3⎦

(b) Because A′ = P⁻¹AP, it follows that A and A′ are similar.

5. (a) The standard matrix for T is

A = ⎡1 0 0⎤
    ⎢0 1 0⎥
    ⎣0 0 1⎦

Furthermore, the transition matrix P from B′ to the standard basis B, and its inverse, are

P = ⎡1 1 0⎤   and   P⁻¹ = ⎡ 1/2  1/2 −1/2⎤
    ⎢1 0 1⎥               ⎢ 1/2 −1/2  1/2⎥
    ⎣0 1 1⎦               ⎣−1/2  1/2  1/2⎦

Therefore, the matrix for T relative to B′ is

A′ = P⁻¹AP = ⎡ 1/2  1/2 −1/2⎤⎡1 0 0⎤⎡1 1 0⎤   ⎡1 0 0⎤
             ⎢ 1/2 −1/2  1/2⎥⎢0 1 0⎥⎢1 0 1⎥ = ⎢0 1 0⎥
             ⎣−1/2  1/2  1/2⎦⎣0 0 1⎦⎣0 1 1⎦   ⎣0 0 1⎦

(b) Because A′ = P⁻¹AP, it follows that A and A′ are similar.

7. (a) The standard matrix for T is

A = ⎡1 −1  2⎤
    ⎢2  1 −1⎥
    ⎣1  2  1⎦

Furthermore, the transition matrix P from B′ to the standard basis B, and its inverse, are

P = ⎡1 0 1⎤   and   P⁻¹ = ⎡ 2/3 −1/3  1/3⎤
    ⎢0 2 2⎥               ⎢−1/3  1/6  1/3⎥
    ⎣1 2 0⎦               ⎣ 1/3  1/3 −1/3⎦

Therefore, the matrix for T relative to B′ is

A′ = P⁻¹AP = ⎡ 2/3 −1/3  1/3⎤⎡1 −1  2⎤⎡1 0 1⎤   ⎡ 7/3 10/3 −1/3⎤
             ⎢−1/3  1/6  1/3⎥⎢2  1 −1⎥⎢0 2 2⎥ = ⎢−1/6  4/3  8/3⎥
             ⎣ 1/3  1/3 −1/3⎦⎣1  2  1⎦⎣1 2 0⎦   ⎣ 2/3 −4/3 −2/3⎦
(b) Because A′ = P⁻¹AP, it follows that A and A′ are similar.

9. (a) The transition matrix P from B′ to B is found by row-reducing [B  B′] to [I  P].

[B  B′] = ⎡1 −2 −12 −4⎤  ⇒  [I  P] = ⎡1 0 6 4⎤
          ⎣3 −2   0  4⎦             ⎣0 1 9 4⎦

So,

P = ⎡6 4⎤
    ⎣9 4⎦

(b) The coordinate matrix for v relative to B is

[v]_B = P[v]_B′ = ⎡6 4⎤⎡−1⎤ = ⎡ 2⎤
                  ⎣9 4⎦⎣ 2⎦   ⎣−1⎦

Furthermore, the image of v under T relative to B is

[T(v)]_B = A[v]_B = ⎡3 2⎤⎡ 2⎤ = ⎡ 4⎤
                    ⎣0 4⎦⎣−1⎦   ⎣−4⎦

(c) The inverse of P is

P⁻¹ = ⎡−1/3  1/3⎤
      ⎣ 3/4 −1/2⎦

The matrix of T relative to B′ is then

A′ = P⁻¹AP = ⎡−1/3  1/3⎤⎡3 2⎤⎡6 4⎤ = ⎡0 −4/3⎤
             ⎣ 3/4 −1/2⎦⎣0 4⎦⎣9 4⎦   ⎣9    7⎦

(d) The image of v under T relative to B′ is

P⁻¹[T(v)]_B = ⎡−1/3  1/3⎤⎡ 4⎤ = ⎡−8/3⎤
              ⎣ 3/4 −1/2⎦⎣−4⎦   ⎣  5 ⎦

You can also find the image of v under T relative to B′ by

A′[v]_B′ = ⎡0 −4/3⎤⎡−1⎤ = ⎡−8/3⎤
           ⎣9    7⎦⎣ 2⎦   ⎣  5 ⎦

11. (a) The transition matrix P from B′ to B is found by row-reducing [B  B′] to [I  P].

[B  B′] = ⎡1 1 0 1 0 0⎤     ⎡1 0 0  1/2  1/2 −1/2⎤
          ⎢1 0 1 0 1 0⎥  ⇒  ⎢0 1 0  1/2 −1/2  1/2⎥
          ⎣0 1 1 0 0 1⎦     ⎣0 0 1 −1/2  1/2  1/2⎦

So,

P = (1/2)⎡ 1  1 −1⎤
         ⎢ 1 −1  1⎥
         ⎣−1  1  1⎦

(b) The coordinate matrix for v relative to B is

[v]_B = P[v]_B′ = (1/2)⎡ 1  1 −1⎤⎡ 1⎤   ⎡ 1⎤
                       ⎢ 1 −1  1⎥⎢ 0⎥ = ⎢ 0⎥
                       ⎣−1  1  1⎦⎣−1⎦   ⎣−1⎦

Furthermore, the image of v under T relative to B is

[T(v)]_B = A[v]_B = ⎡ 3/2 −1 −1/2⎤⎡ 1⎤   ⎡ 2⎤
                    ⎢−1/2  2  1/2⎥⎢ 0⎥ = ⎢−1⎥
                    ⎣ 1/2  1  5/2⎦⎣−1⎦   ⎣−2⎦

(c) The matrix of T relative to B′ is

A′ = P⁻¹AP = ⎡1 1 0⎤⎡ 3/2 −1 −1/2⎤⎡ 1/2  1/2 −1/2⎤   ⎡1 0 0⎤
             ⎢1 0 1⎥⎢−1/2  2  1/2⎥⎢ 1/2 −1/2  1/2⎥ = ⎢0 2 0⎥
             ⎣0 1 1⎦⎣ 1/2  1  5/2⎦⎣−1/2  1/2  1/2⎦   ⎣0 0 3⎦

(d) The image of v under T relative to B′ is

P⁻¹[T(v)]_B = ⎡1 1 0⎤⎡ 2⎤   ⎡ 1⎤
              ⎢1 0 1⎥⎢−1⎥ = ⎢ 0⎥
              ⎣0 1 1⎦⎣−2⎦   ⎣−3⎦

You can also find the image of v under T relative to B′ by

A′[v]_B′ = ⎡1 0 0⎤⎡ 1⎤   ⎡ 1⎤
           ⎢0 2 0⎥⎢ 0⎥ = ⎢ 0⎥
           ⎣0 0 3⎦⎣−1⎦   ⎣−3⎦

13. (a) The transition matrix P from B′ to B is found by row-reducing [B  B′] to [I  P].

[B  B′] = ⎡1 −1 −4 0⎤  ⇒  [I  P] = ⎡1 0 5 2⎤
          ⎣2 −1  1 2⎦             ⎣0 1 9 2⎦

So,

P = ⎡5 2⎤
    ⎣9 2⎦

(b) The coordinate matrix for v relative to B is

[v]_B = P[v]_B′ = ⎡5 2⎤⎡−1⎤ = ⎡ 3⎤
                  ⎣9 2⎦⎣ 4⎦   ⎣−1⎦

Furthermore, the image of v under T relative to B is

[T(v)]_B = A[v]_B = ⎡2  1⎤⎡ 3⎤ = ⎡5⎤
                    ⎣0 −1⎦⎣−1⎦   ⎣1⎦

(c) The matrix of T relative to B′ is

A′ = P⁻¹AP = ⎡−1/4  1/4⎤⎡2  1⎤⎡5 2⎤ = ⎡−7 −2⎤
             ⎣ 9/8 −5/8⎦⎣0 −1⎦⎣9 2⎦   ⎣27  8⎦

(d) The image of v under T relative to B′ is

P⁻¹[T(v)]_B = ⎡−1/4  1/4⎤⎡5⎤ = ⎡−1⎤
              ⎣ 9/8 −5/8⎦⎣1⎦   ⎣ 5⎦

You can also find the image of v under T relative to B′ by

A′[v]_B′ = ⎡−7 −2⎤⎡−1⎤ = ⎡−1⎤
           ⎣27  8⎦⎣ 4⎦   ⎣ 5⎦
15. If A and B are similar, then B = P⁻¹AP for some nonsingular matrix P. So,

|B| = |P⁻¹AP| = |P⁻¹||A||P| = (1/|P|)|A||P| = |A|.

No, the converse is not true. For example, the determinants of [1 1; 0 1] and [1 0; 0 1] are equal, but these matrices are not similar.

17. (a) B = P⁻¹AP ⇒ Bᵀ = (P⁻¹AP)ᵀ = PᵀAᵀ(P⁻¹)ᵀ = PᵀAᵀ(Pᵀ)⁻¹, which shows that Aᵀ and Bᵀ are similar.

(b) If A is nonsingular, then so is B = P⁻¹AP, and

B⁻¹ = (P⁻¹AP)⁻¹ = P⁻¹A⁻¹(P⁻¹)⁻¹ = P⁻¹A⁻¹P,

which shows that A⁻¹ and B⁻¹ are similar.

(c) B = P⁻¹AP ⇒ Bᵏ = (P⁻¹AP)ᵏ = (P⁻¹AP)(P⁻¹AP)⋯(P⁻¹AP)  (k times) = P⁻¹AᵏP.

19. Let A be an n × n matrix similar to I_n. Then there exists an invertible matrix P such that A = P⁻¹I_nP = P⁻¹P = I_n. So, I_n is similar only to itself.

21. Let A² = O and B = P⁻¹AP. Then B² = (P⁻¹AP)² = (P⁻¹AP)(P⁻¹AP) = P⁻¹A²P = P⁻¹OP = O.

23. If A is similar to B, then B = P⁻¹AP. If B is similar to C, then C = Q⁻¹BQ. So, C = Q⁻¹BQ = Q⁻¹(P⁻¹AP)Q = (PQ)⁻¹A(PQ), which shows that A is similar to C.

25. Because B = P⁻¹AP, B² = (P⁻¹AP)² = (P⁻¹AP)(P⁻¹AP) = P⁻¹A²P, which shows that A² is similar to B².

27. If A = CD and C is nonsingular, then C⁻¹A = D ⇒ C⁻¹AC = DC, which shows that DC is similar to A.

29. The matrix for I relative to B and B′ is the square matrix whose columns are the coordinates of v₁, …, vₙ relative to the standard basis. The matrix for I relative to B, or relative to B′, is the identity matrix.

31. (a) True. See the discussion on pages 399–400, and note that A′ = P⁻¹AP ⇒ PA′P⁻¹ = PP⁻¹APP⁻¹ = A.
(b) False, unless A′ is a diagonal matrix; see Example 5, pages 403–404.
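Exercises 15 and 17(c) both assert identities that hold for any invertible P; a randomized numpy spot-check (numpy and the fixed seed are assumptions for illustration) exercises them on a generic pair:

```python
import numpy as np

# Exercises 15 and 17(c): similar matrices share determinants,
# and B^k = P^{-1} A^k P.  P here is an arbitrary invertible matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
P = rng.standard_normal((3, 3)) + 3 * np.eye(3)   # comfortably invertible

B = np.linalg.inv(P) @ A @ P
assert np.isclose(np.linalg.det(B), np.linalg.det(A))

k = 4
Bk = np.linalg.matrix_power(B, k)
assert np.allclose(Bk, np.linalg.inv(P) @ np.linalg.matrix_power(A, k) @ P)
```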
Section 6.5   Applications of Linear Transformations

1. The standard matrix for T is

A = ⎡1  0⎤
    ⎣0 −1⎦

Multiplying A by the coordinate column of each vector gives the following.

(a) T(3, 5) = (3, −5)
(b) T(2, −1) = (2, 1)
(c) T(a, 0) = (a, 0)
(d) T(0, b) = (0, −b)
(e) T(−c, d) = (−c, −d)
(f) T(f, −g) = (f, g)

3. The standard matrix for T is

A = ⎡0 1⎤
    ⎣1 0⎦

Multiplying A by the coordinate column of each vector gives the following.

(a) T(0, 1) = (1, 0)
(b) T(−1, 3) = (3, −1)
(c) T(a, 0) = (0, a)
(d) T(0, b) = (b, 0)
(e) T(−c, d) = (d, −c)
(f) T(f, −g) = (−g, f)

5. (a) T(x, y) = xT(1, 0) + yT(0, 1) = x(0, 1) + y(1, 0) = (y, x)
(b) T is a reflection in the line y = x.

7. (a) Identify T as a vertical contraction from its standard matrix

A = ⎡1  0 ⎤
    ⎣0 1/2⎦

(b) The sketch shows (x, y) mapped to (x, y/2).

9. (a) Identify T as a horizontal expansion from its standard matrix

A = ⎡4 0⎤
    ⎣0 1⎦

(b) The sketch shows (x, y) mapped to (4x, y).
11. (a) Identify T as a horizontal shear from its matrix

A = ⎡1 3⎤
    ⎣0 1⎦

(b) The sketch shows (x, y) mapped to (x + 3y, y).

13. (a) Identify T as a vertical shear from its matrix

A = ⎡1 0⎤
    ⎣2 1⎦

(b) The sketch shows (x, y) mapped to (x, 2x + y).

15. The reflection in the y-axis is given by T(x, y) = (−x, y). If (x, y) is a fixed point, then T(x, y) = (x, y) = (−x, y), which implies that x = 0. So, the set of fixed points is {(0, t) : t ∈ R}.

17. The reflection in the line y = x is given by T(x, y) = (y, x). If (x, y) is a fixed point, then T(x, y) = (x, y) = (y, x), which implies that x = y. So, the set of fixed points is {(t, t) : t ∈ R}.

19. A vertical contraction has the standard matrix (k < 1)

⎡1 0⎤
⎣0 k⎦

A fixed point of T satisfies the equation

T(v) = ⎡1 0⎤⎡v₁⎤ = ⎡ v₁ ⎤ = ⎡v₁⎤ = v
       ⎣0 k⎦⎣v₂⎦   ⎣kv₂⎦   ⎣v₂⎦

So, the set of fixed points is {(t, 0) : t is a real number}.

21. A horizontal shear has the form T(x, y) = (x + ky, y). If (x, y) is a fixed point, then T(x, y) = (x, y) = (x + ky, y), which implies that y = 0. So, the set of fixed points is {(t, 0) : t ∈ R}.

23. Find the image of each vertex under T(x, y) = (x, −y).

T(0, 0) = (0, 0),  T(1, 0) = (1, 0),  T(1, 1) = (1, −1),  T(0, 1) = (0, −1)

The sketch shows the unit square reflected into the fourth quadrant.

25. Find the image of each vertex under T(x, y) = (x/2, y).

T(0, 0) = (0, 0),  T(1, 0) = (1/2, 0),  T(1, 1) = (1/2, 1),  T(0, 1) = (0, 1)

The sketch shows the unit square contracted horizontally.

27. Find the image of each vertex under T(x, y) = (x + 2y, y).

T(0, 0) = (0, 0),  T(1, 0) = (1, 0),  T(1, 1) = (3, 1),  T(0, 1) = (2, 1)

The sketch shows the unit square sheared into the parallelogram with vertices (0, 0), (1, 0), (3, 1), (2, 1).

29. Find the image of each vertex under T(x, y) = (−x, y).

T(0, 0) = (0, 0),  T(0, 2) = (0, 2),  T(1, 2) = (−1, 2),  T(1, 0) = (−1, 0)

The sketch shows the rectangle reflected into the second quadrant.
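Exercises 21 and 27 both concern the same horizontal shear. A numpy sketch (numpy assumed) applying it to the unit square and to a point on the x-axis:

```python
import numpy as np

# Exercise 27: the horizontal shear (x, y) -> (x + 2y, y) applied to
# the unit square's vertices (stored as columns).
shear = np.array([[1, 2], [0, 1]])
square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]]).T

image = shear @ square
assert np.array_equal(image.T, [[0, 0], [1, 0], [3, 1], [2, 1]])

# Points on the x-axis are exactly the fixed points (Exercise 21).
assert np.array_equal(shear @ [7, 0], [7, 0])
```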
31. Find the image of each vertex under T(x, y) = (x, y/2).

T(0, 0) = (0, 0),  T(0, 2) = (0, 1),  T(1, 2) = (1, 1),  T(1, 0) = (1, 0)

The sketch shows the rectangle contracted vertically.

33. Find the image of each vertex under T(x, y) = (x + y, y).

T(0, 0) = (0, 0),  T(0, 2) = (2, 2),  T(1, 2) = (3, 2),  T(1, 0) = (1, 0)

The sketch shows the rectangle sheared into a parallelogram.

35. Find the image of each vertex under T(x, y) = (x + y, y).

(a) T(0, 0) = (0, 0),  T(1, 2) = (3, 2),  T(3, 6) = (9, 6),  T(5, 2) = (7, 2),  T(6, 0) = (6, 0)
(b) T(0, 0) = (0, 0),  T(0, 6) = (6, 6),  T(6, 6) = (12, 6),  T(6, 0) = (6, 0)

37. Find the image of each vertex under T(x, y) = (2x, y/2).

(a) T(0, 0) = (0, 0),  T(1, 2) = (2, 1),  T(3, 6) = (6, 3),  T(5, 2) = (10, 1),  T(6, 0) = (12, 0)
(b) T(0, 0) = (0, 0),  T(0, 6) = (0, 3),  T(6, 6) = (12, 3),  T(6, 0) = (12, 0)

39. The images of the given vectors are as follows.

T(1, 0) = ⎡2 0⎤⎡1⎤ = ⎡2⎤ = (2, 0)
          ⎣0 3⎦⎣0⎦   ⎣0⎦

T(0, 1) = ⎡2 0⎤⎡0⎤ = ⎡0⎤ = (0, 3)
          ⎣0 3⎦⎣1⎦   ⎣3⎦

T(2, 2) = ⎡2 0⎤⎡2⎤ = ⎡4⎤ = (4, 6)
          ⎣0 3⎦⎣2⎦   ⎣6⎦

The sketch shows the original triangle and its image.

41. The linear transformation defined by A is a horizontal expansion.

43. The linear transformation defined by A is a reflection in the line y = x.

45. The linear transformation defined by A is a reflection in the x-axis followed by a vertical expansion.
47. Because

⎡1 0⎤
⎣2 1⎦

represents a vertical shear and

⎡2 0⎤
⎣0 1⎦

represents a horizontal expansion, A is a vertical shear followed by a horizontal expansion.

49. A rotation of 30° about the z-axis is given by the matrix

A = ⎡cos 30° −sin 30° 0⎤   ⎡√3/2 −1/2  0⎤
    ⎢sin 30°  cos 30° 0⎥ = ⎢1/2   √3/2 0⎥
    ⎣  0        0     1⎦   ⎣ 0     0   1⎦

51. A rotation of 60° about the y-axis is given by the matrix

A = ⎡ cos 60° 0 sin 60°⎤   ⎡ 1/2  0 √3/2⎤
    ⎢   0     1    0   ⎥ = ⎢  0   1   0 ⎥
    ⎣−sin 60° 0 cos 60°⎦   ⎣−√3/2 0  1/2⎦

53. Using the matrix obtained in Exercise 49,

T(1, 1, 1) = ⎡√3/2 −1/2  0⎤⎡1⎤   ⎡(√3 − 1)/2⎤
             ⎢1/2   √3/2 0⎥⎢1⎥ = ⎢(1 + √3)/2⎥
             ⎣ 0     0   1⎦⎣1⎦   ⎣    1     ⎦

55. Using the matrix obtained in Exercise 51,

T(1, 1, 1) = ⎡ 1/2  0 √3/2⎤⎡1⎤   ⎡(1 + √3)/2⎤
             ⎢  0   1   0 ⎥⎢1⎥ = ⎢    1     ⎥
             ⎣−√3/2 0  1/2⎦⎣1⎦   ⎣(1 − √3)/2⎦

57. The indicated tetrahedron is produced by a 90° rotation about the x-axis.

59. The indicated tetrahedron is produced by a 180° rotation about the y-axis.

61. The indicated tetrahedron is produced by a 90° rotation about the z-axis.

63. The matrix is

⎡ 0 0 1⎤⎡1 0  0⎤   ⎡ 0 1  0⎤
⎢ 0 1 0⎥⎢0 0 −1⎥ = ⎢ 0 0 −1⎥
⎣−1 0 0⎦⎣0 1  0⎦   ⎣−1 0  0⎦

T(0, 0, 0) = (0, 0, 0) and T(1, 1, 1) = (1, −1, −1), so the line segment image runs from (0, 0, 0) to (1, −1, −1).

65. The matrix is

⎡ cos 60° 0 sin 60°⎤⎡cos 30° −sin 30° 0⎤   ⎡ 1/2  0 √3/2⎤⎡√3/2 −1/2  0⎤   ⎡√3/4 −1/4 √3/2⎤
⎢   0     1    0   ⎥⎢sin 30°  cos 30° 0⎥ = ⎢  0   1   0 ⎥⎢1/2   √3/2 0⎥ = ⎢1/2   √3/2   0⎥
⎣−sin 60° 0 cos 60°⎦⎣  0        0     1⎦   ⎣−√3/2 0  1/2⎦⎣ 0     0   1⎦   ⎣−3/4  √3/4 1/2⎦

T(0, 0, 0) = (0, 0, 0) and T(1, 1, 1) = ((3√3 − 1)/4, (√3 + 1)/2, (√3 − 1)/4), so the line segment runs from (0, 0, 0) to ((3√3 − 1)/4, (√3 + 1)/2, (√3 − 1)/4).
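The composite rotation of Exercise 65 is easy to verify numerically. A numpy sketch (numpy assumed) building the two rotations from angles rather than from the hand-simplified entries:

```python
import numpy as np

# Exercise 65: rotate 30 deg about the z-axis, then 60 deg about the y-axis.
t30, t60 = np.radians(30), np.radians(60)
Rz = np.array([[np.cos(t30), -np.sin(t30), 0],
               [np.sin(t30),  np.cos(t30), 0],
               [0, 0, 1]])
Ry = np.array([[ np.cos(t60), 0, np.sin(t60)],
               [0, 1, 0],
               [-np.sin(t60), 0, np.cos(t60)]])

M = Ry @ Rz                             # apply Rz first, then Ry
image = M @ np.array([1.0, 1.0, 1.0])

s3 = np.sqrt(3)
assert np.allclose(image, [(3*s3 - 1)/4, (s3 + 1)/2, (s3 - 1)/4])
```

Note the order: the rotation applied first sits on the right of the product.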
Review Exercises for Chapter 6

1. (a) T(v) = T(2, −3) = (2, −4)
(b) The preimage of w is given by solving the equation T(v1, v2) = (v1, v1 + 2v2) = (4, 12). The resulting system of linear equations

v1 = 4
v1 + 2v2 = 12

has the solution v1 = v2 = 4. So, the preimage of w is (4, 4).

3. (a) T(v) = T(−3, 2, 5) = (0, −1, 7)
(b) The preimage of w is given by solving the equation T(v1, v2, v3) = (0, v1 + v2, v2 + v3) = (0, 2, 5). The resulting system of linear equations has solutions of the form v1 = t − 3, v2 = 5 − t, v3 = t, where t is any real number. So, the preimage of w is {(t − 3, 5 − t, t) : t ∈ R}.

5. T preserves addition.

T(x1, x2) + T(y1, y2) = (x1 + 2x2, −x1 − x2) + (y1 + 2y2, −y1 − y2)
                      = ((x1 + y1) + 2(x2 + y2), −(x1 + y1) − (x2 + y2))
                      = T(x1 + y1, x2 + y2)

T preserves scalar multiplication.

cT(x1, x2) = c(x1 + 2x2, −x1 − x2) = (cx1 + 2(cx2), −cx1 − cx2) = T(cx1, cx2)

So, T is a linear transformation with standard matrix

A = ⎡ 1  2⎤
    ⎣−1 −1⎦

7. T preserves addition.

T(x1, y1) + T(x2, y2) = (x1 − 2y1, 2y1 − x1) + (x2 − 2y2, 2y2 − x2)
                      = ((x1 + x2) − 2(y1 + y2), 2(y1 + y2) − (x1 + x2))
                      = T(x1 + x2, y1 + y2)

T preserves scalar multiplication.

cT(x, y) = c(x − 2y, 2y − x) = (cx − 2cy, 2cy − cx) = T(cx, cy)

So, T is a linear transformation with standard matrix

A = ⎡ 1 −2⎤
    ⎣−1  2⎦

9. T does not preserve addition or scalar multiplication, so T is not a linear transformation. A counterexample is

T(1, 0) + T(0, 1) = (1 + h, k) + (h, 1 + k) = (1 + 2h, 1 + 2k) ≠ (1 + h, 1 + k) = T(1, 1).

11. T preserves addition.

T(x1, x2, x3) + T(y1, y2, y3) = (x1 − x2, x2 − x3, x3 − x1) + (y1 − y2, y2 − y3, y3 − y1)
                              = ((x1 + y1) − (x2 + y2), (x2 + y2) − (x3 + y3), (x3 + y3) − (x1 + y1))
                              = T(x1 + y1, x2 + y2, x3 + y3)

T preserves scalar multiplication.

cT(x1, x2, x3) = c(x1 − x2, x2 − x3, x3 − x1) = (cx1 − cx2, cx2 − cx3, cx3 − cx1) = T(cx1, cx2, cx3)

So, T is a linear transformation with standard matrix

A = ⎡ 1 −1  0⎤
    ⎢ 0  1 −1⎥
    ⎣−1  0  1⎦
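For a map like the one in Review Exercise 5, the standard matrix is just [T(e1) T(e2)], and linearity can be spot-checked on random inputs. A numpy sketch (numpy and the fixed seed are assumptions):

```python
import numpy as np

# Review Exercise 5: T(x1, x2) = (x1 + 2*x2, -x1 - x2); its standard
# matrix has columns T(e1) and T(e2).
def T(x):
    x1, x2 = x
    return np.array([x1 + 2 * x2, -x1 - x2])

A = np.column_stack([T(np.array([1, 0])), T(np.array([0, 1]))])
assert np.array_equal(A, [[1, 2], [-1, -1]])

# Spot-check linearity on random vectors.
rng = np.random.default_rng(1)
u, v = rng.standard_normal(2), rng.standard_normal(2)
assert np.allclose(T(u + 3 * v), T(u) + 3 * T(v))
```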
13. Because (1, 1) = (1/2)(2, 0) + (1/3)(0, 3),

T(1, 1) = (1/2)T(2, 0) + (1/3)T(0, 3) = (1/2)(1, 1) + (1/3)(3, 3) = (3/2, 3/2).

Because (0, 1) = (1/3)(0, 3),

T(0, 1) = (1/3)T(0, 3) = (1/3)(3, 3) = (1, 1).

15. Because (0, −1) = −(2/3)(1, 1) + (1/3)(2, −1),

T(0, −1) = −(2/3)T(1, 1) + (1/3)T(2, −1) = −(2/3)(2, 3) + (1/3)(1, 0) = (−4/3 + 1/3, −2 + 0) = (−1, −2).

17. The standard matrix for T is

A = ⎡1 0  0⎤
    ⎢0 1  0⎥
    ⎣0 0 −1⎦

Therefore,

A² = ⎡1² 0  0    ⎤   ⎡1 0 0⎤
     ⎢0  1² 0    ⎥ = ⎢0 1 0⎥ = I₃
     ⎣0  0  (−1)²⎦   ⎣0 0 1⎦

19. The standard matrix for T is

A = ⎡cos θ −sin θ⎤
    ⎣sin θ  cos θ⎦

Therefore,

A³ = ⎡cos 3θ −sin 3θ⎤
     ⎣sin 3θ  cos 3θ⎦

21. (a) Because A is a 2 × 3 matrix, it maps R³ into R² (n = 3, m = 2).
(b) Because T(v) = Av and

Av = ⎡ 0 1 2⎤⎡6⎤   ⎡  3⎤
     ⎣−2 0 0⎦⎢1⎥ = ⎣−12⎦
              ⎣1⎦

it follows that T(6, 1, 1) = (3, −12).
(c) The preimage of w is given by the solution to the equation T(v1, v2, v3) = w = (3, 5). The equivalent system of linear equations

v2 + 2v3 = 3
−2v1 = 5

has the solution {(−5/2, 3 − 2t, t) : t is a real number}.

23. (a) Because A is a 1 × 2 matrix, it maps R² into R¹ (n = 2, m = 1).
(b) Because T(v) = Av and Av = [1 1][2; 3] = 5, it follows that T(2, 3) = 5.
(c) The preimage of w = (4) is given by the solution to the equation T(v1, v2) = w = (4). The equivalent system of linear equations is v1 + v2 = 4, which has the solution {(4 − t, t) : t ∈ R}.

25. (a) Because A is a 3 × 3 matrix, it maps R³ into R³ (n = 3, m = 3).
(b) Because T(v) = Av and

Av = ⎡1 1 1⎤⎡ 2⎤   ⎡−2⎤
     ⎢0 1 1⎥⎢ 1⎥ = ⎢−4⎥
     ⎣0 0 1⎦⎣−5⎦   ⎣−5⎦

it follows that T(2, 1, −5) = (−2, −4, −5).
(c) The preimage of w = (6, 4, 2) is given by the solution to the equation T(v1, v2, v3) = (6, 4, 2) = w. The equivalent system of linear equations has the solution v1 = v2 = v3 = 2. So, the preimage is (2, 2, 2).

27. (a) Because A is a 3 × 2 matrix, it maps R² into R³ (n = 2, m = 3).
(b) Because T(v) = Av and

Av = ⎡4 0⎤⎡2⎤   ⎡ 8⎤
     ⎢0 5⎥⎣2⎦ = ⎢10⎥
     ⎣1 1⎦      ⎣ 4⎦

it follows that T(2, 2) = (8, 10, 4).
(c) The preimage of w = (4, −5, 0) is given by the solution to the equation T(v1, v2) = (4, −5, 0) = w. The equivalent system of linear equations has the solution v1 = 1 and v2 = −1. So, the preimage is (1, −1).
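The one-parameter preimage in Review Exercise 21(c) can be probed numerically: any particular solution plus a kernel vector still maps to w. A numpy sketch (numpy assumed; least squares is used only to get one particular solution):

```python
import numpy as np

# Review Exercise 21(c): one particular preimage of w = (3, 5) under
# A = [0 1 2; -2 0 0]; the full preimage is that point plus multiples
# of the kernel direction (0, -2, 1).
A = np.array([[0, 1, 2], [-2, 0, 0]], dtype=float)
w = np.array([3, 5], dtype=float)

v0, *_ = np.linalg.lstsq(A, w, rcond=None)
assert np.allclose(A @ v0, w)          # v0 really maps to w
assert np.isclose(v0[0], -5/2)         # v1 is forced to -5/2

kernel_dir = np.array([0, -2, 1], dtype=float)
assert np.allclose(A @ (v0 + 7 * kernel_dir), w)
```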
29. (a) The standard matrix for T is

A = ⎡ 2  4 6 5⎤
    ⎢−1 −2 2 0⎥
    ⎣ 0  0 8 4⎦

Solving Av = 0 yields the solution {(−2s + 2t, s, t, −2t) : s and t are real numbers}. So, a basis for ker(T) is {(−2, 1, 0, 0), (2, 0, 1, −2)}.

(b) Use Gauss-Jordan elimination to reduce Aᵀ as follows.

Aᵀ = ⎡2 −1 0⎤     ⎡1 0 4/5⎤
     ⎢4 −2 0⎥  ⇒  ⎢0 1 8/5⎥
     ⎢6  2 8⎥     ⎢0 0  0 ⎥
     ⎣5  0 4⎦     ⎣0 0  0 ⎦

The nonzero row vectors form a basis for the range of T, {(1, 0, 4/5), (0, 1, 8/5)}.

31. (a) The standard matrix for T is

A = ⎡1 1  0⎤
    ⎢0 1  1⎥
    ⎣1 0 −1⎦

Solving Av = 0 yields the solution {(t, −t, t) : t ∈ R}. So, a basis for ker(T) is {(1, −1, 1)}.

(b) Use Gauss-Jordan elimination to reduce Aᵀ as follows.

Aᵀ = ⎡1 0  1⎤     ⎡1 0  1⎤
     ⎢1 1  0⎥  ⇒  ⎢0 1 −1⎥
     ⎣0 1 −1⎦     ⎣0 0  0⎦

The nonzero row vectors form a basis for the range of T, {(1, 0, 1), (0, 1, −1)}.
(c) dim(range(T)) = rank(T) = 2
(d) dim(ker(T)) = nullity(T) = 1

33. (a) To find the kernel of T, row-reduce A,

A = ⎡ 1 2⎤     ⎡1 0⎤
    ⎢−1 0⎥  ⇒  ⎢0 1⎥
    ⎣ 1 1⎦     ⎣0 0⎦

which shows that ker(T) = {(0, 0)}.
(b) The range of T can be found by row-reducing the transpose of A.

Aᵀ = ⎡1 −1 1⎤  ⇒  ⎡1 0  1/2⎤
     ⎣2  0 1⎦     ⎣0 1 −1/2⎦

So, a basis for range(T) is {(1, 0, 1/2), (0, 1, −1/2)}. Or, use the two columns of A, {(1, −1, 1), (2, 0, 1)}.
(c) dim(range(T)) = rank(T) = 2
(d) dim(ker(T)) = nullity(T) = 0

35. (a) To find the kernel of T, row-reduce A,

A = ⎡2 1  3⎤     ⎡1 0  3⎤
    ⎢1 1  0⎥  ⇒  ⎢0 1 −3⎥
    ⎣0 1 −3⎦     ⎣0 0  0⎦

which shows that ker(T) = {(−3t, 3t, t) : t ∈ R}. So, a basis for ker(T) is {(−3, 3, 1)}.
(b) The range of T can be found by row-reducing the transpose of A.

Aᵀ = ⎡2 1  0⎤     ⎡1 0 −1⎤
     ⎢1 1  1⎥  ⇒  ⎢0 1  2⎥
     ⎣3 0 −3⎦     ⎣0 0  0⎦

So, a basis for range(T) is {(1, 0, −1), (0, 1, 2)}.
(c) dim(range(T)) = rank(T) = 2
(d) dim(ker(T)) = nullity(T) = 1

37. rank(T) = dim R⁵ − nullity(T) = 5 − 2 = 3

39. nullity(T) = dim(P₄) − rank(T) = 5 − 3 = 2

41. The standard matrix for T is

A = ⎡2 0⎤
    ⎣0 1⎦

A is invertible and its inverse is given by

A⁻¹ = ⎡1/2 0⎤
      ⎣ 0  1⎦

43. The standard matrix for T is

A = ⎡cos θ −sin θ⎤
    ⎣sin θ  cos θ⎦

A is invertible and its inverse is given by

A⁻¹ = ⎡ cos θ sin θ⎤
      ⎣−sin θ cos θ⎦

45. The standard matrix of T is

A = ⎡1 0 0⎤
    ⎢0 1 0⎥
    ⎣0 0 0⎦

Because A is not invertible, T has no inverse.

47. The standard matrix for T is

A = ⎡1 1  0⎤
    ⎣0 1 −1⎦

Because A is not invertible, T has no inverse.
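The kernel and rank claims of Review Exercise 35 take two asserts to confirm. A numpy sketch (numpy assumed):

```python
import numpy as np

# Review Exercise 35: the kernel vector (-3, 3, 1) is annihilated by A,
# and rank(A) = 2 matches the two-vector basis found for the range.
A = np.array([[2, 1, 3], [1, 1, 0], [0, 1, -3]], dtype=float)

assert np.allclose(A @ np.array([-3, 3, 1]), 0)
assert np.linalg.matrix_rank(A) == 2
```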
49. The standard matrices for T1 and T2 are

A1 = ⎡1 0⎤          A2 = ⎡0 0 0⎤
     ⎢1 1⎥   and         ⎣0 1 0⎦
     ⎣0 1⎦

The standard matrix for T = T1 ∘ T2 is

A = A1A2 = ⎡1 0⎤⎡0 0 0⎤   ⎡0 0 0⎤
           ⎢1 1⎥⎣0 1 0⎦ = ⎢0 1 0⎥
           ⎣0 1⎦          ⎣0 1 0⎦

and the standard matrix for T′ = T2 ∘ T1 is

A′ = A2A1 = ⎡0 0 0⎤⎡1 0⎤ = ⎡0 0⎤
            ⎣0 1 0⎦⎢1 1⎥   ⎣1 1⎦
                   ⎣0 1⎦

51. The standard matrix for the 90° counterclockwise rotation is

A = ⎡cos 90° −sin 90°⎤ = ⎡0 −1⎤
    ⎣sin 90°  cos 90°⎦   ⎣1  0⎦

Calculating the image of the three vertices,

⎡0 −1⎤⎡3⎤ = ⎡−5⎤,   ⎡0 −1⎤⎡5⎤ = ⎡−3⎤,   ⎡0 −1⎤⎡3⎤ = ⎡0⎤
⎣1  0⎦⎣5⎦   ⎣ 3⎦    ⎣1  0⎦⎣3⎦   ⎣ 5⎦    ⎣1  0⎦⎣0⎦   ⎣3⎦

you have the graph of the rotated triangle with vertices (−5, 3), (−3, 5), and (0, 3).

53. (a) Because |A| = 6 ≠ 0, ker(T) = {(0, 0)} and the transformation is one-to-one.
(b) Because the nullity of T is 0, the rank of T equals the dimension of the domain and the transformation is onto.
(c) The transformation is one-to-one and onto (an isomorphism) and is, therefore, invertible.

55. (a) Because |A| = 1 ≠ 0, ker(T) = {(0, 0)} and T is one-to-one.
(b) Because rank(A) = 2, T is onto.
(c) The transformation is one-to-one and onto, and is, therefore, invertible.

57. (a) The standard matrix for T is

A = ⎡−1 0⎤
    ⎢ 0 1⎥
    ⎣ 1 1⎦

so it follows that

T(v) = Av = ⎡−1 0⎤⎡0⎤   ⎡0⎤
            ⎢ 0 1⎥⎣1⎦ = ⎢1⎥ = (0, 1, 1)
            ⎣ 1 1⎦      ⎣1⎦

(b) The image of each vector in B is as follows.

T(1, 1) = (−1, 1, 2) = (0, 1, 0) + 2(0, 0, 1) − (1, 0, 0)
T(1, −1) = (−1, −1, 0) = −(0, 1, 0) + 0(0, 0, 1) − (1, 0, 0)

Therefore, [T(1, 1)]_B′ = [1, 2, −1]ᵀ and [T(1, −1)]_B′ = [−1, 0, −1]ᵀ, and

A′ = ⎡ 1 −1⎤
     ⎢ 2  0⎥
     ⎣−1 −1⎦

Because

[v]_B = ⎡ 1/2⎤
        ⎣−1/2⎦

the image of v under T relative to B′ is

[T(v)]_B′ = A′[v]_B = ⎡ 1 −1⎤⎡ 1/2⎤   ⎡1⎤
                      ⎢ 2  0⎥⎣−1/2⎦ = ⎢1⎥
                      ⎣−1 −1⎦         ⎣0⎦

So, T(v) = (0, 1, 0) + (0, 0, 1) + 0(1, 0, 0) = (0, 1, 1).

59. The standard matrix for T is

A = ⎡ 1 −3⎤
    ⎣−1  1⎦

The transition matrix from B′ to the standard basis B = {(1, 0), (0, 1)} is

P = ⎡ 1 1⎤
    ⎣−1 1⎦

The matrix A′ for T relative to B′ is

A′ = P⁻¹AP = ⎡1/2 −1/2⎤⎡ 1 −3⎤⎡ 1 1⎤ = ⎡3 −1⎤
             ⎣1/2  1/2⎦⎣−1  1⎦⎣−1 1⎦   ⎣1 −1⎦

Because A′ = P⁻¹AP, it follows that A and A′ are similar.
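Review Exercise 49 illustrates that composition order matters: the two products have different sizes and different entries. A numpy sketch (numpy assumed):

```python
import numpy as np

# Review Exercise 49: composing T1 and T2 multiplies their standard
# matrices in the corresponding order.
A1 = np.array([[1, 0], [1, 1], [0, 1]])
A2 = np.array([[0, 0, 0], [0, 1, 0]])

assert np.array_equal(A1 @ A2, [[0, 0, 0], [0, 1, 0], [0, 1, 0]])  # T1 ∘ T2
assert np.array_equal(A2 @ A1, [[0, 0], [1, 1]])                   # T2 ∘ T1
```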
61. (a) Because T(v) = T(x, y, z) = proj_u v where u = (0, 1, 2),

T(v) = ((y + 2z)/5)(0, 1, 2).

So,

T(1, 0, 0) = (0, 0, 0),   T(0, 1, 0) = (0, 1/5, 2/5),   T(0, 0, 1) = (0, 2/5, 4/5)

and the standard matrix for T is

A = ⎡0  0   0 ⎤
    ⎢0 1/5 2/5⎥
    ⎣0 2/5 4/5⎦

(b) S = I − A satisfies S(u) = 0. Letting w₁ = (1, 0, 0) and w₂ = (0, 2, −1) be two vectors orthogonal to u,

proj_w₁ v = x(1, 0, 0)              ⇒  P₁ = ⎡1 0 0⎤
                                           ⎢0 0 0⎥
                                           ⎣0 0 0⎦

proj_w₂ v = ((2y − z)/5)(0, 2, −1)  ⇒  P₂ = ⎡0   0    0 ⎤
                                           ⎢0  4/5 −2/5⎥
                                           ⎣0 −2/5  1/5⎦

So,

S = I − A = ⎡1   0    0 ⎤
            ⎢0  4/5 −2/5⎥ = P₁ + P₂
            ⎣0 −2/5  1/5⎦

verifying that S(v) = proj_w₁ v + proj_w₂ v.
(c) The kernel of T has basis {(1, 0, 0), (0, 2, −1)}, which is precisely the column space of S.

63. S + T preserves addition.

(S + T)(v + w) = S(v + w) + T(v + w) = S(v) + S(w) + T(v) + T(w) = S(v) + T(v) + S(w) + T(w) = (S + T)(v) + (S + T)(w)

S + T preserves scalar multiplication.

(S + T)(cv) = S(cv) + T(cv) = cS(v) + cT(v) = c(S(v) + T(v)) = c(S + T)(v)

kT preserves addition.

(kT)(v + w) = kT(v + w) = k(T(v) + T(w)) = kT(v) + kT(w) = (kT)(v) + (kT)(w)

kT preserves scalar multiplication.

(kT)(cv) = kT(cv) = kcT(v) = ckT(v) = c(kT)(v).
65. If S, T, and S + T are written as matrices, the number of linearly independent columns in S + T cannot exceed the number of linearly independent columns in S plus the number of linearly independent columns in T, because the columns in S + T are created by summing columns in S and T.

67. (a) T preserves addition.

T[(a0 + a1x + a2x² + a3x³) + (b0 + b1x + b2x² + b3x³)] = T[(a0 + b0) + (a1 + b1)x + (a2 + b2)x² + (a3 + b3)x³]
  = (a0 + b0) + (a1 + b1) + (a2 + b2) + (a3 + b3)
  = (a0 + a1 + a2 + a3) + (b0 + b1 + b2 + b3)
  = T(a0 + a1x + a2x² + a3x³) + T(b0 + b1x + b2x² + b3x³)

T preserves scalar multiplication.

T(c(a0 + a1x + a2x² + a3x³)) = T(ca0 + ca1x + ca2x² + ca3x³) = ca0 + ca1 + ca2 + ca3 = c(a0 + a1 + a2 + a3) = cT(a0 + a1x + a2x² + a3x³)

(b) Because the range of T is R, rank(T) = 1. So, nullity(T) = 4 − 1 = 3.
(c) A basis for the kernel of T is obtained by solving T(a0 + a1x + a2x² + a3x³) = a0 + a1 + a2 + a3 = 0. Letting a3 = t, a2 = s, a1 = r be the free variables, a0 = −t − s − r and a basis is {−1 + x³, −1 + x², −1 + x}.

69. Let B be a basis for V and let [v₀]_B = [a1, a2, …, an]ᵀ, where at least one aᵢ ≠ 0 for i = 1, …, n. Then for [v]_B = [v1, v2, …, vn]ᵀ you have

[T(v)]_B = ⟨v, v₀⟩ = a1v1 + a2v2 + ⋯ + anvn.

The matrix for T relative to B is then A = [a1 a2 ⋯ an]. Because Aᵀ row-reduces to one nonzero row, the range of T is {t : t ∈ R} = R. So, the rank of T is 1 and nullity(T) = n − 1. Finally, ker(T) = {v : ⟨v, v₀⟩ = 0}.

71. M_{m,n} and M_{p,q} will be isomorphic if they are of the same dimension, that is, mn = pq. Any function taking the standard basis of M_{m,n} to the standard basis of M_{p,q} will be an isomorphism.

73. (a) T is a vertical expansion.
(b) The sketch shows (x, y) mapped to (x, 2y).

75. (a) T is a vertical shear.
(b) The sketch shows (x, y) mapped to (x, y + 3x).

77. (a) T is a horizontal shear.
(b) The sketch shows (x, y) and its sheared image T(x, y).

79. The image of each vertex is T(0, 0) = (0, 0), T(1, 0) = (1, 0), T(0, 1) = (0, −1). The sketch shows the triangle reflected in the x-axis.
Review Exercises for Chapter 6 81. The image of each vertex is T (0, 0) = (0, 0), T (1, 0) = (1, 0), and T (0, 1) = (3, 1). y 2
(3, 1) 1
(0, 0) 2
3
−1
83. The transformation is a reflection in the line y = x ⎡0 1⎤ ⎢ ⎥ ⎣ 1 0⎦
75. … followed by a horizontal expansion [2 0; 0 1].

85. A rotation of 45° about the z-axis is given by
A = [cos 45° −sin 45° 0; sin 45° cos 45° 0; 0 0 1] = [√2/2 −√2/2 0; √2/2 √2/2 0; 0 0 1].
Because
Av = [√2/2 −√2/2 0; √2/2 √2/2 0; 0 0 1][1; −1; 1] = [√2; 0; 1],
the image of (1, −1, 1) is (√2, 0, 1).

87. A rotation of 60° about the x-axis is given by
A = [1 0 0; 0 cos 60° −sin 60°; 0 sin 60° cos 60°] = [1 0 0; 0 1/2 −√3/2; 0 √3/2 1/2].
Because
Av = [1 0 0; 0 1/2 −√3/2; 0 √3/2 1/2][1; −1; 1] = [1; −1/2 − √3/2; 1/2 − √3/2],
the image of (1, −1, 1) is (1, −1/2 − √3/2, 1/2 − √3/2).

89. A rotation of 60° about the x-axis has the standard matrix
[1 0 0; 0 1/2 −√3/2; 0 √3/2 1/2],
while a rotation of 30° about the z-axis has the standard matrix
[cos 30° −sin 30° 0; sin 30° cos 30° 0; 0 0 1] = [√3/2 −1/2 0; 1/2 √3/2 0; 0 0 1].
So, the pair of rotations is given by
[√3/2 −1/2 0; 1/2 √3/2 0; 0 0 1][1 0 0; 0 1/2 −√3/2; 0 √3/2 1/2] = [√3/2 −1/4 √3/4; 1/2 √3/4 −3/4; 0 √3/2 1/2].

91. A rotation of 30° about the y-axis has the standard matrix
[cos 30° 0 sin 30°; 0 1 0; −sin 30° 0 cos 30°] = [√3/2 0 1/2; 0 1 0; −1/2 0 √3/2],
while a rotation of 45° about the z-axis has the standard matrix
[cos 45° −sin 45° 0; sin 45° cos 45° 0; 0 0 1] = [√2/2 −√2/2 0; √2/2 √2/2 0; 0 0 1].
So, the pair of rotations is given by
[√2/2 −√2/2 0; √2/2 √2/2 0; 0 0 1][√3/2 0 1/2; 0 1 0; −1/2 0 √3/2] = [√6/4 −√2/2 √2/4; √6/4 √2/2 √2/4; −1/2 0 √3/2].
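The rotation computations above can be spot-checked numerically. The sketch below is an illustration only (the helper names `rot_z` and `mat_vec` are mine, not the text's); it builds the standard matrix for a rotation about the z-axis and verifies that a 45° rotation sends (1, −1, 1) to (√2, 0, 1), as in Exercise 85.

```python
import math

def rot_z(theta):
    # Standard matrix for a counterclockwise rotation by theta about the z-axis
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def mat_vec(A, v):
    # Multiply a 3x3 matrix by a 3-vector
    return [sum(A[i][j] * v[j] for j in range(3)) for i in range(3)]

image = mat_vec(rot_z(math.radians(45)), [1.0, -1.0, 1.0])
# image is approximately (sqrt(2), 0, 1)
```

The same two helpers handle Exercises 87 to 91 once the matrix for a rotation about the x- or y-axis is written down.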
93. The standard matrix for T is
[cos 45° −sin 45° 0; sin 45° cos 45° 0; 0 0 1] = [√2/2 −√2/2 0; √2/2 √2/2 0; 0 0 1].
Therefore, T is given by
T(x, y, z) = (√2/2 x − √2/2 y, √2/2 x + √2/2 y, z).
The image of each vertex is as follows.
T(0, 0, 0) = (0, 0, 0)
T(1, 0, 0) = (√2/2, √2/2, 0)
T(0, 1, 0) = (−√2/2, √2/2, 0)
T(0, 0, 1) = (0, 0, 1)
T(1, 1, 0) = (0, √2, 0)
T(1, 0, 1) = (√2/2, √2/2, 1)
T(1, 1, 1) = (0, √2, 1)
T(0, 1, 1) = (−√2/2, √2/2, 1)

95. The standard matrix for T is
[1 0 0; 0 cos 30° −sin 30°; 0 sin 30° cos 30°] = [1 0 0; 0 √3/2 −1/2; 0 1/2 √3/2].
Therefore, T is given by
T(x, y, z) = (x, √3/2 y − 1/2 z, 1/2 y + √3/2 z).
The image of each vertex is as follows.
T(0, 0, 0) = (0, 0, 0)
T(1, 0, 0) = (1, 0, 0)
T(0, 1, 0) = (0, √3/2, 1/2)
T(0, 0, 1) = (0, −1/2, √3/2)
T(1, 1, 0) = (1, √3/2, 1/2)
T(1, 0, 1) = (1, −1/2, √3/2)
T(1, 1, 1) = (1, √3/2 − 1/2, 1/2 + √3/2)
T(0, 1, 1) = (0, √3/2 − 1/2, 1/2 + √3/2)
97. (a) False. See "Elementary Matrices for Linear Transformations in the Plane," page 409.
(b) True. See "Elementary Matrices for Linear Transformations in the Plane," page 407.
(c) True. See discussion following Example 4, page 411.

99. (a) False. See "Remark," page 364.
(b) False. See Theorem 6.7, page 383.
(c) True. See discussion following Example 5, page 404.
C H A P T E R 6
Linear Transformations
Section 6.1 Introduction to Linear Transformations ............................................ 188
Section 6.2 The Kernel and Range of a Linear Transformation .......................... 192
Section 6.3 Matrices for Linear Transformations ................................................. 195
Section 6.4 Transition Matrices and Similarity .................................................... 205
Section 6.5 Applications of Linear Transformations ........................................... 209
Review Exercises ........................................................................................................ 213
Project Solutions ........................................................................................................ 220
C H A P T E R 6
Linear Transformations

Section 6.1 Introduction to Linear Transformations

2. (a) The image of v is T(0, 6) = (2(6) − 0, 0, 6) = (12, 0, 6).
(b) If T(v₁, v₂) = (2v₂ − v₁, v₁, v₂) = (3, 1, 2), then
2v₂ − v₁ = 3
v₁ = 1
v₂ = 2,
which implies that the preimage of w is (v₁, v₂) = (1, 2).

4. (a) The image of v is T(−4, 5, 1) = (2(−4) + 5, 2(5) − 3(−4), −4 − 1) = (−3, 22, −5).
(b) If T(v₁, v₂, v₃) = (2v₁ + v₂, 2v₂ − 3v₁, v₁ − v₃) = (4, 1, −1), then
2v₁ + v₂ = 4
−3v₁ + 2v₂ = 1
v₁ − v₃ = −1,
which implies that v₁ = 1, v₂ = 2, and v₃ = 2. So, the preimage of w is (1, 2, 2).

6. (a) The image of v is T(2, 1, 4) = (2(2) + 1, 2 − 1) = (5, 1).
(b) If T(v₁, v₂, v₃) = (2v₁ + v₂, v₁ − v₂) = (−1, 2), then
2v₁ + v₂ = −1
v₁ − v₂ = 2,
which implies that v₁ = 1/3, v₂ = −5/3, and v₃ = t, where t is any real number. So, the preimage of w is {(1/3, −5/3, t) : t is any real number}.

8. (a) The image of v is
T(2, 4) = ((√3/2)(2) − (1/2)(4), 2 − 4, 4) = (√3 − 2, −2, 4).
(b) If T(v₁, v₂) = ((√3/2)v₁ − (1/2)v₂, v₁ − v₂, v₂) = (√3, 2, 0), then
(√3/2)v₁ − (1/2)v₂ = √3
v₁ − v₂ = 2
v₂ = 0,
which implies that v₁ = 2 and v₂ = 0. So, the preimage of w is (2, 0).

10. T is not a linear transformation because it preserves neither addition nor scalar multiplication. For example,
T(1, 1) + T(1, 1) = (1, 1) + (1, 1) = (2, 2) ≠ (4, 2) = T(2, 2).

12. T is not a linear transformation because it does not preserve addition. For example,
T(1, 1, 1) + T(1, 1, 1) = (2, 2, 2) + (2, 2, 2) = (4, 4, 4) ≠ (3, 3, 3) = T(2, 2, 2).

14. T is not a linear transformation because it preserves neither addition nor scalar multiplication. For example,
T(1, 1) + T(1, 1) = (1, 1, 1) + (1, 1, 1) = (2, 2, 2) ≠ (4, 4, 4) = T(2, 2).

16. T preserves addition:
T(A₁) + T(A₂) = T([a₁ b₁; c₁ d₁]) + T([a₂ b₂; c₂ d₂])
= a₁ + b₁ + c₁ + d₁ + a₂ + b₂ + c₂ + d₂
= (a₁ + a₂) + (b₁ + b₂) + (c₁ + c₂) + (d₁ + d₂)
= T(A₁ + A₂).
T preserves scalar multiplication:
T(kA) = ka + kb + kc + kd = k(a + b + c + d) = kT(A).
Therefore, T is a linear transformation.
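Linearity checks like the one in Exercise 16 can be spot-checked on sample inputs. The sketch below is my own illustration (the helpers `add` and `scale` are not part of the text); T is hard-coded as the entry sum of a 2 × 2 matrix, and both defining properties are tested.

```python
def T(A):
    # T sends a 2x2 matrix [[a, b], [c, d]] to a + b + c + d
    return sum(sum(row) for row in A)

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def scale(k, A):
    return [[k * A[i][j] for j in range(2)] for i in range(2)]

A1 = [[1, 3], [-1, 4]]
A2 = [[2, 0], [5, -2]]
additive = T(add(A1, A2)) == T(A1) + T(A2)   # True: addition is preserved
homogeneous = T(scale(7, A1)) == 7 * T(A1)   # True: scalar multiples are preserved
```

A numerical check on samples does not prove linearity, but a single failing sample disproves it, which is exactly the style of counterexample used in Exercises 10 to 14.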
18. Let A and B be two elements of M₃,₃ (two 3 × 3 matrices) and let c be a scalar. First,
T(A + B) = [1 0 0; 0 1 0; 0 0 −1](A + B) = [1 0 0; 0 1 0; 0 0 −1]A + [1 0 0; 0 1 0; 0 0 −1]B = T(A) + T(B)
by Theorem 2.3, part 2. And
T(cA) = [1 0 0; 0 1 0; 0 0 −1](cA) = c[1 0 0; 0 1 0; 0 0 −1]A = cT(A)
by Theorem 2.3, part 4. So, T is a linear transformation.

20. T is not a linear transformation because it preserves neither addition nor scalar multiplication. For example,
T(I₂ + I₂) = (I₂ + I₂)⁻¹ = [1/2 0; 0 1/2] ≠ [2 0; 0 2] = T(I₂) + T(I₂).

22. T preserves addition:
T(a₀ + a₁x + a₂x²) + T(b₀ + b₁x + b₂x²) = (a₁ + 2a₂x) + (b₁ + 2b₂x) = (a₁ + b₁) + 2(a₂ + b₂)x = T((a₀ + b₀) + (a₁ + b₁)x + (a₂ + b₂)x²).
T preserves scalar multiplication:
T(c(a₀ + a₁x + a₂x²)) = T(ca₀ + ca₁x + ca₂x²) = ca₁ + 2ca₂x = c(a₁ + 2a₂x) = cT(a₀ + a₁x + a₂x²).
Therefore, T is a linear transformation.

24. Because (2, −1, 0) can be written as
(2, −1, 0) = 2(1, 0, 0) − 1(0, 1, 0) + 0(0, 0, 1),
you can use Property 4 of Theorem 6.1 to write
T(2, −1, 0) = 2T(1, 0, 0) − T(0, 1, 0) + 0T(0, 0, 1) = 2(2, 4, −1) − (1, 3, −2) + (0, 0, 0) = (3, 5, 0).

26. Because (−2, 4, −1) can be written as
(−2, 4, −1) = −2(1, 0, 0) + 4(0, 1, 0) − 1(0, 0, 1),
you can use Property 4 of Theorem 6.1 to write
T(−2, 4, −1) = −2T(1, 0, 0) + 4T(0, 1, 0) − T(0, 0, 1) = −2(2, 4, −1) + 4(1, 3, −2) − (0, −2, 2) = (0, 6, −8).

28. Because (0, 2, −1) can be written as
(0, 2, −1) = (3/2)(1, 1, 1) − (1/2)(0, −1, 2) − (3/2)(1, 0, 1),
you can use Property 4 of Theorem 6.1 to write
T(0, 2, −1) = (3/2)T(1, 1, 1) − (1/2)T(0, −1, 2) − (3/2)T(1, 0, 1) = (3/2)(2, 0, −1) − (1/2)(−3, 2, −1) − (3/2)(1, 1, 0) = (3, −5/2, −1).

30. Because (−2, 1, 0) can be written as
(−2, 1, 0) = 2(1, 1, 1) + (0, −1, 2) − 4(1, 0, 1),
you can use Property 4 of Theorem 6.1 to write
T(−2, 1, 0) = 2T(1, 1, 1) + T(0, −1, 2) − 4T(1, 0, 1) = 2(2, 0, −1) + (−3, 2, −1) − 4(1, 1, 0) = (−3, −2, −3).

32. Because the matrix has 2 columns, the dimension of Rⁿ is 2. Because the matrix has 3 rows, the dimension of Rᵐ is 3. So, T : R² → R³.

34. Because the matrix has 4 columns and 4 rows, it defines a linear transformation from R⁴ to R⁴.
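Once a vector is expressed in a basis with known images, computing T(v) by Property 4 is just a linear combination of tuples. A minimal sketch (illustration only; the helper `comb` is mine) using the data of Exercise 28:

```python
def comb(coeffs, vecs):
    # Linear combination of 3-vectors: sum of c * v over the pairs (c, v)
    return tuple(sum(c * v[i] for c, v in zip(coeffs, vecs)) for i in range(3))

# Images of the basis vectors (1,1,1), (0,-1,2), (1,0,1) given in Exercise 28
images = [(2, 0, -1), (-3, 2, -1), (1, 1, 0)]
coeffs = [1.5, -0.5, -1.5]        # coordinates of (0, 2, -1) in that basis
result = comb(coeffs, images)     # (3.0, -2.5, -1.0), i.e. (3, -5/2, -1)
```

The same helper reproduces Exercises 24, 26, and 30 by swapping in the corresponding coordinates and images.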
36. (a) T(1, 0, 2, 3) = [0 1 −2 1; −1 4 5 0; 0 1 3 1][1; 0; 2; 3] = [−1; 9; 9].
(b) The preimage of (0, 0, 0) is determined by solving the equation
T(w, x, y, z) = [0 1 −2 1; −1 4 5 0; 0 1 3 1][w; x; y; z] = [0; 0; 0].
The equivalent system of linear equations has the solution w = −4t, x = −t, y = 0, z = t, where t is any real number. So, the preimage is given by the set of vectors
{(−4t, −t, 0, t) : t is any real number}.

38. (a) T(1, 0, −1, 3, 0) = [−1 2 1 3 4; 0 0 2 −1 0][1; 0; −1; 3; 0] = [7; −5] = (7, −5).
(b) The preimage of (−1, 8) is determined by solving the equation
T(v₁, v₂, v₃, v₄, v₅) = [−1 2 1 3 4; 0 0 2 −1 0][v₁; v₂; v₃; v₄; v₅] = [−1; 8].
The equivalent system of linear equations has the solution v₁ = 5 + 2r + (7/2)s + 4t, v₂ = r, v₃ = 4 + (1/2)s, v₄ = s, and v₅ = t, where r, s, and t are any real numbers. So, the preimage is given by the set of vectors
{(5 + 2r + (7/2)s + 4t, r, 4 + (1/2)s, s, t) : r, s, t are real numbers}.

40. (a) T(1, 1) = [0 −1; −1 0][1; 1] = [−1; −1] = (−1, −1).
(b) The preimage of (1, 1) is determined by solving the equation
T(v₁, v₂) = [0 −1; −1 0][v₁; v₂] = [1; 1].
The equivalent system of linear equations has the solution v₁ = −1 and v₂ = −1. So, the preimage is (−1, −1).
(c) The preimage of (0, 0) is determined by solving the equation
T(v₁, v₂) = [0 −1; −1 0][v₁; v₂] = [0; 0].
The equivalent system of linear equations has the solution v₁ = 0 and v₂ = 0. So, the preimage is (0, 0).

42. If θ = 45°, then T is given by
T(x, y) = (x cos θ − y sin θ, x sin θ + y cos θ) = (√2/2 x − √2/2 y, √2/2 x + √2/2 y).
Solving T(x, y) = v = (1, 1), you have
√2/2 x − √2/2 y = 1 and √2/2 x + √2/2 y = 1.
So, x = √2 and y = 0, and the preimage of v is (√2, 0).

44. This statement is true because Dₓ is a linear transformation and therefore preserves addition and scalar multiplication.

46. This statement is false because cos(x/2) ≠ (1/2)cos x for all x.

48. If Dₓ(g(x)) = eˣ, then g(x) = eˣ + C.

50. If Dₓ(g(x)) = 1/x, then g(x) = ln x + C.
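A parametrized preimage like the one in Exercise 36(b) can be verified by substituting the family back into the matrix equation. This sketch (illustration only; the helper `apply` is mine) checks that (−4t, −t, 0, t) maps to 0 for several values of t.

```python
A = [[0, 1, -2, 1],
     [-1, 4, 5, 0],
     [0, 1, 3, 1]]

def apply(A, v):
    # Matrix-vector product, row by row
    return [sum(a * x for a, x in zip(row, v)) for row in A]

# Every member of the one-parameter family should land on the zero vector
ok = all(apply(A, [-4 * t, -t, 0, t]) == [0, 0, 0] for t in (-2, 0, 1, 3))
```

Checking a handful of parameter values confirms the family lies in the preimage; that the family is the *entire* preimage still comes from the row reduction in the solution itself.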
52. Solve the equation ∫₀¹ p(x) dx = 1 for p(x) in P₂:
∫₀¹ (a₀ + a₁x + a₂x²) dx = [a₀x + a₁x²/2 + a₂x³/3]₀¹ = a₀ + (1/2)a₁ + (1/3)a₂ = 1.
Letting a₂ = −3b and a₁ = −2a be free variables, a₀ = 1 + a + b, and p(x) = (1 + a + b) − 2ax − 3bx².
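The two-parameter family found in Exercise 52 can be checked against the integral condition, since ∫₀¹ (a₀ + a₁x + a₂x²) dx = a₀ + a₁/2 + a₂/3. A quick sketch (my own helpers, used only to illustrate the check):

```python
def integral_01(a0, a1, a2):
    # Definite integral of a0 + a1*x + a2*x^2 over [0, 1]
    return a0 + a1 / 2 + a2 / 3

def p_coeffs(a, b):
    # Coefficients of p(x) = (1 + a + b) - 2a*x - 3b*x^2
    return (1 + a + b, -2 * a, -3 * b)

# Every member of the family integrates to 1 over [0, 1]
ok = all(integral_01(*p_coeffs(a, b)) == 1 for a in (-1, 0, 2) for b in (-3, 0, 5))
```

The cancellation is exact: a₀ + a₁/2 + a₂/3 = (1 + a + b) − a − b = 1 for every choice of a and b.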
54. Because (1, 4) = 1(1, 0) + 4(0, 1), you have
T(1, 4) = T[(1, 0) + 4(0, 1)] = T(1, 0) + 4T(0, 1) = (1, 1) + 4(−1, 1) = (−3, 5).
Similarly, (−2, 1) = −2(1, 0) + 1(0, 1), which gives
T(−2, 1) = T[−2(1, 0) + (0, 1)] = −2T(1, 0) + T(0, 1) = −2(1, 1) + (−1, 1) = (−3, −1).

56. T([1 3; −1 4]) = T([1 0; 0 0]) + 3T([0 1; 0 0]) − T([0 0; 1 0]) + 4T([0 0; 0 1]) = [1 −1; 0 2] + 3[0 2; 1 1] − [1 2; 0 1] + 4[3 −1; 1 0] = [12 −1; 7 4]

58. (a) False. A linear transformation is always operation preserving.
(b) False. This function preserves neither addition nor scalar multiplication. For example, f(3x) = 27x³ ≠ 3f(x).
(c) False. If f : R → R is given by f(x) = ax + b for some a, b ∈ R, then it preserves addition and scalar multiplication if and only if b = 0.

60. (a) T(x, y) = T[x(1, 0) + y(0, 1)] = xT(1, 0) + yT(0, 1) = x(0, 1) + y(1, 0) = (y, x)
(b) T is a reflection about the line y = x.

62. Use the result of Exercise 61(a) as follows.
T(3, 4) = ((3 + 4)/2, (3 + 4)/2) = (7/2, 7/2)
T(T(3, 4)) = T(7/2, 7/2) = ((1/2)(7/2 + 7/2), (1/2)(7/2 + 7/2)) = (7/2, 7/2)
T is projection onto the line y = x.
64. (a) Because T(0) = 0 for any linear transformation T, 0 is a fixed point of T.
(b) Let F be the set of fixed points of T : V → V. F is nonempty because 0 ∈ F. Furthermore, if u, v ∈ F, then T(u + v) = T(u) + T(v) = u + v and T(cu) = cT(u) = cu, which shows that F is closed under addition and scalar multiplication.
(c) A vector u is a fixed point if T(u) = u. Because T(x, y) = (x, 2y) = (x, y) has solutions x = t and y = 0, the set of all fixed points of T is {(t, 0) : t is any real number}.
(d) A vector u is a fixed point if T(u) = u. Because T(x, y) = (y, x) = (x, y) has solutions x = y, the set of fixed points of T is {(x, x) : x is any real number}.

66. Because {v₁, …, vₙ} is linearly dependent, there exist constants c₁, …, cₙ, not all zero, such that c₁v₁ + ⋯ + cₙvₙ = 0. So,
T(c₁v₁ + ⋯ + cₙvₙ) = c₁T(v₁) + ⋯ + cₙT(vₙ) = 0,
which shows that the set {T(v₁), …, T(vₙ)} is linearly dependent.

68. Let T(v) = 0 be the zero transformation. Because T(u + v) = 0 = T(u) + T(v) and T(cu) = 0 = cT(u), T is a linear transformation.

70. Let T be defined by T(v) = ⟨v, v₀⟩. Then because
T(v + w) = ⟨v + w, v₀⟩ = ⟨v, v₀⟩ + ⟨w, v₀⟩ = T(v) + T(w)
and T(cv) = ⟨cv, v₀⟩ = c⟨v, v₀⟩ = cT(v), T is a linear transformation.
72. Because
T(u + v) = ⟨u + v, w₁⟩w₁ + ⋯ + ⟨u + v, wₙ⟩wₙ
= (⟨u, w₁⟩w₁ + ⟨v, w₁⟩w₁) + ⋯ + (⟨u, wₙ⟩wₙ + ⟨v, wₙ⟩wₙ)
= (⟨u, w₁⟩w₁ + ⋯ + ⟨u, wₙ⟩wₙ) + (⟨v, w₁⟩w₁ + ⋯ + ⟨v, wₙ⟩wₙ) = T(u) + T(v)
and
T(cu) = ⟨cu, w₁⟩w₁ + ⋯ + ⟨cu, wₙ⟩wₙ = c⟨u, w₁⟩w₁ + ⋯ + c⟨u, wₙ⟩wₙ = c[⟨u, w₁⟩w₁ + ⋯ + ⟨u, wₙ⟩wₙ] = cT(u),
T is a linear transformation.

74. Suppose first that T is a linear transformation. Then
T(au + bv) = T(au) + T(bv) = aT(u) + bT(v).
Second, suppose T(au + bv) = aT(u) + bT(v). Then
T(u + v) = T(1u + 1v) = T(u) + T(v) and T(cu) = T(cu + 0) = cT(u) + T(0) = cT(u).
Section 6.2 The Kernel and Range of a Linear Transformation

2. T : R³ → R³, T(x, y, z) = (x, 0, z)
The kernel consists of all vectors lying on the y-axis. That is, ker(T) = {(0, y, 0) : y is a real number}.

4. T : R³ → R³, T(x, y, z) = (z, y, x)
Solving the equation T(x, y, z) = (z, y, x) = (0, 0, 0) yields the trivial solution x = y = z = 0. So, ker(T) = {(0, 0, 0)}.

6. T : P₂ → R, T(a₀ + a₁x + a₂x²) = a₀
Solving the equation T(a₀ + a₁x + a₂x²) = a₀ = 0 yields solutions of the form a₀ = 0, with a₁ and a₂ any real numbers. So, ker(T) = {a₁x + a₂x² : a₁, a₂ ∈ R}.

8. T : P₃ → P₂, T(a₀ + a₁x + a₂x² + a₃x³) = a₁ + 2a₂x + 3a₃x²
Solving the equation T(a₀ + a₁x + a₂x² + a₃x³) = a₁ + 2a₂x + 3a₃x² = 0 yields solutions of the form a₁ = a₂ = a₃ = 0, with a₀ any real number. So, ker(T) = {a₀ : a₀ ∈ R}.

10. T : R² → R², T(x, y) = (x − y, y − x)
Solving the equation T(x, y) = (x − y, y − x) = (0, 0) yields solutions of the form x = y. So, ker(T) = {(x, x) : x ∈ R}.

12. (a) Because
T(v) = [1 2; −2 −4][v₁; v₂] = [0; 0]
has solutions of the form (−2t, t) where t is any real number, a basis for ker(T) is {(−2, 1)}.
(b) Transpose A and find the equivalent reduced row-echelon form:
Aᵀ = [1 −2; 2 −4] ⇒ [1 −2; 0 0].
So, a basis for range(T) is {(1, −2)}.

14. (a) Because
T(v) = [1 −2 1; 0 2 1][v₁; v₂; v₃] = [0; 0]
has solutions of the form (−2t, −(1/2)t, t) where t is any real number, a basis for ker(T) is {(−2, −1/2, 1)}.
(b) Transpose A and find the equivalent reduced row-echelon form:
Aᵀ = [1 0; −2 2; 1 1] ⇒ [1 0; 0 1; 0 0].
So, a basis for range(T) is {(1, 0), (0, 1)}.
16. (a) Because
T(v) = [1 1; −1 2; 0 1][v₁; v₂] = [0; 0; 0]
has only the trivial solution v₁ = v₂ = 0, the kernel is {(0, 0)}.
(b) Transpose A and find the equivalent reduced row-echelon form:
Aᵀ = [1 −1 0; 1 2 1] ⇒ [1 0 1/3; 0 1 1/3].
So, a basis for range(T) is {(1, 0, 1/3), (0, 1, 1/3)}.

18. (a) Because
T(v) = [−1 3 2 1 4; 2 3 5 0 0; 2 1 2 1 0][v₁; v₂; v₃; v₄; v₅] = [0; 0; 0]
has solutions of the form (−10s − 4t, −15s − 24t, 13s + 16t, 9s, 9t), a basis for ker(T) is {(−10, −15, 13, 9, 0), (−4, −24, 16, 0, 9)}.
(b) Transpose A and find the equivalent reduced row-echelon form:
Aᵀ = [−1 2 2; 3 3 1; 2 5 2; 1 0 1; 4 0 0] ⇒ [1 0 0; 0 1 0; 0 0 1; 0 0 0; 0 0 0].
So, a basis for range(T) is {(1, 0, 0), (0, 1, 0), (0, 0, 1)}, and the range of T is all of R³.

20. (a) The kernel of T is given by the solution to the equation T(x) = 0. So, ker(T) = {(2t, −3t) : t is any real number}.
(b) nullity(T) = dim(ker(T)) = 1
(c) Transpose A and find the equivalent reduced row-echelon form:
Aᵀ = [3 −9; 2 −6] ⇒ [1 −3; 0 0].
So, range(T) = {(t, −3t) : t is any real number}.
(d) rank(T) = dim(range(T)) = 1

22. (a) Because T(x) = 0 has only the trivial solution x = (0, 0), the kernel of T is {(0, 0)}.
(b) nullity(T) = dim(ker(T)) = 0
(c) Transpose A and find the equivalent row-echelon form:
Aᵀ = [4 0 2; 1 0 −3] ⇒ [1 0 0; 0 0 1].
So, range(T) = {(t, 0, s) : s, t ∈ R}.
(d) rank(T) = dim(range(T)) = 2

24. (a) The kernel of T is given by the solution to the equation T(x) = 0. So, ker(T) = {(t, −t, s, −s) : s, t ∈ R}.
(b) nullity(T) = dim(ker(T)) = 2
(c) Transpose A and find its equivalent row-echelon form:
Aᵀ = [1 0; 1 0; 0 1; 0 1] ⇒ [1 0; 0 1; 0 0; 0 0].
So, range(T) = R².
(d) rank(T) = dim(range(T)) = 2

26. (a) The kernel of T is given by the solution to the equation T(x) = 0. So, ker(T) = {(5t, t) : t ∈ R}.
(b) nullity(T) = dim(ker(T)) = 1
(c) Transpose A and find its equivalent row-echelon form:
Aᵀ = [1/26 −5/26; −5/26 25/26] ⇒ [1 −5; 0 0].
So, range(T) = {(t, −5t) : t ∈ R}.
(d) rank(T) = dim(range(T)) = 1

28. (a) The kernel of T is given by the solution to the equation T(x) = 0. So, ker(T) = {(0, t, 0) : t ∈ R}.
(b) nullity(T) = dim(ker(T)) = 1
(c) Transpose A and find its equivalent row-echelon form:
Aᵀ = [1 0 0; 0 0 0; 0 0 1] ⇒ [1 0 0; 0 0 1; 0 0 0].
So, range(T) = {(t, 0, s) : s, t ∈ R}.
(d) rank(T) = dim(range(T)) = 2
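The rank computations in these exercises can be spot-checked with a tiny row-reduction routine. The sketch below is my own helper, not part of the text; it computes the rank of the matrix from Exercise 12 and confirms rank + nullity equals the number of columns.

```python
def rank(M):
    # Gauss-Jordan elimination with floats; fine for small integer matrices
    M = [list(map(float, row)) for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if abs(M[i][c]) > 1e-12), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(rows):
            if i != r:
                M[i] = [x - M[i][c] * y for x, y in zip(M[i], M[r])]
        r += 1
    return r

A = [[1, 2], [-2, -4]]        # the matrix of Exercise 12
nullity = 2 - rank(A)          # rank 1, so nullity 1, matching the basis {(-2, 1)}
```

Running the same routine on Aᵀ gives the rank of the range basis, which equals rank(A): the row-reduction of the transpose used throughout this section is just another way of extracting a basis for the column space.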
30. (a) The kernel of T is given by the solution to the equation T(x) = 0. So, ker(T) = {(−t − s − 2r, 6t − 2s, r, s, t) : r, s, t ∈ R}.
(b) nullity(T) = dim(ker(T)) = 3
(c) Transpose A and find its equivalent row-echelon form:
Aᵀ = [3 4 2; −2 3 −3; 6 8 4; −1 10 −4; 15 −14 20] ⇒ [17 0 18; 0 17 −5; 0 0 0; 0 0 0; 0 0 0].
So, range(T) = {(17s, 17t, 18s − 5t) : s, t ∈ R}.
(d) rank(T) = dim(range(T)) = 2

32. Because rank(T) + nullity(T) = 3, and you are given rank(T) = 1, nullity(T) = 2. So, the kernel of T is a plane, and the range is a line.

34. Because rank(T) + nullity(T) = 3, and you are given rank(T) = 3, nullity(T) = 0. So, the kernel of T is the single point {(0, 0, 0)}, and the range is all of R³.

36. The kernel of T is determined by solving T(x, y, z) = (−x, y, z) = (0, 0, 0), which implies that x = y = z = 0. So, the kernel is the single point {(0, 0, 0)}. From the equation rank(T) + nullity(T) = 3, you see that the rank of T is 3. So, the range of T is all of R³.

38. The kernel of T is determined by solving T(x, y, z) = (x, y, 0) = (0, 0, 0), which implies that x = y = 0. So, the nullity of T is 1, and the kernel is a line (the z-axis). The range of T is found by observing that rank(T) + nullity(T) = 3. That is, the range of T is 2-dimensional, the xy-plane in R³.

40. rank(T) + nullity(T) = dim R⁵ ⇒ nullity(T) = 5 − 2 = 3

42. rank(T) + nullity(T) = dim P₃ ⇒ nullity(T) = 4 − 2 = 2

44. The vector spaces isomorphic to R⁶ are those whose dimension is six. That is, (a) M₂,₃, (d) M₆,₁, (e) P₅, and (f) {(x₁, x₂, x₃, 0, x₅, x₆, x₇) : xᵢ ∈ R} are isomorphic to R⁶.

46. Solve the equation T(p) = ∫₀¹ p(x) dx = ∫₀¹ (a₀ + a₁x + a₂x²) dx = 0, yielding a₀ + a₁/2 + a₂/3 = 0. Letting a₂ = −3b and a₁ = −2a, you have a₀ = −a₁/2 − a₂/3 = a + b, and ker(T) = {(a + b) − 2ax − 3bx² : a, b ∈ R}.
48. First compute T(u) = proj_v u, for u = (x, y, z):
T(u) = proj_v u = [((x, y, z) · (3, 0, 4))/((3, 0, 4) · (3, 0, 4))](3, 0, 4) = [(3x + 4z)/25](3, 0, 4).
(a) Setting T(u) = 0, you have 3x + 4z = 0, and so nullity(T) = 2. So, rank(T) = 3 − 2 = 1.
(b) A basis for the kernel of T is obtained by solving 3x + 4z = 0. Letting t = z and s = y, you have x = (1/3)(−4z) = −(4/3)t. So, a basis for the kernel of T is {(0, 1, 0), (−4/3, 0, 1)}.
50. Because |A| = −1 ≠ 0, the homogeneous equation Ax = 0 has only the trivial solution. So, ker(T) = {(0, 0)} and T is one-to-one (by Theorem 6.6). Furthermore, because rank(T) = dim(R²) − nullity(T) = 2 − 0 = 2 = dim(R²), T is onto (by Theorem 6.7).

52. Because |A| = −24 ≠ 0, the homogeneous equation Ax = 0 has only the trivial solution. So, ker(T) = {(0, 0, 0)} and T is one-to-one (by Theorem 6.6). Furthermore, because rank(T) = dim(R³) − nullity(T) = 3 − 0 = 3 = dim(R³), T is onto (by Theorem 6.7).
54. (a) True. This is a nonempty subset of V because 0 is in it. Moreover, if c ∈ R and u and v are such that T(u) = 0 and T(v) = 0, then T(u + v) = T(u) + T(v) = 0 and T(cu) = cT(u) = 0. This proves that ker(T) is a subspace of V.
(b) False. A concept of the dimension of a linear transformation does not exist.
(c) True. See discussion on page 382 before Theorem 6.6.
(d) True. Because dim(P₁) = dim(R²) = 2 and any two vector spaces of equal finite dimension are isomorphic (Theorem 6.9 on page 384).

56. The kernel of T is given by T(A) = 0. So,
T(A) = A − Aᵀ = 0 ⇒ A = Aᵀ,
and ker(T) = {A : A = Aᵀ}, the set of n × n symmetric matrices.

58. Because T is a linear transformation between vector spaces of the same dimension, you only need to show that T is one-to-one. It is sufficient to show that ker(T) = {0}. Let A ∈ ker(T). Then T(A) = AB = 0. Use the fact that B is invertible to obtain
AB = 0 ⇒ (AB)B⁻¹ = 0 ⇒ A = 0.
So, ker(T) = {0} and T is one-to-one.

60. T⁻¹(U) is nonempty because T(0) = 0 ∈ U ⇒ 0 ∈ T⁻¹(U).
Let v₁, v₂ ∈ T⁻¹(U) ⇒ T(v₁) ∈ U and T(v₂) ∈ U. Because U is a subspace of W, T(v₁) + T(v₂) = T(v₁ + v₂) ∈ U ⇒ v₁ + v₂ ∈ T⁻¹(U).
Let v ∈ T⁻¹(U) and c ∈ R ⇒ T(v) ∈ U. Because U is a subspace of W, cT(v) = T(cv) ∈ U ⇒ cv ∈ T⁻¹(U).
If U = {0}, then T⁻¹(U) is the kernel of T.

62. If T is onto, then m ≥ n. If T is one-to-one, then m ≤ n.
Section 6.3 Matrices for Linear Transformations

2. Because T([1; 0]) = [3; −1] and T([0; 1]) = [2; 2], the standard matrix for T is
[3 2; −1 2].

4. Because T([1; 0]) = [4; 0; 2] and T([0; 1]) = [1; 0; −3], the standard matrix for T is
[4 1; 0 0; 2 −3].

6. Because T([1; 0; 0]) = [5; 0; 5], T([0; 1; 0]) = [−3; 4; 3], and T([0; 0; 1]) = [1; 2; 0], the standard matrix for T is
[5 −3 1; 0 4 2; 5 3 0].

8. Because T([1; 0; 0]) = [3; 0], T([0; 1; 0]) = [0; 2], and T([0; 0; 1]) = [−2; −1], the standard matrix for T is
[3 0 −2; 0 2 −1].

10. Because T([1; 0; 0]) = [0; 0; 0], T([0; 1; 0]) = [0; 0; 0], and T([0; 0; 1]) = [0; 0; 0], the standard matrix for T is
[0 0 0; 0 0 0; 0 0 0].

12. Because T([1; 0; 0]) = [2; 0], T([0; 1; 0]) = [1; 3], and T([0; 0; 1]) = [0; −1], the standard matrix for T is
[2 1 0; 0 3 −1].
So, T(v) = [2 1 0; 0 3 −1][0; 1; −1] = [1; 4] and T(0, 1, −1) = (1, 4).

14. Because T([1; 0]) = [1; 1; 0] and T([0; 1]) = [−1; 2; 1], the standard matrix for T is
[1 −1; 1 2; 0 1].
So, T(v) = [1 −1; 1 2; 0 1][2; −2] = [4; −2; −2] and T(2, −2) = (4, −2, −2).

16. Because T(e₁) = [2; 0; −1; 0], T(e₂) = [0; 3; 0; 1], T(e₃) = [−1; 0; 4; 0], and T(e₄) = [0; −4; 0; 1], the standard matrix for T is
[2 0 −1 0; 0 3 0 −4; −1 0 4 0; 0 1 0 1].
So, T(v) = [2 0 −1 0; 0 3 0 −4; −1 0 4 0; 0 1 0 1][1; 2; 3; −2] = [−1; 14; 11; 0] and T(1, 2, 3, −2) = (−1, 14, 11, 0).
18. (a) The matrix of a reflection in the line y = x, T(x, y) = (y, x), is given by
A = [T(1, 0) T(0, 1)] = [0 1; 1 0].
(b) The image of v = (3, 4) is given by
Av = [0 1; 1 0][3; 4] = [4; 3].
So, T(3, 4) = (4, 3).
(c) [Figure: v = (3, 4) and its image T(v) = (4, 3).]
20. (a) The matrix of a reflection in the x-axis, T(x, y) = (x, −y), is given by
A = [T(1, 0) T(0, 1)] = [1 0; 0 −1].
(b) The image of v = (4, −1) is given by
Av = [1 0; 0 −1][4; −1] = [4; 1].
So, T(4, −1) = (4, 1).
(c) [Figure: v = (4, −1) and its image T(v) = (4, 1).]
22. (a) The counterclockwise rotation of 120° is given by
T(x, y) = (x cos 120° − y sin 120°, x sin 120° + y cos 120°) = (−(1/2)x − (√3/2)y, (√3/2)x − (1/2)y).
So, the matrix is
A = [T(1, 0) T(0, 1)] = [−1/2 −√3/2; √3/2 −1/2].
(b) The image of v = (2, 2) is given by
Av = [−1/2 −√3/2; √3/2 −1/2][2; 2] = [−1 − √3; √3 − 1].
So, T(2, 2) = (−1 − √3, √3 − 1).
(c) [Figure: v = (2, 2) rotated 120° counterclockwise to T(v).]
24. (a) The clockwise rotation of 30° is given by
T(x, y) = (x cos(−30°) − y sin(−30°), x sin(−30°) + y cos(−30°)) = ((√3/2)x + (1/2)y, −(1/2)x + (√3/2)y).
So, the matrix is
A = [T(1, 0) T(0, 1)] = [√3/2 1/2; −1/2 √3/2].
(b) The image of v = (2, 1) is given by
Av = [√3/2 1/2; −1/2 √3/2][2; 1] = [√3 + 1/2; −1 + √3/2].
So, T(2, 1) = (√3 + 1/2, −1 + √3/2).
(c) [Figure: v = (2, 1) rotated 30° clockwise to T(v).]
26. (a) The matrix of a reflection through the yz-coordinate plane is given by
A = [T(1, 0, 0) T(0, 1, 0) T(0, 0, 1)] = [−1 0 0; 0 1 0; 0 0 1].
(b) The image of v = (2, 3, 4) is given by
Av = [−1 0 0; 0 1 0; 0 0 1][2; 3; 4] = [−2; 3; 4].
So, T(2, 3, 4) = (−2, 3, 4).
(c) [Figure: v = (2, 3, 4) and its image T(v) = (−2, 3, 4).]
28. (a) The counterclockwise rotation of 45° is given by
T(x, y) = (x cos 45° − y sin 45°, x sin 45° + y cos 45°) = ((√2/2)x − (√2/2)y, (√2/2)x + (√2/2)y).
So, the matrix is
A = [T(1, 0) T(0, 1)] = [√2/2 −√2/2; √2/2 √2/2].
(b) The image of v = (2, 2) is given by
Av = [√2/2 −√2/2; √2/2 √2/2][2; 2] = [0; 2√2].
So, T(2, 2) = (0, 2√2).
(c) [Figure: v = (2, 2) rotated 45° counterclockwise to T(v) = (0, 2√2).]
30. (a) The counterclockwise rotation of 180° is given by
T(x, y) = (x cos 180° − y sin 180°, x sin 180° + y cos 180°) = (−x, −y).
So, the matrix is
A = [T(1, 0) T(0, 1)] = [−1 0; 0 −1].
(b) The image of v = (1, 2) is given by
Av = [−1 0; 0 −1][1; 2] = [−1; −2].
So, T(1, 2) = (−1, −2).
(c) [Figure: v = (1, 2) and its image T(v) = (−1, −2).]
32. (a) The projection onto the vector (−1, 5) is given by
T(v) = proj_w v = [(−x + 5y)/26](−1, 5) = (−(1/26)(−x + 5y), (5/26)(−x + 5y)).
So, the matrix is
A = [T(1, 0) T(0, 1)] = [1/26 −5/26; −5/26 25/26].
(b) The image of v = (2, −3) is given by
Av = [1/26 −5/26; −5/26 25/26][2; −3] = [17/26; −85/26].
So, T(2, −3) = (17/26, −85/26).
(c) [Figure: v = (2, −3) and its projection T(v) onto the line spanned by (−1, 5).]

34. (a) The reflection of a vector v through w is given by
T(v) = 2 proj_w v − v = 2[(4x − 2y)/20](4, −2) − (x, y) = ((3/5)x − (4/5)y, −(4/5)x − (3/5)y).
So, the matrix is
A = [T(1, 0) T(0, 1)] = [3/5 −4/5; −4/5 −3/5].
(b) The image of v = (5, 0) is
Av = [3/5 −4/5; −4/5 −3/5][5; 0] = [3; −4].
So, T(5, 0) = (3, −4).
(c) [Figure: v = (5, 0) and its reflection T(v) = (3, −4) through the line spanned by (4, −2).]
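Both matrices above come from the projection formula proj_w v = (v · w / w · w)w, with the reflection obtained as 2 proj_w v − v. A sketch (the helpers `proj` and `reflect` are my own, used only to illustrate the formulas) with w = (4, −2) from Exercise 34:

```python
def proj(w, v):
    # Projection of v onto w in R^2: (v.w / w.w) * w
    k = (v[0] * w[0] + v[1] * w[1]) / (w[0] ** 2 + w[1] ** 2)
    return (k * w[0], k * w[1])

def reflect(w, v):
    # Reflection of v through the line spanned by w: 2 proj_w(v) - v
    p = proj(w, v)
    return (2 * p[0] - v[0], 2 * p[1] - v[1])

image = reflect((4, -2), (5, 0))   # (3.0, -4.0), matching Exercise 34(b)
```

Applying `proj` with w = (−1, 5) and v = (2, −3) likewise reproduces the image (17/26, −85/26) from Exercise 32(b).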
36. (a) The standard matrix for T is
A = [3 −2 1; 2 −3 0; 0 1 −4].
(b) The image of v = (2, −1, −1) is
Av = [3 −2 1; 2 −3 0; 0 1 −4][2; −1; −1] = [7; 7; 3].
So, T(2, −1, −1) is (7, 7, 3).
(c) Using a graphing utility or a computer software program to perform the multiplication in part (b) gives the same result.
38. (a) The standard matrix for T is
A = [1 2 0 0; −1 1 0 0; 0 0 2 −1; 1 0 0 0].
(b) The image of v = (0, 1, −1, 1) is
Av = [1 2 0 0; −1 1 0 0; 0 0 2 −1; 1 0 0 0][0; 1; −1; 1] = [2; 1; −3; 0].
So, T(0, 1, −1, 1) = (2, 1, −3, 0).
(c) Using a graphing utility or a computer software program to perform the multiplication in part (b) gives the same result.

40. The standard matrices for T₁ and T₂ are
A₁ = [1 −2; 2 3] and A₂ = [0 1; 0 0].
The standard matrix for T = T₂ ∘ T₁ is
A₂A₁ = [0 1; 0 0][1 −2; 2 3] = [2 3; 0 0],
and the standard matrix for T′ = T₁ ∘ T₂ is
A₁A₂ = [1 −2; 2 3][0 1; 0 0] = [0 1; 0 2].

42. The standard matrices for T₁ and T₂ are
A₁ = [1 2 0; 0 1 −1; −2 1 2] and A₂ = [0 1 1; 1 0 1; 0 2 −2].
The standard matrix for T = T₂ ∘ T₁ is
A₂A₁ = [0 1 1; 1 0 1; 0 2 −2][1 2 0; 0 1 −1; −2 1 2] = [−2 2 1; −1 3 2; 4 0 −6],
and the standard matrix for T′ = T₁ ∘ T₂ is
A₁A₂ = [1 2 0; 0 1 −1; −2 1 2][0 1 1; 1 0 1; 0 2 −2] = [2 1 3; 1 −2 3; 1 2 −5].

44. The standard matrices for T₁ and T₂ are
A₁ = [1 0; 0 1; 0 1] and A₂ = [0 1 0; 0 0 1].
The standard matrix for T = T₂ ∘ T₁ is
A₂A₁ = [0 1 0; 0 0 1][1 0; 0 1; 0 1] = [0 1; 0 1],
and the standard matrix for T′ = T₁ ∘ T₂ is
A₁A₂ = [1 0; 0 1; 0 1][0 1 0; 0 0 1] = [0 1 0; 0 0 1; 0 0 1].

46. The standard matrix for T is
A = [1 2; 1 −2].
Because |A| = −4 ≠ 0, A is invertible:
A⁻¹ = [1/2 1/2; 1/4 −1/4],
and conclude that T⁻¹(x, y) = ((x + y)/2, (x − y)/4).
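As the composition exercises show, T₂ ∘ T₁ has standard matrix A₂A₁ (the matrix of the transformation applied first goes on the right), and the two orders generally differ. A sketch with the matrices of Exercise 40 (the multiplication helper is my own):

```python
def matmul(A, B):
    # Product of two 2x2 matrices
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A1 = [[1, -2], [2, 3]]   # standard matrix of T1
A2 = [[0, 1], [0, 0]]    # standard matrix of T2

T2_after_T1 = matmul(A2, A1)   # [[2, 3], [0, 0]]
T1_after_T2 = matmul(A1, A2)   # [[0, 1], [0, 2]]
# The two products differ, so T2 ∘ T1 ≠ T1 ∘ T2
```

The same two-line check applied to the 3 × 3 matrices of Exercise 42 reproduces A₂A₁ and A₁A₂ there as well.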
48. The standard matrix for T is A = [1 1 0; 0 1 1; 1 0 1]. Because |A| = 2 ≠ 0, A is invertible, with
A⁻¹ = [1/2 −1/2 1/2; 1/2 1/2 −1/2; −1/2 1/2 1/2],
and you can conclude that
T⁻¹(x1, x2, x3) = ((x1 − x2 + x3)/2, (x1 + x2 − x3)/2, (−x1 + x2 + x3)/2).
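A short NumPy sketch (my own check, assuming NumPy is available) confirms the inverse in Exercise 48; the rows of A⁻¹ are the coefficient patterns in the formula for T⁻¹:

```python
import numpy as np

# Exercise 48: the inverse transformation comes from inverting A.
A = np.array([[1., 1., 0.],
              [0., 1., 1.],
              [1., 0., 1.]])
A_inv = np.linalg.inv(A)
# Each row of A_inv gives one component formula of T^{-1}(x1, x2, x3).
assert np.allclose(A_inv, 0.5 * np.array([[ 1, -1,  1],
                                          [ 1,  1, -1],
                                          [-1,  1,  1]]))
x = np.array([1., 2., 3.])
assert np.allclose(A_inv @ (A @ x), x)   # T^{-1}(T(x)) = x
```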
50. The standard matrix for T is
A = [0 0; 0 −1].
Because |A| = 0, A is not invertible, and so T is not invertible.

52. The standard matrix for T is
A = [0 0 0 1; 0 0 1 0; 0 1 0 0; 1 0 0 0].
Because |A| = 1 ≠ 0, A is invertible, with
A⁻¹ = [0 0 0 1; 0 0 1 0; 0 1 0 0; 1 0 0 0],
and you can conclude that T⁻¹(x1, x2, x3, x4) = (x4, x3, x2, x1).

54. The standard matrix for T is
A = [−2 0; 0 2].
Because |A| = −4 ≠ 0, A is invertible, with
A⁻¹ = [−1/2 0; 0 1/2],
and you can conclude that T⁻¹(x, y) = (−x/2, y/2).

56. The standard matrix for T is
A = [1 4; 1 −4].
Because |A| = −8 ≠ 0, A is invertible, with
A⁻¹ = [1/2 1/2; 1/8 −1/8],
and you can conclude that T⁻¹(x, y) = ((x + y)/2, (x − y)/8).

58. (a) The standard matrix for T is
A′ = [1 −1; 0 0; 1 1]
and the image of v = (−3, 2) under T is
A′v = [1 −1; 0 0; 1 1][−3; 2] = [−5; 0; −1] ⇒ T(v) = (−5, 0, −1).
(b) Because
T(1, 2) = (−1, 0, 3) = 2(1, 1, 1) − 3(1, 1, 0) + 1(0, 1, 1)
T(1, 1) = (0, 0, 2) = 2(1, 1, 1) − 2(1, 1, 0) + 0(0, 1, 1),
the matrix of T relative to B and B′ is
A = [2 2; −3 −2; 1 0].
Because v = (−3, 2) = 5(1, 2) − 8(1, 1), you have
[T(v)]_B′ = A[v]_B = [2 2; −3 −2; 1 0][5; −8] = [−6; 1; 5].
So, T(v) = −6(1, 1, 1) + 1(1, 1, 0) + 5(0, 1, 1) = (−5, 0, −1).
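The coordinate bookkeeping in Exercise 58 can be reproduced numerically (a NumPy sketch of my own; variable names are assumptions). Solving against the matrix whose columns are the B′ basis vectors converts standard coordinates to B′-coordinates:

```python
import numpy as np

# Exercise 58: matrix of T relative to B = {(1,2),(1,1)} and
# B' = {(1,1,1),(1,1,0),(0,1,1)}, then the image of v = (-3, 2).
A_std = np.array([[1., -1.],
                  [0.,  0.],
                  [1.,  1.]])                       # standard matrix of T
B  = np.column_stack([(1., 2.), (1., 1.)])          # B basis vectors as columns
Bp = np.column_stack([(1., 1., 1.), (1., 1., 0.), (0., 1., 1.)])

# Columns of A are the B'-coordinates of T applied to each basis vector of B.
A = np.linalg.solve(Bp, A_std @ B)
assert np.allclose(A, [[2, 2], [-3, -2], [1, 0]])

v_B = np.linalg.solve(B, np.array([-3., 2.]))       # coordinates of v in B
assert np.allclose(v_B, [5, -8])
Tv = Bp @ (A @ v_B)                                 # back to standard coordinates
assert np.allclose(Tv, [-5, 0, -1])
```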
60. (a) The standard matrix for T is
A′ = [2 0 −1; −2 1 0]
and the image of v = (0, −5, 7) under T is
A′v = [2 0 −1; −2 1 0][0; −5; 7] = [−7; −5] ⇒ T(0, −5, 7) = (−7, −5).
(b) Because
T(2, 0, 1) = (3, −4) = −4(1, 1) + (7/2)(2, 0)
T(0, 2, 1) = (−1, 2) = 2(1, 1) − (3/2)(2, 0)
T(1, 2, 1) = (1, 0) = 0(1, 1) + (1/2)(2, 0),
the matrix for T relative to B and B′ is
A = [−4 2 0; 7/2 −3/2 1/2].
Because v = (0, −5, 7) = (19/2)(2, 0, 1) + (33/2)(0, 2, 1) − 19(1, 2, 1), you have
[T(v)]_B′ = A[v]_B = [−4 2 0; 7/2 −3/2 1/2][19/2; 33/2; −19] = [−5; −1].
So, T(v) = −5(1, 1) − 1(2, 0) = (−7, −5).
62. (a) The standard matrix for T is
A′ = [1 1 1 1; −1 0 0 1]
and the image of v = (4, −3, 1, 1) under T is
A′v = [1 1 1 1; −1 0 0 1][4; −3; 1; 1] = [3; −3] ⇒ T(v) = (3, −3).
(b) Because
T(1, 0, 0, 1) = (2, 0) = 0(1, 1) + 1(2, 0)
T(0, 1, 0, 1) = (2, 1) = 1(1, 1) + (1/2)(2, 0)
T(1, 0, 1, 0) = (2, −1) = −1(1, 1) + (3/2)(2, 0)
T(1, 1, 0, 0) = (2, −1) = −1(1, 1) + (3/2)(2, 0),
the matrix for T relative to B and B′ is
A = [0 1 −1 −1; 1 1/2 3/2 3/2].
Because v = (4, −3, 1, 1) = (7/2)(1, 0, 0, 1) − (5/2)(0, 1, 0, 1) + 1(1, 0, 1, 0) − (1/2)(1, 1, 0, 0), you have
[T(v)]_B′ = A[v]_B = [0 1 −1 −1; 1 1/2 3/2 3/2][7/2; −5/2; 1; −1/2] = [−3; 3].
So, T(v) = −3(1, 1) + 3(2, 0) = (3, −3).
64. (a) The standard matrix for T is A′ = [2 −12; 1 −5], and the image of v = (10, 5) under T is
A′v = [2 −12; 1 −5][10; 5] = [−40; −15] ⇒ T(v) = (−40, −15).
(b) Because
T(4, 1) = (−4, −1) = −1(4, 1)
T(3, 1) = (−6, −2) = −2(3, 1),
the matrix for T relative to B and B′ is A = [−1 0; 0 −2].
Because v = (10, 5) = −5(4, 1) + 10(3, 1), you have
[T(v)]_B′ = A[v]_B = [−1 0; 0 −2][−5; 10] = [5; −20].
So, T(v) = 5(4, 1) − 20(3, 1) = (−40, −15).
66. The image of each vector in B is as follows: T(1) = x², T(x) = x³, T(x²) = x⁴.
So, the matrix of T relative to B and B′ is
A = [0 0 0; 0 0 0; 1 0 0; 0 1 0; 0 0 1].

68. The image of each vector in B is as follows.
D(e^(2x)) = 2e^(2x)
D(xe^(2x)) = e^(2x) + 2xe^(2x)
D(x²e^(2x)) = 2xe^(2x) + 2x²e^(2x)
So, the matrix of T relative to B is A = [2 1 0; 0 2 2; 0 0 2].

70. Because 5e^(2x) − 3xe^(2x) + x²e^(2x) = 5(e^(2x)) − 3(xe^(2x)) + 1(x²e^(2x)),
A[v]_B = [2 1 0; 0 2 2; 0 0 2][5; −3; 1] = [7; −4; 2] ⇒ Dx(5e^(2x) − 3xe^(2x) + x²e^(2x)) = 7e^(2x) − 4xe^(2x) + 2x²e^(2x).
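Differentiation on span{e^(2x), xe^(2x), x²e^(2x)} reduces to a matrix acting on coefficient vectors, as Exercise 70 shows; a brief NumPy sketch (mine, not the text's) makes the point concrete:

```python
import numpy as np

# Exercises 68/70: the derivative as the matrix A on coefficient vectors.
A = np.array([[2, 1, 0],
              [0, 2, 2],
              [0, 0, 2]])
coeffs = np.array([5, -3, 1])   # 5e^(2x) - 3x e^(2x) + x^2 e^(2x)
result = A @ coeffs             # coefficients of the derivative
assert (result == np.array([7, -4, 2])).all()
# i.e., the derivative is 7e^(2x) - 4x e^(2x) + 2x^2 e^(2x)
```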
72. (a) True. See Theorem 6.10 on page 388.
(b) True. See Theorem 6.11 on page 391.
(c) False. Let the linear transformation T : R² → R² be given by T(x, y) = (x − y, −2x + 2y). Then the standard matrix for T is
[1 −1; −2 2],
which is not invertible. So, by Theorem 6.12 on page 393, T is not invertible.
74. Because T(v) = kv for all v ∈ Rⁿ, the standard matrix for T is the n × n diagonal matrix with k in every diagonal entry:
A = [k 0 … 0; 0 k … 0; … ; 0 0 … k].

76. Because |T| ≠ 0, T is invertible and T is an isomorphism. The inverse of T is computed by Gauss-Jordan elimination applied to A, and A⁻¹ = Aᵀ, so the inverse of T exists and T⁻¹ has the standard matrix Aᵀ; you find that T⁻¹ = Tᵀ, the transpose of T.

78. (1 ⇒ 2): Let T be invertible. If T(v1) = T(v2), then T⁻¹(T(v1)) = T⁻¹(T(v2)) and v1 = v2, so T is one-to-one. T is onto because for any w ∈ Rⁿ, v = T⁻¹(w) satisfies T(v) = w.
(2 ⇒ 1): Let T be an isomorphism. Define T⁻¹ as follows: Because T is onto, for any w ∈ Rⁿ there exists v ∈ Rⁿ such that T(v) = w. Because T is one-to-one, this v is unique. So, define the inverse of T by T⁻¹(w) = v if and only if T(v) = w.
Finally, the corollaries to Theorems 6.3 and 6.4 show that 2 and 3 are equivalent. If T is invertible, T(x) = Ax implies that T⁻¹(T(x)) = x = A⁻¹(Ax), and the standard matrix of T⁻¹ is A⁻¹.
80. b is in the range of the linear transformation T : R n → R m given by T ( x) = Ax if and only if b is in the column space of A.
Section 6.4   Transition Matrices and Similarity

2. (a) The standard matrix for T is A = [2 1; 1 −2]. Furthermore, the transition matrix P from B′ to the standard basis B, and its inverse, are
P = [1 0; 2 4] and P⁻¹ = [1 0; −1/2 1/4].
Therefore, the matrix for T relative to B′ is
A′ = P⁻¹AP = [1 0; −1/2 1/4][2 1; 1 −2][1 0; 2 4] = [4 4; −11/4 −4].
(b) Because A′ = P⁻¹AP, it follows that A and A′ are similar.

4. (a) The standard matrix for T is A = [1 −2; 4 0]. Furthermore, the transition matrix P from B′ to the standard basis B, and its inverse, are
P = [−2 −1; 1 1] and P⁻¹ = [−1 −1; 1 2].
Therefore, the matrix for T relative to B′ is
A′ = P⁻¹AP = [−1 −1; 1 2][1 −2; 4 0][−2 −1; 1 1] = [12 7; −20 −11].
(b) Because A′ = P⁻¹AP, it follows that A and A′ are similar.
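The change-of-basis product A′ = P⁻¹AP is easy to verify numerically (a NumPy sketch of my own, using the data of Exercise 4):

```python
import numpy as np

# Exercise 4: the matrix of T relative to B' is A' = P^{-1} A P.
A = np.array([[1., -2.],
              [4.,  0.]])
P = np.array([[-2., -1.],
              [ 1.,  1.]])
A_prime = np.linalg.inv(P) @ A @ P
assert np.allclose(A_prime, [[12, 7], [-20, -11]])
```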
6. (a) The standard matrix for T is the zero matrix A = [0 0 0; 0 0 0; 0 0 0]. Furthermore, the transition matrix P from B′ to the standard basis B, and its inverse, are
P = [1 1 0; 1 0 1; 0 1 1] and P⁻¹ = (1/2)[1 1 −1; 1 −1 1; −1 1 1].
Therefore, the matrix for T relative to B′ is
A′ = P⁻¹AP = [0 0 0; 0 0 0; 0 0 0].
(b) Because A′ = P⁻¹AP, it follows that A and A′ are similar.

8. (a) The standard matrix for T is A = [1 0 0; 1 2 0; 1 1 3]. Furthermore, the transition matrix P from B′ to the standard basis B, and its inverse, are
P = [1 0 0; −1 0 1; 0 1 −1] and P⁻¹ = [1 0 0; 1 1 1; 1 1 0].
Therefore, the matrix for T relative to B′ is
A′ = P⁻¹AP = [1 0 0; 1 1 1; 1 1 0][1 0 0; 1 2 0; 1 1 3][1 0 0; −1 0 1; 0 1 −1] = [1 0 0; 0 3 0; 0 0 2].
(b) Because A′ = P⁻¹AP, it follows that A and A′ are similar.

10. (a) The transition matrix P from B′ to B is found by row-reducing [B B′] to [I P]:
[B B′] = [1 −2 | 1 0; 1 3 | −1 1] ⇒ [I P] = [1 0 | 1/5 2/5; 0 1 | −2/5 1/5].
So, P = [1/5 2/5; −2/5 1/5].
(b) The coordinate matrix for v relative to B is
[v]_B = P[v]_B′ = [1/5 2/5; −2/5 1/5][1; −3] = [−1; −1].
Furthermore, the image of v under T relative to B is
[T(v)]_B = A[v]_B = [3 2; 0 4][−1; −1] = [−5; −4].
(c) The matrix of T relative to B′ is
A′ = P⁻¹AP = [1 −2; 2 1][3 2; 0 4][1/5 2/5; −2/5 1/5] = [3 0; −2 4].
(d) The image of v under T relative to B′ is
P⁻¹[T(v)]_B = [1 −2; 2 1][−5; −4] = [3; −14].
You can also find the image of v under T relative to B′ by
A′[v]_B′ = [3 0; −2 4][1; −3] = [3; −14].
12. (a) The transition matrix P from B′ to B is found by row-reducing [B B′] to [I P]. Because B is the standard basis, you have P = B′:
P = [1 1 −1; 1 −1 1; −1 1 1].
(b) The coordinate matrix for v relative to B is
[v]_B = P[v]_B′ = [1 1 −1; 1 −1 1; −1 1 1][2; 1; 1] = [2; 2; 0].
Furthermore, the image of v under T relative to B is
[T(v)]_B = A[v]_B = [3/2 −1 −1/2; −1/2 2 1/2; 1/2 1 5/2][2; 2; 0] = [1; 3; 3].
(c) The matrix of T relative to B′ is
A′ = P⁻¹AP = [1/2 1/2 0; 1/2 0 1/2; 0 1/2 1/2][3/2 −1 −1/2; −1/2 2 1/2; 1/2 1 5/2][1 1 −1; 1 −1 1; −1 1 1] = [1 0 0; 0 2 0; 0 0 3].
(d) The image of v under T relative to B′ is
P⁻¹[T(v)]_B = [1/2 1/2 0; 1/2 0 1/2; 0 1/2 1/2][1; 3; 3] = [2; 2; 3].
You can also find the image of v under T relative to B′ by
A′[v]_B′ = [1 0 0; 0 2 0; 0 0 3][2; 1; 1] = [2; 2; 3].

14. (a) The transition matrix P from B′ to B is found by row-reducing [B B′] to [I P]:
P = [−1 −5; 0 −3].
(b) The coordinate matrix for v relative to B is
[v]_B = P[v]_B′ = [−1 −5; 0 −3][1; −4] = [19; 12].
Furthermore, the image of v under T relative to B is
[T(v)]_B = A[v]_B = [2 1; 0 −1][19; 12] = [50; −12].
(c) The matrix of T relative to B′ is
A′ = P⁻¹AP = [−1 5/3; 0 −1/3][2 1; 0 −1][−1 −5; 0 −3] = [2 18; 0 −1].
(d) The image of v under T relative to B′ is
P⁻¹[T(v)]_B = [−1 5/3; 0 −1/3][50; −12] = [−70; 4].
You can also find the image of v under T relative to B′ by
A′[v]_B′ = [2 18; 0 −1][1; −4] = [−70; 4].
16. First, note that A and B are similar:
B = P⁻¹AP = [−1 −1 2; 0 −1 2; 1 2 −3][1 0 0; 0 −2 0; 0 0 3][−1 1 0; 2 1 2; 1 1 1] = [11 7 10; 10 8 10; −18 −12 −17].
Now,
|B| = 11(−16) − 7(10) + 10(24) = −6 = |A|.

18. Because B = P⁻¹AP and A⁴ = [1 0; 0 2]⁴ = [1 0; 0 16], you have
B⁴ = P⁻¹A⁴P = [3 −5; −1 2][1 0; 0 16][2 5; 1 3] = [−74 −225; 30 91].

20. If B = P⁻¹AP and A is idempotent, then
B² = (P⁻¹AP)² = (P⁻¹AP)(P⁻¹AP) = P⁻¹A²P = P⁻¹AP = B,
which shows that B is idempotent.

22. If Ax = x and B = P⁻¹AP, then PB = AP and PBP⁻¹ = A. So, PBP⁻¹x = Ax = x.

24. Because A and B are similar, they represent the same linear transformation with respect to different bases. So, the range is the same, and so is the rank.

26. As in Exercise 25, if B = P⁻¹AP, then
B^k = (P⁻¹AP)^k = (P⁻¹AP)(P⁻¹AP) ⋯ (P⁻¹AP)   (k times)
    = P⁻¹A^k P,
which shows that A^k is similar to B^k.

28. Because B = P⁻¹AP, you have AP = PB, as follows:
[a11 … a1n; … ; an1 … ann][p11 … p1n; … ; pn1 … pnn] = [p11 … p1n; … ; pn1 … pnn][b11 … 0; … ; 0 … bnn].
So,
[a11 … a1n; … ; an1 … ann][p1i; … ; pni] = bii [p1i; … ; pni]
for i = 1, 2, …, n.

30. (a) True. See page 400.
(b) False. If T is a linear transformation with matrices A and A′ relative to bases B and B′ respectively, then A′ = P −1 AP, where P is the transition matrix from B′ to B. Therefore, two matrices representing the same linear transformation must be similar.
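The power identity used in Exercise 18 (and generalized in Exercise 26) can be checked numerically; this is a NumPy sketch of my own, not part of the manual:

```python
import numpy as np

# Exercise 18: similar matrices have similar powers, B^4 = P^{-1} A^4 P.
A = np.array([[1, 0],
              [0, 2]])
P = np.array([[2, 5],
              [1, 3]])
P_inv = np.array([[ 3, -5],
                  [-1,  2]])        # exact inverse, since det(P) = 1
B4 = P_inv @ np.linalg.matrix_power(A, 4) @ P
assert (B4 == np.array([[-74, -225], [30, 91]])).all()
```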
Section 6.5   Applications of Linear Transformations

2. The standard matrix for T is A = [−1 0; 0 1].
(a) [−1 0; 0 1][2; 5] = [−2; 5] ⇒ T(2, 5) = (−2, 5)
(b) [−1 0; 0 1][−4; −1] = [4; −1] ⇒ T(−4, −1) = (4, −1)
(c) [−1 0; 0 1][a; 0] = [−a; 0] ⇒ T(a, 0) = (−a, 0)
(d) [−1 0; 0 1][0; b] = [0; b] ⇒ T(0, b) = (0, b)
(e) [−1 0; 0 1][c; −d] = [−c; −d] ⇒ T(c, −d) = (−c, −d)
(f) [−1 0; 0 1][f; g] = [−f; g] ⇒ T(f, g) = (−f, g)

4. The standard matrix for T is A = [0 −1; −1 0].
(a) [0 −1; −1 0][−1; 2] = [−2; 1] ⇒ T(−1, 2) = (−2, 1)
(b) [0 −1; −1 0][2; 3] = [−3; −2] ⇒ T(2, 3) = (−3, −2)
(c) [0 −1; −1 0][a; 0] = [0; −a] ⇒ T(a, 0) = (0, −a)
(d) [0 −1; −1 0][0; b] = [−b; 0] ⇒ T(0, b) = (−b, 0)
(e) [0 −1; −1 0][e; −d] = [d; −e] ⇒ T(e, −d) = (d, −e)
(f) [0 −1; −1 0][−f; g] = [−g; f] ⇒ T(−f, g) = (−g, f)

6. (a) T(x, y) = xT(1, 0) + yT(0, 1) = x(2, 0) + y(0, 1) = (2x, y).
(b) T is a horizontal expansion.

8. T(x, y) = (x/4, y)
(a) Identify T as a horizontal contraction from its standard matrix A = [1/4 0; 0 1].
(b) A sketch shows the point (x, y) mapped to (x/4, y).

10. T(x, y) = (x, 2y)
(a) Identify T as a vertical expansion from its standard matrix A = [1 0; 0 2].
(b) A sketch shows the point (x, y) mapped to (x, 2y).

12. T(x, y) = (x + 4y, y)
(a) Identify T as a horizontal shear from its standard matrix A = [1 4; 0 1].
(b) A sketch shows the point (x, y) mapped to (x + 4y, y).
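Any of these 2 × 2 standard matrices can be applied to a point by matrix-vector multiplication; here is a NumPy sketch of mine using two of them:

```python
import numpy as np

# Applying two of the standard matrices above to the point (2, 5).
reflect_y_axis = np.array([[-1, 0], [0, 1]])     # Exercise 2
horizontal_shear = np.array([[1, 4], [0, 1]])    # Exercise 12
p = np.array([2, 5])
assert (reflect_y_axis @ p == np.array([-2, 5])).all()    # T(2, 5) = (-2, 5)
assert (horizontal_shear @ p == np.array([22, 5])).all()  # T(2, 5) = (22, 5)
```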
14. T(x, y) = (x, 4x + y)
(a) Identify T as a vertical shear from its standard matrix A = [1 0; 4 1].
(b) A sketch shows the point (x, y) mapped to (x, 4x + y).

16. The reflection in the x-axis is given by T(x, y) = (x, −y). If (x, y) is a fixed point, then T(x, y) = (x, y) = (x, −y), which implies that y = 0. So, the set of fixed points is {(t, 0) : t is real}.

18. The reflection in the line y = −x is given by T(x, y) = (−y, −x). If (x, y) is a fixed point, then T(x, y) = (x, y) = (−y, −x), which implies that y = −x. So, the set of fixed points is {(t, −t) : t is real}.

20. A horizontal expansion has the standard matrix A = [k 0; 0 1], where k > 1. A fixed point of T satisfies the equation
T(v) = [k 0; 0 1][v1; v2] = [kv1; v2] = [v1; v2] = v,
which implies that v1 = 0. So, the fixed points of T are {(0, t) : t is real}.

22. A vertical shear has the form T(x, y) = (x, y + kx). If (x, y) is a fixed point, then T(x, y) = (x, y) = (x, y + kx), which implies that x = 0. So, the set of fixed points is {(0, t) : t is real}.

24. Find the image of each vertex under T(x, y) = (y, x).
T(0, 0) = (0, 0), T(1, 0) = (0, 1), T(1, 1) = (1, 1), T(0, 1) = (1, 0)
A sketch shows the image of the unit square (the square reflected in the line y = x).

26. Find the image of each vertex under T(x, y) = (x, 3y).
T(0, 0) = (0, 0), T(1, 0) = (1, 0), T(1, 1) = (1, 3), T(0, 1) = (0, 3)
A sketch shows the image of the unit square under T.

28. Find the image of each vertex under T(x, y) = (x, y + 3x).
T(0, 0) = (0, 0), T(1, 0) = (1, 3), T(1, 1) = (1, 4), T(0, 1) = (0, 1)

30. Find the image of each vertex under T(x, y) = (y, x).
T(0, 0) = (0, 0), T(0, 2) = (2, 0), T(1, 2) = (2, 1), T(1, 0) = (0, 1)

32. Find the image of each vertex under T(x, y) = (2x, y).
T(0, 0) = (0, 0), T(0, 2) = (0, 2), T(1, 2) = (2, 2), T(1, 0) = (2, 0)
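All four vertices of a square can be transformed at once by storing them as the columns of a matrix; a NumPy sketch of mine, using the map from Exercise 26:

```python
import numpy as np

# Exercise 26: the unit square's vertices as columns, mapped by T(x, y) = (x, 3y).
A = np.array([[1, 0],
              [0, 3]])
square = np.array([[0, 1, 1, 0],
                   [0, 0, 1, 1]])          # columns are the vertices
images = A @ square
assert (images == np.array([[0, 1, 1, 0],
                            [0, 0, 3, 3]])).all()
```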
34. Find the image of each vertex under T(x, y) = (x, y + 2x).
T(0, 0) = (0, 0), T(0, 2) = (0, 2), T(1, 2) = (1, 4), T(1, 0) = (1, 2)

36. Find the image of each vertex under T(x, y) = (x, x + y).
(a) T(0, 0) = (0, 0), T(1, 2) = (1, 3), T(3, 6) = (3, 9), T(5, 2) = (5, 7), T(6, 0) = (6, 6)
(b) T(0, 0) = (0, 0), T(0, 6) = (0, 6), T(6, 6) = (6, 12), T(6, 0) = (6, 6)

38. Find the image of each vertex under T(x, y) = (x/2, 2y).
(a) T(0, 0) = (0, 0), T(1, 2) = (1/2, 4), T(3, 6) = (3/2, 12), T(5, 2) = (5/2, 4), T(6, 0) = (3, 0)
(b) T(0, 0) = (0, 0), T(0, 6) = (0, 12), T(6, 6) = (3, 12), T(6, 0) = (3, 0)

40. The images of the given vectors are as follows.
A[1; 0] = [3 0; 0 3][1; 0] = [3; 0] ⇒ T(1, 0) = (3, 0)
A[0; 1] = [3 0; 0 3][0; 1] = [0; 3] ⇒ T(0, 1) = (0, 3)
A[2; 2] = [3 0; 0 3][2; 2] = [6; 6] ⇒ T(2, 2) = (6, 6)

42. The linear transformation defined by A is a vertical shear.

44. The linear transformation defined by A is a horizontal shear.

46. The linear transformation defined by A is a reflection in the y-axis.
48. Because [1 0; 0 3] represents a vertical expansion, and [0 1; 1 0] represents a reflection in the line x = y, A is a vertical expansion followed by a reflection in the line x = y.

50. A rotation of 60° about the x-axis is given by the matrix
A = [1 0 0; 0 cos 60° −sin 60°; 0 sin 60° cos 60°] = [1 0 0; 0 1/2 −√3/2; 0 √3/2 1/2].

52. A rotation of 120° about the x-axis is given by the matrix
A = [1 0 0; 0 cos 120° −sin 120°; 0 sin 120° cos 120°] = [1 0 0; 0 −1/2 −√3/2; 0 √3/2 −1/2].

54. Using the matrix obtained in Exercise 50, you find
T(1, 1, 1) = [1 0 0; 0 1/2 −√3/2; 0 √3/2 1/2][1; 1; 1] = [1; (1 − √3)/2; (1 + √3)/2].

56. Using the matrix obtained in Exercise 52, you find
T(1, 1, 1) = [1 0 0; 0 −1/2 −√3/2; 0 √3/2 −1/2][1; 1; 1] = [1; (−1 − √3)/2; (√3 − 1)/2].

58. The indicated tetrahedron is produced by a −90° rotation about the z-axis.

60. The indicated tetrahedron is produced by a 180° rotation about the z-axis.

62. The indicated tetrahedron is produced by a 180° rotation about the x-axis.

64. A rotation of 45° about the y-axis is given by
A1 = [cos 45° 0 sin 45°; 0 1 0; −sin 45° 0 cos 45°] = [√2/2 0 √2/2; 0 1 0; −√2/2 0 √2/2],
while a rotation of 90° about the z-axis is given by
A2 = [cos 90° −sin 90° 0; sin 90° cos 90° 0; 0 0 1] = [0 −1 0; 1 0 0; 0 0 1].
So, the desired matrix is
A = A2A1 = [0 −1 0; 1 0 0; 0 0 1][√2/2 0 √2/2; 0 1 0; −√2/2 0 √2/2] = [0 −1 0; √2/2 0 √2/2; −√2/2 0 √2/2].
The image of the line segment from (0, 0, 0) to (1, 1, 1) is obtained by computing
A[0; 0; 0] = [0; 0; 0] and A[1; 1; 1] = [−1; √2; 0].
So, the image is the line segment from (0, 0, 0) to (−1, √2, 0).
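The composed rotation of Exercise 64 is easy to build numerically from the two factor rotations; this is a NumPy sketch of my own, not part of the text:

```python
import numpy as np

# Exercise 64: rotate 45 degrees about the y-axis, then 90 degrees about z.
t = np.pi / 4
A1 = np.array([[ np.cos(t), 0., np.sin(t)],
               [ 0.,        1., 0.       ],
               [-np.sin(t), 0., np.cos(t)]])        # 45-degree rotation about y
A2 = np.array([[0., -1., 0.],
               [1.,  0., 0.],
               [0.,  0., 1.]])                      # 90-degree rotation about z
A = A2 @ A1
image = A @ np.array([1., 1., 1.])
assert np.allclose(image, [-1., np.sqrt(2), 0.])    # endpoint (-1, sqrt(2), 0)
```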
68. A rotation of 45° about the z-axis is given by
A1 = [cos 45° −sin 45° 0; sin 45° cos 45° 0; 0 0 1] = [√2/2 −√2/2 0; √2/2 √2/2 0; 0 0 1],
while a rotation of 135° about the x-axis is given by
A2 = [1 0 0; 0 cos 135° −sin 135°; 0 sin 135° cos 135°] = [1 0 0; 0 −√2/2 −√2/2; 0 √2/2 −√2/2].
So, the desired matrix is
A = A2A1 = [1 0 0; 0 −√2/2 −√2/2; 0 √2/2 −√2/2][√2/2 −√2/2 0; √2/2 √2/2 0; 0 0 1] = [√2/2 −√2/2 0; −1/2 −1/2 −√2/2; 1/2 1/2 −√2/2].
The image of the line segment from (0, 0, 0) to (1, 1, 1) is obtained by computing
A[0; 0; 0] = [0; 0; 0] and A[1; 1; 1] = [0; (−2 − √2)/2; (2 − √2)/2].
So, the image is the line segment from (0, 0, 0) to (0, (−2 − √2)/2, (2 − √2)/2).
Review Exercises for Chapter 6

2. (a) T(v) = T(4, −1) = (3, −2)
(b) T(v1, v2) = (v1 + v2, 2v2) = (8, 4)
v1 + v2 = 8
2v2 = 4
so v2 = 2 and v1 = 6. The preimage of w is (6, 2).

4. (a) T(v) = T(−2, 1, 2) = (−1, 3, 2)
(b) T(v1, v2, v3) = (v1 + v2, v2 + v3, v3) = (0, 1, 2)
v1 + v2 = 0
v2 + v3 = 1
v3 = 2
so v2 = −1 and v1 = 1. The preimage of w is (1, −1, 2).

6. T does not preserve addition or scalar multiplication, and hence T is not a linear transformation. A counterexample is
T(1, 1) + T(1, 0) = (4, 1) + (4, 0) = (8, 1) ≠ (5, 1) = T(2, 1).

8. T(x, y) = (x + y, y)
T preserves addition:
T(x1, y1) + T(x2, y2) = (x1 + y1, y1) + (x2 + y2, y2) = (x1 + y1 + x2 + y2, y1 + y2) = ((x1 + x2) + (y1 + y2), y1 + y2) = T(x1 + x2, y1 + y2).
T preserves scalar multiplication:
cT(x, y) = c(x + y, y) = (cx + cy, cy) = T(cx, cy).
So, T is a linear transformation with standard matrix A = [1 1; 0 1].
10. T does not preserve addition or scalar multiplication, and so T is not a linear transformation. A counterexample is
−2T(3, −3) = −2(|3|, |−3|) = (−6, −6) ≠ (6, 6) = T(−6, 6) = T(−2(3), −2(−3)).

12. T preserves addition:
T(x1, y1, z1) + T(x2, y2, z2) = (z1, y1, x1) + (z2, y2, x2) = (z1 + z2, y1 + y2, x1 + x2) = T(x1 + x2, y1 + y2, z1 + z2).
T preserves scalar multiplication:
cT(x, y, z) = c(z, y, x) = (cz, cy, cx) = T(cx, cy, cz).
So, T is a linear transformation with standard matrix
A = [0 0 1; 0 1 0; 1 0 0].

14. Because (0, 1, 1) = (1, 1, 1) − (1, 0, 0), you have
T(0, 1, 1) = T(1, 1, 1) − T(1, 0, 0) = 1 − 3 = −2.

16. Because (2, 4) = 2(1, −1) + 3(0, 2), you have
T(2, 4) = 2T(1, −1) + 3T(0, 2) = 2(2, −3) + 3(0, 8) = (4, −6) + (0, 24) = (4, 18).

18. The standard matrix for T is
A = [1 0 0; 0 1 0; 0 0 0].
Therefore, you have
A² = [1 0 0; 0 1 0; 0 0 0][1 0 0; 0 1 0; 0 0 0] = [1 0 0; 0 1 0; 0 0 0] = A.

20. The standard matrix for T, relative to B = {1, x, x², x³}, is
A = [0 1 0 0; 0 0 2 0; 0 0 0 3; 0 0 0 0].
Therefore, you have
A² = [0 1 0 0; 0 0 2 0; 0 0 0 3; 0 0 0 0][0 1 0 0; 0 0 2 0; 0 0 0 3; 0 0 0 0] = [0 0 2 0; 0 0 0 6; 0 0 0 0; 0 0 0 0].

22. (a) Because A is a 2 × 3 matrix, it maps R³ into R² (n = 3, m = 2).
(b) Because T(v) = Av and
Av = [1 2 −1; 1 0 1][5; 2; 2] = [7; 7],
it follows that T(5, 2, 2) = (7, 7).
(c) The preimage of w is given by the solution to the equation T(v1, v2, v3) = w = (4, 2). The equivalent system of linear equations
v1 + 2v2 − v3 = 4
v1 + v3 = 2
has the solution {(2 − t, 1 + t, t) : t is a real number}.

24. (a) Because A is a 1 × 2 matrix, it maps R² into R¹ (n = 2, m = 1).
(b) Because T(v) = Av and Av = [2 −1][1; 2] = [0], it follows that T(1, 2) = 0.
(c) The preimage of w is given by the solution to the equation T(v1, v2) = w = (−1). The equivalent system of linear equations is 2v1 − v2 = −1, which has the solution {(t/2 − 1/2, t) : t is a real number}.
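The one-parameter preimage family in Exercise 22(c) can be spot-checked numerically; this NumPy sketch is mine (the choice t = 3 is arbitrary):

```python
import numpy as np

# Review Exercise 22(c): every member of the family (2 - t, 1 + t, t)
# should satisfy Av = w for w = (4, 2).
A = np.array([[1., 2., -1.],
              [1., 0.,  1.]])
w = np.array([4., 2.])
t = 3.0
v = np.array([2 - t, 1 + t, t])
assert np.allclose(A @ v, w)

# One particular solution can also be obtained directly via least squares.
v_particular = np.linalg.lstsq(A, w, rcond=None)[0]
assert np.allclose(A @ v_particular, w)
```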
26. (a) Because A is a 2 × 2 matrix, it maps R² into R² (n = 2, m = 2).
(b) Because T(v) = Av and
Av = [2 1; 0 1][8; 4] = [20; 4],
it follows that T(8, 4) = (20, 4).
(c) The preimage of w is given by the solution to the equation T(v1, v2) = w = (5, 2). The equivalent system of linear equations
2v1 + v2 = 5
v2 = 2
has the solution v1 = 3/2, v2 = 2. So, the preimage is (3/2, 2).

28. (a) Because A is a 3 × 2 matrix, it maps R² into R³ (n = 2, m = 3).
(b) Because T(v) = Av and
Av = [1 0; 0 −1; 1 2][1; 2] = [1; −2; 5],
it follows that T(1, 2) = (1, −2, 5).
(c) The preimage of w is given by the solution to the equation T(v1, v2) = w = (2, −5, 12). The equivalent system of linear equations
v1 = 2
−v2 = −5
v1 + 2v2 = 12
has the solution v1 = 2 and v2 = 5. So, the preimage is (2, 5).

30. (a) The standard matrix for T is
A = [1 2 0; 0 1 2; 2 0 1].
Solving Av = 0 yields the solution v = 0. So, ker(T) consists of the zero vector: ker(T) = {(0, 0, 0)}.
(b) Because ker(T) has dimension 0, range(T) must be all of R³. So, a basis for the range is {(1, 0, 0), (0, 1, 0), (0, 0, 1)}.

32. (a) The standard matrix for T is
A = [1 0 0; 0 1 2; 0 0 1].
Solving Av = 0 yields the solution v = 0. So, ker(T) consists of the zero vector: ker(T) = {(0, 0, 0)}.
(b) Because ker(T) has dimension 0, range(T) must be all of R³. So, a basis for the range is {(1, 0, 0), (0, 1, 0), (0, 0, 1)}.

34. A = [1 1; 0 −1; 2 1] ⇒ [1 0; 0 1; 0 0]
(a) ker(T) = {(0, 0)}
(b) Aᵀ = [1 0 2; 1 −1 1] ⇒ [1 0 2; 0 1 1]
So, a basis for range(T) is {(1, 0, 2), (0, 1, 1)}.
(c) dim(range(T)) = rank(T) = 2
(d) dim(ker(T)) = nullity(T) = 0

36. A = [1 1 −1; 1 2 1; 0 1 0] ⇒ [1 0 0; 0 1 0; 0 0 1]
(a) ker(T) = {(0, 0, 0)}
(b) Aᵀ = [1 1 0; 1 2 1; −1 1 0] ⇒ [1 0 0; 0 1 0; 0 0 1]
So, a basis for range(T) is {(1, 0, 0), (0, 1, 0), (0, 0, 1)}.
(c) dim(range(T)) = rank(T) = 3
(d) dim(ker(T)) = nullity(T) = 0

38. rank(T) = dim P5 − nullity(T) = 6 − 4 = 2

40. nullity(T) = dim(M2,2) − rank(T) = 4 − 3 = 1

42. The standard matrix for T is
A = [0 0; 0 1].
Because A is not invertible, T has no inverse.
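Rank and nullity computations like those in Exercises 34-40 can be done numerically; a NumPy sketch of mine using the matrix from Exercise 34:

```python
import numpy as np

# Review Exercise 34: rank and nullity from the standard matrix.
A = np.array([[1,  1],
              [0, -1],
              [2,  1]])
rank = np.linalg.matrix_rank(A)
nullity = A.shape[1] - rank        # rank-nullity: rank(T) + nullity(T) = n
assert rank == 2 and nullity == 0
```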
44. The standard matrix for T is
A = [1 0; 0 −1].
A is invertible, and its inverse is given by
A⁻¹ = [1 0; 0 −1].

46. The standard matrix for T is
A = [1 0 0; 0 −1 0; 0 0 1].
A is invertible, and its inverse is given by
A⁻¹ = [1 0 0; 0 −1 0; 0 0 1].

48. The standard matrix for T is A = [1 1 1; 0 0 1].
Because A is not invertible, T has no inverse.

50. The standard matrices for T1 and T2 are
A1 = [1; 3] and A2 = [2 1].
The standard matrix for T = T1 ∘ T2 is
A = A1A2 = [1; 3][2 1] = [2 1; 6 3]
and the standard matrix for T′ = T2 ∘ T1 is
A′ = A2A1 = [2 1][1; 3] = [5].

52. If you translate the vertex (5, 3) back to the origin (0, 0), then the other vertices (3, 5) and (3, 0) are translated to (−2, 2) and (−2, −3), respectively. The rotation of 90° is given by the matrix in Exercise 51, and you have
[0 −1; 1 0][−2; 2] = [−2; −2] and [0 −1; 1 0][−2; −3] = [3; −2].
Translating back to the original coordinate system, the new vertices are (5, 3), (3, 1), and (8, 1).

54. (a) Because T : R³ → R² and rank(T) ≤ 2, nullity(T) ≥ 1. So, T is not one-to-one.
(b) Because rank(A) = 2, T is onto.
(c) T is not invertible (A is not square).

56. (a) Because |A| = 40 ≠ 0, ker(T) = {(0, 0, 0)}, and T is one-to-one.
(b) Because rank(A) = 3, T is onto.
(c) The transformation is one-to-one and onto, and therefore invertible.

58. (a) The standard matrix for T is
A = [0 2; 0 0],
so it follows that
Av = [0 2; 0 0][−1; 3] = [6; 0] ⇒ T(v) = (6, 0).
(b) The image of each vector in B is as follows.
T(2, 1) = (2, 0) = −2(−1, 0) + 0(2, 2)
T(−1, 0) = (0, 0) = 0(−1, 0) + 0(2, 2)
Therefore, the matrix for T relative to B and B′ is
A′ = [−2 0; 0 0].
Because v = (−1, 3) = 3(2, 1) + 7(−1, 0),
[v]_B = [3; 7] and A′[v]_B = [−2 0; 0 0][3; 7] = [−6; 0].
So, T(v) = −6(−1, 0) + 0(2, 2) = (6, 0).

60. The standard matrix for T is
A = [1 3 0; 3 1 0; 0 0 −2].
The transition matrix P from B′ to B (the standard basis) and its inverse are
P = [1 1 0; 1 −1 0; 0 0 1] and P⁻¹ = [1/2 1/2 0; 1/2 −1/2 0; 0 0 1].
The matrix A′ for T relative to B′ is
A′ = P⁻¹AP = [1/2 1/2 0; 1/2 −1/2 0; 0 0 1][1 3 0; 3 1 0; 0 0 −2][1 1 0; 1 −1 0; 0 0 1] = [4 0 0; 0 −2 0; 0 0 −2].
Because A′ = P⁻¹AP, it follows that A and A′ are similar.
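In Exercise 60 the basis B′ diagonalizes T; a NumPy sketch of my own confirms that A′ = P⁻¹AP is diagonal:

```python
import numpy as np

# Review Exercise 60: P diagonalizes A, giving the similar diagonal matrix A'.
A = np.array([[1., 3.,  0.],
              [3., 1.,  0.],
              [0., 0., -2.]])
P = np.array([[1.,  1., 0.],
              [1., -1., 0.],
              [0.,  0., 1.]])
A_prime = np.linalg.inv(P) @ A @ P
assert np.allclose(A_prime, np.diag([4., -2., -2.]))
```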
62. (a) Because T(v) = proj_u v, where u = (4, 3), you have
T(v) = ((4x + 3y)/25)(4, 3).
So,
T(1, 0) = (16/25, 12/25) and T(0, 1) = (12/25, 9/25),
and the standard matrix for T is A = (1/25)[16 12; 12 9].
(b) (I − A)² = ((1/25)[9 −12; −12 16])² = (1/25)[9 −12; −12 16] = I − A.
(c) Av = (1/25)[16 12; 12 9][5; 0] = [16/5; 12/5]
(d) (I − A)v = (1/25)[9 −12; −12 16][5; 0] = [9/5; −12/5]
A sketch shows v = (5, 0) together with Av = (16/5, 12/5) and (I − A)v = (9/5, −12/5).

64. (a) Because A = P⁻¹BP and A is invertible, you have
(PA⁻¹P⁻¹)B = PA⁻¹(P⁻¹B) = PA⁻¹AP⁻¹ = PP⁻¹ = I,
which shows that B is invertible.
(b) Because A = P⁻¹BP, A⁻¹ = (P⁻¹BP)⁻¹ = P⁻¹B⁻¹P, which shows that A⁻¹ and B⁻¹ are similar.

66. (a) Let S = [1 0; 0 0] and T = [0 0; 0 1]. Then S + T = [1 0; 0 1] and rank(S + T) = 2 = rank(S) + rank(T).
(b) Let S = T = [1 0; 0 0]. Then S + T = [2 0; 0 0] and rank(S + T) = 1 < 2 = rank(S) + rank(T).

68. (a) Suppose that (S ∘ T)(v1) = (S ∘ T)(v2). Because S is one-to-one, T(v1) = T(v2). Because T is one-to-one, v1 = v2, and you have shown that S ∘ T is one-to-one.
(b) Let v ∈ kernel(T), which implies that T(v) = 0. Clearly (S ∘ T)(v) = 0 as well, which shows that v ∈ kernel(S ∘ T).
(c) Let w ∈ W. Because S ∘ T is onto, there exists v ∈ V such that (S ∘ T)(v) = w. So, S(T(v)) = w, and S is onto.

70. Compute the images of the basis vectors under Dx.
Dx(1) = 0
Dx(x) = 1
Dx(sin x) = cos x
Dx(cos x) = −sin x
So, the matrix of Dx relative to this basis is
[0 1 0 0; 0 0 0 0; 0 0 0 −1; 0 0 1 0].
The range of Dx is spanned by {1, sin x, cos x}, whereas the kernel is spanned by {1}.
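The projection matrix in Exercise 62 can be built directly as uuᵀ/(u·u), and the idempotence of I − A checked numerically; this is a NumPy sketch of mine:

```python
import numpy as np

# Review Exercise 62: projection onto u = (4, 3) as (u u^T)/(u . u).
u = np.array([[4.], [3.]])
A = (u @ u.T) / (u.T @ u).item()   # (1/25)[[16, 12], [12, 9]]
I = np.eye(2)
assert np.allclose((I - A) @ (I - A), I - A)       # I - A is idempotent
v = np.array([5., 0.])
assert np.allclose(A @ v, [16/5, 12/5])            # part (c)
assert np.allclose((I - A) @ v, [9/5, -12/5])      # part (d)
```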
72. First compute the effect of T on the basis {1, x, x², x³}.
T(1) = 1
T(x) = 1 + x
T(x²) = 2x + x²
T(x³) = 3x² + x³
The standard matrix for T is
A = [1 1 0 0; 0 1 2 0; 0 0 1 3; 0 0 0 1].
Because rank(A) = 4, rank(T) = 4 and nullity(T) = 0.

74. (a) T is a horizontal shear.
(b) A sketch shows the point (x, y) mapped to T(x, y).

76. (a) T is a horizontal expansion.
(b) A sketch shows the point (x, y) mapped to T(x, y).

78. (a) T is a vertical shear.
(b) A sketch shows the point (x, y) mapped to T(x, y).

80. The image of each vertex is T(0, 0) = (0, 0), T(1, 0) = (2, 0), T(0, 1) = (0, 1). A sketch shows the triangle and its image.

82. The image of each vertex is T(0, 0) = (0, 0), T(1, 0) = (1, 2), T(0, 1) = (0, 1). A sketch shows the triangle and its image.

84. The transformation is a vertical shear [1 0; 3 1] followed by a vertical expansion [1 0; 0 2].

86. A rotation of 90° about the x-axis is given by
A = [1 0 0; 0 cos 90° −sin 90°; 0 sin 90° cos 90°] = [1 0 0; 0 0 −1; 0 1 0].
Because
Av = [1 0 0; 0 0 −1; 0 1 0][1; −1; 1] = [1; −1; −1],
the image of (1, −1, 1) is (1, −1, −1).
Review Exercises for Chapter 6
88. A rotation of 30° about the y-axis is given by
A = [cos 30° 0 sin 30°; 0 1 0; −sin 30° 0 cos 30°] = [√3/2 0 1/2; 0 1 0; −1/2 0 √3/2].
Because
Av = [√3/2 0 1/2; 0 1 0; −1/2 0 √3/2][1; −1; 1] = [√3/2 + 1/2; −1; −1/2 + √3/2],
the image of (1, −1, 1) is (√3/2 + 1/2, −1, −1/2 + √3/2).

90. A rotation of 120° about the y-axis is given by
A1 = [cos 120° 0 sin 120°; 0 1 0; −sin 120° 0 cos 120°] = [−1/2 0 √3/2; 0 1 0; −√3/2 0 −1/2],
while a rotation of 45° about the z-axis is given by
A2 = [cos 45° −sin 45° 0; sin 45° cos 45° 0; 0 0 1] = [√2/2 −√2/2 0; √2/2 √2/2 0; 0 0 1].
So, the pair of rotations is given by
A2 A1 = [√2/2 −√2/2 0; √2/2 √2/2 0; 0 0 1][−1/2 0 √3/2; 0 1 0; −√3/2 0 −1/2] = [−√2/4 −√2/2 √6/4; −√2/4 √2/2 √6/4; −√3/2 0 −1/2].

92. A rotation of 60° about the x-axis is given by
A1 = [1 0 0; 0 cos 60° −sin 60°; 0 sin 60° cos 60°] = [1 0 0; 0 1/2 −√3/2; 0 √3/2 1/2],
while a rotation of 60° about the z-axis is given by
A2 = [cos 60° −sin 60° 0; sin 60° cos 60° 0; 0 0 1] = [1/2 −√3/2 0; √3/2 1/2 0; 0 0 1].
So, the pair of rotations is given by
A2 A1 = [1/2 −√3/2 0; √3/2 1/2 0; 0 0 1][1 0 0; 0 1/2 −√3/2; 0 √3/2 1/2] = [1/2 −√3/4 3/4; √3/2 1/4 −√3/4; 0 √3/2 1/2].

94. The standard matrix for T is
[1 0 0; 0 cos 90° −sin 90°; 0 sin 90° cos 90°] = [1 0 0; 0 0 −1; 0 1 0].
Therefore, T is given by T(x, y, z) = (x, −z, y). The image of each vertex is as follows.
T(0, 0, 0) = (0, 0, 0)    T(1, 1, 0) = (1, 0, 1)
T(0, 0, 1) = (0, −1, 0)   T(1, 1, 1) = (1, −1, 1)
T(1, 0, 0) = (1, 0, 0)    T(0, 1, 0) = (0, 0, 1)
T(1, 0, 1) = (1, −1, 0)   T(0, 1, 1) = (0, −1, 1)
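The rotation computations in Exercises 86-94 all follow one pattern: form the standard rotation matrix about a coordinate axis and multiply. A minimal NumPy sketch (the helper names rot_x and rot_z are mine, not the text's):

```python
import numpy as np

def rot_x(deg):
    """Standard matrix of a rotation by deg degrees about the x-axis."""
    c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_z(deg):
    """Standard matrix of a rotation by deg degrees about the z-axis."""
    c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Exercise 86: 90 degrees about the x-axis sends (1, -1, 1) to (1, -1, -1).
image = rot_x(90) @ np.array([1, -1, 1])
assert np.allclose(image, [1, -1, -1])

# Exercise 92: 60 degrees about x, then 60 degrees about z.
A = rot_z(60) @ rot_x(60)
expected = np.array([
    [1/2,           -np.sqrt(3)/4,  3/4],
    [np.sqrt(3)/2,   1/4,          -np.sqrt(3)/4],
    [0,              np.sqrt(3)/2,  1/2],
])
assert np.allclose(A, expected)
```

Note the order: applying the x-rotation first means its matrix sits on the right of the product.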
96. The standard matrix for T is
[cos 120° −sin 120° 0; sin 120° cos 120° 0; 0 0 1] = [−1/2 −√3/2 0; √3/2 −1/2 0; 0 0 1].
Therefore, T is given by
T(x, y, z) = (−(1/2)x − (√3/2)y, (√3/2)x − (1/2)y, z).
The image of each vertex is as follows.
T(0, 0, 0) = (0, 0, 0)
T(1, 0, 0) = (−1/2, √3/2, 0)
T(1, 1, 0) = (−1/2 − √3/2, √3/2 − 1/2, 0)
T(0, 1, 0) = (−√3/2, −1/2, 0)
T(0, 0, 1) = (0, 0, 1)
T(1, 0, 1) = (−1/2, √3/2, 1)
T(1, 1, 1) = (−1/2 − √3/2, √3/2 − 1/2, 1)
T(0, 1, 1) = (−√3/2, −1/2, 1)

98. (a) True. The statement is true because if T is a reflection T(x, y) = (x, −y), then the standard matrix is [1 0; 0 −1].
(b) True. The statement is true because the linear transformation T(x, y) = (x, ky) has the standard matrix [1 0; 0 k].
(c) True. The statement is true because the matrix
[cos θ 0 sin θ; 0 1 0; −sin θ 0 cos θ]
will rotate a point θ degrees (pages 411–412). If θ = 30 degrees, you obtain the matrix in the statement.

100. (a) True. Dx is a linear transformation because it preserves addition and scalar multiplication. Further, Dx(Pn) = Pn−1 because for all natural numbers i ≥ 1, Dx(x^i) = ix^(i−1).
(b) False. If T is a linear transformation V → W, then the kernel of T is defined to be the set of v ∈ V such that T(v) = 0_W.
(c) True. If T = T2 ∘ T1 and Ai is the standard matrix for Ti, i = 1, 2, then the standard matrix for T is equal to A2 A1 by Theorem 6.11 on page 391.
Project Solutions for Chapter 6

1 Reflections in the Plane-I

1. [−1 0; 0 1]
[Sketch: reflection in the line x = 0 maps (x, y) to (−x, y).]

2. [1 0; 0 −1]
[Sketch: reflection in the line y = 0 maps (x, y) to (x, −y).]

3. [0 1; 1 0]
[Sketch: reflection in the line y = x maps (x, y) to (y, x).]
4. v = (2, 1), w = (−1, 2), L(v) = v, L(w) = −w, B = {v, w}, B′ = {e1, e2} (standard basis). The line of reflection is x − 2y = 0.
[Sketch: v along the line x − 2y = 0 and w perpendicular to it.]
A = [1 0; 0 −1] is the matrix of L relative to the basis B, and A′ = P^−1 AP is the matrix of L relative to the standard basis B′, where
[B′  B] → [I  P^−1] ⇒ P^−1 = [2 −1; 1 2],  P = (1/5)[2 1; −1 2].
A′ = P^−1 AP = [2 −1; 1 2][1 0; 0 −1](1/5)[2 1; −1 2] = [2 1; 1 −2](1/5)[2 1; −1 2] = (1/5)[3 4; 4 −3] = [3/5 4/5; 4/5 −3/5]
Check:
[3/5 4/5; 4/5 −3/5][2; 1] = [2; 1]
[3/5 4/5; 4/5 −3/5][−1; 2] = [1; −2]
[3/5 4/5; 4/5 −3/5][5; 0] = [3; 4]

5. v = (−b, a), w = (a, b), and the line of reflection is ax + by = 0.
A = [1 0; 0 −1]
P^−1 = [−b a; a b],  P = (1/(a^2 + b^2))[−b a; a b]
A′ = P^−1 AP = [−b a; a b][1 0; 0 −1]P = [−b −a; a −b](1/(a^2 + b^2))[−b a; a b] = (1/(a^2 + b^2))[b^2 − a^2  −2ab; −2ab  a^2 − b^2]

6. 3x + 4y = 0:
A′ = (1/(3^2 + 4^2))[7 −24; −24 −7]
(1/25)[7 −24; −24 −7][3; 4] = (1/25)[−75; −100] = [−3; −4]
(1/25)[7 −24; −24 −7][−4; 3] = (1/25)[−100; 75] = [−4; 3]
(1/25)[7 −24; −24 −7][0; 5] = (1/25)[−24·5; −7·5] = [−24/5; −7/5]
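The general reflection matrix derived in step 5 can be spot-checked against step 6 for the line 3x + 4y = 0. A short sketch (the function name reflection is mine), assuming NumPy:

```python
import numpy as np

def reflection(a, b):
    """Standard matrix of the reflection in the line a*x + b*y = 0."""
    d = a**2 + b**2
    return np.array([[b**2 - a**2, -2*a*b],
                     [-2*a*b, a**2 - b**2]]) / d

R = reflection(3, 4)
assert np.allclose(R, np.array([[7, -24], [-24, -7]]) / 25)

# The direction (-b, a) of the line is fixed; the normal (a, b) is negated.
assert np.allclose(R @ [-4, 3], [-4, 3])
assert np.allclose(R @ [3, 4], [-3, -4])
```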
2 Reflections in the Plane-II

1. v = (0, 1):  [0 0; 0 1]

2. v = (1, 0):  [1 0; 0 0]

3. v = (2, 1), w = (−1, 2), B = {v, w}; the line is x − 2y = 0.
[Sketch: v along the line x − 2y = 0 and w perpendicular to it.]
proj_v v = v and proj_v w = 0, so
A = [1 0; 0 0],  P^−1 = [2 −1; 1 2],  P = (1/5)[2 1; −1 2].
A′ = P^−1 AP is the matrix of L relative to the standard basis:
A′ = [2 −1; 1 2][1 0; 0 0](1/5)[2 1; −1 2] = [2 0; 1 0](1/5)[2 1; −1 2] = (1/5)[4 2; 2 1] = [4/5 2/5; 2/5 1/5]
Check:
[4/5 2/5; 2/5 1/5][2; 1] = [2; 1],  [4/5 2/5; 2/5 1/5][5; 0] = [4; 2],  [4/5 2/5; 2/5 1/5][−1; 2] = [0; 0].
[Sketch: projections of sample points onto the line x − 2y = 0.]

4. v = (−b, a), w = (a, b); the line is ax + by = 0.
A = [1 0; 0 0]
P^−1 = [−b a; a b],  P = (1/(a^2 + b^2))[−b a; a b]
A′ = P^−1 AP = (1/(a^2 + b^2))[b^2  −ab; −ab  a^2]

5. proj_v u = (1/2)(u + L(u))  ⇒  L(u) = 2 proj_v u − u, so L = 2 proj_v − I.
[Sketch: u, its projection onto the line, and its reflection L(u).]
L = 2(1/(a^2 + b^2))[b^2  −ab; −ab  a^2] − [1 0; 0 1]
  = (1/(a^2 + b^2))([2b^2  −2ab; −2ab  2a^2] − [a^2 + b^2  0; 0  a^2 + b^2])
  = (1/(a^2 + b^2))[b^2 − a^2  −2ab; −2ab  a^2 − b^2]
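The identity L = 2 proj_v − I from step 5 is easy to confirm numerically for the line x − 2y = 0 of step 3; the sketch below (variable names mine, not the text's) also checks the defining algebraic properties of a projection and a reflection:

```python
import numpy as np

v = np.array([2.0, 1.0])              # direction of the line x - 2y = 0
P = np.outer(v, v) / (v @ v)          # orthogonal projection onto span{v}
L = 2 * P - np.eye(2)                 # reflection in the line

assert np.allclose(P, np.array([[4, 2], [2, 1]]) / 5)
assert np.allclose(P @ P, P)          # projections are idempotent
assert np.allclose(L @ L, np.eye(2))  # reflections are involutions
assert np.allclose(L @ v, v)          # the line itself is fixed
```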
CHAPTER 7   Eigenvalues and Eigenvectors
Section 7.1   Eigenvalues and Eigenvectors
Section 7.2   Diagonalization
Section 7.3   Symmetric Matrices and Orthogonal Diagonalization
Section 7.4   Applications of Eigenvalues and Eigenvectors
Review Exercises
Cumulative Test
CHAPTER 7   Eigenvalues and Eigenvectors

Section 7.1   Eigenvalues and Eigenvectors

1. Ax1 = [1 0; 0 −1][1; 0] = [1; 0] = 1[1; 0] = λ1 x1
Ax2 = [1 0; 0 −1][0; 1] = [0; −1] = −1[0; 1] = λ2 x2

3. Ax1 = [1 1; 1 1][1; −1] = [0; 0] = 0[1; −1] = λ1 x1
Ax2 = [1 1; 1 1][1; 1] = [2; 2] = 2[1; 1] = λ2 x2

5. Ax1 = [2 3 1; 0 −1 2; 0 0 3][1; 0; 0] = [2; 0; 0] = 2[1; 0; 0] = λ1 x1
Ax2 = [2 3 1; 0 −1 2; 0 0 3][1; −1; 0] = [−1; 1; 0] = −1[1; −1; 0] = λ2 x2
Ax3 = [2 3 1; 0 −1 2; 0 0 3][5; 1; 2] = [15; 3; 6] = 3[5; 1; 2] = λ3 x3

7. Ax1 = [0 1 0; 0 0 1; 1 0 0][1; 1; 1] = [1; 1; 1] = 1[1; 1; 1] = λ1 x1

9. (a) A(cx1) = [1 1; 1 1][c; −c] = [0; 0] = 0(cx1)
(b) A(cx2) = [1 1; 1 1][c; c] = [2c; 2c] = 2(cx2)

11. (a) Because Ax = [7 2; 2 4][1; 2] = [11; 10] ≠ λ[1; 2], x is not an eigenvector of A.
(b) Because Ax = [7 2; 2 4][2; 1] = [16; 8] = 8[2; 1], x is an eigenvector of A (with corresponding eigenvalue 8).
(c) Because Ax = [7 2; 2 4][1; −2] = [3; −6] = 3[1; −2], x is an eigenvector of A (with corresponding eigenvalue 3).
(d) Because Ax = [7 2; 2 4][−1; 0] = [−7; −2] ≠ λ[−1; 0], x is not an eigenvector of A.

13. (a) Because Ax = [−1 −1 1; −2 0 −2; 3 −3 1][2; −4; 6] = [8; −16; 24] = 4[2; −4; 6], x is an eigenvector of A (with corresponding eigenvalue 4).
(b) Because Ax = [−1 −1 1; −2 0 −2; 3 −3 1][2; 0; 6] = [4; −16; 12] ≠ λ[2; 0; 6], x is not an eigenvector of A.
(c) Because Ax = [−1 −1 1; −2 0 −2; 3 −3 1][2; 2; 0] = [−4; −4; 0] = −2[2; 2; 0], x is an eigenvector of A (with corresponding eigenvalue −2).
(d) Because Ax = [−1 −1 1; −2 0 −2; 3 −3 1][−1; 0; 1] = [2; 0; −2] = −2[−1; 0; 1], x is an eigenvector of A (with corresponding eigenvalue −2).
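Eigenvector checks like those in Exercises 11 and 13 (is Ax a scalar multiple of x?) can be automated. A sketch using the matrix of Exercise 13, not part of the printed solution; the helper eigenvalue_of is my own:

```python
import numpy as np

A = np.array([[-1, -1,  1],
              [-2,  0, -2],
              [ 3, -3,  1]])

def eigenvalue_of(A, x):
    """Return lam if A @ x == lam * x for some scalar lam, else None."""
    y = A @ x
    i = np.argmax(np.abs(x))          # a nonzero entry of x
    lam = y[i] / x[i]
    return lam if np.allclose(y, lam * x) else None

assert eigenvalue_of(A, np.array([2, -4, 6])) == 4     # part (a)
assert eigenvalue_of(A, np.array([2, 0, 6])) is None   # part (b)
assert eigenvalue_of(A, np.array([2, 2, 0])) == -2     # part (c)
```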
15. (a) The characteristic equation is |λI − A| = |λ−6 3; 2 λ−1| = λ^2 − 7λ = λ(λ − 7) = 0.
(b) The eigenvalues are λ1 = 0 and λ2 = 7.
For λ1 = 0, [λ1−6 3; 2 λ1−1][x1; x2] = [0; 0] ⇒ [2 −1; 0 0][x1; x2] = [0; 0].
The solution is {(t, 2t) : t ∈ R}. So, an eigenvector corresponding to λ1 = 0 is (1, 2).
For λ2 = 7, [λ2−6 3; 2 λ2−1][x1; x2] = [0; 0] ⇒ [1 3; 0 0][x1; x2] = [0; 0].
The solution is {(−3t, t) : t ∈ R}. So, an eigenvector corresponding to λ2 = 7 is (−3, 1).

17. (a) The characteristic equation is |λI − A| = |λ−1 3/2; −1/2 λ+1| = λ^2 − 1/4 = 0.
(b) The eigenvalues are λ1 = 1/2 and λ2 = −1/2.
For λ1 = 1/2, [λ1−1 3/2; −1/2 λ1+1][x1; x2] = [0; 0] ⇒ [1 −3; 0 0][x1; x2] = [0; 0].
The solution is {(3t, t) : t ∈ R}. So, an eigenvector corresponding to λ1 = 1/2 is (3, 1).
For λ2 = −1/2, [λ2−1 3/2; −1/2 λ2+1][x1; x2] = [0; 0] ⇒ [1 −1; 0 0][x1; x2] = [0; 0].
The solution is {(t, t) : t ∈ R}. So, an eigenvector corresponding to λ2 = −1/2 is (1, 1).

19. (a) The characteristic equation is
|λI − A| = |λ−2 0 −1; 0 λ−3 −4; 0 0 λ−1| = (λ − 2)(λ − 3)(λ − 1) = 0.
(b) The eigenvalues are λ1 = 2, λ2 = 3, and λ3 = 1.
For λ1 = 2, [λ1−2 0 −1; 0 λ1−3 −4; 0 0 λ1−1][x1; x2; x3] = [0; 0; 0] ⇒ [0 1 0; 0 0 1; 0 0 0][x1; x2; x3] = [0; 0; 0].
The solution is {(t, 0, 0) : t ∈ R}. So, an eigenvector corresponding to λ1 = 2 is (1, 0, 0).
For λ2 = 3, [λ2−2 0 −1; 0 λ2−3 −4; 0 0 λ2−1][x1; x2; x3] = [0; 0; 0] ⇒ [1 0 0; 0 0 1; 0 0 0][x1; x2; x3] = [0; 0; 0].
The solution is {(0, t, 0) : t ∈ R}. So, an eigenvector corresponding to λ2 = 3 is (0, 1, 0).
For λ3 = 1, [λ3−2 0 −1; 0 λ3−3 −4; 0 0 λ3−1][x1; x2; x3] = [0; 0; 0] ⇒ [1 0 1; 0 1 2; 0 0 0][x1; x2; x3] = [0; 0; 0].
The solution is {(−t, −2t, t) : t ∈ R}. So, an eigenvector corresponding to λ3 = 1 is (−1, −2, 1).
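For the triangular matrix of Exercise 19 the eigenvalues can be read off the diagonal; NumPy reproduces both the eigenvalues and the eigenvector for λ = 1. A cross-check only, not part of the printed solution:

```python
import numpy as np

A = np.array([[2, 0, 1],
              [0, 3, 4],
              [0, 0, 1]])

w, V = np.linalg.eig(A)
assert np.allclose(sorted(w), [1, 2, 3])

# The eigenvector for lambda = 1 must be a multiple of (-1, -2, 1);
# parallel vectors have a zero cross product.
v = V[:, np.argmin(np.abs(w - 1))]
assert np.allclose(np.cross(v, [-1, -2, 1]), [0, 0, 0])
```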
21. (a) The characteristic equation is
|λI − A| = |λ−2 2 −3; 0 λ−3 2; 0 1 λ−2| = (λ − 2)(λ − 4)(λ − 1) = 0.
(b) The eigenvalues are λ1 = 1, λ2 = 2, and λ3 = 4.
For λ1 = 1, [λ1−2 2 −3; 0 λ1−3 2; 0 1 λ1−2][x1; x2; x3] = [0; 0; 0] ⇒ [−1 2 −3; 0 −2 2; 0 1 −1][x1; x2; x3] = [0; 0; 0].
The solution is {(−t, t, t) : t ∈ R}. So, an eigenvector corresponding to λ1 = 1 is (−1, 1, 1).
For λ2 = 2, [λ2−2 2 −3; 0 λ2−3 2; 0 1 λ2−2][x1; x2; x3] = [0; 0; 0] ⇒ [0 2 −3; 0 −1 2; 0 1 0][x1; x2; x3] = [0; 0; 0].
The solution is {(t, 0, 0) : t ∈ R}. So, an eigenvector corresponding to λ2 = 2 is (1, 0, 0).
For λ3 = 4, [λ3−2 2 −3; 0 λ3−3 2; 0 1 λ3−2][x1; x2; x3] = [0; 0; 0] ⇒ [2 2 −3; 0 1 2; 0 1 2][x1; x2; x3] = [0; 0; 0].
The solution is {(7t, −4t, 2t) : t ∈ R}. So, an eigenvector corresponding to λ3 = 4 is (7, −4, 2).

23. (a) The characteristic equation is
|λI − A| = |λ−1 −2 2; 2 λ−5 2; 6 −6 λ+3| = λ^3 − 3λ^2 − 9λ + 27 = (λ + 3)(λ − 3)^2 = 0.
(b) The eigenvalues are λ1 = −3 and λ2 = 3 (repeated).
For λ1 = −3, [λ1−1 −2 2; 2 λ1−5 2; 6 −6 λ1+3][x1; x2; x3] = [0; 0; 0] ⇒ [−4 −2 2; 2 −8 2; 6 −6 0][x1; x2; x3] = [0; 0; 0].
The solution is {(t, t, 3t) : t ∈ R}. So, an eigenvector corresponding to λ1 = −3 is (1, 1, 3).
For λ2 = 3, [λ2−1 −2 2; 2 λ2−5 2; 6 −6 λ2+3][x1; x2; x3] = [0; 0; 0] ⇒ [2 −2 2; 2 −2 2; 6 −6 6][x1; x2; x3] = [0; 0; 0].
The solution is {(s − t, s, t) : s, t ∈ R}. So, two eigenvectors corresponding to λ2 = 3 are (1, 1, 0) and (1, 0, −1).
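In Exercise 23 the dimension of each eigenspace equals n minus the rank of λI − A. This is quick to confirm numerically (the helper eigenspace_dim is my own, not the text's):

```python
import numpy as np

# Matrix of Exercise 23, with eigenvalues -3 and 3 (repeated).
A = np.array([[ 1,  2, -2],
              [-2,  5, -2],
              [-6,  6, -3]])

def eigenspace_dim(A, lam):
    """Nullity of lam*I - A, i.e. the dimension of the eigenspace."""
    M = lam * np.eye(A.shape[0]) - A
    return A.shape[0] - np.linalg.matrix_rank(M)

assert eigenspace_dim(A, -3) == 1
assert eigenspace_dim(A, 3) == 2    # the repeated eigenvalue
```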
25. (a) The characteristic equation is
|λI − A| = |λ 3 −5; 4 λ−4 10; 0 0 λ−4| = (λ − 4)(λ^2 − 4λ − 12) = (λ − 4)(λ − 6)(λ + 2) = 0.
(b) The eigenvalues are λ1 = 4, λ2 = 6, and λ3 = −2.
For λ1 = 4, [λ1 3 −5; 4 λ1−4 10; 0 0 λ1−4][x1; x2; x3] = [0; 0; 0] ⇒ [2 0 5; 0 1 −5; 0 0 0][x1; x2; x3] = [0; 0; 0].
The solution is {(−5t, 10t, 2t) : t ∈ R}. So, an eigenvector corresponding to λ1 = 4 is (−5, 10, 2).
For λ2 = 6, [λ2 3 −5; 4 λ2−4 10; 0 0 λ2−4][x1; x2; x3] = [0; 0; 0] ⇒ [2 1 0; 0 0 1; 0 0 0][x1; x2; x3] = [0; 0; 0].
The solution is {(t, −2t, 0) : t ∈ R}. So, an eigenvector corresponding to λ2 = 6 is (1, −2, 0).
For λ3 = −2, [λ3 3 −5; 4 λ3−4 10; 0 0 λ3−4][x1; x2; x3] = [0; 0; 0] ⇒ [2 −3 0; 0 0 1; 0 0 0][x1; x2; x3] = [0; 0; 0].
The solution is {(3t, 2t, 0) : t ∈ R}. So, an eigenvector corresponding to λ3 = −2 is (3, 2, 0).

27. (a) The characteristic equation is
|λI − A| = |λ−2 0 0 0; 0 λ−2 0 0; 0 0 λ−3 0; 0 0 −4 λ| = λ(λ − 3)(λ − 2)^2 = 0.
(b) The eigenvalues are λ1 = 0, λ2 = 3, and λ3 = 2 (repeated).
For λ1 = 0, [−2 0 0 0; 0 −2 0 0; 0 0 −3 0; 0 0 −4 0][x1; x2; x3; x4] = [0; 0; 0; 0] ⇒ [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 0][x1; x2; x3; x4] = [0; 0; 0; 0].
The solution is {(0, 0, 0, t) : t ∈ R}. So, an eigenvector corresponding to λ1 = 0 is (0, 0, 0, 1).
For λ2 = 3, [1 0 0 0; 0 1 0 0; 0 0 0 0; 0 0 −4 3][x1; x2; x3; x4] = [0; 0; 0; 0] ⇒ [1 0 0 0; 0 1 0 0; 0 0 1 −3/4; 0 0 0 0][x1; x2; x3; x4] = [0; 0; 0; 0].
The solution is {(0, 0, (3/4)t, t) : t ∈ R}. So, an eigenvector corresponding to λ2 = 3 is (0, 0, 3, 4).
For λ3 = 2, [0 0 0 0; 0 0 0 0; 0 0 −1 0; 0 0 −4 2][x1; x2; x3; x4] = [0; 0; 0; 0] ⇒ [0 0 1 0; 0 0 0 1; 0 0 0 0; 0 0 0 0][x1; x2; x3; x4] = [0; 0; 0; 0].
The solution is {(s, t, 0, 0) : s, t ∈ R}. So, two eigenvectors corresponding to λ3 = 2 are (1, 0, 0, 0) and (0, 1, 0, 0).
29. Using a graphing utility: λ = −2, 1

31. Using a graphing utility: λ = 5, 5

33. Using a graphing utility: λ = 1/3, −1/2, 4

35. Using a graphing utility: λ = −1, 4, 4

37. Using a graphing utility: λ = 0, 3

39. Using a graphing utility: λ = 0, 0, 0, 21

41. The characteristic equation is |λI − A| = |λ−4 0; 3 λ−2| = λ^2 − 6λ + 8 = 0.
Because
A^2 − 6A + 8I = [4 0; −3 2]^2 − 6[4 0; −3 2] + 8[1 0; 0 1] = [16 0; −18 4] − [24 0; −18 12] + [8 0; 0 8] = [0 0; 0 0],
the theorem holds for this matrix.

43. The characteristic equation is |λI − A| = |λ−2 2; −1 λ−5| = λ^2 − 7λ + 12 = 0.
Because
A^2 − 7A + 12I = [2 −2; 1 5]^2 − 7[2 −2; 1 5] + 12[1 0; 0 1] = [2 −14; 7 23] − [14 −14; 7 35] + [12 0; 0 12] = [0 0; 0 0],
the theorem holds for this matrix.

45. The characteristic equation is
|λI − A| = |λ −2 1; 1 λ−3 −1; 0 0 λ+1| = λ^3 − 2λ^2 − λ + 2 = 0.
Because
A^3 − 2A^2 − A + 2I = [0 2 −1; −1 3 1; 0 0 −1]^3 − 2[0 2 −1; −1 3 1; 0 0 −1]^2 − [0 2 −1; −1 3 1; 0 0 −1] + 2[1 0 0; 0 1 0; 0 0 1]
= [−6 14 5; −7 15 7; 0 0 −1] − 2[−2 6 3; −3 7 3; 0 0 1] − [0 2 −1; −1 3 1; 0 0 −1] + [2 0 0; 0 2 0; 0 0 2]
= [0 0 0; 0 0 0; 0 0 0],
the theorem holds for this matrix.
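Exercises 41-47 verify the Cayley-Hamilton theorem by direct substitution; with NumPy the same check takes a few lines. A sketch for the matrix of Exercise 45 (np.poly returns the coefficients of a matrix's characteristic polynomial):

```python
import numpy as np

A = np.array([[ 0, 2, -1],
              [-1, 3,  1],
              [ 0, 0, -1]])

# Characteristic polynomial: lambda^3 - 2 lambda^2 - lambda + 2.
coeffs = np.poly(A)
assert np.allclose(coeffs, [1, -2, -1, 2])

# Cayley-Hamilton: A satisfies its own characteristic equation.
I = np.eye(3)
result = A @ A @ A - 2 * (A @ A) - A + 2 * I
assert np.allclose(result, np.zeros((3, 3)))
```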
47. The characteristic equation is
|λI − A| = |λ−1 0 4; 0 λ−3 −1; −2 0 λ−1| = λ^3 − 5λ^2 + 15λ − 27 = 0.
Because
A^3 − 5A^2 + 15A − 27I = [1 0 −4; 0 3 1; 2 0 1]^3 − 5[1 0 −4; 0 3 1; 2 0 1]^2 + 15[1 0 −4; 0 3 1; 2 0 1] − 27[1 0 0; 0 1 0; 0 0 1]
= [−23 0 20; 10 27 5; −10 0 −23] − 5[−7 0 −8; 2 9 4; 4 0 −7] + 15[1 0 −4; 0 3 1; 2 0 1] − [27 0 0; 0 27 0; 0 0 27]
= [0 0 0; 0 0 0; 0 0 0],
the theorem holds for this matrix.

49. For the n × n matrix A = [a_ij], the sum of the diagonal entries, or trace, of A is given by Σ_{i=1}^{n} a_ii.
Exercise 15: λ1 = 0, λ2 = 7
(a) Σ λi = 7 = Σ a_ii
(b) |A| = |6 −3; −2 1| = 0 = 0 · 7 = λ1 · λ2
Exercise 17: λ1 = 1/2, λ2 = −1/2
(a) Σ λi = 0 = Σ a_ii
(b) |A| = |1 −3/2; 1/2 −1| = −1/4 = (1/2)(−1/2) = λ1 · λ2
Exercise 19: λ1 = 2, λ2 = 3, λ3 = 1
(a) Σ λi = 6 = Σ a_ii
(b) |A| = |2 0 1; 0 3 4; 0 0 1| = 6 = 2 · 3 · 1 = λ1 · λ2 · λ3
Exercise 21: λ1 = 1, λ2 = 2, λ3 = 4
(a) Σ λi = 7 = Σ a_ii
(b) |A| = |2 −2 3; 0 3 −2; 0 −1 2| = 8 = 1 · 2 · 4 = λ1 · λ2 · λ3
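Exercise 49's two identities, trace equals the sum of the eigenvalues and determinant equals their product, can be confirmed numerically; a sketch (not part of the printed solution) using the matrix from Exercise 21:

```python
import numpy as np

A = np.array([[2, -2,  3],
              [0,  3, -2],
              [0, -1,  2]])

eigenvalues = np.linalg.eigvals(A)       # 1, 2, 4 in some order
assert np.isclose(eigenvalues.sum(), np.trace(A))        # both equal 7
assert np.isclose(eigenvalues.prod(), np.linalg.det(A))  # both equal 8
```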
Exercise 23: λ1 = −3, λ2 = 3, λ3 = 3
(a) Σ λi = 3 = Σ a_ii
(b) |A| = |1 2 −2; −2 5 −2; −6 6 −3| = −27 = −3 · 3 · 3 = λ1 · λ2 · λ3
Exercise 25: λ1 = 4, λ2 = 6, λ3 = −2
(a) Σ λi = 8 = Σ a_ii
(b) |A| = |0 −3 5; −4 4 −10; 0 0 4| = −48 = 4 · 6 · (−2) = λ1 · λ2 · λ3
Exercise 27: λ1 = 0, λ2 = 3, λ3 = 2, λ4 = 2
(a) Σ λi = 7 = Σ a_ii
(b) |A| = |2 0 0 0; 0 2 0 0; 0 0 3 0; 0 0 4 0| = 0 = 0 · 3 · 2 · 2 = λ1 · λ2 · λ3 · λ4

51. Because the ith row of A is identical to the ith row of I, the ith row of A consists of zeros except for the main diagonal entry, which is 1. The ith row of λI − A then consists of zeros except for the main diagonal entry, which is λ − 1. Expanding the determinant along this row, det(λI − A) = a_i1 C_i1 + a_i2 C_i2 + ... + a_in C_in, and because each entry of the row equals zero except for the main diagonal entry, det(λI − A) = (λ − 1)C_ii, where C_ii is the cofactor of the main diagonal entry of this row. Setting λ = 1 gives det(I − A) = 0. So, 1 is an eigenvalue of A.

53. If Ax = λx, then A^−1 Ax = A^−1 λx and x = λA^−1 x. So, A^−1 x = (1/λ)x, which shows that x is an eigenvector of A^−1 with eigenvalue 1/λ. The eigenvectors of A and A^−1 are the same.

55. The characteristic polynomial of A is |λI − A|. The constant term of this polynomial (in λ) is obtained by setting λ = 0. So, the constant term is |0I − A| = |−A| = ±|A|.

57. Assume that A is a triangular real matrix with λ1, λ2, …, λn as its main diagonal entries. From Theorem 7.3, the eigenvalues of A are λ1, λ2, …, λn. So, A has real eigenvalues. Because the determinant of A is |A| = λ1 λ2 ⋯ λn, it follows that A is nonsingular if and only if each λi is nonzero.

59. The characteristic equation of A is
|λ−a −b; 0 λ−d| = (λ − a)(λ − d) = λ^2 − (a + d)λ + ad = 0.
Because the given eigenvalues indicate a characteristic equation of λ(λ − 1) = λ^2 − λ,
λ^2 − (a + d)λ + ad = λ^2 − λ. So, a = 0 and d = 1, or a = 1 and d = 0.

61. (a) False. See "Definition of Eigenvalue and Eigenvector," page 422.
(b) True. See discussion before Theorem 7.1, page 424.
(c) True. See Theorem 7.2, page 426.
63. Substituting the value λ = 3 yields the system
[λ−3 0 0; 0 λ−3 0; 0 0 λ−3][x1; x2; x3] = [0; 0; 0] ⇒ [0 0 0; 0 0 0; 0 0 0][x1; x2; x3] = [0; 0; 0].
So, 3 has three linearly independent eigenvectors, and the dimension of the eigenspace is 3.

65. Substituting the value λ = 3 yields the system
[λ−3 −1 0; 0 λ−3 −1; 0 0 λ−3][x1; x2; x3] = [0; 0; 0] ⇒ [0 −1 0; 0 0 −1; 0 0 0][x1; x2; x3] = [0; 0; 0].
So, 3 has one linearly independent eigenvector, and the dimension of the eigenspace is 1.

67. T(e^x) = d/dx[e^x] = e^x = 1 · (e^x)
Therefore, λ = 1 is an eigenvalue.

69. The standard matrix for T is
A = [0 −3 5; −4 4 −10; 0 0 4].
The characteristic equation of A is
|λ 3 −5; 4 λ−4 10; 0 0 λ−4| = (λ + 2)(λ − 4)(λ − 6) = 0.
The eigenvalues of A are λ1 = −2, λ2 = 4, and λ3 = 6. The corresponding eigenvectors are found by solving
[λi 3 −5; 4 λi−4 10; 0 0 λi−4][a0; a1; a2] = [0; 0; 0]
for each λi. Thus, p1(x) = 3 + 2x, p2(x) = −5 + 10x + 2x^2, and p3(x) = −1 + 2x are eigenvectors corresponding to λ1, λ2, and λ3.

71. The standard matrix for T is
A = [1 0 −1 1; 0 1 0 1; −2 0 2 −2; 0 2 0 2].
Because the standard matrix is the same as that in Exercise 37, you know the eigenvalues are λ1 = 0 and λ2 = 3. So, two eigenvectors corresponding to λ1 = 0 are x1 = (1, 0, 1, 0) and x2 = (1, 1, 0, −1), and an eigenvector corresponding to λ2 = 3 is x3 = (1, 0, −2, 0).

73. 0 is the only eigenvalue of a nilpotent matrix. For if Ax = λx, then A^2 x = Aλx = λ^2 x. So, A^k x = λ^k x = 0 ⇒ λ^k = 0 ⇒ λ = 0.

75. Let x = (1, 1, …, 1). Then Ax = (r, r, …, r) = rx, which shows that r is an eigenvalue of A with eigenvector x.
For example, let A = [1 2; 3 0]. Then [1 2; 3 0][1; 1] = [3; 3] = 3[1; 1].
Section 7.2   Diagonalization

1. P^−1 = [1 −4; −1 3]
P^−1 AP = [1 −4; −1 3][−11 36; −3 10][−3 −4; −1 −1] = [1 0; 0 −2]

3. P^−1 = (1/5)[1 4; −1 1]
P^−1 AP = (1/5)[1 4; −1 1][−2 4; 1 1][1 −4; 1 1] = [2 0; 0 −3]

5. P^−1 = [2/3 −2/3 1; 0 1/4 0; −1/3 1/12 0]
P^−1 AP = [2/3 −2/3 1; 0 1/4 0; −1/3 1/12 0][−1 1 0; 0 3 0; 4 −2 5][0 1 −3; 0 4 0; 1 2 2] = [5 0 0; 0 3 0; 0 0 −1]

7. P^−1 = [1 −1/2 5/2; 0 1/2 −1/2; 0 0 1]
P^−1 AP = [1 −1/2 5/2; 0 1/2 −1/2; 0 0 1][4 −1 3; 0 2 1; 0 0 3][1 1 −2; 0 2 1; 0 0 1] = [4 0 0; 0 2 0; 0 0 3]

9. A has only one eigenvalue, λ = 0, and a basis for the eigenspace is {(0, 1)}. So, A does not satisfy Theorem 7.5 (it does not have two linearly independent eigenvectors) and is not diagonalizable.

11. The matrix has eigenvalue λ = 1 (repeated), and a basis for the eigenspace is {(1, 0)}. So, A does not satisfy Theorem 7.5 (it does not have two linearly independent eigenvectors) and it is not diagonalizable.

13. The matrix has eigenvalues λ1 = 1 (repeated) and λ2 = 2. A basis for the eigenspace associated with λ1 = 1 is {(1, 0, 0)}. So, the matrix has only two linearly independent eigenvectors and, by Theorem 7.5, it is not diagonalizable.

15. From Exercise 37, Section 7.1, A has only three linearly independent eigenvectors. So, A does not satisfy Theorem 7.5 and is not diagonalizable.

17. The eigenvalues of A are λ1 = 0 and λ2 = 2. Because A has two distinct eigenvalues, it is diagonalizable (by Theorem 7.6).

19. The eigenvalues of A are λ = 0 and λ = 2 (repeated). Because A does not have three distinct eigenvalues, Theorem 7.6 does not guarantee that A is diagonalizable.

21. The eigenvalues of A are λ1 = 0 and λ2 = 7 (see Exercise 15, Section 7.1). The corresponding eigenvectors (1, 2) and (−3, 1) are used to form the columns of P. So,
P = [1 −3; 2 1] ⇒ P^−1 = (1/7)[1 3; −2 1]
and
P^−1 AP = (1/7)[1 3; −2 1][6 −3; −2 1][1 −3; 2 1] = [0 0; 0 7].
23. The eigenvalues of A are λ1 = 1/2 and λ2 = −1/2 (see Exercise 17, Section 7.1). The corresponding eigenvectors (3, 1) and (1, 1) are used to form the columns of P. So,
P = [3 1; 1 1] ⇒ P^−1 = [1/2 −1/2; −1/2 3/2]
and
P^−1 AP = [1/2 −1/2; −1/2 3/2][1 −3/2; 1/2 −1][3 1; 1 1] = [1/2 0; 0 −1/2].

25. The eigenvalues of A are λ1 = 1, λ2 = 2, λ3 = 4 (see Exercise 21, Section 7.1). The corresponding eigenvectors (−1, 1, 1), (1, 0, 0), (7, −4, 2) are used to form the columns of P. So,
P = [−1 1 7; 1 0 −4; 1 0 2] ⇒ P^−1 = [0 1/3 2/3; 1 3/2 −1/2; 0 −1/6 1/6]
and
P^−1 AP = [0 1/3 2/3; 1 3/2 −1/2; 0 −1/6 1/6][2 −2 3; 0 3 −2; 0 −1 2][−1 1 7; 1 0 −4; 1 0 2] = [1 0 0; 0 2 0; 0 0 4].

27. The eigenvalues of A are λ1 = −3 and λ2 = 3 (repeated) (see Exercise 23, Section 7.1). The corresponding eigenvectors (1, 1, 3), (1, 1, 0), and (1, 0, −1) are used to form the columns of P. So,
P = [1 1 1; 1 1 0; 3 0 −1] ⇒ P^−1 = [1/3 −1/3 1/3; −1/3 4/3 −1/3; 1 −1 0]
and
P^−1 AP = [1/3 −1/3 1/3; −1/3 4/3 −1/3; 1 −1 0][1 2 −2; −2 5 −2; −6 6 −3][1 1 1; 1 1 0; 3 0 −1] = [−3 0 0; 0 3 0; 0 0 3].

29. The eigenvalues of A are λ1 = −2, λ2 = 6, and λ3 = 4 (see Exercise 25, Section 7.1). The corresponding eigenvectors (3, 2, 0), (−1, 2, 0), and (−5, 10, 2) are used to form the columns of P. So,
P = [3 −1 −5; 2 2 10; 0 0 2] ⇒ P^−1 = [1/4 1/8 0; −1/4 3/8 −5/2; 0 0 1/2]
and
P^−1 AP = [1/4 1/8 0; −1/4 3/8 −5/2; 0 0 1/2][0 −3 5; −4 4 −10; 0 0 4][3 −1 −5; 2 2 10; 0 0 2] = [−2 0 0; 0 6 0; 0 0 4].
31. The eigenvalues of A are λ1 = 1 and λ2 = 2. Furthermore, there are just two linearly independent eigenvectors of A, x1 = (−1, 0, 1) and x2 = (0, 1, 0). So, A is not diagonalizable.

33. The eigenvalues of A are λ1 = 2, λ2 = 1, λ3 = −1, and λ4 = −2. The corresponding eigenvectors (4, 4, 4, 1), (0, 0, 3, 1), (0, −2, 1, 1), and (0, 0, 0, 1) are used to form the columns of P. So,
P = [4 0 0 0; 4 0 −2 0; 4 3 1 0; 1 1 1 1] ⇒ P^−1 = [1/4 0 0 0; −1/2 1/6 1/3 0; 1/2 −1/2 0 0; −1/4 1/3 −1/3 1]
and
P^−1 AP = [1/4 0 0 0; −1/2 1/6 1/3 0; 1/2 −1/2 0 0; −1/4 1/3 −1/3 1][2 0 0 0; 3 −1 0 0; 0 1 1 0; 0 0 1 −2][4 0 0 0; 4 0 −2 0; 4 3 1 0; 1 1 1 1] = [2 0 0 0; 0 1 0 0; 0 0 −1 0; 0 0 0 −2].

35. The standard matrix for T is
A = [1 1; 1 1],
which has eigenvalues λ1 = 0 and λ2 = 2 and corresponding eigenvectors (1, −1) and (1, 1). Let B = {(1, −1), (1, 1)} and find the image of each vector in B.
[T(1, −1)]_B = [(0, 0)]_B = (0, 0)
[T(1, 1)]_B = [(2, 2)]_B = (0, 2)
The matrix of T relative to B is then
A′ = [0 0; 0 2].

37. The standard matrix for T is
A = [1 0; 1 2],
which has eigenvalues λ1 = 1 and λ2 = 2 and corresponding eigenvectors −1 + x and x. Let B = {−1 + x, x} and find the image of each vector in B.
[T(−1 + x)]_B = [−1 + x]_B = (1, 0)
[T(x)]_B = [2x]_B = (0, 2)
The matrix of T relative to B is then
A′ = [1 0; 0 2].

39. (a) B^k = (P^−1 AP)^k = (P^−1 AP)(P^−1 AP) ⋯ (P^−1 AP)  (k times) = P^−1 A^k P
(b) B = P^−1 AP ⇒ A = PBP^−1 ⇒ A^k = PB^k P^−1, from part (a).
41. The eigenvalues and corresponding eigenvectors of A are λ1 = −2, λ2 = 1, x1 = (−3/2, 1), and x2 = (−2, 1). Construct a nonsingular matrix P from the eigenvectors of A,
P = [−3/2 −2; 1 1],
and find a diagonal matrix B similar to A.
B = P^−1 AP = [2 4; −2 −3][10 18; −6 −11][−3/2 −2; 1 1] = [−2 0; 0 1]
Then
A^6 = PB^6 P^−1 = [−3/2 −2; 1 1][64 0; 0 1][2 4; −2 −3] = [−188 −378; 126 253].

43. The eigenvalues and corresponding eigenvectors of A are λ1 = 0, λ2 = 2 (repeated), x1 = (−1, 3, 1), x2 = (3, 0, 1), and x3 = (−2, 1, 0). Construct a nonsingular matrix P from the eigenvectors of A,
P = [−1 3 −2; 3 0 1; 1 1 0],
and find a diagonal matrix B similar to A.
B = P^−1 AP = [1/2 1 −3/2; −1/2 −1 5/2; −3/2 −2 9/2][3 2 −3; −3 −4 9; −1 −2 5][−1 3 −2; 3 0 1; 1 1 0] = [0 0 0; 0 2 0; 0 0 2]
Then,
A^8 = PB^8 P^−1 = P[0 0 0; 0 256 0; 0 0 256]P^−1 = [384 256 −384; −384 −512 1152; −128 −256 640].

45. (a) True. See the proof of Theorem 7.4, pages 436–437.
(b) False. See Theorem 7.6, page 442.

47. Yes, the order of the elements on the main diagonal may change. For instance,
[1 0; 0 2] and [2 0; 0 1] are similar.

49. Assume that A is diagonalizable, P^−1 AP = D, where D is diagonal. Then
D^T = (P^−1 AP)^T = P^T A^T (P^−1)^T = P^T A^T (P^T)^−1
is diagonal, which shows that A^T is diagonalizable.

51. Assume that A is diagonalizable with n real eigenvalues λ1, …, λn. Then if P^−1 AP = D, D is diagonal with λ1, …, λn along the main diagonal, and
|A| = |P^−1 AP| = |D| = λ1 λ2 ⋯ λn.

53. Let the eigenvalues of the diagonalizable matrix A be all ±1. Then there exists an invertible matrix P such that P^−1 AP = D, where D is diagonal with ±1 along the main diagonal. So A = PDP^−1 and, because D^−1 = D,
A^−1 = (PDP^−1)^−1 = (P^−1)^−1 D^−1 P^−1 = PDP^−1 = A.

55. Given that P^−1 AP = D, where D is diagonal, A = PDP^−1 and
A^−1 = (PDP^−1)^−1 = (P^−1)^−1 D^−1 P^−1 = PD^−1 P^−1 ⇒ P^−1 A^−1 P = D^−1,
which shows that A^−1 is diagonalizable.
57. A is triangular, so the eigenvalues are simply the entries on the main diagonal. So, the only eigenvalue is λ = k, and the corresponding eigenspace is spanned by a single eigenvector, so it has dimension 1.
Because matrix A does not have two linearly independent eigenvectors, it does not satisfy Theorem 7.5 and it is not diagonalizable.
Section 7.3 Symmetric Matrices and Orthogonal Diagonalization

1. Because [1 3; 3 −1]^T = [1 3; 3 −1], the matrix is symmetric.

3. Because

[4 −2 1; 3 1 2; 1 2 1]^T = [4 3 1; −2 1 2; 1 2 1] ≠ [4 −2 1; 3 1 2; 1 2 1],

the matrix is not symmetric.

5. Because

[0 1 2 −1; 1 0 −3 2; 2 −3 0 1; −1 2 1 −2]^T = [0 1 2 −1; 1 0 −3 2; 2 −3 0 1; −1 2 1 −2],

the matrix is symmetric.

7. The characteristic equation of A is

|λI − A| = |λ−3 −1; −1 λ−3| = (λ − 4)(λ − 2) = 0.

Therefore, the eigenvalues of A are λ1 = 4 and λ2 = 2. The dimension of the eigenspace corresponding to each eigenvalue is 1 (by Theorem 7.7).

9. The characteristic equation of A is

|λI − A| = |λ −1 −1; −1 λ −1; −1 −1 λ−1| = (λ + 1)(λ² − 2λ − 1) = 0.

Therefore, the eigenvalues are λ1 = −1, λ2 = 1 − √2, and λ3 = 1 + √2. The dimension of the eigenspace corresponding to each eigenvalue is 1.

11. The characteristic equation of A is

|λI − A| = |λ −2 −2; −2 λ −2; −2 −2 λ| = (λ + 2)²(λ − 4) = 0.

The eigenvalues of A are λ1 = −2 and λ2 = 4. The multiplicity of λ1 = −2 is 2, so the dimension of the corresponding eigenspace is 2 (by Theorem 7.7). The dimension of the eigenspace corresponding to λ2 = 4 is 1.

13. The characteristic equation of A is

|λI − A| = |λ−3 0 0; 0 λ−2 0; 0 0 λ−2| = (λ − 3)(λ − 2)² = 0.

Therefore, the eigenvalues of A are λ1 = 3 and λ2 = 2. The dimension of the eigenspace corresponding to λ1 = 3 is 1. The multiplicity of λ2 = 2 is 2, so the dimension of the corresponding eigenspace is 2 (by Theorem 7.7).

15. Because the column vectors of the matrix form an orthonormal set, the matrix is orthogonal.

17. Because the column vectors of the matrix do not form an orthonormal set [(−4, 0, 3) and (3, 0, 4) are not unit vectors], the matrix is not orthogonal.

19. Because the column vectors of the matrix form an orthonormal set, the matrix is orthogonal.

21. Because the column vectors of the matrix form an orthonormal set, the matrix is orthogonal.
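The symmetry and orthogonality tests used in Exercises 1–21 amount to comparing a matrix with its transpose and checking M^T M = I. A small sketch of both checks (the function names are illustrative only; the orthogonal example reuses the rational matrix [3/5 −4/5; 4/5 3/5] that appears later in Section 7.4):

```python
from fractions import Fraction as F

def transpose(M):
    return [list(row) for row in zip(*M)]

def is_symmetric(M):
    # a matrix is symmetric exactly when it equals its transpose
    return M == transpose(M)

def is_orthogonal(M):
    # columns form an orthonormal set exactly when M^T M = I
    Mt, n = transpose(M), len(M)
    MtM = [[sum(Mt[i][k] * M[k][j] for k in range(n)) for j in range(n)]
           for i in range(n)]
    return MtM == [[int(i == j) for j in range(n)] for i in range(n)]

sym1 = is_symmetric([[1, 3], [3, -1]])                   # Exercise 1
sym3 = is_symmetric([[4, -2, 1], [3, 1, 2], [1, 2, 1]])  # Exercise 3
orth = is_orthogonal([[F(3, 5), F(-4, 5)], [F(4, 5), F(3, 5)]])
```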
23. The eigenvalues of A are λ1 = 0 and λ2 = 2, with corresponding eigenvectors (1, −1) and (1, 1), respectively. Normalize each eigenvector to form the columns of P. Then

P = [√2/2 √2/2; −√2/2 √2/2]

and

P^T AP = [√2/2 −√2/2; √2/2 √2/2][1 1; 1 1][√2/2 √2/2; −√2/2 √2/2] = [0 0; 0 2].
25. The eigenvalues of A are λ1 = 0 and λ2 = 3, with corresponding eigenvectors (√2/2, −1) and (√2, 1), respectively. Normalize each eigenvector to form the columns of P. Then

P = [√3/3 √6/3; −√6/3 √3/3]

and

P^T AP = [√3/3 −√6/3; √6/3 √3/3][2 √2; √2 1][√3/3 √6/3; −√6/3 √3/3] = [0 0; 0 3].
27. The eigenvalues of A are λ1 = −15, λ2 = 0, and λ3 = 15, with corresponding eigenvectors (−2, 1, 2), (−1, 2, −2), and (2, 2, 1), respectively. Normalize each eigenvector to form the columns of P. Then

P = [−2/3 −1/3 2/3; 1/3 2/3 2/3; 2/3 −2/3 1/3]

and

P^T AP = [−2/3 1/3 2/3; −1/3 2/3 −2/3; 2/3 2/3 1/3][0 10 10; 10 5 0; 10 0 −5][−2/3 −1/3 2/3; 1/3 2/3 2/3; 2/3 −2/3 1/3] = [−15 0 0; 0 0 0; 0 0 15].
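Because the entries of P in Exercise 27 are the rationals ±1/3 and ±2/3, the product P^T AP can be verified exactly. A hedged sketch (helper names are ad hoc):

```python
from fractions import Fraction as F

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(M):
    return [list(r) for r in zip(*M)]

A = [[0, 10, 10], [10, 5, 0], [10, 0, -5]]
t = F(1, 3)
P = [[-2 * t, -1 * t,  2 * t],
     [ 1 * t,  2 * t,  2 * t],
     [ 2 * t, -2 * t,  1 * t]]

# orthogonal diagonalization: P^T A P should be diag(-15, 0, 15)
D = matmul(transpose(P), matmul(A, P))
```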
29. The eigenvalues of A are λ1 = −2, λ2 = 2, and λ3 = 4, with corresponding eigenvectors (−1, −1, 1), (−1, 1, 0), and (1, 1, 2), respectively. Normalize each eigenvector to form the columns of P. Then

P = [−√3/3 −√2/2 √6/6; −√3/3 √2/2 √6/6; √3/3 0 √6/3]

and

P^T AP = P^T [1 −1 2; −1 1 2; 2 2 2] P = [−2 0 0; 0 2 0; 0 0 4].
31. The eigenvalues of A are λ1 = 2 and λ2 = 6, with corresponding eigenvectors (1, −1, 0, 0) and (0, 0, 1, −1) for λ1, and (1, 1, 0, 0) and (0, 0, 1, 1) for λ2. Normalize each eigenvector to form the columns of P. Then

P = [√2/2 0 √2/2 0; −√2/2 0 √2/2 0; 0 √2/2 0 √2/2; 0 −√2/2 0 √2/2]

and

P^T AP = P^T [4 2 0 0; 2 4 0 0; 0 0 4 2; 0 0 2 4] P = [2 0 0 0; 0 2 0 0; 0 0 6 0; 0 0 0 6].

33. (a) True. See Theorem 7.10, page 453.
(b) True. See Theorem 7.9, page 452.

35. (A^T A)^T = A^T (A^T)^T = A^T A ⇒ A^T A is symmetric.
(AA^T)^T = (A^T)^T A^T = AA^T ⇒ AA^T is symmetric.

37. If A is orthogonal, then AA^T = I. So, 1 = |AA^T| = |A||A^T| = |A|² ⇒ |A| = ±1.

39. Observe that A is orthogonal because

A^{-1} = (1/(cos²θ + sin²θ)) [cos θ sin θ; −sin θ cos θ] = A^T.
41. Let A be orthogonal, A^{-1} = A^T. Then

(A^T)^{-1} = (A^{-1})^{-1} = A = (A^T)^T ⇒ A^T is orthogonal.

Furthermore,

(A^{-1})^{-1} = (A^T)^{-1} = (A^{-1})^T ⇒ A^{-1} is orthogonal.
Section 7.4 Applications of Eigenvalues and Eigenvectors

1. x2 = Ax1 = [0 2; 1/2 0][10; 10] = [20; 5]
x3 = Ax2 = [0 2; 1/2 0][20; 5] = [10; 10]

3. x2 = Ax1 = [0 3 4; 1 0 0; 0 1/2 0][12; 12; 12] = [84; 12; 6]
x3 = Ax2 = [0 3 4; 1 0 0; 0 1/2 0][84; 12; 6] = [60; 84; 6]
5. The eigenvalues are 1 and −1. Choosing the positive eigenvalue, λ = 1, the corresponding eigenvector is found by row-reducing λI − A = I − A.

[1 −2; −1/2 1] ⇒ [1 −2; 0 0]

So, an eigenvector is (2, 1), and the stable age distribution vector is x = t[2; 1].

7. The eigenvalues of A are −1 and 2. Choosing the positive eigenvalue, let λ = 2. An eigenvector corresponding to λ = 2 is found by row-reducing 2I − A.

[2 −3 −4; −1 2 0; 0 −1/2 2] ⇒ [1 0 −8; 0 1 −4; 0 0 0]

So, an eigenvector is (8, 4, 1), and the stable age distribution vector is x = t[8; 4; 1].
9. Construct the age transition matrix.

A = [2 4 2; 0.75 0 0; 0 0.25 0]

The current age distribution vector is x1 = [120; 120; 120]. In one year, the age distribution vector will be

x2 = Ax1 = [2 4 2; 0.75 0 0; 0 0.25 0][120; 120; 120] = [960; 90; 30].

In two years, the age distribution vector will be

x3 = Ax2 = [2 4 2; 0.75 0 0; 0 0.25 0][960; 90; 30] = [2340; 720; 22.5].

11. Construct the age transition matrix.

A = [2 5 2; 0.6 0 0; 0 0.5 0]

The current age distribution vector is x1 = [100; 100; 100]. In one year, the age distribution vector will be

x2 = Ax1 = [2 5 2; 0.6 0 0; 0 0.5 0][100; 100; 100] = [900; 60; 50].

In two years, the age distribution vector will be

x3 = Ax2 = [2 5 2; 0.6 0 0; 0 0.5 0][900; 60; 50] = [2200; 540; 30].
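The age-transition iterations in Exercises 9 and 11 are just repeated matrix–vector products. A minimal sketch reproducing the numbers of Exercise 9 (the helper `step` is illustrative):

```python
def step(A, x):
    # one application of the age transition matrix to a distribution vector
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[2, 4, 2], [0.75, 0, 0], [0, 0.25, 0]]
x1 = [120, 120, 120]
x2 = step(A, x1)   # distribution after one year
x3 = step(A, x2)   # distribution after two years
```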
13. The solution to the differential equation y′ = ky is y = Ce^{kt}. So, y1 = C1e^{2t} and y2 = C2e^{t}.
15. The solution to the differential equation y′ = ky is y = Ce^{kt}. So, y1 = C1e^{−t}, y2 = C2e^{6t}, and y3 = C3e^{t}.

17. The solution to the differential equation y′ = ky is y = Ce^{kt}. So, y1 = C1e^{2t}, y2 = C2e^{−t}, and y3 = C3e^{t}.

19. This system has the matrix form

y′ = [y1′; y2′] = [1 −4; 0 2][y1; y2] = Ay.

The eigenvalues of A are λ1 = 1 and λ2 = 2, with corresponding eigenvectors (1, 0) and (−4, 1), respectively. So, diagonalize A using a matrix P whose columns are the eigenvectors of A.

P = [1 −4; 0 1] and P^{-1}AP = [1 0; 0 2]

The solution of the system w′ = P^{-1}APw is w1 = C1e^{t} and w2 = C2e^{2t}. Return to the original system by applying the substitution y = Pw.

y = [y1; y2] = [1 −4; 0 1][w1; w2] = [w1 − 4w2; w2]

So, the solution is

y1 = C1e^{t} − 4C2e^{2t}
y2 = C2e^{2t}.

21. This system has the matrix form

y′ = [y1′; y2′] = [1 2; 2 1][y1; y2] = Ay.

The eigenvalues of A are λ1 = −1 and λ2 = 3, with corresponding eigenvectors x1 = (1, −1) and x2 = (1, 1), respectively. So, diagonalize A using a matrix P whose column vectors are the eigenvectors of A.

P = [1 1; −1 1] and P^{-1}AP = [−1 0; 0 3]

The solution of the system w′ = P^{-1}APw is w1 = C1e^{−t} and w2 = C2e^{3t}. Return to the original system by applying the substitution y = Pw.

y = [y1; y2] = [1 1; −1 1][w1; w2] = [w1 + w2; −w1 + w2]

So, the solution is

y1 = C1e^{−t} + C2e^{3t}
y2 = −C1e^{−t} + C2e^{3t}.
23. This system has the matrix form

y′ = [y1′; y2′; y3′] = [0 −3 5; −4 4 −10; 0 0 4][y1; y2; y3] = Ay.

The eigenvalues of A are λ1 = −2, λ2 = 6, and λ3 = 4, with corresponding eigenvectors (3, 2, 0), (−1, 2, 0), and (−5, 10, 2), respectively. So, diagonalize A using a matrix P whose column vectors are the eigenvectors of A.

P = [3 −1 −5; 2 2 10; 0 0 2] and P^{-1}AP = [−2 0 0; 0 6 0; 0 0 4]

The solution of the system w′ = P^{-1}APw is w1 = C1e^{−2t}, w2 = C2e^{6t}, and w3 = C3e^{4t}. Return to the original system by applying the substitution y = Pw.

y = [y1; y2; y3] = [3 −1 −5; 2 2 10; 0 0 2][w1; w2; w3] = [3w1 − w2 − 5w3; 2w1 + 2w2 + 10w3; 2w3]

So, the solution is

y1 = 3C1e^{−2t} − C2e^{6t} − 5C3e^{4t}
y2 = 2C1e^{−2t} + 2C2e^{6t} + 10C3e^{4t}
y3 = 2C3e^{4t}.
25. This system has the matrix form

y′ = [y1′; y2′; y3′] = [1 −2 1; 0 2 4; 0 0 3][y1; y2; y3] = Ay.

The eigenvalues of A are λ1 = 1, λ2 = 2, and λ3 = 3 with corresponding eigenvectors x1 = (1, 0, 0), x2 = (−2, 1, 0), and x3 = (−7, 8, 2). So, diagonalize A using a matrix P whose column vectors are the eigenvectors of A.

P = [1 −2 −7; 0 1 8; 0 0 2] and P^{-1}AP = [1 0 0; 0 2 0; 0 0 3]

The solution of the system w′ = P^{-1}APw is w1 = C1e^{t}, w2 = C2e^{2t}, and w3 = C3e^{3t}. Return to the original system by applying the substitution y = Pw.

y = [y1; y2; y3] = [1 −2 −7; 0 1 8; 0 0 2][w1; w2; w3] = [w1 − 2w2 − 7w3; w2 + 8w3; 2w3]

So, the solution is

y1 = C1e^{t} − 2C2e^{2t} − 7C3e^{3t}
y2 = C2e^{2t} + 8C3e^{3t}
y3 = 2C3e^{3t}.
27. Because

y′ = [y1′; y2′] = [1 1; 0 1][y1; y2] = Ay,

the system represented by y′ = Ay is

y1′ = y1 + y2
y2′ = y2.

Note that

y1′ = C1e^t + C2te^t + C2e^t = y1 + y2
y2′ = C2e^t = y2.

29. Because

y′ = [y1′; y2′; y3′] = [0 1 0; 0 0 1; 0 −4 0][y1; y2; y3] = Ay,

the system represented by y′ = Ay is

y1′ = y2
y2′ = y3
y3′ = −4y2.

Note that

y1′ = −2C2 sin 2t + 2C3 cos 2t = y2
y2′ = −4C3 sin 2t − 4C2 cos 2t = y3
y3′ = 8C2 sin 2t − 8C3 cos 2t = −4y2.

31. The matrix of the quadratic form is

A = [a b/2; b/2 c] = [1 0; 0 1].

33. The matrix of the quadratic form is

A = [a b/2; b/2 c] = [9 5; 5 −4].

35. The matrix of the quadratic form is

A = [a b/2; b/2 c] = [0 5; 5 −10].
37. The matrix of the quadratic form is

A = [a b/2; b/2 c] = [2 −3/2; −3/2 −2].

The eigenvalues of A are λ1 = −5/2 and λ2 = 5/2, with corresponding eigenvectors x1 = (1, 3) and x2 = (−3, 1), respectively. Using unit vectors in the direction of x1 and x2 to form the columns of P yields

P = [1/√10 −3/√10; 3/√10 1/√10].

Note that

P^T AP = [1/√10 3/√10; −3/√10 1/√10][2 −3/2; −3/2 −2][1/√10 −3/√10; 3/√10 1/√10] = [−5/2 0; 0 5/2].
39. The matrix of the quadratic form is

A = [a b/2; b/2 c] = [13 3√3; 3√3 7].

The eigenvalues of A are λ1 = 4 and λ2 = 16, with corresponding eigenvectors x1 = (1, −√3) and x2 = (√3, 1), respectively. Using unit vectors in the direction of x1 and x2 to form the columns of P,

P = [1/2 √3/2; −√3/2 1/2] and P^T AP = [4 0; 0 16].

41. The matrix of the quadratic form is

A = [a b/2; b/2 c] = [16 −12; −12 9].

The eigenvalues of A are λ1 = 0 and λ2 = 25, with corresponding eigenvectors x1 = (3, 4) and x2 = (−4, 3), respectively. Using unit vectors in the direction of x1 and x2 to form the columns of P,

P = [3/5 −4/5; 4/5 3/5] and P^T AP = [0 0; 0 25].

43. The matrix of the quadratic form is

A = [a b/2; b/2 c] = [13 −4; −4 7].

This matrix has eigenvalues of 5 and 15, with corresponding unit eigenvectors (1/√5, 2/√5) and (−2/√5, 1/√5), respectively. Let

P = [1/√5 −2/√5; 2/√5 1/√5] and P^T AP = [5 0; 0 15].

This implies that the rotated conic is an ellipse with equation 5(x′)² + 15(y′)² = 45.
45. The matrix of the quadratic form is

A = [a b/2; b/2 c] = [7 16; 16 −17].

This matrix has eigenvalues of −25 and 15, with corresponding unit eigenvectors (1/√5, −2/√5) and (2/√5, 1/√5), respectively. Let

P = [1/√5 2/√5; −2/√5 1/√5] and P^T AP = [−25 0; 0 15].

This implies that the rotated conic is a hyperbola with equation −25(x′)² + 15(y′)² − 50 = 0.

47. The matrix of the quadratic form is

A = [a b/2; b/2 c] = [2 2; 2 2].

This matrix has eigenvalues of 0 and 4, with corresponding unit eigenvectors (1/√2, −1/√2) and (1/√2, 1/√2), respectively. Let

P = [1/√2 1/√2; −1/√2 1/√2] and P^T AP = [0 0; 0 4].

This implies that the rotated conic is a parabola. Furthermore,

[d e]P = [6√2 2√2]P = [4 8] = [d′ e′].

So, the equation in the x′y′-coordinate system is 4(y′)² + 4x′ + 8y′ + 4 = 0.

49. The matrix of the quadratic form is

A = [a b/2; b/2 c] = [0 1/2; 1/2 0].

This matrix has eigenvalues of −1/2 and 1/2, with corresponding unit eigenvectors (−1/√2, 1/√2) and (1/√2, 1/√2), respectively. Let

P = [−1/√2 1/√2; 1/√2 1/√2] and P^T AP = [−1/2 0; 0 1/2].

This implies that the rotated conic is a hyperbola. Furthermore,

[d e]P = [1 −2]P = [−3/√2 −1/√2] = [d′ e′],

so the equation in the x′y′-coordinate system is −(1/2)(x′)² + (1/2)(y′)² − (3/√2)x′ − (1/√2)y′ + 3 = 0.
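For a 2 × 2 quadratic-form matrix [a b/2; b/2 c], the eigenvalues follow from the quadratic formula, and their signs classify the rotated conic as in Exercises 39–49. A hedged sketch using the matrix of Exercise 45 (the quadratic part 7x² + 32xy − 17y² is inferred from that matrix; the function name is illustrative):

```python
import math

def quadratic_form_eigenvalues(a, b, c):
    # eigenvalues of [[a, b/2], [b/2, c]] for the form a x^2 + b xy + c y^2
    m = (a + c) / 2
    r = math.sqrt(((a - c) / 2) ** 2 + (b / 2) ** 2)
    return m - r, m + r

# Exercise 45: eigenvalues -25 and 15; opposite signs indicate a hyperbola
lo, hi = quadratic_form_eigenvalues(7, 32, -17)
```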
51. The matrix of the quadratic form is

A = [3 −1 0; −1 3 0; 0 0 8].

The eigenvalues of A are 2, 4, and 8 with corresponding unit eigenvectors (1/√2, 1/√2, 0), (−1/√2, 1/√2, 0), and (0, 0, 1), respectively. Then let

P = [1/√2 −1/√2 0; 1/√2 1/√2 0; 0 0 1] and P^T AP = [2 0 0; 0 4 0; 0 0 8].

Furthermore,

[g h i]P = [0 0 0]P = [0 0 0] = [g′ h′ i′].

So, the equation of the rotated quadric surface is 2(x′)² + 4(y′)² + 8(z′)² − 16 = 0.

53. The matrix of the quadratic form is

A = [1 0 0; 0 2 1; 0 1 2].

The eigenvalues of A are 1, 1, and 3 with corresponding unit eigenvectors (1, 0, 0), (0, −1/√2, 1/√2), and (0, 1/√2, 1/√2), respectively. Then let

P = [1 0 0; 0 −1/√2 1/√2; 0 1/√2 1/√2] and P^T AP = [1 0 0; 0 1 0; 0 0 3].

Furthermore,

[g h i]P = [0 0 0]P = [0 0 0] = [g′ h′ i′].

So, the equation of the rotated quadric surface is (x′)² + (y′)² + 3(z′)² − 1 = 0.
55. Let P = [a b; c d] be a 2 × 2 orthogonal matrix such that |P| = 1. Define θ ∈ [0, 2π] as follows.

(i) If a = 1, then c = 0, b = 0, and d = 1, so let θ = 0.
(ii) If a = −1, then c = 0, b = 0, and d = −1, so let θ = π.
(iii) If a ≥ 0 and c > 0, let θ = arccos(a), 0 < θ ≤ π/2.
(iv) If a ≥ 0 and c < 0, let θ = 2π − arccos(a), 3π/2 ≤ θ < 2π.
(v) If a ≤ 0 and c > 0, let θ = arccos(a), π/2 ≤ θ < π.
(vi) If a ≤ 0 and c < 0, let θ = 2π − arccos(a), π < θ ≤ 3π/2.

In each of these cases, you can confirm that

P = [a b; c d] = [cos θ −sin θ; sin θ cos θ].
Review Exercises for Chapter 7
1. (a) The characteristic equation of A is given by

|λI − A| = |λ−2 −1; −5 λ+2| = λ² − 9 = 0.

(b) The eigenvalues of A are λ1 = −3 and λ2 = 3.

(c) To find the eigenvectors corresponding to λ1 = −3, solve the matrix equation (λ1I − A)x = 0. Row-reduce the augmented matrix to yield

[−5 −1 | 0; −5 −1 | 0] ⇒ [1 1/5 | 0; 0 0 | 0].

So, x1 = (1, −5) is an eigenvector and {(1, −5)} is a basis for the eigenspace corresponding to λ1 = −3. Similarly, solve (λ2I − A)x = 0 for λ2 = 3. So, x2 = (1, 1) is an eigenvector and {(1, 1)} is a basis for the eigenspace corresponding to λ2 = 3.

3. (a) The characteristic equation of A is given by

|λI − A| = |λ−9 −4 3; 2 λ −6; 1 4 λ−11| = λ³ − 20λ² + 128λ − 256 = (λ − 4)(λ − 8)².

(b) The eigenvalues of A are λ1 = 4 and λ2 = 8 (repeated).

(c) To find the eigenvectors corresponding to λ1 = 4, solve the matrix equation (λ1I − A)x = 0. Row-reducing the augmented matrix,

[−5 −4 3 | 0; 2 4 −6 | 0; 1 4 −7 | 0] ⇒ [1 0 1 | 0; 0 1 −2 | 0; 0 0 0 | 0]

you can see that a basis for the eigenspace of λ1 = 4 is {(−1, 2, 1)}. Similarly, solve (λ2I − A)x = 0 for λ2 = 8. So, a basis for the eigenspace of λ2 = 8 is {(3, 0, 1), (−4, 1, 0)}.
5. (a) The characteristic equation of A is given by

|λI − A| = |λ−2 0 −1; 0 λ−3 −4; 0 0 λ−1| = (λ − 2)(λ − 3)(λ − 1) = 0.

(b) The eigenvalues of A are λ1 = 1, λ2 = 2, and λ3 = 3.

(c) To find the eigenvectors corresponding to λ1 = 1, solve the matrix equation (λ1I − A)x = 0. Row-reducing the augmented matrix,

[−1 0 −1 | 0; 0 −2 −4 | 0; 0 0 0 | 0] ⇒ [1 0 1 | 0; 0 1 2 | 0; 0 0 0 | 0]

you can see that a basis for the eigenspace of λ1 = 1 is {(−1, −2, 1)}. Similarly, solve (λ2I − A)x = 0 for λ2 = 2, and you see that {(1, 0, 0)} is a basis for the eigenspace of λ2 = 2. Finally, solve (λ3I − A)x = 0 for λ3 = 3, and you discover that {(0, 1, 0)} is a basis for its eigenspace.
7. (a) The characteristic equation of A is given by

|λI − A| = |λ−2 −1 0 0; −1 λ−2 0 0; 0 0 λ−2 −1; 0 0 −1 λ−2| = (λ − 1)²(λ − 3)² = 0.

(b) The eigenvalues of A are λ1 = 1 (repeated) and λ2 = 3 (repeated).

(c) To find the eigenvectors corresponding to λ1 = 1, solve the matrix equation (λ1I − A)x = 0 for λ1 = 1. Row-reducing the augmented matrix,

[−1 −1 0 0 | 0; −1 −1 0 0 | 0; 0 0 −1 −1 | 0; 0 0 −1 −1 | 0] ⇒ [1 1 0 0 | 0; 0 0 1 1 | 0; 0 0 0 0 | 0; 0 0 0 0 | 0]

you see that a basis for the eigenspace of λ1 = 1 is {(1, −1, 0, 0), (0, 0, 1, −1)}. Similarly, solve (λ2I − A)x = 0 for λ2 = 3, and discover that a basis for the eigenspace of λ2 = 3 is {(1, 1, 0, 0), (0, 0, 1, 1)}.
9. The eigenvalues of A are the solutions of

|λI − A| = |λ+2 1 −3; 0 λ−1 −2; 0 0 λ−1| = (λ + 2)(λ − 1)² = 0.

The eigenspace corresponding to the repeated eigenvalue λ = 1 has dimension 1, and so A is not diagonalizable.
11. The eigenvalues of A are the solutions to

|λI − A| = |λ−1 0 −2; 0 λ−1 0; −2 0 λ−1| = (λ − 3)(λ − 1)(λ + 1) = 0.

Therefore, the eigenvalues are 3, 1, and −1. The corresponding eigenvectors are the solutions of (λI − A)x = 0. So, an eigenvector corresponding to 3 is (1, 0, 1), an eigenvector corresponding to 1 is (0, 1, 0), and an eigenvector corresponding to −1 is (1, 0, −1). Now form P using the eigenvectors of A as column vectors.

P = [1 0 1; 0 1 0; 1 0 −1]

Note that

P^{-1}AP = [1/2 0 1/2; 0 1 0; 1/2 0 −1/2][1 0 2; 0 1 0; 2 0 1][1 0 1; 0 1 0; 1 0 −1] = [3 0 0; 0 1 0; 0 0 −1].
13. Consider the characteristic equation

|λI − A| = |λ−cos θ, sin θ; −sin θ, λ−cos θ| = λ² − 2(cos θ)λ + 1 = 0.

The discriminant of this quadratic equation in λ is b² − 4ac = 4cos²θ − 4 = −4sin²θ. Because 0 < θ < π, this discriminant is always negative, and the characteristic equation has no real roots.
15. The eigenvalue is λ = 0 (repeated). To find its corresponding eigenspace, solve

[λ −2 | 0; 0 λ | 0] = [0 −2 | 0; 0 0 | 0] ⇒ [0 1 | 0; 0 0 | 0].

Because the eigenspace is only one-dimensional, the matrix A is not diagonalizable.

17. The eigenvalue is λ = 3 (repeated). To find its corresponding eigenspace, solve (λI − A)x = 0 with λ = 3.

[λ−3 0 0 | 0; −1 λ−3 0 | 0; 0 0 λ−3 | 0] = [0 0 0 | 0; −1 0 0 | 0; 0 0 0 | 0] ⇒ [1 0 0 | 0; 0 0 0 | 0; 0 0 0 | 0]

Because the eigenspace is only two-dimensional, the matrix A is not diagonalizable.
19. The eigenvalues of B are 1 and 2 with corresponding eigenvectors (0, 1) and (1, 0), respectively. Form the columns of P from the eigenvectors of B. So,

P = [0 1; 1 0]

P^{-1}BP = [0 1; 1 0][2 0; 0 1][0 1; 1 0] = [1 0; 0 2] = A.

Therefore, A and B are similar.

21. Because the eigenspace corresponding to λ = 1 of matrix A has dimension 1, while that of matrix B has dimension 2, the matrices are not similar.

23. Because

A^T = [−√2/2 √2/2; √2/2 √2/2] = A,

A is symmetric. Furthermore, the column vectors of A form an orthonormal set. So, A is both symmetric and orthogonal.

25. Because

A^T = [0 0 1; 0 1 0; 1 0 1] = A,

A is symmetric. However, column 3 is not a unit vector, so A is not orthogonal.

27. Because

A^T = [−2/3 2/3 1/3; 1/3 2/3 −2/3; −2/3 −1/3 2/3] ≠ A,

A is not symmetric. Because the column vectors of A do not form an orthonormal set (columns 2 and 3 are not orthogonal), A is not orthogonal.

29. The eigenvalues of A are 5 and −5 with corresponding unit eigenvectors (2/√5, 1/√5) and (−1/√5, 2/√5), respectively. Form the columns of P with the eigenvectors of A.

P = [2/√5 −1/√5; 1/√5 2/√5]

31. The eigenvalues of A are 3 and 1 (repeated), with corresponding unit eigenvectors (1/√2, 0, −1/√2), (1/√2, 0, 1/√2), and (0, 1, 0). Form the columns of P from the eigenvectors of A.

P = [1/√2 1/√2 0; 0 0 1; −1/√2 1/√2 0]

33. The eigenvalues of A are 1/6 and 1. The eigenvectors corresponding to λ = 1 are x = t(3, 2). By choosing t = 1/5, you can find the steady state probability vector for A to be v = (3/5, 2/5). Note that

Av = [2/3 1/2; 1/3 1/2][3/5; 2/5] = [3/5; 2/5] = v.

35. The eigenvalues of A are 1/2 and 1. The eigenvectors corresponding to λ = 1 are x = t(3, 2). By choosing t = 1/5, you can find the steady state probability vector for A to be v = (3/5, 2/5). Note that

Av = [0.8 0.3; 0.2 0.7][3/5; 2/5] = [3/5; 2/5] = v.
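The steady state vectors in Exercises 33–39 can also be reached by power iteration, because every other eigenvalue of these stochastic matrices has absolute value less than 1. A minimal sketch for the matrix of Exercise 35 (the function name is illustrative):

```python
def markov_steady_state(A, x, iters=100):
    # power iteration: A has column sums 1, x is a probability vector
    for _ in range(iters):
        x = [sum(a * xi for a, xi in zip(row, x)) for row in A]
    return x

A = [[0.8, 0.3], [0.2, 0.7]]
v = markov_steady_state(A, [0.5, 0.5])   # approaches (3/5, 2/5)
```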
37. The eigenvalues of A are 0, 1/2, and 1. The eigenvectors corresponding to λ = 1 are x = t(1, 2, 1). By choosing t = 1/4, you can find the steady state probability vector for A to be v = (1/4, 1/2, 1/4). Note that

Av = [1/2 1/4 0; 1/2 1/2 1/2; 0 1/4 1/2][1/4; 1/2; 1/4] = [1/4; 1/2; 1/4] = v.

39. The eigenvalues of A are 0.6 and 1. The eigenvectors corresponding to λ = 1 are x = t(4, 5, 7). By choosing t = 1/16, you can find the steady state probability vector for A to be v = (1/4, 5/16, 7/16). Note that

Av = [0.7 0.1 0.1; 0.2 0.7 0.1; 0.1 0.2 0.8][0.25; 0.3125; 0.4375] = [0.25; 0.3125; 0.4375] = v.

41. (P^T AP)^T = P^T A^T (P^T)^T = P^T AP (because A is symmetric), which shows that P^T AP is symmetric.

43. From the form p(λ) = a0 + a1λ + a2λ², you have a0 = 0, a1 = −9, and a2 = 4. This implies that the companion matrix of p is

A = [0 1; −a0/a2 −a1/a2] = [0 1; 0 9/4].

The eigenvalues of A are 0 and 9/4, the zeros of p.
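The companion-matrix construction in Exercise 43 can be sketched in general for a quadratic p(λ) = a0 + a1λ + a2λ² (the helper name is illustrative):

```python
def companion_2x2(a0, a1, a2):
    # companion matrix of p(λ) = a0 + a1*λ + a2*λ^2, with a2 != 0
    return [[0, 1], [-a0 / a2, -a1 / a2]]

A = companion_2x2(0, -9, 4)
trace = A[0][0] + A[1][1]                       # sum of eigenvalues: 0 + 9/4
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]     # product of eigenvalues: 0
```

The characteristic polynomial λ² − (trace)λ + det then has the zeros of p, here 0 and 9/4, as its roots.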
45. A² = 10A − 24I₂ = [80 −40; 20 20] − [24 0; 0 24] = [56 −40; 20 −4]
A³ = 10A² − 24A = [560 −400; 200 −40] − [192 −96; 48 48] = [368 −304; 152 −88]

47. (a) True. If Ax = λx, then A²x = A(Ax) = A(λx) = λ(Ax) = λ²x, showing that x is an eigenvector of A².
(b) False. For example, (1, 0) is an eigenvector of A² = [1 0; 0 1], but not of A = [0 1; 1 0].

49. Because A^{-1}(AB)A = BA, you can see that AB and BA are similar.

51. The eigenvalues of A are a + b and a − b, with corresponding unit eigenvectors (1/√2, 1/√2) and (−1/√2, 1/√2), respectively. So, P = [1/√2 −1/√2; 1/√2 1/√2]. Note that

P^{-1}AP = [1/√2 1/√2; −1/√2 1/√2][a b; b a][1/√2 −1/√2; 1/√2 1/√2] = [a+b 0; 0 a−b].

53. (a) A is diagonalizable if and only if a = b = c = 0.
(b) If exactly two of a, b, and c are zero, then the eigenspace of 2 has dimension 3. If exactly one of a, b, c is zero, then the dimension of the eigenspace is 2. If none of a, b, c is zero, the eigenspace has dimension 1.

55. (a) True. See "Definitions of Eigenvalue and Eigenvector," page 422.
(b) False. See Theorem 7.4, page 436.
(c) True. See "Definition of a Diagonalizable Matrix," page 435.
57. The population after one transition is

x2 = Ax1 = [0 1; 1/4 0][100; 100] = [100; 25]

and after two transitions is

x3 = Ax2 = [0 1; 1/4 0][100; 25] = [25; 25].

The eigenvalues of A are −1/2 and 1/2. Choose the positive eigenvalue and find the corresponding eigenvectors to be multiples of (2, 1). So, the stable age distribution vector is x = t[2; 1].

59. The population after one transition is

x2 = [0 3 12; 1 0 0; 0 1/6 0][300; 300; 300] = [4500; 300; 50]

and after two transitions is

x3 = [0 3 12; 1 0 0; 0 1/6 0][4500; 300; 50] = [1500; 4500; 50].

The positive eigenvalue 2 has corresponding eigenvector (24, 12, 1), which is a stable distribution. So, the stable age distribution vector is x = t[24; 12; 1].

61. Construct the age transition matrix.

A = [4 6 2; 0.9 0 0; 0 0.75 0]

The current age distribution vector is x1 = [120; 120; 120]. In one year, the age distribution vector will be

x2 = Ax1 = [4 6 2; 0.9 0 0; 0 0.75 0][120; 120; 120] = [1440; 108; 90].

In two years, the age distribution vector will be

x3 = Ax2 = [4 6 2; 0.9 0 0; 0 0.75 0][1440; 108; 90] = [6588; 1296; 81].

63. The matrix corresponding to the system y′ = Ay is

A = [1 2; 0 0].

This matrix has eigenvalues of 0 and 1, with corresponding eigenvectors (−2, 1) and (1, 0), respectively. So, a matrix P that diagonalizes A is

P = [−2 1; 1 0] and P^{-1}AP = [0 0; 0 1].

The system represented by w′ = P^{-1}APw yields the solution w1′ = 0 and w2′ = w2. So w1 = C1 and w2 = C2e^t. Substitute y = Pw and write

[y1; y2] = [−2 1; 1 0][w1; w2] = [−2w1 + w2; w1].

This implies that the solution is

y1 = −2C1 + C2e^t
y2 = C1.

65. The matrix corresponding to the system y′ = Ay is

A = [0 1 0; 1 0 0; 0 0 0].

This matrix has eigenvalues 1, −1, and 0 with corresponding eigenvectors (1, 1, 0), (1, −1, 0), and (0, 0, 1). So, a matrix P that diagonalizes A is

P = [1 1 0; 1 −1 0; 0 0 1] and P^{-1}AP = [1 0 0; 0 −1 0; 0 0 0].

The system represented by w′ = P^{-1}APw has solutions w1 = C1e^t, w2 = C2e^{−t}, and w3 = C3e^0 = C3. Substitute y = Pw and obtain

[y1; y2; y3] = [1 1 0; 1 −1 0; 0 0 1][w1; w2; w3] = [w1 + w2; w1 − w2; w3],

which yields the solution

y1 = C1e^t + C2e^{−t}
y2 = C1e^t − C2e^{−t}
y3 = C3.
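The claim in Exercise 59 that (24, 12, 1) gives a stable age distribution is just the eigenvector equation Av = 2v, which is easy to verify. A minimal sketch:

```python
def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

# age transition matrix from Exercise 59
A = [[0, 3, 12], [1, 0, 0], [0, 1/6, 0]]
v = [24, 12, 1]          # claimed stable distribution
Av = matvec(A, v)        # should equal 2*v = (48, 24, 2)
```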
67. The matrix of the quadratic form is

A = [a b/2; b/2 c] = [1 3/2; 3/2 1].

The eigenvalues are 5/2 and −1/2 with corresponding unit eigenvectors (1/√2, 1/√2) and (−1/√2, 1/√2), respectively. Then form the columns of P from the eigenvectors of A.

P = [1/√2 −1/√2; 1/√2 1/√2] and P^T AP = [5/2 0; 0 −1/2]

This implies that the equation of the rotated conic is (5/2)(x′)² − (1/2)(y′)² − 3 = 0.

(Graph of the rotated conic in the x′y′-coordinate system omitted.)

69. The matrix of the quadratic form is

A = [a b/2; b/2 c] = [0 1/2; 1/2 0].

The eigenvalues are 1/2 and −1/2, with corresponding unit eigenvectors (1/√2, 1/√2) and (−1/√2, 1/√2). Use these eigenvectors to form the columns of P.

P = [1/√2 −1/√2; 1/√2 1/√2] and P^T AP = [1/2 0; 0 −1/2]

This implies that the equation of the rotated conic is (1/2)(x′)² − (1/2)(y′)² − 2 = 0, a hyperbola.

(Graph of the rotated conic in the x′y′-coordinate system omitted.)
Cumulative Test for Chapters 6 and 7

1. T preserves addition.

T(x1, y1, z1) + T(x2, y2, z2) = (2x1, x1 + y1) + (2x2, x2 + y2)
= (2x1 + 2x2, x1 + y1 + x2 + y2)
= (2(x1 + x2), (x1 + x2) + (y1 + y2))
= T(x1 + x2, y1 + y2, z1 + z2)

T preserves scalar multiplication.

T(c(x, y, z)) = T(cx, cy, cz) = (2cx, cx + cy) = c(2x, x + y) = cT(x, y, z)
Therefore, T is a linear transformation.
2. No, T is not a linear transformation. For example,
T(2[1 0; 0 1]) = T([2 0; 0 2]) = |[2 0; 0 2] + [2 0; 0 2]| = |[4 0; 0 4]| = 16
while
2T([1 0; 0 1]) = 2|[1 0; 0 1] + [1 0; 0 1]| = 2|[2 0; 0 2]| = 2 · 4 = 8.

3. (a) T(1, −2) = [1 0; −1 0; 0 0][1; −2] = [1; −1; 0]
(b) [1 0; −1 0; 0 0][x; y] = [x; −x; 0] = [5; −5; 0] ⇒ x = 5, y = t
The preimage of (5, −5, 0) is (5, t), where t is any real number.

4. The kernel is the solution space of the homogeneous system
x1 − x2 = 0
−x1 + x2 = 0
x3 + x4 = 0.
[1 −1 0 0; −1 1 0 0; 0 0 1 1] ⇒ [1 −1 0 0; 0 0 0 0; 0 0 1 1] ⇒ x1 = x2, x3 = −x4
So, ker(T) = {(s, s, −t, t) : s, t ∈ R}.

5. A = [1 0 1 0; 0 −1 0 −1] ⇒ [1 0 1 0; 0 1 0 1]
(a) basis for kernel: {(0, −1, 0, 1), (1, 0, −1, 0)}
(b) basis for range (column space of A): {(1, 0), (0, 1)}
(c) rank = 2, nullity = 2

6. Because
T(1, 0, 0) = (1, 0, 1), T(0, 1, 0) = (1, 1, 0), and T(0, 0, 1) = (0, 1, −1),
the standard matrix for T is
A = [1 1 0; 0 1 1; 1 0 −1].
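The rule used in Problem 6 — the i-th column of the standard matrix is the image of the i-th standard basis vector — can be checked numerically. A minimal NumPy sketch (an illustration, not part of the text's solution):

```python
import numpy as np

# Images of the standard basis vectors, taken from the solution above.
images = [(1, 0, 1), (1, 1, 0), (0, 1, -1)]

# The standard matrix has T(e_i) as its i-th column.
A = np.column_stack(images)

# A sends each standard basis vector to its prescribed image.
assert np.array_equal(A @ np.array([1, 0, 0]), np.array([1, 0, 1]))
assert np.array_equal(A, np.array([[1, 1, 0], [0, 1, 1], [1, 0, -1]]))
```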
7. T(1, 0) = (1/2, −1/2) and T(0, 1) = (−1/2, 1/2), so
A = [1/2 −1/2; −1/2 1/2].
T(1, 1) = [1/2 −1/2; −1/2 1/2][1; 1] = [0; 0]
T(−2, 2) = [1/2 −1/2; −1/2 1/2][−2; 2] = [−2; 2]
8. The matrix of T is A = [1 −1; 2 1].
A⁻¹ = (1/3)[1 1; −2 1] ⇒ T⁻¹(x, y) = ((1/3)x + (1/3)y, −(2/3)x + (1/3)y)
(T⁻¹ ∘ T)(3, −2) = T⁻¹(T(3, −2)) = T⁻¹(5, 4) = (3, −2)
9. T(1, 1) = (1, 2, 2) = −1(1, 0, 0) + 0(1, 1, 0) + 2(1, 1, 1)
T(1, 0) = (0, 2, 1) = −2(1, 0, 0) + 1(1, 1, 0) + 1(1, 1, 1)
A = [−1 −2; 0 1; 2 1]
T(0, 1) = A[v]B = [−1 −2; 0 1; 2 1][1; −1] = [1; −1; 1] = 1(1, 0, 0) − 1(1, 1, 0) + 1(1, 1, 1) = (1, 0, 1)
10. (a) A = [1 −2; 1 4]
(b) [B : B′] ⇒ [I : P] ⇒ P = [1 1; 1 2]
(c) P⁻¹ = [2 −1; −1 1] ⇒ A′ = P⁻¹AP = [−7 −15; 6 12]
(d) [T(v)]B′ = A′[v]B′ = [−7 −15; 6 12][3; −2] = [9; −6]
(e) [v]B = P[v]B′ = [1 1; 1 2][3; −2] = [1; −1]
T(v) = P[T(v)]B′ = [1 1; 1 2][9; −6] = [3; −3]
[T(v)]B = A[v]B = [1 −2; 1 4][1; −1] = [3; −3]
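The coordinate bookkeeping in Problem 10 can be verified numerically; in this NumPy sketch the variable names (v_B, v_Bp, and so on) are ours:

```python
import numpy as np

A = np.array([[1, -2], [1, 4]])     # matrix of T relative to B
P = np.array([[1, 1], [1, 2]])      # transition matrix from B' to B
A_prime = np.linalg.inv(P) @ A @ P  # matrix of T relative to B'

v_Bp = np.array([3, -2])            # coordinates of v relative to B'
v_B = P @ v_Bp                      # coordinates of v relative to B

# T(v) computed in either basis must agree after converting coordinates.
assert np.allclose(P @ (A_prime @ v_Bp), A @ v_B)
assert np.allclose(A_prime, [[-7, -15], [6, 12]])
```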
11. The characteristic equation is
|λI − A| = |λ−1 −2 −1; 0 λ−3 −1; 0 3 λ+1| = (λ − 1)(λ² − 2λ) = λ(λ − 1)(λ − 2) = 0
which has eigenvalues λ1 = 1, λ2 = 0, and λ3 = 2.
For λ1 = 1: [0 −2 −1; 0 −2 −1; 0 3 2][x1; x2; x3] = [0; 0; 0]
The solution is {(t, 0, 0) : t ∈ R} and an eigenvector is (1, 0, 0).
For λ2 = 0: [−1 −2 −1; 0 −3 −1; 0 3 1][x1; x2; x3] = [0; 0; 0]
The solution is {(−t, −t, 3t) : t ∈ R} and an eigenvector is (−1, −1, 3).
For λ3 = 2: [1 −2 −1; 0 −1 −1; 0 3 3][x1; x2; x3] = [0; 0; 0]
The solution is {(t, t, −t) : t ∈ R} and an eigenvector is (1, 1, −1).

12. Because A is a triangular matrix, λ = 1 (repeated). For λ = 1:
[0 1 −1; 0 0 −2; 0 0 0][x1; x2; x3] = [0; 0; 0]
The solution is {(t, 0, 0) : t ∈ R} and an eigenvector is (1, 0, 0).

13. The eigenvalues of A are λ1 = 2 and λ2 = 4. The corresponding eigenvectors (3, 1) and (1, 1) are used to form the columns of P. So,
P = [3 1; 1 1] ⇒ P⁻¹ = [1/2 −1/2; −1/2 3/2]
and
P⁻¹AP = [1/2 −1/2; −1/2 3/2][1 3; −1 5][3 1; 1 1] = [2 0; 0 4].

14. The standard matrix for T is
A = [2 0 −2; 0 2 −2; 3 0 −3]
which has eigenvalues λ1 = 2, λ2 = 0, and λ3 = −1 and corresponding eigenvectors (0, 1, 0), (1, 1, 1), and (2, 2, 3).
So, B = {(0, 1, 0), (1, 1, 1), (2, 2, 3)} and
P = [0 1 2; 1 1 2; 0 1 3] ⇒ P⁻¹ = [−1 1 0; 3 0 −2; −1 0 1]
and
P⁻¹AP = [2 0 0; 0 0 0; 0 0 −1].

15. The eigenvalues of A are λ1 = −2 and λ2 = 4, with corresponding eigenvectors (1, −1) and (1, 1), respectively. Normalize each eigenvector to form the columns of P. Then
P = [1/√2 1/√2; −1/√2 1/√2]
and
PᵀAP = [1/√2 −1/√2; 1/√2 1/√2][1 3; 3 1][1/√2 1/√2; −1/√2 1/√2] = [−2 0; 0 4].

16. Eigenvalues and eigenvectors of A are
λ = 2, (1, 1, 1); λ = −1, (1, −1, 0) and (1, 0, −1).
Using the Gram-Schmidt orthonormalization process, you obtain
P = [1/√3 1/√2 1/√6; 1/√3 0 −2/√6; 1/√3 −1/√2 1/√6]
and
PᵀAP = [2 0 0; 0 −1 0; 0 0 −1].
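The orthogonal diagonalization in Problem 15 is easy to check numerically. A minimal NumPy sketch, using the matrix and normalized eigenvectors from the solution above:

```python
import numpy as np

A = np.array([[1.0, 3.0], [3.0, 1.0]])  # symmetric matrix from Problem 15
s = 1 / np.sqrt(2)
P = np.array([[s, s], [-s, s]])          # columns: normalized eigenvectors

# For an orthogonal P, the transpose is the inverse ...
assert np.allclose(P.T @ P, np.eye(2))
# ... and P^T A P is the diagonal matrix of eigenvalues.
D = P.T @ A @ P
assert np.allclose(D, np.diag([-2.0, 4.0]))
```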
17. The solution to the differential equation y′ = ky is y = Ce^(kt). So, y1 = C1e^(3t) and y2 = C2e^t.
18. Because a = 4, b = −8, and c = 4, the matrix is
A = [a b/2; b/2 c] = [4 −4; −4 4].
19. Construct the age transition matrix.
A = [3 6 3; 0.8 0 0; 0 0.4 0]
The current age distribution vector is x1 = [150; 150; 150].
In one year, the age distribution vector will be
x2 = Ax1 = [3 6 3; 0.8 0 0; 0 0.4 0][150; 150; 150] = [1800; 120; 60].
In two years, the age distribution vector will be
x3 = Ax2 = [3 6 3; 0.8 0 0; 0 0.4 0][1800; 120; 60] = [6300; 1440; 48].

20. λ is an eigenvalue of A if there exists a nonzero vector x such that Ax = λx; x is called an eigenvector of A. If A is n × n, A can have n eigenvalues, possibly complex and possibly repeated.

21. P is orthogonal if P⁻¹ = Pᵀ. Then
1 = det(P · P⁻¹) = det(PPᵀ) = (det P)² ⇒ det P = ±1.

22. There exists P such that P⁻¹AP = D. A and B are similar implies that there exists Q such that A = Q⁻¹BQ. Then
D = P⁻¹AP = P⁻¹(Q⁻¹BQ)P = (QP)⁻¹B(QP).

23. If 0 is an eigenvalue of A, then |A − 0I| = |A| = 0, and A is singular.

24. See proof of Theorem 7.9, page 452.

25. The range of T is nonempty because it contains 0. Let T(u) and T(v) be two vectors in the range of T. Then T(u) + T(v) = T(u + v). But because u and v are in V, it follows that u + v is also in V, which in turn implies that T(u + v) is in the range.
Similarly, let T(u) be in the range of T, and let c be a scalar. Then cT(u) = T(cu). But because cu is in V, this implies that T(cu) is in the range.

26. If T is one-to-one and v ∈ ker(T), then T(v) = T(0) = 0 ⇒ v = 0. Conversely, if ker(T) = {0} and T(v1) = T(v2), then
T(v1 − v2) = 0 ⇒ v1 − v2 ∈ ker(T) ⇒ v1 = v2.

27. If λ is an eigenvalue of A and A² = O, then Ax = λx, x ≠ 0, and
A²x = A(λx) = λ²x, so 0 = λ²x ⇒ λ² = 0 ⇒ λ = 0.
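The age-transition computation in Problem 19 is a straightforward matrix iteration, and can be sketched in NumPy (values taken from the solution above):

```python
import numpy as np

# Age-transition matrix and initial distribution from Problem 19.
A = np.array([[3.0, 6.0, 3.0],
              [0.8, 0.0, 0.0],
              [0.0, 0.4, 0.0]])
x1 = np.array([150.0, 150.0, 150.0])

x2 = A @ x1   # age distribution after one year
x3 = A @ x2   # age distribution after two years
assert np.allclose(x2, [1800, 120, 60])
assert np.allclose(x3, [6300, 1440, 48])
```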
C H A P T E R 7  Eigenvalues and Eigenvectors
Section 7.1  Eigenvalues and Eigenvectors ......... 224
Section 7.2  Diagonalization ......... 232
Section 7.3  Symmetric Matrices and Orthogonal Diagonalization ......... 236
Section 7.4  Applications of Eigenvalues and Eigenvectors ......... 240
Review Exercises ......... 245
Project Solutions ......... 251
C H A P T E R 7  Eigenvalues and Eigenvectors

Section 7.1  Eigenvalues and Eigenvectors

2. Ax1 = [4 −5; 2 −3][1; 1] = [−1; −1] = −1[1; 1] = λ1x1
Ax2 = [4 −5; 2 −3][5; 2] = [10; 4] = 2[5; 2] = λ2x2

4. Ax1 = [−2 4; 1 1][1; 1] = [2; 2] = 2[1; 1] = λ1x1
Ax2 = [−2 4; 1 1][−4; 1] = [12; −3] = −3[−4; 1] = λ2x2

6. Ax1 = [−2 2 −3; 2 1 −6; −1 −2 0][1; 2; −1] = [5; 10; −5] = 5[1; 2; −1] = λ1x1
Ax2 = [−2 2 −3; 2 1 −6; −1 −2 0][−2; 1; 0] = [6; −3; 0] = −3[−2; 1; 0] = λ2x2
Ax3 = [−2 2 −3; 2 1 −6; −1 −2 0][3; 0; 1] = [−9; 0; −3] = −3[3; 0; 1] = λ3x3

8. Ax1 = [4 −1 3; 0 2 1; 0 0 3][1; 0; 0] = [4; 0; 0] = 4[1; 0; 0] = λ1x1
Ax2 = [4 −1 3; 0 2 1; 0 0 3][1; 2; 0] = [2; 4; 0] = 2[1; 2; 0] = λ2x2
Ax3 = [4 −1 3; 0 2 1; 0 0 3][−2; 1; 1] = [−6; 3; 3] = 3[−2; 1; 1] = λ3x3

10. (a) A(cx1) = [2 3 1; 0 −1 2; 0 0 3][c; 0; 0] = [2c; 0; 0] = 2(cx1)
(b) A(cx2) = [2 3 1; 0 −1 2; 0 0 3][c; −c; 0] = [−c; c; 0] = −1(cx2)
(c) A(cx3) = [2 3 1; 0 −1 2; 0 0 3][5c; c; 2c] = [15c; 3c; 6c] = 3(cx3)

12. (a) Because Ax = [−3 10; 5 2][4; 4] = [28; 28] = 7[4; 4], x is an eigenvector of A (with corresponding eigenvalue 7).
(b) Because Ax = [−3 10; 5 2][−8; 4] = [64; −32] = −8[−8; 4], x is an eigenvector of A (with corresponding eigenvalue −8).
(c) Because Ax = [−3 10; 5 2][−4; 8] = [92; −4] ≠ λ[−4; 8], x is not an eigenvector of A.
(d) Because Ax = [−3 10; 5 2][5; −3] = [−45; 19] ≠ λ[5; −3], x is not an eigenvector of A.
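Verifications like those in Exercises 2–12 — checking that Ax = λx for a claimed pair — are mechanical, and a small NumPy sketch makes the pattern explicit (using the matrix from Exercise 2):

```python
import numpy as np

A = np.array([[4, -5], [2, -3]])  # matrix from Exercise 2

# Each claimed (eigenvalue, eigenvector) pair must satisfy A x = lambda x.
pairs = [(-1, np.array([1, 1])), (2, np.array([5, 2]))]
for lam, x in pairs:
    assert np.array_equal(A @ x, lam * x)
```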
14. (a) Because Ax = [1 0 5; 0 −2 4; 1 −2 9][1; 1; 0] = [1; −2; −1] ≠ λ[1; 1; 0], x is not an eigenvector of A.
(b) Because Ax = [1 0 5; 0 −2 4; 1 −2 9][−5; 2; 1] = [0; 0; 0] = 0[−5; 2; 1], x is an eigenvector (with corresponding eigenvalue 0).
(c) The zero vector is never an eigenvector.
(d) Because Ax = [1 0 5; 0 −2 4; 1 −2 9][2√6 − 3; −2√6 + 6; 3] = [12 + 2√6; 4√6; 6√6 + 12] = (4 + 2√6)[2√6 − 3; −2√6 + 6; 3], x is an eigenvector of A (with corresponding eigenvalue 4 + 2√6).
x is an eigenvector of A ( with corresponding eigenvalue 4 + 2 6 . 16. (a) The characteristic equation is
λI − A =
λ −1
4
2
λ −8
= λ 2 − 9λ = λ (λ − 9) = 0.
(b) The eigenvalues are λ 1 = 0 and λ 2 = 9.
4 ⎡λ 1 − 1 For λ1 = 0, ⎢ 2 λ 1 − ⎣ The solution is
⎤ ⎡ x1 ⎤ ⎡0⎤ ⎥⎢ ⎥ = ⎢ ⎥ 8⎦ ⎣ x2 ⎦ ⎣0⎦
{(4t , t ) : t ∈ R}. So, an eigenvector corresponding to λ 1
4 ⎡λ 2 − 1 For λ2 = 9, ⎢ λ2 − ⎣ 2 The solution is
⎡ 1 −4⎤ ⎡ x1⎤ ⎡0⎤ ⎢ ⎥ ⎢ ⎥ = ⎢ ⎥. ⎣0 0⎦ ⎣ x2 ⎦ ⎣0⎦
⇒
⎤ ⎡ x1 ⎤ ⎡0⎤ ⎥⎢ ⎥ = ⎢ ⎥ ⇒ 8⎦ ⎣ x2 ⎦ ⎣0⎦
= 0 is ( 4, 1).
⎡2 1⎤ ⎡ x1⎤ ⎡0⎤ ⎢ ⎥⎢ ⎥ = ⎢ ⎥ ⎣0 0⎦ ⎣ x2 ⎦ ⎣0⎦
{(−t , 2t ) : t ∈ R}. So, an eigenvector corresponding to λ 2
= 9 is ( −1, 2).
18. (a) The characteristic equation is
|λI − A| = |λ−1/4 −1/4; −1/2 λ| = λ² − (1/4)λ − 1/8 = (λ − 1/2)(λ + 1/4) = 0.
(b) The eigenvalues are λ1 = 1/2 and λ2 = −1/4.
For λ1 = 1/2: [λ1−1/4 −1/4; −1/2 λ1][x1; x2] = [0; 0] ⇒ [1 −1; 0 0][x1; x2] = [0; 0].
The solution is {(t, t) : t ∈ R}. So, an eigenvector corresponding to λ1 = 1/2 is (1, 1).
For λ2 = −1/4: [λ2−1/4 −1/4; −1/2 λ2][x1; x2] = [0; 0] ⇒ [1 1/2; 0 0][x1; x2] = [0; 0].
The solution is {(t, −2t) : t ∈ R}. So, an eigenvector corresponding to λ2 = −1/4 is (1, −2).
20. (a) The characteristic equation is
|λI − A| = |λ+5 0 0; −3 λ−7 0; −4 2 λ−3| = (λ + 5)(λ − 7)(λ − 3) = 0.
(b) The eigenvalues are λ1 = −5, λ2 = 7, and λ3 = 3.
For λ1 = −5: [λ1+5 0 0; −3 λ1−7 0; −4 2 λ1−3]x = 0 ⇒ [9 0 16; 0 9 −4; 0 0 0]x = 0.
The solution is {(−16t, 4t, 9t) : t ∈ R}. So, an eigenvector corresponding to λ1 = −5 is (−16, 4, 9).
For λ2 = 7: [λ2+5 0 0; −3 λ2−7 0; −4 2 λ2−3]x = 0 ⇒ [1 0 0; 0 1 2; 0 0 0]x = 0.
The solution is {(0, −2t, t) : t ∈ R}. So, an eigenvector corresponding to λ2 = 7 is (0, −2, 1).
For λ3 = 3: [λ3+5 0 0; −3 λ3−7 0; −4 2 λ3−3]x = 0 ⇒ [1 0 0; 0 1 0; 0 0 0]x = 0.
The solution is {(0, 0, t) : t ∈ R}. So, an eigenvector corresponding to λ3 = 3 is (0, 0, 1).
22. (a) The characteristic equation is
|λI − A| = |λ−3 −2 −1; 0 λ −2; 0 −2 λ| = (λ − 3)(λ² − 4) = 0.
(b) The eigenvalues are λ1 = −2, λ2 = 2, and λ3 = 3.
For λ1 = −2: [−5 −2 −1; 0 −2 −2; 0 −2 −2]x = 0.
The solution is {(t, −5t, 5t) : t ∈ R}. So, an eigenvector corresponding to λ1 = −2 is (1, −5, 5).
For λ2 = 2: [−1 −2 −1; 0 2 −2; 0 −2 2]x = 0.
The solution is {(−3t, t, t) : t ∈ R}. So, an eigenvector corresponding to λ2 = 2 is (−3, 1, 1).
For λ3 = 3: [0 −2 −1; 0 3 −2; 0 −2 3]x = 0.
The solution is {(t, 0, 0) : t ∈ R}. So, an eigenvector corresponding to λ3 = 3 is (1, 0, 0).
24. (a) The characteristic equation is
|λI − A| = |λ−3 −2 3; 3 λ+4 −9; 1 2 λ−5| = λ³ − 4λ² + 4λ = λ(λ − 2)² = 0.
(b) The eigenvalues are λ1 = 0 and λ2 = 2 (repeated).
For λ1 = 0: [λ1−3 −2 3; 3 λ1+4 −9; 1 2 λ1−5]x = 0 ⇒ [1 0 1; 0 1 −3; 0 0 0]x = 0.
The solution is {(−t, 3t, t) : t ∈ R}. So, an eigenvector corresponding to λ1 = 0 is (−1, 3, 1).
For λ2 = 2: [λ2−3 −2 3; 3 λ2+4 −9; 1 2 λ2−5]x = 0 ⇒ [1 2 −3; 0 0 0; 0 0 0]x = 0.
The solution is {(−2s + 3t, s, t) : s, t ∈ R}. So, two linearly independent eigenvectors corresponding to λ2 = 2 are (−2, 1, 0) and (3, 0, 1).
26. (a) The characteristic equation is
|λI − A| = |λ−1 3/2 −5/2; 2 λ−13/2 10; −3/2 9/2 λ−8| = (λ − 29/2)(λ − 1/2)² = 0.
(b) The eigenvalues are λ1 = 29/2 and λ2 = 1/2 (repeated).
For λ1 = 29/2: ⇒ [3 0 −1; 0 3 4; 0 0 0]x = 0.
The solution is {(t, −4t, 3t) : t ∈ R}. So, an eigenvector corresponding to λ1 = 29/2 is (1, −4, 3).
For λ2 = 1/2: ⇒ [1 −3 5; 0 0 0; 0 0 0]x = 0.
The solution is {(3s − 5t, s, t) : s, t ∈ R}. So, two eigenvectors corresponding to λ2 = 1/2 are (3, 1, 0) and (−5, 0, 1).
28. (a) The characteristic equation is
|λI − A| = |λ−3 0 0 0; −4 λ−1 0 0; 0 0 λ−2 −1; 0 0 0 λ−2| = (λ − 3)(λ − 1)(λ − 2)² = 0.
(b) The eigenvalues are λ1 = 3, λ2 = 1, and λ3 = 2 (repeated).
For λ1 = 3: [0 0 0 0; −4 2 0 0; 0 0 1 −1; 0 0 0 1]x = 0 ⇒ [1 −1/2 0 0; 0 0 1 0; 0 0 0 1; 0 0 0 0]x = 0.
The solution is {((1/2)t, t, 0, 0) : t ∈ R}. So, an eigenvector corresponding to λ1 = 3 is (1, 2, 0, 0).
For λ2 = 1: [−2 0 0 0; −4 0 0 0; 0 0 −1 −1; 0 0 0 −1]x = 0 ⇒ [1 0 0 0; 0 0 1 0; 0 0 0 1; 0 0 0 0]x = 0.
The solution is {(0, t, 0, 0) : t ∈ R}. So, an eigenvector corresponding to λ2 = 1 is (0, 1, 0, 0).
For λ3 = 2: [−1 0 0 0; −4 1 0 0; 0 0 0 −1; 0 0 0 0]x = 0 ⇒ [1 0 0 0; 0 1 0 0; 0 0 0 1; 0 0 0 0]x = 0.
The solution is {(0, 0, t, 0) : t ∈ R}. So, an eigenvector corresponding to λ3 = 2 is (0, 0, 1, 0).
30. Using a graphing utility: λ = −7, 3

32. Using a graphing utility: λ = −7, 0

34. Using a graphing utility: λ = 1, 1, 5/2

36. Using a graphing utility: λ = 0, 1, 2

38. Using a graphing utility: λ = 0, 1, 1, 4

40. Using a graphing utility: λ = 0, 0, 3, 5

42. The characteristic equation is
|λI − A| = |λ−6 1; −1 λ−5| = λ² − 11λ + 31 = 0.
Because
A² − 11A + 31I = [6 −1; 1 5]² − 11[6 −1; 1 5] + 31[1 0; 0 1]
= [35 −11; 11 24] − [66 −11; 11 55] + [31 0; 0 31] = [0 0; 0 0]
the theorem holds for this matrix.

44. The characteristic equation is
|λI − A| = |λ−4 −1; 2 λ−1| = (λ − 1)(λ − 4) + 2 = λ² − 5λ + 6 = 0.
Because
A² − 5A + 6I = [4 1; −2 1]² − 5[4 1; −2 1] + 6[1 0; 0 1]
= [14 5; −10 −1] − [20 5; −10 5] + [6 0; 0 6] = [0 0; 0 0]
the theorem holds for this matrix.
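Exercises 42–48 verify the Cayley-Hamilton theorem by direct substitution; the same check is a one-liner numerically. A sketch using the matrix from Exercise 42:

```python
import numpy as np

A = np.array([[6, -1], [1, 5]])  # matrix from Exercise 42

# Cayley-Hamilton: A satisfies its own characteristic equation
# lambda^2 - 11*lambda + 31 = 0.
residual = A @ A - 11 * A + 31 * np.eye(2)
assert np.allclose(residual, np.zeros((2, 2)))
```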
46. The characteristic equation is
|λI − A| = |λ−3 −1 −4; −2 λ−4 0; −5 −5 λ−6| = λ³ − 13λ² + 32λ − 20 = 0.
Because
A³ − 13A² + 32A − 20I = [3 1 4; 2 4 0; 5 5 6]³ − 13[3 1 4; 2 4 0; 5 5 6]² + 32[3 1 4; 2 4 0; 5 5 6] − 20[1 0 0; 0 1 0; 0 0 1]
= [327 319 340; 118 126 104; 555 555 556] − [403 351 468; 182 234 104; 715 715 728] + [96 32 128; 64 128 0; 160 160 192] − [20 0 0; 0 20 0; 0 0 20] = [0 0 0; 0 0 0; 0 0 0]
the theorem holds for this matrix.

48. The characteristic equation is
|λI − A| = |λ+3 −1 0; 1 λ−3 −2; 0 −4 λ−3| = λ³ − 3λ² − 16λ = 0.
Because
A³ − 3A² − 16A = [−3 1 0; −1 3 2; 0 4 3]³ − 3[−3 1 0; −1 3 2; 0 4 3]² − 16[−3 1 0; −1 3 2; 0 4 3]
= [−24 16 6; −16 96 68; −12 136 99] − 3[8 0 2; 0 16 12; −4 24 17] − 16[−3 1 0; −1 3 2; 0 4 3] = [0 0 0; 0 0 0; 0 0 0]
the theorem holds for this matrix.
n
∑aii . i =1
Exercise 16: λ1 = 0, λ2 = 9 (a)
2
∑λi
= 9 =
i =1
(b)
A =
2
∑aii i =1
1 −4 −2
Exercise 18: λ1 = (a)
2
∑λi
=
i =1
(b)
A =
1 4 1 2
1 4
= 0 = 0 ⋅ 9 = λ1 ⋅ λ2
8
=
1, 2
λ2 = − 14
2
∑aii i =1
1 4
0
= − 18 =
1 2
(− 14 ) = λ
1
⋅ λ2
Exercise 20: λ1 = −5, λ2 = 7, λ3 = 3
(a) Σ λi = 5 = Σ aii
(b) |A| = |−5 0 0; 3 7 0; 4 −2 3| = −105 = (−5)(7)(3) = λ1 · λ2 · λ3

Exercise 22: λ1 = −2, λ2 = 2, λ3 = 3
(a) Σ λi = 3 = Σ aii
(b) |A| = |3 2 1; 0 0 2; 0 2 0| = −12 = (−2)(2)(3) = λ1 · λ2 · λ3

Exercise 24: λ1 = 0, λ2 = 2, λ3 = 2
(a) Σ λi = 4 = Σ aii
(b) |A| = |3 2 −3; −3 −4 9; −1 −2 5| = 0 = 0 · 2 · 2 = λ1 · λ2 · λ3

Exercise 26: λ1 = 29/2, λ2 = 1/2, λ3 = 1/2
(a) Σ λi = 31/2 = Σ aii
(b) |A| = |1 −3/2 5/2; −2 13/2 −10; 3/2 −9/2 8| = 29/8 = (29/2)(1/2)(1/2) = λ1 · λ2 · λ3

Exercise 28: λ1 = 3, λ2 = 1, λ3 = 2, λ4 = 2
(a) Σ λi = 8 = Σ aii
(b) |A| = |3 0 0 0; 4 1 0 0; 0 0 2 1; 0 0 0 2| = 12 = 3 · 1 · 2 · 2 = λ1 · λ2 · λ3 · λ4

52. λ = 0 is an eigenvalue of A ⇔ |0I − A| = 0 ⇔ |A| = 0.

54. Observe that |λI − Aᵀ| = |(λI − A)ᵀ| = |λI − A|. Because the characteristic equations of A and Aᵀ are the same, A and Aᵀ must have the same eigenvalues. However, the eigenspaces need not be the same.
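The pattern in Exercise 50 — the trace equals the sum of the eigenvalues and the determinant equals their product — can be confirmed numerically. A sketch using the matrix from Exercise 20:

```python
import numpy as np

# Matrix from Exercise 20 (eigenvalues -5, 7, 3).
A = np.array([[-5.0, 0.0, 0.0],
              [3.0, 7.0, 0.0],
              [4.0, -2.0, 3.0]])
eigenvalues = np.linalg.eigvals(A)

assert np.isclose(np.trace(A), eigenvalues.sum())        # trace = sum
assert np.isclose(np.linalg.det(A), eigenvalues.prod())  # det = product
assert np.isclose(np.linalg.det(A), -105.0)
```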
56. Let u = (u1, u2) be the fixed vector in R², and v = (v1, v2). Then
proj_u v = ((u1v1 + u2v2)/(u1² + u2²))(u1, u2).
Because T(1, 0) = (u1/(u1² + u2²))(u1, u2) and T(0, 1) = (u2/(u1² + u2²))(u1, u2), the standard matrix A of T is
A = (1/(u1² + u2²))[u1² u1u2; u1u2 u2²].
Now,
Au = (1/(u1² + u2²))[u1³ + u1u2²; u1²u2 + u2³] = ((u1² + u2²)/(u1² + u2²))[u1; u2] = 1u
and
A[u2; −u1] = (1/(u1² + u2²))[u1²u2 − u1²u2; u1u2² − u1u2²] = 0[u2; −u1].
So, λ1 = 1 and λ2 = 0 are the eigenvalues of A.

58. Let A² = O and consider Ax = λx. Then 0 = A²x = A(λx) = λAx = λ²x, which implies λ = 0.

60. The characteristic equation of A is |λI − A| = |λ −1; 1 λ| = λ² + 1 = 0, which has no real solution.

62. (a) True. By definition, Ax = λx, and λx is parallel to x for any real number λ.
(b) False. Let A = [1 0; 0 2]. Then A has two distinct eigenvalues, 1 and 2.
(c) False. The set of eigenvectors corresponding to λ, together with the zero vector (which is never an eigenvector for any eigenvalue), forms a subspace of Rⁿ (Theorem 7.1 on page 424).

64. Substituting the value λ = 3 yields the system
[λ−3 −1 0; 0 λ−3 0; 0 0 λ−3]x = 0 ⇒ [0 1 0; 0 0 0; 0 0 0]x = 0.
So, 3 has two linearly independent eigenvectors and the dimension of the eigenspace is 2.
66. Substituting the value λ = 3 yields the system
[λ−3 −1 −1; 0 λ−3 −1; 0 0 λ−3]x = 0 ⇒ [0 1 0; 0 0 1; 0 0 0]x = 0.
So, 3 has one linearly independent eigenvector, and the dimension of the eigenspace is 1.

68. Because T(e^(−2x)) = d/dx[e^(−2x)] = −2e^(−2x), the eigenvalue corresponding to f(x) = e^(−2x) is −2.
70. The standard matrix for T is
A = [2 1 −1; 0 −1 2; 0 0 −1].
The characteristic equation of A is
|λI − A| = |λ−2 −1 1; 0 λ+1 −2; 0 0 λ+1| = (λ − 2)(λ + 1)².
The eigenvalues are λ1 = 2 and λ2 = −1 (repeated). The corresponding eigenvectors are found by solving
[λi−2 −1 1; 0 λi+1 −2; 0 0 λi+1][a0; a1; a2] = [0; 0; 0]
for each λi. So, p1(x) = 1 corresponds to λ1 = 2, and p2(x) = 1 − 3x corresponds to λ2 = −1.

72. The possible eigenvalues of an idempotent matrix are 0 and 1. Suppose Ax = λx, where A² = A. Then
λx = Ax = A²x = A(Ax) = A(λx) = λ²x ⇒ (λ² − λ)x = 0.
Because x ≠ 0, λ² − λ = 0 ⇒ λ = 0, 1.
74. The characteristic equation of A is
|λ−cos θ  sin θ; −sin θ  λ−cos θ| = λ² − 2(cos θ)λ + (cos²θ + sin²θ) = λ² − 2(cos θ)λ + 1.
There are real eigenvalues if the discriminant of this quadratic equation in λ is nonnegative:
b² − 4ac = 4cos²θ − 4 = 4(cos²θ − 1) ≥ 0 ⇒ cos²θ = 1 ⇒ θ = 0, π.
The only rotations that send vectors to multiples of themselves are the identity (θ = 0) and the 180° rotation (θ = π).
Section 7.2  Diagonalization

2. P⁻¹ = [1/2 −1/2; −1/2 3/2] and
P⁻¹AP = [1/2 −1/2; −1/2 3/2][1 3; −1 5][3 1; 1 1] = [2 0; 0 4]

4. P⁻¹ = [−2/3 5/3; 1/3 −1/3] and
P⁻¹AP = [−2/3 5/3; 1/3 −1/3][4 −5; 2 −3][1 5; 1 2] = [−1 0; 0 2]
6. P⁻¹ = [1 1 −3; 0 −1 1/2; 0 0 1/2] and
P⁻¹AP = [1 1 −3; 0 −1 1/2; 0 0 1/2][2 3 1; 0 −1 2; 0 0 3][1 1 5; 0 −1 1; 0 0 2] = [2 0 0; 0 −1 0; 0 0 3]

8. P⁻¹ = [0.25 0.25 0.25 0.25; −0.25 −0.25 0.25 0.25; 0 0 0.5 −0.5; 0.5 −0.5 0 0] and
P⁻¹AP = P⁻¹[0.80 0.10 0.05 0.05; 0.10 0.80 0.05 0.05; 0.05 0.05 0.80 0.10; 0.05 0.05 0.10 0.80][1 −1 0 1; 1 −1 0 −1; 1 1 1 0; 1 1 −1 0] = [1 0 0 0; 0 0.8 0 0; 0 0 0.7 0; 0 0 0 0.7]
10. The matrix A has only one eigenvalue, λ = 0, and a basis for the eigenspace is {(1, −2)}. So, A does not satisfy Theorem 7.5 and is not diagonalizable.

12. A is triangular, so the eigenvalues are simply the entries on the main diagonal. So, the only eigenvalue is λ = 1, and a basis for the eigenspace is {(0, 1)}. Because A does not have two linearly independent eigenvectors, it does not satisfy Theorem 7.5 and it is not diagonalizable.

14. The characteristic equation of A is (λ − 2)(λ + 1)² = 0. For the eigenvalue λ1 = 2 you find the eigenvector (1, 0, 0). The eigenvector corresponding to λ2 = −1 (repeated) is (1, −3, 0). So, A has only two linearly independent eigenvectors, does not satisfy Theorem 7.5, and is not diagonalizable.

16. From Exercise 38, Section 7.1, you know that A has only three linearly independent eigenvectors. So, A does not satisfy Theorem 7.5 and is not diagonalizable.

18. The eigenvalue of A is λ = 2 (repeated). Because A does not have two distinct eigenvalues, Theorem 7.6 does not guarantee that A is diagonalizable.

20. The eigenvalues of A are λ1 = 4, λ2 = 1, λ3 = −2. Because A has three distinct eigenvalues, it is diagonalizable by Theorem 7.6.

22. The eigenvalues of A are λ1 = 0 and λ2 = 9. From Exercise 16, Section 7.1, the corresponding eigenvectors (4, 1) and (−1, 2) are used to form the columns of P. So,
P = [4 −1; 1 2] ⇒ P⁻¹ = [2/9 1/9; −1/9 4/9]
and
P⁻¹AP = [2/9 1/9; −1/9 4/9][1 −4; −2 8][4 −1; 1 2] = [0 0; 0 9].

24. The eigenvalues of A are λ1 = 1/2 and λ2 = −1/4. From Exercise 18, Section 7.1, the corresponding eigenvectors (1, 1) and (1, −2) are used to form the columns of P. So,
P = [1 1; 1 −2] ⇒ P⁻¹ = [2/3 1/3; 1/3 −1/3]
and
P⁻¹AP = [2/3 1/3; 1/3 −1/3][1/4 1/4; 1/2 0][1 1; 1 −2] = [1/2 0; 0 −1/4].

26. The eigenvalues of A are λ1 = −2, λ2 = 2, λ3 = 3. From Exercise 22, Section 7.1, the corresponding eigenvectors (1, −5, 5), (−3, 1, 1), and (1, 0, 0) are used to form the columns of P. So,
P = [1 −3 1; −5 1 0; 5 1 0] ⇒ P⁻¹ = [0 −0.1 0.1; 0 0.5 0.5; 1 1.6 1.4]
and
P⁻¹AP = [0 −0.1 0.1; 0 0.5 0.5; 1 1.6 1.4][3 2 1; 0 0 2; 0 2 0][1 −3 1; −5 1 0; 5 1 0] = [−2 0 0; 0 2 0; 0 0 3].
28. The eigenvalues of A are λ1 = 0 and λ2 = 2 (repeated). From Exercise 24, Section 7.1, the corresponding eigenvectors (−1, 3, 1), (3, 0, 1), and (−2, 1, 0) are used to form the columns of P. So,
P = [−1 3 −2; 3 0 1; 1 1 0] ⇒ P⁻¹ = [1/2 1 −3/2; −1/2 −1 5/2; −3/2 −2 9/2]
and
P⁻¹AP = [1/2 1 −3/2; −1/2 −1 5/2; −3/2 −2 9/2][3 2 −3; −3 −4 9; −1 −2 5][−1 3 −2; 3 0 1; 1 1 0] = [0 0 0; 0 2 0; 0 0 2].

30. The eigenvalues of A are λ1 = 29/2 and λ2 = 1/2 (repeated). From Exercise 26, Section 7.1, the corresponding eigenvectors (1, −4, 3), (−5, 0, 1), and (3, 1, 0) are used to form the columns of P. So,
P = [1 −5 3; −4 0 1; 3 1 0] ⇒ P⁻¹ = (1/28)[1 −3 5; −3 9 13; 4 16 20]
and
P⁻¹AP = (1/28)[1 −3 5; −3 9 13; 4 16 20][1 −3/2 5/2; −2 13/2 −10; 3/2 −9/2 8][1 −5 3; −4 0 1; 3 1 0] = [29/2 0 0; 0 1/2 0; 0 0 1/2].

32. The eigenvalues of A are λ1 = 4 and λ2 = 2 (repeated). Furthermore, there are just two linearly independent eigenvectors of A, x1 = (1, 1, 1) and x2 = (0, 0, 1). So, A is not diagonalizable.

34. The eigenvalues of A are λ1 = 0 and λ2 = 1 (three times). Furthermore, there are just three linearly independent eigenvectors: x1 = (0, −1, 0, 1) (for λ1 = 0), and x2 = (−1, 0, 1, 0) and x3 = (0, 0, 0, 1) (for λ2 = 1). So, A is not diagonalizable.

36. The standard matrix for T is
A = [−2 2 −3; 2 1 −6; −1 −2 0]
which has eigenvalues λ1 = 5 and λ2 = −3 (repeated), with corresponding eigenvectors (1, 2, −1), (3, 0, 1), and (−2, 1, 0). Let B = {(1, 2, −1), (3, 0, 1), (−2, 1, 0)}; the matrix of T relative to this basis is
A′ = [5 0 0; 0 −3 0; 0 0 −3].

38. The standard matrix for T is
A = [2 0 1; 0 3 4; 0 0 1]
which has eigenvalues λ1 = 2, λ2 = 3, and λ3 = 1, with corresponding eigenvectors (1, 0, 0), (0, 1, 0), and (−1, −2, 1). Let B = {1, x, −1 − 2x + x²}; the matrix of T relative to this basis is
A′ = [2 0 0; 0 3 0; 0 0 1].

40. Let P be the matrix of eigenvectors corresponding to the n distinct eigenvalues λ1, …, λn. Then P⁻¹AP = D is a diagonal matrix ⇒ A = PDP⁻¹. From Exercise 39, Aᵏ = PDᵏP⁻¹, which shows that the eigenvalues of Aᵏ are λ1ᵏ, λ2ᵏ, …, λnᵏ.
42. The eigenvalues and corresponding eigenvectors of A are λ1 = 3, λ2 = −2, x1 = (3, 2), and x2 = (−1, 1). Construct a nonsingular matrix P from the eigenvectors of A,
P = [3 −1; 2 1]
and find a diagonal matrix B similar to A:
B = P⁻¹AP = [1/5 1/5; −2/5 3/5][1 3; 2 0][3 −1; 2 1] = [3 0; 0 −2].
Then,
A⁷ = PB⁷P⁻¹ = [3 −1; 2 1][3⁷ 0; 0 (−2)⁷][1/5 1/5; −2/5 3/5] = [1261 1389; 926 798].

44. The eigenvalues and corresponding eigenvectors of A are λ1 = −1, λ2 = 0, λ3 = 2, x1 = (2, 2, 3), x2 = (1, 1, 1), and x3 = (0, 1, 0). Construct a nonsingular matrix P from the eigenvectors of A,
P = [2 1 0; 2 1 1; 3 1 0]
and find a diagonal matrix B similar to A:
B = P⁻¹AP = [−1 0 1; 3 0 −2; −1 1 0][2 0 −2; 0 2 −2; 3 0 −3][2 1 0; 2 1 1; 3 1 0] = [−1 0 0; 0 0 0; 0 0 2].
Then,
A⁵ = PB⁵P⁻¹ = P[−1 0 0; 0 0 0; 0 0 32]P⁻¹ = [2 0 −2; −30 32 −2; 3 0 −3].
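The technique in Exercises 42 and 44 — computing Aᵏ as PBᵏP⁻¹ with B diagonal — can be reproduced in NumPy and checked against a direct matrix power. A sketch using the data from Exercise 42:

```python
import numpy as np

A = np.array([[1, 3], [2, 0]])    # matrix from Exercise 42
P = np.array([[3, -1], [2, 1]])   # columns: eigenvectors for 3 and -2
B = np.linalg.inv(P) @ A @ P      # numerically diag(3, -2)

# A^7 = P B^7 P^{-1}; powering the diagonal entries is cheap.
B7 = np.diag(np.diag(B) ** 7)
A7 = P @ B7 @ np.linalg.inv(P)
assert np.allclose(A7, [[1261, 1389], [926, 798]])
assert np.allclose(A7, np.linalg.matrix_power(A, 7))
```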
46. (a) True. See Theorem 7.5 on page 437.
(b) False. The matrix [2 0; 0 2] is diagonalizable (it is already diagonal), but it has only one eigenvalue, λ = 2 (repeated).

48. Yes, the matrices are similar. Let P = [0 0 1; 0 1 0; 1 0 0] ⇒ P⁻¹ = [0 0 1; 0 1 0; 1 0 0], and observe that
P⁻¹AP = [0 0 1; 0 1 0; 1 0 0][1 0 0; 0 2 0; 0 0 3][0 0 1; 0 1 0; 1 0 0] = [3 0 0; 0 2 0; 0 0 1] = B.
50. Consider the characteristic equation
|λI − A| = |λ−a −b; −c λ−d| = λ² − (a + d)λ + (ad − bc) = 0.
This equation has real and unequal roots if and only if (a + d)² − 4(ad − bc) > 0, which is equivalent to (a − d)² > −4bc. So, A is diagonalizable if −4bc < (a − d)², and not diagonalizable if −4bc > (a − d)².
52. (a) X = [1 0; 0 1] ⇒ e^X = I + I + I/2! + I/3! + ⋯ = [1 + 1 + 1/2! + 1/3! + ⋯  0; 0  1 + 1 + 1/2! + 1/3! + ⋯] = [e 0; 0 e]
(b) X = [0 0; 0 0] ⇒ e^X = I + [0 0; 0 0] + (1/2!)[0 0; 0 0] + ⋯ = I
(c) X = [1 0; 1 0] ⇒ e^X = I + [1 0; 1 0] + (1/2!)[1 0; 1 0] + (1/3!)[1 0; 1 0] + ⋯ = [e 0; e − 1 1]
(d) X = [0 1; 1 0] ⇒ e^X = I + [0 1; 1 0] + (1/2!)[1 0; 0 1] + (1/3!)[0 1; 1 0] + (1/4!)[1 0; 0 1] + ⋯
Because e = 1 + 1 + 1/2! + 1/3! + ⋯ and e⁻¹ = 1 − 1 + 1/2! − 1/3! + 1/4! − ⋯, you see that
e^X = (1/2)[e + e⁻¹  e − e⁻¹; e − e⁻¹  e + e⁻¹].
(e) X = [2 0; 0 −2] ⇒ e^X = I + [2 0; 0 −2] + (1/2!)[2² 0; 0 (−2)²] + (1/3!)[2³ 0; 0 (−2)³] + ⋯ = [e² 0; 0 e⁻²]
54. From Exercise 73, Section 7.1, you know that zero is the only eigenvalue of the nilpotent matrix A. If A were diagonalizable, then there would exist an invertible matrix P such that P⁻¹AP = D, where D is the zero matrix. So, A = PDP⁻¹ = O, which is impossible.

56. A is triangular, so the eigenvalues are simply the entries on the main diagonal. So, the only eigenvalue is λ = 3, and a basis for the eigenspace is {(1, 0)}. Because the matrix A does not have two linearly independent eigenvectors, it does not satisfy Theorem 7.5 and it is not diagonalizable.
Section 7.3 Symmetric Matrices and Orthogonal Diagonalization

2. Because
   A^T = [6 −2; −2 1]^T = [6 −2; −2 1] = A
   the matrix is symmetric.

4. Because
   [1 −5 4; −5 3 6; −4 6 2]^T = [1 −5 −4; −5 3 6; 4 6 2] ≠ A
   the matrix is not symmetric.

6. Because
   [2 0 3 5; 0 11 0 −2; 3 0 5 0; 5 −2 0 1]^T = [2 0 3 5; 0 11 0 −2; 3 0 5 0; 5 −2 0 1]
   the matrix is symmetric.

8. The characteristic equation of A is
   |λI − A| = |λ − 2   0; 0   λ − 2| = (λ − 2)^2 = 0.
   Therefore, the eigenvalue is λ = 2. The multiplicity of λ = 2 is 2, so the dimension of the corresponding eigenspace is 2 (by Theorem 7.7).
10. The characteristic equation of A is
    |λI − A| = |λ − 2 −1 −1; −1 λ − 2 −1; −1 −1 λ − 2| = (λ − 1)^2(λ − 4) = 0.
    Therefore, the eigenvalues are λ1 = 1 and λ2 = 4. The multiplicity of λ1 = 1 is 2, so the dimension of the corresponding eigenspace is 2 (by Theorem 7.7). The dimension of the eigenspace corresponding to λ2 = 4 is 1.

12. The characteristic equation of A is
    |λI − A| = |λ −4 −4; −4 λ − 2 0; −4 0 λ + 2| = (λ − 6)(λ + 6)λ = 0.
    Therefore, the eigenvalues are λ1 = 6, λ2 = −6, and λ3 = 0. The dimension of the eigenspace corresponding to each eigenvalue is 1.

14. The characteristic equation of A is
    |λI − A| = |λ − 2 1 1; 1 λ − 2 1; 1 1 λ − 2| = λ(λ − 3)^2 = 0.
    Therefore, the eigenvalues are λ1 = 0 and λ2 = 3. The dimension of the eigenspace corresponding to λ1 = 0 is 1. The multiplicity of λ2 = 3 is 2, so the dimension of the corresponding eigenspace is 2 (by Theorem 7.7).

16. Because the column vectors of the matrix do not form an orthonormal set, the matrix is not orthogonal.

18. Because the column vectors of the matrix form an orthonormal set, the matrix is orthogonal.

20. Because the column vectors of the matrix do not form an orthonormal set (the first two columns are not orthogonal), the matrix is not orthogonal.

22. Because the column vectors of the matrix do not form an orthonormal set, the matrix is not orthogonal.
24. The eigenvalues of A are λ1 = 2 and λ2 = 6, with corresponding eigenvectors (1, −1) and (1, 1), respectively. Normalize each eigenvector to form the columns of P. Then
    P = [√2/2 √2/2; −√2/2 √2/2]
    and
    P^T AP = [√2/2 −√2/2; √2/2 √2/2][4 2; 2 4][√2/2 √2/2; −√2/2 √2/2] = [2 0; 0 6].
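As a sketch (our own check, not from the text), the orthogonal diagonalization in Exercise 24 can be confirmed with NumPy:

```python
import numpy as np

A = np.array([[4.0, 2.0], [2.0, 4.0]])
s = np.sqrt(2) / 2
P = np.array([[s, s], [-s, s]])   # columns: normalized (1, -1) and (1, 1)

# P is orthogonal, so P^T plays the role of P^{-1}.
D = P.T @ A @ P
print(np.round(D, 10))            # diag(2, 6)
```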
26. The eigenvalues of A are λ1 = −1 (repeated) and λ2 = 2, with corresponding eigenvectors (−1, 0, 1), (−1, 1, 0), and (1, 1, 1), respectively. Use the Gram–Schmidt orthonormalization process on the two eigenvectors corresponding to λ1 = −1:
    (−1, 0, 1) → (−1/√2, 0, 1/√2)
    (−1, 1, 0) − (1/2)(−1, 0, 1) = (−1/2, 1, −1/2) → (1/√6, −2/√6, 1/√6)
    Normalizing the third eigenvector, corresponding to λ2 = 2, you can form the columns of P. So,
    P = [1/√3 −1/√2 1/√6; 1/√3 0 −2/√6; 1/√3 1/√2 1/√6]
    and
    P^T AP = P^T [0 1 1; 1 0 1; 1 1 0] P = [2 0 0; 0 −1 0; 0 0 −1].
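The Gram–Schmidt step used in Exercise 26 can be sketched in a few lines (the helper function and names are ours; either sign of a unit eigenvector is acceptable):

```python
import numpy as np

# Orthonormalize a list of vectors by subtracting projections onto earlier ones.
def gram_schmidt(vectors):
    basis = []
    for v in vectors:
        w = v - sum((v @ u) * u for u in basis)  # remove components along earlier u's
        basis.append(w / np.linalg.norm(w))
    return basis

v1 = np.array([-1.0, 0.0, 1.0])
v2 = np.array([-1.0, 1.0, 0.0])
u1, u2 = gram_schmidt([v1, v2])
print(u1, u2)  # orthonormal pair spanning the λ = -1 eigenspace
```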
28. The eigenvalues of A are λ1 = 5, λ2 = 0, and λ3 = −5, with corresponding eigenvectors (3, 5, 4), (−4, 0, 3), and (3, −5, 4), respectively. Normalize each eigenvector to form the columns of P. Then
    P = (1/10)[3√2 −8 3√2; 5√2 0 −5√2; 4√2 6 4√2]
    and
    P^T AP = (1/10)[3√2 5√2 4√2; −8 0 6; 3√2 −5√2 4√2][0 3 0; 3 0 4; 0 4 0](1/10)[3√2 −8 3√2; 5√2 0 −5√2; 4√2 6 4√2] = [5 0 0; 0 0 0; 0 0 −5].
30. The characteristic polynomial of A, |λI − A| = (λ − 8)(λ + 4)^2, yields the eigenvalues λ1 = 8 and λ2 = −4. λ1 has multiplicity 1 and λ2 has multiplicity 2. An eigenvector for λ1 is v1 = (1, 1, 2), which normalizes to
    u1 = v1/‖v1‖ = (√6/6, √6/6, √6/3).
    Two eigenvectors for λ2 are v2 = (−1, 1, 0) and v3 = (−2, 0, 1). Note that v1 is orthogonal to v2 and v3, as guaranteed by Theorem 7.9. The eigenvectors v2 and v3, however, are not orthogonal to each other. To find two orthonormal eigenvectors for λ2, use the Gram–Schmidt process as follows.
    w2 = v2 = (−1, 1, 0)
    w3 = v3 − (v3·w2 / w2·w2)w2 = (−1, −1, 1)
    These vectors normalize to
    u2 = w2/‖w2‖ = (−√2/2, √2/2, 0)
    u3 = w3/‖w3‖ = (−√3/3, −√3/3, √3/3).
    The matrix P has u1, u2, and u3 as its column vectors:
    P = [√6/6 −√2/2 −√3/3; √6/6 √2/2 −√3/3; √6/3 0 √3/3]
    and
    P^T AP = P^T [−2 2 4; 2 −2 4; 4 4 4] P = [8 0 0; 0 −4 0; 0 0 −4].
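As a cross-check of Exercise 30 (our own sketch, not from the text), NumPy's symmetric eigensolver recovers the same spectrum and diagonalization:

```python
import numpy as np

A = np.array([[-2.0,  2.0, 4.0],
              [ 2.0, -2.0, 4.0],
              [ 4.0,  4.0, 4.0]])

# eigh returns eigenvalues in ascending order and orthonormal eigenvectors.
vals, vecs = np.linalg.eigh(A)
print(np.round(vals, 6))  # approximately [-4, -4, 8]
assert np.allclose(vecs.T @ A @ vecs, np.diag(vals))
```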
32. The eigenvalues of A are λ1 = 0 (repeated) and λ2 = 2 (repeated). The eigenvectors corresponding to λ1 = 0 are (1, −1, 0, 0) and (0, 0, 1, −1), while those corresponding to λ2 = 2 are (1, 1, 0, 0) and (0, 0, 1, 1). Normalizing these eigenvectors to form P, you have
    P = [√2/2 0 √2/2 0; −√2/2 0 √2/2 0; 0 √2/2 0 √2/2; 0 −√2/2 0 √2/2]
    and
    P^T AP = P^T [1 1 0 0; 1 1 0 0; 0 0 1 1; 0 0 1 1] P = [0 0 0 0; 0 0 0 0; 0 0 2 0; 0 0 0 2].
34. (a) False. The fact that a matrix P is invertible does not imply P^(-1) = P^T, only that P^(-1) exists. The definition of an orthogonal matrix (page 449) requires that P be invertible and P^(-1) = P^T. For example, A = [1 3; 3 4] is invertible (|A| ≠ 0), but A^(-1) ≠ A^T.
    (b) True. See Theorem 7.10 on page 453.

36. A^T A = [1 4; −3 −6; 2 1][1 −3 2; 4 −6 1] = [17 −27 6; −27 45 −12; 6 −12 5]
    AA^T = [1 −3 2; 4 −6 1][1 4; −3 −6; 2 1] = [14 24; 24 53]
    Both products are symmetric.
38. (AB)^(-1) = B^(-1)A^(-1) = B^T A^T = (AB)^T ⇒ AB is orthogonal
    (BA)^(-1) = A^(-1)B^(-1) = A^T B^T = (BA)^T ⇒ BA is orthogonal
40. Suppose P −1 AP = D is diagonal, with λ the only eigenvalue. Then A = PDP −1 = P(λ I ) P −1 = λ I .
Section 7.4 Applications of Eigenvalues and Eigenvectors

2. x2 = Ax1 = [0 4; 1/16 0][160; 160] = [640; 10]
   x3 = Ax2 = [0 4; 1/16 0][640; 10] = [40; 40]

4. x2 = Ax1 = [0 2 2 0; 1/4 0 0 0; 0 1 0 0; 0 0 1/2 0][100; 100; 100; 100] = [400; 25; 100; 50]
   x3 = Ax2 = [0 2 2 0; 1/4 0 0 0; 0 1 0 0; 0 0 1/2 0][400; 25; 100; 50] = [250; 100; 25; 50]

6. The eigenvalues are 1/2 and −1/2. Choosing the positive eigenvalue, λ = 1/2, you find the corresponding eigenvector by row-reducing (1/2)I − A:
   [1/2 −4; −1/16 1/2] ⇒ [1 −8; 0 0]
   So, an eigenvector is (8, 1), and the stable age distribution vector is x = t[8; 1].

8. The characteristic equation of A is
   |λI − A| = λ^4 − (1/2)λ^2 − (1/2)λ = λ(λ − 1)(λ^2 + λ + 1/2) = 0.
   Choosing the positive eigenvalue λ = 1, you find its corresponding eigenvector by row-reducing λI − A = I − A. So, an eigenvector is (8, 2, 2, 1), and the stable age distribution vector is x = t[8; 2; 2; 1].

10. Construct the age transition matrix
    A = [3 6 3; 0.8 0 0; 0 0.25 0].
    The current age distribution vector is x1 = [150; 150; 150]. In one year, the age distribution vector will be
    x2 = Ax1 = [3 6 3; 0.8 0 0; 0 0.25 0][150; 150; 150] = [1800; 120; 37.5].
    In two years, the age distribution vector will be
    x3 = Ax2 = [3 6 3; 0.8 0 0; 0 0.25 0][1800; 120; 37.5] = [6232.5; 1440; 30].

12. The eigenvalues of A are λ1 = 1 and λ2 = −1, with corresponding eigenvectors (2, 1) and (−2, 1), respectively. Then A can be diagonalized as follows:
    P = [2 −2; 1 1], P^(-1) = [1/4 1/2; −1/4 1/2], and
    P^(-1)AP = [1/4 1/2; −1/4 1/2][0 2; 1/2 0][2 −2; 1 1] = [1 0; 0 −1] = D.
    So, A = PDP^(-1) and A^n = PD^n P^(-1). If n is even, D^n = I and A^n = I. If n is odd, D^n = D and A^n = PDP^(-1) = A. So, A^n x1 does not approach a limit as n approaches infinity.
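The alternating behavior in Exercise 12 is easy to see numerically; as a sketch (our own check), the powers of A cycle between I and A:

```python
import numpy as np

# Exercise 12: A^2 = I, so the powers of A alternate and A^n x1 has no limit.
A = np.array([[0.0, 2.0], [0.5, 0.0]])
print(np.linalg.matrix_power(A, 2))  # identity matrix
print(np.linalg.matrix_power(A, 3))  # equals A again
```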
14. The solution to the differential equation y′ = ky is y = Ce kt . So, y1 = C1e −3t and y2 = C2e 4t .
16. The solution to the differential equation y′ = ky is y = Ce kt . So, y1 = C1e5t , y2 = C2e−2t , and y3 = C3e −3t .
18. The solution to the differential equation y′ = ky is y = Ce kt . So, y1 = C1e − t , y2 = C2e −2t , and y3 = C3et .
20. This system has the matrix form
    y′ = [y1′; y2′] = [1 −4; −2 8][y1; y2] = Ay.
    The eigenvalues of A are λ1 = 0 and λ2 = 9, with corresponding eigenvectors (4, 1) and (−1, 2), respectively. So, you can diagonalize A using a matrix P whose columns are the eigenvectors of A.
    P = [4 −1; 1 2] and P^(-1)AP = [0 0; 0 9]
    The solution of the system w′ = P^(-1)APw is w1 = C1 and w2 = C2 e^(9t). Return to the original system by applying the substitution y = Pw:
    y = [y1; y2] = [4 −1; 1 2][w1; w2] = [4w1 − w2; w1 + 2w2]
    So, the solution is
    y1 = 4C1 − C2 e^(9t)
    y2 = C1 + 2C2 e^(9t).

22. This system has the matrix form
    y′ = [y1′; y2′] = [1 −1; 2 4][y1; y2] = Ay.
    The eigenvalues of A are λ1 = 2 and λ2 = 3, with corresponding eigenvectors (1, −1) and (−1, 2), respectively. So, you can diagonalize A using a matrix P whose columns are the eigenvectors of A.
    P = [1 −1; −1 2] and P^(-1)AP = [2 0; 0 3]
    The solution of the system w′ = P^(-1)APw is w1 = C1 e^(2t) and w2 = C2 e^(3t). Return to the original system by applying the substitution y = Pw:
    y = [y1; y2] = [1 −1; −1 2][w1; w2] = [w1 − w2; −w1 + 2w2]
    So, the solution is
    y1 = C1 e^(2t) − C2 e^(3t)
    y2 = −C1 e^(2t) + 2C2 e^(3t).

24. This system has the matrix form
    y′ = [y1′; y2′; y3′] = [−2 0 1; 0 3 4; 0 0 1][y1; y2; y3] = Ay.
    The eigenvalues of A are λ1 = −2, λ2 = 3, and λ3 = 1, with corresponding eigenvectors (1, 0, 0), (0, 1, 0), and (1, −6, 3), respectively. So, you can diagonalize A using a matrix P whose columns are the eigenvectors of A.
    P = [1 0 1; 0 1 −6; 0 0 3] and P^(-1)AP = [−2 0 0; 0 3 0; 0 0 1]
    The solution of the system w′ = P^(-1)APw is w1 = C1 e^(−2t), w2 = C2 e^(3t), and w3 = C3 e^t. Return to the original system by applying the substitution y = Pw:
    y = [y1; y2; y3] = [1 0 1; 0 1 −6; 0 0 3][w1; w2; w3] = [w1 + w3; w2 − 6w3; 3w3]
    So, the solution is
    y1 = C1 e^(−2t) + C3 e^t
    y2 = C2 e^(3t) − 6C3 e^t
    y3 = 3C3 e^t.

26. This system has the matrix form
    y′ = [y1′; y2′; y3′] = [2 1 1; 1 1 0; 1 0 1][y1; y2; y3] = Ay.
    The eigenvalues of A are λ1 = 0, λ2 = 1, and λ3 = 3, with corresponding eigenvectors (−1, 1, 1), (0, 1, −1), and (2, 1, 1), respectively. So, you can diagonalize A using a matrix P whose columns are the eigenvectors.
    P = [−1 0 2; 1 1 1; 1 −1 1] and P^(-1)AP = [0 0 0; 0 1 0; 0 0 3]
    The solution of the system w′ = P^(-1)APw is w1 = C1, w2 = C2 e^t, and w3 = C3 e^(3t). Return to the original system by applying the substitution y = Pw:
    y = [y1; y2; y3] = [−1 0 2; 1 1 1; 1 −1 1][w1; w2; w3] = [−w1 + 2w3; w1 + w2 + w3; w1 − w2 + w3]
    So, the solution is
    y1 = −C1 + 2C3 e^(3t)
    y2 = C1 + C2 e^t + C3 e^(3t)
    y3 = C1 − C2 e^t + C3 e^(3t).
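As a sketch (our own spot-check, not part of the text), the Exercise 22 solution can be verified to satisfy y′ = Ay for arbitrary constants:

```python
import numpy as np

# Exercise 22: y1 = C1 e^{2t} - C2 e^{3t}, y2 = -C1 e^{2t} + 2 C2 e^{3t}
# should satisfy y' = Ay with A = [[1, -1], [2, 4]].
A = np.array([[1.0, -1.0], [2.0, 4.0]])
C1, C2 = 1.3, -0.7  # arbitrary constants chosen for the check

for t in np.linspace(0.0, 1.0, 5):
    e2, e3 = np.exp(2 * t), np.exp(3 * t)
    y = np.array([C1 * e2 - C2 * e3, -C1 * e2 + 2 * C2 * e3])
    dy = np.array([2 * C1 * e2 - 3 * C2 * e3, -2 * C1 * e2 + 6 * C2 * e3])
    assert np.allclose(dy, A @ y)
print("y' = Ay holds at the sample points")
```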
28. Because
    y′ = [y1′; y2′] = [1 −1; 1 1][y1; y2] = Ay
    the system represented by y′ = Ay is
    y1′ = y1 − y2
    y2′ = y1 + y2.
    Note that
    y1′ = C1 e^t cos t − C1 e^t sin t + C2 e^t sin t + C2 e^t cos t = y1 − y2
    and
    y2′ = −C2 e^t cos t + C2 e^t sin t + C1 e^t sin t + C1 e^t cos t = y1 + y2.

30. Because
    y′ = [y1′; y2′; y3′] = [0 1 0; 0 0 1; 1 −3 3][y1; y2; y3] = Ay
    the system represented by y′ = Ay is
    y1′ = y2
    y2′ = y3
    y3′ = y1 − 3y2 + 3y3.
    Note that
    y1′ = C1 e^t + C2 t e^t + C2 e^t + C3 t^2 e^t + 2C3 t e^t = y2
    y2′ = (C1 + C2)e^t + (C2 + 2C3)t e^t + (C2 + 2C3)e^t + C3 t^2 e^t + 2C3 t e^t = y3
    y3′ = (C1 + 2C2 + 2C3)e^t + (C2 + 4C3)t e^t + (C2 + 4C3)e^t + C3 t^2 e^t + 2C3 t e^t
        = (C1 e^t + C2 t e^t + C3 t^2 e^t) − 3((C1 + C2)e^t + (C2 + 2C3)t e^t + C3 t^2 e^t) + 3((C1 + 2C2 + 2C3)e^t + (C2 + 4C3)t e^t + C3 t^2 e^t)
        = y1 − 3y2 + 3y3.
32. The matrix of the quadratic form is
    A = [a b/2; b/2 c] = [1 −2; −2 1].

34. The matrix of the quadratic form is
    A = [a b/2; b/2 c] = [12 −5/2; −5/2 0].

36. The matrix of the quadratic form is
    A = [a b/2; b/2 c] = [16 −2; −2 20].

38. The matrix of the quadratic form is
    A = [a b/2; b/2 c] = [5 −1; −1 5].
    The eigenvalues of A are λ1 = 4 and λ2 = 6, with corresponding eigenvectors x1 = (1, 1) and x2 = (−1, 1), respectively. Using unit vectors in the direction of x1 and x2 to form the columns of P, you have
    P = [√2/2 −√2/2; √2/2 √2/2] and P^T AP = [4 0; 0 6].
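As a sketch (our own check), the principal-axes rotation in Exercise 38 can be confirmed numerically:

```python
import numpy as np

# Exercise 38: rotating the quadratic form with matrix A = [[5, -1], [-1, 5]]
# to principal axes should produce diag(4, 6).
A = np.array([[5.0, -1.0], [-1.0, 5.0]])
s = np.sqrt(2) / 2
P = np.array([[s, -s], [s, s]])   # columns: unit eigenvectors (1, 1), (-1, 1)
print(P.T @ A @ P)                # diag(4, 6)
```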
40. The matrix of the quadratic form is
    A = [a b/2; b/2 c] = [3 −√3; −√3 1].
    The eigenvalues of A are λ1 = 0 and λ2 = 4, with corresponding eigenvectors x1 = (1, √3) and x2 = (−√3, 1), respectively. Using unit vectors in the direction of x1 and x2 to form the columns of P, you have
    P = [1/2 −√3/2; √3/2 1/2] and P^T AP = [0 0; 0 4].

42. The matrix of the quadratic form is
    A = [a b/2; b/2 c] = [17 16; 16 −7].
    The eigenvalues of A are λ1 = −15 and λ2 = 25, with corresponding eigenvectors x1 = (1, −2) and x2 = (2, 1), respectively. Using unit vectors in the direction of x1 and x2 to form the columns of P, you have
    P = [1/√5 2/√5; −2/√5 1/√5] and P^T AP = [−15 0; 0 25].

44. The matrix of the quadratic form is
    A = [a b/2; b/2 c] = [2 −2; −2 5].
    This matrix has eigenvalues of 1 and 6, and corresponding unit eigenvectors (2/√5, 1/√5) and (−1/√5, 2/√5), respectively. So, let
    P = [2/√5 −1/√5; 1/√5 2/√5] and P^T AP = [1 0; 0 6].
    This implies that the rotated conic is an ellipse with equation (x′)^2 + 6(y′)^2 − 36 = 0.

46. The matrix of the quadratic form is
    A = [a b/2; b/2 c] = [8 4; 4 8].
    This matrix has eigenvalues of 4 and 12, and corresponding unit eigenvectors (1/√2, −1/√2) and (1/√2, 1/√2), respectively. So, let
    P = [1/√2 1/√2; −1/√2 1/√2] and P^T AP = [4 0; 0 12].
    This implies that the rotated conic is an ellipse. Furthermore,
    [d e]P = [10√2 26√2]P = [−16 36] = [d′ e′],
    so the equation in the x′y′-coordinate system is 4(x′)^2 + 12(y′)^2 − 16x′ + 36y′ + 31 = 0.

48. The matrix of the quadratic form is
    A = [a b/2; b/2 c] = [1 2; 2 1].
    This matrix has eigenvalues of −1 and 3, and corresponding unit eigenvectors (1/√2, −1/√2) and (1/√2, 1/√2), respectively. So, let
    P = [1/√2 1/√2; −1/√2 1/√2] and P^T AP = [−1 0; 0 3].
    This implies that the rotated conic is a hyperbola with equation −(x′)^2 + 3(y′)^2 = 9.
50. The matrix of the quadratic form is
    A = [a b/2; b/2 c] = [5 −1; −1 5].
    The eigenvalues of A are 4 and 6, with corresponding unit eigenvectors (1/√2, 1/√2) and (−1/√2, 1/√2), respectively. So, let
    P = [1/√2 −1/√2; 1/√2 1/√2] and P^T AP = [4 0; 0 6].
    This implies that the rotated conic is an ellipse. Furthermore,
    [d e]P = [10√2 0]P = [10 −10] = [d′ e′],
    so the equation in the x′y′-coordinate system is 4(x′)^2 + 6(y′)^2 + 10x′ − 10y′ = 0.
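The linear-term transformation [d e]P used in Exercise 50 is just a row vector times the rotation matrix; as a sketch (our own check):

```python
import numpy as np

# Exercise 50: transforming the linear terms [d, e] = [10*sqrt(2), 0]
# by the same rotation P used for the quadratic part.
s = np.sqrt(2) / 2
P = np.array([[s, -s], [s, s]])
de = np.array([10 * np.sqrt(2), 0.0])
print(de @ P)  # [10, -10]
```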
52. The matrix of the quadratic form is A = [2 1 1; 1 2 1; 1 1 2].
    The eigenvalues of A are 1, 1, and 4, with corresponding unit eigenvectors (1/√6, 1/√6, −2/√6), (−1/√2, 1/√2, 0), and (1/√3, 1/√3, 1/√3), respectively. Then let
    P = [1/√6 −1/√2 1/√3; 1/√6 1/√2 1/√3; −2/√6 0 1/√3] and P^T AP = [1 0 0; 0 1 0; 0 0 4].
    So, the equation of the rotated quadratic surface is (x′)^2 + (y′)^2 + 4(z′)^2 − 1 = 0 (ellipsoid).
54. The matrix of the quadratic form is A = [1 1 0; 1 1 0; 0 0 1].
    The eigenvalues of A are 0, 1, and 2, with corresponding eigenvectors (−1, 1, 0), (0, 0, 1), and (1, 1, 0), respectively. Then let
    P = [−√2/2 0 √2/2; √2/2 0 √2/2; 0 1 0] and P^T AP = [0 0 0; 0 1 0; 0 0 2].
    So, the equation of the rotated quadratic surface is (y′)^2 + 2(z′)^2 − 8 = 0.
Review Exercises for Chapter 7

2. (a) The characteristic equation of A is given by
   |λI − A| = |λ − 2 −1; 4 λ + 2| = λ^2 = 0.
   (b) The eigenvalue of A is λ = 0 (repeated).
   (c) To find the eigenvectors corresponding to λ = 0, solve the matrix equation (λI − A)x = 0. Row-reducing the augmented matrix,
   [−2 −1 : 0; 4 2 : 0] ⇒ [2 1 : 0; 0 0 : 0]
   you see that a basis for the eigenspace is {(−1, 2)}.

4. (a) The characteristic equation of A is given by
   |λI − A| = |λ + 4 −1 −2; 0 λ − 1 −1; 0 0 λ − 3| = (λ + 4)(λ − 1)(λ − 3) = 0.
   (b) The eigenvalues of A are λ1 = −4, λ2 = 1, and λ3 = 3.
   (c) To find the eigenvectors corresponding to λ1 = −4, solve the matrix equation (λ1 I − A)x = 0. Row-reducing the augmented matrix,
   [0 −1 −2 : 0; 0 −5 −1 : 0; 0 0 −7 : 0] ⇒ [0 1 0 : 0; 0 0 1 : 0; 0 0 0 : 0]
   you see that a basis for the eigenspace of λ1 = −4 is {(1, 0, 0)}. Similarly, solve (λ2 I − A)x = 0 for λ2 = 1, and see that {(1, 5, 0)} is a basis for the eigenspace of λ2 = 1. Finally, solve (λ3 I − A)x = 0 for λ3 = 3, and determine that {(5, 7, 14)} is a basis for its eigenspace.

6. (a) The characteristic equation of A is given by |λI − A| = (λ + 3)(λ − 1)(λ − 2) = 0.
   (b) The eigenvalues of A are λ1 = −3, λ2 = 1, and λ3 = 2.
   (c) To find the eigenvector corresponding to λ1 = −3, solve the matrix equation (λ1 I − A)x = 0. Row-reducing the augmented matrix,
   [−4 0 −4 : 0; 0 −4 2 : 0; −1 0 −1 : 0] ⇒ [1 0 1 : 0; 0 1 −1/2 : 0; 0 0 0 : 0]
   you can see that a basis for the eigenspace of λ1 = −3 is {(−2, 1, 2)}. Similarly, solve (λ2 I − A)x = 0 for λ2 = 1, and see that {(0, 1, 0)} is a basis for the eigenspace of λ2 = 1. Finally, solve (λ3 I − A)x = 0 for λ3 = 2, and see that {(4, −2, 1)} is a basis for its eigenspace.

8. (a) |λI − A| = (λ − 1)(λ − 2)(λ − 4)^2 = 0
   (b) λ1 = 1, λ2 = 2, λ3 = 4 (repeated)
   (c) A basis for the eigenspace of λ1 = 1 is {(−1, 0, 1, 0)}.
       A basis for the eigenspace of λ2 = 2 is {(−2, 1, 1, 0)}.
       A basis for the eigenspace of λ3 = 4 is {(2, 3, 1, 0), (0, 0, 0, 1)}.
10. The eigenvalues of A are the solutions of
    |λI − A| = |λ − 3 2 −2; 2 λ 1; −2 1 λ| = (λ + 1)^2(λ − 5) = 0.
    Therefore, the eigenvalues are −1 (repeated) and 5. The corresponding eigenvectors are solutions of (λI − A)x = 0. So, (1, 1, −1) and (2, 5, 1) are eigenvectors corresponding to λ1 = −1, while (2, −1, 1) corresponds to λ2 = 5. Now form P from these eigenvectors and note that
    P = [1 2 2; 1 5 −1; −1 1 1] and P^(-1)AP = [−1 0 0; 0 −1 0; 0 0 5].
12. The eigenvalues of A are the solutions of
    |λI − A| = |λ − 2 1 −1; 2 λ − 3 2; 1 −1 λ| = (λ − 1)^2(λ − 3) = 0.
    Therefore, the eigenvalues are λ1 = 1 (repeated) and λ2 = 3. The corresponding eigenvectors are solutions of (λI − A)x = 0. So, (−1, 0, 1) and (1, 1, 0) are eigenvectors corresponding to λ1 = 1, while (−1, 2, 1) corresponds to λ2 = 3. Now form P from these eigenvectors and note that
    P = [−1 1 −1; 0 1 2; 1 0 1] and P^(-1)AP = [1 0 0; 0 1 0; 0 0 3].

14. The characteristic equation of A is given by
    |λI − A| = |λ −1; −a λ − 1| = λ^2 − λ − a = 0.
    The discriminant d of this quadratic equation in λ is 1 + 4a.
    (a) A has an eigenvalue of multiplicity 2 if and only if d = 1 + 4a = 0; that is, a = −1/4.
    (b) A has −1 and 2 as eigenvalues if and only if λ^2 − λ − a = (λ + 1)(λ − 2); that is, a = 2.
    (c) A has real eigenvalues if and only if d = 1 + 4a ≥ 0; that is, a ≥ −1/4.
16. The eigenvalue is λ = −1 (repeated). To find its corresponding eigenspace, solve (λI − A)x = 0 with λ = −1:
    [λ + 1 −2; 0 λ + 1] = [0 −2; 0 0] ⇒ [0 1; 0 0]
    Because the eigenspace is only one-dimensional, the matrix A is not diagonalizable.

18. The eigenvalues are λ = −2 (repeated) and λ = 4. Because the eigenspace corresponding to λ = −2 is only one-dimensional, the matrix is not diagonalizable.

20. The eigenvalues of B are 5 and 3, with corresponding eigenvectors (−1, 1) and (−1, 2), respectively. Form the columns of P from the eigenvectors of B. So,
    P = [−1 −1; 1 2] and
    P^(-1)BP = [−2 −1; 1 1][7 2; −4 1][−1 −1; 1 2] = [5 0; 0 3] = A.
    Therefore, A and B are similar.

22. The eigenvalues of B are 1 and −2 (repeated), with corresponding eigenvectors (−1, −1, 1), (1, 1, 0), and (1, 0, 1), respectively. Form the columns of P from the eigenvectors of B. So,
    P = [−1 1 1; −1 1 0; 1 0 1] and
    P^(-1)BP = [−1 1 1; −1 2 1; 1 −1 0][1 −3 −3; 3 −5 −3; −3 3 1][−1 1 1; −1 1 0; 1 0 1] = [1 0 0; 0 −2 0; 0 0 −2] = A.
    Therefore, A and B are similar.

24. Because A^T = [2√5/5 √5/5; √5/5 −2√5/5] = A, A is symmetric. Furthermore, the column vectors of A form an orthonormal set. So, A is both symmetric and orthogonal.
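As a sketch (our own check), the similarity computation in Exercise 20 can be confirmed numerically:

```python
import numpy as np

# Exercise 20: P^{-1} B P should equal A = diag(5, 3).
B = np.array([[7.0, 2.0], [-4.0, 1.0]])
P = np.array([[-1.0, -1.0], [1.0, 2.0]])
print(np.linalg.inv(P) @ B @ P)  # diag(5, 3)
```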
26. Because A^T = A, A is symmetric. Because the column vectors of A do not form an orthonormal set, A is not orthogonal.

28. Because
    A^T = [4/5 0 −3/5; 0 1 0; 3/5 0 4/5] ≠ A
    A is not symmetric. However, the column vectors of A form an orthonormal set, so A is orthogonal.

30. The eigenvalues of A are 17 and −17, with corresponding unit eigenvectors (5/√34, 3/√34) and (−3/√34, 5/√34), respectively. Form the columns of P with the eigenvectors of A:
    P = [5/√34 −3/√34; 3/√34 5/√34].

32. The eigenvalues of A are 3, −1, and 5, with corresponding eigenvectors (1/√2, 1/√2, 0), (1/√2, −1/√2, 0), and (0, 0, 1). Form the columns of P from the eigenvectors of A:
    P = [1/√2 1/√2 0; 1/√2 −1/√2 0; 0 0 1].

34. The eigenvalues of A are −1/2 and 1. The eigenvectors corresponding to λ = 1 are x = t(2, 1). By choosing t = 1/3, you find the steady state probability vector for A to be v = (2/3, 1/3). Note that
    Av = [1/2 1; 1/2 0][2/3; 1/3] = [2/3; 1/3] = v.

36. The eigenvalues of A are 1/5 and 1. The eigenvectors corresponding to λ = 1 are x = t(1, 3). By choosing t = 1/4, you can find the steady state probability vector for A to be v = (1/4, 3/4). Note that
    Av = [0.4 0.2; 0.6 0.8][1/4; 3/4] = [1/4; 3/4] = v.

38. The eigenvalues of A are −0.2060, 0.5393, and 1. The eigenvectors corresponding to λ = 1 are x = t(2, 1, 2). By choosing t = 1/5, you find the steady state probability vector for A to be v = (2/5, 1/5, 2/5). Note that
    Av = [1/3 2/3 1/3; 1/3 1/3 0; 1/3 0 2/3][2/5; 1/5; 2/5] = [2/5; 1/5; 2/5] = v.

40. The eigenvalues of A are 1/10, 1/5, and 1. The eigenvectors corresponding to λ = 1 are x = t(3, 1, 5). By choosing t = 1/9, you can find the steady state probability vector for A to be v = (1/3, 1/9, 5/9). Note that
    Av = [0.3 0.1 0.4; 0.2 0.4 0.0; 0.5 0.5 0.6][1/3; 1/9; 5/9] = [1/3; 1/9; 5/9] = v.
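Because the other eigenvalues of the stochastic matrix in Exercise 38 have absolute value less than 1, repeated transitions drive any starting distribution toward v; as a sketch (our own check):

```python
import numpy as np

# Exercise 38: iterating x -> Ax converges to the steady state (2/5, 1/5, 2/5).
A = np.array([[1/3, 2/3, 1/3],
              [1/3, 1/3, 0.0],
              [1/3, 0.0, 2/3]])
x = np.array([1.0, 0.0, 0.0])   # arbitrary starting probability vector
for _ in range(100):
    x = A @ x
print(np.round(x, 6))           # approximately (0.4, 0.2, 0.4)
```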
42. For the sake of simplicity, assume a_n = 1, and observe that the following proof holds for any nonzero a_n. First show by induction that for the n × n determinant
    C_n = |λ −1 0 ... 0; 0 λ −1 ... 0; ...; b0 b1 b2 ... b_{n−1}| = b_{n−1}λ^(n−1) + ... + b1 λ + b0.
    For n = 1, C1 = b0, and for n = 2,
    C2 = |λ −1; b0 b1| = b1 λ + b0.
    Assuming the property for n, you see that
    C_{n+1} = |λ −1 0 ... 0; 0 λ −1 ... 0; ...; 0 0 ... −1; b0 b1 b2 ... b_n| = b_n λ^n + C_n = b_n λ^n + b_{n−1}λ^(n−1) + ... + b1 λ + b0,
    showing the property is valid for n + 1. You can now evaluate the characteristic equation of A as follows.
    |λI − A| = |λ −1 0 ... 0; 0 λ −1 ... 0; ...; 0 0 0 ... −1; a0 a1 a2 ... λ + a_{n−1}|
             = (λ + a_{n−1})λ^(n−1) + C_{n−1}   (with b_i = a_i)
             = λ^n + a_{n−1}λ^(n−1) + a_{n−2}λ^(n−2) + ... + a1 λ + a0.

44. From the form p(λ) = a0 + a1 λ + a2 λ^2 + a3 λ^3, you have a0 = 189, a1 = −120, a2 = −7, and a3 = 2. This implies that the companion matrix of p is
    A = [0 1 0; 0 0 1; −189/2 60 7/2].
    The eigenvalues of A are 3/2, 9, and −7, the zeros of p.

46. The characteristic equation of A is |λI − A| = λ^3 − 20λ^2 + 128λ − 256 = 0.
    Because A^3 − 20A^2 + 128A − 256I = O, you have
    A^2 = [9 4 −3; −2 0 6; −1 −4 11][9 4 −3; −2 0 6; −1 −4 11] = [76 48 −36; −24 −32 72; −12 −48 100]
    and
    A^3 = 20A^2 − 128A + 256I = 20[76 48 −36; −24 −32 72; −12 −48 100] − 128[9 4 −3; −2 0 6; −1 −4 11] + 256[1 0 0; 0 1 0; 0 0 1] = [624 448 −336; −224 −384 672; −112 −448 848].

48. (A + cI)x = Ax + cIx = λx + cx = (λ + c)x. So, x is an eigenvector of (A + cI) with eigenvalue (λ + c).
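The Cayley–Hamilton shortcut in Exercise 46 is easy to confirm numerically; as a sketch (our own check):

```python
import numpy as np

# Exercise 46: by Cayley-Hamilton, A^3 = 20 A^2 - 128 A + 256 I.
A = np.array([[ 9.0,  4.0, -3.0],
              [-2.0,  0.0,  6.0],
              [-1.0, -4.0, 11.0]])
A2 = A @ A
A3 = 20 * A2 - 128 * A + 256 * np.eye(3)
assert np.allclose(A3, A @ A @ A)  # matches the directly computed cube
print(A3[0])  # [624. 448. -336.]
```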
50. (a) The eigenvalues of A are 3 and 1, with corresponding eigenvectors (1, 1) and (1, −1). Letting these eigenvectors form the columns of P, you can diagonalize A:
    P = [1 1; 1 −1] and P^(-1)AP = [3 0; 0 1] = D.
    So, A = PDP^(-1) = P[3 0; 0 1]P^(-1). Letting
    B = P[√3 0; 0 1]P^(-1) = (1/2)[√3 + 1 √3 − 1; √3 − 1 √3 + 1]
    you have
    B^2 = (P[√3 0; 0 1]P^(-1))^2 = P[√3 0; 0 1]^2 P^(-1) = P[3 0; 0 1]P^(-1) = A.
    (b) In general, let A = PDP^(-1), D diagonal with positive eigenvalues on the diagonal. Let D′ be the diagonal matrix consisting of the square roots of the diagonal entries of D. Then if B = PD′P^(-1),
    B^2 = (PD′P^(-1))(PD′P^(-1)) = P(D′)^2 P^(-1) = PDP^(-1) = A.
52. Because Ax = A^2 x and A^2 x = A(Ax) = A(λx) = λ^2 x, you see that λx = λ^2 x, or λ = λ^2 ⇒ λ = 0 or 1.

54. If A is symmetric and has 0 as its only eigenvalue, then P^T AP = O, the zero matrix, which implies that A = O.

56. (a) True. See Theorem 7.2 on page 426.
    (b) False. See the remark after the "Definitions of Eigenvalue and Eigenvector" on page 422. If x = 0 were allowed to be an eigenvector, then the definition of eigenvalue would be meaningless, because A0 = λ0 for all real numbers λ.
    (c) True. See page 453.

58. The population after one transition is
    x2 = [0 1; 3/4 0][32; 32] = [32; 24]
    and after two transitions is
    x3 = [0 1; 3/4 0][32; 24] = [24; 24].
    The eigenvalues of A are ±√3/2. Choose the positive eigenvalue and find the corresponding eigenvector to be (2, √3); the stable age distribution vector is x = t[2; √3].

60. The population after one transition is
    x2 = [0 2 2; 1/2 0 0; 0 0 0][240; 240; 240] = [960; 120; 0]
    and after two transitions is
    x3 = [0 2 2; 1/2 0 0; 0 0 0][960; 120; 0] = [240; 480; 0].
    The positive eigenvalue 1 has corresponding eigenvector (2, 1, 0), and the stable distribution vector is x = t[2; 1; 0].

62. Construct the age transition matrix
    A = [4 8 2; 0.75 0 0; 0 0.6 0].
    The current age distribution vector is x1 = [120; 120; 120]. In one year, the age distribution vector will be
    x2 = Ax1 = [4 8 2; 0.75 0 0; 0 0.6 0][120; 120; 120] = [1680; 90; 72].
    In two years, the age distribution vector will be
    x3 = Ax2 = [4 8 2; 0.75 0 0; 0 0.6 0][1680; 90; 72] = [7584; 1260; 54].
64. The matrix corresponding to the system y′ = Ay is
    A = [3 0; 1 −1].
    This matrix has eigenvalues 3 and −1, with corresponding eigenvectors (4, 1) and (0, 1). So, a matrix P that diagonalizes A is
    P = [4 0; 1 1] and P^(-1)AP = [3 0; 0 −1].
    The system represented by w′ = P^(-1)APw has solutions w1 = C1 e^(3t) and w2 = C2 e^(−t). Substitute y = Pw and obtain
    [y1; y2] = [4 0; 1 1][w1; w2] = [4w1; w1 + w2]
    which yields the solution
    y1 = 4C1 e^(3t)
    y2 = C1 e^(3t) + C2 e^(−t).

66. The matrix corresponding to the system y′ = Ay is
    A = [6 −1 2; 0 3 −1; 0 0 1].
    The eigenvalues of A are 6, 3, and 1, with corresponding eigenvectors (1, 0, 0), (1, 3, 0), and (−3, 5, 10). So, you can diagonalize A by forming
    P = [1 1 −3; 0 3 5; 0 0 10] and P^(-1)AP = [6 0 0; 0 3 0; 0 0 1].
    The system represented by w′ = P^(-1)APw has solutions w1 = C1 e^(6t), w2 = C2 e^(3t), and w3 = C3 e^t. Substitute y = Pw and obtain
    [y1; y2; y3] = [1 1 −3; 0 3 5; 0 0 10][w1; w2; w3] = [w1 + w2 − 3w3; 3w2 + 5w3; 10w3]
    which yields the solution
    y1 = C1 e^(6t) + C2 e^(3t) − 3C3 e^t
    y2 = 3C2 e^(3t) + 5C3 e^t
    y3 = 10C3 e^t.

68. The matrix of the quadratic form is
    A = [a b/2; b/2 c] = [1 −√3/2; −√3/2 2].
    The eigenvalues are 1/2 and 5/2, with corresponding unit eigenvectors (√3/2, 1/2) and (−1/2, √3/2). Use these eigenvectors to form the columns of P:
    P = [√3/2 −1/2; 1/2 √3/2] and P^T AP = [1/2 0; 0 5/2].
    This implies that the equation of the rotated conic is (1/2)(x′)^2 + (5/2)(y′)^2 = 10, an ellipse.
    (The accompanying figure shows the ellipse with the x′y′-axes rotated 30° from the xy-axes.)
70. The matrix of the quadratic form is
    A = [a b/2; b/2 c] = [9 −12; −12 16].
    The eigenvalues are 0 and 25, with corresponding unit eigenvectors (4/5, 3/5) and (−3/5, 4/5). Use these eigenvectors to form the columns of P:
    P = [4/5 −3/5; 3/5 4/5] and P^T AP = [0 0; 0 25].
    This implies that the rotated conic is a parabola. Furthermore,
    [d e]P = [−400 −300]P = [−500 0] = [d′ e′]
    so the equation in the x′y′-coordinate system is 25(y′)^2 − 500x′ = 0.
    (The accompanying figure shows the parabola with the x′y′-axes rotated 36.87° from the xy-axes.)
Project Solutions for Chapter 7 1 Population Growth and Dynamical Systems (I) ⎡ 0.5 0.6⎤ 1. A = ⎢ ⎥ ⎣−0.4 3.0⎦
⎡6⎤
λ1 = 0.6, w1 = ⎢ ⎥ ⎣ 1⎦ ⎡ 1⎤
λ2 = 2.9, w 2 = ⎢ ⎥ ⎣4⎦
4 −1⎤ 0⎤ ⎡0.6 −1 ⎥ , P AP = ⎢ ⎥ 6⎦ ⎣ ⎣ 0 2.9⎦
⎡6 1⎤ −1 P = ⎢ ⎥, P = 1 4 ⎣ ⎦
1 ⎡ 23 ⎢−1
$w_1 = C_1e^{0.6t}$, $w_2 = C_2e^{2.9t}$, $\mathbf{y} = P\mathbf{w}$:
$$\begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = \begin{bmatrix} 6 & 1 \\ 1 & 4 \end{bmatrix}\begin{bmatrix} C_1e^{0.6t} \\ C_2e^{2.9t} \end{bmatrix} = \begin{bmatrix} 6C_1e^{0.6t} + C_2e^{2.9t} \\ C_1e^{0.6t} + 4C_2e^{2.9t} \end{bmatrix}$$
$$y_1(0) = 36 \;\Rightarrow\; 6C_1 + C_2 = 36$$
$$y_2(0) = 121 \;\Rightarrow\; C_1 + 4C_2 = 121$$
So, $C_1 = 1$, $C_2 = 30$, and
$$y_1 = 6e^{0.6t} + 30e^{2.9t}$$
$$y_2 = e^{0.6t} + 120e^{2.9t}.$$
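The constants can be recovered by solving the $2 \times 2$ initial-condition system numerically (a sketch, not part of the printed solution):

```python
import numpy as np

# Population project, part 1: fit C1, C2 to y1(0) = 36, y2(0) = 121.
P = np.array([[6.0, 1.0],
              [1.0, 4.0]])            # columns: eigenvectors for 0.6 and 2.9
C = np.linalg.solve(P, np.array([36.0, 121.0]))
print(C)                              # [ 1. 30.]

def y(t):
    # y(t) = P @ (C1 e^{0.6 t}, C2 e^{2.9 t})
    return P @ (C * np.exp(np.array([0.6, 2.9]) * t))

print(y(0.0))                         # [ 36. 121.]
```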
2. No, neither species disappears. As $t \to \infty$, $y_1 \to 30e^{2.9t}$ and $y_2 \to 120e^{2.9t}$.
3. [Figure: graphs of $y_1$ and $y_2$ for $0 \le t \le 3$, vertical scale 0 to 150,000.]
Chapter 7 Eigenvalues and Eigenvectors
4. As $t \to \infty$, $y_1 \to 30e^{2.9t}$, $y_2 \to 120e^{2.9t}$, and $\dfrac{y_2}{y_1} \to 4$.
5. The population $y_2$ ultimately disappears around $t = 1.6$.
2 The Fibonacci Sequence
1. $x_1 = 1$, $x_2 = 1$, $x_3 = 2$, $x_4 = 3$, $x_5 = 5$, $x_6 = 8$, $x_7 = 13$, $x_8 = 21$, $x_9 = 34$, $x_{10} = 55$, $x_{11} = 89$, $x_{12} = 144$
2. $\begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} x_{n-1} \\ x_{n-2} \end{bmatrix} = \begin{bmatrix} x_{n-1} + x_{n-2} \\ x_{n-1} \end{bmatrix} = \begin{bmatrix} x_n \\ x_{n-1} \end{bmatrix}$, so $x_n$ is generated from $\begin{bmatrix} x_{n-1} \\ x_{n-2} \end{bmatrix}$.
3. $A\begin{bmatrix} 1 \\ 1 \end{bmatrix} = A\begin{bmatrix} x_2 \\ x_1 \end{bmatrix} = \begin{bmatrix} x_3 \\ x_2 \end{bmatrix} = \begin{bmatrix} 2 \\ 1 \end{bmatrix}, \qquad A^2\begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 3 \\ 2 \end{bmatrix} = \begin{bmatrix} x_4 \\ x_3 \end{bmatrix}$
In general, $A^n\begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} x_{n+2} \\ x_{n+1} \end{bmatrix}$, or $A^{n-2}\begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} x_n \\ x_{n-1} \end{bmatrix}$.
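This matrix-power recursion can be implemented directly (a short sketch, not part of the printed solution):

```python
import numpy as np

# Fibonacci via matrix powers: A^(n-2) @ [1, 1] = [x_n, x_{n-1}].
A = np.array([[1, 1],
              [1, 0]])

def fib(n):
    """x_n of the Fibonacci sequence x_1 = x_2 = 1, via A^(n-2)."""
    if n <= 2:
        return 1
    return int((np.linalg.matrix_power(A, n - 2) @ np.array([1, 1]))[0])

print([fib(n) for n in range(1, 13)])  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144]
print(fib(20))                         # 6765
```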
4. $\begin{vmatrix} \lambda - 1 & -1 \\ -1 & \lambda \end{vmatrix} = \lambda^2 - \lambda - 1 = 0 \;\Rightarrow\; \lambda = \dfrac{1 \pm \sqrt{5}}{2}$
$\lambda_1 = \dfrac{1 + \sqrt{5}}{2}$, eigenvector $\begin{bmatrix} 2 \\ -1 + \sqrt{5} \end{bmatrix}$; $\qquad \lambda_2 = \dfrac{1 - \sqrt{5}}{2}$, eigenvector $\begin{bmatrix} 2 \\ -1 - \sqrt{5} \end{bmatrix}$
$$P = \begin{bmatrix} 2 & 2 \\ -1+\sqrt{5} & -1-\sqrt{5} \end{bmatrix}, \qquad P^{-1} = \frac{1}{4\sqrt{5}}\begin{bmatrix} 1+\sqrt{5} & 2 \\ -1+\sqrt{5} & -2 \end{bmatrix}, \qquad P^{-1}AP = \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix}$$
5. $P^{-1}AP = D$ implies $A^{n-2} = PD^{n-2}P^{-1}$:
$$A^{n-2} = \frac{1}{4\sqrt{5}}\begin{bmatrix} 2 & 2 \\ -1+\sqrt{5} & -1-\sqrt{5} \end{bmatrix}\begin{bmatrix} \lambda_1^{n-2} & 0 \\ 0 & \lambda_2^{n-2} \end{bmatrix}\begin{bmatrix} 1+\sqrt{5} & 2 \\ -1+\sqrt{5} & -2 \end{bmatrix}$$
$$= \frac{1}{4\sqrt{5}}\begin{bmatrix} 2\lambda_1^{n-2} & 2\lambda_2^{n-2} \\ (-1+\sqrt{5})\lambda_1^{n-2} & (-1-\sqrt{5})\lambda_2^{n-2} \end{bmatrix}\begin{bmatrix} 1+\sqrt{5} & 2 \\ -1+\sqrt{5} & -2 \end{bmatrix}.$$
Because $A^{n-2}\begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} x_n \\ x_{n-1} \end{bmatrix}$, the first row of $A^{n-2}$ gives
$$x_n = \frac{1}{4\sqrt{5}}\left[2(1+\sqrt{5})\lambda_1^{n-2} + 2(-1+\sqrt{5})\lambda_2^{n-2} + 4\lambda_1^{n-2} - 4\lambda_2^{n-2}\right] = \frac{1}{\sqrt{5}}\left[\lambda_1^n - \lambda_2^n\right],$$
using $\frac{6+2\sqrt{5}}{4} = \lambda_1^2$ and $\frac{6-2\sqrt{5}}{4} = \lambda_2^2$. That is,
$$x_n = \frac{1}{\sqrt{5}}\left[\left(\frac{1+\sqrt{5}}{2}\right)^n - \left(\frac{1-\sqrt{5}}{2}\right)^n\right].$$
Checks:
$$x_1 = \frac{1}{\sqrt{5}}\left(\sqrt{5}\right) = 1$$
$$x_2 = \frac{1}{\sqrt{5}}\left[\frac{6+2\sqrt{5}}{4} - \frac{6-2\sqrt{5}}{4}\right] = 1$$
$$x_3 = \frac{1}{\sqrt{5}}\left[\frac{6+2\sqrt{5}}{4}\cdot\frac{1+\sqrt{5}}{2} - \frac{6-2\sqrt{5}}{4}\cdot\frac{1-\sqrt{5}}{2}\right] = \frac{1}{\sqrt{5}}\left[\frac{16+8\sqrt{5}}{8} - \frac{16-8\sqrt{5}}{8}\right] = 2$$
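The closed form above (Binet's formula) can be verified numerically (a sketch, not part of the printed solution; rounding absorbs floating-point error):

```python
import math

# Binet's formula: x_n = (phi^n - psi^n)/sqrt(5), phi = (1+sqrt 5)/2, psi = (1-sqrt 5)/2.
phi = (1 + math.sqrt(5)) / 2
psi = (1 - math.sqrt(5)) / 2

def binet(n):
    return round((phi**n - psi**n) / math.sqrt(5))

print([binet(n) for n in range(1, 13)])  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144]
print(binet(10), binet(20))              # 55 6765
```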
6. $x_{10} = 55$, $x_{20} = 6765$
7. For example, $\dfrac{x_{20}}{x_{19}} = \dfrac{6765}{4181} = 1.618\ldots$ The quotients seem to be approaching a fixed value near 1.618.
8. Let the limit be $b = \lim_{n \to \infty} \dfrac{x_n}{x_{n-1}}$. Then for large $n$,
$$\frac{x_n}{x_{n-1}} = \frac{x_{n-1} + x_{n-2}}{x_{n-1}} \approx 1 + \frac{1}{b},$$
so $b = 1 + \dfrac{1}{b} \;\Rightarrow\; b^2 - b - 1 = 0 \;\Rightarrow\; b = \dfrac{1 \pm \sqrt{5}}{2}$.
Taking the positive value, $b = \dfrac{1 + \sqrt{5}}{2} \approx 1.618$.
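The convergence of the quotients to this value can be illustrated with a few terms (a sketch, not part of the printed solution):

```python
import math

# Ratios x_n / x_{n-1} approach the golden ratio (1 + sqrt 5)/2 ≈ 1.618.
x = [1, 1]
for _ in range(18):                         # build x_1 .. x_20
    x.append(x[-1] + x[-2])

print(round(x[19] / x[18], 6))              # x_20 / x_19

golden = (1 + math.sqrt(5)) / 2
print(abs(x[19] / x[18] - golden) < 1e-6)   # True
```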