Introduction to Linear Algebra for Science and Engineering 1st Ed
PEARSON

Daniel Norman • Dan Wolczuk

Introduction to Linear Algebra for Science and Engineering
Taken from: Introduction to Linear Algebra for Science and Engineering, Second Edition by Daniel Norman and Dan Wolczuk
Cover Art: Courtesy of Pearson Learning Solutions. Taken from: Introduction to Linear Algebra for Science and Engineering, Second Edition by Daniel Norman and Dan Wolczuk Copyright© 2012, 1995 by Pearson Education, Inc. Published by Pearson Upper Saddle River, New Jersey 07458 All rights reserved. No part of this book may be reproduced, in any form or by any means, without permission in writing from the publisher. This special edition published in cooperation with Pearson Learning Solutions. All trademarks, service marks, registered trademarks, and registered service marks are the property of their respective owners and are used herein for identification purposes only.
Pearson Learning Solutions, 501 Boylston Street, Suite 900, Boston, MA 02116 A Pearson Education Company www.pearsoned.com
Contents

A Note to Students  vi
A Note to Instructors  viii

Chapter 1  Euclidean Vector Spaces  1
  1.1 Vectors in ℝ² and ℝ³  1
    The Vector Equation of a Line in ℝ²  5
    Vectors and Lines in ℝ³  9
  1.2 Vectors in ℝⁿ  14
    Addition and Scalar Multiplication of Vectors in ℝⁿ  15
    Subspaces  16
    Spanning Sets and Linear Independence  18
    Surfaces in Higher Dimensions  24
  1.3 Length and Dot Products  28
    Length and Dot Products in ℝ² and ℝ³  28
    Length and Dot Product in ℝⁿ  31
    The Scalar Equation of Planes and Hyperplanes  34
  1.4 Projections and Minimum Distance  40
    Projections  40
    Some Properties of Projections  43
    The Perpendicular Part  44
    Minimum Distance  44
  1.5 Cross-Products and Volumes  50
    Cross-Products  50
    The Length of the Cross-Product  52
    Some Problems on Lines, Planes, and Distances  54

Chapter 2  Systems of Linear Equations  63
  2.1 Systems of Linear Equations and Elimination  63
    The Matrix Representation of a System  69
    Row Echelon Form  73
    Consistent Systems and Unique Solutions  75
    Some Shortcuts and Some Bad Moves  76
    A Word Problem  77
    A Remark on Computer Calculations  78
  2.2 Reduced Row Echelon Form, Rank, and Homogeneous Systems  83
    Rank of a Matrix  85
    Homogeneous Linear Equations  86
  2.3 Application to Spanning and Linear Independence  91
    Spanning Problems  91
    Linear Independence Problems  95
    Bases of Subspaces  97
  2.4 Applications of Systems of Linear Equations  102
    Resistor Circuits in Electricity  102
    Planar Trusses  105
    Linear Programming  107

Chapter 3  Matrices, Linear Mappings, and Inverses  115
  3.1 Operations on Matrices  115
    Equality, Addition, and Scalar Multiplication  115
    The Transpose of a Matrix  120
    An Introduction to Matrix Multiplication  121
    Identity Matrix  126
    Block Multiplication  127
  3.2 Matrix Mappings and Linear Mappings  131
    Matrix Mappings  131
    Linear Mappings  134
    Is Every Linear Mapping a Matrix Mapping?  136
    Compositions and Linear Combinations of Linear Mappings  139
  3.3 Geometrical Transformations  143
    Rotations in the Plane  143
    Rotation Through Angle θ About the x₃-axis in ℝ³  145
  3.4 Special Subspaces for Systems and Mappings: Rank Theorem  150
    Solution Space and Nullspace  150
    Solution Set of Ax = b  152
    Range of L and Columnspace of A  153
    Rowspace of A  156
    Bases for Row(A), Col(A), and Null(A)  157
    A Summary of Facts About Rank  162
  3.5 Inverse Matrices and Inverse Mappings  162
    Some Facts About Square Matrices and Solutions of Linear Systems  165
    A Procedure for Finding the Inverse of a Matrix  167
    Inverse Linear Mappings  170
  3.6 Elementary Matrices  175
  3.7 LU-Decomposition  181
    Solving Systems with the LU-Decomposition  185
    A Comment About Swapping Rows  187

Chapter 4  Vector Spaces  193
  4.1 Spaces of Polynomials  193
    Addition and Scalar Multiplication of Polynomials  193
  4.2 Vector Spaces  197
    Vector Spaces  197
    Subspaces  201
  4.3 Bases and Dimensions  206
    Bases  206
    Obtaining a Basis from an Arbitrary Finite Spanning Set  209
    Dimension  211
    Extending a Linearly Independent Subset to a Basis  213
  4.4 Coordinates with Respect to a Basis  218
  4.5 General Linear Mappings  226
  4.6 Matrix of a Linear Mapping  235
    The Matrix of L with Respect to the Basis B  235
    Change of Coordinates and Linear Mappings  240
  4.7 Isomorphisms of Vector Spaces  246

Chapter 5  Determinants  255
  5.1 Determinants in Terms of Cofactors  255
    The 3 × 3 Case  256
  5.2 Elementary Row Operations and the Determinant  264
    The Determinant and Invertibility  270
    Determinant of a Product  270
  5.3 Matrix Inverse by Cofactors and Cramer's Rule  274
    Cramer's Rule  276
  5.4 Area, Volume, and the Determinant  280
    Area and the Determinant  280
    The Determinant and Volume  283

Chapter 6  Eigenvectors and Diagonalization  289
  6.1 Eigenvalues and Eigenvectors  289
    Eigenvalues and Eigenvectors of a Mapping  289
    Eigenvalues and Eigenvectors of a Matrix  291
    Finding Eigenvectors and Eigenvalues  291
  6.2 Diagonalization  299
    Some Applications of Diagonalization  303
  6.3 Powers of Matrices and the Markov Process  307
    Systems of Linear Difference Equations  312
    The Power Method of Determining Eigenvalues  312
  6.4 Diagonalization and Differential Equations  315
    General Discussion  317
    A Practical Solution Procedure  317

Chapter 7  Orthonormal Bases  321
  7.1 Orthonormal Bases and Orthogonal Matrices  321
    Orthonormal Bases  321
    Coordinates with Respect to an Orthonormal Basis  323
    Change of Coordinates and Orthogonal Matrices  325
    Rotation of Axes in ℝ²  329
    A Note on Rotation Transformations and Change of Coordinates  329
  7.2 Projections and the Gram-Schmidt Procedure  333
    Projections onto a Subspace  333
    The Gram-Schmidt Procedure  337
  7.3 Method of Least Squares  342
    Overdetermined Systems  345
  7.4 Inner Product Spaces  348
    Inner Product Spaces  348
  7.5 Fourier Series  354
    The Inner Product ∫ₐᵇ f(x)g(x) dx  354
    Fourier Series  355

Chapter 8  Symmetric Matrices and Quadratic Forms  363
  8.1 Diagonalization of Symmetric Matrices  363
    The Principal Axis Theorem  366
  8.2 Quadratic Forms  372
    Quadratic Forms  372
    Classifications of Quadratic Forms  376
  8.3 Graphs of Quadratic Forms  380
    Graphs of Q(x) = k in ℝ³  385
  8.4 Applications of Quadratic Forms  388
    Small Deformations  388
    The Inertia Tensor  390

Chapter 9  Complex Vector Spaces  395
  9.1 Complex Numbers  395
    The Arithmetic of Complex Numbers  395
    The Complex Conjugate and Division  397
    The Complex Plane  398
    Polar Form  399
    Complex Multiplication as a Matrix Mapping  399
    Powers and the Complex Exponential  402
    n-th Roots  404
    Roots of Polynomial Equations  404
  9.2 Systems with Complex Numbers  407
    Complex Numbers in Electrical Circuit Equations  408
  9.3 Vector Spaces over ℂ  411
    Linear Mappings and Subspaces  413
  9.4 Eigenvectors in Complex Vector Spaces  415
    Complex Characteristic Roots of a Real Matrix and a Real Canonical Form  417
    The Case of a 2 × 2 Matrix  420
    The Case of a 3 × 3 Matrix  422
  9.5 Inner Products in Complex Vector Spaces  425
    Properties of Complex Inner Products  426
    The Cauchy-Schwarz and Triangle Inequalities  426
    Orthogonality in ℂⁿ and Unitary Matrices  429
  9.6 Hermitian Matrices and Unitary Diagonalization  432

Appendix A  Answers to Mid-Section Exercises  439
Appendix B  Answers to Practice Problems and Chapter Quizzes  465

Index  529
A Note to Students

Linear Algebra: What Is It?

Linear algebra is essentially the study of vectors, matrices, and linear mappings. Although many pieces of linear algebra have been studied for many centuries, it did not take its current form until the mid-twentieth century. It is now an extremely important topic in mathematics because of its application to many different areas.

Most people who have learned linear algebra and calculus believe that the ideas of elementary calculus (such as limit and integral) are more difficult than those of introductory linear algebra and that most problems in calculus courses are harder than those in linear algebra courses. So, at least by this comparison, linear algebra is not hard. Still, some students find learning linear algebra difficult. I think two factors contribute to the difficulty students have.

First, students do not see what linear algebra is good for. This is why it is important to read the applications in the text; even if you do not understand them completely, they will give you some sense of where linear algebra fits into the broader picture.

Second, some students mistakenly see mathematics as a collection of recipes for solving standard problems and are uncomfortable with the fact that linear algebra is "abstract" and includes a lot of "theory." There will be no long-term payoff in simply memorizing these recipes, however; computers carry them out far faster and more accurately than any human can. That being said, practising the procedures on specific examples is often an important step toward much more important goals: understanding the concepts used in linear algebra to formulate and solve problems and learning to interpret the results of calculations. Such understanding requires us to come to terms with some theory. In this text, many of our examples will be small.
However, as you work through these examples, keep in mind that when you apply these ideas later, you may very well have a million variables and a million equations. For instance, Google's PageRank system uses a matrix that has 25 billion columns and 25 billion rows; you don't want to do that by hand! When you are solving computational problems, always try to observe how your work relates to the theory you have learned.

Mathematics is useful in so many areas because it is abstract: the same good idea can unlock the problems of control engineers, civil engineers, physicists, social scientists, and mathematicians only because the idea has been abstracted from a particular setting. One technique solves many problems only because someone has established a theory of how to deal with these kinds of problems. We use definitions to try to capture important ideas, and we use theorems to summarize useful general facts about the kind of problems we are studying. Proofs not only show us that a statement is true; they can help us understand the statement, give us practice using important ideas, and make it easier to learn a given subject. In particular, proofs show us how ideas are tied together so we do not have to memorize too many disconnected facts.

Many of the concepts introduced in linear algebra are natural and easy, but some may seem unnatural and "technical" to beginners. Do not avoid these apparently more difficult ideas; use examples and theorems to see how these ideas are an essential part of the story of linear algebra. By learning the "vocabulary" and "grammar" of linear algebra, you will be equipping yourself with concepts and techniques that mathematicians, engineers, and scientists find invaluable for tackling an extraordinarily rich variety of problems.
Linear Algebra: Who Needs It?

Mathematicians

Linear algebra and its applications are a subject of continuing research. Linear algebra is vital to mathematics because it provides essential ideas and tools in areas as diverse as abstract algebra, differential equations, calculus of functions of several variables, differential geometry, functional analysis, and numerical analysis.
Engineers

Suppose you become a control engineer and have to design or upgrade an automatic control system. The system may be controlling a manufacturing process or perhaps an airplane landing system. You will probably start with a linear model of the system, requiring linear algebra for its solution. To include feedback control, your system must take account of many measurements (for the example of the airplane: position, velocity, pitch, etc.), and it will have to assess this information very rapidly in order to determine the correct control responses. A standard part of such a control system is a Kalman-Bucy filter, which is not so much a piece of hardware as a piece of mathematical machinery for doing the required calculations. Linear algebra is an essential part of the Kalman-Bucy filter.

If you become a structural engineer or a mechanical engineer, you may be concerned with the problem of vibrations in structures or machinery. To understand the problem, you will have to know about eigenvalues and eigenvectors and how they determine the normal modes of oscillation. Eigenvalues and eigenvectors are some of the central topics in linear algebra.

An electrical engineer will need linear algebra to analyze circuits and systems; a civil engineer will need linear algebra to determine internal forces in static structures and to understand principal axes of strain.

In addition to these fairly specific uses, engineers will also find that they need to know linear algebra to understand systems of differential equations and some aspects of the calculus of functions of two or more variables. Moreover, the ideas and techniques of linear algebra are central to numerical techniques for solving problems of heat and fluid flow, which are major concerns in mechanical engineering. And the ideas of linear algebra underlie advanced techniques such as Laplace transforms and Fourier analysis.
Physicists

Linear algebra is important in physics, partly for the reasons described above. In addition, it is essential in applications such as the inertia tensor in general rotating motion. Linear algebra is an absolutely essential tool in quantum physics (where, for example, energy levels may be determined as eigenvalues of linear operators) and relativity (where understanding change of coordinates is one of the central issues).
Life and Social Scientists

Input/output models, described by matrices, are often used in economics, and similar ideas can be used in modelling populations where one needs to keep track of subpopulations (generations, for example, or genotypes). In all sciences, statistical analysis of data is of great importance, and much of this analysis uses linear algebra; for example, the method of least squares (for regression) can be understood in terms of projections in linear algebra.
Managers

A manager in industry will have to make decisions about the best allocation of resources: enormous amounts of computer time around the world are devoted to linear programming algorithms that solve such allocation problems. The same sorts of techniques used in these algorithms play a role in some areas of mine management. Linear algebra is essential here as well.

So who needs linear algebra? Almost every mathematician, engineer, or scientist will find linear algebra an important and useful tool.
Will these applications be explained in this book?

Unfortunately, most of these applications require too much specialized background to be included in a first-year linear algebra book. To give you an idea of how some of these concepts are applied, a few interesting applications are briefly covered in Sections 1.4, 1.5, 2.4, 5.4, 6.3, 6.4, 7.3, 7.5, 8.3, 8.4, and 9.2. You will get to see many more applications of linear algebra in your future courses.
A Note to Instructors

Welcome to the second edition of Introduction to Linear Algebra for Science and Engineering. It has been a pleasure to revise Daniel Norman's first edition for a new generation of students and teachers. Over the past several years, I have read many articles and spoken to many colleagues and students about the difficulties faced by teachers and learners of linear algebra. In particular, it is well known that students typically find the computational problems easy but have great difficulty in understanding the abstract concepts and the theory. Inspired by this research, I developed a pedagogical approach that addresses the most common problems encountered when teaching and learning linear algebra. I hope that you will find this approach to teaching linear algebra as successful as I have.
Changes to the Second Edition

• Several worked-out examples have been added, as well as a variety of mid-section exercises (discussed below).
• Vectors in ℝⁿ are now always represented as column vectors and are denoted with the usual vector arrow (e.g., ⃗x). Vectors in general vector spaces are still denoted in boldface.
• Some material has been reorganized to allow students to see important concepts early and often, while also giving greater flexibility to instructors. For example, the concepts of linear independence, spanning, and bases are now introduced in Chapter 1 in ℝⁿ, and students use these concepts in Chapters 2 and 3 so that they are very comfortable with them before being taught general vector spaces.
• The material on complex numbers has been collected and placed in Chapter 9, at the end of the text. However, if one desires, it can be distributed throughout the text appropriately.
• There is a greater emphasis on teaching the mathematical language and using mathematical notation.
• All-new figures clearly illustrate important concepts, examples, and applications.
• The text has been redesigned to improve readability.
Approach and Organization

Students typically have little trouble with computational questions, but they often struggle with abstract concepts and proofs. This is problematic because computers perform the computations in the vast majority of real-world applications of linear algebra. Human users, meanwhile, must apply the theory to transform a given problem into a linear algebra context, input the data properly, and interpret the result correctly. The main goal of this book is to mix theory and computations throughout the course. The benefits of this approach are as follows:

• It prevents students from mistaking linear algebra as very easy and very computational early in the course and then becoming overwhelmed by abstract concepts and theories later.
• It allows important linear algebra concepts to be developed and extended more slowly.
• It encourages students to use computational problems to help understand the theory of linear algebra rather than blindly memorize algorithms.
One example of this approach is our treatment of the concepts of spanning and linear independence. They are both introduced in Section 1.2 in ℝⁿ, where they can be motivated in a geometrical context. They are then used again for matrices in Section 3.1 and polynomials in Section 4.1, before they are finally extended to general vector spaces in Section 4.2. The following are some other features of the text's organization:

• The idea of linear mappings is introduced early in a geometrical context and is used to explain aspects of matrix multiplication, matrix inversion, and features of systems of linear equations. Geometrical transformations provide intuitively satisfying illustrations of important concepts.
• Topics are ordered to give students a chance to work with concepts in a simpler setting before using them in a much more involved or abstract setting. For example, before reaching the definition of a vector space in Section 4.2, students will have seen the 10 vector space axioms and the concepts of linear independence and spanning for three different vector spaces, and they will have had some experience in working with bases and dimensions. Thus, instead of being bombarded with new concepts at the introduction of general vector spaces, students will just be generalizing concepts with which they are already familiar.
Pedagogical Features

Since mathematics is best learned by doing, the following pedagogical elements are included in the book.

• A selection of routine mid-section exercises is provided, with solutions included in the back of the text. These allow students to use and test their understanding of one concept before moving on to other concepts in the section.
• Practice problems are provided for students at the end of each section. See "A Note on the Exercises and Problems" below.
• Examples, theorems, and definitions are called out in the margins for easy reference.
Applications

One of the difficulties in any linear algebra course is that the applications of linear algebra are not so immediate or so intuitively appealing as those of elementary calculus. Most convincing applications of linear algebra require a fairly lengthy buildup of background that would be inappropriate in a linear algebra text. However, without some of these applications, many students would find it difficult to remain motivated to learn linear algebra. An additional difficulty is that the applications of linear algebra are so varied that there is very little agreement on which applications should be covered.

In this text we briefly discuss a few applications to give students some easy samples. Additional applications are provided on the Companion Website so that instructors who wish to cover some of them can pick and choose at their leisure without increasing the size (and hence the cost) of the book.
List of Applications

• Minimum distance from a point to a plane (Section 1.4)
• Area and volume (Section 1.5, Section 5.4)
• Electrical circuits (Section 2.4, Section 9.2)
• Planar trusses (Section 2.4)
• Linear programming (Section 2.4)
• Magic squares (Chapter 4 Review)
• Markov processes (Section 6.3)
• Differential equations (Section 6.4)
• Curve of best fit (Section 7.3)
• Overdetermined systems (Section 7.3)
• Graphing quadratic forms (Section 8.3)
• Small deformations (Section 8.4)
• The inertia tensor (Section 8.4)
Computers

As explained in "A Note on the Exercises and Problems," which follows, some problems in the book require access to appropriate computer software. Students should realize that the theory of linear algebra does not apply only to matrices of small size with integer entries. However, since there are many ideas to be learned in linear algebra, numerical methods are not discussed. Some numerical issues, such as accuracy and efficiency, are addressed in notes and problems.
A Note on the Exercises and Problems

Most sections contain mid-section exercises. These mid-section exercises have been created to allow students to check their understanding of key concepts before continuing on to new concepts in the section. Thus, when reading through a chapter, a student should always complete each exercise before continuing to read the rest of the chapter.

At the end of each section, problems are divided into A, B, C, and D problems.

The A Problems are practice problems, intended to provide a sufficient variety and number of standard computational problems, as well as the odd theoretical problem, for students to master the techniques of the course; answers are provided at the back of the text. Full solutions are available in the Student Solutions Manual (sold separately).

The B Problems are homework problems and essentially duplicates of the A Problems with no answers provided, for instructors who want such exercises for homework. In a few cases, the B Problems are not exactly parallel to the A Problems.

The C Problems require the use of a suitable computer program. These problems are designed not only to help students familiarize themselves with using computer software to solve linear algebra problems, but also to remind students that linear algebra uses real numbers, not only integers or simple fractions.

The D Problems usually require students to work with general cases, to write simple arguments, or to invent examples. These are important aspects of mastering mathematical ideas, and all students should attempt at least some of these, and not get discouraged if they make slow progress. With effort, most students will be able to solve many of these problems and will benefit greatly in the understanding of the concepts and connections in doing so.

In addition to the mid-section exercises and end-of-section problems, there is a sample Chapter Quiz in the Chapter Review at the end of each chapter. Students should be aware that their instructors may have a different idea of what constitutes an appropriate test on this material.

At the end of each chapter, there are some Further Problems; these are similar to the D Problems and provide an extended investigation of certain ideas or applications of linear algebra. Further Problems are intended for advanced students who wish to challenge themselves and explore additional concepts.
Using This Text to Teach Linear Algebra

There are many different approaches to teaching linear algebra. Although we suggest covering the chapters in order, the text has been written to try to accommodate two main strategies.
Early Vector Spaces

We believe that it is very beneficial to introduce general vector spaces immediately after students have gained some experience in working with a few specific examples of vector spaces. Students find it easier to generalize the concepts of spanning, linear independence, bases, dimension, and linear mappings while the earlier specific cases are still fresh in their minds. In addition, we feel that it can be unhelpful to students to have determinants available too soon. Some students are far too eager to latch onto mindless algorithms involving determinants (for example, to check linear independence of three vectors in three-dimensional space) rather than actually come to terms with the defining ideas. Finally, this approach allows eigenvalues, eigenvectors, and diagonalization to be highlighted near the end of the first course. If diagonalization is taught too soon, its importance can be lost on students.
Early Determinants and Diagonalization

Some reviewers have commented that they want to be able to cover determinants and diagonalization before abstract vector spaces and that in some introductory courses, abstract vector spaces may not be covered at all. Thus, this text has been written so that Chapters 5 and 6 may be taught prior to Chapter 4. (Note that all required information about subspaces, bases, and dimension for diagonalization of matrices over ℝ is covered in Chapters 1, 2, and 3.) Moreover, there is a natural flow from matrix inverses and elementary matrices at the end of Chapter 3 to determinants in Chapter 5.
A Course Outline

The following table indicates the sections in each chapter that we consider to be "central material":

Chapter | Central Material     | Optional Material
1       | 1, 2, 3, 4, 5        |
2       | 1, 2, 3              | 4
3       | 1, 2, 3, 4, 5, 6     | 7
4       | 1, 2, 3, 4, 5, 6, 7  |
5       | 1, 2, 3              | 4
6       | 1, 2                 | 3, 4
7       | 1, 2                 | 3, 4, 5
8       | 1, 2                 | 3, 4
9       | 1, 2, 3, 4, 5, 6     |
Supplements

We are pleased to offer a variety of excellent supplements to students and instructors using the Second Edition.

The new Student Solutions Manual (ISBN: 978-0-321-80762-5), prepared by the author of the second edition, contains full solutions to the Practice Problems and Chapter Quizzes. It is available to students at low cost.

MyMathLab® Online Course (access code required) delivers proven results in helping individual students succeed. It provides engaging experiences that personalize, stimulate, and measure learning for each student. And, it comes from a trusted partner with educational expertise and an eye on the future. To learn more about how
MyMathLab combines proven learning applications with powerful assessment, visit www.mymathlab.com or contact your Pearson representative.

The new Instructor's Resource CD-ROM (ISBN: 978-0-321-80759-5) includes the following valuable teaching tools:

• An Instructor's Solutions Manual for all exercises in the text: Practice Problems, Homework Problems, Computer Problems, Conceptual Problems, Chapter Quizzes, and Further Problems.
• A Test Bank with a large selection of questions for every chapter of the text.
• Customizable Beamer Presentations for each chapter.
• An Image Library that includes high-quality versions of the Figures, Theorems, Corollaries, Lemmas, and Algorithms in the text.
Finally, the second edition is available as a CourseSmart eTextbook (ISBN: 978-0-321-75005-1). CourseSmart goes beyond traditional expectations, providing instant, online access to the textbook and course materials at a lower cost for students (average savings of 60%). With instant access from any computer and the ability to search the text, students will find the content they need quickly, no matter where they are. And with online tools like highlighting and note taking, students can save time and study efficiently. Instructors can save time and hassle with a digital eTextbook that allows them to search for the most relevant content at the very moment they need it. Whether it's evaluating textbooks or creating lecture notes to help students with difficult concepts, CourseSmart can make life a little easier. See all the benefits at www.coursesmart.com/instructors or www.coursesmart.com/students.

Pearson's technology specialists work with faculty and campus course designers to ensure that Pearson technology products, assessment tools, and online course materials are tailored to meet your specific needs. This highly qualified team is dedicated to helping schools take full advantage of a wide range of educational resources by assisting in the integration of a variety of instructional materials and media formats. Your local Pearson Canada sales representative can provide you with more details about this service program.
Acknowledgments

Thanks are expressed to:

Agnieszka Wolczuk: for her support, encouragement, help with editing, and tasty snacks.

Mike La Croix: for all of the amazing figures in the text and for his assistance on editing, formatting, and LaTeXing.

Stephen New, Martin Pei, Barbara Csima, Emilio Paredes: for proofreading and their many valuable comments and suggestions.

Conrad Hewitt, Robert Andre, Uldis Celmins, C. T. Ng, and many other of my colleagues who have taught me things about linear algebra and how to teach it, as well as providing many helpful suggestions for the text.

To all of the reviewers of the text, whose comments, corrections, and recommendations have resulted in many positive improvements:
Robert Andre, University of Waterloo
Luigi Bilotto, Vanier College
Dietrich Burbulla, University of Toronto
Dr. Alistair Carr, Monash University
Gerald Cliff, University of Alberta
Antoine Khalil, CEGEP Vanier
Hadi Kharaghani, University of Lethbridge
Gregory Lewis, University of Ontario Institute of Technology
Eduardo Martinez-Pedroza, McMaster University
Dorette Pronk, Dalhousie University
Dr. Alyssa Sankey, University of New Brunswick
Manuele Santoprete, Wilfrid Laurier University
Alistair Savage, University of Ottawa
Denis Sevee, John Abbott College
Mark Solomonovich, Grant MacEwan University
Dr. Pamini Thangarajah, Mount Royal University
Dr. Chris Tisdell, The University of New South Wales
Murat Tuncali, Nipissing University
Brian Wetton, University of British Columbia
Thanks also to the many anonymous reviewers of the manuscript.

Cathleen Sullivan, John Lewis, Patricia Ciardullo, and Sarah Lukaweski: for all of their hard work in making the second edition of this text possible and for their suggestions and editing.

In addition, I thank the team at Pearson Canada for their support during the writing and production of this text.

Finally, a very special thank you to Daniel Norman and all those who contributed to the first edition.
Dan Wolczuk University of Waterloo
CHAPTER 1

Euclidean Vector Spaces

CHAPTER OUTLINE
1.1  Vectors in ℝ² and ℝ³
1.2  Vectors in ℝⁿ
1.3  Length and Dot Products
1.4  Projections and Minimum Distance
1.5  Cross-Products and Volumes
Some of the material in this chapter will be familiar to many students, but some ideas that are introduced here will be new to most. In this chapter we will look at operations on and important concepts related to vectors. We will also look at some applications of vectors in the familiar setting of Euclidean space. Most of these concepts will later be extended to more general settings. A firm understanding of the material from this chapter will help greatly in understanding the topics in the rest of this book.
1.1 Vectors in R^2 and R^3

We begin by considering the two-dimensional plane in Cartesian coordinates. Choose an origin O and two mutually perpendicular axes, called the x1-axis and the x2-axis, as shown in Figure 1.1.1. Then a point P in the plane is identified by the 2-tuple (p1, p2), called the coordinates of P, where p1 is the distance from P to the x2-axis, with p1 positive if P is to the right of this axis and negative if P is to the left. Similarly, p2 is the distance from P to the x1-axis, with p2 positive if P is above this axis and negative if P is below. You have already learned how to plot graphs of equations in this plane.
Figure 1.1.1 Coordinates in the plane: the point P = (p1, p2).
For applications in many areas of mathematics, and in many subjects such as physics and economics, it is useful to view points more abstractly. In particular, we will view them as vectors and provide rules for adding them and multiplying them by constants.
Definition (R^2)
R^2 is the set of all vectors of the form [x1, x2], where x1 and x2 are real numbers called the components of the vector. Mathematically, we write
R^2 = { [x1, x2] : x1, x2 ∈ R }
Remark
We shall use the notation x = [x1, x2] to denote vectors in R^2.

Although we are viewing the elements of R^2 as vectors, we can still interpret these geometrically as points. That is, the vector p = [p1, p2] can be interpreted as the point P(p1, p2). Graphically, this is often represented by drawing an arrow from (0, 0) to (p1, p2), as shown in Figure 1.1.2. Note, however, that the points between (0, 0) and (p1, p2) should not be thought of as points "on the vector." The representation of a vector as an arrow is particularly common in physics; force and acceleration are vector quantities that can conveniently be represented by an arrow of suitable magnitude and direction.
Figure 1.1.2 Graphical representation of a vector as an arrow from O = (0, 0).

Definition (Addition and Scalar Multiplication in R^2)
If x = [x1, x2], y = [y1, y2], and t ∈ R, then we define addition of vectors by
x + y = [x1 + y1, x2 + y2]
and the scalar multiplication of a vector by a factor of t, called a scalar, by
t x = [t x1, t x2]
The addition of two vectors is illustrated in Figure 1.1.3: construct a parallelogram with vectors 1 and y as adjacent sides; then 1 + y is the vector corresponding to the vertex of the parallelogram opposite to the origin. Observe that the components really are added according to the definition. This is often called the "parallelogram rule for addition."
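The componentwise definitions translate directly into code. A minimal sketch in Python (ours, not part of the text), with vectors represented as tuples:

```python
# Vectors in R^2 as 2-tuples; the operations follow the definitions above.

def add(x, y):
    """Parallelogram rule, componentwise: (x1 + y1, x2 + y2)."""
    return (x[0] + y[0], x[1] + y[1])

def scalar_mult(t, x):
    """Scalar multiplication: (t*x1, t*x2)."""
    return (t * x[0], t * x[1])

x = (-2, 3)
y = (5, 1)
print(add(x, y))                    # (3, 4)
print(scalar_mult(-1, y))           # (-5, -1)
print(add(x, scalar_mult(-1, y)))   # x - y means x + (-1)y: (-7, 2)
```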
Figure 1.1.3 Addition of vectors p and q.
EXAMPLE 1
Let x = [-2, 3] and y = [5, 1]. Then
x + y = [-2 + 5, 3 + 1] = [3, 4]
Similarly, scalar multiplication is illustrated in Figure 1.1.4. Observe that multiplication by a negative scalar reverses the direction of the vector. It is important to note that x − y is to be interpreted as x + (−1)y.

Figure 1.1.4 Scalar multiplication of the vector d, showing (1.5)d and (−1)d.
EXAMPLE 2
Let a = [a1, a2], v = [v1, v2], and w = [w1, w2] be given vectors in R^2. Calculate a + v, 3w, and 2v − w.
Solution: Working componentwise,
a + v = [a1 + v1, a2 + v2]
3w = [3w1, 3w2]
2v − w = 2v + (−1)w = [2v1 − w1, 2v2 − w2]

EXERCISE 1
With a, v, and w as in Example 2, calculate each of the following and illustrate with a sketch.
(a) a + w
(b) −v
(c) (a + w) − v
The vectors e1 = [1, 0] and e2 = [0, 1] play a special role in our discussion of R^2. We will call the set {e1, e2} the standard basis for R^2. (We shall discuss the concept of a basis further in Section 1.2.) The basis vectors e1 and e2 are important because any vector v = [v1, v2] can be written as a sum of scalar multiples of e1 and e2 in exactly one way:
v = v1 e1 + v2 e2

Remark
In physics and engineering, it is common to use the notation i = [1, 0] and j = [0, 1] instead.

We will use the phrase linear combination to mean "sum of scalar multiples." So, we have shown above that any vector x ∈ R^2 can be written as a unique linear combination of the standard basis vectors.

One other vector in R^2 deserves special mention: the zero vector, 0 = [0, 0]. Some important properties of the zero vector, which are easy to verify, are that for any x ∈ R^2,
(1) 0 + x = x
(2) x + (−1)x = 0
(3) 0x = 0
The Vector Equation of a Line in R^2
In Figure 1.1.4, it is apparent that the set of all multiples of a vector d creates a line through the origin. We make this our definition of a line in R^2: a line through the origin in R^2 is a set of the form
{ t d : t ∈ R }
Often we do not use formal set notation but simply write the vector equation of the line:
x = t d,  t ∈ R
The vector d is called the direction vector of the line.

Similarly, we define the line through p with direction vector d to be the set
{ p + t d : t ∈ R }
which has the vector equation
x = p + t d,  t ∈ R
This line is parallel to the line with equation x = t d, t ∈ R, because of the parallelogram rule for addition. As shown in Figure 1.1.5, each point on the line through p can be obtained from a corresponding point on the line x = t d by adding the vector p. We say that the line has been translated by p. More generally, two lines are parallel if the direction vector of one line is a non-zero multiple of the direction vector of the other line.
Figure 1.1.5 The line with vector equation x = t d + p.

EXAMPLE 3
A vector equation of the line through the point P(2, −3) with direction vector d is
x = [2, −3] + t d,  t ∈ R
EXAMPLE 4
Write the vector equation of a line through P(1, 2) parallel to the line with vector equation x = t d, t ∈ R.
Solution: Since they are parallel, we can choose the same direction vector. Hence, the vector equation of the line is
x = [1, 2] + t d,  t ∈ R
EXERCISE 2
Write the vector equation of a line through P(0, 0) parallel to a given line with vector equation x = p + t d, t ∈ R.
Sometimes the components of a vector equation are written separately: x = p + t d becomes
x1 = p1 + t d1
x2 = p2 + t d2,  t ∈ R
This is referred to as the parametric equation of the line. The familiar scalar form of the equation of the line is obtained by eliminating the parameter t. Provided that d1 ≠ 0 and d2 ≠ 0,
(x1 − p1)/d1 = t = (x2 − p2)/d2
or
x2 = p2 + (d2/d1)(x1 − p1)
What can you say about the line if d1 = 0 or d2 = 0?
EXAMPLE 5
Write the vector, parametric, and scalar equations of the line passing through the point P(3, 4) with direction vector [−5, 1].
Solution: The vector equation is
x = [3, 4] + t [−5, 1],  t ∈ R
So, the parametric equations are
x1 = 3 − 5t
x2 = 4 + t,  t ∈ R
The scalar equation is x2 = 4 − (1/5)(x1 − 3).
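The three descriptions in Example 5 agree, which is easy to check by sampling the parametric form and substituting into the scalar form. A quick sketch (ours, not from the text):

```python
# Line through P(3, 4) with direction d = (-5, 1), as in Example 5.
p = (3, 4)
d = (-5, 1)

def point_on_line(t):
    """Parametric form: x1 = 3 - 5t, x2 = 4 + t."""
    return (p[0] + t * d[0], p[1] + t * d[1])

for t in [-2.0, 0.0, 1.5, 3.0]:
    x1, x2 = point_on_line(t)
    # Scalar form obtained by eliminating t: x2 = 4 - (1/5)(x1 - 3)
    assert abs(x2 - (4 - (x1 - 3) / 5)) < 1e-12
print("all sampled points satisfy the scalar equation")
```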
Directed Line Segments
For dealing with certain geometrical problems, it is useful to introduce directed line segments. We denote the directed line segment from point P to point Q by PQ. We think of it as an "arrow" starting at P and pointing towards Q. We shall identify directed line segments from the origin O with the corresponding vectors; we write OP = p, OQ = q, and so on. A directed line segment that starts at the origin is called the position vector of the point.

For many problems, we are interested only in the direction and length of the directed line segment; we are not interested in the point where it is located. For example, in Figure 1.1.3, we may wish to treat the line segment QR as if it were the same as OP. Taking our cue from this example, for arbitrary points P, Q, R in R^2, we define QR to be equivalent to OP if r − q = p. In this case, we have used one directed line segment OP starting from the origin in our definition.

Figure 1.1.6 A directed line segment from P to Q.

More generally, for arbitrary points Q, R, S, and T in R^2, we define QR to be equivalent to ST if they are both equivalent to the same OP for some P. That is, if r − q = p and t − s = p for the same p. We can abbreviate this by simply requiring that
r − q = t − s

EXAMPLE 6
For points Q(1, 3), R(6, −1), S(−2, 4), and T(3, 0), we have that QR is equivalent to ST because
r − q = [6, −1] − [1, 3] = [5, −4] = [3, 0] − [−2, 4] = t − s
In some problems, where it is not necessary to distinguish between equivalent directed line segments, we "identify" them (that is, we treat them as the same object) and write PQ = RS. Indeed, we identify them with the corresponding line segment starting at the origin, so in Example 6 we write
QR = ST = [5, −4]

Remark
Writing QR = ST is a bit sloppy (an abuse of notation) because QR is not really the same object as ST. However, introducing the precise language of "equivalence classes" and more careful notation with directed line segments is not helpful at this stage.

By introducing directed line segments, we are encouraged to think about vectors that are located at arbitrary points in space. This is helpful in solving some geometrical problems, as we shall see below.
EXAMPLE 7
Find a vector equation of the line through P(1, 2) and Q(3, −1).
Solution: The direction of the line is
PQ = q − p = [3, −1] − [1, 2] = [2, −3]
Hence, a vector equation of the line with direction PQ that passes through P(1, 2) is
x = p + t PQ = [1, 2] + t [2, −3],  t ∈ R

Observe in the example above that we would have the same line if we started at the second point and "moved" toward the first point, or even if we took a direction vector in the opposite direction. Thus, the same line is described by the vector equations
x = [3, −1] + r [2, −3],  r ∈ R
x = [3, −1] + s [−2, 3],  s ∈ R
x = [1, 2] + t [−2, 3],  t ∈ R
In fact, there are infinitely many descriptions of a line: we may choose any point on the line, and we may use any non-zero multiple of the direction vector.
EXERCISE 3
Find a vector equation of the line through P(1, 1) and Q(−2, 2).
Vectors and Lines in R^3
Everything we have done so far works perfectly well in three dimensions. We choose an origin O and three mutually perpendicular axes, as shown in Figure 1.1.7. The x1-axis is usually pictured coming out of the page (or blackboard), the x2-axis to the right, and the x3-axis towards the top of the picture.
Figure 1.1.7 The positive coordinate axes in R^3.
It should be noted that we are adopting the convention that the coordinate axes form a right-handed system. One way to visualize a right-handed system is to spread out the thumb, index finger, and middle finger of your right hand. The thumb is the x1 -axis, the index finger is the x2-axis, and the middle finger is the x3-axis. See Figure 1.1.8.
Figure 1.1.8 Identifying a right-handed system.
We now define R^3 to be the three-dimensional analog of R^2.

Definition (R^3)
R^3 is the set of all vectors of the form [x1, x2, x3], where x1, x2, and x3 are real numbers. Mathematically, we write
R^3 = { [x1, x2, x3] : x1, x2, x3 ∈ R }
Definition (Addition and Scalar Multiplication in R^3)
If x = [x1, x2, x3], y = [y1, y2, y3], and t ∈ R, then we define addition of vectors by
x + y = [x1 + y1, x2 + y2, x3 + y3]
and the scalar multiplication of a vector by a factor of t by
t x = [t x1, t x2, t x3]
Addition still follows the parallelogram rule. It may help you to visualize this if you realize that two vectors in R^3 must lie within a plane in R^3, so that the two-dimensional picture is still valid. See Figure 1.1.9.

Figure 1.1.9 Two-dimensional parallelogram rule in R^3.

EXAMPLE 8
Let u = [u1, u2, u3], v = [v1, v2, v3], and w = [w1, w2, w3] be given vectors in R^3. Calculate v + u, −w, and −v + 2w − u.
Solution: Working componentwise,
v + u = [v1 + u1, v2 + u2, v3 + u3]
−w = (−1)w = [−w1, −w2, −w3]
−v + 2w − u = [−v1 + 2w1 − u1, −v2 + 2w2 − u2, −v3 + 2w3 − u3]
It is useful to introduce the standard basis for R^3 just as we did for R^2. Define
e1 = [1, 0, 0],  e2 = [0, 1, 0],  e3 = [0, 0, 1]
Then any vector v = [v1, v2, v3] can be written as the linear combination
v = v1 e1 + v2 e2 + v3 e3

Remark
In physics and engineering, it is common to use the notation i = e1, j = e2, and k = e3 instead.

The zero vector 0 = [0, 0, 0] in R^3 has the same properties as the zero vector in R^2.
Directed line segments are the same in three-dimensional space as in the two-dimensional case.

A line through the point P in R^3 (corresponding to a vector p) with direction vector d ≠ 0 can be described by a vector equation:
x = p + t d,  t ∈ R
It is important to realize that a line in R^3 cannot be described by a single scalar linear equation, as in R^2. We shall see in Section 1.3 that such an equation describes a plane in R^3.
EXAMPLE 9
Find a vector equation of the line that passes through the points P(1, 5, −2) and Q(4, −1, 3).
Solution: A direction vector is
d = q − p = [3, −6, 5]
Hence a vector equation of the line is
x = [1, 5, −2] + t [3, −6, 5],  t ∈ R
Note that the corresponding parametric equations are x1 = 1 + 3t, x2 = 5 − 6t, and x3 = −2 + 5t.

EXERCISE 4
Find a vector equation of the line that passes through the points P(1, 2, 2) and Q(1, −2, 3).
PROBLEMS 1.1
Practice Problems

A1 Compute each of the given linear combinations of vectors in R^2 and illustrate with a sketch.
A2 Compute each of the given linear combinations of vectors in R^2.
A3 Compute each of the given linear combinations of vectors in R^3.
A4 Let v and w be given vectors in R^3. Determine
(a) 2v − 3w
(b) −3(v + 2w) + 5v
(c) a such that w − 2a = 3v
(d) a such that a − 3v = 2a
A5 Let v and w be given vectors in R^3. Determine
(a) sv + tw for given scalars s and t
(b) 2(v + w) − (2v − 3w)
(c) a such that w − a = 2v
(d) a such that sa + tv = w for given scalars s and t
A6 Consider the points P(2, 3, 1), Q(3, 1, −2), R(1, 4, 0), and S(−5, 1, 5). Determine PQ, PR, PS, QR, and SR, and verify that PQ + QR = PR = PS + SR.
A7 Write a vector equation of the line passing through the given point with the given direction vector.
(a) P(3, 4), with a given d
(b) P(2, 3), with a given d
(c) P(2, 0, 5), with a given d
(d) P(4, 1, 5), with a given d
A8 Write a vector equation for the line that passes through the given points.
(a) P(−1, 2), Q(2, −3)
(b) P(4, 1), Q(−2, −1)
(c) P(1, 3, −5), Q(−2, 1, 0)
(d) P(−2, 1, 1), Q(4, 2, 2)
(e) two given points with fractional coordinates
A9 For each of the following lines in R^2, determine a vector equation and parametric equations.
(a) x2 = 3x1 + 2
(b) 2x1 + 3x2 = 5
A10 (a) A set of points in R^n is collinear if all the points lie on the same line. By considering directed line segments, give a general method for determining whether a given set of three points is collinear.
(b) Determine whether the points P(1, 2), Q(4, 1), and R(−5, 4) are collinear. Show how you decide.
(c) Determine whether the points S(1, 0, 1), T(3, −2, 3), and U(−3, 4, −1) are collinear. Show how you decide.
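The method Problem A10 asks for (compare directed line segments) can be sketched computationally. The code below is our own illustration, not the book's solution, and works in any dimension:

```python
def collinear(p, q, r):
    """P, Q, R are collinear iff PQ and PR are parallel vectors."""
    n = len(p)
    pq = [q[i] - p[i] for i in range(n)]
    pr = [r[i] - p[i] for i in range(n)]
    # Parallel iff every 2x2 determinant of component pairs vanishes.
    return all(pq[i] * pr[j] - pq[j] * pr[i] == 0
               for i in range(n) for j in range(i + 1, n))

print(collinear((1, 2), (4, 1), (-5, 4)))             # A10(b): True
print(collinear((1, 0, 1), (3, -2, 3), (-3, 4, -1)))  # A10(c): False
```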
Homework Problems

B1 Compute each of the given linear combinations of vectors in R^2 and illustrate with a sketch.
B2 Compute each of the given linear combinations of vectors in R^2.
B3 Compute each of the given linear combinations of vectors in R^3.
B4 Let v and w be given vectors in R^3. Determine
(a) 2v − 3w
(b) −2(v − w) − 3w
(c) a such that w − 2a = 3v
(d) a such that 2a + 3w = v
B5 Let v and w be given vectors in R^3. Determine
(a) 3v − 2w
(b) sv + tw for given scalars s and t
(c) a such that v + a = v
(d) a such that 2a − w = 2v
B6 (a) Consider the points P(1, 4, 1), Q(4, 3, −1), R(−1, 4, 2), and S(8, 6, −5). Determine PQ, PR, PS, QR, and SR, and verify that PQ + QR = PR = PS + SR.
(b) Consider the points P(3, −2, 1), Q(2, 7, −3), R(3, 1, 5), and S(−2, 4, −1). Determine PQ, PR, PS, QR, and SR, and verify that PQ + QR = PR = PS + SR.
B7 Write a vector equation of the line passing through the given point with the given direction vector.
(a) P(−3, 4), with a given d
(b) P(0, 0), with a given d
(c) P(2, 3, −1), with a given d
(d) P(3, 1, 2), with a given d
B8 Write a vector equation for the line that passes through the given points.
(a) P(3, 1), Q(1, 2)
(b) P(1, −2, 1), Q(0, 0, 0)
(c) P(2, −6, 3), Q(−1, 5, 2)
(d) two given points with fractional coordinates
B9 For each of the following lines in R^2, determine a vector equation and parametric equations.
(a) x2 = −2x1 + 3
(b) x1 + 2x2 = 3
B10 (You will need the solution from Problem A10 (a) to answer this.) Determine whether the points P(2, 1, 1), Q(1, 2, 3), and R(4, −1, −3) are collinear. Show how you decide.
To match what we did in R^2 and R^3, we make the following definitions.

Definition (Line in R^n)
Let p, v ∈ R^n with v ≠ 0. Then we call the set with vector equation x = p + t1 v, t1 ∈ R, a line in R^n that passes through p.

Definition (Plane in R^n)
Let v1, v2, p ∈ R^n, with {v1, v2} being a linearly independent set. Then the set with vector equation x = p + t1 v1 + t2 v2, t1, t2 ∈ R, is called a plane in R^n that passes through p.

Definition (Hyperplane in R^n)
Let v1, ..., v_{n−1}, p ∈ R^n, with {v1, ..., v_{n−1}} being linearly independent. Then the set with vector equation x = p + t1 v1 + ··· + t_{n−1} v_{n−1}, ti ∈ R, is called a hyperplane in R^n that passes through p.
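Whether Span{v1, ..., vk} is a line, a plane, or a hyperplane in R^n comes down to how many of the spanning vectors are linearly independent, and that can be checked numerically by row reduction. A small rank routine (our sketch, using floats with a tolerance; not from the text):

```python
def rank(vectors, tol=1e-10):
    """Number of linearly independent vectors, via Gaussian elimination."""
    rows = [list(map(float, v)) for v in vectors]
    r = 0
    n = len(rows[0])
    for col in range(n):
        # Find a pivot in this column at or below row r.
        pivot = next((i for i in range(r, len(rows)) if abs(rows[i][col]) > tol), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(len(rows)):
            if i != r and abs(rows[i][col]) > tol:
                f = rows[i][col] / rows[r][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

# Three vectors in R^3 whose third is the sum of the first two: rank 2,
# so they span a plane, not all of R^3.
print(rank([(1, 0, 1), (0, 1, 1), (1, 1, 2)]))  # 2
```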
EXAMPLE 9
The set Span{v1, v2, v3} of three given vectors in R^4 is a hyperplane through the origin, since the set {v1, v2, v3} is linearly independent in R^4.

EXAMPLE 10
Show that the span of three given vectors in R^3, one of which is a linear combination of the other two, defines a plane in R^3.
Solution: Observe that the set is linearly dependent, since the third vector can be written as a linear combination of the first two. Hence, the simplified vector equation of the set spanned by these vectors is
x = t1 v1 + t2 v2,  t1, t2 ∈ R
Since the set {v1, v2} is linearly independent, this is the vector equation of a plane passing through the origin in R^3.
PROBLEMS 1.2
Practice Problems

A1 Compute each of the given linear combinations of vectors in R^4.
A2 For each of the given sets, show that the set is or is not a subspace of the appropriate R^n.
A3 Determine whether the following sets are subspaces of R^4. Explain.
(a) {x ∈ R^4 | x1 + x2 + x3 + x4 = 0}
(b)-(f) further sets defined by given linear or nonlinear conditions on x1, ..., x4
A4 Show that each of the given sets is linearly dependent. Do so by writing a non-trivial linear combination of the vectors that equals the zero vector.
A5 Determine whether the given sets represent lines, planes, or hyperplanes in R^4. Give a basis for each.
A6 Let p, d ∈ R^n. Prove that x = p + t d, t ∈ R, is a subspace of R^n if and only if p is a scalar multiple of d.
A7 Suppose that B = {v1, ..., vk} is a linearly independent set in R^n. Prove that any non-empty subset of B is linearly independent.
Homework Problems

B1 Compute each of the given linear combinations of vectors in R^4.
B2 For each of the given sets, show that the set is or is not a subspace of the appropriate R^n.
B3 Determine whether the following sets are subspaces of R^4. Explain.
(a) {x ∈ R^4 | 2x1 − 5x4 = 7, 3x2 = 2x4}
(b) further sets defined by given linear or nonlinear conditions on x1, ..., x4
B4 Show that each of the given sets is linearly dependent by writing a non-trivial linear combination of the vectors that equals the zero vector.
B5 Determine whether the given sets represent lines, planes, or hyperplanes in R^4. Give a basis for each.
Computer Problems

C1 Let v1, v2, v3, v4 be given vectors in R^4 with decimal entries (for example, entries such as 1.23, 4.21, −3.14, and 0.34). Use computer software to evaluate each of the following.
(a) 3v1 − 2v2 + 5v3 − 3v4
(b) 2.4v1 − 1.3v2 + √2 v3 − √3 v4
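Problem C1 is exactly the kind of calculation worth scripting. In the sketch below, the grouping of the scanned decimal entries into four vectors is our own guess (illustrative data only, not reliably the book's):

```python
import math

def lin_comb(coeffs, vectors):
    """Return c1*v1 + ... + ck*vk, componentwise."""
    n = len(vectors[0])
    return [sum(c * v[i] for c, v in zip(coeffs, vectors)) for i in range(n)]

# Stand-in vectors in R^4 (assumed grouping of the printed entries).
v1 = [1.23, 4.16, -3.14, 0.34]
v2 = [4.21, -2.21, 0.0, -9.6]
v3 = [1.01, 2.02, 2.71, 1.99]
v4 = [0.33, 2.12, -3.23, 0.89]

print(lin_comb([3, -2, 5, -3], [v1, v2, v3, v4]))                            # part (a)
print(lin_comb([2.4, -1.3, math.sqrt(2), -math.sqrt(3)], [v1, v2, v3, v4]))  # part (b)
```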
Conceptual Problems

D1 Prove property (8) from Theorem 1.
D2 Prove property (9) from Theorem 1.
D3 Let U and V be subspaces of R^n.
(a) Prove that the intersection of U and V is a subspace of R^n.
(b) Give an example to show that the union of two subspaces of R^n does not have to be a subspace of R^n.
(c) Define U + V = {u + v | u ∈ U, v ∈ V}. Prove that U + V is a subspace of R^n.
D4 Pick vectors p, v1, v2, and v3 in R^4 such that the vector equation x = p + t1 v1 + t2 v2 + t3 v3
(a) Is a hyperplane not passing through the origin
(b) Is a plane passing through the origin
(c) Is the point (1, 3, 1, 1)
(d) Is a line passing through the origin
D5 Let B = {v1, ..., vk} be a linearly independent set of vectors in R^n and let x ∉ Span B. Prove that {v1, ..., vk, x} is linearly independent.
D6 Let B = {v1, ..., vk} be a linearly independent set of vectors in R^n. Prove that every vector in Span B can be written as a unique linear combination of the vectors in B.
D7 Let v1, v2 ∈ R^n and let s and t be fixed real numbers with t ≠ 0. Prove that Span{v1, v2} = Span{v1, sv1 + tv2}.
D8 Let v1, v2, v3 ∈ R^n. State whether each of the following statements is true or false. If the statement is true, explain briefly. If the statement is false, give a counterexample.
(a) If v2 = tv1 for some real number t, then {v1, v2} is linearly dependent.
(b) If v1 is not a scalar multiple of v2, then {v1, v2} is linearly independent.
(c) If {v1, v2, v3} is linearly dependent, then v1 can be written as a linear combination of v2 and v3.
(d) If v1 can be written as a linear combination of v2 and v3, then {v1, v2, v3} is linearly dependent.
(e) {v1} is not a subspace of R^n.
(f) Span{v1} is a subspace of R^n.
1.3 Length and Dot Products

In many physical applications, we are given measurements in terms of angles and magnitudes. We must convert this data into vectors so that we can apply the tools of linear algebra to solve problems. For example, we may need to find a vector representing the path (and speed) of a plane flying northwest at 1300 km/h. To do this, we need to identify the length of a vector and the angle between two vectors. In this section, we see how we can calculate both of these quantities with the dot product operator.
Length and Dot Products in R^2 and R^3
The length of a vector in R^2 is defined by the usual distance formula (that is, Pythagoras' Theorem), as in Figure 1.3.10.

Definition (Length in R^2)
If x = [x1, x2] ∈ R^2, its length is defined to be
||x|| = √(x1² + x2²)
Figure 1.3.10 Length in R^2.

For vectors in R^3, the formula for the length can be obtained from a two-step calculation using the formula for R^2, as shown in Figure 1.3.11. Consider the point X(x1, x2, x3) and let P be the point P(x1, x2, 0). Observe that OPX is a right triangle, so that
||x||² = ||OP||² + ||PX||² = x1² + x2² + x3²

Definition (Length in R^3)
If x = [x1, x2, x3] ∈ R^3, its length is defined to be
||x|| = √(x1² + x2² + x3²)

One immediate application of this formula is to calculate the distance between two points. In particular, if we have points P and Q, then the distance between them is the length of the directed line segment PQ.
Figure 1.3.11 Length in R^3.

EXAMPLE 1
Find the distance between the points P(−1, 3, 4) and Q(2, −5, 1) in R^3.
Solution: We have
PQ = q − p = [2 − (−1), −5 − 3, 1 − 4] = [3, −8, −3]
Hence, the distance between the two points is
||PQ|| = √(3² + (−8)² + (−3)²) = √82
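Example 1 is easy to verify numerically; a short sketch (ours, not from the text):

```python
import math

def distance(p, q):
    """Distance between points P and Q = length of the segment PQ."""
    return math.sqrt(sum((qi - pi) ** 2 for pi, qi in zip(p, q)))

print(distance((-1, 3, 4), (2, -5, 1)))  # sqrt(82), about 9.055
```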
Angles and the Dot Product
Determining the angle between two vectors in R^2 leads to the important idea of the dot product of two vectors. Consider Figure 1.3.12. The Law of Cosines gives
||PQ||² = ||OP||² + ||OQ||² − 2 ||OP|| ||OQ|| cos θ    (1.1)
Substituting
OP = p = [p1, p2],  OQ = q = [q1, q2],  PQ = q − p = [q1 − p1, q2 − p2]
into (1.1) and simplifying gives
p1 q1 + p2 q2 = ||p|| ||q|| cos θ
For vectors in R^3, a similar calculation gives
p1 q1 + p2 q2 + p3 q3 = ||p|| ||q|| cos θ
Observe that if p = q, then θ = 0 radians, and we get
p1² + p2² + p3² = ||p||²
Figure 1.3.12 The Law of Cosines: ||PQ||² = ||OP||² + ||OQ||² − 2 ||OP|| ||OQ|| cos θ.
This matches our definition of length in R^3 above. Thus, we see that the same formula gives both the angle between vectors and the length of a vector. We therefore define the dot product of two vectors x = [x1, x2] and y = [y1, y2] in R^2 by
x · y = x1 y1 + x2 y2
Similarly, the dot product of vectors x = [x1, x2, x3] and y = [y1, y2, y3] in R^3 is defined by
x · y = x1 y1 + x2 y2 + x3 y3
Thus, in R^2 and R^3, the cosine of the angle between vectors x and y can be calculated by means of the formula
cos θ = (x · y) / (||x|| ||y||)    (1.2)
where θ is always chosen to satisfy 0 ≤ θ ≤ π.

EXAMPLE 2
Find the angle in R^3 between v = [1, 4, −2] and w = [3, −1, 4].
Solution: We have
v · w = 1(3) + 4(−1) + (−2)(4) = −9
||v|| = √(1 + 16 + 4) = √21,  ||w|| = √(9 + 1 + 16) = √26
Hence,
cos θ = (v · w) / (||v|| ||w||) = −9 / (√21 √26) ≈ −0.3851
So θ ≈ 1.966 radians. (Note that since cos θ is negative, θ is between π/2 and π.)
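Formula (1.2) and the computation in Example 2 can be checked directly; a sketch (ours, not from the text):

```python
import math

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    return math.sqrt(dot(x, x))

def angle(x, y):
    """Angle in [0, pi] from formula (1.2): cos(theta) = x.y / (||x|| ||y||)."""
    return math.acos(dot(x, y) / (norm(x) * norm(y)))

theta = angle((1, 4, -2), (3, -1, 4))
print(theta)  # approximately 1.966 radians, matching Example 2
```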
EXAMPLE 3
Find the angle in R^2 between v = [1, −2] and w = [2, 1].
Solution: We have
v · w = 1(2) + (−2)(1) = 0
Hence, cos θ = (v · w) / (||v|| ||w||) = 0. Thus, θ = π/2 radians. That is, v and w are perpendicular to each other.

EXERCISE 1
Find the angle in R^3 between two given vectors u and w.
Length and Dot Product in R^n
We now extend everything we did in R^2 and R^3 to R^n. We begin by defining the dot product.

Definition (Dot Product)
Let x = [x1, ..., xn] and y = [y1, ..., yn] be vectors in R^n. Then the dot product of x and y is
x · y = x1 y1 + x2 y2 + ··· + xn yn

Remark
The dot product is also sometimes called the scalar product or the standard inner product.
From this definition, some important properties follow.

Theorem 1
Let x, y, z ∈ R^n and t ∈ R. Then,
(1) x · x ≥ 0, and x · x = 0 if and only if x = 0
(2) x · y = y · x
(3) x · (y + z) = x · y + x · z
(4) (t x) · y = t (x · y) = x · (t y)

Proof: We leave the proof of these properties to the reader.
Because of property (1), we can now define the length of a vector in R^n. The word norm is often used as a synonym for length when we are speaking of vectors.

Definition (Norm)
Let x = [x1, ..., xn] ∈ R^n. We define the norm or length of x by
||x|| = √(x · x) = √(x1² + ··· + xn²)
EXAMPLE 4
Let x = [2, 1, 3, −1] and y = [1/3, −2/3, 0, −2/3]. Find ||x|| and ||y||.
Solution: We have
||x|| = √(2² + 1² + 3² + (−1)²) = √15
||y|| = √((1/3)² + (−2/3)² + 0² + (−2/3)²) = √(1/9 + 4/9 + 0 + 4/9) = 1

EXERCISE 2
Let x be a given non-zero vector and let y = (1/||x||) x. Determine ||x|| and ||y||.
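The norms in Example 4, and the normalization in Exercise 2, can be confirmed numerically (our sketch, not from the text):

```python
import math

def norm(x):
    """||x|| = sqrt(x . x)."""
    return math.sqrt(sum(xi * xi for xi in x))

x = (2, 1, 3, -1)
y = (1/3, -2/3, 0, -2/3)
print(norm(x))  # sqrt(15), about 3.873
print(norm(y))  # about 1.0

# Exercise 2: dividing any non-zero vector by its norm gives a unit vector.
unit = tuple(xi / norm(x) for xi in x)
print(norm(unit))  # about 1.0
```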
We now give some important properties of the norm in R^n.

Theorem 2
Let x, y ∈ R^n and t ∈ R. Then
(1) ||x|| ≥ 0, and ||x|| = 0 if and only if x = 0
(2) ||t x|| = |t| ||x||
(3) |x · y| ≤ ||x|| ||y||, with equality if and only if {x, y} is linearly dependent
(4) ||x + y|| ≤ ||x|| + ||y||
for 4x1+2,, - x, = a
p 1n {-� � -q ,
-
•
for 2x1+3x, - 54
.
•
for X1 + 2x,
=
=
0
a
BS Determine whether the following sets are linearly independent. If the set is linearly dependent, find all linear combinations of the vectors that are
(a)
u, �, n 1 0
-
-2
0 0 0 0 1 ' 0 -1 0 3
B4 Using the procedure in Example 8, determine
-1
10 3
0 1 2 -5 0
1 0
}
or hyperplane.
vectors, either express it as a linear combination of
(b)
-�
-1
1
vectors, either express it as a linear combination of
-4 -2 (a) 2 -6
-1 3 3 ' -5 1 2
(b)
1
1 0
'
1 2 0 0
0 1
2 -2
1 -3 1
0
0.
Chapter 2 Systems of Linear Equations
B6 Determine all values of k such that the given set of vectors is linearly independent.
B7 Determine whether each given set of three vectors is a basis for R^3.
Computer Problems

C1 Let B be a given set of vectors with integer entries, and let v be a given vector.
(a) Determine whether B is linearly independent or dependent.
(b) Determine whether v is in Span B.
Conceptual Problems

D1 Let B = {e1, ..., en} be the standard basis for R^n. Prove that Span B = R^n and that B is linearly independent.
D2 Let B = {v1, ..., vk} be vectors in R^n.
(a) Prove that if k < n, then there exists a vector v ∈ R^n such that v ∉ Span B.
(b) Prove that if k > n, then B must be linearly dependent.
(c) Prove that if k = n, then Span B = R^n if and only if B is linearly independent.
2.4 Applications of Systems of Linear Equations

Resistor Circuits in Electricity
The flow of electrical current in simple electrical circuits is described by simple linear laws. In an electrical circuit, the current has a direction and therefore has a sign attached to it; voltage is also a signed quantity; resistance is a positive scalar. The laws for electrical circuits are discussed next.
Ohm's Law
If an electrical current of magnitude I amperes is flowing through a resistor with resistance R ohms, then the drop in the voltage across the resistor is V = IR, measured in volts. The filament in a light bulb and the heating element of an electrical heater are familiar examples of electrical resistors. (See Figure 2.4.4.)
Figure 2.4.4 Ohm's law: the voltage across the resistor is V = IR.

Kirchhoff's Laws
(1) At a node or junction where several currents enter, the signed sum of the currents entering the node is zero. (See Figure 2.4.5.)
Figure 2.4.5 One of Kirchhoff's laws: I1 − I2 + I3 − I4 = 0.
(2) In a closed loop consisting of only resistors and an electromotive force E (for example, E might be due to a battery), the sum of the voltage drops across resistors is equal to E. (See Figure 2.4.6.)

Figure 2.4.6 Kirchhoff's other law: E = R1 I + R2 I.
Note that we adopt the convention of drawing an arrow to show the direction of I or of E. These arrows can be assigned arbitrarily, and then the circuit laws will determine whether the quantity has a positive or negative sign. It is important to be consistent in using these assigned directions when you write down Kirchhoff's law for loops. Sometimes it is necessary to determine the current flowing in each of the loops of a network of loops, as shown in Figure 2.4.7. (If the sources of electromotive force are distributed in various places, it will not be sufficient to deal with the problems as a collection of resistors "in parallel and/or in series.") In such problems, it is convenient to introduce the idea of the "current in the loop," which will be denoted i. The true current across any circuit element is given as the algebraic (signed) sum of the "loop currents" flowing through that circuit element. For example, in Figure 2.4.7, the circuit consists of four loops, and a loop current has been indicated in each loop. Across the resistor R1 in the figure, the true current is simply the loop current i1; however, across the resistor R2, the true current (directed from top to bottom) is i1 -i2. Similarly, across
R4, the true current (from right to left) is i1 - i3.
Figure 2.4.7 A resistor circuit.
The reason for introducing these loop currents for our present problem is that there are fewer loop currents than there are currents through individual elements. Moreover, Kirchhoff's law at the nodes is automatically satisfied, so we do not have to write nearly so many equations. To determine the currents in the loops, it is necessary to use Kirchhoff's second law with Ohm's law describing the voltage drops across the resistors. For Figure 2.4.7, the resulting equations for each loop are:
• The top-left loop: R1 i1 + R2(i1 − i2) + R4(i1 − i3) = E1
• The top-right loop: R3 i2 + R5(i2 − i4) + R2(i2 − i1) = E2
• The bottom-left loop: R6 i3 + R4(i3 − i1) + R7(i3 − i4) = 0
• The bottom-right loop: R8 i4 + R7(i4 − i3) + R5(i4 − i2) = −E2
Multiply out and collect terms to display the equations as a system in the variables i1, i2, i3, and i4. The augmented matrix of the system is

[ R1+R2+R4   -R2         -R4         0          | E1  ]
[ -R2        R2+R3+R5    0           -R5        | E2  ]
[ -R4        0           R4+R6+R7    -R7        | 0   ]
[ 0          -R5         -R7         R5+R7+R8   | -E2 ]
To determine the loop currents, this augmented matrix must be reduced to row echelon form. There is no particular purpose in finding an explicit solution for this general problem, and in a linear algebra course, there is no particular value in plugging in particular values for E1, E2, and the resistors. Instead, the point of this example is to show that even for a fairly simple electrical circuit with the most basic elements (resistors), the analysis requires you to be competent in dealing with large systems of linear equations. Systematic, efficient methods of solution are essential. Obviously, as the number of loops in the network grows, so does the number of variables and so does the number of equations. For larger systems, it is important to know whether you have the correct number of equations to determine the unknowns. Thus, the theorems in Sections 2.1 and 2.2, the idea of rank, and the idea of linear independence are all important.
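Although the text deliberately keeps the resistors symbolic, the reduction it describes is mechanical. The sketch below (ours) solves the loop-current system for arbitrarily chosen illustrative values, every resistor 1 ohm with E1 = 10 V and E2 = 5 V, none of which come from the text:

```python
def solve(aug):
    """Solve a square linear system given its augmented matrix (Gauss-Jordan)."""
    n = len(aug)
    m = [row[:] for row in aug]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(m[r][c]))  # partial pivoting
        m[c], m[p] = m[p], m[c]
        for r in range(n):
            if r != c:
                f = m[r][c] / m[c][c]
                m[r] = [a - f * b for a, b in zip(m[r], m[c])]
    return [m[i][n] / m[i][i] for i in range(n)]

# Illustrative values only: every resistor 1 ohm, E1 = 10 V, E2 = 5 V.
R = [1.0] * 9  # R[1]..R[8]; R[0] unused
E1, E2 = 10.0, 5.0
aug = [
    [R[1] + R[2] + R[4], -R[2], -R[4], 0.0, E1],
    [-R[2], R[2] + R[3] + R[5], 0.0, -R[5], E2],
    [-R[4], 0.0, R[4] + R[6] + R[7], -R[7], 0.0],
    [0.0, -R[5], -R[7], R[5] + R[7] + R[8], -E2],
]
i1, i2, i3, i4 = solve(aug)
print(i1, i2, i3, i4)  # the four loop currents, in amperes
```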
The Moral of This Example  Linear algebra is an essential tool for dealing with large systems of linear equations that may arise in dealing with circuits; really interesting examples cannot be given without assuming greater knowledge of electrical circuits and their components.
Section 2.4 Applications of Systems of Linear Equations
105
Planar Trusses

It is common to use trusses, such as the one shown in Figure 2.4.8, in construction. For example, many bridges employ some variation of this design. When designing such structures, it is necessary to determine the axial forces in each member of the structure (that is, the force along the long axis of the member). To keep this simple, only two-dimensional trusses with hinged joints will be considered; it will be assumed that any displacements of the joints under loading are small enough to be negligible.
Figure 2.4.8  A planar truss. All triangles are equilateral, with sides of length s.
The external loads (such as vehicles on a bridge, or wind or waves) are assumed to be given. The reaction forces at the supports (shown as R1, R2, and RH in the figure) are also external forces; these forces must have values such that the total external force on the structure is zero. To get enough information to design a truss for a particular application, we must determine the forces in the members under various loadings. To illustrate the kinds of equations that arise, we shall consider only the very simple case of a vertical force Fv acting at C and a horizontal force FH acting at E. Notice that in this figure, the right-hand end of the truss is allowed to undergo small horizontal displacements; it turns out that if a reaction force were applied here as well, the equations would not uniquely determine all the unknown forces (the structure would be "statically indeterminate"), and other considerations would have to be introduced. The geometry of the truss is assumed given: here it will be assumed that the triangles are equilateral, with all sides equal to s metres. First consider the equations that indicate that the total force on the structure is zero and that the total moment about some convenient point due to those forces is zero. Note that the axial force along the members does not appear in this first set of equations.
• Total horizontal force: RH + FH = 0
• Total vertical force: R1 + R2 - Fv = 0
• Moment about A: -Fv(s) + R2(2s) = 0, so R2 = (1/2)Fv = R1
Next, we consider the system of equations obtained from the fact that the sum of the forces at each joint must be zero. The moments are automatically zero because the forces along the members act through the joints. At a joint, each member at that joint exerts a force in the direction of the axis of the member. It will be assumed that each member is in tension, so it is "pulling" away from the joint; if it were compressed, it would be "pushing" at the joint. As indicated in the figure, the force exerted on joint A by the upper-left-hand member has magnitude N1; with the conventions that forces to the right are positive and forces up are positive,
the force vector exerted by this member on the joint A is [N1/2, (√3/2)N1]^T. On the joint B, the same member will exert the force [-N1/2, -(√3/2)N1]^T. If N1 is positive, the force is a tension force; if N1 is negative, there is compression.
For each of the joints A, B, C, D, and E, there are two equations: the first for the sum of the horizontal forces and the second for the sum of the vertical forces:

A1:  N1/2 + N2 + RH = 0
A2:  (√3/2)N1 + R1 = 0
B1:  -N1/2 + N3/2 + N4 = 0
B2:  -(√3/2)N1 - (√3/2)N3 = 0
C1:  -N2 - N3/2 + N5/2 + N6 = 0
C2:  (√3/2)N3 + (√3/2)N5 = Fv
D1:  -N4 - N5/2 + N7/2 = 0
D2:  -(√3/2)N5 - (√3/2)N7 = 0
E1:  -N6 - N7/2 = -FH
E2:  (√3/2)N7 + R2 = 0
Notice that if the reaction forces are treated as unknowns, this is a system of 10 equations in 10 unknowns. The geometry of the truss and its supports determines the coefficient matrix of this system, and it could be shown that the system is necessarily consistent with a unique solution. Notice also that if the horizontal force equations (A1, B1, C1, D1, and E1) are added together, the sum is the total horizontal force equation, and similarly the sum of the vertical force equations is the total vertical force equation.
A suitable combination of the equations would also produce the moment equation, so if those three equations are solved as above, the 10 joint equations will still be a consistent system for the remaining 7 axial force variables. For this particular truss, the system of equations is quite easy to solve, since some of the variables are already leading variables. For example, if FH = 0, from A2 and E2 it follows that N1 = N7 = -(1/√3)Fv; then B2, C2, and D2 give N3 = N5 = (1/√3)Fv; then A1 and E1 imply that N2 = N6 = (1/(2√3))Fv, and B1 implies that N4 = -(1/√3)Fv.
Note that the members AC, BC, CD, and CE are under tension, and AB, BD, and DE experience compression, which makes intuitive sense. This is a particularly simple truss. In the real world, trusses often involve many more members and use more complicated geometry; trusses may also be three-dimensional. Therefore, the systems of equations that arise may be considerably larger and more complicated. It is also sometimes essential to introduce considerations other than the equations of equilibrium of forces in statics. To study these questions, you need to know the basic facts of linear algebra. It is worth noting that in the system of equations above, each of the quantities N1, N2, ..., N7 appears with a non-zero coefficient in only some of the equations. Since each member touches only two joints, this sort of special structure will often occur in the equations that arise in the analysis of trusses. A deeper knowledge of linear algebra is important in understanding how such special features of linear equations may be exploited to produce efficient solution methods.
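The 10 × 10 joint system can be assembled and solved numerically as well. The sketch below assumes the unknown ordering (N1, ..., N7, R1, R2, RH) and the loads Fv = 1, FH = 0; both choices are ours, made for illustration, and the elimination routine is the procedure of Chapter 2.

```python
import math

def gauss_solve(aug):
    """Gaussian elimination with partial pivoting on an augmented matrix."""
    n = len(aug)
    m = [row[:] for row in aug]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(m[i][k]))
        m[k], m[p] = m[p], m[k]
        for i in range(k + 1, n):
            f = m[i][k] / m[k][k]
            for j in range(k, n + 1):
                m[i][j] -= f * m[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(m[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (m[i][n] - s) / m[i][i]
    return x

h = math.sqrt(3) / 2            # sin 60 degrees, from the equilateral triangles
Fv, FH = 1.0, 0.0               # assumed loads: unit vertical force, no horizontal
# Columns: N1 N2 N3 N4 N5 N6 N7 R1 R2 RH; one row per joint equation A1..E2.
aug = [
    [0.5,  1,  0,    0,  0,    0,  0,   0, 0, 1,  0.0],   # A1
    [h,    0,  0,    0,  0,    0,  0,   1, 0, 0,  0.0],   # A2
    [-0.5, 0,  0.5,  1,  0,    0,  0,   0, 0, 0,  0.0],   # B1
    [-h,   0, -h,    0,  0,    0,  0,   0, 0, 0,  0.0],   # B2
    [0,   -1, -0.5,  0,  0.5,  1,  0,   0, 0, 0,  0.0],   # C1
    [0,    0,  h,    0,  h,    0,  0,   0, 0, 0,  Fv],    # C2
    [0,    0,  0,   -1, -0.5,  0,  0.5, 0, 0, 0,  0.0],   # D1
    [0,    0,  0,    0, -h,    0, -h,   0, 0, 0,  0.0],   # D2
    [0,    0,  0,    0,  0,   -1, -0.5, 0, 0, 0, -FH],    # E1
    [0,    0,  0,    0,  0,    0,  h,   0, 1, 0,  0.0],   # E2
]
N1, N2, N3, N4, N5, N6, N7, R1, R2, RH = gauss_solve(aug)
```

The computed values reproduce the hand solution: N1 = N7 = N4 = -Fv/√3, N3 = N5 = Fv/√3, and N2 = N6 = Fv/(2√3), with R1 = R2 = Fv/2 and RH = 0.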
Linear Programming

Linear programming is a procedure for deciding the best way to allocate resources. "Best" may mean fastest, most profitable, least expensive, or best by whatever criterion is appropriate. For linear programming to be applicable, the problem must have some special features. These will be illustrated by an example. In a primitive economy, a man decides to earn a living by making hinges and gate latches. He is able to obtain a supply of 25 kilograms a week of suitable metal at a price of 2 cowrie shells per kilogram. His design requires 500 grams to make a hinge and 250 grams to make a gate latch. With his primitive tools, he finds that he can make a hinge in 1 hour, and it takes 3/4 hour to make a gate latch. He is willing to work 60 hours a week. The going price is 3 cowrie shells for a hinge and 2 cowrie shells for a gate latch. How many hinges and how many gate latches should he produce each week in order to maximize his net income? To analyze the problem, let x be the number of hinges produced per week and let y be the number of gate latches. Then the amount of metal used is (0.5x + 0.25y) kilograms. Clearly, this must be less than or equal to 25 kilograms:
0.5x + 0.25y ≤ 25    or    2x + y ≤ 100
Such an inequality is called a constraint on x and y; it is a linear constraint because the corresponding equation is linear. Our producer also has a time constraint: the time taken making hinges plus the time taken making gate latches cannot exceed 60 hours. Therefore,
1x + 0.75y ≤ 60    or    4x + 3y ≤ 240
Obviously, also x ≥ 0 and y ≥ 0. The producer's net revenue for selling x hinges and y gate latches is R(x, y) = 3x + 2y - 2(25) cowrie shells. This is called the objective function for the problem. The mathematical problem can now be stated as follows: Find the point (x, y) that maximizes the objective function R(x, y) = 3x + 2y - 50, subject to the linear constraints x ≥ 0, y ≥ 0, 2x + y ≤ 100, and 4x + 3y ≤ 240.
This is a linear programming problem because it asks for the maximum (or minimum) of a linear objective function, subject to linear constraints. It is useful to introduce one piece of special vocabulary: the feasible set for the problem is the set of (x, y) satisfying all of the constraints. The solution procedure relies on the fact that the feasible set for a linear programming problem has a special kind of shape. (See Figure 2.4.9 for the feasible set for this particular problem.) Any line that meets the feasible set either meets the set in a single line segment or only touches the set on its boundary. In particular, because of the way the feasible set is defined in terms of linear inequalities, it turns out that it is impossible for one line to meet the feasible set in two separate pieces. For example, the shaded region in Figure 2.4.10 cannot possibly be the feasible set for a linear programming problem, because some lines meet the region in two line segments. (This property of feasible sets is not difficult to prove, but since this is only a brief illustration, the proof is omitted.)
Figure 2.4.9  The feasible region for the linear programming example. The grey lines are level sets of the objective function R(x, y).
Now consider sets of the form R(x, y) = k, where k is a constant; these are called the level sets of R. These sets obviously form a family of parallel lines, and some of them are shown in Figure 2.4.9. Choose some point in the feasible set: check that (20, 20) is such a point. Then the line R(x, y) = R(20, 20) = 50
Figure 2.4.10  The shaded region cannot be the feasible region for a linear programming problem because it meets a line in two segments.
meets the feasible set in a line segment. (30, 30) is also a feasible point (check), and R(x, y) = R(30, 30) = 100
also meets the feasible set in a line segment. You can tell that (30, 30) is not a boundary point of the feasible set because it satisfies all the constraints with strict inequality; boundary points must satisfy one of the constraints with equality. As we move further from the origin into the first quadrant, R(x, y) increases. The biggest possible value for R(x, y) will occur at a point where the set R(x, y) = k (for some constant k to be determined) just touches the feasible set. For larger values of R(x, y), the set R(x, y) = k does not meet the feasible set at all, so there are no feasible points that give such bigger values of R. The touching must occur at a vertex, that is, at an intersection point of two of the boundary lines. (In general, the line R(x, y) = k for the largest possible constant could touch the feasible set along a line segment that makes up part of the boundary. But such a line segment has two vertices as endpoints, so it is correct to say that the touching occurs at a vertex.) For this particular problem, the vertices of the feasible set are easily found to be (0, 0), (50, 0), (0, 80), and (30, 40), where (30, 40) is the solution of the system of equations

2x + y = 100
4x + 3y = 240

Now compare the values of R(x, y) at all of these vertices: R(0, 0) = -50, R(50, 0) = 100, R(0, 80) = 110, and R(30, 40) = 120. The vertex (30, 40) gives the best net revenue, so the producer should make 30 hinges and 40 gate latches each week.
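The vertex procedure just described is mechanical enough to code directly: intersect each pair of boundary lines, discard intersection points that violate any constraint, and evaluate the objective at the survivors. A Python sketch for this example (the function names are ours):

```python
from itertools import combinations

# Boundary lines a*x + b*y = c for the constraints a*x + b*y <= c.
lines = [
    (-1.0, 0.0, 0.0),    # x >= 0, written as -x <= 0
    (0.0, -1.0, 0.0),    # y >= 0, written as -y <= 0
    (2.0, 1.0, 100.0),   # metal: 2x + y <= 100
    (4.0, 3.0, 240.0),   # time:  4x + 3y <= 240
]

def intersect(l1, l2):
    """Intersection point of two lines a*x + b*y = c, or None if parallel."""
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

def feasible(p):
    """Does the point satisfy every constraint (up to a small tolerance)?"""
    return all(a * p[0] + b * p[1] <= c + 1e-9 for a, b, c in lines)

def revenue(p):
    return 3 * p[0] + 2 * p[1] - 50      # R(x, y) = 3x + 2y - 50

vertices = []
for l1, l2 in combinations(lines, 2):
    p = intersect(l1, l2)
    if p is not None and feasible(p):
        vertices.append(p)
best = max(vertices, key=revenue)
```

This brute-force search is exactly what the General Remarks below warn does not scale: with many constraints in higher dimensions the number of candidate intersections explodes, which is why the simplex method visits vertices selectively instead.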
General Remarks

Problems involving allocation of resources are common in business and government. Problems such as scheduling ship transits through a canal can be analyzed this way. Oil companies must make choices about the grades of crude oil to use in their refineries, and about the amounts of various refined products to produce. Such problems often involve tens or even hundreds of variables, and similar numbers of constraints. The boundaries of the feasible set are hyperplanes in some ℝ^n, where n is large. Although the basic principles of the solution method remain the same as in this example (look for the best vertex), the problem is much more complicated because there are so many vertices. In fact, it is a challenge to find vertices; simply solving all possible combinations of systems of boundary equations is not good enough. Note in the simple two-dimensional example that the point (60, 0) is the intersection point of two of the lines (y = 0 and 4x + 3y = 240) that make up the boundary, but it is not a vertex of the feasible region because it fails to satisfy the constraint 2x + y ≤ 100. For higher-dimension problems, drawing pictures is not good enough, and an organized approach is called for. The standard method for solving linear programming problems has been the simplex method, which finds an initial vertex and then prescribes a method (very similar to row reduction) for moving to another vertex, improving the value of the objective function with each step. Again, it has been possible to hint at major application areas for linear algebra, but to pursue one of these would require the development of specialized mathematical tools and information from specialized disciplines.
PROBLEMS 2.4

Practice Problems

A1 Determine the system of equations for the reaction forces and axial forces in members for the truss shown in the diagram.

A2 Determine the augmented matrix of the system of linear equations, and determine the loop currents indicated in the diagram.
A3 Find the maximum value of the objective function x + y subject to the constraints 0 ≤ x ≤ 100, 0 ≤ y ≤ 80, and 4x + 5y ≤ 600. Sketch the feasible region.
CHAPTER REVIEW

Suggestions for Student Review

1 Explain why elimination works as a method for solving systems of linear equations. (Section 2.1)

2 When you row reduce an augmented matrix [A | b] to solve a system of linear equations, why can you stop when the matrix is in row echelon form? How do you use this form to decide if the system is consistent and if it has a unique solution? (Section 2.1)
3 How is reduced row echelon form different from row echelon form? (Section 2.2)
4 (a) Write the augmented matrix of a consistent non-homogeneous system of three linear equations in four variables, such that the coefficient matrix is in row echelon form (but not reduced row echelon form) and of rank 3.
(b) Determine the general solution of your system.
(c) Perform the following sequence of elementary row operations on your augmented matrix:
(i) Interchange the first and second rows.
(ii) Add the (new) second row to the first row.
(iii) Add twice the second row to the third row.
(iv) Add the third row to the second.
(d) Regard the result of (c) as the augmented matrix of a system and solve that system directly. (Don't just use the reverse operations in (c).) Check that your general solution agrees with (b).
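The point behind parts (c) and (d), that elementary row operations never change the solution set, can be checked mechanically. The sketch below applies the listed sequence of operations to an assumed augmented matrix (our own example of a consistent rank-3 system in four variables, not one from the text) and confirms that a particular solution of the original system still satisfies the transformed one.

```python
def row_interchange(m, i, j):
    m = [row[:] for row in m]
    m[i], m[j] = m[j], m[i]
    return m

def row_add_multiple(m, src, dest, t):
    """Add t times row src to row dest."""
    m = [row[:] for row in m]
    m[dest] = [d + t * s for d, s in zip(m[dest], m[src])]
    return m

# Assumed example: a consistent non-homogeneous system whose coefficient
# matrix is in row echelon form with rank 3 (three equations, four variables).
aug = [[1, 2, 0, 1, 3],
       [0, 1, 1, 0, 2],
       [0, 0, 0, 1, 1]]

def satisfies(system, x):
    """Does x satisfy every equation of the 4-variable augmented system?"""
    return all(abs(sum(row[j] * x[j] for j in range(4)) - row[4]) < 1e-9
               for row in system)

x = [-2, 2, 0, 1]                      # one particular solution of aug

m = row_interchange(aug, 0, 1)         # (i)   interchange first and second rows
m = row_add_multiple(m, 1, 0, 1)       # (ii)  add the (new) second row to the first
m = row_add_multiple(m, 1, 2, 2)       # (iii) add twice the second row to the third
m = row_add_multiple(m, 2, 1, 1)       # (iv)  add the third row to the second
```

Because each new row is a linear combination of old rows, any solution of the original system automatically satisfies the transformed system, which is the heart of why elimination is legitimate.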
5 For homogeneous systems, how can you use the row echelon form to determine whether there are non-trivial solutions and, if there are, how many parameters there are in the general solution? Is there any case where we know (by inspection) that a homogeneous system has non-trivial solutions? (Section 2.2)

6 Write a short explanation of how you use information about consistency of systems and uniqueness of solutions in testing for linear independence and in determining whether a vector x belongs to a given subspace of ℝ^n. (Section 2.3)

7 Explain how to determine whether a set of vectors {v1, ..., vk} in ℝ^n is both linearly independent and a spanning set for a subspace S of ℝ^n. What form must the reduced row echelon form of the coefficient matrix of the vector equation t1v1 + ··· + tkvk = b have if the set is a linearly independent spanning set? (Section 2.3)
Chapter Quiz

E1 Determine whether the following system is consistent by row reducing its augmented matrix:

x2 - 2x3 + x4 = 2
2x1 - 2x2 + 4x3 - x4 = 10
x1 - x2 + x3 = 2

If it is consistent, determine the general solution.

E2 Find a matrix in reduced row echelon form that is row equivalent to the given matrix A. Show your steps clearly.

E3 The given matrix A, with entries depending on a, b, and c, is the augmented matrix of a system of linear equations.
(a) Determine all values of (a, b, c) such that the system is consistent and all values of (a, b, c) such that the system is inconsistent.
(b) Determine all values of (a, b, c) such that the system has a unique solution.

E4 (a) Determine all vectors in ℝ^5 that are orthogonal to the three given vectors.
(b) Let u, v, and w be three vectors in ℝ^5. Explain why there must be non-zero vectors orthogonal to all of u, v, and w.

E5 Determine whether the given set B of three vectors is a basis for ℝ^3. Show your steps clearly.

E6 Indicate whether the following statements are true or false. In each case, justify your answer with a brief explanation or counterexample.
(a) A consistent system must have a unique solution.
(b) If there are more equations than variables in a non-homogeneous system of linear equations, then the system must be inconsistent.
(c) Some homogeneous systems of linear equations have unique solutions.
(d) If there are more variables than equations in a system of linear equations, then the system cannot have a unique solution.
Further Problems

These problems are intended to be challenging. They may not be of interest to all students.
F1 The purpose of this exercise is to explore the relationship between the general solution of the system [A | b] and the general solution of the corresponding homogeneous system [A | 0]. This relation will be studied with different tools in Section 3.4. We begin by considering some examples where the coefficient matrix is in reduced row echelon form.
(a) Let R = [1 0 r13; 0 1 r23]. Show that the general solution of the homogeneous system [R | 0] is x = tv, where v is expressed in terms of r13 and r23. Show that the general solution of the non-homogeneous system [R | c] is x = p + xH, where p is expressed in terms of the components of c, and xH is the general solution of the corresponding homogeneous system.
(b) Repeat part (a) for a 3 × 5 matrix R in reduced row echelon form with non-leading entries rij. Show that the general solution of the homogeneous system [R | 0] is x = t1v1 + t2v2, where each of v1 and v2 can be expressed in terms of the entries rij; express each vi explicitly. Then show that the general solution of [R | c] can be written as x = p + xH, where p is expressed in terms of the components of c, and xH is the solution of the corresponding homogeneous system. The pattern should now be apparent; if it is not, try again with another special case of R. In the next part of this exercise, create an effective labelling system so that you can clearly indicate what you want to say.
(c) Let R be a matrix in reduced row echelon form, with m rows, n columns, and rank k. Show that the general solution of the homogeneous system [R | 0] is x = t1v1 + ··· + t(n-k)v(n-k), where each vi is expressed in terms of the entries in R. Suppose that the system [R | c] is consistent and show that the general solution is x = p + xH, where p is expressed in terms of the components of c, and xH is the solution of the corresponding homogeneous system.
(d) Use the result of (c) to discuss the relationship between the general solution of the consistent system [A | b] and the corresponding homogeneous system [A | 0].

F2 This problem involves comparing the efficiency of row reduction procedures. When we use a computer to solve large systems of linear equations, we want to keep the number of arithmetic operations as small as possible. This reduces the time taken for calculations, which is important in many industrial and commercial applications. It also tends to improve accuracy: every arithmetic operation is an opportunity to lose accuracy through truncation or round-off, subtraction of two nearly equal numbers, and so on. We want to count the number of multiplications and/or divisions in solving a system by elimination. We focus on these operations because they are more time-consuming than addition or subtraction, and the number of additions is approximately the same as the number of multiplications. We make certain assumptions: the system [A | b] has n equations and n variables, and it is consistent with a unique solution. (Equivalently, A has n rows, n columns, and rank n.) We assume for simplicity that no row interchanges are required. (If row interchanges are required, they can be handled by renaming "addresses" in the computer.)
(a) How many multiplications and divisions are required to reduce [A | b] to a form [C | d] such that C is in row echelon form? Hints:
(1) To carry out the obvious first elementary row operation, compute a21/a11: one division. Since we know what will happen in the first column, we do not multiply a11 by a21/a11, but we must multiply every other element of the first row of [A | b] by this factor and subtract the product from the corresponding element of the second row: n multiplications.
(2) Obtain zeros in the remaining entries in the first column, then move to the (n - 1) by n block consisting of the reduced version of [A | b] with the first row and first column deleted.
(3) Note that the sum of i from i = 1 to n is n(n + 1)/2 and the sum of i^2 from i = 1 to n is n(n + 1)(2n + 1)/6.
(4) The biggest term in your answer should be n^3/3. Note that n^3 is much greater than n^2 when n is large.
(b) Determine how many multiplications and divisions are required to solve the system with the augmented matrix [C | d] of part (a) by back-substitution.
(c) Show that the number of multiplications and divisions required to row reduce [R | c] to reduced row echelon form is the same as the number used in solving the system by back-substitution. Conclude that the Gauss-Jordan procedure is as efficient as Gaussian elimination with back-substitution. For large n, the number of multiplications and divisions is roughly n^3/3.
(d) Suppose that we do a "clumsy" Gauss-Jordan procedure. We do not first obtain row echelon form; instead we obtain zeros in all entries above and below a pivot before moving on to the next column. Show that the number of multiplications and divisions required in this procedure is roughly n^3/2, so that this procedure requires approximately 50% more operations than the more efficient procedures when n is large.
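The operation counts asked for in F2 can be checked by instrumenting the procedures: tally one operation for each factor computed (a division) and each multiply-and-subtract across a row. A sketch, with the loop structure following the hints above:

```python
def elimination_ops(n):
    """Multiplications/divisions to bring an n x (n+1) augmented matrix
    to row echelon form: for each pivot column k, every row below the
    pivot costs one division (the factor) plus n - k multiplications."""
    ops = 0
    for k in range(n):
        for i in range(k + 1, n):
            ops += 1 + (n - k)
    return ops

def back_substitution_ops(n):
    """Multiplications/divisions to solve an upper triangular system:
    each row multiplies its already-known variables, then divides."""
    ops = 0
    for i in range(n - 1, -1, -1):
        ops += (n - 1 - i) + 1
    return ops

def clumsy_gauss_jordan_ops(n):
    """Zeros above and below each pivot before moving to the next column:
    n - 1 rows are updated at every one of the n pivots."""
    ops = 0
    for k in range(n):
        for i in range(n):
            if i != k:
                ops += 1 + (n - k)
    return ops

n = 200
total_gauss = elimination_ops(n) + back_substitution_ops(n)
```

For n = 200 the elimination-plus-back-substitution total is within about 2% of n^3/3, while the clumsy Gauss-Jordan count is roughly 50% larger, as parts (c) and (d) predict.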
CHAPTER 3

Matrices, Linear Mappings, and Inverses

CHAPTER OUTLINE
3.1 Operations on Matrices
3.2 Matrix Mappings and Linear Mappings
3.3 Geometrical Transformations
3.4 Special Subspaces for Systems and Mappings: Rank Theorem
3.5 Inverse Matrices and Inverse Mappings
3.6 Elementary Matrices
3.7 LU-Decomposition
In many applications of linear algebra, we use vectors in ℝ^n to represent quantities, such as forces, and then use the tools of Chapters 1 and 2 to solve various problems. However, there are many times when it is useful to translate a problem into other linear algebra objects. In this chapter, we look at two of these fundamental objects: matrices and linear mappings. We now explore the properties of these objects and show how they are tied together with the material from Chapters 1 and 2.
3.1 Operations on Matrices

We used matrices essentially as bookkeeping devices in Chapter 2. Matrices also possess interesting algebraic properties, so they have wider and more powerful applications than is suggested by their use in solving systems of equations. We now look at some of these algebraic properties.
Equality, Addition, and Scalar Multiplication of Matrices

We have seen that matrices are useful in solving systems of linear equations. However, we shall see that matrices show up in different kinds of problems, and it is important to be able to think of matrices as "things" that are worth studying and playing with, and these things may have no connection with a system of equations. A matrix is a rectangular array of numbers. A typical matrix has the form
A = [ a11  a12  ...  a1j  ...  a1n ]
    [ a21  a22  ...  a2j  ...  a2n ]
    [  .    .         .         .  ]
    [ ai1  ai2  ...  aij  ...  ain ]
    [  .    .         .         .  ]
    [ am1  am2  ...  amj  ...  amn ]
We say that A is an m × n matrix when A has m rows and n columns. Two matrices A and B are equal if and only if they have the same size and their corresponding entries are equal; that is, aij = bij for 1 ≤ i ≤ m, 1 ≤ j ≤ n.
For now, we will consider only matrices whose entries aiJ are real numbers. We will look at matrices whose entries are complex numbers in Chapter 9.
Remark We sometimes denote the ij-th entry of a matrix A by
(A)iJ
=
%
This may seem pointless for a single matrix, but it is useful when dealing with multiple matrices. Several special types of matrices arise frequently in linear algebra.
Definition (Square Matrix)  An n × n matrix (where the number of rows of the matrix is equal to the number of columns) is called a square matrix.
Definition (Upper Triangular, Lower Triangular)  A square matrix U is said to be upper triangular if the entries beneath the main diagonal are all zero; that is, uij = 0 whenever i > j. A square matrix L is said to be lower triangular if the entries above the main diagonal are all zero; in particular, lij = 0 whenever i < j.

EXAMPLE 1  [The text displays five matrices: two upper triangular, two lower triangular, and one matrix that is both upper and lower triangular.]
Definition (Diagonal Matrix)  A matrix D that is both upper and lower triangular is called a diagonal matrix; that is, dij = 0 for all i ≠ j. We denote an n × n diagonal matrix by D = diag(d11, d22, ..., dnn).

EXAMPLE 2  We denote the diagonal matrix [√3 0; 0 -2] by D = diag(√3, -2), while diag(0, 3, 1) is the diagonal matrix [0 0 0; 0 3 0; 0 0 1].
Also, we can think of a vector in ℝ^n as an n × 1 matrix, called a column matrix. For this reason, it makes sense to define operations on matrices to match those with vectors in ℝ^n.
Definition (Addition and Scalar Multiplication of Matrices)  Let A and B be m × n matrices and t ∈ ℝ a scalar. We define addition of matrices by

(A + B)ij = (A)ij + (B)ij

and the scalar multiplication of matrices by

(tA)ij = t(A)ij

EXAMPLE 3  Perform the following operations.
(a) [The sum of the two displayed 2 × 2 matrices, computed entry by entry.]
(b) 5 [2 4; 1 3] = [5(2) 5(4); 5(1) 5(3)] = [10 20; 5 15]
Note that matrix addition is defined only if the matrices are the same size.
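The entrywise definitions translate directly into code. A minimal sketch using nested Python lists for matrices (the function names are ours); the size check mirrors the remark above.

```python
def mat_add(A, B):
    """(A + B)ij = (A)ij + (B)ij; defined only for matrices of the same size."""
    if len(A) != len(B) or any(len(ra) != len(rb) for ra, rb in zip(A, B)):
        raise ValueError("matrix addition requires matrices of the same size")
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scalar_mult(t, A):
    """(tA)ij = t * (A)ij."""
    return [[t * a for a in row] for row in A]
```

For instance, scalar_mult(5, [[2, 4], [1, 3]]) gives [[10, 20], [5, 15]], and adding matrices of different sizes raises an error rather than guessing at an answer.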
Properties of Matrix Addition and Multiplication by Scalars

We now look at the properties of addition and scalar multiplication of matrices. It is very important to notice that these are the exact same ten properties discussed in Section 1.2 for addition and scalar multiplication of vectors in ℝ^n.
Theorem 1  Let A, B, C be m × n matrices and let s, t be real scalars. Then
(1) A + B is an m × n matrix (closed under addition)
(2) A + B = B + A (addition is commutative)
(3) (A + B) + C = A + (B + C) (addition is associative)
(4) There exists a matrix, denoted by Om,n, such that A + Om,n = A; in particular, Om,n is the m × n matrix with all entries zero and is called the zero matrix (zero vector)
(5) For each matrix A, there exists an m × n matrix (-A), with the property that A + (-A) = Om,n; in particular, (-A) is defined by (-A)ij = -(A)ij (additive inverses)
(6) sA is an m × n matrix (closed under scalar multiplication)
(7) s(tA) = (st)A (scalar multiplication is associative)
(8) (s + t)A = sA + tA (a distributive law)
(9) s(A + B) = sA + sB (another distributive law)
(10) 1A = A (scalar multiplicative identity)
These properties follow easily from the definitions of addition and multiplication by scalars. The proofs are left to the reader. Since we can now compute linear combinations of matrices, it makes sense to look at the set of all possible linear combinations of a set of matrices. And, as with vectors in ℝ^n, this goes hand-in-hand with the concept of linear independence. We mimic the definitions we had for vectors in ℝ^n.
Definition (Span)  Let B = {A1, ..., Ak} be a set of m × n matrices. Then the span of B is defined as

Span B = {t1A1 + ··· + tkAk | t1, ..., tk ∈ ℝ}
Definition Linearly Independent
Let :B = {A1, ... , Ae} be a set of m x n matrices. Then :B is said to be linearly inde pendent if the only solution to the equation
Linearly Dependent
t1A1 + is t1 =
EXAMPLE4
· ·
·
·
· ·
+ teAe = Om,n
=te =0; otherwise, :Bis said to be linearly dependent.
EXAMPLE 4  Determine whether [1 2; 3 4] is in the span of

B = { [1 1; 0 0], [1 0; 0 1], [0 1; 1 0], [0 1; 0 1] }

Solution: We want to find whether there are t1, t2, t3, and t4 such that

t1 [1 1; 0 0] + t2 [1 0; 0 1] + t3 [0 1; 1 0] + t4 [0 1; 0 1] = [1 2; 3 4]

Since two matrices are equal if and only if their corresponding entries are equal, this gives the system of linear equations

t1 + t2 = 1
t1 + t3 + t4 = 2
t3 = 3
t2 + t4 = 4

Row reducing the augmented matrix gives

[ 1 1 0 0 | 1 ]      [ 1 0 0 0 | -2 ]
[ 1 0 1 1 | 2 ]  ~   [ 0 1 0 0 |  3 ]
[ 0 0 1 0 | 3 ]      [ 0 0 1 0 |  3 ]
[ 0 1 0 1 | 4 ]      [ 0 0 0 1 |  1 ]

We see that the system is consistent, with solution t1 = -2, t2 = 3, t3 = 3, and t4 = 1. Therefore, [1 2; 3 4] is in the span of B.
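A span-membership question like this can be settled by flattening each 2 × 2 matrix into a vector in ℝ^4 and row reducing, exactly as the example does by hand. In the sketch below the four spanning matrices are an assumed concrete set, chosen so that the resulting system matches the equations t1 + t2 = 1, t1 + t3 + t4 = 2, t3 = 3, t2 + t4 = 4 displayed above (flattened row by row).

```python
def rref(aug):
    """Row reduce an augmented matrix to reduced row echelon form."""
    m = [[float(v) for v in row] for row in aug]
    rows, cols = len(m), len(m[0])
    r = 0
    for c in range(cols - 1):
        p = next((i for i in range(r, rows) if abs(m[i][c]) > 1e-12), None)
        if p is None:
            continue                     # no pivot in this column
        m[r], m[p] = m[p], m[r]
        m[r] = [v / m[r][c] for v in m[r]]
        for i in range(rows):
            if i != r and abs(m[i][c]) > 1e-12:
                m[i] = [vi - m[i][c] * vr for vi, vr in zip(m[i], m[r])]
        r += 1
    return m

# Assumed spanning set, flattened row by row, and the target X = [1 2; 3 4].
B = [[1, 1, 0, 0], [1, 0, 0, 1], [0, 1, 1, 0], [0, 1, 0, 1]]  # A1, A2, A3, A4
X = [1, 2, 3, 4]
# Columns of the augmented matrix are the flattened A's; last column is X.
aug = [[B[j][i] for j in range(4)] + [X[i]] for i in range(4)]
R = rref(aug)
t = [row[4] for row in R]
```

The last column of the reduced matrix gives t1 = -2, t2 = 3, t3 = 3, t4 = 1, agreeing with the hand computation.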
EXAMPLE 5  Determine whether the given set B of three 2 × 2 matrices is linearly dependent or linearly independent.

Solution: We consider the equation t1A1 + t2A2 + t3A3 = O2,2. Row reducing the coefficient matrix of the corresponding homogeneous system shows that the only solution is t1 = t2 = t3 = 0, so B is linearly independent.

EXERCISE 1  Determine whether the given set B of four 2 × 2 matrices is linearly dependent or linearly independent. Is the given matrix X in the span of B?
EXERCISE 2  Consider B = { [1 0; 0 0], [0 1; 0 0], [0 0; 1 0], [0 0; 0 1] }. Prove that B is linearly independent and show that Span B is the set of all 2 × 2 matrices. Compare B with the standard basis for ℝ^4.
The Transpose of a Matrix

Definition (Transpose)  Let A be an m × n matrix. Then the transpose of A is the n × m matrix, denoted A^T, whose ij-th entry is the ji-th entry of A. That is, (A^T)ij = (A)ji.

EXAMPLE 6  Determine the transpose of A = [-1 3; 6 5].

Solution: A^T = [-1 6; 3 5]. [The transposes of the other matrices displayed in the text are computed in the same way.]
Observe that taking the transpose of a matrix turns its rows into columns and its columns into rows.
Some Properties of the Transpose

How does the operation of transposition combine with addition and scalar multiplication?

Theorem 2  For any m × n matrices A and B and scalar s ∈ ℝ, we have
(1) (A^T)^T = A
(2) (A + B)^T = A^T + B^T
(3) (sA)^T = sA^T

Proof: 2. ((A + B)^T)ij = (A + B)ji = (A)ji + (B)ji = (A^T)ij + (B^T)ij = (A^T + B^T)ij.
3. ((sA)^T)ij = (sA)ji = s(A)ji = s(A^T)ij = (sA^T)ij. ∎
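The entrywise proofs of Theorem 2 are easy to mirror numerically; a quick sketch with two arbitrary sample matrices of our own choosing:

```python
def transpose(A):
    """Return A^T: the ij-th entry of the result is the ji-th entry of A."""
    return [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]

# Arbitrary sample matrices, chosen only for illustration.
A = [[-1, 3], [6, 5]]
B = [[2, 0], [1, 4]]

# Property (1): transposing twice returns the original matrix.
assert transpose(transpose(A)) == A
# Property (2): the transpose of a sum is the sum of the transposes.
S = [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]
ST = [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(transpose(A), transpose(B))]
assert transpose(S) == ST
# Property (3): (sA)^T = s A^T, here with s = 3.
assert transpose([[3 * a for a in row] for row in A]) == \
       [[3 * a for a in row] for row in transpose(A)]
```

Note that transpose also works for non-square matrices, turning an m × n array into an n × m one, which is exactly the size change the definition requires.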
EXERCISE 3  Let A be the given matrix. Verify that (A^T)^T = A and (3A)^T = 3A^T.
Remark  Since we always represent a vector in ℝ^n as a column matrix, to represent a row of a matrix as a vector, we will write v^T. For now, this will be our main use of the transpose; however, it will become much more important later in the book.
An Introduction to Matrix Multiplication

The operations so far are very natural and easy. Multiplication of two matrices is more complicated, but a simple example illustrates that there is one useful natural definition. Suppose that we wish to change variables; that is, we wish to work with new variables y1 and y2 that are defined in terms of the original variables x1 and x2 by the equations

y1 = a11x1 + a12x2
y2 = a21x1 + a22x2
This is a system of linear equations like those in Chapter 2. It is convenient to write these equations in matrix form. Let

A = [a11 a12; a21 a22],   x = [x1; x2],   and   y = [y1; y2]

where A is the coefficient matrix. Then the change of variables equations can be written in the form y = Ax, provided that we define the product of A and x according to the following rule:

Ax = [a11x1 + a12x2; a21x1 + a22x2]    (3.1)

It is instructive to rewrite these entries in the right-hand matrix as dot products. Let

a1 = [a11; a12]   and   a2 = [a21; a22]

so that a1^T = [a11 a12] and a2^T = [a21 a22] are the rows of A. Then the equation becomes

Ax = [a1 · x; a2 · x]

Thus, in order for the right-hand side of the original equations to be represented correctly by the matrix product Ax, the entry in the first row of Ax must be the dot product of the first row of A (as a column vector) with the vector x; the entry in the second row must be the dot product of the second row of A (as a column vector) with x.
Suppose there is a second change of variables, from y to z:

z1 = b11 y1 + b12 y2
z2 = b21 y1 + b22 y2

122 Chapter 3 Matrices, Linear Mappings, and Inverses

In matrix form, this is written z = By. Now suppose that these changes are performed one after the other. The values for y1 and y2 from the first change of variables are substituted into the second pair of equations:

z1 = b11(a11 x1 + a12 x2) + b12(a21 x1 + a22 x2)
z2 = b21(a11 x1 + a12 x2) + b22(a21 x1 + a22 x2)

After simplification, this can be written as

z = [b11 a11 + b12 a21   b11 a12 + b12 a22; b21 a11 + b22 a21   b21 a12 + b22 a22] x

We want this to be equivalent to z = By = B(Ax) = (BA)x. Therefore, the product BA must be

BA = [b11 a11 + b12 a21   b11 a12 + b12 a22; b21 a11 + b22 a21   b21 a12 + b22 a22]   (3.2)
Thus, the product BA must be defined by the following rules:
- (BA)11 is the dot product of the first row of B and the first column of A.
- (BA)12 is the dot product of the first row of B and the second column of A.
- (BA)21 is the dot product of the second row of B and the first column of A.
- (BA)22 is the dot product of the second row of B and the second column of A.

EXAMPLE 7
[2 3; 4 1][5 1; -2 7] = [2(5) + 3(-2)   2(1) + 3(7); 4(5) + 1(-2)   4(1) + 1(7)] = [4 23; 18 11]
We now want to generalize matrix multiplication. It will be convenient to use b_i^T to represent the i-th row of B and a_j to represent the j-th column of A. Observe from our work above that we want the ij-th entry of BA to be the dot product of the i-th row of B and the j-th column of A. However, for this to be defined, b_i must have the same number of entries as a_j. Hence, the number of entries in the rows of the matrix B (that is, the number of columns of B) must be equal to the number of entries in the columns of A (that is, the number of rows of A). We can now make a precise definition.

Definition (Matrix Multiplication)
Let B be an m x n matrix with rows b_1^T, ..., b_m^T and let A be an n x p matrix with columns a_1, ..., a_p. Then we define BA to be the m x p matrix whose ij-th entry is

(BA)_ij = b_i . a_j

Remark
If B is an m x n matrix and A is a p x q matrix, then BA is defined only if n = p. Moreover, if n = p, then the resulting matrix is m x q.
More simply stated, multiplication of two matrices can be performed only if the number of columns in the first matrix is equal to the number of rows in the second.
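The definition translates directly into code. Here is a hedged pure-Python sketch (an illustration, not the text's notation) that computes (BA)_ij as the dot product of the i-th row of B with the j-th column of A, and enforces the size condition; the final assertion matches the numbers of Example 7:

```python
# Matrix multiplication by the dot-product rule: (BA)_ij = (row i of B) . (col j of A).

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matmul(B, A):
    m, n = len(B), len(B[0])
    p = len(A)
    if n != p:
        # columns of the first factor must equal rows of the second
        raise ValueError("BA is not defined: B is %dx%d but A has %d rows" % (m, n, p))
    q = len(A[0])
    cols = [[A[i][j] for i in range(p)] for j in range(q)]  # columns of A
    return [[dot(row, col) for col in cols] for row in B]

assert matmul([[2, 3], [4, 1]], [[5, 1], [-2, 7]]) == [[4, 23], [18, 11]]
```

Attempting `matmul` on a 2-column matrix and a 3-row matrix raises the error, mirroring the "not defined" case discussed in the examples that follow.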
EXAMPLE 8
Perform the following operations.
(a) [2 3; 4 -1][⋯]
Solution: [2 3; 4 -1][⋯] = [9 15; 13 3]
(b) [⋯][⋯]
Solution: ⋯

EXAMPLE 9
The product [⋯][⋯] is not defined because the first matrix has two columns and the second matrix has three rows.
EXERCISE 4
Let A = [⋯] and B = [⋯]. Calculate the following or explain why they are not defined.
(a) AB  (b) BA  (c) A^T A  (d) BB^T

EXAMPLE 10
Let A = [1; 2; 3] and B = [6; 5; 4]. Compute A^T B.
Solution: A^T B = [1 2 3][6; 5; 4] = [1(6) + 2(5) + 3(4)] = [28]
Observe that if we let x = [1; 2; 3] and y = [6; 5; 4] be vectors in R^3, then A^T B = x . y.

Stretches
Consider the linear transformation of R^2 that stretches lengths in the x1-direction by a factor t > 0, while lengths in the x2-direction are left unchanged (Figure 3.3.4). This linear transformation, called a "stretch by factor t in the x1-direction," has matrix

[t 0; 0 1]
(If t < 1, you might prefer to call this a shrink.) It should be obvious that
stretches can also be defined in the x2-direction and in higher dimensions. Stretches are important in understanding the deformation of solids.
Contractions and Dilations
If a linear operator T : R^2 -> R^2 has matrix [t 0; 0 t] with t > 0, then for any x, T(x) = tx, so this transformation stretches vectors in all directions by the same factor. Thus, for example, a circle of radius 1 centred at the origin is mapped to a circle of radius t centred at the origin. If 0 < t < 1, such a transformation is called a contraction; if t > 1, it is a dilation.
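The difference between a stretch and a dilation is visible when the matrices are applied to vectors. A minimal pure-Python sketch (the factor t = 3 and the test vectors are assumed values for illustration):

```python
import math

# Apply the 2x2 matrices described above to a vector.
# [[t, 0], [0, 1]] scales only the x1-coordinate (a stretch);
# [[t, 0], [0, t]] scales every vector, and hence every length, by t (a dilation).

def apply(M, x):
    return [M[0][0]*x[0] + M[0][1]*x[1], M[1][0]*x[0] + M[1][1]*x[1]]

def length(x):
    return math.hypot(x[0], x[1])

t = 3.0
stretch = [[t, 0.0], [0.0, 1.0]]
dilation = [[t, 0.0], [0.0, t]]

assert apply(stretch, [2.0, 5.0]) == [6.0, 5.0]        # x2-coordinate unchanged
assert length(apply(dilation, [3.0, 4.0])) == t * 5.0  # all lengths scale by t
```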
Figure 3.3.4  A stretch by a factor t in the x1-direction.

Shears
Sometimes a force applied to a rectangle will cause it to deform into a parallelogram, as shown in Figure 3.3.5. The change can be described by the transformation S : R^2 -> R^2 such that S(2, 0) = (2, 0) and S(0, 1) = (s, 1). Although the deformation of a real solid may be more complicated, it is usual to assume that the transformation S is linear. Such a linear transformation is called a shear in the direction of x1 by amount s. Since the action of S on the standard basis vectors is known, we find that its matrix is

[1 s; 0 1]

Figure 3.3.5  A shear in the direction of x1 by amount s: the rectangle with vertices (0,0), (2,0), (2,1), (0,1) is mapped to the parallelogram with vertices (0,0), (2,0), (2+s,1), (s,1).
Reflections in Coordinate Axes in R^2 or Coordinate Planes in R^3
Let R : R^2 -> R^2 be a reflection in the x1-axis (see Figure 3.3.6). Then each vector corresponding to a point above the axis is mapped by R to the mirror image vector below the axis. Hence, R(x1, x2) = (x1, -x2), and it follows that this transformation is linear with matrix [1 0; 0 -1]. Similarly, a reflection in the x2-axis has the matrix [-1 0; 0 1].

Next, consider the reflection T : R^3 -> R^3 that reflects in the x1x2-plane (that is, the plane x3 = 0). Points above the plane are reflected to points below the plane. The matrix of this reflection is

[1 0 0; 0 1 0; 0 0 -1]

Section 3.3 Geometrical Transformations 147

Figure 3.3.6  A reflection in R^2 over the x1-axis.

EXERCISE 2
Write the matrices for the reflections in the other two coordinate planes in R^3.
General Reflections
We consider only reflections in (or "across") lines in R^2 or planes in R^3 that pass through the origin. Reflections in lines or planes not containing the origin involve translations (which are not linear) as well as linear mappings.

Consider the plane in R^3 with equation n . x = 0. Since a reflection is related to proj_n, a reflection in the plane with normal vector n will be denoted refl_n. If a vector p corresponds to a point P that does not lie in the plane, its image under refl_n is the vector that corresponds to the point on the opposite side of the plane, lying on the line through P perpendicular to the plane of reflection, at the same distance from the plane as P. Figure 3.3.7 shows reflection in a line. From the figure, we see that

refl_n(p) = p - 2 proj_n(p)

Since proj_n is linear, it is easy to see that refl_n is also linear.

Figure 3.3.7  A reflection in R^2 over the line with normal vector n.
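The formula refl_n(p) = p - 2 proj_n(p) can be tried out directly. A short pure-Python sketch (the normal vector n and point p are assumed values, not the text's example); choosing n normal to the x1x2-plane reproduces the coordinate-plane reflection matrix from above:

```python
# Reflection through the plane with normal n, via refl_n(p) = p - 2 proj_n(p).

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def proj(n, p):
    """Projection of p onto the normal vector n."""
    c = dot(p, n) / dot(n, n)
    return [c * ni for ni in n]

def refl(n, p):
    pr = proj(n, p)
    return [pi - 2 * pri for pi, pri in zip(p, pr)]

n = [0.0, 0.0, 1.0]          # normal to the x1x2-plane
p = [1.0, 2.0, 5.0]

assert refl(n, p) == [1.0, 2.0, -5.0]  # agrees with the matrix [1 0 0; 0 1 0; 0 0 -1]
assert refl(n, refl(n, p)) == p        # reflecting twice gives the identity
```

The second assertion is the fact used later in Problem D5: refl_n composed with itself is the identity mapping.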
It is important to note that refl_n is a reflection with normal vector n. The calculations for reflection in a line in R^2 are similar to those for a plane, provided that the equation of the line is given in scalar form n . x = 0. If the vector equation of the line is given as x = td, then either we must find a normal vector n and proceed as above, or, in terms of the direction vector d, the reflection will map p to p - 2 perp_d(p).

EXAMPLE 2
Consider the reflection refl_n : R^3 -> R^3 over the plane with normal vector n = [⋯]. Determine the matrix [refl_n].
Solution: ⋯

Note that we could have also computed [refl_n] in the following way. The equation for refl_n(p) can be written as

refl_n(p) = Id(p) - 2 proj_n(p) = (Id + (-2) proj_n)(p)

Thus,

[refl_n] = [Id] - 2[proj_n]
PROBLEMS 3.3
Practice Problems
A1 Determine the matrices of the rotations in the plane through the following angles.
(a) ⋯  (b) pi  (c) ⋯  (d) ⋯
A2 (a) In the plane, what is the matrix of a stretch S by a factor 5 in the x2-direction?
(b) Calculate the composition of S followed by a rotation through angle theta.
(c) Calculate the composition of S following a rotation through angle theta.
A3 Determine the matrices of the following reflections in R^2.
(a) R is a reflection in the line x1 + 3x2 = 0.
(b) S is a reflection in the line 2x1 - x2 = 0.
A4 Determine the matrix of the reflections in the following planes in R^3.
(a) x1 + x2 + x3 = 0
(b) 2x1 - 2x2 - x3 = 0
A5 (a) Let D : R^3 -> R^3 be the dilation with factor t = 5 and let inj : R^3 -> R^4 be defined by inj(x1, x2, x3) = (x1, x2, 0, x3). Determine the matrix of inj o D.
(b) Let P : R^3 -> R^2 be defined by P(x1, x2, x3) = (x2, x3) and let S be the shear in R^3 such that S(x1, x2, x3) = (x1, x2, x3 + 2x1). Determine the matrix of P o S.
(c) Can you define a shear T : R^2 -> R^2 such that T o P = P o S, where P and S are as in part (b)?
(d) Let Q : R^3 -> R^2 be defined by Q(x1, x2, x3) = (x1, x2). Determine the matrix of Q o S, where S is the mapping in part (b).
Homework Problems
B1 Determine the matrices of the rotations in the plane through the following angles.
(a) ⋯  (b) ⋯  (c) ⋯  (d) ⋯
B2 (a) In the plane, what is the matrix of a stretch S by a factor 0.6 in the x2-direction?
(b) Calculate the composition of S followed by a rotation through angle theta.
(c) Calculate the composition of S following a rotation through angle ⋯.
B3 Determine the matrices of the following reflections in R^2.
(a) R is a reflection in the line x1 - 5x2 = 0.
(b) S is a reflection in the line 3x1 + 4x2 = 0.
B4 Determine the matrix of the reflections in the following planes in R^3.
(a) x1 - 3x2 - x3 = 0
(b) 2x1 + x2 - x3 = 0
B5 (a) Let C : R^3 -> R^3 be the contraction with factor 1/3 and let inj : R^3 -> R^5 be defined by inj(x1, x2, x3) = (0, x1, 0, x2, x3). Determine the matrix of inj o C.
(b) Let S : R^3 -> R^3 be the shear defined by S(x1, x2, x3) = (x1, x2 - 2x3, x3). Determine the matrices of C o S and S o C, where C is the contraction in part (a).
(c) Let T : R^3 -> R^3 be the shear defined by T(x1, x2, x3) = (x1 + 3x2, x2, x3). Determine the matrices of S o T and T o S, where S is the mapping in part (b).
Conceptual Problems
D1 Verify that for rotations in the plane, [R_alpha o R_theta] = [R_alpha][R_theta] = [R_(alpha+theta)].
D2 In Problem A3, [R] = [4/5 -3/5; -3/5 -4/5] and [S] = [-3/5 4/5; 4/5 3/5] are reflection matrices. Calculate [R o S] and verify that it can be identified as the matrix of a rotation. Determine the angle of the rotation. Draw a picture illustrating how the composition of these reflections is a rotation.
D3 In R^3, calculate the matrix of the composition of a reflection in the x2x3-plane followed by a reflection in the x1x2-plane and identify it as a rotation about some coordinate axis. What is the angle of the rotation?
D4 (a) Construct a 2 x 2 matrix A, A not equal to I, such that A^3 = I. (Hint: think geometrically.)
(b) Construct a 2 x 2 matrix A, A not equal to I, such that A^5 = I.
D5 From geometrical considerations, we know that refl_n o refl_n = Id. Verify the corresponding matrix equation. (Hint: [refl_n] = I - 2[proj_n], and proj_n satisfies the projection property from Section 1.4.)
3.4 Special Subspaces for Systems and Mappings: Rank Theorem
In the preceding two sections, we have seen how to represent any linear mapping as a matrix mapping. We now use subspaces of R^n to explore this further by examining the connection between the properties of L and its standard matrix [L] = A. This will also allow us to show how the solutions of the systems of equations Ax = b and Ax = 0 are related. Recall from Section 1.2 that a subspace of R^n is a non-empty subset of R^n that is closed under addition and closed under scalar multiplication. Moreover, we proved that Span{v_1, ..., v_k} is a subspace of R^n. Throughout this section, L will always denote a linear mapping from R^n to R^m, and A will denote the standard matrix of L.
Solution Space and Nullspace

Theorem 1
Let A be an m x n matrix. The set S = {x in R^n | Ax = 0} of all solutions to the homogeneous system Ax = 0 is a subspace of R^n.

Proof: We have 0 in S since A0 = 0. Let x, y in S. Then A(x + y) = Ax + Ay = 0 + 0 = 0, so we have x + y in S. Let x in S and t in R. Then A(tx) = tAx = t0 = 0, so tx in S. Hence, by definition, S is a subspace of R^n.
From this result, we make the following definition.
Definition (Solution Space)
The set S = {x in R^n | Ax = 0} of all solutions to a homogeneous system Ax = 0 is called the solution space of the system Ax = 0.

Section 3.4 Special Subspaces for Systems and Mappings: Rank Theorem 151

EXAMPLE 1
Find the solution space of the homogeneous system x1 + 2x2 - 3x3 = 0.
Solution: We can solve this very easily by using the methods of Chapter 2. In particular, we find that the general solution is ⋯

EXERCISE 1
Let A = [⋯]. Find the solution space of Ax = 0.
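Example 1's solution space can be verified computationally. The parametrization below, x = s(-2, 1, 0) + t(3, 0, 1), is a standard one for this equation (the text's displayed solution was lost in this copy), and the sketch checks that every such vector really satisfies the homogeneous equation:

```python
# Check that s(-2, 1, 0) + t(3, 0, 1) always solves x1 + 2*x2 - 3*x3 = 0.

def in_solution_space(x):
    x1, x2, x3 = x
    return x1 + 2 * x2 - 3 * x3 == 0

for s in range(-3, 4):
    for t in range(-3, 4):
        x = [-2 * s + 3 * t, s, t]
        assert in_solution_space(x)
```

Substituting symbolically gives (-2s + 3t) + 2s - 3t = 0, which is the closure argument of Theorem 1 in concrete form.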
Notice that in both of these problems, the solution space is displayed automatically as the span of a set of linearly independent vectors. For the linear mapping L with standard matrix A = [L], we see that L(x) = Ax by definition of the standard matrix. Hence, the vectors x such that L(x) = 0 are exactly the same as the vectors satisfying Ax = 0. Thus, the set of all vectors x such that L(x) = 0 also forms a subspace of R^n.

Definition (Nullspace)
The nullspace of a linear mapping L is the set of all vectors whose image under L is the zero vector 0. We write
Null(L) = {x in R^n | L(x) = 0}

Remark
The word kernel, and the notation ker(L) = {x in R^n | L(x) = 0}, is often used in place of nullspace.

EXAMPLE 2
Let v = [2; -1; 3]. Find the nullspace of proj_v : R^3 -> R^3.
Solution: Since vectors orthogonal to v are mapped to 0 by proj_v, the nullspace of proj_v is the set of all vectors orthogonal to v. That is, it is the plane passing through the origin with normal vector v. Hence, we get

Null(proj_v) = {[x1; x2; x3] in R^3 | 2x1 - x2 + 3x3 = 0}
EXAMPLE 3
Let L : R^2 -> R^3 be defined by L(x1, x2) = (2x1 - x2, 0, x1 + x2). Find Null(L).
Solution: We have [x1; x2] in Null(L) if L(x1, x2) = (0, 0, 0). For this to be true, we must have

2x1 - x2 = 0
x1 + x2 = 0

We see that the only solution to this homogeneous system is x = 0. Thus, Null(L) = {0}.
EXERCISE 2
To match our work with linear mappings, we make the following definition for matrices.

Definition (Nullspace)
Let A be an m x n matrix. Then the nullspace of A is
Null(A) = {x in R^n | Ax = 0}

It should be clear that for any linear mapping L : R^n -> R^m, Null(L) = Null([L]).

Solution Set of Ax = b
Next, we want to consider the solution set of a non-homogeneous system Ax = b, b not equal to 0, and compare this solution set with the solution space of the corresponding homogeneous system Ax = 0 (that is, the system with the same coefficient matrix A).

EXAMPLE 4
Find the general solution of the system x1 + 2x2 - 3x3 = 5.
Solution: The general solution of x1 + 2x2 - 3x3 = 5 is ⋯

EXERCISE 3
Let A = [⋯] and b = [⋯]. Find the general solution of Ax = b.
Observe that in Example 4 and Exercise 3, the general solution is obtained from the solution space of the corresponding homogeneous problem (Example 1 and Exercise 1, respectively) by a translation. We prove this result in the following theorem.

Theorem 2
Let p be a solution of the system of linear equations Ax = b, b not equal to 0.
(1) If v is any other solution of the same system, then A(p - v) = 0, so that p - v is a solution of the corresponding homogeneous system Ax = 0.
(2) If h is any solution of the corresponding homogeneous system Ax = 0, then p + h is a solution of the system Ax = b.

Proof:
(1) Suppose that Av = b. Then A(p - v) = Ap - Av = b - b = 0.
(2) Suppose that Ah = 0. Then A(p + h) = Ap + Ah = b + 0 = b.

The solution p of the non-homogeneous system is sometimes called a particular solution of the system. Theorem 2 can thus be restated as follows: any solution of the non-homogeneous system can be obtained by adding a solution of the corresponding homogeneous system to a particular solution.
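Theorem 2 can be checked numerically on Example 4's equation x1 + 2x2 - 3x3 = 5. The sketch below takes the particular solution p = (5, 0, 0) (an assumed but easily verified choice) and confirms that adding any homogeneous solution still satisfies the equation:

```python
# Theorem 2, part (2): particular solution + homogeneous solution solves Ax = b.

def lhs(x):
    """Left-hand side of Example 4's equation x1 + 2*x2 - 3*x3."""
    return x[0] + 2 * x[1] - 3 * x[2]

p = [5, 0, 0]            # a particular solution: lhs(p) == 5
assert lhs(p) == 5

for s in range(-3, 4):
    for t in range(-3, 4):
        h = [-2 * s + 3 * t, s, t]   # homogeneous solution, as in Example 1
        assert lhs(h) == 0
        total = [pi + hi for pi, hi in zip(p, h)]
        assert lhs(total) == 5       # p + h is again a solution of Ax = b
```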
Range of L and Columnspace of A

Definition (Range)
The range of a linear mapping L : R^n -> R^m is defined to be the set
Range(L) = {L(x) in R^m | x in R^n}

EXAMPLE 5
Let v = [⋯] and consider the linear mapping proj_v : R^3 -> R^3. By definition, every image of this mapping is a multiple of v, so the range of the mapping is the set of all multiples of v. On the other hand, the range of perp_v is the set of all vectors orthogonal to v. Note that in each of these cases, the range is a subset of the codomain.

EXAMPLE 6
If L is a rotation, reflection, contraction, or dilation in R^3, then, because of the geometry of the mapping, it is easy to see that the range of L is all of R^3.
EXAMPLE 7
Let L : R^2 -> R^3 be defined by L(x1, x2) = (2x1 - x2, 0, x1 + x2). Find Range(L).
Solution: By definition of the range, if L(x) is any vector in the range, then

L(x) = [2x1 - x2; 0; x1 + x2]

Using vector operations, we can write this as

L(x) = x1[2; 0; 1] + x2[-1; 0; 1]

This holds for any x1, x2 in R, and so Range(L) = Span{[2; 0; 1], [-1; 0; 1]}.

EXERCISE 4
Let L : R^3 -> R^2 be defined by L(x1, x2, x3) = (x1 - x2, -2x1 + 2x2 + x3). Find Range(L).

It is natural to ask whether the range of L can easily be described in terms of the matrix A of L. Observe that

L(x) = Ax = [a_1 ... a_n][x1; ...; xn] = x1 a_1 + ... + xn a_n

Thus, the image of x under L is a linear combination of the columns of the matrix A.

Definition (Columnspace)
The columnspace of an m x n matrix A is the set Col(A) defined by
Col(A) = {Ax in R^m | x in R^n}

Notice that our second interpretation of matrix-vector multiplication tells us that Ax is a linear combination of the columns of A. Thus, the columnspace of A is the set of all possible linear combinations of the columns of A. In particular, it is the subspace of R^m spanned by the columns of A. Moreover, if L : R^n -> R^m is a linear mapping, then Range(L) = Col(A).
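The identity Ax = x1 a_1 + ... + xn a_n can be verified directly. The sketch below uses the standard matrix of Example 7's mapping L(x1, x2) = (2x1 - x2, 0, x1 + x2), with an assumed input vector x = (3, 4):

```python
# Ax computed two ways: by the dot-product rule and as a combination of columns.

def matvec(A, x):
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

A = [[2, -1], [0, 0], [1, 1]]   # standard matrix of L(x1,x2) = (2x1 - x2, 0, x1 + x2)
x = [3, 4]

# Columns of A, then the linear combination x1*a1 + x2*a2.
cols = [[row[j] for row in A] for j in range(len(x))]
combo = [x[0] * c1 + x[1] * c2 for c1, c2 in zip(cols[0], cols[1])]

assert matvec(A, x) == combo    # every image Ax lies in Col(A)
```

Since the two computations agree for every x, the range of the mapping is exactly the span of the columns, which is the statement Range(L) = Col(A).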
EXAMPLE 8
Let A = [1 2 -3; 2 1 1] and B = [⋯]. Then

Col(A) = Span{[1; 2], [2; 1], [-3; 1]}

and

Col(B) = Span{[⋯], [⋯]}

EXAMPLE 9
If L is the mapping with standard matrix ⋯

An n x n matrix is said to ⋯
Remark
By definition, a matrix in row echelon form is upper triangular. This fact is very important for the LU-decomposition.

Observe that for each such system Ax = b, we can use the same row operations to row reduce [A | b] to row echelon form and then solve the system using back-substitution. The only difference between the systems will then be the effect of the row operations on b. In particular, we see that the two important pieces of information we require are the row echelon form of A and the elementary row operations used.

For our purposes, we will assume that our n x n coefficient matrix A can be brought into row echelon form using only elementary row operations of the form "add a multiple of one row to another." Since we can row reduce a matrix to a row echelon form without multiplying a row by a non-zero constant, omitting this row operation is not a problem. However, omitting row interchanges may seem rather serious: without row interchanges, we cannot bring a matrix such as [0 1; 1 0] into row echelon form. However, we only omit row interchanges because it is difficult to keep track of them by hand. A computer can keep track of row interchanges without physically moving entries from one location to another. At the end of the section, we will comment on the case where swapping rows is required.

Thus, for such a matrix A, to row reduce A to a row echelon form, we will use only row operations of the form R_i + sR_j, where i > j. Each such row operation has a corresponding elementary matrix that is lower triangular and has 1s along the main diagonal. So, under our assumption, there are elementary matrices E_1, ..., E_k, all lower triangular, such that

E_k ... E_1 A = U

where U is a row echelon form of A. Since E_k ... E_1 is invertible, we can write A = (E_k ... E_1)^(-1) U and define

L = (E_k ... E_1)^(-1) = E_1^(-1) ... E_k^(-1)

Since the inverse of a lower-triangular elementary matrix is lower triangular, and a product of lower-triangular matrices is lower triangular, L is lower triangular. (You are asked to prove this in Problem D1.) Therefore, this gives us the matrix decomposition A = LU, where U is upper triangular and L is lower triangular. Moreover, L contains the information about the row operations used to bring A to U.

Theorem 1
If A is an n x n matrix that can be row reduced to row echelon form without swapping rows, then there exist an upper triangular matrix U and a lower triangular matrix L such that A = LU.

Definition (LU-Decomposition)
Writing an n x n matrix A as a product LU, where L is lower triangular and U is upper triangular, is called an LU-decomposition of A.

Our derivation has given an algorithm for finding the LU-decomposition of such a matrix.
Section 3.7 LU-Decomposition 183

EXAMPLE 1
Find an LU-decomposition of A = [⋯].
Solution: Row-reducing and keeping track of our row operations (R2 - 2R1, R3 + (1/2)R1, then R3 + (3/2)R2) gives a row echelon form U, with corresponding elementary matrices

E1 = [1 0 0; -2 1 0; 0 0 1],  E2 = [1 0 0; 0 1 0; 1/2 0 1],  E3 = [1 0 0; 0 1 0; 0 3/2 1]

Hence, we let

L = E1^(-1) E2^(-1) E3^(-1) = [1 0 0; 2 1 0; -1/2 -3/2 1]

and we get A = LU.
Observe from this example that the entries of L below the main diagonal are just the negatives of the multipliers used to put a zero in the corresponding entry. To see why this is the case, observe that if E_k ... E_1 A = U, then E_k ... E_1 L = I. Hence, the same row operations that reduce A to U will reduce L to I. This makes the LU-decomposition extremely easy to find.
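The procedure just described (eliminate below each pivot with R_i - m R_j and record each multiplier m in L) can be sketched in a few lines of pure Python. This is a minimal illustration that assumes no zero pivot is encountered, exactly as in the discussion above; the test matrix is the one factored in Example 2:

```python
# LU-decomposition without row interchanges (Doolittle form: L has 1s on the diagonal).

def lu(A):
    n = len(A)
    U = [row[:] for row in A]
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for j in range(n):
        for i in range(j + 1, n):
            m = U[i][j] / U[j][j]       # assumes U[j][j] != 0 (no swaps needed)
            L[i][j] = m                 # the negated row operation R_i - m*R_j
            U[i] = [uik - m * ujk for uik, ujk in zip(U[i], U[j])]
    return L, U

def matmul(B, A):
    return [[sum(B[i][k] * A[k][j] for k in range(len(A))) for j in range(len(A[0]))]
            for i in range(len(B))]

B = [[2, 1, -1], [-4, 3, 3], [6, 8, -3]]
L, U = lu(B)
assert matmul(L, U) == B               # L times U recovers the original matrix
```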
EXAMPLE 2
Find an LU-decomposition of B = [2 1 -1; -4 3 3; 6 8 -3].
Solution: By row-reducing, we get

[2 1 -1; -4 3 3; 6 8 -3] -> (R2 + 2R1, R3 - 3R1) -> [2 1 -1; 0 5 1; 0 5 0] -> (R3 - R2) -> [2 1 -1; 0 5 1; 0 0 -1] = U

and the negated multipliers give

L = [1 0 0; -2 1 0; 3 1 1]

Therefore, we have

B = LU = [1 0 0; -2 1 0; 3 1 1][2 1 -1; 0 5 1; 0 0 -1]
EXAMPLE 3
Find an LU-decomposition of C = [⋯].
Solution: By row-reducing (the final step is R3 + 3R2), we get a row echelon form U = [⋯] and L = [⋯], so that C = LU.

EXERCISE 1
Find an LU-decomposition of A = [⋯].
Solving Systems with the LU-Decomposition
We now look at how to use the LU-decomposition to solve the system Ax = b. If A = LU, the system can be written as LUx = b. Letting y = Ux, we can write LUx = b as two systems:

Ly = b  and  Ux = y

which both have triangular coefficient matrices. This allows us to solve both systems immediately, using substitution. In particular, since L is lower triangular, we use forward-substitution to solve Ly = b for y and then solve Ux = y for x using back-substitution.

Remark
Observe that the first system is really calculating how performing the row operations on A would have affected b.

EXAMPLE 4
Let B = [2 1 -1; -4 3 3; 6 8 -3] and b = [3; -13; 4]. Use an LU-decomposition of B to solve Bx = b.
Solution: In Example 2 we found an LU-decomposition of B. We write Bx = b as LUx = b and take y = Ux. Writing out the system Ly = b, we get

y1 = 3
-2y1 + y2 = -13
3y1 + y2 + y3 = 4

Using forward-substitution, we find that y1 = 3, so y2 = -13 + 2(3) = -7 and y3 = 4 - 3(3) - (-7) = 2. Hence, y = [3; -7; 2]. Thus, our system Ux = y is

2x1 + x2 - x3 = 3
5x2 + x3 = -7
-x3 = 2

Using back-substitution, we get x3 = -2, then 5x2 = -7 - (-2) gives x2 = -1, and 2x1 = 3 - (-1) + (-2) gives x1 = 1. Thus, the solution is x = [1; -1; -2].
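The two substitution stages can be written as short routines. The following pure-Python sketch solves Example 4 with its factors L and U and right-hand side b (values consistent with the worked example above):

```python
# Forward-substitution for Ly = b, then back-substitution for Ux = y.

def forward_sub(L, b):
    y = []
    for i in range(len(b)):
        y.append((b[i] - sum(L[i][j] * y[j] for j in range(i))) / L[i][i])
    return y

def back_sub(U, y):
    n = len(y)
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

L = [[1, 0, 0], [-2, 1, 0], [3, 1, 1]]
U = [[2, 1, -1], [0, 5, 1], [0, 0, -1]]
b = [3, -13, 4]

y = forward_sub(L, b)
x = back_sub(U, y)
assert y == [3.0, -7.0, 2.0]
assert x == [1.0, -1.0, -2.0]
```

Each stage is O(n^2), which is why an LU-decomposition pays off when the same A must be solved against many right-hand sides b.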
EXAMPLE 5
Use an LU-decomposition to solve Ax = b, where A = [1 -1 1; -1 2 3; -2 4 6] and b = [1; 6; 12].
Solution: We first find an LU-decomposition of A. Row reducing gives

[1 -1 1; -1 2 3; -2 4 6] -> (R2 + R1, R3 + 2R1) -> [1 -1 1; 0 1 4; 0 2 8] -> (R3 - 2R2) -> [1 -1 1; 0 1 4; 0 0 0] = U

so

L = [1 0 0; -1 1 0; -2 2 1]

We let y = Ux and solve Ly = b. This gives

y1 = 1
-y1 + y2 = 6
-2y1 + 2y2 + y3 = 12

So y1 = 1, y2 = 6 + 1 = 7, and y3 = 12 + 2(1) - 2(7) = 0. Then we solve Ux = y, which is

x1 - x2 + x3 = 1
x2 + 4x3 = 7
0x3 = 0

This gives x3 = t in R, x2 = 7 - 4t, and x1 = 1 + (7 - 4t) - t = 8 - 5t. Hence,

x = [8; 7; 0] + t[-5; -4; 1],  t in R
EXERCISE 2
Let A = [⋯] be the matrix of Exercise 1. Use the LU-decomposition of A that you found in Exercise 1 to solve the system Ax = b, where:
(a) b = [⋯]  (b) b = [⋯]
A Comment About Swapping Rows
It can be shown that for any n x n matrix A, we can first rearrange the rows of A to get a matrix that has an LU-decomposition. In particular, for every matrix A there exists a matrix P, called a permutation matrix, that can be obtained by performing only row swaps on the identity matrix, such that

PA = LU

Then, to solve Ax = b, we use the LU-decomposition as before to solve the equivalent system

PAx = Pb
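The role of P can be seen on a tiny assumed example (not from the text): a matrix with a zero in the (1,1) entry cannot start elimination, but after multiplying by a permutation matrix it is already in echelon form:

```python
# A permutation matrix P (identity with rows swapped) fixes a zero-pivot matrix.

def matmul(B, A):
    return [[sum(B[i][k] * A[k][j] for k in range(len(A))) for j in range(len(A[0]))]
            for i in range(len(B))]

A = [[0, 1], [2, 3]]    # a11 = 0, so elimination cannot proceed without a swap
P = [[0, 1], [1, 0]]    # identity matrix with rows 1 and 2 interchanged

PA = matmul(P, A)
assert PA == [[2, 3], [0, 1]]   # upper triangular already: here L = I and U = PA
```

In library routines this bookkeeping is done implicitly: the factorization records which rows were swapped rather than physically moving entries, just as the text notes a computer would.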
PROBLEMS 3.7
Practice Problems
A1 Find an LU-decomposition of each of the following matrices.
(a) [⋯]  (b) [⋯]  (c) [⋯]  (d) [⋯]  (e) [⋯]  (f) [⋯]
A2 For each of the following matrices A, find an LU-decomposition and use it to solve Ax = b_i for i = 1, 2.
(a) [⋯]  (b) [⋯]  (c) [⋯]  (d) [⋯]

Homework Problems
B1 Find an LU-decomposition of each of the following matrices.
(a) [⋯]  (b) [⋯]  (c) [⋯]  (d) [⋯]  (e) [⋯]  (f) [⋯]
B2 For each of the following matrices A, find an LU-decomposition and use it to solve Ax = b_i for i = 1, 2.
(a) [⋯]  (b) [⋯]  (c) [⋯]  (d) [⋯]

Conceptual Problems
D1 (a) Prove that the inverse of a lower-triangular elementary matrix is lower triangular.
(b) Prove that a product of lower-triangular matrices is lower triangular.
CHAPTER REVIEW

Suggestions for Student Review
Try to answer all of these questions before checking answers at the suggested locations. In particular, try to invent your own examples. These review suggestions are intended to help you carry out your review. They may not cover every idea you need to master. Working in small groups may improve your efficiency.

1 State the rules for determining the product of two matrices A and B. What condition(s) must be satisfied by the sizes of A and B for the product to be defined? How does each of these rules correspond to writing a system of linear equations? (Section 3.1)

2 Explain clearly the relationship between a matrix and the corresponding matrix mapping. Explain how you determine the matrix of a given linear mapping. Pick some vector v in R^2 and determine the standard matrices of the linear mappings proj_v, perp_v, and refl_v. Check that your answers are correct by using each matrix to determine the image of v and a vector orthogonal to v under the mapping. (Section 3.2)

3 Determine the image of the vector [⋯] under the rotation of the plane through the angle ⋯. Check that the image has the same length as the original vector. (Section 3.3)

4 Outline the relationships between the solution set for the homogeneous system Ax = 0 and solutions for the corresponding non-homogeneous system Ax = b. Illustrate by giving some specific examples where A is 2 x 3 and in reduced row echelon form. Also discuss connections between these sets and special subspaces for linear mappings. (Section 3.4)

5 (a) How many ways can you recognize the rank of a matrix? State them all. (Section 3.4)
(b) State the connection between the rank of a matrix A and the dimension of the solution space of Ax = 0. (Section 3.4)
(c) Illustrate your answers to (a) and (b) by constructing examples of 4 x 5 matrices in row echelon form of (i) rank 4; (ii) rank 3; and (iii) rank 2. In each case, actually determine the general solution of the system Ax = 0 and check that the solution space has the correct dimension. (Section 3.4)

6 (a) Outline the procedure for determining the inverse of a matrix. Indicate why it might not produce an inverse for some matrix A. Use the matrices of some geometric linear mappings to give two or three examples of matrices that have inverses and two examples of square matrices that do not have inverses. (Section 3.5)
(b) Pick a fairly simple 3 x 3 matrix (one without too many zeros) and try to find its inverse. If it is not invertible, try another. When you have an inverse, check its correctness by multiplication. (Section 3.5)

7 For 3 x 3 matrices, choose one elementary row operation of each of the three types; call these E1, E2, E3. Choose an arbitrary 3 x 3 matrix A and check that each E_i A is the matrix obtained from A by the appropriate elementary row operation. (Section 3.6)
Chapter Quiz

E1 Let A = [⋯] and B = [⋯]. Either determine the following products or explain why they are not defined.
(a) AB  (b) BA  (c) BA^T

E2 (a) Let A = [⋯] be the matrix of the linear mapping f_A, and let u = [⋯] and v = [⋯]. Determine f_A(u) and f_A(v).
(b) Use the result of part (a) to calculate ⋯

E3 Let R be the rotation through angle ⋯ about the x3-axis in R^3 and let M be a reflection in R^3 in the plane with equation -x1 - x2 + 2x3 = 0. Determine:
(a) the matrix of R
(b) the matrix of M
(c) the matrix of [R o M]

E4 Let A = [⋯] and b = [⋯]. Determine the solution set of Ax = b and the solution space of Ax = 0, and discuss the relationship between the two sets.

E5 Let B = [⋯] and v = [⋯].
(a) Using only one sequence of elementary row operations, determine whether v is in the columnspace of B and whether v is in the range of the linear mapping f_B with matrix B.
(b) Determine from your calculation in part (a) a vector x such that f_B(x) = v.
(c) Determine a vector y such that f_B(y) equals the second column of B.

E6 You are given the matrix A below and a row echelon form of A. Determine a basis for the rowspace, columnspace, and nullspace of A.
A = [⋯]

E7 Determine the inverse of the matrix A = [1 0 0 -1; 0 0 1 0; 1 0 2 0; 2 1 0 0].

E8 Determine all values of p such that the matrix [⋯] is invertible, and determine its inverse.

E9 Prove that the range of a linear mapping L : R^n -> R^m is a subspace of the codomain.

E10 Let {v_1, ..., v_k} be a linearly independent set in R^n and let L : R^n -> R^m be a linear mapping. Prove that if Null(L) = {0}, then {L(v_1), ..., L(v_k)} is a linearly independent set in R^m.

E11 Let A = [⋯].
(a) Determine a sequence of elementary matrices E_1, ..., E_k such that E_k ... E_1 A = I.
(b) By inverting the elementary matrices in part (a), write A as a product of elementary matrices.

E12 For each of the following, either give an example or explain (in terms of theorems or definitions) why no such example can exist.
(a) A matrix K such that KM = MK for all 3 x 3 matrices M.
(b) A matrix K such that KM = MK for all 3 x 4 matrices M.
(c) The matrix of a linear mapping L : R^2 -> R^3 whose range is Span{[⋯]} and whose nullspace is Span{[⋯]}.
(d) The matrix of a linear mapping L : R^2 -> R^3 whose range is Span{[⋯]} and whose nullspace is Span{[⋯]}.
(e) A linear mapping L : R^3 -> R^3 such that the range of L is all of R^3 and the nullspace of L is Span{[⋯]}.
(f) An invertible 4 x 4 matrix of rank 3.
Further Problems
These problems are intended to be challenging. They may not be of interest to all students.

F1 We say that a matrix C commutes with a matrix D if CD = DC. Show that the set of matrices that commute with A = [⋯] is the set of matrices of the form pI + qA, where p and q are arbitrary scalars.

F2 Let A be some fixed n x n matrix. Show that the set C(A) of matrices that commute with A is closed under addition, scalar multiplication, and matrix multiplication.

F3 A square matrix A is said to be nilpotent if some power of A is equal to the zero matrix. Show that the matrix [0 a12 a13; 0 0 a23; 0 0 0] is nilpotent. Generalize.

F4 (a) Suppose that l is a line in R^2 passing through the origin and making an angle theta with the positive x1-axis. Let refl_theta denote a reflection in this line. Determine the matrix [refl_theta] in terms of functions of theta.
(b) Let refl_alpha denote a reflection in a second line, and by considering the matrix [refl_alpha o refl_theta], show that the composition of two reflections in the plane is a rotation. Express the angle of the rotation in terms of alpha and theta.

F5 (Isometries of R^2) A linear transformation L : R^2 -> R^2 is an isometry of R^2 if L preserves lengths (that is, if ||L(x)|| = ||x|| for every x in R^2).
(a) Show that an isometry preserves the dot product (that is, L(x) . L(y) = x . y for every x, y in R^2). (Hint: consider L(x + y).)
(b) Show that the columns of the matrix [L] must be orthogonal to each other and of length 1. Deduce that any isometry of R^2 must be the composition of a reflection and a rotation. (Hint: you may find it helpful to use the result of Problem F4 (a).)

F6 (a) Suppose that A and B are n x n matrices such that A + B and A - B are invertible and that C and D are arbitrary n x n matrices. Show that there are n x n matrices X and Y satisfying the system
AX + BY = C
BX + AY = D
(b) With the same assumptions as in part (a), give a careful explanation of why the matrix [A B; B A] must be invertible, and obtain an expression for its inverse in terms of (A + B)^(-1) and (A - B)^(-1).
MyMathlab
Go to MyMathLab at www.mymathlab.com. You can practise many of this chapter's exercises as often as you want. The guided solutions help you find an answer step by step. You'll find a personalized study plan available to you, too!
CHAPTER 4
Vector Spaces

CHAPTER OUTLINE
4.1 Spaces of Polynomials
4.2 Vector Spaces
4.3 Bases and Dimensions
4.4 Coordinates with Respect to a Basis
4.5 General Linear Mappings
4.6 Matrix of a Linear Mapping
4.7 Isomorphisms of Vector Spaces
This chapter explores some of the most important ideas in linear algebra. Some of these ideas have appeared as special cases before, but here we give definitions and examine them in more general settings.
4.1 Spaces of Polynomials
We now compare sets of polynomials under standard addition and scalar multiplication to sets of vectors in ℝⁿ and sets of matrices.

Addition and Scalar Multiplication of Polynomials
Recall that if p(x) = a₀ + a₁x + ⋯ + aₙxⁿ, q(x) = b₀ + b₁x + ⋯ + bₙxⁿ, and t ∈ ℝ, then

(p + q)(x) = (a₀ + b₀) + (a₁ + b₁)x + ⋯ + (aₙ + bₙ)xⁿ

and

(tp)(x) = ta₀ + (ta₁)x + ⋯ + (taₙ)xⁿ

Moreover, two polynomials p and q are equal if and only if aᵢ = bᵢ for 0 ≤ i ≤ n.
EXAMPLE 1
Perform the following operations.
(a) (2 + 3x + 4x² + x³) + (5 + x − 2x² + 7x³)
Solution: (2 + 3x + 4x² + x³) + (5 + x − 2x² + 7x³) = 2 + 5 + (3 + 1)x + (4 − 2)x² + (1 + 7)x³ = 7 + 4x + 2x² + 8x³
(b) (3 + x² − 5x³) − (1 − x − 2x²)
Solution: (3 + x² − 5x³) − (1 − x − 2x²) = 3 − 1 + [0 − (−1)]x + [1 − (−2)]x² + [−5 − 0]x³ = 2 + x + 3x² − 5x³
(c) 5(2 + 3x + 4x² + x³)
Solution: 5(2 + 3x + 4x² + x³) = 5(2) + 5(3)x + 5(4)x² + 5(1)x³ = 10 + 15x + 20x² + 5x³
(d) 2(1 + 3x − x³) + 3(4 + x² + 2x³)
Solution: 2(1 + 3x − x³) + 3(4 + x² + 2x³) = 2(1) + 2(3)x + 2(0)x² + 2(−1)x³ + 3(4) + 3(0)x + 3(1)x² + 3(2)x³ = 2 + 12 + (6 + 0)x + (0 + 3)x² + (−2 + 6)x³ = 14 + 6x + 3x² + 4x³
Properties of Polynomial Addition and Scalar Multiplication

Theorem 1
Let p(x), q(x), and r(x) be polynomials of degree at most n and let s, t ∈ ℝ. Then
(1) p(x) + q(x) is a polynomial of degree at most n
(2) p(x) + q(x) = q(x) + p(x)
(3) (p(x) + q(x)) + r(x) = p(x) + (q(x) + r(x))
(4) The polynomial 0 = 0 + 0x + ⋯ + 0xⁿ, called the zero polynomial, satisfies p(x) + 0 = p(x) = 0 + p(x) for any polynomial p(x)
(5) For each polynomial p(x), there exists an additive inverse, denoted (−p)(x), with the property that p(x) + (−p)(x) = 0; in particular, (−p)(x) = −p(x)
(6) tp(x) is a polynomial of degree at most n
(7) s(tp(x)) = (st)p(x)
(8) (s + t)p(x) = sp(x) + tp(x)
(9) t(p(x) + q(x)) = tp(x) + tq(x)
(10) 1p(x) = p(x)

Remarks
1. These properties follow easily from the definitions of addition and scalar multiplication and are very similar to those for vectors in ℝⁿ. Thus, the proofs are left to the reader.
2. Observe that these are the same 10 properties we had for addition and scalar multiplication of vectors in ℝⁿ (Theorem 1.2.1) and of matrices (Theorem 3.1.1).
3. When we look at polynomials in this way, it is the coefficients of the polynomials that are important.

As with vectors in ℝⁿ and matrices, we can also consider linear combinations of polynomials. We make the following definition.
Definition  Span
Let B = {p₁(x), …, pₖ(x)} be a set of polynomials of degree at most n. Then the span of B is defined as
Span B = {t₁p₁(x) + ⋯ + tₖpₖ(x) | t₁, …, tₖ ∈ ℝ}

Definition  Linearly Independent
The set B = {p₁(x), …, pₖ(x)} is said to be linearly independent if the only solution to the equation
t₁p₁(x) + ⋯ + tₖpₖ(x) = 0
is t₁ = ⋯ = tₖ = 0; otherwise, B is said to be linearly dependent.
EXAMPLE 2
Determine if 1 + 2x + 3x² + 4x³ is in the span of B = {1 + x, 1 + x³, x + x², x + x³}.
Solution: We want to determine if there are t₁, t₂, t₃, t₄ such that
1 + 2x + 3x² + 4x³ = t₁(1 + x) + t₂(1 + x³) + t₃(x + x²) + t₄(x + x³)
= (t₁ + t₂) + (t₁ + t₃ + t₄)x + t₃x² + (t₂ + t₄)x³
By comparing the coefficients of the different powers of x on both sides of the equation, we get the system of linear equations
t₁ + t₂ = 1
t₁ + t₃ + t₄ = 2
t₃ = 3
t₂ + t₄ = 4
Row reducing the augmented matrix gives
[1 1 0 0 | 1]     [1 0 0 0 | −2]
[1 0 1 1 | 2]  →  [0 1 0 0 |  3]
[0 0 1 0 | 3]     [0 0 1 0 |  3]
[0 1 0 1 | 4]     [0 0 0 1 |  1]
We see that the system is consistent; therefore, 1 + 2x + 3x² + 4x³ is in the span of B. In particular, we have t₁ = −2, t₂ = 3, t₃ = 3, and t₄ = 1.
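The same span-membership question can be settled numerically: set up the coefficient system and solve it. A sketch with NumPy (variable names ours); since this particular 4 × 4 system has a unique solution, np.linalg.solve applies directly:

```python
import numpy as np

# Columns are the coefficient vectors of 1+x, 1+x^3, x+x^2, x+x^3;
# rows correspond to the coefficients of 1, x, x^2, x^3.
A = np.array([[1, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 0, 1, 0],
              [0, 1, 0, 1]], dtype=float)
b = np.array([1, 2, 3, 4], dtype=float)  # coefficients of 1 + 2x + 3x^2 + 4x^3

t = np.linalg.solve(A, b)
print(t)  # [-2.  3.  3.  1.] -- matches t1 = -2, t2 = 3, t3 = 3, t4 = 1
```

For a non-square or inconsistent system one would instead row reduce the augmented matrix, exactly as in the worked example.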
EXAMPLE 3
Determine if the set B = {1 + 2x + 2x² − x³, 3 + 2x + x² + x³, 2x² + 2x³} is linearly dependent or linearly independent.
Solution: Consider
0 = t₁(1 + 2x + 2x² − x³) + t₂(3 + 2x + x² + x³) + t₃(2x² + 2x³)
= (t₁ + 3t₂) + (2t₁ + 2t₂)x + (2t₁ + t₂ + 2t₃)x² + (−t₁ + t₂ + 2t₃)x³
Comparing coefficients of the powers of x, we get a homogeneous system of linear equations. Row reducing the associated coefficient matrix gives
[ 1  3  0]     [1 0 0]
[ 2  2  0]  →  [0 1 0]
[ 2  1  2]     [0 0 1]
[−1  1  2]     [0 0 0]
The only solution is t₁ = t₂ = t₃ = 0. Hence B is linearly independent.
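Because the polynomials correspond to coefficient columns, linear independence is equivalent to the coefficient matrix having rank equal to its number of columns. A quick NumPy check of Example 3 (names ours):

```python
import numpy as np

# Each column holds the coefficients (constant, x, x^2, x^3) of one polynomial.
A = np.array([[ 1, 3, 0],
              [ 2, 2, 0],
              [ 2, 1, 2],
              [-1, 1, 2]], dtype=float)

# Independent iff the homogeneous system At = 0 has only the trivial solution,
# i.e. iff rank(A) equals the number of columns.
independent = np.linalg.matrix_rank(A) == A.shape[1]
print(independent)  # True
```
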
EXERCISE 1
Determine if B = {1 + 2x + x² + x³, 1 + x + 3x² + x³, 3 + 5x + 5x² − 3x³, −x − 2x²} is linearly dependent or linearly independent. Is p(x) = 1 + 5x − 5x² + x³ in the span of B?

EXERCISE 2
Consider B = {1, x, x², x³}. Prove that B is linearly independent and show that Span B is the set of all polynomials of degree less than or equal to 3.
PROBLEMS 4.1
Practice Problems

A1 Calculate the following.
(a) (2 − 2x + 3x² + 4x³) + (−3 − 4x + x² + 2x³)
(b) (−3)(1 − 2x + 2x² + x³ + 4x⁴)
(c) (2 + 3x + x² − 2x³) − 3(1 − 2x + 4x² + 5x³)
(d) (2 + 3x + 4x²) − (5 + x − 2x²)
(e) −2(−5 + x + x²) + 3(−1 − x²)
(f) 2(… − …x + 2x²) + …(3 − 2x + x²)
(g) √2(1 + x + x²) + π(−1 + x²)

A2 Let B = {1 + x² + x³, 2 + x + x³, −1 + x + 2x² + x³}. For each of the following polynomials, either express it as a linear combination of the polynomials in B or show that it is not in Span B.
(a) 0
(b) 2 + 4x + 3x² + 4x³
(c) −x + 2x² + x³
(d) −4 − x + 3x²
(e) −1 + 7x + 5x² + 4x³
(f) 2 + x + 5x³

A3 Determine which of the following sets are linearly independent. If a set is linearly dependent, find all linear combinations of the polynomials that equal the zero polynomial.
(a) {1 + 2x + x² − x³, 5x + x², 1 − 3x + 2x² + x³}
(b) {1 + x + x², x, x² + x³, 3 + 2x + 2x² − x³}
(c) {3 + x + x², 4 + x − x², 1 + 2x + x² + 2x³, −1 + 5x² + x³}
(d) {1 + x + x³ + x⁴, 2 + x − x² + x³ + x⁴, x + x² + x³ + x⁴}

A4 Prove that the set B = {1, x − 1, (x − 1)²} is linearly independent and show that Span B is the set of all polynomials of degree less than or equal to 2.
Homework Problems

B1 Calculate the following.
(a) (3 + 4x − 2x² + 5x³) − (1 − 2x + 5x³)
(b) (−2)(2 + x + x² + 3x³ − x⁴)
(c) (−1)(2 + x + 4x² + 2x³) − 2(−1 − 2x − 2x² − x³)
(d) 3(1 + x + x³) + 2(x − x² + x³)
(e) 0(1 + 3x³ − 4x⁴)
(f) …(3 − …x + x²) + …(2 + 4x + x²)
(g) (1 + √2)(1 − √2 + (√2 − 1)x²) − …(−2 + 2x²)

B2 Let B = {1 + x, x + x², 1 − x³}. For each of the following polynomials, either express it as a linear combination of the polynomials in B or show that it is not in Span B.
(a) p(x) = 1
(b) p(x) = 5x + 2x² + 3x³
(c) q(x) = 3 + x² − 4x³
(d) q(x) = 1 + x³

B3 Determine which of the following sets are linearly independent. If a set is linearly dependent, find all linear combinations of the polynomials that equal the zero polynomial.
(a) {x², x³, x² + x³ + x⁴}
(b) {…}
(c) {1 + x + x³, x + x³ + x⁵, 1 − x⁵}
(d) {1 − 2x + x⁴, x − 2x² + x⁵, 1 − 3x + x³}
(e) {1 + 2x + x² − x³, 2 + 3x − x² + x³ + x⁴, 1 + x − 2x² + 2x³ + x⁴, 1 + 2x + x² + x³ − 3x⁴, 4 + 6x − 2x² + 5x⁴}

B4 Prove that the set B = {1, x − 2, (x − 2)², (x − 2)³} is linearly independent and show that Span B is the set of all polynomials of degree less than or equal to 3.
Conceptual Problems

D1 Let B = {p₁(x), …, pₖ(x)} be a set of polynomials of degree at most n.
(a) Prove that if k < n + 1, then there exists a polynomial q(x) of degree at most n such that q(x) ∉ Span B.
(b) Prove that if k > n + 1, then B must be linearly dependent.
4.2 Vector Spaces
We have now seen that addition and scalar multiplication of matrices and polynomials satisfy the same 10 properties as vectors in ℝⁿ. Moreover, we commented in Section 3.2 that addition and scalar multiplication of linear mappings also satisfy these same properties. In fact, many other mathematical objects also have these important properties. Instead of analysing each of these objects separately, it is useful to define one abstract concept that encompasses them all.
Vector Spaces

Definition  Vector Space over ℝ
A vector space over ℝ is a set V together with an operation of addition, usually denoted x + y for any x, y ∈ V, and an operation of scalar multiplication, usually denoted sx for any x ∈ V and s ∈ ℝ, such that for any x, y, z ∈ V and s, t ∈ ℝ we have all of the following properties:
V1 x + y ∈ V (closed under addition)
V2 (x + y) + z = x + (y + z) (addition is associative)
V3 There is an element 0 ∈ V, called the zero vector, such that x + 0 = x = 0 + x (additive identity)
V4 For each x ∈ V there exists an element −x ∈ V such that x + (−x) = 0 (additive inverse)
V5 x + y = y + x (addition is commutative)
V6 tx ∈ V (closed under scalar multiplication)
V7 s(tx) = (st)x (scalar multiplication is associative)
V8 (s + t)x = sx + tx (scalar addition is distributive)
V9 t(x + y) = tx + ty (scalar multiplication is distributive)
V10 1x = x (1 is the scalar multiplicative identity)
Remarks
1. We will call the elements of a vector space vectors. Note that these can be very different objects than vectors in ℝⁿ. Thus, we will always denote, as in the definition above, a vector in a general vector space in boldface (for example, x). However, in vector spaces such as ℝⁿ, matrix spaces, or polynomial spaces, we will often use the notation we introduced earlier.
2. Some people prefer to denote the operations of addition and scalar multiplication in general vector spaces by ⊕ and ⊙, respectively, to stress the fact that these do not need to be "standard" addition and scalar multiplication.
3. Since every vector space contains a zero vector by V3, the empty set cannot be a vector space.
4. When working with multiple vector spaces, we sometimes use a subscript to denote the vector space to which the zero vector belongs. For example, 0_V would represent the zero vector in the vector space V.
5. Vector spaces can be defined using other number systems as the scalars. For example, the definition makes perfect sense if rational numbers are used instead of the real numbers. Vector spaces over the complex numbers are discussed in Chapter 9. Until Chapter 9, "vector space" means "vector space over ℝ."
6. We define vector spaces to have the same structure as ℝⁿ. The study of vector spaces is the study of this common structure. However, it is possible that vectors in individual vector spaces have other aspects not common to all vector spaces, such as matrix multiplication or factorization of polynomials.
EXAMPLE 1
ℝⁿ is a vector space with addition and scalar multiplication defined in the usual way. We call these standard addition and scalar multiplication of vectors in ℝⁿ.

EXAMPLE 2
Pₙ, the set of all polynomials of degree at most n, is a vector space with standard addition and scalar multiplication of polynomials.

EXAMPLE 3
M(m, n), the set of all m × n matrices, is a vector space with standard addition and scalar multiplication of matrices.

EXAMPLE 4
Consider the set of polynomials of degree n. Is this a vector space with standard addition and scalar multiplication? No, since it does not contain the zero polynomial. Note also that the sum of two polynomials of degree n may be of degree lower than n. For example, (1 + xⁿ) + (1 − xⁿ) = 2, which is of degree 0. Thus, the set is also not closed under addition.

EXAMPLE 5
Let F(a, b) denote the set of all functions f : (a, b) → ℝ. If f, g ∈ F(a, b), then the sum is defined by (f + g)(x) = f(x) + g(x), and multiplication by a scalar t ∈ ℝ is defined by (tf)(x) = tf(x). With these definitions, F(a, b) is a vector space.

EXAMPLE 6
Let C(a, b) denote the set of all functions that are continuous on the interval (a, b). Since the sum of continuous functions is continuous and a scalar multiple of a continuous function is continuous, C(a, b) is a vector space. See Figure 4.2.1.
Figure 4.2.1  The sum of continuous functions f and g is a continuous function.

EXAMPLE 7
Let T be the set of all solutions to the system x₁ + 2x₂ = 1, 2x₁ + 3x₂ = 0. Is T a vector space with standard addition and scalar multiplication? No. This set with these operations does not satisfy many of the vector space axioms. For example, V1 does not hold: [−3; 2] is a solution of this system of linear equations, but
[−3; 2] + [−3; 2] = [−6; 4]
is not a solution of the system and hence not in T.
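Example 7's counterexample is easy to confirm numerically. A small NumPy sketch (variable names ours) checks that [−3; 2] solves the system while its double does not:

```python
import numpy as np

A = np.array([[1, 2],
              [2, 3]], dtype=float)
b = np.array([1, 0], dtype=float)

x = np.array([-3, 2], dtype=float)
print(np.allclose(A @ x, b))        # True: x solves the system
print(np.allclose(A @ (x + x), b))  # False: x + x does not, so T is not closed
```

This is the general pattern: solution sets of non-homogeneous systems fail closure under addition.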
EXAMPLE 8
Consider V = {(x, y) | x, y ∈ ℝ} with addition defined by (x₁, y₁) + (x₂, y₂) = (x₁ + x₂, y₁ + y₂) and scalar multiplication defined by k(x, y) = (ky, kx). Is V a vector space? No: since 1(2, 3) = (3, 2) ≠ (2, 3), it does not satisfy V10. Note that it also does not satisfy V7.

EXAMPLE 9
Let S = {[x₁; x₂; x₁ + x₂] | x₁, x₂ ∈ ℝ}. Is S a vector space with standard addition and scalar multiplication in ℝ³? Yes. Let's verify the axioms.
First, observe that axioms V2, V5, V7, V8, V9, and V10 refer only to the operations of addition and scalar multiplication. Thus, we know that these operations must satisfy all of these axioms, as they are the operations of the vector space ℝ³.
Let x = [x₁; x₂; x₁ + x₂] and y = [y₁; y₂; y₁ + y₂] be vectors in S.
V1 We have
x + y = [x₁; x₂; x₁ + x₂] + [y₁; y₂; y₁ + y₂] = [x₁ + y₁; x₂ + y₂; x₁ + y₁ + x₂ + y₂]
Observe that if we let z₁ = x₁ + y₁ and z₂ = x₂ + y₂, then
x + y = [z₁; z₂; z₁ + z₂] ∈ S
since it satisfies the conditions of the set. Therefore, S is closed under addition.
V3 The vector 0 = [0; 0; 0] satisfies x + 0 = x = 0 + x and is in S since it satisfies the conditions of S.
V4 The additive inverse of x = [x₁; x₂; x₁ + x₂] is (−x) = [−x₁; −x₂; −x₁ − x₂], which is in S since it satisfies the conditions of S.
V6 tx = t[x₁; x₂; x₁ + x₂] = [tx₁; tx₂; tx₁ + tx₂] ∈ S. Therefore, S is closed under scalar multiplication.
Thus, S with these operations is a vector space, as it satisfies all 10 axioms.
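The membership condition for S in Example 9 — the third component equals the sum of the first two — can be spot-checked in code. A minimal NumPy sketch (the helper name in_S is ours):

```python
import numpy as np

def in_S(v):
    """Membership test for S: third entry equals the sum of the first two."""
    return bool(np.isclose(v[2], v[0] + v[1]))

x = np.array([1.0, 2.0, 3.0])   # in S
y = np.array([4.0, -1.0, 3.0])  # in S

print(in_S(x + y))        # True: closed under addition (V1)
print(in_S(5 * x))        # True: closed under scalar multiplication (V6)
print(in_S(np.zeros(3)))  # True: contains the zero vector (V3)
```

Of course, a few numerical checks only illustrate the algebraic proof above; they do not replace it.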
EXERCISE 1
Prove that the set S = {[x₁; x₂] | x₁, x₂ ∈ ℤ} is not a vector space using standard addition and scalar multiplication of vectors in ℝ².

EXERCISE 2
Let S = {[a₁ 0; 0 a₂] | a₁, a₂ ∈ ℝ}. Prove that S is a vector space using standard addition and scalar multiplication of matrices. This is the vector space of 2 × 2 diagonal matrices.
Again, one advantage of having the abstract concept of a vector space is that when we prove a result about a general vector space, it instantly applies to all of the examples of vector spaces. To demonstrate this, we give three additional properties that follow easily from the vector space axioms.
Theorem 1
Let V be a vector space. Then
(1) 0x = 0 for all x ∈ V
(2) (−1)x = −x for all x ∈ V
(3) t0 = 0 for all t ∈ ℝ

Proof: We will prove (1). You are asked to prove (2) and (3) in Problem D1. For any x ∈ V we have
0x = 0x + 0                 by V3
   = 0x + [x + (−x)]        by V4
   = 0x + [1x + (−x)]       by V10
   = [0x + 1x] + (−x)       by V2
   = (0 + 1)x + (−x)        by V8
   = 1x + (−x)              operation of numbers in ℝ
   = x + (−x)               by V10
   = 0                      by V4  ■
Thus, if we know that V is a vector space, we can determine the zero vector of V by finding 0x for any x ∈ V. Similarly, we can determine the additive inverse of any vector x ∈ V by computing (−1)x.
EXAMPLE 10
Let V = {(a, b) | a, b ∈ ℝ, b > 0} and define addition by (a, b) ⊕ (c, d) = (ad + bc, bd) and scalar multiplication by t ⊙ (a, b) = (tab^(t−1), b^t). Use Theorem 1 to show that axioms V3 and V4 hold for V with these operations. (Note that we are using ⊕ and ⊙ to represent the operations of addition and scalar multiplication in the vector space to help distinguish them from the operations of addition and multiplication of real numbers.)

Solution: We do not know if V is a vector space. If it is, then by Theorem 1 we must have
0 = 0 ⊙ (a, b) = (0ab^(−1), b^0) = (0, 1)
Observe that (0, 1) ∈ V, and for any (a, b) ∈ V we have
(a, b) ⊕ (0, 1) = (a(1) + b(0), b(1)) = (a, b) = (0(b) + 1(a), 1(b)) = (0, 1) ⊕ (a, b)
So, V satisfies V3 using 0 = (0, 1).
Similarly, if V is a vector space, then by Theorem 1 for any x = (a, b) ∈ V we must have
(−x) = (−1) ⊙ (a, b) = (−ab^(−2), b^(−1))
Observe that for any (a, b) ∈ V we have (−ab^(−2), b^(−1)) ∈ V since b^(−1) > 0 whenever b > 0. Also,
(a, b) ⊕ (−ab^(−2), b^(−1)) = (ab^(−1) + b(−ab^(−2)), bb^(−1)) = (ab^(−1) − ab^(−1), 1) = (0, 1)
So, V satisfies V4 using −(a, b) = (−ab^(−2), b^(−1)). You are asked to complete the proof that V is indeed a vector space in Problem D2.
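The exotic operations of Example 10 can be experimented with directly. The sketch below (plain Python; function names ours) implements ⊕ and ⊙ and confirms that 0 ⊙ x produces the zero vector (0, 1) and that (−1) ⊙ x acts as an additive inverse:

```python
def vadd(u, v):
    """(a, b) ⊕ (c, d) = (ad + bc, bd)"""
    a, b = u
    c, d = v
    return (a * d + b * c, b * d)

def smul(t, u):
    """t ⊙ (a, b) = (t a b^(t-1), b^t)"""
    a, b = u
    return (t * a * b ** (t - 1), b ** t)

zero = smul(0, (7.0, 2.0))
print(zero)              # (0.0, 1.0) -- the zero vector is (0, 1)

x = (3.0, 4.0)
neg_x = smul(-1, x)      # (-a b^(-2), b^(-1))
print(vadd(x, neg_x))    # (0.0, 1.0) -- x ⊕ (-x) gives the zero vector
```

Note that the second components stay positive throughout, so the operations never leave V.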
Subspaces
In Example 9 we showed that S is a vector space that is contained inside the vector space ℝ³. Observe that, by the definition of a subspace of ℝⁿ in Section 1.2, S is actually a subspace of ℝ³. We now generalize these ideas to general vector spaces.

Definition  Subspace
Suppose that V is a vector space. A non-empty subset U of V is a subspace of V if it satisfies the following two properties:
S1 x + y ∈ U for all x, y ∈ U (U is closed under addition)
S2 tx ∈ U for all x ∈ U and t ∈ ℝ (U is closed under scalar multiplication)

Equivalent Definition. If U is a subset of a vector space V and U is also a vector space using the same operations as V, then U is a subspace of V.
To prove that these two definitions are equivalent, we first observe that if U is a vector space, then it satisfies properties S1 and S2, as these are vector space axioms V1 and V6. On the other hand, as in Example 4.2.9, we know the operations must satisfy axioms V2, V5, V7, V8, V9, and V10 since V is a vector space. For the remaining axioms we have
V1 Follows from property S1
V3 Follows from Theorem 1 and property S2, because for any u ∈ U we have 0 = 0u ∈ U
V4 Follows from Theorem 1 and property S2, because for any u ∈ U, the additive inverse of u is (−u) = (−1)u ∈ U
V6 Follows from property S2
Hence, all 10 axioms are satisfied. Therefore, U is also a vector space under the operations of V.
Remarks
1. When proving that a set U is a subspace of a vector space V, it is important not to forget to show that U is actually a subset of V.
2. As with subspaces of ℝⁿ in Section 1.2, we typically show that the subset is non-empty by showing that it contains the zero vector of V.
EXAMPLE 11
In Exercise 2 you proved that S = {[a₁ 0; 0 a₂] | a₁, a₂ ∈ ℝ} is a vector space. Thus, since S is a subset of M(2, 2), it is a subspace of M(2, 2).

EXAMPLE 12
Let U = {p(x) ∈ P₃ | p(3) = 0}. Show that U is a subspace of P₃.
Solution: By definition, U is a subset of P₃. The zero vector in P₃ maps x to 0 for all x; hence it maps 3 to 0. Therefore, the zero vector of P₃ is in U, and hence U is non-empty. Let p(x), q(x) ∈ U and s ∈ ℝ. Then p(3) = 0 and q(3) = 0.
S1 (p + q)(3) = p(3) + q(3) = 0 + 0 = 0, so p(x) + q(x) ∈ U
S2 (sp)(3) = sp(3) = s(0) = 0, so sp(x) ∈ U
Hence, U is a subspace of P₃. Note that this also implies that U is itself a vector space.

EXAMPLE 13
Define the trace of a 2 × 2 matrix by tr([a₁₁ a₁₂; a₂₁ a₂₂]) = a₁₁ + a₂₂. Prove that S = {A ∈ M(2, 2) | tr(A) = 0} is a subspace of M(2, 2).
Solution: By definition, S is a subset of M(2, 2). The zero vector of M(2, 2) is
0₂,₂ = [0 0; 0 0]
Clearly, tr(0₂,₂) = 0, so 0₂,₂ ∈ S.
Let A, B ∈ S and s ∈ ℝ. Then tr(A) = a₁₁ + a₂₂ = 0 and tr(B) = b₁₁ + b₂₂ = 0.
S1 tr(A + B) = tr([a₁₁ + b₁₁, a₁₂ + b₁₂; a₂₁ + b₂₁, a₂₂ + b₂₂]) = a₁₁ + a₂₂ + b₁₁ + b₂₂ = 0 + 0 = 0, so A + B ∈ S
S2 tr(sA) = tr([sa₁₁ sa₁₂; sa₂₁ sa₂₂]) = sa₁₁ + sa₂₂ = s(a₁₁ + a₂₂) = s(0) = 0, so sA ∈ S
Hence, S is a subspace of M(2, 2).
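The closure computations in Example 13 are easy to mirror numerically. A small NumPy sketch (the two sample trace-zero matrices are our own choices):

```python
import numpy as np

def tr(M):
    """Trace of a 2x2 matrix: sum of the diagonal entries."""
    return M[0, 0] + M[1, 1]

A = np.array([[ 2.0,  5.0],
              [ 1.0, -2.0]])  # tr(A) = 0
B = np.array([[-3.0,  0.0],
              [ 7.0,  3.0]])  # tr(B) = 0

print(tr(A + B))  # 0.0 -- closed under addition (S1)
print(tr(5 * A))  # 0.0 -- closed under scalar multiplication (S2)
```

The computation succeeds for any trace-zero pair, which is exactly what the linearity of the trace in the proof guarantees.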
EXAMPLE 14
The vector space ℝ² is not a subspace of ℝ³, since ℝ² is not a subset of ℝ³. That is, if we take any vector x = [x₁; x₂] ∈ ℝ², this is not a vector in ℝ³, since a vector in ℝ³ has three components.

EXERCISE 3
Prove that U = {a + bx + cx² ∈ P₂ | b + c = a} is a subspace of P₂.

EXERCISE 4
Let V be a vector space. Prove that {0} is also a vector space, called the trivial vector space, under the same operations as V by proving it is a subspace of V.
In the previous exercise you proved that {0} is a subspace of any vector space V. Furthermore, by definition, V is a subspace of itself. We now prove that the set of all possible linear combinations of a set of vectors in a vector space V is also a subspace.
Theorem 2
If {v₁, …, vₖ} is a set of vectors in a vector space V and S is the set of all possible linear combinations of these vectors,
S = {t₁v₁ + ⋯ + tₖvₖ | t₁, …, tₖ ∈ ℝ}
then S is a subspace of V.

Proof: By V1 and V6, t₁v₁ + ⋯ + tₖvₖ ∈ V. Hence, S is a subset of V. Also, by taking tᵢ = 0 for 1 ≤ i ≤ k, we get
0v₁ + ⋯ + 0vₖ = 0 + ⋯ + 0 = 0
by V9 and Theorem 1, so 0_V ∈ S.
Let x, y ∈ S. Then for some real numbers sᵢ and tᵢ, 1 ≤ i ≤ k,
x = s₁v₁ + ⋯ + sₖvₖ and y = t₁v₁ + ⋯ + tₖvₖ
It follows that
x + y = (s₁ + t₁)v₁ + ⋯ + (sₖ + tₖ)vₖ
using V8. So, x + y ∈ S since (sᵢ + tᵢ) ∈ ℝ. Hence, S is closed under addition.
Similarly, for all r ∈ ℝ,
rx = r(s₁v₁ + ⋯ + sₖvₖ) = (rs₁)v₁ + ⋯ + (rsₖ)vₖ
by V7. Thus, S is also closed under scalar multiplication. Therefore, S is a subspace of V. ■
To match what we did in Sections 1.2, 3.1, and 4.1, we make the following definition.

Definition  Span, Spanning Set
If S is the subspace of the vector space V consisting of all linear combinations of the vectors v₁, …, vₖ ∈ V, then S is called the subspace spanned by B = {v₁, …, vₖ}, and we say that the set B spans S. The set B is called a spanning set for the subspace S. We denote S by
S = Span{v₁, …, vₖ} = Span B

In Sections 1.2, 3.1, and 4.1, we saw that the concept of spanning is closely related to the concept of linear independence.

Definition  Linearly Independent, Linearly Dependent
If B = {v₁, …, vₖ} is a set of vectors in a vector space V, then B is said to be linearly independent if the only solution to the equation
t₁v₁ + ⋯ + tₖvₖ = 0
is t₁ = ⋯ = tₖ = 0; otherwise, B is said to be linearly dependent.

Remark
The procedure for determining if a vector is in a span or if a set is linearly independent in a general vector space is exactly the same as we saw for ℝⁿ, M(m, n), and Pₙ in Sections 2.3, 3.1, and 4.1. We will see further examples of this in the next section.
PROBLEMS 4.2
Practice Problems

A1 Determine, with proof, which of the following sets are subspaces of the given vector space.
(a) {[x₁; x₂; x₃; x₄] | x₁ + 2x₂ = 0, x₁, x₂, x₃, x₄ ∈ ℝ} of ℝ⁴
(b) {[a₁ a₂; a₃ a₄] | a₁ + 2a₂ = 0, a₁, a₂, a₃, a₄ ∈ ℝ} of M(2, 2)
(c) {a₀ + a₁x + a₂x² + a₃x³ | a₀ + 2a₁ = 0, a₀, a₁, a₂, a₃ ∈ ℝ} of P₃
(d) {[a₁ a₂; a₃ a₄] | a₁, a₂, a₃, a₄ ∈ ℤ} of M(2, 2)
(e) {[a₁ a₂; a₃ a₄] | a₁a₄ − a₂a₃ = 0, a₁, a₂, a₃, a₄ ∈ ℝ} of M(2, 2)
(f) {[…] | a₁ = a₂, a₁, a₂ ∈ ℝ} of M(2, 2)

A2 Determine, with proof, whether the following subsets of M(n, n) are subspaces.
(a) The subset of diagonal matrices
(b) The subset of matrices that are in row echelon form
(c) The subset of symmetric matrices (A matrix A is symmetric if Aᵀ = A or, equivalently, if aᵢⱼ = aⱼᵢ for all i and j.)
(d) The subset of upper triangular matrices

A3 Determine, with proof, whether the following subsets of P₅ are subspaces.
(a) {p(x) ∈ P₅ | p(−x) = p(x) for all x ∈ ℝ} (the subset of even polynomials)
(b) {(1 + x²)p(x) | p(x) ∈ P₃}
(c) {a₀ + a₁x + ⋯ + a₄x⁴ | a₀ = a₄, a₁ = a₃, aᵢ ∈ ℝ}
(d) {p(x) ∈ P₅ | p(0) = 1}
(e) {a₀ + a₁x + a₂x² | a₀, a₁, a₂ ∈ ℝ}

A4 Let F be the vector space of all real-valued functions of a real variable. Determine, with proof, which of the following subsets of F are subspaces.
(a) {f ∈ F | f(3) = 0}
(b) {f ∈ F | f(3) = 1}
(c) {f ∈ F | f(−x) = f(x) for all x ∈ ℝ}
(d) {f ∈ F | f(x) ≥ 0 for all x ∈ ℝ}

A5 Show that any set of vectors in a vector space V that contains the zero vector is linearly dependent.
Homework Problems

B1 Determine, with proof, which of the following sets are subspaces of the given vector space.
(a) {[x₁; x₂; x₃; x₄] | x₁ + x₃ = x₂, x₁, x₂, x₃, x₄ ∈ ℝ} of ℝ⁴
(b) {[a₁ a₂; a₃ a₄] | a₁ = a₃ + a₄, a₁, a₂, a₃, a₄ ∈ ℝ} of M(2, 2)
(c) {a₀ + a₁x + a₂x² + a₃x³ | a₀ + a₂ = a₃, a₀, a₁, a₂, a₃ ∈ ℝ} of P₃
(d) {a₀ + a₁x | a₀, a₁ ∈ ℝ} of P₄
(e) {[…]} of M(2, 2)
(f) {[…] | a₁ − a₃ = 1, a₁, a₂, a₃ ∈ ℝ} of M(2, 2)

B2 Determine, with proof, whether the following subsets of M(3, 3) are subspaces.
(a) {A ∈ M(3, 3) | tr(A) = 0}
(b) The subset of invertible 3 × 3 matrices
(c) The subset of 3 × 3 matrices A such that A[…] = […]
(d) The subset of 3 × 3 matrices A such that A[…] = […]
(e) The subset of skew-symmetric matrices. (A matrix A is skew-symmetric if Aᵀ = −A; that is, aᵢⱼ = −aⱼᵢ for all i and j.)

B3 Determine, with proof, whether the following subsets of P₅ are subspaces.
(a) {p(x) ∈ P₅ | p(−x) = −p(x) for all x ∈ ℝ} (the subset of odd polynomials)
(b) {(p(x))² | p(x) ∈ P₂}
(c) {a₀ + a₁x + ⋯ + a₄x⁴ | a₁ + a₄ = 1, aᵢ ∈ ℝ}
(d) {x·p(x) | p(x) ∈ P₂}
(e) {p(x) | p(1) = 0}

B4 Let F be the vector space of all real-valued functions of a real variable. Determine, with proof, which of the following subsets of F are subspaces.
(a) {f ∈ F | f(3) + f(5) = 0}
(b) {f ∈ F | f(1) + f(2) = 1}
(c) {f ∈ F | |f(x)| ≤ 1}
(d) {f ∈ F | f is increasing on ℝ}
Conceptual Problems

D1 Let V be a vector space.
(a) Prove that −x = (−1)x for every x ∈ V.
(b) Prove that the zero vector in V is unique.
(c) Prove that t0 = 0 for every t ∈ ℝ.

D2 Let V = {(a, b) | a, b ∈ ℝ, b > 0} and define addition by (a, b) ⊕ (c, d) = (ad + bc, bd) and scalar multiplication by t ⊙ (a, b) = (tab^(t−1), b^t) for any t ∈ ℝ. Prove that V is a vector space with these operations.

D3 Let V = {x ∈ ℝ | x > 0} and define addition by x ⊕ y = xy and scalar multiplication by t ⊙ x = x^t for any t ∈ ℝ. Prove that V is a vector space with these operations.

D4 Let 𝕃 denote the set of all linear operators L : ℝⁿ → ℝⁿ with standard addition and scalar multiplication of linear mappings. Prove that 𝕃 is a vector space under these operations.

D5 Suppose that U and V are vector spaces over ℝ. The Cartesian product of U and V is defined to be
U × V = {(u, v) | u ∈ U, v ∈ V}
(a) In U × V define addition and scalar multiplication by
(u₁, v₁) ⊕ (u₂, v₂) = (u₁ + u₂, v₁ + v₂) and t ⊙ (u₁, v₁) = (tu₁, tv₁)
Verify that with these operations U × V is a vector space.
(b) Verify that U × {0_V} is a subspace of U × V.
(c) Suppose instead that scalar multiplication is defined by t ⊙ (u, v) = (tu, v), while addition is defined as in part (a). Is U × V a vector space with these operations?
4.3 Bases and Dimensions
In Chapters 1 and 3, much of the discussion was dependent on the use of the standard basis in ℝⁿ. For example, the dot product of two vectors a and b was defined in terms of the standard components of the vectors. As another example, the standard matrix [L] of a linear mapping was determined by calculating the images of the standard basis vectors. Therefore, it would be useful to define the same concept for any vector space.
Bases
Recall from Section 1.2 that the two important properties of the standard basis in ℝⁿ were that it spanned ℝⁿ and it was linearly independent. It is clear that we should want a basis B for a vector space V to be a spanning set so that every vector in V can be written as a linear combination of the vectors in B. Why would it be important that the set B be linearly independent? The following theorem answers this question.
Theorem 1  Unique Representation Theorem
Let B = {v₁, …, vₙ} be a spanning set for a vector space V. Then every vector in V can be expressed in a unique way as a linear combination of the vectors of B if and only if the set B is linearly independent.

Proof: Let x be any vector in V. Since Span B = V, we have that x can be written as a linear combination of the vectors in B. Assume that there are linear combinations
x = a₁v₁ + ⋯ + aₙvₙ and x = b₁v₁ + ⋯ + bₙvₙ
This gives
a₁v₁ + ⋯ + aₙvₙ = b₁v₁ + ⋯ + bₙvₙ
which implies
(a₁ − b₁)v₁ + ⋯ + (aₙ − bₙ)vₙ = 0
If B is linearly independent, then we must have aᵢ − bᵢ = 0, so aᵢ = bᵢ for 1 ≤ i ≤ n. Hence, x has a unique representation.
On the other hand, if B is linearly dependent, then the equation
t₁v₁ + ⋯ + tₙvₙ = 0
has a solution where at least one of the coefficients is non-zero. But also
0 = 0v₁ + ⋯ + 0vₙ
Hence, 0 can be expressed as a linear combination of the vectors in B in multiple ways. ■
Thus, if B is a linearly independent spanning set for a vector space V, then every vector in V can be written as a unique linear combination of the vectors in B.

Definition  Basis
A set B of vectors in a vector space V is a basis if it is a linearly independent spanning set for V.

Remark
According to this definition, the trivial vector space {0} does not have a basis, since any set of vectors containing the zero vector in a vector space V is linearly dependent. However, we would like every vector space to have a basis, so we define the empty set to be a basis for the trivial vector space.
EXAMPLE 1
The set of vectors {[1 0; 0 0], [0 1; 0 0], [0 0; 1 0], [0 0; 0 1]} in Exercise 3.1.2 is a basis for M(2, 2). It is called the standard basis for M(2, 2). The set of vectors {1, x, x², x³} in Exercise 4.1.2 is a basis for P₃. In particular, the set {1, x, …, xⁿ} is called the standard basis for Pₙ.

EXAMPLE 2
Prove that the set C = {[1; 0; 0], [1; 1; 0], [1; 1; 1]} is a basis for ℝ³.
Solution: We need to show that Span C = ℝ³ and that C is linearly independent. To prove that Span C = ℝ³, we need to show that every vector x ∈ ℝ³ can be written as a linear combination of the vectors in C. Consider
[x₁; x₂; x₃] = t₁[1; 0; 0] + t₂[1; 1; 0] + t₃[1; 1; 1] = [t₁ + t₂ + t₃; t₂ + t₃; t₃]
Row reducing the corresponding coefficient matrix gives
[1 1 1]     [1 0 0]
[0 1 1]  →  [0 1 0]
[0 0 1]     [0 0 1]
Observe that the rank of the coefficient matrix equals the number of rows, so by Theorem 2.2.2, the system is consistent for every x ∈ ℝ³. Hence, Span C = ℝ³. Moreover, since the rank of the coefficient matrix equals the number of columns, there are no parameters in the general solution. Therefore, we have a unique solution when we take x = 0, so C is also linearly independent. Hence, it is a basis for ℝ³.
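The two rank conditions can be checked with NumPy. The sketch below uses C = {[1; 0; 0], [1; 1; 0], [1; 1; 1]} — our reading of the garbled vectors in this copy — placed as columns of a matrix:

```python
import numpy as np

# Columns are the vectors of C.
C = np.array([[1, 1, 1],
              [0, 1, 1],
              [0, 0, 1]], dtype=float)

# Rank 3 = number of rows (spans R^3) = number of columns (independent),
# so C is a basis, and every x has unique coordinates with respect to it.
print(np.linalg.matrix_rank(C))  # 3

x = np.array([2.0, 3.0, 4.0])
print(np.linalg.solve(C, x))     # [-1. -1.  4.] -- the unique coordinates of x
```

Here -1[1;0;0] - 1[1;1;0] + 4[1;1;1] = [2; 3; 4], confirming the unique representation.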
EXAMPLE 3
Is the set B = {[1 2; −1 1], [0 1; 3 1], [2 5; 1 3]} a basis for the subspace Span B of M(2, 2)?
Solution: Since B is a spanning set for Span B, we just need to check if the vectors in B are linearly independent. Consider the equation
[0 0; 0 0] = t₁[1 2; −1 1] + t₂[0 1; 3 1] + t₃[2 5; 1 3] = [t₁ + 2t₃, 2t₁ + t₂ + 5t₃; −t₁ + 3t₂ + t₃, t₁ + t₂ + 3t₃]
Row reducing the coefficient matrix of the corresponding system gives
[ 1 0 2]     [1 0 2]
[ 2 1 5]  →  [0 1 1]
[−1 3 1]     [0 0 0]
[ 1 1 3]     [0 0 0]
Observe that this implies that there are non-trivial solutions to the system. For example, one non-trivial solution is given by t₁ = −2, t₂ = −1, and t₃ = 1, and you can verify that
(−2)[1 2; −1 1] + (−1)[0 1; 3 1] + (1)[2 5; 1 3] = [0 0; 0 0]
Therefore, the given vectors are linearly dependent and do not form a basis for Span B.
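The claimed dependence relation is easy to verify directly. The sketch below uses B = {[1 2; −1 1], [0 1; 3 1], [2 5; 1 3]} — our reading of the example's matrices in this copy:

```python
import numpy as np

v1 = np.array([[1, 2], [-1, 1]], dtype=float)
v2 = np.array([[0, 1], [ 3, 1]], dtype=float)
v3 = np.array([[2, 5], [ 1, 3]], dtype=float)

# The non-trivial solution t = (-2, -1, 1) really does give the zero matrix.
combo = -2 * v1 + (-1) * v2 + 1 * v3
print(combo)  # the 2x2 zero matrix
```

A non-trivial combination equaling the zero vector is exactly what linear dependence means, so no row reduction is needed to confirm the conclusion.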
EXAMPLE 4
Is the set C = {3 + 2x + 2x², 1 + x², 1 + x + x²} a basis for P2?

Solution: Consider the equation

a0 + a1 x + a2 x² = t1(3 + 2x + 2x²) + t2(1 + x²) + t3(1 + x + x²) = (3t1 + t2 + t3) + (2t1 + t3)x + (2t1 + t2 + t3)x²

Row reducing the coefficient matrix of the corresponding system gives

[3 1 1; 2 0 1; 2 1 1] -> [1 0 0; 0 1 0; 0 0 1]

Observe that this implies that the system is consistent and has a unique solution for every a0 + a1 x + a2 x² ∈ P2. Thus, C is a basis for P2.

EXERCISE 1
Prove that the set B = {1 + 2x + x², 1 + x², 1 + x} is a basis for P2.
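For polynomials, checks like Example 4's reduce to a 3x3 determinant: C is a basis for P2 exactly when the coefficient matrix is invertible. A Python sketch (the helper is our own, by cofactor expansion):

```python
def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# Columns are the coefficient vectors (constant, x, x^2) of
# 3 + 2x + 2x^2, 1 + x^2, 1 + x + x^2.
A = [[3, 1, 1],   # constant terms
     [2, 0, 1],   # coefficients of x
     [2, 1, 1]]   # coefficients of x^2
print(det3(A))  # -> -1, nonzero, so C is a basis of P2
```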
Section 4.3 Bases and Dimensions

EXAMPLE 5
Determine a basis for the subspace S = {p(x) ∈ P2 | p(1) = 0} of P2.

Solution: We first find a spanning set for S and then show that it is linearly independent. By the Factor Theorem, if p(1) = 0, then (x - 1) is a factor of p(x). That is, every polynomial p(x) ∈ S can be written in the form

p(x) = (x - 1)(ax + b) = a(x² - x) + b(x - 1)

Thus, we see that S = Span{x² - x, x - 1}. Consider

t1(x² - x) + t2(x - 1) = 0

The only solution is t1 = t2 = 0. Hence, {x² - x, x - 1} is linearly independent. Thus, {x² - x, x - 1} is a linearly independent spanning set of S and hence a basis.
Obtaining a Basis from an Arbitrary Finite Spanning Set
Many times throughout the rest of this book, we will need to determine a basis for a vector space. One standard way of doing this is to first determine a spanning set for the vector space and then to remove vectors from the spanning set until we have a basis. We now outline this procedure.

Suppose that T = {v1, ..., vk} is a spanning set for a non-trivial vector space V. We want to choose a subset of T that is a basis for V.

If T is linearly independent, then T is a basis for V, and we are done. If T is linearly dependent, then t1 v1 + ··· + tk vk = 0 has a solution where at least one of the coefficients is non-zero, say ti ≠ 0. Then, we can solve the equation for vi to get

vi = -(t1/ti) v1 - ··· - (t(i-1)/ti) v(i-1) - (t(i+1)/ti) v(i+1) - ··· - (tk/ti) vk

So, for any x ∈ V we have

x = a1 v1 + ··· + a(i-1) v(i-1) + ai vi + a(i+1) v(i+1) + ··· + ak vk
  = a1 v1 + ··· + a(i-1) v(i-1) + ai [-(1/ti)(t1 v1 + ··· + t(i-1) v(i-1) + t(i+1) v(i+1) + ··· + tk vk)] + a(i+1) v(i+1) + ··· + ak vk

Thus, any x ∈ V can be expressed as a linear combination of the set T\{vi}. This shows that T\{vi} is a spanning set for V. If T\{vi} is linearly independent, it is a basis for V, and the procedure is finished. Otherwise, we repeat the procedure to omit a second vector, say vj, and get T\{vi, vj}, which still spans V. In this fashion, we must eventually get a linearly independent set. (Certainly, if there is only one non-zero vector left, it forms a linearly independent set.) Thus, we obtain a subset of T that is a basis for V.
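The procedure just outlined can be implemented directly: walk through the spanning set and keep a vector only if it is not a linear combination of the vectors already kept. A Python sketch (our own code; exact arithmetic via Fraction avoids rounding issues):

```python
from fractions import Fraction

def shrink_to_basis(vectors):
    """Keep a vector only if it is not a linear combination of those kept so far.
    Implements the procedure above by incremental elimination."""
    basis, rows = [], []          # rows: reduced copies of the kept vectors
    for v in vectors:
        w = [Fraction(x) for x in v]
        for r in rows:            # eliminate w against the kept pivots
            p = next(i for i, x in enumerate(r) if x != 0)
            w = [a - w[p] / r[p] * b for a, b in zip(w, r)]
        if any(x != 0 for x in w):
            basis.append(v)
            rows.append(w)
    return basis

# The spanning set from Example 6; the third vector is v2 - v1 and gets omitted.
T = [[1, 1, -2], [2, -1, 1], [1, -2, 3], [1, 5, 3]]
print(shrink_to_basis(T))  # -> [[1, 1, -2], [2, -1, 1], [1, 5, 3]]
```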
EXAMPLE 6
If T = {[1; 1; -2], [2; -1; 1], [1; -2; 3], [1; 5; 3]}, determine a subset of T that is a basis for Span T.

Solution: Consider

[0; 0; 0] = t1 [1; 1; -2] + t2 [2; -1; 1] + t3 [1; -2; 3] + t4 [1; 5; 3] = [t1 + 2t2 + t3 + t4; t1 - t2 - 2t3 + 5t4; -2t1 + t2 + 3t3 + 3t4]

We row reduce the corresponding coefficient matrix

[1 2 1 1; 1 -1 -2 5; -2 1 3 3] -> [1 0 -1 0; 0 1 1 0; 0 0 0 1]

The general solution is [t1; t2; t3; t4] = s[1; -1; 1; 0], s ∈ R. Taking s = 1, we get t1 = 1, t2 = -1, t3 = 1, t4 = 0, which gives

[1; 1; -2] - [2; -1; 1] + [1; -2; 3] = [0; 0; 0], or [1; -2; 3] = [2; -1; 1] - [1; 1; -2]

Thus, we can omit [1; -2; 3] from T and consider

T\{[1; -2; 3]} = {[1; 1; -2], [2; -1; 1], [1; 5; 3]}

Now consider

[0; 0; 0] = t1 [1; 1; -2] + t2 [2; -1; 1] + t3 [1; 5; 3]

The matrix is the same as above except that the third column is omitted, so the same row operations give

[1 2 1; 1 -1 5; -2 1 3] -> [1 0 0; 0 1 0; 0 0 1]

Hence, the only solution is t1 = t2 = t3 = 0, and we conclude that {[1; 1; -2], [2; -1; 1], [1; 5; 3]} is linearly independent and thus a basis for Span T.
EXERCISE 2
Let B = {1 - x, 2 + 2x + x², x + x², 1 + x²}. Determine a subset of B that is a basis for Span B.
Dimension
We saw in Section 2.3 that every basis of a subspace S of R^n contains the same number of vectors. We now prove that this result holds for general vector spaces. Observe that the proof of this result is essentially identical to that in Section 2.3.

Lemma 2
Suppose that V is a vector space and Span{v1, ..., vn} = V. If {u1, ..., uk} is a linearly independent set in V, then k ≤ n.

Proof: Since each ui, 1 ≤ i ≤ k, is a vector in V, it can be written as a linear combination of the vj's. We get

u1 = a11 v1 + a21 v2 + ··· + an1 vn
u2 = a12 v1 + a22 v2 + ··· + an2 vn
...
uk = a1k v1 + a2k v2 + ··· + ank vn

Consider the equation

0 = t1 u1 + ··· + tk uk
  = t1(a11 v1 + a21 v2 + ··· + an1 vn) + ··· + tk(a1k v1 + a2k v2 + ··· + ank vn)
  = (a11 t1 + ··· + a1k tk) v1 + ··· + (an1 t1 + ··· + ank tk) vn

Since 0 = 0 v1 + ··· + 0 vn, comparing coefficients of the vi, we get a homogeneous system of n equations in the k unknowns t1, ..., tk. If k > n, then this system would have a non-trivial solution, which would imply that {u1, ..., uk} is linearly dependent. But we assumed that {u1, ..., uk} is linearly independent, so we must have k ≤ n.

Theorem 3
If B = {v1, ..., vn} and C = {u1, ..., uk} are both bases of a vector space V, then k = n.

Proof: On one hand, B is a basis for V, so it is linearly independent. Also, C is a basis for V, so Span C = V. Thus, by Lemma 2, we get that n ≤ k. Similarly, C is linearly independent as it is a basis for V, and Span B = V, since B is a basis for V. So Lemma 2 gives n ≥ k. Therefore, n = k, as required.
As in Section 2.3, this theorem justifies the following definition of the dimension of a vector space.

Definition (Dimension)
If a vector space V has a basis with n vectors, then we say that the dimension of V is n and write dim V = n. The dimension of the trivial vector space is defined to be 0. If a vector space V does not have a basis with finitely many elements, then V is called infinite-dimensional.

Remark
Properties of infinite-dimensional spaces are beyond the scope of this book.

EXAMPLE 7
(a) R^n is n-dimensional because the standard basis contains n vectors.
(b) The vector space M(m, n) is (m × n)-dimensional since the standard basis has m × n vectors.
(c) The vector space P_n is (n + 1)-dimensional as it has the standard basis {1, x, x², ..., x^n}.
(d) The vector space C(a, b) is infinite-dimensional as it contains all polynomials (along with many other types of functions). Most function spaces are infinite-dimensional.
EXAMPLE 8
Let S be the span of four given vectors v1, v2, v3, v4 in R³. Show that dim S = 2.

Solution: We row reduce the matrix whose columns are the four vectors; its reduced form has leading ones in the first two columns only. Observe that this implies that the third and fourth vectors can be written as linear combinations of the first two vectors. Thus, S = Span{v1, v2}. Moreover, B = {v1, v2} is clearly linearly independent, since neither vector is a scalar multiple of the other; hence B is a basis for S. Thus, dim S = 2.
EXAMPLE 9
Let S = {[a b; c d] ∈ M(2, 2) | a + b = d}. Determine the dimension of S.

Solution: Since d = a + b, observe that every matrix in S has the form

[a b; c a+b] = a[1 0; 0 1] + b[0 1; 0 1] + c[0 0; 1 0]

Thus, S = Span{[1 0; 0 1], [0 1; 0 1], [0 0; 1 0]}. It is easy to show that this spanning set for S is also linearly independent and hence is a basis for S. Thus, dim S = 3.
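The independence claim in Example 9 can be spelled out concretely. A small Python sketch (our own flattening convention: a 2x2 matrix becomes the 4-tuple (a, b, c, d)):

```python
# The spanning matrices of S = {[a b; c d] : a + b = d}, flattened to R^4
# in the order (a, b, c, d):
M1 = [1, 0, 0, 1]   # [1 0; 0 1]
M2 = [0, 1, 0, 1]   # [0 1; 0 1]
M3 = [0, 0, 1, 0]   # [0 0; 1 0]

# a*M1 + b*M2 + c*M3 = 0 forces a = b = c = 0 (look at the first three
# entries), so the set is linearly independent and dim S = 3.
def combo(a, b, c):
    return [a * x + b * y + c * z for x, y, z in zip(M1, M2, M3)]

print(combo(0, 0, 0))   # -> [0, 0, 0, 0]
print(combo(1, -1, 2))  # -> [1, -1, 2, 0], a nonzero matrix
```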
EXERCISE 3
Find the dimension of S = {a + bx + cx² + dx³ ∈ P3 | a + b + c + d = 0}.
Extending a Linearly Independent Subset to a Basis
Sometimes a linearly independent subset T = {v1, ..., vk} is given in an n-dimensional vector space V, and it is necessary to include these vectors in a basis for V. If Span T ≠ V, then there exists some vector w(k+1) that is in V but not in Span T. Now consider

t1 v1 + ··· + tk vk + t(k+1) w(k+1) = 0    (4.1)

If t(k+1) ≠ 0, then we have

w(k+1) = -(t1/t(k+1)) v1 - ··· - (tk/t(k+1)) vk

and so w(k+1) can be written as a linear combination of the vectors in T, which cannot be, since w(k+1) is not in Span T.

Chapter 7 Orthonormal Bases

... looks just like the standard inner product in R².
This argument generalizes in a straightforward way to R^n; see Problem 8.2.D6.

D2 Suppose that {v1, v2, v3} is a basis for an inner product space V with inner product ⟨ , ⟩. Define a matrix G by (G)ij = ⟨vi, vj⟩.
(a) Prove that G is symmetric (Gᵀ = G).
(b) Show that if [x]_B = [x1; x2; x3] and [y]_B = [y1; y2; y3], then ⟨x, y⟩ = [x]_Bᵀ G [y]_B.
(c) Determine the matrix G of the inner product ⟨p, q⟩ = p(0)q(0) + p(1)q(1) + p(2)q(2) for P2 with respect to the basis {1, x, x²}.
7.5 Fourier Series

The Inner Product ∫_a^b f(x)g(x) dx
Let C[a, b] be the space of functions f : R → R that are continuous on the interval [a, b]. Then, for any f, g ∈ C[a, b], we have that the product fg is also continuous on [a, b] and hence integrable on [a, b]. Therefore, it makes sense to define an inner product as follows.

The inner product ⟨ , ⟩ is defined on C[a, b] by

⟨f, g⟩ = ∫_a^b f(x)g(x) dx

The three properties of an inner product are satisfied because

(1) ⟨f, f⟩ = ∫_a^b f(x)f(x) dx ≥ 0 for all f ∈ C[a, b], and ⟨f, f⟩ = ∫_a^b f(x)f(x) dx = 0 if and only if f(x) = 0 for all x ∈ [a, b].
(2) ⟨f, g⟩ = ∫_a^b f(x)g(x) dx = ∫_a^b g(x)f(x) dx = ⟨g, f⟩
(3) ⟨f, sg + th⟩ = ∫_a^b f(x)(sg(x) + th(x)) dx = s ∫_a^b f(x)g(x) dx + t ∫_a^b f(x)h(x) dx = s⟨f, g⟩ + t⟨f, h⟩ for any s, t ∈ R

Since an integral is the limit of sums, this inner product, defined as the integral of the product of the values of f and g at each x, is a fairly natural generalization of the dot product in R^n, defined as a sum of the products of the i-th components of x and y for each i. One interesting consequence is that the norm of a function f with respect to this inner product is

‖f‖ = ( ∫_a^b f²(x) dx )^(1/2)

Intuitively, this is quite satisfactory as a measure of how far the function is from the zero function. One of the most interesting and important applications of this inner product involves Fourier series.
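The integral inner product and norm can be approximated numerically; a short Python sketch (the trapezoid rule, the step count, and the helper names are our own choices, not the book's):

```python
import math

def inner(f, g, a, b, n=2000):
    """<f, g> = integral of f(x)g(x) over [a, b], approximated by the
    composite trapezoid rule with n subintervals."""
    h = (b - a) / n
    xs = [a + i * h for i in range(n + 1)]
    ys = [f(x) * g(x) for x in xs]
    return h * (ys[0] / 2 + sum(ys[1:-1]) + ys[-1] / 2)

def norm(f, a, b):
    return math.sqrt(inner(f, f, a, b))

# ||x|| on [0, 1] should be (integral of x^2)^(1/2) = 1/sqrt(3)
print(abs(norm(lambda x: x, 0, 1) - 1 / math.sqrt(3)) < 1e-6)  # -> True
```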
Fourier Series
Let CP2π denote the space of continuous real-valued functions of a real variable that are periodic with period 2π. Such functions satisfy f(x + 2π) = f(x) for all x. Examples of such functions are f(x) = c for any constant c, cos x, sin x, cos 2x, sin 3x, etc. (Note that the function cos 2x is periodic with period 2π because cos(2(x + 2π)) = cos 2x. However, its "fundamental (smallest) period" is π.) In some electrical engineering applications, it is of interest to consider a signal described by functions such as the function

f(x) = -π - x if -π ≤ x ≤ -π/2,  f(x) = x if -π/2 < x ≤ π/2,  f(x) = π - x if π/2 < x ≤ π

This function is shown in Figure 7.5.5.

[Figure 7.5.5: A continuous periodic function.]

In the early nineteenth century, while studying the problem of the conduction of heat, Fourier had the brilliant idea of trying to represent an arbitrary function in CP2π as a linear combination of the set of functions

{1, cos x, sin x, cos 2x, sin 2x, ..., cos nx, sin nx, ...}

This idea developed into Fourier analysis, which is now one of the essential tools in quantum physics, communication engineering, and many other areas.

We formulate the questions and ideas as follows. (The proofs of the statements are discussed below.)

(i) For any n, the set of functions {1, cos x, sin x, cos 2x, sin 2x, ..., cos nx, sin nx} is an orthogonal set with respect to the inner product

⟨f, g⟩ = ∫_{-π}^{π} f(x)g(x) dx

The set is therefore an orthogonal basis for the subspace of CP2π that it spans. This subspace will be denoted CP2π,n.
(ii) Given an arbitrary function f in CP2π, how well can it be approximated by a function in CP2π,n? We expect from our experience with distance and subspaces that the closest approximation to f in CP2π,n is proj_{CP2π,n} f. The coefficients for Fourier's representation of f by a linear combination of {1, cos x, sin x, ..., cos nx, sin nx, ...}, called Fourier coefficients, are found by considering this projection.

(iii) We hope that the approximation improves as n gets larger. Since the distance from f to the n-th approximation proj_{CP2π,n} f is ‖perp_{CP2π,n} f‖, to test whether the approximation improves, we must examine whether ‖perp_{CP2π,n} f‖ → 0 as n → ∞.

Let us consider these statements in more detail.

(i) The orthogonality of constants, sines, and cosines with respect to the inner product ⟨f, g⟩ = ∫_{-π}^{π} f(x)g(x) dx

These results follow from standard trigonometric integrals and trigonometric identities:

∫_{-π}^{π} sin nx dx = [-(1/n) cos nx]_{-π}^{π} = 0
∫_{-π}^{π} cos nx dx = [(1/n) sin nx]_{-π}^{π} = 0
∫_{-π}^{π} cos mx sin nx dx = ∫_{-π}^{π} (1/2)(sin(m+n)x - sin(m-n)x) dx = 0

and for m ≠ n,

∫_{-π}^{π} cos mx cos nx dx = ∫_{-π}^{π} (1/2)(cos(m+n)x + cos(m-n)x) dx = 0
∫_{-π}^{π} sin mx sin nx dx = ∫_{-π}^{π} (1/2)(cos(m-n)x - cos(m+n)x) dx = 0

Hence, the set {1, cos x, sin x, ..., cos nx, sin nx} is orthogonal. To use this as a basis for projection arguments, it is necessary to calculate ‖1‖², ‖cos mx‖², and ‖sin mx‖²:

‖1‖² = ∫_{-π}^{π} 1 dx = 2π
‖cos mx‖² = ∫_{-π}^{π} cos² mx dx = ∫_{-π}^{π} (1/2)(1 + cos 2mx) dx = π
‖sin mx‖² = ∫_{-π}^{π} sin² mx dx = ∫_{-π}^{π} (1/2)(1 - cos 2mx) dx = π
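These orthogonality relations are easy to confirm numerically. A Python sketch (the function names are ours; the trapezoid rule is essentially exact for smooth periodic integrands over a full period):

```python
import math

def integrate(f, a, b, n=4000):
    """Composite trapezoid rule; accurate enough here to see the zeros."""
    h = (b - a) / n
    s = (f(a) + f(b)) / 2 + sum(f(a + i * h) for i in range(1, n))
    return h * s

pi = math.pi
# <cos 2x, sin 3x> and <cos x, cos 2x> vanish; ||cos 2x||^2 equals pi
o1 = integrate(lambda x: math.cos(2 * x) * math.sin(3 * x), -pi, pi)
o2 = integrate(lambda x: math.cos(x) * math.cos(2 * x), -pi, pi)
n1 = integrate(lambda x: math.cos(2 * x) ** 2, -pi, pi)
print(o1, o2, n1)  # first two are ~0, the last is ~pi
```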
(ii) The Fourier coefficients of f as coordinates of a projection with respect to the orthogonal basis for CP2π,n

The procedure for finding the closest approximation proj_{CP2π,n} f in CP2π,n to an arbitrary function f in CP2π is parallel to the procedure in Sections 7.2 and 7.4. That is, we use the projection formula, given an orthogonal basis {v1, ..., vk} for a subspace S:

proj_S f = (⟨f, v1⟩/‖v1‖²) v1 + ··· + (⟨f, vk⟩/‖vk‖²) vk

There is a standard way to label the coefficients of this linear combination:

proj_{CP2π,n} f = (a0/2)·1 + a1 cos x + a2 cos 2x + ··· + an cos nx + b1 sin x + b2 sin 2x + ··· + bn sin nx

The factor 1/2 in the coefficient of 1 appears here because ‖1‖² is equal to 2π, while the other basis vectors have length squared equal to π. Thus, we have

a0 = (1/π) ∫_{-π}^{π} f(x) dx
a_m = ⟨f, cos mx⟩/‖cos mx‖² = (1/π) ∫_{-π}^{π} f(x) cos mx dx
b_m = ⟨f, sin mx⟩/‖sin mx‖² = (1/π) ∫_{-π}^{π} f(x) sin mx dx

(iii) Is proj_{CP2π,n} f equal to f in the limit as n → ∞?

As n → ∞, the sum becomes an infinite series called the Fourier series for f. The question being asked is a question about the convergence of series, and in fact about series of functions. Such questions are raised in calculus (or analysis) and are beyond the scope of this book. (The short answer is "yes, the series converges to f provided that f is continuous." The problem becomes more complicated if f is allowed to be piecewise continuous.) Questions about convergence are important in physical and engineering applications.
EXAMPLE 1
Determine proj_{CP2π,3} f for the function f(x) defined by f(x) = |x| if -π ≤ x ≤ π and f(x + 2π) = f(x) for all x.

Solution: We have

a0 = (1/π) ∫_{-π}^{π} |x| dx = π
a1 = (1/π) ∫_{-π}^{π} |x| cos x dx = -4/π
a2 = (1/π) ∫_{-π}^{π} |x| cos 2x dx = 0
a3 = (1/π) ∫_{-π}^{π} |x| cos 3x dx = -4/(9π)
b1 = (1/π) ∫_{-π}^{π} |x| sin x dx = 0
b2 = (1/π) ∫_{-π}^{π} |x| sin 2x dx = 0
b3 = (1/π) ∫_{-π}^{π} |x| sin 3x dx = 0

Hence, proj_{CP2π,3} f = π/2 - (4/π) cos x - (4/(9π)) cos 3x. The results are shown in Figure 7.5.6.
[Figure 7.5.6: Graphs of proj_{CP2π,1} f and proj_{CP2π,3} f compared to the graph of f(x).]

EXAMPLE 2
Determine proj_{CP2π,3} f for the function f(x) defined by

f(x) = -π - x if -π ≤ x ≤ -π/2,  f(x) = x if -π/2 < x ≤ π/2,  f(x) = π - x if π/2 < x ≤ π
Solution: We have

a0 = (1/π) ∫_{-π}^{π} f dx = 0
a1 = (1/π) ∫_{-π}^{π} f cos x dx = 0
a2 = (1/π) ∫_{-π}^{π} f cos 2x dx = 0
a3 = (1/π) ∫_{-π}^{π} f cos 3x dx = 0
b1 = (1/π) ∫_{-π}^{π} f sin x dx = 4/π
b2 = (1/π) ∫_{-π}^{π} f sin 2x dx = 0
b3 = (1/π) ∫_{-π}^{π} f sin 3x dx = -4/(9π)

Hence, proj_{CP2π,3} f = (4/π) sin x - (4/(9π)) sin 3x. The results are shown in Figure 7.5.7.
[Figure 7.5.7: Graphs of proj_{CP2π,1} f and proj_{CP2π,3} f compared to the graph of f(x).]

Chapter Review
PROBLEMS 7.5
Computer Problems
C1 Use a computer to calculate proj_{CP2π,n} f for n = 3, 7, and 11 for each of the following functions. Graph the function f and each of the projections on the same plot.
(a) f(x) = x², -π ≤ x ≤ π
(b) f(x) = e^x, -π ≤ x ≤ π
(c) f(x) = 0 if -π ≤ x < 0, and f(x) = 1 - x/π if 0 ≤ x ≤ π
Suggestions for Student Review
1 What is meant by an orthogonal set of vectors in R^n? What is the difference between an orthogonal basis and an orthonormal basis? (Section 7.1)
2 Why is it easier to determine coordinates with respect to an orthonormal basis than with respect to an arbitrary basis? What are some special features of the change of coordinates matrix from an orthonormal basis to the standard basis? What is an orthogonal matrix? (Section 7.1)
3 Does every subspace of R^n have an orthonormal basis? What about the zero subspace? How do you find an orthonormal basis? Describe the Gram-Schmidt Procedure. (Section 7.2)
4 What are the essential properties of a projection onto a subspace of R^n? How do you calculate a projection onto a subspace? (Section 7.2)
5 Outline how to use the ideas of orthogonality to find the best-fitting line for a given set of data points {(t_i, y_i) | i = 1, ..., n}. (Section 7.3)
6 What are the essential properties of an inner product? Give an example of an inner product on P2. Give an example of an inner product on M(2, 3). (Section 7.4)
Chapter Quiz
E1 Determine whether the following sets are orthogonal, and which are orthonormal. Show how you decide.
(a) ... (b) ... (c) ...
E2 Let S be the subspace of R^4 spanned by the given orthonormal set B. Given that x is a vector in S, use the orthonormality of B to determine the coordinates of x with respect to B.
E3 (a) Prove that if P is an orthogonal matrix, then det P = ±1.
(b) Prove that if P and R are n × n orthogonal matrices, then so is PR.
E4 Let S be the subspace of R^4 defined by S = Span{...}.
(a) Apply the Gram-Schmidt Procedure to the given spanning set to produce an orthonormal basis for S.
(b) Determine the point in S closest to x = ...
E5 Determine whether each of the following functions ⟨ , ⟩ defines an inner product on M(2, 2). Explain how you decide in each case.
(a) ⟨A, B⟩ = det(AB)
(b) ⟨A, B⟩ = a11 b11 + 2 a12 b12 + 2 a21 b21 + a22 b22
Further Problems
F1 (Isometries of R³)
(a) A linear mapping L is an isometry of R³ if ‖L(x)‖ = ‖x‖ for every x ∈ R³. Prove that an isometry preserves dot products and angles as well as lengths.
(b) Show that L is an isometry if and only if the standard matrix of L is orthogonal. (Hint: See Problem 3.F5 and Problem 7.1.D3.)
(c) Explain why an isometry of R³ must have one or three real characteristic roots, counting multiplicity. Based on Problem 7.1.D3(b), these must be ±1.
(d) Let A be the standard matrix of L. Suppose that 1 is an eigenvalue of A with eigenvector u. Let v and w be vectors such that {u, v, w} is an orthonormal basis for R³ and let P = [u v w]. Show that

Pᵀ A P = [1, O12; O21, A*]

where the right-hand side is a partitioned matrix, with Oij being the i × j zero matrix and A* being a 2 × 2 orthogonal matrix. Moreover, show that the characteristic roots of A are 1 and the characteristic roots of A*. Note that an analogous form can be obtained for Pᵀ A P in the case where one eigenvalue is -1.
(e) Use Problem 3.F5 to analyze the A* of part (d) and explain why every isometry of R³ is the identity mapping, a reflection, a composition of reflections, a rotation, or a composition of a reflection and a rotation.

F2 A linear mapping L : R^n → R^n is called an involution if L ∘ L = Id. In terms of its standard matrix, this means that A² = I. Prove that any two of the following imply the third.
(a) A is the matrix of an involution.
(b) A is symmetric.
(c) A is an isometry.

F3 The sum S + T of subspaces of a finite-dimensional vector space V is defined in the Chapter 4 Further Problems. Prove that (S + T)⊥ = S⊥ ∩ T⊥.

F4 A problem of finding a sequence of approximations to some vector (or function) v in a possibly infinite-dimensional inner product space V can often be described by requiring the i-th approximation to be the vector in some finite-dimensional subspace S_i of V closest to v, where the subspaces are required to satisfy

S1 ⊂ S2 ⊂ ··· ⊂ S_i ⊂ ··· ⊂ V

The i-th approximation is then proj_{S_i} v. Prove that the approximations improve as i increases in the sense that

‖v - proj_{S_{i+1}} v‖ ≤ ‖v - proj_{S_i} v‖

F5 QR-factorization. Suppose that A is an invertible n × n matrix. Prove that A can be written as the product of an orthogonal matrix Q and an upper triangular matrix R: A = QR. (Hint: Apply the Gram-Schmidt Procedure to the columns of A, starting at the first column.)
Note that this QR-factorization is important in a numerical procedure for determining eigenvalues of symmetric matrices.
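Following the hint in F5, the construction of Q and R can be sketched in Python (classical Gram-Schmidt; the helper names are ours, and no pivoting or stability refinements are attempted):

```python
import math

def qr(A):
    """QR by classical Gram-Schmidt on the columns of a square invertible A."""
    n = len(A)
    cols = [[A[i][j] for i in range(n)] for j in range(n)]
    Q = []                          # orthonormal columns, built one at a time
    R = [[0.0] * n for _ in range(n)]
    for j, v in enumerate(cols):
        w = v[:]
        for i, q in enumerate(Q):
            R[i][j] = sum(a * b for a, b in zip(q, v))  # component along q_i
            w = [a - R[i][j] * b for a, b in zip(w, q)]
        R[j][j] = math.sqrt(sum(a * a for a in w))
        Q.append([a / R[j][j] for a in w])
    Qm = [[Q[j][i] for j in range(n)] for i in range(n)]  # columns -> matrix
    return Qm, R

Qm, R = qr([[1.0, 1.0], [0.0, 1.0]])
# Reconstruct A: entry (i, k) = sum_j Qm[i][j] * R[j][k]
A2 = [[sum(Qm[i][j] * R[j][k] for j in range(2)) for k in range(2)]
      for i in range(2)]
print(A2)  # recovers [[1.0, 1.0], [0.0, 1.0]]
```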
CHAPTER 8
Symmetric Matrices and Quadratic Forms

CHAPTER OUTLINE
8.1 Diagonalization of Symmetric Matrices
8.2 Quadratic Forms
8.3 Graphs of Quadratic Forms
8.4 Applications of Quadratic Forms

Symmetric matrices and quadratic forms arise naturally in many physical applications. For example, the strain matrix describing the deformation of a solid and the inertia tensor of a rotating body are symmetric (Section 8.4). We have also seen that the matrix of a projection is symmetric since a real inner product is symmetric. We now use our work with diagonalization and inner products to explore the theory of symmetric matrices and quadratic forms.

8.1 Diagonalization of Symmetric Matrices

Definition (Symmetric Matrix)
A matrix A is symmetric if Aᵀ = A or, equivalently, if aij = aji for all i and j.

In Chapter 6, we saw that diagonalization of a square matrix may not be possible if some of the roots of its characteristic polynomial are complex or if the geometric multiplicity of an eigenvalue is less than the algebraic multiplicity of that eigenvalue. As we will see later in this section, a symmetric matrix can always be diagonalized: all the roots of its characteristic polynomial are real, and we can always find a basis of eigenvectors. Before considering why this works, we give three examples.
EXAMPLE 1
Determine the eigenvalues and corresponding eigenvectors of the symmetric matrix A = [0 1; 1 -2]. What is the diagonal matrix corresponding to A, and what is the matrix that diagonalizes A?

Solution: We have

C(λ) = det(A - λI) = det[0-λ, 1; 1, -2-λ] = λ² + 2λ - 1

Using the quadratic formula, we find that the roots of the characteristic polynomial are λ1 = -1 + √2 and λ2 = -1 - √2. Thus, the resulting diagonal matrix is

D = [-1+√2, 0; 0, -1-√2]
For λ1 = -1 + √2, we have

A - λ1 I = [1-√2, 1; 1, -1-√2] -> [1, -1-√2; 0, 0]

Thus, a basis for the eigenspace is {[1+√2; 1]}.
Similarly, for λ2 = -1 - √2, we have

A - λ2 I = [1+√2, 1; 1, -1+√2] -> [1, -1+√2; 0, 0]

Thus, a basis for the eigenspace is {[1-√2; 1]}.
Hence, A is diagonalized by P = [1+√2, 1-√2; 1, 1].

Observe in Example 1 that the columns of P are orthogonal. That is,

[1+√2; 1] · [1-√2; 1] = (1+√2)(1-√2) + 1(1) = 1 - 2 + 1 = 0

Hence, if we normalized the columns of P, we would find that A is diagonalized by an orthogonal matrix. (It is important to remember that an orthogonal matrix has orthonormal columns.)
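The eigenpairs and the orthogonality observed above can be verified directly; a small Python sketch (our own check, using Example 1's matrix):

```python
import math

# Example 1's matrix and its eigenpairs
A = [[0.0, 1.0], [1.0, -2.0]]
r2 = math.sqrt(2)
pairs = [(-1 + r2, [1 + r2, 1.0]),   # (eigenvalue, eigenvector)
         (-1 - r2, [1 - r2, 1.0])]

def matvec(M, v):
    return [sum(a * b for a, b in zip(row, v)) for row in M]

for lam, v in pairs:
    Av = matvec(A, v)
    print(all(abs(a - lam * b) < 1e-12 for a, b in zip(Av, v)))  # -> True

# The eigenvectors are orthogonal, as the text observes:
dot = sum(a * b for a, b in zip(pairs[0][1], pairs[1][1]))
print(abs(dot) < 1e-12)  # -> True
```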
EXAMPLE 2
Diagonalize the symmetric matrix A = [4 0 0; 0 1 -2; 0 -2 1]. Show that A can be diagonalized by an orthogonal matrix.

Solution: We have

C(λ) = det[4-λ, 0, 0; 0, 1-λ, -2; 0, -2, 1-λ] = -(λ - 4)(λ - 3)(λ + 1)

The eigenvalues are λ1 = 4, λ2 = 3, and λ3 = -1, each with algebraic multiplicity 1.
For λ1 = 4, we get

A - λ1 I = [0 0 0; 0 -3 -2; 0 -2 -3] -> [0 1 0; 0 0 1; 0 0 0]

Thus, a basis for the eigenspace is {[1; 0; 0]}.
For λ2 = 3, we get

A - λ2 I = [1 0 0; 0 -2 -2; 0 -2 -2] -> [1 0 0; 0 1 1; 0 0 0]

Thus, a basis for the eigenspace is {[0; -1; 1]}.
For λ3 = -1, we get

A - λ3 I = [5 0 0; 0 2 -2; 0 -2 2] -> [1 0 0; 0 1 -1; 0 0 0]

Thus, a basis for the eigenspace is {[0; 1; 1]}.
Observe that the vectors v1 = [1; 0; 0], v2 = [0; -1; 1], and v3 = [0; 1; 1] form an orthogonal set. Hence, if we normalize them, we find that A is diagonalized by the orthogonal matrix

P = [1, 0, 0; 0, -1/√2, 1/√2; 0, 1/√2, 1/√2]

to D = [4 0 0; 0 3 0; 0 0 -1].

EXAMPLE 3
Diagonalize the symmetric matrix A = [5 -2 -4; -2 8 -2; -4 -2 5]. Show that A can be diagonalized by an orthogonal matrix.

Solution: We have

C(λ) = det[5-λ, -2, -4; -2, 8-λ, -2; -4, -2, 5-λ] = -λ(λ - 9)²

The eigenvalues are λ1 = 9 with algebraic multiplicity 2 and λ2 = 0 with algebraic multiplicity 1.
For λ1 = 9, we get

A - λ1 I = [-4 -2 -4; -2 -1 -2; -4 -2 -4] -> [2 1 2; 0 0 0; 0 0 0]

Thus, a basis for the eigenspace of λ1 is a pair {w1, w2} of solutions of 2x1 + x2 + 2x3 = 0, for example w1 = [-1; 2; 0] and w2 = [-1; 0; 1]. However, observe that these vectors are not orthogonal to each other. Since we require an orthonormal basis of eigenvectors of A, we need to find an orthonormal basis for the eigenspace of λ1. We can do this by applying the Gram-Schmidt Procedure to this set. Pick v1 = w1. Then S1 = Span{v1} and

v2 = perp_{S1} w2 = w2 - (⟨w2, v1⟩/‖v1‖²) v1 = [-1; 0; 1] - (1/5)[-1; 2; 0] = [-4/5; -2/5; 1]

Then, {v1, v2} is an orthogonal basis for the eigenspace of λ1.
For λ2 = 0, we get

A - λ2 I = [5 -2 -4; -2 8 -2; -4 -2 5] -> [1 0 -1; 0 1 -1/2; 0 0 0]

Thus, a basis for the eigenspace of λ2 is {[2; 1; 2]}.

[Table: graphs of λ1 x1² + λ2 x2² = k for the cases λ1 > 0, λ2 > 0; λ1 > 0, λ2 = 0; λ1 > 0, λ2 < 0; λ1 = 0, λ2 < 0; λ1 < 0, λ2 < 0, with k > 0, k = 0, k < 0.]

⟨u, 2iv⟩ = -2i⟨u, v⟩ = -2i(-3 + 7i) = 14 + 6i

2. We have ⟨z, z⟩ = z1 z̄1 + ··· + zn z̄n = |z1|² + ··· + |zn|² ≥ 0 for all z, and ⟨z, z⟩ = 0 if and only if z = 0. Also,

⟨v + z, w⟩ = (v + z)ᵀ w̄ = (vᵀ + zᵀ) w̄ = vᵀ w̄ + zᵀ w̄ = ⟨v, w⟩ + ⟨z, w⟩
⟨αz, w⟩ = (αz)ᵀ w̄ = α zᵀ w̄ = α⟨z, w⟩
⟨z, αw⟩ = zᵀ (ᾱ w̄) = ᾱ zᵀ w̄ = ᾱ⟨z, w⟩

3. The columns of A are not orthogonal under the standard complex inner product, so A is not unitary. We have B*B = I, so B is unitary.
APPENDIX B
Answers to Practice Problems and Chapter Quizzes CHAPTERl
Section 1.1 A Problems Al
X2
(a)
(b)
[;]
3[-!]
(c)
(d)
-2
Xi
A2 ( a) A3 (
[� ]
{[ �1 --13�1
A4 (a)
(b)
(b
[=�] [-�] 4 2 [ r-10;[-1 ! 1 ;�1 0 -22 (c)
10
(c) u
2[�]-2[-�] [ �]
X2
-
[_�]
Xi
[3[62] 3 [ 4 �vl] 74//3] [ -rrl c{ [-1/2�]1 13/3 [=�] 9/2
(d)
(c)
(b)
Xi
-[�]
Xi X2
[�]
[72]
(e)
(f)
(e)
=
-7/2
(f)
(d) u �
-Y2 -Y2 + 7f
466
Appendix B
Answers to Practice Problems and Chapter Quizzes
AS (a)
[�l
(b)
A6
[ :;]
(d)
-10
-1/2
- /3 -14/3
Ul-m [ il m m [tl ni-m [=�] [�]-[_!] nJ m-ni Ul [=il nl [t] [=!] [j]
pQ = oQ-oP =
=
=
PR = oR -oh
=
-
PS = oS -oh
=
QR=
oR -oQ =
=
sR=
o R -oS =
=
pQ +QR=
[:l
+
=
=
+
=
PS + sR
AS Note that alternative correct answers are possible.
[ �] [ � J
.
tE�
(b)x=
+
·
IER
(d}X=
(a)x= -
+t
[j] Hl [�;�] [-;;�], -
(c}X=
(e)X=
+t
1
A9 (a) XI =
-1
-2/3
+t, X2 =
(b) X1 = l + 3t, X2 = l AlO (a) Three points
(b) Since
-2PQ =
[�J [ =�J +r
.
nl [H +l
tE�
IER
t E�
-1 + 3t,
- 2t,
[= �] [�]. UJ [ ;J
t E�; X =
t E�; x=
+t
+t
_
.
t E�
t E�
P, Q, and R are collinear if PQ = tPR for some t E R = PR, the points P, Q, and R must be collinear.
[-�]
(c) The points S, T, and U are not collinear because SU* tST for any t.
Appendix B
Answers to Practice Problems and Chapter Quizzes
Section 1.2 A Problems 0 0 (c) 1 1 3
10 -7 (b) 10
5 9 Al (a) 0 1
-5
A2 (a) The set is not a subspace of IR.3. (b) The set is a subspace of IR.3.
(c) The set is a subspace of IR.2•
(d) The set is not a subspace of IR.3. (e) The set is a subspace of IR.3.
(f) The set is a subspace of IR.4.
A3 (a) The set is a subspace of IR.4 . (b) The set is not a subspace of IR.4.
(c) The set is not a subspace of IR.4•
(d) The set is not a subspace of IR.4. (e) The set is a subspace of IR.4.
(f) The set is not a subspace of IR.4.
A4 Alternative correct answers are possible.
[�J o[�l o [_:H�l +!l- [�] [�H�l lil [tl m l�l
(a) l
(b)
(c) 1
(d) 1
+
+
2
1 +
+
l -
l
=
[n [�J [�J [�J 2
+1
=
AS Alternative correct answers are possible. (a) The plane in R4 with basis
{! i} U � �} ,
(b) The hype'Jllane in R4 with basis
,
,
467
468
Appendix B
Answers to Practice Problems and Chapter Quizzes
(c) The line in 11!.4 with basis
{ -I } {l �} .
(d) The plane in JR4 with basis
2
-1
A6 If ℓ = {p + td} is a subspace of R^n, then it contains the zero vector. Hence, there exists t1 such that 0 = p + t1 d. Thus, p = -t1 d, and so p is a scalar multiple of d. On the other hand, if p is a scalar multiple of d, say p = t1 d, then we have

p + td = t1 d + td = (t1 + t)d

Hence, the set is Span{d} and thus is a subspace.

A7 Assume that there is a non-empty subset B1 = {v1, ..., vℓ} of B that is linearly dependent. Then there exist ci, not all zero, such that

0 = c1 v1 + ··· + cℓ vℓ = c1 v1 + ··· + cℓ vℓ + 0 v(ℓ+1) + ··· + 0 vn

This contradicts the fact that B is linearly independent. Hence, B1 must be linearly independent.
Section 1.3 A Problems Al (a) Y29 (e) -fiSl/5
(b) 1
(f) 1
[ ;0]
[ ]
1 (b) 1/.../2
3/5
A2 (a) 4/5
c+!J
-2/3 -2/3 (e) 1/3 0
A3 (a) 2 VlO A4 (a) 11111
(b) 5
(c) .../2 (g) -%
(d) Y17 (h) 1
1
(c)
6
�
2/Ys 1/.../2 0 (f) 0 -1/.../2
(d) 3-%
(c) vT75
-v26; 11.Yll -f35; 111 + .Yll 2 .../22; 11 ·.YI 16; the triangle inequality: 2 -v'22 ::::: 9.38 $ -v26 + -f35 ::::: 10.58; the Cauchy-Schwarz inequality: 1 6 $ -../26(30);:::: 27.93. =
=
=
(b) 11111 = -%; 11.Yll = Y29; 111 + .Yll ity: -../41 ::::: 6.40 $ -% + Y29
3 AS (a)
(b)
=
=
:::::
-../41; 11 · .YI = 3; the triangle inequal 7.83; the Cauchy-Schwarz inequality:
-../ 6(29);:::: 13.19.
$
m [- � l [-!] [-:]
=
·
·
O; these vectors are orthogonal.
=
0; these vectors are orthogonal.
Appendix B
(c)
(d)
(e)
(f)
rn [-�l ·
=
4
Answers to Practice Problems and Chapter Quizzes
O; these vectors are not orthogonal.
;
4 -1 0 34 -2 0 0 X1 0 X2 0 X3 0 X4 1/3 3/2 2/3 0 -1/3 -3/2 3 6 0 2x1 4x2 -X3 3x1 -4x2 x3 8 3x1 4x3 0 X1 -4x2x2 5x3 -2x4 0 m =
O; these vectors are orthogonal.
= O; these vectors are orthogonal.
= 4; these vectors are not orthogonal.
1
A6 (a) k = A7 (a) (c)
AS (a) (c)
(b) k =
=
+
+
+
9
or k =
3
(b) (d)
=
+
(b) (d)
=
+
(b) ii=
Hl1
(d) -12 -1 2 -3 -1 2x1 -3x2 5x3 6 X2 -2 xi - x2 3x3 2 it=
-3 3x1-4x1 -2x2 5x3 -2x3 26 -12 x2 3x3 3x4 1 X2 2X3 -X4 X5 (c) k
=
A9 (a) ii=
(c) ii
(d) any k E IR
=
+
=
=
+
+
{�] +
=
= 1
+
(e)n=
AlO (a) (b)
469
=
+
=
(c)
=
+
All (a) False. One possible counterexample is
[�] [�] 2 [�] [ �]
(b) Our counterexample in part (a) has i1 *
·
A Problems
(b) proJvu = .
_,
[-�J. r3648/2/25J 5
perpil i1 =
[�]
,perpvu = _,
=
·
_9
.
0, so the result does not change.
Section 1.4
Al (a) projil i1 =
=
r-136 102//25J 25
4 70
Appendix B
Answers to Practice Problems and Chapter Quizzes
(d) projil a=
(e) projil a=
(f) projil a=
A3 (a) u =
-[ 4/98/9] [ 40/91/9 l -8/0 9 -1 -19/9 00 -12 0 1/2 -1 5/2 -0 3 -5/22 -1/20
= 11�11
(b) proja F =
(c) p erpa F =
A4 (a) u =
�
11 11
=
(b) proja F =
(c) p erpa F =
'perpil a=
'perpil a=
'perpil a=
2/76/7 [3/7220/49] 660/49] [330/49 [ 270/49 222/49] -624/49 3/1/M Ml [-224/71Yf4. [-16/78/7] -[ 693/7/7] 30/7
Appendix B
Answers to Practice Problems and Chapter Quizzes
A5 (a) R(5/2, 5/2), 5/√2
(b) R(58/17, 91/17), 6/√17
(c) R(17/6, 1/3, -1/6), √29/6
(d) R(5/3, 11/3, -1/3), √6
A6 (a) 2/√26 (b) 13/√38 (c) 4/√5 (d) √6
A7 (a) R(1/7, 3/7, -3/7, 4/7)
(b) R(15/14, 13/7, 17/14, 3)
(c) R(0, 14/3, 1/3, 10/3)
(d) R(-12/7, 11/7, 9/7, -9/7)
Section 1.5 A Problems
[=� ] [�] -27
Al (a)
(b)
(c)
(d)
(e)
(0
31 - 4
Ul Ul [�] [�]
A2 (a) ii x ii
(b) i1 xv
=
=
[�] [ -�1 -13
=
-v x i1
471
472
Appendix B
Answers to Practice Problems and Chapter Quizzes
(c) u × 3w = 3(u × w)
(d) u × (v + w) = u × v + u × w
(e) u · (v × w) = -14 = w · (u × v)
(f) u · (v × w) = -14 = -v · (u × w)
A3 (a) √35 (b) √11 (c) 9 (d) 13
A4 (a) x1 - 4x2 - 10x3 = -85
(b) 2x1 - 2x2 + 3x3 = -5
(c) -5x1 - 2x2 + 6x3 = 15
(d) -17x1 - x2 + 10x3 = 0
A5 (a) 39x1 + 12x2 + 10x3 = 140
(b) 11x1 - 21x2 - 17x3 = -56
(c) -12x1 + 3x2 - 19x3 = -14
(d) x2 = 0
A6 (a) 1"
(b)
[�6�\} t;J
x{il+H
tER
tER
A 7 (a) 1
(b) 126 (c) 5
(d) 35 (e) 16
A8 u · (v × w) = 0 means that u is orthogonal to v × w. Therefore, u lies in the plane through the origin that contains v and w. We can also see this by observing that u · (v × w) = 0 means that the parallelepiped determined by u, v, and w has volume zero; this can happen only if the three vectors lie in a common plane.

A9 (u - v) × (u + v) = u × (u + v) - v × (u + v) = u × u + u × v - v × u - v × v = 0 + u × v + u × v - 0 = 2(u × v)
Chapter 1 Quiz
E Problems
E1 x = ... + t(...), t ∈ R
E2 8x1 - x2 + 7x3 = 9
E3 To show that {[1; 2], [-1; 2]} is a basis, we need to show that it spans R² and that it is linearly independent. Consider

[x1; x2] = t1 [1; 2] + t2 [-1; 2] = [t1 - t2; 2t1 + 2t2]

This gives x1 = t1 - t2 and x2 = 2t1 + 2t2. Solving using substitution and elimination, we get t1 = (1/4)(2x1 + x2) and t2 = (1/4)(-2x1 + x2). Hence, every vector [x1; x2] can be written as

[x1; x2] = (1/4)(2x1 + x2)[1; 2] + (1/4)(-2x1 + x2)[-1; 2]

So, it spans R². Moreover, if x1 = x2 = 0, then our calculations above show that t1 = t2 = 0, so the set is also linearly independent. Therefore, it is a basis for R².
E4 If d ≠ 0, then a1(0) + a2(0) + a3(0) = 0 ≠ d, so the zero vector is not in S. Thus, S is not a subspace of R^3. On the other hand, assume d = 0. Observe that, by definition, S is a subset of R^3 and that the zero vector is in S, since taking x1 = 0, x2 = 0, and x3 = 0 satisfies a1x1 + a2x2 + a3x3 = 0. Let x, y ∈ S. Then they must satisfy the condition of the set, so a1x1 + a2x2 + a3x3 = 0 and a1y1 + a2y2 + a3y3 = 0. To show that S is closed under addition, we must show that x + y satisfies the condition of S. We have x + y = [x1 + y1, x2 + y2, x3 + y3] and
a1(x1 + y1) + a2(x2 + y2) + a3(x3 + y3) = (a1x1 + a2x2 + a3x3) + (a1y1 + a2y2 + a3y3) = 0 + 0 = 0
Hence, x + y ∈ S. Similarly, for any t ∈ R, we have tx = [tx1, tx2, tx3] and
a1(tx1) + a2(tx2) + a3(tx3) = t(a1x1 + a2x2 + a3x3) = t(0) = 0
So, S is closed under scalar multiplication. Therefore, S is a subspace of R^3.
E5 The coordinate axes have direction vectors given by the standard basis vectors. The cosine of the angle between v and e1 is
cos α = (v · e1)/(||v|| ||e1||) = 2/√14
The cosine of the angle between v and e2 is cos β = (v · e2)/(||v|| ||e2||). The cosine of the angle between v and e3 is
cos γ = (v · e3)/(||v|| ||e3||) = 1/√14
E6 Since the origin O(0, 0, 0) is on the line, we get that the point Q on the line closest to P is given by OQ = proj_d OP, where d is a direction vector of the line. Hence,
OQ = ((OP · d)/||d||^2) d = [18/11, -12/11, 18/11]
and the closest point is Q(18/11, -12/11, 18/11).
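The projection formula used in E6 is easy to compute directly. The direction vector and point below are hypothetical examples, not the ones from the quiz:

```python
def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

d = [1.0, 2.0, 2.0]    # hypothetical direction vector of the line
OP = [3.0, 0.0, 3.0]   # hypothetical point P, as a position vector

# Closest point on the line through the origin: OQ = ((OP . d) / ||d||^2) d
scale = dot(OP, d) / dot(d, d)
OQ = [scale * di for di in d]
print(OQ)  # [1.0, 2.0, 2.0]
```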
E7 Let Q(0, 0, 0, 1) be a point in the hyperplane. A normal vector to the hyperplane is n = [1, 1, 1, 1]. Then, the point R in the hyperplane closest to P satisfies
OR = OP + proj_n PQ = [5/2, -5/2, -1/2, 3/2]
Then the distance from the point to the hyperplane is the length of PR = proj_n PQ:
||PR|| = ||[-1/2, -1/2, -1/2, -1/2]|| = 1
E8 A vector orthogonal to both vectors is given by their cross product.
E9 The volume of the parallelepiped determined by u + kv, v, and w is
|(u + kv) · (v × w)| = |u · (v × w) + k(v · (v × w))| = |u · (v × w) + k(0)| = |u · (v × w)|
This equals the volume of the parallelepiped determined by u, v, and w.
E10 (i) False. The points P(0, 0, 0), Q(0, 0, 1), and R(0, 0, 2) lie in every plane of the form t1x1 + t2x2 = 0 with t1 and t2 not both zero.
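The invariance argued in E9 (adding a multiple of v to u does not change the volume) can be checked numerically with arbitrary sample vectors:

```python
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

u = [1.0, 2.0, 3.0]
v = [0.0, 1.0, -1.0]
w = [2.0, 0.0, 1.0]
k = 5.0

vol = abs(dot(u, cross(v, w)))
u_sheared = [ui + k*vi for ui, vi in zip(u, v)]
vol_sheared = abs(dot(u_sheared, cross(v, w)))
print(vol, vol_sheared)  # 9.0 9.0
```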
(ii) True. This is the definition of a line, reworded in terms of a spanning set.
(iii) True. The set contains the zero vector and hence is linearly dependent.
(iv) False. The dot product of the zero vector with itself is 0.
(v) False. Choosing specific non-zero, non-parallel vectors x and y gives proj_x y ≠ proj_y x.
(vi) False. If y = 0, then proj_x y = 0. Thus, {proj_x y, perp_x y} contains the zero vector, so it is linearly dependent.
(vii) True. We have ||u × (v + 3u)|| = ||u × v + 3(u × u)|| = ||u × v + 0|| = ||u × v||, so the parallelograms have the same area.
CHAPTER 2
Section 2.1 A Problems
Al (a) x =
(b) X=
(c) X =
[1;]
[�] fn [=2�] +
-1
(d) X=
-1
-� +t -�, 0
t ER
tEJR
A2 (a) A is in row echelon form.
(b) B is in row echelon form.
(c) C is not in row echelon form because the leading 1 in the third row is not further to the right than the leading 1 in the second row.
(d) D is not in row echelon form because the leading 1 in the third row is to the left of the leading 1 in the second row.
A3 Alternate correct answers are possible. -3 13
-
-1 0 0
2 1 0
(c)
5 0 0 0
0 -1 0 0
0 -2 1 0
(d)
2 0 0 0
0 2 0 0
2 2 4 0
0 4
(e)
1 0 0 0
2 1 0 0
2 7 0
1 5 2
(f)
1 0 0 0
0 1 0 0
3 -1 24 0
(a)
(b)
[�
[i
�]
�]
8 0
0 2 1 0
1 11
A4 (a) Inconsistent.
(b) Consistent. The solution is x
(c) Consistent. The solution is x
(d) Consistent. The solution is x
(e) Consistent. The solution is x
=
=
=
=
m [!] +t
-1 0 3
+t
19/2 0 5/2 -2
s
-1 0 1 0
t
ER
-1 -1 1' 0
t
E JR
-1 +t
+t
O' 0 1 0 O'
t
ER
s,t
ER
AS (a)
(b)
(c)
1
2
-5
2
4
0
24 /11 . . . :;t = . . cons1stent with so1ut10n -t l0 /1 l
4
2
1
-
_11
_10
consistent with solutionX =
-
f[ H
R IE
1
2
1
3
2
-3
8
-5
5
l
11
-8
19
[ � � =� i l
[ �l f H l [ [ -r Hl � l l[ [� 1 [� [�
x=
(d)
[ ]
1 ] [ 1 ] [ [ � -� �I� H � -� � 1 ;J 3
+
-17
-8
-3
-2
1
-
-11
-5
Consistent
IE R.
3 6
16
6
-2
1
0
0
-5
2 1
0
l
-11 5
3
with
[�fl
+
solution
Consistent with solution
X=
(e)
2
-1
9
-1
1
5
4
13
1
-
0
19
0
-5
-17
4
-21
1
-6
(g)
1
2
2
2
2
4
5
-2 -3
1
0+
-5 -7
0
1
3 3
0
0
3
The system is inconsistent.
2
1
-3 1
0
0
0 4
-5
;
5
-
t
4
8
10
1
l
2
0
-1
-8
-7
solution x =
2
1
0
-3
2
(f)
4
10
1
.
Consistent
with
tER
'
0 2
1
0
1
3
0
5
0 -4
sistent with solution x =
0
2
-3
-2
2
1
0 0
0
4
1
1
1
2 1
3/2
2
0
0
. The system is con-
0
1
-1 t 3/2 ' + 2 -3/2
t ER
0
A6 (a) If a ≠ 0, b ≠ 0, this system is consistent and the solution is unique. If a = 0, b ≠ 0, the system is consistent, but the solution is not unique. If a ≠ 0, b = 0, the system is inconsistent. If a = 0, b = 0, this system is consistent, but the solution is not unique.
(b) If c ≠ 0, d ≠ 0, the system is consistent, and the solution is unique. If d = 0, the system is consistent only if c = 0. If c = 0, the system is consistent for all values of d, but the solution is not unique.
A7 600 apples, 400 bananas, and 500 oranges.
A8 75% in algebra, 90% in calculus, and 84% in physics.
Section 2.2 A Problems Al (a)
[i n [� H H [� [i n
the rank is 2
0
(b)
1
the rank is 3.
0 0
(c)
1
the rank is 3.
0 0
(d)
the rank is 3.
0
(e)
(f)
(g)
(h)
(i)
1
2
0
0
0
0
�
0
0
0
[� [�
1
0
1
0
0
0
1
0
3/2
0
0
1
1/2
0
0
0
0
1
0
0
0
0
1
0
0
17
0
0
1
0
23
0
0
0
0
0
1
0
0
0
0
1
0
0
; the rank is 2.
n -H
the rank is 3.
the rank is 3.
-1/2 ; the rank is 3.
-56 ; the rank is 4.
-6 -2
A2 (a) There is one parameter. The general solution is x
=
1
t
1 ,
t
ER
0
(b) There are two parameters. The general solution is x
(c) There are two parameters. The general solution is x
=
=
s
s
�
0 + t
-2 1 ,
0
0
3
-2
1 0 0
+ t
0 1 , 0
s,tEJR.
s,tEJR.
0
-2
1 -1
2
(d) There are two parameters. The general solution is 1
=
0
+t
s
,
s, t
ER
0 0 0
-4 0
(e) There are two parameters. The general solution is 1
=
0
s
+ t
0
1
5,
s, t E IR
0
0 0
1
-1
(f) There is one parameter. The general solution is 1
=
t
t E JR.
,
0 0
A3 (a)
[�1 � -�i [� � �1]; [� 1� =;] [� � =�]; [H -1 1 -1 1 4
0
-3
solution is 1 (b)
the rank is 3; there are zero parameters. The only
-
=
0
0.
the rank is 2; there is one parameter. The
-
2
-7
0
general solution is 1
(c)
0
=
0
t ER.
t
-7
0
2
-3
3
-3
8
-5
0
0
2
-2
5
-4
0
0
0
�
3
-3
7
-7
0
0
0
0
meters. The general solution is 1
=
s
; the rank is 2; there are two para-
�
7
+t
-�
s, t
,
ER
0 0 (d)
1
2
2
2
2
5
3
5 4
2
0
1
11 1 1 -11
0
-1
0
-3
0
0
-2
0
0
0
-2
0
0
0
2
; the rank is 3; there are two pa-
_
0
rameters. The general solution is 1
0
=
0
2
0
0
-2
s
+t
1
,
s, t
ER
0
0
A4 (a)
[ � -� I � ]- [ � � I ���� � l
Consistent with solution 1
=
[���� � l
(b)
2-3 21 156 ] [ 1 10 1274//7 ] . 7 2[ 4b7/71 [-1�'1 7 [ 2� 5� =; 119� I [ � I -i � ] [�] fl 16 36 � -11 [ -�2 -3 5 -1 l - [ � ! � -� l 7 Hl[ i �9 -1� 191� ]- [ �0 �0 -�0 �1 ] [ � 1324 -1-6-3 014 -21 i [ 1� 00 -510 00 -7� i . 7-7 5 10 -11 ' 2 0 01 22 -2-3 0 4 21 10 10 00 00 -10 -40 22 54 --5 33 10 53 00 00 10 0 -3/23/2 -12 7 -40 01 -12 -3/2 3/2 ' 1 0 01 00 -31 . -13 04 [: 1 2 �I [� 0 1 2i [n r-n 1 0 5 -2 i � � -� l - [ 0 1 0 1 2 -5 4 0 0 0 0 nl fH fH
[ 12
x
(c)
+t
=
t
l
O
+
Consistent
with
solution
Consistent
with
solution
ER
-8
X=
(d)
O
IE R
,
Consisrent with solution
-8
-
X=
(e)
(D
·The system is inconsistent.
-5 -8
solution x
(g)
+t
=
t
ER
8
sistent with solution x
AS (a)
Consistent with
. The system is con-
=
+t
t
ER
The
,
=
solution
The solution to the homogeneous sysrem is
,
to
=
X=
+
IER
t
is
[A I b]
is
=
. The solution to
x
[A I b]
E R The solution to the homogeneous system is
(c)
-1 5 2 1 -1 ] [ 10 01 -59 2 -51 1 ] [ -� -5 -9 1 010 50 -2O' -95 -2 0 O'1 01 00 -20 � ]· 0 -12 u 2 50 lH� 0 1 1 -1 -11 21 0 1 02
is
4
-1
-4
+ s
_,
x
-1
�
-
-1
1
(d)
is x =
X=t
+t
_
-1
'
1
+t
-1 -4
3 -
-4
s,t
+t
homogeneous system is x = s
3
_
,
A Problems
(b)
(c)
2 -1 2 01 (-1) 01 12 5 +3
01
+
1 + (-1 )
2
-1 -1
=
2
8 4
-1 1
is not in the span.
(-2) -10 I
(b)
3
2 2 -2 0 (-1) 2
3 A2 (a)
-
4 is not in the span. 6 7
3
3 +
-1 1 +
.
The solution to
[A I b_,]
JR. The solution to the
s,tER
The solution to
[A I b]
t E R The solution to the homogeneous system is
tER
+
E
�
Section 2.3
Al (a)
02 (-2) -1-1 1
=
-7 3
08
1 (c)
is not in the span.
A3 (a) x3 = 0
(b) x1 - 2x2 = 0, x3 = 0
(c) x1 + 3x2 - 2x3 = 0
(d) x1 + 3x2 + 5x3 = 0
(e) -x1 - x2 + x3 = 0, x2 + x4 = 0
(f) -4x1 + 5x2 + x3 + 4x4 = 0
A4 (a) It is a basis for the plane. (b) It is not a basis for the plane. (c) It is a basis for the hyperplane.
AS (a) Linearly independent.
(b) Linearly dependent. -3t
�
0 -
2t
-t
0 0
3 +t
0
2
6 3
0 0 (c) Linearly dependent. 2t 0 -t 1 +t = 0 , 1 1 3 0 1 1 3 0 2
0 0 O' 0
t E IR
0
3
A6 (a) Linearly independent for all k ≠ -3.
(b) Linearly independent for all k ≠ -5/2.
A7 (a) It is a basis.
(b) Only two vectors, so it cannot span R^3. Therefore, it is not a basis.
(c) It has four vectors in R^3, so it is linearly dependent. Therefore, it is not a basis.
(d) It is linearly dependent, so it is not a basis.
Section 2.4 A Problems
A1 To simplify writing, let α = 1/√2. Total horizontal force: R1 + R2 = 0. Total vertical force: RV - FV = 0. Total moment about A: R1 s + FV(2s) = 0. The horizontal and vertical equations at the joints A, B, C, D, and E are αN2 + R2 = 0 and N1 + αN2 + RV = 0; N3 + αN4 + R1 = 0 and -N1 + αN4 = 0;
-N3 +aN6 -aN4 +N1 -N7 - aN6
=
0 and -aN2 +Ns+aN6
=
0 and -aN4 - Ns
=
0 and -aN6 - Fv
R1 +R2 -R2
-R2 R2 +R3
0
A2
=
0
0
=
0
=
0
0
0
E1
0
0
0
-R3
-R3 R3 +R4 +Rs
0
0
0
0
0
0
0
-Rs
Rs+R6 -R6
-Rs -R6 R6 +R1 +Rs
0
E2
y
A3
x =
Chapter 2 Quiz
E Problems
El
E2
1
-1
1
0
1
0
0
7
0
0
-2
1
-5
0
0
0
0
2
0
1
0
0
0
0
1
0
0
0
0
0
I
0
-1/3
0
0
0
. Inconsistent.
1
1/3
E3 (a) The system is inconsistent for all (a, b, c) of the form (a, b, 1) or (a, -2, c) and is consistent for all (a, b, c) where b ≠ -2 and c ≠ 1.
(b) The system has a unique solution if and only if b ≠ -2, c ≠ 1, and c ≠ -1.
E4 (a)
s
-2
11/4
-1
-11/2
1 +t
0
0
-5/4
0
1
(b) x · u = 0, x · v = 0, and x · w = 0 yields a homogeneous system of three linear equations with five variables. Hence, the rank of the matrix is at most three, and thus there are at least (# of variables) - rank = 5 - 3 = 2 parameters. So, there are non-trivial solutions.
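The counting argument in E4(b) can be illustrated with a hypothetical 3 × 5 coefficient matrix (the rows below are arbitrary, not the book's vectors):

```python
import numpy as np

# A hypothetical 3 x 5 homogeneous system: the rank is at most 3,
# so the solution has at least 5 - 3 = 2 parameters.
A = np.array([[1.0, 2.0, 0.0, 1.0, -1.0],
              [0.0, 1.0, 3.0, 0.0, 2.0],
              [1.0, 0.0, 1.0, 1.0, 0.0]])

rank = np.linalg.matrix_rank(A)
num_parameters = A.shape[1] - rank
print(rank, num_parameters)  # 3 2
```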
E5 Consider x = t1 v1 + t2 v2 + t3 v3. Row reducing the corresponding coefficient matrix gives the identity matrix. Thus, B is linearly independent and spans R^3. Hence, it is a basis for R^3.
E6 (a) False. The system may have infinitely many solutions.
(b) False. The system x1 = 1, 2x1 = 2 has more equations than variables but is consistent.
(c) True. The system x1 = 0 has a unique solution.
(d) True. If there are more variables than equations and the system is consistent, then there must be parameters and hence the system cannot have a unique solution. Of course, the system may be inconsistent.
CHAPTER 3
Section 3.1 A Problems
A1 (a)
(b)
(c)
A2 (a)
(b)
[-
[
[ [
-6
1 6
-4
-3 -6
-12 -1 -11 -1
[I
6
-3
6
-1
� �-1
�]
27
f l n
12 -1
]
l
15
6
(c)
]
�]
-3
15
l3 -4
-5
1
19
-1
]
(d) The product is not defined since the number of columns of the first matrix does not equal the number of rows of the second matrix.
A3 (a) (b)
A(B+C)=
[- �! l �] =AB+AC
r6 A(B+C)= l4
-16
14 =AB+AC
]
A(3B)= ; ; - �� =3(AB) A(3B)= _2; � =3(AB)
[
[
] n
A4 (a)
(b)
AS (a)
A+ A+
Answers to Practice Problems and Chapter Quizzes
Bis defined. ABis not defined.
12 2lO] .
Bis not defined. ABis defined.
AB= [ lO 13
31
(A+
Bl = [=� ; ;] = [-�� =��]=Br
(ABl
=Ar+ Br. Ar.
(b) Does not exist because the matrices are not of the correct size for this product to be defined.
(c) Does not exist because the matrices are not of the correct size for this product to be defined. (d) Does not exist because the matrices are not of the correct size for this product to be defined. (e) Does not exist because the matrices are not of the correct size for this product to be defined.
(f) (g)
(h)
[l� � � l�] 5[ 2 l] 139 46
] [ 21 12 62
13 3
10
7 7 15 A AX= = Z nl {;]. n [12 l: 17[ � �ll 1 10
(i)
A6 (a)
(b)
Dre= (CTDl =
AY
3
A7 (a)
(b)
(c)
(d)
11
[�] [�] [i l
8 4 -4
-
3
9 11
AS (a)
[-�
[
-1
�]
(b) [OJ (c)
10
8
-6
-5
-4 12
3 -9
15
1
(d) [-3]
[
-l 3 . _ A9 B oth s1'des give 27
16] o
AlO It is in the span since 3
·
U �]
+( -2)
[-� �]
+(-1)
[� -�] [� -H =
A11 It is linearly independent.
A12 Using the second view of matrix-vector multiplication and the fact that the i-th component of e_i is 1 and all other components are 0, we get
A e_i = 0 a_1 + ... + 0 a_{i-1} + 1 a_i + 0 a_{i+1} + ... + 0 a_n = a_i
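The fact used in A12, that A e_i is simply the i-th column of A, is easy to see computationally (the matrix below is an arbitrary example):

```python
A = [[1, 2, 3],
     [4, 5, 6]]
e2 = [0, 1, 0]  # standard basis vector e_2 of R^3

# A e_2 picks out the second column of A.
Ae2 = [sum(row[j] * e2[j] for j in range(len(e2))) for row in A]
print(Ae2)  # [2, 5]
```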
Section 3.2 A Problems
A1 (a) Domain R^2, codomain R^4
-19
(b) fA(2, -5) =
(c) fA(l, Q) =
-9 6 ,fA(-3, 4) = 17 -23 38 -36
-2 3 l
,fA(O, 1 ) =
4
3 0 5 -6
-2x1 + 3x2
(d) fA(x)
3x1 + Ox2 =
x1 +5x2 4x1 - 6x2
(e) The standard matrix offA is
A2 (a) Domain R^4, codomain R^3
[-111�
, fA(-3, 1, 4, 2)
fA (€,) =
fA(t3) =
(b) fA(2, -2, 3, 1) =
(c) fA(ti) =
[H
-2 3
3 0
1 4
5 -6
[ �] l-n Ul -13
[-H
=
-
fA (€,) =
(d) fA(x)
=
l��l+::�: ;::] X1
2X - X4 3 (e) The standard matrix of fA is +
A3 (a) f is not linear.
(b) g is linear.
(c) h is not linear.
(d) k is linear.
(e) e is not linear.
(f) m is not linear.
A4 (a) Domain JR.3, codomain IR.2, [L ]
=
(b) Domain IR.4, codomain IR.2, [K]
=
(c) Domain IR.4, codomain JR.4, [M]
=
[� [�
-3 1
-
3
0 0
�
-1
2
0
-
-7
1
1
�]
1
-3
-1
1
-1
]
0
-1
1
A5 (a) The domain is R^3 and the codomain is R^2 for both mappings.
(b) [S+ T]
=
[
3 1
3
]
2
2
5
,
[2S- 3T]
[
=
1 -8
-4
-6
9 -5
]
A6 (a) The domain of S is R^4, and the codomain of S is R^2. The domain of T is R^2, and the codomain of T is R^4.
(b) [So T]
=
[
6
10
-19
_10
]
-3
, [To S]
=
6
-6
-9
5 8
16
-1 7
-16
-8
4
-4
9
0 -5
A7 (a) The domain and codomain of L∘M are both R^3.
(b) The domain and codomain of M∘L are both R^2.
(c) Not defined
(d) Not defined
(e) Not defined
(f) The domain of N∘M is R^3, and the codomain is R^4.
A8 [proj_v]
A9 [perp_v]
AlO [projv]
=
=
=
i l-� -7] [ ] 1
16
17 -4
§
-4
1
[ : : =�] -2
-2
1
0
Section 3.3 A Problems
[� -�] [ � -�] � [ � �] rn:��i -�:;��] [S] [� �] [R9oS] [���; � �;:] [ ( ) [s R ] [R] [ ] [S] [ ]
Al ca)
(b)
A2 (a)
-
0
g
J
=
cos e
5 sin e 4/5 -3/5 = -3/5 -4/5
A3 (a)
-2
3
1
-2
1
00 0 0 00 0
5
AS (a)
-2
-2
5
(b)
(c)There is no shear
c
9
-3/5 4/5
=
8
1 1 (b) - 8
(b)
5
=
-sin e 5 cos e
2
f0 0 1
A4 (a) - -2
(d)
-
S
(b)
=
c
cc)
4 -4 7
1
4
-4
[0 0 0]
4/5 3/5
1
1
2
1
T such that To P
=
PoS.
(d)
[� �] 0
Section 3.4 A Problems
A1 (a) [3, 1, 6] is not in the range of L.
(b)L(l,1,-2)=
3 -5 5
1
A2 (a) A bas;s fm Range(L)is
1
{[6].[m 0 0 0 0 0 00 ' 0 ' 0 ' 00 0 0 0
A basis for Null(L)is
{[�]}
1
1
(b) A basis for Range(M) is
1
-1
1
empty set since Null(M)
=
{0}.
[i 1 [� :1 -1
A3 The matrix of L is any multiple of
A4 The matrix of L is any multiple of
-2. -3
. A basis for Null(M) is the
AS (a) The number of variables is 4. The rank of A is 2. The dimension of the solution space is 2. (b) The number of variables is 5. The rank of A is 3. The dimension of the solution space is 2. (c) The number of variables is 5. The rank of A is 2. The dimension of the solution space is 3. (d) The number of variables is 6. The rank of A is 3. The dimension of the solution space is 3.
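Each part of A5 uses the relation dim(solution space) = (number of variables) − rank(A). A sketch with a hypothetical matrix:

```python
import numpy as np

# dim(solution space of Ax = 0) = (number of variables) - rank(A).
# Hypothetical 3 x 4 matrix whose third row is the sum of the first two:
A = np.array([[1.0, 0.0, 2.0, -1.0],
              [0.0, 1.0, 1.0, 3.0],
              [1.0, 1.0, 3.0, 2.0]])

rank = np.linalg.matrix_rank(A)
dim_solution_space = A.shape[1] - rank
print(rank, dim_solution_space)  # 2 2
```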
{[�] m, rm·
A6 (a) A basis fo, the mwspace is
[ J}. {[: l m ,
A basis fo, the columnspace is
•
A basis for the nullspace is the empty set. Then, rank(A) + nullity(A) = 3 + 0 = 3, the number of columns of A, as predicted by the Rank Theorem.
{ j , _! � } {i}
(b) A basis for the rowspace is
0
{[�l m, rm
·A basis for the columnspace is
,
1
0
A basis fm ilie nullspace is
·
Then, rank(A) + nullity(A) = 3 + 1 = 4, the number of columns of A, as predicted by the Rank Theorem.
2 (c) A basis for the rowspace is
r}
0
0
0
0
0 '
' 0
3
4
0
0
. A basis for the columnspace is
0 -2
·A basis fm the nullspace is
Then, rank(A) + nullity(A) = 3 + 2 = 5, the number of columns of A, as predicted by the Rank Theorem.
A 7 (a) A basis fm the nul\space is
vectorn orthogonal to
[-�])
3+2
=
0
0
-4
0
1
0
0
5, the number of columns of A as predicted
�{[ l [ �]} ·
-3
1
_
(°' any pafr of linead y independent
{ [- � ] } . {[-�l l-m
; a basis fm the range is
(b) For the nullspace, a basis is
[!]{ };
for the range,
(c) A basis for the nullspace is the empty set; the range is for JR.3•
·
JR.3,
so take any basis
AS (a) n = 5
1 0 2 0 3
(b)
-2 (e) x
1 0 0
s
=
+ t
-3 -1 0 -1 1
-2 1 (f) 0 0
-3 -1 0 -1 1
0 0 1 0 -1 ' 0 0 1 1
,
s, t E
(c) m = 4
(d)
{t ' , �} 1 3 1 2
-1
-2
-3 -1 0
0 0
-1
R So, a spanning set is
is also linearly independent, so it a basis.
(g) The rank of A is 3 and a basis for the solution space has two vectors in it, so the dimension of the solution space is 2. We have 3 + 2 = 5, which is the number of variables in the system.
Section 3.5 A Problems Al (a)
(b)
1
[
[
5
�]
23 -2 2 -1 -1
0 1 0
-1 -1 1
i
(c) It is not invertible. (d)
-1 1 0
H �] 10 2 -3 -3
6
(e)
(f)
A2 (a)
(b)
(c)
-2 0 0 0 0 0
0 ] 0 0 0
Hl ni [=�]
-1 0 1 0 0
-5/2 -1/2 1 0 -] -1 1 0
-7/2 -1/2 1 -2 2 1 -2 1
A3 (a)
A-1 =[-32 -�]. s-1 [-� n [� {6]. [-l� _;] 1/3] (3At1 =[213 -2/3 (ATtl [ 2 �J = [ -YJ/2 l 12 YJ/2/2] [� �] [165 �] [� -� �i [� n [-� �] = [� � ] [� �l [ � �] = = =
=
-
(AB)-1 =
(b) (AB) = (c) (d)
-1
=
-1
-
1
I A4 (a) [Rrr;6] - = [R_rr/6]
(b) (c)
(d)
0
0 1
[S-1] =
AS (a) [SJ=
=[WI)
(b) [R]
(c) [(RoSt1]=
-
[(SoR)-1)=
A6 Let v, y ∈ R^n and t ∈ R. Then there exist u, x ∈ R^n such that x = M(y) and u = M(v). Then L(x) = y and L(u) = v. Since L is linear, L(tx + u) = tL(x) + L(u) = ty + v. It follows that M(ty + v) = tx + u = tM(y) + M(v). So, M is linear.
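In matrix terms, A6 says that the inverse of an invertible linear mapping is itself linear. A numerical sketch with an arbitrary invertible matrix:

```python
import numpy as np

# If L has invertible matrix A, then M = L^{-1} has matrix A^{-1}, and
# linearity of M reads A^{-1}(t y + v) = t A^{-1} y + A^{-1} v.
A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
Ainv = np.linalg.inv(A)

y = np.array([1.0, 3.0])
v = np.array([-2.0, 5.0])
t = 4.0

is_linear = np.allclose(Ainv @ (t*y + v), t*(Ainv @ y) + Ainv @ v)
print(is_linear)  # True
```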
Section 3.6
[� 1 ol EA= [6-1 23 Tl Ol [ 2 A E = 2 [� -1 3 �] 2 �].EA -: = [ [� -23 �I
A Problems
E
=
�)
E
=
(c)
E
Al (a)
-u
-5
0, 0 1
0 0 1 0
,
4
I 4
0 1
=
0 -1
-4
tMCY)
+
M(v)
=E [� 00 �].EA= H 1822 2�] 01 A= 2 E = [� 0 �].E H 10 �] 00 001 001 000 00 00 000 00 00 0 00 10 01 00 00 00 00 0 0 021 00 001 000 000 00 001 00 000 000 001 001 00 000 000 1 1.
(d)
(e)
6
3
I
A2 (a)
-3
l
(b)
(c)
(d)
-3
l
3
(e)
(f)
l
-1.
A3 (a) It is elementary. The corresponding elementary row operation is R3 + (-4)R2.
(b) It is not elementary. Two rows have been multiplied by 3.
(c) It is not elementary. We have multiplied row 1 by a nonzero constant and then added row 3 to row 1.
(d) It is elementary. The corresponding elementary row operation is R1 ↔ R3.
(e) It is not elementary. All three rows have been swapped.
0 0 l [ 0 [ 1 0 Ol E1 = 00 01 01 =] [00 01 1/20 = 00 0 �] = l� 01 �] -A 1 = [00 1-02 01 [ l 0 0 1 0 1 0 1 0 0 A= 0 1 0°][0 0 �rn 01 m� 01 �]
(f) It is elementary. A corresponding elementary row operation is (l)R1• 3 I I l -4 '£4 A4 (a) E2 '£3 l ,
-3
l
/2
3
E1 = [ 0I 00 n E2 = [� 00 1 J =[00I 001 �] = [� 0 �] A-1 = [-7 0 ] A= [0 00 m� 00 rn� 00 �m 0 �] [I 0 Ei = � 01 HE2= [� 001 HE,=[� 00 H 0 0 E4 = [� 01 n E, = [� 01 �I A-1 = [ 0 !] 0 A=H 0 �JU 001 �rn 00 �rn 00 001 ][100 00 10i 0 00 00 01 0 00 00 01 1 00 00 E i = 0 0 E2 = 0 0 1 0 = 0 0 0 0 0 0 0 1 0 0 0 0 01 0 0 0 00 0 = 00 0 00 'Es= 00 01 0 00 'E6 = 00 0 0 0 0 00 00 0 0 0 1 0 0 0 = 00 0 01 00 000 A-1 = 1 1 0 00 0 0 0 00 00 0 0 00 00 0 1 00 00 0 0 00 00 -1 A= 0 0 0 0 0 1 0 0 0 0 0 0 01 0 0 0 0 0 1 0 0 00 00 0 0 0 0 0 0 00 01 0 00 00 01 0 00 00 01 01 00 000100010001 O
(b)
1
-2
4
1
-3
-2
-2 1
'£4
1
I
2
1
1
2
(c)
I
-2
-3 '£3
1
-2
6
- 1 /4
- 1 /4
1 /2 4
-1/2
- 1 /4
-I
1
-4
1
1
2
(d)
1
1
1
, O
'
£ 3
1
O'
2
1
£4
1
1
-4
1
-1
1 /2
O'
1
1
£7
-1
1
3
4
-1
-2
1
- /2
- 1 /2
1 /2
2
1
-2
1
1
1
1
1
-2
4
2
1
1
1
Section 3.7 A Problems
-:] -1 -� �] -1201/210001 114 01 0000504 -7/2 53 -13/2 0 001 00 - 0 0 10 10 000001 -32-2 1 30-4/3-1 17/91 01 00 00-30-37 0-20-11 2220 --3/221 3/210000 1 0 0 0 4 0 -1 -221 0 000 A2(a)LU=[=� ! rn� � _;];11=[=H12=Ul 0 H LU= rn� -� �i 11 = m 1, =r=�i LU=[=� ! rn� � H 11= [�]· 1, {�] 1 01 0000-10 -12 -33 01 -11 --53 0 LU= -3-1 2001 0 00 00-1201 -31 , 12= -20 (d)
(e)
(f)
(b)
-
,
(c )
O;xi=
(d)
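The A2 answers above use an LU factorization to solve Ax = b in two triangular stages. A sketch with a hypothetical 2 × 2 factorization (`np.linalg.solve` stands in for the forward and back substitution steps):

```python
import numpy as np

# Hypothetical factorization A = LU with L unit lower triangular
# and U upper triangular.
L = np.array([[1.0, 0.0],
              [3.0, 1.0]])
U = np.array([[2.0, 1.0],
              [0.0, -1.0]])
A = L @ U
b = np.array([4.0, 10.0])

y = np.linalg.solve(L, b)  # forward substitution: L y = b
x = np.linalg.solve(U, y)  # back substitution:    U x = y
print(x)  # [1. 2.]
```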
Chapter 3 Quiz E Problems El (a)
1 =3179] [-14-1 10
(b) Not defined ( c)
1-� =��1 -8
-42
[�] [-16 ] - 1
E2 (a) fA(a) = (b)
.1ACv) =
-11 0
17
E3 (a) [R] = [R,13] =
[ ��
-
(c) [Ro M] =
,-
,
� 3
2+ Y3 � 2 \(3 - 1 6
4
0
ol
2
2 -1
-Y312 1/ 2 0
2
(b) [M] = [re ftc 1 1 2) ] =
[-��] 1
[-7 -� �i -l - 2Y3
1
2- 2 2 '\f3+ 2 -2
- '\f3 + 2 4
-2 E4 The solution space of Ax = 0 is x =
-1 5 -2 1 6 0 1 +t 0 of Ax= bis x = 0 + s 1 0 0 7 0 0
s
,
1 1 0 0
s, t E
-1 0 +t 0 1 0
,
s, t E
R The solution set
R In particular, the solution set is
5 6 obtained from the solution space of Ax = 0 by translating by the vector 0 . 0 7 E5 (a) a is not in the columnspace of B. v is in the columnspace of B. (b) x =
(cJY =
l=il m
E6 A basis for the rowspace of A is
{� � !} 1
of A is
0
,
3
1 0 0 0 0 1 , -1 , 0 0 0 -1 2 3
,
3
·A basis for tbe nul\space is
. A basis for the columnspace
-1 1 0 0
1 -3 0
-2
2/3 1/6
0 0
1 0 -1/3 0
1/2
1/3 -1/6
0 0
1/3
0
0
E8 The matrix is invertible only for p ≠ 1. The inverse is
[-
� � 1 p
p 1 - 2p -1 -1
-;;
.
�
-p p . 1
i
�
E9 By definition, the range of L is a subset of R^m. We have L(0) = 0, so 0 ∈ Range(L). If x, y ∈ Range(L), then there exist u, v ∈ R^n such that L(u) = x and L(v) = y. Hence, L(u + v) = L(u) + L(v) = x + y, so x + y ∈ Range(L). Similarly, L(tu) = tL(u) = tx, so tx ∈ Range(L). Thus, Range(L) is a subspace of R^m.
E10 Consider c1 L(v1) + ... + ck L(vk) = 0. Since L is linear, we get L(c1 v1 + ... + ck vk) = 0. Thus, c1 v1 + ... + ck vk ∈ Null(L) and so c1 v1 + ... + ck vk = 0. This implies that c1 = ... = ck = 0 since {v1, ..., vk} is linearly independent. Therefore, {L(v1), ..., L(vk)} is linearly independent.
Ell (a) E1
[� � �], [� � � l· [� � �1. [� [� � rn� ! m� ! -�i [� ! +1 1 2
=
0
(b) A=
E12 (a)
K
0
E =
1
2
0
0
1/4
£3 =
£4 =
0
0
1
o
o
1 3/2 1 0 0
= /3
(b) There is no matrix K. (c) The range cannot be spanned by
[�1
(d) The matrix of L is any multiple of
because this vector is not in R^3.
[ � =���1 2
-4/3
(e) This contradicts the Rank Theorem, so there can be no such mapping L.
(f) This contradicts Theorem 3.5.2, so there can be no such matrix.
CHAPTER 4
Section 4.1 A Problems
A1 (a) -1 - 6x + 4x^2 + 6x^3
(b) -3 + 6x - 6x^2 - 3x^3 - 12x^4
(c) -1 + 9x - 11x^2 - 17x^3
(d) -3 + 2x + 6x^2
(e) 7
(f) l3
-
_
2x - 5x2 �x + llx2 3 3
(g) √2 - π + √2 x + (√2 + π)x^2
I
A2 (a) 0
=
0(1 + x2 + x3) + 0(2 + x + x3) + 0(-1 + x + 2x2 + x3)
(b)
2 + 4x + 3x2 + 4x3 is not in the span.
(c)
-x + 2x2 + x3
(d)
-4 - x + 3x2
(e)
-1 - 7 x + 5x2 + 4x3
=
=
2(1 + x2 + x3) + (-1)(2
+
x + x3) + 0(-1 + x + 2x2 + x3)
1(1 + x2 + x3) + (-2)(2 + x + x3) + 1(-1 + x =
+
2x2 + x3)
(-3)(1+ x2 + x3) + 3(2 + x + x3) + 4(-1+x+2x2 + x3)
(f) 2 + x + 5x3 is not in the span. A3 (a) The set is linearly independent. (b) The set is linearly dependent. We have
0
=
(-3t)(l + x + x2) + tx + t(x2 + x3) + t(3 + 2x + 2x2 - x3),
t
E
IR.
(c) The set is linearly independent. (d) The set is linearly dependent. We have
0
=
(-2t)(l +x+x3+x4)+t(2+x - x2+x3+x4)+t(x+x2+x3+x4),
t
E
IR.
A4 Consider
The corresponding augmented matrix is
[
1 0 0
-1 1 0
1 -2 1
a1 a . 2 a 3
l
Since there is a leading 1 in each row, the system is consistent for all polynomials a1 + a2 x + a3 x^2. Thus, B spans P2. Moreover, since there is a leading 1 in each column, there is a unique solution and so B is also linearly independent. Therefore, it is a basis for P2.
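The test used in A4, row reducing to check for a leading 1 in every row and column, amounts to a rank computation on the coefficient vectors. A sketch with a hypothetical set of polynomials:

```python
import numpy as np

# Represent each polynomial in P2 by its coefficient vector (a0, a1, a2).
# Three polynomials form a basis for P2 exactly when the matrix whose
# columns are their coefficient vectors has rank 3.
# Hypothetical set: {1, 1 + x, 1 + x + x^2}.
C = np.array([[1, 1, 1],
              [0, 1, 1],
              [0, 0, 1]])

is_basis = np.linalg.matrix_rank(C) == 3
print(is_basis)  # True
```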
Section 4.2 A Problems Al (a) It is a subspace. (b) It is a subspace. (c) It is a subspace. (d) It is not a subspace. (e) It is not a subspace.
(f) It is a subspace.
A2 (a) It is a subspace.
(b) It is not a subspace.
(c) It is a subspace.
(d) It is a subspace.
A3 (a) It is a subspace. (b) It is a subspace. (c) It is a subspace. (d) It is not a subspace. (e) It is a subspace.
A4 (a) It is a subspace. (b) It is not a subspace. (c) It is a subspace. (d) It is not a subspace.
A5 Let the set be {v1, ..., vk} and assume that v_i is the zero vector. Then we have
0 = 0v1 + ... + 0v_{i-1} + v_i + 0v_{i+1} + ... + 0vk
Hence, by definition, {v1, ..., vk} is linearly dependent.
Section 4.3 A Problems Al (a) It is a basis. (b) Since it only has two vectors in IR.3, it cannot span IR. 3 and hence cannot be a basis. (c) Since it has four vectors in IR.3, it is linearly dependent and hence cannot be a basis. (d) It is not a basis. (e) It is a basis.
A2 Show that it is a linearly independent spanning set. A3 (a) One possible basis is
(b) One possible basis is
A4 (a) One possible basis is 3.
{[-�l [!]} ml [ il [!]} ·
{[
(b) One possible basis is dimension is 4.
·Hence, the dimension is 2.
·
-
·
=
� �]
{[
-
,
[� -n, [� =� ]}
� =�J
·
(c) 'l3 is a basis. Thus, the dimension is 4.
AS (a) The dimension is 3. (b) The dimension is 3. (c) The dimension is 4.
Hence, the ili mension is 3
.Hence, the dimension is
U � ] [� � ] [� � ]} -
·
·
. Hence, the
A6 Alternate correct answers are possible. (a)
(b)
mrnl} mrnJHJ}
A 7 Alternate correct answers are possible.
(a)
{ � -! � ) ,
,
1 1 1 0 0 ' 0 0 0
1
0 -1
'
0
)
AS (a) One possible basis is (b) One possible basis is
(c) One possible basis is
(d) One possible basis is
{x, 1 - x2}. Hence, the dimension is 2.
{[� �], [� �], rn �]}
{[ :l · [ �]}
is 3.
{[ � �J.rn �]·[� �] }
Section 4.4 A Problems
�) [xJ.
=
(c) [x].=
(d) [x]3 = (e) [x]• =
[_;J.
[y]3 =
[;]
UJ. Ul nl· {�] [y].=
[yJ.
[n
[y]3 =
[-�]
Hl· {�] �l•
. Hence the dimension is 2. ,
{x - 2, x2 - 4}. Hence, the dimension is 2.
(e) One possible basis is
Al (a) [x]3 =
_
_
.Hence, the dimension is 3.
-
. Hence, the dimension
A2 (a) Show that it is linearly independent and spans the plane.
(b)
m m and
are not in the plane We have
m.
=
[ ;J _
A3 (a) Show that it is linearly independent and spansP . 2
(b) [p(x)]"
(c) [2- 4x
=[: l·
[q(x)]" =
[H
!Ox']•=
+
[4- 2x
[H
['(x)].=
We have
7x'J. + [-2 - 2x
+
Hl
= [2- 4x
+
+
3x'J.=
m +!l [�]
l0x2]13 = [(4 - 2)
=
+
(-2 - 2)x
+
(7
+
3)x2]!B
A4 (a) i. A is in the span of B.
ii. B is linearly independent, so it forms a basis for Span B.
iii. [A]_B
=
[-i j;l 3/4
(b) i. A is not in the span of B.
ii. B is linearly independent, so it forms a basis for Span B.
A5 (a) The change of coordinates matrix Q from B-coordinates to S-coordinates is
[! � =�]· l
The change of coordinates matrix P from S-coordinates to 'B-
3
0
[
3I11 coordinates isP= -15/11 -1/11
o 1 0
l
2/11 1/11 . 3/11
(b) The change of coordinates matrix Q from '13-coordinates to S-coordinates is
[-�
5
]
� � .The change of coordinates matrixP from S-coordinates to '13-
-2
1
[
2/9 -1/9 coordinates isP= 7/9 1/9 4/9 7/9
]
1/9 -1/9 . 2/9
(c) The change of coordinates matrix Q from '13-coordinates to S-coordinates is
[- � -� �l· -1 -1
1
The change of coordinates matrix P from S-coordinates to '13-
[
1/3 2/9 coordinates isP= 0 -1/3 1/3 -1/9
]
-8/9 1/3 . 4/9
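As A5 illustrates, the two change of coordinates matrices are inverses of each other: if Q converts B-coordinates to S-coordinates, then P = Q^{-1} converts back. A sketch with a hypothetical basis matrix:

```python
import numpy as np

# Columns of Q are a hypothetical basis B written in standard coordinates,
# so Q converts B-coordinates to standard (S) coordinates.
Q = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
P = np.linalg.inv(Q)  # converts S-coordinates back to B-coordinates

x_B = np.array([1.0, -1.0, 2.0])  # coordinates relative to B
x_S = Q @ x_B                     # standard coordinates
round_trip = np.allclose(P @ x_S, x_B)
print(round_trip)  # True
```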
Section 4.5 A Problems
A1 Show that each mapping preserves addition and scalar multiplication.
A2 (a) det is not linear.
(b) L is linear.
(c) T is not linear.
(d) M is linear.
y
([�\]
=
y.
is in the range of L. We have L(l + 2x + x2)
=
y.
(c) y is not in the range of L. (d) y is not in the range of L. A4 Alternate correct answers are possible. (a) A basis fm Range(L) is rank(L)+Null(L)
=
2
+
{[:J.lm 1
=
(b) A basis foe Range(L) is (1 rank(L) + Null(L)
=
2
+
1
{[-i]}
We have
{[-l]}·
We have
dim JR3. +
=
A basis foe Null(L) is
x, x\. A basis foe Null(L) is
dim JR3.
{[� -�]·[� �]·[� �]}. {[� �]·[� �]·[� �]·[� �]}
(c) AbasisforRange(L)is{l}.Abasis forNull(L)is We have rank(L) + Null(L) = 1 (d) AbasisforRange(L)is
+
3 = dim M(2, 2).
.Abasis forNull(L)
is the empty set. We have rank(L) + Null(L) = 4
Section 4.6 A Problems Al (a) [L]23 =
(b) [L]13 =
A2 (a) [L]13 = (b) [L]13 =
[� -n [� � -1
-1
r-� �] [� � ]
[L(x)]23
�]. 5
=
[�]
[L(x)]23
=
[ �� l -11
+
0 = dim P3•
A3 (a) [L(v1)]_B
=
(b) [L(v,)J.
=
(c) [4v,)]•
=
A4 (a) 23
(b)
B
=
=
(b) [LJ.
[L(v 2)],
=
[4v2ll•
=
[L(v2)J.
=
{[-�], [�]}.
=
,
H � �I 4 m )
=
Ul. oUl ] [10 00 [�1] =
(b) [L]B
-2
=
(c) 45,3,-5) A7 (a) [L]B
=
(b) [L]B
=
(c) [L]B
=
(d) [L]B
=
[H [�]· [ll
[reflo,-2J]B
{[J. [�].[:]}·
(c) L(l, 2
A6 (a)
[!]. [H [�]·
2
3 -3
=
[�� ��] [� -�] [-1408 -11528] [� �]
[L(vi)l•
=
[4v1)]•
[4vi)l•
=
=
=
[-� �]
[proj"' -ol•
=
[�
[H [! � �] [!]· [� � !] l �l [� �l =
[LJ.
=
[L],
=
·
[L]•
=
1
=
[� �] [� �] [ _!] [� -� - � ] [� � �1 [-� � =�] 2
(e) [L], =
(0 [L]•=
4 0
0 -2
0
AS (a) [L]1l =
2 0
-2
(b) [L],
=
I
1 -3
-1
2
(c) [DJ•=
0
0
-
(d) [T]1l =
2
4
2
Section 4.7 A Problems Al In each case, verify that the given mapping is linear, one-to-one, and onto.
(a) DefineL(a+bx+cx2 + dx3)
a b =
c d
(b) Define L
([; �])
a
=
�
d (c) DefineL(a +bx+cx2 +dx3)=
[; �l [ O]
(d) DefineL(a1(x - 2) + a2 (x2 - 2x)) = a i o
a2
·
Chapter 4 Quiz
E Problems
E1 (a) The given set is a subset of M(4, 3) and is non-empty since it clearly contains the zero matrix. Let A and B be any two vectors in the set. Then a11 + a12 + a13 = 0 and b11 + b12 + b13 = 0. Then the first row of A + B satisfies
(a11 + b11) + (a12 + b12) + (a13 + b13) = (a11 + a12 + a13) + (b11 + b12 + b13) = 0 + 0 = 0
so the subset is closed under addition. Similarly, for any t ∈ R, the first row of tA satisfies
ta11 + ta12 + ta13 = t(a11 + a12 + a13) = 0
so the subset is also closed under scalar multiplication. Thus, it is a subspace of M(4, 3) and hence a vector space.
(b) The given set is a subset of the vector space of all polynomials, and it clearly contains the zero polynomial, so it is non-empty. Let p(x) and q(x) be in the set. Then p(1) = 0, p(2) = 0, q(1) = 0, and q(2) = 0. Hence, p + q satisfies
(p + q)(1) = p(1) + q(1) = 0 and (p + q)(2) = p(2) + q(2) = 0
so the subset is closed under addition. Similarly, for any t ∈ R, tp satisfies
(tp)(1) = tp(1) = 0 and (tp)(2) = tp(2) = 0
so the subset is also closed under scalar multiplication. Thus, it is a subspace and hence a vector space.
(c) The set is not a vector space since it is not closed under scalar multiplication. For example, multiplying a matrix in the set by 1/2 can produce a matrix that is not in the set since it contains rational entries.
(d) The given set is a subset of R^3 and is non-empty since it clearly contains the zero vector. Let x and y be in the set. Then x1 + x2 + x3 = 0 and y1 + y2 + y3 = 0. Then x + y satisfies
(x1 + y1) + (x2 + y2) + (x3 + y3) = (x1 + x2 + x3) + (y1 + y2 + y3) = 0 + 0 = 0
so the subset is closed under addition. Similarly, for any t ∈ R, tx satisfies
tx1 + tx2 + tx3 = t(x1 + x2 + x3) = t(0) = 0
so the subset is also closed under scalar multiplication. Thus, it is a subspace of R^3 and hence a vector space.
E2 (a) A set of five vectors in M(2, 2) is linearly dependent, so the set cannot be a basis.
(b) Consider the linear combination of the given vectors equal to the zero matrix. Row reducing the coefficient matrix of the corresponding system (entries lost in extraction) shows that the system has infinitely many solutions, so the set is linearly dependent, and hence it is not a basis.
(c) A set of three vectors in M(2, 2) cannot span M(2, 2), so the set cannot be a basis.
E3 (a) Consider t1 v1 + t2 v2 + t3 v3 + t4 v4 = 0. Row reducing the coefficient matrix of the corresponding system (entries lost in extraction) shows that B = {v1, v2, v3} is a linearly independent set. Moreover, v4 can be written as a linear combination of v1, v2, and v3, so B also spans S. Hence, it is a basis for S and so dim S = 3.
(b) We need to find constants t1, t2, and t3 such that t1 v1 + t2 v2 + t3 v3 equals the given vector (whose entries, including -5, were lost in extraction). Solving the corresponding system gives the coordinate vector [x]_B.
E4 (a) Let u1 and u2 be the given vectors (entries lost in extraction). Then {u1, u2} is a basis for the plane since it is a set of two linearly independent vectors in the plane.
(b) Since u3 does not lie in the plane, the set B = {u1, u2, u3} is linearly independent and hence a basis for R^3.
(c) We have L(v1) = v1, L(v2) = v2, and L(v3) = -v3, so
[L]_B = [1 0 0; 0 1 0; 0 0 -1]
(d) The change of coordinates matrix from B-coordinates to S-coordinates (standard coordinates) is P = [u1 u2 u3]. It follows that [L]_S = P [L]_B P^{-1}.
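The change-of-coordinates computation in E4 (d) can be carried out concretely. A sketch under the simplifying assumption of an orthonormal basis, so that P^{-1} = P^T; the plane (x1 + x3 = 0) and basis vectors here are illustrative, not those of the exercise:

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

s = 1 / math.sqrt(2)
# Columns of P: two orthonormal vectors in the plane x1 + x3 = 0,
# plus the unit normal n = (1, 0, 1)/sqrt(2).
P = transpose([[0.0, 1.0, 0.0],   # u1 (in the plane)
               [s, 0.0, -s],      # u2 (in the plane)
               [s, 0.0, s]])      # u3 (unit normal)
L_B = [[1, 0, 0], [0, 1, 0], [0, 0, -1]]   # reflection in B-coordinates

# Since P has orthonormal columns, P^{-1} = P^T.
L_S = matmul(matmul(P, L_B), transpose(P))

expected = [[0, 0, -1], [0, 1, 0], [-1, 0, 0]]   # equals I - 2 n n^T
assert all(abs(L_S[i][j] - expected[i][j]) < 1e-12
           for i in range(3) for j in range(3))
```

The resulting [L]_S agrees with the direct reflection formula I - 2nn^T, which is the point of the change-of-basis method.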
E5 The change of coordinates matrix from S to B is P (entries lost in extraction). Hence, [L]_B = P^{-1} [L]_S P.
E6 If t1 L(v1) + ... + tk L(vk) = 0, then L(t1 v1 + ... + tk vk) = 0, and hence t1 v1 + ... + tk vk ∈ Null(L). Thus, t1 v1 + ... + tk vk = 0, and hence t1 = ... = tk = 0 since {v1, ..., vk} is linearly independent. Thus, {L(v1), ..., L(vk)} is linearly independent.
E7 (a) False. R^n is an n-dimensional subspace of R^n.
(b) True. The dimension of P2 is 3, so a set of four polynomials in P2 must be linearly dependent.
(c) False. The number of components in a coordinate vector is the number of vectors in the basis. So, if B is a basis for a 4-dimensional subspace, the B-coordinate vector would have only four components.
(d) True. Both ranks are equal to the dimension of the range of L.
(e) False. Consider a linear mapping L : P2 → P2. Then the range of L is a subspace of P2, while for any basis B, the columnspace of [L]_B is a subspace of R^3. Hence, Range(L) cannot equal the columnspace of [L]_B.
(f) False. The mapping L : R → R^2 given by L(x1) = (x1, 0) is one-to-one, but dim R ≠ dim R^2.
CHAPTER 5
Section 5.1
A Problems
A1 (a) 38 (b) -5 (c) 0 (d) 0 (e) 0 (f) 48
A2 (a) 3 (b) 0 (c) 196 (d) -136
A3 (a) 0 (b) 20 (c) 18 (d) -90 (e) 76 (f) 420
A4 (a) -26 (b) 98
A5 (a) -1 (b) 1 (c) -3
Section 5.2
A Problems
A1 (a) det A = 30, so A is invertible. (b) det A = 1, so A is invertible. (c) det A = 8, so A is invertible. (d) det A = 0, so A is not invertible. (e) det A = -1120, so A is invertible.
A2 (a) 14 (b) -12 (c) -5 (d) 716
A3 (a) det A = 3p - 14, so A is invertible for all p ≠ 14/3.
(b) det A = -5p - 20, so A is invertible for all p ≠ -4.
(c) det A = 2p - 116, so A is invertible for all p ≠ 58.
A4 (a) det A = 13, det B = 14, and det AB = (13)(14) = 182.
(b) det A = -2, det B = 56, and det AB = (-2)(56) = -112.
A5 (a) Since rA is the matrix where each of the n rows of A has been multiplied by r, we can use Theorem 5.2.1 n times to get det(rA) = r^n det A.
(b) We have AA^{-1} = I, so 1 = det I = det(AA^{-1}) = (det A)(det A^{-1}) by Theorem 5.2.7. Since det A ≠ 0, we get det A^{-1} = 1/det A.
(c) By Theorem 5.2.7, we have 1 = det I = det A^3 = (det A)^3. Taking cube roots of both sides gives det A = 1.
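The determinant identities used in A5 can be spot-checked with a small cofactor-expansion routine; the matrices below are illustrative:

```python
def det(M):
    """Determinant by cofactor expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2, 0], [3, -1, 4], [0, 5, 2]]
B = [[2, 0, 1], [1, 1, 0], [0, 3, -2]]
r, n = 3, 3

assert det(matmul(A, B)) == det(A) * det(B)   # det AB = (det A)(det B)
rA = [[r * x for x in row] for row in A]
assert det(rA) == r ** n * det(A)             # det(rA) = r^n det A
```

Integer arithmetic keeps both checks exact.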
Section 5.3
A Problems
A1 (a)-(d) The inverses, found by the cofactor (adjugate) method, have entries that were lost in extraction.
A2 (a) cof A = [entries lost in extraction]
(b) A(cof A)^T = (-2t - 17)I, so det A = -2t - 17 and A^{-1} = (1/(-2t - 17))(cof A)^T, provided -2t - 17 ≠ 0.
A3 (a)-(d) By Cramer's Rule; the solution entries (among them -4/19, 51/19, 7/5, 2/5, -8/5, -13/5, and -11/15) were interleaved in extraction.
Section 5.4
A Problems
A1 (a) 11 (b) Au and Av (entries lost in extraction) (c) -26 (d) |det [Au Av]| = 286 = |-26||11|
A2 Au and Av (entries lost in extraction); Area = |det [Au Av]| = |-8| = 8.
A3 (a) 63 (b) 42 (c) 2646
A4 (a) 41 (b) 78 (c) 3198
A5 (a) 5 (b) 245
A6 The n-volume of the parallelotope induced by v1, ..., vn is |det [v1 ... vn]|. Since adding a multiple of one column to another does not change the determinant (see Problem 5.2.D8), we get that
|det [v1 ... v_{n-1} (vn + t v1)]| = |det [v1 ... vn]|
This is the volume of the parallelotope induced by v1, ..., v_{n-1}, vn + t v1.
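The invariance argument in A6 can be illustrated in the 2 × 2 case, where the area of the parallelogram induced by u and v is |det [u v]|; the vectors are illustrative:

```python
def det2(u, v):
    """Determinant of the 2x2 matrix with columns u and v."""
    return u[0] * v[1] - u[1] * v[0]

u, v = [3.0, 1.0], [1.0, 3.0]
area = abs(det2(u, v))            # area of the parallelogram induced by u, v
assert area == 8.0

# Adding a multiple of one edge to the other leaves the area unchanged (A6).
t = 2.5
sheared = [v[0] + t * u[0], v[1] + t * u[1]]
assert abs(det2(u, sheared)) == area
```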
Chapter 5 Quiz
E Problems
E1, E2 The determinants are evaluated by cofactor expansion and row operations; the displayed matrices and intermediate arrays were lost in extraction. The E1 expansion involves factors 2(-1)^{2+3} and (-2)(3)(-1)^{2+3}, and the E2 determinant reduces to triangular form, giving 5(2)(4)(3)(1) = 120.
E3 0
E4 The matrix is invertible for all k except the values excluded by the determinant condition (lost in extraction).
E5 (a) 3(7) = 21 (b) (-1)^4(7) = 7 (c) 2^5(7) = 224 (e) 7(7) = 49 (the answer to (d) was lost in extraction)
E6, E7 The requested entry of A^{-1} and the Cramer's Rule value x2 were lost in extraction.
E8 (a) det = 33 (b) |-24|(33) = 792
CHAPTER 6
Section 6.1
A Problems
A1 Two of the given vectors are not eigenvectors of A; the others are eigenvectors with eigenvalues λ = -6, λ = 0, and λ = 4 (the vector entries were lost in extraction).
A2 (a)-(f) Each answer lists the eigenvalues λ1, λ2 and the eigenspace of each, spanned by a single vector; the eigenvalues (which include 2, -3, 1, 0, 5, -1, and 3) and the spanning vectors were interleaved in extraction.
A3 (a)-(f) Each answer lists the algebraic multiplicity of each eigenvalue, a basis for its eigenspace, and the resulting geometric multiplicity. The recoverable cases: in (e), λ1 = 0 has algebraic and geometric multiplicity 2 while λ2 = 6 has algebraic and geometric multiplicity 1; in (f), λ1 = 2 has algebraic and geometric multiplicity 2 while λ2 = 5 has algebraic and geometric multiplicity 1. The remaining values were interleaved in extraction.
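The eigenvalue computations above follow the same pattern for any 2 × 2 matrix: solve the characteristic polynomial, then check Av = λv. A sketch with an illustrative matrix (not one from the exercises):

```python
import math

def eig2(A):
    """Eigenvalues of a 2x2 matrix with real eigenvalues,
    via the characteristic polynomial t^2 - (tr A)t + det A."""
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

def matvec(A, x):
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

A = [[4.0, 2.0], [1.0, 3.0]]
l1, l2 = eig2(A)
assert abs(l1 - 5.0) < 1e-12 and abs(l2 - 2.0) < 1e-12

v1 = [2.0, 1.0]                  # eigenvector for lambda = 5: (A - 5I)v = 0
assert all(abs(y - 5.0 * x) < 1e-12 for y, x in zip(matvec(A, v1), v1))
```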
Section 6.2
A Problems
A1 (a) The columns of P are eigenvectors of A (with eigenvalues that include -7 and -2), so P diagonalizes A and P^{-1}AP is diagonal (the matrix entries were lost in extraction).
(b) P does not diagonalize A.
(c), (d) P^{-1}AP is the diagonal matrix of eigenvalues (entries lost in extraction).
A2 Alternate correct answers are possible.
(a) The eigenvalues each have algebraic multiplicity 1, and each eigenspace is one-dimensional, so the geometric multiplicities are 1. Therefore, by Corollary 6.2.3, A is diagonalizable (the matrices P and D were lost in extraction).
(b) The eigenvalues are λ1 = -6 and λ2 = 1, each with algebraic multiplicity 1. Each eigenspace is one-dimensional, so the geometric multiplicities are 1. Therefore, by Corollary 6.2.3, A is diagonalizable (the matrices P and D were lost in extraction).
(c) The eigenvalues of A are not real. Hence, A is not diagonalizable over R.
(d) The eigenvalues are λ1 = -1, λ2 = 2, and λ3 = 0, each with algebraic multiplicity 1. Each eigenspace is one-dimensional, so by Corollary 6.2.3, A is diagonalizable.
(e) The eigenvalues are λ1 = 1 with algebraic multiplicity 2 and λ2 = 5 with algebraic multiplicity 1. The eigenspace of λ1 is one-dimensional, so its geometric multiplicity is 1. Therefore, by Corollary 6.2.3, A is not diagonalizable since the geometric multiplicity of λ1 does not equal its algebraic multiplicity.
(f) The eigenvalues of A are 1, i, and -i. Hence, A is not diagonalizable over R.
(g) The eigenvalues are λ1 = 2 with algebraic multiplicity 2 and λ2 = -1 with algebraic multiplicity 1. The eigenspace of λ1 is two-dimensional and the eigenspace of λ2 is one-dimensional, so by Corollary 6.2.3, A is diagonalizable.
A3 Alternate correct answers are possible.
(a) The only eigenvalue is λ1 = 3 with algebraic multiplicity 2. Its eigenspace is one-dimensional, so the geometric multiplicity is 1. Therefore, by Corollary 6.2.3, A is not diagonalizable since the geometric multiplicity of λ1 does not equal its algebraic multiplicity.
(b) The eigenvalues are λ1 = 0 and λ2 = 8, each with algebraic multiplicity 1. Each eigenspace is one-dimensional, so by Corollary 6.2.3, A is diagonalizable.
(c) The eigenvalues are λ1 = 3 and λ2 = -7, each with algebraic multiplicity 1. Each eigenspace is one-dimensional, so by Corollary 6.2.3, A is diagonalizable.
(d) The eigenvalues are λ1 = 2 with algebraic multiplicity 2 and λ2 = -2 with algebraic multiplicity 1. The eigenspace of λ1 is one-dimensional, so its geometric multiplicity is 1. Therefore, by Corollary 6.2.3, A is not diagonalizable since the geometric multiplicity of λ1 does not equal its algebraic multiplicity.
(e) The eigenvalues are λ1 = -2 with algebraic multiplicity 2 and λ2 = 4 with algebraic multiplicity 1. The eigenspace of λ1 is two-dimensional and the eigenspace of λ2 is one-dimensional, so by Corollary 6.2.3, A is diagonalizable (the matrices P and D were lost in extraction).
(f) The eigenvalues are λ1 = 2 with algebraic multiplicity 2 and λ2 = 1 with algebraic multiplicity 1. The eigenspace of λ1 is two-dimensional and the eigenspace of λ2 is one-dimensional, so by Corollary 6.2.3, A is diagonalizable (the matrices P and D were lost in extraction).
(g) The eigenvalues of A are 2, 2 + i, and 2 - i. Hence, A is not diagonalizable over R.
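Diagonalizability can be verified directly by forming P^{-1}AP from a matrix of eigenvectors. A sketch with an illustrative 2 × 2 symmetric matrix (eigenvalues 3 and -1):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def inv2(M):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[1.0, 2.0], [2.0, 1.0]]
P = [[1.0, 1.0], [1.0, -1.0]]   # columns: eigenvectors for lambda = 3, -1
D = matmul(matmul(inv2(P), A), P)

assert abs(D[0][0] - 3.0) < 1e-12 and abs(D[1][1] + 1.0) < 1e-12
assert abs(D[0][1]) < 1e-12 and abs(D[1][0]) < 1e-12   # D is diagonal
```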
Section 6.3
A Problems
A1 (a) A is not a Markov matrix. (b) B is a Markov matrix (its invariant state was lost in extraction). (c) C is not a Markov matrix. (d) D is a Markov matrix (its invariant state was lost in extraction).
A2 In the long run the population splits between urban and rural dwellers; the percentages (25%, 33%, 67%, and 75%) were interleaved in extraction.
A3 T = (1/10) × [entries lost in extraction]. In the long run, 60% of the cars will be at the airport, 20% will be at the train station, and 20% will be at the city centre.
A4 (a) The dominant eigenvalue is λ = 5. (b) The dominant eigenvalue is λ = 6.

Section 6.4
A Problems
A1 (a) ae^{-5t} v1 + be^{4t} v2, a, b ∈ R (b) ae^{-0.5t} v1 + be^{0.3t} v2, a, b ∈ R (the constant vectors v1, v2 were lost in extraction)
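The invariant state of a Markov matrix can be approximated by iterating x ↦ Tx, since the state vectors converge to the eigenvector for λ = 1. A sketch with an illustrative 2-state matrix (not the one from the exercise):

```python
def step(T, x):
    """One step of the Markov process: x -> Tx (columns of T sum to 1)."""
    return [sum(T[i][j] * x[j] for j in range(len(x))) for i in range(len(T))]

T = [[0.9, 0.3],
     [0.1, 0.7]]
x = [0.5, 0.5]
for _ in range(200):
    x = step(T, x)

# Invariant state: eigenvector of T for lambda = 1 with components summing to 1.
assert abs(x[0] - 0.75) < 1e-9 and abs(x[1] - 0.25) < 1e-9
assert abs(step(T, x)[0] - x[0]) < 1e-9   # fixed by T
```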
Chapter 6 Quiz
E Problems
E1 (a) The first vector is not an eigenvector of A. (b), (c) The next two are eigenvectors with eigenvalue 1. (d) The last is an eigenvector with eigenvalue -1. (The vector entries were lost in extraction.)
E2 The matrix is diagonalizable; the matrices P and D were lost in extraction.
E3 λ1 = 2 has algebraic and geometric multiplicity 2; λ2 = 4 has algebraic and geometric multiplicity 1. Thus, A is diagonalizable.
E4 Since A is invertible, 0 is not an eigenvalue of A (see Problem 6.2.D8). Then, if Ax = λx, we get x = λA^{-1}x, so A^{-1}x = (1/λ)x.
E5 (a) One-dimensional (b) Zero-dimensional (c) Rank(A) = 2
E6 The invariant state is x (entries lost in extraction).
E7 ae^{-0.1t} v1 + be^{0.4t} v2, a, b ∈ R (the constant vectors were lost in extraction)
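The fact used in E4, that an eigenvector of A for λ is an eigenvector of A^{-1} for 1/λ, can be checked numerically; the matrix is illustrative:

```python
def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

A = [[4.0, 2.0], [1.0, 3.0]]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
Ainv = [[A[1][1] / det, -A[0][1] / det],
        [-A[1][0] / det, A[0][0] / det]]

lam, v = 5.0, [2.0, 1.0]          # Av = 5v for this matrix
assert all(abs(y - lam * x) < 1e-12 for y, x in zip(matvec(A, v), v))
# A^{-1}v = (1/lambda)v, as in E4.
assert all(abs(y - x / lam) < 1e-12 for y, x in zip(matvec(Ainv, v), v))
```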
CHAPTER 7
Section 7.1
A Problems
A1 (a) The set is orthogonal. P = [1/√5 2/√5; 2/√5 -1/√5]
(b) The set is not orthogonal.
(c) The set is orthogonal (the entries of P were lost in extraction).
(d) The set is not orthogonal.
A2, A3 The coordinate vectors [w]_B, [x]_B, [y]_B, and [z]_B were lost in extraction.
A4 (a) It is orthogonal. (b) It is not orthogonal: the columns of the matrix are not orthogonal. (c) It is not orthogonal: the columns are not unit vectors. (d) It is not orthogonal: the third column is not orthogonal to the first or second column. (e) It is orthogonal.
A5, A6 The matrices B1, B2, P, and [L]_B were lost in extraction; A6 uses the fact that since B is orthonormal, the matrix P whose columns are the vectors of B is orthogonal.
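Orthogonality of a matrix, as in A1 (a), amounts to P^T P = I. A check of the matrix from A1 (a):

```python
import math

s = 1 / math.sqrt(5)
P = [[s, 2 * s],
     [2 * s, -s]]   # from A1 (a): the columns are orthonormal

# Compute P^T P entrywise and compare with the identity.
PtP = [[sum(P[k][i] * P[k][j] for k in range(2)) for j in range(2)]
       for i in range(2)]
assert all(abs(PtP[i][j] - (1.0 if i == j else 0.0)) < 1e-12
           for i in range(2) for j in range(2))
```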
Section 7.2
A Problems
A1-A4 The projections, perpendicular parts, and orthogonal/orthonormal bases produced by the Gram-Schmidt procedure were lost in extraction.
A5 Let {v1, ..., vk} be an orthonormal basis for S and let {v_{k+1}, ..., vn} be an orthonormal basis for S⊥. Then {v1, ..., vn} is an orthonormal basis for R^n. Thus, any x ∈ R^n can be written
x = (x · v1)v1 + ... + (x · vn)vn
Then
perp_S(x) = x - proj_S(x) = x - [(x · v1)v1 + ... + (x · vk)vk] = (x · v_{k+1})v_{k+1} + ... + (x · vn)vn = proj_{S⊥}(x)
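The orthonormal bases referred to in A1-A4 come from the Gram-Schmidt procedure; a minimal sketch on illustrative vectors:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors):
    """Orthonormalize a linearly independent list of vectors in R^n."""
    basis = []
    for v in vectors:
        w = v[:]
        for q in basis:
            c = dot(v, q)                       # component along q
            w = [wi - c * qi for wi, qi in zip(w, q)]
        norm = math.sqrt(dot(w, w))
        basis.append([wi / norm for wi in w])
    return basis

q1, q2 = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0]])
assert abs(dot(q1, q2)) < 1e-12                          # orthogonal
assert abs(dot(q1, q1) - 1.0) < 1e-12                    # unit vectors
assert abs(dot(q2, q2) - 1.0) < 1e-12
```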
Section 7.3
A Problems
A1 (a) y = 10.5 - 1.9t (b) y = 3.4 + 0.8t
A2 (a), (b) The best-fitting quadratics (coefficients lost in extraction)
A3 (a) x = [383/98, 32/49] (b) x = [167/650, -401/325]
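The best-fitting lines in A1 come from solving the normal equations A^T A x = A^T y. For the line y = a + bt these reduce to a 2 × 2 system; the data below are illustrative (chosen to lie exactly on y = 1 + 2t):

```python
ts = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]

# Normal equations for y = a + b t:
#   [ n    S_t  ] [a]   [ S_y  ]
#   [ S_t  S_tt ] [b] = [ S_ty ]
n = len(ts)
S_t = sum(ts); S_tt = sum(t * t for t in ts)
S_y = sum(ys); S_ty = sum(t * y for t, y in zip(ts, ys))

det = n * S_tt - S_t * S_t
a = (S_tt * S_y - S_t * S_ty) / det
b = (n * S_ty - S_t * S_y) / det

assert abs(a - 1.0) < 1e-12 and abs(b - 2.0) < 1e-12
```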
Section 7.4
A Problems
A1 (a) ⟨x - 2x^2, 1 + 3x^2⟩ = -84 (b) ⟨2 - x + 3x^2, 4 - 3x⟩ = -46 (c), (d) The norms ||3 - 2x + x^2|| and ||9 + 9x + 9x^2|| were lost in extraction.
A2 (a) It does not define an inner product since ⟨x - x^2, x - x^2⟩ = 0. (b) It does not define an inner product since ⟨-p, q⟩ ≠ -⟨p, q⟩. (c) It does define an inner product. (d) It does not define an inner product since ⟨x, x⟩ = -2 for some x.
A3, A4 The orthonormal bases, coordinate vectors, and projections were lost in extraction.
A5 We have
⟨v1 + ... + vk, v1 + ... + vk⟩ = ⟨v1, v1⟩ + ... + ⟨v1, vk⟩ + ⟨v2, v1⟩ + ⟨v2, v2⟩ + ... + ⟨v2, vk⟩ + ... + ⟨vk, v1⟩ + ... + ⟨vk, vk⟩
Since {v1, ..., vk} is orthogonal, we have ⟨vi, vj⟩ = 0 if i ≠ j. Hence,
⟨v1 + ... + vk, v1 + ... + vk⟩ = ⟨v1, v1⟩ + ... + ⟨vk, vk⟩

Chapter 7 Quiz
E Problems
E1 (a) Neither. The vectors are not of unit length, and the first and third vectors are not orthogonal. (b) Neither. The vectors are not orthogonal. (c) The set is orthonormal.
E2 Let v1, v2, and v3 denote the vectors in B. We have v1 · x = 2, v2 · x = 9/√3, and v3 · x = -6/√3; these are the components of [x]_B.
E3 (a) We have
1 = det I = det(P^T P) = (det P^T)(det P) = (det P)^2
Thus, det P = ±1.
(b) We have P^T P = I and R^T R = I. Hence, (PR)^T(PR) = R^T P^T P R = R^T R = I. Thus, PR is orthogonal.
E4 (a) Denote the vectors in the spanning set for S by z1, z2, and z3. Let w1 = z1 and apply the Gram-Schmidt procedure; normalizing {w1, w2, w3} gives an orthonormal basis (the vector entries were lost in extraction).
(b) The closest point in S to x is proj_S x. We find that proj_S x = x. Hence, x is already in S.
E5 (a) Let A be the given matrix with det A = 0. Then ⟨A, A⟩ = det(AA) = (det A)(det A) = 0 even though A ≠ O_{2,2}. Hence, it is not an inner product.
(b) We verify that ⟨ , ⟩ satisfies the three properties of the inner product:
⟨A, A⟩ = a11^2 + 2a12^2 + 2a21^2 + a22^2 ≥ 0, and equals zero if and only if A = O_{2,2};
⟨A, B⟩ = a11 b11 + 2a12 b12 + 2a21 b21 + a22 b22 = b11 a11 + 2b12 a12 + 2b21 a21 + b22 a22 = ⟨B, A⟩;
⟨A, sB + tC⟩ = a11(s b11 + t c11) + 2a12(s b12 + t c12) + 2a21(s b21 + t c21) + a22(s b22 + t c22) = s(a11 b11 + 2a12 b12 + 2a21 b21 + a22 b22) + t(a11 c11 + 2a12 c12 + 2a21 c21 + a22 c22) = s⟨A, B⟩ + t⟨A, C⟩.
Thus, it is an inner product.
CHAPTER 8
Section 8.1
A Problems
A1 (a) A is symmetric. (b) B is symmetric. (c) C is not symmetric since c12 ≠ c21. (d) D is symmetric.
A2 Alternate correct answers are possible. (a)-(e) The orthogonal matrices P and diagonal matrices D were lost in extraction.
Section 8.2
A Problems
A1 (a) Q(x1, x2) = x1^2 + 6x1x2 + ... (b) Q(x1, x2, x3) = x1^2 - 2x2^2 + 6x2x3 + ... (c) Q(x1, x2, x3) = -2x1^2 + 2x1x2 + 2x1x3 + 2x2x3 + ... (the remaining terms were lost in extraction).
A2 (a)-(e) Each form is brought to diagonal form by an orthogonal change of variables. Recoverable diagonal forms include y1^2 + 6y2^2 (positive definite), y1^2 - 5y2^2 (indefinite), 2y1^2 + 3y2^2 - 6y3^2 (indefinite), and -3y1^2 - 6y2^2 - 4y3^2 (negative definite); the matrices P were lost in extraction.
A3 (a) Positive definite.
(b) Positive definite.
(c) Indefinite.
(d) Negative definite.
(e) Positive definite.
(f) Indefinite.
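The classifications in A2 and A3 follow from the signs of the eigenvalues of the symmetric matrix of the form. A sketch for the 2 × 2 case, with illustrative matrices:

```python
import math

def classify(A):
    """Classify a symmetric 2x2 quadratic form by its eigenvalue signs."""
    (a, b), (_, d) = A
    tr, det = a + d, a * d - b * b
    disc = math.sqrt(tr * tr - 4 * det)   # real since A is symmetric
    l1, l2 = (tr + disc) / 2, (tr - disc) / 2
    if l1 > 0 and l2 > 0:
        return "positive definite"
    if l1 < 0 and l2 < 0:
        return "negative definite"
    if l1 * l2 < 0:
        return "indefinite"
    return "semidefinite"

assert classify([[2.0, 0.0], [0.0, 3.0]]) == "positive definite"
assert classify([[1.0, 2.0], [2.0, 1.0]]) == "indefinite"           # 3, -1
assert classify([[-2.0, 1.0], [1.0, -2.0]]) == "negative definite"  # -1, -3
```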
Section 8.3
A Problems
A1-A4 The answers are sketches of the graphs of the given equations (not recoverable from the extraction).
A5 (a) The graph of x^T Ax = 1 is a set of two parallel lines. The graph of x^T Ax = -1 is the empty set.
(b) The graph of x^T Ax = 1 is a hyperbola. The graph of x^T Ax = -1 is a hyperbola.
(c) The graph of x^T Ax = 1 is a hyperboloid of two sheets. The graph of x^T Ax = -1 is a hyperboloid of one sheet.
(d) The graph of x^T Ax = 1 is a hyperbolic cylinder. The graph of x^T Ax = -1 is a hyperbolic cylinder.
(e) The graph of x^T Ax = 1 is a hyperboloid of one sheet. The graph of x^T Ax = -1 is a hyperboloid of two sheets.
Chapter 8 Quiz
E Problems
E1 The orthogonal matrix P was lost in extraction.
E2 (a) Q(x) = 7y1^2 + 3y2^2, which is positive definite. Q(x) = 1 is an ellipse, and Q(x) = 0 is the origin.
(b) Q(x) = -5y1^2 + 5y2^2 - 4y3^2, which is indefinite. Q(x) = 1 is a hyperboloid of two sheets, and Q(x) = 0 is a cone. (The matrices A and P were lost in extraction.)
E3 The answer is a sketch (not recoverable from the extraction).
E4 Since A is positive definite, we have ⟨x, x⟩ = x^T Ax ≥ 0, and ⟨x, x⟩ = 0 if and only if x = 0. Since A is symmetric, we have
⟨x, y⟩ = x^T Ay = (x^T Ay)^T = y^T A^T x = y^T Ax = ⟨y, x⟩
For any x, y, z ∈ R^n and s, t ∈ R, we have
⟨x, sy + tz⟩ = x^T A(sy + tz) = s x^T Ay + t x^T Az = s⟨x, y⟩ + t⟨x, z⟩
Thus, ⟨x, y⟩ is an inner product on R^n.
E5 Since A is a 4 × 4 symmetric matrix, there exists an orthogonal matrix P that diagonalizes A. Since the only eigenvalue of A is 3, we must have P^T AP = 3I. Then we multiply on the left by P and on the right by P^T, and we get
A = P(3I)P^T = 3PP^T = 3I
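E4's construction ⟨x, y⟩ = x^T A y can be checked numerically for a sample positive definite A; the matrix and vectors are illustrative:

```python
def inner(A, x, y):
    """<x, y> = x^T A y for a symmetric positive definite A."""
    Ay = [sum(a * b for a, b in zip(row, y)) for row in A]
    return sum(xi * v for xi, v in zip(x, Ay))

A = [[2.0, 1.0], [1.0, 2.0]]     # symmetric, eigenvalues 1 and 3 > 0
x, y, z = [1.0, -2.0], [3.0, 0.5], [0.0, 4.0]

assert abs(inner(A, x, y) - inner(A, y, x)) < 1e-12   # symmetry
assert inner(A, x, x) > 0                             # positivity, x != 0

s, t = 2.0, -1.5                                      # bilinearity
lhs = inner(A, x, [s * a + t * b for a, b in zip(y, z)])
assert abs(lhs - (s * inner(A, x, y) + t * inner(A, x, z))) < 1e-9
```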
CHAPTER 9
Section 9.1
A Problems
A1 (a) 5 + 7i (b) -3 - 4i (c) -7 + 2i (d) -14 + 5i
A2 (a) 9 + 7i (b) -10 - 10i (c) 2 + 25i (d) -2
A3 (a) 3 + 5i (b) 2 - 7i (c) 3 (d) 4i
A4 (a) Re(z) = 3, Im(z) = 6 (b) Re(z) = 17, Im(z) = -1 (c) Re(z) = 24/37, Im(z) = -4/37 (d) Re(z) = 0, Im(z) = 1
A5 (b) 6/53 + (21/53)i (the remaining quotients were interleaved in extraction)
A6 (a) z1z2 = 2√2 (cos θ + i sin θ) and z1/z2 in polar form (the moduli and angles were partially lost in extraction)
(b) z1z2 and z1/z2 in polar form, with angles 11π/12 and 17π/12 among the computations (the pairing was lost in extraction). (This answer can be checked using Cartesian form.)
(c) z1z2 = 4 - 7i (the quotient was lost in extraction)
(d) z1z2 = -17 + 9i (the quotient was lost in extraction). (This answer can be checked using Cartesian form.)
A7 (a) -54 - 54i (b) -8 - 8√3 i (the remaining parts were lost in extraction)
A8 (a)-(d) Each set of roots has the form r^{1/n}[cos((θ + 2kπ)/n) + i sin((θ + 2kπ)/n)] for k = 0, 1, ..., n - 1; the specific moduli and angles were interleaved in extraction.
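The root formulas in A8 follow de Moivre's pattern; a sketch using Python's standard cmath module (the input -8 is illustrative):

```python
import cmath
import math

def nth_roots(z, n):
    """The n distinct n-th roots of a nonzero complex number z."""
    r, theta = cmath.polar(z)
    return [cmath.rect(r ** (1 / n), (theta + 2 * math.pi * k) / n)
            for k in range(n)]

roots = nth_roots(-8 + 0j, 3)                    # cube roots of -8
assert any(abs(w + 2) < 1e-12 for w in roots)    # -2 is a cube root of -8
assert all(abs(w ** 3 + 8) < 1e-9 for w in roots)
```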
Section 9.2
A Problems
A1 (a) The general solution is z = [entries lost in extraction].
(b) The general solution is z = p + t v, t ∈ C (the entries of p and v, which involve -1 + 2i and -i, were interleaved in extraction).
Section 9.3
A Problems
A1, A2 The requested complex vectors, images, and bases were interleaved in extraction; A2 includes a basis for Range(L) and a basis for Null(L).
A3 (a) A basis for Row(A), a basis for Col(A), and a basis for Null(A) (entries lost in extraction).
(b) A basis for Row(B) and a basis for Col(B); the basis for Null(B) is the empty set.
(c) A basis for Row(C), a basis for Col(C), and a basis for Null(C) (entries lost in extraction).
Section 9.4
A Problems
A1 (a)-(d) Each matrix is diagonalized as P^{-1}AP = D, with complex eigenvalues that include ±2i and 1 ± 2i (the entries of P and D were interleaved in extraction).
Section 9.5
A Problems
A1 (a)-(d) ⟨u, v⟩ and ⟨v, u⟩ are complex conjugates of each other (values such as 2 ± 5i and ±6i appear); the lengths include ||v|| = √26 and ||v|| = √33 (the full values were interleaved in extraction).
A2 (a) A is not unitary. (b) B is unitary. (c) C is unitary. (d) D is unitary.
A3 (a) ⟨u, v⟩ = 0 (b) [lost in extraction]
A4 (a) We have
1 = det I = det(U*U) = det(U*) det U = (conjugate of det U)(det U) = |det U|^2
(b) The matrix U (entries lost in extraction) is unitary and det U = i.
Section 9.6
A Problems
A1 (a) A is Hermitian; it is unitarily diagonalized by a matrix U (entries lost in extraction).
(b) B is not Hermitian.
(c) C is Hermitian (U and D lost in extraction).
(d) F is Hermitian; the diagonalizing U involves constants a ≈ 52.361 and b ≈ 7.639 (the remaining entries were lost in extraction).
Chapter 9 Quiz
E Problems
E1 z1z2 and z1/z2 in polar form; the values (involving 4√2 and the angle -7π/12) were interleaved in extraction.
E2 The square roots of i are e^{iπ/4} and e^{i5π/4}.
E3 (a) 3 + 5i (b) 9 + 3i (c) 11 + 4i (d) 11 - 4i (the remaining parts were lost in extraction)
E4, E5 The answers (complex vectors and values) were lost in extraction.
E6 (a) P and P^{-1}AP = D, for a matrix with eigenvalues 2 ± 3i (entries lost in extraction) (b) P and the real canonical form C (entries lost in extraction); one verifies that UU* = I.
A is Hermitian if A* = A. Thus, we must have 3 + ki = 3 - i and 3 - ki = 3 + i, which is only true when k = -1. Thus, A is Hermitian if and only if k = -1.
(b) If k = -1, then det(A - λI) = (λ + 2)(λ - 5), so the eigenvalues are λ1 = -2 and λ2 = 5. A basis for the eigenspace of λ1 and a basis for the eigenspace of λ2 consist of vectors involving 3 + i and 3 - i (the exact entries were lost in extraction). We can easily verify that these eigenvectors are orthogonal.
Index A
standard for R",321
C",413
addition
subspaces, 97-99
eigenvalues, 419
complex numbers, 396
taking advantage of direction, 218
matrices, 419--423
general vector spaces, 198
vector space, 206--209
properties, 397
matrices, 117-120
best-fitting curve, finding, 342-346
parallelogram rule for, 2
block multiplication, 127-128
polynomials, 193-196
Bombelli, Rafael, 396
real scalars, 399 scalars, 117-120 vector space, 199-200 vector space over R, 197-201 vectors, 2,10, 15-16
repeated, 423 complex eigenvectors, 419
C vector space basis, 412
complex exponential, 402--404
complex multiplication as matrix
complex inner products Cauchy-Schwarz and triangle inequalities,
mapping, 415
adjugate method, 276
non-zero elements linearly dependent, 412
algebraic multiplicity, 302,303
one-dimensional complex vector
eigenvalues, 296--297
capacitor, capacitance of, 408
Approximation Theorem, 336,343,345
Cauchy-Schwarz and triangle inequalities,
change of coordinates equation,
parallelogram, 280-282 Argand diagram, 399,405 arguments, 400--401 augmented matrix, 69-72 axial forces, 105
complex multiplication as matrix mappings, 415 complex numbers, 417
426--428 Cauchy-Schwarz Inequality, 33
determinants, 280-283
complex matrices and diagonalization, 417--418
125-126
angular momentum vectors, 390 arbitrary bases, 321
426--428 Hermitian property, 426 length, 426
space, 412 cancellation law and matrix multiplication,
angles, 29-31
area
roots of polynomial equations, 398 complex eigenvalues of A, 418--419
c
adjugate matrix, 276
allocating resources, 107-109
quotient of complex numbers, 397-398
addition, 396 arguments, 400--401 arithmetic of, 396--397
222,240,329 change of coordinates matrix, 222-224
complex conjugates, 397-398
characteristic polynomial, 293
complex exponential, 402--404
classification and graphs, 380
complex plane, 399
en complex vector space
division, 397-398
complex conjugate, 413
electrical circuit equations, 408--410
B
inner products, 425
imaginary part, 396
back-substitution, 66
orthogonality, 429--431
modulus, 399--401
bases (basis)
standard inner product, 425
multiplication, 396
arbitrary, 321 C vector space, 412 complex vector spaces, 412
vector space, 412
multiplication as matrix mapping, 415 n-th roots, 404--405
codomain, 131 linear mappings, 134
polar form, 399--402
coordinate vector, 219
coefficient matrix, 69-70, 71
powers, 402--404
coordinates with respect to, 218-224
coefficients, 63
quotient, 397-398
definition, 22
cofactor expansion, 259-261
real part, 396
eigenvectors, 299-300,308
cofactor matrix, 274-275
scalars, 411
extending linearly independent subset
cofactor method, 276
two-dimensional real vector space, 412
to, 213-216 linearly dependent, 206--208
vector spaces over, 413--414
cofactors 3
x
3 matrix, 257
complex plane, 399
linearly independent, 206--207,211
determinants in terms of, 255-261
mutually orthogonal vectors, 321
irrelevant values, 259
basis, 412
nullspace, 229-230
matrix inverse, 274-278
complex numbers, 396--40 5
obtaining from arbitrary wFinite spanning
of n
set, 209-211 ordered, 219,236 preferred, 218
x
n matrix, 258
complex vector spaces
dimension, 412
column matrix, 117
eigenvectors, 417--423
columnspace, 154-155,414
inner products, 425--431
basis of, 158-159
one-dimensional, 412
range, 229-230
complete elimination, 84
scalars and coordinates of, 417
standard basis properties, 321
complex conjugates
subspaces, 413--414
529
530
Index
forming basis,308
symmetric matrices,327, 363-370,
complex vector spaces (continued)
linear transformation,289-290
373-375
systems with complex numbers,407-410 two-dimensional, 412
Diagonalization Theorem,30 I,307
linearly independent, 300
vector spaces over C, 4 I1-4 I5
differential equations and diagonalization,
mappings, 289-291
413-414 components of vectors, 2
matrices,291
315-317
vector spaces over complex numbers,
dilations and planes, 145
non-real, 301
dimensions,99
non-trivial solution, 292
compositions, I 39-141
finite dimensional vector space,215
non-zero vectors,289
computer calculations choice of pivot
trivial vector space, 212
orthogonality,367
vector spaces,211-213
principal axes,370
affecting accuracy,78 cone,386 conic sections, 384
three-dimensional space, 1 I
conjugate transpose,428
two-dimensional case, 11
consistent solutions,75-76
direction vector and parallel lines,5-6
consistent systems,75-76
distance,44
constants, orthogonality of, 356
division
constraint, I07
complex numbers,397-398
contractions and planes, 145
matrices,125
convergence, 357
polar forms,401
coordinate vector,2 I9
domain, 131 linear mappings,134
coordinates with respect to basis,218-224 orthonormal bases,323-329 corresponding homogeneous system solution
dominant eigenvalues,312
perpendicular part, 43
Cramer's Rule,276-278
projections, 40-44
cross-products,51-54
properties,348
formula for, 51-52
R2,28-31
length,52-54
R",31-34,323,354
properties,52
symmetric, 335
current,102
E eigenpair,289
de Moivre's Formula, 402-404
eigenspaces,292-293
3
x
algebraic multiplicity, 296-297, 302,303
2
x
2 matrix,255-256, 259
complex eigenvalue of A, 418-419
3
x
3 matrix, 256-258
deficient, 297
area, 280-283
dominant,312
cofactors,255-261
eigenspaces and,292-293
Cramer's Rule, 276-278
finding,291-297
elementary row operations,264--2 7 I
geometric multiplicity,296-297
expansion along first row, 257-258
integers,297
invertibility, 269-270
mappings, 289-291
matrix inverse by cofactors, 274--278
matrices, 291
non-zero,270
n x n
products, 270-271
non-rational numbers,297
row operations and cofactor
non-real,301
elementary matrices, 175-179 reflection, I76 shears, 176 stretch, 176 elementary row operations, 70, 72-73 bad moves and shortcuts, 76-78 determinant,264--271 elimination,matrix representation of,71 ellipsoid,385 elliptic cylinder,387 equations first-order difference, 3 I1-312 free variables, 67-68 leading variables,66-67 normal,344 rotating motion,391 second-order difference, 3 I2 solution,64 equivalent directed line segments,7-8 systems of equations, 65 systems of linear equations, 70 error vector, 343 Euclidean spaces,1 cross-products,50-54 length and dot products, 28-37 minimum distance,44-47
matrices,420
projections, 40-44 vectors and lines in R3,9-11 vectors in R2 and R3, 1-11
power method of determining,312-3 I 3 of projections and reflections in R3,
swapping rows,265
290-291
real and imaginary parts, 420
diagonal form, 300, 373
rotations in R2, 291
diagonal matrices,114
symmetric matrices, 363-367
diagonalization
electromotive force, 103
3 x 3 matrix, 422-423
complex conjugates, 419
volume, 283-284
steady-state solution, 409-410
electricity, resistor circuits in, 102-104
eigenvalues, 307-310,417
design matrix,344
expansion,269
complex numbers, 408-410
orthogonality, 367
determinants
square matrices,269-270
symmetric matrices, 363-367
electrical circuit equations
equality and matrices,113-117
D
degenerate quadric surfaces,387
restricted definition of, 289
rotations in R2, 291
defining, 354
cosines,orthogonality of, 356
degenerate cases, 384
290-291
real and imaginary parts, 420
dot products, 29-31,323, 326
space,152-153
deficient, 297
of projections and reflections in R3,
directed line segments, 7-8
eigenvectors, 308-310
vectors in R", 14-25
Euler's Formula, 403-404
explicit isomorphism, 248
exponential function, 315
F
Factor Theorem, 398
applications of,303
complex matrices, 417-418
basis of,299-300
False Expansion Theorem,275
differential equations, 315-317
complex,419
feasible set linear programming problem,
3 x 3 matrix, 422-423
eigenvectors,299-304
complex vector spaces, 417-423
quadratic form, 373-375
diagonalization,299-304
square matrices,300, 363
finding,291-297
factors,reciprocal,265
107-109
finite-dimensional vector space dimension, 215
Index
finite spanning set, basis from arbitrary, 209-211
first-order difference equations, 311-312
first-order linear differential equations, 315
fixed-state vectors, 309
Markov matrix, 310
Fourier coefficients, 356-357
Hermitian property, 426
K
homogeneous linear equations, 86-87
kernel, 151
hyperbolic cylinder, 387
Kirchhoff's Laws, 103-104, 409, 410
hyperboloid of one sheet, 386
hyperboloid of two sheets, 386-387
L
hyperplanes
Law of Cosines, 29
Fourier Series, 354-359
free variables, 67-68
in R",24
leading variables, 66-67
scalar equation, 36-37
left inverse, 166
length
functions, 131
addition, 134
domain of, 131
linear combinations, 134
linear mappings, 134-136
mapping points to points, 131-132
matrix mapping, 131-134
scalar multiplication, 134
Fundamental Theorem of Algebra, 417
complex inner products, 426
identity mapping, 140
identity matrix, 126-127
ill conditioned matrices, 78
image parallelogram volume, 282
imaginary axis, 399
imaginary part, 396
inconsistent solutions, 75-76
indefinite quadratic forms, 376-377
inertia tensor, 390, 391
G
Gauss-Jordan elimination, 84
Gaussian elimination with back-substitution, 68
general linear mappings, 226-232
general reflections, 147-148
infinite-dimensional vector space, 212
infinitesimal rotation, 389
inner product spaces, 348-352
unit vector, 350-351
inner products, 323, 348-352, 354-355
general solutions, 68
C" complex vector space, 425
general systems of linear equations, 64
complex vector spaces, 425-431
general vector spaces, 198
correct order of vectors, 429
geometric multiplicity and eigenvalues,
defining, 354
296-297
Fourier Series, 355-359
geometrical transformations
orthogonality of constants, sines, and
contractions, 145
cosines, 356
dilations, 145
properties, 349-350,354
general reflections, 147-148
inverse transformation, 171
reflections in coordinate axes in R2 or coordinate planes in R3, 146
rotation through angle θ about x3-axis in R3, 145-148
rotations in plane, 143-145
shears, 146
stretches, 145
Google PageRank algorithm, 307
Gram-Schmidt Procedure, 337-339, 351-352, 365, 367-368, 429
graphs
classification, 380
cone, 386
conic sections, 384
degenerate cases, 384
R", 348-349
instantaneous angular velocity vectors, 390
instantaneous axis of rotation at time, 390
instantaneous rate of rotation about the axis, 390
integers, 396
eigenvalues, 297
integral, 354
intersecting planes, 387
invariant subspace, 420
invariant vectors, 309
inverse linear mappings, 170-173
inverse mappings, 165-173
procedure for finding, 167-168
inverse matrices, 165-173
invertibility
determinant, 269-270
degenerate quadric surfaces, 387
general linear mappings, 228
diagonalizing quadratic form, 380
ellipsoid, 385
elliptic cylinder, 387
hyperbolic cylinder, 387
hyperboloid of one sheet, 386
hyperboloid of two sheets, 386-387
intersecting planes, 387
nondegenerate cases, 384
quadratic forms, 379-387
square matrices, 269-270
invertible linear transformation
onto mappings, 246-247
Invertible Matrix Theorem, 168, 172, 292
invertible square matrix and reduced row echelon form (RREF), 178
isomorphic, 247
isomorphisms, 247
explicit isomorphism, 248
quadric surfaces, 385-387
vector space, 245-249
H
Hermitian linear operators, 433
Hermitian matrix, 432-435
cross-products, 52-54
R3, 28-31
R", 31-34
level sets, 108
line of intersection of two planes, 55
linear combinations, 4
linear mappings, 139-141
of matrices, 118
polynomials, 194-196
linear constraint, 107
linear difference equations, 304
linear equations
coefficients, 63
definition, 63
homogeneous, 86-87
matrix multiplication, 124
linear explicit isomorphism, 248
linear independence, 18-24
problems, 95-96
linear mappings, 134-136, 175-176, 227, 247
2 x 2 matrix, 420
change of coordinates, 240-242
codomain, 134
compositions, 139-141
domain, 134
general, 226-232
identity mapping, 140
inverse, 170-173
linear combinations, 139-141
linear operator, 134
matrices, 235-242
matrix mapping, 136-138
nullity, 230-231
nullspace, 151, 228
one-to-one, 247
range, 153-156, 228-230
rank, 230-231
standard matrix, 137-140, 241
vector spaces over complex numbers, 413-414
linear operators, 134, 227
Hermitian, 433
matrix, 235-239
standard matrix, 237
linear programming, 107-109
feasible set, 107-109
simplex method, 109
linear sets, linearly dependent or linearly independent, 20-24
linear systems, solutions of, 168-170
linear transformation
J
Jordan normal form, 423
eigenvectors, 289-290
standard matrix, 235-239, 328
linear transformation (continued)
inverse cofactors, 274-278
linearity properties, 44
linear combinations of, 118
moment of inertia about an axis, 390, 391
linearly dependent, 20-24
linear mapping, 235-242
multiple vector spaces, 198
multiplication
modulus, 399-401
basis, 206-208
linearly dependent, 118-120
matrices, 118-120
linearly independent, 118-120
complex numbers, 396, 415
polynomials, 195-196
lower triangular, 181-182
matrices, 121-126. See also matrix
subspaces, 204
LU-decomposition, 181-187
linearly independent, 20-24
polar forms, 401
properties of matrices and scalars,
basis, 206-207, 211
non-real eigenvalues, 301
eigenvectors, 300
nullspace, 414
matrices, 118-120
operations on, 113-128
polynomials, 195-196
order of elements, 236
procedure for extending to basis, 213-216
orthogonal, 325-329
subspaces, 204
orthogonally diagonalizable, 366
vectors, 323
orthogonally similar, 366 partitioned, 127-128
lines
direction vector, 5
parametric equation, 6
in R3, 9
in R", 24
translated, 5
vector equation in R2, 5-8
lower triangular, 114, 181-182
LU-decomposition, 181-187
solving systems, 185-186
multiplication
multiplication, 121-126
powers of, 307-313
rank of, 85-86
real eigenvalues, 420
117-120
mutually orthogonal vectors, 321
N
n-dimensional parallelotope, 284
n-dimensional space, 14
n-th roots, 404-405
n-volume, 284
M
reduced row echelon form (RREF), 83-85
cofactor of, 258
reducing to upper-triangular form, 268
complex entries, 428
representation of elimination, 71
conjugate transpose, 428
representation of systems of linear
determinant of, 259
equations, 69-73
eigenvalues, 420
with respect to basis, 235
Hermitian, 432-435
row echelon form
similar, 300
eigenvalues, 289-291
eigenvectors, 289-291
inverse, 165-173
nullspace, 150-152
special subspaces for, 150-162
Markov matrix, 309-313
fixed-state vector, 310
Markov process, 307-313
properties, 310-311
matrices, 69
addition, 117
addition and multiplication by scalars properties, 117-120
basis of columnspace, 158-159
basis of nullspace, 159-161
basis of rowspace, 157
calculating determinant, 264-265
cofactor expansion, 259-261
column, 117
columnspace, 154-155, 414
complex conjugate, 419-423
decompositions, 179
design, 344
determinant, 268, 280
diagonal, 114, 300
division, 125
swapping rows, 266-267
row equivalent, 70, 71
row reduced to row echelon form (RREF), 73-74
row reduction, 70
rowspace, 414
scalar multiplication, 117
shortcuts and bad moves in elementary row operations, 76-78
skew-symmetric, 379
special types of, 114
square, 114
standard, 137-140
swapping rows, 187
symmetric, 363-370
trace of, 202-203
transition, 307
transpose of, 120-121
unitarily diagonalizable, 435
unitarily similar, 435
unitary, 430-431
upper triangular, 181-182
matrix mappings, 131-134
affecting linear combination of vectors in
R", 133
n x n matrices, 296
characteristic polynomial, 293
(REF), 73-74
mappings
unitary, 430-431
natural numbers, 396
nearest point, finding, 47
negative definite quadratic forms,
376-377
negative semidefinite quadratic forms,
376-377
Newton's equation, 391
non-homogeneous system solution, 153
non-real eigenvalues, 301
non-real eigenvectors, 301
non-square matrices, 166
non-zero determinants, 270
non-zero vectors, 41
nondegenerate cases, 384
norm, 31-32
normal equations, 344
normal to planes, 54-55
normal vector, 34-35
normalizing vectors, 312
nullity, 159-161
linear mapping, 230-231
nullspace, 150-152, 414
basis, 159-161, 229-230
complex multiplication as, 415
linear mappings, 228
linear mappings, 134, 136-138
matrix multiplication, 126, 175-176,
300,326
O
eigenvalues, 291
block multiplication, 127-128
objective function, 107
eigenvectors, 291
cancellation law, 125-126
Ohm's law, 102-103,104
elementary, 175-179
linear equations, 124
one-dimensional complex vector
elementary row operations, 70
product not commutative, 125
equality, 113-117
properties, 133-134
finding inverse procedure, 167-168
summation notation, 124
Hermitian, 432-435
matrix of the linear operator, 235-239
identity, 126-127
matrix-vector multiplication, 175-176
ill conditioned, 78
method of least squares, 342-346
inverse, 165-173
minimum distance, 44-47
spaces, 412
one-to-one, 246
explicit isomorphism, 248
invertible linear transformation, 246
onto, 246
explicit isomorphism, 248
ordered basis, 219, 236
orthogonal, 33
points
calculating distance between,
Cn complex vector space, 429--431
28-29
planes, 36
position vector, 7
orthogonal basis, 337-339
subspace S, 335
polynomials
addition, 193-196
real inner products, 430
properties, 194
orthogonally diagonalizable, 366
linear combinations, 194-196
orthogonally similar, 366
linearly dependent, 195-196
orthonormal bases (basis), 321-325,
linearly independent, 195-196
change of coordinates, 325-329
scalar multiplication, 193-196
coordinates with respect to,
span, 194-196
323-325
vector spaces, 193-196
error vector, 343
position vector, 7
Fourier Series, 354-359
positive definite quadratic forms,
general vector spaces, 323
376-377
positive semidefinite quadratic forms,
inner product spaces, 348-352
376-377
power method of determining eigenvalues,
overdetermined systems, 345-346
312-313
powers and complex numbers, 402-404
R", 323, 334
subspace S, 335
powers of matrices, 307-313
technical advantage of using, 323-325
preferred basis, 218
orthonormal columns, 364
preserve addition, 134
orthonormal sets, arguments based on, 323
preserve scalar multiplication, 134
orthonormal vectors, 322,351
principal axes, 370
overdetermined systems, 345-346
Principal Axis Theorem, 369-370,374,380, 391-392,435
P
principal moments of inertia, 390,391
parabola, 384
probabilities, 309
parallel lines and direction vector, 5-6
problems
linear independence, 95-96
parallel planes, 35
parallelepiped, 55-56
volumes, 56, 283-284
spanning, 91-95 products determinants, 270-271
parallelogram
of inertia, 391
area, 53, 280-282
induced by, 280
rule for addition, 2
parallelotope and n-volume, 284
rotations, 291
length, 28-31
line through origin in, 5
reflections in coordinate axes, 146
rotation of axes, 329-330
standard basis for, 4
roots of, 293
323,336
projections onto subspace, 333-336
dot products, 28-31 eigenvectors and eigenvalues of
addition and scalar multiplication
orthogonal vectors, 321, 333, 351
method of least squares, 342-346
quotient, 397-398
multiplication, 401
polynomial equations, roots of, 398
orthogonal sets, 322
Gram-Schmidt Procedure, 337-339
symmetric matrices, 371-377 quadric surfaces, 385-387
division, 401
orthogonal matrices, 325-329
327, 364
small deformations and, 388-389
polar forms, 399--402
orthogonal complement, 333
orthonormal columns and rows,
vector equation of line in, 5-8
vectors in, 1-11
R3
zero vector, 11
defining, 9
eigenvectors and eigenvalues of projections and reflections, 290-291
lines in, 9-11
reflections in coordinate planes, 146
rotation through angle θ about x3-axis, 145-148
scalar triple product and volumes in, 55-56
vectors in, 1-11
zero vector, 11
range
basis, 229-230
linear mappings, 153-156, 228-230
rank
linear mapping, 230-231
matrices, 85-86
square matrices, 269-270
summary of, 162
Rank-Nullity Theorem, 231, 232
Rank Theorem, 150-162
rational numbers, 396
Rational Roots Theorem, 398
projection of vectors, 335
real axis, 399
projection property, 44
real canonical form
projections, 40-44
eigenvectors and eigenvalues in R3,
parametric equation, 6
290-291
particular solution, 153
3 x 3 matrix, 422-423
2 x 2 real matrix, 421-422
complex characteristic roots of, 418-420
permutation matrix, 187
onto subspaces, 333-336
perpendicular of projection, 43
properties of, 44
real eigenvalues, 420
scalar multiple, 41
real eigenvectors, 423
planar trusses, 105-106
planes
Pythagoras' Theorem, 44
real inner products and orthogonal
Q
real matrix, 420
matrices, 430
contractions, 145
dilations, 145
finding nearest point, 47
quadratic forms
complex characteristic roots of, 418-420
applications of, 388-392
real numbers, 396,418
normal to, 54-55
diagonalizing, 373-375
real part, 396
normal vector, 34-35
graphs of, 379-387
real polynomials and complex roots, 419
orthogonal, 36
indefinite, 376-377
real scalars
parallel, 35
inertia tensor, 390
addition, 399
in R", 24
negative definite, 376-377
scalar multiplication, 399
line of intersection of two, 55
rotations in, 143-145
negative semidefinite, 376-377
real vector space, 412
scalar equation, 34-36
positive definite, 376-377
reciprocal factor, 265
stretches, 145
positive semidefinite, 376-377
recursive definition, 259
reduced row echelon form (RREF), 83-85, 178
reflections
linearly dependent, 204
second-order difference equations, 312
shears, 146
linear independence, 18-24
elementary matrix,176
linearly independent,204
describing terms of vectors,218
similar,300
projections onto,333-336
eigenvectors and eigenvalues in R3,
simplex method, 109
R", 16-17
sines,orthogonality of,356
spanning sets, 18-24, 204
290-291
elementary matrix, 176
skew-symmetric matrices,379
in plane with normal vector,147-148
small deformations,388-389
repeated complex eigenvalues,423
solid body and small deformations,388-389
repeated real eigenvectors,423
solids
resistance,102
analysis of deformation of,304
resistor circuits in electricity, 102-104
deformation of,145
resistors, 103
resources, allocating, 107-109
spans, 204
summation notation and matrix multiplication, 124
surfaces in higher dimensions, 24-25
symmetric matrices
classifying, 377
solution,64
diagonalization,327,363-370,373-375
back-substitution,66
eigenvalues,363-367
right-handed system,9
solution set,64
eigenvectors,363-367
right inverse,166
solution space,150-152
orthogonally diagonalizable,367
rigid body,rotation motion of,390-392
corresponding homogeneous system, 152-153
R"
addition and scalar multiplication of vectors in, 15
dot product, 31-34, 323, 354
inner products, 348-349
length, 31-34
line in plane in hyperplane in, 24
orthonormal basis, 323, 334
standard basis for, 321
subset of standard basis vectors, 322
subspace, 16-17
vectors in, 14-25
zero vector, 16
roots of polynomial equations, 398
rotations
of axes in R2, 329-330
eigenvectors and eigenvalues in R2, 291
equations for, 391
in plane, 143-145
through angle θ about x3-axis in R3, 145-148
row echelon form (REF), 73-74
consistency and uniqueness, 75-76
number of leading entries in, 85
row equivalent, 70, 71
row reduction, 70
rowspace, 156-157, 414
quadratic forms, 371-377
systems
systems,152-153
solution space,150-153
spanned,18
solving with LU-decomposition,185-186
spanning problems, 91-95
spanning sets, 18-24
definition, 18
special subspaces for, 150-162
systems of equations
equivalent, 65
subspaces, 204
vector spaces, 209
spans
polynomials, 194-196
subspaces, 204
word problems, 77-78
systems of linear difference equations, 311-312
systems of linear equations, 63-68, 175-176
applications of, 102-109
special subspaces for mappings,150-162
augmented matrix,69-70,71
special subspaces for systems,150-162
coefficient matrix,69-70,71
Spectral Theorem for Hermitian Matrices,
complete elimination,84
435
complex coefficients, 407-410
square matrices,114
complex right-hand sides, 407-410
calculating inverse, 274-278
consistent solutions,75-76
determinant,269-271
consistent systems,75-76
diagonalization,300,363
elimination,64-66
facts about,168-170
elimination with back-substitution,83
invertible,269-270
equivalent,70
left inverse, 166
Gauss-Jordan elimination,84
lower triangular, 114
Gaussian elimination with
rank, 269-270
back-substitution,71-72
right inverse,166
general,64
upper triangular,114
general solution,68
standard basis for R2, 4
homogeneous,86-87
standard inner product,31,425
inconsistent solutions,75-76
standard matrix,137-140
S
linear programming,107-109
linear mapping, 241
scalar equation
hyperplanes, 36-37
planes, 34-36
linear transformation, 235-239, 328
state vector, 309
scalar multiplication
stretches
general vector spaces,198
elementary matrix,176
matrices,117
planes,145
real scalars, 399
vector space, 199-200
vector space over R, 197-201
vectors, 2, 3, 10, 15-16
scalar product, 31
scalar triple product in R3, 55-56
planar trusses, 105-106
resistor circuits in electricity, 102-104
state,307
scalar form,6
polynomials,193-196
matrix representation of, 69-73
linear operator,237
subspace S
spanning problems, 91-95
unique solutions, 75-76
systems of linear homogeneous ordinary differential equations, 317
systems with complex numbers, 407-410
orthogonal basis, 335
orthonormal basis, 335
subspace S of R", orthonormal or orthogonal basis for, 337-339
subspaces, 201-204
T
three-dimensional space and directed line segments, 11
trace of matrices, 202-203
bases of,97-99
transition,probability of,309
complex vector spaces, 413-414
transition matrix,307,309
complex numbers, 411
definition,16
transposition of matrices,120-121
properties of matrix addition and
dimensions,99
Triangle Inequality,33
invariant, 420
trivial solution,21, 87
scalars,2
multiplication by,117-120
trivial subspace, 16-17
trivial vector space, 203
dimension, 212
two-dimensional case and directed line segments, 11
linearly dependent,21,95-96
extending linearly independent subset to
linearly independent,21,323
basis procedure, 213-216
general linear mappings, 226-232
mutually orthogonal,321
infinite-dimensional,212
norm,31-32,350-351
inner product,348-352
normalizing,312,322
two-dimensional complex vector spaces,412
inner product spaces,348-352
orthogonal,33,321,333,351
two-dimensional real vector space and
isomorphisms of,245-249
orthonormal,322,351
matrix of linear mapping,235-242
projection of,335
complex numbers, 412
U
multiple,198
projection onto subspaces, 333-336
over complex numbers, 413-414
in R2 and R3, 1-11
Unique Representation Theorem,206,219
polynomials,193-196
unique solutions in systems of linear
scalar multiplication, 199-200
equations,75-76
spanning set,209
unitarily diagonalizable matrices,435
subspaces, 201-204
unitarily similar matrices,435
trivial,203
unitary matrices,430-431
vector space over R, 197
upper triangular,114,181-182
vectors, 197
zero vector, 198
V
vector spaces over C, 411-415
vector equation of line in R2, 5-8
vector-valued function,42
vector space over R, 197
vectors,1
addition,197-201
addition of,10,15-16
scalar multiplication,197-201
angle between, 29-31
zero vector,197
angular momentum,390
abstract concept of, 200-201
representation as arrow, 2
reversing direction, 3
scalars, 411
unit vector,33,41,321-322,350-351
vector spaces
components of, 2
controlling size of, 312
in R", 14-25
scalar multiplication, 2, 3, 10, 15-16
span of a set, 91-95
standard basis for R2, 4
zero vector, 4
vertices, 109
voltage, 102
volumes
determinants, 283-284
image parallelogram, 282
parallelepiped, 56, 283-284
in R3, 55-56
W
word problems and systems of equations,
addition,199-200
cross-products, 51-54
basis,206-209
distance between,350-351
complex,396-435
dot products,29-31,323
coordinates with respect to basis,218-224
fixed,309
Z
defined using number systems as
formula for length,28
zero vector,4,197-198
77-78
instantaneous angular velocity,390
R2, 11
dimension,211-213
invariant,309
R3, 11
explicit isomorphism,249
length,323,350-351
R", 16-17
scalars,198