MATRIX AND TENSOR CALCULUS With Applications to Mechanics, Elasticity and Aeronautics
ARISTOTLE D. MICHAL
DOVER PUBLICATIONS, INC. Mineola, New York
Bibliographical Note

This Dover edition, first published in 2008, is an unabridged republication of the work originally published in 1947 by John Wiley and Sons, Inc., New York, as part of the GALCIT (Graduate Aeronautical Laboratories, California Institute of Technology) Aeronautical Series.

Library of Congress Cataloging-in-Publication Data

Michal, Aristotle D., 1899-
Matrix and tensor calculus: with applications to mechanics, elasticity, and aeronautics / Aristotle D. Michal. - Dover ed.
p. cm.
Originally published: New York: J. Wiley, [1947]
Includes index.
ISBN-13: 978-0-486-46246-2
ISBN-10: 0-486-46246-3
1. Calculus of tensors. 2. Matrices. I. Title.
QA433.M45 2008
515'.63-dc22
2008000472

Manufactured in the United States of America
Dover Publications, Inc., 31 East 2nd Street, Mineola, N.Y. 11501
To my wife
Luddye Kennerly Michal
EDITOR'S PREFACE

The editors believe that the reader who has finished the study of this book will see the full justification for including it in a series of volumes dealing with aeronautical subjects. However, the editor's preface usually is addressed to the reader who starts with the reading of the volume, and therefore a few words on our reasons for including Professor Michal's book on matrices and tensors in the GALCIT series seem to be appropriate.

Since the beginnings of the modern age of the aeronautical sciences a close cooperation has existed between applied mathematics and aeronautics. Engineers at large have always appreciated the help of applied mathematics in furnishing them practical methods for numerical and graphical solutions of algebraic and differential equations. However, aeronautical and also electrical engineers are faced with problems reaching much further into several domains of modern mathematics. As a matter of fact, these branches of engineering science have often exerted an inspiring influence on the development of novel methods in applied mathematics.

One branch of applied mathematics which fits especially the needs of the scientific aeronautical engineer is the matrix and tensor calculus. The matrix operations represent a powerful method for the solution of problems dealing with mechanical systems with a certain number of degrees of freedom. The tensor calculus gives admirable insight into complex problems of the mechanics of continuous media, the mechanics of fluids, and elastic and plastic media. Professor Michal's course on the subject, given in the frame of the war-training program on engineering science and management, has found a surprisingly favorable response among engineers of the aeronautical industry in the Southern California region.
The editors believe that the engineers throughout the country will welcome a book which skillfully unites exact and clear presentation of mathematical statements with fitness for immediate practical applications.

THEODORE VON KÁRMÁN
CLARK B. MILLIKAN
PREFACE

This volume is based on a series of lectures on matrix calculus and tensor calculus, and their applications, given under the sponsorship of the Engineering, Science, and Management War Training (ESMWT) program, from August 1942 to March 1943. The group taking the course included a considerable number of outstanding research engineers and directors of engineering research and development. I am very grateful to these men who welcomed me and by their interest in my lectures encouraged me.

The purpose of this book is to give the reader a working knowledge of the fundamentals of matrix calculus and tensor calculus, which he may apply to his own field. Mathematicians, physicists, meteorologists, and electrical engineers, as well as mechanical and aeronautical engineers, will discover principles applicable to their respective fields. The last group, for instance, will find material on vibrations, aircraft flutter, elasticity, hydrodynamics, and fluid mechanics.

The book is divided into two independent parts, the first dealing with the matrix calculus and its applications, the second with the tensor calculus and its applications. The minimum of mathematical concepts is presented in the introduction to each part, the more advanced mathematical ideas being developed as they are needed in connection with the applications in the later chapters. The two-part division of the book is primarily due to the fact that matrix and tensor calculus are essentially two distinct mathematical studies. The matrix calculus is a purely analytic and algebraic subject, whereas the tensor calculus is geometric, being connected with transformations of coordinates and other geometric concepts. A careful reading of the first chapter in each part of the book will clarify the meaning of the word "tensor," which is occasionally misused in modern scientific and engineering literature.
I wish to acknowledge with gratitude the kind cooperation of the Douglas Aircraft Company in making available some of its work in connection with the last part of Chapter 7 on aircraft flutter. It is a pleasure to thank several of my students, especially Dr. J. E. Lipp and Messrs. C. H. Putt and Paul Lieber of the Douglas Aircraft Company, for making available the material worked out by Mr. Lieber and his research group. I am also very glad to thank the members of my seminar on applied mathematics at the California Institute for their helpful suggestions. I wish to make special mention of Dr. C. C. Lin, who not only took an active part in the seminar but who also kindly consented to have his unpublished researches on some dramatic applications of the tensor calculus to boundary-layer theory in aeronautics incorporated in Chapter 18. This furnishes an application of the Riemannian tensor calculus described in Chapter 17. I should like also to thank Dr. W. Z. Chien for his timely help. I gratefully acknowledge the suggestions of my colleague Professor Clark B. Millikan concerning ways of making the book more useful to aeronautical engineers.

Above all, I am indebted to my distinguished colleague and friend, Professor Theodore von Kármán, director of the Guggenheim Graduate School of Aeronautics at the California Institute, for honoring me by an invitation to put my lecture notes in book form for publication in the GALCIT series. I have also the delightful privilege of expressing my indebtedness to Dr. Kármán for his inspiring conversations and wise counsel on applied mathematics in general and this volume in particular, and for encouraging me to make contacts with the aircraft industry on an advanced mathematical level. I regret that, in order not to delay unduly the publication of this book, I am unable to include some of my more recent unpublished researches on the applications of the tensor calculus of curved infinite dimensional spaces to the vibrations of elastic beams and other elastic media.

ARISTOTLE D. MICHAL
CALIFORNIA INSTITUTE OF TECHNOLOGY
October, 1946
CONTENTS

PART I. MATRIX CALCULUS AND ITS APPLICATIONS

1. ALGEBRAIC PRELIMINARIES
   Introduction. Definitions and notations. Elementary operations on matrices.
2. ALGEBRAIC PRELIMINARIES (Continued)
   Inverse of a matrix and the solution of linear equations. Multiplication of matrices by numbers, and matric polynomials. Characteristic equation of a matrix and the Cayley-Hamilton theorem.
3. DIFFERENTIAL AND INTEGRAL CALCULUS OF MATRICES
   Power series in matrices. Differentiation and integration depending on a numerical variable.
4. DIFFERENTIAL AND INTEGRAL CALCULUS OF MATRICES (Continued)
   Systems of linear differential equations with constant coefficients. Systems of linear differential equations with variable coefficients.
5. MATRIX METHODS IN PROBLEMS OF SMALL OSCILLATIONS
   Differential equations of motion. Illustrative example.
6. MATRIX METHODS IN PROBLEMS OF SMALL OSCILLATIONS (Continued)
   Calculation of frequencies and amplitudes.
7. MATRIX METHODS IN THE MATHEMATICAL THEORY OF AIRCRAFT FLUTTER
8. MATRIX METHODS IN ELASTIC DEFORMATION THEORY

PART II. TENSOR CALCULUS AND ITS APPLICATIONS

9. SPACE LINE ELEMENT IN CURVILINEAR COORDINATES
   Introductory remarks. Notation and summation convention. Euclidean metric tensor.
10. VECTOR FIELDS, TENSOR FIELDS, AND EUCLIDEAN CHRISTOFFEL SYMBOLS
   The strain tensor. Scalars, contravariant vectors, and covariant vectors. Tensor fields of rank two. Euclidean Christoffel symbols.
11. TENSOR ANALYSIS
   Covariant differentiation of vector fields. Tensor fields of rank r = p + q, contravariant of rank p and covariant of rank q. Properties of tensor fields.
12. LAPLACE EQUATION, WAVE EQUATION, AND POISSON EQUATION IN CURVILINEAR COORDINATES
   Some further concepts and remarks on the tensor calculus. Laplace's equation. Laplace's equation for vector fields. Wave equation. Poisson's equation.
13. SOME ELEMENTARY APPLICATIONS OF THE TENSOR CALCULUS TO HYDRODYNAMICS
   Navier-Stokes differential equations for the motion of a viscous fluid. Multiple-point tensor fields. A two-point correlation tensor field in turbulence.
14. APPLICATIONS OF THE TENSOR CALCULUS TO ELASTICITY THEORY
   Finite deformation theory of elastic media. Strain tensors in rectangular coordinates. Change in volume under elastic deformation.
15. HOMOGENEOUS AND ISOTROPIC STRAINS, STRAIN INVARIANTS, AND VARIATION OF STRAIN TENSOR
   Strain invariants. Homogeneous and isotropic strains. A fundamental theorem on homogeneous strains. Variation of the strain tensor.
16. STRESS TENSOR, ELASTIC POTENTIAL, AND STRESS-STRAIN RELATIONS
   Stress tensor. Elastic potential. Stress-strain relations for an isotropic medium.
17. TENSOR CALCULUS IN RIEMANNIAN SPACES AND THE FUNDAMENTALS OF CLASSICAL MECHANICS
   Multidimensional Euclidean spaces. Riemannian geometry. Curved surfaces as examples of Riemannian spaces. The Riemann-Christoffel curvature tensor. Geodesics. Equations of motion of a dynamical system with n degrees of freedom.
18. APPLICATIONS OF THE TENSOR CALCULUS TO BOUNDARY-LAYER THEORY
   Incompressible and compressible fluids. Boundary-layer equations for the steady motion of a homogeneous incompressible fluid.

NOTES ON PART I
NOTES ON PART II
REFERENCES FOR PART I
REFERENCES FOR PART II
INDEX
PART I. MATRIX CALCULUS AND ITS APPLICATIONS

CHAPTER 1

ALGEBRAIC PRELIMINARIES

Introduction. Although matrices have been investigated by mathematicians for almost a century, their thoroughgoing application to physics,¹ engineering, and other subjects² - such as cryptography, psychology, and educational and other statistical measurements - has taken place only since 1925. In particular, the use of matrices in aeronautical engineering in connection with small oscillations, aircraft flutter, and elastic deformations did not receive much attention before 1935. It is interesting to note that the only book on matrices with systematic chapters on the differential and integral calculus of matrices was written by three aeronautical engineers.‡

† Superior numbers refer to the notes at the end of the book.
‡ Frazer, Duncan, and Collar, Elementary Matrices and Some Applications to Dynamics and Differential Equations, Cambridge University Press, 1938.

Definitions and Notations. A table of mn numbers, called elements, arranged in a rectangular array of m rows and n columns is called a matrix³ with m rows and n columns. If a^i_j is the element in the ith row and jth column, then the matrix can be written down in the following pictorial form with the conventional double bar on each side:

|| a^1_1, a^1_2, ..., a^1_n ||
|| a^2_1, a^2_2, ..., a^2_n ||
|| ...................... ||
|| a^m_1, a^m_2, ..., a^m_n ||

In the expression a^i_j the index i is called a superscript and the index j a subscript. It is to be emphasized that the superscript i in a^i_j is not the ith power of a variable a. If the number m of rows is equal to the number n of columns, then
the matrix is called a square matrix.† The number of rows, or equivalently the number of columns, will be called the order of the square matrix. Besides square matrices, two other general types of matrices occur frequently. One is the row matrix

|| a_1, a_2, ..., a_n ||;

the other is the column matrix

|| a^1 ||
|| a^2 ||
|| ... ||
|| a^m ||

It is to be observed that the superscript 1 in the elements of the row matrix was omitted. Similarly the subscript 1 in the elements of the column matrix was also omitted. All this is done in the interest of brevity; the index notation is unnecessary when the index, whether a subscript or superscript, cannot have at least two values.

It is often very convenient to have a more compact notation for matrices than the one just given. This compact notation is as follows: if a^i_j is the element of a matrix in the ith row and jth column we can write simply

|| a^i_j ||

instead of stringing out all the mn elements of the matrix. In particular, a row matrix with element a_k in the kth column will be written || a_k ||, and a column matrix with element a^k in the kth row will be written || a^k ||.
Elementary Operations on Matrices. Before we can use matrices effectively we must define the addition of matrices and the multiplication of matrices. The definitions are those that have been found most useful in the general theory and in the applications.

Let A and B be matrices of the same type, i.e., matrices with the same number m of rows and the same number n of columns. Let

A = || a^i_j ||,  B = || b^i_j ||.

Then by the sum A + B of the matrices A and B we shall mean the uniquely obtainable matrix

C = || c^i_j ||,

where

c^i_j = a^i_j + b^i_j   (i = 1, 2, ..., m; j = 1, 2, ..., n).

In other words, to add two matrices of the same type, calculate the matrix whose elements are precisely the numerical sum of the corresponding elements of the two given matrices. The addition of two matrices of different type has no meaning for us.

To complete the preliminary definitions we must make clear what we mean when we say that two matrices are equal. Two matrices A = || a^i_j || and B = || b^i_j || of the same type are equal, written as A = B, if and only if the numerical equalities a^i_j = b^i_j hold for each i and j.

† It will occasionally be convenient to write a_ij for the element in the ith row and jth column of a square matrix. See Chapter 5 and the following chapters.
Exercise

Let

A = || 1, -1, 5, 3 ||      B = || 0, 0, -5, 3 ||
    || 0, 0, 3, -2 ||          || 0, 0, -1, 3 ||
    || -4, 1, 1, 1 ||          || 6, 1, -3, -4 ||

Then

A + B = || 1, -1, 0, 6 ||
        || 0, 0, 2, 1 ||
        || 2, 2, -2, -3 ||

The following results, embodied in a theorem, show that matric addition has some of the properties of numerical addition.

THEOREM. If A and B are any two matrices of the same type, then A + B = B + A. If C is any third matrix of the same type as A and B, then (A + B) + C = A + (B + C).
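The addition rule lends itself to a one-line computation. A minimal sketch in Python (nested lists standing in for the double-bar arrays; the function name is ours, not the book's):

```python
# Elementwise sum of two matrices of the same type (m rows, n columns),
# following the definition c^i_j = a^i_j + b^i_j.
def mat_add(a, b):
    if len(a) != len(b) or any(len(ra) != len(rb) for ra, rb in zip(a, b)):
        raise ValueError("addition is defined only for matrices of the same type")
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

A = [[1, -1, 5, 3], [0, 0, 3, -2], [-4, 1, 1, 1]]
B = [[0, 0, -5, 3], [0, 0, -1, 3], [6, 1, -3, -4]]
print(mat_add(A, B))  # → [[1, -1, 0, 6], [0, 0, 2, 1], [2, 2, -2, -3]]
```

Since the sum is taken element by element, the commutative and associative properties of the theorem follow at once from those of ordinary numbers.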
Before we proceed with the definition of multiplication of matrices, a word or two must be said about two very important special square matrices. One is the zero matrix, i.e., a square matrix all of whose elements are zero,

|| 0, 0, ..., 0 ||
|| 0, 0, ..., 0 ||
|| ............ ||
|| 0, 0, ..., 0 ||

We can denote the zero matrix by the capital letter O. Occasionally we shall use the terminology zero matrix for a non-square matrix with zero elements. The other is the unit matrix, i.e., a matrix

I = || δ^i_j ||,  where δ^i_j = 1 if i = j,
                        δ^i_j = 0 if i ≠ j.

In the more explicit notation

    || 1, 0, 0, ..., 0 ||
I = || 0, 1, 0, ..., 0 ||
    || ............... ||
    || 0, 0, 0, ..., 1 ||

One of the most useful and simplifying conventions in all mathematics is the summation convention: the repetition of an index once as a subscript and once as a superscript will indicate a summation over the total range of that index. For example, if the range of the indices is 1 to 5, then

a_i b^i means the sum from i = 1 to 5 of a_i b^i, or a_1 b^1 + a_2 b^2 + a_3 b^3 + a_4 b^4 + a_5 b^5.
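In code, the summation convention simply becomes an explicit sum over the repeated index; a small illustrative sketch in Python:

```python
# The repeated index i in a_i b^i indicates a sum over its whole range.
def contract(a, b):
    # a holds the subscripted components a_i, b the superscripted b^i
    if len(a) != len(b):
        raise ValueError("a repeated index must have the same range in both factors")
    return sum(x * y for x, y in zip(a, b))

a = [1, 2, 3, 4, 5]
b = [1, 0, 2, 0, 1]
print(contract(a, b))  # a_1 b^1 + ... + a_5 b^5 = 1 + 0 + 6 + 0 + 5 = 12
```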
Again we warn the reader that the superscript i in b^i is not the ith power of a variable b.

The definition of the multiplication of two matrices can now be given in a neat form with the aid of the summation convention. Let

A = || a^1_1, a^1_2, ..., a^1_m ||      B = || b^1_1, b^1_2, ..., b^1_p ||
    || a^2_1, a^2_2, ..., a^2_m ||          || b^2_1, b^2_2, ..., b^2_p ||
    || ........................ ||          || ........................ ||
    || a^n_1, a^n_2, ..., a^n_m ||          || b^m_1, b^m_2, ..., b^m_p ||

Then, by the product AB of the two matrices, we shall mean the matrix

C = || c^i_j ||,  where  c^i_j = a^i_k b^k_j   (i = 1, 2, ..., n; j = 1, 2, ..., p).

If c^i_j is written out in extenso without the aid of the summation convention, we have

c^i_j = a^i_1 b^1_j + a^i_2 b^2_j + ... + a^i_m b^m_j.

It should be emphasized here that, in order that the product AB of two matrices be well defined, the number of rows in the matrix B must be precisely equal to the number of columns in the matrix A. It follows in particular that, if A and B are square matrices of the same type, then AB as well as BA is always well defined. However, it must be emphasized that in general AB is not equal to BA, written as AB ≠ BA, even if both AB and BA are well defined. In other words, matrix multiplication, unlike numerical multiplication, is not always commutative.
Exercise

The following example illustrates the non-commutativity of matrix multiplication. Take

A = || 0, 1 ||     and     B = || -1, 0 ||
    || 1, 0 ||                 || 0, 1 ||

Now

a^1_1 = 0, a^1_2 = 1, a^2_1 = 1, a^2_2 = 0,

and

b^1_1 = -1, b^1_2 = 0, b^2_1 = 0, b^2_2 = 1,

so that

c^1_1 = a^1_k b^k_1 = (0)(-1) + (1)(0) = 0,
c^1_2 = a^1_k b^k_2 = (0)(0) + (1)(1) = 1,
c^2_1 = a^2_k b^k_1 = (1)(-1) + (0)(0) = -1,
c^2_2 = a^2_k b^k_2 = (1)(0) + (0)(1) = 0.

Hence

AB = || 0, 1 ||
     || -1, 0 ||

Similarly

BA = || 0, -1 ||
     || 1, 0 ||

But obviously AB ≠ BA.
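The same computation can be checked mechanically; a short Python sketch of the row-by-column rule, applied to the matrices of this exercise:

```python
# Product rule c^i_j = a^i_k b^k_j, summed over the repeated index k.
def mat_mul(a, b):
    if len(b) != len(a[0]):
        raise ValueError("rows of B must equal columns of A")
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

A = [[0, 1], [1, 0]]
B = [[-1, 0], [0, 1]]
print(mat_mul(A, B))  # → [[0, 1], [-1, 0]]
print(mat_mul(B, A))  # → [[0, -1], [1, 0]]  (so AB ≠ BA)
```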
The unit matrix I of order n has the interesting property that it commutes with all square matrices of the same order. In fact, if A is an arbitrary square matrix of order n, then

IA = AI = A.
The multiplication of row and column matrices with the same number of elements is instructive. Let

A = || a_i ||

be the row matrix and

B = || b^i ||

the column matrix. Then AB = a_i b^i, a number, or a matrix with one element (the double-bar notation has been omitted).

Exercise

If A = || 1, 1, 0 || and B is the column matrix with elements 0, 0, 1, then

AB = (1)(0) + (1)(0) + (0)(1) = 0.

This example also illustrates the fact that the product of two matrices can be a zero matrix although neither of the multiplied matrices is a zero matrix.

The multiplication of a square matrix with a column matrix occurs frequently in the applications. A system of n linear algebraic equations in n unknowns x^1, x^2, ..., x^n,

a^i_j x^j = b^i,

can be written as a single matrix equation

AX = B

in the unknown column matrix X = || x^i || and the given square matrix A = || a^i_j || and column matrix B = || b^i ||. A system of first-order differential equations

dx^i/dt = a^i_j x^j

can be written as one matric differential equation

dX/dt = AX.

Finally a system of second-order differential equations occurring in the theory of small oscillations,

d²x^i/dt² = a^i_j x^j,

can be written as one matric second-order differential equation

d²X/dt² = AX.

The above illustrations suffice to show the compactness and simplicity of matric equations when use is made of matrix multiplication.

Exercises

1. Compute the matrix AB when A is a given row matrix and B a given column matrix with the same number of elements. Is BA defined? Explain.

2. Compute the matrix AX when A is a given square matrix and X a given column matrix with the same number of rows. Is XA defined? Explain.
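As an illustration of the last point, a sketch in Python of the product of a square matrix with a column matrix; the coefficient values below are hypothetical, chosen only for the example:

```python
# A system a^i_j x^j = b^i written as the single matrix equation AX = B:
# multiplying the square matrix A into the column matrix X reproduces
# the right-hand column B.
def mat_vec(a, x):
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in a]

A = [[2, 1], [1, 3]]   # coefficients a^i_j (illustrative values)
X = [1, 2]             # a column of unknowns x^i
print(mat_vec(A, X))   # → [4, 7], the column B of right-hand sides
```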
CHAPTER 2

ALGEBRAIC PRELIMINARIES (Continued)

Inverse of a Matrix and the Solution of Linear Equations.¹ The inverse a^(-1), or reciprocal, of a real number a is well defined if a ≠ 0. There is an analogous operation for square matrices. If A is a square matrix

A = || a^i_j ||

of order n, and if the determinant | a^i_j | ≠ 0, or in more extended notation

| a^1_1, a^1_2, ..., a^1_n |
| a^2_1, a^2_2, ..., a^2_n |  ≠ 0,
| ....................... |
| a^n_1, a^n_2, ..., a^n_n |

then there exists a unique matrix, written A^(-1) in analogy to the inverse of a number, with the important properties

(2·1)  AA^(-1) = I,  A^(-1)A = I  (I is the unit matrix).

The matrix A^(-1), if it exists, is called the inverse matrix of A. In fact, the following more extensive result holds good. A necessary and sufficient condition that a matrix A = || a^i_j || have an inverse is that the associated determinant | a^i_j | ≠ 0. From now on we shall refer to the determinant a = | a^i_j | as the determinant a of the matrix A. Occasionally we shall write | A | for the determinant of A.

The general form of the inverse of a matrix can be given with the aid of a few results from the theory of determinants. Let a = | a^i_j | be a determinant, not necessarily different from zero. Let α^i_j be the cofactor† of a^j_i in the determinant a; note that the indices i and j are interchanged in α^i_j as compared with a^i_j. Then the following results

† The (n − 1)-rowed determinant obtained from the determinant a by striking out the ith row and jth column in a, and then multiplying the result by (−1)^(i+j).
come from the properties of determinants:

a^i_j α^j_k = a δ^i_k  (expansion by elements of the ith row);
α^i_j a^j_k = a δ^i_k  (expansion by elements of the kth column).

If then the determinant a ≠ 0, we obtain the following relations,

(2·2)  a^i_j β^j_k = δ^i_k,  β^i_j a^j_k = δ^i_k,

on defining

β^i_j = α^i_j / a.

Let A = || a^i_j ||, B = || β^i_j ||; then relations 2·2 state, in terms of matrix multiplication, that

AB = I,  BA = I.

In other words, the matrix B is precisely the inverse matrix A^(-1) of A. To summarize, we have the following computational result: if the determinant a of a square matrix A = || a^i_j || is different from zero, then the inverse matrix A^(-1) of A exists and is given by

A^(-1) = || β^i_j ||,

where β^i_j = α^i_j / a and α^i_j is the cofactor of a^j_i in the determinant a of the matrix A.
These results on the inverse of a matrix have a simple application to the solution of n non-homogeneous linear (algebraic) equations in n unknowns x^1, x^2, ..., x^n. Let the n equations be

a^i_j x^j = b^i

(the n² numbers a^i_j are given and the n numbers b^i are given). On defining the matrices

A = || a^i_j ||,  X = || x^i ||,  B = || b^i ||,

we can, as in the first chapter, write the n linear equations as one matric equation

AX = B

in the unknown column matrix X. If we now assume that the determinant a of the matrix A is not zero, the inverse matrix A^(-1) will exist and we shall have by matrix multiplication

A^(-1)(AX) = A^(-1)B.

Since A^(-1)A = I and IX = X, we obtain the solution

X = A^(-1)B

of the equation AX = B. In other words, if α^i_j is the cofactor of a^j_i in the determinant a of A, then x^i = α^i_j b^j / a is the solution of the system of n equations a^i_j x^j = b^i under the condition a ≠ 0. This is equivalent to Cramer's rule² for the solution of non-homogeneous linear equations as ratios of determinants. It is more explicit than Cramer's rule in that the determinants in the numerator of the solution expressions are expanded in terms of the given right-hand sides b^1, b^2, ..., b^n of the linear equations.

It is sometimes possible to solve the equations a^i_j x^j = b^i readily and obtain x^i = λ^i_j b^j. The inverse matrix A^(-1) to A = || a^i_j || can then be read off by inspection - in fact, A^(-1) = || λ^i_j ||. Practical methods, including approximate methods, for the calculation of the inverse (sometimes called reciprocal) of a matrix are given in Chapter IV of the book on matrices by Frazer, Duncan, and Collar. A method based on the Cayley-Hamilton theorem will be presented at the end of the chapter. A simple example on the inverse of a matrix would be instructive at this point.
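The cofactor formula for the inverse translates directly into code. A sketch in Python, using exact rational arithmetic and Laplace expansion for the determinant (adequate for the small matrices considered here; the function names are ours):

```python
from fractions import Fraction

# beta^i_j = (cofactor of a^j_i) / det A, i.e., the transposed cofactors
# divided by the determinant, exactly as in the summary above.
def det(m):
    # Laplace expansion along the first row
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det(minor(m, 0, j)) for j in range(len(m)))

def minor(m, i, j):
    # strike out the ith row and jth column
    return [row[:j] + row[j + 1:] for k, row in enumerate(m) if k != i]

def inverse(m):
    a = det(m)
    if a == 0:
        raise ValueError("matrix has no inverse: determinant is zero")
    n = len(m)
    # entry (i, j) of the inverse is the cofactor of the (j, i) element over det A
    return [[Fraction((-1) ** (i + j) * det(minor(m, j, i)), a)
             for j in range(n)] for i in range(n)]
```

For example, `inverse([[0, 1], [-1, 0]])` returns the matrix with rows `[0, -1]` and `[1, 0]`, in agreement with the definition of β^i_j.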
Exercise

Consider the two-rowed matrix

A = || 0, 1 ||
    || -1, 0 ||

According to our notations a^1_1 = 0, a^1_2 = 1, a^2_1 = -1, a^2_2 = 0. Hence the cofactors α^i_j of A will be

α^1_1 = (cofactor of a^1_1) = 0,
α^1_2 = (cofactor of a^2_1) = -1,
α^2_1 = (cofactor of a^1_2) = 1,
α^2_2 = (cofactor of a^2_2) = 0.

Now A^(-1) = || β^i_j ||, where β^i_j = α^i_j / a. But the determinant of A is a = 1. This gives us immediately β^1_1 = 0, β^1_2 = -1, β^2_1 = 1, β^2_2 = 0. In other words,

A^(-1) = || 0, -1 ||
         || 1, 0 ||

Approximate numerical examples abound in the study of airplane-wing oscillations. For example, if

A = || 0.0176,   0.000128,   0.00289 ||
    || 0.000128, 0.00000824, 0.0000413 ||
    || 0.00289,  0.0000413,  0.000725 ||

then approximately

A^(-1) = || 170.9,  1063.,    -741.7 ||
         || 1063.,  176500.,  -14290. ||
         || -741.7, -14290.,  5150. ||

See exercise 2 at the end of Chapter 7.
From the rule for the product of two determinants,³ the following result is immediate on observing closely the definition of the product of two matrices: if A and B are two square matrices with determinants a and b respectively, then the determinant c of the matric product C = AB is given by the numerical multiplication of the two numbers a and b, i.e., c = ab. This result enables us to calculate immediately the determinant of the inverse of a matrix. Since AA^(-1) = I, and since the determinant of the unit matrix I is 1, the above result shows that the determinant of A^(-1) is 1/a, where a is the determinant of A.

From the associativity of the operation of multiplication of square matrices and the properties of inverses of matrices, the usual index laws for powers of numbers hold good for powers of matrices even though matric multiplication is not commutative. By the associativity of the operation of matric multiplication we mean that, if A, B, C are any three square matrices of the same order, then†

A(BC) = (AB)C.

If then A is a square matrix, there is a unique matrix AA···A with s factors for any given positive integer s. We shall write this matrix as A^s and call it the sth power of the matrix A. Now if we define A^0 = I, the unit matrix, then the following index laws hold for all positive integral and zero indices r and s:

A^r A^s = A^s A^r = A^(r+s),
(A^r)^s = (A^s)^r = A^(rs).

Furthermore, these index laws hold for all integral r and s, positive or negative, whenever A^(-1) exists. This is with the understanding that negative powers of matrices are defined as positive powers of their inverses, i.e., A^(-r) is defined for any positive integer r by A^(-r) = (A^(-1))^r.
Multiplication of Matrices by Numbers, and Matric Polynomials. Besides the operations on matrices that have been discussed up to this section, there is still another one that is of great importance. If A = || a^i_j || is a matrix, not necessarily a square matrix, and α is a number, real or complex, then by αA we shall mean the matrix || α a^i_j ||. This operation of multiplication by numbers enables us to consider matrix polynomials of type

(2·3)  a_0 A^n + a_1 A^(n-1) + a_2 A^(n-2) + ... + a_(n-1) A + a_n I.

† Similarly, if the two square matrices A and B and the column matrix X have the same number of rows, then (AB)X = A(BX).

In expression 2·3, a_0, a_1, ..., a_n are numbers, A is a square matrix, and I is the unit matrix of the same order as A. In a given matric polynomial, the a_i's are given numbers, and A is a variable square matrix.
Characteristic Equation of a Matrix and the Cayley-Hamilton Theorem. We are now in a position to discuss some results whose importance cannot be overestimated in the study of vibrations of all sorts (see Chapter 6). If A = || a^i_j || is a given square matrix of order n, one can form the matrix λI − A, called the characteristic matrix of A. The determinant of this matrix, considered as a function of λ, is a (numerical) polynomial of degree n in λ, called the characteristic function of A. More explicitly, let f(λ) = | λI − A |; then f(λ) has the form

f(λ) = λ^n + a_1 λ^(n−1) + ... + a_(n−1) λ + a_n.

Since a_n = f(0), we see that a_n = | −A |; i.e., a_n is (−1)^n times the determinant of the matrix A. The algebraic equation of degree n for λ,

f(λ) = 0,

is called the characteristic equation of the matrix A, and the roots of the equation are called the characteristic roots of A.

We shall close this chapter with what is, perhaps, the most famous theorem in the algebra of matrices.

THE CAYLEY-HAMILTON THEOREM. Let

f(λ) = λ^n + a_1 λ^(n−1) + ... + a_(n−1) λ + a_n

be the characteristic function of a matrix A, and let I and O be the unit matrix and zero matrix respectively with an order equal to that of A. Then the matric polynomial equation

X^n + a_1 X^(n−1) + ... + a_(n−1) X + a_n I = O

is satisfied by X = A.

Example

Take A = || 0, 1 ||
         || 1, 0 ||

then

f(λ) = | λ, −1 |  = λ² − 1.
       | −1, λ |

Here n = 2, and a_1 = 0, a_2 = −1. But A² = || 1, 0 / 0, 1 ||. Hence A² − I = O.

The Cayley-Hamilton theorem is often laconically stated in the form "A matrix satisfies its own characteristic equation." In symbols, if f(λ) is the characteristic function for a matrix A, then f(A) = O. Such statements are, of course, nonsensical if taken literally at their face value. However, such mnemonics are useful to those who thoroughly understand the statement of the Cayley-Hamilton theorem.

A knowledge of the characteristic function of a matrix enables one to compute the inverse of a matrix, if it exists, with the aid of the Cayley-Hamilton theorem. In fact, let A be an n-rowed square matrix with an inverse A^(-1). This implies that the determinant a of A is not zero. Since 0 ≠ a_n = (−1)^n a, we find with the aid of the Cayley-Hamilton theorem that A satisfies the matric equation

I = − (1/a_n)[A^n + a_1 A^(n−1) + ... + a_(n−2) A² + a_(n−1) A].

Multiplying both sides by A^(-1), we see that the inverse matrix A^(-1) can be computed by the following formula:

(2·4)  A^(-1) = − (1/a_n)[A^(n−1) + a_1 A^(n−2) + ... + a_(n−2) A + a_(n−1) I].

To compute A^(-1) by formula 2·4 one has to know the coefficients a_1, a_2, ..., a_(n−1), a_n in the characteristic function of the given matrix A. Let A = || a^i_j ||; then the trace of the matrix A, written tr (A), is defined by tr (A) = a^i_i, the sum of the n diagonal elements a^1_1, a^2_2, ..., a^n_n. Define the numbers s_1, s_2, ..., s_n by

(2·5)  s_1 = tr (A), s_2 = tr (A²), ..., s_r = tr (A^r), ..., s_n = tr (A^n),

so that s_r is the trace of the rth power of the given matrix A. It can be shown⁴ by a long algebraic argument that the numbers a_1, ..., a_n can be computed successively by the following recurrence formulas:

(2·6)  a_1 = −s_1,
       a_2 = −(1/2)(a_1 s_1 + s_2),
       a_3 = −(1/3)(a_2 s_1 + a_1 s_2 + s_3),
       ...
       a_n = −(1/n)(a_(n−1) s_1 + a_(n−2) s_2 + ... + a_1 s_(n−1) + s_n).

We can summarize our results in the following rule for the calculation of the inverse matrix A^(-1) to a given matrix A.

A RULE FOR CALCULATION OF THE INVERSE MATRIX A^(-1). First compute the first n − 1 powers A, A², ..., A^(n−1) of the given n-rowed matrix A. Then compute the diagonal elements only of A^n. Next compute the n numbers s_1, s_2, ..., s_n as defined in 2·5. Insert these values for the s's in formula 2·6, and calculate a_1, a_2, ..., a_n successively by means of 2·6. Finally by formula 2·4 one can calculate A^(-1) from the knowledge of a_1, ..., a_n and the matrices A, A², ..., A^(n−1).

Notice that the whole A^n is not needed in the calculation but merely s_n = tr (A^n), the trace of A^n. Punched-card methods can be used to calculate the powers of the matrix A. The rest of the calculations are easily made by standard calculating machines. Hence one method of getting numerical solutions of a system of n linear equations in the n unknowns x^i,

a^i_j x^j = b^i   (| a^i_j | ≠ 0),

is to compute A^(-1) of A = || a^i_j || by the above rule with the aid of punched-card methods and then to compute A^(-1)B, where B = || b^i ||, by punched-card methods. The solution column matrix X = || x^i || is given by X = A^(-1)B.
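The rule just stated can be carried out mechanically (today a few lines of code replace the punched cards); a Python sketch that follows 2·5, 2·6, and 2·4 literally, using exact rational arithmetic:

```python
from fractions import Fraction

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def trace(m):
    return sum(m[i][i] for i in range(len(m)))

def inverse_by_char_poly(A):
    n = len(A)
    I = [[Fraction(i == j) for j in range(n)] for i in range(n)]
    powers = [I]                      # A^0, A^1, ..., A^n
    for _ in range(n):
        powers.append(mat_mul(powers[-1], A))
    s = [trace(powers[r]) for r in range(1, n + 1)]       # 2·5: s_1, ..., s_n
    a = []                                                # 2·6: a_1, ..., a_n
    for r in range(1, n + 1):
        total = s[r - 1] + sum(a[k] * s[r - 2 - k] for k in range(r - 1))
        a.append(Fraction(-total, r))
    if a[-1] == 0:
        raise ValueError("a_n = 0: the matrix has no inverse")
    # 2·4: A^-1 = -(1/a_n)[A^(n-1) + a_1 A^(n-2) + ... + a_(n-1) I]
    coeffs = [Fraction(1)] + a[:-1]
    return [[-sum(c * powers[n - 1 - k][i][j] for k, c in enumerate(coeffs)) / a[-1]
             for j in range(n)] for i in range(n)]

A = [[0, 1], [1, 0]]
print(inverse_by_char_poly(A))   # equals A itself, as in exercise 1 below
```

For the two-rowed matrix shown, the routine reproduces a_1 = 0, a_2 = −1 and hence A^(-1) = A, in agreement with formula 2·4.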
Exercises

1. Calculate the inverse matrix to

A = ‖ 0  1
      1  0 ‖

by the last method of this chapter.

Solution. Here s_1 = tr (A) = 0 and s_2 = tr (A^2) = 2, so that a_1 = -s_1 = 0 and a_2 = -(1/2)(a_1 s_1 + s_2) = -1. Now A^{-1} = -(1/a_2)[A + a_1 I] = A.

2. See the exercise given in M. D. Bingham's paper. See the bibliography.

3. Calculate A^{-1} by the above rule when

A = ‖ 15  11  6  -9  -15
       1   3  9  -3   -8
       7   6  6  -3  -11
       7   7  5  -3  -11
      17  12  5 -10  -16 ‖.

After calculating A^2, A^3, A^4, and the diagonal elements of A^5, calculate s_1 = 5, s_2 = -41, s_3 = -217, s_4 = -17, s_5 = 3185. Inserting these values in 2·6, find a_1 = -5, a_2 = 33, a_3 = -51, a_4 = 135, a_5 = 225. Incidentally the characteristic equation of A is

f(λ) = λ^5 - 5λ^4 + 33λ^3 - 51λ^2 + 135λ + 225 = (λ + 1)(λ^2 - 3λ + 15)^2 = 0.

Finally, using formula 2·4, find

A^{-1} = -(1/225)[A^4 - 5A^3 + 33A^2 - 51A + 135 I].
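The rule is easy to carry out by machine today. The following is a minimal Python sketch (using NumPy; the function name and program layout are mine, not the book's) of the trace recurrences 2·5 and 2·6 and formula 2·4, applied to the matrix of exercise 3:

```python
import numpy as np

def inverse_by_traces(A):
    """Invert A via the trace recurrences 2.5-2.6 and formula 2.4.

    Computes s_r = tr(A^r), then the coefficients a_1..a_n of the
    characteristic function, and finally
    A^{-1} = -(1/a_n)[A^{n-1} + a_1 A^{n-2} + ... + a_{n-1} I].
    (The book notes only the diagonal of A^n is needed; for brevity
    the full power is formed here.)
    """
    n = A.shape[0]
    powers = [np.eye(n), A]                  # A^0, A^1, ...
    for _ in range(2, n + 1):
        powers.append(powers[-1] @ A)
    s = [np.trace(powers[r]) for r in range(1, n + 1)]   # s_1..s_n
    a = []                                               # a_1..a_n
    for k in range(1, n + 1):
        acc = s[k - 1] + sum(a[k - 1 - j] * s[j - 1] for j in range(1, k))
        a.append(-acc / k)
    # formula 2.4 (a_n must not vanish, i.e. A non-singular)
    inv = powers[n - 1].copy()
    for j in range(1, n):
        inv += a[j - 1] * powers[n - 1 - j]
    return -inv / a[-1], s, a

A = np.array([[15, 11, 6, -9, -15],
              [ 1,  3, 9, -3,  -8],
              [ 7,  6, 6, -3, -11],
              [ 7,  7, 5, -3, -11],
              [17, 12, 5, -10, -16]], dtype=float)
Ainv, s, a = inverse_by_traces(A)
```

A run reproduces s_1 = 5 and s_2 = -41 and hence a_1 = -5, a_2 = 33, and the product A·A^{-1} comes out as the unit matrix to rounding error.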
CHAPTER 3

DIFFERENTIAL AND INTEGRAL CALCULUS OF MATRICES
Power Series in Matrices. Before we discuss the subject of power series, it is convenient to make a few introductory remarks on general series in matrices. Let A_0, A_1, A_2, A_3, ... be an infinite sequence of matrices of the same type (i.e., same number of rows and columns) and let

S_p = A_0 + A_1 + A_2 + ... + A_p

be the matric sum of the matrices A_0, A_1, A_2, ..., A_p. If every element in the matrix S_p converges (in the ordinary numerical sense) as p tends to infinity, then by S = lim_{p→∞} S_p we shall mean the matrix S of the limiting elements. If the matrix S = lim_{p→∞} S_p exists in the above sense, we shall say, by definition, that the matric infinite series Σ_{r=0}^∞ A_r converges to the matrix S.

Example

Take A_0 = I, A_1 = I, A_2 = (1/2!)I, A_3 = (1/3!)I, ..., A_p = (1/p!)I. Then

S_p = A_0 + A_1 + A_2 + ... + A_p = (1 + 1 + 1/2! + 1/3! + ... + 1/p!)I.

Hence, on recalling the expansion for the exponential e, we find that lim_{p→∞} S_p = eI. In other words, Σ_{r=0}^∞ A_r = eI.

If A is a square matrix and the a_1, a_2, ... are numbers, one can consider matric power series in A

Σ_{r=0}^∞ a_r A^r.

In other words, matric power series are particular matric series in which each matrix A_r is of the special type† A_r = a_r A^r, where A^r is the rth power of a square matrix A. (A^0 = I is the identity matrix.) Clearly matric polynomials (see Chapter 2) are special matric power series in which all the numbers a_i after a certain value of i are zero. An important example of a matric power series is the matric exponential function e^A defined by the following matric power series:

e^A = I + A + (1/2!)A^2 + (1/3!)A^3 + ....

† The index r is not summed.
The following properties of the matrix exponential have been used frequently in investigations on the matrix calculus:

1. The matric power series expansion for e^A is convergent¹ for all square matrices A.
2. e^A e^B = e^B e^A = e^{A+B} whenever A and B are commutative matrices, i.e., whenever AB = BA.
3. e^A e^{-A} = e^{-A} e^A = I. (These relations express the fact that e^{-A} is the inverse matrix of e^A.)

Every numerical power series has its matric analogue. However, the corresponding matric power series have more complicated properties; property 2, for example, holds only for commutative matrices. Other examples are, say, the matric sine, sin A, and the matric cosine, cos A, defined by

sin A = A - (1/3!)A^3 + (1/5!)A^5 - ...,

cos A = I - (1/2!)A^2 + (1/4!)A^4 - ....

The usual trigonometric identities are not always satisfied by sin A and cos A for arbitrary matrices.
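These series can be evaluated numerically by straightforward truncation. Here is a small Python sketch (NumPy; the helper names and the test matrices are my own choices, not the book's) of e^A, sin A, and cos A, illustrating property 3 above:

```python
import numpy as np
from math import factorial

def matrix_series(A, coef, terms=30):
    """Sum coef(r) * A^r for r = 0..terms-1, with A^0 = I."""
    n = A.shape[0]
    S, P = np.zeros((n, n)), np.eye(n)   # running sum and running power A^r
    for r in range(terms):
        S += coef(r) * P
        P = P @ A
    return S

def m_exp(A):
    return matrix_series(A, lambda r: 1.0 / factorial(r))

def m_sin(A):
    return matrix_series(
        A, lambda r: 0.0 if r % 2 == 0 else (-1.0) ** ((r - 1) // 2) / factorial(r))

def m_cos(A):
    return matrix_series(
        A, lambda r: 0.0 if r % 2 else (-1.0) ** (r // 2) / factorial(r))

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])   # nilpotent: A^2 = 0, so e^A = I + A exactly
E = m_exp(A)
```

For the nilpotent A above the series terminates, and E is exactly I + A; for any square matrix, m_exp(A) @ m_exp(-A) reproduces the unit matrix to rounding error, as property 3 asserts.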
Differentiation and Integration of Matrices Depending on a Numerical Variable. Let A(t) be a matrix depending on a numerical variable t, so that the elements of A(t) are numerical functions of t:

A(t) = ‖ a_1^1(t)  a_2^1(t)  ...  a_n^1(t)
         a_1^2(t)  a_2^2(t)  ...  a_n^2(t)
         ...
         a_1^n(t)  a_2^n(t)  ...  a_n^n(t) ‖.

Then we define the derivative of A(t), written dA(t)/dt, as the matrix of the derivatives of the elements,

dA(t)/dt = ‖ da_j^i(t)/dt ‖.

Similarly we define the integral of A(t) as the matrix of the integrals of the elements,

∫A(t) dt = ‖ ∫a_j^i(t) dt ‖.
It is no mathematical feat to show that differentiation of matrices has the following properties:

(3·1)  d[A(t) + B(t)]/dt = dA(t)/dt + dB(t)/dt,

(3·2)  d[A(t)B(t)]/dt = (dA(t)/dt)B(t) + A(t)(dB(t)/dt),

(3·3)  d[A(t)B(t)C(t)]/dt = (dA(t)/dt)B(t)C(t) + A(t)(dB(t)/dt)C(t) + A(t)B(t)(dC(t)/dt),

etc. There are important immediate consequences of properties 3·2 and 3·3. For example, from 3·2 and A^{-1}(t)A(t) = I, we see that

(3·4)  dA^{-1}(t)/dt = -A^{-1}(t)(dA(t)/dt)A^{-1}(t).

Also, from 3·3, we obtain

(3·5)  dA^3(t)/dt = (dA(t)/dt)A^2(t) + A(t)(dA(t)/dt)A(t) + A^2(t)(dA(t)/dt).

There are similar formulas for the derivative of any positive integral power of A(t). If t is a real variable and A a constant square matrix, then one obtains

d(t^r A)/dt = r t^{r-1} A.

Then, with the usual term-by-term differentiation of the numerical exponential, the following differentiation can be justified:

(3·6)  d(e^{tA})/dt = A + tA^2 + (t^2/2!)A^3 + (t^3/3!)A^4 + ... = A e^{tA} = e^{tA} A.
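Formula 3·4 is easy to confirm numerically by central differences. A hedged Python sketch (NumPy; the particular matrix function A(t) below is an arbitrary choice of mine, picked only to be smooth and invertible):

```python
import numpy as np

def A_of_t(t):
    # an arbitrary smoothly varying, invertible 2x2 matrix
    return np.array([[2.0 + t, t ** 2],
                     [np.sin(t), 3.0]])

def dA_of_t(t):
    # its elementwise derivative, per the definition above
    return np.array([[1.0, 2.0 * t],
                     [np.cos(t), 0.0]])

t, h = 0.7, 1e-6
Ainv = np.linalg.inv(A_of_t(t))
# left side of 3.4, approximated elementwise by central differences
lhs = (np.linalg.inv(A_of_t(t + h)) - np.linalg.inv(A_of_t(t - h))) / (2 * h)
# right side of 3.4: -A^{-1}(t) (dA/dt) A^{-1}(t)
rhs = -Ainv @ dA_of_t(t) @ Ainv
```

The two sides agree to roughly the square of the step size, as one expects of a central difference.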
There is an important theorem in the matrix calculus that turns up in the mathematical theory of aircraft flutter (see Chapter 7). The proof, into which we cannot enter here, makes use of the modern theory of functionals.

THEOREM. If F(λ) is a power series that converges for all λ, then the matric power series F(A) can be computed by the expansion²

(3·7)  F(A) = Σ_{k=1}^n F(λ_k) Π_{α≠k} [(λ_α I - A)/(λ_α - λ_k)],

where A is an n-rowed square matrix with n distinct characteristic roots λ_1, λ_2, ..., λ_n.

d/dt (∂T/∂q̇^i) = -∂V/∂q^i  (i = 1, 2, ..., n),

on using the notation q̇^i = dq^i/dt. If we use the explicit form for the
kinetic and potential energies, Lagrange's equations reduce to the system of n second-order differential equations

(5·1)  a_ij q̈^j = -b_ij q^j,

where we used the notation q̈^i = d^2 q^i/dt^2. If we define two square matrices†

A = ‖a_ij‖,  B = ‖b_ij‖,

and the unknown column matrix Q(t) = ‖q^i(t)‖, then we can write our differential equations 5·1 of motion as the one matric differential equation

(5·2)  A d^2 Q(t)/dt^2 = -BQ(t).
Since the kinetic energy is positive definite, it can be proved that the determinant |A| ≠ 0. Hence it follows from our discussion in Chapter 2 that the inverse matrix A^{-1} exists. On multiplying both sides of equation 5·2 on the left by A^{-1} and remembering that A^{-1}A = I, the unit matrix, we obtain the following equivalent matric differential equation

(5·3)  d^2 Q(t)/dt^2 = -CQ(t),

where C is the (constant) square matrix C = A^{-1}B. To summarize, we have the following result. If A and B are the constant square matrices of the coefficients of the kinetic and potential energies respectively, then the motion of our oscillatory system is governed by the matric differential equation 5·3.

Illustrative Example

Two equal masses, each of mass m, are connected by a spring with elastic constant k, while each mass is connected to a fixed wall by a spring with elastic constant k. The kinetic and potential energies of this two-degree-of-freedom problem are
T = (m/2)[(dq^1/dt)^2 + (dq^2/dt)^2],

V = (k/2)[(q^1)^2 + (q^2)^2 + (q^1 - q^2)^2],

† Recall the notations for matrices given in Chapter 1.
where q^1 and q^2 are the respective displacements of the centers of the two masses "parallel" to the springs and are measured from the equilibrium position in which all three springs are unstressed. Hence by a direct calculation from the kinetic and potential energies we find that the matric equation of motion is

(5·4)  d^2 Q(t)/dt^2 = -CQ(t),

where

C = ‖ 2k/m   -k/m
      -k/m   2k/m ‖.
If we define the column matrix R(t) = ‖dq^i(t)/dt‖ by dQ(t)/dt = R(t), we can write the second-order matric differential equation 5·4 as a first-order matric differential equation

(5·5)  dS(t)/dt = US(t),

where

S(t) = ‖ q^1(t)
         q^2(t)
         r^1(t)
         r^2(t) ‖

and

(5·6)  U = ‖  0      0     1  0
              0      0     0  1
            -2k/m   k/m    0  0
              k/m  -2k/m   0  0 ‖.
The characteristic equation of the matrix U turns out to be

λ^4 + (4k/m)λ^2 + 3k^2/m^2 = 0.

Now k/m > 0, so that there are four distinct pure imaginary characteristic roots of U given by

(5·7)  λ_1 = i√(k/m),  λ_2 = -i√(k/m),  λ_3 = i√(3k/m),  λ_4 = -i√(3k/m).
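The roots 5·7 can be checked by machine. A short Python sketch (NumPy), taking k = m = 1 for convenience (a normalization of mine, not the book's):

```python
import numpy as np

k = m = 1.0
C = np.array([[2 * k / m, -k / m],
              [-k / m, 2 * k / m]])
# U from 5.6, acting on S = (q1, q2, r1, r2): dS/dt = U S
U = np.block([[np.zeros((2, 2)), np.eye(2)],
              [-C,               np.zeros((2, 2))]])
roots = np.linalg.eigvals(U)
# expected: +-i sqrt(k/m) and +-i sqrt(3k/m), all pure imaginary
```

The computed roots have vanishing real parts and imaginary parts ±1 and ±√3, in agreement with 5·7.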
Exercise

A shaft of length 2l, fixed at one end, carries one disk at the free end and another in the middle. If I_0 is the moment of inertia of each disk, and q^1, q^2 are the respective angular deflections of the two disks, then the kinetic and potential energies are

T = (I_0/2)[(dq^1/dt)^2 + (dq^2/dt)^2],

V = (τ/2l)[(q^1)^2 + (q^2 - q^1)^2],

under the assumption that the shaft has a uniform torsional stiffness τ. Find the matric differential equation of motion. Write this equation as a system of two first-order matric equations. Discuss the solutions of this system and then the motion of the disks.
CHAPTER 6

MATRIX METHODS IN PROBLEMS OF SMALL OSCILLATIONS (Continued)

Calculation of Frequencies and Amplitudes. Let us inquire into the pure harmonic solutions of our differential equation of motion

(6·1)  d^2 Q(t)/dt^2 = -CQ(t),

where C = A^{-1}B. We thus seek solutions of 6·1 of type

(6·2)  Q(t) = sin (ωt + φ)Γ,

where ω is an angular frequency, φ an arbitrary phase angle, and Γ a column matrix of amplitudes. On substituting 6·2 in 6·1 we obtain

-ω^2 sin (ωt + φ)Γ = -sin (ωt + φ)CΓ.

Hence a necessary and sufficient condition that 6·2 be a solution of 6·1 is that the frequency ω and the corresponding column matrix Γ of amplitudes satisfy the matrix equation

(6·3)  (ω^2 I - C)Γ = 0.

In order that there exist a solution matrix Γ ≠ 0 of 6·3, it is clear from the theory of systems of linear homogeneous algebraic equations that ω^2 must be a characteristic root of the matrix C. Since C = A^{-1}B, we verify immediately the statement that

(6·4)  ω^2 I - C = A^{-1}(ω^2 A - B).

On recalling that the determinant of the product of two matrices is equal to the product of their determinants, we see that the determinant

|A^{-1}(ω^2 A - B)| = |A^{-1}| |ω^2 A - B|,

and hence, by 6·4,

|ω^2 I - C| = |A^{-1}| |ω^2 A - B|.

But |A^{-1}| ≠ 0, so that the characteristic roots of the matrix C are identical with the roots of the "frequency" equation

(6·5)  |ω^2 A - B| = 0.

Since the kinetic and potential energies are positive definite quadratic forms, it can be proved (see any book on dynamics such as Whittaker's) that all the roots of the frequency equation are positive. Hence all the characteristic roots of the matrix C = A^{-1}B are positive.
Since the potential energy is a positive definite quadratic form, it follows that |B| ≠ 0 and hence that B^{-1} exists. Therefore C^{-1} exists and is given by C^{-1} = B^{-1}A. On multiplying both sides of equation 6·1 by C^{-1}, we obtain the matric differential equation

(6·6)  Q(t) = -D d^2 Q(t)/dt^2,

where D = C^{-1} = B^{-1}A. This equation is obviously equivalent to equation 6·1. If we now proceed with 6·6 as we did with 6·1, we are led to the equation

(6·7)  (I - ω^2 D)Γ = 0.

This equation can also be derived by operating directly with equation 6·3. It is clear from equations 6·3 and 6·7 that the characteristic roots of the matrix D = C^{-1} are the corresponding reciprocals of the characteristic roots of the matrix C. We shall call D the dynamical matrix. Let us write 6·7 in the equivalent form

(6·8)  Γ = ω^2 DΓ.

The classical method of finding the frequencies and amplitudes of our oscillating system consists in first finding the frequencies by solving the frequency equation 6·5, or equivalently in finding the characteristic roots of the matrix C = A^{-1}B, and then in determining the amplitudes by solving the system of linear homogeneous equations that corresponds to the matrix equation 6·3. Such a direct way of calculating the frequencies and amplitudes often involves laborious calculations. For approximate numerical calculations, the method of successive approximations when applied to equation 6·8 greatly reduces the laborious calculations. This is especially true when only the fundamental frequency (lowest frequency) and the corresponding amplitudes are desired.¹ We shall assume now that all the frequencies of our oscillating system are distinct. The method of successive approximations for equation 6·8 is as follows. Let Γ_0 be an arbitrarily given column matrix. Define

Γ_1 = ω^2 DΓ_0,
Γ_2 = ω^2 DΓ_1,

and in general

Γ_r = ω^2 DΓ_{r-1}.
By successive use of this recurrence relation we can express Γ_r in terms of Γ_0. In fact, we have

(6·9)  Γ_r = (ω^2)^r D^r Γ_0,

where D^r is the rth power of the dynamical matrix D. Now it can be shown that for large r the ratio of the elements of the column matrix D^{r-1}Γ_0 to the corresponding elements of the column matrix D^r Γ_0 is approximately a constant equal to the square of the fundamental frequency ω_1 of our fundamental mode of oscillation with distinct frequencies. The matrix Γ_0 is only restricted by the non-vanishing of R (see equality 6·11 below). The proof of this result is a little involved and makes use of the Cayley-Hamilton theorem, a theorem† of Sylvester, and a few other theorems on matrices.² These theorems are instrumental in showing that, for r large enough, the following approximate equality holds:

(6·10)  D^r Γ_0 ≈ λ_1^r R ‖a^i‖,

where (a) λ_1 > λ_2 > ... > λ_n are the characteristic roots of D, and hence λ_1 = 1/ω_1^2 in terms of the fundamental frequency ω_1; (b) the numbers a^1, a^2, ..., a^n are proportional to the amplitudes of the fundamental mode of oscillation; and (c)

(6·11)  R = A_i γ_0^i.

In 6·11, the γ_0^i are the elements of the arbitrarily chosen column matrix Γ_0, and the A_1, ..., A_n are n constants that are themselves obtainable by a successive approximation method. Clearly, D^r Γ_0 is a column matrix. Hence the approximate formula 6·10 shows that, for r large enough and for an arbitrarily chosen Γ_0 such that R ≠ 0, the column matrix D^r Γ_0 has elements proportional to the amplitudes of the fundamental mode of oscillation. All this is subject to the restriction that all the frequencies are distinct. On using equation 6·3 instead of 6·7, one can similarly obtain the
† F(A) = Σ_{k=1}^n F(λ_k)G_k, where

G_k = Π_{α≠k} [(λ_α I - A)/(λ_α - λ_k)]

and λ_1, ..., λ_n are characteristic roots of A; see the discussion in Chapter 3.
greatest frequency and corresponding amplitudes of our oscillating system. The intermediate overtones and corresponding amplitudes can be obtained by the above successive approximation methods on reducing the number of degrees of freedom successively by one. Anyone interested in these topics will find the following paper by W. J. Duncan and A. R. Collar very useful: "A Method for the Solution of Oscillation Problems by Matrices," Philosophical Magazine and Journal of Science, vol. 17 (1934), pp. 865-909. By approximating oscillating continuous systems, such as beams, by oscillating systems with a large but finite number of degrees of freedom, the Duncan-Collar paper shows how the methods of this chapter are applicable in solving oscillation problems for continuous systems.
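The successive-approximation scheme of equations 6·8 to 6·10 is what is nowadays called the power method. A minimal Python sketch (NumPy), applied to the two-mass system of Chapter 5 with k = m = 1 (a choice of data made here for illustration): the largest characteristic root of the dynamical matrix D = C^{-1} is λ_1 = 1/ω_1^2, and the iterates line up with the fundamental mode.

```python
import numpy as np

C = np.array([[2.0, -1.0],
              [-1.0, 2.0]])        # A = I, B = C for k = m = 1
D = np.linalg.inv(C)               # the dynamical matrix

gamma = np.array([1.0, 0.3])       # arbitrary starting column with R != 0
for _ in range(50):
    nxt = D @ gamma
    lam = nxt[0] / gamma[0]        # element ratio -> lambda_1 = 1 / omega_1^2
    gamma = nxt / np.linalg.norm(nxt)

omega1 = 1.0 / np.sqrt(lam)        # fundamental (lowest) frequency
```

For this system the characteristic roots of C are 1 and 3, so the iteration converges to λ_1 = 1, ω_1 = 1, and to a mode with both masses moving in unison, the in-phase fundamental mode.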
CHAPTER 7

MATRIX METHODS IN THE MATHEMATICAL THEORY OF AIRCRAFT FLUTTER

In recent years a group of phenomena known under the caption "flutter" has engaged the attention of aeronautical engineers. The vibrations taking place in flutter phenomena can often lead to loss of control or even to structural failure in such aircraft parts as wing, aileron, and tail. Such dangerous situations may arise when the airplane is flown at a high speed. It is of the greatest practical importance therefore so to design the plane as to have the maximum operating speed less than the critical speed at which flutter occurs. Unfortunately experiments in wind tunnels are idealized and difficult, and actual flight testing is obviously highly dangerous. It is here that mathematics enters the stage at a most opportune moment. Although the results of mathematical theories of flutter are now being applied in the design of aircraft, the need for an adequate mathematical theory is becoming critical. There is no time in this brief set of mathematical lectures to deal adequately with the present simplified mathematical theories of the mechanism of flutter. We shall only give the matric form for the equations of motion and say a few words about the approximate solutions with the aid of matrix iteration methods.

The vibrations of an airplane wing and aileron can be considered as those of a mechanical system with three degrees of freedom: the bending and twisting of the wing account for two degrees of freedom, and the relative deflection of the aileron gives rise to the third degree of freedom. Not only is the system non-conservative, but there is the additional complication of damping forces leading to terms depending on the velocities in the equations of motion. The differential equations of motion are of type

(7·1)  a_ij d^2 q^j(t)/dt^2 + c_ij dq^j(t)/dt + b_ij q^j(t) = 0,

a system of three linear differential equations in the three unknowns q^1(t), q^2(t), q^3(t). Since there are three degrees of freedom, all indices have the range 1 to 3. The constant coefficients a_ij, b_ij, c_ij are computed from a large number of aerodynamic constants of our aircraft
structure; see T. Theodorsen's "General Theory of Aerodynamic Instability and the Mechanism of Flutter," N.A.C.A. Report 496, for 1934, pp. 413-433, especially pp. 419-420.

Now the general structure of equations 7·1 differs from the equations of motion of the preceding two chapters in that b_ij ≠ b_ji (giving rise to a non-conservative system) and in the presence of the linear damping terms c_ij dq^j(t)/dt. If we define A = ‖a_ij‖, B = ‖b_ij‖, C = ‖c_ij‖, Q(t) = ‖q^i(t)‖, then the equations of motion 7·1 can be written as the one matric differential equation

(7·2)  A d^2 Q(t)/dt^2 + C dQ(t)/dt + BQ(t) = 0

in terms of the three known constant matrices A, B, C and the unknown column matrix Q(t). As we are interested in small oscillations around an unstable point of equilibrium, it is to be expected that complex imaginary frequencies will play a role in the work. Since A arises from the kinetic energy, A^{-1} exists, and hence 7·2 is equivalent to

(7·3)  d^2 Q(t)/dt^2 = -A^{-1}C dQ(t)/dt - A^{-1}BQ(t).
We can replace the one second-order differential equation 7·3 by an equivalent pair of first-order differential equations with the column matrices Q(t) and R(t) as unknowns:

(7·4)  dQ(t)/dt = R(t),
       dR(t)/dt = -A^{-1}BQ(t) - A^{-1}CR(t).

Define the column matrix of six elements

S(t) = ‖ Q(t)
         R(t) ‖

and the constant square matrix of six rows

(7·5)  U = ‖  0         I
            -A^{-1}B  -A^{-1}C ‖,

where 0 and I are the three-rowed zero and unit matrices respectively. Then equations 7·4 can be written as the one first-order matric differential equation

(7·6)  dS(t)/dt = US(t).
We are led therefore to consider solutions

S(t) = e^{λt}Λ  (Λ a constant column matrix of six elements)

of 7·6. This obviously leads us to the equation

(7·7)  (λI - U)Λ = 0,

where I is the six-row unit matrix. To solve the problem we must get good approximations to the values of λ and the matrix Λ that will satisfy 7·7. The matrix iteration method for small oscillations of conservative systems can now be applied with some modifications made necessary by the fact that the possible values of λ in 7·7 are in general complex imaginary. If λ_1, λ_2, ..., λ_6 are the characteristic roots of the matrix U, arranged so that their moduli† are in descending order, i.e., |λ_1| > |λ_2| > ... > |λ_6|, then the characteristic root λ_1 with the largest modulus can be obtained by the methods of the previous chapter. Some further aids in the computation of the real and imaginary parts of complex characteristic roots are given on pp. 148-150 and 327-331 of the Frazer-Duncan-Collar book on matrices. A more readable and self-contained account is given in the paper by W. J. Duncan and A. R. Collar entitled "Matrices Applied to the Motions of Damped Systems," Phil. Mag., vol. 19 (1935), pp. 197-219. An illuminating discussion of a specialized flutter problem with two degrees of freedom is given in a 1940 book by Kármán and Biot, Mathematical Methods in Engineering, pp. 220-228. It would be interesting and instructive to solve such specialized flutter problems with the aid of the matrix calculus.

Another useful method of solving flutter problems is the combination of matrix methods and Laplace transform methods. The Laplace transform of a function x(t) is a function x̄(p) defined by

x̄(p) = ∫_0^∞ e^{-pt} x(t) dt.

If one is willing to omit the proofs of one or two theorems, the whole Laplace transform theory needed does not require one to be conversant with the residue theory of complex variable theory. For such an elementary treatment of Laplace transforms see Operational Calculus in Applied Mathematics by Carslaw and Jaeger, Chapters I-III. The methods given there can be immediately extended in the obvious way to apply directly to the matric differential equations 7·3 for flutter problems. A good table of Laplace transforms together with mechanical or electric methods can cut down the labor of flutter calculations

† The modulus of a complex number z = x + √(-1) y is denoted by |z| and is defined by |z| = √(x^2 + y^2).
materially. Unfortunately there is no time to take up these matters in detail in our brief introductory treatment.

Exercises

1. Show that the matric differential equation 7·6 for flutter can be written as

S(t) = U^{-1} dS(t)/dt,

where U^{-1}, the inverse matrix of U, is given by the six-row square matrix

U^{-1} = ‖ -B^{-1}C  -B^{-1}A
            I          0 ‖.
2. A model airplane wing is placed at a small angle of incidence in a uniform air stream. The three degrees of freedom are the wing bending, wing twist, and aileron angle measured relative to the wing chord at the wing tip. When the wind speed is 12 feet per second, the matrices for the differential equations of flutter are as follows. The data are obtained from R. A. Frazer and W. J. Duncan, "The Flutter of Aeroplane Wings," Reports and Memoranda of the Aeronautical Research Committee, No. 1155, August, 1928.

A = ‖ 17.6     0.128    2.89
      0.128    0.00824  0.0413
      2.89     0.0413   0.725 ‖,

B = ‖ 121.042  1.89     15.9497
      0        0.027    0.0145
      11.9097  0.364    15.4722 ‖,

C = ‖ 7.65833  0.245    2.10
      0.023    0.0104   0.0223
      0.60     0.0756   0.658333 ‖ = matrix of damping coefficients.

Show that

A^{-1} = ‖  0.170883    1.06301   -0.741731
            1.06301   176.433    -14.2880
           -0.741731  -14.2880     5.14994 ‖

and that the matric differential equation for flutter is dS(t)/dt = US(t), where the "flutter matrix" U is given by

U = ‖  0          0          0         1          0          0
       0          0          0         0          1          0
       0          0          0         0          0          1
     -11.8502   -0.08168    8.73526  -0.888089   0.003153   0.105747
      41.4969   -1.57195  201.554    -3.62604   -1.01517    3.23949
      28.4464   -0.086931 -67.6433    2.91908   -0.059016  -1.51412 ‖.

For the lengthy details of the calculations of flutter frequencies and amplitudes see the Phil. Mag. 1935 paper by Duncan and Collar.
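The numbers in this exercise can be checked by machine. A Python sketch (NumPy) building A^{-1} and the flutter matrix U = ‖0, I; -A^{-1}B, -A^{-1}C‖ from the matrices as printed above; since the printed data are rounded, agreement with the printed A^{-1} and U entries is only to three or four significant figures:

```python
import numpy as np

A = np.array([[17.6, 0.128, 2.89],
              [0.128, 0.00824, 0.0413],
              [2.89, 0.0413, 0.725]])
B = np.array([[121.042, 1.89, 15.9497],
              [0.0, 0.027, 0.0145],
              [11.9097, 0.364, 15.4722]])
C = np.array([[7.65833, 0.245, 2.10],
              [0.023, 0.0104, 0.0223],
              [0.60, 0.0756, 0.658333]])

Ai = np.linalg.inv(A)
Z, I = np.zeros((3, 3)), np.eye(3)
U = np.block([[Z, I],
              [-Ai @ B, -Ai @ C]])   # the flutter matrix of 7.5
```

The leading entries of the lower blocks reproduce the printed values, e.g. U[3,0] ≈ -11.85 and U[4,2] ≈ 201.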
More recent developments in aircraft design require an extension of the flutter theory to handle four-degree- (or more) of-freedom problems in which the motion of the tab defines the fourth degree† of freedom with generalized coordinate γ. The aerodynamic forces and moments are obtained theoretically in accordance with T. Theodorsen's and I. E. Garrick's investigations and not from wind-tunnel data. It is for this reason that the coefficients in the differential equations of motion will be in general complex. See Fig. 7·1. The four differential equations of motion are of the form‡

(7·8)  a_ij d^2 q^j/dt^2 + c_ij dq^j/dt + b_ij q^j = 0,

FIG. 7·1. (Sketch of the wing section, showing the elastic axis.)
where all the indices, in contradistinction to 7·1, have the range 1 to 4 and the coefficients a_ij, b_ij, and c_ij are in general complex. The flutter velocity v appears in general in the coefficients c_ij and b_ij and is replaced by the quantity bω/k, where b, ω, and k are the airfoil semi-chord, the flutter frequency, and the flutter parameter k = ωb/v respectively. This yields a system of four linear differential equations 7·8

† In accordance with the flutter notation used in this country, q^1 = h, q^2 = α, q^3 = β, q^4 = γ. See Fig. 7·1.
‡ The differential equations of motion and the contributions of the aerodynamic forces and moments to the coefficients of the differential equations for the four-degree-of-freedom problem are given in the Douglas reports.
in which c_ij and b_ij are expressed as functions of the flutter frequency ω and not of the flutter velocity v. If one then considers solutions of the form

q^i = q_0^i e^{iωt}  (i = √-1)

of the differential equations, one is led to a system of four linear algebraic equations with complex coefficients, some of which are functions of ω and the structural damping coefficients g_ii. The damping coefficients g_ii are defined by

F_i = √-1 g_ii K_ii q^i  (g_ij = 0 if i ≠ j),

where F_i is the damping force in the q^i'th direction and the K_ii are the spring constants. Upon obtaining the characteristic roots of the matrix of the linear algebraic equations by matric iteration methods, one ultimately finds the flutter velocity v as a function of the flutter frequency ω and the structural damping coefficients g_ii. Flutter is likely to occur if the structural damping coefficient obtained from the pure imaginary part of the characteristic root exceeds 0.03, provided that no extraneous damping devices are used. The algebraic equations are so arranged, and this constitutes an important aspect of the development in that it lends itself to matric iteration procedures, that the characteristic roots are of the form

z = (c/ω^2)(1 + ig)  (i = √-1),

where c is some constant, ω is the flutter frequency, and g is a structural damping coefficient.

The flutter analyst is attempting to approximate the actual flutter characteristics of the airplane by representing them by as small a number of degrees of freedom as possible. The design of faster and larger aircraft requires the consideration of a larger number of degrees of freedom so as to make the flutter analysis an adequate approximation. When many degrees of freedom are required to represent the flutter characteristics of an airplane, the need for matrix methods becomes acute. Matrix methods† also serve to improve the theory of the mechanism of flutter.

† Matrix methods are also used in treating other phenomena related to flutter. See Douglas reports.
CHAPTER 8

MATRIX METHODS IN ELASTIC DEFORMATION THEORY

Although the tensor calculus is the most natural and powerful mathematical method in the treatment of the fundamentals of elastic deformation of bodies, the matrix calculus can also be used to advantage in furnishing a short and neat treatment. This chapter is purely introductory and suggestive.

Let a medium be acted on by deforming forces. The position of the medium before and after deformation will be called the initial and final state of the medium respectively. Let a^1, a^2, a^3 be the rectangular cartesian coordinates of a representative particle of the medium in the initial state, and x^1, x^2, x^3 the rectangular cartesian coordinates of the corresponding particle in the final state. Then the elastic deformation is represented by equations giving the x^i as functions of a^1, a^2, a^3. Write A = ‖a^i‖ and X = ‖x^i‖ for the column matrices of initial and final coordinates, and F = ‖∂x^i/∂a^j‖ for the square matrix of the partial derivatives, so that dX = F dA.

DEFINITION. The adjoint M* of a matrix M is the matrix obtained from M by interchanging the rows and columns of M.
Thus A* and X* are row matrices, while

F* = ‖ ∂x^1/∂a^1  ∂x^2/∂a^1  ∂x^3/∂a^1
       ∂x^1/∂a^2  ∂x^2/∂a^2  ∂x^3/∂a^2
       ∂x^1/∂a^3  ∂x^2/∂a^3  ∂x^3/∂a^3 ‖.
It can be proved by a routine procedure that (M_1 M_2)* = M_2* M_1*. In other words, the adjoint of the product of two matrices is the product of their adjoints in the reverse order. For example, dX* = dA*F*. Hence the square of the differential line element in the final state of the medium will be

(8·4)  ds̄^2 = dA* F*F dA,

since ds̄^2 = (dx^1)^2 + (dx^2)^2 + (dx^3)^2 = dX* dX. By a direct computation it can be shown that the matrix

(8·5)  F*F = ‖ψ_ij‖,

where

(8·6)  ψ_ij = Σ_{k=1}^3 (∂x^k/∂a^i)(∂x^k/∂a^j).

Note that ψ_ij = ψ_ji. This is expressed by saying that F*F is a symmetric matrix. The square of the differential line element in the initial state is

(8·7)  ds^2 = dA* dA  (= dA* I dA, where I is the unit matrix),

and hence with the aid of 8·4 we find the formula

(8·8)  ds̄^2 - ds^2 = dA*(F*F - I) dA.

On defining the matrix, called the deformation or strain matrix,

(8·9)  H = (1/2)(F*F - I),

we find

(8·10)  ds̄^2 - ds^2 = 2 dA* H dA.
Now, if ds̄^2 = ds^2 for all particles of the initial state and for all dA, we have, by definition, a rigid displacement of the medium from the initial state to the final state. A glance at 8·10 shows that a necessary and sufficient condition that the change of the medium from the initial to the final state be a rigid displacement is that the strain matrix H be a zero matrix. In other words, when H = 0, the medium is not deformed or strained but is merely transported to a different position by a rigid displacement. This property then justifies the terminology "strain matrix H," since H measures, in a sense, the amount of strain or deformation undergone by the medium. It is clear from definition 8·9 that the strain matrix H is a symmetric matrix.

Let η_ij be the elements (more commonly called components in elasticity theory) of the strain matrix H, i.e., H = ‖η_ij‖. On using result 8·5 and definition 8·9, we see that

η_ij = (1/2)(Σ_{k=1}^3 (∂x^k/∂a^i)(∂x^k/∂a^j) - δ_ij),

where

δ_ij = 1 if i = j,  δ_ij = 0 if i ≠ j.

Let u^i = x^i - a^i, so that U = X - A (see Fig. 8·1). Define the matrix

V = ‖∂u^i/∂a^j‖

and obtain the relation

F = V + I.

Hence, from definition 8·9, for the strain matrix H we obtain

H = (1/2)[(V* + I)(V + I) - I].
Expanding the right-hand side gives

H = (1/2)[V*V + V* + V].

In the classical infinitesimal theory of elasticity only first-degree terms in the ∂u^i/∂a^j are kept. Hence, to the degree of approximation contemplated by the infinitesimal theory of elastic deformations, the strain matrix is given by

H = (1/2)(V* + V)

in rectangular cartesian coordinates. In other words, the components η_ij of H are given by the familiar formula

η_ij = (1/2)(∂u^i/∂a^j + ∂u^j/∂a^i).

From the symmetry of η_ij there are thus in general six distinct components of the strain matrix in three dimensions, while there are three components for plane elastic problems.
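The exact strain matrix 8·9 and its infinitesimal approximation are easy to compare numerically. A Python sketch (NumPy) for a simple shear, a deformation chosen here for illustration: x^1 = a^1 + γ a^2, x^2 = a^2, x^3 = a^3.

```python
import numpy as np

g = 1e-3                            # shear parameter gamma, kept small
# displacement gradient V = || du^i / da^j || for u^1 = g * a^2
V = np.array([[0.0, g, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
F = V + np.eye(3)                   # F = V + I

H_exact = 0.5 * (F.T @ F - np.eye(3))   # 8.9: H = (1/2)(F*F - I)
H_lin = 0.5 * (V.T + V)                 # the infinitesimal strain matrix
```

Both matrices have the shear components η_12 = η_21 = γ/2; the exact H also carries a second-order term γ²/2 in the (2,2) position, which is exactly what the infinitesimal theory discards.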
PART II. TENSOR CALCULUS AND ITS APPLICATIONS

CHAPTER 9

SPACE LINE ELEMENT IN CURVILINEAR COORDINATES

Introductory Remarks. The vague beginnings of the tensor calculus, or absolute differential calculus as it is sometimes called, can be traced back more than a century to Gauss's researches on curved surfaces. The systematic investigation of tensor calculi by a considerable number of mathematicians has taken place since 1920. With few exceptions, the applications of tensor calculus were confined to the general theory of relativity. The result was an undue emphasis on the tensor calculus of curved spaces as distinguished from the tensor calculus of Euclidean spaces. The subjects of elasticity¹ and hydrodynamics,² as studied and used by aeronautical engineers, are developed and have their being in plane and solid Euclidean space. It is for this reason that we shall be primarily concerned with Euclidean tensor calculus in this book. We shall, however, devote two chapters to curved tensor calculus in connection with the fundamentals of classical mechanics³ and fluid mechanics.

It is worthy of notice that the tensor calculus is a generalization of the widely studied differential calculus of freshman and sophomore fame. In fact, as we shall see, a detailed study of the classical differential calculus along a certain direction demands the introduction of the tensor calculus.
Notation and Summation Convention. Before we begin the study of tensor calculus, we must embark on some formal preliminaries including some matters of notation. Consider a linear function in the n real variables x, 1/, Z, " ' , w
ax + ~1/ + 'YZ +
(9·1) Define
(9·2)
•
.:. + ).w.
as = ~, as = 'Y, = X, :tJ = 1/, x8 = Z,
a.. = )., = w.
al = a,
"',
Xl
" ' , x"
42
NOTATION AND SUMMATION CONVENTION
43
We emphasize once for all that x^1, x^2, ⋯, x^n are n independent variables and not the first n powers of one variable x. In terms of the notations of 9·2 we can rewrite 9·1 in the form

(9·3)  a_1 x^1 + a_2 x^2 + ⋯ + a_n x^n,

or as

(9·4)  Σ_{i=1}^{n} a_i x^i.

The set of n integer values 1 to n is called the range of the index i in 9·4. A lower index i as in a_i will be called a subscript, and an upper index i as in x^i will be called a superscript. Throughout our work we shall adopt the following useful summation convention: The repetition of an index in a term once as a subscript and once as a superscript will denote a summation with respect to that index over its range. An index that is not summed out will be called a free index. In accordance with this convention, then, we shall write the sum 9·4 simply as

(9·5)  a_i x^i.

A summation index such as i in 9·5 is called a dummy or an umbral index, since it is immaterial what symbol is used for the index. For example, a_j x^j is the same sum as 9·5. All this is analogous to the (umbral) variable of integration x in an integral

∫_a^b f(x) dx.

Any other letter, say y, could be used in the place of x. Thus

∫_a^b f(y) dy = ∫_a^b f(x) dx.
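The summation convention maps directly onto `numpy.einsum`, whose index strings repeat a summed index just as 9·5 does. A minimal sketch (the particular numbers are ours, chosen only for illustration):

```python
import numpy as np

# a_i x^i for n = 4: the repeated index i is summed over its range, as in 9.5
a = np.array([2.0, -1.0, 0.5, 3.0])    # a_1, ..., a_4 (the alpha, beta, gamma, lambda of 9.1)
x = np.array([1.0, 4.0, 2.0, -2.0])    # x^1, ..., x^4

s = np.einsum('i,i->', a, x)           # = a_1 x^1 + a_2 x^2 + a_3 x^3 + a_4 x^4
assert np.isclose(s, a @ x)            # the same sum, written as an ordinary dot product

# the dummy index is umbral: renaming i to j changes nothing
assert np.isclose(np.einsum('j,j->', a, x), s)
```

The index string `'i,i->'` is the summation convention made explicit: a repeated index is contracted, and an empty output slot means no free index remains.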
Aside from compactness, the subscript and superscript notation together with the summation convention has advantages that will become evident later.

As a further illustration of the summation convention, consider the square of the line element

(9·6)  ds^2 = dx^2 + dy^2 + dz^2

in a three-dimensional Euclidean space with rectangular cartesian coordinates x, y, and z. Define

(9·7)  δ_{ij} = 1 if i = j,  δ_{ij} = 0 if i ≠ j,

and

(9·8)  y^1 = x, y^2 = y, y^3 = z.

Then 9·6 can be rewritten

(9·9)  ds^2 = Σ_{i=1}^{3} Σ_{j=1}^{3} δ_{ij} dy^i dy^j,

or again

(9·10)  ds^2 = δ_{ij} dy^i dy^j,

with the understanding that the range of the indices i and j is 1 to 3. Note that there are two summations in 9·10, one over the index i and one over the index j.

Let f(x^1, x^2, ⋯, x^n) be a function of n numerical variables x^1, x^2, ⋯, x^n; then its differential can be written

df = (∂f/∂x^i) dx^i

with the understanding that the summation convention has been extended so as to apply to repeated superscripts in differentiation formulas. We shall adhere to this extension of the summation convention.

It is worth while at this early stage to give an example of a tensor and show the fundamental nature of such a concept even for elementary portions of the usual differential and integral calculus. This will dispel, I hope, any illusions common among educated laymen that the tensor calculus is a very "highbrow" and esoteric subject and that its main applications are to the physical speculations of relativistic cosmology.
Euclidean Metric Tensor. In the following example, free as well as umbral indices will have the range 1 to 3, as we shall deal with a three-dimensional Euclidean space. Let

(9·11)  x^i = f^i(y^1, y^2, y^3)

be a transformation of coordinates from the rectangular cartesian coordinates y^1, y^2, y^3 to some general coordinates x^1, x^2, x^3, not necessarily rectangular cartesian coordinates; for example, they may be spherical coordinates. The inverse transformation of coordinates to 9·11 is the transformation of coordinates that takes one from the coordinates x^1, x^2, x^3 to the rectangular cartesian coordinates y^1, y^2, y^3. Let

(9·12)  y^i = g^i(x^1, x^2, x^3)

be the inverse transformation of coordinates to 9·11.
Example. Let y^1, y^2, y^3 be rectangular cartesian coordinates and x^1, x^2, x^3 polar spherical coordinates. The transformation of coordinates from rectangular cartesian to polar spherical coordinates is clearly

x^1 = √((y^1)^2 + (y^2)^2 + (y^3)^2),
x^2 = cos^{-1}( y^3 / √((y^1)^2 + (y^2)^2 + (y^3)^2) ),
x^3 = tan^{-1}(y^2/y^1).

(Fig. 9·1.)

The inverse transformation of coordinates is given by

y^1 = x^1 sin x^2 cos x^3,
y^2 = x^1 sin x^2 sin x^3,
y^3 = x^1 cos x^2.

The differentials of the transformation functions in 9·12 may be written

(9·13)  dy^i = (∂g^i/∂x^α) dx^α.

On using 9·13 we obtain, after an evident rearrangement, the formula

(9·14)  ds^2 = (∂g^i/∂x^α)(∂g^i/∂x^β) dx^α dx^β.

If we define the functions g_{αβ}(x^1, x^2, x^3) of the three independent variables x^1, x^2, x^3 by

(9·15)  g_{αβ}(x^1, x^2, x^3) = Σ_{i=1}^{3} (∂g^i/∂x^α)(∂g^i/∂x^β),

we see that the square of the line element in the general x^1, x^2, x^3 coordinates takes the form

(9·16)  ds^2 = g_{αβ} dx^α dx^β.

This is a homogeneous quadratic polynomial, called a quadratic differential form, in the three independent variables dx^1, dx^2, dx^3.

Caution: Once an index has been used in one summation of a series of repeated summations, it cannot be used again in another summation of the same series. For example, g_{αα} dx^α dx^α has a meaning and is equal to g_{11}(dx^1)^2 + g_{22}(dx^2)^2 + g_{33}(dx^3)^2, but that is not what one gets by carrying out the double repeated summation in 9·16. Expanded in extenso, 9·16 stands for

(9·17)  ds^2 = g_{11}(dx^1)^2 + 2g_{12} dx^1 dx^2 + 2g_{13} dx^1 dx^3 + g_{22}(dx^2)^2 + 2g_{23} dx^2 dx^3 + g_{33}(dx^3)^2.

The factor 2 in three of the terms in 9·17 comes from combining terms due to the fact that g_{αβ} is symmetric in α and β; a glance at the definition 9·15 shows that g_{αβ} = g_{βα} for each α and β, and hence g_{12} = g_{21}, g_{13} = g_{31}, g_{23} = g_{32}.
Now let x̄^1, x̄^2, x̄^3 be any chosen general coordinates, not necessarily distinct from the general coordinates x^1, x^2, x^3. Let

(9·18)  x^i = F^i(x̄^1, x̄^2, x̄^3)

be the transformation of coordinates from the general coordinates x̄^1, x̄^2, x̄^3 to the general coordinates x^1, x^2, x^3. Clearly the differentials dx^α have the form

(9·19)  dx^α = (∂x^α/∂x̄^γ) dx̄^γ.

Define the functions ḡ_{γδ}(x̄^1, x̄^2, x̄^3) of the three variables x̄^1, x̄^2, x̄^3 by

(9·20)  ḡ_{γδ}(x̄^1, x̄^2, x̄^3) = g_{αβ}(x^1, x^2, x^3) (∂x^α/∂x̄^γ)(∂x^β/∂x̄^δ).

Then, if we use 9·19 in 9·16, we obtain, with the aid of the definition 9·20, the formula

(9·21)  ds^2 = ḡ_{γδ}(x̄^1, x̄^2, x̄^3) dx̄^γ dx̄^δ,

which gives the square of the line element in the x̄^1, x̄^2, x̄^3 coordinates. We have thus arrived at the following result: If x^1, x^2, x^3 and x̄^1, x̄^2, x̄^3 are two arbitrarily chosen sets of general coordinates, and if the transformation of coordinates 9·18 from the x̄'s to the x's has suitable differentiability properties, then the coefficients g_{αβ}(x^1, x^2, x^3) of the square of the line element 9·16 in the x^1, x^2, x^3 coordinates are related to the coefficients ḡ_{γδ}(x̄^1, x̄^2, x̄^3) of the square of the line element 9·21 in the coordinates x̄^1, x̄^2, x̄^3 by means of the law of transformation 9·20. In each coordinate system with coordinates x^1, x^2, x^3, we have a set of functions g_{αβ}(x^1, x^2, x^3), called the components of the Euclidean metric tensor (field), and the components of the Euclidean metric tensor in any two coordinate systems with coordinates x^1, x^2, x^3 and x̄^1, x̄^2, x̄^3 respectively are related by means of the characteristic rule 9·20.
An analogous discussion can obviously be given for the line element and Euclidean metric tensor of the plane (a two-dimensional Euclidean space). We now have two coordinates instead of three, so that the range of the indices is 1 to 2. For example, the line element ds in rectangular coordinates (y^1, y^2),

ds^2 = (dy^1)^2 + (dy^2)^2,

will become

ds^2 = g_{αβ}(x^1, x^2) dx^α dx^β

in general coordinates (x^1, x^2), and the components of the plane Euclidean metric tensor g_{αβ}(x^1, x^2) will undergo the transformation

ḡ_{γδ}(x̄^1, x̄^2) = g_{αβ}(x^1, x^2) (∂x^α/∂x̄^γ)(∂x^β/∂x̄^δ).

Exercises
1. Find the components of the plane Euclidean metric tensor in polar coordinates (x^1, x^2) and the corresponding expression for the line element.

We have

x^1 = √((y^1)^2 + (y^2)^2),  x^2 = sin^{-1}( y^2 / √((y^1)^2 + (y^2)^2) ),

and

y^1 = x^1 cos x^2,  y^2 = x^1 sin x^2.

(Fig. 9·2.)

Hence

g_{ij}(x^1, x^2) = (∂y^α/∂x^i)(∂y^α/∂x^j) = (∂y^1/∂x^i)(∂y^1/∂x^j) + (∂y^2/∂x^i)(∂y^2/∂x^j).

Therefore

g_{11}(x^1, x^2) = cos^2 x^2 + sin^2 x^2 = 1,
g_{12}(x^1, x^2) = (cos x^2)(−x^1 sin x^2) + (sin x^2)(x^1 cos x^2) = 0 = g_{21}(x^1, x^2),
g_{22}(x^1, x^2) = (−x^1 sin x^2)^2 + (x^1 cos x^2)^2 = (x^1)^2.

The line element ds^2 = g_{αβ} dx^α dx^β in polar coordinates (x^1, x^2) is then

ds^2 = (dx^1)^2 + (x^1)^2 (dx^2)^2,

which in the usual notation is written ds^2 = dr^2 + r^2 dθ^2.
2. Find the components of the (space) Euclidean metric tensor and the expression for the line element in polar spherical coordinates.

Answer. g_{11}(x^1, x^2, x^3) = 1, g_{22}(x^1, x^2, x^3) = (x^1)^2, g_{33}(x^1, x^2, x^3) = (x^1)^2 (sin x^2)^2, and all other g_{ij}(x^1, x^2, x^3) = 0, so that

ds^2 = (dx^1)^2 + (x^1)^2 (dx^2)^2 + (x^1)^2 (sin x^2)^2 (dx^3)^2.
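Both exercises can be checked symbolically: definition 9·15 says the metric is JᵀJ, where J is the Jacobian of the inverse transformation 9·12. A sketch using sympy (the helper name `metric` is ours):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', positive=True)

def metric(y, x):
    """g_ab = sum_i (dg^i/dx^a)(dg^i/dx^b), i.e. definition 9.15 written as J^T J."""
    J = sp.Matrix(y).jacobian(sp.Matrix(x))
    return (J.T * J).applyfunc(sp.simplify)

# exercise 1, plane polar: y^1 = x^1 cos x^2, y^2 = x^1 sin x^2
g_polar = metric([x1*sp.cos(x2), x1*sp.sin(x2)], [x1, x2])
assert g_polar == sp.diag(1, x1**2)

# exercise 2, polar spherical: the inverse transformation 9.12 of the Example
g_sph = metric([x1*sp.sin(x2)*sp.cos(x3),
                x1*sp.sin(x2)*sp.sin(x3),
                x1*sp.cos(x2)], [x1, x2, x3])
assert g_sph == sp.diag(1, x1**2, x1**2*sp.sin(x2)**2)
```

The diagonal entries reproduce the answers above, and the vanishing off-diagonal entries confirm that both coordinate systems are orthogonal.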
CHAPTER 10

VECTOR FIELDS, TENSOR FIELDS, AND EUCLIDEAN CHRISTOFFEL SYMBOLS

The Strain Tensor. Another interesting example of a tensor is to be found in elasticity. Let a^1, a^2, a^3 be the curvilinear coordinates of a representative particle in an elastic medium, and let x^1, x^2, x^3 be the coordinates of the representative particle after an elastic deformation of the medium. (Fig. 10·1.)

Let

(10·1)  ds^2 = c_{αβ} da^α da^β

be the square of the line element in the medium, and let

(10·2)  ds̄^2 = g_{αβ} dx^α dx^β

be the corresponding square of the line element in the deformed medium induced by the elastic deformation whose equations are

(10·3)  x^i = f^i(a^1, a^2, a^3).

In terms of the coordinates x^1, x^2, x^3 the line element 10·1 can be written

(10·4)  ds^2 = h_{αβ} dx^α dx^β,

where

(10·5)  h_{αβ} = c_{γδ} (∂a^γ/∂x^α)(∂a^δ/∂x^β).
On subtracting corresponding sides of 10·2 and 10·4 we find

(10·6)  ds̄^2 − ds^2 = 2E_{αβ} dx^α dx^β,

if we define E_{αβ} by

(10·7)  E_{αβ} = ½ (g_{αβ} − h_{αβ}).

Now the E_{αβ} are functions of the coordinates x^1, x^2, x^3. If then we calculate ds̄^2 − ds^2 in any other curvilinear coordinates x̄^1, x̄^2, x̄^3, we would obtain by the method of the preceding chapter

ds̄^2 − ds^2 = 2Ē_{γδ} dx̄^γ dx̄^δ,

where

(10·8)  Ē_{γδ}(x̄^1, x̄^2, x̄^3) = E_{αβ}(x^1, x^2, x^3) (∂x^α/∂x̄^γ)(∂x^β/∂x̄^δ).

Because of the characteristic law 10·8, the E_{αβ} are the components of a tensor (field), and because E_{αβ} in 10·6 is a measure of the strain of the elastic medium, the E_{αβ} are the components of a strain tensor. We shall have a good deal to say about the strain tensor in some of the later chapters.
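As a concrete check on 10·5 and 10·7, consider the toy deformation x^i = (1 + ε) a^i in rectangular cartesian coordinates (a uniform stretch of our own choosing): the strain should reduce to ε δ_{αβ} to first order in ε. A numerical sketch:

```python
import numpy as np

eps = 1e-3                      # small uniform stretch: x^i = (1 + eps) a^i
F = (1.0 + eps) * np.eye(3)     # dx^i/da^j for this deformation

c = np.eye(3)                   # undeformed metric c_ab (cartesian a-coordinates)
g = np.eye(3)                   # deformed metric g_ab (cartesian x-coordinates)

A = np.linalg.inv(F)            # da^gamma/dx^alpha
h = A.T @ c @ A                 # 10.5: h_ab = c_gd (da^g/dx^a)(da^d/dx^b)
E = 0.5 * (g - h)               # 10.7: E_ab = (g_ab - h_ab)/2

# to first order in eps, E_ab ~ eps * delta_ab
assert np.allclose(E, eps * np.eye(3), rtol=2e-3)
```

Exactly, E_{11} = ½(1 − (1+ε)^{−2}) = ε − (3/2)ε² + ⋯, so the agreement is to first order, as the assertion's tolerance reflects.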
Scalars, Contravariant Vectors, and Covariant Vectors. We shall now begin the subject of the tensor calculus by defining the simplest types of tensors. An object is called a scalar (field) if in each coordinate system there corresponds a function, called a component, such that the relationship between the components in (x^1, x^2, x^3) coordinates and (x̄^1, x̄^2, x̄^3) coordinates respectively is

(10·9)  s̄(x̄^1, x̄^2, x̄^3) = s(x^1, x^2, x^3).

An object is called a contravariant vector field (an equivalent terminology is contravariant tensor field of rank one) if in each coordinate system there corresponds a set of three functions, called components, such that the relationship between the components in any two coordinate systems is given by the characteristic law

(10·10)  ξ̄^i(x̄^1, x̄^2, x̄^3) = ξ^α(x^1, x^2, x^3) (∂x̄^i/∂x^α).

An object is called a covariant vector field (an equivalent terminology is covariant tensor field of rank one) if in each coordinate system there corresponds a set of three functions, called components, such that the relationship between the components in any two coordinate systems is given by the characteristic law

(10·11)  η̄_i(x̄^1, x̄^2, x̄^3) = η_α(x^1, x^2, x^3) (∂x^α/∂x̄^i).

It is to be noticed at this point that the laws of transformation 10·10 and 10·11 are in general distinct, so that there is a difference between the notions of contravariant vector field and covariant vector field. However, if only rectangular cartesian coordinates are considered, this distinction disappears. It is for this reason that the notions of contravariant as distinguished from covariant vector fields are not introduced in elementary vector analysis. That the characteristic laws 10·10 and 10·11 are identical in rectangular cartesian coordinates follows from some calculations leading to the result

(10·12)  ∂x̄^i/∂x^α = ∂x^α/∂x̄^i

between rectangular coordinates (x^1, x^2, x^3) and any other rectangular coordinates (x̄^1, x̄^2, x̄^3).
Tensor Fields of Rank Two. In all three objects (scalar fields, contravariant vector fields, and covariant vector fields) there are components in any two coordinate systems, and the components in any two coordinate systems are related by characteristic transformation laws. We have to consider other objects, called tensor fields (of various sorts), whose components in any two coordinate systems are related by a characteristic transformation law. To shorten the statements of the following definitions we shall merely give the characteristic transformation law of components.

Covariant Tensor Field of Rank Two.

(10·13)  t̄_{αβ}(x̄^1, x̄^2, x̄^3) = t_{λμ}(x^1, x^2, x^3) (∂x^λ/∂x̄^α)(∂x^μ/∂x̄^β).

Contravariant Tensor Field of Rank Two.

(10·14)  t̄^{αβ}(x̄^1, x̄^2, x̄^3) = t^{λμ}(x^1, x^2, x^3) (∂x̄^α/∂x^λ)(∂x̄^β/∂x^μ).

Mixed Tensor Field of Rank Two.

(10·15)  t̄^α_β(x̄^1, x̄^2, x̄^3) = t^λ_μ(x^1, x^2, x^3) (∂x̄^α/∂x^λ)(∂x^μ/∂x̄^β).

Again because of relation 10·12, the difference between the above three types of tensor fields is non-existent as long as one considers only rectangular cartesian coordinates. It is worthy of notice at this point that the indices in the various characteristic transformation laws tell a story which depends on whether the index is a superscript, called a contravariant index, or a subscript, called a covariant index.

Before proceeding any further with the development of our subject, it would be illuminating to have some examples of vector fields and other tensor fields. Perhaps the most important example of a contravariant vector field is a velocity field. Suppose that the motion of a particle is governed by the differential equations

dx^i/dt = ξ^i(x^1, x^2, x^3),

where t is the time variable. If x̄^i = f^i(x^1, x^2, x^3) is a transformation of coordinates to new coordinates x̄^i, then

dx̄^i/dt = (∂x̄^i/∂x^j)(dx^j/dt) = (∂x̄^i/∂x^j) ξ^j(x^1, x^2, x^3).

Thus the components ξ̄^i(x̄^1, x̄^2, x̄^3) of the velocity field in the x̄^i coordinates are related to the components ξ^i(x^1, x^2, x^3) in the x^i coordinates by the rule

ξ̄^i(x̄^1, x̄^2, x̄^3) = (∂x̄^i/∂x^j) ξ^j(x^1, x^2, x^3),
which is precisely the contravariant vector field rule.

If s(x^1, x^2, x^3) is a scalar field, then the "gradient" ∂s/∂x^i gives the components of a covariant vector field. An important example of a scalar field is the potential energy of a moving particle.

We gave two examples of a covariant tensor field of rank two: the Euclidean metric tensor and the strain tensor. A little later in the chapter we shall give an example of a contravariant tensor field of rank two, the g^{αβ} associated with the Euclidean metric tensor g_{αβ}. As an example of a mixed tensor field of rank two, we have the mixed tensor field with constant components

δ^α_β = 0 if α ≠ β,  δ^α_β = 1 if α = β

in the x^i coordinates. But

δ̄^α_β(x̄^1, x̄^2, x̄^3) = δ^λ_μ (∂x̄^α/∂x^λ)(∂x^μ/∂x̄^β) = (∂x̄^α/∂x^λ)(∂x^λ/∂x̄^β) = δ^α_β.

Hence

δ̄^α_β(x̄^1, x̄^2, x̄^3) = 0 if α ≠ β,  δ̄^α_β(x̄^1, x̄^2, x̄^3) = 1 if α = β.
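Both the contravariant law 10·10 and the invariance of δ^α_β can be verified symbolically for the cartesian and polar pair of coordinate systems; a sketch (the circling particle is our own illustration):

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)

# x-bar = cartesian coordinates as functions of x = polar coordinates (r, theta)
xbar = sp.Matrix([r*sp.cos(th), r*sp.sin(th)])
J = xbar.jacobian(sp.Matrix([r, th]))              # the matrix of dxbar^i/dx^j
Jinv = J.inv().applyfunc(sp.simplify)              # dx^j/dxbar^i, in terms of (r, theta)

# rule 10.10: a particle circling at angular speed omega has polar components (0, omega)
omega = sp.symbols('omega')
xi_cart = (J * sp.Matrix([0, omega])).applyfunc(sp.simplify)
assert xi_cart == sp.Matrix([-omega*r*sp.sin(th), omega*r*sp.cos(th)])

# the mixed tensor delta^a_b keeps the same components:
# (dxbar^a/dx^l)(dx^l/dxbar^b) = delta^a_b, i.e. J times its inverse is the identity
assert (J * Jinv).applyfunc(sp.simplify) == sp.eye(2)
```

The cartesian components (−ωr sin θ, ωr cos θ) are the familiar tangential velocity of circular motion, recovered purely from the transformation law.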
In other words, not only are the components constant throughout space, but they are also the same constants in all coordinates.

One of the first fundamental problems in the tensor calculus is to extend the notion of partial derivative to the notion of covariant derivative in such a manner that the covariant derivative of a tensor field is also some tensor field. It is true that, if one restricts his work only to cartesian coordinates (oblique axes), then the partial derivatives of any tensor field behave like the components of a tensor field under a transformation of cartesian coordinates to cartesian coordinates. For example, suppose that (x^1, x^2, x^3) are cartesian coordinates and (x̄^1, x̄^2, x̄^3) are any other cartesian coordinates; then it can be shown that for suitable constants a^i_j and a^i_0

(10·16)  x̄^i = a^i_j x^j + a^i_0

is the transformation of coordinates taking the cartesian coordinates x^i to the cartesian coordinates x̄^i. Hence

(10·17)  ∂x̄^i/∂x^j = a^i_j,

a set of constants, and so

(10·18)  ∂²x̄^i/∂x^j ∂x^k = 0.

Now, if ξ^i(x^1, x^2, x^3) are the components of a contravariant vector field, then

(10·19)  ξ̄^i(x̄^1, x̄^2, x̄^3) = ξ^α(x^1, x^2, x^3) (∂x̄^i/∂x^α).

On differentiating corresponding sides of 10·19, we obtain

(10·20)  ∂ξ̄^i/∂x̄^j = (∂ξ^α/∂x^β)(∂x^β/∂x̄^j)(∂x̄^i/∂x^α) + ξ^α (∂²x̄^i/∂x^α ∂x^β)(∂x^β/∂x̄^j).

But both x^i and x̄^i are cartesian coordinates. Hence on using 10·18 in 10·20 we obtain

(10·21)  ∂ξ̄^i/∂x̄^j = (∂ξ^α/∂x^β)(∂x^β/∂x̄^j)(∂x̄^i/∂x^α),

which states that the partial derivatives ∂ξ^α/∂x^β behave as though they were the components of a mixed tensor field of rank two, and this under a transformation from cartesian coordinates to cartesian coordinates. The presence of the second derivative terms in 10·20 in curvilinear coordinates x̄^i shows that the ∂ξ^α/∂x^β are not really the components of a tensor field. So the fundamental question arises whether it is possible to add corrective terms C^α_β (all zero in cartesian coordinates) to the partial derivatives so as to make

(10·22)  ∂ξ^α/∂x^β + C^α_β

the components of a mixed tensor field of rank two for all contravariant vector fields ξ^α. The answer to this question is in the affirmative, and the possibility of the corrective terms depends on the existence of the Euclidean Christoffel symbols.

Euclidean Christoffel Symbols. We saw in the previous chapter that the element of arc length squared in general coordinates takes the form
(10·23)  ds^2 = g_{αβ}(x^1, x^2, x^3) dx^α dx^β,

where, in the present terminology, the g_{αβ} are the components of a covariant tensor field of rank two, called the Euclidean metric tensor. Now it can be proved that the determinant

(10·24)  g = det [ g_{11} g_{12} g_{13} ; g_{21} g_{22} g_{23} ; g_{31} g_{32} g_{33} ] ≠ 0.

Define

(10·25)  g^{αβ} = (cofactor of g_{βα} in g) / g.

As the notation indicates, it can be proved that the functions g^{αβ} are the components of a contravariant tensor field of rank two with the following properties:

(10·26)  g^{αβ} = g^{βα},  g^{αν} g_{νβ} = δ^α_β  (equals 0 if α ≠ β and 1 if α = β).

Define the Euclidean Christoffel symbols Γ^i_{αβ}(x^1, x^2, x^3) as follows:

(10·27)  Γ^i_{αβ}(x^1, x^2, x^3) = ½ g^{iν} (∂g_{νβ}/∂x^α + ∂g_{να}/∂x^β − ∂g_{αβ}/∂x^ν).

Since the laws of transformation of the components g_{αβ} and g^{αβ} are known, one can calculate the law of transformation of the Γ^i_{αβ}(x^1, x^2, x^3) under a general transformation of coordinates x̄^i = f^i(x^1, x^2, x^3). Let ḡ_{αβ} and ḡ^{αβ} be the components of the Euclidean metric tensor and its associated contravariant tensor respectively in the x̄^i coordinates. Then if we define

(10·28)  Γ̄^i_{αβ}(x̄^1, x̄^2, x̄^3) = ½ ḡ^{iν} (∂ḡ_{νβ}/∂x̄^α + ∂ḡ_{να}/∂x̄^β − ∂ḡ_{αβ}/∂x̄^ν),

we can prove by a long but straightforward calculation that the Γ̄^i_{αβ}(x̄^1, x̄^2, x̄^3) are related to the Γ^i_{αβ}(x^1, x^2, x^3) by the following famous transformation law:

(10·29)  Γ̄^i_{αβ}(x̄^1, x̄^2, x̄^3) = Γ^ν_{μρ}(x^1, x^2, x^3) (∂x^μ/∂x̄^α)(∂x^ρ/∂x̄^β)(∂x̄^i/∂x^ν) + (∂²x^ν/∂x̄^α ∂x̄^β)(∂x̄^i/∂x^ν).
In the previous chapter we saw that

(10·30)  g_{αβ}(x^1, x^2, x^3) = Σ_{s=1}^{3} (∂y^s/∂x^α)(∂y^s/∂x^β),

where the y^s are rectangular cartesian coordinates. Consequently, if the x's are cartesian coordinates, it follows that all the components g_{αβ}(x^1, x^2, x^3) are constants. In other words, ∂g_{αβ}/∂x^γ = 0 in cartesian coordinates x^i. We have then immediately the important result that the Euclidean Christoffel symbols Γ^i_{αβ}(x^1, x^2, x^3) are identically zero in cartesian coordinates.

If the y^i are cartesian coordinates and the x^i are general coordinates, one can calculate the Euclidean Christoffel symbols Γ^i_{αβ}(x^1, x^2, x^3) directly in terms of the derivatives of the transformation functions in the transformation of coordinates

x^i = f^i(y^1, y^2, y^3)

and in the inverse transformation of coordinates

y^i = φ^i(x^1, x^2, x^3).

Since all the Euclidean Christoffel symbols are zero when they are evaluated in cartesian coordinates y^i, it follows immediately from the transformation law 10·29 that the Euclidean Christoffel symbols in general coordinates x^i are given by the simple formula

(10·31)  Γ^i_{αβ}(x^1, x^2, x^3) = (∂²y^λ/∂x^α ∂x^β)(∂x^i/∂y^λ).

This formula is often found to be more convenient in computations than the defining formula 10·27.

Caution: The Christoffel symbols are not the components of a tensor field, so that i, α, and β are not tensor indices; i.e., i is not a contravariant index and α, β are not covariant indices.

The concepts of tensor fields and Euclidean Christoffel symbols can, by the obvious changes, be studied in plane geometry (two-dimensional Euclidean space). Since we have two coordinates for a point in the plane, all components of tensors and the Christoffel symbols will depend on two variables, and the range of the indices will be from 1 to 2 instead of 1 to 3. Thus the Euclidean Christoffel symbols for the plane will be

(10·32)  Γ^i_{αβ}(x^1, x^2) = ½ g^{iν}(x^1, x^2) (∂g_{νβ}/∂x^α + ∂g_{να}/∂x^β − ∂g_{αβ}/∂x^ν)

in terms of the Euclidean metric tensor g_{αβ}(x^1, x^2) for the plane. The alternative expression in terms of the derivatives of the transformation functions from rectangular coordinates (y^1, y^2) to general coordinates (x^1, x^2) and of the inverse transformation functions will be

(10·33)  Γ^i_{αβ}(x^1, x^2) = (∂²y^λ/∂x^α ∂x^β)(∂x^i/∂y^λ).

Exercise
Compute from the definition 10·32 the Euclidean Christoffel symbols for the plane in polar coordinates. Then check the results by computing them from 10·33. Hint: Use the results of exercise 1 of Chapter 9 and find

g^{11} = 1,  g^{12} = g^{21} = 0,  g^{22} = 1/(x^1)^2.

Answer. Γ^1_{22} = −x^1, Γ^2_{12} = Γ^2_{21} = 1/x^1, and all other Γ^i_{αβ} = 0.
CHAPTER 11

TENSOR ANALYSIS

Covariant Differentiation of Vector Fields. Having shown the existence of the Euclidean Christoffel symbols, we are now in a position to give a complete answer to the fundamental question, enunciated in the previous chapter, on the extension of the notion of partial differentiation. We shall now prove that the functions

(11·1)  ∂ξ^i(x^1, x^2, x^3)/∂x^α + Γ^i_{να}(x^1, x^2, x^3) ξ^ν(x^1, x^2, x^3)

are the components of a mixed tensor field of rank two, called the covariant derivative of ξ^i. This result holds for every differentiable contravariant vector field, with the understanding that the Γ^i_{να}(x^1, x^2, x^3) are the Euclidean Christoffel symbols. We shall use the notation ξ^i_{,α} for the covariant derivative of ξ^i.

By hypothesis we have

(11·2)  ξ̄^i(x̄^1, x̄^2, x̄^3) = ξ^λ(x^1, x^2, x^3) (∂x̄^i/∂x^λ).

If we then differentiate corresponding sides of equation 11·2 we obtain, by the well-known rules of partial differentiation,

(11·3)  ∂ξ̄^i/∂x̄^α = (∂ξ^λ/∂x^μ)(∂x^μ/∂x̄^α)(∂x̄^i/∂x^λ) + ξ^λ (∂²x̄^i/∂x^λ ∂x^μ)(∂x^μ/∂x̄^α).

We also have

(11·4)  Γ̄^i_{jα} = Γ^ν_{λμ} (∂x̄^i/∂x^ν)(∂x^λ/∂x̄^j)(∂x^μ/∂x̄^α) + (∂²x^λ/∂x̄^j ∂x̄^α)(∂x̄^i/∂x^λ).

If we multiply corresponding sides of 11·4 by ξ̄^j and sum on j we obtain

(11·5)  Γ̄^i_{jα} ξ̄^j = Γ^ν_{λμ} ξ^λ (∂x^μ/∂x̄^α)(∂x̄^i/∂x^ν) + ξ^μ (∂²x^λ/∂x̄^j ∂x̄^α)(∂x̄^j/∂x^μ)(∂x̄^i/∂x^λ),

on using

ξ̄^j (∂x^λ/∂x̄^j) = ξ^λ

in the first set of terms and

ξ̄^j = ξ^μ (∂x̄^j/∂x^μ)
in the second set of terms. Since λ and μ are summation indices, we can interchange λ and μ in the second derivative terms of 11·5. On carrying out the renaming of these umbral indices, we can add corresponding sides of 11·3 and 11·5 and obtain

(11·6)  ∂ξ̄^i/∂x̄^α + Γ̄^i_{jα} ξ̄^j = (∂ξ^λ/∂x^μ + Γ^λ_{νμ} ξ^ν)(∂x^μ/∂x̄^α)(∂x̄^i/∂x^λ) + ξ^λ [ (∂²x̄^i/∂x^λ ∂x^μ)(∂x^μ/∂x̄^α) + (∂²x^μ/∂x̄^j ∂x̄^α)(∂x̄^j/∂x^λ)(∂x̄^i/∂x^μ) ].

Now

(11·7)  (∂x̄^i/∂x^λ)(∂x^λ/∂x̄^j) = δ^i_j,

where

δ^i_j = 1 if i = j,  δ^i_j = 0 if i ≠ j.

On differentiating 11·7 with respect to x̄^α, we obtain

(11·8)  (∂²x̄^i/∂x^λ ∂x^μ)(∂x^μ/∂x̄^α)(∂x^λ/∂x̄^j) + (∂²x^λ/∂x̄^j ∂x̄^α)(∂x̄^i/∂x^λ) = 0,

so that the bracketed terms in 11·6 vanish, and hence 11·6 reduces to

(11·9)  ∂ξ̄^i/∂x̄^α + Γ̄^i_{jα} ξ̄^j = (∂ξ^λ/∂x^μ + Γ^λ_{νμ} ξ^ν)(∂x^μ/∂x̄^α)(∂x̄^i/∂x^λ).

But 11·9 states that the functions

∂ξ^λ/∂x^μ + Γ^λ_{νμ} ξ^ν

are the components of a mixed tensor field of rank two. This completes the proof of the result stated at the beginning of this chapter.

By a slight variation of the above method of proof, it can be established that the functions

(11·10)  ∂ξ_i/∂x^α − Γ^ν_{iα} ξ_ν

are the components of a covariant tensor field of rank two whenever the ξ_i are the components of a covariant vector field; they are called the covariant derivative of ξ_i. As before, the Γ^ν_{iα} are the Euclidean Christoffel symbols. We shall use the notation ξ_{i,α} for the covariant derivative of ξ_i.

Tensor Fields of Rank r = p + q, Contravariant of Rank p and Covariant of Rank q. It is convenient at this point to give the definition of a general tensor field. As in the case of tensor fields of rank two, the definition will be
clear if we give the law of transformation of its components under a transformation of coordinates.

(11·11)  T̄^{α_1⋯α_p}_{β_1⋯β_q}(x̄^1, x̄^2, x̄^3) = T^{λ_1⋯λ_p}_{μ_1⋯μ_q}(x^1, x^2, x^3) (∂x̄^{α_1}/∂x^{λ_1}) ⋯ (∂x̄^{α_p}/∂x^{λ_p}) (∂x^{μ_1}/∂x̄^{β_1}) ⋯ (∂x^{μ_q}/∂x̄^{β_q}).

We are now in a position to consider some problems that arise in taking successive covariant derivatives of tensor fields. However, we must first say a word or two about the formula for the covariant derivative of a tensor field. If T^{α_1⋯α_p}_{β_1⋯β_q} are the components of a tensor field of rank p + q, contravariant of rank p and covariant of rank q, then the functions T^{α_1⋯α_p}_{β_1⋯β_q,γ} defined by

(11·12)  T^{α_1⋯α_p}_{β_1⋯β_q,γ} = ∂T^{α_1⋯α_p}_{β_1⋯β_q}/∂x^γ + Γ^{α_1}_{νγ} T^{να_2⋯α_p}_{β_1⋯β_q} + ⋯ + Γ^{α_p}_{νγ} T^{α_1⋯α_{p−1}ν}_{β_1⋯β_q} − Γ^ν_{β_1γ} T^{α_1⋯α_p}_{νβ_2⋯β_q} − ⋯ − Γ^ν_{β_qγ} T^{α_1⋯α_p}_{β_1⋯β_{q−1}ν}

are the components of a tensor field of rank p + q + 1, contravariant of rank p and covariant of rank q + 1. T^{α_1⋯α_p}_{β_1⋯β_q,γ} will be called the covariant derivative of T^{α_1⋯α_p}_{β_1⋯β_q}. The above result, stating that the covariant derivative of a tensor field is indeed a tensor field, can be proved by a long but quite straightforward calculation analogous to the method of proof given for the covariant derivative of vector fields.

Since the covariant derivative of a tensor field is a tensor field, we can consider the covariant derivative of the latter tensor field, called the second covariant derivative of the original tensor field. In symbols, if T^{α_1⋯α_p}_{β_1⋯β_q} is the original tensor field, we can consider its second successive covariant derivative T^{α_1⋯α_p}_{β_1⋯β_q,γ,δ}.

The fundamental question arises whether covariant differentiation is a commutative operation, i.e., whether

(11·13)  T^{α_1⋯α_p}_{β_1⋯β_q,γ,δ} = T^{α_1⋯α_p}_{β_1⋯β_q,δ,γ}.

The answer is in the affirmative. In fact, since all the Euclidean Christoffel symbols are zero in cartesian coordinates, the partial derivatives of all orders of the Christoffel symbols are also zero in cartesian coordinates. Hence, if *T^{α_1⋯α_p}_{β_1⋯β_q}(y^1, y^2, y^3) are the components in cartesian coordinates, …
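Formula 11·1 can be exercised symbolically in plane polar coordinates: the trace ξ^α_{,α} of the covariant derivative is an invariant (the divergence), and it must agree with the standard divergence formula (1/√g) ∂(√g ξ^α)/∂x^α, where √g = x^1 for polar coordinates. A sketch, with an arbitrary vector field of our own choosing:

```python
import sympy as sp

r, th = sp.symbols('x1 x2', positive=True)
x = [r, th]
# nonzero plane-polar Christoffel symbols (exercise of Chapter 10), 0-based indices
Gam = {(0, 1, 1): -r, (1, 0, 1): 1/r, (1, 1, 0): 1/r}

def cov_deriv(xi):
    """11.1: xi^i_{,a} = d(xi^i)/dx^a + Gamma^i_{na} xi^n."""
    return sp.Matrix(2, 2, lambda i, a:
                     sp.diff(xi[i], x[a]) + sum(Gam.get((i, n, a), 0) * xi[n]
                                                for n in range(2)))

xi = sp.Matrix([r**2 * sp.cos(th), sp.sin(th)])   # an arbitrary smooth vector field
D = cov_deriv(xi)

div = sp.simplify(D[0, 0] + D[1, 1])              # the invariant trace xi^a_{,a}
alt = sp.simplify((sp.diff(r * xi[0], r) + sp.diff(r * xi[1], th)) / r)
assert sp.simplify(div - alt) == 0
```

The two expressions agree identically, which is what the tensor character of 11·1 guarantees; the bare partial derivatives alone would not produce an invariant trace.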
CHAPTER 12

LAPLACE EQUATION, WAVE EQUATION, AND POISSON EQUATION IN CURVILINEAR COORDINATES

Some Further Concepts and Remarks on the Tensor Calculus. We remarked in Chapter 10 that, if s(x^1, x^2, x^3) is a scalar field, then ∂s/∂x^i is a covariant vector field. So, to complete the picture of covariant differentiation, we can call ∂s/∂x^i the covariant derivative of the scalar field s(x^1, x^2, x^3).

For some discussions it is convenient to extend the notion of a tensor field. By a relative tensor field of weight w, we shall mean an object with components whose transformation law differs from the transformation law of a tensor field by the appearance of the functional determinant (Jacobian) to the wth power as a factor on the right side of the equations. If w = 0, we have the previous notion of a tensor field. For example,

s̄(x̄^1, x̄^2, x̄^3) = |∂x^a/∂x̄^b|^w s(x^1, x^2, x^3)

and

ξ̄^i(x̄^1, x̄^2, x̄^3) = |∂x^a/∂x̄^b|^w ξ^α(x^1, x^2, x^3) (∂x̄^i/∂x^α),

where |∂x^a/∂x̄^b| stands for the functional determinant of the partial derivatives ∂x^a/∂x̄^b, are the transformation laws for a relative scalar field of weight w and a relative contravariant vector field of weight w respectively.

A relative scalar field of weight one is called a scalar density, a terminology suggested by the physical example of the density of a solid or fluid. In fact, the mass m is related to the density function ρ(x^1, x^2, x^3) of the solid or fluid by

m = ∭ ρ(x^1, x^2, x^3) dx^1 dx^2 dx^3,

where the triple integral is extended over the whole extent of the solid or fluid.

Another important example of a scalar density is given by √g, where g is the determinant of the Euclidean metric tensor g_{αβ}. In fact,

(12·1)  ḡ_{αβ} = g_{ab} (∂x^a/∂x̄^α)(∂x^b/∂x̄^β).

Let ḡ = |ḡ_{αβ}|, the determinant of the ḡ_{αβ}. By a double use of the formula for the product of two determinants when applied to 12·1, one can prove that

(12·2)  ḡ = |∂x^a/∂x̄^α|^2 g,

where |∂x^a/∂x̄^α| stands for the functional determinant of the partial derivatives ∂x^a/∂x̄^α. On taking square roots in 12·2 we obviously get

(12·3)  √ḡ = |∂x^a/∂x̄^α| √g,

which states that √g is a scalar density.

The √g enters in an essential manner in the formula for the volume enclosed by a closed surface. In fact, the formula for the volume in general curvilinear coordinates x^i is given by the triple integral

(12·4)  V = ∭ √g dx^1 dx^2 dx^3.

This form for the volume can be calculated readily by the following steps. If the y^i are rectangular coordinates, then V = ∭ dy^1 dy^2 dy^3. Hence

(12·5)  V = ∭ J dx^1 dx^2 dx^3,

where J stands for the functional determinant |∂y^i/∂x^j| of the transformation of coordinates from the curvilinear coordinates x^i to the rectangular coordinates y^i. Now

g_{αβ}(x^1, x^2, x^3) = Σ_{s=1}^{3} (∂y^s/∂x^α)(∂y^s/∂x^β),

and hence the determinant g = |g_{αβ}| is precisely equal to J^2 on using the rule for the multiplication of two determinants. In other words

(12·6)  √g = J,

from which the formula 12·4 for the volume becomes clear.

Since the formula for volume in general coordinates x^i is given by 12·4, it follows that the mass m of a medium in general coordinates x^i has the form

m = ∭ ρ_0(x^1, x^2, x^3) √g dx^1 dx^2 dx^3,

where ρ_0(x^1, x^2, x^3) is an absolute scalar field that defines the (physical) density of the medium at each particle (x^1, x^2, x^3) of the medium. Clearly ρ(x^1, x^2, x^3) = ρ_0(x^1, x^2, x^3) √g is a scalar density and, since √g = 1 in rectangular coordinates, has the same components as the density ρ_0 of the medium in rectangular cartesian coordinates. Other concepts and properties of tensors will be discussed later in the book whenever they are needed.

Laplace's Equation. Let g^{αβ} be the contravariant tensor field of rank two defined in Chapter 10, i.e.,

(12·7)  g^{αβ} = (cofactor of g_{βα} in g) / g.

If ψ(x^1, x^2, x^3) is a scalar field, then the second covariant derivative ψ_{,α,β} is a covariant tensor field of rank two. Now we can show that g^{αβ} ψ_{,α,β} is a scalar field. In fact,

(12·8)  ḡ^{αβ} = g^{στ} (∂x̄^α/∂x^σ)(∂x̄^β/∂x^τ),

(12·9)  ψ̄_{,α,β} = ψ_{,σ,τ} (∂x^σ/∂x̄^α)(∂x^τ/∂x̄^β).

On multiplying corresponding sides of 12·8 and 12·9 and summing on α and β, we obtain

ḡ^{αβ} ψ̄_{,α,β} = g^{στ} ψ_{,σ′,τ′} (∂x̄^α/∂x^σ)(∂x^{σ′}/∂x̄^α)(∂x̄^β/∂x^τ)(∂x^{τ′}/∂x̄^β),

and hence the desired result

ḡ^{αβ} ψ̄_{,α,β} = g^{στ} ψ_{,σ,τ}

on using the obvious relations

(12·10)  (∂x̄^α/∂x^σ)(∂x^{σ′}/∂x̄^α) = δ^{σ′}_σ,  (∂x̄^β/∂x^τ)(∂x^{τ′}/∂x̄^β) = δ^{τ′}_τ.

Let

(12·11)  F(x^1, x^2, x^3) = g^{αβ} ψ_{,α,β}

for the arbitrarily chosen scalar field ψ(x^1, x^2, x^3). We have just shown that F(x^1, x^2, x^3) is also a scalar field. In rectangular cartesian coordinates, the Euclidean metric tensor has components δ_{αβ}, equal to 0 for α ≠ β and equal to 1 for α = β. Furthermore, we saw in Chapter 11 that in cartesian coordinates, and hence in rectangular cartesian coordinates y^i,

*ψ_{,α,β}(y^1, y^2, y^3) = ∂²*ψ(y^1, y^2, y^3)/∂y^α ∂y^β.

The component of the scalar field F(x^1, x^2, x^3) in rectangular coordinates y^i is then

(12·12)  ∂²*ψ/(∂y^1)^2 + ∂²*ψ/(∂y^2)^2 + ∂²*ψ/(∂y^3)^2,

the Laplacean of the function *ψ(y^1, y^2, y^3). Hence the form of Laplace's equation in curvilinear coordinates x^i with the scalar field ψ(x^1, x^2, x^3) as unknown is given by

(12·13)  g^{αβ}(x^1, x^2, x^3) ψ_{,α,β}(x^1, x^2, x^3) = 0,

where g^{αβ}(x^1, x^2, x^3) is the contravariant tensor field 12·7 defined in terms of the Euclidean metric tensor g_{αβ}, and where ψ_{,α,β}(x^1, x^2, x^3) is the second covariant derivative of the scalar field ψ(x^1, x^2, x^3). If we write 12·13 explicitly in terms of the Euclidean Christoffel symbols Γ^ν_{αβ}(x^1, x^2, x^3), we evidently have

(12·14)  g^{αβ}(x^1, x^2, x^3) ( ∂²ψ(x^1, x^2, x^3)/∂x^α ∂x^β − Γ^ν_{αβ}(x^1, x^2, x^3) ∂ψ(x^1, x^2, x^3)/∂x^ν ) = 0

as the form of Laplace's equation† in curvilinear coordinates x^i.

It is worth while at this point to give an example of Laplace's equation in curvilinear coordinates and at the same time review several concepts and formulas that were studied in previous chapters.

† There is another form of Laplace's equation in curvilinear coordinates x^i which is sometimes more useful in numerical calculations than 12·14. It is given by

(1/√g) ∂( √g g^{αβ} ∂ψ/∂x^β )/∂x^α = 0.

For a proof see note 3 to Chapter 13. Similar remarks could be made for the wave equation and Poisson's equation, since the Laplace differential expression occurs in them.
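Identity 12·6 (and hence the volume formula 12·4) is easy to verify for polar spherical coordinates, where the factor √g = J should come out as the familiar volume-element factor (x^1)^2 sin x^2. A sketch:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', positive=True)
y = sp.Matrix([x1*sp.sin(x2)*sp.cos(x3),
               x1*sp.sin(x2)*sp.sin(x3),
               x1*sp.cos(x2)])
J_mat = y.jacobian(sp.Matrix([x1, x2, x3]))       # dy^i/dx^j

g_mat = (J_mat.T * J_mat).applyfunc(sp.simplify)  # Euclidean metric tensor, 9.15
g_det = sp.simplify(g_mat.det())                  # the determinant g
J = sp.simplify(J_mat.det())                      # the functional determinant J

assert sp.simplify(g_det - J**2) == 0             # 12.2 with cartesian y's: g = J^2
assert sp.simplify(J - x1**2*sp.sin(x2)) == 0     # so sqrt(g) = (x^1)^2 sin x^2
```

Putting this into 12·4 gives V = ∭ (x^1)^2 sin x^2 dx^1 dx^2 dx^3, the usual r² sin θ dr dθ dφ volume element.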
Let yt, if', 11 be rectangular cartesian coordinates and Xl, x2, xl polar spherical coordinates defined by the coordinate transformation Yl = Xl sin X2 cos xl, (12·15) y2 = Xl sin x2 sin xl, { 11 = Xl cos x2. Z2
Clearly the inverse coordinate transformation is given by Xl = V (yl)2 + (yI)2 + (11)2 2
x =
.. 1.A (ZI Zl Z8)
,"'/ I
"
I
I
cos-{v (yl)2 + ~)2 + (yI)2)
xl = tan-I
FIo. 12·1.
(~)-
From the definition of the Euclidean metric tensor g_{\alpha\beta} we find

(12·16)  g_{11} = 1, \quad g_{22} = (x^1)^2, \quad g_{33} = (x^1)^2(\sin x^2)^2, and all other g_{ij} = 0,

so that the line element ds is given by

(12·17)  ds^2 = (dx^1)^2 + (x^1)^2(dx^2)^2 + (x^1)^2(\sin x^2)^2(dx^3)^2.
Again, from the definition of the tensor g^{\alpha\beta}, we find

(12·18)  g^{11} = 1, \quad g^{22} = \frac{1}{(x^1)^2}, \quad g^{33} = \frac{1}{(x^1)^2(\sin x^2)^2}, and all other g^{ij} = 0.
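The components (12·16) can also be recovered numerically from the transformation (12·15) alone, since g_{\alpha\beta} = \sum_i (\partial y^i/\partial x^\alpha)(\partial y^i/\partial x^\beta). The following sketch is ours, not the book's; the function and variable names are illustrative, and the partial derivatives are approximated by central differences.

```python
import math

# Check of (12.16): compute g_ab = sum_i (dy^i/dx^a)(dy^i/dx^b) from the
# spherical transformation (12.15) and compare with diag(1, r^2, r^2 sin^2 th).

def y(x):  # the coordinate transformation (12.15), x = (r, theta, phi)
    r, th, ph = x
    return [r * math.sin(th) * math.cos(ph),
            r * math.sin(th) * math.sin(ph),
            r * math.cos(th)]

def jacobian(x, h=1e-6):
    J = [[0.0] * 3 for _ in range(3)]  # J[i][a] = dy^i / dx^a
    for a in range(3):
        xp, xm = list(x), list(x)
        xp[a] += h; xm[a] -= h
        yp, ym = y(xp), y(xm)
        for i in range(3):
            J[i][a] = (yp[i] - ym[i]) / (2 * h)
    return J

def metric(x):
    J = jacobian(x)
    return [[sum(J[i][a] * J[i][b] for i in range(3)) for b in range(3)]
            for a in range(3)]

g = metric((2.0, 0.7, 1.2))  # a sample point (r, theta, phi)
print(g[0][0], g[1][1], g[2][2])  # ~ 1, r^2, r^2 sin^2(theta)
```

The off-diagonal components come out numerically zero, confirming that spherical polar coordinates are orthogonal.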
The Euclidean Christoffel symbols can now be computed in spherical polar coordinates; use either formula 10·27 or 10·31. They are as follows:

(12·19)  \Gamma^1_{22} = -x^1, \quad \Gamma^1_{33} = -x^1(\sin x^2)^2, \quad \Gamma^2_{12} = \Gamma^2_{21} = \Gamma^3_{13} = \Gamma^3_{31} = \frac{1}{x^1}, \quad \Gamma^2_{33} = -\sin x^2\,\cos x^2, \quad \Gamma^3_{23} = \Gamma^3_{32} = \cot x^2,

and all other \Gamma^i_{jk} = 0.
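Assuming the standard formula \Gamma^i_{jk} = \tfrac12 g^{i\sigma}(\partial_j g_{\sigma k} + \partial_k g_{\sigma j} - \partial_\sigma g_{jk}) (the type of formula referred to as 10·27), the table (12·19) can be reproduced numerically from the metric (12·16). This sketch is ours; derivatives of the metric are taken by central differences.

```python
import math

# Check of (12.19): Christoffel symbols of the spherical metric (12.16),
# computed from Gamma^i_jk = (1/2) g^{is} (d_j g_sk + d_k g_sj - d_s g_jk).

def g_lower(x):  # the metric (12.16), x = (r, theta, phi)
    r, th, _ = x
    return [[1.0, 0.0, 0.0], [0.0, r * r, 0.0], [0.0, 0.0, (r * math.sin(th)) ** 2]]

def christoffel(x, h=1e-5):
    g = g_lower(x)
    ginv = [[1.0 / g[i][i] if i == j else 0.0 for j in range(3)] for i in range(3)]
    dg = []  # dg[c][a][b] = d g_ab / d x^c, by central differences
    for c in range(3):
        xp, xm = list(x), list(x)
        xp[c] += h; xm[c] -= h
        gp, gm = g_lower(xp), g_lower(xm)
        dg.append([[(gp[a][b] - gm[a][b]) / (2 * h) for b in range(3)]
                   for a in range(3)])
    return [[[0.5 * sum(ginv[i][s] * (dg[j][s][k] + dg[k][s][j] - dg[s][j][k])
                        for s in range(3))
              for k in range(3)] for j in range(3)] for i in range(3)]

G = christoffel((2.0, 0.7, 1.2))
print(G[0][1][1], G[1][2][2], G[2][1][2])  # ~ -r, -sin(th)cos(th), cot(th)
```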
On using 12·18 and 12·19 in 12·14 (\Gamma^1_{22}, \Gamma^1_{33}, and \Gamma^2_{33} are the only nonzero Christoffel symbols that are actually used), we find that Laplace's equation in spherical polar coordinates x^1, x^2, x^3 is

(12·20)  \frac{\partial^2\psi}{(\partial x^1)^2} + \frac{1}{(x^1)^2}\,\frac{\partial^2\psi}{(\partial x^2)^2} + \frac{1}{(x^1)^2(\sin x^2)^2}\,\frac{\partial^2\psi}{(\partial x^3)^2} + \frac{2}{x^1}\,\frac{\partial\psi}{\partial x^1} + \frac{\cot x^2}{(x^1)^2}\,\frac{\partial\psi}{\partial x^2} = 0

whenever the unknown function is a scalar field \psi(x^1, x^2, x^3).
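A quick numerical sanity check of (12·20), not from the book: the field \psi = 1/x^1, i.e. 1/r, is harmonic away from the origin, so the five terms of (12·20) should cancel when evaluated numerically. Function names here are our own; derivatives are approximated by central differences.

```python
import math

# Check of (12.20): evaluate all five terms for psi = 1/r at a sample point.

def psi(r, th, ph):
    return 1.0 / r  # harmonic away from the origin

def laplacian_spherical(f, r, th, ph, h=1e-4):
    d2r = (f(r + h, th, ph) - 2 * f(r, th, ph) + f(r - h, th, ph)) / h**2
    d2t = (f(r, th + h, ph) - 2 * f(r, th, ph) + f(r, th - h, ph)) / h**2
    d2p = (f(r, th, ph + h) - 2 * f(r, th, ph) + f(r, th, ph - h)) / h**2
    dr = (f(r + h, th, ph) - f(r - h, th, ph)) / (2 * h)
    dt = (f(r, th + h, ph) - f(r, th - h, ph)) / (2 * h)
    return (d2r + d2t / r**2 + d2p / (r * math.sin(th))**2
            + 2 * dr / r + dt / math.tan(th) / r**2)

print(laplacian_spherical(psi, 2.0, 0.7, 1.2))  # ~ 0
```

The linear field r cos x^2 (the cartesian component y^3) passes the same test, as any harmonic function must.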
Laplace's Equation for Vector Fields. We now turn our attention to the related problem of considering vector fields whose individual components satisfy Laplace's equation in rectangular coordinates. The question arises whether each component of the vector field will satisfy Laplace's equation 12·14 in curvilinear coordinates x^1, x^2, x^3. The answer is in the negative, as a little reflection will now show. To be specific, let the unknown be a contravariant vector field \xi^i(x^1, x^2, x^3) with components {}^{*}\xi^i(y^1, y^2, y^3) in rectangular cartesian coordinates. By hypothesis

(12·21)  \frac{\partial^2\,{}^{*}\xi^i}{(\partial y^1)^2} + \frac{\partial^2\,{}^{*}\xi^i}{(\partial y^2)^2} + \frac{\partial^2\,{}^{*}\xi^i}{(\partial y^3)^2} = 0.

By practically the same type of argument used in deriving equation 12·13, we find that the contravariant vector field \xi^i(x^1, x^2, x^3) in curvilinear coordinates x^1, x^2, x^3 will satisfy the system of three differential equations

(12·22)  g^{\alpha\beta}\,\xi^i_{,\alpha\beta} = 0,

where \xi^i_{,\alpha\beta} is the second covariant derivative of \xi^i(x^1, x^2, x^3). If we expand 12·22 explicitly in terms of the Euclidean Christoffel symbols \Gamma^i_{jk}(x^1, x^2, x^3) we find
(12·23)  g^{\alpha\beta}\left[\frac{\partial^2\xi^i}{\partial x^\alpha\,\partial x^\beta} - \Gamma^\lambda_{\alpha\beta}\,\frac{\partial\xi^i}{\partial x^\lambda} + \Gamma^i_{\lambda\alpha}\,\frac{\partial\xi^\lambda}{\partial x^\beta} + \Gamma^i_{\lambda\beta}\,\frac{\partial\xi^\lambda}{\partial x^\alpha} + \left(\frac{\partial\Gamma^i_{\lambda\alpha}}{\partial x^\beta} + \Gamma^i_{\mu\beta}\Gamma^\mu_{\lambda\alpha} - \Gamma^\mu_{\alpha\beta}\Gamma^i_{\mu\lambda}\right)\xi^\lambda\right] = 0,

a system of three differential equations in which all three unknowns \xi^1, \xi^2, \xi^3 occur in each differential equation.
Wave Equation. The propagation of various disturbances in the theory of elasticity, hydrodynamics, the theory of sound, and electrodynamics is governed by the partial differential equation known as the wave equation. In rectangular cartesian coordinates y^i, the wave equation is

(12·24)  \frac{\partial^2\,{}^{*}u(y^1, y^2, y^3, t)}{\partial t^2} = \frac{\partial^2\,{}^{*}u}{(\partial y^1)^2} + \frac{\partial^2\,{}^{*}u}{(\partial y^2)^2} + \frac{\partial^2\,{}^{*}u}{(\partial y^3)^2},

and hence in curvilinear coordinates x^i

(12·25)  \frac{\partial^2 u(x^1, x^2, x^3, t)}{\partial t^2} = g^{\alpha\beta}\,u_{,\alpha\beta},
where u_{,\alpha\beta} is the second covariant derivative of the scalar field u(x^1, x^2, x^3, t). If we write 12·25 explicitly in terms of the Euclidean Christoffel symbols we obtain†

(12·26)  \frac{\partial^2 u(x^1, x^2, x^3, t)}{\partial t^2} = g^{\alpha\beta}\left(\frac{\partial^2 u}{\partial x^\alpha\,\partial x^\beta} - \Gamma^\lambda_{\alpha\beta}\,\frac{\partial u}{\partial x^\lambda}\right).

It is to be observed that the right-hand side of 12·25, or equivalently of 12·26, is the Laplacean. If the x^i are spherical polar coordinates, Laplace's equation has the form 12·20. Hence, immediately, we see that the wave equation has the following form in spherical polar coordinates:

(12·27)  \frac{\partial^2 u(x^1, x^2, x^3, t)}{\partial t^2} = \frac{\partial^2 u}{(\partial x^1)^2} + \frac{1}{(x^1)^2}\,\frac{\partial^2 u}{(\partial x^2)^2} + \frac{1}{(x^1)^2(\sin x^2)^2}\,\frac{\partial^2 u}{(\partial x^3)^2} + \frac{2}{x^1}\,\frac{\partial u}{\partial x^1} + \frac{\cot x^2}{(x^1)^2}\,\frac{\partial u}{\partial x^2}.

By exactly the same calculations as we used in obtaining Laplace's equation for contravariant vector fields in curvilinear coordinates, we find that the wave equation in curvilinear coordinates x^i takes the following form whenever the unknown is a contravariant vector field \xi^i(x^1, x^2, x^3, t) that depends parametrically on the time t:

(12·28)  \frac{\partial^2\xi^i}{\partial t^2} = g^{\alpha\beta}\,\xi^i_{,\alpha\beta},
on using the corresponding result 12·23 for Laplace's equation. Note that 12·29 is a system of three differential equations for the three unknowns \xi^1, \xi^2, and \xi^3 and not just one differential equation satisfied by the three functions \xi^1, \xi^2, and \xi^3.

Poisson's Equation. As a final exercise in this chapter we consider Poisson's differential equation. In rectangular cartesian coordinates y^i, Poisson's equation is

(12·30)  \frac{\partial^2\,{}^{*}\psi(y^1, y^2, y^3)}{(\partial y^1)^2} + \frac{\partial^2\,{}^{*}\psi(y^1, y^2, y^3)}{(\partial y^2)^2} + \frac{\partial^2\,{}^{*}\psi(y^1, y^2, y^3)}{(\partial y^3)^2} = -4\pi\,{}^{*}\sigma(y^1, y^2, y^3).
† For another form of the wave equation, see Exercise 1 at the end of the chapter.
On the right-hand side of 12·30, {}^{*}\sigma(y^1, y^2, y^3) is the component in rectangular coordinates y^i of a scalar field

(12·31)  \sigma(x^1, x^2, x^3) = \frac{\rho(x^1, x^2, x^3)}{\sqrt{g}},

where \rho(x^1, x^2, x^3) is a scalar density.† Since \sqrt{g} is also a scalar density, \sigma(x^1, x^2, x^3) is obviously a scalar field. From the definition of g as the determinant of the g_{\alpha\beta}, we see that in rectangular coordinates

(12·32)  {}^{*}g(y^1, y^2, y^3) = \begin{vmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{vmatrix} = 1.

Hence \sqrt{{}^{*}g} = 1, and consequently

(12·33)  {}^{*}\sigma(y^1, y^2, y^3) = {}^{*}\rho(y^1, y^2, y^3).

In other words, the scalar field \sigma(x^1, x^2, x^3) and the scalar density \rho have equal components in rectangular cartesian coordinates.

On making use of our calculations for Laplace's equation, we can derive corresponding results for Poisson's differential equation. For example, Poisson's differential equation in curvilinear coordinates x^i will be

(12·34)  g^{\alpha\beta}\left(\frac{\partial^2\psi}{\partial x^\alpha\,\partial x^\beta} - \Gamma^\lambda_{\alpha\beta}\,\frac{\partial\psi}{\partial x^\lambda}\right) = -4\pi\,\frac{\rho(x^1, x^2, x^3)}{\sqrt{g}}

whenever the unknown is a scalar field \psi(x^1, x^2, x^3). As before, the \Gamma^\lambda_{\alpha\beta} are the Euclidean Christoffel symbols in the x^i coordinates.

Exercises
1. Show that the wave equation in curvilinear coordinates x^i with scalar u(x^1, x^2, x^3, t) as unknown can be written as

\frac{\partial^2 u(x^1, x^2, x^3, t)}{\partial t^2} = \frac{1}{\sqrt{g}}\,\frac{\partial}{\partial x^\alpha}\left(\sqrt{g}\;g^{\alpha\beta}\,\frac{\partial u}{\partial x^\beta}\right).
A Two-Point Correlation Tensor Field in Turbulence. Let \xi^i(t, x^1, x^2, x^3) be the contravariant velocity field of a fluid in motion. We shall denote the mean value of a function f(t) of the time t over the time interval (t_0, t_1) by M[f(t)]. For example

(13·19)  M[f(t)] = \frac{1}{t_1 - t_0}\int_{t_0}^{t_1} f(t)\,dt.

Clearly M[\xi^i(t, x^1, x^2, x^3)] is a contravariant vector field. Define a set of functions C^{ij}(X_1, X_2) of two points, X_1 = (x^1_1, x^2_1, x^3_1) and X_2 = (x^1_2, x^2_2, x^3_2), by

(13·20)  C^{ij}(X_1, X_2) = \frac{M[\xi^i(t, X_1)\,\xi^j(t, X_2)]}{\left\{g_{hk}(X_1)\,M[\xi^h(t, X_1)\,\xi^k(t, X_1)]\right\}^{1/2}\left\{g_{pq}(X_2)\,M[\xi^p(t, X_2)\,\xi^q(t, X_2)]\right\}^{1/2}},

where g_{\alpha\beta}(x^1, x^2, x^3) is the Euclidean metric tensor in the general coordinates x^i. It is evident that C^{ij}(X_1, X_2) is a two-point tensor field of rank two, a contravariant vector field with respect to each of the two points; we shall call it the (two-point) correlation tensor field. If both points are referred to the same rectangular cartesian coordinate system, then
the correlation tensor field has components

(13·21)  \frac{M[u^i_1\,u^j_2]}{\left\{M[u^h_1 u^h_1]\right\}^{1/2}\left\{M[u^p_2 u^p_2]\right\}^{1/2}}

in terms of the notations

u^i_1 = \xi^i(t, y_1), \quad u^i_2 = \xi^i(t, y_2).

If we now assume that we are dealing with the special case of isotropic turbulence, the correlation tensor field 13·21 in rectangular coordinates simplifies still further and has components

\frac13\,\frac{M[u^i_1\,u^j_2]}{M[(u)^2]},

on using the isotropic turbulence conditions that M[(u^\alpha)^2] is independent of position and the index \alpha, and equal, say, to M[(u)^2]. Except for the numerical factor \tfrac13, the above in rectangular coordinates is the correlation tensor used by Kármán in isotropic turbulence. See his paper entitled "The Fundamentals of the Statistical Theory of Turbulence," Journal of the Aeronautical Sciences, vol. 4 (1937), pp. 131-138.
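A discrete sketch of the correlation coefficient 13·21, in which the time mean M[·] of 13·19 is replaced by an average over N samples and the velocities are synthetic signals rather than data from a real flow (this illustration is ours, not Kármán's). In rectangular cartesian coordinates the metric factors in the denominator reduce to mean squares.

```python
import math

# Discrete analogue of (13.19) and (13.21) for one velocity component.

def M(samples):  # time mean over the samples
    return sum(samples) / len(samples)

def correlation(u1, u2):  # the normalized two-point correlation (13.21)
    num = M([a * b for a, b in zip(u1, u2)])
    den = math.sqrt(M([a * a for a in u1])) * math.sqrt(M([b * b for b in u2]))
    return num / den

u1 = [math.sin(0.1 * n) for n in range(1000)]  # signal at point 1
u2 = [0.5 * v for v in u1]                     # fully correlated, scaled copy
print(correlation(u1, u1), correlation(u1, u2))  # both ~ 1
```

Note that the normalization makes the result independent of the amplitude of either signal, which is the point of dividing by the root-mean-square values.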
CHAPTER 14

APPLICATIONS OF THE TENSOR CALCULUS TO ELASTICITY THEORY

Finite Deformation Theory of Elastic Media.¹ One of the most natural and fruitful fields of application of the tensor calculus is to the deformation of media, elastic or otherwise. In the next three chapters we shall consider the fundamentals of the deformation of elastic media. We need not and shall not make the usual approximations of the classical ("infinitesimal") theory in the general development of our subject. Consider a three-dimensional medium (a collection of point particles) in three-dimensional physical Euclidean space. We shall consider a deformation of the medium from the initial (unstrained) to its final (strained) position and obtain the strain tensor field under less stringent restrictions than those imposed in Chapter 10. Let ({}^1a, {}^2a, {}^3a) be the curvilinear coordinates of a representative particle in an elastic medium, and let (x^1, x^2, x^3) be the curvilinear coordinates of the representative particle after deformation. The deformation, a one-one point transformation a \to x, will be assumed given by differentiable functions

(14·1)  x^i = f^i({}^1a, {}^2a, {}^3a).

Fig. 14·1.

It is convenient, and of some importance for the mathematical foundations, to assume that the representative unstrained particle A is represented in a coordinate system not necessarily the same as the one in which the corresponding strained particle X is represented. For example, {}^1a, {}^2a, {}^3a may be cylindrical coordinates while x^1, x^2, x^3 are spherical polar coordinates. We shall adopt the following notational conventions with respect to one-point and two-point tensor fields. Tensor indices with respect to
transformation of coordinates of strained particles will be written to the right, and tensor indices with respect to transformation of coordinates of unstrained particles will be written to the left. For example, under a simultaneous transformation of the coordinates {}^i a of point A in the unstrained medium and of the coordinates x^i of point X in the strained medium,

(14·2)  {}^\alpha a_{,\beta} = \frac{\partial\,{}^\alpha a}{\partial x^\beta}

is a two-point tensor field of rank two, contravariant vector field with respect to point A and covariant vector field with respect to point X. In other words, under transformations of coordinates

(14·3)  {}^i\bar a = {}^i\phi({}^1a, {}^2a, {}^3a), \qquad \bar x^i = \psi^i(x^1, x^2, x^3)

in the unstrained and strained medium respectively, the two-point components {}^\alpha a_{,\beta} undergo the transformation

(14·4)  {}^{\bar\alpha}\bar a_{,\bar\beta} = \frac{\partial\,{}^{\bar\alpha}\phi}{\partial\,{}^\gamma a}\;{}^\gamma a_{,\delta}\;\frac{\partial x^\delta}{\partial\bar x^\beta}.

Similarly

(14·5)  {}_\alpha x^i = \frac{\partial x^i}{\partial\,{}^\alpha a}

is a two-point tensor field of rank two, covariant vector field with respect to point A and contravariant vector field with respect to point X. The relationships

(14·6)  ({}^\alpha a_{,\beta})({}_\alpha x^\gamma) = \delta^\gamma_\beta, \qquad ({}^\alpha a_{,\beta})({}_\gamma x^\beta) = \delta^\alpha_\gamma

express the fact that the matrices ({}^\alpha a_{,\beta}) and ({}_\alpha x^\beta) are inverses of each other.
(14·33)

(14·34)

where u = x - a and v = y - b,

(14·35)  \epsilon_{xx} = \frac{\partial u}{\partial x}, \quad \epsilon_{xy} = \frac12\left(\frac{\partial u}{\partial y} + \frac{\partial v}{\partial x}\right) = \epsilon_{yx}, \quad \epsilon_{yy} = \frac{\partial v}{\partial y}.

Fig. 14·4 (unstrained and strained configurations).
Exercises

1. Find the components of the Eulerian strain tensor \epsilon_{\alpha\beta}(x^1, x^2, x^3) when the deformation of the elastic body is a stretching whose equations are

x^i = A\,{}^i a \quad (A is a constant greater than unity),

where both x^i and {}^i a are rectangular coordinates referred to the same rectangular coordinate system. Discuss the change in volume elements. Work out the corresponding problem in plane elasticity.

2. Work out exercise 1 for a contraction, so that the constant A is less than unity.
CHAPTER 15

HOMOGENEOUS AND ISOTROPIC STRAINS, STRAIN INVARIANTS, AND VARIATION OF STRAIN TENSOR

Strain Invariants. If we expand the three-rowed determinant \left|\delta^i_j - 2\epsilon^i_j\right|, we find

(15·1)  \left|\delta^i_j - 2\epsilon^i_j\right| = 1 - 2I_1 + 4I_2 - 8I_3,

where

(15·2)  I_1 = \epsilon^\alpha_\alpha, \quad I_2 = \text{sum of the principal two-rowed minors in the determinant } \left|\epsilon^i_j\right|, \quad I_3 = \left|\epsilon^i_j\right|.
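The expansion 15·1 is easy to verify numerically for a sample symmetric matrix; the following check is illustrative and not from the book.

```python
# Check of (15.1): det(I - 2*eps) = 1 - 2*I1 + 4*I2 - 8*I3, where I1 is the
# trace, I2 the sum of principal two-rowed minors, I3 the determinant of eps.

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def invariants(e):
    I1 = e[0][0] + e[1][1] + e[2][2]
    I2 = (e[0][0] * e[1][1] - e[0][1] * e[1][0]
          + e[0][0] * e[2][2] - e[0][2] * e[2][0]
          + e[1][1] * e[2][2] - e[1][2] * e[2][1])
    return I1, I2, det3(e)

eps = [[0.1, 0.02, 0.0], [0.02, -0.05, 0.03], [0.0, 0.03, 0.2]]
I1, I2, I3 = invariants(eps)
m = [[(1.0 if i == j else 0.0) - 2 * eps[i][j] for j in range(3)] for i in range(3)]
print(det3(m), 1 - 2 * I1 + 4 * I2 - 8 * I3)  # the two numbers agree
```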
A function f(g_{11}, g_{12}, \ldots, g_{33}, \epsilon_{11}, \epsilon_{12}, \ldots, \epsilon_{33}) of the Euclidean metric tensor g_{\alpha\beta} and the strain tensor \epsilon_{\alpha\beta} will be called a strain invariant if (a) it is a scalar field; (b) under all transformations of coordinates x^i to \bar x^i

(15·3)  f(\bar g_{11}, \bar g_{12}, \ldots, \bar g_{33}, \bar\epsilon_{11}, \bar\epsilon_{12}, \ldots, \bar\epsilon_{33}) = f(g_{11}, g_{12}, \ldots, g_{33}, \epsilon_{11}, \epsilon_{12}, \ldots, \epsilon_{33}),

where the function f, on the left, is the same function of the \bar g_{\alpha\beta} and \bar\epsilon_{\alpha\beta} as it is, on the right, of the g_{\alpha\beta} and \epsilon_{\alpha\beta}.

We shall now prove that the three functions I_1, I_2, and I_3 occurring in the expansion of the determinant 15·1 are strain invariants. From the law of transformation of the mixed tensor field \epsilon^i_j we readily get

\bar\epsilon^\alpha_\alpha(\bar x^1, \bar x^2, \bar x^3) = \epsilon^\alpha_\alpha(x^1, x^2, x^3),

from which it follows that I_1 is a strain invariant, on recalling that \epsilon^\alpha_\beta = g^{\alpha\gamma}\epsilon_{\gamma\beta}. To prove that I_3 is a strain invariant, we have by hypothesis

\bar\epsilon^i_j(\bar x) = \epsilon^\alpha_\beta(x)\,\frac{\partial x^\beta}{\partial\bar x^j}\,\frac{\partial\bar x^i}{\partial x^\alpha}.

On taking the determinant of corresponding sides, we obtain

\left|\bar\epsilon^i_j\right| = \left|\epsilon^\alpha_\beta\right|\cdot\left|\frac{\partial x^\beta}{\partial\bar x^j}\right|\cdot\left|\frac{\partial\bar x^i}{\partial x^\alpha}\right|.

But the product of the functional determinants is equal to unity. Hence \left|\bar\epsilon^i_j\right| = \left|\epsilon^i_j\right|, and I_3 is a strain invariant. To prove that I_2 is a strain invariant, we first observe that \delta^i_j - 2\epsilon^i_j is a mixed tensor field of rank two. Hence, by the argument just completed for \epsilon^i_j, we see that the determinant \left|\delta^i_j - 2\epsilon^i_j\right| is itself a strain invariant. But, from the expansion 15·1, we see that

(15·4)  I_2 = \tfrac14\left[\left|\delta^i_j - 2\epsilon^i_j\right| - 1 + 2I_1 + 8I_3\right].

Formula 15·4 expresses I_2 as a linear combination of strain invariants with numerical multipliers. Hence obviously I_2 is itself a strain invariant. On using the results 14·31 together with what we have just proved, we obtain the result

(15·5)  \frac{dV_0}{dV} = \sqrt{1 - 2I_1 + 4I_2 - 8I_3},
which gives the ratio of the element of volume of a set of particles in the unstrained medium to the element of volume of the corresponding particles in the strained medium in terms of the three strain invariants I_1, I_2, and I_3.

Homogeneous and Isotropic Strains. Let us now discuss the mathematical description of a homogeneous strain.

DEFINITION OF HOMOGENEOUS STRAIN. A strain is homogeneous if the corresponding strain tensor \epsilon_{\alpha\beta} has a zero covariant derivative, i.e., \epsilon_{\alpha\beta,\gamma} = 0.

In rectangular coordinates, the condition reduces to \partial\epsilon_{\alpha\beta}/\partial x^\gamma = 0, since all the Euclidean Christoffel symbols are identically zero in rectangular coordinates. In other words, the strain tensor components \epsilon_{\alpha\beta} in rectangular coordinates are constants for a homogeneous strain. It readily follows from the definition of the strain invariants I_1, I_2, and I_3 that for a homogeneous strain

\frac{\partial I_i}{\partial x^j} = 0

in rectangular coordinates and hence in all coordinates. (Keep in mind that I_1, I_2, and I_3 are three scalars and not the three components of a covariant vector.) Hence, for a homogeneous strain, dV_0/dV is a numerical constant for all coordinates. We thus arrive at the important result that for a homogeneous strain

(15·6)  \frac{V_0}{V} = \sqrt{1 - 2I_1 + 4I_2 - 8I_3} = \text{a constant}.

This constant is the same for all "unstrained" volumes V_0 and their corresponding "strained" volumes V, and is independent of the coordinate system. For the special case of a homogeneous strain which is also isotropic at each point, we shall have

(15·7)  \epsilon^\alpha_\beta = \epsilon\,\delta^\alpha_\beta,

or what amounts to the same thing

(15·8)
\epsilon_{\alpha\beta} = \epsilon\,g_{\alpha\beta} \quad (g_{\alpha\beta}, \text{the Euclidean metric tensor}).

Since the strain is homogeneous, we have \epsilon_{\alpha\beta,\gamma} = 0. We also have (see the end of Chapter 11) g_{\alpha\beta,\gamma} = 0. Hence \epsilon in 15·7 and 15·8 is a constant scalar field, a numerical constant that is independent of position and the coordinate system. An example of an isotropic homogeneous strain is found in an isotropic medium subjected to uniform hydrostatic pressure, i.e., an isotropic medium subjected to the same pressure in all directions. Since 15·6 holds and

1 - 2I_1 + 4I_2 - 8I_3 = \left|\delta^\alpha_\beta - 2\epsilon^\alpha_\beta\right|,

we see by an evident calculation that for an isotropic homogeneous strain

(15·9)  \frac{V_0}{V} = (1 - 2\epsilon)^{3/2}.

The constant scalar \epsilon for an isotropic homogeneous strain is given by the formula

(15·10)  \epsilon = \tfrac12\left[1 - \left(\frac{V_0}{V}\right)^{2/3}\right]

in terms of any one volume before and after deformation. In the usual approximate theory (usual theory of elasticity) higher powers of \epsilon than the first are neglected; see Chapter 14. Since

(1 - 2\epsilon)^{3/2} = 1 - 3\epsilon, \ \text{approximately},

we have

(15·11)  \frac{V_0}{V} = 1 - 3\epsilon, \ \text{approximately},

and

(15·12)  \epsilon = \frac13\,\frac{V - V_0}{V} = \frac13\,\frac{\Delta V}{V} = \frac13\,\frac{\Delta V}{V_0}, \ \text{approximately}.
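A small numerical check (ours, not the book's) of 15·9 and its first-order form 15·11: the exact ratio (1 - 2\epsilon)^{3/2} differs from 1 - 3\epsilon only by terms of order \epsilon^2.

```python
# Check of (15.9) and (15.11): exact vs. first-order volume ratio for an
# isotropic homogeneous strain eps_ab = eps * g_ab.

def v0_over_v(eps):
    return (1 - 2 * eps) ** 1.5  # the exact formula (15.9)

for eps in (0.001, 0.01):
    exact = v0_over_v(eps)
    approx = 1 - 3 * eps         # the approximate formula (15.11)
    print(eps, exact, approx, exact - approx)  # difference is O(eps^2)
```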
A Fundamental Theorem on Homogeneous Strains. We shall now outline the proof of the following theorem. A necessary and sufficient condition that a strain be homogeneous is that, in terms of unstrained cartesian coordinates {}^\alpha a and strained cartesian coordinates y^i, the deformation is linear, i.e.,

(15·13)  {}^\alpha a = {}^\alpha A_i\,y^i + {}^\alpha A.

Outline of Proof. From the definition of the strain tensor \epsilon_{pq}, we have

h_{pq} = g_{pq} - 2\epsilon_{pq}.

Since g_{pq,r} = 0, and since under our "necessity hypothesis" \epsilon_{pq,r} = 0, we obtain the vanishing of the covariant derivative of h_{pq}(x). But by definition

h_{pq}(x) = {}^{\alpha\beta}g(a)\,({}^\alpha a_{,p})({}^\beta a_{,q}),

so that the x^i are the independent variables and the {}^\alpha a are the dependent variables. Expanding the covariant derivative in h_{pq,r}(x) = 0, and rearranging, we find

(15·14)  {}^{\alpha\beta}g\;{}^\alpha a_{,pr}\,{}^\beta a_{,q} = -\frac{\partial\,{}^{\alpha\beta}g}{\partial x^r}\,{}^\alpha a_{,p}\,{}^\beta a_{,q} - {}^{\alpha\beta}g\;{}^\alpha a_{,p}\,{}^\beta a_{,qr},

where

(15·15)  {}^\alpha a_{,pr} = \frac{\partial\,{}^\alpha a_{,p}}{\partial x^r} - \Gamma^s_{pr}(x)\,{}^\alpha a_{,s}.

Since

(15·16)  {}^\alpha a_{,pr} = {}^\alpha a_{,rp},

the right side of 15·14 must be symmetric in p and r. Hence

(15·17)  {}^{\alpha\beta}g\;{}^\alpha a_{,pr}\,{}^\beta a_{,q} = -\frac{\partial\,{}^{\alpha\beta}g}{\partial x^p}\,{}^\alpha a_{,r}\,{}^\beta a_{,q} - {}^{\alpha\beta}g\;{}^\alpha a_{,r}\,{}^\beta a_{,qp}.

On equating the corresponding sides of 15·14 and 15·17, rearranging, and interchanging p and q and \beta and \alpha, we obtain

(15·18)  {}^{\alpha\beta}g\;{}^\alpha a_{,pr}\,{}^\beta a_{,q} = -\frac{\partial\,{}^{\alpha\beta}g}{\partial x^r}\,{}^\alpha a_{,p}\,{}^\beta a_{,q} + \frac{\partial\,{}^{\alpha\beta}g}{\partial x^q}\,{}^\alpha a_{,r}\,{}^\beta a_{,p} + {}^{\alpha\beta}g\;{}^\alpha a_{,qp}\,{}^\beta a_{,r}.

Adding corresponding sides of 15·17 and 15·18 there results

(15·19)  2\,{}^{\alpha\beta}g\;{}^\alpha a_{,pr}\,{}^\beta a_{,q} = -\frac{\partial\,{}^{\alpha\beta}g}{\partial x^p}\,{}^\alpha a_{,r}\,{}^\beta a_{,q} - \frac{\partial\,{}^{\alpha\beta}g}{\partial x^r}\,{}^\alpha a_{,p}\,{}^\beta a_{,q} + \frac{\partial\,{}^{\alpha\beta}g}{\partial x^q}\,{}^\alpha a_{,r}\,{}^\beta a_{,p}.

Recalling that the {}^{\alpha\beta}g are functions of the unstrained coordinates {}^i a, we find

(15·20)  {}^{\gamma\delta}g\;{}^\gamma a_{,pr}\,{}^\delta a_{,q} = -\frac12\left[\frac{\partial\,{}^{\delta\alpha}g}{\partial\,{}^\beta a} + \frac{\partial\,{}^{\delta\beta}g}{\partial\,{}^\alpha a} - \frac{\partial\,{}^{\alpha\beta}g}{\partial\,{}^\delta a}\right]{}^\alpha a_{,p}\,{}^\beta a_{,r}\,{}^\delta a_{,q}.
Multiplying corresponding sides by {}_\delta x^q and summing on q, we obtain

(15·21)  {}^{\gamma\delta}g\;{}^\gamma a_{,pr} = -\frac12\left[\frac{\partial\,{}^{\delta\alpha}g}{\partial\,{}^\beta a} + \frac{\partial\,{}^{\delta\beta}g}{\partial\,{}^\alpha a} - \frac{\partial\,{}^{\alpha\beta}g}{\partial\,{}^\delta a}\right]{}^\alpha a_{,p}\,{}^\beta a_{,r}.

Finally, if we multiply corresponding sides of 15·21 by the elements of the inverse matrix of the {}^{\gamma\delta}g, we arrive readily at the interesting result

(15·22)  {}^\gamma a_{,pr} = -\,{}^\gamma\Gamma_{\alpha\beta}(a)\;{}^\alpha a_{,p}\,{}^\beta a_{,r},

where the {}^\gamma\Gamma_{\alpha\beta}(a) are the Euclidean Christoffel symbols based on the Euclidean metric tensor {}^{\alpha\beta}g(a) in the unstrained coordinates {}^i a. Now from the definition 15·15 of {}^\alpha a_{,pr} and from the vanishing of the Euclidean Christoffel symbols \Gamma^i_{jk}(x) and {}^\gamma\Gamma_{\alpha\beta}(a) when evaluated in "strained" cartesian coordinates y^i and "unstrained" cartesian coordinates {}^i a respectively, we see that 15·22 reduces to

(15·23)  \frac{\partial^2\,{}^\gamma a}{\partial y^p\,\partial y^r} = 0.
This implies that the deformation is linear, i.e., of type 15·13. To prove the converse part of the fundamental theorem on homogeneous strains, we have by hypothesis that the deformation, or strain, is given by a linear transformation 15·13 in cartesian coordinates {}^i a and y^i. Now h_{pq}(x) = {}^{\alpha\beta}g(a)\,({}^\alpha a_{,p})({}^\beta a_{,q}), and the {}^{\alpha\beta}g are the constants \delta_{\alpha\beta} in cartesian coordinates. From 15·13 we have

\frac{\partial\,{}^\alpha a}{\partial y^p} = {}^\alpha A_p,

and hence the components {}^{*}h_{pq}(y) in cartesian coordinates y^i are given by

{}^{*}h_{pq}(y) = \delta_{\alpha\beta}\;{}^\alpha A_p\;{}^\beta A_q,

a set of constants. Hence

\frac{\partial\,{}^{*}h_{pq}(y)}{\partial y^r} = 0.

But this condition implies that the covariant derivative h_{pq,r}(x) = 0 and hence the covariant derivative \epsilon_{pq,r}(x) = 0. In other words, the strain is homogeneous, and the proof of the theorem is complete.

Variation of the Strain Tensor. In preparation for the subject matter of the next chapter, we need to consider deformations that depend on an accessory parameter t,
which in dynamical problems can be taken as the time t. So let the coordinates x^i of a representative particle in the strained medium depend on the coordinates {}^i a of the corresponding particle in the unstrained medium and on the accessory parameter t. Let

(15·24)  Dx^i = \frac{\partial x^i}{\partial t}\,dt

be the partial differential of x^i in t. If J(x) is any tensor field in the strained medium, define \delta J(x) by

(15·25)  \delta J(x) = J_{,s}\,Dx^s,

where J_{,s} is the covariant derivative of J. Clearly, if the x^i are cartesian, \delta J = DJ. Moreover, \delta J(x) = DJ(x) for a scalar J(x) in general coordinates x^i. To have a well-rounded notation, define \delta x^\kappa = Dx^\kappa and refer to \delta x^\kappa as the virtual displacement vector. If the x^i({}^1a, {}^2a, {}^3a, t) have continuous second derivatives, then from the commutativity of second derivatives

D\left(\frac{\partial x^\kappa}{\partial\,{}^q a}\right) = \frac{\partial(Dx^\kappa)}{\partial\,{}^q a} = \frac{\partial(\delta x^\kappa)}{\partial\,{}^q a}.

Hence

D(dx^\kappa) = \frac{\partial(\delta x^\kappa)}{\partial x^\sigma}\,dx^\sigma,

which implies the tensor equation

(15·26)  \delta(dx^\kappa) = (\delta x^\kappa)_{,\sigma}\,dx^\sigma.
Obviously \delta g_{\mu\nu} = 0, since g_{\mu\nu,\sigma} = 0. Hence the above tensor equation may be written

(15·27)  \delta(dx_\kappa) = (\delta x_\kappa)_{,\sigma}\,dx^\sigma.

Since \delta({}^i a) = 0, an evident calculation using 15·26 shows that

(15·28)  \delta({}^\alpha a_{,p}) = -\,{}^\alpha a_{,\sigma}\,(\delta x^\sigma)_{,p}.

Recalling that

h_{pq}(x) = {}^{\alpha\beta}g(a)\,({}^\alpha a_{,p})({}^\beta a_{,q})

and applying formula 15·28, we find

\delta h_{pq} = -\,{}^{\alpha\beta}g\left[{}^\alpha a_{,\sigma}(\delta x^\sigma)_{,p}\,{}^\beta a_{,q} + {}^\alpha a_{,p}\,{}^\beta a_{,\sigma}(\delta x^\sigma)_{,q}\right],

which can be put in the convenient form

(15·29)  \delta h_{pq} = -h^\sigma_q(\delta x_\sigma)_{,p} - h^\sigma_p(\delta x_\sigma)_{,q}.

From this, and from the definition of the strain tensor \epsilon_{\alpha\beta} and the related formula h_{pq} = g_{pq} - 2\epsilon_{pq}, we arrive at the fundamental formula for the variation of the strain tensor:

(15·30)  \delta\epsilon_{pq}(x) = \tfrac12\left[(\delta x_q)_{,p} + (\delta x_p)_{,q}\right] - \left[\epsilon^\alpha_q(\delta x_\alpha)_{,p} + \epsilon^\alpha_p(\delta x_\alpha)_{,q}\right].

This formula becomes

(15·31)  \delta\epsilon_{pq}(x) = \tfrac12\left[(\delta x_q)_{,p} + (\delta x_p)_{,q}\right]

within the approximations of the usual approximate theory of elasticity. Returning to our finite deformation theory, we define a rigid virtual displacement by the condition \delta(ds^2) = 0. On using formulas 15·26 and 15·27 in an evident calculation, we find

(15·32)  \delta(ds^2) = \left[(\delta x_\alpha)_{,\beta} + (\delta x_\beta)_{,\alpha}\right] dx^\alpha\,dx^\beta

for any virtual displacement, rigid or not. Hence the virtual displacement vector must satisfy Killing's differential equations

(\delta x_\alpha)_{,\beta} + (\delta x_\beta)_{,\alpha} = 0

for a rigid virtual displacement. For the sake of completeness, we shall write down the formula (without giving the derivation) for the variation of the Lagrangean strain tensor {}^{pq}\eta under an arbitrary virtual displacement:

(15·33)  \delta\,{}^{pq}\eta = \tfrac12\left[(\delta x_\alpha)_{,\beta} + (\delta x_\beta)_{,\alpha}\right]{}_p x^\alpha\,{}_q x^\beta.

Exercise

Calculate the three fundamental strain invariants for a homogeneous isotropic strain. Hint: since they are constants, calculate them in rectangular cartesian coordinates.
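Killing's differential equations can be illustrated numerically. In rectangular coordinates the covariant derivative is the ordinary partial derivative, and the virtual displacement of an infinitesimal rigid rotation, \delta x = \omega \times x, has identically vanishing symmetrized gradient. The sketch below is ours; \omega is an arbitrary sample rotation vector.

```python
# Check of Killing's equations: (dx_a),b + (dx_b),a = 0 for the displacement
# field of an infinitesimal rigid rotation, in rectangular coordinates.

omega = (0.3, -0.1, 0.2)  # sample rotation vector

def displacement(p):  # dx = omega cross p
    wx, wy, wz = omega
    x, y, z = p
    return (wy * z - wz * y, wz * x - wx * z, wx * y - wy * x)

def killing_tensor(p, h=1e-5):
    S = [[0.0] * 3 for _ in range(3)]  # S[a][b] = d(dx_a)/d x^b
    for b in range(3):
        pp, pm = list(p), list(p)
        pp[b] += h; pm[b] -= h
        dp, dm = displacement(pp), displacement(pm)
        for a in range(3):
            S[a][b] = (dp[a] - dm[a]) / (2 * h)
    return [[S[a][b] + S[b][a] for b in range(3)] for a in range(3)]

K = killing_tensor((1.0, 2.0, -0.5))
print(max(abs(K[a][b]) for a in range(3) for b in range(3)))  # ~ 0
```

By 15·32, such a displacement leaves every element of arc ds^2 unchanged, which is exactly what "rigid" means.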
CHAPTER 16

STRESS TENSOR, ELASTIC POTENTIAL, AND STRESS-STRAIN RELATIONS

Stress Tensor. Let S be the bounding surface of a portion of the elastic medium in its strained position. The surface element of S may be described by means of the covariant vector dS_r,

(16·1)  dS_1 = \sqrt{g}\,d(x^2, x^3), \quad dS_2 = \sqrt{g}\,d(x^3, x^1), \quad dS_3 = \sqrt{g}\,d(x^1, x^2),

where

(16·2)  d(x^\alpha, x^\beta) = \begin{vmatrix} \dfrac{\partial x^\alpha}{\partial u} & \dfrac{\partial x^\alpha}{\partial v} \\ \dfrac{\partial x^\beta}{\partial u} & \dfrac{\partial x^\beta}{\partial v} \end{vmatrix}\,du\,dv,

and u, v are the surface parameters, so that the parametric equations of the surface S are given by x^i = f^i(u, v). In rectangular coordinates and in the usual notations x, y, z, the components of the covariant vector dS_r are given by

dS_x = d(y, z), \quad dS_y = d(z, x), \quad dS_z = d(x, y).

Exercise

Prove that dS_r is a covariant vector under transformations of the coordinates x^i. Hint: use the fact that \sqrt{g} is a scalar density.

Let dS be the magnitude of the surface element, i.e.,

(dS)^2 = g^{\alpha\beta}\,dS_\alpha\,dS_\beta.

Before we introduce the notion of a stress tensor we must define a stress vector. A stress vector is a surface force that acts on the surface of a volume. An example of a surface force is the tension acting on any horizontal section of a steel rod suspended vertically. If one thinks of the rod as cut by a horizontal plane into two parts, then the action of the weight of the lower part of the rod is transmitted to the upper part across the surface of the cut. A hydrostatic pressure on the surface of a submerged solid body provides another example of a surface force. There are other kinds of forces called body, volume, or mass forces,
i.e., forces that act throughout the volume. As a typical example of a mass force one can take the force of gravity, \rho g\,\Delta V, acting on the mass contained in the volume \Delta V of the medium whose density is \rho, and where g is the gravitational acceleration.

A stress tensor T^{\alpha\beta} is defined implicitly by the relation

(16·3)  F^\alpha\,dS = T^{\alpha\beta}\,dS_\beta,

where F^\alpha is the stress vector acting on the surface element dS_r. Let us now consider a virtual displacement of the strained medium corresponding to the accessory parameter t. The virtual work of the stresses across the boundary S is

(16·4)  \iint_S F^\beta\,\delta x_\beta\,dS = \iint_S T^{\alpha\beta}\,\delta x_\beta\,dS_\alpha = \iiint_V (T^{\alpha\beta}\,\delta x_\beta)_{,\alpha}\,dV,

a volume integral extended over the volume V bounded by S and obtained by Green's theorem or generalized Stokes' theorem in curvilinear coordinates. If there are mass forces (M^\beta per unit mass) acting on the medium, the virtual work of these mass forces is

\iiint_V \rho M^\beta\,\delta x_\beta\,dV,

where \rho is the mass density. Hence, the virtual work of all the forces acting on any portion of the medium is

(16·5)  \iiint_V \left[(T^{\alpha\beta}_{,\alpha} + \rho M^\beta)\,\delta x_\beta + T^{\alpha\beta}(\delta x_\beta)_{,\alpha}\right] dV.

We shall now adopt the PHYSICAL ASSUMPTION OF EQUILIBRIUM: The virtual work of all the forces acting on any portion of the medium is zero for any rigid virtual displacement.

In particular, the translations, characterized by (\delta x_\beta)_{,\alpha} = 0, are rigid virtual displacements, and so we must have the condition

(16·6)  \iiint_V (T^{\alpha\beta}_{,\alpha} + \rho M^\beta)\,\delta x_\beta\,dV = 0.

Since \delta x_\beta is arbitrary at any chosen point and V is arbitrary, we have the following differential equations for equilibrium:

(16·7)  T^{\alpha\beta}_{,\alpha} + \rho M^\beta = 0.
Consequently the virtual work of all the forces (mass as well as surface) acting upon any portion of the medium in any virtual displacement is given (on using 16·5 and 16·7) by

(16·8)  \text{Total virtual work} = \iiint_V T^{\alpha\beta}(\delta x_\beta)_{,\alpha}\,dV.

Since this must vanish for any rigid virtual displacement, i.e., for

(\delta x_\beta)_{,\alpha} + (\delta x_\alpha)_{,\beta} = 0,
the stress tensor must be symmetric:

(16·9)  T^{\alpha\beta} = T^{\beta\alpha}.

Hence 16·8 can be written

(16·10)  \text{Total virtual work} = \tfrac12\iiint_V T^{\alpha\beta}\left[(\delta x_\alpha)_{,\beta} + (\delta x_\beta)_{,\alpha}\right] dV.

Within the approximations of the usual approximate elasticity theory, the total virtual work may be written

(16·11)  \iiint_V T^{\alpha\beta}\,\delta\epsilon_{\alpha\beta}\,dV,

since formula 15·31 holds for the approximate theory. But note that this is not a legitimate result for the finite deformation theory; formula 16·10 is the legitimate result for that theory.

Elastic Potential. We shall now turn our attention to the elastic potential and its relation to the stress tensor. Let \rho be the density of the volume element dV in the strained medium. The element of mass dm is given then by dm = \rho\,dV. The principle of conservation of mass in a virtual displacement is expressed by

\delta(dm) = \delta(\rho\,dV) = 0.
Let T be the temperature of the element of mass dm, \eta the entropy density (per unit mass) so that the entropy of the mass dm is \eta\,dm = \rho\eta\,dV, and u\,dm the internal energy of the mass dm. Then the fundamental energy-conservation law of thermodynamics says that

(16·12)  T\,\delta(\eta\,dm) = \delta(u\,dm) - (\text{virtual work of all forces acting on } dm).

Let

(16·13)  \phi = u - T\eta,

the free energy density or elastic potential. From the principle of conservation of mass, we have, on integrating over any portion of the strained medium and making use of equations 16·8, 16·12, and 16·13,

(16·14)  \iiint_V (\delta\phi)\,\rho\,dV = \iiint_V T^{\alpha\beta}(\delta x_\alpha)_{,\beta}\,dV - \iiint_V (\delta T)\,\rho\eta\,dV.

Since V is arbitrary, this yields

(16·15)  \rho\,\delta\phi = T^{\alpha\beta}(\delta x_\alpha)_{,\beta} - \rho\eta\,\delta T.

We shall now work under the following HYPOTHESIS ON ELASTIC POTENTIAL \phi: \phi is a function of the {}^\alpha a_{,i}, the Euclidean metric tensor g_{ij}(x) in the strained medium, the Euclidean metric tensor {}^{\alpha\beta}g(a) in the unstrained medium, and the temperature T.
We shall restrict ourselves to isothermal variations, so that T is a constant parameter in \phi. From 16·15 and the symmetry of the stress tensor T^{\alpha\beta}, we see that \delta\phi = 0 for any isothermal rigid virtual displacement. Now, for any virtual displacement, \delta({}^{\alpha\beta}g) = 0 and \delta g_{ij} = 0. Hence from Killing's differential equations we have

(16·16)  \frac{\partial\phi}{\partial({}^\alpha a_{,\beta})}\,\delta({}^\alpha a_{,\beta}) = 0

whenever

(16·17)  (\delta x_\alpha)_{,\beta} + (\delta x_\beta)_{,\alpha} = 0.

Let

{}^\alpha x^\beta(x) = g^{\beta\gamma}(x)\,{}^\alpha a_{,\gamma}

and use the formula

\delta({}^\alpha a_{,\beta}) = -\,{}^\alpha a_{,\gamma}\,(\delta x^\gamma)_{,\beta} \quad (\text{see formula 15·28})

and 16·17 in 16·16 to obtain

\left[\frac{\partial\phi}{\partial({}^\alpha a_{,\beta})}\,{}^\alpha x^\gamma - \frac{\partial\phi}{\partial({}^\alpha a_{,\gamma})}\,{}^\alpha x^\beta\right](\delta x_\gamma)_{,\beta} = 0.

Hence \phi must satisfy the following system of partial differential equations:

(16·18)  \frac{\partial\phi}{\partial({}^\alpha a_{,\beta})}\,{}^\alpha x^\gamma - \frac{\partial\phi}{\partial({}^\alpha a_{,\gamma})}\,{}^\alpha x^\beta = 0.

This is a complete system of three linear first-order partial differential equations in the nine variables {}^\alpha a_{,\beta}. There are nine conditions in 16·18, but three are identities and only three of the remaining six are independent. From the theory of such systems of differential equations we know that the general solution of 16·18 is a function of six functionally independent solutions. It will take us too far afield to give the theory of such differential equations; we are content here with this mere statement of the result concerning the most general solution \phi. There are some particularly interesting solutions of equations 16·18. To consider them it is convenient to define an isotropic medium.

DEFINITION OF ISOTROPIC MEDIUM. A medium whose elastic potential is a strain invariant that may depend parametrically on the temperature T will be called an isotropic medium.

Now it can be shown, but in this brief volume we have not the time to give the details of proof, that the elastic potential for an isotropic medium satisfies the differential equations 16·18. It can also be shown by a long mathematical argument that any strain invariant is a function of the three fundamental strain invariants I_1, I_2, and I_3 of Chapter 15. The following important result is immediate. A necessary and sufficient
condition that a medium be isotropic is that its elastic potential \phi = \phi(I_1, I_2, I_3, T), where I_1, I_2, and I_3 are the fundamental strain invariants.

In the usual approximate theory of elasticity, the elastic potential \phi for crystalline media (another name for non-isotropic media) is taken as a quadratic function of the strain tensor components. It is tacitly assumed in the usual approximate theory that a special privileged reference frame, determined by the axes of the crystal, has been chosen. The coefficients of the quadratic form are accordingly not scalars but components of a tensor of rank four which depends on the orientation of the crystalline axes.
Stress-Strain Relations for an Isotropic Medium. Consider the elastic potential \phi for an isotropic medium as a function of the strain tensor \epsilon_{rs}. Since \epsilon_{rs} is symmetric, we have \epsilon_{rs} = \tfrac12(\epsilon_{rs} + \epsilon_{sr}). In \phi, we shall write \tfrac12(\epsilon_{rs} + \epsilon_{sr}) wherever \epsilon_{rs} occurs, and thus we see that

(16·19)  \frac{\partial\phi}{\partial\epsilon_{sr}} = \frac{\partial\phi}{\partial\epsilon_{rs}},

with the understanding that in \partial\phi/\partial\epsilon_{rs}, say, all the other \epsilon's (including \epsilon_{sr} for that s, r) are held constant, so that in this differentiation no attention is paid to the symmetry relations \epsilon_{sr} = \epsilon_{rs}. We saw in Chapter 15 (see 15·29) that under a virtual displacement the variation of h_{pq} and hence of the strain tensor \epsilon_{pq} was given by

(16·20)  \delta\epsilon_{pq} = -\tfrac12\,\delta h_{pq} = \tfrac12\left[h^\sigma_q(\delta x_\sigma)_{,p} + h^\sigma_p(\delta x_\sigma)_{,q}\right],

since h_{pq} = g_{pq} - 2\epsilon_{pq}. But \delta g_{pq} = 0; hence

\delta\phi = \frac{\partial\phi}{\partial\epsilon_{\alpha\beta}}\,\delta\epsilon_{\alpha\beta} = \tfrac12\,\frac{\partial\phi}{\partial\epsilon_{\alpha\beta}}\left[h^\sigma_\beta(\delta x_\sigma)_{,\alpha} + h^\sigma_\alpha(\delta x_\sigma)_{,\beta}\right],

or

(16·21)  \delta\phi = \frac{\partial\phi}{\partial\epsilon_{\alpha\beta}}\,h^\sigma_\beta\,(\delta x_\sigma)_{,\alpha} \qquad (\alpha \text{ and } \beta \text{ are summation indices}\dagger)

on using conditions 16·19. Now, for an isothermal virtual displacement, formula 16·15 reduces to \rho\,\delta\phi = T^{\alpha\beta}(\delta x_\alpha)_{,\beta}, and so for an isotropic medium

(16·22)  T^{\sigma\alpha}(\delta x_\sigma)_{,\alpha} = \rho\,\frac{\partial\phi}{\partial\epsilon_{\alpha\beta}}\,h^\sigma_\beta\,(\delta x_\sigma)_{,\alpha}.

From the arbitrariness of the virtual displacement and the fact that h^\sigma_\beta = \delta^\sigma_\beta - 2\epsilon^\sigma_\beta, we obtain the stress-strain relations for an isotropic medium

(16·23)  T^{\sigma\alpha} = \rho\left(\frac{\partial\phi}{\partial\epsilon_{\alpha\sigma}} - 2\,\epsilon^\sigma_\beta\,\frac{\partial\phi}{\partial\epsilon_{\alpha\beta}}\right).

† Throughout the remaining part of this chapter, a mere repetition of an index in a term will denote summation over that index.
We shall put this stress-strain relation in another form. Now by definition \epsilon^i_j = g^{i\sigma}\epsilon_{\sigma j}, and hence

\frac{\partial\epsilon^i_j}{\partial\epsilon_{\alpha\beta}} = g^{i\alpha}\,\delta^\beta_j.

Obviously

\frac{\partial\phi}{\partial\epsilon_{\alpha\beta}} = \frac{\partial\phi}{\partial\epsilon^i_j}\,\frac{\partial\epsilon^i_j}{\partial\epsilon_{\alpha\beta}},

and hence

(16·24)  \frac{\partial\phi}{\partial\epsilon_{\alpha\beta}} = g^{i\alpha}\,\frac{\partial\phi}{\partial\epsilon^i_\beta}.

Define the mixed stress tensor T^\alpha_\beta by

(16·25)  T^\alpha_\beta = g_{\beta\gamma}T^{\alpha\gamma} \quad (= g_{\beta\gamma}T^{\gamma\alpha} \text{ from the symmetry of the stress tensor}).

Then with the aid of 16·24 in 16·23 one can show readily that the following stress-strain relations hold for an isotropic medium:

(16·26)  T^\alpha_\beta = \rho\left(\frac{\partial\phi}{\partial\epsilon^\beta_\alpha} - 2\,\epsilon^\alpha_\gamma\,\frac{\partial\phi}{\partial\epsilon^\beta_\gamma}\right).

From the principle of conservation of mass \rho\,dV = \rho_0\,dV_0, and from the fundamental result 15·5, we see that

(16·27)  \rho = \rho_0\sqrt{1 - 2I_1 + 4I_2 - 8I_3}

in terms of the strain invariants I_1, I_2, and I_3 for media, whether isotropic or not. Since I_1, I_2, and I_3 are respectively first degree, second degree, and third degree in the strain tensor components \epsilon_{\alpha\beta}, we see that to a first approximation \rho = \rho_0; i.e., volumes are also preserved to a first approximation. Hence to the same degree of approximation, the stress-strain relations 16·26 for an isotropic medium reduce to Hooke's law of the usual approximate theory

(16·28)  T^\alpha_\beta = \frac{\partial\phi_1}{\partial\epsilon^\beta_\alpha}, \quad \text{where } \phi_1 = \rho_0\,\phi.
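Hooke's law 16·28 can be illustrated with a quadratic potential of the approximate theory. The potential \phi_1 = \tfrac12\lambda(\epsilon^\alpha_\alpha)^2 + \mu\,\epsilon^\alpha_\beta\,\epsilon^\beta_\alpha used below is our assumed example (\lambda and \mu are Lamé-type constants, not notation from the book); the derivative in 16·28 is taken numerically, treating the nine components as independent in the sense of 16·19.

```python
# Sketch of (16.28): with a quadratic potential, T^a_b = d phi1 / d eps^b_a
# comes out as Hooke's law T = lam * tr(eps) * delta + 2 * mu * eps.

lam, mu = 1.2, 0.8  # assumed sample constants

def phi1(e):
    tr = e[0][0] + e[1][1] + e[2][2]
    tr2 = sum(e[i][j] * e[j][i] for i in range(3) for j in range(3))
    return 0.5 * lam * tr * tr + mu * tr2

def stress(e, h=1e-6):
    T = [[0.0] * 3 for _ in range(3)]
    for a in range(3):
        for b in range(3):
            ep = [row[:] for row in e]; em = [row[:] for row in e]
            ep[b][a] += h; em[b][a] -= h   # differentiate w.r.t. eps^b_a
            T[a][b] = (phi1(ep) - phi1(em)) / (2 * h)
    return T

eps = [[0.01, 0.002, 0.0], [0.002, -0.004, 0.001], [0.0, 0.001, 0.03]]
T = stress(eps)
tr = eps[0][0] + eps[1][1] + eps[2][2]
print(T[0][0], lam * tr + 2 * mu * eps[0][0])  # the two numbers agree
```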
CHAPTER 17

TENSOR CALCULUS IN RIEMANNIAN SPACES AND THE FUNDAMENTALS OF CLASSICAL MECHANICS

Multidimensional Euclidean Spaces. In the last two chapters of this book we shall attempt to give some indications of a more general tensor calculus and some of its applications. Although our discussion will of necessity be brief, this fact will not keep us from going to the heart of our subject. Our study of Euclidean tensor analysis can be used advantageously to accomplish this. First of all the subject matter of Chapters 9, 10, and 11 can obviously be extended to n-dimensional Euclidean spaces, where n is any positive integer. There will be n variables wherever there were three before, and indices will have the range 1, 2, \ldots, n with the consequent summations going from 1 to n. For example, the squared element of arc in rectangular coordinates y^1, y^2, \ldots, y^n is

(17·1)  ds^2 = (dy^1)^2 + (dy^2)^2 + \cdots + (dy^n)^2,

while in general coordinates x^1, x^2, \ldots, x^n

(17·2)  ds^2 = g_{\alpha\beta}(x^1, x^2, \ldots, x^n)\,dx^\alpha\,dx^\beta,

where

(17·3)  g_{\alpha\beta}(x^1, x^2, \ldots, x^n) = \sum_{i=1}^n \frac{\partial y^i}{\partial x^\alpha}\,\frac{\partial y^i}{\partial x^\beta},

the n-dimensional Euclidean metric tensor (see 9·15). The n-dimensional Euclidean Christoffel symbols are

(17·4)  \Gamma^i_{jk}(x^1, x^2, \ldots, x^n) = \tfrac12\,g^{i\sigma}\left(\frac{\partial g_{\sigma j}}{\partial x^k} + \frac{\partial g_{\sigma k}}{\partial x^j} - \frac{\partial g_{jk}}{\partial x^\sigma}\right),

where the g_{\alpha\beta} are defined in 17·3, while

(17·5)  g^{\alpha\beta}(x^1, x^2, \ldots, x^n) = \frac{\text{cofactor of } g_{\beta\alpha} \text{ in } g}{g}

in terms of the n-rowed determinant

(17·6)  g = \begin{vmatrix} g_{11} & g_{12} & \cdots & g_{1n} \\ g_{21} & g_{22} & \cdots & g_{2n} \\ \vdots & & & \vdots \\ g_{n1} & g_{n2} & \cdots & g_{nn} \end{vmatrix}.

Riemannian
Riemannian Geometry.¹ An n-dimensional Riemannian space is an n-dimensional manifold with coordinates such that the length of curves is determined by means of a symmetric covariant tensor field of rank two g_{\alpha\beta}(x^1, x^2, \ldots, x^n) in such a fashion that the squared element of arc

(17·7)  ds^2 = g_{\alpha\beta}(x)\,dx^\alpha dx^\beta

is positive definite, i.e., g_{\alpha\beta}\,dx^\alpha dx^\beta \ge 0 and is equal to zero if and only if all the dx^\alpha are zero. The length of a curve x^i = f^i(t) given in terms of a parameter t is then by definition

(17·8)  s = \int_{t_0}^{t_1} \sqrt{ g_{ij}(x)\,(dx^i/dt)(dx^j/dt) }\;dt.

It can be proved by rather long algebraic manipulations that from the positive definiteness of g_{\alpha\beta}\,dx^\alpha dx^\beta follows the positive value of the determinant of the g_{\alpha\beta}, i.e.,

(17·9)  g = |g_{\alpha\beta}| > 0.

The theory of a Riemannian space is a Riemannian geometry. Exactly as in an n-dimensional Euclidean space, we can derive the contravariant tensor field of rank two g^{\alpha\beta}(x^1, x^2, \ldots, x^n) and thus have at our disposal the Christoffel symbols of our Riemannian geometry

(17·10)  \Gamma^{i}_{\alpha\beta}(x^1, x^2, \ldots, x^n) = \tfrac{1}{2}\,g^{i\sigma}( \partial g_{\sigma\alpha}/\partial x^\beta + \partial g_{\sigma\beta}/\partial x^\alpha - \partial g_{\alpha\beta}/\partial x^\sigma ).
Notice that the Riemannian Christoffel symbols depend on the fundamental Riemannian metric tensor g_{\alpha\beta} and its first partial derivatives. Unlike the case of the Euclidean Christoffel symbols, it is in general impossible to find a coordinate system in which all the Riemannian Christoffel symbols are zero everywhere in the Riemannian space. This is due to the fact that it is in general impossible to find a coordinate system in which the fundamental metric tensor g_{\alpha\beta} has constant components throughout space. It is to be recalled that in a Euclidean space there do exist just such coordinate systems, i.e., cartesian coordinate systems and rectangular coordinate systems in particular. We can, however, prove that there exists a coordinate system with any point q^i of the space as the origin, i.e., the (0, 0, \ldots, 0) point, such that all the Christoffel symbols vanish at the origin when they are evaluated in this coordinate system. Such a coordinate system is called a geodesic coordinate system. We shall prove that the coordinates y^i defined implicitly by the transformation of coordinates

(17·11)  y^i = x^i - q^i + \tfrac{1}{2}\,\Gamma^{i}_{\alpha\beta}(q)\,(x^\alpha - q^\alpha)(x^\beta - q^\beta)

are geodesic coordinates. A direct calculation from 17·11 yields the needed formulas

(17·12)  (\partial x^i/\partial y^j)_0 = \delta^i_j,  (\partial^2 x^i/\partial y^\alpha \partial y^\beta)_0 = -\Gamma^{i}_{\alpha\beta}(q),

where the subscript 0 means evaluation at the origin of the y^i coordinates. Now, under a transformation of coordinates, the Christoffel symbols of a Riemannian space transform† by the rule 10·29 for Euclidean Christoffel symbols. Let *\Gamma^{i}_{\alpha\beta}(y^1, y^2, \ldots, y^n) be the (Riemannian) Christoffel symbols in the y^i coordinates. Then

(17·13)  *\Gamma^{i}_{\alpha\beta}(y^1, y^2, \ldots, y^n) = \Gamma^{\mu}_{\lambda\nu}(x^1, x^2, \ldots, x^n)\,(\partial x^\lambda/\partial y^\alpha)(\partial x^\nu/\partial y^\beta)(\partial y^i/\partial x^\mu) + (\partial^2 x^\mu/\partial y^\alpha \partial y^\beta)(\partial y^i/\partial x^\mu).

Now we see from the transformation of coordinates 17·11 that x^i = q^i when y^i = 0. In other words, the origin of the y^i coordinates has coordinates x^i = q^i in the x^i coordinates. If we evaluate both sides of 17·13 at the origin of the y^i coordinates and if we use formulas 17·12 in the calculations, we find

(17·14)  *\Gamma^{i}_{\alpha\beta}(0, 0, \ldots, 0) = 0.

In other words, the y^i's are geodesic coordinates.

† With the difference that the number of variables now is n and the indices have the range 1 to n. The proof is practically a repetition of that given in note 4 to Chapter 10.
Curved Surfaces as Examples of Riemannian Spaces. Obviously any Euclidean space is a very special Riemannian space. A simple example of a Riemannian space which is not Euclidean is furnished by a curved surface in ordinary three-dimensional Euclidean space. This can be seen as follows. Let y^i be rectangular coordinates in the three-dimensional Euclidean space, and let the equations of a curved surface be

(17·15)  y^i = f^i(x^1, x^2)  (i = 1, 2, 3)

in terms of two parameters x^1 and x^2. Then the squared element of arc for points on the surface 17·15 is

(17·16)  ds^2 = \sum_{i=1}^{3} (dy^i)^2 = g_{\alpha\beta}(x^1, x^2)\,dx^\alpha dx^\beta

(\alpha and \beta have the range 1 to 2, and corresponding summations go from 1 to 2), where

(17·17)  g_{\alpha\beta}(x^1, x^2) = \sum_{i=1}^{3} (\partial f^i(x^1, x^2)/\partial x^\alpha)(\partial f^i(x^1, x^2)/\partial x^\beta).

So a surface in three-dimensional Euclidean space is a two-dimensional Riemannian space.

Exercise

The surface of a sphere is a two-dimensional Riemannian space. Find its fundamental metric tensor and its Christoffel symbols.

The surface of a sphere of fixed radius r is given by

y^1 = r \sin x^1 \cos x^2,
y^2 = r \sin x^1 \sin x^2,
y^3 = r \cos x^1.

Therefore the fundamental metric tensor is given by g_{11} = r^2, g_{12} = g_{21} = 0, g_{22} = r^2(\sin x^1)^2. Hence

g^{11} = 1/r^2,  g^{12} = g^{21} = 0,  g^{22} = 1/(r^2(\sin x^1)^2).

The Christoffel symbols are then

\Gamma^{1}_{22} = -\sin x^1 \cos x^1,  \Gamma^{2}_{12} = \Gamma^{2}_{21} = \cot x^1,

and all the other Christoffel symbols of the surface of the sphere are zero.
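The computations in this exercise can be checked mechanically. The following sketch (using the Python library sympy, which is of course not part of the text) builds the metric from the embedding as in 17·17, forms the Christoffel symbols by 17·10, and also evaluates the curvature combination of the next section to show that it does not vanish on the sphere:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
x1, x2 = sp.symbols('x1 x2')   # colatitude and longitude
X = [x1, x2]

# embedding of the sphere, as in the exercise
y = [r*sp.sin(x1)*sp.cos(x2), r*sp.sin(x1)*sp.sin(x2), r*sp.cos(x1)]

# induced metric g_ab = sum_i (dy^i/dx^a)(dy^i/dx^b)
g = sp.Matrix(2, 2, lambda a, b: sum(sp.diff(yi, X[a])*sp.diff(yi, X[b]) for yi in y))
g = sp.simplify(g)
ginv = g.inv()

# Christoffel symbols Gamma^i_ab built from the metric
def gamma(i, a, b):
    return sp.simplify(sum(ginv[i, s]*(sp.diff(g[s, a], X[b])
                                       + sp.diff(g[s, b], X[a])
                                       - sp.diff(g[a, b], X[s]))/2
                           for s in range(2)))

# curvature combination B^i_{s a b} (cf. 17·20 below); nonzero on the sphere
def B(i, s, a, b):
    expr = (sp.diff(gamma(i, a, s), X[b]) - sp.diff(gamma(i, b, s), X[a])
            + sum(gamma(i, m, b)*gamma(m, a, s) - gamma(i, m, a)*gamma(m, b, s)
                  for m in range(2)))
    return sp.simplify(expr)
```

Running `gamma(0, 1, 1)` reproduces \Gamma^{1}_{22} = -\sin x^1 \cos x^1, and `gamma(1, 0, 1)` reproduces \cot x^1, in agreement with the exercise.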
The Riemann-Christoffel Curvature Tensor. It was seen in Chapter 11 that covariant differentiation is a commutative operation in three-dimensional Euclidean space, and by exactly the same type of reasoning this is also true in an n-dimensional Euclidean space. To establish this result explicit use was made of cartesian coordinates. Since such coordinates are in general not available in a Riemannian space, we cannot use that type of proof. In fact, covariant differentiation in a Riemannian space is not in general commutative. We shall find a formula (see 17·19 below) that makes clear the non-commutativity of covariant differentiation in Riemannian spaces.

In obtaining Laplace's equation in curvilinear coordinates for contravariant vector fields in a Euclidean space, we had to calculate the second covariant derivative of a contravariant vector field (see the bracket term in 12·23). The calculation for Riemannian spaces is practically the same, so that we shall write down the second covariant derivative of \xi^i(x^1, x^2, \ldots, x^n) based on the (Riemannian) Christoffel symbols \Gamma^{i}_{\alpha\beta}(x^1, x^2, \ldots, x^n) without giving any more details (again see the bracket term in 12·23). The result is

(17·18)  \xi^i_{,\alpha\beta} = \partial^2\xi^i/\partial x^\alpha \partial x^\beta - \Gamma^{\sigma}_{\alpha\beta}\,\partial\xi^i/\partial x^\sigma + \Gamma^{i}_{\sigma\beta}\,\partial\xi^\sigma/\partial x^\alpha + \Gamma^{i}_{\alpha\sigma}\,\partial\xi^\sigma/\partial x^\beta + ( \partial\Gamma^{i}_{\alpha\sigma}/\partial x^\beta + \Gamma^{i}_{\mu\beta}\Gamma^{\mu}_{\alpha\sigma} - \Gamma^{\mu}_{\alpha\beta}\Gamma^{i}_{\mu\sigma} )\,\xi^\sigma.

From the commutativity of the partial derivatives and the symmetry of the Christoffel symbols \Gamma^{i}_{\alpha\beta} = \Gamma^{i}_{\beta\alpha}, we find

(17·19)  \xi^i_{,\alpha\beta} - \xi^i_{,\beta\alpha} = B^{i}_{\sigma\alpha\beta}\,\xi^\sigma,

where

(17·20)  B^{i}_{\sigma\alpha\beta} = \partial\Gamma^{i}_{\alpha\sigma}/\partial x^\beta - \partial\Gamma^{i}_{\beta\sigma}/\partial x^\alpha + \Gamma^{i}_{\mu\beta}\Gamma^{\mu}_{\alpha\sigma} - \Gamma^{i}_{\mu\alpha}\Gamma^{\mu}_{\beta\sigma}.
To justify the notation B^{i}_{\sigma\alpha\beta} and prove that these are the components of a tensor field of rank four, contravariant of rank one and covariant of rank three, we first note that the left sides of 17·19 are the components of a tensor field of rank three, contravariant of rank one and covariant of rank two. Hence B^{i}_{\sigma\alpha\beta}\,\xi^\sigma is a tensor field of the same type for all contravariant vector fields \xi^i; i.e.,

\bar{B}^{i}_{\sigma\alpha\beta}(\bar{x})\,\bar{\xi}^\sigma(\bar{x}) = B^{\mu}_{\nu\lambda\rho}(x)\,\xi^\nu(x)\,(\partial\bar{x}^i/\partial x^\mu)(\partial x^\lambda/\partial\bar{x}^\alpha)(\partial x^\rho/\partial\bar{x}^\beta).

But, writing \xi^\nu(x) in terms of \bar{\xi}^\sigma(\bar{x}), we evidently have

\bar{B}^{i}_{\sigma\alpha\beta}(\bar{x})\,\bar{\xi}^\sigma(\bar{x}) = B^{\mu}_{\nu\lambda\rho}(x)\,\bar{\xi}^\sigma(\bar{x})\,(\partial x^\nu/\partial\bar{x}^\sigma)(\partial\bar{x}^i/\partial x^\mu)(\partial x^\lambda/\partial\bar{x}^\alpha)(\partial x^\rho/\partial\bar{x}^\beta).

But the \bar{\xi}^\sigma are arbitrary, and hence, equating corresponding coefficients, we obtain the tensor law of transformation for B^{i}_{\sigma\alpha\beta}(x). This tensor field is the famous Riemann-Christoffel curvature tensor; it is not a zero tensor in a general Riemannian space. Hence \xi^i_{,\alpha\beta} \ne \xi^i_{,\beta\alpha} in a Riemannian space with non-vanishing Riemann-Christoffel curvature tensor. But obviously the Riemann-Christoffel curvature tensor is zero in Euclidean spaces. Hence \xi^i_{,\alpha\beta} = \xi^i_{,\beta\alpha} in Euclidean spaces; this checks a result found earlier, in 11·13.

Geodesics.
A straight line is the shortest distance between two points in Euclidean spaces. There are curves in Riemannian spaces that play a role analogous to the straight lines of Euclidean spaces. Such curves are called geodesics. In fact, if a Riemannian space is a Euclidean space, then its geodesics are straight lines. To find the differential equations satisfied by the geodesics of a Riemannian space, we have to get the Euler-Lagrange differential equations for the calculus of variations problem

(17·21)  \int_{t_0}^{t_1} \sqrt{ g_{ij}\,(dx^i/dt)(dx^j/dt) }\;dt = minimum.

It can be shown that the Euler-Lagrange equations for this calculus of variations problem are

(17·22)  d^2x^i/ds^2 + \Gamma^{i}_{\alpha\beta}(x)\,(dx^\alpha/ds)(dx^\beta/ds) = 0,

where s is the arc length and \Gamma^{i}_{\alpha\beta}(x) are the Christoffel symbols of the Riemannian space. In other words, if the coordinates of points on a geodesic are considered as functions x^i(s) of the arc length parameter s, then the n functions x^i(s) satisfy the system 17·22 of n differential equations of the second order. If the Riemannian space is Euclidean and we choose rectangular cartesian coordinates y^i, equations 17·22 reduce to

d^2y^i/ds^2 = 0,

and hence

y^i = \alpha^i s + \beta^i  (\alpha^i and \beta^i are constants),

the parametric equations of straight lines in terms of arc length s.
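As a numerical illustration (a Python/numpy sketch, not from the text), one can integrate the system 17·22 on the unit sphere, whose Christoffel symbols were found in the exercise above. A geodesic started along the equator should remain a great circle, with the longitude advancing by exactly the arc length s:

```python
import numpy as np

def geodesic_rhs(state):
    # state = (x1, x2, v1, v2) on the unit sphere;
    # Gamma^1_22 = -sin(x1)cos(x1), Gamma^2_12 = Gamma^2_21 = cot(x1)
    t, p, vt, vp = state
    at = np.sin(t)*np.cos(t)*vp**2            # = -Gamma^1_22 * vp * vp
    ap = -2.0*(np.cos(t)/np.sin(t))*vt*vp     # = -2 * Gamma^2_12 * vt * vp
    return np.array([vt, vp, at, ap])

def rk4(f, y, h, n):
    # classical fourth-order Runge-Kutta integration
    for _ in range(n):
        k1 = f(y); k2 = f(y + 0.5*h*k1); k3 = f(y + 0.5*h*k2); k4 = f(y + h*k3)
        y = y + (h/6.0)*(k1 + 2*k2 + 2*k3 + k4)
    return y

# start on the equator (x1 = pi/2) moving along it with unit speed
start = np.array([np.pi/2, 0.0, 0.0, 1.0])
end = rk4(geodesic_rhs, start, 1e-3, 2000)   # total arc length s = 2
```

The computed endpoint stays on the equator and has x^2 = s, confirming that the equator is a geodesic of the sphere.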
Equations of Motion of a Dynamical System with n Degrees of Freedom. In classical mechanics, it is postulated that the motion of a conservative dynamical system of n degrees of freedom with no moving constraints is governed by Lagrange's equations of motion²

(17·23)  d/dt\,(\partial L/\partial\dot{q}^i) - \partial L/\partial q^i = 0.

If the kinetic energy T, in terms of the generalized coordinates q^1, q^2, \ldots, q^n and the generalized velocities \dot{q}^i = dq^i(t)/dt, is

T = \tfrac{1}{2}\,g_{ij}(q^1, \ldots, q^n)\,\dot{q}^i\dot{q}^j  (g_{ij} = g_{ji}),

and if the potential energy is V(q^1, q^2, \ldots, q^n), then the kinetic potential or Lagrangean L is given by L = T - V. Now the kinetic energy is positive definite in the velocities \dot{q}^i; i.e., T \ge 0 and T = 0 if and only if all the \dot{q}^i = 0. It can be proved by algebraic reasoning that the determinant g of the g_{ij} is positive, so that we can form the g^{ij} in terms of the g_{ij} exactly as in Riemannian geometry. By direct calculation we find

\partial L/\partial\dot{q}^i = g_{ij}\,\dot{q}^j,

d/dt\,(\partial L/\partial\dot{q}^i) = g_{ij}\,\ddot{q}^j + \tfrac{1}{2}( \partial g_{ij}/\partial q^k + \partial g_{ik}/\partial q^j )\,\dot{q}^j\dot{q}^k  (\ddot{q}^j = d^2q^j/dt^2),

\partial L/\partial q^i = \tfrac{1}{2}( \partial g_{jk}/\partial q^i )\,\dot{q}^j\dot{q}^k - \partial V/\partial q^i.

Hence Lagrange's equations of motion 17·23 can be written in the form

(17·24)  g_{ij}\,\ddot{q}^j + \tfrac{1}{2}( \partial g_{ij}/\partial q^k + \partial g_{ik}/\partial q^j - \partial g_{jk}/\partial q^i )\,\dot{q}^j\dot{q}^k = -\partial V/\partial q^i.

Multiplying corresponding sides of 17·24 by g^{li} and summing on i, we obtain the following form for Lagrange's equations of motion:

(17·25)  \ddot{q}^l + \Gamma^{l}_{jk}(q^1, \ldots, q^n)\,\dot{q}^j\dot{q}^k = -g^{li}\,\partial V/\partial q^i,

where \Gamma^{l}_{jk}(q^1, \ldots, q^n) are the Christoffel symbols based on the g_{ij}(q^1, \ldots, q^n) of the kinetic energy of the dynamical system.
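The reduction from 17·23 to 17·25 can be checked symbolically in a simple case (a Python/sympy sketch, not from the text): a particle in the plane in polar coordinates, with the illustrative potential V = -1/r chosen only to make the check concrete.

```python
import sympy as sp

t = sp.symbols('t')
r = sp.Function('r')(t)
th = sp.Function('theta')(t)

# polar coordinates: T = (1/2)(rdot^2 + r^2 thdot^2), so g_11 = 1, g_22 = r^2;
# V = -1/r is an illustrative potential, not one from the text
T = (sp.diff(r, t)**2 + r**2*sp.diff(th, t)**2)/2
V = -1/r
L = T - V

def lagrange_eq(q):
    # d/dt (dL/dqdot) - dL/dq, as in 17·23
    return sp.diff(sp.diff(L, sp.diff(q, t)), t) - sp.diff(L, q)

# Christoffel form 17·25; for this metric Gamma^1_22 = -r, Gamma^2_12 = 1/r
geo_r = sp.diff(r, t, 2) - r*sp.diff(th, t)**2 + 1/r**2
geo_th = sp.diff(th, t, 2) + 2*sp.diff(r, t)*sp.diff(th, t)/r
```

The Lagrange equation in \theta differs from the Christoffel form only by the factor g_{22} = r^2, exactly the multiplication by g^{li} described above.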
Exercise

A symmetrical gyroscope with a point O fixed on the axis is acted upon by gravity. Let I, I, and J be the principal moments of inertia. Then the kinetic energy is given by

T = \tfrac{1}{2}\,I\,[\dot{\theta}^2 + (\dot{\varphi}\sin\theta)^2] + \tfrac{1}{2}\,J\,(\dot{\psi} + \dot{\varphi}\cos\theta)^2.

Form the equations of motion.

with \partial u_\alpha/\partial x^0 taken from the equations of motion and \partial u_0/\partial x^0 from the equation of continuity. Thus
(18·23)  \partial u_0/\partial x^0 = -u^\beta{}_{,\beta},
         \partial^2 u_\alpha/\partial(x^0)^2 = (1/\nu)\,[\,u^\beta u_{\alpha,\beta} + u_0\,\partial u_\alpha/\partial x^0 + p_{,\alpha}\,],

where u_0 and \partial p/\partial x^0 in the first equation may be expressed in terms of u^\beta by using the last equation. Thus the highest derivatives of all the variables with respect to x^0 have the coefficient unity in these equations. Hence, if p, u_0, u_\alpha, and \partial u_\alpha/\partial x^0 are given as functions of x^1 and x^2 on the surface x^0 = 0, the solution of the problem is uniquely determined. This normal form 18·23 of the system of differential equations, however, is not analytic in the small parameter \nu, the important case in aeronautics, in the neighborhood of \nu = 0, and is consequently inconvenient for the application of the method of successive approximations. We therefore make the transformation of variables

(18·24)  x^0 = \nu^{1/2}\,\zeta,   u_0 = \nu^{1/2}\,w

to bring it into the desired form. We then have

(18·25)  \partial p/\partial\zeta = \varphi_0,
         \partial^2 u_\alpha/\partial\zeta^2 = u^\beta u_{\alpha,\beta} + w\,\partial u_\alpha/\partial\zeta + p_{,\alpha} + \varphi_\alpha,
         u^\beta{}_{,\beta} + \partial w/\partial\zeta = 0,

where \varphi_0 and \varphi_\alpha denote the collected remainder terms.
Let us note that \varphi_0 and \varphi_\alpha are linear in the small parameter \nu^{1/2} through the term in u_0 (cf. 18·22), and also depend on \nu^{1/2} through the geometrical quantities, which are functions of x^0 = \nu^{1/2}\zeta. Indeed, it can be shown² that the surface metric tensor g_{\alpha\beta} is a quadratic function of x^0, while all other geometrical quantities may be expanded as power series of x^0 convergent for x^0 < R_m, R_m being the minimum magnitude of the principal radii of curvature over the surface under consideration. Hence the right-hand sides of the equations 18·25 are Taylor series in \nu^{1/2}, and the solution of 18·25 may be carried out by expanding each of the dependent variables as a power series of \nu^{1/2}, convergent for all finite values of \nu^{1/2}\zeta for which \nu^{1/2}\zeta < R_m. If we try to solve 18·23 by the same type of expansion, either the series are asymptotic, or they may terminate; but in general we cannot find a solution satisfying all the required boundary conditions. In fact, the initial approximation is easily verified to satisfy the non-viscous equation (\nu = 0). The boundary conditions at infinity and the condition u_0 = 0 at x^0 = 0 are then sufficient to determine this approximation completely. Indeed, the boundary conditions at infinity are usually such that the resultant solution is potential. Then the initial approximation is an exact solution of the complete equations 18·23. However, the boundary conditions u_\alpha = 0 at x^0 = 0 cannot be satisfied in general. The effect of viscosity can never be brought into evidence. This shows that the more elaborate treatment described above is absolutely necessary. The non-viscous solution (usually potential), however, serves as a guide for making the exact solutions satisfy the boundary conditions at infinity. This point will be discussed in more detail below.

Let us now proceed with the solution of 18·25 by writing

(18·26)  p = p^{(0)} + \nu^{1/2}\,p^{(1)} + \nu\,p^{(2)} + \cdots,
         u_\alpha = u_\alpha^{(0)} + \nu^{1/2}\,u_\alpha^{(1)} + \nu\,u_\alpha^{(2)} + \cdots,
         w = w^{(0)} + \nu^{1/2}\,w^{(1)} + \nu\,w^{(2)} + \cdots.
Corresponding developments for the geometrical quantities g_{\alpha\beta}, \Gamma^{i}_{\alpha\beta} must also be used. The initial approximation gives

(18·27)  u^\beta u_{\alpha,\beta} + w\,\partial u_\alpha/\partial\zeta = -p_{,\alpha} + \partial^2 u_\alpha/\partial\zeta^2,
         0 = \partial p/\partial\zeta,
         u^\beta{}_{,\beta} + \partial w/\partial\zeta = 0,

where the superscripts of the initial approximation are dropped. In these equations, \zeta is a scalar, and the metric tensor is a_{\alpha\beta}(x^1, x^2), being g_{\alpha\beta}(x^1, x^2, x^0) evaluated at x^0 = 0. The conditions over the surface \zeta = 0 are u_\alpha = 0 and w = 0. The condition at infinite \zeta is set according to the following considerations. For a large but finite value of \zeta, the value of x^0 is still small. Hence, the solution may be expected to pass into the non-viscous solution close to the surface if \zeta is large. Thus, for the initial approximation, we may lay down the conditions

(18·28)  u_\alpha = \bar{u}_\alpha,  p = \bar{p}  for \zeta \to \infty,

where \bar{u}_\alpha and \bar{p} are functions of x^1 and x^2, being the values of u_\alpha and p of the non-viscous solution at x^0 = 0. The initial approximation is then completely determined. If u_\alpha and p differ from \bar{u}_\alpha and \bar{p} by quantities of the order of \nu for \zeta = h, then an approximate solution of 18·12 is usually taken to be given (a) by the non-viscous solution for \zeta > h, and (b) by the solution of 18·27 for \zeta < h. The quantity h is known as the "thickness of the boundary layer" and is arbitrary to a certain extent. For example, we may define h to be given by (say) three times the h of the equation

(18·29)  \int_0^\infty (\bar{u}_\alpha - u_\alpha)\,d\zeta = \bar{u}_\alpha\,h,

which is in general different according to whether \alpha = 1 or 2. This initial approximation is usually known as the boundary-layer theory of Prandtl. Incidentally, we note that p is a function of x^1 and x^2 alone, by the second equation of 18·27. Hence, by 18·28, p = \bar{p}(x^1, x^2), which is known from the non-viscous solution. The first and third equations of 18·27 then serve as three equations for the velocities u_\alpha and w. The higher approximations in 18·26 satisfy certain differential equations obtained together with the derivation of 18·27. The boundary conditions at \zeta = 0 are u_\alpha = 0, w = 0, for any approximation. The boundary conditions for the nth approximation at infinity will be specified by using the nth approximation of the asymptotic solution, which will in turn be determined from certain boundary conditions related to the (n - 1)st approximation of the convergent solution. Since we are never concerned with higher approximations in practice, we shall not go into further details.
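For the special case of a flat plate (not worked out in the text), Prandtl's initial approximation 18·27 reduces, by the classical similarity substitution, to the Blasius equation f''' + \tfrac{1}{2} f f'' = 0 with f(0) = f'(0) = 0 and f'(\infty) = 1. A numpy shooting sketch (an illustration of the boundary-layer equations, not a method from the text):

```python
import numpy as np

# Blasius equation f''' + (1/2) f f'' = 0, written as a first-order system
def rhs(y):
    f, fp, fpp = y
    return np.array([fp, fpp, -0.5*f*fpp])

def shoot(fpp0, eta_max=10.0, n=2000):
    # integrate from the wall with guessed wall curvature f''(0) = fpp0
    h = eta_max/n
    y = np.array([0.0, 0.0, fpp0])
    for _ in range(n):
        k1 = rhs(y); k2 = rhs(y + 0.5*h*k1); k3 = rhs(y + 0.5*h*k2); k4 = rhs(y + h*k3)
        y = y + (h/6)*(k1 + 2*k2 + 2*k3 + k4)
    return y[1]   # f' at the outer edge; should tend to 1

# bisect on f''(0) so that the velocity matches the outer (non-viscous) flow
lo, hi = 0.1, 1.0
for _ in range(60):
    mid = 0.5*(lo + hi)
    if shoot(mid) < 1.0:
        lo = mid
    else:
        hi = mid
fpp0 = 0.5*(lo + hi)
```

The converged wall value f''(0) is the classical Blasius constant, approximately 0.332, which fixes the skin friction on the plate.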
NOTES ON PART I

Chapter 1
1. In the modern quantum mechanics of theoretical physics, matrices with an infinite number of elements as well as with a finite number of elements are used very widely. The elements of these matrices are often complex numbers. The reader is referred to the bibliographical entries under Born, Jordan, and Dirac.
2. For applications of matrices to the social sciences the reader is referred to the references given in a recent paper by Hotelling.
3. We shall deal for the most part with matrices whose elements are real or complex numbers. It is possible, however, to deal with matrices whose elements are themselves matrices. We shall have occasion to use a few such matrices in connection with our discussion of aircraft flutter in Chapter 7.

Chapter 2
1. For the properties of determinants, linear equations, and related questions on the algebra of matrices, see Bôcher's Introduction to Higher Algebra.
2. Cramer's rule for the solution of linear algebraic equations is given in most books on algebra. In our notations it can be stated in the following manner. If the determinant a = |a^i_j| of the n equations a^i_j x^j = b^i in the n unknowns x^1, x^2, \ldots, x^n is not zero, then the equations have a unique solution given by

x^i = A^i/a,

where A^i is the n-rowed determinant obtained from a by replacing the elements a^1_i, a^2_i, \ldots, a^n_i of the ith column by the corresponding elements b^1, b^2, \ldots, b^n.
3. The rule for the multiplication of two determinants takes the following form in our notations. If a = |a^i_j| and b = |b^i_j| are two n-rowed determinants, then the numerical product c = ab is itself an n-rowed determinant with elements c^i_j given by the formula c^i_j = a^i_\alpha b^\alpha_j.
4. It can be shown that S_r, the trace of the matrix A^r, is also equal to the sum of the rth powers of the n characteristic roots of the matrix A.
5. The recurrence formula 2·6 for the coefficients a_1, \ldots, a_n of the characteristic function of a matrix can be derived from Newton's formulas; see Bôcher's Introduction to Higher Algebra, pp. 243-244, for a derivation of Newton's formulas.
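Notes 2 to 4 can be illustrated numerically (a numpy sketch, not part of the text; the matrix and right-hand side are random illustrative data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

# Cramer's rule: x^i = A^i / a, with A^i the determinant obtained from A
# by replacing the i-th column by b
a = np.linalg.det(A)
x = np.empty(n)
for i in range(n):
    Ai = A.copy()
    Ai[:, i] = b
    x[i] = np.linalg.det(Ai)/a

# data for the product rule of determinants and the trace of A^r
B = rng.standard_normal((n, n))
lam = np.linalg.eigvals(A)   # the characteristic roots of A
```

The assertions below confirm Cramer's rule (note 2), det(AB) = det(A)det(B) (note 3), and S_r = sum of the rth powers of the characteristic roots (note 4).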
Chapter 3
1. To prove that the matric exponential e^A is convergent for all square matrices A, let A = |a^i_j| be an n-rowed square matrix, and let V(A) be the greatest of the numerical values of the n^2 numbers a^i_j (the greatest of the moduli of the a^i_j if the a^i_j are complex numbers). Then each element in the matrix A^k will not exceed n^{k-1}V^k in numerical value. Hence each of the n^2 infinite series in e^A will be dominated by the series

1 + V + nV^2/2! + n^2V^3/3! + \cdots = 1 + (e^{nV} - 1)/n.

Hence all the n^2 numerical series in e^A converge. This means that e^A is convergent for all square matrices A.

In the terminology of modern functional analysis and topological spaces, V(A) is called the norm of the matrix A; and the class of n-rowed matrices, with the operations of addition and multiplication of matrices, multiplication by numbers, and convergence of matrices defined by means of the norm V(A) (in other words, V(A) plays a role analogous to the absolute value or modulus of a number in the convergence of numbers), is called a normed linear ring. Other equivalent definitions of the norm of a matrix are possible. For example, V(A) can be taken as

V(A) = \sqrt{ \sum_{i,j} (a^i_j)^2 }

if the elements of the matrix A are real numbers. Whatever suitable definition of a norm is adopted, the norm V(A) of a matrix will have the following properties:

(1) V(A) \ge 0, and = 0 if and only if A is the zero matrix.
(2) V(A + B) \le V(A) + V(B) (triangular inequality).
(3) V(AB) \le V(A)V(B).

From property 3 it follows that V(A^k) \le (V(A))^k, a result that makes obvious the usefulness of the notion of a norm for matrices in the treatment of convergence properties of matrices. The class of matrices discussed above is only one example of a normed linear ring. The first general theory of normed linear rings was initiated in 1932 by Michal and Martin in a paper entitled "Some Expansions in Vector Space," Journal de mathématiques pures et appliquées.

2. The special case of the expansion 3·7 when F(A) is a matric polynomial is known as Sylvester's theorem. If the characteristic equation of a matrix A has multiple roots, then the expansion 3·7 is not valid. However, a more general result can be proved. For the case of matrix polynomials F(A) see the Duncan, Frazer, and Collar book. The more general cases of matric power series expansions are treated briefly in a paper by L. Fantappiè with the aid of the theory of functionals, since the elements F^i_j(A) in the F(A) are functionals of the numerical function F(\lambda). See Volterra's book on functionals (Blackie, 1930).

3. Another equivalent form for the n matrices G_1, G_2, \ldots, G_n is

G_i = [ M(\lambda) / (df(\lambda)/d\lambda) ]_{\lambda = \lambda_i},

where the \lambda_i are the characteristic roots of the matrix A, f(\lambda) is the characteristic determinant of A, and M(\lambda) = |M^i_j(\lambda)| is a matrix whose element M^i_j(\lambda) is the cofactor of \lambda\delta^j_i - a^j_i in the characteristic determinant f(\lambda). Notice carefully the position of the indices i and j.
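The domination bound and the convergence of e^A in note 1 are easy to exhibit numerically (a numpy sketch, not part of the text; the matrices are random illustrative data):

```python
import numpy as np

def V(A):
    # the "greatest modulus of an element" norm used in the note
    return np.max(np.abs(A))

def expm_series(A, terms):
    # partial sums of the exponential series I + A + A^2/2! + ...
    S = np.eye(A.shape[0])
    T = np.eye(A.shape[0])
    for k in range(1, terms):
        T = T @ A / k
        S = S + T
    return S

rng = np.random.default_rng(1)
n = 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

# each entry of A^k is bounded by n^{k-1} V(A)^k, the bound that dominates e^A
k = 5
bound = n**(k - 1) * V(A)**k

# the partial sums have stabilized: 60 terms agree with 120 terms
E60 = expm_series(A, 60)
E120 = expm_series(A, 120)
```

The first assertion below is the entry-wise bound from note 1; the second is the triangular inequality (2); the third shows the series for e^A has converged.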
Chapter 4
1. A good approximation to the solution X(t) = e^{(t-t_0)A}X_0 of dX(t)/dt = AX(t) can be obtained by taking

I + (t - t_0)A + (t - t_0)^2A^2/2! + \cdots + (t - t_0)^nA^n/n!

in place of the infinite expansion for e^{(t-t_0)A}. For many practical purposes n = 2 would be large enough to give a good approximate solution. The approximate solution can then be written

X(t) = X_0 + (t - t_0)AX_0 + \frac{(t - t_0)^2}{2!}A^2X_0,

where X_0 is the column matrix for t = t_0 initially given.
2. If the solution of the matric differential equation

(1)  dX(t)/dt = AX(t)  (X(t_0) = X_0)

has been found, then the solution of

(2)  dX(t)/dt = (A + bI)X(t)  (X(t_0) = X_0)

can be written down immediately. In fact, from the second property of the matric exponential given in Chapter 2, we see that e^{A+bI} = e^b e^A, where e^b is the numerical exponential. Hence by formula 4·3 we see that the solution of equation 2 is obtained by a mere multiplication by e^{b(t-t_0)} of the solution of equation 1.

Chapter 5
1. The reader is referred to Whittaker's Analytical Dynamics for a treatment of Lagrange's differential equations of motion of particle dynamics. For some engineering applications, the reader is referred to Mathematical Methods in Engineering by Kármán and Biot.
2. Consult the references in the above note.

Chapter 6
1. If only the fundamental frequency is wanted but not the corresponding amplitudes, then the application of Rayleigh's principle may be preferable. See Elementary Matrices by Frazer, Duncan, and Collar, pp. 310, 299-301.
2. A special case of Sylvester's theorem is what is actually used; see expansion 3·9.
NOTES ON PART II

Chapter 8
1. Although little use has been made of the tensor calculus in plastic deformations, one would suspect that a thoroughgoing application of the tensor calculus to the fundamentals of plastic deformation theory would prove fruitful.
2. For some elementary applications of the tensor calculus to dynamic meteorology, the reader is referred to Ertel's monograph.
3. We shall deal briefly with Riemannian spaces (certain curved spaces) and their applications to classical dynamical systems with a finite number of degrees of freedom (see Chapter 17), and to fluid mechanics (see Chapter 18).
4. A discussion of the fundamentals of coordinates, coordinate systems, and the transformation of coordinates in the various spaces, including Euclidean spaces, is out of the question here. The readers who are interested in modern differential geometry and topology will find ample references in the bibliography under the entries for Veblen, Whitehead, Thomas, and Michal.
Chapter 10
1. Some writers, especially those dealing with physical applications, like to think of the contravariant and covariant components of one object called a vector. For example, if \xi^i are the contravariant components of a velocity vector field, then \xi^i and g_{ij}\xi^j can be considered the contravariant vector and covariant vector "representations" respectively of the same physical object called the "velocity vector field." This point of view, however, is untenable in spaces without a metric g_{ij}.
2. The importance of the Euclidean Christoffel symbols for Euclidean spaces is, even now, not very well known.
3. Since

g_{\alpha\beta}(x^1, x^2, x^3) = \sum_{i=1}^{3} (\partial y^i/\partial x^\alpha)(\partial y^i/\partial x^\beta)  (y^1, y^2, y^3 are rectangular coordinates),

we obtain, from the rule for the multiplication of two determinants, the result that g = |g_{\alpha\beta}| = J^2, where J = |\partial y^i/\partial x^\alpha| is the Jacobian determinant, or the functional determinant, of the transformation of coordinates to rectangular coordinates y^i from general coordinates x^i. Hence J \ne 0 since we deal with transformations of coordinates that have inverses. This means that the determinant g \ne 0 for all our "admissible" transformations of coordinates.
4. The following steps establish the law of transformation 10·29 of the Euclidean Christoffel symbols; exactly the same method establishes the corresponding law of transformation for the Riemannian Christoffel symbols to be discussed in Chapter 17. Since the g_{\alpha\beta} are the components of the Euclidean metric tensor, we have, under a transformation of coordinates from coordinates x^i to coordinates \bar{x}^i,

(a)  \bar{g}_{\alpha\beta}(\bar{x}^1, \bar{x}^2, \bar{x}^3) = g_{\mu\nu}(x^1, x^2, x^3)\,(\partial x^\mu/\partial\bar{x}^\alpha)(\partial x^\nu/\partial\bar{x}^\beta).

Differentiating corresponding sides of (a), we obtain (considering the \bar{x}^j as independent variables)

(b)  \partial\bar{g}_{\alpha\beta}/\partial\bar{x}^\sigma = (\partial g_{\mu\nu}/\partial x^\rho)(\partial x^\mu/\partial\bar{x}^\alpha)(\partial x^\nu/\partial\bar{x}^\beta)(\partial x^\rho/\partial\bar{x}^\sigma) + [\,g_{\mu\nu}(\partial^2 x^\mu/\partial\bar{x}^\alpha\partial\bar{x}^\sigma)(\partial x^\nu/\partial\bar{x}^\beta)\,] + [\,g_{\mu\nu}(\partial x^\mu/\partial\bar{x}^\alpha)(\partial^2 x^\nu/\partial\bar{x}^\beta\partial\bar{x}^\sigma)\,].

Interchange \alpha and \sigma in (b) and, noting that \partial^2 x^\mu/\partial\bar{x}^\alpha\partial\bar{x}^\sigma = \partial^2 x^\mu/\partial\bar{x}^\sigma\partial\bar{x}^\alpha, obtain

(c)  \partial\bar{g}_{\sigma\beta}/\partial\bar{x}^\alpha = (\partial g_{\mu\nu}/\partial x^\rho)(\partial x^\mu/\partial\bar{x}^\sigma)(\partial x^\nu/\partial\bar{x}^\beta)(\partial x^\rho/\partial\bar{x}^\alpha) + [\,g_{\mu\nu}(\partial^2 x^\mu/\partial\bar{x}^\sigma\partial\bar{x}^\alpha)(\partial x^\nu/\partial\bar{x}^\beta)\,] + [\,g_{\mu\nu}(\partial x^\mu/\partial\bar{x}^\sigma)(\partial^2 x^\nu/\partial\bar{x}^\beta\partial\bar{x}^\alpha)\,].

Interchange \beta and \sigma in (b) and, noting that \partial^2 x^\nu/\partial\bar{x}^\beta\partial\bar{x}^\sigma = \partial^2 x^\nu/\partial\bar{x}^\sigma\partial\bar{x}^\beta, obtain

(d)  \partial\bar{g}_{\alpha\sigma}/\partial\bar{x}^\beta = (\partial g_{\mu\nu}/\partial x^\rho)(\partial x^\mu/\partial\bar{x}^\alpha)(\partial x^\nu/\partial\bar{x}^\sigma)(\partial x^\rho/\partial\bar{x}^\beta) + [\,g_{\mu\nu}(\partial^2 x^\mu/\partial\bar{x}^\alpha\partial\bar{x}^\beta)(\partial x^\nu/\partial\bar{x}^\sigma)\,] + [\,g_{\mu\nu}(\partial x^\mu/\partial\bar{x}^\alpha)(\partial^2 x^\nu/\partial\bar{x}^\sigma\partial\bar{x}^\beta)\,].

Add corresponding sides of (c) and (d) and subtract corresponding sides of (b). The terms enclosed in brackets cancel in pairs in the additions and subtractions, except for the two second-derivative terms in \partial^2 x/\partial\bar{x}^\alpha\partial\bar{x}^\beta, which, on interchanging the dummy indices \mu and \nu in one of them and recalling that g_{\mu\nu} = g_{\nu\mu}, combine into a single term. Taking one half of both sides, we get

(e)  \tfrac{1}{2}( \partial\bar{g}_{\sigma\alpha}/\partial\bar{x}^\beta + \partial\bar{g}_{\sigma\beta}/\partial\bar{x}^\alpha - \partial\bar{g}_{\alpha\beta}/\partial\bar{x}^\sigma ) = \tfrac{1}{2}( \partial g_{\rho\mu}/\partial x^\nu + \partial g_{\rho\nu}/\partial x^\mu - \partial g_{\mu\nu}/\partial x^\rho )(\partial x^\mu/\partial\bar{x}^\alpha)(\partial x^\nu/\partial\bar{x}^\beta)(\partial x^\rho/\partial\bar{x}^\sigma) + g_{\mu\nu}(\partial^2 x^\mu/\partial\bar{x}^\alpha\partial\bar{x}^\beta)(\partial x^\nu/\partial\bar{x}^\sigma).

Now

(g)  \bar{g}^{i\sigma} = g^{\lambda\rho}\,(\partial\bar{x}^i/\partial x^\lambda)(\partial\bar{x}^\sigma/\partial x^\rho).

Multiplying corresponding sides of (e) and (g), summing on \sigma, and using the identities

(\partial x^\rho/\partial\bar{x}^\sigma)(\partial\bar{x}^\sigma/\partial x^\pi) = \delta^\rho_\pi,  g^{\lambda\rho}g_{\mu\rho} = \delta^\lambda_\mu,

we readily obtain

\bar{\Gamma}^{i}_{\alpha\beta}(\bar{x}) = \Gamma^{\lambda}_{\mu\nu}(x)\,(\partial\bar{x}^i/\partial x^\lambda)(\partial x^\mu/\partial\bar{x}^\alpha)(\partial x^\nu/\partial\bar{x}^\beta) + (\partial^2 x^\mu/\partial\bar{x}^\alpha\partial\bar{x}^\beta)(\partial\bar{x}^i/\partial x^\mu),

from which the desired transformation law 10·29 follows immediately on recalling the definitions of g^{ij} and \Gamma^{i}_{\alpha\beta}.
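The two-dimensional case of this law can be verified symbolically (a Python/sympy sketch, not part of the text). In rectangular coordinates the Euclidean Christoffel symbols vanish, so the transformation law reduces to its inhomogeneous second-derivative term, which must reproduce the polar-coordinate symbols computed directly from the metric:

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
X = [r, th]
y = [r*sp.cos(th), r*sp.sin(th)]   # rectangular coordinates from polar

# metric in polar coordinates and its Christoffel symbols (direct computation)
g = sp.Matrix(2, 2, lambda a, b: sum(sp.diff(yi, X[a])*sp.diff(yi, X[b]) for yi in y))
ginv = g.inv()
def gamma(i, a, b):
    return sp.simplify(sum(ginv[i, s]*(sp.diff(g[s, a], X[b])
                                       + sp.diff(g[s, b], X[a])
                                       - sp.diff(g[a, b], X[s]))/2
                           for s in range(2)))

# transformation law with vanishing rectangular symbols: only the
# second-derivative term survives,
# Gamma^i_ab = (d^2 y^mu / dx^a dx^b)(dx^i / dy^mu)
J = sp.Matrix(2, 2, lambda m, a: sp.diff(y[m], X[a]))   # dy^mu/dx^a
Jinv = J.inv()                                          # dx^i/dy^mu
def gamma_law(i, a, b):
    return sp.simplify(sum(sp.diff(y[m], X[a], X[b])*Jinv[i, m] for m in range(2)))
```

Both routes give \Gamma^{r}_{\theta\theta} = -r and \Gamma^{\theta}_{r\theta} = 1/r, and nothing else; the nonzero result from a vanishing starting point shows that the Christoffel symbols are not tensor components.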
Chapter 11
1. Formula 11·12 for the covariant derivative of a tensor field can be established very quickly with the aid of the normal coordinate methods of modern differential geometry. See the two Michal and Thomas 1927 papers.
2. In Chapter 17, we shall see that the answer is in general in the negative for the more general Riemannian spaces.
3. The operation of putting a covariant index equal to a contravariant index in a tensor and summing over that common index is called contraction. A contraction reduces the rank of a tensor by two: by one contravariant index and by one covariant index.
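Numerically, a contraction is just a summation over one upper and one lower index; a small numpy sketch (not part of the text, with random illustrative components):

```python
import numpy as np

rng = np.random.default_rng(2)
# components of a tensor of rank three, contravariant of rank one (index i)
# and covariant of rank two (indices a, b)
T = rng.standard_normal((3, 3, 3))

# contract i with a: the rank drops from three to one
C = np.einsum('iib->b', T)
```

The result `C` has a single free index, confirming that one contraction removes one contravariant and one covariant index.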
Chapter 13
1. The Navier-Stokes differential equations of hydrodynamics are discussed in Lamb's Hydrodynamics.
2. There are two viewpoints in hydrodynamics: one is the Eulerian point of view in terms of the Eulerian variables; the other is the Lagrangean point of view in terms of the Lagrangean variables. For a non-viscous fluid, the Eulerian hydrodynamical equations are the equations of motion of the fluid from the Eulerian point of view, and the Lagrangean hydrodynamical equations are the equations of motion of the fluid from the Lagrangean point of view. The Eulerian hydrodynamical equations in rectangular coordinates y^i are obtained by putting \mu = 0 in the Navier-Stokes equations 13·2.

Eulerian Hydrodynamical Equations.

\partial u^i/\partial t = -u^\alpha\,\partial u^i/\partial y^\alpha - (1/\rho)\,\partial p/\partial y^i + X^i,
\partial\rho/\partial t + \partial(\rho u^\alpha)/\partial y^\alpha = 0.

If the a^i are the rectangular coordinates of a fluid particle in the initial state of the fluid, and if the y^i(a^1, a^2, a^3, t) are the coordinates of the particle at time t, then the Lagrangean hydrodynamical equations are

\sum_{i=1}^{3} ( \partial^2 y^i/\partial t^2 - X^i )\,\partial y^i/\partial a^j + (1/\rho)\,\partial p/\partial a^j = 0  (j = 1, 2, 3),

\rho(y^1, y^2, y^3)\,|\partial y^i/\partial a^j| = \rho_0(a^1, a^2, a^3),

where |\partial y^i/\partial a^j| is the three-rowed functional determinant of the y^i with respect to the a^j. For a treatment of classical hydrodynamics, including treatments of the Eulerian differential equations and the Lagrangean differential equations, the reader is referred to Lamb's Hydrodynamics and to Webster's Dynamics; cf. the references at the end of Part II. Ertel's monograph on dynamic meteorology has some interesting remarks.
3. To obtain expansion 13·6 for the divergence u^\alpha{}_{,\alpha} we can proceed as follows. Since g, the determinant of the Euclidean metric tensor g_{ij}, is a relative scalar of weight two, it can be shown by the usual methods of obtaining covariant derivatives of absolute tensors that the covariant derivative g_{,i} is given by

g_{,i} = \partial g/\partial x^i - 2g\,\Gamma^{\alpha}_{\alpha i}.

But, in rectangular coordinates, g is unity and the Euclidean Christoffel symbols are zero. Hence g_{,i} is zero in rectangular coordinates and consequently g_{,i} = 0 in all coordinates. This means that

\partial(\log\sqrt{g})/\partial x^i = \Gamma^{\alpha}_{\alpha i}.

But the divergence u^\alpha{}_{,\alpha} of u^i is by definition

u^\alpha{}_{,\alpha} = \partial u^\alpha/\partial x^\alpha + \Gamma^{\alpha}_{\alpha i}\,u^i,

so that by the above result we find the following equivalent expression for the divergence u^\alpha{}_{,\alpha}:

u^\alpha{}_{,\alpha} = (1/\sqrt{g})\,\partial(\sqrt{g}\,u^\alpha)/\partial x^\alpha.

If in particular u^i = g^{i\beta}\,\partial\psi/\partial x^\beta, where \psi(x^1, x^2, x^3) is a scalar field, we see that for this u^i

u^\alpha{}_{,\alpha} = (1/\sqrt{g})\,\partial(\sqrt{g}\,g^{\alpha\beta}\,\partial\psi/\partial x^\beta)/\partial x^\alpha.

But in rectangular cartesian coordinates g = 1, g^{\alpha\beta} = \delta^{\alpha\beta}, and so this scalar u^\alpha{}_{,\alpha} reduces to the Laplacean in rectangular cartesian coordinates. Hence Laplace's equation in general coordinates x^i is given by

(1/\sqrt{g})\,\partial(\sqrt{g}\,g^{\alpha\beta}\,\partial\psi/\partial x^\beta)/\partial x^\alpha = 0,

when the unknown \psi(x^1, x^2, x^3) is a scalar field.
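The final formula can be checked symbolically (a Python/sympy sketch, not part of the text; spherical coordinates are an assumption chosen only for the check):

```python
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)
X = [r, th, ph]
# spherical coordinates taken as the "general coordinates" x^i
y = [r*sp.sin(th)*sp.cos(ph), r*sp.sin(th)*sp.sin(ph), r*sp.cos(th)]

g = sp.Matrix(3, 3, lambda a, b: sum(sp.diff(yi, X[a])*sp.diff(yi, X[b]) for yi in y))
g = sp.simplify(g)
ginv = g.inv()
# sqrt(g) = r^2 sin(theta) on 0 < theta < pi (written explicitly to fix the sign)
sqrtg = r**2*sp.sin(th)

def laplacian(psi):
    # (1/sqrt(g)) d/dx^a ( sqrt(g) g^{ab} d psi/dx^b )
    return sp.simplify(sum(sp.diff(sqrtg*ginv[a, b]*sp.diff(psi, X[b]), X[a])
                           for a in range(3) for b in range(3))/sqrtg)
```

As expected, the formula gives \Delta(r^2) = 6 and \Delta(1/r) = 0 away from the origin, the familiar cartesian results.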
Chapter 14
1. The fundamentals of a finite elastio deformation theory are not new. Kirchholl in 1852 made the first systematio study, and E. and F. Cosserat in 1896 made an extensive investigation of the subject. Ricci and Levi-Civita in 1900 made brief but important contributions to the applications of the tensor calculus to elasticity theory. LOOn Brillouin in 1924 simplified and recast Cosserat's treatment with the aid of the tensor caloulus, and in 1937 F. D. Murnaghan, among several other authors, made contributions to the tensor theoretio treatment of elasticity theory. Chapter 16
1. One can consider strain ditlerential invariants of order T, i.e., scalar fields that retain their forms 88 functions. of the metrio tensor g..", the strain tensor ...",
118
NOTES ON PART II
mad Ihe deritJati.fJea oJ fall up to order T. Silice successive covariant dift'erentiation reduces to a corresponding order of partial dift'erentiation In cartesian coordinates, a strain dift'erential invariant can be written exclusively in terms of tensor fields by merely replaciflg the derivatives of ~ by corresponding covariant derivatives. The Michal-Thomas methods (see the 1927 papers of these authors) can be used to carry on some interesting researches on strain dift'erential invariants. Strain difterential tensors can also be considered. Examples o~ strain ~erential invariants of order one are
$$H = g^{\alpha\beta}\,g^{\gamma\delta}\,g^{\lambda\mu}\,\epsilon_{\alpha\gamma;\lambda}\,\epsilon_{\beta\delta;\mu}$$

and

$$g^{\alpha\beta}\,\frac{\partial I_1}{\partial x^\alpha}\,\frac{\partial I_1}{\partial x^\beta},$$

where $\epsilon_{\alpha\beta;\gamma}$ is the first covariant derivative of the strain tensor $\epsilon_{\alpha\beta}$, and where $I_1$, $I_2$, $I_3$ are the three fundamental strain invariants. The question arises whether successive covariant differentiation of $I_1$, $I_2$, $I_3$ and combination with $g^{\alpha\beta}$ would yield all the fundamental strain differential invariants. Clearly there exist no strain differential invariants for homogeneous strains. Note that the vanishing of $H$ is the necessary and sufficient condition for a homogeneous strain.
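In rectangular cartesian coordinates $g_{\alpha\beta} = \delta_{\alpha\beta}$ and covariant derivatives reduce to partial derivatives, so $H$ becomes the sum of the squared partial derivatives of the strain components, which vanishes exactly when the strain is constant. A small symbolic sketch, not in the original text (Python with sympy, illustrative strain fields assumed):

```python
import sympy as sp

x1, x2, x3 = X = sp.symbols('x1 x2 x3')

def H(eps):
    """H = g^{ab} g^{cd} g^{lm} eps_{ac;l} eps_{bd;m} in cartesian coordinates,
    i.e. the sum of squared partial derivatives of the strain components."""
    return sp.simplify(sum(sp.diff(eps[a, c], X[l])**2
                           for a in range(3) for c in range(3) for l in range(3)))

# A homogeneous (constant) strain: H vanishes.
eps_const = sp.Matrix([[2, 1, 0], [1, 3, 0], [0, 0, 1]]) / 100
assert H(eps_const) == 0

# A strain varying with position: H is a nonzero invariant.
eps_var = eps_const + sp.Matrix([[x1, 0, 0], [0, 0, 0], [0, 0, 0]]) / 10
assert H(eps_var) != 0
```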
2. The theory of complete systems of partial differential equations is discussed in Hedrick's translation of Goursat's Cours d'analyse, Vol. II, part II.
Chapter 17

1. For an account of Riemannian geometry, see Eisenhart's Riemannian Geometry. This reference, of course, treats the (classical) finite dimensional Riemannian geometries. Infinite dimensional and dimensionless "Riemannian" geometries were first studied by A. D. Michal (see paper 2 under Michal in the references for Part II). The applications to vibrations of elastic media are now being studied (see papers 4, 5, and 6 under Michal in the references).
2. Several engineering applications of Lagrange's equations of motion are to be found in the Kármán and Biot book.
3. We have seen that the Lagrangean equations of motion for a conservative dynamical system with no constraints and n degrees of freedom were
$$\ddot q^i + \Gamma^i_{jk}\,\dot q^j\,\dot q^k = -g^{ij}\,\frac{\partial V}{\partial q^j}. \tag{a}$$

This means that the dynamical trajectories can be considered as curves in an $n$-dimensional Riemannian space whose element of arc length $ds$ is given by

$$ds^2 = g_{ij}(q^1, \ldots, q^n)\,dq^i\,dq^j, \tag{b}$$
where the $g_{ij}$ are the functions occurring in the kinetic energy $T = \tfrac{1}{2}\,g_{ij}\,\dot q^i \dot q^j$. The differential equations of the dynamical curves are the second-order differential equations (a). These curves are not, in general, geodesics in the Riemannian space with arc lengths $ds$ given by formula (b). The question then arises whether it is possible to define a Riemannian space whose geodesics are some of the curves whose differential equations are Lagrange's equations of motion (a) for the given $n$-degree dynamical system. We shall show briefly that it is possible to define such a Riemannian space. To do this we must first show that the Lagrangean equations of motion have the energy integral, i.e., the sum of the kinetic and potential energies is a constant along any chosen dynamical trajectory. In fact,

$$\frac{dL}{dt} = \frac{\partial L}{\partial q^i}\,\dot q^i + \frac{\partial L}{\partial \dot q^i}\,\ddot q^i.$$

Hence on using Lagrange's equations of motion,

$$\frac{dL}{dt} = \frac{d}{dt}\Big(\frac{\partial L}{\partial \dot q^i}\Big)\dot q^i + \frac{\partial L}{\partial \dot q^i}\,\ddot q^i = \frac{d}{dt}\Big(\frac{\partial L}{\partial \dot q^i}\,\dot q^i\Big) = \frac{d}{dt}\,(2T).$$
We have then immediately $T + V = C$, a constant, since the kinetic potential $L = T - V$. Consider now the dynamical curves that correspond to any chosen energy constant $C$. We shall show that these particular dynamical curves are the geodesics of the $n$-dimensional Riemannian space whose element of arc length $ds$ is given by

$$ds^2 = 2(C - V)\,g_{ij}(q^1, \ldots, q^n)\,dq^i\,dq^j, \tag{c}$$
where, as in (b), the $g_{ij}$ are the coefficients in the kinetic energy $T$. For convenience in computation, let us define $\lambda = 2(C - V)$ and $a_{ij} = \lambda\,g_{ij}$, so that (c) can be written

$$ds^2 = a_{ij}\,dq^i\,dq^j. \tag{d}$$
By definition

$$a^{ij} = \frac{\text{cofactor of } a_{ji} \text{ in } a}{a},$$

where $a$ is the determinant of the $a_{ij}$. Hence, since $\lambda^{n-1}$ factors out in the numerator and $\lambda^{n}$ in the denominator of $a^{ij}$, we see that

$$a^{ij} = \frac{g^{ij}}{\lambda}.$$
Let ${}^*\Gamma^i_{jk}$ be the Christoffel symbols based on the metric tensor $a_{ij}$. Clearly

$${}^*\Gamma^i_{jk} = \frac{1}{2\lambda}\,g^{il}\Big(\frac{\partial(\lambda g_{lj})}{\partial q^k} + \frac{\partial(\lambda g_{lk})}{\partial q^j} - \frac{\partial(\lambda g_{jk})}{\partial q^l}\Big) \tag{e}$$
$$\qquad = \Gamma^i_{jk} + \frac{1}{2\lambda}\Big(\delta^i_j\,\frac{\partial\lambda}{\partial q^k} + \delta^i_k\,\frac{\partial\lambda}{\partial q^j} - g_{jk}\,g^{il}\,\frac{\partial\lambda}{\partial q^l}\Big).$$
By definition $ds^2 = 2(C - V)\,g_{ij}\,dq^i\,dq^j$, and hence along a dynamical trajectory with energy constant $C$ we have

$$\Big(\frac{ds}{dt}\Big)^2 = \big[2(C - V)\big]^2,$$

so that

$$\frac{dt}{ds} = \lambda^{-1} \tag{f}$$
along a dynamical trajectory with energy constant $C$. By elementary calculus we have therefore

$$\frac{dq^i}{ds} = \lambda^{-1}\,\frac{dq^i}{dt}, \qquad \frac{d^2 q^i}{ds^2} = \lambda^{-2}\,\frac{d^2 q^i}{dt^2} - \lambda^{-3}\,\frac{d\lambda}{dt}\,\frac{dq^i}{dt}. \tag{g}$$
The differential equations for the geodesics of the Riemannian space whose $ds$ is given by (c) are

$$\frac{d^2 q^i}{ds^2} + {}^*\Gamma^i_{jk}\,\frac{dq^j}{ds}\,\frac{dq^k}{ds} = 0. \tag{h}$$

On employing results (e), (f), and (g) in (h), we get after obvious simplifications

$$\frac{d^2 q^i}{dt^2} + \Gamma^i_{jk}\,\frac{dq^j}{dt}\,\frac{dq^k}{dt} = \frac{1}{2\lambda}\,g^{il}\,\frac{\partial\lambda}{\partial q^l}\,g_{jk}\,\frac{dq^j}{dt}\,\frac{dq^k}{dt}. \tag{i}$$
But

$$g_{jk}\,\frac{dq^j}{dt}\,\frac{dq^k}{dt} = \lambda$$

along a dynamical trajectory with energy constant $C$. Hence equations (i) reduce to

$$\frac{d^2 q^i}{dt^2} + \Gamma^i_{jk}\,\frac{dq^j}{dt}\,\frac{dq^k}{dt} = -g^{il}\,\frac{\partial V}{\partial q^l}. \tag{j}$$
But these differential equations are another form of the Lagrangean differential equations of motion for our dynamical system. We have thus proved that the geodesics of the Riemannian space with a $ds$ given by (c) are dynamical trajectories of the dynamical system with kinetic energy $T = \tfrac{1}{2}\,g_{ij}\,\dot q^i \dot q^j$ and potential energy $V$. By retracing our steps of proof, we can show that any dynamical trajectory with energy constant $C$, i.e., any curve that satisfies (j) with energy constant $C$, can be considered a geodesic in the Riemannian space with an element of arc length $ds$ given in (c).

Illustrative Example of a Shaft Carrying Four Disks.
A shaft is fixed at one end and carries four disks at a distance $l$ apart. If $I$ is the moment of inertia of each disk and $q^1$, $q^2$, $q^3$, $q^4$ the respective angular deflections of the four disks, then, if the shaft has a uniform torsional stiffness $\tau$, the kinetic and potential energies are given respectively by
$$T = \frac{I}{2}\Big[\Big(\frac{dq^1}{dt}\Big)^2 + \Big(\frac{dq^2}{dt}\Big)^2 + \Big(\frac{dq^3}{dt}\Big)^2 + \Big(\frac{dq^4}{dt}\Big)^2\Big]$$

and

$$V = \frac{\tau}{2l}\Big[(q^1)^2 + (q^2 - q^1)^2 + (q^3 - q^2)^2 + (q^4 - q^3)^2\Big].$$
This is a conservative dynamical system of four degrees of freedom with no moving constraints. Hence the dynamical trajectories with total energy constant $C$ can be represented as the geodesics of the four-dimensional Riemannian space whose element of arc length $ds$ is given by

$$ds^2 = F(q^1, q^2, q^3, q^4)\big[(dq^1)^2 + (dq^2)^2 + (dq^3)^2 + (dq^4)^2\big],$$

where

$$F(q^1, q^2, q^3, q^4) = 2I\Big\{C - \frac{\tau}{2l}\big[(q^1)^2 + (q^2 - q^1)^2 + (q^3 - q^2)^2 + (q^4 - q^3)^2\big]\Big\}.$$

Numerous other engineering examples can be given, some simpler and some
more sophisticated. For example, if the above shaft carries only two disks, the dynamical trajectories with total energy constant $C$ can be represented as the geodesics of the surface whose $ds$ is given by

$$ds^2 = 2I\Big\{C - \frac{\tau}{2l}\big[(q^1)^2 + (q^2 - q^1)^2\big]\Big\}\big[(dq^1)^2 + (dq^2)^2\big].$$
A more sophisticated example is given by the symmetrical gyroscope; see the exercise at the end of Chapter 17. Here the dynamical trajectories with total energy constant $C$ can be represented as the geodesics of the three-dimensional Riemannian space whose element of arc length $ds$ is given by

$$ds^2 = 2\big[C - Mgh\cos q^1\big]\big[I(dq^1)^2 + (I\sin^2 q^1 + J\cos^2 q^1)(dq^2)^2 + 2J\cos q^1\,dq^2\,dq^3 + J(dq^3)^2\big].$$
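The geodesic representation rests on the energy integral $T + V = C$. A quick numerical sketch, not in the original text, integrates the two-disk shaft above and checks that the energy stays at its initial value $C$; the values $I = \tau = l = 1$ and the initial state are illustrative assumptions:

```python
I, tau, l = 1.0, 1.0, 1.0   # illustrative disk inertia, stiffness, spacing

def accel(q1, q2):
    # Lagrange's equations for T = (I/2)(q1'^2 + q2'^2),
    # V = (tau/2l)[q1^2 + (q2 - q1)^2]:  I q'' = -dV/dq
    return (-(tau / l) * (q1 - (q2 - q1)) / I,
            -(tau / l) * (q2 - q1) / I)

def energy(q1, q2, v1, v2):
    return 0.5 * I * (v1**2 + v2**2) + (tau / (2 * l)) * (q1**2 + (q2 - q1)**2)

# velocity-Verlet integration from an arbitrary initial state
q1, q2, v1, v2 = 0.3, -0.1, 0.0, 0.2
C0 = energy(q1, q2, v1, v2)
dt = 1e-3
for _ in range(20000):
    a1, a2 = accel(q1, q2)
    v1 += 0.5 * dt * a1; v2 += 0.5 * dt * a2
    q1 += dt * v1;       q2 += dt * v2
    a1, a2 = accel(q1, q2)
    v1 += 0.5 * dt * a1; v2 += 0.5 * dt * a2

assert abs(energy(q1, q2, v1, v2) - C0) < 1e-5   # T + V stays at the constant C
```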
Chapter 18
1. The conditions for an incompressible fluid and the continuity equations for a fluid flow state the invariance of two integrals: the integral for volume and the integral for mass, respectively. In other words, here we have two important examples of integral invariants. For the theory of integral invariants and its modern generalizations, the reader is referred to paper 1 under Michal in the references. There the reader will find ample references to the earlier work on integral invariants by H. Poincaré, S. Lie, E. Cartan, and E. Goursat.
2. We saw in Chapter 17 that the Riemann-Christoffel curvature tensor $R^i_{jkl}$ is a zero tensor in any Euclidean space and hence in our three-dimensional Euclidean space. Define the tensor (field) $R_{ijkl}$ by

$$R_{ijkl} = g_{im}\,R^m_{jkl}. \tag{a}$$
It is evident that

$$R_{ijkl} = 0 \tag{b}$$

holds throughout our three-dimensional Euclidean space. It can be shown that there are only six independent equations in (b). Three of them are included in

$$R_{3\alpha 3\beta} = 0; \tag{c}$$

two of them are included in

$$R_{\alpha\beta\gamma 3} = 0; \tag{d}$$

and the sixth one is given by

$$R_{1212} = 0. \tag{e}$$
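The count of six follows from the algebraic symmetries $R_{ijkl} = -R_{jikl} = -R_{ijlk} = R_{klij}$: in three dimensions each antisymmetric index pair ranges over three values, giving $3 \cdot 4 / 2 = 6$ unordered pairs of pairs. A quick enumeration (a sketch, not in the original text):

```python
from itertools import product

def canonical(i, j, k, l):
    """Reduce (i,j,k,l) by the symmetries R_ijkl = -R_jikl = -R_ijlk = R_klij
    to one representative per independent component (signs ignored)."""
    a, b = (min(i, j), max(i, j)), (min(k, l), max(k, l))
    return (a, b) if a <= b else (b, a)

# Components with i == j or k == l vanish identically by antisymmetry.
independent = {canonical(i, j, k, l)
               for i, j, k, l in product(range(1, 4), repeat=4)
               if i != j and k != l}
assert len(independent) == 6
```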
By straightforward calculation it can be shown that

$$R_{3\alpha 3\beta} = -\Big\{\tfrac{1}{2}\,\frac{\partial^2 g_{\alpha\beta}}{\partial (x^3)^2} - \tfrac{1}{4}\,g^{\gamma\delta}\,\frac{\partial g_{\alpha\gamma}}{\partial x^3}\,\frac{\partial g_{\beta\delta}}{\partial x^3}\Big\}, \tag{f}$$

$$R_{\alpha\beta\gamma 3} = \Gamma_{\beta\gamma;\alpha} - \Gamma_{\alpha\gamma;\beta},$$

$$R_{1212} = {}^*R_{1212} + \big(\Gamma_{12}\,\Gamma_{21} - \Gamma_{11}\,\Gamma_{22}\big),$$

where, it is to be recalled, a semicolon denotes covariant differentiation on the surface $x^3 = 0$ and

$$\Gamma_{\alpha\beta} = \tfrac{1}{2}\,\frac{\partial g_{\alpha\beta}}{\partial x^3}.$$

The ${}^*R_{\alpha\beta\gamma\delta}$ stands for the curvature tensor on the surface. Define

$$b_{\alpha\beta}(x^1, x^2) = \Big[\tfrac{1}{2}\,\frac{\partial g_{\alpha\beta}}{\partial x^3}\Big]_{x^3 = 0}.$$
If we evaluate the last two sets of conditions (f) on the surface $x^3 = 0$, we now see readily that the vanishing of the curvature tensor $R_{ijkl}$ in the three-dimensional Euclidean space implies the following three sets of conditions:

$$\frac{\partial^2 g_{\alpha\beta}}{\partial (x^3)^2} = \tfrac{1}{2}\,g^{\gamma\delta}\,\frac{\partial g_{\alpha\gamma}}{\partial x^3}\,\frac{\partial g_{\beta\delta}}{\partial x^3}, \tag{g}$$

$$b_{\alpha\beta;\gamma} - b_{\alpha\gamma;\beta} = 0, \tag{h}$$

$$^*R_{\rho\alpha\beta\gamma} = b_{\rho\beta}\,b_{\alpha\gamma} - b_{\rho\gamma}\,b_{\alpha\beta}. \tag{i}$$
Equations (h) and (i) are the well-known Codazzi and Gauss equations of the surface $x^3 = 0$. (Cf. McConnell's Applications of the Absolute Differential Calculus, p. 204, 1931.) Conversely it can be shown that equations (g) in 3-space and equations (h) and (i) over the surface $x^3 = 0$ imply $R_{ijkl} = 0$ throughout the 3-space. But we shall not go into this matter any further. If we differentiate (g) with respect to $x^3$ and if in this result we eliminate the second derivatives $\partial^2 g_{\alpha\beta}/\partial(x^3)^2$ by means of (g), we find
$$\frac{\partial^3 g_{\alpha\beta}}{\partial (x^3)^3} = \tfrac{1}{2}\,\frac{\partial g^{\gamma\delta}}{\partial x^3}\,\frac{\partial g_{\alpha\gamma}}{\partial x^3}\,\frac{\partial g_{\beta\delta}}{\partial x^3} + \tfrac{1}{2}\,g^{\gamma\delta}\,g^{\rho\lambda}\,\frac{\partial g_{\alpha\rho}}{\partial x^3}\,\frac{\partial g_{\gamma\lambda}}{\partial x^3}\,\frac{\partial g_{\beta\delta}}{\partial x^3}. \tag{j}$$
On differentiating the well-known identity

$$g_{\lambda\mu}\,g^{\mu\nu} = \delta_\lambda^\nu,$$

we can solve for $\partial g^{\mu\nu}/\partial x^3$ and obtain

$$\frac{\partial g^{\mu\nu}}{\partial x^3} = -g^{\mu\lambda}\,g^{\nu\tau}\,\frac{\partial g_{\lambda\tau}}{\partial x^3}.$$

If we substitute this expression in (j), we evidently obtain

$$\frac{\partial^3 g_{\alpha\beta}}{\partial (x^3)^3} = 0.$$
Hence the surface tensor components $g_{\alpha\beta}(x^1, x^2, x^3)$ are quadratic expressions in $x^3$. Indeed, if we write

$$c_{\alpha\beta} = \Big[\tfrac{1}{2}\,\frac{\partial^2 g_{\alpha\beta}}{\partial (x^3)^2}\Big]_{x^3 = 0},$$

then

$$g_{\alpha\beta} = a_{\alpha\beta} + 2\,b_{\alpha\beta}\,x^3 + c_{\alpha\beta}\,(x^3)^2, \tag{k}$$

where $a_{\alpha\beta}(x^1, x^2) = [g_{\alpha\beta}]_{x^3 = 0}$. A glance at (g) shows that

$$c_{\alpha\beta} = a^{\rho\gamma}\,b_{\alpha\rho}\,b_{\beta\gamma}.$$
Hence, $a_{\alpha\beta}$, $b_{\alpha\beta}$, and $c_{\alpha\beta}$ are respectively the tensor coefficients of the first, second, and third fundamental forms of the surface $x^3 = 0$. It can be shown that we may write (i) in the form

$$K\,\epsilon_{\rho\alpha}\,\epsilon_{\beta\gamma} = b_{\rho\beta}\,b_{\alpha\gamma} - b_{\rho\gamma}\,b_{\alpha\beta}, \tag{l}$$

where

$$\epsilon_{\alpha\beta} = \sqrt{a}\;e_{\alpha\beta}, \qquad e_{11} = e_{22} = 0, \quad e_{12} = 1, \quad e_{21} = -1, \qquad a = |a_{\alpha\beta}|,$$

and

$$K = \frac{|b_{\alpha\beta}|}{a},$$

the total curvature (Gaussian curvature) of the surface $x^3 = 0$. If we apply the tensor $a^{\rho\gamma}$ to (l), we find

$$c_{\alpha\beta} - 2H\,b_{\alpha\beta} + K\,a_{\alpha\beta} = 0, \tag{m}$$

where $H$ is the mean curvature of the surface $x^3 = 0$,

$$H = \tfrac{1}{2}\,a^{\alpha\beta}\,b_{\alpha\beta}.$$
If $R_1$ and $R_2$ are the principal radii of curvature, it is well known in surface theory that

$$H = \frac{1}{2}\Big(\frac{1}{R_1} + \frac{1}{R_2}\Big), \qquad K = \frac{1}{R_1 R_2}.$$
With these relations we can easily calculate the determinant $g = |g_{\alpha\beta}|$; for a two-rowed determinant

$$|g_{\alpha\beta}| = \tfrac{1}{2}\,e^{\alpha\beta}\,e^{\gamma\delta}\,g_{\alpha\gamma}\,g_{\beta\delta}.$$

For this purpose we have to calculate the quantities $e^{\alpha\beta} e^{\gamma\delta} a_{\alpha\gamma} a_{\beta\delta}$, $e^{\alpha\beta} e^{\gamma\delta} a_{\alpha\gamma} b_{\beta\delta}$, $e^{\alpha\beta} e^{\gamma\delta} b_{\alpha\gamma} b_{\beta\delta}$, after similar quantities involving $c_{\alpha\beta}$ have been reduced with the help of (m). Now

$$e^{\alpha\gamma}\,e^{\beta\delta}\,a_{\gamma\delta} = a\,a^{\alpha\beta}.$$

Hence

$$e^{\alpha\beta} e^{\gamma\delta}\,a_{\alpha\gamma}\,a_{\beta\delta} = 2a, \qquad e^{\alpha\beta} e^{\gamma\delta}\,a_{\alpha\gamma}\,b_{\beta\delta} = 2Ha. \tag{n}$$

Applying $e^{\alpha\beta} e^{\gamma\delta}$ to (l), we obtain

$$e^{\alpha\beta} e^{\gamma\delta}\,b_{\alpha\gamma}\,b_{\beta\delta} = 2Ka. \tag{o}$$
If we substitute (n) and (o) in the determinant $g$, we find

$$g = a\,\big(1 + 2Hx^3 + K(x^3)^2\big)^2$$

and

$$g^{\alpha\beta} = a^{-1}\big(1 + 2Hx^3 + K(x^3)^2\big)^{-2}\,e^{\alpha\gamma}\,e^{\beta\delta}\,g_{\gamma\delta}.$$
Thus, if we expand $g^{\alpha\beta}$ as a power series in $x^3$, the series are convergent if $|x^3| < R_m$, where $R_m$ is the minimum value of the principal radii of curvature of the surface $x^3 = 0$. The same is true of many other geometrical quantities derived from $g_{\alpha\beta}$ and $g^{\alpha\beta}$.
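These expansions are easy to check on a concrete surface. The sketch below is not from the original text; it uses Python's sympy and assumes the surface $x^3 = 0$ is a sphere of radius $R$, with $x^3$ the distance from it, so that the surface block of the Euclidean metric is $g_{\theta\theta} = (R + x^3)^2$, $g_{\varphi\varphi} = (R + x^3)^2 \sin^2\theta$, $H = 1/R$, and $K = 1/R^2$:

```python
import sympy as sp

R, x3, th = sp.symbols('R x3 theta', positive=True)

# Surface block of the 3-space metric off a sphere of radius R,
# with x3 the normal distance from the surface x3 = 0.
g = sp.diag((R + x3)**2, (R + x3)**2 * sp.sin(th)**2)

a = g.subs(x3, 0)                        # first fundamental form  a_ab
b = (g.diff(x3) / 2).subs(x3, 0)         # second fundamental form b_ab
c = (g.diff(x3, 2) / 2).subs(x3, 0)      # third fundamental form  c_ab

H = sp.simplify((a.inv() * b).trace()) / 2    # mean curvature, here 1/R
K = sp.simplify(b.det() / a.det())            # Gaussian curvature, here 1/R^2

# (k): the metric is exactly quadratic in x3
assert sp.simplify(g - (a + 2*b*x3 + c*x3**2)) == sp.zeros(2)
# c_ab = a^{rg} b_ar b_bg, and (m): c - 2Hb + Ka = 0
assert sp.simplify(c - b * a.inv() * b) == sp.zeros(2)
assert sp.simplify(c - 2*H*b + K*a) == sp.zeros(2)
# determinant: g = a (1 + 2H x3 + K (x3)^2)^2
assert sp.simplify(g.det() - a.det() * (1 + 2*H*x3 + K*x3**2)**2) == 0
```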
REFERENCES FOR PART I
AITKEN, A. C. 1. "Studies in Practical Mathematics. I. The Evaluation of a Certain Triple Matrix Product," Proc. Roy. Soc. Edinburgh, vol. 57 (1937), pp. 172-181. 2. "Studies in Practical Mathematics. II. The Evaluation of the Latent Roots and Latent Vectors of a Matrix," Proc. Roy. Soc. Edinburgh, vol. 57 (1937), pp. 269-304. 3. "On Bernoulli's Numerical Solution of Algebraic Equations," Proc. Roy. Soc. Edinburgh, vol. 46 (1926), p. 289.
BINGHAM, M. D. "A New Method for Obtaining the Inverse Matrix," J. Am. Statistical Assoc., vol. 36 (1941), pp. 530-534.
BLEAKNEY, H. M. J. Aeronaut. Sci., vol. 9 (1941), pp. 56-63.
BÔCHER. Introduction to Higher Algebra.
BORN and JORDAN. Zeitschrift für Physik, vol. 34 (1925), p. 858.
CARSLAW and JAEGER. Operational Calculus in Applied Mathematics, Oxford University Press, 1941.
DEN HARTOG. Mechanical Vibrations, McGraw-Hill Book Co., 1934.
DIRAC, P. A. M. The Principles of Quantum Mechanics, Oxford University Press, 1930.
DUNCAN, W. J., and A. R. COLLAR. 1. "A Method for the Solution of Oscillation Problems by Matrices," Philosophical Magazine and Journal of Science, vol. 17 (1934), pp. 865-909. 2. "Matrices Applied to the Motions of Damped Systems," Philosophical Magazine and Journal of Science, vol. 19 (1935), pp. 197-219.
FANTAPPIÈ, L. "Le calcul des matrices," Comptes rendus (Paris), vol. 186 (1928), pp. 619-621.
FRAZER, DUNCAN, and COLLAR. Elementary Matrices and Some Applications to Dynamics and Differential Equations, Cambridge University Press, 1938.
FRAZER, R. A., and W. J. DUNCAN. "The Flutter of Aeroplane Wings," Reports and Memoranda of the (British) Aeronautical Research Committee, 1155, August, 1928.
HOLZER, H. Die Berechnung der Drehschwingungen, Julius Springer, Berlin.
HOTELLING, H. "Some New Methods in Matrix Calculation," Annals of Mathematical Statistics, vol. 14 (March, 1943), pp. 1-34.
KÁRMÁN, THEODORE VON. 60th Anniversary Volume, California Institute of Technology, 1941.
KÁRMÁN and BIOT. Mathematical Methods in Engineering, McGraw-Hill Book Co., 1940.
KÜSSNER, H. G., and L. SCHWARZ. "The Oscillating Wing with Aerodynamically Balanced Elevator," N.A.C.A. Technical Memorandum 991, October, 1941.
LIEBER, PAUL. 1. "Outline of Four Degrees of Freedom Flutter Analysis," S. M. Douglas Aircraft Corporation Report 3572, March 20, 1942. 2. "Three Dimensional Four Degrees of Freedom Flutter Analysis," S. M. Douglas Aircraft Corporation Report 3587, April 4, 1942. 3. "An Iteration Method for Calculating the Latent Roots of a Flutter Matrix," S. M. Douglas Aircraft Corporation Report 3855, Sept. 29, 1942. 4. "A General Three Dimensional Flutter Theory as Applied to Aircraft," S. M. Douglas Aircraft Corporation Report 8127. 5. "Matric Methods for Calculating Wing Divergence Speed, Aileron Reversal Speed, and Effective Control," S. M. Douglas Aircraft Corporation Report 8142, Oct. 13, 1943.
MACDUFFEE, C. C. The Theory of Matrices, Julius Springer, Berlin, 1933; a bibliographical report.
MICHAL, A. D., and R. S. MARTIN. "Some Expansions in Vector Space," Journal de mathématiques pures et appliquées, vol. 13 (1934), pp. 69-91.
OLDENBURGER, RUFUS. 1. "Infinite Powers of Matrices and Characteristic Roots," Duke Mathematical Journal, vol. 6 (1940), pp. 357-361. 2. "Convergence of Hardy Cross's Balancing Process," Journal of Applied Mechanics, 1940.
PIPES, LOUIS A. 1. "Matrices in Engineering," Electrical Engineering, 1937, p. 1177. 2. "Matrix Theory of Oscillatory Networks," Journal of Applied Physics, vol. 10 (1939), pp. 849-860. 3. "The Matrix Theory of Four Terminal Networks," Philosophical Magazine, 1940, pp. 370-395. 4. "The Transient Behavior of Four-Terminal Networks," Philosophical Magazine, 1942, pp. 174-214. 5. "The Matrix Theory of Torsional Oscillations," Journal of Applied Physics, vol. 13, No. 7 (1942), pp. 434-444.
THEODORSEN, T. "General Theory of Aerodynamic Instability and the Mechanism of Flutter," N.A.C.A. Report 496, 1934, pp. 413-433, especially pp. 419-420.
THEODORSEN, T., and I. E. GARRICK. 1. "Mechanism of Flutter," N.A.C.A. Report 685, 1940. 2. "Non-Stationary Flow about a Wing-Aileron-Tab Combination Including Aerodynamic Balance," Advanced Restricted Report 334, Langley Memorial Aeronautical Library, March, 1942.
TIMOSHENKO, S. Vibration Problems in Engineering, D. Van Nostrand Co., New York, 1928.
VOLTERRA, V. Theory of Functionals, Blackie & Son, London, 1930.
WEDDERBURN, J. H. M. Lectures on Matrices (American Mathematical Society Colloquium Publications, 1934); contains a good bibliography on pure matrix theory up to 1934.
WHITTAKER, E. T. A Treatise on the Analytical Dynamics of Particles and Rigid Bodies, Cambridge University Press, 1927.
REFERENCES FOR PART II

APPELL, P. Cours de mécanique rationnelle, Tome V, Gauthier-Villars, 1926.
BRILLOUIN, LÉON. 1. Toronto Mathematical Congress, 1924. 2. "Les lois de l'élasticité sous forme tensorielle valable pour des coordonnées quelconques," Annales de physique, vol. 3 (1925), pp. 251-298. 3. Les tenseurs en mécanique et en élasticité, Masson, Paris, 1938.
CHIEN, W. Z. "The Intrinsic Theory of Thin Shells and Plates, Part I - General Theory," Quarterly of Applied Mathematics, vol. 1 (1944), pp. 297-327.
COSSERAT, E. and F. "Sur la théorie de l'élasticité," Annales de Toulouse, vol. 10 (1896).
DARBOUX, G. Leçons sur la théorie des surfaces, vol. 2, pp. 452-521, Gauthier-Villars, Paris, 1915.
EISENHART, L. P. Riemannian Geometry, Princeton University Press, 1926.
ERTEL, H. Methoden und Probleme der dynamischen Meteorologie, Julius Springer, Berlin, 1938; Edwards Brothers, Ann Arbor, Mich., 1943.
JEFFREYS, H. Cartesian Tensors, Cambridge University Press, 1931.
KÁRMÁN, THEODORE VON. 1. "The Fundamentals of the Statistical Theory of Turbulence," Journal of the Aeronautical Sciences, vol. 4 (1937), pp. 131-138. 2. "Turbulence," Journal of the Royal Aeronautical Society, December, 1937, Twenty-fifth Wilbur Wright Memorial Lecture.
KÁRMÁN, THEODORE VON, and LESLIE HOWARTH. "On the Statistical Theory of Turbulence," Proceedings of the Royal Society of London, vol. 164 (1938), pp. 192-215.
KASNER, E. Differential-Geometric Aspects of Dynamics, American Mathematical Society, New York, 1913.
KIRCHHOFF, G. "Über die Gleichungen des Gleichgewichtes eines elastischen Körpers bei nicht unendlich kleinen Verschiebungen seiner Theile," Sitz. Math.-Nat. Klasse der kaiserlichen Akad. der Wiss., vol. 9 (1852), p. 762.
LAMB, H. Hydrodynamics, Cambridge University Press, 1924.
LAMÉ, G. Leçons sur les coordonnées curvilignes et leurs diverses applications, Paris, 1859.
LEVI-CIVITA, T. The Absolute Differential Calculus (Calculus of Tensors), Blackie and Son, London, 1927.
LOVE, A. E. H. A Treatise on the Mathematical Theory of Elasticity, Cambridge University Press, 1927.
MICHAL, A. D. 1. "Functionals of R-Dimensional Manifolds Admitting Continuous Groups of Transformations," Trans. American Mathematical Society, vol. 29 (1927), pp. 612-646. (This paper includes an introduction to multiple-point tensor fields.) 2. "General Differential Geometries and Related Topics," Bull. American Mathematical Society, vol. 45 (1939), pp. 529-563. 3. "Recent General Trends in Mathematics," Science, vol. 92 (1940), pp. 563-566. 4. "An Analogue of the Maupertuis-Jacobi 'Least' Action Principle for Dynamical Systems of Infinite Degrees of Freedom," Bull. American Mathematical Society, vol. 49 (1943), abstract 288, p. 862. 5. "Physical Models of Some Curved Differential-Geometric Metric Spaces of Infinite Dimensions. I. Wave Motion as a Study in Geodesics," Bull. American Mathematical Society, vol. 49 (1943), abstract 289, p. 862. 6. "Physical Models of Some Curved Differential-Geometric Metric Spaces of Infinite Dimensions. II. Vibrations of Elastic Beams and Other Elastic Media as Studies in Geodesics," Bull. American Mathematical Society, vol. 50 (1944), abstract 154, p. 340.
MICHAL, A. D., and T. Y. THOMAS. 1. "Differential Invariants of Affinely Connected Manifolds," Annals of Mathematics, vol. 28 (1927), pp. 196-236. 2. "Differential Invariants of Relative Quadratic Differential Forms," Annals of Mathematics, vol. 28 (1927), pp. 631-688.
McCONNELL, A. J. Applications of the Absolute Differential Calculus, Blackie and Son, London, 1931.
MURNAGHAN, F. D. "Finite Deformations of an Elastic Solid," American Journal of Mathematics, vol. 59 (1937), pp. 235-260.
REUTTER, F. "Eine Anwendung des absoluten Parallelismus auf die Schalentheorie," Z. angew. Math. Mech., vol. 22 (1942), pp. 87-98.
RICCI, G., and T. LEVI-CIVITA. "Méthodes de calcul différentiel absolu et leurs applications," Mathematische Annalen, vol. 54 (1900), pp. 125-201.
RIEMANN, B. Über die Hypothesen, welche der Geometrie zu Grunde liegen, 1854 (Gesammelte Werke, 1876, pp. 254-269).
SYNGE, J. L. "On the Geometry of Dynamics," Phil. Trans. Royal Society London, A, vol. 226 (1926-1927), pp. 31-106.
SYNGE, J. L., and W. Z. CHIEN. "The Intrinsic Theory of Elastic Shells and Plates," Theodore von Kármán Anniversary Volume, 1941, pp. 103-120.
THOMAS, T. Y. Differential Invariants of Generalized Spaces, Cambridge University Press, 1934.
VEBLEN, O. "Invariants of Quadratic Differential Forms," Cambridge Tract 24, Cambridge University Press, 1927.
VEBLEN, O., and J. H. C. WHITEHEAD. "The Foundations of Differential Geometry," Cambridge Tract 29, Cambridge University Press, 1932.
WEBSTER, A. G. The Dynamics of Particles and of Rigid, Elastic, and Fluid Bodies, G. E. Stechert & Co., New York, 1942 reprint.
WHITTAKER, E. T. A Treatise on the Analytical Dynamics of Particles and Rigid Bodies, Cambridge University Press, 1927.
WRIGHT, J. E. "Invariants of Quadratic Differential Forms," Cambridge Tract 9, Cambridge University Press, 1908.
INDEX

Aircraft flutter, calculation of, 34, 35, 36; description of, 32; matric differential equation of, 33; need for matric theory in calculations for fast aircraft, 37
AITKEN, 124
APPELL, 125
BINGHAM, 14, 124
BIOT, 34, 113, 124
BLEAKNEY, 124
BÔCHER, 111, 124
BORN, 111, 124
Boundary layer, theory of Prandtl, 109; thickness of, 109
Boundary-layer equations, 104, 105, 106, 107, 108
BRILLOUIN, 117, 125
CARSLAW, 34, 124
CARTAN, 121
CAYLEY-HAMILTON theorem for matrices, 12
CHIEN, x, 105, 127
CHRISTOFFEL symbols, Euclidean, 53; multidimensional Euclidean, 95; proof of law of transformation of, 114, 115; Riemannian, 96
CODAZZI equations for a surface, 122
COLLAR, 1, 34, 35, 124
Compressible fluids, 103, 104
Contraction of a tensor, 116
Correlation tensor field in turbulence, 73, 74
COSSERAT, E. and F., 117
Covariant differentiation, its commutativity in Euclidean spaces, 58, 59; its non-commutativity in general Riemannian spaces, 99, 100; of Euclidean metric tensor, 59; of scalar field, 60; of tensor fields in general, 58; of vector fields, 56, 57
Cramer's rule, 111
Crystalline media, 93
Curvature of a surface, Gaussian, 123; mean, 123
DARBOUX, 126
DEN HARTOG, 124
Differential equations, example, 25, 26; frequencies and amplitudes, 29, 30, 31; harmonic solutions, 28; of small oscillations, 24, 25; solution of linear, 20; with variable coefficients, 21, 22
Differentiation of matrices, 16
DIRAC, 111, 124
DUNCAN, 1, 34, 35, 124
Dynamical systems with n degrees of freedom, 24, 101; Lagrange's equations of motion of, 24, 101
EISENHART, 118, 126
Elastic deformation, change in volume under, 79, 80, 81; matrix methods in formulation of finite, 38, 39, 40; tensor methods in formulation of finite, 75, 76, 77
Elastic potential, 91, 92
Equation of continuity, 69, 70, 71, 103
ERTEL, 114, 116, 126
Euclidean Christoffel symbols, 53; alternative form for, 54; exercise on, 55; for Euclidean plane, 54; law of transformation of, 53; multidimensional, 95
Euclidean metric tensor, 43; exercises on, 47; its law of transformation, 46
Euclidean spaces, 42; multidimensional, 95
Eulerian strain tensor, 77, 78
EULER-LAGRANGE equations for geodesics, 100
FANTAPPIÈ, 112, 124
Fluids, compressible, 103, 104; incompressible, 103, 104; boundary-layer equations for, 104, 105, 106; Navier-Stokes equations for incompressible viscous, 104; Navier-Stokes equations for viscous, 69, 70
FRAZER, 1, 34, 35, 124
Functionals, 18, 112
Fundamental forms of a surface, 122
GARRICK, 36, 125
GAUSS, 43; equations for a surface, 122
General theory of relativity, 42
Geodesic coordinates, 97
Geodesics, 100; dynamical trajectories as, 118, 119, 120, 121; Euler-Lagrange equations for, 100
GOURSAT, 118, 121
HOLZER, 124
Homogeneous strains, 88; fundamental theorem on, 84, 85, 86
Hooke's law, 94
HOTELLING, 111, 124
HOWARTH, 126
Hydrodynamics, 42; Eulerian equations, 116; Lagrangean equations, 116
Integral invariants, 121
Isotropic medium, 92, 93, 94; condition for, 92, 93; stress-strain relations for, 93, 94
Isotropic strain, 84
JAEGER, 34, 124
JEFFREYS, 126
JORDAN, 111, 124
KÁRMÁN, x, 34, 38, 74, 113, 124, 126
KASNER, 126
Killing's differential equations, 88
Kinetic potential, 101
KIRCHHOFF, 117, 126
KÜSSNER, 124
Lagrange's equations of motion, 24, 101, 102
Lagrangean strain tensor, 77, 78
LAMÉ, 116, 126
Laplace equation, 62, 63; for vector fields, 65; in curvilinear coordinates, 63, 64
LEVI-CIVITA, 117, 127
LIE, 121
LIEBER, ix, 124, 125
LIN, x, 104, 105
Line element, 42; exercises on, 45, 46, 47; geodesic form of, 105; in curvilinear coordinates, 45, 46, 47; Riemannian, 96
Linear algebraic equations, solutions, 9; numerical, 14; punched-card methods, 14
Linear differential equations, application of matric exponential to, 20; in matrices, 21, 22; method of successive substitutions in solution of, 22; with constant coefficients, 20; with variable coefficients, 21, 22
LOVE, 126
MACDUFFEE, 125
MARTIN, 112, 125
Matric exponential, application to systems of linear differential equations with constant coefficients, 20; convergence of, 112; definition of, 15; properties of, 16
Matric power series, definition of, 15; theorem on computation of, 18
Matrix, addition, 2; adjoint of, 39; application of inverse, to solution of linear algebraic equations, 9; Cayley-Hamilton theorem, 12; characteristic equation of, 12; characteristic function of, 12; characteristic roots of, 12; column, 2; definition of, 1; differentiation of, with respect to numerical variable, 16; dynamical, 29; example of non-commutativity of matrix multiplication, 5; index and power laws, 11; integration of, with respect to numerical variable, 17; inverse, 8; multiplication, 5; multiplication by number, 11; norm of, 112; of same type, 2; order, 2; polynomials in a, 11; row, 2; rule for computation of inverse, 13; square, 2; strain, 39, 40, 41; symmetric, 40; trace of, 13, 111; unit, 4; zero, 3
McCONNELL, 122, 126
MICHAL, ix, x, 112, 114, 116, 118, 121, 125, 126
MILLIKAN, x
Multiple-point tensor fields, 71, 72, 73, 74; in elasticity theory, 76; in Euclidean geometry, 71, 72, 73, 74; in turbulence, 73, 74
MURNAGHAN, 117, 126
Navier-Stokes differential equations for a viscous fluid, 69; in curvilinear coordinates, 70; subject to condition of incompressibility, 104
Norm of a matrix, 112
Normal coordinates, 116
Normed linear ring, 112
OLDENBURGER, 125
PIPES, 125
Plastic deformation, 114
POINCARÉ, 121
POISSON equation, 66, 67, 68
POTT, ix
Principal radii of curvature of boundary-layer surface, 108
Quadratic differential form, 45, 46
REUTTER, 127
RICCI, 117, 127
RIEMANN, 127
Riemann-Christoffel curvature tensor, 99, 100
Riemannian geometry, 96, 97, 98, 99, 100; applications to boundary-layer theory, 103, 104, 105, 106, 107, 108, 109, 110, 121, 123; applications to classical dynamics, 101, 102; example, 98; infinite dimensional, x, 118
Rigid displacement, 40, 77
Scalar density, 60, 61, 67
Scalar field, 49; relative, 60, 61, 62
SCHWARZ, 124
Strain invariant, 82, 117, 118
Strain matrix, 39, 40, 41; in infinitesimal theory, 41
Strain tensor, 48, 49; Eulerian, 77; Lagrangean, 77; variation of, 86, 87, 88
Stress-strain relations for isotropic medium, 93, 94
Stress tensor, 89, 90; symmetry of, 91
Stress vector, 89, 90
Summation convention, 4, 43
Sylvester's theorem, 112, 113
SYNGE, 127
Tensor analysis, 56; applications to boundary-layer theory, 103, 104, 105, 106, 107, 108, 109, 110, 121, 122, 123; applications to classical dynamics, 101, 102, 118, 119, 120, 121; in multidimensional Euclidean spaces, 95; in Riemannian spaces, 97, 99, 100
Tensor field, contraction of a, 116; covariant differentiation of, 57, 58; exercises on, 59; general definition of, 57, 58; property of a, 59; relative, 60, 117; Riemann-Christoffel curvature, 99, 100; weight of, 60
Tensor field of rank two, contravariant, 50; covariant, 50; mixed, 50
THEODORSEN, 33, 36, 125
THOMAS, 114, 116, 118, 126, 127
TIMOSHENKO, 125
Turbulence, 73, 74; correlation tensor field in, 73, 74
VEBLEN, 114, 127
Vector field, contravariant, 49; covariant, 49; in rectangular cartesian coordinates, 50
Velocity field, 51; divergence of, 103, 117
VOLTERRA, 112, 126
Wave equation, 65, 66
WEBSTER, 116, 127
WEDDERBURN, 125
WHITEHEAD, 114, 127
WHITTAKER, 28, 126, 127
WRIGHT, 127