Modern Control Theory (10EE55) Lecture Notes
Short Description
Lecture notes on Modern Control Theory (10EE55).
Description: Modern Control Theory (10EE55)

MODERN CONTROL THEORY
Subject Code: 10EE55                      IA Marks: 25
No. of Lecture Hrs./Week: 04              Exam Hours: 03
Total No. of Lecture Hrs.: 52             Exam Marks: 100
PART - A

UNIT - 1 & UNIT - 2: STATE VARIABLE ANALYSIS AND DESIGN: Introduction, concept of state, state variables and state model, state modeling of linear systems, linearization of state equations. State space representation using physical variables, phase variables & canonical variables. 10 Hours

UNIT - 3: Derivation of transfer function from state model, digitalization, eigenvalues, eigenvectors, generalized eigenvectors. 6 Hours

UNIT - 4: Solution of state equation, state transition matrix and its properties, computation using Laplace transformation, power series method, Cayley-Hamilton method, concept of controllability & observability, methods of determining the same. 10 Hours

PART - B

UNIT - 5: POLE PLACEMENT TECHNIQUES: stability improvement by state feedback, necessary & sufficient conditions for arbitrary pole placement, state regulator design, design of state observer, controllers: P, PI, PID. 10 Hours
UNIT - 6: Non-linear systems: Introduction, behaviour of non-linear systems, common physical non-linearities: saturation, friction, backlash, dead zone, relay, multi-variable non-linearity. 3 Hours

UNIT - 7: Phase plane method, singular points, stability of non-linear systems, limit cycles, construction of phase trajectories. 7 Hours

UNIT - 8: Liapunov stability criteria, Liapunov functions, direct method of Liapunov & the linear system, Hurwitz criterion & Liapunov's direct method, construction of Liapunov functions for non-linear systems by Krasovskii's method. 6 Hours

TEXT BOOKS:
1. Digital Control & State Variable Methods - M. Gopal, 2nd edition, TMH, 2003
2. Control Systems Engineering - I. J. Nagrath & M. Gopal, 3rd edition, New Age International (P) Ltd.

REFERENCE BOOKS:
1. State Space Analysis of Control Systems - Katsuhiko Ogata, Prentice Hall Inc.
2. Automatic Control Systems - Benjamin C. Kuo & Farid Golnaraghi, 8th edition, John Wiley & Sons, 2003
3. Modern Control Engineering - Katsuhiko Ogata, PHI, 2003
4. Control Engineering Theory and Practice - M. N. Bandyopadhyay, PHI, 2007
5. Modern Control Systems - Dorf & Bishop, Pearson Education, 1998
CONTENTS

1. UNIT - 1: STATE VARIABLE ANALYSIS AND DESIGN
2. UNIT - 2: STATE SPACE REPRESENTATION
3. UNIT - 3: DERIVATION OF TRANSFER FUNCTION FROM STATE MODEL
4. UNIT - 4: SOLUTION OF STATE EQUATIONS
5. UNIT - 5: POLE PLACEMENT TECHNIQUES
6. UNIT - 6: NON-LINEAR SYSTEMS
7. UNIT - 7: PHASE PLANE ANALYSIS
8. UNIT - 8: STABILITY ANALYSIS
PART - A
UNIT - 1 & UNIT - 2 STATE VARIABLE ANALYSIS AND DESIGN: Introduction, concept of state, state variables and state model, state modeling of linear systems, linearization of state equations. State space representation using physical variables, phase variables & canonical variables 10 Hours
State Variable Analysis & Design

State Variable Analysis or State Space Analysis: The state variable approach is a powerful technique for the analysis and design of control systems. State space analysis is a modern approach which is also well suited to analysis using digital computers. It gives the complete internal state of the system, taking all initial conditions into account.

Why do we need state space analysis?

The conventional approach used to study the behaviour of linear time-invariant control systems uses time domain or frequency domain methods. When time domain performance specifications are given for single-input, single-output linear time-invariant systems, the system is designed using the root locus technique. If frequency domain specifications are given, frequency response plots such as Bode plots are used. In these conventional methods the system is modelled by a transfer function, the ratio of the Laplace transform of the output to that of the input, with all initial conditions neglected.
The drawbacks of the transfer function model and analysis are:
1. The transfer function is defined only under zero initial conditions.
2. The transfer function is applicable only to linear time-invariant systems.
3. It is restricted to single-input, single-output systems.
4. It does not provide information regarding the internal state of the system.
5. The classical methods like root locus, Bode plot etc. are basically trial-and-error procedures which fail to give the optimal solution required.

State variable analysis can be applied to any type of system, such as:
Linear system
Non- linear system
Time invariant system
Time varying system
Multiple input and multiple output system.
The analysis can also be carried out with non-zero initial conditions.
Advantages of state variable analysis:
1. It is a convenient tool for MIMO systems.
2. It provides a uniform platform for representing time-invariant systems, time-varying systems, and linear as well as nonlinear systems.
3. It can describe the dynamics of almost all types of systems (mechanical, electrical, biological, economic, social systems, etc.).
4. It can be performed with initial conditions.
5. The variables used to represent the system can be any variables in the system.
6. Using this analysis the internal states of the system at any time instant can be predicted.
7. As the method involves matrix algebra, it can be conveniently adapted for digital computers.
Comparison: Classical vs. Modern Control

Classical Control (Linear):
- Developed in 1920-1950
- Frequency domain analysis & design (transfer function based)
- Based on SISO models
- Deals with input and output variables
- Well-developed robustness concepts (gain/phase margins)
- No controllability/observability inference
- No optimality concerns
- Well-developed concepts, very much in use in industry

Modern Control (Linear):
- Developed in 1950-1980
- Time domain analysis and design (differential equation based)
- Based on MIMO models
- Deals with input, output and state variables
- Robustness concepts not well developed
- Controllability/observability can be inferred
- Optimality issues can be incorporated
- Fairly well developed, slowly gaining popularity in industry
State: The state is the condition of a system at any time instant t.

State variable: A set of variables which describe the state of the system at any time instant are called state variables. Alternatively, the state of a dynamic system is the smallest set of variables (called state variables) such that knowledge of these variables at t = t0, together with knowledge of the input for t ≥ t0, completely determines the behaviour of the system for any time t ≥ t0.
State space: The set of all possible values which the state vector X(t) can assume at time t forms the state space of the system.

State vector: It is an (n x 1) column matrix whose elements are the state variables of the system (where n is the order of the system). It is denoted by X(t).
State Variable Selection

Typically, the number of state variables (i.e. the order of the system) is equal to the number of independent energy storage elements. However, there are exceptions!

Is there a restriction on the selection of the state variables? Yes: all state variables should be linearly independent and they must collectively describe the system completely.
State Space Formulation

In the state variable formulation of a system, in general, a system consists of m inputs, p outputs and n state variables. The state space representation of the system may be visualized as shown in Figure 1.1. Let

State variables: x1(t), x2(t), x3(t), ..., xn(t)
Input variables: u1(t), u2(t), u3(t), ..., um(t)
Output variables: y1(t), y2(t), y3(t), ..., yp(t)

[Figure 1.1: Block diagram of the control system with inputs u1(t), ..., um(t), outputs y1(t), ..., yp(t) and internal state variables x1(t), ..., xn(t); compactly, input U, output Y and state X.]

The different variables may be represented by vectors (column matrices) as shown below:

Input vector:          U(t) = [u1(t), u2(t), ..., um(t)]^T
Output vector:         Y(t) = [y1(t), y2(t), ..., yp(t)]^T
State variable vector: X(t) = [x1(t), x2(t), ..., xn(t)]^T

A state variable representation can be arranged in the form of n first-order differential equations.
ẋ1 = f1(x1, x2, x3, ..., xn; u1, u2, ..., um)
ẋ2 = f2(x1, x2, x3, ..., xn; u1, u2, ..., um)
.
.
ẋn = fn(x1, x2, x3, ..., xn; u1, u2, ..., um)

Any n-dimensional time-invariant system has state equations in the functional form

ẋ(t) = f(X, U)    ...... State equation ...... (1)

The outputs of such a system depend on the state of the system and the instantaneous inputs. The functional output equation can be written as

Y(t) = g(X, U)    ...... Output equation ...... (2)

State Model of a Linear System
The state model of a system consists of the state equation and the output equation. The state equation of the system is a function of the state variables and inputs, as defined by equation (1). For linear time-invariant systems the first derivatives of the state variables can be expressed as linear combinations of the state variables and inputs.
ẋ1 = a11 x1 + a12 x2 + ... + a1n xn + b11 u1 + ... + b1m um
ẋ2 = a21 x1 + a22 x2 + ... + a2n xn + b21 u1 + ... + b2m um
.
.
ẋn = an1 x1 + an2 x2 + ... + ann xn + bn1 u1 + ... + bnm um

In matrix form the above equations can be expressed as

[ẋ1]   [a11 a12 ... a1n] [x1]   [b11 ... b1m] [u1]
[ẋ2] = [a21 a22 ... a2n] [x2] + [b21 ... b2m] [u2]
[ : ]   [ :           : ] [ : ]   [ :        : ] [ : ]
[ẋn]   [an1 an2 ... ann] [xn]   [bn1 ... bnm] [um]
It can also be written as
ẋ(t) = A X(t) + B U(t)

State Space Analysis: Classical Control Theory vs. Modern Control Theory

The development of control system analysis and design can be divided into three eras. In the first era we have classical control theory, which deals with techniques developed before 1950. Classical control embodies such methods as root locus, Bode, Nyquist and Routh-Hurwitz. These methods have in common the use of transfer functions in the complex frequency (s) domain, an emphasis on graphical techniques, the use of feedback, and the use of simplifying assumptions to approximate the time response. Since computers were not available at that time, a great deal of emphasis was placed on developing methods amenable to manual computation and graphics. A major limitation of classical control methods was the restriction to single-input, single-output (SISO) methods; multivariable (i.e. multiple-input, multiple-output or MIMO) systems were analyzed and designed one loop at a time. Also, the use of transfer functions and the frequency domain limited one to linear time-invariant systems.

In the second era we have modern control (which is not so modern any longer), which refers to the state-space methods developed in the late 1950s and early 1960s. In modern control, system models are written directly in the time domain, and analysis and design are also done in the time domain. It should be noted that before Laplace transforms and transfer functions became popular in the 1920s, engineers were studying systems in the time domain. The resurgence of time domain analysis was therefore not unusual; it was triggered by the development of computers and advances in numerical analysis. Because computers were available, it was no longer necessary to develop analysis and design methods that were strictly manual. An engineer could use computers to numerically solve or simulate large systems that were nonlinear and time-varying. State space methods removed the previously mentioned limitations of classical control. The 1960s were the heyday of modern control.
System Representation in State Variable Form

This chapter introduces the concept of the state variable and the various means of representing control systems in state variable form. Each method of state variable representation results in a system description in terms of n first-order differential equations, as opposed to the usual nth-order equation. A convenient tool for this new system representation is matrix notation.

System State and State Variable

It is important to stress at the outset that the concept of system state is first of all a physical concept. However, it is often convenient to work in terms of a mathematical model. Here this mathematical model is assumed to consist of ordinary differential equations which have a unique solution for all inputs and initial conditions. It is in terms of this mathematical model that the 'system state', or simply 'state', is defined.

Definition: The state of a system at any time t0 is the minimum set of numbers X1(t0), X2(t0), ..., Xn(t0) which, along with the input to the system for t ≥ t0, is sufficient to determine the behaviour of the system for all t ≥ t0.
In other words, the state of a system represents the minimum amount of information that we need to know about the system at t0 such that its future behaviour can be determined without reference to the input before t0.

The idea of state is familiar from a knowledge of the physical world and from the means of solving the differential equations used to model it. Consider a ball flying through the air. Intuitively we feel that if we know the ball's position and velocity, we also know its future behaviour; it is on this basis that an outfielder positions himself to catch a ball. Exactly the same information is needed to solve a differential equation model of the problem. Consider for example the second-order differential equation

Ẍ + aẊ + bX = f(t)

The solution to this equation can be found as the sum of the forced response, due to f(t), and the natural or unforced response, i.e. the solution of the homogeneous equation

Ẍ + aẊ + bX = 0

If X1(t), X2(t), ..., Xn(t) are the state variables chosen for the system, then the initial conditions of the state variables plus the inputs u(t) for t > 0 should be sufficient to determine the future behaviour, i.e. the outputs y(t) for t > 0. Note that the state variables need not be physically measurable or observable quantities; practically, however, it is convenient to choose easily measurable quantities. The number of state variables is equal to the order of the differential equation, which is normally equal to the number of energy storage elements in the system.
State Equations for Linear Systems in Matrix Form

The state of a linear time-invariant nth-order system is represented by the following set of n first-order differential equations with constant coefficients in terms of the n state variables X1, X2, ..., Xn.
Ẋ1 = a11 X1 + a12 X2 + ... + a1n Xn + b11 U1 + ... + b1m Um
Ẋ2 = a21 X1 + a22 X2 + ... + a2n Xn + b21 U1 + ... + b2m Um
.
.
Ẋn = an1 X1 + an2 X2 + ... + ann Xn + bn1 U1 + ... + bnm Um

In matrix form the above equations may be written as

[Ẋ1]   [a11 a12 ... a1n] [X1]   [b11 b12 ... b1m] [U1]
[Ẋ2] = [a21 a22 ... a2n] [X2] + [b21 b22 ... b2m] [U2]
[ : ]   [ :           : ] [ : ]   [ :            : ] [ : ]
[Ẋn]   [an1 an2 ... ann] [Xn]   [bn1 bn2 ... bnm] [Um]

i.e.  Ẋ = A X + B U

where
Ẋ is the derivative of the state vector, of size (n x 1)
X is the state vector, of size (n x 1)
A is the system matrix, of size (n x n)
B is the input matrix, of size (n x m)
U is the input vector, of size (m x 1)
Output Equation

The state variables X1(t), ..., Xn(t) represent the dynamic state of a system. The system output(s) may be some of the state variables themselves or, ordinarily, linear combinations of them:

Y1 = c11 X1 + c12 X2 + ... + c1n Xn
Y2 = c21 X1 + c22 X2 + ... + c2n Xn
.
Yp = cp1 X1 + cp2 X2 + ... + cpn Xn

In matrix form,

[Y1]   [c11 c12 ... c1n] [X1]
[Y2] = [c21 c22 ... c2n] [X2]
[ : ]   [ :           : ] [ : ]
[Yp]   [cp1 cp2 ... cpn] [Xn]

or  Y = C X

where
Y is the output vector, of size (p x 1)
C is the transmission (output) matrix, of size (p x n)
X is the state vector, of size (n x 1)

Sometimes the output is a function of both the state variables and the inputs. For this general case,
Y = C X + D U

or, written out,

[Y1]   [c11 c12 ... c1n] [X1]   [d11 d12 ... d1m] [U1]
[Y2] = [c21 c22 ... c2n] [X2] + [d21 d22 ... d2m] [U2]
[ : ]   [ :           : ] [ : ]   [ :            : ] [ : ]
[Yp]   [cp1 cp2 ... cpn] [Xn]   [dp1 dp2 ... dpm] [Um]

where the D matrix is of size (p x m).

State Model

The state equation of a system determines its dynamic state, and the output equation gives its output at any time t > 0, provided the state at t = 0 and the control forces for t ≥ 0 are known. These two equations together form the state model of the system. The state model of a linear system is therefore given by

Ẋ = A X + B U
Y = C X + D U
    (1)

State Model of a SISO Linear Time-Invariant System

If we let m = 1 and p = 1 in the state model of a multiple-input multiple-output linear time-invariant system, we obtain the following state model for a SISO linear system:

Ẋ = A X + b u
Y = c X + d u        (2)

where b is an (n x 1) column vector, c is a (1 x n) row vector and d is a scalar.
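As a concrete illustration of the state model (2), the following is a minimal Python sketch (assuming NumPy is available; the second-order system values are arbitrary, chosen only for illustration) that simulates Ẋ = AX + bu, y = cX + du with simple Euler integration.

```python
import numpy as np

# Illustrative 2nd-order SISO system (arbitrary values)
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])   # system matrix, n x n
b = np.array([[0.0],
              [1.0]])          # input matrix, n x 1
c = np.array([[1.0, 0.0]])     # output matrix, 1 x n
d = 0.0                        # direct feed-through (scalar)

dt, T = 1e-3, 5.0              # step size and simulation horizon
x = np.array([[1.0],           # initial state X(0)
              [0.0]])

for k in range(int(T / dt)):
    u = 1.0                    # unit-step input
    x_dot = A @ x + b * u      # state equation  X' = AX + bu
    x = x + dt * x_dot         # Euler integration step
    y = float(c @ x + d * u)   # output equation y = cX + du

print("state at t = 5 s:", x.ravel(), " output:", y)
```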
State Model using Phase Variables (Bush Form)

Let us now consider how the state model defined by equation (2) may be obtained for an nth-order SISO system whose describing differential equation, relating output y with input u, is

d^n y/dt^n + a(n-1) d^(n-1)y/dt^(n-1) + a(n-2) d^(n-2)y/dt^(n-2) + ... + a1 dy/dt + a0 y = b0 u    (3)

where a(n-1), a(n-2), ..., a1, a0 are constant coefficients and y(0), dy/dt(0), ..., d^(n-1)y/dt^(n-1)(0) are the initial conditions.

To arrive at the state model, equation (3) is rewritten in shorthand form as

y^(n) + a(n-1) y^(n-1) + a(n-2) y^(n-2) + ... + a1 ẏ + a0 y = b0 u    (4)

We first define the state variables X1, X2, ..., Xn, which can be done in many possible ways. A convenient way is to define

X1 = y
X2 = ẏ
.
Xn = y^(n-1)

With this definition of the state variables, equation (4) is reduced to the set of n first-order differential equations given below:

Ẋ1 = ẏ = X2
Ẋ2 = ÿ = X3
.
Ẋ(n-1) = y^(n-1) = Xn
Ẋn = y^(n) = -a0 X1 - a1 X2 - a2 X3 - ... - a(n-1) Xn + b0 u

The above equations result in the following state equations, Ẋ = A X + b u, with

    [  0    1    0   ...   0      ]        [ 0  ]
    [  0    0    1   ...   0      ]        [ 0  ]
A = [  :                   :      ],   b = [ :  ]
    [  0    0    0   ...   1      ]        [ 0  ]
    [ -a0  -a1  -a2  ... -a(n-1)  ]        [ b0 ]
It is to be noted that the matrix A has a special form: it has 1's along the upper off-diagonal, its last row is composed of the negatives of the coefficients of the original differential equation, and all other elements are zero. This form of matrix A is known as the 'Bush form'. The set of state variables which yield the Bush form for the matrix A are called 'phase variables'.

When A is in Bush form, the vector b has the speciality that all its elements except the last are zero. In fact A and b, and therefore the state equation, can be written directly by inspection of the linear differential equation. The output being y = X1, the output equation is given by

y = C X,  where C = [1 0 ... 0]

Note: There is one more state model, called the canonical state model; we shall consider this model after going through the transfer function.
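As a small illustration of writing A and b by inspection, here is a hedged Python sketch (the helper name and the example coefficients are arbitrary, chosen only for illustration) that builds the Bush-form matrices for y^(n) + a(n-1) y^(n-1) + ... + a0 y = b0 u.

```python
import numpy as np

def bush_form(a_coeffs, b0):
    """Phase-variable (Bush form) state model for
    y^(n) + a_{n-1} y^(n-1) + ... + a0 y = b0 u,
    where a_coeffs = [a0, a1, ..., a_{n-1}]."""
    n = len(a_coeffs)
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)        # 1's on the upper off-diagonal
    A[-1, :] = -np.asarray(a_coeffs)  # last row: negated coefficients
    b = np.zeros((n, 1))
    b[-1, 0] = b0                     # only the last element of b is non-zero
    C = np.zeros((1, n))
    C[0, 0] = 1.0                     # output y = X1
    return A, b, C

# Example (arbitrary coefficients): y''' + 6y'' + 11y' + 6y = 2u
A, b, C = bush_form([6.0, 11.0, 6.0], 2.0)
print(A)
print(b.ravel(), C.ravel())
```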
Derivation of Transfer Function from a Given State Model

Having obtained the state model, we next consider the problem of determining the transfer function from a given state model of SISO / MIMO systems.

1) SISO system

u(s) → [ G(s) ] → y(s)

G(s) is called the transfer function, defined as

G(s) = y(s)/u(s),  or  y(s) = G(s) u(s)    (1)

The state model is given by

Ẋ(t) = A X(t) + B u(t)    (2)
Y(t) = C X(t) + D u(t)    (3)

Taking the Laplace transform on both sides of equations (2) and (3) and neglecting initial conditions, we get

s X(s) = A X(s) + B u(s)    (4)
Y(s) = C X(s) + D u(s)    (5)

From (4),  (sI - A) X(s) = B u(s),  or  X(s) = (sI - A)^-1 B u(s)    (6)

Substituting (6) in (5),

Y(s) = C (sI - A)^-1 B u(s) + D u(s) = ( C (sI - A)^-1 B + D ) u(s)    (7)

Comparing (7) with (1),

G(s) = C (sI - A)^-1 B + D    (8)
An important observation that needs to be made here is that while the state model is not unique, the transfer function is unique; i.e. the transfer function of equation (8) must work out to be the same irrespective of which particular state model is used to describe the system.

2) MIMO system

u1(s), u2(s), ..., um(s) (m inputs) → [ G(s) ] → y1(s), y2(s), ..., yp(s) (p outputs)

G(s) = C (sI - A)^-1 B + D,  where  y(s) = G(s) U(s)

The G(s) matrix is called the transfer matrix, of size (p x m); the y(s) vector is of size (p x 1) and the u(s) vector is of size (m x 1). For example,

y1(s) = G11(s) u1(s) + G12(s) u2(s) + ... + G1m(s) um(s)

so the individual transfer function G11(s) = y1(s)/u1(s) evaluated with u2(s) = u3(s) = ... = um(s) = 0. Similarly G12(s), etc. are defined.
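The following is a small, hedged sketch of computing G(s) = C(sI - A)^-1 B + D symbolically (SymPy assumed available; the second-order matrices are arbitrary illustrative values, not taken from the notes).

```python
import sympy as sp

s = sp.symbols('s')

# Arbitrary illustrative SISO state model
A = sp.Matrix([[0, 1],
               [-2, -3]])
B = sp.Matrix([[0],
               [1]])
C = sp.Matrix([[1, 0]])
D = sp.Matrix([[0]])

# G(s) = C (sI - A)^-1 B + D, equation (8)
G = sp.simplify(C * (s * sp.eye(2) - A).inv() * B + D)
print(G)   # expected: Matrix([[1/(s**2 + 3*s + 2)]])
```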
Derivation of State Models from a Transfer Function

More often the system model is known in transfer function form. It therefore becomes necessary to have methods for converting the transfer function model to a state model. The process of going from the transfer function to the state equations is called decomposition of the transfer function. In general there are three basic ways of decomposing a transfer function: direct decomposition, parallel decomposition, and cascaded decomposition. Each has its own advantages and is best suited for a particular situation.

Decomposition of a transfer function

1. Converting a TF with a constant term in the numerator. Phase variables are variables that are successive derivatives of each other. For example, cross-multiplying C(s)/R(s) = 24/(s^3 + 9s^2 + 26s + 24) gives

s^3 C + 9 s^2 C + 26 s C + 24 C = 24 R

Taking the inverse Laplace transform and choosing the phase variables x1 = c, x2 = ċ, x3 = c̈ leads to the phase-variable state model.
2. Converting a TF with a polynomial in the numerator (here a numerator of the form s^2 + 7s + 2). Taking the inverse Laplace transform of the numerator dynamics,
C(s) = s^2 X1(s) + 7s X1(s) + 2 X1(s),  so that  c(t) = ẍ1 + 7ẋ1 + 2x1 = x3 + 7x2 + 2x1.
3. Cascading form. The denominator of the TF is to be in factored form.
UNIT-2 STATE-SPACE REPRESENTATION
1 Introduction
The classical control theory and methods (such as root locus) that we have been using in class to date are based on a simple input-output description of the plant, usually expressed as a transfer function. These methods do not use any knowledge of the interior structure of the plant, they limit us to single-input single-output (SISO) systems, and, as we have seen, they allow only limited control of the closed-loop behavior when feedback control is used. Modern control theory overcomes many of these limitations by using a much richer description of the plant dynamics. The so-called state-space description provides the dynamics as a set of coupled first-order differential equations in a set of internal variables known as state variables, together with a set of algebraic equations that combine the state variables into physical output variables.

1.1 Definition of System State
The concept of the state of a dynamic system refers to a minimum set of variables, known as state variables, that fully describe the system and its response to any given set of inputs [1-3]. In particular, a state-determined system model has the characteristic that: a mathematical description of the system in terms of a minimum set of variables xi(t), i = 1, . . . , n, together with knowledge of those variables at an initial time t0 and the system inputs for time t ≥ t0, is sufficient to predict the future system state and outputs for all time t > t0. This definition asserts that the dynamic behavior of a state-determined system is completely characterized by the response of the set of n variables xi(t), where the number n is defined to be the order of the system. The system shown in Fig. 1 has two inputs u1(t) and u2(t), and four output variables y1(t), . . . , y4(t). If the system is state-determined, knowledge of its state variables (x1(t0), x2(t0), . . . , xn(t0)) at some initial time t0, and the inputs u1(t) and u2(t) for t ≥ t0, is sufficient to determine all future behavior of the system. The state variables are an internal description of the system which completely characterizes the system state at any time t, and from which any output variables yi(t) may be computed. Large classes of engineering, biological, social and economic systems may be represented by state-determined system models. System models constructed with the pure and ideal (linear) one-port elements (such as mass, spring and damper elements) are state-determined
Figure 1: System inputs and outputs.

system models. For such systems the number of state variables, n, is equal to the number of independent energy storage elements in the system. The values of the state variables at any time t specify the energy of each energy storage element within the system and therefore the total system energy, and the time derivatives of the state variables determine the rate of change of the system energy. Furthermore, the values of the system state variables at any time t provide sufficient information to determine the values of all other variables in the system at that time. There is no unique set of state variables that describe any given system; many different sets of variables may be selected to yield a complete system description. However, for a given system the order n is unique, and is independent of the particular set of state variables chosen. State variable descriptions of systems may be formulated in terms of physical and measurable variables, or in terms of variables that are not directly measurable. It is possible to mathematically transform one set of state variables to another; the important point is that any set of state variables must provide a complete description of the system. In this note we concentrate on a particular set of state variables that are based on energy storage variables in physical systems.
1.2 The State Equations

A standard form for the state equations is used throughout system dynamics. In the standard form the mathematical description of the system is expressed as a set of n coupled first-order ordinary differential equations, known as the state equations, in which the time derivative of each state variable is expressed in terms of the state variables x1(t), . . . , xn(t) and the system inputs u1(t), . . . , ur(t). In the general case the form of the n state equations is:

ẋ1 = f1(x, u, t)
ẋ2 = f2(x, u, t)
.
ẋn = fn(x, u, t)    (1)
where ẋi = dxi/dt and each of the functions fi(x, u, t), (i = 1, . . . , n) may be a general nonlinear, time-varying function of the state variables, the system inputs, and time. It is common to express the state equations in vector form, in which the set of n state variables is written as a state vector x(t) = [x1(t), x2(t), . . . , xn(t)]^T, and the set of r inputs is written as an input vector u(t) = [u1(t), u2(t), . . . , ur(t)]^T. Each state variable is a time-varying component of the column vector x(t). This form of the state equations explicitly represents the basic elements contained in the definition of a state-determined system. Given a set of initial conditions (the values of the xi at some time t0) and the inputs for t ≥ t0, the state equations explicitly specify the derivatives of all state variables. The value of each state variable at some time ∆t later may then be found by direct integration. The system state at any instant may be interpreted as a point in an n-dimensional state space, and the dynamic state response x(t) can be interpreted as a path or trajectory traced out in the state space. In vector notation the set of n equations in Eqs. (1) may be written:

ẋ = f(x, u, t)    (2)
where f(x, u, t) is a vector function with n components fi(x, u, t). In this note we restrict attention primarily to a description of systems that are linear and time-invariant (LTI), that is, systems described by linear differential equations with constant coefficients. For an LTI system of order n, and with r inputs, Eqs. (1) become a set of n coupled first-order linear differential equations with constant coefficients:

ẋ1 = a11 x1 + a12 x2 + . . . + a1n xn + b11 u1 + . . . + b1r ur
ẋ2 = a21 x1 + a22 x2 + . . . + a2n xn + b21 u1 + . . . + b2r ur
.
ẋn = an1 x1 + an2 x2 + . . . + ann xn + bn1 u1 + . . . + bnr ur    (3)
where the coefficients aij and bij are constants that describe the system. This set of n equations defines the derivatives of the state variables to be a weighted sum of the state variables and the system inputs. Equations (3) may be written compactly in matrix form:

ẋ = Ax + Bu    (5)

In this note we use bold-faced type to denote vector quantities. Upper case letters are used to denote general matrices while lower case letters denote column vectors. See Appendix A for an introduction to matrix notation and operations.
where the state vector x is a column vector of length n, the input vector u is a column vector of length r, A is an n × n square matrix of the constant coefficients aij , and B is an n × r matrix of the coefficients bij that weight the inputs.
1.3 Output Equations

A system output is defined to be any system variable of interest. A description of a
physical system in terms of a set of state variables does not necessarily include all of the variables of direct engineering interest. An important property of the linear state equation description is that all system variables may be represented by a linear combination of the state variables xi and the system inputs ui . An arbitrary output variable in a system of order n with r inputs may be written: y(t) = c1 x1 + c2 x2 + . . . + cn xn + d1 u1 + . . . + dr ur
(6)
where the ci and di are constants. If a total of m system variables are defined as outputs, the m such equations may be written as:

y1 = c11 x1 + c12 x2 + . . . + c1n xn + d11 u1 + . . . + d1r ur
y2 = c21 x1 + c22 x2 + . . . + c2n xn + d21 u1 + . . . + d2r ur
.
ym = cm1 x1 + cm2 x2 + . . . + cmn xn + dm1 u1 + . . . + dmr ur    (7)
or in matrix form (Eq. (8)). The output equations are commonly written in the compact form:

y = Cx + Du    (9)
where y is a column vector of the output variables yi (t), C is an m × n matrix of the constant coefficients cij that weight the state variables, and D is an m × r matrix of the constant coefficients dij that weight the system inputs. For many physical systems the matrix D is the null matrix, and the output equation reduces to a simple weighted combination of the state variables: y = Cx. (10)
1.4 State Equation Based Modeling Procedure
The complete system model for a linear time-invariant system consists of (i) a set of n state equations, defined in terms of the matrices A and B, and (ii) a set of output equations that relate any output variables of interest to the state variables and inputs, expressed in terms of the C and D matrices. The task of modeling the system is to derive the elements of the matrices, and to write the system model in the form:

ẋ = Ax + Bu    (11)
y = Cx + Du    (12)

The matrices A and B are properties of the system and are determined by the system structure and elements. The output equation matrices C and D are determined by the particular choice of output variables. The overall modeling procedure developed in this chapter is based on the following steps:

1. Determination of the system order n and selection of a set of state variables from the linear graph system representation.
2. Generation of a set of state equations and the system A and B matrices using a well-defined methodology. This step is also based on the linear graph system description.
3. Determination of a suitable set of output equations and derivation of the appropriate C and D matrices.
2 Block Diagram Representation of Linear Systems Described by State Equations
The matrix-based state equations express the derivatives of the state-variables explicitly in terms of the states themselves and the inputs. In this form, the state vector is expressed as the direct result of a vector integration. The block diagram representation is shown in Fig. 2. This general block diagram shows the matrix operations from input to output in terms of the A, B, C, D matrices, but does not show the path of individual variables. In state-determined systems, the state variables may always be taken as the outputs of integrator blocks. A system of order n has n integrators in its block diagram. The derivatives of the state variables are the inputs to the integrator blocks, and each state equation expresses a derivative as a sum of weighted state variables and inputs. A detailed block diagram representing a system of order n may be constructed directly from the state and output equations as follows: Step 1: Draw n integrator (S −1 ) blocks, and assign a state variable to the output of each block.
Figure 2: Vector block diagram for a linear system described by state-space system dynamics.

Step 2: At the input to each block (which represents the derivative of its state variable) draw a summing element.
Step 3: Use the state equations to connect the state variables and inputs to the summing elements through scaling operator blocks.
Step 4: Expand the output equations and sum the state variables and inputs through a set of scaling operators to form the components of the output.
Example 1

Draw a block diagram for the general second-order, single-input single-output system:

[ẋ1]   [a11 a12] [x1]   [b1]
[ẋ2] = [a21 a22] [x2] + [b2] u(t),

y(t) = [c1  c2] [x1; x2] + d u(t).    (i)
Solution: The block diagram shown in Fig. 3 was drawn using the four steps described above.
3 Transformation From State-Space Equations to Classical Form

The transfer function and the classical input-output differential equation for any system variable may be found directly from a state space representation through the Laplace transform. The following example illustrates the general method for a first-order system.
Figure 3: Block diagram for a state-equation based second-order system.
Example 2

Find the transfer function and a single first-order differential equation relating the output y(t) to the input u(t) for a system described by the first-order linear state and output equations:

dx/dt = ax(t) + bu(t)    (i)
y(t) = cx(t) + du(t)    (ii)
Solution: The Laplace transform of the state equation is

sX(s) = aX(s) + bU(s),    (iii)

which may be rewritten with the state variable X(s) on the left-hand side:

(s − a) X(s) = bU(s).    (iv)

Then dividing by (s − a), solve for the state variable:

X(s) = b/(s − a) U(s),    (v)
and substitute into the Laplace transform of the output equation Y(s) = cX(s) + dU(s):

Y(s) = ( bc/(s − a) + d ) U(s)
     = ( ds + (bc − ad) ) / (s − a) · U(s)    (vi)

The transfer function is:

H(s) = Y(s)/U(s) = ( ds + (bc − ad) ) / (s − a).    (vii)

The differential equation is found directly:

(s − a) Y(s) = ( ds + (bc − ad) ) U(s),    (viii)

and rewriting as a differential equation:

dy/dt − ay = d du/dt + (bc − ad) u(t).    (ix)
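A short, hedged SymPy check of this first-order result (the symbols are generic; SymPy is assumed available) confirms that c(s − a)^-1 b + d reduces to the transfer function (vii).

```python
import sympy as sp

s, a, b, c, d = sp.symbols('s a b c d')

# First-order system: dx/dt = a x + b u,  y = c x + d u
H = sp.simplify(c * b / (s - a) + d)          # H(s) = c (s - a)^-1 b + d
H_expected = (d * s + (b * c - a * d)) / (s - a)
print(sp.simplify(H - H_expected))            # 0 -> the two forms agree
```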
Classical representations of higher-order systems may be derived in an analogous set of steps by using the Laplace transform and matrix algebra. A set of linear state and output equations written in standard form

ẋ = Ax + Bu    (13)
y = Cx + Du    (14)
may be rewritten in the Laplace domain. The system equations are then

sX(s) = AX(s) + BU(s)
Y(s) = CX(s) + DU(s)    (15)

and the state equations may be rewritten:

sX(s) − AX(s) = [sI − A] X(s) = BU(s).    (16)
where the term sI creates an n × n matrix with s on the leading diagonal and zeros elsewhere. (This step is necessary because matrix addition and subtraction are only defined for matrices of the same dimension.) The matrix [sI − A] appears frequently throughout linear system theory; it is a square n × n matrix with elements directly related to the A matrix:

           [ (s − a11)    −a12     · · ·    −a1n     ]
           [   −a21     (s − a22)  · · ·    −a2n     ]
[sI − A] = [     :                            :      ]    (17)
           [   −an1       −an2     · · ·  (s − ann)  ]
The state equations, written in the form of Eq. (16), are a set of n simultaneous operational expressions. The common methods of solving linear algebraic equations, for example Gaussian elimination, Cramer's rule, the matrix inverse, elimination and substitution, may be directly applied to linear operational equations such as Eq. (16). For low-order single-input single-output systems the transformation to a classical formulation may be performed in the following steps:

1. Take the Laplace transform of the state equations.
2. Reorganize each state equation so that all terms in the state variables are on the left-hand side.
3. Treat the state equations as a set of simultaneous algebraic equations and solve for those state variables required to generate the output variable.
4. Substitute for the state variables in the output equation.
5. Write the output equation in operational form and identify the transfer function.
6. Use the transfer function to write a single differential equation between the output variable and the system input.

This method is illustrated in the following two examples.
Example 3

Use the Laplace transform method to derive a single differential equation for the capacitor voltage vC in the series R-L-C electric circuit shown in Fig. 4.

Solution: The linear graph method of state equation generation selects the
Figure 4: A series RLC circuit.
capacitor voltage vC(t) and the inductor current iL(t) as state variables, and generates the following pair of state equations:

[dvC/dt]   [  0     1/C ] [vC]   [ 0  ]
[diL/dt] = [−1/L   −R/L ] [iL] + [1/L] Vin(t).    (i)

The required output equation is:

y(t) = [1  0] [vC ; iL].    (ii)
Step 1: In Laplace transform form the state equations are:

sVC(s) = 0·VC(s) + (1/C) IL(s) + 0·Vs(s)
sIL(s) = −(1/L) VC(s) − (R/L) IL(s) + (1/L) Vs(s)    (iii)

Step 2: Reorganize the state equations:

sVC(s) − (1/C) IL(s) = 0·Vs(s)    (iv)
(1/L) VC(s) + [s + R/L] IL(s) = (1/L) Vs(s)    (v)

Step 3: In this case we have two simultaneous operational equations in the state variables vC and iL. The output equation requires only vC. If Eq. (iv) is multiplied by [s + R/L], and Eq. (v) is multiplied by 1/C, and the equations added, IL(s) is eliminated:

[s(s + R/L) + 1/LC] VC(s) = (1/LC) Vs(s)    (vi)

Step 4: The output equation is y = vC. Operate on both sides of Eq. (vi) by [s^2 + (R/L)s + 1/LC]^−1 and write in quotient form:

VC(s) = (1/LC) / ( s^2 + (R/L)s + 1/LC ) · Vs(s)    (vii)

Step 5: The transfer function H(s) = VC(s)/Vs(s) is:

H(s) = (1/LC) / ( s^2 + (R/L)s + 1/LC )    (viii)

Step 6: The differential equation relating vC to Vs is:

d^2 vC/dt^2 + (R/L) dvC/dt + (1/LC) vC = (1/LC) Vs(t)    (ix)
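The following hedged Python sketch (component values are arbitrary and SciPy is assumed available) rebuilds the state model (i)-(ii) numerically and recovers the transfer function (viii) with scipy.signal.ss2tf.

```python
import numpy as np
from scipy import signal

R, L, C = 1.0, 0.5, 1e-3          # illustrative component values

A = np.array([[0.0,    1.0 / C],
              [-1.0 / L, -R / L]])
B = np.array([[0.0],
              [1.0 / L]])
Cmat = np.array([[1.0, 0.0]])     # output y = vC
D = np.array([[0.0]])

num, den = signal.ss2tf(A, B, Cmat, D)
print("numerator  :", num)        # ~ [0, 0, 1/(LC)]
print("denominator:", den)        # ~ [1, R/L, 1/(LC)]
```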
Cramer's Rule, for the solution of a set of linear algebraic equations, is a useful method to apply to the solution of these equations. In solving for the variable xi in a set of n linear algebraic equations, such as Ax = b, the rule states:

xi = det[A^(i)] / det[A]    (18)

where A^(i) is another n × n matrix formed by replacing the ith column of A with the vector b. If

[sI − A] X(s) = BU(s)    (19)

then the relationship between the ith state variable and the input is

Xi(s) = ( det[sI − A]^(i) / det[sI − A] ) U(s)    (20)

where (sI − A)^(i) is defined to be the matrix formed by replacing the ith column of (sI − A) with the column vector B. The differential equation is

det[sI − A] xi = det[(sI − A)^(i)] uk(t).    (21)
Example 4

Use Cramer's Rule to solve for vL(t) in the electrical system of Example 3.

Solution: From Example 3 the state equations are:

[dvC/dt]   [  0     1/C ] [vC]   [ 0  ]
[diL/dt] = [−1/L   −R/L ] [iL] + [1/L] Vin(t)    (i)

and the output equation is:

vL = −vC − R iL + Vs(t).    (ii)

In the Laplace domain the state equations are:

[ s     −1/C   ] [VC(s)]   [ 0  ]
[1/L   s + R/L ] [IL(s)] = [1/L] Vin(s).    (iii)
The voltage VC(s) is given by:

VC(s) = ( det[(sI − A)^(1)] / det[sI − A] ) Vin(s)

      = det[ 0  −1/C ; 1/L  (s + R/L) ] / det[ s  −1/C ; 1/L  (s + R/L) ] · Vin(s)

      = (1/LC) / ( s^2 + (R/L)s + (1/LC) ) · Vin(s).    (iv)

The current IL(s) is:

IL(s) = ( det[(sI − A)^(2)] / det[sI − A] ) Vin(s)

      = det[ s  0 ; 1/L  1/L ] / det[ s  −1/C ; 1/L  (s + R/L) ] · Vin(s)

      = (s/L) / ( s^2 + (R/L)s + (1/LC) ) · Vin(s).    (v)

The output equation may be written directly from the Laplace transform of Eq. (ii):

VL(s) = −VC(s) − R IL(s) + Vs(s)
      = [ −(1/LC) / (s^2 + (R/L)s + 1/LC) − (R/L)s / (s^2 + (R/L)s + 1/LC) + 1 ] Vs(s)
      = [ ( −1/LC − (R/L)s + (s^2 + (R/L)s + 1/LC) ) / (s^2 + (R/L)s + 1/LC) ] Vs(s)
      = s^2 / ( s^2 + (R/L)s + (1/LC) ) · Vs(s),    (vi)

giving the differential equation

d^2 vL/dt^2 + (R/L) dvL/dt + (1/LC) vL(t) = d^2 Vs/dt^2.    (vii)
For a single-input single-output (SISO) system the transfer function may be found directly by evaluating the inverse matrix

X(s) = (sI − A)^−1 B U(s).    (22)

Using the definition of the matrix inverse:

[sI − A]^−1 = adj[sI − A] / det[sI − A],    (23)
X(s) = ( adj[sI − A] / det[sI − A] ) B U(s),    (24)

and substituting into the output equation gives:

Y(s) = C [sI − A]^−1 B U(s) + D U(s) = ( C [sI − A]^−1 B + D ) U(s).    (25)

Expanding the inverse in terms of the determinant and the adjoint matrix yields:

Y(s) = ( C adj(sI − A) B + det[sI − A] D ) / det[sI − A] · U(s) = H(s) U(s)    (26)

so that the required differential equation may be found by expanding:

det[sI − A] Y(s) = [ C adj(sI − A) B + det[sI − A] D ] U(s)    (27)
and taking the inverse Laplace transform of both sides.
Example 5

Use the matrix inverse method to find a differential equation relating vL(t) to Vs(t) in the system described in Example 3.

Solution: The state vector, written in the Laplace domain,

X(s) = [sI − A]^−1 B U(s)    (i)

from the previous example is:

[VC(s)]   [ s     −1/C   ]^−1 [ 0  ]
[IL(s)] = [1/L   s + R/L ]    [1/L] Vin(s).    (ii)

The determinant of [sI − A] is

det[sI − A] = s^2 + (R/L)s + (1/LC),
    (iii)

and the adjoint of [sI − A] is

adj[ s    −1/C   ]   [ s + R/L   1/C ]
   [1/L  s + R/L ] = [  −1/L      s  ].    (iv)

From Example 3 and the previous example, the output equation vL(t) = −vC − R iL + Vs(t) specifies that C = [−1  −R] and D = [1]. The transfer function, Eq. (26), is:

H(s) = ( C adj(sI − A) B + det[sI − A] D ) / det[sI − A].    (v)

Since

C adj(sI − A) B = [−1  −R] [ s + R/L   1/C ] [ 0  ]
                           [  −1/L      s  ] [1/L]  = −(R/L)s − 1/(LC),    (vi)

the transfer function is

H(s) = ( −(R/L)s − 1/(LC) + (s^2 + (R/L)s + (1/LC)) ) / ( s^2 + (R/L)s + (1/LC) )
     = s^2 / ( s^2 + (R/L)s + (1/LC) ),    (vii)
which is the same result found by using Cramer’s rule in Example 4.
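As a hedged check of Examples 4 and 5 (SymPy assumed available), the sketch below computes H(s) = (C adj(sI − A) B + det[sI − A] D)/det[sI − A] symbolically for the series RLC model.

```python
import sympy as sp

s, R, L, C = sp.symbols('s R L C', positive=True)

A = sp.Matrix([[0, 1 / C],
               [-1 / L, -R / L]])
B = sp.Matrix([[0],
               [1 / L]])
Cmat = sp.Matrix([[-1, -R]])   # output v_L = -v_C - R i_L + V_s
D = sp.Matrix([[1]])

sI_A = s * sp.eye(2) - A
H = (Cmat * sI_A.adjugate() * B + sI_A.det() * D) / sI_A.det()
print(sp.simplify(H[0]))       # expected: s**2 / (s**2 + R*s/L + 1/(L*C))
```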
4 Transformation from Classical Form to State-Space Representation

The block diagram provides a convenient method for deriving a set of state equations for a system that is specified in terms of a single input/output differential equation. A set of n state variables can be identified as the outputs of integrators in the diagram, and state equations can be written from the conditions at the inputs to the integrator blocks (the derivatives of the state variables). There are many methods for doing this; we present here one convenient state equation formulation that is widely used in control system theory.

Let the differential equation representing the system be of order n, and without loss of generality assume that the order of the polynomial operators on both sides is the same:

( an s^n + an−1 s^(n−1) + · · · + a0 ) Y(s) = ( bn s^n + bn−1 s^(n−1) + · · · + b0 ) U(s).    (28)
We may multiply both sides of the equation by s^−n to ensure that all differential operators have been eliminated:

( an + an−1 s^−1 + · · · + a1 s^−(n−1) + a0 s^−n ) Y(s) = ( bn + bn−1 s^−1 + · · · + b1 s^−(n−1) + b0 s^−n ) U(s),    (29)
from which the output may be specified in terms of a transfer function. If we define a dummy variable Z (s), and split Eq. (29) into two parts
Figure 5: Block diagram of a system represented by a classical differential equation.

Eq. (30) may be solved for U(s),

U(s) = ( an + an−1 s^−1 + · · · + a1 s^−(n−1) + a0 s^−n ) Z(s)    (32)

and rearranged to generate a feedback structure that can be used as the basis for a block diagram:

Z(s) = (1/an) U(s) − ( (an−1/an)(1/s) + · · · + (a1/an)(1/s^(n−1)) + (a0/an)(1/s^n) ) Z(s).    (33)

The dummy variable Z(s) is specified in terms of the system input u(t) and a weighted sum of successive integrations of itself. Figure 5 shows the overall structure of this direct-form block diagram. A string of n cascaded integrator (1/s) blocks, with Z(s) defined at the input to the first block, is used to generate the feedback terms, Z(s)/s^i, i = 1, . . . , n, in Eq. (33). Equation (31) serves to combine the outputs from the integrators into the output y(t). A set of state equations may be found from the block diagram by assigning the state variables xi(t) to the outputs of the n integrators. Because of the direct cascade connection of the integrators, the state equations take a very simple form. By inspection:

ẋ1 = x2
ẋ2 = x3
.
ẋn−1 = xn
ẋn = −(a0/an) x1 − (a1/an) x2 − · · · − (an−1/an) xn + (1/an) u(t).    (34)
In matrix form these equations are

[ẋ1  ]   [   0       1       0    · · ·      0         0     ] [x1  ]   [  0  ]
[ẋ2  ]   [   0       0       1    · · ·      0         0     ] [x2  ]   [  0  ]
[ :  ] = [   :                                          :     ] [ :  ] + [  :  ] u(t).    (35)
[ẋn−1]   [   0       0       0    · · ·      0         1     ] [xn−1]   [  0  ]
[ẋn  ]   [−a0/an  −a1/an  −a2/an  · · ·  −an−2/an  −an−1/an  ] [xn  ]   [1/an ]

The A matrix has a very distinctive form: each row, except the bottom one, is filled with zeros except for a one in the position just above the leading diagonal. Equation (35) is a common form of the state equations, used in control system theory and known as the phase variable or companion form. This form leads to a set of state variables which may not correspond to any physical variables within the system. The corresponding output relationship is specified by Eq. (31) by noting that Xi(s) = Z(s)/s^(n+1−i):

y(t) = b0 x1 + b1 x2 + b2 x3 + · · · + bn−1 xn + bn z(t).    (36)

But z(t) = dxn/dt, which is found from the nth state equation in Eq. (34). When substituted into Eq. (36) the output equation is:
y(t) = [ (b0 − bn a0/an)   (b1 − bn a1/an)   · · ·   (bn−1 − bn an−1/an) ] [x1; x2; . . . ; xn] + (bn/an) u(t).    (37)
Example 6

Draw a direct form realization of a block diagram, and write the state equations in phase variable form, for a system with the differential equation

d^3 y/dt^3 + 7 d^2 y/dt^2 + 19 dy/dt + 13y = 13 du/dt + 26u    (i)
Solution: The system order is 3, and using the structure shown in Fig. 5 the block diagram is as shown in Fig. 6. The state and output equations are found directly from Eqs. (35) and (37):
[ẋ1]   [  0    1    0 ] [x1]   [0]
[ẋ2] = [  0    0    1 ] [x2] + [0] u(t),    (ii)
[ẋ3]   [−13  −19   −7 ] [x3]   [1]

Figure 6: Block diagram of the transfer operator of a third-order system found by a direct realization.

y(t) = [26  13  0] [x1; x2; x3] + [0] u(t).    (iii)
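A hedged numerical cross-check of Example 6 (SciPy assumed available): converting the phase-variable model (ii)-(iii) back to a transfer function should recover (13s + 26)/(s^3 + 7s^2 + 19s + 13).

```python
import numpy as np
from scipy import signal

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-13.0, -19.0, -7.0]])
B = np.array([[0.0], [0.0], [1.0]])
C = np.array([[26.0, 13.0, 0.0]])
D = np.array([[0.0]])

num, den = signal.ss2tf(A, B, C, D)
print("numerator  :", num)   # ~ [0, 0, 13, 26]
print("denominator:", den)   # ~ [1, 7, 19, 13]
```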
5 The Matrix Transfer Function

For a multiple-input multiple-output system Eq. (22) is written in terms of the r-component input vector U(s),

X(s) = [sI − A]^−1 B U(s)    (38)

generating a set of n simultaneous linear equations, where the matrix B is n × r. The m-component system output vector Y(s) may be found by substituting this solution for X(s) into the output equation as in Eq. (25):

Y(s) = C [sI − A]^−1 B U(s) + D U(s) = ( C [sI − A]^−1 B + D ) U(s)    (39)

and expanding the inverse in terms of the determinant and the adjoint matrix:

Y(s) = ( C adj(sI − A) B + det[sI − A] D ) / det[sI − A] · U(s) = H(s) U(s),    (40)

where H(s) is defined to be the matrix transfer function relating the output vector Y(s) to the input vector U(s):

H(s) = ( C adj(sI − A) B + det[sI − A] D ) / det[sI − A]    (41)
UNIT-3 DERIVATION OF TRANSFER FUNCTION FROM STATE MODEL

Derivation of transfer function from state model, digitalization, eigenvalues, eigenvectors, generalized eigenvectors. 6 Hours
Converting Transfer Functions to State Models using Canonical Forms

The state variables that produce a state model are not, in general, unique. However, there exist several common methods of producing state models from transfer functions. Most control theory texts contain developments of a standard form called the control canonical form, see, e.g., [1]. Another is the phase variable canonical form.

Control Canonical Form

When the order of a transfer function's denominator is higher than the order of its numerator, the transfer function is called strictly proper. Consider the general, strictly proper third-order transfer function

Y(s)/U(s) = ( b2 s^2 + b1 s + b0 ) / ( s^3 + a2 s^2 + a1 s + a0 ).    (1)

Dividing each term by the highest order of s yields

Y(s)/U(s) = ( b2/s + b1/s^2 + b0/s^3 ) / ( 1 + a2/s + a1/s^2 + a0/s^3 )    (2)

which is a function containing numerous 1/s terms, or integrators. There are various signal-flow graph configurations that will produce this function. One possibility is the control canonical form shown in Figure 1. The state model for the signal-flow configuration in Figure 1 is
[ẋ1]   [  0    1    0 ] [x1]   [0]
[ẋ2] = [  0    0    1 ] [x2] + [0] u    (3)
[ẋ3]   [−a0  −a1  −a2 ] [x3]   [1]

y = [b0  b1  b2] [x1; x2; x3] + [0] u

Figure 1. Control canonical form block diagram (a chain of three 1/s integrators with states X1(s), X2(s), X3(s), feedback gains −a2, −a1, −a0 and feed-forward gains b0, b1, b2 combining into the output Y(s)).
The validity of (3) is tested by rearranging (2) to yield

Y(s) = −a2 (1/s) Y(s) − a1 (1/s^2) Y(s) − a0 (1/s^3) Y(s) + b2 (1/s) U(s) + b1 (1/s^2) U(s) + b0 (1/s^3) U(s)    (4)

and then substituting the expression for Y(s) from the state model output equation in (3):

b0 X1(s) + b1 X2(s) + b2 X3(s) = b2 (1/s) U(s) + b1 (1/s^2) U(s) + b0 (1/s^3) U(s)
    − a2 (1/s) [ b0 X1(s) + b1 X2(s) + b2 X3(s) ]
    − a1 (1/s^2) [ b0 X1(s) + b1 X2(s) + b2 X3(s) ]
    − a0 (1/s^3) [ b0 X1(s) + b1 X2(s) + b2 X3(s) ].    (5)

Equation (5) can be rewritten as

( b0 X1(s) + b1 X2(s) + b2 X3(s) ) / U(s) = Y(s)/U(s) = ( b2/s + b1/s^2 + b0/s^3 ) / ( 1 + a2/s + a1/s^2 + a0/s^3 ),

which is identical to equation (2), proving that the control canonical form is valid. As an example consider the transfer function
C(s)/R(s) = ( 2s^2 + 8s + 6 ) / ( s^3 + 8s^2 + 26s + 6 ) = ( 2/s + 8/s^2 + 6/s^3 ) / ( 1 + 8/s + 26/s^2 + 6/s^3 ).

The coefficients corresponding with the control canonical form are:

b0 = 6,  b1 = 8,  b2 = 2,  a0 = 6,  a1 = 26,  a2 = 8.
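A hedged numerical check of these coefficients (SciPy assumed available): building the control canonical matrices from them and converting back with scipy.signal.ss2tf should recover the transfer function above. The resulting state model is written out just after this sketch.

```python
import numpy as np
from scipy import signal

# Control canonical form built from a0=6, a1=26, a2=8 and b0=6, b1=8, b2=2
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-6.0, -26.0, -8.0]])
B = np.array([[0.0], [0.0], [1.0]])
C = np.array([[6.0, 8.0, 2.0]])
D = np.array([[0.0]])

num, den = signal.ss2tf(A, B, C, D)
print("numerator  :", num)   # ~ [0, 2, 8, 6]
print("denominator:", den)   # ~ [1, 8, 26, 6]
```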
The state model based on control canonical form is

[ẋ1]   [  0    1    0 ] [x1]   [0]
[ẋ2] = [  0    0    1 ] [x2] + [0] r
[ẋ3]   [ −6  −26   −8 ] [x3]   [1]

y = [6  8  2] [x1; x2; x3] + [0] r.

Examination of Figure 1 shows one potential benefit of manipulating systems to conform to control canonical form. If X1(s) happened to carry units of inches (position), then X2(s) might be inches/second (velocity), and X3(s) might be inches/second/second (acceleration).

Phase Variable Canonical Form

Development of the phase variable canonical form as presented in [2] begins with the general fourth-order, strictly proper transfer function
Y(s)/U(s) = ( b3 s^3 + b2 s^2 + b1 s + b0 ) / ( s^4 + a3 s^3 + a2 s^2 + a1 s + a0 )
          = ( b3 s^−1 + b2 s^−2 + b1 s^−3 + b0 s^−4 ) / ( 1 + a3 s^−1 + a2 s^−2 + a1 s^−3 + a0 s^−4 )    (6)
which can be rearranged to read
Y(s) = b3 s^−1 U(s) + b2 s^−2 U(s) + b1 s^−3 U(s) + b0 s^−4 U(s) − a3 s^−1 Y(s) − a2 s^−2 Y(s) − a1 s^−3 Y(s) − a0 s^−4 Y(s)    (7)
The denominator in (6) is fourth order and leads one to conclude that a block diagram consisting of four 1/s terms may be useful. Construction of the phase variable canonical form is initiated by setting the output to
Y(s) = b0 s^−4 U(s)    (8)
which may be represented using four integrators as shown in Figure 2.
Figure 2. Block diagram of the output equation (U(s) passed through four 1/s integrators and the gain b0 to give Y(s)).
After substituting Y(s) from (8) into (7), the expression becomes
Y(s) = b3 s^−1 U(s) + b2 s^−2 U(s) + b1 s^−3 U(s) + b0 s^−4 U(s) − a3 b0 s^−5 U(s) − a2 b0 s^−6 U(s) − a1 b0 s^−7 U(s) − a0 b0 s^−8 U(s).    (9)
It is left for the reader to show that the additional terms, when applied to the block diagram, result in Figure 3.8(b) in [2]. The reader may also wish to examine the similarities between the control canonical form and the phase variable form. It is important to note that a particular transfer function may be represented by block diagrams of many different canonical forms, yielding many different valid state models.

The advances made in microprocessors, microcomputers, and digital signal processors have accelerated the growth of digital control systems theory. Discrete-time systems are dynamic systems in which the system variables are defined only at discrete instants of time. The terms sampled-data control systems, discrete-time control systems and digital control systems have all been used interchangeably in the control system literature. Strictly speaking, sampled data are pulse-amplitude modulated signals and are obtained by some means of sampling an analog signal. Digital signals are generated by means of digital transducers or digital computers, often in digitally coded form. The term discrete-time systems, in a broad sense, describes all systems having some form of digital or sampled signals. Discrete-time systems differ from continuous-time systems in that the signals for a discrete-time system are in sampled-data form. In contrast to the continuous-time system, the operation of a discrete-time system is described by a set of difference equations. The analysis and design of discrete-time systems may be effectively carried out by use of the z-transform, which evolved from the Laplace transform as a special form.
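To make the idea of a difference-equation model concrete, here is a hedged sketch (SciPy assumed available; the continuous model and sampling period are arbitrary illustrative choices) that discretizes a continuous state model with a zero-order hold, giving x[k+1] = Ad x[k] + Bd u[k].

```python
import numpy as np
from scipy import signal

# Continuous-time state model (illustrative values)
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

Ts = 0.1   # sampling period in seconds (arbitrary choice)
Ad, Bd, Cd, Dd, _ = signal.cont2discrete((A, B, C, D), Ts, method='zoh')

print("Ad =\n", Ad)
print("Bd =\n", Bd)
# Discrete-time (difference-equation) model: x[k+1] = Ad x[k] + Bd u[k],
#                                            y[k]   = Cd x[k] + Dd u[k]
```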
UNIT-4 SOLUTION OF STATE EQUATIONS

Solution of state equation, state transition matrix and its properties, computation using Laplace transformation, power series method, Cayley-Hamilton method, concept of controllability & observability, methods of determining the same. 10 Hours

After obtaining the various mathematical models such as the physical, phase and canonical forms of state models, the next step in the analysis is to obtain the solution of the state equation. From the solution of the state equation, the transient response can then be obtained for a specific input. This completes the analysis of the control system in state space.

The solution of the state equation consists of two parts: the homogeneous solution and the forced solution. The model with zero input is referred to as the homogeneous system, and with non-zero input as the forced system. The solution of the state equation with zero input is called the zero input response (ZIR). The solution of the state model with zero state (zero initial conditions) is referred to as the zero state response (ZSR). Hence the total response is the sum of the ZIR and the ZSR.

a) Zero Input Response (ZIR):
Consider the nth-order LTI system with zero input. The state equation of such a system, with the usual notation, is given by

ẋ(t) = A x(t)    (1)

with x(0) = x0.    (2)

Let the solution of Eq. (1) be of the form

x(t) = e^(At) k    (3)

where e^(At) is the matrix exponential function defined as

e^(At) = I + At + A^2 t^2/2! + A^3 t^3/3! + ......    (4)

and k is a constant vector. Differentiating Eq. (3) shows that it satisfies Eq. (1). Evaluating Eq. (3) at the initial time t0 gives

k = (e^(A t0))^−1 x(t0) = e^(−A t0) x(t0)

so that

x(t) = e^(At) e^(−A t0) x(t0) = e^(A(t − t0)) x(t0).

With t0 = 0 (initial condition given at t = 0),

x(t) = e^(At) x(0).    (8)

From Eq. (8) it is observed that the initial state x(0) = x0 at t = 0 is driven to the state x(t) at time t by the matrix exponential function e^(At). This matrix exponential function, which transfers the initial state of the system in t seconds, is called the STATE TRANSITION MATRIX (STM) and is denoted by φ(t).

Properties of the State Transition Matrix
1). φ(0) = I:

Proof follows from the definition,

φ(t) = e^(At) = I + At + A^2 t^2/2! + A^3 t^3/3! + ......

Substituting t = 0 results in φ(0) = I.
2) \phi^{-1}(t) = \phi(-t).
Proof: \phi(t) = I + At + A^2 t^2 / 2! + ... and \phi(-t) = I - At + A^2 t^2 / 2! - A^3 t^3 / 3! + ... . Multiplying the two series, \phi(t)\phi(-t) = I. Pre-multiplying both sides by \phi^{-1}(t) gives \phi(-t) = \phi^{-1}(t).
3) \phi(t_1)\phi(t_2) = \phi(t_1 + t_2).
Proof: \phi(t_1) = I + At_1 + A^2 t_1^2 / 2! + A^3 t_1^3 / 3! + ... and \phi(t_2) = I + At_2 + A^2 t_2^2 / 2! + A^3 t_2^3 / 3! + ... . Multiplying the series,
\phi(t_1)\phi(t_2) = I + A(t_1 + t_2) + A^2 (t_1 + t_2)^2 / 2! + A^3 (t_1 + t_2)^3 / 3! + ... = \phi(t_1 + t_2).
4) \phi(t_2 - t_1)\phi(t_1 - t_0) = \phi(t_2 - t_0).
This property implies that a state transition process can be divided into a number of sequential transitions; i.e. the transition from t_0 to t_2,
x(t_2) = \phi(t_2 - t_0) x(t_0),
is equal to the transition from t_0 to t_1 followed by the transition from t_1 to t_2:
x(t_1) = \phi(t_1 - t_0) x(t_0),  x(t_2) = \phi(t_2 - t_1) x(t_1).
Proof: \phi(t_2 - t_1) = I + A(t_2 - t_1) + A^2 (t_2 - t_1)^2 / 2! + ... and \phi(t_1 - t_0) = I + A(t_1 - t_0) + A^2 (t_1 - t_0)^2 / 2! + ... . Multiplying the series,
\phi(t_2 - t_1)\phi(t_1 - t_0) = I + A(t_2 - t_0) + A^2 (t_2 - t_0)^2 / 2! + A^3 (t_2 - t_0)^3 / 3! + ... = \phi(t_2 - t_0).
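The four properties above are easy to check numerically. The following sketch uses scipy.linalg.expm to evaluate \phi(t) = e^{At}; the matrix A and the times t1, t2 are illustrative values, not taken from the notes.

import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-1.0, -2.0]])
phi = lambda t: expm(A * t)

t1, t2 = 0.7, 1.3
print(np.allclose(phi(0.0), np.eye(2)))                # property 1: phi(0) = I
print(np.allclose(np.linalg.inv(phi(t1)), phi(-t1)))   # property 2: phi(t)^-1 = phi(-t)
print(np.allclose(phi(t1) @ phi(t2), phi(t1 + t2)))    # property 3: phi(t1)phi(t2) = phi(t1+t2)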
Evaluation of the State Transition Matrix \phi(t): A few methods to evaluate the STM are:
1) Power series method
2) Inverse Laplace transform method
3) Cayley-Hamilton theorem

1) Power series method: This gives an infinite series, which may be truncated after two or three terms for an approximation. By definition,
e^{At} = I + At + A^2 t^2 / 2! + A^3 t^3 / 3! + ...
Example: Compute the STM by the power series method for
a) A = [[0, 1], [-1, -2]],   b) A = [[1, 1], [0, 1]].

a) \phi(t) = I + At + A^2 t^2 / 2! + A^3 t^3 / 3! + ...
= [[1, 0], [0, 1]] + [[0, 1], [-1, -2]] t + [[-1, -2], [2, 3]] t^2/2! + [[2, 3], [-3, -4]] t^3/3! + ...
= [[1 - t^2/2 + t^3/3 - ...,  t - t^2 + t^3/2 - ...], [-t + t^2 - t^3/2 + ...,  1 - 2t + 3t^2/2 - 2t^3/3 + ...]]
Recognising these as the series of (1+t)e^{-t}, te^{-t}, -te^{-t} and (1-t)e^{-t},
\phi(t) = [[(1+t)e^{-t}, t e^{-t}], [-t e^{-t}, (1-t)e^{-t}]]

b) \phi(t) = I + At + A^2 t^2 / 2! + A^3 t^3 / 3! + ...
= [[1, 0], [0, 1]] + [[1, 1], [0, 1]] t + [[1, 2], [0, 1]] t^2/2! + [[1, 3], [0, 1]] t^3/3! + ...
= [[1 + t + t^2/2! + ...,  t + t^2 + t^3/2 + ...], [0,  1 + t + t^2/2! + ...]]
= [[e^{t}, t e^{t}], [0, e^{t}]]
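A minimal sketch of the power series method follows: it sums the first N terms of I + At + (At)^2/2! + ... and compares the truncated sum with the exact matrix exponential. A and t are taken from example (a); the truncation order N is an illustrative choice.

import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-1.0, -2.0]])
t, N = 0.5, 10

phi_series = np.zeros_like(A)
term = np.eye(2)
for k in range(N):
    phi_series = phi_series + term
    term = term @ (A * t) / (k + 1)   # next term: (At)^(k+1) / (k+1)!

print(phi_series)
print(expm(A * t))   # should agree closely for sufficiently large N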
2) Inverse Laplace transform method: Consider the state equation with zero input,
\dot{x}(t) = A x(t),  x(t_0) = x(0)
Taking the Laplace transform of both sides,
s X(s) - x(0) = A X(s)
[sI - A] X(s) = x(0)
Pre-multiplying both sides by [sI - A]^{-1},
X(s) = [sI - A]^{-1} x(0)
Taking the inverse Laplace transform,
x(t) = L^{-1}{ [sI - A]^{-1} } x(0)
Comparing this with x(t) = e^{At} x(0) shows that
e^{At} = \phi(t) = L^{-1}{ [sI - A]^{-1} }
Example: Obtain the STM by the inverse Laplace transform method for the system matrices
a) A = [[1, 1], [0, 1]],   b) A = [[0, 1], [-1, -2]],   c) A = [[0, 1], [-2, -3]].

a) [sI - A] = [[s-1, -1], [0, s-1]]
\phi(s) = [sI - A]^{-1} = [[1/(s-1), 1/(s-1)^2], [0, 1/(s-1)]]
\phi(t) = L^{-1}{ \phi(s) } = [[e^{t}, t e^{t}], [0, e^{t}]]
b) A = [[0, 1], [-1, -2]]
[sI - A] = [[s, -1], [1, s+2]]
\phi(s) = [sI - A]^{-1} = (1/(s+1)^2) [[s+2, 1], [-1, s]]
\phi(t) = L^{-1}{ \phi(s) } = [[(1+t)e^{-t}, t e^{-t}], [-t e^{-t}, (1-t)e^{-t}]]

c) A = [[0, 1], [-2, -3]]
[sI - A] = [[s, -1], [2, s+3]]
[sI - A]^{-1} = (1/((s+2)(s+1))) [[s+3, 1], [-2, s]]
Hence
\phi(t) = L^{-1}{ (1/((s+2)(s+1))) [[s+3, 1], [-2, s]] }
= [[2e^{-t} - e^{-2t}, e^{-t} - e^{-2t}], [-2e^{-t} + 2e^{-2t}, -e^{-t} + 2e^{-2t}]]
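The partial-fraction work in example (c) can be checked symbolically. The sketch below forms (sI - A)^{-1} and applies the inverse Laplace transform element by element with sympy; it is a verification aid, not part of the original notes.

import sympy as sp

t, s = sp.symbols('t s', positive=True)
A = sp.Matrix([[0, 1], [-2, -3]])

Phi_s = (s * sp.eye(2) - A).inv()
Phi_t = Phi_s.applyfunc(
    lambda F: sp.inverse_laplace_transform(sp.apart(F, s), s, t))
# A Heaviside(t) factor may appear in the output; it equals 1 for t > 0.
print(sp.simplify(Phi_t))   # expect [[2e^-t - e^-2t, e^-t - e^-2t], [-2e^-t + 2e^-2t, -e^-t + 2e^-2t]]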
3) STM by the Cayley-Hamilton theorem: This method is useful for large systems. The theorem states that every square matrix satisfies its own characteristic equation. The theorem helps in evaluating functions of a matrix: for an n x n matrix A, a matrix polynomial function f(A) can be written as
f(A) = \alpha_0 I + \alpha_1 A + \alpha_2 A^2 + ... + \alpha_{n-1} A^{n-1}
(in particular f(A) = e^{At} = \phi(t)), where the constant coefficients \alpha_0, \alpha_1, ..., \alpha_{n-1} are evaluated from the eigenvalues of A as described below.
Step 1: For the given matrix, form the characteristic equation |\lambda I - A| = 0 and find the eigenvalues \lambda_1, \lambda_2, ..., \lambda_n.
Step 2 (Case 1): If all eigenvalues are distinct, form n simultaneous equations
f(\lambda_i) = \alpha_0 + \alpha_1 \lambda_i + \alpha_2 \lambda_i^2 + ... + \alpha_{n-1} \lambda_i^{n-1},  i = 1, 2, ..., n
(for the STM, f(\lambda_i) = e^{\lambda_i t}) and solve for \alpha_0, \alpha_1, ..., \alpha_{n-1}.
(Case 2): If an eigenvalue is repeated, only one independent equation is obtained from it; the additional equations are obtained by differentiating f(\lambda) = \alpha_0 + \alpha_1 \lambda + ... with respect to \lambda and evaluating at the repeated eigenvalue.
Step 3: Substitute the coefficients into f(A) = \alpha_0 I + \alpha_1 A + ... + \alpha_{n-1} A^{n-1} = \phi(t).

Examples: 1) Find f(A) = A^{10} for A = [[0, 1], [-1, -2]].
The characteristic equation is |\lambda I - A| = \lambda(\lambda + 2) + 1 = (\lambda + 1)^2 = 0.
Hence the eigenvalues are \lambda_1 = \lambda_2 = -1 (repeated). Since A is 2 x 2, the corresponding polynomial function is
f(\lambda) = \lambda^{10} = \alpha_0 + \alpha_1 \lambda   (1)
f(\lambda_1) = (-1)^{10} = \alpha_0 - \alpha_1, i.e. \alpha_0 - \alpha_1 = 1.
Since the eigenvalue is repeated, the second equation is obtained by differentiating Eq. (1) with respect to \lambda and evaluating at \lambda = -1:
d/d\lambda [\lambda^{10}] at \lambda = -1 gives 10\lambda^9 = -10 = \alpha_1
Hence \alpha_1 = -10 and \alpha_0 = 1 + \alpha_1 = 1 - 10 = -9. Therefore
f(A) = A^{10} = \alpha_0 I + \alpha_1 A = [[\alpha_0, \alpha_1], [-\alpha_1, \alpha_0 - 2\alpha_1]] = [[-9, -10], [10, 11]].
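The Cayley-Hamilton result above can be spot-checked numerically: A^{10} should equal -9I - 10A.

import numpy as np

A = np.array([[0, 1], [-1, -2]])
print(np.linalg.matrix_power(A, 10))   # direct computation of A^10
print(-9 * np.eye(2) - 10 * A)         # alpha_0*I + alpha_1*A with alpha_0 = -9, alpha_1 = -10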
2) Find the STM by the Cayley-Hamilton method for
a) A = [[1, 1], [0, 1]],   b) A = [[0, 1], [-1, -2]],   c) A = [[0, 1], [-2, -3]].

a) A = [[1, 1], [0, 1]]. The characteristic equation is |\lambda I - A| = (\lambda - 1)^2 = 0, so \lambda_1 = \lambda_2 = 1. Since A is a 2 x 2 (second order) matrix,
e^{\lambda t} = \alpha_0 + \alpha_1 \lambda   (1)
At \lambda = \lambda_1 = 1:
e^{t} = \alpha_0 + \alpha_1   (2)
Differentiating Eq. (1) with respect to \lambda and substituting \lambda = 1:
t e^{t} = \alpha_1   (3)
From Eqs. (2) and (3), \alpha_1 = t e^{t} and \alpha_0 = e^{t} - t e^{t} = e^{t}(1 - t). Hence the STM is
\phi(t) = e^{At} = \alpha_0 I + \alpha_1 A = e^{t}(1-t) [[1, 0], [0, 1]] + t e^{t} [[1, 1], [0, 1]] = [[e^{t}, t e^{t}], [0, e^{t}]].

b) A = [[0, 1], [-1, -2]]. The characteristic equation is (\lambda + 1)^2 = 0, so \lambda_1 = \lambda_2 = -1. For this second order system,
e^{\lambda t} = \alpha_0 + \alpha_1 \lambda   (1)
e^{-t} = \alpha_0 - \alpha_1   (2)
Differentiating Eq. (1) with respect to \lambda and substituting \lambda = -1 gives \alpha_1 = t e^{-t}, and hence from Eq. (2) \alpha_0 = e^{-t}(1 + t). The STM is \phi(t) = e^{At} = \alpha_0 I + \alpha_1 A; on simplification,
\phi(t) = [[(1+t)e^{-t}, t e^{-t}], [-t e^{-t}, (1-t)e^{-t}]].

c) A = [[0, 1], [-2, -3]]. The characteristic equation is |\lambda I - A| = \lambda(\lambda + 3) + 2 = (\lambda + 1)(\lambda + 2) = 0, so \lambda_1 = -1 and \lambda_2 = -2 (distinct). The corresponding equations are
e^{\lambda_1 t} = \alpha_0 + \alpha_1 \lambda_1:  e^{-t} = \alpha_0 - \alpha_1   (2)
e^{\lambda_2 t} = \alpha_0 + \alpha_1 \lambda_2:  e^{-2t} = \alpha_0 - 2\alpha_1   (3)
Solving Eqs. (2) and (3) for \alpha_0 and \alpha_1:
\alpha_0 = 2e^{-t} - e^{-2t},  \alpha_1 = e^{-t} - e^{-2t}.
Hence the STM is
\phi(t) = \alpha_0 I + \alpha_1 A = [[\alpha_0, \alpha_1], [-2\alpha_1, \alpha_0 - 3\alpha_1]] = [[2e^{-t} - e^{-2t}, e^{-t} - e^{-2t}], [-2e^{-t} + 2e^{-2t}, -e^{-t} + 2e^{-2t}]],
the same result as obtained by the inverse Laplace transform method.
Determine the STM of the system matrix
A = [[2, 1, 4], [0, 2, 0], [0, 3, 1]]
by the Cayley-Hamilton method.
The characteristic equation is
|\lambda I - A| = | [\lambda-2, -1, -4], [0, \lambda-2, 0], [0, -3, \lambda-1] | = (\lambda - 2)^2 (\lambda - 1) = 0
so the eigenvalues are \lambda_1 = 2 (repeated) and \lambda_2 = 1. The corresponding function is
e^{\lambda t} = \alpha_0 + \alpha_1 \lambda + \alpha_2 \lambda^2   (1)
At \lambda = 2:
e^{2t} = \alpha_0 + 2\alpha_1 + 4\alpha_2   (2)
Differentiating Eq. (1) with respect to \lambda and substituting \lambda = 2:
t e^{2t} = \alpha_1 + 4\alpha_2   (3)
At \lambda_2 = 1:
e^{\lambda_2 t} = \alpha_0 + \alpha_1 \lambda_2 + \alpha_2 \lambda_2^2:  e^{t} = \alpha_0 + \alpha_1 + \alpha_2   (4)
Solving Eqs. (2), (3) and (4) for \alpha_0, \alpha_1, \alpha_2:
\alpha_0 = 2t e^{2t} - 3e^{2t} + 4e^{t},  \alpha_1 = -3t e^{2t} + 4e^{2t} - 4e^{t},  \alpha_2 = t e^{2t} - e^{2t} + e^{t}.
Hence the STM is
\phi(t) = \alpha_0 I + \alpha_1 A + \alpha_2 A^2 = [[e^{2t}, 13t e^{2t} - 12e^{2t} + 12e^{t}, 4e^{2t} - 4e^{t}], [0, e^{2t}, 0], [0, 3e^{2t} - 3e^{t}, e^{t}]].
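A quick numerical spot-check of the 3 x 3 STM above at one sample time (t = 0.3 is an arbitrary choice):

import numpy as np
from scipy.linalg import expm

A = np.array([[2.0, 1.0, 4.0], [0.0, 2.0, 0.0], [0.0, 3.0, 1.0]])
t = 0.3

e2t, et = np.exp(2 * t), np.exp(t)
phi = np.array([
    [e2t, 13 * t * e2t - 12 * e2t + 12 * et, 4 * e2t - 4 * et],
    [0.0, e2t, 0.0],
    [0.0, 3 * e2t - 3 * et, et],
])
print(np.allclose(phi, expm(A * t)))   # expected: True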
Solution of the Non-homogeneous State Equation
Consider the state equation with forcing input u(t),
\dot{x}(t) = A x(t) + B u(t),  x(t_0) = x_0   (1)
Rearranging, \dot{x}(t) - A x(t) = B u(t). Pre-multiplying both sides by e^{-At},
e^{-At} [\dot{x}(t) - A x(t)] = e^{-At} B u(t)   (2)
Consider
d/dt [e^{-At} x(t)] = -A e^{-At} x(t) + e^{-At} \dot{x}(t) = e^{-At} [\dot{x}(t) - A x(t)]   (3)
Hence Eq. (2) can be written as
d/dt [e^{-At} x(t)] = e^{-At} B u(t)
Integrating both sides between the limits 0 and t,
e^{-At} x(t) - x(0) = \int_0^t e^{-A\tau} B u(\tau) d\tau
e^{-At} x(t) = x(0) + \int_0^t e^{-A\tau} B u(\tau) d\tau
Pre-multiplying both sides by e^{At},
x(t) = e^{At} x(0) + \int_0^t e^{A(t-\tau)} B u(\tau) d\tau
     = \phi(t) x(0) + \int_0^t \phi(t-\tau) B u(\tau) d\tau
       (ZIR)             (ZSR)
If the initial time is t_0 instead of 0, then
x(t) = \phi(t - t_0) x(t_0) + \int_{t_0}^{t} \phi(t - \tau) B u(\tau) d\tau   (4)
Eq. (4) is the solution of the state equation with input u(t); it represents the change of state from x(t_0) to x(t) under the input u(t).
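A minimal numerical sketch of Eq. (4) follows: the ZSR convolution integral is approximated by a simple sum of e^{A(t-\tau)}Bu(\tau) over a fine grid. The system, initial state and unit-step input are those of Example 2 below, so the result can be compared with its closed-form answer.

import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
x0 = np.array([[1.0], [-1.0]])
u = lambda tau: 1.0          # unit step input
t = 2.0

taus = np.linspace(0.0, t, 2001)
dt = taus[1] - taus[0]
zsr = sum(expm(A * (t - tau)) @ B * u(tau) for tau in taus) * dt   # rectangle-rule integral
zir = expm(A * t) @ x0
print(zir + zsr)             # matches x(t) = [(1 + e^-2t)/2, -e^-2t]^T from Example 2 below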
Example 1: Obtain the response of the system for a step input of 10 units, given the state model
\dot{x}(t) = [[-1, 1], [-1, -10]] x(t) + [0; 10] u(t)
y(t) = [1  0] x(t),  x(0) = 0
Solution: The solution of the state equation is
x(t) = e^{At} x(0) + \int_0^t e^{A(t-\tau)} B u(\tau) d\tau   (1)
Since x(0) = 0,
x(t) = \int_0^t e^{A(t-\tau)} B u(\tau) d\tau = \int_0^t \phi(t-\tau) B u(\tau) d\tau   (2)
The state transition matrix is evaluated as \phi(t) = L^{-1}{ [sI - A]^{-1} }:
\phi(t) = [[1.0128 e^{-a_1 t} - 0.0128 e^{-a_2 t},  0.114 e^{-a_1 t} - 0.114 e^{-a_2 t}], [-0.114 e^{-a_1 t} + 0.114 e^{-a_2 t},  -0.0128 e^{-a_1 t} + 1.012 e^{-a_2 t}]]
where a_1 = 1.1125 and a_2 = 9.88; \phi(t - \tau) is obtained by replacing t with (t - \tau). Hence, with u(\tau) = 10,
x(t) = \int_0^t \phi(t - \tau) [0; 10] \cdot 10 d\tau = [[9.094 - 10.247 e^{-a_1 t} + 1.153 e^{-a_2 t}], [9.09 + 1.151 e^{-a_1 t} - 10.24 e^{-a_2 t}]]
Therefore
y(t) = C x(t) = [1  0] x(t) = 9.094 - 10.247 e^{-a_1 t} + 1.153 e^{-a_2 t},  with a_1 = 1.1125, a_2 = 9.88.
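Example 1 can be checked by simulating the state model directly. The sketch below uses scipy.signal.lsim with a 10-unit step and compares the final value of y with the steady-state value of about 9.09 obtained above.

import numpy as np
from scipy import signal

A = np.array([[-1.0, 1.0], [-1.0, -10.0]])
B = np.array([[0.0], [10.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

sys = signal.StateSpace(A, B, C, D)
t = np.linspace(0.0, 10.0, 1001)
u = 10.0 * np.ones_like(t)                 # step input of 10 units
tout, y, x = signal.lsim(sys, U=u, T=t, X0=[0.0, 0.0])
print(y[-1])                               # expected to approach about 9.09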
Example 2: Consider the system
\dot{x}(t) = [[0, 1], [-2, -3]] x(t) + [0; 1] u(t)
y(t) = [1  1] x(t),  x(0) = [1; -1]
with u(t) a unit step. Obtain the time response.
Solution: The state transition matrix for this A (obtained earlier) is
\phi(t) = [[2e^{-t} - e^{-2t}, e^{-t} - e^{-2t}], [-2e^{-t} + 2e^{-2t}, -e^{-t} + 2e^{-2t}]]
ZIR = \phi(t) x(0) = [[2e^{-t} - e^{-2t} - e^{-t} + e^{-2t}], [-2e^{-t} + 2e^{-2t} + e^{-t} - 2e^{-2t}]] = [e^{-t}; -e^{-t}]
ZSR = \int_0^t \phi(t-\tau) B u(\tau) d\tau = \int_0^t [[e^{-(t-\tau)} - e^{-2(t-\tau)}], [-e^{-(t-\tau)} + 2e^{-2(t-\tau)}]] d\tau
    = [[1/2 - e^{-t} + (1/2) e^{-2t}], [e^{-t} - e^{-2t}]]
Therefore
x(t) = ZIR + ZSR = [[e^{-t} + 1/2 - e^{-t} + (1/2) e^{-2t}], [-e^{-t} + e^{-t} - e^{-2t}]] = [[(1/2)(1 + e^{-2t})], [-e^{-2t}]]
y(t) = C x(t) = [1  1] x(t) = (1/2)(1 + e^{-2t}) - e^{-2t} = (1/2)(1 - e^{-2t}).
Example 3: Consider the system
\dot{x}(t) = [[0, 2], [-2, -5]] x(t) + [0; 1] u(t)
y(t) = [2  1] x(t),  x(0) = [1  2]^T
with input u(t) = e^{-2t}. Obtain x(t).
Solution: The state transition matrix is
\phi(t) = [[(4/3) e^{-t} - (1/3) e^{-4t}, (2/3) e^{-t} - (2/3) e^{-4t}], [-(2/3) e^{-t} + (2/3) e^{-4t}, -(1/3) e^{-t} + (4/3) e^{-4t}]]
x(t) = \phi(t) x(0) + \int_0^t \phi(t-\tau) B u(\tau) d\tau
     = [[(10/3) e^{-t} - e^{-2t} - (4/3) e^{-4t}], [-(5/3) e^{-t} + e^{-2t} + (8/3) e^{-4t}]]
Controllability and Observability
Consider a typical state diagram of a system with two state variables, X1(t) and X2(t). Suppose the control input u(t) affects the state variable X1(t) but cannot affect the state variable X2(t). Then the state variable X2(t) cannot be controlled by the input u(t), and the system is uncontrollable. In general, for an n-th order system with n state variables, if any one state variable cannot be influenced by the input u(t), the system is said to be UNCONTROLLABLE by the input u(t).
Definition: The linear system given by
\dot{X}(t) = A X(t) + B u(t)
Y(t) = C X(t) + D u(t)
is said to be completely state controllable if there exists an unconstrained input vector u(t) which transfers the initial state x(t_0) of the system to its final state x(t_f) in a finite time (t_f - t_0). If all initial states are controllable, the system is completely controllable; otherwise the system is uncontrollable.
Methods to determine controllability: 1) Gilbert's approach, 2) Kalman's approach.
Gilbert's approach: This approach determines not only the controllability but also identifies the uncontrollable state variables, if the system is uncontrollable. Consider the solution of the state equation \dot{X}(t) = A X(t) + B u(t), X(t_0) = X(0),
X(t) = e^{At} X(0) + \int_0^t e^{A(t-\tau)} B u(\tau) d\tau.
Assuming, without loss of generality, that the final state is X(t) = 0, the solution gives
-X(0) = \int_0^t e^{-A\tau} B u(\tau) d\tau.
When the state model is transformed to diagonal (canonical) form with distinct eigenvalues, the system is state controllable if and only if none of the elements of the transformed input matrix P^{-1}B is zero. If any element of P^{-1}B is zero, the corresponding canonical state variable cannot be influenced by the input u(t), and the system is uncontrollable. Since this test involves only the A and B matrices and is independent of the C matrix, the controllability of the system is frequently referred to as the controllability of the pair [A, B].
Example: Check the controllability of the system. If it is uncontrollable, identify the state which is uncontrollable.
\dot{X}(t) = [[-1, 1], [0, -2]] X(t) + [-1; 0] u(t)
Solution: First convert the model into diagonal form. The characteristic equation is
|\lambda I - A| = (\lambda + 1)(\lambda + 2) = \lambda^2 + 3\lambda + 2 = 0, giving \lambda = -2 and \lambda = -1.
Eigenvector associated with \lambda = -2: v_1 = [1; -1].
Eigenvector associated with \lambda = -1: v_2 = [1; 0].
Hence the modal matrix and its inverse are
P = [[1, 1], [-1, 0]],  P^{-1} = [[0, -1], [1, 1]]
P^{-1} A P = [[0, -1], [1, 1]] [[-1, 1], [0, -2]] [[1, 1], [-1, 0]] = [[-2, 0], [0, -1]]
P^{-1} B = [[0, -1], [1, 1]] [-1; 0] = [0; -1]
Since the vector P^{-1}B contains a zero element (the first entry), the system is uncontrollable; the canonical (transformed) state associated with the eigenvalue \lambda = -2 is the uncontrollable state.
2) Kalman's approach: This method determines the controllability of the system.
Statement: The system described by
\dot{x}(t) = A x(t) + B u(t),  y(t) = C x(t)
is said to be completely state controllable if and only if the rank of the controllability matrix Q_c equals the order of the system matrix A, where
Q_c = [B | AB | A^2 B | ... | A^{n-1} B]
i.e. if the system matrix A is of order n x n, state controllability requires \rho(Q_c) = n, where \rho(Q_c) is the rank of Q_c.
Example: Using Kalman's approach, determine the state controllability of the system described by
\ddot{y} + 3\dot{y} + 2y = \dot{u} + u   (1)
with x_1 = y and x_2 = \dot{y} - u. The state equations are
\dot{x}_1 = x_2 + u
\dot{x}_2 = -2x_1 - 3x_2 - 2u
i.e. [\dot{x}_1; \dot{x}_2] = [[0, 1], [-2, -3]] [x_1; x_2] + [1; -2] u(t)
so A = [[0, 1], [-2, -3]] and B = [1; -2]. Now
AB = [[0, 1], [-2, -3]] [1; -2] = [-2; 4]
Q_c = [B | AB] = [[1, -2], [-2, 4]]
|Q_c| = 4 - 4 = 0, hence \rho(Q_c) = 1 < 2, and the system is not completely state controllable.
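The Kalman rank test of the example above is easy to automate. The sketch builds Q_c = [B, AB, ..., A^{n-1}B] for the given pair (A, B) and checks its rank against n.

import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[1.0], [-2.0]])

n = A.shape[0]
Qc = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
print(Qc)
print(np.linalg.matrix_rank(Qc))   # 1 < 2, so the pair (A, B) is not completely controllable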
Design of a full-order state observer (worked example, continued): the observer gain vector m is chosen so that the observer error dynamics have a prescribed characteristic equation.
1) The desired observer characteristic equation is
S^3 + 9S^2 + 20S + 60 = 0
Comparing with S^3 + a_1 S^2 + a_2 S + a_3 = 0 gives a_1 = 9, a_2 = 20, a_3 = 60.   (I)
2) Let the observer gain be m = [m_1, m_2, m_3]^T. For the system considered here, with
A = [[0, 0, -6], [1, 0, -11], [0, 1, -6]] and C = [0  0  1],
[A - mC] = [[0, 0, -(6 + m_1)], [1, 0, -(11 + m_2)], [0, 1, -(6 + m_3)]]
and the observer characteristic equation |SI - (A - mC)| = 0 becomes
S^3 + (6 + m_3) S^2 + (11 + m_2) S + (6 + m_1) = 0
Comparing with S^3 + a_1 S^2 + a_2 S + a_3 = 0,
a_1 = 6 + m_3,  a_2 = 11 + m_2,  a_3 = 6 + m_1   (II)
3) From (I) and (II):
9 = 6 + m_3, so m_3 = 3
20 = 11 + m_2, so m_2 = 9
60 = 6 + m_1, so m_1 = 54
4) Hence m = [54  9  3]^T.
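The observer gain just computed can be verified numerically: the eigenvalues of (A - mC) should coincide with the roots of the desired polynomial S^3 + 9S^2 + 20S + 60.

import numpy as np

A = np.array([[0.0, 0.0, -6.0],
              [1.0, 0.0, -11.0],
              [0.0, 1.0, -6.0]])
C = np.array([[0.0, 0.0, 1.0]])
m = np.array([[54.0], [9.0], [3.0]])

print(np.sort(np.linalg.eigvals(A - m @ C)))
print(np.sort(np.roots([1.0, 9.0, 20.0, 60.0])))   # should match the line above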
Reduced-order state observer: A reduced-order observer is required when only some of the state variables cannot be measured directly.
Design of the reduced-order state observer: In this design procedure, assume one measurement Y(t), which measures one state X_1(t). The output equation is
Y(t) = C X(t),  where C = [1  0  ...  0]   (1)
Partition the state X(t) into the directly measured state X_1(t) and the states X_e(t) that are not directly measurable and are to be estimated:
X(t) = [X_1(t); X_e(t)]   (2)
Partitioning the matrices accordingly, the state equation can be written as
[\dot{X}_1(t); \dot{X}_e(t)] = [[a_11, a_1e], [a_e1, a_ee]] [X_1(t); X_e(t)] + [b_1; b_e] u(t)   (3)
Y(t) = [1  0] [X_1(t); X_e(t)]   (4)
From Eq. (3),
\dot{X}_e(t) = a_ee X_e(t) + a_e1 X_1(t) + b_e u(t)   (5)
where a_e1 X_1(t) + b_e u(t) is a known input. From Eq. (4), Y(t) = X_1(t). Also from Eq. (3),
\dot{X}_1(t) = a_11 X_1(t) + a_1e X_e(t) + b_1 u(t).
Collecting the known terms,
\dot{Y}(t) - a_11 X_1(t) - b_1 u(t) = a_1e X_e(t)   (6)
where the left-hand side consists of known terms. Comparing Eqs. (5) and (6) with the standard model \dot{X}(t) = A X(t) + b u(t), Y(t) = C X(t), we have the correspondence
X(t) -> X_e(t),  A -> a_ee,  b u -> a_e1 X_1(t) + b_e u(t),  C -> a_1e,  Y -> \dot{Y}(t) - a_11 X_1(t) - b_1 u(t)   (7)
Using Eq. (7), the state model of the reduced-order observer can be written, by analogy with the full-order observer \dot{\hat{X}}(t) = A\hat{X}(t) + b u(t) + m(y(t) - \hat{y}(t)), as
\dot{\hat{X}}_e(t) = a_ee \hat{X}_e(t) + a_e1 Y(t) + b_e u(t) + m[\dot{Y}(t) - a_11 Y(t) - b_1 u(t) - a_1e \hat{X}_e(t)]   (8)
where \hat{y}(t) = a_1e \hat{X}_e(t). Defining the estimation error \tilde{X}_e(t) = X_e(t) - \hat{X}_e(t) and substituting Eqs. (5) and (8), on simplification,
\dot{\tilde{X}}_e(t) = (a_ee - m a_1e) \tilde{X}_e(t)   (9)
The corresponding characteristic equation is
|SI - (a_ee - m a_1e)| = 0   (10)
Comparing Eq. (10) with the desired characteristic equation, m can be evaluated in the same way as for the full-order observer. To implement the reduced-order observer, Eq. (8) can be written as
\dot{\hat{X}}_e(t) = (a_ee - m a_1e) \hat{X}_e(t) + (a_e1 - m a_11) Y(t) + (b_e - m b_1) u(t) + m \dot{Y}(t)   (11)
Since \dot{Y}(t) is difficult to realize, define
X'_e(t) = \hat{X}_e(t) - m Y(t)
so that
\dot{X}'_e(t) = \dot{\hat{X}}_e(t) - m \dot{Y}(t) = (a_ee - m a_1e) \hat{X}_e(t) + (a_e1 - m a_11) Y(t) + (b_e - m b_1) u(t)   (12)
Equation (12) can be represented by the block diagram shown below.
Example: A system is described by
\dot{X}(t) = [[0, 0, 0], [1, 0, -24], [0, 1, -10]] X(t) + [50; 2; 1] u(t)
Y(t) = [0  0  1] X(t)
Design a reduced-order observer for the unmeasurable states such that the observer poles are at -10, -10.
Solution: From the output equation it follows that the states X_1(t) and X_2(t) cannot be measured directly, as the corresponding elements of the C matrix are zero. Taking X_3 as the measured state and X_e = [X_1; X_2] as the states to be estimated, the partitioned matrices are
a_ee = [[0, 0], [1, 0]],  a_e1 = [0; -24],  a_1e = [0  1],  a_11 = [-10],  b_e = [50; 2],  b_1 = [1]
Let m = [m_1; m_2]. The characteristic equation of the observer is
|SI - (a_ee - m a_1e)| = S^2 + m_2 S + m_1 = 0
The desired characteristic equation is
(S + 10)(S + 10) = S^2 + 20S + 100 = 0
Equating coefficients of like powers of S gives m_2 = 20 and m_1 = 100, i.e. m = [100; 20].
Hence the state model of the reduced-order observer (Eq. (12)) is
\dot{X}'_e(t) = {[[0, 0], [1, 0]] - [100; 20][0  1]} \hat{X}_e(t) + {[0; -24] - [100; 20](-10)} Y(t) + {[50; 2] - [100; 20](1)} u(t)
             = [[0, -100], [1, -20]] \hat{X}_e(t) + [1000; 176] Y(t) + [-50; -18] u(t)
with \hat{X}_e(t) = X'_e(t) + m Y(t).
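A quick check of the reduced-order observer gain above: the eigenvalues of (a_ee - m a_1e) should both lie at -10.

import numpy as np

aee = np.array([[0.0, 0.0], [1.0, 0.0]])
a1e = np.array([[0.0, 1.0]])
m = np.array([[100.0], [20.0]])

print(np.linalg.eigvals(aee - m @ a1e))   # expected: both eigenvalues at -10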
Design of a compensator by the principle of separation: Consider the completely controllable and observable system described by
\dot{X}(t) = A X(t) + B u(t)   (1)
Y(t) = C X(t)   (2)
Let the control law be
u(t) = -K X(t)   (3)
Assume the full-order observer model
\dot{\hat{X}}(t) = A \hat{X}(t) + B u(t) + M(y(t) - C\hat{X}(t))   (4)
Hence the state feedback control law using the estimated states is u(t) = -K\hat{X}(t). Using this in Eq. (1),
\dot{X}(t) = A X(t) - B K \hat{X}(t) = (A - BK) X(t) + BK(X(t) - \hat{X}(t))   (5)
Defining the observer state error vector
\tilde{X}(t) = X(t) - \hat{X}(t)   (6)
and using Eq. (6) in Eq. (5),
\dot{X}(t) = (A - BK) X(t) + BK \tilde{X}(t)   (7)
We know that the observer error-vector model is
\dot{\tilde{X}}(t) = (A - MC) \tilde{X}(t)   (8)
Equations (7) and (8) can be combined in the vector-matrix model
[\dot{X}(t); \dot{\tilde{X}}(t)] = [[(A - BK), BK], [0, (A - MC)]] [X(t); \tilde{X}(t)]   (9)
Equation (9) represents a 2n-dimensional system. The corresponding characteristic equation is
|SI - (A - BK)| |SI - (A - MC)| = 0   (10)
Equation (10) shows that the poles (eigenvalues) of the overall system are the union of the poles of the controller and of the observer. Hence the controller and the observer can be designed separately and, when used together, the pole locations remain unchanged. By this separation principle, the controller is designed from |SI - (A - BK)| = 0 and the observer from |SI - (A - MC)| = 0. The corresponding block diagram is as shown.
Transfer function of the compensator:
\dot{\hat{X}}(t) = A \hat{X}(t) + B u(t) + M(y(t) - C\hat{X}(t))   (1)
u(t) = -K \hat{X}(t)   (2)
From Eqs. (1) and (2),
\dot{\hat{X}}(t) = (A - BK - MC) \hat{X}(t) + M Y(t)   (3)
Taking the Laplace transform,
s\hat{X}(s) = (A - BK - MC)\hat{X}(s) + M Y(s)
M Y(s) = [sI - (A - BK - MC)] \hat{X}(s)   (4)
Taking the Laplace transform of Eq. (2),
U(s) = -K \hat{X}(s)   (5)
From Eqs. (4) and (5), the compensator transfer function is
D(s) = U(s) / (-Y(s)) = K [sI - (A - BK - MC)]^{-1} M.
Example: A system is described by
\dot{X}(t) = [[0, 1, 0], [0, 0, 1], [-6, -11, -6]] X(t) + [0; 0; 1] u(t)
Y(t) = [1  0  0] X(t)
Design a controller to place the closed-loop poles at -1 +/- j1, -5. Also design an observer such that the observer poles are at -6, -6, -6. Derive the uncompensated and compensated transfer functions.
Solution: The system is controllable and observable. Using the controller design and observer design procedures, the controller gain is K = [4  1  1] and the observer gain is M = [12  25  -72]^T.
The transfer function of the uncompensated system is
M_u(s) = C[sI - A]^{-1}B = 1 / (s^3 + 6s^2 + 11s + 6)
The transfer function of the compensator is
M_c(s) = U(s) / (-Y(s)) = K[sI - (A - BK - MC)]^{-1} M.
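The separation-principle design above can be verified numerically: the eigenvalues of (A - BK) should be the desired controller poles and the eigenvalues of (A - MC) the desired observer poles.

import numpy as np

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-6.0, -11.0, -6.0]])
B = np.array([[0.0], [0.0], [1.0]])
C = np.array([[1.0, 0.0, 0.0]])
K = np.array([[4.0, 1.0, 1.0]])
M = np.array([[12.0], [25.0], [-72.0]])

print(np.sort(np.linalg.eigvals(A - B @ K)))   # expected: -5, -1 +/- j1
print(np.sort(np.linalg.eigvals(A - M @ C)))   # expected: -6, -6, -6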
CONTROLLERS
The controller is an element which accepts the error in some form and decides the proper corrective action. The output of the controller is then applied to the system or process. The accuracy of the entire system depends on how sensitive the controller is to the detected error and how it manipulates that error. The controller has its own logic to handle the error.
The controllers are classified based on the response and the mode of operation. On the basis of the mode of operation, they are classified into continuous and discontinuous controllers. The discontinuous-mode controllers are further classified as ON-OFF controllers and multiposition controllers. Continuous-mode controllers, depending on the input-output relationship, are classified into three basic types: the proportional controller, the integral controller and the derivative controller. In many practical cases these controllers are used in combination; examples of such composite controllers are the Proportional-Integral (PI) controller, the Proportional-Derivative (PD) controller and the Proportional-Integral-Derivative (PID) controller.
The block diagram of a basic control system with a controller is shown in Figure. The error detector compares the feedback signal b(t) with the reference input r(t) to generate the error e(t) = r(t) - b(t).
Proportional Controller: In the proportional control mode, the output of the controller is proportional to the error e(t). The relation between the error and the controller output is determined by a constant called the proportional gain constant, denoted K_P, i.e. p(t) = K_P e(t). Although there is a linear relation between the controller output and the error, for zero error the controller output should not be zero, as this would lead to zero input to the system or process. Hence there exists some controller output P_0 for zero error, and mathematically the proportional control mode is expressed as
p(t) = K_P e(t) + P_0
The performance of the proportional controller depends on the proper design of the gain K_P. As the proportional gain K_P increases, the system gain increases and hence the steady state error decreases. But due to the high gain, the peak overshoot and settling time increase, and this may lead to instability of the system. So a compromise is made to keep the steady state error and the overshoot within acceptable limits. Hence, when the proportional controller is used, the error is reduced but cannot be made zero. The proportional controller is suitable where manual reset of the operating point is possible and the load changes are small.
Integral Controller: We have seen that the proportional controller cannot adapt to changing load conditions. To overcome this, the integral mode or reset action controller is used. In this controller, the controller output p(t) is changed at a rate which is proportional to the actuating error signal e(t). Mathematically, the integral control mode is expressed as
p(t) = K_i \int_0^t e(t) dt + p(0)
The constant K_i is called the integral constant. The output of the controller at any instant is proportional to the area under the actuating error curve up to that instant. If the error is zero, the controller output does not change. The integral controller is a relatively slow controller; it changes its output at a rate which depends on the integrating time constant, until the error signal is cancelled. Compared to the proportional controller, the integral controller requires time to build up an appreciable output; however, it continues to act until the error signal disappears. Hence, with the integral controller the steady state error can be made zero. The reciprocal of the integral constant is known as the integral time constant T_i, i.e. T_i = 1/K_i.
Derivative Controller: In this mode, the output of the controller depends on the rate of change of the error with respect to time; hence it is also known as the rate action mode or anticipatory action mode. The mathematical equation for the derivative controller is
p(t) = K_d de(t)/dt
where K_d is the derivative gain constant. The derivative gain constant indicates by how much percentage the controller output changes for every percentage-per-second rate of change of the error. The advantage of the derivative control action is that it responds to the rate of change of the error and can produce a significant correction before the magnitude of the actuating error becomes too large. Derivative control thus anticipates the actuating error, initiates early corrective action and tends to increase the stability of the system by improving the transient response. When the error is zero or constant, the derivative controller output is zero; hence it is never used alone.
PI Controller: This is a composite control mode obtained by combining the proportional mode and the integral mode. The mathematical expression for the PI controller is
p(t) = K_P e(t) + K_i \int_0^t e(t) dt
and the transfer function is
G_c(s) = K_P + K_i / s
The advantage of this controller is that it combines the one-to-one correspondence of the proportional controller with the elimination of the steady state error due to the integral controller. Basically the integral controller is a low-pass circuit. The PI controller has the following effects on the system:
1. It increases the order of the system.
2. It increases the type of the system.
3. It improves the steady state accuracy.
4. It increases the rise time, so the response becomes slow.
5. It filters out high frequency noise.
6. It makes the response more oscillatory.
PD Controller: This is a composite control mode obtained by combining the proportional mode and the derivative mode. The mathematical expression for the PD controller is
p(t) = K_P e(t) + K_d de(t)/dt
and the transfer function is
G_c(s) = K_P + K_d s
The PD controller has the following effects on the system:
1. It increases the damping ratio and reduces the overshoot.
2. It reduces the rise time and makes the response fast.
3. It reduces the settling time.
4. The type of the system remains unchanged.
5. The steady state error remains unchanged.
In general, it improves the transient part without affecting the steady state.

PID Controller: This is a composite control mode obtained by combining the proportional mode, the integral mode and the derivative mode. The mathematical expression for the PID controller is
p(t) = K_P e(t) + K_i \int_0^t e(t) dt + K_d de(t)/dt
and the transfer function is
G_c(s) = K_P + K_i / s + K_d s
This is to note that derivative control is effective in the transient part of the response as error is varying, whereas in the steady state, usually if any error is there, it is constant and does not vary with time. In this aspect, derivative control is not effective in the steady state part of the response.
In the steady state part, if any error is there,
integral control will be effective to give proper correction to minimize the steady state error. An integral controller is basically a low pass circuit and hence will not be effective in transient part of the response where error is fast changing. Hence for the whole range of time response both derivative and integral control actions should be provided in addition to the inbuilt proportional control action for negative feedback control systems.
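As a minimal sketch of how the three-term law p(t) = K_P e(t) + K_i \int e dt + K_d de/dt is realized in practice, the following Python class implements a discrete PID controller with rectangular integration and a backward-difference derivative. The gains and sample time are illustrative values, not design recommendations from these notes.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt                    # integral-term accumulator
        derivative = (error - self.prev_error) / self.dt    # backward-difference derivative
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Usage sketch: controller = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01); u = controller.update(r - y)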
UNIT-6 NON LINEAR SYSTEMS
UNIT - 6 Non-linear systems: Introduction, behavior of non-linear system, common physical non linearity-saturation, friction, backlash, dead zone, relay, multi variable non-linearity. 3 Hours
Many practical systems are sufficiently nonlinear that the important features of their performance may be completely overlooked if they are analyzed and designed through linear techniques. The mathematical models of nonlinear systems are represented by nonlinear differential equations; hence there are no general methods for the analysis and synthesis of nonlinear control systems. The fact that the superposition principle does not apply to nonlinear systems makes generalisation difficult, and the study of many nonlinear systems has to be specific to typical situations.
6.2 Behaviour of Nonlinear Systems The most important feature of nonlinear systems is that nonlinear systems do not obey the principle of superposition. Due to this reason, in contrast to the linear case, the response of nonlinear systems to a particular test signal is no guide to their behaviour to other inputs. The nonlinear system response may be highly sensitive to input amplitude. For example, a nonlinear system giving best response for a certain step input may exhibit highly unsatisfactory behaviour when the input amplitude is changed.
Hence, in a nonlinear system, the stability is very much dependent
on the input and also the initial state. Further, the nonlinear systems may exhibit limit cycles which are selfsustained oscillations of fixed frequency and amplitude. Once the system trajectories converge to a limit cycle, it will continue to remain in the closed trajectory in the state space identified as limit cycles.
In many systems the limit cycles are undesirable
particularly when the amplitude is not small and result in some unwanted phenomena.
Figure 6.1
A nonlinear system, when excited by a sinusoidal input, may generate several harmonics in addition to the fundamental corresponding to the input frequency. The amplitude of the fundamental is usually the largest, but the harmonics may be of significant amplitude in many situations.
Another peculiar characteristic exhibited by nonlinear systems is the so-called jump phenomenon. For example, consider the frequency response curve of a spring-mass-damper system. The frequency responses of the system with a linear spring, a hard spring and a soft spring are as shown in Fig. 6.2(a), Fig. 6.2(b) and Fig. 6.2(c) respectively. For a hard spring, as the input frequency is gradually increased from zero, the measured response follows the curve through the points A, B and C, but at C an increment in frequency results in a discontinuous jump down to the point D, after which, with further increase in frequency, the response curve follows through D and E. If the frequency is now decreased, the response follows the curve E-D-F with a jump up to B from the point F, and the response curve then moves towards A. This phenomenon, which is peculiar to nonlinear systems, is known as jump resonance. For a soft spring, the jump phenomenon occurs as shown in Fig. 6.2(c).
Fig. 6.2(a)   Fig. 6.2(b)   Fig. 6.2(c)
When excited by a sinusoidal input of constant frequency whose amplitude is increased from low values, the output frequency at some point exactly matches the input frequency and continues to remain so thereafter. This phenomenon, which results in a synchronisation or matching of the output frequency with the input frequency, is called frequency entrainment or synchronisation.
6.3 Methods of Analysis
Nonlinear systems are difficult to analyse, and arriving at general conclusions is tedious. However, starting with the classical techniques for the solution of standard nonlinear differential equations, several techniques have been evolved which suit different types of analysis. It should be emphasised that very often the conclusions arrived at are useful only for the system under the specified conditions and do not always lead to generalisations. The commonly used methods are listed below.
Linearization Techniques: In reality all systems are nonlinear, and linear systems are only approximations of nonlinear systems. In some cases the linearization yields useful information, whereas in other cases the linearised model has to be modified when the operating point moves from one region to another. Many techniques such as the perturbation method, series approximation techniques and quasi-linearization techniques are used to linearise a nonlinear system.
Phase Plane Analysis: This method is applicable to second order linear or nonlinear systems for the study of the nature of phase trajectories near the equilibrium points. The system behaviour is qualitatively analysed along with design of system parameters so as to get the desired response from the system.
The periodic oscillations in nonlinear systems
called limit cycle can be identified with this method which helps in investigating the stability of the system.
Describing Function Analysis: This method is based on the principle of harmonic linearization and applies to a certain class of nonlinear systems with a low-pass characteristic. The method is useful for studying the existence of limit cycles and for determining the amplitude, frequency and stability of these limit cycles. Accuracy is better for higher order systems, as they have a better low-pass characteristic.
Liapunov’s Method for Stability: The analytical solution of a nonlinear system is rarely possible. If a numerical solution is attempted, the question of stability behaviour can not be fully answered as solutions to an infinite set of initial conditions are needed. The Russian mathematician A.M. Liapunov introduced and formalised a method which allows one to conclude about the stability without solving the system equations. 6.4 Classification of Nonlinearities The nonlinearities are classified into i) Inherent nonlinearities and ii) Intentional nonlinearities. The nonlinearities which are present in the components used in system due to the inherent imperfections or properties of the system are known as inherent nonlinearities. Examples are saturation in magnetic circuits, dead zone, back lash in gears etc.
However in some cases introduction of nonlinearity may improve the
performance of the system, make the system more economical consuming less space and more reliable than the linear system designed to achieve the same objective.
Such
nonlinearities introduced intentionally to improve the system performance are known as intentional nonlinearities.
Examples are different types of relays which are very
frequently used to perform various tasks. But it should be noted that the improvement in system performance due to nonlinearity is possible only under specific operating conditions; under other conditions, nonlinearity generally degrades the performance of the system.
6.5 Common Physical Nonlinearities: The common examples of physical nonlinearities are saturation, dead zone, coulomb friction, stiction, backlash, different types of springs and different types of relays.
Saturation: This is the most common of all nonlinearities. All practical systems, when driven by sufficiently large signals, exhibit the phenomenon of saturation due to limitations of physical capabilities of their components. Saturation is a common phenomenon in magnetic circuits and amplifiers.
Dead zone: Some systems do not respond to very small input signals. For a particular range of input, the output is zero. This is called dead zone existing in a system. The input-output curve is shown in figure.
Figure 6.3
Backlash: Another important nonlinearity commonly occurring in physical systems is hysteresis in mechanical transmission such as gear trains and linkages. This nonlinearity is somewhat different from magnetic hysteresis and is commonly referred to as backlash. In servo systems, the gear backlash may cause sustained oscillations or chattering phenomenon and the system may even turn unstable for large backlash.
Figure 6.4
Relay: A relay is a nonlinear power amplifier which can provide large power amplification inexpensively and is therefore deliberately introduced in control systems. A relay-controlled system can be switched abruptly between several discrete states, which are usually off, full forward and full reverse. Relay-controlled systems find wide applications in the control field. The characteristic of an ideal relay is as shown in the figure. In practice a relay has a definite amount of dead zone, as shown. This dead zone is caused by the fact that the relay coil requires a finite amount of current to actuate the relay. Further, since a larger coil current is needed to close the relay than the current at which the relay drops out, the characteristic always exhibits hysteresis.
Multivariable Nonlinearity: Some nonlinearities such as the torque-speed characteristics of a servomotor, transistor characteristics etc., are functions of more than one variable. Such nonlinearities are called multivariable nonlinearities.
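The static input-output characteristics of the common nonlinearities described above are easy to model as simple functions. The sketch below gives one possible form for saturation, dead zone and an ideal relay; the limit, dead-zone width and relay level are illustrative values.

import numpy as np

def saturation(x, limit=1.0):
    return np.clip(x, -limit, limit)

def dead_zone(x, width=0.5):
    return np.where(np.abs(x) <= width, 0.0, x - np.sign(x) * width)

def ideal_relay(x, level=1.0):
    return level * np.sign(x)

x = np.linspace(-2.0, 2.0, 9)
print(saturation(x))
print(dead_zone(x))
print(ideal_relay(x))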
UNIT-7 PHASE PLANE ANALYSIS UNIT - 7 Phase plane method, singular points, stability of nonlinear system, limit cycles, construction of phase trajectories. 7 Hours
Phase plane analysis is one of the earliest techniques developed for the study of second order nonlinear systems. It may be noted that in the state space formulation, the state variables chosen are usually the output and its derivatives. The phase plane is thus a state plane in which the two state variables x_1 and x_2 are analysed; these may be the output variable y and its derivative \dot{y}. The method was first introduced by Poincare, a French mathematician. The method is used for obtaining graphically a solution of the following two simultaneous equations of an autonomous system:
\dot{x}_1 = f_1(x_1, x_2),  \dot{x}_2 = f_2(x_1, x_2)
where f_1 and f_2 are either linear or nonlinear functions of the state variables x_1 and x_2 respectively. The state plane with coordinate axes x_1 and x_2 is called the phase plane. In many cases, particularly in the phase variable representation of systems, the equations take the form
\dot{x}_1 = x_2,  \dot{x}_2 = f(x_1, x_2)
The plot of the state trajectories or phase trajectories of above said equation thus gives an idea of the solution of the state as time t evolves without explicitly solving for the state. The phase plane analysis is particularly suited to second order nonlinear systems with no input or constant inputs.
It can be extended to cover other inputs as well such as ramp inputs, pulse
inputs and impulse inputs.
7.2 Phase Portraits
From the fundamental theorem of uniqueness of solutions of the state equations or differential equations, it can be seen that the solution of the state equation starting from an initial state in the state space is unique. This will be true if f_1(x_1, x_2) and f_2(x_1, x_2) are analytic. For such a system, consider the points in the state space at which the derivatives of all the state variables are zero. These points are called singular points. These are in fact the equilibrium points of the system: if the system is placed at such a point, it will continue to lie there if left undisturbed. A family of phase trajectories starting from different initial states is called a phase portrait. As time t increases, the phase portrait graphically shows how the system moves in the entire state plane from the initial states in the different regions. Since the solutions from each of the initial conditions are unique, the phase trajectories do not cross one another. If the system has nonlinear elements which are piece-wise linear, the complete state space can be divided into different regions and the phase plane trajectories constructed for each of the regions separately.
7.3 Phase Plane Method
Consider the homogeneous second order system with differential equation
\ddot{x} + 2\zeta\omega_n \dot{x} + \omega_n^2 x = 0
where \zeta and \omega_n are the damping factor and undamped natural frequency of the system. Defining the state variables as x_1 = x and x_2 = \dot{x}, we get the state equations in the state variable form
\dot{x}_1 = x_2,  \dot{x}_2 = -\omega_n^2 x_1 - 2\zeta\omega_n x_2
These equations may then be solved for the phase variables x_1 and x_2, and the time response plots of x_1 and x_2 for various values of damping with given initial conditions can be plotted. When the differential equations describing the dynamics of the system are nonlinear, it is in general not possible to obtain a closed form solution for x_1 and x_2. For example, if the spring force is nonlinear, say (k_1 x + k_2 x^3), the state equations take the form
\dot{x}_1 = x_2,  \dot{x}_2 = -(k_1 x_1 + k_2 x_1^3) - 2\zeta\omega_n x_2
Solving these equations by integration is no longer an easy task. In such situations, a graphical method known as the phase-plane method is found to be very helpful.
The coordinate plane with axes that correspond to the dependent variables x_1 and x_2 is called the phase plane. The curve described by the state point (x_1, x_2) in the phase plane with respect to time is called a phase trajectory. A phase trajectory can be easily constructed by graphical techniques.
7.3.1 Isoclines Method: Let the state equations for a nonlinear system be in the form
\dot{x}_1 = f_1(x_1, x_2),  \dot{x}_2 = f_2(x_1, x_2)
where both f_1 and f_2 are analytic. From the above equations, the slope of the trajectory is given by
dx_2/dx_1 = f_2(x_1, x_2) / f_1(x_1, x_2) = M
Therefore, the locus of constant slope M of the trajectory is given by
f_2(x_1, x_2) = M f_1(x_1, x_2)
This equation gives the family of isoclines. For different values of M, the slope of the trajectory, different isoclines can be drawn in the phase plane. Knowing the value of M on a given isocline, it is easy to draw line segments on each of these isoclines. Consider a simple linear system with state equations of this form. Dividing the two state equations gives the slope of the state trajectory in the x_1-x_2 plane; for a constant value of this slope, say M, we get an equation describing a straight line in the x_1-x_2 plane. We can draw different lines in the x_1-x_2 plane for different values of M; these are called isoclines. If we draw a sufficiently large number of isoclines to cover the complete state space as shown, we can see how the state trajectories move in the state plane. Different trajectories can be drawn from different initial conditions. A large number of such trajectories together form a phase portrait. A few typical trajectories are shown in the figure below.
Figure 7.1
The procedure for construction of the phase trajectories can be summarised as below (a small numerical sketch follows the list):
1. For the given nonlinear differential equation, define the state variables as x_1 and x_2 and obtain the state equations \dot{x}_1 = f_1(x_1, x_2), \dot{x}_2 = f_2(x_1, x_2).
2. Determine the equation of the isoclines, f_2(x_1, x_2) = M f_1(x_1, x_2).
3. For typical values of M, draw a large number of isoclines in the x_1-x_2 plane.
4. On each of the isoclines, draw small line segments with slope M.
5. From an initial condition point, draw a trajectory following the line segments with slopes M on each of the isoclines.
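As a companion to the graphical procedure, the following sketch constructs a phase trajectory numerically by stepping along the trajectory slope with small forward-Euler steps. The linear system and initial condition are illustrative choices, not the example systems of this unit.

import numpy as np

def f(x):
    x1, x2 = x
    return np.array([x2, -2.0 * x1 - 3.0 * x2])   # x1' = x2, x2' = -2x1 - 3x2 (assumed system)

x = np.array([2.0, 0.0])        # illustrative initial condition
dt, steps = 0.01, 800
trajectory = [x.copy()]
for _ in range(steps):
    x = x + dt * f(x)           # step along the local trajectory slope
    trajectory.append(x.copy())

trajectory = np.array(trajectory)
print(trajectory[::200])        # a few points (x1, x2) of the phase trajectory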
Example 7.1: For the system having the given transfer function in the forward path and a relay with dead zone as the nonlinear element, draw the phase trajectory originating from the initial condition (3, 0).
Since the input is zero, e = r - c = -c, and the differential equation of the system in terms of the error becomes
\ddot{e} + \dot{e} = -u
where u is the output of the relay with dead zone (u = +1, 0 or -1 according to whether e lies above, within or below the dead zone of width +/-1).
Defining the state variables as x_1 = e and x_2 = \dot{e}, we get the state equations
\dot{x}_1 = x_2,  \dot{x}_2 = -x_2 - u
The slope of the trajectory is given by
M = dx_2/dx_1 = (-x_2 - u) / x_2
and the equation of the isoclines is
x_2 = -u / (1 + M)
We can identify three regions in the state plane depending on the value of e = x_1.
Region 1: Here u = 1, so that the isoclines are given by
x_2 (1 + M) = -1, or x_2 = -1 / (1 + M)
For different values of M these are straight lines parallel to the x_1-axis:
M:    0     -1/3   -1/2   -2    -3    4      3      2      1
x_2:  -1    -1.5   -2     1     0.5   -0.2   -0.25  -0.33  -0.5
Region 2: Here u = 0, so that the isoclines are given by x_2 (1 + M) = 0, or M = -1; the trajectories themselves are lines of constant slope -1.
Region 3: Here u = -1, so that on substitution we get
x_2 (1 + M) = 1, or x_2 = 1 / (1 + M)
These are also lines parallel to the x_1-axis, at a constant distance decided by the value of the trajectory slope M:
M:    0     1/3    1     2     3     -5     -4     -3     -2
x_2:  1     0.75   0.5   1/3   0.25  -0.25  -0.33  -0.5   -1
Figure 7.2: Phase portrait of Example 7.1
The isoclines drawn for all three regions are as shown in the figure. It is seen that trajectories from either region 1 or 2 approach the boundary between the regions and approach the origin along a line of slope -1. The state can settle at any value of x_1 between -1 and +1, this being the dead zone, and no further movement towards the origin along the x_1-axis is possible. This results in a steady state error, the maximum value of which is equal to half the dead zone. However, the presence of the dead zone can reduce the oscillations and result in a faster response. In order that the steady state error in the output is reduced, it is better to have as small a dead zone as possible.
Example 7.2: For the system having the given closed loop transfer function, plot the phase trajectory originating from the initial condition (-1, 0).
The differential equation for the system is written as before, and the slope of the trajectory dx_2/dx_1 is obtained from the state equations. When x_2 = 0 the isocline equations show that all the isoclines pass through the point x_1 = 2, x_2 = 0. Isoclines are drawn for M = 0, 2, 4, 8 and for M = -2, -4, -6, -10.
Figure 7.3
The isoclines are drawn as shown in the figure. The starting point of the trajectory is marked at (-1, 0). At (-1, 0) the slope is infinite, i.e. the trajectory starts at an angle of 90 degrees. From the next isocline, the average slope is (8 + 4)/2 = 6, i.e. a line segment with slope 6 is drawn (at an angle of 80.5 degrees). The same procedure is repeated and the complete phase trajectory is obtained as shown in the figure.
7.3.2 Delta Method: The delta method of constructing phase trajectories is applied to systems of the form
\ddot{x} + f(x, \dot{x}, t) = 0
where f(x, \dot{x}, t) may be linear or nonlinear and may even be time varying, but must be continuous and single valued. With the help of this method, a phase trajectory for any system with a step, ramp or any time-varying input can be conveniently drawn. The method results in considerable time saving when a single trajectory or only a few phase trajectories are required rather than a complete phase portrait. While applying the delta method, the above equation is first converted to the form
\ddot{x} + \omega^2 [x + \delta(x, \dot{x}, t)] = 0
In general \delta depends upon the variables x, \dot{x} and t, but for short intervals the changes in these variables are negligible; thus over a short interval \delta may be treated as a constant. Choosing the state variables as x_1 = x and x_2 = \dot{x}/\omega, we get
\dot{x}_1 = \omega x_2,  \dot{x}_2 = -\omega (x_1 + \delta)
Therefore the slope equation over a short interval is given by
dx_2/dx_1 = -(x_1 + \delta) / x_2
With \delta known at any point P on the trajectory and assumed constant over a short interval, a short segment of the trajectory can be drawn using this slope; the segment is an arc of a circle centred at (-\delta, 0). A simple geometrical construction can be used for this purpose:
1. From the initial point, calculate the value of \delta.
2. Draw a short arc segment through the initial point with (-\delta, 0) as centre, thereby determining a new point on the trajectory.
3. Repeat the process at the new point and continue.
Example 7.3: For the system described by the given equation, construct the trajectory starting at the initial point (1, 0) using the delta method. Choosing x_1 = x and x_2 = \dot{x}/\omega, the equation is rearranged into the form above so that \delta can be identified. At the initial point, \delta is calculated as \delta = 0 + 1 - 1 = 0; therefore the initial arc is centred at the point (0, 0). The mean value of the coordinates of the two ends of the arc is used to calculate the next value of \delta, and the procedure is continued. By constructing small arcs in this way the complete trajectory is obtained, as shown in the figure.
Figure 7.4
7.4 Limit Cycles:
Limit cycles have a distinct geometric configuration in the phase plane portrait, namely that of an isolated closed path in the phase plane. A given system may have more than one limit cycle. A limit cycle represents a steady state oscillation, to which or from which all trajectories nearby will converge or diverge. In a nonlinear system, limit cycles describe the amplitude and period of a self-sustained oscillation. It should be pointed out that not all closed curves in the phase plane are limit cycles. A phase-plane portrait of a conservative system, in which there is no damping to dissipate energy, is a continuous family of closed curves. Closed curves of this kind are not limit cycles, because none of these curves is isolated from the others; such trajectories always occur as a continuous family, so that there are closed curves in any neighbourhood of any particular closed curve. On the other hand, limit cycles are periodic motions exhibited only by nonlinear non-conservative systems. As an example, consider the well-known Van der Pol differential equation
\ddot{x} - \mu (1 - x^2)\dot{x} + x = 0
which describes physical situations in many nonlinear systems. In terms of the state variables x_1 = x and x_2 = \dot{x}, we obtain
\dot{x}_1 = x_2,  \dot{x}_2 = \mu (1 - x_1^2) x_2 - x_1
The figure shows the phase trajectories of the system for \mu > 0 and \mu < 0. For \mu > 0 we observe that for large values of x_1(0) the system response is damped and the amplitude of x_1(t) decreases until the system state enters the limit cycle, as shown by the outer trajectory. On the other hand, if x_1(0) is initially small, the damping is negative and hence the amplitude of x_1(t) increases until the system state enters the limit cycle, as shown by the inner trajectory. When \mu < 0, the trajectories move in the opposite directions, as shown in the figure.
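The limit-cycle behaviour of the Van der Pol equation is easy to see numerically. The sketch below integrates the state equations for mu > 0 from a small initial state; the late-time amplitude settles near 2, the well-known limit-cycle amplitude. The value mu = 1 and the simulation horizon are illustrative choices.

import numpy as np
from scipy.integrate import solve_ivp

mu = 1.0
def vdp(t, x):
    return [x[1], mu * (1.0 - x[0] ** 2) * x[1] - x[0]]

sol = solve_ivp(vdp, (0.0, 30.0), [0.1, 0.0], max_step=0.01)   # small initial state
print(np.max(np.abs(sol.y[0][-1000:])))   # late-time amplitude, roughly 2 on the limit cycle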
Figure 7.5
A limit cycle is called stable if trajectories near the limit cycle, originating from outside or inside, converge to that limit cycle. In this case the system exhibits a sustained oscillation with constant amplitude. This is shown in figure (i). The inside of the limit cycle is an unstable region in the sense that trajectories diverge towards the limit cycle, and the outside is a stable region in the sense that trajectories converge to the limit cycle. A limit cycle is called unstable if trajectories near it diverge from it. In this case an unstable region surrounds a stable region: if a trajectory starts within the stable region, it converges to a singular point within the limit cycle; if a trajectory starts in the unstable region, it diverges with time to infinity, as shown in figure (ii). The inside of an unstable limit cycle is the stable region, and the outside is the unstable region.
7.5 Analysis and Classification of Singular Points
Singular points are points in the state plane where \dot{x}_1 = \dot{x}_2 = 0. At these points the slope of the trajectory dx_2/dx_1 is indeterminate. These points can also be the equilibrium points of the nonlinear system, depending on whether the state trajectories can reach them or not. Consider a linearised second order system represented by
\dot{x} = A x
Using the linear transformation x = M z, the equation can be transformed to the canonical form
\dot{z} = [[\lambda_1, 0], [0, \lambda_2]] z
where \lambda_1 and \lambda_2 are the roots of the characteristic equation of the system. The transformation simply transforms the coordinate axes from the x_1-x_2 plane to the z_1-z_2 plane having the same origin, but does not affect the nature of the roots of the characteristic equation. The phase trajectories obtained by using this transformed state equation still carry the same information, except that the trajectories may be skewed or stretched along the coordinate axes; in general, the new coordinate axes will not be rectangular. The solution of the state equation is
z_1(t) = z_1(0) e^{\lambda_1 t},  z_2(t) = z_2(0) e^{\lambda_2 t}
The slope of the trajectory in the z_1-z_2 plane is given by
dz_2/dz_1 = (\lambda_2 z_2) / (\lambda_1 z_1)
Dept. of EEE, SJBIT
Page 115
Modern Control Theory
10EE55 Where, k1 = ( 2/ 1) 0 so that the trajectories become a set of parabola as shown in figure (b) and the equilibrium point is called a node. In the original system of coordinates, these trajectories appear to be skewed as shown in figure (c). If the eigen values are both positive, the nature of the trajectories does not change, except that the trajectories diverge out from the equilibrium point as both z1(t) and z2(t) are increasing exponentially. The phase trajectories in the x1-x2 plane are as shown in figure (d). This type of singularity is identified as a node, but it is an unstable node as the trajectories diverge from the equilibrium point. Saddle Point:
Consider now a system with eigen values are real, distinct one positive and one negative. Here, one of the states corresponding to the negative eigen value converges and the one corresponding to positive eigen value diverges so that the trajectories are given by z2 = c(z1)-k or (z1)kz2 = c which is an equation to a rectangular hyperbola for positive values of k. The location of the eigen values, the phase portrait in z1 – z2 plane and in the x1 – x2 plane are as shown in figure. The equilibrium point around which the trajectories are of this type is called a saddle point.
Dept. of EEE, SJBIT
Page 116
Modern Control Theory
10EE55
Focus Point: Figure 7.7 Consider a system with complex conjugate eigen values. The canonical form of the state equation can be written as
Using linear transformation, the equation becomes
The slope
We get,
This is an equation for a spiral in the polar coordinates. A plot of this equation for negative values of real part is a family of equiangular spirals. The origin which is a singular point in this case is called a stable focus. When the eigen values are complex conjugate with positive real parts, the phase portrait consists of expanding spirals as shown in figure and the singular point is an unstable focus. When transformed into the x1-x2 plane, the phase portrait in the above two cases is essentially spiralling in nature, except that the spirals are now somewhat twisted in shape.
Dept. of EEE, SJBIT
Page 117
Modern Control Theory
10EE55
Centre or Vortex Point: Figure 7.8
Consider now the case of complex conjugate eigen values with zero real parts. ie.,
1,
2
= ±j
Integrating the above equation, we get which is an equation to a circle of radius R. The radius R can be evaluated from the initial conditions. The trajectories are thus concentric circles in y1-y2 plane and ellipses in the x1-x2 plane as shown in figure. Such a singular points, around which the state trajectories are concentric circles or ellipses, are called a centre or vortex.
Figure 7.9
Example 7.4: Determine the kind of singularity for the following differential equation:
\ddot{x} + 3\dot{x} + 2x = 0
Let the state variables be x_1 = x and x_2 = \dot{x}. The corresponding state model is
\dot{x}_1 = x_2,  \dot{x}_2 = -2x_1 - 3x_2
At the singular points, \dot{x}_1 = 0 and \dot{x}_2 = 0, which give x_2 = 0 and x_1 = 0. Therefore the singular point is at (0, 0).
The characteristic equation is |\lambda I - A| = 0, i.e.
\lambda^2 + 3\lambda + 2 = 0,  \lambda_{1,2} = -2, -1
Since the roots are real and negative, the singular point is a stable node.
Example 7.5: For the nonlinear system having the differential equation
\ddot{x} - 0.1\dot{x} + x + x^2 = 0
find all singularities.
Defining the state variables as x_1 = x and x_2 = \dot{x}, the state equations are
\dot{x}_1 = x_2,  \dot{x}_2 = 0.1 x_2 - x_1 - x_1^2
At the singular points \dot{x}_1 = \dot{x}_2 = 0, so that x_2 = 0 and x_1(1 + x_1) = 0. The singularities are thus at (0, 0) and (-1, 0).
Linearization about the singularities: the Jacobian matrix is
J = [[0, 1], [-1 - 2x_1, 0.1]]
Linearization around (0, 0), i.e. substituting x_1 = 0 and x_2 = 0:
A = [[0, 1], [-1, 0.1]]
The characteristic equation is |\lambda I - A| = 0, i.e. \lambda(\lambda - 0.1) + 1 = 0 or
\lambda^2 - 0.1\lambda + 1 = 0
The eigenvalues are complex with positive real part; the singular point is an unstable focus.
Linearization around (-1, 0):
A = [[0, 1], [1, 0.1]]
The characteristic equation is
\lambda^2 - 0.1\lambda - 1 = 0,  \lambda_{1,2} = 1.05 and -0.95
Since the roots are real, one positive and one negative, the singular point is a saddle point.
Example 7.6: Determine the kind of singularity for the following differential equation:
\ddot{x} + 0.5\dot{x} + 2x + x^2 = 0
Let the state variables be x_1 = x and x_2 = \dot{x}. The corresponding state model is
\dot{x}_1 = x_2,  \dot{x}_2 = -0.5 x_2 - 2x_1 - x_1^2
At the singular points \dot{x}_1 = \dot{x}_2 = 0, so that x_2 = 0 and x_1(2 + x_1) = 0. The singularities are thus at (0, 0) and (-2, 0).
Linearization about the singularities: the Jacobian matrix is
J = [[0, 1], [-2 - 2x_1, -0.5]]
Linearization around (0, 0), i.e. substituting x_1 = 0 and x_2 = 0:
A = [[0, 1], [-2, -0.5]]
The characteristic equation is |\lambda I - A| = 0, i.e. \lambda(\lambda + 0.5) + 2 = 0 or
\lambda^2 + 0.5\lambda + 2 = 0
The eigenvalues are complex with negative real parts; the singular point is a stable focus.
Linearization around (-2, 0):
A = [[0, 1], [2, -0.5]]
The characteristic equation is
\lambda^2 + 0.5\lambda - 2 = 0,  \lambda_{1,2} = 1.19 and -1.69
Since the roots are real, one positive and one negative, the singular point is a saddle point.
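The classification in Example 7.6 can be reproduced numerically by evaluating the Jacobian at each singular point and inspecting its eigenvalues.

import numpy as np

def jacobian(x1):
    # Jacobian of [x2, -0.5*x2 - 2*x1 - x1**2] with respect to (x1, x2)
    return np.array([[0.0, 1.0], [-2.0 - 2.0 * x1, -0.5]])

for point in [0.0, -2.0]:
    eig = np.linalg.eigvals(jacobian(point))
    print(point, eig)   # (0,0): complex with negative real part -> stable focus
                        # (-2,0): real, opposite signs           -> saddle point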
UNIT-8: STABILITY ANALYSIS UNIT - 8 Liapunov stability criteria, Liapunov functions, direct method of Liapunov & the linear system, Hurwitz criterion & Liapunov‟s direct method, construction of Liapunov functions for nonlinear system by Krasvskii‟s method. 6 Hours For linear time invariant (LTI) systems, the concept of stability is simple and can be formalised as per the following two notions:
a) A system is stable with zero input and arbitrary initial conditions if the resulting trajectory tends towards the equilibrium state. b) A system is stable if with bounded input, the system output is bounded.
In nonlinear systems, unfortunately, there is no definite correspondence between the two notions. The linear autonomous systems have only one equilibrium state and their behaviour about the equilibrium state completely determines the qualitative behaviour in the entire state plane.
In nonlinear systems, on the other hand, system behaviour for small deviations about the equilibrium point may be different from that for large deviations. Therefore, local stability does not imply stability in the overall state plane, and the two concepts should be considered separately. Secondly, in a nonlinear system with multiple equilibrium states, the system trajectories may move away from one equilibrium point and tend towards another as time progresses. Thus, for nonlinear systems it is more meaningful to talk about the stability of a particular equilibrium point than about the stability of the system as a whole.
Stability in the region close to the equilibrium point, i.e., in the neighbourhood of the equilibrium point, is called stability in the small. For a larger region around the equilibrium point, the stability may be referred to as stability in the large. In the extreme case, we can talk about the stability of a trajectory starting from anywhere in the complete state space, this being called global stability. A simple physical illustration of the different types of stability is shown in Fig. 8.1.
Fig. 8.1 Global and local stability
Equilibrium State: In the system ẋ = f(x,t), a state xe where f(xe,t) = 0 for all t is called an equilibrium state of the system. If the system is linear time invariant, namely f(x,t) = Ax, then there exists only one equilibrium state if A is non-singular, and there exist infinitely many equilibrium states if A is singular. For nonlinear systems, there may be one or more equilibrium states. Any isolated equilibrium state can be shifted to the origin of the coordinates, i.e., f(0,t) = 0, by a translation of coordinates.
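For a nonlinear system, the equilibrium states can be located numerically by solving f(xe) = 0. The sketch below is illustrative Python using a hypothetical undamped-pendulum model (chosen here purely as an example) and scipy's fsolve:

    import numpy as np
    from scipy.optimize import fsolve

    # Hypothetical nonlinear system: x1' = x2, x2' = -sin(x1)
    def f(x):
        return [x[1], -np.sin(x[0])]

    # Solve f(xe) = 0 starting from a few initial guesses
    for guess in [(0.1, 0.1), (3.0, 0.1), (-3.0, -0.1)]:
        print(np.round(fsolve(f, guess), 4))   # approx (0, 0), (pi, 0), (-pi, 0)

Each equilibrium state found in this way can then be shifted to the origin by a translation of coordinates before the stability definitions below are applied.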
8.2 Stability Definitions: The Russian mathematician A.M. Liapunov has clearly defined the different types of stability. These are discussed below.
Stability: An equilibrium state xe of the system is said to be stable if, for each real number ε > 0, there is a real number δ(ε, t0) > 0 such that ||x0 - xe|| ≤ δ implies ||x(t) - xe|| ≤ ε for all t ≥ t0. The real number δ depends on ε and, in general, also depends on t0. If δ does not depend on t0, the equilibrium state is said to be uniformly stable. Equivalently, an equilibrium state xe of the system is said to be stable in the sense of Liapunov if, corresponding to each spherical region S(ε), there is a spherical region S(δ) such that trajectories starting in S(δ) do not leave S(ε) as t increases indefinitely.
Asymptotic Stability: An equilibrium state xe of the system is said to be asymptotically stable if it is stable in the sense of Liapunov and every solution starting within S(δ) converges, without leaving S(ε), to xe as t increases indefinitely.
Asymptotic Stability in the Large: If asymptotic stability holds for all states from which trajectories originate, the equilibrium state is said to be asymptotically stable in the large. That is, an equilibrium state xe of the system is said to be asymptotically stable in the large if it is stable and if every solution converges to xe as t increases indefinitely. Obviously, a necessary condition for asymptotic stability in the large is that there is only one equilibrium state in the whole state space.
The definitions stated above are represented graphically in Fig. 8.2.
Fig. 8.2 Liapunov's stability
Instability: An equilibrium state xe is said to be unstable if, for some real number ε > 0 and any real number δ > 0, no matter how small, there is always a state x0 in S(δ) such that the trajectory starting at this state leaves S(ε).
8.3 Stability by the Method of Liapunov
The Russian mathematician A.M. Liapunov has proposed a few theorems for the study of the stability of systems. The most popular among these is called the "Second Method of Liapunov" or "Direct Method of Liapunov". This method is very general in its formulation and can be used to study the stability of linear or nonlinear systems. The method is called 'direct' as it does not involve the solution of the system differential equations; stability information is available without solving the equations, which is definitely an advantage for nonlinear systems. The stability information obtained by this method is precise and involves no approximation.
First Method of Liapunov: The first method of Liapunov, though rarely talked about, is essentially a theorem stating the conditions under which system stability information can be inferred by examining the simplified equations obtained through local linearization. This theorem is applicable only to autonomous systems.
8.4 Sign Definiteness
Let V(x1, x2, x3, ……, xn) be a scalar function of the state variables x1, x2, x3, ……, xn. Then the following definitions are useful for the discussion of Liapunov's second method.
8.4.1 Scalar Functions: A scalar function V(x) is said to be positive definite in a region Ω if V(x) > 0 for all nonzero states x in the region and V(0) = 0.
A scalar function V(x) is said to be negative definite in a region Ω if V(x) < 0 for all nonzero states x in the region and V(0) = 0.
A scalar function V(x) is said to be positive semi-definite in a region Ω if it is positive for all states in the region except at the origin and at certain other states, where it is zero.
A scalar function V(x) is said to be negative semi-definite in a region Ω if it is negative for all states in the region except at the origin and at certain other states, where it is zero.
A scalar function V(x) is said to be indefinite if, in the region Ω, it assumes both positive and negative values, no matter how small the region is.
Examples: V(x) = x1² + 2x2² is positive definite, V(x) = -(x1² + x2²) is negative definite, V(x) = (x1 + x2)² is positive semi-definite, and V(x) = x1x2 + x2² is indefinite.
8.4.2 Sylvester's Criteria for Definiteness: A necessary and sufficient condition in order that the quadratic form xᵀAx, where A is an n×n real symmetric matrix, be positive definite is that the determinant of A be positive and all the successive principal minors of A be positive.
A necessary and sufficient condition in order that the quadratic form xᵀAx, where A is an n×n real symmetric matrix, be negative definite is that the determinant of A be positive if n is even and negative if n is odd, and that the successive principal minors of even order be positive and the successive principal minors of odd order be negative.
A necessary and sufficient condition in order that the quadratic form xᵀAx, where A is an n×n real symmetric matrix, be positive semi-definite is that A be singular (det A = 0) and all the principal minors of A be nonnegative.
A necessary and sufficient condition in order that the quadratic form xᵀAx, where A is an n×n real symmetric matrix, be negative semi-definite is that A be singular and all the principal minors of even order be nonnegative and those of odd order be non-positive.
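The definite cases of Sylvester's test can be coded directly from the successive (leading) principal minors. The function below is an illustrative sketch (the two matrices are chosen here, not taken from the worked examples); the semi-definite cases would additionally require checking all principal minors, as stated above:

    import numpy as np

    def sign_definiteness(A):
        """Classify a real symmetric matrix using its leading principal minors (Sylvester's criteria)."""
        A = np.asarray(A, dtype=float)
        minors = [np.linalg.det(A[:k, :k]) for k in range(1, A.shape[0] + 1)]
        if all(m > 0 for m in minors):
            return "positive definite"
        if all((m > 0 if k % 2 == 0 else m < 0) for k, m in enumerate(minors, start=1)):
            return "negative definite"
        return "not definite (check all principal minors for the semi-definite cases)"

    print(sign_definiteness([[2, 1], [1, 3]]))      # positive definite
    print(sign_definiteness([[-2, 1], [1, -3]]))    # negative definite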
Example 8.1: Using Sylvester's criteria, determine the sign definiteness of the following quadratic forms.
The successive principal minors are all positive. Hence the given quadratic form is positive definite.
Example 8.3:
The successive principal minors alternate in sign, the odd-order minors being negative and the even-order minors positive. Hence the quadratic form is negative definite.
8.5 Second Method of Liapunov
The second method of Liapunov is based on a generalization of the idea that if the system has an asymptotically stable equilibrium state, then the stored energy of the system displaced within the domain of attraction decays with increasing time until it finally assumes its minimum value at the equilibrium state. The second method of Liapunov consists of the determination of a fictitious energy function called a Liapunov function. The idea of the Liapunov function is more general than that of energy and is more widely applicable. Liapunov functions are functions of x1, x2, x3, ……, xn and t. We denote Liapunov functions by V(x1, x2, x3, ……, xn, t), V(x,t), or V(x) if the function does not include t explicitly. In the second method of Liapunov, the sign behaviours of V(x,t) and its time derivative V̇(x,t) give information on the stability, asymptotic stability or instability of the equilibrium state under
consideration at the origin of the state space.
Theorem 1: Suppose that a system is described by ẋ = f(x,t), where f(0,t) = 0 for all t. If there exists a scalar function V(x,t) having continuous first partial derivatives and satisfying the following conditions:
1. V(x,t) is positive definite
2. V̇(x,t) is negative definite
then the equilibrium state at the origin is uniformly asymptotically stable. If, in addition, V(x,t) tends to infinity as ||x|| tends to infinity, then the equilibrium state at the origin is uniformly asymptotically stable in the large.
A visual analogy may be obtained by considering the surface V = V(x1, x2). This is a cup-shaped surface as shown. The constant-V loci are ellipses on the surface of the cup. Let x0 be the initial condition. If one plots the trajectory on the surface shown, the representative point x(t) crosses the constant-V curves and moves towards the lowest point of the cup, which is the equilibrium point.
Fig. 8.3 Energy function and movement of states
Theorem 2: Suppose that a system is described by ẋ = f(x,t), where f(0,t) = 0 for all t ≥ 0. If there exists a scalar function V(x,t) having continuous first partial derivatives and satisfying the following conditions:
1. V(x,t) is positive definite
2. V̇(x,t) is negative semi-definite
3. V̇(φ(t; x0, t0), t) does not vanish identically in t ≥ t0 for any t0 and any x0 ≠ 0, where φ(t; x0, t0) denotes the trajectory or solution starting from x0 at t0.
Then the equilibrium state at the origin of the system is uniformly asymptotically stable in the large. If, however, there exists a positive definite scalar function V(x,t) such that V̇(x,t) is identically zero, then the system can remain in a limit cycle. The equilibrium state at the origin, in this case, is said to be stable in the sense of Liapunov.
Theorem 3: Suppose that a system is described by ẋ = f(x,t), where f(0,t) = 0 for all t ≥ t0. If there exists a scalar function W(x,t) having continuous first partial derivatives and satisfying the following conditions:
1. W(x,t) is positive definite in some region about the origin
2. Ẇ(x,t) is positive definite in the same region
then the equilibrium state at the origin is unstable.
Example 8.4: Determine the stability of the following system using Liapunov's method.
Let us choose a suitable positive definite function V(x) and compute its time derivative V̇(x).
The resulting V̇(x) is negative definite. Hence the system is asymptotically stable.
Example 8.5: Determine the stability of the following system using Liapunov's method.
Let us choose a suitable positive definite function V(x) and compute V̇(x). The resulting V̇(x) is a negative semi-definite function. If V̇ is to vanish identically for t ≥ t0, then x2 must be zero for all t ≥ t0. This means that V̇ vanishes identically only at the origin. Hence the equilibrium state at the origin is asymptotically stable in the large.
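The sign checks used in Examples 8.4 and 8.5 are easy to mechanise with symbolic differentiation. The sketch below applies the same procedure to a hypothetical second-order system with the candidate V(x) = x1² + x2²; both the system and V are illustrative choices, not the ones used in the examples:

    import sympy as sp

    x1, x2 = sp.symbols('x1 x2', real=True)
    # Hypothetical system chosen for illustration: x1' = -x1 + x2, x2' = -x2
    f1, f2 = -x1 + x2, -x2
    V = x1**2 + x2**2                         # candidate Liapunov function
    Vdot = sp.expand(sp.diff(V, x1)*f1 + sp.diff(V, x2)*f2)
    print(Vdot)                               # -2*x1**2 + 2*x1*x2 - 2*x2**2

Here V̇ = -2x1² + 2x1x2 - 2x2² is negative definite (its symmetric matrix [[-2, 1], [1, -2]] passes the alternating-sign test of Sylvester's criteria), so the origin of this illustrative system is asymptotically stable, mirroring the argument of Example 8.4.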
Example 8.6: Determine the stability of the following system using Liapunov's method.
Let us choose a candidate V(x) and compute V̇(x). The resulting V̇(x) is an indefinite function, so no conclusion can be drawn from this choice.
Let us choose another candidate V(x). The resulting V̇(x) is a negative semi-definite function. If V̇ is to vanish identically for t ≥ t0, then x2 must be zero for all t ≥ t0. This means that V̇ vanishes identically only at the origin. Hence the equilibrium state at the origin is asymptotically stable in the large.
Example 8.7: Consider a nonlinear system governed by the state equations:
Let us choose a candidate V(x) and compute V̇(x). For asymptotic stability we require that V̇(x) be negative; the region of state space where this condition is not satisfied is possibly a region of instability. Let us concentrate on the region of state space where this condition is satisfied. The limiting condition for such a region defines the dividing lines, which lie in the first and third quadrants and are rectangular hyperbolas, as shown in Figure 8.4. In the second and fourth quadrants, the inequality is satisfied for all values of x1 and x2. Figure 8.4 shows the region of stability and possible instability. Since the choice of Liapunov function is not unique, it may be possible to choose another Liapunov function for the system under consideration which yields a larger region of stability.
Conclusions:
1. Failure to find a V function establishing the stability, asymptotic stability or instability of the equilibrium state under consideration gives no information on stability.
2. Although a particular V function may prove that the equilibrium state under consideration is stable or asymptotically stable in the region Ω, which includes this equilibrium state, it does not necessarily mean that the motions are unstable outside the region Ω.
3. For a stable or asymptotically stable equilibrium state, a V function with the required properties always exists.
8.6 Stability Analysis of Linear Systems
Theorem: The equilibrium state x = 0 of the system ẋ = Ax is asymptotically stable if and only if, given any positive definite Hermitian matrix Q (or positive definite real symmetric matrix), there exists a positive definite Hermitian matrix P (or positive definite real symmetric matrix) such that A*P + PA = -Q (AᵀP + PA = -Q when A is real). The scalar function V(x) = x*Px is a Liapunov function for the system.
Hence, for the asymptotic stability of the system ẋ = Ax, it is sufficient that Q be positive definite. Instead of first specifying a positive definite matrix P and examining whether or not Q is positive definite, it is convenient to specify a positive definite matrix Q first and then examine whether or not P, determined from A*P + PA = -Q, is also positive definite. Note that P being positive definite is a necessary condition.
Note: 1. If V̇(x) does not vanish identically along any trajectory, then Q may be chosen to be positive semi-definite. 2. In determining whether or not there exists a positive definite Hermitian or real symmetric matrix P, it is convenient to choose Q = I, where I is the identity matrix. The elements of P are then determined from A*P + PA = -I, and the matrix P is tested for positive definiteness.
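In practice the Liapunov equation is solved numerically. The sketch below is illustrative (the matrix A is chosen here, not taken from the following examples); it uses scipy to solve AᵀP + PA = -Q with Q = I and tests P for positive definiteness:

    import numpy as np
    from scipy.linalg import solve_continuous_lyapunov

    A = np.array([[0.0, 1.0], [-2.0, -3.0]])    # illustrative stable state matrix
    Q = np.eye(2)                               # choose Q = I as suggested in the note above

    # solve_continuous_lyapunov(a, q) solves a X + X a^H = q, so passing
    # a = A.T and q = -Q yields the required equation A^T P + P A = -Q
    P = solve_continuous_lyapunov(A.T, -Q)
    print(P)
    print(np.all(np.linalg.eigvalsh(P) > 0))    # True -> P positive definite -> asymptotically stable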
Example 8.8: Determine the stability of the system described by ẋ1 = x2, ẋ2 = -x1 - x2.
Assume a Liapunov function V(x) = xᵀPx and determine P from AᵀP + PA = -I. Equating the elements:
-2P12 = -1
P11 - P12 - P22 = 0
2P12 - 2P22 = -1
Solving the above equations, P11 = 1.5, P12 = 0.5, P22 = 1, and
P = [[1.5, 0.5], [0.5, 1]]
Since 1.5 > 0 and det(P) > 0, P is positive definite. Hence, the equilibrium state at the origin is asymptotically stable in the large, and the Liapunov function is V(x) = xᵀPx = 1.5x1² + x1x2 + x2².
Example 8.9: Determine the stability of the equilibrium state of the following system.
4P11 + (1 - j)P12 + (1 + j)P21 = 1
(1 - j)P11 + 5P21 + (1 - j)P22 = 0
(1 + j)P11 + 5P12 + (1 + j)P22 = 0
(1 - j)P12 + (1 + j)P21 + 6P22 = 1
Solving the above equations, P11 = 3/8, P12 = -(1 + j)/8, P21 = -(1 - j)/8, P22 = 1/4.
Since P11 = 3/8 > 0 and det(P) = 1/16 > 0, P is positive definite. Hence the origin of the system is asymptotically stable.
Example 8.10: Determine the stability of the system described by ẋ1 = -x1 - 2x2, ẋ2 = x1 - 4x2.
Assume a Liapunov function V(x) = xᵀPx and determine P from AᵀP + PA = -I. Equating the elements:
-2P11 + 2P12 = -1
-2P11 - 5P12 + P22 = 0
-4P12 - 8P22 = -1
Solving the above equations, P11 = 23/60, P12 = -7/60, P22 = 11/60, and
P = [[23/60, -7/60], [-7/60, 11/60]]
Since 23/60 > 0 and det(P) > 0, P is positive definite. Hence, the equilibrium state at the origin is asymptotically stable in the large.
Example 8.11: Determine the stability range for the gain k of the system given below.
In determining the stability range of k, we assume u = 0. The closed-loop state equations are then ẋ1 = x2, ẋ2 = -2x2 + x3, ẋ3 = -kx1 - x3.
Let us choose Q = diag(0, 0, 1), a positive semi-definite real symmetric matrix.
This choice is permissible since V̇(x) = -xᵀQx = -x3² cannot be identically equal to zero except at the origin. To verify this, note that V̇ being identically zero implies that x3 is identically zero. If x3 is identically zero, then x1 must be zero, since we have 0 = -kx1 - 0. If x1 is identically zero, then x2 must also be identically zero, since 0 = x2. Thus V̇ is identically zero only at the origin. Hence we may use this Q matrix, even though it is only positive semi-definite. Let us solve
AᵀP + PA = -Q, i.e., equating the elements:
-k p13 - k p13 = 0
-k p23 + p11 - 2p12 = 0
-k p33 + p12 - p13 = 0
p12 - 2p22 + p12 - 2p22 = 0
p13 - 2p23 + p22 - p23 = 0
p23 - p33 + p23 - p33 = -1
Solving the above equations, we get
p11 = (k² + 12k)/(12 - 2k), p12 = 6k/(12 - 2k), p13 = 0,
p22 = 3k/(12 - 2k), p23 = k/(12 - 2k), p33 = 6/(12 - 2k)
For P to be positive definite, it is necessary and sufficient that 12 - 2k > 0 and k > 0, i.e., 0 < k < 6.
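The whole computation can be reproduced symbolically. The sketch below is illustrative Python; it assumes the state matrix corresponding to the state equations written above and recovers the same gain range from the leading principal minors of P:

    import sympy as sp

    k = sp.symbols('k', positive=True)
    p11, p12, p13, p22, p23, p33 = sp.symbols('p11 p12 p13 p22 p23 p33')

    A = sp.Matrix([[0, 1, 0], [0, -2, 1], [-k, 0, -1]])   # assumed state matrix (u = 0)
    P = sp.Matrix([[p11, p12, p13], [p12, p22, p23], [p13, p23, p33]])
    Q = sp.diag(0, 0, 1)                                  # the positive semi-definite Q chosen above

    E = A.T*P + P*A + Q                                   # require A^T P + P A = -Q
    eqs = [E[i, j] for i in range(3) for j in range(i, 3)]
    sol = sp.solve(eqs, [p11, p12, p13, p22, p23, p33], dict=True)[0]
    Ps = P.subs(sol)

    # Leading principal minors of P; all must be positive for P to be positive definite
    minors = [sp.simplify(Ps[:i, :i].det()) for i in range(1, 4)]
    print(minors)                                         # all positive exactly when 0 < k < 6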