MIT 16.31 Feedback Control Systems (Complete)


MIT OpenCourseWare | Aeronautics and Astronautics | 16.31 Feedback Control Systems,... Page 1 of 3

Syllabus The 'Information' section of the course syllabus contains a list of questions each student was asked to answer about their background. The information provided would have been used to create a class e-mail list and to adjust the pace of the course.

Instructor

Professor Jonathan P. How

Feedback Control

• Introduction to the state-space approach of analysis and control synthesis.
• State-space representation of dynamic systems; controllability and observability.
• State-space realizations of transfer functions and canonical forms.
• Design of state-space controllers, including pole-placement and optimal control.
• Introduction to the Kalman filter.
• Limitations on performance of control systems.
• Introduction to the robustness of multivariable systems.

TOPICS                                       NUMBER OF LECTURES

Review of Classical Synthesis Techniques      5
State Space - Linear Systems                  8
Full State Feedback                           7
State Estimation                              3
Output Feedback                               4
Robustness                                    7

Homework

• Weekly problem sets will be handed out on Fridays (due the following week).
• There will be approximately two labs that will be graded as part of the homework.
• Midterm: between Lectures 27 and 28, in class.
• Final Exam: after Lecture 37.
• Course Grades: Homework 30%, Midterm 20%, Final 50%.

Textbooks

Required:

• Pierre Belanger. Control Engineering: A Modern Approach. Oxford University, 1995.

http://ocw.mit.edu/OcwWeb/Aeronautics-and-Astronautics/16-31Feedback-Control-Syste...

8/12/2005


Goal

To teach the fundamentals of control design and analysis using state-space methods. This includes both the practical and theoretical aspects of the topic. By the end of the course, you should be able to design controllers using state-space methods and evaluate whether these controllers are robust.

• Review classical control design techniques.
• Explore new control design techniques using state-space tools.
• Investigate analysis of robustness.

Prerequisites

• Basic understanding of how to develop system equations of motion.
• Classical analysis and synthesis techniques (root locus, Bode). There will be a brief review during the first 2 weeks.
• Course assumes a working knowledge of MATLAB®.

Policies

• You are encouraged to discuss the homework and problem sets. However, your submitted work must be your own.
• Homework will be handed out Fridays, due back the following Friday at 5 PM. Late homework will not be accepted unless prior approval is obtained from Professor How. The grade on all late homework will be reduced 25% per day. No homework will be accepted for credit after the solutions have been handed out.
• There will hopefully be 2 labs later in the semester that will be blended in with the homework. This will be group work, depending on the class size.
• For feedback and/or questions, contact me through email for the best results.

Supplemental Textbooks

There are many others, but these are the ones on my shelf. All pretty much have the same content, but each author focuses on different aspects.

Modeling and Control

• Palm. Modeling, Analysis, and Control of Dynamic Systems. Wiley.

Basic Control

• Franklin and Powell. Feedback Control of Dynamic Systems. Addison-Wesley.
• Van de Vegte. Feedback Control Systems. Prentice Hall.
• Di Stefano. Feedback Control Systems. Schaum's Outline.
• Luenberger. Introduction to Dynamic Systems. Wiley.


• Ogata. Modern Control Engineering. Prentice Hall.

Linear Systems

• Chen. Linear System Theory and Design. Prentice Hall.
• Aplevich. The Essentials of Linear State-Space Systems. Wiley.

State Space Control

• Brogan. Modern Control Theory. Quantum Books.
• Burl. Linear Optimal Control. Prentice Hall.
• Kwakernaak and Sivan. Linear Optimal Control Systems. Wiley Interscience.


This course calendar incorporates the lecture schedule and assignment schedule for the semester. Some lecture topics were taught over several class sessions.

(Homework: problem sets HW1 through HW10 went out and came due at roughly weekly intervals across the term; HW1 went out at Lecture 1 and HW10 was due at the last lecture.)

LEC #   TOPICS

1       Introduction
2       Introduction
3       Root Locus Analysis
4       Root Locus Synthesis
5       Bode - Very Brief Discussion
6       State Space (SS) Introduction
7       State Space (SS) to Transfer Function (TF)
8       Transfer Function (TF) to State Space (SS)
9       Time Domain
10      Observability
11      Controllability
12      Pole/Zero (P/Z) Cancellation
13      Observability, Residues
14      Pole Placement
15      Pole Placement
16      Performance
17      Performance
18      Pole Locations
19      Pole Locations
20      Pole Locations
21      State Estimators
22      State Estimators
23      State Estimators
24      Dynamic Output Feedback
25      Dynamic Output Feedback
26      Dynamic Output Feedback
27      Sensitivity of LQG

Quiz (between Lectures 27 and 28)

28      Sensitivity of LQG
29      Bounded Gain Theorem
30      Error Models
31      MIMO Systems


32      MIMO Systems
33      MIMO Systems
34      LQR Optimal
35      LQR Optimal
36      Reference Cmds II
37      LDOC - Review (HW10 due)

Final Exam


Lecture #1 16.31 Feedback Control

Copyright 2001 by Jonathan How.


16.31 1—1

Fall 2001

Introduction

[Block diagram: reference r(t) and disturbance d(t); the error e(t) = r(t) − y(t) drives the controller K(s), whose output u(t) drives the plant G(s) to produce the output y(t), which is fed back negatively.]

• Goal: Design a controller K(s) so that the system has some desired characteristics. Typical objectives:
  — Stabilize the system (Stabilization)
  — Regulate the system about some design point (Regulation)
  — Follow a given class of command signals (Tracking)
  — Reduce the response to disturbances (Disturbance Rejection)

• Typically think of closed-loop control, so we would analyze the closed-loop dynamics.
  — Open-loop control is also possible (called "feedforward"), but it is more prone to modeling errors since the inputs are not changed as a result of measured error.

• Note that a typical control system includes the sensors, actuators, and the control law.
  — The sensors and actuators need not always be physical devices (e.g., economic systems).
  — A good selection of the sensor and actuator can greatly simplify the control design process.
  — Course concentrates on the design of the control law given the rest of the system (although we will need to model the system).
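The closed-loop versus feedforward point above can be made concrete with a small simulation. This is a sketch, not from the notes: the first-order plant, the proportional gain, and the disturbance value are all made-up illustrations.

```python
# A sketch (not from the notes) contrasting feedback with feedforward on the
# made-up first-order plant xdot = -x + u + d, where the constant disturbance
# d is unknown to the controller.

def simulate(feedback, r=1.0, d=0.5, dt=0.01, T=10.0):
    """Forward-Euler simulation; returns the final output y = x."""
    x = 0.0
    for _ in range(int(T / dt)):
        if feedback:
            u = 10.0 * (r - x)   # closed loop: act on the measured error
        else:
            u = r                # feedforward: u picked so y -> r when d = 0
        x += dt * (-x + u + d)
    return x

y_ol = simulate(feedback=False)  # settles at r + d = 1.5 (full disturbance error)
y_cl = simulate(feedback=True)   # settles at (10 r + d)/11, error reduced ~11x
```

The feedforward controller has no way to react to d, so the full disturbance appears in the output; the feedback controller shrinks it by roughly the loop gain.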

16.31 1—2

Fall 2001

Why Control?

[Photo: OPERATION IRAQI FREEDOM — An F-117 from the 8th Expeditionary Fighter Squadron out of Holloman Air Force Base, N.M., flies over the Persian Gulf on April 14, 2003. The 8th EFS has begun returning to Holloman A.F.B. after having been deployed to the Middle East in support of Operation Iraqi Freedom. (U.S. Air Force photo by Staff Sgt. Derrick C. Goode). http://www.af.mil/photos.html]

• Easy question to answer for aerospace because many vehicles (spacecraft, aircraft, rockets) and aerospace processes (propulsion) need to be controlled just to function
  — Example: the F-117 does not even fly without computer control, and the X-29 is unstable

16.31 1—3

Fall 2001

Feedback Control Approach

• Establish control objectives
  — Qualitative: don't use too much fuel
  — Quantitative: settling time of the step response

• Scalar LQR example (with Rxx > 0 and Ruu > 0): minimize the cost

  J = ∫0∞ ( Rxx x²(t) + Ruu u²(t) ) dt

• Then the steady-state P solves

  2aP + Rxx − P² b²/Ruu = 0

  which gives

  P = Ruu ( a + √(a² + b² Rxx/Ruu) ) / b²  >  0

• Then u(t) = −K x(t), where

  K = Ruu⁻¹ b P = ( a + √(a² + b² Rxx/Ruu) ) / b

• The closed-loop dynamics are

  ẋ = (a − bK) x = ( a − (a + √(a² + b² Rxx/Ruu)) ) x
    = −√(a² + b² Rxx/Ruu) x = Acl x(t)

• Note that as Rxx/Ruu → ∞, Acl ≈ −|b| √(Rxx/Ruu)

• And as Rxx/Ruu → 0, K ≈ (a + |a|)/b
  — If a < 0 (open-loop stable), K ≈ 0 and Acl = a − bK ≈ a
  — If a > 0 (OL unstable), K ≈ 2a/b and Acl = a − bK ≈ −a
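The closed-form scalar result above can be checked numerically; a sketch, with arbitrary illustration values for a, b, Rxx, and Ruu.

```python
# Numerical check of the scalar LQR formulas above (a sketch; the numbers for
# a, b, Rxx, Ruu are arbitrary).
import math

def scalar_lqr(a, b, Rxx, Ruu):
    """Solve 2 a P + Rxx - P^2 b^2 / Ruu = 0 (stabilizing root) and return
    (P, K, Acl) with K = b P / Ruu and Acl = a - b K."""
    P = Ruu * (a + math.sqrt(a**2 + b**2 * Rxx / Ruu)) / b**2
    K = b * P / Ruu
    return P, K, a - b * K

a, b, Rxx, Ruu = 1.0, 2.0, 3.0, 0.5
P, K, Acl = scalar_lqr(a, b, Rxx, Ruu)

residual = 2*a*P + Rxx - P**2 * b**2 / Ruu   # Riccati residual, should be ~0
# Here sqrt(a^2 + b^2 Rxx/Ruu) = 5, so P = 0.75, K = 3, and Acl = -5
```

The Riccati residual vanishes and Acl = −√(a² + b² Rxx/Ruu), matching the derivation.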

Fall 2001

16.31 22—11

Summary

• Can find the optimal feedback gains u = −Kx using the MATLAB® command

  K = lqr(A, B, Rxx, Ruu)

• Similar derivation for the optimal estimation problem (Linear Quadratic Estimator)
  — Full treatment requires detailed coverage of advanced topics (e.g. stochastic processes and Ito calculus) — better left to a second course.
  — But, by duality, can compute the optimal Kalman filter gains from

  Ke = lqr(Aᵀ, Cyᵀ, Bw Rw Bwᵀ, Rv),   L = Keᵀ

® MATLAB is a trademark of The MathWorks, Inc.
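For readers without MATLAB, the same two gains can be sketched with SciPy's continuous algebraic Riccati solver. This is an illustration, not course code: the double-integrator plant and the noise weights Bw, Rw, Rv are made-up examples.

```python
# A sketch of the matrix LQR/LQE computation using SciPy instead of MATLAB's
# lqr; the double-integrator plant and Bw, Rw, Rv are made-up examples.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator
B = np.array([[0.0], [1.0]])
Rxx = np.eye(2)
Ruu = np.array([[1.0]])

# solve_continuous_are solves A'P + P A - P B Ruu^-1 B' P + Rxx = 0
P = solve_continuous_are(A, B, Rxx, Ruu)
K = np.linalg.solve(Ruu, B.T @ P)        # K = Ruu^-1 B' P; here K = [1, sqrt(3)]

# By duality, the estimator gain solves the same equation with (A', Cy'):
C = np.array([[1.0, 0.0]])
Bw = np.array([[0.0], [1.0]])            # assumed process-noise input
Rw = np.array([[1.0]])                   # assumed process-noise intensity
Rv = np.array([[1.0]])                   # assumed sensor-noise intensity
Pe = solve_continuous_are(A.T, C.T, Bw @ Rw @ Bw.T, Rv)
L = (np.linalg.solve(Rv, C @ Pe)).T      # L = Pe C' Rv^-1; A - L C is stable
```

Both A − BK and A − LC come out stable, as the stabilizing Riccati solution guarantees.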

Fall 2001

16.31 22–12

Weighting Matrix Selection

• A good rule of thumb when selecting the weighting matrices Rxx (or Rzz) and Ruu is to normalize the signals:

  Rxx = diag( α1²/(x1)²max , α2²/(x2)²max , … , αn²/(xn)²max )

  Ruu = diag( β1²/(u1)²max , β2²/(u2)²max , … , βm²/(um)²max )

• The (xi)max and (ui)max represent the largest desired response/control input for that component of the state/actuator signal.

• The αi and βi are used to add an additional relative weighting on the various components of the state/control.
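A minimal helper implementing this normalization rule; the relative weights and maximum values in the example call are made-up numbers.

```python
# Minimal sketch of the normalization rule above; the alpha/beta relative
# weights and the max values in the example call are made-up numbers.
import numpy as np

def normalized_weights(max_vals, rel_weights=None):
    """Diagonal weight matrix with entries rel_i^2 / (max_i)^2."""
    max_vals = np.asarray(max_vals, dtype=float)
    rel = np.ones_like(max_vals) if rel_weights is None else np.asarray(rel_weights, dtype=float)
    return np.diag(rel**2 / max_vals**2)

Rxx = normalized_weights([1.0, 0.5], rel_weights=[1.0, 2.0])  # diag(1, 16)
Ruu = normalized_weights([10.0])                              # diag(0.01)
```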

Fall 2001

16.31 17–23

Reference Input - II

• On page 17-5, the compensator was implemented with a reference command by changing to feedback on e(t) = r(t) − y(t) rather than −y(t):

[Block diagram: r enters a summing junction with −y to form e, which drives Gc(s); its output u drives G(s) to produce y.]

  — So u = Gc(s) e = Gc(s)(r − y), and we have u = −Gc(s) y if r = 0.
  — Intuitively appealing because it is the same approach used for the classical control, but it turns out not to be the best approach.

• Can improve the implementation by using a more general form:

  ẋc = Ac xc + L y + G r
  u = −K xc + N̄ r

  — Now explicitly have two inputs to the controller (y and r)
  — N̄ performs the same role that we used it for previously.
  — Introduce G as an extra degree of freedom in the problem.

• First: if N̄ = 0 and G = −L, then we recover the same implementation used previously, since the controller reduces to:

  ẋc = Ac xc + L(y − r) = Ac xc + Bc(−e)
  u = −K xc = −Cc xc

  — So if Gc(s) = Cc(sI − Ac)⁻¹ Bc, then the controller can be written as u = Gc(s) e (negative signs cancel).

Fall 2001

16.31 17–24

• Second: this generalization does not change the closed-loop poles of the system, regardless of the selection of G and N̄, since

  ẋ = A x + B u ,   y = C x
  ẋc = Ac xc + L y + G r ,   u = −K xc + N̄ r

  ⇒  [ ẋ ; ẋc ] = [ A  −BK ; LC  Ac ] [ x ; xc ] + [ B N̄ ; G ] r

      y = [ C  0 ] [ x ; xc ]

  — So the closed-loop poles are the eigenvalues of [ A  −BK ; LC  Ac ], regardless of the choice of G and N̄
  — G and N̄ impact the forward path, not the feedback path

• Third: given this extra freedom, what is the best way to use it?
  — One good objective is to select G and N̄ so that the state estimation error is independent of r.
  — With this choice, changes in r do not tend to cause such large transients in x̃
  — Note that for this analysis, take x̃ = x − xc since xc ≡ x̂

  x̃˙ = ẋ − ẋc = A x + B u − (Ac xc + L y + G r)
      = A x + B(−K xc + N̄ r) − ({A − BK − LC} xc + LC x + G r)

Fall 2001

16.31 17–25

  x̃˙ = A x + B N̄ r − ({A − LC} xc + LC x + G r)
      = (A − LC) x + B N̄ r − ({A − LC} xc + G r)
      = (A − LC) x̃ + B N̄ r − G r
      = (A − LC) x̃ + (B N̄ − G) r

• Thus we can eliminate the effect of r on x̃ by setting G ≡ B N̄

• Fourth: if this generalization does not change the closed-loop poles of the system, then what does it change?
  — The zeros of the y/r transfer function, which are given by:

  general:   det [ sI − A    BK        −B N̄ ;
                   −LC       sI − Ac   −G   ;
                   C         0         0    ] = 0

  previous:  det [ sI − A    BK        0 ;
                   −LC       sI − Ac   L ;
                   C         0         0 ] = 0

  new:       det [ sI − A    BK        −B N̄ ;
                   −LC       sI − Ac   −B N̄ ;
                   C         0         0    ] = 0

Fall 2001

16.31 17–26

• Hard to see how this helps, but consider the scalar case:

  new:  det [ sI − A   BK       −B N̄ ;
              −LC      sI − Ac  −B N̄ ;
              C        0        0    ] = 0

  ⇒  C( −BK B N̄ + (sI − Ac) B N̄ ) = 0
     −C B N̄ ( BK − (sI − [A − BK − LC]) ) = 0
     C B N̄ ( sI − [A − LC] ) = 0

  — So the zero of the y/r path is the root of sI − [A − LC] = 0, which is the pole of the estimator.
  — With this selection of G = B N̄, the estimator dynamics are canceled out of the response of the system to a reference command.
  — No such cancellation occurs with the previous implementation.

• Fifth: select N̄ to ensure that the steady-state error is zero.
  — As before, this can be done by selecting N̄ so that the DC gain of the closed-loop y/r transfer function is 1:

  (y/r)|DC = [ C  0 ] ( − [ A  −BK ; LC  Ac ] )⁻¹ [ B ; B ] N̄ = 1

• The new implementation of the controller is

  ẋc = Ac xc + L y + B N̄ r
  u = −K xc + N̄ r

  — Which has two separate inputs y and r
  — Selection of N̄ ensures that the steady-state performance is good
  — The new implementation gives better transient performance.
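The N̄ computation from the DC-gain condition can be sketched numerically. The plant (A, B, C) and the gains K and L below are illustrative assumptions, not values from the notes.

```python
# Numerical sketch of the Nbar / G = B*Nbar selection; the plant and the
# gains K, L below are illustrative assumptions.
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[3.0, 2.0]])      # assumed state-feedback gain
L = np.array([[4.0], [6.0]])    # assumed estimator gain

Ac = A - B @ K - L @ C          # compensator dynamics matrix

# Closed loop with G = B*Nbar: Acl = [[A, -BK], [LC, Ac]], input matrix [B; B]*Nbar
Acl = np.block([[A, -B @ K], [L @ C, Ac]])
Bcl = np.vstack([B, B])
Ccl = np.hstack([C, np.zeros_like(C)])

# DC gain for Nbar = 1 is -Ccl Acl^-1 Bcl; scale Nbar so the overall DC gain is 1
dc = (-Ccl @ np.linalg.solve(Acl, Bcl)).item()
Nbar = 1.0 / dc
G = B * Nbar
```

Because G = B N̄ cancels the estimator dynamics from y/r, the DC gain computed from the full closed loop equals that of the state-feedback loop alone, −C(A − BK)⁻¹B.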

Fall 2001

16.31 17–27

Figure 13: Example #1: G(s) = 8·14·20 / ( (s+8)(s+14)(s+20) ).

• Method #1: previous implementation.
• Method #2: previous, with the reference input scaled to ensure that the DC gain of y/r|DC = 1.
• Method #3: new implementation with both G = B N̄ and N̄ selected.

[Step Response plot comparing meth1, meth2, and meth3 over t = 0 to 1 sec; amplitudes range from 0 to about 1.4.]

Fall 2001

16.31 17–28

Figure 14: Example #2: G(s) = 0.94 / (s² − 0.0297).

• Method #1: previous implementation.
• Method #2: previous, with the reference input scaled to ensure that the DC gain of y/r|DC = 1.
• Method #3: new implementation with both G = B N̄ and N̄ selected.

[Step Response plot comparing meth1, meth2, and meth3 over t = 0 to 5 sec; amplitudes range from 0 to about 1.5.]

Fall 2001

16.31 17–29

Figure 15: Example #3: G(s) = 8·14·20 / ( (s−8)(s−14)(s−20) ).

• Method #1: previous implementation.
• Method #2: previous, with the reference input scaled to ensure that the DC gain of y/r|DC = 1.
• Method #3: new implementation with both G = B N̄ and N̄ selected.

[Step Response plot comparing meth1, meth2, and meth3 over t = 0 to 1 sec; amplitudes range from about −3 to 4.]

16.31 — Prof. J. P. How, T.A. TBD
Handout #3, September 5, 2001. Due: September 14, 2001.

16.31 Homework Assignment #1

1. Plot the root locus diagram for positive values of K for the solutions of the equation

   s³ + (5 + K)s² + (6 + K)s + 2K = 0

2. The open-loop transfer function of a closed-loop control system with unity negative gain feedback is

   G(s) = K / ( s(s + 3)(s² + 6s + 64) )

   Plot the root locus for this system, and then determine the closed-loop gain that gives an effective damping ratio of 0.707.

3. A unity gain negative feedback system has an open-loop transfer function given by

   G(s) = K(1 + 5s) / ( s(1 + 10s)(1 + s)² )

   Draw a Bode diagram for this system and determine the loop gain K required for a phase margin of 20 degs. What is the gain margin?

   (a) A lag compensator

       Gc(s) = (1 + 10s) / (1 + 50s)

       is added to this system. Use Bode diagrams to find the reduction in steady-state error following a ramp change to the reference input, assuming that the 20 deg phase margin is maintained.

4. Plot the Nyquist diagram for the plant with the unstable open-loop transfer function

   G(s) = K(s + 0.4) / ( s(s² + 2s − 1) )

   Determine the range of K for which the closed-loop system with unity negative gain feedback which incorporated this plant would be stable.
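For problem 1, the locus can be sketched numerically by tracking the closed-loop roots over a grid of gains. This is an illustration, not a full solution; `locus_roots` is a helper name introduced here.

```python
# Numeric sketch for problem 1: track the roots of
# s^3 + (5+K) s^2 + (6+K) s + 2K = 0 as K grows (locus_roots is a helper
# name introduced here, not from the handout).
import numpy as np

def locus_roots(K):
    return np.roots([1.0, 5.0 + K, 6.0 + K, 2.0 * K])

# K = 0 gives the open-loop poles: s^3 + 5 s^2 + 6 s = s (s+2)(s+3)
start = sorted(locus_roots(0.0).real)

# Rewriting as 1 + K L(s) = 0 with L(s) = (s^2 + s + 2) / (s^3 + 5 s^2 + 6 s),
# two branches approach the zeros of s^2 + s + 2 for large K
far = locus_roots(1e6)
```

Plotting `locus_roots(K)` for a log-spaced sweep of K traces the full locus.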

16.31 — Prof. J. P. How, T.A. TBD
Handout #4, September 14, 2001. Due: September 21, 2001.

16.31 Homework Assignment #2

1. (Root Locus Analysis) [FPE 3.32, page 159].

2. (Dominant Pole Locations) [FPE 3.36 (a), (c), (d), page 161].

   (a) State the steps that you would follow to show that this is the step response. Which inverse transforms would you use from the Tables?

3. (Basic Root Locus Plotting) Sketch the root locus for the following systems. As we did in class, concentrate on the real-axis, and the asymptotes/centroids.

   (a) GcG(s) = K / ( s(s² + 2s + 10) )

   (b) GcG(s) = K(s + 2) / s⁴

   (c) GcG(s) = K(s + 1)(s − 0.2) / ( s(s + 1)(s + 3)(s² + 5) )

   (d) Once you have completed the three sketches, verify the results using Matlab. How closely do your sketches resemble the actual plots?

4. The attitude-control system of a space booster is shown in Figure 2. The attitude angle θ is controlled by commanding the engine angle δ, which is then the angle of the applied thrust, FT. The vehicle velocity is denoted by v. These control systems are sometimes open-loop unstable, which occurs if the center of aerodynamic pressure is forward of the booster center of gravity. For example, the rigid-body transfer function of the Saturn V booster was

   Gp(s) = 0.9407 / (s² − 0.0297)

   This transfer function does not include vehicle bending dynamics, liquid fuel slosh dynamics, and the dynamics of the hydraulic motor that positioned the engine. These dynamics added 25 orders to the transfer function! The rigid-body vehicle was stabilized by the addition of rate feedback, as shown in Figure 2b. (Rate feedback, in addition to other types of compensation, was used on the actual vehicle.)

   (a) With KD = 0 (the rate feedback removed), plot the root locus and state the different types of (nonstable) responses possible (relate the response with the possible pole locations)

Figure 2: Booster control system

   (b) Design the compensator shown (which is PD) to place a closed-loop pole at s = −0.25 + j0.25. Note that the time constant of the pole is 4 sec, which is not unreasonable for a large space booster.

   (c) Plot the root locus of the compensated system, with Kp variable and KD set to the value found in (b).

   (d) Use Matlab to compute the closed-loop response to an impulse for θc.

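A quick sanity check for part (b) of the booster problem: a sketch that assumes a unity-feedback loop with compensator Kp + KD·s around Gp, so the gains here are illustrative, not the handout's answer key.

```python
# Sanity check for part (b): with unity feedback and a PD compensator
# Kp + KD*s (an assumed loop structure), the closed-loop characteristic
# equation is s^2 - 0.0297 + 0.9407 (Kp + KD s) = 0.
import numpy as np

# Desired poles -0.25 +/- j0.25  =>  s^2 + 0.5 s + 0.125
KD = 0.5 / 0.9407                  # matches the s coefficient
Kp = (0.125 + 0.0297) / 0.9407     # matches the constant coefficient

poles = np.roots([1.0, 0.9407 * KD, -0.0297 + 0.9407 * Kp])
# poles should come back at -0.25 +/- j0.25
```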

16.31 — Prof. J. P. How, T.A. TBD
Handout #5, September 21, 2001. Due: September 28, 2001.

16.31 Homework Assignment #3

1. Given the plant G(s) = 1/s², design a lead compensator so that the dominant poles are located at −2 ± 2j.

2. Determine the required compensation for the system

   G(s) = K / ( (s + 8)(s + 14)(s + 20) )

   to meet the following specifications:

   • Overshoot ≤ 5%
   • 10-90% rise time tr ≤ 150 msec

   Simulate the response of this closed-loop system to a step response. Comment on the steady-state error. You should find that it is quite large. Determine what modifications you would need to make to this controller so that the system also has

   • Kp > 6

   thereby reducing the steady-state error. Simulate the response of this new closed-loop system and confirm that all the specifications are met.

3. Develop a state space model for the transfer function (not in modal/diagonal form). Discuss what state vector you chose and why.

   G1(s) = (s + 1)(s + 2) / ( (s + 3)(s + 4) )     (1)

   (a) Develop a "modal" state space model for this transfer function as well.

   (b) Confirm that both models yield the same transfer function when you compute

       Ĝ(s) = C(sI − A)⁻¹ B + D


4. A set of state-space equations is given by:

   ẋ1 = x1(u − βx2)
   ẋ2 = x2(−α + βx1)

   where u is the input and α and β are positive constants.

   (a) Is this system linear or nonlinear, time-varying or time-invariant?

   (b) Determine the equilibrium points for this system (constant operating points), assuming a constant input u = 1.

   (c) Near the positive equilibrium point from (b), find a linearized state-space model of the system.

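Parts (b) and (c) of problem 4 can be sketched symbolically with SymPy; shown as a check on the hand derivation, not as the intended solution method.

```python
# Symbolic sketch of problem 4(b)-(c): equilibria and linearization of
# x1' = x1 (u - beta x2), x2' = x2 (-alpha + beta x1) with u = 1.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
alpha, beta = sp.symbols('alpha beta', positive=True)
f1 = x1 * (1 - beta * x2)            # u = 1
f2 = x2 * (-alpha + beta * x1)

eqs = sp.solve([f1, f2], [x1, x2], dict=True)
# equilibria: (0, 0) and (alpha/beta, 1/beta)

J = sp.Matrix([f1, f2]).jacobian([x1, x2])
A = J.subs({x1: alpha / beta, x2: 1 / beta})
# A = [[0, -alpha], [1, 0]]: purely imaginary eigenvalues at the positive equilibrium
```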

16.31 — Prof. J. P. How, T.A. TBD
Handout #5, September 28, 2001. Due: October 5, 2001.

16.31 Homework Assignment #4

1. State space response:

   (a) Assume that the input vector u(t) is a weighted series of impulse functions applied to the m system inputs, so that

       u(t) = [k1, …, km]ᵀ δ(t) = K δ(t)

       where the ki give the weightings of the various inputs. Use the convolution integral to show that the output response can be written as:

       yimp(t) = C e^{At} [x(0) + BK] + DK δ(t)

   (b) Repeat the process, but now assume that the inputs are step functions

       u(t) = [k1, …, km]ᵀ us(t) = K us(t)

       where us(t) is the unit step function at time zero. In this case show that the output can be written as:

       ystep(t) = C e^{At} x(0) + C A⁻¹ (e^{At} − I) BK + DK us(t)

   (c) If A⁻¹ exists, find the steady-state value for ystep(t).

   (d) Use the functions in (a) and (b) to find an analytic expression for the response of a system with zero initial conditions to an input

       0 t
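The formulas in parts (b) and (c) can be verified numerically with the matrix exponential; a sketch with an arbitrary 2-state system and step weighting.

```python
# Numerical check (a sketch with an arbitrary 2-state system) of the step
# formula: y_step(t) = C e^{At} x(0) + C A^{-1} (e^{At} - I) B K + D K.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
x0 = np.array([[1.0], [0.0]])
K = np.array([[2.0]])                 # step weighting: u(t) = K us(t)

def y_formula(t):
    E = expm(A * t)
    return (C @ E @ x0 + C @ np.linalg.solve(A, E - np.eye(2)) @ B @ K + D @ K).item()

def y_euler(t, dt=1e-4):
    """Brute-force forward-Euler integration for comparison."""
    x = x0.copy()
    for _ in range(int(round(t / dt))):
        x = x + dt * (A @ x + B @ K)
    return (C @ x + D @ K).item()

# Part (c): if A is invertible and stable, y_step -> -C A^{-1} B K
y_ss = (-C @ np.linalg.solve(A, B) @ K).item()
```

The closed-form expression matches brute-force integration, and the long-time value matches the steady-state answer −C A⁻¹ B K from part (c).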