Control Strategies for Dynamic Systems


ISBN: 0-8247-0661-7

This book is printed on acid-free paper.

Headquarters
Marcel Dekker, Inc.
270 Madison Avenue, New York, NY 10016
tel: 212-696-9000; fax: 212-685-4540

Eastern Hemisphere Distribution
Marcel Dekker AG
Hutgasse 4, Postfach 812, CH-4001 Basel, Switzerland
tel: 41-61-261-8482; fax: 41-61-261-8896

World Wide Web
http://www.dekker.com

The publisher offers discounts on this book when ordered in bulk quantities. For more information, write to Special Sales/Professional Marketing at the headquarters address above.

Copyright © 2002 by Marcel Dekker, Inc. All Rights Reserved.

Neither this book nor any part may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, microfilming, and recording, or by any information storage and retrieval system, without permission in writing from the publisher.

Current printing (last digit): 10 9 8 7 6 5 4 3 2 1

PRINTED IN THE UNITED STATES OF AMERICA

Preface

This book introduces the theory, design, and implementation of control systems. Its primary goal is to teach students and engineers how to effectively design and implement control systems for a variety of dynamic systems. The book is geared mainly toward senior engineering students who have had courses in differential equations and dynamic systems modeling and an introduction to matrix methods. The ideal background would also include some programming courses, experience with complex variables and frequency domain techniques, and knowledge of circuits. For students new to or out of practice with these concepts and techniques, sufficient introductory material is presented to facilitate an understanding of how they work. In addition, because of the book's thorough treatment of the practical aspects of programming and implementing various controllers, many engineers (from various backgrounds) should find it a valuable resource for designing and implementing control systems while in the workplace. The material herein has been chosen for, developed for, and tested on senior-level undergraduate students, graduate students, and practicing engineers attending continuing-education seminars. Since many engineers take only one or, at most, two courses in control systems as undergraduate students, an effort has been made to summarize the field's most important ideas, skills, tools, and methods, many of which could be covered in one semester or two quarters. Accordingly, chapters on digital controllers are included and can be used in undergraduate courses, providing students with the skills to effectively design and interact with microprocessor-based systems. These systems see widespread use in industry, and related skills are becoming increasingly important for all engineers. Students who hope to pursue graduate studies should find that the text provides sufficient background theory to allow an easy transition into graduate-level courses.
Throughout the text, an effort is made to use various computational tools and explain how they relate to the design of dynamic control systems. Many of the problems presented are designed around the various computer methods required to solve them. For example, various numerical integration algorithms are presented in the discussion of first-order state equations. For many problems, Matlab (a registered trademark of The MathWorks, Inc.) is used to show a comparative computer-based solution, often after the solution has been developed manually. Appendix C provides a listing of common Matlab commands.

CHAPTER SUMMARIES

Each chapter begins with a list of its major concepts and goals. This list should prove useful in highlighting key material for both the student and the instructor. An introduction briefly describes the topics, giving the reader a "big picture" view of the chapter. After these concepts are elaborated in the body of the chapter, a problem section provides an opportunity for reinforcing or testing important concepts.

Chapter 1 is an introduction to automatic control systems. The historical highlights of the field are summarized, mainly through a look at the pioneers who have had the greatest impact on control system theory. The beginning of the chapter also details the advancement of modern control theories (which resulted mainly from the development of feasible microprocessors) and leads directly into the next two sections, which compare analog with digital and classical with modern systems. Chapter 1 concludes with an examination of various applications of control systems; these have been chosen to show the diverse products that use controllers and the various controllers used to make a design become a reality.

Chapter 2 summarizes modeling techniques, relating common techniques to the representations used in designing control systems. The approach used here is different from that of most textbooks, in that the elements common to all dynamic systems are emphasized throughout the chapter. The general progression of this summary is from differential equations to block diagrams to state-space equations, which prefigures the order used in the discussion of techniques and tools for designing control systems. Also included here is a subsection illustrating the limitations and proper use of linearization.
Newtonian, energy, and power flow modeling methods are then summarized as means for obtaining the dynamic models of various systems. Inductive, capacitive, and resistive elements are used for all the systems, with emphasis placed on overcoming the commonly held idea that modeling an electrical circuit is vastly different from modeling a mechanical system. Finally, the chapter presents bond graphs as an alternative method offering many advantages for modeling large systems that encompass several smaller systems. Aside from its conceptual importance here, a bond graph is also an excellent tool for understanding how many higher-level computer modeling programs are developed and thus for avoiding fundamental modeling mistakes when using these programs.

Chapter 3 develops some of the concepts of Chapter 2 by presenting the techniques and tools required to analyze the resulting models. These tools include differential equations in the time domain; step responses for first- and second-order systems; Laplace transforms, which are used to enter the s-domain and construct block diagrams; and the basic block diagram blocks, which are used for developing Bode plots. The chapter concludes by comparing state-space methods with the analysis methods used earlier in the text.

Chapter 4 introduces the reader to closed-loop feedback control systems and develops the common criteria by which they can be evaluated. Topics include open-loop vs. closed-loop characteristics, effects of disturbance inputs, steady-state errors, transient response characteristics, and stability analysis techniques. The goal of the chapter is to introduce the tools and terms commonly used when designing and evaluating a controller's performance.

Chapter 5 examines the common methods used to design analog control systems. In each section, root locus and frequency domain techniques are used to design the controllers being studied. Basic controller types—such as proportional-integral-derivative (PID), phase-lag, and phase-lead—are described in terms of characteristics, guidelines, and applications. Included is a description of on-site tuning methods for PID controllers. Pole placement techniques (including gain matrices) are then introduced as an approach to designing state-space controllers.

Chapter 6 completes the development of the analog control section by describing common components and how they are used in constructing real control systems. Basic op-amp circuits, transducers, actuators, and amplifiers are described, with examples (including linear and rotary types) for each category. The focus here is not only on solving "text problems" but also on ensuring that the controller in question can be successfully implemented.

Chapter 7 brings the reader into the domain of digital control systems. A cataloging of the various examples of digital controllers serves to demonstrate their prevalence and the growing importance of digital control theory. Common configurations and components of the controllers noted in these examples are then summarized. Next, the common design methods for analog and digital controllers are compared. If a student with a background in analog controls begins the text here, this comparison should help to bridge the gap between the two types of controllers. The chapter concludes by examining the effects of sampling and introducing the z-transform as a tool for designing digital control systems.

Chapter 8 is similar to Chapter 4, but it applies performance characteristics to digital control systems.
Open- and closed-loop characteristics, disturbance effects, steady-state errors, and stability are again examined, but this time taking into account sample time and discrete signal effects.

Chapter 9, like Chapter 5, focuses on PID, phase-lag, and phase-lead controllers; in addition, it presents direct design methods applicable to digital controllers. Controller design methods include developing the appropriate difference equations needed to enter the implementation stage. Also included is a discussion of the effects of sample time on system stability.

Chapter 10 concludes the digital section by presenting the common components used in implementing digital controllers. Computers, microcontrollers, and programmable logic controllers (PLCs) are presented as alternatives. Methods for programming each type are also discussed, and a connection is drawn between the algorithms developed in the previous chapter and various hardware and software packages. Digital transducers, actuators, and amplifiers are examined relative to their role in implementing the controllers designed in the previous chapter. The chapter concludes with a discussion of pulse-width modulation, its advantages and disadvantages, and its common applications.

Chapter 11 is an introduction to advanced control strategies. It includes a short section illustrating the main characteristics and uses of various controllers, including feedforward, multivariable, adaptive, and nonlinear types. For each controller, sufficient description is provided to convey the basic concepts and motivate further study; some are described in greater detail, enabling the reader to implement them as advanced controllers.

Chapter 12 is most applicable for students or practicing engineers interested in fluid power and electrohydraulics. It applies many of the general techniques developed in the book (modeling, simulation, controller design, etc.) to fluid power systems. Several case studies illustrate the variety of applications that use electrohydraulics.

ACKNOWLEDGMENTS

It is perilous to begin listing the individuals and organizations that have influenced this work, since I will undoubtedly overlook many valuable contributors. Several, however, cannot go without mention. From the start, my parents taught me the values of God, family, friendship, honesty, and hard work, which have remained with me to this day. I am indebted to my mother, especially for her unending devotion to family, and to my father, for living out the old adage "An honest day's work for an honest day's pay" while demonstrating an uncanny ability to keep machines running well past their useful life. Numerous teachers, from elementary to graduate levels, have had a part in instilling in me the joy of teaching and helping others—if only I had recognized it at the time! Coaches Brian Diemer and Al Hoekstra taught me the values of setting goals, hard work, friendship, and teamwork. Professors Beachley, Fronczak, and Lorenz, along with my fellow grad students at the University of Wisconsin–Madison, were instrumental in a variety of ways, both academic and personal. The faculty and staff at the Milwaukee School of Engineering have been a joy to work with and have spent many hours helping and encouraging me. In particular, Professors Brauer, Ficken, Labus, and Tran, and the staff at the Fluid Power Institute have been important to me on a personal level. The staff at Marcel Dekker, Inc., has been very helpful in leading me through the authorial process for the first time. Finally—saving the most deserving until the end—I would like to express my gratitude to my wife, Kim, and my children, Rebekah, Matthew, and Rachel.
Being married to an engineer (especially one writing a book) is not the easiest task, and Kim provides the balance, perspective, and strength necessary to make our house a home. Finally, I am thankful for the pure joy I feel when I open the door after a long day and the children yell out, "Daddy, you're home!" Thank you, Lord.

John H. Lumkes, Jr.

Contents

Preface
1. Introduction
2. Modeling Dynamic Systems
3. Analysis Methods for Dynamic Systems
4. Analog Control System Performance
5. Analog Control System Design
6. Analog Control System Components
7. Digital Control Systems
8. Digital Control System Performance
9. Digital Control System Design
10. Digital Control System Components
11. Advanced Design Techniques and Controllers
12. Applied Control Methods for Fluid Power Systems
Appendix A: Useful Mathematical Formulas
Appendix B: Laplace Transform Table
Appendix C: General Matlab Commands
Bibliography
Answers to Selected Problems
Index

1 Introduction

1.1 OBJECTIVES

- Provide motivation for developing skills as a controls engineer.
- Develop an appreciation of the previous work and history of automatic controls.
- Introduce terminology associated with the design of control systems.
- Introduce common controller configurations and components.
- Present several examples of controllers available for common applications.

1.2 INTRODUCTION

Automatic control systems are implemented virtually everywhere, from work to play, from homes to vehicles, from serious applications to frivolous ones. Engineers having the necessary skills to design and implement automatic controllers will create new and enhanced products, changing the way people live. Controllers are finding their way into every aspect of our lives. From toasting our bread and driving to work to riding the train and traveling to the moon, control theory has been applied in an effort to improve the quality of life. Control engineers may properly be termed system engineers, since it is a system that must be controlled. These "systems" may be the read head of a hard disk, the laser position in a CD player, your vehicle (many systems), a factory production process, inventory control, or even the economy. Good engineers, therefore, must understand the modeling of systems. Modeling might include aeronautical, chemical, mechanical, environmental, civil, electrical, business, societal, biological, and political systems, or possibly a combination of these. It is an exciting field filled with many opportunities. For maximum effectiveness, control engineers should understand the similarities (laws of physics, etc.) inherent in all physical systems. This text seeks to provide a cohesive approach to modeling many different dynamic systems.

Almost all control systems share a common configuration of basic components. A closed-loop single input–single output (SISO) system, as shown in Figure 1, is an example of the basic components commonly required when designing control systems. This may be modified to include items like disturbance inputs, external inputs (i.e., wind, load, supply pressure), and intermediate physical system variables.

Figure 1 Basic control system layout.

The concept of a control system is quite simple: to make the output of the system equal to the input (command) to the system. In many products we find "servo-" as a prefix describing a particular system (servomechanism, servomotor, servovalve, etc.). The prefix "servo-" is derived from the Latin word servus, meaning slave or servant. The output in this case is a "slave" that follows the input. The command may be electrical or mechanical. For electrical signals, op-amps or microprocessors are commonly used to determine the error (or perform as the summing junction in terms of the block diagram). In a mechanical system, a lever might be used to determine the error input to the controller. As we will see, the controller itself may take many forms. Although electronics are becoming the primary components, physical components can also be used to develop proportional-integral-derivative (PID) controllers. An example is sometimes seen in pneumatic systems, where bellows can provide proportional and integral actions and flow valves derivative action. The advantages of electronics, however, are numerous. Electronic controllers are cheaper, more flexible, and capable of discontinuous and adaptive algorithms. Today's microprocessors are capable of running multiple controllers and have algorithms that can be updated simply by reprogramming the chip. The amplifier and actuator are critical components to select properly, as they are prone to saturation and failure if not sized correctly. The physical system may be a mathematical model during the design phase, but ultimately the actuator must be capable of producing some input into the physical device that has a direct effect on the desired output. Other than for simple devices with many inherent assumptions, the physical system can seldom be represented by one simple block or equation. Finally, a sensor must be available that is capable of measuring the desired output. It is difficult to control a variable that cannot be measured.
Indirect control through observers adds complexity and limits performance. Sensor development is in many ways the primary technology enabling advanced controllers. Additionally, the sensor must be capable of enduring the environment in which it is placed.

To translate this into something we are all familiar with, let's modify the general block diagram to represent a cruise control system found on almost all automobiles. This is shown in Figure 2. When you decide to activate your cruise control, you accelerate the vehicle to its desired operating speed and press the set switch, which in turn signals to the controller that the current voltage is the level at which you wish to operate. The controller begins determining the error by comparing the set point voltage with the feedback voltage from the speed transducer. The

Figure 2 Automobile cruise control system example.

speed transducer might consist of a magnetic pickup on a transmission gear whose signal is then conditioned into a voltage proportional to its frequency. Assuming for now a simple proportional controller, the error, in volts, is multiplied by the controller gain, resulting in a new voltage level. This signal is then amplified to the point where it is capable of moving the throttle position, usually with the help of engine manifold vacuum. The engine throttle is then opened or closed, depending on whether the error is positive or negative, which changes the torque output from the engine. The change in torque results in an acceleration or deceleration of the vehicle, hopefully to the desired speed. As the vehicle speed approaches the desired speed, the error decreases, which decreases the actuator signal, and the car gradually approaches the set point. As we will see, there is much more to this simplified explanation, but the basics should provide the proper perspective until a more complete treatment is reached.

One more addition is noted here: that of the disturbance input. As we all know, when the vehicle encounters an incline, more throttle input—and hence more torque—is required to maintain the current vehicle speed. In the case of cruise control systems, disturbance torque on the engine commonly arises from hills and wind gusts. By including this in our model, we can design a control system that is capable of handling these inputs.

This serves as the backdrop for the remaining sections. The goal is to examine each significant block presented above, beginning with models for each block, followed by physical components representing each block, and concluding with a summary of how these are combined to design, simulate, and build a functional controller.
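The proportional cruise-control behavior just described can also be sketched numerically. The book develops its computer solutions in Matlab; the following Python sketch is a rough equivalent, and every numerical value in it (vehicle mass, drag coefficient, gain, set point) is invented for illustration only. It also previews a property examined later in the text: a purely proportional controller settles with a nonzero steady-state error.

```python
# Minimal simulation of the proportional cruise-control loop.
# Illustrative vehicle model (not from the text): m*dv/dt = F - b*v
m = 1200.0       # vehicle mass [kg] (assumed)
b = 60.0         # lumped drag/rolling-resistance coefficient [N*s/m] (assumed)
Kp = 800.0       # proportional controller gain [N per (m/s) of error] (assumed)
setpoint = 27.0  # desired speed [m/s], roughly 60 mph
dt = 0.05        # Euler integration step [s]

v = 20.0         # speed when the set switch is pressed [m/s]
for _ in range(int(60.0 / dt)):      # simulate 60 seconds
    error = setpoint - v             # summing junction: command minus feedback
    force = Kp * error               # proportional control action (actuator force)
    v += (force - b * v) / m * dt    # integrate the vehicle dynamics

# A pure proportional controller settles where Kp*error exactly balances
# drag, leaving a nonzero steady-state error below the set point.
steady_state = Kp * setpoint / (Kp + b)
print(f"final speed {v:.2f} m/s, predicted steady state {steady_state:.2f} m/s")
```

The residual error at the end of the run is the classical motivation for adding integral action, i.e., moving from a P to a PI or PID controller.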
Some examples presented are based on the field of electrohydraulics, an area still lagging in image but whose capabilities are finally being fully realized through the application of modern modeling, simulation, and control theory specifically designed for fluid power applications.

1.3 BRIEF HISTORY OF AUTOMATIC CONTROLS

The history of automatic controls is rich, long, and diverse, as illustrated in works by Mayr (1970) and Fuller (1976). Early work centered on developing intuitive solutions to problems encountered at the time. Beginning with the Greeks, float regulators were used as early as 300 B.C. to track time. The float regulators allowed accurate timekeeping by maintaining a constant liquid level in a water tank, thus providing a constant flow through an outlet (fixed orifice). This constant flow was accumulated in a second tank as a measure of time. These water clocks were used until mechanical clocks arrived in the fourteenth century. Although commonly classified as control systems, designs of this period were intuitively based; mathematical and analytical techniques had yet to be applied to solving more complex problems.

Two things happened late in the eighteenth century that would prove to be of critical significance when combined in the next century. First, in 1788 James Watt (1736–1819) designed the centrifugal fly ball governor for the speed control of a steam engine. This relatively simple but very effective device used centrifugal force to move rotating masses outward, thereby causing the steam valve to close and resulting in a constant engine speed. Although earlier speed and pressure regulators had been developed [windmills in Britain, 1745; flow of grain in mills, sixteenth century; temperature control of furnaces by Cornelis J. Drebbel (1572–1634) of Holland, seventeenth century; and pressure regulators for steam engines, 1707], Watt's governor was externally visible, and it became well known throughout Europe, especially in the engineering discipline. Earlier steam engines were regulated by hand and were difficult to use in the developing industries; the start of the Industrial Revolution is commonly attributed to Watt's fly ball governor.

Second, during the same era, the mathematical tools required for analyzing control systems were developed. Building on the earlier development of differential equations by Isaac Newton (1642–1727) and Gottfried Leibniz (1646–1716) in the late seventeenth and early eighteenth centuries, Joseph Lagrange (1736–1813) began to use differential equations to model and analyze dynamic systems during the time that Watt developed his fly ball governor.
Lagrange's work was further developed by Sir William Hamilton (1805–1865) in the nineteenth century. The significant combination of these two events came in the nineteenth century, when George Airy (1801–1892), professor at Cambridge and Royal Astronomer at Greenwich Observatory, built a speed control unit for a telescope to compensate for the rotation of the earth. Airy documented the possibility of unstable motion in feedback systems in his paper "On the Regulator of the Clock-work for Effecting Uniform Movement of Equatorials" (1840).

After Airy, James Maxwell (1831–1879) systematically analyzed the stability of a governor resembling Watt's governor. He published a mathematical treatment, "On Governors," in the Proceedings of the Royal Society (1868), in which he linearized the differential equations of motion, found the characteristic equation, and demonstrated that the system is stable if the roots of the characteristic equation have a negative real component (see Sec. 3.4.3.1). This is commonly regarded as the founding work in the field of control theory.

From here the mathematical theory of feedback was developed by names still associated with the field today. Once Maxwell described the characteristic equation, Edward Routh (1831–1907) developed a numerical technique for determining system stability from the characteristic equation. Interestingly, Routh and Maxwell overlapped at Cambridge, both beginning at Peterhouse, where shortly after Routh's arrival Maxwell was advised to transfer to Trinity because Routh was his equal in mathematics. Routh was Senior Wrangler (highest academic marks), whereas Maxwell was Second Wrangler (second highest). At approximately the same time in Germany, and unaware of Routh's work, Adolf Hurwitz (1859–1919), at the request of Aurel Stodola (1859–1952), also solved and published the method by which system stability could be determined without solving the differential equations. Today this method is commonly called the Routh–Hurwitz stability criterion (see Sec. 4.4.1). Finally, Aleksandr Lyapunov (1857–1918) presented his methods in 1899 for determining the stability of ordinary differential equations. Relative to the control of dynamic systems, nonlinear ones in particular, the importance of his work on differential equations, potential theory, stability of systems, and probability theory is only now being realized.

With the foundation formed, the twentieth century saw the most explosive growth in the application and further development of feedback control systems. Three factors helped to fuel this growth: the development of the telephone, World War II, and microprocessors. Around 1922, the Russian engineer Nicholas Minorsky (1885–1970) analyzed and developed three-mode controllers for automatic ship steering systems. His work laid the foundation for the common PID controller. Near the same time, and driven largely by the development of the telephone, Harold Black (1898–1983) invented the electronic feedback amplifier and demonstrated the usefulness of negative feedback in amplifying the voice signal as required for traveling long distances over wire. Along with Harold Hazen's (1901–1980) paper on the theory of servomechanisms, this period marked a major increase in the interest in and study of automatic control theory. Black's work was further built on by two pioneers of the field, Hendrik Bode (1905–1982) and Harry Nyquist (1889–1976). In 1932, working at Bell Laboratories, Nyquist developed his stability criterion based on the polar plot of a complex function. Shortly thereafter, in 1938, Bode used magnitude and phase frequency response plots and introduced the idea of gain and phase stability margins (see Sec. 4.4.3).
The impact of their work is evident in the commonplace use of Nyquist and Bode plots when designing and analyzing automatic control systems in the frequency domain. The first large-scale application of control theory came during World War II, when feedback amplifier theory and PID control actions were combined to deal with the new complexity of aircraft and radar systems. Although much of the work did not surface until after the war, great advances were made in the control of industrial processes and complex machines (airplanes, radar systems, artillery guidance). Soon after the war, W. Evans (1920–1999) published his paper "Graphical Analysis of Control Systems" (1948), which presented the techniques and rules for graphically tracing the migrations of the roots of the characteristic equation. The root locus method remains an important tool in control system design (see Sec. 4.4.2). At this point in history, the root locus and frequency response techniques were incorporated into the general engineering curriculum, textbooks were written, and this general class of techniques came to be known as classical control theory.

While classical control theory was maturing, work accomplished in the late 1800s (time domain differential equation techniques) was being revisited with the arrival of the computer. Lyapunov's work, combined with the capabilities of the computer, allowed his contribution to be more fully realized. The incentive arose from the need to effectively control nonlinear multiple input–multiple output (MIMO) systems. While classical techniques are very effective for linear time-invariant (LTI) SISO systems, the complexity increases rapidly when the attempt is made to apply these techniques to nonlinear, time-variant, and/or MIMO systems. The computationally intensive but simple-to-program steps used in the time domain are well adapted to these complex systems when coupled with microprocessors.
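Maxwell's stability condition and the Routh–Hurwitz criterion described above lend themselves to a short computational illustration. The book's computer examples use Matlab; the following Python sketch of the basic Routh array tabulation is an equivalent, with arbitrarily chosen example polynomials, and it deliberately omits the textbook special cases (a zero in the first column, a row of zeros):

```python
def routh_hurwitz_stable(coeffs):
    """Routh-Hurwitz test: True if all roots of the characteristic
    polynomial (coefficients in descending powers of s) lie in the
    left half-plane.  Assumes a positive leading coefficient and no
    zero in the first column (special cases are not handled)."""
    n = len(coeffs)
    if n < 2:
        return True
    # The first two rows of the Routh array interleave the coefficients.
    row1 = list(coeffs[0::2])
    row2 = list(coeffs[1::2])
    row2 += [0.0] * (len(row1) - len(row2))
    table = [row1, row2]
    # Each subsequent row is computed from the two rows above it.
    for i in range(2, n):
        prev, pprev = table[i - 1], table[i - 2]
        row = [(prev[0] * pprev[j + 1] - pprev[0] * prev[j + 1]) / prev[0]
               for j in range(len(prev) - 1)]
        row.append(0.0)
        table.append(row)
    # Stable iff the first column shows no sign changes (all positive here).
    return all(r[0] > 0 for r in table[:n])

# s^2 + 3s + 2 = (s + 1)(s + 2): both roots in the left half-plane
print(routh_hurwitz_stable([1, 3, 2]))     # True
# s^3 + s^2 + 2s + 8 = (s + 2)(s^2 - s + 4): complex roots at 0.5 +/- j1.94
print(routh_hurwitz_stable([1, 1, 2, 8]))  # False
```

The same conclusion could of course be reached by computing the roots directly; the historical value of the Routh array is precisely that it decides stability without solving the characteristic equation.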

Work on using digital computers as automatic controllers began in the 1950s, when the aerospace company TRW developed a MIMO digital control system. Although the cost of a computer at that time was still prohibitive, many companies and research organizations realized the future potential and followed the work closely. Whereas an analog system's cost continued to increase as controller size increased, a digital computer could handle multiple arrangements of inputs and outputs, and for large systems the initial cost could be justified. By the early 1960s, multiple digital controllers were operating in a variety of applications and industries.

The 1960s also saw the introduction of many new theories, collectively referred to as modern control theory. In the span of several years, Rudolf Kalman, along with colleagues, published several papers detailing the application of Lyapunov's work to the time-domain control of nonlinear systems, optimal control, and optimal filtering (the Kalman discrete and continuous filters). Classical techniques were also revisited, and extensions were developed to allow digital controller design. This new field has seen explosive growth since the 1960s and the era of solid-state devices. The 1970s saw the microcomputer come of age, along with the microprocessor, and in 1983 the PC, or personal computer, was introduced. It is safe to say that things have not been the same since. Although these devices have existed for only a relatively short time, we now take for granted powerful computers, analog-to-digital and digital-to-analog converters, programmable logic controllers (PLCs), and microcontrollers. Today's applications are remarkably diverse for such a short history. Process control, aircraft systems, space flight, automobiles, off-road equipment, home appliances, portable devices, and so on will never be viewed in the same way since the microprocessor and digital control theory were introduced.
In spite of these recent advances, it is safe to say that control theory may still be considered, in many respects, to be in its infancy. Hopefully we have gained some appreciation of the history behind the development of control theory. This book presents both classical and modern theories and attempts to develop and teach them in a way that does justice to those who have so ambitiously laid the foundation.

1.4 ANALOG VERSUS DIGITAL CONTROL SYSTEMS

Although many controllers today are implemented in the digital domain due to the advent of low cost microprocessors and computers, understanding the basic theory in the analog domain is required to understand the concepts presented when examining digital controllers. In addition, the world we live and operate in is one of analog processes and systems. A hydraulic cylinder does not operate at 10 discrete pressures but may pass through an infinite resolution of pressures (i.e., continuous) during operation. Interfacing a computer with a continuous system then involves several problems since the computer is a digital device. That is, the computer is limited to a finite number of positions, or discrete values, with which to represent the real physical system. Additional problems arise from the fact that computers are limited by how often they can measure the variable in question. Virtually all processes/systems we desire to control are continuous systems. As such, they naturally incline themselves to analog controllers. Vehicle speed, temperature, pressure, engine speed, angles, altitude, and height level are all continuous signals. When we think of controlling the temperature in our house, for example,

Introduction

7

we talk about setting the thermostat (controller) at a specific temperature with the idea that the temperature inside the house will match the thermostat setting, even though this is not the case, since our furnace is either on or off. Since temperature is a continuous (analog) signal, this is the intuitive approach. Even with digital controllers (or, in the case of older thermostats, a nonlinear electromechanical device) we generally discuss the performance in terms of continuous signals and/or measurements. True analog controllers have a continuous signal input and continuous signal output. Any desired output level, at least in theory, is achievable. Many mechanical devices (Watt's steam engine governor, for example) and electrical devices (operational-amplifier feedback devices) fall into this category. Analog controllers are found in many applications, and for LTI and SISO systems they have many advantages. They are simple, reliable devices and, in the case of purely mechanical feedback systems, do not require additional support components (i.e., regulated voltage supply, etc.). The common PID controller is easily constructed using analog devices (operational amplifiers), and many control problems are satisfactorily solved using these controllers. The majority of controllers in use today are of the PID type (both analog and digital). Perhaps as important, for those who desire to pursue a career in control systems, is the fact that a secure grasp of analog control theory allows one to intuitively grasp many of the advanced nonlinear and/or digital control schemes. A digital controller, on the other hand, generally involves processing an analog signal to enable the computer to effectively use it (digitization), and when the computer is finished, it must in many cases convert the signal from its digital form back to its native analog form. Each time this conversion takes place, another source of error, another delay, and a loss of information occurs.
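The round-trip error introduced by conversion can be made concrete with a short sketch. An ideal n-bit converter over an assumed 0-5 V range is modeled below (the range, resolution, and signal are illustrative choices, not tied to any particular hardware); the recovered signal is never off by more than half of one quantization step:

```python
import math

def quantize(x, n_bits, v_min=0.0, v_max=5.0):
    """Ideal n-bit A/D-D/A round trip over [v_min, v_max]."""
    step = (v_max - v_min) / (2 ** n_bits - 1)
    x = min(max(x, v_min), v_max)          # converter saturates out of range
    return v_min + round((x - v_min) / step) * step

# Sample a continuous-valued "analog" signal and digitize it
samples = [2.5 + 2.0 * math.sin(2 * math.pi * k / 100) for k in range(100)]
digitized = [quantize(s, n_bits=8) for s in samples]
max_err = max(abs(a - d) for a, d in zip(samples, digitized))
# max_err is bounded by half of one step: (5 V / 255) / 2, roughly 10 mV
```

Doubling the word length to 16 bits shrinks this bound by a factor of 256, which is why converter resolution (together with sampling rate) largely determines how much information the loop loses.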
As the speed of processors and the resolution of converters increase, these issues are minimized. Some techniques lend themselves quite readily to digital controllers, as one or more of their signals are digital from the beginning. For example, any device or actuator capable of only on or off operation can simply use a single output port whose state is either high or low (although amplifiers, protection devices, etc., are usually required). As computers continue to get more powerful while prices decline, digital controllers will continue to make major inroads into all areas of life. Digital controllers, while having some inherent disadvantages as mentioned above, have many advantages. It is easy to perform complex nonlinear control algorithms, a digital signal does not drift, advanced control techniques (fuzzy logic, neural nets) can be implemented, economical systems are capable of many inputs and outputs, friendly user interfaces are easily implemented, they have data logging and remote troubleshooting capabilities, and, since the program can be changed, it is possible to update or enhance a controller without making any physical adjustments. As this book endeavors to teach, building a good foundation of mathematical modeling and intuitive classical tools will enable control system designers to move confidently to later sections and apply themselves to digital and advanced control system design.
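An on/off output port of the kind just described leads naturally to bang-bang control, the strategy a household thermostat uses. A minimal sketch (the hysteresis band is an assumed value; real thermostats add anticipators and minimum run times):

```python
def thermostat(temp, setpoint, furnace_on, hysteresis=0.5):
    """Bang-bang (on/off) control with hysteresis to avoid rapid cycling."""
    if temp < setpoint - hysteresis:
        return True               # too cold: turn the furnace on
    if temp > setpoint + hysteresis:
        return False              # warm enough: turn the furnace off
    return furnace_on             # inside the band: hold the current state
```

Even though the actuator has only two states, we still describe the performance (cycling period, temperature swing) in terms of the continuous temperature signal, as discussed above.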

1.5 MODERN VERSUS CLASSICAL CONTROL THEORY

The first portion of this book primarily discusses classical control theory applied to LTI SISO systems. Classical controls are generally discussed using Laplace operators in the complex frequency domain and root locus and frequency domain plots are


used in analyzing different control strategies. System representations like state space are presented alongside transfer functions in this text to develop the skills leading to modern control theory techniques. State space, while useful for LTI SISO systems, lends itself more readily to topics included in modern control theory. Modern control theory is a time-based approach applicable to linear and nonlinear, MIMO, time-invariant, or time-varying systems. When we look at classical control theory and recognize its roots in feedback amplifier design for telephones, it comes as no surprise to find the method based in the frequency domain using complex variables. The names of those playing a pivotal role have remained, and terms like Bode plots, Nichols plots, Nyquist plots, and Laplace transforms are common. Classical control techniques have several advantages. They are much more intuitive to understand and even allow many of the important calculations to be done graphically by hand. Once the basic terminology and concepts are mastered, the jump to effective, robust, and achievable designs is quite easy. Because transfer functions are the primary method used to describe physical system behavior, both open-loop and closed-loop systems are easily analyzed. Systems are easily connected using block diagrams, and only the input/output relationships of each system are important. It is also relatively easy to take experimental data and accurately model the data using a transfer function. Once transfer functions are developed, all of the tools like frequency plots and root locus plots are straightforward and intuitive. The price at which this occurs is reflected in the accompanying limitations. With some exceptions, classical techniques are best suited for LTI SISO systems. The process rapidly becomes more trial-and-error and less intuitive when nonlinear, time-varying, or MIMO systems are considered.
Even so, techniques have been developed that allow these systems to be analyzed. Based on its strengths and weaknesses, classical theory remains an effective, and quite common, means of introducing and developing the concept of automatic controller design. Modern control theory has developed quickly with the advent of the microprocessor. Whereas classical techniques can be carried out graphically by hand, modern techniques require the processing capabilities of a computer for optimal results. As systems become more complex, the advantages of modern control theory become more evident. Because modern control theory is based in the time domain and, when linearized, in matrix form, implementing it is as easy for MIMO systems as it is for SISO systems. In terms of matrix algebra the operations are the same. As is true in other matrix operations, the programming effort remains almost the same even as system size increases. The opposite effect is evident when doing matrix operations by hand. Additional benefits are adaptability to nonlinear systems using Lyapunov theories and the ability to determine the optimal control of the system. Why not start, then, with modern control theory? First, the intuitive feel evident in classical techniques is diminished in modern techniques. Instead of input/output relationships, sets of matrices or first-order differential equations are used to describe the system. Although the skills can be taught, the understanding and ramifications of different designs are less evident. It becomes more "math" and less "design." Also, although it is simple in theory to extend modern techniques to larger systems, in actual systems, complete with modeling errors, noise, and disturbances, the performance may be much less than expected. Classical techniques are inherently more robust. In using matrices (preferred for computer programming) the system must generally be linearized, and we end up back with the same problem inherent in classical techniques.
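The size-independence of the matrix formulation can be sketched in a few lines: the same forward-Euler loop integrates x' = Ax + Bu no matter how many states or inputs the matrices describe. The mass-spring-damper values below (m = 1, b = 2, k = 5, constant force F = 10) are illustrative assumptions:

```python
def mat_vec(A, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def simulate(A, B, u, x0, dt, steps):
    """Forward-Euler integration of x' = A x + B u with u held constant."""
    x = list(x0)
    for _ in range(steps):
        dx = [ax + bu for ax, bu in zip(mat_vec(A, x), mat_vec(B, u))]
        x = [xi + dt * dxi for xi, dxi in zip(x, dx)]
    return x

# Mass-spring-damper in state-space form; the states are position and
# velocity, and the single input is the applied force F. A larger MIMO
# system would only change the sizes of A, B, u, and x0, not the code.
A = [[0.0, 1.0], [-5.0, -2.0]]
B = [[0.0], [1.0]]
x_final = simulate(A, B, u=[10.0], x0=[0.0, 0.0], dt=0.001, steps=20000)
# the position state settles near the static deflection F/k = 10/5 = 2
```

Doing the same computation by hand for a ten-state system would be far more work, which is the point made above.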


Finally, the simplest designs often require knowledge of each state (the parameters describing current system behavior, e.g., the position and velocity of a mass), which for larger systems is quite often not feasible, either because the necessary sensors do not exist or because the cost is prohibitive. In this case observers are often required, the complexity increases, and the modeling accuracy once again becomes very important. The approach in this book, and most others, is to develop the classical techniques and then move into modern techniques and digital controls. Modern control theories and digital computers are a natural match, each requiring the other for maximum effectiveness. To maintain a broad base of skills, both classical and modern control theories are extended into the digital controller realm: classical techniques because that is where many current digital controllers have migrated from, and modern techniques because that is where many are beginning to come from.

1.6 COMMON CONTROL SYSTEM APPLICATIONS

Regardless of how broadly we define a controller, it is safe to say that their use is pervasive and that any growth forecast would include the word exponential. This section lists some of the common and not-so-common controller applications, while admitting that it is not even remotely a comprehensive list. By the end, it should be clear what the outlook and demand are for controllers and for engineers who understand and can apply the theory. One startling example can be illustrated by examining the automobile. In 1968 the first microprocessor was used on a car. What is interesting is that it was not a Ferrari, Porsche, or the like; it was the Volkswagen 1600. This microprocessor regulated the air–fuel mixture in an electronic fuel injection system, boosting performance and fuel efficiency. Where have we gone since then? The 1998 Lincoln Continental has 40 microprocessors, allowing it to process more than 40 million instructions per second. Current vehicles offer stability control systems that cut the power or apply the brakes to correct situations where the driver begins to lose control. Global positioning satellites are already interfaced with vehicles for directions and as emergency locators. The movement evident in automotive engineering is also rapidly affecting virtually all other areas of life. Even in off-road construction equipment the same movement is evident. In an industry whose image is far from clean, precise equipment, current machines may have over 15 million lines of code controlling their systems. Many have implemented dual processors to handle the processing. The idea presented in Table 1 is that control systems affect almost every aspect of our daily life. As we will see, common components are evident in every example listed. While the modeling skills tend to be specific to a certain application or technology, the basic controller theory is universal to all.

1.6.1 Brief Introduction to Purchasing Control Systems

The options when purchasing control systems are abundant. Different applications will likely require different solutions. For common applications there are probably 

Phillips W. On the road to efficient transportation, engineers are in the driver’s seat. ASME News, December 1998.

Table 1  Common Controller Applications

Common large-scale applications:
  Motion control: hydraulic, pneumatic, electrical
  Vehicle systems: engine systems, cruise control, ABS, climate control, etc.
  Electronic amps: telephones, stereos, cell phones, audio and RF, etc.
  Robotics: welding, assembly, dangerous tasks, machining, painting, etc.
  Military: flight systems, target acquisition, transmissions, antennas, radar
  Aerospace: control surfaces, ILS, autopilot, cabin pressurization
  Computers: disk drives, printers, CD drives, scanners, etc.
  Agriculture: GPS interface for planting, watering, etc.; tractors, combines, etc.
  Industrial processes: manufacturing, production, repair, etc.

Common home applications:
  Refrigerator, stereo, washing machine, clothes dryer, bread machine, furnace and air-conditioning thermostat, oven, and water heater

Common "human" applications:
  Driving your car, filling a bucket with water, welding, etc.

several "off-the-shelf" controllers available. Off-the-shelf controllers usually contain the required hardware and software for the application, as opposed to the designer choosing each component separately. Many design requirements may be met by choosing an appropriate off-the-shelf controller. These controllers may be specific to one application or general to many. This section discusses some of the primary advantages and disadvantages of using these controllers. When volumes increase, when the controller is embedded in an application, or when unique systems are being developed, it becomes more likely that a specific design is required and the designer must choose each component individually. The choice may be driven by cost or performance. Of course, when actually choosing a controller, we find the whole spectrum represented and must decide where along the scale is best. Regardless of the controller used, the basic theory presented in this book will help users understand, tune, and troubleshoot controllers. Basic architectures you might expect to find include PLCs, microprocessor-based controllers, and op-amp comparators. General and specific controllers, using various architectures, are abundant, and virtually any combination is possible. What follows here is a brief sampling of what you might find while searching for a controller. It is not intended to promote or highlight one specific product but rather represents a range of the types available.

1.6.1.1 General Microprocessor-Based Controllers

General controllers are applicable to many systems (Figure 3). They offer similar flexibility to using a computer or microprocessor and in general act as programmable microprocessors with flexible programming support, multiple analog and digital input/output (I/O) configurations, onboard memory, and serial ports. Microprocessor-based controllers packaged with the supporting hardware and software are often called microcontrollers. With the advancements in miniaturization and surface mount technologies, the term is descriptive of modern controllers. A typical (micro)controller may have several types of analog and digital input channels, may be able to supply up to 5 A of current through relays (on–off control

Figure 3  General programmable controller (Courtesy of Sylva Control Systems Inc.).

only, no proportionality), and provide voltage (0–5 Vdc) or current (4–20 mA) commands to any compatible actuator. As such, when designing a control system around a unit like this, you must still provide the amplifiers and actuators (i.e., an electronic valve driver card if an electrohydraulic valve is used). Advantages are as follows: a dedicated computer and data acquisition system are not required, faster and more consistent speeds due to a dedicated microprocessor, optically isolated inputs for protection, supplied programming interfaces, and flexibility. Many controllers are packaged with software and cables, allowing easy programming. Disadvantages are cost, discrete signals that require more knowledge when troubleshooting, and complexity when compared to other systems. With the progress being made with microprocessors and supporting electronic components, this type of controller will continue to expand its range. For embedded high-volume controllers, it becomes likely that similar systems (microprocessor, signal conditioning, interface devices, and memory) will be designed into the main system board and integrated with other functions. In this case, more extensive engineering is required to properly design, build, and use such controllers. An example of such applications is the automobile, where vehicle electronic control modules are designed to perform many functions specific to one (or several) vehicle production lines. The volume is large enough and the application specific enough to justify the extra development cost. It is still common for these applications to be developed by third-party providers (original equipment manufacturers).
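A routine task with these standard signal ranges is scaling the loop current into engineering units. A sketch of the usual linear mapping for a 4–20 mA input (the 0–100% output span and the fault check are illustrative assumptions; a reading below the 4 mA "live zero" usually indicates a broken wire):

```python
def ma_to_command(current_ma, lo=4.0, hi=20.0, out_min=0.0, out_max=100.0):
    """Map a 4-20 mA loop current onto a command range (here, percent)."""
    if not lo <= current_ma <= hi:
        raise ValueError("loop current out of range: possible wiring fault")
    span = (out_max - out_min) / (hi - lo)
    return out_min + (current_ma - lo) * span
```

For example, a mid-scale reading of 12 mA maps to 50% of the command span.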

1.6.1.2 Application-Specific Controller Examples

For common applications it may be possible to have a choice of several different off-the-shelf controllers. These may require as little as installation and tuning before becoming operational. For small production volumes, for proof-of-concept testing, or when time is


short, application-specific controllers have several advantages. Primarily, the development work for the application has already been done (by others). Actuators, signal conditioners, fault indicators, safety devices, and packaging concerns have already been addressed. Perhaps the largest disadvantage, besides cost, is the loss of flexibility. Unless our application closely mirrors the intended operation, and sometimes even then, it may become more trouble than it is worth to adapt and/or modify the controller to get satisfactory performance. In other words, each application, even within the same type (i.e., engine speed control), is a little different, and it is impossible for the original designer to know all the applications in advance. For examples of this controller type, let us look at two internal combustion (IC) engine speed controllers. First, to illustrate how specific a controller might be, let us examine the automatic engine controller shown in Figure 4. It does not include the actual actuators and consists only of the "electronics." Looking at its abilities will illustrate how specific it is to one task, controlling engines. This controller is capable of starting an engine and monitoring (and shutting down, if required) oil pressure, temperature, and engine speed. It also can be used to preheat with glow plugs (diesels), close an air gate during overspeed conditions, and provide choke during cold starts. Remember that a sensor is required for an action on any variable to occur. The module may accept engine speed signals based on a magnetic pickup or an alternator output. The functions included with the controller are common ones and save the time required to develop similar ones. The packaging is such that it can be mounted in a variety of places. The main disadvantages are limited flexibility if additional functions or packaging options are required, cost, and the requirement that all sensors be compatible with the unit.
Second, let's consider an example of a specific off-the-shelf controller that includes one or more actuators. The control system shown in Figure 5 includes the speed pickup (magnetic pickup), electronic controller, and actuator (proportional solenoid) as required for a typical installation. Using this type of controller is as simple as providing a compatible voltage signal proportional to the engine speed, connecting the linear proportional solenoid

Figure 4  Specific application controller example (Courtesy of DynaGen Technologies Inc.).

Figure 5  Specific application controller example (Courtesy of Synchro-Start Products Inc.).

to the throttle plate linkage, and tuning the controller. The controller may be purchased as either a PI or a PID controller. Which one is desired? As we will soon see, depending on conditions, either one might be appropriate. Understanding the design and operation of control systems is important even when choosing and installing "black box" systems. The advantages and disadvantages are essentially the same as for the first example in this section.
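The PI-versus-PID question becomes clearer when the discrete form of the control law is written out. The sketch below is the textbook algorithm only (the first-order engine model, gains, and setpoint are illustrative assumptions, not the internals of the products shown); setting kd to zero gives the PI case:

```python
class PID:
    """Textbook discrete PID: u = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive an assumed first-order engine model tau*w' + w = K*u toward 1500 rpm;
# the integral term removes the steady-state offset a pure P controller leaves.
pid = PID(kp=0.5, ki=2.0, kd=0.0, dt=0.01)      # kd = 0 gives the PI case
w, tau, K = 0.0, 0.5, 100.0
for _ in range(3000):
    u = pid.update(setpoint=1500.0, measurement=w)
    w += 0.01 * (K * u - w) / tau               # Euler step of the plant
```

Adding derivative action (kd > 0) damps the transient but amplifies measurement noise, which is one reason either choice might be appropriate depending on conditions.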

1.6.1.3 Programmable Logic Controllers (Modern)

PLCs may take many forms and no longer represent only a sequential controller programmed using ladder logic. Modules may be purchased that range from advanced microprocessor-based controllers implementing adaptive algorithms, to standard PID controllers, to simple relays. Historically, PLCs handled large numbers of digital I/Os to control a variety of processes, largely through relays, leading to the word logic in their description. Today, PLCs are usually distinguished from other controllers by the following characteristics: rugged construction to withstand vibrations and extreme temperatures, inclusion of most interfacing components, and an easy programming method. Being modular, a PLC might include several digital I/Os driving relays along with modules built around an embedded microcontroller, complete with analog I/O, current drivers, and pulse width modulation (PWM) or stepper motor drivers. An example micro-PLC, capable of more than just digital I/O, is shown in Figure 6. This type of PLC can be used in a variety of ways. It comes complete with stepper motor drivers, PWM outputs, counter inputs, analog outputs, analog inputs, 256 internal relays, and a serial port for programming, and it accepts both ladder logic and BASIC programs. Modern PLCs may be configured for large numbers of inputs, outputs, signal conditioners, programming languages, and speeds. They remain very common throughout a wide variety of applications.

1.6.1.4 Specific Product Example (Electrohydraulics)

Even more specific than a particular application are controllers designed for a specific product. A common example of this is in the field of electrohydraulics. Most electrically driven valves are offered with several controller options. When the valve is purchased, a choice must be made about the type of controller, if any, that is desired. While it is not difficult to design and build an after-

Figure 6  Micro-programmable logic controller (Courtesy of Triangle Research International Pte. Ltd.).

market controller, it is fairly difficult to get the performance and features commonly found on controllers designed for particular products. The disadvantages are quite obvious in that the manufacturer determines all the types of mounting styles, packaging styles, features, and speeds. In general, however, these disadvantages are minimized since each industry attempts to follow standards for mounts, fittings, and power supply voltages, etc., and there is a good chance that the support components are readily available. A distinct advantage is the integration and range of features commonly found in these controllers. The electrohydraulic example below illustrates this more fully. In addition, the manufacturer generally has an advantage in that the internal product specifications are fully known. The example valve driver/controller shown in Figure 7 is designed to interface with several electrohydraulic valves. More common in controllers designed for specific products, the interfacing hardware, signal conditioners, amplifiers, and controller algorithms are all integrally mounted on a single board. In addition, many

Figure 7  Electrohydraulic valve driver/controller.

features are added which are specific only to the product it was designed for. In the example shown, deadband compensation, ramp functions, linear variable displacement transducer (LVDT) valve position feedback, dual solenoid PWM drivers, and testing functions are all included on the single board. The driver card also includes external and internal command signal interfaces, gain potentiometers for the onboard PID controller, troubleshooting indicators (e.g., light emitting diodes [LEDs]), and access to many extra internal signals. Depending on the valve chosen, a specific card with the proper signals must be selected. The system is very specific to one class of valves, as illustrated by the dual solenoid drivers and LVDT signal conditioning for determining valve spool position.

PROBLEMS

1.1 Label and describe the blocks and lines for a general controller block diagram model, as given in Figure 8.
1.2 Describe the importance of sensors relative to the process of designing a control system.
1.3 Describe a common problem that may occur with amplifiers and actuators when improperly selected. For the problem described, list a possible cause followed by a possible solution.
1.4 For an automobile cruise control system, list the possible disturbances that the control system may encounter while driving.
1.5 Choose one prominent person who played an important role in the history of automatic controls. Find two additional sources and write a brief paragraph describing the most interesting results of your research.
1.6 Finish the phrases using either "analog" or "digital."
    a. Most physical signals are _____________.
    b. Earliest computers were _____________.
    c. For rejecting electrical noise, the preferred signal type is _____________.
    d. Signals exhibiting finite intermediate values are _____________.
1.7 List two advantages and two disadvantages of classical control design techniques.
1.8 List two advantages and two disadvantages of modern control design techniques.
1.9 In several sentences, describe the significance of the microprocessor relative to control systems presently in use.

Figure 8  Problem: general controller block diagram.

1.10 Briefly describe several differences between microprocessors and microcontrollers.
1.11 What is a disadvantage of choosing an "off-the-shelf" controller?
1.12 Modern PLCs, while similar to microcontrollers, have additional characteristics. List some of the common distinctions.
1.13 Describe several advantages commonly associated with using controllers designed for a specific product.

2 Modeling Dynamic Systems

2.1 OBJECTIVES

- Present the common mathematical methods of representing physical systems.
- Develop the skills to use Newtonian physics to model common physical systems.
- Understand the use of energy concepts in developing physical system models.
- Introduce bond graphs as a capable tool for modeling complex dynamic systems.

2.2 INTRODUCTION

Although some advanced controller methods attempt to overcome limited models, there is no question that a good model is extremely beneficial when designing control systems. The following methods are presented as different, but related, methods for developing plant/component/system models. Accurate models are beneficial in simulating and designing control systems, analyzing the effects of disturbances and parameter changes, and incorporating algorithms such as feed-forward control loops. Adaptive controllers can be much more effective when the important model parameters are known. As a minimum, the following sections should illustrate the commonality between various engineering systems. Although units and constants may vary, electrical, mechanical, thermal, liquid, hydraulic, and pneumatic systems all require the same approach with respect to modeling. Certain components may be nonlinear in one system and linear in another, but the equation formulation is identical. As a result of this phenomenon, control system theory is very useful and capable of controlling a wide variety of physical systems. Most systems may be modeled by applying the following laws:    

- Conservation of mass;
- Conservation of energy;
- Conservation of charge;
- Newton's laws of motion.


Additional laws describing specific characteristics of some components may be necessary but usually may be explained by one of the above laws. Of particular interest to controls engineers is modeling a system comprised of several domains, since many controllers must be designed to control such combinations. Although each topic presented could (and maybe should) constitute a complete college course, an attempt is made to present the basics of modeling and analysis of dynamic systems relative to control system design. Many of the tasks discussed can now easily be solved using standard desktop/laptop computers. The goal of this chapter is to present the basic theory along with appropriate computer solution methods. One without the other severely limits the effectiveness of the control engineer.

2.3 AN INTRODUCTION TO MODEL REPRESENTATION

2.3.1 Differential Equations

Differential equations describe the dynamic performance of physical systems. Three common and equivalent notations are given below. They are commonly interchanged, depending on the preference and software being used. dx ¼ x0 ¼ x_ dt

and

d 2x ¼ x00 ¼ x€ dt2

When a prime to the right of or a dot over a variable is given, time is assumed to be the independent variable. The number of primes or dots represents the order of the derivative. Differential equations are generally obtained from physical laws describing the system process and may be classified according to several categories, as illustrated in Table 1. Another consideration depends on the number of unknowns that are involved. If only a single function is to be found, then one equation is sufficient. If there are

Table 1  Classifications of Differential Equations

Order: the highest derivative that appears in the equation.
Ordinary (ODE): the function depends on only one independent variable (the common independent variable in physical systems is time).
Partial: contains differentials with respect to two or more variables (common in electromagnetic and heat conduction systems).
Linear: constant coefficients and no derivatives raised to higher powers.
Nonlinear: functions as coefficients or derivatives raised to higher powers.
Homogeneous: no forcing function (the sum of the derivative terms equals zero).
Nonhomogeneous: a differential equation with a nonzero forcing function.
Complementary: the homogeneous portion of a nonhomogeneous differential equation.
Auxiliary equation: the polynomial formed by replacing all derivatives with variables raised to the power of their respective derivatives.
Complementary solution: solution to the complementary equation.
Particular solution: solution to the nonhomogeneous differential equation.
Steady-state value: determined by setting all derivatives in the equation to zero.


two or more unknown functions, then a system of equations is required. As will be seen later, a single mass-spring-damper system results in a second-order ordinary differential equation. Although most real systems are nonlinear, the equations are often linearized. This greatly simplifies the controller design process. If the system remains near its targeted operating point, the linear models do quite well. An example of a second-order, nonhomogeneous, linear, ordinary differential equation is

m ẍ + b ẋ + k x = F   or   m x″ + b x′ + k x = F

An example of a second-order, homogeneous, nonlinear, ordinary differential equation is

ÿ + (g/l) sin y = 0

The first equation is a common mass-spring-damper system and will be developed in a later example; the second equation is for a common pendulum system. The pendulum equation is still ordinary, although nonlinear, since time is still the only independent variable. Both examples are functions of only one unknown variable and thus are described by a single equation. Finally, determining the steady-state output value for time-based differential equations is accomplished by setting all the derivatives in the equation to zero. Since the derivative, by definition for time-based equations, is the rate of change of the dependent variable with respect to time, the steady-state value occurs when the derivatives are all equal to zero. Thus, for the mass-spring-damper system shown above, setting x″ and x′ to zero results in a steady-state displacement of x = F/k, as expected.

2.3.2 Block Diagrams

Block diagrams have become the standard representation for control systems. While block diagrams are excellent for understanding signal flow in control systems, they are lacking when it comes to representing physical systems. Most block diagram models are developed using the methods in this chapter (obtaining differential equations) and then converted to the Laplace domain for use as transfer functions in a block diagram. The problems are that block diagrams become unwieldy for moderately sized systems, focus on the computational structure rather than the physical structure of a system, and relate only one physical variable per connection. In most physical systems there is a "cause" and "effect" relationship between variables "sharing the same physical space," for example, voltage and current in an electrical conductor and pressure and flow in a hydraulic conductor. Many programs today, however, allow block diagrams to represent the controller while different, higher-level modeling techniques are used for the physical system. For example, several bond graph simulation programs and many commercial systems simulation programs allow combination models. This section briefly describes block diagrams and common properties and tools useful when working with them. The analysis section will further explain how the blocks represent physical systems.

2.3.2.1 Basic Block and Summing Junction Operations

In block diagrams, signals representing system variables flow along lines connecting blocks, which perform operations on those signals. Each block is therefore simply a ratio of its output to its input, or what is called a transfer function. Each line representing a signal is unidirectional and designated by an arrow. Since each line represents a variable in the system, usually a physical variable with associated units, each block must carry the appropriate units relating its input to its output. For example, if a pressure signal enters a block representing the area upon which the pressure acts, the block's value must be expressed in units such that the block output is a force (force = pressure × area). This relationship, where the block is the ratio of the output variable to the input variable, is shown in Figure 1. Also shown in Figure 1 is a basic summing junction, or comparator. A summing junction either adds or subtracts the input variables to determine the value of the output variable. As in basic addition and subtraction operations, the units must be the same for all inputs and outputs of a summing junction; it does not make sense to add voltages and currents, or pressures and forces. Any number of inputs may be used to determine the single output, and each input should be designated as an addition or subtraction using "+" or "−" symbols near each line or inside the summing junction itself. These two items allow us to construct and analyze almost every control system we are likely to encounter. The operations illustrated in the remaining sections use the block and summing junction representations to graphically manipulate algebraic equations. Each section gives the corresponding algebraic equations, although in practice this is seldom done once the graphical operations become familiar.

2.3.2.2  Blocks in Series

A common simplification in block diagrams is combining blocks in series and representing them as a single block, as shown in Figure 2. Any number of blocks in series may be combined as long as no branches occur between any pair of blocks; a later section illustrates how to move branches so that such blocks can be combined. Remembering that each individual block must correctly relate the units of its input to its output, the new block formed by multiplying the individual blocks relates the initial input to the final output, and the simplified block's units are obtained by multiplying the units of the individual blocks.

2.3.2.3  Blocks in Parallel

It is also common to find block diagrams where several blocks are arranged in parallel. Many controllers are first drawn this way to illustrate the effect of each part of the controller. For example, the control actions of a proportional-integral-derivative controller can be shown using three parallel blocks, where the paths represent the proportional, integral, and derivative control effects. A simple system using two blocks is shown in Figure 3. When combining two or more blocks in parallel, the signs associated with each input variable must be accounted for; it is possible to have several blocks subtracting and several blocks adding when forming the new simplified block.

Figure 1  Transfer function and summing junction operations.

Figure 2  Blocks in series.

2.3.2.4  Feedback Loops in Block Diagrams

Perhaps the most common block diagram operation when analyzing closed loop control systems is the reduction of blocks in loops. Whereas the previous steps created new blocks through addition, subtraction, multiplication, and division, closing a feedback loop changes the "structure" of the system. Later we will see how the denominator changes when the loop is closed, allowing us to "modify" the dynamics of the system. The basic steps used to simplify loops in block diagrams are illustrated in Figure 4. Thus we see that a "new" denominator is formed that is a function of both the forward path blocks (signals moving left to right) and the feedback path blocks (signals moving right to left); for a forward path G and feedback path H with negative feedback, the closed loop transfer function becomes G/(1 + G H). When we design a controller we insert a block that we can change, or "tune," and thereby modify the behavior of the physical system. Even with multiple blocks in the forward and feedback paths, we can use the general loop-closing rule to simplify the block diagram. The exception to the rule is when branches and/or summing junctions lie within the loop itself. The goal is then to first rearrange the blocks into equivalent forms that allow the loop to be closed. Several helpful operations are given in the remaining sections.

Figure 3  Blocks in parallel.

Figure 4  Loop operations in block diagrams.

2.3.2.5  Moving a Summing Junction in a Block Diagram

When loops contain summing junctions that prevent the loop from being simplified, it is often possible to move the summing junction outside of the loop, as shown in Figure 5. In general, summing junctions can be moved either forward or backward in the block diagram, depending on the desired result. By writing the algebraic equations for the block diagrams shown in Figure 5, we can verify that the two diagrams are indeed equivalent. In fact, whenever in doubt, it is helpful to write the equations as a means of checking the final result.

2.3.2.6  Moving a Pickoff Point in a Block Diagram

Finally, it may be useful to move pickoff points (branch points) to different locations when attempting to simplify various block diagrams. This operation is shown in Figure 6. Once again the included algebraic equations confirm that the block diagrams are equivalent; in fact, the algebraic and graphical operations are mirror images of each other. Using the operations described above, and realizing that the block diagram is a "picture" of an equation, allows us to reduce any block diagram with these basic steps. By now the block diagram introduced at the beginning of the chapter should be clearer. Section 2.6 develops the actual contents of each block, allowing us to design and simulate control systems.

Figure 5  Moving summing junctions in block diagrams.

Figure 6  Moving pickoff points in block diagrams.

2.3.3  State Space Equations

State space equations are similar to differential equations, and it is generally easy to switch between the two notations. In addition to the inputs and outputs, we now also define states. The states are the minimum set of system variables required to define the system at any time t ≥ t0, where each state is known at t0 and all inputs are known for t ≥ t0. From a mathematical standpoint any minimum set of state variables meeting these requirements will suffice; they need not be physically measurable or even present. From a practical point of view, however, it is beneficial for implementation, design, and comprehension to choose states representing physical variables or combinations thereof. Perhaps the simplest way to illustrate this is through an example. Let us examine the motion of a vehicle body modeled as a two-dimensional beam connected through springs and dampers to the ground, as shown in Figure 7. There are many possible state variable combinations for the vehicle suspension:

a. x_r(t) and x_f(t), along with their first derivatives (velocities);
b. y(t) and θ(t), along with their first derivatives;
c. x_r(t) and y(t), along with their first derivatives . . . and so forth.

Figure 7  State variable choices for two-dimensional vehicle suspension.

In all these cases (and in those not listed), once two variables (along with their first derivatives) are chosen, the remaining positions, velocities, and accelerations can be defined relative to the chosen states. The examples listed above all use measurable variables, although, as stated earlier, this is not a requirement. There may be advantages to choosing particular sets of states, since it is often possible to decouple inputs and outputs and thus make control system design easier; this is discussed more in later sections. Since more than one set of states generally meets the requirements described above, a state space system of equations is not unique. All valid representations, however, result in the same system response. The goal is to develop n first-order differential equations, where n is the order of the total system. The second-order differential equation describing the common mass-spring-damper system, for example, becomes two first-order differential equations representing the position and velocity of the mass. What about the acceleration? Having only the position and velocity as states suffices, since the acceleration of the mass can be determined from the current position and velocity (along with any imposed inputs on the system): the position allows us to determine the net spring force, and the velocity the net friction force. In general, each state variable must be independent. Since the acceleration can be found as a function of the position and velocity, it is dependent and does not meet the general criteria for a state variable. The general form for a linear system of state equations uses vector and matrix notation for the coefficients of the states and inputs. The standard notation is

dx/dt = A x + B u
y = C x + D u

where x is the vector containing the state variables and u is the vector containing the inputs to the system.
A is an n × n matrix containing the constant coefficients of the state equations (for linear systems), and B is a matrix containing the coefficients multiplying the inputs. The matrix C forms the desired outputs of the system from the state variables. D is usually zero unless some inputs are connected directly to the outputs, as found in some feedforward control algorithms. The general size relationships between the vectors and matrices are listed below. Define n as the number of states defined for the system (the system order), m as the number of outputs of the system, and r as the number of inputs acting on the system. Then:

x is the state vector and has dimensions n × 1 (as does dx/dt).
u is the input vector and has dimensions r × 1.
y is the output vector and has dimensions m × 1.
A is the system matrix and has dimensions n × n.
B is the input matrix and has dimensions n × r.
C is the output matrix and has dimensions m × n.
D is the feedforward matrix and has dimensions m × r.
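These dimension rules can be checked mechanically. The sketch below (an illustration, using arbitrary zero matrices and assuming NumPy is available) confirms that the shapes compose correctly for a hypothetical system with n = 2 states, r = 1 input, and m = 1 output:

```python
import numpy as np

# Dimension bookkeeping for dx/dt = A x + B u, y = C x + D u,
# illustrated with hypothetical sizes n = 2, r = 1, m = 1.
n, r, m = 2, 1, 1
A = np.zeros((n, n))   # system matrix, n x n
B = np.zeros((n, r))   # input matrix, n x r
C = np.zeros((m, n))   # output matrix, m x n
D = np.zeros((m, r))   # feedforward matrix, m x r

x = np.zeros((n, 1))   # state vector, n x 1
u = np.zeros((r, 1))   # input vector, r x 1

dxdt = A @ x + B @ u   # (n,n)@(n,1) + (n,r)@(r,1) -> (n,1)
y = C @ x + D @ u      # (m,n)@(n,1) + (m,r)@(r,1) -> (m,1)
print(dxdt.shape, y.shape)  # → (2, 1) (1, 1)
```

Any mismatch in these sizes (for example, a D matrix that is not m × r) raises an immediate shape error, which makes this a quick sanity check when assembling models.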


If the system is nonlinear, it must be left as a system of first-order differential equations. Although state space at first glance seems confusing, it is a powerful way to represent higher-order systems, since the algebraic routines used to analyze the equations do not change with system order. Linear algebra theorems are applicable whenever the matrix form is used, regardless of the system order. Because it is computationally efficient and suited particularly well to difference equations, state space is very common in the design of advanced controllers. Now let us take the two examples of differential equations given in the previous section and develop the equivalent state equations. Later on we will see more complex models represented with state equations.

EXAMPLE 2.1

Mass-spring-damper system:

m x'' + b x' + k x = F

Since this is a second-order system, we expect to need two states, resulting in a 2 × 2 system matrix A. First, we need to define our state variables. This is easily accomplished by choosing velocity and acceleration as the state derivatives, since position, velocity, and acceleration are related through integration. Therefore, let x1, the first state variable, equal x, the position, and let x2, the second state variable, equal the velocity, dx/dt. (Using x as the dependent variable in the differential equation is a little misleading, since x1 happens to equal x here but is not required to.) Now we can set up the following identities and equations:

x1 = x = position
x1' = x2 = velocity
u = F = input
x2' = x'' = −(b/m) x2 − (k/m) x1 + (1/m) u = acceleration

We have met the goal of having two first-order differential equations as functions of the state variables and constants. Since the system is linear, we can also represent it in matrix form:

[x1']   [   0       1  ] [x1]   [  0  ]
[x2'] = [ −k/m   −b/m  ] [x2] + [ 1/m ] [u]

If the position of the mass is the desired output, then the C and D matrices are

y = [1  0] [x1  x2]ᵀ = C x + D u,  with D = [0]
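As a quick numerical check of this example, the two state equations can be integrated directly. The sketch below uses a fixed-step fourth-order Runge-Kutta routine with illustrative parameter values (m = 1, b = 2, k = 10, F = 5, assumptions not taken from the text); the simulated position settles at F/k, matching the steady-state result noted earlier:

```python
# Sketch: simulate the mass-spring-damper state equations
#   x1' = x2
#   x2' = -(k/m) x1 - (b/m) x2 + (1/m) u
# with a fixed-step RK4 integrator. Parameter values are illustrative.
m, b, k = 1.0, 2.0, 10.0   # mass, damping, stiffness (assumed units)
F = 5.0                    # constant input force

def deriv(x1, x2, u):
    return x2, -(b / m) * x2 - (k / m) * x1 + (1.0 / m) * u

def rk4_step(x1, x2, u, h):
    k1 = deriv(x1, x2, u)
    k2 = deriv(x1 + 0.5 * h * k1[0], x2 + 0.5 * h * k1[1], u)
    k3 = deriv(x1 + 0.5 * h * k2[0], x2 + 0.5 * h * k2[1], u)
    k4 = deriv(x1 + h * k3[0], x2 + h * k3[1], u)
    x1 += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    x2 += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return x1, x2

x1, x2, h = 0.0, 0.0, 0.001
for _ in range(20000):          # simulate 20 s, long enough to settle
    x1, x2 = rk4_step(x1, x2, F, h)

print(round(x1, 4))  # → 0.5, the steady-state position F/k
```

This mirrors what setting the derivatives to zero predicts analytically: with x1' = x2' = 0, the state equations reduce to x1 = u/k.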


EXAMPLE 2.2

Applying the same procedure to the pendulum equation:

θ'' + (g/l) sin θ = 0

x1 = θ = angular position
u = 0 = input
x1' = x2 = angular velocity
x2' = θ'' = −(g/l) sin(x1) = angular acceleration

With nonlinear equations this is the final form. If the matrix form is desired, the equations must first be linearized, as shown in the following section. Whether or not they are written in matrix form, sets of first-order differential equations (regardless of number) are easily integrated numerically (e.g., with Runge-Kutta methods) to simulate the dynamic response of the system. This is further explored in Chapter 3.

2.3.4  Linearization Techniques

Since most classical control system design techniques are based on linear systems, it is important to understand how to linearize a system and the resulting limitations. We will first examine the linearization process and then look at specific cases for functions of one and two variables in an effort to further explain the strengths and limitations. The section concludes by linearizing the pendulum model, completing its state space matrix form, and then linearizing a hydraulic valve with respect to pressure and flow. Linearizing hydraulic valves is a common procedure when designing and simulating electrohydraulic control systems. The general equation used when linearizing a system is

ŷ = f(x1o, x2o, …, xno) + ∂f/∂x1 (x1 − x1o) + ∂f/∂x2 (x2 − x2o) + … + ∂f/∂xn (xn − xno)

where each partial derivative is evaluated at the operating point (x1o, x2o, …, xno).

The linearized output, ŷ, is found by first choosing the desired operating point, evaluating the system output at that point, and determining the linear variations around that point for each variable affecting the system. It is important to choose an operating point near where the system is actually expected to operate, since the linearized system model, and thus the resulting design, can vary greatly depending on the point chosen. Once the operating point is chosen, the operating offset, f(x1o, x2o, …, xno), is calculated. For some design procedures only the variations around the operating point are examined, since they determine the system stability and dynamic response; this is explained further in the next example. Next, each "slope," the "linearized" effect of each individual variable, is found by taking the partial derivative with respect to that variable. Each partial derivative is evaluated at the operating point and becomes a numerical constant multiplying the deviation away from the operating point. When all constants are collected we are left with one constant offset plus a constant multiplying each individual variable, thus resulting in a linear equation of n variables. The procedure itself is therefore quite simple, and most computer simulation packages include linearization routines that perform this step for the user. It is still up to the user, however, to linearize about the correct point and to recognize the valid and usable range of the resulting linearized equations. Now let us examine the idea behind linearization a little more thoroughly, beginning with a function y of one variable x. If we plot the function, we might get a curve like that in Figure 8. Taking the derivative (partial derivative, for functions of several variables) of the function yields another function; evaluating it at the operating point gives the slope of the line approximating the original nonlinear function. Depending on the shape of the original curve, the usable linear range will vary: at some point the original nonlinear curve differs markedly from the linear estimate, as the plot shows. It is clear in Figure 8 that only a very small range of the actual function can be approximated by a line; in fact, for the example in Figure 8 the slope can be positive or negative depending on the operating point chosen. Many actual functions are not as problematic and may be modeled linearly over a much larger range. In all cases, it pays to understand where the linearized model is valid. Let us now linearize the pendulum state equation and determine a usable range for the linear function.

EXAMPLE 2.3

Since sin(θ) is our nonlinear function and θ = 0 the operating point (pendulum vertical), a plot around this point will illustrate the linearization concept.
Performing the math first, we begin with the nonlinear state equation derived earlier:

x2' = θ'' = f(x1) = −(g/l) sin(x1)

Figure 8  Linearization techniques applied to a function of one variable.

Vertical offset, f(0):

y0 = f(0) = −(g/l) sin(0) = 0

Partial derivative:

∂y/∂x = ∂f(x1)/∂x1 = −(g/l) cos(x1)

Slope at x1 = 0 [cos(x1) → 1]:

∂f(0)/∂x1 = −g/l

Resulting linear state equation:

x2' = θ'' = −(g/l) x1 = −(g/l) θ

Since the state equation is now linear, the system can be written using matrices and vectors, resulting in the state space form

[x1']   [   0    1 ] [x1]   [0]
[x2'] = [ −g/l   0 ] [x2] + [0] [u]

y = [1  0] [x1  x2]ᵀ = C x + D u

Common design techniques presented in future sections may now be used to design and simulate the system. As a caution, many designs appear to work well in simulation but fail when actually constructed because the linearization region is misused. Developing good models with proper assumptions is key to designing well-performing, robust control systems. To conclude this example, let us examine the "valid" region for the linear pendulum model by plotting the linear and actual functions (for simplicity, let g/l = 1). These plots are given in Figure 9. From the graph it is clear that between approximately ±0.8 radians (about ±45 degrees) the linear model is very close to the actual one; the greater the deviation beyond these boundaries, the greater the modeling error. The linearization of the sine function demonstrated above is commonly referred to as the small angle approximation.

Figure 9  Actual versus linearized pendulum model.

EXAMPLE 2.4

It is also possible to visualize the linearization process for functions of two variables by using surface plots. For this example the following function is examined:

y(x1, x2) = 500 x1 − 6 x1² + 500 x2 − x2³

Although seldom done in practice, it is helpful for understanding what linearization does to plot the function in both its original nonlinear and linearized forms. This was already done for the pendulum example, where the function depended on only one variable. Here, with a function of two variables, plotting the function creates a surface. This is shown in Figure 10, where the function is plotted for 0 < x1 < 25 and 0 < x2 < 25 around an operating point chosen at (x1, x2) = (10, 10). To linearize the function, the offset value and the two partial derivatives must be calculated and evaluated at the operating point (10, 10).

Offset value:

y(10, 10) = 5000 − 600 + 5000 − 1000 = 8400

Figure 10  Example: linearizing a function of two variables.

Slope in the x1 direction:

∂y/∂x1 at (10, 10) = 500 − 12 x1 = 500 − 120 = 380

Slope in the x2 direction:

∂y/∂x2 at (10, 10) = 500 − 3 x2² = 500 − 300 = 200

Linearized equation:

ŷ = 8400 + 380 (x1 − 10) + 200 (x2 − 10)

The linearized equation is now the equation of a plane in three-dimensional space: the plane formed by the intersection of the two lines drawn in Figure 10. The error between the original nonlinear equation and the linearized equation is the difference between this plane and the actual surface, as illustrated in Figure 11. Although visualization becomes more difficult, the linearization procedure remains straightforward for functions of more than two variables: each partial derivative simply relates the output to changes in the variable with respect to which the partial is taken. It is common to include only the variations about the operating point when linearizing system models, since this facilitates easy incorporation into block diagrams and other common system representations. In that case the constant values are dropped and the system is examined in terms of the amount of variation away from the operating point. When the system is then simulated, the input and output values are not absolute but are relative distances (or whatever the units may be) from the operating point. This simplifies the equation studied above to

ŷ = 380 x1 + 200 x2  (about the point (10, 10))
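The offset and slopes in this example are easy to verify numerically. The following sketch approximates the two partial derivatives with central finite differences about (10, 10) and builds the linearized plane from them (the step size h is an arbitrary choice for illustration):

```python
# Sketch: numerically check the linearization of
#   y(x1, x2) = 500*x1 - 6*x1**2 + 500*x2 - x2**3
# about the operating point (10, 10) using central finite differences.
def f(x1, x2):
    return 500 * x1 - 6 * x1 ** 2 + 500 * x2 - x2 ** 3

x1o, x2o, h = 10.0, 10.0, 1e-5

offset = f(x1o, x2o)                                  # operating-point value
k1 = (f(x1o + h, x2o) - f(x1o - h, x2o)) / (2 * h)    # slope in x1 direction
k2 = (f(x1o, x2o + h) - f(x1o, x2o - h)) / (2 * h)    # slope in x2 direction

print(offset, round(k1, 3), round(k2, 3))  # → 8400.0 380.0 200.0

def f_lin(x1, x2):
    """Linearized (tangent-plane) model about (x1o, x2o)."""
    return offset + k1 * (x1 - x1o) + k2 * (x2 - x2o)
```

This is essentially what the linearization routines in simulation packages do internally: perturb each variable about the operating point and record the resulting slope.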

Figure 11  Example: linearization of two variables—planar surface representation.

When this is done, the system characteristics (i.e., stability, transient response, etc.) are not changed, and the simulation differs only by the offset. Of course, if component saturation models or similar characteristics are included, then the saturation values must also be shifted to variations around the operating point.

EXAMPLE 2.5

To conclude this section, let us linearize the common hydraulic valve orifice equation, illustrating the procedure as it is more commonly applied in practice. For a more detailed analysis of the hydraulic valve, please see Section 12.3, where the model is developed and used in greater detail. The general valve orifice equation may be defined as

Q(x, PL) = KV x √(PS − PL − PT)

where Q is the flow through the valve (volume/time), KV is the valve coefficient, x is the percent of valve opening (−1 < x < 1), PS is the supply pressure, PL is the pressure dropped across the load, and PT is the return line (tank) pressure. For this example let us define the operating point at x = 0.5 and PL = 500 psi. The output Q is in gallons per minute (gpm) and the constants are

KV = 0.5 gpm/√psi,  PS = 1500 psi,  PT = 50 psi

The offset value is

Q(0.5, 500) = 0.5 · 0.5 · √(1500 − 500 − 50) = 7.7 gpm

The partial with respect to x is

Kx = ∂Q/∂x at the operating point = KV √(PS − PL − PT) = 0.5 √950 = 15.4 gpm

The partial with respect to PL is

∂Q/∂PL at the operating point = −KV x / (2 √(PS − PL − PT)) = −0.25 / (2 √950) = −0.004 gpm/psi = −KP

resulting in the linearized equation

Q̂ = 7.7 + 15.4 (x − 0.5) − 0.004 (PL − 500) gpm

If only variations around the operating point are considered, the equation becomes

QL = Kx x − KP PL  or  QL = 15.4 x − 0.004 PL
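A minimal sketch of the same valve linearization, with the coefficients computed from the formulas above (values as in the example), and a comparison of exact and linearized flow near the operating point:

```python
# Sketch: linearize Q = Kv * x * sqrt(Ps - PL - Pt) about (x0, PL0).
# Values follow the worked example above.
import math

Kv, Ps, Pt = 0.5, 1500.0, 50.0      # gpm/sqrt(psi), psi, psi
x0, PL0 = 0.5, 500.0                # operating point

def q_exact(x, PL):
    return Kv * x * math.sqrt(Ps - PL - Pt)

Q0 = q_exact(x0, PL0)                              # operating-point flow
Kx = Kv * math.sqrt(Ps - PL0 - Pt)                 # dQ/dx at the o.p.
Kp = Kv * x0 / (2 * math.sqrt(Ps - PL0 - Pt))      # -dQ/dPL at the o.p.

def q_linear(x, PL):
    return Q0 + Kx * (x - x0) - Kp * (PL - PL0)

print(round(Q0, 1), round(Kx, 1), round(Kp, 3))  # → 7.7 15.4 0.004
# Near the operating point the linear model tracks the exact flow closely:
print(round(q_exact(0.55, 520), 2), round(q_linear(0.55, 520), 2))
```

Far from the operating point (large valve openings or load pressures approaching supply pressure) the square root makes the linear model progressively worse, which is exactly the "valid range" caution raised earlier in this section.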


As the example illustrates, the linearized equation must be consistent in physical units, and the user must know what units are required when implementing it.

2.4  NEWTONIAN PHYSICS MODELING METHODS

Newton’s laws (physics) are generally taught in most introductory college courses in the area of dynamics. Along with energy methods, almost all systems can be modeled using these techniques. The resulting equations may range from simple linear to nonlinear and highly complex. Regardless of the system modeled, the result is a differential equation(s) capable of predicting the physical system response. The complexity of the equation reflects the assumptions made and limitations on obtaining information. In most cases, proper assumptions allow the model to be reduced down to linear ordinary differential equations. These equations become increasingly complex as nonlinearities and multiple systems are modeled. The cornerstone equation is Newton’s law, or force = mass  acceleration. As we will see, even in electrical systems where voltages are analogous to forces (electromotive forces), the sum of the forces, or voltages, is zero. Let us first examine the contents of Table 2 and see how the basic laws of physics enable us to model virtually any system. Using the notation of inductance, capacitance, and resistance we see the commonalities in several different physical systems. All systems can be discussed in terms of these components and the energy stored, power through, and using English or metric systems of units. The advantage of such an approach, as seen later in bond graphs, is the recognition that different dynamic systems are following the same laws of physics. This is an important concept as we move to beginning the design of automatic control systems. 2.4.1

Mechanical-Translational System Example

Let us begin with a basic mechanical-translational system to apply the laws defined in Table 2, developing a differential equation that describes the motion of the mass-spring-damper system in Figure 12. Once the differential equation is developed, many options are available for simulating the system response; these techniques are examined in the following sections. In this case, with one mass, the task is simply to sum all the forces acting on the mass and set the result equal to the mass multiplied by its acceleration. Writing the force equations for systems with multiple masses is easily accomplished using the lumped-mass modeling approach: sum the forces on each mass and set them equal to that mass multiplied by its acceleration, all with respect to the one mass in question. The more difficult part is reducing the system of equations to a single input-output relationship; even with only two masses, this can become quite tedious. Therefore, let us begin by summing all the forces acting on mass m, remembering to be consistent with signs. A helpful mental picture is to imagine pushing (displacing) the mass in the positive y(t) direction and determining the forces opposing the movement. This results in the following differential equation:

ΣF = −Fk − Fb + F − mg = −k y − b y' + F − mg = m y''

Table 2  Physical System Relationships

Mechanical-translational
  Inductance:  Mass, m (kg; slugs);  F = m dv/dt;  E = ½ m v²
  Capacitance: Spring, k (N/m; lb/in);  F = k x;  E = ½ k x²
  Resistance:  Damper, b (N/(m/s); lb/(in/s));  F = b v;  P = b v²
  Effort:      Force, F (N; lbf);  momentum m v = ∫ F dt
  Flow:        Velocity, v (m/s; in/s);  x = ∫ v dt;  P = F v

Mechanical-rotational
  Inductance:  Inertia, J (N·m·s²; lb·in·s²);  T = J α;  E = ½ J ω²
  Capacitance: Spring, k (N·m/rad; lb·in/rad);  T = k θ;  E = ½ k θ²
  Resistance:  Damper, b (N·m·s; lb·in·s);  T = b ω;  P = b ω²
  Effort:      Torque, T (N·m; lbf·in);  momentum J ω = ∫ T dt
  Flow:        Angular velocity, ω (rad/s);  θ = ∫ ω dt;  P = T ω

Electrical
  Inductance:  Inductance, L (henries, H);  V = L di/dt;  E = ½ L i²
  Capacitance: Capacitance, C (farads, F);  V = (1/C) ∫ i dt;  E = ½ C V²
  Resistance:  Resistance, R (ohms, Ω);  V = R i;  P = (1/R) V²
  Effort:      Voltage, V (volts);  flux linkage L i = ∫ V dt
  Flow:        Current, i (amps, A);  q = ∫ i dt;  P = V i

Hydraulic/pneumatic
  Inductance:  Fluid inertia, I (N·s²/m⁵; lbf·s²/in⁵);  p = I dQ/dt;  E = ½ I Q²
  Capacitance: Capacitance, C (m³/Pa; in³/psi, linear);  p = (1/C) ∫ Q dt;  E = ½ C p²
  Resistance:  Orifice ((m³/s)/(N/m²)^½; (in³/s)/psi^½);  Q = Kv √Δp;  P = Δp Q
  Effort:      Pressure, p (Pa; psi);  momentum I Q = ∫ p dt
  Flow:        Flow rate, Q (m³/s; in³/s);  volume = ∫ Q dt;  P = p Q

Thermal
  Inductance:  N/A
  Capacitance: Capacitance, C (J/K; lb·in/°R);  T = (1/C) ∫ q dt;  E = C T
  Resistance:  Resistance, Rf (K/W; °R/(Btu/s));  q = (1/Rf) T;  P = (1/Rf) T
  Effort:      Temperature, T (K; °R);  momentum not used
  Flow:        Heat flow rate, q (W; Btu/s);  heat energy = ∫ q dt

Figure 12  Mass-spring-damper model.

If we examine the motion about the equilibrium point (where the static spring deflection balances gravity), the constant force mg can be dropped, since the spring force due to the equilibrium deflection always balances it. Separating the input on the right and the outputs on the left then results in

m y'' + b y' + k y = F

or, written with derivative operators,

m d²y/dt² + b dy/dt + k y = F

This should be review for anyone who has taken a class in differential equations, vibrations, systems modeling, or controls. Continuing on, we finish the mechanical-translational section with an example incorporating two interconnected masses, as shown in Figure 13, where

m_b is the mass of the vehicle body;
m_t is the mass of the tire;
k_s is the spring constant of the suspension;
b is the damping from the shock absorber;

Figure 13  Simple vehicle tire/suspension model.

k_t is the spring constant of the tire;
r(t) is the road profile input to the suspension system.

Since we have two masses and two springs (four energy storage devices), we expect the model to be a fourth-order differential equation. The two second-order differential equations are obtained by repeating the process of the first example and summing the forces on each individual mass in the suspension. In this case the masses interact, since they are connected through a spring and a damper (k_s and b). Summing the forces on the vehicle body mass m_b results in

ΣF = −F_ks − F_b = −k_s (y − x) − b (y' − x') = m_b y''

and summing the forces on the tire mass m_t,

ΣF = −F_ks − F_kt − F_b = −k_s (x − y) − k_t (x − r) − b (x' − y') = m_t x''

The equations can be rearranged to have the system outputs on the left and the inputs on the right:

m_b y'' + b y' + k_s y = b x' + k_s x
m_t x'' + b x' + (k_s + k_t) x = k_t r(t) + b y' + k_s y

Since y is the motion of the car body and r(t) is the input to the system, it would be desirable to express y directly as a function of r(t); in the next chapter we use Laplace transforms to achieve this. It is easy to express the system using state space matrices, since the differential equations are already linear. If we define the state variables as the position and velocity of each mass,

x1 = y,  x2 = y' = x1',  x3 = x,  x4 = x' = x3'

then the system can be represented as

x1' = x2
x2' = −(k_s/m_b) x1 − (b/m_b) x2 + (k_s/m_b) x3 + (b/m_b) x4
x3' = x4
x4' = (k_s/m_t) x1 + (b/m_t) x2 − ((k_s + k_t)/m_t) x3 − (b/m_t) x4 + (k_t/m_t) r(t)

or, in matrix form,

      [     0          1          0            0     ]       [    0    ]
      [ −k_s/m_b   −b/m_b     k_s/m_b       b/m_b    ]       [    0    ]
 x' = [     0          0          0            1     ] x  +  [    0    ] r
      [  k_s/m_t    b/m_t  −(k_s+k_t)/m_t  −b/m_t    ]       [ k_t/m_t ]


If the body position y is the desired output,

y = [1  0  0  0] [x1  x2  x3  x4]ᵀ

It is important to remember that in each example there are physical units associated with each variable, and that a consistent set of units must be used. In later sections we use the differential equations developed here to model, design, and simulate closed loop control systems.

2.4.2  Mechanical-Rotational System Example

Mechanical-rotational systems are closely related to translational systems, and many devices relate one to the other (e.g., rack-and-pinion steering). A simple mechanical-rotational system is shown in Figure 14, where a rotary inertia is connected through a torsional spring to a fixed object and is subject to torsional damping and an input torque. To derive the differential equation, simply sum all the torques (the effort variable) acting on the rotary inertia and set them equal to the inertia multiplied by the angular acceleration:

ΣT = −K θ − b θ' + T = J θ''

and rearranging,

J d²θ/dt² + b dθ/dt + K θ = T

When compared with the second-order differential equation developed for the simple mass-spring-damper model, the similarities are clear. Each model has inertia, damping, and stiffness terms whose effects on the system are the same (natural frequency and damping ratio); only the units differ. One common case with the rotational system is when the spring is removed and only damping acts on the system. The second-order differential equation then reduces to a first-order differential equation with the output of the system being rotary velocity instead of rotary position. This model is common for systems where velocity, rather than position, is the controlled variable.
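The paragraph above can be checked numerically. The sketch below integrates the first-order velocity model J·dω/dt + b·ω = T (spring removed) and computes the natural frequency and damping ratio of the full second-order model; the parameter values are assumptions chosen only for illustration.

```python
import math

# Rotational model J*theta'' + b*theta' + K*theta = T and its first-order
# velocity reduction J*dw/dt + b*w = T when the spring is removed.
# All parameter values below are illustrative assumptions.
J, b, K, T = 0.5, 2.0, 8.0, 4.0   # inertia, damping, stiffness, input torque

# Second-order characteristics of the full model (same form as mass-spring-damper):
wn = math.sqrt(K / J)             # natural frequency, rad/s
zeta = b / (2.0 * math.sqrt(K * J))  # damping ratio

# First-order reduction: time constant J/b, steady-state velocity T/b.
tau = J / b
w_ss = T / b

def simulate_velocity(dt=1e-3, t_end=3.0):
    """Forward-Euler integration of J*dw/dt + b*w = T from rest."""
    w = 0.0
    for _ in range(int(t_end / dt)):
        w += dt * (T - b * w) / J
    return w

w = simulate_velocity()
# After many time constants, w approaches the steady-state value T/b.
```

The simulated velocity converges to T/b with time constant J/b, which is exactly the behavior exploited when velocity rather than position is the controlled variable.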

Figure 14  Mechanical-rotational model.

2.4.3 Electrical System Example

Now let us look at a basic electrical system and see how the modeling of different systems is related. Beginning with the simple series RLC circuit, a common example shown in Figure 15, let us derive the differential equation to model the system's dynamic behavior. Using Table 2 again, let us sum the voltage drops around the entire circuit. This is commonly referred to as Kirchhoff's voltage law, which, along with Kirchhoff's current law, allows the majority of electrical circuits to be modeled and analyzed. Remember, Vin adds to the voltage total, while R, L, and C are voltage drops when traversing the loop clockwise.

General:          Vin - VR - VL - VC = 0
Using Table 2:    Vin - R i - L di/dt - (1/C)∫i dt = 0

Finally, recognizing that i = dq/dt (current is the flow of charge) and substituting this in gives the recognizable form

L d²q/dt² + R dq/dt + (1/C) q = Vin

This is a linear second-order ordinary differential equation and can thus be classified and simulated by using natural frequency and damping ratio parameters. Commonly, this equation is modified to have the output be the voltage across the capacitor, and not the charge q, since the capacitor voltage is easily measured to verify the model. Using the following identities between q and Vc, the capacitor voltage, the transformation is straightforward:

Vc = (1/C)∫i dt = q/C

and

LC d²Vc/dt² + RC dVc/dt + Vc = Vin

Constructing the circuit and measuring the response easily verifies the resulting equation. As already noted for other domains, the number of energy storage elements corresponds to the order of the system. The exception to this is when two or more energy storage elements are acting together and can thus be combined and represented as a single component. An example of this is electrical inductors in series; the mechanical analogy of this would be two masses rigidly connected. Inductive and capacitive elements both act as energy storage devices in Table 2.
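As a quick numerical check of the classification above, the sketch below computes the natural frequency and damping ratio of the capacitor-voltage equation and integrates its step response by forward Euler. The component values are assumptions chosen for illustration, not measured data.

```python
import math

# Step response of LC*Vc'' + RC*Vc' + Vc = Vin by forward-Euler integration.
# Component values are illustrative assumptions.
R, L, C = 100.0, 0.1, 1e-5     # ohms, henries, farads (assumed)
Vin = 5.0                      # step input voltage, V (assumed)

# Standard second-order parameters for this equation:
wn = 1.0 / math.sqrt(L * C)            # natural frequency, rad/s
zeta = (R / 2.0) * math.sqrt(C / L)    # damping ratio

def step_response(dt=1e-6, t_end=0.02):
    """Integrate Vc'' = (Vin - R*C*Vc' - Vc)/(L*C) from rest."""
    vc, dvc = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        d2vc = (Vin - R * C * dvc - vc) / (L * C)
        vc += dt * dvc
        dvc += dt * d2vc
    return vc

vc = step_response()
# The capacitor voltage settles to the step input level Vin, as expected
# for a stable underdamped (zeta < 1) second-order system.
```

With these assumed values the circuit is underdamped (ζ = 0.5), and the simulated capacitor voltage settles at the input voltage, mirroring the static deflection of the mass-spring-damper analogy.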

Figure 15  RLC circuit model.

In addition, it should be obvious when comparing the RLC differential equation with the mass-spring-damper differential equation that the two equations are identical in function and vary only in notation and physical units. Drawing the analogy further, we see that an inductor is equivalent to a mass, a resistor to a damper, and a capacitor to the inverse of a spring. Note also that these three equivalents are each in the same column of Table 2. Also, two energy storage elements, the inductor and the capacitor, resulted in a second-order system. It should be becoming clearer as we progress that the skills needed to model one type of system are identical to those required for another. Using the generalized effort and flow terminology helps us recognize that all systems are subject to the same physical laws and behave accordingly. This certainly does not imply that what is a "linear" component in one domain is linear in another, but that the relationships between effort and flow variables are consistent. This will become clearer as we move into hydraulic system models in the next section.

2.4.4 Basic Hydraulic Positioning Example

Hydraulic systems are commonly found where large forces are required. They can be mounted in any orientation, have flexible power transmission lines, are very reliable, and have high power densities at the point where the power is applied. More examples of hydraulic systems are given in later chapters. Here, we will use Newtonian physics to develop a differential equation for a basic hydraulic positioning device, as shown in Figure 16. Later this system is examined after feedback has been added; for now the output simply follows the input, and whenever the command is nonzero the output is moving (until the end of stroke is reached). In modeling the system shown in Figure 16, two components must be accounted for: the valve and the piston. The piston is quite simple and can be addressed by summing the forces on the mass m. The dominant force is the fluid pressure acting on the piston area; the viscous friction forces acting on the piston are comparatively small. Hydraulic valves, already linearized in Section 2.3.4, relate three primary variables: pressure, flow, and spool position. At this point we assume a linearized form of the orifice equation describing the valve, where the equation is linearized

Figure 16  Hydraulic positioning example.

about the desired operating point and relates flow to load pressure and valve position. Let us begin by laying out the basic equations.

Valve flow:      Q = (dQ/dx) x - (dQ/dP) P = Kx x - KP P
Piston flow:     Q = A dy/dt
Force balance:   ΣF = m d²y/dt² = PA - b dy/dt

Q is the flow into the cylinder, P is the cylinder pressure, dQ/dx is the slope of the valve flow metering curve at the operating point, dQ/dP is the slope of the valve pressure-flow (PQ) curve at the operating point, A is the area of the piston (minus rod area), and b is the damping from the mass friction and cylinder seal friction. Solving the valve flow equation for P:

P = -Q/KP + (Kx/KP) x

Substitute into the force balance:

m d²y/dt² = (-Q/KP + (Kx/KP) x) A - b dy/dt

Eliminating Q using the piston flow:

m d²y/dt² = (-(A/KP) dy/dt + (Kx/KP) x) A - b dy/dt

Finally, combining inputs and outputs results in

m d²y/dt² + (A²/KP + b) dy/dt = (A Kx/KP) x

This equation is worthy of several observations. Hopefully, many will become clear as we progress. First, notice that y in its nonderivative form does not appear. As Laplace transforms are discussed we will see why we call this a type 1 system with one integrator. The net effect at this point is to understand that a positive input x will continue to produce movement in y regardless of whether or not x continues to increase. That is, y, the output, integrates the position of x, the input. Also, many assumptions were made in developing this simple model. Beginning with the valve, it is linearized about some operating point. For small ranges of operation this is a common procedure. Additionally, the mass of the valve spool was ignored, and x is considered to move the spool by that amount. In most cases involving control systems, a solenoid would provide a force on the valve spool that would cause the spool to accelerate and move. A separate force balance is then required on the spool and the equations become more complex. Finally, the fluid in the system was assumed to be massless and incompressible. At certain input frequencies, these become important items to consider. Without making these assumptions, the result would have been a sixth-order nonlinear differential equation. This task is


probably not enjoyable to most people. What does it take to accurately model such systems? In a later section, bond graphs are used to develop higher-level models. In conclusion, we have reviewed the basic idea of using Newton's laws to develop models of dynamic systems. Although only simple models were presented, the procedure illustrated is the basic building block for developing complex models. When complex models are developed, most errors come from improperly applying the basic laws of physics. Soon the differential equations developed above will be examined further and implemented in control system design.

2.4.5 Thermal System Example

To demonstrate the application of Table 2 to thermal system modeling, we examine a simplified home water heater. Ordinary differential equations are somewhat limited for thermal systems, and partial differential equations are required for more complete models. Most thermal systems exhibit interaction between resistance and capacitance properties and do not lend themselves to lumped parameter modeling. There are three basic ways heat flows are modeled: conduction, convection, and radiation. In most cases involving control systems, the radiation heat transfer may be ignored since the temperature differentials are not large enough for it to be significant. This simplifies our models since conduction and convection can be linearly modeled as having the heat flow proportional to the temperature difference. Since most temperature control systems are designed to maintain constant temperature (process control), the linear models work quite well. Thus, basic models for the resistance, R, and capacitance, C, properties in thermal systems may be expressed as in Table 2 (effort variable is temperature; flow variable is heat flow rate):

Heat flow = Temperature / R    and    Heat stored = C × Temperature

If we assume that the water temperature inside the water heater is constant, then the necessary equilibrium condition is that the heat added minus the heat removed equals the heat stored. This is very similar to the liquid level systems examined in the next section. The water heater system can be simplified as shown in Figure 17 where cold water flows into the tank, a heater is embedded in the tank, and hot water exits (hopefully).

Figure 17  Thermal system (water heater) model.

θi = temperature of water entering the tank
θt = temperature of water in and leaving the tank
θa = temperature of air surrounding the tank
qi = heat flow into the system from water entering
qo = heat flow leaving the system from water exiting
qh = heat flow into the system from the heater
qa = heat flow leaving the system into the atmosphere (through insulation)
C = thermal capacitance of water in the tank
R = thermal resistance of insulation
S = specific heat of water
dm/dt = mass flow rate in and out of tank

The governing equilibrium equation is

heat flow in - heat flow out = heat flow stored
qi + qh - qa - qo = C (dθt/dt)

Now define the heat flows:

qi = (dm/dt) S θi
qh = heat flow from heater (system input)
qa = (θt - θa)/R, heat lost through insulation with resistance R
qo = (dm/dt) S θt

Substitute in the equilibrium equation:

C (dθt/dt) + (θt - θa)/R + (dm/dt) S θt - (dm/dt) S θi = qh

And simplify:

C (dθt/dt) + (θt - θa)/R + (dm/dt) S (θt - θi) = qh

Or, in terms of θt:

C dθt/dt + (1/R + ṁS) θt = qh + (1/R) θa + ṁS θi

Remember that this model assumes uniform water temperature in the tank, no heat storage in the tank walls or the surrounding insulation, linear models for conduction and convection, and no radiation heat losses. As in all models, attention must be given to ensure that consistent sets of units are used.
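The tank temperature equation can be exercised numerically: setting dθt/dt = 0 gives the steady-state temperature, and forward-Euler integration shows the approach to it. Every number below is an assumption chosen for illustration, not data from the text.

```python
# Water-heater heat balance: C*dtheta/dt + (1/R + mdot*S)*theta
#                            = qh + theta_a/R + mdot*S*theta_i
# All parameter values are illustrative assumptions (SI units).
C = 1.2e6       # thermal capacitance of tank water, J/K (assumed)
R = 0.05        # insulation resistance, K/W (assumed)
S = 4186.0      # specific heat of water, J/(kg K)
mdot = 0.02     # mass flow rate through the tank, kg/s (assumed)
qh = 2000.0     # heater power, W (assumed)
theta_a = 20.0  # ambient temperature, deg C (assumed)
theta_i = 15.0  # inlet water temperature, deg C (assumed)

# Steady state: set d(theta_t)/dt = 0 and solve for theta_t.
theta_ss = (qh + theta_a / R + mdot * S * theta_i) / (1.0 / R + mdot * S)

def simulate(dt=10.0, t_end=200000.0):
    """Forward-Euler integration starting from the inlet temperature."""
    theta = theta_i
    for _ in range(int(t_end / dt)):
        dtheta = (qh + theta_a / R + mdot * S * theta_i
                  - (1.0 / R + mdot * S) * theta) / C
        theta += dt * dtheta
    return theta

theta = simulate()
```

With these assumed values the time constant C/(1/R + ṁS) is a few hours, which is why a real water heater responds so slowly to heater-power changes.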

2.4.6 Liquid Level System Example

Liquid level systems can be considered as special cases of the hydraulic/pneumatic row in Table 2. Several assumptions are made that simplify the models. The first common assumption concerns the type of flow. Remembering that flow can be described as laminar or turbulent (not forgetting the transitional region), flow in liquid level systems is generally assumed to be laminar. Whereas turbulent flow pressure drops vary with the square of flow rate (effort = R × flow²), laminar flow pressure drops are proportional to flow and inherently more linear (effort = R × flow). In most liquid level systems the fluid velocity is relatively slow and the assumption is valid. The second assumption commonly made is to ignore the effects of the fluid inertia and capacitance, since fluid velocities and pressures are generally


low. Finally, instead of dealing with pressure as our effort variable, elevation head is used as the effort variable. These are related through the weight density, where

Pressure = (weight density) × (elevation head) = γ h

This leads to the governing equilibrium equation (law of continuity):

Flow in - flow out = rate of change in stored volume

or

qin - qout = C (dh/dt)

If h is the height of liquid in the tank, then C simply represents the cross-sectional area of the tank (which may or may not be constant as the level changes). Of course, in reviewing previous examples and examining Table 2, this is simply an alternative form of the electrical capacitor equation (i = C dV/dt) or the thermal capacitor equation from the previous section. To illustrate the concepts in an example, let us consider the system of two tanks represented in Figure 18 and develop the differential equations for the system.

qi = liquid flow rate into the system
qo = liquid flow rate leaving the system
qb = liquid flow rate from tank 1 into tank 2
h1 = liquid level (height) in tank 1
h2 = liquid level (height) in tank 2
C1 = capacitance of water in tank 1 (cross-sectional area)
C2 = capacitance of water in tank 2 (cross-sectional area)
R1 = resistance to flow between tank 1 and tank 2 (valve)
R2 = resistance to flow between tank 2 and discharge port (valve)

Governing equation for tank 1:  qi - qb = C1 dh1/dt
Governing equation for tank 2:  qb - qo = C2 dh2/dt

Relationships between variables:

qb = (h1 - h2)/R1        qo = h2/R2

Figure 18  Liquid level system.

And simplifying:

R1 C1 dh1/dt + h1 = R1 qi + h2

and

R2 C2 dh2/dt + (1 + R2/R1) h2 = (R2/R1) h1

In the two resulting equations, each term has units of length (pressure head) and the equations are coupled. The input to the system is qi and the desired output (controlled variable) is h2. In the next chapter we use Laplace transforms to find the relationship between h2 and qi.
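The coupled tank equations are simple enough to integrate directly. The sketch below uses forward Euler with assumed geometry and valve resistances; the point to notice is the steady state, where qb = qo = qi, so h2 → R2·qi and h1 → (R1 + R2)·qi.

```python
# Euler simulation of the coupled two-tank liquid level equations.
# Tank areas, valve resistances, and inflow are illustrative assumptions.
C1, C2 = 2.0, 1.0     # tank cross-sectional areas, m^2 (assumed)
R1, R2 = 50.0, 100.0  # valve resistances (head per unit flow), s/m^2 (assumed)
qi = 0.01             # constant inflow, m^3/s (assumed)

def simulate(dt=0.5, t_end=20000.0):
    """Integrate qi - qb = C1*dh1/dt and qb - qo = C2*dh2/dt from empty tanks."""
    h1 = h2 = 0.0
    for _ in range(int(t_end / dt)):
        qb = (h1 - h2) / R1   # flow from tank 1 into tank 2
        qo = h2 / R2          # discharge flow
        h1 += dt * (qi - qb) / C1
        h2 += dt * (qb - qo) / C2
    return h1, h2

h1, h2 = simulate()
# At steady state qb = qo = qi, so h2 -> R2*qi and h1 -> (R1 + R2)*qi.
```

This confirms the head interpretation noted above: every term in the model, including the steady-state levels, carries units of length.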

2.4.7 Composite System—Electrical and Mechanical

To conclude this section, let us develop a model of a common DC motor with an inertial load. This model is given in Figure 19. For this system we need to write two equations, one electrical and one mechanical. The electrical equation comes from the sum of the voltage drops around the loop, and the mechanical equation from the torque acting on the inertial load.

Voltage loop equation:

L di/dt = Vin - R i - Kemf ω

Newton's law:

J dω/dt = Tem - TL = KT i - TL

where R is the armature resistance, L is the armature inductance, Kemf is the back emf constant, KT is the torque constant, Vin is the applied voltage, i is the armature current, Tem is the electromagnetic torque, J is the motor and load inertia, and ω is the rotor rotational velocity. Since the variables are coupled, it is much easier to simply write the state equations in linear matrix form:

[di/dt]   [ -R/L   -Kemf/L ] [i]   [ 1/L    0   ] [Vin]
[dω/dt] = [ KT/J      0    ] [ω] + [  0   -1/J  ] [TL ]

So in both examples, it is quite simple to apply the basics to arrive at system models that later will be examined and ultimately controlled.
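The coupled motor equations can be integrated directly from the state-space form above. The sketch below uses forward Euler with assumed motor constants; with no load torque the rotor should settle at ω = Vin/Kemf with zero steady-state current.

```python
# Euler simulation of the DC motor state equations:
#   di/dt = (Vin - R*i - Kemf*w)/L,   dw/dt = (KT*i - TL)/J
# Motor constants are illustrative assumptions, not data from the text.
R, L = 1.0, 0.01        # armature resistance (ohm) and inductance (H), assumed
Kemf, KT = 0.1, 0.1     # back-emf and torque constants (SI units), assumed
J = 0.001               # motor plus load inertia, kg m^2 (assumed)
Vin, TL = 12.0, 0.0     # applied voltage (V) and load torque (N m), assumed

def simulate(dt=1e-4, t_end=2.0):
    """Integrate both states from rest with a step voltage input."""
    i = w = 0.0
    for _ in range(int(t_end / dt)):
        di = (Vin - R * i - Kemf * w) / L
        dw = (KT * i - TL) / J
        i += dt * di
        w += dt * dw
    return i, w

i, w = simulate()
# With TL = 0, the rotor settles at w = Vin/Kemf (120 rad/s here) and the
# steady-state armature current goes to zero.
```

Note how the back emf term couples the mechanical state into the electrical equation: it is this coupling, not the resistance alone, that limits the no-load speed.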

Figure 19  Permanent magnet DC motor model.

Several sections from now we will see why recognizing how different systems relate is important. If you feel comfortable modeling one system and simulating its response, then the ability to model many other systems is already present. This is an important skill for a controls engineer since most systems include multiple subsystems in different physical domains.

2.5 ENERGY METHODS APPLIED TO MODELING

One final method utilizing Newtonian physics is Lagrange's equations. This method is very powerful when the lumped mass modeling approach becomes burdened with algebraic reductions. The theory is based on Hamilton's principle and can be simply summarized as follows: a dynamic system's energy remains constant when all the work done by external forces is accounted for. The power dissipated by components with "friction" must also be accounted for. For a conservative system without external forces, the equation is very simple:

d/dt(∂L/∂q̇i) - ∂L/∂qi = 0

L is the Lagrangian, and L = T - V, where T is the kinetic energy and V is the potential energy. qi is a generalized coordinate (which would be x in the mass-spring-damper example above). As we see in the example, the kinetic energy relates to dqi/dt and the potential energy to qi. Since most systems we wish to control involve components with losses along with external forces, a more general equation must be used:

d/dt(∂L/∂q̇i) - ∂L/∂qi + ∂P/∂q̇i = Qi

P is the added power function, describing the dissipation of energy by the system, and Qi represents the generalized external forces acting on the system. Common energy expressions for mechanical and electrical systems are given in Table 3.

EXAMPLE 2.6

This method is illustrated by revisiting the mass-spring-damper example we studied earlier in Figure 12. First, let us write the expressions describing the kinetic and potential energy for the system by referencing Table 3.

Table 3  Energy Expressions for Electrical and Mechanical Elements

Energy type            Mechanical                         Electrical
Kinetic energy T       Mass: T = (1/2) m (dx/dt)²         Inductor: T = (1/2) L (dq/dt)²
Potential energy V     Spring: V = (1/2) k x²             Capacitor: V = (1/2) C V² = q²/(2C)
                       Gravity: V = m g h
Dissipative energy     Damper: P = (1/2) b (dx/dt)²       Resistor: P = (1/2) R (dq/dt)² = (1/2) R i²

T = (1/2) m v² = (1/2) m (dx/dt)²    and    V = (1/2) k x²    where qi = x, i = 1

Thus,

L = (1/2) m v² - (1/2) k x²

Also, for the power dissipated and the generalized forces,

P = (1/2) b (dx/dt)² = (1/2) b v²    and    Qi = F

Now, insert these expressions into Lagrange's equations and simplify:

d/dt[∂((1/2) m ẋ² - (1/2) k x²)/∂ẋ] - ∂((1/2) m ẋ² - (1/2) k x²)/∂x + ∂((1/2) b ẋ²)/∂ẋ = F

d/dt(m ẋ) + k x + b ẋ = F

Finally,

m ẍ + b ẋ + k x = F

Remembering the notation where dx/dt = ẋ and d²x/dt² = ẍ, we see that the energy method independently arrives at the identical differential equation for the mass-spring-damper system. Using energy methods simply gives the controls engineer another tool to use in developing system models. For systems with many lumped elements, it becomes an easy way to quickly develop the desired differential equations.

2.6 POWER FLOW MODELING METHODS—BOND GRAPHS

2.6.1 Bond Graph Basics

Developing the differential equations for complex dynamic systems can quickly become an exercise in penmanship when attempting to keep track of all the variables interacting between the different components. Although applying the basic Newtonian laws of physics to each subsystem is quite simple, it becomes very difficult to reduce, simplify, and combine the individual equations. This is especially true when multiple domains are represented. In these systems, modeling the power flow through a system is an attractive method with several advantages. We normally think of power entering a system and being delivered to some load. In most systems, diagramming the flow of power through the system is fairly intuitive. Each subsystem transports, transforms, and/or dissipates some of the power until it reaches the load. In generalized variables, then, the power flow can be represented throughout the entire system without the confusing conflict of notations abundant between different domains. One method particularly attractive for teaching (and using in practice) the modeling of dynamic systems using the power method is bond graphs. The goal in this section is not to teach everything required for modeling all dynamic systems using bond graphs, but rather to present the idea of modeling power flow through a system, a unified and structured approach to modeling, and incentive for further study regarding bond graphs. If we can get to the point of understanding the concepts of using bond graphs we will better understand and use


the methods that we already use and are familiar with. This being said, there are many advantages to learning bond graphs themselves, as we will see in this section. Rosenberg and Karnopp [1], who extended earlier work by Ezekial and Paynter [2], helped define the power bond graph technique in 1983. Bond graphs can easily integrate many different types of systems into one coherent model. Many of the higher level modeling and simulation programs, such as Boeing's EASY5, reflect the principles found in bond graphs. Bond graph models are formed on the basis of power interchanges between components. This method allows a formal approach to the modeling of dynamic systems, including assignment of causality and direct formulation of state equations. Bond graphs rely on a small set of basic elements that interact through power bonds. These bonds carry causality information and connect to ports. The basic elements model ideal reversible (C, I) and irreversible (R) processes, ideal connections (0, 1), transformations (TF, GY), and ideal sources (Se, Sf). The benefits of bond graphs include: modeling power flow reduces the complexity of diagrams; simple steps yield state space equations or block diagrams; computer solution is straightforward; and the causality of each bond is easily determined. Every bond has two variables associated with it, an effort and a flow. In a mechanical-translational system, these are simply the force and velocity. This constitutes a power flow since force times velocity equals power. Thus, Table 2 was given as a precursor to bond graphs by including the effort and flow variable for each system. For simple systems, it is just as easy to use the methods in the previous section, but, as will be seen, for larger systems bond graphs are extremely powerful and provide a structured modeling approach for dynamic systems. This section seeks to introduce bond graphs as a usable tool; to completely illustrate their capabilities would take much longer.
In bond graphs, only four variables are needed: effort and flow and the time integrals of each. Thus, for the mechanical system, force, velocity, momentum, and position are all the variables needed. The bonds connect ports and junctions together, which take several forms. The two junction types are 0 junctions and 1 junctions. The 0 junction represents a common effort point in the system, while the 1 junction represents a common flow (velocity) point in the system. Table 4 illustrates the relationships between the general bond graph elements. Using the notation in Table 4 and modifying Table 2 will complete the bond graph library and allow us to model almost all systems quickly and efficiently. Table 5 illustrates these results for common systems. Finally, a few words about bond graph notation. The basic bond is designated using a half arrow. The half arrow points in the direction of power flow when e·f is positive. The direction of the arrow is arbitrarily chosen to aid in writing the equations. If the power flow turns out to be negative in the solution, then the direction of flow is opposite the arrow direction. The short line perpendicular to the bond (the causal stroke) designates the causality of the bond. This helps to organize the component constitutive laws into sets of differential equations. Physically, the causality determines the cause and effect relationships for the bonds. Some bonds are restricted, while others are chosen arbitrarily. Table 6 lists the causal assignments. To determine system causality:

1. Assign the necessary bond causalities. These result from the effort or flow inputs acting on the system. By definition, an effort source imposes an effort on the node and is thus restricted. This is like saying that a force input acting on a mass

Table 4  Basic Bond Graph Elements

Element         Symbol        Parameters                    Equations
Bond            ⇀ (half arrow)  ei = effort on ith bond     Power = e · f
                              fi = flow on ith bond
0 Junction      0             NA                            ei's are equal; Σ fi = 0
1 Junction      1             NA                            Σ ei = 0; fi's are equal
Resistor        R             Resistance, R                 e = R f
Capacitor       C             Capacitance, C;               e = (1/C)∫f dt + e0
                              e0, initial value             f = C de/dt
Inductor        I             Inductance, I;                e = I df/dt
                              f0, initial value             f = (1/I)∫e dt + f0
Effort source   Se            Amplitude, E                  e = E
Flow source     Sf            Amplitude, F                  f = F
Transformer     TF            Ratio, n                      ein = n eout; fout = n fin
Gyrator         GY            Ratio, r                      eout = r fin; ein = r fout
cannot define the velocity explicitly, it causes an acceleration which then gives the mass a velocity. The opposite is true for flow inputs. 2. Extend those wherever possible using the restrictive causalities listed in the table. Restrictive causalities generally are applied at 0, 1, TF, and GY nodes. For example, since a 1 junction represents common flows, only one connection can define (cause) the flow and thus only one causal stroke points toward the 1 junction. Not meeting this requirement would be like having two electronic current sources connected in series. 3. If any bonds are remaining without causal marks, apply integral causality to I and C elements. Integral causality is preferred but not always possible and is described in more detail following this section. 4. All remaining R elements can be arbitrarily chosen. For example, one typical resistor element is a hydraulic valve. We can impose a pressure drop across the valve and measure the resulting flow; or we can impose a flow through the valve and measure the resulting pressure drop. An effort causes a flow, or a flow causes an effort, and both are valid situations. The causality assignments may also be reasoned out apart from the table. For example, the effort, or ‘‘force,’’ must be the cause acting on an inductive (‘‘mass’’) element and flow, or ‘‘velocity,’’ the effect, hence the proper integral causality. The opposite analogy is the velocity being the cause acting on a mass and the force being the effect. A step change in the cause (velocity) would then require an infinite (impulse) force, which physically is impossible. With the

Table 5  Bond Graph Relationships Between Physical Systems

System       Effort e(t)          Flow f(t)                       Momentum p = ∫e dt               Displacement q = ∫f dt   Power P(t) = e(t) f(t)   Energy E(p) = ∫f dp (kinetic); E(q) = ∫e dq (potential)
Translation  F, force (N)         V, velocity (m/sec)             P, momentum (N sec)              x, distance (m)          F(t) V(t), W             ∫V dp, J; ∫F dx, J
Rotation     T, torque (N m)      ω, ang. velocity (rad/sec)      H, ang. momentum (N m sec)       θ, angle (rad)           T(t) ω(t), W             ∫ω dH, J; ∫T dθ, J
Electrical   e, voltage (V)       i, current (A)                  λ, flux linkage (Wb)             q, charge (C)            e(t) i(t), W             ∫i dλ, J (magnetic); ∫e dq, J (electric)
Hydraulic    P, pressure (Pa)     Q, flow rate (m³/sec)           Pp, integral of pressure (Pa sec)  V, volume (m³)         P(t) Q(t), W             ∫Q dPp, J
Thermal      T, temperature (K)   Ṡ, entropy flow rate (J/K sec)  not needed                       S, entropy (J/K)         T(t) Ṡ(t), W             ∫T dS, J

Table 6  Bond Graph Causality Assignments

desired integral causality, a step change in the force results in an acceleration, the integral of which is the velocity. If we end up with derivative causality on I or C elements, additional algebraic work is required to obtain the state equations for the system. When integral causality is maintained, the formulation of the state equations is straightforward and simple. It is possible to have a model with derivative causality, but then great care is required to ensure that the velocity "cause" is bounded, thus limiting the required force. Many times it is possible to modify the bond graph to give integral causality without changing the accuracy of the model. This might be as simple as moving a capacitive element to the other side of a resistive element. Using the analogy of a hydraulic hose, the capacitance and resistance of the hose, and the capacitance and inertia of the oil, are distributed throughout the length of the hose. If a flow source is the input to the section of hose, one of the capacitive elements must be located next to it, or else the inertial element sees the flow source as a velocity input, similar to the mechanical analogy above, and creates derivative causality problems. In reality this is correct since the whole length of hose, and even the fitting attaching it to the flow source, has compliance. Every model is imperfect, and in cases like this rational thinking beforehand saves many hours of irrational thinking later on in the problem. If we are constrained to work with models containing derivative causality, several approaches may be attempted. Sometimes an iterative solution is achievable and the implicit equations that result from derivative causality may still be solved. This will certainly consume more computer resources since for each time step many intermediate iterations may have to be performed. Another option is to consider Lagrange's equations as presented earlier. This may also produce an algebraically solvable problem.
The general recommendation, as mentioned above, is to modify the original model to achieve integral causality and explicit state equations. Once causality is assigned, all that remains is writing the differential equations. This is straightforward and easily lends itself to computers. There are many computer programs that allow you to draw the bond graph and have the computer generate the equations and simulate the system. The equations, being in state space form, are easily used as model blocks in Matlab. In fact, for many advanced control strategies, where state space is the representation of choice, bond graphs are especially attractive because the set of equations that result are in state space form directly from the model. Several useful constitutive relationships for developing the state space equations are given in Table 7.

2.6.2 Mechanical-Translational Bond Graph Example

To illustrate the basics, let us return to the basic mass-spring-damper system already studied using Newton's laws and energy methods. For the general system, effort and flow variables are associated with each bond. For the mechanical system these become force and velocity. To begin, locate common velocity points and assign these to a common 1 junction. If there are any common force points, assign them to a 0 junction. In the mass-spring-damper system in Figure 20, all the components connected to the mass have the same velocity and there are no common force points. The bond graph simply becomes the mass, spring, damper, and input force all connected to the 1 junction, since they all share the same velocity. The resulting bond graph is given in Figure 20. The bond graph notation clearly shows that all the component velocities are equal. To assign causality, begin with Se (a necessary causality assignment). This stroke does not put constraints on the others, so assign I to have integral causality. This can then be extended to the other bonds since the 1 junction is only allowed one flow as a cause (I), with the rest as effects (R, C, Se). All causal strokes are assigned and integral causality throughout the model is achieved. Now the equations can be

Table 7  Equation Formulation with Bond Graphs

                Integral causality                        Derivative causality
Element         Linear               Nonlinear            Linear          Nonlinear
0 Junction      e1 = e2 = e3         NA                   NA              NA
                f1 + f2 + f3 = 0
1 Junction      e1 + e2 + e3 = 0     NA                   NA              NA
                f1 = f2 = f3
R               e = R f              e = e(f)             NA              NA
                f = (1/R) e          f = f(e)
C               e = q/C = ∫f dt/C    e = e(q) = e(∫f dt)  f = C de/dt     f = dq(e)/dt
I               f = p/I = ∫e dt/I    f = f(p)             e = I df/dt     e = dp(f)/dt

Useful identities: states are always written as the momentum or displacement variables, and the first derivative of each state equals a function of the other states and inputs:

dp/dt = f(p's, q's, inputs)      dq/dt = f(p's, q's, inputs)

with ei = qi/C on capacitive elements and fi = pi/I on inertial elements.

Figure 20  Bond graph: mass-spring-damper model.

written using the tables relating to bond graphs. Begin with the two state variables, p1 and q2, to write the equations:

dp1/dt = e1 = e3 - e2 - e4
dq2/dt = f2 = f3 = f4 = p1/I

Writing specific equations for e3, e2, and e4 allows us to finish the equations:

e3 = Se = input force
e2 = q2/C
e4 = R f4 = (R/I) p1

Substituting into the state equations for the final form results in

dp1/dt = Se - q2/C - (R/I) p1
dq2/dt = p1/I

Remembering that Se = F, C = 1/k, I = m, and R = b for mechanical systems allows us to write the state equations in a notation similar to that used for earlier equations:

dp1/dt = F - k q2 - (b/m) p1
dq2/dt = p1/m

Although at first glance they might seem confusing, these equations are identical to those developed before. Notice what the following terms actually represent:

dp1/dt = d(mv)/dt = ma = ΣF
k q2 = k x = spring force
(b/m) p1 = (b/m) m v = b v = force through damper
dq2/dt = p1/m = m v/m = v = velocity

Since the example state equations developed above are linear, they could easily be transformed into matrix form. For a simple mass-spring-damper system, the work involved might seem more difficult than using Newton's laws. As system complexity progresses, the real power of bond graphs becomes clear.
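The equivalence claimed above is easy to verify numerically: the bond-graph state equations are the mass-spring-damper model in momentum/displacement coordinates. The sketch below integrates both forms with the same (assumed) parameters and confirms they give the same displacement.

```python
# Compare the bond-graph states (p1 = momentum, q2 = displacement) with the
# direct Newton form m*x'' + b*x' + k*x = F. Parameter values are assumptions.
m, b, k, F = 2.0, 3.0, 50.0, 10.0

def bond_graph(dt=1e-4, t_end=5.0):
    """Integrate dp1/dt = F - k*q2 - (b/m)*p1, dq2/dt = p1/m from rest."""
    p1 = q2 = 0.0
    for _ in range(int(t_end / dt)):
        dp1 = F - k * q2 - (b / m) * p1
        dq2 = p1 / m
        p1 += dt * dp1
        q2 += dt * dq2
    return q2

def newton(dt=1e-4, t_end=5.0):
    """Integrate the same system in position/velocity coordinates."""
    x = v = 0.0
    for _ in range(int(t_end / dt)):
        a = (F - b * v - k * x) / m
        x += dt * v
        v += dt * a
    return x

x_bg, x_n = bond_graph(), newton()
# Both formulations settle toward the static deflection F/k = 0.2 m.
```

The two trajectories match because q2 = x and p1 = m·v are just a change of state variables, not a change of model.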

Chapter 2

2.6.3 Mechanical-Rotational System Bond Graph Example

Mechanical systems with rotation are very similar to translational systems, and in many cases the two are connected together. Whereas in translational systems the effort variable is force and the flow variable is velocity, in rotational systems the effort variable is torque and the flow variable is angular velocity. The product of torque and angular velocity gives us the power flow through each bond. In developing the bond graph we follow similar procedures and first locate the common effort and flow junctions. Using the rotational system shown in Figure 21, we see that the only common effort junction involves the two shafts connected by torsional spring K1. Next, locating the common flow junctions, we see that the rotary inertias define the common flow junctions, to which the respective C and R elements connect. Finishing the graph, we see that we have a gear train, which we can represent as a transformer (TF), and one input, the torque applied to rotary inertia J1. The input torque is modeled as an effort source, Se. Connecting the components and using Table 6 allows us to assign causality, which results in the bond graph shown in Figure 22. The causality assignments in this example are quite simple, and once the required assignment is made (the effort source, Se), the remaining assignments propagate themselves through the model. Assigning Se as required and I1 as integral defines the causality on bond 3. Assigning C1 as integral then defines the causality on bonds 5 and 6. Finally, assigning integral causality on I2 and C2 defines the causality for the remaining bond 7. All that remains is to develop the differential equations for the system. To illustrate the procedure we begin by writing the general bond graph equations and finish by substituting in the known constants (i.e., J for I elements, b for R elements, K for C elements, and the input T for Se).
The four resulting state equations have the form

dp2/dt = f(p2, q4, p8, q9)
dq4/dt = f(p2, q4, p8, q9)
dp8/dt = f(p2, q4, p8, q9)
dq9/dt = f(p2, q4, p8, q9)

Basic equations:

dp2/dt = e2 = e1 - e3
dq4/dt = f4 = f3 - f5
dp8/dt = e8 = e6 - e7 - e9
dq9/dt = f9 = f8

Figure 21

Mechanical-rotational system using bond graphs.


Figure 22


Bond graph: mechanical-rotational system example.

Constitutive relationships:

e1 = Se
e3 = e4 = q4/C1
f3 = f2 = p2/I1
f5 = (d2/d1) f6 = (d2/d1) f8 = (d2/d1) p8/I2
e6 = (d1/d2) e5 = (d1/d2) e4 = (d1/d2) q4/C1
e7 = R f7 = R f8 = R p8/I2
e9 = q9/C2
f8 = p8/I2

Combining for the general state equations:

dp2/dt = Se - q4/C1
dq4/dt = p2/I1 - (d2/d1) p8/I2
dp8/dt = (d1/d2) q4/C1 - R p8/I2 - q9/C2
dq9/dt = p8/I2

Finally, in matrix form with the J, K, and b substitutions:

[dp2/dt]   [   0        -K1           0         0 ] [p2]   [1]
[dq4/dt] = [ 1/J1        0     -(d2/d1)(1/J2)   0 ] [q4] + [0] Se
[dp8/dt]   [   0    (d1/d2)K1      -b/J2      -K2 ] [p8]   [0]
[dq9/dt]   [   0         0         1/J2         0 ] [q9]   [0]

The procedure to take the system model, develop the bond graph, and write the state equations is, as shown, relatively straightforward and provides a unified approach to modeling dynamic systems. In addition, the state variables represent physically measurable quantities (at least in theory, assuming a sensor is available), and using the generalized effort and flow notation helps eliminate the overlapping symbols commonly found in multidisciplinary models.
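The state matrix of this example can be entered directly into a numerical package to check stability or simulate the response. A minimal Python/NumPy sketch follows; all the parameter values (J1, J2, K1, K2, b, and the gear ratio n = d2/d1) are illustrative assumptions, not values from the text:

```python
import numpy as np

# Illustrative parameters (assumed, not from the text):
J1, J2 = 1.0, 1.0     # rotary inertias
K1, K2 = 1.0, 1.0     # torsional stiffnesses
b = 0.5               # rotational damping
n = 2.0               # gear ratio d2/d1

# State vector: [p2, q4, p8, q9]
A = np.array([[0.0,    -K1,    0.0,   0.0],
              [1/J1,   0.0,  -n/J2,   0.0],
              [0.0,   K1/n,  -b/J2,  -K2],
              [0.0,    0.0,   1/J2,   0.0]])
B = np.array([[1.0], [0.0], [0.0], [0.0]])  # input torque Se enters dp2/dt

eigs = np.linalg.eigvals(A)
print(eigs)  # with damping present, all eigenvalues have negative real parts
```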

2.6.4 Electrical System Bond Graph Example

In this section the same bond graph modeling techniques described for mechanical systems are applied to simple electrical systems. To illustrate the similarities we develop a bond graph for the electrical circuit in Figure 23, assign causality, and write the state space matrices for the circuit. As in the mechanical systems, we begin by locating common effort nodes and common flow paths in the system. This is especially straightforward for electrical systems since any components in series share the same flow (current) and any components in parallel share the same effort (voltage). For our circuit, then, we have a 1 junction for Vi and R, a 0 junction for C1, and a 1 junction for L and C2. The input to the system is modeled as an effort source (Vi). Connecting the elements together and assigning causality results in the bond graph in Figure 24. The causality assignments begin with Se, the only required assignment. The preferred integral causality relationships (I and C elements) are made next, and as before, this assigns the remaining bonds and R elements. Once the causal assignments are made, the equation formulation proceeds, and the three resulting state equations will have the form

dq4/dt = f(q4, p6, q7)
dp6/dt = f(q4, p6, q7)
dq7/dt = f(q4, p6, q7)

Basic equations:

dq4/dt = f4 = f3 - f5 = f2 - f6
dp6/dt = e6 = e5 - e7 = e4 - e7
dq7/dt = f7 = f6

Constitutive relationships:

f2 = Se/R
f6 = p6/L
e4 = q4/C1
e7 = q7/C2

Combining for the general state equations:

dq4/dt = Se/R - p6/L
dp6/dt = q4/C1 - q7/C2
dq7/dt = p6/L

Figure 23

Electrical circuit model using bond graphs.


Figure 24


Bond graph: electrical circuit model.

Finally, in matrix form:

[dq4/dt]   [  0     -1/L     0  ] [q4]   [1/R]
[dp6/dt] = [ 1/C1     0   -1/C2 ] [p6] + [ 0 ] Se
[dq7/dt]   [  0      1/L     0  ] [q7]   [ 0 ]

To finish the example, let us assume that the desired output is the voltage across the capacitor C2, designated VC2. This voltage is simply the effort on bond 7 in Figure 24, which can be expressed as a function of the third state variable, e7 = q7/C2. The corresponding C matrix is then written as

VC2 = e7 = y = [0  0  1/C2] [q4  p6  q7]^T

In summary, for the mechanical and electrical systems shown thus far, the basic procedure is the same: locate the common effort and flow points, draw the bond graph, assign causality, and write the state equations describing the behavior of the system.
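The circuit's state matrix can likewise be examined numerically. A Python/NumPy sketch with illustrative (assumed) element values is shown below. Note that, as written, A contains no dissipative term (R only scales the source), so one eigenvalue sits at the origin and the other two form an undamped pair at omega = sqrt((1/C1 + 1/C2)/L):

```python
import numpy as np

# Illustrative element values (assumed, not from the text):
L, R = 0.1, 10.0        # inductance (H) and resistance (ohm)
C1, C2 = 1e-3, 2e-3     # capacitances (F)

# State vector: [q4, p6, q7]
A = np.array([[0.0,   -1/L,  0.0],
              [1/C1,   0.0, -1/C2],
              [0.0,    1/L,  0.0]])
B = np.array([[1/R], [0.0], [0.0]])
C_out = np.array([[0.0, 0.0, 1/C2]])   # output VC2 = q7/C2

eigs = np.linalg.eigvals(A)
print(eigs)
```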

2.6.5 Thermal System Bond Graph Example

Thermal systems can be modeled using bond graphs but, unlike the previous systems, have characteristics that make them unique. First, there is no equivalent I element. We would expect this since, when we think of heat flow, we envision it beginning as soon as there exists a temperature differential (or effort). The flow of heat does not undergo "acceleration," and no "mass" is associated with the flow of heat. Second, engineering practice has established "standard" variables in thermal systems that do not meet the general requirements of effort and flow. True effort and flow variables do exist (temperature and entropy flow rate) but are not always used in practice. Instead, pseudo bond graphs are commonly used, and the same techniques still apply. The difference is that the product of the variables is not power flow. The effort variable is still temperature, but the flow variable becomes heat flow rate, which already has units of power. The two components shown in Figure 25, R and C, are used to model thermal systems and are still subject to the same restrictions discussed in Section 2.4.5.

Figure 25

Ideal elements in thermal bond graphs.

To illustrate a bond graph model of a thermal system, we once again use the water heater model in Figure 17. This allows us to see the similarities and compare the results with the equations already derived in Section 2.4.5. To begin drawing the bond graph, we again find the common effort (temperature) and flow (heat flow rate) points in the physical system. The common temperature points include the tank itself, the water flow leaving the tank, and the temperature of the surrounding atmosphere. Since the air temperature surrounding the tank is assumed to remain constant, we can model it as an effort source. The temperature difference between the tank water and the surrounding air will cause heat flow through the insulation, modeled as an R element. Finally, examining the connections to the 0 junction representing the temperature of the tank, we have three flow sources, with the difference causing a temperature change in the water. The energy stored in the tank water is modeled as a capacitance, C. The three heat flow sources are the water in, the water out, and the heater element embedded in the tank. Putting it all together results in the bond graph shown in Figure 26. When we write the equations for the bond graph, we see immediately that they are the same as those developed earlier in Section 2.4.5.

On the 0 junction:

f4 = f5 + f6 - f7 - f3 = qi + qh - qo - qa (using the earlier notation)

On the 1 junction:

e3 = e1 + e2, or e3 - e1 = e2

Figure 26

Bond graph: thermal system model.


where f3 = qa = e2/R = (e3 - e1)/R. Thus, when the symbols used in Section 2.4.5 are substituted in, the governing equations are the same. The same assumptions made earlier—no heat storage in the insulation and uniform tank temperature—still apply for the bond graph model.
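The resulting tank equation, C dT/dt = net heat flow in, is easy to exercise numerically. In the Python sketch below, the water-in, water-out, and heater flows are lumped into a single net heat input, and all parameter values are illustrative assumptions, not values from the text:

```python
# Lumped water-heater model: C*dT/dt = q_net - (T - Ta)/R
# Illustrative parameters (assumed, not from the text):
C = 4.2e5       # thermal capacitance of the tank water (J/K)
R = 0.01        # insulation resistance (K/W)
Ta = 20.0       # ambient temperature (deg C)
q_net = 2000.0  # net heat input: water in + heater - water out (W)

T, t, dt = Ta, 0.0, 1.0
while t < 10 * R * C:           # run for ten time constants
    T += dt * (q_net - (T - Ta) / R) / C
    t += dt

print(T)  # settles toward Ta + R*q_net = 40 deg C
```

At equilibrium the heat lost through the insulation equals the net input, giving T = Ta + R q_net.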

2.6.6 Liquid Level System Bond Graph Example

Liquid level systems are essentially modeled using hydraulic concepts but with several changes and simplifications. As discussed in Section 2.4.6, the fluid inertia is ignored and only the capacitive and resistive elements are used. In terms of relative magnitudes the simplification is a valid one, and we do not often see oscillatory (pressure) waves in liquid level control systems. It is a simple matter to include the inertial effect, as the next case study shows, if it is a concern. Also, and in similar fashion, we ignore the capacitive effect of the fluid itself (compressibility) and assume that the volumetric flows relate exactly to the rate of change of liquid volume in the tank. Since the pressures due to elevation heads are many times smaller than typical pressures in hydraulic systems, this assumption is once again a valid one. Finally, as a matter of notation, in liquid level systems the effort variable is taken to be the elevation head rather than the pressure. The two are related through the weight density, a constant, so only the units differ; elevation head still acts on the system as an effort variable. To demonstrate the procedure, we develop a bond graph for the liquid level system given in Figure 27, similar to the liquid level system examined earlier in Section 2.4.6. To construct the bond graph, we once again locate the common effort and flow points in the system. Each tank represents a common effort, and the common flow occurs through R1. There is one input, a flow source, and one flow output, which depends on h2 and R2. Using common effort 0 junctions and common flow 1 junctions, we can construct the bond graph as shown in Figure 28. The causality assignments result in integral (desired) relationships and are arrived at as follows: The flow source is a required causal stroke and, once made, causes C1 and bond 3 to be assigned as well. R1 is indifferent, so we choose integral causality on C2, which then defines the causal relationships for R1 and R2. The equations can be derived following standard procedures. The known form of the state equations is

dq2/dt = f(q2, q6, Sf)
dq6/dt = f(q2, q6, Sf)

Figure 27

Liquid level system using bond graphs.

Figure 28

Bond graph: liquid level system.

Basic equations:

dq2/dt = f2 = f1 - f3 = f1 - f4
dq6/dt = f6 = f5 - f7

Constitutive relationships:

f1 = Sf = qi
f4 = (1/R1) e4 = (1/R1)(e3 - e5) = (1/R1)(e2 - e6)
f5 = f4 (see the above equation)
f7 = (1/R2) e7 = (1/R2) e6 = (1/R2) q6/C2

Combining for the general state equations:

dq2/dt = qi - (q2/C1 - q6/C2)/R1
dq6/dt = (q2/C1 - q6/C2)/R1 - q6/(R2 C2)

Finally, in matrix form with the notation substitutions:

[dq2/dt]   [ -1/(R1 C1)          1/(R1 C2)     ] [q2]   [1]
[dq6/dt] = [  1/(R1 C1)   -(1/C2)(1/R1 + 1/R2) ] [q6] + [0] qi

Remember that a model is just what the name implies, only a model, and different models may all be derived from the same physical system. Different assumptions, uses of notation, and styles will result in different models. In examining this variety of systems using bond graphs, we can see the parallels between different systems and the advantages of a structured approach to modeling. More often than not, modeling complex multidisciplinary systems is just a repetitive application of the fundamentals. This concept is evident in the following case study involving a coupled hydraulic, mechanical, and hydropneumatic system.
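A direct numerical integration of the two tank states confirms the expected steady-state heads. In this Python sketch, the tank capacitances, valve resistances, and inflow are illustrative assumptions, not values from the text:

```python
# Two-tank model:
#   dq2/dt = qi - (q2/C1 - q6/C2)/R1
#   dq6/dt = (q2/C1 - q6/C2)/R1 - q6/(R2*C2)
# Illustrative parameters (assumed, not from the text):
C1, C2 = 0.5, 0.5         # tank capacitances (surface areas, m^2)
R1, R2 = 1000.0, 1000.0   # valve resistances (s/m^2)
qi = 0.001                # inflow (m^3/s)

q2, q6, t, dt = 0.0, 0.0, 0.0, 0.5
while t < 10000.0:
    f4 = (q2 / C1 - q6 / C2) / R1        # flow from tank 1 to tank 2
    q2 += dt * (qi - f4)
    q6 += dt * (f4 - q6 / (R2 * C2))
    t += dt

h1, h2 = q2 / C1, q6 / C2
print(h1, h2)  # steady state: h2 = qi*R2 = 1 m, h1 = qi*(R1 + R2) = 2 m
```

At steady state the inflow passes through both resistances, so the heads settle at h2 = qi R2 and h1 = qi (R1 + R2).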

2.6.7 Case Study: Simulation and Validation of Hydraulic P/M, Accumulators, and Flywheel Using Bond Graphs

To better illustrate the capabilities of bond graphs when multiple types of systems must be modeled, differential equations are derived for a mechanical rotational system coupled to a hydraulic system via an axial-piston bent-axis pump/motor (P/M). A summarized model is shown in Figure 29. In this case study, experimental results obtained from the laboratory are compared with the simulated results of the bond graph model. This model was developed as part of research determining the feasibility of using valves to switch a motor to a pump to go from acceleration to braking while driving [3]. If a pump/motor capable of overcenter operation is used, the valves are not required. However, certain high-efficiency pump/motors are not capable of overcenter operation, and this served as the motivation for the simulation.

Figure 29

Physical model of P/M and valves for vehicle transmission.

The first step in developing the model is reduction of the physical model to include only the necessary elements. The complete physical system as tested in the laboratory is shown in Figure 30. Since efficiencies are not important during the switching times (milliseconds), the pump/motor can be modeled as an ideal transformer (shaft speed proportional to flow and pressure proportional to torque). The hydraulic system is modeled as a single hydraulic line with resistance and capacitance, plus the inertia and capacitance of the oil. The valves then switch this line from low to high pressure during the simulation. The flywheel, pump/motor, and accumulators are all included in the model.

Figure 30

Complete physical system—valves and P/M test.

The reduced model showing the system states is given in Figure 31. In the figure, there are three states in this section of hydraulic line: oil volume stored in the line capacitance, oil momentum, and oil volume stored in the fluid capacitance. There are three states in the energy storage devices: flywheel momentum, oil volume stored in the low-pressure reservoir, and oil volume stored in the high-pressure accumulator. This model leads directly to a bond graph and illustrates the flexibility of bond graphs. Figure 32 gives the bond graph with causality strokes. As in the previous examples, we first locate the common flow and effort points and assign appropriate 0 or 1 junctions to those locations. The bond graph parallels the physical model, with the flywheel connected to the pump using a transformer node (TF). This converts the flywheel speed to a flow rate. Nodes 0a, 1b, and 0c model the inertia of the oil, the resistance of the hose, and the capacitance of the hose and oil. Common flow nodes 1d and 1e state that all the flow passing through a valve must be the same as that transferred to the accumulator connected to that valve. It is possible to connect block diagram elements to bond graph elements, as shown in the switching commands to the valves. In this bond graph model there are no required causality assignments (no effort or flow sources), and the system response is a function of the initial conditions and valve switching commands. To assign the causal strokes, we start with the integral relationships first and then assign the arbitrary causal strokes. This model achieves integral causality, although other model forms may result in derivative causality.
For example, combining the line and oil capacitance into an equivalent compliance in the system and updating the bond graph results in derivative causality, even though the model is still correct and simpler. So as stated earlier, some derivative assignments can be handled by slightly changing the model. The bond graph is now ready for writing the state equations, and although more algebraic work is required for each step, the procedure to develop the equations is identical to that used in the mass-spring-damper example. The resulting states will be p1 , q3 , p6 , q8 , q13 , and q14 (all the energy storage elements).

Figure 31

Reduced valve and P/M model with states.


Figure 32


Bond graph model of valves and P/M.

Basic equations:

dp1/dt = e1
dq3/dt = f3 = f2 - f4
dp6/dt = e6 = e4 - e5 - e7
dq8/dt = f8 = f7 - f9 - f10
dq13/dt = f13 = f11
dq14/dt = f14 = f12

Constitutive relationships:

e1 = -Dpm e2 = -Dpm e3 = -Dpm (q3/Chose)
f3 = f2 - f4 = Dpm f1 - f6 = (Dpm/Iflw) p1 - (1/Ioil) p6
e6 = e4 - e5 - e7 = e3 - Rhose f5 - e8 = q3/Chose - (Rhose/Ioil) p6 - q8/Coil
f8 = f7 - f9 - f10 = f6 - f13 - f14 (see below)
f6 = p6/Ioil
f13 = Cd16 (q8/Coil - e13)^0.5
f14 = Cd20 (q8/Coil - e14)^0.5

Accumulator models (e13 and e14): To find the pressure in the accumulators (e13 and e14), we must first choose an appropriate model for the gas-charged bladder. Most hydropneumatic accumulators are charged with nitrogen and can be reasonably modeled using the ideal gas law.


For the accumulators in this case study, foam was inserted into the bladder, with the result that the pressure-volume relationship during charging becomes isothermal and efficiencies are greatly increased [4]. This allows us to finish the accumulator (Chigh and Clow) models for the state equations as follows.

General isothermal gas law:

P1 V1 = P2 V2

Let P1 and V1 be the initial charge pressure and gas volume: eh and qh for the high-pressure accumulator, el and ql for the low-pressure accumulator. Then, with the efforts given by P2 = P1 (V1/V2):

e13 = eh qh/(qh - q13)
e14 = el ql/(ql - q14)

This gives us the accumulator pressures as functions of the state variables q13 and q14 and allows us to finish writing the state equations for the system. The state equation for q8 references f13 and f14, which are simply the first derivatives of the states q13 and q14 solved for in this section.

Combining for the general state equations:

dp1/dt = -(Dpm/Chose) q3
dq3/dt = (Dpm/Iflw) p1 - (1/Ioil) p6
dp6/dt = q3/Chose - (Rhose/Ioil) p6 - q8/Coil
dq8/dt = p6/Ioil - s1 Cd16 [q8/Coil - eh qh/(qh - q13)]^0.5 - s2 Cd20 [q8/Coil - el ql/(ql - q14)]^0.5
dq13/dt = s1 Cd16 [q8/Coil - eh qh/(qh - q13)]^0.5
dq14/dt = s2 Cd20 [q8/Coil - el ql/(ql - q14)]^0.5

As presented later in the state space analysis section, 3.6.3, once the state equations are developed, programs like Matlab, MathCad, and Mathematica may be used to simulate and predict the system response. We can also write a numerical integration routine to accomplish the same thing using virtually any programming language. A second item to mention is the nonlinearity of the state equations. Modeling the flow through the valves as proportional to the square root of the pressure drop introduces nonlinearities into the state equations. The nonlinearities prevent us from writing the state equations using matrices unless we first linearize them. Finally, s1 and s2 represent changing inputs. When the system response is determined, the inputs to the equations must be provided. Due to the nonlinearities, it is necessary to numerically integrate the state equations to obtain a response. A closed-form solution is virtually impossible for most real nonlinear systems.

To actually compare the simulation of the model with the experimental results, the values for the model parameters were calculated and then adjusted to better reflect the actual system. The model parameter values used during the simulation are listed below and are based on data from the manufacturer and measurements (dimensions) taken in the laboratory. As in all engineering problems, care must be taken to perform the simulation with consistent and correct units for all physical variables.

System values:

rho = 900 kg/m^3 (8.4 x 10^-5 lbf sec^2/in^4) (oil density)
l = 4 m (157.5 in.) (hydraulic line length)
d = 0.0254 m (1 in.) (hydraulic line diameter)
beta_line = 6.95 x 10^8 Pa (100,800 psi) (bulk modulus of hydraulic line)
beta_oil = 1.4 x 10^9 Pa (203,100 psi) (bulk modulus of oil)
A = pi d^2/4 = 5.07 x 10^-4 m^2 (0.785 in^2) (cross-sectional area of hydraulic line)
Vo = A l = 0.002 m^3 (123.7 in^3) (volume of oil contained in line)

Mass (fluid inertia):

I = rho l/A = 1.408 x 10^7 kg/m^4 (59.2 lbf sec^2/in^5)

Capacitance (fluid compliance):

Coil = Vo/beta_oil = 1.45 x 10^-12 m^3/Pa (0.000625 in^3/psi)

Capacitance (hose and fitting compliance):

Cline = Vo/beta_line = 2.915 x 10^-12 m^3/Pa (0.00122 in^3/psi)

Resistance (hose and fittings):

Rline = 1.4 x 10^8 Pa sec/m^3 (0.333 psi sec/in^3)

Poppet valve coefficients (from manufacturer data), with f = Cd sqrt(e):

Cd16 = 8.75 x 10^-6 m^3/(sec sqrt(Pa)) (44.3 in^3/(sec sqrt(psi)))
Cd20 = 2.10 x 10^-5 m^3/(sec sqrt(Pa)) (106.4 in^3/(sec sqrt(psi)))
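As noted above, a numerical integration routine for these nonlinear state equations can be written in virtually any programming language. A sketch of a classical fourth-order Runge-Kutta step in Python is given below, checked on a simple equation with a known solution; the routine accepts any state-derivative function f(t, y), such as one implementing the six equations above:

```python
import math

def rk4_step(f, t, y, dt):
    # One classical fourth-order Runge-Kutta step for dy/dt = f(t, y),
    # where y is a list of state values (the p's and q's).
    k1 = f(t, y)
    k2 = f(t + dt/2, [yi + dt/2 * k for yi, k in zip(y, k1)])
    k3 = f(t + dt/2, [yi + dt/2 * k for yi, k in zip(y, k2)])
    k4 = f(t + dt, [yi + dt * k for yi, k in zip(y, k3)])
    return [yi + dt/6 * (a + 2*b + 2*c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# Quick check on dy/dt = -y, whose exact solution is y(t) = exp(-t):
y, t, dt = [1.0], 0.0, 0.01
while t < 1.0 - 1e-12:
    y = rk4_step(lambda t, y: [-y[0]], t, y, dt)
    t += dt
print(y[0])  # close to exp(-1) = 0.3679
```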

These models were verified in the lab, with good correlation between the predicted and actual responses, as illustrated in Figures 33 and 34. Valve delay functions simulating the opening and closing characteristics determined in the lab were implemented to determine optimal delays. Using these values and integrating the state equations yielded responses with natural frequencies of 219 rad/sec and damping ratios of 0.06. The correlation with the experimental switches was very high; the exception was that the actual system was slightly "softer" (in terms of bulk modulus) than calculated from the manufacturers' data. When the results are compared directly, as in Figure 35, the accuracy of the simulation is evident. It certainly is possible to "tune" the estimated system parameters used above to achieve better matches, since the same dynamics are present in the simulated and experimental results. The model for this example was used to design a delay circuit and strategy for performing switches while minimizing energy losses and pressure spikes. The analogy can be extended to the poppet valve controller examined in the case study (see Sec. 12.7). The simulated and experimental results agree quite well, as shown, and the simulation method does not require any special software. This illustrates the versatility of bond graphs in developing system models utilizing several physical domains. Another case study example is examined later and compared to an equivalent model in block diagram representation. Bond graphs are especially applicable to fluid power systems, where the effort and flow variables, pressure and flow rate, are well understood. The many "transformer" elements, such as cylinders and pump/motors, are easily represented with bond graphs. As a result, many fluid power systems have been developed using bond graphs, ranging from controllers [5] to components [6] to systems.
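The reported natural frequency can be cross-checked directly from the listed parameters. Keeping only the oil inertia, oil compliance, and hose resistance (neglecting the hose compliance and the pump/motor branch, a simplification made for this estimate only), a quick Python calculation lands within a few percent of the reported 219 rad/sec, with a damping ratio of the same order as the reported 0.06:

```python
import math

# Parameter values from the list above:
I_oil = 1.408e7    # fluid inertia, kg/m^4
C_oil = 1.45e-12   # fluid compliance, m^3/Pa
R_line = 1.4e8     # line resistance, Pa*sec/m^3

wn = 1.0 / math.sqrt(I_oil * C_oil)               # undamped natural frequency, rad/sec
zeta = (R_line / 2.0) * math.sqrt(C_oil / I_oil)  # damping ratio
print(wn, zeta)  # about 221 rad/sec and 0.022
```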

Figure 33

Summary of simulation results (valve switches).


Figure 34

Summary of experimental results.

Figure 35

Comparison of simulated and experimental results.


PROBLEMS

2.1 Describe why good modeling skills are important for engineers designing, building, and implementing control systems.
2.2 The most basic mathematical form for representing the physics (steady-state and dynamic characteristics) of different engineering systems is a ______________ ______________.
2.3 Ordinary differential equations depend only on one independent variable, most commonly ______ in physical system models.
2.4 Find the equivalent transfer function for the system given in Figure 36.
2.5 Find the equivalent transfer function for the system given in Figure 37.
2.6 Find the equivalent transfer function for the system given in Figure 38.
2.7 Find the equivalent transfer function for the system given in Figure 39.
2.8 Find the equivalent transfer function for the system given in Figure 40.
2.9 Find the equivalent transfer function for the system given in Figure 41.


Figure 36

Problem: block diagram reduction.

Figure 37

Problem: block diagram reduction.

Figure 38

Problem: block diagram reduction.

Figure 39

Problem: block diagram reduction.

Modeling Dynamic Systems

Figure 40

Problem: block diagram reduction.

Figure 41

Problem: block diagram reduction.


2.10 Find the equivalent transfer function for the system given in Figure 42.
2.11 Find the equivalent transfer function for the system given in Figure 43.
2.12 Find the equivalent transfer function for the system given in Figure 44.
2.13 Given the following differential equation, find the appropriate A, B, C, and D matrices resulting from state space representation: d^3y/dt^3 + 5 d^2y/dt^2 + 32 y = 5 du/dt + u.
2.14 Given two second-order differential equations, determine the appropriate A, B, C, and D matrices resulting from state space representation (z is the desired output; u and v are inputs): a d^2y/dt^2 + b dy/dt + y = 20u, and c d^2z/dt^2 + dz/dt + z = v.

Figure 42

Problem: block diagram reduction.


Figure 43

Problem: block diagram reduction.

Figure 44

Problem: block diagram reduction.

2.15 Linearize the following function, plot the original and linearized functions, and determine an appropriate valid region (label it on the graph): Y(x) = 5x^2 - x^3 - x sin(x), about the operating point x = 2.
2.16 Linearize the following function: Z(x, y) = 3x^2 + x y^2 - y, about the operating point (x, y) = (2, 3).
2.17 Linearize the equation y = f(x1, x2): y(x1, x2) = 4 x1 x2 - 5 x1^2 + 4 x2 + sin(x2), around the operating point (x1, x2) = (1, 0).
2.18 Given the physical system model in Figure 45, develop the appropriate differential equation describing the motion.
2.19 Given the physical system model in Figure 46, develop the appropriate differential equation describing the motion.
2.20 Given the physical system model in Figure 47, develop the appropriate differential equation describing the motion (r is the input, y is the output).
2.21 Write the differential equations for the physical system in Figure 48.
2.22 Given the physical system model in Figure 49, write the differential equations describing the motion of each mass.

Modeling Dynamic Systems

Figure 45

Problem: model of physical system—mechanical/translational.

Figure 46

Problem: model of physical system—mechanical/translational.

Figure 47

Problem: model of physical system—mechanical/translational.


2.23 Given the physical system model in Figure 50:
a. Write the differential equations of motion.
b. Develop the state space matrices for the system where y3 is the desired output.
2.24 Write the differential equations for the physical system model in Figure 51.
2.25 Using the physical system model given in Figure 52, develop the state space matrices where the input is T and the desired output is y.
2.26 Write the differential equation for the electrical circuit shown in Figure 53. Vi is the input and Vc, the voltage across the capacitor, is the output.
2.27 Using the system given in Figure 54, develop the differential equations describing the motion of the mass, y(t), as a function of the input, r(t). PL is the load pressure, and a and b are linkage segment lengths. Assume a linearized valve equation.
2.28 Write the differential equation for the basic oven model given in Figure 55. Assume that the insulation does not store heat and that the material in the oven always has a uniform temperature, θ.

Figure 48

Problem: model of physical system—mechanical/translational.

2.29 Determine the equations describing the system given in Figure 56. Formulate as time derivatives of h1 and h2.

Figure 49

Problem: model of physical system—mechanical/translational.

Figure 50

Problem: model of physical system—mechanical/translational.


Figure 51

Problem: model of physical system—mechanical/rotational.

Figure 52

Problem: model of physical system—mechanical.

Figure 53

Problem: model of physical system—electrical.


2.30 Determine the equations describing the system given in Figure 57. Formulate as time derivatives of h1 and h2.
2.31 Derive the differential equations describing the motion of the solenoid and mass plunger system given in Figure 58. Assume a simple solenoid force of FS = KS i.
2.32 Develop the bond graph and state space matrices for the system in Problem 2.18.
2.33 Develop the bond graph and state space matrices for the system in Problem 2.19.
2.34 Develop the bond graph and state space matrices for the system in Problem 2.20.


Figure 54

Problem: model of physical system— hydraulic/mechanical.

Figure 55

Problem: model of physical system—thermal.

Figure 56

Problem: model of physical system—liquid level.

Figure 57

Problem: model of physical system—liquid level.


Figure 58

Problem: model of physical system—electrical/mechanical.

2.35 Develop the bond graph and state space matrices for the system in Problem 2.21.
2.36 Develop the bond graph and state space matrices for the system in Problem 2.22.
2.37 Develop the bond graph and state space matrices for the system in Problem 2.23.
2.38 Develop the bond graph and state space matrices for the system in Problem 2.24.
2.39 Develop the bond graph and state space matrices for the system in Problem 2.25.
2.40 Develop the bond graph and state space matrices for the system in Problem 2.26.
2.41 Develop the bond graph and state space matrices for the system in Problem 2.27.
2.42 Develop the bond graph and state space matrices for the system in Problem 2.31.

REFERENCES

1. Rosenburg RC, Karnopp DC. Introduction to Physical System Dynamics. New York: McGraw-Hill, 1983.
2. Ezekial FD, Paynter H. Computer representation of engineering systems involving fluid transients. Trans ASME, Vol. 79, 1957.
3. Lumkes J, Hartzell T. Investigation of the Dynamics of Switching from Pump to Motor Using External Valving. ASME Publication no. H01025, IMECE, 1995.
4. Pourmovahead A, Baum S. Experimental evaluation of hydraulic accumulator efficiency with and without elastomeric foam. J Propuls Power 4:185-192, 1988.
5. Barnard B, Dransfield P. Predicting response of a proposed hydraulic control system using bond graphs. Dynamic Syst, Measure Control, March 1977.
6. Chong-Jer L, Brown F. Nonlinear dynamics of an electrohydraulic flapper nozzle valve. Dynamic Syst, Measure Control, June 1990.


3 Analysis Methods for Dynamic Systems

3.1 OBJECTIVES

- Introduce different methods for analyzing models of dynamic systems.
- Examine system responses in the time domain.
- Present Laplace transforms as a tool for designing control systems.
- Present frequency domain tools for designing control systems.
- Introduce state space representation.
- Demonstrate the equivalencies between the different representations.

3.2 INTRODUCTION

In this chapter four methods are presented for simulating the response of dynamic systems from sets of differential equations. Time domain methods are seldom used beyond first- and second-order differential equations and are usually the first methods presented when beginning the theory of differential equations. The s-domain (Laplace domain, closely tied to the frequency domain) is common in controls engineering since many tables, computer programs, and block diagram tools are available. Frequency domain techniques are powerful methods, useful for all engineers interested in control systems and modeling. Finally, state space techniques have become much more common since the advent of the digital computer. They lend themselves quite well to computer simulations, large systems, and advanced control algorithms. It is important to become comfortable using each representation. Using the different representations is like speaking different languages: the same message may be conveyed in any of them, and each representation can be translated (converted) into another. Some techniques lend themselves more to one particular representation, other techniques to another. Once we see that they really give us the same information and become comfortable using each one, we expand our set of usable design tools.

3.3 TIME DOMAIN

The time domain includes both differential equations in their basic form and the solutions of those equations in the form of a time response. Ultimately, most analysis methods target the solution in terms of time, since we operate the system and evaluate its performance in the time domain. We live in and describe events according to time, so it seems very natural to us. This section is limited to differential equations that are directly solved as an output response with respect to time without using the tools presented in the following sections. Since most control systems are analyzed in the s-domain, more examples are presented there, and this section simply connects the material learned in introductory courses on differential equations to the s-domain methods used in the remainder of this text.

3.3.1 Differential Equations of Motion

Section 2.3.1 has already presented an overview of differential equations and their common classifications as presented in Table 1 of Chapter 2. We now present solutions to common differential equations found in the design of controllers. In general, without teaching a separate class in differential equations, we are limited to first- and second-order, linear, ordinary differential equations (ODEs). Beyond these, the solution methods presented later are much faster and a more efficient use of our time. The general solution is found by assuming a form of the time solution, in this case an exponential, e^{rt}, and substituting this "solution" into the differential equation. From there we can apply initial conditions and solve for the unknown constants in the solution. Examples for common first- and second-order systems are given below.

3.3.1.1 Solutions to General First-Order Ordinary Differential Equations

The auxiliary equation was defined in Section 2.3.1 and is used here to develop a general solution to first-order linear ODEs. Each order of differentiation produces a corresponding order in the auxiliary equation. Thus, for a first-order ODE we get a first-order auxiliary equation, which contains only one root, or solution, when set equal to zero. This root of the auxiliary equation determines the dynamic response of the system modeled by the differential equation. The example below illustrates the method of using the auxiliary equation to derive the time response solution.

EXAMPLE 3.1

A general first-order ODE:

y' + a y = 0

Substitute in y = e^{rt}:

r e^{rt} + a e^{rt} = 0
(r + a) e^{rt} = 0

which gives the auxiliary equation

r + a = 0


Solution:

y = A e^{rt} = A e^{-at}

Applying the initial condition y(0) = y_0:

y(0) = y_0 = A
y = y_0 e^{-at}

For the general case,

y' + p(t) y = g(t)

the solution is found with the integrating factor μ(t) = e^{∫ p(t) dt}:

y = (1/μ(t)) [ ∫ μ(t) g(t) dt + C ]

If p(t) and g(t) are constants a and b and y(0) = y_0, then

y = (b/a)(1 - e^{-at}) + y_0 e^{-at}

There are other methods for handling special cases of first-order differential equations, such as separation of variables and equations with exact differentials. While certainly interesting, these methods are seldom used in designing controllers. For more information, any introductory textbook on differential equations will cover these topics. We conclude this section by examining the solution methods for homogeneous, second-order, linear ODEs. Using the auxiliary equation is just as simple as it was for general first-order differential equations.

3.3.1.2 Solutions to General Second-Order Ordinary Differential Equations

A general second-order ODE:

y'' + k_1 y' + k_2 y = 0

Auxiliary equation:

r^2 + k_1 r + k_2 = 0

There are three cases that depend on the roots of the auxiliary equation. We can use the quadratic formula to find the roots, since second-order ODEs result in second-order polynomials for the auxiliary equations. Our general form of the auxiliary equation and the corresponding quadratic formula can then be expressed as

a r^2 + b r + c = 0   and   r_{1,2} = (-b ± sqrt(b^2 - 4ac)) / (2a)

Using the quadratic formula leads to the three possible combinations of roots:


Case 1: Real and distinct roots, a and b:

y = A_1 e^{at} + A_2 e^{bt}

Case 1 occurs when the term (b^2 - 4ac) is greater than zero. Both roots will be real and may be either positive or negative. Positive roots exhibit exponential growth (unstable) and negative roots exhibit exponential decay (stable). Applying the initial position and velocity conditions to the solution solves for the constants A_1 and A_2.

Case 2: Real and repeated roots at r:

y = A_1 e^{rt} + A_2 t e^{rt}

Case 2 occurs when the term (b^2 - 4ac) is equal to zero. Both roots now have the same sign. As before, positive roots result in unstable responses and negative roots in stable responses, and initial conditions are still used to solve for the constants A_1 and A_2.

Case 3: Complex conjugate roots (σ ± j ω_d):

y = A_1 e^{σt} cos(ω_d t) + A_2 e^{σt} sin(ω_d t)

Case 3 occurs when the term (b^2 - 4ac) is less than zero. The roots are always complex conjugates, and each root has the same real component, σ = -b/(2a), which determines the stability of the response. The sinusoidal terms, arising from the imaginary portions of the roots, only range between ±1 and simply oscillate within the bounds set by the exponential term e^{σt} at a damped natural frequency equal to ω_d. Mathematically, the sinusoidal terms come from the application of Euler's theorem when the roots of the auxiliary equation are expressed as r_{1,2} = σ ± j ω_d. Then:

e^{rt} = e^{(σ ± j ω_d)t} = e^{σt} e^{±j ω_d t}

Euler's theorem:

e^{jω} = cos ω + j sin ω
e^{-jω} = cos ω - j sin ω

In all three cases, it is necessary to know the initial conditions to solve for A_1 and A_2. The following example problems examine the three cases outlined above. If desired, the sum of sine and cosine terms can be written as a single sine (or cosine) term with an associated phase angle. The alternative notation is expressed as

y = B e^{σt} sin(ω_d t + φ)

B = sqrt(A_1^2 + A_2^2)    φ = tan^{-1}(A_1 / A_2)

In general, it is convenient to use the original form when applying initial conditions to solve for A_1 and A_2. Plotting the time response is easily accomplished using either form.


EXAMPLE 3.2

Differential equation and initial conditions for case 1, two real roots:

y'' + 5y' + 6y = 0,   y(0) = 0, y'(0) = 1

Auxiliary equation:

r^2 + 5r + 6 = (r + 2)(r + 3) = 0,   r = -2, -3

Solution process using initial conditions:

y = A_1 e^{-2t} + A_2 e^{-3t}
y' = -2 A_1 e^{-2t} - 3 A_2 e^{-3t}

At t = 0 these evaluate to

A_1 + A_2 = 0   and   -2 A_1 - 3 A_2 = 1

which gives A_1 = 1, A_2 = -1. The final solution is

y = e^{-2t} - e^{-3t}

It is easy to see, then, that for auxiliary equations resulting in two real roots, the general response can be described as the sum of two first-order responses.

EXAMPLE 3.3

Differential equation and initial conditions for case 2, repeated real roots:

y'' + 8y' + 16y = 0,   y(0) = 1, y'(0) = 1

Auxiliary equation:

r^2 + 8r + 16 = (r + 4)^2 = 0,   r = -4, -4

Solution process using initial conditions:

y = A_1 e^{-4t} + A_2 t e^{-4t}
y' = -4 A_1 e^{-4t} + A_2 e^{-4t} - 4 A_2 t e^{-4t}

At t = 0 these evaluate to

A_1 = 1   and   -4 A_1 + A_2 = 1

which gives A_1 = 1, A_2 = 5. The final solution is

y = e^{-4t} + 5 t e^{-4t}
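A candidate solution is easy to check by substituting it back into the differential equation; a short Python sketch (not from the text) for Example 3.2:

```python
import math

# Example 3.2: y'' + 5y' + 6y = 0, y(0) = 0, y'(0) = 1
# Candidate solution from the auxiliary-equation method: y = e^(-2t) - e^(-3t)
def y(t):
    return math.exp(-2 * t) - math.exp(-3 * t)

def yp(t):   # first derivative
    return -2 * math.exp(-2 * t) + 3 * math.exp(-3 * t)

def ypp(t):  # second derivative
    return 4 * math.exp(-2 * t) - 9 * math.exp(-3 * t)

# Residual of the ODE; it should be identically zero for a true solution
def residual(t):
    return ypp(t) + 5 * yp(t) + 6 * y(t)

residuals = [residual(t) for t in (0.0, 0.5, 1.0, 2.0)]
```

The same substitution check works for Example 3.3 by swapping in the repeated-root solution and its derivatives.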


EXAMPLE 3.4

Differential equation and initial conditions for case 3, complex conjugate roots:

y'' + 2y' + 10y = 0,   y(0) = 1, y'(0) = 1

Auxiliary equation:

r^2 + 2r + 10 = 0,   r = -1 ± 3j

Solution process using initial conditions:

y = A_1 e^{-t} cos(3t) + A_2 e^{-t} sin(3t)
y' = -A_1 e^{-t} cos(3t) - 3 A_1 e^{-t} sin(3t) - A_2 e^{-t} sin(3t) + 3 A_2 e^{-t} cos(3t)

At t = 0 these evaluate to

A_1 = 1   and   -A_1 + 3 A_2 = 1

which gives A_1 = 1, A_2 = 2/3. The final solution is

y = e^{-t} cos(3t) + (2/3) e^{-t} sin(3t)

Finally, when plotting these responses as a function of time, the arguments of the sinusoidal terms must be in radians to be correctly scaled. Some calculators and computer programs have a default setting where the arguments of the sinusoidal terms are assumed to be in degrees. Remember that the responses derived here are all for homogeneous differential equations (no forcing function). The following sections show us techniques for deriving the time responses of nonhomogeneous differential equations.
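The initial-condition algebra in case 3 is easy to get wrong because the sine term contributes a factor of ω_d (here, 3) to y'(0). The sketch below derives the constants directly from the initial conditions (−A_1 + 3A_2 = y'(0)) and cross-checks the closed form against a simple Runge–Kutta integration; the step size and sample times are arbitrary choices:

```python
import math

# Example 3.4: y'' + 2y' + 10y = 0, y(0) = 1, y'(0) = 1, roots -1 +/- 3j.
# Closed form: y = e^(-t) (A1 cos 3t + A2 sin 3t), with A1 = y(0) and,
# from y'(0) = -A1 + 3*A2, A2 = (y'(0) + A1) / 3.
A1 = 1.0
A2 = (1.0 + A1) / 3.0

def y_exact(t):
    return math.exp(-t) * (A1 * math.cos(3 * t) + A2 * math.sin(3 * t))

# Fixed-step RK4 on the first-order form [y, v]' = [v, -2v - 10y]
def rk4(t_end, h=1e-3):
    y, v, t = 1.0, 1.0, 0.0
    f = lambda y, v: (v, -2 * v - 10 * y)
    while t < t_end - 1e-12:
        k1 = f(y, v)
        k2 = f(y + h / 2 * k1[0], v + h / 2 * k1[1])
        k3 = f(y + h / 2 * k2[0], v + h / 2 * k2[1])
        k4 = f(y + h * k3[0], v + h * k3[1])
        y += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        v += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        t += h
    return y

err = max(abs(rk4(t) - y_exact(t)) for t in (0.5, 1.0, 2.0))
```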

Step Input Response Characteristics

In this section we define parameters associated with first- and second-order step responses. While the solutions developed in the previous section are for homogeneous ODEs, here we consider nonhomogeneous differential equations responding to step inputs. Later, the parameters defined here will be used for simple system identification, that is, taking experimental data (the time response of a system) and developing a system model from it. Along with the Bode plot, step response plots are very common for developing these models and for comparing the performance of different systems. From a simple step response it is straightforward to develop an approximate first- or second-order analytical model of the actual physical system.

3.3.2.1 First-Order Step Response Characteristics

First-order systems, by definition, will not overshoot the step command at any point in time and can be characterized by one parameter, the time constant τ. If we take the first-order differential equation from earlier and let c(t) be the system output, the input a unit step occurring at t = 0, the initial condition c(0) = 0, and the time constant equal to τ, then we can write the nonhomogeneous differential equation as below.


General first-order differential equation:

τ dc(t)/dt + c(t) = unit step input = 1,   t ≥ 0

Using the solution methods from the previous section results in

Output = c(t) = 1 - e^{-t/τ}

Since we used a unit step input (magnitude = 1) and an initial condition equal to zero, we call the solution a normalized first-order step response. Plotting the response as a function of the independent variable, time, results in the curve shown in Figure 1. We see that the output exponentially approaches unity as time approaches infinity. Being familiar with the normalized curve is useful in many respects. By imposing a step input on our physical system and recording the response, we can compare it to the normalized step response. If it exponentially grows or decays to a stable value, then we can easily extend the data into a simplified model. Examining the plot further allows us to draw several additional conclusions. Even if our input or measured response does not reach unity, a linear first-order system will always reach a certain percentage of the final value as a function of the time constant. As shown on the plot in Figure 1, the following values will be reached at each time constant interval:

One time constant, t = 1τ:    output = 63.2% of final value
Two time constants, t = 2τ:   output = 86.5% of final value
Three time constants, t = 3τ: output = 95.0% of final value
Four time constants, t = 4τ:  output = 98.2% of final value

Any intermediate value can also be found by calculating the magnitude of the response at a given time using the analytical equation.
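The percentages in the table follow directly from c(t) = 1 − e^{−t/τ}; a small Python check:

```python
import math

# Normalized first-order step response c(t) = 1 - e^(-t/tau).
def c(t, tau=1.0):
    return 1.0 - math.exp(-t / tau)

# Fraction of the final value reached after n time constants
# (the value of tau itself cancels out of the percentage).
percent = {n: round(100 * c(n), 1) for n in (1, 2, 3, 4)}
# -> {1: 63.2, 2: 86.5, 3: 95.0, 4: 98.2}
```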

Figure 1 Normalized step response—first-order system.


EXAMPLE 3.5

Now let's consider the simple RC circuit in Figure 2 and see how this might be used in practice. Summing the voltage drops around the loop (Kirchhoff's voltage law) leads to a first-order linear differential equation.

Sum the voltages around the loop:

V_in - V_R - V_C = 0

Constitutive relationships:

V_R = R I
I = C dV_C/dt

Combining:

RC dV_C/dt + V_C = V_in

Comparing the differential equation developed for the RC circuit in Figure 2 with the generalized equation, once we let τ = RC we have the same equation. For a simple RC circuit, then, the time constant is simply the value of the resistance multiplied by the value of the capacitance. For the RC circuit shown, if R = 1 kΩ and C = 1 mF, then the time constant τ is 1 second. If the initial capacitor voltage is zero (the initial condition) and a switch suddenly connects 10 volts to the circuit (a step input with a magnitude of 10), we would expect the capacitor voltage (the output variable) to reach 6.3 volts at 1 second, 8.7 volts at 2 seconds, 9.5 volts at 3 seconds, and so forth. By the time 5 seconds is reached, we should see 9.93 volts. So we see that if we know the time constant of a linear first-order system, we can predict the response for any step input of known magnitude. Chances are that you are already familiar with time constants if you have chosen transducers for measuring system variables. Knowing the time constant of a transducer will allow you to choose one fast enough for your system.
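A quick sketch evaluating the RC response with the values from Example 3.5:

```python
import math

# RC circuit from Example 3.5: R = 1 kOhm, C = 1 mF -> tau = RC = 1 s,
# 10 V step input, zero initial capacitor voltage.
R, C, Vstep = 1e3, 1e-3, 10.0
tau = R * C

def v_cap(t):
    return Vstep * (1.0 - math.exp(-t / tau))

volts = {t: round(v_cap(t), 2) for t in (1, 2, 3, 5)}
# -> {1: 6.32, 2: 8.65, 3: 9.5, 5: 9.93}
```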

3.3.2.2 Second-Order Step Response Characteristics

Whereas in a first-order system the important parameter is the time constant, in second-order systems there are two important parameters, the natural frequency, ω_n, and the damping ratio, ζ. As before, depending on which of the three cases we have, we will see different types of responses.

Figure 2 First-order system example (RC circuit).


For systems with two real roots, the response can be broken down into the sum of two individual first-order responses. We call this case overdamped. There will never be any overshoot, and the length of time it takes the output to reach the steady-state value depends on the slowest time constant in the system. The faster time constant will have already reached its final value, and its transient effects will disappear before those of the slower time constant. In overdamped second-order systems the total response may sometimes be approximated as a single first-order response when the difference between the two time constants is large; that is, the slow time constant dominates the system response. When a second-order system has an auxiliary equation producing real and repeated roots, we have the unique case where the system is critically damped. Although numerically possible, in practice maintaining critical damping is difficult. Any small errors in the model, nonlinearities in the real system, or changes in system parameters will cause deviation away from the point where the system is critically damped. This occurs because critical damping is a single point along a continuum, not a range over which it may occur. Finally, when the auxiliary equation produces complex conjugate roots, the system overshoots the steady-state value and is underdamped. When we speak of second-order systems, the underdamped case is often assumed. Much of the work in controls deals with designing and tuning systems with dominant underdamped second-order behavior. For these reasons the remaining material in this section focuses primarily on underdamped second-order systems. Many of the techniques also apply to overdamped systems, even though those can just as easily be analyzed as two first-order systems.
To begin with, let us recall the form of the complex conjugate roots of the auxiliary equation as presented earlier:

a r^2 + b r + c = 0   and   r_{1,2} = σ ± j ω

In terms of the natural frequency and damping ratio, we can write the auxiliary equation and its roots as

r^2 + 2 ζ ω_n r + ω_n^2 = 0   and   r_{1,2} = -ζ ω_n ± j ω_d

where ω_d is the damped natural frequency defined as

ω_d = ω_n sqrt(1 - ζ^2)

The negative sign in front of ζ ω_n comes from the -b term in the quadratic formula. As long as the coefficients of the second-order differential equation are positive, this sign will be negative and the system will exponentially decay (stable response). Using this notation for our complex conjugate roots, we can write the generalized step response as

c(t) = 1 - (e^{-ζ ω_n t} / sqrt(1 - ζ^2)) sin( ω_n sqrt(1 - ζ^2) t + tan^{-1}( sqrt(1 - ζ^2) / ζ ) )

To see how the natural frequency and damping ratio affect the step response, it may be helpful to view the equation above in a much simpler form that illustrates the effects of the real and imaginary portions of the roots. Combining all constants into common terms allows us to write the time response as follows:


c(t) = 1 - (e^{-ζ ω_n t} / K_1) sin(ω_d t + φ)

Since the sine term only varies between ±1, the magnitude, or bounds, of the plot is determined by the term e^{-ζ ω_n t}. Recognizing that the coefficient in the exponential, ζ ω_n, is the (negated) real portion of our complex conjugate roots from the auxiliary equation, σ, we can say that the real portion of the roots determines the rate at which the system decays. This is similar to our definition of a time constant and functions in the same manner. Coming back to the sinusoidal term, we see that it describes the oscillations between the bounds set by the real portion of our roots, and it oscillates at the damped natural frequency ω_d. Thus, the imaginary portion of our roots determines the damped oscillation frequency of the system. Figure 3 shows this relationship between the real and imaginary portions of the roots. These concepts are fundamental to the root locus design techniques developed in the next chapter. In general, when plotting a normalized system, instead of a single curve we now get a family of curves, each curve representing a different damping ratio. When a system has a damping ratio greater than 1, it is overdamped and behaves like two first-order systems in series. The normalized curves for second-order systems are given in Figure 4. All curves shown are normalized: the initial conditions are assumed to be zero and the steady-state value reached by every curve is 1. As was done with the first-order plot using the output percentage versus the number of time constants, it is useful to define parameters measured from a second-order plot that allow us to specify performance parameters for our controllers. Knowing how the system responds allows us to predict the output based on chosen values of the natural frequency and damping ratio, or to determine the natural frequency and damping ratio from an experimental plot. Useful parameters include rise time, peak time, settling time, peak magnitude or percent overshoot, and delay time.
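These root relationships are easy to confirm numerically; a sketch using assumed values ω_n = 4 rad/s and ζ = 0.3:

```python
import cmath
import math

# Characteristic polynomial r^2 + 2*zeta*wn*r + wn^2 for an underdamped system
wn, zeta = 4.0, 0.3

b, c = 2 * zeta * wn, wn ** 2
disc = cmath.sqrt(b * b - 4 * c)      # imaginary for an underdamped system
r1 = (-b + disc) / 2
r2 = (-b - disc) / 2

sigma = r1.real        # decay rate; should equal -zeta*wn
wd = abs(r1.imag)      # damped natural frequency; should equal wn*sqrt(1-zeta^2)
```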
Figure 5 gives the common parameters and their respective locations on a typical plot. Knowing only two of these parameters allows us to reverse-engineer a black-box system model from an experimental plot of our system. Since there are two unknowns, ω_n and ζ, we need two equations to solve for them.

Figure 3 Effects of real and imaginary portions of roots (second-order systems).


Figure 4 Normalized step responses—second-order systems.

Rise time, t_r. If the system is underdamped, generally measured as the time to go from 0 to 100% of the final steady-state value, that is, the first point at which the response crosses the steady-state level (roughly one quarter of an oscillation cycle). If the system is overdamped, it is usually measured as the time to go from 10% to 90% of the final value.

Peak time, t_p:

t_p = π / (ω_n sqrt(1 - ζ^2))

Time for the response to reach the first peak (underdamped only).

Settling time, t_s:

t_s = 4τ = 4 / (ζ ω_n)

Time for the response to reach and stay within either a 2% or 5% error band. The settling time is related to the largest time constant in the system: use four time constants for 2% and three time constants for 5%. This equation comes from the bounds shown in Figure 3, where 1/τ equals ζ ω_n. Remember that at four time constants the system has reached 98% of its final value.

Percent overshoot (%OS):

%OS = 100 e^{-π ζ / sqrt(1 - ζ^2)}
%OS = [(peak value - steady-state value) / (steady-state value - initial value)] × 100

This parameter is a function of only the damping ratio (the only parameter listed here that depends on a single variable).

Delay time, t_d. Time required for the response to reach 1/2 the final value for the first time.

Figure 5 Second-order systems—step response parameters.
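The formulas above are simple enough to script. The sketch below computes peak time, 2% settling time, and %OS from assumed values ω_n = 2 rad/s and ζ = 0.5, and also inverts the %OS relation in closed form (a standard rearrangement, useful as an alternative to reading the damping ratio off a plot like Figure 6):

```python
import math

# Step-response parameters for an underdamped second-order system,
# computed from wn and zeta using the formulas listed above.
def step_metrics(wn, zeta):
    wd = wn * math.sqrt(1 - zeta ** 2)
    tp = math.pi / wd                       # peak time
    ts = 4.0 / (zeta * wn)                  # 2% settling time
    os = 100 * math.exp(-math.pi * zeta / math.sqrt(1 - zeta ** 2))
    return tp, ts, os

# Because %OS depends only on zeta, the relation can be inverted exactly:
# zeta = -ln(OS/100) / sqrt(pi^2 + ln(OS/100)^2)
def zeta_from_os(os_percent):
    L = math.log(os_percent / 100.0)
    return -L / math.sqrt(math.pi ** 2 + L * L)

tp, ts, os = step_metrics(2.0, 0.5)
zeta_back = zeta_from_os(os)   # should recover 0.5
```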

By measuring two of the above parameters it is possible to estimate the natural frequency and damping ratio for the system from an experimental plot. The model is determined by writing two equations in the two unknowns (natural frequency and damping ratio) and solving. As we see in the next section, once these items are known, a transfer function approximation can be developed. Step response plots provide quick and easy methods for modeling systems with dominant roots approximating first- or second-order systems. If we need models for higher-order systems, it is helpful to enter the frequency domain. The easiest method for determining a second-order transfer function from an experimental step response plot is to begin by calculating the percent overshoot. Since the %OS depends only on the damping ratio, it eliminates the need to simultaneously solve two equations in two unknowns. Although the equation is difficult to solve by hand, a plot of the relationship, as shown in Figure 6, can be used to quickly arrive at the system damping ratio. Once the damping ratio is found, only one additional measurement from the plot is required (e.g., settling time), and then the natural frequency can be directly calculated. Let us conclude this section by examining the common mass-spring-damper system in the context of the material explained here. To do so we use the differential equation developed for the mass-spring-damper system earlier and relate the constants m, b, and k to the natural frequency and damping ratio.

Figure 6 Percent overshoot from a step input as a function of damping ratio for a second-order system.

EXAMPLE 3.6

The differential equation for the mass-spring-damper system, as developed in Chapter 2, is given below:

m x'' + b x' + k x = F

Divide the equation by m:

x'' + (b/m) x' + (k/m) x = F/m

And compare coefficients with the characteristic form written in terms of the natural frequency and damping ratio:

x'' + 2 ζ ω_n x' + ω_n^2 x = F/m

Thus, for the m-b-k system:

ω_n^2 = k/m   and   2 ζ ω_n = b/m

By noting where m, b, and k appear in the first two representations, the natural frequency and damping ratio are easily calculated for all linear, second-order ODEs. This allows us to define a single time response equation with respect to the natural frequency and damping ratio using the generalized response developed above. Once we write other general second-order differential equations in this form and calculate the system natural frequency and damping ratio, we can easily plot the system's response to a step input. It is important to remember that the generalized response, plot parameters, and methods are developed with respect to step inputs. If we desire the system's response to other inputs, the methods described in the following section are more useful.
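The m-b-k relations from Example 3.6 can be wrapped in a small helper; the numbers m = 2, b = 4, k = 50 are arbitrary values chosen for this sketch:

```python
import math

# Mass-spring-damper: m x'' + b x' + k x = F.
# Dividing by m and matching x'' + 2*zeta*wn*x' + wn^2*x gives:
def natural_freq_damping(m, b, k):
    wn = math.sqrt(k / m)        # from wn^2 = k/m
    zeta = b / (2 * m * wn)      # from 2*zeta*wn = b/m
    return wn, zeta

wn, zeta = natural_freq_damping(m=2.0, b=4.0, k=50.0)
# wn = 5.0 rad/s, zeta = 0.2 (underdamped)
```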

3.4 s-DOMAIN OR LAPLACE

The s-domain is entered using Laplace transforms. These transforms relate time-based functions to functions of the complex variable s. This section introduces the procedures commonly used when working in the s-domain.

3.4.1 Laplace Transforms

Using Laplace transform methods allows us to convert differential equations and various input functions into simple algebraic functions. Both transient and steady-state components can be determined simultaneously. In addition, virtually all linear time-invariant (LTI) control system design is done using the s-domain. Block diagrams are a prime example. The Laplace transform is very powerful when graphical methods in the s-plane (root locus plots) are used to quickly determine system responses. Although several methods are given in this section for using Laplace transforms to solve for the time response of a differential equation, we do well to realize that in designing and implementing control systems, we seldom take the inverse Laplace transform to arrive back in the time domain. There are two primary reasons for this: virtually all the design tools use the s-domain (software included), and in most cases we know what type of response we will have in the time domain simply by looking at our system in the s-domain. The goal of this section, then, is to show enough examples for us to make that connection between what equivalent systems look like in the s-domain and in the time domain.

Using Laplace transforms requires a quick review of complex variables. For the transform, s = σ + jω, where σ is the real part of the complex variable and ω is the imaginary component. This notation was introduced earlier when discussing the complex conjugate roots of the auxiliary equation. Fortunately, although helpful, algebraic knowledge of complex variables is seldom required when using the s-domain. Using the method as a tool for understanding and designing control systems, we primarily obtain the Laplace transform of f(t) and the inverse Laplace transform of F(s) through the use of tables. Making the following definitions will help us use the transforms:

f(t) = a function of time where f(t) = 0 for t < 0
s = σ + jω, the complex variable
L = Laplace operator symbol
F(s) = Laplace transform of f(t)

Then the Laplace transform of f(t) is

L[f(t)] = F(s) = ∫_0^∞ f(t) e^{-st} dt

And the inverse Laplace transform of F(s) is

L^{-1}[F(s)] = f(t) = (1 / 2πj) ∫_{c-j∞}^{c+j∞} F(s) e^{st} ds

A benefit of using this method as a tool is that we seldom (if ever) need to do the actual integration, since tables have been developed that include almost all the transforms we will ever need when designing control systems. Looking at the equations above gives us an appreciation of the time saved by using the tables. A table of Laplace transform pairs is included in Appendix B, and some common transforms that are used often are highlighted here in Table 1. Additional tables are available from many different sources. The outline for using Laplace transforms to find solutions to differential equations is quite simple:

I. Write the differential equation.
II. Perform the Laplace transform.
III. Solve for the desired output variable.
IV. Perform the inverse Laplace transform to obtain the time solution to the original differential equation.

To better illustrate the solution steps, let us take a general ordinary differential equation, include some initial conditions, and solve for the time solution.

Table 1 Common Laplace Transforms

Identities

Constants:           L[A f(t)] = A F(s)
Addition:            L[f_1(t) + f_2(t)] = F_1(s) + F_2(s)
First derivative:    L[df(t)/dt] = s F(s) - f(0)
Second derivative:   L[d^2 f(t)/dt^2] = s^2 F(s) - s f(0) - df(0)/dt
General derivatives: L[d^n f(t)/dt^n] = s^n F(s) - Σ_{k=1}^{n} s^{n-k} f^{(k-1)}(0)
Integration:         L[∫ f(t) dt] = F(s)/s

Common Inputs

f(t), time domain         F(s), Laplace domain
Unit impulse: δ(t)        1
Unit step: 1(t), t > 0    1/s
Unit ramp: t              1/s^2

Common Transform Pairs

First-order impulse response:
  e^{-at}  <->  1/(s + a)

First-order step response:
  (1/a)(1 - e^{-at})  <->  1/(s(s + a))

Second-order impulse response:
  (ω_n / sqrt(1 - ζ^2)) e^{-ζ ω_n t} sin(ω_d t)  <->  ω_n^2 / (s^2 + 2 ζ ω_n s + ω_n^2),
  where ω_d = ω_n sqrt(1 - ζ^2)

Second-order step response:
  1 - (e^{-ζ ω_n t} / sqrt(1 - ζ^2)) sin(ω_d t + φ)  <->  ω_n^2 / (s(s^2 + 2 ζ ω_n s + ω_n^2)),
  where ω_d = ω_n sqrt(1 - ζ^2) and φ = tan^{-1}(sqrt(1 - ζ^2) / ζ)
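Any entry in Table 1 can be spot-checked by evaluating the defining integral numerically; a sketch for the first-order pair L[e^{-at}] = 1/(s + a), where the truncation point and trapezoid rule are arbitrary choices (the integrand decays quickly for a, s > 0, so the truncation error is negligible):

```python
import math

# Evaluate integral_0^inf e^(-at) e^(-st) dt by the trapezoid rule
# on a truncated interval [0, t_max].
def laplace_numeric(a, s, t_max=40.0, n=200000):
    h = t_max / n
    f = lambda t: math.exp(-(a + s) * t)
    total = 0.5 * (f(0.0) + f(t_max)) + sum(f(i * h) for i in range(1, n))
    return h * total

approx = laplace_numeric(a=2.0, s=1.0)   # exact value: 1/(s + a) = 1/3
```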

I. Write the differential equation. For this part we assume we developed the following differential equation and initial conditions from applying the concepts in Chapter 2 to a physical system:

d^2x/dt^2 + 6 dx/dt + 5x = 0,   x_0 = 0, ẋ_0 = 2

II. Perform the Laplace transform:

s^2 X(s) - s x_0 - ẋ_0 + 6 s X(s) - 6 x_0 + 5 X(s) = 0

III. Solve for the desired output variable:

(s^2 + 6s + 5) X(s) = 2
X(s) = 2 / (s^2 + 6s + 5) = 2 / ((s + 1)(s + 5))

IV. Perform the inverse Laplace transform. From the tables we see a similar transform pair that will meet our needs:

L^{-1}[ (b - a) / ((s + a)(s + b)) ] = e^{-at} - e^{-bt}

Rearrange X(s) to match the table entry found in Appendix B (constants carry through):

2 / ((s + 1)(s + 5)) = (1/2) · 4 / ((s + 1)(s + 5)),   a = 1, b = 5

Then the time response can be expressed as

x(t) = (1/2)(e^{-t} - e^{-5t})
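The result can be verified by substituting it back into the original differential equation; a short sketch:

```python
import math

# Check x(t) = (1/2)(e^(-t) - e^(-5t)) against
# x'' + 6x' + 5x = 0 with x(0) = 0, x'(0) = 2.
def x(t):
    return 0.5 * (math.exp(-t) - math.exp(-5 * t))

def xp(t):
    return 0.5 * (-math.exp(-t) + 5 * math.exp(-5 * t))

def xpp(t):
    return 0.5 * (math.exp(-t) - 25 * math.exp(-5 * t))

ode_residuals = [xpp(t) + 6 * xp(t) + 5 * x(t) for t in (0.0, 0.3, 1.0, 3.0)]
```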

So we see that, at least in some cases, the solution of an ODE using Laplace transforms is quite straightforward and easy to apply. What happens more often than not, or so it seems, is that the right match is not found in the table and we must manipulate the Laplace solution before we can use an identity from the table. Sometimes it is necessary to expand the function in the s-domain using partial fraction expansion to obtain forms found in the lookup tables of transform pairs. Many computer programs are also available to assist with Laplace transforms and inverse Laplace transforms; in most cases the program must have symbolic math capabilities.

3.4.2 Laplace Transforms: Partial Fraction Expansion

There are two primary classes of problems where we might use partial fraction expansion when performing inverse Laplace transforms. If we do not have any repeated real or complex conjugate roots, the expansion is straightforward. When our system in the s-domain contains repeated first-order roots or repeated complex conjugate roots, the algebra gets more tedious, as we must take derivatives of both sides during the expansion. Simple examples of these cases are illustrated in this section. If more details are desired (usually not required for designing control systems), most texts on differential equations contain sections on the theory behind each case.

3.4.2.1 Partial Fraction Expansion: No Repeated Roots

To demonstrate the first case, lets add a nonzero initial position condition to the example above and examine what happens. Modified system:

Analysis Methods for Dynamic Systems

d 2x dx þ 6 þ 5 x ¼ 0; dt dt

91

x0 ¼ 2; x_ 0 ¼ 2

s2 XðsÞ  s x0  x_ 0 þ 6 s XðsÞ  6 x0 þ 5 XðsÞ ¼ 0 ðs2 þ 6s þ 5ÞXðsÞ ¼ 2 s þ 14 XðsÞ ¼

2s þ 14 2s þ 14 ¼ s2 þ 6s þ 5 ðs þ 1Þðs þ 5Þ

With the addition of the s term in the numerator, we no longer find the solution in the table. Using partial fraction expansion will result in simpler forms and allow the use of the Laplace transform pairs found in Appendix B. It is possible for most cases to find a table containing our form of the solution (in dedicated books containing transform pairs) but to include all of the possible forms makes for a confusing and long table. Also, remember that these techniques are more importantly learned for the connection they allow us to make between the s-domain and the time domain, then for the reason that it is a common task when designing control systems (in general it is not). For the partial fraction expansion then, let the solution XðsÞ equal a sum of simpler terms with unknown coefficients: 2s þ 14 K K ¼ 1 þ 2 ðs þ 1Þðs þ 5Þ s þ 1 s þ 5 To find the coefficients, we multiply through both sides by the factor in the denominator and let the value of s equal the root of that factor. Repeating this for each term allows us to find each coefficient Ki . The process is given below for finding K1 and K2 : To solve for K1 : [multiply through by (s þ 1)] 2s þ 14 K1 ðs þ 1Þ K2 ðs þ 1Þ K ðs þ 1Þ ¼ þ ¼ K1 þ 2 ðs þ 5Þ sþ1 sþ5 sþ5 Now let s ! 1 and we can find K1 :   2s þ 14 12 k2 ðs þ 1Þ ¼ K1 þ ¼ ¼ K1 ¼ 3 ðs þ 5Þ s¼1 4 s þ 5 s¼1 Repeat the process to find K2 [multiply through by (s þ 5)]: 2s þ 14 K1 ðs þ 5Þ K2 ðs þ 5Þ K1 ðs þ 5Þ ¼ þ ¼ þ K2 ðs þ 1Þ sþ1 sþ5 sþ1   2s þ 14 4 K1 ðs þ 5Þ ¼ K ¼ þ ¼ K2 ¼ 1 2 ðs þ 1Þ s¼5 4 s þ 1 s¼5 The result of our partial fraction expansion becomes: XðsÞ ¼

2s þ 14 3 1 ¼  ðs þ 1Þðs þ 5Þ s þ 1 s þ 5

Now the inverse Laplace transform is straightforward using the table and results in the time response of


Chapter 3

x(t) = 3 e^{-t} - e^{-5t}

An alternative method, preferred by some, is to expand both sides and equate the coefficients of like powers of s to solve for the coefficients. In some cases this leads to solving sets of simultaneous equations, although this is generally an easy task. To quickly illustrate this method, let us begin with the same expansion:

(2s + 14)/[(s + 1)(s + 5)] = K1/(s + 1) + K2/(s + 5)

Now when we cross-multiply to remove the terms in the denominator, we can collect coefficients of the different powers of s to generate our equations:

2s + 14 = K1 (s + 5) + K2 (s + 1)

(K1 + K2 - 2) s + (5 K1 + K2 - 14) = 0

Our two equations (with the two unknowns, K1 and K2) now become:

K1 + K2 = 2   and   5 K1 + K2 = 14

Subtracting the top from the bottom results in 4 K1 = 12, or K1 = 3

Substituting K1 back into either equation, we get K2 = -1, exactly the same as before. Once we have found K1 and K2, the procedure to take the inverse Laplace transform is identical and results in the same time solution to the original differential equation. Which method to use largely depends on which we are most comfortable with. Finally, it is quite simple to do the partial fraction expansion with a computer package like Matlab. Taking our transfer function from above, we can use the residue command to get the partial fractions. The solution using Matlab is as follows. Transfer function:

X(s) = (2s + 14)/(s^2 + 6s + 5) = (2s + 14)/[(s + 1)(s + 5)]

Matlab command:

>> [R, P, K] = residue([2 14], [1 6 5])

and the output:

R =
   -1
    3

P =
   -5
   -1

K =
   []

The results are interpreted as follows: R contains the numerator coefficients (residues), P contains the poles of the denominator factors (s + p), and K, if necessary, contains the direct terms. Pairing R and P as before means we have -1 divided by (s + 5) and 3 divided by (s + 1); this is exactly the result we derived earlier:

X(s) = (2s + 14)/[(s + 1)(s + 5)] = 3/(s + 1) - 1/(s + 5)
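The cover-up evaluations used above are easy to script for distinct real poles. The sketch below is a minimal plain-Python illustration (the helper name `coverup_residues` is my own, not from the text or from Matlab) that reproduces the residues of X(s) = (2s + 14)/((s + 1)(s + 5)):

```python
def coverup_residues(num, poles):
    """Residues of num(s) / prod_i (s - p_i) for distinct poles:
    cover up the factor (s - p), then evaluate the rest at s = p."""
    residues = []
    for p in poles:
        denom = 1.0
        for q in poles:
            if q != p:
                denom *= (p - q)
        residues.append(num(p) / denom)
    return residues

# X(s) = (2s + 14) / ((s + 1)(s + 5)): poles at s = -1 and s = -5
K = coverup_residues(lambda s: 2 * s + 14, [-1.0, -5.0])
print(K)  # [3.0, -1.0]  ->  X(s) = 3/(s + 1) - 1/(s + 5)
```

Each residue is just the left-hand side with its own factor "covered up," evaluated at that factor's root, which is the same arithmetic done by hand above.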


The same command can be used for the cases presented in the following sections.

3.4.2.2 Partial Fraction Expansion: Repeated Roots

To look at the second case we determine the response of a differential equation to an input (the nonhomogeneous case), assuming the initial conditions are zero. We will take the simple first-order system found when modeling the RC electrical circuit and subject the system to a unit ramp input. In general terms our system can be described by the following differential equation:

0.2 dV/dt + V = ramp input

Take the Laplace transform and solve for the output when the input is a unit ramp, 1/s^2 (initial conditions are zero):

(0.2 s + 1) V(s) = 1/s^2

The output becomes:

V(s) = 1/[s^2 (0.2 s + 1)] = 5/[s^2 (s + 5)]

With repeated roots, the partial fraction expansion terms must include all lower powers of the repeated factor. In this case, then, the coefficients and terms are written as

5/[s^2 (s + 5)] = K1/(s + 5) + K2/s^2 + K3/s

To solve for K1 we can multiply through by (s + 5) and set s = -5:

[5/s^2] at s = -5 = K1 + [K2 (s + 5)/s^2 + K3 (s + 5)/s] at s = -5

K1 = 5/(-5)^2 = 1/5

To solve for K2 we multiply through by s^2 and set s = 0:

[5/(s + 5)] at s = 0 = K2 + [K1 s^2/(s + 5) + K3 s] at s = 0

K2 = 5/5 = 1

With the lower power of the repeated root we now have a problem if we continue with the same procedure: if we multiply both sides by s and let s = 0, the K2 term becomes infinite (division by zero) because an s term is left in its denominator. To solve for K3, then, it becomes necessary to take the derivative of both sides with respect to s and then let s = 0. Cross-multiply by s^2 (s + 5) first to simplify the derivative:


5 = K1 s^2 + K2 (s + 5) + K3 s (s + 5)

Take the derivative with respect to s:

0 = 2 K1 s + K2 + K3 s + K3 (s + 5)

Now we can set s = 0 and solve for K3, the remaining coefficient:

0 = K2 + 5 K3    (with K2 = 1)

K3 = -1/5

Using the coefficients allows us to write the response as the sum of three easier transforms:

V(s) = (1/5) · 1/(s + 5) + 1/s^2 - (1/5) · 1/s

And finally, we take the inverse Laplace transform of each term to obtain the time response:

V(t) = (1/5) e^{-5t} + t - 1/5

As in the previous example, we can write and solve simultaneous equations instead of using the method shown above. For this example that means three equations for the three coefficients. If we multiply through by the denominator of the left-hand side (as we did before taking the derivative), we get the partial fraction expansion expressed as

5 = K1 s^2 + K2 (s + 5) + K3 s (s + 5)

Now collect the coefficients of the powers of s to obtain the three equations:

(K1 + K3) s^2 + (K2 + 5 K3) s + (5 K2 - 5) = 0

The three equations (and three unknowns) are

K1 + K3 = 0
K2 + 5 K3 = 0
K2 = 1

Once again we get the same values for the coefficients, and the inverse Laplace transforms result in the same time response. For larger systems it is easy to write the equations in matrix form and solve for the coefficients, as illustrated below:

[ 1  0  1 ] [K1]   [0]              [K1]   [ 1/5 ]
[ 0  1  5 ] [K2] = [0]    so that   [K2] = [  1  ]
[ 0  5  0 ] [K3]   [5]              [K3]   [-1/5 ]

When written in matrix form, many software packages and calculators are available for inverting the matrix and solving for the coefficients.
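The matrix step just described can be scripted in a few lines. The sketch below is a plain-Python illustration (the small Gauss-Jordan routine and its name are my own, not a particular library call) that solves the three coefficient equations for this repeated-root example:

```python
def solve3(a, b):
    """Solve a 3x3 system a x = b by Gauss-Jordan elimination
    with partial pivoting; a is a list of rows, b the right side."""
    m = [row[:] + [rhs] for row, rhs in zip(a, b)]  # augmented matrix
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [m[i][3] / m[i][i] for i in range(3)]

# Coefficient equations for 5 / (s^2 (s + 5)):
#   K1 + K3 = 0,   K2 + 5 K3 = 0,   5 K2 = 5
A = [[1.0, 0.0, 1.0],
     [0.0, 1.0, 5.0],
     [0.0, 5.0, 0.0]]
K1, K2, K3 = solve3(A, [0.0, 0.0, 5.0])
print(K1, K2, K3)  # 0.2 1.0 -0.2  (i.e., 1/5, 1, -1/5)
```

The same routine extends in the obvious way to larger systems, which is where the matrix formulation earns its keep.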


3.4.2.3 Partial Fraction Expansion: Complex Conjugate Roots

To conclude the examples illustrating the use of partial fraction expansion when solving differential equations, we look at the case where we have complex conjugate roots. Any system of at least second order may produce complex conjugate roots; it is common in both RLC electrical circuits and m-b-k mechanical systems to have complex roots when solving the differential equation for the time response. For the example here we will work from the s-domain solution given below, a common form of a second-order system (RLC, m-b-k, etc.) responding to a unit step input:

Y(s) = 1/[s (s^2 + s + 1)]

The system has three roots:

s = 0,   s = -1/2 ± j sqrt(3)/2

For second-order terms in the denominator that factor into complex conjugate roots, we can write the partial fraction expansion where the numerator of the term containing the complex roots has two coefficients, one multiplying s, as shown below:

Y(s) = 1/[s (s^2 + s + 1)] = K1/s + (K2 s + K3)/(s^2 + s + 1)

To solve for the coefficients in this example we multiply through by s (s^2 + s + 1) and group the coefficients to form the three equations:

1 = K1 (s^2 + s + 1) + (K2 s + K3) s

(K1 + K2) s^2 + (K1 + K3) s + (K1 - 1) = 0

Now it is easy to solve for the coefficients:

K1 - 1 = 0  →  K1 = 1
K1 + K2 = 0  →  K2 = -1
K1 + K3 = 0  →  K3 = -1

The response can now be written as

Y(s) = 1/s - (s + 1)/(s^2 + s + 1)

When we look at the transform table we find that we are close but not quite there yet. We need one more step to put the result in a form matching the transforms in the table. Knowing the real and imaginary portions of our roots, we can write the second-order denominator as

(s + |Real|)^2 + |Imag|^2 = (s + 1/2)^2 + (sqrt(3)/2)^2 = s^2 + s + 1 = (s + a)^2 + b^2

Now we have two identities from the table that we can use:

L^{-1}{ b / [(s + a)^2 + b^2] } = e^{-at} sin(bt)

L^{-1}{ (s + a) / [(s + a)^2 + b^2] } = e^{-at} cos(bt)

With one last step we have the form we need to perform the inverse Laplace transform. Take the second-order term and break it into two terms in the form of the two Laplace transform identities given above:

Y(s) = 1/s - (s + 1/2)/[(s + 1/2)^2 + (sqrt(3)/2)^2] - (1/sqrt(3)) · (sqrt(3)/2)/[(s + 1/2)^2 + (sqrt(3)/2)^2]

Finally, the time response:

y(t) = 1 - e^{-t/2} cos(sqrt(3) t/2) - (1/sqrt(3)) e^{-t/2} sin(sqrt(3) t/2)

There are several things to be learned from this example. First, the method provides a way of obtaining the time response of systems containing complex conjugate roots. The method becomes more important when different inputs are combined and the standard step input is not the only one present. This leads to the second point. The example used here falls into a very common class, one already examined at some length: the step response of a second-order system. If the goal of this section were simply to obtain the time response (without teaching a method applicable to more general systems), all we would have to calculate is the natural frequency and the damping ratio of the system and we would know the time response. Again, this is true in this case because the input is a step function, so we can compare this response to a standard response and determine the generalized parameters.

Y(s) = 1/[s (s^2 + s + 1)]

is the same as

Y(s) = ωn^2 / [s (s^2 + 2 ζ ωn s + ωn^2)]

where

ωn = 1 rad/sec,   ζ = 1/2,   and   ωd = ωn sqrt(1 - ζ^2) = sqrt(3)/2
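These parameter relationships are easy to verify numerically. The sketch below (plain Python; the function name is mine, not from the text) recovers ωn, ζ, and ωd directly from the roots of the characteristic equation s^2 + s + 1 = 0:

```python
import cmath


def second_order_params(a1, a0):
    """From the CE s^2 + a1 s + a0 = 0 with complex conjugate roots,
    recover natural frequency, damping ratio, and damped frequency."""
    r = (-a1 + cmath.sqrt(a1 * a1 - 4.0 * a0)) / 2.0  # upper-half-plane root
    wn = abs(r)            # radius (distance) to the pole
    zeta = -r.real / wn    # cosine of the angle from the negative real axis
    wd = r.imag            # damped natural frequency
    return wn, zeta, wd


wn, zeta, wd = second_order_params(1.0, 1.0)  # CE: s^2 + s + 1 = 0
print(round(wn, 3), round(zeta, 3), round(wd, 3))  # 1.0 0.5 0.866
```

The printed values match the ωn = 1, ζ = 1/2, ωd = sqrt(3)/2 found above, and the same function works for any underdamped second-order characteristic equation.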

With the natural frequency and damping ratio known, the response of a second-order system to a unit step input is (from Table 1)

y(t) = 1 - [e^{-ζ ωn t} / sqrt(1 - ζ^2)] sin(ωd t + φ),   ωd = ωn sqrt(1 - ζ^2),   φ = tan^{-1}[sqrt(1 - ζ^2) / ζ]

This is the same response obtained using partial fraction expansion, where the sine and cosine terms have been combined into a single sine term with a phase angle. One of the more important connections to make at this point is that we actually knew this was the response from the very time we started the example, once we calculated the roots of the second-order denominator: the real portion of the roots equals -ζ ωn and the imaginary portion of the roots is the damped natural frequency ωd. As we see in Section 3.4.3, this forms the foundation of using the s-plane to determine a system's response in the time domain.

EXAMPLE 3.7

To conclude this section on Laplace transforms, let us once again use the mass-spring-damper equation and now solve it using Laplace transforms. Remember that the original differential equation, developed using several methods, is

f(t) = m x'' + b x' + k x

Then taking the Laplace transform with zero initial conditions:

F(s) = m s^2 X(s) + b s X(s) + k X(s) = Input

X(s) can easily be solved for since all the derivative terms have been "removed" during the transform. Solving for X(s) results in

Output = X(s) = [1/(m s^2 + b s + k)] · Input

If the input is a unit step, 1/s, then the output is given by

X(s) = [1/(m s^2 + b s + k)] · (1/s)

If we divide top and bottom by m we see a familiar result:

X(s) = (1/m) / [s^2 + (b/m) s + (k/m)] · (1/s)

Aside from a scaling factor of 1/k, this is one of the entries in the table, with ωn^2 = k/m and 2 ζ ωn = b/m. If the system is overdamped we have two real roots from the second-order polynomial in the denominator and the system can be solved as the sum of two first-order systems. When we have critical damping we have two repeated real roots, and again the solution was already discussed in Section 3.3.1.2. Finally, if the system is underdamped we get complex conjugate roots and the system exhibits overshoot and oscillation. Whenever first- or second-order systems are examined with respect to step inputs, we can use the generalized responses developed in Section 3.3.2. If different input functions are used, then the partial fraction expansion tools still allow us to develop the time response of the system.

3.4.3 Transfer Functions and Block Diagrams

Although the previous section took the inverse Laplace transform to obtain the time response for the original differential equation, that step is often not required. Transfer functions and block diagrams can be developed by taking the Laplace transform of the differential equation representing the physical system and using the result directly. Using the algebraic function in the s-domain to represent physical systems is very common, and many computer programs can directly simulate the system response from this notation. This section will also begin to introduce computer solution methods, now that we are familiar with the analytical background and how to represent physical systems in the s-domain.


The most common format used when designing control systems is block diagrams built from interconnected transfer functions. In our brief introduction to block diagrams, we learned simple reduction techniques and saw that a block diagram simply represents some equation pictorially. Section 2.3.2 presented some of the basic properties and reduction steps. The goal in this section is to learn what the actual blocks represent and how to develop them. Block diagrams consist of lines, representing variables, that connect blocks containing transfer functions. A transfer function is simply a relationship between the output variable and the input variable represented in the s-domain:

Transfer function = (Laplace transform of output) / (Laplace transform of input)

The general notation is

Transfer function = G(s) = C(s)/R(s)

where R(s) is the input and C(s) is the output. We use Laplace transforms to convert differential equations into transfer functions representing the output-to-input relationships. Since transfer functions only relate the output to the input, we do not include initial conditions when taking the Laplace transforms. Several common examples are given below to illustrate the procedure of converting differential equations to transfer functions. First, let us develop the transfer function for the mass-spring-damper system whose differential equation has already been derived.

EXAMPLE 3.8

Taking the Laplace transform of this ODE leads to a transfer function and block as shown:

m s^2 X(s) + b s X(s) + k X(s) = Input = R(s)

X(s)/R(s) = 1/(m s^2 + b s + k)

With a uniform set of units, R(s) is a force input, C(s) is a position output, and the coefficients m, b, and k must each be consistent with R(s) and C(s). Each s is associated with units of 1/sec.

EXAMPLE 3.9

Another example for which we have already derived the differential equation is the first-order RC circuit. Taking the differential equation and following the same procedure:


RC dc/dt + c = r(t)

(RC s + 1) C(s) = R(s)

C(s)/R(s) = 1/(RC s + 1)

Now the units of both R(s) and C(s) are volts, and their relationship to each other is defined by the transfer function in the block. Once we know the input R(s) we can develop the output C(s). If we pictorially represent the input as a unit step change in voltage, then the expected output voltage is a first-order step response, as shown in Figure 7. So now we are getting to the point where, as alluded to in the previous section, we are able to look at the form of a transfer function and quickly and accurately predict the type of response it will have for a variety of inputs.

3.4.3.1 Characteristics of Transfer Functions

Several notes at this point about the Laplace transform and corresponding transfer function will help us understand future sections on designing control systems. The denominator of the transfer function G(s) is usually a polynomial in s whose highest power gives the order of the system. Hence, the mass-spring-damper system is a second-order system and has the characteristic equation (CE) m s^2 + b s + k = 0. The roots of the CE directly relate to the type of response the system exhibits; looking in the Laplace transform tables clarifies this further. For example, the first-order system transfer function can be written as a/(s + a), which corresponds to the time response e^{-at}. The root s = -a of the characteristic equation relates directly to the rate of rise or decay in the system and is thus related to the system time constant, τ, where τ = 1/a. The same relationship between roots and system response is found in Table 1 for second-order systems like the mass-spring-damper system. If the roots of a second-order CE are both negative and real, the system behaves like two first-order systems in series. If the roots have imaginary components, they are complex conjugates (according to the quadratic formula) and the system is underdamped and will experience some oscillation. If the real portion of the roots is ever positive, the system is unstable, since the time response now includes a factor e^{+at} and thus experiences exponential growth (until something breaks). These relationships were formed while presenting partial fraction expansions and form the foundation for the root locus technique presented later.
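A minimal sketch of this root-based classification in Python (the function name and the specific m-b-k values are illustrative only; m, k > 0 is assumed, and the marginal b = 0 case is lumped in with unstable for brevity):

```python
def classify_second_order(m, b, k):
    """Classify the response character of the CE m s^2 + b s + k = 0,
    assuming m > 0 and k > 0."""
    if b <= 0.0:
        return "unstable"           # root(s) with nonnegative real part
    disc = b * b - 4.0 * m * k
    if disc > 0.0:
        return "overdamped"         # two distinct negative real roots
    if disc == 0.0:
        return "critically damped"  # repeated negative real root
    return "underdamped"            # complex conjugate pair: oscillation


print(classify_second_order(1.0, 6.0, 5.0))  # overdamped (roots -1, -5)
print(classify_second_order(1.0, 1.0, 1.0))  # underdamped
```

The discriminant b^2 - 4mk plays exactly the role described in the text: its sign decides whether the quadratic formula yields real or complex conjugate roots.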

Figure 7   Example: RC circuit transfer function response.


The roots of the characteristic equation are often plotted in the s-plane. The s-plane is simply an x-y plotting space whose axes represent the real and imaginary components of the roots of the characteristic equation, as shown in Figure 8. The parameters used to describe first- and second-order systems are all graphically present in the s-plane. The time constant for a first-order system (and the decay rate for a second-order system) relates to the position on the real axis. The imaginary axis represents the damped natural frequency, the radius (distance) to the complex pole is the natural frequency, and the cosine of the angle between the negative real axis and the radial line drawn to the complex pole is the damping ratio. Thus, the s-plane is a quick method of visually representing the response of our dynamic system. Since anything with a positive real exponent will exhibit exponential growth, the unstable region is the area to the right of the imaginary axis, commonly referred to as the right-half plane (RHP). In the same way, if all poles are to the left of the imaginary axis, the system is globally stable, since every pole contributes a term that decays exponentially and multiplies the total response. (Thus, when the decaying term approaches zero, so does the total response.) This side is commonly termed the left-half plane, or LHP. The further to the left the poles are in the plane, the faster their terms decay to a steady-state value, a property well worth knowing when designing controllers. Figure 9 illustrates the types of response depending on pole locations in the s-plane. There are two more useful theorems for analyzing control systems represented with block diagrams: the initial value theorem and the final value theorem. These theorems relate the s-domain transfer function to the time domain without having to first take the inverse Laplace transform.

Initial value theorem (IVT): f(0+) = lim_{s→∞} s F(s)

Final value theorem (FVT): lim_{t→∞} f(t) = lim_{s→0} s F(s)

Figure 8   Plane notation and root location.

Figure 9   Response type based on s-plane pole locations.

In particular, the FVT is extremely useful and frequently used to determine steady-state errors for various controllers. Simply stated, the final output value as time continues is equal to the output in the s-domain times s as s approaches zero in the limit. In almost every case you can determine the steady-state output of a system by multiplying the transfer function (TF) by s and the input (in terms of s) and setting s to zero. The resulting value is the steady-state value that the system will reach in the time domain. For step inputs this becomes very easy, since the s in the theorem cancels the 1/s representing the step input. Thus, for a unit step input the final value is simply the value of the TF as s → 0. With the tools described up to this point we can now build the block diagram, determine the content of each block, close the loop (as our controller ultimately will), and reduce the block diagram to a single block to easily determine the closed-loop dynamic and steady-state performance. To work through the application of the FVT, let's solve for the steady-state output using the two examples discussed in previous sections, the RC circuit and the m-b-k mechanical system.

EXAMPLE 3.10

We will take the transfer function and block diagram for the RC circuit, but now with a step input of 10 V in magnitude. This is the equivalent of closing a switch at t = 0 and measuring the voltage across the capacitor. The transfer function is the same as before. The Laplace representation of the step input:

Step input with a magnitude of 10  →  R(s) = 10 · (1/s) = 10/s

Apply the final value theorem to the output, C(s):


C_steady-state = lim_{t→∞} c(t) = lim_{s→0} s C(s) = lim_{s→0} s · [1/(RC s + 1)] · (10/s) = 10

Although there are no surprises here, the concept is clear and the FVT is easy to use and apply when working with block diagrams. We finish our discussion of the FVT by applying it to the mass-spring-damper system developed earlier.

EXAMPLE 3.11

Taking the Laplace transform of the m-b-k system differential equation resulted in the transfer function shown previously.

Step input (force) with a magnitude of F  →  R(s) = F · (1/s) = F/s

Apply the final value theorem to the output, C(s):

C_steady-state = lim_{t→∞} c(t) = lim_{s→0} s C(s) = lim_{s→0} s · [1/(m s^2 + b s + k)] · (F/s) = F/k
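Both final-value computations can be sanity-checked numerically by evaluating s·G(s)·R(s) at a small positive s, since the analytic limit exists for these stable systems. A short Python sketch (the helper name and the numeric values of RC, m, b, k, and F are made up for illustration):

```python
def steady_state(G, R, s=1e-6):
    """Approximate the FVT limit lim_{s->0} s G(s) R(s) by
    evaluating at a small positive s (valid when the limit exists)."""
    return s * G(s) * R(s)


# RC circuit with a 10 V step: G(s) = 1/(RC s + 1), R(s) = 10/s
RC = 0.2
print(round(steady_state(lambda s: 1.0 / (RC * s + 1.0),
                         lambda s: 10.0 / s), 3))   # 10.0

# m-b-k system with a step force of magnitude F: final value F/k
m, b, k, F = 1.0, 2.0, 4.0, 8.0
print(round(steady_state(lambda s: 1.0 / (m * s * s + b * s + k),
                         lambda s: F / s), 3))      # 2.0
```

The second result is F/k = 8/4 = 2, the spring-constant relationship derived above.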

This simply tells us what we could have ascertained from the model: after all the transients decay away, the final displacement of the mass is the steady-state force divided by the spring constant, where the force magnitude is F. This agrees with the steady-state value determined from the differential equations in previous sections. While these are simple examples to illustrate the procedure, the method remains extremely fast even when the block diagrams get large and more complex. The FVT is frequently used in determining the steady-state errors for closed-loop controllers.

3.4.3.2 Common Transfer Functions Found in Block Diagrams

Finally, let us examine several common block diagram transfer functions. Several blocks are often found in block diagrams representing control systems, some of which tend to confuse beginners. Each block described here may be ‘‘repeated’’ throughout the block diagram, each time representing a different component and with different physical units. The goal here is not to list all the possible applications of each block but instead to have us recognize the basic common forms found in all different kinds of systems (electrical, mechanical, etc.). For example, if we understand a first-order lag term, we will understand its input/output relationship whether it represents an RC circuit, shaft speed with rotary inertia, or phase-lag electronic controller. The other note to make here is that all systems can be reduced into combinations of these blocks. If we have a fifth-order polynomial (characteristic equation) in the denominator of our transfer function, we have several combinations possible when it is factored: five real roots corresponding to five first-order terms, three real roots and one complex conjugate pair corresponding to three first-order terms and one second-order oscillatory term, or one real root and two complex conjugate pairs corresponding to one first-order term and two oscillatory terms.


So no matter how complex our system becomes, it is easily described as a combination of the transfer function building blocks described below. Gain factor, K:

The gain, K, is a basic block and may represent many different functions in a controller. This block may multiply R(s) by K without changing the variable type (e.g., a proportional controller multiplying the error signal) or represent an amplifier in the system that associates different units with the input and output variables. An example is a hydraulic valve converting an electrical input (volts or amps) into a corresponding physical output (pressure or flow); the valve coefficients resulting from linearizing the directional control valve are used in this manner. Therefore, when using this block be sure to recognize what the units are supposed to be and which units the gain was actually determined with. There are no dynamics associated with the gain block; the output is always, and without any lead or lag, a multiple K of the input.

Integral:

This block represents a time integral of the input variable, where c(t) = ∫ r(t) dt. Two common uses include integrating the error signal to achieve the integral term in a proportional-integral-derivative (PID) controller and integrating a physical system variable, such as velocity into position. In terms of units, then, it multiplies the input variable by seconds: if the input is an angular acceleration in rad/sec^2, the output is an angular velocity in rad/sec. Since most physical systems are integrators (remember the physical system relationships from Chapter 2), this is a common block. One special comment is appropriate here. The integral block is not to be confused with step inputs, even though both are represented by 1/s. The block contains a transfer function that is simply the ratio of the output to the input. Thus it is possible to have an integral block with a step input, in which case the output would be represented by

C(s) = G(s) R(s) = (1/s)(1/s) = 1/s^2 = ramp output (from the tables)

This concept is sometimes confusing when first learning block diagrams and s-domain transforms, since one 1/s term is the system model and the other 1/s term is the input to the system.

Derivative:

This block represents a derivative function where the output is the derivative of the input. A common use is in the derivative term of a PID controller block. Use of the block requires caution, since it easily amplifies noise and tends to saturate outputs. Note the pattern: the integral and derivative blocks are inverses of each other and, if connected in series in a block diagram, would cancel. The units associated with the derivative block are 1/sec, the inverse of the integral block.

First-order system (lag):

This block is commonly used in building block diagrams representing physical systems. It might represent an RC circuit as already seen, a thermal system, liquid level system, or rotary speed inertial system. The input-output relationship for a firstorder system in the time domain has already been discussed in Section 3.3.2. Based on the time constant, t, we should feel comfortable characterizing the output from this system. In the next section when we examine frequency domain techniques, we will see that the output generally lags the input (except at very low frequencies) and hence this transfer function is often called a first-order lag. First-order system (lead):

This block is found in several controllers and some systems. It is similar to the first-order lag except that now the output leads the input. Most physical systems do not have this characteristic, as real systems usually exhibit lag, as found in the previous block. The similarities and differences will become clearer when these blocks are examined in the frequency domain.

Second-order system:

In addition to "true" second-order systems like a mass-spring-damper configuration or an RLC circuit, a second-order system block is commonly used to approximate higher-order systems. As later sections show, a system that has dominant complex conjugate poles can be accurately modeled by a second-order system. If a system is expressed in this form, we generally assume that it is underdamped and thus exhibits overshoot and oscillation. If it is overdamped we can just as easily treat it as two first-order systems. As with the first-order model, it is possible to see this form appearing in the numerator of the transfer function. It is unlikely to get this term from modeling the physics of a system; it more frequently appears as part of a controller. The common PID controller introduces a second-order term into the numerator of our system.


EXAMPLE 3.12

To summarize many of the concepts presented thus far, let us take a model of a physical system, develop the differential equation describing the physics of the system, convert it to a transfer function, and plot the time response when the system is subjected to a step input. The system we will examine is a simple rotary group with inertia J and damping B; a torque, T, as shown in Figure 10, acts on the system. To derive the differential equation, we sum the torques acting on the system and set them equal to the inertia multiplied by the angular acceleration:

Σ T = J dω/dt = T - B ω

Now we can take the Laplace transform (ignoring initial conditions) and solve for the output, ω:

J dω/dt + B ω = T  →  (J s + B) ω(s) = T(s)  →  ω(s) = [1/(J s + B)] T(s)

Solve for the transfer function and write it in generalized terms:

ω(s)/T(s) = 1/(J s + B) = (1/B) · 1/[(J/B) s + 1] = (1/B) · 1/(τ s + 1)

This results in a first-order system with time constant τ = J/B and a scaling factor of 1/B, allowing us to quickly write the unit step time response as

ω(t) = (1/B)(1 - e^{-t/τ}) = (1/B)(1 - e^{-(B/J) t})

Finally, we can plot the generalized response, including the scaling factor, as shown in Figure 11. So without needing to perform an inverse Laplace transform, we have analyzed the rotary system, developed the transfer function, and plotted the time response to a step input. Since separate responses can be added when dealing with linear systems, even complex systems are easily analyzed with the skills shown thus far. Complex systems always factor into a series of simple subsystems (as outlined above) whose individual responses add to form the total response.
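The first-order step response above is easy to evaluate numerically. A short Python sketch (the J and B values are illustrative only) confirms the familiar time-constant property that the response reaches about 63.2% of its final value 1/B at t = τ:

```python
import math

# First-order rotary system: J dw/dt + B w = T, unit step torque T = 1.
# Step response with zero initial conditions: w(t) = (1/B)(1 - e^{-t/tau}),
# where tau = J/B.  The J and B values here are illustrative only.
J, B = 0.5, 2.0
tau = J / B


def w(t):
    """Angular velocity at time t for the unit step torque input."""
    return (1.0 / B) * (1.0 - math.exp(-t / tau))


# Fraction of the final value 1/B reached at t = tau and t = 5 tau:
print(round(w(tau) * B, 3))        # 0.632
print(round(w(5.0 * tau) * B, 3))  # 0.993 (essentially settled by 5 tau)
```

The 63.2%-at-one-time-constant and settled-by-five-time-constants behavior holds for any first-order lag, regardless of the particular J and B.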

Figure 10   Example: analysis of rotary system.


Figure 11   Example: time response of first-order rotary system.

3.5 FREQUENCY DOMAIN

Analysis methods in the frequency domain complement the methods already studied and allow us to model, design, and predict control system behavior. Once we have obtained transfer functions in the s-domain, it is straightforward to develop a frequency response curve, or Bode plot as it is commonly called. Another advantage of Bode plots appears when the process is reversed and a transfer function is developed from the plot: whereas it is difficult to develop a model greater than second order from a step response, higher-order models are quite simple to obtain from Bode plots. The relationship of Bode plots to transfer functions becomes evident as we describe the information contained in them. A transfer function, as defined earlier, is simply the ratio of a system's output to the system's input. When we construct a Bode plot, we input a known signal and measure the resulting output after the transients have decayed. The steady-state magnitude of the output, relative to the input magnitude, and the phase angle between the output and input are plotted as functions of the frequency of a constant-amplitude sinusoidal input. As we found earlier, if we have a transfer function and we know the input to the system it represents, then we can solve for the system output. Applying a sinusoidal input to the system and recording the magnitude and phase relationship of the response is just a specific case of the same procedure. To more fully illustrate the concept of working in the frequency domain using Bode plots, let us work through how a Bode plot is constructed. The input to the system, in general form, is given by

x(t) = X sin(ωt)

with the resulting steady-state output

y(t) = Y sin(ωt + φ)

If we wait until all the transients have decayed, then the output of the system will exhibit the same frequency as the input but differ in magnitude and phase angle.
As the input frequency (and thus the output frequency) is changed, the relationships between the input/output magnitudes and the phase angle also change. We can show this more clearly by remembering that a transfer function G(s) is the ratio of the output to the input. For now let us use G(s) to describe the output/input relationship between Y(s) and X(s):

Y(s) = G(s) X(s)


Then X and Y are related in magnitude by |G(s)| and in phase by f ¼ ffGðsÞ ¼ tan1 [Imag/real] Since s in the Laplace domain is a complex variable, the magnitude and phase relationship can be more clearly shown graphically using phasors as in Figure 12. Since the multiplication of a phasor by imaginary number j corresponds to a counterclockwise rotation of 90 degrees, we see that j  j ¼ 1, the identity we are already familiar with. The first imaginary j is a vertical line of magnitude 1, rotated 90 degrees by multiplication with the other j means it still has a magnitude of 1 but now is on the negative real axis, hence equal to –1. When we construct Bode plots we let s ¼ jo since the entire real term, s, present in the complex variable s, has decayed (i.e., steady state). Inserting j o into the transfer functions then allows us to construct Bode plots of magnitude and phase as o is increased. Fortunately, the common ‘‘factors’’ representing real physical models are quite simple, and it is seldom necessary to worry about phasors, as we see in the next section. The key points to remember from this discussion are that the real portions of the s terms decay out (they are the coefficient on the decaying exponential term) and thus each s term, represented now as an imaginary term s ¼ jo, introduces 90 degrees of phase between the input and the output. With this simple understanding we are ready to relate the common transfer functions examined earlier to the equivalent Bode plots in the frequency domain. 3.5.1

3.5.1   Bode Plots

3.5.1.1   Common Bode Plot Factors

Bode plots can be constructed from tests performed on the physical system or from block diagrams with transfer functions. Bode plots typically consist of two plots, magnitude (decibels, dB) and phase angle (degrees), plotted versus the log of the input frequency (rad/sec or Hz). A sample plot describing the common layout is given in Figure 13. Since the upper plot's magnitude is plotted in decibels, the magnitude is a log versus log scale as far as the original data are concerned. It helps to remember this since the linear y-axis scale (when plotted in dB) may be misleading as to the actual output/input magnitude ratio. Also, an output-to-input ratio of unity, when converted to dB, will be plotted as zero on the y axis. This makes it easy to describe the relative input and output magnitudes, since a positive dB value means that the output signal is greater in magnitude than the input and a negative dB value implies that the output signal is of a lesser magnitude than the input signal. The phase plot, commonly the lower trace, uses a linear y axis to plot the actual angle versus the log of the frequency. The magnitude and phase plots share the same frequency axis since the data are generated (analytically or experimentally) together. A positive phase angle means that the output signal leads the input signal, and vice versa for a negative phase angle, commonly termed lag. This will become clearer as example plots are generated.

Figure 12   Phasor notation.

Figure 13   Typical layout of Bode plot.

To begin the process of constructing and using Bode plots, we start with our existing block diagram and transfer function knowledge and extend it into the frequency domain. The advantages will be evident once we understand the process. As we have seen thus far, most physical systems can be factored into subsystems, or factors. Common blocks (gain, integral, first order, and second order) were presented in the previous section when discussing block diagrams. These same transfer functions (blocks) are the building blocks for constructing Bode plots. Now for the advantage: the blocks multiply when connected in series (as when factored and in block diagrams) but they are plotted on a logarithmic scale when using a Bode plot. Multiplication becomes addition when using logarithms!

log(G₁G₂G₃) = log(G₁) + log(G₂) + log(G₃)

Constructing a Bode plot is as simple as constructing an individual plot for each factor (block) and adding the plots together. Thus, the entire block diagram Bode plot can be constructed by adding the plots of the individual blocks (loops must first be closed). One note is in order: this requires that we are working with linear or linearized systems, as have most techniques presented thus far. The process can also be reversed to determine the original factors used to construct the Bode plot.
This provides a powerful system identification tool to those who understand Bode plots. Let us move ahead and define the common factors, progressing in the same order as in Section 3.4.3.2.
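The additivity of factor plots is just the log identity above at work. A quick check in Python (the specific gain, integrator, and evaluation frequency are arbitrary choices for illustration):

```python
import math

def db(x):
    """Convert a linear magnitude ratio to decibels."""
    return 20 * math.log10(abs(x))

s = 2j            # evaluate at omega = 2 rad/sec
g1 = 10           # gain factor, K = 10
g2 = 1 / s        # integrator factor, 1/s

series_db = db(g1 * g2)          # dB of the series (multiplied) factors
sum_db = db(g1) + db(g2)         # sum of the individual dB values
print(series_db, sum_db)         # the two agree
```

This is why the final magnitude (and phase) plot of a series connection is simply the sum of the individual factor plots.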

Analysis Methods for Dynamic Systems

109

Gain factor, K:

A transfer function representing a gain block produces a Bode plot where the magnitude represents the gain, K, and the phase angle is always zero, as shown using the equations for magnitude and phase angle given below.

Mag_dB = 20 log K = −20 log(1/K)

Phase = φ = tan⁻¹(Im/Re) = tan⁻¹(0) = 0 degrees

The phase angle is always zero for a gain factor K since no imaginary term is present and the ratio of the imaginary to the real component is always zero. Figure 14 gives the Bode plot representing the gain factor K. Since individual effects add, varying the gain K in a system only affects the vertical position of the magnitude plot for the total system response. A different value of K does not change anything on the phase angle plot, as the phase angle contribution is always zero. The example plot given in Figure 14 illustrates this graphically. When K represents the proportional gain in a controller, we can define stability parameters that make it easy to find, given a Bode plot for the system, the proportional gain at which the system goes unstable. Using phasor notation, the gain is represented by a line on the positive real axis (zero phase angle) with a length (magnitude) of K.

Integral:

The integral block produces a line on the magnitude plot having a constant slope of −20 dB/decade along with a line on the phase angle plot at a constant −90 degrees. Remember that s is replaced by jω in the transfer function, and as ω is increased the magnitude will decrease. The slope tells us that the line ''falls'' 20 dB (y scale) for every decade on the x axis (log of frequency). A decade is between any multiple of 10

Figure 14   Bode plot factors—gain K.

(0.1 to 1, 1 to 10, 5 to 50, etc.). The slope of −20 comes from the equation used to calculate the dB magnitude of the output/input ratio.

Mag_dB = 20 log|1/(jω)| = −20 log|jω|

Phase = φ = −tan⁻¹(Im/Re) = −tan⁻¹(ω/0) = −90 degrees

The integrator Bode plot is shown in Figure 15. The line will cross the 0 dB line at ω = 1 rad/sec since the magnitude of 1/(jω) equals 1 there and the log of 1 is 0. Remember that this is the amount added to the total response for each integrating factor in our system. The phase angle contribution was explained earlier using phasors, where we saw that each s (or jω in our case) contributes 90 degrees of phase. That is why two imaginary numbers multiplied by each other equal −1 (a phasor of magnitude one along the negative real axis). When the imaginary number is in the denominator, the angle contribution becomes negative instead of positive (j · j = −1 is still true; it just rotates clockwise 90 degrees for each s instead of counterclockwise). Understanding this concept makes the remaining terms easy to describe.

Derivative:

A derivative block, s, will have a positive slope of +20 dB/decade and a constant +90 degrees phase angle. Since jω is now in the numerator, increasing the frequency increases the magnitude of the factor. As the derivative factor Bode plot in Figure 16 shows, the magnitude plot still crosses 0 dB at ω = 1 rad/sec because the magnitude of the factor is still equal to unity at that frequency. The magnitude and phase angle equations given for the integrator block are the same for the derivative block, the only exception being that the negative signs become positive [log(a) = −log(1/a)]. The same explanation as before also holds true regarding the phase angle, except that now the imaginary j is in the numerator and contributes a positive 90 degrees. In fact, we see that a factor in the numerator is just the horizontal mirror image of that same factor in the denominator. This property is true for all remaining factors; each magnitude and phase plot developed for a factor in the numerator, when flipped horizontally with respect to the zero value (dB or degrees), becomes the same plot as when the factor appears in the denominator. Thus when the same factor, appearing once in the numerator and once in the denominator, is added, the net result is a magnitude line at 0 dB and a phase angle line at 0 degrees. This relationship is also evident in the s-domain using transfer functions; multiplying an integrator block (1/s) times a derivative block (s) produces a value of unity, hence a value of 0 dB and 0 degrees. (Adding factors in the frequency domain is the same as multiplying factors in the s-domain.)

Figure 15   Bode plot factors—integrator.

Figure 16   Bode plot factors—derivative.

First-order system (lag):

Remembering once again that s is replaced by jω helps us to understand the plots for this factor. At low frequencies the magnitude of the jωτ term is very small compared to the 1, and the overall factor is close to unity. This produces a low frequency asymptote at 0 dB and a phase angle of zero degrees. As the frequency increases, the τs term in the denominator begins to dominate and the factor begins to look like an integrator with a slope of −20 dB/decade and a phase angle of −90 degrees. Plotting this on the logarithmic scale produces relatively straight line asymptotic segments, as shown in Figure 17. Therefore we commonly define a low and a high frequency asymptote, used as straight line Bode plot approximations. The break point occurs at ω = 1/τ since the contributions from the two terms in the denominator are equal there. The real magnitude curve is actually 3 dB down at this point, and at the points ω = 0.5/τ and ω = 2/τ (an octave of separation on each side) the actual curve is 1 dB beneath the asymptotes. To calculate the exact values, we can use the following magnitude and phase equations, similar to before:

Mag_dB = 20 log|1/(jωτ + 1)| = −20 log √((ωτ)² + 1)

Phase = φ = −tan⁻¹(ωτ)

Since the phase angle is negative and grows in magnitude as frequency increases, we call this a lag system. This means that as the input frequency is increased the output lags (follows) the input by an increasing number of degrees (time). The phase angle is approximated by a line beginning at zero degrees one decade before the breakpoint frequency (1/τ) and ending at −90 degrees one decade after the breakpoint frequency. Both the linear asymptotic line and the actual curve pass through −45 degrees at the breakpoint frequency, ω = 1/τ (φ = −tan⁻¹(1) = −45 degrees).

Figure 17   Bode plot factors—first-order lag.

First-order system (lead):

When the first-order factor is in the numerator, it adds positive phase angle and the output leads the input. The magnitude and angle plots are the mirror images of the first-order lag system, as the equations also reveal:

Mag_dB = 20 log|jωτ + 1| = 20 log √((ωτ)² + 1)

Phase = φ = tan⁻¹(ωτ)

The magnitude plot still has a low frequency asymptote at 0 dB but now increases at +20 dB/decade when the input frequency is beyond the break frequency. The phase angle begins at 0 and ends at +90 degrees, and the output now leads the input at higher frequencies. These characteristics are shown on the Bode plot for the first-order lead system in Figure 18. The same errors (opposite in sign) are found between the low and high frequency asymptotes as discussed for the first-order lag system, and the same explanations are valid. The first-order lag and lead Bode plots are frequently used elements when designing control systems, and knowing how the phase angle adds or subtracts allows us to easily design phase lead or lag and PD or PI controllers using the frequency domain.

Second-order system:

C(s)/R(s) = 1 / (s²/ωn² + (2ζ/ωn)s + 1)

Figure 18   Bode plot factors—first-order lead.

Finally, we have the second-order system. As with the step response curves for second-order systems, we again have multiple curves to reflect the two necessary parameters, natural frequency and damping ratio. Each line in Figure 19 represents a different damping ratio. Several analogies can be made from our experience with the previous factors, only now there are three terms in the denominator, as the magnitude and phase angle equations show.

Mag_dB = 20 log|1/((jω)²/ωn² + (2ζ/ωn)(jω) + 1)| = −20 log √((1 − ω²/ωn²)² + (2ζω/ωn)²)

Phase = φ = −tan⁻¹[(2ζω/ωn) / (1 − ω²/ωn²)]

At low frequencies, both s (or jω) terms are near zero, the factor is near unity, and it can be approximated by a horizontal asymptote. At high frequencies the s²/ωn² term dominates and we now have twice the slope, −40 dB/decade, for the high frequency asymptote. The phase angle similarly begins at zero but now has j · j in the high frequency term and ends at −180 degrees, or twice that of a first-order system. Thus for each jω in the highest order term of the denominator, another 90 degrees of lag is added. Therefore a true first-order system can never be more than 90 degrees out of phase and a second-order system never more than 180 degrees.

Figure 19 also allows us to determine the natural frequency and damping ratio by inspection. This is developed further when we discuss how to take our Bode plot and derive the approximate transfer function from the plot. The intersection of the low and high frequency asymptotes occurs near the natural frequency (as does the peak), and if any peak exists in the magnitude plot, the system damping ratio is less than 0.707. At the natural frequency (i.e., the breakpoint) the phase angle is always −90 degrees, regardless of the damping ratio. If the system is overdamped, it factors into two first-order systems and can be plotted as those two systems. Thus the second-order Bode plots only show the family of curves where ζ ≤ 1.

Although not explicitly shown here, with the second-order term appearing in the numerator we get the same low frequency asymptote, a slope of +40 dB/decade on the high frequency asymptote, and a phase angle starting at zero and ending at +180 degrees. This is exactly what we expect after seeing how the other Bode plot factors appear in the denominator and the numerator.

Figure 19   Bode plot factors—second-order system.
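The factor behaviors described in this section lend themselves to a quick numerical check. Below is a sketch in Python evaluating the integrator, derivative, first-order lag, and second-order factors at a few frequencies (the choices ωn = 1 rad/sec, τ = 1 sec, and ζ = 0.2 are arbitrary, picked only for illustration):

```python
import cmath
import math

def mag_db(x):
    return 20 * math.log10(abs(x))

def phase_deg(x):
    return math.degrees(cmath.phase(x))

wn, tau, zeta = 1.0, 1.0, 0.2
for w in (0.1, 1.0, 10.0):
    s = 1j * w
    integrator = 1 / s                                      # -20 dB/decade, constant -90 deg
    derivative = s                                          # +20 dB/decade, constant +90 deg
    lag = 1 / (s * tau + 1)                                 # -3 dB and -45 deg at w = 1/tau
    second = 1 / ((s / wn) ** 2 + 2 * zeta * (s / wn) + 1)  # -90 deg at w = wn
    print(w, mag_db(integrator), round(mag_db(lag), 2),
          round(phase_deg(lag), 1), round(phase_deg(second), 1))
```

Running the loop confirms the claims in the text: the integrator magnitude falls exactly 20 dB per decade, the lag factor sits at about −3 dB and −45 degrees at its break frequency, and the second-order factor passes through −90 degrees at ω = ωn regardless of damping ratio.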

3.5.1.2   Constructing Bode Plots from Transfer Functions

When we reach the point of designing controllers in the frequency domain, we must often plot the different factors together to construct Bode plots representing the combined controller and physical system. In this section we look at some guidelines to make the procedure easier. The most fundamental approach is to develop a Bode plot for every factor in our controller and physical system and add them all together when we are finished. In general this is the recommended procedure. There are also guidelines that usually speed up the process and, at a minimum, provide useful checks when we are finished. Several guidelines are listed below:

1. To find the low-frequency asymptote, use the FVT (final value theorem) to determine the steady-state gain of the whole transfer function and convert it to dB. The FVT, when applied to the whole system, gives us the equivalent steady-state gain between the system output and input. If the gain is greater than one, the output level will exceed the input level. This gain may be comprised of several different gains, some electronic and some inherent in the physical system. Converting the gain into decibels should give us the value of the low frequency asymptote found by adding the individual Bode plots. If we have one or more integrators in our system, the gain approaches infinity using the FVT. Each integrator in the system adds −20 dB/decade of slope to the low frequency asymptote. Therefore if we have a low frequency asymptote with a slope of −40 dB/decade, we should have two integrators in our system (1/s²).

2. Recognize that each power of s in the numerator ultimately adds 90 degrees of phase and a +20 dB/decade contribution to the high frequency asymptote, and the opposite for each power of s in the denominator. For example, a third-order numerator and a fourth-order denominator appear as a first-order system at high frequencies, with a final high frequency asymptote of −20 dB/decade and a phase angle of −90 degrees. Therefore, the high frequency asymptote will have a slope of −20(n − m) dB/decade, where n is the order of the denominator and m is the order of the numerator, and the final high frequency phase angle will be −90(n − m) degrees.
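The (n − m) rules can be checked numerically. Here is a sketch in Python with a made-up system having a third-order numerator and a fourth-order denominator (so n − m = 1); the specific poles and zeros are arbitrary:

```python
import cmath
import math

# Hypothetical system: 3rd-order numerator, 4th-order denominator (n - m = 1)
def G(s):
    return (s + 2) ** 3 / (s * (s + 1) * (s + 5) * (s + 20))

def mag_db(w):
    return 20 * math.log10(abs(G(1j * w)))

# the slope over one decade at high frequency approaches -20*(n - m) dB/decade,
# and the phase approaches -90*(n - m) degrees
slope = mag_db(1e5) - mag_db(1e4)
phase = math.degrees(cmath.phase(G(1j * 1e6)))
print(round(slope, 3), round(phase, 3))   # about -20 dB/decade and -90 degrees
```

At high frequencies every individual pole and zero contributes its full ±20 dB/decade and ±90 degrees, so only the difference n − m survives in the final asymptote.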

Even without constructing the individual Bode plots, much of the overall system behavior can be understood in the frequency domain by applying these simple guidelines. As discussed later, the process can be reversed and a Bode plot may be used to derive the transfer function for the system.

EXAMPLE 3.13

Let us now take a transfer function and develop the approximate Bode plot to illustrate the principles learned in this section. We use the transfer function below, which can be broken down into four terms: a gain K, an integrator, a first-order lead term, and a first-order lag term. The Bode plot will be constructed using the approximate straight line asymptotes for each term.

G(s) = 10(s + 1) / (s(0.1s + 1))

To begin, let us develop a simple table showing the straight line magnitude and angle approximations for each term (Table 2). The gain factor is plotted as a line of constant magnitude at 20 dB (= 20 log 10), and its phase angle contribution is zero for all frequencies. The integrator has a constant slope of −20 dB/decade and crosses 0 dB at 1 rad/sec, as shown in Table 2. Its angle contribution is always −90 degrees. The first-order lead term in the numerator has its break frequency at 1 rad/sec (τ = 1 sec, break at 1/τ). It is a horizontal line at 0 dB before 1 rad/sec and has a slope of +20 dB/decade after the break frequency. The phase angle begins at 0 degrees one decade before the break and ends at +90 degrees one decade after the break frequency. Finally, the first-order

Table 2   Example: Contribution of Individual Bode Plot Factors

Magnitude Data (all in dB)

ω (rad/s)   Gain K, 20 log(10)   Integrator (1/s)   1st Lead (s + 1)   1st Lag 1/(0.1s + 1)   Total
0.1         20                   20                 0                  0                      40
1           20                   0                  0                  0                      20
10          20                   −20                20                 0                      20
100         20                   −40                40                 −20                    0

Phase Angle Data (all in degrees)

ω (rad/s)   Gain K   Integrator   1st Lead   1st Lag   Total
0.1         0        −90          0          0         −90
1           0        −90          45         0         −45
10          0        −90          90         −45       −45
100         0        −90          90         −90       −90

lag term has a time constant τ = 0.1 sec and thus a break frequency at 10 rad/sec. After its break frequency, however, it has a slope of −20 dB/decade. The angle contribution is also negative, varying from 0 to −90 degrees from one decade before to one decade after the break frequency of 10 rad/sec. Once the individual magnitude and phase angle contributions are calculated, they can simply be added together to form the final magnitude and phase angle plot for the system. The Total column for the magnitude and phase angle data, shown in Table 2, thus defines the final Bode plot values for the whole system. Graphically, each individual term and the final Bode plot are plotted in Figure 20.

Checking our final plot using the guidelines above, we see that our low frequency asymptote has a slope of −20 dB/decade, implying that we have one integrator in our system. The high frequency asymptote also has a slope of −20 dB/decade, meaning that the order of the denominator is one greater than the order of the numerator. Since the phase angle does increase over some range of frequencies on the final plot, we also know that we have at least one s term in the numerator adding positive phase angle. In this case, knowing the overall system transfer function that we started with, we see that all the quick checks support what is actually the case. Our system has one integrator, a term in the numerator, and a denominator one order greater than the numerator (order of two in the denominator versus order of one in the numerator). As we reverse engineer system transfer functions from existing Bode plots in later sections, we will see that these guidelines form the beginning point for the procedure.

In conclusion, all transfer functions can be factored into the terms described and each term plotted separately on the Bode plot. The final plot is then simply the sum of all the individual plots, both magnitude and phase angle.
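The straight-line totals in Table 2 can also be compared against the exact frequency response. A sketch in Python for the transfer function of Example 3.13:

```python
import cmath
import math

# Example 3.13 transfer function: G(s) = 10(s + 1) / (s(0.1s + 1))
def G(s):
    return 10 * (s + 1) / (s * (0.1 * s + 1))

for w in (0.1, 1.0, 10.0, 100.0):
    g = G(1j * w)
    mag = 20 * math.log10(abs(g))
    ph = math.degrees(cmath.phase(g))
    print(w, round(mag, 2), round(ph, 1))
```

The exact values (about 40.0, 23.0, 17.0, and 0.0 dB, with phases of about −84.9, −50.7, −50.7, and −84.9 degrees) stay within a few dB and a few degrees of the straight-line totals in Table 2 (40, 20, 20, 0 dB and −90, −45, −45, −90 degrees), which is typical of the asymptotic approximation.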

Figure 20   Example: constructing final Bode plot from separate terms.

3.5.1.3   Bode Plot Parameters

Bode plots are frequently discussed using terms like bandwidth, gain margin, phase margin, break frequency, and steady-state gain. Let us quickly define each term here and explain them in greater detail in the following text.

Bandwidth frequency (the definition varies, but here are the most common):
- Frequency at which the magnitude is −3 dB relative to the low frequency asymptote magnitude.
- Frequency at which the magnitude is −3 dB relative to the maximum dB magnitude reached. In the case of second-order factors where ζ ≤ 0.35 and the peak exceeds 3 dB, the bandwidth is often considered the range of frequencies corresponding to the −3 dB level before and after the peak magnitude.


- Frequency at which the phase angle passes through −45 degrees.
- Frequency at which the phase angle passes through −90 degrees.

Gain margin (open loop):
- The margin (in dB) by which the magnitude plot is below the 0 dB line at the frequency where the phase is −180 degrees (below the 0 dB line is positive margin for a stable system).

Phase margin (open loop):
- The margin (in degrees) by which the phase angle is above −180 degrees at the crossover frequency (where the magnitude plot crosses the 0 dB line).

Break frequency:
- The frequency at which a first-order system is attenuated by 3 dB or a second-order system by 6 dB. Corresponds to the intersection of the low and high frequency asymptotes.

Steady-state gain:
- The value of the low frequency asymptote as the frequency is extended to zero. Hence an integrator, with a constant −20 dB/decade slope, has infinite gain as ω goes to zero.

The different methods of measuring bandwidth are shown in Figure 21. The interesting thing to notice is that the different methods clearly result in different values. To understand why this happens, we need to first recognize that bandwidth is a subjective measurement. The term bandwidth, by definition, is simply a range of frequencies within a band. Couple this definition with the fact that when we think of the bandwidth frequency, we associate it with a frequency value beyond which the system no longer performs as expected, and we see where the confusion enters. Consider the most common bandwidth definition, −3 dB below the low frequency asymptote level. We reach a level of −3 dB when the output magnitude is 0.707 of the input magnitude. This is significant because it corresponds to the level where the delivered power is one half of its low-frequency value. That is, the system is only delivering 1/2 of the power at the bandwidth frequency that it delivers at lower frequencies. The concern in all this is as follows: different manufacturers may use different standards, and it becomes difficult to honestly compare two systems.
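The −3 dB bandwidth can be located numerically once we have a magnitude function. A sketch in Python for a hypothetical unity-gain second-order system with ωn = 2 rad/sec and ζ = 0.7 (values chosen only for illustration):

```python
import math

def mag_db(w, wn=2.0, zeta=0.7):
    """Magnitude (dB) of the unity-gain second-order factor at frequency w."""
    g = 1 / complex(1 - (w / wn) ** 2, 2 * zeta * (w / wn))
    return 20 * math.log10(abs(g))

# bisect for the frequency where the response is 3 dB below the
# low-frequency asymptote (0 dB here)
lo, hi = 1e-3, 100.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if mag_db(mid) > -3.0 else (lo, mid)
w_bw = 0.5 * (lo + hi)
print(round(w_bw, 3))   # close to wn for this damping ratio
```

For this damping ratio, near 0.707, the −3 dB bandwidth lands almost exactly at the natural frequency.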
Does using the −3 dB criterion mean that a system is useless once the input frequency exceeds the bandwidth frequency? Not at all; as we said, it is simply a criterion that seems logical based on the half-power condition. The other criteria are no better or worse (although some systems may look better under one criterion than another, even when all systems are compared using the same one). The conclusion to make is this: when making decisions based on published bandwidth frequencies, it is wise to ask what criterion was used, to make a fair comparison.
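Gain and phase margin, defined above, can be computed directly from the open-loop frequency response. Below is a sketch in Python for a made-up open-loop system G(s) = 2/(s(s + 1)(0.2s + 1)) (not a system from the text):

```python
import math

# hypothetical open-loop system G(s) = 2 / (s(s + 1)(0.2s + 1))
def mag_db(w):
    return 20 * math.log10(2 / (w * math.hypot(w, 1) * math.hypot(0.2 * w, 1)))

def phase_deg(w):
    # unwrapped phase: -90 from the integrator plus the two first-order lags
    return -90 - math.degrees(math.atan(w)) - math.degrees(math.atan(0.2 * w))

def bisect(f, lo, hi):
    """Find a root of f on [lo, hi], assuming f decreases through zero."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return lo

w_gc = bisect(mag_db, 0.01, 100)                        # magnitude crosses 0 dB
w_pc = bisect(lambda w: phase_deg(w) + 180, 0.01, 100)  # phase crosses -180 deg

print("phase margin:", round(180 + phase_deg(w_gc), 1), "deg")
print("gain margin :", round(-mag_db(w_pc), 1), "dB")
```

Both margins come out positive for this system (roughly 25 degrees and 9.5 dB), so the closed loop would be stable with some room to increase the gain.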

3.5.2   Nyquist Plots (Polar Plots)

This section defines the layout of Nyquist plots (or polar plots, as they are sometimes called) and how they relate to the Bode plots examined previously. As we will see, a Nyquist plot can be constructed from a Bode plot without any additional information; in fact, whatever analysis or design technique can be done on a Bode plot has a similar counterpart applicable to a Nyquist plot. The advantage of using a Nyquist plot is that both the magnitude and phase angle relationships are shown on one plot (versus separate plots for a Bode diagram). The disadvantage is that when plotting a Nyquist plot using data from a Bode plot, not all data are used, and the procedure is difficult to reverse unless enough frequencies are labeled on the Nyquist plot during the plotting procedure. Since the data on a Nyquist plot do not explicitly show frequency, the contribution of each individual factor is not nearly as clear on a Nyquist plot as it is on a Bode plot. For these reasons, and since Bode plots are more common, this section only demonstrates the relationship between Nyquist and Bode plots. The majority of our design work in the frequency domain in this text will continue to be done using Bode plots.

Figure 21   Bode plot bandwidth definitions.

Perhaps the simplest way to illustrate how a Nyquist plot relates to a Bode plot is to begin with a Bode plot and construct the equivalent Nyquist plot. Before we begin, let us quickly define the setup of the axes on the Nyquist plot. The basis for a

Nyquist plot has already been established in Figure 12, where we discussed phasors. As we recall, a phasor has both a magnitude and a phase angle plotted on xy axes. The x axis represents the real portion of the phasor and the y axis the imaginary portion. In terms of our phasor plot, a magnitude of one and a phase angle of zero is the vector from the origin, lying on the positive real axis, with a length of one. To construct a Nyquist plot, we simply start at the lowest frequency plotted on the Bode plot, record the magnitude and phase angle, convert the magnitude from dB to linear, and plot the point at the tip of the vector with that magnitude and phase angle. As we sweep through the frequencies from low to high, we continue to plot these points until the curve is defined. The end result is our Nyquist, or polar, plot.

EXAMPLE 3.14

To step through this procedure, let us use the Bode plot used to define bandwidth, shown in Figure 21, which plots an underdamped second-order system. To begin, we record some magnitudes and phase angles, as given in Table 3, at a sampling of frequencies from low to high. Once we have the data recorded and the magnitude converted so that it linearly represents the output/input magnitude ratio, we can proceed to develop the Nyquist plot given in Figure 22. The first point, plotted from data measured at a frequency of 0.1 rad/sec, has a magnitude ratio of 1.01 and a phase angle of −2.3 degrees. This is essentially a line of length 1 along the positive real axis, as shown on the plot. At a frequency of 0.8 rad/sec, the curve passes through a point a distance of 2.08 from the origin and at an angle of −41.6 degrees. The remaining points are plotted in the same way. The magnitude defines the distance from the origin and the phase angle defines the orientation. Since our phase angles are negative (as is common), our plot progresses clockwise as we increase the frequency.
By the time we approach the higher frequencies, the magnitude is near zero and the phase angle approaches −180 degrees. We are approaching the origin from the left as ω → ∞. If we understand the procedure, we should see that it is also possible to take a Nyquist plot and generate a Bode plot, as long as we are given enough frequency points along the curve. In general, though, we lose information by going from a Bode plot to a Nyquist plot. Many computer programs, when given the original magnitude and phase angle data, are capable of generating the equivalent Bode and Nyquist plots.
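The dB-to-linear conversion and plotting step can be sketched in Python (the data pairs below echo a few rows of Table 3 and are for illustration only):

```python
import math

def nyquist_point(mag_db, phase_deg):
    """Convert one Bode data point (dB, degrees) into a complex Nyquist point."""
    r = 10 ** (mag_db / 20)          # dB back to a linear magnitude ratio
    ph = math.radians(phase_deg)
    return complex(r * math.cos(ph), r * math.sin(ph))

# a few (dB, degrees) pairs similar to Table 3
data = [(0.08, -2.31), (7.96, -90.0), (-39.92, -177.69)]
points = [nyquist_point(m, p) for m, p in data]
for pt in points:
    print(round(pt.real, 3), round(pt.imag, 3))
```

The three points trace the expected clockwise path: near +1 on the real axis at low frequency, straight down the negative imaginary axis at the natural frequency, and approaching the origin from the left at high frequency.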

Table 3   Example: Magnitude and Phase Values Recorded from Bode Plot

Frequency (rad/sec)   Magnitude (dB)   Magnitude (linear scale)   Phase Angle (degrees)
0.10                  0.08             1.01                       −2.31
0.80                  6.35             2.08                       −41.63
0.90                  7.81             2.46                       −62.18
1.00                  7.96             2.50                       −90.00
1.20                  3.73             1.54                       −132.51
10.00                 −39.92           0.01                       −177.69

Figure 22   Example: Nyquist plot construction from Bode plot data.

3.5.3   Constructing Bode Plots from Experimental Data

Because of the desirable information contained in Bode plots, they are often constructed when testing different products. The resulting plots provide valuable information for designing robust control systems. The types of products for which Bode plots have been constructed vary widely, from electrical to mechanical to electrohydraulic to combinations of these. Items such as hydraulic directional control valves are quite complex, and a complete analytical model is time consuming to construct and confirm. In cases like this, Bode plots provide a much simpler solution. This section examines some of the advantages, disadvantages, and guidelines for developing and using Bode plots in product design, testing, and evaluation. Generally speaking, Bode plots have several distinct advantages and disadvantages as compared with other methods.

Advantages:
- Easier on equipment than step responses; step responses tend to saturate actuators and components.
- More information available from the test; allows higher order system models to be constructed.

Disadvantages:
- More difficult experiment; it takes more time to construct a Bode plot than a step response.
- More difficult analysis; the design engineer needs to understand the resulting Bode plot.

As mentioned in the previous section, Bode plots graph the relationship between the input and output magnitude and phase angle as a function of input frequency. What follows here is a brief description of how to accomplish this in practice. The input signal is a sinusoidal waveform of fixed amplitude whose frequency is varied. At various frequencies the waveforms are captured and analyzed for amplitude


ratios and phase angles. Thus each point used to construct a Bode plot requires a new setting in the test fixture (generally just the input frequency). It is important to wait for all transients to decay after changing the frequency. Once the transients have decayed, the result will be a graph similar to the one given in Figure 23. This plot yields two data points, a magnitude value and a phase angle value (at one frequency), used to construct the Bode plot. The plot is typical of most physical systems, since the output lags the input (higher order in the denominator) and the output amplitude is less than the input.

EXAMPLE 3.15

The data points required for the development of the Bode plot are found as follows from the plot in Figure 23:

Test frequency: ω = 2π rad / 2 sec = π rad/sec (plotted on the horizontal log scale)

dB magnitude: M_dB = 20 log(|Y|/|X|) = 20 log(0.5/1.0) = −6.0 dB (peak-to-peak values may also be used for Y/X)

Phase angle: φ = −(360 degrees / 2 sec)(0.25 sec lag) = −45 degrees

These points (−6 dB on the magnitude plot and −45 degrees on the phase plot, both at a frequency of ω = π rad/sec) would then be plotted on the Bode plot and the frequency changed to another value. The process is repeated until enough points are available to generate ''smooth'' curves. Remember that we plot the data on a logarithmic scale, so for the most efficient use of our time we should space our test frequencies accordingly.

Now that the Bode plot is completed, it can be used to develop a transfer function representing the system that was tested. Whereas models resulting from step response curves are limited to first- or second-order systems, Bode plots may be used to develop higher order models. This is a large advantage of Bode plots when

Figure 23   Input and output waveforms—Bode plots.

compared with step response plots when the goal is developing system models. The following steps may be used as guidelines for determining open loop system models from Bode plots.

Step 1: Approximate all the asymptotes on the Bode plot using straight lines. Of special interest are the low and high frequency asymptotes.

Step 2: If the low frequency asymptote is horizontal, it is a type 0 system (no integrators) and the static steady-state gain is the magnitude of the low frequency asymptote. If the low frequency asymptote has a slope of −20 dB/decade, then it is a type 1 system and there is one integrator in the system. If we remember how each factor contributes, it is fairly easy to recognize the pattern and the effect that each factor adds to the total.

Step 3: If the high frequency asymptote has a slope of −20 dB/decade, the order of the denominator is one greater than the order of the numerator. If the slope is −40 dB/decade, the difference is two orders between the denominator and numerator. To estimate the order of the numerator, examine the phase angle plot. If there is a factor in the numerator, the slope at some point should be positive; the magnitude plot should also exhibit a region of increased slope. If enough distance (in frequency) separates the factors, each order in the numerator will show a +20 dB/decade portion in the magnitude plot and add +90 degrees to the total phase angle. This is seldom the case, since factors overlap and judgment calls must be made based on experience and knowledge of the system. Drawing all the clear asymptotic (straight line) segments usually helps to fill in the missing gaps.

Step 4: With the powers of the numerator and denominator now determined, see if any peaks occur in the magnitude plot. If so, one factor is a second-order system whose natural frequency and damping ratio can be approximated.
The magnitude of the peak, relative to the lower frequency asymptote preceding it, determines the damping ratio as

Mp (dB) = 20 log10 [ 1 / (2ζ√(1 - ζ²)) ]

Mp is the distance in decibels that the peak value is above the horizontal asymptote preceding the peak. As the damping ratio goes to zero, the peak magnitude goes to infinity and the system becomes unstable. To calculate the damping ratio, the peak magnitude is used in the same way that the percent overshoot was used for a step response in the time domain. The graph showing Mp (in dB) versus the damping ratio is given in Figure 24. The peak occurs close to the damped frequency, ωd, which is shifted from the natural frequency by the damping ratio: ωd = ωn√(1 - ζ²). The natural frequency can easily be found independent of the damping ratio by extending the low and high frequency asymptotes of the second-order system in question. The intersection of the two asymptotes occurs at the natural frequency of the system.

Step 5: Fill in the remaining first-order factors by locating each break in the asymptotes. Each break corresponds to the time constant for that factor.
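The peak-magnitude relation in Step 4 is easy to evaluate numerically, and inverting it replaces reading Figure 24 by eye. The following is a hypothetical Python sketch (valid for damping ratios below 1/√2, where a resonant peak actually exists):

```python
import math

def peak_db(zeta):
    """Height of the resonant peak above the low frequency asymptote:
    Mp = 1 / (2*zeta*sqrt(1 - zeta**2)), expressed in dB."""
    return 20 * math.log10(1 / (2 * zeta * math.sqrt(1 - zeta ** 2)))

def zeta_from_peak_db(mp_db):
    """Invert peak_db by bisection; the peak height falls monotonically
    as the damping ratio rises toward 1/sqrt(2)."""
    lo, hi = 1e-9, 1 / math.sqrt(2)
    for _ in range(100):
        mid = (lo + hi) / 2
        if peak_db(mid) > mp_db:
            lo = mid   # peak too tall -> damping too light -> move right
        else:
            hi = mid
    return (lo + hi) / 2

def damped_frequency(wn, zeta):
    """wd = wn * sqrt(1 - zeta**2), near which the peak occurs."""
    return wn * math.sqrt(1 - zeta ** 2)
```

For example, a measured peak of about 14 dB corresponds to a damping ratio of roughly 0.1.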


Chapter 3

Figure 24 Peak magnitude (dB) versus damping ratio (second-order system).

Look for shifts of -20 dB/decade to determine where the first-order factors are in your system. If the asymptote changes from -20 dB/decade to -40 dB/decade (without a peak), then there is likely a first-order break frequency located at the intersection of the two asymptotes defining the shift in slopes. Using Bode plots to approximate systems is a powerful method and allows a much better understanding than simple step response plots. If, when designing a control system, we are able to obtain Bode plots for the critical subsystems/components, we can develop models and simulate the system with much greater accuracy.

3.5.3.1 Nonminimum Phase Systems

A final note should be made on minimum and nonminimum phase systems, as it might arise during the above process that the magnitude and phase angle values do not agree (i.e., a first-order system not having both a high frequency asymptote of -20 dB/decade and a final phase angle of -90 degrees). In a minimum phase system the high frequency asymptote slopes correspond correctly with the final phase angle. For example, a second-order system will have a high frequency asymptote of -40 dB/decade to correspond with a final phase angle of -180 degrees. If the phase lag is greater than the corresponding slope of the high frequency asymptote implies, we have a nonminimum phase system. This occurs when we have a delay in our system. Delays add additional phase angle (lag) without affecting the magnitude plot. This is important to know, since delays significantly degrade the performance of control systems.

3.5.3.2 Signal Amplitudes and Ranges

In the case that we proceed to develop our own Bode plots in the laboratory, a few final comments on signal amplitudes and appropriate ranges are in order. Ideally, the expected operating range is known before the experimental Bode plots are developed. In this case the signal amplitude should be centered with the peak-to-peak amplitudes remaining within the expected operating range. In some components with large amounts of deadband, like proportional directional control valves, the test should take place in the active region of the valve. The data are relatively meaningless when the test is performed in the deadband of this type of component. For some components the frequency response curves change little throughout the full operating region, while others may change significantly (linear versus nonlinear systems). A general rule of thumb is to center the input signal offset in the middle of the component's active range and vary the amplitude ±25% of the active range. Chapter 12 discusses electrohydraulic valves and deadband in much more detail.

3.6 STATE SPACE

State space analysis methods are fairly simple, and the same procedure may be used for large, nonlinear, multiple input multiple output (MIMO) systems. This is one of the primary advantages of using state space notation when representing physical systems. In general, state space equations represent a higher order differential equation with a system of first-order differential equations. When using linear algebra identities to analyze state space systems, the equations must first be linearized as shown earlier. Both nonlinear and linear systems are easily analyzed using numerical methods of integration. Programs like Matlab have multiple routines built in for numerical integration. This section presents methods of handling both linear and nonlinear state space systems.

3.6.1 Linear Matrix Methods and Transfer Functions

When the state equations are linear and time invariant, it is possible to analyze the system using Laplace transforms. The basic procedure is to take the Laplace transform of the state space matrices using linear algebra identities. The result leads to a transfer function capable of being solved using the inverse Laplace transforms already examined or the root locus techniques presented in later sections. Let us now step through the process for obtaining a transfer function from a state space matrix representation. The original state space matrices in general form are given as

dx/dt = [A]x + [B]u   and   y = [C]x + [D]u

Now take the Laplace transform of each equation:

s X(s) - x(0) = [A]X(s) + [B]U(s)   and   Y(s) = [C]X(s) + [D]U(s)

Solve the first equation for X(s), taking the initial conditions x(0) as zero:

s X(s) - [A]X(s) = [B]U(s)
(sI - A)X(s) = [B]U(s)

If we premultiply each side by (sI - A)⁻¹, we end up with

X(s) = (sI - A)⁻¹[B]U(s)

This is the output of our states, so substitute into the output equation for Y:

Y(s) = [C](sI - A)⁻¹[B]U(s) + [D]U(s)

or

Y(s) = [ [C](sI - A)⁻¹[B] + [D] ] U(s)

The transfer function is simply the output divided by the input, Y(s)/U(s), so

G(s) = [C](sI - A)⁻¹[B] + [D]

It is relatively straightforward to get a transfer function from our state space matrices, the most difficult part being the matrix inversion. For small systems it is possible to invert the matrix by hand, where the inverse of a matrix is given by

M⁻¹ = Adjoint[M] / |M|,   |M| = determinant of M

More information on matrices is given in Appendix A.

EXAMPLE 3.16

To illustrate this procedure with an example, let us use our mass-spring-damper state space model already developed and convert it to a transfer function. Recalling the mass-spring-damper system and the state matrices developed earlier:

[ẋ1; ẋ2] = [0  1; -k/m  -b/m][x1; x2] + [0; 1/m]u   and   y = [1  0][x1; x2] + [0]u

Then:

G(s) = [C](sI - A)⁻¹[B] + [D]

G(s) = [1  0] ( s[1  0; 0  1] - [0  1; -k/m  -b/m] )⁻¹ [0; 1/m] + 0

G(s) = [1  0] [s  -1; k/m  s + b/m]⁻¹ [0; 1/m]

To invert the matrix, we take the adjoint matrix divided by the determinant:

[s  -1; k/m  s + b/m]⁻¹ = [s + b/m  1; -k/m  s] / (s² + (b/m)s + k/m)

Simplifying:

G(s) = [1  0] [s + b/m  1; -k/m  s][0; 1/m] / (s² + (b/m)s + k/m)

G(s) = [s + b/m  1][0; 1/m] / (s² + (b/m)s + k/m)

And finally, we get the transfer function G(s):

G(s) = (1/m) / (s² + (b/m)s + k/m) = 1 / (ms² + bs + k)

It should not be a surprise to find that this is exactly what was developed earlier as the transfer function (from the differential equations) for the mass-spring-damper system. The formal approach is seldom needed in practice; as long as you understand the process and what it represents, many computer programs have been developed to handle these chores for you. Many calculators produced now will perform these tasks.

3.6.2 Eigenvalues

One important result from the state space to transfer function conversion is realizing that the characteristic equation remains the same and recognizing where it came from during the transformation. Remember that when we took the determinant of the (sI - A) matrix, it resulted in a polynomial in the s-domain. Looking closer, we see that we actually developed the characteristic equation for the mass-spring-damper system. This determinant is sometimes called the characteristic polynomial, whose roots are defined as the eigenvalues of the system. Thus, eigenvalues are identical to the poles in the s-plane arising from the roots of the characteristic equation. We can treat the eigenvalues the same as our system poles: plot them in the s-plane, examine the response characteristics (time constant, natural frequency, and damping ratio), and predict system behavior. Eigenvectors are sometimes calculated by substituting the eigenvalues, λ, back into the matrix equation (λI - A)x = 0 and solving for the relationships between the states that satisfy the matrix equation. This is more common in the field of vibrations, where we discuss modes of vibration corresponding to eigenvectors.

3.6.3 Computational Methods

A real advantage of state space notation is the ease of simulating large nonlinear systems. Even linearizing the equations and converting them to a transfer function gets tedious for higher order systems. Since the general notation is a list of first-order differential equations (as functions of the states themselves), at any point in time we have the slope (derivative) of each state with which to project its value into the future. The computer routines are almost identical whether the system is second order (two state equations) or tenth order. The general notation has already been given as

dX(t)/dt = f(X(t), U(t), t),   X(t0) = X0 (initial values)

This notation allows for multiple inputs and outputs, time varying functions, and nonlinearities. It also works fine for LTI, single input single output systems. Two basic methods are common for obtaining the time solution to the differential equations: one-step methods and multistep methods. The most basic and familiar one-step method is Euler's. If constant time intervals, h, are used, the system is approximated by

x(t + h) = x(t) + h · dx/dt|t

The next value is simply the current value added to the time step multiplied by the slope of the function at the current time. This method is fast and simple to program but requires very small time steps to consistently obtain accurate results. The net result is a simulation that requires more time to run than more efficient routines like Runge-Kutta.* The Runge-Kutta routine has been one of the mainstays of numerical integration. It retains the important feature of requiring only one prior value of x(t) to advance the solution by time h. The basic routine allows for higher order approximations when estimating the slope. The common fourth-order approximation estimates four slopes using the equations below and weights the average to obtain the solution. While each step requires more processing than Euler's, the steps can be much larger, thus saving on overall computational time. For the interval from tk to tk+1:

Slope 1 = s1 = dx/dt evaluated at (xk, tk)
Slope 2 = s2 = dx/dt evaluated at (xk + s1·h/2, tk + h/2)
Slope 3 = s3 = dx/dt evaluated at (xk + s2·h/2, tk + h/2)
Slope 4 = s4 = dx/dt evaluated at (xk + s3·h, tk + h)

and finally, to calculate the value(s) at the next time step:

xk+1 = x(tk + h) = xk + (h/6)(s1 + 2s2 + 2s3 + s4)

The fourth-order method presented here has a truncation error of order h⁴ and requires four evaluations of the derivative (state equations) per step. For many problems, this represents a reasonable trade-off between accuracy and computing efficiency.
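The Euler and fourth-order Runge-Kutta updates translate almost line for line into code. The sketch below is a hypothetical Python version (the mass-spring-damper values at the end are made-up illustration numbers) that integrates dx/dt = f(x, t) with a fixed step h:

```python
import numpy as np

def euler_step(f, x, t, h):
    # x(t + h) = x(t) + h * (dx/dt evaluated at time t)
    return x + h * f(x, t)

def rk4_step(f, x, t, h):
    # Four slope estimates, then the weighted average (h/6)(s1+2s2+2s3+s4).
    s1 = f(x, t)
    s2 = f(x + s1 * h / 2, t + h / 2)
    s3 = f(x + s2 * h / 2, t + h / 2)
    s4 = f(x + s3 * h, t + h)
    return x + (h / 6) * (s1 + 2 * s2 + 2 * s3 + s4)

def simulate(f, x0, t0, tf, h, step=rk4_step):
    """March the state vector from t0 to tf with a fixed step size h."""
    x, t = np.asarray(x0, dtype=float), t0
    while t < tf - 1e-12:
        x = step(f, x, t, h)
        t += h
    return x

# Mass-spring-damper in state form, with made-up illustration values.
m, b, k = 1.0, 0.5, 2.0
msd = lambda x, t: np.array([x[1], -(k / m) * x[0] - (b / m) * x[1]])
x_final = simulate(msd, [1.0, 0.0], 0.0, 5.0, 0.01)
```

The same `simulate` driver works unchanged for a two-state or a ten-state system, which is the point made above about state space routines.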
Although not as commonly written by individual end users, most simulation software incorporates advanced multistep and predictor-corrector methods. Multistep methods require the prior values to be saved and used in the next step. Explicit methods use only data points up to the current tk, while implicit methods require tk+1 or further ahead. These advanced routines can estimate the error of the current prediction, and if it is larger than a user-defined value, the routine backs up a step, decreases the step size, and tries again. When the system is changing very slowly, it also may increase the step size to save computational time. These routines are generally invisible to the user, and programs like Matlab have methods of presenting the output at fixed time steps even though the step size changed during the numerical integration. In conclusion, it is quite simple to compute a time solution to equations in state space format even though they may be nonlinear and have multiple inputs and outputs. There are books containing the numerical recipes (code segments) for many different numerical integration algorithms (in many different programming languages) if the desire is to do the programming for incorporation into a "custom" program.

*Named after German mathematicians C. Runge (1856-1927) and W. Kutta (1867-1944).

3.6.4 State Space Block Diagram Representation

If any system in state space representation is linearized, it can be represented in block diagram notation for simulation by a variety of programs. Figure 25 illustrates the block diagram representation of a general state space system without feedback. In later sections the feedback loop will be added and analyzed. The wide paths denote multiple lines of information, i.e., a column vector is passed for each point in time. A linear single input single output state space system may be reduced further, resulting in a more typical block diagram. For example, look again at the state space mass-spring-damper system:

[ẋ1; ẋ2] = [0  1; -k/m  -b/m][x1; x2] + [0; 1/m]u

This can be represented in block diagram form as shown in Figure 26. If we take the Laplace transform of the integrals in the block diagram and simplify, the result becomes the same transfer function developed multiple times (and using multiple methods) previously. By now we should be more comfortable representing models of physical systems using a variety of formats. Each format has different strengths and weaknesses, but for the most part the information is interchangeable.

3.6.5 Transfer Function to State Space Conversion

Just as we have seen that it is possible to take a set of state space matrices and form a transfer function, it is also possible to take a transfer function and convert it to equivalent state space matrices. By definition, a transfer function is linear and represents a single input single output system. This simplifies the process, especially when the numerator of the transfer function is constant (no s terms). The following example illustrates the ease of constructing the state space matrices from a transfer function. Remember that converting a state space system to a transfer function results in a unique transfer function, but the process in reverse may produce many correct but different representations in state space. That is, equivalent but different state space matrices, when converted to transfer functions, result in the same transfer function. The opposite is not true: different methods of converting a transfer function into state space matrices may result in different matrices. One thing does remain true, however: if we calculate the eigenvalues for any of the A matrices, they will be identical. In the example that follows, we first develop the matrices manually and then use Matlab. Even though the resulting matrices differ, we will verify that they indeed contain the same information.

Figure 25 General state space block diagram.

Figure 26 Mass-spring-damper block diagram.

EXAMPLE 3.17

Convert the following transfer function to state space representation:

C(s)/R(s) = 24 / (s³ + 6s² + 11s + 6)

The process is to first cross multiply:

C(s)(s³ + 6s² + 11s + 6) = 24 R(s)
C(s)s³ + 6C(s)s² + 11C(s)s + 6C(s) = 24 R(s)

Take the inverse Laplace transform to get the original differential equation (minus the initial conditions):

c''' + 6c'' + 11c' + 6c = 24r

Choose the state variables (chosen here as successive derivatives; third order, three states):

x1 = c,   x2 = c',   x3 = c''

Then the state equations are:

dx1/dt = x2
dx2/dt = x3
dx3/dt = -6x1 - 11x2 - 6x3 + 24r

Finally, writing them in matrix form:

[ẋ1; ẋ2; ẋ3] = [0  1  0; 0  0  1; -6  -11  -6][x1; x2; x3] + [0; 0; 24]r

y = [1  0  0][x1; x2; x3] + [0]u

To conclude this example, let us now work the same problem using Matlab and compare the results, learning some Matlab commands as we progress. We first define the numerator and denominator (num and den in the following program), where the vectors contain the coefficients of the polynomials in decreasing powers of s. Thus to define the polynomial (s³ + 6s² + 11s + 6) in Matlab we define a vector as [1 6 11 6], which are the coefficients of [s³ s² s¹ s⁰].

Matlab commands:

num=24;                    %Define numerator of C(s)/R(s)=G(s).
den=[1 6 11 6];            %Define denominator of G(s).
sys1=tf(num,den)           %Make and display LTI TF.
[A,B,C,D]=tf2ss(num,den)   %Convert TF to SS using num,den.
lti_ss=ss(sys1)            %Convert LTI TF to state space.
roots(den)                 %Check roots of characteristic equation.
eig(A)                     %Check eigenvalues of A.
eig(lti_ss)                %Check eigenvalues of lti_ss variable.

After executing the commands, we find that the resulting state space system matrices are slightly different. Checking the roots of the denominator (the characteristic equation of the original transfer function) and the eigenvalues of the two resulting A matrices gives the results summarized in Table 4. Even with different matrices the eigenvalues are the same and equal to the original poles of the system. Matlab uses the LTI notation for commands used with LTI systems. The transfer function command, tf, is used to convert the numerator and denominator into an LTI variable. For very large systems and systems with zeros in

Table 4 Example: Matlab Results from TF to SS Conversion

Original transfer function: C(s)/R(s) = 24/(s³ + 6s² + 11s + 6), poles (roots of CE) = -3, -2, -1

Matrices from using tf2ss command:
A = [-6  -11  -6; 1  0  0; 0  1  0],   B = [1; 0; 0],   C = [0  0  24],   D = [0]
Eigenvalues = -3, -2, -1

Matrices from using ss command:
A = [-6  -2.75  -0.375; 4  0  0; 0  4  0],   B = [1; 0; 0],   C = [0  0  1.5],   D = [0]
Eigenvalues = -3, -2, -1
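The same conversion and eigenvalue cross-check can be reproduced outside Matlab. The sketch below is a hypothetical Python equivalent using SciPy and NumPy (it is not code from the text; SciPy's `tf2ss` returns its own canonical realization, which need not match either matrix set in Table 4, but its eigenvalues must):

```python
import numpy as np
from scipy import signal

num = [24.0]
den = [1.0, 6.0, 11.0, 6.0]            # s^3 + 6s^2 + 11s + 6

A, B, C, D = signal.tf2ss(num, den)    # one particular state space realization
poles = np.roots(den)                  # roots of the characteristic equation
eigs = np.linalg.eigvals(A)            # eigenvalues of A

# Whatever realization the routine returns, the eigenvalues of A
# match the transfer function poles: -3, -2, -1.
print(np.sort(eigs.real))

# Converting back recovers the original transfer function.
num2, den2 = signal.ss2tf(A, B, C, D)
```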

the numerator of the transfer function, using tools like Matlab can save the designer much time.

3.7 PROBLEMS

3.1 Given the following differential equation, which represents the model of a physical system, determine the time constant of the system, the equation for the time response of the system when subjected to a unit step input, and the corresponding plot of the system response resulting from the unit step input.

80 dx/dt + 4x = f(t)

3.2 Given the second-order time response to a step input in Figure 27, measure and calculate the percent overshoot, settling time, and rise time.

3.3 Using the differential equation given, determine the transfer function where G(s) = Y(s)/U(s).

d³y/dt³ + 3 d²y/dt² + 4 dy/dt + 9y = 10u

3.4 Given the following differential equation, find the transfer function where y is the output and u is the input:

y''' + 5y'' + 3y' + 2y = 5u' + u

3.5 Using the differential equation of motion given,

Figure 27 Problem: step response of second-order system.


a. Determine the transfer function for the system.
b. Take the inverse Laplace transform to solve for the time response if the input is a unit impulse.

2 d²y/dt² + 4 dy/dt + 6y = 8 du/dt + 10u

3.6 Write the time response for the following transfer function when the input is a unit ramp:

TF = G(s) = 2/(s + 2)

3.7 Given the following transfer function, solve for the system time response to a unit step input:

Y(s)/U(s) = (5s + 1)/(s² + 7s + 10)

3.8 Given the following transfer function, solve for the system time response to a step input with a magnitude of 2.

Y(s)/U(s) = 2/(s² + 3s + 2)

3.9 Find the time solution to the transfer function given. Use partial fraction expansion techniques.

Y(s)/U(s) = (5s + 1)/(s³ + 5s² + 3s + 2)

3.10 Given the following transfer function, solve for the system time response to a step input with a magnitude of 5.

Y(s)/U(s) = (3s + 12)/(s² + 7s + 12)

3.11 Given the s-plane plot in Figure 28, assume that the poles are at the marked locations and sketch the response to a unit step input for the system described by the poles. Assume a steady-state closed loop transfer function gain of 1.

3.12 Using Figure 29, a first-order system model responding to a unit step input, develop the appropriate transfer function model. Note the axes scales.

Figure 28 Problem: pole locations in the s-plane.

Figure 29 Problem: step response of first-order system.

3.13 Given Figure 30, the system response to a unit step input, approximate the transfer function based on a second-order system model.

3.14 Given the following closed loop transfer function, plot the pole locations in the s-plane and briefly describe the type of response (dynamic characteristics, final steady-state value) when the input is a unit step.

G(s) = 18/(s² + 4s + 36)

Figure 30 Problem: step response of second-order system.

Figure 31 Problem: system block diagram.

3.15 For the block diagram shown in Figure 31, determine the following:
a. The closed loop transfer function
b. The characteristic equation
c. The location of the roots in the s-plane
d. The time responses of the system to unit step and unit ramp inputs

3.16 Given the physical system model in Figure 32, develop the appropriate differential equation describing the motion (see Problem 2.18). Develop the transfer function for the system where xo is the output and xi is the input.

3.17 Given the physical system model in Figure 33, develop the appropriate differential equation describing the motion (see Problem 2.19). Develop the transfer function for the system where xo is the output and xi is the input.

3.18 Given the physical system model in Figure 34, develop the appropriate differential equation describing the motion (see Problem 2.20). Develop the transfer function for the system where r is the input and y is the output.

3.19 Given the physical system model in Figure 35, develop the appropriate differential equation describing the motion (see Problem 2.21). Develop the transfer function for the system where F is the input and y is the output.

3.20 Given the physical system model in Figure 36, develop the appropriate differential equation describing the motion (see Problem 2.26). Develop the transfer function for the system where Vi is the input and Vc, the voltage across the capacitor, is the output.

3.21 Using the physical system model in Figure 37, develop the differential equations describing the motion of the mass, y(t), as a function of the input, r(t). PL is the load pressure; a and b are linkage segment lengths (see Problem 2.27). Develop the transfer function for the system where r is the input and y is the output.

Figure 32 Problem: physical system model—mechanical/translational.

Figure 33 Problem: physical system model—mechanical/translational.

Figure 34 Problem: physical system model—mechanical/translational.

Figure 35 Problem: physical system model—mechanical/translational.

Figure 36 Problem: physical system model—electrical.

Figure 37 Problem: physical system model—hydraulic/mechanical.

3.22 Determine the differential equations describing the system in Figure 38 (see Problem 2.29). Formulate as time derivatives of h1 and h2. Develop the transfer function for the system where qi is the input and h2 is the output.

3.23 Determine the differential equations describing the system given in Figure 39 (see Problem 2.30). Formulate as time derivatives of h1 and h2. Develop the transfer function for the system where qi is the input and h2 is the output.

3.24 For the transfer function given, develop the Bode plot for both magnitude (dB) and phase as a function of frequency.

GH(s) = 10(s + 1) / [s(0.1s + 1)]

3.25 For the transfer function given, develop the Bode plot for both magnitude (dB) and phase as a function of frequency.

Y(s)/U(s) = 2/(s² + 3s + 2)

3.26 For the Bode plot shown in Figure 40, estimate the following:
a. What is the approximate order of the system?
b. Damping ratio: underdamped or overdamped?
c. Natural frequency (units)?
d. Approximate bandwidth (units)?

Figure 38 Problem: physical system model—liquid level.

Figure 39 Problem: physical system model—liquid level.

3.27 From the transfer function, sketch the basic Bode plot and measure the following parameters:
a. Gain margin
b. Phase margin
c. Bandwidth using the -3 dB criterion
d. Steady-state gain of the system

G(s) = 5(s + 4) / [s(s + 1)(s + 2)(5s + 1)]

3.28 Develop a Nyquist plot from the Bode plot given in Figure 40.

3.29 Given the following state space matrices, determine the equivalent transfer function. Is the system stable? Show why or why not.

[ẋ1; ẋ2] = [2  5; 3  11][x1; x2] + [1; 0]u   and   y = [1  0][x1; x2]

Figure 40 Problem: Bode plot.


3.30 Given the following state space system matrix, find the eigenvalues and describe the system response:

A = [0  1; -1  -1]

3.31 Given the following transfer function, write the equivalent system in state space representation.

C(s)/R(s) = (2s² + 8s + 6)/(s³ + 8s² + 16s + 6)


4 Analog Control System Performance

4.1 OBJECTIVES

• Define feedback system performance characteristics.
• Develop steady-state and transient analysis tools.
• Define feedback system stability.
• Develop tools in the s, time, and frequency domains for determining system stability.

4.2 INTRODUCTION

Although control engineering is a relatively new field, available controller strategies have grown to the point where it is hard to define the "basic" configurations. The advent of the low cost microcontroller has revolutionized what is possible in control algorithms. This section defines some basic properties relevant to all control systems and serves as a backdrop for measuring and predicting performance in later sections. Control systems are generally evaluated with respect to three basic areas: disturbance rejection, steady-state errors, and transient response. Open loop and closed loop systems are both subjected to the same basic criteria. As the complexity increases, additional characteristics become important. For example, in advanced control algorithms using a plant model for command feedforward, the sensitivity of the controller to modeling errors and plant changes is critical. In this case it is appropriate to evaluate different algorithms based on parameter sensitivity, in addition to the basic ideas presented in this chapter. A second major concern in controller design is stability. Three basic methods, the Routh-Hurwitz criterion, root locus plots, and frequency response plots, are developed in this chapter as tools for evaluating the stability of different controllers. Stability is also closely related to transient response performance, as the examples and techniques illustrate.

4.3 FEEDBACK SYSTEM CHARACTERISTICS

4.3.1 Open Loop Versus Closed Loop

Most engineers are familiar with the idea of open loop and closed loop controllers. By definition, if the output is not being measured, whether directly with transducers (or linkages) or indirectly with estimators, it is an open loop control system and incapable of automatically adjusting if the output wanders. When a transducer is added to measure the output and compare it with the desired input, we have now ‘‘closed the loop’’ and have constructed a closed loop controller. An example of an open loop controller in use by most of us is the washing machine. We insert the clothes and proceed to adjust several inputs based on load size, cleanliness, colors, etc. After completing the ‘‘programming,’’ the start button is pushed and the cycle runs until completed. Internal loops may be closed to control the motor speed, water level, etc., but the primary input-output relationship is open loop. Unless something goes wrong (load imbalance), the machine performs the same tasks in the same order and for the same length of time regardless of whether the load is clean or still dirty. What about our common household clothes dryer: Is it an open or closed loop configuration? The answer depends. Most basic models incorporate a timer that simply defines the amount of time for the dryer to be on, hence open loop. However, many models today also incorporate a humidity sensor that can be set to shut off the dryer when the humidity decreases to a set level; now the dryer is incorporating closed loop controls with the addition of a sensor and error detector. This simple feedback system is illustrated in Figure 1. The dryer example can be contrasted with the following example, a cruise control system, in several ways. What does the dryer controller do with the transducer information? Simply open or close a contact switch; there is no proportionality to the error signal that is received. 
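The contrast between the two actuation styles can be made concrete with two one-line control laws (a hypothetical Python sketch; the gain and threshold values are arbitrary illustration numbers):

```python
def onoff_control(error, threshold=0.0):
    """Dryer-style actuation: the output is either fully on or fully off,
    with no proportionality to the size of the error."""
    return 1.0 if error > threshold else 0.0

def proportional_control(error, kp=2.0):
    """Cruise-control-style actuation: the command to the actuator
    scales continuously with the error."""
    return kp * error

# A small error and a large error produce the same on/off command
# but very different proportional commands.
print(onoff_control(0.1), onoff_control(5.0))                # -> 1.0 1.0
print(proportional_control(0.1), proportional_control(5.0))  # -> 0.2 10.0
```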
In this sense it is a very simple closed loop control system where the error detector could be a simple positive feedback operational amplifier (OpAmp). Now compare this to a typical automobile cruise control system, already discussed in Chapter 1. The signal output of the actuator is proportional to its input. As the error increases, so does the signal to the actuator and the corresponding throttle input to the engine. Most control systems benefit when the command and actuation may be varied continuously. An advanced controller algorithm would have little effect on the clothes dryer since the heater is designed to operate between two states, on and off. Many early programmable logic controllers (PLCs) closed the loop using simple logic switches and on/off actuation relays. This type of controller is still common in many industrial applications. Some of the advantages and disadvantages of each type are listed in Table 1.

Figure 1 Basic closed loop clothes dryer.

Table 1 Characteristics of Open Loop and Closed Loop Controllers

Open loop controllers                         Closed loop controllers
Cheap (i.e., timers vs. transducers, etc.)    Requires additional components ($$)
Unable to respond to external inputs          Reduces effects of disturbances
Generally stable in all conditions            Can go unstable under certain conditions
No control over steady-state errors           Can eliminate steady-state errors

Although in principle open loop controllers are cheaper to design and build than closed loop controllers, this is not always the case. As microcontrollers, transducers, and amplifiers become more economical, often a break-even point exists beyond which the open loop controller is no longer the cheaper alternative. For example, some things can now be done electronically, removing the most expensive hardware in the system and simulating it in software.

4.3.2 Disturbance Inputs

Disturbances, unfortunately, are common inputs to all practical (i.e., in use) controllers. Disturbance inputs might include electrical noise, external (environmental) conditions, and different loading conditions. Referring again to the cruise control system, we can see several potential disturbance inputs. If the cruise control were set on level ground, with no wind and constant surface properties, the vehicle should retain the same speed with or without the feedback system as long as nothing changes. However, in addition to the external disturbances, the vehicle cruise control must still deal with many disturbances arising from the vehicle itself. The spark plugs, alternator, distributor, electric clutches, electric motors (fan and windshield wipers), etc., all produce electrical noise that might interfere with the electrical feedback signal or command signal. Unless the controller closes the loop, it does not know how to respond to changes occurring after the command setting is made. An example of open loop cruise control can be found on some motorcycles. The "cruise control" is a simple clamp on the handlebar throttle that allows the driver to accelerate to the desired speed and set the clamp; if nothing changes, the motorcycle continues at the preset speed. Obviously, the problem arises when the driver encounters a hill and/or different wind conditions: the motorcycle speed will change and the driver will have to readjust the throttle clamp. The changing landscape (hills), wind forces, and electrical noise are what we call disturbance inputs. All real controllers have to deal with disturbance inputs. In the case of the modern automobile cruise control, the controller automatically increases the throttle if a hill or stronger head wind is encountered. This is one of the primary advantages of closed loop controllers.
In fact, if feedforward algorithms are used correctly to follow the input signal, then handling the disturbance inputs becomes the primary objective of the feedback loop. The next section presents methods for designing for minimal influence from disturbances or, as it is commonly termed, disturbance input rejection.


Chapter 4

4.3.3 Steady-State Errors

As we have just seen, one of the primary advantages of feedback is to control/limit the amount of error between the desired and actual variables. Ideally, the error should be zero at all times, but this can never be achieved in actual practice. This section develops approaches for determining the steady-state error arising from both command and disturbance inputs. It is possible to include noise inputs also if desired, but their effect on the magnitude of the steady-state error is usually small. To begin, let us examine a general block diagram with both command and disturbance inputs, given in Figure 2. In Section 3.4.3 we developed the skills to find the relationship between the input and output in a block diagram. How do we deal with the fact that now we have two inputs (R and D) in the block diagram and one output? Since we are working with linear systems, the principle of linear superposition applies, and we can solve for two transfer functions, C(s)/R(s) and C(s)/D(s). Each transfer function is found by setting one input to zero and reducing the block diagram with respect to the remaining input. Each transfer function thus gives the individual effect of one input, command or disturbance, on the output; the total response is simply the two separate responses added together. Determining the two transfer functions allows us to talk about steady-state and transient characteristics arising from either the command input or the disturbance input. The principle of linear superposition does not apply to nonlinear systems, which makes their analysis more difficult. To show this procedure, begin with the block diagram in Figure 2 and first set the disturbance input to zero as shown in Figure 3. This effectively removes the summing junction containing D(s), and now we can close the loop to find the complete transfer function using the same methods as earlier. This results in the following closed loop transfer function:

Figure 2 General block diagram inputs.

Table 2 Control System Notation in Block Diagrams

Signals:            R(s) = Command;  C(s) = Output;  D(s) = Disturbance
Transfer functions: Gc(s) = Controller;  G1(s) = Amplifier;  G2(s) = Physical System;  H(s) = Transducer

Analog Control System Performance

Figure 3 General block diagram with D(s) = 0.

C(s)/R(s) = GcG1G2 / (1 + GcG1G2H)

Since this transfer function represents the output over the input, the goal is to have C/R equal to 1. If we could make this always be the case, then the error would always be zero; that is, C(s) would always equal R(s). If it cannot be made to always equal 1, then how can we optimize it? By making the gain (product) GcG1G2 as large as possible, we make the overall value approach 1. As this gain approaches infinity, the ratio C/R approaches 1, or perfect tracking. Although this looks good from the point of view of reducing steady-state errors, we will see that adding stability and transient-response criteria limits the possible gain in our system. This is one of the fundamental aspects of most design work: balancing several variables to optimize the overall design. Now let us follow the same procedure but set the command to zero to find the effects of disturbance inputs on our system. Setting R(s) = 0 results in the block diagram shown in Figure 4. This results in the following closed loop transfer function:

C(s)/D(s) = G2 / (1 + GcG1G2H)

For this transfer function we would like C/D to equal zero, in which case the disturbance input would have no effect on the output. Obviously, this will not be the case unless the system transfer function equals zero, which cannot happen if we want to control the system: if G2 equals zero, the controller also has no influence on the output. If we want to minimize the effects of disturbances, we can make G2 as small as possible relative to Gc and G1. Increasing the gains Gc and G1 while leaving G2 unchanged does make the overall gain tend toward zero, as desired. Increasing H also helps here but hurts command-following performance. To optimize both, we should try to make Gc and G1 as large as possible. Although this sounds easy to do even with a simple proportional controller, K, for Gc, we will see that a trade-off exists.
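As a quick numerical illustration of these two limits, evaluate both transfer functions at s = 0 for increasing controller gain; a minimal sketch (the scalar DC gains here are made up for illustration, not taken from any figure in the text):

```python
def dc_command(Gc, G1, G2, H=1.0):
    """DC (s -> 0) value of C/R = Gc*G1*G2 / (1 + Gc*G1*G2*H)."""
    return Gc * G1 * G2 / (1 + Gc * G1 * G2 * H)

def dc_disturbance(Gc, G1, G2, H=1.0):
    """DC (s -> 0) value of C/D = G2 / (1 + Gc*G1*G2*H)."""
    return G2 / (1 + Gc * G1 * G2 * H)

# illustrative amplifier and plant DC gains (assumed values)
G1, G2 = 2.0, 0.5
for Gc in (1.0, 10.0, 100.0, 1000.0):
    print(Gc, dc_command(Gc, G1, G2), dc_disturbance(Gc, G1, G2))
```

As Gc grows, C/R climbs toward 1 (better command tracking) while C/D falls toward 0 (better disturbance rejection), exactly the behavior argued above; the cost, as the text notes next, is the trade-off with stability.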
Figure 4 General block diagram with R(s) = 0.

As the gain of Gc is increased, the errors

decrease, but the stability erodes. Hence, good controller design is a trade-off between errors and stability. Over the years many alternative controllers have been developed to optimize this relationship between steady-state and dynamic performance. The discussion here assumes primarily a proportional type controller.

This section thus far has presented general techniques for reducing errors without being specific about steady-state errors. If we actually achieved C/R equal to 1, then in theory (with feasible inputs) the output would always exactly equal the input and steady-state errors would be nonexistent. If only this could always be the case. Real-world components are never completely linear with unlimited output and bandwidth, and thus ensure that all competent controls engineers remain in demand.

Now let us turn our attention specifically to steady-state errors. The previous discussion is a natural lead-in, since the beginning procedure is the same: once the overall system transfer function is found, the steady-state error can be determined. Remember, though, that it is possible to have zero steady-state error and still have large transient errors. From the block diagram we can determine the overall system transfer function and then apply the final value theorem (FVT) to solve for the steady-state error. The only wrinkle occurs when an input other than a unit impulse, step, or ramp is used. If a unit step is used on a type 0 system, the steady value is simply the FVT result from the system transfer function, and the steady-state error is then 1 − Css. This is best illustrated in the following example.

EXAMPLE 4.1 Using the block diagram in Figure 5, determine the steady-state error due to a

1. Unit step input for the command.
2. Step input with a magnitude of 4 on the disturbance input.

Steady-State Tracking Error

To determine the steady-state error due to a unit step input R(s), we set D(s) = 0 and close the loop. This results in the following transfer function:

C(s)/R(s) = 25(s + 1) / (s^2 + 6s + 30)

With R(s) = 1/s, solve for C(s) and apply the FVT:

Figure 5 Example: steady-state errors and FVT.


Csteady state = lim(t→∞) c(t) = lim(s→0) s C(s) = lim(s→0) s · [25(s + 1)/(s^2 + 6s + 30)] · (1/s) = 25/30

The steady-state error is the final input value minus the output value:

Steady-state error = ess = rss − css = 1 − 25/30 = 1/6

So even after all the transients decay, the final output of the system never reaches the value of the command. Steady-State Disturbance Error To solve the error when there is a disturbance acting on the system, we set RðsÞ ¼ 0 and solve for CðsÞ=DðsÞ: This results in the following transfer function: CðsÞ 5ðs þ 1Þ ¼ DðsÞ s2 þ 6s þ 30 With DðsÞ ¼ 4=s, solve for CðsÞ and apply the FVT: Csteady state ¼ lim cðtÞ ¼ lim s CðsÞ ¼ lim s t!1

s!0

s!0

5ðs þ 1Þ 4 20 ¼ s þ 6s þ 30 s 30 2

In this case, the steady-state error is simply the final output value since the input value (desired output) is set to zero: Steady-state error ¼ ess ¼ rss  css ¼ 

20 2 ¼ 30 3

After all the transients from the step disturbance input decay, the final output of the system settles at 0.667 even though the command never changed, leaving a steady-state error of −0.667. Ideally, css in this case would be zero. Several interesting points can be made from this example. First, when we closed the loop relative to R(s) and then to D(s), we found that the denominator of the transfer function remained the same in both cases. Remembering that the information regarding the stability and dynamic characteristics of the system is contained in the characteristic equation, this is exactly what we would expect to find. We still have the same physical system; we are just inserting signals at two different points. If we had a second-order underdamped response from one input, we would expect the same from the other. That is to say, we have not modified the physics of our system by closing the loop at two different points. What caused the difference in responses was the change in the numerator, which reflects the path each input takes through the block diagram to reach the output. The second interesting point found in this example is that the error never goes all the way to zero, even as time goes to infinity. In fact, as we will see for controllers with proportional gain only, this is almost always the case. It can be explained as follows: in order for the physical system in the example to have a non-zero output (as requested by the command), it needs a non-zero input. If the input to the physical system is zero, so is the output. Since the output of the controller in this example provides the input to the physical system, it must be non-zero also. With a simple proportional gain for our controller, we can never have a non-zero output with a zero input; thus, there must always be some error remaining to maintain a
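Both FVT limits in Example 4.1 can be checked with exact rational arithmetic. A minimal sketch: after the s from the FVT cancels the 1/s of the step input, the remaining expression has no pole at the origin, so the limit is just evaluation at s = 0 (the helper name is mine, not the book's):

```python
from fractions import Fraction as F

def eval_at_zero(num, den):
    """Limit as s -> 0 of num(s)/den(s) when den(0) != 0.
    Polynomials are coefficient lists [c0, c1, ...] for c0 + c1*s + ..."""
    return F(num[0], den[0])

# command path: s*C(s) = 25(s + 1)/(s^2 + 6s + 30) after the 1/s cancels
css_cmd = eval_at_zero([25, 25], [30, 6, 1])
ess_cmd = 1 - css_cmd              # 1 - 25/30 = 1/6

# disturbance path: s*C(s) = 4*5(s + 1)/(s^2 + 6s + 30) = (20 + 20s)/(...)
css_dist = eval_at_zero([20, 20], [30, 6, 1])
ess_dist = 0 - css_dist            # 0 - 20/30 = -2/3

print(ess_cmd, ess_dist)
```

Using `fractions.Fraction` keeps the answers exact (1/6 and −2/3) rather than rounded floats, matching the hand computation above.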


signal into the physical system. As the proportional gain is increased, the required input (error) for the same output sent to the physical system is reduced. So, as we found, increasing the proportional gain decreases the steady-state error because the controller can provide more output from a smaller error input. As the next section shows, we can add integrators to our system to eliminate the steady-state error, since an integrator can have a non-zero output even when its input is zero.

It is often easy to determine what the steady-state errors will be by classifying the system with respect to the number of integrators it contains. Remember that an integrator block is 1/s, and thus the number of 1/s terms we can factor out of the transfer function is the number of integrators the system has. We saw that the transfer function for the hydraulic cylinder had one integrator and thus was classified as a type 1 system. A type 0 system has no pure integrators, a type 2 system has two integrators (1/s^2 factors out), and so forth.

EXAMPLE 4.2 To illustrate how to determine the system type number, let us work an example using the hydraulic servo system that we modeled in Chapter 2. The differential equation governing the system motion is

m d^2y/dt^2 + (A/Kp + b) dy/dt = (A Kx/Kp) x

First, take the Laplace transform:

(m s^2 + K1 s) Y(s) = K2 X(s),   where K1 = A/Kp + b and K2 = A Kx/Kp

Write the transfer function:

Y(s)/X(s) = K2 / (s(ms + K1)) = (1/s) · K2/(ms + K1)

Since a 1/s term can be factored out of the transfer function, it is classified as a type 1 system. In this example the integrator is included as part of the physical system model. Integrators can also be added electronically (or mechanically) as part of the control system; the ‘‘I’’ term in a common proportional, integral, derivative (PID) controller represents such an added integrator. To illustrate the general case, close the loop on the type 1 system shown in Figure 6. Closing the loop produces

C(s) = [ (G(s)/s) / (1 + G(s)/s) ] R(s) = [ G(s)/(s + G(s)) ] R(s)

Figure 6 General type 1 unity feedback system.
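The perfect DC tracking of a type 1 loop can be seen numerically: for any G(s) with nonzero DC gain, G(s)/(s + G(s)) approaches 1 as s approaches 0. A small sketch (the G(s) coefficients are assumed values for illustration, in the K2/(ms + K1) form of the hydraulic servo):

```python
def closed_loop_dc(G, s=1e-9):
    """C/R = G(s)/(s + G(s)) evaluated very near s = 0."""
    return G(s) / (s + G(s))

# G(s) = K2/(m*s + K1) with made-up m, K1, K2 values
G = lambda s: 5.0 / (2.0 * s + 3.0)
print(closed_loop_dc(G))   # very close to 1: zero steady-state step error
```

Any other G with a finite, nonzero value at s = 0 gives the same result, which is the point of the argument that follows.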

Figure 7 Block diagram with gain K and system type.

Applying the FVT to the system for a unit step input means that the input (1/s) cancels with the s from the FVT. Therefore, if we let s go to zero in the above transfer function, we end up with G(s)/G(s) = 1. No matter what form the system model G(s) takes, the output is always 1. Since the command is also 1, the error is zero. As we will see with PID controllers, the integral term in the controller forces the steady-state error to be zero for step input functions.

If we reference the block diagram in Figure 7, we can generalize the steady-state error results for different inputs using Table 3. This makes the process of evaluating the steady-state performance of the system quite easy. Knowing the system type number, the overall system gain, and the type of input allows us to immediately calculate the amount of steady-state error in the system. To demonstrate how the table is developed, let us look at two examples and check the values given for the type 0 and type 1 systems.

EXAMPLE 4.3 To begin, we will use the type 0 system shown in Figure 8 and calculate the steady-state errors for the system. We will first apply the information given in Table 3 and then verify it by closing the loop and calculating the steady-state error. To apply the table, we need the steady-state gain K of the system; that is, if we put an input value of 1 into the first block and let all of the transients (s terms) decay, what would be the output? For this system we have three blocks, and the overall gain is their product:

K = 8 × 3 × 2/4 = 12

From the table, for a unit step input the error equals 1/(1 + K), or 1/13; for unit ramp and acceleration inputs the error is infinite. If we wish to verify the table, simply close the loop and apply the FVT. Closing the loop results in the following transfer function:

C(s)/R(s) = 48 / (s^2 + 5s + 52)

Table 3 Steady-State Errors as Functions of System Type Number

Input                               n = 0         n = 1       n = 2
Step input R(s) = 1/s               1/(1 + K)     0           0
Ramp input R(s) = 1/s^2             infinite      1/K         0
Acceleration input R(s) = 1/s^3     infinite      infinite    1/K
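Table 3 can be captured as a small lookup function; a sketch assuming unity feedback and unit-magnitude inputs (the function name and interface are mine, not the book's):

```python
import math

def steady_state_error(type_n, K, input_kind):
    """Steady-state error per Table 3 for a given system type number,
    steady-state gain K, and unit 'step', 'ramp', or 'acceleration' input."""
    order = {'step': 0, 'ramp': 1, 'acceleration': 2}[input_kind]
    if type_n > order:
        return 0.0                       # enough integrators: no error
    if type_n == order:
        return 1.0 / (1 + K) if order == 0 else 1.0 / K
    return math.inf                      # too few integrators

# Example 4.3 (type 0, K = 12) and Example 4.4 (type 1, K = 3):
print(steady_state_error(0, 12, 'step'))          # 1/13
print(steady_state_error(1, 3, 'ramp'))           # 1/3
print(steady_state_error(1, 3, 'acceleration'))   # inf
```

The three branches mirror the table's diagonal structure: above the diagonal the error is zero, on it the error is finite and set by K, below it the error grows without bound.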

Figure 8 Example: steady-state errors for type 0 system.

For a unit step input (R(s) = 1/s):

Csteady state = lim(t→∞) c(t) = lim(s→0) s C(s) = lim(s→0) s · [48/(s^2 + 5s + 52)] · (1/s) = 48/52

Steady-state error = ess = rss − css = 1 − 48/52 = 4/52 = 1/13

For a unit ramp input (R(s) = 1/s^2):

Csteady state = lim(t→∞) c(t) = lim(s→0) s C(s) = lim(s→0) s · [48/(s^2 + 5s + 52)] · (1/s^2) = ∞

So we see that the table gives the correct error, verified by closing the loop and applying the FVT. In the case where we do not have a system type number (or the table), it is always possible, and still quite fast, to just close the loop and apply the FVT as demonstrated here.

EXAMPLE 4.4 To finish our discussion of steady-state errors, let us look at an example type 1 system and calculate the errors resulting from various inputs, first using the system type number and Table 3 and then by closing the loop and applying the FVT. The block diagram representing our system is given in Figure 9. To solve for the steady-state errors using Table 3, we first calculate the steady-state gain of the system. Looking at each block, the overall gain is

K = 2 × 3 × 2/4 = 3

Since this is a type 1 system, with one integrator factored out of the third block, we expect the following steady-state errors: for a unit step input, the error equals 0; for a unit ramp input, the error equals 1/K, or 0.333; and for a unit acceleration input, the error is infinite. To

Figure 9 Example: block diagram of type 1 system.

verify these errors, let's close the loop and apply the FVT for the different inputs. The closed loop transfer function becomes

C(s)/R(s) = 12 / (s^3 + 5s^2 + 4s + 12)

For a unit step input (R(s) = 1/s):

Csteady state = lim(t→∞) c(t) = lim(s→0) s C(s) = lim(s→0) s · [12/(s^3 + 5s^2 + 4s + 12)] · (1/s) = 12/12 = 1

Steady-state error = ess = rss − css = 1 − 12/12 = 0

For a unit ramp input (R(s) = 1/s^2), we take a slightly different approach, since the steady-state output of C(s), using the FVT, goes to infinity:

Csteady state = lim(t→∞) c(t) = lim(s→0) s C(s) = lim(s→0) s · [12/(s^3 + 5s^2 + 4s + 12)] · (1/s^2) = ∞

This is not a surprise, since the ramp input itself goes to infinity. What we are interested in is the steady-state difference between the input and output as both go to infinity. The easiest way to handle this is to write the transfer function for the error in the system and then apply the FVT. The error is the output of the summing junction:

E(s) = R(s) − C(s)   or   C(s) = R(s) − E(s)

Then, if the open loop forward path transfer function is defined by C(s) = GOL(s) E(s), we can solve for E(s)/R(s):

C(s) = GOL(s) E(s) = R(s) − E(s)

or

E(s)/R(s) = 1 / (1 + GOL(s)) = s(s + 1)(s + 4) / [s(s + 1)(s + 4) + 12] = s · (s + 1)(s + 4) / [s(s + 1)(s + 4) + 12]

To find the error as a function of the input, we apply the FVT to E(s) = R(s)/(1 + GOL(s)); compared to before, an extra s term appears in the numerator:

esteady state = lim(t→∞) e(t) = lim(s→0) s E(s) = lim(s→0) s^2 · [(s + 1)(s + 4) / (s(s + 1)(s + 4) + 12)] · (1/s^2) = 4/12 = 1/3
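This limit can be checked with exact arithmetic: once the s from the FVT and the 1/s^2 of the ramp cancel against the s factored out of the numerator, the remaining expression has no pole at the origin and can simply be evaluated at s = 0:

```python
from fractions import Fraction as F

def s_times_E(s):
    # s*E(s) for the ramp input, after cancellation:
    # (s + 1)(s + 4) / (s(s + 1)(s + 4) + 12)
    return (s + 1) * (s + 4) / (s * (s + 1) * (s + 4) + 12)

print(s_times_E(F(0)))   # 4/12 = 1/3
```

Evaluating with `Fraction(0)` keeps the 1/3 exact, matching the hand computation.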

The error of 0.333 is the same as that calculated earlier using the table. If we apply the same FVT procedure for the acceleration input, the input function adds one more s term in the denominator (1/s^3), and as s approaches zero the error grows without bound. So we see that the table gives the correct steady-state error for all three inputs; once again, closing the loop and applying the FVT verified each table entry. If the controllers you are designing do not fit a standard mold, either reduce the block diagram and calculate the steady-state errors or run a computer simulation long enough to let all transients decay. Using the FVT is generally the easiest


method for quickly determining the steady-state performance of any controller block diagram model.

4.3.3.1 System Type Number from Bode Plots

A similar analysis, using the system type number to determine the steady-state error as a function of the input, can be done with Bode plots. Recall from when we developed Bode plots from transfer functions that each integrator in the system added a low-frequency asymptote with a slope of −20 dB/decade (dec) and a constant phase contribution of −90 degrees. From this information it is straightforward to determine the system type number and then apply Table 3. For example, if the low-frequency asymptote on our Bode plot has a slope of −40 dB/dec and an initial phase angle of −180 degrees, then we know that we have two integrators in our system and therefore a type 2 system. Now it is a simple matter of using the table, as was demonstrated after obtaining the type number from the transfer function.
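This check is easy to automate: estimate the low-frequency slope of |G(jω)| in dB/decade and divide by −20. A sketch (the function name and frequency points are my choices):

```python
import math

def system_type_from_bode(G, w1=1e-5, w2=1e-4):
    """Estimate the type number from the low-frequency magnitude slope:
    each free integrator contributes -20 dB/decade."""
    db1 = 20 * math.log10(abs(G(1j * w1)))
    db2 = 20 * math.log10(abs(G(1j * w2)))
    slope = (db2 - db1) / (math.log10(w2) - math.log10(w1))  # dB/dec
    return round(-slope / 20)

# the type 1 open loop from Example 4.4: G(s) = 12/(s(s + 1)(s + 4))
G = lambda s: 12 / (s * (s + 1) * (s + 4))
print(system_type_from_bode(G))   # 1
```

The two sample frequencies just need to sit well below the lowest corner frequency so the asymptote dominates; one decade apart keeps the slope arithmetic simple.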

4.3.4 Transient Response Characteristics

The transient response is one of the primary design criteria (and often one of the limits on the design) when choosing the type of controller and when tuning it. Improper design decisions may lead to severe overshoot and oscillations. In Section 3.3.2 the transient responses to step inputs for first- and second-order systems were analyzed for open loop systems. The same procedure applies to closed loop control systems. In fact, as shown in the next section, for linear systems the total response is simply the sum of first- and second-order responses; for example, a linear third-order system can be factored into three first-order systems or one first-order and one second-order system. This will become evident using root locus plots. The performance specifications developed earlier are also important, since they are often used to classify the relative stability level of the system. The important parameters to remember are time constant, damping ratio, and natural frequency, as these also define the closed loop response characteristics. Information from Bode plots can also be used to define these parameters. The common transient response measurements are given in Table 4. These are the parameters commonly given to the designer of the control system as goals for the controller to meet. More often than not, the specifications are in the time domain, as most people relate more easily to these measurements. Since most design methods take place in the s-domain, it is the responsibility of the designer to understand the relationships shown in the table and arrive at the desired pole locations in the s-plane. These relationships are commonly used to take the time domain specifications and eliminate the portions of the s-plane where the specifications are not met. For example, if a required settling time is one of the criteria, then the region of the s-plane to the right of the real-axis value giving that settling time is invalid; all poles whose real components lie at or to the left of this value are valid. By combining the different criteria, the desired pole locations become increasingly well defined. The big difference in the transient characteristics of closed loop control systems versus open loop systems is that implementing different controller gains can change them. This is one of the great benefits to implementing closed loop

Table 4 Parameters Commonly Used for Evaluating Control System Performance

Specifications in the s-domain                                           Specifications in the time domain

First-order systems
Time constant or pole location on the real axis                          Settling time or rise time

Second-order systems
Ratio of the imaginary to the real components of complex roots           Damping ratio
Radial distance from origin in s-plane                                   Natural frequency
Damping ratio (defined above)                                            Percent overshoot
Natural frequency and damping ratio                                      Rise time
Damped natural frequency or imaginary component of complex roots         Peak time
Natural frequency and damping ratio or real component of complex roots   Settling time
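The second-order relationships in Table 4 can be turned into s-plane constraints; a sketch using the common second-order rules of thumb (2% settling time ts ≈ 4/(ζωn) and percent overshoot Mp = 100·exp(−πζ/√(1 − ζ²)); the numbers in the example call are made up):

```python
import math

def pole_region(settling_time, percent_overshoot):
    """Return (sigma_max, zeta_min): closed loop poles should have
    real part <= sigma_max and damping ratio >= zeta_min."""
    sigma_max = -4.0 / settling_time            # from ts ~ 4/(zeta*wn)
    ln_mp = math.log(percent_overshoot / 100.0)
    zeta_min = -ln_mp / math.sqrt(math.pi**2 + ln_mp**2)
    return sigma_max, zeta_min

# e.g. a 2 s settling time with 5% allowed overshoot
sigma_max, zeta_min = pole_region(2.0, 5.0)
print(sigma_max, zeta_min)   # -2.0 and about 0.69
```

Each spec carves away part of the s-plane: the settling time bounds the real part, and the overshoot bounds the angle from the negative real axis through the damping ratio, which is the intersection process the text describes.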

controllers, that is, the ability to make the system behave in a variety of ways simply by changing an electrical potentiometer representing controller gain. A given system might be underdamped, critically damped, overdamped, or even unstable based on the chosen value of one gain. Therefore, our controller design becomes the means by which we move the system poles to the locations in the s-plane specified by applying the criteria defined in Table 4. The techniques presented in the next section allow us to design controllers that meet certain transient response characteristics, even though the physical system contains actual values of damping and natural frequency different from the desired values. Both root locus techniques and Bode plot methods are effective in designing closed loop control systems. Another benefit of closed loop controllers is that since we can electronically (or mechanically) control response characteristics, we can add ‘‘damping’’ to the system without increasing the actual power dissipation and energy losses. By adding a velocity sensor and feedback loop to a typical position control system, it is easy to control the damping, which is certainly beneficial since less heat is generated by the physical system.

4.4 FEEDBACK SYSTEM STABILITY

The skills to develop models and analyze them are required when we start discussing the stability and design of controllers. The term stability has taken on many meanings in discussions of control systems, so for the sake of consistency some quick definitions are given here. Global stability refers to a system that is stable under all circumstances; in other words, the response of the system will always converge to a finite steady-state value. Marginal stability refers to the boundary between stable and unstable behavior: an oscillatory motion continues indefinitely in the absence of new inputs, neither decaying nor growing in size. This occurs when the roots of the characteristic equation (denominator of the transfer function) are purely imaginary. Unstable systems,


once set in motion, will continue to grow in amplitude until enough components saturate or something breaks. Finally, relative stability is a term you hear often when discussing control system performance, and it means different things to different people. We should think of it as a measure of how close to instability we are willing to operate. As we approach the unstable region, the overshoot and oscillatory motions increase until at some point they become unacceptable. Hence a computer-controlled machining process might be designed to always be overdamped, since no overshoot is allowed while machining; this limits the response time and is not the answer for everyone. A cruise control system, for example, might be allowed 5% overshoot (several mph at highway speeds) to reach the desired speed faster. So different situations call for different definitions of relative stability, which is often set by the allowable performance specifications imposed on the system. The root locus and Bode plot methods are powerful tools, since they allow us to quickly estimate things like overshoot, settling time, and steady-state errors while we are designing our system. These are the topics of the next several sections.

4.4.1 Routh-Hurwitz Stability Criterion

The Routh-Hurwitz stability criterion is used to quickly determine whether or not a system is stable when methods to find the roots of a polynomial are not readily available. As seen earlier, once the roots of the characteristic equation are known and plotted in the s-plane, the stability of the system is also known. While most handheld calculators can calculate the roots of the characteristic equation, few can determine symbolically where the roots become unstable as a function of a variable (the gain K or even a system model parameter), as the Routh-Hurwitz method can. In general, however, computer programs allow us to calculate the gains where systems go unstable and have in many respects tended to minimize the use of Routh-Hurwitz techniques. The steps for using the Routh-Hurwitz method are given below, followed by an example problem to which the method is applied.

Step 1: Write the characteristic equation as a polynomial in s using the following notation:

a0 s^n + a1 s^(n−1) + a2 s^(n−2) + ... + a(n−1) s + an = 0

Step 2: Examine the coefficients ai. If any one is negative or missing, the system is not stable; at least one root already lies in the right half of the s-plane or on the imaginary axis.

Step 3: Arrange the coefficients in rows beginning with s^n and ending with s^0; the columns taper to a single value for the s^0 row. Arrange the table as follows:

s^n      | a0   a2   a4   a6  ...   (all original even-numbered coefficients)
s^(n−1)  | a1   a3   a5   a7  ...   (all original odd-numbered coefficients)
s^(n−2)  | b1   b2   b3
s^(n−3)  | c1   c2   c3
  :      |  :
s^0      | f1

The bi's and all subsequent rows are calculated from values in the previous two rows, following the patterns below:

b1 = (a1 a2 − a0 a3) / a1        b2 = (a1 a4 − a0 a5) / a1        b3 = (a1 a6 − a0 a7) / a1
c1 = (b1 a3 − a1 b2) / b1        c2 = (b1 a5 − a1 b3) / b1        c3 = (b1 a7 − a1 b4) / b1
d1 = (c1 b2 − b1 c2) / c1        d2 = (c1 b3 − b1 c3) / c1

This pattern is extended until the s^0 row is reached and all coefficients are determined. The last two rows have only one column (one coefficient), the third-from-last row has two coefficients, and so on. If an element turns out to be zero, a variable representing a number close to zero, e.g., ε, can be used until the process is completed; this indicates the presence of a pair of imaginary roots and that some part of the system is marginally stable.

Step 4: To determine stability, examine the first column of coefficients. Any sign change (+ to − or − to +) indicates the occurrence of an unstable root. The number of sign changes corresponds to the number of unstable roots of the characteristic equation.

EXAMPLE 4.5 To illustrate the process using the block diagram in Figure 10, close the loop to find the overall system transfer function and use the Routh-Hurwitz method to determine stability. When we close the loop we get C/R = KG/(1 + KG), which leads to the characteristic equation

1 + K/(s(s + 1)(s + 3)) = 0   or   s^3 + 4s^2 + 3s + K = 0

Then the Routh-Hurwitz table becomes

s^3 | 1          3
s^2 | 4          K
s^1 | 3 − K/4    0
s^0 | K
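The table above, and Step 3 in general, can be automated for numeric coefficients; a minimal pure-Python sketch (the function names are mine, and the ε substitution from Step 3 is applied for zero pivots):

```python
def routh_first_column(coeffs):
    """First column of the Routh array; coeffs lists the characteristic
    polynomial from the highest power of s down to s^0."""
    width = (len(coeffs) + 1) // 2
    rows = [coeffs[0::2] + [0.0] * (width - len(coeffs[0::2])),
            coeffs[1::2] + [0.0] * (width - len(coeffs[1::2]))]
    for _ in range(len(coeffs) - 2):
        up2, up1 = rows[-2], rows[-1]
        pivot = up1[0] if up1[0] != 0 else 1e-12   # epsilon trick
        rows.append([(pivot * up2[j + 1] - up2[0] * up1[j + 1]) / pivot
                     for j in range(width - 1)] + [0.0])
    return [row[0] for row in rows]

def sign_changes(col):
    # each sign change in the first column marks one unstable root
    return sum(1 for a, b in zip(col, col[1:]) if a * b < 0)

# s^3 + 4s^2 + 3s + K for two values of K:
print(sign_changes(routh_first_column([1, 4, 3, 5.0])))    # 0: stable
print(sign_changes(routh_first_column([1, 4, 3, 13.0])))   # 2 unstable roots
```

For K = 5 the first column is 1, 4, 1.75, 5 (no sign changes); for K = 13 it is 1, 4, −0.25, 13 (two sign changes), matching the K < 12 boundary derived next.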

The system will become unstable when a sign change occurs in the first column. When K/4 becomes greater than 3, the sign of the s^1 entry changes; thus K < 12 is the allowable range of gain before the system becomes unstable. If K becomes less than zero the sign also changes, although a negative gain is not normal in a typical controller (it becomes positive feedback). So the allowable range of K for which the system is stable is

Figure 10 Example: block diagram of system.


0 < K < 12

Several quick comments are in order regarding this example. Based on what we have already learned, we can see that the open loop poles from the block diagram are all stable (all fall in the left-hand plane, LHP). This example thus illustrates how closing a control feedback loop can cause a system to go unstable (hence the range of K for stability). Also, this is a type 1 system with one integrator, so it has zero steady-state error for a step input and an error equal to 1/K for ramp inputs. What the Routh-Hurwitz criterion does not give us is insight into the type of response at various gains and how the system ‘‘approaches’’ the unstable region as the gain K is increased. It is this ability (among others) that has made the root locus techniques presented in the next section so popular. Most courses, textbooks, and computer simulation packages include this technique in their repertoire of design tools for control systems.

4.4.2 Root Locus Methods

Root locus methods are commonly taught in most engineering control systems courses and see widespread use. If we are able to develop a model, whether ordinary differential equations (ODEs), transfer functions, or state space matrices, then root locus techniques may be used. If the model is nonlinear, it must first be linearized around some operating point (which raises the question: over what range are the root locus plots valid?). In the larger picture, root locus techniques provide a quick way to see the system response and how it changes when a proportional controller is added. With advanced controllers and nonproportional gains, the method must be slightly modified if the ‘‘handbook’’ type of approach is to be used. Of course, if a computer is programmed to calculate all the roots while plotting, then any variable may be varied to form the plot. The use of the computer in developing root locus plots has made the method a powerful tool for studying multiple gain loci, parameter sensitivity, and disturbance sensitivity. The basic idea of root locus plots relates to the fact that the roots of the characteristic equation largely determine the transient response characteristics of the control system, as seen in the section on Laplace transforms. Since the roots of the characteristic equation are primarily responsible for the response characteristics, it is helpful to know how the roots change as parameters in the system change. This is precisely what the root locus plot shows: the migration of the roots as the ‘‘gain’’ of the system changes. The s-plane, shown in Figure 8 in Chapter 3, is used to plot the real and imaginary components of the roots as they migrate. The pole locations in the s-plane correspond either to a time constant (if on the real axis) or to a damping ratio and natural frequency (complex conjugate pairs) and thus can be used to estimate the range of system performance available.
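The root migration itself is easy to compute point by point; a sketch using NumPy (an assumed dependency) for the characteristic equation s^3 + 4s^2 + 3s + K = 0 from Example 4.5:

```python
import numpy as np

# roots of s^3 + 4s^2 + 3s + K migrate as the gain K increases;
# a pair crosses into the right half plane at K = 12
for K in (0.5, 4.0, 12.0, 20.0):
    roots = np.roots([1.0, 4.0, 3.0, K])
    print(K, np.round(roots, 3))
```

At K = 12 the polynomial factors as (s + 4)(s^2 + 3), placing a pair exactly on the imaginary axis at ±j√3, in agreement with the Routh-Hurwitz result; sweeping K over many values and plotting the roots produces the root locus described below.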
These are exactly the parameters that we related to the performance criteria in the time domain (Table 4). Since we are concerned with the location of the system poles (roots of the characteristic equation), we must determine how the denominator of the transfer function changes as various coefficients are changed. Although changing any coefficient in the denominator causes the roots to move in the s-plane, classical root locus techniques are developed for the case where only the gain K in the system is varied. If we wish to examine the effects of additional parameters (i.e., mass in the

Analog Control System Performance


system, derivative controller gain, etc.), then we must rearrange the transfer function to make this variable the "gain K," or multiplier, on the system. If we cannot, then we are unable to use the rules developed here for easy plotting of the root loci. The concepts are still the same, and the pole migrations can still be plotted, but another method must be available to solve for the poles every time the parameter is changed. With the variety of software packages available today, this is not that difficult a problem. To develop the guidelines for constructing root locus plots, we need to find the roots of the characteristic equation. If we close the loop of a typical block diagram with a feedback path, we get the following overall system transfer function and, of interest, the characteristic equation:

C(s)/R(s) = GcG1G2/(1 + GcG1G2H)

If we let Gc be a proportional gain K and let G1(s) and G2(s) be combined and represented by one system transfer function G(s), then the characteristic equation can be represented by

CE = 1 + KGH

The roots of the characteristic equation are determined by setting it equal to zero, where

1 + K G(s) H(s) = 0    or    K G(s) H(s) = −1

These equations provide the foundation for root locus plotting techniques. Since the product G(s)H(s) contains both a numerator and a denominator represented by polynomials in s, it can be written in terms of the roots of the numerator, called zeros, and the roots of the denominator, called poles, as follows:

K (s − z1)(s − z2)⋯(s − zm) / [(s − p1)(s − p2)(s − p3)⋯(s − pn)] = −1

Thus using this notation we have m zeros and n poles (from the subscript notation). Since this equation equals −1 and contains complex variables, we can write it as two conditions that must always be met in order for the product KG(s)H(s) to equal −1. These two conditions are called the angle condition and the magnitude condition.

Angle condition:

[∠(s − z1) + ∠(s − z2) + ⋯ + ∠(s − zm)] − [∠(s − p1) + ∠(s − p2) + ⋯ + ∠(s − pn)] = odd multiple of ±180 degrees (the negative sign portion)

Magnitude condition:

K |s − z1| |s − z2| ⋯ |s − zm| / (|s − p1| |s − p2| |s − p3| ⋯ |s − pn|) = 1

The angle condition is responsible only for the shape of the plot; the magnitude condition determines the location along the plot line. Therefore, the whole


Chapter 4

root locus plot can be drawn using the angle condition. The only time we use the magnitude condition is to locate our position along the plot. For any physical system, n ≥ m, and this simplifies the rules used to construct root locus plots. The basic rules for developing root locus plots are given below in Table 5. For consistency, poles are plotted using x's and zeros are plotted using o's; this simplifies the labeling process. The rules are based on the angle and magnitude conditions, as will be explained through the use of several examples.

Step 1: This provides the groundwork for developing the root locus plot. We develop the open loop transfer function by examining our block diagram and combining the transfer functions around the complete loop. The resulting open loop transfer function needs to be factored to find the roots of the denominator (poles) and numerator (zeros). For systems larger than second-order, there are many calculators and computer programs capable of finding the roots for us.
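The two conditions can be tested mechanically. The sketch below (illustrative Python; the function names are mine, not the book's) checks whether a candidate point satisfies the angle condition and evaluates the gain the magnitude condition implies at that point:

```python
import cmath
import math

def on_locus(s, zeros, poles, tol_deg=1.0):
    # Angle condition: sum of zero angles minus sum of pole angles
    # must be an odd multiple of 180 degrees
    ang = sum(cmath.phase(s - z) for z in zeros) - sum(cmath.phase(s - p) for p in poles)
    deg = math.degrees(ang) % 360.0
    return abs(deg - 180.0) < tol_deg

def gain_at(s, zeros, poles):
    # Magnitude condition: K * prod|s - z| / prod|s - p| = 1
    num = math.prod(abs(s - z) for z in zeros)  # 1 if there are no zeros
    return math.prod(abs(s - p) for p in poles) / num

# Example: G(s)H(s) = 1/((s + 1)(s + 3)); the point -2 + 2j is on the locus
poles = [-1.0, -3.0]
print(on_locus(complex(-2, 2), [], poles))  # True
print(gain_at(complex(-2, 2), [], poles))   # approximately 5, the loop gain there
```

Points on the positive real axis fail the angle test here (all angles sum to zero), which matches the real-axis rule developed below.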

Table 5  Guidelines for Constructing Root Locus Plots

1. From the open loop transfer function, G(s)H(s), factor the numerator and denominator to locate the zeros and poles of the system.
2. Locate the n poles on the s-plane using x's. Each locus path begins at a pole, hence the number of paths equals the number of poles, n.
3. Locate the m zeros on the s-plane using o's. Each locus path will end at a zero, if available; the extra paths are asymptotes and head toward infinity. The number of asymptotes therefore equals n − m.
4. To meet the angle condition, the asymptotes will have these angles from the positive real axis: one asymptote, 180 degrees; two asymptotes, 90 and 270 degrees; three asymptotes, ±60 and 180 degrees; four asymptotes, ±45 and ±135 degrees.
5. All asymptotes intersect the real axis at the same point. The point, σ, is found by [(sum of the poles) − (sum of the zeros)] / (number of asymptotes).
6. The locus paths include all portions of the real axis that are to the left of an odd number of poles and zeros (complex conjugates cancel each other).
7. When two loci approach a common point on the real axis, they split away from or join the axis at an angle of 90 degrees. The break-away/break-in points are found by solving the characteristic equation for K, taking the derivative with respect to s, and setting dK/ds = 0. The roots of dK/ds = 0 occurring on valid sections of the real axis are the break points.
8. Departure angles from complex poles or arrival angles to complex zeros can be found by applying the angle condition to a test point in the vicinity of the root.
9. The point at which the loci cross the imaginary axis, and thus go unstable, can be found using the Routh-Hurwitz stability criterion or by setting s = jω and solving for K (can be a lot of math).
10. The system gain K can be found by picking the pole locations on the locus path that correspond to the desired transient response and applying the magnitude condition to solve for K. When K = 0, the paths start at the open loop poles; as K → ∞, the poles approach available zeros or asymptotes.
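Rules 3 through 5 of the table lend themselves to small helpers. An illustrative sketch (hypothetical function names, not the book's):

```python
def n_asymptotes(poles, zeros):
    # Rule 3: the number of asymptotes is n - m
    return len(poles) - len(zeros)

def asymptote_angles(poles, zeros):
    # Rule 4, general form: theta_k = 180(2k - 1)/(n - m), k = 1..(n - m)
    nm = n_asymptotes(poles, zeros)
    return [180.0 * (2 * k - 1) / nm for k in range(1, nm + 1)]

def asymptote_centroid(poles, zeros):
    # Rule 5: (sum of poles - sum of zeros)/(n - m); imaginary parts of
    # conjugate pairs cancel, so the intersection point is real
    return (sum(poles) - sum(zeros)).real / n_asymptotes(poles, zeros)

# Third-order example: poles at 0, -1, -3, no zeros
print(asymptote_angles([0, -1, -3], []))    # [60.0, 180.0, 300.0], i.e. +/-60 and 180
print(asymptote_centroid([0, -1, -3], []))  # -4/3
```

Note that 300 degrees is the same line as −60 degrees, so the computed list reproduces the ±60/180 pattern in rule 4.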


Step 2: Now that we have our poles (and zeros) from step 1, draw the s-plane axes and plot the pole locations using x's as the symbols. When the gain K = 0 in our system, these pole locations are the beginning of each locus path. If two poles are identical (i.e., repeated roots), then two paths will begin at their location. Each pole is the beginning point for one root locus path.

Step 3: This is the same procedure as followed in step 2, except that now we locate each zero in the s-plane using o's as symbols. Each zero location is an attractor of root locus paths, and as K → ∞, every zero location will have a locus path approach it in the s-plane. The remaining steps help us determine how the root locus paths travel from the poles to the zeros (or asymptotes if there are more poles than zeros).

Step 4: It is easy to see from steps 2 and 3 that if we have more poles than zeros, then some root locus paths do not have a zero to travel to. In this case (which is actually the most common case), we will have some paths "leaving" the s-plane as the gain K is increased. Fortunately, because of the angle condition, these paths are defined based on the number of asymptotes that we have in our plot. The angles that the asymptotes make with the positive real axis are given in Table 6.

Step 5: All asymptotes intersect the real axis at a common point. This intersection point can be found from the locations of our poles and zeros. Once we know the location, coupled with the angles calculated in step 4, we can plot the asymptote lines on the s-plane. Remember that the root locus paths approach the asymptotes as K approaches infinity; they do not necessarily travel directly to the intersection point or lie on the lines themselves. The intersection point, σ, is found by summing the poles, subtracting the sum of the zeros, and dividing by the number of asymptotes, n − m:

σ = (Σ_{i=1}^{n} p_i − Σ_{i=1}^{m} z_i) / (n − m)

It should be noted that only the real portions of complex conjugate roots need to be included in the summations, since their imaginary portions are always opposite of each other and cancel when the pair of complex conjugate roots is summed.

Step 6: Now we are finally ready to start plotting the root locus paths in the s-plane. We begin by including all sections of the real axis that fall to the left of an odd number of poles and zeros. Each one of these sections meets the angle condition and

Table 6  Asymptote Angles for Root Locus Plots

Number of asymptotes, n − m    Angles with the positive real axis
1                              θ = 180 degrees
2                              θ = 90 and 270 degrees
3                              θ's = ±60 and 180 degrees
4                              θ's = ±45 and ±135 degrees
General, n − m                 θ_k = 180(2k − 1)/(n − m) degrees, k = 1, 2, ..., n − m


is a valid segment of the root locus paths. This rule is easy to apply: locate the rightmost pole or zero and, working our way to the left, mark every other section of the real axis that falls between poles or zeros, beginning with the first segment, since it is to the left of an odd number (one).

Step 7: If a valid section of the real axis falls between two poles, then the paths must necessarily break away from the real axis and travel to either zeros or asymptotes. If a valid section of the real axis falls between two zeros, then the paths must join the axis between these two points and travel to the zeros as K approaches infinity. The points where the root locus paths leave or join the real axis are termed break-away and break-in points. Both paths, whether leaving or joining, do so at the same point, and at an angle of 90 degrees. Since any path not on the real axis involves imaginary parts, which always occur in complex conjugate pairs, root locus plots are symmetric about the real axis. If we look at the asymptote angles derived in step 4, we see that they are always mirrored with respect to the real axis. To solve for the break-away and/or break-in points, we solve the characteristic equation for the gain K, take the derivative with respect to s and set it to zero, and solve for the roots of the resulting equation. Valid points will fall on those sections of the real axis containing the root locus paths:

dK/ds = 0; solve for the roots
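Treating K(s) as a polynomial makes the dK/ds = 0 procedure mechanical. A sketch using NumPy's polynomial helpers, assuming the open loop G(s)H(s) = 1/[(s + 1)(s + 3)], for which 1 + KG(s)H(s) = 0 gives K = −(s² + 4s + 3):

```python
import numpy as np

# K(s) = -(s^2 + 4s + 3); break points are the roots of dK/ds = 0
K_poly = np.array([-1.0, -4.0, -3.0])  # coefficients of K(s)
dK_ds = np.polyder(K_poly)             # -2s - 4
print(np.roots(dK_ds))                 # [-2.], midway between the poles at -1 and -3
```

The candidate root lies on the valid real-axis segment between the two poles, so it is a genuine break-away point.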

By using this technique we are finding the rate of change of K with respect to the rate of change of s. Break-away points occur at local maxima and break-in points at local minima (found when we set the derivative to zero and solve for the roots).

Step 8: To determine the direction in which the paths leave complex conjugate poles, or arrive at complex conjugate zeros, we can apply the angle condition to the pole or zero in question. We use the fact that the angle condition is satisfied whenever the angles add up to an odd multiple of 180 degrees. That is, if we rotate a vector lying on the positive real axis either +180 or −180 degrees, it then lies on the negative real axis, giving us the −1 relationship [KG(s)H(s) = −1]. The same is true if we rotate the vector ±540 or ±900 degrees, 1½ times or 2½ times around, respectively. Any test point, s, that falls on a root locus path must meet the angle condition. If we place the test point very close to the pole or zero in question, then the angle of the test point relative to that pole or zero which meets the angle condition is the departure or arrival angle. Graphically this may be shown as in Figure 11. In the s-plane, we have poles at −2 ± 2j and −1 and a zero at −3. If the pole in question (regarding the angle of departure from it) is at −2 + 2j, then the sum of all the angles from all the other poles and zeros, plus the angle from the test point near the pole, must be an odd multiple of 180 degrees. This can be expressed as

Σ_{i=1}^{m+n} φ_i = ±180(2k + 1) degrees,    k = 0, 1, 2, ...

Since zeros are in the numerator, they contribute positively to the summation; poles, since they are in the denominator, contribute negatively. The way Figure

Figure 11  Angle of arrival and departure calculations.

11 is labeled, φ1 is the angle of departure relative to the real axis. To show how these angles sum, let us calculate φ1:

−φ1 − φ2 − φ3 + φ4 = −180(2k + 1) degrees
−φ1 − 90 degrees − (180 degrees − tan⁻¹2) + tan⁻¹2 = −180(2k + 1) degrees
−φ1 − 143 degrees = −180 degrees

Finishing the procedure, the angle of departure is φ1 = 37 degrees. In many cases, when developing root locus plots, it is not necessary to go through these calculations because the approximate angles of departure or arrival are obvious. The exact path is not that critical; the general shape and where the path goes concern us more in designing control systems. When the departure or arrival paths are not clear, as when pairs of poles and asymptotes are relatively close to each other and it is unclear which pole approaches which asymptote, it is worth performing the calculations.

Step 9: If the root locus paths cross over into the right-hand plane and the system becomes unstable, we often want to know the point at which the system becomes unstable and the associated gain at that point. We have two options for solving for the roots at this condition: Routh-Hurwitz, or substituting s = jω for the poles and solving for K. The Routh-Hurwitz technique is described in Section 4.4.1 and is applied to the characteristic equation of the closed loop system. Alternatively, since at the point where the system becomes unstable we know that the real components of the poles are zero, we can substitute s = jω into the characteristic equation and solve for K and ω by equating the real and imaginary coefficients to zero (two equations, two unknowns). For large systems, the amount of analytical work required can become significant, and the Routh-Hurwitz method will usually allow an easier solution.

Step 10: Assuming that our root locus paths, now drawn on our plot from steps 1–8, pass through a point at which we want our controller to operate, we want to find the system gain K at that point.
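The departure-angle bookkeeping for the Figure 11 configuration can be automated. An illustrative sketch (the function name is mine) that applies the angle condition at the pole in question:

```python
import cmath
import math

def departure_angle(pole, poles, zeros):
    # Angle condition applied just off the pole in question:
    # theta = 180 + sum(angles from zeros) - sum(angles from other poles)
    ang = 180.0
    ang += sum(math.degrees(cmath.phase(pole - z)) for z in zeros)
    ang -= sum(math.degrees(cmath.phase(pole - p)) for p in poles if p != pole)
    return ((ang + 180.0) % 360.0) - 180.0  # wrap into (-180, 180]

# Figure 11 configuration: poles at -2 +/- 2j and -1, zero at -3
poles = [complex(-2, 2), complex(-2, -2), -1.0]
angle = departure_angle(poles[0], poles, [-3.0])
print(round(angle, 1))  # 36.9 degrees, matching the roughly 37 degrees found graphically
```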
Steps 1–8 are based on the angle condition, and now to find the required gain at any point along the root locus, we need to apply the magnitude condition. If our actual root locus paths cross over into the right-hand plane and are sufficiently close to the asymptotes (or to a point where we know the


value of), then we can also use the methods from this step to find the gain K where the system goes unstable, as step 9 did. Remember that our magnitude condition was stated earlier as

K |s − z1| |s − z2| ⋯ |s − zm| / (|s − p1| |s − p2| |s − p3| ⋯ |s − pn|) = 1

Graphically, the magnitude condition means that if we calculate the distances from each pole and zero to the desired point on the root locus path, then K is chosen such that the total quantity of the factors becomes 1. For example, the distance |s − z1|, as it appears in the magnitude condition, is the distance between the zero at z1 and the desired location (pole) along our path. Analytically, we may also solve for K by setting the s in each term equal to the desired pole location and multiplying each term out. The magnitude can then be calculated by taking the square root of the sum of the real terms squared and the imaginary terms squared. For both methods, if there are no zeros in our system the numerator is equal to 1. This allows us to cross-multiply, and K is found using the simplified equation below:

K = |s − p1| |s − p2| |s − p3| ⋯ |s − pn|

We can apply the magnitude condition any time we wish to know the gain required for any location on our root locus plots. This is the overall system gain; to calculate the controller gain we need to account for any additional gain that is factored out in front of G(s)H(s) when the transfer function is factored. To illustrate the effectiveness of root locus plots, we will work several different examples in the remainder of this section. The examples begin with a simple system and progress to slightly more complex systems as we learn the basics of developing root locus plots using the steps above.

EXAMPLE 4.6

Develop the root locus plot for the simple first-order system represented by the open loop transfer function below:

G(s)H(s) = 2/(s + 3)

Step 1: The transfer function is already factored; there are no zeros and one pole. The pole is at s = −3.
Step 2: Locate the pole on the s-plane (using an x since it is a pole) as in Figure 12.
Step 3: There are no zeros.
Step 4: Since we have one pole and no zeros, n − m = 1 − 0 = 1, and we have one asymptote. For one asymptote, the angle with the positive real axis is 180 degrees, and the asymptote is the negative real axis, all the way out to −∞.
Step 5: There is no intersection point with only one asymptote; it lies on the real axis.
Step 6: With only one pole, all sections of the real axis to the left of the pole are valid. For our first-order system this coincides with the asymptote.

Figure 12  Example: locating poles and zeros in the s-plane (first-order system).

Step 7: There are no break-away or break-in points for a first-order system.
Step 8: The angle at which the locus path leaves the one pole in our system is 180 degrees, since there is only one angle to sum and the sum must always equal an odd multiple of 180 degrees. This is also a good time to verify why the axis to the right of the pole does not meet the angle condition: if we were to place our test point anywhere to the right of the pole, then the angle from the pole to the test point is 0 degrees (or 360 degrees, 720 degrees, but never an odd multiple of 180 degrees). We can verify the valid sections of the real axis for any system by applying this procedure using the angle condition.
Step 9: The path never crosses the imaginary axis, and the system cannot become unstable, even when the loop is closed.
Step 10: Let us say that we want a system time constant (τ) of 0.25 seconds. Since the pole position on the negative real axis equals −1/τ, we want the point where the locus path is at s = −1/0.25 = −4. Since we do not have any zeros in this system, the numerator is equal to one and we find K to be

K = |s − p1| = |−4 − p1| = |−4 + 3| = 1

This is the overall loop gain required, not just the gain contributed by the controller. For the controller gain we need to account for the 2 in the numerator of G(s)H(s). Our controller gain, Kp, is then found as

Kp · 2 = 1    or    Kp = 1/2 = 0.5

Now our final root locus plot, and the pole location when Kp = 0.5, is given in Figure 13. For this example we will connect this plot with what we already know about block diagram reduction and characteristic equations. If we represent the transfer function used in this example as shown in the block diagram in Figure 14, we can close the loop and analytically vary K to develop the same plot. The closed loop transfer function becomes:

C(s)/R(s) = 2K/(s + 3 + 2K)

Figure 13  Example: root locus plot for first-order system.

When K equals 1/2, we end up with the closed loop transfer function:

C(s)/R(s) = 1/(s + 4)

This is exactly the same controller gain solved for using the root locus plot and applying the magnitude condition when the desired pole was at s = −4. For first- or second-order systems the analytical solution is quite simple and can be used in place of, or to verify, the root locus plot. For example, if we take the closed loop transfer function above with K still a variable in the denominator, it is easy to see that when K = 0 the pole is at −3, our starting point, and as K is increased the pole moves farther and farther to the left. As K approaches infinity, so does the pole location. Thus both our beginning point (the open loop pole) and our asymptote are verified as we increase the gain K. Finally, since the roots of a second-order system are also easily solved for as K varies (quadratic equation), this same method can be used (as will be shown in the next example). Beyond second-order systems, however, the root locus techniques are much easier.
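The single closed-loop pole of this example can be tracked directly; a small numerical check of the two key gains:

```python
import numpy as np

# Closed loop C(s)/R(s) = 2K/(s + 3 + 2K): a single pole at s = -(3 + 2K)
def closed_loop_pole(K):
    return np.roots([1.0, 3.0 + 2.0 * K])[0]

print(closed_loop_pole(0.0))  # -3.0, the open loop pole
print(closed_loop_pole(0.5))  # -4.0, giving the desired tau = 0.25 s
```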

Figure 14  Example: block diagram for first-order root locus example.

Figure 15  Example: block diagram for second-order root locus plot.

Step 4: Since we have two poles and no zeros, n − m = 2 − 0 = 2, and we have two asymptotes. For two asymptotes, the angles relative to the positive real axis are 90 degrees and 270 degrees.
Step 5: The intersection point is found by taking the sum of the poles minus the sum of the zeros, all divided by the number of asymptotes. For this example,

σ = [(−3 − 1) − (0)]/2 = −2

Step 6: With two poles, the section of the real axis between the two poles is the only valid portion of the axis. For our example this is between −1 and −3. This section also includes the intersection point of the asymptotes.
Step 7: There is one break-away point, since the root locus paths begin on the real axis and end along the asymptotes. To find the break-away point, solve the characteristic equation for K and take the derivative with respect to s. The characteristic equation is found by closing the loop and results in the following polynomial:

s² + 4s + 3 + 4K = 0    or    K = −(s² + 4s + 3)/4

Taking the derivative with respect to s:

dK/ds = −(2s + 4)/4 = 0
s = −2

The break-away point for the second-order system coincides with the intersection point of the asymptotes.
Step 8: The angles at which the locus paths leave the poles in our system are 180 degrees (pole at −1) and 0 degrees (pole at −3). This also coincides with the valid

Figure 16  Example: locating poles and zeros in the s-plane (second-order system).

Figure 17  Example: root locus plot for second-order system.

section of the real axis as determined earlier. These directions can also be ascertained from the earlier steps, since the only valid section of real axis is between the two poles and the break-away point also falls in this section. We can now plot our final root locus plot as shown in Figure 17.
Step 9: The asymptotes never cross the imaginary axis, and the system cannot become unstable, even when the loop is closed.
Step 10: For this example let us suppose that the design goals are to minimize the rise time while keeping the percent overshoot less than 5%. The overshoot specification means that we need a damping ratio of approximately 0.7 or greater. We will choose 0.707 since this corresponds to a radial line making an angle of 45 degrees with the negative real axis. Adding the line of constant damping ratio to the root locus plot defines our desired pole locations where it crosses the root locus path. Our poles should be placed at s = −2 ± 2j as shown in Figure 18. Now we need to find the gain K required to place the poles at this position. Remember that the poles begin at −1 and −3 when K = 0; they both travel toward −2 as K is increased, one breaks up, one breaks down, and they follow the asymptotes as K continues to increase. To solve for K, we apply the magnitude condition. Since we do not have any zeros in this system, the numerator is equal to one and we find K to be

K = |s − p1| |s − p2| = √(1² + 2²) · √(1² + 2²) = 5

Figure 18  Example: using root locus plot to locate desired response (second order).

We already have a gain of 4 in the numerator of G(s)H(s), so our controller gain contributes the rest. This gives us a required controller gain of Kp = 5/4. As with the first-order example, we will connect this plot with what we already know about block diagram reduction and characteristic equations. Let us close the loop and analytically vary K to verify the root locus plot. When we close the loop, we get the characteristic equation given below:

s² + 4s + 3 + 4K = 0

If we solve for the roots using the quadratic equation, we can leave K as a variable and check the various locus paths by varying K and plotting the resulting roots of the equation:

s₁,₂ = −4/2 ± √(4² − 4(3 + 4K))/2

Let us check various points along our root locus paths by using several values of K:

K = 0:    s₁,₂ = −2 ± 1 = −1 and −3 (as we expected, our open loop poles)
K = 1/4:  s₁,₂ = −2 and −2 (the value of K at the break-away point)
K = 5/4:  s₁,₂ = −2 ± 2j (our value of K to place us at our desired poles)
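These spot checks are easy to reproduce numerically; a short sketch:

```python
import numpy as np

# Closed-loop characteristic equation: s^2 + 4s + 3 + 4K = 0
def poles_for(K):
    return np.sort_complex(np.roots([1.0, 4.0, 3.0 + 4.0 * K]))

print(poles_for(0.0))   # open loop poles -3 and -1
print(poles_for(0.25))  # repeated pole at -2, the break-away point
print(poles_for(1.25))  # -2 -/+ 2j, the desired closed-loop poles
```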

So, as in the previous example, we are able to analytically solve for the roots as a function of K and verify the plot developed using the rules from this section. In fact, it is quite easy to see from the quadratic equation that our poles start at the open loop poles when K = 0, progress to the break-away point when the square root term becomes zero, and then progress along the asymptotes as K approaches infinity. Once we leave the real axis, the real term always remains at −2, and increasing K only increases the imaginary component, exactly as the root locus plot illustrated. From here on, the remaining examples will use only the root locus techniques, since beyond second-order no easy closed-form solution exists for determining the roots of the characteristic equation. (Note: one does exist for third-order polynomials, but it is a multistep process.)

EXAMPLE 4.8

Develop the root locus plot for the block diagram in Figure 19. This model was already used in Example 4.5 for the Routh-Hurwitz method. Our conditions then arise from:

K/[s(s + 1)(s + 3)] = −1

Figure 19  Example: block diagram for root locus plot (third order).

Figure 20  Example: locating poles and zeros in the s-plane (third-order system).

Step 1: The transfer function is already factored; there are no zeros and three poles. The poles are at s = 0, s = −1, and s = −3.
Step 2: Locate the poles on the s-plane (using x's) as shown in Figure 20.
Step 3: There are no zeros.
Step 4: Since we have three poles and no zeros, n − m = 3 − 0 = 3, and we have three asymptotes. For three asymptotes, the angles relative to the positive real axis are ±60 degrees and 180 degrees.
Step 5: The intersection point is found by taking the sum of the poles minus the sum of the zeros, all divided by the number of asymptotes. This example is given in Figure 21:

σ = [(0 − 3 − 1) − (0)]/3 = −4/3

Step 6: With three poles, the root locus sections on the real axis lie between the two poles at 0 and −1, and to the left of the pole at −3. In this example the asymptotes' intersection point does not fall in one of the valid regions.
Step 7: There is one break-away point, since the root locus paths begin on the real axis between 0 and −1 and end along the asymptotes. To find the break-away point, solve the characteristic equation for K and take the derivative with respect to s. The

Figure 21  Example: location of asymptotes for third-order system.

characteristic equation is found by closing the loop and results in the following polynomial:

s³ + 4s² + 3s + K = 0    or    K = −(s³ + 4s² + 3s)

Taking the derivative with respect to s:

dK/ds = −(3s² + 8s + 3) = 0
s = −0.45, s = −2.22

Only one break-away point coincides with the valid section of real axis; the root locus paths will leave the real axis at −0.45 and begin approaching the asymptotes.
Step 8: The angles at which the locus paths leave the poles in our system are clear from looking at the valid sections of the real axis and knowing that the paths leave the poles along these sections. We can now plot our final root locus plot as shown in Figure 22.
Step 9: Knowing that the root locus paths follow the asymptotes as K increases means that any time we have three or more asymptotes, the system is capable of becoming unstable, since at least some of the asymptotes head toward the right-hand plane (RHP). To find where the paths cross the imaginary axis for this example, we can close the loop and apply the Routh-Hurwitz stability criterion; this was done for this same system in Example 4.5. By visually examining the root locus plot, we could also apply the magnitude condition using our "desired" pole locations on the imaginary axis to solve for the gain K where the system becomes unstable.
Step 10: From here we have several options. If we want to tune the system to have the fastest possible response, we could choose the gain K where the two paths between 0 and −1 just begin to leave the real axis. These types of techniques will be covered in more detail in the next chapter when we discuss designing control systems.

EXAMPLE 4.9

Develop the root locus plot for the block diagram in Figure 23.

Figure 22  Example: root locus plot for third-order system.
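Both the break points and the imaginary-axis crossing of the third-order system (Example 4.8, Figure 22) can be confirmed numerically; an illustrative sketch:

```python
import numpy as np

# Example 4.8 break points: dK/ds = 0 gives 3s^2 + 8s + 3 = 0
breaks = np.roots([3.0, 8.0, 3.0])
print(np.round(breaks, 2))  # -0.45 and -2.22; only -0.45 lies on a valid segment

# Imaginary-axis crossing: s = jw in s^3 + 4s^2 + 3s + K = 0 gives
# imaginary part 3w - w^3 = 0 -> w = sqrt(3); real part then gives K = 4w^2
w = np.sqrt(3.0)
K_limit = 4.0 * w**2
print(K_limit)  # approximately 12, the Routh-Hurwitz limit from Example 4.5
```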

Figure 23  Example: block diagram for root locus plot (fourth-order system).

Step 1: There is one zero and four poles. When the polynomial in the denominator is factored, we find the pole locations to be at s = 0, s = −2, and s = −1 ± 1j. The next two steps are to place the poles and zero in the s-plane.
Step 2: Locate the poles on the s-plane (using x's), shown in Figure 24.
Step 3: There is one zero at s = −3, shown in Figure 24.
Step 4: Since we have four poles and one zero, n − m = 4 − 1 = 3, and we have three asymptotes. For three asymptotes, the angles relative to the positive real axis are ±60 degrees and 180 degrees.
Step 5: The intersection point is found by taking the sum of the poles minus the sum of the zeros, all divided by the number of asymptotes. This example is shown in Figure 25:

σ = [(0 − 2 − 1 − 1) − (−3)]/3 = −1/3

Step 6: With four poles and a zero, the root locus sections on the real axis lie between the two poles on the axis at 0 and −2, and to the left of the zero at −3. In this example the asymptotes' intersection point does fall in one of the valid regions.
Step 7: There is one break-away point, since the root locus paths begin on the real axis between 0 and −2 and end along the asymptotes. For this example there is also a break-in point, since the zero lies on the real axis and must have one path approach it as K goes to infinity. The real axis to the left of the zero is also part of the root locus plot (an asymptote), and two paths must come together at this break-in point. To find the points, solve the characteristic equation for K and take the derivative with respect to s. The characteristic equation is found by closing the loop and results in the following polynomial:

Figure 24  Example: locating poles and zeros in the s-plane (fourth-order system).

Figure 25  Example: location of asymptotes for fourth-order system with one zero.

s⁴ + 4s³ + 6s² + 4s + Ks + 3K = 0    or    K = −(s⁴ + 4s³ + 6s² + 4s)/(s + 3)

Taking the derivative with respect to s (intermediate math steps required):

dK/ds = −(3s⁴ + 20s³ + 42s² + 36s + 12)/(s + 3)² = 0
s = −3.65, −1.54, −0.74 ± 0.41j

Two of the four roots are valid and coincide with the expected locations along the real axis. Ignoring the extra pair of complex conjugate roots, we have the break-away point occurring at s = −1.54 and the break-in point at s = −3.65.
Step 8: The angles at which the locus paths leave the poles in our system are clear except for the complex conjugate pair at −1 ± 1j. To find the angle at which the branches leave these poles, we place a test point very near to s = −1 + 1j. By summing all the angles relative to this point, we can solve for the angle that the test point must make relative to the nearby pole to satisfy the angle condition. Let us begin by summing all the angles on the s-plane as shown in Figure 26. Remember that poles contribute negatively and zeros contribute positively, and the sum of all angles must be an odd multiple of 180 degrees:

Figure 26 Example: calculating angle of departure on root locus plot.


Chapter 4

-φ1 - φ2 - φ3 - φ4 + φ5 = -φ1 - 90 degrees - 135 degrees - 45 degrees + tan^-1(1/2)

-φ1 - 243.43 degrees = -180 degrees

φ1 = -63.4 degrees

Therefore, the path leaves the pole at an angle of -63.4 degrees relative to the real axis. We can use this information to see that the paths leaving the complex poles head directly toward the +/-60 degree asymptotes. This leaves the break-away pair on the real axis to wrap back around and rejoin the axis at s = -3.65. After joining, one path progresses to the zero at -3 and the other path follows the asymptote to infinity. We can now plot our final root locus plot as shown in Figure 27. Step 9: The system is capable of becoming unstable, since at least two of the asymptotes head toward the RHP (any time we have 3 or more asymptotes). To find where the paths cross the imaginary axis for this example, we could use the characteristic equation developed for step 7 and apply the Routh-Hurwitz stability criterion. Usually, adequate resolution can be achieved by approximating where the paths cross the axis and applying the magnitude condition at our ``desired'' pole locations on the imaginary axis to solve for the gain K where the system becomes unstable. Since the Routh-Hurwitz method has already been demonstrated several times, let us assume that our paths cross the imaginary axis at s = 1j (from the plot) and use the magnitude condition, where |K G(s)H(s)| = 1:

K |s - z1| / (|s - p1| |s - p2| |s - p3| |s - p4|) = K sqrt(3^2 + 1^2) / [sqrt(0^2 + 1^2) sqrt(2^2 + 1^2) sqrt(1^2 + 0^2) sqrt(1^2 + 2^2)] = K sqrt(10) / 5 = 1

K = 1.58

Since there is no gain that can be factored out of G(s)H(s), this represents the approximate gain at which the controller can be set before the system goes unstable.
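As a numeric cross-check of steps 7 and 9, independent of the Matlab workflow the chapter uses, the quoted break points and the gain K at the assumed imaginary-axis crossing can be verified in a few lines (a Python sketch):

```python
# Step 7 check: the numerator of dK/ds was 3s^4 + 20s^3 + 42s^2 + 36s + 12,
# with quoted roots s = -3.65, -1.54, and -0.74 +/- 0.41j (rounded).
def dK_numerator(s):
    return 3*s**4 + 20*s**3 + 42*s**2 + 36*s + 12

for s0 in (-3.65, -1.54, complex(-0.74, 0.41), complex(-0.74, -0.41)):
    print(s0, abs(dK_numerator(s0)))    # residuals near zero for true roots

# Step 9 check: magnitude condition |K G(s)H(s)| = 1 at s = 1j for
# G(s)H(s) = (s + 3) / [s(s + 2)(s^2 + 2s + 2)].
s = 1j
zero = -3.0
poles = (0.0, -2.0, complex(-1, 1), complex(-1, -1))

num = abs(s - zero)             # distance from the zero: sqrt(10)
den = 1.0
for p in poles:
    den *= abs(s - p)           # product of pole distances: 5
K = den / num
print(K)                        # about 1.58, matching the text
```

The small residuals confirm the rounded roots, and the last line reproduces K = 5/sqrt(10), approximately 1.58.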

Figure 27 Example: root locus plot for fourth-order system with one zero.


As a note regarding this procedure, we can include the open loop transfer function gain in the magnitude equation during the process, in which case the K we solve for will always be the desired proportional controller gain.

Step 10: From here we have several options. If we want to tune the system to have the fastest possible response, we could choose the gain K where all poles are as far left as possible. If one pole is close to the origin, the total response will still be slow. No matter what gain we choose, this system will experience overshoot and oscillation in response to a step input. At certain gains all four poles will be oscillatory, although with the furthest left pair decaying more quickly than the pair approaching the imaginary axis. As we progress to the next chapter, it is these types of decisions regarding the design of our controller that we wish to study and develop guidelines for.

EXAMPLE 4.10
The remaining example for this section presents the Matlab code required to solve Examples 4.6-4.9. Matlab is used to generate root locus plots equivalent to those developed manually in each example. The plots for each earlier example are given in Figure 28.

For Example 4.6, G(s)H(s) = 2/(s + 3):

Figure 28 Example: root locus plots using Matlab.


num6=2                %Defines the numerator
den6=[1 3]            %Defines the denominator
sys6=tf(num6,den6)    %Converts num and den to transfer function (LTI variable)
rlocus(sys6)          %Draws the root locus plot

For Example 4.7, G(s)H(s) = 4/(s^2 + 4s + 3):

num7=4;               %Defines the numerator
den7=[1 4 3];         %Defines the denominator
sys7=tf(num7,den7)    %Converts to transfer function (LTI variable)
rlocus(sys7)          %Draws the root locus plot

For Example 4.8, G(s)H(s) = 1/[s(s^2 + 4s + 3)]:

num8=1;               %Defines the numerator
den8=[1 4 3 0];       %Defines the denominator
sys8=tf(num8,den8)    %Converts to transfer function (LTI variable)
rlocus(sys8)          %Draws the root locus plot

For Example 4.9, G(s)H(s) = (s + 3)/[s(s^3 + 4s^2 + 6s + 4)]:

num9=[1 3];           %Defines the numerator
den9=[1 4 6 4 0];     %Defines the denominator
sys9=tf(num9,den9)    %Converts to transfer function (LTI variable)
rlocus(sys9)          %Draws the root locus plot

4.4.3 Frequency Response Methods

This section examines stability using information plotted in the frequency domain. There are many parallels with the s-plane methods from the previous section. We take two approaches in this section: relating the information from the s-domain to the frequency domain, and developing and presenting various tools that use the frequency domain. In general, Bode, Nyquist, and Nichols plots are all used when discussing stability in the frequency domain. Since the information is the same and just presented in different forms, most of this discussion centers on the common Bode plot techniques and relates these to the other plots. Nyquist plots, sometimes called polar plots, can be drawn directly from the data used to construct the Bode plots and have the advantage that stability can be determined with a quick glance. Nyquist plots also allow closed loop performance estimates through the use of M and N circles. Nichols plots simply plot the loop gain in decibels versus the phase angle in degrees; when drawn on specially marked grids they allow the actual closed loop performance parameters to be found. Each method has certain advantages and disadvantages. For each type of plot, however, two important parameters are measured that relate to system stability: gain margin and phase margin.

4.4.3.1 Gain Margin and Phase Margin in the Frequency Domain

For a Bode plot the gain and phase margin were defined previously and are generally measured in dB and degrees, respectively. The gain margin (GM) is found by finding


the frequency at which the phase angle is -180 degrees and measuring the distance that the magnitude line is below 0 dB. The measurement of phase and gain margin is shown in Figure 29. If the magnitude plot is not below the 0 dB line when the system is at -180 degrees, the system is unstable. It can be thought of like this: if the system is 180 degrees out of phase, each cycle adds to the previous one. This is similar to pushing a child on a swing, where each push is timed to add to the existing motion. Thus, if the magnitude is above 0 dB at this point, each cycle adds to the previous one and the oscillations grow, making the system unstable. If the magnitude ratio is less than 1, then even though each push is still 180 degrees out of phase, the output amplitude does not grow larger. The phase margin complements the gain margin but starts with a magnitude condition and checks the corresponding phase angle. When the magnitude plot crosses the 0 dB line, the phase margin is calculated as the distance the phase angle is above -180 degrees. At the point of marginal stability, these two points are one and the same (for most systems; see later in this section).
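These two measurements are easy to automate. The sketch below (Python, standard library only) scans the open loop frequency response for the -180 degree phase crossing and the 0 dB crossing; the plant G(s) = 8/[s(s + 2)(s + 4)] is borrowed from a later example in this chapter purely for illustration:

```python
import cmath, math

def G(w):
    """Open loop frequency response of 8 / [s(s+2)(s+4)] at s = jw."""
    s = 1j * w
    return 8.0 / (s * (s + 2) * (s + 4))

def unwrapped_phase_deg(w):
    """Total phase in degrees, summed factor by factor so it never wraps."""
    s = 1j * w
    phase = -(math.pi / 2)          # integrator contributes a constant -90 deg
    phase -= cmath.phase(s + 2)     # first-order pole at -2
    phase -= cmath.phase(s + 4)     # first-order pole at -4
    return math.degrees(phase)

def bisect(f, lo, hi, tol=1e-9):
    """Find a sign change of f between lo and hi by bisection."""
    flo = f(lo)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) * flo <= 0:
            hi = mid
        else:
            lo, flo = mid, f(mid)
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Gain margin: frequency where phase = -180 deg, then distance below 0 dB.
w180 = bisect(lambda w: unwrapped_phase_deg(w) + 180.0, 0.1, 100.0)
gm_db = -20 * math.log10(abs(G(w180)))

# Phase margin: frequency where |G| = 1, then phase above -180 deg.
wc = bisect(lambda w: abs(G(w)) - 1.0, 0.1, 100.0)
pm_deg = 180.0 + unwrapped_phase_deg(wc)

print(f"GM = {gm_db:.2f} dB at {w180:.3f} rad/s; PM = {pm_deg:.1f} deg at {wc:.3f} rad/s")
```

For this plant the scan lands near the values quoted later in Example 4.12 (roughly 15.6 dB and 53 degrees).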

4.4.3.2 Relation of Poles and Zeros in the s-Plane to Bode Plots

Before we progress too quickly, let us relate this concept of stability to what we learned while using the s-plane. If we know the open loop poles and zeros and locate them in the s-plane, it is easy to see how the magnitude and phase relationships in the frequency domain relate to those locations. Knowing from earlier that for a Bode plot we let s = jω, increase ω, and measure/record the magnitude and phase relationships, we can now duplicate the same process in the s-plane. Given three poles and a zero located in the s-plane as Figure 30 shows, let us see how the equivalent Bode plot would be developed using the angle and magnitude skills from the previous section. When calculating the magnitude and phase for various values of ω along the axis, we simply multiply the lengths of the vectors from each zero to the test point jω and divide by the product of the lengths of the vectors from each pole to the test point. For Figure 30 the magnitude value is

Figure 29 Bode plot gain and phase margin.


Figure 30 The s-plane to Bode plot relationships.

Magnitude = |d| / (|a| |b| |c|)

and the phase angle is

Phase = φd - φa - φb - φc

Just from this simple example, we can see the relationships between the two plots:







- For any positive ω, the pole at zero contributes a constant -90 degrees of phase. Since a pole at zero is simply an integrator, this confirms how an integrator adds -90 degrees in the frequency domain.
- For each pole or zero not at the origin, the angle of contribution starts at zero and progresses to -90 or +90 degrees. As we saw in root locus plots, poles contribute negatively and zeros contribute positively. This is exactly the relationship attributed to first-order terms in the numerator and denominator when developing Bode plots.
- As ω increases, so does the length of each vector connecting the poles and zero to it, and the overall magnitude decreases. The more poles we have, the quicker the denominator grows and the steeper the slope of the high frequency asymptote. Once again, this confirms our experience with Bode plots.
- Finally, since we need to reach -180 degrees of phase angle before the system can become unstable, we need at least three more poles than zeros. With a difference of only two, the maximum angle only approaches -180 degrees as ω approaches infinity. This confirms what we found when we looked at root locus plots, where for any system with n - m >= 3 the asymptotes cross into the right-hand plane. Similarly, the only way for a Bode plot to show an unstable system is if the pole-zero difference is also three or greater.
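These vector relationships are easy to reproduce numerically. In the sketch below (Python; the pole/zero values are hypothetical stand-ins, since Figure 30's exact locations are not specified), the magnitude at each test point jω is the zero-vector length divided by the product of the pole-vector lengths, and the phase is the zero angle minus the sum of the pole angles:

```python
import cmath, math

# Hypothetical pole/zero set standing in for Figure 30: one zero, an
# integrator pole at the origin, and a complex conjugate pole pair.
zeros = [-3.0]
poles = [0.0, complex(-1, 2), complex(-1, -2)]

def freq_response(w):
    """Magnitude and phase (degrees) at s = jw built from pole/zero vectors."""
    s = 1j * w
    mag, phase = 1.0, 0.0
    for z in zeros:                 # zeros multiply the length, add angle
        mag *= abs(s - z)
        phase += cmath.phase(s - z)
    for p in poles:                 # poles divide the length, subtract angle
        mag /= abs(s - p)
        phase -= cmath.phase(s - p)
    return mag, math.degrees(phase)

for w in (0.1, 1.0, 10.0, 100.0):
    mag, ph = freq_response(w)
    print(f"w = {w:6.1f} rad/s: |G| = {mag:.4f}, phase = {ph:.1f} deg")
```

At low frequency the phase sits near -90 degrees (the integrator), and the magnitude falls off at high frequency because the poles outnumber the zero, mirroring the observations in the list above.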

To complete the picture, we must remember that a root locus plot is the result of closing the loop and varying the gain K in the system; most Bode plots are found using open loop input/output relationships, and the analogies above from the s-plane to the Bode plot were determined from the open loop poles. The question then becomes: how can closed loop system performance be determined from open loop Bode plots? In most cases we can use the gain margin and phase margin defined above to predict how the system will respond when the loop is closed. Gain and phase margins are easy to measure, but the open loop system itself must be stable. As we will see, this method does require some caution, because some systems are not correctly diagnosed when using the gain margin to determine system stability. It is also possible, and sometimes desirable, to use Nyquist and Nichols plots to determine the closed loop system characteristics in the frequency domain. Of course, we can also just close the loop and construct another Bode plot to examine the closed loop response characteristics. With much of the design work being done on computers, many manual methods are finding less use.

4.4.3.3 Stability in the Frequency Domain

In this section we further examine the concept of system stability using the gain margin and phase margin measurements defined above. Recalling Figure 29, where the margins are defined, we should recognize that if we increase the gain K in the system, the magnitude plot shifts vertically and the phase angle plot does not change at all. Since the gain margin is the distance below 0 dB when the system is -180 degrees out of phase, increasing the gain K by the amount of the gain margin brings us to the point of marginal stability (0 dB gain margin). For systems with two or fewer orders of difference between the denominator and numerator, the phase angle never reaches -180 degrees and the gain margin cannot be measured. Of course, remembering this case in root locus plots, there were two asymptotes and the system never became unstable as the gain went to infinity. For systems where the phase angle does cross -180 degrees, we are able to increase K to where the system becomes unstable. For a Bode plot using a gain K equal to the gain margin, this marginal stability condition is shown in Figure 31. Since multiplying factors add on Bode plots, the system becomes marginally stable when the existing system gain is multiplied by another gain of 1.3. In this example, both the gain margin and phase margin approach zero at the same time and in the same place. When this happens, both the phase and gain margin are good indicators of system stability. One problem that may occur is shown in Figure 32, where the phase margin is the only indicator accurately telling us that the system is unstable. The gain margin, in error, predicts that the system is stable.

Figure 31 Effects of gain K on Bode plot stability margins.


Figure 32 Differences between gain and phase margin with increasing phase.

Therefore, we see that although the gain margin indicates a stable system, the phase margin demonstrates that in fact the system is unstable. Even though the gain margin is often described as the increase in gain possible before the system becomes unstable, the phase margin is a much more reliable indicator of system stability. For most systems the two measures of stability correlate well and can be confirmed by examining the Bode plot. If we recall the section on nonminimum phase systems, we saw how delays in the system change the phase angle and not the magnitude lines on Bode plots. Another way to consider the phase margin, then, is as a measure of how tolerant the stability is to delays in the system. Nyquist plots contain the same information but plotted differently; in fact, the same gain and phase measurements are made. Since the Nyquist plot combines the magnitude and phase relationships, it becomes very easy to see whether or not a system is stable. As long as the system is open loop stable (no poles in the RHP when K = 0), the Nyquist stability theorem is easy to apply and use. If there are possibly zeros or poles in the RHP, the mathematical proofs become much more tedious and subject to many constraints. What follows here is a brief and conceptual introduction to get us to the point where we at least understand what a Nyquist plot tells us regarding the stability of the closed loop system using the open loop frequency data. To begin, let us revisit the s-plane as shown in Figure 33. What we need to picture are the angles that the various poles and zeros go through as we move a test point s around a contour enclosing the RHP. If we let the contour begin at s = 0, progress up the imaginary axis to jω = +infinity, follow the semicircle (with an infinite radius) around to the negative imaginary axis, and move back up to s = 0, then we mathematically have included the entire RHP. If a pole or zero is in the RHP, the angle it makes with the point s moving along the contour line will sweep through a complete circle of 360 degrees. Any pole or zero not in the right-hand plane contributes a net angle rotation of zero. Finally, let our mapping be our characteristic equation,

Figure 33 Closed contour of RHP in the s-plane.

F(s) = 1 + G(s)H(s)

Plot G(s)H(s), the open loop transfer function, on the Nyquist plot; the point of interest relative to the roots of our system occurs at the point -1:

G(s)H(s) = -1

The important result is this: when we developed our Nyquist plot earlier, we let ω begin at zero and increase until it approached infinity (taking the data from the Bode plot in Sec. 3.5.2). Thus, we have just completed one half of the contour path. The path for ω from negative infinity is just the reverse, or mirror, image of our existing Nyquist plot. Now, if we look at the point -1 on the Nyquist plot and count the number of times the plot circles it, we can draw conclusions about the stability of our closed loop system. The concept of using the mapping of the RHP and checking for the number of encirclements about the -1 point is derived from the theorem known as Cauchy's principle of the argument. If there are no poles or zeros in the RHP, the -1 point will never be encircled (even including the mirror image of the common Nyquist plot). There are several potential problems. One problem occurs because the angle contributions for poles and zeros are opposite: if one pole and one zero are both in the RHP, their angles cancel each other during the mapping. The difference is in the direction, as a pole in the RHP circles the -1 point in the counterclockwise direction and a zero in the clockwise direction. A second problem is more mathematical in nature: if there are any poles or zeros on the imaginary axis, the theorem reaches points of singularity at these locations. The normal procedure is to make a small deviation around these points. The Cauchy criterion can now be stated: the number of times that G(s)H(s) encircles the -1 point is equal to the number of zeros minus the number of poles of 1 + G(s)H(s) inside the contour (picked to be the entire RHP). Encirclements are counted as positive when they are in the same direction as the contour path.
This allows us to write the Nyquist stability criterion as follows: a system is stable if Z = 0, where

Z = N + P

and Z is the number of roots of the characteristic equation [1 + G(s)H(s)] in the RHP, N is the number of clockwise encirclements of the point -1, and P is the number of open loop poles of G(s)H(s) in the RHP.
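The counting in the criterion can be demonstrated numerically. The sketch below (Python, standard library only) uses a made-up open loop K/[(s + 1)(s + 2)(s + 4)], chosen so that no poles sit on the imaginary axis; it sweeps s = jω, accumulates the angle of 1 + G(s)H(s), and reports the net winding of the plot about the -1 point:

```python
import cmath

def GH(s, K):
    """Hypothetical open loop transfer function K / [(s+1)(s+2)(s+4)]."""
    return K / ((s + 1) * (s + 2) * (s + 4))

def winding_about_minus_one(K, w_max=1000.0, steps=100_000):
    """Net counterclockwise turns of G(jw)H(jw) about the -1 point,
    sweeping w from -w_max to +w_max (the infinite arc maps near the
    origin for this GH and contributes nothing)."""
    total = 0.0
    prev = 1 + GH(-1j * w_max, K)
    for i in range(1, steps + 1):
        w = -w_max + 2 * w_max * i / steps
        cur = 1 + GH(1j * w, K)
        total += cmath.phase(cur / prev)   # incremental angle change
        prev = cur
    return round(total / (2 * cmath.pi))

# Routh-Hurwitz on (s+1)(s+2)(s+4) + K = 0 predicts instability for K > 90.
# P = 0 here, so Z = N = -(winding): zero winding means a stable closed loop.
print(winding_about_minus_one(8))     # 0: no encirclement, stable
print(winding_about_minus_one(200))   # -2: two clockwise encirclements, Z = 2
```

With K = 200 the plot crosses the negative real axis at -200/90, to the left of -1, so N = 2 and the closed loop has two RHP roots, matching Z = N + P.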


Adding the mirror image to the Nyquist plot developed earlier, shown in Figure 34, allows us to apply the theorem and check for system stability. A quick inspection reveals that the closed loop system will be stable, since the plot never encircles the -1 point. Since it never circles the point CW or CCW, there are neither poles nor zeros in the RHP. Remember that in general the top half of the curve is not shown and that including it may help you visualize the number of times the path encircles -1. To conclude this section, let us connect what we have learned about root locus plots, Bode plots, and Nyquist plots to understand how the stability issues are related. With the Bode plot we already defined and discussed the use of gain margin and phase margin as measures of system stability. Moving to the Nyquist plot allows the same measurements and comments to apply. If we consider the process of taking a Bode plot and constructing a Nyquist plot, the gain and phase margin locations are easily reasoned out. The radius of the Nyquist plot is the magnitude (no longer in dB) and the angle from the origin is the phase shift. The gain margin then falls on the negative real axis, since the phase angle when the plot crosses it is -180 degrees. The magnitude at this point cannot be greater than one if the system is stable, so the distance that the plot falls inside the -1 point is the gain margin. This also confirms the Nyquist stability theorem just developed, since if the plot crosses to the right of -1 (positive gain margin) the path never encircles the -1 point and the theorem also confirms that the system is stable. Where the theorem sees extended use is when multiple loops are found and the gain margin is less clear. The phase margin on the Nyquist plot occurs where the distance of the plot from the origin is equal to one. This corresponds to the crossover frequency (0 dB) in the Bode plot. The amount by which the angle measured from the origin is less than 180 degrees is the phase margin.
These measurements are shown for a portion of a Nyquist plot in Figure 35. Since the phase equals -180 degrees on the negative real axis, the phase margin, φ, is the angle between the negative real axis and the line from the origin to the point where the plot crosses the circle of radius one. Remember that on the Bode plot this corresponds to the

Figure 34 Nyquist stability theorem example plot.

Figure 35 Gain and phase margins on Nyquist plots.

frequency where the magnitude plot passes through 0 dB (the crossover frequency, ωc). The gain margin is represented linearly (not in dB) and can be found from the ratio between the lengths a and b, where

K2/K1 = (a + b)/b = 1/b

GM(dB) = 20 log(K2/K1) = 20 log(K2) - 20 log(K1)
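As a quick arithmetic check of these relationships (Python; b = 1/2 and K1 = 5 are the numbers used in the discussion that follows):

```python
import math

b = 0.5       # distance from the origin to the current axis crossing
K1 = 5.0      # current system gain

K2 = K1 / b                 # gain allowed before the crossing reaches -1
gm_ratio = K2 / K1          # equals 1/b
gm_db = 20 * math.log10(gm_ratio)

print(K2)                   # 10.0
print(round(gm_db, 2))      # 6.02 dB of gain margin
```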

Since the axes are now linear, the increase in gain is simply the ratio of the lengths between the -1 point and where the line crosses the real axis. In other words, if a gain of K1 gets the line to cross as shown in the figure (a distance b from the origin), then K2 is the gain required (allowed) to get us a distance |a + b| = 1 from the origin before the system goes unstable. Since the data is plotted linearly, the ratio of the gains is equal to the ratio of the lengths. For example, if b = 1/2 and the current gain K1 on the system is 5, then K2 can be twice K1, or equal to 10, before the system becomes unstable and the plot moves to the left of -1. To report the gain margin in units of decibels, we can take the log of the ratio, or the difference between the logs of the two gains, and multiply by 20. To summarize the issues regarding stability in the frequency domain, it is better to rely on phase margin than gain margin as a measure of stability. In most systems they will both provide equivalent measures and converge to the same point on both the Bode and Nyquist plots when the system becomes marginally stable. Under some conditions this is not true, and the gain margin may indicate a stable system when in fact the system is unstable. Gain margin is often thought of as the amount of possible increase in gain before the system becomes unstable. This is easy to visualize on Bode plots, since only the vertical position of the magnitude plot changes. Phase margin is commonly related to the amount of time delay possible in the system before it becomes unstable. Time delays change the phase and not the magnitude of the system, and such a system is classified as a nonminimum phase system. Finally, both Bode plots and Nyquist plots contain the same information but in different layouts; the concepts of stability margins apply equally to both. In addition, Nyquist plots can be extended even further using the Nyquist stability theorem to determine if

Figure 36 Example: comparison of stability criterion.

there are any poles or zeros in the right-hand plane. The next example reviews the measures of stability used in the different system representations and shows that they all convey similar information, each with different strengths and weaknesses.

EXAMPLE 4.11
For the system represented by the block diagram in Figure 36:
a. Develop the root locus, Bode, and Nyquist plots.
b. Determine the gain K where the system becomes unstable using
   1. The root locus plot.
   2. The gain margin from the Bode plot.
   3. The gain ratio from the Nyquist plot.
c. Draw each plot again using the new gain.

Part A: To develop the root locus plot, we follow the guidelines presented in the previous section. The system has three poles (0, -2, and -4) and no zeros. Therefore it has three asymptotes, with angles of +/-60 and 180 degrees. The asymptotes intersect the real axis at s = -2 and the break-away point is calculated to be at s = -0.845. This matches well with the valid sections of real axis, which include the segment between the poles at 0 and -2 and the segment to the left of the pole at -4. This allows the root locus plot to be drawn as shown in Figure 37. Since the system being examined has three orders of difference between the denominator and numerator, it will go unstable as K goes to infinity. Plotting the different open loop factors found in the transfer function develops the equivalent Bode plot. We have an integrator and two first-order factors, one with

Figure 37 Example: root locus plot for stability comparison.


τ = 0.5 seconds and one with τ = 0.25 seconds. This means that we have a low frequency asymptote of -20 dB/dec, a break to -40 dB/dec at 2 rad/sec, and a break to the high frequency asymptote of -60 dB/dec at 4 rad/sec. The phase angle begins at -90 degrees and ends at -270 degrees. The resulting Bode plot with the gain and phase margins labeled is shown in Figure 38. Finally, we can develop the Nyquist plot from the data contained in the Bode plot just developed. At very low frequencies, the denominator approaches zero and the steady-state gain goes to infinity. The initial angle on the Nyquist plot begins at -90 degrees with a final angle of -270 degrees. The distance from the origin is equal to 1 at the crossover frequency (M = 0 dB), greater than 1 at lower frequencies, and less than 1 at higher frequencies. The magnitude goes to zero as the frequency approaches infinity. This can be represented as the Nyquist plot given in Figure 39.

Part B: With the three plots now completed, let us turn our attention to determining from each plot where the system goes unstable. For the root locus plot the preferred method is to apply the Routh-Hurwitz criterion and solve for the gain K where the system crosses over into the RHP, thus becoming unstable. With the characteristic closed loop equation equal to

CE = s^3 + 6s^2 + 8s + 8K

the Routh-Hurwitz array becomes

s^3    1                8
s^2    6                8K
s^1    (48 - 8K)/6      0
s^0    8K

Figure 38 Example: Bode plot for stability comparison.
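The first-column test can be scripted directly from the array above (a Python sketch of the same Routh-Hurwitz check):

```python
# Routh-Hurwitz first column for CE = s^3 + 6s^2 + 8s + 8K.
def routh_first_column(K):
    return [1.0, 6.0, (48.0 - 8.0 * K) / 6.0, 8.0 * K]

def is_stable(K):
    # Stable when every first-column entry is strictly positive.
    return all(entry > 0 for entry in routh_first_column(K))

print(is_stable(5.9))   # True: just below the critical gain
print(is_stable(6.0))   # False: (48 - 48)/6 = 0, marginal stability
print(is_stable(6.1))   # False: sign change in the first column
```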


Figure 39 Example: Nyquist plot for stability comparison.

When K equals 6, the third term in the first column becomes zero (marginal stability); for any larger K it is negative and the system is unstable. From the Bode plot, where the gain margin has been graphically determined to be 15 dB, we see that if the magnitude plot is raised vertically by 15 dB the system becomes unstable. For this system both the gain margin and phase margin go to zero at the same point. We can find what increase in gain is allowable by solving for the gain resulting in a 15 dB increase (gains multiply linearly but add on the Bode plot due to the dB scale):

20 log(K) = 15 dB

K = 10^(15/20) = 5.6

Since the Bode plot uses approximate straight-line asymptotes, the gain K varies slightly from the gain solved for with the root locus plot and the Routh-Hurwitz criterion. The allowable gain determined using the Nyquist plot is found by measuring one over the length between the origin and where the plot crosses the negative real axis. Using the gain approximated from the Bode plot implies that this crossing should be at about 1/5 of the total length between 0 and -1 on the negative real axis.

Part C: To complete this example, let us redraw the plots after the total gain in the system is multiplied by 6. Only the Bode plot and the Nyquist plot need to be updated, since the gain is already varied when creating the root locus plot, which therefore already contains the condition where the system goes unstable. In other words, we move along the root locus paths by changing the gain K, while the Bode and Nyquist plots will be different for each unique point along the path. In this part of the example, our goal is to plot the Bode and Nyquist plots corresponding to the point where the root locus plot crosses into the RHP. The root locus plot, included for comparison, is shown in Figure 40 with the Bode and Nyquist plots at the marginal stability condition, K = 6.
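The gain of 6 can also be cross-checked in the frequency domain: at the phase crossover ω = sqrt(8), about 2.83 rad/sec, the open loop magnitude of 8/[s(s + 2)(s + 4)] is exactly 1/6, so the exact gain margin is 20 log(6), about 15.56 dB, slightly above the 15 dB read from the straight-line asymptotes. A short Python check:

```python
import math

w = math.sqrt(8.0)          # phase crossover frequency, about 2.83 rad/s
s = 1j * w
G = 8.0 / (s * (s + 2.0) * (s + 4.0))   # open loop 8 / [s(s+2)(s+4)]

print(abs(G))               # 1/6: the gain can grow by 6 before instability
print(20 * math.log10(6.0)) # exact gain margin in dB, about 15.56
```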
To conclude this section, let us work the same example except that we will answer the questions using Matlab to confirm and plot our results.

Figure 40 Example: comparison of marginal stability plot conditions: (a) root locus plot; (b) Nyquist plot; (c) Bode plot.


EXAMPLE 4.12
For the system represented by the block diagram in Figure 41, use Matlab to solve for the following:
a. Develop the root locus, Bode, and Nyquist plots.
b. Determine the gain K where the system becomes unstable using
   1. The root locus plot.
   2. The gain margin from the Bode plot.
   3. The gain ratio from the Nyquist plot.
c. Draw each plot again using the new gain.

Part A: To generate the plots using Matlab, we can define the system once and use the command sequence shown to generate each plot. Each command used has many more options associated with it. To see the various input/output options, type

>>help command

and Matlab will show the comments associated with each command.

%Example of Root Locus, Bode, and Nyquist
%Stability Criterion

%Define System
num=8;
den=[1 6 8 0]
sys=tf(num,den)

rlocus(sys)        %Develop the Root Locus Plot

rlocfind(sys)      %Find gain at marginal stability
                   %Place cursor at location, returns K

figure;            %Opens a new plot window
bode(sys)          %Develop the Bode plot
margin(sys)        %Measure the Stability Margins
                   %Places margins on the plot

figure;            %Opens a new plot window
nyquist(sys)       %Develop the Nyquist plot

The rlocus command returns the root locus plot shown in Figure 42.

Figure 41 Example: comparison of stability criterion using Matlab.

Figure 42 Example: Matlab root locus plot.

Using the rlocfind command brings up the current root locus plot and allows us to place the cursor on any point of interest along the root locus plot and find the associated gain K at that point. Placing it where the paths cross into the RHP returns K = 6, verifying our analytical solution; it also returns the pole locations that were clicked on. The bode command generates the Bode plot for our system and, when followed by margin, calculates and labels the gain and phase margins for the system, shown in Figure 43. Here we see that the gain margin equals 15.563 dB, close to our approximation of 15 dB. The phase margin is 53.4 degrees at a frequency of 0.8915 rad/sec. If we calculate the gain K required to shift the magnitude plot up by 15.563 dB, we get

20 log(K) = 15.563 dB

K = 10^(15.563/20) = 6

This gain of K = 6 agrees with the root locus plot from earlier. The nyquist command is used to generate our final plot, as shown in Figure 44. To illustrate the condition of stability around the point -1, the axes have to be set to zoom in on the area of interest. Remember that the plot begins with an infinite magnitude at -90 degrees. Applying the Nyquist stability criterion confirms that the system is stable: there is no encirclement of the -1 point. The gain margin is also verified, as the plot crosses the negative real axis approximately 1/6 of the way between 0 and


Figure 43 Matlab Bode plot with stability margins (GM ¼ 15.563 dB [at 2.8284 rad/sec], PM ¼ 53.411 deg. [at 0.8915 rad/sec]).

Figure 44 Example: Matlab Nyquist plot.


-1 on the negative real axis. This means that we can increase the gain in our system six times before the plot moves to the left of the -1 point. Finally, if we increase the numerator of our system from 8 to 48 (increasing the gain by a multiple of K = 6), then we can use Matlab to redraw the Bode (see Figure 45) and Nyquist plots. When generating the Nyquist plot in Figure 46, we can show one close-up section to verify stability and one overview plot giving the general shape. With the new gain in the system, we see on the Bode plot that the gain margin and the phase margin have both gone to zero and the system is marginally stable. On the Nyquist plot we see that the path goes directly through the -1 point, also confirming that our system is marginally stable. With any further increase in gain, the plot will encircle the -1 point and tell us that we have an unstable system. By now we have a better understanding of how different representations can be used to determine system stability. Hopefully the comparisons have demonstrated that the same information is simply conveyed using different representations. With the same physical system (as in the examples) we certainly expect each method to find the same stability conditions. Different methods have different strengths and weaknesses. Many times the choice of which one to use is determined by the information available about the system and what form it is in. Different computer packages also have different capabilities. As long as we understand how they relate, we should be able to design using any one of the methods presented.

Figure 45 Example: Matlab Bode plot at marginal stability (GM = 0 dB, PM = 0 [unstable closed loop]).

Figure 46 Example: Matlab Nyquist plot at marginal stability.

4.4.3.4 Closed Loop Responses from Open Loop Data in the Frequency Domain

As we discussed earlier, most frequency domain methods are applied to the open loop transfer function of the system. Ultimately, the goal is to close the loop to modify and enhance the performance of the system. Since we already have the information, albeit representing the open loop characteristics, we would like to use it directly and infer the expected results when we close the loop. Of course we can always close the loop and redraw the plots for the closed loop transfer function, but that duplicates some of the work already completed. This is one advantage of the root locus plot, in that the closed loop response is determined from the open loop system transfer function and the complete range of possible responses is quickly understood. If we have a unity feedback system with H(s) = 1, then we can see the relationship between the open loop and closed loop system response by using the Nyquist diagram, as illustrated in Figure 47. If we have an open loop system represented by G(s) and unity feedback, then the closed loop system is given as

C(s)/R(s) = G(s)/(1 + G(s))

Figure 47 Open loop versus closed loop response with Nyquist plot.
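The relation C/R = G/(1 + G) means the closed loop frequency response can be computed point by point from open loop data alone. A short sketch in Python (the specific G(s) used here is only an illustrative choice, not one specified at this point in the text):

```python
import numpy as np

def G(s):
    # Illustrative open loop system (an assumption for this sketch)
    return 8.0 / (s * (s + 2.0) * (s + 4.0))

w = np.logspace(-1, 1, 200)      # frequency grid, rad/sec
g = G(1j * w)                    # open loop frequency response data

# Closed loop response built directly from the open loop data:
# |CL| = |G| / |1 + G|, and the closed loop phase is angle(G) - angle(1 + G)
mag_cl = np.abs(g) / np.abs(1.0 + g)
phase_cl = np.angle(g) - np.angle(1.0 + g)

# Cross-check against closing the loop explicitly
t = g / (1.0 + g)
print(np.allclose(mag_cl, np.abs(t)))    # True
```

This is exactly the computation a Nichols chart performs graphically, one grid line at a time.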

The magnitude of the denominator, |1 + G(s)|, can also be found on the Nyquist plot as the distance from the point (−1, 0) to the point on the plot. Now we know both the numerator and denominator on the Nyquist plot, and if we calculate various values around the plot, we can construct our closed loop frequency response. In the same way, our closed loop phase angle can be found as

φCL = φOL − β

where β is the angle of the vector drawn from the point (−1, 0) to the point on the plot. Instead of having to perform these calculations for each point, it is common to use a Nichols chart, where circles of constant closed loop magnitude and phase angle are plotted on the graph paper. After we plot our open loop response (as done for a Nyquist plot), we mark each point where our plot crosses the constant magnitude and phase lines for the closed loop. All that remains is to record each intersecting point and construct the closed loop response.

Perhaps the most common parameters specified for open loop frequency plots (Bode and Nyquist) are the gain and phase margins, as defined and used throughout this section as a measure of stability. If we have dominant second-order poles, then we can also use the gain and phase margins as indicators of closed loop transient responses in the time domain. As we will see, the phase margin directly relates to the closed loop system damping ratio for second-order systems given by the form

G(s) = ωn² / [s(s + 2ζωn)]

When we close the loop, we get the common form of our second-order transfer function:

C(s)/R(s) = ωn² / (s² + 2ζωn s + ωn²)

The process of relating our closed loop transfer function to the phase margin is as follows. If we solve for the frequency where |G(jω)| is equal to 1 by letting s = jω, then we have located the point where our phase angle is to be measured. We now substitute this frequency where the magnitude is one into the phase relationship and solve for the phase margin as a function of the damping ratio. The result is

Phase margin = φm = tan⁻¹[ 2ζ / √(√(1 + 4ζ⁴) − 2ζ²) ]

It is much more convenient for this relationship to be plotted, as in Figure 48. The second relationship derived from the analysis summarized above relates the gain crossover frequency to the natural frequency of the system. It is derived from knowing that at the gain crossover frequency the magnitude of the system is equal to one and relating that frequency to the natural frequency in the equations above. The ratio of the crossover frequency to the natural frequency is

ωc/ωn = √(√(1 + 4ζ⁴) − 2ζ²)

As before, it is useful to plot this relationship, as shown in Figure 49. These two plots allow us to draw the open loop Bode plot for second-order systems and, from the phase margin and gain crossover frequency, determine the closed loop time response in terms of the system's natural frequency and damping ratio. These parameters have been discussed and applied frequently in previous sections. A word of caution is in order. Remember that these figures are for systems that are well represented as second order (dominant second-order poles). For general systems of higher order and systems including zeros, our better alternatives are to close the loop and redo our analysis, simulate the system on a computer, or develop a transfer function and use a technique like root locus. If these conditions are not met, the approximations become less certain and worthy of a more thorough analysis.

Figure 48 Relationship between OL phase margin and CL damping ratio (second-order systems).

Figure 49 Relationship between OL crossover and CL natural frequencies (second-order systems).
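These two relationships are easy to check numerically. The sketch below (Python, with an arbitrary ζ and ωn chosen for illustration) computes the crossover frequency and phase margin from the closed-form expressions and confirms them directly from G(jω):

```python
import numpy as np

zeta, wn = 0.5, 4.0      # example damping ratio and natural frequency (arbitrary)

def G(s):
    # Open loop form whose unity feedback closure is the standard 2nd-order system
    return wn**2 / (s * (s + 2.0 * zeta * wn))

# Closed-form relationships from the text
r = np.sqrt(np.sqrt(1.0 + 4.0 * zeta**4) - 2.0 * zeta**2)   # wc / wn
wc = r * wn                                                  # gain crossover frequency
pm = np.degrees(np.arctan2(2.0 * zeta, r))                   # phase margin, deg

# Direct check: |G(j wc)| = 1, and PM = 180 deg + angle(G(j wc))
g = G(1j * wc)
print(abs(g))                                # magnitude is 1 at the crossover
print(180.0 + np.degrees(np.angle(g)), pm)   # the two phase margin values agree
```

For ζ = 0.5 both computations give a phase margin of about 51.8 degrees, consistent with the commonly quoted rule of thumb ζ ≈ PM/100 for modest phase margins.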

4.5 PROBLEMS

4.1 What are the three primary (fundamental) characteristics that are used to evaluate the performance of control systems?

4.2 What external factors determine, in part, whether or not we should consider using an open loop or closed loop controller?

4.3 Given the physical system model in Figure 50, answer the following questions (see also Example 6.1).
a. Construct the block diagram representing the hydraulic control system.
b. Can the system ever go unstable?
c. To decrease the steady-state error, you should increase the magnitude of which of the following variables? [M, B, Kv, K, Ap, a, b]

4.4 List a possible form that a disturbance might take in each of the following components of a control system (i.e., electrical noise, parameter fluctuation, etc.).
a. Amplifier
b. Actuator
c. Physical system
d. Sensor/transducer

4.5 A system transfer function is given below. What is the final steady-state value of the system output in response to a step input with a magnitude of 10?

G(s) = (5s² + 1)/(23s³ + 11s + 10)

4.6 Using the system in Figure 51, what is the initial system output at the time a unit step input is applied, the steady-state output value, and the steady-state error?

4.7 A controller is added to a machine modeled as shown in the block diagram in Figure 52.
a. Determine the transfer function between C and R. What is the steady-state error from a unit step input at R, as a function of K?
b. Determine the transfer function between C and D. What is the steady-state error from a unit step input at D, as a function of K?

4.8 The transfer function for a unity feedback (H(s) = 1) control system is given below. Determine
a. The open loop transfer function.
b. The steady-state error from a unit step input.

Figure 50 Problem: hydraulic control system.

Figure 51 Problem: block diagram of system.

c. The steady-state error from a unit ramp input.
d. The value of K required to make the steady-state error from part c equal to zero.

C(s)/R(s) = K(s + b)/(s² + as + b)

4.9 Given the block diagram in Figure 53, use the system type number to determine the steady-state error resulting from a
a. Unit step input at R(s).
b. Unit ramp input at R(s).
c. Unit step input at D(s).
d. Comment on the results.

4.10 Reduce the block diagram in Figure 54 to a single block. Using the final transfer function and the FVT, determine the steady-state output of the system to a unit step input function.

4.11 Given the block diagram model of the physical system in Figure 55 and using a unit step input for questions pertaining to inputs R and D, answer the following questions.
a. What is the natural frequency of the system?
b. What is the damping ratio of the system?
c. What is the percent overshoot?
d. What is the settling time (2%)?
e. What is the steady-state error from a unit step input for command R?
f. What is the steady-state error from a unit step input for disturbance D?

4.12 Using the Routh-Hurwitz criterion, determine the range of values for K that results in the following system being stable.

G(s) = K/(s³ + 18s² + 77s + K)

4.13 The characteristic equation is given for a system model as follows:

CE = s³ + 2s² + 4s + K

Figure 52 Problem: block diagram of system.

Figure 53 Problem: block diagram with disturbance.

Figure 54 Problem: block diagram of system.

a. Develop the Routh array for the polynomial, leaving K as a variable in the array, and determine the range of K for which the system is stable.
b. At the maximum value of K, what are the values of the poles on the imaginary axis and what is the type of response?

4.14 Given the block diagram in Figure 56, use root locus techniques to answer the following questions.
a. Sketch the root locus plot.
b. Use the magnitude condition and your root locus plot to determine the required gain K for a damping ratio of 0.866. Show your work.
c. Letting K = 2 for this question, what is the steady-state error due to a unit step input at R?

4.15 Develop the root locus plot and required parameters for the following open loop transfer function.

GH = (s + 4)/[s(s + 3)(s² + 2s + 4)]

4.16 Develop the root locus and required parameters for the following system.

Figure 55 Problem: block diagram with disturbance.

Figure 56 Problem: block diagram of system.

1 + KG(s)H(s) = 1 + K/(s⁴ + 12s³ + 64s² + 128s) = 1 + K/[s(s + 4)(s + 4 + j4)(s + 4 − j4)] = 0

a. List the results that are obtained from each root locus guideline.
b. Give a brief sentence describing why or why not the dominant poles assumption is valid for this system.

4.17 Develop the root locus plot and required parameters for the following open loop transfer function.

GH(s) = (s + 2)/[(s² + 3s + 3)(s² + 8s + 12)]

4.18 Develop the root locus plot and required parameters for the following open loop transfer function.

GH(s) = (s² + 6s + 8)/[(s + 7)(s² + 4s + 3)(s² + 2s + 10)]

After the plot is completed, describe the range of system behavior as K is increased.

4.19 Develop the root locus plot and required parameters for the following open loop transfer function. Use only those calculations that are required for obtaining the approximate loci paths.

GH = [(s + 3)(s + 4)(s + 1.5 + 0.5j)(s + 1.5 − 0.5j)]/[(s + 2)(s + 1)(s + 0.5)(s − 1)]

After the plot is completed, describe the range of system behavior as K is increased.

4.20 Given the following system, draw the asymptotic Bode plot (open loop) and answer the following questions. Clearly show the final resulting plot.

GH(s) = 1/[s²(2s + 1)]

a. What is the phase margin φm?
b. What is the gain margin in decibels?
c. Is the system stable?
d. Sketch the Nyquist plot.

4.21 Using the Bode plot given in Figure 57, answer the following questions.
a. What is the open loop transfer function?
b. What is the phase margin?
c. Sketch the Nyquist plot.

Figure 57 Problem: Bode plot of system.

Figure 58 Problem: Bode plot for system.

d. Can the system ever be made to go unstable if the gain on a proportional controller is increased?

4.22 Given the open loop Bode plot in Figure 58, develop the closed loop second-order approximation. Show all intermediate steps. The final result should be a second-order transfer function in the s-domain. Sketch the closed loop magnitude (dB) and phase angle frequency response.


5 Analog Control System Design

5.1 OBJECTIVES

- Provide an overview of analog control system design.
- Design and evaluate PID controllers.
- Develop root locus methods for the design of analog control systems.
- Develop frequency response methods for the design of analog control systems.
- Design and evaluate phase lag and phase lead controllers.
- Design a proportional feedback controller using state space matrices.

5.2 INTRODUCTION

Analog controllers may take many forms, as this chapter shows. However, the analysis and design procedures, once the transfer functions are obtained, are nearly identical. A proportional controller might utilize a transducer, operational amplifier, and amplifier/actuator and yet perform the same control action as a system utilizing a set of mechanical feedback linkages. The movement has certainly been toward fully electronic controllers, since they have several advantages over their mechanical counterparts. Transducers are relatively cheap, computing power continues to experience exponential growth, electronic controllers consume very little power, and the cost of upgrading controller algorithms or changing parameter gains is only the cost of design time that all updated systems would require regardless. Once the move is made to digital, a new algorithm is installed by simply downloading it to the corresponding microcontroller. The algorithms presented here are the mainstay of many control projects today and are capable of solving most of the problems encountered. Advanced control algorithms certainly have many advantages, but the basic controllers remain the majority in most applications.

5.3 GENERAL OVERVIEW OF CONTROLLER (COMPENSATOR) DESIGN

It is quite common as we work in the area of control system design to see the terms controller and compensator. For the most part, the words are meant to describe the same thing, that is, a way to make the system output behave in a desirable way. If any differences do exist, it could be argued that the term controller includes a larger portion of our system. Components such as the required linkages, transducers, gain amplifiers, etc., could all be included in the term controller. The term compensator, on the other hand, is often applied to the portion, or subsystem, of the control system that compensates (or modifies the behavior of) the system, thus we may hear terms such as PID compensators and phase-lag compensators. Some reference sources make this distinction and some do not. It poses no problem as long as we are aware of both terms and how they might be used. The actual compensator itself may be placed in the forward path or in the feedback path as shown in Figures 1 and 2. When the compensator is placed in the forward path, it is often called series compensation, as it is in series with the physical system. Similarly, when the compensator is in the feedback path, it is often called parallel compensation (in parallel with the physical system). In many cases the location of the compensator, Gc , is determined by the constraints imposed on the design by the physical system. Practical issues during implementation may make one design option more attractive than the other. These issues might include available sensors, system power levels, and existing signals available from the system. As shown in the following sections, having noisy signals may lead us to implement a combination of the two forms. Series compensation is more commonly found in systems with electronic controllers where the feedback and command signals are represented electrically. This allows the compensator to be placed in the system where the lowest power levels are found. 
Figure 1 Compensator placed in the forward path.

Figure 2 Compensator placed in the feedback path.

This is increasingly important as we move to digital control systems where
components are not as capable of handling larger power levels (i.e., microprocessors). Parallel compensation, since it occurs in the feedback path, can sometimes require fewer components and amplifiers since the available power levels are often larger. For example, as we see in the next chapter dealing with implementation of control systems, mechanical controllers can operate without any electronics and directly utilize the physical input from the operator to control the system. The way these systems are implemented places the compensator (mechanical linkages in some cases) in the feedback path.

Remember that although classified as two distinct configurations, more complex systems with nested feedback loops may contain elements of both. The important thing to note is that regardless of the layout of the system, the design tools and procedures (i.e., closing the loop, placing system poles, etc.) remain the same. The exception is that in systems where the gain is not found in the characteristic equation as a direct multiplier [1 + KG(s)H(s)], as required for using root locus design techniques, some extra manipulation may be needed to use these tools.

Finally, in keeping with the layout of this text, both s-plane and frequency methods are discussed together when presenting the design of various controllers. To be effective designers, we should be comfortable with both and able to see the relationships that exist between the different representations. Ultimately, our compensator should modify our physical system response to give us the desired response in the time domain, since it is in the time domain that we see, hear, and work with the system. Whether the compensator is designed using root locus or Bode plots, our design criteria almost always relate back to a time response.

As a last comment, it would be wrong to finish this chapter (and text) with the impression that we always need to compensate our system to achieve our desired response. Implementing a control system is not meant to replace the goal of designing a good physical system with the desired characteristics inherent in the physical system itself. Although many poorly designed physical systems are improved through the use of feedback controls, even better performance can be achieved when the physical system is properly designed. In other words, a poor design of an airplane may result in an airplane that is physically unstable. Even though we can probably stabilize the aircraft through the use of feedback controls (and time, money, weight, etc.), that does not hide the fact that the original design is quite flawed and should be the first improvement made. It is generally assumed when discussing control systems that the physical system is constrained to be the way it is and our task is to make it behave in a more desirable manner.

5.4 PID CONTROLLERS

Figure 3 Basic PID block diagram.

PID controllers are, without question, the most popular designs in use today, and for good reason. They do many things well, cover the basic transient properties we wish to control, and are familiar to many people. Virtually all "off the shelf" controllers offer the option of PID. PID controllers are named for their proportional-integral-derivative control paths. One path multiplies the error by a gain (output proportional to input), one path integrates or accumulates the error, and the derivative path produces output proportional to the rate of change of error. Although new advanced controllers are continually being developed, for systems with well-defined physics, with little change in parameters during operation, and that are fairly linear over the operating range, the PID algorithm handles the job as capably as most others. The different modes of PID (proportional, integral, derivative) give us options to modify all the steady-state and transient characteristics examined in Chapter 4.

The basic block diagram representing a PID controller is shown in Figure 3. The output of the summing junction represents the total controller output that provides the input to the physical system. Summing the three control actions results in the following transfer function and equivalent time-domain equation:

U(s)/E(s) = Kp + Ki/s + Kd s        u(t) = Kp e(t) + Ki ∫ e(t) dt + Kd de(t)/dt

Using the PID transfer function as expressed above, it is common to combine the three blocks shown in Figure 3 into a single block, as shown in Figure 4. The easiest way to illustrate the equations is to examine the controller output when a known error is the input. If a ramp change in error is the input to a PID controller, then each control action contributes to the output as shown in Figure 5. The total controller output is simply the three components added together. The proportional term multiplies the error by a fixed constant and only scales the error, as shown. It is the most common beginning point when implementing a controller and usually the first gain adjusted when setting up the controller. The integral term will always be increasing if the error is positive and decreasing if the error is negative. Its defining characteristic is that it allows the controller to have a non-zero output even when the error input is zero. The derivative gain only has an output when the error signal is changing. Each term is examined in more detail in the next section.
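In discrete form the control law above is only a few lines of code. The sketch below (Python, with Euler approximations for the integral and derivative and hypothetical gains) mirrors the equation u(t) = Kp e + Ki ∫e dt + Kd de/dt:

```python
def pid_step(error, state, Kp, Ki, Kd, dt):
    """One update of u = Kp*e + Ki*integral(e) + Kd*de/dt (Euler approximations)."""
    integral, prev_error = state
    integral += error * dt                  # integral path accumulates the error
    derivative = (error - prev_error) / dt  # derivative path: rate of change of error
    u = Kp * error + Ki * integral + Kd * derivative
    return u, (integral, error)

# Hold a constant error of 1.0 for one second with Kp = 2, Ki = 1, Kd = 0:
# the integral contribution grows toward 1.0 while the proportional term stays at 2.0.
state, dt = (0.0, 0.0), 0.01
for _ in range(100):
    u, state = pid_step(1.0, state, Kp=2.0, Ki=1.0, Kd=0.0, dt=dt)
print(u)   # approaches 3.0 (proportional 2.0 plus accumulated integral ~1.0)
```

A practical implementation would also clamp the accumulated integral (anti-windup) and low-pass filter the derivative, issues taken up in the next section.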

Figure 4 Single block PID representation.

Figure 5 PID controller output as a function of a ramp input error.

5.4.1 Characteristics of PID Controllers

The proportional control action is generally the starting point when closing a feedback loop. While the integral gain could function alone (and sometimes does), the derivative gain needs to be supported by the proportional gain. Proportional controllers have worked well in many applications and generally allow the controller to be tuned over a wide range. Their primary disadvantages are the inability to change the shape of the root locus plot (i.e., to be able to "randomly" choose some natural frequency and damping ratio combination) and the occurrence of steady-state errors for all type 0 systems. Varying the proportional gain will only move the system's poles along the root loci path defined by the original system poles and zeros. (The same observation is true in the frequency domain; the shape of a Bode plot is not changed with a proportional gain, only the vertical location.) In addition, since some error is necessary to have a non-zero signal to the actuator, some steady-state error is always present unless the physical system contains an integrator that allows the system to maintain the current setting with a zero signal to the actuator. As the proportional gain is increased, the steady-state errors decrease but the oscillations increase. Thus, the designer's job is to balance the two trade-offs so common in closed loop controllers: stability versus accuracy.

The integral gain, when used in conjunction with a proportional gain, can be used to eliminate the steady-state errors. This can be intuitively explained by realizing that as long as an error is present in the system, the integral gain is "collecting" the error and the correction signal to the actuator continues to grow until the error is reduced to zero. If an integrator is added to a type 1 system, then the error from a ramp input can be driven to zero, and so forth (it becomes a type 2 system). In these situations, however, there are two poles at the origin of the s plane and stability may be compromised. If the integral gain is used alone, it is hard to achieve decent transient responses with timely reduction of the steady-state errors. With the integral gain large enough to respond to the transients, a problem arises: the integrator accumulates too much error, overshoots the command, and repeats the process. This effect is called integral windup, and many controllers place limits on the error accumulation levels in the integrator. Integral resets are sometimes implemented to reset the error in the integral term to zero. Many times it is possible to determine whether the oscillations are from integral windup or excessive proportional gain by noticing the frequency of oscillation. The proportional gain, when too large, causes the system to oscillate near its natural frequency, while the integral windup frequency is commonly much lower and less "aggressive." The general tuning procedure is to use the proportional gain to handle the large transient errors and the integral gain to eliminate the steady-state errors with only minor effects on the stability. Implementing the integral portion of a controller is common and generally proves to be quite effective.

Derivative gains, of the three discussed, are capable of simultaneously helping and hurting the most. A derivative control action can never be used alone since it only has an output when the system is changing. Hence, a derivative controller has no information about the absolute error in the system; you could be a mile from your desired position and, as long as you do not move, the derivative output is zero and you remain a mile from where you want to be. Therefore, it must always be used in conjunction with proportional controllers. The benefit is that a derivative gain, since it adds a zero to the system, can be used to attract "errant" loci paths and thus contribute to the stability of the system. It anticipates large errors and attempts to correct them before they develop, whereas proportional and integral gains are reactive and only respond after an error has developed. In many cases it acts as effective damping in the system, simulating damping effects without the energy losses associated with adding a damper. This is the advantage; the disadvantage is that in practice it tends to saturate actuators and amplify noisy signals.
Figure 6 Noise amplification with derivative term in controller.

First, if the system experiences a step input, then by definition the output of the derivative controller is infinite and will saturate any controller currently available. Second, since the output is the derivative of the error, or the slope of the error signal, the derivative output can have severe swings between positive and negative values and cause the system to experience chatter. The net benefit is lost: the derivative term, although stabilizing the overall system, injects enough amplified signal noise into the entire system that it chatters. The effect is shown in Figure 6 and is why a low-pass filter is commonly used with derivative controllers. Even though the trend of the error signal always has a positive slope, the derivative of the actual error signal has large positive and negative signal swings. The low-pass filter should be chosen to allow the shape of the overall signal to

remain the same while the higher frequency noise is filtered and removed. This allows the actual derivative output to more closely approach the desired output.

Several variations of the PID controller are used to overcome the problems with the derivative term. Even if we could remove all the noise from the signal, we would still saturate the amplifier/actuator any time a step input occurs on the command. Since the physical system does not immediately respond, the error input to the controller also becomes a step function. The derivative term, in response to the step input, attempts to inject an impulse into the system. When we switch to different set points, this resulting impulse into the system is sometimes termed set-point kick. If our actual feedback signal is noise free, then we can counteract the step input saturation problem using approximate derivative schemes that modify the derivative term, Kd s, to become

Kd s  ≈  Kd s / [(1/N)s + 1]

where the value of N can be adjusted to control the effects. A common value is N = 10. With this modification a step input does not cause an infinite output but rather a decaying pulse, as shown in Figure 7, where the step response of the modified approximate derivative term is plotted for several values of N. When Kd = 1, as shown in Figure 7, the effects of N are easily seen since the peak value is simply equal to the value of N for that plot. Notice, though, that the time scales are also shifted, and as N increases the response decays more quickly (the time constant is 1/N). As the value of N approaches infinity, the approximate derivative output approaches that of a true derivative. The output of a true derivative would have an infinite magnitude for an infinitesimal amount of time.

Figure 7 Step responses of approximate derivative function for several values of N.

Other methods can be used to deal with both set-point kick (step inputs) and noise in the signals. These methods, however, some of which are summarized here, require a change to the structure of our system and additional components. One alternative to deal with the problem of differentiating a step input is to move the derivative term to the feedback path. Since the output of the physical system will never respond as abruptly as a step, the derivative term is less likely to saturate the components in the system. In other words, the output of the system will not have a slope equal to infinity as a true step function does. This modification, PI-D, and the resulting block diagram are shown in Figure 8. Now the error is fed forward so that it is still directly multiplied by Kp and integrated by Ki/s, while the derivative term only adds the effects from the rate of change of the physical system output, not of the error signal.

To further reduce abrupt changes in the signal that the controller sends to the system, we might choose to also include the proportional term in the feedback loop, as shown in Figure 9. The I-PD is similar to the PI-D except that now only the integral term directly responds to the change in error. Even if a step input is introduced into the system, the integral of the step is a ramp and relatively easy for the system to respond to without saturating. The proportional term is fed directly through from the feedback along with the derivative of the feedback. The one problem we still have with these alternatives is noise in the feedback signal itself.
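The behavior plotted in Figure 7 can be reproduced from the step response of the approximate derivative Kd s/[(1/N)s + 1], which works out analytically to Kd N e^(−Nt): a pulse of height Kd N that decays with time constant 1/N. A short check (Python):

```python
import numpy as np

# Step response of Kd*s / ((1/N)s + 1) is Kd * N * exp(-N t):
# a finite, decaying pulse instead of a true impulse.
Kd = 1.0
t = np.linspace(0.0, 1.0, 1000)
for N in (3.0, 10.0, 20.0):
    y = Kd * N * np.exp(-N * t)
    print(N, y[0])    # with Kd = 1 the peak equals N, as in Figure 7
```

Larger N gives a taller, narrower pulse; in the limit the response approaches the true derivative's impulse.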
If we have a noisy signal, the derivative term will still amplify the noise and inject it back into the system. An alternative that reduces both the step input and noise-related derivative problems is to use a velocity sensor. When velocity sensors are used (assuming position is controlled), neither the error nor the feedback signal is differentiated and the velocity signal acts as the derivative of the position error, as shown in Figure 10. When the closed loop transfer function is derived, the derivative gain, Kd, adds to the system damping and allows us to stabilize the system. Since the feedback signal comes from the velocity sensor and is not obtained by differentiating the position signal, the problems with noise amplification are minimized. There are many variations on this model depending on components, access to signals, and system configurations. The remaining sections help us design these controllers.

Figure 8 Block diagram with PI-D controller.

Figure 9 Block diagram with I-PD controller.

5.4.2 Root Locus Design of PID Controllers

Root locus techniques are one of the most common methods used to design and tune control systems. Chapters 1 through 4 developed the tools used for designing controllers, and we are now able to start combining the different skills. We should be able to effectively model our system, evaluate the open loop response using root locus or Bode plots, choose a desired response according to one or more performance specifications, and design the controller. Understanding root locus plots allows us to design our controllers for specific requirements and to immediately see the effects of different controller architectures. As illustrated in the s-plane plots, the damping ratio, natural frequency, and damped natural frequency are all easily mapped into the s-plane. Lines of constant damping ratio are radials directed outward from the origin, where the cosine of the angle between the negative real axis and the line equals the damping ratio. The imaginary component of the poles corresponds to the damped natural frequency, and the radius from the origin to the poles equals the natural frequency. Thus, if the natural frequency and damping ratio are specified for the dominant closed loop poles, the desired pole locations are also known. The question then becomes, how do we get the poles to be at that location? Root locus plots are a valuable technique for doing this. Since root locus techniques vary the proportional gain, the technique needs some modifications for tuning the derivative and integral gains in a PID controller. The most effective way to quickly design controllers is to place the poles and/or zeros based on our knowledge of how the root loci paths will respond. For example, with a PID controller we can place two zeros with the pole being assigned to the origin. Selectively placing the zeros will determine the shape of the loci paths, and we can vary the proportional gain to move us along the loci.
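For a PID compensator the two zeros come from the numerator of (Kd s² + Kp s + Ki)/s, so choosing target zero locations fixes the gain ratios directly. A sketch (Python, with hypothetical target zeros at s = −2 and s = −6):

```python
import numpy as np

# PID transfer function (Kd s^2 + Kp s + Ki)/s: one pole at the origin and two
# zeros we are free to place. To put the zeros at s = -2 and s = -6, match
# coefficients against Kd (s + 2)(s + 6) = Kd (s^2 + 8s + 12):
Kd = 1.0
Kp = Kd * (2.0 + 6.0)    # s coefficient: sum of the zero magnitudes -> 8
Ki = Kd * (2.0 * 6.0)    # constant term: product of the zero magnitudes -> 12

zeros = np.roots([Kd, Kp, Ki])
print(np.sort(zeros))    # the two PID zeros land at the chosen locations
```

Once the zeros are fixed this way, varying the overall gain traces out the reshaped loci paths described above.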
If we have two problem poles, we can generally attract them to a better location by tuning the PID gains to place the zeros so that those two poles terminate at ‘‘our’’ zeros. Knowing that the root loci paths are attracted to zeros enables us to shape

Figure 10  PD controller with velocity sensor feedback.


Chapter 5

our plot to achieve our desired response. Also, if we recognize that we add two zeros and only one pole, then our compensated system will have one less asymptote than our open loop system. This will change our asymptote angles and the corresponding loci paths. It is this ability to quickly and visually (graphically) see the effects of adding the compensator that makes the root locus technique so powerful, not only for PID compensators but also for most others. Another method for tuning multiple gains is through the use of contour plots. If we develop a root locus plot varying Kp for various values of integral and/or derivative gain, we can map out the combination that best suits our needs. This will be seen in an example at the end of this section. This approach partially overcomes one limitation of the graphical rules for plotting the root loci, which assume that a single gain on the system is varied to develop the paths. Here we still vary the gain, but we do so multiple times, changing either the integral or derivative gain between the plots. When we are done, we get families of curves showing the effects of each gain. We pick the curve that best approaches our desired locations for the poles of the system. Finally, at times it is possible to close the loop and analytically determine the gains that will place the poles at their desired locations. By comparing the coefficients of the characteristic equation arising from closing the loop with gain variables against the coefficients of the desired polynomial, we can determine the necessary gains. All these methods are limited by the accuracy of the model being used, and at times the best approach is to use this knowledge in a general sense, as guidelines for tuning through the typical ‘‘trial and error’’ approaches. Methods are presented in later chapters for cases where this approach results in more unknowns than equations.
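The contour idea can be sketched numerically: sweep Kp for a few fixed values of Ki and collect the closed loop poles. The plant 1/[(s + 1)(s + 2)] and the gain values below are hypothetical, chosen only for illustration (the text's own examples use Matlab):

```python
import numpy as np

def pi_poles(Kp, Ki):
    """Closed-loop poles for the hypothetical plant 1/((s+1)(s+2)) with
    PI control.  CE: s(s+1)(s+2) + Kp*s + Ki = s^3 + 3s^2 + (2+Kp)s + Ki."""
    return np.roots([1.0, 3.0, 2.0 + Kp, Ki])

# One 'contour' per Ki value: a family of pole sets as Kp is swept
contours = {Ki: [pi_poles(Kp, Ki) for Kp in np.linspace(0.1, 10, 25)]
            for Ki in (0.5, 2.0)}

# e.g. at Kp = 1, Ki = 2 the closed loop is stable (all poles in the LHP)
assert max(p.real for p in pi_poles(1.0, 2.0)) < 0
```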
With the computing power now available to most users, methods exist to easily plot the root locus as a function of any changing parameter in our system, whether it be an electronic gain or the physical size or mass of a component in the system. These methods are presented more fully in Chapter 11. In this section, however, we limit our discussion to the design of PID controllers using root locus techniques. To illustrate the principles of designing PID controllers using root locus techniques, the remainder of this section consists of examples that are worked for various designs and design goals.

EXAMPLE 5.1 A machine tool is not allowed any overshoot or steady-state errors to a step input. The system is represented by the open loop transfer function given below.

a. Develop the closed loop block diagram using a proportional controller.
b. Draw the root locus plot.
c. Find the controller gain K where the system has the minimum settling time and no overshoot.

G(s) = K (s + 6) / [s(s + 4)(s + 5)]

Part A. The block diagram is given in Figure 11, where H(s) = 1.

Analog Control System Design

Figure 11  Example: block diagram for proportional controller.

Part B. Develop the root locus plot. Summarizing the rules, we have three poles at 0, −4, and −5 and one zero at −6. Therefore, we have two asymptotes with angles of ±90 degrees. The asymptote intersection point occurs at s = −1.5. The valid sections of real axis are between 0 and −4 and between −5 and −6. The break-away point occurs at −1.85. Now we are ready to draw the root locus plot in Figure 12.

Part C. Solve for the gain K where we have the minimum settling time without any overshoot. The minimum settling time will occur when we have placed all roots as far to the left as possible. Remember that since all individual responses add for linear systems, the slowest response will also determine the settling time for the system. To avoid any overshoot, we must keep all poles on the real axis. The point that meets both conditions is the break-away point at −1.85. To find our controller gain for this point, we apply the magnitude condition:

K |s − z1| / (|s − p1| |s − p2| |s − p3|) = K |−1.85 + 6| / (|−1.85| |−1.85 + 4| |−1.85 + 5|) = K (4.15 / 12.53) = 1, so K ≈ 3

Now we have achieved our design goals for the system using a proportional controller. The steady-state error from a step input is zero because it is a type 1 system, and with a gain of 3 on the proportional controller all the system poles are negative and real, so the system should never experience overshoot (it does become possible if errors exist in the model). The system settling time can be found by knowing that in four time constants (of the slowest pole) the response is within 2% of the final value. The time constant is the inverse of 1.85, and the settling time is then calculated as approximately 2.2 seconds.
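Those numbers can be checked with a short computation (a Python sketch, not from the text, using NumPy in place of Matlab):

```python
import numpy as np

# Break points of G(s) = K(s+6)/[s(s+4)(s+5)] satisfy dK/ds = 0 with
# K(s) = -s(s+4)(s+5)/(s+6); the numerator of the derivative works out
# to 2s^3 + 27s^2 + 108s + 120.
cands = np.roots([2.0, 27.0, 108.0, 120.0])

# Only the candidate on the valid real-axis section (0 to -4) is a break-away
s_b = next(r.real for r in cands if abs(r.imag) < 1e-9 and -4 < r.real < 0)

# Magnitude condition at the break-away point
K = abs(s_b) * abs(s_b + 4) * abs(s_b + 5) / abs(s_b + 6)
print(round(s_b, 2), round(K, 1))   # -1.85 and about 3.0
```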

Figure 12  Example: root locus plot for P controller design.


If we wish to have our system settle more quickly, we realize that we will not be able to achieve this using only a proportional controller. With a proportional controller we can only choose points along the root locus plot; we cannot change the shape of the plot. Later examples will illustrate how we can move the poles to more desirable locations.

EXAMPLE 5.2 Using the system transfer function below, determine

a. The block diagram for a unity feedback control system;
b. The steady-state error from a step input as a function of Kp using a proportional controller;
c. The root locus plot using a PI controller;
d. Descriptions of the response characteristics available with the PI controller.

System transfer function:

G(s) = 4 / [(s + 1)(s + 5)]

Part A. The block diagram for a unity feedback control system is given in Figure 13.

Part B. To find the error from a step input using only the proportional gain, we set Ki to zero and find the closed loop transfer function:

C(s)/R(s) = 4Kp / (s² + 6s + 5 + 4Kp)

We apply the final value theorem (FVT) and let R(s) = 1/s to find the error as e(t) = r(t) − c(t):

css = 4Kp / (5 + 4Kp)

and

ess = 1 − css = (5 + 4Kp)/(5 + 4Kp) − 4Kp/(5 + 4Kp) = 5 / (5 + 4Kp)
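A quick numeric check of this final value theorem result (a Python sketch, not from the text):

```python
def step_ss_error(Kp):
    """Steady-state step error for the unity-feedback loop with plant
    4/((s+1)(s+5)) and proportional gain Kp: ess = 5/(5 + 4*Kp)."""
    css = 4.0 * Kp / (5.0 + 4.0 * Kp)   # DC gain of the closed loop
    return 1.0 - css

for Kp in (1, 5, 50, 500):
    # error shrinks as Kp grows but never reaches zero with P control alone
    print(Kp, step_ss_error(Kp))
```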

So for example, if Kp = 5, our steady-state error to a step input would be 0.2.

Part C. To demonstrate the effects of going from P to PI control, let us draw both plots on the same s-plane. With proportional control only, the root locus plot is very simple: it falls on the real axis between −1 and −5, with a break-away and asymptote intersection point at −3. There are two asymptotes at ±90 degrees and the system never becomes unstable. Moving to the PI controller, we see that we add a pole at zero, but we also add a zero, and so we still have two asymptotes at ±90 degrees. The zero we can place through our choice of the gains Kp and Ki. To illustrate the effects of different zero

Figure 13  Example: block diagram for PI controller.


locations, we will draw two plots corresponding to two different choices. Since our overall goal is to eliminate our steady-state error and, if possible, increase the dynamic response, we will compare both results with the initial P controller. For both cases we will keep the math simple by choosing zero locations that cancel out a pole. This is not necessary and in some cases not desired (as when trying to cancel a pole in the right-hand plane [RHP]).

Case 1: Let Ki/Kp = 1; then the zero is at −1 and cancels the pole at −1. This leaves us with poles at 0 and −5. The valid section of real axis is between these poles. The asymptote intersection point and the break-away point are then both at −2.5.

Case 2: Let Ki/Kp = 5; then the zero is at −5 and cancels the pole at −5. This leaves us with poles at 0 and −1. The valid section of real axis is between these poles. The asymptote intersection point and the break-away point are both at −0.5.
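The break-away points for the two cases follow directly from dK/ds = 0 (a Python check, not from the text):

```python
import numpy as np

# Case 1 (zero cancels the pole at -1): remaining poles {0, -5}
# K(s) = -s(s+5)  =>  dK/ds = -(2s + 5) = 0
ba1 = np.roots([2.0, 5.0])[0]

# Case 2 (zero cancels the pole at -5): remaining poles {0, -1}
# K(s) = -s(s+1)  =>  dK/ds = -(2s + 1) = 0
ba2 = np.roots([2.0, 1.0])[0]

print(ba1, ba2)   # -2.5 and -0.5, matching the two cases above
```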

To compare the effects, let us now plot all three cases on the s-plane as shown in Figure 14.

Part D. To summarize this example we will comment on both the steady-state and transient response characteristics for each controller. With proportional control we had a type 0 system, and when we closed the loop it verified that the error is proportional to the inverse of the gain. With the asymptotes at −3, it had a time constant of 1/3 second and therefore a settling time of 4/3 seconds. When we added an integral gain, the system became a type 1 system and the steady-state error due to a step input became zero. This is true regardless of where we place the zero using the ratio Ki/Kp. The transient responses, however, varied. When we place the zero at −5, our asymptotes intersect at −1/2 and we have a slow system with a settling time of 8 seconds (2% criterion). Moving the zero in to −1 places our asymptote at −2.5, and our settling time decreases back down to 1.6 seconds, much closer to the original proportional controller. In both cases, however, adding the integral gain tended to destabilize the system, as we would expect when a pole is placed at the origin. So we see that the integral gain does drive our steady-state error to zero for a step input but also hurts the transient response to different degrees, depending on where we place the zero. Finally, we must mention the effects of using a zero to cancel a pole. Although in theory this is easy to do, as just shown, in practice it is nearly impossible. It relies on having accurate models, linear systems, and no parameter changes. If we are just a little off (say a zero at −1.1 or −0.9 instead of exactly −1), our root locus plot is completely different, as shown in Figure 15. Even though the basic shape is the same, we now have a

Figure 14  Example: root locus plot for P and PI controllers.

Figure 15  Example: root locus plot for PI without pole-zero cancellation.

much slower pole that never gets to the left of the zero. If the zero is slightly to the left of the pole at −1, we have much the same effect but with another break-away (and break-in) point around the pole at −1. There still remains a pole much closer to the origin. It is for these reasons that it is generally not wise to try to cancel an unstable pole with a zero; it is better to use the zero to ‘‘draw’’ the pole back into the left-hand plane (LHP). If we do design for pole-zero cancellation, we should always check the ramifications if the zero does not exactly cancel the pole.

EXAMPLE 5.3 Using the block diagram below, where the open loop system is unstable, design the appropriate PD controller that stabilizes the system and provides less than 5% overshoot. The block diagram for the PD unity feedback control system is given in Figure 16. With only a proportional controller there is no way to stabilize the open loop system. We have an open loop pole in the RHP, and the asymptote intersection point and break-away point also fall in the RHP using a proportional controller. This means that not only will one open loop pole be unstable, but as the gain is increased the other pole also becomes unstable. We will now add derivative compensation to the controller, which allows us to use a zero to pull the system back into the LHP. Once we add a zero, we decrease the original number of asymptotes (2) by one and now have only one, falling on the negative real axis. If the zero is to the left of −1, the loci will break away from the real axis between −1 and 5 and break in to the axis somewhere to the left of the zero, thus giving us the response that we desire. For this example let us place the zero at −5 and draw the root locus plot. Solving the characteristic equation for the gain K and taking the derivative with respect to s allows us to calculate the break-away and

Figure 16  Example: block diagram for PD controller.

Figure 17  Example: root locus plot for PD controller design.

break-in points as 1.325 and −11.325, respectively. This leads to the root locus plot in Figure 17. Since our only performance specification calls for less than 5% overshoot, we can pick the break-in point for our desired pole locations. This gives us the fastest settling without any overshoot. Any more or less gain moves at least one pole further to the right. To solve for the gain required (knowing that Kp/Kd = 5), we apply the magnitude condition at our desired pole location, s = −11.325. For this example we will include the system gain of 5 in the magnitude condition, making the K that we solve for equal to our desired derivative gain:

K (5 |s − z1|) / (|s − p1| |s − p2|) = K (5 |−11.325 + 5|) / (|−11.325 − 5| |−11.325 + 1|) = K (31.6 / 168.6) = 1

Kd ≈ 5.3 and Kp = 5Kd ≈ 26.7

So we have achieved the effect of stabilizing the system without any overshoot by adding the derivative portion of the compensator. It is important to remember the practical issues associated with implementing the derivative controller due to the noise amplification problems. A good controller design on paper may actually harm the system when implemented if the issues described earlier in this section are not considered and addressed.

EXAMPLE 5.4 Using the system represented by the block diagram in Figure 18, find the required gains for a PID controller to give the system a natural frequency of 10 rad/sec and a damping ratio of 0.7. First, let us see how the system currently responds with only a proportional controller and what has to be changed. With a proportional controller there are two asymptotes and the intersection point is at −4. The system gain can be varied to produce anywhere from an overdamped to an underdamped response, but the root locus path does not go through the desired points. The desired pole locations can be found directly from the performance specifications, since the natural frequency is the radius and the damping ratio is the cosine of

Figure 18  Example: block diagram for PID controller.

the angle (a line at approximately 45 degrees from the negative real axis). Thus, the desired locations are calculated as

Real component: σ = −10(0.7) = −7

Imaginary component: ωd = (10² − 7²)^0.5 ≈ 7, so the desired poles are s = −7 ± 7j

Without the I and D gains the loci paths follow the asymptote at −4. To illustrate an alternative method, we will first solve for the gains analytically and then verify them using the root locus plot. To solve for the gains we first close the loop and derive the characteristic equation

CE = s³ + (8 + 10Kd)s² + (15 + 10Kp)s + 10Ki

To equate coefficients we still need to place the third pole to write the desired characteristic equation. Let us make it slightly faster than the complex poles and place it at −10. Our desired characteristic equation becomes

CE = (s + 7 − 7j)(s + 7 + 7j)(s + 10) = s³ + 24s² + 238s + 980

All we have to do to solve for our gains is to equate the powers of s and determine what gains make the coefficients equal. This is particularly easy here since each gain appears in only one coefficient. Our gains are calculated as

8 + 10Kd = 24, 15 + 10Kp = 238, 10Ki = 980

Kd = 1.6, Kp = 22.3, Ki = 98
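The coefficient-matching step is easy to verify numerically (a Python sketch, not from the text):

```python
import numpy as np

# Desired characteristic polynomial from the chosen pole locations
desired = np.poly([-7 + 7j, -7 - 7j, -10]).real   # s^3 + 24s^2 + 238s + 980

# Match against CE = s^3 + (8 + 10Kd)s^2 + (15 + 10Kp)s + 10Ki
Kd = (desired[1] - 8) / 10
Kp = (desired[2] - 15) / 10
Ki = desired[3] / 10
print(Kd, Kp, Ki)   # 1.6 22.3 98.0
```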

This allows us to update the system block diagram as given in Figure 19. Finally, to confirm our controller settings, let us develop the root locus plot for the system using the gains calculated. The value of Kp is varied during the course of developing the plot, but if we include it here our gain at the desired pole locations should be K = 1. To develop the plot, we have three poles at 0, −3, and −5 and two zeros from the controller at −7 ± 3.5j. This gives us one asymptote along the real axis and valid sections of real axis between 0 and −3 and to the left of −5. Therefore, our loci must break away from the axis between 0 and −3 and travel to the two

Figure 19  Example: block diagram solution for PID controller.

Figure 20  Example: root locus plot for PID controller design.

zeros, while the third path just leaves the pole at −5 and follows the asymptote to the left. The resulting plot is drawn in Figure 20. After developing the root locus plot, we see that finding the desired characteristic equation and using the gains to make the coefficients match results in the poles traveling through the desired locations. This method will not work for all systems, depending on the number of equations and unknowns. An alternative method using contour lines is presented in a later example. A good approach to take when we have multiple gains (or parameters that change) is simply placing the poles and zeros that we have control over in locations that we know cause the desired changes. We know how the valid sections of real axis, asymptotes, poles ending at zeros, etc., will all affect our plot, and we can use this knowledge when we design our controller. Finally, since we added an integral compensator, we made the system go from type 0 to type 1 and we will not have steady-state errors when the input function is a step (or any function with a constant final value).

EXAMPLE 5.5 To illustrate the use of computer tools, we work the previous examples using Matlab to generate the root locus plots and solve for the desired gains, taking the system from Example 5.1 as shown here in Figure 21. To solve for the proportional gain where the system does not experience overshoot and has the fastest possible settling time, we will define the system in Matlab, generate the root locus plot, and use rlocfind to locate the gain. The commands in m-file format are as follows:

Figure 21  Example: block diagram for P controller (Matlab).


%Matlab commands to generate Root Locus Plot and find desired gain K
num=[1 6];                  %Define numerator s+6
den=[1 9 20 0];             %Define denominator from poles
sys=tf(num,den)             %Make LTI System variable
rlocus(sys)                 %Generate Root Locus Plot
'Place crosshairs at critical damping point'
Kp=rlocfind(sys)            %Use crosshairs to find gain
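What rlocfind reports is just the magnitude condition evaluated at the clicked point; the same number can be computed directly (a Python sketch, not from the text):

```python
import numpy as np

def locus_gain(s, zeros, poles):
    """Gain K satisfying |K G(s)| = 1 at a point s on the locus
    (the value rlocfind reports for that point)."""
    num = np.prod([abs(s - z) for z in zeros]) if zeros else 1.0
    den = np.prod([abs(s - p) for p in poles])
    return den / num

# Break-away point of Example 5.1 (zero at -6, poles at 0, -4, -5)
print(locus_gain(-1.848, [-6], [0, -4, -5]))   # about 3
```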

Executing these commands produces the root locus plot in Figure 22 and allows us to place the crosshairs at our desired pole location and click. After doing so, we see back in the workspace area the gain K that moves us to that location. After clicking on the break-away point we find that the gain must be K = 3, or exactly what we found earlier when solving this problem manually. From this point we can easily modify the numerator and multiply it by 3, make a new transfer function (linear time invariant [LTI] variable), and simulate the step or impulse response of the system. In addition, using rltool we can interactively add poles and zeros in both the forward and feedback loops and observe their effects in real time. We can drag the poles or zeros around the s-plane and watch the root locus

Figure 22  Example: Matlab root locus plot for P controller.


plots as we do so. As was mentioned earlier, most computer packages designed for control systems have similar abilities.

EXAMPLE 5.6 In this example we again take a system from earlier and use Matlab to design a PI controller to remove the steady-state error (from a step input) and then choose the fastest possible settling time. The system block diagram with the controller added is shown in Figure 23. In Matlab we define three transfer functions: the plant, the controller with Ki/Kp = 1, and the controller with Ki/Kp = 5.

%Matlab commands to generate Root Locus Plot and find desired gains Kp and Ki
sysp=tf([4],[1 6 5]);       %Define the plant transfer function
z1=1; z2=5;                 %Define ratios of Ki/Kp = 1 and 5
syspi1=tf([1 z1],[1 0]);    %Controller transfer function with Ki/Kp=1
syspi5=tf([1 z2],[1 0]);    %Controller transfer function with Ki/Kp=5
subplot(1,2,1); rlocus(syspi1*sysp)   %Generate Root Locus Plot
subplot(1,2,2); rlocus(syspi5*sysp)   %Generate Root Locus Plot

When we execute these commands, Matlab produces one plot window with two subplots using the subplot command, shown in Figure 24. We see that the results correspond well with the results from the earlier example. When Ki/Kp = 1 the system settles much faster since the poles are further to the left. This relies on canceling a pole with a zero, and if we are slightly off, large or small, the results become as shown in Figure 25. Here we see that even if our zeros are only slightly away from the pole that was intended to be canceled, the root locus plot changes and a very different response is obtained. In general it is not considered good practice to rely on mathematically canceling poles with zeros. Remember that our system is only a linear approximation to begin with, let alone additional errors that may be introduced, which will cause the poles to shift. To finish this example, let us close the loop using the original system with only a proportional controller and Kp = 2, followed by our PI controller where Kp = 2 and Ki = 2. We will use Matlab to generate a step response for both systems and compare the steady-state errors. The responses are given in Figure 26. As expected from previous discussions, the steady-state error went to zero when we added the PI controller and the system became type 1. Also, as noted from the root locus plots, adding a pole at the origin, as the integrator does, hurts

Figure 23  Example: block diagram for PI controller (Matlab).

Figure 24  Example: Matlab root locus plots for PI controllers.

Figure 25  Example: Matlab root locus plots with PI controllers and errors.

Figure 26  Example: Matlab step response plots for P and PI controllers.

our transient response, and the settling time increases. If overshoot is allowable, we could also make the P controller more attractive by increasing the gain and accepting slightly more overshoot.

EXAMPLE 5.7 Using the block diagram in Figure 27, use Matlab to design a PD controller that stabilizes the system and limits the overshoot to less than 5%. In this system we see that the plant transfer function is unstable due to the pole in the RHP.

Figure 27  Example: block diagram for PD controller design (Matlab).

To solve this using Matlab, we will generate the root locus plot using a simple proportional controller (basic root locus plot) and using a PD controller to stabilize the system. The commands used are as follows:

%Matlab commands to generate Root Locus Plot
% and find desired gains Kp and Kd
clear;
sysp=tf([5],[1 -4 -5]);             %Define the plant transfer function
syspd=tf([1 5],[1]);                %PD Controller TF with zero at -5
subplot(1,2,1); rlocus(sysp)        %Generate Root Locus Plot w/ P controller
subplot(1,2,2); rlocus(syspd*sysp)  %Generate Root Locus Plot w/ PD controller
k=rlocfind(syspd*sysp);

sys1=tf(5,[1 -4 -5]);               % Compare the step responses using P and PD
sys2=tf(5*5*[1 5],[1 -4 -5]);

sys1cl=feedback(sys1,1)             %Close the loop
sys2cl=feedback(sys2,1)

figure;                             %Create a new figure window
step(sys1cl);                       %Create step response for P controller
hold;
step(sys2cl);                       %Create step response for PD controller
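The break points and the gain that rlocfind returns near the break-in point can be cross-checked numerically (a Python sketch, not from the text; it assumes the zero at −5 as above):

```python
import numpy as np

# Loop transfer function 5K(s+5)/((s-5)(s+1)); break points satisfy
# dK/ds = 0, which reduces to s^2 + 10s - 15 = 0
cands = np.roots([1.0, 10.0, -15.0])
s_in = cands.real.min()   # break-in point, left of the zero at -5

# Magnitude condition at the break-in point (plant gain of 5 included)
Kd = abs(s_in - 5) * abs(s_in + 1) / (5 * abs(s_in + 5))
Kp = 5 * Kd               # since Kp/Kd = 5
print(round(s_in, 3), round(Kd, 1), round(Kp, 1))   # -11.325, 5.3, 26.6
```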

When we plot the step responses it is easy to see that the system controlled only with a proportional gain quickly goes unstable, while the PD controller stabilizes the system and minimizes the overshoot. The step responses are given in Figure 29. The interesting point that should be made is that the proportional gain chosen for the PD controller was intended to be at the repeated-roots (break-in) point, and yet the step response clearly shows an overshoot of the final value. What we must remember is that the system is no longer a true second-order system

Figure 28  Example: Matlab root locus plots for P and PD controllers.


but includes a first-order term in the numerator (the zero from the controller). This alters the response, as shown in Figure 29, to where the system now does experience slight overshoot. The effect of adding the derivative compensator clearly demonstrates the added stability it provides. The disadvantage, as discussed earlier, is that the system becomes much more susceptible to noise amplification because of the derivative portion.

EXAMPLE 5.8 Recalling that we solved for the required PID gains in Example 5.4, we now use Matlab to verify the root locus plot and also check the step response of the compensated system. We tuned the system using analytical methods to have a damping ratio equal to 0.7 and a natural frequency equal to 10 rad/sec. Using the gains from earlier allows us to use the system block diagram as given in Figure 30 and simulate it using Matlab. To achieve the damping ratio and natural frequency, we know that the poles should go through the points s = −7 ± 7j. Using the Matlab commands given below, we can generate the root locus plot and the step response for the compensated system. The sgrid command allows us to place lines of constant damping ratio (0.7) and natural frequency (10 rad/sec) on the s-plane. This command, in conjunction with rlocfind, allows us to find the gain at the desired location.

Figure 29  Example: Matlab step responses for P and PD controllers.

Figure 30  Example: block diagram solution for PID controller (Matlab).

%Matlab commands to generate Root Locus Plot
clear;
sysp=tf([10],[1 8 15]);           %Define the plant transfer function
syspid=tf([1.6 22.3 98],[1 0]);   %PID Controller TF with the gains found earlier
rlocus(syspid*sysp)               %Generate Root Locus Plot w/ PID controller
sgrid(0.707,10)                   %Place lines of constant zeta and wn
k=rlocfind(syspid*sysp);

syscl=feedback(syspid*sysp,1)     %Close the loop
figure;                           %Create a new figure window
step(syscl);                      %Create step response for PID controller

The resulting root locus plot is shown in Figure 31.
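We can also confirm numerically that the closed loop poles hit the specification (a Python sketch, not from the text; the radius comes out just under 10 rad/sec because ωd was rounded to 7 in the design):

```python
import numpy as np

# Closed-loop CE with the computed gains:
# s^3 + (8 + 10*1.6)s^2 + (15 + 10*22.3)s + 10*98
poles = np.roots([1.0, 8 + 10 * 1.6, 15 + 10 * 22.3, 10 * 98.0])
dominant = poles[np.argmax(poles.real)]   # one of the -7 +/- 7j pair

wn = abs(dominant)                   # natural frequency = radius from origin
zeta = -dominant.real / wn           # damping ratio = cos of the pole angle
print(round(wn, 2), round(zeta, 3))  # about 9.9 rad/sec and 0.707
```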

Figure 31  Example: Matlab root locus plot for PID controller.


The gains solved for earlier provide the correct compensation, and the root locus paths go directly through the intersection of the lines representing our desired damping ratio and natural frequency. Once again, we can verify the results by using Matlab to generate a step response of the compensated system, as given in Figure 32. As we see, the compensated system behaves as desired and the response quickly settles to the desired value. In the next example we see how to generate contours to show the results of varying not only the proportional gain (assumed to be varied in generating the plot) but also additional gains. The result is a family of plots called contours.

EXAMPLE 5.9 In this example we wish to design a P and a PD controller for the system block diagram shown in Figure 33 with a unity feedback loop and compare the results. Let us now use Matlab to tune the system to have a damping ratio of 0.707 and to solve for the gain that makes the system go unstable. The Matlab commands used to generate the root locus plot in Figure 34 and to solve for the gains are given below.

%Program commands to generate Root Locus Plot and find various gains, K
num=1;                              %Define numerator
den=conv([1 0],conv([1 1],[1 3]));  %Define denominator from poles
sys1=tf(num,den)                    %Make LTI System variable

Figure 32  Example: Matlab step response for PID compensated system.

Figure 33  Example: block diagram for P and PD contours (Matlab).

rlocus(sys1)                %Generate Root Locus Plot
'Place crosshairs at marginal stability point'
Km=rlocfind(sys1)           %Use crosshairs to find gain
sgrid(0.707,0.6);           %Place lines of constant damping ratio (0.707) and wn (0.6 rad/sec)
'Place crosshairs at intersection point'
Kt=rlocfind(sys1)           %Find gain for desired tuning

When we examine this plot, we see that we can achieve any damping ratio, but the natural frequency is then defined once the damping ratio is chosen. Using the rlocfind command we can find the gain where the system has a damping ratio equal to 0.707 and the gain where the system goes unstable. The system becomes marginally stable at roughly Kp = 11 (graphically; the Routh criterion gives exactly Kp = 12) and has our desired damping ratio when Kp = 1. The corresponding natural frequency is equal to 0.6 rad/sec. If we want to change the shape of the root locus paths, we need to have something

Figure 34  Example: Matlab root locus plot for initial P controller.

Figure 35  Example: block diagram for PD contours on root locus plot (Matlab).

more than a proportional compensator. If we switch to a PD controller as shown in Figure 35 and use Matlab to generate root contour plots, we can demonstrate the effects of changing the derivative gain and how the actual shape of the root locus paths changes. Our new open loop transfer function that must be entered into Matlab is

GH = (Kp + Kd s) / [s(s + 1)(s + 3)] = Kp [1 + (Kd/Kp)s] / [s(s + 1)(s + 3)]
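The same contour family can be generated numerically by sweeping Kd/Kp at a fixed Kp (a Python sketch, not from the text):

```python
import numpy as np

def pd_poles(Kp, ratio):
    """Closed-loop poles for GH = Kp(1 + ratio*s)/(s(s+1)(s+3)),
    where ratio = Kd/Kp.  CE: s^3 + 4s^2 + (3 + Kp*ratio)s + Kp."""
    return np.roots([1.0, 4.0, 3.0 + Kp * ratio, Kp])

# One contour per Kd/Kp value, matching the values used in the Matlab plot
for ratio in (0.0, 0.3, 1.0, 10.0):
    poles = pd_poles(1.0, ratio)
    assert max(p.real for p in poles) < 0   # stable at Kp = 1 for each ratio
```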

Remember that Matlab varies a general gain in front of Kp, and thus GH must be written as above. The contours are actually plotted as functions of Kd/Kp. To generate the plot, we use the following commands:

%Program commands to generate Root Locus Contours
% and find various gains, Kp and Kd
den=conv([1 0],conv([1 1],[1 3]));   %Define denominator from poles
Kd=0;                                %First value of Kd
num=[Kd 1];                          %Define numerator
sys2=tf(num,den)                     %Make LTI System variable
rlocus(sys2)                         %Generate Root Locus Plot
sgrid(0.707,2);                      %Place lines of constant damping ratio (0.707) and wn (2 rad/s)
hold;
Kd=0.3; num=[Kd 1]; sys2=tf(num,den); rlocus(sys2)
Kd=1;   num=[Kd 1]; sys2=tf(num,den); rlocus(sys2)
Kd=10;  num=[Kd 1]; sys2=tf(num,den); rlocus(sys2)
hold;                                %Releases plot

The root locus plot developed in Matlab is given in Figure 36, where we see that using the hold command allows us to choose different controller gains and plot several loci on the same s-plane. Here we see the advantage of derivative control and how we can use it to move the root locus to more desirable regions. Even using the damping ratio criterion from the P controller, we are able to achieve faster responses by moving all the poles further to the left. If we are having problems with stability and can control or filter the noise in our system, then it will be beneficial to add derivative control.

Figure 36  Example: Matlab root contour plots for PD controller.

In working through this section and its examples, we should see how the tools developed in the earlier chapters all become key components in designing stable control systems. The usefulness of our controller design is directly related to the accuracy of our model; if our model is poor, so likely will be our result. Understanding where we want to place the poles and zeros is also of critical importance. Since many computer packages will develop root locus plots, we need to know how to interpret the results and make correct design decisions.

5.4.3 Frequency Response Design of PID Controllers

Designing in the frequency domain is similar to designing in the s-domain using root locus plots. Whereas in the s-domain we used our knowledge of poles and zeros to move the loci paths to more desirable locations, in the frequency domain we use our knowledge of the magnitude and phase contributions of each controller term and how they add to the total frequency response curve, which allows us to shape the curve to our specifications. This stems from our previous discussion of Bode plots and how blocks in series in a block diagram add in the frequency domain. Since a controller placed in the forward path of our system block diagram is in series with the physical system model, the effect of adding the controller in the frequency domain is found by adding its magnitude and phase relationships to the existing Bode plot of the physical system. Our goal, then, is to define how each controller term adds to the magnitude and phase of the existing system. The amount of magnitude and phase that we wish to add has already been defined in terms of gain margin and

Analog Control System Design

227

phase margin. To begin our discussion of designing PID controllers using frequency domain techniques, let us first define the Bode plots of the individual controller terms. The proportional gain, as in root locus, does not allow us to change the shape of the Bode plot or the phase angle, but rather allows us only to vary the height of the magnitude plot. Thus, we can use the proportional gain to adjust the gain margin and phase margin, but only as is possible by raising or lowering the magnitude plot. Remember that when we adjust the gain margin and phase margin for our open loop system, we are indirectly changing the closed loop response. The phase margin relates the closed loop damping ratio and the crossover frequency to the closed loop natural frequency. It is easy to determine the proportional controller gain K that will cause the system to go unstable by measuring the gain margin. If K is increased by the amount of the gain margin, the system becomes marginally stable. Assuming the gain margin, GM, is measured in decibels, then K is found as K ¼ 10

GMdB 20
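As a minimal numeric sketch of this conversion (the helper name is ours, not the text's), a gain margin measured in decibels maps to a multiplicative gain as follows:

```python
def gain_from_margin_db(gm_db):
    """Convert a gain margin in decibels to the multiplicative gain K
    that raises the magnitude plot by that amount (20*log10(K) = GM_dB)."""
    return 10 ** (gm_db / 20.0)

# A measured gain margin of 20 dB corresponds to K = 10:
K = gain_from_margin_db(20.0)
print(K)  # 10.0
```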

The integral gain has the same effects in the frequency domain as in the s-domain, where we saw it eliminate the steady-state error (in many cases, not all) and also tend to destabilize the system (it moves the loci paths closer to the origin). In the frequency domain it adds a constant slope of -20 dB/decade (dec) to the low frequency asymptote, which gives our system infinite steady-state gain as the frequency approaches zero. The side effect concerns stability: we also see that the integral term adds a constant -90 degrees of phase angle to the system. This tends to destabilize the system by decreasing the phase margin. Some of the phase margin might be reclaimed if, after adding the integral gain, we can use a lower proportional gain, since the steady-state performance is now determined more by the integral gain than by the proportional gain. In other words, if we had a large proportional gain for the purpose of decreasing the steady-state error (prior to adding the integral gain), while allowing more overshoot because of the high proportional gain, then after adding the integral gain to help our steady-state performance we may be able to reduce the proportional gain to help stabilize (reduce overshoot in) our overall system. This effect is easily seen in the frequency domain, where we achieve high steady-state gains by adding the low frequency slope of -20 dB/dec while at the same time lowering the overall magnitude plot by reducing the proportional gain K. Finally, the same comparisons between the s-domain and the frequency domain hold true for the derivative gain. In the s-domain we used the derivative gain to attract root loci paths to more desirable locations by placing the zero from the controller where we wanted it on the s-plane. In the frequency domain we stabilize the system by adding phase angle.
A pure derivative term adds +90 degrees of phase angle to our system, thus increasing our phase margin (and the corresponding damping ratio). The advantage of Bode plots is that, with knowledge of the controller "shapes," we can pick the correct controller to shape our overall plot to the desired performance levels. To design our controller, we determine the amount of magnitude and phase lacking in the open loop system and pick our controller (type and gains) to add the desired magnitude and phase components to the open loop plot.

Chapter 5

Figure 37  Bode plot contributions from PI controllers.

Figure 38  Bode plot contributions from PD controllers.

Figure 39  Bode plot contributions from PID controllers.

This is quite simple using the PI, PD, and PID Bode plots shown in Figures 37, 38, and 39, respectively. As is clear from the frequency domain contributions shown in the figures, the PID plots combine the features of PI and PD. Any time we design a controller, we must remember that we still need instrumentation and components capable of implementing it. In summary, to design a P, PI, PD, or PID controller in the frequency domain, simply draw the open loop Bode plot of the system and find out what needs to be added to achieve the performance goals. Use the proportional gain to adjust the height of the magnitude curve, the integral gain to give infinite gain at steady state (as the frequency approaches 0, the -20 dB/dec slope drives the magnitude toward infinity), and the derivative gain to add positive phase angle. Each controller factor can be added to the existing plot using the same procedure as when developing the original plot.

EXAMPLE 5.10 Using the block diagram in Figure 40, design a proportional controller in the frequency domain that provides a system damping ratio of approximately 0.45. To find the equivalent open loop phase margin that gives approximately ζ = 0.45 when the loop is closed, we use the approximation PM = 100ζ. Thus we want to determine the gain K required to give an open loop phase margin of 45 degrees. To accomplish this we draw the open loop uncompensated Bode plot, determine the frequency where the phase angle is equal to -135 degrees (45 degrees above -180 degrees), and measure the corresponding magnitude in dB. If we add the gain K required to make the magnitude plot cross the 0-dB line at this frequency, then the phase margin becomes our desired 45 degrees (remember that the phase plot is not affected by K). Our open loop Bode plot is given in Figure 41. Examining Figure 41, we see that at 10 rad/sec the phase angle is equal to -135 degrees and the corresponding magnitude is equal to -20 dB. Therefore, to make the phase margin equal to +45 degrees we need to raise the magnitude plot +20 dB, making the crossover frequency equal to 10 rad/sec. We can calculate our required gain as

K = 10^(GM_dB / 20) = 10^(20/20) = 10

When we add the proportional gain of 10 to the open loop uncompensated Bode plot, the result is the compensated Bode plot in Figure 42. Now we see that we have achieved the desired result, a phase margin equal to 45 degrees. This gives an approximate damping ratio of 0.45 for the closed loop system. As with root locus plots, the proportional controller does not allow us to change the shape of the plot, only its location. We must incorporate additional terms if we wish to modify the shape of the Bode plot. Also, we will still experience steady-state error from a step input: with proportional gain alone we have a type 0 system. Since our low frequency asymptote is at 20 dB (compensated system), corresponding to our gain of 10, we will have a steady-state error from a unit step input equal to 1/(K + 1), or 1/11.
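As a quick numeric check (a sketch: the closed-loop characteristic equation 0.1 s^2 + 1.1 s + 1 + K = 0 is the one the text later derives for this block diagram), the closed-loop damping ratio for K = 10 can be computed directly from the roots:

```python
import cmath

# Closed-loop characteristic equation for the example's block diagram:
# 0.1 s^2 + 1.1 s + (1 + K) = 0
K = 10.0
a, b, c = 0.1, 1.1, 1.0 + K

s1 = (-b + cmath.sqrt(b * b - 4 * a * c)) / (2 * a)  # one complex root
zeta = -s1.real / abs(s1)                            # damping ratio = cos(theta)

print(s1)    # roughly -5.5 + 8.93j
print(zeta)  # roughly 0.52, close to the 0.45 target from the 45-degree PM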

Figure 40  Example: block diagram of system with P controller.

Figure 41  Example: open loop Bode plot—uncompensated.

Figure 42  Example: open loop Bode plot—compensated with P gain.

For this example, let us close the loop manually and find the characteristic equation to verify the results obtained in the frequency domain. Closing the loop gives the characteristic equation

CE: 0.1 s^2 + 1.1 s + 1 + K = 0

If we let K = 10, we can solve for the roots of the characteristic equation as

s_1,2 = -1.1/(2(0.1)) ± sqrt(1.1^2 - 4(0.1)(11))/(2(0.1)) ≈ -5.5 ± 9j

The damping ratio is calculated as

ζ = cos(θ) = cos(tan^-1(9/5.5)) ≈ 0.5

So we see that even with the straight line approximation and the open loop to closed loop approximation, the design methods in the frequency domain quickly gave us a value close to our desired damping ratio. The advantages of working in the frequency domain are not very evident in this example, where we knew the system model; they are much greater when we are given Bode plots for our system and can design our controller directly from the existing plots. This bypasses several intermediate steps and is also quite easy to do without complex analysis tools.

EXAMPLE 5.11 In this example we will further enhance the steady-state performance of the system from the previous example (Figure 40) by adding a PI controller while maintaining the same goal of a closed loop damping ratio equal to 0.45. The contributions of a PI controller in the frequency domain were given in Figure 37 as

TF_PI = Ki (1 + (Kp/Ki) s) / s

The contributions can be added as three distinct factors: a gain Ki, a first-order lead with a break frequency of Ki/Kp, and an integrator. If we choose Ki = 10 and Kp = 10, then we add a low frequency asymptote of -20 dB/dec with no change in the magnitude curve after 1 rad/sec. The phase angle is decreased by an additional 90 degrees at 0.1 rad/sec and by 45 degrees at 1 rad/sec (the break frequency of the numerator), with no change after 10 rad/sec. Since we have raised the overall magnitude by a factor of 10 (Ki) and have not altered the phase angle at 10 rad/sec, the resulting phase margin is identical to the previous example: 45 degrees at a crossover frequency of 10 rad/sec. If we show the additional factors on a Bode plot and add them to the open loop uncompensated system, the result is the compensated Bode plot given in Figure 43. So by proceeding from a simple proportional compensator to a proportional + integral compensator, we have achieved the damping ratio of approximately 0.45 while eliminating the steady-state error from step inputs. Since we have two gains and one goal, different gain combinations can still achieve the design goals. If the crossover frequency were also specified, different gains would likely be required to meet both goals.
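The straight-line result can be checked numerically. This sketch assumes the plant G(s) = 1/((s + 1)(0.1 s + 1)), which is consistent with the example's gains and error values, and the PI controller above with Ki = Kp = 10:

```python
import cmath
import math

def L(w):
    """Open-loop response of the PI-compensated system at s = jw.
    Plant G(s) = 1/((s + 1)(0.1 s + 1)) is assumed from the example's
    numbers; the PI controller is Gc(s) = 10(1 + s)/s (Kp = Ki = 10)."""
    s = 1j * w
    Gc = 10 * (1 + s) / s
    G = 1 / ((s + 1) * (0.1 * s + 1))
    return Gc * G

# Bisect for the gain-crossover frequency, where |L(jw)| = 1
# (|L| decreases monotonically over this range).
lo, hi = 1.0, 20.0
for _ in range(60):
    mid = (lo + hi) / 2
    if abs(L(mid)) > 1:
        lo = mid
    else:
        hi = mid
wc = (lo + hi) / 2

pm = 180 + math.degrees(cmath.phase(L(wc)))  # phase margin in degrees
print(wc, pm)
```

The exact curves give a phase margin of roughly 52 degrees near 7.9 rad/sec, a modest difference from the 45 degrees at 10 rad/sec predicted by the straight-line asymptotes; both indicate adequate damping.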

Figure 43  Example: Bode plot of PI compensated system.

EXAMPLE 5.12 The final example demonstrating design techniques in the frequency domain stabilizes an open loop marginally stable system using a PD controller. The open loop system is G(s) = 1/s^2. A stabilizing controller (i.e., PD) is required since the system is only marginally stable in open loop. The performance specifications for the open loop compensated system are

- Phase margin of approximately 45 degrees.
- Crossover frequency of approximately 1 rad/sec.

Therefore, using our open loop to closed loop approximations, we would expect the closed loop controlled system to have a damping ratio of 0.45 and a natural frequency of 1.2 rad/sec (see Figure 49 of Chapter 4: at ζ = 0.45, ωc/ωn ≈ 0.82). We start by drawing the open loop Bode plot as shown in Figure 44. We have a constant phase of -180 degrees, a crossover frequency of 1 rad/sec, and a constant slope of -40 dB/dec. The system is marginally stable and needs a controller to stabilize it. To meet the specifications, the PD controller must keep the crossover frequency at its current location while adding 45 degrees of phase to the system. The PD controller is a first-order system in the numerator, given as

Gc(s) = Kp (1 + (Kd/Kp) s)

Therefore, it will add 45 degrees of phase at the break point, Kp/Kd. Since we want a phase margin equal to 45 degrees, making Kp/Kd = 1 meets the phase requirements. Now

Figure 44  Example: open loop Bode plot for PD controller design.

we must adjust the magnitude to maintain the crossover frequency at 1 rad/sec. When we plot the straight line approximations, we find that the break is already at the desired crossover frequency, so Kp = 1 will place us close to the desired magnitude. To fine tune, remember that the actual magnitude will be +3 dB above the asymptote at the break, and therefore the proportional gain can be adjusted to lower the plot 3 dB at the break point:

Kp = log^-1(-3/20) = 10^(-3/20) ≈ 0.7

The final open loop, controller, and total Bode plots are shown in Figure 45. Thus we see that the PD controller stabilized the system as desired and results in a phase margin equal to 45 degrees. As we discussed earlier in this chapter, implementing a derivative compensator makes the system very susceptible to noise; this must also be addressed when designing in the frequency domain. Only the design method is different; the resulting algorithm and controller implementation constraints remain the same.
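The final design can be verified numerically. This sketch evaluates the compensated open loop L(s) = 0.7(1 + s)/s^2 from the example at the 1 rad/sec design point:

```python
import cmath
import math

def L(w):
    """PD-compensated open loop L(s) = Gc(s)/s^2 with Gc(s) = 0.7(1 + s)
    (Kp = 0.7, Kd/Kp = 1 as designed in the example), at s = jw."""
    s = 1j * w
    return 0.7 * (1 + s) / s**2

w = 1.0
mag = abs(L(w))                             # ~0.99, so crossover is ~1 rad/sec
pm = 180 + math.degrees(cmath.phase(L(w)))  # exactly 45 degrees of phase margin
print(mag, pm)
```

At ω = 1 the magnitude is within 3 dB round-off of unity and the phase margin is exactly 45 degrees, confirming the straight-line design.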

5.4.4  On-Site Tuning Methods for PID Controllers

If we have a system that is very complex and we only wish to purchase a PID controller (or a variation of one) and tune it, several methods exist, as long as we have a dominant real closed loop pole or a pair of dominant complex conjugate closed loop poles. In other words, if we have a dominant real pole, the system response can be well described by a simple first-order response, and if we have dominant complex conjugate roots (overshoot and oscillation), the system can be well described by a second-order system. The most common method used when these conditions are

Figure 45  Example: Bode plot with PD compensation.

present, and the one covered here, is the Ziegler-Nichols method. The guidelines are derived to produce approximately 25% overshoot to a step input and a one-fourth wave decay rate, meaning each successive peak is one-fourth the magnitude of the previous one. This is a good balance between quickly reaching the desired command and settling down; in some cases it may be slightly too aggressive, and slightly less proportional gain can be used. Two variations exist: the first is based on an open loop step response curve, and the second on obtaining sustained oscillations in the proportional control mode. The step response method works well for type 0 systems with one dominant real pole (i.e., no overshoot). The ultimate cycle method is based on oscillations and therefore requires a pair of dominant complex conjugate roots for the system to oscillate. To facilitate the procedure, the following notation is introduced for PID controllers:

Gc = Kp (1 + 1/(Ti s) + Td s)

Instead of an integral gain and a derivative gain, an integral time Ti and a derivative time Td are used. Since we can switch between the two notations quite easily, it is simply a matter of personal preference. Ti and Td tend to be more common when using the Ziegler-Nichols method, since they correspond directly to signal measurements made on an oscilloscope and thereby simplify the tuning process. In the first case, the goal is to obtain the open loop system response to a unit step input, which should look something like Figure 46. Simply measure the delay time D and rise time T as shown and use Table 1 to calculate the controller settings for the controller you have chosen.
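The Table 1 step-response rules can be sketched as a small lookup function (the function name is ours; Ti is taken as infinite for pure P control, matching the table):

```python
def ziegler_nichols_step(T, D, controller="PID"):
    """Ziegler-Nichols tuning from an open loop step response S-curve,
    where D is the measured delay time and T the measured rise time
    (Table 1 of the text). Returns (Kp, Ti, Td)."""
    if controller == "P":
        return (T / D, float("inf"), 0.0)
    if controller == "PI":
        return (0.9 * T / D, D / 0.3, 0.0)
    if controller == "PID":
        return (1.2 * T / D, 2.0 * D, 0.5 * D)
    raise ValueError("controller must be 'P', 'PI', or 'PID'")

# e.g., a measured delay D = 0.5 s and rise time T = 2 s:
Kp, Ti, Td = ziegler_nichols_step(2.0, 0.5)
print(Kp, Ti, Td)  # 4.8 1.0 0.25
```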

Figure 46  Ziegler-Nichols step response S-curve measurements.

If we examine the PID settings a little more closely and substitute the tuning parameters into Gc in place of Kp, Ti, and Td, the result is a controller transfer function defined as

Gc = 0.6 T (s + 1/D)^2 / s

Here we see that this tuning method places the two zeros at -1/D on the real axis, along with the pole that is always placed at the origin (the integrator). The second method, defining an ultimate cycle, Tu, is useful when the critical gain can be found and oscillations sustained. To find the critical gain, use only the proportional control action (turn off I and D) and increase the proportional gain until the system begins to oscillate. Record the current gain, Ku, and capture a segment of time on an oscilloscope for analysis. The measurement of Tu can be made as shown in Figure 47, allowing the gains to be calculated according to Table 2. Once again, if we examine the PID settings more closely and substitute the tuning parameters into Gc in place of Kp, Ti, and Td, the controller transfer function becomes

Gc = 0.075 Ku Tu (s + 4/Tu)^2 / s

Similar to before, we see that this tuning method places the two zeros on the real axis at -4/Tu, along with the pole that is always placed at the origin. It is also possible to use the Ziegler-Nichols method analytically by simulating the step response or by determining the ultimate gain at marginal stability. Although this might serve as a good starting point, more options can be explored using the root locus and Bode plot techniques from the previous section. For example, sometimes it is advantageous to place the two zeros with imaginary components to better control the loci beginning at the dominant closed loop complex conjugate poles. Nonetheless, this is a common method used frequently "on the job," and it provides at least a good starting point, if not a decent solution.

Table 1  Ziegler-Nichols Tuning Parameters—Step Response S Curve

Controller type    Kp         Ti        Td
P                  T/D        ∞         0
PI                 0.9 T/D    D/0.3     0
PID                1.2 T/D    2 D       0.5 D

Figure 47  Ziegler-Nichols ultimate cycle—oscillation period.

5.5  PHASE-LAG AND PHASE-LEAD CONTROLLERS

Phase-lag and phase-lead controllers have many similarities with PID controllers. However, they have some advantages over PID with respect to noise filtering and real-world implementation. Instead of true integrals (a pole at the origin) or true derivatives (step inputs become impulses), they approximate these functions, providing similar performance gains with some implementation advantages. They can be designed using root locus and frequency plots with the same procedures shown for PID controllers. This section highlights the similarities and then illustrates the design procedures using both root locus and frequency domain techniques. The transfer function for each type of controller is as follows:

Phase lag:  Gc = K (T2 s + 1)/(T1 s + 1)

Phase lead: Gc = K (T1 s + 1)/(T2 s + 1)

Lag-lead:   (phase lag) × (phase lead)

As the terms imply, phase-lag controllers add negative phase angle, phase-lead controllers add positive phase angle, and lag-lead controllers add both (at different frequencies) to combine the features of both. With the notation above it means that

Table 2  Ziegler-Nichols Tuning Parameters—Ultimate Cycle

Controller type    Kp         Ti        Td
P                  0.5 Ku     ∞         0
PI                 0.45 Ku    Tu/1.2    0
PID                0.6 Ku     0.5 Tu    0.125 Tu

T1 > T2. As demonstrated in the following sections, the lag and lead portions may be designed separately and combined after both are completed.

5.5.1  Similarities and Extensions of Lag-Lead and PID Controllers

Perhaps the clearest way to compare phase-lag/lead and PID controller variations is with respect to where they allow us to place the poles and zeros contributed by the controller. Figure 48 illustrates where each controller type is similar and different. The phase-lag pole-zero locations approach those of the PI controller as the pole is moved closer to the origin. Phase lag is used, like PI, to reduce the steady-state errors in the system by increasing the steady-state gain. The difference is that the gain does not go to infinity as the frequency (or s) approaches zero, as it does in a PI controller. The benefit of phase lag is that the pole is not placed directly at the origin and therefore tends to have less negative impact on the stability of the system. Similarly, the phase angle contribution from a phase-lag controller is negative over only a portion of frequencies, as opposed to an integrator adding a constant -90 degrees over all frequencies. These effects are more clearly seen in the frequency domain. The phase-lead pole-zero locations approach those of PD as the pole is moved to the left. At some point the pole becomes negligible, since its response decays much faster and its effect is not noticeable. Phase lead and PD are both used to increase system stability by adding positive phase angle to the system. Phase lead adds positive phase over only a portion of frequencies (progressing from 0 up to +90 and back to 0 degrees), while PD increases from 0 to +90 degrees and then remains there. Finally, as the left pole of the lag-lead controller is moved to the left and the right pole toward the origin, it begins to approximate a PID controller. The same observations made for phase lag and phase lead individually apply here regarding phase angle and stability. In fact, when designing a combination lag-lead compensator, it is common to design the lag portion and the lead portion independently and combine them when finished. The lag portion is designed to meet the steady-state performance criterion and the lead portion to meet the transient response and stability performance criteria. In general, lag-lead and PID controllers are interchangeable, with minor differences, advantages, and disadvantages.

Figure 48  Phase lag/lead and PID s-plane comparisons/similarities.

5.5.2  Root Locus Design of Phase Lag/Lead Controllers

Let us first examine the general recommendations for tuning phase-lag controllers. Since the overall goal of the phase-lag controller is to reduce the steady-state error, it needs as large a static gain as possible without changing the existing root loci and making the system less stable. The usual approach to designing a phase-lag controller is to place the pole and zero near the origin and close to each other. The reasoning is this: if the added pole and zero are close to each other, the root locus plot is only slightly altered from its uncompensated form; placing the pole and zero close to the origin allows a fairly large steady-state gain. For example, if the pole is at s = -0.01 and the zero at s = -0.1, we increase the gain in our system by a factor of 10 (0.1/0.01) while the pole and zero are still very close together. The steps below are guidelines to accomplish this.

5.5.2.1  Outline for Designing Phase-Lag Controllers in the s-Domain

Step 1: Draw the uncompensated root loci and calculate the static gain for the open loop uncompensated system. Since we know that the steady-state error from a step input to a type 0 system is 1/(1 + K), from a ramp input to a type 1 system is 1/K, etc., we can calculate the total K required. Knowing the total gain required and the gain already provided by the system, calculate the additional controller gain required. Step 2: Place the pole and zero sufficiently close to the origin that a large gain is possible without adding much lag (instability) to the system. For example, if the controller requires a gain of 10, place the pole at -0.01 and the zero at -0.1 to obtain this gain. This will not appreciably change the existing root locus plot, since the pole and zero essentially cancel each other with respect to phase angle. Step 3: Verify the controller design by drawing the new root locus plot.
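Steps 1 and 2 can be sketched numerically. This hypothetical helper (not from the text) sizes the lag pair for a type 0 system, where the step error is 1/(1 + K_total); the zero placement at -0.1 is one typical choice:

```python
def phase_lag_design(ess_spec, plant_dc_gain, K_transient):
    """Hypothetical sketch of Steps 1-2: size a phase-lag pole/zero pair
    for a type 0 system so the steady-state step error 1/(1 + K_total)
    meets ess_spec. Returns (lag gain, zero location, pole location)."""
    K_required = 1.0 / ess_spec - 1.0         # total loop gain needed
    K_existing = plant_dc_gain * K_transient  # gain already present
    lag_gain = K_required / K_existing        # extra gain from the lag pair
    zero = -0.1                               # zero near the origin (a choice)
    pole = zero / lag_gain                    # pole closer in by the gain ratio
    return lag_gain, zero, pole

# e.g., a unity-DC-gain plant, K = 1.13 set by the transient design,
# and a 5% steady-state error specification:
lag_gain, z, p = phase_lag_design(0.05, 1.0, 1.13)
print(lag_gain, z, p)
```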

EXAMPLE 5.13 Using the block diagram of the system represented in Figure 49, find the proportional gain K that results in a closed loop damping ratio of 0.707. Design a phase-lag controller that maintains the same system damping ratio and results in a steady-state error less than 5%. Predict the steady-state errors that result from a unit step input for both cases.

Figure 49  Example: block diagram of physical system.

Step 1: We begin by drawing the root locus plot for the uncompensated system and calculating the gain K required for our system damping ratio of 0.707. This damping ratio corresponds to the point where the loci paths cross the line extending radially from the origin at an angle of 45 degrees from the negative real axis. The root locus plot for this system is straightforward and consists of the section of real axis between -3 and -5 with the break-away point at -4. The asymptotes and the paths both leave the real axis at angles of ±90 degrees, as shown in Figure 50. So for our desired damping ratio we want to place our poles at s = -4 ± 4j. To find the necessary proportional gain K we can apply the magnitude condition at one of our desired poles. Our gain is calculated as

K (15 / (|s - p1| |s - p2|)) = K (15 / (sqrt((-1)^2 + 4^2) sqrt(1^2 + 4^2))) = K (15 / (sqrt(17) sqrt(17))) = 1

K = 17/15 ≈ 1.13
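The magnitude condition is easy to evaluate with complex arithmetic; this sketch uses the example's plant G(s) = 15/((s + 3)(s + 5)) at the desired pole:

```python
# Magnitude condition |K G(s)| = 1 at the desired closed-loop pole
# s = -4 + 4j, for G(s) = 15/((s + 3)(s + 5)) from the example.
s = complex(-4, 4)
G = 15 / ((s + 3) * (s + 5))
K = 1 / abs(G)

print(K)  # 17/15, about 1.13
```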

Our total loop gain is 15K, or 17, and since we have a type 0 system our error from a step input is 15/(15K + 15), or 15/32. This is almost 50% and represents very poor steady-state performance. To meet our steady-state error requirement we need a total gain greater than or equal to 19, which results in an error less than 1/20, or 5%. Since the proportional gain already provides a gain of 1.13, as determined by our transient response requirements, we need another gain factor equal to 20/1.13 (or 17.7). To exceed our requirement and demonstrate the effectiveness of phase-lag compensation, we will set the gain contribution from our phase-lag factor equal to 20. Step 2: Now that we know what gain is required from the phase-lag term, we can proceed to place our pole and zero. The pole must be closer to the origin than the zero to increase our gain. This introduces a slight negative phase angle into the system, since the angle from the controller pole is always slightly greater than the angle from the zero (the pole lying closer to the origin). We minimize this effect by keeping the pole and zero close together. For this example let us place the pole at -0.02 and the zero at -0.4; this gives us our additional gain of 20 without significantly changing our original root locus plot. Thus we can describe our phase-lag controller transfer function as

Gc = Phase lag = (s + 0.4)/(s + 0.02) = 20 (2.5 s + 1)/(50 s + 1)

Step 3: To verify our design, we can add the new pole and zero to the root locus plot in Figure 50 and redraw it, as given in Figure 51.

Figure 50  Example: root locus plot for uncompensated system.

Figure 51  Example: root locus plot for phase-lag compensated system.

So we see that the root locus plot does not significantly change, because the pole and zero added by the controller "almost" cancel each other. If we wish to calculate the amount of phase angle added to our system, we can approximate it by calculating the angle that the new pole and zero each make with our desired operating point, s = -4 + 4j:

Angle from pole: φp = -(180° - tan^-1(4/3.98)) = -134.86 degrees
Angle from zero: φz = +(180° - tan^-1(4/3.6)) = +131.99 degrees
Net phase angle added to the system (lag) = -2.87 degrees

Since the net angle added by the phase-lag controller is very small, the original angle condition and corresponding root locus plot are still valid. At this point it is interesting to recall the warning about pole-zero cancellation with controllers, as that is nearly the case here. If we examine the earlier warning (and the effects demonstrated in Example 5.6), we find a key difference here. Whereas before we used the zero from the controller to cancel a pole in the system, now both the pole and the zero are part of the controller. This is significant, since we know that the pole of the physical system is an approximation to begin with and will vary at different operating points for any system that is not exactly linear. In the phase-lag case we can finely tune both the pole and the zero and be relatively sure that they will not vary during system operation. This is generally true for most properly designed electronic circuits. Note, however, that if the pole varies even slightly, say from -0.01 to -0.2 with the zero at -0.1, the gain from the phase-lag term drops from 10 to one-half, and the system performance is degraded, not enhanced. If we cannot verify the stability and accuracy of our compensator, then we should be concerned about the performance of actually implementing this controller (and others similar to it).
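The pole and zero angle contributions can be checked with complex arithmetic, using the example's lag pair and operating point:

```python
import cmath
import math

# Net phase contribution of the lag pair (pole at -0.02, zero at -0.4)
# evaluated at the desired operating point s = -4 + 4j.
s = complex(-4, 4)
phi_pole = math.degrees(cmath.phase(s - (-0.02)))  # angle from the pole, ~134.86
phi_zero = math.degrees(cmath.phase(s - (-0.4)))   # angle from the zero, ~131.99
net = phi_zero - phi_pole                          # ~ -2.87 degrees of added lag

print(net)
```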
The second point is that implementing the phase-lag compensator adds a "slow" pole to the system: even at high gains of K, the corresponding closed loop pole moves only as far left as -0.4, where the zero is located. It is a balancing act to choose a pole-zero combination close enough to the origin that high gain can be achieved without significantly altering the original root locus plot, yet far enough to the left that the settling time is satisfactory. The effect of adding the slow pole is seen in the following Matlab solution to this same problem.


EXAMPLE 5.14 Verify the system and controller designed in Example 5.13 using Matlab. Use Matlab to generate the uncompensated and compensated root locus plots, determine the gain K for a damping ratio of 0.707, and compare the unit step responses of the uncompensated and compensated systems to verify that the steady-state error requirement is met when the phase-lag controller is added. From Example 5.13, recall that the open loop system transfer function and the resulting phase-lag controller were given as

G(s) = Open loop system = 15/((s + 3)(s + 5))

Gc = Phase lag = (s + 0.4)/(s + 0.02) = 20 (2.5 s + 1)/(50 s + 1)

To verify this controller using Matlab, we can define the original system transfer function and the phase-lag transfer function and generate both the root locus and step response plots, where each plot includes both the uncompensated and compensated systems. The Matlab commands are given below.

%Program commands to generate Root Locus and Step Response plots
%for the phase-lag example
clear;
K=1.13;
numc=[1 0.4];              %Place zero at -0.4
denc=[1 0.02];             %Place pole at -0.02
num=K*15;                  %Open loop system numerator
den=conv([1 3],[1 5]);     %System denominator
sysc=tf(numc,denc);        %Controller transfer function
sys1=tf(num,den);          %System transfer function
sysall=sysc*sys1;          %Overall system in series
rlocus(sys1);              %Generate original Root Locus plot
sgrid(0.707,0)             %Draw line of constant damping ratio on plot
k=rlocfind(sys1);          %Verify gain at zeta=0.707
hold;
rlocus(sysall);            %Add compensated root loci to plot
hold;
figure;                    %Open a new figure window
step(feedback(sys1,1));    %Generate step response of CL uncompensated system
hold;                      %Hold the plot
step(feedback(sysall,1));  %Generate step response of CL compensated system
hold;                      %Release the plot (toggles)

When these commands are executed, we can verify the design from the previous example. The first comparison is between the uncompensated and compensated root locus plots shown in Figure 52. It is clear in Figure 52 that the only effect of adding the phase-lag compensator is that the asymptotic root loci paths are slightly curved as we move further from the real axis. When implementing and observing this system, the effects would not be noticeable relative to the root locus paths being discussed. What we must remember is that we now have a pole and zero near the origin which, as we see for this example, become the dominant response.

Figure 52  Example: Matlab root locus plot for proportional and phase-lag systems.

To compare the transient responses, the feedback loop was closed for the proportional and phase-lag compensated systems, and Matlab was used to generate unit step responses. The two responses are given in Figure 53, and they verify our earlier analysis in Example 5.13. The uncompensated (relative to phase-lag compensation) system, when tuned for a damping ratio equal to 0.707, behaves as predicted, with a very slight overshoot and a very large steady-state error. The steady-state error, predicted to be just less than 50% in Example 5.13, is found to be just less than 50% in the Matlab step response. When the phase-lag compensator is added, the steady-state error is reduced to less than 5%, as desired, and we also see the effect of adding the slower pole near the origin as it dominates our overall response. Since we are allowed 5% overshoot, it would be appropriate to increase the gain in the compensated system, as shown in Figure 54, and improve the transient response while still meeting our requirement. By increasing the proportional gain by a factor of 4, the total response, as the sum of the second-order original and compensator responses, has a shorter settling time, less overshoot, and better steady-state error performance. Phase-lag controllers are easily implemented with common electrical components (Chapter 6) and provide an alternative to the PI controller for reducing steady-state errors in systems.

Figure 53  Example: Matlab step responses for proportional and phase-lag compensated systems.

5.5.2.2  Outline for Designing Phase-Lead Controllers in the s-Domain

For a phase-lead controller the steps are slightly different since the goal is to move the existing root loci paths to more stable or desirable locations. As opposed to the phase-lag goals, we now want to modify the root locus paths and move them to a more desirable location. In fact, the beginning point of phase-lead design is to determine the points where we want the loci paths to go through. The steps to help us design typical phase-lead controllers are listed below. Step 1: Calculate the dominant complex conjugate pole locations from the desired damping ratio and natural frequency for the controlled system. These values might be chosen to meet performance specifications like peak overshoot and settling time constraints. Once peak overshoot and settling time are chosen, we can convert

Figure 54  Example: Matlab step response of phase-lag compensated system (additional gain of 4).

244

Chapter 5

them to the equivalent damping ratio and natural frequency and finally into the desired pole locations.

Step 2: Draw the uncompensated system poles and zeros and calculate the total angle between the open loop poles and zeros and the desired poles. Remember that the angle condition requires that the sum of the angles be an odd multiple of 180 degrees. The poles contribute negative phase and the zeros contribute positive phase. Use these properties of the angle condition to calculate the angle that the phase-lead controller must contribute. These calculations are performed as demonstrated when calculating the angles of departure/arrival using the root locus plotting guidelines.

Step 3: Using the calculated phase angle required by the controller, place the zero of the controller closer to the origin than the pole so that the net angle contributed is positive, assuming phase angle must be added to the system to stabilize it. Figure 55 illustrates how the phase-lead controller adds the phase angle:

β = tan⁻¹(2/1) = 63.4 degrees and θ = 90 + tan⁻¹(2/(3 − 2)) = 153.4 degrees

Thus this configuration of the phase-lead controller adds a net of θ − β = +90 degrees to the system.

Step 4: Draw the new root locus including the phase-lead controller to verify the design. If our calculations are correct, the new root locus paths should pass directly through our desired design points. Finish by applying the magnitude condition to find the proportional gain K required for that position along the root locus paths. Phase-lead controllers are commonly added to help stabilize the system, and therefore the desired effect requires adding phase to the system. Since we are adding a pole and a zero to the system, the net phase change is positive as long as our controller zero is closer to the origin than our controller pole.

EXAMPLE 5.15

Design a controller to meet the following performance requirements for the system shown in Figure 56.
A damping ratio of 0.5
A natural frequency of 2 rad/sec

Note that this system is open loop marginally stable and needs a controller to make it stable and usable.

Figure 55  Calculating phase-lead angle contributions.

Analog Control System Design

Figure 56  Example: system block diagram for phase-lead controller design.
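The Step 1 conversion from damping ratio and natural frequency to pole locations is simple trigonometry; as a quick sketch (Python used here for illustration, while the book's examples use Matlab):

```python
import math

def desired_poles(zeta, wn):
    # s = -zeta*wn +/- j*wn*sqrt(1 - zeta^2): radius wn, angle acos(zeta)
    real = -zeta * wn
    imag = wn * math.sqrt(1.0 - zeta ** 2)
    return complex(real, imag), complex(real, -imag)

# Example 5.15 targets: zeta = 0.5, wn = 2 rad/sec
p1, p2 = desired_poles(0.5, 2.0)
print(p1, p2)   # -1 +/- 1.73j
```

For the requirements above this reproduces the design points s = −1 ± 1.73j used in the example.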

Since this problem is open loop unstable we need to modify the root loci and move them farther to the left; thus a phase-lead controller is the appropriate choice.

Step 1: The desired poles must be located on a line directed outward from the origin at an angle of 60 degrees (relative to the negative real axis) to achieve our desired damping ratio of ζ = 0.5. The radius must be 2 to achieve our natural frequency of ωn = 2 rad/sec. Taking the tangent of 60 degrees means that our imaginary component must be 1.73 times the real component, while the radius criterion means that the square root of the sum of the squares of the real and imaginary components must equal 2 (Pythagorean theorem). An alternative, and simpler, method is to recognize that the real component is just the cosine of the angle (the damping ratio itself) multiplied by the radius, and the imaginary component is simply the sine of the angle multiplied by the radius. The points that meet these requirements are

s1,2 = −1 ± 1.73j

Step 2: The total angle from all open loop poles and zeros must be an odd multiple of 180 degrees to meet the angle condition in the s-plane. For our system in this example we have two poles at the origin, each contributing −120 degrees, and one pole at −10 contributing −10.9 degrees. These add to −250.9 degrees, and if s = −1 + 1.73j is to be a valid point along our root locus plot, we need to add an additional +70.9 degrees of phase angle to be back at −180 degrees and meet our angle condition. These calculations are shown graphically in Figure 57. Angle from OL poles: −120 − 120 − tan⁻¹(1.73/9) = −250.9 degrees; angle required by controller: −180 + 250.9 = +70.9 degrees.

Step 3: To add the +70.9 degrees of phase we need to add the zero and pole such that the zero is closer to the origin than the pole, and such that the angle from the zero (which contributes positively) is 70.9 degrees greater than the angle introduced by the pole (which contributes negatively).
A solid first iteration would be to place the zero at s = −1, where it contributes exactly +90 degrees of phase angle to the system. Since we now know that the controller pole must add −19.1 degrees, we can place the pole of the phase-lead controller as shown in Figure 58. Placing a zero at −1 adds tan⁻¹(1.73/0) = +90 degrees. Then the pole must contribute −19.1 degrees, and tan⁻¹(1.73/d) = 19.1 degrees, or d = 5. So the pole must be placed at p = −6. Our final phase-lead controller becomes

Gc = Phase lead = 6 (s + 1)/(s + 6) = (s + 1)/[(1/6)s + 1]

Figure 57  Example: calculation of required angle contribution from controller.

Figure 58  Example: resulting phase-lead controller.
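Before drawing the locus in Step 4, the angle condition can be cross-checked numerically; a small sketch (Python for illustration, since the text's verification uses Matlab):

```python
import cmath
import math

s = complex(-1.0, 1.732)   # desired closed-loop pole (zeta = 0.5, wn = 2)

# Compensated loop transfer function: L(s) = 6(s + 1)/(s + 6) * 1/(s^2 (0.1s + 1))
L = 6 * (s + 1) / ((s + 6) * s ** 2 * (0.1 * s + 1))

# Angle condition: |phase of L| should be 180 degrees at a valid locus point
phase_deg = math.degrees(abs(cmath.phase(L)))
print(round(phase_deg, 1))   # close to 180
```

The phase comes out essentially at 180 degrees, confirming that the chosen zero and pole place the design point on the new locus.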

Step 4: To verify our design we add the zero and pole from the phase-lead controller to the s-plane and develop the modified root locus plot. We still must include our two poles at the origin and the pole at −10 from the open loop system transfer function. For the root locus plot we have four poles and one zero, thus three asymptotes at 180 degrees and ±60 degrees. The asymptote intersection point is at s = −5, and the valid sections of the real axis fall between −1 and −6 and to the left of −10 (also an asymptote). This allows us to approximate the root locus plot as shown in Figure 59. Without the phase-lead controller, the two poles sitting at the origin immediately head into the RHP when the loop is closed and the gain is increased. Once the controller is added, enough phase angle is contributed that it "pulls" the loci paths into the LHP before ultimately following the asymptotes back into the RHP. As designed, the phase-lead causes the paths to pass through our desired design points

Figure 59  Example: root locus plot for phase-lead compensated system.


of s1,2 = −1 ± 1.73j. As many previous examples have shown, we can apply the magnitude condition to solve for the required gain K at these points.

EXAMPLE 5.16

Verify the phase-lead compensator designed in Example 5.15 using Matlab. Recall that our loop transfer function, as given in Figure 56, is

GH(s) = 1/[s²(0.1s + 1)]

The corresponding phase-lead compensator that was designed in Example 5.15 is

Gc = Phase lead = 6 (s + 1)/(s + 6) = (s + 1)/[(1/6)s + 1]

The phase-lead compensator pole and zero were chosen to make our system root locus paths proceed through the points s1,2 = −1 ± 1.73j, corresponding to ζ = 0.5 and ωn = 2 rad/sec. To verify this solution using Matlab, we define the system numerator and denominator and the phase-lead compensator and proceed to develop the root locus and step response plots for both the uncompensated and compensated models. The commands listed below are used to perform these tasks.

%Program commands to generate Root Locus Plot
% for Phase-lead exercise
clear;
numc=6*[1 1];        %Place zero at -1
denc=[1 6];          %Place pole at -6
nump=1;              %Forward loop system numerator
denp=[1 0 0];        %Forward loop system denominator
numf=1;              %Feedback loop system numerator
denf=[0.1 1];        %Feedback loop system denominator
sysc=tf(numc,denc);  %Controller transfer function
sysp=tf(nump,denp);  %System transfer function in forward loop
sysf=tf(numf,denf);  %System transfer function in feedback loop
sysl=sysp*sysf;      %Loop transfer function
sysall=sysc*sysl     %Overall compensated system in series
rlocus(sysl);        %Generate original Root Locus Plot
hold;
rlocus(sysall);      %Add new root loci to plot
sgrid(0.5,2);        %Place lines of constant damping ratio (0.5) and wn (2 rad/s)
hold;
hold;
tsys=[0:0.05:30];
figure;              %Open a new figure window
step(feedback(sysp,sysf),tsys);      %Generate step response of CL uncompensated system
hold;                                %Hold the plot
step(feedback(sysc*sysp,sysf),tsys); %Generate step response of CL compensated system
hold;                                %Release the plot (toggles)

When the commands are executed, the first result is the root locus plot for the compensated and uncompensated systems, given in Figure 60. As desired, the unstable open loop poles are "attracted" to our desired locations when the phase-lead compensator is added to the system. The system still goes unstable at higher gains. The complex poles dominate the response of this system, since the third pole is much farther to the left (much faster). The location of the zero, however, will affect the plot, and our response is still not a "true" second-order response. When the uncompensated and compensated systems are both subjected to a unit step input, the results of the root locus plot become very clear as the uncompensated system goes unstable. The two responses are given in Figure 61. Using Matlab, we see that the phase-lead controller did indeed bring the new loci through our desired design point and resulted in a stable system. The step response of our compensated system is well behaved and quickly decays to the desired steady-state value.

For any controller to behave as simulated, we must remember that the simulation assumes linear amplifiers and actuators capable of moving the system as predicted. The more aggressive the response we design for, the more powerful (and thus more costly) the actuators required to implement the design. There are realistic constraints on how fast we actually want to design our system to be.

5.5.2.3 Outline for Designing Lag-Lead Controllers in the s-Domain

When both transient and steady-state performances need to be improved, we may combine the two previous compensators into what is commonly called lag-lead. The

Figure 60  Example: Matlab root locus plot for phase-lead compensated system.

Figure 61  Example: Matlab step responses of phase-lead comparison.

lag-lead controller is simply the phase-lead and phase-lag compensators connected in series:

Lag-lead: (Phase lag)(Phase lead) = K [(T2 s + 1)/(T1 s + 1)] [(T3 s + 1)/(T4 s + 1)]

In general, designing a lag-lead controller is simply a sequential operation applying the previous two methods, since their effects on the system are largely uncoupled. The lead portion should modify the shape of the paths, K is used to locate the poles along the paths, and the lag portion is used to increase the system gain, thereby reducing the steady-state error. The lag portion should not change the shape of the existing loci paths. The steps outlined here illustrate this concept.

Step 1: Begin by designing the phase-lead portion first, with the goal of meeting the transient response specifications. Calculate the dominant complex conjugate pole locations from the desired damping ratio and natural frequency for the controlled system and follow the steps for designing a phase-lead compensator (see Sec. 5.5.2.2). In summary, this involves calculating the phase angle that must be added to the system to make the root loci go through the desired pole locations. We achieve this by placing our controller pole and zero at specific locations. Finally, calculate the required gain K to place the poles at their desired locations. This step is needed before we can assess how much additional gain is required to achieve our steady-state performance specification.

Step 2: Once the phase-lead portion is designed, the locus paths go through the desired points, and we know the required proportional gain at that point, we can proceed to design the phase-lag portion. The goal of the phase-lag is to increase the gain in our system without changing the root locus plot.
To do so we can determine the system type number and, together with the type of input, calculate the required total gain in the system to meet the steady-state error specification, and thus find the gain specifically required by the lag portion. Remember that the system and proportional gains also act in series with the phase-lag compensator gain. Now design the phase-lag portion by placing the pole and zero near the origin as described in Section 5.5.2.1.

Step 3: Draw the new root locus including the complete lag-lead controller to verify the design. If our calculations are correct, the new root locus paths should go directly through our desired design points and our steady-state error requirement should be satisfied. Simulating or measuring the response of our system to the desired input easily determines the steady-state error.

Figure 62  Example: block diagram of physical system for lag-lead controller.

EXAMPLE 5.17

Using the system represented by the block diagram in Figure 62, design a lag-lead controller to achieve a closed loop system damping ratio equal to 0.5 and a natural frequency equal to 5 rad/sec. The system also needs to have a steady-state error of less than 1% while following a ramp input. To design this controller, we will use the lead portion to place the poles and the lag portion to meet our steady-state error requirement.

Step 1: Using the damping ratio and natural frequency requirements, we know that our poles should be at a distance 5 from the origin on a line that makes an angle of 60 degrees with the negative real axis. This results in desired poles of

s1,2 = −2.5 ± 4.3j

The total angle from all open loop poles and zeros must be an odd multiple of 180 degrees to meet the angle condition in the s-plane. For our system in this example we have a pole at the origin contributing −120 degrees and one pole at −1 contributing −109.1 degrees. These add to −229.1 degrees, and if s = −2.5 + 4.3j is to be a valid point along our root locus plot, we need to add an additional +49.1 degrees of phase angle to be back at −180 degrees and meet our angle condition. This can be achieved by placing our zero at −2.5 and our pole at −7.5. These calculations are shown graphically in Figure 63. Placing a zero at −2.5 adds tan⁻¹(4.33/0) = +90 degrees. Then the pole must contribute −40.9 degrees, and tan⁻¹(4.33/d) = 40.9 degrees, or d = 5. So the pole must be placed at p = −7.5.

Figure 63  Example: resulting phase-lead portion of the controller.

The last task in this step is to calculate the proportional gain required to move us to this location on our plot. To do so we apply the magnitude condition (from the open loop poles at 0, −1, and −7.5 and the zero at −2.5) using our desired pole location:

K (5|s − z1|)/(|s − p1||s − p2||s − p3|) = K (5|4.33j|)/[√(2.5² + 4.3²) √(1.5² + 4.3²) √(5² + 4.3²)] = K (21.65)/(√25 √20.74 √43.75) = 1

K ≈ 7

Our final phase-lead controller becomes

Phase lead = 7 (0.4s + 1)/(0.13s + 1)
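Since these numbers are computed by hand, a quick numeric cross-check of the angle and magnitude conditions at the desired pole may be useful (Python is used here purely for illustration; the book's verification uses Matlab in Example 5.18):

```python
import cmath
import math

s = complex(-2.5, 4.33)   # desired pole for zeta = 0.5, wn = 5 rad/sec

# Open loop: plant 5/(s(s + 1)) with the lead zero at -2.5 and pole at -7.5
L = 5 * (s + 2.5) / (s * (s + 1) * (s + 7.5))

# Angle condition: |phase of L| is 180 degrees at a valid locus point
phase_deg = math.degrees(abs(cmath.phase(L)))

# Magnitude condition: K|L(s)| = 1 gives the proportional gain
K = 1.0 / abs(L)
print(round(phase_deg, 1), round(K, 2))   # phase near 180, K near 7
```

Both conditions check out, matching the hand calculation of K ≈ 7.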

Step 2: Now we can design the phase-lag portion to meet our steady-state error requirement. In this example we have a type 1 system following a ramp input, so the steady-state error will be proportional to 1/K, where K is the total gain in the system. Recall that we already have 7 from the phase-lead compensator and 5 from the system, or 35 total. Our requirement is still not met, since we want less than 1% error, or a total gain of 100. This indicates that our phase-lag portion must introduce an additional gain of 100/35, or approximately three times the current gain. We will choose to add four times the gain by placing our pole at −0.02 and our zero at −0.08. This results in the phase-lag compensator of

Phase lag = (s + 0.08)/(s + 0.02)

Step 3: Combining the lead and lag terms gives us the overall controller that needs to be added to the system:

Lag-lead = 7 [(s + 0.08)/(s + 0.02)] [(0.4s + 1)/(0.13s + 1)]
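Because the lag and lead terms multiply, their polynomial coefficients combine by convolution (the same operation Matlab's conv performs in the next example). A minimal sketch in Python:

```python
def conv(a, b):
    # Polynomial multiplication via convolution of coefficient lists
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

# Numerator of the lag-lead controller: 7 (s + 0.08)(0.4s + 1)
num = [7 * c for c in conv([1.0, 0.08], [0.4, 1.0])]
print(num)   # coefficients of 2.8s^2 + 7.224s + 0.56
```

The same convolution applied to the denominators gives the controller's denominator polynomial.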

To verify the design we need to plot the root locus and check that our roots do pass through the desired points. In addition, it is helpful to check the time response of the system to a ramp input and verify the steady-state errors. The verification of this example is performed in the next example using Matlab.

EXAMPLE 5.18

Use Matlab to verify the root locus plot and time response of the controller and system designed in Example 5.17. Recall that the goals of the system include a damping ratio equal to 0.5, a natural frequency of 5 rad/sec, and less than 1% steady-state error to a ramp input. To verify the design we will define the uncompensated and compensated systems in Matlab, generate a root locus plot for the compensated system, and generate time response (ramp input) plots of the uncompensated and compensated systems. The commands used are included here.

%Program commands to generate Root Locus Plot
% for lag-lead exercise
clear;
numc=7*conv([1 0.08],[0.4 1]);  %Place zeros at -2.5 and -0.08
denc=conv([1 0.02],[0.13 1]);   %Place poles at -7.5 and -0.02
nump=5;                         %Forward loop system numerator
denp=[1 1 0];                   %Forward loop system denominator
sysc=tf(numc,denc);             %Controller transfer function
sysp=tf(nump,denp);             %System transfer function in forward loop
sysall=sysc*sysp                %Overall compensated system in series
rlocus(sysp);                   %Generate original Root Locus Plot
hold;
rlocus(sysall);                 %Add new root loci to plot
sgrid(0.5,5);                   %Place lines of constant damping ratio (0.5) and wn (5 rad/s)
hold;
hold;
tsys=[0:0.05:6];
figure;                                %Open a new figure window
lsim(feedback(sysp,1),tsys,tsys);      %Generate ramp response of CL uncompensated system
hold;                                  %Hold the plot
lsim(feedback(sysc*sysp,1),tsys,tsys); %Generate ramp response of CL compensated system
lsim(tf(1,1),tsys,tsys);               %Generate ramp input signal on plot
hold;                                  %Release the plot (toggles)

These commands define the compensated and uncompensated systems from Example 5.17 and proceed to draw the root locus plot given in Figure 64. We see that the lag-lead controller does indeed move our locus paths through our desired design points of s1,2 = −2.5 ± 4.3j, giving us our desired damping ratio of 0.5 and natural frequency of 5 rad/sec. The uncompensated root locus plot follows the asymptotes at −0.5 and is not close to meeting our requirements. To verify the steady-state error criterion we use Matlab (commands given above) to generate the ramp responses of the uncompensated and compensated systems. The results are given in Figure 65. Except for a very short initial time, the compensated system follows the desired ramp input nearly exactly. The uncompensated system response has a much larger settling time and steady-state error.
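The ramp-error arithmetic behind this result can be sketched quickly (a Python illustration of the velocity-error-constant calculation; the numbers come from the text):

```python
# Velocity error constant for the lag-lead design (values from the text)
controller_dc = 7 * (0.08 / 0.02)   # lag-lead gain as s -> 0: 7 * 4 = 28
Kv = controller_dc * 5              # plant 5/(s(s + 1)) contributes 5
ess = 1.0 / Kv                      # unit-ramp error for a type 1 system
print(Kv, ess)                      # Kv = 140, error about 0.7% (< 1% required)
```

With a total velocity constant of 140, the ramp-following error is about 0.7%, comfortably inside the 1% specification.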
In conclusion, this example illustrates the use of phase-lead and phase-lag compensation to modify the transient and steady-state behavior of our systems. As discussed in the introduction to this section, these controllers share many attributes with the common PID controller, where the integral gain is used to control steady-state errors and the derivative gain to modify transient system behavior. Examining the parallel design methods as they occur in the frequency domain will conclude this section on phase-lag and phase-lead controllers.

Figure 64  Example: Matlab root locus plot of lag-lead compensated system.

Figure 65  Example: Matlab ramp responses of lag-lead compensated and uncompensated systems.

5.5.3 Frequency Response Design of Phase Lag/Lead Controllers

Bode plots provide a quick method of using the manufacturer's data to design a controller, since frequency response plots are often available from the manufacturer for many controller components. Remember that to construct the open loop Bode plot for the system, we simply take each component in series in the block diagram and add their respective frequency response curves together. Therefore, if we know what the open loop system is lacking in magnitude and/or phase, we simply find the "correct" controller curve that, when added to the open loop system, results in the final desired response of our system. These same concepts were already discussed in Section 5.4.3 when we designed several variations of PID controllers in the frequency domain using Bode plots. In fact, there are fewer design method differences between PID and phase-lag/lead in the frequency domain than in the s-domain using root locus plots. For example, as Figure 66 illustrates, the phase-lag and PI frequency plots are very similar and differ only at low frequencies. The corresponding magnitude and angle contributions for the phase-lag and PI controllers are as follows:

Phase lag = K (T2 s + 1)/(T1 s + 1):  φ = tan⁻¹(T2 ω) − tan⁻¹(T1 ω),  a = 20 log(T1/T2)

PI = Ki [1 + (Kp/Ki)s]/s:  φ = −90 degrees + tan⁻¹(Kp ω/Ki)

In terms of magnitude, the phase-lag compensator has a finite gain at low frequencies, while the integrator in the PI compensator has infinite gain as ω → 0. This difference is seen in the s-plane: the phase-lag controller does not place a pole directly at the origin as the integrator does. Relative to phase angle, both compensators end without any angle contribution at high frequencies. The phase-lag term begins at zero and adds negative phase angle only over a narrow range of frequencies, while the PI term contributes −90 degrees even at very low frequencies. With respect to stability, this gives a slight edge to the phase-lag method, since negative phase angle is added over a narrower range. The net effect is that the methods presented earlier for designing PI controllers in the frequency domain also apply directly to designing phase-lag controllers. This is also true for the other variations (phase-lead and lag-lead).

Figure 66  Bode plot comparisons of phase-lag and PI controllers.

If we compare the phase-lead and PD controllers, given in Figure 67, we see the same parallels. The corresponding magnitude and angle contributions for the phase-lead and PD controllers are as follows:

Phase lead = K (T1 s + 1)/(T2 s + 1):  φ = tan⁻¹(T1 ω) − tan⁻¹(T2 ω),  a = 20 log(T1/T2)

PD = Kp [1 + (Kd/Kp)s]:  φ = tan⁻¹(Kd ω/Kp)
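To make the high-frequency contrast concrete, a small numerical sketch (the values T1 = 1, T2 = 0.1, Kp = Kd = 1 are illustrative choices, not from the text): the lead network's gain levels off at K·T1/T2, while the PD gain grows without bound.

```python
import math

def lead_mag(w, K=1.0, T1=1.0, T2=0.1):
    # |K (j*w*T1 + 1) / (j*w*T2 + 1)| for an illustrative phase-lead network
    return K * math.hypot(T1 * w, 1.0) / math.hypot(T2 * w, 1.0)

def pd_mag(w, Kp=1.0, Kd=1.0):
    # |Kp (1 + (Kd/Kp) j*w)| keeps growing as w increases
    return Kp * math.hypot(1.0, (Kd / Kp) * w)

# At a very high frequency the lead gain has leveled off near K*T1/T2 = 10,
# while the PD gain is still climbing with w.
print(lead_mag(1e6), pd_mag(1e6))
```

This bounded high-frequency gain is why the lead network amplifies measurement noise far less than a pure derivative term.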

Whereas the phase-lag and PI controllers differed only at low frequencies, here we see that the phase-lead and PD controllers vary only at higher frequencies. A phase-lead controller does not have infinite gain at high frequencies, as the PD does, and it contributes positive phase angle only over a range of frequencies. The PD controller continues to contribute 90 degrees of phase angle at all frequencies greater than Kp/Kd. It is likely that the phase-lead controller will handle noisy situations better than pure derivatives (i.e., PD) since it does not have as large a gain at very high frequencies.

Figure 67  Bode plot comparisons of phase-lead and PD controllers.

To obtain the Bode plot for the lag-lead controller, since we are working in the frequency domain and the lag and lead terms multiply, we simply add the two curves together, as shown in Figure 68. As before, the lag-lead compensation is compared with its equivalent, the common PID. Since the lag-lead compensation is the summation of the separate phase-lag and phase-lead terms, the same comments apply. The lag-lead controller has limited gains at both high and low frequencies, whereas the PID has infinite gain at those frequencies. Similarly, the lag-lead controller contributes to the phase angle plot only over distinct frequency ranges, while the PID begins adding −90 degrees and ends at +90 degrees.

Figure 68  Bode plot comparison of lag-lead and PID controllers.

EXAMPLE 5.19

Using the system represented by the block diagram in Figure 69, design a controller that leaves the existing dynamic response alone but achieves a steady-state error from a step input of less than 2%. Without any controller (Gc = 1), we can close the loop and determine the current damping ratio and natural frequency.

C(s)/R(s) = 75/(s³ + 9s² + 25s + 100)

The system poles (roots of the denominator) are

s1,2 = −0.78 ± 3.58j,  s3 = −7.5
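These pole values can be checked by substitution into the characteristic polynomial (a Python sketch; the text's tools are Matlab):

```python
# Claimed dominant poles of C(s)/R(s) = 75/(s^3 + 9s^2 + 25s + 100)
s = complex(-0.78, 3.58)
residual = s ** 3 + 9 * s ** 2 + 25 * s + 100   # near zero if s is a root

wn = abs(s)              # natural frequency, about 3.7 rad/sec
zeta = -s.real / abs(s)  # damping ratio, about 0.21
print(abs(residual), round(wn, 2), round(zeta, 2))
```

The small residual confirms the rounded pole values, and the radius and real part reproduce the ωn and ζ quoted next.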

Thus, without compensation the system currently has a damping ratio equal to 0.21 and a natural frequency of 3.7 rad/sec. We can calculate the unit step input steady-state error from either the block diagram or the closed loop transfer function. Using the block diagram, we see that we have a type 0 system with a gain of 75/25, or 3. Since the steady-state error equals 1/(K + 1), we will have a steady-state error from a step input of one fourth, or 25%. This agrees with what we find by closing the loop and applying the final value theorem: the steady-state output becomes 75/100, or an error of 25%.

To achieve a steady-state error less than 2%, we need a total gain of 49 in our system (ess = 1/(K + 1)). Since we have a gain of 3, we need to add approximately another gain of 17 (giving a total of 51) to meet our error requirement. To do this we will add a phase-lag controller with the pole at −0.02 and the zero at −0.34, giving us an additional gain of 17 in our system:

Phase lag = (s + 0.34)/(s + 0.02)

To begin, let us examine our open loop uncompensated Bode plot and our compensator Bode plot to see how this is achieved. The uncompensated system plot is given in Figure 70 and the phase-lag contribution is given in Figure 71.

Figure 69  Example: block diagram of physical system for phase-lag compensation.

Figure 70  Example: Bode plot of OL uncompensated system (GM = 8.5194 dB [at 5 rad/sec], PM = 39.721 deg. [at 3.011 rad/sec]).
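The steady-state error arithmetic can be sketched as follows (Python for illustration; ess = 1/(1 + Kp) for a step input into this type 0 unity-feedback system):

```python
def ess_step(Kp):
    # Steady-state step error for a type 0 unity-feedback system
    return 1.0 / (1.0 + Kp)

Kp_unc = 75 / 25                  # plant DC gain: 75/((5)(5)) = 3
Kp_comp = (0.34 / 0.02) * Kp_unc  # lag adds a factor of 17 at DC: total 51

print(ess_step(Kp_unc), ess_step(Kp_comp))   # 0.25 and about 0.019 (< 2%)
```

The compensated error of roughly 1.9% satisfies the 2% specification.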

This gives us our additional gain of 17, or 24.6 dB (20 log 17), as shown in Figure 71. We see in Figure 70 that the uncompensated open loop system has a gain margin equal to 8.5 dB and a phase margin equal to 39.7 degrees (at a crossover frequency of 3 rad/sec). Since we do not wish to significantly change the transient response of the system, the margins should remain approximately the same even after we add the phase-lag compensator.

Figure 71  Example: Bode plot of phase-lag compensator.


In Figure 71, the Bode plot for the phase-lag compensator, we see that at low frequencies approximately 25 dB of gain is added to the system, thereby decreasing the steady-state error. The phase-lag term does not change our original system at higher frequencies, as desired. Over a range of frequencies the phase-lag compensator does tend to destabilize the system, since it contributes additional negative phase angle; if this occurs at low enough frequencies, it will not significantly change our original gain margin and phase margin.

When we apply the phase-lag controller to the system and generate the new Bode plot, given in Figure 72, we can again calculate the margins to verify our design. Recalling that our uncompensated system has a gain margin equal to 8.5 dB and a phase margin equal to 39.7 degrees (at a crossover frequency of 3 rad/sec), the compensated system now has a gain margin equal to 7.5 dB and a phase margin equal to 33.4 degrees (at a crossover frequency still at 3 rad/sec). So although the margins are slightly different (tending slightly more toward marginal stability), they did not change significantly and should not noticeably affect the transients.

If we overlay all three Bode plots as shown in Figure 73, it is easier to see where the phase-lag term affects our system and where it does not. When the effects of the phase-lag compensator are examined alongside the original system, we see how it modifies the original system only at the lower frequencies and not at higher frequencies. The phase-lag term raises the magnitude plot enough at the lower frequencies to meet our steady-state error requirements and adds some negative phase angle, but only over a range of lower frequencies. It is also clear in Figure 73 how the uncompensated open loop (OL) system and the phase-lag compensator add together and result in the final Bode plot.
Perhaps the clearest way to evaluate our design is by simulating the uncompensated and compensated systems, as shown in Figure 74. The step responses clearly show the reduction in steady-state error that results from adding the phase-lag controller. The uncompensated system reaches a steady-state value of 0.75,

Figure 72  Example: Bode plot of phase-lag compensated OL system (GM = 7.47 dB [at 4.741 rad/sec], PM = 33.399 deg. [at 3.0426 rad/sec]).

Figure 73  Example: Bode plot comparison of OL system, phase-lag, and compensated system.

as predicted earlier in this example using the FVT. When the phase-lag controller is added, the response approaches the desired value of 1. The percent overshoot, rise time, peak time, and settling time were not significantly changed after the compensator was added (this was one of the original goals). The Matlab commands used to calculate the margins and verify the system design are included here for reference.

Figure 74  Example: step responses of original and phase-lag compensated systems.

%Program commands to generate Bode plots
% for phase-lag exercise
clear;
numc=[1 0.34];            %Place zero at -0.34
denc=[1 0.02];            %Place pole at -0.02
nump=75;                  %Forward loop system numerator
denp=conv([1 5],[1 4 5]); %Forward loop system denominator
sysc=tf(numc,denc);       %Controller transfer function
sysp=tf(nump,denp);       %System transfer function in forward loop
sysall=sysc*sysp;         %Overall compensated system in series
syscl=feedback(sysp,1)    %Uncompensated closed loop TF
sysclc=feedback(sysall,1) %Compensated closed loop TF
margin(sysp)              %Generate Bode plot with PM and GM for plant
figure;
bode(sysc,{0.001,100});   %Generate Bode plot for phase-lag
figure;
margin(sysall);           %Generate Bode plot with PM and GM for final OL system
figure;
bode(sysp,sysc,sysall)
tsys=[0:0.05:10];
figure;                   %Open a new figure window
step(syscl,tsys);         %Generate step response of CL uncompensated system
hold;                     %Hold the plot
step(sysclc,tsys);        %Generate step response of CL compensated system
hold;                     %Release the plot (toggles)

EXAMPLE 5.20

Given the system represented by the block diagram in Figure 75 (see also Example 5.19), use Bode plots to design a controller that modifies the dynamic response to achieve a damping ratio of 0.5 and a natural frequency of 8 rad/sec. Without any controller (Gc = 1), we can close the loop and determine the current damping ratio and natural frequency:

C(s)/R(s) = 75/(s³ + 9s² + 25s + 100)

The system poles (roots of the denominator) are

s1,2 = −0.78 ± 3.58j,  s3 = −7.5

Figure 75  Example: block diagram of physical system for phase-lead compensation.

Thus, without compensation the system currently has a damping ratio equal to 0.21 and a natural frequency of 3.7 rad/sec, where the goals of the controller (ζ = 0.5 and ωn = 8 rad/sec) are to approximately double the damping ratio and natural frequency of the uncompensated system. These changes will reduce the overshoot and settling time of the system in response to a step input.

To begin, we first need to relate our closed loop performance requirements to our open loop Bode plots. Once we know what our desired Bode plot should be, we can find where the uncompensated OL system is lacking and design the phase-lead controller to add the required magnitude and phase angle to make up the differences. Since we want our compensated system to have a damping ratio equal to 0.5, we know that we need a phase margin approximately equal to 50 degrees (see Figure 48, Chap. 4), which needs to occur at our crossover frequency. We find our crossover frequency from the damping ratio and natural frequency requirements. Using Figure 49 from Chapter 4, we find that ωc/ωn = 0.78 at a damping ratio of 0.5. Knowing that we want a closed loop natural frequency ωn equal to 8 rad/sec means that the open loop crossover frequency should equal approximately 6.3 rad/sec. Now we can define our requirements in terms of our compensated OL Bode plot measurements:

Phase margin = 50 degrees at a crossover frequency of 6.3 rad/sec

To calculate what is lacking in the uncompensated system, we must draw the uncompensated system open loop Bode plot and measure the magnitude and phase angle at our desired crossover frequency. Recognizing that this system is the same as in Example 5.19, we can refer to Figure 70, where the Bode plot is already completed. We see that at our desired crossover frequency (6.3 rad/sec) the phase angle for the uncompensated system is approximately −200 degrees and the magnitude is approximately −15 dB. Thus, to achieve our closed loop compensated performance requirements we need to raise the magnitude plot +15 dB and add +70 degrees of phase angle at our crossover frequency of 6.3 rad/sec.
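The plot readings can be approximated numerically (a Python sketch; the text reads roughly −15 dB and −200 degrees from the graph, while direct evaluation gives about −13 dB and −196 degrees, consistent to graphical accuracy):

```python
import cmath
import math

w = 6.3                        # desired crossover frequency, rad/sec
s = complex(0.0, w)
G = 75 / ((s + 5) * (s ** 2 + 4 * s + 5))   # uncompensated open loop

mag_db = 20 * math.log10(abs(G))
phase_deg = math.degrees(cmath.phase(G))
if phase_deg > 0:              # unwrap: the true phase lies below -180 degrees here
    phase_deg -= 360

print(round(mag_db, 1), round(phase_deg, 1))   # about -13 dB and -196 degrees
```

Either way, the uncompensated system is well below the required 0 dB and −130 degrees at 6.3 rad/sec, so a lead network must supply roughly 15 dB and 70 degrees there.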
This will make our final system have a phase margin of +50 degrees at the crossover frequency (since we are currently at -200 degrees, we need to add +70 degrees to be 50 degrees above -180 degrees at ωc). To design the phase-lead controller, let us make T1 = 0.8 and T2 = 0.02 and generate the Bode plot:

Phase lead = (0.8 s + 1) / (0.02 s + 1)

The first break occurs at 1/T1, or 1.25 rad/sec, and the controller adds positive phase angle and magnitude, resulting in Figure 76. Examining the Bode plot, we see that our goal of adding approximately 15 dB of magnitude and +70 degrees of phase angle at the desired crossover frequency of 6.3 rad/sec is achieved.

Figure 76 Example: Bode plot of phase-lead compensator (phase lead T1 = 0.8, T2 = 0.02).

Chapter 5

When this response is added to the uncompensated OL system's response, the result should be a phase margin of 50 degrees at ωc = 6.3 rad/sec. To verify that this is indeed the case, let us generate the Bode plot for the compensated system and measure the resulting phase margin. This plot is given in Figure 77, where we see that when the two responses are added we achieve a phase margin of 52 degrees at a crossover frequency of 6.7 rad/sec, slightly exceeding our performance requirements.

Figure 77 Example: Bode plot of phase-lead compensated system (GM = 17.482 dB [at 20.053 rad/sec], PM = 52.101 deg. [at 6.738 rad/sec]).

It is easier to see how the phase-lead compensator adds to the original system in Figure 78, where all the terms are plotted individually. We see that at low and high frequencies there is little change to the original system, and that the phase-lead compensator adds the desired phase angle and magnitude over the range of frequencies determined by T1 and T2.

Figure 78 Example: Bode plot of OL, phase-lead, and compensated factors.

Finally, to verify the design in the time domain, let us examine the step responses of the uncompensated and compensated systems, given in Figure 79. As we expected, and hoped for, the compensated system exhibits less overshoot and shorter response times (rise, peak, and settling times) than the uncompensated system in response to unit step inputs. On a related note, however, the steady-state performance is not improved, as it was with the phase-lag controller. To address this issue, we look at one final example where the phase-lag and phase-lead terms are combined as a lag-lead controller. It is also worth remembering that there is a price to pay for the increased performance. To implement the phase-lead controller, we may need more expensive physical components (amplifiers and actuators) capable of generating the response designed for. For example, if we were to plot the power requirements for each response, we would find that the compensated system demands a much higher peak power to achieve the faster response. The goal is usually not to design the "fastest" controller but one that balances the economic and engineering constraints. The Matlab commands used to generate the plots and measure the margins are given below.
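The margins quoted from Figure 77 can be reproduced without Matlab. The following stdlib-Python sketch (an illustrative cross-check, not part of the original Matlab session) sweeps the compensated open loop response Gc(jω)G(jω) and bisects for the 0 dB crossover:

```python
# Locate the 0 dB crossover of the phase-lead compensated open loop and
# read the phase margin there.
import cmath
import math

def open_loop(w):
    """Phase-lead compensator times plant, evaluated at s = jw."""
    s = 1j * w
    lead = (0.8 * s + 1) / (0.02 * s + 1)
    plant = 75 / ((s + 5) * (s * s + 4 * s + 5))
    return lead * plant

# |Gc*G| falls through 1 (0 dB) between 1 and 20 rad/s; bisect for it.
lo, hi = 1.0, 20.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if abs(open_loop(mid)) > 1:
        lo = mid
    else:
        hi = mid
wc = 0.5 * (lo + hi)
pm = 180 + math.degrees(cmath.phase(open_loop(wc)))
print(f"wc = {wc:.2f} rad/s, PM = {pm:.1f} deg")   # agrees with Figure 77
```

The computed crossover of about 6.74 rad/sec and phase margin of about 52 degrees match the GM/PM readout in the Figure 77 caption.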

Figure 79 Example: closed loop step responses of original and phase-lead compensated systems.

%Program commands to generate Bode plots
% for phase-lead exercise
clear;
numc=[0.8 1];              %Place zero at -1.25
denc=[0.02 1];             %Place pole at -50
nump=75;                   %Forward loop system numerator
denp=conv([1 5],[1 4 5]);  %Forward loop system denominator

sysc=tf(numc,denc);        %Controller transfer function
sysp=tf(nump,denp);        %System transfer function in forward loop
sysall=sysc*sysp;          %Overall compensated system in series
syscl=feedback(sysp,1)     %Uncompensated closed loop TF
sysclc=feedback(sysall,1)  %Compensated closed loop TF

margin(sysp)               %Generate Bode plot with PM and GM for plant
figure;                    %Open a new figure window
bode(sysc,{0.001,100});    %Bode plot of the phase-lead controller
figure;
margin(sysall);            %Bode plot with PM and GM for final OL system
figure;
bode(sysp,sysc,sysall)     %Individual components plotted
tsys=[0:0.05:10];
figure;
step(syscl,tsys);          %Generate step response of CL uncompensated system
hold;                      %Hold the plot
step(sysclc,tsys);         %Generate step response of CL compensated system
hold;                      %Release the plot (toggles)
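As a quick cross-check of the values read off Figure 70 above (about -15 dB and -200 degrees at 6.3 rad/sec), the plant's frequency response can be evaluated exactly. This stdlib-Python sketch (not from the text) shows the exact values are nearer -13 dB and -196 degrees, which is consistent with the final design slightly exceeding the 50 degree goal:

```python
# Evaluate the uncompensated plant 75/((s+5)(s^2+4s+5)) at 6.3 rad/s.
import cmath
import math

s = 6.3j                                   # target crossover frequency
plant = 75 / ((s + 5) * (s * s + 4 * s + 5))
mag_db = 20 * math.log10(abs(plant))
phase_deg = math.degrees(cmath.phase(plant))
if phase_deg > 0:                          # unwrap phases beyond -180 degrees
    phase_deg -= 360
print(f"{mag_db:.1f} dB, {phase_deg:.1f} deg")   # about -13.3 dB, -195.6 deg
```

Plot readings are approximate by nature; small differences like this explain why a design built from them lands near, rather than exactly on, the target margin.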

EXAMPLE 5.21 Use Matlab to combine the phase-lag and phase-lead controllers from Examples 5.19 and 5.20, respectively, and verify that both the steady-state and transient requirements are met when the controllers are implemented together as a lag-lead. Generate both the Bode plot (with stability margins) and step responses for the lag-lead compensated OL system.

Recall that we were able to meet our steady-state error goal of less than a 2% error resulting from a step input using the phase-lag controller, and our transient response goals of a closed loop damping ratio of 0.5 and a natural frequency of 8 rad/sec using the phase-lead controller. Combining them should enable us to meet both requirements simultaneously. The final lag-lead controller becomes

Lag-lead = [(s + 0.34) / (s + 0.02)] × [(0.8 s + 1) / (0.02 s + 1)]

To verify the response we will use Matlab to generate the compensated OL Bode plot and the resulting step response plot. The commands used to generate these plots are as follows:


%Program commands to generate Bode plots
% for lag-lead exercise
clear;
numclag=[1 0.34];          %Phase-lag, Place zero at -0.34
denclag=[1 0.02];          %Phase-lag, Place pole at -0.02
numclead=[0.8 1];          %Phase-lead, Place zero at -1.25
denclead=[0.02 1];         %Phase-lead, Place pole at -50
nump=75;                   %Forward loop system numerator
denp=conv([1 5],[1 4 5]);  %Forward loop system denominator

sysclag=tf(numclag,denclag);     %Phase-lag Controller TF
sysclead=tf(numclead,denclead);  %Phase-lead Controller TF
sysp=tf(nump,denp);              %System transfer function in forward loop
sysall=sysclag*sysclead*sysp;    %Overall compensated system in series
syscl=feedback(sysp,1)           %Uncompensated closed loop TF
sysclc=feedback(sysall,1)        %Compensated closed loop TF

margin(sysall);                  %PM and GM Bode plot for final system
figure;                          %Open a new figure window
bode(sysp,sysclag,sysclead,sysall)  %Individual components plotted
tsys=[0:0.05:10];
figure;
step(syscl,tsys);                %Step response of CL uncompensated system
hold;                            %Hold the plot
step(sysclc,tsys);               %Generate step response of CL compensated system
hold;                            %Release the plot (toggles)
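As with the phase-lead design, the lag-lead compensated margins can be reproduced independently of Matlab. This stdlib-Python sketch (an illustrative cross-check, not part of the original Matlab session) adds the lag term to the sweep:

```python
# Locate the 0 dB crossover of the lag-lead compensated open loop and
# read the phase margin there.
import cmath
import math

def open_loop(w):
    """Lag times lead times plant, evaluated at s = jw."""
    s = 1j * w
    lag = (s + 0.34) / (s + 0.02)
    lead = (0.8 * s + 1) / (0.02 * s + 1)
    plant = 75 / ((s + 5) * (s * s + 4 * s + 5))
    return lag * lead * plant

lo, hi = 1.0, 20.0                 # 0 dB crossover lies in this range
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if abs(open_loop(mid)) > 1:
        lo = mid
    else:
        hi = mid
wc = 0.5 * (lo + hi)
pm = 180 + math.degrees(cmath.phase(open_loop(wc)))
print(f"wc = {wc:.2f} rad/s, PM = {pm:.1f} deg")   # agrees with Figure 80
```

Near the crossover the lag term contributes almost no magnitude but a few degrees of phase lag, which is exactly why the margin drops slightly from 52 to about 49 degrees.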

Since the open loop uncompensated Bode plot is already given in Figure 70, let us proceed to plot the lag-lead compensated response and verify that our phase margin and crossover frequency have remained as designed for in Example 5.20. The final Bode plot is given in Figure 80, where we see that we now have a phase margin of 49 degrees and a crossover frequency of 6.7 rad/sec, only slightly lower than in the previous example (due to adding the lag term) but still very close to our design goals.

Figure 80 Example: Bode plot of lag-lead compensated system (GM = 17.073 dB [at 19.602 rad/sec], PM = 49.337 deg. [at 6.7435 rad/sec]).

To see how the individual phase-lag and phase-lead terms add to the original OL system, we can plot each term separately, as shown in Figure 81. For the most part, the phase-lag term modifies the system only at lower frequencies and the phase-lead term at higher frequencies; when they are added together, both the steady-state and transient requirements are still met. Finally, in Figure 82 we can verify that the requirements are met in the time domain by examining the step response plots for the uncompensated and compensated systems. Looking at the responses, we see that the transient response of the closed loop compensated system has less overshoot and shorter rise, peak, and settling times than the closed loop uncompensated system, and that the steady-state error is much improved after adding the lag-lead controller. So, as in root locus (s-plane) design methods, we can design the lag and lead portions independently in the frequency domain to achieve both satisfactory steady-state and transient performance.

In conclusion, phase-lag and phase-lead controllers provide alternative design options with performance very similar to PID controllers. As mentioned, there are sometimes practical advantages to phase-lag and phase-lead controllers during implementation with regard to noise amplification and components. From either the s-plane pole and zero contributions or the frequency domain magnitude and phase angle contributions, it is easy to see that many similarities exist between PID and lag-lead compensation techniques.

Figure 81 Example: Bode plot of individual terms (lag, lead, original, and final systems).

Figure 82 Example: step responses of original and lag-lead compensated systems.

5.6 POLE PLACEMENT CONTROLLER DESIGN IN STATE SPACE

The methods presented thus far in this chapter are limited (regarding design and simulation) when dealing with any system that is not linear, time invariant, and single input-single output. Optimal and adaptive controllers, often time varying and nonlinear, must be analyzed using other techniques, many of which use state space representations. Nonlinear and time varying systems can be designed using conventional techniques, largely through trial and error, but the optimal result is seldom achieved. Complex systems are generally designed using some performance index that indicates how well a controller is performing. This index largely determines the system behavior, since it is the "yardstick" used to measure the performance. Two common design approaches are used when designing control systems with state space techniques. Pole placement is a common introductory technique and is presented in this section to introduce state space controller design. An alternative is the quadratic optimal regulator, which seeks to minimize an error function. The error function must be defined and might consist of several different error terms depending on the application. Practical limitations are also placed on the controller through a constraint on the control vector, which limits the control signal to reflect actuator saturation(s). The resulting system seeks a compromise between minimizing the squared error and minimizing the control energy. Two matrices, Q and R, are used as weighting functions in the performance index, commonly called the quadratic performance index. The need to solve problems with more unknowns than equations is a primary reason for using quadratic optimal regulators. These techniques are discussed in Chapter 11. Pole placement techniques may also be used to stabilize state space controllers.
Although the idea is quite simple, it assumes that we have access to all the states, that is, that they can be measured. This is seldom the case, and observers must be used to estimate the unknown states, as shown in Section 11.5.2. The advantage is that for controllable systems, all poles can be placed wherever we want them (assuming the physics are possible). Controllability is determined by finding the rank of the controllability matrix:

rank [B | AB | A^2 B | ... | A^(n-1) B]

If the rank is less than the system order (the size of A), the system is not controllable. Essentially, the system is controllable when all of the column vectors are linearly independent and each state is affected by at least one of the inputs. If one or more of the states are not affected by an input, then no matter what controller we design, we cannot guarantee that all the states will be well behaved, or controlled.

Now we can review the original state space equation:

dx/dt = A x + B u

where we have an arbitrary input u. If we close the loop and introduce feedback into our system, our input becomes

u = -K x

Although the gain vector K can be determined by using a transformation and linear algebra (easily done in Matlab), a simpler approach for systems of third order and lower is to form the "modified" system matrix, solve for the eigenvalues as a function of K, and choose gains to place each pole at a predetermined location. The gain vector K can be found by equating the coefficients of the characteristic equation (determinant) of our system matrix with the coefficients of the desired characteristic equation, formed from the desired pole locations. To illustrate how the feedback loop and gain vector affect the system matrix, let us substitute the feedback, u = -K x, back into the original state space equation. Then

dx/dt = A x - B K x = (A - B K) x

A modified system matrix, Ac, is formed which includes the control law. Since the controller gains now appear in the system matrix, the eigenvalues (poles) of Ac can be matched to the desired poles by adjusting the gains in the K vector. This method is shown in the following example and assumes that all states are available for feedback.
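The coefficient-matching procedure just described can be sketched in a few lines of stdlib Python. The plant matrices below are illustrative only (they are not from the text), chosen so that the algebra is easy to follow by hand:

```python
# Pole placement by coefficient matching for a hypothetical plant:
#   A = [[0, 1], [-2, -3]],  B = [[0], [1]],  u = -K x,  K = [k1, k2].
# Closed loop matrix A - B K has characteristic polynomial
#   s^2 + (3 + k2) s + (2 + k1),
# so matching the desired polynomial s^2 + 4 s + 8 (poles -2 +/- 2j)
# gives k1 = 8 - 2 = 6 and k2 = 4 - 3 = 1.
import cmath

k1, k2 = 8 - 2, 4 - 3          # match s^0 and s^1 coefficients

# Verify by computing the eigenvalues of the 2x2 closed loop matrix A - B K.
a11, a12 = 0, 1
a21, a22 = -2 - k1, -3 - k2    # second row of A - B K for this B
tr = a11 + a22
det = a11 * a22 - a12 * a21
disc = cmath.sqrt(tr * tr - 4 * det)
poles = ((tr + disc) / 2, (tr - disc) / 2)
print(poles)                   # the desired poles, -2 +/- 2j
```

The same two steps (write the closed loop characteristic polynomial symbolically in the gains, then equate coefficients with the desired polynomial) are exactly what the worked example below carries out for a unit mass.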
EXAMPLE 5.22 Using the differential equation describing a unit mass under acceleration, determine the state space model and design a state feedback controller utilizing a gain matrix to place the poles at s = -1 ± 1j. Note that the open loop system is marginally stable; the controller, if designed properly, will stabilize it and result in a system damping ratio of 0.707 and a natural frequency of 1.4 rad/sec. The differential equation for a mass under acceleration, where c is the position of the mass m and r is the force input on the system, is

d^2c/dt^2 = (1/m) r


First we must develop the state matrices. To do so, let the first state x1 be the position and the second state x2 the velocity. Then

x1 = c
dx1/dt = x2 = dc/dt
dx2/dt = d^2c/dt^2 = (1/m) r

and the following matrix equation is determined:

[dx1/dt]   [0  1] [x1]   [ 0 ]
[dx2/dt] = [0  0] [x2] + [1/m] r

To determine whether the system is controllable, we take the rank of the controllability matrix M. The first column is the B vector and the second column is the vector resulting from A·B. Since the resulting M matrix is nonsingular with rank = 2, the system is controllable:

M = [B | AB] = [ 0   1/m ]
               [1/m   0  ]      rank = 2, controllable

Using the controller form developed above,

dx/dt = A x - B K x = (A - B K) x

the control law matrix, B K, is

B K = [ 0 ] [k1 k2] = [  0     0  ]
      [1/m]           [k1/m  k2/m]

The new system matrix gives the characteristic determinant

|sI - A + B K| = |  s        -1     |
                 | k1/m   s + k2/m  |

The characteristic equation becomes a function of the gains:

s(s + k2/m) + k1/m = s^2 + (k2/m)s + k1/m

To solve for the gains required to place the closed loop poles at s = -1 ± 1j, we multiply the two pole factors to get the desired characteristic equation and compare it with the gain-dependent characteristic equation. The desired characteristic equation is

(s + 1 + 1j)(s + 1 - 1j) = s^2 + 2s + 2

To place the poles using k1 and k2 we simply compare coefficients. By inspection, from the s^0 term, k1/m = 2, and from the s^1 term, k2/m = 2. Therefore,

k1 = 2 m  and  k2 = 2 m

For example, if we have a unit mass equal to 1, then the desired gain matrix becomes

K = [2 2]

As a reminder, although for controllable systems we can place the poles wherever we wish, doing so depends on having those states available as feedback, either as a

measured variable or through the use of estimators. In this example it means having both position and velocity signals available to the controller.

For higher order systems it is advantageous to use the properties of linear algebra to solve for the gain matrix. Ackermann's formula allows many computer-based math programs to solve for the gain matrix even for large systems. Deferring the proofs to other texts in the references, we can define the gain matrix K as

K = [0 0 ... 0 1] M^-1 Ad

where K is the resulting gain matrix; M is the controllability matrix, which, if the system is controllable, is nonsingular and has an inverse; and Ad is the matrix containing the information about our desired poles. It is formed as shown below. Given the desired characteristic equation

s^n + a1 s^(n-1) + ... + a(n-1) s + an = 0

and using the original A matrix,

Ad = A^n + a1 A^(n-1) + ... + a(n-1) A + an I

Many computer packages with control system tools have Ackermann's formula available, and thus we only have to supply the desired poles and the system matrices A and B to have the gain matrix calculated for us. More examples using state space techniques are presented in Chapter 11.

EXAMPLE 5.23 Use Matlab to verify the state feedback controller designed in Example 5.22. Recall that the physical system was open loop marginally stable, described by a force input to a mass without damping or stiffness terms. The resulting state space model is

[dx1/dt]   [0  1] [x1]   [ 0 ]
[dx2/dt] = [0  0] [x2] + [1/m] r

To design the controller we will first define the state matrices in Matlab along with the desired pole locations. Then controllability and the resulting gain matrix can be solved for using the Matlab commands shown below.

%Pole placement controller design for State Space Systems
m=1;             %Define the mass in the system
A=[0 1;0 0];     %System matrix, A
B=[0;1/m];       %Input matrix, B
C=ctrb(A,B)      %Check the controllability of the system
rank(C)          %Check the rank of the controllability matrix
                 %Rank = system order for controllable systems
det(C)           %Determinant must exist for controllability
P=[-1+j,-1-j];   %Vector of desired pole locations
K=place(A,B,P)   %Calculate the gain matrix using the place command
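Ackermann's formula above can also be checked by hand for this two-state system. The following stdlib-Python sketch (an illustration, not part of the text's Matlab session) builds M and Ad explicitly for the unit-mass example:

```python
# Ackermann's formula K = [0 1] M^-1 Ad for the unit-mass system (m = 1),
# with desired polynomial s^2 + 2 s + 2 (a1 = 2, a2 = 2). 2x2 case only.
A = [[0.0, 1.0], [0.0, 0.0]]
B = [0.0, 1.0]                      # input column for m = 1
a1, a2 = 2.0, 2.0

def matmul(X, Y):                   # 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

AB = [sum(A[i][k] * B[k] for k in range(2)) for i in range(2)]
M = [[B[0], AB[0]], [B[1], AB[1]]]            # controllability matrix [B AB]
detM = M[0][0] * M[1][1] - M[0][1] * M[1][0]
assert detM != 0                              # nonsingular, so controllable
Minv = [[ M[1][1] / detM, -M[0][1] / detM],
        [-M[1][0] / detM,  M[0][0] / detM]]

A2 = matmul(A, A)
Ad = [[A2[i][j] + a1 * A[i][j] + a2 * (1.0 if i == j else 0.0)
       for j in range(2)] for i in range(2)]  # Ad = A^2 + a1 A + a2 I

row = [Minv[1][0], Minv[1][1]]                # [0 1] M^-1 picks the last row
K = [row[0] * Ad[0][j] + row[1] * Ad[1][j] for j in range(2)]
print(K)                                      # [2.0, 2.0]
```

The result matches both the hand-derived gains of Example 5.22 and the output of the place command discussed next.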


When the commands are executed, Matlab returns the A and B system matrices, the controllability matrix C, and the rank and determinant of C. Finally, the desired pole locations are defined and the place command is used to determine the required gain matrix K. Output from Matlab gives the controllability matrix as

C = [0 1; 1 0]

with the rank of C equal to 2 and the determinant of C equal to -1. Either check may be used to verify controllability, since the rank of a matrix is the size of its largest square submatrix with a nonzero determinant. Since the rank of C equals 2, which is the size of C, the determinant of the complete matrix is nonzero, as given by the det command, showing that C is indeed nonsingular. Finally, after defining our desired pole locations as -1 ± 1j, the place command returns

K = [2.0000 2.0000]

This corresponds exactly with the gain matrix solved for manually in the previous example. Since we are working with matrices, the Matlab code given in this example is easily applied to larger systems. The only terms that must be changed are the A and B system matrices and the vector P containing our desired pole locations.

To summarize this section on pole placement techniques with state space matrices, we should recognize that the same effect of placing the poles for this system could be achieved by using a PD controller algorithm. If we were to add velocity feedback, as would be required for the state space design, the same design would result. This section is included to introduce the topic as it relates to similar design methods and systems (LTI, single input-single output) already presented in this chapter. State space techniques become valuable when dealing with larger, nonlinear, and multiple input-multiple output systems. Chapter 11 introduces several state space design methods for applications such as these.

5.7 PROBLEMS

5.1 Briefly describe the typical goals for each term in the common PID controller. What is each term expected to achieve in terms of system performance?

5.2 Describe integral windup and briefly describe a possible solution.

5.3 Briefly describe an advantage and a disadvantage of using derivative gains.

5.4 What is the reason for using an approximate derivative?

5.5 List three alternative configurations of PID algorithms and describe why they are sometimes used.

5.6 What is the assumption made when it is said that the system has dominant complex conjugate poles?

5.7 To design a system with a damping ratio equal to 0.6 and a natural frequency equal to 7 rad/sec, where should the dominant pole locations be located in the s-plane?


5.8 To design a system that reaches 98% of its final value within 4 seconds, what condition on the s-plane must be met?

5.9 A simple feedback control system is given in Figure 83. As a designer, you have control over K and p. Select the gain K and pole location p that will give the fastest possible response while keeping the percentage overshoot less than 5%. Also, the desired settling time, Ts, should be less than 4 seconds.

5.10 For the system given in the block diagram in Figure 84, determine the gains K1 and K2 necessary for a system damping ratio ζ = 0.7 and a natural frequency of 4 rad/sec.

5.11 The current system exhibits excessive overshoot. To reduce the overshoot in response to a step input, we could add velocity feedback, as shown in the block diagram in Figure 85. Determine a value for K that limits the percent overshoot to 10%.

5.12 Velocity feedback is added to the control to add effective damping to the system, as shown in the block diagram in Figure 86. Determine a value for K that limits the percent overshoot to 5%.

5.13 Using the plant model transfer function given, design a unity feedback control system using a proportional controller.
a. Develop the root locus plot for the system.
b. Determine (from the root locus plot and using the appropriate root locus conditions) the gain K required for a damping ratio = 0.2:

Figure 83 Problem: system block diagram with unity feedback.

Figure 84 Problem: system block diagram with gain feedback.

Figure 85 Problem: system block diagram with velocity feedback.

Figure 86 Problem: system block diagram with velocity feedback.

G(s) = 5 / (s^2 + 7s + 10)

5.14 Using the plant model transfer function, design a unity feedback control system using first a proportional controller (K = 2) and then a PI controller (K = 2, Ti = 1). Draw the block diagrams for both systems and determine the steady-state error for both systems when subjected to step inputs with a magnitude of 2:

G(s) = 5 / (s + 5)

5.15 Use the system block diagram given in Figure 87 to answer the following questions.
a. If Gc = K, what is the steady-state error due to a unit step input?
b. If Gc = K(1 + 1/(Ti s)), what is the steady-state error due to a unit step input?
c. Using the PI controller in part b, will the system ever go unstable for any gains K > 0 and Ti > 0? Use root locus techniques to justify your answer.

5.16 Given the block diagram model of a physical system in Figure 88:
a. Describe the open loop system response characteristics in a brief sentence (no feedback or Gc).

Figure 87 Problem: block diagram of controller and system model.

Figure 88 Problem: block diagram of controller and system model.


b. Add a PD controller, Gc = K(1 + Td s), and find K and Td such that ωn = 3 and ζ = 0.8.
c. Will the actual system exhibit the response predicted by ωn and ζ? Why or why not? Use root locus techniques to defend your answer.

5.17 A block diagram, given in Figure 89, includes a physical system (plant) transfer function that is unstable. Design the simplest possible controller, Gc, that will make the feedback system stable and meet the following requirements:
a. Steady-state errors from step (constant) inputs = zero
b. System settling time, Ts, of 4 seconds
c. System damping ratio, ζ, of 0.5
(Begin with P, then I, then PI, then PID, until you find the simplest one that will meet the requirements. Document why each one will or will not meet the requirements.)

5.18 Using the block diagram in Figure 90, design the simplest controller which, using some possible combination of proportional, integral, and/or derivative gains, meets the listed performance requirements. System requirements: ζ ≥ 0.7, Tsettling ≤ 1 sec, ess(step) ≤ 0.40.

5.19 Given the open loop step response in Figure 91, determine the PID controller gains using Ziegler-Nichols methods.

5.20 Given the open loop step response in Figure 92, determine the PID controller gains using Ziegler-Nichols methods.

5.21 Given the system in Figure 93, draw the asymptotic Bode plot (open loop) and determine the gain K such that the phase margin is 45 degrees.

5.22 Given the OL system transfer function, draw the asymptotic Bode plot (open loop) for K = 1 and answer the following questions. Clearly show the final resulting plot.
a. When K = 1, what is the phase margin φm?
b. When K = 1, what is the gain margin?
c. What value of K will make the system go unstable?

GH(s) = 10 K / ((10s + 1)(s + 1)(0.1s + 1))

Figure 89 Problem: block diagram containing unstable plant model.

Figure 90 Problem: block diagram with controller and plant model.

Figure 91 Problem: open loop step response.

Figure 92 Problem: open loop step response.

Figure 93 Problem: block diagram of controller and system model.


5.23 With the system in Figure 94 and using the frequency domain, design a PI controller so that the system exhibits the desired performance characteristics. Calculate the steady-state error from a ramp input using your controller gains. System requirements: φm = 52 degrees, ωc = 10 rad/sec.

5.24 Use the open loop transfer function and frequency domain techniques to design a PD controller where the phase margin is equal to 40 degrees at a crossover frequency of 10 rad/sec.

G(s) = 24 / (s + 4)^2

5.25 Using root locus techniques, design a phase-lead controller so that the system in Figure 95 exhibits the desired performance characteristics. System requirements: ζ ≥ 0.35, Tsettling ≤ 4 sec.

5.26 Given the system block diagram in Figure 96, design a controller (phase-lag/lead) to achieve a closed loop damping ratio equal to 0.5 and a natural frequency equal to 2 rad/sec. Use root locus techniques.

5.27 Using the system shown in the block diagram in Figure 97, design a phase-lag compensator that does not significantly change the existing pole locations while causing the steady-state error from a ramp input to be less than or equal to 2%.

5.28 With a third-order plant model and unity feedback control loop as in Figure 98:
a. Design a compensator to leave the existing root locus paths in similar locations while increasing the steady-state gain in the system by a factor of 25.

Figure 94 Problem: block diagram of controller and system model.

Figure 95 Problem: block diagram of controller and system model.

Figure 96 Problem: block diagram of controller, system, and transducer.

Figure 97 Problem: block diagram of controller and system.

Figure 98 Problem: block diagram of controller and system.

b. Where are the closed loop pole locations before and after adding the compensator?
c. Verify the root locus and step response plots (compensated and uncompensated) using Matlab.

5.29 Using the system shown in the block diagram in Figure 99, design a compensator that does the following:
a. Places the closed loop poles at -2 ± 3.5j. Define both the required gain and the compensator pole and/or zero locations.
b. Results in a steady-state error from a ramp input that is less than or equal to 1.5%.
c. Verify your design (root locus and ramp response) using Matlab.

5.30 Given the open loop system transfer function, design a phase-lag controller to increase the steady-state gain in the system by a factor of 10 while not significantly decreasing the stability of the system. Include
a. The block diagram of the system with unity feedback.
b. The open loop uncompensated Bode plot, gain margin, and phase margin.
c. The transfer function of the phase-lag compensator.
d. The compensated open loop Bode plot, gain margin, and phase margin.

Figure 99 Problem: block diagram of controller and system.

Figure 100 Problem: block diagram of controller and system.


G(s) = 20 / ((s + 1)(s + 2)(s + 3))

5.31 Given the open loop system transfer function, design a phase-lead controller to increase the system phase margin to at least 50 degrees and the gain margin to at least 10 dB. Include
a. The block diagram of the system with unity feedback.
b. The uncompensated open loop Bode plot, gain margin, and phase margin.
c. The transfer function of the phase-lead compensator.
d. The compensated open loop Bode plot, gain margin, and phase margin.

G(s) = 1 / (s^2 (s + 5))

5.32 Using the system shown in the block diagram in Figure 100, design a compensator that does the following:
a. Results in a phase margin of at least 50 degrees and a crossover frequency of at least 8 rad/sec.
b. Results in a steady-state error from a step input that is less than or equal to 2%.
c. Verify your design (Bode plot and step response) using Matlab.

5.33 Given the differential equation describing a mass and spring system, determine the state space model and design a state feedback controller utilizing a gain matrix to place the poles at s = -1 ± 1j.

2 d^2y/dt^2 + dy/dt = r

where y is the output (position) and r is the input (force).

5.34 Given the differential equation describing the model of a physical system, determine the state space model and design a state feedback controller utilizing a gain matrix to:
a. Have a damping ratio of 0.8.
b. Have a settling time less than 1 second.
c. Place the third pole on the real axis at s = -5.

d^3y/dt^3 + 4 d^2y/dt^2 + 3 dy/dt + 2y = r

where y is the output and r is the input (force).

6 Analog Control System Components

6.1 OBJECTIVES

- Introduce the common components used when constructing analog control systems.
- Learn the characteristics of common control system components.
- Develop the knowledge required to implement the controllers designed in previous chapters.

6.2 INTRODUCTION

Until now little mention has been made of the actual process of, and limitations in, constructing closed loop control systems. A paper design is just as it states: no physical results. This chapter introduces the basic components that we are likely to need when we move from design to implementation and use. The fundamental categories, shown in Figure 1, may be summarized as error detectors, control action devices, amplifiers, actuators, and transducers. The goal of this chapter is to introduce some common components in each category and show how they are typically used when constructing control systems.

The amplifiers and actuators tend to be somewhat specific to the type of system being controlled. There are physical limitations associated with each type, and if the wrong one is chosen, the system will not perform well no matter how our controller attempts to control system behavior. Amplifiers, as the name implies, tend to simply increase the available power level in the system. The actuators are then designed to use the output of the amplifiers to effect some change in the physical system. If our actuator does not cause the output of the physical system to change (in some predictable manner), the control system will fail.

The control action devices provide the desired features discussed in previous chapters. How do we actually implement the proportional-integral-derivative (PID), or phase-lag, or phase-lead controller that works so well in our modeled system? Two basic categories include electrical devices and mechanical devices. Electrical devices will be limited to analog in this chapter and later expanded to include the rapidly growing digital microprocessor-based controllers. For the most part the operational amplifier is the analog control device of choice. It is supported by a

Chapter 6

Figure 1 Typical layout of system components.

multiple array of filters, saturation limits, safety switches, etc., in the typical controller. In fact, you may have to search the circuit board just to find the chips performing the basic control action; the remaining components add the features, safety, and flexibility required for satisfactory performance. The controller topologies presented in previous chapters can all be implemented quite easily using operational amplifiers.

Mechanical controllers utilize feedback of the actual physical variable to close the loop. Example variables include position (feedback linkages), speed (centrifugal governor), and pressure. In these controllers a transducer is generally not required, and they may operate independently of any electrical power. We are obviously constrained by physics as to what mechanical controller feedback systems are possible. Many mechanical controllers are still in use, providing reliable and economical performance.

As the move is made to electronic controllers, the importance of transducers, actuators, and amplifiers increases. While actuators are still required in mechanical feedback systems (i.e., a hydraulic valve), transducers and amplifiers generally include supporting electrical components. To have an electrical component represent the summing junction in the block diagram, we must be able to provide an electrical command signal and a feedback signal (proportional to the actual controlled variable). The output of such controllers is very low in power (generally current limited) and depends on linear amplifiers capable of causing a physical change in the actuator and ultimately in the physical output of the system. Sometimes posing an even greater problem is the transducer. The lack of suitable transducers has in many cases limited the design of the "perfect" controller. For a system variable to be controlled, it must be capable of being represented by an appropriate electrical signal (i.e., via a transducer).
The goal of this chapter is to provide information on the basic components found in the four categories (controller, transducer, actuator, and amplifier).

6.3 ANALOG CONTROLLER COMPONENTS

6.3.1 Implementation Using Basic Analog Circuits

PID, phase-lag, and phase-lead controllers can all be constructed with circuits utilizing operational amplifiers, or OpAmps as they are more commonly called. Although looking at a typical PID controller on a printed circuit board might lead us to believe that we could not construct such a controller ourselves, most of the components are the additional filters, amplifiers, and safety functions. The simple circuits presented here still perform quite well in some conditions. Manufactured controller cards have multiple features, range switches for gains, and robust filtering, and often include the final-stage amplification; they thus appear much more complex than what is required for the basic control actions themselves. The additional features are usually designed for the specific product and in many cases make a manufactured card preferable to building our own. Even then, in many large control systems (i.e., an assembly line) such a controller might be only a subset of the overall system, and we would still be responsible for the overall system performance. The circuits in Table 1 illustrate the basic circuits used in the common controller topologies examined and designed in Chapter 5. Each controller utilizes the basics (inverting and noninverting amplification, summing, difference, integrating, and differentiating circuits) to construct the proper transfer function. Additional information on OpAmps is given in Section 6.6.1. These circuits can be found in most electrical handbooks along with the calculations for each circuit. Using capacitors in the OpAmp feedback loop integrates the error, and using capacitors in parallel with the error signal differentiates the error. Picking different combinations of resistors sets the pole and zero locations. Potentiometers are commonly used to enable on-line tuning. Remember that final drivers (i.e., power transistors or pulse-width-modulation [PWM] circuits) are required when interfacing the circuit output with the physical actuator. Many issues must be considered when using the circuits given in Table 1; they are discussed here in terms of internal and external aspects.
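As a concrete check of how resistor and capacitor values map to familiar controller gains, here is a sketch for a PI circuit of the form G(s) = (R4/R3)(R2/R1)(R2 C2 s + 1)/(R2 C2 s), expanded as Kp + Ki/s. All component values are illustrative, not from the text.

```python
# Map PI OpAmp circuit components to Kp + Ki/s form.
# Expanding (R4/R3)(R2/R1)(R2*C2*s + 1)/(R2*C2*s) gives
# Kp = (R4*R2)/(R3*R1) and Ki = Kp/(R2*C2).
R1, R2, R3, R4 = 10e3, 100e3, 10e3, 10e3   # ohms (assumed values)
C2 = 1e-6                                   # farads (assumed value)

Kp = (R4 / R3) * (R2 / R1)   # proportional gain
Ki = Kp / (R2 * C2)          # integral gain; zero located at -1/(R2*C2)

print(Kp)            # -> 10.0
print(round(Ki, 1))  # -> 100.0
```

Swapping in potentiometers for R2 or R4, as the text suggests, makes these gains tunable on-line.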
Table 1  Operational Amplifier Controller Circuits

Function            G(s) = Eo(s)/Ein(s) = signal out / error in

Summing junction    Error = r - c (both in volts)
P                   (R4/R3)(R2/R1)
PI                  (R4/R3)(R2/R1) * (R2 C2 s + 1)/(R2 C2 s)
PD                  (R4/R3)(R2/R1) * (R1 C1 s + 1)
PID                 (R4/R3)(R2/R1) * (R1 C1 s + 1)(R2 C2 s + 1)/(R2 C2 s)
Lead or lag         (R4/R3)(R2/R1) * (R1 C2 s + 1)/(R2 C2 s + 1)
Lag-lead            (R4/R3)(R2/R1) * [((R1 + R5) C1 s + 1)/((R2 + R6) C2 s + 1)] * [(R6 C2 s + 1)/(R5 C1 s + 1)]

[The original table also shows the OpAmp circuit realizing each function; the schematics are not reproduced here.]

Internally, that is, between the error input and controller output connections, several improvements are commonly made when implementing the circuits. In terms of controller design, there are realistic constraints internal to the circuit on where poles and zeros may feasibly be placed. Since real resistors and capacitors are not ideal components, only limited values and combinations work in practice. For example, to place a zero and/or pole very close to the origin, as is common in phase-lag designs, we would need to find resistors and capacitors with very large values, a challenge for any designer. Second, to avoid integral windup (accumulating too much error and having to overshoot the desired location to dissipate it), we might consider adding diodes to the circuit to clamp the output at acceptable levels. Even if it is beneficial to accumulate more error using the integral term, we will always have limited output available from each component before it saturates. It is also common to include an integral reset switch that discharges the capacitor under certain conditions. Finally, internal problems arise with building pure derivatives using OpAmps because of the resulting noise and saturation problems. As shown in Figure 2, it is common to add another resistor in series with the capacitor; when the equations are developed, this adds a first-order lag term to the denominator. The new controller transfer function becomes

Controller output / Error = R2 C s / (R1 C s + 1)

The modified transfer function should be familiar, as it was already presented and discussed conceptually in Section 5.4.1 as the approximate derivative transfer function.

Figure 2  Modified derivative function using OpAmps.

Figure 7 of Chapter 5 presented the output of the approximate derivative term in response to a step input. To add this function to the PD controller from Table 1, we can modify the circuit and insert the extra resistor as shown in Figure 3. Now when we develop the modified transfer function for the controller, we can examine the overall effect of adding the resistor:

PD_appr = (R4/R3) * R2 (R1 C s + 1) / {(R1 + R5) * [(R1 R5 C s)/(R1 + R5) + 1]}

We still place our zero from the numerator, as before, but we have also added a pole in the denominator, as accomplished in the approximate derivative transfer function. The interesting result comes from comparing the modified PD with a phase-lead controller. Although how we adjust them is slightly different, we find that both algorithms place a zero and a pole and functionally are the same controller. This agrees with earlier observations about the benefits of using phase-lead over derivative compensation because of better noise attenuation at high frequencies. Both the modified PD and phase-lead terms have similar shapes to their respective Bode plots.

Figure 3  Modified PD OpAmp controller with approximate derivative.

Moving on to several external aspects, it is important to realize that even if we get our "internal" circuits operating correctly in the lab, there are still external issues to consider before implementation. Noise, load requirements, physical constraints, extra-feature requirements, and signal compatibility should all be considered regarding their influence on the controller. Noise may consist of actual signal noise from the system transducers, connections, etc., but may also be electromagnetic noise affecting unshielded portions of the circuit. Noisy signals may seriously hinder the application of lead or derivative compensators. One approach is to filter the input signals and shield the actual components from electromagnetic noise. Good construction techniques (connections, shielding, etc.) should be followed at all times. Load requirements must be compatible with the output from the OpAmp devices in our circuits. It may be necessary to add an intermediate driver chip (amplifier) or similar component before connecting to our primary amplifier. For the most part, treat the output of the controller as a signal only, with no power delivery expectations. Physical constraints include mounting styles, machine vibration, heat sources, and moisture. Each application differs as to which constraints are critical. Extra features also need to be designed into the existing controller. For example, with electrohydraulic proportional valves, it is common to add a deadband eliminator. It is usually the combination of extra features, safety devices, and drivers that is more complex and takes up more space than the original compensator designed for the system. Finally, consider signal compatibility when designing and building controllers using analog (and in some ways even more so with digital) components. For the best performance (signal-to-noise ratio, for example), choose transducers, potentiometers, wire gauges, etc., that are designed for the need at hand. Some of these components are examined in more detail later in this chapter. Of particular concern is that the output ranges of the transducers and the input ranges of the amplifiers be compatible with the OpAmps being used to construct the compensator. Both current and voltage requirements should be considered. This is only a brief introduction to the construction and implementation of analog controllers. At a minimum we should see that there are ways to implement the designs resulting from our work in Chapter 5.
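The noise-attenuation advantage of the added pole can be checked numerically. The sketch below compares a pure derivative Td*s with the approximate derivative Td*s/(tau*s + 1); the time constants are illustrative values, not taken from the text.

```python
# Frequency-response comparison: pure vs. approximate derivative.
# The magnitude of a pure derivative grows without bound with frequency,
# amplifying high-frequency noise; the approximate form levels off at Td/tau.
Td, tau = 0.1, 0.01   # derivative time and filter time constant, s (assumed)

def pure(w):
    """|Td * jw| : pure derivative magnitude at frequency w (rad/s)."""
    return abs(Td * 1j * w)

def approx(w):
    """|Td * jw / (tau * jw + 1)| : approximate derivative magnitude."""
    return abs(Td * 1j * w / (tau * 1j * w + 1))

for w in (1.0, 100.0, 10000.0):
    print(round(pure(w), 2), round(approx(w), 2))
# The pure derivative gain keeps climbing with frequency, while the
# approximate version saturates near Td/tau = 10.
```

This is exactly the behavior the Bode-plot comparison in the text describes: identical at low frequency, but the extra pole rolls off the high-frequency gain.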
Beyond that, we can hopefully develop the ability to build and implement some of our designs, bringing us to the next level of satisfaction: moving from a simulation to a physical realization.

6.3.2 Implementing Basic Mechanical Controllers

There are still complete controllers that do not use any electrical signals and instead utilize all mechanical devices to close the loop. These controllers have the advantage of not requiring any external electrical power, transducers, or control circuits, and are therefore more resistant to problems in noisy electrical environments. The interesting thing is that most of the OpAmp circuits from the previous section can be duplicated in hydraulic and pneumatic circuits by using valves, springs, and accumulators in place of resistors and capacitors. In fact, as examined with regard to modeling in Chapter 2, the analogies between electrical and mechanical components can also be applied to designing and implementing mechanical control systems. For example, different linkage arrangements can serve as gain adjustments in a proportional controller. Basic mechanical controllers are still very common and found in everyday items such as toasters, thermostats, and engine speed governors on lawnmowers. In general, other than for simple (proportional or on/off) control, these mechanical controllers will often cost as much or more, be less flexible when upgrading, be more difficult to tune, and consume more energy when compared with electronic controllers. Whereas the resistor in the OpAmp circuits passes current in the range of microamps, the valves or dampers inserted into the mechanical control circuit will have an associated energy drop and thus generate some additional heat in the circuit. If the mechanical control elements are small, this might be an insignificant amount of the total energy controlled by the system, but the advantages and disadvantages must always be considered. The good news is that whether the system is electrical or mechanical, the same effects are present from each gain (P, I, and D). The concepts regarding design and tuning are the same; only the implementation and the actual adjustments tend to differ. For this reason, and since most new controllers are now electronic, only a brief introduction is presented here on how mechanical control systems can be implemented.

EXAMPLE 6.1

Design a mechanical feedback system to control the position of a hydraulic cylinder. Develop the block diagram, including an equivalent proportional controller, and the necessary transfer functions using the model given in Figure 4. Make the following assumptions to simplify the problem and keep it linear:

The mass of the piston and cylinder rod is negligible.
There is a constant supply pressure, Ps.
Flow through the valve is proportional to valve movement, x. The coefficient Kv accounts for the pressure drop across both orifices (flow paths in and out of the valve).
Flow equals the area of the piston times the piston velocity.
The fluid is incompressible.
Notation: r = input, y = output.

First, write the equation representing the input command to the valve, x, as a function of the command input, r, and the system feedback, z. This should look familiar as our summing junction (with scaling factors), whose output is the error between the desired and actual positions:

x = [a/(a + b)] r - [b/(a + b)] z

Figure 4  Example: hydraulic proportional controller.


Now, sum the forces on the mass and develop the transfer function between y and z:

ΣF = M (d²z/dt²) = (y - z)K + (dy/dt - dz/dt)B

Take the Laplace transform of the equation:

M s² Z(s) = K Y(s) - K Z(s) + B s Y(s) - B s Z(s)

And then write the transfer function between Z(s) and Y(s):

Z(s)/Y(s) = (B s + K)/(M s² + B s + K)

Finally, relate the piston movement to the linearized valve spool movement, where the flow rate through the valve is assumed to be proportional to the valve position. This simplification does ignore the pressure-flow relationship that exists in the valve (see Sec. 12.4). The law of continuity (assuming no leakage in the system) relates the valve flow to the cylinder velocity:

Q = A (dy/dt) = Kv x, so dy/dt = (Kv/A) x

Take the Laplace transform and develop the transfer function between Y(s) and X(s):

Y(s)/X(s) = (Kv/A)(1/s)

Now the block diagram can be constructed as shown in Figure 5. Recognize that if we desired to have Z(s) as our output, the block diagram could be rearranged to make this the case, and Y(s) would be an intermediate variable in the forward path. To change the gain in such a system, we must physically adjust pivot points, valve opening sizes, piston areas, etc., to tune the system. In this particular example the linkage lengths allow us to adjust the proportional gain in the system. At this point the design tools presented in previous chapters can be used to choose the desired gains that lead to the proper linkages, springs, and dampers. Although these systems generally have a more limited tuning range, they are impervious to electrical noise and interference, making them very attractive in some industrial settings. They also do not depend on electrical power and provide additional mobility and reliability, especially in hazardous or harsh environments. Among the disadvantages, we see that to change the type of controller, we must actually change physical components in our system.
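The closed loop of Example 6.1 can be checked numerically by integrating the equations above in time. All parameter values (M, B, K, Kv/A, and linkage lengths a and b) are illustrative assumptions chosen for a stable loop, not values from the text; at steady state the piston should settle at y = (a/b) r.

```python
# Time-domain sketch of Example 6.1 with simple Euler integration.
M, B, K = 1.0, 20.0, 100.0   # feedback mass, damper, spring (assumed values)
Kv_over_A = 5.0              # valve flow gain / piston area (assumed)
a = b = 1.0                  # linkage lengths -> equal summing weights
r = 1.0                      # commanded position

y = z = zdot = 0.0           # piston position, feedback mass position/velocity
dt = 1e-3
for _ in range(int(5.0 / dt)):                    # simulate 5 s
    x = (a / (a + b)) * r - (b / (a + b)) * z     # linkage summing junction
    ydot = Kv_over_A * x                          # cylinder: Y(s)/X(s) = (Kv/A)/s
    zddot = (K * (y - z) + B * (ydot - zdot)) / M # spring-damper feedback mass
    y += ydot * dt
    z += zdot * dt
    zdot += zddot * dt

print(round(y, 3))   # -> 1.0, i.e., y settles at (a/b) * r
```

Changing the ratio a/b rescales the steady-state gain, which is exactly the "adjust the linkage lengths to tune the proportional gain" observation in the example.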
Figure 5  Example linear mechanical hydraulic proportional controller.

Also, as mentioned earlier, whereas with electrical controllers we can add effective damping without increasing the actual energy losses, in mechanical systems (hydraulic or pneumatic systems also being considered "mechanical" here) we actually increase the energy dissipation in the system to increase the system damping.

6.4 TRANSDUCERS

Sensors are key elements in designing a successful control system and, in many cases, the limiting component. If a sensor is either unavailable or too expensive, control of the desired variable becomes very difficult. Sensors, by definition, produce an output signal relative to some physical phenomenon. The term is derived from the Latin sensus, used to describe our senses or how we receive information from our physical surroundings. Transducer, a term commonly used interchangeably with sensor, is generally defined to cover a wider range of activities. A transducer is used to convert a physical signal into a corresponding signal of a different form, usually a form readily used by analog controllers. The Latin transducere simply means to transfer, or convert, something from one form to another. Thus a sensor is also a transducer, but not vice versa. Some transducers might just convert from one signal type to another, never "sensing" a physical phenomenon. We will assume the transducers described here include a sensor to obtain the original output change from a physical phenomenon. Only transducers dealing with analog signals are presented here (see Sec. 10.7 for a similar discussion of digital transducers).

6.4.1 Important Characteristics of Transducers

When choosing a transducer, we should know certain important characteristics about it before we purchase it for our system. In most cases this information is available (or can be requested) from the manufacturer of the component. Also, items such as the range are commonly defined when ordering the actual part, and when designing the system there may be several ranges to choose from. Several important characteristics of transducers are summarized in Table 2. The important point in choosing transducers with certain characteristics is to match them to our system. A response time that is too slow will cause stability problems in our system, and a response time that is faster than required may be more expensive. In general, cost is related to both the production volume and the performance level of the transducer, and the relationship with performance is often not linear. It may be possible to use a better transducer for a lower cost if we can locate a common type used in many other applications.

6.4.2 Transducers for Pressure and Flow

6.4.2.1 Common Pressure Transducers

Several varieties of transducers are used to measure pressure. Three common methods of converting a pressure to an electrical signal are as follows:

Strain gages
Piezoelectric materials
Capacitive devices

Table 2  Important Characteristics of Transducers

Range: The input range describes the acceptable values on the input, i.e., 0–1000 psi, 0–10 cm, and so forth. The output range determines the type and level of signal output. If your data acquisition system only handles 0–5 V, then a transducer whose output is ±10 V would be more difficult to interface. Current output signals are also becoming more popular and are discussed more in later sections. Many transducers and controller cards have user-selectable ranges.

Error: These ratings are commonly broken into several categories. Sensitivity, hysteresis, linearity, and repeatability are all components of error that will degrade your accuracy. High precision means high repeatability but not necessarily high accuracy.

Stability: The amount of signal drift as a function of time. The drift may be related to the transducer warming up and thus diminish once the temperature is stable.

Dynamics: This should be specified, as in earlier sections, using terms like response time, time constant, rise time, and/or settling time. These are important if we are trying to control a relatively fast system where the transducer might not be fast enough to measure our variable of interest.
They all do credible jobs and are readily available. Strain gage types measure the strain (deflection) caused by the pressure acting on a plate in the transducer. Piezoelectric devices use the pressure to deform the piezoelectric material, producing a small electrical signal. Finally, capacitive devices measure the capacitance change as the pressure forces two plates closer together. With each type, there are generally three pressure ratings. The normal range, where output is proportional to the input, is where the transducer should be used. Two failure ratings are also relevant. The first failure point is where the measurement device is internally damaged (diaphragm deformed, etc.) and the transducer is no longer useful. The final failure point, and the most severe, is the burst pressure rating; it is dangerous to exceed this rating. Pressure transducers are common, and thus all types come in a variety of voltage and current outputs. Common voltage ranges include 0–10, ±10, 0–1, and 0–5 V. The most common current output range is 4–20 mA; it is discussed in Section 6.6 relative to the noise rejection advantages of current signals. Many transducers now have the signal conditioning electronics mounted inside the transducer for a compact unit that is easy to use and install. An example of this type is shown in Figure 6.

Figure 6  Typical strain gage pressure transducer construction.

Signal conditioning is required for most transducers (not just pressure transducers) since the sensor output (i.e., from a strain gage) is very small and must be amplified. The sooner this occurs, the better the signal-to-noise ratio is for the remainder of the system. Finally, response times of most pressure transducers are very fast relative to the types of systems installed on cylinders/motors with large inertia. Response time may be a concern when attempting to measure higher order dynamics (fluid dynamics, etc.) in the system. Also, since the accuracy of most transducers depends on the transducer range, it is sometimes necessary to use differential pressure transducers. These transducers can measure small differences between two pressures even though both pressures are very large. For example, it is hard to measure small changes in a very large pressure using a transducer designed to output a linear signal from low pressures all the way up to high pressures; the available output resolution is spread over a much larger range.
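The 4–20 mA current-loop convention mentioned above can be sketched in software. The transducer range, sense-resistor value, and fault threshold below are all illustrative assumptions; the "live zero" at 4 mA is what lets a broken wire be distinguished from a genuine zero reading.

```python
# Hypothetical 4-20 mA pressure transducer read across a 250-ohm sense
# resistor (a common way to convert the loop current to a 1-5 V signal).
P_MIN, P_MAX = 0.0, 3000.0      # psi, assumed transducer range
I_MIN, I_MAX = 4.0e-3, 20.0e-3  # amps, standard current-loop span
R_SENSE = 250.0                 # ohms: 1 V at 4 mA, 5 V at 20 mA

def pressure_from_voltage(v):
    """Convert sense-resistor voltage back to psi; flag a dead loop."""
    i = v / R_SENSE
    if i < I_MIN * 0.9:  # well below the 4 mA live zero: wiring fault
        raise ValueError("current below live zero: wiring or transducer fault")
    return P_MIN + (i - I_MIN) / (I_MAX - I_MIN) * (P_MAX - P_MIN)

print(round(pressure_from_voltage(1.0), 1))  # 4 mA  -> 0.0 psi
print(round(pressure_from_voltage(3.0), 1))  # 12 mA -> 1500.0 psi
```

Because information is carried as current rather than voltage, resistance in long cable runs does not corrupt the reading, which is the noise-rejection advantage the text attributes to current signals.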

6.4.2.2 Common Flow Meters

Flow meters have been the larger problem of the two, and accuracy and response time are more often questionable. Flow is more difficult to measure for several reasons: turbulent and laminar flow regimes, a logarithmic dependence of viscosity on temperature, and superimposed pressure effects all make the measurement more difficult. Most precision flow meters are of the turbine type, where the fluid passes through a turbine whose velocity is measured. An example is shown in Figure 7.

Figure 7  Typical axial turbine flow meter.

Once the turbulent regime is well established, many meters are fairly linear and capable of ±0.1% accuracy. To take advantage of the transition regions, higher order curve fits must be used, sometimes a different curve fit for each region of operation. In addition, care must be taken when using these meters in reverse, since the calibration factors are commonly quite different. As higher accuracies are required, temperature and pressure corrections may also be needed. For smaller flows and high precision measurements, some positive displacement meters have been designed for use in several specialty applications (medical, etc.). Other flow meters include ultrasonic, laser, and electromagnetic devices; strain gage devices; and orifice pressure differentials. Ultrasonic flow meters pass high frequency sound waves through the fluid and measure the transit time; they do require additional circuitry to process the signals. Laser Doppler effects may be used to measure flow in transparent channels by measuring the scatter of the laser beam using Doppler techniques. Electromagnetic devices place a magnet on two sides of the channel and measure the voltage on the perpendicular sides; the voltage is proportional to the rate at which the field lines are cut and thus to the velocity of the fluid. Strain gage devices measure the deflection of a ram inserted into the flow path to infer flow rate; their main advantage is potentially better response times relative to the other methods. Finally, simply measuring the pressure on each side of a known orifice allows a flow to be measured, as shown in Figure 8. This method tends to be quite nonlinear outside the calibrated ranges but is commonly used to sense flow in mechanical feedback components such as flow control valves. It creates a design compromise between resolution and allowable pressure drop. There are many variations that have been developed for different applications. Using Bernoulli's equation allows us to solve for the pressure drop as a function of the flow, since the flow into the meter equals the flow out of the meter. In general, the flow will be proportional to the square root of the pressure drop.
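The square-root relationship can be sketched directly. The standard orifice equation derived from Bernoulli's relation is Q = Cd * A * sqrt(2 * dP / rho); the discharge coefficient, orifice area, and fluid density below are illustrative assumptions.

```python
import math

# Orifice flow sketch: Q = Cd * A * sqrt(2 * dP / rho), Bernoulli's equation
# with an empirical discharge coefficient. All values are illustrative.
Cd = 0.62       # discharge coefficient (assumed, sharp-edged orifice)
A = 5.0e-5      # orifice area, m^2 (assumed)
rho = 870.0     # hydraulic oil density, kg/m^3 (assumed)

def flow(dP):
    """Volumetric flow (m^3/s) for a pressure drop dP (Pa) across the orifice."""
    return Cd * A * math.sqrt(2.0 * dP / rho)

# Quadrupling the pressure drop only doubles the flow:
q1, q4 = flow(1.0e5), flow(4.0e5)
print(round(q4 / q1, 3))  # -> 2.0
```

The square-root compression is the nonlinearity the text warns about: at low pressure drops the sensitivity dQ/dP becomes very large, while at high drops resolution is traded against the allowable pressure loss.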

6.4.3 Linear Transducers

Linear transducers are most commonly used to measure position, velocity, and, to some extent, acceleration. They are very common and can be found in many different varieties, shapes, and sizes; prices and accuracy span the same wide range. Position is probably the most commonly controlled output variable.

6.4.3.1 Position Transducers/Sensors

Linear position transducers come in all shapes and sizes, and what follows here is only a brief introduction to them. The goal is simply to present some basic attributes of the common types and give guidelines for choosing position transducers. The decision is primarily a function of the role the transducer must play in the system. Questions that should be addressed include: What is the required length of travel? What is the required resolution? What is the size (packaging) requirement? What is the required output signal? How will it interface with the physical system? What is the available monetary budget? The list given here presents some of the commonly available options.

Figure 8  Measurement of flow using an orifice.


Limit switches: The most basic measurement of position. Useful for sequencing events, calibrating open-loop position, and providing safety limits. There is no proportionality; the switch is either open or closed. Limit switches are very common for adding safety features to controllers or beginning a new series of events, since when they are mechanically closed their power ratings allow them to directly actuate other devices in the system.

Potentiometer: Very common and relatively cheap, especially for shorter lengths. The basic operation is that of a voltage divider where the wiper arm is adjustable. The output voltage range is thus equal to the input excitation voltage, which may be varied within certain limits depending on power dissipation. The output is proportional to the input as the wiper is moved, normally over wire-wound or conductive film resistors. The main problem is wear if the system oscillates around one point. Accuracies better than 0.1% are possible, depending on the construction.

Linear variable differential transformer (LVDT): An inductive device with two secondary windings and one primary, the LVDT requires a sinusoidal voltage for excitation. The input frequencies usually range between 1000 and 5000 Hz. The two secondary windings are on opposite sides of the primary winding, which is excited by the input sinusoid, as shown in Figure 9. A small ferrite core is moved between the coils, and the magnetic flux between them changes. As the core is moved toward a secondary coil, the induced voltage in it increases, while the opposite secondary winding experiences a decrease in voltage. An LVDT requires external circuitry to provide the correct input signal and a usable output signal. Although the cost is much greater than that of simple potentiometers, resolutions as fine as 1 μm are possible and no contact is required between the core and coils; thus, for high cyclic rates the LVDT provides many benefits. Along with the cost comes the added burden of external circuitry, limiting its use in original equipment manufacturer (OEM) and other high-volume applications.

Figure 9  Typical arrangement of LVDT.

Magnetostrictive technology: These sensors utilize magnetostrictive properties to measure position. A magnet is passed over the magnetostrictive material, and a reflection wave is returned when a pulse is sent down the waveguide. Timing the pulse round trip allows position to be calculated. Although requiring additional electronics to process the signal, an advantage, especially in hydraulics, is that the magnet does not contact the material and may be placed inside a hydraulic cylinder (magnet on piston, sensing shaft inside cylinder rod), reducing the chance of external damage. An additional benefit is that velocity may also be calculated, so position and velocity signals are available simultaneously. These sensors can measure up to 72 inches of travel with excellent resolution; precision on the order of ±0.0002 inches is possible with only 0.02% nonlinearity. Possible signal outputs include analog voltage or current and digital start/stop or PWM signals. Update times are usually around 1 msec. An example of a linear position transducer using magnetostrictive materials is shown in Figure 10.

Additional transducers: There are many specialty sensors used to measure position, but most are limited in range, cost, and/or durability for control systems. Laser interferometers have extremely good resolution and response times, but they require a reflective surface and external power supplies. Many capacitive sensors are also available. Some sense position by sliding the inner plate relative to the outer plates or by varying the distance between the plates, thus changing the capacitance. Good resolutions are possible, but at the expense of small movement limits, external circuitry, and high sensitivity to plate geometry and dirty environments. Hall effect transducers effectively measure position along the length of a magnet, with an output proportional to movement between the N and S poles. Finally, strain gages in a sense also measure displacement, albeit very small.

In summary, Figure 11 illustrates the accuracy, cost, and measurement range of common linear position transducers. There are many transducers available for measuring linear position, and the preceding discussion provides only an introduction and an overview.
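The potentiometer's voltage-divider behavior described above can be sketched in a few lines; the excitation voltage and track length are assumed values for illustration.

```python
# Position potentiometer as a voltage divider: Vout = Vexc * x / L for a
# wiper at position x along a track of length L. Values are illustrative.
V_EXC = 5.0   # excitation voltage, V (assumed)
L = 0.10      # track length, m (assumed)

def pot_voltage(x):
    """Wiper voltage for position x (m) measured from the grounded end."""
    if not 0.0 <= x <= L:
        raise ValueError("wiper position outside track")
    return V_EXC * x / L

print(round(pot_voltage(0.025), 3))  # quarter travel -> 1.25 V
```

Note that, as the text says, the output span simply equals the excitation voltage, so rescaling the measurement range is a matter of changing V_EXC (within the power-dissipation limit of the element).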
Each type of environment, application, and field of use will likely have additional options developed specifically for it. (Note: Digital transducers used to measure linear movement are discussed in Sec. 10.7.)

6.4.3.2 Velocity Transducers/Sensors

Figure 10  Typical magnetostrictive linear position measurement transducer.

Figure 11  Comparison of common linear position transducers. (From Anderson, T. Selecting position transducers. Circuit Cellar INK, May 1998.)

The position transducers listed above can be modified for use as velocity sensors but require an additional differentiating circuit. This can be accomplished using a single OpAmp, capacitor, and resistor, but will likely require additional components to combat noise problems. The simplest analog linear velocity sensor is made by moving magnets past coils to generate a voltage signal; displacement ranges tend to be quite small, on the order of ±50 mm. Magnetostrictive technology, described above, also develops velocity signals through on-board circuitry.
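The differentiate-then-filter idea can be illustrated in sampled form. The sketch below derives velocity from a noisy position signal with a finite difference followed by a first-order low-pass filter, mimicking the OpAmp differentiator plus noise filtering described above; the sample rate, filter time constant, test signal, and noise level are all illustrative assumptions.

```python
import math, random

# Velocity from sampled position: raw finite difference, then a first-order
# low-pass filter to tame the amplified measurement noise.
dt, tau = 0.001, 0.02        # sample period and filter time constant, s
alpha = dt / (tau + dt)      # discrete first-order filter coefficient
random.seed(1)

v_filt, prev_x = 0.0, 0.0
for k in range(1, 2001):
    t = k * dt
    x = math.sin(2 * math.pi * t) + random.gauss(0.0, 1e-4)  # noisy position
    v_raw = (x - prev_x) / dt           # raw finite difference (very noisy)
    v_filt += alpha * (v_raw - v_filt)  # low-pass filtered velocity estimate
    prev_x = x

# True velocity at t = 2 s is 2*pi*cos(4*pi) = 2*pi, about 6.28.
print(round(v_filt, 1))  # close to 2*pi, minus a little filter lag
```

The trade-off is the same one the analog circuit faces: a larger tau suppresses more noise but adds more phase lag to the velocity estimate.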

6.4.4 Rotary Transducers

Rotary transducers share many characteristics with the linear examples of the preceding section, and many of the same terms apply. There are, however, additional rotary transducers, many of which are digital and are thus discussed in Chapter 10. Rotary transducers may be designed to measure position and/or velocity, as the examples show. As before, this is only a brief introduction, and many more types are available.

6.4.4.1 Rotary Position Transducers/Sensors

Rotary potentiometers: Similar in design and function to linear potentiometers, these sensors have limited motion (up to 10 turns is common), are cheap and simple, and are readily available. The same resolutions and features apply to rotary and linear potentiometers.

Rotary resolvers: Inductive angle transducers that output a sinusoid varying in phase and amplitude when excited with a sinusoidal input. The coupling between the different windings changes as the device is rotated, thus changing the output signal. The signal repeats every revolution, so a counter is necessary to track absolute position. The output is nonlinear, and either phase or amplitude modulation may be used to process the signals. Resolutions of ±10 minutes of arc are possible.


Many of the available rotary position sensors have digital output signals; optical angle encoders, Hall effect sensors, and photodiodes are examples. With additional circuitry it is possible to convert some of them to compatible analog signals.

6.4.4.2 Rotary Velocity Transducers/Sensors

Magnetic pickup: Magnetic pickups are common, cheap, and easy to install. Any ferrous material that passes by the magnet will induce a voltage in the magnet's coil. Although the output is a sinusoidal wave varying in frequency and amplitude, it is easily converted to an analog voltage proportional to speed using an integrated circuit. The benefit of this signal is that the frequency is directly proportional to shaft speed and fairly immune to noise (at normal levels). Several frequency-to-voltage converters containing charge pumps and conditioning circuits are available in single-chip packages. If a direct readout is desired, any frequency meter can be used without additional circuitry (unless protective circuits are desired). The disadvantage is at low speed, where the signal becomes too small to measure accurately. With appropriate signal conditioning (Schmitt triggers, etc.), a magnetic pickup may also be used to provide a digital signal. These topics are covered in greater detail in Chapter 10. Optical encoders and other digital devices may likewise be used in conjunction with a frequency-to-voltage converter chip, but with the same low-speed limitations as the magnetic pickup.

DC tachometer/generator: Another common component used to measure rotary velocity is the DC tachometer. It is simply a direct-current generator whose output voltage is proportional to shaft speed. An advantage is that it does not require any additional circuitry or external power to operate; a simple voltage meter can be calibrated to rotary speed, and little or no signal conditioning is required. A disadvantage, however, is that it requires a contact surface, for example, a contact wheel or drive belt, which adds some friction to the system when installed.

6.4.5 Other Common Transducers

Accelerometers: Acceleration is easily measured using accelerometers, although additional circuitry is required to amplify their small signals. Accelerometers are usually rated by the allowable g's of acceleration/deceleration. Picking one of appropriate size is important, as a relatively large one will change the test itself with its added mass. Generally of the strain gage or piezoelectric variety, they measure the deflection of a small known mass undergoing the acceleration and must be rigidly mounted to the test specimen. It is possible to integrate the signal to obtain velocity, and even position, although errors will accumulate over time. A typical piezoelectric accelerometer is shown in Figure 12. Since piezoelectric materials generate an electrical charge when deformed, they are well suited for use in accelerometers, and due to their extremely high bandwidth they are finding their way into many new applications.

Hall effect transducers: Hall effect transducers are commonly used as proximity switches, in liquid level measurement and deflection sensing, and in place of magnetic pickups for better low-speed performance and signal clarity. Some flow

Analog Control System Components

Figure 12 Typical piezoelectric accelerometer construction.

measurement devices use the Hall effect to detect a turbine blade passing by. Hall effect devices have several advantages and disadvantages when compared to magnetic pickups. Whereas the magnetic pickup signal becomes very small at low speeds, Hall effect devices do not need a minimum speed to generate a signal; the presence of a magnetic field causes the output voltage to change. This allows them to be used as proximity sensors and displacement transducers (quite nonlinear), in addition to speed sensors. The disadvantages are that they require an external power source, a magnet on the moving piece, and signal conditioning.

Strain gages: Already mentioned for their use embedded in other transducers, strain gages are very common devices used to measure strain, which is then calibrated to acceleration, force, pressure, etc. The resistance of the strain gage changes by small amounts when the material is stretched or compressed; thus the output voltage is very small and an amplifier (bridge circuit) is required for normal use.

Temperature: Several common temperature transducers are bimetallic strips (toaster ovens), resistance temperature detectors (RTDs), thermistors, and thermocouples. The bimetallic strip simply bends when heated, due to dissimilar material expansion rates, and can be used in safety devices or temperature-dependent switches. RTDs use the fact that most metals increase in resistance when temperature is increased; they are stable devices but require signal amplification for normal use. Thermistors have a resistance that decreases with temperature; they are very rugged, small, and quick to respond, and they exhibit larger resistance changes than RTDs, but at the expense of being quite nonlinear. Thermocouples are very common and can be chosen according to letter codes. They produce a small voltage between two different metals based on the temperature difference.

6.5 ACTUATORS

Actuators are critical to system performance and must be carefully chosen to avoid saturation while maintaining response time and limiting cost. Many specific actuators are available in each field, and this section only serves to provide a quick overview of the common actuators used in a variety of systems. To emphasize an underlying theme of this entire text, we must remember that no matter what our controller output tells our system to do, unless we are physically capable of moving the system as commanded, all is for naught. The performance limits (physics) of the system are not going to be changed as a result of adding a controller. For this reason


Chapter 6

the importance of choosing the correct amplifiers and actuators cannot be overstated. It should also be noted that most actuators relate the generalized effort and flow variables, defined in Chapter 2, to the corresponding input and output. For example, cylinder force is proportional to pressure (the output and input efforts) and cylinder velocity is proportional to volumetric flow rate (the output and input flows). The same relationship holds for hydraulic motors. An exception occurs in solenoids and electric motors, where the force is proportional to the current (output effort relates to input flow).

6.5.1 Linear Actuators

Linear actuators can take many forms and are found almost everywhere. Hydraulic, pneumatic, electrical, and many mechanical forms that convert one motion to another (gear trains, cams, levers, etc.) can be found in a wide range of control system applications. The choice of a linear actuator depends largely on the power requirements (force and velocity) of the system to be controlled. When power requirements and stroke lengths are relatively small, electrical solenoids are the most commonly used devices, whereas cylinders (hydraulic or pneumatic) are more commonly found in high power applications. Many times a solenoid is used to actuate the valve that in turn controls the cylinder motion; multiple stages of amplification/actuation are required in many systems to go from very low power signals to the high forces and velocities required as the end result. Some linear actuators are the result of using a rotary primary actuator and a secondary mechanical device. Cams, rack and pinion systems, and four bar linkages are all examples found in many applications. For example, in a typical automobile, camshafts convert rotary motion to linear motion (to open and close the engine intake and exhaust valves), rack and pinion systems convert rotary steering wheel inputs into linear tie rod travel, and a four bar linkage allows the windshield wipers to travel back and forth. The rotary portion of these systems is covered in the next section.

6.5.1.1 Hydraulic/Pneumatic Cylinders

The two primary fluid power actuators, excluding valves, are hydraulic cylinders and hydraulic motors (discussed in the next section). Many devices are required to implement them, including the primary energy input device (electric motor or gas/diesel engine); a hydraulic pump to provide the pressure and flow required by the actuators; all hose, tubing, and connections; and, finally, safety devices such as relief valves. Directional control valves, as used in most systems, act more as an amplifier (or control element) than an actuator in a hydraulic control system. The advantage of hydraulics, once the supporting components are in place, is that relatively small actuators can transmit large amounts of power. Hydraulics also comprise a relatively stiff system capable of providing its own lubrication and heat removal. Chapter 12 presents additional information on designing and modeling electrohydraulic control systems. Linear actuators, or cylinders, are generally classified as single or double ended; include ratings for maximum pressure, stroke length, and side loads; and are sized according to desired forces and velocities. Single-ended cylinders exhibit different extension and retraction rates and forces due to unequal areas. The force


generated is simply equal to pressure multiplied by area. Although the basic force and flow rate equations for cylinders are very common, they are presented here for review. A basic cylinder can be described as in Figure 13. The bore and rod diameters are usually specified, allowing calculation of the respective areas. It is helpful to define several items:

Dia_bore = diameter of the cylinder bore
Dia_rod = diameter of the cylinder rod
A_BE = area of the bore (cap) end = (π/4) Dia_bore²
A_RE = area of the rod end = (π/4)(Dia_bore² − Dia_rod²)
P_BE = pressure acting on the bore end
P_RE = pressure acting on the rod end

The flow and force equations are desired in the final modeling since the corresponding valve characteristics correspond to them. The flow rates, Q, in and out of the cylinder are given by

Q_BE = v·A_BE + C_BE·(dP_BE/dt)
Q_RE = v·A_RE + C_RE·(dP_RE/dt)

If compressibility, C, is ignored or only steady-state characteristics are examined, the capacitance terms are zero and each flow rate is simply the area times the velocity, v, for its side of the cylinder. It is important to note that the two flows are not equal for single-ended cylinders, as shown above. In many cases where the compressibility flows are negligible, the flows are related through the ratio of the respective cross-sectional areas. (In pneumatic systems the compressibility cannot be ignored and constitutes a significant portion of the total flow.) The ratio is easily found by setting the two velocity terms equal, since both sides share the same piston and rod movement:

Q_BE = (A_BE/A_RE)·Q_RE

The cylinder force also plays an important role in system performance. In steadystate operation, only the kinematics of the linkage will change the required force since the acceleration on the cylinder is assumed to be zero. In many systems the acceleration phase is quite short as compared to the total length of travel. The

Figure 13 Typical hydraulic cylinder nomenclature.


steady-state assumption becomes less valid as performance requirements increase, due to the increase in dynamic acceleration requirements. The basic cylinder force equation can be given as

P_BE·A_BE − P_RE·A_RE − F_L = m·(dv/dt) = m·(d²x/dt²)
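The cylinder area, flow, and force relations above can be collected into a short numeric sketch. All numbers (2.0 in bore, 1.0 in rod, 1000 psi) are illustrative assumptions, not values from the text:

```python
# Sketch of the steady-state cylinder equations (compressibility ignored).
import math

def cylinder_areas(dia_bore, dia_rod):
    """Bore-end and rod-end areas from the bore and rod diameters."""
    a_be = math.pi / 4.0 * dia_bore**2
    a_re = math.pi / 4.0 * (dia_bore**2 - dia_rod**2)
    return a_be, a_re

def steady_state(dia_bore, dia_rod, v, p_be, p_re, f_load=0.0):
    """Steady-state flows and net force for a single-ended cylinder."""
    a_be, a_re = cylinder_areas(dia_bore, dia_rod)
    q_be = v * a_be                        # flow into the bore end
    q_re = v * a_re                        # flow out of the rod end (extension)
    f_net = p_be * a_be - p_re * a_re - f_load
    return q_be, q_re, f_net

a_be, a_re = cylinder_areas(2.0, 1.0)      # in^2
print(a_be / a_re)                         # flow ratio Q_BE/Q_RE = A_BE/A_RE
```

Note that the printed ratio is the unequal-area effect discussed above: the bore-end flow exceeds the rod-end flow by exactly A_BE/A_RE.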

As before, in steady-state operation the dynamic component is set to zero and the sum of the forces equals zero. In general, the above flow and force equations adequately describe cylinder performance even though leakage flows and friction forces are ignored. The leakage flows are usually quite small and, compared to the overall cylinder force, the friction force is also negligible. The exception is at startup, where there can be fairly large ‘‘stiction’’ forces in pumps, valves, and cylinders.

6.5.1.2 Electrical Solenoids

The electrical solenoid is probably the most common linear actuator in use. Solenoids are used to control throttle position in automobile cruise control systems, automatic door locks, gates on industrial lines, etc. Small movements may be amplified through mechanical linkages, but at the expense of force capability, since the power output remains the same. Some controller packages include the solenoid, as illustrated in Chapter 1. In operation, solenoid force is proportional to current: when current is passed through the solenoid, a magnetic field is produced which exerts a force on an iron plunger, causing linear movement. Proportionality is obtained by adding a spring in series with the plunger such that the movement is proportional to the current applied, neglecting load effects. This leads to a design compromise: we want a stiff spring for good linearity but a soft spring for lower power and size requirements. A typical solenoid is shown in Figure 14. To use lighter springs and still achieve good linearity, we sometimes close the loop on solenoid position, achieving improved results through a nested inner feedback loop. This method is used in several hydraulic proportional valves. Piezoelectric materials: Another method finding acceptance is the use of piezoelectric materials. We mentioned previously that they produce an electrical signal when deformed. The reverse is also true: when a voltage is applied, the

Figure 14 Typical solenoid construction.


material will deform. Very small motions limit their usefulness, but they may operate at frequencies in the MHz range.

6.5.2 Rotary Actuators

There is much more diversity in rotary actuators, and as mentioned in the preceding section, some rotary actuators act as linear actuators through cams and/or pulleys. The most common actuators include hydraulic/pneumatic motors, AC and DC electric motors, servomotors, and stepper motors (covered more in Chap. 10).

6.5.2.1 Hydraulic/Pneumatic Motors

Hydraulic motors share many characteristics with hydraulic pumps and in some cases may operate as both. Common types of hydraulic motors include axial piston (bent-axis and swash-plate), vane, and gear (internal and external) motors. Other than the gear motors, the units listed can have variable displacement, which, if used, allows even more control of the system. The basic equations typically used in designing control systems are straightforward. Using the theoretical motor displacement, D_M (regardless of motor type), to calculate the output flow rate and necessary input torque produces the following equations:

Theoretical flow rate: Q_Ideal = D_M·N

Theoretical torque: T_Ideal = D_M·P

A constant is generally necessary depending on the units being used. For example, if Q is in gpm, D_M in in³/rev, N in rpm, T in ft-lb, and P in psi, then

Q_Ideal = D_M·N/231

T_Ideal = D_M·P/(24π)

Hydraulic power is another useful quantity in describing motor characteristics. Using W for power, to avoid confusion with pressure, leads to the following equations. Since the ideal motor is without losses,

W_In = W_Out

The hydraulic input power is W_In = P·Q, and the output power is mechanical, W_Out = T·N (with an appropriate unit constant). Unfortunately, losses do occur, and it is convenient to model the resulting efficiencies in two basic forms, volumetric and mechanical. While still remaining a simple model, the following efficiencies are defined:

Mechanical efficiency: η_m = Torque_Actual / Torque_Theoretical

Volumetric efficiency: η_v = Flow_Theoretical / Flow_Actual

Overall efficiency: η_o = η_m · η_v


Summarizing, the ideal hydraulic motor acts as if the output speed is proportional to flow and the torque proportional to pressure. In reality, more flow is required and less torque is obtained than the equations predict, which is simply modeled using the mechanical and volumetric efficiencies. In an ideal motor the power in equals the power out; in an actual system with losses, the product of the mechanical and volumetric efficiencies gives the ratio of power out to power in for the motor, since the following is true:

η_o = η_m·η_v = [T/(D_M·P)]·[D_M·N/Q] = T·N/(P·Q) = W_Out/W_In
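The unit-specific motor equations and efficiency model above can be sketched numerically. The displacement, speed, pressure, and efficiency values below are illustrative assumptions only:

```python
# Sketch of the US-unit hydraulic motor equations:
# D_M in in^3/rev, N in rpm, P in psi, Q in gpm, T in ft-lb.
import math

def ideal_flow_gpm(d_m, n_rpm):
    return d_m * n_rpm / 231.0             # 231 in^3 per gallon

def ideal_torque_ftlb(d_m, p_psi):
    return d_m * p_psi / (24.0 * math.pi)  # 2*pi per rev, 12 in per ft

def actual(d_m, n_rpm, p_psi, eta_m=0.92, eta_v=0.95):
    """Apply mechanical and volumetric efficiencies to the ideal values."""
    q_actual = ideal_flow_gpm(d_m, n_rpm) / eta_v     # more flow is required
    t_actual = ideal_torque_ftlb(d_m, p_psi) * eta_m  # less torque is obtained
    return q_actual, t_actual

q, t = actual(d_m=4.0, n_rpm=1800.0, p_psi=2000.0)
print(q, t)   # overall efficiency is eta_m * eta_v
```

As the efficiency chain above shows, the product 0.92 × 0.95 ≈ 0.87 is also the ratio of output power to input power.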

Many systems use hydraulic motors as actuators; conveyor belt drives and hydrostatic transmissions are examples found in a variety of applications. Advantages include good controllability, reliability, the heat removal and lubrication inherent in the fluid, and the ability to stall the system without damage.

6.5.2.2 DC Motors

DC motors are constructed using both permanent magnets and electromagnets, the latter further classified as series, combination, or shunt wound. In the typical DC motor, coils of wire are mounted on the armature, which rotates in the magnetic field (whether from a permanent magnet or an electromagnet). To achieve continuous rotation and minimize torque peaks, multiple poles are used, and a commutator reverses the current in sequential coils as the motor rotates. The downside of such an arrangement is that the sliding contacts are prone to fail over time and the brushes must be replaced at regular intervals. A typical DC motor is shown in Figure 15. The field poles may be generated using either permanent magnets or electromagnets. Permanent magnet motors do not require a separate voltage source for the field, resulting in higher efficiency and less heat generation. Motors that use separate windings to generate the magnetic field (electromagnets) provide more constant field excitation levels, allowing smoother control of motor speed. In both cases the torque is generally proportional to the input current (and the magnetic flux, which is commonly assumed to be relatively constant over the desired operating range), and the back electromotive force (emf, or voltage drop) is proportional to shaft speed:

Torque: T = K_t·I

Voltage: V − R·I = K_v·ω
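These two steady-state relations (symbols are defined in the paragraph that follows) lend themselves to a quick numeric sketch. The motor constants and supply values below are illustrative assumptions, not data from the text:

```python
# Sketch of the steady-state brushed DC motor relations:
# T = Kt*I and V - R*I = Kv*w (w in rad/s).

def speed(v, torque, kt, kv, r):
    """Shaft speed (rad/s) for a given load torque and supply voltage."""
    i = torque / kt              # current needed to produce this torque
    return (v - r * i) / kv      # back-emf balance gives the speed

def no_load_speed(v, kv):
    return v / kv                # T = 0 -> I = 0 -> w = V/Kv

def stall_torque(v, kt, r):
    return kt * v / r            # w = 0 -> I = V/R -> T = Kt*V/R

print(no_load_speed(24.0, 0.05))        # 24 V supply, Kv = 0.05 V-s/rad
print(stall_torque(24.0, 0.05, 1.2))    # Kt = 0.05 N-m/A, R = 1.2 ohm
```

The two endpoints (no-load speed and stall torque) define the familiar linear torque-speed line of a permanent magnet DC motor.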

Figure 15 Typical DC motor construction (w/brushes).


T is the output torque, I is the input current, V is the voltage drop across the motor, R is the resistance of the windings, and ω is the output shaft speed. The constants K_t and K_v are commonly referred to as the torque and voltage constants, respectively. When using electromagnets to generate the field, we have several options for wiring the armature and field, commonly termed shunt or series wound electric motors. To compromise between the properties of both types, we may also use combinations of shunt and series windings, giving performance curves as shown in Figure 16. Shunt motors, which have the armature and field coils connected in parallel, are more widely used because of their lower no-load speeds and good speed regulation regardless of load; their primary disadvantage is a lower startup torque compared to series wound motors. Series wound motors have a higher starting torque, due to having the armature and field coils in series, but will decrease in speed as the load is increased, which may actually be helpful in some applications. Combination wound motors, with some pairs of armature and field coils in series and some in parallel, attempt to achieve both good startup torque and decent speed regulation. The speed of DC motors can be controlled by changing the armature current (more common) or the field current. Armature current control provides good speed regulation but requires that the full input power to the armature be controlled. One method of interfacing with digital components is using pulse-width modulation to control the current (see Sec. 10.9). Brushless DC motors: To avoid the problems of sliding contacts and brushes that wear out, DC motors were developed without brushes (hence brushless motors). Although the initial expense is generally greater, such motors are more reliable and require less maintenance.
The primary difference in construction is that permanent magnets are used on the rotor, which thus requires no external electrical connections. The outside stationary portion, or stator, consists of coils that are energized sequentially to cause the rotor (with its permanent magnets) to spin on its axis. The current must still be switched in the stator coils, which is generally accomplished with solid-state switches or transistors. Servomotors: Servomotors are variations of DC motors optimized for high torque at low speeds, usually by reducing their diameter and increasing their length. They are sometimes converted to linear motion through the use of rack and pinion gears, etc.

Figure 16 DC motor torque-speed characteristics for different winding connections.


Stepper motors: Stepper motors are covered in more detail in the digital section (see Sec. 10.8), since special digital signals and circuit components are required to use them. They are readily available and can be operated open loop if the forces are small enough to always allow a step to occur. The output is discrete, with the resolution depending on the type chosen.

6.5.2.3 AC Motors

AC motors have a significant advantage over DC motors since the AC current provides the required field reversal for rotation. This allows them to be cheaper, more reliable, and nearly maintenance free. The primary disadvantage is that the line frequency fixes the operating speed unless additional (expensive) circuitry is added. Generally classified as one of two major types, single phase or multiphase, they are further classified within each category as induction or synchronous types. Induction AC motors have windings on the rotor but no external connections; when a magnetic field is produced in the stator, it induces current in the rotor. Synchronous AC motors use permanent magnets in the rotor, and the rotor follows the magnetic field produced in the stator. Single-phase induction motors do not require external connections to the rotor, and the AC current is used to automatically reverse the current in the stator windings. Because it is single phase, such a motor is not always self-starting, and a variety of methods is used to initially begin the rotation. Once started, the motor rotates at a velocity determined by the frequency of the AC signal. There is, however, always some slip present, and the motor actually spins 1–3% slower than the synchronous speed. A three-phase induction motor is similar to the single-phase type except that the stator has three windings, each 120 degrees apart. The motor becomes self-starting since there is never a part of the rotation where the net torque is zero (as in single-phase motors). Another advantage is that the torque becomes much smoother, similar in concept to adding more cylinders to an internal combustion (IC) engine. A primary problem of induction motors is that they require a vector drive to operate as servomotors, and additional cooling and calibration are required for satisfactory performance. Synchronous motors have a very controlled speed but are not self-starting and need special circuits to start them.
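The 1–3% slip figure above can be checked against the standard synchronous-speed relation N_sync = 120·f/p (f in Hz, p = number of poles). This relation is common background knowledge, not stated explicitly in the text, and the pole count and measured speed below are illustrative:

```python
# Synchronous speed and slip for an AC induction motor.

def synchronous_speed_rpm(freq_hz, poles):
    """Standard relation: N_sync = 120*f/p, in rpm."""
    return 120.0 * freq_hz / poles

def slip(sync_rpm, actual_rpm):
    """Fractional slip between synchronous and actual speed."""
    return (sync_rpm - actual_rpm) / sync_rpm

n_sync = synchronous_speed_rpm(60.0, 4)   # 4-pole, 60 Hz -> 1800 rpm
print(slip(n_sync, 1750.0))               # a measured 1750 rpm gives ~2.8% slip
```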
Multiphase motors are usually chosen over single phase when the power requirements are high. To achieve good speed regulation in AC motors, we must add special electronics, since the motor speed is related to the frequency of the AC signal. Whereas we control the speed of DC motors by adjusting the voltage (current), in AC motors we must adjust the frequency of the input. A common method is to convert the AC power input to DC and then use a DC-to-AC converter to output a variable-frequency AC signal. Very good speed regulation is achieved with this method (sometimes by closing the loop on speed), and although prices continue to fall, it remains the more expensive approach.

6.6 AMPLIFIERS

Two main types of amplifiers are discussed in this section: signal amplifiers and power amplifiers. Signal amplifiers, such as an OpAmp, are designed to amplify


the signal (i.e., voltage) level but not the power. Power amplifiers, on the other hand, may or may not increase the signal level but are expected to significantly increase the power level. Thus many control systems contain both signal and power amplifiers, sometimes connected in series, to accomplish the task of controlling the system. Each type encounters unique problems, with signal amplifiers generally being susceptible to noise and power amplifiers to heat generation (and thus required dissipation). This section introduces several common methods found in typical control systems.

6.6.1 Signal Amplifiers

Whenever amplification of a voltage signal is desired, the component of choice is almost always the versatile OpAmp, or operational amplifier. Modern solid-state OpAmps are cheap and efficient, are capable of gains of 100,000 or larger with bandwidths in the MHz range, offer a wide range of input/output signal level options, and have input impedances in the MΩ range. A typical OpAmp requires five connections, as shown in Figure 17, and many power supplies are designed to power OpAmps. Specialty OpAmps have been developed with different input/output voltage ranges, larger power capabilities, single-side operation (where V− is not available), and narrow rails. The term rail is commonly used to describe the maximum output of an OpAmp given the excitation voltage range. For example, if a ±15 V power supply is used and an OpAmp has a 1 V rail, the maximum output possible is ±14 V. There are two primary uses of OpAmps when implementing control systems. One has already been discussed earlier in the chapter (see Sec. 6.3.1), where OpAmps are used to construct PID and phase-lag/lead controllers. A second common use, discussed here, is signal conditioning. Most sensors produce very small output signals that must be significantly amplified before they can be used throughout the control system; even if the signal is ultimately converted to a digital representation, it must first be amplified. Two basic OpAmp circuits are discussed here. Many different circuits have been developed using OpAmps, but the basic functions are well represented by these two, and many of the advanced circuits are adaptations of them. The controller circuits from Section 6.3.1 also use these circuits as common building blocks. The most common building block is the inverting amplifier, shown in Figure 18.

Figure 17 Typical operational amplifier connections.


Figure 18 Inverting OpAmp circuit.

Assuming a high input impedance and no current flow into the OpAmp leads to a gain for the circuit of

Inverting gain: V_out/V_in = −R_2/R_1

Remember that the output voltage is limited, so only valid input ranges will exhibit the desired gain. If a noninverting amplifier is required, we can use the circuit given in Figure 19. The gain of the noninverting OpAmp circuit is derived as

Noninverting gain: V_out/V_in = (R_1 + R_2)/R_1
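The two gain equations, together with the rail limit discussed earlier, can be sketched as follows. The resistor values and the ±14 V saturation level are illustrative assumptions:

```python
# Ideal inverting and noninverting OpAmp gains with simple rail clipping.

def inverting(v_in, r1, r2, v_sat=14.0):
    """Vout = -(R2/R1)*Vin, clipped at the supply rails."""
    v_out = -(r2 / r1) * v_in
    return max(-v_sat, min(v_sat, v_out))

def noninverting(v_in, r1, r2, v_sat=14.0):
    """Vout = ((R1+R2)/R1)*Vin, clipped at the supply rails."""
    v_out = (r1 + r2) / r1 * v_in
    return max(-v_sat, min(v_sat, v_out))

print(inverting(0.5, 1e3, 10e3))      # gain of -10 -> -5 V
print(noninverting(0.5, 1e3, 10e3))   # gain of 11 -> 5.5 V
print(inverting(2.0, 1e3, 10e3))      # would be -20 V, clipped to -14 V
```

The last line illustrates the earlier point that only inputs within the valid range exhibit the desired gain; larger inputs saturate at the rail.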

As mentioned, many additional functions are derived from these basic circuits. An example list is given here:

- Integrating amplifier: replace R_2 with a capacitor
- Differentiating amplifier: replace R_1 with a capacitor
- Summing amplifier: replace R_1 with a separate resistor in parallel for each input

In addition to the summing junction (error detector) from Table 1, one other circuit should be mentioned relative to constructing an analog controller: the comparator. A comparator simply has its two inputs each connected to a signal, with no feedback or resistors in the circuit. Such an arrangement saturates high if the positive input is greater than the negative input, and saturates low if reversed. This allows the OpAmp to be used as a simple on-off controller, similar in

Figure 19 Noninverting OpAmp circuit.


concept to a furnace thermostat. An example of such a controller is presented in the case study found in Section 12.7. Finally, it is quite common to incorporate filters, protection devices, and compensation devices in the amplification stage. Filters may be active or passive, with active filters found on integrated circuits or designed using components like OpAmps; passive filters attenuate the signal somewhat even at frequencies where attenuation is not desired. Common protection devices include fuses and diodes. Fuses may be connected in series with sensitive components, whereas diodes (such as Zener diodes) may be connected in parallel to protect from excessive voltage. Optoisolators are commonly used for digital signals since they completely eliminate the electrical connection between the input and output terminals.

6.6.2 Power Amplifiers

Most power amplifiers fall into the electrical category and are found to some degree in almost every system. Most systems initially convert electrical energy into another form, the exception being mobile equipment, which begins with chemical energy (fuel) converted into heat and ultimately mechanical energy by an engine. Only electrical systems are discussed here, as engines are beyond the scope of this text, and even engine-driven systems convert some of their output into electrical energy to drive solenoids and other control system actuators. An example where the primary power amplification is electrical is the common hydraulic motion control system. The primary power is initially electric and is converted via an electric motor into mechanical power. A hydraulic pump is simply a transformer, not a power amplifier, converting mechanical power into hydraulic power; in fact, beyond the power amplification stage, each conversion process yields less power due to the losses inherent in every system. The second location of power amplification in this motion control example is taking the output of the controller, whether microprocessor or OpAmp based, and causing a change in the system output. In most cases the power levels are amplified electrically to levels that allow the controller to change the size of the control valve orifice (linear solenoid activation). In both areas, then, the power amplification takes place in the electrical domain. Electrical power amplifiers can be divided into two basic categories, discrete and continuous. Discrete amplifiers are much easier to obtain and install; a common example is the electromechanical or solid-state relay. Relays are capable of taking a small input signal and providing large amounts of power to the system. The disadvantage is the discrete output, resulting in the actuator being either on or off. If the system is a heating furnace, this type of signal works well and the problem is nearly solved.
If we want a linearly varying signal, however, the task becomes more difficult. Some methods use discrete outputs switched fast enough to approximate an analog signal to the system (i.e., switching power supplies and PWM techniques), as covered in Section 10.9. To achieve a continuously variable power level output, we generally use transistors. Transistors have revolutionized our expectations of electronics in terms of size and performance since replacing their predecessor, the vacuum tube amplifier; indeed, much of the current transistor terminology is traced to the original vacuum tube terminology. Transistors have many advantages in that they are resistant to shock and


vibration, fairly efficient, small and light, and economical. Their primary disadvantage is sensitivity to temperature, which is the primary reason for using switching techniques like PWM: switching significantly reduces the heat generated in the transistors. When a transistor is used as a linear amplifier, it must be designed to dissipate much greater internal heat, primarily because it must drop much larger voltages and currents internally than when operated as a solid-state switch. The design of practical linear amplifiers is beyond the scope of this text, and many references address this topic more fully. The design of solid-state switches is given in Section 10.9.

6.6.3 Signal-to-Noise Ratio

In most systems the effects of noise should be considered during the design stage; it is much easier to design properly at the beginning than to apply one fix after another during the testing and production stages. Noise can arise from a variety of sources, some more preventable than others. The majority of this section deals with electrical noise stemming from the components and the surrounding environment.

6.6.3.1 Location of Amplifiers

In most applications we prefer to amplify the signal to ‘‘usable’’ levels as quickly as possible. The advantage is that we minimize the effects of external noise by transmitting a signal with a larger magnitude (assuming all else remains the same). Noise with an average amplitude of 2 mV added to a 7 V signal is relatively negligible; that same noise, added to a signal of only several mV, becomes very problematic. Thus, the fewer lines that are run with very small signal levels, especially in the presence of external electrical noise, the better our signal-to-noise ratio will be. Remember that the controller acts on measured error; if noise contributes to the signal, the controller output will also reflect (and usually amplify) the noise, feeding it back into our system. This is especially true when implementing derivative compensators in our controller. Since the goal of the amplifier is to amplify only the desired output and not the noise, we should not only locate it near the sensor output but also carefully shield the amplification circuit components. Using shielded wires and component boxes can make a significant difference in the quality of our signal. There are situations where we can compensate the signal to improve our signal-to-noise ratio, allowing longer runs with low signal levels and aiding in removing unwanted physical effects (i.e., the temperature of the system). Common systems that include temperature compensation are thermocouples and strain gages. Since the properties of these sensors vary significantly with temperature, we typically compensate them using a modified Wheatstone bridge amplifier. In the case of the strain gage, we simply place a ‘‘dummy’’ gage alongside the active gage and compare the change between the two. Since the dummy gage experiences the same temperature effects, the difference between the two readings should be due to the actual strain experienced.
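The dummy-gage idea can be illustrated with a simple half-bridge model. The topology (active and dummy gages in one divider against a fixed R/R divider) and all component values are illustrative assumptions; both gages see the same temperature-induced resistance change, so to first order only the strain term appears at the output:

```python
# Illustrative Wheatstone half-bridge with a "dummy" compensation gage.

def bridge_output(v_ex, r, dr_strain, dr_temp):
    """Bridge output voltage; dr_temp is common to both gages."""
    r_active = r + dr_strain + dr_temp   # strained (active) gage
    r_dummy = r + dr_temp                # unstrained (dummy) gage
    # Active/dummy divider compared against a fixed R/R divider:
    return v_ex * (r_active / (r_active + r_dummy) - 0.5)

v1 = bridge_output(10.0, 350.0, 0.7, 0.0)   # strain only
v2 = bridge_output(10.0, 350.0, 0.7, 5.0)   # strain plus a large temperature drift
print(v1, v2)  # nearly equal: the temperature effect cancels
```

With no strain at all, the output is exactly zero regardless of the temperature drift, which is the compensation effect described above.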
In a similar fashion, we can remove the effects of temperature-induced resistance changes in the signal wires by placing a third lead between the sensor and amplifier, allowing us to amplify only the change in signal output, not changes caused by varying wire temperature. With compensation, then, we can run longer wire lengths

Analog Control System Components

307

and still maintain good signal-to-noise ratios. In general, devices that benefit from compensation techniques will already include it when we purchase them for use in control systems.

6.6.3.2  Filtering

Filtering is commonly added at various locations in a control system to remove unwanted frequency components of our signal. Filters may be designed into the amplifier or added by us as we design the system. Three common types of filters, shown in Figure 20, are low pass, high pass, and band pass. Low-pass filters are designed to allow only the low frequency components of the signal to pass through; any higher frequency components are attenuated. High-pass filters allow only the high frequency components of the signal through, and band-pass filters allow only a specified range of frequencies through. When designing a filter we can apply the terminology learned with Bode plots to design and describe the performance. From our Bode plot discussions we recall that a first-order denominator attenuates high frequencies at a rate of 20 dB/decade. In filter terminology it is common to refer to the number of poles that the filter has; thus, a four-pole filter attenuates at a rate of 80 dB/decade. Even with higher pole counts we do not achieve instantaneous attenuation of signals. It is interesting to note that the filters illustrated in Figure 20 look similar to several of the compensators designed earlier. Designing basic filters using Bode plots is quite simple. For example, if we connect a resistor in series and a capacitor in parallel with our signal, we have just added a single-pole passive filter to the system. The analysis is identical to the techniques learned earlier, where we found a time constant of τ = RC and a transfer function with one pole in the denominator. The Bode plot then has a low frequency horizontal asymptote, a break frequency at 1/τ, and a high frequency attenuation slope of 20 dB/decade. Comparing this to Figure 20, we see that it is a simple low-pass filter. To achieve sharper cut-off rates we would add more poles to the filter.
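The single-pole RC filter just described is easy to check numerically. The sketch below computes the break frequency and magnitude response; the component values (R = 1.6 kΩ, C = 0.1 μF, giving roughly a 1 kHz break frequency) are assumed for illustration, not taken from the text.

```python
# Magnitude response of a single-pole RC low-pass filter,
# |G(jw)| = 1 / sqrt(1 + (w*tau)^2) with tau = R*C.
# Component values are assumed, chosen for a ~1 kHz break frequency.
import math

R = 1.6e3                          # ohms (assumed)
C = 0.1e-6                         # farads (assumed)
tau = R * C                        # time constant, seconds
f_break = 1 / (2 * math.pi * tau)  # break frequency in Hz

def gain_db(f_hz):
    """Filter gain in dB at frequency f_hz."""
    w = 2 * math.pi * f_hz
    return 20 * math.log10(1 / math.sqrt(1 + (w * tau) ** 2))

print(f"break frequency: {f_break:.0f} Hz")
for f in (f_break / 10, f_break, 10 * f_break, 100 * f_break):
    print(f"{f:8.0f} Hz -> {gain_db(f):6.1f} dB")
```

The printout shows the expected -3 dB point at the break frequency and the 20 dB/decade roll-off above it.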
A band-pass filter can be designed following the same procedures, except that we add a first-order term (zero) in the numerator followed by two first-order terms in the denominator defining the cut-off frequencies. Along with the performance descriptions above, we further distinguish filters as active or passive. The simple RC filter is a passive filter, since it requires no external power and draws its power from the signal. This has the disadvantage of sometimes changing the actual signal, especially as we implement passive filters with more poles. To overcome this and to build filters with high input impedances (thus drawing no

Figure 20  Descriptions of common filters.


current from the signal), we use active filters. Active filters require a separate power source but allow for greater performance. OpAmps are commonly used and provide the high input impedance desired in filters. Also available are IC chips with active filters built into the integrated circuit.

6.6.3.3  Advantages of Current-Driven Signals

Most transducers and many controllers now have options allowing us to use current signals instead of voltage signals. This section quickly discusses some of the advantages of using current signals and how to interface them with standard components expecting voltage inputs. The primary advantage is easily illustrated using the effort and flow modeling analogies from Chapter 2. Voltage is our electrical effort variable and current is our electrical flow variable. Using the analogy of a garden hose, we know that if we have a fixed flow rate entering at one end, then the same flow will exit at the other end, regardless of what pressure drops occur along the length of the hose (assuming no leakage or compressibility). Thus, our flow is not affected by imposed disturbances (effort noises) acting on the system. In the same way, even if external noise is added to our electrical current signal as voltage spikes, the current signal remains constant, even though its voltage level picks up the noise. The advantage becomes even more pronounced as we require longer wire runs through electrically noisy locations. Although it is possible to induce currents in our signal wires (a magnet moving past a coil of wire), it is much more likely that noise appears as a voltage change. Thus, our primary concern in using current signals is that our transducer (or whatever is driving our current signal) is capable of producing a constant, well-regulated current signal in the presence of changing load impedances. Even if our signal target requires a voltage (e.g., an existing A/D converter chip), we can still take advantage of the noise immunity of current signals by transferring our signal as a current and converting it to a voltage at the voltage input itself. This is easily accomplished by dropping the current over a high precision resistor placed across the voltage input terminals, as shown in Figure 21.
Only two wires are needed to implement the transducer, and if desired a common ground can be used. Most transducers will specify the allowable resistance (impedance) range over which they can regulate the current output. Recognize that with a current signal we no longer get negative voltage signals and, in fact, never reach zero voltage. The voltage measurement range is found by taking the lowest and highest current outputs (usually 4–20 mA) and multiplying them by the resistance value.
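As a worked example of this conversion, the sketch below drops a 4–20 mA loop current across a precision sense resistor; the 250 Ω value is a common choice but is assumed here rather than taken from the text.

```python
# Converting a 4-20 mA current-loop signal to a voltage by dropping it
# across a precision resistor at the voltage input terminals.
# The 250-ohm resistor value is a common choice but assumed here.
R_sense = 250.0                      # ohms, precision sense resistor

def loop_to_volts(i_ma):
    """Voltage across the sense resistor for a loop current in mA."""
    return (i_ma / 1000.0) * R_sense

v_low = loop_to_volts(4.0)           # lowest current -> lowest voltage
v_high = loop_to_volts(20.0)         # highest current -> highest voltage
print(f"measurement range: {v_low:.1f} V to {v_high:.1f} V")
# Note the live zero: the signal never reaches 0 V, so a reading of
# 0 V unambiguously indicates a broken loop.
```

With 250 Ω the measurement range is 1–5 V, which suits many existing A/D converter inputs.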

Figure 21  Converting a current signal to a voltage signal.

6.7  PROBLEMS

6.1 Briefly describe the role of a typical amplifier in a control system.
6.2 Briefly describe the role of a typical actuator in a control system.
6.3 An actuator must be able to . . . (finish the statement).
6.4 What is the advantage of using an approximate derivative compensator?
6.5 List several possible sources of electrical noise affecting control system signals.
6.6 Describe an advantage and a disadvantage of mechanical feedback control systems.
6.7 What is the importance of the transducer in a typical control system?
6.8 List two desirable characteristics of transducers and briefly describe each one.
6.9 What are three important pressure ratings for pressure transducers?
6.10 List three types of transducers that may use a strain gage as the sensor.
6.11 Liquid flow meters are analogous to _____ meters in electrical systems.
6.12 What are two types of noncontact linear position transducers?
6.13 Why is a velocity transducer preferable to manipulating the position feedback signal to obtain a velocity signal?
6.14 List one advantage and one disadvantage of the common magnetic pickup for measuring angular velocity.
6.15 Hydraulic cylinders might be the linear actuator of choice when what characteristics are needed in an actuator?
6.16 Locate an electrical solenoid in a product that you currently use and describe its function in the system.
6.17 Name two methods of controlling the speed of a DC motor.
6.18 What advantages do brushless DC motors have over conventional DC motors?
6.19 All AC motors are self-starting. True or False?
6.20 What are the advantages and disadvantages of AC motors as compared with DC motors?
6.21 What are two major types of amplifiers?
6.22 Why is high input impedance desirable for an amplifier?
6.23 Name the common electrical component used in electrical power linear amplifiers.
6.24 Why is the signal-to-noise ratio an important consideration during the design of a control system?
6.25 Passive filters require a separate power source. True or False?
6.26 Under what conditions will current signals perform much better than voltage signals?
6.27 Construct a speed control system for the system in Figure 22. The system to be controlled is a conveyor belt carrying boxes from the filling station to the taping station. It must run at a constant speed regardless of the number and weight of boxes placed on it. Build the model in block diagram form, where each block represents a simple physical component. Details of each block are not required, just what component you are using (e.g., a block that requires an actuator might use a DC motor or a solenoid) and where. In addition, attach consistent units to each line connecting the blocks. Note: Label each block and line clearly. Include all required components; for example, certain components, such as some transducers, require power supplies. Number each block and define: category (transducer, actuator, or amplifier), type (LVDT, OpAmp, etc.), inputs, outputs, and additional support components (power supplies, converters, etc.).

Figure 22  Problem: conveyor belt speed control.

6.28 Design a closed loop position control system for the system in Figure 23. The system to be controlled is a single-axis welding robot. A high force position actuator is required to move the heavy robot arm. Build the model in block diagram form, where each block represents a simple physical component. Attach consistent units to each line. Note: Label each block and line clearly. Include all required components; for example, certain components, such as some transducers, require power supplies. Number each block and define: category (transducer, actuator, or amplifier), type (LVDT, OpAmp, etc.), inputs, outputs, and additional support components (power supplies, converters, etc.).

Figure 23  Single-axis motion control system.

7 Digital Control Systems

7.1  OBJECTIVES

- Introduce the common configurations of digital control systems.
- Compare analog and digital controllers.
- Review digital control theory and its relationship to continuous systems.
- Examine the effects and models of sampling.
- Develop the skills to design digital controllers.

7.2  INTRODUCTION

It seems that every few years advances in computer processing power make past gains seem minor. As a result of this ‘‘cheap’’ processing power available to engineers designing control systems, advanced controller algorithms have grown tremendously. The space shuttle, military jets, and general airline transport planes, along with the common automobile, have all benefited from these advancements. Modern controllers are doing things once thought impossible; the modern military fighter jet, for example, would be impossible to fly without the help of the onboard electronic control system. In this chapter we begin to develop the skills necessary for designing and implementing advanced controllers. Since virtually all controllers at this level are implemented using digital microprocessors, we spend some time developing the models, typical configurations, and tools for analysis. When we compare analog and digital controllers, we notice two big differences: digital devices have limited knowledge of the system (data only at each sample time) and limited resolution when measuring changes in analog signals. There are many advantages, however, that tend to tip the scales in favor of digital controllers. An infinite number of designs, advanced (adaptive and learning) controller algorithms, better noise rejection with some digital signals, communication between controllers, and cheaper controllers are now all feasible options. To simulate and design digital controllers, we introduce a new transform, the z transform, allowing us to include the effects of digitizing and sampling our signals.

7.2.1  Examples and Motivation

Digital computers allow us to design complex systems with reduced cost, more flexibility, and better noise immunity when compared to analog controllers. Adaptive, nonlinear, multivariable, and other advanced controllers can be implemented using common programmable microcontrollers. The ability to program many of these microcontrollers using common high level languages makes them accessible to those of us who do not wish to become experts in machine code and assembly language. This section seeks to lay the groundwork for the analysis of digital controllers in such a way that we can extend what we learned about continuous systems and apply it to our digital systems. This allows us to do stability analysis, estimate performance, and calculate required sample times before we build each system. Our quality of life in almost every area of activity is influenced by microprocessors and digital controllers. The modern automobile is a complex marriage of mechanical and electrical systems. Factories are seeing better quality control, increased production, and more flexibility in the assembly process, leading to increased customer satisfaction. Home appliances are smarter than ever, and the security of our country is more dependent on electronics now than at any other time in our history. Early warning detection systems; weapon guidance systems; computer-based design tools; and land, air, and sea vehicles all rely heavily on electronics. It is safe to say that skills in designing digital control systems will be a valuable asset in the years to come.

7.2.2  Common Components and Configurations

In Chapter 1 we looked at the various major components required for actually building and implementing control systems. Now let us quickly look at the additional digital components and the common ways these components are connected together. As we would expect, the overall configuration (controller → amplifier → actuator → system → transducer feedback) is very close to the analog system presented in Figure 1 of Chapter 1. A general digital control system configuration is shown in Figure 1, illustrating how the digital components might interface with the physical system. In examining the differences between the analog and digital control system components, we see that the computer replaces the error detector and controller and that new interfaces are required to allow analog signals to be understood and acted upon by the microprocessor. One advantage is that many inputs and outputs

Figure 1  General digital control system configuration.


can be handled by the computer and used to control several processes. As Figures 2 and 3 illustrate, computer-based controllers may be arranged in centralized or distributed control configurations. Combinations of these two are often used for control of large complex systems. In centralized schemes, the digital computer handles all of the inputs, processes all errors, and generates all of the outputs depending on those errors. This has some advantages: only one computer is needed, and because it monitors all signals, it is able to recognize and adapt to coupling between systems. Thus if one system changes, the computer might change its control algorithm for another system that is coupled with the first. Also, simply reprogramming one computer may change the dynamic characteristics of several systems. The disadvantages include dependence on one processor, limited performance with large systems (since the processor is being used to operate many controllers), and more difficult component controller upgrades. The distributed controller falls on the opposite end of the spectrum, where every subsystem has its own controller. Its advantages are that one specific controller is easy to upgrade, redundant systems are easier to include, and lower performance processors may be used. It may or may not cost more, depending on each system. Since simple, possibly analog, controllers can be used for some of the individual subsystems, both analog and digital controllers can coexist, and it is sometimes possible to save money. The primary computer is generally responsible for determining optimum operating points for each subsystem and sending the appropriate command signals to each controller. Depending on the stability of the individual controllers, the primary computer may or may not record and use the feedback from individual systems. For many complex systems the best alternative becomes a combination of centralized and distributed controllers.
If a subsystem has a well-developed and cost-effective solution, it is often better to offload that task from the primary controller to free it for others. If a complex or adaptive routine is required, such as dealing with coupling between systems, then the central computer might best serve those systems. In this way our systems can be optimized from both cost and performance perspectives. The acronym commonly used to describe these systems, SCADA, stands for Supervisory Control and Data Acquisition. A PC (or programmable logic controller with a processor) in this case provides the supervisory control of multiple distributed controllers through either half or full duplex communication modes. Half duplex means the supervisory controller initiates all requests and changes and the distributed

Figure 2  Centralized control with a digital computer.


Figure 3  Distributed control with a digital computer.

components respond but do not initiate contact. The advantage of these systems is that the link may be through wires, radio waves (even satellite), the Internet, etc. As we see in the next section, adding these capabilities and the input and output interfaces changes our model, and new techniques must be used. Fortunately, the new techniques can be understood in much the same way as the analog techniques, but with the addition of another variable, the sample time.

7.3  COMPARISON OF ANALOG AND DIGITAL CONTROLLERS

7.3.1  Characteristics and Limitations

Analog controllers are continuous processes with infinite resolution; when errors of any magnitude occur, the controller can produce an appropriate control action. Analog controllers, presented in the previous chapters, generally incorporate analog circuits (OpAmps) for electronic controllers or mechanical components for physical controllers. Digital controllers, in contrast, use microprocessors to perform the control action. Microprocessors require digital inputs and outputs to operate and thus require additional components to implement. Component costs have steadily decreased as technology improves, and digital controllers are becoming more prevalent in almost all applications. Since most physical signals are analog (i.e., pressure, temperature, etc.), they first must be converted to digital signals before the controller can use them. This involves a process called digitization, which introduces additional problems into the design process, as later sections will show. This same process must then be reversed at some stage to generate the appropriate physical control action. This conversion might occur right at the output of the microprocessor or not until the system responds to a discrete physical action (e.g., a stepper motor). Table 1 lists many of the advantages and disadvantages of digital controllers. At this point it should be clear why the movement toward digital controllers is so strong. Advanced controller algorithms like adaptive control, neural nets, fuzzy logic, and genetic algorithms have all become possible with the microprocessor. Today's systems are commonly combinations of centralized and distributed controllers working in harmony. Clearly the skills to analyze and design such systems are invaluable. A brief history will illustrate the growth of digital controllers. In the 1960s minicomputers became available and some of the first digital controllers were developed.
Processing times were on the order of 2 μsec for addition and 7 μsec for multiplication. Costs were still prohibitive, and only specialized applications could justify the expense of digital controllers. In the 1970s

Table 1  Advantages and Disadvantages of Digital Microprocessor-Based Controllers

Advantages:
- Controller algorithms can be changed in software.
- Control strategy can be changed in real time depending on situations encountered.
- Multiple processes can be controlled using one microprocessor.
- An infinite number of algorithms is possible.
- Once the signal is converted, noise and drift problems are minimized.
- Easy to add more functions, safety functions, digital readouts, etc.

Disadvantages:
- Requires more components.
- Digitizing analog signals results in limited resolution.
- Adding more functions might limit performance (sample time increases).
- Requires better design skills.
- Digital computers cannot integrate signals; integrals must be converted to products and sums.
- Components are inherently susceptible to damage in harsh environments.

the ‘‘modern’’ microcomputer became available. The early 1970s saw prices up to $100,000 for complete systems. By 1980 the price had fallen to $500, with quantity prices as low as $50 for small processors. The 1990s saw prices fall to only a few dollars per microprocessor. Complete systems are affordable to companies of all sizes and have allowed digital controllers to become the standard. Virtually all automobile, aviation, home appliance, and heavy equipment controllers are microprocessor based and digital. Programmable logic controllers (PLCs), introduced in the 1970s, have become commonplace, and prices continue to fall. More PC-based applications are also found as their prices have decreased significantly. A danger arises when we simply take existing analog control systems and implement them digitally without understanding the differences. Not only are we more likely to have problems and be unsatisfied with the results, but we also miss out on the new opportunities that become available once we switch to digital. This chapter, and the several that follow, attempts to connect what we have learned about analog systems with what we can expect when moving toward digital implementations.

7.3.2  Overview of Design Methods

Early digital controllers were designed from existing continuous system design techniques. As digital controllers have become common, more and more controllers are designed directly in the digital domain to take advantage of the additional features available. Since all physical systems operate in the continuous time domain, the skills developed in the first section are imperative to designing high performance digital controllers. It is dangerous to assume that everything wrong with a system can be fixed by adding the latest and greatest digital controller. In fact, it is often a lack of understanding of the physics of the real system that causes the most trouble. Proper design flows from a proper understanding of the physics involved. The image that comes to mind is trying to design a cruise control system for a truck using a lawnmower engine. Although we might laugh at this analogy, the point is made that our first priority is designing a capable physical system that incorporates the proper


components to achieve our goals. The design methods presented in this text are all based on this initial assumption. Now, in addition to proper physical system design, we need to account for the digital components. The problem arises in modeling the interface between the digital and analog systems and its dependence on sample time. As we will see, we might design a wonderful controller based on one sample time, only to have someone else also claim processor time, resulting in more processor tasks per sample and longer sampling periods. Consequently, we now have stability problems arising from the longer sample time even though our controller itself never changed. An example is automobile microprocessors, where new features, not initially planned on, are continually added until the microprocessor can no longer achieve the performance the initial designer counted on. This leads to two basic approaches for designing digital controllers: we can design a controller in the continuous domain and then convert it into an approximate digital controller, or we can begin immediately in the discrete domain and design our controller using tools developed for digital controllers. Both methods have several strengths and weaknesses, as discussed in the next section. Chapter 9 will present the actual methods and examples of each type.

7.3.2.1  Designing from Continuous System Methods

One common approach to designing digital controllers is to design the controller in the continuous domain as taught in the previous chapters and, once the controller is designed, use one of several transformations to convert it to a digital controller. For example, a common proportional-integral-derivative (PID) controller can be designed in the continuous domain and approximated using finite differences for the integration and derivative functions. Alternatively, the bilinear transformation (Tustin's method) or the impulse invariant method may be used to convert from the s-domain to the z-domain. The z-domain is introduced later as a digital alternative to the s-domain that allows us to use the skills we have already developed. Thus if you are familiar with controller design using classical techniques, with a little work you can design controllers in the digital domain. Finally, a simple technique of matching the poles and zeros of the continuous controller with equivalent poles and zeros in the z-domain may be used, aptly called the pole-zero matching method. There are several advantages to beginning with the continuous system: it is in fact how our physical system responds; there are many tools, textbooks, and examples to choose from; and most people feel more comfortable working with ‘‘real’’ components. As the next section shows, there are also several disadvantages to this method.
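As a small sketch of this conversion approach, the code below discretizes a continuous PI controller C(s) = Kp + Ki/s with the bilinear (Tustin) substitution s = (2/T)(z - 1)/(z + 1), under which the integral term reduces to trapezoidal accumulation. The gains and sample time are assumed values for illustration, not from the text.

```python
# Discretizing a continuous PI controller C(s) = Kp + Ki/s via the
# bilinear (Tustin) substitution s = (2/T)(z - 1)/(z + 1); for the
# integral term this is trapezoidal integration. Gains and sample
# time are assumed for illustration.
Kp, Ki = 2.0, 0.5        # proportional and integral gains (assumed)
T = 0.01                 # sample period in seconds (assumed)

class DiscretePI:
    def __init__(self, kp, ki, ts):
        self.kp, self.ki, self.ts = kp, ki, ts
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        # Tustin's method turns 1/s into trapezoidal accumulation.
        self.integral += 0.5 * self.ts * (error + self.prev_error)
        self.prev_error = error
        return self.kp * error + self.ki * self.integral

pi = DiscretePI(Kp, Ki, T)
# With a constant error of 1.0, the output is Kp plus a slow
# integral ramp, just as the continuous controller would produce.
outputs = [pi.update(1.0) for _ in range(3)]
print(outputs)
```

The same substitution applied to a derivative term would yield a backward-looking difference, completing a full digital PID.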

7.3.2.2  Direct Design of Digital Controllers

One primary advantage of designing controllers directly in the digital domain is that it allows us to take advantage of features unavailable in the continuous domain. Since digital controllers are not subject to physical constraints (with respect to developing the controller output, as compared to using OpAmps, linkages, etc.), new methods are available with unique performance results. Direct design allows us to actually choose a desired response (it still must be a feasible one that the physics of the system are capable of) and design a controller for that response. There are fewer limitations since the controller is not physically constructed with real components. It


will become clear, however, as we progress that there still are some limitations, most of them unique to digital controllers. Also building upon our continuous system design skills are root locus techniques that have been developed for digital systems. The concepts are the same except that we now work in the z-domain, an offshoot of the s-domain. Root locus techniques in the z-domain may be used to design directly for certain response characteristics, just as we learned for continuous systems. A primary difference is that now the sample time also plays a role in the type of response that we achieve. In addition, dead-beat design can be used to make the closed loop transfer function equal to an arbitrarily chosen value, since any algorithm is theoretically possible. Dead-beat designs settle to zero error after a specified number of sample times. Of course, as mentioned previously, physical limitations still apply to the actuators and system components, and among the costs of aggressive performance specifications are high power requirements and more costly components. Finally, Bode plot methods may be applied using the w transform.

7.4  ANALYSIS METHODS FOR DIGITAL SYSTEMS

The previous section illustrates several different controller configurations utilizing microprocessors. All of these systems share one common trait that differs from their analog counterparts: they must sample the data and are unable to track it between samples. It is impossible for the controller to know exactly what is happening between samples; it only knows the response of the system at each sample time. When implementing a digital controller, the normal procedure is to scan the inputs, process the data according to some control law, and update the outputs to the new values. It is obvious that the loop time, or sample time, will play a large part in determining system performance. On one hand, if the sample time is extremely fast relative to the system, the controller begins to approximate a continuous controller, since the system is unable to change much at all in the time between samples. If, however, the sample times are long compared to the system response, the digital controller will be unable to control the response, because each correction occurs after the system has already passed the targeted operating point, and the loop becomes unstable. An even more interesting case is that it is now possible for the digital computer to think it is correctly controlling the system while, unbeknownst to the computer, the system is actually oscillating between samples. This section will examine the effects that sampling has on the measurement and response of physical systems.
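The scan-process-update procedure can be sketched as a fixed-period loop. In the outline below, read_sensor, write_actuator, and control_law are hypothetical placeholders standing in for real A/D input, D/A output, and control-law routines.

```python
# Skeleton of the scan-process-update loop with a fixed sample period.
# read_sensor(), write_actuator(), and control_law() are hypothetical
# placeholders for real I/O (e.g., an A/D read and a D/A write).
import time

T = 0.02                        # sample period in seconds (assumed: 50 Hz)
setpoint = 1.0

def read_sensor():              # placeholder: would read the A/D converter
    return 0.0

def write_actuator(u):          # placeholder: would drive the D/A converter
    pass

def control_law(error):         # placeholder: e.g., a digital PID routine
    return 2.0 * error          # simple proportional action for illustration

def control_loop(n_samples):
    next_tick = time.monotonic()
    for _ in range(n_samples):
        y = read_sensor()               # 1. scan the inputs
        u = control_law(setpoint - y)   # 2. process per the control law
        write_actuator(u)               # 3. update the outputs
        next_tick += T                  # 4. wait out the sample period
        time.sleep(max(0.0, next_tick - time.monotonic()))

control_loop(5)                 # run five sample periods
```

Note that any extra work added inside the loop lengthens the effective sample period, which is exactly the stability hazard discussed in Section 7.3.2.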

7.4.1  Sampling Characteristics and Effects

As we saw, sampling is a common trait of all digital computers. When we sample a continuous signal, we end up with a sequential list of numbers whose values represent the analog signal at each individual sample time. The sample rate is measured in samples per second (Hz), and hence the sample period T equals the inverse of the sample frequency. If the sample rate is constant (common for most digital controllers), then the list of values will be equally spaced in time. Also, if we assume that the computer is infinitely fast for ‘‘each’’ sample, then each value represents one distinct moment in time. We can think of it as a switch that is momentarily closed each time a computer clock sends a pulse. This idea is illustrated in Figure 4. If the sample times are not constant, the vertical lines are no longer spaced evenly and modeling the sampling process becomes very difficult. In addition, it is possible in Figure 4 that the analog signal went below zero and returned to its normal amplitude between samples; our reconstructed signal is unable to follow this. Remember, in the switch analogy, that the switch is only momentarily closed and has no knowledge of the signal between samples. Therefore, our reconstructed signal might look very nice but may be completely wrong. This commonly occurs with oscillating signals, where the sampling process creates additional frequencies, called aliasing, as shown in Figure 5. Several methods are used to minimize the effects of aliasing. To avoid aliasing up front, we can apply the Nyquist frequency criterion. The Nyquist frequency is defined as one half of the sample frequency and represents the maximum frequency that can be sampled before additional lower frequencies are created. Only those frequencies greater than one half of our sample frequency create additional lower frequency (artificial) components. That said, higher frequencies, called side bands, are always created as a result of the sampling process. To reduce this problem, it is common to install antialiasing filters on the input to remove any frequencies greater than one half of the sample frequency, because once the signal is sampled it is impossible to separate the aliasing effects from the real data. A problem often arises that even though the highest frequency in our system might be 15 Hz, there might be noise signals at 60 Hz. Thus, if our sample rate is less than 120 Hz, we will experience aliasing effects from the noise components, even though our primary frequency is much lower.
The beat frequency, defined as the difference between one half the sample frequency and the highest signal frequency, might be very small (e.g., 0.1 Hz), which leads to aliasing effects that look like DC drift unless long enough records are plotted to reveal the very slow superimposed wave. The same effect is seen in movies, where an airplane propeller or the spokes of a tire seem to rotate much more slowly than the actual object does, or even in the reverse direction. Since the movie frames are updated at a regular interval (i.e., a sample time) while the object rotates at different speeds, the effects of aliasing are easily seen.
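The frequency-folding effect described above is easy to check numerically. The short sketch below (in Python for illustration; the chapter's own listings use Matlab) samples a 60 Hz tone at 100 Hz, below the 120 Hz Nyquist rate it would need, and shows the samples are indistinguishable from a folded 40 Hz tone:

```python
import math

fs = 100.0               # sample rate (Hz), below the 120 Hz needed for 60 Hz content
f_signal = 60.0          # actual signal frequency (Hz)
f_alias = fs - f_signal  # folded (aliased) frequency: 40 Hz

# Evaluate both tones at the same sample instants t = k/fs.
for k in range(20):
    t = k / fs
    s_true = math.sin(2 * math.pi * f_signal * t)
    s_alias = -math.sin(2 * math.pi * f_alias * t)  # sign flip comes from the folding
    assert abs(s_true - s_alias) < 1e-9  # identical at every sample instant

print("60 Hz sampled at 100 Hz is indistinguishable from a 40 Hz tone")
```

Once sampled, nothing distinguishes the two tones, which is why the antialiasing filter must act before the A/D converter.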

Figure 4  Sampling and reconstructing an analog signal.

Digital Control Systems

Figure 5  Aliasing problems with different sample rates.

The best solution is a good low-pass filter with a cut-off frequency above the highest frequency of interest in the signal and below the Nyquist frequency. Many options are available: passive filters ranging from simple RC circuits (see Sec. 6.6.3.2) to multipole Butterworth or Chebyshev filters. Passive filters inject the least amount of added noise into the signal but will always attenuate the signal to some extent. Active filters, a good all-around solution, can have sharper cut-offs and gains other than unity; several options are available off the shelf, including switched-capacitor filters and linear active-filter chips. For best results, place the filters as close to the A/D converter input as possible and use good shielding and wiring practices from the beginning of the design.

7.4.2 Difference Equations

There are two basic approaches to simulating sampled input and output signals. We can recognize that physical system signals are represented by differential equations and use differential approximations or we can try to model the computer’s sampling process and define delay operators using a new transform for digital systems (similar to the Laplace transform for continuous systems). In this section we develop approximate solutions to differential equations by first approximating the actual derivative terms and second by numerically integrating the equation to find the solution. In both cases we see that the result is a set of difference equations that are easy to use within digital algorithms. Building on this basic understanding, the following section then uses a model of the actual computer sampling process to determine what the sampled response should be. This model of the computer leads us into the z-domain and provides another set of tools for designing and simulating digital control systems.


7.4.2.1 Difference Equations from Numerically Approximating Differentials

First, let us explore the idea of numerically approximating the derivative terms in a differential equation, which in this case represents our physical system. Let us begin with our basic first-order differential equation:

τ dx/dt + x = u(t)

If this is the equation to be sampled, we can approximate the derivative using Euler's method, based on the definition

dx/dt = lim_{Δt→0} (Δx/Δt)

Now we can use the current and previously sampled values to approximate the derivative in terms of discrete values:

ẋ(k) ≈ (x(k) − x(k−1)) / T

If we assume constant sample times where T = t_k − t_{k−1}, then t_k = kT, where k is an integer counting the samples. In this notation, x(k) is the current value of x at t_k and x(k−1) is the value of x at t_{k−1}, the previously sampled value. Now we can take the difference approximation and insert it in place of the actual derivative:

τ (x(k) − x(k−1))/T + x(k) = u(k)

If we solve for x(k), the current value, in terms of x(k−1), the previous value, and the input u(k), we obtain the following difference equation:

x(k) (τ/T + 1) = (τ/T) x(k−1) + u(k)

Solve for x(k):

x(k) = (τ/T)/(τ/T + 1) · x(k−1) + u(k)/(τ/T + 1)

Rearrange, and finally

x(k) = (τ/(τ + T)) x(k−1) + (T/(τ + T)) u(k)

So now we have a difference equation representing a general first-order equation with time constant τ. We can follow the same procedure and develop a similar difference equation for a second-order differential by writing the difference between the current and previous first-order approximations and again dividing by the sample time.

EXAMPLE 7.1
Use difference equation approximations to solve for the sampled step response of a first-order system having a time constant τ. Calculate the result using both a sample time equal to 1/2 of the system time constant and a sample time equal to 1/4 of the system time constant. Compare the approximate results (found at each sample time) to the theoretical results.

First, let us use the difference equation from earlier and substitute in our sample times. This leads to the following two difference equations:

For T = τ/2:  x(k) = (2/3) x(k−1) + (1/3) u(k)
For T = τ/4:  x(k) = (4/5) x(k−1) + (1/5) u(k)
Theoretical:  x(t) = 1 − e^{−t/τ}

We know from earlier discussions that at one time constant, in response to a unit step input, we will have reached a value of 0.632 (63.2% of the final value). To reach the time equal to one time constant, we need two samples for case one (T = τ/2) and four samples for case two (T = τ/4). Finally, we can calculate the approximate values at each sample time and compare them with the theoretical values, as shown in Table 2. As we would expect, shorter sample times more accurately approximate the theoretical response of our system; the same behavior is found in numerical integration routines. As we will see, more accurate approximations are available that give better results at the same sample frequency. It should now be clear how we use difference equations to approximate differential equations. As the sample time decreases, Table 2 shows that the accuracy increases, typical of numerical routines; in fact, as T approaches zero the equation becomes a true differential and the values merge.

7.4.2.2 Difference Equations from Numerical Integration

We can also solve differential equations by numerical integration. This leads to several more difference equation approximations that can be used to represent the response of our system. To begin, let us use the first-order differential equation from the preceding section:

τ dx/dt + x = u(t)

Table 2  Calculation of Difference Equation (Numerical Approximation of the Differential) Values Based on Sample Times

T = τ/2:  x(k) = (2/3) x(k−1) + (1/3) u(k)          T = τ/4:  x(k) = (4/5) x(k−1) + (1/5) u(k)

Sample No.   T = τ/2 actual     T = τ/2 diff. eq.       T = τ/4 actual      T = τ/4 diff. eq.
1            x(τ/2) = 0.393     x(1) = 1/3 = 0.333      x(τ/4) = 0.221      x(1) = 1/5 = 0.200
2            x(τ)   = 0.632     x(2) = 5/9 = 0.556      x(τ/2) = 0.393      x(2) = 9/25 = 0.360
3                                                       x(3τ/4) = 0.527     x(3) = 61/125 = 0.488
4                                                       x(τ)   = 0.632      x(4) = 0.590
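The entries in Table 2 take only a few lines of code to reproduce. The sketch below (in Python for illustration; the chapter's own examples use Matlab) iterates the difference equation x(k) = (τ/(τ+T)) x(k−1) + (T/(τ+T)) u(k) for a unit step input:

```python
import math

def backward_diff_step(tau, T, n_samples):
    """Iterate x(k) = tau/(tau+T)*x(k-1) + T/(tau+T)*u(k) for a unit step u(k) = 1."""
    x = 0.0
    history = []
    for k in range(1, n_samples + 1):
        x = (tau * x + T * 1.0) / (tau + T)
        history.append(x)
    return history

tau = 1.0
print(backward_diff_step(tau, tau / 2, 2))   # 1/3, 5/9 vs exact 0.393, 0.632
print(backward_diff_step(tau, tau / 4, 4))   # ends near 0.590 vs exact 0.632
print(1 - math.exp(-1))                      # exact value at t = tau
```

The returned sequences match the "Difference eq." columns of Table 2, confirming that the approximation error shrinks as T decreases.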


Instead of numerically approximating the differential, let us now integrate both sides to solve for the output x:

dx/dt = −(1/τ) x + (1/τ) u(t)

Take the integral of both sides over one sample period:

x(k) − x(k−1) = −(1/τ) ∫_{(k−1)T}^{kT} x dt + (1/τ) ∫_{(k−1)T}^{kT} u dt

Now use the trapezoidal rule to approximate each integral using a difference equation:

x(k) − x(k−1) = −(T/τ) (x(k) + x(k−1))/2 + (T/τ) (u(k) + u(k−1))/2

Finally, collect terms and simplify to express the solution as a difference equation:

x(k) = ((2τ − T)/(2τ + T)) x(k−1) + (T/(2τ + T)) [u(k) + u(k−1)]

Once again we have an approximate solution to the original first-order differential equation. The next example problem compares this method with the results from the previous section. With these simple difference equations approximating integrals and derivatives, we can now develop simple digital control algorithms. In Section 9.3 we will see how these simple approximations can be used to implement digital versions of our common PID controller algorithm; this is one of the primary motivations for this discussion. The trapezoidal rule, as shown here, is sometimes called the bilinear transform or Tustin's rule, and generally results in better accuracy at the same step size but requires more computation each step. The next section outlines a method using z transforms, similar to Laplace transforms, to model the digital computer and provide another powerful tool for developing and programming controller algorithms on digital computers.

EXAMPLE 7.2
Use numerical integration approximations to solve for the sampled step response of a first-order system having a time constant τ. Calculate the result using both a sample time equal to 1/2 of the system time constant and a sample time equal to 1/4 of the system time constant. Compare the approximate results (found at each sample time) to the theoretical results.

First, let us use the difference equation from earlier and substitute in our sample times. This leads to the following two difference equations:

For T = τ/2:  x(k) = (3/5) x(k−1) + (1/5) (u(k) + u(k−1))
For T = τ/4:  x(k) = (7/9) x(k−1) + (1/9) (u(k) + u(k−1))
Theoretical:  x(t) = 1 − e^{−t/τ}
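These trapezoidal recursions are easy to check numerically. The sketch below (Python, for illustration rather than the chapter's Matlab) iterates the general form x(k) = ((2τ−T)/(2τ+T)) x(k−1) + (T/(2τ+T)) (u(k) + u(k−1)) for a unit step:

```python
def tustin_step(tau, T, n_samples):
    """Trapezoidal-rule (Tustin) recursion for tau*dx/dt + x = u with a unit step input.
    The step is applied at k = 0, so u(k) + u(k-1) = 2 at every iteration."""
    x = 0.0
    out = []
    for k in range(1, n_samples + 1):
        x = ((2 * tau - T) * x + T * 2.0) / (2 * tau + T)
        out.append(x)
    return out

tau = 1.0
print(tustin_step(tau, tau / 2, 2))   # 2/5 = 0.400 and 16/25 = 0.640, as in Table 3
print(tustin_step(tau, tau / 4, 4))   # ends near 0.634, vs the exact 0.632
```

The values land much closer to 1 − e^{−t/τ} than the simple difference approximation did at the same sample times.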


We know from earlier discussions that at one time constant, in response to a unit step input, we will have reached a value of 0.632 (63.2% of the final value). To reach the time equal to one time constant, we need two samples for case one (T = τ/2) and four samples for case two (T = τ/4). Finally, we can calculate the approximate values at each sample time and compare them with the theoretical values, as shown in Table 3. When we compare these results with those in Table 2, there is a surprising difference between the two methods. Although numerical integration with the trapezoidal approximation requires additional computations, the results are much closer to the correct values, even at the lower sampling frequencies; at T = τ/4 we are within 0.002 of the correct answer at one system time constant. While using these methods to simulate the response of physical systems is useful in and of itself, our primary benefit will be seen once we start designing and implementing digital control algorithms. Digital computers are very capable when it comes to working with difference equations, and by expressing our desired control strategy as a difference equation we can easily implement controllers using microprocessors. With the basic concepts introduced in this section we are already able to approximate the derivative and integral actions of the common PID controller; the proportional term is even simpler. While these methods result in difference equation approximations, we still lack design tools analogous to those we learned in the s-domain. The next section introduces one of these common tools, the z transform, and concludes our discussion of techniques for obtaining digital algorithms by modeling the computer sampling effects.

7.4.3 z Transforms

The most common tool used to design and simulate digital systems is the z transform. We will see that although z transforms have many advantages, they are similar to Laplace transforms in that they only represent linear systems; nonlinear systems must be modeled using difference equations. Remember from the beginning discussion that the computer "instantaneously" samples at each clock pulse, and thus the continuous signal is converted into a series of "thin" pulses with amplitude equal to that of the analog signal at the time the pulse was sent. For analog inputs this type of model works well since the computer uses each discrete data point as

Table 3  Calculation of Difference Equation (Numerical Integration) Values Based on Sample Times

Sample No.   T = τ/2 actual     T = τ/2 diff. eq.       T = τ/4 actual      T = τ/4 diff. eq.
1            x(τ/2) = 0.393     x(1) = 2/5 = 0.400      x(τ/4) = 0.221      x(1) = 2/9 = 0.222
2            x(τ)   = 0.632     x(2) = 16/25 = 0.640    x(τ/2) = 0.393      x(2) = 32/81 = 0.395
3                                                       x(3τ/4) = 0.527     x(3) = 386/729 = 0.529
4                                                       x(τ)   = 0.632      x(4) = 0.634


represented by that one instant in time. In terms of analog outputs, however, when this signal is sent from the D/A converter (analog output), it is fairly useless to the physical world as a series of infinitely thin pulses: before the physical system has time to respond, the pulse is already gone. To remedy this, a "hold" is applied that maintains the current pulse amplitude value on the output channel until the next sample is sent. This is seen in Figure 6, where the computer can now approximate a continuous analog signal by a continuous series of discrete output levels, as opposed to just pulses. If we assume that the time to actually acquire the sample (i.e., latch and unlatch the switch) is very small, we can approximate the pulse train of values using the impulse function δ. At the time of the kth sample, the impulse function is infinitely high and thin with an area under the curve of 1. Although this obviously is not what the computer actually does, the method does approximate the outcome and, as we will see, allows us to model the sample and hold process. Using the impulse function allows us to write the sampled pulse train as a summation, with each pulse occurring at the kth sample:

f*(t) = Σ_{k=0}^{∞} f(kT) δ(t − kT)

where δ(t − kT) is 1 when t = kT and 0 whenever t ≠ kT.

The benefit of using the impulse function is seen when we take the Laplace transform. Since the Laplace transform of an impulse function δ is 1, and the Laplace transform of a delay of length T is e^{−Ts}, we can convert the sampled pulse equation into the s-domain, where

F*(s) = Σ_{k=0}^{∞} f(kT) e^{−kTs}

This simply represents the original sequence of pulses in the s-domain. Now let us define a new variable z, where z = e^{Ts}. This maps the s-domain into the z-domain, and z becomes a shift operator where each z^{−1} is one step before the last, allowing us to model our sequence of pulses. This is much more convenient than writing each delay in the time domain.

Figure 6  Analog signal being sampled and held.


Finally, if the signal is an output of our digital device, we must include in our model the fact that we want the signal to remain on the output port until the next commanded signal level is received. We can model this effect as the sum of two step inputs, the second occurring one sample later (and opposite in sign) to cancel out the first. This is called a hold. If we recognize that with the hold applied each sampled value remains until the next, we can model each held sample as a single pulse, with the total output being the sequence of pulses of width T, as shown in Figure 6. To model the zero-order sample and hold, we represent each pulse as one step input followed by an equal and opposite step input applied one sample time later, as shown in Figure 7. So we see that a zero-order hold (ZOH) in the s-domain can be used to model the sample-and-hold effects of the conversion process, and that z^{−1} = e^{−Ts} allows us to map our models from the s-domain into the z-domain (sampled domain) and vice versa. The important concept to remember is that when we take a continuous system model and develop its discrete (sampled) equivalent, we must also add a ZOH to model the effect of the components used to send the sampled data. The ZOH may be written in several forms using the identity z^{−1} = e^{−Ts}:

ZOH = (1 − e^{−Ts})/s = (1 − z^{−1}) · (1/s) = ((z − 1)/z) · (1/s)

It is common to include the 1/s as part of the Laplace-to-z transform and include the additional (1 − z^{−1}) separately. Since the z transform has been derived from the Laplace transform, many of the same analysis procedures apply. For example, we can again develop transfer functions; talk about poles, zeros, and frequency responses; and analyze stability. However, we must remember that z itself contains information about the sampling rate of our system, since it depends on the sample period T. Since z acts as a shift operator, we can directly relate it to the concept of difference equations discussed earlier. A transfer function in the z-domain is easily converted into a difference equation using the equivalences:

Figure 7  Sample and hold modeled as the sum of two steps.


If      C(z) = z^{−1} R(z)     then    c(k) = r(k−1)
or      z C(z) = R(z)          then    c(k+1) = r(k)

In general:    C(z) = z^{−n} R(z)    or    c(k) = r(k−n)
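This shift-operator correspondence is exactly how a difference equation executes in code: multiplying by z^{−n} delays a sequence by n samples. A small Python illustration (not from the text):

```python
def delay(seq, n):
    """Implement multiplication by z**(-n): delay a causal sequence by n samples,
    padding with zeros (zero initial conditions)."""
    return [0] * n + list(seq[:len(seq) - n])

r = [1, 2, 3, 4, 5]   # input samples r(k)
c = delay(r, 2)       # C(z) = z^-2 R(z)  ->  c(k) = r(k-2)
print(c)              # [0, 0, 1, 2, 3]
```

The first n outputs are zero because, with zero initial conditions, the delayed input has not yet "arrived."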

As with Laplace transforms, tables have been developed that allow us to transform between the time domain and the z- or s-domains interchangeably. The inverse property also holds and presents us with yet another method of analyzing systems. To demonstrate the concept, let us use the table of z transforms in Appendix B to develop a difference equation for a first-order system and compare its sampled output with that obtained by the numerical approximations of the two previous sections.

First-order system:

τ dx/dt + x = u(t)

Take the Laplace transform and develop the transfer function:

X(s)/U(s) = 1/(τs + 1)

Now we can use the table in Appendix B for the z transform, but remember that we must first add our ZOH model to the continuous system transfer function since we want the sampled output. The ZOH and the first-order transfer function become (the 1/s term is grouped with the continuous system transfer function):

X(z)/U(z) = ((z − 1)/z) · Z{ (1/s) · (1/(τs + 1)) }

Let a = 1/τ to match the tables:

X(z)/U(z) = ((z − 1)/z) · Z{ a/(s(s + a)) }

Take the z transform:

X(z)/U(z) = ((z − 1)/z) · ( z(1 − e^{−aT}) / ((z − 1)(z − e^{−aT})) )

After simplification, the result becomes a transfer function in the z-domain where the actual coefficients (zero and pole values) are functions of a = 1/τ and our sample time T:

X(z)/U(z) = (1 − e^{−aT}) / (z − e^{−aT})

As we will see in subsequent chapters, our pole locations are still used to evaluate the stability and transient response of our system. The primary difference now, compared with continuous systems, is that the pole locations also change as a function of our sample time, not just when physical parameters in our system change.
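Because the discrete pole sits at e^{−aT}, this ZOH-discretized model reproduces the continuous step response exactly at the sample instants. A quick Python check (illustrative; the chapter's tooling is Matlab) uses the difference-equation form of this transfer function, which is derived next in the text:

```python
import math

tau = 1.0
for T in (tau / 2, tau / 4):
    p = math.exp(-T / tau)    # discrete pole location; it moves with the sample time
    x, u_prev = 0.0, 1.0      # unit step applied at k = 0
    for k in range(1, 9):
        x = p * x + (1 - p) * u_prev      # ZOH difference equation
        exact = 1 - math.exp(-k * T / tau)
        assert abs(x - exact) < 1e-12     # exact agreement at every sample instant

print("ZOH discretization matches 1 - exp(-t/tau) at the samples")
```

Unlike the numerical approximations of the previous sections, the error here is zero at the sample instants for any sample time.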


Finally, let us convert the discrete transfer function to a difference equation using the identity z^{−1} x(k) = x(k−1). To begin, we can multiply the numerator and denominator by z^{−1}:

X(z)/U(z) = (1 − e^{−aT}) z^{−1} / (1 − e^{−aT} z^{−1})

Now cross-multiply:

X(z) (1 − e^{−aT} z^{−1}) = U(z) (1 − e^{−aT}) z^{−1}

X(z) − e^{−aT} X(z) z^{−1} = (1 − e^{−aT}) U(z) z^{−1}

Now we can use z^{−1} as a shift operator and write it as a difference equation:

x(k) = e^{−aT} x(k−1) + (1 − e^{−aT}) u(k−1)

As expected, the coefficients of the difference equation depend on the sample time.

EXAMPLE 7.3
Use z transforms to solve for the sampled step response of a first-order system having a time constant τ. Calculate the result using both a sample time equal to 1/2 of the system time constant and a sample time equal to 1/4 of the system time constant. Compare the approximate results (found at each sample time) to the theoretical results.

First, let us use the difference equation from earlier and substitute in our sample times. This leads to the following two difference equations:

For T = τ/2:  e^{−aT} = 0.60653;  x(k) = 0.60653 x(k−1) + 0.39347 u(k−1)
For T = τ/4:  e^{−aT} = 0.77880;  x(k) = 0.77880 x(k−1) + 0.22120 u(k−1)
Theoretical:  x(t) = 1 − e^{−t/τ}

We know from earlier discussions that at one time constant, in response to a unit step input, we will have reached a value of 0.632 (63.2% of the final value). To reach the time equal to one time constant we need two samples for case one (T = τ/2) and four samples for case two (T = τ/4). Finally, we can calculate the approximate values at each sample time and compare them with the theoretical values, as shown in Table 4. When we compare these results with those in Tables 2 and 3 we see the advantage of using z transforms to model the computer hardware: even at the slower sample rate the results match the analytical solution exactly. In fact, when we examine the difference equations, we see that the e^{−aT} used in the difference equations corresponds to the e^{−t/τ} in the continuous time domain response equation.

EXAMPLE 7.4
Use Matlab to solve for the sampled step response of a first-order system having a time constant τ. Calculate the result using both a sample time equal to 1/2 of the


Table 4  Calculation of Difference Equation (z Transform) Values Based on Sample Times

Sample No.   T = τ/2 actual       T = τ/2 diff. eq.     T = τ/4 actual      T = τ/4 diff. eq.
1            x(τ/2) = 0.39347     x(1) = 0.39347        x(τ/4) = 0.221      x(1) = 0.22120
2            x(τ)   = 0.63212     x(2) = 0.63212        x(τ/2) = 0.393      x(2) = 0.39347
3                                                       x(3τ/4) = 0.527     x(3) = 0.52763
4                                                       x(τ)   = 0.632      x(4) = 0.63212

system time constant and equal to 1/4 of the system time constant. Plot the approximate results (found at each sample time) against the theoretical results.

Matlab can also be used to quickly generate z-domain transfer functions. The following commands generate our first-order transfer function, define the sample times, and convert the continuous system to a discrete system using the ZOH model. There are several models available in Matlab for approximating a continuous system as a discrete sampled system.

%Program to convert first order system
%to z-domain transfer function
Tau=1;          %System time constant
T2=Tau/2;       %Sample time equal to 1/2 the time constant
T4=Tau/4;       %Sample time equal to 1/4 the time constant

sysc=tf(1/Tau,[1 1/Tau])    %Make LTI TF in s

sysz2=c2d(sysc,T2,'zoh')    %Convert to discrete TF using zoh and sample time T2
sysz4=c2d(sysc,T4,'zoh')    %Convert to discrete TF using zoh and sample time T4

'Press any key to generate step response plot'
pause;
step(sysc,sysz2,8)
figure;
step(sysc,sysz4,8)

This results in the following output to the screen.

Continuous system transfer function:

1/(s + 1)

Discrete transfer function when T = 0.5 s:

0.3935/(z − 0.6065)


Discrete transfer function when T = 0.25 s:

0.2212/(z − 0.7788)

Using the step command allows us to compare the continuous system response and the discrete sampled response for each sample time, as shown in Figures 8 and 9. From the step response plots we can easily see that although both sample times are accurate at the sample instants, the shorter sample time leads to a much better approximation of the continuous system when reconstructed. As we will see in subsequent chapters, there are many additional tools in Matlab that can be used to design and simulate discrete systems.

EXAMPLE 7.5
For the sampled system, C(z), derive the sampled output using

1. Difference equations resulting from a transfer function representation;
2. Difference equations resulting from the output function in the z-domain;
3. Matlab.

We begin with the discrete transfer function of the system, defined as

C(z)/R(z) = G(z) = 1/(z² − 0.5z)

Recognizing that a step input in discrete form is given as

Unit step = z/(z − 1)    (in the s-domain, = 1/s)

we can also write C(z) as a transfer function subjected to a step input R(z):

C(z) = (1/(z² − 0.5z)) R(z) = (z/(z − 1)) · (1/(z² − 0.5z))

Figure 8  Sampled step response using Matlab with T = 0.5 sec.


Figure 9  Sampled step response using Matlab with T = 0.25 sec.

Simplifying, we have the following sampled output of the system represented as a discrete function in the z-domain. In this case the information about the input acting on the system is included in the resulting difference equation:

C(z) = z / ((z − 1)(z² − 0.5z))

The two representations, a transfer function subjected to a step input or a sampled output, can both be simulated using difference equations, but with slight differences in how the input sequence occurs. In the remainder of this example, the sampled response is derived using both notations.

First, let us assume we are given the transfer function and asked to calculate the response of the system to a unit step input. To derive the difference equations, we have two options. First, we may cross-multiply and represent the output c(k) as a function of previous values c(k−i) and of a general input r(k). Second, we may substitute in the discrete representation of a step input and form the difference equation only as a function of the c(k) terms and the delta function; this, however, becomes the same as the sampled output C(z) that is examined in part 2.

Solution 1: Using the Discrete Transfer Function and a General Input r(k)

To develop the general difference equation, multiply numerator and denominator by z^{−2}, cross-multiply, and leave the input in the difference equation:

C(z)/R(z) = 1/(z² − 0.5z)

C(z)/R(z) = (1/(z² − 0.5z)) · (z^{−2}/z^{−2}) = z^{−2}/(1 − 0.5 z^{−1})

C(z) (1 − 0.5 z^{−1}) = z^{−2} R(z)


C(z) − 0.5 z^{−1} C(z) = z^{−2} R(z)

C(z) = 0.5 z^{−1} C(z) + z^{−2} R(z)

Now we can write the difference equation as

c(k) = 0.5 c(k−1) + r(k−2)

Assuming initial conditions equal to zero allows us to calculate the sampled output as

k = 0:  c(0) = 0 + 0 = 0
k = 1:  c(1) = 0 + 0 = 0
k = 2:  c(2) = 0 + 1 = 1           (the step input, R(z), now appears)
k = 3:  c(3) = 0.5 + 1 = 1.5
k = 4:  c(4) = 0.5(1.5) + 1 = 1.75
k = 5:  c(5) = 0.5(1.75) + 1 = 1.875
k = 6:  c(6) = 0.5(1.875) + 1 = 1.9375
k = ∞:  c(∞) = 2.0
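The hand iteration above is easy to automate. A short Python sketch (illustrative; the chapter's own solution for part 3 uses Matlab) of the recursion c(k) = 0.5 c(k−1) + r(k−2) for a unit step:

```python
def simulate(n_samples):
    """c(k) = 0.5*c(k-1) + r(k-2) with a unit step r(k) = 1 for k >= 0, zero ICs."""
    c = [0.0, 0.0]                 # c(0), c(1): the delayed input has not arrived yet
    for k in range(2, n_samples):
        r_km2 = 1.0                # r(k-2) = 1 once k >= 2
        c.append(0.5 * c[k - 1] + r_km2)
    return c

print(simulate(7))   # [0.0, 0.0, 1.0, 1.5, 1.75, 1.875, 1.9375], approaching 2.0
```

Each output gains half the distance to the final value of 2.0, matching the table of samples above.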

Notice that in this solution, once the step occurs in the difference equation, r(k−2) always retains the value of the step input, in this case (unit step) equal to 1. This differs from the second method, shown next.

Solution 2: Using the Sampled Output That Includes the Input Effects in the z-Domain Representation

The procedure used to develop the general difference equation remains the same: we multiply the numerator and denominator by z^{−2}, cross-multiply, and write the difference equation. Now the output sequence includes the effects of the step input.

C(z) = z / ((z − 1)(z² − 0.5z))

Expand the denominator terms:

C(z) = z / (z³ − 1.5z² + 0.5z)

We can simplify the output since a z cancels in the numerator and denominator:

C(z) = 1 / (z² − 1.5z + 0.5)

Multiplying the numerator and denominator by z^{−2}:

C(z) = z^{−2} / (1 − 1.5 z^{−1} + 0.5 z^{−2})


Now we can cross-multiply and develop the difference equation, recognizing that the general input r(k) does not appear:

C(z) (1 − 1.5 z^{−1} + 0.5 z^{−2}) = 1 · z^{−2}

C(z) − 1.5 z^{−1} C(z) + 0.5 z^{−2} C(z) = 1 · z^{−2}

C(z) = 1.5 z^{−1} C(z) − 0.5 z^{−2} C(z) + 1 · z^{−2}

Now we can write the difference equation as

c(k) = 1.5 c(k−1) − 0.5 c(k−2) + 1·δ(k−2)

Of particular interest is the necessary use of the delta function in this difference equation. When we convert the term 1·z^{−2} into the sampled (time-domain) output, the inverse z transform of "1" is simply a delta function, or unit impulse, delayed by two sample times (the z^{−2}). It therefore has no effect on the solution except at the single sample instant k = 2. This is different from solution 1, where the inverse z transform of R(z) is simply the value of r(t) delayed two sample periods (as used in the solution, r(k−2)). Remember that in this solution the step input, R(z), is inherent in the difference equation, as evidenced by the additional c(k−2) term and the different coefficient values compared with the difference equation in solution 1. Finally, to calculate the sampled outputs we assume initial conditions equal to zero and calculate the sampled output as

k = 0:  c(0) = 0 − 0 + 0 = 0
k = 1:  c(1) = 0 − 0 + 0 = 0
k = 2:  c(2) = 0 − 0 + 1 = 1       (delta function, only at k = 2)
k = 3:  c(3) = 1.5 − 0 + 0 = 1.5
k = 4:  c(4) = 1.5(1.5) − 0.5(1) + 0 = 1.75
k = 5:  c(5) = 1.5(1.75) − 0.5(1.5) + 0 = 1.875
k = 6:  c(6) = 1.5(1.875) − 0.5(1.75) + 0 = 1.9375
k = ∞:  c(∞) = 2.0

These are the same values calculated using the transfer function representation in part 1.

Solution 3: Using Matlab to Simulate the Sampled Output

Many computer packages also enable us to quickly simulate the response of sampled systems. We can define the discrete transfer function in Matlab as

>> sysz = tf(1,[1 -0.5 0],1)

This results in the discrete transfer function, sysz:

sysz = C(z)/R(z) = 1/(z² − 0.5z)


To solve for the first set of sampled output values:

>> [Y,T] = step(sysz)

k:  0  1  2       3       4       5       6       7       8       9       10      11      12
T:  0  1  2       3       4       5       6       7       8       9       10      11      12
Y:  0  0  1.0000  1.5000  1.7500  1.8750  1.9375  1.9688  1.9844  1.9922  1.9961  1.9980  1.9990

And finally, to generate the discrete step response plot given in Figure 10 (calling step without output arguments produces the plot):

>> step(sysz)

Thus, in conclusion, all methods produce identical sampled values. The first method allows us to input any function, whereas the second method contains the input effects in the z-domain output function, C(z). Matlab also provides an easy method for simulating discrete systems. In general, when we design controllers we will use the transfer function representation examined in the first method, since the input to the controller, the error, is constantly changing and is best represented as a general input function.

Figure 10  Sampled step response using Matlab.

To conclude this section, we have seen that z transforms are able to model the sample and hold effects of a digital computer and produce results nearly identical to the theoretical ones. Regardless of the method used, once we have derived the difference equations for a system, it is very easy to simulate the response on any digital computer. Also, now that we are able to represent systems using discrete transfer functions in the z-domain, we can apply our knowledge of stability, poles and zeros, and root locus plots to design systems implemented digitally. Since we know how the s-domain maps into the z-domain, we can easily define the desired pole/zero locations in the z-domain, with the key difference that we can also vary the system response (pole locations) by changing the sample time. One method for moving from the s-domain into the sampled z-domain is to simply map the poles and zeros of the continuous system transfer function into the equivalent poles and zeros in the z-domain using the mapping z = e^{sT}. This method is examined further in Chapter 9, where it is presented as a method for converting analog controller transfer functions into discrete representations, allowing them to be implemented on a microprocessor.

7.4.4 Discrete State Space Representations

In the same way that we can represent linear differential equations using transfer functions and state space matrices in the continuous domain, we can also represent them in the discrete domain. In the previous section we saw how transfer functions use the z-domain to approximate continuous systems. Discrete state space matrices are time based, the same as with continuous systems, and now represent the actual difference equations obtained (by the several different methods) in the previous section. The general state space representation is similar to before and is given below:

x(k+1) = K x(k) + L r(k)
c(k)   = M x(k) + N r(k)

where x is the vector of sampled states, r is the vector of inputs, and c is the vector (or scalar) of desired outputs. K, L, M, and N are the matrices containing the coefficients of the difference equations describing the system. Many of the same linear algebra properties still apply, only now the matrices contain the coefficients of the difference equations: instead of the first derivative of a state variable being written as a function of all states, the next sampled value is written as a linear function of all previously sampled values. The order of the system, or the size of the square matrix K, depends on the highest power of z in the transfer function. There are many equivalent state space representations, and different forms may be used depending on the intended use. One advantage of state space is that we can use transformations to go from one form to another; for example, if we diagonalize the system matrix, the values on the diagonal are the system eigenvalues. There are several ways to get the discrete system matrices, although it generally involves one of the previous methods used to write the system response as a difference equation (or set of difference equations).
If we already have a discrete transfer function in the z-domain we can write the difference equations and develop the matrices as illustrated in the next example.
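Iterating x(k+1) = Kx(k) + Lr(k), c(k) = Mx(k) + Nr(k) takes only a few lines of code. The Python sketch below is illustrative (the matrices used are those derived in Example 7.6, which follows); it checks the state-space form against the equivalent scalar difference equation c(k) = c(k−1) − 2c(k−2) + r(k−1):

```python
# Matrices for G(z) = z/(z^2 - z + 2), as derived in Example 7.6.
K = [[1.0, -2.0],
     [1.0,  0.0]]
L = [1.0, 0.0]
M = [1.0, 0.0]
N = 0.0

def ss_response(n_samples, r=1.0):
    """Iterate x(k+1) = K x(k) + L r(k); output c(k) = M x(k) + N r(k)."""
    x = [0.0, 0.0]
    out = []
    for k in range(n_samples):
        out.append(M[0] * x[0] + M[1] * x[1] + N * r)
        x = [K[0][0] * x[0] + K[0][1] * x[1] + L[0] * r,
             K[1][0] * x[0] + K[1][1] * x[1] + L[1] * r]
    return out

def scalar_response(n_samples, r=1.0):
    """Equivalent scalar recursion c(k) = c(k-1) - 2 c(k-2) + r(k-1)."""
    c = [0.0, r]   # c(0) = 0; c(1) = c(0) - 2 c(-1) + r(0) = r
    for k in range(2, n_samples):
        c.append(c[k - 1] - 2 * c[k - 2] + r)
    return c[:n_samples]

assert ss_response(8) == scalar_response(8)
print(ss_response(8))   # [0.0, 1.0, 2.0, 1.0, -2.0, -3.0, 2.0, 9.0]
```

The growing output reflects the fact that this system's poles lie outside the unit circle, a stability question taken up in later chapters.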


EXAMPLE 7.6
Convert the discrete transfer function of a physical system into the equivalent set of discrete state space matrices.

G(z) = C(z)/R(z) = z / (z^2 - z + 2)

First, convert the discrete transfer function to a difference equation:

G(z) = C(z)/R(z) = z^-1 / (1 - z^-1 + 2z^-2)

c(k) = c(k-1) - 2c(k-2) + r(k-1)
c(k+1) = c(k) - 2c(k-1) + r(k)

Since c(k+1) depends on two previous values, we need two discrete states so that each state equation is in the form x(k+1) = f(values at sample k). Therefore, let us define our states as

x1(k) = c(k)
x2(k) = c(k-1)

Now substitute in and write the initial difference equation as two equations, where each state at sample k+1 is only a function of states and inputs at sample k:

x1(k+1) = c(k+1) = x1(k) - 2 x2(k) + r(k)
x2(k+1) = c(k) = x1(k)

Now we can easily express the difference equations in matrix form:

x(k+1) = [x1(k+1); x2(k+1)] = [1 -2; 1 0] [x1(k); x2(k)] + [1; 0] r(k)

And if our output is simply c(k):

c(k) = [1 0] [x1(k); x2(k)] + [0] r(k)

Once we have linear difference equations, it becomes straightforward to represent them using matrices, and many of the linear algebra analysis methods remain the same. To analyze the discrete state space matrices, the required operations are similar to those used to find the eigenvalues of the continuous system state space matrices. Now, when we examine the left and right sides of the state equations, we see that they are related through z^-1 instead of through s, as was the case with continuous representations. Using this identity, the linear algebra operations remain the same and we can solve for X(z) as

x(k+1) = K x(k) + L r(k)
X(z) = K X(z) z^-1 + L R(z) z^-1
X(z) - K X(z) z^-1 = L R(z) z^-1
(I - K z^-1) X(z) = L R(z) z^-1

Now we can premultiply both sides by the inverse of (I - K z^-1) and solve for X(z):

X(z) = (I - K z^-1)^-1 L R(z) z^-1

Finally, we can substitute X(z) into the output equation and solve for C(z):

C(z) = M (I - K z^-1)^-1 L R(z) z^-1 + N R(z)

or

C(z) = [M (I - K z^-1)^-1 L z^-1 + N] R(z)

As with the continuous system, we now have the means to convert from discrete state space matrices into a discrete transfer function representation. It still involves taking the inverse of the system matrix and yields the poles and zeros of our system.

EXAMPLE 7.7
Derive the discrete transfer function for the system represented by the discrete state space matrices.

x(k+1) = [x1(k+1); x2(k+1)] = [1 -2; 1 0] [x1(k); x2(k)] + [1; 0] r(k)

c(k) = [1 0] [x1(k); x2(k)] + [0] r(k)

The relationship between discrete state space matrices and discrete transfer functions has already been defined as

C(z) = [M (I - K z^-1)^-1 L z^-1 + N] R(z)

Substitute the K, L, M, and N matrices:

C(z) = [1 0] ([1 0; 0 1] - [z^-1 -2z^-1; z^-1 0])^-1 [z^-1; 0] R(z)

Combine and take the inverse of the inner matrix by using the adjoint and determinant:

C(z) = [1 0] [1 - z^-1  2z^-1; -z^-1  1]^-1 [z^-1; 0] R(z)

C(z) = [1 0] [1  -2z^-1; z^-1  1 - z^-1] [z^-1; 0] R(z) / (1 - z^-1 + 2z^-2)

And finally, performing the matrix multiplications results in

C(z) = z^-1 / (1 - z^-1 + 2z^-2) R(z)

Since we started this example with the discrete state space matrices from Example 7.6, we can easily verify our solution. Recall that the original transfer function from Example 7.6 was

G(z) = C(z)/R(z) = z / (z^2 - z + 2)

We see that we get the same result if we multiply the numerator and denominator of our transfer function by z^2 to put it in the same form. Thus the methods developed in earlier chapters for continuous system state space matrices are very similar to the methods used for discrete state space matrices, as shown here. The process of deriving the discrete state space matrices becomes more difficult when the input spans several delays (i.e., r(k-1) and r(k-2)) and relies on first obtaining difference equations or z transforms. To be more general, we would like either to take existing differential equations (which may be nonlinear) or, if linear, to convert directly from the A, B, C, and D matrices already developed. The next two methods address these cases. If we begin with the original differential equations describing the system, we can simply write them as a set of first-order differential equations and approximate the difference equations using the backward, forward, or bilinear approximation algorithms. We represent each first-order differential equation by a difference equation, solve each one in the form x(k+1) = f(x(k), x(k-1), ..., r(k), r(k-1), ...), and then, if linear, represent the set in matrix form. Three different difference equation approximations are given in Table 5. The procedure is similar to those presented in Section 7.4.2 and is simply repeated for each state equation that we have. An advantage of using this method is that nonlinear state equations are very easy to work with; the only difference is that we cannot write the resulting nonlinear difference equations in matrix form and use linear algebra techniques to analyze them (as was done in Example 7.7). Of the three alternatives given in Table 5, the bilinear transformation provides the best approximation but requires slightly more work to perform the transformation.
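The equivalence between the scalar difference equation of Example 7.6 and its state space form can also be cross-checked numerically. A pure-Python sketch (illustrative, not from the text), iterating both forms for a unit step:

```python
def scalar_response(n):
    """c(k+1) = c(k) - 2 c(k-1) + r(k), unit step, zero initial conditions."""
    c = [0.0, 0.0]                        # c(-1), c(0)
    for k in range(n - 1):
        c.append(c[-1] - 2.0 * c[-2] + 1.0)
    return c[1:]                          # c(0) .. c(n-1)

def matrix_response(n):
    """x(k+1) = K x(k) + L r(k) with K = [[1, -2], [1, 0]], L = [1, 0]."""
    x1, x2, out = 0.0, 0.0, []
    for k in range(n):
        out.append(x1)                    # c(k) = x1(k)
        x1, x2 = x1 - 2.0 * x2 + 1.0, x1  # simultaneous state update
    return out

print(scalar_response(5))   # -> [0.0, 1.0, 2.0, 1.0, -2.0]
print(matrix_response(5))   # -> [0.0, 1.0, 2.0, 1.0, -2.0]
```

Both recursions produce the same sampled sequence, as the algebra above requires. (The growing output reflects the fact that this example system is unstable: its poles lie outside the unit circle.)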
Table 5  State Space Continuous to Discrete Transformations
Alternative transformations from continuous to discrete first-order ODEs

Method                    Difference equation           z-domain
Backward rectangular      x_dot = [x(k) - x(k-1)]/T     x_dot = [(z - 1)/(Tz)] x
Forward rectangular       x_dot = [x(k+1) - x(k)]/T     x_dot = [(z - 1)/T] x
Bilinear transformation   approximates z = e^(sT)       x_dot = [2(z - 1)/(T(z + 1))] x

Finally, although only introduced here, it is possible to approximate the transformation itself, z = e^(sT), through a series expansion, allowing even better approximations. Since computers can include many terms in the series, this provides good results when implemented in programs like Matlab. The assumption used with this method is that the inputs themselves remain constant during the sample period. While obviously not the case in general, unless sample times are large it does provide a good approximation. This allows us to develop our discrete system matrix, K, defined previously, as the discrete equivalent of the original continuous system matrix, A. We can then include as many of the series expansion terms as we wish:

K(kT) = e^(AT) = I + AT + (AT)^2/2! + (AT)^3/3! + ...

where K is the discrete equivalent of our original system matrix A and T is the sample period. As with the continuous system matrices A, B, C, and D, we can use the discrete matrices derived in this section to check controllability, design observers and estimators, etc. When we begin looking at discrete (sampled) MIMO systems later, this will be the representation of choice.
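The truncated series is straightforward to compute. A minimal pure-Python sketch (illustrative; in practice Matlab functions such as c2d perform this conversion, and the truncation length chosen here is an assumption adequate only for small AT):

```python
import math

def discretize(A, T, terms=25):
    """K = e^(AT) ~ I + AT + (AT)^2/2! + ..., truncated after `terms` terms."""
    n = len(A)
    AT = [[A[i][j] * T for j in range(n)] for i in range(n)]
    K = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # I
    term = [row[:] for row in K]                                        # (AT)^0 / 0!
    for m in range(1, terms):
        # next term = previous term * AT / m, so term holds (AT)^m / m!
        term = [[sum(term[i][k] * AT[k][j] for k in range(n)) / m
                 for j in range(n)] for i in range(n)]
        K = [[K[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return K

# Scalar check: A = [[-2]] should give K = [[e^(-2T)]]
print(abs(discretize([[-2.0]], 0.1)[0][0] - math.exp(-0.2)) < 1e-12)  # -> True
```

For a diagonal A the result is simply the scalar exponentials of the diagonal entries, which makes the routine easy to spot-check.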

7.5 SAMPLE TIME GUIDELINES

It is important to know what sample times should be used when designing digital control systems. We must at minimum meet certain sampling rates and thus choose a processor capable of meeting these specifications. Several guidelines are given here for determining what sample time is required for different systems. In general, faster is always better if the cost, bits of resolution, service, etc., are all the same. The only disadvantage of faster sampling rates is the amplification of noise when digital differentiation is used: since T is in the denominator and becomes very small, a small amount of noise in the measured signals in the numerator causes very large errors. This can be dealt with in different ways, so we still prefer the faster sample time. For first-order systems characterized by a time constant, we would like to sample at least 4-10 times per time constant. Since a time constant has units of time (seconds), it is easy to determine what the sampling rate should be. For example, if our system is primarily first order with a time constant of 0.2 sec, then we should have a minimum sample rate of 20 Hz and preferably a sample rate greater than 50 Hz.
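The 4-10 samples per time constant rule is simple arithmetic. A small sketch (the helper name is made up for illustration):

```python
def sample_rate_range(time_constant):
    """Return (minimum, preferred) sample rates in Hz for a first-order system,
    using the 4-10 samples per time constant guideline."""
    return 4.0 / time_constant, 10.0 / time_constant

# Time constant of 0.2 sec -> at least 20 Hz, preferably 50 Hz or more
print(sample_rate_range(0.2))   # -> (20.0, 50.0)
```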


Second-order systems use the rise time as the period over which 4-10 samples are desired. Remember that these are minimums and, if possible, aim for more frequent samples. In cases where we have a set of dominant closed loop poles and thus a dominant natural frequency, we find that sampling at less than 10 times the natural frequency no longer allows equivalence between the continuous and sampled responses, and they diverge. In these cases direct design of digital controllers is recommended. If we can sample at frequencies greater than 20 times the natural frequency, we find that the digital controller closely approximates the continuous equivalent. Since the system's natural frequency is close to the bandwidth as measured on frequency response plots, the same multipliers may be used with system bandwidth measurements. In most cases where the sampling frequency is greater than 40 times the bandwidth or natural frequency of our physical system, we can directly approximate our continuous system controller with good results. One additional advantage should also be mentioned in regard to sampling frequency: better disturbance rejection is found with shorter sampling times. Physically, this can be understood as limiting the amount of time a disturbance input can act on the system before the controller detects it and takes appropriate action. Finally, the real challenge for the designer is determining what the significant frequencies in our system are. The guidelines above are easily followed but are all based on the assumption that we know the properties of our physical system. Even if we sample fast enough to exceed the recommendations relative to our primary dynamics, it does not necessarily follow that we are sampling fast enough to control all of our significant dynamics.
A significant dynamic characteristic might be much faster than the dominant system frequency, and yet if it contributes in such a way as to significantly affect the final response of our system, we may have problems.

7.6 PROBLEMS

7.1 List three advantages of a digital controller.

7.2 What are the primary components that must be added to implement a digital controller?

7.3 List two advantages of using centralized controller configurations.

7.4 List two advantages of using distributed controller configurations.

7.5 List two primary distinctions of digital controllers (relative to analog controllers) that must be accounted for during the design process.

7.6 Describe one advantage and one disadvantage of using analog controllers as the basis for the design of digital controllers.

7.7 If our signal contains a frequency component greater than the Nyquist frequency, what is created in our sampled signal?

7.8 To minimize the effects of aliasing, it is common to use what component in our design?

7.9 What guideline should we use regarding sample rate if we wish to convert an existing analog controller into an equivalent digital representation and experience good results?

7.10 A sampled output, C(z), is given in the z-domain. Use difference equations to calculate the first five values sampled.

C(z) = 1 / (z + 0.1)

7.11 A sampled output, C(z), is given in the z-domain. Use difference equations to calculate the first 10 values sampled.

C(z) = 0.632z / [(z - 1)(z^2 - 0.736z + 0.368)]

7.12 Use the z transform to derive the difference equation approximation for the function x(t) = t e^(-at). Treat it as a free response (no forcing function) and leave the coefficients of the difference equation in terms of a and T.

7.13 Use the continuous system transfer function below, apply a ZOH, convert into the z-domain, derive the difference equation, and calculate the first five values (T = 0.5 sec) in response to a unit step input. Use partial fraction expansion if necessary.

G(s) = (s + 3) / [s(s + 1)(s + 2)]

7.14 Use the differential equation describing the motion of a mass-spring-damper system,

d^2y/dt^2 + 5 dy/dt + 6y = r(t)

and
a. Derive the continuous system transfer function.
b. Apply a ZOH and derive the discrete system transfer function.
c. Using T = 1 sec, write the difference equations from the discrete transfer function, and solve for the first eight values when the input is a unit step.

7.15 Develop the first five sampled values for a first-order system described as having a system time constant equal to 2 sec. Assuming a sample time of 0.8 sec, use the differentiation approximation, numerical integration, and z transforms to develop a difference equation for each method. Use a table to calculate the first five sampled values for each difference equation and compare the results. The outputs are in response to a unit step input.

7.16 Set up and use a spreadsheet to solve Problem 7.15.

7.17 Convert the discrete transfer function into the equivalent discrete state space matrices.

G(z) = z / (z^2 - 2z + 1)

7.18 Use the discrete state space matrices and solve for the equivalent discrete transfer function.

x(k+1) = [1 T; 0 1] x(k) + [T^2/2; T] r(k)
y(k) = [1 0] x(k)


7.19 Using the difference equation describing the response of a physical system, develop the equivalent discrete transfer function in the z-domain.

y(k) = 0.5 y(k-1) + 0.3 r(k-1)

7.20 Using the difference equation describing the response of a physical system, develop the equivalent discrete transfer function in the z-domain.

y(k) = 0.5 y(k-1) - 0.3 y(k-2) + 0.2 r(k)


8 Digital Control System Performance

8.1 OBJECTIVES

- To relate analog control system performance to digital control system performance.
- To demonstrate the effects and locations of digital components.
- To examine the effects of disturbances and command inputs on steady-state errors.
- To develop and define system stability in the digital domain.

8.2 INTRODUCTION

This chapter parallels Chapter 4 in defining the performance parameters for control systems. The difference is that the parameters are examined here with respect to digital control systems rather than analog, as done earlier. By using the z transform developed in the previous chapter, many of the same techniques can still be applied. Block diagram operations are identical once the effects of sampling the system are included, and the concept of stability on the z-plane has many parallels to the concept of stability on the s-plane. The measurements of system performance, since they still deal with the output of the physical system in response to either a command or disturbance input, remain the same, and we have new definitions of the final value theorem and initial value theorem for use with transfer functions in the z-domain. An underlying theme is evident throughout the chapter: in addition to the parameters that affected steady-state and transient performance in analog systems, we now have the additional effects of quantization (finite resolution) and sampled inputs and outputs that also affect the performance.

8.3 FEEDBACK SYSTEM CHARACTERISTICS

As with the analog systems, steady-state errors and transient response characteristics are the primary tools to quantify control system performance. Although the definitions remain the same, we have additional characteristics inherent in our digital devices that must now be accounted for during the design and operation phases.

8.3.1 Open Loop Versus Closed Loop

In both open loop and closed loop systems with digital controllers, we must modify the model to include the zero-order hold (ZOH) effects. Instead of receiving smooth analog signals, the system receives a series of small steps from the digitization process. The magnitude of each step is never less than the discrete level determined by the number of bits used in the conversion process. It may obviously be much larger, since step inputs and other commands can cause the output to jump many discrete levels in one sample period. An open loop and a closed loop system, each including a digital controller and ZOH, are shown in Figure 1. For both the open and closed loop systems, the number and location of the AD and DA (analog-to-digital and digital-to-analog) converters may vary. Their purpose is to allow the analog signals of the physical system to interact with the digital signals in the microprocessor. If we generate the command to the system internal to the microprocessor, then the first AD converter in either system is not required. Likewise, if the output of the system is actuated by a digital signal (e.g., stepper motor or PWM) or if the sensor output is digital (e.g., encoder), then the output from the computer or the feedback path does not require a DA converter. In general, when we add a digital controller to an existing analog system, we will require the sampling devices as shown in Figure 1. The properties, advantages, and disadvantages of open loop versus closed loop controllers are the same as with the equivalent continuous system models. The differences are the quantization and sampling effects. To analyze the systems for either transient or steady-state performance, we follow the same procedures learned earlier and simply substitute the appropriate ZOH model developed in the previous chapter. With the open loop system, the result is the final transfer function in the z-domain relating the input to the output.
For the closed loop system we must first close the loop and derive the new transfer function for the overall system, now including the effects of the ZOH. Figure 2 illustrates the process of including the samplers and ZOH to obtain transfer functions in the z-domain. In place of the actual AD and DA converters, we place the samplers and ZOH models developed in the preceding chapter. This allows us to close the loop and

Figure 1  General open and closed loop digital controller diagrams.

Figure 2  Block diagram representations of digital controller components.

develop the discrete transfer function that includes the effects of the sample time. To simplify the procedure, we can take each sampler (on the command and feedback paths) and, since the samples occur at the same time, move them past the summing junction and represent them as a single sampler. Physically, this results in the same sampled error: we get the same error whether we sample each signal separately and then calculate the error, or whether we sample the error signal itself. Using the single sampler and ZOH now allows us to substitute the ZOH model into the block diagram (s-domain) and, along with the physical system model, convert from the s-domain to the z-domain as shown in Figure 3. As we see in the block diagram, and remembering our model of the ZOH, the effects of the sampler are included in the ZOH since it is dependent on the sample time, T. The result is a single closed loop transfer function, but in the z-domain and including the effects of our digital components. Now we can apply similar analyses to determine steady-state error and transient response. Remember from the previous chapter that the equations modeled by the discrete block diagrams are now difference equations as opposed to continuous functions (differential equations). Figure 4 gives several common discrete block diagram components and the equations that they represent. They perform the same role as in our analog systems, only they rely on discrete sets of data, represented as difference equations. The reduction of block diagrams in the z-domain is very similar to the reduction of block diagrams in the s-domain. The primary difference is locating and modeling the ZOHs in the system. One item must be mentioned since it is not obvious based on our knowledge of continuous systems. Figure 5 illustrates the problem when we attempt to take two continuous systems, separately convert each into a sampled system, and then multiply to obtain the total sampled input-output relationship.
As is clear in the figure, Ga != Gb, because an additional sampler is assumed in Gb even though the output of G1 and the input of G2 are continuous.

Figure 3  Block diagram reduction of digital controller and physical system.

In other words, the input of G2 is not sampled since it is based directly on the continuous output signal of G1. In general terms, then,

Z[G1(s)G2(s)] != Z[G1(s)] Z[G2(s)]

When we model the discrete and continuous systems, we must be aware of where the samplers are in the system and treat them accordingly. If a sampler exists on the input and output, the z transform applies to all blocks between the two samplers. We can show how this works by reducing the block diagram given in Figure 6. Relative to the forward path, only G(s) is between two samplers and needs to have the z transform applied accordingly. When we consider the complete loop, then G(s) and

Figure 4  Discrete block diagram components and difference equations.

Figure 5  Multiplying transfer functions in z differences.

H(s) are between two samplers, and the transform should take this into account. When we apply the ZOH to the forward path and total loop as described, we get the following transfer function in the z-domain:

C(z)/R(z) = D(z) (1 - z^-1) Z[G(s)/s] / {1 + D(z) (1 - z^-1) Z[G(s)H(s)/s]}

Once we have the closed loop transfer function we have many options. If we want the time response of the system, we can write the difference equations from the discrete transfer function and calculate the output values at each sample time, as done in the previous chapter. Also, as demonstrated in the remainder of this chapter, we can use the final value theorem (FVT) (in terms of z) to find the steady-state error, or develop a root locus plot to aid in the design of the controller and to predict the transient response characteristics.
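The earlier caution that Z[G1(s)G2(s)] != Z[G1(s)] Z[G2(s)] can be checked numerically: the samples of the continuous impulse response of G1G2 are not the discrete convolution of the individually sampled impulse responses. A sketch (G1 = 1/(s+1), G2 = 1/(s+2), and T = 0.5 sec are illustrative choices, not from the text):

```python
import math

T, N = 0.5, 6                                   # sample period and sample count (illustrative)
g1 = [math.exp(-k * T) for k in range(N)]       # samples of e^-t,  G1(s) = 1/(s + 1)
g2 = [math.exp(-2 * k * T) for k in range(N)]   # samples of e^-2t, G2(s) = 1/(s + 2)

# Sequence behind Z[G1(s)] * Z[G2(s)]: discrete convolution of the two sample sets
conv = [sum(g1[m] * g2[k - m] for m in range(k + 1)) for k in range(N)]

# Sequence behind Z[G1(s)G2(s)]: samples of the continuous impulse response
# of 1/((s + 1)(s + 2)), which is e^-t - e^-2t
exact = [math.exp(-k * T) - math.exp(-2 * k * T) for k in range(N)]

print(conv[1], exact[1])   # the two sequences already differ at k = 1
```

Because the sequences differ, the products of the individual z transforms cannot equal the z transform of the product, confirming the block diagram rule.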

8.3.2 Disturbance Inputs

One problem unique to digital systems occurs when we attempt to close the loop relative to a disturbance input, since the input acts directly on the continuous portion of the system. This section examines block diagram reduction techniques when disturbance inputs are added to our model. To begin, let us add a disturbance input to our system model as shown in Figure 7. We can begin to simplify the block diagram by setting R(s) = 0 (as in linear analog systems) and rearranging the block diagram to that shown in Figure 8. Now let us write the equations from the block diagram and try to relate the system output to the disturbance input. First, write the equation for the output C(s), accounting for the summing junction and blocks:

C(s) = G2(s) [D(s) - Gc(z)(ZOH) G1(s) C*(s)]

Figure 6  Sampled block diagram with sensor in feedback.

Figure 7  Digital controller with disturbance input.

At this point we still need to differentiate between the sampled output, C*(s), and the continuous output, C(s). If we move the sampler before the feedback path and look at the sampled output, C(z), then we can collect terms and solve for the output as

C(z) = Z[G2(s)D(s)] / {1 + Gc(z) (1 - z^-1) Z[G1(s)G2(s)/s]}

The interesting difference, as compared to analog systems, is that once we add the disturbance we can no longer reduce the block diagram to a single discrete transfer function. We cannot transform G2(s) and D(s) independently of each other, since there is no sampler between them. This limits us in trying to solve for C(z), since a portion of the equation remains dependent on D(s). If we define the disturbance input in the s-domain and multiply it with G2(s) before we take the z transform, we can solve for the sampled system response to a disturbance. This is a general problem whenever an analog input acts directly on some portion of our system without an intervening sampler.

8.3.3 Steady-State Errors

Using the techniques presented in earlier chapters on analog systems, it may or may not be possible to reduce the system to a single transfer function relating the sampled inputs and outputs. If we can reduce the system to a single transfer function, we can use our knowledge of the relationship between s and z (z = e^(sT)) to apply modified forms of the FVT and initial value theorem (IVT). Remembering that z = e^(sT) and that s -> 0 for the continuous system FVT, it is easy to see that now z -> 1 for the discrete system. Using the same transform between s and z, for the continuous system IVT s -> infinity, and so also z -> infinity for the discrete IVT. This leads to the equivalent IVT and FVT for discrete systems, given as follows:

Figure 8  Simplified sampled block diagram with disturbance.

FVT (z):  y(k -> infinity) = y_ss = lim(z -> 1) [(z - 1)/z] Y(z)

IVT (z):  y(0) = y_0 = lim(|z| -> infinity) Y(z)
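Both limits can be evaluated numerically for a rational Y(z). A small pure-Python sketch (illustrative; the example Y(z), a unit step applied to 1/(z - 0.5), is made up for this sketch):

```python
def fvt(Y, tol=1e-9):
    """Discrete final value theorem: lim as z -> 1 of (z - 1)/z * Y(z)."""
    z = 1.0 + tol
    return (z - 1.0) / z * Y(z)

def ivt(Y, big=1e9):
    """Discrete initial value theorem: lim as |z| -> infinity of Y(z)."""
    return Y(big)

# Unit step, z/(z - 1), driving a first-order lag 1/(z - 0.5): steady state = 1/(1 - 0.5) = 2
Y = lambda z: z / ((z - 1.0) * (z - 0.5))

print(round(fvt(Y), 6), round(ivt(Y), 6))   # -> 2.0 0.0
```

The final value agrees with letting z approach unity analytically, and the initial value of zero reflects the one-sample delay through the lag.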

Now the same procedures learned earlier can be used to determine the steady-state error from different controllers.

EXAMPLE 8.1
Using the continuous system transfer function, find the initial and final values using the discrete forms of the IVT and FVT. Assume a unit step input and a sample time equal to 0.1 sec.

G(s) = 6 / (s^2 + 4s + 8)

The first thing we must do is convert from the continuous domain to the discrete, sampled, domain. Write G(s) in the form

G(s) = (6/8) 8/[(s + 2)^2 + 4] = (6/8) (a^2 + b^2)/[(s + a)^2 + b^2]

And apply the ZOH:

G(z) = (6/8) [(z - 1)/z] Z{ 8 / [s((s + 2)^2 + 4)] }

Now we can use the tables in Appendix B, where a = 2 and b = 2; the transform for the portion inside the braces is

z(Az + B) / [(z - 1)(z^2 - 2z e^(-aT) cos(bT) + e^(-2aT))]

A = 1 - e^(-aT) cos(bT) - (a/b) e^(-aT) sin(bT)
B = e^(-2aT) + (a/b) e^(-aT) sin(bT) - e^(-aT) cos(bT)

Recognizing that z/(z - 1) cancels with the portion of the ZOH outside of the transform, including the 6/8 factor, and substituting in for a, b, and T results in

G(z) = (0.026z + 0.023) / (z^2 - 1.605z + 0.670)

This is now the discrete transfer function approximation of the continuous system transfer function. To find the initial and final values, we can apply the discrete forms of the IVT and FVT. For both cases we need to include the step input, since we have only derived the discrete transfer function, G(z), not the system output Y(z). In discrete form, the unit step input is simply

Z{1/s} = z/(z - 1)

To get the initial value, multiply G(z) by the step input and let z approach infinity:

y(0) = y_0 = lim(|z| -> infinity) Y(z) = lim(|z| -> infinity) (0.026z^2 + 0.023z) / [(z - 1)(z^2 - 1.605z + 0.670)] = 0

With the discrete FVT, the step input and the (z - 1)/z term included with the theorem cancel, as they did with the continuous form of the theorem. For a unit step input, we can thus simply let z approach unity in the discrete transfer function to solve for the final value of the system:

y(k -> infinity) = y_ss = lim(z -> 1) (0.026z + 0.023) / (z^2 - 1.605z + 0.670) = 0.754

We know from the original transfer function in the s-domain that the final value does approach 6/8, or 0.75. In the discrete form we introduce some round-off error, although it is minor in terms of what we are trying to accomplish in control system design.

EXAMPLE 8.2
Using the continuous system transfer function, find the discrete initial and final values. Assume a unit step input and a sample time equal to 0.1 sec. Use Matlab to perform the conversion and plot the resulting step response to find the discrete initial and final values.

G(s) = 6 / (s^2 + 4s + 8)

Matlab allows us to define the continuous system transfer function and designate the sample time and desired model of the sampling device; it then develops the equivalent discrete transfer function. The commands used in this example are:

% Program to verify IVT and FVT using z-domain transfer function
sysc = tf(6,[1 4 8])         % Make LTI TF in s
sysz = c2d(sysc,0.1,'zoh')   % Convert to discrete TF using ZOH and sample time
'Press any key to generate step response plot'
pause;
step(sysc,sysz)              % Plot step responses of continuous and sampled systems

When these commands are executed, Matlab returns the following discrete transfer function:

G(z) = (0.0262z + 0.02292) / (z^2 - 1.605z + 0.6703)

which is identical to the one developed manually in the previous example. The resulting step responses of the continuous and discrete systems are given in Figure 9. In conclusion, with discrete systems the FVT and IVT still apply and can be used to determine the final and initial values of a system. The procedure learned with analog systems is also used for digital systems, with two exceptions. First, when we close the loop we must be careful where we apply the samplers and ZOH effects; the z transform is only applied between two samplers. Second, when we have inputs acting directly on the continuous portion of our physical system (i.e., disturbances),

Figure 9  Example Matlab step responses of continuous and discrete equivalent systems.

we cannot close the loop and solve for the closed-form transfer function without knowing what the disturbance input is, since it is included in the z transform and is not a sampled input.
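The hand calculation of Example 8.1 and the Matlab result above can be cross-checked in pure Python by evaluating the same table entries numerically (an illustrative sketch, not from the text):

```python
import math

a, b, T = 2.0, 2.0, 0.1          # from G(s) = 6/((s + 2)^2 + 4), sample time 0.1 sec
gain = 6.0 / (a * a + b * b)     # the 6/8 factor

eaT, c, s = math.exp(-a * T), math.cos(b * T), math.sin(b * T)
A = 1.0 - eaT * c - (a / b) * eaT * s
B = math.exp(-2.0 * a * T) + (a / b) * eaT * s - eaT * c

num = (round(gain * A, 3), round(gain * B, 3))                      # numerator coefficients
den = (round(2.0 * eaT * c, 3), round(math.exp(-2.0 * a * T), 3))   # denominator coefficients
print(num, den)   # -> (0.026, 0.023) (1.605, 0.67)

# Final value via the discrete FVT (let z -> 1 in G(z)):
fv = gain * (A + B) / (1.0 - 2.0 * eaT * c + math.exp(-2.0 * a * T))
print(round(fv, 3))   # -> 0.75
```

The coefficients reproduce both the manual result, G(z) = (0.026z + 0.023)/(z^2 - 1.605z + 0.670), and Matlab's c2d output, and the final value matches the s-domain value of 6/8.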

8.4 FEEDBACK SYSTEM STABILITY

Until now, we have used difference equations to simulate the response of discrete (sampled) systems. While this is an easy way to find the response of sampled systems, it is more limited when used during the actual design of the control system. We would prefer to design digital systems using root locus tools, as we did with analog systems. This section presents the background material that allows us to use root locus design tools, similar to the analog methods, to design digital systems (using the z transform and discrete transfer functions). As several examples have shown, it is easy to represent difference equations as transfer functions in the z-domain and vice versa; any transfer function in z can easily be converted to a difference equation by recognizing that z^-1 is a delay of one sample period. Since the transfer function contains the same information, we can use it directly to calculate the sampled response without explicitly writing the equivalent difference equations. There are two other methods that allow us to do this. The first is long division, where the numerator is divided by the denominator and the response at each sample time is calculated. This does not require recursive solutions but is very computationally intensive, especially for larger systems. Second, it is possible to use the z transform tables (Appendix B) to invert the transfer function back into the time domain. Again, this becomes very labor intensive for all but the simplest of systems. Finally, we can calculate the poles and zeros of the transfer function, as done in the s-domain, and estimate the response as the sum of individual linear first- and second-order responses. This is also the method used with root locus to design different controllers to meet specific performance requirements. The difference equation


method is still often used when verifying the final design, since it is easy to obtain and is easily programmed into computers or calculators to obtain responses. Spreadsheets can easily be configured to solve for and plot sampled responses. For the times when we want to know how the response is affected by changing parameters, and we would rather not calculate the difference equation for each case, we use root locus plots. Fortunately, just as root locus plots allowed us to predict system performance and design controllers in the continuous realm, the same techniques apply for discrete systems. Since the z-domain is simply a mapping from the s-domain, where z = e^(sT), we can apply the same rules, but to the different boundaries determined by the mapping between s and z. In other words, when we close the loop, the same magnitude and angle conditions must still be met when the root locus plot is developed. This leads us to use the identical rules as presented for analog systems represented in the s-domain. To begin our discussion of stability using root locus techniques, let us see where our original stable region in the s-plane lands when we apply the transform to get into the corresponding z-plane. The method is quite simple: since we know what conditions are required in the s-plane for stability, substitute those values of s into the transform to the z-domain and see what shape and area define the new stability region. Our original pole locations in continuous systems were expressed as having (possibly) a real and an imaginary component, where

s = σ ± jω

Then substituting s into z = e^(sT) results in

z = e^((σ ± jω)T) = e^(σT) e^(±jωT)

Knowing that the system is stable whenever s has a negative real part, σ < 0, and is marginally stable when σ = 0, we can determine the corresponding stability region in the z-plane. If σ = 0, regardless of jωT (the oscillating component), then |z| = e^0 = 1, defining the boundary of the equivalent stability region in the z-domain.
All points that have a constant magnitude of one relative to the origin define a unit circle centered on the origin, and thus the stable region boundary in z. When σ < 0, e^(σT) is always less than 1 and approaches 1 as σ approaches zero from the left (negative side). Therefore the area inside the unit circle defines a stable system, and the circle itself defines the marginally stable border. We can determine additional properties by holding all parameters constant except one, varying the one in question, and mapping the resulting contour lines. When this is done, the z-plane stability regions and response characteristics can be found with respect to lines of constant damping ratio and natural frequency. The contours of constant natural frequency and damping ratio are shown in Figure 10. Any system inside the unit circle will be stable, and the unit circle itself represents where the damping ratio approaches zero (marginal stability). What is interesting in the z-plane is the added effect of sample time. By changing the sample time we actually make the poles move on the z-plane. In fact, as the sample period becomes too long, the system generally migrates outside of the unit circle, thus becoming unstable. The natural frequency and damping ratio contour lines are not as helpful in the z-plane, since their shape excludes the option of an easy graphical analysis unless special grid paper is used. However, most programs like Matlab can overlay the locus plot with the grid and thus enable the same

Digital Control System Performance

Figure 10  Contours of ωn and ζ in the z-plane.

controller design techniques learned with continuous system design methods. Several observations can be made about z-plane locations:

- The stability boundary is the unit circle, |z| = 1.
- In general, damping decreases from 1 on the positive real axis to 0 on the unit circle the farther out radially we go.
- The location z = +1 corresponds to s = 0 in the s-plane.
- Horizontal lines in the s-plane (constant ωd) map into radial lines in the z-plane.
- Vertical lines in the s-plane (constant decay exponent σ, or 1/τ) map into circles within the unit circle in the z-plane.

As done with analog systems in Figure 9 of Chapter 3, we can show how responses differ, depending on pole locations in the z-plane, as demonstrated in Figure 11.
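For the simplest case of a single real pole at z = p, the impulse response is the sequence p^k, which already exhibits the main response types of Figure 11. A minimal Python sketch (illustrative pole values, not from the text):

```python
def impulse_sequence(p, n=6):
    """First n samples of the impulse response of a single real
    z-plane pole at z = p (the sequence p**k)."""
    return [p ** k for k in range(n)]

# Pole inside the unit circle on the positive real axis: smooth decay.
print(impulse_sequence(0.5))    # 1, 0.5, 0.25, ...
# Pole inside the circle on the negative real axis: decaying oscillation,
# alternating sign every sample.
print(impulse_sequence(-0.5))   # 1, -0.5, 0.25, ...
# Pole outside the unit circle: the response grows without bound.
print(impulse_sequence(1.5))    # 1, 1.5, 2.25, ...
```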

Figure 11  Transient responses and z-plane locations.


Finally, since we can relate transient response characteristics to pole location in the z-plane, we are ready to design and simulate digital controllers using the methods presented for the s-plane. The rules for developing the loci paths are identical whether in the s-plane or z-plane, so the skills required for designing digital controllers using root locus plots are identical to those we learned earlier when designing continuous systems. For review, the summaries of the rules, initially defined in Section 4.4.2, are repeated here in Table 1. This chapter concludes with several examples to illustrate the use of root locus techniques and z-domain transfer functions for determining the dynamic response of sampled (discrete) systems.

Table 1  Guidelines for Constructing Root Locus Plots

1. From the open loop transfer function, G(z)H(z), factor the numerator and denominator to locate the zeros and poles of the system.
2. Locate the n poles on the z-plane using x's. Each loci path begins at a pole; hence the number of paths equals the number of poles, n.
3. Locate the m zeros on the z-plane using o's. Each loci path will end at a zero, if available; the extra paths are asymptotes and head toward infinity. The number of asymptotes therefore equals n - m.
4. To meet the angle condition, the asymptotes make these angles with the positive real axis:
   One asymptote: 180 degrees
   Two asymptotes: 90 degrees and 270 degrees
   Three asymptotes: ±60 degrees and 180 degrees
   Four asymptotes: ±45 degrees and ±135 degrees
5. The asymptotes intersect the real axis at the same point. The point, σ, is found by
   σ = [(sum of the poles) - (sum of the zeros)] / (number of asymptotes)
6. The loci paths include all portions of the real axis that are to the left of an odd number of poles and zeros (complex conjugates cancel each other).
7. When two loci approach a common point on the real axis, they split away from or join the axis at an angle of 90 degrees. The break-away/break-in points are found by solving the characteristic equation for K, taking the derivative with respect to z, and setting dK/dz = 0. The roots of dK/dz = 0 occurring on valid sections of the real axis are the break points.
8. Departure angles from complex poles or arrival angles to complex zeros can be found by applying the angle condition to a test point in the vicinity of the root.
9. Locating the point(s) where the root loci path(s) cross the unit circle and applying the magnitude condition finds the point(s) at which the system becomes unstable.
10. The system gain K can be found by picking the pole locations on the loci path that correspond to the desired transient response and applying the magnitude condition to solve for K. When K = 0 the poles start at the open loop poles; as K approaches infinity the poles approach available zeros or asymptotes.
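Guidelines 3-5 are mechanical enough to script. A small Python sketch (a hypothetical helper, not from the text) that computes the asymptote count, their angles, and the real-axis intersection point:

```python
def asymptotes(poles, zeros):
    """Asymptote count, angles (degrees), and real-axis intersection
    point for a root locus, following guidelines 3-5 of Table 1."""
    n_asym = len(poles) - len(zeros)
    # Angle condition: angles = (2q + 1) * 180 / (n - m), q = 0 .. n-m-1
    angles = [(2 * q + 1) * 180.0 / n_asym for q in range(n_asym)]
    # Intersection: (sum of poles - sum of zeros) / (number of asymptotes)
    sigma = (sum(poles) - sum(zeros)).real / n_asym
    return n_asym, angles, sigma

# Two poles, no zeros (e.g., repeated continuous poles at s = -2):
n_a, ang, sig = asymptotes([-2.0 + 0j, -2.0 + 0j], [])
print(n_a, ang, sig)   # 2 [90.0, 270.0] -2.0
```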


EXAMPLE 8.3
Convert the continuous system transfer function into the discrete equivalent using the ZOH approximation. Determine the poles and zero when:
a. Sample time T = 0.1 s
b. Sample time T = 10 s
Comment on the system stability between the two cases.

G(s) = 4 / [s(s + 4)]

To convert from the continuous into the discrete domain we need to apply the ZOH and take the z transform:

G(z) = [(z - 1)/z] Z{ 4 / [s^2 (s + 4)] }

Letting a = 4 and using the z transform from the table:

G(z) = [(z - 1)/z] Z{ a / [s^2 (s + a)] }
     = [(z - 1)/z] · z[(aT - 1 + e^(-aT))z + (1 - e^(-aT) - aT e^(-aT))] / [a(z - 1)^2 (z - e^(-aT))]

And finally, we can simplify the terms:

G(z) = [(aT - 1 + e^(-aT))z + (1 - e^(-aT) - aT e^(-aT))] / [a(z - 1)(z - e^(-aT))]

To find the first discrete transfer function, let a = 4 and T = 0.1:

G(z) = (0.0176z + 0.0154) / (z^2 - 1.6703z + 0.6703)

Poles: 1, 0.6703; zero: -0.8753.

For the second discrete transfer function, let a = 4 and T = 10:

G(z) = (9.75z + 0.25) / (z^2 - z)

Poles: 1, 0; zero: -0.0256.

It is interesting to note how the poles change as a function of our sample time even though our physical system model has not changed. Additionally, in both cases, the pole at the origin of the s-plane (integrator) maps into the z = 1 point of similar marginal stability in the z-domain. The next example will demonstrate the construction of a root locus plot in the z-domain.

EXAMPLE 8.4
Develop the root locus plot for the system given in Figure 12 when the sample time is T = 0.5 s. Describe the range of responses that will occur and compare them with the results obtained when the system is implemented using an analog controller instead of a digital controller.
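ZOH conversions of this kind can also be scripted. A Python sketch (the book itself uses Matlab) of the step-invariant discretization of the repeated-pole plant G(s) = a^2/(s + a)^2 that appears in this example, using closed-form coefficients consistent with the transform-table approach of this chapter:

```python
import math

def zoh_repeated_pole(a, T):
    """ZOH (step-invariant) discretization of G(s) = a^2 / (s + a)^2:
    G(z) = (b1*z + b0) / (z - p)^2  with p = e^(-aT)."""
    p = math.exp(-a * T)
    b1 = 1.0 - p - a * T * p
    b0 = p * p - p + a * T * p
    return b1, b0, p

b1, b0, p = zoh_repeated_pole(a=2.0, T=0.5)
print(round(b1, 3), round(b0, 3))   # 0.264 0.135
print(round(p, 3))                  # repeated pole at 0.368
print(round(-b0 / b1, 3))           # zero at -0.512
```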


Figure 12  Example: physical system model with ZOH and sampler.

To develop the root locus plot, we first need to derive the discrete transfer function for the system. To do so we apply the ZOH model to the continuous system and take the z transform of the system. With the ZOH the system becomes

G(z) = [(z - 1)/z] Z{ 4 / [s (s + 2)^2] }

Letting a = 2 and using the z transform from the table:

G(z) = [(z - 1)/z] Z{ a^2 / [s (s + a)^2] }
     = [(z - 1)/z] · z[(1 - e^(-aT) - aT e^(-aT))z + (e^(-2aT) - e^(-aT) + aT e^(-aT))] / [(z - 1)(z - e^(-aT))^2]

And finally, we can simplify the terms:

G(z) = [(1 - e^(-aT) - aT e^(-aT))z + (e^(-2aT) - e^(-aT) + aT e^(-aT))] / (z - e^(-aT))^2

To find the discrete transfer function, let a = 2 and T = 0.5:

G(z) = (0.264z + 0.135) / (z^2 - 0.736z + 0.135)

Poles: 0.368, 0.368; zero: -0.512.

Using the open loop discrete transfer function, let us now apply the same root locus plotting guidelines and draw the root locus plot in the z-domain.

Step 1: The transfer function is already factored; there is one zero and two poles. The poles are repeated at z = 0.368 and the zero is located at z = -0.512.
Steps 2 and 3: Locate the poles and zero on the z-plane (using x's and o's) as shown in Figure 13.
Step 4: Since we have two poles and one zero, n - m = 2 - 1 = 1, and we have one asymptote. For one asymptote, the angle relative to the positive real axis is 180 degrees.
Step 5: The negative real axis is the asymptote and there is no intersection point.
Step 6: The section of the real axis to the left of the zero is the only valid portion of the real axis. For our example this is everything to the left of -0.512.
Step 7: There is one break-away point, which coincides with the two poles, since they must immediately leave the real axis as we begin to increase the gain K.

Figure 13  Example: locating poles and zeros in the z-plane.

There is also one break-in point, which can be found by solving the characteristic equation for K and taking the derivative with respect to z. The characteristic equation is found by closing the loop and results in the following polynomial:

1 + K (0.264z + 0.135) / (z^2 - 0.736z + 0.135) = 0

K = -(z^2 - 0.736z + 0.135) / (0.264z + 0.135)

Taking the derivative with respect to z:

dK/dz = -(0.264z^2 + 0.270z - 0.135) / (0.264z + 0.135)^2

To solve for the break-in point we set the numerator to zero and find the roots:

z = -1.39 and 0.368

One root lies on the valid break-in section of the real axis while, as expected, the second root coincides with the location of the two real poles and the corresponding break-away point. Thus the break-in point is at z = -1.4.

Step 8: The angles of departure are 90 degrees when the loci paths leave the real axis (at the two repeated poles). We can now plot our final root locus plot as shown in Figure 14.
Step 9: The system becomes unstable when the root loci paths leave the unit circle. Although one pole does return at higher gains, there is always one pole (on the asymptote) that remains unstable. If we wished to determine at what gain the system crosses the unit circle, we could either apply the magnitude condition at the intersection point of the root loci paths and the unit circle, or close the loop and solve for the gain K that causes the pole magnitude (the square root of the sum of the squared real and imaginary components) to become greater than 1.
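The break-point quadratic from the dK/dz = 0 step above can be solved directly; a Python sketch:

```python
import math

def breakpoints():
    """Roots of the numerator of dK/dz for
    K = -(z^2 - 0.736 z + 0.135) / (0.264 z + 0.135),
    i.e. the quadratic 0.264 z^2 + 0.270 z - 0.135 = 0."""
    a, b, c = 0.264, 0.270, -0.135
    disc = math.sqrt(b * b - 4 * a * c)
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

z1, z2 = breakpoints()
print(round(z1, 3), round(z2, 2))   # 0.368 (break-away) and -1.39 (break-in)
```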

Figure 14  Example: root locus plot for discrete second-order system.

Step 10: A similar procedure to that described in Step 9 can be used to solve for the gain that results in the desired response characteristics. As with analog systems in the s-domain, we can use the performance specifications to determine desired pole locations in the z-plane. The difficulty, however, is that the lines of constant damping ratio and natural frequency are now nonlinear, and to use a graphical method we must overlay our root locus plot with a grid showing these lines (see Figure 10). As the next example demonstrates, this task is much easier to accomplish using a software design tool such as Matlab. To conclude this example, let us plot the root locus in the s-domain, assuming that we have an analog controller as in earlier chapters. Then we can compare the differences regarding stability between continuous (analog) and discrete (sampled) systems. Referring back to our original system described by the block diagram in Figure 12 and including only our original continuous system (ignoring the ZOH and sampler), we see that we have a second-order system with two repeated poles at s = -2 and no zeros. The root locus plot is straightforward, with two asymptotes that intersect the axis at s = -2, no valid sections of real axis, and the poles immediately leaving the real axis and traveling along the asymptotes. The continuous system root locus plot is shown in Figure 15. Comparing Figure 14 and Figure 15 allows us to see an important distinction between analog and digital controllers. Whereas the analog-only system never goes completely unstable (never crosses into the RHP), the sampled (digital) system leaves the unit circle in the z-plane and becomes unstable as gain is increased. Adding the digital ZOH and sampler tends to decrease the stability of our system (it always adds a lag, as noted earlier), and systems that are stable in the continuous domain may become unstable when their inputs and outputs are sampled digitally.
EXAMPLE 8.5
Using Matlab, develop the root locus plot for the system given in Figure 16 when the sample time is T = 0.5 s. Tune the system to have a damping ratio equal to 0.7 and plot the response of the closed loop feedback system when the input is a unit step.

Figure 15  Example: equivalent continuous system root locus plot for comparison (second order).

As demonstrated with earlier analog examples, tools such as Matlab provide many features to help us design control systems. This example uses Matlab to apply the ZOH and convert the system into an equivalent discrete transfer function, draw the root locus plot, overlay the plot with lines of constant damping ratio and natural frequency, and solve for the gain K resulting in a damping ratio ζ = 0.7. This setting is verified using Matlab to generate the sampled output of the system responding to a unit step input. The commands used to perform these tasks are listed here.

%Program to draw root locus plot
%using z-domain transfer function
sysc=tf(4,[1 4 4])          %Make LTI TF in s
sysz=c2d(sysc,0.5,'zoh')    %Convert to discrete TF using ZOH and sample time
rlocus(sysz);               %Draw the root locus plot
zgrid;                      %Add lines of constant damping ratio and natural frequency
K=rlocfind(sysz)            %Solve for K where zeta = 0.7
syscl=feedback(K*sysc,1);   %Close the loops for each system
syszl=feedback(K*sysz,1);
figure;
step(syscl,syszl)           %Plot the CLTF step responses of the continuous and sampled systems

Executing these commands gives us our discrete transfer function as

G(z) = (0.264z + 0.135) / (z^2 - 0.736z + 0.135)

Figure 16  Example: physical system model with ZOH and sampler.

Figure 17  Example: Matlab discrete root locus plot and grid overlay.

This agrees with our result from the previous example. The corresponding root locus plot as generated by Matlab is given in Figure 17. Placing the crosshairs where the root locus plot crosses the line of damping ratio equal to 0.7 returns our gain, K = 0.45. Now we can close the loop with K and use Matlab to generate the step responses for the continuous and sampled models. This comparison is given in Figure 18. From the discrete step response plot we see that we reach a peak value of 0.32 and a steady-state value of 0.31, corresponding to approximately 4% overshoot. This agrees very

Figure 18  Example: Matlab comparison of continuous and sampled step responses.

well with the expected percent overshoot from a system with a damping ratio equal to 0.7. In conclusion, we see that the stability of systems that are sampled and include a ZOH is reduced when compared with the continuous equivalent. Not only does the gain affect stability (as with analog systems); the sample time now also changes the shape of our root locus plot, because it changes the locations of the system poles and zeros (as is easily seen in the different z transforms). Although the guidelines used to develop root locus plots remain the same, the process of determining the desired pole locations on the loci paths becomes more difficult in the z-plane due to the nonlinear lines of constant damping ratio and natural frequency. As we progress, then, we tend to use computers in increasing roles during the design process. It is important to remember the fundamental concepts even when using a computer, since most design decisions can be made, and most errors caught (even though the computer will likely think everything is solved), based on what we know the overall shapes should be. This frees us to use the computer to help us with the laborious details and calculations.

8.5  PROBLEMS

8.1 When we close the loop on a sampled system, the z transform applies to all blocks located where?
8.2 To obtain the closed loop transfer function of digitally controlled systems, the input and output must be _____________.
8.3 Discrete transfer functions may be linear or nonlinear. (T or F)
8.4 Difference equations may be linear or nonlinear. (T or F)
8.5 Given the discrete transfer function, use the difference equation method to determine the output of the system. Let r(t) be a step input occurring at the first sample period and calculate the first five sampled system response values. Using the FVT in z, what is the final steady-state value?

C(z)/R(z) = z / (z^2 + 0.1z - 0.2)

8.6 Given the discrete output in the z-domain, use the difference equation method to determine the output of the system. Calculate the first five sampled system response values. Using the FVT in z, what is the final steady-state value?

Y(z) = (z^3 + z^2 + 1) / (z^3 - 1.3z^2 + z)

8.7 Given the discrete output in the z-domain, find the initial and final values using the IVT and FVT, respectively.

Y(z) = z(z + 1) / [(z - 1)(z^2 - z + 1)]

8.8 Given the continuous system transfer function:
a. Using the FVT in s, what is the steady-state output value in response to a unit step input for the continuous system?
b. Applying a sampler and ZOH with a sample time of 1 s, derive the equivalent discrete transfer function.

c. Write the corresponding difference equations and calculate the first five sampled outputs of the system. Let r(t) be a step input occurring at the first sample period.
d. Using the FVT in z, what is the final steady-state value in response to a unit step input? How does this answer compare with that obtained in part (a) for the continuous system?

G(s) = 1 / [s(s + 1)]

8.9 Given the block diagram in Figure 19,
a. Determine the first five values of the sampled output for the closed loop system. The input is a unit impulse and the sample time is T = 0.1 s.
b. Use the discrete FVT to determine the steady-state error if the input is a unit step. Use the same sample time, T = 0.1 s.

Figure 19  Problem: physical system block diagram.

8.10 Given the block diagram in Figure 20, develop the discrete transfer function and solve for the range of sample times for which the system is stable (see Problem 8.9).

Figure 20  Problem: physical system block diagram.

8.11 Using the system transfer function given, develop the sampled system transfer function and solve for the range of sample times for which the system is stable.

G(s) = 1 / [s(s + 1)]

8.12 For the continuous system transfer function given, derive the sampled system transfer function and the poles and zeros of the sampled system, and briefly describe the type of response. If required, use partial fraction expansion.

G(s) = 10(s + 1) / [s(s + 4)]

8.13 Given the following discrete open loop transfer function:
a. Sketch the root locus plot.
b. Does the system go unstable?
c. Approximately what is the range of damping ratios available?

G(z) = (z^2 + z) / (z^2 + 0.1z - 0.2)

8.14 Sketch the root locus plot for the system in Figure 21 when the sample time is 0.35 s.

Figure 21  Problem: physical system block diagram.

8.15 Sketch the root locus plot for the system in Figure 22 when the sample time is 0.5 s.

Figure 22  Problem: physical system block diagram.

8.16 Use the computer to solve Problem 8.14:
a. Sketch the plot when T = 0.35 s and, if the system goes unstable, solve for the gain K at marginal stability.
b. Sketch the plot when T = 1.5 s and, if the system goes unstable, solve for the gain K at marginal stability.
8.17 Use the computer to solve Problem 8.15. Find the gain K resulting in an approximate damping ratio equal to 0.5.
8.18 Use the computer to develop the root locus plot for the system in Figure 23. The sample time, T, is 0.1 s. Find K where the damping ratio is equal to 0.9.

Figure 23  Problem: physical system block diagram.

8.19 For the system transfer function given, use the computer to
a. Convert the system to a discrete transfer function using the ZOH model and a sample time of 0.5 s.
b. Draw the root locus plot.
c. Solve for the gain K required for a closed loop damping ratio equal to 0.7.
d. Close the loop and generate the sampled output in response to a unit step input.

G(s) = (s + 4) / [s(s + 1)(s + 6)]

8.20 For the system transfer function given, use the computer to
a. Convert the system to a discrete transfer function using the ZOH model and a sample time of 0.25 s.
b. Draw the root locus plot.
c. Solve for the gain K required for a closed loop damping ratio equal to 0.5.
d. Close the loop and generate the sampled output in response to a unit step input.

G(s) = (s + 2)(s + 4) / [(s + 3)(s + 6)(s^2 + 3s + 6)]

9 Digital Control System Design

9.1  OBJECTIVES

- Develop digital algorithms for analog controllers already examined.
- Develop tools to convert from continuous to discrete algorithms.
- Discuss tuning methods for digital controllers.
- Develop methods to design digital controllers directly in the discrete domain.

9.2  INTRODUCTION

Once we enter the z-domain, the design of digital control algorithms is almost identical to the design of continuous systems, the obvious difference being the sampling and its effect on stability. Since digital control algorithms are implemented using microprocessors, the common representation is difference equations. Although these require some previous values to be stored, they form a simple algorithm to program and use. In fact, any controller that can be represented as a transfer function in the z-domain is quite easy to implement as difference equations, the only requirement being that it does not require future knowledge of our system. When we move to nonlinear and/or other advanced controllers that cannot be represented as transfer functions in z, the design process becomes more difficult. Nonlinear difference equations, however, are still quite simple to implement in most microprocessors once the design is completed.

9.3  PROPORTIONAL-INTEGRAL-DERIVATIVE (PID) CONTROLLERS

Even in the digital realm of controllers, PID algorithms are extremely popular and continue to serve many applications well. Since we saw earlier that using different approximations will result in different difference equations, we find many different representations of PID controllers using difference equations. PID algorithms are popular in large part because of their previous use in analog systems, and because many people are familiar with them. As this chapter demonstrates, however, many additional options become available when using digital controllers, and if classical PID
algorithms are not capable of achieving the proper response, we can directly design digital controller algorithms using the skills from the previous two chapters.

9.3.1  Digital Algorithms

Although the goal is to obtain a difference equation (or equations) representing our control law, the method by which it is obtained varies. We may simply use difference approximations for the different controller terms, or we may use z transform techniques and convert a controller from the s-domain into the z-domain, thereby enabling us to write the difference equations. As we saw earlier, both methods work, but different results are obtained. Even within these two approaches there are several additional options. For example, when using difference approximations we can use forward or backward approximations, and when converting from the s-domain into the z-domain we have the zero-order hold (ZOH), bilinear transform, or first-order hold transformations. The important conclusion to be made is that we only approximate analog systems when we convert them to digital representations.

9.3.1.1  Approximating PID Terms Using Difference Equations

The easiest way to understand the development of a control algorithm is to write difference equations for each of the different control actions. In our PID controller examined here, this means approximating a summation (integral) and a differential (derivative) using difference equations; the proportional term remains the same. The derivative term can be approximated using forward, backward, or central difference techniques by calculating the difference between the appropriate error samples and dividing by the sample period. These approximations are shown below, where e(k) is the sampled output of the summing junction (our error at each sample instant).

Approximating a derivative:

Backward difference: [e(k) - e(k-1)] / T
Forward difference: [e(k+1) - e(k)] / T
Central difference: [e(k+1) - e(k-1)] / (2T)

Although the central difference method provides the best results, since it "averages" over two sample periods, a future error value is needed, so from a programming perspective it is not that useful. The same is true for the forward difference approximation. In general, then, the backward difference is commonly used to approximate the derivative term in our PID algorithm. The same concepts can be used for approximating the integral, where the area added under the error curve between samples can be given by three alternative difference equations.

Approximating an integral:

Backward rectangular rule: T e(k-1)
Forward rectangular rule: T e(k)
Trapezoidal rule: (T/2) [e(k) + e(k-1)]
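These one-step rules are easy to sanity-check against a known signal; a minimal Python sketch (the test signal is an illustrative assumption, not from the text):

```python
import math

def backward_diff(e_k, e_km1, T):
    """Backward-difference approximation of de/dt."""
    return (e_k - e_km1) / T

def trapezoid_step(e_k, e_km1, T):
    """Trapezoidal approximation of the integral accumulated over
    one sample period of width T."""
    return 0.5 * T * (e_k + e_km1)

# Check against e(t) = sin(t): the derivative is cos(t) and the exact
# integral over one sample step is cos(t - T) - cos(t).
T = 0.01
t = 1.0
d_approx = backward_diff(math.sin(t), math.sin(t - T), T)
i_approx = trapezoid_step(math.sin(t), math.sin(t - T), T)
print(abs(d_approx - math.cos(t)))                      # small, O(T)
print(abs(i_approx - (math.cos(t - T) - math.cos(t))))  # far smaller, O(T^3)
```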


The trapezoidal rule gives the best approximation, since it integrates the area determined by the width, T, and the average error, not just the current or previous value. To operate as an integral gain, it must continually sum the error and thus must include a memory term, as shown in the trapezoidal approximation that follows; otherwise, the error is only that which accumulated during the current sample period. Finally, realizing that the proportional term is just u(k) = Kp e(k), we can proceed to write the difference equations for PI and PID algorithms, recognizing that different approximations will result in slightly different forms. Two forms are given, termed the position and velocity (or incremental) algorithms. The position algorithm results in the actual controller output (i.e., the command to valve spool position, etc.), while the velocity algorithm represents the amount to be added to the previous controller output term. This is seen in that the error is only used to calculate the amount of change to the output, which is then simply added to the previous output, u(k-1). Velocity algorithms have several advantages: the output maintains its position in the case of computer failure, and it is not as likely to saturate actuators upon startup. This is an easy way to implement bumpless transfer for cases where the controller is switched between manual and automatic control. In a normal position algorithm the controller will continually integrate the error, such that when the system is returned to automatic control, a large bump occurs. Bumpless transfer can be implemented in position algorithms by initializing the controller values with the current system values before switching back to automatic control. The velocity (incremental) command can also be used to interface with stepper motors by rounding the desired change in controller output to represent the number of steps required by the stepper motor.
In this role the stepper motor acts as the uðk  1Þ term since it holds it current position until the next signal is given. Position PI algorithm using trapezoidal rule: uðkÞ ¼ Kp eðkÞ þ sðkÞ sðkÞ ¼ sðk  1Þ þ Ki

T ½eðkÞ þ eðk  1Þ 2

When implementing the position algorithm, the integral term, s(k), must be calculated separately so that it is available at the next sample time as s(k-1). To derive the velocity form of the PID algorithm we take u(k), decrement k by 1 in each term, resulting in an expression for u(k-1), and subtract the two resulting algorithms. After simplifying the result of the subtraction we can write the velocity algorithm.

Velocity PI algorithm using trapezoidal rule:

u(k) - u(k-1) = Kp [e(k) - e(k-1)] + Ki (T/2) [e(k) + e(k-1)]

Or we may collect and group terms according to the error samples and express the velocity PI algorithm in a form more amenable to programming:

u(k) - u(k-1) = [Kp + Ki (T/2)] e(k) + [Ki (T/2) - Kp] e(k-1)

Finally, the complete PID algorithm can be derived using a trapezoidal approximation for the integral and a backward rectangular approximation for the derivative.

u(k) = Kp e(k) + s(k) + (Kd/T) [e(k) - e(k-1)]
s(k) = s(k-1) + Ki (T/2) [e(k) + e(k-1)]

We can follow the same procedure as with the PI and derive the velocity (incremental) representation as

u(k) = u(k-1) + Kp [e(k) - e(k-1)] + Ki (T/2) [e(k) + e(k-1)] + (Kd/T) [e(k) - 2e(k-1) + e(k-2)]
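The velocity PID above translates almost line-for-line into code. A minimal Python sketch (the closure-based structure and the gain values are illustrative assumptions, not from the text):

```python
def velocity_pid(Kp, Ki, Kd, T):
    """Velocity (incremental) PID using a trapezoidal integral and a
    backward-difference derivative. Returns a stateful step function."""
    state = {"u": 0.0, "e1": 0.0, "e2": 0.0}   # u(k-1), e(k-1), e(k-2)

    def step(e):
        du = (Kp * (e - state["e1"])
              + Ki * (T / 2.0) * (e + state["e1"])
              + (Kd / T) * (e - 2.0 * state["e1"] + state["e2"]))
        state["u"] += du
        state["e2"], state["e1"] = state["e1"], e
        return state["u"]

    return step

# With integral action only and a constant unit error, the output ramps
# by Ki*T per sample (after the first half-step of the trapezoid).
ctrl = velocity_pid(Kp=0.0, Ki=10.0, Kd=0.0, T=0.1)
outs = [ctrl(1.0) for _ in range(4)]
print(outs)   # [0.5, 1.5, 2.5, 3.5]
```

Only three stored values are needed, which is what makes the velocity form attractive on small microprocessors.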

Sometimes the integrated error is only included in the term u(k-1) and Ki acts only on the current error level, in which case the algorithm (common in PLC modules) simply becomes

u(k) = u(k-1) + Kp [e(k) - e(k-1)] + Ki (T/2) e(k) + (Kd/T) [e(k) - 2e(k-1) + e(k-2)]

Finally, if so desired, we can express the PID algorithm using the integral and derivative times, Ti and Td, as used in the earlier analog PID representations and Ziegler-Nichols tuning methods:

u(k) = u(k-1) + Kp { [e(k) - e(k-1)] + [T/(2 Ti)] [e(k) + e(k-1)] + (Td/T) [e(k) - 2e(k-1) + e(k-2)] }

Several modifications to the algorithms are possible which help make them more suitable under difficult conditions. First, it is quite easy to prevent integral windup by limiting the value of s(k) to maximum positive and negative values using simple "if" statements. Also, modifying the derivative approximations can help with noisy signals. The derivative approximation can be further improved by averaging the rate of change of error over the previous four (or however many are desired) samples to further smooth out noise problems. The disadvantage is that this requires additional storage values and introduces additional lag into the system. A similar effect is accomplished by adding digital filters to the input signals. Finally, it is now easy to implement I-PD (see Sec. 5.4.1), since the physical system output is already sampled and can be used in place of the error difference each sample period. It requires that we use the integral gain, since it is the only term that directly acts on the error. The new algorithm becomes

u(k) = u(k-1) + Kp [c(k-1) - c(k)] + Ki T [r(k) - c(k)] + (Kd/T) [-c(k) + 2c(k-1) - c(k-2)]
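One update of the I-PD law above can be sketched as follows (Python; the argument names and gain values are illustrative assumptions, not from the text):

```python
def ipd_step(u_prev, r_k, c_k, c_km1, c_km2, Kp, Ki, Kd, T):
    """One update of the velocity I-PD law: only the integral term sees
    the error r - c, while P and D act on the measured output c alone,
    which avoids set-point kick."""
    return (u_prev
            + Kp * (c_km1 - c_k)
            + Ki * T * (r_k - c_k)
            + (Kd / T) * (-c_k + 2.0 * c_km1 - c_km2))

# A set-point jump with the output at rest changes u only through Ki:
u = ipd_step(0.0, r_k=1.0, c_k=0.0, c_km1=0.0, c_km2=0.0,
             Kp=2.0, Ki=0.5, Kd=0.1, T=0.1)
print(u)   # 0.05 = Ki*T*(r - c); the P and D terms contribute nothing
```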


Remember that many of the signs are reversed because e(k) = r(k) - c(k), and now only c(k), the actual physical feedback of the system, is used for the proportional and derivative terms. This helps with set-point-kick problems. Concluding this section on PID difference equations, we see that there are many options, since the difference equation is only an approximation to begin with. Having the controller implemented as a difference equation does make it easy for us to use many of our analog concepts (e.g., I-PD) and approximate derivatives. Many manufacturers of digital controllers (PLCs, etc.) make slight proprietary modifications designed to enhance performance in the applications their components are designed for.

9.3.1.2  Conversion from s-Domain PID Controllers

Direct conversion from the s-domain into the z-domain is very easy (using a program like Matlab) and quickly results in getting a set of difference equations to approximate any controller represented in the s-domain. This is advantageous when significant design effort and experience has already been gained with the analog equivalent. The disadvantages are the lack of understanding and thus the corresponding loss of knowing what modifications (in the digital domain) can be used to build in certain attributes. It does allow us to account for the sample time (as least in a limited sense) since the conversion methods use both the continuous system information and the sample time. Using our design experience and knowledge of PID controllers in the continuous domain to develop the digital approximations works very well if the sample time is great enough. It is feasible under these conditions to design the control system using conventional continuous system techniques and simply convert the resulting controller to the z-domain to obtain the difference equations. This allows us to use all our s-domain root locus and frequency techniques to design the actual controller. As discussed earlier, if the sampling rate is greater than 20 times the system bandwidth or natural frequency, the resulting digital controller will closely approximate the continuous controller and the method works well; otherwise, it becomes beneficial to use the direct design methods discussed below, allowing us to account for the sample and hold effects, while not being constrained to P, I, and D corrective actions. The simplest conversion is to simply use the transformation z ¼ esT and map the pole and zero locations from the continuous (s) domain into the discrete (z) domain. This is commonly called pole-zero matching and will be demonstrated when developing digital approximations of phase-lag and phase-lead controllers. 
The transformation is also the starting point for the bilinear, or Tustin's, approximation. If we solve the transformation for s, we get

s = (1/T) ln(z)

The bilinear transformation is the result when we perform a series expansion on this expression and discard all of the higher-order terms. The term that is retained then becomes our first-order approximation of the transform. It is applied as follows:

s = (2/T) (z - 1)/(z + 1)

370

Chapter 9

It can be shown that it is very similar to the trapezoidal approximation used in the preceding sections. The bilinear transformation process can get tedious when developing the difference equations by hand but is easily done using computer programs like Matlab. The concept, however, is simple. To convert from the s-domain, we simply substitute the transform in for each s that appears in our controller and simplify the result until we obtain our controller in the z-domain.

EXAMPLE 9.1
Convert the PI controller, represented as designed in the s-domain, into the z-domain using the bilinear transform. Write the corresponding difference equation for the discrete PI approximation.

Gc(s) = 100 + 10/s

To find the equivalent discrete representation, substitute the bilinear transform in for each s term.

Gc(z) = D(z) = 100 + 10/[(2/T)(z - 1)/(z + 1)] = [(200 + 10T)z - (200 - 10T)]/[2(z - 1)]

Consistent with our earlier results, where sampled system transfer functions are dependent on sample time, we see the same effect with our equivalent discrete controller. To derive the difference equation, let us assume T = 0.1 s, which results in a discrete controller transfer function of

U(z)/E(z) = (201z - 199)/(2z - 2)

Multiplying the top and bottom by z^-1 allows us to write our difference equation as

u(k) = u(k - 1) + (201/2)e(k) - (199/2)e(k - 1)

So we see that the bilinear transform can be used to develop approximate controller algorithms represented as difference equations.

EXAMPLE 9.2
Convert the PI controller, represented as designed in the s-domain, into the z-domain using Matlab and the bilinear transform. Write the corresponding difference equation for the discrete PI approximation if the sample time is 0.1 s.

Gc(s) = 100 + 10/s = (100s + 10)/s

In Matlab we can define the continuous system transfer function and convert to the z-domain using the bilinear transform by executing the following commands:

>>num=[100 10];
>>den=[1 0];
>>sysc=tf(num,den)
>>sysz=c2d(sysc,0.1,'tustin')

Digital Control System Design

371

This results in the following transfer function, identical to the one developed manually in the preceding example.

U(z)/E(z) = (100.5z - 99.5)/(z - 1)
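For the PI structure, the Tustin substitution can also be carried out by hand once and reused; the following pure-Python sketch (the helper name is hypothetical, not from the text) reproduces these coefficients directly from Kp, Ki, and T.

```python
def tustin_pi(Kp, Ki, T):
    """Bilinear (Tustin) discretization of Gc(s) = Kp + Ki/s.
    Substituting s = (2/T)(z - 1)/(z + 1) and simplifying gives
    Gc(z) = [(Kp + Ki*T/2)z - (Kp - Ki*T/2)] / (z - 1),
    returned as numerator and denominator coefficient lists in z."""
    b0 = Kp + Ki * T / 2.0
    b1 = -(Kp - Ki * T / 2.0)
    return [b0, b1], [1.0, -1.0]

num, den = tustin_pi(100.0, 10.0, 0.1)
# num = [100.5, -99.5], den = [1.0, -1.0],
# matching U(z)/E(z) = (100.5z - 99.5)/(z - 1)
```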

Using a transformation manually quickly becomes an exercise in algebra for all but simple systems. Using tools like Matlab allows us to apply the same techniques to much larger systems and still obtain difference equations for implementing our controller. This allows us to take any analog design, not just the PID designs primarily discussed here, and convert it into an equivalent discrete design that may be implemented digitally.

9.3.2 Tuning Methods

In the digital domain we still have similar procedures regarding the tuning of our controller. This section relates the tuning of analog controllers, as learned earlier, to the tuning of digital controllers. The primary difference is that we now have additional variables that tend to complicate matters slightly when compared to continuous system tuning. The first difference is the sample time. If the sampling intervals are short compared to the system response, as discussed in the preceding sections, then continuous system tuning methods like Ziegler-Nichols work well. If the sample time is longer, however, the Ziegler-Nichols methods serve only to provide a rough estimate, and the values become very sensitive to the sample time. For all tuning methods with digital controllers there will be a dependence on the sample time, and if it is changed another round of tuning is likely required to achieve optimum performance. To use the Ziegler-Nichols method with discrete algorithms, we simply follow the same procedure presented for analog controllers, only we must operate the controller at the sampling rate at which it will operate when finished. Then we turn the integral and derivative gains off, increase the proportional gain until the system oscillates (or use the step response method), and record the gain. Since the controller is implemented digitally, it is a straightforward procedure to apply the values in Table 1 or Table 2 of Chapter 5 and tune the controller. If we know that this method is to be used, we can express our difference equations using the integral time and derivative time notation presented in the preceding section. Additional problems occur in the hardware and software. When converting from analog to digital we have finite resolution that may become a problem with lower-end converters. Also, in software, we may be limited by word lengths, integers, etc., which all determine how we set up our controller and tune it.
We may have to scale the inputs and outputs to take better advantage of our processor. Hardware and software related issues are discussed more fully in Chapter 10. When designing and tuning discrete algorithms, if we incorporate integral windup protection we should set the upper and lower limits to closely approximate the saturation points of our physical system. This will allow maximum performance from our system without integral windup problems. This is where understanding the physics of our system is very useful.
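As an illustration of the windup protection just described, here is a minimal Python sketch of a PI difference equation with integrator clamping; the gains and saturation limits are arbitrary placeholders, not values from the text, and a real implementation would use the limits of the actual actuator.

```python
def make_pi_antiwindup(Kp, Ki, T, u_min, u_max):
    """PI difference equation with integral windup protection: the integral
    state is clamped so it cannot grow past the actuator saturation limits
    [u_min, u_max], and the total output is saturated as well."""
    state = {"integral": 0.0}

    def update(e):
        state["integral"] += Ki * T * e
        # Clamp the integral so it alone can never exceed the output limits
        state["integral"] = max(u_min, min(u_max, state["integral"]))
        u = Kp * e + state["integral"]
        return max(u_min, min(u_max, u))   # saturate the total output too

    return update
```

Without the clamp, a sustained error would keep growing the integral state far past saturation, and the controller would overshoot badly while the integral unwinds; with the clamp, the output recovers as soon as the error reverses.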


Finally, since microprocessors can change the parameters during operation, it is possible to incorporate auto-tuning methods based on system response. For example, the proportional gain can continually be adjusted to limit the system overshoot. If the system begins to overshoot more, the gain is decreased. Sometimes it is beneficial to base changes on a physical parameter (i.e., adaptive control). These methods are discussed more fully in Chapter 11.

9.4 PHASE-LAG AND PHASE-LEAD CONTROLLERS

Phase-lag and phase-lead controllers, as with PID, can be designed in the analog domain using common techniques and converted to equivalent discrete representations. The bilinear transform, demonstrated with PID controllers, can be used again when dealing with phase-lag and phase-lead designs. Another method that lends itself particularly well to phase-lag and phase-lead is the pole-zero matching technique. Since phase-lag/lead controllers are designed to place one pole and one zero on the real axis, it is very simple to use the transform z = e^(sT) to map the s-plane pole and zero locations directly into the z-domain. The caution here is that we must also make sure that the steady-state gain of each representation remains the same. This is easy to accomplish by applying the continuous and discrete representations of the final value theorem (FVT) to each controller. First find the steady-state gain of the analog controller and set the discrete controller gain to result in the same magnitude. This will be demonstrated in the example problems.

EXAMPLE 9.3
Given an analog phase-lead controller, use the pole-zero matching technique to design the equivalent discrete controller. Express the final controller as a difference equation that could be easily implemented in a digital microprocessor. Use a sample time T = 0.5 s.

Gc(s) = 10 (s + 1)/(s + 5)

To find the equivalent difference equations we will use the identity z = e^(sT) and solve for the equivalent z-plane locations. This will allow us to express the controller as a transfer function in the z-domain and then to write the difference equations from the transfer function. Beginning with our original analog controller, we note that the zero location is at s = -1 and the pole location is at s = -5. Since our sample time for this example is 1/2 second, we can use the identity to find each discrete pole and zero location.

Zero location in z: z = e^(sT) = e^((-1)(0.5)) = 0.61

Pole location in z: z = e^(sT) = e^((-5)(0.5)) = 0.08

Now we can write the new controller transfer function as

Gc(z) = K (z - 0.61)/(z - 0.08)

We still must equate the steady-state gains using the two representations of the FVT. This allows us to determine K for our discrete controller.


Gc(z)|z=1 = Gc(s)|s=0

Gc(z)|z=1 = K (z - 0.61)/(z - 0.08)|z=1 = Gc(s)|s=0 = 10 (s + 1)/(s + 5)|s=0

Solve for K:

K (1 - 0.61)/(1 - 0.08) = 0.424K = 10 (1/5) = 2

K = 4.72

Finally, the equivalent phase-lead controller in z is

U(z)/E(z) = Gc(z) = 4.72 (z - 0.61)/(z - 0.08)

Cross-multiplying the controller transfer function and using z^-1 as our delay shift operator enables us to derive our difference equation.

U(z)(z - 0.08) = E(z) 4.72(z - 0.61)
U(z)(1 - 0.08z^-1) = E(z) 4.72(1 - 0.61z^-1)

Our final difference equation, the discrete approximation of the original phase-lead controller, is

u(k) = 0.08u(k - 1) + 4.72e(k) - 2.88e(k - 1)

Remember that this particular difference equation is developed with the assumption that the sample time equals 1/2 s. As is true with all digital controllers derived in this fashion, when we change the sample time we must also update our discrete algorithm. Of course, if the sample time becomes too long the design fails altogether.

EXAMPLE 9.4
For the system given in Figure 1, use Matlab to design a phase-lead controller using continuous system techniques. Use pole-zero matching to convert the controller into the z-domain and verify that the system is stable and usable. Generate a step response using Matlab.

A damping ratio of 0.5
Sample time, T = 0.01 s

Since this problem is open loop marginally stable, we need to modify the root loci and move them farther to the left. Thus a phase-lead controller is the appropriate choice. We start by adding +36 degrees (a conservative choice) by placing the zero at s = -3 and the pole at s = -22. Using Matlab allows us to choose the point where

Figure 1 Example: system block diagram for phase-lead digital controller design using pole-zero matching.


the loci paths cross the radial line representing a damping ratio equal to 1/2 and find the gain K at that intersection point. The Matlab commands used to define the phase-lead controller and plant transfer function are

clear;
T=0.01;
numc=1*[1 3];          %Place compensator zero at -3
denc=[1 22];           %Place compensator pole at -22
nump=1;                %Forward loop system numerator
denp=[1 0 0];          %Forward loop system denominator
sysc=tf(numc,denc);    %Controller transfer function
sysp=tf(nump,denp);    %System transfer function in forward loop
sysall=sysc*sysp       %Overall compensated system in series

Now, execute the following commands and use Matlab to generate the root locus plot and solve for K, giving us the root locus plot in Figure 2.

rlocus(sysp);          %Generate original root locus plot
hold;
rlocus(sysall);        %Add new root loci to plot
sgrid(0.5,2);          %Place lines of constant damping
Kc=rlocfind(sysall)

As we see, this attracts our loci paths into the stable region and crosses the radial line (ζ = 1/2) when K = 75. Therefore our continuous system phase-lead controller can be defined as

Gc = Phase-Lead = 75 (s + 3)/(s + 22)

To implement the controller digitally, we can use pole-zero matching to find the equivalent controller in the z-domain. Beginning with the analog controller, we note that the zero location is at s = -3 and the pole location is at s = -22. Using our

Figure 2 Example: Matlab root locus plot with continuous system phase-lead compensation.


sample time of 0.01 s and the transformation identity allows us to find each discrete pole and zero location.

Zero location in z: z = e^(sT) = e^((-3)(0.01)) = 0.97

Pole location in z: z = e^(sT) = e^((-22)(0.01)) = 0.80

Now we can write the new controller transfer function as

Gc(z) = K (z - 0.97)/(z - 0.80)

We still must equate the steady-state gains using the two representations of the final value theorem. This allows us to determine K for our discrete controller.

Gc(z)|z=1 = Gc(s)|s=0

Gc(z)|z=1 = K (z - 0.97)/(z - 0.80)|z=1 = Gc(s)|s=0 = 75 (s + 3)/(s + 22)|s=0

Solve for K:

K (1 - 0.97)/(1 - 0.80) = 0.15K = 75 (3/22) = 10.2

K ≈ 70

Finally, the equivalent phase-lead controller in z is

U(z)/E(z) = Gc(z) = 70 (z - 0.97)/(z - 0.80)

To simulate the system using Matlab we need to define the continuous system transfer function, convert it to the z-domain, add our phase-lead controller, and generate the step response. The commands that can be used to achieve this are listed here.

numzc=70*[1 -0.97];       %Place zero at 0.97
denzc=[1 -0.8];           %Place pole at 0.8
syszc=tf(numzc,denzc,T);  %Controller transfer function
sysp=tf(nump,denp);       %System transfer function in forward loop

To verify our design, let us use Matlab again and now generate the discrete root locus plot to see how we have pulled our system loci paths into the unit circle (stability region). We can use the commands given to convert our continuous system into a discrete system with a sample time equal to 0.01 s and with a ZOH applied to the input of the physical system. The result is the root locus plot given in Figure 3.

%Verify the discrete root locus plot
figure;
rlocus(syszc*c2d(sysp,T,'zoh'));
zgrid;
Kz=rlocfind(syszc*c2d(sysp,T,'zoh'))

Since our discrete root locus plot does illustrate that our phase-lead controller stabilizes the system, we expect that our step response will also exhibit the desired


Figure 3 Matlab root locus plot of equivalent phase-lead compensation (pole-zero matching).

response. Executing the following Matlab commands closes the loop and generates the corresponding step response plot, given in Figure 4.

%Close the loop and convert to z between samplers
cltfz=(syszc*c2d(sysp,T,'zoh'))/(1+syszc*c2d(sysp,T,'zoh'))
figure;
step(cltfz,5);

Our percent overshoot with the digital implementation of our continuous system design is approximately 35%, larger than the expected value of 15%. Further tuning

Figure 4 ing).

Example: Matlab step response of phase-lead digital controller (pole-zero match-


could reduce this; such tuning is also easy to do in the z-domain, as the next section demonstrates. It should be noted that the small sample time, with both the pole and zero close to unity, leads to a controller that is sensitive to changes in parameters. If the pole or zero location (or sample time) were to change, the controller could become unstable. Fortunately, the digital controller allows us to consistently execute the desired commands, and the coefficients do not change during operation. So, as we see in the example, it is quite simple to design a controller in the s-domain and convert it to the z-domain using pole-zero matching techniques. The same comments regarding sample time that applied to converting other controllers from continuous to digital still apply here.
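The pole-zero matching recipe used in Examples 9.3 and 9.4 is mechanical enough to automate. Below is a minimal Python sketch for a single-zero, single-pole controller; the function name and argument layout are my own, not from the text.

```python
import math

def pole_zero_match(K_s, zero_s, pole_s, T):
    """Map the analog controller K_s*(s - zero_s)/(s - pole_s) into the
    z-domain with z = e^(sT), then choose the discrete gain K_z so the
    steady-state gains match: Gc(z)|z=1 = Gc(s)|s=0."""
    zero_z = math.exp(zero_s * T)                     # zero mapped via z = e^(sT)
    pole_z = math.exp(pole_s * T)                     # pole mapped via z = e^(sT)
    dc_gain = K_s * (0.0 - zero_s) / (0.0 - pole_s)   # Gc(s) evaluated at s = 0
    K_z = dc_gain * (1.0 - pole_z) / (1.0 - zero_z)
    return K_z, zero_z, pole_z

# Example 9.3: Gc(s) = 10(s + 1)/(s + 5) with T = 0.5 s
Kz, z0, p0 = pole_zero_match(10.0, -1.0, -5.0, 0.5)
# z0 = e^-0.5 = 0.61 and p0 = e^-2.5 = 0.08 to two places; Kz comes out near
# 4.67 with full precision, or 4.72 when the pole and zero are rounded first,
# as in the hand calculation in the text
```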

9.5 DIRECT DESIGN OF DIGITAL CONTROLLERS

Many times we are unable to have sampling rates greater than 20 times the bandwidth, at which point the emulated designs based on continuous system controllers begin to diverge from the desired performance specifications. As the sampling rate falls below 10 times the bandwidth, the controller will likely tend toward instability or actually go unstable. Looking at Figure 5, it is clear that any digital control introduces lag into the system, moving the system toward instability. The amount of lag increases as the sampling period increases. Since we know that lag is introduced when we move to discrete controllers, we would like to design the system accounting for the additional lag, especially as sample times become longer. This leads us into direct design methods in the z-domain. Two primary methods are presented here: root locus design in the z-plane and deadbeat response design. In particular, the deadbeat design is able to take advantage of the fact that realizable algorithms, not physical components, are the only limits. As a result, we are able to implement (design) control actions that are impossible to duplicate in the analog world.

9.5.1 Direct Root Locus Design

We will first develop techniques to directly design controllers in the z-domain using root locus plots. This method is familiar to us through the analogies it shares with its s-plane counterpart. In fact, as we learned when defining stability regions in the z-plane, the rules for drawing the root locus plots are identical and only the stability, frequency, and damping ratio locations have changed. Thus, once we know the type of response we wish to have, we follow the same procedures used in the s-plane, but

Figure 5 Sampled signal lag of actual signals.


with several special items relating to the z-plane. First, review the types of responses relative to the location of system poles in the z-plane as given in Figures 10 and 11 of Chapter 8. In general we will place poles in the right-hand plane (analogous to the s-domain) and inside the unit circle. If sample times get long relative to the system bandwidth, we will be forced to use the left-hand plane, an area without an s-domain counterpart. The closer we get to the origin, the faster our response will settle. When the poles are exactly at the origin, we have a special case called a deadbeat response, a method presented in the next section. The discrete root locus design process, when compared with analog root locus methods, is nearly identical except for two items. First, let us examine the similarities with the s-domain. When we designed our analog controller we used our knowledge of poles and zeros and how they affected the shape of the loci paths to choose and design the optimal controller. We follow the same procedure again and use our discrete controller poles and zeros to attract the root loci inside the unit circle. The z-domain, however, also accounts for the sample time, and if the sample time is changed we need to redraw the plot. The two points where we diverge from the continuous system methods are related. Since we no longer have to physically build the controller algorithm (i.e., with OpAmps, resistors, capacitors, etc.), we can place the poles and zeros wherever we wish. This allows us additional flexibility during the design process. The second point, related to the first, is that even though we do not build the algorithm, we still must be able to program it into a set of instructions that the microprocessor understands; this is our constraint on direct design methods. A controller that can be programmed and implemented is often said to be realizable.
The net effect is that we do have more flexibility when designing digital controllers, even when subject to being realizable. The process of designing a control system in the z-domain and checking whether or not it is realizable can best be demonstrated through several examples.

EXAMPLE 9.5
Consider designing a controller for the second-order marginally stable system:

G(s) = 1/s^2

The desired specifications are to have less than 17% overshoot and a settling time < 15 sec. Using our second-order specifications, this relates to a damping ratio of 0.5 and a natural frequency of 0.5 rad/sec. The first task is to convert the system to a discrete transfer function. This is accomplished by taking the z transform of the physical system with a ZOH.

G(z) = ((z - 1)/z) Z{G(s)/s} = ((z - 1)/z) (T^2 z(z + 1))/(2(z - 1)^3)

Simplifying:

G(z) = (T^2/2) (z + 1)/(z - 1)^2

To illustrate direct design methods, let us choose a long sample time of 0.5 sec, or 2 Hz. After substituting in the sample time we get the following discrete transfer function:

G(z) = 0.125 (z + 1)/(z - 1)^2

Using the rules presented to develop root locus plots in Table 1 of Chapter 8, the open loop uncompensated locus paths can be plotted as shown in Figure 6. Remember that the rules remain the same, and thus we have two poles at z = 1 (marginally stable) and a zero at z = -1. This means we have one asymptote (the negative real axis) and the only valid section of real axis lies to the left of the zero at z = -1. The root locus plot is, as we would expect, unstable, since even the continuous system is marginally stable. In fact, and as shown earlier, when we sample our system we are no longer marginally stable but actually become unstable. Using our knowledge of root locus we know that more than a proportional controller is needed, since the shape has to be changed to pull the loci into the stable regions. In the same way that adding a zero adds stability in the s-plane, we can use the same idea to attract the loci in the z-plane. Let us first try placing a zero at z = 1/2 to simulate a PD controller. After placing the controller zero at z = 1/2 and constructing the new plot, we get the compensated root locus plot shown in Figure 7. By adding the zero, we have two zeros and two poles and thus no asymptotes. Additionally, we have constrained the only valid region on the real axis to fall within the stable unit circle region. At ωn = 3π/(10T) ≈ 1.8 rad/sec, the damping is approximately 0.7 and all the conditions have been met. To determine the gain, we can use the magnitude condition as shown in earlier sections or use Matlab. At this point let us represent the gain as K and verify that the controller is realizable (i.e., can be programmed as a difference algorithm). This is easily determined by developing the difference equations from the controller transfer function:

Gc(z) = K(z - 1/2)

Figure 6 Example: open loop root locus plot in z-plane.


Figure 7 Example: z root locus modified by adding zero.

This gives the following difference equation:

u(k - 1) = K e(k) - (K/2) e(k - 1)

Now we see that we have a problem implementing this particular controller: if we shift one delay ahead to get u(k), the desired controller output, we would also need to know e(k + 1), the future error. This is a common problem occurring when the denominator is of lesser order than the numerator (in terms of z). To remedy this, let us go back and add another pole to the controller to increase the order of the denominator by 1. This allows us to keep one of our valid real-axis root loci sections in the stable unit circle. After we add an additional pole at z = -0.25 and move the zero to z = 0.9, we can redraw the root locus plot as shown in Figure 8.

Figure 8 Example: discrete root locus of system compensated with a pole and zero.


Although adding the additional pole creates the situation where the system does become unstable at high gains (the loci paths leave the unit circle and an asymptote is now at 180 degrees), the compensator does attract all three paths to the desired region if the proper gain is chosen. The original zero from the first controller attempt was moved to 0.9 to pull the paths closer to the real axis. Now we can use Matlab to find the gain that results in the desired response and controller transfer function. The commands listed here define the original system, convert it into the z-domain using a ZOH, and generate the compensated and uncompensated root locus plots. Finally, Matlab is used to close the loop and generate the closed loop step response, verifying our design.

%Program commands to design digital controller in z-domain
clear;
T=0.5;
nump=1;                   %Forward loop system numerator
denp=[1 0 0];             %Forward loop system denominator
sysp=tf(nump,denp);       %System transfer function in forward loop
sysz=c2d(sysp,T,'zoh')
rlocus(sysz);             %Draw discrete root loci
zgrid;                    %Place z-grid on plot
numzc=[1 -0.9];           %Place zero at 0.9
denzc=[1 0.25];           %Place pole at -0.25
syszc1=tf(numzc,denzc,T);
hold;
rlocus(syszc1*sysz);      %Draw compensated discrete root loci
K=rlocfind(syszc1*sysz)
%Close the loop and convert to z between samplers
cltfz=feedback(3.4*syszc1*sysz,1)
%Generate the closed loop step response
figure;
step(cltfz,10);

The Matlab plot showing the uncompensated and compensated root locus plots is given in Figure 9. Using the rlocfind command returns a gain of 3.4 when we place our three poles all near the real axis (ζ ≈ 1). This results in the final compensator transfer function expressed as

Gc(z) = U(z)/E(z) = 3.4(z - 0.9)/(z + 0.25)

To implement our controller in a microprocessor we can cross-multiply and express our transfer function as a difference equation:

u(k) = -0.25u(k - 1) + 3.4e(k) - 3.06e(k - 1)

This is realizable and easily implemented in a digital computer, as opposed to the previous design that only placed one zero in the z-domain. For this difference equation the controller output, u(k), depends only on the current error, e(k), and the previous controller output and error input. It is also possible to reduce our memory requirements by designing a similar controller that adds the system pole directly on


Figure 9 Example: Matlab discrete root locus plots of compensated and uncompensated systems.

the origin, instead of at z = -0.25. This has the desired effect of still making our controller realizable, and when we cross-multiply, the z added in the denominator only acts as a shift operator on u(k) and does not create a need for storing u(k - 1). For example, our transfer function would become

Gc(z) = U(z)/E(z) = K(z - 0.9)/z

And the new difference equation becomes

u(k) = Ke(k) - 0.9Ke(k - 1)

With this formulation only one storage variable, e(k - 1), is needed. This does have the effect of modifying our root locus to that shown in Figure 10. When we constrain the pole to be located at the origin, it does limit our design options, but as shown we are still able to stabilize the system while requiring one less storage variable. To verify that we achieve our desired response with the sample rate of 2 Hz, we can use Matlab to generate the step response of the closed loop system. The step response for the system compensated with a zero at z = 0.9 and a pole at z = -0.25 is given in Figure 11. As this example demonstrates, we can directly design a digital controller in the z-domain using root locus techniques similar to those developed for the s-domain; only the mapping is different, the rules for plotting are the same. The digital controller stabilized the system, as shown in Figure 11, even though we added a pole to the controller to make the algorithm realizable. Since the controller is developed directly in the discrete domain and accounts for the additional lag created by the sampling period, we can expect better agreement when implemented and tested than when a continuous system controller is converted into the discrete domain.
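Once reduced to a difference equation, the realizable controller from this example is only a few lines of code. The following Python sketch of the general first-order form u(k) = -a1*u(k-1) + b0*e(k) + b1*e(k-1) is an illustration (the helper name is hypothetical); it is shown with the Example 9.5 coefficients.

```python
def make_first_order_controller(a1, b0, b1):
    """Realizable first-order difference equation
    u(k) = -a1*u(k-1) + b0*e(k) + b1*e(k-1):
    the output needs only the current error plus one-sample-old values."""
    state = {"u_prev": 0.0, "e_prev": 0.0}

    def update(e):
        u = -a1 * state["u_prev"] + b0 * e + b1 * state["e_prev"]
        state["u_prev"], state["e_prev"] = u, e
        return u

    return update

# Gc(z) = 3.4(z - 0.9)/(z + 0.25) from Example 9.5 gives
# u(k) = -0.25u(k-1) + 3.4e(k) - 3.06e(k-1):
ctrl = make_first_order_controller(0.25, 3.4, -3.06)
```

Each sampling period, the microprocessor would read e(k), call the update, and write u(k) to the actuator; the two stored values are the entire controller state.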

Figure 10 Example: discrete root locus of system compensated with a pole at the origin and a zero.

EXAMPLE 9.6
For the system given in Figure 12, use Matlab to design a discrete controller using discrete root locus techniques in the z-domain. Verify that the system meets the requirements and generate a step response using Matlab.

A damping ratio, ζ ≥ 0.7
Steady-state error, ess = 0
Settling time, ts = 2 sec
Sample time, T = 0.05 s

Figure 11 Example: Matlab discrete step response plot of compensated system.


Figure 12 Example: system block diagram for discrete root locus controller design.

In the continuous domain the physical system is open loop stable with two poles located at -2 ± 3.5j, giving a natural frequency equal to 4 rad/sec and a damping ratio equal to 1/2. To meet the system design goals, we will need to make the system type 1 (add an integrator), meet the system damping ratio, and have a system natural frequency greater than approximately 3 rad/sec. To begin the design process, we will use Matlab to add a ZOH and convert the physical system into an equivalent sampled system. The uncompensated root locus plot is drawn, and grid lines of constant damping ratio and natural frequency are shown on the plot.

%Program commands to design digital controller in z-domain
clear;
T=0.05;
nump=8;               %Forward loop system numerator
denp=[1 4 16];        %Forward loop system denominator
sysp=tf(nump,denp);   %System transfer function in forward loop
sysz=c2d(sysp,T,'zoh')
rlocus(sysz);         %Draw discrete root loci
zgrid;                %Place z-grid on plot

To design the compensator we will place a pole at z = 1, the equivalent of placing a pole at the origin in the s-domain (z = e^(sT)). Since this tends to make the system go unstable very quickly, we will also place a zero at z = 0.3 to help attract the loci path inside the unit circle and to help add positive phase angle back into the system. This is somewhat analogous to a PI controller, where both a zero and a pole are added by the controller. Now we can use Matlab to verify the root locus plot and choose a gain that allows us to meet our design specifications. An interactive design tool is also included in later versions of Matlab; it is invoked using the command rltool. The Matlab commands used to define the compensator and generate the compensated root locus plot are

numzc=[1 -0.3];           %Place zero at 0.3
denzc=[1 -1];             %Place pole at 1
syszc1=tf(numzc,denzc,T);
hold;
rlocus(syszc1*sysz);      %Draw compensated discrete root loci
K=rlocfind(syszc1*sysz)

Using these commands (in combination with the previous segment of commands) generates the plot given in Figure 13 and results in a controller gain equal to 0.2. After adding the compensator we actually add additional instability (lag) into the system, as shown by the outer root locus plot. Choosing root locations near z = 0.9 allows us to retain our original dynamics (which were close to meeting the dynamic requirements) while significantly improving our steady-state error performance.

Figure 13 Example: Matlab discrete root locus plots of compensated and uncompensated systems.

As with a PI controller in the analog realm, this controller, by placing a pole at z = 1, tends to decrease the stability of the system. Finally, we will use the Matlab commands listed to generate a sampled step response of the compensated and uncompensated systems, shown in Figure 14.

%Close the loop and convert to z between samplers
cltf_c=feedback(0.2*syszc1*sysz,1)
cltf_uc=feedback(sysz,1)
%Generate the step responses
figure;
step(cltf_c,cltf_uc,4);

Figure 14 Example: Matlab discrete step response plot of compensated system.


From Figure 14 we see that the controller significantly improves our steady-state error while coming close to meeting our settling time of 2 sec. To complete the solution, let us develop the difference equation for our discrete controller. Since we place one pole and one zero and have equal powers of z in the numerator and denominator, it should be realizable. The controller transfer function is

Gc(z) = U(z)/E(z) = 0.2(z - 0.3)/(z - 1)

And the new difference equation becomes

u(k) = u(k - 1) + 0.2e(k) - 0.06e(k - 1)

Our controller as designed is realizable and can easily be programmed as a sampled input-output algorithm. Recognize that had we started with an analog PI controller we would have achieved a similar difference equation, at least in form. In conclusion, we see how the root locus techniques developed in the earlier analog design chapters allow us to also design digital controllers represented in the z-domain. The concept of using time domain performance requirements (i.e., peak overshoot and settling time) can also be applied in the z-domain, where we pick controller pole and zero locations to achieve desired damping ratios and natural frequencies. The primary difference we noticed is that, in addition to the analog root locus observations of how different gains affect the root locus paths, changing the sample period modifies the pole and zero locations and thus changes the shape of the discrete root locus plot. Picking the desired pole locations in the z-domain also becomes more difficult since the lines of constant damping ratio and natural frequency are nonlinear.
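Cross-multiplying a first-order controller K(z - z0)/(z - p0) always produces the same coefficient pattern, so the last step of these designs can be automated; a small Python sketch (the function name is my own, for illustration only):

```python
def diff_eq_coeffs(K, zero_z, pole_z):
    """For Gc(z) = U(z)/E(z) = K(z - zero_z)/(z - pole_z), cross-multiplying
    and shifting gives u(k) = pole_z*u(k-1) + K*e(k) - K*zero_z*e(k-1).
    Returns the coefficients of u(k-1), e(k), and e(k-1)."""
    return pole_z, K, -K * zero_z

# Example 9.6's controller, Gc(z) = 0.2(z - 0.3)/(z - 1):
a, b0, b1 = diff_eq_coeffs(0.2, 0.3, 1.0)
# u(k) = 1.0*u(k-1) + 0.2*e(k) - 0.06*e(k-1)
```

The same helper reproduces the Example 9.3 phase-lead coefficients when called with (4.72, 0.61, 0.08).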

Direct Response Design

The direct response design method (deadbeat in special cases) demonstrates additional differences between analog and digital controllers. Analog controllers are limited by physical constraints (i.e., the number of OpAmps feasibly used in a design), while digital controllers can approximate functions that would be impossible to duplicate with analog components. Before defining direct response design methods, however, a word of caution is again in order. It does not matter whether the controller is analog, digital, or yet to be invented: if our physical system is not capable of producing the desired response, we have already failed. This theme has been repeated several times in preceding chapters, and yet it bears repeating. A good control system must begin with a properly designed physical system, that is, with amplifiers, actuators, and plant components that are designed for the performance requirements we have set. Another situation might arise when designing the controller first and then realizing, when specifying components, that our ''aggressive'' design has quadrupled the cost of what is actually required. The bottom line is that we should design the physical system and controller to meet our ''real'' requirements. With this stated, let us move on to direct response design methods. The idea behind the direct response method is to pick a transfer function with some desired response, set the closed loop controller and system transfer function equal to it, and solve for the corresponding controller. This is easily shown using the following notation. Let:

Digital Control System Design


D(z) be the controller transfer function
T(z) be the desired transfer function (response)
G(z) be the discrete system transfer function (ZOH, continuous system, and sample effects)
C(z) be the sampled system output
R(z) be the sampled system command input

Then when we close the loop for a unity feedback system we obtain the closed loop transfer function, which can be set equal to the desired transfer function, T(z):

T(z) = C(z)/R(z) = D(z)G(z) / [1 + D(z)G(z)]

This method is highly dependent on the accuracy of the models being used, since the controller is based directly on our system model, G(z). Modeling accuracy is pervasive throughout all levels of control system design, as all of the design tools demonstrated thus far (analog and digital) rely on the accuracy of the models used during the design process. The simplest response to pick, a special case described as deadbeat control, is making the response equal to the command one sample time later. This is achieved by defining T(z) = z^-1. When we cross-multiply we see that the system output C(z) becomes the same as the system input, only one sample period later:

C(z) = R(z) z^-1    or    c(k) = r(k - 1)

There are obvious limitations, and deadbeat design deserves some additional comments, provided in Table 1. Further development of the guidelines presented in Table 1 is as follows:
1. There are physical limitations imposed on the desired response. For example, we could design a deadbeat controller to position a mass such that the mass is always at its commanded position one sample period later. For this to work, the physical actuators must always be able to move the mass the required distance in the desired time. This may be limited by cost (extremely large actuators, $$$) or simply by command sequences that are not feasible (i.e., large step inputs). A simple modification can sometimes alleviate the problem of actuators being unable to follow the controller commands and thus saturating. By changing T(z) to z^-2 or z^-3,

Table 1 Guidelines for Designing Digital Controllers for a Deadbeat Response

1. The physical system must be able to achieve the desired effect within the span of one sample for true deadbeat response.
2. The method relies on pole-zero cancellation and is thus very dependent on the accuracy of the model used.
3. Good response characteristics do not guarantee good disturbance rejection characteristics.
4. The algorithm that results must be programmable (realizable). For general use it must not require knowledge of future variables.
5. Approximate deadbeat response can be achieved by allowing the response to reach the command over multiple samples; the intermediate values can also be defined.


we can design for a deadbeat response to occur in a set number of sample periods, thus providing more time for the system to respond. So for a deadbeat controller to be feasible, the system must be physically able to follow the desired command profile. This is best accomplished by designing the physical components to satisfy the desired profile or by limiting the input sequences to trajectories that are feasible for the existing physical system.
2. The direct design method depends on pole-zero cancellation and is thus highly dependent on model accuracy, especially if the original system poles were unstable. For example, if the deadbeat controller cancelled a pole at 2 in the z-plane by placing a zero there and the physical system changed or was improperly modeled, the unstable pole is no longer cancelled and will cause problems.
3. The direct design controller is targeted at producing the desired response, and no guarantee is made with respect to disturbance rejection properties. These should be simulated to verify satisfactory rejection of disturbances. It is possible to achieve good characteristics when responding to a command input and yet have very poor rejection of disturbances.
4. The controller resulting from direct response design methods must be computationally realizable, that is, able to be implemented using a digital computer. It is obvious from the example in the previous section that the lowest power of z^-1 in the denominator must be less than or equal to the lowest power of z^-1 in the numerator. Thus, the desired response T(z) must be chosen such that its lowest power of z^-1 is equal to or greater than the lowest power in G(z). If the numerator is of lower order than the denominator, it is recommended to add powers of (1 - z^-1) to the numerator; add the number of powers required to make the orders equal.
5. Finally, it is common to develop large overshoots and/or oscillations between samples with deadbeat controllers, since they require large control actions (high gains) to achieve the response and exact cancellation of poles and zeros seldom occurs. Modifying the control algorithm to accept slower responses will alleviate these tendencies. One option is to use something like a Kalman controller. A Kalman controller chooses the output to be a series of steps toward the final value, each step defining the desired output level at the corresponding sample. If we want to reach a final value of 1, then we define the intermediate values such that each successive increase in output is the next coefficient in the series and all the coefficients, when added, are equal to 1. Thus, if we wanted zero error in five samples (four sample periods) from a unit step input, instead of in one sample period, we might use the output series:

Y_desired(z) = 0 + 0.4z^-1 + 0.3z^-2 + 0.2z^-3 + 0.1z^-4

Assuming that our system components are capable of the desired response and that our model is accurate, we would have the following output value at each sample:

Sample 1: y(1) = 0
Sample 2: y(2) = 0.4
Sample 3: y(3) = 0.7 (add the previous to the new, 0.4 + 0.3)
Sample 4: y(4) = 0.9 (add the previous to the new, 0.7 + 0.2)
Sample 5: y(5) = 1.0 (add the previous to the new, 0.9 + 0.1)
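The bookkeeping above is simply a running sum of the series coefficients; a short Python sketch (variable names are mine):

```python
# Each coefficient of Y_desired(z) is the increment commanded at that sample;
# the increments must add up to the final value (here 1.0).
increments = [0.0, 0.4, 0.3, 0.2, 0.1]  # coefficients of z^0 .. z^-4

levels, total = [], 0.0
for step in increments:
    total += step                 # add the previous level to the new increment
    levels.append(round(total, 10))

print(levels)  # [0.0, 0.4, 0.7, 0.9, 1.0]
```

Any other feasible staging works the same way, as long as the increments sum to the commanded final value.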


As the example demonstrates, we can use this same method to pick the shape of our response for any system, subject of course to the physics of the system. This concept is further illustrated in a later example problem. In broader terms than described for the deadbeat response, the advantage of direct response design is that we can choose any feasible response type for our system. For example, we can choose a controller that will cause our system to respond as if it were a first-order system with time constant τ, as long as our system is physically capable of responding in such a way and we have accurate models of the system. We can choose virtually any response that meets the following three criteria. One, the response must be feasible. Two, the resulting controller algorithm must be realizable. And three, an accurate model of the system must be available. Several examples further illustrate the concept of direct design methods for digital controllers.

EXAMPLE 9.7

Design a deadbeat controller whose goal is to achieve the desired command in two sample periods. The sample frequency is 2 Hz and the physical system is given as

G(s) = 1/s^2

To design the system we first need to convert the continuous system into an equivalent discrete representation. The system transfer function after including the ZOH is

G(z) = [(z - 1)/z] Z{1/s^3} = [(z - 1)/z] [T^2/2] [z(z + 1)/(z - 1)^3] = [T^2/2] (z + 1)/(z - 1)^2

Substituting in the sampling rate of 2 Hz (T = 0.5):

G(z) = 0.125 (z + 1)/(z - 1)^2
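Because the ZOH equivalent is exact at the sample instants, the unit step response of G(z) must agree with the continuous double integrator, c(t) = t^2/2, at every sample. A quick Python check (a sketch; the difference equation comes from cross-multiplying G(z) = 0.125(z + 1)/(z - 1)^2):

```python
# c(k) = 2 c(k-1) - c(k-2) + 0.125 u(k-1) + 0.125 u(k-2), unit step input
T = 0.5
c_hist = [0.0, 0.0]  # c(k-1), c(k-2)
u_hist = [0.0, 0.0]  # u(k-1), u(k-2)
samples = []
for k in range(8):
    ck = 2*c_hist[0] - c_hist[1] + 0.125*u_hist[0] + 0.125*u_hist[1]
    samples.append(ck)
    c_hist = [ck, c_hist[0]]
    u_hist = [1.0, u_hist[0]]  # u(k) = 1 for k >= 0

# Continuous double-integrator response (k*T)^2 / 2 at the same instants
exact = [(k * T) ** 2 / 2 for k in range(8)]
print(samples)  # [0.0, 0.125, 0.5, 1.125, 2.0, 3.125, 4.5, 6.125]
```

The two sequences match exactly, which is the defining property of the ZOH equivalent.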

Now we can close the loop and derive the closed loop transfer function, which can then be set equal to our desired response of T(z) = z^-2. This is the same as expressing our input and output as c(k) = r(k - 2), or that our output should be equal to the input after two sample periods. D(z), our controller, becomes our only unknown in the expression

C(z)/R(z) = T(z) = D(z)(1/8)(z + 1)/(z - 1)^2 / [1 + D(z)(1/8)(z + 1)/(z - 1)^2] = z^-2

This expression can now be solved for D(z):

D(z) = U(z)/E(z) = (8z^2 - 16z + 8)/(z^3 + z^2 - z - 1)

D(z) = U(z)/E(z) = (8z^-1 - 16z^-2 + 8z^-3)/(1 + z^-1 - z^-2 - z^-3)
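The factorizations behind this result can be spot-checked with plain polynomial multiplication: the numerator is 8(z - 1)^2 and the denominator is (z + 1)^2(z - 1). A small Python sketch (the helper name is mine):

```python
def polymul(a, b):
    """Multiply polynomials given as coefficient lists, highest power first."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

num = polymul([8, -8], [1, -1])                  # 8(z - 1)^2
den = polymul(polymul([1, 1], [1, 1]), [1, -1])  # (z + 1)^2 (z - 1)

print(num)  # [8, -16, 8]
print(den)  # [1, 1, -1, -1]
```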

D(z) can easily be converted to difference equations for implementation. It is computationally realizable since the lowest power of z^-1 occurs in the denominator.


u(k) = -u(k-1) + u(k-2) + u(k-3) + 8e(k-1) - 16e(k-2) + 8e(k-3)

Recognize that we need six storage variables to implement this control algorithm and that it is highly dependent on the accuracy of our model, since it relies on using the controller to cancel the poles of the physical system.

EXAMPLE 9.8

Use Matlab to verify the deadbeat controller designed in Example 9.7. When the controller and control loop are added, we can represent the system with the block diagram in Figure 15. In Matlab we will define the physical system, apply the ZOH, and convert it into the z-domain, where it can be combined with the controller D(z) and the loop closed. Both a step and a ramp response can be calculated and plotted.

%Program commands to direct design digital controller in z-domain
clear;
T=0.5;
nump=1; %Forward loop system numerator
denp=[1 0 0]; %Forward loop system denominator
sysp=tf(nump,denp); %System transfer function in forward loop
sysz=c2d(sysp,T,'zoh')
numzc=[8 -16 8]; %Numerator of controller
denzc=[1 1 -1 -1]; %Denominator of controller
syszc1=tf(numzc,denzc,T);
%Close the loop and convert to z between samplers
cltfz=feedback(syszc1*sysz,1)
%Verify the discrete step and ramp response plots
step(cltfz,4);
figure;
lsim(cltfz,[0:0.5:4]);
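The deadbeat behavior can also be checked without Matlab by iterating the two difference equations directly: the plant recursion c(k) = 2c(k-1) - c(k-2) + 0.125u(k-1) + 0.125u(k-2) obtained from G(z), and the controller recursion from Example 9.7. A Python sketch (it assumes the model is exact, so the pole-zero cancellations are perfect):

```python
N = 8
r = [1.0] * N               # unit step command
c = [0.0] * N               # plant output at the samples
e = [0.0] * N               # error r - c
u = [0.0] * N               # controller output

def past(x, k, n):
    """x(k - n), taken as zero before time zero."""
    return x[k - n] if k - n >= 0 else 0.0

for k in range(N):
    # plant: G(z) = 0.125(z + 1)/(z - 1)^2
    c[k] = (2 * past(c, k, 1) - past(c, k, 2)
            + 0.125 * past(u, k, 1) + 0.125 * past(u, k, 2))
    e[k] = r[k] - c[k]
    # deadbeat controller from Example 9.7
    u[k] = (-past(u, k, 1) + past(u, k, 2) + past(u, k, 3)
            + 8 * past(e, k, 1) - 16 * past(e, k, 2) + 8 * past(e, k, 3))

print(c)  # [0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0] -- i.e., c(k) = r(k - 2)
print(u)  # note the large, alternating control effort
```

The output reaches the command exactly two samples after it is given, while the control signal alternates with large magnitude, illustrating the aggressive effort that deadbeat designs demand.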

After these commands are executed, we get the step and ramp responses shown in Figures 16 and 17, respectively. It is very important to remember that these plots only tell us that the system is at the desired location at the actual sample times, two samples after the command has been given. The system may be oscillating in between the sample periods. Additionally, it is necessary to calculate the power requirements for this system to achieve the responses in Figures 16 and 17. The power requirements increase as we make our desired response faster. It is primarily the amplifiers and actuators acting on the physical system that become cost prohibitive under unrealistic performance requirements.

EXAMPLE 9.9

Use Matlab to design a controller for the system given in Figure 18. The response should approximate a first-order system and take four sample periods to reach the command. The sample frequency for the controller is 5 Hz, T = 0.2 sec. Since we

Figure 15 Example: system block diagram for discrete direct response controller design.


Figure 16 Example: Matlab discrete step response plot of compensated system.

Figure 17 Example: Matlab discrete ramp response of compensated system.

Figure 18 Example: system block diagram for discrete direct response controller design.


want the system to approximate a first-order system response, we can define our desired response to be based on the standard first-order unit step response values:

C(z)/R(z) = 0 + 0.63z^-1 + 0.235z^-2 + 0.085z^-3 + 0.05z^-4

Assuming that our system components are capable of the desired response and that our model is accurate, this should result in the following output value at each sample:

Sample 1: c(1) = 0
Sample 2: c(2) = 0.63
Sample 3: c(3) = 0.865 (add the previous to the new, 0.63 + 0.235)
Sample 4: c(4) = 0.95 (add the previous to the new, 0.865 + 0.085)
Sample 5: c(5) = 1.0 (add the previous to the new, 0.95 + 0.05)
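These sample values are just the normalized first-order step response 1 - e^(-t/τ) evaluated at t = kT with τ = T, rounded as in the text (the k = 4 value, 0.982, is rounded up to 1.0 so the series terminates). A Python sketch of where the coefficients come from:

```python
import math

# Normalized first-order unit step response, time constant = one sample period
levels = [1 - math.exp(-k) for k in range(5)]
print([round(v, 3) for v in levels])  # [0.0, 0.632, 0.865, 0.95, 0.982]

# The text rounds these to 0, 0.63, 0.865, 0.95, 1.0; the z^-k coefficients
# of C(z)/R(z) are the increments between successive levels:
target = [0.0, 0.63, 0.865, 0.95, 1.0]
coeffs = [round(b - a, 3) for a, b in zip(target, target[1:])]
print(coeffs)  # [0.63, 0.235, 0.085, 0.05]
```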

We should recognize this as the normalized response of a first-order system, with time constant equal to one sample period, to a unit step input. To derive the transfer function representation we can solve for C/R and multiply the numerator and denominator by z^4:

T(z) = C(z)/R(z) = (0.63z^3 + 0.235z^2 + 0.085z + 0.05)/z^4

To facilitate the use of the computer we can solve directly for our controller D(z) in terms of our system transfer function G(z) and our desired response transfer function T(z). The desired response transfer function is set equal to the closed loop transfer function, as defined earlier:

T(z) = C(z)/R(z) = D(z)G(z) / [1 + D(z)G(z)]

For this example with unity feedback we can now solve directly for D(z):

D(z) = T(z) / (G(z)[1 - T(z)])

This allows us to use Matlab to convert our analog system into its discrete equivalent, define our desired response T(z), and subsequently solve for our controller D(z). We can verify our design by closing the loop and plotting the unit step input response of the system. The Matlab commands used are given as

%Program commands to design digital controller in z-domain
clear;
T=0.2;
nump=8; %Forward loop system numerator
denp=[1 4 16]; %Forward loop system denominator
sysp=tf(nump,denp); %System transfer function in forward loop
sysz=c2d(sysp,T,'zoh')
sysTz=tf([0 0.63 0.235 0.085 0.05],[1 0 0 0 0],T)
Dz=sysTz/((1-sysTz)*sysz)
sysclz=feedback(Dz*sysz,1);
step(sysclz,2)

After defining the sample time and continuous system transfer function we use Matlab to calculate the discrete equivalent, with the ZOH, which is returned as


sysz = G(z) = (0.1185z + 0.09037)/(z^2 - 1.032z + 0.4493)

The controller transfer function, D(z), is then calculated and expressed as

D(z) = U(z)/E(z)
     = (0.63z^5 - 0.4149z^4 + 0.1257z^3 + 0.06791z^2 - 0.01338z + 0.02247) / (0.1185z^5 + 0.0157z^4 - 0.08478z^3 - 0.03131z^2 - 0.01361z - 0.004518)
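These coefficients can be reproduced by hand from the design formula: with T(z) = a(z)/z^4 and G(z) = p(z)/q(z), D(z) = T/(G[1 - T]) reduces to a(z)q(z) / (p(z)[z^4 - a(z)]), i.e., two polynomial multiplications. A Python sketch using the rounded coefficients above (the helper name is mine; small last-digit differences from the text are expected because of the rounding):

```python
def polymul(a, b):
    """Multiply polynomials given as coefficient lists, highest power first."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

a = [0.63, 0.235, 0.085, 0.05]                  # numerator of T(z) (over z^4)
p = [0.1185, 0.09037]                           # numerator of G(z)
q = [1, -1.032, 0.4493]                         # denominator of G(z)
z4_minus_a = [1, -0.63, -0.235, -0.085, -0.05]  # z^4 - a(z)

D_num = polymul(a, q)
D_den = polymul(p, z4_minus_a)

print([round(x, 4) for x in D_num])  # ~[0.63, -0.4152, 0.1255, 0.0679, -0.0134, 0.0225]
print([round(x, 4) for x in D_den])  # ~[0.1185, 0.0157, -0.0848, -0.0313, -0.0136, -0.0045]
```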

This is realizable since no future knowledge of our system is required, and we can express our controller as a difference equation that can be implemented digitally. Finally, the loop can be closed, now including our controller D(z), and the unit step response of the system plotted as shown in Figure 19. It is easy to see that we have reached our desired output values at the corresponding sample times and that the response of the system approximates the response of a first-order system with a time constant of one sample period. Remember that since this is a simulation, the model used to develop the controller and the model used when simulating the response are identical, and therefore the results behave exactly as designed. In reality it is difficult to accurately model the complete system, especially with the linear models we are constrained to in the z-domain, and our results need to be evaluated in the presence of modeling errors and disturbances on the system. As this chapter demonstrates, many of the same design techniques (conversion and root locus) that were developed for analog systems can be used for digital systems. The primary difference is the addition of the sample effects and how they modify the response and stability characteristics of our system. If the sample frequency is fast relative to our physical system, we can even use controllers designed in the s-domain with good results. As our sample time becomes an issue, however,

Figure 19 Example: Matlab discrete step response plot of compensated system.


designing directly in the z-domain is advantageous since we account for the sample time at the onset of the design process. Finally, and for this there is no analog counterpart, we can choose the response that we wish to have and solve for the controller that gives us that response. Although a straightforward process analytically, this method relies on our understanding the physics of the system to the extent that we can choose realistic goals and design specifications. Unrealistic specifications may lead to unsatisfactory performance and/or high implementation costs. Finally, with all controllers designed here we should also verify the disturbance rejection properties. A good response due to a command input does not guarantee good rejection of a disturbance input. Since the disturbance usually acts directly on the physical system, it is difficult to model analytically unless we predefine the disturbance input sequence to enable us to perform the z transform. Tools like Simulink, the graphical block diagram interface of Matlab, become useful at this point since we can include continuous, discrete, and nonlinear blocks in one block diagram. This is demonstrated in Chapter 12 when the nonlinearities of different electrohydraulic components are included in our models.

9.6

PROBLEMS

9.1 What should the sample time be when converting existing analog controllers into discrete equivalents and expecting to achieve similar performance?

9.2 Ziegler-Nichols tuning methods are applicable to discrete PID controllers. (T or F)

9.3 What is the goal of bumpless transfer?

9.4 What are the advantages of expressing our difference equations as velocity algorithms?

9.5 Write the velocity algorithm form of a difference equation for the PI-D controller. Let u be the controller output and e the controller input (error).

9.6 For the following phase lag controller in s, approximate the same controller in z using the pole-zero matching method. Assume a sample time of 1/2 sec.

Gc(s) = (0.25s + 1)/(s + 1)

9.7 Use Tustin's approximation to find the equivalent discrete controller when given the following continuous system controller transfer function (leave T as a variable in the discrete transfer function):

Gc(s) = (s + 1)/(0.1s + 1)

9.8 For the system block diagram in Figure 20,
a. Design a phase-lead controller in the s-domain (see Example 5.15) to achieve a closed loop damping ratio equal to 0.5 and a natural frequency equal to 2 rad/sec. Use root locus techniques in the s-domain.
b. Convert the phase-lead controller into a discrete equivalent using pole-zero matching with a sample time, T = 0.05 sec. Draw the block diagram and verify the unit step response of the discrete system using Matlab.


Figure 20 Problem: block diagram of controller, system, and transducer.

9.9 For the phase-lead controller in Figure 21 and using a sample time, T = 0.5 s,
a. Find the equivalent discrete controller using pole-zero matching.
b. Find the equivalent discrete controller using Tustin's approximation.
c. Use Matlab to plot the step response of each discrete controller and comment on the similarities and differences.

Figure 21 Example: block diagram of physical system for phase-lead controller.

9.10 For the system in Figure 22, tune the phase-lag digital controller directly in the z-domain using root locus techniques. Draw and label the root locus plot and solve for the gain K that results in repeated real roots (the point where the loci leave the real axis). The sample time is T = √2 sec.

Figure 22 Problem: block diagram of discrete controller and system.

9.11 For the system in Figure 23, develop the z-domain root locus when the sample time is T = 0.5 sec.
a. Draw and label the root locus plot.
b. Describe the range of response characteristics that can be achieved by varying the proportional gain.

Figure 23 Problem: block diagram of discrete controller and system.

9.12 For the plant model transfer function, design a unity feedback control system using a PI controller (K = 2, Ti = 1). Draw the block diagrams for both the continuous system (see Problem 5.14) and the equivalent discrete implementation. Use


Tustin's approximation to derive the equivalent discrete controller transfer function. Determine the steady-state error for both systems when subjected to step inputs with a magnitude of 2; use the FVT in the s- and z-domains.

G(s) = 5/(s + 5)

9.13 For the system shown in the block diagram in Figure 24, use Matlab to design a discrete controller with a sample time of T = 0.5 sec that has
a. Less than 5% overshoot due to a step input.
b. Zero steady-state error due to a step input.
The solution should contain the Matlab root locus plots, the step response of the system, and the difference equation that is required to implement the controller.

Figure 24 Problem: block diagram of discrete controller and system.

9.14 With the third-order plant model and unity feedback control loop in Figure 25, and using a sample time T = 0.8 sec, use Matlab to
a. Design a discrete controller that exhibits no overshoot and the fastest possible response when subjected to a unit step input.
b. Write the difference equation for the controller. Be sure that it is realizable.
c. Verify the root locus and step response plots (compensated and uncompensated, i.e., D(z) = 1) using Matlab.

Figure 25 Problem: block diagram of discrete controller and system.

9.15 For the system shown in the block diagram in Figure 26, design a discrete compensator that
a. Places the closed loop poles at approximately z = 0.25 ± 0.25j using a sample time of T = 0.04 sec. Design the simplest discrete controller possible and derive both the required gain and the compensator pole and/or zero locations.
b. Leaving the controller values the same, change the sample time to T = 1 sec and again plot the discrete root locus. Comment on the differences.

Figure 26 Problem: block diagram of discrete controller and system.


c. Verify both designs with a unit step input response plot using Matlab. Comment on how well this correlates with parts a and b.

9.16 For the system shown in the block diagram in Figure 27, design a discrete compensator that achieves a deadbeat response in one sample. The sample period is T = 0.2 s.

Figure 27 Problem: block diagram of discrete controller and system.

9.17 For the system shown in the block diagram in Figure 27, design a discrete compensator that achieves a deadbeat response over two sample periods. The sample period is T = 0.1 s.

9.18 For the system shown in the block diagram in Figure 28, design a discrete compensator that achieves an approximate ramp response to the desired value over four sample periods. The sample period is T = 0.4 s.

Figure 28 Problem: block diagram of discrete controller and system.

9.19 For the system shown in the block diagram in Figure 28, design a discrete compensator that achieves deadbeat response over one sample period. The sample period is T = 0.6 s.
a. Use Matlab to solve for the controller transfer function.
b. Verify the unit step response of the closed loop feedback system.
c. Build the block diagram in Simulink, connect a scope to the output of the controller block, and comment on whether there are any inter-sample oscillations.

9.20 For the system shown in the block diagram in Figure 29, design a discrete compensator that achieves deadbeat response over three sample periods. On the second sample the system should be at 65% of the desired value, and at 100% of the desired value on the third sample. The sample period is T = 1 s.
a. Use Matlab to solve for the controller transfer function.
b. Verify the unit step response of the closed loop feedback system.
c. Build the block diagram in Simulink, connect a scope to the output of the controller block, and comment on whether there are any inter-sample oscillations.

Figure 29 Problem: block diagram of discrete controller and system.


10 Digital Control System Components

10.1 OBJECTIVES

- Examine the different interfaces between analog and digital signals.
- Learn the common methods of implementing digital controllers.
- Examine the strengths and weaknesses of different implementation methods.
- Present basic programming concepts for microprocessors and PLCs.
- Identify the components available when digital signals are used.

10.2 INTRODUCTION

With the rate at which computers are developing, it is with a measure of caution that this chapter is included. The goal is not to predict the future of digital controllers, benchmark past progress, or provide a comprehensive guide to the available hardware and software. Rather, the goal is to provide an overview of the types of digital controllers in use, how they typically are used, and how the designs from the previous chapter are commonly implemented and made useful. The basic components have somewhat stabilized for the present, with the noticeable developments occurring in terms of speed, processing power, and communication standards. It is safe to say that the influence of digital controllers will continue to increase. In this chapter, the three broad categories of digital controllers are presented as computer based, microcontrollers, and programmable logic controllers (PLCs). In general, each type ultimately relies on the common microprocessor. The strengths and weaknesses of each type are examined from a hardware and software perspective. The common digital transducers and actuators used in implementing digital controllers are also presented. Finally, the problem of interfacing low level digital signals to real actuators is examined. An example of this is the common method using pulse width modulation (PWM).

10.3 COMPUTERS

Computers have become commonplace when implementing digital controllers. In many situations they are the cheapest and most flexible option, and yet computers


have struggled in industrial settings. While reliability is the primary reason, it is not the reliability of the hardware half as much as that of the software. Current operating systems are just not robust enough to run continuously in a control environment and provide the level of safety required. To be fair, they are designed to run all tasks satisfactorily, not to run one task reliably. In industrial environments, PLCs are still dominant and are discussed in more detail in a subsequent section. With the continuing increase in performance in the midst of decreasing prices, it is envisioned that more and more PC-based applications will be developed. The PC has many advantages once the reliability issues are resolved. It is extremely easy to upgrade and is flexible in adding capabilities as time progresses. Compared to microcontrollers and PLCs, there are many straightforward programming packages, some with a purely graphical interface (more computer overhead). Microcontrollers run best when programmed in assembly language, and PLCs generally have an application where the program is written using ladder logic and downloaded to the chip. Computers, on the other hand, allow many different computer languages, compilers, programs, etc., to interface with the outside data input/output (IO) ports. Since the PC processor is performing all the control tasks in real time, slider bars, pop-up windows, etc., can be used to tune the controller on the fly and immediately see the effects. In addition, with large hard drives, ''cheap'' memory upgrades, etc., the PC can collect data on the fly and store it for long periods of time. Hundreds of different control algorithms could be stored for testing or even switched between during operation. Microcontrollers and PLCs have limitations on the number of instructions, byte lengths, and word sizes, and they often use integer arithmetic, requiring good programming skills.
It should be clear that at this point in time, the PC-based controller is a great choice for developing processes, research, and testing but is not as well suited for continuous industrial control or where the volumes are high, as in OEM (original equipment manufacturer) controllers. What we are seeing, and will continue to see, is the line between the two systems becoming more blurred. New bus architectures and interface cards are allowing computer processors to act as programmable controllers in industrial settings. Now we will examine some of the components required to use our PC as control headquarters. Figure 1 illustrates the common components used in PC-based control. It is obvious at this point that for our computer to control something, it must be capable of inputs and outputs that can interface with physical components. This is really the heart of the matter. The computer (any processor) is designed to work with low level logic signals using essentially zero current. Physical components, on the other hand, operate best when supplied with ample power. The goal then becomes developing interfaces between low level processor signals and the high power levels required to effect changes in the physical system. Various computer interfaces can be purchased and used, each with different strengths and weaknesses. An additional problem is the quantization of analog signals describing physical phenomena (i.e., the temperature outside is not fixed at 30 or 50 degrees but may vary infinitely between those two points and beyond). Converters, examined below, are used to convert between these two signal types, but with limited resolution. Finally, both isolation (computer chips are not fond of high voltages and currents) and power amplification devices are required to actually get the computer to control a physical system. These last items are the same concerns all microcontrollers and PLCs share, and they are overcome using similar methods and components.


Figure 1 Basic PC-based control system architecture.

To begin with we will look at the common components used to interface computers with physical systems for the purpose of closed loop control and data acquisition.

10.3.1 Computer Interfaces

This section will break the common computer components into three sections: computer hardware, interface hardware, and software. By the end we should have a good idea of what components are required for our applications.

10.3.1.1

Computer Hardware

The main choices discussed here are desktop, laptop, and dedicated digital signal processing (DSP) systems. Today's laptops are capable of the processing speed, memory, and interface ports required for closed loop controllers. Their largest problems are cost and throughput. For the same processing power, memory, and monitor size, laptops cost considerably more than their desktop counterparts. However, if the controller must be mobile (i.e., testing antilock braking systems at the proving grounds), then laptops are worth the cost and a viable choice. The largest remaining problems, assuming the cost difference is worth the portability, are interface limitations and the resulting additional costs. Aside from going to a dedicated DSP, there are still limited options for interfacing data acquisition cards with laptops for maximum performance. While serial port devices are cheap, they are severely limited in maximum data transfer rate, often limiting the sampling frequency to around 120 Hz maximum for one channel. In addition, few exist with analog output capabilities. That being said, for slow processes, controlling items at home, and learning, they are small, light, cheap, and fun. The next level up is to use the parallel port. For a slightly higher cost, units exist with digital IO, analog in, and analog out capabilities. The data transfer rate is also higher but still limited by parallel port transfer speeds. To take full advantage of the laptop's processing power, Personal Computer Memory Card International

402

Chapter 10

Association (PCMCIA) or Universal Serial Bus (USB) ports must be used. Much greater data throughputs are possible, as shown in Table 1. The PCMCIA standards have various levels and not all are as high as 100 megabits/sec. In addition, the newer cards are not compatible in the older slots, and these cards are sometimes difficult to set up. USB devices are now common and are capable of plug-n-play operation, supplying enough power to sometimes run connected devices, daisy chains of up to 128 components, and compatible with all computers incorporating such a port (i.e., IBM compatible, Apple, Unix, etc.). Desktop systems (within the PC category) are usually the best value if portability and cutting edge performance (DSP) are not required. Boards using the ISA (original Industry Standard Architecture) are common as seen by the many companies producing such boards. Price ranges from several hundred dollars to over several thousand dollars, depending on the features required. Peripheral component interconnect (PCI), the latest mainstream bus architecture and faster and friendlier than industry standard architecture (ISA), is also becoming popular with interface card manufacturers. PCI is equivalent to having DMA (direct memory access, allows the board to ‘‘seize’’ control of the PC’s data bus and transfer data directly into memory) with an ISA card. Both bus systems provide more than enough data throughput to keep up with the converters on the data acquisition card. It is also possible to install multiple boards in one computer for adding additional capabilities. Systems have been constructed with hundreds of inputs and outputs. If extremely high speeds (MHz sampling rates) along with many channels are required, then dedicated digital signal processors are required. They generally consist of their own dedicated high-speed processor (or multiple ones) and conversion chips and only receives supervisory commands from the host PC. 
Costs are generally much greater the typical PC systems installed with interface boards and hence limited to special applications. 10.3.1.2

Data Acquisition Boards

In this section we define the characteristics common to the hardware data acquisition boards used in the various systems listed in the previous section. Data acquisition boards are very common and are used extensively for interfacing the PC with analog and digital signals. There are many specialty boards with different inputs and outputs that, although similar, are not discussed here.

Table 1 Comparative Throughput Rates of Various Computer Interfaces

Interface (port) type      Transfer rate (megabytes per second)
Serial                     0.01
Parallel                   0.115
USB                        1.5
SCSI-1 and 2               5-10
(Wide) Ultra SCSI          20-40
IEEE 1394 "FireWire"       12.5-50
PCMCIA                     Up to 100

Table 2 lists the common features to compare when deciding on which board to use. The common bus architectures have already been examined in Table 1. In general, speed and resolution are usually proportional to cost (when other features are similar). The speed is generally listed in samples/second; if the board is multiplexed, this rate must be divided by the number of channels being sampled to obtain the per-channel sample rate. We will hold off on resolution and software and discuss them in more detail in subsequent paragraphs.

When dealing with any inputs or outputs, the voltage (or current) ranges must be compatible with the system you are trying to control, namely the transducer and actuator voltage levels. Many PC-based data acquisition boards are either software or hardware selectable and therefore flexible in choosing a signal range that will work. However, many boards select one range that then applies to all channels; the discussion on resolution will illustrate some potential problems with this. Common ranges are 0-1 V, 0-5 V, 0-10 V, ±5 V, ±10 V, and 4-20 mA current signals.

The numbers of inputs and outputs vary, with a larger number of digital IO ports generally available. Common boards include up to 16 single-ended analog inputs and thus 8 double-ended (differential) inputs. With double-ended inputs a separate ground is required for each channel, since the board references only the differential voltage between two channels. This has many advantages when operating in noisy environments but usually requires two converter channels for one signal. Single-ended inputs have one common ground, and each AD (analog-to-digital) converter channel references this ground. Some boards include counter channels that can be configured to count pulses or read a pulse train as a frequency. In addition, various output capabilities are found besides analog voltages.
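Before moving on to output types, the multiplexing rule above is worth making concrete. The sketch below uses hypothetical board numbers (a 100 kS/s aggregate rate, 16-bit samples) rather than any particular product, and checks the resulting data rate against a bus throughput from Table 1:

```python
def per_channel_rate(aggregate_sps, n_channels):
    """A multiplexed converter shares its aggregate sample rate across channels."""
    return aggregate_sps / n_channels

def bus_can_sustain(sps_per_channel, n_channels, bytes_per_sample, bus_mb_per_s):
    """Rough check: can the bus move the sampled data in real time?"""
    required = sps_per_channel * n_channels * bytes_per_sample  # bytes/second
    return required <= bus_mb_per_s * 1e6

# A 100 kS/s board multiplexed over 8 channels:
rate = per_channel_rate(100_000, 8)      # 12500.0 samples/s per channel
ok = bus_can_sustain(12_500, 8, 2, 1.5)  # 16-bit samples over USB (1.5 MB/s)
print(rate, ok)
```

Even this modest board saturates none of the buses in Table 1; the per-channel rate, not bus bandwidth, is usually the limiting factor.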
Four-to-20-mA outputs are becoming more common, and PWM outputs and stepper motor drivers can also be found. PWM and stepper motor outputs are generally cheaper, since digital ports are used and a DA (digital-to-analog) converter circuit is not required. Finally, extras to consider are current ratings, warranties, linearity, and protection ratings. For most controller applications the linearity is not a major issue, being much better than that of the components connected to the board. Current ratings, in general, are small; assume that we will always have to provide some amplification before a signal can be used. The over-voltage protection ratings are important if you do not include isolation circuits and expect to operate in a noisy environment. Input and output impedances are also available from many board manufacturers.

Table 2 Basic Features of Data Acquisition Boards

Basic features             No. of inputs and outputs    Extra considerations
Cost                       No. of digital outputs       Over-voltage protection
Platform/bus               No. of analog inputs         DMA capability
Speed (kHz-MHz)            No. of analog outputs        Accuracy, linearity
Resolution                 No. of digital inputs        Terminals/accessories
Software drivers           Counters/pulse frequency     Current capabilities
Voltage ranges (in/out)    Extra channels (PWM)         Warranty


Chapter 10

Finally, we conclude this section by defining resolution. The most common components when analog signals are required, and common chips made by many integrated circuit manufacturers, are the AD and DA converters. AD represents the analog-to-digital conversion process and DA the digital-to-analog conversion. Most commercial designs use successive approximation or flash converters. Flash converters are faster because their comparators act in parallel, versus in series for the successive approximation technique. All conversions take time, generally 1-100 μsec, and if channels are multiplexed to save money (i.e., one channel is converted, the next is switched to the same converter, etc.), each additional channel lengthens the total conversion time needed to acquire all the data. The AD/DA chips define the best resolution possible under ideal conditions and generally range from 8- to 16-bit converters. Realized accuracies must include the operations on the signal before and after conversion and depend in part on the accuracies of the input resistors on the operational amplifiers. Without going into the actual conversion details, let us see how the resolution affects us. Remember that we are representing an analog (i.e., continuous) signal in digital form. This means that there is a limited set of values with which the analog signal may be represented. The number of levels to which the analog signal can be resolved is determined by the resolution of the AD converter. The common resolutions and resulting numbers of possible values are shown in Table 3. The resolution of the actual signal can be found with the following equation:

V_quantization = (V_max - V_min) / (2^n - 1)
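Stepping back to the conversion process itself, the successive-approximation scheme mentioned above is easy to sketch: each of the n comparator decisions fixes one bit of the output code, most significant bit first. This is an idealized model (it ignores the sample-and-hold and settling effects of a real converter):

```python
def sar_adc(v_in, v_ref, n_bits):
    """Idealized successive-approximation conversion: one bit decided per step."""
    code = 0
    for bit in range(n_bits - 1, -1, -1):
        trial = code | (1 << bit)
        # Compare the input against the internal DAC output for the trial code.
        if v_in >= trial * v_ref / (2 ** n_bits):
            code = trial
    return code

print(sar_adc(5.0, 10.0, 8))  # mid-scale input -> code 128
```

A flash converter reaches the same code in one step by running all the comparators in parallel, which is why it is faster but uses far more hardware.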

The significance of the quantization error can be shown through a simple example.

EXAMPLE 10.1

Determine the quantization voltage (possible error) when a voltage signal is converted into an 8-bit digital signal and the range of the AD converter is (a) 0-10 V and (b) ±10 V.

Using our system with an 8-bit converter for a range of 0-10 V, our signal is represented by 256 individual levels (00000000 through 11111111 in binary). We can calculate the quantization voltage as

V_quantization = (V_max - V_min) / (2^n - 1) = (10 V - 0 V) / (2^8 - 1) = 39 mV

Table 3 Common AD and DA Converter Resolutions

Bits    Representation    No. of discrete values possible
8       2^8               256
10      2^10              1024
12      2^12              4096
16      2^16              65,536


Thus the smallest increment is 39 mV. If we configure the computer to accept ±10 V signals, we now have twice the voltage range to be represented by the same number of discrete values:

V_quantization = (V_max - V_min) / (2^n - 1) = (10 V - (-10 V)) / (2^8 - 1) = 78 mV
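Both results can be checked directly from the quantization formula with a few lines of Python:

```python
def quantization_step(v_max, v_min, n_bits):
    """Smallest representable voltage change for an n-bit converter."""
    return (v_max - v_min) / (2 ** n_bits - 1)

print(round(quantization_step(10, 0, 8) * 1000, 1))    # 39.2 mV
print(round(quantization_step(10, -10, 8) * 1000, 1))  # 78.4 mV
```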

Now our voltage level must increase or decrease by almost 0.1 V before we see the binary representation change. The practical side is this: when we design systems, it is best to have all signals use the full range of the AD converter to maximize resolution, unless, of course, each channel on the board can be configured separately. If some sensors already have ±10 V outputs, fixing the board range at ±10 V, then a sensor with a 0 to 1 V output severely limits the resolution on its channel, since it uses only 1/20th of a range that is already limited to a finite number of discrete values. Obviously, as resolution increases this becomes less of a concern, but nonetheless good design practices should be followed.

10.3.1.3 Software

Finally, one of the major considerations when using data acquisition boards is software support. If an investment has already been made in one specific software package, then the decisions are probably narrowed down. Most vendors supply proprietary software packages, and little has been done to establish a standard protocol. Many nontraditional software programs are, for an extra fee, beginning to offer drivers that allow data acquisition boards to interface with them. Matlab is one such example; through its toolboxes it allows us to run hardware-in-the-loop controllers.

There are two main branches to consider when choosing software for implementing digital controllers on PCs. Graphical-based programs are easy to use and program, but the graphical overhead limits the throughput when operating controllers in real time. If the hardware and software support DMA (direct memory access), then batches of data can be acquired at very high sample rates and post-processed before being displayed or saved. This technique, while great for capturing transients, has little benefit when operating a PC-based controller in real time, where the signal is acquired once, processed, and the controller output returned to the outside system. If learning to program is something we do not wish to do, then graphical-based programs are the primary option and work fine for slower processes incorporating less advanced controllers. For many slower systems, and for recording data, these software packages work well.

To take full advantage of the PC as our controller, we must be able to program in a compiled language. There are hybrids where the initial design is done using a graphical interface and the software then compiles an executable program from the graphical model. Much faster sample rates and access to unlimited controller algorithms should be all the incentive needed to learn programming.
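As an illustration of the kind of algorithm a compiled program makes easy, here is a minimal discrete PI controller written as a difference equation. The gains and sample period are assumed values for the sketch, and `read_input`/`write_output` stand in for whatever driver calls the board vendor supplies; they are not real library functions:

```python
def make_pi_controller(kp, ki, T):
    """Discrete PI in velocity form: u[k] = u[k-1] + kp*(e[k]-e[k-1]) + ki*T*e[k]."""
    state = {"u": 0.0, "e": 0.0}

    def step(error):
        u = state["u"] + kp * (error - state["e"]) + ki * T * error
        state["u"], state["e"] = u, error
        return u

    return step

controller = make_pi_controller(kp=2.0, ki=0.5, T=0.01)  # assumed gains, 10 ms period
# In a real program this would run once per sample period:
#     u = controller(setpoint - read_input())
#     write_output(u)
print(controller(1.0))  # first step with unit error
```

The velocity form is convenient here because it carries only two state variables from one sample to the next, which maps directly onto the one-pass-per-sample loop discussed below.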
Programming is greatly simplified when using predefined commands (drivers) supplied by the manufacturer of the board. Most boards come with a basic set of drivers and instructions for the common programming languages like C, Pascal, Basic, and Fortran. A brief overview of two possible programming methods is given here. Since processors typically perform operations much faster than attached input/output devices, synchronization must occur between the devices. Because the processor normally waits to execute the next instruction, something must tell the processor when the device is ready to accept data or when to perform the next execution. The two common options are to have the program control the delays or to use interrupts to control the delays.

Program control is the simplest and can easily be explained by the illustration of inserting a loop in your program. While the loop is running, every time it gets back to the command to read or write from/to the data acquisition card, it transfers the data and continues to the next line of code. In one respect this ensures that the program is always operating at the maximum sampling frequency, but it also means you do not have direct control over the sampling frequency. For example, suppose a condition in one of the loops causes a branch in the program to occur, such as updating the screen only every 50 loops; obviously the sample time for that pass will be longer. Also, if the program tries to write to the hard drive while it is in use, longer delays can be expected. Programming in a Windows environment exacerbates this, since multitasking is expected to occur.

A second method to control the delays is through interrupt-driven programming. An interrupt is exactly what the name suggests: a method to halt the current operation of the processor and perform your priority task. The advantage of using interrupts is that very consistent sample times are achievable. We base the interrupt on an external (independent) clock signal and tell the program how often to interrupt the process to collect or write data. The catch is this: while the technique sounds great, a problem arises if the computer has not finished processing the previous command before the next interrupt occurs.
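The overrun concern can be made concrete with a fixed-period loop that detects missed deadlines, the software analogue of a missed interrupt. This is a sketch using wall-clock time; real interrupt handling is hardware- and operating-system-specific:

```python
import time

def run_loop(period_s, n_samples, work):
    """Fixed-rate loop: sleep until each deadline, and count the missed ones."""
    missed = 0
    deadline = time.monotonic()
    for _ in range(n_samples):
        work()                      # read input, compute control output, write it
        deadline += period_s
        remaining = deadline - time.monotonic()
        if remaining > 0:
            time.sleep(remaining)
        else:
            missed += 1             # the work took longer than one sample period
    return missed

# A fast task easily meets a 10 ms deadline; report how many samples were missed.
print(run_loop(0.01, 5, lambda: None))
```

Returning the missed-deadline count is exactly the kind of monitoring suggested below: the controller keeps running with its last output, but the operator is told how often that happened.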
When this happens, the memory location storing the controller output might not be updated, and the old value is sent again to the system; we may wish to monitor this and alert the user if our program "misses" a set number of updates, especially when multiple devices may send interrupt signals and further tie up the processor. All in all, interrupt programming, when done correctly, is more efficient because it controls the amount of time spent waiting for other processes, most of which (i.e., moving a mouse or sound effects) can wait until the processor actually does have time, without limiting the performance of the controller routine. Examples of both methods can be found, and much of the discussion above also applies to microcontrollers and PLCs, as we will now see.

10.4 MICROCONTROLLERS

Microcontrollers are now used in a surprising number of products: microwaves, automobiles (several per vehicle), TVs, VCRs, remote controls, stereo systems, laser printers, cameras, etc. Many of the terms defined for PC-based control systems also apply when talking about microcontrollers and PLCs. First, what is a microcontroller, and why not call it a microprocessor? A microcontroller, by definition, contains a microprocessor but also includes the memory and various IO arrangements on a single chip. A microprocessor may be just the CPU (central processing unit) or may include other components. Microcomputers are microprocessors that include other components but not necessarily all the components required to function as a microcontroller. For the most part, when we discuss microcontrollers we tend to use the two terms interchangeably. Certainly, when discussing microcontrollers, we have in mind the complete electrical package (processor, memory, and IO) capable of controlling our system, an example of which is shown in Figure 2.

Microcontrollers have much in common with our desktop PCs. Both have a CPU that executes programs, memory to store variables, and IO devices. While the desktop PC is a general-purpose computer designed to run thousands of programs, a microcontroller is designed to run one type of program well. Microcontrollers are generally embedded inside another device (e.g., an automobile) and are therefore sometimes called embedded controllers. They generally store the program in read-only memory (ROM) and include some random-access memory (RAM) for storing temporary data during processing. The ROM contents are retained when power is off, while the RAM contents are lost. Microcontrollers generally incorporate a special type of ROM called erasable-programmable ROM (EPROM) or electrically erasable-programmable ROM (EEPROM). EPROM can be erased using ultraviolet light passing through a transparent window on the chip, as shown in Figure 3. EEPROM can be erased without ultraviolet light, using techniques similar to those used to program it; there is usually a limited number of write cycles with most EEPROM memory chips.

Microcontroller power consumption can be less than 50 mW, making battery-powered operation possible. LCDs are often used with microcontrollers to provide a means for output, but at the expense of battery life. Microcontrollers range from simple 8-bit processors containing 1000 bytes of ROM, 20 bytes of RAM, and 8 IO pins, costing only pennies (in quantity), to processors with 64-bit buses and large memory capacities. Today even home hobbyists can purchase microcontrollers (programmable interface controllers, etc.) that can be programmed using a simplified version of BASIC and a home computer. A BASIC Stamp is a microcontroller customized to understand the BASIC programming language.
Popular microcontrollers we might encounter include Motorola's 68HC11, Intel's 8096, and National Semiconductor's HPC16040. The Motorola, for example, comes in several versions, with the MC68HC811E2 containing an 8-bit processor, 30 I/O bits, and 128 kilobytes of RAM. Since there are so many variations within each manufacturer's families of models, and since the technology is changing so fast, little space is given here to the details of specific models. If we understand the terminology presented here, then we should be able to gather information from the manufacturer and choose the correct microcontroller for our system. Now let us examine and define some useful terms to help us when designing control systems using embedded microcontrollers.

Figure 2 Example of typical microcontroller with EEPROM.

Figure 3 Example of typical microcontroller with EPROM (ultraviolet erasable).

To interface the microprocessor with the memory and IO ports, buses are used to send words between the devices: the address bus selects the device or memory location, and the data bus carries the word. Words are groups of bits whose length depends on the width of the data path and thus affects the amount of information sent each time. An 8-bit microcontroller sends eight bits of data at a time, which can represent 256 values. Common word lengths are 4, 8, 16, and 32 bits. Four-bit microcontrollers are still used in simple applications, with 8-bit microcontrollers being the most common. The other factor affecting microcontroller performance is the clock speed. This is probably the most familiar performance specification, through our exposure to PCs and the emphasis on processor clock speeds. It is important to know both the bus width (the amount of information sent each clock cycle) and the processor clock speed, since both directly influence the overall performance.

Finally, we discuss the programming considerations. Microcontrollers ultimately need an appropriate instruction set to perform a specific action. Instruction sets depend on the microprocessor being used, and thus specific commands must be learned for different microprocessors. Microprocessors work in binary code, and instructions must be given to the processor in binary format. Fortunately, shorthand codes are used to represent the binary 0s and 1s; the common shorthand code is assembly language. Since computer programs (assemblers) are available to convert the assembly code into binary, the binary code is not such a large obstacle when developing with microcontrollers. A third level of programming, and the most useful to the control system designer, is the use of high-level computer languages that compile algorithms into assembly and machine code. Common high-level languages include BASIC, C, FORTRAN, and PASCAL. There is enough similarity between languages that once you have programmed in one, you will understand much of another.
Since only the syntax changes and not the program flow chart, learning the language is one of the easier aspects of developing a good control algorithm. Most engineers, once the flow chart (logic) of the program is developed, can learn the language-specific commands required to implement the controller. The one disadvantage of programming in high-level languages is speed. Even when converted to assembly language, they tend to result in larger programs that run more slowly than programs originally written in assembly. This gap is narrowing, and some compilers convert to assembly code quite efficiently.
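The effect of word length discussed above is easy to demonstrate at the language level: arithmetic in an n-bit register wraps modulo 2^n, so an 8-bit microcontroller distinguishes only 256 values. (This is an illustration only; actual overflow behavior depends on the processor and compiler.)

```python
def to_word(value, n_bits=8):
    """Truncate a result to an n-bit register, as a small CPU would."""
    return value & ((1 << n_bits) - 1)

print(to_word(255 + 1))       # 0: 8-bit addition wraps around
print(to_word(255 + 1, 16))   # 256: a 16-bit word holds the result
```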

Digital Control System Components

10.5 PROGRAMMABLE LOGIC CONTROLLERS

Programmable logic controllers are found in virtually every production facility and are used to control manufacturing equipment, HVAC systems, temperatures, animal feeders, and conveyor lines, to name but a few applications. Originally designed to replace the sequential relay circuits and timers used for machine control, they have evolved to the point where they are microcontrollers and logic operators packaged into one unit. Most PLCs are programmed using ladder logic, a direct result of replacing physical relays with simulated relays and timers. The ladder logic diagram ends up closely resembling the circuit an electrician would have to build to complete the same tasks using physical components. PLCs were introduced in the 1960s to replace physical relays and timers, which had to be rewired each time a design change or upgrade was required. The use of PLCs allowed designers to quickly change a ladder diagram without having to rewire the system. Once microprocessors became cheaper and more powerful, they too were incorporated into the PLC. In the 1970s PLCs began to communicate with each other and, together with analog signal capabilities, began to resemble the products of today. What has really changed since then is decreasing size combined with processors much more powerful than their predecessors. Modern PLCs accept high-level programming languages, a variety of input and output signal types, and output displays, all the while maintaining their reputation as robust and stable controllers capable of operating in extreme environments. The differences between PLCs and microcontrollers continue to diminish, and there exists a gray area shared by both products. In general, though, PLCs have more signal conditioning and protection features on board, accept ladder logic programming (along with other languages), and are designed for multiple purposes instead of the one dedicated purpose common with microcontrollers. Most PLCs continue to include large numbers of relays.
In a way, the microcontroller has become a subcomponent of many PLCs. As mentioned earlier, personal computers have replaced PLCs in some areas and, if reliability improves, may replace them in more. Given that current PLCs use microcontrollers and AD/DA converters, very little needs to be said about the way they operate. Combine the features of computers with data acquisition boards and programmable microcontrollers, design the result to be rugged and reliable, and you have today's PLC. For dedicated high-volume products, the microcontroller holds many advantages. For flexibility in programming, numbers and types of inputs and outputs, and performance per unit cost, computer-based systems hold many advantages. PLCs effectively bridge this gap by combining some features of both. They range from simple, small, self-contained units to modular systems capable of handling large numbers of inputs and outputs, linking together with other units, and being controlled remotely via telephone lines or the Internet.

A more recent category, now found in most manufacturers' product lines, is the micro-PLC, a small self-contained PLC. Let us now examine some features that might be found on a micro-PLC, an example of which is given in Figure 4. Table 4 lists some of the features that may be found on a micro-PLC. For many industrial, concept development, and testing-related projects, a small PLC as described here may be all that we need. It does illustrate the number of features that can be included on a small board the size of a common music compact disc. What may or may not be included are extra inputs and outputs, software drivers, displays and user interfaces, etc. Therefore, when choosing a PLC (or a computer-based system or microcontroller), it is a good idea to compare features based on what is included and what the prices are for additional features. Factory support policies should also be considered, although a company's reputation for providing support is probably more important.

Figure 4 Example of micro-programmable logic controller. (Courtesy of Triangle Research International Pte. Ltd.)

10.5.1 Ladder Logic Basics

Table 4 Features of a Modern Small Self-Contained PLC (Example: Triangle Research T100MD-1616+)

Host PC RS232 (serial) programming port    16 digital outputs (24 V @ 1 A each)
Built-in LCD header / LCD display          16 digital inputs (24 V NPN)
24 VDC power in                            1 analog current output (4-20 mA)
RS485 two-wire network interface           4 analog inputs (0-1 V x2 and 0-5 V x2)
Programming software and simulator         Two PWM outputs
Ladder logic and BASIC compatible          One stepper motor controller
                                           Counter, encoder, and frequency inputs

Ladder logic diagrams are the most common method used to program PLCs. The programs run sequentially (see scan time below): each cycle first scans the inputs, then scans the program lines using the new inputs and performs the desired operations, and completes the cycle by writing the outputs. This is similar to the program control of processor delay times defined earlier. Where today's PLCs confuse the matter is in combining traditional ladder logic with text-based programming based on interrupts. As we will see, with some PLCs a user routine can simply be inserted on a ladder rung and used with an interrupt generator (pulse timer). This section provides a brief overview to get us started with ladder logic programming.

Most programs can be written using a basic set of commands. The primary components used in constructing ladder diagrams are rails, rungs, branches, input contacts, output devices, timers, and counters. Although each manufacturer has different nomenclature for text-based programming (i.e., the mnemonic for an input contact), the resulting ladder diagram is fairly standard and easy to understand. Some packages allow the program to be developed graphically, directly in the ladder framework. In addition, many PLCs now allow special functions to be written using high-level programming languages like BASIC. Let us look at the primary components.

The rails are the vertical lines representing the power lines, with the rungs being the horizontal lines connecting the two power lines (in and out). Thus, when the proper inputs are seen, the input contact closes and energizes the output by connecting the two vertical rails across the load. The rungs contain the inputs, branches (if any), and outputs ("coils" or functions). Most programs use the basic commands and symbols listed in Table 5.

Table 5 Basic Ladder Logic Components

Each contact switch, normally open or closed, may represent a physical input or a condition from elsewhere in the program. In addition to a true or false condition, each contact switch may also represent a separate function, including timers (interrupts), counters, flags, and sequencers. Different manufacturers usually include additional functions that can be assigned to contacts. What follows is a brief overview of common instructions applied to contacts.

The most common function of a contact switch is to scan a physical input and reflect its result. Thus, when a normally open switch assigned to digital input channel 1 scans a high signal, or input, the contact switch closes, signifying that the event has occurred. It might be someone pushing a start button or the completion of another event signified by the closing of a physical limit switch. This is the most common use of contact switches. The normally closed switch will not open until an event has occurred. Contact switches may be external, representing a physical input, or internal, representing an event in other parts of the ladder diagram. Herein lies a primary advantage of PLCs: all the internal elements can be changed without physically rewiring the circuits. The internal relays can be configured to act as Boolean logic operators without physical wiring.

The same idea holds true for outputs, or relays. The terms relay and coil are derived from the fact that early PLCs energized a coil to activate the relay that then supplied power to the desired physical component or actuator. In modern, electronic PLCs, the coils may also be internal or external. Special bit instructions may also be used to open or close switches. Timers generally delay the on time by a set amount; delay-off behavior can then be constructed in the ladder diagram from delay-on timers. Counters may be used to keep track of occurrences and switch contact positions after a set number has been reached. Counters can generally be configured as up or down counters, and many include methods for resetting them when other events occur. Some PLCs also include functions to shift the bit registers, sequencers, and data output commands. Sequencers can be used to program fixed sequences, such as a sequence of motions to load a conveyor belt, or to drive devices like stepper motors.

Let us quickly design the ladder diagram for a latching switch that will turn an output on when a momentary switch is pressed and turn it off when another momentary switch is pressed.
To begin the ladder circuit, we need to define two inputs, one output, and a switch dependent on the output relay. One input is normally open and the other normally closed. The relay may or may not be a physical relay and may only be used internally. In this case we will choose a marker relay (does not physically exist outside of the PLC) and use it to mark an event. We are now ready to construct the ladder diagram given in Figure 5.

Figure 5 Example ladder logic circuit of latch.
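The latch rung of Figure 5 can be checked by emulating it in software, one function call per scan cycle. This is a sketch in Python, not any vendor's instruction set; `stop=True` models the normally closed STOP contact being pressed open:

```python
def scan(start, stop, run):
    """One scan of the latch rung: (START OR RUN) AND NOT-STOP energizes RUN."""
    return (start or run) and not stop

run = False
run = scan(start=True, stop=False, run=run)   # press START: RUN energizes
run = scan(start=False, stop=False, run=run)  # release START: RUN holds itself in
print(run)  # True: the latch holds
run = scan(start=False, stop=True, run=run)   # press STOP: RUN drops out
run = scan(start=False, stop=False, run=run)  # release STOP: still off
print(run)  # False
```

Feeding the previous scan's RUN state back into the next scan is exactly the self-holding contact in the diagram.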


This is a circuit commonly used to start and stop a program without requiring a constant input signal. When the START button is pushed (the digital input port receives a signal), the switch closes, and since STOP is normally closed, the relay RUN is energized. When the RUN relay is energized, it also closes the switch monitoring its state, and thus the circuit remains active even after the START switch is released. To stop the circuit, press the STOP button (another digital input port signal), which momentarily opens the STOP switch. This de-energizes the relay, which in turn causes the RUN switch to open. Now when STOP is released (it closes again), the circuit remains deactivated. In normal operation the remaining rungs would each contain a function to be executed on each pass. The scan time then refers to the time required to complete all the rungs of the program and return to the beginning rung again.

In PLCs allowing subroutines written in a high-level programming language, it becomes straightforward to implement our difference equations, which themselves were derived from the controllers designed in the preceding chapter. More general concepts relating to implementing different algorithms are discussed in the next section, but we can mention several items relating to implementing algorithms within the scope of ladder logic programming methods. Within the ladder logic framework we are generally provided with two basic options for implementing subroutines containing our controller algorithms. If the PLC's instruction set allows us to insert a timer (interrupt) component, then we can simply insert the timer on a rung, followed by the subroutine (function) containing our algorithms. Every time the timer generates an interrupt, the subroutine is executed. What takes place in the subroutine is discussed further in the next section.
This method allows us to operate our controller at a fixed sample frequency regardless of where the PLC currently is in executing the other rungs of the ladder logic program. Most PLCs attempt to service all interrupts before progressing to the next ladder rung. However, if we ask too much (e.g., several interrupt-driven timers), the controller will not operate at our desired frequency and will miss samples. A second basic method is to leave the function in the normal progression of ladder rungs, where the controller algorithm is executed once per pass through all of the rungs. This is similar to the program control method discussed earlier. In this case we are not guaranteed a fixed sample period, and the sample period may change significantly depending on what inputs are received, causing the ladder logic program to execute more or fewer commands each pass.

10.5.2 Network Buses and Protocols

In increasing numbers, multiple PLCs (and other types of controllers) are being networked to enable communication between devices. Centralized and distributed controllers, explained in Section 7.2, both require communication between components and, in the case of the distributed system, between other controllers. In general we can discuss the communication in terms of hardware and software or, as commonly called, in terms of what network bus is used and what protocol is used. In many cases these two are developed together and one term describes both aspects. Lower level buses are commonly used to connect controllers to ''smart'' components such as sensors, amplifiers, and actuators. Higher level buses are designed to handle the interactions between other controller systems. As speed and capabilities

414

Chapter 10

increase, the line separating the two becomes blurred. CANopen and DeviceNet (both CAN based) and Profibus are examples of lower level buses, while Ethernet and FireWire are more characteristic of higher level buses. The largest problem is understanding what, if any, standards are enforced for the different buses. This has led to both ''open'' and proprietary architectures. Common open communication networks include Ethernet, ControlNet, and DeviceNet. Using one of these usually means that we can buy some components from company A and other components from company B and expect to have them work together. Independent organizations are formed to maintain these standards. The flip side is proprietary architectures. Many manufacturers, in addition to supporting some open standards, also have specialized standards not available to the public and designed to work with their products (and those of partner companies) only. This gives the manufacturer the advantage of being able to optimize its products and better support them. The obvious disadvantage is that we can no longer interface with devices from other companies. There are many alternative bus architectures that are beyond the scope of this text, many of which are specific to certain applications. STD, VME, Multibus, and PC-104 are examples. One becoming more common is PC-104, an embedded-PC standard used in embedded control applications with a variety of modules available. As with the common PC, these network devices and protocols are constantly being changed and upgraded. Finally, the third and often overlooked element is the user interface. In addition to the network bus and protocol, the user interface should be considered in terms of the learning curve, flexibility, and capabilities.

10.6 ALGORITHM IMPLEMENTATION ISSUES

Regardless of the device used, PC, microcontroller, or PLC, there are several practical issues that should be considered. In general, the controller is implemented using the difference equation notation developed in previous chapters. As controllers become more advanced, different sets of equations may be used depending on a number of factors (input values, etc.), and logic statements, lookup tables, and safety features are commonly added in addition to the basic difference equation. The advantage of difference equations is that we can design our system in the z-domain using techniques similar to classical analog techniques and then easily implement the controller in a microprocessor. Since the coefficients of the difference equations are all defined within the microprocessor, we can also modify our system behavior by changing the values while the program is running (this leads us into the realm of adaptive controllers, introduced in the next chapter). The two basic programming structures, program control and interrupt control (presented in earlier sections), can both be used with difference equations, logic statements, adaptive routines, etc., with few differences. Early PLCs were examples of program control, while modern PLCs, discussed in the previous section, can be combinations of the two programming methods. One controller algorithm might be program controlled while another might be interrupt driven. When a program is used to control the processor delay times, we commonly define a scan time. Scan time arises from the idea that while the computer program (or, more figuratively, a ladder logic diagram) runs, it scans from line to line, sequentially completing the tasks. Thus, in normal operation without GOTO statements in the code, it must make one complete cycle through the code before it performs a given line again. The normal procedure in closed loop controllers is to scan all the inputs, process the data, and send the results to the outputs. Scan time is important when reading digital signals since it is possible to miss an event if the on time of the event is shorter than the controller scan time. An example of this is a counter channel where scan times are too long. A pulse may come and go before the port is scanned, and thus the pulse is not counted. While analog controllers continuously read, correct, and send out signals, a digital controller only accesses the input ports at distinct moments in time. Fortunately, most scan times are short relative to the pulse inputs being received, and often the largest concern is keeping scan times short to enhance the stability of the controlled system. If the scan time must be decreased, whether to increase stability or catch momentary inputs, we have several options: optimize our code to run faster, simplify the requirements of our controller, or upgrade to a more powerful processor. If missing digital input pulses is the only problem, we can simply make the pulse longer by modifying its source. To optimize the code, we need to examine what is required and what is extra. For example, most controllers do not require double precision variables, which only occupy more memory and take longer to operate on. Also, good programmers, especially those programming microcontrollers in assembly code, are artists at creating small, efficient segments of code. We have progressively moved away from this approach as electronic component prices have fallen. To simplify our controller, we simply need to differentiate between what is required and what constitutes bells and whistles.
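As a rough illustration of the pulse-catching constraint, the worst case can be checked with a simple comparison (a sketch only; real designs should also account for input filtering and jitter):

```python
def pulse_may_be_missed(pulse_on_time_s, scan_time_s):
    """A pulse is guaranteed to be seen only if it stays high longer
    than one full scan; a shorter pulse can fall between port reads."""
    return pulse_on_time_s < scan_time_s

# A 2 ms pulse with a 10 ms scan time can come and go unseen,
# while a 50 ms pulse is always caught.
print(pulse_may_be_missed(0.002, 0.010))  # True: may be missed
print(pulse_may_be_missed(0.050, 0.010))  # False: will be caught
```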
Adding more functions relying on interrupts will also result in longer scan times. Each time an interrupt occurs, it takes clock cycles that would otherwise complete the next sequential command. Some PLCs rely on preset scan times, and we must make sure the scan time is long enough to perform all the tasks each loop. The general algorithm, then, might take the form shown in Table 6. To use program control we simply skip step 2 and repeat only steps 3 through 8. With the interrupt routine, though, we are able to place higher priority on the controller portion of our overall program, since this segment of code is executed every time an interrupt is received, even if other portions of the program must be temporarily halted.

Table 6 General Program Flow for Implementing Control Algorithms

Step 1: Initialize variables and hardware
Step 2: Wait for interrupt driven signal
Step 3: Read analog inputs
Step 4: Calculate the error
Step 5: Execute the control algorithm
Step 6: Write controller outputs to analog (or digital) channels
Step 7: Update the history variables
Step 8: Repeat steps 2–8 until stop signal is received
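The steps in Table 6 map onto a short loop. The sketch below uses an illustrative PI difference equation with stubbed hardware I/O; the coefficient values and the read_analog_input stub are our own assumptions, not from the text:

```python
# Sketch of Table 6 with a simple PI difference equation,
# u(k) = u(k-1) + K1*e(k) + K2*e(k-1). Hardware I/O is stubbed out.
K1, K2 = 0.8, -0.5           # controller coefficients (illustrative)
setpoint = 1.0
u_prev, e_prev = 0.0, 0.0    # history variables (step 1: initialize)

def read_analog_input(k):    # stand-in for an A/D read (step 3)
    return 0.0 if k < 2 else 1.0

outputs = []
for k in range(5):                     # step 2: wait for sample interrupt
    y = read_analog_input(k)           # step 3: read analog inputs
    e = setpoint - y                   # step 4: calculate the error
    u = u_prev + K1 * e + K2 * e_prev  # step 5: execute the control algorithm
    outputs.append(u)                  # step 6: write to the output channel
    u_prev, e_prev = u, e              # step 7: update the history variables
```

Swapping in a different controller amounts to replacing the two lines for steps 5 and 7, exactly as described in the text.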


Also, whereas step 5 is the actual difference equation, step 7 is required for many algorithms and consists of saving the current error and controller values for use the next time the algorithm is called (i.e., u(k-1), e(k-1), e(k-2), . . ., depending on the algorithm). The control algorithms in particular, developed as difference equations, will take an input that was read in by previous program steps, perform some combination of mathematical functions on the input, possibly in relationship to previous values or with respect to multiple inputs, and then send the value to the appropriate output. In this way, we can change from a PID to a PI to an adaptive to a fuzzy logic controller using the same physical hardware simply by downloading a new algorithm (steps 5 and 7) to the programmable microprocessor. PCs, embedded microcontrollers, and PLCs (with programming capabilities) are all capable of this type of operation. Finally, remember that Table 6 is a general outline and many more lines of code are required to deal with all the practical issues that are bound to arise (e.g., emergency shutdown routines, data logging, maximums and minimums to prevent problems like integral windup, etc.). These extras are often fairly specific to the system being controlled and are based on an expert's knowledge of the system limits and characteristics.

10.7 DIGITAL TRANSDUCERS

We have already discussed several types of analog transducers in Chapter 6, and this section now summarizes some of the additional transducers available for use with digital controllers. While many can be made to work with analog controllers, there is usually an analog alternative that already exists. In addition, all the transducers listed in Section 6.4 will work with digital controllers using an AD converter chip. Some analog transducers also can be configured to send digital outputs using circuitry included with the transducer. The assumption here is that these transducers are able to interface directly with digital IO ports on the microprocessor without the use of an AD converter. Even with this assumption, several signal conditioning devices may be required to protect the microprocessor from incompatible inputs such as large voltages. It is usually simpler to convert digital signals to different ranges, since we only need to convert between two distinct levels, not a continuous range of values. Noisy digital signals may be cleaned up using components such as Schmitt triggers. In addition, several new comments about resolution and accuracy apply when using digital outputs compared with the analog output transducers listed earlier. Since the output is now digital, the same comments applied to data acquisition boards apply here. There will be only a finite number of positions with which to represent the transducer output signal. This will be seen in some of the transducers listed below. Instead of the signal being digitized by an AD converter, the transducer effectively performs this conversion (physical phenomena are continuous events), at a resolution that depends on the component and range of operation. The advantage, however, is that since a digital signal is transmitted from the transducer to the controller, the signal-to-noise ratio is much better, being almost immune to common levels of electrical noise.


10.7.1 Optical Sensors

Digital encoders are commonly used to measure linear and rotary position. Most encoders are circular devices in the shape of a disk with digital patterns engraved in the disk. The simplest ones are incremental optical angle encoders, where a single light source is on one side of the disk and a photodetector is lined up with the source on the other side of the disk. As the disk rotates, whether from direct shaft rotation or from corresponding linear motion (i.e., rack and pinion), slots in the disk continually interrupt the light source and provide a series of pulses to the computer. Thus, the resolution is simply the distance (and corresponding angle) between successive slots. This will only measure the incremental position by counting the pulses that have occurred. In the same way, velocity can be found by measuring the frequency of the pulse train. The obvious downside is that unless the starting position is known, only position relative to the initial position is available. Two light sources are required so that direction can be determined and the computer knows whether to count up or count down. Incremental optical encoders, although limited in resolution, are noise free since only the number of pulses matters, not the absolute magnitude or ''cleanliness.'' A typical incremental encoder example is shown in Figure 6. While incremental encoders work fine for velocity measurements, the actual position is often desired and absolute encoders must be used. As the shaft rotates, a different pattern is generated depending on position and direction. As Figure 7 shows, the number of rings determines the number of resolution bits for simple designs. Thus, as shown, a 3-bit encoder can recognize eight discrete positions, each spanning a range of 45 degrees. The input sequence that the digital ports would see if connected to the encoder is also given in Figure 7.
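The count-up/count-down decision with two offset channels (commonly called quadrature decoding) can be sketched as a lookup of valid state transitions. The state ordering below is one assumed wiring, not the only one:

```python
# Incremental encoder counting: two channels (A, B) 90 degrees out of
# phase let us infer direction from successive (A, B) states.
# Clockwise cycle assumed here: (0,0) -> (1,0) -> (1,1) -> (0,1) -> (0,0)
CW_NEXT = {(0, 0): (1, 0), (1, 0): (1, 1), (1, 1): (0, 1), (0, 1): (0, 0)}

def update_count(count, prev_state, new_state):
    """Count up on a clockwise transition, down on counterclockwise."""
    if new_state == prev_state:
        return count                  # no edge seen this scan
    if CW_NEXT[prev_state] == new_state:
        return count + 1              # clockwise: count up
    return count - 1                  # counterclockwise: count down

# Three clockwise steps, then one reversal.
states = [(0, 0), (1, 0), (1, 1), (0, 1), (1, 1)]
count = 0
for prev, new in zip(states, states[1:]):
    count = update_count(count, prev, new)
print(count)  # 3 steps forward, 1 back -> 2
```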
Although the resolution in this example is not very good, encoders are available with 16-bit resolution. Even 12-bit encoders result in less than 0.1 degree of resolution (360 degrees/4096 levels). Thus, it is possible to have both good resolution and noise immunity using optical encoders. The code sequence listed is called Gray code, named after Frank Gray of Bell Laboratories, in which only one bit changes at a time. Hence, if one window is misread, errors are less likely than with straight binary sequencing, where several digits may change at one time.
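The standard reflected Gray code can be generated with a single exclusive-or, which makes the one-bit-change property easy to verify for the 3-bit case (the disk in Figure 7 may use a rotated variant of this sequence):

```python
def to_gray(n):
    """Convert a binary count to its reflected Gray code equivalent."""
    return n ^ (n >> 1)

# 3-bit sequence: adjacent positions (including the wraparound from
# position 7 back to 0) differ in exactly one bit.
codes = [to_gray(n) for n in range(8)]
for a, b in zip(codes, codes[1:] + codes[:1]):
    changed_bits = bin(a ^ b).count("1")
    assert changed_bits == 1
print([format(c, "03b") for c in codes])
# ['000', '001', '011', '010', '110', '111', '101', '100']
```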

Figure 6

Typical incremental encoder.

Figure 7

Example 3-bit absolute encoder output (Gray code).

Finally, this whole idea can be applied in one or two linear dimensions, where a grid is set up with light sources (commonly LEDs) and the X-Y position of an object can be determined. The encoders used for velocity are identical to those described above, but now the frequency of the pulse train is desired. Some boards directly accept frequency inputs, while others must be programmed to count a specific number of pulses and divide by the elapsed time. There is a trade-off between response and accuracy, since measuring the frequency from a single pair of pulses is very quick but prone to very large errors. Averaging more pulses decreases the error but increases the response time and corresponding lag. When digital signals are transmitted instead of analog signals, we also have the possibility of using fiber optics instead of traditional wire.

10.7.2 Additional Sensors

There are other ways to get digital pulses representing position or velocity. Mechanical microswitches can be switched on and off to represent position or be used to calculate velocities. These switches also can count the number of pieces made by having the signal go to a digital port configured as a counter. Other sensors might generate a voltage change, but not of the proper magnitude or sharpness required by a digital IO port. Variable reluctance proximity sensors, Hall-effect proximity sensors, and even magnetic pickups near rotating gear teeth in velocity applications can all be used by squaring up the signal using Schmitt triggers. Schmitt triggers, which are cheap and come packaged up to six on one IC, take a noisy oscillating signal and convert it to a square pulse train, just what the digital ports like to see. In fact, simple PWM circuits can be constructed by sliding a sine wave up and down relative to the switching level of a Schmitt trigger. Since Schmitt triggers have hysteresis built into the chip, the chances of getting additional pulses from noisy signals are greatly reduced. This effect is shown in Figure 8. This same type of device can be used to make many other signals compatible with our digital IO ports. For example, many axial turbine flow meters use magnetic pickups, Hall-effect sensors, or variable reluctance sensors. Obtaining a pulse train allows us to use our digital input ports as counters/frequency inputs and directly read the output from such meters. Almost any transducer that outputs an oscillating analog signal can be modified using devices like Schmitt triggers.
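The effect of hysteresis is easy to illustrate in software: two thresholds instead of one, so jitter between them cannot create extra pulses. The threshold values here are arbitrary:

```python
def schmitt(samples, low=0.3, high=0.7, state=0):
    """Square up a noisy signal: the output switches high only above
    `high` and low only below `low`, so jitter between the two
    thresholds is ignored (software analog of a Schmitt trigger)."""
    out = []
    for x in samples:
        if state == 0 and x > high:
            state = 1
        elif state == 1 and x < low:
            state = 0
        out.append(state)
    return out

# Noisy samples hovering near a would-be single threshold of 0.5
noisy = [0.1, 0.45, 0.55, 0.8, 0.65, 0.5, 0.9, 0.2, 0.4, 0.1]
print(schmitt(noisy))  # one clean pulse: [0, 0, 0, 1, 1, 1, 1, 0, 0, 0]
```

A single-threshold comparator at 0.5 would have chattered on the 0.45/0.55 and 0.5 samples; the hysteresis band suppresses those extra edges.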

Figure 8

Schmitt trigger used to obtain square wave pulse train from oscillating signal.

10.8 DIGITAL ACTUATORS

The primary actuator capable of directly receiving digital outputs is the stepper motor. Driving it directly from the microprocessor still requires relays (solid-state switches), since the current requirements are much larger than what a microprocessor can supply. Many stepper motor driver circuits are available which contain the required logic (step sequences) and current drivers, allowing the microprocessor to simply output the direction in the form of a high or low signal and the number of steps to move in the form of a digital pulse train. In this case only two digital outputs are required to control the actuator. Being a discrete device, the same criterion applies: resolution (number of steps) is of primary concern. The same advantages also apply, and we have excellent noise immunity. The stepper motor has become a strong competitor to the DC motor in terms of cost and performance. DC motors are capable of higher speeds and torque but are harder to interface with digital computers and cannot be run open loop. Since the stepper motor is a digital device, stability is never a problem, its brushless design means less wear, it is easy to interface with a digital computer, and it can be run open loop in many situations by recording the commands that are sent to it to monitor its position. There are two primary types of stepper motors, the permanent magnet and variable reluctance configurations. The permanent magnet configuration, as the name implies, has a rotor containing a permanent magnet and a stator with a number of poles. Then, as shown in Figure 9, the poles on the stator can be switched

Figure 9

Basic permanent magnet stepper motor.


and the rotor magnet will always try to align itself with the new magnetic poles. Permanent magnet stepper motors are usually limited to around 500 oz-in of torque, while variable reluctance motors may go up to 2000 oz-in of torque. Permanent magnet motors are generally smaller and as a result also capable of higher speeds. Speeds are measured in steps per second, and some permanent magnet motors range up to 30,000 steps per second. Resolutions are measured in steps per revolution, with common values being 12, 24, 72, 144, 180, and 200, which ranges from 30 degrees/step down to 1.8 degrees/step. Special circuitry allows some motors to half-step (hold a middle position between two poles) or microstep, leading to 10,000 steps per revolution or more. The trade-off is between speed and resolution, since for any given configuration the steps per second remains fairly constant. Variable reluctance stepper motors have a steel rotor that seeks the position of minimum reluctance. Figure 10 illustrates a simple variable reluctance stepper motor. As mentioned, variable reluctance motors are generally larger in size and slower than permanent magnet types but have the advantage in torque rating over their counterparts. To operate a stepper motor using open loop control (no position feedback), we must compare our required actuation forces with the stepper motor capabilities. These specifications are presented in Table 7. The holding torque is essentially zero when power is lost in variable reluctance stepper motors. Since permanent magnet motors will stay aligned with the path of least reluctance, there is always a holding torque even without power, called detent torque, although it is much less than the holding torque with power on. Most stepper motors will slightly overshoot each step since they are designed for maximum response times.
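Open-loop position tracking amounts to bookkeeping on the steps commanded. The sketch below assumes a generic two-phase full-step sequence and 200 steps per revolution (1.8 degrees/step); the phase labels are illustrative, not from a specific driver:

```python
# Full-step phase sequence for a simple two-phase motor; stepping
# through it forward or backward and counting gives the position.
SEQUENCE = ["A+", "B+", "A-", "B-"]
STEPS_PER_REV = 200                  # 1.8 degrees per step assumed

class OpenLoopStepper:
    def __init__(self):
        self.position = 0            # net steps from the start position

    def step(self, direction):
        """direction = +1 or -1; returns the phase to energize."""
        self.position += direction
        return SEQUENCE[self.position % len(SEQUENCE)]

    def angle(self):
        """Believed shaft angle, valid only if no steps were lost."""
        return 360.0 * self.position / STEPS_PER_REV

motor = OpenLoopStepper()
for _ in range(50):                  # 50 steps forward
    motor.step(+1)
for _ in range(10):                  # 10 steps back
    motor.step(-1)
print(motor.angle())  # 40 net steps -> 72.0 degrees
```

This is exactly the bookkeeping the throttle example below relies on: as long as the motor never loses synchronization, the recorded count is the position.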
Variable reluctance stepper motors generally have lower rotor inertia (no magnets) and thus may have a slightly faster dynamic response than comparably sized permanent magnet stepper motors. The pull-in and pull-out parameters and slew range are shown graphically in Figure 11. Typical of most electric motors, as the required torque increases, the available speed decreases, with the opposite also being true. Also, the pull-out torque at a given rate is greater than the pull-in torque, since a motor already running in synchronism does not have to accelerate its load from rest. An example is given in Figure 12 of how a stepper motor can be used to provide open loop control of engine throttle position. In this figure a stepper motor is connected to the engine throttle linkage using Kevlar braided line. In the laboratory setting this provided an easy way for the computer to control the position

Figure 10

Basic variable reluctance stepper motor.

Table 7 Important Parameters When Choosing Stepper Motors

Holding torque: The maximum torque that can be applied to a motor without causing rotation. It is measured with the power on.
Pull-in rate: Maximum stepping rate a motor can start at when loaded before losing synchronization.
Pull-out rate: Maximum stepping rate a motor can stop with when loaded, before losing synchronization.
Pull-in torque: Maximum torque a motor can be loaded to before losing synchronization while starting at a designated stepping rate.
Pull-out torque: Maximum torque a motor can be loaded to before losing synchronization while stopping from a designated stepping rate.
Slew range: Range of rates between the pull-in and pull-out rates where the motor runs fine but cannot start or stop in this range without losing steps.

of the IC engine throttle without requiring the use of feedback. As long as the required torques and desired stepping rates are always within the synchronized range, the computer can keep track of throttle position by recording how many pulses (and in which direction) are sent to the motor. In addition to the specialized application in this example, stepper motors are found in a variety of consumer products, including many computer printers, machines, and stereo components.

10.9 INTERFACING DIGITAL SIGNALS TO REAL ACTUATORS

Whenever actuators, analog or digital, are used with digital signals, we have to provide additional components to interface the digital signals to the higher power levels required by the actuators. Although a variety of components are used to increase the power levels, the most common one is the transistor. Mechanical relays, while great for switching large amounts of power and inductive (i.e., coil) loads, are not very fast and have a limited number of cycles before failure occurs. Trying to modulate an electrical signal using a mechanical relay would quickly wear it out. Solid-state relays (transistors), on the other hand, have lower power ratings but can be switched very

Figure 11

Common stepper motor characteristics.


Figure 12

Stepper motor control of engine throttle position.

rapidly, with almost infinite life if properly cooled. As we will see, this provides the basis for PWM. However, since inductive loads tend to generate a voltage spike (back EMF) when switched off, we must include protection diodes when using solid-state (transistor) relays with inductive loads. If the controller is especially sensitive to electrical noise and spikes, we can also add optical isolators for further protection. Optical isolators couple an LED with a photodetector so that the switch itself is completely isolated physically from the high-current signals. These are economical devices, easy to implement, and common components in a variety of applications.

10.9.1 Review of Transistor Basics

Since most of our interfacing is done with transistors, let us quickly review the basics. Much of the terminology used with transistors stems back to the component they replaced, the vacuum tube amplifier. It is safe to say that the transistor has affected all areas of our lives. As time progresses we tend to forget how large stereo amplifiers, radios, computers, TVs, transmitters, etc., all were because of the size of the vacuum tubes used. We take it for granted that virtually every electronic gadget can be made to operate from batteries and be carried in a small bag or pocket. Transistors have had the same effect on control systems, since they provide an economical and efficient method for connecting microprocessors with the actual actuators. Two basic types of transistors are commonly used: the bipolar transistor, commonly referred to simply as a transistor, and the field effect transistor, commonly referred to as a FET. Also seeing increasing use is the insulated-gate bipolar transistor, or IGBT. The primary role of a transistor is to act as an amplifier. It may be used as a linear amplifier (stereo amp) when driven with small input currents or as a solid-state relay when driven with larger currents. The advantages and disadvantages of each will become clear as we progress through this section. A special configuration obtained by connecting high gain bipolar transistors together is the silicon-controlled rectifier, or SCR. With the proper gains these devices are able to latch and maintain load current even when the input signal is removed.


Without going into all the inner construction details and electrical phenomena describing how they work internally, transistors (and diodes) are made from silicon materials that either want to give up electrons (n-type) or receive electrons (p-type). Diodes consist of just two slices (p and n) and act as one-way current switches or precision voltage regulators. When we connect three slices together, we get the common bipolar transistor, acting as a switch (or amplifier) that can be controlled with a much smaller current. This allows us to take signals from components like microprocessors and operational amplifiers and amplify them to usable levels of power. The two basic bipolar junction transistors, npn and pnp, are shown in Figure 13. The basic operation is described as follows: a small current injected into the base is able to control a much larger current flowing between the collector and emitter. The current amplification possible is the beta factor, defined in Figure 13. Normal beta factors are around 100 for a single transistor. If we need higher amplification ratios, we can use Darlington transistors. Darlington transistors are two transistors packaged together in series such that the beta ratios multiply and gains greater than 100,000 are possible. The transistor is capable of operating in two modes, switching (saturation) and amplification. Linear amplification is much more difficult, and generally it is best to use components designed as linear amplifiers for your actuator. Heat, the primary destroyer of solid-state electronics, is a much larger problem with linear amplifiers. Switching is much easier, and as long as we keep the transistor saturated while operating we should have fewer problems. To explain this fundamental concept further, let us examine the cutoff, active, and saturation regions for a common emitter transistor circuit. Figure 14 illustrates the curves defining these regions.

The manufacturers of such devices typically provide these curves. The basic definitions are as follows:

IC is the current through the load (and thus also the current through the transistor between the collector and emitter).
IB is the current supplied to the base from the driver and is used to control the power delivered to the load.

Figure 13

Type npn and pnp transistors.


Figure 14

Common emitter transistor circuit characteristics.

VCE is the voltage drop across the transistor as measured between the collector and emitter.
VBE is the voltage difference between the base and emitter.
VCE(sat) is the voltage drop between the collector and emitter when the transistor is operating in the saturated region.

If the transistor is not in saturation and is actively regulating the current (linear amplification), then VCE may or may not be close to zero. Since electrical power equals V × I, the power (in watts) dissipated by the transistor is VCE × IC. Therefore, during linear amplification in the active region, neither VCE nor IC is very close to zero, and heat buildup becomes a large problem. However, if we supply enough base current, IB, and ensure that the transistor is saturated, then for most transistors the voltage drop across the transistor, VCE, is less than 1 V and most of the power is dropped across the load. This reduces the problem of heat buildup, the main source of failure in transistors, and leads to the arguments in favor of methods like PWM. The design task then is determining what the base current needs to be without supplying so much that we build up heat at the input source. Example 10.2 demonstrates the process of choosing the required base resistance value that will supply enough current to keep the transistor operating in the saturated region. As the manufacturer's curves show, the values we choose for our calculations are very dependent on temperature. During operation, npn transistors are turned on by applying a high voltage (in digital levels) to the base, while pnp transistors are turned on by applying a low voltage (ground level) to the base. The connections between the load and transistor are commonly termed common emitter, common collector, or common base. In the common emitter connection the load is connected between the positive power supply and the collector, while the emitter is connected to the common (or ground) signal. With the common collector connection the load is connected to the emitter. In addition, npn transistors are cheaper to manufacture, and thus the most common solid-state switches are configured with common emitter connections using npn transistors.
Power gain is the greatest for the common emitter connection and thus it is the type most commonly seen. Figure 15 illustrates the common emitter connection and how the switching occurs when a base current is supplied to the npn transistor.

Figure 15

Using a npn transistor to switch a load on and off (common emitter connection).

When Vin is increased enough to saturate the transistor, VC is pulled near ground and the load is activated. In this type of circuit the negative terminal of the load floats high to the supply voltage, and there is no voltage drop across the load when no base current is supplied to the transistor. Even though the transistor now sees a larger voltage drop, there is no associated current, and the power dissipated by the transistor is near zero.

EXAMPLE 10.2

Referencing the common emitter circuit in Figure 15 and the typical characteristic curves given in Figure 14, determine the proper resistor value between Vin and the base to ensure that the transistor remains saturated. Assume that the transistor is sized to handle the load requirements and that the following values hold for the circuit and transistor (the transistor values are obtained from the manufacturer's data sheets):

VBE = 0.7 V @ 25°C
VCE(sat) = 0.7 V @ 25°C
V+ = 24 V
Rload = 2.3 Ω
β = 1000
Vin = 5 V

We want to keep the transistor operating in the saturated region to minimize heat buildup problems. First, we can calculate the required current drawn through the load and passing through the transistor when switched on as

IC = (V+ − VCE(sat))/Rload = (24 − 0.7)/2.3 = 10.1 A

Using the beta factor allows us to calculate the base current required for keeping the transistor in the saturated operating region:

IB = IC/β = 10.1/1000 = 10.1 mA

Finally, we can calculate the maximum resistor value that still provides the proper base current to the transistor:

RB = (Vin − VBE)/IB = (5 V − 0.7 V)/0.0101 A = 425 Ω
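The arithmetic of Example 10.2 is easy to check with a short script. The following Python sketch (the book's own listings use Matlab; this is simply a hedged translation) replays the three calculations with the circuit values given above.

```python
# Replaying Example 10.2: sizing the base resistor for saturation.
# Circuit/transistor values from the example (data-sheet values at 25 C).
V_supply = 24.0      # V+ supply voltage (V)
V_ce_sat = 0.7       # VCE(sat) of the transistor (V)
V_be     = 0.7       # base-emitter drop (V)
R_load   = 2.3       # load resistance (ohms)
beta     = 1000.0    # current gain from the data sheet
V_in     = 5.0       # digital drive voltage (V)

# Collector current when saturated: supply minus VCE(sat) appears across the load.
I_c = (V_supply - V_ce_sat) / R_load      # ~10.1 A
# Base current needed to hold saturation.
I_b = I_c / beta                          # ~10.1 mA
# Largest base resistor that still delivers that base current.
R_b = (V_in - V_be) / I_b                 # ~425 ohms

print(f"I_C = {I_c:.2f} A, I_B = {I_b*1000:.2f} mA, R_B = {R_b:.0f} ohms")
```

As the text notes next, a designer would round this value down slightly to guarantee saturation with margin.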

To ensure a small safety margin (making sure the transistor remains saturated), we could choose a resistor slightly less than 425 Ω, resulting in a slightly larger base current. We do want to be careful not to overdrive the base, as this unnecessarily puts extra strain on the driver circuit and builds up additional heat in the resistor.

Finally, to complete our discussion on transistors, let us examine field effect and insulated gate bipolar transistors (FETs and IGBTs). A common one used in many controller applications is the MOSFET, or metal oxide semiconductor field effect transistor. These are very similar to the bipolar junction devices, but instead we vary the voltage (not the current) at the control terminal to control an electric field. The electric field causes a conduction region to form in much the same way that a base current does with bipolar types. The advantage is this: the controlling terminal, called the gate, sees a very high impedance (≈10^14 Ω), and thus we do not have to worry about proper biasing as we did with the base junction of our bipolar junction transistors. Figure 16 illustrates this difference. Since the gate impedance is so high in the MOSFET device, the gate current is essentially zero and only the gate voltage controls the power amplification. This allows us to directly interface a digital output with a field effect transistor to control actuators requiring more power. The only thing that prevents us from directly driving a MOSFET with a microprocessor (TTL) output is the voltage level. MOSFET devices require approximately 10 V to ensure saturation, and thus a voltage multiplier circuit (or step-up transformer) may be required. Since the input impedance is very large, they can be driven with much larger voltages without damage. To operate one as a linear amplifier, we would vary the gate voltage to control the power delivered to the load. IGBT devices combine many of the properties of bipolar and field effect transistors.
Their construction is more complex and uses a MOSFET, npn transistor, and junction FET to drive the load, thus exhibiting a combination of characteristics. The advantage is that we get the high input impedance of a MOSFET and the lower saturation voltage of a bipolar transistor. Table 8 compares the characteristics of the different types of switching devices with the same power ratings.

Figure 16  MOSFET and BJT switching comparison.

Table 8  Bipolar, MOSFET, and IGBT Characteristics

Characteristic                Bipolar            MOSFET     IGBT
Drive signal                  Current            Voltage    Voltage
Drive power (relative)        Medium to large    Small      Small
Comparison (equal ratings):
Current rating (A)            20                 20         20
Voltage rating (V)            500                500        600
Resistance (@25°C, Ω)         0.18               0.20       0.24
Resistance (@150°C, Ω)        0.24               0.6        0.23
Rise times (nsec)             70                 20         40
Fall times (nsec)             200                40         200

Sources: Takesuye J, Deuty S. Introduction to Insulated Gate Bipolar Transistors. Application Note AN1541, Motorola Inc.; and Clemente S, Dubashni A, Pelly B. IGBT Characteristics. Application Note AN983A, International Rectifier.

From Table 8 we see the advantages and disadvantages of the different devices commonly used as power amplifiers and high-speed solid-state relays. MOSFET devices are generally more sensitive to temperature but are easy to interface and very fast. IGBT devices are less sensitive to temperature and still easy to interface (no current draw) but have longer switching times. Bipolar transistors, on the other hand, are less susceptible to static electricity and are capable of handling larger load currents. All of these transistors are susceptible to over-voltage and should be protected when switching inductive loads, which produce a large voltage spike when turned off. As shown in Figure 16, a diode (commonly called a flyback diode) is placed across the load to protect the transistor from the large voltages that may occur when inductive loads are turned off. Since coils of wire, as found in virtually all motors and solenoids, are inductors, the flyback diode is a common addition to transistor driver circuits. By now it should be clear how we can use transistors as switches and why we would like to keep them in the saturated region whenever they are on: when transistors operate in their saturated regions the power dissipation, and thus the heat buildup within the transistor, is greatly reduced. This is why PWM has become so popular.

10.9.2 PWM

PWM is a popular method used with solid-state switches (transistors) to approximate a linear power amplifier without the high cost and size. Transistors are very easy to implement in circuits when they act as switches and operate in the saturated range, as the preceding section demonstrates. An example of this is the cost of switched power supplies versus linear power supplies with similar ratings.
Of course, there are trade-offs, and if cost, size, power requirements, and design times were not considered, we would always choose a linear amplifier. However, since cost tends to carry overwhelming weight in the decision process, for many applications PWM methods using simple digital signals and transistors make more sense. First, let us quickly define what PWM is and the basis on which it works.


PWM can be defined with three terms: amplitude, frequency, and duty cycle. The amplitude is simply the voltage range between the high and low signal levels; for example, in a 0–5 V signal, 0 V represents the magnitude when the pulse is low and 5 V the magnitude when the pulse is high. Common voltage levels range between 5 and 24 V. The frequency is the base cycling rate of the pulse train but does not represent the amount of time a pulse is on. We normally think of square wave pulse trains as having equal high and low times; this is where PWM is different. Although the pulses occur at the set frequency, fixed by the PWM generator, the amount of time each pulse is on is varied, and thus the idea of duty cycles. Since the pulse train operates at a fixed frequency, we can define a period as

Period = T = 1/f = 1/frequency (Hz)

The duty cycle is then the amount of time, t, which the pulse train spends at the high voltage level, as a percentage of the total period, ranging between 0 and 100%:

Duty cycle = (t/T) × 100 (%)
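These two definitions are easy to encode. The helper below is a Python sketch of our own (the function name and arguments are illustrative assumptions) that converts a PWM frequency and on-time into the period, duty cycle, and the average voltage an integrating load effectively sees.

```python
def pwm_stats(freq_hz, t_on, v_high, v_low=0.0):
    """Return (period, duty cycle in percent, average voltage) for a PWM train."""
    period = 1.0 / freq_hz                 # T = 1/f
    duty = t_on / period * 100.0           # percent of each period spent high
    # An integrating (averaging) load sees the duty-cycle-weighted voltage.
    v_avg = v_low + (v_high - v_low) * duty / 100.0
    return period, duty, v_avg

# 5 V amplitude, 1 kHz PWM, on for 0.3 ms each period -> 30% duty, 1.5 V average
T, d, v = pwm_stats(1000.0, 0.3e-3, 5.0)
print(f"{T*1e3:.1f} ms period, {d:.0f}% duty, {v:.2f} V average")
```

The 30% duty / 1.5 V case is the same numeric example used in the discussion that follows.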

At a 75% duty cycle, the pulse is on 75% of the time and off only 25% of the time. This concept and the terms used are shown in Figure 17. Notice that the period never changes, only the percentage of time during each period that the signal is turned on. The idea behind using PWM is to choose the proper frequency such that our actuator acts as an integrator and averages the area under the pulses. Obviously, if our actuator bandwidth is very high and near the PWM frequency, we will have many problems, since the actuator will try to replicate the pulse train and introduce large transients into our system. But, for example, if the pulse frequency is high enough, with a 5-V amplitude and 30% duty cycle we expect the actuator current level to be as if it were receiving 1.5 V. Remember our basic inductor relationship, V = L di/dt, or solving for the current, i = (1/L)∫V dt; thus inductors will take the

Figure 17  General characteristics of PWM signals.

switched voltage and integrate it, resulting in an average current. The current the device actually uses is illustrated in Figure 18. The goal is to cycle the pulses quickly enough that the current does not rise or fall significantly between pulses. Most devices that include PWM outputs will have several selectable PWM frequencies. In general, it is best to use the highest frequency possible. Some exceptions are devices with high ‘‘stiction’’ (static friction that must be overcome) where some dithering is desired. In such cases it is possible that too high a frequency will result in a decrease in performance, since it allows the device to ‘‘stick.’’ A better guideline during the design process is to use the PWM signal to average the current and to add a separately controlled dither signal at the appropriate amplitude and frequency. This allows us to decouple the frequencies and amplitudes of the PWM and dither signals and optimize each effect instead of compromising both. When we progress to building these circuits, bipolar (or Darlington), MOSFET, and IGBT types may all be used. The bipolar types are current driven and the field effect types are voltage driven. Thus, with the bipolar types we need to size the base resistor to ensure saturation (see Example 10.2). As shown in Figure 19, if we cannot drive the device with enough current, then we can stage transistors, similar in concept to using a single Darlington transistor. In addition, there are many IC chips available that are specifically designed for driving the different types of transistors. With the MOSFET devices we only need to ensure that the voltage on the gate is large enough to cause saturation. This is usually 10 V, and therefore even though the current requirements are essentially zero, the voltage may be greater than what a microprocessor outputs. Sometimes a simple pull-up resistor will allow us to interface MOSFETs directly with microprocessor outputs.
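The current-averaging behavior shown in Figure 18 can be reproduced numerically. The Python sketch below (our own construction; the R and L values are made-up assumptions) integrates V = L di/dt + iR for a switched source with a crude Euler loop and checks that the ripple settles around the duty-cycle average duty × V/R.

```python
# Crude Euler simulation of an R-L load driven by a PWM voltage source.
R, L = 2.0, 0.01                          # assumed load: 2 ohm, 10 mH
V_high, duty, f_pwm = 5.0, 0.3, 2000.0    # 5 V pulses, 30% duty, 2 kHz
dt = 1e-6
period = 1.0 / f_pwm

i = 0.0
t = 0.0
samples = []
while t < 0.05:                           # run 50 ms, well past the L/R time constant
    v = V_high if (t % period) < duty * period else 0.0
    di = (v - i * R) / L                  # from V = L di/dt + i R
    i += di * dt
    if t > 0.04:                          # record only after the transient dies out
        samples.append(i)
    t += dt

i_avg = sum(samples) / len(samples)
# The steady average should sit near duty * V_high / R = 0.75 A for these values.
print(f"average current ~ {i_avg:.3f} A")
```

Raising f_pwm shrinks the ripple between pulses, which is exactly the "cycle the pulses quickly enough" guideline above.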
A simple PWM circuit using a MOSFET is given in Figure 20. Since the resistance of a MOSFET device increases with the temperature at the junction, it is considered stable. If our load current is too large and the MOSFET warms up, its resistance also increases, which tends to decrease the current through the load and transistor. If the opposite were true (as is possible in some other types), then the transistor would tend to run away: as the resistance falls, it continues to draw more current and build up additional heat, self-propagating the problem. Where this

Figure 18  Current averaging of PWM signals by inductive loads.

Figure 19  Typical actuators driven by PWM bipolar transistor circuits.

characteristic of MOSFET devices works to our advantage is when we need more current capability and thus connect several MOSFET devices in parallel, as shown in Figure 21. If one of the MOSFET amplifiers begins to draw more current than the others, it will increase more in temperature. This leads to an increase in resistance relative to the other transistors and therefore less current. In this way each MOSFET is self-regulating and stable when connected in parallel. So we see that digital PWM outputs can be interfaced effectively with many actuators without the cost and complexity of linear amplifiers. There are also additional extensions of PWM that allow us to meet additional system needs. With the addition of a filter we can make the PWM signal act as an analog output, much like a DA converter, and even use it as a waveform generator by changing the duty cycle into the filter with a prescribed pattern (i.e., a sinusoidal wave). For this to work, the filter needs to integrate the area under the pulses in much the same way an actual actuator does. A low-pass filter will accomplish this task for us if we limit the ‘‘analog’’ output signal frequencies to approximately 1/4 of the PWM frequency. As an example, if we wish to use our PWM output to generate a sinusoidal command signal of 1 kHz, then we need a PWM frequency of at least 4 kHz. While simple RC filters (remember from our earlier work that the corner frequency is simply 1/(time constant) on Bode plots) with a corner frequency 1/4 of the PWM frequency will work, we can achieve much cleaner signals by using active filters (i.e., OpAmps) because of their flatter response and sharper cutoff.
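The 1/4-of-PWM-frequency guideline translates into a quick RC calculation. This Python sketch is our own illustration (the 10 kΩ resistor is an assumed value), picking a capacitor for a first-order low-pass with its corner, here taken in Hz as f_c = 1/(2πRC), at one quarter of a 4 kHz PWM rate.

```python
import math

def rc_capacitor_for_corner(f_corner_hz, r_ohms):
    """Capacitor giving a first-order RC low-pass corner at f_c = 1/(2*pi*R*C)."""
    return 1.0 / (2.0 * math.pi * r_ohms * f_corner_hz)

f_pwm = 4000.0            # 4 kHz PWM, enough for a 1 kHz 'analog' signal
f_corner = f_pwm / 4.0    # the 1/4-of-PWM-frequency guideline from the text
R = 10e3                  # assume a 10 kohm resistor
C = rc_capacitor_for_corner(f_corner, R)
print(f"corner at {f_corner:.0f} Hz -> C ~ {C*1e9:.1f} nF")   # roughly 16 nF
```

A designer would then round to the nearest standard capacitor value, or move to an active filter for a sharper cutoff as the text suggests.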

Figure 20  Typical actuators driven by PWM MOSFET circuits.

Figure 21  Increasing the power capabilities using MOSFET devices in parallel.

A similar alternative to PWM is PFM, or pulse frequency modulation. The concepts are similar except that with PFM the pulse width is constant and the frequency at which the pulses occur is varied. In general PWM is more common, but there are still applications using PFM, an example being found in most RC servomotors. In conclusion, PWM is very common, largely due to the ease with which digital signals can be amplified and used to control analog actuators. There are many styles and sizes of transistors, each optimized to a certain task, and virtually every manufacturer supplies design notes and application notes about interfacing power transistors and PWM control. Because of their widespread use, many supporting components are readily available, from suppression diodes and driver circuits to the software algorithms generating the signal.

10.10 PROBLEMS

10.1 List three advantages associated with using a PC as a digital controller.
10.2 List three characteristics associated with PLCs.
10.3 What is the advantage of using differential inputs on computer data acquisition boards (AD converters)?
10.4 Why is it beneficial to have multiple user-selectable input ranges on computer data acquisition boards?
10.5 What is the minimum change in signal level that can be detected using a 12-bit AD converter set to read 0–10 V, full scale?
10.6 What is the minimum change in signal level that can be detected using a 16-bit AD converter set to read 0–5 V, full scale?
10.7 Describe an advantage and disadvantage of using program-controlled sample times in digital controllers.
10.8 Describe an advantage and disadvantage of using interrupt-controlled sample times in digital controllers.
10.9 Describe the primary ways in which a microcontroller differs from a microprocessor.
10.10 List the two primary factors that affect how fast a microcontroller will operate.
10.11 PLCs are commonly programmed using what type of diagrams?


10.12 What term describes the elements used to carry the power and ground signals in ladder logic diagrams?
10.13 Controllers are commonly implemented in microprocessors using algorithms represented in the form of what type of equation?
10.14 What devices may be used to clean up noisy signals before they are read by a digital input port?
10.15 What term describes the type of optical encoder capable of knowing the current position without needing prior knowledge?
10.16 A 16-bit rotary optical encoder will have a resolution of how many degrees?
10.17 Describe two advantages of using stepper motors over DC motors.
10.18 What type of stepper motor, permanent magnet or variable reluctance, is generally capable of the largest torque output?
10.19 What type of stepper motor, permanent magnet or variable reluctance, is generally capable of the largest shaft speed for equivalent resolutions?
10.20 With stepper motors, the maximum shaft speed is a function of what two characteristics?
10.21 List two advantages to using transistors as switching devices as opposed to linear amplifiers.
10.22 When using a transistor as a solid-state switch, it is important to operate it in what range when it is turned on?
10.23 What are two advantages to using field effect transistors as switching devices as compared with bipolar transistors?
10.24 What is the importance of placing a flyback diode across an inductive load that is driven by a solid-state transistor?
10.25 Describe the three primary values used to describe a PWM signal.
10.26 The PWM period must be fast enough that the actuator responds in what way to the signal?
10.27 What type of transistor is easily used in parallel to increase the total current rating of the system?
10.28 Describe a similar but alternative method to PWM.
10.29 Locate the device characteristics and part numbers and sketch a circuit using an npn bipolar transistor to PWM control a 10 A inductive load.
10.30 Locate the device characteristics and part numbers and sketch a circuit using a MOSFET to PWM control a 20A solenoid coil.

11 Advanced Design Techniques and Controllers

11.1 OBJECTIVES

- Develop the terminology and characteristics of several advanced controllers.
- Learn the strengths and weaknesses of each controller.
- Develop design procedures to help choose, design, and implement a variety of advanced controller algorithms.
- Learn some applications where advanced controllers are successful.

11.2 INTRODUCTION

In this chapter we examine the framework around advanced controller design. Some controllers examined here are becoming more common, and it may no longer be correct to label them ‘‘advanced,’’ although in reference to classic, continuous, linear, time-invariant controllers the term is appropriate. The goal here is to introduce several different controllers, their options, relative strengths and weaknesses, and current applications. It is hoped that this chapter ‘‘whets the appetite’’ for additional learning. The field is very exciting, if not overwhelming, when we follow it and realize how fast it is growing and changing. Here are some introductory comments regarding advanced controller design. First, all methods presented thus far in this text relate to the design, modeling, simulation, and implementation of feedback controllers, that is, controllers that operate on the error between the desired command and actual ‘‘feedback.’’ In all cases, before and including this chapter, the skills for acquiring/developing accurate physical system models (and a good understanding of the system) are invaluable. Although in some cases the model is only indirectly related to the actual controller design, knowledge of the system (and hence the model) will always help in developing the best controller possible. In general for feedback controllers, regardless of algorithms or implementations, the goal is to design and tune them to capably handle all the system unknowns (always present), which cause the errors in our system. These errors arise primarily from disturbances and incorrect models.


One problem is that feedback controllers are reactive and must wait for an error to develop before appropriate action can be taken; thus, they have built-in limitations. Examples of previous controllers in this category include using integral gains to help eliminate steady-state errors and state space feedback controllers using full state command and feedback vectors. In addition to being reactive, we seldom have access to all states and must then use observers to estimate unmeasured states. As this chapter demonstrates, we can use the measured states to force the observer to converge to the measured variables. Instead of waiting for errors to develop, as in the above examples, we can, whenever possible, use feedforward techniques to provide disturbance input rejection and minimal tracking error. In some cases these are essentially ‘‘free’’ techniques and can provide greatly enhanced performance. Feedforward design techniques are presented in this chapter. Additional topics include multivariable controllers, where we attempt to decouple the inputs such that each input has a strong correlation with one output. This helps in controller design and performance. Finally, in addition to considering the appropriate feedback and feedforward routines, we can assess the value of using adaptive controller methods to vary the feedback and feedforward gains for zero tracking error. Model reference adaptive controllers, system identification algorithms, and neural nets are all examples in this class and are presented in following sections. Some of these fall into the nonlinear category, which brings additional concerns of stability and performance.
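As a tiny illustration of the observer idea mentioned above, using measured outputs to pull the estimate toward the true state, here is a Python sketch of a Luenberger-style observer for a discrete double integrator. The system, the hand-picked gains, and all numbers are our own illustrative assumptions, not from the text.

```python
# Discrete double integrator: position is measured, velocity is estimated.
dt = 0.01
A = [[1.0, dt], [0.0, 1.0]]          # x[k+1] = A x[k] + B u[k]
B = [0.5 * dt * dt, dt]
L_gain = [0.5, 2.0]                  # observer gains (hand-picked for illustration)

def step(x, u):
    """One state update of the double integrator."""
    return [A[0][0]*x[0] + A[0][1]*x[1] + B[0]*u,
            A[1][0]*x[0] + A[1][1]*x[1] + B[1]*u]

x  = [0.0, 1.0]      # true state: starts moving at 1 m/s
xh = [0.0, 0.0]      # observer starts with the wrong velocity estimate
for _ in range(500):
    u = 0.0
    y = x[0]                          # only position is measured
    innov = y - xh[0]                 # measurement residual
    xh = step(xh, u)                  # predict
    xh[0] += L_gain[0] * innov        # correct prediction with the residual
    xh[1] += L_gain[1] * innov
    x = step(x, u)

print(abs(x[1] - xh[1]))   # velocity estimate error decays toward zero
```

The correction term L(y − ŷ) is what forces the estimate to converge to the measured variable; without it the initial velocity error would persist forever.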
While in preceding chapters we developed the groundwork on which millions of controllers have been designed and are now operating successfully, this chapter illustrates that the previous chapters are analogous to studying the tip of the iceberg that is visible above the surface; once we have finished that material, a whole new world, much larger than the first, awaits us as we dig deeper into the design of control systems. With the rapid pace at which such systems are developing, we can easily spend a lifetime uncovering and solving new problems.

11.3 PARAMETER SENSITIVITY ANALYSIS

In most controllers, whether or not the final performance is adequate is related to the quality of the model used during the design phase. The goal of parameter sensitivity analysis is to gain some understanding of how the controller will behave in the presence of such errors. In particular, we wish to be able to predict how sensitive the performance is to errors in any of the parameters used in designing the controller. In the era of computers this analysis has become much easier. Analytical techniques do exist, based on methods originally suggested by Bode, but they quickly become very tedious for larger systems. The concept, however, is quite simple and can be partially explained using our knowledge of root locus paths. Until now we have generally varied the proportional gain K to develop the plot, which enables us to use the convenient graphical rules. When we had several controller gains (i.e., proportional derivative [PD]), as in Chapter 5 (see Example 5.9), root contour plots were used to draw root loci paths for different combinations of gains. This same technique can be used to develop multiple loci paths for different system parameters. For example, we can develop the first root locus plot for our expected values (mass, damping, stiffness, etc.), and then repeat the process, drawing additional loci paths for


different values of mass, damping, etc. In this way we can see the effect that adding additional mass might have on the stability of our system. An example is this: for a typical cruise control system we expect the vehicle to have an average mass, including several passengers. This is our default configuration for which the controller is designed. A good question to ask, then, is what happens when the vehicle is loaded with passengers and luggage? How will the control system now behave? Using root contour lines we can plot the default loci paths, vary the mass, and plot additional paths to see how the stability changes, allowing us to verify satisfactory performance at gross vehicle weights. A second way, made possible with the computer, is, instead of plotting multiple lines, to use the computer to vary the parameter under investigation instead of the gain K, solve for the system poles at each parameter change, and sequentially plot the pole migrations on the root locus plot to see how the poles move when the parameter is varied. This does require that the computer solve for the roots of the characteristic equation multiple times. It is sometimes possible to rearrange the characteristic equation so that the parameter of interest behaves as a gain, allowing the standard rules to be used. Finally, in both methods we still have not answered the whole question of parameter sensitivity, since we have not examined the rate of change. It is possible from each method to extract this information. With the contour plots, assuming that we varied the second parameter by equal amounts (1, 2, 3, ...), we can look at the spacing between the lines to see the rate of change. If successive loci paths are close together, the parameter does not cause a large rate of change over the existing loci paths.
However, when they are spaced far apart, it signifies that an equal variation in the parameter caused a much larger change in the placement of the loci paths and the system is sensitive to changes in that parameter. In a similar fashion with the second method, if we plot the individual points used to generate the loci, the distance between the points indicates the sensitivity to that parameter. When the points are far apart it signifies that an equal change in that parameter caused the loci to move much further along the loci paths. This is the basis for several of the analytical methods where we calculate the rate of change of root locations as a function of the rate of change of parameter variations.

EXAMPLE 11.1

Given the closed loop transfer function below, use Matlab to vary the parameters m and b from 1/2 of to twice their nominal values and determine how sensitive the system is to variations in those parameters. The nominal values are

m = 4, b = 12, and k = 8

C(s)/R(s) = k/(m s^2 + b s + k)

The nominal poles are placed at s = -1 and s = -2, two first-order responses. What we are interested in is how the poles move from these locations when m and b are varied. To generate these plots in Matlab, we can define the parameters and the transfer function, vary the parameters, and after each variation recalculate the pole locations and plot them. The example code is as follows:

%Parameter Sensitivity Analysis
%Define the nominal values
m=4; b=12; k=8; Points=41;
%Vary m and b
mv=linspace(m/2,m*2,Points);
bv=linspace(b/2,b*2,Points);
%Generate the poles at each m and b
for i=1:Points
    mp(:,i)=pole(tf(k,[mv(i) b k]));
    bp(:,i)=pole(tf(k,[m bv(i) k]));
end;
%Plot the real vs. imag components (mass variation)
plot(real(mp(1,:)),imag(mp(1,:))); hold;
plot(real(mp(2,:)),imag(mp(2,:)))
%Plot the real vs. imag components (damping variation)
hold; figure;
plot(real(bp(1,:)),imag(bp(1,:))); hold;
plot(real(bp(2,:)),imag(bp(2,:)))

First, the mass m is varied, and each time the new roots of the characteristic equation are solved for and plotted. The resulting root loci paths are given in Figure 1. In similar fashion, the damping can be varied and new system poles calculated, as shown in Figure 2. What is quite evident is that in both cases the parameter variations cause significant changes in the response of our system. Remembering that the nominal closed loop pole locations are at s = -1 and -2, due to the mass and damping variations they either spread out along the negative real axis or become complex conjugates and proceed toward the imaginary axis and marginal stability. As the mass and damping parameters move further away from their nominal values, the rate of change, or sensitivity, also decreases, as noted by the spacing between successive iterations.

Figure 1  Example: root loci resulting from variations in mass (Matlab).

Figure 2  Example: root loci resulting from variations in damping (Matlab).
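The same pole-migration study can be reproduced outside Matlab. The Python sketch below (our own translation, using numpy.roots on the characteristic polynomial m s^2 + b s + k) sweeps the mass exactly as the example code does and reports the pole locations at the sweep extremes.

```python
import numpy as np

m, b, k = 4.0, 12.0, 8.0        # nominal values from Example 11.1
points = 41

poles_vs_m = []
for mv in np.linspace(m / 2, m * 2, points):
    # Closed loop characteristic equation: mv*s^2 + b*s + k = 0
    poles_vs_m.append(np.roots([mv, b, k]))

nominal = np.roots([m, b, k])    # poles at s = -1 and s = -2
print(sorted(nominal.real))
print(poles_vs_m[0])             # poles at m/2: spread on the real axis
print(poles_vs_m[-1])            # poles at 2m: a complex-conjugate pair
```

Plotting the real versus imaginary parts of each array reproduces Figure 1; the transition from real poles to a complex pair as m grows is the migration toward marginal stability described above.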

Using the concepts described here, most simulation programs will allow us to change model parameters and reevaluate the new response of the system. Even if we have large block diagrams or higher level object models, we can still vary different parameters and observe how the simulated response is affected. Since most applications are expected to experience variations in the system parameters used in the model, the procedure is important if we desire to check for global stability even in the presence of changes in our system. In other words, even if our original root locus plot predicts global stability for all gains, it is possible with variations in our parameters to still become unstable. Parameter sensitivity analysis is one tool that is available for evaluating these tendencies.

11.4 FEEDFORWARD COMPENSATION

Feedforward compensation is fairly common and is generally used in two ways: first, to reduce the effects of measurable disturbances, and second, to provide superior tracking characteristics by applying it to the input command. There are variations on how these are implemented, and we examine several types in the two sections below. It is important to remember that the success of these methods generally depends on the accuracy of the model used during development. The idea behind feedforward control is very simple: instead of waiting for an error to develop as a feedback controller must do, feed forward to the amplifier/actuator the input that causes the system error to be zero, thus taking the proactive approach (and hence the name feedforward). Whereas the feedback controllers presented up until this point are reactive, feedforward devices are proactive and take corrective action before the error has a chance to occur. This is why feedforward controllers can be used to improve tracking accuracy. The extent to which they improve accuracy depends largely on our modeling skills: the better the model, the more improvement (assuming it can be implemented). Except in simple cases, feedforward algorithms are much more practical to implement using microprocessors.


There are several physical reasons why it may not be possible to implement feedforward control. For disturbance input rejection, we need some measurement reflecting the current disturbance amplitude (a direct measurement is best, but not always necessary). Some disturbances are then a lot easier to remove than others. With command feedforward, we generally need to know the command in advance; in other words, we need access to the future values. For many processes that follow a known command signal this is no problem, since we know what each consecutive command value will be. An example is a welding robot used on an assembly line where the same trajectory is repeated continually. Now let us examine each method in a little more detail.

11.4.1 Disturbance Input Rejection

Disturbance input rejection seeks to measure the disturbance and apply the opposite input (force, voltage, torque, etc.) required to negate the real disturbance input. Obviously, the extent to which this is accomplished is very dependent on the ability to measure the disturbance and on having an actuator capable of negating it. It is sometimes possible, although less effective, to arrive at the disturbance indirectly through a different measured variable. This usually has some loss associated with the dynamics of the system between the measured variable and the estimated disturbance. For those cases where the disturbance is easily measured, for example, the slope of the hill acting on a vehicle with cruise control, the procedure is quite straightforward. Consider the general block diagram in Figure 3. This is the block diagram for a general controller and system where the system is acted on by a disturbance input. For a cruise control system the variable between GC and G1 would be the throttle position command (i.e., volts), G1 contains the throttle actuator and engine model (torque output), and G2 contains the vehicle model (torque in, speed out). In this case the disturbance may be caused by hills and/or wind gusts, and G3 would relate how the wind or hills affect the torque. Earlier we closed the loop to obtain the relationship between the output and disturbance and found the following closed loop transfer function:

C/D = G2 G3 / (1 + GC G1 G2)

The effects of the disturbance are minimal when GC and G1 are large compared to G2, but the effect of the disturbance is always present, especially if G3 is large. Now, assuming we can measure the disturbance input, let us examine the system in Figure 4. The first step again is to find the transfer function between the

Figure 3  General system block diagram with disturbance input.

Figure 4  General system block diagram with disturbance rejection.

disturbance and system output. This can be done using the block diagram reduction techniques covered earlier:

C = G2 G3 D + G1 G2 GD D − GC G1 G2 C
C (1 + GC G1 G2) = (G2 G3 + G1 G2 GD) D

and

C/D = G2 (G3 + G1 GD) / (1 + GC G1 G2)

We can make several observations. The denominator stays the same, and thus feedforward disturbance input rejection does not affect our system dynamics (i.e., the pole locations from the characteristic equation). Hence, for example, we cannot use it to make our system exhibit different damping ratios. The upside of this is that we have another tool to reduce the effects of disturbances without increasing the proportional and integral gains and causing more overshoot and oscillation. Examining the numerator, we see the opportunity to make it equal to zero. If this is possible, then in theory at least the disturbance has absolutely no effect on our system output, since C/D = 0. To solve for GD, set the numerator equal to zero, resulting in a desired transfer function for GD:

G3 + G1 GD = 0
GD = −G3/G1

This defines our GD required to ‘‘eliminate’’ the effects of disturbances. In terms of our cruise control example, GD is found by measuring the wind or grade, using G3 to convert it to an estimated torque disturbance, and dividing by G1 to go from estimated torque to estimated volts of command to the throttle actuator. The negative sign implies that if the disturbance is a negative torque (i.e., downhill), the throttle command must be decreased. It should be noted that the numerator would also be zero if G2 equaled zero. However, this implies that no input to the physical system will cause a change in the system output and the system itself can never be controlled. There are several reasons why we cannot completely eliminate the disturbance effects using controllers on real systems. First, G1 and G3 are models representing physical components (engine and disturbance to torque relationships) and thus our rejection of disturbances is dependent (once again) on model quality. Especially in the case of a typical internal combustion engine, using linear models will limit the


Chapter 11

effectiveness of rejecting disturbances, since the actual engine is inherently very nonlinear. Second, since we are measuring the disturbance, there will be both measurement errors and noise, along with the problem that our measurement does not solely explain the change in system output. Finally, we have physical limitations. Some disturbances will simply be too large for our controller to handle. For example, at some grade the vehicle can no longer maintain the commanded speed due to power limitations. In this case any controller implementation (feedforward, feedback, adaptive, etc.) will have the same problem in responding to the disturbance. In conclusion, disturbance rejection is commonly used to reduce tracking error without relying completely on the feedback controller. The system dynamics are not changed, and the effectiveness of the feedforward controller is limited by our modeling capabilities. In discrete systems we are limited to difference equations based only on current and past disturbance measurements.

EXAMPLE 11.2

Find the transfer function GD that decouples the disturbance input from the effects on the output of the system given in Figure 5. Assume that the disturbance is measurable. When we close the loop for C/D and set it equal to zero, as shown in this section, we get the requirement that

G3 + G1 GD = 0

GD = −G3 / G1

For our system in Figure 5:

G1 = 5 / [(s + 1)(s + 5)]   and   G3 = 2

Thus, to make the numerator always equal to zero (and hence remove any effect from the disturbance input), we set GD equal to

GD = −G3 / G1 = −2 / {5 / [(s + 1)(s + 5)]} = −(2/5)(s + 1)(s + 5)
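The cancellation, and the gain-only fallback discussed next, can be checked numerically. The sketch below is our own illustration, not from the text: the PI controller Gc and the load model G2 are assumptions for the example, and each transfer function is simply evaluated at one frequency s = jω.

```python
# Our own numeric sketch (not from the text): G1 and G3 are from Figure 5;
# Gc (a PI controller) and G2 (a simple load model) are assumed for illustration.
def G1(s): return 5.0 / ((s + 1) * (s + 5))
def G3(s): return 2.0
def G2(s): return 1.0 / s            # assumed load dynamics
def Gc(s): return 10.0 + 5.0 / s     # assumed PI feedback controller

def C_over_D(s, Gd):
    # Closed loop disturbance response: C/D = G2 (G3 + G1 Gd)/(1 + Gc G1 G2)
    return G2(s) * (G3(s) + G1(s) * Gd(s)) / (1 + Gc(s) * G1(s) * G2(s))

exact  = lambda s: -G3(s) / G1(s)    # Gd = -(2/5)(s+1)(s+5): full cancellation
dconly = lambda s: -2.0              # steady-state-gain approximation

s = 1j * 0.5                         # evaluate at 0.5 rad/sec
print(abs(C_over_D(s, exact)))       # ~0 (machine precision): decoupled
print(abs(C_over_D(s, dconly)))      # nonzero residual from ignored dynamics
```

With the exact inverse, the numerator G3 + G1 GD vanishes identically; the steady-state-gain version leaves a residual that grows with frequency.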

If we were able to implement GD , measure DðsÞ, and assuming our models were accurate, we would be able to cancel out the effects of disturbances. Even though in

Figure 5  Example: system block diagram for disturbance input decoupling.



practice it is difficult to completely cancel out all disturbances, due to the assumptions made while solving for GD, we can generally still enhance the performance of our system. In this example, where we have a second-order numerator and a zero-order denominator, making GD difficult to implement (it requires future values in a difference equation), we can treat GD as a gain block that cancels the steady-state gain of G3/G1, or GD = −2. This is feasible to implement and, in general, if the amplifier/actuator G1 is relatively fast, still gives good performance, ignoring only the short-term dynamics of the amplifier/actuator.

11.4.2 Command Feedforward and Tracking

There are many parallels between feedforward disturbance rejection and command feedforward. Whereas the goal in the previous section was to eliminate the effect of measurable disturbance inputs, the goal here is to eliminate tracking errors due to command changes. Using the same cruise control analogy, the idea is that if we know we have to accelerate the vehicle, why wait for an error to develop before the signal is sent to the actuator? This type of system is shown in Figure 6. As before, we want to close the loop to evaluate the changes resulting from adding command feedforward. If GF is zero, as in the original system in Figure 3, then we can develop the following closed loop transfer function between the system output and command input, assuming the disturbance is zero:

C/R = GC G1 G2 / (1 + GC G1 G2)

For perfect tracking we need C/R = 1, such that the output always equals the input. This transfer function will not approach unity unless the loop gain approaches infinity. Since higher loop gain tends to make the system less stable, we usually compromise between error and stability.
If we assume now that the feedforward block, GF, is active in the model, we can again close the loop and obtain the following transfer function:

C/R = (GF G1 G2 + GC G1 G2) / (1 + GC G1 G2)

Since the goal is to make C/R = 1, we want to make the numerator equal to the denominator, or

GF = 1 / (G1 G2)

Figure 6  General system block diagram with command feedforward.
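A quick numeric check of this result (our own sketch, not from the text; the combined plant P = G1 G2 and the proportional gain are assumptions for illustration):

```python
# Our own sketch: with Gf = 1/(G1 G2), the closed loop C/R is identically 1.
def P(s):  return 25.0 / (s**2 + 5.0*s + 25.0)  # assumed combined plant G1*G2
def Gc(s): return 3.0                           # assumed proportional controller
def Gf(s): return 1.0 / P(s)                    # plant-inverse feedforward

def C_over_R(s, use_ff):
    # C/R = (Gf G1 G2 + Gc G1 G2)/(1 + Gc G1 G2), with Gf = 0 when disabled
    ff = Gf(s) if use_ff else 0.0
    return (ff * P(s) + Gc(s) * P(s)) / (1 + Gc(s) * P(s))

s = 1j * 2.0                       # test frequency, 2 rad/sec
print(abs(C_over_R(s, True)))      # ~1: perfect tracking (in theory)
print(abs(C_over_R(s, False)))     # < 1: feedback alone attenuates and lags
```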



With this transfer function, the inverse of the physical plant, the numerator becomes equal to the denominator for all inputs. Physically, we are solving the system with the inputs and outputs reversed: we use our desired output to calculate the inputs required to produce that physical output. As before, the extent to which feedforward improves tracking depends on the accuracy of the models. If our models are reasonably accurate, then large improvements in tracking performance are realized. Also similar are the physical limitations of our system. A step input for the command will never be realized at the output, since an infinite effort (an impulse) would be required to follow the command. It is therefore the job of the designer, and good practice to begin with, to command only feasible trajectories and avoid unnecessarily saturating the components in the system. In the case where we do not know and cannot control the command input, command feedforward techniques are of limited effectiveness. If we do know the command sequence in advance, then we can also use the alternative configuration given in Figure 7. This configuration allows us to precompute all the input values and thus enables methods like lookup tables for faster processing. The system inputs now include both the reference (original) command and the feedforward command fed through the plant inverse, and the controller should only be required to handle modeling errors and disturbances. If we close the loop and develop the transfer function, we get

C/R = (GF′ GC G1 G2 + GC G1 G2) / (1 + GC G1 G2)

Using GF′ we can still make C/R = 1. When we compare this result with the transfer function from the previous configuration, it leads to the following equivalence:

GF′ = 1 / (GC G1 G2) = (1/GC) GF

If we wished to precompute all the modified inputs, we would simply take the desired command and multiply it by (1 + GF′) as shown:

R_with feedforward = [1 + 1/(GC G1 G2)] R_original

This has many advantages, since the entire command signal can be precomputed and no additional overhead is required in real time. With many industrial robots, where the same task is repeated over and over and the model remains relatively constant,

Figure 7  Alternative command feedforward configuration.



this technique can significantly improve tracking performance at no additional operational cost, only an upfront design cost. Putting everything together, disturbance rejection plus command feedforward results in the system controller illustrated in Figure 8. This controller, limited by the accuracy of the physical system models, measurable disturbances, and known trajectories, will exhibit large improvements over basic feedback controllers. A benefit is that neither addition affects the stability of the original closed loop feedback system, which may be designed and tuned using the basic methods defined earlier (the denominator of the system is never changed). It is likely when implementing feedforward controllers that the original controller is less critical, and all that may be needed is a simple proportional controller to handle the "extra" errors and dictate the type of system response to those errors. Finally, to illustrate the advantages and disadvantages of command feedforward, let us examine the following example.

EXAMPLE 11.3

Given the second-order system and discrete controller in the block diagram of Figure 9, design and simulate a command feedforward controller. Use a sinusoidal input (it must be a feasible trajectory) and compare results with and without modeling errors present. We have a second-order physical system with a damping ratio of 1/2 and a natural frequency of 5 rad/sec, where

G(s) = ωn² / (s² + 2ζωn s + ωn²) = 25 / (s² + 5s + 25)

We will use the Matlab commands given at the end of this example to convert it to the z-domain with a zero-order hold (ZOH) and a sample time of 0.05 sec (20 Hz). This results in the discrete transfer function for our system:

G(z) = (0.02865z + 0.02636) / (z² − 1.724z + 0.7788)
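The quoted coefficients can be reproduced without Matlab from the standard closed-form ZOH discretization of an underdamped second-order system. This is our own sketch, not from the text; the formulas are the textbook ZOH result for this plant family.

```python
import math

# ZOH discretization of G(s) = wn^2/(s^2 + 2*zeta*wn*s + wn^2), Ts = 0.05 sec
wn, zeta, T = 5.0, 0.5, 0.05
sig = zeta * wn                          # real part of the pole pair
wd = wn * math.sqrt(1.0 - zeta**2)       # damped natural frequency
e, c, s = math.exp(-sig*T), math.cos(wd*T), math.sin(wd*T)

b1 = 1 - e * (c + (sig/wd) * s)          # numerator z^1 coefficient
b2 = e*e + e * ((sig/wd) * s - c)        # numerator z^0 coefficient
a1 = -2 * e * c                          # denominator z^1 coefficient
a2 = e * e                               # denominator z^0 coefficient

print(round(b1, 5), round(b2, 5))        # 0.02865 0.02636
print(round(a1, 3), round(a2, 4))        # -1.724 0.7788
```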

For this example, a simple zero-pole discrete controller, similar to the one developed in Section 9.5.1, is used as the feedback controller:

GC(z) = (3z − 2) / z

Figure 8  Disturbance input rejection and command feedforward controllers.



Figure 9  Example: command feedforward system block diagram.

After converting the system model and ZOH into the sampled domain, we can close the loop to find the overall discrete closed loop transfer function:

C(z)/R(z) = (0.08596z² + 0.02177z − 0.05272) / (z³ − 1.638z² + 0.8006z − 0.05272)

For a baseline performance plot without command feedforward, we can simulate the system using Matlab, resulting in the plot in Figure 10. It is clear that our controller is not very good with regard to tracking accuracy, and while we certainly could design the controller for better performance, let us examine the effects of adding the feedforward controller, GF′, to the system. Remember from earlier in this section that

C/R = (GF′ GC G1 G2 + GC G1 G2) / (1 + GC G1 G2)

and

GF′ = 1 / (GC G1 G2)

Since we only have one second-order plant, the transfer functions G1 and G2 are combined and represented as one transfer function, GSys . Now we can form the feedforward transfer function using our discrete controller and system model.
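As a cross-check (our own sketch, using numpy's polynomial helpers rather than the Control Toolbox), the closed loop coefficients quoted above follow directly from polynomial algebra on GC(z) and G(z):

```python
import numpy as np

# Closed loop GcG/(1 + GcG) by polynomial algebra; compare with the text.
gc_num, gc_den = [3.0, -2.0], [1.0, 0.0]        # Gc(z) = (3z - 2)/z
g_num = [0.02865, 0.02636]                      # G(z) numerator
g_den = [1.0, -1.724, 0.7788]                   # G(z) denominator

ol_num = np.polymul(gc_num, g_num)              # open loop numerator
ol_den = np.polymul(gc_den, g_den)              # open loop denominator
cl_num = ol_num                                 # CLTF numerator
cl_den = np.polyadd(ol_den, ol_num)             # CLTF denominator

print(cl_num)   # compare with 0.086z^2 + 0.022z - 0.053 quoted in the text
print(cl_den)   # compare with z^3 - 1.638z^2 + 0.8006z - 0.0527
```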

Figure 10  Example: system response using Matlab (without command feedforward).


GF′ = (z³ − 1.724z² + 0.7788z) / (0.08595z² + 0.02177z − 0.05272)

When we compare the original closed loop transfer function with the transfer function containing the feedforward block, we realize that our new transfer function can be written as (factoring out GC and GSys)

C/R = GC GSys (1 + GF′) / (1 + GC GSys)

Compared with the original closed loop transfer function, the only difference is the factor (1 + GF′), and therefore

R_with feedforward = (1 + GF′) R_original = [1 + 1/(GC G1 G2)] R_original

Thus, to get the modified closed loop transfer function (containing the effects of GF′) we simply multiply our original closed loop transfer function by (1 + GF′), resulting in

C(z)/R(z) = (0.08596z⁵ − 0.119z⁴ − 0.01956z³ + 0.09924z² − 0.04335z + 0.002779) / (0.08596z⁵ − 0.119z⁴ − 0.01956z³ + 0.09924z² − 0.04335z + 0.002779)

If we look closely, we see that the numerator and denominator are identical and are therefore "guaranteed" (in simulations at least) to produce Figure 11, where the command and response appear as one line, since the C/R ratio is always unity. So if we can develop accurate models, then in theory, with feasible trajectories, we can drive the tracking error to zero. A good question to ask is what happens when our models are not accurate. Let us change the damping ratio in our estimated model to 1, twice that of the original system, and see what happens. First, we will develop another discrete model of the

Figure 11  Example: system response using Matlab (with command feedforward and no errors in the model).



second-order continuous system, but this one will contain the modeling error. The new continuous system model (with errors) is

G_errors(s) = ωn² / (s² + 2ζωn s + ωn²) = 25 / (s² + 10s + 25)

Using Matlab as before, convert it to the discrete equivalent with a ZOH and form the resulting feedforward transfer function:

GF′_errors = (z³ − 1.558z² + 0.6065z) / (0.0795z² + 0.01429z − 0.04486)
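Before forming the full discrete transfer function, the sensitivity to this damping error can be previewed in the continuous domain. This is our own sketch, not from the text; a proportional gain stands in for the discrete controller.

```python
import math

# Our own sensitivity sketch: tracking magnitude of the Figure 6 loop when
# the feedforward inverse is built from a model with the wrong damping ratio.
def P(s, zeta):
    return 25.0 / (s*s + 2.0*zeta*5.0*s + 25.0)   # wn = 5 rad/sec plant

Gc = 3.0                     # assumed proportional stand-in for Gc(z)
s = 1j * 2.0 * math.pi       # evaluate at 1 Hz command content

def track_mag(zeta_model):
    Gf = 1.0 / P(s, zeta_model)                   # inverse of the *model*
    CR = (Gf * P(s, 0.5) + Gc * P(s, 0.5)) / (1.0 + Gc * P(s, 0.5))
    return abs(CR)

for zm in (0.5, 0.7, 1.0):
    print(zm, round(track_mag(zm), 3))   # drifts away from 1 as the model errs
```

Overestimating the damping inflates the response magnitude, the same trend the simulation with the erroneous model exhibits.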

Now when we create the new overall system transfer function, we will use GF′ based on this erroneous model, leaving our original system parameters unchanged. This results in the new closed loop transfer function, including the feedforward controller:

C(z)/R(z) (with model errors) = (0.08596z⁵ − 0.1053z⁴ − 0.03153z³ + 0.08758z² − 0.03371z + 0.002365) / (0.0795z⁵ − 0.1159z⁴ − 0.004625z³ + 0.08072z² − 0.03667z + 0.002365)

In contrast to the transfer function where our model matched the system, the overall system transfer function is no longer unity, and we will have some tracking errors. To show this, we can simulate the system again and compare the system output with the desired input. This results in the plot given in Figure 12, where additional tracking errors are evident. Whereas the first system, without any feedforward loop in place, was attenuated and lagged the input, in this case, using an imperfect feedforward transfer function, we see larger than desired amplitudes due to the modeling error. The lag between the output and command, however, is removed. Thus we see that this system is fairly sensitive to changes in system damping, and care must be taken to use the correct values. This example also serves as motivation for adaptive controllers, which we examine in later sections. For example, if the friction was not constant and

Figure 12 Example: system response using Matlab (with command feedforward and errors in the model).



changed as the oil temperature in a damper changed, the command feedforward controller might work well at some operating points but very poorly at others. Unless the system can adapt, we must either develop temperature-dependent models or choose other alternatives. Finally, for any feedforward controller, disturbance or command, it is wise to perform at least a basic parameter sensitivity analysis to judge potential future problems. To wrap up this example, let us quickly look at the difference equations of GF′, given earlier as

GF′(z) = (z³ − 1.724z² + 0.7788z) / (0.08595z² + 0.02177z − 0.05272)

Multiplying top and bottom by z⁻³:

GF′(z) = (1 − 1.724z⁻¹ + 0.7788z⁻²) / (0.08595z⁻¹ + 0.02177z⁻² − 0.05272z⁻³)

Cross-multiplying and writing the difference equations, we get

0.08595 r′(k−1) + 0.02177 r′(k−2) − 0.05272 r′(k−3) = r(k) − 1.724 r(k−1) + 0.7788 r(k−2)

Since we are interested in r′(k), the modified input sequence, we need to shift each sample by +1 so that we can write r′(k) as a function of previously modified values and the original input values. This leads to the difference equation

0.08595 r′(k) = −0.02177 r′(k−1) + 0.05272 r′(k−2) + r(k+1) − 1.724 r(k) + 0.7788 r(k−1)

Thus, as we suspected, we must know (for this case) the command one step in advance, r(k+1), to implement this particular command feedforward controller. For many applications this is not a problem, and if good models can be developed, command feedforward will give significant improvements in tracking. If we know the input sequence one sample in advance, it is also likely that we know the entire sequence of input values. If this is the case, then not only can we implement the difference equation as written, we can precompute the entire modified input sequence and store it in a table. Finally, what follows here are the Matlab commands used to develop the transfer functions and plot the responses used throughout this example.

%Program to implement and
%test command feedforward
%wn = 5, z = 0.5
%Ts = 0.05 = 20 Hz
'Define Continuous TF, w=5,z=0.5'
sys1=tf(25,[1 5 25])
pause;
'Define Discrete Controller'
sysc=tf([3 -2],[1 0],0.05)
pause;
'Convert physical model to discrete TF'
sys1z=c2d(sys1,0.05,'zoh')
pause;


'Close the loop for the CLTF'
syscl=feedback(sys1z*sysc,1)
pause;
'Define sinusoidal input and simulate response'
[u,T]=gensig('sin',1,10,0.05);
[y1,T1]=lsim(syscl,u,T);
plot(T1,y1,T,u);axis([0 10 -2 2]);
pause;
'Define the feedforward TF'
syscff=(1/(sys1z*sysc))
'Implement into system TF'
sysfinal=((1+syscff)*syscl)
pause;
'Simulate and create new plot'
[y2,T2]=lsim(sysfinal,u,T);
figure;
plot(T2,y2,T,u);axis([0 10 -2 2]);
'Define the 2nd feedforward TF'
'Double the damping ratio'
sys2c=tf(25,[1 10 25]);
sys2=c2d(sys2c,0.05,'zoh');
syscff2=(1/(sys2*sysc))
'Implement new Gf into system TF'
sysfinal2=((1+syscff2)*syscl)
pause;
'Simulate and plot'
[y3,T3]=lsim(sysfinal2,u,T);
figure;plot(T3,y3,T,u);axis([0 10 -2 2]);
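The difference equation derived above can also be sketched directly; the Python rendering below is our own, not from the text, and makes the need for r(k+1) explicit.

```python
import math

# Our own sketch of the feedforward difference equation derived above:
# 0.08595 r'(k) = -0.02177 r'(k-1) + 0.05272 r'(k-2)
#                 + r(k+1) - 1.724 r(k) + 0.7788 r(k-1)
def feedforward(r):
    """Modified command sequence r' for a fully known input sequence r."""
    n = len(r)
    rp = [0.0] * n
    for k in range(n - 1):               # stops one early: needs r[k+1]
        rp[k] = (-0.02177 * (rp[k-1] if k >= 1 else 0.0)
                 + 0.05272 * (rp[k-2] if k >= 2 else 0.0)
                 + r[k+1] - 1.724 * r[k]
                 + 0.7788 * (r[k-1] if k >= 1 else 0.0)) / 0.08595
    return rp

# Precompute the whole sequence offline, e.g., for a lookup table:
r = [math.sin(2.0 * math.pi * 1.0 * 0.05 * k) for k in range(200)]  # 1 Hz sine
rp = feedforward(r)
```

Because the whole command is known in advance, rp can be stored ahead of time, matching the lookup-table idea discussed for the alternative configuration of Figure 7.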

In conclusion, it is clear that there are many advantages to implementing feedforward controllers in our designs. As part of a recurrent theme, we find again that good models are necessary for well-performing controllers. Whereas it is quite easy to follow rules and tune a black box proportional-integral (PI) controller, we need additional skills to correctly design and implement more advanced controllers and take advantage of their benefits.

11.5 MULTIVARIABLE CONTROLLERS

Several methods have been developed for designing, tuning, and implementing multiple-input, multiple-output (MIMO) controllers, also referred to as multivariable controllers. In this section we introduce two methods, both extensions of methods presented earlier. First, we can model MIMO systems using combinations of single-input, single-output (SISO) transfer functions, where some of the transfer functions represent the coupling terms. This method allows us to understand the system in terms of familiar block diagrams and thus is presented first. The second common procedure is to represent MIMO systems using state equations and, if linear, system matrices. An advantage is the ease with which systems with many inputs and outputs can be represented. In addition, using linear algebra, the techniques (and hence the computer algorithms) remain the same regardless of system size. Whereas earlier we used Ackermann's formula as a closed form solution when solving for the gain matrix required to place our system poles at preset (desired) locations, we no longer have deterministic solutions and must make decisions about how to deal with the extra degrees of freedom. This is where the term optimal control comes from: some rule, performance index, or cost function is defined, and the controller is optimized to minimize or maximize this function. The function may measure controller action, accuracy, power used, etc. Common methods developed to minimize these functions include the linear quadratic regulator (LQR), variations of least squares, and Kalman filters. Most optimal controllers are implemented using state variable feedback, similar to the SISO examples. Since, as mentioned above, we cannot simply place all of our poles (there are more gains than poles), the computational effort is greatly increased over that of the earlier SISO systems. Fortunately, many design programs have the methods listed above already programmed, and it is not necessary to do this design work manually. The question then becomes which method to use. If the number of inputs and outputs is relatively small, say three or fewer, it is quite easy to design controllers using coupled transfer functions. It is possible to go larger, but the matrix sizes grow along with it. Transfer functions are generally more familiar to most people, and the relationships are easily defined. On the other hand, the larger the system grows, the easier it is to work in state space. In fact, the techniques to design each optimal controller are virtually identical (procedurally) regardless of system size. State variable feedback methods work very well when good models are used. Since so much depends on the model (observers, states, interaction, etc.), poor models rapidly lead to very poor controllers. Thus, care needs to be taken in developing system models. Applications where the models are well defined have seen excellent results when optimal controllers are used with state variable feedback. Caution is required, as the opposite end of the spectrum is also evident.
11.5.1 Transfer Function Multivariable Control Systems

In this section we look at two-input, two-output (TITO) systems. Transfer function methods are much easier if the number of inputs equals the number of outputs, since we can then work with square matrices and decouple each input and output. Decoupling is not completely possible with unequal numbers of inputs and outputs, since each input cannot then be directly related to a single output. If there is only one output and multiple inputs, the tools from previous chapters can be applied in the same way that command and disturbance inputs acting on a system were dealt with: by examining the effects of each separately and then adding the results. The general block diagram representing our two-input, two-output system is shown in Figure 13. We can now define U, Y, and G for the block diagram:

U = system inputs; the number of rows equals the number of inputs.
Y = system outputs; the number of rows equals the number of outputs.
G = transfer function matrix; each element g represents a SISO transfer function. The number of rows equals the number of outputs and the number of columns equals the number of inputs. Thus, if the number of inputs does not equal the number of outputs, G is not a square matrix.

Defining the matrices for the system in Figure 13:

[y1]   [g11  g12] [u1]
[y2] = [g21  g22] [u2]



Figure 13  General TITO block diagram model.

There are no arguments shown for the matrices, since both continuous (s) and discrete (z) transfer functions can be represented by this configuration. The first step is to determine the amount of cross-coupling, and the second is to try to decouple the inputs and outputs. To determine the coupling, the common experimental system identification techniques applied to step inputs and frequency responses can be used. The difference here is that for each input we measure two outputs (or more, or fewer, for general MIMO systems). For example, for the system in Figure 13, if we put a step input on u1, we will get two response curves, one for each output. The plot of y1 can be used to find g11 and the plot of y2 to determine g21. An example is given in Figure 14. For the example system given, where one output exhibits overshoot and the other decays exponentially, the two curves would likely be fit to the following second- and first-order transfer functions, both functions of input u1:

g11 = ωn² / (s² + 2ζωn s + ωn²)   and   g21 = a / (s + a)
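The fitting step for the first-order element can be sketched simply (our own example, not from the text): from step-response data, the pole a of g21 = a/(s + a) follows from the 63.2% rise time.

```python
import math

# Our own identification sketch: recover the pole of g21 = a/(s + a)
# from simulated unit-step-response data via the 63.2% (1 - 1/e) time.
a_true = 4.0                                    # assumed "unknown" pole
dt = 0.001
ts = [k * dt for k in range(8000)]
ys = [1.0 - math.exp(-a_true * t) for t in ts]  # measured step response

tau = next(t for t, y in zip(ts, ys) if y >= 1.0 - math.exp(-1.0))
a_fit = 1.0 / tau                               # time constant -> pole
print(round(a_fit, 1))                          # 4.0
```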

A similar plot could be recorded to determine the remaining two transfer function elements of G. If Bode plots were developed, higher order system models could then be estimated from the resulting plots. Whether the transfer functions have been

Figure 14  Experimental evaluation of transfer function matrix.



determined analytically or experimentally, we ideally want to decouple them such that each input affects only one output. If we can design such a system, it becomes like two SISO systems, and all of our earlier techniques can be used for controller design, tuning, etc. In matrix form this would now be represented as

[y1]   [g′11    0 ] [u1]
[y2] = [  0   g′22] [u2]

This system behaves like two separate systems, as shown in Figure 15. Since we know the format of the desired transfer function matrix, let us examine the steps required to transform our initial coupled system from Figure 13 into the decoupled system in Figure 15. Begin by defining a possible controller transfer function matrix, GC, the same size as the system transfer function matrix, configured as in Figure 16. Using the TITO system as an example and matrix notation, which allows us to easily extend the results to larger systems, we write the following transfer function matrices and relationships.

System output: Y = G U
System input: U = GC E = GC (R − Y)
Combine: Y = G [GC (R − Y)]
Simplify: (I + G GC) Y = G GC R

Finally, the closed loop transfer function (CLTF):

Y = (I + G GC)⁻¹ G GC R

The closed loop system "transfer function" should look very familiar from its SISO counterpart. Now all we have to do is compare our transfer function matrix from

Figure 15  TITO system when decoupled.



Figure 16  General TITO controller block diagram.

above with the desired transfer function matrix when the system is decoupled, and solve for the controller that results (i.e., the only unknown matrix of transfer functions in the equation). Let GD be our desired transfer function matrix (generally diagonal if we wish to make the system uncoupled), and set the closed loop transfer function matrix equal to it:

GD = (I + G GC)⁻¹ G GC

Solving for GC:

GC = G⁻¹ GD (I − GD)⁻¹

Now GC contains our desired controller, which when implemented should produce the response characteristics defined in the diagonal terms of the desired transfer function matrix, GD. This is but one possible method that can be used to design controllers using transfer functions in multivariable systems. In continuous systems it would be highly unlikely that the resulting controller could be implemented easily using physical analog components. With digital controllers, however, we are generally not limited unless the controller ends up not being realizable (not programmable, due to the necessity of unknown or unavailable samples) or the physics of the system prevent it from operating the way it should. For this method, the stability of our system is determined by the type of response we choose for the terms on the diagonal of GD. In all cases, feasible trajectories should also be used, avoiding unnecessary saturation of components. In cases where the system becomes too complicated to decouple completely, we can often achieve good results by making it mostly decoupled, minimizing the off-diagonal terms and maximizing the diagonal terms. This type of approach is our only option if the numbers of inputs and outputs are not equal. The Rosenbrock approach using inverse Nyquist arrays is based on this idea of making the diagonal terms dominant. Other methods seeking to maximize the diagonal



terms relative to the off-diagonal terms are the Perron-Frobenius (P-F) method, based on eigenvalues and eigenvectors, and the characteristic locus method. The advantage is very important for continuous systems, since the controller can often be an array of gains selectively chosen to achieve this trait, and thus capable of being implemented; even for digital controllers, the work to implement is greatly reduced. These systems generally take the form shown in Figure 17, where G is the physical system, K is the gain matrix, and P is an optional postsystem compensator acting on the system outputs. To check stability for such controllers, we must close each individual loop (diagonal and coupling terms) and verify that unstable poles are not present. There are additional stability theorems for these controller design techniques, but they require linear algebra results that are not covered here.

11.5.2 State Space Multivariable Control Systems

Since we have been developing and using state space techniques alongside transfer functions, differential equations, and difference equations, the groundwork has already been laid for multivariable control system design. In fact, the techniques used earlier to design a state space control system and tune the gain matrix are the same as what we wish to do now. There is one caveat, however: instead of finding a unique gain matrix, as is the case for SISO systems, for MIMO systems we get a set of equations with more unknowns than equations. This leads to an infinite number of solutions, and additional methods are required to reduce the number of unknowns to where the gains can be found. While the advantage of state space is that the same techniques apply whether a third- or a twenty-third-order system is being designed, designing stable, robust controllers is a process that takes trial and error and experience.
State space controllers are very sensitive to modeling errors, and good fundamentals in modeling are required. There are two primary methods used to design controllers in state space: pole placement and optimal control. Each method has numerous branches. In pole placement, the overall goal is to place all of the poles in such a way as to produce our desired response characteristics. For multivariable systems this means that with the additional gains we can try to further "shape" or control the placement of the poles set with the other gains. This may be an iterative or intuitive process. If we know that coupling does not exist between one input and one of the outputs, we can set that gain to zero, getting us closer to a solution. If optimal control is used, the gains are chosen such that they minimize or maximize some chosen control law; different control laws will result in different gains. Good disturbance rejection is difficult to accomplish with either approach. The one assumption made thus far is that we have access to all of the states. This is seldom the case (except in single output systems), and we must rely on

Figure 17  Gain matrix to make diagonal terms dominant in system.



observers, or estimators as they are commonly called. Since all the states are seldom available, or are too costly to measure, the goal of an observer is to predict, or estimate, the missing states. Just as we determined earlier whether a system was controllable, we can also determine whether a state space system is observable. Controllability depends on the A and B matrices, such that an input is capable of producing an output and thus controlling the system. In the same way, observability depends on the A and C matrices, since the system states must correlate with the system output to be observable. A system is observable, then, if the rank of the observability matrix is equal to the system order. The observability matrix is defined as

MO = [ C^T   A^T C^T   (A^T)² C^T   ...   (A^T)^(n−1) C^T ]

Thankfully, many computer programs can perform this check. The important concept is that each state must be related to a change in at least one output in order for the system to be fully observable. There are many different implementations for observers. The simplest implementation is an open loop estimator in parallel with the physical system, as given in Figure 18. The problems with the open loop observer are quite obvious: if the initial conditions are wrong and/or modeling errors and/or disturbances are present, the states never converge to the proper values. Implementing a separate feedback loop on the observer and forcing its estimated output to converge with the actual output remedies this shortcoming. This improvement is shown in Figure 19. By adding the closed loop feedback, we now force the observer to converge to the proper state estimates. Since there are no physical components (i.e., the observer is implemented digitally), we can make the convergence times for the observer much faster than those of the physical system. Additionally, the feedforward serves to remove lags, since the states do not wait for an error to occur. If no states are measured directly, this is called a full state observer.
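The rank test just described is easy to sketch numerically (our own example; the two-state plant is an assumption for illustration):

```python
import numpy as np

# Our own observability sketch: rank of [C^T  A^T C^T ...] equals the order n.
A = np.array([[0.0, 1.0],
              [-25.0, -5.0]])        # companion form of s^2 + 5s + 25
C = np.array([[1.0, 0.0]])           # only the first state is measured
n = A.shape[0]

# Stack C, CA, ..., CA^(n-1); its rank equals the rank of MO above.
Mo = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])
print(np.linalg.matrix_rank(Mo))     # 2: full rank, so the system is observable
```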
When we are able to measure some states, we generally use a reduced order observer. The advantages of a full state observer are better noise filtering, lower cost than actual transducers, and easier design techniques. Disadvantages include doubling the order of the system, possible inaccuracies when actual measurements are available, and much higher computational demands. Since some states are usually available, we can implement a reduced order state observer. This observer takes advantage of the actual signal being available, reduces

Figure 18  General open loop state observer.

Advanced Design Techniques and Controllers

Figure 19  Observer with closed loop feedback and feedforward.

the order of the system relative to using a full state observer, and simplifies the algorithm, reducing the computational load. The actual model of a reduced order state observer, shown in Figure 20, is very similar to the full state observer but combines actual and estimated states. The reduced order state observer estimates only the states in the z vector (not affiliated with the z transform) and combines these with the measured states, using the partitioned matrix, to produce the output vector, Y, used for the feedback loop:

[ C ; T ]^(-1)

where C is the original state space output matrix, and T is a matrix (number of rows equal to the number of states minus the number of measured outputs, number of columns equal to the number of states) which, when partitioned with C, produces a square matrix whose rank equals the system order. There are usually more unknowns than equations, and choices are required when choosing T.
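The closed loop (full state) observer of Figure 19 can be sketched in a few lines. The plant, gains, and initial conditions below are hypothetical; the observer gain L was chosen to place both observer poles at z = 0.2, much faster than the plant dynamics:

```python
import numpy as np

# Assumed discrete double-integrator plant (T = 0.1 s step).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
C = np.array([[1.0, 0.0]])
L = np.array([[1.6], [6.4]])          # gives eig(A - L C) = {0.2, 0.2}

x = np.array([[1.0], [0.0]])          # true state (unknown to the observer)
xh = np.array([[0.0], [0.0]])         # estimate, deliberately wrong at start
for k in range(30):
    u = np.array([[0.1]])             # arbitrary test input
    y = C @ x                         # only the measured output is shared
    xh = A @ xh + B @ u + L @ (y - C @ xh)   # correction drives convergence
    x = A @ x + B @ u

print(float(np.linalg.norm(x - xh)))  # estimation error, near zero
```

The estimation error obeys e(k+1) = (A - LC) e(k), so with both observer poles well inside the unit circle the wrong initial estimate dies out in a few samples.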

Figure 20  Reduced order state observer.


Chapter 11

Convergence time (observer pole locations), regardless of observer type, should be chosen to be approximately 5–10 times faster than the system. The goal is to converge quickly relative to the physical system but remain slow enough to provide desirable filtering qualities; if the observer is too fast, unnecessary noise is conveyed to the estimated states. It is common to assign damping ratios near 0.707, since this provides near optimal rise times and less than 5% overshoot. If the controller is implemented digitally, direct methods like dead beat controllers may also be used. Once the observer is designed and we have access to all states, there are many options available regarding controller algorithms. Several are mentioned here, but the reader is referred to the references listed in the bibliography for further study. Some controllers resemble the familiar SISO controllers developed earlier. For example, Figure 21 illustrates the implementation of a gain matrix and an integral gain matrix to provide for elimination of steady-state errors. In this configuration the states are operated on by the gain matrix, and the errors between the desired and actual output(s) are operated on by the integral gain matrix and integral function. The integral gain can help remove the effects of constant disturbance inputs that might otherwise cause a steady-state error. Another class falls under the term optimal controllers. Briefly defined above, these controllers seek to optimize a performance index function as a method of developing enough equations to solve for the unknown gains. Remember that a MIMO state feedback controller will result in more unknown gains than equations when pole placement techniques are used. One of the most common optimal control laws is the LQR. The LQR seeks to minimize a cost function containing measures of accuracy and controller activity; the balance is then between performance and controller effort.
Weighting matrices are used to determine the relative importance of each one. Other cost functions may be based on Lyapunov equations, Riccati equations, least squares estimations, or Kalman filters, among many others. Since there is an infinite number of solutions, each method will produce slightly different results. If noise is a primary factor in your system, there may be enough advantages to justify a Kalman filter. A Kalman filter is based on a stochastic (random) noise model and is often able to extract the states even from noisy signals. The trade-off is more demanding computational and programming considerations, and hence we see only limited use of Kalman filters (GPS processors, for example). Since the Kalman filter is normally implemented as a true time varying controller, a recursive solution must be implemented.

Figure 21  Multivariable integral control with state feedback.



EXAMPLE 11.4 Describe the basic Matlab commands and tools that are available for designing optimal control algorithms.

Matlab contains many functions already developed for the purpose of optimal controller design. LQRs are designed using dlqr, and Kalman filters (compensators) are designed using kalman. The function dlqr performs linear-quadratic regulator design for discrete-time systems. This means the controller gain K is in the feedback, where

u(k) = -K x(k)

Then the controlled system x(k + 1) = A x(k) + B u(k) has the closed loop dynamics x(k + 1) = (A - BK) x(k). To solve for the feedback matrix K we also need to define a cost function, J:

J = Σ ( x'Q x + u'R u + 2 x'N u )

J is minimized by Matlab, where the syntax of dlqr is

[K, S, E] = dlqr(A, B, Q, R, N)

K is the gain matrix, S is the solution of the Riccati equation, and E contains the closed-loop eigenvalues, that is, the eigenvalues of (A - BK). The function kalman develops a Kalman filter for the model described as

x(k + 1) = A x(k) + B u(k) + G w(k)
y(k) = C x(k) + D u(k) + H w(k) + v(k)

where w is the process noise and v is the measurement noise. Q, R, and N are the white noise covariances as follows:



E[w w'] = Q,  E[v v'] = R,  E[w v'] = N

The syntax of kalman is

[Kfilter, L, P, M, Z] = kalman(sys, Q, R, N)

This only serves to demonstrate that many controllers can be designed using programs such as Matlab. Certainly, the brief introduction given here is meant to point us forward to new horizons as control system design engineers. Many references in the Bibliography contain additional material. Concluding our discussion on multivariable controllers, we see that all our earlier techniques (transfer functions, root locus, Bode plots, and state space system designs) can be extended to multivariable input-output systems. Even if all the states are unavailable for measurement, we can implement full or reduced order observers to estimate the unknown states. The larger problem arises because, with MIMO systems, a deterministic solution is generally unavailable: there are more unknown gains than equations to solve for them. There are many books dedicated to solving this problem in an optimal manner. Even without all the details, it is easy to simulate many of the optimal controllers using programs like Matlab, which has most of the optimal controllers already programmed in as design tools. In all these controllers, it is important to develop good models. In the adaptive controller section we will introduce some techniques that allow us to update the model in real time.
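Outside of Matlab, a discrete LQR gain can be approximated by iterating the Riccati recursion until it settles. The sketch below uses assumed system matrices and is not Matlab's actual algorithm (dlqr solves the algebraic Riccati equation directly), but it converges to the same gain:

```python
import numpy as np

# Illustrative discrete LQR by fixed-point iteration of the Riccati recursion.
def dlqr_iterative(A, B, Q, R, iters=500):
    P = Q.copy()
    for _ in range(iters):
        # K = (R + B'PB)^(-1) B'PA, then P = Q + A'P(A - BK)
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K, P

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # assumed double-integrator model
B = np.array([[0.005], [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])
K, P = dlqr_iterative(A, B, Q, R)
eig = np.linalg.eigvals(A - B @ K)       # closed-loop eigenvalues of (A - BK)
print(np.all(np.abs(eig) < 1.0))         # stable: all inside the unit circle
```

The closed-loop eigenvalues play the role of E in the dlqr output, and P plays the role of S.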


11.6


SYSTEM IDENTIFICATION TECHNIQUES

System identification is used in several ways. First, it is a common approach to developing the component models used in designing a control system and simulating its response. Second, it is a technique that can be applied recursively and thus provide updated information to the controller in real time. When system identification routines are linked to the parameters in the controller, we call the controller adaptive, since it can ``adapt'' to changes in the environment in which it acts or to changes internal to the system components themselves (wear, leakage, etc.). There are two basic types of models, nonparametric and parametric. Nonparametric models are not examined here but include system models like Bode plots and step response plots. The information required to design the control system is contained in a Bode plot, and yet the parameters must still be extracted from the plot to develop a model for use in a simulation program. We have already examined this procedure in earlier chapters. When the parameters themselves are the goal, as in coefficients of difference equations, we are developing a parametric model. Most adaptive algorithms require parametric models. If the system contains delays and we wish to develop a continuous system model, it is common to use a Pade approximation. Digital systems are much easier in this respect due to the delay operator. Recalling earlier material, we have already learned several system identification techniques: the step response and the frequency response. The step response is limited, for the most part, to first- and second-order systems, while the Bode plot requires numerous data points and does not contain the desired parameters explicitly. In addition, both methods require a break in the control routine to perform the test; we are unable to identify the system in real time while it is operating. Additional methods, introduced in this section, do allow us to identify the system in real time.
The least squares method is a common approach, capable of batch or recursive solutions and capable of fitting the input-output data to a large variety of system orders and configurations. We generally need to know something about our system to avoid a trial and error approach when determining the structure of our model. Start with the lowest order that we think will provide a good fit. If it provides acceptable results, then we save ourselves computational time, and if it does not, we can increase the order of our model. Some of the more advanced programs fit the data to a variety of model configurations and automatically select the best one. This is helpful if we have no idea of our physical model structure. The first area of concern when performing system identification regards the input sequences. The term garbage in–garbage out applies to system identification: if we choose the wrong signals, we will get the wrong model. If the input is simply a constant value, then the output will likely also be a constant value, and our model will simply contain information about the steady-state gain of our system. The important dynamics of our system may be completely missed. If possible, we can use input sequences that are similar to the control outputs from the controller (thus inputs to the physical system) and can then be confident that even if the model is not physically correct, it does model the process as it is being used, since the model is based on what the controller is doing. This is similar to how many adaptive controllers are configured: around the frequencies at which the controller is attempting to operate the system, a higher-order physical system might be adequately represented



using a simple first- or second-order model, since it captures the dynamics of importance. In general, though, for accurate models our input-output data must contain sufficient information to determine the best model parameters. Thus, inputs should contain various frequency components within and around the system's bandwidth for maximum information. Some of the components should allow the system to nearly settle to equilibrium. If a particular frequency or amplitude is not part of our input-output sequence, the model will not accurately reflect those conditions. There are several ways to verify the quality of the model. The first and most obvious is to drive the finished model with a set of measured inputs and compare the model (predicted) output with the actual measured output. This has the advantage of being very intuitive, making the quality easy to judge. We can also examine the loss function, the sum of the squared errors, since it represents the amount of variation not explained by the model. Finally, most model structures determine the coefficients of the difference equations for a particular model. This is a natural extension of the input-output sampling process used to collect the data and also represents a common structure for implementation in adaptive controllers and other algorithms. Newer system identification tools like fuzzy logic and neural nets are the exception and use connection properties and function shapes to adapt the model. Difference equations are easy to use since we can easily convert analog models into discrete equivalents, write the difference equations, and use least squares techniques, as the next section shows, to determine the coefficients of the equations. 11.6.1 Least Squares The use of least squares methods is common in almost all branches of engineering.
Least squares methods may be implemented recursively in addition to batch processing and thus allow variations of adaptive controllers to run the routine in real time. Several variations of least squares routines have evolved over the years. Many can now be implemented using popular decomposition techniques to make the method more reliable and computationally more efficient. This section introduces the batch and recursive methods of least squares. 11.6.1.1

Batch Processes Using Least Squares

Batch processing models are easy to implement and thus commonly used to estimate the parameters of the desired model structure. Unlike step and frequency response tests, the data can be taken while the controller is on-line and processed later. The process is quite simple and can be used to find solutions to most problems with more equations than unknowns (over-determined). There are many engineering problems, such as linear regression and curve fitting, which use least squares methods. Of particular interest in this section is the use of least squares in identifying system model parameters. This is a good case of having more equations than unknowns, where we measure many input and output data points and desire to find the model parameters that minimize the error between the actual data and simulated data. Since the structure of the model is determined beforehand in almost all cases, we still must rely on our understanding of the system and choose



the correct model. For example, if we choose a first-order model, containing one parameter, the time constant, then we are limited to always minimizing the errors within those limits. Our model will never predict an overshoot and oscillation, even if our system exhibits that behavior. The goal then is to choose a model that includes the significant dynamics (number of zeros and poles). If we have three dominant poles, then a third-order model should produce an accurate model and the system identification routines will converge to a solution. The general procedure is to model our system using difference equations, since system identification routines are implemented in microprocessors and the discrete input-output data is easy to work with in matrix form. Our beginning point is to define the structure of the difference equation in the form

c(k) = Σ_{i=1}^{d} θ_ai c(k - i) + Σ_{i=0}^{n} θ_bi r(k - i)

This is simply a general representation of our difference equations from earlier chapters, and it allows us to define the coefficients, the θ's, of the terms on the right-hand side (inputs and previous values) of our difference equations. The terms begin at c(k - 1) and r(k) as they did earlier. Depending on the size and order of our physical system model, c and r may vary in the number of terms that are required. The advantage of least squares is that even if we have three unknowns and one thousand equations (sets of input-output data points), we can use all the equations and solve for the coefficients of our difference equations (the unknowns) while minimizing the sum of the errors squared. This technique is founded upon least squares linear algebra identities where the coefficients, inputs, and outputs are all written using matrices. The simplest case is when we have equal numbers of equations and unknowns, and we write our matrix equation as

Φθ = y

where θ is the vector of desired coefficients, y is the vector of known outputs, and Φ is the matrix of known input points. We are familiar with this solution, where we premultiply both sides by the inverse of Φ:

Φ^(-1) Φ θ = Φ^(-1) y

And finally, our solution for the unknowns:

θ = Φ^(-1) y

This method is straightforward, since having equal numbers of equations and unknowns leads to a square matrix which can be inverted. A simple example will review this process.

EXAMPLE 11.5 Solve for the two unknown coefficients, a and b, given the two linear equations

7 = 3a + 2b
2 = 6a - 2b



The left side numbers are known outputs and the right side numbers are known inputs. In terms of a difference equation, the first equation would be represented as

c(k) = a c(k - 1) + b r(k - 1)
7 = a · 3 + b · 2

Therefore, if we record the inputs to the system and the resulting outputs, we can easily fit our data to different difference equations to find the best fit (postprocessing the data). To solve for our unknown coefficients, we write the equations in matrix form as follows:

Φθ = y
[ 3   2 ] [ a ]   [ 7 ]
[ 6  -2 ] [ b ] = [ 2 ]

To solve:

Φ^(-1) Φ θ = Φ^(-1) y
θ = Φ^(-1) y
[ a ]   [ 3   2 ]^(-1) [ 7 ]
[ b ] = [ 6  -2 ]      [ 2 ]

The solution is

a = 1
b = 2

This provides the groundwork but is limited, since we are fitting our model coefficients based on only two input-output data points. To generalize the procedure, we need a method that utilizes many input-output data sets and minimizes the error to give us the best possible model. Fortunately, many other applications require the same solution, and linear algebra methods have been developed that are easily applied to our problem. Many mathematical texts demonstrate that a data matrix Φ, although not square when there are more equations than unknowns, will minimize the total sum of the squares of the error at each data point between the observed value and the calculated value, provided the matrix Φ^T Φ is nonsingular so that its inverse exists. Taking the transpose and multiplying by the original matrix results in a square matrix, which then allows the inverse to be used to solve for θ:

Φ^T Φ θ = Φ^T y
θ = (Φ^T Φ)^(-1) Φ^T y

where θ is the vector of desired coefficients, y is the vector of known outputs, and Φ is the matrix of known input points. The solution takes advantage of linear algebra properties and can be used to solve many different problems with more equations than unknowns. Another benefit is that Φ^T Φ, a matrix



transpose and multiplication, is simply a series of multiplications and additions, and it results in the matrix to be inverted being of size equal to the number of coefficients. Computationally, this allows the routine to be implemented recursively, since extremely large matrix inversions are not required. Also of interest is the error that results from applying our model to the same input-output data. It is straightforward to compute the errors as

y(k) = actual kth data value
y_est(k) = kth data value using model estimates
e(k) = y(k) - y_est(k) = the error between the kth actual and predicted values

The least squares method seeks the solution where the sum of the squared errors is minimized. The total summation of errors squared is defined as

Sum of Errors Squared = Σ_{k=1}^{N} e(k)^2
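As an illustration of the normal equations, the sketch below fits a hypothetical overdetermined data set (four equations, two unknowns: intercept and slope of a line) and evaluates the resulting sum of squared errors (numpy assumed):

```python
import numpy as np

# Hypothetical overdetermined fit: four data points, two unknowns.
# Each row of Phi is [1, x(k)]; y holds the measured outputs.
Phi = np.array([[1.0, 0.0],
                [1.0, 1.0],
                [1.0, 2.0],
                [1.0, 3.0]])
y = np.array([1.1, 2.9, 5.1, 7.1])

# Normal equations: theta = (Phi^T Phi)^(-1) Phi^T y
theta = np.linalg.solve(Phi.T @ Phi, Phi.T @ y)
e = y - Phi @ theta
sse = float(e @ e)      # sum of squared errors, minimized by this theta
print(theta, sse)
```

No other choice of the two coefficients produces a smaller sum of squared errors for this data.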

We now have the tools required to extend the least squares method to a general batch of input-output data used to fit a particular model.

EXAMPLE 11.6 Solve for the two unknown coefficients, a and b, using three equations.

Now we add an additional equation to the problem solved in Example 11.5. In this case we no longer get exact solutions, since the number of equations is greater than the number of unknowns. This case is more typical of what we encounter in system identification. The three equations are

3a + 2b = 7
6a - 2b = 2
9a + 4b = 18

Writing them in matrix form:

Φθ = y
[ 3   2 ] [ a ]   [  7 ]
[ 6  -2 ] [ b ] = [  2 ]
[ 9   4 ]         [ 18 ]

Now our Φ matrix is no longer square and we cannot simply take the inverse. To solve this system we must use the equation defined above:

Φ^T Φ θ = Φ^T y

Substituting in our matrices:

[ 3   6  9 ] [ 3   2 ] [ a ]   [ 3   6  9 ] [  7 ]
[ 2  -2  4 ] [ 6  -2 ] [ b ] = [ 2  -2  4 ] [  2 ]
             [ 9   4 ]                      [ 18 ]

[ 126  30 ] [ a ]   [ 195 ]
[  30  24 ] [ b ] = [  82 ]

This now resembles our initial case with equal numbers of equations and unknowns. Notice that the number of unknowns determines the size of the matrix to invert, not the number of data points recorded. Now the solution can be found:

θ = (Φ^T Φ)^(-1) Φ^T y
[ a ]   [ 126  30 ]^(-1) [ 195 ]
[ b ] = [  30  24 ]      [  82 ]
[ a ]   [ 1.0452 ]
[ b ] = [ 2.1102 ]

From the results in Example 11.5 we know that the first two equations resulted in a = 1 and b = 2. Adding the third equation, which does not exactly agree with the first two, slightly changes the values of our coefficients. As more equations are added, the values would continue to change as the least squares method seeks to minimize the squared errors. We now have a procedure to fit many input-output data pairs to the coefficients of our difference equations.

EXAMPLE 11.7 Use the least squares method to find the coefficients representing the best second-order polynomial curve fit. The data are given as

u (inputs):   0     1     2     3     4      5      6
y (outputs):  2.22  3.34  4.90  6.90  9.34  12.22  15.54

The general equation to be modeled is given by

y(k) = b0 + b1 u(k) + b2 u(k)^2 + ... + bn u(k)^n + e(k)

where y(k) is the actual kth data value, n is the order of fit selected, and bi is the ith coefficient to be determined in the model. Note that the desired coefficients enter linearly even though the u input values enter nonlinearly. Since we want to fit the data to a second-order polynomial, our general model reduces to

y(k) = b0 + b1 u(k) + b2 u(k)^2
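This quadratic fit can be checked numerically; the sketch below forms the regressor columns 1, u, u^2 and solves the normal equations for b0, b1, b2 (numpy assumed):

```python
import numpy as np

u = np.array([0.0, 1, 2, 3, 4, 5, 6])
y = np.array([2.22, 3.34, 4.90, 6.90, 9.34, 12.22, 15.54])

# Vandermonde-style regressor: columns 1, u, u^2
Phi = np.column_stack([np.ones_like(u), u, u**2])
theta = np.linalg.solve(Phi.T @ Phi, Phi.T @ y)
print(np.round(theta, 2))   # coefficients b0, b1, b2
```

Because this particular data set lies exactly on a quadratic, the fit reproduces it with essentially zero residual.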



Therefore we have seven data pairs and three unknowns, b0, b1, and b2. Using the least squares matrix method gives us

[ 1  0   0 ]           [  2.22 ]
[ 1  1   1 ]           [  3.34 ]
[ 1  2   4 ] [ b0 ]    [  4.90 ]
[ 1  3   9 ] [ b1 ]  = [  6.90 ]
[ 1  4  16 ] [ b2 ]    [  9.34 ]
[ 1  5  25 ]           [ 12.22 ]
[ 1  6  36 ]           [ 15.54 ]

For this second-order fit, the kth row of Φ is Φ(k) = [1  u(k)  u(k)^2], which involves only the known inputs. Once we have Φ and y we use the equation

θ = (Φ^T Φ)^(-1) Φ^T y

The solution for θ gives us our coefficients b0, b1, and b2 as

θ = [ 2.22  0.90  0.22 ]^T

The final equation, expressed in more common notation, which best describes the input-output behavior of our system, is

y = 2.22 + 0.90x + 0.22x^2

Now we are ready to develop the system identification routines as commonly implemented in adaptive controllers. As mentioned previously, most models are of the difference equation format and can be developed from continuous system models using either z transforms or numerical approximations. The basic first-order system with constant numerator can be written as

C(z)/R(z) = b/(z - a)
c(k) - a c(k - 1) = b r(k - 1)

Or, rearranging,

c(k + 1) = a c(k) + b r(k)

Thus, in the kth row, Φ(k) = [c(k)  r(k)], which involves both known inputs and outputs; such a model is called autoregressive. For the output vector, the kth row is c(k + 1). This can be expressed in matrix form as

[ c(1)      r(1)     ]           [ c(2) ]
[ c(2)      r(2)     ] [ a ]     [ c(3) ]
[   ...      ...     ] [ b ]  =  [  ... ]
[ c(N - 1)  r(N - 1) ]           [ c(N) ]

The same procedure can be followed for a difference equation with a second-order denominator and first-order numerator, given as

c(k + 2) = a1 c(k + 1) + a2 c(k) + b1 r(k + 1) + b2 r(k)

and

[ c(2)      c(1)      r(2)      r(1)     ] [ a1 ]     [ c(3) ]
[ c(3)      c(2)      r(3)      r(2)     ] [ a2 ]     [ c(4) ]
[  ...       ...       ...       ...     ] [ b1 ]  =  [  ... ]
[ c(N - 1)  c(N - 2)  r(N - 1)  r(N - 2) ] [ b2 ]     [ c(N) ]
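Building such regressor matrices from recorded sequences is mechanical; the helper below (illustrative data, hypothetical function name) assembles the second-order case with rows [c(k+1), c(k), r(k+1), r(k)] and targets c(k+2):

```python
import numpy as np

# Assemble the second-order autoregressive matrices from recorded sequences.
def arx2_matrices(c, r):
    rows, targets = [], []
    for k in range(len(c) - 2):
        rows.append([c[k + 1], c[k], r[k + 1], r[k]])   # matches [a1, a2, b1, b2]
        targets.append(c[k + 2])
    return np.array(rows), np.array(targets)

# Illustrative (made-up) step-response-like data
c = [0.0, 0.1, 0.3, 0.55, 0.78, 0.93]
r = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
Phi, y = arx2_matrices(c, r)
print(Phi.shape, y.shape)   # six samples give four usable equations
```

Each extra denominator or numerator order simply adds a shifted column, which is why the output columns repeat with offsets of one sample time.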

Although using least squares methods requires the first several outputs to be ``discarded'' (in terms of output data), this seldom poses a problem due to the amount of data collected using computers and data acquisition boards. Many programs, including Matlab, contain the matrix operations required to solve for the coefficients. Also, since the columns containing the output data, c(k), are repeated and only shifted by multiples of the sample time, we can save memory in how we construct and store the matrices.

EXAMPLE 11.8 Using the input-output data given, determine the coefficients of the difference equation derived from a discrete transfer function with a constant numerator and first-order denominator. The recorded input and output data are

k     Input data, r(k)     Measured output data, c(k)
1     0                    0
2     0.5                  0
3     1                    0.1967
4     1                    0.5128
5     1                    0.7045
6     0.4                  0.8208
7     0.2                  0.6552
8     0.1                  0.4761
9     0                    0.3281
10    0                    0.1990
11    0                    0.1207

The model to which the data will be fit is given as

C(z)/R(z) = b/(z - a)



The transfer function can be converted into an equivalent difference equation:

c(k) - a c(k - 1) = b r(k - 1)

Or, rearranging,

a c(k) + b r(k) = c(k + 1)

Finally, this can be expressed in matrix form as

[ c(1)      r(1)     ]           [ c(2) ]
[ c(2)      r(2)     ] [ a ]     [ c(3) ]
[  ...       ...     ] [ b ]  =  [  ... ]
[ c(N - 1)  r(N - 1) ]           [ c(N) ]

Inserting the input and output values:

Φθ = y
[ 0       0   ]           [ 0      ]
[ 0       0.5 ]           [ 0.1967 ]
[ 0.1967  1   ]           [ 0.5128 ]
[ 0.5128  1   ]           [ 0.7045 ]
[ 0.7045  1   ] [ a ]     [ 0.8208 ]
[ 0.8208  0.4 ] [ b ]  =  [ 0.6552 ]
[ 0.6552  0.2 ]           [ 0.4761 ]
[ 0.4761  0.1 ]           [ 0.3281 ]
[ 0.3281  0   ]           [ 0.1990 ]
[ 0.1990  0   ]           [ 0.1207 ]

The solution is defined as

θ = (Φ^T Φ)^(-1) Φ^T y

The solution for θ gives us our coefficients a and b as

θ = [ 0.6065  0.3935 ]^T

The difference equation that best minimizes the sum of the squared errors is

c(k + 1) = 0.6065 c(k) + 0.3935 r(k)

And our model is

C(z)/R(z) = 0.3935/(z - 0.6065)
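The identification in this example can be reproduced with a few lines (numpy assumed); the estimates should land near a = 0.6065 and b = 0.3935:

```python
import numpy as np

# Input-output data from the example (first-order plant)
r = [0, 0.5, 1, 1, 1, 0.4, 0.2, 0.1, 0, 0, 0]
c = [0, 0, 0.1967, 0.5128, 0.7045, 0.8208, 0.6552, 0.4761, 0.3281, 0.1990, 0.1207]

# Regressor rows [c(k), r(k)] predict c(k+1)
Phi = np.array([[c[k], r[k]] for k in range(len(c) - 1)])
y = np.array(c[1:])

# Normal equations: only a 2 x 2 system is solved, despite 10 equations
a, b = np.linalg.solve(Phi.T @ Phi, Phi.T @ y)
print(round(a, 4), round(b, 4))
```

Since the recorded outputs were generated by the fitted model (to four decimal places), the estimates recover it almost exactly.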



Knowing that our sample time is T = 0.1 sec allows us to transform back into the s-domain and find the equivalent continuous system transfer function as

C(s)/R(s) = 1/(0.2s + 1)

This model was in fact used to generate the example data and is returned, or verified, by the least squares system identification routine. Also, in this example only a 2 × 2 matrix is inverted, since we are solving for only two unknowns, even though we have 10 equations. To conclude this section we discuss one modification that allows us to weight the input-output data to emphasize different portions of our data; the method is appropriately called weighted least squares. The solution calls for us to define an additional matrix W, called the weighting matrix. W is a diagonal matrix whose diagonal terms, w_i, are used to weight the data. With the weighting matrix incorporated, the new solution becomes

Φ^T W Φ θ = Φ^T W y
θ = (Φ^T W Φ)^(-1) Φ^T W y

where θ is the vector of desired coefficients, y is the vector of known outputs, Φ is the matrix of known input points, and W is the weighting matrix (diagonal). If we make W equal to the identity matrix, I, we weight all elements equally and the equation reduces to the standard least squares solution developed previously. One common implementation using the weighting matrix is to have every diagonal element slightly greater than the last (w_1 < w_2 < ... < w_N). This has the effect of weighting the solution in favor of later (more recent) data points and deemphasizing the older data points. One common method for choosing the values is to use the equation

w_i = λ^(N - i)

This weights the more recent data points over the past ones and produces a filtering effect, operating on the square of the error, that can reduce the effects of noise in our input-output data.

11.6.1.2

Recursive Algorithms Using Least Squares

While the least squares system identification routines from the previous section are useful in and of themselves, they require a matrix inversion and generally more processing time than is feasible between samples. Thus, the techniques described are batch processing techniques, where we gather the data and, after it is collected, proceed to process it. If we wish to perform system identification on-line while the process is running, we can implement a version of the least squares routine using recursive algorithms. This has the advantage of not requiring a matrix inversion; it only calculates the change in the system parameters resulting from the last sample taken. The same basic procedure is used when implementing least squares system identification routines recursively. It becomes a little more difficult to program due to the added choices that must be made. Upon system startup, the input and output



matrices must be built progressively as the system runs. Once a large enough set of data is recorded, it is possible to insert the newest point while simultaneously dropping the oldest point, so that the overall size remains constant. The solution has been developed using the matrix inversion lemma, which requires that only a scalar be inverted each sample period. Called the recursive least squares (RLS) algorithm, it calculates only the change to the estimated parameters each loop and adds the change to the previous estimate. Computationally, it has many advantages, since a matrix inversion procedure is not required; the update is converted to a form where the only inverse needed is that of a single scalar. To develop the equations, let us first define our data vector, w, and, as before, our parameter vector, θ. These are both column vectors, given as

w^T(k) = [c(k - 1), c(k - 2), ..., c(k - n_a), r(k - 1), r(k - 2), ..., r(k - n_b)]
θ^T = [a1, a2, ..., a_na, b1, b2, ..., b_nb]

where the model output is y(k) = w^T(k) θ (for any sample time k, knowing past values); n_a is the number of past output values used in the difference equation, and n_b is the number of past input values used in the difference equation. Recall that our Φ matrix in the preceding section contained the same data (formed from multiple input-output data points) and can be formed from the w vectors as

Φ = [ w^T(1) ]
    [ w^T(2) ]
    [   ...  ]
    [ w^T(N) ]

The number of columns in Φ is equal to the number of parameters, n_a + n_b, and the number of rows is equal to the number of data points used, N. The goal in recursive least squares parameter identification is to calculate only the change that occurs in each estimated parameter whenever another data sample is received. First, let us examine the term Φ^T Φ and see how additional data affect it. Define

P(k) = (Φ^T Φ)^(-1) = ( Σ_{i=1}^{k} w(i) w^T(i) )^(-1)

Then

P^(-1)(k) = Φ^T Φ = Σ_{i=1}^{k-1} w(i) w^T(i) + w(k) w^T(k)

Writing P as this summation now allows us to calculate the change in P each time a new sample is recorded, since

P^(-1)(k) = P^(-1)(k - 1) + w(k) w^T(k)

Remember that the solution for our system parameters is

θ = (Φ^T Φ)^(-1) Φ^T y



This, in combination with our definition of P, gives us

θ(k) = ( Σ_{i=1}^{k} w(i) w^T(i) )^(-1) ( Σ_{i=1}^{k} w(i) y(i) )

θ(k) = P(k) ( Σ_{i=1}^{k-1} w(i) y(i) + w(k) y(k) )

θ(k - 1) = P(k - 1) ( Σ_{i=1}^{k-1} w(i) y(i) )

Now that we have current and previous values of our parameter vector, θ, we can find the difference that occurs with each new sample; using the matrix inversion lemma to remove the necessity of performing a matrix inversion each step allows us to develop the final formulation. Two steps are required: we first calculate the new P matrix each step and then use it to find the change in the parameters. The equations below also include the weighting effects that allow us to favor recent values over past values. The factor λ is sometimes termed the forgetting factor, since it has the effect of ``forgetting'' older values and favoring the recent ones.

P(k) = (1/λ) [ P(k - 1) - P(k - 1) w(k) w^T(k) P(k - 1) / ( λ + w^T(k) P(k - 1) w(k) ) ]

θ(k) = θ(k - 1) + P(k) w(k) [ y(k) - w^T(k) θ(k - 1) ]

The general procedure to implement recursive least squares methods is to choose initial values of P and θ, sample the input and output data, calculate the updated P, and finally apply the correction to θ. For online system identification the process operates continually while the system is running. Several guidelines apply when implementing such solutions. First, we must choose initial values for P and θ. If possible, we can simply record enough initial values, halt the process, batch process the data (as in the previous section), and calculate P = (Φ^T Φ)^(-1) for our initial conditions. Finishing that process will also provide initial parameter values for θ. If interrupting the process to determine P and θ this way is not feasible, then it is common to choose P to be a diagonal matrix with large values on the diagonal. The parameter vector θ can be initialized as all zeros, letting it converge to the proper values once the process begins. Finally, λ is commonly chosen between 0.95 and 1 for initial values. When λ = 1 we get the standard recursive least squares solution. In practice, once a certain number of data points are being used, we commonly begin to discard the oldest value and add the newest, keeping the length of all vectors constant. This number is chosen such that the amount of data being used is enough to ensure convergence to the correct parameter values. There are many alternative methods to the least squares approach that are not mentioned here. The least squares approach is very common and fairly straightforward to program, especially for batch processing. Different subsets of least squares routines use more robust matrix inversion algorithms like QR or LU decompositions. Any numerical analysis programming textbook will describe and list these


Chapter 11

routines. This is only an introductory discussion of least squares methods, and references are included for further study. As the next section demonstrates, system identification routines using recursive least squares (or other methods) add another class of possibilities: adaptive controllers.
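The recursive update above is straightforward to program. The following is a minimal sketch in Python; the first-order model, signal values, and gains are illustrative assumptions (not from the text), identifying the two parameters of y(k) = a·y(k−1) + b·u(k−1) with regressor w(k) = [y(k−1), u(k−1)]:

```python
# Recursive least squares with a forgetting factor (illustrative sketch).
# Model: y(k) = a*y(k-1) + b*u(k-1), regressor w(k) = [y(k-1), u(k-1)],
# parameter vector theta = [a, b].

def rls_update(theta, P, w, y, lam=0.98):
    """One RLS step: update the covariance P, then correct theta."""
    # Pw = P(k-1) w(k)
    Pw = [P[0][0]*w[0] + P[0][1]*w[1],
          P[1][0]*w[0] + P[1][1]*w[1]]
    # denominator: lam + w^T P(k-1) w
    denom = lam + w[0]*Pw[0] + w[1]*Pw[1]
    # P(k) = (1/lam) * (P(k-1) - Pw Pw^T / denom)
    P_new = [[(P[i][j] - Pw[i]*Pw[j]/denom)/lam for j in range(2)]
             for i in range(2)]
    # prediction error e = y(k) - w^T theta(k-1)
    e = y - (w[0]*theta[0] + w[1]*theta[1])
    # theta(k) = theta(k-1) + P(k) w(k) e
    Pw_new = [P_new[0][0]*w[0] + P_new[0][1]*w[1],
              P_new[1][0]*w[0] + P_new[1][1]*w[1]]
    theta_new = [theta[0] + Pw_new[0]*e, theta[1] + Pw_new[1]*e]
    return theta_new, P_new

# Identify a simulated plant y(k) = 0.7*y(k-1) + 0.3*u(k-1)
theta = [0.0, 0.0]                      # start at zero, let it converge
P = [[1000.0, 0.0], [0.0, 1000.0]]      # large diagonal initial covariance
y_prev, u_prev = 0.0, 0.0
for k in range(1, 200):
    u = 1.0 if (k // 10) % 2 == 0 else -1.0   # persistently exciting input
    y = 0.7*y_prev + 0.3*u_prev
    theta, P = rls_update(theta, P, [y_prev, u_prev], y)
    y_prev, u_prev = y, u

print(theta)
```

Starting θ at zero with a large diagonal P, as the guidelines above suggest, the estimates converge toward the true values [0.7, 0.3] within a few dozen samples.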

11.7

ADAPTIVE CONTROLLERS

Adaptive controllers encompass a wide range of techniques and methods that modify one or more controller parameters in response to some measured input(s). The level of complexity ranges from self-tuning and gain scheduling to more complex model reference adaptive and feedforward systems. This section briefly describes some common configurations in use. First, let us examine some basics regarding the design of adaptive controllers. One important item to note is that stability proofs for adaptive controllers are very difficult, since the control system becomes time varying and usually nonlinear. It is often very difficult to predict all the combinations of parameters that the controller might adapt to during operation. This leads to a variety of programming approaches, ranging from intuitive rules to complex mathematical models. In general, the better our models are and the more they are used during the design process, the better the resulting controllers, with greater stability and lower actuator demands. Noise is another issue that tends to degrade the performance of our controllers. If possible, it generally helps to add filters at the inputs of our A/D converters. To evaluate performance it is helpful to use programs like Matlab that allow multiple simulations and design iterations in a short period of time. In addition to our normal dynamic and steady-state response characteristics, we must evaluate the convergence rate of the adaptive portion. This also gives us an indication of the system stability with the adaptive controller added.

11.7.1

Gain Scheduling

Gain scheduling is often the simplest method since most work is done up front when designing the controller. A typical system configuration is given in Figure 22. The general operation using gain scheduling is to change controller parameters (usually gains, although configurations or operating modes may also be changed) based on

Figure 22

Adaptive controller—gain scheduling configuration.

Advanced Design Techniques and Controllers


the inputs it receives. These inputs into the adaptive algorithm may be command inputs, output variables from the process, or external measurements. For example, in hydraulic controllers the system pressure acts in series as another proportional gain in the forward loop. Thus, if the controller is tuned at 1500 psi and the operating pressure is changed to 3000 psi, it is likely that the controller will become more oscillatory or even unstable. A gain scheduling controller would measure the system pressure and, based on its value, determine the appropriate gain for the system. The gain schedule may or may not comprise distinct regions. It may follow a simple rule; for example, if the pressure doubles, the electronic gain is set to 1/2 of its initial value. From a practical standpoint, noise in the signals must be filtered out or the gain will constantly jump around with the noise imposed on the desired signal. The general approach is to break the system operation into distinct regions and implement different controllers and/or gains depending on the region of operation. The regions may be functions of several variables, as listed above. The regions might be determined by the nonlinearities of the model in such a way that each region is approximately linear, allowing classical design techniques to be used within each linearized operating range. The advantage of gain scheduling, the ability to preprogram the algorithms, is also its weakness, in that it is only adaptive to preprogrammed events. Because of its simplicity, it sees much use in practice. Since the changes are predetermined, it also allows us to verify stability, at least with respect to changes in the controller; changes in the system parameters may still cause the system to become unstable.

11.7.2 Self-Tuning Controllers

Self-tuning, or autotuning, is used to replace the manual tuning procedures studied thus far.
It "mimics" what the control operator might do if physically standing at the machine tuning the controller. Common parameters the controller tunes to are overshoot and settling time. Many PLCs with PID controllers use autotuning techniques. The autotuning algorithm is usually initiated by the operator, at which point the controller injects a series of step inputs into the system and measures the responses. The step inputs may occur while the controller is active if the inputs are superimposed on the existing commands. Slightly better results are generally possible if the controller is taken offline, where extraction of the response data is more straightforward. Recent algorithms use a continuous recursive solution and constantly "tune" the controller for optimal performance. A general self-tuning controller configuration is shown in Figure 23. The self-tuning configuration is based on collecting the appropriate input-output data. Once the input-output response data are collected (usually several input-output cycles), the algorithm proceeds to calculate the new gains based on the current overshoot and settling time (or whatever parameters are chosen). It is a good idea to then repeat the test and verify the new gains. The operator can often choose the level of response required (i.e., fast, medium, slow) and the accepted trade-offs that accompany each type. Some algorithms allow the test data to be saved to a file for additional processing by the engineer. The algorithms for self-tuning controllers can be based on methods similar to the pole placement and Ziegler-Nichols tuning methods presented earlier. If we know the desired type of response for our application, the algorithm can be programmed


Figure 23

Adaptive controller—self-tuning configuration.

for that one specific type. Knowing how each gain affects the response (covered in many earlier sections) is necessary when developing the autotuning algorithm. A commercial self-tuning PID algorithm (Kraus and Myron, 1984) is presented here to illustrate the process of implementation with digital controllers. The process requires the system mass and initial gains as inputs and proceeds to determine the closed loop step response. The peak times are used to determine the damped natural frequency, fd, and the amplitude ratio of successive peaks is used to determine the decay ratio, DR. The following two equations (derived by trial and error) then calculate the equivalent ultimate gain and ultimate period (TU = 1/fU) as required for use by Ziegler-Nichols methods:

KU = Kinitial / [1 − (8·DR)²/55]^(1/2)

and

fU = fd·[1 − (8·DR)^3.5/1110]^(1/3.5)
Once KU and TU are found, the equations presented in Table 2 of Chapter 5 can be used, depending on the controller type being implemented. The gains found using the Ziegler-Nichols equations may also be modified further depending on the desired response characteristics. This is just one example of an empirically based solution to the autotuning controller.

11.7.3

Model Reference Adaptive Controllers

Model reference adaptive controllers, or MRAC, take on many forms, with the general configuration shown in Figure 24. The goal is to design the controller to drive the error between the reference model output and the physical system output to zero, thus "forcing" the system to have the response defined by the reference model. For MRAC systems to work, the reference model and reference inputs must be feasible for the physical system to achieve. Many algorithms have been proposed to accomplish this. In a sense the MRAC is a subset of a self-tuning controller: instead of being based on measured response characteristics, the desired response is derived from the reference model. Any time there is an error between the reference model output (desired output) and the actual system output, the controller is modified so that the two responses become equal.
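One classic adaptation law of this kind (not taken from the text) is the gradient or "MIT rule," which adjusts a controller gain in proportion to the model-following error. The plant, reference model, and adaptation gain below are illustrative assumptions:

```python
# Minimal MRAC sketch using the gradient ("MIT rule") adaptation law.
# The plant, reference model, and gains are illustrative assumptions.
# Plant: y(k+1) = 0.9 y(k) + 0.5 u(k), with the input gain (0.5)
# assumed unknown to the controller.

gamma = 0.005         # adaptation rate
theta = 0.0           # adjustable feedforward gain, u = theta * r
y = ym = 0.0
r = 1.0               # reference input (step command)

for k in range(4000):
    u = theta * r
    y = 0.9*y + 0.5*u            # physical system
    ym = 0.9*ym + 0.2*r          # reference model (desired response)
    e = y - ym                   # model-following error
    theta -= gamma * e * ym      # MIT rule: adjust theta to drive e to zero

print(theta, y - ym)
```

As the error is driven to zero, θ settles at 0.2/0.5 = 0.4, the value that makes the plant reproduce the reference model exactly.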


Figure 24


Adaptive controller—MRAC configuration.

11.7.4 Model Identification Adaptive Controllers

Model identification adaptive controllers (MIAC) implement the system identification techniques from the previous section to improve and adapt the controller in real time. MIAC differs from MRAC in that the model parameters are continuously estimated and used in the control system; MRAC may or may not do this, and simpler MRAC systems are based only on the reference model and the actual error. The advantage of MIAC is that once we know the model (and track its changes) we can implement effective feedforward and controller algorithms. Consider the block diagram in Figure 25. As the system parameters change (e.g., different inertial loads on a robot arm), the recursive system identification algorithm continually updates the model parameters. The model tracks the changes to the system and allows more accurate implementation of feedforward along with the adaptive controller routines listed above. Of the methods presented in this section, gain scheduling and self-tuning controllers are already quite common in industrial applications. Although the potential benefits are greater, the MRAC and MIAC methods require additional development work and are more difficult to analyze in terms of stability.
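A minimal MIAC-style sketch, with an invented scalar plant: the plant gain is identified online with a normalized gradient step and immediately reused to set a feedforward inverse gain:

```python
# MIAC-style sketch (illustrative, not from the text): a scalar plant
# gain is estimated online and the estimate is used to set a
# feedforward inverse gain, u = r / b_est.

b_true = 2.5          # actual plant gain (changes mid-run, e.g. a new load)
b_est = 1.0           # initial model estimate
r = 4.0               # command
outputs = []

for k in range(200):
    if k == 100:
        b_true = 5.0                        # plant parameter change
    u = r / b_est                           # feedforward from identified model
    y = b_true * u                          # plant response
    # normalized gradient identification step on the model error
    b_est += 0.5 * (y - b_est*u) * u / (u*u + 1e-9)
    outputs.append(y)

print(b_est, outputs[-1])
```

After the parameter change at k = 100 the estimate re-converges to the new gain and the output returns to the commanded value, which is the tracking behavior the section describes.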

Figure 25

Adaptive controller—MIAC configuration.


11.8

NONLINEAR SYSTEMS

All physical systems are inherently nonlinear. These nonlinearities, as we will see, range from natural nonlinearities in the physical system to nonlinearities introduced by the controller (adaptive controllers, bang-bang controllers, etc.). Some nonlinearities are continuous and can thus be approximated by linear functions around some operating point. Discontinuous nonlinearities cannot be approximated by linear models and include items like hysteresis, backlash, and Coulomb friction. A major problem that we have with nonlinear systems is determining stability. Once the system is nonlinear, the principle of superposition is no longer valid, and hence the transfer function, root locus, and Bode plot techniques are also invalid. State space equations in a general form remain valid, but we are unable to take advantage of the linear algebra techniques based on having linear system matrices. Nonlinear systems have several other differences when compared to linear systems. There is no longer a single equilibrium point, and different equilibrium points are possible depending on the initial conditions. Thus we must view local and global stability as separate issues. Nonlinear systems can exhibit limit cycles (closed orbits on the phase plane), which are sustained repetitive oscillations. Bode plots become dependent on input amplitude. Some nonlinear elements may produce two or more possible outputs for the same input, leading to jump resonance. This section outlines some of the common nonlinearities and possible solutions when working with nonlinear systems. The adaptive methods from the previous section are also commonly used in controlling nonlinear systems, since the design goal is to control widely varying plants.

11.8.1

Common Nonlinearities

Nonlinearities are commonly classified according to several characteristics:

Continuous or discontinuous
Hard or soft
Single valued or multiple valued
Natural or artificial

Continuous nonlinearities have finite higher derivatives, as seen in nonlinear springs and valve pressure-flow curves. Discontinuous nonlinearities, unfortunately, are more common and include items like saturation (a common one), hysteresis, and deadband. The terms hard and soft are an alternative way of describing discontinuous and continuous nonlinearities, respectively (hard being discontinuous). Single valued nonlinearities are such that a vertical line always intersects just one value, as in saturation; multiple valued nonlinearities have multiple values intersected by a vertical line, as in hysteresis. Finally, natural nonlinearities occur in physical systems and their models, while artificial nonlinearities are added through different controller algorithms, for example, adaptive control. The common nonlinearities are summarized in Table 1. The nonlinearities above are found to some degree in almost every physical system. Whether to include them in the models can be determined only by experience, intuition, and trial and error. Since they can have a significant impact on system performance and stability, several methods are introduced below to evaluate the necessity of including them.
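For simulation purposes, several of these common static nonlinearities can be coded directly. The functions below are generic sketches with illustrative limits:

```python
# Simple models of common static nonlinearities (saturation, deadband,
# and Coulomb friction); the limits and magnitudes are illustrative.

def saturation(x, limit=1.0):
    """Output follows the input until it clips at +/- limit."""
    return max(-limit, min(limit, x))

def deadband(x, band=0.2):
    """No output until the input magnitude exceeds the band."""
    if abs(x) <= band:
        return 0.0
    return x - band if x > 0 else x + band

def coulomb_friction(v, f=0.5):
    """Constant friction force opposing the direction of motion."""
    if v == 0.0:
        return 0.0
    return -f if v > 0 else f

print(saturation(2.0), deadband(0.1), coulomb_friction(-3.0))
```

Dropping functions like these into a numerical integration loop is the usual way to evaluate whether a given nonlinearity matters for a particular design.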


Table 1


Common Nonlinearities in Physical Systems

11.8.2 Numerical Evaluation Techniques

Simulation is the most widely used and most commonly available method for evaluating the performance and stability of nonlinear systems. Virtually all the blocks in Table 1 can be found in Matlab/Simulink, along with many additional nonlinear functions. It is easy to build the system models, include the nonlinearities where desired, and evaluate system performance. As the systems get more complex, it may be advantageous to use state space equations and numerically integrate them instead of trying to develop a block diagram including linear and nonlinear components. Examples of using state space equations can be found in the bond graph simulations. The major disadvantages of numerical simulation are time and global stability issues. More often than not, as the system gets more complex, the time issue becomes less of a disadvantage unless we are well versed in analytical techniques like Lyapunov's methods, phase plane techniques, or Popov's criterion. The difficulty with global stability arises because a simulation can only tell us about the response to one particular input sequence and set of operating conditions. Global stability can never be completely proven using only numerical simulations. However, for most designs, multiple simulations are easy to run, and in general a fairly broad mapping can be accomplished. If the mapping of operating conditions covers the ranges of operation seen in practice, the system should remain stable. Thus, simulations are frequently used and are a fairly easy extension of techniques learned earlier, especially when control system simulation programs are used.

11.8.3 Lyapunov's Methods

Since Lyapunov's methods represent a common analytical approach for evaluating nonlinear systems, a brief overview is given here.
There are three types of stability we must be concerned with when dealing with nonlinear systems: global, local, and Lyapunov stability, as shown in Figure 26. The stability regions are further classified as asymptotically stable and exponentially stable. Asymptotically stable systems eventually reach the equilibrium point, but not necessarily by the most direct path; they always tend toward the equilibrium point, but at different rates of decay. Exponentially stable systems decay exponentially to the equilibrium point, providing a more desirable response.

Figure 26

Different stability regions for nonlinear systems.

Two Lyapunov methods are commonly used: the indirect and the direct. The indirect method involves finding the critical points of the system and solving for the linearized system eigenvalues at each critical point. The critical points are locations where all the derivatives are zero and thus constitute "feasible" equilibrium points for the system. In the common pendulum example, we obviously have two equilibrium points, one stable and one unstable. By linearizing the state equations about these two points and determining the eigenvalues, the local stability around each critical point is found. A variety of numerical methods can be used to find the critical points. The indirect method of Lyapunov is more intuitive and "bridges the gap" between our linear system tools and nonlinear stability analysis. The second method, often called the direct method, is a rather complex topic but does not require any approximations during the stability analysis. It can be applied to systems of any order, linear or nonlinear, time varying or invariant, multivariable, and even to models containing nonnumerical parameters. Since it employs state space notation, it is limited to continuous nonlinearities (eliminating many common nonlinearities). The most difficult portion of the method is generating a positive definite function containing the system variables, commonly called the V function, or Lyapunov function. The second method is based on the energy method and can be summarized as follows: If the total energy in the system is greater than zero (ET > 0) and the derivative of the energy function is negative (dET/dt < 0), the net energy is always decreasing and therefore the system is stable.
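The indirect method for the pendulum example can be sketched numerically; the parameter values (g/L and the damping coefficient) are illustrative:

```python
import cmath

# Indirect (first) method of Lyapunov for the pendulum example: linearize
# about each equilibrium point and inspect the eigenvalues. The numbers
# (g/L = 9.81, damping c = 0.5) are illustrative assumptions.

gL, c = 9.81, 0.5

def eigenvalues(A):
    """Eigenvalues of a 2x2 matrix via its characteristic polynomial."""
    tr = A[0][0] + A[1][1]
    det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
    disc = cmath.sqrt(tr*tr - 4.0*det)
    return (tr + disc)/2.0, (tr - disc)/2.0

# Pendulum: theta'' = -(g/L)*sin(theta) - c*theta'.
# Linearizing sin(theta) at theta = 0 gives +theta; at theta = pi, -theta.
A_hanging  = [[0.0, 1.0], [-gL, -c]]   # hanging-down equilibrium
A_inverted = [[0.0, 1.0], [ gL, -c]]   # inverted equilibrium

for name, A in [("hanging", A_hanging), ("inverted", A_inverted)]:
    lams = eigenvalues(A)
    print(name, "locally stable:", all(l.real < 0 for l in lams))
```

The hanging equilibrium yields eigenvalues with negative real parts (locally stable), while the inverted equilibrium has an eigenvalue with a positive real part (unstable), matching the two equilibrium points described above.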
There are mathematical proofs available for this method (see references), but the general idea is somewhat intuitive. Being based on the energy method, a good first attempt at finding a V function that is positive definite and whose partial derivatives exist is to use the sum of the kinetic and potential energies in the system. Several methods for finding Lyapunov functions have been developed; Krasovskii's method, the variable gradient method, and Zubov's construction method are examples.

11.9

NONLINEAR CONTROLLER ALTERNATIVES

Many other methods besides adaptive controllers are commonly applied to nonlinear systems: sliding mode control, feedback linearization, fuzzy logic, neural nets, and genetic algorithms, to mention just a few. The basic ideas, strengths, and weaknesses of each method are briefly presented here to encourage further research.


11.9.1 Sliding Mode Control

Sliding mode control is designed to represent nonlinear higher order systems as a series of first-order nonlinear systems that are much easier to control. Good performance is achieved even in the presence of model errors and disturbances, but at the price of high controller activity. Sliding mode control has been used successfully in robots, vehicle transmissions and engines, electric motors, and hydraulic valves. Sliding mode control was developed in the 1960s in the former Soviet Union and is based on the direct method of Lyapunov discussed above. As before, we need to find a positive definite energy function for our system. The controller is designed to discontinuously vary the controller parameters to force the states onto a predefined switching surface. Once the state reaches this surface it slides along it, guaranteeing stability and defined closed loop dynamics. The trick in getting the states to all be attracted to the surface is defining the proper Lyapunov function such that the control law always makes the energy function derivative negative (decreasing energy, movement toward the equilibrium points). The practical problem of sliding mode control is the implementation of a discontinuous switching law, which commonly introduces chatter into the system once the sliding surface is reached. For systems where the chattering frequency is much higher than the bandwidth of the system, this is not a large problem, and direct implementation of sliding mode control provides very good results. Otherwise, continuous approximations of the control law must be made, which somewhat degrades the controller performance. If the time is taken to perform a Lyapunov stability analysis, the extension to a sliding mode controller may well be worth it to achieve excellent stability for the control system.

11.9.2 Feedback Linearization

Feedback linearization is another technique applied to the control of nonlinear systems.
The idea is to use state transformations and feedback to algebraically linearize the system while leaving the nonlinear system equations intact. These techniques have been successfully used in high performance aircraft and robots. The attractive characteristic of feedback linearization is that once the system is algebraically linearized, all our linear control design techniques may be used. Since the linearization is only algebraic, however, the method is subject to several problems and limitations. Some nonlinear systems are impossible to linearize algebraically. Partial linearization can be used instead, but it includes no guarantees of global stability. The same caution applies to fully feedback-linearized systems, since the algebraic model may contain errors and unmodeled dynamics. Finally, the method requires the measurement of all states to be effectively implemented (Slotine and Li, 1991). An example where the nonlinear wind drag on an automobile cruise control system is canceled out is given in Figure 27. Within the digital controller the nonlinear effects of the wind are numerically canceled, and the controller is designed as if the system were linear. This method obviously is very dependent on the quality of our model

Gamble J, Vaughan N. Comparison of Sliding Mode Control with State Feedback and PID Control Applied to a Proportional Solenoid Valve. Journal of Dynamic Systems, Measurement, and Control, September 1996.


Figure 27

Automobile cruise control with feedback linearization of aero drag.

but does provide advantages over simply linearizing the model and then designing around a single operating point.

11.9.3

Fuzzy Control

Fuzzy logic controllers have become very common and are used in a large range of applications. They originated in 1964 with Zadeh at the University of California, Berkeley. The concept initially took a long time to generate support but in recent decades has become very popular. In 1987, Yasunobu and Miyamoto described the widely known application of controlling Sendai's subway system. Since then fuzzy logic has rapidly found its way into many products, some of which are listed in Table 2. There are several reasons why it possibly took so long for fuzzy logic to become more widely used. First, the term itself is not particularly attractive in situations where safety is of critical importance. We generally would not tell the passengers on an airliner that the landing systems are controlled using fuzzy logic. Even engineers, unless they understand the process, are not likely to endorse fuzzy designs. A second reason is the lack of a well-defined mathematical model. It is impossible to analytically prove a system's stability apart from a mathematical model. Critics of this position quickly point out that linear models are seldom valid throughout a system's operating range and therefore also do not guarantee global stability. This section is only an introduction, designed to explain the basic theory and implementation techniques of fuzzy logic. The easiest way to begin is to describe fuzzy logic as a set of heuristics and rules about how to control the system. Heuristic relates to learning by trying rather than by following a preprogrammed formula; in a sense, it is the opposite of the word "algorithm." It is therefore a "human" approach to solving problems. We seldom say it is 96 degrees Fahrenheit and therefore it must be hot; rather, we say it is hotter than normal. In a similar fashion, fuzzy logic is based on "rules of thumb." Thus, instead of one input value simply being larger or smaller than another, it may be rather close to or very far from the other value. This is done through the use of membership functions that take different shapes. Fuzzy logic works well with complex processes that lack a good mathematical model and with highly nonlinear systems. In general, conventional controllers are as good or better if the model is easily developed and fairly linear, where common design techniques may be applied. The question then arises as to the circumstances under which fuzzy logic techniques are particularly attractive. Circumstances in favor of using fuzzy logic include when a mathematical model is unavailable or so complex that it cannot be evaluated in real time, when low precision microprocessors or sensors are used, and when high noise levels exist. To implement a fuzzy logic controller, we also have several conditions that must be met. First, there needs to be an expert available to specify the rules describing the system behavior, and second, a solution must be possible.

Table 2

Common Applications of Fuzzy Logic

Automotive systems: fuel management, antilock brakes, emission controls, traction control, automatic transmissions, vehicle ride control
Other applications: washing machines, camcorders, elevators, financial systems, refrigerators, cranes, household appliances, economic systems, cameras, incineration plants, electronic systems, social and biological systems

Yasunobu S, Miyamoto S. Automatic Train Operation System by Predictive Fuzzy Control. Industrial Applications of Fuzzy Control, ed. M. Sugeno, North-Holland, Amsterdam, 1985.
Although fuzzy logic is often described as a form of nonlinear PID control, this limited understanding does not encompass the whole concept. The idea stems from the many reports on using fuzzy logic with rules written in the same way that PID algorithms operate. For example, with a SISO system a rule might read "if the error is positive big and the error change is positive small, then the actuator output is negative big." This simply results in a nonlinear PD controller. A better application to illustrate the concept of fuzzy logic is the automatic transmission in vehicles. Standard control algorithms must make set decisions based on measured inputs; fuzzy algorithms are able to apply sets of rules to the inputs, infer what is desired, and produce an output. Because of this inference, a fuzzy controller will respond differently as different drivers operate the vehicle. For example, a fuzzy system is able to make judgments about the operating environment based on the measured inputs. This is where the expert enters the picture. The rules are written by experts who realize that people prefer not to continually shift up and down on winding roads but do need to downshift quickly when on a level road and desiring to pass another vehicle. Thus, we write the rules such that if the throttle is fluctuating by large amounts, as on a winding road, the transmission does not continually shift, and yet if the throttle is relatively constant before undergoing a sudden change, the transmission shifts quickly. It is along these lines that expert knowledge is used to describe typical driving behavior and infer what the transmission shift patterns should be. The benefit of fuzzy logic is the ease with which such rules can be written and implemented in a controller. Once written, it is also easier for other users to read the rules, understand the concept, and make changes, instead of poring over many details hidden in mathematical models.


The best way to demonstrate the concepts and terms of fuzzy logic controllers is by working a simple example. The next section works through a common introductory example of a fuzzy logic controller: controlling the speed of a fan based on temperature and humidity inputs.

11.9.3.1

Fuzzy Logic Example—Fan Speed Controller

The goal of this example is to introduce the common terms and ideas associated with fuzzy logic controllers within the framework of designing a fan speed controller. There are two sensors in the system, temperature and humidity, and they are used to determine the speed setting of the fan. The rules are written using everyday language, in the same way that we would decide what the fan speed should be. Thus, for this example, we get to be the expert. First let us explain some definitions used with fuzzy logic. Whereas classical theory distinguishes categories using crisp sets, with fuzzy logic we define fuzzy sets, as shown in Figure 28. Using the temperature analogy, with a crisp set we might say that any temperature below 40°F is cold, a temperature between 40°F and 75°F is warm, and anything above 75°F is hot. Clearly with the crisp set we would have people who still think that 41°F is cold even though it is classified as warm. Similarly, 39.9°F is classified as cold even though 40.1°F would be classified as warm. A more natural representation is found with the fuzzy set, where everyone might agree that below a certain temperature (40°F) it is cold and that above a certain temperature (65°F) it is no longer cold. Between those two temperatures fall people who each think differently about what should be called cold or warm. The sets of data are called membership functions, and although straight line segments are used in Figure 28, they are not required; different shapes are shown later in this example. The expert who has knowledge of the system determines the appropriate membership function. The level to which someone belongs is called the degree of membership, μ(x). With the crisp set either you belong or you do not. This is like buying a membership at a health club. We cannot say "please give me 30%

Figure 28

Crisp and fuzzy data sets.


membership for this month." We either belong or we do not. With the fuzzy set it is possible to have full membership, no membership, or some intermediate value. Now we can belong partially to one set and, as Figure 29 shows, at the same time also belong partially to another set. In the fuzzy set of Figure 29, between 40°F and 65°F we belong to both the cold and warm sets at the same time. This is where the term fuzzy is appropriate, since we are both cold and warm simultaneously. The scope is the range where a membership function is greater than zero, and the height is the largest degree of membership contained in the set. For the fuzzy warm set the scope is 40°F to 85°F and the height is 1. The height of a membership function is commonly set to one, although it is not constrained to be. There are many possibilities for membership function shapes, as shown in Figure 30. Ideally we know enough about our system that we initially choose the membership function that best describes its characteristics. It is not required that we choose the exact one, or even that it exists, and a primary method of tuning fuzzy logic controllers is changing the shape of the membership functions. It may help if the shape can be described with a mathematical function, although simple lookup tables are commonly used when implemented in microprocessors. In addition to triangles, trapezoids, s- and z-curves, normal (Gaussian) and bell curves, and singletons, other shapes can be used. Design tools like Matlab's Fuzzy Logic Toolbox contain a variety of membership functions, as shown in Figure 31. If we wish to modify shapes we use what is called hedging. Recall that our degree of membership is represented by μ(x), which at this point we will assume is between zero and one. If we raise μ to different powers, we change the shape of the original membership function described by μ(x). The use of hedging in this way is shown in Figure 32.
Since μ(x) is less than 1, raising it to a power greater than 1 makes the membership function more concentrated, and raising it to a power less than 1 makes it more diffuse. We may use words like very, less, extremely, and slightly when we write the rules for our system, and we can implement them using hedges. For example, with our fan speed controller, we may wish to know if it is hot, or very hot. Finally, let us look at one more definition before we move more fully into fuzzy logic design. Now that we can define membership functions using a variety of shapes, we need to learn how to combine them, since they may overlap, as when we were cold and warm at the same time. We combine the membership functions using logical operators: And (minimum), Or (maximum), Not, Normalization, or Alpha-Cuts. There are others, but the concepts can be explained using those listed. As Figure 33 shows, when membership functions overlap, the different logical operators result in different overall membership shapes.
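A small sketch ties these definitions together; the temperature breakpoints mirror the cold/warm sets described above, while the function names and the squaring hedge are illustrative choices:

```python
# Membership functions, a "very" hedge, and min/max (AND/OR) combination
# for the cold/warm temperature sets; breakpoints follow the text's
# example, other details are illustrative.

def mu_cold(t):
    """Z-shaped set: fully cold below 40 F, not cold at all above 65 F."""
    if t <= 40.0:
        return 1.0
    if t >= 65.0:
        return 0.0
    return (65.0 - t) / 25.0

def mu_warm(t):
    """Triangular set with scope 40 F to 85 F and height 1."""
    if t <= 40.0 or t >= 85.0:
        return 0.0
    if t <= 65.0:
        return (t - 40.0) / 25.0
    return (85.0 - t) / 20.0

def very(mu):
    """Hedge: raising mu to a power > 1 concentrates the set."""
    return mu**2

t = 55.0
cold, warm = mu_cold(t), mu_warm(t)
print(cold, warm)        # partial membership in both sets at once
print(min(cold, warm))   # AND (minimum)
print(max(cold, warm))   # OR (maximum)
print(very(warm))        # "very warm" is a stricter requirement
```

At 55°F the temperature is partially cold and partially warm at the same time, which is exactly the overlap the operators above are designed to resolve.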

Figure 29 Combined membership in crisp and fuzzy data sets.


Chapter 11

Figure 30 Example membership functions.

The logical operators AND, OR, and NOT each result in a different combination of the fuzzy sets. The norm operator takes the mean of the membership functions, and the α-cut operator places a threshold line α between 0 and 1; any portions of the membership functions above α are included in the combination. As will become clearer as we progress through this example, using these operators allows us to remove (or decide about) some of the ambiguity of being both warm and cold at the same time.

To begin putting the definitions and concepts together, let us examine the overall picture of how the definitions above fit into fuzzy logic control system design. The basic functional diagram of a fuzzy logic controller is given in Figure 34. The middle block containing our rules is inference based and comes from our knowledge of how our system should perform. At first glance it seems that, since we start and end with crisp data, the fuzzy logic controller is only extra work on our part. We certainly are constrained to start and end with crisp data, since sensors

Figure 31 Membership functions in Matlab (Fuzzy Logic Toolbox).

Advanced Design Techniques and Controllers

Figure 32 Changing the shape of membership functions using hedging.

and actuators do not effectively transmit or receive commands like warm or cold. Our controller must still receive and send signals such as voltages or currents. What fuzzification and defuzzification allow us to do, however, is describe and modify our system using rules that we all can understand. Instead of developing detailed mathematical formulas describing the rules of our system, we simply represent our membership functions graphically and define our rules to design our controller. As mentioned earlier, since it is assumed that a mathematical model does not exist, we need to have some knowledge of and experience with the actual system to write meaningful rules. Even in the case of two inputs and one output, as in this example, where we can ultimately describe the fuzzy logic controller as an operating surface, this method of developing the surface is more intuitive and easier to modify than obtaining the same results through mathematical models, trial-and-error, or extensive laboratory testing. As Figure 34 illustrates, we still have a unique (crisp) output for any given combination of inputs, and fuzzy logic techniques provide the tools to develop the nonlinear mapping between the two crisp sets of data. Referencing Figure 34, we can define the following terms:

Fuzzification: The process of mapping crisp input values to associated fuzzy input values using degrees of membership.

Figure 33 Operations on fuzzy membership sets.
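The pointwise operators illustrated in Figure 33 can be sketched in a few lines (generic Python; the function names are ours, not the book's):

```python
# Pointwise fuzzy-set operators: AND = minimum, OR = maximum,
# NOT = complement, plus an alpha-cut threshold.

def fuzzy_and(mu_a, mu_b):
    return min(mu_a, mu_b)

def fuzzy_or(mu_a, mu_b):
    return max(mu_a, mu_b)

def fuzzy_not(mu_a):
    return 1.0 - mu_a

def alpha_cut(mu_a, alpha):
    # Keep only the portion of the membership at or above the threshold.
    return mu_a if mu_a >= alpha else 0.0

# At some temperature we might be 0.6 cold and 0.4 warm at the same time:
mu_cold, mu_warm = 0.6, 0.4
both = fuzzy_and(mu_cold, mu_warm)   # 0.4
either = fuzzy_or(mu_cold, mu_warm)  # 0.6
```

Applying these functions at every point along the universe of discourse produces the combined membership shapes shown in the figure.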



Figure 34 Functional diagram of fuzzy logic controller.

Defuzzification: The process of mapping fuzzy output values to crisp output values using aggregation.

Aggregation: Methods used to combine fuzzy sets into a single set with the goal of obtaining a crisp output value.

Rule-based inference: The process of mapping fuzzy input values to fuzzy output values. Rules are used to represent the behavior of the system.

Rules are usually implemented using IF (antecedent) THEN (consequent) statements. For simple systems we can represent the rules in tabular form using a fuzzy association matrix (FAM). Extending the functional diagram in Figure 34 to our particular example of fan speed control results in Figure 35. To design our controller, we now need to perform the fuzzification, write the rules, and perform the defuzzification for our fan speed control. Since we have two inputs, we will need to map the crisp data into two sets of membership functions and combine them using the rules.

To begin, we will assign linguistic names to describe each variable. The temperature, as done already, is described as cold, warm, or hot. This means that we will need three membership functions to perform the fuzzification of the crisp temperature input. In addition, our rules can be written using cold, warm, and hot in the decision-making process (much as we describe our surroundings to one another). The membership function for each linguistic variable is given in Figure 36. For the humidity input we will use the linguistic variables low, average, and high, again using three categories. Using the same shapes as for the temperature, we can develop the membership functions for humidity as shown in Figure 37. Finally, we will need to perform the defuzzification to obtain the crisp output determining the fan speed. Our linguistic variables for this output will be slow, medium, and fast. Once again using the same shapes and number of membership functions, we get the result in Figure 38.
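Fuzzification with straight-line membership functions of this kind is easy to sketch in code. The breakpoints below are illustrative guesses, not the exact values plotted in Figure 36; only the shape of the computation matters here:

```python
# Straight-line (trapezoidal/triangular) membership functions for the
# temperature input. Breakpoints are hypothetical, chosen to roughly match
# the narrative (cold and warm overlap between 40 and 65 deg F).

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: rises a->b, flat b->c, falls c->d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

def mu_cold(t):  # full membership at low temperatures, falling 40->65
    return trapezoid(t, -100.0, -99.0, 40.0, 65.0)

def mu_warm(t):  # triangle peaking at 65, scope 40 to 85
    return trapezoid(t, 40.0, 65.0, 65.0, 85.0)

def mu_hot(t):   # rising 65->85, full membership above 85
    return trapezoid(t, 65.0, 85.0, 200.0, 201.0)

t = 50.0
print(mu_cold(t), mu_warm(t), mu_hot(t))  # 0.6 0.4 0.0
```

At 50°F this crisp input fuzzifies to partial membership in both cold and warm, exactly the simultaneous membership discussed for Figure 29.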

Figure 35 Fuzzy logic controller diagram for control of fan speed.


Figure 36 Membership functions for temperature input.

Now that all our linguistic variables are defined, we can write the rules. The rules are simply based on our knowledge of the system, which, for this example, we all have to some degree. We will use nine rules, shown in Table 3, to map our fuzzy inputs (temperature and humidity) into a fuzzy output (fan speed). The way the rules are currently stated, using AND, provides a minimum, since both inputs must be true. If we changed to using OR, we would get the maximum number of active rules, since either condition being true produces a nonzero rule output. For larger systems the logical operators may be combined in each rule. For two inputs and one output it is easy to develop a FAM, shown in Figure 39.

To finish this example and see how the actual procedure works, let us choose an input temperature of 80°F and a humidity of 45%. Figure 40 shows our memberships in cold, warm, and hot for our temperature input of 80°F. We have no membership in cold, 0.25 in warm, and 0.75 in hot. Performing the same process with our humidity of 45% leads to Figure 41, with degrees of membership equal to 0.2 in low, 0.8 in average, and 0.0 in high. With the two inputs defined and the membership functions calculated, we are ready to fire the rules, or perform the implication step. Using the AND operator provides the minimum outputs for this example, as shown in Table 4. After firing each rule we see that only four rules are active (4, 5, 7, and 8) and that rules 5 and 7 are mapped to the same output. If we combine rules 5 and 7 using OR (maximum), we end up with a degree of membership for medium equal to 0.25. The results from Table 4 can also be graphically illustrated using the FAM, as shown in Figure 42.
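The implication step just described can be sketched directly. The membership degrees below are the ones stated in the text for 80°F and 45% humidity (Figures 40 and 41); the variable and rule names are ours:

```python
# Fire the nine rules of Table 3 with AND = min (implication), then combine
# rules sharing a consequent with OR = max (aggregation).

temp = {"cold": 0.0, "warm": 0.25, "hot": 0.75}
humid = {"low": 0.2, "avg": 0.8, "high": 0.0}

# (temperature set, humidity set, fan-speed consequent), one tuple per rule
rules = [
    ("cold", "low", "slow"),   ("cold", "avg", "slow"),   ("cold", "high", "medium"),
    ("warm", "low", "slow"),   ("warm", "avg", "medium"), ("warm", "high", "fast"),
    ("hot",  "low", "medium"), ("hot",  "avg", "fast"),   ("hot",  "high", "fast"),
]

speed = {"slow": 0.0, "medium": 0.0, "fast": 0.0}
for t_set, h_set, out in rules:
    strength = min(temp[t_set], humid[h_set])  # AND: rule firing strength
    speed[out] = max(speed[out], strength)     # OR: aggregate per consequent

print(speed)  # {'slow': 0.2, 'medium': 0.25, 'fast': 0.75}
```

The result reproduces Table 4: only rules 4, 5, 7, and 8 fire, and the two medium consequents (0.25 and 0.2) combine to 0.25 under the maximum.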

Figure 37 Membership functions for humidity input.



Figure 38 Membership functions for fan speed output.

At this point the only item left is defuzzification of the fuzzy output to produce a crisp output value for the fan speed. As with the inputs, we have many options in how we combine the different membership functions. First, we take the values for the outputs of each membership function after firing the rules (Table 4) and overlay them on our output membership functions from Figure 38. When each membership function is clipped, the combined function becomes that shown in Figure 43. During implication, each degree of membership is used to clip the corresponding output variable: slow, medium, or fast. For aggregation we again have many options (Figure 33) for combining the three membership functions. Using the maximum (OR) for each function produces the final function given in Figure 44. To determine the final crisp output value, the goal of this entire process, we apply a defuzzification method, some of which are listed here:

- Bisector
- Centroid: often referred to as center of gravity (COG) or center of area (COA)
- Middle of maximum (MOM)
- Largest of maximum (LOM)
- Smallest of maximum (SOM)

As with the different membership functions, Matlab’s Fuzzy Logic Toolbox contains a variety of defuzzification methods, as shown in Figure 45.
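Four of these defuzzification methods can be sketched for a sampled aggregate membership function. The sampled shape below is illustrative only, not the aggregate function of Figure 44:

```python
# Defuzzify a sampled aggregate membership function mu(x) by four methods:
# centroid (center of area), and smallest/middle/largest of maximum.

def defuzzify(xs, mus, method):
    peak = max(mus)
    maxima = [x for x, m in zip(xs, mus) if m == peak]
    if method == "centroid":  # weighted average of the sampled points
        return sum(x * m for x, m in zip(xs, mus)) / sum(mus)
    if method == "som":       # smallest of maximum
        return min(maxima)
    if method == "lom":       # largest of maximum
        return max(maxima)
    if method == "mom":       # middle of maximum
        return 0.5 * (min(maxima) + max(maxima))
    raise ValueError(method)

# Fan speeds (rpm) and an illustrative clipped-and-aggregated membership:
xs  = [0, 250, 500, 750, 1000]
mus = [0.1, 0.2, 0.75, 0.75, 0.2]

for m in ("som", "mom", "lom", "centroid"):
    print(m, defuzzify(xs, mus, m))
```

Even on this toy shape the four methods disagree, which is exactly the spread between the SOM, MOM, LOM, and centroid speeds reported for the fan example below.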

Table 3 Rules for Controlling the Fan Speed

Rule no.  Description
1   IF temp is cold AND humidity is low, THEN speed is slow.
2   IF temp is cold AND humidity is avg, THEN speed is slow.
3   IF temp is cold AND humidity is high, THEN speed is medium.
4   IF temp is warm AND humidity is low, THEN speed is slow.
5   IF temp is warm AND humidity is avg, THEN speed is medium.
6   IF temp is warm AND humidity is high, THEN speed is fast.
7   IF temp is hot AND humidity is low, THEN speed is medium.
8   IF temp is hot AND humidity is avg, THEN speed is fast.
9   IF temp is hot AND humidity is high, THEN speed is fast.


Figure 39 Fuzzy association matrix (FAM) for fan speed output.

The actual output speeds that result from applying the centroid, LOM, MOM, and SOM methods are shown in Figure 46. The different output speeds, depending on the method used, are

650 rpm → smallest of maximum (SOM)
667 rpm → centroid of area (COA)
825 rpm → middle of maximum (MOM)
1000 rpm → largest of maximum (LOM)

Fortunately, the process of fuzzification, generating rules, and defuzzification can be done by computer. Matlab includes a Fuzzy Logic Toolbox with many built-in shapes and methods that allow us to quickly check the effects of different combinations. In addition to design software packages, many microprocessors are now developed and optimized for fuzzy logic control systems; the instruction sets of these chips contain many of these functions.

To conclude, let us briefly summarize the process of designing a fuzzy logic controller. First, we assume that experts are available to describe the system behavior when developing the rules and that a good mathematical model either does not exist or is too complex to implement. Next, we need to define all the input and output variables. For each input or output we need to define the quantity, shape, and overlapping areas of the respective membership functions. The quantity, shape,

Figure 40 Degrees of membership for temperature input.


Figure 41 Degrees of membership for humidity input.

and amount of overlap of the membership functions have a significant impact on the behavior of the system. Once these decisions are made, linguistic labels are defined. These should describe the ranges of the variables (i.e., cold, warm, and hot for the temperature input) such that the rules are written using language that is natural to how we describe the problem. When writing the rules for our system, we must choose the implication and composition methods using our logical operators. Most rules are written using IF-THEN statements. Since multiple rules will likely be active, we next need to define the aggregation method. For example, which medium fan speed degree of membership do we use when two are nonzero (0.2 and 0.25 in the example)? We could use the minimum, maximum, average, etc. Finally, we need to select the defuzzification method (SOM, LOM, etc.) to convert our aggregate outputs into crisp data.

When these steps are completed, programs such as Matlab allow us to sweep the inputs through several values and watch what the output becomes. For the case in this example we can also develop a surface plot where x and y are the temperature and humidity inputs and the z axis (height) is the output of our fuzzy logic controller. If possible we should simulate the system with expected input data and perform initial tuning. To implement the controller, we take our final design and compile it into machine code for operation on a microprocessor.

Table 4 Implications: Firing the Rules for Controlling the Fan Speed (Temperature = 80°F, Humidity = 45%)

Rule no.  Description
1   IF temp is cold (0.0) AND humidity is low (0.2), THEN speed is slow (0.0).
2   IF temp is cold (0.0) AND humidity is avg (0.8), THEN speed is slow (0.0).
3   IF temp is cold (0.0) AND humidity is high (0.0), THEN speed is medium (0.0).
4   IF temp is warm (0.25) AND humidity is low (0.2), THEN speed is slow (0.2).
5   IF temp is warm (0.25) AND humidity is avg (0.8), THEN speed is medium (0.25).
6   IF temp is warm (0.25) AND humidity is high (0.0), THEN speed is fast (0.0).
7   IF temp is hot (0.75) AND humidity is low (0.2), THEN speed is medium (0.2).
8   IF temp is hot (0.75) AND humidity is avg (0.8), THEN speed is fast (0.75).
9   IF temp is hot (0.75) AND humidity is high (0.0), THEN speed is fast (0.0).



Figure 42 Fuzzy association matrix (FAM) for fan speed output after firing the rules and taking the minimums (AND operator).

11.9.4 Neural Nets

Neural nets are almost always applied to nonlinear models of input-output relationships. The basic analogy they are modeled after is the way a human brain operates. Our brain (in a very simplified sense) uses interconnections between neurons, and as we learn, the weighted gain of each interconnection varies. Each interconnection begins with essentially zero weight. Thus, a neural net begins with each neuron connected to the inputs by interconnections, and as it learns, the weight on each interconnection between each input, neuron, and output is varied. Simple single-layer neural networks contain only input and output neurons and determine just the weighted gains between them. They are fairly limited, so it is common to add one or more hidden layers, as shown in Figure 47. As the number of hidden layers is increased, the number of possible connections multiplies. The sigmoid functions inside the hidden- and output-layer neurons are called activation functions and determine how the next level is activated. Other functions, like steps and ramps, are also used (they are numerically easier). There are many methods used to determine how the neural net learns; in all cases it is beneficial to begin with good estimates for a faster learning time. Gradient methods, the perceptron

Figure 43 Combined membership functions for fan speed output after firing the rules (temperature = 80°F, humidity = 45%).



Figure 44 Aggregate membership function using maximums (temperature = 80°F, humidity = 45%).

Figure 45 Defuzzification methods in Matlab (Fuzzy Logic Toolbox).

Figure 46 Results of defuzzification using various methods.


Figure 47 General configuration of neural net with hidden layer.

learning rule (in which the change in a weighting function is proportional to the error between the input and output), and least squares have all been used to structure the learning process. Where neural nets have been successful is in applications requiring a process that can learn: highly nonlinear systems control, pattern recognition, estimation, marketing analyses, and handwriting/signature comparisons. As technology develops, the number of neural net applications increases, and many noncontroller applications, such as modeling complex business or societal phenomena, are now being addressed with the concepts of neural nets.

11.9.5 Genetic Algorithms, Expert Systems, and Intelligent Control

Genetic algorithms, expert systems, and intelligent controllers are additional advanced controllers being studied and applied to a variety of systems and control processes. Specialty suites in Matlab (toolboxes) have already been developed for many of them. Genetic algorithms are well suited for situations where little or no knowledge about the process is available because they are designed to ''search'' the solution space using stochastic optimization methods. If the search stalls (i.e., in a local minimum), it can jump (mutate) to a new location and begin again. Thus, genetic algorithms are capable of searching the entire solution space with a high likelihood of finding the global optimum. They are modeled after the natural selection process, and the algorithm rewards those solutions that are ''healthy'' (evaluated using fitness functions). More recently, work has been done demonstrating that genetic algorithms can determine an optimum solution while requiring much less computational time than traditional optimization routines. Expert systems are related to fuzzy logic systems but may include more than just rules; an expert system has the ability to determine which rules actually fire and send a signal based on elaborate inference strategies.
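The forward pass through the hidden-layer network of Figure 47 amounts to repeated weighted sums passed through sigmoid activation functions. The sketch below uses made-up weights and inputs of our own choosing, purely to show the structure:

```python
# Forward pass through a tiny network: two inputs, two hidden neurons with
# sigmoid activations, one sigmoid output neuron (the Figure 47 topology
# in miniature). Weights here are arbitrary illustrative values.

import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def forward(x, w_hidden, w_out):
    # x: input list; w_hidden: one weight list per hidden neuron;
    # w_out: weights from the hidden neurons to the single output.
    hidden = [sigmoid(sum(w * xi for w, xi in zip(ws, x))) for ws in w_hidden]
    return sigmoid(sum(w * h for w, h in zip(w_out, hidden)))

x = [0.5, -1.0]                       # two inputs
w_hidden = [[0.1, 0.4], [-0.3, 0.2]]  # weights into the two hidden neurons
w_out = [0.7, -0.5]                   # hidden-to-output weights
y = forward(x, w_hidden, w_out)       # a value between 0 and 1
```

Learning then consists of adjusting `w_hidden` and `w_out` (by a gradient, perceptron, or least squares rule) until the output matches the training data.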
Intelligent controllers encompass a large branch of controllers designed to automate large processes. A recent example is the discussion of intelligent vehicle and highway systems. The knowledge base is drawn from existing human experts, documented solutions, and artificial intelligence. As with fuzzy logic and other intelligent systems, the initial strategies come from experts in the respective fields. Hopefully this chapter has stimulated further study of these (and other) advanced controllers. Remarkable advancements are made almost every day, and exciting new applications are always being developed. Many of the concepts in this chapter are founded upon the material in the previous chapters.

Senecal P, Reitz R. Simultaneous Reduction of Engine Emissions and Fuel Consumption Using Genetic Algorithms and Multi-Dimensional Spray and Combustion Modeling. CEC/SAE Spring Fuels & Lubricants Meeting and Exposition, SAE 2000-01-1890, 2000.

11.10 PROBLEMS

11.1 Briefly describe the goal of parameter sensitivity.
11.2 Feedforward controllers are reactive. (T or F)
11.3 Feedforward controllers can be used to enhance what two areas of controller performance?
11.4 Feedforward controllers change the stability characteristics of our system. (T or F)
11.5 To implement disturbance input decoupling, we must be able to ___________ the disturbance.
11.6 Describe the role of our system model when used to implement command feedforward algorithms.
11.7 Describe two possible disadvantages of using command feedforward.
11.8 When are observers required for state space multivariable control systems?
11.9 List two possible advantages of using observers.
11.10 In general, least squares system identification routines solve for the parameters of ______________ equations.
11.11 What are the primary differences between batch and recursive least squares methods?
11.12 Describe an advantage and a disadvantage of adaptive controllers.
11.13 What is the goal of an MRAC?
11.14 Why is an expert on the system being controlled a requirement for designing fuzzy logic controllers?
11.15 What are linguistic variables in fuzzy logic controllers?
11.16 What is the model for neural net controllers?
11.17 What are two advantages of genetic algorithms?
11.18 Find the transfer function GD that decouples the disturbance input from the effects on the output of the system given in Figure 48. Assume that the disturbance is measurable.

Figure 48 Problem: system block diagram for disturbance input decoupling.



11.19 Given the second-order system and discrete controller in the block diagram of Figure 49, design and simulate a command feedforward controller when the sample time is T = 0.1 sec. Use a sinusoidal input with a frequency of 0.8 Hz and compare results with and without modeling errors present. For the modeling error, change the damping from 6 to 3. Use Matlab to simulate the system.

11.20 Using the input-output data given, determine the coefficients, a and b, of the difference equation derived from a discrete transfer function with a constant numerator and first-order denominator. Use the least squares batch processing method. The model to which the data will be fit is given as

C(z)/R(z) = b/(z - a)

The recorded input and output data are as follows:

k    Input data, r(k)    Measured output data, c(k)
1    0       0
2    0.5     0
3    1       0.1813
4    1       0.5109
5    1       0.7809
6    0.4     1.0019
7    0.2     0.9653
8    0.1     0.8628
9    0       0.7427
10   0       0.6080
11   0       0.4978
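As a hint at the structure of the batch method (this is our own sketch, not the book's solution code): the model c(k) = a·c(k-1) + b·r(k-1) is stacked into a linear system Φθ = y over all samples and solved through the normal equations.

```python
# Batch least squares for a first-order difference equation
# c(k) = a*c(k-1) + b*r(k-1), using the Problem 11.20 data.

r = [0, 0.5, 1, 1, 1, 0.4, 0.2, 0.1, 0, 0, 0]
c = [0, 0, 0.1813, 0.5109, 0.7809, 1.0019, 0.9653, 0.8628, 0.7427, 0.6080, 0.4978]

# Regressor rows [c(k-1), r(k-1)] and measured outputs c(k)
phi = [[c[k - 1], r[k - 1]] for k in range(1, len(c))]
y = [c[k] for k in range(1, len(c))]

# Solve the 2x2 normal equations (Phi^T Phi) theta = Phi^T y by hand
s11 = sum(p[0] * p[0] for p in phi)
s12 = sum(p[0] * p[1] for p in phi)
s22 = sum(p[1] * p[1] for p in phi)
t1 = sum(p[0] * yi for p, yi in zip(phi, y))
t2 = sum(p[1] * yi for p, yi in zip(phi, y))
det = s11 * s22 - s12 * s12
a = (s22 * t1 - s12 * t2) / det
b = (s11 * t2 - s12 * t1) / det
print(a, b)
```

In practice the same computation is done with a matrix library (e.g., Matlab's backslash operator) rather than a hand-solved 2×2 system; the point here is only how the data rows map into the regressor matrix.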

11.21 Using the input-output data given, determine the coefficients, a1, a2, b1, and b2, of the difference equation derived from a discrete transfer function with a first-order numerator and second-order denominator. Use the least squares batch processing method. The model to which the data will be fit is given as

C(z)/R(z) = (b1 z + b2)/(z^2 - a1 z + a2)

Figure 49 Problem: command feedforward system block diagram.



The recorded input and output data are as follows:

k    Input data, r(k)    Measured output data, c(k)
1    1    0
2    1    0.7000
3    1    1.0169
4    1    1.0168
5    1    1.0010
6    0    0.9993
7    0    0.2999
8    1    -0.0169
9    1    0.6832
10   1    1.0159
11   0    1.0175

11.22 Using the definitions (membership functions, rules, etc.) for the fuzzy logic fan speed controller in Section 11.9.3.1, determine what the fan speed command would be if the humidity input is 60% and the temperature is 60°F. Approximate the fan speed for
1. LOM defuzzification
2. MOM defuzzification

12 Applied Control Methods for Fluid Power Systems

12.1 OBJECTIVES

- Develop analytical models for common fluid power components.
- Demonstrate the influence of different valve characteristics on system performance.
- Develop feedback controller models for common fluid power systems.
- Examine a case study of using high-speed on-off valves for position control.
- Examine a case study of computer control of a hydrostatic transmission.

12.2 INTRODUCTION

Fluid power systems, as the name implies, rely on fluid to transmit power from one place to another. Two common classifications of fluid power systems are industrial and mobile hydraulics. Within these categories are a variety of applications, as Table 1 shows. The general procedure is to convert rotary or linear motion into fluid flow, transmit the power to the new location, and convert it back into rotary or linear motion. The primary input may be an electric motor or a combustion engine driving a hydraulic pump; the common actuators are hydraulic motors and cylinders. The downside is that every energy conversion results in a net loss of energy, and efficiency is therefore an important consideration during the design process. Figure 1 shows the general flow of power through a typical hydraulic system, where arrows pointing down represent energy losses in our system.

The energy input and output devices primarily consist of pumps (input), motors, and cylinders. Of primary concern in this chapter is the energy control component, usually accomplished through the use of various control valves (pressure, flow, and direction). In addition to these three basic categories, many auxiliary components are necessary for a functional system. Examples include reservoirs, tubing or hoses, fittings, and an appropriate fluid. It is also usual procedure to add safety devices (relief valves) and reliability devices (filters and oil coolers). Most valves, in controlling how much energy is delivered to the load, do so by determining the amount of energy dissipated before it reaches the load. This method


Chapter 12

Table 1 Typical Applications: Industrial and Mobile Hydraulics

Industrial hydraulics:
- Machine tools (clamps, positioning devices, rotary tools)
- Assembly lines (conveyors, loading and unloading)
- Forming tools (stamping, rolling)
- Material handlers and robot actuators

Mobile hydraulics:
- Off-road vehicles (loaders, graders, material handling)
- On-road vehicles (steering, ride, dumping, compacting)
- Aerospace (control surfaces, landing gear, doors)
- Marine (control surfaces, steering)

has two negative aspects: excessive heat buildup and large input power requirements. Alternative power management techniques are discussed later in this chapter, in Section 12.6. Since using valves to control the amount of energy dissipated provides good control of our system, it remains a popular method of controlling fluid power actuators. This concept of tracking energy levels throughout our system is shown in Figure 2. We see the initial energy input provided by the pump, slight losses occurring in the hoses and fittings, energy loss over the relief valve (an auxiliary component) to provide constant system pressure, and the variable energy loss determined by the spool position in the control valve. The remaining energy is available to do useful work. The remaining sections in this chapter provide an introduction to control valves, how they are used in control systems, strategies for developing efficient and useful hydraulic circuits, and two case studies of similar applications.

12.3 OVERVIEW OF CONTROL VALVES

Valves perform many tasks in a typical hydraulic system and may be the most critical elements in determining whether or not the system achieves the goals it was designed for. This section provides an overview of different hydraulic valves and develops basic steady-state and dynamic models for popular types. Valves usually act as the hydraulic control actuator in the circuit by controlling the energy loss or flow. In an energy loss control method, the valve consumes excess power when it is not needed, as is typical of most relief valves. A valve may also act as the actuator on a variable displacement pump in a volume control strategy, as seen with a pressure compensated pump. Volume control systems are more efficient, but their initial cost is greater due to the variable displacement pump and valve.

Figure 1 Energy transmission in a typical hydraulic system.

Applied Control Methods

Figure 2 Energy flow and levels in a typical hydraulic system.

12.3.1 Terminology and Characteristics

Control valves are classified by their function, with three broad classifications:

- Pressure control;
- Flow control;
- Directional control.

These valves may or may not use feedback, either mechanical or electrical, to control the pressure, flow, or direction. This section introduces the common types for each function and describes their operation. Many of the types do use feedback and may be analyzed using the techniques from earlier chapters. Also, these valves are found in a wide variety of control system applications, some of which were described in the previous section, and this section helps us make the proper choices as to which components and circuits are appropriate for our system. References are included for those desiring further study. Although the goal of each type is regulation of pressure, flow, or direction, we will see that in practice there is also dependence on the other variables. For example, a pressure control valve will be affected by the flow rate through the valve. As demonstrated throughout the preceding chapters, closing the loop allows us to further enhance performance.

12.3.2 Pressure Control Valves

Pressure control valves regulate the amount of pressure in a system by using the pressure as a feedback variable. The feedback usually occurs internally and controls the effective flow area of an orifice to regulate the pressure. A force balance on the spool or poppet of the valve controls the orifice size. On one side of the equation is the pressure acting on an exposed area; this pressure is balanced by the compression force in the spring. The spring force can be adjusted by turning a screw. One direction of rotation causes the screw to compress the spring further, thereby requiring additional hydraulic force to overcome it. For electronically controlled valves, a solenoid provides the balancing force.
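The force balance just described, together with the orifice pressure-flow relationship used throughout this chapter, can be sketched numerically. All numbers below are made-up illustrative values, not data from the book:

```python
# Two steady-state relationships for a pressure control valve:
# (1) the spring/pressure force balance that sets the cracking pressure, and
# (2) the turbulent orifice equation Q = Cd * A * sqrt(2 * dp / rho).

import math

def cracking_pressure(k_spring, x_preload, area):
    """Pressure (Pa) at which the spring preload force k*x0 is just overcome."""
    return k_spring * x_preload / area

def orifice_flow(cd, area, dp, rho=870.0):
    """Volumetric flow (m^3/s) through an orifice with pressure drop dp (Pa)."""
    return cd * area * math.sqrt(2.0 * dp / rho)

# Hypothetical relief valve: 50 kN/m spring, 4 mm preload, 50 mm^2 poppet area
p_crack = cracking_pressure(50e3, 0.004, 50e-6)  # 4 MPa cracking pressure

# Flow through a 2 mm^2 opening with 1 MPa across it (Cd ~ 0.62, mineral oil):
q = orifice_flow(0.62, 2e-6, 1e6)                # about 6e-5 m^3/s
```

Turning the adjustment screw corresponds to increasing `x_preload`, which raises the cracking pressure, while the modulating spool or poppet position effectively varies `area` in the orifice equation.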



During actual operation, pressure control valves are in constant motion, modulating to maintain a force balance. As presented next, different valve designs have different characteristics. Pressure control valves are generally described by two models: a force balance on the spool (valve dynamics) and a pressure-flow relationship (the orifice equation).

12.3.2.1 Main Categories

Two main categories exist for pressure control valves:

- Pressure relief valves (normally closed [N.C.] valves, which regulate the upstream pressure);
- Pressure reducing valves (normally open [N.O.] valves, which regulate the downstream pressure).

The respective symbols used to describe the two broad types of pressure control valves are given in Figure 3. With the pressure relief valve, the inlet pressure is controlled by opening the return or exhaust port against an opposing force, in this case a spring; the inlet pressure acts to open the valve. For the pressure reducing valve the operation is similar, except that it is now the outlet port pressure that is controlled; the downstream pressure is used to close the valve. If the back pressure never increases, the valve remains open. Thus, the pressure relief valve controls the upstream pressure and the pressure reducing valve controls the downstream pressure. The pressure relief valve family is the more common of the two and is examined in more detail in the next section. Most valves are designed to cover a specific range of pressures; operation outside these ranges may result in reduced performance or failure.

12.3.2.2 Pressure Relief Valves

Pressure relief valves are included in almost every hydraulic control system. Two common uses for pressure relief valves are:

- Safety valve, limiting the maximum pressure in a system;
- Pressure control valve, regulating the pressure to a constant predetermined value.

Figure 3 Symbols used for pressure relief and pressure reducing valves.



When used as a safety valve, the goal is to ensure that the valve opens (and relieves the pressure) before the system is damaged. This configuration does not normally rely on the valve to modulate the pressure during normal operation. A pressure control valve is used where the system is expected to always have extra flow passing through the valve, which then maintains the desired system pressure. The valve in this configuration is constantly active during system operation. Pressure relief valves can be used to perform other functions in hydraulic circuits, but the basic steady-state and dynamic characteristics of the valves remain the same.

Ball type pressure control valves are the simplest in design but have very limited performance characteristics. As the flow increases, the ball has a tendency to oscillate in the flow stream, causing undesirable pressure fluctuations, and due to the limited damping the ball tends to keep oscillating once it has begun. These oscillations cause fluid-borne noise (pressure waves) that may ultimately cause undesirable air-borne noise. Ball type relief valves are therefore used primarily as safety type relief valves. As shown in Figure 4, the pressure acts directly on the ball and is balanced by the spring force. Once the spring preload force is exceeded, the valve opens and begins to regulate the pressure.

By changing from the ball to the poppet, as shown in Figure 5, stability is enhanced, since the poppet tends to center itself better within the flowstream. The stability improvement is evident over a wider flow range. There is still little damping in many poppet type pressure control valves due to the lack of sliding surfaces. To further enhance stability we can use guided poppet valves. Guided poppet direct-acting relief valves can pass flows with greater stability than the previous valves. The added stability comes from the damping provided by the mechanical and viscous friction associated with the guide of the poppet.
However, this design must route the relieved oil through cross-drilled passageways within the poppet. These holes, shown in Figure 6, cause a restriction, thereby limiting the flow capacity of the valve.

A primary limitation of a direct-operated poppet type relief valve is its limited capacity. This limitation occurs because the spring force must be large enough to counteract (balance) the system pressure acting on the entire ball or poppet area; in larger valves, the required spring force simply becomes unreasonably large. The differential piston type relief valve is designed to overcome this problem. While still in the poppet valve family, this design reduces the effective area upon which the pressure acts. As shown in Figure 7, the pressure enters the valve from the side and acts only on the ring area of the piston. The remaining piston area is acted upon by tank pressure. This allows the spring providing the opposing force to be sized much smaller.

Figure 4 Direct-acting ball type pressure control valve.


Chapter 12

Figure 5 Direct-acting poppet type pressure control valve.

A problem arises in this design when trying to reseat the valve. When the valve is opened and oil starts to flow over the seat, a pressure gradient occurs across the poppet surface. This creates a force tending to keep the valve open, which causes significant hysteresis in the valve's steady-state pressure-flow (PQ) characteristics. Adding the button to the base of the poppet (shown in Figure 7) improves the hysteresis by disturbing the flow path. The button captures some of the fluid's velocity head and tends to create a force helping to close the piston. Unfortunately, the button creates an additional restriction to the flow, reducing the overall flow capacity. Increasing volumetric flows demand similarly increasing through-flow cross-sectional areas and, to balance the larger forces, stronger springs. Eventually, a point is reached where these items become too large and a pilot-operated valve becomes the valve of choice. A pilot-operated valve consists of two pressure control valves in the same housing (Figure 8). The pilot section is a high-pressure, low-flow valve that controls the pressure on the back side of the primary valve. This pressure combines with a relatively light spring to oppose the system pressure acting upon the large effective area of the main stage. Several advantages are inherent to pilot-operated relief valves. First, they exhibit good pressure regulation over a wide range of flows. Second, they require only light springs, even at high pressures. Third, they tend to minimize leakage by using the system pressure to force the valve closed. During operation, when the pilot relief valve is closed (system pressure less than control pressure), equal pressures act on both sides of the main poppet. Since the poppet cavity area is greater than the inlet area, the forces keep the valve tight against the seat, thus reducing leakage. The light spring maintains contact at low pressures.

Figure 6 Direct-acting guided poppet type pressure control valve.

Applied Control Methods

Figure 7 Differential area piston type pressure control valve.

When the system pressure overcomes the adjustable spring force on the pilot poppet, a small flow occurs from system to tank through the pilot drain. Forces no longer balance on the main poppet since the pilot stage flow induces a pressure drop across the orifice, thus lowering the cavity pressure. This pressure differential across the main spool causes the spool to move, opening the inlet to tank. Once system pressure is reduced, the valve is once again closed. The valve is constantly modulating the system pressure when in the active operating region. Another class of pressure control valves, which may be direct acting or use pilot stages, is the spool type. The inherent advantage in this type of valve is that the pressure feedback controlling the valve and the flow paths are now decoupled, whereas with the poppet valves the flow path and the area upon which the pressure acted is the same. Figure 9 illustrates a basic direct acting spool type relief valve. In spool type pressure control valves, the pressure acting on an area is still balanced by a spring, but the main flow path is not across the same area. There is a piston on the spools whose lands cover or uncover ports, allowing the system pressure to be controlled. Lands and ports are discussed in more detail in later sections. Many times, sensing pistons are used to allow higher pressure ranges with reasonable spring sizes.
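The pilot-stage mechanism described above hinges on the pressure drop that the small pilot flow induces across the control orifice. Inverting the orifice relation Q = K·sqrt(dP) gives dP = (Q/K)²; the numbers below are illustrative assumptions:

```python
# Sketch of the pilot-stage pressure drop that unbalances the main poppet:
# a small pilot flow through the control orifice lowers the cavity
# pressure by dP = (Q_pilot / K_orifice)**2. Numbers are assumptions.
q_pilot = 1.0 / 60000          # pilot flow, m^3/s (1 L/min, assumed)
k_orif = 1.2e-8                # control orifice coefficient (assumed)

dp_cavity = (q_pilot / k_orif) ** 2   # cavity pressure drop, Pa

print(f"cavity pressure drop: {dp_cavity/1e6:.2f} MPa")
```

Even a 1 L/min pilot flow through a small orifice produces a pressure differential on the order of megapascals, which is what allows the lightly sprung main stage to open.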

Figure 8 Pilot-operated pressure control valve.

Figure 9 Spool type pressure control valve.

Adding a sensing piston to the direct-acting spool type relief valve allows the same pressure to be regulated with a much smaller spring. The sensing piston area and spring forces must still balance for steady-state operation. A force analysis, based on the pressure control valve model in Figure 10, demonstrates this. Since both sides of the main spool are at tank pressure, the only force the spring needs to balance is the pressure acting on the area of the sensing piston, A_P. This allows much higher control pressures with smaller springs.

12.3.2.3 Poppet and Spool Type Comparisons

Spool type relief valves are designed to overcome several of the poppet valve shortcomings. While poppet valves are relatively fast due to no overlap, short stroke lengths, and minimal mass, they tend to be underdamped. This may lead to large overshoots and oscillations when trying to maintain a particular pressure. Spool type valves can yield more precise control over a wider flow range, though at the expense of response time relative to poppet valves. A second area of difference between poppet and spool type valves is leakage. Poppet valves, by the nature of their design, have a positive sealing (contact) area and correspondingly small internal leakage flows. Spool valves, even when overlapped, have clearance between spool and bore and thus small leakage flows even in the

Figure 10 Spool type pressure control valve with sensing piston.

"closed" position. Tighter tolerances to reduce this leakage will generally lead to more expensive valves. In general, a characteristic curve may be generated for a relief valve, revealing three distinct operating regions, given in Figure 11. The first region is where the supply pressure is not large enough to overcome the spring force acting on the spool. The valve is closed, sealing the supply line from the tank line. The second region occurs when the supply pressure is large enough to overcome the spring force but not large enough to totally compress the spring. This is called the active region of the valve. In the active region, the relief valve attempts to maintain a constant system pressure at some preset value. The system should be designed to ensure that the valve operates in this region. The change in pressure from the beginning of the active region (cracking pressure) to the end is often called the pressure override. The third region occurs when the supply pressure is large enough to completely compress the spring. This only occurs when the size of the relief valve is such that it cannot relieve the necessary flow to maintain a constant pressure. When the valve is in this region, it acts as though it were a fixed orifice, and the pressure drop can be determined from the orifice equation. The different valve types discussed exhibit different steady-state and dynamic response characteristics. Spool type valves, while slightly slower, have greater damping and therefore less overshoot and more precise control. Spool type valves come the closest to approaching an ideal relief valve curve. Poppet valves open quickly but are generally underdamped and tend to oscillate in response to a step change in pressure. Pilot-operated poppet relief valves generally give better controllability than direct-acting poppet relief valves, as noted in the steady-state PQ curves in Figure 12.
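The three operating regions can be captured in a simple piecewise steady-state model. The cracking pressure, full-compression pressure, and rated flow below are illustrative assumptions; a real valve's values come from its data sheet:

```python
# Sketch of the three operating regions of a relief valve's steady-state
# characteristic: closed below cracking pressure, a linear active region
# (pressure override), then fixed-orifice behavior once fully open.
# All parameters are illustrative assumptions, not data for a real valve.
import math

P_CRACK = 10e6       # cracking pressure, Pa (assumed)
P_FULL = 12e6        # pressure at which the spring is fully compressed, Pa
Q_FULL = 2.0e-3      # flow at full opening, m^3/s
K_ORIFICE = Q_FULL / math.sqrt(P_FULL)   # fixed-orifice coefficient when saturated

def relief_flow(p):
    """Steady-state flow through the valve at supply pressure p (Pa)."""
    if p <= P_CRACK:                       # region 1: closed
        return 0.0
    if p <= P_FULL:                        # region 2: active (linear override)
        return Q_FULL * (p - P_CRACK) / (P_FULL - P_CRACK)
    return K_ORIFICE * math.sqrt(p)        # region 3: acts as a fixed orifice

for p in (8e6, 11e6, 14e6):
    print(f"{p/1e6:5.1f} MPa -> {relief_flow(p)*1000:.2f} L/s")
```

The 2 MPa span between P_CRACK and P_FULL plays the role of the pressure override; the linear active region is a rough approximation of the real, slightly curved characteristic.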
Additional valves in the pressure control category have been developed for different applications. The symbols for several of these valves are shown in Figure 13. The unloading valve is identical to the relief valve except that control pressure is sensed through a pilot line from somewhere else in the system. Therefore, flow through the valve is prevented until pressure at the pilot port becomes high enough to overcome the preset spring force. This valve may be used to unload the circuit based on events in other parts of the system. Significant power savings are possible with this type of valve since the main system flow is not dumped at high pressures

Figure 11 Operating regions of a pressure control valve.

Figure 12 Comparisons of different (typical) pressure control valves.

over the valve continuously, generating heat. A common implementation is in two-pump systems, where the pressure from a small pump is used as the pilot to the unloading valve when the large pump is not needed. Counterbalance valves are also identical to the relief valve except that they include an integral check valve for free flow in the reverse direction, so the downstream port is not connected to tank. Counterbalance valves are commonly used to maintain back pressure on vertically mounted cylinders. As the cylinder is raised, the flow passes through the check valve into the cylinder. If the cylinder begins to lower, the valve maintains a back pressure and prevents the cylinder from falling. Finally, sequence valves are also identical to the relief valve except that an external drain line must be connected to tank from the spring chamber, because the sequence valve's downstream port may be pressurized. Sequence valves are used as priority valves when more than one actuator is necessary for a particular circuit. Typical applications include the sequential extension of two cylinders, where the first one is fully extended before the second begins to extend. When the pressure on the primary actuator is great enough (after a cylinder has stalled and stopped moving), the valve opens and provides power to a second actuator in the system.

Figure 13 Symbols for several additional types of pressure control valves.

12.3.2.4 Pressure-reducing Valves

Pressure-reducing valves are used in hydraulic circuits to supply more than one operating pressure. In operation they are similar to a typical pressure control valve except that the downstream pressure, not the upstream pressure, is used to control the poppet or spool position. They fall in the category of pressure control valves. The valve is normally open (N.O.), and the downstream pressure closes the valve when the spring force is overcome. The general symbol and basic valve types are summarized in Figure 14. The typical configurations are similar to other valves in the pressure control valve family and might include the following:

Direct acting or pilot operated;
Poppet or spool;
Built-in check valve for free reverse flow.

Pressure-reducing valves exhibit some slight differences when compared to other pressure control valves. With pilot-operated pressure-reducing valves there is a continuous flow through the orifice to the tank; thus inserting a pilot-operated pressure-reducing valve in the circuit incurs a continuous energy loss. Also, pressure-reducing valves require a separate connection to tank (i.e., three connections). Finally, there is an energy cost associated with using a pressure-reducing valve, since the lower pressure is achieved by dissipating energy from the fluid. All pressure control valves achieve control by dissipating energy from the fluid. Alternative techniques, presented in later sections, exhibit much higher system efficiencies.

12.3.3 Flow Control Valves

Flow control valves are constructed using components similar to those described for pressure control valves. Construction may be based on needle, gate, globe, or spool valves. In most cases the pressure is still the feedback variable, but instead of the system pressure, the pressure drop across an orifice is held constant. If the valves are not pressure compensated, the flow will vary whenever the load changes. Since the flow varies with the square root of the pressure change, this may be an acceptable

Figure 14 Direct-acting spool type pressure-reducing valve.

trade-off in some situations. If better flow regulation is desired, the valve should be pressure compensated. Several further options are available for flow control valves: a reverse-flow check valve integral with the valve body, a built-in overload pressure relief valve, or temperature compensation. Temperature compensated valves adjust the orifice size based on the temperature of the fluid, providing a fairly constant flow in the presence of both load and temperature changes. An inline pressure compensated flow control valve is shown in Figure 15. In this valve, where the orifice is downstream of the spool, a decrease in flow results in a reduced pressure drop and the forces no longer balance on the spool. The spool moves to the left and increases the orifice area until the flow increases and the pressure forces balance. If the flow increases, the pressure drop also increases and the spool begins to close, thus reducing (correcting) the flow. This is the negative feedback component in the valve. Most valve designs include a damping orifice in the pressure feedback line to control stability. Reducing the size of this orifice adds damping to the system, increasing stability but reducing the response time. It is also possible to use a bypass pressure compensated flow control valve instead of the inline valve. A bypass pressure compensated flow control valve, shown in Figure 16, dumps excess flow from the circuit to maintain constant flow to the load. In the bypass flow control method an increase in flow causes the spool to open more and bypass enough flow to keep the load flow constant. Flow control valves can be modeled using the same relationships as for pressure and directional control valves. The primary difference is that the flow is related to a pressure feedback variable.
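The square-root sensitivity of a non-compensated valve can be quantified directly from Q = K·sqrt(dP). The coefficient and pressures below are illustrative assumptions:

```python
# Sketch (with assumed numbers) of how the flow through a non-compensated
# orifice varies as the load pressure changes, using Q = K * sqrt(dP).
import math

K = 2.0e-7            # orifice coefficient, m^3/s per sqrt(Pa) (assumed)
p_supply = 21e6       # supply pressure, Pa (assumed)
loads = [0.0, 7e6, 14e6]

# Flow for each load pressure; dP is the drop across the orifice.
flows = [K * math.sqrt(p_supply - p) for p in loads]
for p, q in zip(loads, flows):
    print(f"load {p/1e6:4.1f} MPa -> flow {q*60000:.1f} L/min")
```

Note that cutting the pressure drop by two-thirds only reduces the flow by about 42%, which is why the square-root relationship is sometimes an acceptable trade-off and sometimes not.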
An ideal flow control valve would behave as shown in Figure 17, where each horizontal line represents a different valve setting. Most valves are set to have a constant pressure drop of 40 to 100 psi across the control orifice. To operate effectively, then, the valve must be chosen such that the minimum desired flow is within the operating range of the valve. In addition to the low flow limit, flow control valves will also have a high flow limit, similar to other valve classes. Once the inline or bypass path is fully open, the flow can no longer be regulated and the valve acts like a fixed orifice in the system.

Figure 15 Pressure compensated flow control valve.

Figure 16 Bypass pressure compensated flow control valve.

12.3.4 Directional Control Valves

12.3.4.1 Basic Directional Valve Nomenclature

Directional control valves constitute the third major class of valves to be examined. As the name implies, these valves direct the flow to various paths in the system. There are many different configurations available and different symbols for each type. In general, there are common elements in each symbol, making understanding the valve characteristics fairly straightforward. Common categories used in describing directional control valves include the following:

Construction type (spool, poppet, and rotary);
Number of positions;
Number of ways;
Number of lands;
Center configuration;
Valve driver class.

The positions of a directional control valve are of two kinds: infinitely variable and distinct. This difference is reflected in the symbols given in Figure 18. Distinct position valves have more limited roles in control systems since the valve position cannot be continuously varied. The number of ways a valve has is equal to the number of flow paths. Common two-way and four-way valves are shown in Figure 19. Center positions are commonly added to describe the valve characteristics around null spool positions. The

Figure 17 PQ characteristics of a flow control valve.

Figure 18 Number of positions in a directional control valve.

number of lands varies from one on the simplest valves to three or four on common valves and five or six on more complex valves. Each land is like a piston mounted on a central rod that slides within the bore of the valve body. The rod and pistons together are called a spool, hence spool type valves. As the spool moves, the lands (or pistons) cover and uncover ports to provide passageways through the valve body. Common two-, three-, and four-land valve symbols are shown in Figure 20. The center configuration is one of the most important characteristics of directional control valves used in hydraulic control systems. There are three common classifications of center configurations:

Under lapped (open center);
Zero lapped (critical center);
Over lapped (closed center).
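The effect of lap on the flow gain near null can be sketched with a simple normalized model. The deadband behavior of an overlapped valve (and the linear gain of a critical center valve as the zero-overlap case) follows directly; the overlap fractions used are illustrative:

```python
# Sketch of no-load flow versus spool position for critical center
# (overlap = 0) and overlapped (overlap > 0) valves. Positions and flows
# are normalized to full stroke; overlap values are illustrative.
def no_load_flow(x, overlap=0.0):
    """Normalized flow vs. spool position x in [-1, 1] with a symmetric
    overlap (deadband) expressed as a fraction of full stroke."""
    if abs(x) <= overlap:
        return 0.0                                  # inside the deadband
    mag = (abs(x) - overlap) / (1.0 - overlap)      # rescale active range
    return mag if x > 0 else -mag

# Critical center: linear through the origin.
print(no_load_flow(0.5, overlap=0.0))
# Overlapped: no flow until the 10% deadband is crossed.
print(no_load_flow(0.05, overlap=0.1))
```

This piecewise-linear model ignores leakage and underlap effects; it is only meant to show why a critical center valve admits a linear model while an overlapped valve introduces deadband around null.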

An under lapped, or open centered, valve has limited use because a constant power loss occurs in the center position. In addition, an under lapped valve will have lower flow gain and pressure sensitivity. Open center valves are more common in mobile hydraulics where they are used to provide a path from the pump to reservoir when the system is not being used (idle times). This provides significant power savings since the pump is not required to produce flow at high pressure. The flow paths are always open as shown in Figure 21. A zero lapped, or critical center, valve has a linear flow gain as a function of spool position. This requires that the lands be very slightly over lapped to account for spool to bore clearances. This configuration is typical for most servovalves and is shown in Figure 22. The critical center valve is attractive for implementing in a control system since a linear model can be used with good results. In addition, response times can be faster than closed center valves since any spool movement away from center immediately results in flow. In general, critical center valves will be between 0% and 3% overlap, with most being less than 1% (quality servovalves). An over lapped valve has lands wider than the ports and exhibits deadband characteristics in the center position, as shown in Figure 23. Although there is overlap with the spool, even in the center position there is a leakage flow between ports

Figure 19 Number of ways in a directional control valve (two-way and four-way).

Figure 20 Number of lands in a directional control valve.

Figure 21 Open center configuration (directional control valve).

Figure 22 Critical center configuration (directional control valve).

Figure 23 Closed center configuration (directional control valve).


due to clearances needed for the spool to move. Although minimal, this leakage may have a great effect on stopping a load. Proportional valves generally exhibit varying degrees of overlap, and the amount of overlap is generally related to the cost. In addition to these, there are many specialty configurations designed for specific applications. One of the most common is the tandem center valve, often used to unload the pump at idle conditions while blocking the work ports and holding the load stationary: unloading the pump provides energy savings, while blocking the work ports holds the load stationary. The graphic symbol is shown in Figure 24. There are many other center types, including blocking the P port and connecting A and B to tank in the center position. This is a common type where the valve is the first stage actuator for a larger spool; the center type allows the large spool to center itself (spring centered, no trapped volume) when the smaller first stage valve is centered. Additional center types allow for motor freewheeling, different area ratios for single-ended cylinders, etc. The different specialty centers are often designed using grooves cut into the valve spool. By changing the size and location of the grooves, the different center types and ratios can be designed into the valve operation. The grooves are also used to shape the metering curve and are a factor in determining the flow gain of the valve.

12.3.4.2 On-Off Directional Control Valves

On-off directional control valves are commonly two or three position valves, with direct actuation or pilot stages, and with or without detents to keep the valve open. Since they are designed to be either open or closed, they do not provide variable metering and cannot be used to control the acceleration/deceleration and velocity of the load. This severely limits their use as a control device in a hydraulic control system. On-off valves can be used to discretely control cylinder position and may be used quite effectively with simple limit switches in some repetitive motions. They are among the cheapest directional control valves and typically have opening times between 20 and 100 msec. Since they do not open and close instantaneously, it is difficult to achieve accurate cylinder positioning, even with feedback. When modeling a system using on-off valves, several assumptions can be used to simplify the design. Generally, the acceleration and deceleration periods are quite short and the actuator operates at its slew velocity for the majority of its motion. If the dynamic acceleration/deceleration phases need to be changed, the valves can use a small orifice plug to limit the rate of spool/poppet travel when activated, thus affecting the system dynamics. The slew velocity is a function of system pressure, valve size, and required load force/torque. Knowing the approximate load force allows the valve to be sized according to the required cycle time.
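The sizing logic above, where the actuator spends nearly all of its travel at slew velocity, reduces to simple arithmetic. The stroke, rated valve flow, bore area, and opening time below are all assumed for illustration:

```python
# Hedged sizing sketch for an on-off valve application: approximate move
# time assuming the actuator travels at slew velocity for essentially the
# whole stroke. All numbers are assumptions, not data for a real system.
stroke = 0.30          # cylinder stroke, m (assumed)
q_valve = 40 / 60000   # rated valve flow, m^3/s (40 L/min, assumed)
area = 2.0e-3          # cylinder bore area, m^2 (assumed)

v_slew = q_valve / area            # slew velocity, m/s
t_open = 0.05                      # valve opening time, s (20-100 ms is typical)
t_move = stroke / v_slew + t_open  # one extend move, ignoring accel phases

print(f"slew velocity {v_slew:.3f} m/s, move time {t_move:.2f} s")
```

If the computed move time exceeds the required cycle time, a larger valve (higher rated flow) is chosen and the calculation repeated.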

Figure 24 Tandem center configuration (directional control valve).

Due to their limitations, on-off directional control valves are seldom used in applications where accurate control of the load is required. More time in this section is spent discussing proportional and servo directional control valves, while a case study using on-off valves in place of directional control valves is presented later.

12.3.4.3 Proportional Directional Control Valves

Proportional valves represent the next level of performance and now allow us to control the valve spool/poppet position and thus meter the flow to the load. It is now possible to control the acceleration/deceleration and velocity of the load. Once again, the same terminology applies. Proportional valves may be direct-acting, lever or solenoid actuated, include one or more pilot stages, and involve various center configurations. Lever actuated valves, common in mobile hydraulics, may use different specialty centers and grooves to provide the desired metering characteristics. When discussing electronic control systems, the most common form of actuation is the electric solenoid, either directly connected to the main spool or acting on a pilot stage. There are three common configurations using electric solenoids, whether direct acting or piloted. First, a single electrical solenoid is connected to one side of the spool and a spring connected to the other. As long as the flow forces are much smaller than the spring force, the solenoid is approximately proportional and spool position follows the solenoid current. In a typical configuration, the symbol of which is given in Figure 25, this implies that current is required to keep the valve centered and a power failure will allow the valve to fully shift in one direction. In some systems this is a positive feature for safety reasons. The second type uses two solenoids and two springs, shown in Figure 26. When the valve is shut down or in the case of a power failure, the valve returns to the center position. To maintain proportionality, larger springs may be used. However, larger springs lead to larger solenoids and slower response times. Finally, proportional directional control valves may incorporate electronic spool position feedback to close the loop on spool position, as shown in Figure 27. This leads to several advantages. 
Light springs, used primarily to center the valve, suffice because the feedback keeps the valve behaving more linearly without relying on heavier springs. The valve can also be set to respond faster (perhaps at the expense of a slight overshoot) and thus exhibit better dynamic characteristics. Since the spool position is controlled, the amplifier card requires only a small command signal, and the spool position becomes proportional to it. Of course, once feedback is added, the problem of stability must be considered; different measures of stability are discussed in later sections. Once electrical actuation is provided, many additional options are available. For example, it is now quite simple to implement ramp functions by changing the profile of the input signal. Additional advantages are listed below:

Figure 25 Single solenoid with spring return (proportional valve).

Figure 26 Double solenoid with centering springs (proportional valve).

Solenoids may be actuated using pulse-width modulation (PWM), thus saving in amplifier circuit cost;
The controller automatically adjusts for solenoid resistance changes (as temperature increases);
Electronic implementation of various performance enhancements, such as valve gain, deadband compensation, ramping functions, and outer control loops (position, pressure, etc.), is possible.

In recent years the quality of proportional valves has steadily improved, and many valves are now considered "servo" grade. Some servo grade proportional valves incorporate a single solenoid with spool position feedback and achieve good response times. Electronically, the center position can be better controlled, allowing the use of minimal spool overlap. The next section takes a brief look at servovalves and how they compare with and differ from proportional valves.

12.3.4.4 Servovalves

To begin with, let us examine the advantages and disadvantages of typical servovalves relative to proportional valves, presented in Table 2. Although Table 2 clearly shows many advantages for servovalves, two important items prohibit their widespread use, especially in mobile applications: cost and sensitivity to contamination. Where the best control system performance is required, as in aerospace and high-performance industrial applications, servovalves are the most common choice. In general, where an outer loop is closed to control position, velocity, or force (or other variables), servovalves provide the best performance. When the system is ultimately controlled open loop, as with an operator using a hydraulic lift attachment, proportional valves will generally suffice and in many conditions perform better due to contamination problems. Various configurations of servovalves have been produced over the years, but most designs may be classified as either flapper nozzle or jet pipe variants. A small torque motor provides the electromechanical conversion, and hydraulic amplification is used to quickly move the valve spool. The number of stages varies depending on the application. A common two-stage flapper nozzle cutaway is shown in Figure 28. When the torque motor is not energized (current = 0), the flapper is

Figure 27 Double solenoid with centering springs and spool position measurement for electronic feedback (proportional valve).

Table 2 Comparison of Servovalve and Proportional Valve Characteristics (comparisons for typical valves)

Characteristic             Servovalves         Servo grade          Typical              Basic
                                               proportional valves  proportional valves  proportional valves
Amplifier                  Linear              Linear or PWM        Linear or PWM        PWM
Bandwidth (Hz)             60–400              40–100               10–40                < 10
Contamination sensitivity  Very high           High                 Medium               Low
Cost                       Very high           High                 Medium               Low
Feedback                   Internal pressure   External LVDT        Sometimes            No
Hysteresis                 < 1%                < 1%                 < 5%                 2–8%
Spool deadband             < 1%                1–5%                 5–25%                > 25%

centered between the two nozzles and equal flows are found on the left and right outer paths. Since each path has an equal orifice, the pressure drops are the same and each end of the spool sees equal pressure. The spool remains centered. With a counterclockwise torque, the right nozzle outlet area is decreased while the left nozzle outlet area is increased. This results in a reduced right-side flow and an increased left-side flow. Recalling the fixed orifices, the right side (reduced flow) has less of a pressure drop than the left, and the pressure imbalance accelerates the spool to the left. A thin feedback wire connecting the spool and flapper creates a correcting moment that balances the torque motor. In this fashion the spool position is always proportional to the torque motor current (after transients decay). The advantage of this system is that the electrical actuator (torque motor) needs only a small force change that is immediately amplified hydraulically. The resulting large hydraulic force rapidly accelerates the spool, leading to high bandwidths. Since the valve depends on two orifices and small nozzles, it is very sensitive to contamination. In addition, there is a constant leakage flow through the pilot stage whenever pressure is applied to the valve. Most servovalves include internal filters to further protect the valve. Servovalves are a good example of using simple mechanical feedback devices (the feedback wire) to significantly improve a component's performance. Because the servovalve is itself a feedback control system, the same issues of stability must be addressed during the design process.

Figure 28 Two-stage flapper nozzle servovalve.

The design of the servovalve results in two advantages over typical proportional valves regarding use in closed loop feedback control systems. First, the hydraulic amplification that takes place in the servovalve enables it to have greater bandwidths than proportional valves. Second, they are usually held to tighter tolerances and designed to be critically centered with zero overlap of the spool. As the next section illustrates, this leads to significant performance advantages and minimizes the nonlinearities.
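The closed-loop spool positioning described above, whether implemented electronically (Figure 27) or mechanically via the feedback wire, can be sketched as a simple discrete-time simulation. The integrator plant, proportional gain, drive saturation, and time constant below are illustrative assumptions, not parameters of any real valve:

```python
# Minimal sketch of closed-loop spool position control: a proportional
# controller drives a saturating actuator whose output sets the spool
# velocity (an integrator plant). All gains and time constants are
# illustrative assumptions.
def simulate_spool(cmd=0.5, kp=2.0, tau=0.01, dt=0.001, steps=400):
    """Return the final normalized spool position for a step command."""
    x = 0.0                         # spool position (normalized)
    for _ in range(steps):
        u = kp * (cmd - x)          # proportional controller: gain * error
        u = max(-1.0, min(1.0, u))  # actuator drive saturates
        x += dt * u / tau           # integrator plant: velocity = u / tau
    return x

print(f"final spool position: {simulate_spool():.3f}")
```

Because the plant is an integrator, proportional feedback alone drives the steady-state error to zero; raising kp speeds the response but, in a real valve with additional dynamics, eventually costs stability, which is why stability must be addressed in the design.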

12.4 DIRECTIONAL CONTROL VALVE MODELS

This section seeks to develop basic valve models that enable the designer to accurately design a valve controlled hydraulic system and predict its performance. Many aspects of valve design (spool and bore finishes, materials, groove selection, etc.) are not covered and require much more depth to model accurately. The goal of this section is to develop basic steady-state and dynamic models that correlate component geometry to hydraulic performance. With the resulting equations, a fairly good estimate of valve behavior can be obtained while minimizing tedious modeling efforts. Advanced techniques utilizing computational fluid dynamics, finite element analysis, and computer simulation are being used to develop more detailed models.

12.4.1 Steady-State PQ in Directional Control Valves

Steady-state flow equations and generalized performance characteristics of directional control valves are presented in this section using a four-way, infinitely variable, two-land, critically lapped valve. The basic model consists of a variable orifice at each flow path, as shown in Figure 29. We can remove the mechanical structure of the valve and draw the orifices using electrical analogies, as shown in the circuit given in Figure 30. If we rotate and connect the identical tank (T) ports together, the circuit becomes the basic bridge circuit. It is common to many problems in engineering, and a variety of solution techniques have been developed. Further refinements, taking into account the hydraulic relationships, allow the valve model to be obtained easily. Regardless of the spool valve center configuration, once the spool is shifted only two lands become active and the models become very similar. In this region of operation the orifice PQ equations provide good models of these characteristics in spool type valves. However, as will be noted, the performance around null for the different center configurations varies greatly and has a large effect on overall system

Figure 29 Flow paths and coefficients in a four-way directional control valve.

Figure 30 Circuit model for a four-way directional control valve.

performance. Therefore, it is important to remember that the equations developed in this section are valid only in the active region of the valve and not near null spool positions. The bridge circuit of Figure 30 can be modified to reflect the physical properties of the valve. Arrows are drawn through each orifice since they are variable, depending on the spool area and position. The orifices are also rigidly connected and move together, illustrated by the lines connecting each orifice. Under steady-state conditions the compressibility flows are zero and the law of continuity must be satisfied. The updated bridge circuit is given in Figure 31. The bridge circuit can now be solved using Kirchhoff's laws on the loops and nodes. Kirchhoff's current and voltage laws are commonly used to solve similar electrical circuits. In summary, the current law states that the sum of all currents, or flows, must equal zero at each node. The voltage law states that the sum of all voltage drops, or pressure drops, around a closed loop must equal zero. These equations, along with the physical parameters of the valve, provide enough relations to solve simultaneously for each unknown. The equations resulting from the application of Kirchhoff's laws and the orifice equation are used to write a general equation describing steady-state valve behavior. To begin with, let us sum the flows at each node and apply the law of continuity:

Node S: Q_S = Q_PA + Q_PB
Node A: Q_PA = Q_L + Q_AT
Node B: Q_PB = Q_BT - Q_L

Figure 31 Bridge circuit model for a four-way directional control valve (including the load and coefficients).


Chapter 12

Node T: $Q_T = Q_{AT} + Q_{BT}$
Continuity: $Q_S = Q_T$

Now, sum the pressure drops around the three loops. The outer loop is formed by treating the difference between the supply and tank pressures as the source, in the same way that a voltage supply would be inserted into the circuit.

Outer loop (w/ supply): $P_S - P_T = \Delta P_{PA} + \Delta P_{AT}$
Upper inner loop: $\Delta P_{PA} = \Delta P_{PB} - P_L$
Lower inner loop: $\Delta P_{AT} = P_L + \Delta P_{BT}$

The flows can be related to the pressures using the orifice equation for each orifice in the valve:

$Q_{PA} = K_{PA}\sqrt{\Delta P_{PA}} \qquad Q_{PB} = K_{PB}\sqrt{\Delta P_{PB}}$
$Q_{AT} = K_{AT}\sqrt{\Delta P_{AT}} \qquad Q_{BT} = K_{BT}\sqrt{\Delta P_{BT}}$

The result yields a steady-state flow equation as a function of valve size and load pressure. This is called the PQ characteristic of the valve. Fortunately, there are several simplifications that can be taken advantage of. If the valve is critically or over lapped, then once the spool is shifted only two lands are active, and we can assume the leakage flow through the other ports to be negligible. In this case the following is true:

$Q_{PA} = Q_L = Q_{BT}$ or $Q_{PB} = Q_L = Q_{AT}$

In addition, if the valve is assumed to be symmetrical, then

$K_{PA} = K_{PB}$ and $K_{AT} = K_{BT}$

Finally, if the valve has matched orifices (equal metering), then

$K_{PA} = K_{PB} = K_{AT} = K_{BT}$

Now we can write the common equations that result. If we assume that we have a symmetrical valve and that the tank pressure $P_T = 0$, we can write

$Q_V = K_V\sqrt{\Delta P_V}$, where $\Delta P_V = (P_S - P_A) + (P_B - P_T)$

and

$P_L = P_A - P_B \qquad P_S = P_A + P_B$
$P_A = \dfrac{P_S + P_L}{2} \qquad P_B = \dfrac{P_S - P_L}{2}$
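As a quick numerical check, the symmetric-valve pressure relations above can be exercised with assumed supply and load pressures (all values below are illustrative only):

```python
# Numerical check of the symmetric-valve bridge relations (assumed values).
PS = 210.0   # supply pressure [bar] (assumed)
PT = 0.0     # tank pressure [bar]
PL = 70.0    # load pressure [bar] (assumed)

# Load-port pressures split around PS/2 for a symmetric valve with PT = 0.
PA = (PS + PL) / 2.0
PB = (PS - PL) / 2.0

assert PA - PB == PL     # P_L = P_A - P_B
assert PA + PB == PS     # P_S = P_A + P_B

# Total valve pressure drop across both active lands.
dPV = (PS - PA) + (PB - PT)
print(PA, PB, dPV)  # 140.0 70.0 140.0
```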

To complete the analysis, we will use the basic orifice equation with the results from above and define a percent-open relationship such that

$Q = C \cdot A_{FO} \cdot \dfrac{A}{A_{FO}} \cdot \sqrt{\Delta P}$

where $A_{FO}$ is the area of the flow path with the spool fully open. Then the total valve coefficient $K_V$ can be defined as

$K_V = C \cdot A_{FO}$

and furthermore

$-1 \le \dfrac{A}{A_{FO}} \le 1$

The area ratio is dimensionless and represents the amount of valve opening. The equation can be further simplified by defining the percentage of spool movement, x, as

$x = \dfrac{A}{A_{FO}}, \qquad -1 \le x \le 1$

x can be treated as a dimensionless control variable representing the full range of spool movement. When x = 1, the valve is fully open and we can find the general valve coefficient. For open and closed center valves, this equation is true only while the valve is in the active operating region.

$Q = K_V \cdot x \cdot \sqrt{\Delta P}$; when fully open, $x = 1$, so $K_V = \dfrac{Q_{FO}}{\sqrt{\Delta P_{FO}}}$

Since valves are generally rated at fully open conditions for flow at a rated pressure drop, the final substitution using the rated flow and pressure, $Q_r$ and $\Delta P_r$, leads to

$K_V = \dfrac{Q_r}{\sqrt{\Delta P_r}}$

In the above equations $K_V$ is the valve coefficient for the entire valve, since the model accounted for the total valve pressure drop. Two lands are active when the valve is shifted, and the total pressure drop must be split between the two. Since the same flow rate is seen by each active land, a symmetrical valve has equal pressure drops. The following analysis allows both coefficients to be determined. The reduced problem consists of the two valve orifices and the load orifice, as shown in Figure 32. Using the Kirchhoff voltage law analogy (pressure drops) and the flow relationship for each pressure drop allows us to relate the rated flow to each pressure drop:

$\Delta P_r = \Delta P_{PA} + \Delta P_{BT} = \dfrac{Q_r^2}{K_{PA}^2} + \dfrac{Q_r^2}{K_{BT}^2}$

Now we can factor out $Q_r^2$ and divide through both sides, resulting in

$\dfrac{\Delta P_r}{Q_r^2} = \dfrac{1}{K_{PA}^2} + \dfrac{1}{K_{BT}^2}$

This is simply the parallel resistance law (where the resistance analog is $1/K^2$). Now, take the reciprocal of both sides:

Figure 32 Bridge circuit for determining the valve coefficients.


$\dfrac{Q_r^2}{\Delta P_r} = \dfrac{K_{PA}^2 \, K_{BT}^2}{K_{PA}^2 + K_{BT}^2}$

When we take the square root of both sides, we obtain a familiar term on the left-hand side of the equation:

$\dfrac{Q_r}{\sqrt{\Delta P_r}} = \dfrac{K_{PA} \, K_{BT}}{\sqrt{K_{PA}^2 + K_{BT}^2}}$

Comparing the equation with our initial definition for the valve coefficient allows us to relate the total valve coefficient to the individual orifices as

$K_V = \dfrac{Q_r}{\sqrt{\Delta P_r}} = \dfrac{K_{PA} \, K_{BT}}{\sqrt{K_{PA}^2 + K_{BT}^2}}$

The value for $K_V$ just developed is for one direction of spool movement. Assuming the valve is symmetrical and defining a valve coefficient for each direction of flow through the valve allows the final parameter to be defined. Thus, the final general representation of the PQ characteristics of a directional control valve, where $K_V$ is the valve coefficient for both active lands, is given as

$Q = K_V \cdot x \cdot \sqrt{P_S - P_L - P_T}$

and if $P_T \approx 0$, then

$Q = K_V \cdot x \cdot \sqrt{P_S - P_L}$

These definitions will be helpful when simulating the response of valve controlled hydraulic systems, and many of the intermediate equations are used when calculating valve coefficients, individual orifice pressure drops, and so on. Although a relatively simple equation can be used to describe the steady-state behavior of the valve, we need linear equations describing the PQ relationships if we wish to use the standard dynamic analysis methods from earlier chapters. The linear coefficients may be obtained by differentiation of the PQ equations (see Example 2.5) or graphically from experimentally developed plots, as the next several sections demonstrate.

12.4.1.1 PQ Metering Characteristics for Directional Control Valves

The PQ curve is produced when x is held constant at each of several values between +1 and -1. It can be plotted using the valve equation once the valve coefficient is known:

$Q_L = K_V \cdot x \cdot \sqrt{P_S - P_L - P_T}$

From the equation it is easy to see that the flow goes to zero as the load pressure approaches the supply pressure. At this condition there is no longer a pressure drop across the valve, and thus no flow through it. Since $P_L$ is varied and appears under the square root, we get a nonlinear PQ curve. Plotting the equation as a function of load pressure, $P_L$, and load flow, $Q_L$, produces the PQ characteristic curves of the valve given in Figure 33. Each line represents a different spool position, x. Several interesting points can be made from the PQ metering curve. First, once the required load pressure, $P_L$, approaches the system pressure, the load flow, and thus the actuator movement, becomes zero. Second, even when there is no load pressure requirement (i.e., retracting a hydraulic cylinder), there is a finite velocity, called the slew velocity, because of the pressure drops across the valve. Finally, the

Figure 33 PQ metering curve for a directional control valve.

load flows continue to increase when a positive x and a negative load are encountered. This is called an overrunning load and will be discussed further in later sections. Overrunning loads are prone to cavitation and excessive speeds. These data can also be obtained from laboratory tests using a circuit schematic such as the one given in Figure 34. Obtaining experimental data verifies the analytical models and is often the easiest way to get model information for the design of feedback control systems.

12.4.1.2 Flow Metering Characteristics for Directional Control Valves

Complementing the valve PQ characteristics are the flow metering characteristics. While the PQ curves allow the valve coefficients to be determined, they do not display the linearity of the valve metering. The three curves in Figure 35 illustrate the typical steady-state flow metering plots for underlapped, critically lapped, and overlapped spool-type valves, illustrating the linearity (or lack thereof) of each type.
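A short sketch (with assumed coefficients and pressure drop) shows how flow metering data can be generated from the valve equation, and that the flow gain is constant in the active region:

```python
import math

# Build K_V from assumed per-land coefficients using
# K_V = K_PA*K_BT / sqrt(K_PA^2 + K_BT^2), then generate metering data
# with the valve pressure drop held constant, as in the metering test.
K_PA = K_BT = 2.0                      # per-land coefficients (assumed units)
K_V = K_PA * K_BT / math.sqrt(K_PA**2 + K_BT**2)

dP = 35.0                              # constant valve pressure drop [bar] (assumed)
xs = [i / 10 for i in range(-10, 11)]  # spool positions from -1 to +1
flows = [K_V * x * math.sqrt(dP) for x in xs]

# In the active region the curve is linear in x; the slope is the flow gain.
flow_gain = K_V * math.sqrt(dP)
assert abs((flows[-1] - flows[0]) / (xs[-1] - xs[0]) - flow_gain) < 1e-9
print(round(K_V, 3), round(flow_gain, 3))  # 1.414 8.367
```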

Figure 34 Example circuit schematic to measure a PQ metering curve for a directional control valve.


Figure 35 Flow metering curve for different center configurations of directional control valves.

Experimentally, the same test circuit given in Figure 34 is used for measuring the flow metering characteristics. The variable load valve (orifice) allows different valve pressure drops, developing a family of flow metering curves for the valve. The flow metering curve gives additional information and allows us to determine the:

- Pressure drops across the lands;
- Valve coefficients;
- Flow gain;
- Deadband;
- Linearity;
- Hysteresis.

The equation used to generate the flow metering data is identical to the one used for the PQ curve, but in developing the flow metering plots the pressure drop across the valve is held constant and the valve position is varied. Additional information about the valve linearity is available since the nonlinear term in the equation is held constant. This implies that in the active region of a valve we should see linear relationships. The flow metering plot also highlights the differences between valve types (servovalve, proportional valve, open center) and alerts the designer to the quality of the valve. The PQ curves will look similar for similarly sized valves regardless of the valve overlap, since the curves are generated with the valve in the active region and only two orifices are active for each type of valve. The valve coefficients are available from both plots. An example flow metering plot from a typical proportional valve is given in Figure 36, showing the various operating regions. As expected, the plots are fairly linear in the active region. The designer of a control system that uses proportional valves should choose a valve size that enables all the desired outputs to be achieved while the valve operates in the active region.

12.4.1.3 Pressure Metering Characteristics for Directional Control Valves

The pressure metering characteristics of a valve apply within the null zone of the valve. They are important in determining the valve's ability to maintain position under changing loads. If a critically centered valve were ideal, that is, had no internal leakage, then the pressure metering curve would be a straight vertical line. But practical valves have radial clearances that allow leakage flow to move from the supply to the work ports, from the work ports to tank, and

Figure 36 Flow metering curve for a typical closed center directional control valve.

even from the supply directly to tank. The pressure metering curves for actual directional control valves therefore have a slope, as depicted in Figure 37. The pressure metering curve fills in the missing gaps and provides information on valve behavior in the deadband region, where the load flow is zero. The deadband region may be less than 3% for typical servovalves and up to 35% for some proportional valves. This characteristic is critical to control system behavior for such tasks as positioning a load with a cylinder and maintaining position under changing loads. This is because, while we are controlling the position of a cylinder, for example, holding a constant position should result in no load flow through the valve. Thus, it is the pressure metering curve along which the valve operates while maintaining a constant position. If a valve has a large deadband, the spool must travel across it before a complete pressure reversal can take place to hold the load. The slope of the curve in Figure 37 is generally designated the "null pressure gain" or "pressure sensitivity" of the valve. As the valve overlap is increased, the pressure gain is decreased. For this reason servovalves, with their minimal spool overlap, are capable of much better results in position control systems.

Figure 37 Pressure metering curve for a typical closed center directional control valve (no load flow through valve).

12.4.2 Deadband Characteristics in Proportional Valves

As we found in the previous section, proportional valves may exhibit significant deadband. Even though the lines between servovalves and proportional valves are blurring, we must compare candidates carefully when choosing the valve for our system. If we wish to see how deadband affects the system, we can simulate it by inserting a deadband block into our block diagram using a program such as Simulink, the graphical interface to Matlab. We normally end up with two options: use a deadband eliminator, discussed below, or design the system so that it does not normally operate within the deadband. For example, if we wish to do position control of a hydraulic cylinder, then by definition a constant command value requires zero valve flow for a constant position. Any flow through the valve will cause the piston to move, so we are requiring the valve to "constantly" operate in the deadband region. For a slight correction to be made, the valve must travel through the deadband before a change is seen in the output. In contrast, velocity control requires a constant flow through the valve for a constant velocity. This is a great setup for a proportional valve with deadband, since the valve is always operating in the active "linear" range. A servovalve still has the additional advantage of a larger linear range, but at additional cost. By simulating the system, we can easily check the effects of varying deadband. If it is necessary to use a valve with significant deadband in a control application requiring constant operation in and near the deadband, an alternative that increases performance is a deadband eliminator. With regard to position control again, we wish to have zero flow through the valve to maintain a constant position. The goal of a deadband eliminator is to never let the valve spool sit inside the deadband.
Hence the spool continually jumps from one side of the deadband to the other, always ready to produce flow with a new command. This can be accomplished by having a much higher gain within the deadband region so that any change in the command immediately moves the spool to the beginning of the active range. The effect can be seen by monitoring the spool position with an oscilloscope while turning the deadband eliminator on and off. The limitation is physics: even though we try to make the spool move quickly from one side of the deadband to the other, the spool has a finite mass and limited actuation force available. It will always take time to move through the deadband and will never fully approximate the behavior of a quality critically lapped valve. That being said, there are significant cost savings in using proportional valves, and deadband eliminators do increase the performance. Analog deadband eliminators can also be duplicated in digital algorithms. The concept is the same: do not let the spool remain in the deadband. When implemented digitally, this may take several forms. The first possibility is to map the valve command signal into the active range of the valve. This is quite simple and can be accomplished by adding the width of the deadband to the output signal, with the sign of the error determining the direction. Thus, if our valve signal is ±10 V, of which ±3 V is deadband, an error of 0.1 might correspond to 3.1 V being the command to the valve. The upper and lower limits for the controller become ±7 V. No matter what the error is, the command to the valve is never inside the deadband of the valve.
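A minimal sketch of this digital mapping, using the assumed ±10 V signal range and ±3 V deadband from the example above:

```python
# Digital deadband-elimination mapping (assumed +/-10 V range, +/-3 V deadband).
DEADBAND = 3.0   # V, half-width of the valve deadband (assumed)
U_MAX = 10.0     # V, full valve signal range (assumed)

def eliminate_deadband(u_ctrl):
    """Map a controller output in [-7, +7] V onto the valve's active range."""
    if u_ctrl == 0.0:
        return 0.0
    # Clamp to the reduced controller range, then jump across the deadband.
    u = max(-(U_MAX - DEADBAND), min(U_MAX - DEADBAND, u_ctrl))
    return u + DEADBAND if u > 0 else u - DEADBAND

print(eliminate_deadband(0.1))   # 3.1  (just past the deadband edge)
print(eliminate_deadband(-0.1))  # -3.1
print(eliminate_deadband(7.5))   # 10.0 (saturates at the full signal)
```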


Another method would be to increase the proportional gain whenever the controller output would normally correspond to a position inside the deadband of the valve. While this provides a proportional signal passing through the deadband (smoother than the previous method), it is slightly more difficult to implement. A problem with both methods is the introduction of nonlinearities/discontinuities and a higher likelihood of making the spool oscillate. As in the continuous method, the laws of physics still apply: even though the valve is never commanded to be in the deadband, it still takes finite (i.e., measurable) time to pass through it.

12.4.3 Dynamic Directional Control Valve Models

The approach taken when developing dynamic models for directional control valves falls between two end points: a purely analytical (white box) approach and a purely experimental (black box) approach. The appropriateness of each method depends on the goal for the model being developed. If the goal is the design of the valve itself, treated as its own system, the analytical approach has many advantages, since we have access to the coefficients in the model and can easily perform iterations to optimize the design. The tools to develop such a model have been presented in earlier chapters and include Newtonian physics, energy methods, and bond graphs. The law of continuity and similar laws are also required. The problem with developing analytical models for systems as complex as valves is that either the math becomes very tedious (including nonlinearities, temperature effects, etc.) or we make so many assumptions that the model is only useful around a single operating point. An increasing number of object-level computer modeling packages help to develop such models. The other end point is to strictly measure input and output characteristics and find a model that fits the data.
Three methods presented in earlier chapters allow us to do this: step response plots (Sec. 3.3.2), frequency response plots (Sec. 3.5.3), and system identification routines (Sec. 11.6). Step response plots are primarily limited to first- and second-order systems, while Bode plots can often be used to determine models for higher order systems. The system identification routines can be applied to higher order models, and different model structures can be tried until an acceptable fit is found. The primary disadvantage of black box approaches is the inability to extend the model into other operating ranges by varying parameters within the model. For example, if the spool mass is decreased, another experimental plot would be needed, whereas in the analytical model the change can be made and the simulation simply repeated. For many control systems the valve models can be obtained experimentally (or from manufacturer's data), since the valve is just one component of many. Our goal is generally not to design the valve but to verify how it performs (or would perform) in our control system. Aspects of both approaches may also be combined, and this is in general recommended: develop the basic analytical model from known physics and then use experimental data to fit the coefficients. This gives us confidence in the model and still allows us to extend it into additional regions of operation. In conclusion, given the importance of the valve in determining the characteristics of our system, we should attempt to have accurate and realistic models when developing the controller.
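As one example of the black box route, the standard second-order step-response relations can back out a model from measured overshoot and peak time. The sketch below uses assumed measured values purely for illustration:

```python
import math

# Fit a second-order model from assumed step-response measurements.
overshoot = 0.25   # 25% peak overshoot (assumed measurement)
t_peak = 0.02      # s, time of first peak (assumed measurement)

# Damping ratio from Mp = exp(-zeta*pi / sqrt(1 - zeta^2)).
ln_mp = math.log(overshoot)
zeta = -ln_mp / math.sqrt(math.pi**2 + ln_mp**2)

# Natural frequency from t_p = pi / (wn * sqrt(1 - zeta^2)).
wn = math.pi / (t_peak * math.sqrt(1.0 - zeta**2))

print(round(zeta, 3), round(wn, 1))  # damping ratio and wn [rad/s]
```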

12.5 STEADY-STATE PERFORMANCE OF DIRECTIONAL CONTROL VALVE AND CYLINDER SYSTEM

Once the individual models are developed, the task becomes combining them to achieve a specific performance goal. In steady-state operation this is fairly straightforward, and stability issues are not addressed. The analysis will use a four-way directional control valve controlling a single-ended cylinder (unequal areas). This is a very general approach and is easily transferred to different system configurations. The pump will not specifically be addressed here, since in a steady-state analysis it is only required to be capable of providing the necessary flow and pressure. Thus the pump sizing can be completed after analyzing the valve-cylinder interaction, when the desired piston speed, force capabilities, and valve coefficients are known. In general, the valve-cylinder model is simplified, since a shift in the spool position away from null effectively closes two of the four lands (orifices). This leaves a simpler model where the flow is affected by the spool position and cylinder load. At this operating condition the valve-cylinder model reduces to that shown in Figure 38. The basic cylinder model and notation were introduced earlier in Section 6.5.1.1. The load is always assumed to be positive when it resists motion (otherwise it is termed overrunning). Using the cylinder force, flow equations, and valve orifice equations allows the model to be developed. Recognize that this model assumes one direction of motion, with the valve shifted in a single direction. The identical method is used to develop the equation describing the retraction of the cylinder. The basic cylinder force equation can be given as follows:

$P_A A_{BE} - P_B A_{RE} - F_L = m\dfrac{dv}{dt} = m\dfrac{d^2 x}{dt^2}$

If we assume that the acceleration phase is very short relative to the total stroke length (which it usually is, even though during the acceleration phase the inertial forces may be very large), the acceleration can be set to zero, and in steady-state form the equation becomes

$P_A A_{BE} - P_B A_{RE} - F_L = 0$

Figure 38 Simplified valve-cylinder system.

Now we can describe the pressure drops and flows in terms of the force and velocity. First, define the pressure drops in the system: PA ¼ PS  PPA

and

PB ¼ PBT

(assuming PT ¼ 0Þ

Now the pressure drops across the valve can be described using their orifice equations: PPA ¼

Q2PA 2 x2 KPA

PBT ¼

Q2BT 2 x2 KBT

Remember that x represents the percentage that the valve is open (only in the active region) as defined in the directional control valve models (Sec. 12.4.1). The cylinder flow rates, assuming no leakage within the cylinder, are defined as dPBE dt dP  RE dt

QS ¼ v  ABE þ CBE  QT ¼ v  ARE þ CRE

where C is the capacitance in the system. If compressibility is ignored or only steadystate characteristics examined, the capacitance terms are zero and the flow rate is simply the area times the velocity for each side of the cylinder. It is important to note that the flows are not equal with single-ended cylinders as shown above. For many cases where the compressibility flows are negligible the flows are simply related through the inverse ratio of their respective cross-sectional areas. In pneumatic systems the compressibility cannot be ignored, and constitutes a significant portion of the total flow. If compressibility is ignored the ratio is easily found by setting the two velocity terms equal, as they share the same piston and rod movement. QBE ¼

ABE Q ARE RE

If we combine the flow and pressure drop equations with the initial force balance, we can form the general valve-cylinder equation:

$P_S A_{BE} - \dfrac{v^2 A_{BE}^3}{x^2 K_{PA}^2} - \dfrac{v^2 A_{RE}^3}{x^2 K_{BT}^2} - F_L = 0$

We can simplify the equation by combining the velocity terms. This results in the final equation describing the steady-state extension characteristics of a valve controlled cylinder:

$P_S A_{BE} - \dfrac{v^2}{x^2}\left(\dfrac{A_{BE}^3}{K_{PA}^2} + \dfrac{A_{RE}^3}{K_{BT}^2}\right) - F_L = 0$

Remember that this is for extension only; when the cylinder direction is reversed, the pump flow enters the rod end. The same procedure can be followed to derive the valve controlled cylinder equation for retraction:

$P_S A_{RE} - \dfrac{v^2}{x^2}\left(\dfrac{A_{BE}^3}{K_{AT}^2} + \dfrac{A_{RE}^3}{K_{PB}^2}\right) - F_L = 0$
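The extension equation can be solved directly for the steady-state velocity at a given load force and spool position. A sketch with assumed (illustrative) parameters:

```python
import math

# Steady-state extension velocity from
#   P_S*A_BE - (v^2/x^2)*(A_BE^3/K_PA^2 + A_RE^3/K_BT^2) - F_L = 0.
# All parameter values below are assumed for illustration.
P_S  = 21.0e6    # supply pressure [Pa]
A_BE = 2.0e-3    # bore-end area [m^2]
A_RE = 1.0e-3    # rod-end area [m^2]
K_PA = 2.0e-6    # land coefficient [m^3/s per sqrt(Pa)]
K_BT = 2.0e-6

GEOM = A_BE**3 / K_PA**2 + A_RE**3 / K_BT**2

def extension_velocity(F_L, x=1.0):
    """Piston velocity for load force F_L [N] and spool fraction x in (0, 1]."""
    avail = P_S * A_BE - F_L    # net force available above the load
    if avail <= 0.0:
        return 0.0              # at or beyond the stall force: no motion
    return x * math.sqrt(avail / GEOM)

print(extension_velocity(0.0))          # slew speed (zero load, x = 1)
print(extension_velocity(P_S * A_BE))   # 0.0 at the stall force
```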


Many useful quantities can be determined from the final equations by rearranging them and imposing certain conditions. The stall force is the maximum force available to move the piston and occurs when the cylinder velocity is zero. Since cylinder movement begins from rest, the maximum force is available to overcome static friction effects. The stall forces can be calculated as

Extension: $P_S A_{BE} = F_{L,ext}\big|_{v=0}$
Retraction: $P_S A_{RE} = F_{L,ret}\big|_{v=0}$

Since the supply pressure acts on the larger (bore end) area during extension, the maximum possible force is larger than for retraction, where the rod area subtracts from the available area. A second operating point of interest is the slew speed, occurring where the load force is equal to zero. This is the maximum available velocity, unless the cylinder load is overrunning. Setting the load force equal to zero and x equal to one (100% open for maximum velocity) produces the following equations:

Extension: $v_{slew} = \sqrt{\dfrac{P_S A_{BE}}{\dfrac{A_{BE}^3}{K_{PA}^2} + \dfrac{A_{RE}^3}{K_{BT}^2}}}$

Retraction: $v_{slew} = \sqrt{\dfrac{P_S A_{RE}}{\dfrac{A_{BE}^3}{K_{AT}^2} + \dfrac{A_{RE}^3}{K_{PB}^2}}}$

It is useful to develop a curve from the resulting equations, as they represent the end points of normal operation. For cylinder extension, we get the plot in Figure 39. This curve may be produced by two primary methods. If all valve and cylinder parameters, and the supply pressure, are known, then the curve can be generated easily with a computer program such as a spreadsheet. If the system is in the process of being designed, then it is useful to know that the shape of the curve is a parabola in the first quadrant. The final curve, then, including the effects of overrunning loads

Figure 39 Valve controlled cylinder performance curve (extension).

and negative load forces and cylinder velocities (extension and retraction directions), is given in Figure 40. The outermost line, when the valve is fully open, represents the operating envelope of system performance. Where the lines fall between the axes and the outermost limit is determined by the spool position, x, in the valve. Each constant value of x produces a different line. The outermost envelope can only be enlarged by changing the valve coefficients or cylinder areas or by increasing the supply pressure. The goal in using these equations during the design process is to properly choose and size the hydraulic components that will enable us to meet our performance goals. Remember that if a system is physically incapable of a response, it does not matter what controller we use; we will still not achieve our goals.

12.5.1 Design Methods for Valve Controlled Cylinders

In beginning a new design, first list all design parameters already defined. These might include supply pressure (existing system), cylinder or valve parameters, necessary performance points (force and velocity operating points), and so forth. Pick the remaining components using an initial best guess and check to see whether the requirements are met. Iterate as necessary. For specific scenarios, several methods can be outlined. When either one or two operating points are specified on the operating curve, it is necessary to write the valve-cylinder equation for the desired points, determine the remaining ratios and parameters, and pick or design a valve or cylinder providing those features. Once the supply pressure, cylinder area, and valve coefficients are known, the system may be analyzed at any arbitrary point within the operating envelope. Another interesting design is for maximum power. Since the cylinder output power equals the velocity times the force, the valve-cylinder equation may be solved for the force, multiplied by the speed, and differentiated to locate the maximum power

Figure 40 Operating curves for valve controlled cylinders.

operating point. In doing this, it can be shown that the maximum power point occurs at a load force equal to two thirds of the stall force. The power curve is added to the valve controlled cylinder curve in Figure 41. Thus, to design for maximum power, choose a stall design force of 1.5 times the desired operating load force. When a similar analysis is completed to determine the minimum supply pressure necessary to meet a specific operating point, the same criterion is found once again: designing for maximum power results in requiring the minimum supply pressure. Remember that the above equations assume a pump capable of supplying the required pressure and flow rate.
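The two-thirds result is easy to confirm numerically from the extension curve. The sketch below assumes a stall force and lumped coefficient term purely for illustration:

```python
# Locate the maximum-power load force on v(F) = sqrt((F_stall - F)/G), x = 1.
F_STALL = 42000.0   # N, assumed stall force (P_S * A_BE)
G = 2250.0          # lumped area/coefficient term (assumed)

def power(F):
    """Cylinder output power: load force times steady-state velocity."""
    return F * ((F_STALL - F) / G) ** 0.5

# Coarse numerical search over the normal operating range.
forces = [F_STALL * i / 10000 for i in range(10001)]
F_best = max(forces, key=power)

print(round(F_best / F_STALL, 3))  # 0.667 -> two thirds of the stall force
```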

Modeling a Valve Controlled Position Control System

In the previous section we developed the basic steady-state models for hydraulic valves and cylinders. The models, although useful for sizing and choosing the proper components, are unwieldy for use in the design of our control system. It is more common to develop a block diagram for the design of the control system. This section develops the linearized models for the block diagram of our valve controlled system. Referring to Example 2.5, where the directional control valve model was linearized, we obtained the linear equation around the operating point as

$Q = \dfrac{\partial Q}{\partial x}\,x + \dfrac{\partial Q}{\partial P_L}\,P_L = K_x x - K_p P_L$

$K_x$ is the slope of the active region in the valve flow metering curve (see Figure 36) at the operating point, and $K_p$ is the slope of the valve PQ curve (see Figure 33) at the operating point. The piston flow is related to the cylinder velocity as

$Q = vA = A\dfrac{dx}{dt}$

Figure 41 Maximum power in a valve controlled cylinder system.


Notice that this assumes a double-ended piston with equal areas, A. Subsequent analyses will remove this (and the linearity) assumption. For now, we let $P_L$ be the pressure across the piston and include damping b to sum the forces on the cylinder, where

$\sum F = m\dfrac{d^2x}{dt^2} = P_L A - b\dfrac{dx}{dt}$

The damping, b, arises from the friction associated with the load and the cylinder seal friction. To form the block diagram, we first solve the valve flow equation for $P_L$:

$P_L = \dfrac{K_x}{K_p}\,x - \dfrac{1}{K_p}\,Q$

These three equations are used to form the system portion of our block diagram. $P_L$ is the output of the summing junction, x and Q are the inputs to the summing junction, and the force equation provides a transfer function relating $P_L$ to x. The basic block diagram pictorially representing these equations is given in Figure 42. Although we have linearized the model, we can still add nonlinear blocks using simulation packages like Matlab/Simulink. If we incorporate the valve and cylinder model into a closed loop control system, there are several useful blocks that we can add. First, we need to generate a command signal for the system. Simulink contains several blocks for this purpose; common ones include the Signal Generator, Constant, Pulse Train, and Repeating Sequence. It is easy to feed back the cylinder position and add another summing junction, the output of which is our error and the input into our controller. As with the signal blocks, there are numerous controller choices in Simulink. The standard PID (and a version with an approximate derivative) is an option, along with function blocks, fuzzy logic and neural net blocks, and a host of others. By adding a zero-order hold (ZOH) block we can also simulate a digital algorithm. The output of the controller would go to a valve amplifier card, providing the position input for the valve. Since the valve has its own dynamics, we can add a transfer function relating the desired valve command to the actual valve position. As mentioned previously, these transfer functions can be obtained from analytical or experimental (step response, Bode plot, or system identification) methods. Finally, it is important to model the deadband and saturation limits of the valve, both readily obtained from Figure 36. When these are added to the model, as shown in the Simulink model given in Figure 43, we can quickly compare different component characteristics and different controllers.
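The same loop can also be sketched outside Simulink. The short Euler simulation below wires the three linearized equations to a proportional controller with spool saturation; every numerical value is assumed for illustration only:

```python
# Euler simulation of the linearized valve-cylinder position loop.
# All parameter values are assumed, illustrative numbers.
Kx, Kp = 2.0e-3, 1.0e-11       # valve flow gain and PQ-curve slope
A, m, b = 1.0e-3, 50.0, 400.0  # piston area [m^2], mass [kg], damping [N s/m]
Kc = 50.0                      # proportional controller gain
dt, T = 1.0e-5, 1.0            # time step and simulated duration [s]

x_cmd = 0.1                    # commanded piston position [m]
pos = vel = 0.0
for _ in range(int(T / dt)):
    xv = Kc * (x_cmd - pos)          # controller output -> spool position
    xv = max(-1.0, min(1.0, xv))     # spool saturation at full open
    Q = vel * A                      # piston flow (continuity)
    PL = (Kx / Kp) * xv - Q / Kp     # linearized valve equation
    acc = (PL * A - b * vel) / m     # force balance on the piston
    vel += acc * dt
    pos += vel * dt

print(round(pos, 3))  # settles at the 0.1 m command
```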
In general, we can get the required parameters from manufacturer’s data and estimate the performance that different valves and cylinders might have in our

Figure 42 Block diagram model—cylinder position control with valve.


Figure 43 Simulink model—valve control of cylinder with deadband and saturation effects.

system. The valve dynamics, in this case represented by a second-order transfer function, may take on different forms, depending on the modeling method used. If there are dominant complex conjugate roots, the second-order transfer function works well. If we are unable to obtain the data, we can secure a valve and generate its performance curves with simple experiments. When the final design is chosen, the system can be tested with a variety of controllers and preliminary tuning values obtained. Step inputs, ramp inputs, and sinusoidal inputs may all be checked to verify the chosen controller and gain set points. To verify the simulation, it is wise to build and test the circuit, if possible. If the circuit is not identical, at least the model can be changed to reflect the components used in the test. This verifies the model structure and lends confidence to using the model in other designs.

12.5.3 Cylinder Position Control Using a Digital Controller

In this section we will take our analog PID position control system utilizing a directional control valve and hydraulic cylinder, given earlier in Figure 43, and modify it to simulate the addition of a digital PI controller. The digital controller will be interfaced to the continuous system model using the ZOH model of a D/A converter. Simulink includes a ZOH block. Once the model is constructed we can easily perform simulations without having to build the actual system. It is still based on linearized equations and subject to the accompanying limitations. The PI incremental difference equation, developed in Section 9.3.1, is

u(k) = u(k-1) + K_p [e(k) - e(k-1)] + K_i (T/2) [e(k) + e(k-1)]

We can collect common terms, convert to the z-domain, and represent the PI digital algorithm as the sum of two transfer functions:

U(z)/E(z) = K_p + K_i (T/2) (z + 1)/(z - 1)

This is easy to implement in a block diagram, as shown in Figure 44, where the discrete equivalent and a ZOH model have replaced our original analog controller. The feedback signal is not sampled on the diagram but is inherently sampled in Simulink when using the discrete transfer functions. If the resolution of our A/D or D/A converters is an issue, we can also add a quantization block in Simulink and verify the effects on our system.
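The incremental PI law above translates directly into code. A minimal sketch follows; the gains and sample time used in the usage comment are placeholders, not values from the text.

```python
# Incremental (velocity-form) PI algorithm:
# u(k) = u(k-1) + Kp*[e(k) - e(k-1)] + Ki*(T/2)*[e(k) + e(k-1)]

class IncrementalPI:
    def __init__(self, kp, ki, T):
        self.kp, self.ki, self.T = kp, ki, T
        self.u_prev = 0.0   # u(k-1)
        self.e_prev = 0.0   # e(k-1)

    def update(self, e):
        """Advance one sample period with the current error e(k)."""
        u = (self.u_prev
             + self.kp * (e - self.e_prev)
             + self.ki * (self.T / 2.0) * (e + self.e_prev))
        self.u_prev, self.e_prev = u, e
        return u

# With Kp = 2, Ki = 1, T = 0.1 and a constant unit error, the integral
# term ramps the output by Ki*T each period after the initial Kp step.
pi = IncrementalPI(kp=2.0, ki=1.0, T=0.1)
print(pi.update(1.0))   # 2.05
print(pi.update(1.0))   # 2.15
```

The trapezoidal (Tustin) weighting of the error pair is what produces the (z + 1)/(z - 1) factor in the z-domain form above.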

Applied Control Methods

Figure 44  Simulink model—digital position control at valve and cylinder.

If desired, we could just as easily have implemented the actual difference equations using the function block. The difference equation for a PI controller, after collecting the e(k) and e(k-1) terms, is

u(k) = u(k-1) + (K_p + K_i T/2) e(k) + (K_i T/2 - K_p) e(k-1)

In the Simulink function block this is represented by

u(1) + (Kp + Ki*T/2)*u(2) + (Ki*T/2 - Kp)*u(3)

where u(i) is the ith input to the function block and u(1) = u(k-1), u(2) = e(k), and u(3) = e(k-1). From the block diagram in Figure 45 we see how the delay blocks are used to hold the previous error and controller output for use in the difference equation.

One of the more interesting differences with the digital controller is the effect that sample rate has on stability. To highlight this effect, Figure 46 gives the responses to a step command with sample times of 0.1, 0.8, and 1.6 sec when the proportional gain is held constant at Kp = 5. The same procedure can be repeated with proportional gain as the variable, leaving the sample time T equal to 0.1 sec. Adjusting the proportional gain to values of 5, 25, and 50 produces the response curves given in Figure 47. The response plots clearly distinguish marginal stability caused by sample time from marginal stability resulting from excessive proportional gain: the longer sample time tends to push the roots out the left side of the unit circle in the z-plane, while the increasing proportional gain pushes them out the right-hand side.

These simple examples illustrate the benefits of simulating a system. Many simulations are possible in the space of several minutes. Although the usefulness

Figure 45  Difference equation implementation in Simulink.

Figure 46  Effects of sample time on system stability.

to determine the gain required in the actual real-time controller is limited by the accuracy of our model, which, being linearized about an operating point, is not accurate over all operating ranges, the ability to perform what-if scenarios for all variables in the model is extremely beneficial. For example, once the model is built, we can easily determine the feasibility of using the same valve to control the position of a different load simply by changing the parameters in the physical system block. Also, by simply changing the feedback from position to velocity, an entirely different capability can be examined.
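The sample-time effect on stability can be previewed with a back-of-the-envelope pole calculation. Assuming, for illustration, a pure-integrator valve/cylinder plant with velocity gain Kv under proportional control and ZOH (Kv = 0.25 is an assumed value, not from the text), the closed-loop pole is z = 1 - Kv*Kp*T, so lengthening T walks the pole out the left side of the unit circle, consistent with the step responses of Figure 46.

```python
def closed_loop_pole(kp, T, kv=0.25):
    """Pole of x(k+1) = x(k) + kv*T*u(k) with u(k) = kp*(r - x(k)):
    closed loop x(k+1) = (1 - kv*kp*T)*x(k) + kv*kp*T*r."""
    return 1.0 - kv * kp * T

for T in (0.1, 0.8, 1.6):
    z = closed_loop_pole(kp=5.0, T=T)
    status = "stable" if abs(z) < 1.0 else "marginal/unstable"
    print(f"T = {T:4.1f} s -> pole z = {z:+.3f} ({status})")
```

For this assumed plant the loop is stable only for T < 2/(Kv*Kp); at T = 1.6 s the pole sits at z = -1, mirroring the sustained oscillation seen at the longest sample time in Figure 46.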

Figure 47  Effects of proportional gain on system stability.

Finally, these same methods can be applied to directional control valves used to control rotary hydraulic actuators, usually in the form of a hydraulic motor. Instead of the pressure acting on an area to produce a force, the pressure acts on the motor displacement to produce a torque. The output of the system transfer function, Ω/P = 1/(Js + b), is now angular velocity instead of linear velocity. Velocity is also the feedback signal to be controlled in most rotary systems.
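As a quick numerical check of this rotary model, the first-order response of Ω/P = 1/(Js + b) to a pressure step can be integrated directly; the J, b, and step values below are illustrative, not from the text.

```python
def motor_speed_step(p_step=1.0, J=0.5, b=2.0, t_end=2.0, dt=1e-4):
    """Euler-integrate J*dw/dt + b*w = p_step; returns the final speed."""
    w = 0.0
    for _ in range(int(t_end / dt)):
        w += (p_step - b * w) / J * dt
    return w

# Steady-state speed approaches p_step / b, with time constant J / b.
print(motor_speed_step())
```

The first-order pole at s = -b/J is the rotary counterpart of the linear cylinder dynamics used earlier in the chapter.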

12.6 DESIGN CONCEPTS FOR FLUID POWER SYSTEMS

A brief introduction to common fluid power circuits and strategies is presented to help us implement a system with the correct physics, one that allows our controller to function properly. This means, as the preceding sections demonstrated, choosing the correct valves, cylinders, motors, etc., to meet the force and speed requirements of our system. Of particular interest are the dynamics associated with each component, since when the loop is closed we must address the issue of stability. Systems that meet our requirements and are properly designed are often described as robust. Robust systems exhibit good tracking performance, reject disturbance inputs, are insensitive to modeling errors, have good stability margins, and are not sensitive to transducer noise. Designing a control system is generally a trade-off between these items. As we increase tracking performance, we generally decrease stability unless techniques like feedforward compensation are used. Disturbance inputs are amplified when the physical system has a naturally high gain, which is the case in most hydraulic systems. Modeling errors continue to plague hydraulics due to the many complex components with nonlinearities. Even the oil viscosity is a logarithmic function of temperature. If oil viscosity changes cause problems, a gain scheduling scheme may be used; this is much easier in the digital domain. Some basic considerations for designing hydraulic control systems are listed below.

- Keep all hoses as short as possible to maintain system stiffness.
- Size valves to operate in the middle of their active range.
- Correctly size hoses and fittings to reduce unnecessary losses.
- Use current signals if long wiring paths are used.
- Use caution when throttling inputs to reduce cavitation problems.

Designing (or choosing) the correct combination of components has a large effect on the operating characteristics and potential of the system. Each method has different advantages and disadvantages, especially in terms of efficiency, speed, force, and power.

12.6.1 Typical Fluid Power Circuit Strategies

In virtually all fluid power circuits the fundamental goal is to deliver power to the load, providing useful work. Since the product of pressure and flow is power, we essentially have three strategies for controlling the delivered power:

1. Control the pressure with a constant flow;
2. Control the flow with a constant pressure;
3. Combination pressure and flow control.
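Since the product of pressure and flow is power, the power delivered under any of these strategies reduces to one line of arithmetic. The helper below uses the common bar and L/min engineering units; the example numbers are illustrative.

```python
def hydraulic_power_kw(pressure_bar, flow_lpm):
    """Hydraulic power P = p * Q.
    In SI: P [W] = p [Pa] * Q [m^3/s]; with p in bar (1e5 Pa) and
    Q in L/min (1/60000 m^3/s), the conversion collapses to /600 for kW."""
    return pressure_bar * flow_lpm / 600.0

# 200 bar at 60 L/min delivers 20 kW.
print(hydraulic_power_kw(200, 60))
```

Holding either factor constant and modulating the other, or modulating both, is exactly the distinction between the three strategies listed above.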


The choice of method depends in large part on the goals of the system. Efficiency can be increased, but usually at a greater upfront cost. The expected load cycles will play a role in determining the appropriate circuit. Circuits with long idle times should not be required to input full power at all times; instead, a system that provides only the power required by the load is desired. Besides these considerations, the experience of the designer and the availability of components place practical constraints on the choice of circuits and strategies.

Some of the simplest circuits are designed to control the pressure with a constant flow being produced by the pump. In this case we have a fixed displacement (FD) pump providing a constant flow and some arrangement of valves controlling the pressure and where the flow is directed. An example constant flow circuit is given in Figure 48. The flow from the pump is either diverted over the relief valve, delivered to the load, or delivered back to the reservoir when the tandem center valve is in the neutral position. This circuit has advantages over one with a closed center valve, since during idle periods (no power required at the load) the tandem center valve allows the pump to maintain a constant flow but at a significantly reduced pressure. A closed center valve requires all the flow to be passed through the relief valve, and the power dissipated is large. While an open center valve provides the same power savings at idle conditions, it will not lock the load in a fixed position as the tandem center valve does. The valve coefficients and cylinder size can be chosen to meet the force and velocity requirements of our load using the method in Section 12.5. One problem with using tandem center valves is that they limit the effectiveness of using one pump to provide power for two loads, as shown in Figure 49.
As long as one valve is primarily used at a time, the circuit works well and provides the same power savings at idle conditions. However, since the valves are connected in series, once one cylinder stalls, the other cylinder also stalls. The next class of circuits involves varying the flow to control the power delivered to the load. Once again, power savings are available when using these types of circuits, although the initial cost of the system will be greater. There are two primary methods of varying the flow based on the system demand: accumulators and pressure-compensated pumps. Accumulators come in a variety of sizes and ratings and generally are one of two types: piston and bladder. For both types the energy storage takes place in the compression of a gas, usually an inert gas such as nitrogen. Their electrical

Figure 48  Constant flow circuit (FD pump and tandem center valve).

Figure 49  Constant flow circuit (FD pump and dual tandem center valves).

analogy is the capacitor, and they are used in similar ways: to provide a source of energy and to maintain relatively constant pressure levels in the system. An example circuit using an accumulator is given in Figure 50. If we know the required work cycle of the actuator, as is the case in many industrial applications, the accumulator allows the pump to be sized for the average required power while the accumulator averages out the high and low demand periods. This provides significant power savings. An example would be a stamping machine, where the times for extension and retraction are known along with the time it takes to load new material onto the machine. The peak power requirement can be very large even though the average required power is much less. Notice that a check valve and unloading valve are used

Figure 50  Variable flow circuit using accumulator, unloading valve, and closed center control valve.

to provide additional power savings during long idle periods. Once the relief pressure is achieved, the unloading valve opens and allows the pump to discharge flow at a much lower pressure. The check valve prevents fluid from flowing back into the pump. We are also required to use a closed center valve, since an open center (or tandem) valve in the neutral position would allow the accumulator to discharge.

A second way to achieve variable flow in our circuit is through the use of a variable displacement (VD) pump. When the loop is closed internally in the pump, the system pressure can be used to de-stroke the pump and reduce the flow output. This feedback mechanism and variable displacement pump combine to make a pressure compensated pump. The ideal pressure compensated pump, with no losses and a perfect compensator, exhibits the performance shown in Figure 51, acting as an ideal flow source until it begins to pressure compensate and as an ideal pressure source in the active (compensating) region. In reality there will be losses in the pump, and the curve shows a slight decrease in flow as the pressure increases. In the compensating range the operating curve gradually increases in pressure as the flow decreases, due to the additional compression of the compensator spring and the smaller pressure drops associated with the flow through the pump. Since power equals pressure times flow, when we are able to operate in the compensating region we save significant power: only the flow required to maintain the desired pressure is produced by the pump. An example circuit using a pressure compensated pump and closed center valve is shown in Figure 52.

There are many subsets of the circuits presented thus far, and a sampling of them is presented in the remainder of this section. The ones presented are not comprehensive and only serve to illustrate the many ways that hydraulic control systems can be configured and optimized for various tasks.
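The stamping-machine argument made earlier for accumulators is easy to quantify: the pump need only supply the duty-cycle average power, with the accumulator buffering the peak. A sketch with hypothetical cycle numbers:

```python
def average_power_kw(phases):
    """phases: list of (duration_s, power_kw) segments of one work cycle.
    Returns the time-averaged power the pump must supply."""
    total_t = sum(t for t, _ in phases)
    return sum(t * p for t, p in phases) / total_t

cycle = [(1.0, 30.0),   # press stroke: 30 kW peak for 1 s
         (1.0, 5.0),    # retract: 5 kW for 1 s
         (8.0, 0.5)]    # load new material: near idle for 8 s

# Pump sized near the average, not the 30 kW peak.
print(average_power_kw(cycle))
```

Here the pump can be sized near 4 kW instead of 30 kW, with the accumulator absorbing the difference during the press stroke and recharging over the idle segment.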
In many cases these circuits are now controlled electronically and play a large role in determining the performance of our system. The two additional classes examined here are pressure control and flow control methods. In most cases they may be constructed using constant flow or variable flow components. Many of the valves used in these circuits are discussed in more detail in Section 12.3. To begin with, there are situations where we need two pressure levels in our system yet desire to have only one primary pump. To provide two pressures with one pump, we can use a pressure reducing valve as shown in Figure 53.

Figure 51  Pressure compensated pump characteristics.

Figure 52  Variable flow circuit using pressure compensated pump and closed center valve.

We limit our power-saving options in this type of circuit, since we cannot use open center valves to unload the pump when not needed. Also, the method by which the lower pressure is reached is simply another form of energy dissipation in our system: the pressure-reducing valve converts fluid energy into heat when regulating the pressure to a reduced level. This circuit is attractive when the actuator requiring the reduced pressure does not draw significant power (less flow and thus less loss across the valve) and when an additional component that requires the lower pressure is added to an existing circuit. If designing upfront, a better goal would be to use components that all operate at identical pressures.

A second pressure control circuit is the hi-lo circuit, shown in Figure 54. The basic hi-lo circuit consists of two pumps designed for two different operating regions. The first pump is a high pressure–low flow pump that always contributes to the circuit. The second pump is a low pressure–high flow pump that is unloaded when the system pressure exceeds a preset level. This has several beneficial characteristics. When the load force is minimal, the required pressure is low, and the flows of both pumps add together and move the cylinder quickly. When the load increases, the large pump is unloaded and only the smaller high pressure–low flow pump is used to move the load. The large pump is protected from high pressure through the use of a

Figure 53  Two-level pressure control using pressure reducing valves.

Figure 54  Power savings using a hi-lo circuit with two pumps.

check valve. Since the full flow capability of the system is not produced at high pressures, we experience significant power savings. Remote pilot-operated relief valves, accumulators, and computer controllers are all additional methods of controlling pressure levels in hydraulic systems.

To conclude this section, let us examine several flow control circuits. Whereas pressure control effectively limits the maximum force or torque that an actuator produces, flow control devices limit the velocity of actuators. One simple method to control the speed is to place a flow control valve in series with our actuator. If the flow control valve limits the inlet flow to the actuator, we call it a meter-in circuit, as shown in Figure 55. The internal check valve in the flow control valve is required if we desire to meter only the inlet flow. When the cylinder (in this case) retracts, the flow is not metered and simply passes through the check valve. This circuit works well when the load is resistive and counteracts the desired motion. If this is the case, the bore end always maintains a positive pressure and cavitation is avoided. If we begin to encounter overrunning loads, a meter-in circuit will tend to cause cavitation in the cylinder, since it is an additional restriction to the inlet flow. This problem is solved by using a meter-out circuit, shown in Figure 56.
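Whichever side is metered, the resulting cylinder speed is simply the metered flow divided by the acting piston area, v = Q/A. A small sketch with illustrative dimensions:

```python
import math

def cylinder_speed(flow_lpm, bore_m):
    """Extension speed of a cylinder fed on the bore end: v = Q / A."""
    area = math.pi * bore_m ** 2 / 4.0   # bore-end piston area, m^2
    q = flow_lpm / 60000.0               # L/min -> m^3/s
    return q / area                      # m/s

# Metering 10 L/min into a 50 mm bore gives roughly 0.085 m/s.
print(round(cylinder_speed(10, 0.050), 4))
```

This is the sizing relation behind both the meter-in and meter-out circuits: the flow control valve fixes Q, and the cylinder geometry fixes the speed.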

Figure 55  Actuator velocity control using a meter-in circuit.

Figure 56  Actuator velocity control using a meter-out circuit.

The potential problem with meter-out is pressure intensification. When the valve is shifted to move the load down, we have system pressure acting on the bore end, and the flow control valve produces very high pressures on the rod end of the cylinder to control the speed. The amount of pressure intensification is related to the area ratio of the cylinder. As before, when the cylinder is raised the check valve provides an alternative flow path. As long as we are aware of the potentially higher pressure, a meter-out circuit prevents cavitation, controls speed, and provides inherent stability: if the load velocity increases, so does the pressure drop across the flow control valve, and the load velocity stabilizes. With a meter-in circuit and an overrunning load that cavitates, we can easily get large velocities when they are not desired.

A similar circuit (to the meter-in and meter-out) is the bleed-off flow control circuit shown in Figure 57. This circuit controls the speed of the actuator by determining the amount of flow that is diverted from the circuit. Since it does not introduce an additional pressure drop in series with the load, it is capable of higher efficiencies than the previous two circuits. The disadvantage is that it offers a limited speed adjustment range.
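The pressure intensification described above follows from a force balance on the piston: for a lightly loaded cylinder, the rod-end pressure is roughly the bore-end pressure times the bore-to-annulus area ratio. A sketch with illustrative dimensions:

```python
import math

def intensified_pressure(p_bore_bar, bore_m, rod_m):
    """Approximate rod-end pressure in a meter-out circuit (light load):
    p_rod ~ p_bore * A_bore / A_annulus."""
    a_bore = math.pi * bore_m ** 2 / 4.0
    a_annulus = a_bore - math.pi * rod_m ** 2 / 4.0   # rod-end annulus area
    return p_bore_bar * a_bore / a_annulus

# 100 bar on a 50 mm bore with a 36 mm rod intensifies to roughly 208 bar.
print(round(intensified_pressure(100, 0.050, 0.036), 1))
```

Any external load changes the balance, but the calculation shows why rod-end seals and hoses in meter-out circuits must be rated well above the nominal system pressure.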

Figure 57  Actuator velocity control using a bleed-off circuit.

Finally, a common circuit used to increase extension velocities by supplying flow to the cylinder at a rate greater than the pump flow is the regenerative circuit, shown in Figure 58. With the regenerative circuit we have added another valve position, giving us flow regeneration. When the cylinder is extending in the regenerative mode (lowest position on the valve), the rod end flow is directed back into the system flow and thus adds to the pump flow to produce higher velocities. As with all methods there is a trade-off: since the bore and rod end pressures are equal, the available force is decreased. In fact, the power remains the same, and we are simply trading force capability for velocity capability. Since the valve shown retains the original three positions, it would still have full force capability in those positions. Additional circuits that are used to provide a form of flow control are deceleration, flow divider, sequencing, synchronization, and fail-safe circuits. Many variations of pressure and flow control circuits have been developed over the years, and this section has provided an introduction to several of them as they relate to control of variables in our circuit. In the next section we examine in more detail the power and efficiency issues associated with several types of circuits.
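The regenerative trade-off can be checked numerically: with both cylinder ends at system pressure, the effective area drops to the rod cross-section, so speed rises and available force falls by the same factor while the hydraulic power p*Q is unchanged. The dimensions below are illustrative.

```python
import math

def extend(p_pa, q_m3s, bore_m, rod_m, regenerative):
    """Extension speed (m/s) and available force (N) of a cylinder.
    In regenerative mode the effective area is the rod cross-section."""
    a_bore = math.pi * bore_m ** 2 / 4.0
    a_rod = math.pi * rod_m ** 2 / 4.0
    a_eff = a_rod if regenerative else a_bore
    return q_m3s / a_eff, p_pa * a_eff   # (velocity, force)

p, q = 100e5, 1.0e-3                     # 100 bar, 60 L/min
v_n, f_n = extend(p, q, 0.050, 0.025, regenerative=False)
v_r, f_r = extend(p, q, 0.050, 0.025, regenerative=True)
print(f"normal: v = {v_n:.3f} m/s, F = {f_n/1000:.1f} kN")
print(f"regen : v = {v_r:.3f} m/s, F = {f_r/1000:.1f} kN")
```

With a 2:1 bore-to-rod diameter ratio the speed quadruples and the force drops to a quarter, and the product v*F (the delivered power) is identical in both modes.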

12.6.2 Power Management Techniques

To increase the efficiency of our systems, we need to effectively manage the power that is produced by the pump and distributed to the hydraulic actuators. There are different levels at which we can manage the power in our system. Four basic approaches that are often used are

- Fixed displacement pump/closed center valve (FD/CC);
- Fixed displacement pump/open center valve (FD/OC);
- Pressure compensated variable displacement pump/closed center valve (PC/CC);
- Pressure compensated load sensing pump/valve (PCLS).

The first system listed, the FD/CC system, relieves all pressure through the relief valve or control valve and is a constant power system. This is a simple approach to designing a circuit but suffers large energy losses, especially at idle conditions. The

Figure 58  Regenerative circuit example.

pump produces a constant flow, any excess of which must be dumped over the relief valve at the system (high) pressure. The only time this circuit is efficient is during periods of high load, when the largest pressure drop occurs at the load (useful work) and very little is lost across the relief valve or control valve. The worst condition, at idle (no flow to the actuator), causes all pump flow to be passed through the relief valve at the relief pressure. Almost all the power generated by the pump is dissipated into the fluid and valve in the form of heat, which significantly increases the cooling requirements for our system.

The FD/OC system, an example of which is given in Figure 48, acts similarly with a constant flow but exhibits increased efficiency at null spool positions, where the valve unloads the pump. Although the pump still operates at the maximum flow, it does so at reduced pressure. The amount of pressure drop across the valve in the center position determines the efficiency of the system at idle. When operating in the active region of the valve, the system exhibits the same efficiencies as the FD/CC system. This type of circuit is common in many mobile applications.

Once we commit to a variable displacement pump (now a variable flow circuit instead of constant flow as in the previous two types), our efficiencies can be significantly improved. The variable displacement pump configuration of interest is pressure compensation, the ideal operation of which is shown in Figure 51. The PC/CC system increases system efficiency even further, since the pump provides only the flow necessary to maintain the pressure; the flow is very close to zero at null spool positions. The compensator maintains a constant pressure in our system by varying the flow output of the pump. Now the wasted (dissipated) energy in our system occurs primarily across our control valve, since we still have full system pressure at the valve inlet.
The relief valve, although still included for safety, is inactive during normal system operation. Another advantage is that multiple actuators can be used and all will have access to the full system pressure. With the FD/OC system described previously, if one OC valve is in the neutral position the whole system sees the lower pressure. A negative to using pressure compensated pumps is the higher upfront cost for the pump, although it will likely save in other areas (heat exchangers) and certainly in operating costs. The other disadvantage is that the pump still produces full system pressure at idle, even though it is not always required. This leads to the final alternative, a pressure compensated load sensing circuit.

The load sensing pump/valve system exhibits the best efficiency by limiting both the flow and the pressure, providing just enough to move the load. Instead of producing full flow as with the fixed displacement pumps or full pressure as with the pressure compensated pump, the system regulates both pressure and flow to always optimize the efficiency of our system. The system is virtually identical to the PC/CC system except that the load pressure, instead of the system pressure, is used to control the flow output of the pump. An example PCLS circuit using a variable displacement pump is shown in Figure 59. With this system the pressure is always maintained at a preset ΔP above the load pressure. Shuttle valves are used to always choose the highest load pressure in the system. The force balance in the compensation valve is such that system pressure is balanced by the highest load pressure plus an adjustable spring used to set the differential pressure between the load and the system. If the load pressure decreases, the valve shifts to the left and the displacement is decreased, decreasing the system pressure and maintaining the desired pressure difference. The only negative is that the load sensing circuit must

Figure 59  Pressure compensated load sensing (PCLS) using a variable displacement pump.

use the highest required load pressure, or some actuators may not operate correctly. If two actuators have very different pressure requirements, the system efficiency is still not maximized.

If we do not wish to add the extra cost of a variable displacement pump, we can achieve many of the PCLS benefits by using a load sensing relief valve, shown in Figure 60. The operation is much the same as with PCLS except that the system pressure is maintained at a preset level above the load pressure by adjusting the relief valve cracking pressure. As before, the system pressure is compared with the largest load pressure. The difference is that we once again have a constant flow circuit and no longer vary the flow but instead vary only the relief pressure. This means that at idle conditions the same level of efficiency is not achieved since, although the pressure is the same, the flow over the relief valve is greater. This circuit has efficiency advantages over the FD/OC circuit described earlier, since when the control valve is in the active region the system pressure is still only slightly above the load pressure, whereas with the FD/OC system the pressure returns to maximum once the valve is moved from

Figure 60  Load sensing relief valve (LSRV) using a fixed displacement pump.

the null position into the active region. These strategies are summarized in Figure 61. Remember that the efficiencies of these systems will decrease when different pressure requirements exist in our system. In the load sensing systems the pressure is maintained above the largest required load pressure; if there is one high load pressure and three lower ones, the control valves will provide the pressure drop required for the individual actuators.

To achieve the maximum possible system efficiency and most wisely manage our power, we can progress to full computer control of the pump displacement, valve settings, and actuator settings. For example, if we have both a variable displacement pump and a variable displacement motor controlled by a computer, we can completely eliminate the control valve and its associated energy losses (pressure drops) and use the computer to match displacements in such a way as to control the system pressure and the power delivered to the load. The only energy losses in our system are then associated with component efficiencies, not pressure drops across control valves. These same concepts can be extended to larger systems where multiple displacements are controlled to always give the maximum possible system efficiency. As our society becomes more energy conscious, we will continue to develop new power management strategies. It is likely that the computer will be the centerpiece, controlling the system to achieve greater levels of performance and efficiency simultaneously.
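The efficiency comparison summarized in Figure 61 can be approximated with rough idle-condition numbers; the pressures and flows below are illustrative assumptions, not data from the text.

```python
def power_kw(p_bar, q_lpm):
    """Hydraulic power dissipated at pressure p_bar and flow q_lpm."""
    return p_bar * q_lpm / 600.0

# Assumed: relief/compensator setting 210 bar, pump rated 60 L/min,
# small center-position and compensator losses for the other cases.
idle_losses = {
    "FD/CC (full flow over relief at full pressure)":  power_kw(210, 60),
    "FD/OC (full flow at low center-position drop)":   power_kw(10, 60),
    "PC/CC (full pressure, near-zero flow)":           power_kw(210, 1),
    "PCLS  (margin pressure, near-zero flow)":         power_kw(20, 1),
}
for name, kw in idle_losses.items():
    print(f"{name}: {kw:.2f} kW")
```

Even these rough numbers reproduce the ordering of the four strategies at idle: the FD/CC circuit dissipates tens of kilowatts as heat, while the load sensing system wastes only a small fraction of a kilowatt.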

12.7 CASE STUDY OF ON-OFF POPPET VALVES CONFIGURED AS DIRECTIONAL CONTROL VALVES

There are many potential advantages to using high-speed on-off poppet valves as control valves in hydraulic control systems. Cost, flexibility, and efficiency are three primary ones. There are also several disadvantages. Whereas in manually controlled directional valves a simple mechanical feedback linkage could be used (Example 6.1 and Problem 4.8) to create a closed-loop feedback system, the use of poppet valves requires an electronic controller beyond the normal PID type of algorithm. The purpose of the case study presented in this

Figure 61  Comparing efficiencies of different power management strategies.

section is to demonstrate how one might be used in a typical application, that of controlling the position of a hydraulic cylinder. The system examined here is an example of a nonlinear model and a multiple input–single output controller. In the next section we take a look at one application of this system.

12.7.1 Basic Configuration and Advantages

The general system we want to model is shown in Figure 62. Four normally open high-speed poppet valves are used to control the position of a hydraulic cylinder. Ignoring the controller for the time being, the system is easy to construct and understand. The poppet valves act in pairs and either connect one side of the cylinder piston to tank and the other side to supply pressure, or vice versa. The valves may be two-stage to improve the response time, depending on the primary orifice size required. Some of the advantages of this configuration are listed in Table 3. Since on-off valves are generally positive sealing devices, there is very little leakage when closed, even compared with overlapped proportional valves. All the poppet valves are closed when the cylinder is stationary, and it is positively locked with zero leakage between the pump and tank. No hydraulic energy is dissipated, nor is electrical energy required, to maintain this position. Normal spool-type valves always have a leakage path from supply to tank even with the spool centered, so there is a constant energy loss associated with each valve spool in the circuit. Since the spool tolerances in servovalves and proportional valves are much tighter, there is also a greater susceptibility to contamination. This all leads to greater hydraulic efficiency and reliability at a reduced cost. In many systems, however, the valve energy losses are not significant, and these advantages may seem minor. Perhaps the most compelling advantage is the resulting

Figure 62  Basic poppet valve position control system.

Sosnowski T, Lucier P, Lumkes J, Fronczak F, Beachley N. Pump/Motor Displacement Control Using High-Speed On/Off Valves. SAE 981968, 1998.

Table 3  Potential Advantages of Using On-Off Valves for Position Control of a Hydraulic Cylinder

Characteristics of poppet valves as used to control cylinder position:
- Zero leakage when no cylinder motion
- Less electrical energy required
- Zero energy requirements when stationary
- Cheaper component cost
- Less susceptibility to contamination
- Valves easily mounted directly to actuators
- Controller algorithm able to switch between meter-in, meter-out, different valve center characteristics, flow control, position control, velocity control, etc., in real time without hardware changes

flexibility. Without going into the details of each circuit (some are discussed in the previous section), we can quickly survey some of the possibilities when using individual poppet valves to control cylinder position. Most advantages stem from the fact that the metering surfaces are now decoupled. In our typical spool-type directional control valve, when one metering surface opens, so does the corresponding return path; the metering surfaces are physically constrained to act this way. Thus when a certain valve characteristic or center configuration is desired, a specific valve must be purchased with this configuration. In contrast, each of the four poppet valves can open and close independently of the others, as in Figure 63. Taking advantage of this flexibility requires digital controllers but adds the ability to choose (or have the controller automatically switch between) many different circuit behaviors. For example, in a meter-out circuit the flow out of the cylinder is controlled to limit the extension or retraction speed of the cylinder. This is commonly done with overrunning loads to prevent cavitation and run-away loads. Using conventional strategies requires an additional metering valve in the circuit since the

Figure 63

Comparison of spool and poppet metering relationships.


Chapter 12

directional control valve will meter equally in both directions. With the poppet valves, however, simply keeping the inlet valve between the tank and inlet port fully open allows the downstream, or outlet, valve to be modulated to control the rate of cylinder movement. To get metering characteristics in both directions with conventional valves, even more metering devices and check valves are required to complete the circuit. A simple change in algorithm accomplishes the same thing using independent control of the poppet valves. Thus the hardware and plumbing remain identical, and the controller provides the many different circuit characteristics. In addition to different metering circuits, different power management strategies are available since the valves can be operated to simulate different valve centers in real time. If all the valves are closed when the system is stationary, the poppet valves act as a closed center spool valve. In this case the cylinder remains stationary and the pump flow must be diverted elsewhere. Open center characteristics occur when all the poppet valves are opened. Even tandem center spools, which allow the pump to be unloaded to tank while the cylinder is fixed, are easily simulated by closing the two valves resisting the load (assuming the load wants to move in one direction) and opening the remaining two to unload the pump. Of course, only one side of the story has been presented, and the disadvantages must also be considered. First and foremost, the operation of the poppet valves is primarily on-off, and somewhere along the line the output must be modulated. This could be a feature of special valves, where some proportionality is achieved in poppet valves by rapidly modulating (i.e., PWM) the electrical signal to the valve. If a two-stage poppet valve is used, the first stage could be extremely fast and used to modulate the second stage, again with the goal of obtaining some proportionality for metering control.
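The mode-switching flexibility described above can be made concrete with a small sketch. A hypothetical supervisory function maps a commanded circuit behavior onto on/off commands for the four poppet valves; the valve names (PA, AT, PB, BT) and the mode names are illustrative, not taken from the text.

```python
# Hypothetical sketch: mapping a commanded circuit behavior onto four
# independent poppet valves (PA: supply->bore, AT: bore->tank,
# PB: supply->rod, BT: rod->tank). Valve and mode names are illustrative.

def valve_states(mode, direction="extend"):
    """Return on/off commands for the four poppet valves.

    mode: 'closed_center', 'open_center', 'tandem_center', or 'meter_out'
    direction: 'extend' or 'retract' (used only for 'meter_out')
    """
    if mode == "closed_center":          # hold cylinder, block all paths
        return {"PA": 0, "AT": 0, "PB": 0, "BT": 0}
    if mode == "open_center":            # all paths open
        return {"PA": 1, "AT": 1, "PB": 1, "BT": 1}
    if mode == "tandem_center":
        # hold the load (close the two valves resisting it, here assumed to
        # be the bore-side pair) while unloading the pump to tank
        return {"PA": 0, "AT": 0, "PB": 1, "BT": 1}
    if mode == "meter_out":
        if direction == "extend":
            # inlet to the bore fully open; modulate the rod-side outlet
            return {"PA": 1, "AT": 0, "PB": 0, "BT": 1}
        return {"PA": 0, "AT": 1, "PB": 1, "BT": 0}
    raise ValueError(f"unknown mode: {mode}")
```

Switching circuit behavior then amounts to calling this function with a different mode, with no hardware change, which is the point made in the text.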
The digital computer can be used to great advantage here since it is capable of learning, or of using lookup tables to predict what effect a signal change will have on the system, even when the system is very nonlinear. Finally, the poppet valve, if much faster than the system, could be modulated in much the same way that a PWM signal is averaged in a solenoid coil, producing an average pressure acting on the system. This obviously has the potential to create undesired noise in the system. In some position control systems this is acceptable when the result is not the physical cylinder position. For example, and as illustrated in this case study, we could use this approach to control the small cylinder that in turn controls the displacement of a variable displacement hydraulic pump/motor. The output of the hydraulic pump/motor will not change significantly with small discrete changes in displacement (especially when connected to a large inertia, as in wheel motor applications), since the torque output is further averaged by the inertia it is connected to. Thus there are many things we need to consider when designing the controller, hence the incentive to develop the following model.

12.7.2 Model Development

This case study illustrates two methods for developing the model used in simulating the poppet valve position controller discussed above. In both cases we use nonlinear models and illustrate the use of Simulink in simulating them. Method one uses a bond graph to model the physical system, from which the state equations are easily written. Method two uses conventional equations, Newton's law, and continuity to develop the differential equations describing the model,


from which the equivalent blocks (some nonlinear) are found and used to construct the block diagram. In the basic model the supply pressure connects to two of the valves and the tank to the other two. The cylinder work ports are connected to the opposite sides in such a way that valve pairs can be opened to position the cylinder where desired. Referencing Figure 62 allows us to draw the bond graph directly from the simplified physical model, given in Figure 64. In general, power flows from the power supply to the high-pressure cylinder port and from the low-pressure cylinder port to the tank. Figure 65 gives the corresponding bond graph and illustrates the parallels it shares with the physical system model. The bond graph structure and physical model complement each other, and the power flow paths through the physical system are easily seen in the bond graph.

Figure 64

Simplified poppet valve controller model.

Figure 65

Bond graph model of poppet valve controller.

There are five states, each associated with an inertial or capacitive component:

dy(1)/dt = dp13/dt = pressure on bore end of cylinder
dy(2)/dt = dq15/dt = flow due to compressibility in lines and oil (bore end of cylinder)
dy(3)/dt = dp16/dt = pressure on rod end of cylinder
dy(4)/dt = dq18/dt = flow due to compressibility in lines and oil (rod end of cylinder)
dy(5)/dt = dp21/dt = force acting on cylinder load

Using the equation formulation techniques described in Section 2.6, it is easy to write the actual state equations. Remember that all bonds on a 0-junction share equal efforts and all bonds on a 1-junction share equal flows. Then use the inertial, capacitive, and resistive relationships to write the equations. The final state equations can then be written as follows:

dp13/dt = (1/CA) q15

dq15/dt = KPA x1 sqrt(PSA - (1/CA) q15) - KAT x2 sqrt((1/CA) q15 - PT) - (1/IA) p13 - (ABE/m) p21

dp16/dt = (1/CB) q18

dq18/dt = KPB x3 sqrt(PSB - (1/CB) q18) - KBT x4 sqrt((1/CB) q18 - PT) - (1/IB) p16 - (ARE/m) p21

dp21/dt = (ABE/CA) q15 - (ARE/CB) q18 - (b/I21) p21     (force balance on cylinder)

The units for the hydraulic components, using the English system, are

Effort = e's = psi
Displacement = q's = in^3
Inertia = I = psi sec^2/in^3
Capacitance = C = in^3/psi
Valve coefficients = K's = in^3/(sec sqrt(psi))

Using the state equations, it is straightforward to numerically integrate them with a variety of common programs. Notice that it does not matter how complex the state equations are for numerical integration. The complexity does affect simulation speed and which integration routine works best, but each equation is simply evaluated at every time step to predict the next one, and thus discontinuities and nonlinearities pose no problem. In the bond graph model above, states were included for system compliance, and the line could easily be broken into more sections for greater detail. Nonlinear orifice equations are used to represent the PQ characteristics of each valve.
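As a sketch of that point, the five state equations can be stepped forward with even a simple fixed-step Euler routine; the nonlinear square-root orifice terms are just evaluated at each step. All parameter values below are invented for illustration and are not the text's values.

```python
import math

# Minimal fixed-step simulation sketch of the five-state poppet valve model.
# Every parameter value here is illustrative only, not from the text.
P = dict(PSA=1000.0, PSB=1000.0, PT=0.0,      # supply/tank pressures, psi
         CA=1e-4, CB=1e-4,                    # compliances, in^3/psi
         IA=0.05, IB=0.05,                    # fluid inertias
         ABE=2.0, ARE=1.5,                    # piston areas, in^2
         m=0.1, b=5.0,                        # load inertia and damping
         KPA=0.3, KAT=0.3, KPB=0.3, KBT=0.3)  # valve coefficients

def q_valve(K, x, dp):
    """Orifice flow; clamped at zero for reversed pressure drop (a simplification)."""
    return K * x * math.sqrt(max(dp, 0.0))

def derivs(y, x1, x2, x3, x4, p=P):
    """State derivatives for [p13, q15, p16, q18, p21]."""
    p13, q15, p16, q18, p21 = y
    pa, pb = q15 / p["CA"], q18 / p["CB"]     # chamber pressures
    return [
        pa,
        q_valve(p["KPA"], x1, p["PSA"] - pa) - q_valve(p["KAT"], x2, pa - p["PT"])
            - p13 / p["IA"] - p["ABE"] / p["m"] * p21,
        pb,
        q_valve(p["KPB"], x3, p["PSB"] - pb) - q_valve(p["KBT"], x4, pb - p["PT"])
            - p16 / p["IB"] - p["ARE"] / p["m"] * p21,
        p["ABE"] * pa - p["ARE"] * pb - p["b"] / p["m"] * p21,
    ]

def simulate(t_end=0.01, dt=1e-6, x=(1.0, 0.0, 0.0, 1.0)):
    """Forward Euler from rest; x holds the four valve openings."""
    y = [0.0] * 5
    for _ in range(int(t_end / dt)):
        dy = derivs(y, *x)
        y = [yi + dt * dyi for yi, dyi in zip(y, dy)]
    return y
```

A production model would use a variable-step solver, but the point stands: the discontinuous valve commands and square-root nonlinearities need no special treatment beyond evaluation at each step.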


To compare the two procedures, we now develop the equivalent block diagram representing the poppet valve control of a hydraulic cylinder, for which we must first write the equations governing system behavior. For the bond graph we simply modeled energy flow and the equations resulted from the model (the same equations, of course). For the block diagram, let us quickly review the basic equations. There are three types of flows we are concerned with:

Valve flow: QV = KV x sqrt(PV), where PV is the pressure drop across the valve
Cylinder flow: QCYL = A dx/dt (x = cylinder position)
Compressibility flow: QCMP = (V/beta) dP/dt

If we look at the flow through valve PA or PB, we see that the flow can go three places: into the cylinder, into compression of the fluid and expansion of hoses, and back through valve AT or BT, respectively. We can then write the continuity equation for each valve:

QPA = QAT + QCYL,bore + QCMP,bore
QPB = QBT + QCYL,rod + QCMP,rod

Finally, summing the forces on the cylinder:

FC = FA - FB - fL
FC = PA ABE - PB ARE - fL

Inserting each flow into the respective continuity equation and solving for the derivative of pressure results in two equations, each representing a summing junction on a block diagram:

dPA/dt = (beta/V) [KPA x1 sqrt(PS - PA) - KAT x2 sqrt(PA - PT) - AB dx/dt]
dPB/dt = (beta/V) [KPB x3 sqrt(PS - PB) - KBT x4 sqrt(PB - PT) - AR dx/dt]

After these two summing junctions are formed, the results can be integrated to get PA and PB. The pressures are then used in the force balance on the piston. Finally, the block diagram model (along with the necessary functional blocks) is shown in Figure 66. In the upper left we see the cylinder pressure being subtracted from the supply pressure to calculate the flow across that valve. This is repeated for each valve.
The two valve flows and the cylinder flow (proportional to cylinder velocity) are summed, and the difference is integrated to get pressure. Each pressure times its respective area is summed to get the net force acting to accelerate the cylinder and load. This is integrated once to get the cylinder velocity (used to calculate the respective flows) and once again to obtain the position. Additional lines combine the signals and send them to the workspace for further analysis and plotting.
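As a sketch, each pressure-rate summing junction above can be written as a small function; the clamped square root is a simplification for reversed pressure drops, and the symbol names follow the equations in the text.

```python
import math

# Sketch of one pressure-rate summing junction from the block diagram:
# valve flow in, minus valve flow out, minus piston flow, scaled by beta/V.
def pressure_rate(beta_over_V, K_in, x_in, P_up, P, K_out, x_out, P_down, A, xdot):
    """dP/dt from continuity for one cylinder chamber."""
    q_in = K_in * x_in * math.sqrt(max(P_up - P, 0.0))       # supply valve flow
    q_out = K_out * x_out * math.sqrt(max(P - P_down, 0.0))  # tank valve flow
    return beta_over_V * (q_in - q_out - A * xdot)           # net into compression
```

Calling it once per side gives the two summing junctions; integrating the results yields PA and PB, exactly as the Simulink diagram does with integrator blocks.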

Figure 66

Block diagram of poppet valve position controller.

Once the model is up and running, it is important to verify it at known operating points. For this model it is quite easy to calculate the slew velocity (cylinder velocity with no load) to verify the model. With a supply pressure of 1000 psi and the coefficients determined for the valve (and used in the model), the calculated slew velocities, flow rates, and cylinder pressures all agreed with the model. In the next section we use it to perform several simulations.
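The no-load slew check can be sketched as a one-line hand calculation. Assuming equal piston areas and identical inlet and outlet valve coefficients (so that, with no load, the chamber pressures equalize and the supply pressure splits evenly across the two orifices), the steady cylinder velocity is the inlet valve flow divided by the piston area. The numbers below are illustrative, not the text's values.

```python
import math

# Back-of-the-envelope slew velocity check (no load). Assumes equal piston
# areas and identical valve coefficients, so the supply pressure splits
# evenly across the inlet and outlet orifices. Numbers are illustrative.
def slew_velocity(K, x, Ps, A):
    P_chamber = Ps / 2.0                      # even split across the two orifices
    Q = K * x * math.sqrt(Ps - P_chamber)     # inlet valve flow, in^3/s
    return Q / A                              # cylinder velocity, in/s

v = slew_velocity(K=0.34, x=1.0, Ps=1000.0, A=2.0)
```

A hand calculation like this, compared against the simulated steady-state velocity, is exactly the kind of verification the text recommends before trusting the model elsewhere.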

12.7.3 Simulation and Results

With the verified (steady-state) model we can now experiment with different input sequences, disturbances, valve opening times, pressures, cylinder loads, valve coefficients, and, most importantly, controller strategies. To give an idea of basic controller implementation, the block diagram uses two function blocks to define a deadband range in which all the valves are off. Once the cylinder leaves the deadband, the valves required to move the cylinder in that direction are opened and the correction is made. Examining the sine wave command and response in Figure 67, with a deadband of 0.2 inches, we see that the cylinder position follows the command and the system behaves similarly to one under proportional control. The difference is evident in that the output is not smooth but reflects the on-off nature of the control valves. Whether or not this chatter is acceptable depends on the application and what the actuator is connected to. Whenever the cylinder reaches a position inside the deadband, all the valves are shut and the cylinder stays stationary. If the deadband is made too small, the valves begin to chatter. Having a disturbance input act on the system only changes
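The deadband logic described above can be sketched as a small function; the valve-pair names returned are placeholders, and the 0.2 in. default matches the deadband used in the simulation.

```python
# Sketch of the deadband on-off position controller described above.
# Returns which valve pair to energize; names are illustrative.
def deadband_controller(command, position, deadband=0.2):
    error = command - position
    if abs(error) <= deadband:
        return "hold"        # all four valves closed
    return "extend" if error > 0 else "retract"
```

Shrinking the deadband tightens tracking at the cost of valve chatter, which is the trade-off visible in the simulated response.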

Figure 67

Poppet valve simulation: cylinder response to a sine wave input.


the pressures on the cylinder, since all the valves are closed, and the position changes only due to compressibility in the system. Things are a little more interesting when we examine the valves during this motion. In Figure 68 we see the on-off nature of this type of controller (the valve itself is not proportional in this model). The poppet valves are constantly turning on and off to modulate the position of the cylinder, and thus the velocity is constantly cycling even though the position output follows the command quite well. The system can be made to oscillate, but it will not go (globally) unstable with this type of controller: whenever the error falls outside the deadband, the controller always attempts a corrective action. In conclusion, we have briefly examined two approaches to developing and simulating a cylinder position controller using four poppet valves in place of the typical spool valve. As demonstrated, it is very easy to run many simulations without having to build and test each system in the lab. We must remember it is still only a model; although it includes the nonlinearities associated with the valve, it ignores others such as friction effects, valve response times and trajectories, and the effects of flow forces on the valve. As mentioned, always attempt to verify a model with known data before extending the model to unknown regions.

Figure 68

Poppet valve simulation response to sinusoidal input.

12.8 CASE STUDY OF HYDROSTATIC TRANSMISSION CONTROLLER DESIGN AND IMPLEMENTATION

This case study summarizes the design, development, and testing of an energy storage hydrostatic vehicle transmission and its controller. Primary benefits include regenerative braking and the decoupling of the engine from the road load. The controller was developed and installed on a vehicle, demonstrating the potential fuel economy savings and feasibility of hydrostatic transmissions (HSTs) with energy storage. Controller algorithms maximize fuel economy and performance. Being true drive-by-wire, the computer controls the engine throttle position while intermittently running the engine. The engine operates only to recharge the accumulator or under sustained high-power driving. The series hybrid configuration, which we examine here, allows many features to be implemented in software. The torque applied to each wheel can be controlled independently, during both braking and acceleration. The vehicle incorporates a pump/motor at each front wheel, with provisions to add units at the rear wheels, providing true all-wheel drive, antilock braking, and traction control capabilities. Controller development involves several steps. Axial piston pump/motor models are first developed and implemented in a simulation program. The simulation program allows fuel economy studies, component sizing, performance requirements, and configurations to be quickly evaluated over a variety of driving cycles. In addition, the dynamics of inserting a valve block at each wheel were studied. The valve block allows pump/motors not capable of overcenter operation to be used; in many cases these units exhibit higher efficiencies. Successful switches were modeled analytically and confirmed in the lab. To demonstrate the use of electrohydraulics (electronics working with hydraulics), we summarize the hardware and software for complete vehicle control. The hardware consists of the computer, data acquisition boards, sensors, circuit boards, and control panel. The software maintains safe and efficient operation under normal driving conditions. Engine operating and efficiency models were developed to allow future open loop control of the engine-pump system. A stepper motor is used to control the throttle position. Both distributed and centralized controllers are used on the vehicle, maximizing computer and vehicle performance. The final result is a vehicle incorporating a hydrostatic transmission with energy storage, allowing normal driving operation and performance with increased fuel economy. The remaining sections discuss the overall layout and goal, thus providing the framework for the environment in which the controller must operate, the necessary hardware, and, finally, examples of the controller strategies. The goal of this case study is to stimulate thought about how distributed and centralized control schemes might be implemented and what the potential of electronic control of hydraulic systems is.

Lumkes J. Design, Simulation, and Testing of an Energy Storage Hydrostatic Vehicle Transmission and Controller. Doctoral Thesis, University of Wisconsin—Madison, 1997.

12.8.1 Overall Layout and Goal

There are many ways to operate and control hydrostatic transmissions, ranging from garden tractors with one control lever to large construction machines with over one million lines of computer instructions and multiple processors. Manual control works well for simple systems where efficiency and features are not emphasized; but taking full advantage of complex systems like large hydrostatic transmissions, while always maintaining optimum efficiency and implementing safety features, requires more inputs and outputs than one operator can muster. It is in these applications that advanced control systems really shine. Only one possible example and solution summary is presented here; many other examples and solutions are certainly possible. The basic concept is to control a hydrostatic transmission to optimize efficiency over wide ranges of operation. With regard to vehicle HST drive systems, the options range from the simple two-wheel drive parallel system shown in Figure 69 to the flexible all-wheel drive series system in Figure 70. The simplest configuration is to leave the existing vehicle driveline intact and add the hydraulic system in parallel. By adding clutches, both regenerative braking and engine-road load decoupling are accomplished. The parallel system requires the fewest additional components and leaves the efficient mechanical driveline intact. This configuration is currently in use in Japan, where Mitsubishi supplied 59 buses equipped with a diesel/hydraulic parallel hybrid vehicle transmission. This design gives fuel savings of 15-30% in everyday use. Volvo, Nissan-Diesel, and Isuzu have similar versions. The Mitsubishi version incorporates two bent-axis swash plate pump/motors, two accumulators, and a controller that decreases the engine throttle

Figure 69

Vehicle hydrostatic transmission in parallel (with energy storage).

Yamaguchi J. Mitsubishi MBECS III is the Latest in the Diesel/Hydraulic Hybrid City Bus Series. Automotive Engineering, June 1997, pp. 29-30.

Figure 70

Vehicle hydrostatic transmission—series configuration, AWD.

when hydraulic assist is provided. At higher speeds, or at steady state, the bus is driven by the engine alone. A single dry-plate clutch/synchromesh gear unit connects the pump/motors to the mechanical driveline. With the hydraulic assist, the bus is capable of starting from rest in up to third gear. The controller for this system can be much simpler, but many features cannot be implemented. The series configuration allows more features than simpler designs. In this case, a pump/motor is located at each wheel, as shown in Figure 70. This configuration allows many options:

Decoupling of engine from road load;
Regenerative braking;
All-wheel drive;
Antilock braking systems (ABS);
Traction control;
Ability to deactivate several pump/motors for greater system efficiency;
Hydrostatic accessory drives;
Adaptability to variable and/or active suspension systems.

Once the hardware is installed in the series system, the addition of features can largely be accomplished through software. ABS and traction control, assuming wheel speed sensors are present, can be completely integrated into the controller at a lower cost than is possible for current vehicles. Also, by selectively using the wheel pump/motors, greater system efficiencies are possible.

Fronczak F, Beachley N. An Integrated Hydraulic Drive Train for Automobiles. Proceedings, 8th International Symposium on Fluid Power, Birmingham, England, April 1988.


For these reasons we examine the hardware and software used in the initial design of a controller to optimize the efficiency of a series hydrostatic transmission with energy storage. To control this system we use a multiple-input multiple-output controller with both centralized elements (engine speed and overall strategy) and distributed elements (the displacement controllers), implemented with both analog and digital controllers. In this way we can make the best use of performance and resources to develop a working controller and demonstrate, less abstractly, the concepts presented in this text.

12.8.2 Hardware Implementation

Many of the components we examined in Chapters 6 and 10 are used in this case study. The PC-based data acquisition/control system functions as the primary central controller and monitors the distributed controllers. It is interfaced to the hardware through analog and digital input/output (I/O). The inputs and outputs are made compatible with the computer using a variety of methods. All the control actuators are mounted on a metal "car" frame representing the size of a normal full-size passenger car. The frame's primary purpose is to allow dynamometer testing for system efficiencies, thus allowing refinement and tuning of each controller. This case study focuses primarily on the hardware required to implement the controller.

12.8.2.1 Central PC Controller

The vehicle controller PC receives two input signals from the driver, the accelerator and brake pedal positions, and proceeds to regulate and control each component to respond correctly and safely. Figure 71 illustrates the interaction between the major components. The driver controls the accelerator and brake pedal positions. The computer, upon reading these, uses a combination of analog and digital I/O to interface with the wheel pump/motors, engine, engine pump, and accumulators. Pedal positions, wheel and engine speeds, pressures, and engine temperature are all inputs into the computer.

Figure 71

Overview of controller hardware for vehicle HST controller.

The computer is the central control component in the vehicle. It performs some of the control algorithms, monitors all subsystems for failure, and determines the overall operating strategy for the vehicle, which, together with the component efficiencies, largely determines the efficiency of the vehicle. Table 4 summarizes the channel inputs, items read, and types of signals. As controller algorithms change, it may be necessary to adjust what is being monitored. For example, if an open loop engine controller is developed to the point where the confidence level is high, the engine speed would not be necessary. In addition, the engine temperature, once the vehicle configuration and cooling system are tested, would not be needed. The gauges mounted on the dashboard could monitor these items, along with the engine control module (ECM). The data acquisition outputs are summarized in Table 5. Three analog outputs are used for the wheel pump/motor and engine-pump displacement commands. The distributed controllers use these voltage commands to set the desired displacement. Nine digital outputs are used to control the solenoids, engine ignition and starting, and throttle position. The solenoids are used to shut down the system (two modes) and to isolate the accumulator from the system, both for safety and for operating procedures. The digital outputs, when used for the solenoids and engine, must operate through high-current relays, since the signal levels themselves are very small. This is a relatively simple computer configuration and could easily be implemented using an embedded microcontroller in production applications. The PC data acquisition system provides a cost-effective way to design and test many controllers very quickly.

12.8.2.2 Sensors Used for Feedback

Eleven sensors are used to monitor and control the vehicle. A combination of magnetic pickups, linear and rotary potentiometers, pressure transducers, and temperature sensors is implemented throughout the vehicle. Table 6 lists the types of sensors required and their locations. The rotary speeds are all measured using magnetic pickups and signal conditioning circuits that convert frequency to voltage. By calibrating the circuit, the computer can determine the desired rotational speeds. Similar magnetic pickups and circuits are used for the wheel speeds and engine speed. Rotary potentiometers are used to determine the wheel pump/motor displacements. A linear potentiometer is

Table 4

Vehicle Controller Inputs to Central Computer

Input channel                 Signal
Left wheel speed              Voltage
Right wheel speed             Voltage
Accelerator pedal position    Voltage
Brake pedal position          Voltage
HST low pressure (boost)      Voltage
HST high pressure (system)    Voltage
Engine speed                  Voltage
Engine temperature            Voltage

Table 5

Vehicle Controller Outputs from Central Computer

Output channel                      Signal
Left wheel P/M displacement         Voltage
Right wheel P/M displacement        Voltage
Engine pump displacement            Voltage
Accumulator bleed-down              Digital output
Quick panic—normal mode switch      Digital output
Quick panic—shutdown mode switch    Digital output
Accumulator isolation—no            Digital output
Accumulator isolation—yes           Digital output
Engine ignition power               Digital output
Engine starter solenoid             Digital output
Stepper motor movement              Digital output
Stepper motor direction             Digital output

used for measuring the engine-pump displacement. The accelerator and brake pedals also use potentiometers to determine position. For all potentiometers, the excitation voltage was regulated to 10 V, giving maximum resolution, since the computer inputs were also configured for 0-10 V. The signals from the potentiometers and engine temperature sensor are all voltages in ranges the computer data acquisition boards can read. The pressure transducers have current outputs, which are more immune to line noise. At the connection on the data acquisition boards, the current is passed through 400 ohm resistors to convert the signals to voltages. The current ranges from 4 to 20 mA, varying linearly with the pressure. An effort was made to reduce the number of sensors, developing a control system using sensors likely to be available in production vehicles. Torque transducers and flow meters are avoided for this reason.
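The 4-20 mA conversion works out neatly: through a 400 ohm resistor the loop current appears as 1.6-8.0 V at the data acquisition board, which maps linearly back to pressure. A sketch (the 0-3000 psi span is an assumed example, not the text's calibration):

```python
# Sketch of converting the sense-resistor voltage back to pressure.
# A 4-20 mA loop through a 400-ohm resistor gives 1.6-8.0 V at the ADC;
# the 0-3000 psi pressure range here is illustrative.
def pressure_from_voltage(v, r=400.0, p_min=0.0, p_max=3000.0):
    i = v / r                                  # loop current, A
    frac = (i - 0.004) / (0.020 - 0.004)       # 0 at 4 mA, 1 at 20 mA
    return p_min + frac * (p_max - p_min)
```

The live-zero at 4 mA is also a free diagnostic: a reading near 0 V indicates a broken loop rather than zero pressure.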

Table 6

Vehicle Controller—Sensors Used for Feedback

Item                        Sensor type        Placement
Left wheel speed            Magnetic pickup    Ring gear on CV joint
Right wheel speed           Magnetic pickup    Ring gear on CV joint
Engine speed                Magnetic pickup    Flywheel teeth
Left P/M displacement       5 K rotary pot     Swashplate pivot arm
Right P/M displacement      5 K rotary pot     Swashplate pivot arm
Engine pump displacement    Linear pot         Rod on control piston
Accel. pedal position       5 K rotary pot     Floor board
Brake pedal position        5 K rotary pot     Fire wall
System pressure             Capacitive type    Manifold block
Boost pressure              Capacitive type    Manifold block
Accumulator pressure        Capacitive type    Manifold block
Engine temperature          —                  OEM on engine

12.8.2.3 Conditioning Circuits for Computer Inputs and Outputs

As is usual in practice, additional circuits are required to interface different components with the controller. The circuits serve a variety of functions, ranging from signal conditioning and signal conversion to displacement control and voltage regulation. A sample of the circuits is described here. Three circuits convert frequency to voltage, with the output voltage proportional to the input frequency; these circuits allow the computer to read the speed from a magnetic pickup. Other circuits convert the unregulated battery voltage, which can vary between 11 and 14 V, to regulated 24, 10, and 5 V. The various voltage levels are required both for powering transducers and for exciting the potentiometers. Three displacement controller circuits use OpAmps to create hysteresis, deadbands, and on-off controller routines, as modeled and discussed in the previous case study. Finally, a bank of solid-state relays allows the computer to drive high-current loads using the digital output channels. Because the controller algorithms are implemented using OpAmps, the computer is free to process other tasks more efficiently: it simply sends the desired command signal to the distributed controllers and assumes that its request will be met. Other algorithms, such as the engine speed and ABS braking algorithms, are run from the central computer. These examples illustrate some techniques for interfacing transducers with the data acquisition cards and for using simple distributed controllers to off-load computational demands from the central computer.

12.8.3 Software Implementation

The controller software, through the sensors, circuits, and data acquisition boards, controls the vehicle components to ensure good system stability and safety. In addition, the software can add secondary features like ABS, traction control, and cruise control. The control algorithms, optimized for stability, safety, and system efficiency, are easily modified in the software.
In this section we summarize only some of the basic concepts and routines that can be used to implement controllers using computers. As we will find if we begin to write software for controller algorithms, the most difficult part is often developing the proper flow chart of the program; this is commonly taught in many college courses as the beginning step in developing a computer program. Once the flow chart is developed, programmers can write the actual software code, converting the flow chart into a set of commands to be compiled and run (or downloaded to an embedded microcontroller). In other words, after the development of the flow chart and the specific controller algorithm, the next step is largely syntax and programming, not controller design. Figure 72 gives the main controller flow chart. This corresponds to the main controller routine, which is responsible for determining which routine runs next and for controlling program flow. For testing purposes it is designed to run the control loop until any key is pressed on the keyboard. Before the loop is started, all the global variables and functions are defined. The main process of the loop then begins: the inputs are all read and compared with safety limits, and if all is well, the controller algorithms run. If anything fails, the program puts the vehicle into a controlled shutdown. Options are provided to display data to the monitor every specified number of samples. Finally, data can be stored to the hard disk to track controller performance, although this does slow the sampling rate. This particular controller is designed to utilize program-controlled delays: no explicit delays are used, to maximize the sample rate, although some routines are not called on every pass.

Figure 72

Controller flow chart—main routine.

The strategy routine implements the overall controller strategy for how to control the vehicle and where each subsystem should operate. Several routines are called from the strategy routine (Figure 73). First, if there are no input commands and the engine is not running, the accumulator is isolated from the system to prevent leakage across the pistons in the wheel pump/motors and engine pump. When a command (brake or gas) signal is received, the controller connects the accumulator to the main system and changes the displacement of the wheel pump/motors in proportion to the command signal. With over-center units, both braking and acceleration are easily achieved. Next, the engine is checked; if the accumulator needs to be recharged, the sequence to start the engine begins. If the engine does not start after a set number of loops, the system is shut down. Once the engine recharges the accumulator, it is shut off until the next time the accumulator needs charging.
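The main-routine flow chart translates almost directly into a loop skeleton. All routine names below are placeholders for the routines described in the text, and a sample counter stands in for the "until a key is pressed" test.

```python
# Skeleton of the main control loop from the flow chart. All the callables
# are placeholders for the routines described in the text.
def main_loop(read_inputs, within_limits, run_strategy, shutdown,
              display, log, display_every=100, max_samples=10_000):
    for n in range(max_samples):               # stand-in for "until keypress"
        inputs = read_inputs()
        if not within_limits(inputs):
            shutdown()                         # controlled shutdown on any fault
            return n
        run_strategy(inputs)
        if n % display_every == 0:
            display(inputs)                    # periodic monitor output
        log(inputs)                            # optional; slows the sample rate
    return max_samples
```

The structure mirrors the flow chart: read, check limits, run the algorithms, and only then spend time on display and logging.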

Figure 73

Controller flow chart—strategy routine.

The engine controller routine starts the engine and controls the engine speed using a stepper motor connected to the throttle (Figure 74). The hydrostatic pump mounted on the engine is controlled by varying its displacement. Since it is computer controlled, it can be set to optimize the overall system efficiency. To control for maximum efficiency, the whole system is analyzed: the pump efficiency curves and engine efficiency curves are known, and the product of the two is maximized to find the correct operating point. This could be enhanced further with a feedforward routine, in which the desired pump displacement and stepper motor position are precalculated as functions of system pressure and accessed via a lookup table. Thus, without all the details, we should see how a sample controller might be configured. The goal of building such a system is to verify the design and allow extended vehicle testing. A quick conclusion now relates earlier material to how different controllers may be used to change or improve the existing initial design. First, most components in the system were modeled and tested to verify operation within the whole system. Each subcontroller could constitute another study of the control system design process. The complete vehicle was also modeled and simulated on the Federal Urban Driving Cycle. This provided much useful input for designing the controller: expectations of fuel economy, sizing requirements for components (acceleration and braking rates), the dynamic bandwidth required of each controller, and engine on-off cycles. This simulation provided many of the guidelines necessary to design the complete system. The wheel pump/motor displacement controllers were tested to ensure that they met the bandwidth requirements for the vehicle. Having these models would now allow the controller to be designed for the next stage: global optimization of system efficiency.
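The feedforward idea mentioned above can be sketched as a pressure-indexed lookup table with linear interpolation; every number in the table below is invented for illustration.

```python
# Sketch of the feedforward lookup: precalculated pump displacement and
# stepper (throttle) position as a function of system pressure.
# All table values are invented for illustration.
PRESSURES = [1000.0, 2000.0, 3000.0]           # psi
DISPLACEMENT = [1.00, 0.60, 0.35]              # pump displacement fraction
THROTTLE_STEPS = [400, 700, 900]               # stepper motor position

def interp(x, xs, ys):
    """Piecewise-linear lookup with end clamping."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    for i in range(len(xs) - 1):
        if x <= xs[i + 1]:
            t = (x - xs[i]) / (xs[i + 1] - xs[i])
            return ys[i] + t * (ys[i + 1] - ys[i])

def feedforward(pressure):
    return (interp(pressure, PRESSURES, DISPLACEMENT),
            interp(pressure, PRESSURES, THROTTLE_STEPS))
```

Because the table is computed offline from the measured engine and pump efficiency maps, the loop-time cost is just one interpolation per sample, which suits a slow PC-based controller well.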
This case study verified the overall controller strategy and the stability of the system.

Figure 74  Controller flow chart—engine routines.

Also, the operating mode used when the accumulator is not in the system still needs to be implemented, for cases where the accumulator pressure is too low to perform a maneuver (e.g., passing another vehicle) and there is no time available to charge the accumulator. This mode of operation was studied and tested in the lab using sliding mode control and PID algorithms; it was initially examined separately from the vehicle because of its higher complexity and its potential stability and safety problems. The goal of the controller in this mode is to command the engine throttle, the engine-pump displacement, and the wheel pump/motor displacements, and thus control the torque applied to the wheels while maintaining maximum system efficiency. Since it is desirable never to relieve system pressure through a relief valve (which adds major energy losses), the throttle and displacements must always be matched to avoid overpressurization. When the accumulator is in the system, it makes the system see an essentially constant pressure (at least relative to controller loop times), so the control strategy is simpler, although implementation is just as difficult in both cases. In conclusion, an actual PC-based control system for a hydrostatic transmission with regenerative braking has been presented as an example of how the techniques and hardware studied earlier might be used in actual product research and development.
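The PID algorithms mentioned above are developed earlier in the book. Purely as a hedged illustration (this is not the controller from the study), a discrete PID loop for one displacement command might be structured as follows; the gains, limits, and sample time are placeholders.

```python
# Hedged sketch of a discrete PID loop such as might regulate one pump/motor
# displacement command.  Gains, output limits, and sample time are
# illustrative only.
class DiscretePID:
    def __init__(self, kp, ki, kd, dt, out_min=0.0, out_max=1.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        # Backward-difference derivative (spikes on the first call; a real
        # implementation would filter this or differentiate the measurement).
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * derivative
        # Simple anti-windup: accumulate the integral only when unsaturated.
        if self.out_min < out < self.out_max:
            self.integral += error * self.dt
            out = self.kp * error + self.ki * self.integral + self.kd * derivative
        return min(max(out, self.out_min), self.out_max)
```

One loop instance per actuator (throttle, engine-pump displacement, each wheel pump/motor) would then be called at the controller's sample rate.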



Lewandowski E. Control of a Hydrostatic Powertrain for Automotive Applications. Ph.D. Thesis, University of Wisconsin—Madison, 1990.

12.9 PROBLEMS

12.1 Locate an article describing a unique application of industrial hydraulics and briefly summarize the system and how it is used.
12.2 Locate an article describing a unique application of mobile hydraulics and briefly summarize the system and how it is used.
12.3 Briefly list and describe the three functions of control valves.
12.4 A pressure reducing valve regulates the downstream pressure. (T or F)
12.5 What two types of pressure control valves can use smaller springs with higher pressures?
12.6 If the orifice is removed in a pilot operated pressure control valve, the valve still regulates the pressure. (T or F)
12.7 List an advantage and a disadvantage of spool type pressure control valves.
12.8 Graphically demonstrate what pressure override is.
12.9 Counterbalance valves combine the functions of what two valves?
12.10 Pressure reducing valves are an efficient means of reducing the pressure in our system. (T or F)
12.11 Most flow control valves use pressure as the feedback variable. (T or F)
12.12 Tandem center directional control valves combine what two types of center configurations?
12.13 The center configuration of servovalves is of what type?
12.14 List several advantages to using electronic feedback on proportional valves.
12.15 For a position control system requiring high accuracy, the most applicable type of valve is ______________.
12.16 What is the feedback device in a typical flapper nozzle servovalve?
12.17 Two disadvantages of servovalves are ____________ and _____________.
12.18 Using the two valve coefficients below, what is the total valve coefficient?
$$K_{PA} = 0.34\ \frac{\mathrm{in}^3}{\mathrm{sec}\,\sqrt{\mathrm{psi}}} \quad\text{and}\quad K_{BT} = 0.45\ \frac{\mathrm{in}^3}{\mathrm{sec}\,\sqrt{\mathrm{psi}}}$$
12.19 Pressure metering curves show important characteristics for what type of control?
12.20 In valve controlled cylinders, what is unique about the force located at 2/3 of the stall force?
12.21 Under what conditions is a fixed displacement/closed center (FD/CC) system the most efficient?
12.22 List one advantage and one disadvantage of using regeneration during extension.
12.23 Describe an advantage of load sensing circuits.
12.24 Design a hydraulic circuit capable of meeting the following specifications when the system pressure is 1500 psi, the control valve has matched orifices and is symmetrical, and the cylinder is mounted horizontally and has a stroke of 10 inches. Specifications:
  Maximum extension force = 12,000 lbs (all forces opposed to motion)
  Maximum retraction force = 7000 lbs
  Extension velocity of 4 in/sec at a load of 5000 lbs

Determine the answers for:
  a. Cylinder piston and rod diameters (round to nearest standard values)
  b. Minimum valve coefficients required (include units)
  c. Maximum flow rate the pump needs to supply when the load is zero
12.25 Design a hydraulic circuit capable of meeting the following specifications when the system pressure is 2000 psi, the control valve has matched orifices and is symmetrical, and the cylinder is mounted horizontally and has a stroke of 16 inches. Specifications:
  Maximum extension force = 20,000 lbs (all forces opposed to motion)
  Maximum retraction force = 15,000 lbs
  Extension velocity of 2 in/sec at a load of 10,000 lbs
Be sure to specify answers to the following questions:
  a. Cylinder piston and rod diameters using nearest standard values
  b. Minimum valve coefficients required (include units)
  c. Maximum flow rate the pump needs to supply when the load is zero
12.26 For the meter-out speed control circuit shown in Figure 75, the diameter of the cylinder piston is 2.5 inches and the rod diameter is 1.75 inches. The external opposing force during extension is 10,000 lbf. The flow rate from the pump is 20 gpm and the relief valve is set at 3000 psi. The rod side flow is controlled to 7 gpm by the pressure compensated flow control valve. Determine the following:
  a. How much of the pump flow passes over the relief valve?
  b. What is the pressure on the rod side of the cylinder if the external load is 10,000 lbf?
  c. If the external load should suddenly become zero during extension, what is the magnitude of the pressure on the rod side of the cylinder?
  d. If the cylinder is rated to 5000 psi, what is the minimum external force in extension which must be applied?

Figure 75  Problem: meter-out flow control circuit.


12.27 Given the circuit shown in Figure 76, answer the following questions:
  a. What is the max cylinder extension speed?
  b. What is the max cylinder extension force?
  c. What is the max cylinder retraction speed?
  d. What is the bore end pressure during retraction?
  e. What will the cylinder do when the valve is centered?

Figure 76  Problem: meter-out flow control circuit.

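Problems such as 12.18 combine the coefficients of two valve metering lands in series. Since the same flow crosses both lands and their pressure drops add, the coefficients combine by the usual series-orifice rule, $1/K^2 = 1/K_{PA}^2 + 1/K_{BT}^2$. A quick numeric check can be scripted (Python here purely for illustration):

```python
# Series combination of valve coefficients: the same flow Q passes both
# metering lands and the pressure drops add.  With Q = K*sqrt(dP) for each
# land, 1/K_total^2 = 1/K_PA^2 + 1/K_BT^2 (standard series-orifice rule).
from math import sqrt

def series_coefficient(k_pa: float, k_bt: float) -> float:
    return k_pa * k_bt / sqrt(k_pa**2 + k_bt**2)

K_PA, K_BT = 0.34, 0.45   # in^3/(sec*sqrt(psi)), the values in Problem 12.18
K_total = series_coefficient(K_PA, K_BT)
```

Note that the combined coefficient is always smaller than either individual coefficient, as expected for restrictions in series.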


Appendix A Useful Mathematical Formulas

Quadratic Equation

Polynomial: $ar^2 + br + c = 0$

Roots: $r_1, r_2 = \dfrac{-b \pm \sqrt{b^2 - 4ac}}{2a}$

Euler's Theorem

$e^{j\theta} = \cos\theta + j\sin\theta$
$e^{-j\theta} = \cos\theta - j\sin\theta$

Matrix Definitions

Square Matrix  A matrix with $m$ rows and $n$ columns is square if $m = n$.

Column Vector  An $n \times 1$ matrix; all values in one column.

Row Vector  A $1 \times m$ matrix; all values in one row.

Symmetrical Matrix  $[a_{ij}] = [a_{ji}]$.

Identity Matrix  A square matrix where all diagonal elements equal 1 and all off-diagonal elements equal 0. Using matrix subscript notation it can be defined as
$$[I_{ij}] = \begin{cases} 1 & i = j \\ 0 & i \neq j \end{cases}$$

Matrix Addition  $C = A + B$; $c_{ij} = a_{ij} + b_{ij}$.

Matrix Multiplication  If the multiplication order is $AB$, then the number of columns of $A$ must equal the number of rows of $B$. If $A$ is $m \times n$ and $B$ is $n \times q$, the resulting matrix will be $m \times q$. The product $C = AB$ is found by multiplying the $i$th row of $A$ into the $j$th column of $B$ to obtain element $c_{ij}$:
$$c_{ij} = \sum_{k=1}^{n} a_{ik}\,b_{kj}, \qquad \text{and in general } AB \neq BA.$$

Transpose  Denoted $A^T$; found by interchanging the rows and columns, defined by $[a_{ij}]^T = [a_{ji}]$. It turns a column vector into a row vector, and vice versa.

Adjoint  The adjoint of a square matrix $A$, denoted Adj $A$, can be found by the following sequence of operations on $A$:
1. Find the minor of $A$, denoted $M$. Element $[m_{ij}]$ is found by evaluating the determinant of $A$ with row $i$ and column $j$ deleted.
2. Find the cofactor of $A$, denoted $C$. Element $[c_{ij}] = (-1)^{i+j}[m_{ij}]$.
3. Find the adjoint as the transpose of the cofactor matrix: Adj $A = C^T$.

Inverse  The inverse of a square matrix $A$, denoted $A^{-1}$, is
$$A^{-1} = \frac{\mathrm{Adj}(A)}{\det(A)}$$
If the determinant is zero, the matrix inverse does not exist and the matrix is said to be singular.
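The minor-cofactor-adjoint-inverse sequence is easy to verify numerically. The following is a small illustrative sketch in plain Python (no libraries); it is not part of the original appendix:

```python
# Sketch of the Appendix A procedure: minor -> cofactor -> adjoint -> inverse.
def minor(a, i, j):
    """Matrix a with row i and column j deleted."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(a) if k != i]

def det(a):
    """Determinant by cofactor expansion along the first row."""
    n = len(a)
    if n == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j] * det(minor(a, 0, j)) for j in range(n))

def adjoint(a):
    """Transpose of the cofactor matrix."""
    n = len(a)
    cof = [[(-1) ** (i + j) * det(minor(a, i, j)) for j in range(n)]
           for i in range(n)]
    return [[cof[j][i] for j in range(n)] for i in range(n)]  # transpose

def inverse(a):
    d = det(a)
    if d == 0:
        raise ValueError("matrix is singular")
    return [[x / d for x in row] for row in adjoint(a)]
```

Cofactor expansion is O(n!) and only sensible for the small matrices that appear in hand calculations; numerical software uses LU factorization instead.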

Appendix B Laplace Transform Table

List no. | Laplace domain | Time domain | z-domain

1 | $1$ | $\delta(t)$, unit impulse | $1$
2 | $e^{-kTs}$, $k$ any integer | $\delta(t - kT)$, delayed impulse | $z^{-k}$ (output is 1 if $t = kT$, 0 if $t \neq kT$)
3 | $\dfrac{1}{s}$ | $u(t) = 1$ for $t \geq 0$, unit step | $\dfrac{z}{z - 1}$
4 | $\dfrac{1}{s^2}$ | $t$, unit ramp | $\dfrac{Tz}{(z - 1)^2}$
5 | $\dfrac{1}{s^3}$ | $\dfrac{t^2}{2}$ | $\dfrac{T^2 z(z + 1)}{2(z - 1)^3}$
6 | $\dfrac{1}{s + a}$ | $e^{-at}$ | $\dfrac{z}{z - e^{-aT}}$
7 | $\dfrac{1}{(s + a)^2}$ | $t e^{-at}$ | $\dfrac{T z e^{-aT}}{\left(z - e^{-aT}\right)^2}$
8 | $\dfrac{1}{s(s + a)}$ | $\dfrac{1}{a}\left(1 - e^{-at}\right)$ | $\dfrac{z\left(1 - e^{-aT}\right)}{a(z - 1)\left(z - e^{-aT}\right)}$
9 | $\dfrac{b - a}{(s + a)(s + b)}$ | $e^{-at} - e^{-bt}$ | $\dfrac{z\left(e^{-aT} - e^{-bT}\right)}{\left(z - e^{-aT}\right)\left(z - e^{-bT}\right)}$
10 | $\dfrac{1}{s(s + a)(s + b)}$ | $\dfrac{1}{ab} - \dfrac{e^{-at}}{a(b - a)} - \dfrac{e^{-bt}}{b(a - b)}$ | $\dfrac{1}{ab}\left[\dfrac{z}{z - 1} + \dfrac{bz}{(a - b)\left(z - e^{-aT}\right)} - \dfrac{az}{(a - b)\left(z - e^{-bT}\right)}\right]$
11 | $\dfrac{a^2}{s(s + a)^2}$ | $1 - e^{-at} - at e^{-at}$ | $\dfrac{z\left[\left(1 - e^{-aT} - aTe^{-aT}\right)z + e^{-2aT} - e^{-aT} + aTe^{-aT}\right]}{(z - 1)\left(z - e^{-aT}\right)^2}$
12 | $\dfrac{a}{s^2(s + a)}$ | $\dfrac{1}{a}\left(at - 1 + e^{-at}\right)$ | $\dfrac{z\left[\left(aT - 1 + e^{-aT}\right)z + 1 - e^{-aT} - aTe^{-aT}\right]}{a(z - 1)^2\left(z - e^{-aT}\right)}$
13 | $\dfrac{s}{(s + a)^2}$ | $(1 - at)e^{-at}$ | $\dfrac{z\left[z - e^{-aT}(1 + aT)\right]}{\left(z - e^{-aT}\right)^2}$
14 | $\dfrac{b}{s^2 + b^2}$ | $\sin(bt)$ | $\dfrac{z\sin(bT)}{z^2 - 2z\cos(bT) + 1}$
15 | $\dfrac{s}{s^2 + b^2}$ | $\cos(bt)$ | $\dfrac{z\left(z - \cos(bT)\right)}{z^2 - 2z\cos(bT) + 1}$
16 | $\dfrac{b}{(s + a)^2 + b^2}$ | $e^{-at}\sin(bt)$ | $\dfrac{z e^{-aT}\sin(bT)}{z^2 - 2z e^{-aT}\cos(bT) + e^{-2aT}}$
17 | $\dfrac{s + a}{(s + a)^2 + b^2}$ | $e^{-at}\cos(bt)$ | $\dfrac{z^2 - z e^{-aT}\cos(bT)}{z^2 - 2z e^{-aT}\cos(bT) + e^{-2aT}}$
18 | $\dfrac{a^2 + b^2}{s\left[(s + a)^2 + b^2\right]}$ | $1 - e^{-at}\left[\cos(bt) + \dfrac{a}{b}\sin(bt)\right]$ | $\dfrac{z}{z - 1} - \dfrac{Az + B}{z^2 - 2z e^{-aT}\cos(bT) + e^{-2aT}}$, where $A = 1 - e^{-aT}\cos(bT) - \dfrac{a}{b}e^{-aT}\sin(bT)$ and $B = e^{-2aT} + \dfrac{a}{b}e^{-aT}\sin(bT) - e^{-aT}\cos(bT)$
19 | $\dfrac{\omega_n^2}{s\left(s^2 + 2\zeta\omega_n s + \omega_n^2\right)}$ | If $\zeta < 1$ (underdamped): $1 - \dfrac{1}{\sqrt{1 - \zeta^2}}\,e^{-\zeta\omega_n t}\sin\!\left(\omega_n\sqrt{1 - \zeta^2}\,t + \cos^{-1}\zeta\right)$ |
20 | $\dfrac{\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2}$ | If $\zeta < 1$ (underdamped): $\dfrac{\omega_n}{\sqrt{1 - \zeta^2}}\,e^{-\zeta\omega_n t}\sin\!\left(\omega_n\sqrt{1 - \zeta^2}\,t\right)$ |
21 | $\dfrac{s\,\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2}$ | If $\zeta < 1$ (underdamped): $-\dfrac{\omega_n^2}{\sqrt{1 - \zeta^2}}\,e^{-\zeta\omega_n t}\sin\!\left(\omega_n\sqrt{1 - \zeta^2}\,t - \cos^{-1}\zeta\right)$ |
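Individual table entries can be spot-checked numerically. For example, expanding entry 6, $z/(z - e^{-aT})$, as an impulse response must reproduce the samples $e^{-akT}$ of $e^{-at}$. A small sketch (the values $a = 2$ and $T = 0.1$ are chosen arbitrarily):

```python
# Spot-check of table entry 6: the impulse response of z/(z - e^{-aT})
# should equal the samples e^{-akT} of the continuous signal e^{-at}.
from math import exp

a, T = 2.0, 0.1
p = exp(-a * T)              # discrete pole e^{-aT}

# Y(z)/U(z) = z/(z - p)  =>  y(k) = p*y(k-1) + u(k)
y, samples = [], []
prev = 0.0
for k in range(10):
    u = 1.0 if k == 0 else 0.0   # unit impulse at k = 0
    prev = p * prev + u
    y.append(prev)
    samples.append(exp(-a * k * T))
```

The same recursion-versus-samples comparison works for any of the simple-pole entries in the table.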

Appendix C General Matlab Commands

To receive more information on any command listed below, type:
>> help command_name

Creation of LTI models
ss—Create a state-space model.
zpk—Create a zero/pole/gain model.
tf—Create a transfer function model.
dss—Specify a descriptor state-space model.
filt—Specify a digital filter.
set—Set/modify properties of LTI models.
ltiprops—Detailed help for available LTI properties.

Data extraction
ssdata—Extract state-space matrices.
zpkdata—Extract zero/pole/gain data.
tfdata—Extract numerator(s) and denominator(s).
dssdata—Descriptor version of SSDATA.
get—Access values of LTI model properties.

Model characteristics
class—Model type ("ss," "zpk," or "tf").
size—Input/output dimensions.
isempty—True for empty LTI models.
isct—True for continuous-time models.
isdt—True for discrete-time models.
isproper—True for proper LTI models.
issiso—True for single-input/single-output systems.
isa—Test if LTI model is of given type.

Conversions
ss—Conversion to state space.
zpk—Conversion to zero/pole/gain.
tf—Conversion to transfer function.
c2d—Continuous to discrete conversion.
d2c—Discrete to continuous conversion.
d2d—Resample discrete system or add input delay(s).

Overloaded arithmetic operations
+ and - —Add and subtract LTI systems (parallel connection).
* —Multiplication of LTI systems (series connection).
\ —Left divide: sys1\sys2 means inv(sys1)*sys2.
/ —Right divide: sys1/sys2 means sys1*inv(sys2).
' —Pertransposition.
.' —Transposition of input/output map.
[..]—Horizontal/vertical concatenation of LTI systems.
inv—Inverse of an LTI system.

Model dynamics
pole, eig—System poles.
tzero—System transmission zeros.
pzmap—Pole-zero map.
dcgain—D.C. (low frequency) gain.
norm—Norms of LTI systems.
covar—Covariance of response to white noise.
damp—Natural frequency and damping of system poles.
esort—Sort continuous poles by real part.
dsort—Sort discrete poles by magnitude.
pade—Pade approximation of time delays.

State-space models
rss, drss—Random stable state-space models.
ss2ss—State coordinate transformation.
canon—State-space canonical forms.
ctrb, obsv—Controllability and observability matrices.
gram—Controllability and observability gramians.
ssbal—Diagonal balancing of state-space realizations.
balreal—Gramian-based input/output balancing.
modred—Model state reduction.
minreal—Minimal realization and pole/zero cancellation.
augstate—Augment output by appending states.

Time response
step—Step response.
impulse—Impulse response.
initial—Response of state-space system with given initial state.
lsim—Response to arbitrary inputs.
ltiview—Response analysis GUI.
gensig—Generate input signal for LSIM.
stepfun—Generate unit-step input.

Frequency response
bode—Bode plot of the frequency response.
sigma—Singular value frequency plot.
nyquist—Nyquist plot.
nichols—Nichols chart.
ltiview—Response analysis GUI.
evalfr—Evaluate frequency response at given frequency.
freqresp—Frequency response over a frequency grid.
margin—Gain and phase margins.

System interconnections
append—Group LTI systems by appending inputs and outputs.
parallel—Generalized parallel connection (see also overloaded +).
series—Generalized series connection (see also overloaded *).
feedback—Feedback connection of two systems.
star—Redheffer star product (LFT interconnections).
connect—Derive state-space model from block diagram description.

Classical design tools
rlocus—Evans root locus.
rlocfind—Interactive root locus gain determination.
acker—SISO pole placement.
place—MIMO pole placement.
estim—Form estimator given estimator gain.
reg—Form regulator given state-feedback and estimator gains.

LQG design tools
lqr, dlqr—Linear-quadratic (LQ) state-feedback regulator.
lqry—LQ regulator with output weighting.
lqrd—Discrete LQ regulator for continuous plant.
kalman—Kalman estimator.
kalmd—Discrete Kalman estimator for continuous plant.
lqgreg—Form LQG regulator given LQ gain and Kalman estimator.

Matrix equation solvers
lyap—Solve continuous Lyapunov equations.
dlyap—Solve discrete Lyapunov equations.
care—Solve continuous algebraic Riccati equations.
dare—Solve discrete algebraic Riccati equations.

Demonstrations
ctrldemo—Introduction to the Control System Toolbox.
jetdemo—Classical design of jet transport yaw damper.
diskdemo—Digital design of hard-disk-drive controller.
milldemo—SISO and MIMO LQG control of steel rolling mill.
kalmdemo—Kalman filter design and simulation.
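For readers working outside MATLAB, the scientific Python stack provides close analogs of several of these commands (for example, c2d corresponds to scipy.signal.cont2discrete, and tf/ss to TransferFunction/StateSpace objects). A hedged sketch discretizing $G(s) = 1/(s + 1)$ with a zero-order hold and checking that the pole maps to $e^{-T}$:

```python
# ZOH discretization of G(s) = 1/(s+1) with SciPy, analogous to MATLAB's c2d.
import numpy as np
from scipy.signal import cont2discrete

T = 0.1
num, den = [1.0], [1.0, 1.0]                       # G(s) = 1/(s+1)
numd, dend, dt = cont2discrete((num, den), T, method='zoh')

# Zero-order hold gives Gd(z) = (1 - e^{-T}) / (z - e^{-T})
pole = -dend[1]                                    # denominator is z - pole
gain = float(np.squeeze(numd)[-1])                 # numerator constant term
```

The 'zoh' method matches MATLAB's default for c2d; SciPy also offers 'bilinear' (Tustin) and other mappings discussed in the digital control chapters.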




Answers to Selected Problems

Problem   Answer(s)

1.6   a. analog, b. analog, c. digital, d. digital

2.5   $\dfrac{C(s)}{R(s)} = \dfrac{s^3 + 1}{s(2s^3 + s + 2)}$

2.14   $A = \begin{bmatrix} 0 & 1 & 0 & 0 \\ -1/a & -b/a & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & -1/c & 0 \end{bmatrix}$, $B = \begin{bmatrix} 0 \\ 20/a \\ 0 \\ 1/c \end{bmatrix}$, $C = \begin{bmatrix} 0 & 0 & 1 & 0 \end{bmatrix}$

2.16   $z = 27 + 21(x - 2) + 11(y - 3)$

2.18   $m\ddot{x}_0 + b_1\dot{x}_0 + (k_1 + k_2)x_0 = k_2 x_i$

2.20   $m\ddot{y} + b\dot{y} + (k_1 + k_2)y = b\dot{r} + k_1 r$

3.1   $\tau = 1/20$ s; $f(t) = \frac{1}{4}\left(1 - e^{-20t}\right)$

3.3   $\dfrac{Y(s)}{U(s)} = \dfrac{10}{s^3 + 3s^2 + 4s + 9}$

3.4   $\dfrac{Y(s)}{U(s)} = \dfrac{5s + 1}{s^3 + 5s^2 + 32}$

3.6   $c(t) = \frac{1}{2}\left(2t - 1 + e^{-2t}\right)$

3.7   $y(t) = \frac{1}{10} + \frac{3}{2}e^{-2t} - \frac{8}{5}e^{-5t}$

3.12   $G(s) = \dfrac{2}{5s + 1}$

3.13   $G(s) = \dfrac{36}{s^2 + 4s + 36}$

3.14   Transient response characteristics: overshoot, underdamped. Steady-state output (unit step input): $C_{ss} = 1/2$

3.26   a. 2nd; b. $\zeta < 0.707$, underdamped; c. 0.7 rad/sec; d. 1 rad/sec

3.30   $s = -0.5 \pm 0.866j$, damped oscillations

4.5   $Y_{ss} = (1/10) \cdot 10 = 1$

4.6   8, 40/85, 1 - 40/85

4.7   $e_{ss}$ from R is 0; $e_{ss}$ from D is $1/K$

4.10   $c_{ss} = 10$

4.12   $0 < K < 1386$

4.17   $\tau = 3$

4.20   PM = -55 degrees, GM = -40 dB, unstable

4.21   $GH(s) = \dfrac{237(s + 1)(s + 4)}{s(s + 10)(s + 30)}$; PM = 105 degrees

5.9   $k = 2$; $p = 2$

5.11   $K > 0.82$

5.13   $K = 59$

5.14   P) $e_{ss} = 2/3$; PI) $e_{ss} = 0$

5.21   $K = 14$

5.22   PM = 45 degrees, GM = 20 dB, $K = 10$

5.23   $K_p = 10$, $K_i = 100$, $e_{ss} = 1/100$ for ramp

5.25   $z = 1$; $p = 3.3$

5.33   $K_1 = 4$, $K_2 = 3$

6.9   Normal (rated), maximum (failure), burst

6.11   Current

6.19   False

6.23   Transistor

6.25   False

7.10   1.00, -0.10, 0.01, -0.001, 0.0001, ...

7.11   0, 0.6320, 1.0972, 1.2069, 1.1165, 1.0096, 0.9642, 0.9701, 0.9912, 1.0045

7.12   $x(k) = 2e^{-aT}x(k-1) - e^{-2aT}x(k-2) + Te^{-aT}u(k-1)$

7.14   (a) $\dfrac{Y(s)}{R(s)} = \dfrac{1}{s^2 + 5s + 6}$; (b) $\dfrac{Y(z)}{R(z)} = \dfrac{0.1156z + 0.02134}{z^2 - 0.1851z + 0.006738}$; (c) 0.1156, 0.1583, 0.1655, 0.1665, 0.1666, 0.1667, 0.1667, 0.1667, ...

7.17   $\begin{bmatrix} x_1(k+1) \\ x_2(k+1) \end{bmatrix} = \begin{bmatrix} 2 & -1 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} x_1(k) \\ x_2(k) \end{bmatrix} + \begin{bmatrix} 1 \\ 0 \end{bmatrix}u(k)$; $c(k) = \begin{bmatrix} 1 & 0 \end{bmatrix}\begin{bmatrix} x_1(k) \\ x_2(k) \end{bmatrix}$

7.18   $C(z) = \dfrac{T}{2(z - 1)}R(z)$

7.19   $\dfrac{Y(z)}{R(z)} = \dfrac{0.3}{z - 0.5}$

7.20   $\dfrac{Y(z)}{R(z)} = \dfrac{0.2z^2}{z^2 - 0.5z + 0.3}$

8.5   a) 1.0000, 0.9000, 1.1100, 1.0690, 1.1151; b) FVT: 1.11111

8.7   IVT: $y(0) = 0$; FVT: $y(\infty) = 1$

8.8   $y(\infty) = 1$; $G(z) = \dfrac{0.368z + 0.264}{z^2 - 1.368z + 0.368}$; 0, 0.3679, 1.1353, 2.0498, 3.0183, 4.0067, 5.0025; $y(\infty) = \infty$

8.10   $0 < T < 0.2$

8.13   b) Approaches marginal stability; c) Pole #1, $\zeta = 1$; Pole #2, $0 \le \zeta \le 0.22$

8.18   $G(z) = \dfrac{0.04117z + 0.03372}{z^2 - 1.511z + 0.5488}$; $K = 0.5$

8.20   $G(z) = \dfrac{0.01936z^3 - 0.0070z^2 - 0.00724z + 0.00265}{z^4 - 1.912z^3 + 1.424z^2 - 0.4568z + 0.04979}$; $K = 6$

9.6   $G_c(z) = 0.4551\,\dfrac{z - 0.1353}{z - 0.6065}$

9.7   $G_c(z) = \dfrac{(2 + T)z + T - 2}{(0.2 + T)z + T - 0.2}$

9.9   a) $G_c(z) = \dfrac{9.601z - 2.751}{z - 0.02136}$; b) $G_c(z) = \dfrac{11.97z - 2.763}{z + 0.3158}$

9.10   $K = 6.3$

9.13   $\zeta = 0.69$, type 1 system

9.17   $D(z) = \dfrac{z^2 - 1.287z + 0.4493}{0.1835z^3 + 0.1404z^2 - 0.1835z - 0.1404}$

10.11   Ladder diagrams

10.13   Difference equations

10.15   Absolute

10.22   Saturated region

11.2   False, they are proactive.

11.4   False, the denominator of our system remains the same.

11.8   When all states cannot be measured

11.10   Difference equations

11.20   $a = 0.82$; $b = 0.36$; $\dfrac{C(z)}{R(z)} = \dfrac{0.36}{z - 0.82}$

11.21   $a_1 = 0.11$; $a_2 = 0.05$; $b_1 = 0.7$; $b_2 = 0.24$; $\dfrac{C(z)}{R(z)} = \dfrac{0.7z + 0.24}{z^2 - 0.11z + 0.05}$

12.4   True

12.6   False

12.9   Pressure relief and check valves

12.10   False, the pressure drop becomes heat.

12.11   True

12.13   Critically centered

12.25   a. $D_{piston} = 4$ in., $D_{rod} = 2$ in.; b. $K = 0.224$ gpm/(psi)$^{1/2}$; c. 10 gpm

12.26   a. 6.3 gpm; b. 1890 psi; c. 5883 psi; d. 2200 lbf
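Answers such as 7.14 can be double-checked by iterating the corresponding difference equation. The sketch below (an illustration, not from the text) steps the discrete transfer function of answer 7.14(b) with a unit step input and reproduces the sequence listed in 7.14(c):

```python
# Verify answer 7.14: from Y(z)/R(z) = (0.1156z + 0.02134) /
# (z^2 - 0.1851z + 0.006738), the difference equation is
#   y(k) = 0.1851*y(k-1) - 0.006738*y(k-2) + 0.1156*r(k-1) + 0.02134*r(k-2)
# with a unit step input r(k) = 1 for k >= 0.
def step_sequence(n):
    y = [0.0, 0.0]               # y(k-2), y(k-1) start at zero
    out = []

    def r(k):
        return 1.0 if k >= 0 else 0.0

    for k in range(1, n + 1):
        yk = (0.1851 * y[-1] - 0.006738 * y[-2]
              + 0.1156 * r(k - 1) + 0.02134 * r(k - 2))
        y.append(yk)
        out.append(yk)
    return out
```

The sequence settles near 1/6, the DC gain of the continuous system 1/(s^2 + 5s + 6) in part (a), as it should.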


Index

Absolute encoder, 417-418
Accelerometers, 294
Accumulator, 60, 61-62, 534-535, 553-562
Ackermann's formula, 270
AC motor, 302
Activation function, neural net, 489, 491
Active region:
  in pressure control valves, 503, 515-516
  in transistors, 423
Actuator, 279
  digital, 419-421
  linear analog, 296-298
  pulse width modulation driver, 428-429
  rotary analog, 299-303
  saturation of, 295-296, 386
  stepper motor, 419-421
Adaptive controllers, 458, 470-473
AD converter (see Analog-to-digital converter)
Address bus, 408
Aggregation, fuzzy logic, 484, 486
Airy, G., 4
Algorithm implementation, 414-416
Aliasing, 318-319
Alpha-cut, fuzzy logic, 481
Amplifier, 279, 302-306
  linear electronic, 422-424
  power amplifier, 305-306
  signal amplifier, 303-305
Amplitude, PWM, 428
Analog controller, vs. digital, 6, 314-315
Analog-to-digital converter (AD converter), 325, 344, 403
Angle condition, root locus, 157
Angle contribution, 244

Angle of departure or arrival, 160-161
Approximate derivative controller, 205
  constructing with OpAmps, 281-283
Area ratio:
  in cylinders, 297, 525
  in valves, 517
Armature, 300-301
Arrival angle, 158-160, 354
Artificial nonlinearity, 474
Assembly language, 312, 408
Asymptotes:
  Bode plots, 111
  root locus plots, 159, 354
Asymptotically stable, 475-476
Attenuation, 305, 307
Automobile:
  cruise control, 2-3
  suspension model, 34
Autoregressive, 464
Autotuning, 471-472
Auxiliary equation, 18, 76-77
Axial turbine flow meter, 289-290
Back emf, 43
Backward difference, 366
Backward rectangular rule, 337-338, 366
Ball type, pressure control valve, 499
Band pass filter, 307
Bandwidth frequency, 117
  of electrohydraulic valves, 513
  and sample time, 339
Base, 422-424
Base current, 424-425
Batch processes, system identification, 459-467

Beat frequency, 318
Beta factor, 423
Biasing, 426
Bilinear transform, 316, 322, 338, 369
Bipolar junction transistor (BJT), 422, 426-427
Bisector method, fuzzy logic, 486
Black, H., 5
Bladder accumulator, 534-535
Bleed-off circuit, hydraulic, 539
Block diagrams:
  common blocks, 102-104
  digital components, 312
  operations on, 20-23
  properties of, 19-23
  reduction of, 21, 66-67, 145, 345-347
  steady-state errors, 144-151
  transfer functions, 97-99
  of valve-cylinder system, 286, 529-531, 550
Bode, H., 5
Bode plots, 107-118
  asymptotes, 111
  common factors, 109-113
  design of filters, 307
  design of PID controllers, 226-233
  from experimental data, 121-124
  magnitude plot, 107-108
  parameters of, 117-118
  phase angle plot, 107-108
  PID contributions to, 228-229
  relationship to s-plane, 175-176
  subsystems, 108
  from transfer functions, 114-115
Bond graphs:
  case study, 58-65, 543-552
  causality, 46-49
    assigning, 46-47
  elements of, 46-47
  equations, 50
Branches, 411
Break-away points, 160, 354
Break frequency, 118
Break-in points, 160, 354
Bridge circuit, 514
Brushless DC motor, 301
Bulk modulus, 63-64
Bumpless transfer, 367
Buses, network, 413-414
Bypass flow control, 506-507
Capacitance, hydraulic, 525, 548

Index Case study: hydraulic P/M, accumulator, flywheel, 58–65 hydrostatic transmission control, 553–562 on-off poppet valve controller, 543–552 Cauchy’s principle, 179 Cavitation, 533 Center configuration, 507–510 Central difference, 366 Centralized controller, 313, 413 Central processing unit (CPU), 406–407 Centroid method, fuzzy logic, 486 Characteristic equation (CE), 99, 127, 354 and parameter sensitivity, 435–437 and Routh-Hurwitz stability, 154 Classical control theory, 5, 8 Clock speed, 408 Closed center, spool valve, 508–509, 519–520, 540 Closed loop vs. open loop, 142–143, 344 Closed loop response from open loop Bode plots, 190–192 Coil, 412 Collector, 422–424 Command feedforward, 441–443 Common emitter/collector/base, 424 Compensator, 200 (see also Controller) feedforward, 437–448 series and parallel, 200–201 Compiler, 405 Complex conjugate roots, 78 Compressibility, 297, 525 flows, 515, 547–549 pneumatic, 525 Computer: as a controller, 400 hardware, 401 interfaces, 402–404 software, 405–406 Contamination, 512–513 Continuity, law of, 515 Continuous nonlinearity, 474–475 Contour plots, 208, 223–226, 434–435 Controllability, 268, 454 Control law matrix, 269 Controller: analog vs. digital, 314–315 centralized topography, 313 vs. compensator, 200 digital algorithm implementation, 414–416


587 Damping ratio, 82–87, 352–353 lines of constant, 100, 353 vs. percent overshoot, 86 vs. phase margin, 192 Darlington transistor, 423 Data acquisition boards, 402–404, 557 channels, 403 software issues, 405–406 DC motor, 300–301 brushless, 301 DC tachometer (see Tachometer) Deadband, 508 and cylinder position control, 521 in electrohydraulic valves, 513, 520–521 electronic compensation, 512 eliminator, 522–523 Dead-beat design, 317, 378 guidelines for, 387 Deadhead pressure, 536 Decay ratio (DR), 472 Decibel (dB), 107, 109 Decoupling: engine from road load, 553–555 input/output effects, 449, 451–452 Defuzzification, 483–484, 486 Degree of membership, 480–481 Delay: operator, 324, 531 time, 86 Departure angle, 158–160, 354 Derivative causality, 49 Derivative control action, 204 in Bode plots, 227 with velocity feedback, 206 Derivative time, 234, 368 Detent torque, 420 DeviceNet, 414 Difference equations, 319 determining coefficients of, 459–469 from discrete transfer functions, 325–326, 330–333 implementing, 414–416 from numerical differentiation approximations, 320–321 from numerical integration approximations, 321–323 from PID approximations, 366–368, 530–531 Differential equations: classification, 18 notation, 18 solution to first-order, 76–77


Disturbances:
  effects on output, 144–147, 347–348, 394
  examples of, 143
Dither, 429
DMA (see Direct memory access)
Dominant poles, 86, 104, 191, 192
  and phase-lead controllers, 243
  and sample time, 339
  and tuning, 207, 233–236
Double-ended inputs, 403
Drebbel, C., 4
Drive-by-wire, 553
DSP (see Digital signal processor)
Duty cycle, PWM, 428
Dynamic response of transducers, 288

Effective damping, 204
Eigenvalues, 127, 334
Electrically erasable-programmable ROM (EEPROM), 407
Electrical system:
  electromechanical, 43
  RLC circuit, 37–38, 54–55
Electric solenoid (see Solenoid)
Electrohydraulic (see also Fluid power):
  control systems, 553–562
  valves, 14, 511–514
    comparison of, 513
Electromagnet, 300
Electromotive force, 300
Embedded controller, 407
Emitter, 422–424
Encoder, 417–418
  absolute, 417–418
  incremental, 417
Energy loss control method, 496–497
Equilibrium, 34, 574
Erasable-programmable ROM (EPROM), 407
Error:
  steady-state, 144–151
  of transducers, 288
Error detectors, 279, 282, 312 (see also Summing junction)
Estimator (see Observer)
Ethernet, 414
Euler integration, 128
Evans, W., 5
Expert system, 491–492
Exponentially stable, 475–476

FAM (see Fuzzy association matrix)
Feasible trajectory, 387–388, 442
Feedback linearization, 477–478
Feedforward compensation, 437–448
  adaptive algorithms, 473
  command feedforward, 441–443
  disturbance input rejection, 438–440
Field effect transistor (FET), 422, 426–427
Filter, 307–308
  active, 305, 307
  aliasing effects, 319
  integrated circuit, 308
  passive, 305, 307
Final value theorem (FVT), 100, 348–349
Finite differences, 316
First-order system:
  normalized Bode plot, 111–112
  normalized step response, 81
Flapper nozzle, 512–513
Flash converter, 404
Float regulator, 3
Flow:
  chart, programming, 559–562
  control circuit, 536–540
  control valve, 505–506
  equations, valve, 514–523
  gain, valve, 520
  meter, 289–290
  metering characteristics, valve, 519–520
  paths, control valve, 514
Fluid power:
  circuits, 534–542
    bleed-off, 539
    hi-lo, 537–538
    meter-in/out, 538–539
    regenerative, 540
  components, 495
    actuators, 296–300, 524
    control valves, 496–523
  strategies, 533–534
    power management, 540–543
Flyback diode, 427, 430–431
Forgetting factor, 469
Forward difference, 366
Forward rectangular rule, 337–338, 366
Frequency:
  PWM, 428
  to voltage converter, 559
Frequency response (see Bode plots)
Full duplex, 313–314
Full state observer, 454
Fuzzification, 483

Fuzzy association matrix (FAM), 484, 487
Fuzzy logic, 478–489
  crisp vs. fuzzy, 480
  membership function, 480
Fuzzy sets, 480
Gain margin, 118, 174–178, 258, 261
  in controller design, 228–233
Gain scheduling, 470–471
Genetic algorithm, 491
Global stability, 153, 474–476
Gray code, 417–418
Guided poppet, pressure control valve, 498–499
Half duplex, 313–314
Half-step, 420
Hall effect transducer, 294–295, 418
Hamilton, W., 4
Hardware in the loop, 405
Hazen, H., 5
Hedging, fuzzy logic, 481
Height, fuzzy logic, 481
Heuristic approach, 479
Higher-level buses, 413–414
High frequency asymptote, 111
High pass filter, 307
Hi-lo circuit, hydraulic, 537–538
Hold, 325
Holding torque, 420
HST (see Hydrostatic transmission)
Hurwitz, A., 4
Hybrid vehicle, 553–562
Hydraulic (see also Fluid power):
  accumulator, 60, 61–62
  position control system, 285–287
  pump/motor, 58, 299
Hydraulic system:
  case study, 58–65
  industrial vs. mobile, 495–496
  modeling, 38–39
Hydropneumatic accumulator, 61–62
Hydrostatic transmission (HST), control of, 553–562
Hysteresis, 474–475
  in electrohydraulic valves, 513, 520
  in pressure control valves, 500
Idle time, 534
IGBT (see Insulated-gate bipolar transistor)
Impedance, 403, 426

Implication, fuzzy logic, 485–486
Impulse:
  function, 324, 332
  input, 47, 89, 146, 205
Impulse invariant method, 316
Increasing phase systems, 177–178
Incremental encoder, 417
Incremental PI algorithm, 367–368
Induction motor, 302
Inductive loads, 422
  with flyback diodes, 427, 430–431
Initial conditions, 78
Initial value theorem (IVT), 100, 348–349
Input contacts, 411
Input impedance, 303, 307
Inputs (see Impulse; Ramp; Step, input)
Instruction set, 408
Insulated-gate bipolar transistor (IGBT), 422, 426–427, 429
Integral:
  causality, 49
  control action, 203–204
    in Bode plots, 227
  reset switch, 281
  time, 234, 368
  windup, 203–204, 281, 368
Integrated circuit, filter, 308
Integrating amplifier, 304
Interconnection, neural net, 489, 491
Interfaces with computers, 401–402
Interrupt program control, 406, 413
Intersample oscillations, 388
Inverting amplifier, 304
I-PD control, 205, 368–369
Isolation (see Protection)
Jet pipe, 512
Jump resonance, 474
Kalman:
  controller, 388
  filter, 449, 456–457
Kirchhoff’s laws, 37, 515–517
Ladder logic, 409, 411–413
Lag:
  in digital systems, 377
  system transfer function, 104, 108
Lag-lead controllers, design of, 248–253 (see also Phase-lag/lead controllers)
Lagrange, J., 5
Laminar flow, 289

Lands, in control valves, 507–509
Laplace transforms, 87–97
  of digital impulse function, 324
  partial fraction expansion, 90–97
  solution steps, 88
  table of common pairs, 89
Largest of maximum (LOM), fuzzy logic, 486–487
Laser, 292
Lead system, 108
Leakage, 520
Least squares system identification, 459–470
  batch processing, 459–466
  recursive algorithms, 467–470
  weighting matrix, 467
Left-hand plane (LHP), 100
Leibniz, G., 4
Limit cycles, 474
Limit switch, 291, 418
Linearization, 26–31
Linear quadratic regulator (LQR), 449, 456–457
Linear superposition, principle of, 144, 474
Linear variable displacement transducer (see LVDT)
Linguistic variables, fuzzy logic, 484
Liquid level system, 41–43, 57–58
Load sensing:
  pressure-compensated (PCLS), 540–543
  relief valve (LSRV), 542
Local stability, 474–476
LOM (see Largest of maximum)
Lookup tables, 442
Loss function, 459
Lower-level buses, 413–414
Low frequency asymptote, 111
Low pass filter, 307, 430
LQR (see Linear quadratic regulator)
LU decomposition, 469
LVDT, 291
Lyapunov:
  equations, 456
  direct vs. indirect, 476
  function, 476
  methods for nonlinear systems, 475–476
  stability, 476
Lyapunov, A., 5
Machine code, 312
Magnetic pickup, 3, 12, 294, 418–419, 557

Magnetostrictive, 291
Magnitude condition, root locus, 157
Marginal stability, 153–154, 531–532
Matlab commands (introduction of):
  axis, 448
  bode, 186
  conv, 252
  ctrb, 270
  c2d, 328
  det, 270
  dlqr, 457
  eig, 131
  feedback, 220
  figure, 186
  gensig, 448
  hold, 225
  imag, 436
  kalman, 457
  linspace, 436
  lsim, 252
  margin, 186–187
  nyquist, 186–187
  place, 270
  plot, 436
  pole, 436
  rank, 270
  real, 436
  residue, 92
  rlocfind, 186–187, 216
  rlocus, 174
  rltool, 216, 384
  roots, 131
  sgrid, 221–222
  step, 220
  subplot, 217
  tf, 131
  tf2ss, 131
  zgrid, 359
Maximum overshoot (see Percent overshoot)
Maxwell, J., 4
Mechanical controllers, 284–286
  feedback wire in servovalve, 513
Mechanical-rotational system, 36, 52–53
Mechanical-translational system, 32–36, 50–51
Membership function, 480–483
  examples of, 482
Metal oxide semiconductor field effect transistor (MOSFET), 426–427, 429–431
Metering, 516

Meter-in/out circuit, hydraulic, 538–539
MIAC (see Model identification adaptive controller)
Microcontroller, 10, 400, 406–408
Microprocessor, 312, 406, 409
  for fuzzy logic, 487
Microstep, 420
Middle of maximum (MOM), fuzzy logic, 486–487
MIMO (see Multivariable controllers)
Minimum phase system, 124
Minorsky, N., 5
Model identification adaptive controller (MIAC), 473
Modeling:
  bond graphs (power flow), 45–65
  directional control valves, 514–523
    and cylinders, 524–533
    dynamics of, 523
  effects of errors, 434–437, 446
  energy methods, 44–45
  Newtonian physics, 32–40
  relationships between systems, 33
  valve-controlled cylinder, 524–532
    block diagram, 528–530
Model reference adaptive controller (MRAC), 472–473
Modern control theory, 6, 8
MOM (see Middle of maximum, fuzzy logic)
MOSFET (see Metal oxide semiconductor field effect transistor)
Motor:
  AC, 302
  DC, 300–301
  hydraulic, 299–300
  stepper motor, 302, 367, 419–421
    vs. DC motor, 419
    drivers, 403, 419
    parameters, 421
    permanent magnet, 419
    variable reluctance, 419–420
  torque, 512–513
MRAC (see Model reference adaptive controller)
Multiple input, multiple output (see Multivariable controllers)
Multiple valued, nonlinearity, 474–475
Multiplexed, 403, 404
Multivariable controllers, 448–457
  observers, 454–456
  using state-space equations, 453–457

[Multivariable controllers]
  using transfer functions, 449–453
Natural frequency, 82–87, 191–192, 352–353
  and sample time, 339
Natural nonlinearity, 474
Neural net, 489, 491
Neurons, 489
Newton, I., 4
Newton’s law of motion, 32
Noise amplification:
  with analog amplifiers, 306
  with derivative controllers, 204
Noise immunity, 312
Noninverting amplifier, 304
Nonlinear:
  controllers, 476–492
  systems, 474–476
Nonlinearities, characteristics, 474–475
Non-minimum phase system, 124
Nonparametric models (see System identification)
npn transistor, 423–424
Numerical approximations:
  of differentials, 320
  of integrals, 321
  of PID controller, 366–369
Numerical integration:
  Euler method, 128
  Runge-Kutta method, 128
Numerical simulations, 475
Nyquist frequency criterion, 318
Nyquist, H., 5
Nyquist plots, 118–121, 178–181
Observability, 454
Observer, 434, 454–456
Octave, 111
On-site tuning methods:
  analog PID controllers, 233–236
OpAmp:
  comparator, 304
  control action circuits, 281–282
  inverting amplifier, 304
  signal amplifier, 303–305
Open center, spool valve, 508, 519–520, 540
Open loop vs. closed loop, 142–143, 344
Operating envelope, valve-cylinder system, 526
Operational amplifiers (see OpAmp)
Optical encoder (see Encoder)

Optimal control, 449, 453, 456–457
Optoisolator, 305, 422
Orifice:
  equations, 516
  matched, 516
  in pilot-operated pressure control valve, 501
  in servovalve, 513
Overdamped, 83
Over lapped (see Closed center, spool valve)
Overrunning load, 519, 526, 539
Pade approximation, 458
Parallel compensation, 200–201
Parallel port, 401–402
Parameter sensitivity, 434–437, 446
Parametric models (see System identification)
Partial fraction expansion, 90–97
PCI (see Peripheral component interconnect)
PCLS (see Load sensing)
PCMCIA (see Personal computer memory card international association)
PC-104, 414
PD control action, 203, 228, 255
  with approximate derivative, 283
  implementation with OpAmps, 282
Peak time, 85
Percent overshoot, 86
Perceptron learning rule, 489, 491
Performance:
  curve, valve-cylinder system, 526
  index, 449
  specifications, 80–87
Period, 317
  with PWM, 428
Peripheral component interconnect (PCI), 402
Permanent magnet:
  DC motor, 300
  stepper motor, 419
Perron-Frobenius (P-F), 453
Personal computer memory card international association (PCMCIA), 401–402
PFM (see Pulse frequency modulation)
Phase-lag controller:
  digital from analog conversion, 372–376
  implementation with OpAmps, 282
  root locus design steps, 238

Phase-lag/lead controllers, 236–257
  Bode plot design method, 254–267
  comparison with PID, 237, 256
  implementation with OpAmps, 282
  root locus design method, 249
Phase-lead controller:
  digital from analog conversion, 372–376
  implementation with OpAmps, 282
  root locus design steps, 243–244
Phase margin, 118, 174–178, 258, 261
  in controller design, 228–233
  vs. damping ratio, 192
PI control action, 203, 228, 254
  implementation with OpAmps, 282
  incremental difference equation algorithm, 367–368, 530
PI-D control, 206
PID controller, 201–206, 226–236
  approximation with difference equations, 366–369
  comparison with phase-lag/lead, 237, 256
  digital from continuous conversion, 369–370
  frequency response design of, 226–233
  implementation with OpAmps, 282
  root locus design of, 207–226
  transfer function of, 202
Piezoelectric, 287–288, 294, 298–299
Pilot-operated pressure control valve, 500–501
Piston accumulator, 534–535
Plant, 17, 141, 442
PLC (see Programmable logic controller)
Pneumatic, 525
pnp transistor, 423–424
Polar plots (see Nyquist plots)
Pole, 97, 100, 127
Pole-zero cancellation, 212, 240, 387
Pole-zero matching, 316, 369, 372
Pole-zero placement:
  from performance goals, 100
  with phase-lead controllers, 244
  with PID controllers, 207, 213–215
  with state-space controllers, 267–271, 453
Position PI algorithm, 367
Positive definite function, 476
Potentiometer:
  linear, 291
  rotary, 293, 557
Power:
  amplifier, analog, 305–306
  management, fluid power, 540–543

[Power]
  maximum, valve-cylinder system, 527
PQ (see Pressure-flow)
Pre-compute, 442
Pressure:
  intensification, 539
  metering characteristics, valve, 520–521
  minimum, valve-cylinder system, 528
  override, 503
  reducing valve, 498, 505, 537
  sensitivity, 521
  transducer, 287–289
Pressure compensation:
  in control valves, 505–506
  load sensing, 540–543
  in pumps, 534, 536, 540–542
Pressure control:
  in circuits, 533–540
  valve, 497–505
    characteristics, 503–504
    counterbalance, 504
    poppet vs. spool, 502–503
    reducing, 498, 505
    relief, 498–504
    sequence, 504
    unloading, 503–504
Pressure-flow (PQ):
  equations, 514–518
  metering characteristics, 518–519
Priority valve, 504
Proactive vs. reactive compensation, 437
Program control, 406, 411, 413
  flow charts, 559–562
Programmable logic controller (PLC), 13, 315
  vs. computer, 400
  history of, 409
Proportional control action, 203
  in Bode plots, 227
  with hydraulic system, 285
Protection:
  of microprocessors, 400
  optical-isolator, 422
  over-voltage, 403
Pull in/out rate, 420–421
Pull in/out torque, 420–421
Pull-up resistor, 429
Pulse frequency modulation (PFM), 431
Pulse width modulation (PWM):
  approximate analog output, 430
  creating, 418
  description of, 427–429

[Pulse width modulation (PWM)]
  implementation, 429–431
  outputs, 403
Pump:
  pressure-compensated, 534–536
  unloading of, 510
  variable displacement, 536
Pump/motor (P/M), 58
PWM (see Pulse width modulation)
QR decomposition, 469
Quadratic equation, 77
Quadratic optimal regulator, 267
Quantization, 343, 400, 530
  errors, 404–405
Rail, 303
RAM (see Random access memory)
Ramp:
  function, valves, 15, 512
  input, 89, 149, 202
  response, 252–253, 390–391
Random access memory, 407
Range, of transducers, 288
Rank, of controllability matrix, 268
Read-only memory (ROM), 407
Realizable, 378, 387
Recursive least squares algorithm (RLS), 468–469
  in model identification adaptive controller, 473
Recursive solution, system identification algorithm, 467–470 (see also Difference equations)
Reduced-order state observer, 454–455
Reference model, 472–473
Regenerative:
  hydraulic circuit, 540
  vehicle braking, 553–562
Relative stability, 154
Relay, 305
  physical vs. simulated in PLC, 409, 412
  solid-state vs. mechanical, 421
Reliability, 400
Resistance-temperature detector (see RTD)
Resolution, 403–404, 416, 417
  bits of, 404–405
  of stepper motor, 419–420
Resolver, rotary, 293
Riccati equations, 456
Right-hand plane (RHP), 100

Rise time, 85
Robust systems, hydraulic, 533
ROM (see Read-only memory)
Root locus:
  angle and magnitude condition, 157
  design of digital controllers, 377–386
  design of phase-lag controllers, 238
  design of phase-lead controllers, 243
  design of PID controllers, 207–226
  examples of, 162–173
  guidelines for construction of:
    in the s-plane, 158
    in the z-plane, 354
  parameter sensitivity, 434–437
  stability regions:
    in the s-plane, 100
    in the z-plane, 353
Rosenbrock approach, 452
Rotor, 43, 301–302, 419–420
Routh, E., 4
Routh-Hurwitz stability criterion, 154–156
RTD, 295
Rule-based inferences, fuzzy logic, 484
Runge-Kutta integration, 128
Safety valve, pressure, 498
Sample and hold (see ZOH)
Sample period, 317, 325
Sampler, 344–347
Samples per second, 403
Sample time:
  effects of, 317–319, 532
  guidelines for choosing, 338–339
Saturation:
  with derivative controllers, 204–206
  and limits with digital controllers, 386
  nonlinearity, 474–475
  with OpAmps, 281
  with transistor, 423–424, 427, 429
SCADA (see Supervisory control and data acquisition)
Scan time, 414–415
Schmitt trigger, 416, 418–419
Scope, fuzzy logic, 481
SCR (see Silicon-controlled rectifier)
Second-order system:
  normalized Bode plot, 113–114
  normalized step response, 85
Self-tuning, adaptive, 47
Sensing piston, 501–502
Sensor vs. transducer, 287 (see also Transducer)

Sequence valve, 504
Serial port, 401–402
Series compensation, 200–201
Servo, 2
Servomotor, 301
Servovalve, 508, 512–513
  vs. proportional valve, 513, 522
Set-point-kick, 205, 369
Settling time, 85
Shift operator, 324
Shuttle valves, 541–542
Sigmoid, 489, 491
Signal-to-noise ratio, 306
Silicon-controlled rectifier (SCR), 422
Simulink blocks (introduction of):
  constant, 550
  dead zone, 530
  discrete transfer function, 531
  fcn (function), 531
  gain, 530
  integrator, 530
  multiplication, 550
  mux (multiplex), 530
  PID controller, 530
  rate limiter, 550
  saturation, 530
  scope, 530
  signal generator, 529–530
  sqrt (square root), 550
  step input, 550
  summing junction, 529–530
  transfer function, 530
  unit delay (z^-1), 531
  to workspace, 550
  zero order hold (ZOH), 531
Simultaneous equations, solving, 460–463
Single-ended inputs, 403
Single phase motor, 302
Singletons, 481
Single valued, nonlinearity, 474–475
Slew:
  range, 420–421
  speed, 518, 526, 551
Sliding mode control, 477
Smallest of maximum (SOM), fuzzy logic, 486–487
Software, 405–406
Solenoid, 12, 298, 557
Solid-state switch (see Transistor)
s-plane, 87
  relationship to Bode plots, 175–176
  stability in, 100–101

[s-plane]
  time response equivalent, 101
Spool, 508
Stability:
  of adaptive controllers, 470
  adding with PD controllers, 219, 232
  adding with phase-lead controllers, 244
  of feedback systems, 153–154
  in the frequency domain, 174–179
  local vs. global, 474–476
  Lyapunov, 475–476
  of nonlinear systems, 474
  with parameter variance, 434–437
  Routh-Hurwitz criterion, 154–156
  vs. sample time, 532
  in the s-plane, 100–101
  of transducers, 288
  in the z-plane, 351–353, 531–532
State-space controller:
  disturbance rejection, 453
  for multivariable systems, 453–457
  pole-placement design, 267–271
State-space equations:
  from bond graphs, 49–58, 61–62, 547–548
  discrete systems, 334–338
  eigenvalues, 127, 334
  matrix notation, 24
  representation of, 23
  solutions to, 127–129
  from transfer functions, 129–131
  to transfer functions, 125–126
Stator, 301–302, 419–420
Steady-state errors:
  solving for, 144–151, 348–351
Steady-state gain, 118
Step, input, 80, 89, 101, 149, 205
Step input response, 80–87
  first-order systems, 80–82
  second-order systems, 83–87
    characteristics, 85–86
Stepper motor, 302, 367, 419–421
  vs. DC motor, 419
  drivers, 403, 419
  parameters, 421
  permanent magnet, 419
  variable reluctance, 419–420
Stodola, A., 4
Strain gage, 287–288, 294, 295
Successive approximation, 404
Summing amplifier, 304
Summing junction:
  in block diagrams, 2, 19–20, 22, 202

[Summing junction]
  implementation with OpAmps, 282, 304
  in mechanical controllers, 285
Superposition, principle of, 144, 474
Supervisory control and data acquisition (SCADA), 313
Suspension system, vehicle, 34
Symmetrical, valve, 516
Synchronous motor, 302
System identification:
  adaptive controller, 473
  nonparametric models, 458
    using Bode plots, 121–124
    using step response plots, 80–86
  parametric models, 458
    using difference equations, 460, 464–465
    using input-output data, 458–470
System type number:
  and block diagrams, 148–151
  and Bode plots, 152
Tachometer, 294
Tandem center valve, 510, 534–535, 538–540
Temperature compensation, 306, 506
Temperature transducer, 295
Thermal system, 40–41, 55–56
Thermistor, 295
Thermocouple, 295
Three-phase motor, 302
Time constant, 80–82
  and sample time, 338
Timer, 411–412
Torque motor, 512–513
Tracking performance, 437, 441–443
Transducer, 279
  digital, 416–418
  flow, 289–290
    with digital IO, 418
  important characteristics, 288
  linear analog, 290–292
  optical, 417
  pressure, 287–289
  rotary, 293–294, 417–418
  vs. sensor, 287
  temperature, 295
Transfer functions, 97–104
  characteristic equation (CE), 99
  common forms, 102–104
  definition, 98
  discrete, 325–326

[Transfer functions]
  from state-space matrices, 125–126
  to state-space matrices, 129–131
Transfer rates, 402–403
Transient response characteristics:
  of first-order systems, 80–82
  of second-order systems, 83–87
  and s-plane locations, 101, 152–153
  and z-plane locations, 352–353
Transistor, 305, 421–431
  beta factor, 423
  characteristics, 427
  operating regions, 423–424
  power dissipation, 424, 427
Trapezoidal approximation (see also Bilinear transform; Tustin’s method), 322, 366
Turbulent flow, 289
Tustin’s method, 316, 322, 369–370
Two input, two output (TITO) controller, 449–453 (see also Multivariable controllers)
Ultimate cycle, 235
Underdamped, 83
Under lapped (see Open center, spool valve)
Universal serial bus (USB), 402
Unloading valve, 503–504
User routine, 411
Vacuum tube, 422
Valve:
  characteristics, 496–497, 507–508
    flow metering, 519–520
    PQ metering, 518–519
    pressure metering, 520–521
  coefficients, 514–517, 520
  as control action device, 285
  deadband characteristics, 522–523
  electrohydraulic, 14, 511–514
  linearized model of, 31
Variable reluctance:
  proximity sensor, 418
  stepper motor, 419–420, 535
Velocity PI algorithm, 367–368
Voltage regulation, 559
Watt, J., 5
Weighted least squares, system identification, 467

Words, 408

Zero lapped (see Critical center, spool valve)
Zero-order hold (see ZOH)
Ziegler-Nichols tuning, 233–236
  with digital controllers, 371

[Ziegler-Nichols tuning]
  step response parameters, 235
  ultimate cycle parameters, 236
ZOH:
  development of, 325–326
  location of in model, 344–345
z transform:
  development of, 323–327
