Chemical Process Control: A First Course with MATLAB, P.C. Chau (2001)

Partial preview of the text

P.C. Chau © 2001

Table of Contents

Preface

1. Introduction

2. Mathematical Preliminaries
   2.1 A simple differential equation model
   2.2 Laplace transform
   2.3 Laplace transforms common to control problems
   2.4 Initial and final value theorems
   2.5 Partial fraction expansion
       2.5.1 Case 1: p(s) has distinct, real roots
       2.5.2 Case 2: p(s) has complex roots
       2.5.3 Case 3: p(s) has repeated roots
   2.6 Transfer function, pole, and zero
   2.7 Summary of pole characteristics
   2.8 Two transient model examples
       2.8.1 A Transient Response Example
       2.8.2 A stirred tank heater
   2.9 Linearization of nonlinear equations
   2.10 Block diagram reduction
   Review Problems

3. Dynamic Response
   3.1 First order differential equation models
       3.1.1 Step response of a first order model
       3.1.2 Impulse response of a first order model
       3.1.3 Integrating process
   3.2 Second order differential equation models
       3.2.1 Step response time domain solutions
       3.2.2 Time-domain features of underdamped step response
   3.3 Processes with dead time
   3.4 Higher order processes and approximations
       3.4.1 Simple tanks-in-series
       3.4.2 Approximation with lower order functions with dead time
       3.4.3 Interacting tanks-in-series
   3.5 Effect of zeros in time response
       3.5.1 Lead-lag element
       3.5.2 Transfer functions in parallel
   Review Problems

4. State Space Representation
   4.1 State space models
   4.2 Relation with transfer function models
   4.3 Properties of state space models
       4.3.1 Time-domain solution
       4.3.2 Controllable canonical form
       4.3.3 Diagonal canonical form
   Review Problems

5. Analysis of PID Control Systems
   5.1 PID controllers
       5.1.1 Proportional control
       5.1.2 Proportional-Integral (PI) control
       5.1.3 Proportional-Derivative (PD) control
       5.1.4 Proportional-Integral-Derivative (PID) control
   5.2 Closed-loop transfer functions
       5.2.1 Closed-loop transfer functions and characteristic polynomials
       5.2.2 How do we choose the controlled and manipulated variables?
       5.2.3 Synthesis of a single-loop feedback system
   5.3 Closed-loop system response
   5.4 Selection and action of controllers
       5.4.1 Brief comments on the choice of controllers
   Review Problems

6. Design and Tuning of Single-Loop Control Systems
   6.1 Tuning controllers with empirical relations
       6.1.1 Controller settings based on process reaction curve
       6.1.2 Minimum error integral criteria
       6.1.3 Ziegler-Nichols ultimate-cycle method
   6.2 Direct synthesis and internal model control
       6.2.1 Direct synthesis
       6.2.2 Pole-zero cancellation
       6.2.3 Internal model control (IMC)
   Review Problems

7. Stability of Closed-loop Systems
   7.1 Definition of Stability
   7.2 The Routh-Hurwitz Criterion
   7.3 Direct Substitution Analysis
   7.4 Root Locus Analysis
   7.5 Root Locus Design
   7.6 A final remark on root locus plots
   Review Problems

8. Frequency Response Analysis
   8.1 Magnitude and Phase Lag
       8.1.1 The general analysis
       8.1.2 Some important properties
   8.2 Graphical analysis tools
       8.2.1 Magnitude and Phase Plots
       8.2.2 Polar Coordinate Plots
       8.2.3 Magnitude vs Phase Plot
   8.3 Stability Analysis
       8.3.1 Nyquist Stability criterion
       8.3.2 Gain and Phase Margins
   8.4 Controller Design
       8.4.1 How do we calculate proportional gain without trial-and-error?
       8.4.2 A final word: Can frequency response methods replace root locus?
   Review Problems

9. Design of State Space Systems
   9.1 Controllability and Observability
       9.1.1 Controllability
       9.1.2 Observability
   9.2 Pole Placement Design
       9.2.1 Pole placement and Ackermann's formula
       9.2.2 Servo systems
       9.2.3 Servo systems with integral control
   9.3 State Estimation Design
       9.3.1 State estimator
       9.3.2 Full-order state estimator system
       9.3.3 Estimator design
       9.3.4 Reduced-order estimator
   Review Problems

Preface

This text follows the practice of Stephanopoulos (1984) of collecting major homework problems at the back and not at the end of each chapter. Our aim is to emphasize the need to understand and integrate knowledge, a virtue that is endearing to ABET, the engineering accreditation body in the United States. These problems do not even specify the associated chapter, as many of them involve different techniques. A student has to determine the appropriate route of attack. An instructor may find it aggravating to assign individual parts of a problem, but when all the parts are solved, we hope the exercise provides a better perspective on how different ideas are integrated.

To be an effective teaching tool, this text is intended for experienced instructors who may have a wealth of their own examples and material but no interest in writing an introductory text. The concise coverage conveniently provides a vehicle with which they can take a basic, minimalist set of chapters and add supplementary material that they deem appropriate. Even without supplementary material, however, this text contains the most crucial material, and there should not be a need for an additional expensive, formal text.

While the intended teaching style relies heavily on the use of MATLAB, the presentation is very different from texts which prepare elaborate M-files and even menu-driven interfaces. One of the reasons why MATLAB is such a great tool is that it does not have a steep learning curve. Students can quickly experiment on their own. Spoon-feeding with our misguided intention would only destroy the incentive to explore and learn on one's own. To counter this pitfall, strong emphasis is placed on what one can accomplish easily with only a few MATLAB statements. MATLAB is introduced as walk-through tutorials that encourage students to enter commands on their own. As strong advocates of active learning, we do not duplicate MATLAB results. Students, again, are encouraged to execute the commands themselves. In case help is needed, our Web Support has the complete set of MATLAB results and plots. This organization provides a more coherent discourse on how one can make use of different features of MATLAB, not to mention saving significant printing costs.
Finally, we can revise the tutorials easily to keep up with the continual upgrade of MATLAB. At this writing, the tutorials are based on MATLAB version 5.3 and the object-oriented functions in the Control Toolbox version 4.2. Simulink version 3.0 is also utilized, but its scope is limited to simulating more complex control systems.

As a first course text, the development of models is limited to stirred tanks, a stirred-tank heater, and a few other examples that are used extensively and repeatedly throughout the chapters. Our philosophy is one step back in time. The focus is the theory and the building of a foundation that may help to solve other problems. The design is also to be able to launch into the topic of tuning controllers before students may lose interest. The coverage of Laplace transform is not entirely a concession to remedial mathematics. The examples are tuned to illustrate immediately how pole positions may relate to time domain response. Furthermore, students tend to be confused by the many different design methods. As much as I can, especially in the controller design chapters, the same examples are used throughout. The goal is to help a student understand how the same problem can be solved by different techniques.

We have given up the pretense that we can cover controller design and still have time to do all the plots manually. We rely on MATLAB to construct the plots. For example, we take a unique approach to root locus plots. We do not ignore them as some texts do, but we also do not go into the hand-sketching details. The same can be said of frequency response analysis. On the whole, we use root locus and Bode plots as computational and pedagogical tools in ways that help to understand the choice of different controller designs. Exercises that may help such thinking are in the MATLAB tutorials and homework problems.

Finally, I have to thank Costas Pozrikidis and Florence Padgett for encouragement and support on this project, Raymond de Callafon for revising the chapters on state space models, and Allan Cruz for proofreading. Last but not least, Henry Lim combed through the manuscript and made numerous insightful comments. His wisdom is sprinkled throughout the text.

Web Support (MATLAB outputs of text examples and MATLAB sessions, references, and supplementary notes) is available at the CENG 120 homepage. Go to http://courses.ucsd.edu and find CENG 120.

❖ 1. Introduction

Control systems are tightly intertwined in our daily lives, so much so that we take them for granted. They may be as low-tech and unglamorous as our flush toilet, or they may be as high-tech as electronic injection in our cars. In fact, there is more than a handful of computer control systems in a typical car that we now drive. For everything from the engine to the transmission, shock absorbers, brakes, pollutant emission, and temperature, there is an embedded microprocessor controller keeping an eye out for us. The more gadgetry, the more tiny controllers pulling the trick behind our backs.1 At the lower end of consumer electronic devices, we can bet on finding at least one embedded microcontroller. In the processing industry, controllers play a crucial role in keeping our plants running—virtually everything from simply filling up a storage tank to complex separation processes and chemical reactors. As an illustration, let us take a look at a bioreactor (Fig. 1.1).
To find out if the bioreactor is operating properly, we monitor variables such as temperature, pH, dissolved oxygen, liquid level, feed flow rate, and the rotation speed of the impeller. In some operations, we may also measure the biomass and the concentration of a specific chemical component in the liquid, or the composition of the gas effluent. In addition, we may need to monitor the foam head and make sure it does not become too high. We most likely need to monitor the steam flow and pressure during the sterilization cycles. We should note that the schematic diagram is far from complete. By the time we have added enough details to implement all the controls, we may not recognize the bioreactor. We certainly do not want to scare you with that. On the other hand, this is what makes control such a stimulating and challenging field.

1 In the 1999 Mercedes-Benz S-Class sedan, there are about 40 "electronic control units" that control up to 170 different variables.

Figure 1.1. Schematic diagram of instrumentation associated with a fermentor. The steam sterilization system and all sensors and transmitters are omitted for clarity. Solid lines represent process streams. Hairlines represent information flow. [Figure labels: medium feed, cooling water, acid, base, anti-foam, air sparge, off gas, product, impeller; measurements (pH, temperature, liquid level, off-gas analysis, etc.), control algorithm, and performance specifications.]

Figure 1.2. A block diagram representation of a single-input single-output negative feedback system. Labels within the boxes are general. Labels outside the boxes apply to the simplified pH control discussion.

For each quantity that we want to maintain at some value, we need to ensure that the bioreactor is operating at the desired conditions. Let's use the pH as an example. In control calculations, we commonly use a block diagram to represent the problem (Fig. 1.2). We will learn how to use mathematics to describe each of the blocks. For now, the focus is on some common terminology. To consider pH as a controlled variable, we use a pH electrode to measure its value and, with a transmitter, send the signal to a controller, which can be a little black box or a computer. The controller takes in the pH value and compares it with the desired pH, what we call the set point or reference. If the values are not the same, there is an error, and the controller makes proper adjustments by manipulating the acid or the base pump—the actuator.2 The adjustment is based on calculations using a control algorithm, also called the control law. The error is calculated at the summing point, where we take the desired pH minus the measured pH. Because of how we calculate the error, this is a negative feedback mechanism. This simple pH control scenario is what we call a single-input single-output (SISO) system; the single input is the set point and the output is the pH value.3 This simple feedback mechanism is also what we call a closed loop. This single-loop system ignores the fact that the dynamics of the bioreactor involves complex interactions among different variables. If we want to take a more comprehensive view, we will need to design a multiple-input multiple-output (MIMO), or multivariable, system. When we invoke the term system, we are referring to the process4 (the bioreactor here), the controller, and all other instrumentation such as sensors, transmitters, and actuators (like valves and pumps) that enable us to control the pH.
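To make the closed-loop idea concrete, here is a minimal MATLAB sketch of a SISO negative feedback loop. The numbers are hypothetical (a first order process with a proportional controller, not a model of the pH loop above), and it assumes the Control System Toolbox functions tf, feedback, and step.

    % A minimal SISO negative feedback loop (hypothetical numbers).
    % Assumes the Control System Toolbox (tf, feedback, step).
    Gp = tf(0.8, [5 1]);      % process: first order, gain 0.8, time constant 5
    Gm = tf(1, [0.2 1]);      % measurement (transducer) dynamics
    Gc = 3;                   % proportional controller, Kc = 3
    Ga = 1;                   % actuator taken as a unit gain

    % Close the loop: error = set point minus measured output
    Gcl = feedback(Gc*Ga*Gp, Gm);   % closed-loop transfer function, output over set point
    step(Gcl)                       % response to a unit step change in the set point

The point of the sketch is only that a block diagram translates directly into a few statements; the choice of blocks and numbers here is purely illustrative.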
When we change a specific operating condition, meaning the set point, we would like, for example, the pH of the bioreactor to follow our command. This is what we call servo control. The pH value of the bioreactor is subjected to external disturbances (also called load changes), and the task of suppressing or rejecting the effects of disturbances is called regulatory control. Implementation of a controller may lead to instability, and the issue of system stability is a major concern. The control system also has to be robust such that it is not overly sensitive to changes in process parameters.

2 In real life, bioreactors actually use on-off control for pH.
3 We'll learn how to identify input and output variables, how to distinguish between manipulated variables, disturbances, measured variables, and so forth. Do not worry about remembering all the terms here. We'll introduce them properly later.
4 In most of the control world, a process is referred to as a plant. We stay with "process" because in the process industry, a plant carries the connotation of the entire manufacturing or processing facility.

[Figure 1.2 block labels for the pH example: desired pH (reference), error, pH control algorithm (controller function), acid/base pump (actuator), mixed vessel (process), pH electrode with transmitter (transducer), measured pH.]

❖ 2. Mathematical Preliminaries

2.1 A simple differential equation model

C' and C'in are deviation variables—they denote how a quantity deviates from the original value at t = 0.1 Since Co is a constant, we can rewrite Eq. (2-1) as

    τ dC'/dt + C' = C'in , with C'(0) = 0    (2-2)

Note that the equation now has a zero initial condition. For reference, the solution to Eq. (2-2) is 2

    C'(t) = (1/τ) ∫[0 to t] C'in(z) e^–(t – z)/τ dz    (2-3)

If C'in is zero, we have the trivial solution C' = 0, as is immediately obvious from Eq. (2-2). For a more interesting situation in which C' is nonzero, or for C to deviate from the initial Co, C'in must be nonzero; in other words, Cin is different from Co. In the terminology of differential equations, the right hand side C'in is named the forcing function. In control, it is called the input. Not only is C'in nonzero, it is under most circumstances a function of time as well, C'in = C'in(t). In addition, the time dependence of the solution, meaning the exponential function, arises from the left hand side of Eq. (2-2), the linear differential operator. In fact, we may recall that the left hand side of (2-2) gives rise to the so-called characteristic equation (or characteristic polynomial). Do not worry if you have forgotten the significance of the characteristic equation. We will come back to this issue again and again. We are just using this example as a prologue.

Typically in a class on differential equations, we learn to transform a linear ordinary equation into an algebraic equation in the Laplace domain, solve for the transformed dependent variable, and finally get back the time-domain solution with an inverse transformation. In classical control theory, we make extensive use of Laplace transform to analyze the dynamics of a system. The key point (and at this moment the trick) is that we will try to predict the time response without doing the inverse transformation. Later, we will see that the answer lies in the roots of the characteristic equation. This is the basis of classical control analyses. Hence, in going through Laplace transform again, it is not so much that we need a remedial course. Your old differential equation textbook would do fine. The key task here is to pitch this mathematical technique in a light that may help us to apply it to control problems.
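Although we will not use the time-domain solution (2-3) later, it is easy to check it numerically. The sketch below, in base MATLAB only, evaluates the convolution integral in (2-3) for a unit step input C'in = u(t) by trapezoidal quadrature and compares it with the first order step response 1 – e^(–t/τ) quoted in the Review Problem hints at the end of the chapter. The time constant and grid are arbitrary.

    % Numerical check of Eq. (2-3) for a unit step input (base MATLAB only).
    tau = 2;                          % arbitrary time constant
    t   = linspace(0, 10, 201);       % time points
    C   = zeros(size(t));
    for k = 2:numel(t)
        z    = linspace(0, t(k), 200);               % dummy variable of integration
        Cin  = ones(size(z));                         % unit step input, C'in(z) = 1
        C(k) = trapz(z, Cin .* exp(-(t(k) - z)/tau)) / tau;
    end
    max(abs(C - (1 - exp(-t/tau))))   % small; only quadrature error remains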
1 Deviation variables are analogous to perturbation variables used in chemical kinetics or in fluid mechanics (linear hydrodynamic stability). We can consider deviation variable as a measure of how far it is from steady state. 2 When you come across the term convolution integral later in Eq. (4-10) and wonder how it may come about, take a look at the form of Eq. (2-3) again and think about it. If you wonder about where (2-3) comes from, review your old ODE text on integrating factors. We skip this detail since we will not be using the time domain solution in Eq. (2-3). f(t) y(t) F(s) Y(s)L dy/dt = f(t) Input/Forcing function (disturbances, manipulated variables) Output (controlled variable) G(s) Input Output Figure 2.1. Relationship between time domain and Laplace domain. 2 - 3 2.2 Laplace transform Let us first state a few important points about the application of Laplace transform in solving differential equations (Fig. 2.1). After we have formulated a model in terms of a linear or linearized differential equation, dy/dt = f(y), we can solve for y(t). Alternatively, we can transform the equation into an algebraic problem as represented by the function G(s) in the Laplace domain and solve for Y(s). The time domain solution y(t) can be obtained with an inverse transform, but we rarely do so in control analysis. What we argue (of course it is true) is that the Laplace-domain function Y(s) must contain the same information as y(t). Likewise, the function G(s) contains the same dynamic information as the original differential equation. We will see that the function G(s) can be "clean" looking if the differential equation has zero initial conditions. That is one of the reasons why we always pitch a control problem in terms of deviation variables.1 We can now introduce the definition. The Laplace transform of a function f(t) is defined as L[f(t)] = f(t) e–st dt 0 ∞ (2-4) where s is the transform variable.2 To complete our definition, we have the inverse transform f(t) = L–1[F(s)] = 12π j F(s) e st ds γ – j∞ γ + j∞ (2-5) where γ is chosen such that the infinite integral can converge.3 Do not be intimidated by (2-5). In a control class, we never use the inverse transform definition. Our approach is quite simple. We construct a table of the Laplace transform of some common functions, and we use it to do the inverse transform using a look-up table. An important property of the Laplace transform is that it is a linear operator, and contribution of individual terms can simply be added together (superimposed): L[a f1(t) + b f2(t)] = a L[f1(t)] + b L[f2(t)] = aF1(s) + bF2(s) (2-6) Note: The linear property is one very important reason why we can do partial fractions and inverse transform using a look-up table. This is also how we analyze more complex, but linearized, systems. Even though a text may not state this property explicitly, we rely heavily on it in classical control. We now review the Laplace transform of some common functions—mainly the ones that we come across frequently in control problems. We do not need to know all possibilities. We can consult a handbook or a mathematics textbook if the need arises. (A summary of the important ones is in Table 2.1.) Generally, it helps a great deal if you can do the following common ones 1 But! What we measure in an experiment is the "real" variable. We have to be careful when we solve a problem which provides real data. 2 There are many acceptable notations of Laplace transform. 
We choose to use a capitalized letter, and where confusion may arise, we further add (s) explicitly to the notation. 3 If you insist on knowing the details, they can be found on our Web Support. 2 - 4 without having to look up a table. The same applies to simple algebra such as partial fractions and calculus such as linearizing a function. 1. A constant f(t) = a, F(s) = a s (2-7) The derivation is: L[a] = a e– st dt 0 ∞ = – as e – st 0 ∞ = a 0 + 1s = a s slope a Exponential decay Linear ramp Figure 2.2. Illustration of exponential and ramp functions. 2. An exponential function (Fig. 2.2) f(t) = e–at with a > 0, F(s) = 1 (s + a) (2-9) L[e– at] = a e– at e– st dt 0 ∞ = – 1 (s + a) e– (a + s)t 0 ∞ = 1 (s + a) 3. A ramp function (Fig. 2.2) f(t) = at for t ≥ 0 and a = constant, F(s) = a s2 (2-8) L[at] = a t e– st dt 0 ∞ = a – t 1s e – st 0 ∞ + 1s e – st dt 0 ∞ = as e – st dt 0 ∞ = a s2 4. Sinusoidal functions f(t) = sinωt, F(s) = ω (s2 + ω2) (2-10) f(t) = cosωt, F(s) = s (s2 + ω2) (2-11) We make use of the fact that sin ωt = 12j (e jωt – e– jωt) and the result with an exponential function to derive L[sin ωt] = 12j (e jωt – e– jωt) e– st dt 0 ∞ = 1 2j e– (s – jω)t dt 0 ∞ – e– (s + jω)t dt 0 ∞ = 1 2j 1 s – jω – 1 s + jω = ω s2 + ω2 The Laplace transform of cosωt is left as an exercise in the Review Problems. If you need a review 2 - 7 2. Dead time function (Fig. 2.3) f(t – to), L f(t – to) = e– sto F(s) (2-18) The dead time function is also called the time delay, transport lag, translated, or time shift function (Fig. 2.3). It is defined such that an original function f(t) is "shifted" in time to, and no matter what f(t) is, its value is set to zero for t < to. This time delay function can be written as: f(t – to) = 0 , t – to < 0 f(t – to) , t – to > 0 = f(t – to) u(t – to) The second form on the far right is a more concise way to say that the time delay function f(t – to) is defined such that it is zero for t < to. We can now derive the Laplace transform. L f(t – to) = f(t – to) u(t – to) e– st dt 0 ∞ = f(t – to) e– st dt to ∞ and finally, f(t – to) e– st dt to ∞ = e– sto f(t – to) e– s(t – to ) d(t – to) to ∞ = e– sto f(t') e– st' dt' 0 ∞ = e– sto F(s) where the final integration step uses the time shifted axis t' = t – to. 3. Rectangular pulse function (Fig. 2.3) f(t) = 0 t < 0 A 0 < t < T 0 t > T = A u(t) – u(t – T) , L f(t) = As 1 – e– sT (2-19) The rectangular pulse can be generated by subtracting a step function with dead time T from a step function. We can derive the Laplace transform using the formal definition L f(t = f(t) e– st dt 0 ∞ = A e– st dt 0 + T = A – 1s e – st 0 T = As 1 – e – sT or better yet, by making use of the results of a step function and a dead time function L f(t = L A u(t) – A u(t – T) = As – e– sTAs 4. Unit rectangular pulse function f(t) = 0 t < 0 1/T 0 < t < T 0 t > T = 1T u(t) – u(t – T) , L f(t) = 1sT 1 – e – sT (2-20) This is a prelude to the important impulse function. We can define a rectangular pulse such that the area is unity. The Laplace transform follows that of a rectangular pulse function L f(t = L 1T u(t) – 1 T u(t – T) = 1 T s 1 – e – sT 2 - 8 5. Impulse function (Fig. 2.3) L[δ(t)] = 1, and L[Aδ(t)] = A (2-21) The (unit) impulse function is called the Dirac (or simply delta) function in mathematics.1 If we suddenly dump a bucket of water into a bigger tank, the impulse function is how we describe the action mathematically. We can consider the impulse function as the unit rectangular function in Eq. 
(2-20) as T shrinks to zero while the height 1/T goes to infinity: δ(t) = lim T → 0 1 T u(t) – u(t – T) The area of this "squeezed rectangle" nevertheless remains at unity: lim T → 0 (T 1T) = 1 , or in other words δ(t) dt = 1 – ∞ ∞ The impulse function is rarely defined in the conventional sense, but rather via its important property in an integral: f(t) δ(t) dt = f(0) – ∞ ∞ , and f(t) δ(t – to) dt = f(to) – ∞ ∞ (2-22) The Laplace transform of the impulse function is obtained easily by taking the limit of the unit rectangular function transform (2-20) with the use of L'Hospital's rule: L δ(t = lim T → 0 1 – e– sT T s = limT → 0 s e– sT s = 1 From this result, it is obvious that L[Aδ(t)] = A. 2.4 Initial and final value theorems We now present two theorems which can be used to find the values of the time-domain function at two extremes, t = 0 and t = ∞, without having to do the inverse transform. In control, we use the final value theorem quite often. The initial value theorem is less useful. As we have seen from our very first example in Section 2.1, the problems that we solve are defined to have exclusively zero initial conditions. Initial Value Theorem: lim s–> ∞ [sF(s)] = lim t–> 0 f(t) (2-23) Final Value Theorem: lim s–> 0 [sF(s)] = lim t–> ∞ f(t) (2-24) The final value theorem is valid provided that a final value exists. The proofs of these theorems are straightforward. We will do the one for the final value theorem. The proof of the initial value theorem is in the Review Problems. Consider the definition of the Laplace transform of a derivative. If we take the limit as s approaches zero, we find 1 In mathematics, the unit rectangular function is defined with a height of 1/2T and a width of 2T from –T to T. We simply begin at t = 0 in control problems. Furthermore, the impulse function is the time derivative of the unit step function. 2 - 9 lim s → 0 df(t) dt e – st dt 0 ∞ = lim s → 0 s F(s) – f(0) If the infinite integral exists,1 we can interchange the limit and the integration on the left to give lim s → 0 df(t) dt e – st dt 0 ∞ = df(t) 0 ∞ = f(∞) – f(0) Now if we equate the right hand sides of the previous two steps, we have f(∞) – f(0) = lims → 0 s F(s) – f(0) We arrive at the final value theorem after we cancel the f(0) terms on both sides. ✎ Example 2.1: Consider the Laplace transform F(s) = 6 (s – 2) (s + 2)s (s + 1) (s + 3) (s + 4) . What is f(t=∞)? lim s → 0 s 6 (s – 2) (s + 2)s (s + 1) (s + 3) (s + 4) = 6 (– 2) ( 2) ( 3) ( 4) = – 2 ✎ Example 2.2: Consider the Laplace transform F(s) = 1(s – 2) . What is f(t=∞)? Here, f(t) = e2t. There is no upper bound for this function, which is in violation of the existence of a final value. The final value theorem does not apply. If we insist on applying the theorem, we will get a value of zero, which is meaningless. ✎ Example 2.3: Consider the Laplace transform F(s) = 6 (s 2 – 4) (s3 + s2 – 4s – 4) . What is f(t=∞)? Yes, another trick question. If we apply the final value theorem without thinking, we would get a value of 0, but this is meaningless. With MATLAB, we can use roots([1 1 -4 -4]) to find that the polynomial in the denominator has roots –1, –2, and +2. This implies that f(t) contains the term e2t, which increases without bound. As we move on, we will learn to associate the time exponential terms to the roots of the polynomial in the denominator. From these examples, we can gather that to have a meaningful, i.e., finite bounded value, the roots of the polynomial in the denominator must have negative real parts. 
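If the Symbolic Math Toolbox is available, the final value theorem calculations can be checked in a couple of statements. The sketch below copies the transforms from Examples 2.1 and 2.3; the roots() check for Example 2.3 is the same one quoted in that example.

    % Final value theorem checks (assumes the Symbolic Math Toolbox).
    syms s
    F1 = 6*(s - 2)*(s + 2) / (s*(s + 1)*(s + 3)*(s + 4));
    limit(s*F1, s, 0)          % Example 2.1: returns -2

    F3 = 6*(s^2 - 4) / (s^3 + s^2 - 4*s - 4);
    limit(s*F3, s, 0)          % returns 0, but the value is meaningless because...
    roots([1 1 -4 -4])         % ...one root is at +2, so f(t) grows without bound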
This is the basis of stability, which will formerly be defined in Chapter 7. 1 This is a key assumption and explains why Examples 2.2 and 2.3 do not work. When a function has no bound—what we call unstable later—the assumption is invalid. 2 - 12 f(t) = 3e– t – 6e– 2t + 3e– 3t The e–2t and e–3t terms will decay faster than the e–t term. We consider the e–t term, or the pole at s = –1, as more dominant. We can confirm the result with the following MATLAB statements: p=poly([-1 -2 -3]); [a,b,k]=residue(6,p) Note: (1) The time dependence of the time domain solution is derived entirely from the roots of the polynomial in the denominator (what we will refer to later as the poles). The polynomial in the numerator affects only the coefficients αi. This is one reason why we make qualitative assessment of the dynamic response characteristics entirely based on the poles of the characteristic polynomial. (2) Poles that are closer to the origin of the complex plane will have corresponding exponential functions that decay more slowly in time. We consider these poles more dominant. (3) We can generalize the Heaviside expansion into the fancy form for the coefficients α i = (s + ai) q(s) p(s) s = – a i but we should always remember the simple algebra that we have gone through in the examples above. ✑ 2.5.2 Case 2: p(s) has complex roots 1 ✎ Example 2.7: Find f(t) of the Laplace transform F(s) = s + 5 s2 + 4s + 13 . We first take the painful route just so we better understand the results from MATLAB. If we have to do the chore by hand, we much prefer the completing the perfect square method in Example 2.8. Even without MATLAB, we can easily find that the roots of the polynomial s2 + 4s +13 are –2 ± 3j, and F(s) can be written as the sum of s + 5 s2 + 4s + 13 = s + 5 s – ( – 2 + 3j) s – ( – 2 – 3j) = αs – ( – 2 + 3j) + α * s – ( – 2 – 3j) We can apply the same idea formally as before to find α = s + 5 s – ( – 2 – 3j) s = – 2 + 3j = (– 2 + 3j) + 5 (– 2 + 3j) + 2 + 3j = (j + 1) 2j = 1 2 (1 – j) and its complex conjugate is α* = 12 (1 + j) The inverse transform is hence 1 If you need a review of complex variable definitions, see our Web Support. Many steps in Example 2.7 require these definitions. 2 - 13 f(t) = 12 (1 – j) e ( – 2 + 3j)t + 12 (1 + j) e ( – 2 – 3j)t = 12 e – 2t (1 – j) e j 3t + (1 + j) e– j 3t We can apply Euler's identity to the result: f(t) = 12 e – 2t (1 – j) (cos 3t + j sin 3t) + (1 + j) (cos 3t – j sin 3t) = 12 e – 2t 2 (cos 3t + sin 3t) which we further rewrite as f(t) = 2 e– 2t sin (3t + φ) where φ = tan– 1(1) = π/4 or 45° The MATLAB statement for this example is simply: [a,b,k]=residue([1 5],[1 4 13]) Note: (1) Again, the time dependence of f(t) is affected only by the roots of p(s). For the general complex conjugate roots –a ± bj, the time domain function involves e–at and (cos bt + sin bt). The polynomial in the numerator affects only the constant coefficients. (2) We seldom use the form (cos bt + sin bt). Instead, we use the phase lag form as in the final step of Example 2.7. ✎ Example 2.8: Repeat Example 2.7 using a look-up table. In practice, we seldom do the partial fraction expansion of a pair of complex roots. Instead, we rearrange the polynomial p(s) by noting that we can complete the squares: s2 + 4s + 13 = (s + 2)2 + 9 = (s + 2)2 + 32 We then write F(s) as F(s) = s + 5 s2 + 4s + 13 = (s + 2) (s + 2)2 + 32 + 3 (s + 2)2 + 32 With a Laplace transform table, we find f(t) = e– 2t cos 3t + e– 2t sin 3t which is the answer with very little work. 
Compared with how messy the partial fraction was in Example 2.7, this example also suggests that we want to leave terms with conjugate complex roots as one second order term. ✑ 2.5.3 Case 3: p(s) has repeated roots ✎ Example 2.9: Find f(t) of the Laplace transform F(s) = 2 (s + 1)3 (s + 2) . The polynomial p(s) has the roots –1 repeated three times, and –2. To keep the numerator of each partial fraction a simple constant, we will have to expand to 2 - 14 2 (s + 1)3 (s + 2) = α1 (s + 1) + α2 (s + 1)2 + α3 (s + 1)3 + α4 (s + 2) To find α3 and α4 is routine: α3 = 2(s + 2) s = – 1 = 2 , and α4 = 2(s + 1)3 s = – 2 = – 2 The problem is with finding α1 and α2. We see that, say, if we multiply the equation with (s+1) to find α1, we cannot select s = –1. What we can try is to multiply the expansion with (s+1)3 2 (s + 2) = α1(s + 1) 2 + α2(s + 1) + α3 + α4(s + 1) 3 (s + 2) and then differentiate this equation with respect to s: – 2 (s + 2)2 = 2 α1(s + 1) + α2 + 0 + α4 terms with (s + 1) Now we can substitute s = –1 which provides α2 = –2. We can be lazy with the last α4 term because we know its derivative will contain (s + 1) terms and they will drop out as soon as we set s = –1. To find α1, we differentiate the equation one more time to obtain 4 (s + 2)3 = 2 α1 + 0 + 0 + α4 terms with (s + 1) which of course will yield α1 = 2 if we select s = –1. Hence, we have 2 (s + 1)3 (s + 2) = 2(s + 1) + – 2 (s + 1)2 + 2 (s + 1)3 + – 2(s + 2) and the inverse transform via table-lookup is f(t) = 2 1 – t + t 2 2 e – t – e– 2t We can also arrive at the same result by expanding the entire algebraic expression, but that actually takes more work(!) and we will leave this exercise in the Review Problems. The MATLAB command for this example is: p=poly([-1 -1 -1 -2]); [a,b,k]=residue(2,p) Note: In general, the inverse transform of repeated roots takes the form L– 1 α1(s + a) + α2 (s + a)2 + ... αn (s + a)n = α1 + α2t + α3 2! t 2 + ... αn (n – 1)! t n – 1 e– at The exponential function is still based on the root s = –a, but the actual time dependence will decay slower because of the (α2t + …) terms. 2 - 17 final change in y(t) relative to a unit change in the input x(t). Thus an easy derivation of the steady state gain is to take a unit step input in x(t), or X(s) = 1/s, and find the final value in y(t): y(∞) = lim s → 0 [s G(s) X(s)] = lim s → 0 [s G(s) 1 s ] = b o ao (2-32) The steady state gain is the ratio of the two constant coefficients. Take note that the steady state gain value is based on the transfer function only. From Eqs. (2-31) and (2-32), one easy way to "spot" the steady state gain is to look at a transfer function in the time constant form. Note: (1) When we talk about the poles of G(s) in Eq. (2-29), the discussion is regardless of the input x(t). Of course, the actual response y(t) also depends on x(t) or X(s). (2) Recall from the examples of partial fraction expansion that the polynomial Q(s) in the numerator, or the zeros, affects only the coefficients of the solution y(t), but not the time dependent functions. That is why for qualitative discussions, we focus only on the poles. (3) For the time domain function to be made up only of exponential terms that decay in time, all the poles of a transfer function must have negative real parts. (This point is related to the concept of stability, which we will address formally in Chapter 7.) 2.7 Summary of pole characteristics We now put one and one together. The key is that we can "read" the poles—telling what the form of the time-domain function is. 
We should have a pretty good idea from our exercises in partial fractions. Here, we provide the results one more time in general notation. Suppose we have taken a characteristic polynomial, found its roots and completed the partial fraction expansion, this is what we expect in the time-domain for each of the terms: A. Real distinct poles Terms of the form ci s – pi , where the pole pi is a real number, have the time-domain function ci e pi t . Most often, we have a negative real pole such that pi = –ai and the time-domain function is ci e – ai t . B. Real poles, repeated m times Terms of the form ci,1 (s – pi) + ci,2 (s – pi) 2 + ... + ci,m (s – pi) m with the root pi repeated m times have the time-domain function ci,1 + ci,2 t + ci,3 2! t2 + ... + ci,m (m – 1)! tm – 1 epi t . When the pole pi is negative, the decay in time of the entire response will be slower (with respect to only one single pole) because of the terms involving time in the bracket. This is the reason why we say that the response of models with repeated roots (e.g., tanks-in-series later in Section 3.4) tends to be slower or "sluggish." 2 - 18 C. Complex conjugate poles Terms of the form ci s – pi + c*i s – p*i , where pi = α + jβ and p*i = α – jβ are the complex poles, have time-domain function ci e pi t + c*i e p*i t of which form we seldom use. Instead, we rearrange them to give the form [some constant] x eαtsin(βt + φ) where φ is the phase lag. It is cumbersome to write the partial fraction with complex numbers. With complex conjugate poles, we commonly combine the two first order terms into a second order term. With notations that we will introduce formally in Chapter 3, we can write the second order term as as + b τ2s2 + 2ζτ s + 1 , where the coefficient ζ is called the damping ratio. To have complex roots in the denominator, we need 0 < ζ < 1. The complex poles p i and p*i are now written as pi, p*i = – ζ τ ± j 1 – ζ2 τ with 0 < ζ < 1 and the time domain function is usually rearranged to give the form [some constant] x e– ζt/τ sin 1 – ζ2 τ t +φ where again, φ is the phase lag. D. Poles on the imaginary axis If the real part of a complex pole is zero, then p = ±ωj. We have a purely sinusoidal behavior with frequency ω. If the pole is zero, it is at the origin and corresponds to the integrator 1/s. In time domain, we'd have a constant, or a step function. E. If a pole has a negative real part, it is in the left-hand plane (LHP). Conversely, if a pole has a positive real part, it is in the right-hand plane (RHP) and the time-domain solution is definitely unstable. 2 - 19 Note: Not all poles are born equal! The poles closer to the origin are dominant. It is important to understand and be able to identify dominant poles if they exist. This is a skill that is used later in what we call model reduction. This is a point that we first observed in Example 2.6. Consider the two terms such that 0 < a1 < a2 (Fig. 2.4), Y(s) = c1 (s – p1) + c2 (s – p2) + ... = c1 (s + a1) + c2 (s + a2) + ... = c1/a1 (τ1s + 1) + c2/a2 (τ2s + 1) +... Their corresponding terms in the time domain are y(t) = c1e –a1 t + c2e –a2 t +... = c1e –t/τ1 + c2e –t/τ2 +... As time progresses, the term associated with τ2 (or a2) will decay away faster. We consider the term with the larger time constant τ1 as the dominant pole. 1 Finally, for a complex pole, we can relate the damping ratio (ζ < 1) with the angle that the pole makes with the real axis (Fig. 2.5). 
Taking the absolute values of the dimensions of the triangle, we can find θ = tan–1 1 – ζ2 ζ (2-33) and more simply θ = cos–1 ζ (2-34) Eq. (2-34) is used in the root locus method in Chapter 7 when we design controllers. 1 Our discussion is only valid if τ1 is “sufficiently” larger than τ2. We could establish a criterion, but at the introductory level, we shall keep this as a qualitative observation. Re –a12–a Im Small a Large Large a Small τ τ Exponential term e decays faster Exponential term e decays slowly 2–a –a1t t Figure 2.4. Depiction of poles with small and large time constants. θ − ζ/τ √1 − ζ τ 2p j Figure 2.5. Complex pole angular position on the complex plane. 2 - 22 Inverse transform of (2-40) gives us the time-domain solution for C'1(t): C'1(t) = 5[1 – e –t/τ1] – 5[1 – e–(t – 10)/τ1] u(t – 10) The most important time dependence of e–t/τ1 arises only from the pole of the transfer function in Eq. (2-38). Again, we can "spell out" the function if we want to: For t < 10 C'1(t) = 5[1 – e –t/τ1] and t > 10 C'1(t) = 5[1 – e –t/τ1] – 5[1 – e–(t – 10)/τ1] = 5 e–(t – 10)/τ1 – e–t/τ1 In terms of the actual variable, we have for t < 10 C1(t) = C1 s + C'1 = 1 + 5[1 – e – t/τ1] and t > 10 C1(t) = 1 + 5 e –(t – 10)/τ1 – e–t/τ1 We now want to use an impulse input of equivalent "strength," i.e., same amount of inert tracer added. The amount of additional tracer in the rectangular pulse is 5 gmol m3 0.02 m3 s 10 [s] = 1 gmol which should also be the amount of tracer in the impulse input. Let the impulse input be C'in = Mδ(t). Note that δ(t) has the unit of time–1 and M has a funny and physically meaningless unit, and we calculate the magnitude of the input by matching the quantities 1 [gmol] = 0.02 m3 s M gmol.s m3 δ(t) 1 s dt [s] 0 ∞ = 0.02M or M = 50 gmol. s m3 Thus C' in(t) = 50δ(t) , C' in(s) = 50 and for an impulse input, Eq. (2-38) is simply C'1(s) = 50 (τ1 s + 1) (2-41) After inverse transform, the solution is C'1(t) = 50 τ1 e–t/τ1 and in the real variable, C1(t) = 1 + 50 τ1 e–t/τ1 We can do a mass balance based on the outlet Q C'1(t) dt 0 ∞ = 0.02 50 τ1 e–t/τ1 dt 0 ∞ = 1 [gmol] Hence mass is conserved and the mathematics is correct. 2 - 23 We now raise a second question. If the outlet of the vessel is fed to a second tank with a volume V2 of 3 m3 (Fig. 2.8), what is the time response at the exit of the second tank? With the second tank, the mass balance is τ2 dC2 dt = (C1 – C2) where τ2 = V2 Q or τ2 dC2 dt + C2 = C1 (2-42) where C1 and C2 are the concentrations in tanks one and two respectively. The equation analogous to Eq. (2-37) is τ 2 dC'2 dt + C'2 = C'1 (2-43) and its Laplace transform is C'2(s) = 1 τ2 s + 1 C'1(s) (2-44) With the rectangular pulse to the first vessel, we use the response in Eq. (2-40) and substitute in (2-44) to give C'2(s) = 5 (1 – e–10 s) s (τ1s + 1) (τ2s + 1) With the impulse input, we use the impulse response in Eq. (2-41) instead, and Eq. (2-44) becomes C'2(s) = 50 (τ1s + 1) (τ2s + 1) from which C'2(t) can be obtained via proper table look-up. The numerical values τ1 = 4 0.02 = 200 s and τ2 = 3 00.2 = 150 s can be used. We will skip the inverse transform. It is not always instructive to continue with an algebraic mess. To sketch the time response, we'll do that with MATLAB in the Review Problems. ✑ 2.8.2 A stirred tank heater Temperature control in a stirred-tank heater is a common example (Fig. 2.9). We will come across it many times in later chapters. For now, we present the basic model equation, and use it as a review of transfer functions. 
The heat balance, in standard heat transfer notations, is ρCpV dT dt = ρCpQ (Ti – T) + UA (TH – T) (2-45) V V1 2 c c2 1 Q, c in Figure 2.8. Two well-mixed vessels in series. 2 - 24 where U is the overall heat transfer coefficient, A is the heat transfer area, ρ is fluid density, Cp is the heat capacity, and V is the volume of the vessel. The inlet temperature Ti = Ti(t) and steam coil temperature TH = TH(t) are functions of time and are presumably given. The initial condition is T(0) = Ts, the steady state temperature. Before we go on, let us emphasize that what we find below are nothing but different algebraic manipulations of the same heat balance. First, we rearrange (2-45) to give V Q dT dt = (Ti – T) + UA ρCpQ (TH – T) The second step is to define τ = V Q and κ = UAρCpQ which leads to τ dT dt + (1 + κ)T = Ti + κTH (2-46) At steady state, (1 + κ) Ts = Ti s + κ TH s (2-47) We now define deviation variables: T' = T – Ts ; T' i = Ti – Ti s ; T'H = TH – TH s and dT' dt = d(T – Ts) dt = dT dt Subtract Eq. (2.47) from the transient equation in Eq. (2-46) would give τ dT dt + (1 + κ) (T – Ts) = (Ti – Ti s ) + κ (TH – TH s ) or in deviation variables, τ dT' dt + (1 + κ) T' = T'i + κT'H (2-48) The initial condition is T'(0) = 0. Eq. (2-48) is identical in form to (2-46). This is typical of linear equations. Once you understand the steps, you can jump from (2-46) to (2-48), skipping over the formality. From here on, we will omit the apostrophe (') where it would not cause confusion, as it goes without saying that we work with deviation variables. We now further rewrite the same equation as dT dt + aT = KiTi + KHTH (2-48a) Q, T Q, T in HSteam, T Figure 2.9. A continuous flow stirred-tank heater. 2 - 27 steady state and reformulating the problem in terms of deviation variables. We will illustrate with one simple example. Consider the differential equation which models the liquid level h in a tank with cross-sectional area A, Adhdt = Qin(t) – βh 1 2 (2-52) The initial condition is h(0) = hs, the steady state value. The inlet flow rate Qin is a function of time. The outlet is modeled as a nonlinear function of the liquid level. Both the tank cross-section A, and the coefficient β are constants. We next expand the nonlinear term about the steady state value hs (also our initial condition by choice) to provide 1 Adhdt = Qin – β hs 1 2 + 12hs –1 2(h – h s) (2-53) At steady state, we can write the differential equation (2-52) as 0 = Qin s – β hs 1 2 (2-54) where hs is the steady solution, and Q s in is the particular value of Qin to maintain steady state. If we subtract the steady state equation from the linearized differential equation, we have Adhdt = Qin – Qin s – β 12hs –1 2(h – h s) (2-55) We now define deviation variables: h' = h – hs and Q'in = Q in – Qsin Substitute them into the linearized equation and moving the h' term to the left should give Adh'dt + β 2hs –1 2 h' = Q'in(t) (2-56) with the zero initial condition h'(0) = 0. It is important to note that the initial condition in Eq. (2-52) has to be hs, the original steady state level. Otherwise, we will not obtain a zero initial condition in (2-56). On the other hand, because of the zero initial condition, the forcing function Q'in must be finite to have a non-trivial solution. Repeating our mantra the umpteenth time, the LHS of (2-56) gives rise to the characteristic polynomial and describes the inherent dynamics. The actual response is subject to the inherent dynamics and the input that we impose on the RHS. 
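One quick way to see how good (or poor) the linear approximation is, is to simulate both forms of the tank level equation. The sketch below uses base MATLAB and ode45 with hypothetical numbers (A, β, hs, and the step size in Q'in are all made up); the linearized model (2-56) is first order, so its step response can be written down directly.

    % Nonlinear tank level, Eq. (2-52), versus the linearized model, Eq. (2-56).
    % Hypothetical numbers; base MATLAB plus ode45.
    A = 1; beta = 0.5; hs = 4;           % cross section, outflow coefficient, steady level
    Qs = beta*sqrt(hs);                  % steady state inlet flow, from Eq. (2-54)
    dQ = 0.1;                            % small step in Q'in at t = 0

    % Nonlinear model, Eq. (2-52), solved in the real variable h
    [t, h] = ode45(@(t, h) (Qs + dQ - beta*sqrt(h))/A, [0 60], hs);

    % Linearized model, Eq. (2-56): A dh'/dt + (beta/(2*sqrt(hs))) h' = Q'in
    a  = beta/(2*sqrt(hs));              % coefficient of h'
    hp = (dQ/a)*(1 - exp(-(a/A)*t));     % step response of the first order model

    plot(t, h - hs, t, hp, '--')         % compare deviations from the steady state
    legend('nonlinear', 'linearized')

With these made-up numbers the two curves stay close but do not coincide, which is exactly the point of footnote 1 below: the linearization is acceptable only if h stays sufficiently close to hs.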
1 We casually ignore the possibility of a more accurate second order expansion. That’s because the higher order terms are nonlinear, and we need a linear approximation. Needless to say that with a first order expansion, it is acceptable only if h is sufficiently close to hs. In case you forgot, the first order Taylor series expansion can be written as f(x1,x2) ≈ f(x1s,x2s) + ∂f ∂x1 x1s, x2s (x1 – x1s) + ∂f ∂x2 x1s, x2s (x2 – x2s) 2 - 28 Note: • Always do the linearization before you introduce the deviation variables. • As soon as we finish the first-order Taylor series expansion, the equation is linearized. All steps that follow are to clean up the algebra with the understanding that terms of the steady state equation should cancel out, and to change the equation to deviation variables with zero initial condition. We now provide a more general description. Consider an ordinary differential equation dy dt = f(y; u) with y(0) = ys (2-57) where u = u(t) contains other parameters that may vary with time. If f(y; u) is nonlinear, we approximate with Taylor's expansion: dy d t ≈ f(ys; us) + ∂f ∂y ys,us (y – ys) + ∇ Tf(ys; us) (u – us) (2-58) where ∇f(ys; us) is a column vector of the partial derivatives of the function with respect to elements in u, ∂f/∂ui, and evaluated at ys and us. At steady state, (2-57) is 0 = f(ys; us) (2-59) where ys is the steady state solution, and us the values of parameters needed to maintain the steady state. We again define deviation variables y' = y – ys and u' = u – us and subtract (2-59) from (2-58) to obtain the linearized equation with zero initial condition: dy' d t + – ∂f ∂y ys,us y' = ∇Tf(ys; us) u' (2-60) where we have put quantities being evaluated at steady state conditions in brackets. When we solve a particular problem, they are just constant coefficients after substitution of numerical values. ✎ Example 2.11: Linearize the differential equation for the concentration in a mixed vessel: VdCdt = Qin(t)Cin(t) – Qin(t)C , where the flow rate and the inlet concentration are functions of time. A first term Taylor expansion of the RHS leads to the approximation: VdCdt ≈ Qin,sCin,s + Cin,s (Qin – Qin,s) + Qin,s (Cin – Cin,s) – Qin,sCs + Cs (Qin – Qin,s) + Qin,s (C – Cs) and the steady state equation, without canceling the flow variable, is 0 = Qin,sCin,s – Qin,sCs We subtract the two equations and at the same time introduce deviation variables for the dependent variable C and all the parametric variables to obtain VdC'dt ≈ Cin,s Q'in + Qin,s C'in – Cs Q'in + Qin,s C' and after moving the C' term to the LHS, 2 - 29 VdC'dt + Qin,s C' = Cin,s – Cs Q'in + Qin,s C'in The final result can be interpreted as stating how changes in the flow rate and inlet concentration lead to changes in the tank concentration, as modeled by the dynamics on the LHS. Again, we put the constant coefficients evaluated at steady state conditions in brackets. We can arrive at this result quickly if we understand Eq. (2-60) and apply it carefully. The final step should also has zero initial condition C'(0) = 0, and we can take the Laplace transform to obtain the transfer functions if they are requested. As a habit, we can define τ = V/Qin,s and the transfer functions will be in the time constant form. ✎ Example 2.12: Linearize the differential equation dy dt = – xy – βy2 – γy – 1 where x =x(t). 
Each nonlinear term can be approximated as xy ≈ xs ys + ys (x – xs) + xs (y – ys) = xsys + ys x' + xs y' y2 ≈ ys2 + 2ys (y – ys) = ys2 + 2ys y' γy – 1 ≈ γys – 1 + (ln γ) γys – 1 y' With the steady state equation 0 = xs ys + βys + γys – 1 , and the usual algebraic work, we arrive at dy' dt + xs + 2βys + (ln γ) γys – 1 y' = – ys x' ✎ Example 2.13: What is the linearized form of the reaction rate term rA = – k(T) CA = – k oe – E/RT CA where both temperature T and concentration CA are functions of time? rA ≈ – k oe– E/RTs CA,s + k oe– E/RTs (CA – CA,s) + ERTs 2 koe – E/RTs CA,s (T – Ts) In terms of deviation variables, the linearized approximation is rA ≈ rA,s 1 + 1CA,s C'A + ERTs 2 T' , where rA,s = – k oe– E/RTs CA,s Note: While our analyses use deviation variables and not the real variables, examples and homework problems can keep bouncing back and forth. The reason is that when we do an experiment, we measure the actual variable, not the deviation variable. You may find this really confusing. All we can do is to be extra careful when we solve a problem. 2 - 32 ✎ Example 2.15. Derive the closed-loop transfer function C/R for the system with three overlapping negative feedback loops in Fig. E2.15(a). dR C G1 – G2 G 3 G4 H1 H2 – – a b g f R C– G1G21 + G1G 2 G3G41 + G3G 4 H2 H1 G1G4 + + + (a) (b) Steps 1 and 2 (c) Step 3 dR CG1 – G2 G 3 G4 H2 – – a g f+ + + H 1 G1G 4 + Figure E2.15 The key to this problem is to proceed in steps and "untie" the overlapping loops first. We identify various locations with lower case a, b, d, f, and g to help us. We first move the branch-off point over to the right side of G4 (Fig. E2.15b). We may note that we can write a = H1 G3 d = H1 G4 [G3G4] d That is, to maintain the same information at the location a, we must divide the branch-off information at C by G4. Similarly, we note that at the position g in Fig. E2.15a, g = G1 [R – f] – bH1 = G1 [R – f – bH1 G1 ] That is, if we move the break-in point from g out to the left of G1, we need to divide the information by G1 prior to breaking in. The block diagram after moving both the branch-off and break-in points are shown as Steps 1 and 2 in Fig. E2.15b. (We could have drawn such that the loops are flush with one another at R.) Once the loops are no longer overlapping, the block diagram is easy to handle. We first close the two small loops as shown as Step 3 in Fig. E2.15c. The final result is to close the big loop. The resulting closed-loop transfer function is: C R = G1 G2 G3 G4 (1 + G1 G2) (1 + H2 G3 G4) + H1 G2 G3 2 - 33 ✎ Example 2.16. Derive the closed-loop transfer function X1/U for the block diagram in Fig. E2.16a. We will see this one again in Chapter 4 on state space models. With the integrator 1/s, X2 is the Laplace transform of the time derivative of x1(t), and X3 is the second order derivative of x1(t). As before, we can write down the algebraic equations about the summing point (the comparator) for X3, and the two equations about the two integrators 1/s. We should arrive at the result after eliminating X2 and X3. However, we can also obtain the result quickly by recognizing the two feedback loops. We first "close" the inner loop to arrive at Fig. E2.16b. With that, we can see the answer. We "close" this loop to obtain X1 U = 1 s2 + 2ζω s + ω2 ❐ Review Problems 1. Derive Eq. (2.1). 2. (a) Check that when the right hand side is zero, the solution to Eq. (2-2) is zero. (b) Derive Eq. (2-3) using the method of integrating factor. (c) Derive the solution c(t) in Eq. 
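If you want to double-check the algebra in Example 2.15, the loop-by-loop reduction can be repeated symbolically. The sketch below (assuming the Symbolic Math Toolbox) closes the two small loops of Step 3 and then the big loop through the relocated H1/(G1G4) path, and compares the result with the closed-loop transfer function quoted above.

    % Symbolic check of Example 2.15 (assumes the Symbolic Math Toolbox).
    syms G1 G2 G3 G4 H1 H2
    inner1 = G1*G2/(1 + G1*G2);            % first small loop
    inner2 = G3*G4/(1 + H2*G3*G4);         % second small loop
    CR = inner1*inner2/(1 + inner1*inner2*H1/(G1*G4));   % close the big loop

    book = G1*G2*G3*G4/((1 + G1*G2)*(1 + H2*G3*G4) + H1*G2*G3);
    simplify(CR - book)                     % returns 0, so the two forms agree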
(2-3) with, respectively, an impulse input, C'in = δ(t) and a unit step input C'in = u(t). Show that they are identical to when we use the Laplace transform technique as in Example 2.10. 3. Prove the linear property of Laplace transform in (2-6). 4. Derive the Laplace transform of (a) 1/(τs + 1) (b) cos ωt (c) e–at cos ωt 5. Prove the initial value theorem. 6. Show that the inverse transform of F(s) = 6 (s3 + s2 – 4s – 4) is f(t) = – 2e– t + 32e – 2t + 12e 2t 7. Double check α* in the complex root Example 2.7 with the Heaviside expansion. 8. Find the inverse Laplace transform of the general expression Y(s) = cs – p + c* s – p* , where c = a – bj, and p = α + ωj. Rearrange the result to a sine function with time lag. 9. With respect to the repeated root Example 2.9, show that we could have written 2 = α1 (s + 1) 2(s + 2) + α2 (s + 1) (s + 2) + α3 (s + 2) + α4 (s + 1) 3 and after expanding and collecting terms of the same power in s, we can form the matrix K 1 s U + – X2 1 s 2ζω ω2 X1 + + U 1 s ω 2 X1 s + 2 ζω 1+ – (a) (b) X3 Figure E2.16 2 - 34 equation: 1 0 0 1 4 1 0 3 5 3 1 3 2 2 2 1 α1 α2 α3 α4 = 0 0 0 2 from which we can solve for the coefficients. Yes, this is a chore not worth doing even with MATLAB. The route that we take in the example is far quicker. 10. For a general transfer function G(s) = Y(s)/X(s), how can we find the steady state gain? 11. Do the partial fractions of s + 1 s2 (s2 + 4s – 5) . 12. Find the transfer functions as suggested in Example 2.11. 13. Derive Eqs. (3-33) and (3-34). 14. Do the numerical simulation for Section 2.8.1 15. Regarding Eqs. (2-50) and (2-51) in Section 2.8.2: a) What is T(t) as t --> ∞ ? What is the actual temperature that we measure? b) What are the effects of Kp on the final tank temperature? What is the significance if Kp approaches unity? c) What is the time constant for the process? Hints: 1. Eq. (2.1) is straight from material balance. With Q denoting the volumetric flow rate and V the volume, the balance equation of some species A as denoted by C with reference to Fig. 2.6 is V dC d t = QCin – QC Physically, each term in the equation represents: {Accumulation} = {Flow in} – {Flow out} Eq. (2.1) is immediately obvious with the definition of space time τ = V/Q. 2. (c) With the impulse input, C'(t) = 1τ e– t / τ , and with the unit step input, C'(t) = 1 – e– t / τ . 3. This is just a simple matter of substituting the definition of Laplace transform. 4. Answers are in the Laplace transform summary table. The key to (a) is to rearrange the function as 1/τs + 1/τ , and the result should then be immediately obvious with Eq. (2-9). The derivation of (b) and (c) is very similar to the case involving sin ωt. 5. The starting point is again the Laplace transform of a derivative, but this time we take the limit as s —> ∞ and the entire LHS with the integral becomes zero. 6. This follows Examples 2.4 and 2.5. 7. α* = s + 5 s – ( – 2 + 3j) s = – 2 – 3j = (– 2 – 3j) + 5 (– 2 – 3j) + 2 – 3j = (1 – j) – 2j = 1 2 (1 + j) 8. We should have y(t) = c ept + c* ep* t = (a – bj) e(α + ωj) t + (a + bj) e(α – ωj) t . P.C Chau © 2001 ❖ 3. Dynamic Response We now derive the time-domain solutions of first and second order differential equations. It is not that we want to do the inverse transform, but comparing the time-domain solution with its Laplace transform helps our learning process. What we hope to establish is a better feel between pole positions and dynamic characteristics. We also want to see how different parameters affect the time- domain solution. 
The results are useful in control analysis and in measuring model parameters. At the end of the chapter, dead time, reduced order model, and the effect of zeros will be discussed. What are we up to? • Even as we speak of time-domain analysis, we invariably still work with Laplace transform. Time-domain and Laplace-domain are inseparable in classical control. • In establishing the relationship between time-domain and Laplace-domain, we use only first and second order differential equations. That's because we are working strictly with linearized systems. As we have seen in partial fraction expansion, any function can be "broken up" into first order terms. Terms of complex roots can be combined together to form a second order term. • Repeated roots (of multi-capacity processes) lead to sluggish response. Tanks-in- series is a good example in this respect. • With higher order models, we can construct approximate reduced-order models based on the identification of dominant poles. This approach is used later in empirical controller tuning relations. • The dead time transfer function has to be handled differently in classical control, and we'll use the Padé approximation for this purpose. A brief review is in order: Recall that Laplace transform is a linear operator. The effects of individual inputs can be superimposed to form the output. In other words, an observed output change can be attributed to the individual effects of the inputs. From the stirred-tank heater example in Section 2.8.2 (p. 2-23), we found: T(s) = Gd(s)Ti(s) + Gp(s)TH(s) We can analyze the change in tank temperature as a result of individual changes in either inlet or steam temperatures without doing the inverse transform. The compromise is that we do not have the time-domain analytical solution, T(t), and cannot think of time as easily. We can put the example in more general terms. Let's consider an n-th order differential equation and two forcing functions, x1(t) and x2(t), an dny dtn + an–1 dn–1y dtn–1 + ... + a1 dy dt + aoy = b 1x1(t) + b 2x2(t) (3-1) where y is the output deviation variable. We also have the zero initial conditions, y(0) = y'(0) = y"(0) = ... = y(n–1)(0) = 0 (3-2) Laplace transform of the equation leads to Y(s) = G1(s)X1(s) + G2(s)X2(s) (3-3) where G1(s) = b 1 ans n + an–1s n–1 + ... +a1s + a0 , and G2(s) = b 2 ans n + an–1s n–1 + ... +a1s + a0 (3-4) 3 - 2 are the two transfer functions for the two inputs X1(s) and X2(s). Take note (again!) that the characteristic polynomials in the denominators of both transfer functions are identical. The roots of the characteristic polynomial (the poles) are independent of the inputs. It is obvious since they come from the same differential equation (same process or system). The poles tell us what the time-domain solution, y(t), generally would "look" like. A final reminder: no matter how high the order of n may be in Eq. (3-4), we can always use partial fractions to break up the transfer functions into first and second order terms. 3.1 First order differential equation models This section is a review of the properties of a first order differential equation model. Our Chapter 2 examples of mixed vessels, stirred-tank heater, and homework problems of isothermal stirred-tank chemical reactors all fall into this category. Furthermore, the differential equation may represent either a process or a control system. What we cover here applies to any problem or situation as long as it can be described by a linear first order differential equation. 
We usually try to identify features which are characteristic of a model. Using the examples in Section 2.8 as a guide, a first order model using deviation variables with one input and with constant coefficients a1, ao and b can be written in general notations as 1 a1 dy dt + ao y = bx(t) with a1 ≠ 0 and y(0) = 0 (3-5) The model, as in Eq. (2-2), is rearranged as τ dy dt + y = Kx(t) (3-6) where τ is the time constant, and K is the steady state gain. In the event that we are modeling a process, we would use a subscript p (τ = τp, K = Kp). Similarly, the parameters would be the system time constant and system steady state gain when we analyze a control system. To avoid confusion, we may use a different subscript for a system. 1 Whether the notation is y or y‘ is immaterial. The key is to find the initial condition of the problem statement. If the initial condition is zero, the notation must refer to a deviation variable. 0 0.2 0.4 0.6 0.8 1 1.2 0 2 4 6 8 10 12 y t increasing τ MK 0 0.5 1 1.5 2 0 2 4 6 8 10 y t K = 2 K = 1 K = 0.5 M Figure 3.1. Properties of a first order transfer function in time domain. Left panel y/MK: effect of changing the time constant; plotted with τ = 0.25, 0.5, 1, and 2 [arbitrary time unit]. Right panel y/M: effect of changing the steady state gain; all curves have τ = 1.5. 3 - 3 The Laplace transform of Eq. (3-6) is Y(s) X(s) = G(s) = K τ s + 1 (3-7) where G(s) denotes the transfer function. There is one real pole at –1/τ. (What does it imply in terms of the time domain function? If you are stuck, refer back to Example 2.10 on page 2-15.) ✑ 3.1.1 Step response of a first order model Consider a step input, x(t) = Mu(t), and X(s) = M/s, the output is Y(s) = K (τ s + 1) M s = MK 1 s – τ (τ s + 1) (3-8) and inverse transform gives the solution y(t) = MK (1 – e– t/τ) (3-9) We first saw a plot of this function in Fig. 2.10 on page 2-26. The output y(t) starts at zero and increases exponentially to a new steady state MK. A process with this property is called self- regulating. The larger the time constant, the slower is the response (Fig. 3.1a). We can check the result with the final value theorem lim s → 0 [sY(s)] = lim s → 0 s MK s (τ s + 1) = MK The new steady state is not changed by a magnitude of M, but is scaled by the gain K (Fig. 3.1b). Consequently, we can consider the steady state gain as the ratio of the observed change in output in response to a unit change in an input, y/M. In fact, this is how we measure K. The larger the steady state gain, the more sensitive is the output to changes in the input. As noted in Fig. 2.10, at t = τ, y(t) = 0.632MK. This is a result that we often use to estimate the time constant from experimental data. ✑ 3.1.2 Impulse response of a first order model Consider an impulse input, x(t) = Mδ(t), and X(s) = M, the output is now Y(s) = MK (τ s + 1) (3-10) The time-domain solution, as in Example 2.10, is y(t) = MKτ e – t/τ (3-11) which implies that the output rises instantaneously to some value at t = 0 and then decays exponentially to zero. ✑ 3.1.3 Integrating process When the coefficient ao = 0 in the differential equation (3-5), we have dy dt = b a1 x(t) (3-12) 3 - 6 ζ > 1: Two distinct real poles. The case is named overdamped. Here, we can factor the polynomial in terms of two time constants τ1 and τ2: G(s) = K τ2s2 + 2ζτ s + 1 = K (τ1 s +1) (τ2 s +1) (3-20) such that the two real poles are at –1/τ1 and –1/τ2. 1 ζ = 1: Two repeating poles at –1/τ. This case is termed critically damped. 
The natural period τ may be considered the “time constant” in the sense that it is associated with the exponential function. In actual fact, the time response is not strictly exponential, as we saw in Example 2.9 (p. 2-13) and confirmed in the time domain solution in Eq. (3-22). 0 < ζ < 1: Two complex conjugate poles. This situation is considered underdamped. We also write ζ2 – 1 = j 1 – ζ2 . It is very important to note that τ is not the time constant here. The real part of the pole in Eq. (3-19) is –ζ/τ and this is the value that determines the exponential decay, as in Eq. (3- 23). In this sense, the time constant is τ/ζ. ζ = 0: Two purely imaginary conjugate poles with frequency ωn = 1/τ. This is equivalent to an oscillation with no damping and explains why ωn is referred to as the natural frequency. ✑ 3.2.1 Step response time domain solutions Consider a step input, x(t) = Mu(t) with X(s) = M/s, and the different cases with respect to the value of ζ. We can derive the output response y(t) for the different cases. We rarely use these results. They are provided for reference. In the case of the underdamped solution, it is used to derive the characteristic features in the next section. 1 Here, we can derive that τ2 = τ1τ2 and 2ζτ = (τ1 + τ2) or τ = τ1 τ2 and ζ = τ1 + τ2 2 τ1 τ2 In this case of having real poles, we can also relate τ1 = τ ζ – ζ2 – 1 ; τ2 = τ ζ + ζ2 – 1 3 - 7 (1) ζ > 1, overdamped. The response is sluggish compared to critically damped or underdamped. y(t) = MK 1 – e– ζ t/τ cosh ζ2 – 1 τ t + ζ ζ2 – 1 sinh ζ2 – 1 τ t (3-21) This form is unnecessarily complicated. When we have an overdamped response, we typically use the simple exponential form with the exp(–t/τ1) and exp(–t/τ2) terms. (You'll get to try this in the Review Problems.) (2) ζ = 1, critically damped. The response is the "fastest" without oscillatory behavior. y(t) = MK 1 – 1 + tτ e – t/τ (3-22) (3) 0 ≤ ζ < 1, underdamped. The response is fast initially, then overshoots and decays to steady state with oscillations. The oscillations are more pronounced and persist longer with smaller ζ. y(t) = MK 1 – e– ζ t/τ cos 1 – ζ2 τ t + ζ 1 – ζ2 sin 1 – ζ2 τ t (3-23) This equation can be rearranged as y(t) = MK 1 – e– ζ t/τ 1 – ζ2 sin 1 – ζ2 τ t + φ , where φ = tan– 1 1 – ζ 2 ζ (3-23a) ✑ 3.2.2 Time-domain features of underdamped step response From the solution of the underdamped step response (0 < ζ < 1), we can derive the following characteristics (Fig. 3.2). They are useful in two respects: (1) fitting experimental data in the measurements of natural period and damping factor, and (2) making control system design specifications with respect to the dynamic response. y Time, t 1 0 MK A B C T TTtr p s Figure 3.2. Key features in an underdamped response. See text for equations. 3 - 8 1. Overshoot (OS) OS = A B = exp – πζ 1 – ζ2 (3-24) where A and B are depicted in Fig. 3.2. We compute only the first or maximum overshoot in the response. The overshoot increases as ζ becomes smaller. The OS becomes zero as ζ approaches 1. The time to reach the peak value is Tp = πτ 1 – ζ2 = π ωn 1 – ζ 2 (3-25) This peak time is less as ζ becomes smaller and meaningless when ζ = 1. We can also derive the rise time—time for y(t) to cross or hit the final value for the first time—as: tr = τ 1 – ζ2 π – cos–1 ζ (3-26) 2. Frequency (or period of oscillation, T) ω = 1 – ζ 2 τ or T = 2πτ 1 – ζ2 since ω = 2π T (3-27) Note that T = 2Tp and the unit of the frequency is radian/time. 3. 
Settling time The real part of a complex pole in (3-19) is –ζ/τ, meaning that the exponential function forcing the oscillation to decay to zero is e–ζt/τ as in Eq. (3-23). If we draw an analogy to a first order transfer function, the time constant of an underdamped second order function is τ/ζ. Thus to settle within ±5% of the final value, we can choose the settling time as 1 Ts = 3 τ ζ = 3 ζωn (3-28) and if we choose to settle within ±2% of the final value, we can use Ts = 4τ/ζ. 4. Decay ratio DR = C A = exp – 2πζ 1 – ζ2 = OS2 (3-29) The decay ratio is the square of the overshoot and both quantities are functions of ζ only. The definitions of C and A are shown in Fig. 3.2. 1 Refer to Review Problem 1 to see why we may pick factors of 3 or 4. 3 - 11 c c V V V1 2 c c c n–1 2 1 q oo q nn n Figure 3.3. Illustration of compartments or tanks-in-series. 3.4 Higher order processes and approximations Staged processes in chemical engineering or compartmental models in bioengineering give rise to higher order models. The higher order models are due to a cascade of first order elements. Numerical calculation will be used to illustrate that the resulting response becomes more sluggish, thus confirming our analysis in Example 2.9 (p. 2-13). We shall also see how the response may be approximated by a lower order function with dead time. An example of two interacting vessels follows last. ✑ 3.4.1 Simple tanks-in-series Consider a series of well-mixed vessels (or compartments) where the volumetric flow rate and the respective volumes are constant (Fig. 3.3). If we write down the mass balances of the first two vessels as in Section 2.8.1 (p. 2-20), they are: 1 τ1 dc1 dt = co – c1 (3-32) and τ2 dc2 dt = c1 – c2 (3-33) where τ1 = V1/qo and τ2 = V2/qo are the space times of each vessel. Again following Section 2.8.1, the Laplace transform of the mass balance in deviation variables would be C1 Co = 1τ1 s + 1 , and C2C1 = 1τ2 s + 1 (3-34) The effect of changes in co(t) on the effluent of the second vessel is evaluated as C2 Co = C2 C1 C1 Co = 1(τ2 s + 1) 1 (τ1 s + 1) (3-35) Obviously, we can generalize to a series of n tanks as in Cn Co = 1(τ1 s + 1) ... (τn – 1 s + 1) (τn s + 1) (3-36) 1 Many texts illustrate with a model on the change of inlet flow rate. In such a case, we usually need to assume that the outlet flow rate of each vessel is proportional to the liquid level or hydrostatic head. The steady state gains will not be unity. 3 - 12 In this example, the steady state gain is unity, which is intuitively obvious. If we change the color of the inlet with a food dye, all the mixed tanks will have the same color eventually. In addition, the more tanks we have in a series, the longer we have to wait until the n-th tank "sees" the changes that we have made in the first one. We say that the more tanks in the series, the more sluggish is the response of the overall process. Processes that are products of first order functions are also called multicapacity processes. Finally, if all the tanks have the same space time, τ1 = τ2 = …= τ, Eq. (3-36) becomes Cn Co = 1 (τ s + 1)n (3-37) This particular scenario is not common in reality, but is a useful textbook illustration. ✎ Example 3.3. Make use of Eq. (3-37), show how the unit step response Cn(t) becomes more sluggish as n increases from 1 to 5. The exercise is almost trivial with MATLAB. To generate Fig. 
E3.3, the statements are: tau=3; %Just an arbitrary time constant G=tf(1,[tau 1]); step(G); %First order function unit step response hold step(G*G); %Second order response step(G*G*G); %Third order response step(G*G*G*G); %Fourth order response step(G*G*G*G*G); %Fifth order response 0 0.2 0.4 0.6 0.8 1 1.2 0 10 20 30 40 y t n = 5 n = 1 Figure E3.3 It is clear that as n increases, the response, as commented in Example 2.9, becomes slower. If we ignore the “data” at small times, it appears that the curves might be approximated with first order with dead time functions. We shall do this exercise in the Review Problems. ✑ 3.4.2 Approximation with lower order functions with dead time Following the lead in Example 3.3, we now make use of the result in Example 2.6 (p. 2-11) and the comments about dominant poles in Section 2.7 (p. 2-17) to see how we may approximate a transfer function. Let say we have a high order transfer function that has been factored into partial fractions. If there is a large enough difference in the time constants of individual terms, we may try to throw away the small time scale terms and retain the ones with dominant poles (large time constants). This is our reduced-order model approximation. From Fig. E3.3, we also need to add a time delay in this approximation. The extreme of this idea is to use a first order with dead time function. It obviously cannot do an adequate job in many circumstances. Nevertheless, this simple 3 - 13 approximation is all we use when we learn to design controllers with empirical tuning relations. A second order function with dead time generally provides a better estimate, and this is how we may make a quick approximation. Suppose we have an n-th order process which is broken down into n first-order processes in series with time constants τ1, τ2,... ,τn. If we can identify, say, two dominant time constants (or poles) τ1 and τ2, we can approximate the process as G(s) ≈ Ke– td s (τ1 s +1) (τ2 s +1) , where td ≈ n ∑ i ≠1,2 τi (3-38) The summation to estimate the dead time is over all the other time constants (i = 3, 4, etc.). This idea can be extended to the approximation of a first order with dead time function. ✎ Example 3.4: Find the simplest lower order approximation of the following transfer function Y X = 3 (0.1s + 1) (0.5s + 1) (s + 1) (3 s + 1) In this example, the dominant pole is at –1/3, corresponding to the largest time constant at 3 [time unit]. Accordingly, we may approximate the full order function as Y X = 3 e – 1.6 s (3 s + 1) where 1.6 is the sum of dead times 0.1, 0.5, and 1. With X representing a unit step input, the response of the full order function (solid curve) and that of the first order with dead time approximation (dotted curve) are shown in Fig. E3.4. The plotting of the dead time function is further approximated by the Padé approximation. Even so, the approximation is reasonable when time is large enough. The pole at –1/3 can indeed be considered as dominant. -0.5 0 0.5 1 1.5 2 2.5 3 0 6 12 18 y t Figure E3.4 The MATLAB statements are: p2=conv([1 1],[3 1]); p4=conv( conv(p2,[0.5 1]) , [0.1 1] ); G4=tf(3,p4); %The original full order function t=0:0.2:18; y4=step(G4,t); %Unit step response td=1+0.1+0.5; %Approximate dead time P1=tf([-td/2 1],[td/2 1]); G1=tf(3,[3 1]); y1=step(P1*G1,t); plot(t,y1,t,y4) 3 - 16 pole-zero cancellation. Finally, the value of y is nonzero at time zero. We may wonder how that could be the case when we use differential equation models that have zero initial conditions. 
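As noted before Example 3.4, a second order with dead time function generally provides a better estimate. One possible set of statements, continuing from the listing above, is given here as a sketch of what Review Problem 11 asks for; we keep the two largest time constants and lump the remaining lags into the dead time in the spirit of Eq. (3-38):

td2=0.1+0.5;                   %Dead time is now only the sum of the two small lags
P2=tf([-td2/2 1],[td2/2 1]);   %Pade approximation of exp(-0.6s)
G2=tf(3,conv([3 1],[1 1]));    %Retain the poles at -1/3 and -1
y2=step(P2*G2,t);              %t and y4 carry over from the statements above
plot(t,y2,t,y4)

The fit at short times should be noticeably better than with the first order with dead time version, consistent with the comment before Example 3.4.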
The answer has to do with the need for the response to match the rate of change term in the input. We'll get a better picture in Chapter 4 when we cover state space models. ✑ 3.5.2 Transfer functions in parallel There are circumstances when a complex process may involve two competing (i.e., opposing) dynamic effects that have different time constants. One example is the increase in inlet temperature to a tubular catalytic reactor with exothermic kinetics. The initial effect is that the exit temperature will momentarily decrease as increased conversion near the entrance region depletes reactants at the distal, exit end. Given time, however, higher reaction rates lead to a higher exit temperature. To model this highly complex and nonlinear dynamics properly, we need the heat and mass balances. In classical control, however, we would replace them with a linearized model that is the sum of two functions in parallel: Y X = K1 τ1 s + 1 + K2 τ2 s + 1 (3-50) We can combine the two terms to give the second order function Y X = K (τz s + 1) (τ1 s + 1) (τ2 s + 1) , (3-51) where K = K1 + K2, and τz = K1 τ2 + K2 τ1 K1 + K2 Under circumstances where the two functions represent opposing effects, one of them has a negative steady state gain. In the following illustration, we choose to have K2 < 0. Based on Eq. (3-51), the time response y(t) should be strictly overdamped. However, this is not necessarily the case if the zero is positive (or τz < 0). We can show with algebra how various ranges of Ki and τi may lead to different zeros (–1/τz) and time responses. However, we will not do that. (We'll use MATLAB to take a closer look in the Review Problems, though.) The key, once again, is to appreciate the principle of superposition with linear models. Thus we should get a rough idea of the time response simply based on the form in (3-50). Figure 3.6. Time response calculations with different time constants. In all cases, K1 = 3, K2 = –1, and the individual terms in Eq. (3-50) are in dashed curves. Their superimposed response y is the solid line. (a) τ1 = τ2 = 2; (b) τ1 = 0.5, τ2 = 2; (c) τ1 = 2, τ2 = 0.5. The numerical calculation is illustrated in Fig. 3.6. The input is a unit step, X = 1/s, and the two steady state gains are K1 = 3 and K2 = –1 such that |K1| > |K2|. We consider the three cases -1 -0.5 0 0.5 1 1.5 2 2.5 3 0 2 4 6 8 1 0 y t (a) -1 -0.5 0 0.5 1 1.5 2 2.5 3 0 2 4 6 8 1 0 y t (b) -1 -0.5 0 0.5 1 1.5 2 2.5 3 0 2 4 6 8 1 0 y t (c) 3 - 17 where (a) τ1 is roughly the same as τ2, (b) τ1 « τ2, and (c) τ1 » τ2. We should see that the overall response is overdamped in case (a) , but in case (b) we can have an overshoot, and in case (c), an initial inverse response. Note that all three cases have the same overall steady gain of K = 2.1 ❐ Review Problems 1. With respect to the step response of a first order model in Eq. (3-9), make a table for y(t)/MKp with t/τp = 1, 2, 3, 4 and 5. It is helpful to remember the results for t/τp = 1, 3, and 5. 2. It is important to understand that the time constant τp of a process, say, a stirred tank is not the same as the space time τ. Review this point with the stirred-tank heater example in Chapter 2. Further, derive the time constant of a continuous flow stirred-tank reactor (CSTR) with a first-order chemical reaction. 3. Write the time response solutions to the integrating process in (3-14) when the input is (a) a unit step and (b) an impulse. How are they different from the solutions to a self-regulating process? 4. 
Derive the time constant relationships stated in the footnotes of (3-20). 5. With respect to the overdamped solution of a second order equation in (3-21), derive the step response y(t) in terms of the more familiar exp(–t/τ1) and exp(–t/τ2). This is much easier than (3-21) and more useful too! 6. Show that when ζ = 0 (natural period of oscillation, no damping), the process (or system) oscillates with a constant amplitude at the natural frequency ωn. (The poles are at ±ωn) The period is 2πτ. 7. Use MATLAB to make plots of overshoot and decay ratio as functions of the damping ratio. 8. What is the expected time response when the real part of the pole is zero in a second order function? The pole can be just zero or have purely imaginary parts. 9. Plot the unit step response using just the first and second order Padé approximation in Eqs. (3.30) and (3-31). Try also the step response of a first order function with dead time as in Example 3.2. Note that while the approximation to the exponential function itself is not that good, the approximation to the entire transfer function is not as bad, as long as td « τ. How do you plot the exact solution in MATLAB? 10. Use MATLAB to observe the effect of higher order multicapacity models as in Example 3.3. Try to fit the fifth order case with a first order with dead time function. 11. With respect to Example 3.4, try also a second order with dead time approximation. 12. We do not have a rigorous criterion to determine when a pole is absolutely dominant. Plot the exponential decay with different time constants to get an idea when the terms associated with smaller time constants can be omitted. 13. With MATLAB, try do a unit step response of a lead-lag element in as in Eq. (3-49). 14. Repeat the time response simulation of inverse response in Section 3.5. Calculate the value of zero in each case. 1 When you repeat this exercise with MATLAB in the Review Problems, check that τz is negative in case (c). More commonly, we say that this is the case with a positive zero. After we have learned frequency response, we'll see that this is an example of what we refer to as non- minimum phase. 3 - 18 Hints: 1. y(t)/MKp at t/τp = 1, 2, 3, 4 and 5 are 0.63, 0.86, 0.95, 0.98, and 0.99. 2. The mass balance of a continuous flow stirred-tank reactor (CSTR) with a first-order chemical reaction is very similar to the problem in Section 2.8.1 (p. 2-20). We just need to add the chemical reaction term. The balance written for the reactant A will appear as: V d CA d t = q (Co – CA) – VkCA where CA is the molar concentration of A, V is the reactor volume, q is the volumetric flow rate, Co is the inlet concentration of A, and k is the first order reaction rate constant. If we define space time τ = V/q, the equation can be rewritten as τ d CA d t + (1 + kτ) CA = Co This is a linear equation if k and τ are constants. Now if we follow the logic in Section 2.8.2, we should find that the time constant of a CSTR with a first order reaction is τ/(1 + kτ). 3. Part of the answer is already in Example 3.1. 5. This is really an algebraic exercise in partial fractions. The answer hides in Table 2.1. 6. This is obvious from Eq. (3-17) or (3-19). 7. Plot Eq. (3-29) with 0 < ζ < 1. You can write a small M-file to do the plotting too. 8. See Question 6. 9. Follow Example 3.2. 
For the first order approximation, we can try, for example: td=3; %Use a M-file to re-run with different values P1=tf([-td/2 1],[td/2 1]); step(P1); %Note how the response starts from negative values t=0:0.1:50; taup=10; G1=tf(1,[taup 1]); y1=step(G1*P1,t); %y1 is first order with Pade approx of dead time y2=step(G1,t); %y2 has no time delay t2=t+td; plot(t,y1, t2,y2,’-.’); %Note how the Pade approx has a dip at the beginning 10. The MATLAB statements to do the unit step response are already in Example 3.4. You may repeat the computation with a different time constant. The statements to attempt fitting the five tanks-in-series response are: tau=3; G=tf(1,[tau 1]); [y5,t]=step(G*G*G*G*G); %The fifth order calculation G1=tf(1,[12 1]); y1=step(G1,t); %Using a time shift to do the t1=t+3; %first order with dead time plot plot(t,y5, t1,y1) The choice of the time constant and dead time is meant as an illustration. The fit will not be particularly good in this example because there is no one single dominant pole in the fifth order function with a pole repeated five times. A first order with dead time function will never provide a perfect fit. 11. Both first and second order approximation statements are here. q=3; 4-2 ✎ Example 4.1 : Derive the state space representation of a second order differential equation of a form similar to Eq. (3-16) on page 3-5: d2y dt2 + 2ζωn dy dt + ωn 2 y = Ku(t) (E4-1) We can do blindfolded that the transfer function of this equation, with zero initial conditions, is Gp (s) = Y(s) U(s) = K s2 + 2ζωn s + ωn 2 (E4-2) Now let's do something different. First, we rewrite the differential equation as d2y dt2 = – 2ζωn dy dt – ωn 2 y + Ku(t) and define state variables 1 x1 = y and x2 = d x1 d t (E4-3) which allow us to redefine the second order equation as a set of two coupled first order equations. The first equation is the definition of the state variable x2 in (E4-3); the second equation is based on the differential equation, d x2 d t = – 2ζωn x2 – ωn 2 x1 + Ku(t) (E4-4) We now put the result in a matrix equation: x1 x2 = 0 1 –ωn 2 –2ζωn x1 x2 + 0 K u(t) (E4-5) We further write y = 1 0 x1 x2 (E4-6) as a statement that x1 is our output variable. Compare the results with Eqs. (4-1) and (4-2), and we see that in this case, A = 0 1 –ωn 2 –2ζωn ; B = 0 K ; C = 1 0 ; D = 0 To find the eigenvalues of A, we solve its characteristic equation: |λI – A| = λ(λ + 2ζωn) + ωn2 = 0 (E4-7) We can use the MATLAB function tf2ss() to convert the transfer function in (E4-2) to state space form: 1 This exercise is identical to how we handle higher order equations in numerical analysis and would come as no surprise if you have taken a course on numerical methods. 4-3 z=0.5; wn=1.5; % Pick two sample numbers for ζ and ωn p=[1 2*z*wn wn*wn]; [a,b,c,d]=tf2ss(wn*wn,p) However, you will find that the MATLAB result is not identical to (E4-5). It has to do with the fact that there is no unique representation of a state space model. To avoid unnecessary confusion, the differences with MATLAB are explained in MATLAB Session 4. One important observation that we should make immediately: the characteristic polynomial of the matrix A (E4-7) is identical to that of the transfer function (E4-2). Needless to say that the eigenvalues of A are the poles of the transfer function. It is a reassuring thought that different mathematical techniques provide the same information. It should come as no surprise if we remember our linear algebra. 
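As a quick numerical check of this last point, we can compare the roots of the characteristic polynomial in (E4-2) with the eigenvalues of the model matrix as written in (E4-5). The statements below are only a sketch and reuse the sample numbers from above:

z=0.5; wn=1.5;              %Same sample numbers for ζ and ωn
p=[1 2*z*wn wn*wn];
roots(p)                    %Poles of the transfer function (E4-2)
A=[0 1; -wn*wn -2*z*wn];    %Model matrix as written in (E4-5)
eig(A)                      %Same result, –0.75 ± 1.299j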
✎ Example 4.2 : Draw the block diagram of the state space representation of the second order differential equation in the previous example. The result is in Fig. E4.2. It is quite easy to understand if we take note that the transfer function of an integrator is 1/s. Thus the second order derivative is located prior to the two integrations. The information at the summing point also adds up to the terms of the second order differential equation. The resulting transfer function is identical to (E4-2). The reduction to a closed-loop transfer function was done in Example 2.16 (p. 2-33). ✎ Example 4.3: Let's try another model with a slightly more complex input. Derive the state space representation of the differential equation d2y dt2 + 0.4 d y d t + y = d u d t + 3u , y(0) = dy/dt(0) = 0, u(0) = 0 , which has the transfer function Y U = s + 3 s2 + 0.4s + 1 . The method that we will follow is more for illustration than for its generality. Let's introduce a variable X1 between Y and U: Y U = X1 U Y X1 = 1 s2 + 0.4s + 1 (s + 3) The first part X1/U is a simple problem itself. X1 U = 1 s2 + 0.4s + 1 is the Laplace transformed of d2x1 d t2 + 0.4 d x1 d t + x1 = u K 1 s U + – X2 x (t) = y(t)x (t)x (t)u(t) 1 s 2ζω ω2 X1 = Y 2 12 + Figure E4.2 4-4 With Example 4.1 as the hint, we define the state variables x1 = x1 (i.e., the same), and x2 = dx1/dt. Using steps similar to Example 4.1, the result as equivalent to Eq. (4-1) is x1 x2 = 0 1 –1 –0.4 x1 x2 + 0 1 u (E4-8) As for the second part Y/X1 = (s + 3), it is the Laplace transformed of y = d x1 d t + 3x1 . We can use the state variables defined above to rewrite as y = x2 + 3x1 , or in matrix form y = 3 1 x1 x2 , (E4-9) which is the form of Eq. (4-2). With MATLAB, the statements for this example are: q=[1 3]; p=[1 0.4 1]; roots(p) [a,b,c,d]=tf2ss(q,p) eig(a) Comments at the end of Example 4.1 also apply here. The result should be correct, and we should find that both the roots of the characteristic polynomial p and the eigenvalues of the matrix a are –0.2 ± 0.98j. We can also check by going backward: [q2,p2]=ss2tf(a,b,c,d,1) and the original transfer function is recovered in q2 and p2 . ✎ Example 4.4: Derive the state space representation of the lead-lag transfer function Y U = s + 2 s + 3 . We follow the hint in Example 4.3 and write the transfer function as Y U = X U Y X = 1 s + 3 s + 2 From X/U = 1/(s+3), we have sX = –3X + U, or in time domain, dx dt = – 3x + u (E4-10) and from Y/X = s+2, we have Y = sX +2X and substitution for sX leads to Y = (–3X + U) + 2X = –X + U The corresponding time domain equation is y = –x + u (E4-11) u 1 s– 3 yx x –1 • Figure E4.4 4-7 The rest is trivial. We rewrite the differential equations in matrix form as d dt x1 x2 x3 = 0 1 0 0 – 1 – 2 1 0 – 10 x1 x2 x3 + 0 2 0 u , and y = 1 0 0 x1 x2 x3 (E4-23, 24) We can check with MATLAB that the model matrix A has eigenvalues –0.29, –0.69, and –10.02. They are identical with the closed-loop poles. Given a block diagram, MATLAB can put the state space model together for us easily. To do that, we need to learn some closed-loop MATLAB functions, and we will defer this illustration to MATLAB Session 5. An important reminder: Eq. (E4-23) has zero initial conditions x(0) = 0. This is a direct consequence of deriving the state space representation from transfer functions. Otherwise, Eq. (4-1) is not subjected to this restriction. 
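Before moving on, we can also let MATLAB do the work on this lead-lag function. The statements below are only a sketch of a check; as with Example 4.1, the realization returned by tf2ss() is not necessarily the one we derived by hand:

q=[1 2]; p=[1 3];
[a,b,c,d]=tf2ss(q,p)        %One possible realization of (s+2)/(s+3)
[q2,p2]=ss2tf(a,b,c,d,1)    %Going backward recovers s+2 and s+3

Note that d is nonzero here, which is consistent with the direct u term in (E4-11).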
4.2 Relation with transfer function models From the last example, we may see why the primary mathematical tools in modern control are based on linear system theories and time domain analysis. Part of the confusion in learning these more advanced techniques is that the umbilical cord to Laplace transform is not entirely severed, and we need to appreciate the link between the two approaches. On the bright side, if we can convert a state space model to transfer function form, we can still make use of classical control techniques. A couple of examples in Chapter 9 will illustrate how classical and state space techniques can work together. We can take the Laplace transform of the matrix equation in Eq. (4-1) to give sX(s) = AX(s) + BU(s) (4-3) where the capital X does not mean that it is a matrix, but rather it is used in keeping with our notation of Laplace variables. From (4-3), we can extract X explicitly as X(s) = (sI – A)–1 BU(s) = Φ(s)BU(s) (4-4) where Φ(s) = (sI – A)–1 (4-5) is the resolvent matrix. More commonly, we refer to the state transition matrix (also called the fundamental matrix) which is its inverse transform Φ(t) = L–1[(sI – A)–1] (4-6) We will use the same notation Φ for the time function and its Laplace transform, and only add the t or s dependence when it is not clear in which domain the notation is used. Setting D = 0 and X(s) derived in (4-4), the output Y(s) = CX(s) becomes Y(s) = CΦ(s)BU(s) (4-7) 4-8 In this case where U and Y are scalar quantities, CΦB must also be scalar.1 In fact, if we make an association between Eq. (4-7) and what we have learned in Chapter 2, CΦB is our ubiquitous transfer function. We can rewrite Eq. (4-7) as Y(s) = Gp(s)U(s), (4-7a) where Gp(s) = CΦ(s)B (4-8) Hence, we can view the transfer function as how the Laplace transform of the state transition matrix Φ mediates the input B and the output C matrices. We may wonder how this output equation is tied to the matrix A. With linear algebra, we can rewrite the definition of Φ in Eq. (4- 5) as Φ(s) = (sI – A)– 1 = adj (sI – A) det (sI – A) (4-5a) Substitution of this form in (4-8) provides a more informative view of the transfer function: Gp(s) = C adj (sI – A) B det (sI – A) (4-8a) The characteristic polynomial clearly is det (sI – A) = 0 (4-9) This is the result that we have arrived at, albeit less formally, in Example 4.1. Again, the poles of Gp are identical to the eigenvalues of the model matrix A. ✎ Example 4.7: We'll illustrate the results in this section with a numerical version of Example 4.5. Consider again two CSTR-in-series, with V1 = 1 m3, V2 = 2 m3, k1 =1 min–1, k2 =2 min–1, and initially at steady state, τ1 = 0.25 min, τ2 = 0.5 min, and inlet concentration cos = 1 kmol/m3. Derive the transfer functions and state transition matrix where both co and q are input functions. With the steady state form of (E4-12) and (E4-13), we can calculate c1s = cos 1 + k 1 τ1 = 1 1 + 0.25 = 0.8 , and c2s = c1s 1 + k 2 τ2 = 0.8 1 + 2(0.5) = 0.4 In addition, we find 1/τ1 = 4 min–1, 1/τ2 = 2 min–1, (1/τ1 + k1) = 5 min–1, (1/τ2 + k2) = 4 min–1, (cos – c1s)/V1 = 0.2 kmol/m6, and (c1s – c2s)/V2 = 0.2 kmol/m6. We substitute these numerical values in (E4-16) and (E4-17), and take the Laplace transform of these equations to obtain (for more general algebraic result, we should take the transform first) C1(s) = 4 s + 5 Co(s) + 0.2 s + 5 Q(s) (E4-25) and C2(s) = 2 s + 4 C1(s) + 0.2 s + 4 Q(s) 1 From Eq. (4-5), we see that Φ is a (n x n) matrix. Since B is (n x 1), and C is (1 x n), CΦB must be (1 x 1). 
4-9 Further substitution for C1(s) with (E4-25) in C2(s) gives C2(s) = 8 (s + 4) (s + 5) Co(s) + 0.2 (s + 7) (s + 4) (s + 5) Q(s) (E4-26) Equations (E4-25) and (E4-26) provide the transfer functions relating changes in flow rate Q and inlet concentration Co to changes in the two tank concentrations. With the state space model, substitution of numerical values in (E4-18) leads to the dynamic equations d dt c1 c2 = – 5 0 2 – 4 c1 c2 + 4 0.2 0 0.2 co q (E4-27) With the model matrix A, we can derive (sI – A) = s + 5 0 –2 s + 4 , and Φ(s) = (sI – A)– 1 = 1 (s + 5)(s + 4) s + 4 0 2 s + 5 (E4-28) We will consider (E4-19) where both concentrations c1 and c2 are outputs of the model. The transfer function in (4-7) is now a matrix Gp(s) = CΦ(s)B = 1 (s + 5)(s + 4) s + 4 0 2 s + 5 4 0.2 0 0.2 (E4-29) where C is omitted as it is just the identity matrix (E4-19).1 With input u(s) = [Co(s) Q(s)]T, we can write the output equation (4-6) as C1(s) C2(s) = CΦ(s)Bu(s) = 1 (s + 5)(s + 4) 4 (s + 4) 0.2 (s + 4) 8 0.2 (s + 7) Co(s) Q(s) (E4-30) This is how MATLAB returns the results. We of course can clean up the algebra to give C1(s) C2(s) = 4 (s + 5) 0.2 (s + 5) 8 (s + 5)(s + 4) 0.2 (s + 7) (s + 5)(s + 4) Co(s) Q(s) (E4-30a) which is identical to what we have obtained earlier in (E4-25) and (E4-26). The case of only one output as in (E4-20) is easy and we'll cover that in Example 4.7A. To wrap things up, we can take the inverse transform of (E4-30a) to get the time domain solutions: c1 c2 = 4e– 5t 0.2e– 5t 8 (e– 4t – e– 5t) 0.2 (3e– 4t – 2e– 5t) co q (E4-31) 1 Be careful with the notation. Upper case C is for concentration in the Laplace domain. The boldface upper case C is the output matrix. 4-12 We first take the inlet glucose, C2o, to be a constant (i.e., no disturbance) and vary only the dilution rate, D. From the steady state form of (E4-32) and (E4-33), we can derive (without special notations for the steady state values): D (C2o – C2) = µC1 Y , and C1 = Y(C2o – C2) (E4-36) Now we linearize the two equations about the steady state. We expect to arrive at (with the apostrophes dropped from all the deviation variables and partial derivatives evaluated at the steady state): d dt C1 C2 = ∂f1 ∂C1 ∂f1 ∂C2 ∂f2 ∂C1 ∂f2 ∂C2 s.s C1 C2 + ∂f1 ∂D ∂f2 ∂D s.s D (E4-37) Using (E4-35) to evaluate the partial derivatives, we should find d dt C1 C2 = 0 C1 µ' – µ Y – C1 Y µ' – µ s.s. C1 C2 + –C1 C1 Y s.s. D = Ax + BD (E4-38) where µ' is the derivative with respect to the substrate C2: µ' = dµ dC2 = µm Km (Km + C2) 2 (E4-39) All the coefficients in A and B are evaluated at steady state conditions. From Eq. (E4-32), D = µ at steady state. Hence the coefficient a11 in A is zero. To complete the state space model, the output equation is C1 C2 = 1 0 0 1 C1 C2 (E4-40) where C is the identity matrix. Now, we'll see what happens with two inputs. In practice, we most likely would make a highly concentrated glucose stock and dose it into a main feed stream that contains the other ingredients. What we manipulate is the dosage rate. Consider that the glucose feed stock has a fixed concentration C2f and adjustable feed rate qf, and the other nutrients are being fed at a rate of qo. The effective glucose feed concentration is C2o = qf C2f qf + qo = qf C2f Q (E4-41) where Q = qf + qo is the total inlet flow rate, and the dilution rate is D = Q V = qf + qo V (E4-42) The general fermentor model equation as equivalent to (E4-34) is d x dt = f(x, u) (E4-43) where the state space remains x = [C1 C2] T, but the input is the vector u = [Do Df]T. 
Here, Do = qo/V and Df = qf/V are the respective dilution rates associated with the two inlet streams. That is, we vary the main nutrient feed and glucose dosage flow rates to manipulate this system. The function, f, is 4-13 f(x, u) = f1(x, u) f2(x, u) = µ(C2)C1 – (Do + Df) C1 – µ(C2) C1 Y + DfC2f – (Do + Df) C2 (E4-44) At steady state, µ = (Do + Df) = D (E4-45) and C1 = Y(C*2o – C2) , where C*2o = Df C2f Do + Df (E4-46) The equations linearized about the steady state (with the apostrophes dropped from the deviation variables as in E4-38) are d dt C1 C2 = 0 C1µ' – µ Y – C1 Y µ' – µ C1 C2 + – C1 – C1 – C2 (C2f – C2) s.s. Do Df = Ax + Bu (E4-47) The output equation remains the same as in (E4-40). Laplace transform of the model equations and rearrangement lead us to C1 C2 = G11 G12 G21 G22 Do Df (E4-48) where the four open-loop plant transfer functions are: G11 = – C1 Y s.s. s – C1 C1 µ' Y + µ + C1µ'C2 s.s. p(s) (E4-49) G12 = – C1 Y s.s. s + C1µ' (C2f - C2) – C1 C1µ' Y + µ s.s. p(s) (E4-50) G21 = – C2 s.s.s + C1µ Y s.s. p(s) (E4-51) G22 = C2f – C2 s.s.s + C1µ Y s.s. p(s) (E4-52) and the characteristic polynomial p(s) = s2 + C1µ' Y + µ s.s. s + C1 µ µ' Y s.s. (E4-53) Until we can substitute numerical values and turn the problem over to a computer, we have to admit that the state space form in (E4-47) is much cleaner to work with. This completes our "feel good" examples. It may not be too obvious, but the hint is that linear system theory can help us analysis complex problems. We should recognize that state space representation can do everything in classical control and more, and feel at ease with the language of 4-14 state space representation. 4.3 Properties of state space models This section contains brief remarks on some transformations and the state transition matrix. We limit the scope to materials that one may draw on introductory linear algebra. ✑ 4.3.1 Time-domain solution We can find the solution to Eq. (4-1), which is simply a set of first order differential equations. As analogous to how Eq. (2-3) on page 2-2 was obtained, we now use the matrix exponential function as the integration factor, and the result is (hints in the Review Problems) x(t) = eAt x(0) + e– A(t –τ) Bu(τ) dτ 0 t (4-10) where the first term on the right evaluates the effect of the initial condition, and the second term is the so-called convolution integral that computes the effect of the input u(t). The point is that state space representation is general and is not restricted to problems with zero initial conditions. When Eq. (4-1) is homogeneous (i.e., Bu = 0), the solution is simply x(t) = eAtx(0) (4-11) We can also solve the equation using Laplace transform. Starting again from (4-1), we can find (see Review Problems) x(t) = Φ(t)x(0) + Φ(t – τ) Bu(τ) dτ 0 t (4-12) where Φ(t) is the state transition matrix as defined in (4-6). Compare (4-10) with (4-12), and we can see that Φ(t) = eAt (4-13) We have shown how the state transition matrix can be derived in a relatively simple problem in Example 4.7. For complex problems, there are numerical techniques that we can use to compute Φ(t), or even the Laplace transform Φ(s), but which of course, we shall skip. One idea (not that we really do that) is to apply the Taylor series expansion on the exponential function of A, and evaluate the state transition matrix with Φ(t) = eAt = I + At + 1 2! A2t2 + 1 3! A3t3 + ... (4-14) Instead of an infinite series, we can derive a closed form expression for the exponential function. 
For an n x n matrix A, we have eAt = αo(t)I + α1(t)A + α2(t)A2 + ... + αn–1(t)An–1 (4-15) The challenge is to find those coefficients αi(t), which we shall skip.1 1 We only need the general form of (4-15) later in Chapter 9. There are other properties of the state transition matrix that we have skipped, but we have structured our writing such that they are not needed here. 4-17 G=zpk([],[-1 -2 -3],1); S=ss(G); % S is the state space system canon(S) % Default is the diagonal form canon(S,'companion') % This is the observable companion There is no messy algebra. We can be spoiled! Further details are in MATLAB Session 4. ❐ Review Problems 1. Fill in the gaps in the derivation of (E4-25) and (E4-26) in Example 4.7 2. Write down the dimensions of all the matrixes in (4-6) for the general case of multiple-input and multiple-output models. Take x to be (n x 1), y (m x 1), and u (k x 1). And when y and u are scalar, CΦB is a scalar quantity too. 3. Fill in the gaps in the derivation of (4-9) from (4-3a). 4. For the SISO system shown in Fig. R4.4, derive the state space representation. Show that the characteristic equation of the model matrix is identical to the closed-loop characteristic polynomial as derived from the transfer functions. 5. Derive Eq. (4-10). 6. Derive Eq. (4-12). 7. Derive Eq. (4-23). Hints: 2. A is (n x n), B (n x k), C (m x n), Φ (n x n), and CΦB (m x k). 4. Multiply the K to the transfer function to give a gain of 3K. Then the rest is easier than Example 4.6. 5. We multiply Eq. (4-1) by exp(–At) to give e– At [x – Ax] = e– At Bu , which is d dt [e– Atx] = e– At Bu Integration with the initial condition gives e– At x(t) – x(0) = e– Aτ Bu(τ) dτ 0 t which is one step away from Eq. (4-10). 6. The Laplace transform of Eq. (4-1) with nonzero initial conditions is sX – x(0) = AX + BU or X = (sI – A)–1x(0) + (sI – A)–1BU Substituting in the definition Φ(s) = (sI – A)–1, we have X = Φ(s)x(0) + Φ(s)BU u 3 (s + 2)– 1 (s + 5) K x = y x 1 2 Figure R4.4 4-18 The time domain solution vector is the inverse transform x(t) = L–1[Φ(s)]x(0) + L–1[Φ(s)BU] and if we invoke the definition of convolution integral (from calculus), we have Eq. (4-12). 7. We first substitute x = Px– in Eq. (4-1) to give P d dt x = APx + Bu Then we multiply the equation by the inverse P–1 d dt x = P– 1APx + P– 1Bu which is (4-23). P.C. Chau © 2001 ❖ 5. Analysis of Single-Loop Control Systems We now finally launch into the material on controllers. State space representation is more abstract and it helps to understand controllers in the classical sense first. We will come back to state space controller design later. Our introduction stays with the basics. Our primary focus is to learn how to design and tune a classical PID controller. Before that, we first need to know how to set up a problem and derive the closed-loop characteristic equation. What are we up to? • Introduce the basic PID control schemes • Derive the closed-loop transfer function of a system and understand its properties 5.1 PID controllers We use a simple liquid level controller to illustrate the concept of a classic feedback control system.1 In this example (Fig. 5.1), we monitor the liquid level in a vessel and use the information to adjust the opening of an effluent valve to keep the liquid level at some user- specified value (the set point or reference). In this case, the liquid level is both the measured variable and the controlled variable—they are the same in a single-input single-output (SISO) system. 
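In MATLAB, we can at least evaluate the matrix exponential numerically with the function expm() (not exp(), which works element by element). As a sketch, we can check the state transition matrix of Example 4.7 at an arbitrary time, say t = 0.3, against the inverse transform of (E4-28):

A=[-5 0; 2 -4];             %Model matrix of Example 4.7
t=0.3;                      %An arbitrary time
expm(A*t)                   %Numerical evaluation of exp(At)
[exp(-5*t) 0; 2*(exp(-4*t)-exp(-5*t)) exp(-4*t)]   %Inverse transform of (E4-28)

The two matrices should agree to within roundoff.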
In this respect, the controlled variable is also the output variable of the SISO system. A system refers to the process which we need to control plus the controller and accompanying accessories such as sensors and actuators.2 Let's say we want to keep the liquid level at the set point, hs, but a sudden surge in the inlet flow rate qi (the disturbance or load) increases h such that there is a deviation h' = h – hs > 0. The deviation can be rectified if we open up the valve (or we can think in terms of lowering the flow resistance R). Here, we assume that the level controller will send out an appropriate signal to the valve to accomplish the task. It is logical to think that the signal from the controller, p(t), should be a function of the deviation. However, since we are to implement negative feedback, we base our decision on the error defined as e(t) = hs(t) – h(t) , 1 In Fig. 5.1, we use the actual variables because they are what we measure. Regardless of the notations in a schematic diagram, the block diagram in Fig. 5.2 is based on deviation variables and their Laplace transform. 2 Recall the footnote in Chapter 1: a process is referred to as a plant in control engineering. LT LC q h q p i h s R Figure 5.1. Schematic diagram of a liquid level control system. – E(s) H (s) H(s) P(s) s G c Figure 5.2. Information around the summing point in a negative feedback system. 5 - 4 5.1.2 Proportional-Integral (PI) control To eliminate offset, we can introduce integral action in the controller. In other words, we use a compensation that is related to the history of the error: p'(t) = 1 τI e'(t) dt 0 t ; P(s) E(s) = 1 s τI where τI is the integral time constant (reset time, or minutes per repeat1). Commercial devices may also use 1/τI which is called the reset rate (repeats per minute). The integral action is such that we accumulate the error from t = 0 to the present. Thus the integral is not necessarily zero even if the current error is zero. Moreover, the value of the integral will not decrease unless the integrand e'(t) changes its sign. As a result, integral action forces the system to overcompensate and leads to oscillatory behavior, i.e., the closed-loop system will exhibit an underdamped response. If there is too much integral action, the system may become unstable. In practice, integral action is never used by itself. The norm is a proportional-integral (PI) controller. The time-domain equation and the transfer function are: p'(t) = Kc e'(t) + 1 τ I e'(t) dt 0 t ; Gc(s) = Kc 1 + 1 τ I s (5-5) If the error cannot be eliminated within a reasonable period, the integral term can become so large that the controller is saturated—a situation referred to as integral or reset windup. This may happen during start-up or large set point changes. It may also happen if the proportional gain is too small. Many industrial controllers have "anti-windup" which temporarily halts the integral action whenever the controller output becomes saturated.2 On the plus side, the integration of the error allows us to detect and eliminate very small errors. To make a simple explanation of why integral control can eliminate offsets, refer back to our intuitive explanation of offset with only a proportional controller. If we desire e = 0 at steady state, and to shift controller output p away from the previous bias ps, we must have a nonzero term. Here, it is provided by the integral in Eq. (5-5). 
That is, as time progresses, the integral term takes on a final nonzero value, thus permitting the steady state error to stay at zero. General qualitative features of PI control • PI control can eliminate offset. We must use a PI controller in our design if the offset is unacceptably large. 1 Roughly, the reset time is the time that it takes the controller to repeat the proportional action. This is easy to see if we take the error to be a constant in the integral. 2 Another strategy is to implement the PI algorithm in the so-called reset feedback configuration. The basis of internal reset feedback is to rearrange and implement the PI transfer function as 1 + 1 τ I s = τ I s + 1 τ I s = 1 τ I s (τ I s + 1) = 1 1 – 1 (τ I s + 1) Now, the “internal state” of the controller, whether it be electronics or a computer algorithm for integration, will have an upper limit. External reset feedback, on the other hand, makes use of measurements of the manipulated variable. You may find such implementation details in more applied control books. 5 - 5 • The elimination of the offset is usually at the expense of a more underdamped system response. The oscillatory response may have a short rise time, but is penalized by excessive overshoot or exceedingly long settling time. 1 • Because of the inherent underdamped behavior, we must be careful with the choice of the proportional gain. In fact, we usually lower the proportional gain (or detune the controller) when we add integral control. 5.1.3 Proportional-Derivative (PD) control We certainly want to respond very differently if the temperature of a chemical reactor is changing at a rate of 100°C/s as opposed to 1°C/s. In a way, we want to "project" the error and make corrections accordingly. In contrast, proportional and integral controls are based on the present and the past. Derivative controller action is based on how fast the error is changing with time (rate action control). We can write p'(t) = τD de' dt ; P(s) E(s) = τDs where τD is the derivative time constant (sometimes just rate time). Here, the controller output is zero as long as the error stays constant. That is, even if the error is not zero. Because of the proportionality to the rate of change, the controller response is very sensitive to noise. If there is a sudden change in error, especially when we are just changing the set point, the controller response can be unreasonably large—leading to what is called a derivative kick. Derivative action is never used by itself. The simplest implementation is a proportional- derivative (PD) controller. The time-domain equation and the transfer function of an "ideal" PD controller are: p'(t) = Kc e'(t) + τ D de' dt ; Gc(s) = Kc 1 + τ D s (5-6) In practice, we cannot build a pneumatic device or a passive circuit which provides ideal derivative action. Commercial (real!) PD controllers are designed on the basis of a lead-lag element: Gc(s) = Kc τ D s + 1 ατD s + 1 (5-7) where α is a small number, typically 0.05 ≤ α ≤ 0.2. In effect, we are adding a very large real pole to the derivative transfer function. Later, after learning root locus and frequency response analysis, we can make more rational explanations, including why the function is called a lead-lag element. We'll see that this is a nice strategy which is preferable to using the ideal PD controller. 
To reduce derivative kick (the sudden jolt in response to set point changes), the derivative action can be based on the rate of change of the measured (controlled) variable instead of the rate of change of the error. One possible implementation of this idea is in Fig. 5.3. This way, the derivative control action ignores changes in the reference and just tries to keep the measured variable constant.2 1 Some texts use the term "sluggish" here without further qualification. The sluggishness in this case refers to the long settling time, not the initial response. 2 For review after the chapter on root locus: with the strategy in Fig. 5.3, the closed-loop characteristic polynomial and thus the poles remain the same, but not the zeros. You may also 5 - 6 General qualitative features of derivative control • PD control is not useful for systems with large dead time or noisy signals. • The sign of the rate of change in the error could be opposite that of the proportional or integral terms. Thus adding derivative action to PI control may counteract the overcompensation of the integrating action. PD control may improve system response while reducing oscillations and overshoot. (Formal analysis later will show that the problem is more complex than this simple statement.) • If simple proportional control works fine (in the sense of acceptable offset), we may try PD control. Similarly, we may try PID on top of PI control. The additional stabilizing action allows us to use a larger proportional gain and obtain a faster system response. 5.1.4 Proportional-Integral-Derivative (PID) control Finally, we can put all the components together to make a PID (or 3-mode) controller. The time- domain equation and the transfer function of an “ideal” PID controller are: p'(t) = Kc e'(t) + 1 τ I e'(t) dt 0 t + τ D de' dt (5-8a) and Gc(s) = Kc 1 + 1 τ I s + τ D s = Kc τ I τ D s 2 + τ I s + 1 τ I s (5-8b) We also find rather common that the proportional gain is multiplied into the bracket to give the integral and derivative gains: Gc(s) = Kc + KI s + KD s (5-8c) where KI = Kc/τI, and KD = KcτD. With a higher order polynomial in the numerator, the ideal PID controller is not considered physically realizable. We nevertheless use this ideal controller in analyses because of the cleaner algebra, and more importantly because we can gain valuable insight with it. You can say the same with the use of the ideal PD controller too. In real life, different manufacturers implement the “real” PID controller slightly differently.1 One possibility is to modify the derivative action as Gc(s) = Kc 1 + 1 τ I s + τ D s ατ D s + 1 = Kc (α + 1)τ D s + 1 ατ D s + 1 + 1 τ I s (5-9a) wonder how to write the function Gc(s), but it is much easier and more logical just to treat the PI action and derivation action as two function blocks when we analyze a problem. 1 Not only that, most implementations are based on some form of the digitized control law. An illustration of the positional digital algorithm along with concepts such as bumpless transfer, external rate feedback and bias tracking is in the LabView liquid level simulation module on our Web Support. I action + 1τ ατ + 1 erivative action + τ Figure 5.3. Implementation of derivative control on the measured variable. 5 - 9 loop were disconnected.1 We may also refer to GcGaGp as the forward loop transfer function. 
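The controller function Gc that enters these loop transfer functions can be generated directly from the PID forms above. The statements below are only a sketch with arbitrary settings; one statement follows the ideal form (5-8b) and the other the modified derivative form (5-9a):

Kc=2; taui=5; taud=1; alpha=0.1;               %Arbitrary controller settings
Gc_ideal=Kc*tf([taui*taud taui 1],[taui 0])    %Ideal PID, Eq. (5-8b)
Gc_real=Kc*(tf([(alpha+1)*taud 1],[alpha*taud 1])+tf(1,[taui 0]))   %Eq. (5-9a)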
Our analyses of SISO systems seldom take into account simultaneous changes in set point and load.2 We denote the two distinct possibilities as (1) Servo problems: Consider changes in set point with no disturbance (L = 0); C = GspR. Ideally (meaning unlikely to be encountered in reality), we would like to achieve perfect tracking of set point changes: C = R. Reminder: we are working with deviation variables. (2) Regulator problems: Consider changes in disturbance with a fixed set point (R = 0); C = GloadL. The goal is to reject disturbances, i.e., keep the system output at its desired value in spite of load changes. Ideally, we would like to have C = 0, i.e., perfect disturbance rejection. 5.2.2 How do we choose the controlled and manipulated variables? In homework problems, by and large, the variables are stated. Things will be different when we are on the job. Here are some simple ideas on how we may make the decision: Choice of controlled variables: • Those that are dictated by the problem. (For instance, temperature of a refrigerator.) • Those that are not self-regulating. • Those that may exceed equipment or process constraints. • Those that may interact with other variables. (For example, reactor temperature may affect product yield.) Choice of manipulated variables: • Those that have a direct effect on the process, especially the output variable. • Those that have a large steady state gain. (Good sensitivity) • Those that have no dead time. • Those that have no interaction with other control loops. After we have chosen the controlled and manipulated variables, the remaining ones are taken as load variables in a SISO system. 1 Can an open-loop be still a loop? You may wonder what is an open-loop? Often, we loosely refer elements or properties of part of a system as open-loop, as opposed to a complete closed-loop system. You’ll see more of this language in Chapter 7. 2 In real life, we expect probable simultaneous reference and disturbance inputs. As far as analysis goes, the mathematics is much simpler if we consider one case at a time. In addition, either case shares the same closed-loop characteristic polynomial. Hence they should also share the same stability and dynamic response characteristics. Later when we talk about integral error criteria in controller design, there are minor differences, but not sufficient to justify analyzing a problem with simultaneous reference and load inputs. 5 - 10 5.2.3 Synthesis of a single-loop feedback system We now walk through the stirred-tank heater system once again. This time, we’ll take a closer look at the transfer functions and the units (Fig. 5.5). Gp GL + Gm Gc – + T Ga T T Stirred-tank heater K m H T i Tsp (mV) (°C) (°C) (°C) (°C) (°C) (mV) (mV) Figure 5.5. Block diagram of a simple SISO closed-loop system with physical units. ◊ Process model The first item on the agenda is “process identification.” We either derive the transfer functions of the process based on scientific or engineering principles, or we simply do a step input experiment and fit the data to a model. Either way, we need to decide what is the controlled variable, which is also the measured variable. We then need to decide which should be the manipulated variable. All remaining variables are delegated to become disturbances. With the stirred-tank heater, we know quite well by now that we want to manipulate the heating coil temperature to control the tank temperature. The process function Gp is defined based on this decision. 
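Before we go through the synthesis of the heater loop, the servo and regulator cases defined above can be compared with a quick simulation. This is only a sketch: we take Ga = Gm = 1, pick an arbitrary first order process, let the load enter through the same first order function, and use a PI controller, so that Gsp = GcGp/(1 + GcGp) and Gload = Gp/(1 + GcGp):

Gp=tf(1,[2 1]);             %Arbitrary first order process, Kp=1, taup=2
Gc=tf(5*[10 1],[10 0]);     %Arbitrary PI controller, Kc=5, tauI=10
Gsp=feedback(Gc*Gp,1);      %Servo transfer function
Gload=feedback(Gp,Gc);      %Regulator (load) transfer function
subplot(211), step(Gsp)     %Tracks a unit set point change; settles at 1
subplot(212), step(Gload)   %A unit load change is rejected; settles back to 0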
In this simple illustration, the inlet temperature is the only disturbance, and the load function is defined accordingly. From Section 2.8.2 and Eq. (2-49b) on page 2-25, we have the first order process model: T = GL Ti + Gp TH = KL τp s + 1 Ti + Kp τp s + 1 TH (5-13) From that same section, we know that the steady state gain and the time constant are dependent on the values of flow rate, liquid density, heat capacity, heat transfer coefficient, and so on. For the sake of illustration, we are skipping the heat transfer analysis. Let's presume that we have done our homework, substituted in numerical values, and we found Kp = 0.85 °C/°C, and τp = 20 min. ◊ Signal transmitter Once we know what to control, we need to find a way to measure the quantity. If the transducer (sensor and transmitter packaged together) is placed far downstream or is too well insulated and the response is slow, the measurement transfer function may appear as Tm T = Gm = Kme – tds τm s + 1 (5-14) where Km is the measurement gain of the transducer, τm is the time constant of the device, and td accounts for transport lag. In a worst case scenario, the sensor may be nonlinear, meaning that the measurement gain would change with the range of operating conditions. With temperature, we can use a thermocouple, which typically has a resolution on the order of 0.05 mV/°C. (We could always use a RTD for better resolution and response time.) That is too small a change in output for most 12-bit analog-digital converters, so we must have an amplifier to boost the signal. This is something we do in a lab, but commercially, we should find off-the- 5 - 11 shelf transducers with the sensor and amplifier packaged together. Many of them have a scaled output of, for example, 0-1 V or 4-20 mA. For the sake of illustration, let’s presume that the temperature transmitter has a built-in amplifier which allows us to have a measurement gain of Km = 5 mV/°C. Let’s also presume that there is no transport lag, and the thermocouple response is rapid. The measurement transfer function in this case is simply Gm = Km = 5 mV/°C This so-called measurement gain is really the slope of a calibration curve—an idea that we are familiar with. We do a least squares fit if this curve is linear, and find the tangent at the operating point if the curve is nonlinear. ◊ Controller The amplified signal from the transmitter is sent to the controller, which can be a computer or a little black box. There is not much we can say about the controller function now, except that it is likely a PID controller, or a software application with a similar interface. A reminder is that a controller has a front panel with physical units such as °C. (Some also have relative scales of 0-100%.) So when we “dial” a change in the set point, the controller needs to convert the change into electrical signals. That’s why Km is part of the controller in the block diagram (Fig. 5.5). ◊ Actuator/control valve Last but not least, designing a proper actuator can create the most headaches. We have to find an actuator that can drive the range of the manipulated variable. We also want the device to have a faster response than the process. After that, we have to find a way to interface the controller to the actuator. A lot of work is masked by the innocent-looking notation Ga. In the stirred-tank heater example, we need to add several comments. We need to consider safety. If the system fails, we want to make sure that no more heat is added to the tank. 
◊ Controller

The amplified signal from the transmitter is sent to the controller, which can be a computer or a little black box. There is not much we can say about the controller function now, except that it is likely a PID controller, or a software application with a similar interface. A reminder is that a controller has a front panel with physical units such as °C. (Some also have relative scales of 0-100%.) So when we "dial in" a change in the set point, the controller needs to convert the change into electrical signals. That's why Km is part of the controller in the block diagram (Fig. 5.5).

◊ Actuator/control valve

Last but not least, designing a proper actuator can create the most headaches. We have to find an actuator that can drive the range of the manipulated variable. We also want the device to have a faster response than the process. After that, we have to find a way to interface the controller to the actuator. A lot of work is masked by the innocent-looking notation Ga.

In the stirred-tank heater example, we need to add several comments. First, we need to consider safety. If the system fails, we want to make sure that no more heat is added to the tank. Thus we want a fail-closed valve—meaning that the valve requires energy (or a positive signal change) to open it. In other words, the valve gain is positive. We can check the thinking as follows: if the tank temperature drops below the set point, the error increases. With a positive proportional gain, the controller output will increase, hence opening up the valve. If the process plant has a power outage, the valve closes and shuts off the steam.

But how can the valve shut itself off without power? This leads to the second comment. One may argue for emergency power or a spring-loaded valve, but to reduce fire hazards, the standard industrial practice is to use pneumatic (compressed air driven) valves that are regulated by a signal of 3-15 psi. The electrical signal to and from the controller is commonly 4-20 mA. A current signal is less susceptible to noise than a voltage signal over long transmission distances. Hence in a more applied setting, we expect to find a current-to-pressure transducer (I/P) situated between the controller output and the valve actuator.

Finally, we have been sloppy in associating the flow rate of steam with the heating coil temperature. The proper analysis, which includes a heat balance of the heating medium, is in the Review Problems. To sidestep the actual calculations, we have to make a few more assumptions for the valve gain to illustrate what we need to do in reality:

(1) Assume that we have the proper amplifier or transducer to interface the controller output with the valve, i.e., converting electrical information into flow rate.

We now take a formal look at the steady state error (offset). Let's consider a more general step change in set point, R = M/s. The eventual change in the controlled variable, via the final value theorem, is

   c'(∞) = lim_(s→0) s [K/(τ s + 1)] (M/s) = MK

The offset is the relative error between the set point and the controlled variable at steady state, i.e., (r – c∞)/r:

   ess = (M – MK)/M = 1 – K = 1 – Kc Kp/(1 + Kc Kp) = 1/(1 + Kc Kp)    (E5-3)

We can reduce the offset if we increase the proportional gain.

Let's take another look at the algebra of evaluating the steady state error. The error that we have derived in the example is really the difference between the change in controlled variable and the change in set point in the block diagram (Fig. 5.6). Thus we can write:

   E = R – C = R [1 – Gc Gp/(1 + Gc Gp)] = R [1/(1 + Gc Gp)]

Now if we have a unit step change R = 1/s, the steady state error via the final value theorem is (recall that e = e')

   ess = lim_(s→0) s [1/(1 + Gc Gp)] (1/s) = 1/[1 + lim_(s→0) Gc Gp] = 1/(1 + Kerr),  where  Kerr = lim_(s→0) Gc Gp    (5-16)

We name Kerr the position error constant.1 For the error to approach zero, Kerr must approach infinity. In Example 5.1, the error constant and steady state error are

   Kerr = lim_(s→0) Kc Kp/(τp s + 1) = Kc Kp,  and again  ess = 1/(1 + Kc Kp)    (5-17)

1 In many control texts, we also find the derivation of the velocity error constant (using R = s^–2) and the acceleration error constant (using R = s^–3), and a subscript p is used on what we call Kerr here.
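A quick numerical check of Eqs. (E5-3) and (5-17) is straightforward. The sketch below is ours, not the text's; it reuses Kp and τp from the process model above, picks an arbitrary Kc = 2, and takes Ga = Gm = 1 for simplicity.

% Steady state offset under proportional control, checked numerically.
% Kp and taup are from the process model; Kc = 2 is an arbitrary choice.
Kp = 0.85;  taup = 20;  Kc = 2;
Gp  = tf(Kp, [taup 1]);
Gcl = feedback(Kc*Gp, 1);     % closed-loop servo function with Ga = Gm = 1
K   = dcgain(Gcl);            % closed-loop steady state gain
ess = 1 - K                   % offset for a unit step in the set point
1/(1 + Kc*Kp)                 % Eq. (5-17) predicts the same value, about 0.37

Raising Kc shrinks the offset, exactly as Eq. (E5-3) suggests.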
✎ Example 5.2: Derive the closed-loop transfer function of a system with proportional control and a second order overdamped process. If the second order process has time constants of 2 and 4 min and a process gain of 1.0 [units], what proportional gain would provide us with a system with a damping ratio of 0.7?

In this case, we consider Gc = Kc and Gp = Kp/[(τ1 s + 1)(τ2 s + 1)], and substitution in Eq. (5-15) leads to

   C/R = Kc Kp/[(τ1 s + 1)(τ2 s + 1) + Kc Kp]
       = [Kc Kp/(1 + Kc Kp)] / { [τ1 τ2/(1 + Kc Kp)] s² + [(τ1 + τ2)/(1 + Kc Kp)] s + 1 }    (E5-4)

The key is to recognize that the system may exhibit underdamped behavior even though the open-loop process is overdamped. The closed-loop characteristic polynomial can have either real or complex roots, depending on our choice of Kc. (This is much easier to see when we work with root locus later.) For now, we rewrite the closed-loop function as

   C/R = K/(τ² s² + 2ζτ s + 1)    (E5-4a)

where the closed-loop steady state gain is K = Kc Kp/(1 + Kc Kp), and the system natural time period and damping ratio are

   τ = √[τ1 τ2/(1 + Kc Kp)],  and  ζ = (1/2)(τ1 + τ2)/√[τ1 τ2 (1 + Kc Kp)]    (E5-5)

If we substitute ζ = 0.7, Kp = 1, τ1 = 2, and τ2 = 4 in the second expression, we should find the proportional gain Kc to be 1.29. Lastly, we should see immediately that the system steady state gain in this case is the same as that in Example 5.1, meaning that this second order system will have the same steady state error.

In terms of controller design, we can take an entirely analytical approach when the system is simple enough. Of course, such circumstances are not common in real life. Furthermore, we often have to compromise between conflicting criteria. For example, we cannot require a system to have both a very fast rise time and a very short settling time. If we want to provide a smooth response to a set point change without excessive overshoot, we cannot also expect a fast and snappy initial response. As engineers, it is our job to decide.

In terms of design specification, it is not uncommon to use the decay ratio as the design criterion. Repeating Eq. (3-29), the decay ratio DR (or the overshoot OS) is a function of the damping ratio:

   DR = (OS)² = exp[–2πζ/√(1 – ζ²)]    (5-18)

We can derive from this equation

   ζ² = (ln DR)²/[4π² + (ln DR)²]    (5-19)

If we have a second order system, we can derive an analytical relation for the controller. If we have a proportional controller with a second order process, as in Example 5.2, the solution is unique. However, if we have, for example, a PI controller (two parameters) and a first order process, there is no unique answer since we only have one design equation. We must specify one more design constraint in order to have a well-posed problem.
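To close the loop on Example 5.2, here is a short MATLAB verification, ours rather than the author's, assuming the Control System Toolbox. It inverts Eq. (E5-5) for Kc and also uses Eq. (5-19) to translate a chosen decay ratio into a damping ratio.

% Back out Kc from the damping-ratio specification in Example 5.2.
Kp = 1;  tau1 = 2;  tau2 = 4;  zeta = 0.7;
% Rearranging (E5-5): tau1*tau2*(1 + Kc*Kp) = ((tau1 + tau2)/(2*zeta))^2
Kc = (((tau1 + tau2)/(2*zeta))^2/(tau1*tau2) - 1)/Kp     % about 1.3 (cf. 1.29 quoted above)
Gp = tf(Kp, conv([tau1 1], [tau2 1]));
damp(feedback(Kc*Gp, 1))      % reports a closed-loop damping ratio of ~0.7
% Eq. (5-19): damping ratio implied by, e.g., a quarter decay ratio
DR = 0.25;
zeta_qdr = sqrt((log(DR))^2/(4*pi^2 + (log(DR))^2))      % about 0.22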
With the expectation that the second order system may exhibit underdamped behavior, we rewrite the closed-loop function as C R = (τ I s + 1) τ 2 s2 + 2ζτ s + 1 (E5-6a) where the system natural time period and damping ratio are τ = τ I τ p Kc Kp , and ζ = 1 2 (1 + Kc Kp) τ I Kc Kp τ p (E5-7) While we have the analytical results, it is not obvious how choices of integral time constant and proportional gain may affect the closed-loop poles or the system damping ratio. (We may get a partial picture if we consider circumstances under which KcKp » 1.) Again, we’ll defer the analysis to when we cover root locus. We should find that to be a wonderful tool in assessing how controller design may affect system response. ✎ Example 5.4: Derive the closed-loop transfer function of a system with proportional- derivative control and a first order process. The closed-loop transfer function (5-15) with Gc = Kc(1 + τDs) and Gp = Kp τ p s + 1 is C R = Kc Kp (τ D s + 1) (τ p s + 1) + Kc Kp (τ D s + 1) = Kc Kp (τ D s + 1) (τ p + Kc Kp τ D ) s + 1 + Kc Kp (E5-8) The closed-loop system remains first order and the function is that of a lead-lag element. We can rewrite the closed-loop transfer function as C R = K (τ D s + 1) τ s + 1 (E5-8a) where the system steady state gain and time constant are K = Kc Kp 1 + Kc Kp and τ = τ p + Kc Kp τ D 1 + Kc Kp . The system steady state gain is the same as that with proportional control in Example 5.1. We, of course, expect the same offset with PD control too. The system time constant depends on various parameters. Again, we defer this analysis to when we discuss root locus. ✎ Example 5.5: Derive the closed-loop transfer function of a system with proportional control and an integrating process. What is the offset in this system? Let’s consider Gc = Kc, and Gp = 1/As, and substitution in Eq. (5-15) leads to C R = Kc A s + Kc = 1 (A Kc ) s + 1 (E5-9)