Sanchez D.A. Ordinary Differential Equations
Ordinary Differential Equations: A Brief Eclectic Tour
David A. Sanchez
Texas A&M University

Published and distributed by The Mathematical Association of America

CLASSROOM RESOURCE MATERIALS

Classroom Resource Materials is intended to provide supplementary classroom material for students—laboratory exercises, projects, historical information, textbooks with unusual approaches for presenting mathematical ideas, career information, etc.

Committee on Publications
Gerald Alexanderson, Chair
Zaven A. Karian, Editor
Frank Farris, David E. Kullman, Julian Fleron, Millianne Lehmann, Sheldon P. Gordon, William A. Marion, Yvette C. Hester, Stephen B. Maurer, William J. Higgins, Edward P. Merkes, Mic Jackson, Judith A. Palagallo, Paul Knopp, Andrew Sterrett, Jr.

101 Careers in Mathematics, edited by Andrew Sterrett
Archimedes: What Did He Do Besides Cry Eureka?, Sherman Stein
Calculus Mysteries and Thrillers, R. Grant Woods
Combinatorics: A Problem Oriented Approach, Daniel A. Marcus
Conjecture and Proof, Miklós Laczkovich
A Course in Mathematical Modeling, Douglas Mooney and Randall Swift
Cryptological Mathematics, Robert Edward Lewand
Elementary Mathematical Models, Dan Kalman
Geometry From Africa: Mathematical and Educational Explorations, Paulus Gerdes
Interdisciplinary Lively Application Projects, edited by Chris Arney
Laboratory Experiences in Group Theory, Ellen Maycock Parker
Learn from the Masters, Frank Swetz, John Fauvel, Otto Bekken, Bengt Johansson, and Victor Katz
Mathematical Modeling in the Environment, Charles Hadlock
Ordinary Differential Equations: A Brief Eclectic Tour, David A. Sanchez
A Primer of Abstract Mathematics, Robert B. Ash
Proofs Without Words, Roger B. Nelsen
Proofs Without Words II, Roger B. Nelsen
A Radical Approach to Real Analysis, David M. Bressoud
She Does Math!, edited by Marla Parker
Solve This: Math Activities for Students and Clubs, James S. Tanton

MAA Service Center
P.O.
Box 91112, Washington, DC 20090-1112
1-800-331-1MAA
FAX: 1-301-206-9789

Preface — Read This First!

A Little History

The study of ordinary differential equations is a rich subject, dating back to even before Isaac Newton and his laws of motion, and is one of the principal building blocks used to describe a process wherein a quantity (mass, current, population, etc.) exhibits change with respect to time or distance. Prior to the late 1800s, the larger part of the study of differential equations was devoted to trying to describe solutions analytically, in closed form or via power series or integral representations. The differential equations were almost lost in the sea of special functions (Bessel, Legendre, hypergeometric, etc.) which were their solutions. But with the publication of Henri Poincaré's memoir on the three-body problem in celestial mechanics in 1890, and his later three-volume work on the topic, the subject took a dramatic turn. Analysis was buttressed with geometry and topology to study the dynamics and stability of ordinary differential equations and systems thereof, whose solutions could at best only be approximated. Thus was created the qualitative theory of ordinary differential equations and the more general subject of dynamical systems.

Unfortunately, up until the last half of the past century, very little of the theory was available to students (with the exception of Russia, where a large, influential school flourished). Differential equations texts for undergraduates were largely dull "plug and chug" expositions which gave the lasting impression that the subject was a bag of tricks and time-consuming infinite series or numerical approximation techniques. Some faculty still believe this. But with the advances made in computers and computer graphics, available to students through programs like MAPLE, MATLAB, MATHEMATICA, etc., the subject and the textbooks took on a new vitality.
Especially important was the access to very powerful numerical techniques, such as RKF45, to approximate solutions to high degrees of accuracy, coupled with very sophisticated graphics able to display them. Now qualitative theory is linked with quantitative theory, making for a very attractive course providing valuable tools and insights. Furthermore, mathematical modeling of physical, biological, or socioeconomic problems using differential equations is now a major component of undergraduate textbooks. The slim, dry tomes of the pre-1960s, filled with arcane substitutions and exercises more intended to assess the reader's integration techniques, have been replaced with 600-plus-page volumes crammed with graphics, projects, pictures, and computing.

The Author's View

I have taught ordinary differential equations at all audience levels (engineering students, mathematics majors, graduate students), and have written or coauthored three books in the subject, as well as over forty research papers. One of the books was a forerunner of the colossal texts mentioned above, and may have been the first book which blended numerical techniques into the introductory chapters, instead of lumping them together in one largely ignored chapter towards the back of the book. Several years ago, I reviewed a number of the current textbooks used in the introductory courses (American Mathematical Monthly), and the article was intended to ask the question "Why have ordinary differential equation textbooks become so large?" But on later reflection I think the right question would have been whether the analytic and geometric foundations of the subject were blurred or even lost in the exposition. This is a question of real importance for the non-expert teaching the course, and of greater importance to the understanding of the student.
Pick up almost any of the current texts and turn to the first chapter, which usually starts off with a section entitled Classification of Differential Equations. This is comprised of examples of linear, nonlinear, first order, second order, et al. explicit and implicit ordinary differential equations, with a few partial differential equations thrown in for contrast, and probably confusion. Then there follows a set of problems with questions like "For the following differential equations identify which are linear, nonlinear, etc., and what are their orders?" This section is usually followed by a discussion usually named Solutions or Existence of Solutions, and the students, who may have a hazy understanding at best of what are the objects under examination, are now presented with functions (which come from who knows where) to substitute and to verify identities with. Or they may be presented the existence and uniqueness theorem for the initial value problem, then asked to verify for some specific equations whether a solution exists, when they are not at all sure of what is a solution, or even what is a differential equation. This confusion can linger well into the course, but the student may gain partial relief by finding integrating factors, solving characteristic polynomials, and comparing coefficients, still unclear about what's going on. This insecurity may be relieved by turning to the computer and creating countless little arrows waltzing across the screen, with curves pirouetting through them which supposedly describe the demise of some combative species, the collapse of a bridge, or, even more thrilling, the onset of chaos.
I do not blame the well qualified authors of these books; they have an almost impossible task of combining theory, technology, and applications to make the course more relevant and exciting, and to try and dispel the notion that ordinary differential equations is a "bag of tricks." But I think it is worthwhile to occasionally "stop and smell the roses," as they say, instead of galumphing through this rich garden, and that is the purpose of this little book.

A Tour Guide

The most important points are

1. This is not a textbook, but is instead a collection of approaches and ideas worth considering to gain further insight, of examples or results which bring out or amplify an important topic or behavior, and an occasional suggested problem. Current textbooks have loads of good problems.

2. The book can be used in several ways:
a. It can serve as a resource or guide in which to browse for the professor who is embarking on teaching an undergraduate course in ordinary differential equations.
b. It could be used as a supplementary text for students who want a deeper understanding of the subject.
c. For a keen student it could serve as a textbook if supplemented by some problems. These could be found in other introductory texts or college outlines (e.g., Schaum's), but more challenging would be for the student to develop his or her own exercises!

Consequently, the book is more conceptual than definitive, and more lighthearted than pedagogic. There is very little mathematical modeling in the book; that area is well covered by the existing textbooks and computer-based course materials. I am a devotee of population models and nonlinear springs and pendulums, so there may be a few of them, but mainly used to explain an analytic or geometric concept. The reader may have to fill in some calculations, but there is no list of suggested problems, except as they come up in the discussion.
Hopefully, after reading a section, the reader will be able to come up with his or her own illuminating problems or lecture points. If that occurs then my goal has been met. A brief description of each chapter follows; it gives the highlights of each chapter. The order is a standard one and fits most textbooks.

Chapter 1: Solutions

The notion of a solution is developed via familiar notions of a solution of a polynomial equation, then of an implicit equation, from which the leap to a solution of an ordinary differential equation is a natural one. This is followed by a brief introduction to the existence and uniqueness theorem, and the emphasis on its local nature is supported by a brief discussion of continuation—rarely mentioned in introductory texts. The discussion is greatly expanded in Chapter 2.

Chapter 2: First Order Equations

The two key ordinary differential equations of first order are the separable equation and the linear equation. The solution technique for the first is developed very generally, and leads naturally into an introductory discussion of stability of equilibria. The linear equation is discussed as a prelude to the higher dimensional linear system, with the special technique of integrating factors taking a back seat to the more general variation of parameters method. Some other topics mentioned are discontinuous inhomogeneous or forcing terms, and singular perturbations. The Riccati equation, the author's favorite nonlinear equation, is discussed next because it exemplifies many of the features common to nonlinear equations, and leads into a very elegant discussion of the existence of periodic solutions. This is a topic rarely discussed in elementary books, yet it gives nice insights into the subject of periodic solutions of linear and nonlinear equations. There is an expanded discussion in a later section, which includes a lovely fixed point existence argument.
The chapter concludes with a brief analysis of the question of continuous dependence on initial conditions, often confused with sensitive dependence when discussing chaotic behavior. Finally, we return to the subject of continuation, and the final example gives a nice picture of "how far can we go?"

Chapter 3: Insight Not Numbers

The chapter is not intended as an introduction to numerical methods, but to provide a conceptual platform for someone faced with the often formidable chapters on numerical analysis found in some textbooks. There must be an understanding of the idea that numerical schemes basically chase derivatives, hence the Euler and Improved Euler methods are excellent examples. Next, the notion of the error of a numerical approximation must be understood, and this leads nicely into a discussion of what is behind a scheme that controls local error in order to reduce global error. The popular RKF45 scheme is discussed as an example.

Chapter 4: Second Order Equations

In a standard one semester course the time management question moves center stage when one arrives at the analysis of higher order linear equations and first order linear systems. One is faced with characteristic polynomials, matrices, eigenvalues, eigenvectors, etc., and the presentation can become a mini-course in linear algebra, which takes time away from the beauty and intent of the subject. For the second order equation the author suggests two approaches:
a) Deal directly with the general time-dependent second order equation, developing fundamental pairs of solutions and linear independence via the Wronskian.
b) Develop the theory of the two-dimensional time-dependent linear system, again discussing fundamental pairs of solutions whose linear independence is verified with the Wronskian. Then consider the special case where the linear system represents a second order equation.
The emphasis in both approaches is on fundamental pairs of solutions, and linear independence via the Wronskian test, and not spending a lot of time on linear algebra and/or linear independence per se. The discussion continues with a now easily obtained analysis of the constant coefficient second order equation and damping. This is followed by the non-homogeneous equation and the variation of parameters method, pursuing the approach used for the first order linear equation, and using the linear system theory previously developed. The advantage is that one can develop the usual variation of parameters formula without appealing to the "mysterious" condition needed to solve the resulting set of equations.

The section Cosines and Convolutions is a discussion of the inhomogeneous constant coefficient equation where the forcing term is periodic. This leads to a general discussion of the onset of resonance, and for the forced oscillator equation we develop a nice convolution representation of the solution which determines whether resonance will occur for any periodic forcing term. The representation can be developed directly, or from the variation of parameters formula with more effort; Laplace transform theory is not needed.

The final section of the chapter gives some of the author's views on various topics usually covered under the second order linear equation umbrella. Infinite series solutions is another point in the introductory course where one must make decisions on how deeply to proceed. The author has some advice for those who wish to move on but want to develop some understanding. There is no mention of the Laplace transform—any kind of a proper treatment is far beyond the intent of this book, even with the nice computer packages which have eliminated the dreary partial fraction expansions.
Chapter 5: Linear and Nonlinear Systems

Given the material in the previous chapter it is now an easy task to discuss the constant coefficient linear system, for both the homogeneous and inhomogeneous case (the variation of parameters formula). But for the two-dimensional case, the important topic is the notion of the phase plane, and this is thoroughly analyzed. However, the character of almost all the possible configurations around an equilibrium point is more easily analyzed using the two-dimensional representation of the second order scalar equation. The trajectories can be drawn directly and correspond to those in the more general case via possible rotations and dilations. This approach virtually eliminates all discussion of eigenvectors and eigenvalues, which is in keeping with the spirit of the book.

Next is given a general rationale, depending on the per capita growth rate, for the construction of competition and predator-prey models of population growth. This makes it easier to explain particular models used in the study of populations, as well as to develop one's own. In Chapter 1 the constant rate harvesting of a population modeled by the logistic equation was discussed, and a nice geometric argument led to the notion of a critical harvest level, beyond which the population perishes. A similar argument occurs for a two population model in which one of the populations is being harvested—this is rarely discussed in elementary books. There follows a section on conservative systems, which have the nice feature that one can analyze their stability using simple graphing techniques, without having to do an eigenvalue analysis, and also introduce conservation of energy. The book concludes with a proof, using Gronwall's Lemma, of the Perron/Poincaré result for quasilinear systems, which is the foundation of much of the analysis of stability of equilibria.
It is a cornerstone of the study of nonlinear systems, and seems a fitting way to end the book.

Conclusion

I hope you enjoy the book and that it gives you some conceptual insights which will assist your teaching and learning of the subject of ordinary differential equations. Perhaps the best summary of the philosophy you should adopt in your reading was given by Henri Poincaré:

In the past an equation was only considered solved when one had expressed the solution with the aid of a finite number of known functions; but this is hardly possible one time in a hundred. What we should always try to do, is to solve the qualitative problem, that is to find the general form of the curve representing the unknown function.

Henri Poincaré
King Oscar's Prize, 1889

Contents

Preface — Read This First!

1 Solutions
1 Polynomials
2 More General Equations
3 Implicit Equations
4 Ordinary Differential Equations
5 Existence and Uniqueness—Preliminaries
6 Continuation

2 First Order Equations
1 The Separable Equation—Expanded
2 Equilibria—A First Look at Stability
3 Multiple Equilibria
4 The Linear Equation
5 The Riccati Equation
6 Comparison Sometimes Helps
7 Periodic Solutions
8 Differentiation in Search of a Solution
9 Dependence on Initial Conditions
10 Continuing Continuation

3 Insight Not Numbers
1 Introduction
2 Two Simple Schemes
3 Key Stuff
4 RKF Methods
5 A Few Examples

4 Second Order Equations
1 What Is The Initial Value Problem?
2 The Linear Equation
3 Approach 1—Dealing Directly
4 Approach 2—The Linear System
5 Comments on the Two Approaches
6 Constant Coefficient Equations—The Homogeneous Case
7 What To Do With Solutions
8 The Nonhomogeneous Equation or the Variation of Parameters Formula
9 Cosines and Convolutions
10 Second Thoughts

5 Linear and Nonlinear Systems
1 Constant Coefficient Linear Systems
2 The Phase Plane
3 The Linear System and the Phase Plane
4 Competition and Predator-Prey Systems
5 Harvesting
6 A Conservative Detour
7 Stability and Gronwall-ing

References
Index

1 Solutions

To begin to understand the subject of ordinary differential equations (referred to as ODEs in the interest of brevity) we first need to understand and answer the question "What is a solution?" Perhaps the easiest way is to start the hierarchy of solutions way back in our algebra days and move towards our desired goal.

1 Polynomials

Consider a general polynomial of degree n,

P(x) = a_n x^n + a_{n-1} x^{n-1} + ... + a_1 x + a_0,  a_n ≠ 0,

and suppose we wish to find a solution of the equation P(x) = 0. In this case a solution is a number x = a satisfying P(a) = 0. The first question to ask is that of EXISTENCE—does a solution exist? If the coefficients a_k are real the answer is MAYBE if we want real solutions—contrast P(x) = x^2 - 1 and P(x) = x^2 + 1. But if we extend the domain of the coefficients and solutions to be the complex numbers, then the answer is YES, by the Fundamental Theorem of Algebra. The theorem tells us more—it says that there will be n solutions, not necessarily all distinct, so the question of UNIQUENESS, except in the case of n = 1 when there is exactly one solution of P(x) = 0, is essentially a moot one. The question CAN WE FIND THEM? would suggest algorithms for finding solutions. They exist for n = 2 (the quadratic formula), n = 3 and n = 4, but beyond that the best we can usually do is numerically approximate the solutions, i.e., "solve" approximately.
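For n = 2 the algorithm is the quadratic formula, and working over the complex numbers makes the EXISTENCE answer YES concrete. A small sketch, in Python purely for illustration (the book itself endorses no particular software):

```python
import cmath

def quadratic_roots(a, b, c):
    """Solve a*x**2 + b*x + c = 0 over the complex numbers.

    By the Fundamental Theorem of Algebra there are always two
    roots, counted with multiplicity."""
    d = cmath.sqrt(b * b - 4 * a * c)
    return ((-b + d) / (2 * a), (-b - d) / (2 * a))

# P(x) = x^2 - 1 has two real solutions, 1 and -1.
real_case = quadratic_roots(1, 0, -1)
# P(x) = x^2 + 1 has no real solutions, but two complex ones, i and -i.
complex_case = quadratic_roots(1, 0, 1)
```

Using `cmath` rather than `math` is the whole point: the same formula that reports "no solution" over the reals always produces two roots over the complex numbers.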
2 More General Equations

For equations F(x) = 0, where F: R → R or F: R^n → R^n is a non-polynomial function, there are no general results regarding existence or uniqueness, except some general fixed point results (Brouwer's Theorem), and for n = 1 something simple like:

If F is continuous and F(x_0) > 0, F(x_1) < 0, x_0 < x_1, then F(x) = 0 has at least one solution in the interval x_0 < x < x_1.

Equations may have many solutions, e.g., F(x) = sin x - x/60, or none, e.g., F(x) = e^x - x/5, and solution techniques involve some approximation scheme like Newton's Method.

Remark. An exercise to show the value of some pen and paper calculation before calling in the number crunching is to find the next to last zero of the first function, where the zeros are numbered consecutively starting with x_1 = 0.

3 Implicit Equations

Now consider the equation F(x, y) = 0, where for simplicity we can assume that x, y ∈ R, though the case x ∈ R, y ∈ R^n is a simple generalization. The question is "Can we solve for y?"; that is, can we find a solution y(x), a function satisfying F(x, y(x)) = 0. In some cases the solution can be

unique: x^2 - ln y = 0; y = e^{x^2};
not unique: -x^2 + y^2 - 4 = 0; y = ±√(4 + x^2);
defined only on a finite interval: x^2 + y^2 - 4 = 0; y = +√(4 - x^2), y = -√(4 - x^2), only defined for |x| ≤ 2;
unbounded: x^2 + 1/y - 9 = 0; y = 1/(9 - x^2), only defined for |x| < 3;
not explicitly solvable: F(x, y) = x^2 + 4xy^2 + sin(πy) - 1 = 0.

What is needed is some kind of an Existence and Uniqueness Theorem, and it is provided by the Implicit Function Theorem:

Given F(x, y) = 0 and initial conditions y(x_0) = y_0 satisfying F(x_0, y_0) = 0, suppose that F(x, y) is continuously differentiable in a neighborhood of (x_0, y_0) and ∂F/∂y(x_0, y_0) ≠ 0. Then there is a unique solution y(x), defined and continuous in some neighborhood N of (x_0, y_0), satisfying the initial conditions, and F(x, y(x)) = 0 for x ∈ N.
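The pen and paper step in the Remark above is what makes the number crunching tractable: since |sin x| ≤ 1, every zero of F(x) = sin x - x/60 satisfies |x| ≤ 60, so a sign-change scan over [0, 60) followed by bisection finds them all. A sketch, with illustrative names of my own choosing:

```python
import math

def bisect(F, a, b, tol=1e-10):
    """Find a zero of F in [a, b], assuming F(a) and F(b) differ in sign
    (or one endpoint is already a zero)."""
    fa = F(a)
    while b - a > tol:
        m = (a + b) / 2
        if fa * F(m) <= 0:
            b = m            # the sign change is in [a, m]
        else:
            a, fa = m, F(m)  # the sign change is in [m, b]
    return (a + b) / 2

F = lambda x: math.sin(x) - x / 60

# Scan [0, 60) for sign changes, then refine each bracket.
zeros = []
x, step = 0.0, 0.5
while x < 60:
    if F(x) * F(x + step) <= 0:
        zeros.append(bisect(F, x, x + step))
    x += step
```

The scan itself is the "calling in the number crunching" stage; the exercise's point stands, since without the bound |x| ≤ 60 one would not know where to stop scanning.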
The theorem is a local existence theorem, inasmuch as the solution may be defined only near (x_0, y_0).

Example. F(x, y) = x^2 + y^2 - 4 = 0, and ∂F/∂y = 2y ≠ 0 for y ≠ 0. Let (x_0, y_0) = (1.8, √0.76); then the solution y(x) = √(4 - x^2) is only defined for -3.8 < x - 1.8 < 0.2.

4 Ordinary Differential Equations

Now consider a first order ordinary differential equation

dx/dt = f(t, x).

Suppose x(t) is a solution; then we can write ẋ(t) = f(t, x(t)), and if we integrate with respect to t this implies

x(t) = ∫^t f(s, x(s)) ds + C = F(t, C),

where F(t, C) represents the indefinite integral, and C is an appropriate constant of integration. So a general solution will be of the form x(t, C), and we obtain the notion of a family of solutions parameterized by C.

[Figure: a family of solution curves x(t, C_1), x(t, C_2), x(t, C_3).]

But we can go further: suppose we are given the initial condition x(t_0) = x_0. Then we want the constant C to be chosen accordingly, and can write the formal expression for the solution

x(t) = x_0 + ∫_{t_0}^t f(s, x(s)) ds.

It is easily verified that x(t_0) = x_0 and ẋ(t) = f(t, x(t)); the expression is a Volterra integral equation representation of the solution. It is key to the proof of the existence and uniqueness theorem, but is not very useful since it gives no clue of what x(t) is.

Example. ẋ = t^2 + x^2, x(0) = 1; then x(t) satisfies

x(t) = 1 + ∫_0^t [s^2 + x^2(s)] ds,

which is not a terribly enlightening fact.

5 Existence and Uniqueness—Preliminaries

First of all, if the world's collection of ODEs were represented by this page, then this dot . would be the ones that can be explicitly solved, meaning we can find an explicit representation of a solution x(t, C). This is why we need numerical and graphical techniques, and mathematical analysis, to approximate solutions or describe their properties or behavior. But to do this we need to have some assurance that solutions exist. The statement of existence depends on initial values (initial conditions), so we formulate the initial value problem:

(IVP)  dx/dt = f(t, x),  x(t_0) = x_0.
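The integral representation is "not very useful" as a formula, but it is exactly what drives successive approximations (Picard iteration): start from x_0(t) ≡ 1 and repeatedly feed the current guess into the right-hand side. A rough numerical sketch for the example ẋ = t² + x², x(0) = 1, using the trapezoid rule on a grid; the function and variable names here are illustrative, not from the book:

```python
def picard_step(ts, xs, x0):
    """One Picard iterate x_new(t) = x0 + integral_0^t (s^2 + x(s)^2) ds,
    computed with the trapezoid rule on the grid ts."""
    f = [t * t + x * x for t, x in zip(ts, xs)]
    new, acc = [x0], 0.0
    for i in range(1, len(ts)):
        acc += 0.5 * (f[i - 1] + f[i]) * (ts[i] - ts[i - 1])
        new.append(x0 + acc)
    return new

n = 200
ts = [0.5 * i / n for i in range(n + 1)]   # grid on [0, 0.5]
xs = [1.0] * (n + 1)                       # starting guess x_0(t) = 1
for _ in range(20):                        # iterate the integral operator
    xs = picard_step(ts, xs, 1.0)
```

After a modest number of iterations the iterates stop changing, which is the practical face of the contraction argument behind the existence and uniqueness theorem; it still gives no closed formula for x(t).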
Analogous to the implicit function theorem, we want conditions on f assuring us that a solution exists inside some region around the point (t_0, x_0). We want to know that there exists at least one differentiable function x(t), defined near t_0, satisfying ẋ(t) = f(t, x(t)) and passing through the point (t_0, x_0). For simplicity suppose that B is a rectangle with center (t_0, x_0),

B = {(t, x) | |t - t_0| ≤ a, |x - x_0| ≤ b},

and that f(t, x) is continuous in B. Then the IVP has at least one solution x(t), defined for |t - t_0| ≤ r, some r > 0. If, in addition, f satisfies a Lipschitz condition in x on B,

|f(t, x) - f(t, y)| ≤ K|x - y|,

then the solution is unique. Pretty simple—but note the theorem is local in nature, since the solution is guaranteed to exist only for |t - t_0| ≤ r, some r > 0. A Lipschitz condition can be awkward to check directly, but the condition of a continuous partial derivative ∂f/∂x in B is much easier to verify, and it implies a Lipschitz condition with K = max_B |∂f/∂x|.

Example. Since ||x| - |y|| ≤ |x - y|, then f(t, x) = |x| satisfies a Lipschitz condition with K = 1 in any B of the form {(t, x) | |t - t_0| ≤ a, |x| ≤ b}, even though ∂f/∂x does not exist at x = 0.

6 Continuation

Given the solution x(t) of the IVP, defined for t_0 - r ≤ t ≤ t_0 + r, we can apply the existence and uniqueness theorem again at the point (t_0 + r, x(t_0 + r)) to obtain a new solution, defined for |t - (t_0 + r)| ≤ r_1, some r_1 > 0, which will agree with x(t) on their overlapping interval of existence. We have continued or extended the original solution x(t), defined on the interval t_0 - r ≤ t ≤ t_0 + r, to the interval t_0 - r ≤ t ≤ t_0 + r + r_1, and the process can be repeated at either end. It seems plausible that the solution could be extended to some larger interval α < t < β, or even -∞ < t < ∞. Furthermore, it seems equally plausible that for the IVP, the solution x(t) satisfying x(t_0) = x_0 will possess a maximum interval of existence, α < t < β, beyond which it cannot be extended.

2  First Order Equations

1 The Separable Equation—Expanded

a) ẋ = kx, k ≠ 0. Separating variables,

dx/x = k dt  or  ln x = kt + C,  so  x(t) = e^{kt+C} = e^{kt} e^C = Ce^{kt}.

Since C is an arbitrary constant we can take liberties with it, so e^C becomes C, and x(t) > 0 implies C > 0, x(t) < 0 implies C < 0. We may have offended purists who insist that ∫ dx/x = ln|x|, but carrying around the absolute value sign is a nuisance, and the little C stratagem forgives all. Then x(t_0) = x_0 gives C = x_0 e^{-kt_0} (so C takes the sign of x_0) and x(t) = x_0 e^{k(t - t_0)}, which is

exponential decay if k < 0, and x(t) → 0, the equilibrium, as t → ∞;
exponential growth if k > 0, and x(t) becomes unbounded as t → ∞.

b) ẋ = 1 + x^2: here g(x) = 1 + x^2 and g'(x) = 2x are continuous for -∞ < x < ∞, so every initial value problem has a unique solution, yet each solution x(t) = tan(t + C) is defined only on a finite interval.

c) ẋ = 3x^{2/3}: here g'(x) = 2x^{-1/3} is not continuous at x = 0, and uniqueness fails there: with x(t_0) = 0, both x(t) ≡ 0 and x(t) = (t - t_0)^3 are solutions, as is

x(t) = { 0, -∞ < t ≤ t_0;  (t - t_0)^3, t > t_0 }.

The agreement of the derivative at (t_0, 0) of both solutions makes it possible to glue them.

d) ẋ = 1/(1 + x^4) and ẋ = 1 + x^4, x(t_0) = x_0. In both cases, -∞ < x < ∞, but in the first case we obtain the implicit representation

x^5/5 + x = t - t_0 + x_0^5/5 + x_0,

which can't be solved for x = x(t), the solution. In the second case we get

∫_{x_0}^x ds/(1 + s^4) = t - t_0,

and the integral cannot be explicitly evaluated. In both cases, you must call your friendly number cruncher.

C. ẋ = f(t)g(x)

If f(t), g(x) and ∂/∂x (f(t)g(x)) = f(t)g'(x) are continuous for a < t < b and c < x < d, then existence and uniqueness is assured for initial conditions in the region. For example, ẋ = -x^2 sin t has solutions x(t) = (C - cos t)^{-1}: if |C| > 1 then the solution is defined for -∞ < t < ∞, whereas if |C| < 1 it will only be defined on a finite interval. For instance, x(π/2) = 1/2 implies C = 2, and x(t) = (2 - cos t)^{-1}, -∞ < t < ∞.

2 Equilibria—A First Look at Stability

For the autonomous equation ẋ = g(x), a point x_0 where g(x_0) = 0 yields the constant solution x(t) ≡ x_0, an equilibrium (or critical) point, and the sign of g'(x_0) governs the behavior of nearby solutions.

g'(x_0) > 0: Then ẋ = g(x) is negative for x < x_0 and near x_0, and is positive for x > x_0 and near x_0, since g(x) is an increasing function near x_0 and vanishes at x_0. Consequently, any solution x(t) with an initial value x(t_0) = x_1, x_1 < x_0 and x_1 near x_0, must be a decreasing function, so it moves away from x_0. But if x_1 > x_0 and x_1 near x_0, the solution must be an increasing function and it moves away from x_0. We have the picture:

[Figure: solution curves, the phase line, and the flow near x_0, with arrows pointing away from x_0.]

The middle picture is one of the phase line, where we regard solutions x(t) as moving points on the x-axis—it is a useful way to display the behavior of solutions in the one-dimensional case, but the third picture is more illustrative. We can call such a critical point an unstable equilibrium point or source (a term inherited from physics). In physical terms we can think of the ODE as governing a system which is at rest when x(t) = x_0, but any small nudge will move the system away from rest. The solutions are repelled by their proximity to x_0, which gives rise to another colorful name: x_0 is a repeller or repelling point.

g'(x_0) < 0: The previous analysis repeated leads to the following set of pictures:

[Figure: solution curves, the phase line, and the flow near x_0, with arrows pointing toward x_0.]

We call such a critical point a stable equilibrium point or sink (physics again lends a nice household name).
In physical terms the system is at rest when x(t) = x_0, and under any small nudge the system will attempt to return to the rest position. Nearby solutions are attracted to x_0, which gives us another name: x_0 is an attractor or attracting point. This is our first venture into the qualitative theory of ODEs, and the important point is that the analysis allows us to describe the behavior of solutions near x_0 whether we can explicitly solve the ODE or not!

Example. ẋ = x/(x - 1), x ≠ 1. Then g(0) = 0 and g'(0) = -1, so x = 0 is a stable equilibrium point. Solving the equation leads to the implicit expression for x = x(t):

x - ln x = t + C,

which we can't solve for x = x(t). But we have a very good picture of what happens to solutions whose initial values are close to 0.

The last example brings up a point worth amplifying. Suppose x_0 is a stable equilibrium point and g'(x_0) < 0. Then given any solution x(t) with initial value x(t_0) = x_1, with x_1 > x_0 but sufficiently close to x_0, we have

dx/dt |_{t=t_0} = g(x_1) < 0 (go back to the relevant picture),

hence x(t) is decreasing near t = t_0. But now apply the existence and uniqueness criteria, and the continuation argument previously given, and we can infer that x(t) must continue to decrease; but since it can't cross the line x = x_0, we conclude that

lim_{t→∞} x(t) = x_0.

A rigorous argument justifying the last assertions, which are correct, might take a few paragraphs, but we should follow our geometric hunches—they are rarely wrong (frequently right?).

The last statement leads to the definition of the asymptotic stability of x_0, for which a homespun characterization (avoiding epsilons and deltas) at this point is all that is needed:

The equilibrium point x_0 is asymptotically stable if once a solution x(t) is close to x_0 for some t = t_1, it stays close for all t > t_1, and furthermore lim_{t→∞} x(t) = x_0.

To expand on this requires the subtle distinction between stability (once close, always close) and asymptotic stability.
This distinction is a much richer one in the study of higher dimensional systems, and makes little sense for one-dimensional ODEs, unless we want to do a little nit-picking and discuss the case g(x) ≡ 0, when every x_0 is a solution, and so is every x_1 as close to x_0 as we want.

g'(x_0) = 0: This is the ambiguous case, and requires individual analysis of an ODE having the property. We can have

x_0 is stable: e.g., x_0 = 0 and g(x) = -2x^3;
x_0 is unstable: e.g., x_0 = 0 and g(x) = 2x^3;

the reader is left to verify the conclusions. But a third possibility arises in which x_0 is a semistable equilibrium point.

Example. ẋ = x^2, so g(0) = 0 and g'(0) = 0. Solve the equation to obtain x(t) = -(t + C)^{-1}. Let x(0) = -ε, ε > 0; then x(t) = -(t + 1/ε)^{-1}, which approaches x = 0 as t → ∞. But let x(0) = ε, ε > 0; then x(t) = -(t - 1/ε)^{-1}, which is an increasing function for t ≥ 0 and blows up when t = 1/ε. A very annoying case, both mathematically and physically!

This is a very good time to introduce the notion of linearized stability analysis, which is a formidable name for a simple application of Taylor's Formula, and will be more extensively used in the study of higher dimensional nonlinear systems.

If x_0 is an equilibrium or critical point of the ODE dx/dt = g(x), then to study its stability we can consider a nearby solution x(t) and write it as x(t) = x_0 + η(t), where η(t) is regarded as a small perturbation. Since x_0 + η(t) is a solution, then

d/dt (x_0 + η(t)) = 0 + dη/dt = g(x_0 + η(t)),

and if g(x) is sufficiently smooth we can apply Taylor's Formula, recalling that g(x_0) = 0. We get

dη/dt = 0 + g'(x_0)η + g''(x_0)η^2/2! + (higher order terms),

and the key assumption is that the quadratic and higher order terms, which are very small if |η(t)| is small, have little or no effect, and the dominant equation governing the growth of η(t) is the linear approximation

dη/dt = g'(x_0)η  or  η(t) = C exp[g'(x_0)t].
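The claim that the linear approximation governs small perturbations can be checked directly against a full nonlinear equation. The sketch below uses g(x) = x(1 - x), purely as an illustration (it is the logistic right-hand side with unit rate and carrying capacity): its equilibrium x_0 = 1 has g'(1) = -1, so a small perturbation η(0) should decay roughly like η(0)e^{-t}. The crude Euler integrator is my own, not the book's.

```python
import math

def euler(g, x, t_end, n):
    """Crude Euler integration of dx/dt = g(x) over [0, t_end]."""
    h = t_end / n
    for _ in range(n):
        x += h * g(x)
    return x

g = lambda x: x * (1 - x)       # equilibrium x0 = 1, with g'(1) = -1
eta0 = 1e-3                     # small initial perturbation
x_end = euler(g, 1.0 + eta0, 3.0, 100000)
predicted = 1.0 + eta0 * math.exp(-3.0)   # linearized prediction C*exp(g'(x0)*t)
```

For a perturbation of size 10^-3 the nonlinear solution and the linear prediction agree to many digits by t = 3, exactly the "little or no effect" of the quadratic term claimed above.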
We see immediately that g′(x₀) < 0 implies η(t) → 0 as t → ∞, hence x₀ is stable; g′(x₀) > 0 implies η(t) becomes unbounded as t → ∞, hence x₀ is unstable. To analyze stability in the semistable case we have to find the first nonzero derivative g⁽ⁿ⁾(x₀) and solve the resulting nonlinear ODE. But for the case g′(x₀) ≠ 0 the linearized analysis backs up our previous discussion, and it will be crucial when we discuss equilibria of higher order systems, when dimensionality takes away our nice one-dimensional, calculus approach.

3 Multiple Equilibria

For a start consider the equation dx/dt = g(x) where g(x₁) = g(x₂) = 0 and x₁ < x₂; for instance, g(x) = x(x − 1) with x₁ = 0, x₂ = 1. We will assume that g(x) and g′(x) are continuous, so E and U of the solution of the IVP is assured. Thus we have the two solutions x₁(t) ≡ x₁, x₂(t) ≡ x₂, and the picture

[figure: the two constant solutions x₁ and x₂, with a solution trapped between them]

The most important conclusion we can draw from the picture is:

Every solution x(t) with initial conditions x(t₀) = x₀, where x₁ < x₀ < x₂, is defined for −∞ < t < ∞.

This is an important but frequently overlooked fact.¹ The reasons for this conclusion are the continuation property and uniqueness of solutions. The existence and uniqueness result tells us that the solution will be defined for some interval t₀ − r ≤ t ≤ t₀ + r, r > 0, and on that interval the solution cannot intersect the solutions x₁(t) or x₂(t), by uniqueness. That means that the end points x(t₀ − r) and x(t₀ + r) will be inside the interval x₁ < x < x₂, and we can apply the existence and uniqueness theorem and further extend x(t), over and over again. The solution x(t) is trapped between the bounding solutions x₁ and x₂, and no matter how it wiggles it can never escape, but must move inexorably forward and backward.
This is a wonderful result, since determining whether solutions exist for all time and are bounded is a tricky matter: for dx/dt = cos x − x², we have g(x) = 0 at x ≈ ±0.82413, so every solution with initial value x(t₀) = C, |C| < 0.82413, is defined for −∞ < t < ∞ and satisfies |x(t)| < 0.82413.

But we can go a little further and discuss stability, in the case of two equilibria x₁ and x₂, x₁ < x₂. Since g(x) does not change sign between x₁ and x₂ it must be either

positive: in which case dx/dt > 0, so any solution x(t) between x₁ and x₂ must be strictly increasing, but remember "solutions don't cross." Consequently lim_{t→∞} x(t) = x₂;

negative: the same line of reasoning tells us that any solution between x₁ and x₂ must satisfy lim_{t→∞} x(t) = x₁.

Note that we only need to check the sign of g(x) at one point between x₁ and x₂, which makes the job even easier. In the example above g(0) = 1 > 0, so we conclude that any solution x(t) with initial value x(t₀) = C, |C| < 0.82413, satisfies lim_{t→∞} x(t) = 0.82413.

Now what remains is to examine the behavior of solutions with initial values x₀ less than x₁ or greater than x₂. But since g(x) cannot change sign for x < x₁ or x > x₂, we will get pictures like:

[figure: phase line diagrams] (or similarly x₁ stable and x₂ semistable)

The semistable case would be troublesome for any physical model; a solution could be close to an equilibrium value but a small change in the data could make it veer off.

A useful and popular model of the first picture above is the logistic equation of population growth:

    dx/dt = rx(1 − x/K);  r = intrinsic growth rate > 0,  K = carrying capacity > 0,

with equilibria x₁ = 0 (unstable) and x₂ = K (stable). The model is discussed in many of the current textbooks, as well as the variation which includes harvesting, first discussed by Fred Brauer and the author in a 1975 paper in the journal Theoretical Population Biology.

¹ At no cost you have discovered that the ODE has bounded solutions!
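The trapping argument for dx/dt = cos x − x² can be watched numerically even though the equation has no elementary solution. A minimal sketch (our own code, not the text's), using crude forward-Euler steps:

```python
import math

# Integrate dx/dt = cos(x) - x**2 by forward Euler.  The equilibria sit at
# x ~ +/-0.82413 (where cos x = x**2); every solution started between them
# is trapped and drifts toward the stable one at +0.82413.
def flow(x0, t_end=50.0, h=0.001):
    x = x0
    for _ in range(int(t_end / h)):
        x += h * (math.cos(x) - x * x)
    return x
```

Each run stays inside (−0.82413, 0.82413) and settles at the upper equilibrium, just as the sign argument predicts.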
That equation is

    dx/dt = rx(1 − x/K) − H,  H = harvesting rate > 0,

and a little calculus and some analysis shows that as H increases from 0, the unstable equilibrium increases from 0 and the stable equilibrium decreases from K. When H reaches rK/4 they coalesce, and for H > rK/4 we have dx/dt < 0 for all x, so the population expires in finite time. For devotees of bifurcation theory the behavior can also be analyzed from the standpoint of a cessation of stability when the parameter H reaches the critical value rK/4: a saddle-node bifurcation.

One can now pile on more equilibria and look at equations like

    dx/dt = x(x − x₁)(x − x₂)(x − x₃),

put them on the computer, and look at direction fields, a pleasant, somewhat mindless exercise. Things can get wacky with equations like dx/dt = sin(1/x), with an infinite number of equilibria xₙ = 1/nπ converging to 0 and alternating in stability, or dx/dt = sin² x, with an infinite number of semistable equilibria xₙ = 0, ±π, ±2π, …, since in the vicinity of any equilibrium dx/dt ≥ 0. The last example will come up later in the discussion of the phase plane.

The case where the right-hand side of the differential equation depends on t and x (called the nonautonomous case) is much trickier, and while there are some general results, the analysis is usually on a case by case basis. The following example is illustrative of the complexities.

Example. dx/dt = tx(1 − x²)/(1 + x²), x(t₀) = x₀.

We see that x(t) ≡ 1, x(t) ≡ −1, and x(t) ≡ 0 are solutions, so by uniqueness all other solutions lie between −1 and 1, are below −1, or are above 1. Now we must do some analysis:

Case 0 < |x| < 1: then dx/dt = 0 when t = 0, and

    dx/dt > 0 if {t > 0, 0 < x < 1} or {t < 0, −1 < x < 0};
    dx/dt < 0 if {t < 0, 0 < x < 1} or {t > 0, −1 < x < 0}.

Case |x| > 1: then dx/dt = 0 when t = 0, and

    dx/dt < 0 if {t > 0, x > 1} or {t < 0, x < −1};
    dx/dt > 0 if {t < 0, x > 1} or {t > 0, x < −1}.

Since the equation is invariant under the change of variable t → −t, we can conclude that

    if x(t₀) = x₀ > 0, then lim_{t→±∞} x(t) = 1, and
    if x(t₀) = x₀ < 0, then lim_{t→±∞} x(t) = −1.
Solutions have a maximum or minimum at t = 0, which becomes cusp-like as x(0) gets closer to zero (see diagram). The effect of t on the asymptotic behavior of solutions is quite distinct from the autonomous (no t-dependence) case, where solutions are strictly monotonic.

Admittedly, all but the last analysis could be done on a one-dimensional phase line, but one loses the attractive geometry of solutions moving around in the (t,x) plane. Furthermore, the approach taken is good training for the later discussion of the phase plane, when solutions become trajectories carrying parametric representations of solutions.

4 The Linear Equation

We begin with the homogeneous linear equation dx/dt = a(t)x, with a(t) continuous.

An instructive example is the singularly perturbed equation ε dx/dt = −x + (1 + t), x(0) = 0, where ε > 0 is a small parameter. The solution is

    x(t) = (ε − 1)e^{−t/ε} + t + (1 − ε),

and we see that it has a discontinuous limit as ε → 0⁺:

    lim_{ε→0⁺} x(t) = { 0 for t = 0;  1 + t for t > 0 }.

Furthermore, if one is familiar with the "big O" notation, the solution differs from the outer solution 1 + t − ε by (1 − ε)e^{−t/ε}, so x(t) ≈ 1 + t − ε once t > O(ε); the adjustment takes place in a boundary layer of thickness O(ε) near t = 0.

Another example is ε dx/dt = −(sin t cos t)x, x(0) = 1, with ε > 0 and small; the solution is x(t) = exp(−(sin² t)/2ε). The solution is negligible except at every other turning point t = 0, ±π, ±2π, …, where it has steep spikes of height 1 and width O(√ε). For instance, x(0) = 1 and dx/dt = −(sin t cos t/ε) exp[−(sin² t)/2ε], and since sin θ ≈ θ for |θ| small, near t = 0 the solution behaves like exp(−t²/2ε), which falls from 1 to e^{−1/2} over a distance √ε.

Another place the Riccati equation appears is the linear regulator problem of control theory: given the linear equation dx/dt = a(t)x + u(t), find a control u(t) which minimizes a cost functional quadratic in x(t) and u(t), such as

    C[u] = ½ k x(T)² + ½ ∫₀ᵀ [q(t)x(t)² + u(t)²] dt,  k ≥ 0.

Optimal control theory gives that the solution is realized by the feedback control law

    u(t) = −p(t)x(t),

where p(t) satisfies the Riccati equation

    dp/dt = p² − 2a(t)p − q(t),  p(T) = k.

The control u(t) is a feedback control since its present value is determined by the present value of the solution x(t), via the feedback control law. A simple example of feedback control is the household thermostat. (A sample problem is given at the end of the section.) For higher dimensional systems the Riccati equation is replaced by a matrix form of the equation.

The logistic equation, with or without harvesting, and with time dependent coefficients,

    dx/dt = r(t)x(1 − x/K(t)) − H(t),

is a Riccati equation.
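For the autonomous harvested equation of the previous section, the saddle-node claim can be checked directly: the equilibria are roots of a quadratic that merge exactly when H = rK/4. A sketch (our own code and notation):

```python
import math

# Equilibria of dx/dt = r*x*(1 - x/K) - H, i.e. roots of (r/K)x**2 - r*x + H = 0.
def equilibria(r, K, H):
    disc = r * r - 4.0 * (r / K) * H    # discriminant vanishes when H = r*K/4
    if disc < 0:
        return []                        # H > rK/4: no rest points, extinction
    root = math.sqrt(disc)
    return sorted([(r - root) / (2 * r / K), (r + root) / (2 * r / K)])
```

With r = 1 and K = 4 the critical rate is rK/4 = 1: below it there are two equilibria, at it a double root at K/2 = 2, and above it none.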
For instance, one could consider the case r, K constant and H(t) periodic, corresponding to seasonal harvesting. The last example will be discussed in a later section when we consider the general question of periodic solutions.

Riccati equations are notorious for having finite escape times—e.g., our old friend dx/dt = x² + 1 with solutions x(t) = tan(t − C)—and there is no general solution technique. But there are a few rays of hope. For instance, in the simpler case r(t) ≡ 0 the equation dx/dt = p(t)x² + q(t)x is an example of a Bernoulli equation. The trick is to let x(t) = 1/u(t), hence dx/dt = −u̇/u², and we obtain the linear equation

    du/dt = −q(t)u − p(t),

which you may not be able to solve exactly, but at least you have a formal expression for the solution.

In the general case, where r(t) ≢ 0, the transformation just gives you back another Riccati equation. Instead try the transformation x(t) = −ẏ(t)/(p(t)y(t)), and you obtain the second order linear equation

    ÿ − (q(t) + ṗ(t)/p(t))ẏ + r(t)p(t)y = 0,

which probably won't make life much easier. But suppose by a stroke of luck, or divine intervention, you can find or are given one solution x(t) = φ(t). Then let x(t) = φ(t) + 1/u(t) and substitute:

    φ̇ − u̇/u² = p(t)[φ² + 2φ/u + 1/u²] + q(t)[φ + 1/u] + r(t),

which simplifies to the linear equation

    u̇ = −(2p(t)φ(t) + q(t))u − p(t),

since φ(t) satisfies the original equation.

Here are several examples; the second one sheds further light on the notion of stability.

a) dx/dt = x² + 1 − t², and a little staring reveals that φ(t) = t is a solution. Letting x(t) = t + 1/u gives

    1 − u̇/u² = (t + 1/u)² + 1 − t²,

which simplifies to u̇ = −2tu − 1, with general solution

    u(t) = e^{−t²} [C − ∫₀ᵗ e^{s²} ds].

If we specify initial conditions x(0) = x₀, then x(t) = φ(t) = t is the solution if x₀ = 0. Otherwise C = u(0) = 1/x₀ and

    x(t) = t + 1/u(t) = t + e^{t²} / (1/x₀ − ∫₀ᵗ e^{s²} ds),

and we observe distinct behavior depending on the value x₀, since the integral is a positive increasing function of t.
If x₀ < 0, then solutions are defined for all t ≥ 0, whereas if x₀ > 0, then for some value t = q > 0 the denominator will equal zero, and the solution becomes infinite as t increases towards q. Applying L'Hôpital's rule to the quotient we see that it approaches −2t as t → ∞, so x(t) approaches −t; all solutions with x(0) = x₀ < 0 become asymptotic to the line x(t) = −t, which is not a solution.

b) dx/dt = −x² + 2tx − t² + 5 = −(x − t)² + 5. We see that the straight lines x(t) = t + 2 and x(t) = t − 2 are both solutions. By uniqueness, no solution can cross them, so we can conclude, for instance, that any solution x(t) satisfying |x(0)| < 2 is defined for all time. Letting x(t) = t − 2 + 1/u gives u̇ = −4u + 1, so

    x(t) = t − 2 + 1/(Ce^{−4t} + 1/4),

which approaches t + 2 as t → ∞. If x(t) = t + 2 + 1/u we get u̇ = 4u + 1, so

    x(t) = t + 2 + 1/(Ce^{4t} − 1/4),

and x(t) again approaches t + 2 as t → ∞. We can conclude that the solution x(t) = t + 2 is asymptotically stable.

The above solution technique provides an amusing story. In the 1980s a firm entitled MACSYMA was widely advertising a symbolic and numeric computation package. One of their advertisements appeared frequently in the American Mathematical Monthly and showed four people in deep thought, staring at a blackboard—a clock on the wall hinted it was nearly quitting time. The title of the advertisement was "You can solve problems …" and on the blackboard was an initial value problem for the equation

    dy/dt + y² + (2t + 1)y + t² + t + 1 = 0.

The author noticed it was a Riccati equation and, after brief contemplation, that y(t) = −t is a solution. Using the technique above led to a short calculation and the answer

    y(t) = −t + (Ce^t − 1)⁻¹,  C = 3/2e,

which blows up at t = ln(2e/3) ≈ 0.5945. No reply was ever received to a letter sent to the company suggesting they should provide more taxing problems to their prospective customers.
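The formulas of example b) are easy to sanity-check numerically. A sketch (our own code; the choice C = 1 is an arbitrary member of the family) verifying that the solutions produced by the reduction really satisfy the equation:

```python
import math

# The reduction x = t - 2 + 1/u, with u' = -4u + 1, gives the family
#   x(t) = t - 2 + 1/(C*exp(-4t) + 1/4)
# of solutions of dx/dt = -(x - t)**2 + 5.  Check the ODE residual.
def x(t, C=1.0):
    return t - 2 + 1.0 / (C * math.exp(-4 * t) + 0.25)

def residual(t, C=1.0, h=1e-6):
    dxdt = (x(t + h, C) - x(t - h, C)) / (2 * h)   # centered difference
    return dxdt - (-(x(t, C) - t) ** 2 + 5)
```

The residual is zero to within the accuracy of the difference quotient, and evaluating x(t) − (t + 2) at large t exhibits the asymptotic stability claimed above.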
The subject of periodic solutions will be discussed in a later section, but this seems an excellent time to show a beautiful proof of a result for the Riccati equation, since it involves a special property of the equation, similar to the cross-ratio property of linear fractional transformations in conformal mapping theory. The proof can be found in the book by V. Pliss.

We are given solutions xᵢ(t), i = 1, 2, 3, of a Riccati equation, hence they satisfy

    dxᵢ/dt = p(t)xᵢ² + q(t)xᵢ + r(t),  i = 1, 2, 3.

A simple computation gives

    d/dt ln[(x₃(t) − x₂(t))/(x₃(t) − x₁(t))] = p(t)(x₂(t) − x₁(t)),

and now suppose that

(i) p(t), q(t), r(t) are periodic with minimum period T, and ∫₀ᵀ p(s) ds ≠ 0; for simplicity let p(t) ≡ 1;

(ii) xᵢ(t), i = 1, 2, 3, are distinct T-periodic solutions, and we can assume x₁(t) < x₃(t) < x₂(t) for all t.

Integrate the cross-ratio equation from 0 to T and you obtain 0 on the left side and a nonzero quantity on the right. We conclude that

The Riccati equation with T-periodic coefficients and ∫₀ᵀ p(s) ds ≠ 0 can have at most two T-periodic solutions.

The condition on p(t) is necessary; consider dx/dt = (sin t)x², which has the infinite family of periodic solutions x(t) = (C + cos t)⁻¹, |C| > 1.

Remark. We will adopt the terminology that a function x(t) is T-periodic, T > 0, if x(t + T) = x(t) for all t, and T is the smallest such number.

The beautiful result above is a specific case of a more general question: given a polynomial differential equation of degree n with T-periodic coefficients,

    dx/dt = xⁿ + a_{n−1}(t)x^{n−1} + ⋯ + a₁(t)x + a₀(t),

what is the maximum number of T-periodic solutions it can have? For n = 2 we showed the answer is two; it has been shown that for n = 3 the answer is three. But there exist equations with n = 4 which have more than four T-periodic solutions, and considerable research has been done on the general problem.
Sample control theory problem: Given the linear regulator problem

    dx/dt = x + u,  x(0) = 1,

and cost functional

    C[u] = ½ k x(1)² + ½ ∫₀¹ [3x(t)² + u(t)²] dt:

a) Find the feedback control law, the optimal output x(t), and the value of C[u] for the case k = 3.

b) Find the feedback control law for the case k = 0, and use numerical approximations to find u(t), x(t), and C[u].

6 Comparison Sometimes Helps

This short section is merely to point out a very useful tool in the ODE mechanic's toolbox—the use of comparison results to obtain some immediate insight into the behavior of a solution. These are often overlooked in introductory courses. The simplest and most often used comparison result is the following one:

Given the functions f(t,x), g(t,x), and h(t,x), all satisfying the existence and uniqueness criteria for the initial value problem in some neighborhood B of the point (t₀, x₀), suppose that

    f(t,x) ≤ g(t,x) ≤ h(t,x)  for (t,x) in B.

Then the solutions of the initial value problems

    dy/dt = f(t,y), y(t₀) = x₀;  dx/dt = g(t,x), x(t₀) = x₀;  dz/dt = h(t,z), z(t₀) = x₀,

satisfy the inequality y(t) ≤ x(t) ≤ z(t) for t ≥ t₀ and (t,x) in B.

A formal proof can be given, but the intuitive proof, observing that all the solutions start out at (t₀, x₀) and dy/dt ≤ dx/dt ≤ dz/dt, suffices. One or both sides of the inequality can be used.

Examples.

a) For t ≥ 1, x² + 1 ≤ x² + t², from which we can conclude that the solutions of dx/dt = x² + t² have finite escape time. A sharper estimate is that x² ≤ x² + t² for t ≥ 0, so the solutions of dx/dt = x² + t², x(0) = x₀ > 0, grow faster than the solutions of dx/dt = x², x(0) = x₀, which become infinite at t = 1/x₀.

b) Since 1 + t² ≤ x^{1/2} + t² ≤ x + t² for t ≥ 0 and x ≥ 1, we conclude that the solution of dx/dt = x^{1/2} + t², x(0) = 1, is defined for all t ≥ 0 and satisfies the inequality

    1 + t + t³/3 ≤ x(t) ≤ 3eᵗ − t² − 2t − 2.
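Example b) can be checked by brute force. A sketch (our own crude Euler integration, not the text's) confirming that the solution stays between the two comparison solutions:

```python
import math

# Euler-integrate dx/dt = sqrt(x) + t**2, x(0) = 1, and check the comparison
# bounds 1 + t + t**3/3 <= x(t) <= 3*exp(t) - t**2 - 2*t - 2 along the way.
def check_bounds(t_end=2.0, h=1e-4):
    t, x = 0.0, 1.0
    while t < t_end - 1e-12:
        x += h * (math.sqrt(x) + t * t)
        t += h
        lower = 1 + t + t ** 3 / 3
        upper = 3 * math.exp(t) - t * t - 2 * t - 2
        if not (lower - 1e-3 <= x <= upper + 1e-3):   # small slack for Euler error
            return False
    return True
```

The slack of 1e-3 only covers the discretization error of the Euler steps; the exact solution satisfies the inequalities with no slack at all.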
c) Some students presented the author with a problem in ocean climate modeling governed by the initial value problem

    dx/dt = φ(t) − εx⁴,  x(0) = 0,  ε > 0,

where φ(t) was a periodic function too ugly to describe, but which satisfied the inequality 0 < φ(t) ≤ B. This implies that

    −εx⁴ ≤ φ(t) − εx⁴ ≤ B − εx⁴,

and comparison shows the solution is bounded, with 0 ≤ x(t) ≤ (B/ε)^{1/4} for all t ≥ 0. As the last example indicates, some comparison results are very handy, and a little analysis can pay off before embarking on some numerical work.

7 Periodic Solutions

A very deep question which occupies a sizeable part of the ODE literature is:

Given dx/dt = f(t,x), where f and ∂f/∂x are continuous for −∞ < t < ∞ and f(t + T, x) = f(t, x) for all (t,x), does the equation have T-periodic solutions?

A good example to keep in mind is the logistic equation with periodic harvesting,

    dx/dt = rx(1 − x/K) − H(t),  r > 0, K > 0, H(t + T) = H(t).

We can assume H(t) = H₀ + ε sin ωt, representing a small periodic harvesting rate, or H(t) = εφ(t) where φ(t + T) = φ(t). Recall that if H(t) ≡ 0 there are two equilibria, x(t) ≡ 0 (unstable) and x(t) ≡ K (stable), and if H(t) ≡ H, constant, then H > rK/4 implies the population expires in finite time. So we might expect that if the periodic H(t) is small, then the equilibria might transmute into periodic solutions. Since the equation is a Riccati equation, we know it can have at most two T-periodic solutions, which reinforces our intuition.

The result we need, found in some 1966 notes by K. Friedrichs, is the following:

Given f(t,x) satisfying the conditions for E and U and f(t + T, x) = f(t, x) for all (t,x), suppose there exist constants a, b with a < b such that f(t,a) > 0 and f(t,b) < 0 for all t. Then there exists a T-periodic solution x(t) satisfying a < x(0) < b.

For the harvested logistic equation with H(t) = εφ(t), 0 < φ(t) ≤ 1, and ε small, we have

    x = K/2:  dx/dt = rK/4 − εφ(t) > 0;
    x = K:   dx/dt = −εφ(t) < 0;

and we can conclude there is a (stable) T-periodic solution x(t) with K/2 < x(t) < K. Note also that f(t,0) = −εφ(t) < 0, so if we change variables t → −t we can conclude there is an (unstable) T-periodic solution x(t) with 0 < x(t) < K/2.

The reader might wish to generalize the result to the case where r(t) and K(t) are also T-periodic and satisfy bounds like 0 < r₀ ≤ r(t) ≤ r₁, 0 < K₀ ≤ K(t) ≤ K₁. In one such example f(t,x) satisfies 1 ≤ f(t,0) ≤ 3 and −2.264 ≤ f(t,1) ≤ −0.264, so there is one periodic solution x(t) with 0 < x(0) < 1.
An examination of the direction field shows there can be no other.

8 Differentiation in Search of a Solution

If we are given the IVP

    dx/dt = f(t,x),  x(t₀) = x₀,

and a solution x(t), then we immediately know that

    dx(t)/dt |_{t=t₀} = f(t₀, x₀),

and consequently a linear approximation to x(t) would be x(t) ≈ x₀ + f(t₀,x₀)(t − t₀). This is the basis of Euler's Method of numerical approximation. But if our tolerance for differentiation is high, we can use the chain rule to go further. Since

    d²x/dt² = (d/dt) f(t,x) = (∂f/∂x)(t,x) f(t,x) + (∂f/∂t)(t,x),

all of these can be formally computed and evaluated at (t₀, x₀). This gives us an exact value of the second derivative of x(t) at t = t₀ and allows us to get a quadratic approximation

    x(t) ≈ x₀ + f(t₀,x₀)(t − t₀) + (d²x/dt² |_{t=t₀}) (t − t₀)²/2!,

where

    d²x/dt² |_{t=t₀} = (∂f/∂x)(t₀,x₀) f(t₀,x₀) + (∂f/∂t)(t₀,x₀).

One could proceed to get higher order approximations, but the differentiation will get onerous in most cases. The technique is the basis for the Taylor series numerical methods; for n = 2 it might be useful if you are stranded on a desert isle with only a legal pad, pen, and a ten dollar K-Mart calculator.

Example. dx/dt = t²/(1 + √x), x(1) = 4; then

    dx/dt |_{t=1} = f(1,4) = 1/3,

and since

    ∂f/∂x = −t²/(2√x (1 + √x)²),  ∂f/∂t = 2t/(1 + √x),

we get

    d²x/dt² |_{t=1} = (−1/(2√4 (1 + √4)²))(1/3) + 2/(1 + √4) = −1/108 + 2/3 = 71/108.

So

    x(t) ≈ 4 + (1/3)(t − 1) + (71/216)(t − 1)².

The approximate value of x(2) is 4.66204, and a highly accurate numerical scheme gives 4.75474.

Computing higher derivatives is more effective when done formally, and an example is the logistic equation, where f(x) = rx(1 − x/K). Then

    d²x/dt² = (df/dx)(dx/dt) = r(1 − 2x/K) · rx(1 − x/K) = r²x(1 − x/K)(1 − 2x/K),

and we see that

    d²x/dt² > 0 if 0 < x < K/2,  d²x/dt² < 0 if K/2 < x < K.

Since dx/dt > 0 for 0 < x < K, doing the simple concavity analysis above gives us the logistic curve: increasing, concave up below K/2, concave down above it, with an inflection point at x = K/2. Contrast this with the usual example or exercise found in most textbooks, to solve the (separable) logistic equation, which ends up finally with an expression like

    x(t) = Kx₀ / (x₀ + (K − x₀)e^{−rt}),  x(0) = x₀,

from which very little insight is immediately evident.
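The example above translates directly into a few lines. A sketch (our own code) of the n = 2 Taylor approximation for dx/dt = t²/(1 + √x), x(1) = 4:

```python
import math

# Quadratic (n = 2) Taylor approximation built from f, df/dt and df/dx
# evaluated at (t0, x0) = (1, 4).
def f(t, x):
    return t * t / (1 + math.sqrt(x))

def taylor2(t, t0=1.0, x0=4.0):
    ft = 2 * t0 / (1 + math.sqrt(x0))                               # df/dt = 2/3
    fx = -t0 * t0 / (2 * math.sqrt(x0) * (1 + math.sqrt(x0)) ** 2)  # df/dx = -1/36
    x2 = ft + fx * f(t0, x0)                                        # d2x/dt2 = 71/108
    return x0 + f(t0, x0) * (t - t0) + x2 * (t - t0) ** 2 / 2
```

Evaluating at t = 2 reproduces the 4.66204 quoted above, and comparison with the accurate value 4.75474 shows how far a desert-isle quadratic gets you one unit away from t₀.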
9 Dependence on Initial Conditions

Given the differential equation dx/dt = f(t,x) and two solutions x₁(t), x₂(t) satisfying x₁(t₀) = a, x₂(t₀) = b, where a and b are close, what can we say about their proximity for values of t > t₀? This is the problem of dependence on initial conditions, and to get the needed estimate we will use one of the workhorse tools of nonlinear ODE'ers. Its proof can be found in almost any advanced textbook and is not difficult.

Gronwall's Lemma: If u(t) and v(t) are nonnegative continuous functions on t₀ ≤ t < ∞ which satisfy

    u(t) ≤ σ + ∫_{t₀}^{t} u(s)v(s) ds,  t ≥ t₀,

where σ is a nonnegative constant, then

    u(t) ≤ σ exp(∫_{t₀}^{t} v(s) ds),  t ≥ t₀.

Now suppose there is a constant K > 0 such that the Lipschitz condition

    |f(t,x) − f(t,y)| ≤ K|x − y|

is satisfied in some neighborhood of (t₀, x₀); the constant K could be an upper bound for |∂f/∂x|, for instance. Then each solution satisfies

    xᵢ(t) = x₀ + ∫_{t₀}^{t} f(s, xᵢ(s)) ds,  i = 1, 2,

where x₀ = a for x₁(t) and x₀ = b for x₂(t), and we are led to the estimate

    |x₁(t) − x₂(t)| ≤ |a − b| + ∫_{t₀}^{t} K|x₁(s) − x₂(s)| ds,

hence, by Gronwall's Lemma,

    |x₁(t) − x₂(t)| ≤ |a − b| e^{K(t−t₀)},  t ≥ t₀,

which requires some commentary. First of all, the estimate is a very rough one, but it does lead us to the conclusion that if |a − b| is small the solutions will be close initially. There is no guarantee they will be intimately linked for t much greater than t₀, nor is the estimate necessarily the best one for a particular case.

Example. If |a − b| = 10⁻², t₀ = 0, and K = 2, then |x₁(t) − x₂(t)| ≤ 10⁻²e^{2t} ≤ ε for t ≤ ½ ln(100ε); if ε = ½ then t ≤ 1.956. The exponential term in the estimate eventually takes its toll.

The above argument and conclusion are not to be confused with a much emphasized characteristic of chaotic systems—sensitive dependence on initial conditions. This means that two solutions starting very close together will rapidly diverge from each other, and later exhibit totally different behavior. The term "rapidly" in no way contradicts the estimate given. If |a − b| ≤ 10⁻⁶ and ε = 10⁻⁴, then t ≤ 2.3025 in the example above, so "rapidly" requires that at least t > 2.3025.
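The estimate is easy to watch in action. A sketch (our own example, not the text's): dx/dt = 2 sin x has |∂f/∂x| ≤ 2, so K = 2 works, and Gronwall gives |x₁(t) − x₂(t)| ≤ |a − b|e^{2t}:

```python
import math

# Euler-integrate dx/dt = 2*sin(x) from a given initial value, so two runs
# with nearby initial values can be compared against the Gronwall bound.
def solve(x0, t_end=2.0, h=1e-4):
    x = x0
    for _ in range(int(round(t_end / h))):
        x += h * 2 * math.sin(x)
    return x
```

Two runs started 0.01 apart remain distinct (uniqueness) but stay far below the worst-case bound 0.01·e⁴; for this equation both solutions are attracted to π, so the actual gap shrinks rather than grows.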
The important fact we can conclude is that solutions are continuous functions of their initial values, recognizing that this is a local, not global, result.

10 Continuing Continuation

The proof of the E and U Theorem depends on the method of successive approximations and some fixed point theorem, e.g., the contraction mapping theorem. Given the IVP dx/dt = f(t,x), x(t₀) = x₀, let x₀(t) ≡ x₀ and compute (if possible)

    x₁(t) = x₀ + ∫_{t₀}^{t} f(s, x₀(s)) ds,
    x₂(t) = x₀ + ∫_{t₀}^{t} f(s, x₁(s)) ds, …,
    x_{n+1}(t) = x₀ + ∫_{t₀}^{t} f(s, xₙ(s)) ds,

etc. This is a theoretical construct; after a few iterations the integrations usually are colossal if not impossible. You want approximations? Use a numerical scheme.

A general set of conditions under which the above successive approximations will converge to a unique solution of the IVP are as follows. First, find a big domain (open, connected set) B in the tx-plane in which f(t,x) and ∂f(t,x)/∂x are continuous—the proof is the same for a system where x ∈ Rⁿ. Next, construct a closed, bounded box

    Γ = {(t,x) : |t − t₀| ≤ a, |x − x₀| ≤ b}

contained in B—you can make this as big as you want as long as it stays in B. Since Γ is closed and bounded, by the properties of continuity we know we can find positive numbers m and k such that |f(t,x)| ≤ m and |∂f(t,x)/∂x| ≤ k for (t,x) in Γ. It would be nice if our successive approximations converged to a solution defined in all of Γ, but except for the case where f(t,x) is linear this is usually not the case. To assure convergence we need to find a number r > 0 satisfying the three constraints

    r ≤ a,  r ≤ b/m,  r ≤ 1/k.

A useful criterion guaranteeing continuation for all time is the following:

Given dx/dt = f(t)g(x), suppose f(t) > 0 for t₀ ≤ t < ∞; g(x) > 0 for 0 < x < ∞; and

    ∫_{x₀}^{∞} dr/g(r) = +∞,  x₀ > 0.

Then any solution x(t) with IC x(a) = b, a ≥ t₀, b > 0, exists on a ≤ t < ∞.

The proof is by contradiction: assume there is a solution y(t) which cannot be continued beyond t = T. Since f and g are both positive, y(t) must be strictly increasing, hence y(t) → +∞ as t → T⁻.
Separate variables to obtain the expression

    ∫_{b}^{y(t)} dr/g(r) = ∫_{a}^{t} f(s) ds,

and as t → T⁻ the left-hand side becomes infinite while the right-hand side remains finite—a contradiction. A simple example of such an equation is dx/dt = xᵅ f(t), where 0 < α ≤ 1 and f(t) is any positive, continuous function.

To see the local nature of the E and U Theorem and the case of the "Incredible Shrinking r," the reader may wish to follow along in this example:

    dx/dt = 1 + x²,  x(0) = 0.

This is the old workhorse with solution x(t) = tan t, defined for −π/2 < t < π/2, and it becomes infinite at the end points. But suppose we don't know this and naively proceed to develop a value of r, then do some continuation. Since f(t,x) = 1 + x² is a nice function, and doesn't depend on t, let's choose a box

    Γ = {(t,x) : |t| ≤ a, |x| ≤ b};

then m = 1 + b² and k = 2b, and the binding constraint r ≤ b/(1 + b²) is largest at b = 1, where it equals 1/2. So no matter what a ≥ 1/2 and b ≥ 1 we initially chose, we would get the same r = 1/2 and the value x(1/2) = tan(1/2) ≈ 0.55. At (1/2, tan(1/2)) = (1/2, 0.55) we can construct a new box

    Γ₁ = {(t,x) : |t − 1/2| ≤ a, |x − 0.55| ≤ b}

and continue the solution, but the admissible r is now smaller, and it keeps shrinking at every subsequent step as the solution grows.

3 Insight not Numbers

To approximate, on an interval a ≤ t ≤ b, the solution x(t) of the IVP dx/dt = f(t,x), x(a) = x₀, select a step size h = (b − a)/N, N a positive integer; then the two methods are expressed by the simple "do loops":

Euler's Method:
    t₀ = a
    x₀ = x₀
    for n from 0 to N − 1 do
        t_{n+1} = tₙ + h
        x_{n+1} = xₙ + h f(tₙ, xₙ)
    print t_N (it better be b!)
    print x_N

Improved Euler's Method:
    t₀ = a
    x₀ = x₀
    for n from 0 to N − 1 do
        t_{n+1} = tₙ + h
        uₙ = xₙ + h f(tₙ, xₙ)
        x_{n+1} = xₙ + (h/2)[f(tₙ, xₙ) + f(t_{n+1}, uₙ)]
    print t_N
    print x_N

They are easily understood routines, but it is surprising how some expositors can clutter them up. Modifications can be added, such as making a table of all the values tₙ, xₙ ≈ x(tₙ), n = 0, …, N, or plotting them, or comparing the values or plots for different values of the step size h.

3 Key Stuff

What are the important points or ideas to be presented at this time? Here are some:

The Geometry: All numerical schemes essentially are derivative chasing, or derivative chasing/correcting/chasing, schemes, remembering that we only know one true value of dx/dt: that for t₀ = a, where dx/dt(t₀) = f(t₀, x₀).
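The two do loops transcribe directly into code. A sketch (ours, not the author's):

```python
# Fixed-step Euler and Improved Euler for dx/dt = f(t, x) on [a, b],
# returning the final approximation x_N ~ x(b).
def euler(f, a, b, x0, N):
    h = (b - a) / N
    t, x = a, x0
    for _ in range(N):
        x = x + h * f(t, x)
        t = t + h
    return x

def improved_euler(f, a, b, x0, N):
    h = (b - a) / N
    t, x = a, x0
    for _ in range(N):
        u = x + h * f(t, x)                        # Euler predictor
        x = x + (h / 2) * (f(t, x) + f(t + h, u))  # average of the two slopes
        t = t + h
    return x
```

On dx/dt = x, x(0) = 1, one step of Euler multiplies by (1 + h) while one step of Improved Euler multiplies by (1 + h + h²/2), the first terms of eʰ — which is why halving h buys so much more accuracy from the second method.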
For Euler's Method we use that true value to compute an approximate solution value x₁ ≈ x(t₁), then an approximate value of dx/dt(t₁) ≈ f(t₁, x₁), then the next value x₂ ≈ x(t₂), etc.

[figure: the Euler polygon, with slopes f(t₀, x₀) and f(t₁, x₁)]

For the Improved Euler Method we compute the approximate value of dx/dt(t₁) ≈ f(t₁, x₁) using Euler's Method, then use the average of f(t₀, x₀) and f(t₁, x₁) to get a corrected approximation of x₁ ≈ x(t₁), then compute the new approximate value of dx/dt(t₁) ≈ f(t₁, x₁), use that to get an approximate value of dx/dt(t₂) ≈ f(t₂, x₂), average again and correct, etc. The picture is:

[figure: one step taken with the averaged slope ½(m₁ + m₂)]

A further nice insight is to note that for the case where the ODE is simply dx/dt = f(t), x(a) = x₀, then x(t₁) = x₀ + A₁, where A₁ is the area under the curve f(t) for t₀ ≤ t ≤ t₁.

An adaptive scheme like RKF45 computes at each step an estimate est. of the local error and compares it with a preassigned tolerance ε:

i) If |est.| > ε, reject the x_{n+1} obtained, and compute a new x_{n+1} using a smaller step size.

ii) If |est.| ≤ ε, accept x_{n+1} and compute x_{n+2} using step size h or a larger one.

Of course, in classroom practice the parameters |est.|, ε, h and others are already built into the RKF45 program, so you will see only the computations. Some calculators use an RKF23 program.

The advantages of the RKF45 scheme, besides its inherent accuracy, are that when the solution is smooth and the approximations are closely tracking it, the step size selected can be relatively large, which reduces the number of function evaluations (cost!). But if the solution is wiggly the step size can be reduced to better follow it—a fixed step size method could completely overshoot it. Furthermore, if the solution is too wild or flies off in space, the RKF methods have a red flag which goes up if |est.| ≤ ε cannot be attained. For schemes which allow for adjustments this requires a bigger value of ε be given; otherwise, for schemes with preassigned parameters, the little gnome will fold its tent and silently steal into the night.
Given the discussion above, does the author believe that students should become facile or even moderately familiar with Runge-Kutta methods of order 3 or 4? Absolutely not! They require hellishly long, tedious calculations or programming because of the number of function evaluations needed to go one step, and why bother when they are available in a computer or calculator package. Use the Euler and Improved Euler Methods to introduce the theory of numerical approximation of solutions of the IVP, and the notion of error. Then give a brief explanation of RKF45, if that is what is available, so they will have an idea of what is going on behind the computer screen. Then develop insight.

5 A Few Examples

The ideal configuration is to have a calculator or computer available to be able to do the Euler and Improved Euler Methods, then have a numerical package like RKF45 to check for accuracy and compute approximations far beyond the capacity of the two methods. Start off with a few simple IVPs to gain familiarity with the methods.

Comment: The author finds it hard to support treatments which present a large number of problems like:

For the initial value problem dx/dt = 2x, x(0) = 1:
a) Use the Euler Method with h = 1/4, 1/8, and 1/16 to approximate the value of x(1).
b) Use the Improved Euler Method with h = 1/4, 1/8, and 1/16 to approximate the value of x(1).
c) Find the solution and compute x(1), then construct a table comparing the answers in a) and b).

This is mathematical scut work and will raise the obvious question, "If we can find the exact solution, why in blazes are we doing this?" Here are six sample problems—given these as a springboard and the admonitory tone of this chapter, the reader should be able to develop many more (insightful) ones. We have used RKF45.

Problem 1 (straightforward). Given the IVP

    dx/dt = √(x + t),  x(0) = 3.
Use the Euler Method with h = 0.05 and the Improved Euler Method with h = 0.1 to approximate x(1), then compare the differences with the answer obtained using RKF45.

Answers:
    Euler, h = 0.05:          x(1) ≈ 5.0882078
    Improved Euler, h = 0.1:  x(1) ≈ 5.1083323
    RKF45:                    x(1) ≈ 5.1089027

As expected, Improved Euler gave a much better approximation with twice the step size. The equation can be solved by letting x = u² − t, but one obtains an implicit solution.

The next three problems use the old veteran dx/dt = x² + t², whose solutions grow much faster than tan t.

Problem 2. A strategy to estimate accuracy, when only a low order numerical method is in your tool kit, is to compute the solution of an IVP using a step size h, then compute again with step size h/2, then compare results for significant figures. Do this for

    dx/dt = x² + t²,  x(0) = 0,

and approximate x(1) using step sizes h = 0.2 and h = 0.1 with the Improved Euler Method.

Answer:
    h = 0.2:  x(1) ≈ 0.356257
    h = 0.1:  x(1) ≈ 0.351830

At best, we can say x(1) ≈ 0.35.

Problem 3. Given the threadbare computing capacity suggested in Problem 2, you can improve your answer using an interpolation scheme. Given a second order scheme, we know that if x(T) is the exact answer and x_h(T) is the approximate answer using step size h, then x(T) − x_h(T) ≈ Mh².

a) Show this implies that

    x(T) ≈ (4/3) x_{h/2}(T) − (1/3) x_h(T).

b) Use the result above and the approximations you obtained in Problem 2 to obtain an improved estimate of x(1), then compare it with that obtained using RKF45.

Answer:
a) x(T) − x_h(T) ≈ Mh² and x(T) − x_{h/2}(T) ≈ M(h/2)² = Mh²/4; solve the pair for x(T), eliminating Mh².
b) x(1) ≈ (4/3)(0.351830) − (1/3)(0.356257) = 0.350354; RKF45 gives x(1) ≈ 0.350232.

Problem 4. For the IVP dx/dt = x² + t², x(0) = 0, approximate the value of x(2) with the Euler and Improved Euler Methods and step sizes h = 0.1 and h = 0.01. Compare (graphically if you wish) with the answer obtained with RKF45.
Answer:
    Euler, h = 0.1:            x(2) ≈ 5.8520996
    Euler, h = 0.01:           x(2) ≈ 23.392534   (now you suspect something screwy is going on)
    Improved Euler, h = 0.1:   x(2) ≈ 23.420486   (maybe you start feeling confident)
    Improved Euler, h = 0.01:  x(2) ≈ 143.913417  (whoops!)
    RKF45:                     x(2) ≈ 317.724004

The awesome power of RKF45 is manifest!

The next problem uses the direction field package of MAPLE, possessed of the ability to weave solutions of initial value problems through the field, to study the existence of periodic solutions of a logistic equation with periodic harvesting.

Problem 5. Given the logistic equation with periodic harvesting

    dx/dt = x(1 − x/4) − (1/2 + (1/4) sin πt):

a) Explain why it has a (stable) 2-periodic (T = 2) solution.

b) Such a solution must satisfy x(0) = x(2). Use MAPLE direction field plots and RKF45 to find an approximate value of x(0), then graph the solution. Compute x(4) as a check.

Discussion:
a) Since 1/4 ≤ 1/2 + (1/4) sin πt ≤ 3/4, we see that for x = 2 and all t, dx/dt = 1 − (1/2 + (1/4) sin πt) > 0, and for x = 4 and all t, dx/dt = 0 − (1/2 + (1/4) sin πt) < 0, so by an argument given in Ch. 2 there exists a 2-periodic solution x(t) with 2 < x(0) < 4.

b) Construct a direction field plot for 0 ≤ t ≤ 2, 2 ≤ x ≤ 4, select a few initial conditions x(0) = 2.4, 2.8, 3.2, 3.6, and plot those solutions. You get something like

[figure: direction field with the four plotted solutions]

The solution with initial value x(0) = 3.2 has terminal value x(2) > 3.2, whereas the one with x(0) = 3.6 has x(2) < 3.6, and it appears that x(0) for the periodic solution is near 3.4. Crank up RKF45: x(0) = 3.4 gives x(2) = 3.4257; now fiddle around or use bisection to get x(0) = 3.454, x(2) = 3.4536, x(4) = 3.4534.

Since for x = 0 and all t, dx/dt = 0 − (1/2 + (1/4) sin πt) < 0, there is also an (unstable) periodic solution x(t) with 0 < x(0) < 2.
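The "fiddle around or use bisection" step can be automated. A sketch (ours; RKF45 replaced by a fixed-step RK4, an assumption of convenience) that bisects on g(x₀) = x(2; x₀) − x₀, which is positive at x₀ = 2 and negative at x₀ = 4:

```python
import math

# One period of dx/dt = x*(1 - x/4) - (1/2 + (1/4)*sin(pi*t)) via classical RK4.
def P(x0, steps=2000):
    f = lambda t, x: x * (1 - x / 4) - (0.5 + 0.25 * math.sin(math.pi * t))
    h = 2.0 / steps
    t, x = 0.0, x0
    for _ in range(steps):
        k1 = f(t, x)
        k2 = f(t + h / 2, x + h / 2 * k1)
        k3 = f(t + h / 2, x + h / 2 * k2)
        k4 = f(t + h, x + h * k3)
        x += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return x

def periodic_x0(lo=2.0, hi=4.0, iters=60):
    for _ in range(iters):
        mid = (lo + hi) / 2
        if P(mid) - mid > 0:
            lo = mid          # map still pushes up: periodic x0 is larger
        else:
            hi = mid
    return (lo + hi) / 2
```

The bisection lands near the x(0) ≈ 3.454 quoted in the discussion, and P applied repeatedly to the result reproduces it, confirming the fixed point of the period-2 map.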
It will be difficult to approximate because the direction field will veer away from it, but by letting t → −t and considering the equation

x' = x(x/4 − 1) + (1/2 − (1/4) sin πt),

one can use the previous strategy, since now the periodic solution is stable.

Problem 6. Using numerical methods, examine the thickness of the boundary layer for the singularly perturbed differential equation

εx' = −x + (1 + t), x(0) = 0, ε = 10⁻¹, 10⁻², 10⁻³.

Discussion: From the previous chapter we see that the solution x(t) ≈ 1 + t − ε for t > O(ε). The author used a calculator with an RK23 package and produced the following numbers for the IVP:

ε = 10⁻¹:
  t:  0    0.02    0.04    0.06    0.08    0.1     0.3     0.5     0.7
  x:  0    0.1837  0.3376  0.4671  0.5767  0.6695  1.1568  1.3997  1.6000

ε = 10⁻²:
  t:  0    0.002   0.004   0.008   0.01    0.02    0.04    0.06    0.08    0.10
  x:  0    0.1815  0.3306  0.5537  0.6364  0.8770  1.0136  1.0495  1.0701  1.0901

ε = 10⁻³:
  t:  0    0.001   0.002   0.004   0.006   0.008   0.01
  x:  0    0.6339  0.8673  0.9864  1.0048  1.0071  1.0093

If one takes the thickness of the boundary layer as the first value t_ε where x(t_ε) ≈ 1 + t − ε, one sees that the thicknesses are approximately 0.5, 0.06, 0.006 respectively, which supports the O(ε) assertion.

Second Order Equations

This chapter could be titled "Second Order Equations—A Second Look," because its intent is not to dwell on the numerous recipes for solving constant coefficient linear equations which stuff the chapters of most introductory texts. Rather, it is to give the conceptual underpinnings for the general second order linear equation, offering some alternative ways of presenting them, recognizing that the discussion is easily generalized to higher order equations. Then a few little morsels are added which enhance some of the standard topics.

1 What Is The Initial Value Problem?

We will write the general second order equation as x'' = f(t, x, x'), so a solution will be a twice differentiable function x(t) satisfying x''(t) = f(t, x(t), x'(t)).
But we have only the existence and uniqueness (E and U) theorem for the initial value problem (IVP) at our disposal, so what is the IVP? Introduce a new dependent variable y = x' and the equation becomes a two-dimensional first order system

x' = y,
y' = f(t, x, y).

The E and U Theorem in vector form applies: let X = col(x, y); then

X' = col(x', y') = col(y, f(t, x, y)) = F(t, X),

for which the initial conditions are X(t₀) = col(x₀, y₀).

We can conclude that the IVP for the second order equation is

x'' = f(t, x, x'), x(t₀) = x₀, x'(t₀) = y₀.

The required continuity assumptions for E and U translate to the statement that the IVP will have a unique solution if f, ∂f/∂x, and ∂f/∂x' are continuous in some region containing the point (t₀, x₀, y₀).

An easy way to think of this is that we needed one "integration" to solve x' = f(t, x), so we get one constant of integration to be determined by the initial condition. For the second order equation we need two "integrations" and get two constants. This is easily seen with a simple example such as x'' = t², where x' = t³/3 + A and x = t⁴/12 + At + B, and more abstractly, using the integral equation representation:

x' = y ⟹ x = A + ∫_{t₀}^{t} y(s) ds, and y' = f(t, x, y) ⟹ y = B + ∫_{t₀}^{t} f(s, x(s), y(s)) ds,

so

x = A + ∫_{t₀}^{t} ( B + ∫_{t₀}^{s} f(r, x(r), y(r)) dr ) ds.

The above discussion readily generalizes to higher order equations, e.g., for n = 3:

x''' = f(t, x, x', x''); let y = x', z = x'', so that x' = y, y' = z, z' = f(t, x, y, z).

The IVP is

x''' = f(t, x, x', x''), x(t₀) = x₀, x'(t₀) = y₀, x''(t₀) = z₀,

whose solution is a thrice differentiable function x(t) satisfying the initial conditions and the relation x'''(t) = f(t, x(t), x'(t), x''(t)) in a neighborhood of (t₀, x₀, y₀, z₀). But we are straying from the hallowed ground of n = 2.

IMPORTANT NOTICE
If the following material is to be presented it is incumbent on the presenter to give a few illuminating equations and their solutions, even if the audience isn't quite sure where they came from. Illumination is the stalwart guide to understanding.
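The reduction above is exactly how a second order IVP is handed to any vector-valued solver in practice. A minimal sketch (Python, classical RK4; the helper name solve_second_order is illustrative):

```python
import math

def solve_second_order(f, t0, x0, y0, t_end, n=10000):
    """Solve x'' = f(t, x, x') with x(t0) = x0, x'(t0) = y0 by integrating
    the equivalent system x' = y, y' = f(t, x, y) with classical RK4.
    Returns (x(t_end), x'(t_end))."""
    h = (t_end - t0) / n
    t, x, y = t0, x0, y0
    for _ in range(n):
        # slopes for the vector field F(t, (x, y)) = (y, f(t, x, y))
        k1x, k1y = y, f(t, x, y)
        k2x, k2y = y + h/2*k1y, f(t + h/2, x + h/2*k1x, y + h/2*k1y)
        k3x, k3y = y + h/2*k2y, f(t + h/2, x + h/2*k2x, y + h/2*k2y)
        k4x, k4y = y + h*k3y,   f(t + h,   x + h*k3x,   y + h*k3y)
        x += h/6 * (k1x + 2*k2x + 2*k3x + k4x)
        y += h/6 * (k1y + 2*k2y + 2*k3y + k4y)
        t += h
    return x, y

# sanity check against x'' = -x, x(0) = 1, x'(0) = 0, whose solution is cos t
x_pi, y_pi = solve_second_order(lambda t, x, y: -x, 0.0, 1.0, 0.0, math.pi)
```

At t = π the computed pair should be very close to (cos π, −sin π) = (−1, 0), confirming that the system formulation carries both pieces of initial data correctly.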
2 The Linear Equation

We could first consider the general, nth order, linear, homogeneous equation

x⁽ⁿ⁾ + a₁(t)x⁽ⁿ⁻¹⁾ + ⋯ + a_{n−1}(t)x' + a_n(t)x = 0,

then later replace the 0 with q(t), a forcing term, to get the nonhomogeneous equation. But we will tenaciously stick with the second order equation, n = 2, for several reasons:

a. The theory for the case n = 2 carries over easily to higher orders.

b. Higher order linear equations, except possibly for the case n = 4 (vibrating beam problems), rarely occur in applications. One reason for this is simply Newton's Law, F = ma, in which the acceleration a is a second derivative.

c. Higher order constant coefficient equations require finding the roots of cubic or higher order polynomials. Consequently, unless one wants numerical approximations, most problems and examples are rigged. But factoring polynomials is nearly a lost art, and even the quadratic formula is struggling a little to avoid becoming an endangered species. Hence sticking with n = 2 is a wiser course.

Therefore, we will discuss the IVP

(*) x'' + a(t)x' + b(t)x = 0, x(t₀) = x₀, x'(t₀) = y₀,

where a(t), b(t) are continuous for r < t < s. The alternative (Approach 2) is to study instead the equivalent first order system, written generally as

x' − a(t)x − b(t)y = 0, y' − c(t)x − d(t)y = 0.

This approach is a higher level of exposition, but has the advantage of simultaneously taking care of the two-dimensional system and the important case where a(t), b(t), c(t), and d(t) are all constants. This case is essential to the study of stability of equilibria of almost linear systems.

For both approaches the analysis of the nonhomogeneous system, with 0 replaced by q(t) in (*), will be deferred until later in the chapter.

3 Approach 1—Dealing Directly

Linearity is what makes everything tick, and draws on the analogous analysis of algebraic linear systems. Given any solutions x₁(t), x₂(t), ...
, x_k(t) of the second order equation, then any linear combination

x(t) = Σ_{j=1}^{k} c_j x_j(t)

is a solution, by rearrangement and linearity of differentiation:

x''(t) + a(t)x'(t) + b(t)x(t) = Σ_{j=1}^{k} c_j ( x_j''(t) + a(t)x_j'(t) + b(t)x_j(t) ),

and the last expression is zero since each x_j(t) is a solution.

Linearity will help us answer the question—how many solutions do we need to solve the IVP?

i) If x₀ = y₀ = 0 then x(t) ≡ 0 is the only solution—this is an important fact.

ii) Given one solution x(t) with x(t₀) = a ≠ 0, then if x₀ ≠ 0 the solution x₁(t) = (x₀/a)x(t) will satisfy x₁(t₀) = x₀. But x₁'(t₀) = (x₀/a)x'(t₀) will not equal y₀ unless x'(t₀) = (a/x₀)y₀, which would be fortuitous. If y₀ = 0 then x'(t₀) must vanish also. A similar analysis can be given for the case y₀ = b ≠ 0.

Example. x'' − 6x' + 5x = 0. A solution is x₁(t) = e^t since e^t − 6e^t + 5e^t = 0, and another is x₂(t) = e^{5t} since 25e^{5t} − 30e^{5t} + 5e^{5t} = 0. If t₀ = 0,

x₁(t) satisfies x(0) = 1, x'(0) = 1;
x₂(t) satisfies x(0) = 1, x'(0) = 5.

Neither would solve the IVP with x(0) = 2, x'(0) = −2, for instance. The intuitive analysis given above suggests we will need more than one solution.

We can therefore appeal to linearity and ask: given two solutions x₁(t) and x₂(t), can we find constants c₁ and c₂ so that x(t) = c₁x₁(t) + c₂x₂(t) satisfies

x(t₀) = c₁x₁(t₀) + c₂x₂(t₀) = x₀,
x'(t₀) = c₁x₁'(t₀) + c₂x₂'(t₀) = y₀?

The constants c₁, c₂ must be unique by uniqueness of the solution of the IVP, and from linear algebra we conclude that x₁(t), x₂(t) must satisfy

det [ x₁(t₀)  x₂(t₀) ; x₁'(t₀)  x₂'(t₀) ] ≠ 0.

But we have set our sights too low—we want to be able to solve any initial value problem for any t₀, so the goal is to find two solutions satisfying

det [ x₁(t)  x₂(t) ; x₁'(t)  x₂'(t) ] ≠ 0 for all t.

The determinant is called the Wronskian, and we will just denote it by W(t).

For example, for the system X' = AX with A = [ 0 1 ; 3 2 ] (the system equivalent of x'' − 2x' − 3x = 0), the solutions φ₁(t) = col(e^{−t}, −e^{−t}) and φ₂(t) = col(e^{3t}, 3e^{3t}) satisfy

W(t) = det [ e^{−t}  e^{3t} ; −e^{−t}  3e^{3t} ] = 4e^{2t} = 4 exp[ ∫_0^t tr A ds ] ≠ 0,

so we conclude that they are a fundamental pair, and every solution of X' = AX can be written as X(t) = c₁φ₁(t) + c₂φ₂(t), i.e.,

x(t) = c₁e^{−t} + c₂e^{3t}, y(t) = −c₁e^{−t} + 3c₂e^{3t}.
If t₀ = 0 and the IC were x(0) = 4, y(0) = 1, we would solve c₁ + c₂ = 4, −c₁ + 3c₂ = 1 to obtain c₁ = 11/4, c₂ = 5/4.

To prove the existence of a fundamental pair we proceed as before. Let φ₁(t) and φ₂(t) be solutions satisfying the initial conditions

φ₁(t₀) = col(1, 0), φ₂(t₀) = col(0, 1);

then W(t₀) = det [ 1 0 ; 0 1 ] = 1 ≠ 0, so W(t) is never zero, hence φ₁(t) and φ₂(t) are a fundamental pair. The solution of the IVP

X' = A(t)X, X(t₀) = col(x₀, y₀)

is X(t) = x₀φ₁(t) + y₀φ₂(t).

Countless other pairs of fundamental solutions can be found by noting that, given a fundamental pair φ₁(t), φ₂(t) and any nonsingular matrix A = [ a b ; c d ], the matrix with columns aφ₁(t) + cφ₂(t) and bφ₁(t) + dφ₂(t) has determinant

det( W(t)A ) = det W(t) det A ≠ 0,

and since aφ₁(t) + cφ₂(t) and bφ₁(t) + dφ₂(t) are solutions by linearity, we have created another fundamental pair.

All of the above analysis carries over to higher dimensional linear systems. The question of linear independence of a set of fundamental solutions is not discussed, and the reader is referred to the final paragraphs of the preceding section, and to linear algebra theory.

Now we can immediately attack the second order equation x''(t) + a(t)x' + b(t)x = 0 by noting that it is equivalent to the two-dimensional system

col(x', y') = [ 0 1 ; −b(t) −a(t) ] col(x, y).

Then a fundamental pair of solutions

φ₁(t) = col(x₁(t), x₁'(t)), φ₂(t) = col(x₂(t), x₂'(t))

exists for the system, which is equivalent to a fundamental pair of solutions x₁(t), x₂(t) existing for the second order equation. Their Wronskian is

W(t) = x₁(t)x₂'(t) − x₂(t)x₁'(t) = W(t₀) exp( −∫_{t₀}^{t} a(s) ds ),

since tr A(t) = −a(t). Given any fundamental pair x₁(t), x₂(t), another fundamental pair is

x₃(t) = a x₁(t) + c x₂(t), x₄(t) = b x₁(t) + d x₂(t), where ad − bc ≠ 0.
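Abel's formula W(t) = W(t₀) exp(−∫ a) can be checked numerically for a genuinely nonconstant coefficient. A sketch assuming a(t) = 1/(1 + t) and b(t) = 1 (chosen only so that exp(−∫₀ᵗ a) = 1/(1 + t) has a clean closed form; rk4_system is an illustrative helper name):

```python
def rk4_system(deriv, state, t, h):
    """One RK4 step for a first order system given as a list-valued deriv."""
    k1 = deriv(t, state)
    k2 = deriv(t + h/2, [s + h/2*k for s, k in zip(state, k1)])
    k3 = deriv(t + h/2, [s + h/2*k for s, k in zip(state, k2)])
    k4 = deriv(t + h,   [s + h*k   for s, k in zip(state, k3)])
    return [s + h/6*(p + 2*q + 2*r + w)
            for s, p, q, r, w in zip(state, k1, k2, k3, k4)]

a = lambda t: 1.0 / (1.0 + t)   # so W(t) = W(0) * exp(-ln(1+t)) = 1/(1+t)
b = lambda t: 1.0               # any continuous coefficient will do

def deriv(t, s):
    # two copies of x' = y, y' = -b x - a y: the columns phi_1, phi_2
    x1, y1, x2, y2 = s
    return [y1, -b(t)*x1 - a(t)*y1, y2, -b(t)*x2 - a(t)*y2]

state, t, n = [1.0, 0.0, 0.0, 1.0], 0.0, 5000   # phi_1(0)=(1,0), phi_2(0)=(0,1)
h = 1.0 / n
for _ in range(n):
    state = rk4_system(deriv, state, t, h)
    t += h
x1, y1, x2, y2 = state
W_numeric = x1*y2 - x2*y1       # Wronskian at t = 1
W_abel = 1.0 / (1.0 + 1.0)      # Abel's prediction: 1/(1+t) at t = 1
```

The two values agree to many digits, even though neither solution is elementary.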
Every solution x(t) can be expressed as a linear combination x(t) = c₁x₁(t) + c₂x₂(t), where x₁(t), x₂(t) are a fundamental pair.

5 Comments on the Two Approaches

The future presenter of the introduction to the subject may have by now tossed this book on the pile intended for the next library book sale, mumbling about its unsuitability, and returned to the somewhat mind-numbing parade of linear independence, constant coefficient second order equations, constant coefficient higher order equations, constant coefficient linear systems (sometimes with a big dose of linear algebra), etc. The problem will be that in a one semester course, this approach will likely leave little time to develop the most important topics, which would certainly include qualitative theory and stability, possibly Hamiltonian systems, the phase plane (n = 2!), possibly boundary value problems, and maybe even chaos. If the audience is engineers then knowledge of the Laplace transform is usually required—fortunately there is now good software to avoid the tedious partial fractions. Most of today's introductory texts are replete with mathematical models, many of which are worth exploring, but such explorations benefit from a thorough grounding in the theoretical tools, which are really quite few. Time management will be the problem, and the author's suggested approaches—felling two birds with one Wronskian?—may help to relieve the problem and make it possible to move faster into the richer plateaus of ordinary differential equations.

6 Constant Coefficient Equations—The Homogeneous Case

At last—we can get our hands on some real solutions! True enough, and it depends on the simple fact that x(t) = e^{λt} is a solution of

x'' + ax' + bx = 0, a, b real constants,

if and only if λ is a root of the characteristic polynomial

p(λ) = λ² + aλ + b = 0,

since (d²/dt²)(e^{λt}) + a (d/dt)(e^{λt}) + b e^{λt} = e^{λt} p(λ). The various cases are:

a. p(λ) has two distinct real roots λ₁, λ₂, λ₁ ≠ λ₂.
Then x₁(t) = e^{λ₁t} and x₂(t) = e^{λ₂t} are solutions and

W(t) = det [ e^{λ₁t}  e^{λ₂t} ; λ₁e^{λ₁t}  λ₂e^{λ₂t} ] = (λ₂ − λ₁) e^{(λ₁+λ₂)t} ≠ 0,

so they are a fundamental pair.

b. p(λ) has a double root λ₁, so x₁(t) = e^{λ₁t} is one solution—what is the other? One can guess x₂(t) = te^{λ₁t} and find out that it works—definitely a time saving approach! But this is a nice time to reintroduce reduction of order, an important tool. First note that if λ₁ is a double root of p(λ), then from the quadratic formula λ₁ = −a/2 and b = a²/4. Now let x₂(t) = e^{λ₁t} ∫^t u(s) ds and substitute it into the differential equation to get, after canceling the e^{λ₁t},

u' + (2λ₁ + a)u = 0.

Since 2λ₁ + a = 0, this implies u' = 0, so u(t) = A, and x₂(t) = Ate^{λ₁t} + Be^{λ₁t}. The second term is our original x₁(t), and since A is arbitrary we conclude that x₂(t) = te^{λ₁t} is the second solution. Then

W(t) = det [ e^{λ₁t}  te^{λ₁t} ; λ₁e^{λ₁t}  e^{λ₁t} + λ₁te^{λ₁t} ] = e^{2λ₁t} ≠ 0,

and therefore x₁(t) and x₂(t) are a fundamental pair.

c. p(λ) has complex roots, and since a, b are real they must be a complex conjugate pair

λ₁ = r + iθ, λ₂ = r − iθ, where i² = −1.

Even if the neophyte has not studied complex variables, it is essential that an important formula be introduced at this point:

e^{iθ} = cos θ + i sin θ.

It can be taken for granted, or motivated by using the infinite series for cos θ and sin θ, and the multiplication rules for products of i. Besides, it leads to the wonderful relation e^{iπ} + 1 = 0, wherein e, i, π, 0 and 1 share in a fascinating concatenation. Letting θ → −θ gives e^{−iθ} = cos θ − i sin θ and you obtain the key formulas

cos θ = (e^{iθ} + e^{−iθ})/2, sin θ = (e^{iθ} − e^{−iθ})/2i.

The law of exponents holds, so

x₁(t) = e^{λ₁t} = e^{(r+iθ)t} = e^{rt}e^{iθt} and x₂(t) = e^{λ₂t} = e^{(r−iθ)t} = e^{rt}e^{−iθt}

are solutions, and any linear combination x(t) = c₁x₁(t) + c₂x₂(t) is also a solution. Nothing about linearity prohibits c₁ and c₂ being complex numbers, so first let c₁ = c₂ = 1/2:

x(t) = e^{rt} (e^{iθt} + e^{−iθt})/2 = e^{rt} cos θt,

a real solution! Now let c₁ = 1/2i, c₂ = −1/2i and obtain

x(t) = e^{rt} (e^{iθt} − e^{−iθt})/2i = e^{rt} sin θt,

another real solution!
Now compute their Wronskian:

W(t) = det [ e^{rt} cos θt,  e^{rt} sin θt ; re^{rt} cos θt − θe^{rt} sin θt,  re^{rt} sin θt + θe^{rt} cos θt ]
     = θ e^{2rt} (cos² θt + sin² θt) = θ e^{2rt} ≠ 0

since θ ≠ 0; they are a fundamental pair.

In these few pages we have covered everything the beginner needs to know about linear, homogeneous (no forcing term), constant coefficient equations, which should be reinforced with some practice. Further practice should come from doing some applied problems.

Remark. The Euler Equation

x'' + (a/t)x' + (b/t²)x = 0, t ≠ 0,

should be briefly mentioned, since it is a camouflaged linear equation and does come up in some partial differential equations, via separation of variables. It also provides absurdly complicated variation of parameters problems for malevolent instructors. Solution technique: let x(t) = t^λ, substitute to get a characteristic polynomial p(λ) = λ² + (a − 1)λ + b, then work out the three possible cases as above. Remember, the Euler Equation is one of the few explicitly solvable fly specks in the universe of nonconstant coefficient linear differential equations.

Do you want to venture beyond n = 2? You do so at your own risk, in view of the fact that the majority of high school algebra texts have shelved factoring polynomials in the same musty bin where they stored finding square roots. Of course your friendly computer algebra program can do it for you, but what is being learned? And then you will have to wave your hands and assert that the resulting solutions form a fundamental set of solutions because they are linearly independent, unless you want to demonstrate some sleep-inducing analysis.

Example 1. Wish to prove e^{λ₁t}, e^{λ₂t}, ..., e^{λₙt}, λⱼ distinct, are linearly independent? You could write down their Wronskian W(t); then

W(0) = det [ 1  1  ⋯  1 ; λ₁  λ₂  ⋯  λₙ ; ⋮ ; λ₁^{n−1}  λ₂^{n−1}  ⋯  λₙ^{n−1} ],

which is Vandermonde's determinant (Whoa!) and is never zero if the λⱼ are distinct. Or suppose

c₁e^{λ₁t} + c₂e^{λ₂t} + ⋯ + cₙe^{λₙt} = 0;

multiply by e^{−λ₁t} and differentiate enough times to make the first (now constant) term disappear.
Now multiply by e^{−(λ₂−λ₁)t}, differentiate some more, ... . Maybe you are a little more convinced of the wisdom of sticking with n = 2.

7 What To Do With Solutions

To start with an elementary, but very useful, fact: it is important to be able to construct and use the phase-amplitude representation of a periodic sum. If x(t) = a sin ωt + b cos ωt, then

x(t) = A cos(ωt − φ),

where A = √(a² + b²) is the amplitude and φ = tan⁻¹(a/b) is the phase angle or phase shift. This is trivially proved by multiplying and dividing the first expression by √(a² + b²) and using some trigonometry. The representation makes for a quick and illuminating graphing technique.

Example. x(t) = 3 sin 2t + 5 cos 2t, so A = √34 and φ = tan⁻¹(3/5) ≈ 0.54 rad, hence

x(t) = √34 cos(2t − 0.54),

and 2t − 0.54 = 0 gives the first maximum at t = 0.27.

We will shortly see the effective use of the representation in studying stability.

There are three popular models from mechanics used to discuss undamped or damped motion; we first consider the former: (i) is a suspended mass on a spring, (ii) is the mass connected to a spring moving on a track, and (iii) is a bob or pendulum where x is a small angular displacement from the rest position. In (i), L is the length the spring is stretched when the mass is attached. All share the same equation of motion

x'' + ω²x = 0,

the harmonic oscillator, where ω² = g/L in (i) and (iii), and ω² = k, the spring constant, in (ii). In the case of the pendulum the equation of motion is an approximation, for small displacement, to the true equation x'' + ω² sin x = 0. Most problems are easy to solve except for the penchant some authors have for mixing units—the length is in meters, the mass is in drams, and it takes place on Jupiter. Stick with the CGS or MKS system!
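The phase-amplitude conversion above is a one-liner worth having around; a Python sketch, using atan2 so the phase lands in the correct quadrant (which tan⁻¹(a/b) alone does not guarantee):

```python
import math

def phase_amplitude(a, b):
    """Return (A, phi) with a sin(wt) + b cos(wt) = A cos(wt - phi).
    Matching coefficients gives A cos(phi) = b and A sin(phi) = a,
    so phi = atan2(a, b) picks the correct quadrant automatically."""
    return math.hypot(a, b), math.atan2(a, b)

A, phi = phase_amplitude(3, 5)   # the example above: A = sqrt(34), phi ~ 0.54 rad
```

Evaluating A cos(2t − φ) at any t reproduces 3 sin 2t + 5 cos 2t exactly, which is a quick way to check a hand conversion.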
68 Ordinary Differential Equations—A Briet Eclectic Tour If there is a displacement zo from rest x = 0, or an initial velocity yo imparted to the mass, then the IVP is €+w'x=0, 2(0) = 2, £(0) = whose solution is x(t) = ®sinwt tao cost = y/28 + yB/w? cos(wt — ¢) where é = tan7? yo /wo. The phase-amplitude expression gives us a hint of the phase plane as follows: = y= ufo adie + y2/w? sin(wt — 4) and therefore i 2 a(n? CO = 8 + 980%), Or if we plot x(t) vs. ¢(t) = y(t) then a(t)? y(t)? apt up/w wort us which is an ellipse. It is the parametric representation of all solutions satisfying (to) = 79, &(t) = yo for some to. Furthermore, we see from the phase-amplitude representation of the solution that a change in the initial time is reflected in a change in the phase angle of the solution, not the amplitude. If we add damping to the model, which we can assume is due to air resistance or friction (in the case of the pendulum at the pivot), and furthermore make the simple assumption that the dissipation of energy is proportional to the velocity and is independent of the displacement, we obtain the equation of damped motion E+ct+w'e=0, ¢>0. Now things get more interesting; the standard picture demonstrating the model might add a dashpot to represent the frictional force: oa eee i rt Models of RLC circuits can also be developed which lead to the same equation, but since the author doesn’t know an ohm from an oleaster they were omitted. The motion depends on the nature of the roots of the characteristic polynomial—you could imagine it as the case where the cylinders of your Harley Davidson's shock absorbers were filled with Perrier, Mazola, or Elmer’s Glue. The polynomial is p(X) = A? +cA+w? with roots 1 c 4u? nan }(-e8 VE=MP) = $ (1 1-= 4 Second Order Equations 69 Case 1. Overdamped, 0 < “4% <1, Both roots are distinct and negative: Ay = —ry, Ag = —ro, and 2(t) = ae~"!* + be", 11,72 > 0, where a and b are determined by the IC. 
Case 2. Critically damped, 4ω²/c² = 1. There is a double root λ = −c/2 and x(t) = (a + bt)e^{−ct/2}. If x₀, y₀ are positive it will have a maximum at some t = t*, then decrease to zero. Since the size of the maximum could be of interest in the design of the damping mechanism, assigned problems should be to find the maximum value of x(t) and when it occurs.

Case 3. Underdamped, 4ω²/c² > 1. Now the roots are −c/2 ± iθ, θ = (c/2)√(4ω²/c² − 1), so the solution will be

x(t) = ae^{−ct/2} sin θt + be^{−ct/2} cos θt = Ae^{−ct/2} cos(θt − φ),

where the maximum amplitude A and the phase angle are determined by the initial conditions. A possible graph shows a decaying oscillation enclosed by the dashed envelope ±Ae^{−ct/2}.

The interesting problems for the underdamped case are to estimate a first time beyond which the amplitude is less than a preassigned small value for all later times. These problems afford a nice combination of analysis and the advantages of today's calculator or computer graphing capabilities.

Examples.

1. For the system governed by the IVP

x'' + (1/8)x' + x = 0, x(0) = 2, x'(0) = 0,

accurately estimate the smallest value of T for which |x(t)| < 0.5 for all t > T.

Discussion: This can be done solely by using a graphing calculator or a plotting package, but doing some analysis first has benefits. The solution is

x(t) ≈ 2.0039 e^{−t/16} cos( (√255/16)t − 0.0626 ),

and since cos θ has local extrema at θ = nπ, n = 0, 1, 2, ..., first solve (√255/16)tₙ − 0.0626 = nπ to get tₙ = 16(nπ + 0.0626)/√255. Then we want |x(tₙ)| = 2.0039 exp(−tₙ/16) < 0.5, which gives a value of n = 7. Check some values:

n = 7, t₇ ≈ 22.097, x(t₇) ≈ −0.5036
n = 8, t₈ ≈ 25.245, x(t₈) ≈ 0.4136
n = 9, t₉ ≈ 28.392, x(t₉) ≈ 0.3398

Now call in the number cruncher to solve

2.0039 e^{−t/16} cos( (√255/16)t − 0.0626 ) = −0.5

with starting value t₇ ≈ 22.097 to get T ≈ 22.16968423 and x(T) ≈ −0.49999999.

A similar problem is the following:

2. Find a second order linear differential equation and initial conditions whose solution is x(t) = 2e^{−t/20} sin t.
Then find an accurate estimate of the smallest value of T for which |x(t)| < 0.1 for t > T.

Discussion: The second half of the problem is like the previous one, and T ≈ 58.49217. But it is surprising how many students have trouble with the first half and go to unreasonable efforts of calculation. From the solution we infer that the roots of the characteristic polynomial are λ₁,₂ = −1/20 ± i, and since the characteristic polynomial can be written as λ² − (λ₁ + λ₂)λ + λ₁λ₂, we get the ODE

x'' + (1/10)x' + (1/400 + 1)x = 0,

and from the solution that x(0) = 0, x'(0) = 2.

8 The Nonhomogeneous Equation or the Variation of Parameters Formula Untangled

We consider the nonhomogeneous, second order, constant coefficient differential equation

(*) x'' + ax' + bx = q(t),

where q(t) is continuous on r < t < s, and consequently solutions of the IVP

x'' + ax' + bx = q(t), x(t₀) = x₀, x'(t₀) = y₀,

will exist, be unique, and be defined for r < t < s.

For the system X' = A(t)X + B(t), one looks for a particular solution of the form X_p(t) = Φ(t)u(t), where Φ(t) is a fundamental matrix of the homogeneous system. Substituting gives

Φ(t)u'(t) = B(t) ⟹ u'(t) = Φ⁻¹(t)B(t) ⟹ u(t) = ∫^t Φ⁻¹(s)B(s) ds,

hence X_p(t) = Φ(t) ∫^t Φ⁻¹(s)B(s) ds. If we are solving the IVP, we could put limits on the integral and use ∫_{t₀}^{t}; then X_p(t₀) = 0, so the particular solution plays no role in the evaluation of the necessary constants in the general solution.

But we were sticking to n = 2, weren't we? Yes, and that makes life even easier, since inverting 2 × 2 matrices is a piece of cake. Recalling that

det [ x₁(t)  x₂(t) ; y₁(t)  y₂(t) ] = W(t) ≠ 0,

we have

Φ⁻¹(t) = (1/W(t)) [ y₂(t)  −x₂(t) ; −y₁(t)  x₁(t) ],

and now we can write down the expression for the particular solution in all its splendor—but we won't. The topic of this chapter is second order equations, and we don't want to stray too far from the terra firma of n = 2. So consider the special case

A(t) = [ 0  1 ; −b(t)  −a(t) ], B(t) = col(0, q(t)),

which corresponds to the second order equation x'' + a(t)x' + b(t)x = q(t). In this case, given a fundamental pair of solutions x₁(t), x₂(t), let

Φ(t) = [ x₁(t)  x₂(t) ; x₁'(t)  x₂'(t) ], Φ⁻¹(t) = (1/W(t)) [ x₂'(t)  −x₂(t) ; −x₁'(t)  x₁(t) ],

with W(t) = W(t₀) exp[ −∫_{t₀}^{t} a(s) ds ], therefore

col(x_p(t), x_p'(t)) = Φ(t) ∫^t (1/W(s)) [ x₂'(s)  −x₂(s) ; −x₁'(s)  x₁(s) ] col(0, q(s)) ds.
But we are really only interested in the expression for x_p(t), so after a little matrix multiplication we pick out the first component and get

x_p(t) = −x₁(t) ∫^t ( x₂(s)q(s)/W(s) ) ds + x₂(t) ∫^t ( x₁(s)q(s)/W(s) ) ds.

This is exactly what we would get if we solved the system of equations for col(u₁', u₂'),

x₁u₁' + x₂u₂' = 0,
x₁'u₁' + x₂'u₂' = q(t),

then integrated. Note these contain the "mysterious" first equation which never appears in the systems approach.

For the constant coefficient case we will derive an alternate, and often more useful, form of the particular solution, but we will not expand the discussion to higher order equations or to systems. Just keep the encapsulating statement in mind:

The general solution of the system X' = A(t)X + B(t) is given by the expression

X(t) = Φ(t)c + Φ(t) ∫^t Φ⁻¹(s)B(s) ds,

where Φ(t) is any fundamental matrix of the homogeneous system X' = A(t)X and c is determined by initial conditions, if any.

Note that the expression for X_p(t) is independent of the choice of a fundamental matrix Φ(t), since if Φ₁(t) = Φ(t)Q, where det Q ≠ 0, then

Φ₁(t) ∫^t Φ₁⁻¹(s)B(s) ds = Φ(t)Q ∫^t Q⁻¹Φ⁻¹(s)B(s) ds = Φ(t) ∫^t Φ⁻¹(s)B(s) ds.

Example. x'' + (1/4t²)x = q(t), t ≠ 0, an Euler equation with x₁(t) = t^{1/2}, x₂(t) = t^{1/2} log t, and since a(t) = 0, W(t) = 1. Then

Φ(t) = [ t^{1/2},  t^{1/2} log t ; (1/2)t^{−1/2},  (1/2)t^{−1/2} log t + t^{−1/2} ],

Φ⁻¹(t) = [ (1/2)t^{−1/2} log t + t^{−1/2},  −t^{1/2} log t ; −(1/2)t^{−1/2},  t^{1/2} ],

and therefore

col(x_p(t), x_p'(t)) = Φ(t) ∫^t Φ⁻¹(s) col(0, q(s)) ds,

from which we obtain the expression

x_p(t) = −t^{1/2} ∫^t ( s^{1/2} log s ) q(s) ds + t^{1/2} log t ∫^t s^{1/2} q(s) ds.

The example points out the real problem with variation of parameters—any exercises are not much more than tests of integration skill, e.g. let q(t) = t^{3/2} above. More likely, the answers can only be computed numerically, e.g.
let q(t) = cos t above, or, even simpler, solve x'' + x = t^{1/2}. The importance of the variation of parameters method is that, for the system, it gives an exact integral representation of the solution, which can be very useful in determining bounds on solutions, long time behavior, and stability, if we have information about Φ(t) and B(t). We will see examples of this in Chapter 5. The best advice, whatever method or approach one uses to introduce variation of parameters, is to work a few problems, then stick a bookmark in that section of the book in case you ever need to use the formula.

9 Cosines and Convolutions

The most interesting problems for the second order, constant coefficient equation are those generally described as the forced (damped or undamped) oscillator. A typical mechanical configuration is a spring/mass system being driven by a periodic forcing term F cos qt. For simplicity we will assume that the natural frequency of the system (when k and F are zero) is ω = 1, so the equation describing the system is

x'' + kx' + x = F cos qt, k > 0.

Solving the equation gives a solution of the form

x(t) = x_h(t) + K cos(qt − φ),

where x_h(t) is the solution of the unforced system (F = 0) and K cos(qt − φ) is the particular solution. Since k > 0 we know x_h(t) → 0 as t → ∞; it is called the transient solution. Hence x(t) → K cos(qt − φ), the steady state solution. But all is not as simple as it appears, so we need to look at some cases.

Case 1: damping, k > 0, and q = 1. The frequency of the forcing term matches the natural frequency. Solving the equation we have

x(t) = x_h(t) + (F/k) cos(t − π/2) = x_h(t) + (F/k) sin t,

so the solution approaches a natural mode of the undamped system x'' + x = 0. Note that the amplitude F/k can be very large if k > 0 is small.

Case 2: damping, k > 0, and q ≠ 1.
A little calculation results in the steady state solution

x(t) = F K(q) cos(qt − φ), K(q) = 1/√( (1 − q²)² + k²q² ),

and K(q), which multiplies F, the amplitude of the forcing term, is called the amplification factor or gain. It is a simple first semester calculus exercise to show that:

i) K(0) = 1, and K(q) → 0 as q → ∞.

ii) K'(q) = 0 for q² = 1 − k²/2, which will be positive when k < √2, and in this case K has a maximum value ( k√(1 − k²/4) )⁻¹. Otherwise (k ≥ √2), K(q) monotonically decreases to zero.

iii) If k = 0, K(q) will have a vertical asymptote at q = 1.

A graph of the gain vs. q shows the resonance peak sharpening as k decreases, with the vertical asymptote at q = 1 in the limiting case k = 0. If we fix k at a value 0 < k < √2 we can "tune" the system to get maximum amplitude by adjusting q, the forcing frequency. A good practical problem could be: let F = 10, then plot K(q) for various values of k, and compute the maximum gain, if any, and the phase shift in each case.

The Undamped Cases: k = 0

The equation is x'' + x = F cos qt, and if we assume the system starts at rest, so x(0) = x'(0) = 0, there are two cases.

Case 1: q ≠ 1. The solution is

x(t) = (F/(1 − q²))(cos qt − cos t),

a superposition of two harmonics, with a large amplitude if q is close to 1. Via a trigonometric identity for the difference of two cosines we get

x(t) = (2F/(1 − q²)) sin( (1 − q)t/2 ) sin( (1 + q)t/2 ),

and suppose q is close to 1, say q = 1 + ε, ε > 0. Then

x(t) = [ (2F/(1 − q²)) sin(εt/2) ] sin( (2 + ε)t/2 )

(up to sign), and the bracketed term will have a large amplitude and a large period 4π/ε. The other term will have a period 4π/(2 + ε) ≈ 2π, so we see the phenomenon of beats, where the large amplitude, large period oscillation is an envelope which encloses the smaller amplitude, smaller period oscillation.

Case 2: q = 1. Instead of using undetermined coefficients, or variation of parameters for the intrepid souls, to solve this case, go back to the expression for the solution in the case q ≠ 1,

x_ε(t) = (F/(1 − q²))(cos qt − cos t),

and let q = 1 + ε.
We have

x_ε(t) = F ( cos(1 + ε)t − cos t ) / ( 1 − (1 + ε)² ),

and with a little trigonometry and some readjustment this becomes

x_ε(t) = ( F/(2 + ε) ) [ sin t · (sin εt)/ε + cos t · (1 − cos εt)/ε ].

Now use L'Hôpital's Rule on each bracketed quotient, letting ε → 0, to get

lim_{ε→0} x_ε(t) = x(t) = (F/2) t sin t,

which is the dreaded case of resonance. For the more general case x'' + ω²x = F cos ωt we get x(t) = (F/2ω) t sin ωt.

A sample problem might be one similar to those suggested for the underdamped system: A system governed by the IVP

x'' + 4x = 16 cos 2t, x(0) = x'(0) = 0,

"blows up" when the amplitude |x(t)| exceeds 24. When does this first occur?

A more general discussion of resonance can be obtained for the constant coefficient case by slightly adjusting the form of the variation of parameters formula, or using the following result:

The solution x_p(t) of the IVP

x'' + ax' + bx = q(t), x(0) = x'(0) = 0,

is given by the convolution integral

x_p(t) = ∫_0^t φ(t − s) q(s) ds,

where φ(t) is the solution of the homogeneous differential equation satisfying φ(0) = 0, φ'(0) = 1.

To prove the result, first compute the derivatives of x_p(t):

x_p'(t) = φ(0)q(t) + ∫_0^t φ'(t − s)q(s) ds = ∫_0^t φ'(t − s)q(s) ds, since φ(0) = 0;

x_p''(t) = φ'(0)q(t) + ∫_0^t φ''(t − s)q(s) ds = q(t) + ∫_0^t φ''(t − s)q(s) ds, since φ'(0) = 1.

Then

x_p'' + a x_p' + b x_p = q(t) + ∫_0^t ( φ'' + aφ' + bφ )(t − s) q(s) ds = q(t),

since φ(t) is a solution of the homogeneous equation. A more elaborate proof would be to use the variation of parameters formula for each of the constant coefficient cases and combine terms.

The convolution integral representation has several very nice features:

a. Since x_p(0) = x_p'(0) = 0, the particular solution contributes nothing to the initial conditions, which makes the IVP

x'' + ax' + bx = q(t), x(0) = x₀, x'(0) = y₀,

easier to solve. One merely adds to x_p(t) the solution of the homogeneous problem satisfying the IC.

b. Similar to the case for the first order equation, it gives an elegant representation of the solution when q(t) is a discontinuous function, e.g., for x'' − 2x' + x = q(t), where φ(t) = te^t, with

q(t) = −1 for 0 ≤ t ≤ 2, q(t) = 1 for t > 2.
Working out the integrals gives

x_p(t) = −te^t + e^t − 1, 0 ≤ t ≤ 2,
x_p(t) = −(t − 1)e^t + 2(t − 3)e^{t−2} + 1, t > 2.

But the real value of the convolution integral representation is that it gives a very specific criterion for when resonance can occur in the general case. Consider the IVP

x'' + x = f(t), x(0) = x₀, x'(0) = y₀, where f(t + 2π) = f(t).

In this case φ(t) = sin t, so

x(t) = x₀ cos t + y₀ sin t + x_p(t), x_p(t) = ∫_0^t sin(t − s) f(s) ds.

Then

x_p(t + 2π) = ∫_0^{t+2π} sin(t + 2π − s) f(s) ds = ∫_0^t sin(t − s) f(s) ds + ∫_t^{t+2π} sin(t − s) f(s) ds,

and by making a change of variables in the second integral, and since f(t + 2π) = f(t), we get

x_p(t + 2π) = x_p(t) + ∫_0^{2π} sin(t − s) f(s) ds.

It follows from the last equation that

x_p(t + 4π) = x_p(t + 2π) + ∫_0^{2π} sin(t + 2π − s) f(s) ds = x_p(t) + 2 ∫_0^{2π} sin(t − s) f(s) ds,

and in general, for any positive integer N,

x_p(t + 2Nπ) = x_p(t) + N ∫_0^{2π} sin(t − s) f(s) ds.

But if there is some value t = α having the property that ∫_0^{2π} sin(α − s) f(s) ds = m ≠ 0, then x_p(α + 2Nπ) = x_p(α) + Nm, which becomes unbounded as N → ∞. Resonance! We conclude that:

The equation x'' + x = f(t), f(t + 2π) = f(t), will have a 2π-periodic solution whenever ∫_0^{2π} sin(t − s) f(s) ds = 0 for all t. Otherwise, there will be resonance.

The result allows us to consider a variety of functions f(t) satisfying f(t + 2π) = f(t), for example:

a) f(t) = |sin t|; then

∫_0^{2π} sin(t − s) |sin s| ds = ∫_0^π sin(t − s) sin s ds + ∫_π^{2π} sin(t − s)(−sin s) ds = 0,

so no resonance.
