Ordinary Differential Equations
Gabriel Nagy
Mathematics Department,
Michigan State University,
East Lansing, MI, 48824.
gnagy@msu.edu
January 18, 2021
Contents

  • Preface
  • Chapter 1. First Order Equations
    • 1.1. Linear Constant Coefficient Equations
    • 1.1.1. Overview of Differential Equations
    • 1.1.2. Linear Differential Equations
    • 1.1.3. Solving Linear Differential Equations
    • 1.1.4. The Integrating Factor Method
    • 1.1.5. The Initial Value Problem
    • 1.1.6. Exercises
    • 1.2. Linear Variable Coefficient Equations
    • 1.2.1. Review: Constant Coefficient Equations
    • 1.2.2. Solving Variable Coefficient Equations
    • 1.2.3. The Initial Value Problem
    • 1.2.4. The Bernoulli Equation
    • 1.2.5. Exercises
    • 1.3. Separable Equations
    • 1.3.1. Separable Equations
    • 1.3.2. Euler Homogeneous Equations
    • 1.3.3. Solving Euler Homogeneous Equations
    • 1.3.4. Exercises
    • 1.4. Exact Differential Equations
    • 1.4.1. Exact Equations
    • 1.4.2. Solving Exact Equations
    • 1.4.3. Semi-Exact Equations
    • 1.4.4. The Equation for the Inverse Function
    • 1.4.5. Exercises
    • 1.5. Applications of Linear Equations
    • 1.5.1. Exponential Decay
    • 1.5.2. Carbon-14 Dating
    • 1.5.3. Newton’s Cooling Law
    • 1.5.4. Mixing Problems
    • 1.5.5. Exercises
    • 1.6. Nonlinear Equations
    • 1.6.1. The Picard-Lindelöf Theorem
    • 1.6.2. Comparison of Linear and Nonlinear Equations
    • 1.6.3. Direction Fields
    • 1.6.4. Exercises
  • Chapter 2. Second Order Linear Equations
    • 2.1. Variable Coefficients
    • 2.1.1. Definitions and Examples
    • 2.1.2. Solutions to the Initial Value Problem
    • 2.1.3. Properties of Homogeneous Equations
    • 2.1.4. The Wronskian Function
    • 2.1.5. Abel’s Theorem
    • 2.1.6. Exercises
    • 2.2. Reduction of Order Methods
    • 2.2.1. Special Second Order Equations
    • 2.2.2. Conservation of the Energy
    • 2.2.3. The Reduction of Order Method
    • 2.2.4. Exercises
    • 2.3. Homogeneous Constant Coefficients Equations
    • 2.3.1. The Roots of the Characteristic Polynomial
    • 2.3.2. Real Solutions for Complex Roots
    • 2.3.3. Constructive Proof of Theorem 2.3.2
    • 2.3.4. Exercises
    • 2.4. Euler Equidimensional Equation
    • 2.4.1. The Roots of the Indicial Polynomial
    • 2.4.2. Real Solutions for Complex Roots
    • 2.4.3. Transformation to Constant Coefficients
    • 2.4.4. Exercises
    • 2.5. Nonhomogeneous Equations
    • 2.5.1. The General Solution Formula
    • 2.5.2. The Undetermined Coefficients Method
    • 2.5.3. The Variation of Parameters Method
    • 2.5.4. Exercises
    • 2.6. Applications
    • 2.6.1. Review of Constant Coefficient Equations
    • 2.6.2. Undamped Mechanical Oscillations
    • 2.6.3. Damped Mechanical Oscillations
    • 2.6.4. Electrical Oscillations
    • 2.6.5. Exercises
  • Chapter 3. Power Series Solutions
    • 3.1. Solutions Near Regular Points
    • 3.1.1. Regular Points
    • 3.1.2. The Power Series Method
    • 3.1.3. The Legendre Equation
    • 3.1.4. Exercises
    • 3.2. Solutions Near Regular Singular Points
    • 3.2.1. Regular Singular Points
    • 3.2.2. The Frobenius Method
    • 3.2.3. The Bessel Equation
    • 3.2.4. Exercises
    • Notes on Chapter
  • Chapter 4. The Laplace Transform Method
    • 4.1. Introduction to the Laplace Transform
    • 4.1.1. Overview of the Method
    • 4.1.2. The Laplace Transform
    • 4.1.3. Main Properties
    • 4.1.4. Solving Differential Equations
    • 4.1.5. Exercises
    • 4.2. The Initial Value Problem
    • 4.2.1. Solving Differential Equations
    • 4.2.2. One-to-One Property
    • 4.2.3. Partial Fractions
    • 4.2.4. Higher Order IVP
    • 4.2.5. Exercises
    • 4.3. Discontinuous Sources
    • 4.3.1. Step Functions
    • 4.3.2. The Laplace Transform of Steps
    • 4.3.3. Translation Identities
    • 4.3.4. Solving Differential Equations
    • 4.3.5. Exercises
    • 4.4. Generalized Sources
    • 4.4.1. Sequence of Functions and the Dirac Delta
    • 4.4.2. Computations with the Dirac Delta
    • 4.4.3. Applications of the Dirac Delta
    • 4.4.4. The Impulse Response Function
    • 4.4.5. Comments on Generalized Sources
    • 4.4.6. Exercises
    • 4.5. Convolutions and Solutions
    • 4.5.1. Definition and Properties
    • 4.5.2. The Laplace Transform
    • 4.5.3. Solution Decomposition
    • 4.5.4. Exercises
  • Chapter 5. Systems of Linear Differential Equations
    • 5.1. General Properties
    • 5.1.1. First Order Linear Systems
    • 5.1.2. Existence of Solutions
    • 5.1.3. Order Transformations
    • 5.1.4. Homogeneous Systems
    • 5.1.5. The Wronskian and Abel’s Theorem
    • 5.1.6. Exercises
    • 5.2. Solution Formulas
    • 5.2.1. Homogeneous Systems
    • 5.2.2. Homogeneous Diagonalizable Systems
    • 5.2.3. Nonhomogeneous Systems
    • 5.2.4. Exercises
    • 5.3. Two-Dimensional Homogeneous Systems
    • 5.3.1. Diagonalizable Systems
    • 5.3.2. Non-Diagonalizable Systems
    • 5.3.3. Exercises
    • 5.4. Two-Dimensional Phase Portraits
    • 5.4.1. Real Distinct Eigenvalues
    • 5.4.2. Complex Eigenvalues
    • 5.4.3. Repeated Eigenvalues
    • 5.4.4. Exercises
  • Chapter 6. Autonomous Systems and Stability
    • 6.1. Flows on the Line
    • 6.1.1. Autonomous Equations
    • 6.1.2. Geometrical Characterization of Stability
    • 6.1.3. Critical Points and Linearization
    • 6.1.4. Population Growth Models
    • 6.1.5. Exercises
    • 6.2. Flows on the Plane
    • 6.2.1. Two-Dimensional Nonlinear Systems
    • 6.2.2. Review: The Stability of Linear Systems
    • 6.2.3. Critical Points and Linearization
    • 6.2.4. The Stability of Nonlinear Systems
    • 6.2.5. Competing Species
    • 6.2.6. Exercises
  • Chapter 7. Boundary Value Problems
    • 7.1. Eigenfunction Problems
    • 7.1.1. Two-Point Boundary Value Problems
    • 7.1.2. Comparison: IVP and BVP
    • 7.1.3. Eigenfunction Problems
    • 7.1.4. Exercises
    • 7.2. Overview of Fourier series
    • 7.2.1. Fourier Expansion of Vectors
    • 7.2.2. Fourier Expansion of Functions
    • 7.2.3. Even or Odd Functions
    • 7.2.4. Sine and Cosine Series
    • 7.2.5. Applications
    • 7.2.6. Exercises
    • 7.3. The Heat Equation
    • 7.3.1. The Heat Equation (in One-Space Dim)
    • 7.3.2. The IBVP: Dirichlet Conditions
    • 7.3.3. The IBVP: Neumann Conditions
    • 7.3.4. Exercises
  • Chapter 8. Review of Linear Algebra
    • 8.1. Linear Algebraic Systems
    • 8.1.1. Systems of Linear Equations
    • 8.1.2. Gauss Elimination Operations
    • 8.1.3. Linear Dependence
    • 8.1.4. Exercises
    • 8.2. Matrix Algebra
    • 8.2.1. A Matrix is a Function
    • 8.2.2. Matrix Operations
    • 8.2.3. The Inverse Matrix
    • 8.2.4. Computing the Inverse Matrix
    • 8.2.5. Overview of Determinants
    • 8.2.6. Exercises
    • 8.3. Eigenvalues and Eigenvectors
    • 8.3.1. Eigenvalues and Eigenvectors
    • 8.3.2. Diagonalizable Matrices
    • 8.3.3. Exercises
    • 8.4. The Matrix Exponential
    • 8.4.1. The Exponential Function
    • 8.4.2. Diagonalizable Matrices Formula
    • 8.4.3. Properties of the Exponential
    • 8.4.4. Exercises
  • Chapter 9. Appendices
    • A. Overview of Complex Numbers
    • A.1. Extending the Real Numbers
    • A.2. The Imaginary Unit
    • A.3. Standard Notation
    • A.4. Useful Formulas
    • A.5. Complex Functions
    • A.6. Complex Vectors
    • B. Overview of Power Series
    • C. Discrete and Continuum Equations
    • C.1. The Difference Equation
    • C.2. Solving the Difference Equation
    • C.3. The Differential Equation
    • C.4. Solving the Differential Equation
    • C.5. Summary and Consistency
    • C.6. Exercises
    • D. Review Exercises
    • E. Practice Exams
    • F. Answers to exercises
  • Bibliography

CHAPTER 1

First Order Equations

We start our study of differential equations in the same way the pioneers in this field did. We show particular techniques to solve particular types of first order differential equations. The techniques were developed in the eighteenth and nineteenth centuries, and the equations include linear equations, separable equations, Euler homogeneous equations, and exact equations. This way of studying differential equations soon reached a dead end: most differential equations cannot be solved by any of the techniques presented in the first sections of this chapter. People then tried something different. Instead of solving the equations, they tried to show whether an equation has solutions or not, and what properties such solutions may have. This is less information than obtaining the solution, but it is still valuable information. The results of these efforts are shown in the last sections of this chapter, where we present theorems describing the existence and uniqueness of solutions to a wide class of first order differential equations.

[Figure: direction field of the equation y′ = 2 cos(t) cos(y), drawn in the (t, y) plane with markings at ±π/2 on the y axis.]

1.1. Linear Constant Coefficient Equations

… space variables, and the heat equation in example (c) is first order in time and second order in space variables.

1.1.2. Linear Differential Equations. We start with a precise definition of a first order ordinary differential equation. Then we introduce a particular type of first order equations—linear equations.

Definition 1.1.1. A first order ODE on the unknown y is

y′(t) = f(t, y(t)), (1.1.1)

where f is given and y′ = dy/dt. The equation is linear iff the source function f is linear in its second argument,

y′ = a(t) y + b(t). (1.1.2)

The linear equation has constant coefficients iff both a and b above are constants. Otherwise the equation has variable coefficients.

There are different sign conventions for Eq. (1.1.2) in the literature. For example, Boyce-DiPrima [3] writes it as y′ = −a y + b. The sign choice in front of the function a is a matter of taste. Some people prefer the negative sign, because later on, when they write the equation as y′ + a y = b, they get a plus sign on the left-hand side. In any case, we stick here to the convention y′ = a y + b.

Example 1.1.2.
(a) An example of a first order linear ODE is the equation

y′ = 2y + 3.

On the right-hand side we have the function f(t, y) = 2y + 3, where we can see that a(t) = 2 and b(t) = 3. Since these coefficients do not depend on t, this is a constant coefficient equation.
(b) Another example of a first order linear ODE is the equation

y′ = −(2/t) y + 4t.

In this case, the right-hand side is given by the function f(t, y) = −2y/t + 4t, where a(t) = −2/t and b(t) = 4t. Since the coefficients are nonconstant functions of t, this is a variable coefficients equation.
(c) The equation y′ = −(2/t) y² + 4t is nonlinear, since the right-hand side is not linear in y.

C
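Linearity of f in its second argument can also be checked numerically. The following Python sketch is our own illustration, not part of the text; the helper is_affine_in_y is a hypothetical name. It uses the fact that f(t, y) = a(t) y + b(t) forces f(t, y) = [f(t, 1) − f(t, 0)] y + f(t, 0) for every y, a test that a right-hand side containing y² fails.

```python
import math

# Hypothetical helper (an illustration, not from the text): test whether
# f(t, y) is affine in y, i.e. f(t, y) = a(t)*y + b(t), by checking that
# f agrees with the line through (0, f(t,0)) and (1, f(t,1)) at other y.
def is_affine_in_y(f, t=1.0, ys=(1.0, 2.0, 5.0)):
    b = f(t, 0.0)          # b(t) = f(t, 0)
    a = f(t, 1.0) - b      # a(t) = f(t, 1) - b(t), if f is affine in y
    return all(math.isclose(f(t, y), a * y + b, rel_tol=1e-12) for y in ys)

f_a = lambda t, y: 2 * y + 3                # (a): linear, constant coefficients
f_b = lambda t, y: -2 * y / t + 4 * t       # (b): linear, variable coefficients
f_c = lambda t, y: -2 * y**2 / t + 4 * t    # (c): nonlinear in y

print(is_affine_in_y(f_a), is_affine_in_y(f_b), is_affine_in_y(f_c))
```

The quadratic term in f_c makes the affine test fail, matching the classification in the example.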

We denote by y : D ⊂ R → R a real-valued function y defined on a domain D. Such a function is a solution of the differential equation (1.1.1) iff the equation is satisfied for all values of the independent variable t on the domain D.

Example 1.1.3. Show that y(t) = e^{2t} − 3/2 is a solution of the equation

y′ = 2y + 3.


Solution: We need to compute the left and right-hand sides of the equation and verify they agree. On the one hand we compute y′(t) = 2e^{2t}. On the other hand we compute

2 y(t) + 3 = 2 (e^{2t} − 3/2) + 3 = 2e^{2t}.

We conclude that y′(t) = 2 y(t) + 3 for all t ∈ R. C

Example 1.1.4. Find the differential equation y′ = f(y) satisfied by y(t) = 4 e^{2t} + 3.

Solution: We compute the derivative of y,

y′ = 8 e^{2t}.

We now write the right-hand side above in terms of the original function y, that is,

y = 4 e^{2t} + 3 ⇒ y − 3 = 4 e^{2t} ⇒ 2(y − 3) = 8 e^{2t}.

So we got a differential equation satisfied by y, namely y′ = 2y − 6. C
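As a quick numerical sanity check (our own addition, not in the text), we can verify that y(t) = 4 e^{2t} + 3 satisfies y′ = 2y − 6 by comparing a central finite difference of y with the right-hand side at several points:

```python
import math

# Verify y(t) = 4 e^{2t} + 3 satisfies y' = 2y - 6 at sample points,
# approximating y'(t) by a central finite difference.
def y(t):
    return 4 * math.exp(2 * t) + 3

h = 1e-6
for t in (-0.5, 0.0, 1.0):
    dy = (y(t + h) - y(t - h)) / (2 * h)   # approximates y'(t)
    assert math.isclose(dy, 2 * y(t) - 6, rel_tol=1e-6), t
print("y' = 2y - 6 verified at all sample points")
```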

1.1.3. Solving Linear Differential Equations. Linear equations with constant coefficients are simpler to solve than variable coefficient ones. But integrating each side of the equation does not work. For example, take the equation y′ = 2y + 3 and integrate with respect to t on both sides,

∫ y′(t) dt = 2 ∫ y(t) dt + 3t + c, c ∈ R.

The Fundamental Theorem of Calculus implies y(t) = ∫ y′(t) dt, so we get

y(t) = 2 ∫ y(t) dt + 3t + c.

Integrating both sides of the differential equation is not enough to find a solution y. We still need to find a primitive of y. We have only rewritten the original differential equation as an integral equation. Simply integrating both sides of a linear equation does not solve the equation. We now state a precise formula for the solutions of constant coefficient linear equations. The proof relies on a new idea—a clever use of the chain rule for derivatives.

Theorem 1.1.2 (Constant Coefficients). The linear differential equation

y′ = a y + b (1.1.3)

with a ≠ 0, b constants, has infinitely many solutions,

y(t) = c e^{at} − b/a, c ∈ R. (1.1.4)

Remarks:
(a) Equation (1.1.4) is called the general solution of the differential equation in (1.1.3).

(b) Theorem 1.1.2 says that Eq. (1.1.3) has infinitely many solutions, one solution for each value of the constant c, which is not determined by the equation.
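The claim of Theorem 1.1.2 can be tested numerically for particular values of a, b, and c. The sketch below is our own illustration, with arbitrarily chosen constants: it checks that y(t) = c e^{at} − b/a satisfies y′ = a y + b for several choices of c, approximating y′ by a central finite difference.

```python
import math

# Check that y(t) = c e^{at} - b/a solves y' = a y + b for several c.
a, b = 2.0, 3.0

def y(t, c):
    return c * math.exp(a * t) - b / a

h = 1e-6
for c in (-1.0, 0.0, 2.5):
    for t in (0.0, 1.0):
        dy = (y(t + h, c) - y(t - h, c)) / (2 * h)   # approximates y'(t)
        assert math.isclose(dy, a * y(t, c) + b, abs_tol=1e-4), (c, t)
print("general solution verified for several values of c")
```

Note that c = 0 gives the constant solution y = −b/a, for which both sides of the equation vanish.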


We now compute exponentials on both sides, to get

ỹ(t) = ±e^{2t+c₀} = ±e^{2t} e^{c₀}; denote c = ±e^{c₀}, then ỹ(t) = c e^{2t}, c ∈ R.

Since ỹ = y + 3/2, we get y(t) = c e^{2t} − 3/2, where c ∈ R. C

Remark: We converted the original differential equation y′ = 2y + 3 into a total derivative of a potential function, ψ′ = 0. The potential function can be computed from the step

(ln|ỹ|)′ = 2 ⇒ (ln|ỹ| − 2t)′ = 0,

so a potential function is ψ(t, y(t)) = ln|y(t) + 3/2| − 2t. Since the equation is now ψ′ = 0, all solutions are ψ = c₀, with c₀ ∈ R. That is,

ln|y(t) + 3/2| − 2t = c₀ ⇒ ln|y(t) + 3/2| = 2t + c₀ ⇒ y(t) + 3/2 = ±e^{2t+c₀}.

If we denote c = ±e^{c₀}, then we get the solution we found above, y(t) = c e^{2t} − 3/2.

1.1.4. The Integrating Factor Method. The argument we used to prove Theorem 1.1.2 cannot be generalized in a simple way to all linear equations with variable coefficients. However, there is a way to solve linear equations with both constant and variable coefficients—the integrating factor method. Now we give a second proof of Theorem 1.1.2 using this method.

Second Proof of Theorem 1.1.2: Write the equation with y on one side only,

y′ − a y = b,

and then multiply the differential equation by a function μ, called an integrating factor,

μ y′ − a μ y = μ b. (1.1.5)

Now comes the critical step. We choose a positive function μ such that

−a μ = μ′. (1.1.6)

For any function μ solution of Eq. (1.1.6), the differential equation in (1.1.5) has the form

μ y′ + μ′ y = μ b.

But the left-hand side is a total derivative of a product of two functions,

(μ y)′ = μ b. (1.1.7)

This is the property we want in an integrating factor, μ. We want to find a function μ such that the left-hand side of the differential equation for y can be written as a total derivative, just as in Eq. (1.1.7). We only need to find one such function μ. So we go back to Eq. (1.1.6), the differential equation for μ, which is simple to solve,

μ′ = −a μ ⇒ μ′/μ = −a ⇒ (ln|μ|)′ = −a ⇒ ln|μ| = −at + c₀.

Computing the exponential of both sides in the equation above we get

μ = ±e^{c₀−at} = ±e^{c₀} e^{−at} ⇒ μ = c₁ e^{−at}, c₁ = ±e^{c₀}.

Since c₁ is a constant which will cancel out from Eq. (1.1.5) anyway, we choose the integration constant c₀ = 0, hence c₁ = 1. The integrating factor is then

μ(t) = e^{−at}.


This function is an integrating factor, because if we start again at Eq. (1.1.5), we get

e^{−at} y′ − a e^{−at} y = b e^{−at} ⇒ e^{−at} y′ + (e^{−at})′ y = b e^{−at},

where we used the main property of the integrating factor, −a e^{−at} = (e^{−at})′. Now the product rule for derivatives implies that the left-hand side above is a total derivative,

(e^{−at} y)′ = b e^{−at}.

The right-hand side above can also be rewritten as a derivative, b e^{−at} = (−(b/a) e^{−at})′, hence

(e^{−at} y + (b/a) e^{−at})′ = 0 ⇔ [(y + b/a) e^{−at}]′ = 0.

We have succeeded in writing the whole differential equation as a total derivative. The differential equation is the total derivative of a potential function, which in this case is

ψ(t, y) = (y + b/a) e^{−at}.

Notice that this potential function is the exponential of the potential function found in the first proof of this Theorem. The differential equation for y is a total derivative,

dψ/dt (t, y(t)) = 0,

so it is simple to integrate,

ψ(t, y(t)) = c ⇒ (y(t) + b/a) e^{−at} = c ⇒ y(t) = c e^{at} − b/a.

This establishes the Theorem. □

We solve the example below following the second proof of Theorem 1.1.2.

Example 1.1.6. Find all solutions to the constant coefficient equation

y′ = 2y + 3. (1.1.8)

Solution: Write the equation in (1.1.8) as follows,

y′ − 2y = 3.

Multiply this equation by the integrating factor μ(t) = e^{−2t},

e^{−2t} y′ − 2 e^{−2t} y = 3 e^{−2t} ⇔ e^{−2t} y′ + (e^{−2t})′ y = 3 e^{−2t}.

The left-hand side is now a total derivative, (e^{−2t} y)′ = 3 e^{−2t}. Integrating on both sides, e^{−2t} y = −(3/2) e^{−2t} + c, so all solutions are

y(t) = c e^{2t} − 3/2, c ∈ R. C
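As a sanity check of this computation (our own addition, with an arbitrarily chosen constant c), one can verify that along any solution y(t) = c e^{2t} − 3/2 the product e^{−2t} y(t) equals the antiderivative −(3/2) e^{−2t} plus the constant c:

```python
import math

# For y(t) = c e^{2t} - 3/2, check e^{-2t} y(t) = -(3/2) e^{-2t} + c,
# i.e. the integrated form produced by the integrating factor e^{-2t}.
c = 4.0   # arbitrary choice of the integration constant

def y(t):
    return c * math.exp(2 * t) - 1.5

for t in (0.0, 0.7, -1.3):
    lhs = math.exp(-2 * t) * y(t)
    rhs = -1.5 * math.exp(-2 * t) + c
    assert math.isclose(lhs, rhs, rel_tol=1e-12), t
print("e^{-2t} y(t) = -(3/2) e^{-2t} + c checked")
```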

We now solve the same problem, this time using the formulas in Theorem 1.1.2.

Example 1.1.7. Find all solutions to the constant coefficient equation

y′ = 2y + 3. (1.1.9)

Solution: The equation above is the case a = 2 and b = 3 in Eq. (1.1.3). Therefore, using these values in the expression for the solution given in Eq. (1.1.4), we obtain

y(t) = c e^{2t} − 3/2, c ∈ R. C


Remark: In the case t₀ = 0 the initial condition is y(0) = y₀ and the solution is

y(t) = (y₀ + b/a) e^{at} − b/a.

The proof of Theorem 1.1.4 is just to write the general solution of the differential equation, given in Theorem 1.1.2, and fix the integration constant c with the initial condition.

Proof of Theorem 1.1.4: The general solution of the differential equation in (1.1.10) is given in Eq. (1.1.4) for any choice of the integration constant c,

y(t) = c e^{at} − b/a.

The initial condition determines the value of the constant c, as follows:

y₀ = y(t₀) = c e^{at₀} − b/a ⇔ c = (y₀ + b/a) e^{−at₀}.

Introducing this expression for the constant c into the general solution above, we get

y(t) = (y₀ + b/a) e^{a(t−t₀)} − b/a.

This establishes the Theorem. □
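The solution formula just derived can be verified numerically. The following sketch is our own illustration, with arbitrarily chosen a, b, t₀, y₀; it checks both the initial condition y(t₀) = y₀ and the differential equation y′ = a y + b.

```python
import math

# Check y(t) = (y0 + b/a) e^{a (t - t0)} - b/a against both the
# initial condition and the differential equation y' = a y + b.
a, b, t0, y0 = -3.0, 1.0, 0.5, 2.0   # arbitrary sample values

def y(t):
    return (y0 + b / a) * math.exp(a * (t - t0)) - b / a

assert math.isclose(y(t0), y0)        # initial condition holds exactly

h = 1e-6
for t in (0.0, 0.5, 1.5):
    dy = (y(t + h) - y(t - h)) / (2 * h)   # approximates y'(t)
    assert math.isclose(dy, a * y(t) + b, abs_tol=1e-5), t
print("IVP solution formula verified")
```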

Example 1.1.8. Find the unique solution of the initial value problem

y′ = 2y + 3, y(0) = 1. (1.1.13)

Solution: All solutions of the differential equation are given by

y(t) = c e^{2t} − 3/2,

where c is an arbitrary constant. The initial condition in Eq. (1.1.13) determines c,

1 = y(0) = c − 3/2 ⇒ c = 5/2.

Then the unique solution to the initial value problem above is

y(t) = (5/2) e^{2t} − 3/2. C

Example 1.1.9. Find the solution y to the initial value problem

y′ = −3y + 1, y(0) = 1.

Solution: Write the differential equation as y′ + 3y = 1, and multiply the equation by the integrating factor μ = e^{3t}, which converts the left-hand side into a total derivative,

e^{3t} y′ + 3 e^{3t} y = e^{3t} ⇔ e^{3t} y′ + (e^{3t})′ y = e^{3t}.

This is the key idea, because the derivative of a product implies

(e^{3t} y)′ = e^{3t}.

The exponential e^{3t} is an integrating factor. Integrate on both sides of the equation,

e^{3t} y = (1/3) e^{3t} + c.

So every solution of the differential equation above is given by

y(t) = c e^{−3t} + 1/3, c ∈ R.

The initial condition y(0) = 1 selects only one solution,

1 = y(0) = c + 1/3 ⇒ c = 2/3.

We get the solution y(t) = (2/3) e^{−3t} + 1/3. C
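An independent cross-check (our own addition, not part of the text) is to integrate the initial value problem with the forward Euler method and compare the result at t = 1 against the closed form y(t) = (2/3) e^{−3t} + 1/3 found above:

```python
import math

# Forward Euler integration of y' = f(t, y) from t0 to t1 in n steps.
def euler(f, t0, y0, t1, n):
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

f = lambda t, y: -3 * y + 1                 # right-hand side of the ODE
approx = euler(f, 0.0, 1.0, 1.0, 100_000)   # y(1), numerically
exact = (2 / 3) * math.exp(-3.0) + 1 / 3    # y(1) from the closed form

assert abs(approx - exact) < 1e-4
print("Euler and closed form agree at t = 1")
```

The first-order Euler error shrinks proportionally to the step size, so with 100,000 steps the two values agree to roughly four decimal places.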

Notes. This section corresponds to Boyce-DiPrima [3] Section 2.1, where both constant and variable coefficient equations are studied. Zill and Wright give a more concise exposition in [17] Section 2.3, and a one-page description is given by Simmons in [10], Section 2.10. The integrating factor method is shown in most of these books, but unlike them, here we emphasize that the integrating factor changes the linear differential equation into a total derivative, which is trivial to integrate. We also show here how to compute the potential functions for linear differential equations. In § 1.4 we solve (nonlinear) exact equations, and nonexact equations with integrating factors. We solve these equations by transforming them into a total derivative, just as we did in this section with the linear equations.