Mathematics for Scientists and Engineers I
Calculus.
PH1110/EE1110
Dr Stephen West
Department of Physics,
Royal Holloway, University of London,
Egham, Surrey,
TW20 0EX.


Contents

  • 1 Introduction.
  • 2 Functions
    • 2.1 Definition of a Function
    • 2.2 Logarithms - a quick revision
    • 2.3 Trigonometric Functions - a quick revision and a little more
    • 2.4 Power Series
      • 2.4.1 Expanding Functions in Power Series - Taylor Series
      • 2.4.2 Approximation Errors in Taylor Series
  • 3 Differential calculus
    • 3.1 Continuity of a function
    • 3.2 Differentiability and Differentiation.
    • 3.3 Rules of Differentiation
      • 3.3.1 Product Function
      • 3.3.2 Composite Function or Function of a Function or Implicit Differentiation
      • 3.3.3 Quotient Rule
      • 3.3.4 Logarithmic Differentiation
      • 3.3.5 Leibniz’s Theorem
    • 3.4 Special Points of a Function
      • 3.4.1 Singular Points
    • 3.5 Inverse Functions
      • 3.5.1 Inverse Trigonometric Functions
      • 3.5.2 Inverse Function Rule
    • 3.6 Partial Differentiation.
  • 4 A note on coordinate systems.
    • 4.1 Circular Polar Coordinates
    • 4.2 Cylindrical Polars
    • 4.3 Spherical Polars
  • 5 Integration
    • 5.1 Integration From First Principles
    • 5.2 Integration of Arbitrary Continuous Function
      • 5.2.1 Properties of Definite Integrals
    • 5.3 Integration as the inverse of Differentiation
    • 5.4 Integration By Inspection
    • 5.5 Integration by Substitution
      • 5.5.1 Indefinite Integrals
      • 5.5.2 Solving Differential Equations with Integration
      • 5.5.3 Definite Integrals
    • 5.6 Integration by Parts
    • 5.7 Differentiation of an Integral Containing a Parameter
    • 5.8 An Aside on Hyperbolic Functions
      • 5.8.1 Identities of Hyperbolic Functions
      • 5.8.2 Inverses of Hyperbolic Functions
      • 5.8.3 Calculus of Hyperbolic Functions
    • 5.9 Hyperbolic Functions In Integration by Substitution
  • 6 Differential Equations
    • 6.1 Solving ODEs
    • 6.2 First order ODE
      • 6.2.1 Separable first order ODE
      • 6.2.2 Almost Separable first order ODEs
      • 6.2.3 Homogeneous first order ODEs
      • 6.2.4 Homogeneous apart from a constant
      • 6.2.5 Looks homogeneous but actually separable with a change of variable.
      • 6.2.6 Common derivative method
      • 6.2.7 Integrating Factor
      • 6.2.8 Bernoulli equation
  • 7 Second Order Differential Equations
    • 7.1 Homogeneous second order ODE with constant coefficients
    • 7.2 Finding Solutions to 2nd order linear ODEs
      • 7.2.1 Real Roots
      • 7.2.2 Complex Roots
      • 7.2.3 Equal roots and the method of reduction of order
    • 7.3 Non-homogeneous linear ODEs
      • 7.3.1 The method of undetermined coefficients
      • 7.3.2 Particular solution when non-homogeneous term is a sum
      • 7.3.3 Use of complex exponentials in solutions.
  • 8 Applications of second order ODE
    • 8.1 The driven simple harmonic oscillator
      • 8.1.1 Unforced Oscillations
  • 9 Limits and the evaluation of Indeterminate Forms
    • 9.1 L’Hôpital’s rule
  • 10 Series
    • 10.1 Arithmetic Series
    • 10.2 Geometric Series
    • 10.3 Arithmetico-Geometric Series
    • 10.4 A last example of a series
    • 10.5 Convergent and Divergent Series
      • 10.5.1 Testing a Series For Convergence: The Preliminary Test.
      • 10.5.2 Tests for Convergence of Series of Positive Terms: Absolute Convergence
      • 10.5.3 Alternating Series
  • 11 Summary

  • A The Greek Alphabet

  • It must work for every possible input value.
  • And it has only one relationship for each input value.

When writing a function there are three important parts: the argument, the relationship and the output of the function. Let’s look at an example.

f (x) = x^2 , (2.1)

where f is the name of the function labelling the output, x is the argument and x^2 is the relationship, that is, what the function does to the input. The action of the function is straightforward. For example, if we have an argument x = 2, then the output is f(2) = 4. Note: f is a common name for a function, but we could have called it anything at all; for example, we could have called it g or h or helicopter. It does not matter, it is just a name. Equally, the argument does not have to be x; we could have chosen b, q or telephone. The argument is just a placeholder. For example,

f (b) = b^2 , (2.2)

is exactly the same function as the one in Equation 2.1. As above, if the argument b = 2, then the output is again f(2) = 4. The argument just shows us where the input goes. Sometimes functions have no name, for example,

y = x^2 , (2.3)

but the function still has an output, and in this case it is y. There are a large number of special types of function possessing very particular properties. For example, the symmetry of a function can play a crucial role in its operation: a function may be even, odd or neither. An even function is one, like x^2 or cos x, whose graph for negative x is just a reflection in the y-axis of its graph for positive x. Mathematically we can write this as

A function f(x) is even if f(−x) = f(x). (2.4)

An odd function is one like x or sin x, where the values of f (x) and f (−x) are negatives of each other. By definition

A function f(x) is odd if f(−x) = −f(x). (2.5)

We will come back to even and odd functions later, in particular when we look at integrals over these functions.
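As a quick numerical sanity check of these symmetry definitions, here is a minimal Python sketch (the sample points and functions are arbitrary choices, not part of the notes):

```python
import math

def is_even(f, xs):
    """Check f(-x) == f(x) at every sample point in xs."""
    return all(math.isclose(f(-x), f(x), abs_tol=1e-12) for x in xs)

def is_odd(f, xs):
    """Check f(-x) == -f(x) at every sample point in xs."""
    return all(math.isclose(f(-x), -f(x), abs_tol=1e-12) for x in xs)

xs = [0.1, 0.5, 1.3, 2.7]
print(is_even(lambda x: x**2, xs))   # x^2 is even
print(is_odd(math.sin, xs))          # sin x is odd
print(is_even(lambda x: x + 1, xs))  # x + 1 is neither even...
print(is_odd(lambda x: x + 1, xs))   # ...nor odd
```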

2.2 Logarithms - a quick revision

You will have already come across Logarithms at school, but a brief review is given here to refresh your memory. Let’s start with the following

x = bp, (2.6)

where we say that p is the Logarithm of x to the base b. We write this as

p = logb x, (2.7)

where the base, b, can be any positive number (more formally b ∈ R, b > 0, with b ≠ 1). Two common examples are b = 10 and b = e, where e is a special constant with approximate value e = 2.71828 to 5 decimal places. This constant plays a crucial role in Physics, and so when it is used as the base in a Logarithm it gets its own special name: the Log to the base e is referred to as the natural Logarithm and is often denoted by

p = ln x. (2.8)

It should be clear that logb 0 = −∞, or more properly that logb 0 is not defined: if x = 0 then 0 = b^p can only be satisfied as p → −∞. The real Logarithm function is only defined for x > 0. By definition

x = b^(logb x) and conversely x = logb(b^x), (2.9)

so in the case of the natural Logarithm we have

x = e^(ln x) and x = ln(e^x). (2.10)

Another common base is 10, and you will often see a Log to the base 10 written without the base explicitly stated: log10 x ≡ log x. It is always important to make sure it is clear which base is being used. There are a number of important rules associated with the manipulation of Logarithms. The Product Rule:

logb(xy) = logb x + logb y. (2.11)

Proof: Let x = b^n and y = b^m, so that n = logb x and m = logb y. Consider the product of x and y:

xy = b^n b^m = b^(n+m) = b^(logb x + logb y). (2.12)

But we also know that by definition

xy = b^(logb xy). (2.13)

Comparing the exponents of Equations 2.12 and 2.13 gives logb(xy) = logb x + logb y, as required.
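The Product Rule and the definition of the Logarithm can be spot-checked numerically; a minimal Python sketch, with arbitrary values for x, y and the base b:

```python
import math

b, x, y = 10.0, 3.7, 42.0

# Product rule: log_b(xy) = log_b(x) + log_b(y)
lhs = math.log(x * y, b)
rhs = math.log(x, b) + math.log(y, b)
print(math.isclose(lhs, rhs))  # True

# Definition check (Equation 2.9): x = b**log_b(x)
print(math.isclose(x, b ** math.log(x, b)))  # True
```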

Proof: By definition we can write

x = a^(loga x) = b^(logb x). (2.22)

But we can also write,

b = a^(loga b). (2.23)

Using this in Equation 2.22 we arrive at

a^(loga x) = (a^(loga b))^(logb x) = a^(loga b · logb x). (2.24)

Comparing exponents we find

loga x = loga b logb x. (2.25)

Example: Show that

logb x + loga x = (1 + loga b) logb x. (2.26)

Solution: Use the change of base rule to convert loga x = loga b logb x:

logb x + loga x = logb x + loga b logb x (2.27)
               = (1 + loga b) logb x.
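The change of base rule and the worked example above can likewise be verified numerically (the values of a, b and x here are arbitrary):

```python
import math

a, b, x = math.e, 10.0, 7.3

# Change of base (Equation 2.25): log_a(x) = log_a(b) * log_b(x)
print(math.isclose(math.log(x, a), math.log(b, a) * math.log(x, b)))  # True

# Worked example (Equation 2.26): log_b(x) + log_a(x) = (1 + log_a(b)) log_b(x)
lhs = math.log(x, b) + math.log(x, a)
rhs = (1 + math.log(b, a)) * math.log(x, b)
print(math.isclose(lhs, rhs))  # True
```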

2.3 Trigonometric Functions - a quick revision and a little more

A very common set of functions that we will see in this course are the trigonometric functions; for example, you will be familiar with sin, cos and tan. Most relationships between these functions can be derived from two identities (which we will prove later in the course):

cos[x + y] = cos[x] cos[y] − sin[x] sin[y], (2.28)
sin[x + y] = cos[x] sin[y] + cos[y] sin[x]. (2.29)

For example, if we set x = −y in Equation 2.28 we get

cos(0) = 1 = cos^2 (x) + sin^2 (x). (2.30)

Alternatively if we set x = y in Equation 2.28 we find

cos(2x) = cos^2 (x) − sin^2 (x). (2.31)

We also have a set of functions that are the reciprocal of the familiar trigonometric functions. These are

cosec(x) = 1/sin(x); sec(x) = 1/cos(x); cot(x) = 1/tan(x). (2.32)

As an example of a further identity we can start with Equation 2.30 and divide both sides by cos^2(x):

1/cos^2(x) = sec^2(x) = cos^2(x)/cos^2(x) + sin^2(x)/cos^2(x),

so that

sec^2(x) = 1 + tan^2(x). (2.33)
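A short numerical check of identities 2.30, 2.31 and 2.33 (the sample angles are arbitrary):

```python
import math

for x in [0.3, 1.1, 2.5]:
    # Equation 2.30: cos^2(x) + sin^2(x) = 1
    assert math.isclose(math.cos(x)**2 + math.sin(x)**2, 1.0)
    # Equation 2.31: cos(2x) = cos^2(x) - sin^2(x)
    assert math.isclose(math.cos(2*x), math.cos(x)**2 - math.sin(x)**2)
    # Equation 2.33: sec^2(x) = 1 + tan^2(x)
    assert math.isclose(1 / math.cos(x)**2, 1 + math.tan(x)**2)
print("all identities hold")
```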

Many more identities can be derived in related ways; see the list in section 7.1 of the Mathematics Formula Booklet (online version here). A further important property of cos x and sin x is that they can be expanded in terms of a polynomial, literally meaning “many terms”. In our case these polynomials are a sum of terms depending on the variable x to increasingly higher powers. Specifically,

sin x = x − x^3/3! + x^5/5! − x^7/7! + ... = Σ_{n=0}^{∞} (−1)^n x^(2n+1)/(2n+1)!, (2.34)

cos x = 1 − x^2/2! + x^4/4! − x^6/6! + ... = Σ_{n=0}^{∞} (−1)^n x^(2n)/(2n)!, (2.35)

where the symbol ! means factorial, for example 4! = 4 · 3 · 2 · 1, and 0! = 1 by definition. These power series expansions come from Taylor Series.
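These two series can be tested by comparing partial sums against the built-in sin and cos; a Python sketch (10 terms is an arbitrary but ample cutoff for moderate x):

```python
import math

def sin_series(x, n_terms=10):
    """Partial sum of sin x = sum over n of (-1)^n x^(2n+1)/(2n+1)!."""
    return sum((-1)**n * x**(2*n + 1) / math.factorial(2*n + 1)
               for n in range(n_terms))

def cos_series(x, n_terms=10):
    """Partial sum of cos x = sum over n of (-1)^n x^(2n)/(2n)!."""
    return sum((-1)**n * x**(2*n) / math.factorial(2*n)
               for n in range(n_terms))

x = 1.2
print(sin_series(x), math.sin(x))
print(cos_series(x), math.cos(x))
```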

2.4 Power Series

By definition a power series has the form

Σ_{n=0}^{∞} a_n x^n = a_0 + a_1 x + a_2 x^2 + ... + a_n x^n + ... (2.36)

or

Σ_{n=0}^{∞} a_n (x − a)^n = a_0 + a_1(x − a) + a_2(x − a)^2 + ... + a_n(x − a)^n + .... (2.37)

The value of x will determine whether the series converges, and we are often tasked with finding the values of x for which a series converges. For example, the power series

P(x) = 1 − x/2 + x^2/4 − x^3/8 + ... + (−1)^n x^n/2^n + ...

can be tested for convergence by considering the absolute ratio of successive terms,

ρ_n = |(−x/2)^(n+1) / (−x/2)^n| = |x/2|,
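Since the nth term here is (−x/2)^n, P(x) is in fact a geometric series (covered later in Section 10.2), so for |x| < 2, where the ratio |x/2| < 1, its partial sums settle to 1/(1 + x/2). A numerical illustration at the arbitrary point x = 1:

```python
def P_partial(x, n_terms):
    """Partial sum of P(x) = sum over n of (-x/2)^n."""
    return sum((-x / 2)**n for n in range(n_terms))

x = 1.0                 # inside |x| < 2, so the ratio |x/2| = 0.5 < 1
limit = 1 / (1 + x / 2)  # geometric-series limit 1/(1 - rho) with rho = -x/2
for n in (5, 10, 40):
    print(n, P_partial(x, n))
print("limit:", limit)
```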

To see this we can expand a general function of x as a Taylor Series. Consider the function f (x) and expand it around the point x = a. We have then

f(x) = a_0 + a_1(x − a) + a_2(x − a)^2 + a_3(x − a)^3 + a_4(x − a)^4 + ... + a_n(x − a)^n + ... (2.42)
f′(x) = a_1 + 2a_2(x − a) + 3a_3(x − a)^2 + 4a_4(x − a)^3 + ... + n a_n(x − a)^(n−1) + ... (2.43)
f′′(x) = 2a_2 + 3 · 2 a_3(x − a) + 4 · 3 a_4(x − a)^2 + ... + n(n − 1) a_n(x − a)^(n−2) + ... (2.44)
f′′′(x) = 3! a_3 + 4 · 3 · 2 a_4(x − a) + ... + n(n − 1)(n − 2) a_n(x − a)^(n−3) + ... (2.45)
... (2.46)
f^(n)(x) = n(n − 1)(n − 2) ... 2 · 1 a_n + terms containing powers of (x − a). (2.47)

In each of the above equations we put x = a and find

f(a) = a_0, f′(a) = a_1, f′′(a) = 2a_2, (2.48)
f′′′(a) = 3! a_3, ..., f^(n)(a) = n! a_n. (2.49)

We can now write the full Taylor series for f (x) about x = a:

f(x) = f(a) + (x − a)f′(a) + (1/2!)(x − a)^2 f′′(a) + ... + (1/n!)(x − a)^n f^(n)(a) + .... (2.50)

The Maclaurin series for f(x) is the Taylor series about the origin. Putting a = 0 in Equation 2.50 we obtain the Maclaurin series (or Taylor Series expansion about x = 0) for f(x):

f(x) = f(0) + x f′(0) + (1/2!) x^2 f′′(0) + ... + (1/n!) x^n f^(n)(0) + .... (2.51)

Note: the functions are differentiated first, and then the value of x around which they are being expanded is inserted. We have already seen some important Maclaurin series, but they are collected here and should be memorised. You are also expected to be able to derive them all.

sin x = x − x^3/3! + x^5/5! − x^7/7! + ..., convergent for all x; (2.53)
cos x = 1 − x^2/2! + x^4/4! − x^6/6! + ..., all x; (2.54)
e^x = 1 + x + x^2/2! + x^3/3! + x^4/4! + ..., all x; (2.55)
ln(1 + x) = x − x^2/2 + x^3/3 − x^4/4 + ..., −1 < x ≤ 1; (2.56)
(1 + x)^p = 1 + px + p(p − 1) x^2/2! + p(p − 1)(p − 2) x^3/3! + ..., |x| < 1. (2.57)

The last of these series is called a “Binomial Series”.
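Partial sums of these standard series can be checked against the library functions; a sketch for e^x and ln(1 + x) (the term cutoffs are arbitrary):

```python
import math

def exp_series(x, n_terms=20):
    """Partial sum of e^x = sum over n of x^n / n!."""
    return sum(x**n / math.factorial(n) for n in range(n_terms))

def log1p_series(x, n_terms=200):
    """Partial sum of ln(1+x) = x - x^2/2 + x^3/3 - ..., valid for -1 < x <= 1."""
    return sum((-1)**(n + 1) * x**n / n for n in range(1, n_terms))

print(exp_series(1.0), math.e)            # e^1 vs math.e
print(log1p_series(0.5), math.log(1.5))   # ln(1.5) two ways
```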

Let’s look at an example. Expand f (x) = cos bx around x = 0, where b is a constant. Include only the first 3 terms. We use the Taylor Series expansion around x = 0 in equation 2.51, that is

f(x) ≈ f(0) + x f′(0) + (1/2!) x^2 f′′(0).

Notice the approximation sign: this is because we are only including the first three terms of what is an infinite series, and so our power series expansion is only an approximation of the full function. We can calculate the individual terms. The first term is f(0) = cos(bx)|_{x=0} = 1. The coefficient of x is f′(0) = d cos(bx)/dx |_{x=0} = −b sin(bx)|_{x=0} = 0, and finally the coefficient of x^2 is

(1/2) f′′(0) = (1/2) d^2 cos(bx)/dx^2 |_{x=0} = −(b^2/2) cos(bx)|_{x=0} = −b^2/2.

Putting all this together we get cos(bx) ≈ 1 − (1/2)(bx)^2. If we had wanted the first three non-zero terms we would have to go to the x^4 term.
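We can confirm numerically that this three-term expansion tracks cos(bx) near x = 0; a sketch with the arbitrary choice b = 3:

```python
import math

b = 3.0  # arbitrary constant

def cos_approx(x):
    """Three-term Taylor expansion of cos(bx) about x = 0."""
    return 1 - 0.5 * (b * x)**2

# The error shrinks rapidly as x -> 0 (the next series term is of order x^4).
for x in (0.01, 0.05, 0.1):
    exact = math.cos(b * x)
    print(x, exact, cos_approx(x), abs(exact - cos_approx(x)))
```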

2.4.2 Approximation Errors in Taylor Series

See Riley Sections 4.6.1 and 4.6.2. We have seen that some functions (those that are differentiable) can be represented by an infinite series. It is often useful to approximate this series by keeping only the first n terms and neglecting the rest. If we do make this approximation we would like to know what size of error we are making. We can rewrite the full Taylor series as

f(x) = f(a) + (x − a)f′(a) + (1/2!)(x − a)^2 f′′(a) + ... + (1/(n − 1)!)(x − a)^(n−1) f^(n−1)(a) + R_n(x), (2.58)

where R_n(x) = ((x − a)^n/n!) f^(n)(η) and η lies in the range [a, x]. R_n(x) is called the remainder term and represents the error in approximating f(x) by the above (n − 1)th-order power series. Although the value of η that satisfies the expression for R_n is not known, an upper limit on the error may be found by differentiating R_n with respect to η, equating the result to zero and solving for η in the usual way for finding a maximum. For example, if we calculate the Taylor series for cos x about x = 0 but only include the first two non-zero terms we get

cos x ≈ 1 − x^2/2.


Figure 3.1: The Step(x) function is an example of a function that is not continuous.

3 Differential calculus

You will have already seen many examples of differentiation and know what the derivatives of certain functions are. In this section we will look at the formal definition of a derivative, which will allow us to prove the results for the derivatives that you have seen before.

3.1 Continuity of a function

First we must define a property of a function that will be useful in our description of differentiation. A function, f(x), is continuous at x = a if f(x) → f(a) as x → a. An example of where this condition breaks down is the “Step” function, defined as

Step(x) = 1 if x > 0, and Step(x) = 0 if x < 0. (3.1)

A graphic visualisation of this function is displayed in Figure 3.1. It is clear that this function is not continuous at x = 0. In particular, as we take the limit x → 0+, Step(x) → 1, in contrast to when we take the limit x → 0−, Step(x) → 0. Here x → 0+ means that we take x to zero from positive values of x, while x → 0− means we take x to zero from negative values of x.
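A minimal numerical illustration of the two disagreeing one-sided limits (the sequence of sample points is an arbitrary choice):

```python
def step(x):
    """Step function: 1 for x > 0, 0 for x < 0 (undefined at x = 0)."""
    return 1.0 if x > 0 else 0.0

# Approach 0 from above and from below: the one-sided limits disagree,
# so Step(x) is not continuous at x = 0.
from_above = [step(10.0**-k) for k in range(1, 6)]
from_below = [step(-10.0**-k) for k in range(1, 6)]
print(from_above)  # all 1.0
print(from_below)  # all 0.0
```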


Figure 3.2: The Barrow triangle determining the gradient of the tangent to a curve. The graph shows that the gradient or slope of the function at P, given by tan θ, is approximately equal to ∆f /∆x.

3.2 Differentiability and Differentiation.

Read section 2.1 in Riley for more on this. We will now investigate what it means to take a derivative of a function. The derivative of a function is essentially the gradient of the tangent to a curve (as described by the function) at an arbitrary input value (let’s call it x for now). With this in mind consider Figure 3.2. The figure shows the plot of f , as a function of the input variable x. We wish to calculate the gradient (that is, the slope) of the tangent to a curve at a particular value of x. The tangent line (or simply tangent) to a plane curve at a given point is the straight line that “just touches” the curve at that point (see green line in Figure 3.2). Leibniz defined it as the line through a pair of infinitely close points on the curve. Figure 3.2 shows that the gradient of the hypotenuse of the Barrow triangle is given by

[f(x + ∆x) − f(x)]/∆x = ∆f/∆x. (3.2)

This is not quite the tangent to the curve at the point P given by tan θ. However, it is clear that as we decrease ∆x the hypotenuse and the tangent to the curve become closer and closer. By taking the limit ∆x → 0 we can match on to the gradient. Formally, then, the derivative of f with respect to x is given by

df/dx = lim_{∆x→0} [f(x + ∆x) − f(x)]/∆x. (3.3)
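The limiting process in Equation 3.2 can be mimicked numerically with a finite but small ∆x; a Python sketch using sin x, whose exact derivative cos x we can compare against:

```python
import math

def derivative(f, x, dx=1e-6):
    """Forward-difference approximation to df/dx, mirroring Equation 3.2."""
    return (f(x + dx) - f(x)) / dx

# As dx shrinks, the gradient of the Barrow triangle approaches the tangent.
for dx in (1e-1, 1e-3, 1e-6):
    print(dx, derivative(math.sin, 1.0, dx))
print("exact:", math.cos(1.0))  # exact derivative of sin at x = 1
```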

3.3 Rules of Differentiation

3.3.1 Product Function

For more reading and examples see Riley Section 2.1.3. We can also prove the product rule in a similar way. Let’s say we have a function f that is a product of two other functions, u(x) and v(x). Applying Equation 3.3 we have

df/dx = lim_{∆x→0} [u(x + ∆x)v(x + ∆x) − u(x)v(x)]/∆x
      = lim_{∆x→0} [u(x + ∆x)(v(x + ∆x) − v(x)) + (u(x + ∆x) − u(x))v(x)]/∆x
      = lim_{∆x→0} [u(x + ∆x) (v(x + ∆x) − v(x))/∆x + ((u(x + ∆x) − u(x))/∆x) v(x)]
      = u(x) dv(x)/dx + v(x) du(x)/dx.

Consequently we obtain the familiar

df/dx = d(u(x)v(x))/dx = u(x) dv(x)/dx + v(x) du(x)/dx. (3.9)

This is the Product Rule.
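A quick finite-difference check of the Product Rule, using the arbitrary choices u = x^2 and v = sin x:

```python
import math

def derivative(f, x, dx=1e-6):
    """Forward-difference approximation to df/dx."""
    return (f(x + dx) - f(x)) / dx

u = lambda x: x**2   # du/dx = 2x
v = math.sin         # dv/dx = cos x

x = 0.8
lhs = derivative(lambda t: u(t) * v(t), x)     # numerical d(uv)/dx
rhs = u(x) * math.cos(x) + v(x) * 2 * x        # u dv/dx + v du/dx
print(lhs, rhs)
```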

3.3.2 Composite Function or Function of a Function or Implicit Differentiation

For more reading and examples see Riley Section 2.1.5. Consider a function g(x) and another function f(g(x)), which is a function of our original function g. We would like to know how to take the derivative of f(g(x)). We already know the answer from school, but let’s prove it using Equation 3.3.

df(g(x))/dx = lim_{∆x→0} [f(g(x + ∆x)) − f(g(x))]/∆x (3.10)
            = lim_{∆x→0} {[f(g(x + ∆x)) − f(g(x))]/[g(x + ∆x) − g(x)]} × {[g(x + ∆x) − g(x)]/∆x}.

But we have that ∆g = g(x + ∆x) − g(x), so

df(g(x))/dx = lim_{∆g→0} [f(g + ∆g) − f(g)]/∆g × lim_{∆x→0} ∆g/∆x = (df/dg)(dg/dx). (3.11)

This is the Chain Rule.
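Similarly, a finite-difference check of the Chain Rule, with the arbitrary choices g(x) = x^2 + 1 and f = sin:

```python
import math

def derivative(f, x, dx=1e-6):
    """Forward-difference approximation to df/dx."""
    return (f(x + dx) - f(x)) / dx

g = lambda x: x**2 + 1   # dg/dx = 2x
f = math.sin             # df/dg = cos g

x = 0.6
lhs = derivative(lambda t: f(g(t)), x)   # numerical d f(g(x))/dx
rhs = math.cos(g(x)) * 2 * x             # (df/dg)(dg/dx)
print(lhs, rhs)
```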

3.3.3 Quotient Rule

For more reading and examples see Riley Section 2.1.4.

The Quotient Rule does not really stand alone as a separate rule, as it is a direct consequence of the product and chain rules. Consider the product of two functions u(x) and 1/v(x). We can take the derivative of this product following the product rule:

d/dx (u · 1/v) = u d/dx (1/v) + (1/v) du/dx (3.12)
d/dx (u · 1/v) = −(u/v^2) dv/dx + (1/v) du/dx (3.13)
d/dx (u/v) = (v du/dx − u dv/dx)/v^2, (3.14)

which is the Quotient Rule.
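A finite-difference check of the Quotient Rule, using u = sin x and v = cos x (so that u/v = tan x):

```python
import math

def derivative(f, x, dx=1e-6):
    """Forward-difference approximation to df/dx."""
    return (f(x + dx) - f(x)) / dx

u, du = math.sin, math.cos                 # u and du/dx
v, dv = math.cos, lambda x: -math.sin(x)   # v and dv/dx

x = 0.4
lhs = derivative(lambda t: u(t) / v(t), x)           # numerical d(u/v)/dx
rhs = (v(x) * du(x) - u(x) * dv(x)) / v(x)**2        # quotient rule
print(lhs, rhs)  # both equal sec^2(x) here, since u/v = tan x
```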

3.3.4 Logarithmic Differentiation

For more reading and examples see Riley Section 2.1.6. In circumstances in which the variable with respect to which we are differentiating appears as an exponent, taking logarithms and then differentiating implicitly is the simplest way to find the derivative. Example: Find the derivative with respect to x of g(x) = a^x, where a is a constant. Solution: First take natural logs of both sides and then differentiate:

ln g = ln a^x = x ln a, ⇒ (1/g) dg/dx = ln a. (3.15)

Now simply rearrange and substitute in the original expression for g:

dg/dx = g ln a = a^x ln a. (3.16)
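Checking dg/dx = a^x ln a against a finite difference, with the arbitrary choice a = 2:

```python
import math

def derivative(f, x, dx=1e-6):
    """Forward-difference approximation to df/dx."""
    return (f(x + dx) - f(x)) / dx

a = 2.0   # arbitrary constant base
x = 1.5
numeric = derivative(lambda t: a**t, x)
analytic = a**x * math.log(a)   # the result of Equation 3.16
print(numeric, analytic)
```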

3.3.5 Leibniz’s Theorem

For more reading and examples see Riley Section 2.1.7. Leibniz’s theorem states the form of the nth order derivative of a product of functions. We know from above that the derivative of a product of two functions, u ≡ u(x) and v ≡ v(x), is given by the product rule

d(uv)/dx = u dv/dx + v du/dx. (3.17)

Apply the derivative again,

d^2(uv)/dx^2 = u d^2v/dx^2 + 2 (du/dx)(dv/dx) + v d^2u/dx^2, (3.18)

and again,

d^3(uv)/dx^3 = u d^3v/dx^3 + 3 (d^2u/dx^2)(dv/dx) + 3 (du/dx)(d^2v/dx^2) + v d^3u/dx^3, (3.19)
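The coefficients 1, 3, 3, 1 appearing above are binomial coefficients; Leibniz’s theorem states generally that the nth derivative of uv is Σ_k (n choose k) u^(k) v^(n−k). A sketch checking the n = 3 case with the arbitrary choices u = x^3 and v = e^x, whose derivatives are simple:

```python
import math

# Derivatives of u = x^3: u, u', u'', u'''
u_derivs = [lambda x: x**3, lambda x: 3*x**2, lambda x: 6*x, lambda x: 6.0]
# Every derivative of v = e^x is e^x itself
v_deriv = math.exp

def leibniz(n, x):
    """nth derivative of u*v via Leibniz: sum_k C(n,k) u^(k) v^(n-k)."""
    return sum(math.comb(n, k) * u_derivs[k](x) * v_deriv(x)
               for k in range(n + 1))

x = 0.9
# Hand computation: d^3(x^3 e^x)/dx^3 = e^x (x^3 + 9x^2 + 18x + 6)
direct = math.exp(x) * (x**3 + 9*x**2 + 18*x + 6)
print(leibniz(3, x), direct)
```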