










































































Notes for Calculus for Physics and Engineers
Department of Physics, Royal Holloway, University of London, Egham, Surrey, TW20 0EX.
When writing a function there are three important parts: the argument, the relationship and the output of the function. Let’s look at an example.
f (x) = x^2 , (2.1)
where f is the name of the function labelling the output, x is the argument and x^2 is the relationship, that is, what the function does to the input. The action of the function is straightforward. For example, if we have an argument x = 2, then the output is f (2) = 4. Note: f is a common name for a function, but we could have called it anything at all; for example we could have called it g or h or helicopter. It does not matter, it is just a name. Equally, the argument does not have to be x; we could have chosen b, q or telephone. The argument is just a placeholder. For example,
f (b) = b^2 , (2.2)
is exactly the same function as the one in Equation 2.1. As above, if the argument b = 2, then the output is again f (2) = 4. The argument just shows us where the input goes. Sometimes functions have no names; for example,
y = x^2 , (2.3)
but the function still has an output, which in this case is y. There are a large number of special types of function possessing very particular properties. For example, the symmetry of a function can play a crucial role in its operation: a function may be even, odd or neither. An even function is one, like x^2 or cos x, whose graph for negative x is just a reflection in the y-axis of its graph for positive x. Mathematically we can write this as
A function f (x) is even if f (−x) = f (x). (2.4)
An odd function is one like x or sin x, where the values of f (x) and f (−x) are negatives of each other. By definition
A function f (x) is odd if f (−x) = −f (x). (2.5)
We will come back to even and odd functions later, in particular when we look at integrals over these functions.
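As an illustrative aside, the definitions in Equations 2.4 and 2.5 can be checked numerically at a few sample points. The following Python sketch does exactly that; the helper names `is_even` and `is_odd` are our own, not from the notes:

```python
import math

def is_even(f, points):
    """Check f(-x) = f(x) (Eq. 2.4) at the given sample points."""
    return all(math.isclose(f(-x), f(x), abs_tol=1e-12) for x in points)

def is_odd(f, points):
    """Check f(-x) = -f(x) (Eq. 2.5) at the given sample points."""
    return all(math.isclose(f(-x), -f(x), abs_tol=1e-12) for x in points)

points = [0.5, 1.0, 2.0, 3.7]
print(is_even(lambda x: x**2, points))  # True:  x^2 is even
print(is_odd(math.sin, points))         # True:  sin is odd
print(is_even(math.sin, points))        # False: sin is not even
```

Of course a finite set of sample points cannot prove evenness; it can only falsify it or lend support.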
You will have already come across Logarithms at school, but a brief review is given here to refresh your memory. Let’s start with the following
x = b^p, (2.6)
where we say that p is the Logarithm of x to the base b. We write this as
p = logb x, (2.7)
where the base, b, can be any positive number other than 1 (more formally, b ∈ R with b > 0). Two common examples are b = 10 and b = e, where e is a special constant with approximate value e = 2.71828 to 5 decimal places. This constant plays a crucial role in Physics, and so when it is used as the base in a Logarithm it gets its own special name: the Log to the base e is referred to as the natural Logarithm and is often denoted by
p = ln x. (2.8)
It should be clear that logb 0 = −∞, or more properly that logb 0 is not defined: if x = 0, then 0 = b^p can only be satisfied in the limit p → −∞. The real Logarithm function is only defined for x > 0. By definition
x = b^(logb x) and conversely x = logb(b^x), (2.9)
so in the case of the natural Logarithm we have x = e^(ln x) and x = ln(e^x). (2.10)
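The inverse relationships in Equation 2.10 can be verified directly with Python's math module, where math.log is the natural Logarithm and math.exp is e^x; a quick sketch:

```python
import math

x = 7.3  # an arbitrary positive sample value
# x = e^(ln x): exponentiating the natural Logarithm recovers x
assert math.isclose(math.exp(math.log(x)), x)
# x = ln(e^x): taking the natural Logarithm of e^x also recovers x
assert math.isclose(math.log(math.exp(x)), x)
# ln counts powers of e: ln(e^3) = 3
assert math.isclose(math.log(math.e**3), 3.0)
```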
Another common base is 10, and you will often see a Log to the base 10 written without the base explicitly stated: log10 x = log x. It is always important to make sure it is clear which base is being used. There are a number of important rules associated with the manipulation of Logarithms. The Product Rule.
logb(xy) = logb x + logb y. (2.11)
Proof: Let x = b^n and y = b^m, so that n = logb x and m = logb y. Consider the product of x and y:
xy = b^n b^m = b^(n+m) = b^(logb x + logb y). (2.12)
But we also know that by definition
xy = b^(logb xy). (2.13)
Comparing the exponents in Equations 2.12 and 2.13 gives the Product Rule, Equation 2.11.
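The Product Rule is easy to confirm numerically. Here is a small Python check; the two-argument form math.log(x, b) computes logb x:

```python
import math

b = 2.0
x, y = 8.0, 32.0
lhs = math.log(x * y, b)                 # log_b(xy)
rhs = math.log(x, b) + math.log(y, b)    # log_b x + log_b y
# log_2 8 = 3 and log_2 32 = 5, so both sides should equal 8
assert math.isclose(lhs, rhs)
assert math.isclose(lhs, 8.0)
```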
The Change of Basis Rule relates Logarithms of the same argument in two different bases, a and b. Proof: By definition we can write
x = a^(loga x) = b^(logb x). (2.22)
But we can also write,
b = a^(loga b). (2.23)
Using this in Equation 2.22 we arrive at
a^(loga x) = (a^(loga b))^(logb x) = a^(loga b logb x). (2.24)
Comparing exponents we find
loga x = loga b logb x. (2.25)
Example: Show that
logb x + loga x = (1 + loga b) logb x. (2.26)
Solution: Use the change of basis rule to convert loga x = loga b logb x
logb x + loga x = logb x + loga b logb x (2.27)
= (1 + loga b) logb x.
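Both the change of basis rule and the worked identity above can be confirmed numerically; a short Python sketch with arbitrary sample values:

```python
import math

a, b, x = 10.0, 2.0, 50.0
# Change of basis (Eq. 2.25): log_a x = log_a b * log_b x
assert math.isclose(math.log(x, a), math.log(b, a) * math.log(x, b))
# Worked identity (Eq. 2.26): log_b x + log_a x = (1 + log_a b) log_b x
lhs = math.log(x, b) + math.log(x, a)
rhs = (1 + math.log(b, a)) * math.log(x, b)
assert math.isclose(lhs, rhs)
```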
A very common set of functions that we will see in this course are the trigonometric functions. For example you will be familiar with sin, cos, tan. Most relationships between these functions can be derived from two identities (which we will prove later in the course).
cos[x + y] = cos[x] cos[y] − sin[x] sin[y], (2.28)
sin[x + y] = cos[x] sin[y] + cos[y] sin[x]. (2.29)
For example, if we set x = −y in Equation 2.28 we get
cos(0) = 1 = cos^2 (x) + sin^2 (x). (2.30)
Alternatively if we set x = y in Equation 2.28 we find
cos(2x) = cos^2 (x) − sin^2 (x). (2.31)
We also have a set of functions that are the reciprocal of the familiar trigonometric functions. These are
cosec(x) = 1/sin(x); sec(x) = 1/cos(x); cot(x) = 1/tan(x). (2.32)
As an example of a further identity we can start with Equation 2.30 and divide both sides by cos^2(x):
1/cos^2(x) = sec^2(x) = cos^2(x)/cos^2(x) + sin^2(x)/cos^2(x),
so that
sec^2(x) = 1 + tan^2(x). (2.33)
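The three identities above (Equations 2.30, 2.31 and 2.33) can be spot-checked numerically at an arbitrary angle:

```python
import math

x = 0.8  # an arbitrary sample angle in radians
# Eq. 2.30: cos^2(x) + sin^2(x) = 1
assert math.isclose(math.cos(x)**2 + math.sin(x)**2, 1.0)
# Eq. 2.31: cos(2x) = cos^2(x) - sin^2(x)
assert math.isclose(math.cos(2 * x), math.cos(x)**2 - math.sin(x)**2)
# Eq. 2.33: sec^2(x) = 1 + tan^2(x), with sec(x) = 1/cos(x)
assert math.isclose(1 / math.cos(x)**2, 1 + math.tan(x)**2)
```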
Many more identities can be derived in related ways; see the list in section 7.1 of the Mathematics Formula Booklet. A further important property of cos x and sin x is that they can be expanded in terms of a polynomial, literally meaning “many terms”. In our case these polynomials are a sum of terms depending on increasingly higher powers of the variable x. Specifically,
sin x = x − x^3/3! + x^5/5! − x^7/7! + ... = ∑_{n=0}^∞ (−1)^n x^(2n+1)/(2n+1)!, (2.34)
cos x = 1 − x^2/2! + x^4/4! − x^6/6! + ... = ∑_{n=0}^∞ (−1)^n x^(2n)/(2n)!, (2.35)
where the symbol ! means factorial, for example 4! = 4 · 3 · 2 · 1 = 24, and 0! = 1 by definition. These power series expansions come from Taylor Series.
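Equations 2.34 and 2.35 are easy to check by summing the first few terms in Python and comparing against the library functions; a short sketch:

```python
import math

def sin_series(x, nterms):
    """Partial sum of Eq. 2.34: sum of (-1)^n x^(2n+1)/(2n+1)!."""
    return sum((-1)**n * x**(2 * n + 1) / math.factorial(2 * n + 1)
               for n in range(nterms))

def cos_series(x, nterms):
    """Partial sum of Eq. 2.35: sum of (-1)^n x^(2n)/(2n)!."""
    return sum((-1)**n * x**(2 * n) / math.factorial(2 * n)
               for n in range(nterms))

x = 1.2
# Ten terms already agree with the library values to high accuracy
assert math.isclose(sin_series(x, 10), math.sin(x))
assert math.isclose(cos_series(x, 10), math.cos(x))
```

Note how quickly the factorials in the denominators make successive terms negligible.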
By definition a power series has the form
∑_{n=0}^∞ a_n x^n = a_0 + a_1 x + a_2 x^2 + ... + a_n x^n + ... (2.36)
or
∑_{n=0}^∞ a_n (x − a)^n = a_0 + a_1(x − a) + a_2(x − a)^2 + ... + a_n(x − a)^n + .... (2.37)
The value of x will determine whether the series converges, and we are often tasked with finding the values of x for which a series converges. For example, the power series
P(x) = 1 − x/2 + x^2/4 − x^3/8 + ... + (−1)^n x^n/2^n + ...
can be tested for convergence by considering the absolute ratio of successive terms,
ρ_n = |(−x/2)^(n+1) / (−x/2)^n| = |x/2|,
so the series converges when |x/2| < 1, that is, for |x| < 2.
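Because the ratio of successive terms is constant, P(x) is a geometric series; inside the convergence region its partial sums approach the closed form 1/(1 + x/2). The following Python sketch illustrates this (the helper name is ours):

```python
def partial_sum(x, nterms):
    """Partial sum of P(x) = 1 - x/2 + x^2/4 - ... = sum of (-x/2)^n."""
    return sum((-x / 2)**n for n in range(nterms))

x = 1.0                    # |x/2| = 0.5 < 1, inside the convergence region
exact = 1 / (1 + x / 2)    # closed form of this geometric series
# After 60 terms the partial sum has converged to the closed form
assert abs(partial_sum(x, 60) - exact) < 1e-12
```

For |x| ≥ 2 the terms do not shrink and the partial sums fail to settle down.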
To see where such expansions come from, we can expand a general function of x as a Taylor Series. Consider the function f (x) and expand it around the point x = a. We have then
f (x) = a_0 + a_1(x − a) + a_2(x − a)^2 + a_3(x − a)^3 + a_4(x − a)^4 + ... + a_n(x − a)^n + ..., (2.42)
f ′(x) = a_1 + 2a_2(x − a) + 3a_3(x − a)^2 + 4a_4(x − a)^3 + ... + n a_n(x − a)^(n−1) + ..., (2.43)
f ′′(x) = 2a_2 + 3 · 2 a_3(x − a) + 4 · 3 a_4(x − a)^2 + ... + n(n − 1) a_n(x − a)^(n−2) + ..., (2.44)
f ′′′(x) = 3! a_3 + 4 · 3 · 2 a_4(x − a) + ... + n(n − 1)(n − 2) a_n(x − a)^(n−3) + ..., (2.45)
... (2.46)
f^(n)(x) = n(n − 1)(n − 2) ... 2 · 1 a_n + terms containing powers of (x − a). (2.47)
In each of the above equations we put x = a and find
f (a) = a_0, f ′(a) = a_1, f ′′(a) = 2a_2, (2.48)
f ′′′(a) = 3! a_3, ..., f^(n)(a) = n! a_n. (2.49)
We can now write the full Taylor series for f (x) about x = a:
f (x) = f (a) + (x − a)f ′(a) + (1/2!)(x − a)^2 f ′′(a) + ... + (1/n!)(x − a)^n f^(n)(a) + .... (2.50)
The Maclaurin series for f (x) is the Taylor series about the origin: putting a = 0 in Equation 2.50 we obtain the Maclaurin series (the Taylor Series expansion about x = 0) for f (x),
f (x) = f (0) + x f ′(0) + (1/2!) x^2 f ′′(0) + ... + (1/n!) x^n f^(n)(0) + .... (2.51)
Note: The functions are differentiated first, and then the value of x around which they are being expanded is inserted. We have already seen some important Maclaurin series, but they are collected here and should be memorised; you are also expected to be able to derive them all. Each is listed with the values of x for which it is convergent:
sin x = x − x^3/3! + x^5/5! − x^7/7! + ..., all x; (2.53)
cos x = 1 − x^2/2! + x^4/4! − x^6/6! + ..., all x; (2.54)
e^x = 1 + x + x^2/2! + x^3/3! + x^4/4! + ..., all x; (2.55)
ln(1 + x) = x − x^2/2 + x^3/3 − x^4/4 + ..., −1 < x ≤ 1; (2.56)
(1 + x)^p = 1 + p x + p(p − 1)x^2/2! + p(p − 1)(p − 2)x^3/3! + ..., |x| < 1. (2.57)
The last of these series is called a “Binomial Series”.
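The less familiar entries in the table, Equations 2.56 and 2.57, can also be checked numerically. The sketch below sums each series and compares with the exact function; the coefficient of x^n in the Binomial Series, p(p − 1)...(p − n + 1)/n!, is built up by a running product:

```python
import math

def ln1p_series(x, nterms):
    """Partial sum of Eq. 2.56: ln(1+x) = x - x^2/2 + x^3/3 - ..."""
    return sum((-1)**(n + 1) * x**n / n for n in range(1, nterms + 1))

def binomial_series(x, p, nterms):
    """Partial sum of Eq. 2.57 for (1+x)^p with |x| < 1."""
    total, coeff = 1.0, 1.0
    for n in range(1, nterms + 1):
        coeff *= (p - n + 1) / n      # builds p(p-1)...(p-n+1)/n!
        total += coeff * x**n
    return total

assert math.isclose(ln1p_series(0.5, 60), math.log(1.5))
# (1 + 0.3)^0.5 is just sqrt(1.3)
assert math.isclose(binomial_series(0.3, 0.5, 60), math.sqrt(1.3))
```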
Let’s look at an example. Expand f (x) = cos(bx) around x = 0, where b is a constant, including only the first 3 terms. We use the Taylor Series expansion around x = 0 in Equation 2.51, that is
f (x) ≈ f (0) + x f ′(0) + (1/2!) x^2 f ′′(0).
Notice the approximation sign: this is because we are only including the first three terms of what is an infinite series, so our power series expansion is only an approximation to the full function. We can calculate the individual terms. The first term is f (0) = cos(bx)|_{x=0} = 1. The coefficient of x is f ′(0) = d cos(bx)/dx|_{x=0} = −b sin(bx)|_{x=0} = 0, and finally the coefficient of x^2 is
(1/2!) f ′′(0) = (1/2) d^2 cos(bx)/dx^2|_{x=0} = (−b^2/2) cos(bx)|_{x=0} = −b^2/2.
Putting all this together we get cos(bx) ≈ 1 − (1/2)(bx)^2. If we had wanted the first three non-zero terms we would have had to go to the x^4 term.
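The result can be sanity-checked numerically: for small bx the truncated series should track cos(bx) with an error of order (bx)^4/4!, the size of the first neglected non-zero term. A quick Python sketch with an arbitrary choice of b:

```python
import math

b = 3.0  # an arbitrary constant for the check

def cos_bx_approx(x):
    """Two-term expansion derived above: cos(bx) ~ 1 - (bx)^2/2."""
    return 1 - 0.5 * (b * x)**2

x = 0.01  # small x, so bx = 0.03 is well inside the accurate region
error = abs(cos_bx_approx(x) - math.cos(b * x))
# The error should not exceed the first neglected term, (bx)^4/4!
assert error < (b * x)**4 / math.factorial(4)
```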
2.4.2 Approximation Errors in Taylor Series
See Riley Sections 4.6.1, 4.6.2. We have seen that some functions (those that are sufficiently differentiable) can be represented by an infinite series. It is often useful to approximate this series by keeping only the first n terms and neglecting the rest. If we make this approximation we would like to know what size of error we are making. We can rewrite the full Taylor series as
f (x) = f (a) + (x − a)f ′(a) + (1/2!)(x − a)^2 f ′′(a) + ... + (1/(n − 1)!)(x − a)^(n−1) f^(n−1)(a) + R_n(x), (2.58)
where R_n(x) = ((x − a)^n / n!) f^(n)(η), with η lying in the range [a, x]. R_n(x) is called the remainder term and represents the error made in approximating f (x) by the above (n − 1)th-order power series. Although the value of η that satisfies the expression for R_n is not known, an upper limit on the error may be found by differentiating R_n with respect to η, equating the result to zero and solving for η in the usual way for finding a maximum. For example, if we calculate the Taylor series for cos x about x = 0 but only include the first two terms we get
cos x ≈ 1 − x^2/2.
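For this truncation the remainder involves f^(4)(η) = cos η, and since |cos η| ≤ 1 for any η, the error is bounded by x^4/4!. The bound is easy to verify numerically:

```python
import math

x = 0.5
approx = 1 - x**2 / 2            # cos x truncated as above
# Lagrange remainder: R_4(x) = (x^4/4!) cos(eta) for some eta in [0, x],
# and |cos(eta)| <= 1, so the error is bounded by x^4/4!
bound = x**4 / math.factorial(4)
error = abs(math.cos(x) - approx)
assert error <= bound
```

At x = 0.5 the actual error (about 0.0026) sits just inside the bound, as expected.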
Figure 3.1: The Step(x) function is an example of a function that is not continuous.
You will already have seen many examples of differentiation and know the derivatives of certain functions. In this section we will look at the formal definition of a derivative, which will allow us to prove the results for the derivatives that you have seen before.
First we must define a property of a function that will be useful in our description of differentiation. A function, f (x), is continuous at x = a if f (x) → f (a) as x → a. An example of where this condition breaks down is the “Step” function, which is defined as
Step(x) = { 1, if x > 0,
            0, if x < 0. (3.1)
A graphic visualisation of this function is displayed in Figure 3.1. It is clear that this function is not continuous at x = 0. In particular, as we take the limit x → 0+, Step(x) → 1, in contrast to the limit x → 0−, where Step(x) → 0. Here x → 0+ means that we take x to zero from positive values of x, and x → 0− means we take x to zero from negative values of x.
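The failure of continuity can be made concrete by sampling the Step function ever closer to zero from each side; the two sequences settle on different values, so no single limit exists at x = 0. A small Python sketch:

```python
def step(x):
    """Step(x) of Eq. 3.1; left undefined at exactly x = 0."""
    if x > 0:
        return 1
    if x < 0:
        return 0
    raise ValueError("Step(0) is not defined")

# Sample at x = 0.1, 0.01, ..., approaching 0 from each side
from_above = [step(10.0**(-k)) for k in range(1, 6)]   # x -> 0+
from_below = [step(-10.0**(-k)) for k in range(1, 6)]  # x -> 0-
print(from_above)  # [1, 1, 1, 1, 1]
print(from_below)  # [0, 0, 0, 0, 0]
```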
Figure: sketch of f (x) at the points x and x + ∆x, as used in the definition of the derivative.