









An overview of log-linearisation methods used to solve DSGE (Dynamic Stochastic General Equilibrium) models. The techniques presented include the universal method, total differential method, Uhlig's method, and substitution method. The document also covers Taylor approximation for single and multivariable cases.
Typology: Lecture notes
Abstract
To solve DSGE models, we first have to collect all expectational difference equations, such as F.O.C.s, constraints, and aggregate conditions, then log-linearise them around the steady state. It sounds easy, but it sometimes requires ingenuity and proficiency. This note is written to illustrate log-linearisation and to present several methods; which method you use is at your discretion.
The right-hand side of (3) is the percentage deviation from the steady state X; this is the reason we prefer to work with log-deviations. Sometimes we choose to replace Xt − X with dXt, the differential of Xt, which will become clear in later examples. We can also rearrange (3),

x̃t ≈ (1/X)(Xt − X) = Xt/X − 1,  so that  Xt ≈ X(x̃t + 1) (4)
Then use this expression to substitute into all variables.
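As a quick sanity check of (3) and (4), the following sketch compares the exact log-deviation with the percentage deviation; the steady-state value and the current value are hypothetical numbers chosen for illustration:

```python
import math

# Hypothetical values: steady state X = 2, current value Xt = 2.1 (a 5% deviation)
X, Xt = 2.0, 2.1

x_tilde = math.log(Xt / X)   # exact log-deviation, ln(Xt) - ln(X)
pct_dev = (Xt - X) / X       # percentage deviation, the RHS of (3)

print(round(x_tilde, 4))     # 0.0488
print(round(pct_dev, 4))     # 0.05

# Rearranging as in (4): Xt is approximately X*(1 + x_tilde)
print(round(X * (1 + x_tilde), 4))   # 2.0976, close to the true Xt = 2.1
```

The two deviation measures agree to first order; the gap (about 0.0012 here) is the second-order error of the approximation.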
2 Universal method
Entitling this method universal is not because it can be employed to deal with any problem, but because it reveals the most fundamental idea of log-linearisation: take the natural logarithm, then linearise. We shall use a Cobb-Douglas production function as the demonstrating example
Yt = Kt^α (At Lt)^(1−α)
Take natural logarithm on both sides,
ln Yt = α ln Kt + (1 − α) ln At + (1 − α) ln Lt (5)
Then we take a first-order Taylor expansion around the steady state, for every term:
ln Yt ≈ ln Y + (1/Y)(Yt − Y)
ln Kt ≈ ln K + (1/K)(Kt − K)
ln At ≈ ln A + (1/A)(At − A)
ln Lt ≈ ln L + (1/L)(Lt − L)
Substitute back to (5),
ln Y + (1/Y)(Yt − Y) = α[ln K + (1/K)(Kt − K)] + (1 − α)[ln A + (1/A)(At − A)] + (1 − α)[ln L + (1/L)(Lt − L)]
Expand the equation above
ln Y + (1/Y)(Yt − Y) = α ln K + α Kt/K − α + (1 − α) ln A + (1 − α) At/A − (1 − α) + (1 − α) ln L + (1 − α) Lt/L − (1 − α) (6)
We eliminate the steady-state terms from the equation, using (5) evaluated at the steady state,
ln Y = α ln K + (1 − α) ln A + (1 − α) ln L
Then (6) simplifies a bit,
(1/Y)(Yt − Y) = α Kt/K − α + (1 − α) At/A − (1 − α) + (1 − α) Lt/L − (1 − α)

Since each term of the form Xt/X − 1 is approximately the log-deviation x̃t, this is

ỹt ≈ α k̃t + (1 − α) ãt + (1 − α) l̃t
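The universal-method result for the Cobb-Douglas example can be checked numerically. The parameter value and steady-state levels below are hypothetical; because the production function is exactly log-linear, the relation should hold to machine precision:

```python
import math

# Hypothetical parameter and steady-state values
alpha = 0.33
K, A, L = 10.0, 1.0, 1.0
Y = K**alpha * (A * L)**(1 - alpha)   # steady-state output

# Apply 1% deviations to each input
Kt, At, Lt = 1.01 * K, 1.01 * A, 1.01 * L
Yt = Kt**alpha * (At * Lt)**(1 - alpha)

# Exact log-deviations
y_t = math.log(Yt / Y)
k_t, a_t, l_t = math.log(Kt / K), math.log(At / A), math.log(Lt / L)

# Log-linearised relation: y~ = alpha*k~ + (1-alpha)*(a~ + l~)
approx = alpha * k_t + (1 - alpha) * (a_t + l_t)
print(abs(y_t - approx) < 1e-9)   # True
```

For a genuinely nonlinear equation the match would only be approximate, with a second-order error in the size of the deviations.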
where CH, PH and P without subscript t are steady-state values. Then rewrite them in log-deviation form,
c˜H,t = −η(˜pH,t − p˜t) + ˜ct (8)
The same procedure applies to the other equation; take logarithms on both sides,
ln CF,t = ln α − η ln PF,t + η ln Pt + ln Ct
Take the total differential at the steady state,

(1/CF) dCF,t = −(η/PF) dPF,t + (η/P) dPt + (1/C) dCt
Then, in log-deviation form,
˜cF,t = −η(˜pF,t − p˜t) + ˜ct (9)
This time, the linearisation goes considerably quicker, and the result is already in log-deviation form. A first-order Taylor expansion linearises a nonlinear function, so it works the same as a total-differential approximation. Since taking the total differential is clearly easier than a first-order Taylor expansion, we prefer not to use Taylor expansion directly.
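We can also verify (9) numerically. The level equation CF,t = α (PF,t/Pt)^(−η) Ct is the one implied by the log form above; the parameter values and steady-state normalisations below are hypothetical:

```python
import math

# Hypothetical values: elasticity eta, import share alpha,
# steady-state prices and consumption normalised to 1
eta, alpha = 2.0, 0.4
PF, P, C = 1.0, 1.0, 1.0
CF = alpha * (PF / P)**(-eta) * C   # steady-state import demand

# Small deviations from steady state
PFt, Pt, Ct = 1.02 * PF, 1.01 * P, 1.01 * C
CFt = alpha * (PFt / Pt)**(-eta) * Ct

lhs = math.log(CFt / CF)   # exact c~F,t
rhs = -eta * (math.log(PFt / PF) - math.log(Pt / P)) + math.log(Ct / C)
print(abs(lhs - rhs) < 1e-9)   # True: the demand equation is exactly log-linear
```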
4 Uhlig’s method
In this section, we will see another interesting method, proposed by Uhlig (1999), which does not even require taking derivatives. This method is just a further derivation of (1),
ln Xt = ln X + ˜xt
Take exponential on both sides,
Xt = e^(ln X + x̃t) = e^(ln X) e^(x̃t) = X e^(x̃t)
Then the idea is clear: we replace every variable with its corresponding transformed term X e^(x̃t), where X is the steady-state value. We have several expansion rules to follow:
e^(x̃t) ≈ 1 + x̃t
e^(x̃t + a ỹt) ≈ 1 + x̃t + a ỹt
x̃t ỹt ≈ 0
Et[a e^(x̃t)] ≈ a Et[x̃t] + a
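A quick numerical sketch of the first three rules, with hypothetical small deviations, shows that the approximation errors are second-order small:

```python
import math

# Hypothetical small log-deviations and a coefficient
x, y, a = 0.01, 0.03, 0.5

# Rule 1: e^x ≈ 1 + x; error is roughly x^2/2
print(abs(math.exp(x) - (1 + x)))                 # about 5e-5

# Rule 2: e^(x + a*y) ≈ 1 + x + a*y
print(abs(math.exp(x + a * y) - (1 + x + a * y)))   # about 3e-4

# Rule 3: products of deviations are second order, x*y ≈ 0
print(abs(x * y))                                  # 3e-4
```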
You get the right-hand side of each rule by simply taking a first-order Taylor expansion, but you do not need to redo the expansion every time; using these rules will save you a lot of trouble. One important remark is that Uhlig's method is immune to Jensen's inequality,
ln EtX > Et ln X
We have seen a specific example of this inequality in advanced microeconomics textbooks: risk aversion. Since the natural logarithm is strictly concave, the inequality always holds. The problem is then that we cannot simply take the logarithm of a function containing an expectation operator. One clever way to circumvent the problem is to use Uhlig's method. The example here is again from Galí and Monacelli (2005), the equation
then take the total differential at the steady-state values,

(1/ξ) dξt = −(σ/(C(1 − h))) (dCt − h dCt−1)

ξ̃t ≈ −(σ/(C(1 − h))) (C (dCt/C) − hC (dCt−1/C))

ξ̃t ≈ −(σ/(1 − h)) (c̃t − h c̃t−1)
Substitute back to (11),
0 ≈ r̃t − Et[(σ/(1 − h))(c̃t+1 − h c̃t) + p̃t+1 − p̃t] + (σ/(1 − h))(c̃t − h c̃t−1)
Define π̃t+1 ≡ p̃t+1 − p̃t, the inflation rate, then multiply both sides by (1 − h)/σ,
Et(c̃t+1 − h c̃t) + ((1 − h)/σ) Et[π̃t+1 − r̃t] ≈ c̃t − h c̃t−1 (12)
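To double-check the algebra behind (12), here is a deterministic sketch: it treats the expectation operator as the identity and plugs in arbitrary hypothetical numbers, confirming that the equation before multiplying through (as read from the derivation above) and form (12) differ only by the factor −(1 − h)/σ:

```python
# Hypothetical parameter values and log-deviations
sigma, h = 2.0, 0.7
c_lag, c, c_lead = 0.01, 0.015, 0.012   # c~_{t-1}, c~_t, c~_{t+1}
p, p_lead, r = 0.02, 0.025, 0.005       # p~_t, p~_{t+1}, r~_t
pi_lead = p_lead - p                    # inflation, pi~_{t+1}

# The form before multiplying through by (1-h)/sigma
f1 = r - (sigma / (1 - h)) * (c_lead - h * c) - pi_lead \
     + (sigma / (1 - h)) * (c - h * c_lag)

# Form (12), written as LHS minus RHS
f2 = (c_lead - h * c) + ((1 - h) / sigma) * (pi_lead - r) - (c - h * c_lag)

# f2 should equal -((1-h)/sigma) * f1 identically
print(abs(f2 + ((1 - h) / sigma) * f1) < 1e-9)   # True
```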
5 Substitution method
Strictly speaking, this is not a separate method: it simply omits the first step of Uhlig's replacement and goes directly to (4). There is only one step that needs attention, which we will see in the next example:
Xt + a = (1 − b) Yt/(Lt Zt)
Taking natural logarithm on both sides,
ln (Xt + a) = ln (1 − b) + ln Yt − ln Lt − ln Zt (13)
Use steady-state condition,
ln (X + a) = ln (1 − b) + ln Y − ln L − ln Z (14)
Subtract (14) from (13),
ln (Xt + a) − ln (X + a) = ln Yt − ln Y − ln Lt + ln L − ln Zt + ln Z
These are all in log-deviation form,

(X̃+a)t = ỹt − l̃t − z̃t (15)

where (X̃+a)t denotes the log-deviation of Xt + a. We need to find out how (X̃+a)t relates to x̃t, and here comes the step that needs attention: we use (3),
x̃t ≈ (Xt − X)/X (16)
Then X˜t + a becomes
X^ ˜t + a ≈ Xt^ +^ a X^ − +^ ( Xa +^ a)= XXt^ −+^ Xa (17)
Since the numerators of the right-hand sides of (16) and (17) are equal, we can make use of this equality and set
(X̃+a)t (X + a) = x̃t X,  so  (X̃+a)t = x̃t X/(X + a)
Substitute back to (15),
x̃t X/(X + a) = ỹt − l̃t − z̃t
This function can actually be linearised easily by the total-differential method; you can try it yourself, it only takes two steps and everything will be done.
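The substitution-method result can also be verified numerically. The parameter values below are hypothetical, and the steady-state X is backed out from the equation itself:

```python
import math

# Hypothetical parameters and steady-state levels
a, b = 0.5, 0.2
Y, L, Z = 2.0, 1.0, 1.0
X = (1 - b) * Y / (L * Z) - a   # steady state implied by X + a = (1-b)Y/(LZ)

# Apply small log-deviations to the right-hand-side variables
y_t, l_t, z_t = 0.01, -0.005, 0.002
Yt, Lt, Zt = Y * math.exp(y_t), L * math.exp(l_t), Z * math.exp(z_t)

Xt = (1 - b) * Yt / (Lt * Zt) - a   # exact response of X_t
x_t = math.log(Xt / X)              # exact log-deviation of X_t

lhs = x_t * X / (X + a)             # the linearised left-hand side
rhs = y_t - l_t - z_t
print(abs(lhs - rhs))               # second-order small approximation error
```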
Following this formula, we simply take the derivative at the steady state and everything will be done. Try an example,

kt+1 = (1 − δ)kt + s kt^α
Use formula (20),
k̃t+1 = [α s k^(α−1) + (1 − δ)] k̃t
It is now linearised, since the term in the square brackets involves only parameters and steady-state values.
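A numerical sketch of this capital-accumulation example, with hypothetical parameter values, confirms the linearised law of motion near the steady state:

```python
import math

# Hypothetical parameters
alpha, delta, s = 0.33, 0.1, 0.2

# Steady state from k = (1-delta)k + s*k^alpha, i.e. delta*k = s*k^alpha
k = (s / delta) ** (1 / (1 - alpha))

k_t = 0.01                               # 1% log-deviation of k_t
kt = k * math.exp(k_t)
kt1 = (1 - delta) * kt + s * kt**alpha   # exact next-period capital

exact = math.log(kt1 / k)                               # exact k~_{t+1}
linear = (alpha * s * k**(alpha - 1) + (1 - delta)) * k_t
print(abs(exact - linear))               # second-order small approximation error
```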
The Taylor polynomial has a vector version as well as a scalar version; in fact, most functions you encounter will be multivariable rather than the univariate case we saw in the last section. Again, let us start with the general case¹,
Xt+1 = f (Xt, Yt) (21)
where f is still any nonlinear function you can imagine. The vector version (bivariate) of first-order Taylor polynomial around the steady-state is
Xt+1 ≈ f (X, Y ) + fX (X, Y )(Xt − X) + fY (X, Y )(Yt − Y ) (22)
As you can guess, the bivariate Taylor expansion is closely related to the total derivative/differential of a bivariate function. Again, impose the steady-state condition of (21), X = f(X, Y), and (22) becomes
Xt+1 ≈ X + fX(X, Y)(Xt − X) + fY(X, Y)(Yt − Y)

¹We use bivariate examples here for the sake of simplicity.
Dividing by X,
Xt+1/X ≈ 1 + fX(X, Y) (Xt − X)/X + fY(X, Y) (Yt − Y)/X

1 + x̃t+1 ≈ 1 + fX(X, Y) x̃t + fY(X, Y) (Y/X) (Yt − Y)/Y

x̃t+1 ≈ fX(X, Y) x̃t + fY(X, Y) (Y/X) ỹt (23)
(23) is the formula we are seeking. Try an example again,
kt+1 = (1 − δ)kt + s zt kt^α
Then calculate partial derivatives,
fz(z, k) = s k^α
fk(z, k) = α s z k^(α−1) + (1 − δ)
Use formula (23),
k̃t+1 = [α s z k^(α−1) + (1 − δ)] k̃t + s k^α (z/k) z̃t
Open the brackets,
k̃t+1 ≈ α s z k^(α−1) k̃t + (1 − δ) k̃t + s z k^(α−1) z̃t
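Formula (23) can be checked on this bivariate example too. The parameter values are hypothetical, with the productivity level z normalised to 1 in steady state:

```python
import math

# Hypothetical parameters; z has steady state 1
alpha, delta, s = 0.33, 0.1, 0.2
z = 1.0
k = (s * z / delta) ** (1 / (1 - alpha))   # steady state: delta*k = s*z*k^alpha

z_t, k_t = 0.02, 0.01                      # log-deviations of z_t and k_t
zt, kt = z * math.exp(z_t), k * math.exp(k_t)
kt1 = (1 - delta) * kt + s * zt * kt**alpha   # exact next-period capital

exact = math.log(kt1 / k)
# Formula (23): k~_{t+1} = f_k*k~_t + f_z*(z/k)*z~_t, evaluated at the steady state
linear = (alpha * s * z * k**(alpha - 1) + (1 - delta)) * k_t \
         + s * k**alpha * (z / k) * z_t
print(abs(exact - linear))                 # second-order small approximation error
```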
There are still many situations we have not covered in this note; you can find some published papers and try to log-linearise their key equations yourself, then compare your results with those provided by the authors.
The End.