

Log-linearisation Tutorial

Weijie Chen

Department of Political and Economic Studies

University of Helsinki

Updated on 5 May 2012

Abstract

To solve DSGE models, we first have to collect all expectational difference equations, such as F.O.C.s, constraints, and aggregate conditions, and then log-linearise them around the steady state. It sounds easy, but it sometimes requires ingenuity and proficiency. This note illustrates log-linearisation and presents several methods; which method you use is at your discretion.

The right-hand side of (3) is the percentage deviation from the steady state X; this is why we prefer to work with log-deviations. Sometimes we replace X_t − X with dX_t, the differential of X_t, which will become clear in later examples. We can also rearrange (3),

\tilde{x}_t \approx \frac{1}{X}(X_t - X) = \frac{X_t}{X} - 1, \qquad X_t \approx X(\tilde{x}_t + 1) \quad (4)

Then use this expression to substitute into all variables.
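A quick numerical illustration of (3) and (4) may help; the sketch below is in Python, and the steady-state level X and the size of the deviation are illustrative assumptions, not values from the text.

    import numpy as np

    # Log-deviations approximate percentage deviations from the steady state.
    X = 5.0                              # assumed steady-state level
    Xt = 5.1                             # a 2% deviation in levels
    x_tilde = np.log(Xt) - np.log(X)     # x~_t = ln X_t - ln X

    print(x_tilde, (Xt - X) / X)         # (3): x~_t is close to (X_t - X)/X
    print(X * (1 + x_tilde), Xt)         # (4): X_t is close to X*(1 + x~_t)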

2 Universal method

Entitling this method universal is not because it can be employed to deal with any problem, but because it reveals the most fundamental idea of log-linearisation: take the natural logarithm, then linearise. We shall use a Cobb-Douglas production function as the demonstrating example

Y_t = K_t^{\alpha}(A_t L_t)^{1-\alpha}

Take natural logarithm on both sides,

\ln Y_t = \alpha \ln K_t + (1-\alpha)\ln A_t + (1-\alpha)\ln L_t \quad (5)

Then we apply a first-order Taylor expansion around the steady state to every term:

\ln Y_t = \ln Y + \frac{1}{Y}(Y_t - Y), \qquad \ln K_t = \ln K + \frac{1}{K}(K_t - K)
\ln A_t = \ln A + \frac{1}{A}(A_t - A), \qquad \ln L_t = \ln L + \frac{1}{L}(L_t - L)

Substituting these back into (5),

\ln Y + \frac{1}{Y}(Y_t - Y) = \alpha\left[\ln K + \frac{1}{K}(K_t - K)\right] + (1-\alpha)\left[\ln A + \frac{1}{A}(A_t - A)\right] + (1-\alpha)\left[\ln L + \frac{1}{L}(L_t - L)\right]

Expand the equation above

\ln Y + \frac{1}{Y}(Y_t - Y) = \alpha\ln K + \frac{\alpha}{K}K_t - \alpha + (1-\alpha)\ln A + \frac{1-\alpha}{A}A_t - (1-\alpha) + (1-\alpha)\ln L + \frac{1-\alpha}{L}L_t - (1-\alpha) \quad (6)

We can eliminate the steady-state terms from the equation, since according to (5) they satisfy

\ln Y = \alpha\ln K + (1-\alpha)\ln A + (1-\alpha)\ln L

Then (6) simplifies to

\frac{1}{Y}(Y_t - Y) = \alpha\frac{K_t}{K} - \alpha + (1-\alpha)\frac{A_t}{A} - (1-\alpha) + (1-\alpha)\frac{L_t}{L} - (1-\alpha)

which, in log-deviation notation, is \tilde{y}_t \approx \alpha\tilde{k}_t + (1-\alpha)\tilde{a}_t + (1-\alpha)\tilde{l}_t.
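This result is easy to check numerically. The minimal Python sketch below compares the exact log-deviation of output with the log-linearised expression; the parameter value and steady-state levels are illustrative assumptions, not taken from the note.

    import numpy as np

    # Compare the exact log-deviation of Y_t = K_t^alpha * (A_t*L_t)^(1-alpha)
    # with the log-linearised expression derived above.
    alpha = 0.33                     # assumed capital share
    K, A, L = 10.0, 1.0, 1.0         # assumed steady-state levels
    Y = K**alpha * (A * L)**(1 - alpha)

    k, a, l = 0.02, -0.01, 0.005     # small log-deviations
    Kt, At, Lt = K*np.exp(k), A*np.exp(a), L*np.exp(l)
    Yt = Kt**alpha * (At * Lt)**(1 - alpha)

    y_exact  = np.log(Yt / Y)
    y_linear = alpha*k + (1 - alpha)*(a + l)
    print(y_exact, y_linear)         # agree exactly, since this function is log-linear

For equations that are not exactly log-linear in the variables, the two numbers agree only up to first order in the deviations.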

where C_H, P_H, and P without the time subscript t denote steady-state values. Then rewrite them in log-deviation form,

\tilde{c}_{H,t} = -\eta(\tilde{p}_{H,t} - \tilde{p}_t) + \tilde{c}_t \quad (8)

The same procedure applies to the other equation; take the logarithm of both sides,

\ln C_{F,t} = \ln\alpha - \eta\ln P_{F,t} + \eta\ln P_t + \ln C_t

Take the total differential at the steady state,

\frac{1}{C_F}\,dC_{F,t} = -\frac{\eta}{P_F}\,dP_{F,t} + \frac{\eta}{P}\,dP_t + \frac{1}{C}\,dC_t

and then, in log-deviation form,

\tilde{c}_{F,t} = -\eta(\tilde{p}_{F,t} - \tilde{p}_t) + \tilde{c}_t \quad (9)

This time the linearisation goes considerably faster, and the result is already in log-deviation form. A first-order Taylor expansion linearises a nonlinear function, so it serves the same purpose as the total differential approximation. Since taking the total differential is easier than performing a first-order Taylor expansion, we prefer not to use the Taylor expansion directly.
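Because this demand equation is itself log-linear, the total-differential result (9) is in fact exact, which is easy to confirm numerically. A minimal Python sketch follows; the parameter values, steady-state levels, and deviation sizes are illustrative assumptions.

    import numpy as np

    # Check (9): c~_{F,t} = -eta*(p~_{F,t} - p~_t) + c~_t for the demand equation
    # C_{F,t} = alpha*(P_{F,t}/P_t)^(-eta)*C_t.
    alpha, eta = 0.4, 1.5            # assumed parameters
    P_F, P, C = 1.0, 1.0, 1.0        # assumed steady-state levels
    C_F = alpha * (P_F / P)**(-eta) * C

    p_F, p, c = 0.01, -0.02, 0.015   # small log-deviations
    P_Ft, Pt, Ct = P_F*np.exp(p_F), P*np.exp(p), C*np.exp(c)
    C_Ft = alpha * (P_Ft / Pt)**(-eta) * Ct

    c_F_exact  = np.log(C_Ft / C_F)  # exact log-deviation of C_{F,t}
    c_F_linear = -eta*(p_F - p) + c  # equation (9)
    print(c_F_exact, c_F_linear)     # identical here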

4 Uhlig’s method

In this section we will see another interesting method, proposed by Uhlig (1999), which does not even require taking derivatives. The method is just a further derivation of (1),

\ln X_t = \ln X + \tilde{x}_t

Take the exponential of both sides,

X_t = e^{\ln X + \tilde{x}_t} = e^{\ln X} e^{\tilde{x}_t} = X e^{\tilde{x}_t}

The idea is now clear: we replace every variable with its transformed term X e^{\tilde{x}_t}, where X is the steady-state value. We have several expansion rules to follow:

e^{\tilde{x}_t} \approx 1 + \tilde{x}_t
e^{\tilde{x}_t + a\tilde{y}_t} \approx 1 + \tilde{x}_t + a\tilde{y}_t
\tilde{x}_t \tilde{y}_t \approx 0
E_t[a e^{\tilde{x}_t}] \approx E_t[a\tilde{x}_t] + a
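As a quick numerical illustration of the first three rules, here is a small Python sketch; the deviation sizes and the constant a are illustrative assumptions.

    import numpy as np

    x, y, a = 0.02, -0.01, 0.5               # small deviations and an arbitrary constant

    print(np.exp(x), 1 + x)                  # e^{x~} is close to 1 + x~
    print(np.exp(x + a*y), 1 + x + a*y)      # e^{x~ + a*y~} is close to 1 + x~ + a*y~
    print(x * y)                             # x~*y~ is second order, hence dropped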

You obtain the right-hand sides of these rules by taking a first-order Taylor expansion. But you do not need to carry out the Taylor expansion every time; using these rules will save you a lot of trouble. One important remark is that Uhlig's method is immune to Jensen's inequality,

\ln E_t X > E_t \ln X

We have seen a specific example of this inequality in advanced microeconomics textbooks: risk aversion. Since the natural logarithm is strictly concave, the inequality always holds. The problem, then, is that we cannot simply take the logarithm of an expression containing an expectation operator. One clever way to circumvent this is to use Uhlig's method. The example here is again from Galí and Monacelli (2005), the equation

then take the total differential at the steady-state values,

\frac{1}{\xi}\,d\xi_t = -\frac{\sigma}{C(1-h)}\left[dC_t - h\,dC_{t+1}\right]
\tilde{\xi}_t \approx -\sigma\frac{1}{C(1-h)}\left[C\frac{dC_t}{C} - hC\frac{dC_{t+1}}{C}\right]
\tilde{\xi}_t \approx -\frac{\sigma}{1-h}\left(\tilde{c}_t - h\tilde{c}_{t+1}\right)

Substituting back into (11),

0 \approx \tilde{r}_t - E_t\left[\frac{\sigma}{1-h}\left(\tilde{c}_{t+1} - h\tilde{c}_t\right)\right] + \frac{\sigma}{1-h}\left(\tilde{c}_t - h\tilde{c}_{t+1}\right) - E_t[\tilde{p}_{t+1} - \tilde{p}_t]

Define \tilde{\pi}_{t+1} = \tilde{p}_{t+1} - \tilde{p}_t, the inflation rate, then multiply both sides by (1-h)/\sigma:

E_t\left(\tilde{c}_{t+1} - h\tilde{c}_t\right) + \frac{1-h}{\sigma}E_t[\tilde{\pi}_{t+1} - \tilde{r}_t] \approx \tilde{c}_t - h\tilde{c}_{t-1} \quad (12)

5 Substitution method

Strictly speaking, this is not a separate method; it simply omits the first step of Uhlig's replacement and goes directly to (4). Only one step needs attention, as we will see in the next example:

X_t + a = (1-b)\frac{Y_t}{L_t Z_t}

Taking the natural logarithm of both sides,

\ln(X_t + a) = \ln(1-b) + \ln Y_t - \ln L_t - \ln Z_t \quad (13)

Using the steady-state condition,

\ln(X + a) = \ln(1-b) + \ln Y - \ln L - \ln Z \quad (14)

Subtracting (14) from (13),

\ln(X_t + a) - \ln(X + a) = \ln Y_t - \ln Y - \ln L_t + \ln L - \ln Z_t + \ln Z

These terms all become log-deviations,

\widetilde{X_t + a} = \tilde{y}_t - \tilde{l}_t - \tilde{z}_t \quad (15)

We need to find out what \tilde{x}_t is and then replace \widetilde{X_t + a}. Here comes the step that needs attention: we use (3),

\tilde{x}_t \approx \frac{X_t - X}{X} \quad (16)

Then \widetilde{X_t + a} becomes

\widetilde{X_t + a} \approx \frac{(X_t + a) - (X + a)}{X + a} = \frac{X_t - X}{X + a} \quad (17)

Since the numerators on the right-hand sides of (16) and (17) are equal, we can exploit this equality and set

\widetilde{X_t + a}\,(X + a) = \tilde{x}_t X, \qquad \widetilde{X_t + a} = \frac{\tilde{x}_t X}{X + a}

Substituting back into (15),

\frac{\tilde{x}_t X}{X + a} = \tilde{y}_t - \tilde{l}_t - \tilde{z}_t

This function can actually be linearised easily by the total differential method; you can try it yourself, it only takes two steps.
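A quick numerical check of the key step, equation (17), is sketched below in Python; the values of X, a, and the deviation are illustrative assumptions.

    import numpy as np

    # The log-deviation of (X_t + a) is approximately x~_t * X/(X + a).
    X, a = 2.0, 0.5                      # assumed steady state and constant
    x = 0.01                             # log-deviation of X_t
    Xt = X * np.exp(x)

    lhs = np.log((Xt + a) / (X + a))     # exact log-deviation of X_t + a
    rhs = x * X / (X + a)                # first-order approximation (17)
    print(lhs, rhs)                      # close for small x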

Following this formula, we simply take the derivative at the steady state and everything is done. Try an example,

k_{t+1} = (1-\delta)k_t + s k_t^{\alpha}

Use formula (20),

\tilde{k}_{t+1} = \left[s\alpha k^{\alpha-1} + (1-\delta)\right]\tilde{k}_t

It is now linearised, since the term in the square brackets consists only of parameters evaluated at the steady state.
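The sketch below checks this linearised law of motion against the original nonlinear one; the parameter values are illustrative assumptions, and the steady state is computed from \delta k = s k^{\alpha}.

    import numpy as np

    # Nonlinear law:   k_{t+1} = (1-delta)*k_t + s*k_t^alpha
    # Linearised law:  k~_{t+1} = [s*alpha*k^(alpha-1) + (1-delta)]*k~_t
    alpha, delta, s = 0.33, 0.1, 0.2       # assumed parameters
    k_ss = (s / delta)**(1 / (1 - alpha))  # steady state from delta*k = s*k^alpha
    coef = s*alpha*k_ss**(alpha - 1) + (1 - delta)

    k_dev  = 0.05                          # 5% log-deviation of k_t
    k_t    = k_ss * np.exp(k_dev)
    k_next = (1 - delta)*k_t + s*k_t**alpha

    print(np.log(k_next / k_ss), coef * k_dev)  # exact vs. linearised deviation of k_{t+1}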

6.2 Multivariable case

The Taylor polynomial has a vector version as well as a scalar version; in fact, most functions you encounter will be multivariable rather than the single-variable case we saw in the last section. Again, let us start with the general case^1,

X_{t+1} = f(X_t, Y_t) \quad (21)

where f is still any nonlinear function you can imagine. The bivariate version of the first-order Taylor polynomial around the steady state is

X_{t+1} \approx f(X, Y) + f_X(X, Y)(X_t - X) + f_Y(X, Y)(Y_t - Y) \quad (22)

As you can guess, the bivariate Taylor expansion is closely related to the total derivative/differential of a bivariate function. Again, imposing the steady-state condition of (21), X = f(X, Y), equation (22) becomes

X_{t+1} \approx X + f_X(X, Y)(X_t - X) + f_Y(X, Y)(Y_t - Y)

^1 We use bivariate examples here for the sake of simplicity.

Dividing by X,

\frac{X_{t+1}}{X} \approx 1 + f_X(X, Y)\frac{X_t - X}{X} + f_Y(X, Y)\frac{Y_t - Y}{X}
1 + \tilde{x}_{t+1} \approx 1 + f_X(X, Y)\tilde{x}_t + f_Y(X, Y)\frac{Y}{X}\frac{Y_t - Y}{Y}
\tilde{x}_{t+1} \approx f_X(X, Y)\tilde{x}_t + f_Y(X, Y)\frac{Y}{X}\tilde{y}_t \quad (23)

(23) is the formula we are seeking. Let us try an example again,

k_{t+1} = (1-\delta)k_t + s z_t k_t^{\alpha}

Then calculate the partial derivatives,

f_z(z, k) = s k^{\alpha}, \qquad f_k(z, k) = \alpha s z k^{\alpha-1} + (1-\delta)

Use formula (23),

\tilde{k}_{t+1} = \left[\alpha s z k^{\alpha-1} + (1-\delta)\right]\tilde{k}_t + s k^{\alpha}\frac{z}{k}\tilde{z}_t

Expanding the brackets,

\tilde{k}_{t+1} \approx \alpha s z k^{\alpha-1}\tilde{k}_t + (1-\delta)\tilde{k}_t + s k^{\alpha-1} z\,\tilde{z}_t
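The partial derivatives and the coefficients in (23) can also be recovered symbolically. A minimal sketch using sympy (an assumption of this note, not a tool used in the original text) for this example:

    import sympy as sp

    alpha, delta, s = sp.symbols('alpha delta s', positive=True)
    z, k = sp.symbols('z k', positive=True)        # steady-state values

    # f(z_t, k_t) = (1-delta)*k_t + s*z_t*k_t^alpha, evaluated at the steady state
    f = (1 - delta)*k + s*z*k**alpha
    f_k = sp.diff(f, k)                            # alpha*s*z*k**(alpha-1) + 1 - delta
    f_z = sp.diff(f, z)                            # s*k**alpha

    # Coefficients of k~_t and z~_t in formula (23)
    coef_k = sp.simplify(f_k)
    coef_z = sp.simplify(f_z * z / k)              # s*z*k**(alpha-1)
    print(coef_k, coef_z)

The printed coefficients match the expanded expression above.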

There are still many situations we have not covered in this note; you can find published papers, try to log-linearise their key equations yourself, and then compare your results with those provided by the authors.

The End.