Dynamic Stochastic General Equilibrium models made (relatively) easy with R

[This article was first published on Peter's stats stuff - R, and kindly contributed to R-bloggers.]

General Equilibrium economic models

To expand my economics toolkit I’ve been trying to get my head around Computable General Equilibrium (CGE) and Dynamic Stochastic General Equilibrium (DSGE) models. Both classes of model are used in theoretical and policy settings to understand the impact of changes to an economic system on its equilibrium state.

I’m not a specialist in this area so the below should be taken as the best effort by a keen amateur. Corrections or suggestions welcomed!

CGE models have the simpler approach of the two and the longer history, and have been very widely applied to practical policy questions such as the impact of trade deals. Many economic consultancies have their own in-house CGE models which they wheel out and adapt to a range of their clients’ questions. They work by comparing static equilibrium states: the economy is assumed to meet the requirements for equilibrium (such as markets clearing effectively instantly), and the model is “calibrated” to the real economy by choosing a set of values for the various parameters that match the state of the economy at a particular point in time. The model is then adjusted – for example, to allow for changes in prices from a free trade agreement – and the new equilibrium compared to the old.
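As a toy illustration of that comparative-statics logic (my own invention, vastly simpler than a real CGE model, which covers many interlinked markets and agents), here is a one-market version in base R: calibrate, apply a policy change, and compare the two equilibria. All functional forms and numbers are made up:

demand <- function(p, tariff = 0) 100 / (p * (1 + tariff))  # demand falls as price rises
supply <- function(p) 20 * p                                # supply rises with price

# equilibrium price is where excess demand is zero:
eq_price <- function(tariff = 0) {
  uniroot(function(p) demand(p, tariff) - supply(p), interval = c(0.01, 100))$root
}

p0 <- eq_price(tariff = 0)    # baseline equilibrium
p1 <- eq_price(tariff = 0.1)  # new equilibrium with a 10% tariff
c(baseline = p0, with_tariff = p1, change = p1 - p0)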

DSGE models are also based on an assumption of a steady state equilibrium of the economy, but they allow for real amounts of time being taken to move towards that steady state, and for a random (i.e. stochastic) element in the path taken towards it. This greatly improves their coherence in terms of philosophy of science – compared to a CGE, which simply calibrates to a single point in time and has no degrees of freedom left over to quantify uncertainty or the model’s fit to reality, the parameters of a DSGE can be estimated from a history of observations and can have probability distributions rather than just point values. Parameters are typically estimated with Bayesian methods.
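The flavour of that “dynamic stochastic” movement can be seen in a toy mean-reverting process (again my own illustration, not an actual DSGE): each period a variable is pulled part of the way back towards its steady state while being buffeted by random shocks:

set.seed(123)
periods <- 100
rho     <- 0.8                                 # persistence: how slowly the steady state is approached
shocks  <- rnorm(periods, mean = 0, sd = 0.01) # the stochastic element
dev     <- numeric(periods)
dev[1]  <- 0.05                                # start 5% above the steady state

for (t in 2:periods) {
  dev[t] <- rho * dev[t - 1] + shocks[t]       # pulled back, but knocked about
}

plot(dev, type = "l", xlab = "Period", ylab = "Deviation from steady state")
abline(h = 0, lty = 2)                         # the steady state itself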

Over the past 15 years or so, as the maths and computing have improved, the DSGE approach has become dominant in macroeconomic modelling, although not yet (to my observation) in everyday applied economics of the sort done by consultants for government agencies contemplating policy choices. DSGE models perform OK (to the degree that anything does) at economic forecasting, and give a nice coherent framework for considering policy options. For example, the Reserve Bank of New Zealand (like many if not all monetary authorities around the world – I haven’t counted) developed the cutely-named KITT (Kiwi Inflation Targeting Technology) DSGE model and adopted it in 2009 as its main forecasting and scenario tool; in 2014 it apparently replaced this with NZSIM, a more parsimonious model – slightly condescendingly described as “deliberately kept small so is easily understood and applied by a range of users.”

Critiques

The General Equilibrium approach has been roundly criticised from the margins of the economics field (pun intended) ever since it leapt to dominance in the second half of the twentieth century, both from a philosophy of science perspective (it doesn’t really make Popperian falsifiable predictions) and for the obvious and acknowledged absurdity/simplification of the assumptions needed to make the system tractable.

Noah Smith provides a good sceptical discussion of the worth of DSGE in this post and elsewhere.

I find the arguments for a post-Walrasian economics – beyond the DSGE – fairly compelling. My key takeaway from that book (not necessarily its main intent) is that new computing power means we are increasingly in a position where we no longer have to accept the simplifying assumptions needed to make the General Equilibrium approach tractable. Agent-based models are now possible that simulate much more complex interactions between agents who lack the perfect knowledge required in the GE approach and behave more realistically in their interactions and in other ways. Such models lack analytical solutions (i.e. ones that could in principle be worked out with pen and paper), but Monte Carlo methods can lead to insight into how the real economy behaves.
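To make the contrast concrete, here is a toy agent-based sketch (everything in it invented for illustration): buyers and sellers with heterogeneous reservation prices meet at random and trade whenever it is mutually beneficial, so prices emerge from decentralised interaction rather than from an assumed market-clearing condition:

set.seed(42)
buyers  <- runif(100, min = 5, max = 15)  # the most each buyer will pay
sellers <- runif(100, min = 5, max = 15)  # the least each seller will accept

trade_prices <- numeric(0)
for (meeting in 1:1000) {
  b <- sample(seq_along(buyers), 1)
  s <- sample(seq_along(sellers), 1)
  if (buyers[b] > sellers[s]) {
    # trade at the midpoint of the two reservation prices:
    trade_prices <- c(trade_prices, (buyers[b] + sellers[s]) / 2)
  }
}

hist(trade_prices, main = "Emergent trade prices", xlab = "Price")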

Defining a GE model with gEcon

While I do think that these general equilibrium approaches will be superseded in the next 20 years (bit of a call, I know), the alternatives are currently immature, so I still need to get my head around GE approaches. Luckily I came across the gEcon project – an exciting off-shoot of work done for the Polish government, now open-sourced and maintained by the original authors. gEcon provides an easy language for defining a CGE or DSGE model, taking much of the hand-done mathematical pain out of the whole thing:

“Owing to the development of an algorithm for automatic derivation of first order conditions and implementation of a comprehensive symbolic library, gEcon allows users to describe their models in terms of optimisation problems of agents. To authors’ best knowledge there is no other publicly available framework for writing and solving DSGE & CGE models in this natural way. Writing models in terms of optimisation problems instead of the FOCs is far more natural to an economist, takes off the burden of tedious differentiation, and reduces the risk of making a mistake.”
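To see what “writing models in terms of optimisation problems instead of the FOCs” means in a simple case (a standard textbook example, not one taken from the gEcon documentation): a household choosing consumption C_t to maximise expected discounted utility E[ sum over t of beta^t * log(C_t) ] subject to a budget constraint has as its first order condition the familiar Euler equation 1/C_t = beta * E[ (1 + r_{t+1}) / C_{t+1} ]. In gEcon you state the objective and the constraint, and conditions like this are derived symbolically for you.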

The definition of agents’ optimisation problems in a sophisticated DSGE in the gEcon environment looks like this example extract from an implementation of the classic Smets-Wouters 2003 DSGE for the Euro area. The total model description is around 500 lines of code. This particular block describes aspects of the price setting mechanism. It’s not easy, but it is a lot easier than solving first order conditions by hand:

block PRICE_SETTING_PROBLEM # example gEcon language extract
{
    identities
    {
        g_1[] = (1 + lambda_p) * g_2[] + eta_p[];
        g_1[] = lambda[] * pi_star[] * Y[] + beta * xi_p *
                E[][(pi[] ^ gamma_p / pi[1]) ^ (-1 / lambda_p) *
                    (pi_star[] / pi_star[1]) * g_1[1]];
        g_2[] = lambda[] * mc[] * Y[] + beta * xi_p *
                E[][(pi[] ^ gamma_p / pi[1]) ^ (-((1 + lambda_p) / lambda_p)) * g_2[1]];
    };

    shocks
    {
        eta_p[];                # Price mark-up shock
    };

    calibration
    {
        xi_p = 0.908;           # Probability of not receiving the ``price-change signal''
        gamma_p = 0.469;        # Indexation parameter for non-optimising firms
    };
};

Similar code controls parts of the system such as the approach taken by the monetary authority (how much weight do they give to controlling inflation?), government expenditure, friction in the labour market, etc.

Incidentally, when the gEcon authors implemented the Smets-Wouters ‘03 model they identified a few small mistakes in the original implementation, which for me adds to the credibility of their argument that their (relatively) natural agent-based optimisation language is less prone to human error.

gEcon is implemented in R; the authors’ stated reasons for this (as opposed to a more traditional Matlab / Octave / GAMS solution) are the greater flexibility (“not everything needs to be a matrix”) and the easy connections to the full range of other econometric and data management methods.

Solving a gEcon DSGE model from R

Once the model has been defined, R functions can perform tasks such as:

  • calculate its steady state
  • estimate the impact of randomness
  • simulate paths through time (the “dynamic stochastic” bit)
  • estimate the impact of changes over time through impulse-response functions

For example, here is a simulation of one path through time – arising just from randomness in the Smets-Wouters ’03 model – of the deviations from the steady state of consumption, investment, capital, wages and income:

[Plot: one simulated path of deviations from the steady state for consumption, investment, capital, wages and income]

Getting to this point used the R code below. Note that I’m not reproducing the full definition of the model, though I do include code that downloads it for you:

# ###################################################################
# (c) Chancellery of the Prime Minister 2012-2015                   #
# Licence terms for gEcon can be found in the file:                 #
# http://gecon.r-forge.r-project.org/files/gEcon_licence.txt        #
#                                                                   #
# gEcon authors: Grzegorz Klima, Karol Podemski,                    #
#          Kaja Retkiewicz-Wijtiwiak, Anna Sowińska                 #
# ###################################################################

library(gEcon)
library(dplyr)
library(tidyr)

# download model definition and import into R:
download.file("http://gecon.r-forge.r-project.org/models/SW_03/SW_03.gcn",
              destfile = "SW_03.gcn")
sw_gecon1 <- make_model('SW_03.gcn')

# set some initial variable values:
initv <- list(z = 1, z_f = 1, Q = 1, Q_f = 1, pi = 1, pi_obj = 1,
              epsilon_b = 1, epsilon_L = 1, epsilon_I = 1, epsilon_a = 1, epsilon_G = 1,
              r_k = 0.01, r_k_f = 0.01)

sw_gecon1 <- initval_var(sw_gecon1, init_var = initv)

# set some initial parameter values:
initf <- list(
   beta = 0.99,            # Discount factor
   tau = 0.025,            # Capital depreciation rate
   varphi = 6.771,         # Parameter of investment adjustment cost function
   psi = 0.169,            # Capacity utilisation cost parameter
   sigma_c = 1.353,        # Coefficient of relative risk aversion
   h = 0.573,              # Habit formation intensity
   sigma_l = 2.4,          # Reciprocal of labour elasticity w.r.t. wage
   omega = 1               # Labour disutility parameter
)
sw_gecon1 <- set_free_par(sw_gecon1, initf)

 
# find the steady state for that set of starting values:
sw_gecon2 <- steady_state(sw_gecon1)
get_ss_values(sw_gecon2)

# solve the model in linearised form for 1st order perturbations/randomness:
sw_gecon2 <- solve_pert(sw_gecon2, loglin = TRUE)

# simulate one path:
one_path <- random_path(sw_gecon2, var_list = list("Y", "K", "I", "C", "W"))
plot_simulation(one_path) # shows deviation from the steady_state

Impulse response functions

In a technique familiar to users of other economic modelling approaches like Vector Autoregressions (VARs), it’s possible to “shock” the DSGE system and watch the impact play out over time as the complex inter-relationships of agents within the system carry it from the shock towards a new equilibrium. Here’s an example of the expected impact of a shock to the inflation objective applied to the Smets-Wouters ’03 model:

[Plot: impulse responses of consumption, income, capital, investment and labour to a shock to the inflation objective]

One of the characteristics of DSGE models is the importance they give to agents’ expectations. As their philosophy has increasingly dominated in recent decades, discussion of monetary policy has become less focused on individual actions of the monetary authority than on the overall regime and set of targets. Any more-than-casual observer of public economic debate will have noticed the importance given to discussion of the overall inflation-targeting regime. Hence the plot above shows the modelled impact of a change in the inflation objective – with no other direct exogenous shock in the model at all.

Here’s the R code that produced that plot:

# set covariance matrix of the parameters to be used in shock simulation:
a <- c(eta_b = 0.336 ^ 2, eta_L = 3.52 ^ 2, eta_I = 0.085 ^ 2, eta_a = 0.598 ^ 2,
       eta_w = 0.6853261 ^ 2, eta_p = 0.7896512 ^ 2,
       eta_G = 0.325 ^ 2, eta_R = 0.081 ^ 2, eta_pi = 0.017 ^ 2)
sw_gecon3  <- set_shock_cov_mat(sw_gecon2, shock_matrix = diag(a), shock_order = names(a))

# compute the moments with that covariance matrix:
sw_gecon3 <- compute_moments(sw_gecon3)

# compute impulse response functions for a shock to the inflation objective:
sw_gecon_irf <- compute_irf(sw_gecon3, var_list = c('C', 'Y', 'K', 'I', 'L'),
                            chol = TRUE, shock_list = list('eta_pi'),
                            path_length = 40)
plot_simulation(sw_gecon_irf, to_tex = FALSE)

Putting a DSGE model into Shiny

With so many complex and mysteriously named parameters – and I’ve shown very few of them – an interactive web application seems an obvious way to explore a model of this sort. I’ve set up a prototype which explores the impulse response functions of the model, responding to shocks to parameters like labour supply, investment, productivity and government spending (a sketch of how such an app can be wired up follows the links below):

  • The full screen version of the web app. Once all the buttons appear on the screen, give it 90 seconds or so to solve its steady state. Then try choosing different variables to shock from the drop-down list.
  • The source code
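For anyone wanting to build something similar, here is a minimal sketch of the wiring (my own simplification, not the actual app’s source code). It assumes a solved model object like sw_gecon3 from earlier is available in the app’s environment, and that gEcon’s plot_simulation draws to the active graphics device so renderPlot can capture it:

library(shiny)
library(gEcon)

# shocks the user can choose from (names as in the covariance matrix above):
shock_choices <- c("eta_b", "eta_L", "eta_I", "eta_a", "eta_G", "eta_R", "eta_pi")

ui <- fluidPage(
  selectInput("shock", "Variable to shock:", choices = shock_choices),
  plotOutput("irf_plot")
)

server <- function(input, output) {
  output$irf_plot <- renderPlot({
    # recompute impulse responses for whichever shock the user picks:
    irf <- compute_irf(sw_gecon3, var_list = c("C", "Y", "K", "I", "L"),
                       shock_list = list(input$shock), path_length = 40)
    plot_simulation(irf, to_tex = FALSE)
  })
}

shinyApp(ui, server)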

Conclusion

I’m a way off from being able to define my own DSGE for the New Zealand economy and those connected to it. In particular, the important question of calibration/estimation of parameters based on actual observable data is a mystery to me at this stage. But I can see how the basic idea works, and the gEcon package looks very promising for its relatively simple agent-optimisation approach to specifying the model.
