Friday, 17 January 2014

macro models taken at face value

I've been thinking a bit more about complete markets. Funnily enough, it was the first economic concept I came into contact with in my math undergrad, although all I learnt about it back then was that it meant injectivity of a mapping from the probability space to the payoff space, which was good for throwing some measure theory at it.

Basically, in an economy with complete markets it is possible to costlessly write perfectly enforceable financial contracts on any current or future event and trade them on a competitive market. It's hard to explain in basic language, because it's pretty far from reality. If the world were a roulette game, then complete markets would mean the existence of a competitive exchange on which a set of 37 assets is traded before each spin of the wheel, each paying off a fixed sum in the event that the roulette ball falls into the corresponding pocket.

The concept is applied quite pervasively in macroeconomics, international economics and finance, and forms the starting point for almost every analysis (at least that's what I've been taught). Imposing it has some very strong consequences. One is that, assuming standard preferences, consumption growth equalises among all agents. So consumption comoves perfectly between any two agents inside a country, and consumption per capita comoves perfectly across countries. In other words, the ratio of consumption per capita never changes across countries and within countries.
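The derivation behind that prediction is short. Here is a sketch under the standard assumptions (identical CRRA preferences with risk aversion $\gamma$, common beliefs $\pi_s$, Arrow security prices $q_{t,s}$, and agent-specific multipliers $\lambda_i$):

```latex
% Agent i's first-order condition for the Arrow security paying in state s:
\beta^t \pi_s \, (c^i_{t,s})^{-\gamma} = \lambda_i \, q_{t,s}
% Dividing agent i's condition by agent j's, prices and probabilities cancel:
\left( \frac{c^i_{t,s}}{c^j_{t,s}} \right)^{-\gamma} = \frac{\lambda_i}{\lambda_j}
% The right-hand side is a constant, so the consumption ratio between any
% two agents is the same in every period and state: perfect risk sharing.
```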

Of course, such a model prediction is just nuts. You needn't even look at data to know it won't fit. Just for fun, I made a few graphs anyway showing consumption per capita across countries. If two countries participate in complete markets, then the prediction is that the ratio of their consumption per capita is constant. So here are the relevant ratios of some countries with respect to Germany (sorry for the home bias in choosing a reference point). The data are in PPPs straight out of the Penn World Tables.

Here are some Western economies:


And some Emerging Markets:


Of course, consumption per capita relative to Germany is not constant (neither within the Eurozone nor outside of it). A country like Singapore has caught up enormously, while the gap with France has narrowed (if you believe the PWT, of course).
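The check behind these graphs can be sketched in a few lines of code. All the numbers here are invented for illustration (the figures in the post come from the Penn World Tables), and the 5% tolerance is an arbitrary choice of mine:

```python
# Sketch of the complete-markets test: the ratio of consumption per capita
# between two countries should be constant over time. All series below are
# made up for illustration, not real PWT data.

def ratio_to_reference(series, reference):
    """Consumption per capita relative to a reference country, year by year."""
    return [c / r for c, r in zip(series, reference)]

def is_roughly_constant(ratios, tol=0.05):
    """The complete-markets prediction: the ratio never strays more than
    tol (in relative terms) from its initial value."""
    base = ratios[0]
    return all(abs(x - base) / base <= tol for x in ratios)

# Hypothetical consumption per capita over five years.
germany   = [100, 102, 104, 106, 108]
france    = [ 95,  97, 100, 103, 105]   # mild drift, stays within 5%
singapore = [ 60,  70,  82,  95, 108]   # rapid catch-up

print(is_roughly_constant(ratio_to_reference(france, germany)))     # True
print(is_roughly_constant(ratio_to_reference(singapore, germany)))  # False
```

A proper empirical test would of course regress consumption growth differentials on relative income or use a formal structural break test, but the made-up Singapore series shows the flavour of the rejection.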

Clearly, complete markets are not a good way to describe long-run consumption growth across countries. I think that's mostly due to the implicit assumption that agents know, observe and can contract on exogenous shocks, whereas in reality we don't even know what those shocks are (see also my earlier post).

Unfortunately, complete markets lead to bad predictions at business-cycle frequency, too (see Heathcote & Perri). And in closed-economy macro, they don't depict reality well either, since they essentially move a model towards a representative-consumer environment, which has been criticised countless times (Noah Smith's latest post on the Euler equation is a case in point).

And yet, complete markets are the assumption you start with in macro and international macro. Why is that? The easiest answer is that it simplifies things in a model. It essentially allows you to have a representative consumer, which is more convenient for anyone who i) doesn't want to spend much time on getting a model started, ii) cares only about the supply side, or iii) wants to draw easy policy conclusions without distributional issues. In any case, it lets you focus on the other things you want to analyse. That's a reasonable point of view: as economists, we produce models as stories to explain certain aspects of the economic sphere, and some models fit some purposes while others fit others. After all, we economists are not trying to have a unified theory that is consistent with all aspects of our data at once. Of course physics, the science we are said to envy, is aiming at exactly that, but that's different.

What I want to get at is that this scientific leap of faith is not particular to complete markets. As economists, we often build models to shed light on some aspect of the economy, complete, of course, with testing, calibration and estimation against selected aspects of economic data. But the building blocks of these models, if taken at face value, provide many more predictions about the data than those we analyse, and a lot of them are completely at odds with the data! Calvo pricing, the exogenous TFP process, Modigliani-Miller, the Euler equation, default rates with credit frictions, the representative agent, the CRS production function, Nash equilibrium, even expected utility maximisation, you name it.

Now here's something odd: Sometimes, people use counterfactual model predictions to strike down a paper, and sometimes they are just okay with them. I cannot discern a pattern in this! It makes me feel I'm completely missing something in my field.

Take the attacks on New Keynesian models. Many people say these models aren't valid because they imply that individual firms change prices very infrequently, which is not true in the data. On the other hand, I have never seen anybody criticise a model because it has an Euler equation, although that also implies counterfactual model predictions on consumption growth. Is there a rule that justifies which type of counterfactual model predictions are acceptable and which aren't? Or maybe it's just okay as long as the paper is "convincing"?
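For concreteness, here is the Euler equation at issue in its textbook CRRA form, and the counterfactual prediction it carries (a standard log-linearisation sketch, not anything specific to the papers mentioned above):

```latex
% Intertemporal first-order condition of the representative consumer:
(c_t)^{-\gamma} = \beta \, E_t\!\left[ (1 + r_{t+1}) \, (c_{t+1})^{-\gamma} \right]
% Log-linearised (with \rho = -\ln\beta), expected consumption growth should
% move with the real interest rate and with nothing else:
E_t \, \Delta \ln c_{t+1} \approx \tfrac{1}{\gamma} \left( E_t \, r_{t+1} - \rho \right)
% In the data, consumption growth also responds to predictable income
% changes ("excess sensitivity"), which this equation rules out.
```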

One candidate for a rule could be that counterfactual predictions are okay as long as they're "microfounded". I'm not sure what microfoundations are but I think it means that a model outcome is the result of agents' choices in some constrained optimisation problem in which there is no a priori restriction on choices, only a weighing of costs and benefits. Is this the right way to go? There has been a formidable exchange of views on this position recently (e.g. here, here and here); in any event, the complete markets predictions would pass this test.

Another rule could be that models delivering counterfactual predictions are acceptable if the predictions would actually be accurate under some ideal conditions. That is a very powerful argument, and part of the bread and butter of the natural sciences. One standard comparison is to the concept of gravity in elementary physics. The theory states that, near the Earth's surface, any object is pulled towards the ground with the same constant acceleration. In particular, a feather and a cannonball thrown horizontally from the same height are predicted to reach the ground at the same time. That's of course completely at odds with what we usually observe, but it is only so because of air friction (and, as I just learned from Wikipedia, buoyancy). In a vacuum, the "model prediction" fits the data perfectly. So it makes sense to start from the constant-acceleration model and then factor in the other effects afterwards, which involves more complex calculations.

In the same way, we can argue that consumption behaviour under complete markets, or the representative-agent Euler equation etc., would be accurate depictions of reality in an idealised economic setting. Then, if we want to make predictions for the world we live in, we start from there and incorporate all kinds of frictions. Such is indeed the agenda of a good part of macroeconomics, where entire research programmes study how to extend the standard neoclassical models by including heterogeneous agents, occasionally binding borrowing constraints, private information, endogenously incomplete markets, rational inattention and much more.

But it's not so clear whether this is a good strategy in economics, even if it is successful in physics. There, we can actually create the idealised conditions and do experiments - we can generate a vacuum and test constant gravitational acceleration - or in their absence we can make repeated experiments to measure the deviation from the prediction and find regularities in that. This is difficult to do in economics, and particularly macroeconomics, where we can rarely do experiments. As a consequence, the idealised conditions and the predicted behaviour are often not the product of experimental research but of some axiomatic approach, like expected utility maximisation or Nash bargaining. To my taste, the justification that the data don't fit because of some missing "friction" becomes a bit weak.

But there is still a very pragmatic merit to thinking in these terms: they provide a common, simple framework that we can use to start thinking about things. We as economists might not understand every paper with its particular setup, but we all understand complete markets, and so we can exchange ideas starting from there. And when we want to analyse a new problem, we have a baseline from which to start exploring. In this sense, we function a bit like an expedition into some unknown land: we build a basecamp where we are comfortable and safe, and then explore the territory from there. When we get stuck and the terrain becomes too difficult, we can always return to the basecamp to devise a new strategy and try again the next day. That way, when we want to think about inflation inertia, we can start with Rotemberg or Calvo pricing and develop our ideas from there, even if we don't think it's a good description of the world - at least it's a starting point.

I can see nothing wrong with this approach. The only caveat is that we have to be ready to move our basecamp as we explore the territory further. There is no guarantee that the initial spot we picked will remain ideal forever (it might only be a local optimum, so to speak). I guess that is what behavioural economists have been trying to convince the rest of the profession of for a while now. They have yet to show that their "basecamp" is the more useful one, but usefulness should really be the main criterion. To use a really heavy-handed analogy: you can perfectly describe the motions of the planets in a geocentric system by putting in lots and lots of "frictions". But what people eventually discovered was that it is much easier to start from a heliocentric system, which needs far fewer "frictions" to get rid of counterfactual predictions. Of course, moving that basecamp wasn't particularly easy either.

In any case, criticising fundamental building blocks is easy, but coming up with alternatives is hard. The alternatives to the neoclassical DSGE "basecamp" that are out there are obscure and/or unwieldy (naturally, since the status quo has been worked on more). Certainly, I don't really have the competence to plant a new basecamp. Nor would it be wise for me to start working on that: first, I have to finish my PhD...

5 comments:

  1. In brief, as I had a very long post that somehow got lost...

    The current set of incentives for researchers means that, yes, the current approach is certainly optimal for any given economist. But I think that, for the profession as a whole, we should be bolder and look out for more base camps*, especially in light of the critiques and issues you raise above. More diversity could potentially lead to important breakthroughs!

    But, having seen what it's like in other fields (e.g., psychology), it would be very important to maintain a common ground for the profession as a whole (i.e., the DSGE camp, at least at first), as we don't want to risk a complete breakdown of dialogue and even understanding amongst different economists, which is always a danger.

    And it wouldn't be easy to change the institutional rules in academia (where it'd make the most sense to have people trying out new base camps) so as to reward economists for trying out new things. I mean, academia is fairly tolerant of failure and experiments, but it doesn't quite reward people for the level of boldness it'd require...

    *Absolutely love this analogy of base camps, by the way, it's elegant and a very clever way of describing the situation!

    ReplyDelete
  2. Thank you for sharing your thoughts and knowledge about macroeconomics, and especially the bridge to physics and science. As I come from the physical world, working on the modelling of thermally driven flows, it is nice to see that economics is not only dealing with money :) even if in the end that's the goal of everybody.
    Your thoughts nicely show that there are some economic analysts dealing with deeper questions about predicting the markets. I believe, as you said, that compared to the rules of physics it's not easy to find a good baseline/basecamp on which you can rely. As I don't know anything about the maths and models used in economics, I wonder: what can the analysts rely on the most? Are there models which produce variables you can trust in the way you trust the gravitational force? Or at least compare a predictive result based on historical data against an acceptable error? Do you often rely on systematic and statistical error calculations to feel safe in your base camp ;) ?

    I hope, with my limited knowledge of economics, to share some questions with added value :)

    Sending you greetings from South America, from a Starbucks with a good coffee :)
    Daniel, Danu

    PS Great blog, and happy to see that there are some economists thinking like a physicist or science researcher :)

    ReplyDelete
    Replies
    1. Thanks Danu for the questions! I'll answer the best I can.

      The models which are most "trusted" in macroeconomics at the moment, especially for analysts, are probably VAR models, which are almost entirely statistical models that remain silent on economic structure.

      We do compare predictive results with data all the time, but most often only on some select statistics, and in-sample. We generally shy away from out-of-sample tests, since there the VARs usually do better than our fancy models. And I feel we definitely do very few systematic error calculations or sensitivity analyses. You are right: to do real science we should have more of that.

      Have fun in South America!

      Delete
  3. I like this post! And I'll throw in my two cents:

    -I'm really not a macro person but I read your post from the perspective of finance.

    -Yes, complete markets is arguably a very strong assumption, but at least in finance a lot of progress has been made because of it; e.g. we now have a pretty good understanding of how derivatives pricing works under the complete-markets assumption.

    -Moving on to an "incomplete markets" setting is still an open challenge, but this is somewhat like what they say in the physical sciences: "The study of nonlinear dynamics is like the study of non-elephant biology".

    -Perhaps one thing finance (and I think to a certain extent macro-finance and also macro) should really get away from is this whole calibration exercise. Coming from your and my econometric background (back in the good ol' days...), I still can't understand what "calibration" really means, other than being a fancy term for "guess and check". It seems to me that the goal these days is for these highly complicated dynamic models to resort to some sort of calibration exercise to pin down the parameters, and then to show that, under these calibrated parameters, the model generates some moments that match the data. That's just my random rant, but I'm thinking that this type of restriction (i.e. binding the researcher to just matching moments) is hindering a lot of research progress. Namely, it highly discourages researchers from considering models that cannot be easily simulated or calibrated, but that may actually generate additional economic insights.

    In all, great post!

    Raymond L
    -Your local ibanking factory classmate living across the pond now

    ReplyDelete
    Replies
    1. Hey Ray, thanks for commenting here. I had completely overlooked the value of complete markets in finance. I think the notion is weaker there, which makes it more usable - all you need is some underlying asset prices which follow an exogenously given stochastic process, right? That gives much more flexibility in matching real-world data.

      I share your criticism of calibration exercises. In the end, calibration really means looking selectively at some handpicked aspects of the data while completely ignoring the rest. It's only in such exercises that the many bad properties of macro models can be hidden successfully.

      Delete