In previous posts I have given two arguments for looking at aggregate macroeconomic models without explicitly specifying their microfoundations. (I subsequently got distracted into defending microfoundations against attacks that I thought went too far – as I said here, I do not think seeing this as a two-sided debate is helpful.) In this post I want to examine a much more radical, and yet old-fashioned, idea: that aggregate models could use relationships which are justified empirically rather than through microfoundations. This argument will mirror similar points made in an excellent post by Richard Serlin in the context of finance. Richard also reflected on my earlier posts here. For a very good summary and commentary on recent posts on this issue, see the Bruegel blog.
Before doing this, let me recap on the two previous arguments. The first was that an aggregate model might have a number of microfoundations, and so all that was required was a reference to at least one of those. Thanks to comments, I now know that a similar point was made by Ekkehart Schlicht in Isolation and Aggregation in Economics (1985), Berlin, Heidelberg: Springer Verlag. (I said at the time that this seemed to me a fairly weak claim, but Noah Smith was not impressed, I think because he felt you should be able to figure out which microfoundation represents reality. Unfortunately I think reality is often too complex to be well represented by just one microfoundation – think of the many good reasons for price rigidity, for example. In these circumstances robustness is important.)
The second is more controversial. Because microfoundations take time to develop, an aggregate relationship may not as yet have a clear microfoundation, but it might in the future. If there is strong empirical evidence for the relationship now, academic research should investigate its implications. So, for example, there is some evidence for ‘inflation inertia’: the presence of lagged as well as expected inflation in a Phillips curve. The theoretical reasons (microfoundations) for this are not that clear, but it is both important and interesting to investigate what the macroeconomic consequences of inflation inertia might be.
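To be concrete, the kind of relationship I have in mind is a hybrid Phillips curve along the following lines (the notation and parameters are mine, purely for illustration): current inflation depends on lagged inflation as well as expected future inflation and some measure of excess demand.

% Illustrative hybrid Phillips curve with inflation inertia:
% \pi_t is inflation, E_t\pi_{t+1} expected inflation, y_t an output gap measure.
\pi_t = \gamma_b \, \pi_{t-1} + \gamma_f \, E_t \pi_{t+1} + \kappa \, y_t + u_t ,
\qquad \gamma_b + \gamma_f \le 1 .

A purely forward-looking New Keynesian Phillips curve sets \gamma_b = 0; the empirical evidence for inertia is evidence that \gamma_b is positive.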
This second argument could justify a very limited departure from microfoundations. A macro model might be entirely microfounded except for this one ‘ad hoc’ element. I can think of a few papers in good journals that take this approach. I have also heard macroeconomists object to papers of this kind: to quote one, ‘microfoundations must be respected’. It was reflecting on this that led me to use the term ‘microfoundations purist’.
Suppose we reject the microfoundations purist position, and agree that it is valid to explore ad hoc relationships within the context of an otherwise microfounded model. By valid, I mean that these papers should not automatically be disqualified from appearing in the top journals. If we take this position, then there seems to be no reason in principle why departures from microfoundations of this type should be so limited. Why not justify a large number of aggregate relationships using empirical evidence rather than microfoundations?
This used to be done back in my youth. An aggregate model would be postulated relationship by relationship, and each equation would be justified by reference to both empirical and theoretical evidence in the literature. Let us call this an empirically based aggregate model. You do not find macroeconomic papers like this in the better journals nowadays. Even if papers like this were submitted, I suspect they would be rejected. Why has this style of macro analysis died out?
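To give a stylised example of what such a model might contain (this equation is mine, not taken from any particular model), an aggregate consumption function might be specified as

% Illustrative consumption equation for an empirically based aggregate model:
% C_t is consumption, Y_t disposable income, W_t household wealth.
C_t = \alpha + \beta_1 Y_t + \beta_2 W_t + \beta_3 C_{t-1} + \varepsilon_t ,

where the income and wealth terms could be motivated by permanent income or life-cycle theory, the lagged consumption term by habits or adjustment costs, and the coefficients themselves estimated from aggregate time series. A complete model would consist of a set of equations of this kind, each justified in the same eclectic way.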
I want to suggest two reasons, without implying that either is a sufficient justification. The first is that such models cannot claim to be internally consistent. Even if each aggregate relationship can be found in some theoretical paper in the literature, we have no reason to believe that these theoretical justifications are consistent with each other. The only way of ensuring consistency is to do the theory within the paper – as a microfounded model does. A second reason this style of modelling has disappeared is a loss of faith in time series econometrics. Sims (1980) argued that standard identification restrictions were ‘incredible’, and introduced us to the VAR. (For an earlier attempt of mine to apply a similar argument to the demise of what used to be called Structural Econometric Models, see here.)
In some ways I think this second attack was more damaging, because it undercut the obvious methodological defence of empirically based aggregate models. It is tempting to link microfounded models and empirically based aggregate models with two methodological approaches: a deductivist approach that Hausman ascribes to microeconomics, and a more inductive approach that Marc Blaug has advocated. Those familiar with these terms can skip the next two paragraphs.
Microeconomics is built up in a deductive manner from a small number of basic axioms of human behaviour. How these axioms are validated is controversial, as are the implications when they are rejected; many economists act as if they are self-evident. We build up theory by adding certain primitives to these axioms (e.g. in trade, that there exist transport costs) and exploring their consequences. This body of theory will explain many features of the world, but not all. Those it does not explain are defined as puzzles. Puzzles are challenges for future theoretical work, but they are rarely enough to reject the existing body of theory. Under this methodology, the internal consistency of the model is all-important.
An inductivist methodology is generally associated with Karl Popper. Here incompatibility with empirical evidence is fatal for a theory. Evidence can never prove a theory to be true (the ‘problem of induction’), but it can disprove it: seeing one black swan disproves the theory that all swans are white, while seeing many white swans does nothing to prove it. This methodology was important in influencing the LSE econometric school, associated particularly with David Hendry. (Adrian Pagan has a nice comparative account.) Here evidence, which we can call external consistency, is all-important.
I think the deductivist methodology fits microfounded models. Internal consistency is the solid rock on which microfounded macromodels stand. That does not of course make the approach immune from criticism, but its practitioners know where they stand: there are clear rules by which their activities can be judged. To use a term due I think to Lakatos, the microfoundations research programme has a well-defined positive heuristic. Microfoundations researchers know what they are doing, and it does bring positive results.
The trouble with applying an inductivist methodology to empirically based aggregate macromodels is that the rock of external consistency looks more like sand. Evidence in macroeconomics is hardly ever of the black swan type, where one observation/regression is enough to disprove a theory. Philosophers of science have queried the validity of the Popperian ideal even in the context of the physical sciences, and these difficulties become much more acute in something as messy as macro.
So I end with a whole set of questions. Is it possible to construct a clear methodology for empirically based aggregate models in macro? If not, does this matter? If there is no correct methodology (we cannot have both complete internal and external consistency at the same time), should good models in fact be eclectic from a methodological point of view? Does the methodological clarity of microfounded macro help explain its total dominance in academia today, or are there other explanations? If this dominance is not healthy, how does it change?