

Tuesday 11 October 2016

Ricardian Equivalence, benchmark models, and academics’ response to the financial crisis

Mainly for economists

In his further thoughts on DSGE models (or perhaps his response to those who took up his first thoughts), Olivier Blanchard says the following:
“For conditional forecasting, i.e. to look for example at the effects of changes in policy, more structural models are needed, but they must fit the data closely and do not need to be religious about micro foundations.”

He suggests that there is wide agreement about the above. I certainly agree, but I’m not sure most academic macroeconomists do. I think they might say that policy analysis done by academics should involve microfounded models. Microfounded models are, by definition, religious about microfoundations and do not fit the data closely. Academics are taught in grad school that all other models are flawed because of the Lucas critique, an argument which assumes that your microfounded model is correctly specified.

It is not only academics who think policy has to be done using microfounded models. The core model used by the Bank of England is a microfounded DSGE model. So even in this policy making institution, their core model does not conform to Blanchard’s prescription. (Yes, I know they have lots of other models, but still. The Fed is closer to Blanchard than the Bank.)

Let me be more specific. The core macromodel that many academics would write down involves two key behavioural relationships: a Phillips curve and an IS curve. The IS curve is purely forward looking: consumption depends on expected future consumption. It is derived from an infinitely lived representative consumer, which means that Ricardian Equivalence holds in this benchmark model. [1]
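To fix ideas, a minimal log-linearised sketch of that benchmark (the notation is mine, not anything Blanchard or any particular textbook insists on) would be

\[
c_t = E_t c_{t+1} - \tfrac{1}{\sigma}\left(i_t - E_t \pi_{t+1}\right), \qquad
\pi_t = \beta E_t \pi_{t+1} + \kappa y_t ,
\]

where $c$ is (log) consumption, $i$ the nominal interest rate, $\pi$ inflation and $y$ the output gap. Notice that taxes and government debt appear nowhere in the first equation: that absence is Ricardian Equivalence built in by construction.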

Ricardian Equivalence means that a bond financed tax cut (which will be followed by tax increases) has no impact on consumption or output. One stylised empirical fact that has been confirmed by study after study is that consumers do spend quite a large proportion of any tax cut. That they do so is not some deep mystery: many consumers are credit constrained, something the benchmark model’s intertemporal consumer never is by assumption. In that particular sense academics’ core model does not fit Blanchard’s prescription that it should “fit the data closely”.
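To see how stark the difference is, here is a toy two-period sketch of a Ricardian consumer versus a credit-constrained one responding to a bond financed tax cut. Everything in it (the numbers, the income profile, the function name) is invented purely for illustration:

# Toy two-period example: a bond financed tax cut under Ricardian and
# credit-constrained behaviour. All numbers are made up for illustration.

r = 0.02                     # real interest rate
y1, y2 = 800.0, 1200.0       # disposable income today and tomorrow (rising profile)
tax_cut = 100.0              # tax cut today, clawed back with interest tomorrow

def smoothed_c1(inc1, inc2):
    # Unconstrained consumer with log utility and discount factor 1/(1+r):
    # consumption is equalised across periods subject to the intertemporal
    # budget constraint c1 + c2/(1+r) = inc1 + inc2/(1+r).
    wealth = inc1 + inc2 / (1 + r)
    return wealth / (1 + 1 / (1 + r))

# Ricardian consumer: the future tax rise exactly offsets today's cut in
# present value terms, so lifetime wealth and hence consumption are unchanged.
delta_ricardian = smoothed_c1(y1 + tax_cut, y2 - tax_cut * (1 + r)) - smoothed_c1(y1, y2)

# Credit-constrained consumer: would like to borrow against the higher future
# income but cannot, so consumption simply tracks current disposable income
# and the marginal propensity to consume out of the tax cut is one.
delta_constrained = (y1 + tax_cut) - y1

print(f"Ricardian consumer's spending response:   {delta_ricardian:+.2f}")    # roughly +0.00
print(f"Constrained consumer's spending response: {delta_constrained:+.2f}")  # +100.00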

Does this core model influence the way some academics think about policy? I have written about how, before the financial crisis, mainstream macroeconomics neglected the impact that shifting credit conditions had on consumption, and speculated that this neglect owed something to the insistence on microfoundations. That links the methodology macroeconomists use, or more accurately their belief that other methodologies are unworthy, to policy failures (or at least inadequacy) associated with that crisis and its aftermath.

I wonder if the benchmark model also contributed to a resistance among many (not a majority, but a significant minority) to using fiscal stimulus when interest rates hit their lower bound. In the benchmark model increases in public spending still raise output, but some economists do worry about wasteful expenditures. For these economists tax cuts, particularly if aimed at those who are non-Ricardian, should be an attractive alternative means of stimulus, but if your benchmark model says they will have no effect, I wonder whether this (consciously or unconsciously) biases you against such measures.
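The asymmetry is easy to see in the benchmark model itself (again just a sketch in my notation): with output divided between private consumption and government spending,

\[ y_t = c_t + g_t , \]

an increase in $g_t$ raises output one-for-one on impact even if $c_t$ does not move, whereas lump-sum taxes appear in no equation at all once Ricardian Equivalence is imposed, so in that model the tax-cut multiplier is exactly zero.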

In my view, the benchmark models that academic macroeconomists carry around in their heads should be exactly the kind Blanchard describes: aggregate equations which are consistent with the data, and which may or may not be consistent with current microfoundations. They are the ‘useful models’ that Blanchard talked about in his graduate textbook with Stan Fischer, although then they were confined to chapter 10! These core models should be under constant challenge from partial equilibrium analysis, estimation in all its forms, and analysis using microfoundations. But when push comes to shove, policy analysis should be done with models that are the best we have at meeting all those challenges, and not with models simply because they have consistent microfoundations.


[1] Recognising this point, some might add some ‘rule of thumb’ consumers into the model. This is fine, as long as you do not continue to think the model is microfounded. If these rule of thumb consumers spend all their income because of credit constraints, what happens when these constraints are expected to last for more than the next period? Does the model correctly predict what would happen to consumption if the proportion of rule of thumb consumers changes? It does not.  
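For what it is worth, the usual way of writing this hybrid (my notation; the point does not depend on the details) is

\[ C_t = \lambda\, Y^{d}_t + (1-\lambda)\, C^{u}_t , \]

where a fraction $\lambda$ of consumers simply spend their disposable income $Y^{d}_t$ and the remainder, $C^{u}_t$, follow the intertemporal Euler equation. The difficulty noted above is that $\lambda$ is treated as a fixed structural parameter, when in reality the share of constrained consumers moves with credit conditions and with how long the constraints are expected to bind.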

17 comments:

  1. Given that Blanchard was one of the twits advocating consolidation / austerity at the height of the crisis, why am I supposed to pay any attention to him?

    As for Ricardian equivalence, I go along with Joseph Stiglitz’s take: “Ricardian equivalence is taught in every graduate school in the country. It is also sheer nonsense.”

    Replies
    1. Yes. A survey in Denmark a few years ago asked people whether the government was currently running a surplus or a deficit. Only 1 in 3 got it right. A bit hard to square with everyone saving the difference every time the deficit (surplus) went up (down).

    2. RE could be wrong, but it is not nonsense.

      Those models provide hypotheses and they are only interesting for policy purposes to the extent that those hypotheses seem to hold reasonably well. If not, maybe their only use should be to help explain how better models behave.

      Stiglitz and others should be more careful. Verbal inflation is very real: if you call something average good, and something merely good exceptional, you will have no words left when something truly exceptional comes along. If coherent and formal arguments that you can find in graduate textbooks constitute sheer nonsense, how do you describe the considerably less thoughtful things you hear on TV or read in the newspaper?

  2. The Phillips curve and the NAIRU are as close as you can get to religious fundamentalism. It is a crime against humanity to call these science.

  3. Furthermore, the Central Bank in all sovereign jurisdictions falls under the definition of control by the Treasury - often de facto by the operation of law (Bernanke: "Our job is to do what Treasury tells us to do"), but also de jure, e.g. in the Sterling area HM Treasury actually owns the entire shareholding of the Bank of England. The control model in IFRS 10 is elaborate precisely to catch all those little tricks that entities use to avoid having to consolidate accounts, and is worth studying to see the various 'Wizard of Oz' methods by which control can be exerted even though the public face is supposedly independent.

    Given the control relationship, consolidated financial statements are entirely appropriate and correct accounting which reveals the essence of the underlying transactions. Therefore in your model you should be able to swap out the detailed entities and replace them with the consolidated entity and nothing about the response should change. If it does then it is likely your model is wrong.

    In Information Systems this is known as white box and black box testing. With white box you can see the internals; with black box you can't, requiring you to conform to that module's interface in your testing.

  4. * SW-L:
    "Academics are taught in grad school that all other models are flawed because of the Lucas critique, an argument which *assumes* that your microfounded model is correctly specified"

    Exactly. The Lucas critique - which was well known before Lucas - is an econometric critique. Consequently, immunity to the critique is not a formal property of a model but an empirical (specification) issue. As indeed Lucas and Sargent conceded back in 1978:

    “… the question of whether a particular model is structural is an empirical, not theoretical, one.”
    [Lucas & Sargent, Boston Fed, 1978]

    I have never understood where the curious belief comes from that Lucasian microfoundations are either necessary or sufficient for immunity from the Lucas critique. Apparently somewhere the Almighty Bob said so, and thus it must be so.

    * Blanchard:
    "For conditional forecasting, i.e. to look for example at the effects of changes in policy, more structural models are needed, but they must fit the data closely and do not need to be religious about micro foundations."

    Necessary, but obviously not sufficient, to fit the data. In practice DSGEs, even if not religiously microfounded, are overparameterised and rely on de facto free parameters to achieve in-sample fit. Relying on free parameters means that conditional forecasts will be misleading and out-of-sample performance will generally be poor.

    Further, to the extent that policy analysis relies on the New Classical / policy regimes / rational expectations approach, it is pretty restrictive.

    * Chris Sims:
    "There may be some policy issues where the simple rational expectations policy analysis paradigm – treating policy as given by a rule with deterministic parameters, which are to be changed once and for all, with no one knowing before-hand that the change may occur and no one doubting afterward that the change is permanent – is a useful approximate simplifying assumption. To the extent that the rational expectations literature has led us to suppose that all “real” policy change must fit into this internally inconsistent mold, it is has led us onto sterile ground."

  5. If RE were true, fiscal consolidation should never reduce GDP, as the prospect of future tax decreases should lead people to immediately compensate for the consolidation by increasing their private consumption correspondingly. But this has not happened, as the second recession after the financial crisis clearly showed.

  6. I agree with what you say, but why can't credit constraints that negate RE simply be added to DSGE models? If modelled correctly, they would still be micro-founded but would more accurately fit the data.

  7. I agree with you that not all models need to be microfounded, especially the type of models you "carry around in your head". I would argue, however, that when thinking about what type of policy response to implement, what goes on "under the hood" of aggregate equations is important. Understanding why Ricardian equivalence does not hold in the data (credit constraints, buffer-stock savings etc) is perhaps the key to finding the right policy response.

  8. This comment has been removed by the author.

  9. And if you read Ricardo's own work on equivalence you will find that he too thought it was nonsense that did not describe how real people behaved. He only described the theory to debunk it.

  10. Recent DSGE models fit the data very well, but parts of their identifying restrictions are not truly structural (wherever one may really draw the line between reduced form and structural). For example, they usually include external habit formation in the utility function, which helps to smooth the response of consumption to the shocks included in the model. Estimation often results in implausibly large values for this parameter, clearly pointing to model misspecification. These models are microfounded in this respect, but one should not take the implied interpretation too seriously. Still, they may very well be used for policy analysis; one just needs to be aware of their weaknesses and consequently of which questions they are suited to and which they are not.

    Replies
    1. Data is the result of 'mark to model' policy making. It records the movements of a man in a straitjacket.

  11. If a debt-financed increase in fiscal spending puts idle resources into use and thus increases tax collection, the spending increase will pay itself back at least in part, and if you take hysteresis effects into account may even pay itself back completely. Why would forward looking rational consumers not be able to figure this out, thinking erroneously instead that the entire spending increase has to be paid back via higher future taxes?

  12. Please could you do a comment reply, or ideally a full post, responding to this blogpost: http://econlog.econlib.org/archives/2016/10/two_approaches.html

    Thanks!

  13. Here is a link where one can see how people think about Ricardian Equivalence. If we don't know how people think about the problem, why not ask them? People do have fears about future taxes (even in a helicopter money treatment), but it seems that this fear does not lead them to spend less.

    https://papers.ssrn.com/sol3/papers2.cfm?abstract_id=2893008

