Some simple arithmetic for mistakes with Taylor Rules

[Updated to fix arithmetic errors spotted by Min. A big thank you to Min! (I did not leave my embarrassing original mistakes in, because I wanted to keep it clear). The effects I am talking about are even bigger now.]

Sometimes I think that US monetary policy is too important to be left to the Americans. If you see your neighbour thinking of doing something daft, apparently unaware of one of the problems, you ought to speak up. Especially if it will affect you too, because you do a lot of trade with your neighbour.

[Update: there is something weird about this. I have read three recent criticisms of the US proposal for a legislated Taylor Rule: by Simon Wren-Lewis, (though Simon posted his critique of Taylor Rules presumably just before knowing about the US proposal), Tony Yates, and now Gavyn Davies. That's three Brits, plus me, a (British-)Canadian. Did I miss the American critics? Is this a Brit thing??]

A fixed Taylor Rule multiplies your mistakes in estimating a margin of safety for avoiding the Zero Lower Bound by a factor of three. It makes the danger of hitting the ZLB bigger than you think it is. And Taylor Rules don't work at the ZLB.

Suppose you thought that the natural rate of interest was r^, and you thought that potential output was y^. And you wanted to target an inflation rate p^. Then you might (or might not) tell your central bank to implement a Taylor Rule, and set the nominal interest rate i(t), as the following function of actual inflation p(t) and actual output y(t):

1. Set i(t) = r^ + p^ + 1.5(p(t)-p^) + 0.5(y(t)-y^)

Theory and evidence suggest that following such a rule might then result in a reasonable outcome, in which actual inflation will equal the inflation target on average. It might not be the best way to implement that inflation target, but it won't be the worst either.
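Equation 1 is simple enough to put straight into code. Here is a minimal sketch in Python (the default parameter values are illustrative, not from anything official):

```python
def taylor_rule(p_t, y_t, r_hat=2.0, p_hat=2.0, y_hat=100.0):
    """Nominal rate from equation 1: i(t) = r^ + p^ + 1.5(p(t)-p^) + 0.5(y(t)-y^).

    r_hat and y_hat are the central bank's estimates of the natural rate and
    potential output; p_hat is the intended inflation target (all rates in %).
    """
    return r_hat + p_hat + 1.5 * (p_t - p_hat) + 0.5 * (y_t - y_hat)

# With inflation and output both exactly on target, the rule sets i = r^ + p^ = 4%.
print(taylor_rule(p_t=2.0, y_t=100.0))  # 4.0
```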

But what happens if you are wrong about the natural rate of interest, or wrong about potential output? You think they are r^ and y^, but they are actually r* and y*. So the correct Taylor Rule would be:

2. Set i(t) = r* + p* + 1.5(p(t)-p*) + 0.5(y(t)-y*)

What happens to inflation in that case?

Subtracting the second equation from the first, we get:

3. 0 = (r^-r*) + (p^-p*) + 1.5(p*-p^) + 0.5(y*-y^)

Rearranging terms, we get:

4. (p*-p^) = 2(r*-r^) – (y*-y^)

Assuming that Taylor Rules actually work as they are supposed to work, equation 4 tells us what determines the gap (p*-p^) between the inflation rate you are actually targeting, p*, and the inflation rate you intended to target, p^.

If the actual natural rate is one percentage point higher than you think it is, you will actually be targeting an inflation rate two percentage points above what you intended to target.

If actual potential output is one percent higher than you think it is, you will actually be targeting an inflation rate one percentage point below what you intended to target.
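Both claims can be checked numerically. The sketch below (my numbers, purely illustrative) sets p* according to equation 4 and confirms that the rule you actually follow (equation 1) is then identical, for any p(t) and y(t), to the "correct" rule written in terms of the true values (equation 2):

```python
def rule(p_t, y_t, r, p_target, y_pot):
    # Generic Taylor Rule: i = r + target + 1.5(p - target) + 0.5(y - potential)
    return r + p_target + 1.5 * (p_t - p_target) + 0.5 * (y_t - y_pot)

r_hat, p_hat, y_hat = 2.0, 2.0, 100.0   # the bank's estimates
r_star, y_star = 3.0, 101.0             # the true values (illustrative)
p_star = p_hat + 2 * (r_star - r_hat) - (y_star - y_hat)   # equation 4

# Equations 1 and 2 coincide for any p(t), y(t) once p* satisfies equation 4.
for p_t, y_t in [(2.0, 100.0), (5.0, 97.0), (-1.0, 103.0)]:
    assert abs(rule(p_t, y_t, r_hat, p_hat, y_hat)
               - rule(p_t, y_t, r_star, p_star, y_star)) < 1e-12

# A +1pp mistake about r* adds 2pp to the de facto target;
# a +1% mistake about y* subtracts 1pp. Net: 2 + 2 - 1 = 3.
print(p_star)  # 3.0
```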

The intuition is straightforward:

If the actual natural rate is higher than you think it is, that makes you set the nominal rate too low, and so inflation would need to be above target on average to have an offsetting effect to cancel out your mistake.

And if actual potential output is higher than you think it is, that makes you set the nominal rate too high, and so inflation would need to be below target on average to have an offsetting effect to cancel out your mistake.

We observe neither the natural rate of interest nor potential output. These are both theoretical constructs, and we need to estimate them. Our estimates will be wrong because, for one thing, both the natural rate and potential output change over time in ways we cannot perfectly foresee. So we will in fact make mistakes about the natural rate of interest and potential output, and we will end up targeting an inflation rate that is either higher or lower than the one we want to target, until we figure out our mistakes.

For a normal central bank, that is a problem, but it is not a big problem. Because normal central banks learn from their past mistakes. If they see output persistently below potential, given what they thought was the correct real rate of interest to keep output at potential, they revise down their estimate of the natural rate. If they see inflation persistently below target, given what they thought was output at potential, they revise up their estimate of potential output.

They fix mistakes in their Taylor Rule as they go along. Normal inflation-targeting central banks do this all the time. That's probably the main reason why we always observe a lagged interest rate in the equation when we estimate a central bank's reaction function. If inflation comes in below target, they don't just cut the nominal rate of interest once. They cut once, and then cut again and again, if inflation comes in persistently below target, and keep on cutting until inflation comes back up to target. Persistently below target inflation causes not a low but a falling nominal rate of interest, as the central bank slowly revises its estimates.
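That revision process can be sketched as a toy loop (my construction, not part of the post's argument; it assumes the rule "works", so average inflation settles at the de facto target implied by equation 4, and takes y^ = y* for simplicity):

```python
r_star = 1.0    # true natural rate (unknown to the bank)
p_hat = 2.0     # intended inflation target
r_hat = 2.0     # the bank's initial estimate, 1pp too high

for _ in range(20):
    p_avg = p_hat + 2 * (r_star - r_hat)   # equation 4: where inflation settles
    r_hat += 0.25 * (p_avg - p_hat)        # each persistent undershoot prompts
                                           # another small downward revision

print(round(r_hat, 4))  # 1.0: repeated revisions converge on the true r*
```

Each pass through the loop is one "cut again": inflation keeps coming in below target, so the bank keeps revising its estimate down, until the estimate reaches the true natural rate and the misses stop.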

But if the parameter values of the Taylor Rule are fixed by law, central banks are not allowed to learn from their mistakes. That means that inflation can be above target on average, or below target on average.

If inflation comes in below target on average, that can be a problem, because the danger of the Zero Lower Bound becoming a binding constraint gets bigger.

Set the Left Hand Side of equation 2 (the nominal interest rate i(t)) to be greater than zero, which is the condition for the ZLB not to bind, then substitute for p* (the de facto inflation target) from equation 4:

5. 0 < r* + p^ + 2(r*-r^) – (y*-y^) + 1.5(p(t)-p*) + 0.5(y(t)-y*)

Since (p(t)-p*) will equal zero on average, and (y(t)-y*) will also equal zero on average, (assuming Taylor Rules work as they are supposed to), this simplifies to, on average:

6.  0 < (r* + p^) + 2(r*-r^) – (y*-y^)

Equation 6 contains three terms:

The first term, (r* + p^), represents what would normally be the margin of safety for avoiding the ZLB. If the true natural rate is 2%, and the intended inflation target is 2%, then the nominal interest rate should be 4% on average, which gives you a 4% margin of safety to avoid hitting the ZLB. The higher the natural rate, and the higher the intended inflation target, the bigger the margin of safety.

The second term, + 2(r*-r^), shows the effects of mistakes about the natural rate on the margin of safety. If you think the natural rate is smaller than it really is, you get average inflation higher than you intended, and the margin of safety is bigger. But if you think the natural rate is bigger than it really is, you get average inflation lower than you intended, and the margin of safety is smaller.

The third term, – (y*-y^), shows the effects of mistakes about potential output on the margin of safety. If you think potential output is bigger than it really is, you get average inflation higher than you intended, and the margin of safety is bigger. But if you think potential output is smaller than it really is, you get average inflation lower than you intended, and the margin of safety is smaller.

It might be more useful if we rearrange equation 6 to read:

7.  0 < (r^ + p^) + 3(r*-r^) – (y*-y^)

The first term, (r^ + p^), represents the margin of safety you think you have, based on your estimate of the natural rate and your intended inflation target. But if your estimate of the natural rate is wrong, and the true natural rate is one percentage point lower than you think it is, your actual margin of safety will be three percentage points smaller than you think it is: the original one percentage point mistake, plus the additional two percentage points that come from the effective inflation target being lower than the intended one.

A fixed Taylor Rule multiplies your mistakes in estimating a margin of safety for avoiding the ZLB by a factor of three.
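In numbers (mine, purely illustrative): suppose you estimate the natural rate at 2% with a 2% intended target, but the true natural rate is 1%.

```python
r_hat, p_hat = 2.0, 2.0   # estimates: a perceived 4% margin of safety
r_star = 1.0              # true natural rate, 1pp lower than estimated
y_err = 0.0               # (y* - y^): assume potential output estimated correctly

perceived_margin = r_hat + p_hat                                # what you think
actual_margin = (r_hat + p_hat) + 3 * (r_star - r_hat) - y_err  # equation 7

print(perceived_margin, actual_margin)  # 4.0 1.0
```

A one percentage point mistake about the natural rate costs three percentage points of margin: that is the factor of three.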

This is from John Taylor's blog post:

According to the legislation “The term ‘Reference Policy Rule’ means a calculation of the nominal Federal funds rate as equal to the sum of the following: (A) The rate of inflation over the previous four quarters. (B) One-half of the percentage deviation of the real GDP from an estimate of potential GDP. (C) One-half of the difference between the rate of inflation over the previous four quarters and two. (D) Two."
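Transcribed literally into code (clauses A through D in order), the quoted Reference Policy Rule is:

```python
def reference_policy_rule(inflation_4q, output_gap_pct):
    """Nominal funds rate per the quoted legislation (all inputs in %)."""
    return (inflation_4q                    # (A) inflation over the past four quarters
            + 0.5 * output_gap_pct          # (B) half the output gap
            + 0.5 * (inflation_4q - 2.0)    # (C) half of (inflation minus two)
            + 2.0)                          # (D) two

# On target (2% inflation, zero output gap) the rule gives a 4% funds rate,
# which is what makes the implicit natural-rate estimate 2%.
print(reference_policy_rule(2.0, 0.0))  # 4.0
```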

That means a 2% inflation target, which many macroeconomists think is already too low to provide a big enough margin of safety for avoiding the ZLB. But that's not the biggest problem with the proposed legislation.

The big problem is that it fixes the natural rate of interest at 2%. The Fed is allowed to revise its estimate of potential output, but it is not allowed to revise its estimate of the natural rate of interest: the legislation's implicit 2% estimate of the natural rate is fixed by law.

r^+p^ = 2%+2% = 4% estimated margin of safety. A 4% margin of safety wasn't big enough even when central banks were allowed to revise their estimates of the natural rate. If central banks are not allowed to revise their estimates of the natural rate, that 4% margin of safety will be much too small.

If you really really want to legislate a Taylor Rule, OK. But there's a price you must pay, if you want to maintain the same margin of safety against hitting the ZLB. That price is a higher average rate of inflation built right into that legislated Taylor Rule.

Your choice: legislated Taylor Rules; hitting the ZLB more frequently; a higher rate of inflation. Pick any two. [That wasn't clear. What I meant to say is that if you choose a legislated Taylor Rule, you must also choose either hitting the ZLB more frequently or a higher inflation rate.]

And all of the above assumes that Taylor Rules actually do work the way they are supposed to work.

Can somebody please tell the Americans? (And can somebody please check my arithmetic, because I always get it wrong. And I really did try to make this as clear as possible, but I don't know if I have succeeded.)

67 comments

  1. Jon

    Not convinced by this post, Nick. The proposed legislation does not specify potential output. Furthermore, the output gap is an integral of the past errors. So this controller has an integral term. That makes it self correcting.
    Actually Taylor specifies a PID controller for an inflation target. So I think your claim is just wrong… a more interesting question is whether Taylor’s gain settings give poles in the wrong positions leading to instability, but that question seems to me impossible to answer a priori.

  2. Nick Rowe

    Jon: if the central bank lied about its estimate of potential output, I agree it could still do anything it wanted to do, simply by saying that its estimate of potential output was whatever number that would result in the nominal interest rate it wanted to set. I’m assuming the central bank is honest.
    ” Furthermore, the output gap is an integral of the past errors. So this controller has an integral term. That makes it self correcting.”
    I think I see what you are saying there. But deviations of inflation from target is also an integral of past errors. If the output gap converges to zero (as it should according to the assumption that Taylor Rules work) it is the gap between actual inflation and the intended target for inflation that adjusts as a result of errors.

  3. Robert Barbera

    My new favorite part of the Taylor paper that spawned the Taylor Rule? Quite unwittingly, I am sure, Taylor’s estimates for g and r* anticipate Piketty by over 20 years.
    Reread Taylor’s text. He notes that his regression efforts identify r* at around 2%, which he says makes sense in light of the then embraced notion that long term sustainable real GDP growth was around 2% as well.
    If r* is 2%, what is the return on capital (roc)? We certainly have come to expect a term premium for longer dated risk free returns. And the longstanding reality of a puzzlingly high risk premium for private assets is hard to ignore.
    As such roc will certainly meaningfully exceed r*. Therefore, if one believes that it ‘makes sense’ that r* equals g, by construction one also must think it makes sense that g will be much lower than roc.

  4. Nick Rowe

    Robert: I think if you had asked any economist (before Piketty) whether r would normally be greater than or less than g, most would have replied “greater than”. Both from the historical empirical evidence, and from theory. For example, in the Solow Growth model, r=g at the “Golden Rule”, where steady-state consumption per head is maximised, and if you add time-preference proper, and/or risk, you would get r > g.
    Samuelson (1958) is one model where r < g, but it is a weird model, with dynamic inefficiency (the economy needs a Ponzi scheme to force r up to equal g) and where long-run budget constraints do not add up. Anyone who writes down a long-run government budget constraint, for example, implicitly assumes r > g.

  5. Kyle Johansen

    Is the third paragraph from the bottom saying that if you don’t pick a legislated Taylor Rule, you get higher inflation and a greater chance of hitting the ZLB. I don’t really get that part.

  6. Nick Rowe

    Kyle: you mean this bit: “Your choice: legislated Taylor Rules; hitting the ZLB more frequently; a higher rate of inflation. Pick any two.”?
    Yep. It’s not clear. What I meant to say is that you can’t have a legislated Taylor Rule unless you also accept hitting the ZLB more frequently or a higher rate of inflation.

  7. Unknown

    “Did I miss the American critics [of a legislative Taylor Rule]? Is this a Brit thing?”
    Snooze … Practically anybody can file a bill and get a hearing in the house during the summer. I doubt most people have even heard of the two house members that introduced this. Remember, you need effectively 60 Senators to get bills through the Senate. The odds of this legislation being passed are 0.00001%.
    My take: this is not a serious proposal, more like a “resume building exercise” to please some conservative groups (note the witness list is Cato, Mercatus, etc.) http://financialservices.house.gov/calendar/eventsingle.aspx?EventID=386842
    If this were a serious proposal, you would see Yellen, the Treasury Secretary, etc. as witnesses. It does not even have an actual bill # (HR ____).
    I see this a lot discussing bills with Brits, Canadians, and Europeans in general. We do not have a parliamentary system in the US, which means that out of the thousands of bills practically any House or Senate member can introduce and get a hearing on, only a tiny % will ever make it through. Congress critters introduce bills to say they “fought for” something. Even bills for which there is a bipartisan consensus are nearly impossible to get through, esp. in an election year. I could give loads of examples of bills that would get 57 votes in the Senate, a majority in the House, but won’t make it. A few years ago for example we had House hearings for the Fed to adopt inflation targeting. You would be surprised what would get 57 votes but won’t get passed.
    I doubt you will see a lot of serious analysis in the US until it’s a serious proposal (by proxy: you will see senior Senate members champion it, and/or you’ll see the Fed Chair and senior White House executives testify).

  8. Robert Barbera

    My point is that we have a number of partial equilibrium solutions confused as a general equilibrium framework. And the confusion comes, I suspect, from the dangerously simplifying assumption, embedded in much macro, of one representative interest rate.
    r* is the real neutral short term rate
    r(g) is the government’s borrowing cost–CBO assumes it to be 30 bps below the real 10 year rate.
    roc is the blended return on all capital assets.
    We all tend to agree that r* will be lower than r (g) and that r (g) will be lower than roc.
    A paper written by Krugman and DeLong, 2006, notes that it makes no sense for roc to greatly exceed g, and they argue, using some simple extrapolations, a Solow growth model, and a DSGE model, that a slower g will almost certainly deliver a lower roc.
    My point is that roc/g discussions went on separately from r* identification discussions. If r* equals g, you end up with too high a sustainable value for roc.
    Theoretical golden rule values for government borrowing costs relative to g notwithstanding, the 1955 to 2005 USA average real government borrowing costs were nearly 1 percentage point below average g.

  9. Mike Sax

    Nick couldn’t the criticism that the Taylor Rule prevents CBers from learning from mistakes also apply to NGDP level targeting or any ‘rules based’ monetary policy?

  10. Unknown

    re: my above comment, Here is a bill from last year: https://www.govtrack.us/congress/bills/113/hr1174 also to amend the Fed mandate. Chances of being passed: 1%. Chances of making it out of committee: 5%. I think that is optimistic.
    Here is the updated text of this years bill: https://www.govtrack.us/congress/bills/113/hr5018
    5% chance of being enacted. I think that’s optimistic. Only 3% of bills from this committee get enacted and this bill has only 2 (relatively minor ranking) sponsors.

  11. Min

    We have the following equation with p^, p(t), i(t), and y(t) given, therefore constants.
    i(t) = r^ + p^ + 1.5(p(t)-p^) + 0.5(y(t)-y^)
    Then, approximately,
    ∆i(t) = ∆r^ + 0.5∆y^
    Notice the ‘+’ before the error for y^, not a minus. Errors accumulate.
    Nowhere is there an estimate for ∆p^ in this equation. You generate one because of your equation (2).
    i(t) = r* + p* + 1.5(p(t)-p*) + 0.5(y(t)-y*)
    There is no justification for p* in that equation, since your target has not changed.
    Now, since i(t) is supposed to yield the correct value, p^, it is arguable that we can estimate the error in p^ because of the error in i(t), using formula (1). Holding everything else constant, we would have this.
    ∆i(t) = |∆p^ – 1.5∆p^| = 0.5∆p^
    Then, approximately,
    ∆p^ = 2∆i(t)
    And, approximately,
    ∆p^ = 2∆r^ + ∆y^
    You’re welcome. 🙂

  12. Matthew

    I hope that everyone considers this: [Link here NR] before considering a legislated Taylor Rule. Taylor Rules are handy ad-hoc additions to make indeterminate linearized DSGE models work, but they have no basis in US monetary history–the Fed has never ever followed any kind of Taylor Rule. The Taylor Rule version that John Taylor proposed would have actually produced deepening deflation over the 1990s and 2000s, and even if you correct his estimates using data, any linear version of the rule consistently underestimates the Fed’s responses to business cycles. I don’t know why any economists ever thought the Taylor Rule resembled actual policy. It doesn’t.
    On a different note, I’d suggest that the reason there haven’t been any US critics of the proposed Taylor Rule legislation is that US economists simply don’t consider it a possibility. Over 90% of legislation dies in committee, and in the US, especially with this congress, pretty much everything that makes it out of committee dies anyway. The GOP has acquired a small but loud gold-bug wing now that expansionary monetary policy is associated with Obama, and I think these hearings on the legislative Taylor Rule are meant as a concession to that wing of the party. It was only a year ago that the same wing of the same party was proposing a legislated CPI-level price target as a “modern gold standard.” This is a GOP insider party-building exercise not worthy of serious discussion.

  13. bill40

    Let me see if I have this clear. It is the intention of the bill to take an equation that is almost certainly wrong, containing variables that are either poorly understood or outside our control, whose outcome will almost certainly be wrong, and make it into law?
    I have a headache.

  14. Majromax

    It’s interesting to see that, according to Taylor Rule math, errors in the assumed natural interest rate have an amplifying effect. But is this a criticism of the Taylor rule by itself, or instead a criticism of the assumption that the natural rate of interest is fixed? (Here that would be legislatively enshrined, but it could also be simple stubbornness among discretion-using central bankers.)
    Your conclusion is that a legislated Taylor Rule either requires an extra safety margin or hits the ZLB more often, but have we seen contemporary central banks actually updating estimates of their long-term natural rate of interest to make the sort of adjustments forbidden by this proposed legislation?

    If the output gap converges to zero (as it should according to the assumption that Taylor Rules work) it is the gap between actual inflation and the intended target for inflation that adjusts as a result of errors.
    Does it, if the central bank is wrong? From your equations (3) and (4), if the inflation rate is approximately the target level (p* ~= p^), which is roughly the case for the United States, we get (y* – y^) = 2 (r* – r^). If the true natural rate of interest is 1% below what the central bank believes it to be, then the central bank is “actually” targeting output 2% below what it claims to be.
    If the claimed output target is based on realistic estimates of potential output rather than a simple curve-fit to previous behaviour (which is valid but circular in calling it “potential output”), then that would suggest the possibility of the central bank causing a persistent output gap.

  15. Nick Rowe

    dwb: ah! That makes sense. Those of us from a Parliamentary system don’t get the politics. Something like a private member’s bill, only even less. So the American macroeconomists can’t be bothered giving it any free publicity.
    Robert: OK, but in this particular case, “the” interest rate means the particular interest rate set by the Fed.
    “Theoretical golden rule values for government borrowing costs relative to g notwithstanding, the 1955 to 2005 USA average real government borrowing costs were nearly 1 percentage point below average g.”
    For a lot of that period though, actual inflation turned out to be higher than expected inflation, especially at long horizons like on 30 year bonds. So ex post real interest rates were below ex ante.
    Mike: I think it is useful to distinguish between “instrument rules” and “target rules”. For example, the Bank of Canada has a target rule: a public commitment to 2% inflation. But it has no instrument rule. It uses its discretion in setting the interest rate instrument to try to hit the 2% inflation target as close as it can. The Bank is accountable for hitting that 2% inflation target, but nobody expects it to hit the target exactly every month. We can only hold it accountable over longer periods. The correct setting of the instrument to hit that 2% inflation target is a purely technical matter. We can learn, at least from hindsight, what worked and what didn’t work. We can “see” the mistakes.
    The NGDP target would be a rule that replaced the inflation target. Whether we used an instrument rule, or discretion, to hit that NGDP/inflation target is a separate question.
    But yes, we might eventually learn that an NGDP target wasn’t the best commitment to make, just as us MMs think we have learned that an inflation target wasn’t the best commitment to make. There is an unavoidable tension here between the desire for a commitment to a rule, that creates the expectations of the future that we want to create, and the possibility that we might subsequently learn something about the world that makes us wish we had made a different commitment. Nearly all law has this same tension. So do all promises. But that doesn’t mean we should have no laws, and make no promises.
    Min: thanks! But what is missing is the rest of the economy. All we have here is one equation in what is (implicitly) a dynamic simultaneous equation system. In my post, instead of writing down those other equations, I simply said: let’s assume that Taylor Rules work as their advocates say they work, so if the central bank follows equation 2 then average inflation will in fact be p* and average y will in fact be y*.
    Matthew: good post! I found very similar results for Canada. During the inflation targeting period it is hard to find a positive coefficient on inflation in the estimated reaction function, let alone one bigger than one. I think the reason is that inflation targeting was credible, so expected inflation never deviated much at all from the 2% target, and it is only expected inflation to which the Bank must respond with a coefficient bigger than one. Most fluctuations in actual inflation were temporary and due to supply shocks.
    I used to spend a lot of time thinking about regressions like yours, and how to interpret them. Must get my head back into it.
    But your general point is quite correct, and important, I think.

  16. Min

    Let’s compare combs. 🙂
    Your equation (4):
    p*-p^ = (1/1.5)(r*-r^) – (0.5/1.5)(y*-y^)
    Or, in error notation,
    ∆p^ = 2∆r^/3 – ∆y^/3
    First, as I pointed out, errors accumulate, so the sign before ∆y^ should be a plus. Second, by my calculations ∆p^ is 3 times greater than what you get after the sign correction. Check it out. 🙂

  17. Frank Restly

    Nick,
    i(t) = r^ + p^ + 1.5(p(t)-p^) + 0.5(y(t)-y^)
    Suppose the inflation rate is 0% and is on target at 0% (firms are not resource constrained in any way).
    i(t) = r^ + 0.5(y(t)-y^)
    Why do economists (British, American, or otherwise) conclude that lowering the interest rate i(t) will result in
    y(t) moving closer to y^ instead of y^ moving closer to y(t)?

  18. Min

    “A fixed Taylor Rule multiplies your mistakes in estimating a margin of safety for avoiding the Zero Lower Bound by 1.67. It makes the danger of hitting the ZLB bigger than you think it is. And Taylor Rules don’t work at the ZLB.”
    IIRC, the ‘.67’ in ‘1.67’ comes from 2∆r^/3. Assuming that my calculations are correct, we have this:
    A fixed Taylor Rule multiplies your mistakes in estimating a margin of safety for avoiding the Zero Lower Bound by at least 3.
    🙂

  19. Majromax

    @Frank:

    i(t) = r^ + 0.5(y(t)-y^)
    Why do economists (British, American, or otherwise) conclude that lowering the interest rate i(t) will result in y(t) moving closer to y^ instead of y^ moving closer to y(t)?
    Because that’s the direction of control?
    In words (and keeping inflation exactly on-target), the equation is “the interest rate should be set to its natural rate, plus half the output minus expected potential output.” (Or alternatively, less half the output gap.)
    The entire point of monetary policy is to have some influence on real, observed variables: inflation and output. The alternative is pure navel-gazing. “Lowering the interest rate will result in y^ moving closer to y(t)” means “lowering the interest rate will result in increasing our estimation of potential output.”
    In Nick’s formulation, r^ and y^ are our expectations of the real natural interest rate and the real potential growth rate, and r* and y* are their actual (existing but not exactly measurable) values.

  20. Min

    Frank Restly: “Why do economists (British, American, or otherwise) conclude that lowering the interest rate i(t) will result in
    y(t) moving closer to y^ instead of y^ moving closer to y(t)?”
    In the formula, y(t) is a given. In itself the formula says nothing about y(t+1).

  21. Nick Rowe

    Majromax: “But is this a criticism of the Taylor rule by itself, or instead a criticism of the assumption that the natural rate of interest is fixed?”
    The latter. I assumed (to keep it simple) that the Taylor Rule would work exactly as intended if the estimates for the natural rate of interest and potential output were correct. And if the natural rate is not fixed, but the legislation fixes the implied estimate, that estimate would be wrong, or become wrong.
    “…but have we seen contemporary central banks actually updating estimates of their long-term natural rate of interest to make the sort of adjustments forbidden by this proposed legislation?”
    Yes. The Bank of Canada did this publicly a couple of years back. I did a post on it (can’t find it now). It has almost certainly made other, smaller adjustments in its model. Plus, the fact that the nominal interest rate has been trending down over the 20 years of inflation targeting, despite no upward trend in inflation, or output relative to potential, implies it must have done this. I used to joke on the CD Howe monetary policy council: “4% is the new 5%”. Then I changed it to “3% is the new 4%”. And so on, until the joke got stale. (Subtract 2% target inflation from my numbers, to get the implied estimate of the natural rate.)
    “If the claimed output target is based on realistic estimates of potential output rather than a simple curve-fit to previous behaviour (which is valid but circular in calling it “potential output”), then that would suggest the possibility of the central bank causing a persistent output gap.”
    It can’t, if the theory behind Taylor Rules is correct (and I assumed it is). If the central bank targets a positive/negative output gap, and hits it on average, you get ever accelerating/decelerating inflation. The fact that inflation stayed roughly stable over the last 20 years means it must have learned from any mistakes. (Unless by sheer chance the mistakes exactly offset a changing inflation target).

  22. Nick Rowe

    Min: “First, as I pointed out, errors accumulate, so the sign before ∆y^should be a plus.”
    No. We are talking at cross purposes. y*-y^ is not the same as y(t)-y*. y* is the true steady state of the system, and y^ is what you think is the steady state of the system, and so y*-y^ is not the deviation of y(t) from the steady state. According to the theory underlying Taylor Rules, y(t) will converge to y*, even if you get your estimate of y* wrong. Your bad estimates won’t change that. But your bad estimates will change where p(t) converges to. You will end up with inflation different from what you targeted.
    y*-y^ is not an “error” as that term is understood in control theory.

  23. Majromax

    @Nick:

    It can’t, if the theory behind Taylor Rules is correct (and I assumed it is). If the central bank targets a positive/negative output gap, and hits it on average, you get ever accelerating/decelerating inflation. The fact that inflation stayed roughly stable over the last 20 years means it must have learned from any mistakes. (Unless by sheer chance the mistakes exactly offset a changing inflation target).
    Okay, I think I’m beginning to get this (and apologies for the 101-time). Since y* is externally determined, the Taylor rule exists to balance the fluctuation of output about that trend (rather than the level of the trend itself), and the coefficients determine how much inflation leeway is given to smooth the realized y(t). If the central bank’s estimate of y^ is wrong, then your analysis applies in your stated direction, where the actual effect of the wrong assumption is on the inflation rate.

  24. Frank Restly

    Min,
    “In the formula, y(t) is a given. In itself the formula says nothing about y(t+1).”
    Presumably the central bank is setting i(t) because it wants to effect some change in either y or y^, otherwise why do it?
    My question centers around incentives. I am a perennial borrower and I know the central bank is going to set my cost i(t) based upon some measure of my actual output and my potential output. Every year the central bank looks at my output. Every year I am polled by the central bank asking what my potential output is. How truthful would I be with the central bank, knowing that my cost may be adversely affected by my truthfulness?

  25. Min

    Nick Rowe: “Min: thanks! But what is missing is the rest of the economy. All we have here is one equation in what is (implicitly) a dynamic simultaneous equation system. In my post, instead of writing down those other equations, I simply said: let’s assume that Taylor Rules work as their advocates say they work, so if the central bank follows equation 2 then average inflation will in fact be p* and average y will in fact be y*.”
    I know that we are missing the rest of the economy. That is why I said that one could argue that the Taylor Rule can be used to indicate the error in achieving the target inflation rate. Then I did what you did, which was to make that assumption.
    Let’s take your (1):
    i(t) = r^ + p^ + 1.5(p(t)-p^) + 0.5(y(t)-y^)
    Solving for p^ we get
    p^ = 2r^ + 3p(t) + y(t) – y^ – i(t)
    Then for a given i(t) we get
    ∆p^ = 2∆r* + ∆y*
    That means that our error in hitting our target (∆p^) is approximately twice our error in estimating the natural rate of interest (∆r*) plus our error in estimating the potential output (∆y*). We do not include an error term for i(t) because that would mean counting its error twice, as its error is determined by the errors in estimation.
    Thanks for the correction in your other note (which I did not quote). |r^ – r*| is the error in estimating r*, and |y^ – y*| is the error in estimating y*.

  26. Min

    Oops! That should be
    Solving for p^ we get
    p^ = 2r^ + 3p(t) + y(t) – y^ – 2i(t)
    The error equation is the same. 🙂
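Min’s corrected rearrangement can be verified by round-tripping through the rule. A minimal Python check (variable names are mine, purely for illustration):

```python
# Numerical check of Min's corrected rearrangement of the Taylor rule:
# p^ = 2r^ + 3p(t) + y(t) - y^ - 2i(t). Variable names are mine.

def taylor_rate(r_hat, p_hat, p, y, y_hat):
    """Equation 1: the interest rate the Taylor rule sets."""
    return r_hat + p_hat + 1.5 * (p - p_hat) + 0.5 * (y - y_hat)

def implied_target(r_hat, p, y, y_hat, i):
    """Min's corrected solve-for-p^: p^ = 2r^ + 3p + y - y^ - 2i."""
    return 2 * r_hat + 3 * p + y - y_hat - 2 * i

# Pick arbitrary numbers and confirm the two functions are inverses:
# set the rate from a known target, then recover that same target.
r_hat, p_hat, p, y, y_hat = 2.0, 2.0, 3.1, 101.0, 100.0
i = taylor_rate(r_hat, p_hat, p, y, y_hat)
assert abs(implied_target(r_hat, p, y, y_hat, i) - p_hat) < 1e-9
print("round-trip recovers p^ =", implied_target(r_hat, p, y, y_hat, i))
```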

  27. Nick Rowe

    Majromax: ‘Okay, I think I’m beginning to get this (and apologies for the 101-time). Since y* is externally determined, the Taylor rule exists to balance the fluctuation of output about that trend (rather than the level of the trend itself), and the coefficients determine how much inflation leeway is given to smooth the realized y(t). If the central bank’s estimate of y* is wrong, then your analysis applies in your stated direction, where the actual effect of the wrong assumption is on the inflation rate.’ [with one minor edit by me, to change his y^ to y*]
    Bingo! You said it more clearly than I did in response to Min above. And r* is also externally determined, and if the central bank’s estimate of r* is wrong, the actual effect is also only on the inflation rate.
    Frank: “Presumably the central bank is setting i(t) because it wants to affect some change in either y or y^, otherwise why do it?”
    1. Because it wants to reduce the variance of y around y^.
    2. Because it wants to keep the mean of p equal to p^, and also reduce the variance of p around p^.
    Simplest version (for a linear model with no hysteresis or other funny stuff): monetary policy (i.e. the coefficients of the Taylor rule) affects mean p, variance of p, and variance of y, but not mean y.
    Go read Friedman 1968 (pdf) as background.

  28. Nick Rowe

    Min (with your minor error corrected):
    “Solving for p^ we get
    p^ = 2r^ + 3p(t) + y(t) – y^ – 2i(t)
    Then for a given i(t) we get
    ∆p^ = 2∆r* + ∆y*
    That means that our error in hitting our target (∆p^) is approximately twice our error in estimating the natural rate of interest (∆r*) plus our error in estimating the potential output (∆y*).”
    No it does not mean that.
    First, what are those triangle thingies? Some accursed engineering thing? Why not just say p*-p^, r*-r^ and y*-y^ like I did?
    Second, neither i(t) nor p(t) will be independent of the errors in estimating the natural rate of interest and the error in estimating the natural rate of output.

  29. Frank Restly

    Nick,
    “1. Because it wants to reduce the variance of y around y^.”
    If the central bank is assuming a constant y^, then it might miss real changes in technology that raise or lower potential output. If the central bank polls producers to measure potential output, then it becomes susceptible to producers gaming the system – always understating potential output.

  30. Min

    OK, Nick, let’s do it your way.
    1. Set i(t) = r^ + p^ + 1.5(p(t)-p^) + 0.5(y(t)-y^)
    2. Set i(t) = r* + p* + 1.5(p(t)-p*) + 0.5(y(t)-y*)
    Subtracting the second equation from the first, we get:
    3. 0 = (r^-r*) + (p^-p*) + 1.5(p*-p^) + 0.5(y*-y^)
    Note: You made an error by omitting the second term. Simplifying, we get
    3. 0 = (r^-r*) + 0.5(p*-p^) + 0.5(y*-y^)
    Then, we get
    4. p*-p^ = (1/0.5)(r*-r^) – (0.5/0.5)(y*-y^)
    Or
    4a. p*-p^ = 2(r*-r^) – (y*-y^)
    ¿Es bueno? 🙂
    Now, p^ is the target inflation rate and p* is the inflation rate you get, assuming that the Taylor rule works as advertised. Then p* – p^ is the error in achieving that rate, n’est-ce pas? r* is the natural rate of interest and r^ is the estimate of r*. Then r*-r^ is the error of the estimate. Similarly, y* is the potential output, y^ is the estimate, and y*-y^ is the error of that estimate. Therefore,
    4b. |p*-p^| = 2|r*-r^| + |y*-y^|
    The error in estimating y* will not decrease the error in achieving the target rate of inflation, it will increase it.
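Min’s (4a) can be checked numerically: if the bank follows rule (1) with its estimates while the economy settles at the p* implied by the true rule (2), both rules must prescribe the same i(t) for every p(t), y(t). A quick Python sketch (the illustrative numbers and names are mine):

```python
# Numerical check of Min's (4a): p* - p^ = 2(r* - r^) - (y* - y^).
# Rule (1) uses estimates and target p^; rule (2) uses true values and
# the realized average inflation p*. If both describe the same policy,
# they must give the same i(t) for any p(t), y(t).

def rule(r, p_center, p, y, y_pot):
    """Taylor rule: i = r + p_center + 1.5(p - p_center) + 0.5(y - y_pot)."""
    return r + p_center + 1.5 * (p - p_center) + 0.5 * (y - y_pot)

r_hat, r_star = 3.0, 2.0      # natural-rate estimate 1% too high
y_hat, y_star = 100.0, 101.0  # potential-output estimate 1% too low
p_hat = 2.0                   # inflation target

# Min's (4a), rearranged for the inflation rate you actually get:
p_star = p_hat + 2 * (r_star - r_hat) - (y_star - y_hat)
print("realized inflation p* =", p_star)  # 2 + 2(-1) - 1 = -1.0

# The difference between the two rules is a constant, so checking a few
# arbitrary (p, y) points suffices.
for p, y in [(2.0, 100.0), (0.0, 0.0), (-3.0, 97.5)]:
    assert abs(rule(r_hat, p_hat, p, y, y_hat)
               - rule(r_star, p_star, p, y, y_star)) < 1e-9
```

So a 1% overestimate of r* plus a 1% underestimate of y* drags realized inflation 3% below target, consistent with the factor-of-three point in the post.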

  31. Nick Rowe

    Min: “Note: You made an error by omitting the second term.”
    DAMN! I think you are right! Thank you!

  32. Min

    @ Nick Rowe
    De nada. 🙂
    As for the sign of the estimation error, suppose that the CB is good at estimating y*, so that y* – y^ has an expected normal distribution around 0. Then what is the expected distribution of y^ – y* ?
    🙂

  33. Nick Rowe

    Min: your work isn’t done yet! What about my equation 5? That must be screwed up too?
    Thanks!

  34. Nick Rowe

    OK, I think (crosses fingers) I have fixed it all now. It makes the effects even bigger than in the original. 1.67 becomes 3.
    Thanks again Min.

  35. Nick Rowe

    Min: “As for the sign of the estimation error, suppose that the CB is good at estimating y*, so that y* – y^ has an expected normal distribution around 0. Then what is the expected distribution of y^ – y* ?”
    Isn’t that just the other distribution, flipped around left-right??

  36. Min

    @ Nick Rowe
    Right! That’s why we are only concerned with the absolute value of the error, and switch the ‘-‘ to ‘+’ . 🙂

  37. Majromax

    @Nick:

    First, what are those triangle thingies? Some accursed engineering thing? Why not just say p*-p^, r*-r^ and y*-y^ like I did?
    Capital Greek-letter delta, conventionally used in engineering, science, and math fields as shorthand for “change in” or “difference in.” Your (p*-p^) and such are still perfectly correct, but the delta notation often makes longer equations more readable — especially when we talk about nonlinear effects and differences squared.
    Also conveniently, the delta notation helps make it obvious which terms are negligible as differences go to zero. (If we’re careful, it can also be used to build differential equations if we replace “zero” with “not quite zero but infinitesimally small.”)

  38. Nick Rowe

    Min: but the ZLB is a danger on only one side, which is why the sign matters.
    Majromax: Ah, OK. But these errors won’t be negligibly small. And it wasn’t clear if they meant y^-y*, or y*-y^, or y-y^, or what.

  39. Min

    Nick Rowe: “Min: but the ZLB is a danger on only one side, which is why the sign matters.”
    Yes! That’s why it’s at least 3 instead of up to 3.

  40. Matthew

    Just found this poll of economists (unclear, but I’m guessing mostly US economists) by the University of Chicago: http://www.igmchicago.org/igm-economic-experts-panel/poll-results?SurveyID=SV_doNZ9FbNq7tDi97 Not one single economist likes the idea of a mandated Taylor Rule with congressional oversight.
    Not one.

  41. Min

    Nick Rowe: “First, what are those triangle thingies? Some accursed engineering thing? Why not just say p*-p^, r*-r^ and y*-y^ like I did?”
    I replaced ∆r* with |r* – r^|. All same same. 🙂
    Actually, if you learn the notation for differentials, you might as well learn ∆ and δ, which stand for difference (or error) and relative difference (or error).
    “Second, neither i(t) nor p(t) will be independent of the errors in estimating the natural rate of interest and the error in estimating the natural rate of output.”
    Not sure which statement I made that you are talking about. In the formula, p(t) is a given, therefore does not change. If you want to say that it, too, is estimated, then we can handle that, too. 🙂 (Probably should, in fact.) The reason for not adding an error term for i(t) when we are estimating the error in p^ directly is precisely because its error depends upon the other errors. We do not want to double count. For instance, height is hereditary, but if we are estimating the height of a son from the height of the father, we do not include the height of the grandfather as well. That would be counting the grandfather’s height twice.
    Notice that when I divided up the argument, estimating the error in i(t) from the estimation errors, I used the error in i(t) to estimate the error in achieving the target, but not the errors of the others. No double counting. 🙂

  42. Min

    Nick Rowe: “Min: your work isn’t done yet! What about my equation 5? that must be screwed up too?”
    Well, I am not sure what you are doing there, and I am not sure if there is enough information to do what I think you want. I would approach things a bit differently I think.
    First, IIUC, the ZLB becomes a problem when the rule says interest rates should be set below zero. That is,
    i(t) = r* + p^ + 1.5(p(t)-p*) + 0.5(y(t)-y*) < 0
    (If I am using the * correctly. ;))
    If I follow you, the expected i(t) = r^ + p^. I know that’s not what you said, but with averages I think that you invoked expectations. I don’t think that that matters, though.
    IIUC, what you want to know is what is the danger that the application of the rule will produce p* such that r* + p* < 0, when that is not the intent. That can happen when Δr* + Δp* > r* + p*. Δp* = 2Δr* + Δy*. So we get
    3Δr* + Δy* > r* + p*
    This yields your (7) when we replace r* with r^ and p* with p^ (and use the correct signs, OC). 🙂

  43. Min

    OK, I have slept on this a bit. 🙂
    IIUC, the argument is that the Taylor Rule is more likely than it appears to transgress the Zero Lower Bound, because it amplifies estimation errors over time by missing the target inflation rate. That is reflected in this equation, which assumes that the Taylor Rule works as advertised.
    ∆p^ = 2∆i(t)
    (Nick bypassed this step.) Decomposing the estimation error in i(t), we get
    ∆p^ = 2∆r* + ∆y*
    Note that the error in hitting the target inflation is twice as sensitive to the error in estimating the natural rate of interest as to the error in estimating the potential output. Note also that the error in hitting the target is observable, while the other two errors are not. Again, assuming that the Taylor Rule works, it is possible, given ∆p^, to make educated guesses about the other errors, i. e., to learn from experience. To do so we would use the form that Nick derived:
    (p*-p^) = 2(r*-r^) – (y*-y^)
    So if we missed the target on the high side, we should probably increase our estimate of the natural rate of inflation, or decrease our estimate of the potential output, or both. One problem with the proposed US legislation is that it fixes the estimate of the natural rate of inflation [typo. Min means “natural rate of interest”. NR], allowing no revision of that estimate.
    Nick asserts that the margin of safety above the Zero Lower Bound is
    r^ + p^
    At first glance whether we go below the ZLB depends only upon ∆r*, since p^ is a given. But when we consider the effect of the Taylor rule on hitting p^, we find that it depends upon 3∆r*, as well as other errors.
    I am not at all sure of that last argument, first, because the danger of trespassing the ZLB depends upon the nature and distribution of the errors, and second, because it is not developed in terms of a time series. It uses i(t) instead of i(t0) and i(t1) and p* instead of p(t1), for instance.
    Let us assume that r* is constant, so that the error in the US legislation is also constant. How is that error propagated via the Taylor Rule? We have
    i(t0) = r^ + p^ + 1.5(p(t0)-p^) + 0.5(y(t0)-y^)
    Let’s assume that our estimate is too high, that r^ > r*, tending to make i(t0) too high by (r^ – r*) or ∆r*. That error is then doubled and propagated to p(t1), tending to make it too low by 2∆r*. Then that error is multiplied by 1.5, tending to make i(t1) too low by 3∆r*. Adding in the constant error, we get a tendency to make i(t1) too low by 2∆r*. That error is then doubled and propagated to p(t2), tending to make it too high by 4∆r*. That error is then multiplied by 1.5, tending to make i(t2) too high by 6∆r*. Adding in the constant error, we get a tendency to make i(t2) too high by 7∆r*.
    Plainly the failure to correct the estimate of r* makes the proposed rule unstable.

  44. Nick Rowe

    Min: we are on the same page. [Assuming I correctly fixed your minor typo in the above.]
    “I am not at all sure of that last argument, first, because the danger of trespassing the ZLB depends upon the nature and distribution of the errors, and second, because it is not developed in terms of a time series. It uses i(t) instead of i(t0) and i(t1) and p* instead of p(t1), for instance.”
    You are basically right on that point. But for me to develop that argument fully, I would need to specify the full macro model, and solve it under uncertainty. Which is too hard.
    But there is a cheat I can use, to avoid doing all that hard work. And that is what I have implicitly done:
    Ignore the ZLB, assume the Taylor Rule works as advertised, and let the model run long enough so that we can ignore the initial conditions for p(0) etc. (If the Taylor Rule does indeed work as advertised, it will eventually settle down so we can ignore those initial conditions.) We get some sort of probability distribution for i(t).
    Now repeat the experiment, only this time assume that r^ is 1% above the true r* at all times. If the Taylor Rule works as advertised, and if my/your arithmetic is correct, we should observe exactly the same probability distribution for i(t), except the whole distribution is shifted 3% to the left.
    Now draw a line at 0% for the ZLB. The area of the distribution to the left of that line will be bigger for the second distribution than for the first. Exactly how much, depends on the shape of the distribution. That area is a good measure of the risk of hitting the ZLB. But my 3% measure of the reduced “margin of error” will be at least a monotonic function of that area, given the shape of the distribution.
    My “margin of error” is a crude measure, and it leaves stuff out, but it still tells us something important.
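Nick’s thought experiment can be sketched numerically: draw a stationary distribution for i(t), shift it 3% left, and compare the mass below the ZLB. A minimal Monte Carlo in Python, assuming (purely for illustration, my choice) that i(t) is normal with a 4% mean and 2% standard deviation:

```python
# Sketch of Nick's "cheat": shift a hypothetical stationary distribution
# of i(t) left by 3% and compare the probability mass below the ZLB.
# The normal distribution and its parameters are my assumptions.
import random

random.seed(0)
N = 200_000
mean_i, sd_i = 4.0, 2.0  # hypothetical long-run mean and sd of i(t), in %

draws = [random.gauss(mean_i, sd_i) for _ in range(N)]

p_zlb_correct = sum(d < 0 for d in draws) / N        # r^ = r*
p_zlb_shifted = sum(d - 3.0 < 0 for d in draws) / N  # r^ 1% too high: i(t) shifts 3% left

print(f"P(hit ZLB), correct r^:     {p_zlb_correct:.3f}")
print(f"P(hit ZLB), r^ 1% too high: {p_zlb_shifted:.3f}")
assert p_zlb_shifted > p_zlb_correct
```

With these particular assumed parameters the ZLB probability jumps roughly from the 2% range to the 30% range, which is the sense in which the 3% shift in the “margin of error” maps monotonically into ZLB risk.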

  45. Nick Rowe

    Min: “That error is then doubled and propagated to p(t1), tending to make it too low by 2∆r*.”
    That bit isn’t right. You are using the Taylor Rule, which is the central bank’s reaction function, as if it were also a complete dynamic model of the economy.

  46. Majromax

    @Min:

    How is that error propagated via the Taylor Rule?
    I think you’re onto something here, so let me work on a more detailed box model. Your approach has no Phillips curve (so output remains constant), which is implausible. Also, your time-stepping is too coarse; if you worked in half-periods you’d converge.
    Assuming a Taylor Rule works, over the extremely long term the inflation target is met (p -> p* = p^), and also output tends towards potential output (y -> y*), with our estimate of potential output eventually converging to the in-reality trend (y^ -> y*) because it’s fit from trends.
    This means that over a really long time, the Taylor Rule *must* have an interest rate that tends towards (r* + p^), with fluctuations about that value to dampen the business cycle. But if r^ and p^ are set by law (at 2% apiece) and the estimate r^ is flawed, then the Taylor Rule as implemented cannot be correct: the central bank will set a consistently wrong interest rate, leading to persistent misses of its inflation target. (This is where Nick’s top post comes in, and the persistent miss means that the CB is “really” using the wrong inflation target).
    Let’s look at this in a bit more detail with an iterated Taylor Rule. Assume an idealized Taylor Rule, where we set interest rates in one arbitrary period and the change is totally effective in the next. Let’s also impose a pretty arbitrary model for output y: any output gap (y-y*) will revert by 50% between periods, and interest rates above (r*+p(t-1)) will further change output by that amount. (So if i=5% but r*=2% and p=2%, the next period’s output will be 1% below what it would have been had i=4%).
    Now, we need some sort of Phillips-type rule. Let’s say that 1% overproduction (y-y* = 1%) will lead to a 1% increase in the inflation rate compared to previous, and this is the only input to inflation. This is consistent with monetary theories that support hyperinflation or deflation: if the interest rate is left unchanged with overproduction, inflation will increase, leading to lower real rates, which leads to further overproduction and so on (flip signs for deflationary spirals). However, this theory is also long-run money-neutral, in that we can reach an equilibrium with any inflation expectation.
    The combination of these two rules makes the Taylor rule exact for an output gap, which is really convenient but not necessary. (1% excess output increases the interest rate by 0.5%. This increased rate combined with 0.5% reversion to mean gives 0% output gap next period and 0% inflation change).
    The full algorithm is as follows:
    1) Given: current inflation p, current output gap ∆y
    2) Set: interest rate i according to the Taylor rule. Errors in r^ ≠ r* enter here.
    3) Compute next-period output gap ∆y’ as 0.5∆y – (i-p-r*)
    4) Compute next-period inflation as p’ = p + ∆y’
    Evaluating this algorithm with initial ∆y=1% and p=2% (so spot-on inflation and a 1% overproduction) gives:
    1: Rate 4.50 causes inflation 2.00 and output gap 0.00
    2: Rate 4.00 causes inflation 2.00 and output gap 0.00
    Conversely, with no output gap (∆y=0%) and larger-than-normal inflation p=3% gives:
    1: Rate 5.50 causes inflation 2.50 and output gap -0.50
    2: Rate 4.50 causes inflation 2.25 and output gap -0.25
    3: Rate 4.25 causes inflation 2.12 and output gap -0.12
    4: Rate 4.12 causes inflation 2.06 and output gap -0.06
    [and so on]
    Now, let’s break the Taylor rule and assume that the central bank believes the real rate is really 3%, 1% in excess of reality. With the same output-gap start, we get:
    1: Rate 5.50 causes inflation 1.00 and output gap -1.00
    2: Rate 3.00 causes inflation 0.50 and output gap -0.50
    3: Rate 2.50 causes inflation 0.25 and output gap -0.25
    4: Rate 2.25 causes inflation 0.12 and output gap -0.12
    5: Rate 2.12 causes inflation 0.06 and output gap -0.06
    6: Rate 2.06 causes inflation 0.03 and output gap -0.03
    [and so on]
    … well shit. Getting the real rate wrong makes the central bank look like an idiot. Despite still targeting 2% inflation, we’re actually getting 0%. Worse yet, the interest rate is extremely low, far below the 5% long-run rate that respectable economists everywhere think it “should” be.
    What about a 2% error (so r^ = 4%, compared to the in-reality 2%)?
    1: Rate 6.50 causes inflation 0.00 and output gap -2.00
    2: Rate 2.00 causes inflation -1.00 and output gap -1.00
    3: Rate 1.00 causes inflation -1.50 and output gap -0.50
    4: Rate 0.50 causes inflation -1.75 and output gap -0.25
    5: Rate 0.25 causes inflation -1.88 and output gap -0.12
    6: Rate 0.12 causes inflation -1.94 and output gap -0.06
    Now we’re firmly into deflationary spiral and ZLB territory. The sky has fallen, monetary policy has failed!
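Majromax’s four-step recipe is easy to replay in code. A Python sketch (function and variable names are mine) that reproduces his three runs:

```python
# Majromax's box model: Taylor rule -> output-gap rule -> Phillips-type
# rule, iterated period by period. Names are mine; rules are as stated.

def simulate(r_hat, r_star=2.0, p_target=2.0, p=2.0, gap=0.0, steps=6):
    """Return a list of (rate, next inflation, next output gap) per period."""
    path = []
    for _ in range(steps):
        # Taylor rule with the bank's (possibly wrong) estimate r_hat.
        i = r_hat + p_target + 1.5 * (p - p_target) + 0.5 * gap
        # Output gap: 50% mean reversion, minus the real-rate overshoot.
        gap = 0.5 * gap - (i - p - r_star)
        # Phillips-type rule: inflation moves one-for-one with the gap.
        p = p + gap
        path.append((i, p, gap))
    return path

# Correct r^ with a 1% output gap: closed in one period, inflation on target.
print(simulate(r_hat=2.0, gap=1.0, steps=2))
# r^ 1% too high: inflation converges to 0%, the rate to 2% (3% below r^ + p^).
print(simulate(r_hat=3.0, gap=1.0)[-1])
# r^ 2% too high: convergence to i = 0% with -2% inflation, the liquidity trap.
print(simulate(r_hat=4.0, gap=1.0)[-1])
```

With the correct r^ the first run gives rate 4.50 then 4.00 with a closed gap, matching the tables above; the broken runs converge exactly as listed.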

  47. Majromax

    Addendum to my previous: this also aptly demonstrates how to get the economy into a liquidity trap.
    Note that with a 2% error in r* (assuming the long-term real rate is 4% when in reality it’s 2), we got the economy to converge at a 0% rate with -2% inflation and 0% output gap. We’re in a full-on liquidity trap. Using the correct real rate going forward would have the Taylor rule set negative interest rates, forcing the central bank to unconventional monetary policy.

  48. Min

    Nick Rowe: “But there is a cheat I can use, to avoid doing all that hard work. And that is what I have implicitly done:
    “Ignore the ZLB, assume the Taylor Rule works as advertised, and let the model run long enough so that we can ignore the initial conditions for p(0) etc. (If the Taylor Rule does indeed work as advertised, it will eventually settle down so we can ignore those initial conditions.)”
    I missed that part of the advertisement. 😉 Based on earlier discussion, I was assuming something like a lag between t(i) and t(i+1) of about 2 years, not long enough to ignore initial conditions, but long enough to expect the rule to be effective.
    Min: “That error is then doubled and propagated to p(t1), tending to make it too low by 2∆r*.”
    Nick Rowe: “That bit isn’t right. You are using the Taylor Rule, which is the central bank’s reaction function, as if it were also a complete dynamic model of the economy.”
    Nothing so grandiose. All I am assuming is the relationship specified by your (4) and that otherwise ∆r* is independent of other errors. It may well be that the effect is swamped by other factors, but as long as there is the relationship expressed in (4) between (r* – r^) and (p* – p^), and (r* – r^) is otherwise independent, if not corrected, it will amplify and propagate as indicated.

  49. Nick Rowe

    I haven’t checked his arithmetic (I wouldn’t dare), but Majromax’s crude model is in the right spirit (because it does indeed converge as the Taylor Rule says it should). And it is confirming my model (if r^ is 1% too high, the eventual result is that p is 2% too low and i is 3% too low).
    Lovely example.

  50. Min

    @ Majromax
    Thanks for your exposition. It underlines my feeling that to make sense of this you need to work out development over time. 🙂
    Majromax: “Compute next-period output gap ∆y’ as 0.5∆y – (i-p-r*)”
    The appearance of r* here shows that ∆r* is not independent of other factors, which violates one of the assumptions that I made. Many thanks. 🙂
    I think that this also defeats the derivation of Nick’s (7). The dependence between y(t+i) and r* needs to be specified.
