Be careful with the Taylor Rule


Intermoney | Looking back confirms that the Fed has prioritised making up the accumulated shortfall of accommodation imposed by the 0% lower bound on interest rates. This justifies the progressive rise in US rates, since it puts greater weight on monetary rules that take into account both the current situation and the accumulated gap. The Fed thus escapes the mere application of the traditional Taylor Rule, placing essential discretion at the centre of its actions.

The Great Recession was proof of the importance of giving central banks flexibility rather than restricting them with closed rules. The situation required new tools and “more adaptability” than had been needed in past episodes. As a result, falling back on predetermined rules grounded in different experiences would simply have been a mistake. What’s more, the growing interconnection of the financial markets and their greater sensitivity to certain turbulences obliged, and still obliges, us to give the central banks more freedom to act, if we really want them to be effective in doing their job. The mistake would be to treat aspects like inflation, growth or financial stability as sealed compartments.

Now, what we have just said doesn’t imply that monetary policy rules should be underestimated, as they are a very useful tool for guiding the central banks’ actions. These rules, as defined by Taylor, are a description (algebraic, numeric and/or graphic) of how the instrument of monetary policy (interest rate, monetary base, etc.) is modified by the central bank in response to changes in variables like inflation and economic activity, amongst others. Another question, as we mentioned, is treating them as a mechanical guide, since this would mean ignoring certain changing factors in the economy and the specific problems they bring.
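The best-known example of such a rule is the one Taylor proposed in 1993. A minimal sketch in Python, using Taylor’s original illustrative parameters (a 2% neutral real rate, a 2% inflation target and equal weights of 0.5 on both gaps):

```python
def taylor_rule(inflation, output_gap,
                neutral_real_rate=2.0, inflation_target=2.0):
    """Classic Taylor (1993) rule: the prescribed nominal policy rate
    as a function of the inflation gap and the output gap (all in %)."""
    return (neutral_real_rate + inflation
            + 0.5 * (inflation - inflation_target)
            + 0.5 * output_gap)

# With inflation at target and no output gap, the rule prescribes
# the neutral nominal rate: 2% real + 2% inflation = 4%.
print(taylor_rule(2.0, 0.0))  # 4.0
```

Treated mechanically, this formula would prescribe a rate for every data release; the article’s point is precisely that central banks use it as a reference, not a trigger.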

Powell took the same line when appearing before the US political authorities, but added another, equally important nuance: the relevance of the information with which these rules are fed and which determines their prescription. There is no doubt about the need to avoid applying them “lightly”, as there are many considerations which they don’t take into account and, we insist, they produce results which depend on the quality of the information which feeds them.

Once these initial weaknesses are taken into account, the next step is to be aware of the rules that exist beyond the well-known Taylor Rule, starting with the adjusted version of that rule, which builds in a 0% lower bound on rates and is very important in the Fed’s case. The institution also takes into account other rules, like the balanced-approach rule, the price-level rule and the so-called “first-difference” rule, because “they take into account how far the economy is from achieving the objectives of the Fed’s double mandate, based on obtaining full employment and price stability.”

The special features of the rules outlined explain the disparate results which emerge when they are applied. In general terms, a central role is played by the difference between the unemployment rate sustainable in the long term and its current level, as a way of approximating the deviation of real activity from its potential level. As the Fed itself says, “the gap with respect to the potential product has been replaced by the gap between the long-term unemployment rate and its current level (using a relationship known as Okun’s law), with the aim of representing the rules in terms of the FOMC’s statutory objectives.” This approach rests on the high historical correlation between fluctuations in production and the unemployment gap.
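The substitution the Fed describes can be sketched by replacing the output gap with an Okun’s-law multiple of the unemployment gap. The coefficient of 2 below is a common textbook value, used here as an illustrative assumption rather than the Fed’s published figure:

```python
def taylor_rule_unemployment(inflation, u_long_run, u_current,
                             neutral_real_rate=2.0, inflation_target=2.0,
                             okun_coefficient=2.0):
    """Taylor-type rule with the output gap replaced, via Okun's law,
    by the gap between long-run and current unemployment (all in %)."""
    output_gap = okun_coefficient * (u_long_run - u_current)
    return (neutral_real_rate + inflation
            + 0.5 * (inflation - inflation_target)
            + 0.5 * output_gap)

# Unemployment one point below its long-run level reads as a positive
# output gap, tightening the prescription above the 4% neutral rate.
print(taylor_rule_unemployment(2.0, 4.5, 3.5))  # 5.0
```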

In conjunction with the gap with respect to potential activity, a key role in the majority of the stated rules is played by the difference between current inflation and the long-term objective. It is worth bearing in mind that the 2% target in cases like the US is based not on the CPI but on the deflator of personal consumption expenditure (the PCE price index); a measure which, amongst other things, filters out the short-term disruptions generated by the more volatile inflation components, disruptions which could otherwise prescribe rate rises or cuts that are inadequate in the medium term. Once again, we see the key role played by the ingredients which feed the rules.

The point is that the previous gaps are not treated in the same way in all the rules stated. In the case of the so-called “first-difference” rule, what is taken into account is not the current and long-term levels of unemployment themselves, but rather the change in the gap between the two variables. The key element of this rule, however, lies in the fact that it does not rely on an estimate of the long-term neutral rate, a variable which cannot be observed and is therefore subject to a certain degree of error. Instead, it starts from the official rates in force just beforehand, adjusting the result according to the deviation from the inflation target and the variation in the unemployment gap. We are therefore talking about a rule which establishes greater gradualism and fits very well with the Fed’s recent strategy and actions: the Fed calls it the change rule, it was mentioned in one of Yellen’s speeches at Stanford University in January 2017, and it matches almost perfectly the trend observed in official US rates in the recent past.
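A stylised version of this change rule can be written down directly: instead of computing a rate level from an unobservable neutral rate, it nudges the rate already in force. The coefficients below are illustrative assumptions, not the Fed’s published parametrisation:

```python
def first_difference_rule(prev_rate, inflation, u_gap_change,
                          inflation_target=2.0):
    """Stylised first-difference ('change') rule: adjusts the rate in
    force just beforehand, so no long-run neutral-rate estimate is
    needed (all in %). u_gap_change is the change in the gap between
    long-run and current unemployment."""
    return (prev_rate
            + 0.5 * (inflation - inflation_target)
            + u_gap_change)

# Starting from 1.5%, with inflation on target and a quarter-point
# improvement in the unemployment gap, the rule prescribes a gradual
# move up to 1.75% rather than a jump to a model-implied level.
print(first_difference_rule(1.5, 2.0, 0.25))  # 1.75
```

Because each prescription is anchored on the previous rate, the path it traces is inherently gradual, which is why it fits the step-by-step hikes observed in the US so well.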

So we reach a first, valuable conclusion: it’s a mistake to put all the emphasis on the Taylor Rule when there are other mechanisms which better track the steps taken by institutions like the Fed. Furthermore, as far as the Taylor Rule goes, we come up against the need to estimate the real long-term neutral rate, namely the rate consistent with a sustainable situation of full employment and stable inflation. At the same time, the Fed’s stance, with its unbreakable 0% floor for rates, imposes a clear constraint on the classic Taylor Rule which leads us to its adjusted version.
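In its simplest form, the adjustment for the 0% floor just clips the classic prescription at zero. A minimal sketch, assuming the standard 1993 parametrisation for the unadjusted rule inside:

```python
def adjusted_taylor_rule(inflation, output_gap,
                         neutral_real_rate=2.0, inflation_target=2.0):
    """Taylor (1993) rule adjusted for the 0% lower bound: the
    prescribed rate is never allowed to go negative (all in %)."""
    unadjusted = (neutral_real_rate + inflation
                  + 0.5 * (inflation - inflation_target)
                  + 0.5 * output_gap)
    return max(0.0, unadjusted)

# Deep slump: zero inflation and a -5% output gap make the unadjusted
# rule call for -1.5%, but the adjusted rule stops at the 0% floor.
print(adjusted_taylor_rule(0.0, -5.0))  # 0.0
```

That clipped period is precisely where the accumulated shortfall of accommodation builds up, which motivates the rules discussed next.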

The previous stance is similar to that of the price-level rule, which also incorporates a 0% lower bound. In this case, however, the key lies in the accumulated inflation shortfall or surplus with respect to the monetary authority’s objectives over a specific period. In practice, this implies that if at the end of a five-year period, for example, the core PCE index in the US has risen 8% against a cumulative target of 10.4%, the movements in interest rates would be much more gradual, with the opposite happening if the index overshoots.
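The figures in that example check out: five years of 2% annual inflation compound to roughly a 10.4% rise in the price level, so an actual rise of 8% leaves a shortfall that a price-level rule would try to make up. A quick verification:

```python
# Cumulative price-level target after 5 years of 2% annual inflation.
target = (1.02 ** 5 - 1) * 100   # compounding, not 5 x 2% = 10%
actual = 8.0                     # the article's example outcome
shortfall = target - actual

print(f"cumulative target: {target:.1f}%")     # 10.4%
print(f"shortfall vs target: {shortfall:.1f} pp")
```

The shortfall of roughly 2.4 percentage points is what, under this rule, would justify keeping rate rises much more gradual until the price level catches up with its target path.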

And so the rules which take into account past shortfalls of accommodative monetary policy, and which result in much more gradual moves in rates, are playing an important role within the Fed and, undoubtedly, other monetary authorities. In fact, in the document sent by the institution to the US Congress, it’s acknowledged that these rules “set out actions which are more suitable than others, after a period when rates were in negative territory.”


About the Author

The Corner
The Corner has a team of on-the-ground reporters in capital cities ranging from New York to Beijing. Their stories are edited by the teams at the Spanish magazine Consejeros (for members of companies’ boards of directors) and at the stock market news site Consenso Del Mercado (market consensus). They have worked in economics and communication for over 25 years.