Table of Contents - Monetary Policy Implementation Series

(AKA, quite possibly becoming the internet's most epic guide to monetary policy implementation)

This post is a table of contents for my series on monetary policy implementation, which is so far a summary of Ulrich Bindseil’s book. I ultimately plan to use this as a launching pad into contemporary debates on monetary policy and banking. First, though, I want to establish a solid grounding in the fundamentals.
  1. Monetary Policy Terminology
  2. How the Central Bank Targets Interest Rates
I realize this is a lot of material, so I think I'll try to condense it even more at some point. However, if you're really interested in understanding how this stuff works, it's worth going through.

(Also, some of these posts were written several weeks or months back. I've re-dated them so that they all appear in order here).

Monetary Policy Implementation vs. Monetary Policy Strategy: Design Features of Monetary Policy, with Application to the Debate Regarding NGDP Targeting


I was (re)introduced to monetary policy operations when I came across the work of Scott Fullwiler a couple of years ago. Reading his papers, it soon became apparent to me that there were some serious discrepancies between what I was taught about monetary policy in college and how Fullwiler explains it is actually implemented by real-world central banks. This was odd and distressing to me. Could it really be true that mainstream monetary policy theory was this divorced from reality? At this point in my research, the answer seems to be “yes” and “no.” Ulrich Bindseil’s book, which I discussed in a previous post, supports this notion. Bindseil is the Deputy Director General of Market Operations at the European Central Bank.

Building a Model of Monetary Policy Implementation – The Central Bank's Balance Sheet


I’m now getting into the nitty gritty of Bindseil’s book on monetary policy implementation ("MPI" - my abbreviation), which is to say the part where we start to develop a mathematical view of MPI. The math in this post is limited to basic balance sheet accounting and algebra, but we’re working our way towards a more rigorous view. Again, as a reminder, this is a summary of Bindseil's book, infused and modified with my own interpretation and explanations.

The Aggregate Liquidity Management Model of Interbank Rate Targeting


In a previous post, we explored how the central bank’s balance sheet evolves over the reserve maintenance period. In this post, we’ll review one of two models Bindseil presents for how the central bank controls the interbank rate. Here, we focus on the “aggregate liquidity management model.”

Bindseil notes that there is a rich and heterogeneous literature related to these ideas, but that most of it places an emphasis on examining the empirical properties of the interbank rate, rather than modeling how the central bank controls it. So this is a bit "exclusive," if you will. The modeling approach that Bindseil focuses on has Poole (1968) at its origins. 

The Aggregate Liquidity Management Model with Certainty

The “aggregate liquidity management model" focuses on the banking system as a whole, as opposed to taking an individual bank's perspective. The other model I'll review in a future post adopts the latter approach.

To begin, assume perfect interbank markets. Also assume the following type of reserve maintenance period, which is one day long and permits intraday overdrafts (which can be thought of as reserve averaging around 0):

Steering Interbank Rates under the Aggregate Liquidity Management Model

The aggregate liquidity management model gave us the following equation for what determines the interbank rate:

Equation A: i_t = i_B·P + i_D·(1 − P)

where i_t is the interbank rate, i_B and i_D are the expected standing facility rates at the end of the maintenance period, P is the probability of recourse to the borrowing facility, and (1 − P) is the probability of recourse to the deposit facility.

I used Equation A to create the following table, which describes the central bank’s policy options for changing interest rates. This is not in Bindseil’s book, but it directly follows from Equation A, and I think it is a nice way to condense the equation’s implications. Each cell is the result of keeping constant, changing, or not controlling some combination of the variables in Equation A.
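As a quick sketch of what Equation A implies, the snippet below computes the interbank rate for a few liquidity scenarios. The corridor rates here are made-up illustrative numbers, not figures from Bindseil's book:

```python
# Hypothetical corridor rates (illustrative only): a 2% deposit (floor)
# rate and a 4% borrowing (ceiling) rate, i.e. a symmetric 200 bp corridor.
i_B = 0.04  # borrowing facility rate
i_D = 0.02  # deposit facility rate

def interbank_rate(i_B, i_D, P):
    """Equation A: expected interbank rate, given the probability P of
    recourse to the borrowing facility at the end of the period."""
    return i_B * P + i_D * (1 - P)

# With balanced liquidity (P = 0.5) the rate sits at the corridor midpoint.
mid = interbank_rate(i_B, i_D, 0.5)

# The central bank can move the rate by shifting the corridor (i_B, i_D)
# or by changing P via open market operations.
tight = interbank_rate(i_B, i_D, 0.9)   # scarce reserves -> rate near i_B
ample = interbank_rate(i_B, i_D, 0.1)   # abundant reserves -> rate near i_D
```

The point of the table is visible even in this toy version: the same target rate can be reached either by moving the corridor or by steering P.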

Appendix: Explanation of the Math behind the Aggregate Liquidity Management Model of the Interbank Rate

This appendix explains the mathematical foundations for the "aggregate liquidity management model" of the interbank rate. All that's required to understand it is a basic familiarity with probability and calculus.

The Individual Shock Model of Liquidity Management

In contrast to the aggregate liquidity model, another way to model interest rates is to take the individual bank’s perspective. This model may be more appropriate if reserve requirements are very low and/or if banks cannot average reserve requirements across the maintenance period. (Ultimately, a central bank may use both models as inputs to its decision-making.)

First, consider the idea that the demand for reserve balances depends on uncertainties relating to payment flows, which vary all the time. For instance, if a bank experiences a liquidity shock after the interbank market has closed and does not have enough reserves on hand, it will be forced to go into a costly overdraft with the central bank. Such uncertainty creates a demand for reserves, since reserves aren’t always acquirable in the interbank market or through cheap, daylight overdrafts.

To further develop this idea, let’s introduce the variable q, which is the amount of reserves each bank holds before the liquidity shock. Also recall the simplified form of Equation 6 discussed here: B − D = M − A. Assuming this amount of reserves is spread equally across all N banks, q = (M − A) / N. If we assume the liquidity shocks, represented by the variable x, are normally distributed (with φ representing the probability density function (pdf) of the standard normal distribution – see this Appendix for a description of pdfs), then the expected value of the liquidity shocks is zero across all banks. EDIT: Also assume no deposit facility.

Let’s now consider the expected cost of going into overdraft, C(q). (Bindseil uses C(q); I would have preferred something more of the form E[f(x)].) A bank will go into overdraft if it has negative reserves at the end of the day: q + x < 0. In that case it must borrow the shortfall, −(q + x), from the central bank at the penalty rate i_B, so the borrowing cost is −i_B·(q + x), a positive number. If q + x > 0, then there is no need for the bank to borrow from the central bank, so there are no borrowing costs. In summary, the cost of borrowing, call it f(q), is:

f(q) = −i_B·(q + x) , if x < −q
f(q) = 0 , if x > −q

We can now calculate the expected cost, C(q), of going into overdraft:

C(q) = ∫_{−∞}^{−q} −i_B·(q + x) φ(x) dx

See the Appendix for an explanation of this equation if its form is not familiar to you.
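As a sanity check on this expectation, we can compare a closed form against a brute-force simulation. The snippet below assumes standard-normal shocks and treats the cost as the penalty rate i_B (an illustrative 4%, not a figure from the book) times the size of the overdraft, max(−(q + x), 0); under those assumptions the integral works out to i_B·(φ(q) − q·Φ(−q)):

```python
import random
from statistics import NormalDist

i_B = 0.04          # illustrative penalty (overdraft) rate
N01 = NormalDist()  # standard normal: .pdf is phi, .cdf is Phi

def expected_overdraft_cost(q):
    """Closed form of C(q) for standard-normal shocks:
    C(q) = i_B * (phi(q) - q * Phi(-q))."""
    return i_B * (N01.pdf(q) - q * N01.cdf(-q))

def monte_carlo_cost(q, n=200_000, seed=0):
    """Estimate C(q) = E[ i_B * max(-(q + x), 0) ] by simulation."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(0.0, 1.0)
        total += i_B * max(-(q + x), 0.0)
    return total / n
```

The two agree closely, and C(q) falls as q rises, which is the property the marginal analysis below relies on.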

From this, one can understand that if the bank were to increase the quantity of reserves q that it holds, then the probability of it going into overdraft should decline, and with that, the expected cost of going into overdraft. But holding more reserves q entails a cost as well, which is the interbank interest rate. Thus, a bank will only acquire more q if the cost of doing so, i, is less than or equal to the benefit of doing so, which is the corresponding decline in the expected cost of going into overdraft C(q). Or, another way of viewing it is that the interbank rate, which is the opportunity cost of holding reserves, should settle at a level where banks are indifferent between holding one more unit of reserves and not. This is Econ 101 marginal analysis, where we determine the equilibrium market interest rate by setting marginal cost equal to marginal benefit. We can kick the complexity up a notch by representing the equilibrium market rate using partial derivatives:

Equation B: −∂C(q)/∂q = i

Since q = (M – A)/N, the central bank can clearly control i by controlling [iD, iB] and/or by altering M to an appropriate level, which is in a simple linear relationship with q (in this simplified model). The appropriate level will be determined by the shape of the probability density function perceived by the market, which the central bank needs to estimate. Of course, in the real world, the “market” may not transparently be using such a clean and simple model of probability. The model is a proxy for a more complicated process in the real world, so central banking is certainly part art form as well. Nonetheless, the model is helpful, as it helps the central bank estimate the appropriate amount of open market operations and grapple with the expectations of the market. All of that, combined with building a credible reputation, helps the central bank achieve its interest rate target with precision.
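To make this concrete, Equation B can be inverted numerically. Under the assumptions used above (standard-normal shocks, no deposit facility, cost equal to the penalty rate times the overdraft), the marginal benefit of a unit of reserves is i_B·Φ(−q), so the equilibrium condition i_B·Φ(−q) = i can be solved for q, and then for M via q = (M − A)/N. All the numbers here (rates, A, N) are made up for illustration:

```python
from statistics import NormalDist

N01 = NormalDist()  # standard normal distribution

def required_reserves_per_bank(i_target, i_B):
    """Invert Equation B for standard-normal shocks:
    i_B * Phi(-q) = i  =>  q* = -Phi^{-1}(i / i_B)."""
    return -N01.inv_cdf(i_target / i_B)

def required_open_market_supply(i_target, i_B, A, N):
    """Since q = (M - A) / N, the needed reserve supply is M = A + N * q*."""
    return A + N * required_reserves_per_bank(i_target, i_B)

# Illustration: 4% penalty rate, 2% target (half of i_B), 100 banks,
# autonomous factors A = 500. A target at half the penalty rate implies
# a 50% overdraft probability, i.e. q* = 0 and M = A.
q_star = required_reserves_per_bank(0.02, 0.04)
M = required_open_market_supply(0.02, 0.04, A=500, N=100)
```

Lowering the target below half the penalty rate requires q* > 0, i.e. a larger M: exactly the "altering M to an appropriate level" described above.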

For a more complete version of the individual shock model, see this post on Michael Woodford's model.

Appendix: Explanation of the Math behind the Individual Shock Model of the Interbank Rate

This appendix explains the mathematical foundations for the "individual shock model of liquidity management.” All that's required to understand it is a basic familiarity with probability and calculus.
Here's the basic mathematical representation of the model: