12/28/2013

But It's Not Always Like a Helicopter Drop...

One of the lingering questions I have regarding the interbank rate models I've reviewed concerns whether they need to address how exactly reserves enter the interbank market. The models I've reviewed so far seem to assume that there is essentially a "helicopter drop" of reserves into the banking system; at least Woodford's model appears that way. Consider that model's cost function of refinancing with reserves: C(sj) = i·sj − iB·Ej[min(sj + εj, 0)] − iD·Ej[max(sj + εj, 0)]. Reserves can only be borrowed or lent at the interbank rate, i, during the interbank market session. There is no repurchase agreement (repo) rate to be found, for example.
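To get a feel for how that cost function behaves, here's a quick Monte Carlo sketch. The corridor rates, shock scale, and sample size are all made up for illustration, not taken from Woodford's paper:

```python
import random

def expected_cost(s, i=0.02, i_b=0.03, i_d=0.01, sigma=1.0, n=100_000, seed=0):
    """Monte Carlo estimate of C(s) = i*s - i_b*E[min(s+eps,0)] - i_d*E[max(s+eps,0)]
    with eps ~ N(0, sigma^2). All parameter values here are illustrative."""
    rng = random.Random(seed)
    shortfall_sum = surplus_sum = 0.0
    for _ in range(n):
        eps = rng.gauss(0.0, sigma)
        shortfall_sum += min(s + eps, 0.0)  # ends short -> pays the penalty rate i_b
        surplus_sum += max(s + eps, 0.0)    # ends long -> earns only the deposit rate i_d
    return i * s - i_b * (shortfall_sum / n) - i_d * (surplus_sum / n)

# With i set at the middle of the corridor, expected cost is minimized near
# s = 0: holding extra reserves forgoes i - i_d, while running short risks
# paying i_b - i.
costs = {s: expected_cost(s) for s in (-1.0, 0.0, 1.0, 2.0)}
```

Note how the only funding margins in the sketch are the interbank rate and the two standing facilities, which is exactly the point: there's no repo leg anywhere in the cost function.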

In a sense, this is somewhat accurate if all open market operations (OMOs) are conducted with non-bank dealers. In this scenario, banks passively accept reserves when non-bank dealers sell their bonds to the central bank in exchange for deposits. But even then, one has to wonder whether the repo rate at which those transactions occur, or the prevailing repo rate in general, affects the interbank rate in some way. Second, to my knowledge, not all OMOs in the U.S. are conducted with non-bank dealers. I'd be curious if a reader knew the ratio of bank vs. non-bank OMO purchase volumes (I'm sure this varies across countries).

Hopefully, I can figure out answers to these questions, as they seem essential to acquiring a rigorous understanding of the linkages of funding rates within the economy. I believe Bindseil touches on this in his book, which I'll continue digging through, among other sources.

P.S. I'm using the term "helicopter drop" in a slightly different way than it's typically used, which is in the context of (unconventional) monetary policy stimulus. Here, I'm simply considering the mechanisms by which reserves usually enter the banking system - that is, not against the backdrop of unconventional monetary policy ideas.

12/18/2013

Michael Woodford's Individual Liquidity Shock Model of the Interbank Rate

Summary

This post gives a more complete explanation of the individual shock model introduced here. Bindseil gives a somewhat incomplete treatment of it in his book, given that he prefers the aggregate liquidity management model for its relative simplicity and so devotes most of his attention to it. In my own search for a more detailed explanation of the individual shock model, I found this 2007 paper from Bindseil, which reviews a more detailed individual shock model from Michael Woodford’s 2001 Jackson Hole paper, Monetary Policy in the Information Economy. This post explains that model and the math behind it in more detail than can be found in those papers.

11/15/2013

Table of Contents - Monetary Policy Implementation Series

(AKA, quite possibly becoming the internet's most epic guide to monetary policy implementation)

This post is a table of contents for my series on monetary policy implementation, which is so far a summary of Ulrich Bindseil’s book. I ultimately plan to use this as a launching pad into contemporary debates on monetary policy and banking. First, though, I want to establish a solid grounding in the fundamentals.
  1. Monetary Policy Terminology
  2. How the Central Bank Targets Interest Rates
I realize this is a lot of material, so I think I'll try to condense it even more at some point. However, if you're really interested in understanding how this stuff works, it's worth going through.

(Also, some of these posts were written several weeks or months back. I've re-dated them so that they all appear in order here).

Monetary Policy Implementation vs. Monetary Policy Strategy: Design Features of Monetary Policy, with Application to the Debate Regarding NGDP Targeting

Introduction

I was (re)introduced to monetary policy operations when I came across the work of Scott Fullwiler a couple of years ago. Reading his papers, it soon became apparent to me that there were some serious discrepancies between what I was taught about monetary policy in college and how Fullwiler explains it is actually implemented by real-world central banks. This was odd and distressing to me. Could it really be true that mainstream monetary policy theory was this divorced from reality? At this point in my research, the answer seems to be “yes” and “no.” Ulrich Bindseil’s book, which I discussed in a previous post, supports this notion. Bindseil is the Deputy Director General of Market Operations at the European Central Bank.

Building a Model of Monetary Policy Implementation – The Central Bank's Balance Sheet

Overview

I’m now getting into the nitty gritty of Bindseil’s book on monetary policy implementation ("MPI" - my abbreviation), which is to say the part where we start to develop a mathematical view of MPI. The math in this post is limited to basic balance sheet accounting and algebra, but we’re working our way towards a more rigorous view. Again, as a reminder, this is a summary of Bindseil's book, infused and modified with my own interpretation and explanations.

The Aggregate Liquidity Management Model of Interbank Rate Targeting

Introduction

In a previous post, we explored how the central bank’s balance sheet evolves over the reserve maintenance period. In this post, we’ll review one of two models Bindseil presents for how the central bank controls the interbank rate. Here, we focus on the “aggregate liquidity management model.”

Bindseil notes that there is a rich and heterogeneous literature related to these ideas, but that most of it places an emphasis on examining the empirical properties of the interbank rate, rather than modeling how the central bank controls it. So this is a bit "exclusive," if you will. The modeling approach that Bindseil focuses on has Poole (1968) at its origins. 

The Aggregate Liquidity Management Model with Certainty

The “aggregate liquidity management model" focuses on the banking system as a whole, as opposed to taking an individual bank's perspective. The other model I'll review in a future post adopts the latter approach.

To begin, assume perfect interbank markets. Also assume the following type of reserve maintenance period, which is one day long and permits intraday overdrafts (which can be thought of as reserve averaging around 0):



Steering Interbank Rates under the Aggregate Liquidity Management Model

The aggregate liquidity management model gave us the following equation for what determines the interbank rate:

Equation A: it = iB·P + iD·(1 − P)

where it is the interbank rate, iB and iD are the expected standing facility rates at the end of the maintenance period, P is the probability of recourse to the borrowing facility, and (1 − P) is the probability of recourse to the deposit facility.
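Equation A is simple enough to sandbox numerically. A minimal sketch, with made-up corridor rates:

```python
def corridor_rate(i_b, i_d, p_borrow):
    """Equation A: the interbank rate as a probability-weighted average of the
    standing facility rates. p_borrow is P, the probability of recourse to the
    borrowing facility; both rates here are illustrative."""
    return i_b * p_borrow + i_d * (1.0 - p_borrow)

# Illustrative 1%-3% corridor:
mid = corridor_rate(0.03, 0.01, 0.5)    # balanced liquidity -> mid-corridor
tight = corridor_rate(0.03, 0.01, 0.8)  # liquidity deficit -> nearer the lending rate
```

As P sweeps from 0 to 1, the rate sweeps the corridor from iD to iB, which is why steering the probability of recourse (e.g., via OMOs) steers the rate.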

I used Equation A to create the following table, which describes the central bank’s policy options for changing interest rates. This is not in Bindseil’s book, but it directly follows from Equation A, and I think it is a nice way to condense the equation’s implications. Each cell is the result of keeping constant, changing, or not controlling some combination of the variables in Equation A.



Appendix: Explanation of the Math behind the Aggregate Liquidity Management Model of the Interbank Rate

This appendix explains the mathematical foundations for the "aggregate liquidity management model" of the interbank rate. All that's required to understand this is a basic familiarity with probability and calculus.

The Individual Shock Model of Liquidity Management

In contrast to the aggregate liquidity model, another way to model interest rates is to take the individual bank’s perspective. This model may be more appropriate if reserve requirements are very low and/or if banks cannot average reserve requirements across the maintenance period. (Ultimately, a central bank may use both models as inputs to its decision-making.)

First, consider the idea that the demand for reserve balances depends on uncertainties relating to payment flows, which vary all the time. For instance, if a bank experiences a liquidity shock after the interbank market has closed and does not have enough reserves on hand, it will be forced to go into a costly overdraft with the central bank. Such uncertainty creates a demand for reserves, since reserves aren’t always acquirable in the interbank market or through cheap, daylight overdrafts.

To further develop this idea, let’s introduce the variable q, which is the amount of reserves each bank holds before the liquidity shock. Also recall the simplified form of Equation 6 discussed here: B − D = M − A. Assuming this amount of reserves is spread equally across all N banks, q = (M − A)/N. If we assume the liquidity shocks, represented by the variable x, are normally distributed (with φ representing the probability density function (pdf) of the standard normal distribution – see this Appendix for a description of pdfs), then the expected value of the liquidity shocks is zero across all banks. EDIT: Also assume no deposit facility.

Let’s now consider the expected cost of going into overdraft, C(q). (Bindseil uses C(q). I would have preferred something more of the form E[f(x)].) A bank will go into overdraft if it has negative reserves at the end of the day: x + q < 0. The cost of borrowing in this case will be −iB·(q + x), which is positive since (q + x) is negative. If (q + x) > 0, then there is no need for the bank to borrow from the central bank, so there are no borrowing costs. In summary, the cost of borrowing, call it f(q), is:

f(q) = −iB·(q + x) , if x < −q
f(q) = 0 , if x ≥ −q

We can now calculate the expected cost, C(q), of going into overdraft:

C(q) = −iB · ∫_{−∞}^{−q} (q + x) φ(x) dx

See the Appendix for an explanation of this equation if its form is not familiar to you.
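This expected cost can also be checked numerically. Under the standard normal shock assumed above, it has the closed form C(q) = iB·(φ(q) − q·Φ(−q)), where Φ is the standard normal cdf. Here's a sketch comparing that closed form with a brute-force simulation; the iB value is purely illustrative:

```python
import math
import random

def overdraft_cost_analytic(q, i_b=0.03):
    """Closed form of the expected overdraft cost for a standard normal
    shock: C(q) = i_b * (phi(q) - q * Phi(-q)). i_b is illustrative."""
    phi_q = math.exp(-q * q / 2.0) / math.sqrt(2.0 * math.pi)  # pdf at q
    Phi_neg_q = 0.5 * math.erfc(q / math.sqrt(2.0))            # P(x < -q)
    return i_b * (phi_q - q * Phi_neg_q)

def overdraft_cost_mc(q, i_b=0.03, n=200_000, seed=1):
    """Brute-force check: average the overdraft penalty -i_b*(q + x) over
    draws that end in overdraft (x < -q), zero otherwise."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(0.0, 1.0)
        if x < -q:
            total += -i_b * (q + x)
    return total / n
```

The two agree closely, and both fall as q rises, which is exactly the margin the marginal analysis below exploits.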

From this, one can understand that if the bank were to increase the amount of reserves q that it holds, then the probability of it going into overdraft should decline, and with that, the expected cost of going into overdraft. But holding more reserves q entails a cost as well, which is the interbank interest rate. Thus, a bank will only acquire more q if the cost of doing so, i, is less than or equal to the benefit of doing so, which is the corresponding decline in the expected cost of going into overdraft, C(q). Another way of viewing it: the interbank rate, which is the opportunity cost of holding reserves, should settle at a level where banks are indifferent between holding one more unit of reserves and not. This is econ 101 marginal analysis, where we determine the equilibrium market interest rate by setting marginal cost equal to marginal benefit. We can kick the complexity up a notch by representing the equilibrium market rate using partial derivatives:

Equation B: −∂C(q)/∂q = i

Since q = (M – A)/N, the central bank can clearly control i by controlling [iD, iB] and/or by altering M to an appropriate level, which is in a simple linear relationship with q (in this simplified model). The appropriate level will be determined by the shape of the probability density function perceived by the market, which the central bank needs to estimate. Of course, in the real world, the “market” may not transparently be using such a clean and simple model of probability. The model is a proxy for a more complicated process in the real world, and so certainly central banking is part art-form as well. Nonetheless, the model is helpful, as it helps the central bank estimate the appropriate amount of open market operations and grapple with the expectations of the market. All of that, combined with building a credible reputation, helps the central bank achieve its interest rate target with precision.
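To make that calibration step concrete: under the normal-shock assumption, Equation B reduces to i = iB·Φ(−q/σ), which can be inverted for q and then mapped to M via q = (M − A)/N. Here's a sketch; the rates and balance-sheet figures are invented for illustration:

```python
from statistics import NormalDist

def required_reserves_per_bank(i_target, i_b, sigma=1.0):
    """With a N(0, sigma^2) shock and no deposit facility, Equation B becomes
    i = i_b * Phi(-q / sigma), so q* = -sigma * Phi^{-1}(i / i_b)."""
    return -sigma * NormalDist().inv_cdf(i_target / i_b)

def monetary_base_for_target(i_target, i_b, n_banks, autonomous_a, sigma=1.0):
    """Since q = (M - A) / N in this simplified model, the monetary base
    consistent with the target rate is M = N * q* + A."""
    return n_banks * required_reserves_per_bank(i_target, i_b, sigma) + autonomous_a
```

For example, targeting i = 1% with a 3% borrowing facility requires each bank to face an overdraft probability of 1/3; a lower target implies more reserves per bank and hence larger OMO purchases.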

For a more complete version of the individual shock model, see this post on Michael Woodford's model.

Appendix: Explanation of the Math behind the Individual Shock Model of the Interbank Rate

This appendix explains the mathematical foundations for the "individual shock model of liquidity management." All that's required to understand this is a basic familiarity with probability and calculus.

Here's the basic mathematical representation of the model:

10/09/2013

Interest Rates vs. Monetary Aggregates: A (Sort of) Brief History of the Monetary Policy Implementation Debate

As I mentioned in a previous post, I will be doing a series of posts on issues surrounding monetary policy implementation. I already broached the concept of monetary policy implementation vs. strategy, an understanding of which I think is critical for delving deeper into this topic, including what’s contained within this post.

Below, I summarize Bindseil’s account of the history of the debate surrounding the optimal approach to monetary policy implementation. Many of the people and ideas contained within will be familiar to those interested in this topic, but the historical arc along which they appear may be less widely known. Bindseil relies on an extensive set of primary and secondary sources. However, these views are not necessarily unique to Bindseil. I’d imagine Post-Keynesians would give similar accounts, and likely have. I’d also emphasize at the outset that this is a 4 page summary of ~34 pages of Bindseil’s work. Bindseil obviously provides much more argumentation, citation, and quotation than I do here. Moreover, this topic is returned to throughout his ~250 page book.

9/15/2013

Brief Thoughts on Reconciling Heterodox and Mainstream Macroeconomic Ideas

In my gut, I feel there must be a way to reconcile the mainstream New Keynesian model with the mechanistically accurate way many heterodox economists (mostly thinking of Post-Keynesians here) describe the functioning of the monetary system. There will likely be some irreconcilable gaps, particularly when we start to move away from the monetary system, but it may be possible in a rough sense. I feel this way because most of the time, the two sides can’t even agree on their disagreements (okay, at least the participants on the internet). To me, that’s a sign there’s a serious element of cross-talk whereby each side is speaking a different language.

Why Calling Banks "Financial Intermediaries Between Lenders and Borrowers" Is Okay, Even if You View Banks Through the "Endogenous Money" Lens or Something Similar

Introduction

There have been extensive debates on the econoblogosphere over the past couple of years regarding the degree to which banks are “special” due to the manner in which they create money and credit. Steve Randy Waldman, as is often the case, provides a helpful synthesis, discussion, and collection of links regarding these debates.

As almost always in these sorts of mainstream versus heterodox intellectual scuffles, the extent to which there is disagreement due to differences in framing and semantics, as opposed to the underlying fundamental issues, is unclear. Additionally, there are many elements to this debate, and both sides have valid things to say (disclaimer: I am admittedly amalgamating the debaters into two broad camps for simplicity, but further delineations can be made). However, I do not think that for each and every element of the debate, each side offers an equally valid viewpoint. In any case, my intention here is not to provide a comprehensive analysis of these matters. I recommend reading JKH’s commentary at the following links for more detailed thoughts along these lines.

What I’d like to focus on is the mainstream economist’s tendency to characterize banks as ‘financial intermediaries between borrowers and lenders’ with respect to their lending function. Many participants (but not all!) in this debate that lean towards the economic heterodoxy have a problem with this characterization, given their usually more nuanced understanding of banking operations and mechanics. I understand and sympathize with this perspective, but I ultimately aim to argue what’s stated in the title of this blog post. I think someone with an endogenous view of money can be okay with the mainstream characterization, even if some mainstream economists who use it don’t have a very robust understanding of how banking works operationally.* To be clear up front, none of this is to say that banks are not important to the economy or should not be modeled.