William A. Barnett on Divisia Aggregates and Measuring Money in the Economy

Having better financial data and more effective measurement of monetary aggregates could help us avert financial crises in the future.

William A. Barnett is the Oswald Distinguished Professor of Macroeconomics at the University of Kansas and Director of the Center for Financial Stability. Bill joins Macro Musings to discuss his work on better measurement of monetary aggregates in the economy. David and Bill also discuss Bill’s book Getting It Wrong, which argues that old simple-sum aggregates of the money supply are obsolete and that more sophisticated aggregates (called Divisia aggregates) are more appropriate in making decisions related to monetary policy. Finally, Bill also explains how he went from being a rocket scientist to a macroeconomist.

Read the full episode transcript:

Note: While transcripts are lightly edited, they are not rigorously proofed for accuracy. If you notice an error, please reach out to [email protected].

David Beckworth: Bill, welcome to the show.

William Barnett: Thank you for inviting me.

Beckworth: Bill, it's a real treat to have you on. I read your book. I really enjoyed it, and I encourage our listeners to go get a copy themselves. It really is insightful. You learn a lot about money and the measurement of it and how important it is. And, we'll get into all of that later. But, before we do, I want to ask you, as I do most of my guests, how did you get into macroeconomics? And, you're a very special story because you actually were a rocket scientist. I don't think we've had any rocket scientists on the show before. But, you were an MIT rocket scientist, and somehow you found your way into macro. So, tell us how you did that.

Barnett: Well, it's a complicated story. It was an unusual time in American history. It was believed that the United States was in a race with the Russians to get people to the moon. In retrospect, we know that that was not even true. The Russians had concluded it wasn't worth the cost, and they were not even trying to do that. But, we thought they were. As a result, the Federal government was pouring enormous amounts of money into the space program, and a lot of people wanted to get into engineering at that time, and in particular, to work under NASA contracts on the space program.

Barnett: I was hired when I finished my engineering studies at MIT by Rocketdyne, a division of North American Aviation. Rocketdyne made the rocket engines for Apollo, for the Saturn vehicle, all of them. The booster engine was the F-1, which was the one that got it off the ground. It was the most powerful rocket engine. In fact, even to the present day, it is still the most powerful rocket engine ever produced by anyone. It produced one and a half million pounds of thrust. It was quite fascinating to be working on that rocket engine.

Barnett: At the time, because the country thought it was in this race with the Russians, the NASA contracts were extraordinarily generous. Initially, they were set up as what were called cost-plus-fixed-fee contracts, which did not produce very good incentives. The way that worked is that the corporations working on this for NASA could spend as much as they wanted, and it did not affect the fee. So, they got the same fee no matter how much they spent.

Barnett: So, they were quite generous to their engineers. One of the things they did is they offered educational leave: for every year that you were employed working at Rocketdyne, you earned a one-year educational leave. And, I took all of those benefits. I spent a year at Berkeley getting a Master's degree, basically in finance, and continued, eventually pursuing PhD studies at the University of Chicago and Carnegie Mellon. I had already become interested in economics, going back to MIT, although I was an engineering student. In my senior year, I was permitted to take a graduate course from Franco Modigliani, which I found to be just extraordinarily fascinating. I never forgot that course. And, at Berkeley, on one of my leaves in finance, I took a course from David Laidler in economics. And, again, this further increased my interest in economics.

Barnett: But, from the standpoint of Rocketdyne, what they really wanted me to study was statistics. The reason was in the days when they were just flush with cash, they were planning to open a new division of the corporation. And, it was to be a pure research division. The intent was to be doing research on potential future space programs that did not yet even exist. The intent was to employ only PhD scientists, and they needed statisticians. So, they did want me to emphasize statistics in my graduate studies, which I did do. In fact, my PhD ultimately was in statistics.

Barnett: But, while I was pursuing my graduate study, the Vietnam War grew. As the Vietnam War grew, funds for NASA declined, and funds for the Department of Defense increased. So, what happened was, eventually, funds for that new division were withdrawn. So, there was not going to be a pure research division of Rocketdyne. When I finished my PhD, the intent was for me to return to that division, which wasn't even going to exist anymore. In addition, by the time I had my PhD, Apollo had already gotten to the moon. So, interesting research and development on the space program was in a rapid decline. Engineers working in that industry were either moving to production engineering, which did not interest me, or were transferred into other divisions of the corporation that made fighter planes and things for the Department of Defense, which also didn't particularly interest me.

Barnett: So, when I got my PhD, I resigned from Rocketdyne. I explained that the division I was to return to didn't even exist anymore. They understood. They didn't seem to mind the fact that I was not going to do what they thought I was going to do because it was ultimately impossible. In addition, one of the reasons I did that was because while I was working on my dissertation at Carnegie Mellon, I received a rather remarkable offer from the special studies section of the Federal Reserve Board in Washington DC, which at that time was an elite research section. Unfortunately, it doesn't exist anymore. But, it was a great place to work.

Barnett: The deal was that if I came there, and this was as a full-time ... this was not any kind of temporary thing ... this was a full-time offer as an economist ... full-time economist job in special studies. But, the arrangement was that for my first year, I could work full-time on my dissertation. Well, that sounded great. So, I finished my Carnegie Mellon dissertation research while being paid full-time in Washington DC. At the end of the year, I simply returned to defend my dissertation, and then went back to the Federal Reserve. I had this great deal there. And, that's basically the story.

Beckworth: Well, that's a great story. You had the intellectual curiosity, but we can thank the generosity of NASA, at least early on, for pushing you along this path, and then the Vietnam War for changing funding priorities. And, all those interesting facts come together, and here you are a macroeconomist working at the Board of Governors. I know we'll get into the nitty gritty later. But, when did you first stumble upon this issue of properly measuring money and the index and aggregation theory issues?

Barnett: Well, my dissertation research largely dealt with modeling and estimating consumer demand functions and systems of consumer demand functions and production modeling of factor demand systems and output supply systems. That literature had become very advanced. In fact, it looked very good from the standpoint of the statistics department at Carnegie Mellon because it involved estimating non-linear systems of equations, and it gave me the ability to produce some work in statistical theory on how to do the asymptotics for estimation of non-linear systems of equations. That literature also had very close ties with aggregation and index number theory. This was well-known to people working in this field. It was integrated into that literature.

Barnett: So, I had a great deal of knowledge of aggregation and index number theory as well as modeling consumer demand functions, etc. In contrast, what macroeconomists were doing with money demand at that time was basically the Goldfeld equation, which was just a single linear equation. It wasn't integrable and could not be derived from microeconomic theory. To somebody who had been working in the literature on consumer demand modeling, the Goldfeld equation seemed very primitive. But, I wasn't using either. That's not what I was working on.

Barnett: Then, what happened is there was something called the Bach Commission at Stanford. G. Leland Bach at Stanford created a commission called the Bach Commission. It was taken very seriously by the Federal Reserve Board. What the Bach Commission did was to conclude that the Federal Reserve's monetary aggregates were too narrow. They should include accounts at other institution types, such as savings and loans, mutual savings banks, etc., which at that time were providing savings accounts and were beginning to even provide checking accounts. But, the Federal Reserve at that time was using only data from commercial banks. The Federal Reserve decided they should look into this. This looked pretty credible to them, and I was then approached to do some work on this. It was known, of course, in the special studies section that I had this kind of expertise. So, I was asked to look into the Bach Commission proposals, which I was happy to do because I could see a way to use my own areas of expertise. And, that's what I did. It was basically Steve Axilrod, who was the Staff Director of Monetary Policy, who asked me to do this. And, it was a great opportunity for me because I was able to do some very fascinating research using areas of expertise that I already had, in an area where that kind of expertise seemed to be in very short supply.

Beckworth: Yeah, well, that's interesting. As I read your book, and we'll talk about this more later ... But, the Fed walked away from Divisia measures of monetary aggregates. And, it's interesting to hear you say that they really were the ones who first started you in that direction, given your background and the desire to look at monetary aggregates.

Beckworth: Before we get into that though, I want to just go through monetary aggregates for listeners who don't know what they are. So, I'm going to mention some names to you, and these, at this level, will be talking about simple sum measures, and then we'll jump into Divisia forms of them. But, just so our listeners are clear, I want to run through the list of broad money measures or monetary aggregates that the Federal Reserve has at some point in the past or currently is keeping track of. So, let's start with M1. What is M1?

A Background in Broad Money Measures

Barnett: Well, it's been [inaudible] aggregate. It's intended to include strictly the legal means of payment, transactions balances. But, I would like to throw in something ...

Beckworth: Sure.

Barnett: ... immediately regarding this subject. I mentioned that Steve Axilrod was the person who asked me to do this. When he did that in a private conversation, he imposed a constraint on me. He knew that I wanted to propose using reputable index number and aggregation theory, not simple sum aggregation. But, also, that literature has a criterion for clustering components. It's the test of what's called blockwise weak separability. The aggregate has to track a function that exists, that has to be factorable out of the structure of the economy. So, to people working in consumer demand, before they would decide how to aggregate over components, they would decide how to choose the components, and the criterion was to run tests of blockwise weak separability. What Steve Axilrod said to me was that if I wanted the Federal Reserve to consider changing the method of aggregation over components, I should not simultaneously challenge their clustering of components. So, I was told, in effect, that they would not listen to me about methods of aggregation over components if I simultaneously disputed the clusterings.

Barnett: So, in my work, I never tested for weak separability of the components of M1, M2, M3 or L.

Beckworth: Interesting.

Barnett: I had basically been told not to do it. However, this did provide an opportunity for many academic economists who were not at the Federal Reserve at the time and who understood this need. And, many other people started running those tests. But, I was not permitted to do it.

Beckworth: Okay. So, you just had to work with the aggregates they had given you. And, you mentioned M1 already. There's also M2, M3 and L. So, what's an M2 measure? What's an M3 measure? And, what was the L measure?

Barnett: I'd like to point out what's a little bit awkward about it. What M2 is supposed to do is to bring in mostly time deposits. But, there are a couple of components in M2 that create a certain amount of paradox. One is ... M2 includes non-negotiable certificates of deposit, which are highly illiquid. There are big penalties for cashing them in before maturity. But, M2 does not include negotiable CDs, which are highly liquid. So, that's kind of odd that they decided to put into M2 something that's extremely illiquid while excluding the really liquid stuff. They did this based on the denominations. The negotiable ones are large CDs. Non-negotiable ones are small CDs. But, there is a huge difference in liquidity. So, this is sort of odd. I doubt that this would pass a blockwise weak separability test.

Barnett: Another thing that's in M2 ... I'm not going to dispute it, but it's a little paradoxical ... it includes money market mutual funds. Money market mutual funds are what are now considered shadow banking. At that time, very few people were even talking about shadow banking, and certainly the Bach Commission was not suggesting that shadow banking start being brought into monetary aggregates. But, M2 contains some shadow banking, which again, is a little bit puzzling.

Barnett: Now, M3 and L are actually quite fascinating. What they include primarily is the money market. So, if you're in a finance department, and they're teaching you corporate finance, when they talk about money, they're talking about the money market. That's what big corporations consider their primary source of liquidity. They have a Controller. They have experts in finance who manage large portfolios of highly liquid money market securities, such as negotiable CDs, commercial paper, treasury bills ... things like that. That's what is primarily brought into M3 and M4. M4 even brings in treasury bills, which are federal. It's a different kind of thing, but it's part of the negotiable money market.

Beckworth: M4 is like L, right? You're using those interchangeably? They're similar?

Barnett: Yeah, we call it M4 now.

Beckworth: Okay. It was called L originally.

Barnett: But, you're correct. It was called L.

Beckworth: Okay.

Barnett: So, that's basically it. And, people who are not using M3 and L at the present time are basically not using what large corporations consider to be money. That's the puzzling thing.

Beckworth: And, I think that's one of the lessons from this crisis, at least for me and I think for a number of other people. It made me more aware that it was important to be looking at a broader measure of money, that the run on the banking system was a run on the shadow banking system, the wholesale banking system, not retail. But, there's no measure of money readily available for that.

Beckworth: So, let me ask this question then. If you're doing monetary policy analysis or if you're [inaudible] some of your grad students, and they come to you and say, "Hey, we want to look at the effect of a monetary policy shock on the economy." First off, you tell them to use Divisia, which we'll get to in a minute. But, after that, would you tell them to use the M4 measure? Which measure, if you're doing monetary policy analysis, should one use? An M4 or something smaller?

Barnett: Well, the words monetary policy analysis create problems for me.

Beckworth: Okay.

Barnett: I want people to recognize that competent measurement is competent measurement, no matter how policy is conducted. If you look at, say, the Bureau of Labor Statistics or the Commerce Department or even the Agriculture Department, they employ experts in index number and aggregation theory to produce competent aggregates consistent with relevant microeconomic theory, unbiased by what particular policies are being contemplated by the Congress or the White House or the central bank. From this standpoint, again, any component clustering that is blockwise weakly separable is fair game. It measures something that exists. The problem with narrow aggregates, as is well-known to people who work with consumer demand systems, is the more narrow the aggregate, the more substitutes and complements there are outside that aggregate. That means the demand function for that aggregate has a lot of prices in it of substitutes and complements. So, for example, M1 ... to do a good job of modeling the demand for M1, you've got to have a lot of explanatory variables in that demand function to account for the fact that there are lots of sources of liquidity that are not in M1. And, their user cost prices are relevant to the demand for M1. There are substitutes and complements.

Barnett: When you go to broader aggregates, you've internalized more of that. So, there are fewer remaining explanatory variables that you have to worry about. With the broadest aggregates, like M3 or what the Center for Financial Stability now calls M4, those are quite broad and they do weight everything in a competent way. So, they include the relevant transaction services of the components in all of these assets. They don't impute a weight of zero to some substitutes.

Barnett: So, it's easier to deal with the broader aggregates. Their demand functions are particularly stable, and they don't just throw out relevant sources of liquidity. So, in most empirical applications that I've worked on, the broader Divisia aggregates for most purposes work best. Like Duke [inaudible] do a study for a paper that I presented at the American Enterprise Institute on this. It was subsequently published in the JMCB, in which I ran all of the tests that the Federal Reserve Board staff runs itself to choose aggregates. And, I classified all of the possible aggregates in terms of which ones did best. In the vast majority of cases, Divisia M3 or Divisia L worked best. There were a few criteria in which an M2 aggregate worked particularly well. As I recall, I think there might have even been one or two cases in which simple sum M2 worked well.

Barnett: But, the overwhelming majority of these tests concluded that the broadest Divisia aggregates worked best for those particular cases. But, I would never say that any weakly separable component clustering is inadmissible. There could be ways in which it could be useful.

Beckworth: All right. Let's move on, and let's talk about the Divisia. You kind of alluded to some of these issues in your comments you just shared with us. But, tell us, in general terms, what a Divisia measure does that a simple sum measure doesn't do. And, really, what's the critique of the simple sum measure?

The Divisia Money Measure and Critiquing Simple Sum

Barnett: There's a very fundamental difference. The Divisia index is directly derived from aggregation theory and index number theory. Relative to that literature, it's competent. Similarly, a Fisher Ideal index that used user cost prices would be viewed as competent. [inaudible] using user cost prices would be considered competent. The simple sum aggregates are very simply incompetent. Ever since Irving Fisher's 1922 book, The Making of Index Numbers, appeared ... that book concluded that the simple sum and the arithmetic average were the two worst index numbers that you could even find. And, following that book, no other government agencies used them anymore. The Commerce Department doesn't ... Agriculture doesn't ... only, unfortunately, central banks are still using simple sum aggregation. But, it is just plain incompetent.

Beckworth: The reason it's incompetent, if I understand correctly, is ... when you add together, for example, an M2 ... all these different assets ... everything from a savings account down to currency, you're assuming they're perfect substitutes when, in fact, they're not. Is that the big critique?

Barnett: Yes. Reputable index numbers, such as Fisher Ideal, [inaudible], Paasche, Divisia, would reduce to the simple sum as a very extreme special case. The special case is that the components are perfect substitutes in identical ratios. So, that means that each component is a one-to-one perfect substitute for every other component, not two for one. You can have perfect substitutes that are perfect substitutes two for one. That would be linear. Simple sum is a special case of linear in which the coefficients of the linear function are all equal to each other. For that to happen in free markets, the component prices must always be equal to each other. With monetary assets, the user cost prices depend upon the component interest rates. The component interest rate of currency is zero. For the user cost price of the other components, say in M2, to be the same as the user cost price of currency, the interest rate has to be zero. But, that's not the case. That ended a very long time ago. A very long time ago, money was currency plus demand deposits yielding no interest.

Beckworth: Yep.

Barnett: Then, Divisia, Paasche, [inaudible], Fisher Ideal did reduce to the simple sum. But, now that there are so many assets considered monetary assets yielding interest, that special case is simply not relevant.
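
The special case Barnett describes can be made concrete with a short sketch. The code below is illustrative only, not actual Fed data: it assumes a hypothetical benchmark rate R and made-up component quantities, uses Barnett's user cost formula u_i = (R - r_i)/(1 + R), and approximates the Divisia index by the discrete-time Törnqvist form, in which the aggregate's log-growth is the user-cost-expenditure-share-weighted average of the components' log-growth rates:

```python
import math

R = 0.06  # hypothetical benchmark rate of return

def user_cost(r):
    """Barnett's user cost of holding a monetary asset yielding interest r."""
    return (R - r) / (1.0 + R)

def divisia_growth(q0, q1, rates):
    """Tornqvist (discrete Divisia) log-growth: user-cost-expenditure-share-
    weighted average of the components' log-growth rates."""
    u = [user_cost(r) for r in rates]
    exp0 = [ui * qi for ui, qi in zip(u, q0)]
    exp1 = [ui * qi for ui, qi in zip(u, q1)]
    s0 = [e / sum(exp0) for e in exp0]
    s1 = [e / sum(exp1) for e in exp1]
    return sum(0.5 * (a + b) * math.log(y / x)
               for a, b, x, y in zip(s0, s1, q0, q1))

def simple_sum_growth(q0, q1):
    """Log-growth of the simple-sum aggregate."""
    return math.log(sum(q1) / sum(q0))

# Components: currency (0% yield) and an interest-bearing deposit (4% yield).
q0, q1 = [100.0, 900.0], [120.0, 900.0]

print(round(divisia_growth(q0, q1, [0.00, 0.04]), 4))  # 0.0488: currency gets a large weight
print(round(simple_sum_growth(q0, q1), 4))             # 0.0198: one-for-one substitution assumed

# With equal yields (hence equal user costs), the two measures nearly coincide:
print(round(divisia_growth(q0, q1, [0.04, 0.04]), 4))  # 0.0198
```

With a zero-yield asset in the basket, the two measures diverge; only when all user costs are equal does the Divisia index collapse, to a close approximation, to the simple sum, which is exactly the special case described above.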

Beckworth: Yeah. You gave a good example in your book of how this is important. You gave several examples. But, one that really stuck out to me was this L measure, or what you now call M4. M4 has treasury bills in it. So, you can imagine a simple sum M4 measure where a scenario occurs where the government begins to monetize the debt. So, maybe the Federal Reserve, in order to support the government, starts buying up debt. It's increasing the monetary base, putting currency into the system, but it's taking debt out. The simple sum M4 or L measure would just be stable. You'd just be substituting one asset for the other, and you wouldn't see any change. There'd be no signal that inflation would be headed up. Whereas the Divisia M4 accounts for the fact that these aren't perfect substitutes, and you would see an increase in M4. Because currency is so much more liquid than the treasury bills, Divisia M4, the way it's calculated, would actually go up. It wouldn't be just stable as with a simple sum M4.
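
The swap described above can be put into numbers. The magnitudes and yields below are made up for illustration, and "M4" here is just a two-component stand-in for a broad aggregate containing currency and Treasury bills; the Divisia growth rate uses the Törnqvist discrete approximation with Barnett's user costs:

```python
import math

R = 0.06  # hypothetical benchmark rate

def user_cost(r):
    """Barnett's user cost of a monetary asset yielding interest r."""
    return (R - r) / (1.0 + R)

rates  = [0.00, 0.05]    # yields on [currency, Treasury bills]
before = [100.0, 500.0]  # holdings before the swap
after  = [150.0, 450.0]  # central bank buys 50 of bills with new currency

# Simple-sum "M4" shows nothing at all:
print(sum(before), sum(after))  # 600.0 600.0

def divisia_growth(q0, q1, rates):
    """Tornqvist (discrete Divisia) log-growth, weighting components by
    their user-cost expenditure shares."""
    u = [user_cost(r) for r in rates]
    exp0 = [ui * qi for ui, qi in zip(u, q0)]
    exp1 = [ui * qi for ui, qi in zip(u, q1)]
    s0 = [e / sum(exp0) for e in exp0]
    s1 = [e / sum(exp1) for e in exp1]
    return sum(0.5 * (a + b) * math.log(y / x)
               for a, b, x, y in zip(s0, s1, q0, q1))

print(round(divisia_growth(before, after, rates), 3))  # 0.204: a clear expansion signal
```

The simple sum cannot distinguish the swap from no change at all, while the Divisia aggregate registers a sharp expansion, because zero-yield currency carries a much larger user-cost weight than bills yielding close to the benchmark rate.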

Barnett: That's one of my favorite questions on exams. I like to bring that up on exam questions.

Beckworth: That's a great one.

Barnett: I completely agree with that.

Beckworth: That's fantastic. What's interesting ... if you looked at standard, simple sum measures going into the crisis and even after the crisis, you really don't see any action there. And, that's what's fascinating. A lot of people say, "Well, M2 didn't change. M1 didn't change." But, if you look at your Divisia M3, your Divisia M4, which is publicly available, you do see this sharp contraction in these monetary assets during the crisis.

Beckworth: So, I think it's a very interesting and fascinating part of the literature, something that you've been very important in furthering. So, tell us about this idea called the Barnett Critique. So, you have a critique named after you. What is the Barnett Critique?

The Barnett Critique

Barnett: That term was coined by Chrystal and MacDonald, British economists ... very good British economists ... in a paper that they presented at a conference at the St. Louis Federal Reserve Bank. It's an interesting insight. What they were referring to was the appearance of instability in the demand for money function. For the demand for money function to be stable, the method of producing the aggregates used within that function must be based upon theory that's consistent with the theory that produced the demand for money function. It all has to be nested. The aggregation ... again, it's really about aggregator functions that are factored out of functions. So, it all has to be nested. If the aggregator function is produced in a way that's inconsistent with the theory that produced the demand for money function, then the resulting demand for money function can appear to be unstable when, in fact, it isn't. The appearance of instability was produced by an internal contradiction between the aggregation theory and the demand system. Again, this is a problem that would never occur in the consumer demand literature. Everybody understands this in the consumer demand literature. But, in the literature on the demand for money function, this source of internal inconsistency was being overlooked.

Barnett: It is somewhat analogous to the Lucas Critique. The Lucas Critique similarly argues that the structure of a macroeconometric model could appear to be unstable when the deep parameters of the private sector are confounded with the parameters of the Federal Reserve's policy rules. Again, it's a similar concept of an internal contradiction within the econometric approach.

Barnett: I like what Chrystal and MacDonald said. I think it's a good idea.

Beckworth: Now, the practical application of this, if I understand correctly, is making sense of some of the money confusion in the 1970s, right? So, there was a period where people looking at simple sums predicted a certain relationship. Then, it didn't appear. They said there's missing money. In fact, one of the critiques you'd hear today ... probably the most common, you might call it the standard critique, of looking at monetary aggregates ... in fact, I'm sure many listeners of this show are skeptical about looking at money aggregates ... they would say, "Well, there's an unstable money demand relationship." Right? So, the relationship between nominal income and money just isn't there. It needs to be there if we're actually going to use money. But, what you show in your research, and you have some great graphs in the book as well, is that money demand actually was stable during that period. It was just that it was mismeasured. And, had the Fed embraced this idea of Divisia measures sooner, there wouldn't have been as much confusion.

Barnett: Sure. In fact, my view is the exact opposite of the point of view you described. My view is that relative to the usual procedures for estimating and modeling consumer demand systems, the demand for money function is surprisingly stable. To me, that's the real paradox. When I or other people use the kind of approaches used in the consumer demand literature to model the demand for money function, there's no problem at all. But, oddly, even when they do things like estimating the Goldfeld equation with the Divisia index on the left-hand side ... and the Goldfeld equation never would be taken seriously by somebody who works in the consumer demand literature ... the Goldfeld equation, nevertheless, becomes stable. So, to my way of thinking, the puzzle is ... why is it that the demand for money function seems to be easier to model ... stably ... than the demand for durables, semi-durables, services, anything else that the professionals in the consumer demand literature struggle to model with semi-nonparametric procedures in infinite-dimensional parameter spaces? It's an enormously sophisticated literature. The demand for money function ... you don't even have to be that fancy, as long as you measure it right.

Beckworth: Yeah, and I think this has real, again, practical implications. So, you mention in your book ... one of the key jobs of the central bank, the Federal Reserve, is to provide liquidity services. And, in order to do that properly, it needs to measure and know what's happening to broad liquidity, which a measure like L or M4 provides. And, the Fed effectively quit keeping track of that, and even when it was keeping track of that, it did it poorly. And, I think one of the key observations that macroeconomists and others have made coming out of the crisis is that we need to wrestle more with financial institutions, the financial market, financial system because a big shock there can cause problems. And, I think one of the points you make in this book is starting in the 60's up to the present ... one of the key issues is the Federal Reserve was doing a bad job keeping up with financial innovation because of the way they measured money. And, again, it's not saying that money has to be the sole objective of monetary policy. But, as an additional indicator of what's happening to liquidity in the financial system, it would have been very useful going into the crisis.

Beckworth: In fact, you talk about the great moderation quite a bit. And, one of the interesting insights I hadn't thought about that you bring out in the book is that Fed officials may have been confused by looking at bad measures of money. But, the public, financial markets, they all were misled by a false sense of security, of stability that would have been more carefully understood had they looked at these Divisia measures. Speak to that a little bit.

The Fed’s False Sense of Money Measure Security

Barnett: The great moderation produced an exaggerated degree of confidence in the capabilities of the world's central banks. Even Lucas wrote a paper saying that the central banks had gotten so good at monetary policy that the economics profession should stop doing research on countercyclical policy and should only concentrate on long term growth. Greenspan was a fantastic salesman. I wasn't on the staff of the Federal Reserve Board when he was Chairman. But, he was a consultant. Every six months, there's a panel of economic advisors that they bring in. And, Greenspan was one of them among some very famous economists. There were people there like Tobin and Modigliani and all kinds ... Meltzer ... and there was Alan Greenspan.

Barnett: Alan Greenspan tended to dominate the discussions. He is an extraordinarily charismatic, friendly, interesting person. Everybody liked listening to him. He would say things that other people wouldn't be so comfortable saying. He would claim he could do things that the academic economists would not claim they could do. But, of course, he had been a consultant. In the consulting business, you want to tell corporations that you can do everything and you're really good at it.

Beckworth: Right.

Barnett: Well, he was a fantastic salesman. What he did is he sold everybody. He sold the whole world on the idea that he had ... Wall Street, for example, talked about the Greenspan put. The Wall Street story was that they could trust Greenspan to have their back. He would prevent an asset decline. He would go in there and stop it. So, there was this very exceptional degree of confidence in the central bank during that period for reasons that really had very little to do with the central banks. It had more, frankly, to do with China. But, in any case, it did produce excessive confidence. It affected Wall Street in a very adverse way. Wall Street firms became convinced that they could take risks exceeding any risk they had ever before taken in their history. Some firms that had survived the Great Depression of the '30s failed during the Great Recession because they were taking even greater risk. And, it was because they had acquired excessive confidence in the central banks. Allan Meltzer also had that view. It was a very widespread point of view, and it was, unfortunately, not justified.

Beckworth: And, if we had had these Divisia measures, you argue that it would have been clearer that the Fed actually hadn't tamed the business cycle. There were these bouts of excessive easing and tightening. You mention in the book that, starting in the '60s, there's this growing ignorance surrounding what's really happening to liquidity conditions.

Beckworth: Let me go back to the Volcker period because that's another fascinating time. You mention in there that when Volcker engineered the double-dip recession in the early '80s, he turned to targeting bank reserves, and, at the time, the Fed was also, I believe, looking at broad money aggregates. And, one of the interesting exchanges you had with him, or at least an observation you made and later he said he was allergic to you because of this, was that you showed that if he had used the Divisia measures of money during this period, he would have seen that Volcker excessively tightened policy. They were looking at the broad simple sum measures, which didn't show as much tightening. But, had they seen the Divisia measures, the Fed would have realized that it had overdone it in terms of tightening.

Barnett: Yes. But, I wish to emphasize he did not say he was allergic to me. He said he was allergic to the Divisia monetary aggregates.

Beckworth: Okay.

Barnett: What happened was ... I was, in fact, in a board meeting with Volcker and the rest of the board, in which I showed that the rate of growth of the Divisia aggregates was about half that of the rate of growth of the simple sum aggregates. In fact, three of the other governors asked me to send a memorandum about this. I should add I had great respect for Volcker. What he did was very admirable after what William Miller did, which was not so good. Volcker was a very decisive person. Something had to be done in a decisive way. He said he was targeting money growth. That's true. He was telling the truth. Many people are defensive about that because it created a recession. Many people want to say, "Well, he was really not telling the truth. He was doing something else. It was not caused by targeting the money supply." But, he was telling the truth. In fact, I published an interview in Macroeconomic Dynamics in which he explained this. He couldn't find any other relevant criterion to deal with the inflation at the time.

Barnett: I did publish a paper on this in the American Statistician, which is published by the American Statistical Association. It was a rather amusing experience. In that paper, I provide the plots of what happened during the so-called monetarist experiment. What was going on was that the simple sum aggregates were growing at precisely the rate that the Federal Reserve wanted. The idea was they didn't want to crunch down too fast. There had been a study at the American Enterprise Institute saying that if the money supply's growth rate were dropped rapidly to the intended long-run target, then under the assumption that rational expectations would just get the economy to adjust really fast to that shock, it would be okay. But, the American Enterprise Institute said there were too many long-term labor contracts, and it would create a recession. So, the intent was to bring the double-digit rate of growth of the money supply down to about 10 percent or so. And, that is exactly what the Federal Reserve did in terms of its simple sum aggregates.

Barnett: But, if you look at that paper, you'll find that what I found for the corresponding Divisia aggregates was growth rates about half that. So, they were at what was the intended long-term growth rate that the AEI study had already said would cause a recession. It was rather amusing when I submitted that paper to that journal. The editor sent it out to an incredible number of referees. I don't know how many it was, but it was much more than normal. All of the referees really liked the paper. Of course, it was for a statistics journal. So, some of the comments were, "Well, gee, this is going to show people how useful statistics can be in policy." Anyway, the referees loved the paper. But, the editor called me and said he was very nervous. He said the American Statistician has a Letters to the Editor section, and he was afraid, since this would look so controversial, he would be overwhelmed with negative letters to the editor, and he didn't want to have to cope with that. So, my reply to him was, "Well, I am sure there are a lot of people who will not like this paper. But, none of them read your journal. So, don't worry about it." He published it, and he did not get any negative letters to the editor.

Beckworth: Okay. Good. Yeah, that's a fascinating story. And, again, it underscores the importance of getting measurements right. Going back to the great moderation period, again, you stressed in a good part of your book that this misperception of superior monetary policy induced risk taking, and that a better measure would have made that clearer. And, you said it very clearly in your book.

Beckworth: So, this leads to the next question. Why hasn't the Fed embraced this more readily? Why not use it?

Barnett: Well, there are a lot of central banks that do use it. They don't necessarily admit it. The Bank of England officially provides a Divisia monetary aggregate. The ECB does have Divisia monetary aggregates. In fact, they hired me as a consultant to set up the database. And, they provide their Divisia aggregates to their Governing Council whenever it meets. But, they do not provide it to the public. The Bank of Japan has ... Bank of Poland, Bank of Israel ... the IMF advocates them. So, many central banks have them. They don't necessarily talk too publicly about it.

Barnett: The interesting question is why is it that these different central banks, including the Federal Reserve, have different ways of dealing with it? Some of them, such as the Bank of England, are completely open about it. Some of them have used it only internally. Some maybe don't use it at all. This gets into a subject that is way outside my area of expertise. It is mechanism design. This is associated with the work of Leo Hurwicz, who won a Nobel Prize in this area. Mechanism design is a deep area of economic theory dealing with incentive compatibility, and how to design institutions to be incentive-compatible. Optimally designing a central bank so that it will be incentive-compatible to do what is ultimately in the public interest is an enormously difficult mechanism design problem. That's why different central banks throughout the world have different mechanism designs.

Barnett: If we were talking about a corporation, it would be easier. I mentioned Rocketdyne before. There was a mechanism design problem. They had cost plus fixed fee contracts. [inaudible] figured it out. They changed to cost plus incentive fee contracts, and that fixed everything. Trying to produce an optimal central bank that, on its own, is always going to do what is in the best interest of the public is an area of research that is way outside my area of expertise. It's the basic reason that so many economists want to talk about rules. It's because we all, in some way or other, understand that producing an optimal mechanism design is enormously difficult. We probably don't know how to do it. And, if we did know how, we probably couldn't get it done. So, then, there is the idea that maybe central banks should be constrained in some way. But, the root cause is a mechanism design problem, which is an enormously difficult problem.

Beckworth: You mentioned that the Fed quit tracking M3 in 2006, and quit tracking L before that. But, that was unfortunate because M3 has some of that shadow banking money in it. It would have been very informative to see what was happening. It could have been another signal to the Fed to tell them what was going on.

Beckworth: Now, you mentioned some other banks are tracking this information. You're tracking it, as well. So, tell us about your work to fill in this gap for the U.S., at least. What are you doing at the Center for Financial Stability that fills this void?

Tracking Neglected Monetary Aggregates

Barnett: When the Federal Reserve discontinued M3 and L, which we now call M4, I had mixed feelings about that. Certainly, in my opinion, simple sum M3 and simple sum L were just terrible. The Fed was correct in discontinuing publication of them because they put much too much weight on the distant substitutes for money. So, they were terrible aggregates. They recognized this. They did research showing they were terrible aggregates. So, they discontinued it.

Barnett: Unfortunately, when they did that, they also discontinued providing the components. This was very unfortunate because those components are not easy to acquire. They had been doing it in a very sophisticated way. When the Center for Financial Stability decided to start doing this using index number and aggregation theory, they also had to acquire those data. This was a project that took over a year with various assistants, trying to track down the relevant components. But, we eventually did. Then, of course, they stopped providing data on sweeps. This is very unfortunate. It grossly biases M1. So, we have to model sweeps with an econometric model. We have no choice.

Barnett: The problem is that the Federal Reserve isn't providing a lot of the component data that we would really need. In fact, in my book, you probably noticed I reached only one policy conclusion. And, it was a mechanism design suggestion. I didn't advocate any particular rule or policy like that. What I advocated was the creation of a data agency, such as the Bureau of Labor Statistics, or the BEA in the Commerce Department. These are bureaus that employ experts in aggregation and index number theory, and their job is to provide good data. The Federal Reserve doesn't have such a bureau. In my opinion, it would be very advantageous if they were simply to create that kind of bureau, the same kind that the Labor Department and the Commerce Department have. The fact they don't have it raises an even deeper problem of mechanism design that I don't know how to solve. This is outside my area of expertise. But, my suggestion is they should do that. They should create such a bureau within the Federal Reserve System.

Beckworth: Yeah, and just to repeat the point you made. These other agencies ... the BEA, the Bureau of Labor Statistics ... they're doing cutting-edge measurement theory. So, aggregation theory. Index theory. And, the Federal Reserve is not. They're using outdated approaches for money. What I found really fascinating in your book is that you mention the one place that the Fed does use cutting-edge measurement theory is in its construction of the industrial production index. [inaudible] is very careful. But, it doesn't seem very careful on the one thing that it really should be careful about, and that is some measure of monetary conditions.

Beckworth: It's striking to me too ... it's not like the Fed has a small budget and is worried about making ends meet. It lately has a very large budget, and it can definitely afford that. Interestingly, you also mention possibly placing this bureau of financial statistics inside the Office of Financial Research, which is in the Treasury but autonomous from the Treasury.

Beckworth: Let me lead into another question. And, this is more generally toward the economics profession. Again, I think part of the challenge, even for the Fed, in getting it to do what you've suggested, is simply getting more economists on board ... more macroeconomists, because the micro folks are in their own areas ... with this idea of Divisia measures of monetary aggregates. What is your sense of where the profession is? Are more and more folks getting on board in agreeing with the point that you've made?

Barnett: Throughout the world, there are people working in this all the time. There's an enormous amount of research in it. The problem is the policy relevance. In consumer demand modeling or production modeling, things like that, people with that kind of expertise can do their work without any kind of issues about implications for policy. Monetary aggregation potentially has implications for policy and that kind of messes with it in an unfortunate way. Because of my background, I am basically a scientist, and I stay out of that kind of policy stuff. I did that even when I was at the Federal Reserve Board in a special studies section. I didn't get involved in that sort of thing.

Barnett: The people who are experts in aggregation and index number theory, of course, they're completely on board. The first time I even presented this was at the University of Chicago in Zellner's Econometrics Series. I think this was back around 1980 or 1981. I went through all of the research and everything. And, at the end, Zellner said to me, "Gee, Bill, if you would have just told me you want to produce Divisia indexes with user cost prices, I would have agreed right away." Well, people with that kind of expertise, they just instantly see it. But, people who have vested interests in various approaches to policy ... that's very complicated. I'm not a political scientist. That muddies up the whole situation, unfortunately.

Beckworth: Yeah. I guess when I look out and I see other commentators, Fed watchers, other macroeconomists, there's this knee-jerk response, this thinking ... "Oh, we've learned that money can't be used reliably. We learned from the '70s and '80s there's missing money. We learned that there's unstable money demand." And, I can't say with certainty the percent of people who hold that view, but it seems like most of the folks that I see make these comments draw that conclusion.

Barnett: And, none of it is true.

Beckworth: Right. And, to me, that's the audience that needs to hear this message. Those are the folks that need to be reading your work and the related literature on this. I think it's important. I would love to see the Fed, even if [] on the board, at least provide this information, and hopefully open up some minds to its usefulness.

Barnett: Yeah. But, I tend to think they're doing me a personal favor. If they were to do everything right, my book "Getting It Wrong" wouldn't sell. And, if I had had to write a book "Getting It Right", it wouldn't have sold.

Beckworth: Very true.

Barnett: So, I cheer them on. Just keep doing it wrong. It's good for me. My book will sell a lot of copies.

Beckworth: I love that perspective. And, on that great note, we have run out of time. Our guest today has been Bill Barnett. Bill, thank you so much for coming on the show.

Barnett: Thank you for inviting me.

About Macro Musings

Hosted by Senior Research Fellow David Beckworth, the Macro Musings podcast pulls back the curtain on the important macroeconomic issues of the past, present, and future.