PJ Glandon on the State of Macroeconomics: Research and Pedagogy

Quantitative theory-based analysis has increasingly come to dominate macroeconomics over the past 40 years, and this shift is having major impacts on the pedagogic future of the discipline.

PJ Glandon is an associate professor of economics at Kenyon College, where he also serves as chair of the economics department. PJ joins David on Macro Musings to talk about his recent co-authored article, *Macroeconomics Research: Present and Past.* David and PJ also more broadly discuss the state of macroeconomics as a discipline, both in terms of research and pedagogy.

Read the full episode transcript:

Note: While transcripts are lightly edited, they are not rigorously proofed for accuracy. If you notice an error, please reach out to [email protected].

David Beckworth: PJ, welcome to the show.

PJ Glandon: Thanks, David. It's a pleasure to be here.

Beckworth: It's great to have you here. Now, I know you because I've traveled to Kenyon College a few times. I've presented there. I've had conferences out there. You're the department chair, so you have a good feel for both the teaching side of macro, and you have this really fascinating article that looks into the research side of macro. We're going to spend this hour together talking about the state of macro with you, PJ. You're the authority here, and I'm looking forward to this. Let's talk about this article. Maybe give us the executive summary of it first.

*Macroeconomics Research: Present and Past* - Background and Overview

Glandon: Sure, yes. This article is really written by a team of economists, Ken Kuttner, Caleb Stroup, and Sandeep Mazumder, and it was a big project. The 30,000-foot summary is that we read about 1,800 macroeconomics articles and tried to characterize what these articles do, to see how macroeconomics research has proceeded over the last several decades. And, to hit the highlights, we document that macroeconomics does a whole lot less testing of hypotheses, or attempts to reject economic hypotheses, and a whole lot more formal theory that takes an economic model to the data and tries to fit the model to the data. That's the 30,000-foot summary of what we found, but there's a whole lot more in the paper. We document all sorts of features of the econometric approaches, the models that are used, and so forth.

Beckworth: To summarize, a whole lot more DSGEs taking place in the literature, and a whole lot fewer VARs, which was my wheelhouse when I was back in the academy. We'll come back to that in a minute, but tell us, how did you get into a project so ambitious? Almost 2,000 articles you read over a long period of time, across many journals.

Glandon: Yes, so, there's a conference that is attended by macroeconomists who teach at small liberal arts colleges like Kenyon College. That's where I met Ken Kuttner. Caleb Stroup is a graduate classmate of mine, and I met Sandeep Mazumder there, who was at Wake Forest at the time. Ken Kuttner is at Williams College. We were chatting… this was back in 2016, I believe… about how, gosh, it's really hard to know, if you're a graduate student interested in macroeconomics, what skills and techniques you need to learn in order to write papers that are going to get published.

Glandon: We thought, "Well, maybe we should try to categorize that, look at current publications, and see, okay, do you need to know how to do VAR, or do you need to know how to estimate structural models, or do you have to be able to publish a particular type of theory paper?" That's where it started, and we spent the summer trying to figure out, what information do we want to collect from these articles? We would read 20 or 30 articles and say, "Okay, I think these were the interesting features." We came up with a classification system that documents information in three main buckets. The first bucket would be the methodological approach. Is this a theory paper, or is this an econometric paper? The second bucket would be the features of the economic model, if the paper has a formal economic model.

Glandon: Then the third bucket of information would be features of the data that were used and the empirical approach that was used. We collected this information on some current articles, and then we thought, "You know what would be really cool?" There was quite a bit of discussion at the time about how macroeconomics hasn't made any progress. Several people were quite critical of macroeconomics at the time and to this day, and we thought, "Boy, it would be nice to inform this discussion with some data about what macroeconomists actually do." So, we decided to add a panel element to this data set, and so we collected data from papers that were published in 1980, 1990, 2000, 2006, 2008, 2010, and then 2016, 2017, 2018. I can go into a little bit more about the sample, if you like, or maybe listeners can check the paper for that.

Beckworth: Yes, go ahead and tell us the journals you drew from.

Glandon: We started out by asking what counts as a macroeconomics article, and we said, "Alright, it's probably articles published in journals that we think of as macro journals," so, the Journal of Monetary Economics, Journal of Money, Credit, and Banking, Review of Economic Dynamics, Journal of Economic Dynamics and Control, and the American Economic Journal: Macroeconomics. We looked at those journals initially, collected current articles from those journals, because some of those journals don't go back very far, and we said, "Okay, the JME and JMCB, they go back to 1980 and before."

Glandon: So we thought, "Let's just try collecting data from once a decade beginning in 1980 on those two." Then, when we sent this article out for review, and we presented it, people said, "You really need to include general interest journals," to which we agreed, somewhat begrudgingly, since it was going to be a lot of work. We couldn't avoid it, so we said, "Alright, all articles with JEL designation E, which is macroeconomics and monetary economics, from five general interest journals": American Economic Review, Journal of Political Economy, Review of Economic Studies, Econometrica, and the Quarterly Journal of Economics. So, we grabbed all macro articles from those journals and included them in our sample as well.

Beckworth: Those are the famous top five journals that, if you're in a top institution, you've got to publish in, or you get booted, okay.

Glandon: That's right.

Beckworth: Another way of saying that, then, is you're looking at the journals that are the gatekeepers of the profession, and presumably helping set the trends, what people are reading, what they think is important, and then the field journals related to macroeconomics.

Glandon: That's exactly right. We thought that if the question is, "what are macroeconomists doing?", one way to look at that is to say, "Okay, what gets published in macroeconomics, in peer-reviewed journals?" All of these articles are peer-reviewed. I should mention that we excluded articles from, sort of, special publications where there was, like, a conference… So, the Journal of Monetary Economics has this Carnegie-Rochester special issue. We excluded those articles because we were worried that they would be different from articles that got published in regular issues.

Beckworth: Let's go to some of the details here. Let's talk about that, I guess it's the first bucket, where you break down by categories of papers. Walk us through that.

Breaking Down the Methodology and Epistemology of Macroeconomic Papers

Glandon: Sure. This is a contribution that I'm especially proud of, and I can't take a whole lot of credit for it because my co-authors, especially Ken Kuttner, really spent a lot of time discussing how we would put this together. We came up with what you can think of as two dimensions that describe how the article uses data and theory to arrive at its conclusions. One dimension is straightforward, and we basically asked, "Okay, is this article relying heavily on traditional econometrics?" Basically, relying on methods you could learn by reading Hamilton's book on time series econometrics, or Wooldridge's book on--

Beckworth: Classics, right?

Glandon: Yes, classics. All graduate students of economics will be familiar with those books. If the article relied heavily on an analysis that used econometrics, we categorized it as taking an econometric approach. Separately, if the article relied heavily on the use of an economic model, and then sort of fit that model to the data, we called that a theory-centric article. Then, of course, there are some that do both, and we categorized those as both. That's one big bucket that I think is fairly straightforward and easy to understand. The next dimension we call epistemology, which I find to be an intimidating word. I learned more about the philosophy of science when we were creating this catalog.

Glandon: Think of this as a range of approaches to using theory and data together. We have eight categories, and then an "other" bucket. One category would be description. This is an article that tends to use very little econometrics or formal theory, but instead presents facts; so, averages, correlations in the data, lots of figures showing time series. We call that a descriptive paper. Another type would be falsification. This was fairly common in the 1980s, and especially in the 1990s. Falsification is a paper that takes a prediction from an economic model and then tries to test it in the data, right? It's the traditional, "here's the null hypothesis, I have some data that allows me to estimate the parameter, and I conduct a hypothesis test about whether that parameter is in the range of acceptable values according to the theory."

Beckworth: What would be an example of that?

Glandon: Of a falsification exercise?

Beckworth: Yes.

Glandon: You're thinking of a particular paper?

Beckworth: Just in general, like testing the Phillips curve or…?

Glandon: Right, a classic example of this would be the permanent income hypothesis, right? So, the permanent income hypothesis makes a statement about the sort of time series properties of consumption that we should see in aggregate data, and you can test those properties. I think Robert Hall has a paper in 1978 that does this, and it's a classic example of a falsification paper. After falsification, we have a category that we call abduction, which is similar to falsification in that it basically goes something like this: the paper will say, "Alright, I've got this theory that has been discussed in the literature, and when I think about the implications of this theory that I could test in the data, I don't see that." And so, these are sometimes thought of as sort of like puzzles.

Glandon: Then what the paper goes on to do is it says, "If I make this modification to the theory, the model becomes more consistent with the data." We call that kind of paper abduction. It's a lot like falsification, but it advances the theory so that it's more consistent with the data. The next type of paper that we categorize is called model fitting. A model fitting paper typically will sort of write down a model, and then calibrate parameters, or in some cases estimate parameters. The objective of the exercise is to say, "Look how well this model is able to match the data."

Glandon: We call that a model fitting exercise. Then, yet another type of paper we call quantification, and that bears a lot of resemblance to a model fitting paper, but the big difference is that a quantification paper then goes on to give a quantitative answer to, say, some kind of policy question, or to make a numerical estimate of a parameter of interest. A quantification paper might ask, "What is the optimal rate of inflation that the monetary authority should target?" The paper will create a model, fit the model to the data, and then run policy experiments to find out what target rate of inflation delivers the highest welfare.

Beckworth: Would an example of model fitting be something like Ed Prescott's real business cycle model, taking it to the data and saying that productivity shocks explain everything?

Glandon: Yes, that's a great example.

Beckworth: Then, I think you have a non-quantitative theory.

Glandon: Yes, so the last two buckets are non-quantitative theory and methodology. Non-quantitative theory papers generally have no data at all, or if they do have data, it's usually some stylized facts that maybe came from another paper, or come from a very simple analysis of existing data. The purpose of these papers is to write down a theory. Characteristic of these papers are proofs, propositions, lemmas, and so forth. Then finally, we find some papers that we call methodology papers. These papers will present, for example, a new estimation technique, or maybe a new solution method for solving complicated models. We came across a handful of papers that didn't fit into any of those, say, papers that talk about the history of economic thought, or that just take a narrative approach and describe an episode in economic history. We categorized those as "other" papers. There are very few of those in our sample.

Beckworth: Now, PJ, there may be one more category, causal effects.

Glandon: Oh, thank you. That's a really important one. Causal effects is a paper that will essentially look for plausibly exogenous variation in a variable, x, and see if it causes a change in some variable, y. This is an approach that is now ubiquitous in applied microeconomics. We're seeing more and more of it in macroeconomics. The key feature is that the objective is to identify whether one variable causes changes in the other, and with very little emphasis on theory in the paper.

Beckworth: Yes, this is the credibility revolution coming to macroeconomics.

Glandon: That's right, yes.

Beckworth: We talked about this before the show, but with VARs, where I used to do most of my work, we would identify shocks as exogenous. They're still being used some, but, as we'll talk about later, they're being used less, and more of this applied micro approach to the data is being used for macro.

Glandon: That's right, and actually, we do consider some types of VAR papers to be part of that causal inference category. If it's a structural VAR, and there's reasonable emphasis on identifying exogenous shocks… I don't know if Angrist and Pischke would agree with that characterization, but that's-

Beckworth: Just to be clear, you would count a VAR that has really identified some exogenous variable, but not one with just identifying restrictions on it, like a Cholesky ordering? That wouldn't count, quite as rigorously, as an exogenous shock?

Glandon: That's right. Depending on the situation, we would probably categorize the type of paper that you just described as taking an econometric approach, and we'd probably call it a model fitting exercise.

Beckworth: Fair enough. Those are the eight categories, I believe, under that first bucket.

Glandon: Yes.
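Note: The classification scheme PJ describes can be summarized compactly in code. Below is a minimal sketch, in Python, of the two dimensions as data structures; the enum names are this transcript's shorthand, not the paper's official labels, and the tagged example at the end is the Hall (1978) falsification paper discussed above.

```python
# Minimal sketch of the classification scheme as described in this
# conversation. Names are transcript shorthand, not the paper's labels.
from dataclasses import dataclass
from enum import Enum, auto

class Method(Enum):
    """First dimension: how the article arrives at its conclusions."""
    ECONOMETRIC = auto()   # relies on traditional econometrics
    THEORY = auto()        # relies on a formal economic model fit to data
    BOTH = auto()

class Epistemology(Enum):
    """Second dimension: how theory and data are used together."""
    DESCRIPTION = auto()      # facts, averages, correlations, figures
    FALSIFICATION = auto()    # hypothesis tests of a model's predictions
    ABDUCTION = auto()        # modify the theory to resolve a data puzzle
    MODEL_FITTING = auto()    # calibrate/estimate and show fit to the data
    QUANTIFICATION = auto()   # fitted model answers a numerical policy question
    CAUSAL_EFFECTS = auto()   # exogenous variation in x, its effect on y
    NONQUANT_THEORY = auto()  # propositions and lemmas, little or no data
    METHODOLOGY = auto()      # new estimation or solution techniques
    OTHER = auto()            # history of thought, narrative episodes, etc.

@dataclass
class Article:
    title: str
    year: int
    method: Method
    epistemology: Epistemology

# The Hall (1978) permanent income test discussed above, tagged in this scheme.
hall_1978 = Article("Stochastic Implications of the Life Cycle-Permanent "
                    "Income Hypothesis", 1978,
                    Method.ECONOMETRIC, Epistemology.FALSIFICATION)
```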

Beckworth: PJ, you've walked us through these eight categories, and I think listeners probably can find themselves, in their work, fitting in some of those different categories. What are the trends in these categories? Where are we headed as a profession?

The Current and Future Trends of Macroeconomics

Glandon: Sure. So, back in the 1980s, the most common type of article was either a non-quantitative theory paper or a falsification exercise. There's a little bit of the other approaches that I described, but not a whole lot of those. Methodology was pretty important in 1980 as well. But these days, the majority of macroeconomics papers are model fitting, abduction, and quantification, so they're very theory-centric. If I could describe the most common type of article today, it's an article that has some kind of fairly sophisticated model, often a DSGE model, though not always, and we'll get into that later, that then takes that model to the data in some way, and then often, but not always, conducts some sort of policy experiment.

Beckworth: I'm thinking of HANK models, Heterogeneous Agent New Keynesian models, [which] are very complicated, hard to estimate, but they have policy implications. In fact, PJ, you and I were at a conference yesterday where Jim Bullard gave a presentation in which he had this massive model with lots and lots of heterogeneity, and he estimates it and shows how it fits the actual economy we observe, and it has policy implications. Those would be examples.

Glandon: Yes, as he was presenting that paper, I was categorizing it in my head, and that fits solidly in the, what I would call, quantification-style paper. Yes, you nailed it. In fact, some of the particular features of that model we see a lot more often these days than we did earlier.

Beckworth: Again, just to go back to the area that I used to work in a lot, and that was vector autoregressions. I know today we use more of the local projection, Oscar Jorda's work, and I've used local projections, and I still do some work with this, but you're telling me that it's, kind of, not the cutting edge anymore. At least, it's not what people are doing a lot of in the academic field.

Glandon: I would say it's rare, rarer than it used to be, so take that however you like. If you get a paper published that is a straight down the middle time series paper, that's a whole lot less common today than it was in 1980. We took a look at all sorts of features of the data that economists use in econometric papers, or papers that do both econometrics and theory, and we found that roughly 90% of papers published in 1980 that had some sort of econometric exercise used time series data. Now, occasionally, there would be really no time series feature in the estimation technique, but most of the time they used some sort of time series estimation. These days, that's down to about a third of empirical papers using straight down the middle time series data.

Beckworth: At best, they're using VARs to estimate the parameters, or to fit the model, as opposed to using the VAR as an end in itself.

Glandon: That's exactly right. That's well put.
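Note: For readers unfamiliar with the local projection method mentioned above, here is a minimal sketch, assuming a pandas DataFrame with an identified shock series and an outcome series; the column names and horizon choice are hypothetical, and a real application would add lagged controls and HAC standard errors.

```python
# Minimal sketch of a Jorda-style local projection: the impulse response at
# horizon h is the slope from regressing the h-step-ahead outcome on the
# identified shock. Assumes df has columns "shock" and "y" (hypothetical names).
import pandas as pd
import statsmodels.api as sm

def local_projection_irf(df: pd.DataFrame, max_horizon: int = 12) -> list[float]:
    irf = []
    for h in range(max_horizon + 1):
        y_lead = df["y"].shift(-h)              # outcome h periods ahead
        X = sm.add_constant(df[["shock"]])      # constant + shock at time t
        sample = pd.concat([y_lead, X], axis=1).dropna()
        fit = sm.OLS(sample.iloc[:, 0], sample.iloc[:, 1:]).fit()
        irf.append(fit.params["shock"])         # response of y at horizon h
    return irf
```

One appeal relative to a VAR is that each horizon is a separate regression, so the estimated impulse response is not constrained by the VAR's recursive dynamic structure.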

Beckworth: So, then, as a teacher, what are you telling your students? What does this mean for them in terms of preparation for grad school?

Glandon: Right. First of all, what I tell them is, if you want to go to graduate school in economics, you need to take a lot of mathematics, because you need to build the skills required to either build these kinds of models or at least understand them. Then I tell them, take a lot of statistics and econometrics, because no matter what field you go into, I think there's just tremendous emphasis on careful analysis of data. I'd say there are also some returns to being tenacious and finding new data sets. One of the things we document is an increasing use of microdata.

Glandon: We categorize microdata as any data set where the unit of observation is at the individual decision-making unit. For instance, a firm, if your data set consists of observations of variables at the firm level, or household level, or perhaps individual level, we call that microdata. Microdata is increasingly common in macroeconomics. It's also more common to use proprietary data sets. That was pretty rare back in the 1980s, I want to say, off the top of my head, definitely less than 20%. These days, more than half of articles published that do some sort of econometric exercise use proprietary data, that is to say data that is restricted and not publicly available.

Beckworth: Just to summarize, most of the articles are quantitative theory, DSGE, but those that are empirical, a smaller share, use more of the micro-type data, so, proprietary, or just large panel sets, and less and less of the simple aggregate time series data?

Glandon: That's exactly right, yes. We see a lot of panel estimation in macroeconomics, that's for sure. It was quite rare in the 1980s, but now it's the most common type of data.

Beckworth: Let me raise some concerns with that, and I know I'm going to sound like an old fuddy-duddy here because things that I used to do aren't used as much, but I guess [I have] two concerns. One is, I wonder how much, then, we're wrestling with identification issues. With structural models, we definitely don't have to worry about identification because we have the deep structure, but empirically, we still… maybe we're not, I think, wrestling with it as much. That was one of the challenges with time series data to begin with, not that we ever solved it. I wonder how hard we're wrestling with identification issues, particularly if we're using firm-level or household-level data, because you can get great results, but then when you do general equilibrium, and you wrap it up to the aggregate level, sometimes there are these effects that we don't see or can't properly identify. I'm thinking of a simple example, the Phillips curve. It was flat for a long time, now it seems like it's not so flat, and we're trying to wrestle with that.

Beckworth: And one of the explanations I have is that the reason it was flat back before the pandemic was because the Fed was doing such a great job in stabilizing the economy. So, you're not going to see, in reduced form data, any systematic relationship. But if you look structurally, and this is where I think DSGE is useful, you see that there is a deeper structural relationship. If you look just at the data, and you're not able to think through identification issues, maybe you miss that there is something structural down there. I just wonder if we're losing something if we're not having good doses of both theory and good time series. Maybe that's just me being an old fuddy-duddy.

Glandon: No, I think that's a great question, and this makes me think of a conversation that you had with Noah Smith, where he made a couple of really interesting points. I think you pushed back in a similar way, sort of asking, "Okay, well, what should macroeconomists do?" His answer was something like, "Aim lower, and focus on some basic things in the economy, like, what's the intertemporal elasticity of substitution?" And, "Okay, let's try to get some clean estimates of the really important parameters in our DSGE models." And I think we actually do see a fair amount of research that's going into that.

Glandon: It's this cycle that involves lots of different research approaches. One is to focus at a very narrow level and see what's going on in the data, how people are behaving in sort of very narrowly defined situations, and then seeing if we can use what we learn there to inform the building of our DSGE models, which I agree is the only way you can really do a credible job with policy experiments at the macroeconomic level. For example, we do see a pretty big increase in the use of causal inference. I kind of think of that as doing a little bit more of what Noah Smith wanted from macroeconomists, to aim lower, look at really narrow situations, and see if we can figure out how people behave, and then incorporate that into our--

Beckworth: …Into a bigger question. I think where I'm coming from is, I'm now more in the policy world. So, we have to wrestle with some of these questions that are more ambitious. Do you look at the aggregate data, do you look at things going on at a higher level? Although, understandably, we've got to look at what's underneath the hood and the details as well. So, we don't have the luxury of properly estimating an Euler equation and getting the parameters. We've got to go to the top and ask, what's happening? Why haven't the Fed's rate hikes, for example, done more to slow down the heat in the economy? We're seeing some slowdown. Why was the Phillips curve flat? What is the link between inflation and fiscal policy? All of these big questions, big macro ideas, and maybe it's… again, just me here in the policy world, and maybe that's an interesting question. I wonder what role think tanks [and] central bank type papers have in the conversation in macro versus traditional academia. I know you didn't answer that in your paper, but any thoughts on those two different places?

Glandon: Yes, I think that you're exactly right. It's like this production process that involves, at the lowest levels, curious people asking very esoteric, perhaps, questions, and doing a really good job of trying to answer those, and then publishing a paper and thinking, "Maybe this will inform someone's decision who needs this information." Then at the think tank, policy level, the idea is, you do a survey of what's out there, and try to assimilate all of that stuff, and do the best you can with what you've got. When you were talking about the current experience that we're in, I did think that we have just one time series of economic experience for the United States, and maybe a smallish panel of other developed countries that have lots of data. And so it's entirely possible that we're just going to have to use the experience of the last five years to learn a little bit more about the way the economy works.

Beckworth: Yes, and I think, to be fair, we can make use of macro aggregate data at a panel level, so it's good to look at all countries, across the sample. We did this when inflation took off. Why was inflation taking off everywhere, not just in the US? So, I think there are different things you can tap into. I like Olivier Blanchard's classification of models: there are toy models, which are very useful for a first approximation, and then you can get to more complicated models to get a better identification of the structure of the economy. Let's go back to your paper, though. I think we've gone through the first bucket, is that right? Most of the first bucket?

Glandon: Yes, most of the first bucket is about sort of the methodology and epistemology of papers. Shall we talk about model features?

Beckworth: Yes, let's do that. That's the second bucket?

Glandon: Yes.

Beckworth: Let's do it.

Breaking Down the Model Features of Macroeconomic Papers

Glandon: So, we document information about all sorts of features of the models that macroeconomists use in their papers. The first is pretty straightforward. We classify whether the paper uses a partial equilibrium model, that is to say, a model where some of the quantities and prices are determined outside the model. A general equilibrium model is one where prices and quantities are determined inside the model, but there's no passage of time, really. Then, of course, [there is] the latest and greatest, the dynamic stochastic general equilibrium model. What we found is no surprise to anyone: in 1980, DSGE models did not exist… or at least we didn't find any in our sample of papers from 1980. Now, just shy of half of papers that have an economic model have a DSGE model in them.

Glandon: We document several other features of the model, for instance, whether it includes certain types of frictions. For example, one critique leveled at macroeconomists around the financial crisis is that they had forgotten about the role that finance plays, and we thought, "Well, we can maybe say something about that if we take a look at how often some sort of financial imperfection shows up in a model." What we find is that, indeed, there was a waning interest in finance between 1980 and 1990, but then it picked back up in 2000. And before the financial crisis… this is why we picked the years 2006, 2008, and 2010 as part of our sample. We wanted to see if there was a change in what was going on there. There was interest in finance and macro, maybe not enough, but there was certainly a pickup in interest in finance well before the financial crisis happened.

Beckworth: Okay. Going back to your observation about GE models being increasingly used… half, I think you said, of the models were GE models.

Glandon: They're dynamic stochastic general equilibrium.

Beckworth: DSGE models, properly spelled out. My question is this: Are they also becoming more sophisticated over time? In other words, you can't just do a very simple DSGE model and get published anywhere. You've got to have one with lots of bells and whistles today to get published in one of the top five journals.

Glandon: That's right, yes. We documented some of the features that moved DSGE models away from the very simple real business cycle model. One of the things we do is categorize what we call the flavor of the DSGE model. Is it New Keynesian? Is it real business cycle? Those are the two most common, by the way. But then there are several other types of DSGE models: overlapping generations, the neoclassical growth model, et cetera. By far, the most common type of DSGE model is a New Keynesian model that has all sorts of different frictions involved. For instance, the typical New Keynesian model has market power, so firms that can set prices, and then also some sort of nominal rigidity. But then there's also imperfect information, or some sort of financial friction. What we find is that these days, if you look across all macro papers that have a DSGE model, I think more than 80% include some sort of friction. So, the critique that macroeconomists are just too reliant on neoclassical frictionless models is just not supported by the data that we've used.

Beckworth: Do you have to also do heterogeneity in your DSGE models? Now, that seems to also be kind of a buzzword. Am I going to have to do that to get published in the top five?

Glandon: The short answer is, at least as of 2018, no. Models with heterogeneous agents were fairly common, but not all models had them. My sense is that it's even more common today than it was in 2018, because this is really cutting-edge.

Beckworth: Right, so I imagine also, computationally, as it gets easier, you can do it. In fact, this brings to mind a database of macroeconomic models that's hosted overseas. Volker Wieland and some other colleagues have collected a bunch of macroeconomic models. They have more than 150 structural macroeconomic models, so DSGEs, and they have a website. It's really cool, you can use your own reaction function, you can do different shocks, and it will run it across all of these models for you.

Glandon: That's really slick, yes.

Beckworth: Yes, it's nice. Then they also have a package you can download. It downloads Dynare and then Octave, if you don't have MATLAB, so that you can do more tweaks yourself. As an example, I look at that, and I think, well, maybe at some point, HANKs will be something you can tell the AI to do for you.

Glandon: Right, yes, right.

Beckworth: So, computationally, if that's the case, then I think expectations will go up. I think some of the trends you see are just conditioned on the fact that we can do more.

Glandon: That is entirely true, and we even thought of maybe trying to see if we could show some sort of time series relationship between computing power and the sophistication of models, because it's pretty clear… the [inaudible] people mention this all over, that the breathtaking improvement in computational power has made a lot of this stuff feasible that simply wasn't feasible, say, back in 2000 or even 2010, yeah.

Beckworth: Now, PJ, there's a lot more in your article, and we'll encourage listeners to go check it out. We'll have a link [to] it, and it's in the Journal of Economic Literature, but I understand that you're doing additional work on this project. What are you doing?

Expanding on Some Additional Work

Glandon: Yes, so my co-authors and I thought that we've got a pretty cool data set here, and maybe there's some additional stuff we could do. One thing we realized is that we answered this question by showing what got published, but a separate question is, what gets attention? And so we have gathered citations, all citations, to the articles in our inventory, and we're starting to take a look at citation patterns based on the characteristics that we see here. I can give you one highlight. We're at the early stages of trying to think about how to look at this data, but one thing we've noticed is that, at least in our sample, causal inference papers seem to be batting above average. They're a fairly small fraction of the articles in our inventory, but they've garnered a lot of attention since publication. Along these lines, we're also interested in trying to apply this categorization method outside of the sample that we used here, but it's a lot of work.

Glandon: I can't tell you how many hours we spent categorizing this. At the time, we thought there's just no way we can use textual analysis to automate this, because there are all sorts of nuances to interpreting what you read in the article. For example, we typically couldn't categorize an article just by looking at the abstract, or even just the introduction in some cases. Though I will say, as a note, a well-written introduction would allow you to categorize the paper, and I became a better reader of economics papers [by] doing this project. But we have teamed up with a computer scientist who's a colleague of Sandeep Mazumder, who's now the dean of the Baylor University business school, and he's working on developing an algorithm that will learn how to do our categorization method using our sample, which we hope to apply outside of our sample.

Beckworth: Wow, AI is taking your place.

Glandon: AI is taking our place, and in this case, I'm delighted, because it's a lot of work. I can tell you that it's going pretty well, surprisingly well. He's been able to develop an algorithm that correctly categorizes those epistemologies, which I think would be the most difficult thing to categorize, with 85% to 88% accuracy, which is just stunning to me.
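Note: The episode does not describe the colleague's actual algorithm, so purely as a hypothetical illustration of the general supervised approach, here is a sketch in Python using TF-IDF features and logistic regression on hand-labeled article texts; the toy strings below stand in for the roughly 1,800 hand-categorized articles.

```python
# Hypothetical sketch of learning the epistemology labels from article text.
# This is NOT the algorithm behind the 85%-88% figure, which the episode does
# not describe; it only illustrates generic supervised text classification.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Toy stand-ins for the hand-labeled corpus of ~1,800 articles.
texts = ["we test the null hypothesis implied by the permanent income model",
         "we calibrate the model and show it matches output volatility",
         "we exploit plausibly exogenous variation to estimate the effect"] * 30
labels = ["falsification", "model_fitting", "causal_effects"] * 30

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, random_state=0)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)                          # learn from labeled articles
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

The key design point is the same whatever the model: the hand-categorized sample serves as training data, and accuracy is judged on articles the algorithm has not seen.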

Beckworth: Here's a great example of innovation being disruptive but also creating new opportunities. Now you're able to go do other work, other projects.

Glandon: That's right.

Beckworth: Your time is freed up. So, the world's a better place, even if AI is disruptive. Okay, so let's take all of this and maybe segue into pedagogy, teaching. I mean, the original motivation for the paper was, what should undergrads expect going forward? We've touched on a little bit of this, and I left the academy in 2016. I was a macroeconomist teaching macro. I found it increasingly tough to teach a principles of macroeconomics course because there's just so much to cover. I felt like textbooks were often years behind on the actual tools of monetary policy. What do you think about all of that? What should we do? Because you're actually on the front lines of this.

The Pedagogic Present and Future of Macroeconomics

Glandon: Yes. I teach Econ 102, which is principles of macroeconomics, pretty much every year, and I agree. I feel that the gap between what we teach in Econ 102 and cutting-edge research is just getting larger and larger as time goes by. So, the short answer is, I think, appropriately, the pace at which new macroeconomic research finds its way into a textbook is pretty slow. That said, I do try to give students a sense of the difference between what we do in class and what macroeconomists do when they're trying to learn more about the way the economy operates. We still use, at least in my principles class, a pretty simple aggregate supply, aggregate demand model. We do not try to explicitly model dynamics. I think the big wedge between undergraduate macroeconomics and graduate macroeconomics is that we don't try that hard, in the principles and even intermediate theory courses, to include dynamics, because it just gets to be quite a bit more difficult. We do a little bit, but not in the way that a DSGE model does.

Beckworth: Yes. I've seen commentators many times, for example, on Twitter, academics who are commentators, talk about this gap in an almost condescending manner. Luckily, we've got teachers like you who are doing their best to bridge the gap. But it always strikes me that these same people will say, "Ha, undergrad macro is so far removed from graduate macro," and I want to say, "Ha, graduate macro is so far removed from policy-world macro."

Glandon: That's a great response.

Beckworth: It really is, and not just in… there's politics you have got to deal with, but also things you don't wrestle with in grad school, that even many academics don't wrestle with. And this is not everyone, because there are many good academics who know what I'm about to talk about, but I've become more familiar with the plumbing of the monetary system. How does the corridor operating system work versus the floor? How do the Treasury market and the repo market work… stuff that people in the financial markets worry about that academics don't always think about. There are many areas where our knowledge is incomplete, and we need to have some humility in approaching this and be willing to learn wherever we are.

Glandon: Yes, I completely agree with your point about humility, and I've thought a lot about that in the last year or so. I try to inject a little bit of that into my class as well. I'm perfectly upfront with my students about the fact that macroeconomics is just really hard. In the natural sciences, it's comparatively easy; rocket science, way easier than macroeconomics. I say that [being] a little tongue in cheek, but at least you can do controlled experiments over there. In macroeconomics… I think one of the fundamental reasons why we have this menagerie of approaches to economic research is that it's very rare that we can actually conduct a controlled experiment in a setting that provides useful information for policymaking.

Beckworth: We will always have disagreements in macroeconomics because we can't settle these questions with great finality. I will share what I think is a dirty little secret, and that is, it's easier to teach microeconomics, I think, than macroeconomics, because it's much more systematic, it builds upon itself. There are debates about how much market power there is or not… but, I mean, elasticities, things like that, are so much more elegant and, I think, foundational, building upon themselves, where in macro, there are these huge divisions and big debates, general equilibrium issues, identification issues, and such.

Glandon: There's quite a bit of variation from one textbook to the next, too, for intro macro. Yes, I agree with that sentiment.

Beckworth: Let me go back to undergrad macro, though, for a minute in terms of topics. Typically, you might cover long-term growth, which is a very important question. Why are some countries rich, some poor? Why don't we grow one percentage point more a year over many years and compound that into incredible standards of living? We have that question. Then there's the business cycle, which this podcast deals with more, although we do have interest in the long-run issues as well. Then I would say, maybe, a third area might be the mechanics of policy. Tell me, what's happening at the undergraduate level in terms of those topics?

Glandon: Basically, you've outlined how I run my course, with maybe one exception: I think of measurement as also an important piece in there; measuring inflation, measuring real GDP, making sure students have a really good grasp of what those things are. But I'll say a couple of things. One is, compared to when I was in college, I think the study of economic growth in a principles of macro class is a lot better, because textbook authors have figured out a way to make the Solow growth model pretty accessible to undergraduates. And so, I teach a graphical version of the Solow growth model, and I even have students put it together in Microsoft Excel so that they can change parameters and see what happens in the model.

Glandon: And I use that as an opportunity to say, "Hey, this gives you a sense of what macroeconomists actually do. They would write down a model that has these equations that describe how the economy unfolds over time." And then, I even say, "Oh, let's suppose we wanted our version of the model here, the Solow growth model, to look like the United States. What data would we need to get in order to do that? Can we find it on…?" Yes. The answer's yes. We can find it on FRED very easily, and then try to make the model look like the United States. That's what we do in discussing growth. Also, I use Cowen and Tabarrok's textbook, which I really like, because they emphasize the importance of institutions, which is, I think, a really fun topic, highly accessible, and you don't see much discussion of it in typical macro texts.
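Note: As a rough analogue of the spreadsheet exercise PJ describes, here is a minimal Solow iteration in Python; the Cobb-Douglas form and the parameter values are standard textbook assumptions, not numbers from the episode.

```python
# Minimal sketch of the Solow exercise: iterate the capital-accumulation
# equation k' = s * k**alpha + (1 - delta) * k and watch convergence to the
# steady state. Parameter values are illustrative textbook choices.
alpha, s, delta = 0.3, 0.2, 0.05   # capital share, saving rate, depreciation
k = 1.0                            # initial capital per worker

for t in range(100):
    y = k ** alpha                 # output per worker (Cobb-Douglas)
    k = s * y + (1 - delta) * k    # next period's capital per worker

k_star = (s / delta) ** (1 / (1 - alpha))   # analytical steady state
print(f"k after 100 periods: {k:.2f}; steady state: {k_star:.2f}")
```

Changing s or delta and re-running is exactly the parameter-tweaking exercise described above; making the model "look like the United States" amounts to picking those parameters from data series such as those available on FRED.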

Beckworth: And you're not saying that just because you're recording here at the Mercatus Center, where Marginal Revolution University and that textbook are a part of our program.

Glandon: Maybe that had something to do with why I got invited, but no, I--

Beckworth: Not at all.

Glandon: No.

Beckworth: So you're having a good time with undergrads. You're beginning to whet their appetite in terms of modeling, going to the data, sounds like you still use FRED data.

Glandon: Quite a bit, yes.

Beckworth: [It’s a] big, powerful tool, so thank you St. Louis Federal Reserve Bank for continuing to provide that.

Glandon: Yes. We love it.

Beckworth: I know that there have been attempts to update some of the undergraduate textbooks or some of the curriculum. The thing is, it is so difficult, particularly when it comes to the mechanics of monetary policy, the tools. I don't know if textbooks still carry the old tools, like, the Fed can do open market operations, required reserves, the discount window, those-- I think that was the list, but that's so dated, right?

Glandon: It's so dated, yes. I'll confess that when I first started teaching, it was just a couple of years after the financial crisis, and so I kept thinking, well… and I taught the mechanics of monetary policy the way everybody did: open market operations, that's the main tool, and the Fed is targeting the federal funds rate, but then there are these other tools, discount lending and reserve requirements. I would caveat this by saying, "Oh, and by the way, since the financial crisis, there are ample reserves, and so things are a little bit different now than they used to be." And this money multiplier thing that we used to talk about, and build up, and relate to the amount of reserves that banks keep as a fraction of deposits… it's increasingly awkward to teach it that way.

Beckworth: Right.

Glandon: And this summer, at that Liberal Arts Macro Conference that I mentioned, I was talking to my colleagues about this, and they actually referred me to a pretty nice paper put out by some Fed economists, Jane Ihrig and Scott Wolla. It turns out that they've actually put out a couple of publications on this very topic. It's called, *Let's Close the Gap: Updating the Textbook Treatment of Monetary Policy.* I took a look at this paper, and it made me feel a whole lot better about the situation, because it turns out most textbooks have still not updated. They put together a framework for how things ought to be taught about the way the Fed implements monetary policy, and compared that to what several leading textbooks actually do, so it's pretty interesting.

Beckworth: Yes, I mean, I don't think there are many textbooks that get into the distinction and wrestle with a floor operating system, versus a corridor operating system, versus a hybrid, which would be a tiered system. I know Ulrich Bindseil, who works at the ECB, has a textbook that he uses when he teaches central bankers, but something like that at the undergraduate level is probably too inaccessible, and also just tough. But it's important to know these things, because this is how it actually works. Even things like the repo market, money markets, those things… I mean, there's so much to this. You could throw in Basel III Endgame, bank regulations, shadow banking, how the Fed steps in during crises with these liquidity facilities. All of these things interact, and they morph and grow over time. So, macroeconomics is very dynamic, both in what it does and what it's going to do in the future.

Glandon: There's no doubt about that, and I really struggle, especially for Econ 102, where you just simply cannot cover all of this stuff. And so, I'm still on the fence about how to do things. Obviously, I want to teach students things that are correct, and so I am going to make a change and just simply hit them with saying that the main instrument for implementing monetary policy, and I've actually been doing this for a while, is adjusting the interest rate on reserves. The question I'm wrestling with is, do I also mention how things used to be, when we were in a scarce reserves environment, or, I believe that's called a corridor system? I don't know if that makes the cut for Econ 102, but it really pains me to cut it out, because I think it's important.

Beckworth: Well, if any teachers are listening, my recommendation would be just to keep it like you do: illustrate the current system. Maybe in a money and banking class, you could work across multiple operating systems for central banks, get into repo, get into shadow banking, and all of those issues, because that is really in the weeds, and, in principles of macro, you probably don't have time to spend half your semester discussing the corridor operating system versus the floor. Because then they'll miss out on the measurement issues, they'll miss out on the long-run growth, they'll miss out on more practical stuff. Probably most students who take principles of macro are not going to go on to be economists. They're going to get business degrees, maybe get an MBA, maybe even get an English degree, live real-world lives, have to make decisions, and knowing the difference between a corridor system and a floor system probably won't be that important to them.

Glandon: I might have to call you up and have you remind me of that when I get to that part in Econ 102, so I don't spend two extra days teaching students about the plumbing of central banking, because I find it very hard to pass up on all of that stuff.

Beckworth: Okay, in the time we have left, just briefly, again, we're on the pedagogy of macroeconomics. We've been discussing the discipline. We've gone over the research trajectories. What about the intermediate level, in terms of what models you use to teach? And I want to focus on the business cycle side of things. What about IS-LM? There's the IS-MP model, and there's still some aggregate demand, aggregate supply usage. What do we see happening at the intermediate level?

Glandon: At the intermediate level, I tend to use Abel, Bernanke, and Croushore. It's been a while since I taught it, so I haven't seen the most recent version of their text, but they use an IS-LM model, and it's sort of a "choose-your-own-adventure" in terms of how much detail you add to that model. Some people are very critical of IS-LM, and my reading on that is that IS-LM doesn't have enough meat to it to merit all that criticism. You can sort of make it do whatever you want it to do. What I like about it is that it shows students that we're attempting to endogenize as many variables as possible: the interest rate, inflation, real GDP, unemployment, et cetera. And this is a simple attempt at doing that. Now, it doesn't model the dynamics explicitly, and so there's a bit of hand-waving that happens there, but I think it's a good introduction to attempts at general equilibrium modeling. And so, that's what I tend to teach at the intermediate level. Now, I have dabbled in adding some dynamics, and I know that there are some textbooks that do that, but oftentimes I find that that comes at the expense of really getting a good handle on the basic models.

Beckworth: I remember that book as well. It's been a while since I've seen it, but I do recall that they have IS-LM, and they also have a full employment curve, so it actually completes the model. In the typical IS-LM, the price level is fixed. This allows for changes. So, that actually tells a richer story than-

Glandon: It does, and it relates the long-run macroeconomic model to the business cycle part of the model, which I think is a really important feature. I consider that to be a requirement for a good intermediate model.

Beckworth: Yes. I think if you use that version of IS-LM with full employment, you can show what the natural interest rate is versus the current interest rates and the gap and kind of a New Keynesian perspective on… the output gap emerges when you get a divergence between natural rates and actual rates.

Glandon: I've heard people… I won't name any names, but I've heard some strong criticism that goes as far as saying that the IS-LM model is just wrong. I just don't see it that way. I don't think it's actually falsifiable in the way it's written. I think it does a pretty good job of providing us a lens through which to see the way we think the economy works in the short run relative to the long run.

Beckworth: In terms of actual instruments and variables [that] the Fed and policymakers control: interest rates-

Glandon: Exactly, yes. You can add a policymaker into the model who's setting interest rates according to a rule. I think that's the IS-MP version of the model. You can add a little wedge, like a financial accelerator type wedge, between the policy interest rate and the rate that people borrow and lend at. So, you can even give students a sense of how we add features to the model to make it a little bit more realistic.
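Note: To make the layering PJ describes concrete, here is a toy, linear IS-MP sketch in Python; the functional forms and every number are illustrative assumptions, not anything from the episode or a textbook calibration.

```python
# Toy IS-MP sketch: a policy rule sets the policy rate, a financial-friction
# wedge separates it from the rate households and firms actually face, and a
# linear IS curve maps that borrowing rate into the output gap.
# All parameter values are made up for illustration.
a, b = 2.0, 0.5            # IS curve: output_gap = a - b * borrowing_rate
r_neutral, phi = 2.0, 1.5  # MP rule: policy_rate = r_neutral + phi * inflation
inflation = 1.0            # current inflation, percent
wedge = 0.5                # financial accelerator-style spread, percent

policy_rate = r_neutral + phi * inflation   # the policymaker's rule (MP curve)
borrowing_rate = policy_rate + wedge        # wedge between policy and market rates
output_gap = a - b * borrowing_rate         # IS curve

print(f"policy rate: {policy_rate:.2f}%, output gap: {output_gap:.2f}")
```

Raising the wedge while holding the rule fixed lowers the output gap, which is the financial accelerator intuition in one line.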

Beckworth: Yes, I think that does provide a useful framework for understanding a central banker when they give a speech, at a minimum. So, it's useful, even if it's not perfect.

Glandon: Yes, I agree. That's a good assessment of the model. Can it help me understand a speech given by a central banker? Very good.

Beckworth: Okay, with that, our time is up. Our guest today has been PJ Glandon. His paper is titled, *Macroeconomics Research: Present and Past.* Check it out in a recent issue of the Journal of Economic Literature. PJ, thank you for coming on the program.

Glandon: Thanks very much, David. It was a lot of fun.

About Macro Musings

Hosted by Senior Research Fellow David Beckworth, the Macro Musings podcast pulls back the curtain on the important macroeconomic issues of the past, present, and future.