Tim Lee on the Present and Future of AI and its Implications for Policy

As AI technology continues to develop, it could have major implications for the labor market, macro policy, and the greater macroeconomy moving forward.

Tim Lee is an independent journalist who formerly worked for the Washington Post, Vox, and Ars Technica, where he covered tech policy, blockchain issues, the future of transportation, and the economy. Tim currently produces the newsletter Understanding AI and is a returning guest to Macro Musings. He rejoins the podcast to talk about AI, automation, and their implications for the macroeconomy and policy. Specifically, David and Tim discuss the singularism vs. physicalism debate, the possible threats posed by AI, how the regulatory landscape will be affected by AI, and a lot more.

Read the full episode transcript:

Note: While transcripts are lightly edited, they are not rigorously proofed for accuracy. If you notice an error, please reach out to [email protected].

David Beckworth: Tim, welcome back to the show.

Tim Lee: Hey, it's great to be back again.

Beckworth: Well, it's great to have you back on, and you have started a new newsletter. You've done several, but tell us about this newest one, titled Understanding AI.

Lee: So it does what it sounds like. It tries to help readers understand how AI works and also the implications of AI for the larger world. And so I've been writing a lot about the implications for the labor market, and also these kinds of bigger questions about whether we should be afraid of AI taking over the world. And I've also looked at some specific technologies. For example, voice cloning has gotten very good. I used one of the programs to create a voice recording that was a fake version of my voice, and it was good enough that my mother couldn't tell the difference.

Beckworth: Yeah, I've seen several stories, Tim, on funeral homes or services like that where they recreate a dead person visually and in audio, and it's hard to tell the difference. So it's pretty remarkable, a little scary, and we'll get to that in a bit. But let's talk about the state of AI, because we're going to get into AI, automation, what it means for the economy, the labor market, and hopefully we'll get to what it means for the Federal Reserve as well. But what is the state of AI today?

The State of AI Today

Lee: So there's been really rapid progress over the last decade or so. The real breakthrough came about 10 years ago, when they figured out how to use something called neural networks, and how to use really deep ones with lots and lots of neurons. And the first application they used that for was image recognition. They got really good at saying, is this a dog or a cat or something like that. And then in the last five years, they've used the same basic architecture to do two really big things. One is large language models, applications like ChatGPT, that allow AI to complete documents or to engage in conversation, and the other is image generation. There's a class of models called diffusion models, where you give it a textual prompt and it'll draw you a picture that's a very realistic rendering of whatever you asked it to do. And you can pretty much type in anything and it'll draw something that looks pretty good. And so, those two innovations, I think especially large language models, have gotten the business world very excited. And so you're seeing a boom in Silicon Valley, and in the tech sector more generally, of people really investing heavily in bringing these AI technologies to every industry you can think of. And there's just a ton of startups who see this as kind of the next big thing in the tech world.

Beckworth: So a big part of the recent developments is this neural network, and tell us a little bit more about that. Is this mimicking the brain?

Lee: Yeah, it was inspired by the brain. So the concept is quite old. It was first… in the '50s, they started experimenting with it. And it's a pretty simple concept. It's just a mathematical function: you take an input and you produce an output, and then that output is fed into additional neurons. And if you have millions and millions of these, you can get interesting results. The thing that's interesting about it is that you can train it. So you look at one example, you see whether it got it right or wrong, and then you adjust all the weights in the network to be a little closer to the right answer. And it just turns out, if you do that over and over again, millions or billions of times, you can get these really extraordinary outputs, where you have these functions that can do things that sort of approximate what we used to think only human beings could do.
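Note: as a rough illustration of the training process Tim describes, here is a minimal sketch in Python of a single toy "neuron" being nudged toward the right answer one example at a time. The setup and numbers are invented purely for illustration; real networks stack millions of these units and use more sophisticated training methods.

```python
import random

# Toy example of the training loop described above: a single "neuron"
# (a weighted input plus a bias) learns a target relationship by
# repeatedly adjusting its weights toward the right answer.

# Training data: we want the neuron to learn y = 2*x + 1
data = [(x, 2 * x + 1) for x in [0.0, 1.0, 2.0, 3.0, 4.0]]

w, b = random.random(), random.random()  # start with random weights
learning_rate = 0.01

for step in range(10_000):              # "over and over again"
    x, target = random.choice(data)     # look at one example
    prediction = w * x + b              # run it through the neuron
    error = prediction - target         # see how wrong it was
    # adjust the weights a little closer to the right answer
    w -= learning_rate * error * x
    b -= learning_rate * error

print(f"learned w={w:.2f}, b={b:.2f}")  # approaches w=2, b=1
```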

Beckworth: Well, that's fascinating. And it leads me to a question that is way outside the wheelhouse of this podcast, and that is, will AIs become conscious or self-aware? I don't want to spend too much time here, but of course, it's a question that actually comes up. And how would we know if they became aware?

Lee: So one of my favorite quotes about AI is from a guy named [inaudible], who was an early sort of algorithm pioneer, and he said that the question of whether an AI can think is no more interesting than the question of whether a submarine can swim. And I don't mean that you're not asking an interesting question, but I think that our intuitions about these things are really shaped by the fact that human beings all have very similar minds. And so we think there's this one clearly defined thing called consciousness that you either have or you don't have. And you can say the same thing for things like sentience and intelligence. And I think what we'll see is that as we create more virtual minds, if you will, more complicated thinking machines, they're just going to be very different, in the same way that airplanes are better than birds in many ways but don't fly exactly like a bird, or a submarine or a boat is like a fish or a duck in some ways but isn't exactly like it. I think that probably AIs will always be different enough from human minds that there'll always be people who say, that's not really thinking. But pick any given task, and it's very possible. There are lots of things people said machines couldn't do, like draw pictures or engage in conversation or recognize images, and we've done that. So any specific task, I think, probably will happen. Whether it'll really be conscious, I think people will just always be arguing about that.

Beckworth: So we don't need to worry about being confused by an AI pretending to be a girlfriend or some other relationship out there?

Lee: Oh, that could definitely happen. I mean, there's a thriving industry of chatbots that are not just answering questions, but your friend or even your boyfriend or girlfriend. I mean, there's some lonely people out there who are happy to do that. So I definitely think that's going to happen. I just don't think there's going to be an objective answer of, is this thing really thinking or does it just seem like it's thinking? It's just like… people are just going to disagree about that.

Beckworth: Well, I mean, we won't go there today, but another related question is, would an AI have a soul? But let's hold off on answering that question and save it for a religious podcast. I want to talk about this because it's fascinating. One of the big things that always comes up with AI is this question of whether it's an existential threat or a big opportunity. And let me kick off that discussion, because you've actually written several pieces on it, with a recent story that probably many of our listeners are familiar with. So, I'm reading an article from early June from USA Today, and I'm going to read the first few paragraphs, and we'll use this as a segue into: is AI an existential threat? So here it goes.

Beckworth: It says, "The Air Force on Friday denied staging a simulation with an AI-controlled drone in which artificial intelligence turned on its operator and attacked to achieve its goal. The story mushroomed on social media based on apparently misinterpreted comments from an Air Force colonel at a seminar in London last month. Colonel Tucker Hamilton, an experimental fighter test pilot, had described an exercise in which an AI-controlled drone had been programmed to destroy enemy air defenses. When ordered to ignore a target, the drone attacked its operator for interfering with its primary goal. The apocalyptic theme of machines turning on humans and becoming autonomous killers coincided with increasing concern about the danger of artificial intelligence. On Thursday, President Joe Biden warned that AI could ‘overtake human thinking.’ However, Hamilton was only speaking about a hypothetical scenario to illustrate the potential hazards of artificial intelligence, according to the Air Force."

Beckworth: So this story really took off. In fact, I remember seeing it first on Twitter as if it really happened, and then later tweets said, “well, no, it didn't quite happen that way.” But it kind of speaks to the moment. There's all this concern. And you had an article in your newsletter that was titled, *The AI Safety Debate is Focusing on the Wrong Threats,* and you develop these two notions, these two schools of thought maybe, the singularism school of thought and then the physicalism school of thought. And it was nice, Tim, to see the physicalism because it gives us hope. But maybe walk us through these two camps and the implications for this issue.

Singularism vs. Physicalism and the Implications for AI

Lee: Sure. So for the singularists, I think Nick Bostrom is probably a leading exemplar. He is a philosopher in the UK, and he wrote a book in 2014 called *Superintelligence,* where he argued that once AIs reach a certain level of intelligence, they'll be able to improve themselves and rapidly get smarter and smarter. And once they do that, they kind of become superhuman and become so smart that they can take over the world. And the thing I found really interesting about that book is, when you look at the actual pages where he explains how the taking over of the world happens, it's really short. And he has a couple of examples. He says maybe they'll use nanotechnology, which, as far as I can tell, very little progress has happened on the kind of nanotechnology he's talking about. But anyway, it's very kind of vague about exactly how this will happen.

Lee: But some people have a very strong intuition that if something is smart enough, it'll just be able to outwit everybody and take over the world, just because intelligence is extremely powerful. And the alternative view, which I lean more towards, and which I call physicalism, says that intelligence is obviously useful, but it's just one of many resources that you need to have power in the world. It's also useful to have the ability to manipulate the physical world, which human beings have with our bodies and which a computer in a server rack does not. It's useful to have natural resources, it's useful to have social networks. And so, the focus of physicalism is that it's not so much that we need to worry about the one superintelligent AI taking over the world, it's that the world is going to be full of AIs. Some of them are going to want to do bad things, or there are going to be people who create them to do bad things.

Lee: But what we want to do is sort of lock down the physical world to just minimize the amount of damage that people can do. And to some extent, this is already, I think, how we should be thinking about the world, because there certainly are examples of hackers attacking power plants or pipelines or things like that. And even in a world without AI, we want to make sure that those things are locked down well, that they're not connected to the internet if they don't have to be, that there aren't any obvious security vulnerabilities. And I think what AI really tells us is that we need to take those threats more seriously and we need to do more to avoid them. But the idea that we want to stop developing AI, or have some kind of complicated licensing regime to make sure that the bad AI doesn't get created… that doesn't necessarily make sense to me.

Beckworth: Yeah. Let's go back to the singularism school of thought. And just real briefly, the motivation for the term singularism, is it because of the singularity or any other reasons-

Lee: Yeah, so there's two reasons. One is because of the singularity, which is this idea of the AI being so smart that it does things we can't even understand. The analogy is to a black hole: there's a point beyond which you can't know what's happening, because all the light's being pulled in. But the other is this idea he talks about of a singleton, which is a single AI that's so powerful that it runs the whole world. That's kind of how Nick Bostrom sees this scenario playing out. Whereas I think there'll be lots of AIs that do lots of different things for different people and organizations.

Beckworth: You provide a very hopeful and, I think, realistic view of this with the physicalism perspective. But nonetheless, let's take a closer look at singularism because there are a lot of people who do take this view. You also mentioned in your article Geoffrey Hinton, an influential computer scientist who worked for Google. Stephen Hawking, he also is on this train, is that right? He's also an advocate of this concern.

Lee: Well, he's dead now, but yes, when he was alive-

Beckworth: I mean, when he was alive, yeah, he was concerned.

Lee: Yes.

Beckworth: But prominent people like that are concerned, worried about it. And in fact, in a minute we'll talk about someone who went before Congress and raised this concern as well. But for now, sticking to this article that you have titled, *The AI Safety Debate is Focusing on the Wrong Threats,* let's go through the arguments they make, or the story they tell, in terms of how this AI will take over the world. Let me list the steps, and you can fill in any that I left out. They believe, one, that AI will start building other AIs, so they will themselves program these machines to take over the world. They may also build nanorobots to physically enter the world. They may develop some means of social control, you know, manipulation, maybe misinformation, and maybe even build a super weapon. Is that the story they like to tell?

Addressing the Possible Threats Posed By AI

Lee: Yeah, absolutely. I think one other piece of it is they imagine that if you had a lot of AIs with the ability to do productive white-collar work, those could go out and make a lot of money in the world. And so this super AI could potentially have billions or trillions of dollars at its disposal to accomplish whatever it wants to do, which again could be used to hire people or buy resources or whatever it might want to do.

Beckworth: Yeah. So this sounds like a great sci-fi movie, but there are some problems with it, as you outline. I mean, you do recognize that AIs can build other AIs, but beyond that, like the nanorobots, what's the problem with that?

Lee: So this is an idea that's been kind of floating around in circles of people who are interested in science, and especially science fiction, since the 1980s. There was a guy named Eric Drexler who wrote a book arguing that it would be possible, and I think this actually goes back to Richard Feynman. He wrote that it would be possible to build little tiny molecular-scale robots that could then build other molecular-scale robots, and this would be a different way of accomplishing very powerful things. The problem with it is just that it doesn't seem to be happening. There was a study, I think I mentioned it in my article, from 2006, where somebody did a scientific survey of the progress and just found that nobody had figured out how to actually make this theoretical idea practical. And as far as I can tell, just from looking around, it doesn't seem like any more progress has been made.

Lee: I'm not enough of an expert to say why that won't happen, and obviously it's impossible to prove it will never happen, but it's not like humanity has not been trying to do nanotechnology. We do have computer chips that are at the kind of scale people are talking about, and those have made rapid progress, but those are not chips building themselves. Those are made in billion-dollar semiconductor fabs with very, very complicated equipment that lots and lots of people need to work on. That is very different, and I think it has different implications, because an AI couldn't build a giant semiconductor plant the way it might be able to build one robot and then have that robot build another robot.

Beckworth: So the key concern is that these AIs need to somehow get access to the physical world, and that reality provides us a barrier, a space that we ourselves can control. So speak to that a little bit more. In fact, you talked to my colleague at Mercatus, Matthew Mittelsteadt, and he walked through three cases that kind of illustrate the limits of what's possible right now. And he also offered a solution: creating this kind of space between important physical infrastructure and AI. So maybe walk through those scenarios and then the implications for policy.

Three Present Limits of AI and the Current Prospect of AI Regulation

Lee: Yeah, absolutely. So if you ask an expert like Matthew what are the kind of most serious hacking attacks that have happened in the last 10 or 15 years, the three he pointed to were, one, the Stuxnet attack on Iran's nuclear facilities, where we infiltrated a virus into some of the nuclear research facilities, I think it was the centrifuges, that caused them to not work for a few months. There was a case where Russia hacked a Ukrainian power plant and managed to shut it down for a few hours. And there was the pipeline attack a few years ago, where a pipeline was shut down for a few days. But the thing all of those attacks had in common is that they were all pretty temporary. Once people figured out that the computers were being tampered with, people figured out how to physically bypass them.

Lee: Either we reprogram the computers or kind of switch them to manual mode... In the case of the power plants, the articles about it said they literally just had to disable a computer and manually open the circuits that the computer had closed. And so, I think that's the model you want: you want to think about what's the worst-case scenario if these computer systems malfunction, and then just make sure that the human beings are ultimately in control. I think something else people don't think about enough is that we're used to thinking about systems like online services or a cell phone network as very automatic. They just kind of work by themselves. And that's only true because there are a lot of very hardworking people behind the scenes going up and repairing cell phone towers, replacing servers when they break, et cetera, et cetera. It's actually very labor intensive to have a system that seems totally automatic actually work well. And so if you ended up having some kind of war between humans and AIs, if the humans just stopped maintaining this very complicated technological infrastructure that the internet depends on, it would break down pretty quickly. And so I think that just kind of gives us an inherent advantage in any kind of conflict like that.

Beckworth: Yeah, it's easy to be complacent about how dependent our infrastructure is upon people, upon specialists. I want to read an excerpt from the piece you wrote: “The modern world depends on infrastructure like roads, pipelines, fiber optic cables, ports, warehouses and so forth. Each piece of infrastructure has a workforce dedicated to building, maintaining and repairing it. These workers not only have specialized skills and knowledge, they also have sophisticated equipment that enables them to do their job.” So number one, there's just a wide range of specialists that each have a unique set of knowledge and skills, and they have to maintain… this is capital, right? It deteriorates, it breaks down. I mean, it's the second law of thermodynamics, right? Entropy, things break down over time. And you need someone there to maintain, to build, and also to pass that knowledge on down to the next person who will take their place. And we're going to come back to this in a minute, but this speaks to this diffused knowledge.

Beckworth: There's a problem of knowledge that's spread throughout society, and a market system is a very clever way to bring it all together, to coordinate it. And we'll come back to that point in a minute. But I think it's something that's easy to take for granted, to overlook the complexity of. I'll invoke Milton Friedman here and the famous “I, Pencil” essay he popularized. How do you make a pencil? It breaks down all the different steps within it: who produces the lumber, the steel, the oil, and the machines that got the lumber to market? There's just so much going on that we take for granted, that we don't appreciate. And I think the same thing is true when you think in terms of AI taking over the world. An AI would have to have a huge army of humans on its side to do something like that, and I suspect that would never happen.

Beckworth: So anyway, we can rest assured, Tim, per your takeaway from the essay, that this is an unlikely scenario, that AIs will take over the world, and so we can kind of push that issue to the side. Nonetheless, there are still people who are very worried about it, and the second essay of yours that I'm going to touch on speaks to people who are very concerned about this. So you have another article titled, *Congress Shouldn't Rush into Regulating AI,* in which you talk about the OpenAI CEO, Sam Altman. He went before Congress and he said this: “‘I would form a new agency that licenses any effort above a certain scale of capabilities and could take that license away and ensure compliance with safety standards,’ Altman said. He added that these standards should be focused on ‘dangerous capabilities’ such as the ability to self-replicate and self-exfiltrate into the world.” So I'm assuming here that Sam is taking this singularism view. Was there any pushback to this view? Did anybody else suggest a more modest approach that would be consistent with the physicalism perspective?

Lee: Yeah, absolutely. There's been a number of people, I mean, there's some folks in Silicon Valley. I think Marc Andreessen is one prominent person who has criticized this viewpoint. And I do think there's room for regulating some of this stuff, but the approach that I think would make more sense is to focus on cases where there is clear potential for harm. So one example is medical devices. I mean, we already have an apparatus set up to make sure that medical devices are safe, and I think that probably needs to be adjusted in some ways for AI. But a large language model on its own, or any AI model or algorithm on its own, is not going to harm somebody. What's going to harm somebody is when you take that and you apply it to some kind of mission-critical application. I mean, self-driving cars are another example where there's a safety dimension. When AI is applied there, absolutely, you need to have some rules to make sure that that's being done in a sensible, safe way.

Lee: But yeah, the idea that you just need to worry about the models themselves, or some kind of chatbot or image-generating software hurting people, I think, is just misunderstanding the nature of the threat. And also, with this licensing concept, the whole premise of the superintelligence argument is that it's so smart that it's going to outwit people who try to prevent it from doing things. So the idea that you're going to have an FDA for AI that's going to probe it somehow and say, okay, this is one we've checked out and it definitely is not going to take over the world, that just doesn't make any sense, because if it's smart enough to outwit the rest of humanity, it's probably going to outwit the regulatory agency. And so far, I've been looking pretty hard for the kind of concrete policy proposal, a white paper that says, here's the legal test that would be applied. And as far as I can tell, none of the people making this proposal have come out with anything like that, anything anywhere close to what you'd need for Congress or a regulatory agency to actually make a law like this, as opposed to just talking about it in the abstract.

Beckworth: If the premise is that a super AI could take over the world, so you need a regulatory body to oversee it, then the whole idea falls apart, because the super AI would capture the regulatory body. I mean, we talk about regulatory capture in the literature. This would be a new form of regulatory capture: the AI would control the regulatory body and therefore be free to do what it wants to do. So it really falls apart. And I think the big takeaway is that you don't want a blunt tool in terms of regulation. You want to be very precise, applying it where it really matters, weapons, maybe healthcare, things like that, in a nuanced manner. So you wouldn't want to have one agency that oversaw everything. Maybe house it in existing agencies, is that right, and let them apply it?

Lee: Yeah, absolutely. I mean, there's a couple of things I would like to see happen. One is, I do think there's room for more research. Certainly it's possible that there'll be problems with AI that people haven't thought of. And so I would love to have Congress put more money into this, maybe even build a kind of national lab that could build its own AI system so the government has some institutional capacity. The other thing I would love to have is an agency that does investigations of cybersecurity and AI-related threats, one that doesn't necessarily have any regulatory authority but just publishes reports. There's an agency called the National Transportation Safety Board that does this for transportation. Anytime there's a plane crash or a major highway pileup or something, they have people that go in and figure out what happened, write a report, and make recommendations for how to change things. I would love to have more infrastructure for that, for investigating self-driving car crashes, checking out whether medical devices are working, and so forth. So yeah, those are the approaches I would take, because I think the government definitely could use more capacity to understand the threat and make sure people are ready for it. But I think it's premature to have regulation of the technology itself outside of a few specific areas, like you said, healthcare or self-driving cars.

Beckworth: Yes. Well, let's talk about some of the applications today, and we'll come back to the labor market implications in a minute. But let's talk about how AI is being currently used in terms of things that are very important in the physical world. I mean, we've talked about chatbots and probably everyone's familiar with ChatGPT and things like that. But in the physical world, things that we see these AIs being applied to would be like driverless cars. And you have an essay titled, *The Death of Self-Driving Cars is Greatly Exaggerated.* So walk us through that.

AI Applications Today: Driverless Cars

Lee: Sure. So if we had talked maybe five or six years ago, I probably would've told you that it's right around the corner. I was definitely guilty of buying into the hype a little bit more than I should have. But then in 2021 and 2022, a lot of the companies that had been working on self-driving car technology went out of business or had to be folded into other companies. And so I think the pendulum swung way to the other side, and you see a lot of people saying, “Self-driving cars are decades away. Maybe this technology's never going to work.” But a few of the leading companies have just continued plugging away. In particular, in my piece I talked about the two leading companies in the robo-taxi industry, which I think are Waymo and Cruise, and those guys have continued to make progress. And so right now, if you go to Phoenix, there's a 180-square-mile area in the south of Phoenix, including downtown Phoenix and the area around the Phoenix Airport, where you can hail a completely driverless car and it'll take you to where you want to go and drop you off.

Lee: And the technology works pretty well. It does still have limitations: it does not drive on freeways, and it doesn't go into the actual airport. You have to get dropped off at the SkyTrain that's kind of down the street from the airport. So there certainly are challenges left to solve, but the idea that driverless cars are this future technology that we haven't figured out how to do is just not accurate. These really exist. The other place where there's a lot of these is San Francisco. Both Cruise and Waymo are having a lot of friction with the regulatory authorities there, because part of the way that these companies have ensured safety is that they're very cautious. So if they're not sure what to do, they just stop, which is one reason they don't go on the freeway, because obviously it would be dangerous to stop on the freeway. On a city street, it's not a big deal, but it is pretty annoying if the car ahead of you stops.

Lee: And so there's a lot of friction with the city authorities about this. But they're making progress. They've been gradually expanding their service area and they're now saying that they're planning to grow quite aggressively. Waymo says that they're going to be about 10 times their current size a year from now, and Cruise says they're aiming for a billion dollars in revenue by 2025, which I think would be roughly a 50x increase. So I don't know if they're going to grow that quickly, but it does seem to me that they are pretty close to having at least a system that they can scale up. And I think they might not be too far away from commercial viability.

Beckworth: And just to be clear, Waymo is a product of Google or Alphabet, is that right?

Lee: That's right. The Google self-driving car project was rebranded as Waymo in, I believe, late 2016.

Beckworth: And Cruise comes from General Motors?

Lee: Yes. They were a startup that GM acquired about five years ago.

Beckworth: So basically there's been kind of a, I don't want to say consolidation, but some firms have stepped out of the industry, and a few have stayed that know what they're doing. And so the market process is helping cut through and putting those players into action that actually can deliver. And you mentioned Waymo has… did you say 180 square miles in Phoenix? So Tim, let me ask you this question. You're in Phoenix with your family, I know you have some children. Would you feel comfortable putting your children in a Waymo car to take them off to soccer practice? Say you're at home. Would you say, okay, kids, I've got my app here, I've got a Waymo out in the driveway. Please get in it and go, and then have Waymo pick them up at soccer practice and bring them home. Would you feel comfortable with that?

Lee: I think so, yeah. My kids are probably a little on the young side to do it by themselves. But from a safety perspective, yeah, I think so. The thing that's really tricky, I mean, people talk about how humans are bad drivers, and obviously humans do make mistakes. But actually, humans are quite good drivers. If you look across the whole United States, there's a fatality every 100 million miles driven. And so that means you've got to do a lot of testing to prove that it's safer than a human. I forget the latest numbers, but Waymo has done tens of millions of miles and has never had a fatal accident. So that's evidence that it's probably pretty close to human level, but we don't know if it's at human level for sure. But certainly it's pretty safe. There's at least circumstantial evidence that it's close to human level. So I'm not sure I would be quite ready to do it on a routine basis, but certainly, if it was like a demo ride, I would not be at all nervous about doing this, because they do have a lot of experience under their belt.
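Note: as a rough, illustrative back-of-the-envelope check on the sample-size point, here is a short calculation assuming the roughly one-fatality-per-100-million-miles figure Tim cites and, hypothetically, 20 million driverless miles; the specific mileage number is an assumption, not from the conversation.

```python
# Rough arithmetic behind "tens of millions of miles isn't enough to
# prove human-level safety." Illustrative numbers only.
import math

human_fatality_rate = 1 / 100_000_000   # ~1 fatality per 100M miles (US average)
driverless_miles = 20_000_000           # assumed "tens of millions" of miles

expected_fatalities = human_fatality_rate * driverless_miles
# Probability that a driver with exactly human-level safety would also
# record zero fatalities over the same mileage (Poisson model):
p_zero_if_human_level = math.exp(-expected_fatalities)

print(f"Expected fatalities at the human rate: {expected_fatalities:.2f}")        # 0.20
print(f"Chance of zero even at human-level safety: {p_zero_if_human_level:.0%}")  # ~82%
```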

Beckworth: And the point is they're learning, they're refining, they're growing. And eventually, the plan would be that AI develops enough that it knows how to navigate those tricky spots, and at some point, not to go into the [consciousness] debate, it becomes aware enough to handle very tricky situations. You mentioned that it doesn't yet go into airports to pick up people where you'd normally get an Uber or a taxi, because it is chaotic, it's a little stressful. And that's always been the critique. I've heard it's harder for a driverless car to go into, say, downtown New York City, where it's chaotic, and sometimes you've got to break the rules to get somewhere. But that is something that eventually AI could master, correct?

Lee: Yeah, we're seeing very gradual progress. So they started out in a 50-square-mile area of Phoenix that was like the most suburban part of Phoenix. They weren't doing the airport, they weren't doing downtown. And they expanded. They now do downtown Phoenix, which is not Manhattan, but it is more chaotic than suburban Phoenix. And Cruise actually had an interesting strategy. They started out in San Francisco, which is one of the most chaotic areas, and their theory is, we'll learn faster if we see more of these crazy situations. So yeah, they're not there yet, but there's very clear evidence that they're at least serving more of these areas that are more challenging. And they're getting pretty close, because I think downtown San Francisco is one of the most chaotic areas, and probably Manhattan might be the only one that's more so. And so if they can do downtown San Francisco, there's a huge portion of the country, especially in the Southwest, which is… another thing I think these companies still need a little work on is weather. But across the Southwest, it's nice, sunny days everywhere, with a lot of nice suburban areas in Los Angeles, Las Vegas, Houston, Dallas. And so I think there's going to be a pretty big market there. Even if they can't do Manhattan or a Boston snowstorm, there's going to be a pretty big market just in the kind of “easy areas.”

Beckworth: Okay. Well, listeners, you've heard it from Tim. When you go to Phoenix or San Francisco next time, try out Waymo and Cruise and you can report back how your travels went in the driverless cars. Let's talk about one other driverless vehicle, at least potentially driverless vehicle, and that's Tesla. You mentioned that in the article, but it's not the main thrust of your article. Why is that?

Lee: So Tesla's taking a different approach. Waymo and Cruise's idea is that they want to do a completely driverless technology from the start. And there's actually an interesting history there. Back in the day, about 10 years ago, Google had technology that was kind of similar to what Tesla does now with Autopilot. It was a driver assistance system, and they would let Google employees borrow the car and take it back and forth to work. And what they found was that people trusted it way too quickly. And so they had surveillance cameras to kind of see how people were using it, and you see people putting on makeup and stirring their coffee and stuff and not paying attention to the road. And so the Google executives got nervous and said, we do not want to have this kind of mixed mode thing where the AI's kind of in control and the humans are in control.

Lee: So they said, we're going to be totally driverless. Tesla's taking a different approach, where they're building a consumer product: you buy a Tesla and then you can pay several thousand dollars extra to get the full self-driving technology. And so the car will mostly drive itself, but it is not safe enough that you can stop paying attention. And so the theory is, you get your Tesla, it drives itself, but you still have to watch and be ready to grab the wheel if it screws up. If it were me, that would make me nervous, because it seems pretty good, but that last jump, where you say, okay, it's good enough, we're going to have the person stop paying attention, that seems really, really hard to me. And it seems like the only way you can do it safely is the way that Waymo and Cruise are doing it, which is to start with some easy cases and work up to the harder ones, and it seems hard to do it any other way. So anyway, I just see them as a different market. There is a market for all the automakers that have driver assistance technology. Tesla is in some ways better, or at least maybe they're just less cautious about it. But that's a market, it's a perfectly reasonable market, and I see it as a different market from the people that are trying to build actual driverless taxis, where there's never anybody behind the wheel.

Beckworth: Okay. Well, let's move on to another application of AI today, currently in action, and this is from your other newsletter, the Full Stack Economics article titled, *I Ordered Robot Takeout On Two Campuses With Wildly Different Results.* So the one that was really interesting is, of course, George Mason University. I'm at Mercatus. We're a research center affiliated with GMU. So walk us through the robot service there.

AI Applications Today: Robot Takeout

Lee: It's great. So I believe GMU has two campuses, and you guys are in the Arlington one, right?

Beckworth: We're at the Arlington campus, the law school.

Lee: So this is at the Fairfax one, which is further out in the suburbs.

Beckworth: That's the main campus, yeah.

Lee: Yeah. Okay. So yeah, it's great. They're these little robots that are kind of microwave-sized. They're white, they have six wheels, and they drive around delivering food. And so you can visit the campus, you can pull out your smartphone. There's, I think, 15 or 20 stores that are participating. You can do Starbucks or Panda Express, places like that, and you can order food. And in 10 or 20 minutes, a little white robot will drive up and deliver your food, and you can open it up and there's your food. So it's great. And this company, Starship, has been working on this for about a decade. I remember about seven or eight years ago, they were driving around DC, so they've tried different environments, but what they found is that college campuses seem to be a really sweet spot for them, because there's not too much car traffic and a lot of kids in dorms that don't have kitchens, so there's pretty good demand. And so they're on dozens of different campuses now. They seem to have a real business. I don't know if they're profitable, they're probably not. But I talked to one of their executives last year, and he said that they're now confident that they're ready to scale up and they're going to be serving a lot more deliveries in the next year or two.

Beckworth: So these little robots, they're like you said, microwave size. They have wheels on them, six wheels, and they go places. So are there limits to where they can go? I mean, do they navigate only sidewalks and roads? What's the capacity?

Lee: Yeah. So they're on sidewalks, and they have a top speed of, I think, four or five miles an hour. So that's pretty slow. So it's different from the self-driving cars. There also are some startups that are building delivery robots designed for roads, which can go 20 or 30 miles an hour, and so those can serve different markets. This is another reason to do college campuses: they're pretty walkable. Their service footprint tends to be relatively small, because at four miles an hour, you can only go a mile or two from whatever restaurant you're picking up the food from. So it tends to be pretty localized. It'll cover a college campus, but it usually won't go out to restaurants in the surrounding community. Within the campus, though, it's dense enough that there are several different places within walking distance at any given point, and so there are usually several options that you can get.

Beckworth: Okay. And all these applications require AI, right? They've got to figure out where to go, and if there's an object in front of them, they've got to get around it, avoid it, and get where they need to be. By comparison, I'll throw this out there, you don't have an article on this, but consider some of the talk about air drones that would deliver a package, like Amazon's drones. I imagine that requires less AI. I mean, it's got to go someplace, but I imagine the demands in terms of computation are less.

Lee: So there are different aspects to it. I think the actual flying requires, I don't know if you want to call it AI or not, but with a quadrotor, figuring out how to stabilize it is a difficult technical challenge. So there's some smart software in that. But yeah, obviously in terms of navigation, there are fewer obstacles up there. And the way the air traffic control system works, different types of aircraft fly at different heights, so they go in airspace that's not super congested, hopefully. I think the place where the AI might be needed is at the pickup and landing points. You've got to figure out exactly where to drop, and whether there's a small child or something in the way. I have written about these, and currently the FAA actually has a rule requiring line of sight, which is a little silly.

Lee: So Walmart is testing these down in Arkansas, and the company that's working with Walmart has these portable towers that are up 50 feet or something, which gives them line of sight for a couple miles. And so you'll have a person that's just looking off in the distance, they can barely see it, but they can still see the drone. So I don't think that regulation necessarily makes sense, and probably over time it'll get relaxed. So yeah, currently the drones, I think, are more human supervised, but at scale, I think they wouldn't be. I think that if this technology works, drones can go pretty far and could be pretty smart. And so you could imagine a future where these are completely automated. And then if they're landing in people's backyards, you need to be very careful that you're not going to run into somebody's pet or child.

Beckworth: Okay. So we've talked about the existential threat or opportunity, depending on your viewpoint, from AI, we've talked about some of the applications. Let’s now move into the realm of economics and talk about its implications for the labor market. And then we'll talk about what it means for macroeconomic policy. So you have an essay in your newsletter titled, *Why I'm Not Worried About AI Causing Mass Unemployment.* So walk us through that argument.

The Implications of AI for the Labor Market

Lee: Sure. So I start off the article talking about a prediction that Marc Andreessen made 12 years ago, that software would do what he called eating the world. And by that he meant that software would revolutionize not only industries like music and movies, which obviously were already being revolutionized at that point, but he really thought that software startups would disrupt healthcare, education, transportation, things like that. And that really didn't happen. And I think a big part of the reason, and it actually goes back to what we were talking about before, is that much of the economy is actually physical. If you think about the housing market, obviously there's like an app to buy real estate, and there are some aspects that software can improve a little bit, but fundamentally, building a house, selling a house, repairing a house, those are not really information technology-heavy activities.

Lee: And so it just doesn't have a big impact. And so one reason I don't expect there to be a huge impact on much of the labor market is that there's just a lot of jobs that have this big physical aspect. If you think about a plumber, for example, we currently don't have robots that are anywhere near sophisticated enough to do the job of a plumber, even simple things like climbing up and down stairs. I mean, I've got a bathroom on my second floor, and most of the robots right now are on wheels; they would not be able to get up to the second floor. Humans are very good at doing both fine motor skills and also picking up heavy objects. It's difficult to build a robot… Anyway, that's one reason: I think we're very far away from having robots sophisticated enough to do many of the physical jobs that humans do.

Lee: The other reason is that I think there are many jobs where being a human being is inherently part of the value proposition of doing the job. So you think about childcare: maybe you could invent a robot that keeps your child safe, but you probably wouldn't hire it to take care of your child, because the job of a childcare provider isn't just to keep them safe, it's also to be stimulating and to develop a social relationship, and a robot just can't do that. And I think there's a lot of things like that: any kind of care worker, therapists or psychologists, and also things like a waiter or waitress. Mostly their job is to bring you food, but there's an aspect of luxury to having somebody come and wait on you and provide services. Or think about getting a massage. These are things where I think having a human being do it is going to be a value add, even in a world where a robot can do something that's technically very similar to what the human being is able to do.

Beckworth: And what about the argument often made by economists that new jobs will be created? So even if a robot does take away a job, there'll be some new job created. So for example, if you own a home, we now have robots that can vacuum the floor for you, so maybe you now need someone who specializes in selling and repairing those types of robots. I mean, is that a valid argument too? Are people skeptical of that?

Lee: Yeah, I think that is correct. So if you think about it, especially with robotics, robots are not going to be self-repairing, or self-building for that matter. So if we had a bunch of robots being built to do things, there'd be jobs making the robots and repairing them. I think, theoretically, in the long run you could end up with an ecosystem of robots that can repair each other, and so maybe in the very long run those jobs would start to go away, but we're talking several decades at least. But the other aspect, and I think this really ties into macroeconomics, is that it's not just that that specific industry creates more jobs, it's that if society in general is getting wealthier, people have more money to spend, either because the robot version of the technology is cheaper, or because you invested in a robotics company and your stock price went up and so now you're wealthy, or you work for a robotics company and they're paying you a good salary to design the robots, or whatever. As society gets wealthier, people just have more money, and so they spend it on other things. And maybe they're not spending it on the things that have been automated anymore, but you're still going to want a person to renovate your kitchen or take care of your children or wait on you at a restaurant or provide you with counseling, or whatever.

Lee: There are always going to be a certain number of jobs, I think a significant number of jobs, that a human being is going to be able to do better than an AI. And so, as long as society as a whole is getting wealthier, we can make sure people have enough money, and to me, that's a much better way to think about it. This is why we have macroeconomic policy. One way to think about it is that if AI is in fact very good at increasing productivity and replaces a lot of workers, that's going to be a deflationary force, right? And so that leaves more room for the Fed to cut rates, and maybe more room for Congress to cut taxes or raise spending, and that puts more money in people's pockets, which they can then use to buy whatever services have not been automated yet. And so I think a lot of people have this intuition that it has to be that specific technology that creates the new jobs, and I think that's kind of misunderstanding how this works. The important thing is, is there enough demand? And as long as the institutions that are supposed to create demand are doing their job, you're never going to have a situation where there just isn't enough demand to provide work to people that want jobs.

The Greater Macro Impact of AI

Beckworth: Well, let me respond to that point. I think it's an important one, that if there are real economic gains, that actually makes us richer. It gives us opportunities to buy other things that maybe we can't currently buy because we don't have the income. So your point is, if we do have this AI surge, it's going to lead to great productivity gains, lower production costs, free up our time, and allow us to do and afford things we couldn't otherwise. So we need to view this as a glass half full. We need to view this as an opportunity. And then that leads to the question of the economic implications. I was going to come to this later, but let's jump into it right now: what would the Fed do in that environment? You mentioned that it's likely to be a deflationary or maybe a disinflationary environment, and I completely agree.

Beckworth: In fact, we saw something very similar in the late 1800s. There was a deflationary environment, and some of it was tied to the gold standard and what was happening there. But a lot of it was also tied to productivity growth. There was the industrialization of the US economy, rapid productivity gains. And the way I would view this is, as long as we keep nominal income stable, so you know where I'm going with this, Tim. As long as we keep nominal income or demand stable, which is what the Fed can do, which policy can do, then these real gains can be spread throughout the economy through lower prices. In other words, if everyone's still getting paid the same nominal wage, but things are getting cheaper, their real experience is much better and they can afford more things. So, deflation by itself is not necessarily a bad thing.

Beckworth: Deflation is a bad thing when there's a collapse in aggregate demand, a collapse in nominal income. But if you have stable nominal income growth, stable nominal demand growth, and things are gradually getting cheaper, you're actually much better off. And sometimes people will invoke, what about if you have debt? What if you have debt and the price level is falling? Well, that's offset by the fact that you're having higher real gains. If your real debt burden's going up, presumably it's being offset by higher real income gains as well. So I don't fear deflation. I think it actually could be a good thing if it's accommodated by the right macroeconomic policy. So I look forward to this world where we have AIs and rapid productivity gains going forward. Any thoughts on that?
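Note: a minimal numerical sketch of the argument above, with made-up numbers. If nominal income (NGDP) growth is held steady, faster productivity-driven real growth shows up as lower inflation, or even mild deflation, while real wages still rise.

```python
# Toy illustration of "stable nominal demand plus productivity gains
# shows up as benign deflation." All figures are invented for illustration.

ngdp_growth = 0.04   # nominal income held to a steady 4% growth path

for real_growth in [0.02, 0.05, 0.07]:          # productivity scenarios
    inflation = ngdp_growth - real_growth       # NGDP growth = real growth + inflation
    nominal_wage_growth = ngdp_growth           # wages track nominal income here
    real_wage_growth = nominal_wage_growth - inflation
    print(f"real growth {real_growth:.0%}: inflation {inflation:+.0%}, "
          f"real wage growth {real_wage_growth:+.0%}")
# With 7% real growth, inflation is -3% (deflation) but real wages still rise 7%.
```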

Lee: Yeah. So this is a story people tell about the 1990s, that people were worried about the economy overheating, and Alan Greenspan decided that some of the new computer technologies were helping to keep inflation in check, so he eased, and we got the boom of the late 1990s. I mean, you'd know this history better than me, but yeah, I think this is just part of what you want to happen. You want high productivity growth, which makes macro policy easier than it would be in an environment of low productivity growth.

Beckworth: Yeah, I mean, rapid productivity growth solves a lot of problems, right? I mean, we want economic growth to deal with everything from inequality to solving the climate crisis. Growth provides opportunities to address all of these issues that we care about.

Lee: Yeah, I do think that there's maybe an analogy to the China shock in the 2000s, where there really were specific industries and specific parts of the country that had some adjustment problems. I'm skeptical that the changes will happen fast enough for this to be a big problem in many industries, but I could imagine a few industries where somebody invents something that really makes a certain category of jobs obsolete. And so it's not that it's going to be painless for everybody, but as you've said many times on this podcast, the economy as a whole, or the labor market as a whole, has a lot of churn. Lots of people lose their jobs. When the published statistics say 200,000 jobs were created, it's not actually 200,000 jobs. It's like 4 million jobs were destroyed and 4.2 million jobs were created. And so, I think the economy has a lot of capacity to absorb job losses and create new jobs, as you said, as long as nominal demand overall is going up at a steady rate, and I'm completely on board with that policy goal.

Beckworth: Yeah. Tim Lee has been a big fan of nominal GDP targeting for a long time, I'll let you know that if you don't already know it. He's been on the show before, and we've talked about it before. But this is just another reason to promote the nominal GDP targeting view: it can handle supply shocks. Basically, the story you're telling is that it handles supply shocks. It would've handled the negative supply shocks during the pandemic, but it could also handle positive ones in a way that makes it easier for policymakers. But let's go into how actual policymaking would transpire. And I want to take this on two fronts. First, there's this knowledge problem question: would AI solve the socialist [calculation] debate? Would a central planner have enough information, via AI and big data, to do what markets do? And then secondly, how would specific economic policymaking bodies implement policy?

Beckworth: So let me start with the first one. And I bring this up because a very prominent economist, Daron Acemoglu, has a new book out with Simon Johnson titled, *Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity,* and he's been tweeting a lot about this book. It's a really interesting book, and he's been promoting it, as all good authors do. And he had a tweet thread in late May, and I'm just going to pick it up part of the way through. He says, “coming back to Hayek's argument…” and this is Hayek's knowledge problem argument. Hayek argues that the market process uses prices to send information that allows producers and buyers to coordinate their actions. But he says, “coming back to Hayek's argument, there was another aspect of it that has always bothered me. What if computational power of central planners improved tremendously? Would Hayek then be happy with central planning?” And then he says, in another tweet, “impossible to know the answer to this. But some believe that advances in AI are taking us toward this type of supercharged computational power. In my mind, this does not make central planning any more attractive.” But what is your thought? Do AI and big data really solve this knowledge problem? Or is there more to it than just having a powerful, intelligent machine?

AI and the Knowledge Problem

Lee: So I guess I see this as not necessarily binary. You could think of Amazon as, in some sense, a central planner that plans a large portion of the economy. They match supply and demand in a way that's not like a pure open market; it's a firm that's doing that. And so I think better AI could certainly allow more big firms to do that. Walmart does this as well. But I think, ultimately, Hayek's argument is still correct, and one reason is that the information problem and the incentive problem are closely related, right? Because what the price mechanism does is it gets people in the market to reveal what they need and how intensely they need it. But they're only going to do that if they have the incentive to do it. A high price is both a signal that people should put something on the market and a financial incentive for them to actually do so.

Lee: And if you don't have that incentive, then people aren't going to reveal the information, the circumstances of time and place that Hayek talked about in his paper on this. And so I think the singularists, and people who overestimate AI, really misunderstand the nature of the intellectual work of an economy. Certainly there's some aspect of pure thinking, if you're modeling, and pure planning, but a lot of it is just… there are a lot of details about things around the economy. How fertile is this field? How many ball bearings are in this warehouse? Those kinds of things are not abstract intellectual problems. They're questions of what you know about the real world. And the market system and the price system, I think, help to coordinate that in a way that would otherwise be really hard.

Lee: I mean, I guess you could imagine having sensors everywhere, all through the economy, that would measure this. But we're very, very far from that. And I think if people don't feel that the system is working to their advantage, if the robot comes along and takes their stuff because it decides somebody else is going to need it, they're going to destroy those sensors. So I don't really see how you could build an economy that's centrally planned in that way, even if you theoretically had a computer system that was powerful enough to think about everything that's happening in the economy at the same time.

Beckworth: Yeah, I don't buy it at all, because even if you had sensors everywhere, which I would dread, the system still wouldn't know the subjective preferences in my mind. And the other thing is that they change. I may want to eat one meal now, and then an hour later I want something else. I mean, preferences change. What I like is not revealed, as you mentioned, until I have the incentive to reveal it. So there's a real knowledge problem. And I would also venture to say… now, that's the demand side. On the supply side, production, we really don't know productive capacity until we explore and we push and we build.

Beckworth: And so what I take away from this is that the people who see AI as becoming an effective central planner take the information as given. They assume the data is there because the market process, through discovery, has revealed prices and demand and supply. But that's a discovery process. It takes time. And I think the AI-as-planner view just assumes the data is there and the planner can use it, but that information is constantly changing. That'd be the first point I would make. The second one is that AI may actually add more complexity to the economy, right? We may have even more sophisticated processes of production and more things that we value. So it's not clear to me that AI would actually simplify matters. It might actually add more complexity.

Lee: Yeah, absolutely. And I think this ties back to the existential risk question we were talking about earlier, because if you imagine an AI trying to take over the world and run everything, it would have to be a central planner. It would have to replace humans with robots or with whatever. And there are just going to be lots of things about the economy it doesn't know, and if humans decide that it's a threat, they're going to stop cooperating and then things are going to break down. Important pieces of equipment will go missing. People are not going to keep doing the jobs that are needed to keep the economy running. So yeah, I really see these debates as related. I think some of the bad intellectual habits that led people to be very enthusiastic about socialism 70 years ago are contributing to people overestimating how powerful these AI systems could become.

Beckworth: Well, let's end on another policy consideration. This is the second point I alluded to earlier, and that is, could AI do monetary policy? Milton Friedman famously talked about a computer replacing the FOMC. He advocated having the Fed target a money supply growth rate, so he would literally have the computer create so many dollar liabilities and have them grow at a constant rate. People have dismissed that idea, but it was an interesting one. Now, if you take that principle and invoke Marc Andreessen's line that software is eating the world, you can imagine an AI that uncovers underlying economic relationships. Maybe it estimates the parameters of Taylor rules and updates those relationships over time. I mean, could you imagine an AI at some point replacing the FOMC, or at least being an important part of the FOMC's considerations?

Could AI Conduct Monetary Policy?

Lee: So I think it's going to be an important part of it. I mean, the Fed does modeling, and I'm sure that new AI techniques will allow them to do better modeling. Maybe they'll let them do better surveys. Maybe they'll have representative agents inside some big model of the economy that is more sophisticated than you could build without AI. So absolutely, there are going to be computer systems that FOMC members will consult. And it's possible that at some point in the future, AI will come up with better macroeconomic theories. But I think the question is, as a human policymaker, how do you know that the AI's proposal is actually better? The only way to know that is for the AI to explain it to you. And so if it's very intelligent and can use human language, it'll produce a PDF or something that says, here's the better policy, and then human beings are going to have to decide whether they want to go along with it or not. Because ultimately, macro policy is not just a technical question, it's a question of values. It's a question of whether we are more worried about inflation or unemployment, or more worried about the short term or the long term. And I think, ultimately, we're going to want human beings that we relate to and trust to be the ones ultimately in charge. So yeah, I think it's going to be… certainly there's going to be AI in the process, but I don't know why you'd ever want to have it completely automated.

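To make the "estimates the parameters of Taylor rules" idea mentioned above a bit more concrete, here is a minimal sketch of the classic Taylor (1993) rule, the kind of simple policy rule such a system might estimate or update. The 0.5 weights are the textbook coefficients, and the input numbers in the example are hypothetical, not figures from the episode.

```python
# Minimal sketch of the classic Taylor (1993) rule: a simple formula mapping
# inflation and the output gap to a suggested nominal policy rate.
# Default parameters are the textbook values; example inputs are hypothetical.

def taylor_rule(inflation: float,
                output_gap: float = 0.0,
                inflation_target: float = 2.0,
                neutral_real_rate: float = 2.0,
                pi_weight: float = 0.5,
                gap_weight: float = 0.5) -> float:
    """Suggested nominal policy rate (percent) under the classic Taylor rule."""
    return (neutral_real_rate + inflation
            + pi_weight * (inflation - inflation_target)
            + gap_weight * output_gap)


# Example: 4% inflation and a 1% positive output gap imply roughly a 7.5% rate.
print(taylor_rule(inflation=4.0, output_gap=1.0))  # 7.5
```
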
Beckworth: Yeah, so that was my next question. Even if AI could do a good job with monetary policy, let's say it could replicate what the FOMC does, even better than the FOMC does it, and it had all the knowledge, it had algorithms, it was learning and constantly improving, would you want the AI to do it in that case? As you mentioned earlier, there are some things [where] we still like the human touch, whether it's healthcare, daycare, whatever it may be. So you think and suspect that people would probably still prefer to have a Fed chairman come out and do a press conference, as opposed to a Fed AI bot discussing the most recent rate hikes?

Lee: I mean, I think so, because one of the things about these new models that is a little spooky is that they are created through this iterative, kind of evolutionary process where the end product is this massive mathematical function, and we really don't understand what's going on inside it. And so there's no way to be sure it's going to do what we want in the future. It can have a track record, we can say that over the last 10 years it seems to have made good decisions, but you're never going to know for sure, especially for something like macro, where you can't do controlled experiments and you have a limited amount of data to say whether this worked well or not. And so in those circumstances, I think you're going to want somebody you understand. I think we all have some idea of how Jerome Powell thinks. He's a human being. He probably shares more of our values than some computer system does. So, yeah, I think so. Your colleague, Scott Sumner, used to talk about using [nominal] GDP futures markets as another way to automate monetary policy. I'm intrigued by that. I would like to see at least the creation of a market like that to see how it works. But there are also weird situations. If you built a computer to run the economy and started it in 2019, it's very possible it would've totally screwed up COVID, because that was such an unprecedented situation. Now, you could say, well, this AI is going to be super intelligent, so it'll do the right thing, but there's no way to know that. And you'd rather have it give you advice and then have human decision makers have the final say.

Beckworth: I think the point is that even if AI could do a better job, even in a pandemic, let's just give it the benefit of the doubt, I think people would have a hard time giving up what they think, maybe wrongly, is this human touch with macroeconomic policy, right? I think many macroeconomists would say, look, we need more automatic stabilizers, and that could be done through AI, better computer systems for our taxes, for unemployment insurance. All of those things could benefit from AI. But at the end of the day, they probably would still want to make sure there's a human overseeing it. I remember, Tim, back in 2015 or 2016, during the debate about driverless cars that we talked about earlier, I listened to another podcast from NPR, I think it was Planet Money, and they talked about the Google cars back then.

Beckworth: And the question was, will cars go the way of elevators or of airplanes? The analogy they used is that with airplanes, no one wants to give up the pilot. They want to make sure there's always a human. Even if a machine, even if AI, could land the plane, they want to have a human just in case, to have that human touch. Whereas with elevators, it used to be the case that there was an elevator operator, and you wouldn't dare step into an elevator without a human pressing the buttons. Elevators in the past were very dangerous, and people did get hurt, so it was very different. But over time, we've become comfortable going into an elevator, pressing the floor, and not thinking twice about it. And their question was, where would we end up? Would driverless cars end up like the elevator or like the airplane? Tying this back into monetary policy, I suspect we're going to follow the airplane model going forward.

Lee: Yeah, I mean, the question is… I mean, really, it's an economic question. It's about tradeoffs. Having a human being in every elevator would be quite expensive, and the safety benefit seems pretty small. [An] airplane is much more expensive and has many more people on it, so the stakes are higher, and you want to have the pilot because that's not too expensive relative to those stakes. Monetary policy is really, really important, and in the grand scheme of things, having people work for the Fed is not that expensive. And so even if it just gives you an extra small margin of safety, it's probably worth it.

Beckworth: Okay. With that, our time is up. Our guest today has been Tim Lee. He's the author of the newsletter, Understanding AI. Check it out. Tim, thank you for coming on the show.

Lee: Thanks so much. This was fun.

Photo by Marco Bertorello via Getty Images

About Macro Musings

Hosted by Senior Research Fellow David Beckworth, the Macro Musings podcast pulls back the curtain on the important macroeconomic issues of the past, present, and future.