Sam Hammond on AI, Techno-Feudalism, and the Future of the State

As AI technology continues to swiftly develop, the role of government may soon undergo a radical transformation.

Sam Hammond is a senior economist at the Foundation for American Innovation and a non-resident fellow at the Niskanen Center. Sam is also a previous guest of the show, and he rejoins Macro Musings to talk about artificial intelligence and the future of the state. Specifically, David and Sam discuss the current AI environment, how private AI may replace functions of the state, key moments in the techno-feudalistic future of AI, and more.

Read the full episode transcript:

Note: While transcripts are lightly edited, they are not rigorously proofed for accuracy. If you notice an error, please reach out to [email protected].

David Beckworth: Sam, welcome back to the podcast.

Samuel Hammond: Thanks for having me back.

Beckworth: Well, it's great to have you on, and you have written a series of very provocative essays on your Substack. Your Substack is titled Second Best, but your essays have been titled *AI and Leviathan,* talking about the state, and it's very provocative, Sam, because you make the case that there could be some very big changes within our lifetimes. The state, as we know it, could be very different by 2040. I'm very interested in this and excited to get into it because big change could be afoot. Before we do that, let's talk about your new place of employment. Last time we had you on the show, you were at Niskanen and now you're at the Foundation for American Innovation. Talk about that shop and what you guys do there.

Hammond: Foundation for American Innovation, we used to be called Lincoln Network but recently rebranded. We are a boutique tech policy shop working at the intersection of tech policy, national security, and modernization of the federal government and Congress. I see a through line through my work. I began in US policy at the Mercatus Center, where I worked on emerging technology, and then had a digression at the Niskanen Center, where I worked on social policy.

Hammond: My framing for that the entire time was that the US social insurance state, that our social insurance systems, weren't up to snuff for a world with much more creative destruction, much more dynamism, and that in the absence of those safety nets and public investments, that creative destruction would lead to third- or fourth-best ideas generated through protectionism, populism, ways of trying to crush the dynamism rather than to adapt to it. A lot of that was driven by my personal interest, both in academic research and just my own interest, in how the state evolved with the Industrial Revolution, and how you can see how things like Social Security and unemployment insurance developed out of industrialization as a way to insure against new idiosyncratic kinds of risk. That also leads into my interest in how AI and other transformative technologies will require an update to our institutions.

Beckworth: You have this common theme running through all of your research, and one thing that it touches on is state capacity. I believe in the previous podcast we discussed state capacity, and we'll circle back to that when we get to AI and what it means for the future of the state and state capacity, which could be very different. Now for most people, Sam, as you know, when they think of AI, they're thinking of things right in front of them, what's happening right now. We want to get to the exciting 20- to 40-year outlook that you provide in your essays, but what's happening right now in front of us? What's the current conversation surrounding AI?

The Current AI Environment

Hammond: AI as a category has been around for decades, obviously. It's gone through different phases. In the '80s, symbolic AI was popular. This is an approach of, if you want to teach a machine how to use language, let's break down language into its rules and try to build up from that. That turned out to be intractable. What is tractable is machine learning, specifically deep learning, which is just machine learning with a bigger, deeper model and lots more data. Following certain competitions in the late 2000s and early 2010s, these neural network models proved incredibly good at things like image classification and translation. It began to open people's minds that, "Oh, actually, neural networks and deep learning may be the path forward," and that, really, we had this architecture sitting under our noses this entire time, and what we really needed was just more computing resources and more data.

Hammond: Then things really took off after 2017 with the publication of a paper called, *Attention is All You Need,* which introduced the transformer architecture. Transformers are the architecture behind ChatGPT, and they can be used for a lot of things. Tesla and autonomous car companies are also using transformers in their models and so forth. The big difference with transformer models is that they are able to process information in parallel. They're much more efficient, and they're also able to learn in a way that's context-dependent. Instead of trying to guess the next word based only on the previous word, they can condition their guess on entire sentences, entire paragraphs. Similarly, if you're building a self-driving car, you can condition both on the last frame of what the camera saw and on a stream going back in time. That enables these models to develop richer representations of the things they're trying to predict.
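For readers who want to see the mechanism behind "conditioning on entire sentences," here is a minimal, illustrative sketch of single-head scaled dot-product attention, the core operation of the transformer. The dimensions and random weights are toy values chosen for the example, not taken from any production model.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the max before exponentiating for numerical stability.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention.

    X          : (seq_len, d_model) embeddings, one row per token in the context.
    Wq, Wk, Wv : learned projection matrices of shape (d_model, d_head).

    Each output row is a weighted mix of every position in the sequence,
    which is what lets the model condition on the whole sentence or
    paragraph at once, and compute it for all positions in parallel.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # pairwise relevance between tokens
    weights = softmax(scores, axis=-1)        # one attention distribution per token
    return weights @ V                        # context-dependent representations

# Toy usage: 5 tokens, 8-dimensional embeddings, a 4-dimensional head.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 4)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)    # (5, 4)
```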

Hammond: This introduces this question of, is scale all we need? You sometimes hear this. We know that more is different, that bigger models don't just get a little bit better but get qualitatively better and often develop new kinds of capabilities. There's a question of, is scale all we need or are we still looking for new architectures? This is an area of heated debate, because we can see the leap from GPT-3 to GPT-4, where, at first, GPT-3 was pretty impressive, but you jump to GPT-4 and suddenly it's passing the bar exam. Now Google is preparing to release its Gemini model, which is a multimodal model, images and text. It is rumored to be roughly 5 to 10 times bigger than GPT-4, and then there are companies that are currently training models that will be 1000x bigger than GPT-4 that are due out next year.

Beckworth: Wow. So, 1000 times bigger, that's pretty amazing. That has led to lots of conversations about the dangers of AI, concerns about what's going to happen in the present. There have been hearings in Congress. What is the current conversation there? What are people doing or not doing about AI and safety?

AI Safety and the Dangers of AI

Hammond: It's actually been impressive to see the uptake in interest in AI on Capitol Hill. As we're recording this, during the previous week there were four or five distinct Senate hearings related to AI and adjacent topics, including the major AI Insight Forum that Majority Leader Chuck Schumer organized with 22 leaders in the tech industry; a closed-door meeting, a six-hour listening session in which most of the senators tried to… I don't know how perfect the attendance was, but in principle, all of the senators were supposed to sit there and listen for six hours and learn about AI. They're actually planning to do nine of these this calendar year. We'll see if that works out, but they're going to try to do one a week. But the idea, at least the stated motivation, is that AI is moving so quickly that Congress really needs to get up to speed and develop some kind of consensus.

Beckworth: Now, about six months ago, there was a hearing before Congress, or there were calls for putting a pause on AI, if you recall, and we have survived six months without putting a pause on AI. So, the world is moving forward, but there are concerns about things like deep fakes, an arms race, maybe there's national security angles to all of this. Are these fears warranted or are we just adapting to a new state of affairs?

Hammond: There are different kinds of fears. I think one of the things that needs to be communicated is, when Sam Altman, the CEO of OpenAI, testified before the Senate Judiciary Committee, he explained that OpenAI's mission is to build artificial general intelligence and ultimately superintelligence, something far superior to humans in every respect. I think people in Congress, the lawmakers, they hear that, and they take it seriously but not literally. It's hard for people to imagine what that even means.

Hammond: It reminds me of how Ramez Naam, the futurist and writer on energy policy, has this chart that he often shows of the International Energy Agency's projections of solar, and every year solar keeps going vertical, and every year their projections show it leveling out. I think people have a similar autoregressive bias, you could say, in how they think about the future, and that's why exponential trends are really hard to grasp. So, I've been analogizing this to COVID, that we're in the January 2020 era of AI, where there are people on Twitter who are sounding the alarm that this thing is going exponential and it's slowly starting to percolate into the mainstream.

Hammond: But we haven't yet had the kind of "Tom Hanks got COVID, the NBA season is canceled" moment where people all update simultaneously, and we sort of enter a new period of more seriousness where, in the case of COVID, we enacted a $2 trillion stimulus bill within two weeks of the WHO declaring a pandemic. So, Congress can move quickly when there's a certain gravity to the situation. To date, I think there's still a lot of non-consensus on just where AI is going, and that's leading to a non-consensus on how to approach it.

Beckworth: That's very interesting. So, what you're saying is that there are people like you, experts who follow AI closely, and I'd say people like me, who were excited about it a few months ago when they had these hearings, when I first played around with ChatGPT, and I heard that I might lose my job, that AI will be doing podcasts in the future. Then it died down, and people in my outside-of-work community don't seem to be very cognizant of this going on. You're saying that once this becomes very well recognized and people begin to see how rapidly things are changing, it will be like a March moment or February moment of the COVID crisis. Well, Sam, let me go back to a point you made earlier about superintelligence, which is very interesting. Beyond AGI, there's going to be superintelligence, that's the goal. That raises a question in my mind about what it means to be sentient or to be aware. Do you think AIs will ever be aware? Maybe we have to define the terms first, but what is the trajectory of AIs? At some point, will they be peers to us, colleagues to us? What is your sense of where this is going?

Superintelligence, Sentience, and the Trajectory of AI

Hammond: This is probably more of a debated area. My perspective is sometimes called computational functionalism, that what our brain is, is its own deep reinforcement learning model. The way we learn is through prediction. We're constantly making predictions, and when we walk into a room and see something we didn't expect, our neurons are firing and rewiring constantly. There are also some incredible parallels between these artificial neural networks that we're building and the way our brain works, even to the extent where some image models that are trained to, say, detect faces or classify cats versus dogs, when you're training those models, they learn certain feature detectors, like detecting edges or detecting whiskers for a cat and so forth. There's been feedback with neuroscience, where neuroscientists have actually looked into the visual centers of our brain and found similar circuits that were actually first observed in artificial neural networks. So, our brain and these neural networks are discovering similar features and encoding things in ways that, even if they're not identical, because obviously our brain is wet and self-organizing and it's always on, there are striking echoes between the two.

Hammond: So, this raises the question, will these machines be sentient? My position is that there's nothing in principle stopping us from building machines that do have some kind of inner experience. Sentience could just be being intelligent, being intentional, but I think what people are really interested in is the experience of what it is like to have an inner experience. One theory is that what our brain is doing is, we're in a constant dream state, a waking dream, and we have this video game engine in our brain, and for evolutionary reasons, it's useful to have an agent in our brain that's observing this video game model and making decisions.

Hammond: As we build AIs that are more multimodal, that have multiple sensory inputs, images, audio, and then we also architect them to be always on, rather than just putting in a prompt and getting an output and it goes back to sleep, and then we add this component of self-reflection, which may be useful for developing systems that are autonomous, that are able to reflect on their observations and change their decisions, I think we'll approach something where, if it's not conscious, there will at least be people who think it is conscious, and it will be an active area of debate. I think, already, there are researchers who are trying to develop objective tests for subjectivity, anticipating that this day will come.

Beckworth: So, it is possible. There's a chance we're going to have AIs in the future that are aware of themselves and that they exist.

Hammond: Yes, because presumably, we evolved consciousness because it had some utility. There's this thought experiment in philosophy called philosophical zombies. Could you imagine somebody who has the identical behavior of a human, but there's no light on inside, so to speak? I think the right way to address that would be to say, "Well, is a philosophical zombie possible?" Because if they didn't have this ability for self-reflection and the experience of living within the video game engine, so to speak, would they have the same behavior in the first place? I think the philosophical zombie thought experiment begs the question. I tend to think that we won't have to build models that are sentient, but as researchers struggle to make agent-like models, things that have a certain degree of autonomy, building some self-reflective loop may end up being something that they stumble on as useful.

Beckworth: Well, very interesting. We'll wait and see what happens within our lifetimes. In fact, that's the point of your essays we're going to turn to now. Within our lifetimes, there could be some very radical changes with regard to the state, but presumably also on this issue of awareness in terms of AI. I want to jump into your essays here, and again, they're titled *AI and Leviathan,* and the title comes from Thomas Hobbes's Leviathan, the famous book from the English Civil War era. You begin your first essay by going back to the English Civil War, and you say something very similar happened then: a key motivating force, a key driving force for the English Civil War was the printing press. Walk us through that comparison and how it might be useful for us today in thinking through the changes that lie ahead.

AI and the Invention of the Printing Press

Hammond: Yes. Part of the subtext of this is responding to the slogan Peter Thiel has used, that cryptocurrency is libertarian and AI is communist, because AI favors scale, it favors surveillance, and so forth. Looking back at the printing press, it was its own information technology. Even though movable type was first invented in the mid-1400s, it didn't really reach a point of criticality, a point of critical mass and distribution, until a couple of centuries later. One of the parallels here with AI is that, yes, AI does favor scale in some respects, but it's also something that diffuses really quickly through the general population, and people have access to open-source models and so on. In the case of the printing press, and really every other previous transformative technology or general purpose technology, it preceded an institutional regime change of some form.

Hammond: In the case of the printing press in the early and mid-1600s, the parliament in England actually had an ecclesiastical censorship committee that would license the ability to print books, and that collapsed. Once every little township and community had access to a printing press, it was unenforceable, and suddenly there was this explosion, this takeoff of new printed materials where the Puritans, and the Presbyterians, and the other new Protestant denominations were able to speak up against the church. The Parliament itself was having conflicts with King Charles, and things just quickly radicalized. People who were separated by distance suddenly communicated with each other and found that they were like-minded. To me, that really parallels a lot of the dynamics we've seen with the internet, and really, the internet more broadly as part of this trend with cable and talk radio and other forms of mass communication, where, in the case of the Arab Spring, it's services like Facebook [that] were critical to enabling protestors to mobilize and to coordinate against the establishment.

Hammond: And so, we know in principle that technology, even simple, mere information technology, can induce the kind of regime change it did in the English Civil War and in the Middle East and North Africa, and we saw how countries responded to that. In the case of China, China watched the Arab Spring very closely, and a lot of their surveillance technology, their digital police state, was built in response to that, because they recognized that information technology was a direct threat to the regime. And my question is, to the extent that AI is just an acceleration, a continuation of this digital transformation, what does this imply for more open societies that may be tempted to go the Chinese route, but have many constitutional protections that prevent that? What happens as the balance of power between society and the state radically shifts?

Beckworth: Yes, so the printing press is a great starting place to think about institutions changing in response to technology changing or information technology changing. I really loved that you went there with the English Civil War. The printing press helped fuel the Protestant Reformation and an effective parliament, and helped cause wars. Now, you're not necessarily arguing we're going to have a war, but there are going to be some big dislocations that take place within society. In general, your framework or your argument is that there are two paths forward, and this comes out in your second essay called, *Preparing for Regime Change.* There's the possibility of what you call an “AI Leviathan,” doing what China did. You mentioned China saw the Arab Spring, they saw these revolts and they decided to clamp down before it could get out of hand, so they became a surveillance state, but the other alternative is a politically fragmented society, and you called it a quasi-medieval social order or a feudalist order. We'll come to that in a minute.

Beckworth: I want to just park here and think about this issue a bit more, because you used the internet as another great illustration, and you touched on it with the uprisings and such. I'm going to read a quote from your second essay. You say, "AI safety is usually conceptualized in terms of what AI will do directly rather than in terms of AI's likely indirect second-order effects on society and the shape of our institutions. This is an enormous blind spot." Then you use the internet as an example. You note that initially, we were worried about identity theft, cybercrime, child exploitation versus the second-order effects. You mentioned the Arab Spring uprising, but I thought instantly of the polarization of the US; populism, Donald Trump, you've got one news network for one group, another news network for another, people finding their silos. It has dramatically changed our world, some could argue for the better, and some, like me, would argue for the worse. Talk about how these second-order effects are often overlooked or taken for granted.

The Understated Importance of Second-Order Effects

Hammond: Yes, and they end up being the dominant effect. In the current AI safety discourse, there are folks worried about the ethics of AI, bias, and discrimination. There are folks worried about existential risks, and then there are also folks worried about AI misuse, that we could use AI models to create deepfake pornography of ex-girlfriends or something like that, or we could use it to generate malware. Those are all very legitimate areas of concern, but it tends to ignore the collective action dynamic of what happens when, not just through misuse but just through appropriate use, we all have access to this technology and enter a competitive dynamic where, even if we prefer a world where we didn't all have access to this technology, we are pushed into that as a Nash equilibrium. You can make the same point about the internet, where the protesters in Cairo or Tunisia weren't misusing Facebook at all. They were using it as it was intended, and they weren't doing anything that necessarily harmed particular people, but in aggregate, it led to a destabilizing effect.

Hammond: In the case of AI, what these systems are trending towards, it's both increasing the information resolution of the universe where, for example, you can use machine learning models to see through walls by detecting the displacement of Wi-Fi signals. There are now models that were trained to detect humans, because as we move through a room, we displace Wi-Fi, and that can actually be a signal that you can extract to see through walls. Today these technologies, to get good enough, require access to powerful GPUs and computers, but once the models are trained, they are often only a few gigabytes of data that people can download. Over time, in a very predictable way, access to more computing resources will be democratized, in the same way that our cell phones today are as powerful as the mainframe supercomputers of 20 years ago. What does that imply, in aggregate, not for people engaging in misuse, but for people just using the technology as it was intended, all at once?

Hammond: And what I'm especially interested in is how this affects the institutional economics of society, because, as you learn from Ronald Coase, the only reason we have corporations in the first place is because there are certain transaction costs. There are costs to monitoring people, there are costs to bargaining and coordinating between different groups, and there are costs to search and information. AI directly impacts every one of those costs. Think about the principal-agent problem. Once you have intelligent agents, there is no more principal-agent problem, to the extent that the agent never shirks and doesn't steal from the till. And AI, more broadly, radically lowers the barriers to monitoring people, because I always used to wonder, when I read 1984, how did they have enough people behind the cameras to monitor all of the people on the other side of the cameras?

Hammond: And with AI, even today's technology, to the extent that it has a semantic representation of the world, a world model, it's able to intake an image and give a qualitative judgment of what's happening in that image. Are the workers slacking? So on and so forth. And this is rolling out in micro, where Activision has announced that they're going to use AI to monitor voice chats on Call of Duty. The older approach to this would be to have a bunch of banned words, and if you said that bad word, you might get banned from the game, but now you can have systems that listen and don't just look for bad words, but look for bad topics or broader ways of evading detection. On YouTube, it's become common because of the algorithm to not talk about murder, to talk about people being “unalived,” to use these euphemisms, because it evades the downgrade from the algorithm. All of this stuff is being opened up, and as these transaction costs fall, I think [that] our institutions will adjust accordingly, and part of my main thesis is that private institutions will adjust much faster.
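To make the banned-words versus bad-topics distinction concrete, here is a minimal, hypothetical sketch in Python. The `llm_topic_flags` function is a placeholder stand-in for a call to whatever moderation model a platform actually uses, not a real API, and the euphemism table exists only so the example runs.

```python
# Old approach: exact-match keyword filtering.
BANNED_WORDS = {"bomb", "murder"}

def keyword_filter(message: str) -> bool:
    """Flag a message only if it contains an exact banned word."""
    return any(word in message.lower().split() for word in BANNED_WORDS)

# Newer approach: ask a model what the message is *about*.
def llm_topic_flags(message: str) -> set:
    """Hypothetical semantic classifier: returns the topics a model judges the
    message to be about, even when euphemisms like 'unalived' are used.
    Placeholder logic so the sketch runs; a real system would call a model."""
    euphemisms = {"unalived": "violence", "get got": "violence"}
    return {topic for phrase, topic in euphemisms.items() if phrase in message.lower()}

message = "He totally got unalived in that last round."
print(keyword_filter(message))   # False: no exact banned word appears
print(llm_topic_flags(message))  # {'violence'}: topic-level detection still fires
```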

Beckworth: So, there are second-order effects, like we saw with the printing press and with the internet, that are often much larger than the things we worry about right in front of us. You mentioned AI, we're worried about deepfakes, we're worried about spyware, we're worried about things that are happening here and now, but we should be thinking more long-term about our institutions. I'm glad you brought up Ronald Coase. You used him to talk about, why do we have corporations? Why do we have governments? Governments try to solve these collective action problems or these negative externality problems. You're arguing that AI could push a lot of these away from these institutions, or at least change the institutions, and the private sector is better equipped, more mobile, to step in and use AI to solve the problems. Again, going back to the two paths forward, there's the AI Leviathan, a Chinese-style surveillance state, or the other path is political fragmentation, which emerges because of what AI is going to do. Again, you used the term quasi-medieval social order or techno-feudalistic future. Maybe walk us through, how would some of the things the state now does be transformed into something done privately via AI?

How Private AI May Replace Functions of the State

Hammond: I think there are really three possible futures.

Beckworth: Oh, three. Okay.

Hammond: I take this from Daron Acemoglu and James Robinson's *The Narrow Corridor,* which is their book on the institutional economics of liberalism, of liberal democracy, and they argue that liberal democracy exists within this narrow corridor between the powers of the state and the powers of society, and that these have to be in a kind of harmony. If the state becomes too powerful, you veer off into despotism, and if society becomes too powerful, you veer off into anarchy. There is this third option, which is co-evolution, that our institutions keep up with AI and adapt in a way that keeps this balance, but we'll save that for later. I think the paths of least resistance are to tip off of this narrow corridor and fall into one of the failure modes.

Hammond: We can see examples in recent history of how technology can lead to micro-regime changes. If you think about Uber and Lyft in ride-sharing, this was a combination of the internet, of mobile, and of machine learning. Prior to ride-hailing, taxi cabs were governed by public regulatory commissions, with licensing and so forth. The core problem was basically monitoring: how do we ensure safety? How do we match drivers with riders? With mobile, suddenly we had the tools to do that at scale, and within five years, a city like New York went from 90% of rides being in taxis to 10% of rides being in taxis, and 90% being in Uber or Lyft. That was a regime change in micro, where we went from something that was government-organized to something that was governed by private competing platforms that use reputation mechanisms, different kinds of dispute resolution systems, and then also AI, to grease the wheels and cut the transaction costs. Now Uber's even exploring a technology to pre-match you with a driver, using AI to learn your contextual habits and start to make that match as you're pulling your phone out.

Hammond: You can broaden this. That was the case of taxis, but in the future, will we have USDA inspectors that go to farms and have to physically inspect the farms, or will there be cameras with intelligence that can understand when an animal is being abused, and have that continuously fed back to the regulator or to the underwriter, the private platform, that then sells you, instead of an FDA or USDA label, something much better: continuous monitoring, where we can see exactly what's going on, and we can do that in a way that's much higher trust?

Beckworth: You also use the illustration of an airport. Walk us through that. Why would an airport illustrate potential paths for AI?

Hammond: Well, all of these technologies really benefit from the ability to vertically integrate. In the case of Uber or Lyft, you still end up with large companies, but those large companies exist because they're able to bring together the engineering talent and build a walled garden, that kind of controlled user experience. Airports are an example of that. There are also many other institutions in life, like casinos, or comedy clubs, or homeowners associations, where you implicitly sign away some of your rights by entering. Comedy clubs these days will take away your phones so you don't record the comedy and put it on TikTok or whatever. To the extent that AI introduces kinds of negative externalities, where we're nervous that the person walking around the office might be recording and basically using the audio to keylog what people are typing, or… there was a big controversy last year in the chess world where Magnus Carlsen accused a junior competitor of cheating over the board in a chess game.

Hammond: There were all of these speculations that he had a computer hidden on his body somewhere. To the extent that we want to control for that, it's going to be difficult for governments in liberal countries to impose draconian rules, because this is ultimately an issue of freedom of choice. A lot of that control, that social regulation, will be shifted onto private organizations, in the same way that when you walk into an airport, you're not supposed to be able to yell “bomb” and you have to walk through scanners. These days, they even use facial recognition. I just expect that pattern to spread, and the airport as a landlord gets at this connection to feudalism: being an owner of a piece of land and being able to build these controlled experiences will become more valued.

Beckworth: You give a number of other examples. You just mentioned some, like location-based bans on devices, and again, the airport's useful because at the airport, we do give up freedoms when we go in there, and they're oftentimes privately run, the better ones are, and we're okay with that. You're saying [that] we'll be doing the same thing with AI-generated safety issues in the future, but you give other examples. We may also rely more on private schools that use AI to tutor students. In other words, local public school systems may see a mass exodus because AI provides some alternative form. That'd be another example where government structures would change. You also mention personalized healthcare, new ways of identity verification, AI-based arbitration mechanisms for rapidly adjudicating disputes. Maybe the court system becomes more efficient. What about those big things like the EPA or the Department of Justice, or the Department of Defense? How would we go from here to a more techno-feudalistic society with AI in those regards?

Hammond: Well, let's take the court example to start. The courts are useful because they illustrate this distinction between misuse and just appropriate use that we can't handle. Across the board, our institutions are built for a certain amount of bandwidth, for a certain throughput, and our courts are already overwhelmed. This is why most criminal cases plead out, it's why most lawsuits settle, and it's why there's this long extensive margin of lawsuits that never get brought, because the costs are too high. You can imagine, once we have AI lawyers, and I think AI lawyers could come this decade, things that are strictly better than even the top brass lawyers, because the law is code in a certain way, the barrier to entry to bringing a lawsuit [falls] dramatically.

Hammond: And so, the way this worked in the past, say with the internet, the internet brought a potential explosion in copyright disputes. Congress, in its foresight, passed the Digital Millennium Copyright Act, and it created the ability to resolve those disputes extra-judicially, because if you didn't have that safety valve, the system would've been overwhelmed. Now, when you steal content to put on YouTube, YouTube will take it down, and they have their own private dispute resolution system. Now imagine if this happens all at once. I think the courts end up having to either embrace AI… and some countries have done that. Taiwan, today, is experimenting with AI judges that'll deliver rulings on issues like DUIs and certain low-level fraud cases. Our courts are much more technologically conservative, and so if they do get overwhelmed, I think more and more dispute resolution will shift into forms of private arbitration. That's the real path to neo-feudalism, because we're privatizing the law at that point. Now, moving on to federal agencies, this is where I think there's going to be a need for co-evolution. There'll be some functions that, simply, are rendered obsolete, and it'll be painful to get rid of those. Once we have self-driving cars, will the National Highway Traffic Safety Administration have a huge relevance?

Hammond: So when it comes to, say, the FTC, I know somebody in the FTC's healthcare division. It's like a 30- to 36-person team of attorneys, and they're responsible for enforcing conduct for the entire US pharmaceutical industry. I was talking to him recently, trying to get a sense of his day job, and he subpoenas a pharma CEO as part of discovery and has to manually sort through 40,000 emails. With the technology we have today, he could take those 40,000 emails and pass them through a large language model and ask the model to find the five most egregious examples of misconduct or what have you. It may not be perfect, but it'll definitely accelerate things, accelerate the process. I guarantee you that big law and the private law firms are doing this and embracing this technology. If the government doesn't adapt and co-evolve with AI and really try to integrate AI into their systems, they'll lose this arms race with the private sector and civil society.

Beckworth: So you'd see a lot of the jobs either go to the private sector or the ones that remain in federal agencies be automated, people being displaced. And going forward, then, you would have, again, these three paths, I said two earlier, but three potential paths. One is the AI Leviathan, which would be state surveillance. That'd be resisting and imposing control. The other one is the quasi-medieval social order or techno-feudalism. Then there's a middle ground where they co-evolve together. Your default is that it's going to be the techno-feudalistic path, right? You said that's the path of least resistance or the easiest one going forward?

Hammond: That's my prediction, because of the differential arms race that government and society are in. I talked about how the IRS master file, for example, dates to the Kennedy administration. So, we're very slow at adapting to technology in this country. That could change, obviously. What I also see is this sort of accumulation of process and veto points, interagency process, judicial review, what Nicholas Bagley has called the procedure fetish. We have systems like the Administrative Procedure Act, which governs our agencies' work, and they'll require things like notice and comment to make a rule change.

Hammond: All of that stuff slows things down. We also have issues with procurement where, famously, the healthcare.gov website… but that was sort of a microcosm of just terrible procurement policies across government. The Department of Defense took 12 months to produce its own COVID mask. It doesn't really scream optimism in terms of the ability of the government to embrace this and keep up. And there's this idea from evolutionary game theory called the Red Queen dynamic, where, in *Through the Looking-Glass,* the Red Queen tells Alice that sometimes you have to run just to stay in place. I think that's the sort of place we're at now, where the institutions that we want to keep are going to need to run and really throttle up in their embrace of AI just to maintain homeostasis.

Beckworth: Just to be clear, what you see in the future is a far smaller state. The government itself won't be as large. The federal government will be smaller. Is that right?

Hammond: I think, in the near run, it looks like there will be areas where things fragment and areas where government sort of recedes to a series of core competencies, like national defense. Social Security is still relatively efficiently run, but we don't have a national ID in this country. Social Security numbers date from 1935. Having some of that infrastructure would be really useful. I know there are many folks, libertarians among them, who don't like the idea of everyone having a national ID, but if you have something like that, it makes it a lot easier to move to something like Estonia, where their e-government is so advanced that you basically pay taxes and receive benefits within a week, directly deposited into your bank account. When you file your child's birth records, they will automatically be enrolled in school four or five years later, because the system just automatically knows that the child has continued to age. There's been no death record to cancel it out.

Beckworth: Estonia is a good example of where we might end up in the future.

Hammond: Estonia is the middle path.

Beckworth: The middle path, alright.

Hammond: And this is also how Daron Acemoglu and Robinson talk about this: the alternative to an AI Leviathan would be a constrained Leviathan. This gets at the idea, central to classical liberalism, of a limited government. Modern liberalism, as folks like Mark Koyama have written about, wasn't the result of a weak state. It was actually the result of a strong state. When our old feudal order was replaced by modern nation states, this ability to build impersonal institutions for tax collection, for rule of law, for fair legal rulings, that required a state with significant administrative and fiscal capacity. Liberty is not synonymous with a weak state. It's synonymous with a strong state that has circumscribed powers. We may want a kind of surveillance state, but one that is very circumscribed in how it surveils, that has civil liberties and privacy protections built in.

Beckworth: So, just to summarize, the federal government will shrink down to core competencies, key areas, national defense, maybe some social insurance, Social Security, maybe Medicare. But, in many other areas, many other parts of the administrative state will be outsourced to AI in the private sector. Is that the story we're telling?

Hammond: Yes, I think, more or less. Even Medicare, I think, is due for a transformation. One of the challenges is that healthcare is a good example of an area where there's this incredible thicket of existing laws and process and privacy regulation and so forth, and so it makes it hard to reform. Even something as simple as medical records, you would think, would be low-hanging fruit for AI. Whether there is a discontinuous change versus a gradual evolution really depends on whether these institutions adapt and co-evolve, or are so recalcitrant, so sclerotic, that eventually a much better alternative arises and leads to a switch; that the network effects with our existing healthcare system break down and people move to something that is much better.

Beckworth: Let's talk about your timeline. In your third essay, you provide a timeline of this techno-feudalistic future, and you have several periods. By 2040, you have us all the way there. Do you want to walk us through some of the key moments on this timeline?

Key Moments in the Techno-Feudalistic Future of AI

Hammond: Yes. Part of this timeline is anchored to trends in computation and the build-out of compute infrastructure. It may shock some people to know that NVIDIA has produced more GPUs this year than it has cumulatively through its entire history; that all the aggregate cloud computing infrastructure in the world will probably double or triple within the next three or four years. It's growing at a 30% to 40% compound annual growth rate. The amount of available compute is just enormous. Then you have these scaling laws that come from machine learning that show an almost physical-law-like relationship between the scale of a model in terms of its number of parameters, the amount of data you use to train it, and the amount of compute that you use to crunch that data. When all three of those things scale, models get predictably better. They get predictably better in a way that's measured in terms of how well they are able to emulate the thing that you're trying to teach them.
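As a rough illustration of what such a scaling law looks like, one commonly used functional form relates training loss to model size and data; the coefficients are empirical fits that vary from study to study, so the symbols below are placeholders rather than settled constants.

```latex
% Illustrative scaling law: loss L as a function of model size N (parameters)
% and dataset size D (training tokens). E is the irreducible loss;
% A, B, alpha, and beta are empirically fitted constants.
L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}},
\qquad \text{with training compute } C \approx 6\,N D .
```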

Hammond: Just looking at those basic trends, there's an organization called Epoch AI that does forecasting in this area. They take those trends in software improvements, algorithmic improvements, hardware improvements, and then they combine those with scaling laws and estimates of how much compute we'll have, and then also estimates of how much entropy, how much information, our brain processes, as an upper bound. We should, based on those trends, have systems by the end of this decade, by around 2029, that, in principle, can emulate a human on any task. That's the first major turning point, and this goes to my point that people don't quite see how near this is.

Beckworth: So, by 2029, any task that you and I could do, or any human could do, will be able to be replicated by an AI, at least the mental side of it. I think the physical side comes a little bit later, right? But you're talking about the mental process. Whatever work we do on our laptops when we go to the office, that will be easy for the AI to master and do.

Hammond: Yes, and it's almost a thermodynamic kind of argument; there are information-theoretic reasons. The amount of information our brain processes and the amount of compute we'll have, and the power of these models, given what the scaling laws imply, they'll be able to basically squeeze the entropy out of our human-generated data. What these models are really good at, like ChatGPT, is in-context learning. The model can do things zero-shot, where you just ask it and it does it, but the models are really good at learning a pattern, where you give it a few examples and you say, "Continue this pattern." That's called in-context learning. You can think of in-context learning, for those examples, as sort of like shifting all of the weights in the model towards that task. My mental model for what kinds of AIs we'll have by, say, 2030, is systems that can essentially shadow you, that could watch a recording of your screen and your keyboard inputs, and maybe have a conversation to ask clarifying questions.
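For a concrete sense of what "give it a few examples and say continue the pattern" means in practice, here is a minimal, hypothetical sketch of building a few-shot prompt. The `call_model` function is a placeholder for whichever model API a reader might use, not a real library call, and the extraction task is invented purely for illustration.

```python
# Minimal sketch of in-context (few-shot) learning: the "training" happens
# entirely inside the prompt, by showing the model a pattern to continue.
FEW_SHOT_EXAMPLES = [
    ("The meeting is at 3pm on Friday.", "time=3pm; day=Friday"),
    ("Lunch is scheduled for noon Tuesday.", "time=12pm; day=Tuesday"),
]

def build_prompt(examples, new_input: str) -> str:
    """Concatenate labeled examples, then the new case for the model to complete."""
    lines = ["Extract the time and day from each sentence."]
    for text, label in examples:
        lines.append(f"Input: {text}\nOutput: {label}")
    lines.append(f"Input: {new_input}\nOutput:")
    return "\n\n".join(lines)

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for an LLM API call; returns a canned answer here."""
    return "time=9am; day=Monday"

prompt = build_prompt(FEW_SHOT_EXAMPLES, "The standup moved to 9am Monday.")
print(prompt)
print(call_model(prompt))
```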

Hammond: It will be able to shadow workers, sort of like a trainee, learning the context of what their job is about. It won't just be a dumb autoregressive sort of predictive modeling. These models will have learned certain features, certain general principles about how to work in a variety of different contexts. Then what happens is that Google and Microsoft install this stuff on their employees' computers and will train good project managers and good software engineers just through shadowing. Once you have that picture in your mind, you see how the normal implementation frictions that we are used to with big IT projects don't really apply, because we're not having to build a new system and find ways of integrating it. We could use this AI to emulate the way humans work within existing processes.

Beckworth: Pretty sobering stuff there. Let me come back to how it may hit home for me personally. You also mentioned, within the next few years, not even 2029 or 2030, that most of the online content will be synthetic. So you're saying most of the entertainment we watch, most of the videos, will be, in some form, AI-generated? This seems pretty topical. Right now we have this strike going on with writers and Hollywood. Are their days numbered? Is that what you're telling me?

Hammond: I think they are. I call this the twilight of copyright. There are folks who are trying rapidly to propose ways of compensating people who generate the art and scripts and so forth that go into training these models. I think that's a very short-term solution. Your listeners should look up this company that built a South Park episode generator. They have five other TV series that are totally original, but they use South Park as an example. They built a simulated world where the South Park characters are characters in that simulated world.

Hammond: You prompt the model to generate a script with an outline and so forth. Then it also directs where the camera will be looking, and with a click of a button, you get a 22-minute episode of South Park that's totally original. What this enables, obviously, is massive personalization and customization. The founder of Stability AI, Emad Mostaque, talks about how he's looking forward to recreating Season 8 of Game of Thrones, because everyone hates that season. So, this goes beyond the ability for AI to write telenovelas. It will continue this fragmentation that the internet has already started, where more people are cutting cable.

Hammond: The reason you can't pay actors and writers as much as you used to is because there's just so much more content. A stream is not the same thing as being syndicated in the '90s. More and more people are watching YouTube and watching Twitch streams and so forth. There are even popular Twitch streamers, they're called VTubers, who are basically virtual anime-looking personas where, at this point, you can't be sure if there's a real person there or if they just trained a model.

Beckworth: I can see the definite benefit. You could have a Tom Cruise who never ages. I know he doesn't seem to age in real life, but you could have a prominent movie star who's just completely CGI, that forever plays James Bond, so my generation, my kids' generation, they see the same character out there and it's just completely AI-generated. This also raises some concerns. Maybe we'll come back to this in a minute, but if I can completely cater my entertainment to my preferences, I might get stuck in a loop where all I see is things that I like.

Beckworth: I'm not challenged by other ideas coming from the outside. It further silos our views, further polarization, that's a concern I have. Let me come back to me because I mentioned this a minute ago. How does this affect me? I'll say you, too. We work in a think tank space. This podcast is one of the things I do. I try to interact with people like you, policymakers, talk about ideas and such. I have about five or six papers I want to do that I don't have time to get to. In this future, am I completely displaced by AI, or am I saying, "AI, here are the five papers I want to write. Here's what I want to do in them. Please go take a first stab at it and get back to me with results," and that frees me up to have more conversations with you, Sam. Or is it the first scenario? AI completely takes my job.

Hammond: The trend in every area has been, in the near run, for AI to serve as a co-pilot. For programmers, estimates have put over half of new code submitted to GitHub as AI-generated at this point. That code still requires human supervision because it'll make mistakes. It's more boilerplate. The trend is similar in chess. Chess went through this period where human-AI teams were better than the AI by itself. Now, the AI is just strictly better. You'll never beat it, not even Magnus Carlsen. And so I think we'll see similar things in programming, where eventually you'll be able to describe an app and have it spit out the app, and we're already seeing that with web development. Web development is being completely disrupted. When it comes to research, I think there's a huge opportunity in science. Imagine training a large model on all scientific manuscripts and then asking it to write an original one based on what it's learned. There are other initiatives, other proposals to, for example, go back through the citations in the existing corpus of science and do a search down that tree of every citation and find out if people are citing the work the way that they're supposed to.

Hammond: Are there signs, are there indications of science that is less likely to replicate and more likely to replicate? Can you use all of that knowledge to hypothesis-generate and then automatically test those hypotheses? I think that will happen at a scale that no human can match. There are some things that are more qualitative, but if you're talking about, say, material science or biology or so forth, there are real objective answers and the ability to write 500 papers on room temperature superconductors, let's say, is something that humans won't be able to match. Now, where I think humans still matter is anything that involves our identity, who we are. Now, if I had a perfect emulation of a Macro Musings episode, it wouldn't be the same because it wouldn't be the real David Beckworth. And so I think the same is true with musicians. People like Taylor Swift and they like Justin Bieber because of the identity of who they are, and often, other people are even writing their music already. Personality and identity will be more important, and that sort of [inaudible] line. If you had a perfect replication of the Mona Lisa, it wouldn't be the same as the original Mona Lisa.

Beckworth: In other words, your personal brand will be more important than ever. We talk a lot now about your brand on social media, your brand in the workplace. This is going to be even more of a big deal going forward. Well, we are running low on time, but just to quickly summarize, you go through several periods. You note that in 2036 to 2039, you'll have general purpose robots doing large-scale manufacturing, just completely displacing existing methods. Then by 2040, this is all in place. You have your techno-feudalistic state. And so, all of these are big changes, Sam. How do we as people adapt and handle it? There are just so many dislocations and changes, and for a lot of people, that may be hard. I'll be an old person by this time. It might be hard for me to handle these changes. What are the possible paths that we will have to think about going forward in terms of handling the change?

Handling the Change Brought by AI

Hammond: The first is institutional adaptation, counseling governments to really… take the IRS. The IRS could be building its own tax examiner co-pilot. They have auditor training manuals. They have the Code of Federal Regulations and the U.S. Code. They have a lot of training data that's unique to them. Rather than hiring 80,000 new auditors, maybe they should be hiring 8,000 auditors that are 100X as productive. That will be part of an arms race, because once we have AI tax accountants in our pocket, you and I will be able to shelter our income in very complex partnerships, and so forth, in ways that are hard for a human auditor to interrogate. So, there's that piece of this.

Hammond: Then there's… I think, part of this is letting the private sector adapt. There are limits to self-regulation, but take the way we have, for example, reCAPTCHA; if you want to log into a website, to prove you're human, reCAPTCHA is a service Google developed. It uses your cookies, and it's a bit of a trade secret, but [also] your mouse movements and so forth, to detect if you're human. To the extent that deepfakes become a problem, it'll be those private sector actors who are building the solutions to be able to detect what's human and what's not. Then, [in the] longer run, I think there are just some things that are part of a package deal that are going to be hard to swallow. When it comes to training and so forth, I think AI makes those parts of adaptation potentially easier. If we're much wealthier in the first place, adaptation is also easier. If less educated people are able to get access to high-quality education, to be able to be more articulate and compete on an even playing field with the knowledge class, that's all positive.

Hammond: Going back to your earlier point about cultural fragmentation, I think this is partly… this is why I sort of analogize it to history running in reverse, because our cultures used to be much more fragmented because of distance. And this period of mass culture that we had in the 20th century, I think that's the aberration. And so as we go on the other end of this curve, to the extent that we have deep fakes and fake news and so forth, that's not that different from the 1800s where tabloid newspapers would make up quotes all of the time.

Beckworth: That's the feudalistic part of techno-feudalism: we will be living in our own little world that may be separate from someone else's world, and we'll be content with it, we'll be fine with it, and we'll be able to survive given [all of what] AI can do for us.

Hammond: We'll potentially be able to have difference and particularity and more global forms of coordination simultaneously, because what these systems are really good at is translating between things. If you want to speak Klingon and I want to speak English, but we have an AI translator in between, we're able to enjoy our difference and enjoy our particularity while still being able to communicate.

Beckworth: Well, there are so many more questions I could ask you, Sam. I had a bunch written down. For example, I would love to go down the path, what does this mean for US Treasuries, US [as the] supplier of safe assets?

Hammond: Interest rates.

Beckworth: Interest rates, yes. I can imagine a massive productivity boom. Is this the singularity? But we are running low on time, so we'll have to wrap it up there. Our guest today has been Sam Hammond. His blog is called Second Best, and these essays are titled *AI and Leviathan.* There are three out now; by the time this podcast comes out, there should be four. We'll provide show links to them. Sam, thank you so much for coming back on the show.

Hammond: Thank you, David.

Photo by Marco Bertorello via Getty Images

About Macro Musings

Hosted by Senior Research Fellow David Beckworth, the Macro Musings podcast pulls back the curtain on the important macroeconomic issues of the past, present, and future.