An AI Paradigm Shift
Artificial intelligence, and the policy response to it, is at an inflection point. With hundreds of competing definitions of AI and a sudden explosion in AI progress, policymakers now more than ever must acquire a deeper understanding of the technology to navigate its impact on society.
This part of the AI Policy Guide discusses the paradigm shift in AI and the need for policymakers across federal agencies to have a deeper understanding of the technology.
Over the past decade, real-world artificial intelligence has evoked electronic assistants—such as Alexa from Amazon or Siri from Apple—in the public’s mind. But today, society is at an inflection point. In 2021, Stanford University’s Human-Centered Artificial Intelligence Institute wrote about an AI “paradigm shift.” Its report identified the rise of what it termed foundation models: large-scale systems trained on broad sets of data that can be easily adapted to a wide range of downstream tasks. Armed with computational heft and flexibility, this new class of models could offer many of the tools needed for AI to step beyond mere curiosity. Even if one reserves a measure of skepticism toward AI hype, many applications that debuted in 2022 promise new possibilities across several domains:
- Midjourney’s art generator produced near-human-quality works.
- AlphaFold predicted the structures of nearly every known protein, providing a critical new tool for biological and medical research.
- AlphaTensor discovered matrix multiplication algorithms more efficient than any previously known, a result that could soon speed up a wide range of computing applications.
- MutCompute discovered an enzyme that breaks down polyethylene terephthalate, a common plastic that represents 12 percent of global waste.
- AlphaCode ranked within the top 54 percent of participants in competitive programming contests with its self-generated algorithms.
- Codex translates natural language into code, potentially opening engineering to a wider audience.
- OpenAI’s ChatGPT produces medium-length, logically complete responses to complex text prompts.
The breadth of these applications is worth noting. AI is being applied everywhere from the arts and linguistics to chemistry and pure mathematics. It is flexible. The tools that make AI possible represent a new class of general-purpose technologies, innovations that “[have] the potential to affect the entire economic system.” Just as previous general-purpose technologies such as electricity transformed society, AI systems are changing many domains—from science to entertainment, from education to health, from national defense to the financial system—and could even radically transform them.
Critics claim that these advances in AI are skin deep: the systems are mere “stochastic parrots” that rearrange and regurgitate their training data. They may look effective, critics argue, but they lack any true understanding, common sense, or ability to explain their decisions. The critics may well be correct that artificial intelligence lacks intelligence, but they would err dramatically in dismissing AI outright as unimportant.
Yet policymakers are not keeping up with these developments. Knowledge is necessary but not sufficient for good governance. Even if lawmakers were to grasp basic notions of AI engineering and acquire a sense of the depth and breadth of AI’s effects, it is an open question whether they could translate that knowledge into a consensus on AI governance. After all, Silicon Valley, a collective of entrepreneurs and innovators who better understand the mechanics and effects of AI, has not arrived at a consensus on AI governance either. In this work, we hope to impart some basic knowledge of AI design, application, and policy challenges to inform policy-minded readers. We do this without naivete: we are deeply aware that the politics of policy design may prove more complex than even the most sophisticated AI systems.
The Tip of the AI Policy Iceberg
What do we lose without a diversity of experts engaging with AI in depth?
“However brilliant computer engineers may be when facing down technological challenges, they rarely have real insight into what’s happening outside the digital bubble.”
In summer 2022, AI art generation seemed to appear out of nowhere. With the release of DALL·E mini, an open-source approximation of OpenAI’s DALL·E 2 art generator, AI art was suddenly accessible to everyone. Delighted by the often strange yet sometimes human-quality works, consumers flocked to the application and flooded social media with bizarre AI creations. Powerful enough to wow yet amusingly inaccurate, DALL·E mini gave many people a glimpse of the possibilities of generated art, while comforting others with the sense that generative AI remained out of immediate reach. Yet in just a matter of weeks, things changed. As OpenAI broadened access to the full version of DALL·E 2, Midjourney’s beta app generated covers for The Economist, and Stability AI released the powerful Stable Diffusion, these wonky generators suddenly proved capable. Often their outputs were professional quality and, in one instance, even “skilled” enough to win a state art competition. The progress of this technology moved at an astounding pace.
This sudden burst of innovation likely caught those working in arts policy off guard. In a matter of weeks, policymakers had to shift gears toward confronting a slew of novel AI-based issues that they perhaps wouldn’t have considered just months earlier. One such controversy is artistic rights. Engineers had built these systems by “training” the AI to produce art based on preexisting human-crafted works scraped from the web, often without the artists’ consent. As a result, prominent digital artists found that this software could produce near-perfect renditions of their works, allowing anyone with the necessary know-how and computational capacity to appropriate their signature styles. This situation raised questions of usage rights, privacy, personal autonomy, and copyright infringement.
Many affected artists view this situation as potentially existential. Yet to those working at the top levels of AI policy, it remains off the radar. When interviewed about the effects of AI art generators, one member of the National Artificial Intelligence Research Resource Task Force, the nation’s top AI policy advisory panel, had not even heard of the issue. If a member of that panel was unaware of the controversy, it seems unlikely that the broader task force ever discussed it.
The reason? AI has been treated as a specialty. Because the task force is composed almost exclusively of computer scientists, it is hardly surprising that it was not thinking about artistic rights. Had AI been viewed as a technology with general-purpose effects, perhaps those in the arts would have been engaged and their voices heard in the design of solutions. Breadth of expertise, however, must not come at the expense of technical depth. Only by understanding how art generators work—how data are scraped and used to train AI, and what type of data is needed—could those concerned about artistic rights have predicted this issue and begun to consider appropriate action. Many of these art generators have now been open sourced, meaning their code is no longer controlled by a single entity, and affected artists may have little recourse. Appropriate policy would have required engaging with the specific art-generator application. That specificity is currently missing from AI policy design.
AI Touches All Federal Departments
The Importance of Deeper Understanding
The sudden explosion in AI progress demands a new class of policymakers who not only understand AI, but also understand it in depth.
The National Security Commission on Artificial Intelligence recently wrote that “AI … promise[s] to be the most powerful tools in generations for expanding knowledge, increasing prosperity, and enriching the human experience.” All policy areas will be touched and even transformed by artificial intelligence. The sudden explosion in AI progress demands a new class of policymakers who not only understand AI, but also understand it in depth. Just as all policy experts need a working knowledge of economics, all will need a working understanding of AI.
Traditionally, those who have engaged with AI outside computer science have done so only at a high level, a so-called Level 1 understanding. They can engage with the concept, and perhaps entertain abstract effects, but they cannot dig into problems or imagine specific solutions. AI is maturing, and policymakers should go deeper. The goal is a Level 2 understanding, in which policymakers grasp conceptually how AI works and the array of core concepts and technologies on which it is built. Although they might not be able to code an AI chatbot, they know how one functions. Although they have not studied electrical engineering, they understand the AI chip stack.
With a Level 2 understanding, this new class of policymakers can meet engineers halfway. More specifically, they will have the confidence to ask the right questions; the ability to understand engineers’ explanations; and, crucially, the capability to question technical experts. This level of understanding brings AI down to earth, allowing policymakers to see the breadth of AI’s effect and the many technical tools on which it is built.