Artificial intelligence (AI) will have a huge effect on the economy. To ensure that effect is a net positive, policymakers need to understand AI at a deeper level so they can shape a governance regime that maximizes AI's social benefits and mitigates its risks. Created by Mercatus research fellow Matthew Mittelsteadt, this guide serves as a continually updated resource to equip policymakers with the knowledge they need to make smart decisions about a fast-moving and complex issue.
Each section of this guide offers two levels of depth to support both readers who want only a basic understanding and those reading for greater depth. Note that AI is enabled not by one technology, but by a diverse “constellation of technologies.” AI comes in many forms and draws on a range of concepts and devices. To understand and solve diverse AI issues, readers must grasp the breadth of the AI space. Primarily, this work seeks to explain how AI works through illustration. Along the way, it equips readers with key terms, fundamental concepts, and core technologies in a toolbox of knowledge that can be supplemented with application-specific expertise.
While the guide strives for simplicity, some may find the staggering breadth of AI unwieldy. AI’s wide scope is a natural consequence of its general and often ill-defined nature. Recall that, fundamentally, AI is a normative goal. As with any goal, it can be defined in a variety of ways depending on the user and the context. One goal might be to design and wield AI systems to maximize safety, another might be to minimize bias, and a third might prioritize liberalism. Such general goals only grow more specific and varied as systems are designed and applied in application-specific contexts.
The fundamental challenge for policymakers will be recognizing this diversity and understanding that not all AI goals will coexist peacefully, nor will they necessarily match the goals of policymakers. Any regulation or AI-related policy will naturally involve a normative choice. What should AI look like, what should it do, and how should it be used—that is, what goal or set of goals are encouraged or allowed?
Diversity is perhaps the best first step toward meeting this difficult challenge. Only through application- and sector-specific knowledge can the full range of potential AI goals, applications, and issues be understood. Meeting the challenge will therefore require a representative breadth of policymakers who understand AI. This general-purpose technology is also a general-purpose policy issue.
Having peeked under the AI hood, readers should have a technical starting point that can be customized and applied to each given sector and field. Today, AI systems are changing—and perhaps even transforming—many fields. With such potential, it is incumbent on all policymakers to dig in, understand these concepts, and grapple with the diversity of these impactful systems.
At the heart of this field is a murky question: what exactly is AI? Congress offers a legal starting point, but the reality of AI can differ from how the term is defined. Here we discuss how AI can be defined, what AI means conceptually, and the core technologies that make AI possible.
The design of AI law and policy is, and will remain, a complex task given the breadth of this technology. Here we offer a starting point to help policymakers understand the importance of this technology, the challenges it creates, and its relevance to a broad array of policy domains.