AI Policy Guide

AI Policy Challenges

From chip development to supply chain security, policymakers must navigate complex challenges to ensure AI has a net benefit for society. AI policy is still in its infancy, and effective governance requires policymakers to level up their understanding of the underlying technology because of the potential breadth and depth of its impact.

This part of the AI Policy Guide covers several areas of active policy concern, including chip and algorithm development, government R&D support, supply chain security, talent, externalities, and bias.

There is little in the way of artificial intelligence (AI) law and policy. Only a handful of federal laws relate to AI, and those that do, such as the National Artificial Intelligence Initiative Act of 2020, cover basic study and coordination rather than explicit regulation. Further, existing laws treat AI in a general sense rather than addressing any application’s specific issues. Executive action on AI is also at an early stage. A 2019 executive order, Maintaining American Leadership in Artificial Intelligence, acts as the guiding document of American AI strategy, focusing on high-level policy including international cooperation, technical standards, economic growth, R&D, and talent. Building on it are a 2020 executive order, Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government, and a 2022 White House policy announcement, Blueprint for an AI Bill of Rights. Both documents work to define broad guiding principles for the use of AI and for AI policy. Beyond these initial policy salvos, however, there are few federal regulations of deeper substance or application-specific nuance.
    
At the state and local levels, policy is varied and often more application specific. In many states there has been some limited movement, though action has largely targeted limited-scope, well-publicized AI applications such as deepfakes, autonomous vehicles, and AI-assisted hiring. There is clearly a desire at the state and local levels to implement regulation and manage some of the negative effects of AI, but action has targeted only issues that have been around for years. The issues that AI creates will not always be so well publicized or so limited in scope.

The design of AI law and policy is and will be a complex task because of the importance and wide reach of this technology. The following sections offer a few questions that policymakers should consider when designing AI policy.

Critical Questions for Policymakers

Policymakers will face many important AI policy decisions in coming years. Policy areas include research and development, manufacturing, critical resources, quality control, externalities, and security and safety. This section discusses each in turn. 

Research, Development, and Manufacturing

Chip development. Historically, the US government has sponsored and supported AI chip development. The recent CHIPS and Science Act illustrates policymakers’ bipartisan support for the semiconductor industry, and the bill follows a long history of public engagement with this sector. While the issue has enjoyed congressional support, the utility of AI industrial policy has been the subject of considerable debate, including the following questions:

  1. Is there certain fundamental AI chip research that might not exist without government support?
  2. Does government support and subsidization risk crowding out certain innovations and alternative designs?
  3. How can policy play a role in ensuring that American industry competes with China’s considerable state-led AI investments?

Algorithm development. Algorithm development and deployment have long been intertwined with public research support and policy. Early neural networks, for instance, were developed with funding from the Office of Naval Research. The Defense Advanced Research Projects Agency’s (DARPA) Grand Challenge, a military-sponsored desert race, sought to incentivize autonomous vehicle progress through a competition and cash prize; some argue that this race supercharged autonomous vehicle breakthroughs. Although this public history of AI algorithm development is perhaps impressive, one should note that all industrial policy involves tradeoffs and risks. Policymakers should consider the following:

  1. How will public investments crowd out private funding or distort research outcomes?
  2. How can one best incentivize development while minimizing market distortions?
  3. How can one ensure continued American AI leadership in algorithm development writ large?
  4. How can algorithms be developed and designed to support democracy, freedom, and fairness?
  5. What types of AI and applications should the public support? Should industrial policy focus on foundational or applied research? For military research, how does one ensure that innovations are designed for dual use? 

Overfitting and underfitting. Overfitting is the problem of fitting a prediction algorithm so tightly to its training data that it underperforms on new data. Underfitting, in turn, is the failure to adequately fit an algorithm to the training data, rendering predictions on new data altogether unreliable. For policy-sensitive applications, developers must be able to demonstrate that their AI models are neither over- nor underfit for the task at hand. At present, there is no easy solution to this challenge; for policymakers, the best approach is vigilance. The following are examples of issues that poor fit could create (a brief illustrative sketch of both failure modes follows the list):

  1. Economic data have a relatively short history. Treasury models therefore run an underfitting risk that could lead to faulty algorithms when trying to predict inflation, employment, and other key metrics. 
  2. Court sentencing algorithms can run the risk of overfitting. If a case used in a model’s training set is sufficiently unique, the model could carve out a prefabricated decision path that is not generalized but instead is tailored specifically to that case. Of course, an entirely different question is whether this sort of model, regardless of fitness, should ever be used by courts.
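
To make these failure modes concrete, the following is a minimal illustrative sketch in Python (using numpy and scikit-learn on a small synthetic dataset, not any agency’s actual model). It fits polynomials of increasing degree to noisy data: the low-degree model underfits, and the very high-degree model overfits, which shows up as a gap between training error and error on held-out data.

```python
# Minimal sketch of underfitting vs. overfitting on synthetic data.
# Illustrative only; assumes numpy and scikit-learn are installed.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 40).reshape(-1, 1)
y = np.sin(2 * np.pi * x).ravel() + rng.normal(0, 0.2, 40)  # noisy signal

x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.5, random_state=0)

for degree in (1, 4, 15):  # underfit, reasonable fit, overfit
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(x_train, y_train)
    train_err = mean_squared_error(y_train, model.predict(x_train))
    test_err = mean_squared_error(y_test, model.predict(x_test))
    print(f"degree {degree:2d}: train MSE {train_err:.3f}, test MSE {test_err:.3f}")
```

A model that performs well only on its training data (low training error, high held-out error) is overfit; one that performs poorly on both is underfit. The same train-versus-held-out comparison, at much larger scale, is one of the basic checks behind the vigilance recommended above.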

Inputs and Resources

Supply chain robustness. AI chips and hardware require a diverse range of materials and components to support processing needs. A robust AI ecosystem requires supply chains that can reliably source and provision the resources needed by the AI economy. Toward these ends, policymakers should consider the following:

  1. How can the United States open trade with new markets to ensure access to these goods?
  2. How can the United States liberalize global trade to ensure an efficient and balanced supply chain?
  3. Can domestic resources help supply the needed materials? How can the United States balance the benefits of domestic resource extraction with environmental costs?
  4. How can the United States counter China’s dominance of the market for the rare earth metals these supply chains require? 

Talent and immigration. AI development requires a range of highly technical and highly specialized skills. Supporting manufacturing, research, design, and deployment will require a deep pool of skilled, and often expensive, labor. Education, grants, apprenticeships, and immigration can help fill this gap. Policymakers should consider the following:

  1. What education policies can incentivize specialization in AI-related fields? What are their tradeoffs?
  2. Can corporations be incentivized to provide training and apprenticeship programs to reduce educational burdens?
  3. How can immigration policy be reformed to attract and retain global talent?
  4. How can AI education balance technical skills with a need for free and creative thinking?
  5. How can nontechnical fields be upskilled with AI knowledge to prepare those fields for the potential effect of AI? 

Data resources and privacy. Policymakers should understand the scale of data used in AI, because many AI policy concerns revolve around big data. The scale of these data raises several important policy questions and challenges, including the following: 

  1. How can the United States ensure that governments and companies adequately protect the vast and sensitive data used to create their AI systems?
  2. How can the United States mitigate concerns that it will lose an “AI race” to China because authoritarian tools allow for more extensive and detailed data collection? 
  3. There is concern that new market entrants with limited data stores and scraping capabilities cannot compete against the vast stores of user data amassed by big tech firms. How can the United States ensure a level playing field and a competitive market? 

Data standards and interoperability. Data standards can affect the nature and usability of data. Healthcare AI, for instance, has been slow to develop because of highly siloed data, disparate technology practices, and record-keeping differences across systems. The keys to this problem are interoperability and standardization: if systems can easily communicate and share data, and if data are standardized and easy to use, the development of AI systems could be greatly aided. Toward these ends, policymakers should consider the following:

  1. How should the government design and format data standards to best serve AI? What information should the data capture? How do these decisions affect the ability to share data, develop AI systems, and promote innovation? Conversely, how might standardization hinder innovation? 
  2. How can the government reduce data balkanization to ensure that AI has the tools it needs to grow? How might this be balanced with privacy and security concerns?
 
Quality Control

Explainability. Because AI systems focus on prediction rather than explanation, the reasoning behind their actions can be opaque. Law and policy often require clear reasoning and decision-making. This requirement can raise questions and concerns such as the following: 

  1. Should the government risk using autonomous weapons if we do not understand how they select, and possibly kill, targets?
  2. Should the government use AI sentencing algorithms if we do not know if their final decisions are affected by racial biases?
  3. How does the government know that an AI’s decision-making process has not been compromised by a malicious actor?
  4. How does the government know if autonomous vehicles are safe?
  5. How does the government know that statistical AI models are producing high-quality predictions and results? 

Bias and auditing. Biased data and the biases embedded in AI algorithms can lead to incorrect or harmful results. AI-powered pulse oximeters, for example, have been found to be significantly less accurate for dark-skinned patients, a flaw that can cause real harm. In another case, Amazon found unintentional bias embedded in its hiring algorithm, which favored male applicants over female ones. One path forward would be AI audits that assess algorithmic weaknesses, security, and bias. Policymakers should consider the following questions (a minimal sketch of one such audit check follows the list):

  1. What algorithmic design best practices and industry standards can help spot and mitigate bias?
  2. What data-sourcing, cleaning, and processing standards can help minimize bias and ensure robust algorithms? What tradeoffs, unintended consequences, or concerns could such standards create?
  3. Is there an acceptable level of bias? What biases are unacceptable? How does the law deal with AI bias?
  4. Can intentional bias be used to mitigate negative biases? What risks or unintended consequences could this pose?
  5. Should AI audits be required? If so, when and what processes should they include to ensure strong results? Further, would requiring audits place an undue burden on innovation?
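
As one concrete illustration of what an audit might check, the following is a minimal sketch in Python using pandas. The toy data, group labels, and the "selected" column are made up for illustration; the check compares selection rates across groups in the style of the four-fifths rule used in US employment-discrimination screening. A real audit would examine many more metrics, plus the training data and deployment context.

```python
# Minimal sketch of one bias-audit check: comparing selection rates by group.
# The toy data and column names are hypothetical, for illustration only.
import pandas as pd

# Imagine an audit log of a hiring model's past decisions
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"],
    "selected": [1,   1,   1,   0,   1,   1,   0,   0,   0,   1],
})

# Selection rate for each group
rates = decisions.groupby("group")["selected"].mean()
print(rates)

# Four-fifths-style screen: flag if the lowest group's selection rate is
# below 80 percent of the highest group's rate.
ratio = rates.min() / rates.max()
print(f"selection-rate ratio: {ratio:.2f}",
      "-> flag for review" if ratio < 0.8 else "-> within screen")
```

Statistical screens like this catch only some forms of bias; they cannot, by themselves, establish that a system is fair or explain why a disparity exists.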

Externalities

Energy use, emissions, and environmental impact. Supporting AI requires significant energy use. Chip fabrication requires extensive energy resources, as does the compute-intensive training process. Energy requirements expand as AI algorithms and market demand grow. As a result, intensive computing can leave a high carbon footprint. Cloud computing centers also constrain local energy supplies, potentially increasing local energy prices to support often nonlocal demand. Finally, fabrication produces wastewater and toxic byproducts, while cloud computing centers burn through difficult-to-recycle semiconductors. Policymakers should consider the following:

  1. How can the government and private actors balance the energy use and emissions costs of AI systems against the benefits of AI innovation? 
  2. Can AI system innovation in energy management and climate research be used to help reduce costs and fight climate change?
  3. Is there a Coasian approach to manage AI externalities? Or is it just a matter of minimizing the regulatory burden of controlling emissions and other externalities of data centers?
  4. What waste and recycling standards and policies can ensure that waste is properly managed?

Labor disruptions. The advances in automation that flow from AI may disrupt the workforce and displace certain professions. For instance, in the United States, there are more than 3 million truckers, a generally low-education profession that could be eliminated by driverless vehicles. Other industries may feel similar strains. Although there is no guarantee that AI will lead to fewer jobs, some people will likely have to find new employment. As such, policymakers should consider the following:

  1. How can education policy be used to upskill or reskill displaced workers? 
  2. How can policy ease workforce transitions and ensure that older workers are not left behind?
  3. How can agencies update or remove regulations that might entrench certain labor classes despite AI automation improvements?
 
Security and Safety

Cybersecurity. The introduction of AI naturally transforms the cyber-threat landscape. AI brings new threats: the massive depth and width of modern neural nets can make it difficult to spot vulnerabilities or bad actors, and data act as a new attack surface. Data poisoning attacks seek to inject vulnerabilities into a system through bad training data, and manipulated inputs can cause a trained system to malfunction (a brief sketch of a poisoning attack follows the list below). AI will also be a tool on both sides of cybersecurity: it can be used to hunt and exploit vulnerabilities without human involvement, and, conversely, it can be used to detect intrusions and stop bad actors. Policymakers should consider the following:

  1. What processes can be used to detect vulnerabilities not only in algorithms, but also in the data and processors that drive these systems? 
  2. What standards and best practices can be promoted to the private sector to mitigate and minimize AI cyber risks? 
  3. How can the government detect and alert the public to systemic AI cyberattacks and risks? 
  4. How can the government encourage effective prosocial cybersecurity research and hacking? 
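
To illustrate why training data are themselves an attack surface, the following is a minimal sketch in Python with scikit-learn on synthetic data (purely illustrative, not a model of any real attack). It simulates a label-flipping style of poisoning: corrupting a share of one class’s training labels shifts the learned decision boundary and degrades the model’s accuracy on clean test data.

```python
# Minimal sketch of a label-flipping poisoning attack on synthetic data.
# Illustrative only; real attacks and defenses are far more sophisticated.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline model trained on clean labels
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("clean accuracy:   ", accuracy_score(y_test, clean_model.predict(X_test)))

# Poison the training set: flip 40 percent of one class's labels
rng = np.random.default_rng(0)
poisoned = y_train.copy()
class0 = np.where(poisoned == 0)[0]
flip = rng.choice(class0, size=int(0.4 * len(class0)), replace=False)
poisoned[flip] = 1

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)
print("poisoned accuracy:", accuracy_score(y_test, poisoned_model.predict(X_test)))
```

Detecting this kind of manipulation after the fact is difficult, which is one reason data provenance, integrity checks, and access controls on training pipelines matter.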

Supply chain security. The supply chain that supports semiconductors is long, complex, and brittle. Chips are often manufactured abroad, leaving them vulnerable to foreign influence. This creates novel threats to American systems. Policymakers should consider the following: 

  1. How can the government or private actors gather intelligence about supply chain–based vulnerabilities and threats?
  2. How can the government or private actors detect compromised or counterfeit chips?
  3. How does the government hedge against security threats to its supply chain, such as China’s threat to Taiwan—its primary semiconductor trading partner?
  4. How does the government or private actors balance the need for plentiful resources with the need to minimize the influence of bad actors?
  5. How can the government collaboratively work with its allies to ensure access to safe components? 

Lethal autonomous weapons systems. AI algorithms have made robotic weaponry that can select and engage targets without humans in the loop a reality. This is no longer science fiction; such systems are already in use on the battlefield. Policymakers must actively engage with the many now-practical ethical and legal implications of these systems. Questions that policymakers must answer include the following:

  1. How do autonomous weapons conform to international law and the laws of war?
  2. How might arms control law apply to autonomous weapons, and how might the government technically verify a potential arms control agreement?
  3. What role do humans play in controlling or mitigating the potential harms of autonomous weapons?
  4. How can autonomous weapons justify their actions or explain life-or-death decisions?

Incomplete and Ever-Evolving List

This is not a comprehensive list of policy questions about current AI systems and applications. What is more, as AI develops further, it will raise new and unanswered questions. As you, the readers, discover these challenges, I cordially invite you to email me to share your thoughts and help me update this agenda for inquiry.

About the Author

Matthew Mittelsteadt is a technologist and research fellow at the Mercatus Center whose work focuses on artificial intelligence policy. Prior to joining Mercatus, Matthew was a fellow at the Institute of Security, Policy, and Law where he researched AI judicial policy and AI arms control verification mechanisms. Matthew holds an MS in Cybersecurity from New York University, an MPA from Syracuse University, and a BA in both Economics and Russian Studies from St. Olaf College.

Read Matt's Substack on AI policy