Artificial Intelligence and Public Policy

Artificial intelligence, commonly known as AI, has become a prominent topic of policy debate in recent years. With this surge of interest, some observers worry about AI’s potential harms and want to restrict its growth to keep those feared effects from becoming reality. Questions about the legal and regulatory governance of AI, machine learning, “autonomous systems,” and robotic and data technologies will only multiply as the technology advances.

Adam Thierer, Andrea Castillo O’Sullivan, and Raymond Russell examine the consequences of attempting to clamp down on this technology’s progress. They argue that policymakers should approach AI from a standpoint grounded in permissionless innovation, which generally permits experimentation with new technologies and business models as the default.


Policymakers may initially be tempted to restrict AI technologies preemptively, out of an abundance of caution, because of the safety, welfare, and market risks these innovations appear to pose.

Many people’s understanding of artificial intelligence is unfortunately shaped largely by dystopian science fiction in film, television, and books. When one’s conception of a technology comes chiefly from Hollywood depictions of killer robots, it is worth remembering that those representations are fiction, not forecasts.

Concerns about the “rise of the robots” and the impact of automation and AI adoption on the present-day workforce have been growing, but these fears are older than many realize. A LIFE magazine cover from July 1963 grimly warned that automation was pushing society toward a point of no return. Technological progress has only accelerated since then, and such pessimism has recurred ever more frequently. Pessimists will continue to worry about the near-term inequality generated by the disproportionate displacement of certain workers.

  • Because AI adoption most readily replaces repetitive, low-skilled labor, jobs that tend to pay less and require lower levels of education face the highest risk of automation.
  • While every technological revolution has rendered certain industries obsolete, history has been punctuated by periods of creative destruction: when one profession becomes unneeded, new job opportunities emerge, and society improves as a result.


Comparing the advent of AI technologies to the development of the commercial Internet in the 1990s shows how policymakers can champion pro-growth policies while still maintaining an appropriate level of oversight and accountability for consumers.

There is no need to quash a potentially revolutionary industry before it even gets the opportunity to develop. Key concepts discussed in the paper include the following:

  • An alternative regulatory path for AI technology can be founded on the principle of patience and permissionless innovation. 
  • Policymakers need to develop a stronger understanding of the versatility of artificial intelligence across numerous industries.
  • Policymakers need to understand how overly prohibitive regulatory proposals would undermine a promising industry while it is still in its infancy.
  • There are natural limits to our ability to forecast the future of AI, and the harms people fear may never actually materialize.


There are many challenges to overcome, but an incredible array of promising applications, economic opportunities, and quality-of-life improvements demands that policymakers get their priorities right. The benefits of AI technologies are simply too great to allow them to be extinguished by poorly considered policy.