We are AI and technology policy researchers at the Mercatus Center at George Mason University. As part of the Center's mission, Mercatus scholars conduct independent analyses of agency rulemakings and proposals from the perspective of consumers and the public.
Attached is “Artificial Intelligence: An Introduction for Policymakers,” authored by one of us. It responds to several of the FTC's queries about AI uses in cloud computing.
In our view, FTC scrutiny of AI in cloud computing should examine specific AI uses, not AI generally. Under this approach, pro-competition AI policies can be found throughout existing law: in product liability law, in federal and state antitrust law, in common law, and in various other regulatory regimes. A more horizontal approach, one that disregards the potential to apply existing law to AI uses in cloud computing, will, we fear, lead to policy blind spots, technological stagnation, and regulatory dead ends.
We urge that AI technology be unbundled and analyzed by its specific applications, not as a single technology category. For one, AI is a contested, constantly changing concept. Regulating AI qua AI will create endless disputes about definition and scope. Consider, for instance, a 2008 Computerworld story about AI. At that time, AI included Roombas, the Windows Vista operating system, Mars rovers, loan qualification software, and Marriott hotel booking software. Most technologists would not consider these technologies AI today, and they do not appear to be what your inquiry contemplates. If trends persist, it seems doubtful we will consider today’s ChatGPT or facial recognition to be AI in ten years. As John McCarthy, who coined the term AI, once remarked, “As soon as it works, no one calls it AI anymore.”
For another, AI technology does not lend itself to a discrete, new horizontal body of regulation or accountability policies. As the attached study points out, “All policy areas will be touched and even transformed by artificial intelligence.”3 A sectoral or application-based approach is preferable because many specific uses, whether the screening of job applicants’ résumés or network analysis by intelligence agencies, require oversight tailored to the specific circumstances and conforming to existing law. Even challenges common to all AI systems, such as bias, differ in form, criticality, and impact depending on the application. For TSA facial recognition, racial bias is a high-impact, critical challenge; for power-grid load-balancing AI, it is irrelevant. A sectoral approach encourages application-relevant scrutiny and expertise where needed while allowing low-risk AI uses, like robot vacuum cleaners, significant freedom to iterate and improve.
Thank you for the opportunity to comment. We are happy to speak with agency staff as they approach these important AI governance issues.
Matt Mittelsteadt, “Artificial Intelligence: An Introduction for Policymakers,” Mercatus Center at George Mason University, Special Study (2023).