Comments Urging a Sectoral Approach to AI Accountability

National Telecommunications and Information Administration

Agency: Department of Commerce

Comment Period Opens: April 7, 2023

Comment Period Closes: June 12, 2023

Comment Submitted: June 12, 2023

Docket No. 230407-0093

RIN: 0660-XC057

We are AI and technology policy researchers at the Mercatus Center at George Mason University. As part of its mission, Mercatus Center scholars conduct independent analyses to assess agency rulemakings and proposals from the perspective of consumers and the public.

Attached is “Artificial Intelligence: An Introduction for Policymakers,” authored by one of us, which is responsive to several NTIA queries, including:

“What role should government policy have, if any, in the AI accountability ecosystem? For example: a. Should AI accountability policies and/or regulation be sectoral or horizontal, or some combination of the two?” 

This is an important question, and one that we believe should be answered before NTIA and other agencies develop accountability mechanisms for AI technologies. In our view, accountability policies should be sectoral, not horizontal. Under a sectoral approach, AI accountability policies already exist everywhere: in product liability law, in employment law, in the common law, and in various other regulatory regimes. A horizontal approach, we fear, will lead to policy blind spots, technological stagnation, and regulatory dead ends.

We urge that AI technology be unbundled and analyzed by its specific applications, not treated as a single technology category. For one, AI is a contested, constantly changing concept. Regulating AI qua AI will create endless disputes about definition and scope. Consider, for instance, a 2008 Computerworld story about AI. At that time, AI included Roombas, the Vista operating system, Mars rovers, loan qualification software, and Marriott hotel booking software. Most technologists would not consider these AI today, and they do not seem to be the technologies contemplated by your inquiry. If trends persist, it seems doubtful that we will still consider today’s ChatGPT or facial recognition systems to be AI in ten years. As John McCarthy, who coined the term AI, once remarked, “As soon as it works, no one calls it AI anymore.”

For another, AI technology does not lend itself to a discrete, new horizontal body of regulation or accountability policies. As the attached study points out, “All policy areas will be touched and even transformed by artificial intelligence.” A sectoral or application-based approach is preferable because many specific uses, whether the screening of job applicants’ résumés or network analysis by intelligence agencies, require oversight tailored to the specific circumstances and conforming to existing law. Even challenges common to all AI systems, such as bias, differ in form, criticality, and impact depending on the application. For TSA facial recognition, racial bias is a high-impact, critical challenge; for power grid load-balancing AI, it is irrelevant. A sectoral approach encourages application-relevant scrutiny and expertise where needed and allows low-risk AI uses, like robot vacuum cleaners, significant freedom to iterate and improve.

Thank you for the opportunity to comment. We are happy to speak with agency staff as they approach these important AI governance issues.


Matt Mittelsteadt, “Artificial Intelligence: An Introduction for Policymakers,” Mercatus Center at George Mason University, Special Study (2023).
