- | Technology and Innovation
- | Research Papers
The Deepfake Challenge: Targeted AI Policy Solutions for States
Optimal AI regulations hold bad actors liable for spreading malicious AI content while avoiding unenforceable mandates that stifle innovation.
Artificial intelligence can now produce convincing duplicates of a person's appearance, voice, and other aspects of their likeness. As with other issues in generative AI, these deepfakes present society with new challenges. In "The Deepfake Challenge: Targeted AI Policy Solutions for States," Dean W. Ball argues that crafting the optimal laws to deal with them will involve incremental rather than total progress.
Gaps in Existing Legal Frameworks
Some states already protect citizens from deepfakes through preexisting common and statutory law. Many other state laws, however, make it difficult for victims of deepfake-enabled abuse to seek legal redress. While this situation could best be addressed with a federal law, the uncertainty and slowness of the congressional process leave state lawmakers with a legitimate need to act now.
In successfully crafting AI-related laws, state governments must avoid mandates that are impossible to achieve technologically. Such mandates create the illusion of safety but not the reality. Instead, policymakers must approach the issue with realistic expectations about what legislation can plausibly achieve.
An Inferior Approach to Deepfake Laws
Ex ante regulation imposes requirements on generative AI developers, social media platforms, and related firms to prevent the dissemination of deceptive AI-generated content.
- These laws are challenging to craft and execute effectively. They seek to stop the dissemination of deceptive AI-generated content, a laudable goal whose feasibility is currently unclear.
- States risk creating onerous regulatory burdens by requiring generative AI companies, websites and apps, and other firms to comply with ex ante standards.
A Better Way Forward
Post hoc laws create civil or criminal liability for users who disseminate certain kinds of AI-generated content (nonconsensual sexually explicit material, deepfakes of politicians running for elective office, and so on). Virtually all deepfake-related legislation passed by state governments to date has been post hoc law.
- Such laws are easier to enforce than ex ante laws because they operate by remedying a demonstrated harm rather than seeking to prevent that harm in the first place.
- Post hoc laws pose a far lower risk of creating mandates that either freeze technology in place or otherwise impede innovation.
Key takeaway: No law is likely to "solve" the problem of deceptive and malicious AI-generated content, and trying to eliminate such content altogether is likely to create more problems than it solves. Yet the right laws can make meaningful progress, which is all that can be expected when grappling with novel sociotechnical problems.