Disinformation is not new, but given how disinformation campaigns are constructed, almost every stage of those campaigns stands to be made more effective by generative artificial intelligence (AI). Because the tools currently available to address this budding reality are inadequate, disinformation, especially during elections, is set to get much, much worse.

As these campaigns become more sophisticated and manipulative, the foreseeable consequence will be a further erosion of trust in institutions and an accelerating breakdown of civic integrity, which in turn will jeopardize a host of human rights, including electoral rights and the right to freedom of thought.

In this policy brief, David Evan Harris and Aaron Shull argue that policy makers must hold AI companies liable for reasonably foreseeable harms caused or facilitated by their products, act quickly to ban the use of AI to impersonate real persons or organizations, and require the use of watermarking or other provenance tools so that people can distinguish AI-generated content from authentic content.

About the Authors

David Evan Harris is a CIGI senior fellow, Chancellor’s Public Scholar at UC Berkeley and a faculty member at the Haas School of Business.

Aaron Shull is the managing director and general counsel at CIGI. He is a senior legal executive and is recognized as a leading expert on complex issues at the intersection of public policy, emerging technology, cybersecurity, privacy and data protection.