In this General Body Meeting, we want to highlight one of AISI's research cohort teams: the AI Policy Across Borders group, which tackled deepfake policy over the summer.

Why deepfakes matter: Unchecked proliferation of deepfake technology will erode trust in digital evidence and could lead to a state of universal plausible deniability, in which the veracity of any digital media can be reasonably questioned. Meanwhile, the present-day risks, such as political misinformation, non-consensual intimate imagery, and financial fraud, are inherently cross-border.

Current policy - analysis across three jurisdictions: Deepfake regulation remains remarkably underdeveloped; in most countries it is either non-existent or extremely narrow in scope. Given the global nature of the threats posed by deepfake misuse, establishing a common policy framework is crucial. We propose solutions informed by a comparative law analysis of US, Chinese, and EU deepfake regulations, conducted through a three-tier policy framework. Each leading region's approach reflects its own governing philosophy: Chinese policy focuses on ensuring state control and fostering socialist values, EU policies are detailed with the intent of protecting individual rights, and the US has targeted policies primarily at specific threats, with a strong interest in avoiding overregulation.

Three proposed solutions:

  1. Copyright reform to include facial/voice likeness - Extend copyright and IP law to give individuals legal ownership of their facial and voice likeness.
  2. Accountability chains - Policies ensuring that every actor in the chain of creation and distribution can be held liable for their role in a harmful deepfake.
  3. Accelerated detection research support - Since malicious actors are unlikely to self-identify synthetic content, we propose incentivizing research into detection methods.

Come to our next meeting to learn more.