Deepfake Regulation: Chaos That Matters Now More Than Ever

June 5, 2025

Artificial Intelligence (AI) brings remarkable benefits to society, enhancing efficiency, innovation, and convenience. However, these advancements aren’t without significant risks. Bias, misinformation, manipulation, impersonation, scams, opacity, dependency, and dehumanization are just some of the broader challenges posed by AI technologies. Amid these numerous risks of AI systems, one particularly alarming threat stands out: deepfake technology and the growing need for deepfake regulation.

The World Economic Forum (WEF) has identified misinformation and disinformation as the most severe short-term global risks, particularly highlighting the dangers posed by AI-generated content such as deepfakes. In its Global Risks Report 2024, the WEF emphasizes that the proliferation of synthetic media threatens to erode trust in institutions, manipulate public opinion, and destabilize democratic processes.

At the core of this misinformation crisis lies the growing sophistication of deepfake technology. By leveraging powerful AI algorithms, deepfakes can create hyper-realistic but entirely fabricated videos, images, and audio, effectively blurring the line between reality and fiction.

The Rapid Rise of Deepfakes

In just a few short years, deepfake creation has transitioned from being a niche, complex process to an easily accessible one. Today, anyone can create a believable 60-second deepfake video using a single clear image within just 25 minutes, completely free.

Research shows a staggering 550% increase in deepfake content online from 2019 to 2023. Europol predicts that 90% of all digital content will be AI-generated by 2026.

Two primary factors driving the rapid evolution of deepfake technology are:

  • Advancements in Generative Adversarial Networks (GANs), powerful AI systems capable of creating highly realistic synthetic media.
  • The widespread accessibility of user-friendly deepfake tools, which allow users without technical expertise to easily produce and distribute convincing deepfake content.

A Generative Adversarial Network, commonly known as a GAN, is a type of neural network model. Neural networks form the basis of deep learning, a subset of machine learning (ML). A GAN pits two competing neural networks against each other in a continuous learning cycle. These networks are typically called the generator and the discriminator, and they are trained simultaneously in what’s known as an adversarial relationship.

In this adversarial setup, the generator repeatedly attempts to fool the discriminator by producing increasingly convincing synthetic data. Simultaneously, the discriminator continuously improves its ability to distinguish synthetic content from real data.

Here’s a simple metaphor for the roles of the generator and the discriminator in a GAN:

Imagine a counterfeiter and an art detective locked in an endless competition. The generator acts like the counterfeiter, constantly creating realistic forgeries of valuable paintings, each time aiming to deceive the expert eye.

The discriminator, on the other hand, acts like an expert art detective, meticulously examining every artwork to determine whether it’s authentic or fake. With each inspection, this detective becomes increasingly adept at identifying subtle signs of forgery.

Through this ongoing competition, the counterfeiter grows exceptionally skilled at creating convincing replicas, while the detective sharpens their ability to spot fakes. Eventually, the forgeries become so realistic that even the expert detective struggles to differentiate them from authentic masterpieces.
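
For readers who prefer code to metaphors, here is a minimal sketch of that adversarial training loop. It assumes PyTorch and uses a toy one-dimensional data distribution in place of images; real deepfake generators are vastly larger and more complex, but the counterfeiter-versus-detective dynamic is the same.

```python
# Minimal GAN sketch: a generator ("counterfeiter") and a discriminator
# ("art detective") trained against each other on a toy 1-D distribution.
import torch
import torch.nn as nn

latent_dim = 8

# Generator: maps random noise to a synthetic sample.
generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(),
    nn.Linear(32, 1),
)

# Discriminator: outputs the probability that a sample is real.
discriminator = nn.Sequential(
    nn.Linear(1, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 2.0      # "authentic paintings"
    noise = torch.randn(64, latent_dim)
    fake = generator(noise)                     # "forgeries"

    # Train the discriminator to tell real from fake.
    opt_d.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Train the generator to fool the discriminator.
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()
```

Over many iterations the generator’s samples drift toward the real distribution until the discriminator can barely tell them apart, which is exactly the dynamic that makes modern deepfakes so convincing.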

How Fake Media Threatens Individuals and Society

Deepfake technology presents serious risks, affecting both individuals and society at large. Below are just a few compelling examples highlighting the profound and damaging consequences deepfakes can have.

a. Personal Harm

Deepfake pornography constitutes 99% of all deepfake videos, disproportionately targeting women without consent.

Notably, even public figures like Taylor Swift aren’t immune. Her recent experience demonstrates how quickly and broadly deepfake content can spread, highlighting major gaps in social media platforms’ moderation policies.

b. Financial Fraud

A notable incident in Hong Kong involved a finance worker who was duped into transferring $25 million after joining what he believed was a genuine video call with his company’s executives; every other participant was, in fact, a deepfake.

c. Political Manipulation

In a U.S. survey, 77% of voters encountered AI-generated deepfake content, with 36% reporting these encounters had completely altered their voting decisions. Deepfakes frequently evoke strong emotional responses, undermining trust in media and democracy itself.

Why Not Just Ban Deepfakes?

While the risks associated with deepfake technology are significant, it’s crucial to recognize its substantial potential for positive social impact when responsibly used. A notable example is David Beckham’s innovative campaign, “Malaria Must Die,” where deepfake technology enabled him to speak convincingly in nine different languages. This multilingual deepfake helped raise global awareness and encouraged action against malaria, demonstrating the technology’s capacity for impactful public advocacy and outreach.

Deepfake technology holds significant positive potential beyond advocacy campaigns. It can enhance educational training by simulating real-life scenarios safely, improve accessibility by overcoming language barriers, enrich entertainment through realistic portrayal of unavailable or historical figures, and support historical preservation by vividly animating past events and personalities. Deepfakes can serve as a powerful tool for satire and political critique. By mimicking public figures in exaggerated or humorous ways, creators can use deepfakes to highlight hypocrisy, raise awareness about social issues, or challenge power structures.

Given these promising applications, outright prohibition isn’t desirable. Instead, we need thoughtful, comprehensive AI regulation and transparency measures that ensure the ethical and beneficial use of deepfake technology, mitigate potential harm, and enable safe and reliable technical measures for identifying deepfake content.

Technical Solutions and Initiatives

While several technical tools exist to combat deepfakes, many still fall short, particularly when dealing with low-quality social media content or advanced manual edits. Here are the most prominent approaches and their current limitations:

a. Deepfake Detectors

These are AI-based technologies designed to identify whether content has been synthetically generated or manipulated. However, their effectiveness remains questionable:

  • Many open-source detection models are trained on older deepfakes, which are far less advanced than what today’s generators can produce.
  • In the ongoing race between creation and detection, deepfake generators are often technologically ahead of detection tools.
  • Detection tools typically provide a probability score rather than a definitive result (e.g., “75% likely this video is fake”), which raises difficult questions, illustrated in the sketch after this list: What threshold should trigger removal or legal action? Is 70% enough?
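
The sketch below makes that last point concrete. The detect_deepfake callable and the 0.7 threshold are assumptions for illustration only; they do not refer to any real product or legal standard.

```python
# Hypothetical illustration of the thresholding problem: detectors return a
# probability, and someone still has to decide what score justifies action.
def moderate(video_path: str, detect_deepfake, removal_threshold: float = 0.7) -> str:
    score = detect_deepfake(video_path)          # e.g. 0.75 = "75% likely fake"
    if score >= removal_threshold:
        return f"flag '{video_path}' for review/removal (score={score:.2f})"
    # A low score is not proof of authenticity; it only falls below the cut-off.
    return f"no action on '{video_path}' (score={score:.2f})"

# Usage with a stand-in detector that always reports 0.75:
print(moderate("clip.mp4", lambda path: 0.75))
```

Wherever the threshold is set, some genuine content will be flagged and some fakes will slip through, which is why probability scores alone sit uneasily with takedown duties or legal liability.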

b. Watermarks

Watermarking involves embedding a digital signature into AI-generated content to indicate it was synthetically created. The process generally has two steps, illustrated with a toy sketch after this list:

  • Embedding – Training the AI model to automatically insert an invisible or visible watermark in every output it generates.
  • Recognition – Teaching detection systems to recognize and interpret these watermarks, thus signalling the content’s artificial origin.
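
As a toy illustration of both steps, here is a minimal sketch assuming the numpy library. It hides a one-bit mark in the least significant bit of every pixel; real watermarking schemes are far more sophisticated, but the embed-and-recognize idea is the same.

```python
# Toy watermark: embed a mark in pixel least-significant bits, then detect it.
import numpy as np

def embed_watermark(image: np.ndarray, bit: int = 1) -> np.ndarray:
    """Embedding: force the least significant bit of every pixel to `bit`."""
    return (image & ~np.uint8(1)) | np.uint8(bit)

def detect_watermark(image: np.ndarray) -> bool:
    """Recognition: the mark is present only if every LSB still equals 1."""
    return bool(np.all((image & 1) == 1))

synthetic = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in output
marked = embed_watermark(synthetic)
print(detect_watermark(marked))      # True
print(detect_watermark(synthetic))   # almost certainly False
```

Even this toy example hints at the fragility problem: any compression, resizing, or re-encoding would scramble the least significant bits and erase the mark.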

While promising in theory, watermarking faces several real-world challenges. The European Parliament has expressed concerns about the robustness, accuracy, and technical implementation of watermarking approaches.

Numerous private companies are developing their own watermarking systems, creating fragmentation and a lack of industry-wide standardization, which reduces effectiveness.

c. Technical Standards – C2PA

The Coalition for Content Provenance and Authenticity (C2PA) is an industry-led initiative aiming to establish universal technical standards for content verification. Its approach, illustrated in simplified form after the list below, includes:

  • Attaching a cryptographic hash and metadata to each piece of digital content, documenting its origin and any subsequent modifications.
  • Enabling users and platforms to verify the authenticity and history of media, promoting transparency and trust in the digital ecosystem.
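
To make the idea concrete, here is a simplified sketch using only Python’s standard library. It is not an implementation of the actual C2PA specification, which relies on cryptographically signed manifests embedded in the asset itself, but it shows how a content hash plus origin metadata enables later verification.

```python
# Simplified provenance sketch: bind a hash and origin metadata to content,
# then verify that the content has not changed since the manifest was created.
import hashlib
import json

def create_manifest(content: bytes, name: str, origin: str) -> dict:
    """Attach a SHA-256 hash and origin metadata to a piece of content."""
    return {
        "asset": name,
        "origin": origin,
        "sha256": hashlib.sha256(content).hexdigest(),
        "edits": [],  # subsequent modifications would be recorded here
    }

def verify(content: bytes, manifest: dict) -> bool:
    """Any change to the content changes the hash and fails verification."""
    return hashlib.sha256(content).hexdigest() == manifest["sha256"]

video_bytes = b"... raw media bytes ..."   # stand-in for a real media file
manifest = create_manifest(video_bytes, "clip.mp4", "Example Studio, camera capture")
print(json.dumps(manifest, indent=2))
print("unchanged:", verify(video_bytes, manifest))        # True
print("tampered:", verify(video_bytes + b"!", manifest))  # False
```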

While C2PA offers a strong framework, its success will depend on broad adoption across platforms, media organizations, and technology providers.

Deepfake regulation under the GDPR

Deepfakes almost invariably involve the processing of personal data. The inaccuracy of that data does not exempt such processing from the scope of the GDPR. The challenge, however, is that the GDPR was not designed with technologies like deepfakes in mind.

One major issue concerns the processing of sensitive data. Deepfakes can involve sensitive personal data as defined by the GDPR, particularly when they depict identifiable individuals. This includes biometric data such as facial features and voice, which are often used to create realistic simulations. Deepfakes may also suggest or misrepresent a person’s racial or ethnic origin, political or religious beliefs, health status, sexual orientation, or sex life. Even when the content is fabricated, its association with a real person can result in significant privacy concerns and potential harm, thus triggering the need for special legal protections.

Processing sensitive personal data under the GDPR typically requires the explicit and informed consent of the data subject. This means individuals must be clearly informed about how their data will be used and must actively agree to such use. In the context of deepfakes, however, this standard is almost never met, as the data, such as facial images or voice recordings, is usually collected without the individual’s knowledge or permission, often by scraping content from publicly available sources. As a result, the creation and dissemination of deepfakes involving sensitive data frequently violate GDPR requirements, particularly the strict conditions for the lawful processing of special categories of personal data.

Identifying a valid legal basis for processing personal data in the context of deepfakes is particularly challenging. While certain GDPR provisions allow processing in the public interest, such as for journalistic, artistic, or academic purposes, these exceptions are narrowly defined and require careful balancing against the individual’s fundamental right to privacy. Determining whether the public interest genuinely outweighs personal privacy is highly context-dependent and often subjective, with limited legal clarity or precedent to guide such assessments. This legal uncertainty makes compliance difficult and increases the risk of infringing on individuals’ data protection rights.

Under the GDPR, personal data must be collected for specific, explicit, and legitimate purposes, and any further processing must remain compatible with those original purposes. This principle, known as purpose limitation, is fundamental to lawful data processing. In the case of deepfakes, however, the data used, such as images, videos, or voice recordings, is often scraped from publicly available sources online without any clearly defined or lawful purpose at the point of collection. As a result, the subsequent use of that data to generate synthetic media typically violates the purpose limitation principle, since the individuals involved neither consented to nor were informed about this kind of secondary, often manipulative, use of their personal data.

Another complication is the data subject’s right to rectification. In the case of deepfakes, the information is inherently false, meaning individuals would, in theory, always have the right to demand correction or removal.

Ultimately, the core issue stems from the rigidity of the current legal framework. Although the GDPR is technically applicable to the processing of personal data in deepfakes, it was not designed with such rapidly evolving technologies in mind. As a result, applying its provisions to deepfakes can lead to inconsistent, overly broad, or even counterintuitive outcomes.

This disconnect between legal requirements and technological realities highlights the critical need for robust AI governance and data management.

For any business developing or deploying deepfakes, AI Governance Software and Data Management Software are essential to ensure automated and efficient compliance with data protection laws, manage risks associated with personal data use, and establish clear accountability mechanisms. By embedding privacy-by-design principles, maintaining auditable data flows, and enforcing consent and purpose controls, these systems help bridge the gap between innovative AI applications and regulatory obligations.

Deepfake regulation under the Digital Services Act (DSA)

Although the Digital Services Act (DSA) does not explicitly define what a deepfake is, it introduces two key obligations that apply to such content:

  • Transparency Requirement: Content that is synthetically generated or manipulated must be clearly labeled as such. This includes the use of watermarks or similar indicators to ensure users understand the artificial nature of the content.
  • Notice-and-Action Obligation: Platforms are required to act upon receiving notice about illegal or harmful content, including deepfakes. According to Recital 50 of the DSA, this includes taking down or restricting access to such content after proper assessment.

However, these measures are arguably insufficient.

First, the DSA’s scope is limited to intermediary services, such as hosting platforms and online marketplaces. It does not apply to content creation tools capable of generating deepfakes, like generative AI models (e.g., ChatGPT, image and video synthesis apps). These tools fall outside the regulatory reach unless the deepfakes they produce are disseminated via platforms covered by the DSA (e.g., social media).

Second, private messaging services are excluded from the DSA’s obligations. This means that deepfake content shared through encrypted messaging apps like WhatsApp would not fall under the act’s enforcement framework.

In summary, while the DSA introduces important steps toward addressing deepfake content, it leaves significant gaps, especially in relation to content creation tools and private communications. Stronger, more targeted regulation may be needed to fully address the challenges posed by deepfake technology.

Deepfakes and the EU AI Act

The EU AI Act does not prohibit the use of deepfakes. Instead, deepfakes fall under the limited-risk category of AI systems, which carries transparency obligations. Unlike high-risk AI systems, limited-risk systems do not require pre-market approval or strict compliance procedures, but they are subject to specific transparency rules.

Determine the risk for each of your AI systems with our AI Act Checker

For providers of AI systems, including general-purpose AI (see our explanation of AI systems and GPAIs), there is a requirement to ensure that outputs are clearly marked, in a machine-readable format, to indicate that they have been artificially generated or manipulated. In practice, this means that AI-generated content should include a watermark or similar identifier.
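
As a rough illustration of what machine-readable marking could look like in practice, the sketch below assumes the Pillow imaging library and writes a provenance flag into a PNG’s metadata. The field names ("ai_generated", "generator") are illustrative choices, not labels prescribed by the AI Act.

```python
# Minimal sketch: mark an image as AI-generated via a machine-readable PNG text chunk.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

image = Image.new("RGB", (256, 256), (128, 128, 128))  # stand-in for model output

metadata = PngInfo()
metadata.add_text("ai_generated", "true")
metadata.add_text("generator", "example-model-v1")      # hypothetical model name

image.save("output.png", pnginfo=metadata)

# The marker can then be read back by any tool that inspects PNG text chunks.
print(Image.open("output.png").text.get("ai_generated"))  # "true"
```

Metadata of this kind is easy to write and read, but it can also be stripped by re-encoding or screenshots, which is one reason marking alone is unlikely to solve the problem.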

For deployers (those who use the AI systems), there is an obligation to clearly disclose that the content has been synthetically produced or altered. This disclosure must be visible and understandable to the end user.

However, a critical question remains:

Are transparency requirements under the EU AI Act alone truly sufficient?

Consider the case of deepfake pornographic material. Even if such content is labeled as artificial, the emotional, reputational, and psychological harm to the victim remains just as severe. In such cases, transparency does little to mitigate the damage caused.

Also, considering the capability of deepfakes to manipulate elections, commit fraud, or incite violence, is it appropriate to classify them as presenting only a limited risk?

Given the potential of deepfakes to cause significant harm and the complex regulatory landscape, AI governance software is essential for both deployers and providers to ensure responsible development, legal compliance, and risk mitigation throughout the AI lifecycle.

Deepfakes are not just a technical novelty. They are a growing societal threat that challenges our ability to trust what we see and hear. While the EU AI Act marks a step toward responsible AI governance, its current approach, focused largely on transparency, may fall short of addressing the real-world harms of synthetic media. As deepfake technology evolves faster than legislation, the need for stronger, smarter regulation is urgent.

How Can Whisperly Help?

At Whisperly, we are committed to helping organizations detect, label, and manage AI-generated content responsibly. Our goal is to empower platforms, institutions, and individuals to navigate this new era with trust and confidence.
