
First, it was Taylor Swift. Fabricated images of the pop star flooded social media, viewed millions of times before platforms scrambled to contain them. Then came the voices of CEOs, recreated so convincingly that scammers walked away with six-figure sums. Now, similar scams are becoming more and more common.
Deepfakes, AI-generated forgeries that mimic people’s faces and voices with uncanny precision, are a growing problem. Now, Denmark is striking back.
The government has unveiled legislation that would make it illegal to publish deepfakes depicting real people without their consent — a move that could make it the first country in the world to implement a full ban on unauthorized deepfake content.
A very real threat
“Legislation here is first and foremost about protecting democracy and ensuring that it’s not possible to spread deepfake videos where people say things they’d never dream of saying in reality,” Denmark’s Culture Minister Jakob Engel-Schmidt said during a press announcement on April 24.
According to the minister, current Danish law — particularly copyright protections — lacks the language to address how AI can now replicate a person’s likeness. The same gap exists in almost every country. The proposed legislation would update the law to grant citizens rights over their own body, voice, facial features, and images.
The move is timely, as deepfakes are not a theoretical concern. And, for Denmark, they hit close to home. Last year, Foreign Minister Lars Løkke Rasmussen was targeted by a deepfake video call. The face and voice on the screen belonged, supposedly, to Moussa Faki, chairman of the African Union. In fact, it was an impersonation orchestrated by a pair of Russian pranksters.
So, the threat is very real, and Denmark seems set to take decisive action. A majority of Parliament signed an agreement last June to restrict the use of deepfakes in political messaging. Under the pact, political parties agreed to only use AI-manipulated content when it is clearly labeled — and only with consent from those depicted.
The newly proposed national law would take things even further.
How would such a law even work?
Under the new rules, anyone caught publishing manipulated content like deepfakes — without the subject’s permission — would be ordered to take it down. Tech platforms, including major social media companies, would be legally obligated to remove flagged deepfake content when requested.
“If an individual finds out that someone has made a deepfake video of them without permission, the law will ensure that the tech giants must take it down again,” Engel-Schmidt said.
Importantly, the law includes exemptions. Satire and parody will remain legal, provided the content is clearly labeled as artificial. If there’s disagreement over whether something counts as satire or manipulation, Engel-Schmidt said, “this would be a question for the courts.”
He also rejected concerns that the proposal would infringe on freedom of speech. “This is to protect public discourse,” he said. “Identity theft has always been illegal.”
If passed, the Danish proposal could place the country at the forefront of a new kind of digital rights framework — one that treats a person’s digital likeness as an extension of their identity.
But the big question is whether companies will actually comply with such demands — and how such content would even be identified as a deepfake in the first place.
The global deepfake dilemma
The challenge Denmark faces is mirrored across the world; other countries have simply been slower to react.
In the United States, outrage over non-consensual deepfake pornography — including fake nude images of pop star Taylor Swift — spurred the White House into action. In April 2025, Congress passed the Take It Down Act, which requires platforms to remove intimate AI-generated content within 48 hours. Violators face prison time.
South Korea has gone even further. Since late 2024, it has criminalized the creation, possession, or distribution of sexually explicit deepfakes, punishable by up to seven years in prison. China, meanwhile, introduced rules in 2023 requiring that all synthetic media be clearly labeled and traceable to their creators. Violators can face criminal prosecution.
The European Union’s AI Act, finalized last year, takes a broader but less aggressive stance. It mandates that deepfakes be labeled but stops short of banning their publication. This reflects the EU’s risk-based regulatory approach — synthetic content is seen as “limited risk” unless used for disinformation or fraud.
By comparison, Denmark’s proposal is narrower in scope but stronger in enforcement. It focuses on protecting individuals’ rights, not merely transparency. If passed, it would go beyond requiring labels. It would prohibit publishing someone else’s likeness in manipulated media without their consent — potentially making Denmark the first nation to enshrine such a ban in law.
Will this be enforced?
But even if the law passes, will it work?
Enforcement would effectively rely on an AI arms race: automated systems that can identify deepfake content and confirm authenticity. At the heart of this race lies a machine-learning technique called generative adversarial networks (GANs). These models work in opposition: one generates deepfakes, the other tries to detect them. Each learns from the other. Over time, the generator improves — often outpacing the detector.
“The GAN learning principle is based on an arms race between two AI models… until the detection model can no longer distinguish between reality and fake,” said Morten Mørup, an artificial intelligence researcher at the Technical University of Denmark.
“This is what makes it so difficult for both people and AI models to tell the difference between what is real and fake,” he added.
This cat-and-mouse dynamic makes it incredibly hard for platforms to reliably spot manipulated content. And even when a deepfake is detected, it may have already gone viral by the time it’s addressed.
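The adversarial dynamic described above can be sketched as a toy experiment. This is an illustrative simplification, not a real deepfake system: the “generator” has a single parameter (the mean of the samples it produces), the “detector” is a one-feature logistic classifier, and the real data, learning rates, and step counts are all invented for the demo. Even so, it shows the core GAN loop: the detector learns to separate real from fake, and the generator follows the detector’s gradient until its output drifts toward the real distribution and the two become hard to tell apart.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# "Authentic media", reduced to scalar samples from N(4, 1) for illustration.
REAL_MEAN = 4.0

# Detector: logistic classifier D(x) = sigmoid(w*x + b), trained to output
# high scores for real samples and low scores for fakes.
w, b = 0.0, 0.0
# Generator: produces fakes as mu + noise; its only parameter is mu.
mu = 0.0

lr_d, lr_g, batch = 0.05, 0.05, 64

for step in range(2000):
    real = rng.normal(REAL_MEAN, 1.0, batch)
    fake = mu + rng.normal(0.0, 1.0, batch)

    # Detector step: gradient ascent on log D(real) + log(1 - D(fake)).
    d_real = sigmoid(w * real + b)
    d_fake = sigmoid(w * fake + b)
    w += lr_d * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    b += lr_d * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator step: gradient ascent on log D(fake) — i.e., make the
    # detector believe the fakes are real (non-saturating GAN loss).
    fake = mu + rng.normal(0.0, 1.0, batch)
    d_fake = sigmoid(w * fake + b)
    mu += lr_g * np.mean((1 - d_fake) * w)

# After training, mu has drifted toward REAL_MEAN: the generator's output
# now overlaps the real distribution, and the detector's edge shrinks.
print(f"generator mean after training: {mu:.2f}")
```

The equilibrium is the point Mørup describes: once the fakes match the real distribution, the detector’s gradient signal collapses and it can no longer reliably separate the two — which is exactly why platform-side detection keeps losing ground.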
There are no clear, straightforward solutions. Even if the legislation passes, Mørup warns, the public should simply assume no media is real unless it has been authenticated by a reliable actor: “There will still be people who can generate content without it being declared… We need to practise source criticism and understand that we live in a world of misinformation.”