The Collien Fernandes Case and the Rise of Deepfake Abuse

In March 2026, German TV host and actress Collien Fernandes accused her former husband, actor and producer Christian Ulmen, of impersonating her online for years and sharing sexually explicit deepfake material that looked like her. Fernandes said fake social media accounts in her name were used to contact men, send sexual content, and build false relationships.

The two were a well-known media couple, married for about 15 years before divorcing in late 2025. That history is part of what made the allegations feel so explosive: Fernandes was not pointing at an anonymous troll, but at someone from the center of her life.

The case is still in preliminary proceedings in Spain, so these remain allegations. Ulmen has not commented publicly, and his lawyer said he would take legal action against what he called inadmissible suspicions and false factual claims.

What made the story especially disturbing was the scale described by Fernandes. El País reported that her Spanish complaint alleges at least 30 men were drawn into intense online relationships and sexual calls with a voice that allegedly imitated hers, while “hundreds” received nude photos or videos presented as if they had come from her.

Fernandes said she chose Spain because she believed the country offered stronger protection against digital violence. Spain has specialized courts for gender-based violence, and since 2025, that scope has included forms of digital violence such as cyberstalking and the non-consensual sharing of private images.

In public comments, Fernandes argued that Germany had become too friendly to offenders in this area. That claim landed hard because many German lawyers and journalists were already pointing to a legal gap around deepfake pornography.

Why this deepfake abuse case matters beyond celebrity news

The Fernandes case hit a nerve because it turned a vague AI fear into a realistic scenario. Here was a public figure saying her face, name, voice, and sexual identity had been turned into a long-running digital puppet show. You do not need a Hollywood budget for that anymore. A photo, a few clips, some patience, and the right tools can do a lot of damage.

The story also showed why deepfake abuse is hard to stop once it starts. A fake account can reach many people before the target even knows it exists. The victim then has to prove that the image, video, or voice is fake.

That is a rough task when people are often bad at spotting manipulated media. A European Parliament briefing said human detection of synthetic and authentic media is often close to chance level, around 50%. In plain English: half the room may believe the fake.

Public anger in Germany moved fast. Thousands gathered at Berlin’s Brandenburg Gate after the allegations became public, and Justice Minister Stefanie Hubig said her ministry was drafting a bill to criminalize both the making and distribution of pornographic deepfakes. The proposal would also make it easier for victims to identify anonymous account holders, seek damages, and get accounts blocked. One disturbing case quickly became a pressure test for deepfake law.

The deepfake pornography market is already large

If deepfake abuse still sounds niche, the numbers say otherwise. By 2025, estimates suggested that around 98% of deepfakes online were pornographic. The volume of deepfakes shared online was also expected to jump from about 500,000 in 2023 to 8 million in 2025.

The tools behind this content are not hard to find. Research from the Institute for Strategic Dialogue found 31 active synthetic intimate image abuse tools that were easily discoverable online, and those 31 sites drew almost 21 million monthly visits in May 2025. The same study found that major search engines surfaced such tools in top results for search terms like “deepnude,” “nudify,” and “undress app.”

The gender pattern is also clear. The UK government said one dedicated website hosted 276,149 sexual deepfakes in 2023, with 96% depicting women. It also said nudification services drew 24 million visits in September 2023 alone. In other words, this is not a side effect of AI progress. It is a business model, and women are still carrying most of the risk.

Why non-consensual AI images cause real harm

There is still a lazy idea floating around that a deepfake is somehow less harmful because the body in the image is fake. That logic falls apart quickly. The target is real. The fear is real. The damage to trust, work, relationships, and mental health is real. UK policing research has found that the psychological impact of sexual deepfakes can mirror the impact of sexual assault.

That study had another uncomfortable finding: around a quarter of respondents either agreed with, or felt neutral about, the legal or moral acceptability of viewing, sharing, creating, or selling sexual deepfakes without consent. Around 21% admitted to viewing a sexual deepfake of someone they did not know, and 14% said they had viewed one of someone they did know. Deepfake abuse grows through software, but also through ordinary human indifference.

Deepfake regulation: Germany, the EU, the UK, and the US

Germany

Germany is now under pressure to close a legal loophole. Reporting by ZDF and Heise said that under the current German framework, creating a sexual deepfake is often not clearly punishable on its own, while distribution may trigger liability through image-rights rules or other offences.

Hubig’s draft would change that by making the production of pornographic deepfakes a criminal offence as well, with penalties of up to two years in prison or a fine. Official ministry messaging has also stressed stronger protection against digital violence and punishment for both making and sharing such content.

The EU

At the EU level, the direction is becoming clearer, even if the full legal map is still uneven. The EU’s 2024 directive on violence against women and domestic violence requires member states to criminalize several forms of cyberviolence, including the non-consensual sharing of intimate images, and countries must implement it by June 2027. Separately, the AI Act is bringing transparency rules for deepfakes, and the European Parliament voted on 26 March 2026 to support changes that would delay some watermarking duties and add a ban on so-called nudifier systems. Those changes still need final negotiation, but the trend is obvious: Europe is moving from hand-wringing to rule-making.

The UK and the US

Other countries are already trying different tools. The UK government says it has criminalized the creation of non-consensual intimate images, including deepfakes, and now plans to ban nudification tools that create fake nude images of real people.

In the US, President Trump signed the TAKE IT DOWN Act in May 2025. The Federal Trade Commission says the law criminalizes the publication of non-consensual intimate visual depictions and requires covered platforms to provide a notice process and remove reported content within 48 hours, while also making reasonable efforts to remove copies.

Australia’s eSafety Commissioner has also taken action against nudify services linked to abuse involving schoolchildren.

Deepfake regulation cannot rely on one law, one country, or one dramatic court case. Victims need civil options, criminal options, fast takedowns, better identity checks for bad actors, and platforms that do more than issue thoughtful statements after the content has already spread everywhere.

For the wider policy picture, see our article “Grok, Deepfakes, And The Backlash: Why Governments Tighten AI Rules.”

What to do if a deepfake targets you

If someone uses AI to create sexual content with your face, speed matters. A practical response usually looks like this:

  1. Save evidence first.
    Keep screenshots, usernames, dates, links, and copies of messages before content disappears.
  2. Use takedown tools.
    Adults can use StopNCII, which creates a hash of an image or video so participating companies can detect and remove it without the file leaving the user’s device. For minors, Take It Down works in a similar way.
  3. Report early and directly.
    Report to the platform, tell a lawyer or local support service if needed, and warn key people around you before rumors harden into “everyone knows.”
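The hash-matching approach behind tools like StopNCII can be sketched in a few lines. The real service uses perceptual hashing so that matches survive minor edits such as resizing or recompression; the simplified sketch below uses a plain cryptographic hash (SHA-256) only to illustrate the core idea, which is that the media file itself never has to be uploaded, only its fingerprint. The function names here are illustrative, not StopNCII's actual API.

```python
import hashlib

def fingerprint(path: str) -> str:
    """Compute a SHA-256 fingerprint of a media file locally.
    Only this hash would be shared with a matching service;
    the image or video never leaves the device."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large video files do not load into memory at once.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def is_reported(path: str, reported_hashes: set[str]) -> bool:
    """Platform-side check: does this upload match a hash a victim reported?"""
    return fingerprint(path) in reported_hashes
```

Note the trade-off: a cryptographic hash changes completely if even one pixel changes, which is why production systems rely on perceptual hashes instead.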

A clear ending, even if the law is still catching up

Whatever the Spanish court eventually decides, the Collien Fernandes case has already changed the conversation. It showed how deepfake abuse can become long-term, intimate, and systematic, and how badly older legal systems fit a world of cloned voices, fake accounts, and AI-made sexual content. It also reminded lawmakers that the gap between “technically possible” and “socially devastating” is now very small.

For readers, the takeaway is practical. Treat non-consensual AI sexual content as abuse. Do not share it. Do not laugh it off as internet weirdness. Ask what platforms are doing to stop it, and ask whether new AI tools were built with any serious thought for misuse.

Deepfake technology will likely keep improving. The real test is whether our rules, products, and habits improve faster than the next fake.
