Legal gaps prompt calls for platform accountability against digital violence targeting women

(de-news.net) – As experts criticize the shortcomings of current laws and AI-driven manipulations raise the risk of online exposure, support grows for legal amendments that criminalize deepfakes and strengthen victim protection. Lawmakers suggest holding platforms accountable.

In the ongoing policy debate surrounding the strengthening of safeguards against digital violence targeting women, Anne König, the CDU/CSU spokesperson on women’s policy, has voiced support for a legislative initiative introduced by Justice Minister Stefanie Hubig (SPD). Positioned within a broader framework of legal reform and public scrutiny, König emphasized the necessity of reinforcing victim protection as a central pillar of the proposed measures. She further indicated that the criminalization of so-called deepfakes should be regarded as a preventative intervention, designed to inhibit harmful conduct before it materializes. In this context, she also underscored the regulatory responsibility of platform operators, arguing that more stringent accountability mechanisms are required to curb the large-scale dissemination of sexualized digital content.

Heightening both public awareness and political urgency, a legal complaint filed by actress Collien Fernandes has brought the issue into sharper focus. Fernandes has extensively documented her prolonged struggle against pornographic deepfake videos depicting her likeness, thereby illustrating the tangible personal and reputational harms associated with such technologies. She has attributed responsibility to her former husband. More broadly, deepfakes can be defined as AI-generated media manipulations in which a person’s facial features or voice are synthetically replaced with those of another individual, often producing highly realistic but misleading representations that complicate legal and evidentiary assessments.

Against this backdrop, the legislative proposal advanced by Hubig seeks to address structural gaps in criminal law relating to violations of intimate privacy, an issue concurrently examined in parliamentary deliberations, including a dedicated session in the Bundestag. Central to the reform is a recalibration of existing legal standards—most notably Section 184k of the German Criminal Code—through an expansion of criminal liability beyond recordings of intimate body parts that are shielded from view. Under the current legal framework, only recordings of such protected intimate regions are prosecutable, whereas images captured in contexts such as saunas or other semi-public environments frequently remain outside the scope of criminal sanction. The proposed revision would shift the legal emphasis toward the principle of consent, rendering the nonconsensual recording of intimate situations unlawful regardless of whether protective measures were in place. In doing so, it would also eliminate the present legal permissibility of voyeuristic recordings in public settings and extend criminal liability to sexually motivated depictions of clothed intimate areas, thereby addressing a previously ambiguous or unregulated domain.

Petitions and protests demand stronger enforcement

The limitations of the existing legal regime have been brought into sharper public focus by the case of Yanni Gentsch, who discovered that she had been filmed without her knowledge while jogging and subsequently encountered legal barriers when attempting to pursue criminal charges. Her experience has come to exemplify broader systemic shortcomings, particularly in relation to consent-based harms occurring in public or semi-public spaces. A petition she later initiated, calling for the explicit criminalization of such recordings, garnered more than 100,000 signatures, reflecting significant public support for reform. In parallel, the draft legislation addresses both sexualized and non-sexualized deepfakes. While the creation of such material has not previously been subject to criminal penalties, the proposed provisions would, under defined conditions, extend liability to both the production and dissemination of such content. To date, prosecutions have relied on indirect legal mechanisms—such as insult provisions or protections under the Art Copyright Act concerning the right to one’s own image—which many legal scholars consider insufficiently tailored to the specific harms posed by deepfake technologies.

This critique has been articulated by Elisa Hoven, who has argued that existing legal frameworks fail to adequately capture the distinctive nature of harm associated with deepfakes, particularly in their sexualized manifestations. According to this assessment, the mismatch between technological developments and legal categories has contributed to procedural inefficiencies, including the suspension or termination of cases due to evidentiary or doctrinal limitations. The proposed reforms would also encompass non-sexualized fabrications, especially in cases where false statements or actions attributed to real individuals result in reputational damage. Rapid advances in artificial intelligence have significantly enhanced the realism and accessibility of such manipulations, thereby increasing both their plausibility and their potential for harm. For both categories of offenses, penalties of up to two years’ imprisonment are currently under consideration. Hoven has additionally suggested that existing European regulatory frameworks, including the Digital Services Act, place excessive reliance on platform self-regulation, and she has called for expanded investigative resources alongside more robust enforcement mechanisms. Following its submission by the Federal Ministry of Justice for interministerial review, publication of the draft legislation is anticipated in the near term.

At the societal level, the issue has also prompted significant public mobilization. In Hamburg, tens of thousands of demonstrators gathered to advocate for stronger protections for victims of digital sexualized violence, underscoring the growing salience of the issue beyond legislative and expert circles. Attendance estimates ranged between 17,000 and 22,000 participants, and the protest was driven in part by Fernandes’s case as well as by broader concerns over the proliferation of AI-generated pornographic fabrications. Although Fernandes initially withdrew from participation after reportedly receiving death threats, she ultimately attended the demonstration, where participants called for a reversal of stigma and for increased accountability among both perpetrators and the digital platforms that facilitate the dissemination of such content.
