(de-news.net) – German policymakers are debating two types of digital harms: youth exposure on social media and AI-enabled sexualized violence. While one group cautions against relying solely on age bans and argues for stronger platform safeguards, the other presses for faster legal reforms, tougher enforcement, and specialized institutions to address deepfakes and other forms of image-based abuse.
Digital Affairs Minister Karsten Wildberger has warned against adopting a rushed approach to a social media ban for minors, arguing that Australia’s initial experience may be encouraging but still requires closer scrutiny before it can serve as a reliable basis for policy. He has accordingly urged patience, stating that the expert commission established by Youth Minister Karin Prien is expected to present its recommendations by the summer. The debate has grown more intense since the CDU party convention last month called on the Federal Government to introduce a legal minimum age of 14 for the use of social media platforms such as Facebook, Instagram, TikTok, and Snapchat. Pressure for swift action has also come from CDU state premiers Hendrik Wüst and Daniel Günther, further heightening the political salience of the issue.
Wildberger has expressed support in principle for the idea of a minimum age threshold, while also suggesting that such a measure might ultimately function only as a transitional instrument rather than a comprehensive solution. In his view, the question should not be narrowed to a simple prohibition, because the state’s duty of care toward children and adolescents might also be fulfilled through stricter regulatory measures, including tighter platform standards and safer default settings for smartphones and digital services. At the same time, he has argued that a temporary age restriction could still be justified for as long as such technical safeguards remain insufficiently developed or inadequately implemented. Even so, Wildberger has emphasized that the underlying problem extends beyond legislation alone, stressing that parental conduct and educational responsibility remain central factors that cannot be replaced by a ban. An age limit may nonetheless serve, in his view, as a visible sign that youth protection is being treated as a serious public concern.
Pop warns age caps alone cannot fix social media risks for minors
Ramona Pop, a leading consumer advocate, has likewise rejected the idea that rigid age-based restrictions on their own could provide an adequate response. Referring to the Australian example, she has argued that a social media ban would not amount to a panacea and could, in practice, be circumvented with relative ease. Instead, on behalf of the Federation of German Consumer Organizations, she has called for intervention at the structural level of platform business models, particularly those designed to maximize and prolong user engagement among both minors and adults. In that context, she has pointed to mechanisms such as endless scrolling, autoplay video, and other manipulative design features that make disengagement more difficult while simultaneously increasing users’ exposure to potentially harmful material.
Pop has also issued warnings about purported health experts who promote diet-related advice online, arguing that such content may contribute, especially among girls, to the development of eating disorders or anorexia. She has further maintained that AI-generated recommendations for meals or diet plans do not necessarily reflect sound medical or nutritional guidance and may intensify already harmful behavioral patterns. On that basis, she has called for binding legal safeguards under consumer protection law, including secure default settings intended to offer a broader level of protection to users.
Politicians push faster legal reform on deepfakes
In the parallel debate over digital sexualized violence, Nina Warken, chair of the Women’s Union, has called for rapid legislative revisions to close existing gaps in criminal liability. She has argued that individuals transformed into sexualized objects through AI are subjected to a violation of human dignity and has stressed that women are affected disproportionately often. Because offenders still frequently avoid meaningful punishment, she has characterized the current legal situation as a further humiliation for victims. At the same time, Warken has insisted that statutory reform alone would not be sufficient and has therefore demanded a broader institutional response, including additional investigators, specialized police and judicial units, and faster court proceedings.
According to SPD lawmaker Carmen Wegge, the parliamentary group’s legal policy spokesperson, the Justice Ministry’s planned legislation on digital sexualized violence should produce a clear deterrent effect by expanding criminal liability for image-based abuse. She has argued that more clearly defined legal boundaries in digital spaces would discourage the production of pornographic deepfakes by making it more explicit what conduct is punishable. Wegge has also identified other forms of behavior likely to be addressed in the future, including the distribution of recordings made in saunas and nonconsensual electronic surveillance by intimate partners. In addition, she has criticized the continuing tendency to trivialize violence against women online and has argued that, as in Spain, society, law enforcement, and the courts must take victims far more seriously. Justice Minister Stefanie Hubig is expected to present the proposed bill next week.
The issue gained renewed prominence after actress Collien Fernandes disclosed that AI-generated sexualized fake images of her had circulated for years. She filed a complaint in Spain, where the legal framework is regarded as stricter, thereby drawing further public attention to the cross-border dimension of the problem. Mona Neubaur, Deputy Minister-President of North Rhine-Westphalia, praised Fernandes for speaking publicly, describing her decision as an important signal to other women who remain silent out of fear. Neubaur has warned that deepfakes can no longer be treated as a marginal issue and has argued that lawmakers are still lagging behind the pace of technological change. In her assessment, publishing fabricated sexualized images online can undermine personal dignity and security in ways that often produce consequences comparable to those associated with physical abuse. For that reason, she has argued that the law must place digital and physical sexual violence on an equal footing. She has also supported closing the remaining liability gaps related to deepfakes and establishing specialized courts for sexual violence, modeled on institutions already operating in other countries, including Spain.