(de-news.net) – The Green Party has welcomed the decision taken at the recent CDU party conference to establish a minimum age requirement for social media usage, emphasizing that setting the threshold at 14 years corresponds to objectives the party has long advocated. Green leadership further noted that, while the decision aligns with their policy priorities, CDU Chairman and Chancellor Friedrich Merz would face the considerable challenge of ensuring adherence to the measure amid resistance from the CSU. The CSU had previously voiced skepticism, arguing that children acquire appropriate digital competencies more effectively through structured media literacy education than through blanket prohibitions. Despite this disagreement between the CDU and its Bavarian sister party, the SPD coalition partner reportedly endorsed the initiative, reflecting broader support among the governing parties for measures aimed at promoting safe and responsible online engagement among minors.
Conversely, Philipp Türmer, head of the Young Socialists, rejected the proposed ban on social media access for individuals under the age of 14, citing the practical difficulties of implementing such a measure. He maintained that regulatory focus should be directed primarily toward serious offenders, and that digital platforms themselves must be held accountable for the content they host. Türmer emphasized that executives of major technology companies bear responsibility for curbing unlawful and exploitative material. He further suggested that, should platform operators fail to meet these obligations, the European Union might be compelled either to impose substantial financial penalties or, in extreme cases, to suspend the operation of networks altogether in order to halt the circulation of child pornography and other forms of systematic fraud.
At the same time, the Federal Data Protection Commissioner, Louisa Specht-Riemenschneider, together with colleagues from multiple countries, issued a joint statement highlighting the risks posed by artificial intelligence (AI)-manipulated images of real individuals. They underscored the harm such content can inflict on children and other vulnerable groups, particularly through cyberbullying and exploitation. In this context, AI developers were urged to implement robust safeguards in collaboration with regulatory authorities, with the clear directive that the deployment of technological tools must not compromise personal safety, privacy, or human dignity. The initiative was prompted by documented cases in which AI software had been used to generate altered or sexually explicit images of individuals that were subsequently disseminated widely across social media platforms, underscoring the urgent need for preventive measures and responsible oversight.