16 April 2025

On 9 April, against the backdrop of increasingly sophisticated AI technologies and a corresponding rise in harmful digital content, the European Internet Forum hosted a pivotal breakfast debate on protecting the public from abusive AI-generated content. The debate, addressing an issue closely tied to democratic integrity, mental health, and societal cohesion, brought together policymakers, industry representatives, civil society advocates, and investigative journalists to explore regulatory, technological, and societal responses.

Protecting the public from abusive AI-generated content

MEP Lina Gálvez, who opened the session, emphasized the personal and political urgency of addressing abusive AI content. Citing the disproportionate targeting of women, with an estimated 98% of deepfake videos online being pornographic and 99% of deepfake pornography depicting women, she underscored the threat such content poses not only to individual dignity but also to broader social equity and democratic trust. Gálvez highlighted the role of existing EU policies, such as the Digital Services Act and the Gender-Based Violence Directive, in mitigating these risks. Nonetheless, she advocated continuous evaluation and enhancement of regulatory frameworks, stressing the importance of digital literacy, AI tools for content moderation, and collective accountability among governments, tech firms, and civil society.

Nanna-Louise Linde, Vice President for EU Government Affairs at Microsoft, reiterated the necessity of a collaborative approach, candidly acknowledging the dual role of tech companies as both contributors to and solvers of the AI challenge. Linde detailed Microsoft's efforts, including partnerships with NGOs focused on protecting children, the elderly, and women, and the introduction of watermarking technologies. She called for criminalizing the removal of these watermarks and emphasized the company's ongoing training and awareness campaigns to counter deepfakes, especially in electoral contexts. While praising the EU's current regulatory trajectory, she also pushed for enforcement of, and updates to, existing laws to keep pace with technological advances.

Yordanka Ivanova from the European Commission’s AI Office presented the regulatory perspective, detailing the implementation of the AI Act, which she described as a harmonized, risk-based framework for AI deployment across the EU. Ivanova clarified that manipulative or exploitative AI applications, particularly those affecting vulnerable groups, are now explicitly prohibited. She emphasized that transparency measures—such as mandatory watermarking and content labeling for AI-generated media—are vital for detecting manipulation and preserving public trust. The Commission, she stated, is committed to supporting providers through guidelines and working collaboratively to foster compliance while promoting innovation.
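To make the idea of machine-readable labeling concrete, the sketch below shows one simplified way a provider might attach and read a disclosure tag in an image file's metadata. This is purely illustrative and assumes nothing about the AI Act's prescribed mechanism or industry standards such as C2PA; the tag name "AI-Disclosure" and its value are hypothetical, and the example uses the Pillow library's standard PNG text-chunk support.

```python
# Illustrative sketch of machine-readable content labeling.
# NOT an official AI Act or C2PA implementation; the "AI-Disclosure"
# tag name and its value are hypothetical examples.
from PIL import Image
from PIL.PngImagePlugin import PngInfo


def label_as_ai_generated(src_path: str, dst_path: str) -> None:
    """Save a copy of an image with a machine-readable disclosure tag."""
    image = Image.open(src_path)
    metadata = PngInfo()
    # Real provenance schemes carry far richer, signed metadata.
    metadata.add_text("AI-Disclosure", "ai_generated=true; tool=example-model")
    image.save(dst_path, pnginfo=metadata)


def read_disclosure(path: str):
    """Return the disclosure tag if present, otherwise None."""
    image = Image.open(path)
    # PNG text chunks are exposed via the .text mapping on PNG images.
    return getattr(image, "text", {}).get("AI-Disclosure")


if __name__ == "__main__":
    # Placeholder file names for demonstration purposes.
    label_as_ai_generated("photo.png", "photo_labeled.png")
    print(read_disclosure("photo_labeled.png"))  # -> "ai_generated=true; ..."
```

A plain metadata tag like this is trivially stripped when a file is re-encoded or screenshotted, which is one reason speakers paired labeling requirements with calls to criminalize watermark removal, and why more robust, tamper-resistant watermarking remains an active area of work.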

From a civil society standpoint, Jos Bertrand, President of the European Senior Organisation, highlighted the exclusion and vulnerability of older citizens in an increasingly digital society. He advocated for a rights-based approach that ensures seniors are not sidelined by digital transformation. Bertrand pointed to limited access to face-to-face services and low levels of digital literacy among the elderly, both of which increase susceptibility to fraud and abuse. He called for systematic civil society involvement in regulatory processes and the establishment of accessible reporting mechanisms for abuse and non-compliance.

Marie Bröckling, an investigative journalist, offered a compelling narrative based on her documentary work on deepfake pornography. She shared harrowing insights into how victims, mostly women, are often unaware of their exploitation because of the clandestine channels through which such content is distributed. Her research into perpetrators revealed a mix of ignorance, malice, and opportunism, underscoring the need for targeted legal reforms and greater enforcement. Bröckling stressed the importance of supporting underfunded NGOs that help victims detect and remove such content and called for robust accountability across all actors involved in the lifecycle of abusive AI-generated material, from app developers to payment processors.

The event concluded with a consensus that addressing AI-generated harmful content demands persistent, collective action. The conversation reaffirmed EIF’s mission to bridge stakeholders across sectors in fostering a safer, fairer digital future, rooted in innovation, accountability, and human dignity.