Hosted by MEP Axel Voss, the EIF event titled “AI Multilateral Discussions – Connecting the Bubbles” convened key stakeholders to examine fragmented global approaches to AI governance, with a particular emphasis on copyright and generative AI. The session featured interventions from EU institutions, legal academia, and international policymakers, fostering a robust dialogue on the need for coordinated regulatory responses.
Opening the debate, MEP Axel Voss underscored the urgency of bridging fragmented AI governance frameworks. Highlighting efforts by bodies such as the G7, OECD, and EU, he described a landscape of "own bubbles" struggling to connect. MEP Voss expressed concern over generative AI's potential to undermine existing rights, notably in copyright, suggesting that current initiatives, including the AI Office's draft Code of Practice, fall short. Emphasizing the importance of balanced regulation, he called for dialogue between developers and the creative sector, warning that without consensus, legislative intervention might worsen the divide.
Representing the European Commission, Sabina Tsakova offered insights into the ongoing regulatory implementation. She reiterated the robustness of the EU’s copyright framework, particularly the provisions of the 2019 DSM Directive, including the opt-out mechanism under Article 4. This feature, she explained, is pivotal in allowing rightsholders to control the use of their works in AI training. The Commission is currently preparing a feasibility study for a registry of TDM opt-outs, which could enhance transparency and compliance. Ms Tsakova also discussed Article 53 of the AI Act, requiring general-purpose AI providers to implement copyright-respecting policies and publish summaries of training data. While the Commission supports the operationalization of these obligations through a multi-stakeholder Code of Practice, Ms Tsakova noted that enforcement remains rooted in existing legal pathways. “Our objective is to support the effective expression of rights while ensuring access to high-quality content for AI development,” she concluded.
Joining remotely, Joao Quintais, Associate Professor at the Institute for Information Law, emphasized the global proliferation of AI-related legislation, noting that over 60 regulatory instruments had been introduced in the US alone by 2025. He observed that copyright implications arise not only during model training but also from model outputs, and that legal uncertainty persists across jurisdictions. According to Mr Quintais, the EU's dual regime, anchored in TDM exceptions and AI Act obligations, places the burden largely on general-purpose AI model providers while leaving gaps concerning dataset creators and downstream users. He expressed skepticism about the efficacy of the Code of Practice, describing it as diluted through successive drafts and noting providers' hesitance to endorse it.
Turning to international comparisons, Quintais elaborated on litigation trends in the US, Germany, France, and the Netherlands, where courts are addressing the sufficiency of opt-out declarations and training data transparency. He highlighted the tension between promoting AI research and safeguarding creators’ remuneration, questioning whether the current opt-out model can adequately serve both aims. “The key challenge,” he stated, “is aligning territorial copyright frameworks with fragmented AI value chains.” Quintais advocated for a more holistic approach, warning that an overreliance on licensing may fail to deliver meaningful remuneration to authors.