Updated AI regulations demand open disclosure from AI model creators within the EU

AI Transparency Under Scrutiny: European Regulations Demand AI Providers to Disclose Training Methods, Violators Face Penalties

AI model creators face increased scrutiny with new European regulations demanding transparency

The European Union (EU) has introduced new regulations for AI model providers, effective from August 2, 2025. These rules, based on the EU AI Act adopted in May 2024, apply to general-purpose AI systems and aim to foster transparency, intellectual-property protection, safety, and systemic-risk management.

Under the new regulations, providers of AI models such as OpenAI (ChatGPT) and Google (Gemini) must disclose how their systems function and what data was used to train them. This includes reporting the sources of training data and whether automated web scraping was employed. Providers are also required to explain the steps taken to protect intellectual property rights.

The law aims to bolster copyright protections by requiring developers to establish contact points for rights holders and to implement policies respecting access restrictions on data, such as subscription or paywalled content. Providers must also put technical safeguards in place to prevent their models from generating outputs that infringe on protected works.

Models classified as posing systemic risk, such as GPT-4 and Google Gemini 2.5 Pro, must conduct thorough risk evaluations, document adversarial testing to mitigate risks, report serious incidents to relevant EU and national authorities, and enforce cybersecurity measures to prevent misuse.

The transparency rules took effect on August 2, 2025. Enforcement by the European AI Office will start in August 2026 for new models and August 2027 for models released before August 2, 2025. Violations can lead to fines of up to €15 million or three percent of the company's total global annual turnover.

To aid compliance, the European Commission released voluntary guidelines and a code of conduct, signed by major AI developers including Google (Gemini), OpenAI (ChatGPT), Amazon, IBM, and others. This code governs respect for data access restrictions, copyright policies, and safety/security commitments.

However, critics, particularly copyright advocacy groups, argue that the rules do not yet require naming specific datasets in detail, which they believe weakens their effectiveness in protecting intellectual property rights.

In summary, AI model providers must adopt transparency measures, protect copyrights rigorously, manage systemic risks, and prepare for supervision by EU authorities, underpinned by a mix of mandatory rules and voluntary commitments designed to balance innovation with legal and ethical safeguards. The EU AI Act also allows private individuals to sue AI providers based on the new rules.
