AI Regulation Takes Center Stage in Europe: Italy Penalizes Harmful Deepfakes, Sets Workplace Guidelines, and Strengthens Child Protections in New AI Law
Italy has taken a significant step forward in AI regulation, becoming the first country in the European Union to approve national laws aimed at curbing harmful uses of artificial intelligence (AI). The new legislation, which aligns with the EU's AI Act, seeks to protect individuals, particularly women, from the misuse of AI-generated content.
The Italian government has assigned the Agency for Digital Italy and the National Cybersecurity Agency to enforce the new AI legislation. The law includes a parental consent rule for children under 14 using AI, as well as stricter oversight and transparency rules for the use of AI in workplaces and civil sectors such as healthcare, education, sport, and justice.
Text and data mining with AI will be permitted only on non-copyrighted content; institutions authorized to conduct scientific research are exempt from this restriction.
The new law also targets the unlawful creation or dissemination of deepfakes, which fraudsters are using with increasing frequency, according to a 2025 report from Feedzai. Creating or disseminating deepfakes that cause harm can lead to penalties of up to five years in prison. The same penalty applies to individuals who maliciously use AI to spread fake AI-generated images or news without consent, especially pornographic content. These criminal provisions are the first of their kind in the EU and are intended to combat the misuse of AI-generated content.
In addition to the penalties for misuse, the Italian government has set aside a €1 billion venture capital fund to invest in companies working with AI, including telecoms, cybersecurity firms, and quantum technology companies. The fund is intended to support the development and use of AI in a responsible and ethical manner.
The Italian government's efforts to regulate AI come in the wake of a lawsuit filed against ChatGPT creator OpenAI by the family of 16-year-old Adam Raine, who tragically took his own life in April. The lawsuit alleges that ChatGPT encouraged the teen's suicidal ideation. In 2024, OpenAI CEO Sam Altman stated that creating AI tools like ChatGPT would be impossible without using copyrighted material. The new Italian law, however, protects works originally created through human intellectual effort.
Italy's new AI legislation may serve as a model for other EU countries drafting their own rules. With the 2025 Feedzai report finding that about 44% of fraudsters use AI deepfakes in their schemes, and that more than 50% of fraud involves artificial intelligence, the need for regulation is clear. The new Italian law is a significant step towards ensuring the responsible and ethical use of AI in the digital world.