EU Parliament Passes Major AI Regulation Law

The European Union (EU) Parliament has formally passed the Artificial Intelligence Act, a regulation establishing comprehensive rules for trustworthy AI systems.

The law is seen as the world's first major legislative framework aimed at classifying products and services that use AI according to risk.

"The AI Act has nudged the future of AI in a human-centric direction, in a direction where humans are in control of the technology, and where it — the technology — helps us leverage new discoveries, economic growth, societal progress and unlock human potential," said Dragos Tudorache, a Romanian lawmaker, before the vote on social media.

The AI Act has been crafted with key objectives at its core. Its primary aim is to protect fundamental rights and freedoms and ensure the safety of users by setting rigorous standards for AI systems deemed high risk. This category covers use cases in sectors such as healthcare, law enforcement, and critical infrastructure, where AI technologies could have an outsized impact on people's lives.

Products and services that deploy AI will be assigned one of four risk levels: "minimal," "limited," "high" or "unacceptable." The AI Act bans outright applications rated unacceptable, such as social scoring systems, emotion recognition in workplaces and schools, and predictive policing.

Furthermore, the legislation strives to promote innovation and confidence in AI solutions, positioning European organizations as formidable players in AI development.

For businesses and developers, the AI Act will bring new considerations when using and deploying generative AI. Companies deploying AI systems classified as high risk will need to implement comprehensive risk assessment and mitigation strategies, maintain detailed documentation, and ensure transparency and accountability. These requirements aim to build public trust in AI systems by making their decisions understandable, traceable, and challengeable by individuals.

Next, the EU and its member states will begin implementing the new law. First, the AI Act will officially enter into force in the next two to three months, pending final formalities. Bans on products and services deemed unacceptable must then be enforced by individual member states within six months. Finally, rules for general-purpose AI products and services will start applying one year after the law has been formally adopted.

For the EU's part, it will establish the AI Office, a central governing body headquartered in Brussels, while each member state will create its own watchdog organization to facilitate communication between regulators and the public.

"Thanks to Parliament, unacceptable AI practices will be banned in Europe and the rights of workers and citizens will be protected," said the Internal Market Committee co-rapporteur Brando Benifei. "The AI Office will now be set up to support companies to start complying with the rules before they enter into force. We ensured that human beings and European values are at the very center of AI's development."

The full text of the act is available on the EU's website.

About the Author

Chris Paoli (@ChrisPaoli5) is the associate editor for Converge360.
