After many bureaucratic hurdles, the European Union has given final approval to the Artificial Intelligence Act. The act will come into full effect in approximately two years, pending a final lawyer-linguist check. This milestone has been reached while other nations, including the United States, are still grappling with regulatory measures for AI.
The AI Act, originally introduced in 2021, seeks to safeguard fundamental rights, democracy, the rule of law, and environmental sustainability while promoting innovation and positioning Europe as a leader in AI. It prohibits certain AI applications that pose a threat to citizens’ rights, such as biometric categorization systems, emotion recognition in workplaces and schools, AI-based social scoring, predictive policing, and AI that manipulates human behavior or exploits people’s vulnerabilities.
However, the scope and interpretation of these bans are not entirely clear, raising questions about how they will be enforced and whether agreement can be reached on what constitutes a violation of the AI Act. The act includes procedures for handling such disagreements.
Despite the restrictions, there are exceptions for specific law-enforcement situations. The EU has also identified high-risk areas where AI may be used, including critical infrastructure, education, healthcare, banking, border management, and democratic processes. Such uses of AI must be assessed and monitored, with human oversight ensured and citizens given the right to file complaints and receive explanations about decisions that affect their rights.
General-purpose AI systems and the models underlying them must meet transparency requirements, with the EU reserving the right to impose additional obligations on the riskiest models.
The AI Act marks a major milestone as the world’s first binding law on AI. It aims to reduce risks, create opportunities, combat discrimination, and promote transparency. A dedicated AI Office will support companies in complying with the rules before they come into force. The focus has been placed on putting human beings and European values at the core of AI development.
While the AI Act is a starting point for a new governance model built around technology, further efforts are needed to reassess education models, labor markets, warfare methods, and the social contract itself.
After passing the lawyer-linguist check, the AI Act is expected to be formally endorsed by the EU’s Council, published in the EU’s Official Journal, and become fully applicable 24 months after it enters into force. Bans on prohibited practices, however, will apply just six months after entry into force.
The EU’s AI Act is likely to serve as a blueprint for AI regulation in other countries. In the United States, debates on AI regulation span various industries. President Biden issued an executive order on AI last October, although its authority is limited. Vice President Kamala Harris also announced the AI Safety Institute, laying the groundwork for potential technical guidance and regulations. However, the continuity of these policies will depend on the outcome of the upcoming presidential election.