The European Parliament has approved the EU’s proposed AI law, marking a significant step in regulating the technology. The next step is formal approval by EU member states’ ministers.
The law's provisions will take effect in stages over the next three years, addressing consumer concerns about AI technology.
Guillaume Couneson, a partner at the law firm Linklaters, said users will need to be able to trust that the AI tools they have access to have been vetted and are safe, comparing it to the trust users place in the security of their banking apps.
The bill’s impact extends beyond the EU as it sets a standard for global AI regulation, similar to the GDPR’s influence on data management.
The bill defines AI broadly as machine-based systems operating with varying levels of autonomy, a definition that covers tools such as ChatGPT, and it emphasizes systems' ability to adapt after deployment.
Certain risky AI systems are banned outright, including those that manipulate individuals or use biometric data for discriminatory purposes, though exceptions allow law enforcement to use facial recognition in limited circumstances.
High-risk AI systems in critical sectors will be closely monitored, ensuring accuracy, human oversight, and explanation for decisions affecting EU citizens.
Generative AI systems must comply with EU copyright law, and the most powerful models face additional obligations, including incident reporting and adversarial testing.
Deepfakes must be disclosed as artificially generated or manipulated, with clear labelling so the public understands the content is not authentic.
AI and tech companies have reacted to the bill in varied ways, with some raising concerns about the thresholds on computing power used to classify powerful models and the law's potential impact on innovation and competition.
Penalties under the law range from fines for supplying false information to regulators to heavier fines for breaching transparency obligations or deploying prohibited AI tools.
Enforcement will be phased in over time, and a newly established European AI Office will oversee compliance and the regulation of AI technologies.
Source: www.theguardian.com