Legal systems around the world, including several U.S. states, are starting to establish new legal and regulatory frameworks for the use and development of artificial intelligence (AI) systems. The European Union, however, has taken a leading role with the EU Artificial Intelligence Act (the Regulation laying down harmonized rules on artificial intelligence, commonly called the AI Act), considered the world’s first comprehensive legal framework for AI. It was formally adopted by the European Parliament on March 13, 2024, and approved by the EU Council on May 21, 2024.
A key feature of AI systems, according to the AI Act, is their ability to make inferences. This refers to the process of obtaining outputs, such as predictions, content, recommendations, or decisions, that can influence physical and virtual environments, as well as to AI systems’ ability to derive models or algorithms, or both, from inputs or data. Techniques that enable inference include machine learning approaches, which learn from data how to achieve certain objectives, and logic- and knowledge-based approaches, which draw inferences from encoded knowledge or a symbolic representation of the task to be solved. The ability of an AI system to infer goes beyond basic data processing, enabling learning, reasoning, or modeling.
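To make that distinction concrete, the following minimal Python sketch (purely illustrative; nothing in it comes from the Act, and all data, rules, and names are hypothetical) contrasts the two families of techniques the Act mentions: a machine-learning approach that learns parameters from data, and a logic- and knowledge-based approach that infers from encoded rules.

```python
# Illustrative sketch of the two inference techniques named in the AI Act.
# All data and rule names below are hypothetical.

# --- Machine learning: parameters are learned from example data ---
def fit_line(xs, ys):
    """Learn slope and intercept from data (ordinary least squares)."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

xs, ys = [1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8]   # hypothetical training data
slope, intercept = fit_line(xs, ys)
prediction = slope * 5 + intercept             # inference: an output (a prediction)

# --- Knowledge-based: conclusions are drawn from encoded rules, not data ---
RULES = {"fever and cough": "flu suspected", "no symptoms": "healthy"}
conclusion = RULES.get("fever and cough", "unknown")  # inference from symbolic rules

print(f"ML prediction for x=5: {prediction:.2f}")
print(f"Rule-based conclusion: {conclusion}")
```

In both cases the system produces an output that goes beyond merely echoing its inputs, which is the sense of "inference" the Act relies on.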
Definition of Risk
The AI Act adopted by the European Union regulates artificial intelligence systems based on the level of risk they pose. It classifies AI systems into different risk categories, with corresponding regulatory measures applied to each.
1. Unacceptable risk
AI systems categorized as posing an unacceptable risk are completely banned due to the serious potential harm they could cause. The following practices are prohibited:
- Social scoring: AI systems used to assess individuals’ behavior and characteristics (similar to the concept of social credit in China).
- Manipulative or deceptive techniques: AI designed to manipulate or deceive people in ways that could harm their autonomy.
- Exploitation of vulnerable groups: AI used to exploit individuals such as children, the elderly, or people with disabilities.
- Creation of facial recognition databases: Building databases with AI by collecting data from the internet without consent.
- Predictive policing: AI tools used to predict potential criminal behavior based on collected data.
- Emotion recognition: AI systems used to detect or analyze emotions in workplace or educational settings.
- Certain biometric identification and categorization activities: Use of AI for certain forms of biometric identification (such as facial recognition) and categorizing people based on personal characteristics like race or gender without appropriate legal safeguards.
2. High risk
AI systems classified as high risk are not banned but must meet additional criteria to comply with the legislation. Examples of high-risk categories explicitly listed in the AI Act include systems used as safety components in products such as machinery, toys, medical devices, and vehicles, as well as systems used in sensitive areas like biometrics, critical infrastructure, education, and employment.
Providers and deployers of high-risk AI systems are, depending on the circumstances, subject to additional requirements, which may include:
- Conducting fundamental rights impact assessments
- Registration in an EU public database
- Implementing risk and quality management systems
- Using high-quality data to reduce bias
- Ensuring transparency
- Logging automated activities
- Reporting incidents
- Maintaining human oversight
- Appointing a representative in the EU
- Ensuring system accuracy, robustness, and security
3. Limited or minimal risk
AI systems that do not reach the level of unacceptable or high risk may be classified as limited or minimal risk and will be subject to fewer regulatory requirements. However, the level of risk for such systems may vary depending on the application for which they are used.
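As a rough illustration of this risk-based structure, the sketch below models the three tiers as a Python enumeration and maps a few of the example use cases discussed above to them. The mapping is a simplified reading of the categories in this section, not a legal classification tool; real classification under the Act depends on detailed legal criteria.

```python
from enum import Enum

class RiskTier(Enum):
    """Simplified rendering of the AI Act's risk tiers, as described above."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "permitted subject to additional requirements"
    LIMITED_OR_MINIMAL = "permitted with lighter obligations"

# Hypothetical, simplified mapping based on the examples in this section.
EXAMPLES = {
    "social scoring of individuals": RiskTier.UNACCEPTABLE,
    "emotion recognition in the workplace": RiskTier.UNACCEPTABLE,
    "safety component in a medical device": RiskTier.HIGH,
    "AI-based recruitment screening": RiskTier.HIGH,
    "spam filter": RiskTier.LIMITED_OR_MINIMAL,
}

for use_case, tier in EXAMPLES.items():
    print(f"{use_case}: {tier.name} ({tier.value})")
```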
Regarding general-purpose AI (GPAI) models, the AI Act distinguishes between models with “systemic risk” and those without. Systemic risk is presumed for models trained with very large amounts of compute; the Act sets the threshold at more than 10^25 floating-point operations of cumulative training compute. All GPAI model providers must appoint a representative in the EU, maintain detailed technical documentation and other information about the model, and comply with EU copyright law. Providers of models with systemic risk must also conduct advanced model evaluations, manage risks, report incidents, and ensure robust cybersecurity. These measures aim to ensure the safe and responsible deployment of powerful AI systems.
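The compute criterion amounts to simple arithmetic. The short sketch below applies the Act's presumption threshold to a hypothetical model; the model and its FLOP count are invented for illustration, and only the 10^25 threshold comes from the Act itself.

```python
# Presumption threshold for systemic risk under the AI Act:
# cumulative training compute greater than 10^25 floating-point operations.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    """Return True if training compute exceeds the Act's presumption threshold."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

# Hypothetical example model, not a real measurement.
example_model_flops = 3e25
print(presumed_systemic_risk(example_model_flops))  # True: systemic-risk obligations apply
```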
The Act also provides for the creation of a European Artificial Intelligence Board (the “AI Board”) to support its consistent application across member states.