AI Act - Key Points
- Classifies AI based on risk -> risk: combination of the probability of an occurrence of harm and the severity of that harm
- Affects anyone who wants to offer AI systems or their output in the EU, including actors based outside of the EU
- Provider: a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge
- Deployer: a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity
- Importer: a natural or legal person located or established in the Union that places on the market an AI system that bears the name or trademark of a natural or legal person established in a third country
- Distributor: a natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market
- Operator: a provider, product manufacturer, deployer, authorised representative, importer or distributor (basically a category to summarize all these as opposed to an end-user)
- Placing on the market: the first making available of an AI system or model in the EU
- Making available on the market: the supply of an AI system or model for distribution or use in the EU in the course of a commercial activity (whether for payment or free of charge)
- Putting into service: the supply of an AI system for first use directly to the deployer or for own use in the Union for its intended purpose
- Reasonably foreseeable misuse: the use of an AI system in a way that is not in accordance with its intended purpose (intended purpose: the specific context and conditions of use, as specified in the information supplied by the provider)
- AI literacy: providers and deployers must take measures to ensure a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf
- AI system: machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments
- GPAI: general-purpose AI
- GPAI model: one that displays significant generality, is capable of performing a wide range of distinct tasks, and can be integrated into a variety of systems or applications
- GPAI system: AI system which is based on a general-purpose AI model and has the capability to serve a variety of purposes, both for direct use and for integration into other AI systems
  -> Providers of these must provide technical documentation and instructions for use, comply with the Copyright Directive, and publish a summary about the content used for training
  -> Exceptions for free and open license GPAI providers: they only need to comply with copyright and publish the training-content summary, unless the model poses a systemic risk
  -> Free and open license defined as: one whose parameters, including weights, model architecture and model usage, are publicly available, allowing for access, usage, modification and distribution of the model
  -> Models presenting a systemic risk must additionally conduct model evaluations and adversarial testing, track and report serious incidents, and ensure cybersecurity protections
  -> Systemic risk is presumed when the cumulative amount of compute used for training is greater than 10^25 floating point operations (FLOPs); providers must notify the Commission within 2 weeks if their model meets this criterion (a back-of-envelope sketch of this threshold follows this list)
- Providers that don't adhere to codes of practice must demonstrate alternative adequate means of compliance for Commission approval
- After entry into force, the AI Act will apply by the following deadlines:
- 6 months for prohibited AI systems
- 12 months for GPAI obligations (these form a separate track from the risk tiers and apply to all GPAI models, with extra duties for systemic-risk models, so a single deadline covers them)
- 24 months for high risk AI systems under Annex III
- 36 months for high risk AI systems under Annex I
- Codes of practice must be ready 9 months after entry into force
- Recall: any measure aiming to achieve the return to the provider or taking out of service or disabling the use of an AI system made available to deployers
- Withdrawal: any measure aiming to prevent an AI system in the supply chain being made available on the market
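The 10^25 FLOPs line above is the one concretely checkable number in the GPAI rules. The Act doesn't prescribe how to estimate training compute; the sketch below uses the common ~6 × parameters × training-tokens approximation for dense transformers, which is an assumption of mine, not something the Act specifies.

```python
# Back-of-envelope check against the AI Act's systemic-risk compute threshold
# (Art. 51(2): cumulative training compute > 10^25 FLOPs).
# Assumes the common ~6 * N * D estimate for dense transformer training
# (N = parameters, D = training tokens) -- an approximation, not from the Act.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(params: float, tokens: float) -> float:
    """Rough training compute: ~6 FLOPs per parameter per token."""
    return 6 * params * tokens

def presumed_systemic_risk(params: float, tokens: float) -> bool:
    return estimated_training_flops(params, tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

# Example: a 70B-parameter model trained on 15T tokens lands just below the bar
flops = estimated_training_flops(70e9, 15e12)  # ~6.3e24 FLOPs
print(f"{flops:.2e} FLOPs -> systemic risk presumed: {presumed_systemic_risk(70e9, 15e12)}")
```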
Unacceptable risk
- prohibited
- ex.: social scoring systems, manipulative and deceptive AI, exploitative AI, biometric categorisation systems, criminal risk assessment, facial recognition databases, inferring emotions in workplaces or education, real-time remote biometric identification
- manipulative/deceptive/subliminal: meant to distort behavior and impair informed decision-making
- exploitative: exploiting vulnerabilities related to age, disability, socio-economic status to distort behavior
- biometric categorisation: inferring sensitive attributes like race, political opinions, trade union membership, religious or philosophical beliefs, sex life, sexual orientation -> how does the feed algorithm fare with this? -> exception for "lawfully acquired" biometric datasets -> exceptions for law enforcement :(
- social scoring: evaluating or classifying based on social behavior or personal traits and using them to treat people unfairly and detrimentally
- criminal offense assessment: based on profiling or personality traits, except when used on objective, verifiable facts of criminal activity (vague)
- facial recognition database: untargeted scraping of facial images from the internet or CCTV footage
- real-time remote biometric identification: only allowed for searching for missing persons and victims of human trafficking, preventing an imminent threat to life, or identifying suspects in serious crimes; before deployment, police must complete a fundamental rights impact assessment, get authorisation from a judicial authority and register the system in an EU database, but in "justified cases" it can be registered later (yikes) -> it is naive to think these use cases won't also target and threaten innocent people who have nothing to do with this
High risk
- The focus of the AI Act -> regulates these the most
- providers must establish a risk management system and a quality management system, conduct data governance, provide technical documentation, keep records (automatic event logging; see the sketch after the Annex III list below), provide instructions for use, implement human oversight, and design for accuracy, robustness and cybersecurity
- an AI system is always considered high risk if it profiles individuals (i.e. processes personal data to assess aspects of a person's life)
- examples in Annex III for high risk include
-> Biometrics
-> Critical infrastructure (like road traffic regulation)
-> Educational/vocational training (AI systems to determine access or admission, evaluate learning outcomes etc.)
-> Employment (used for recruitment or selection of candidates, to analyse and filter job applications, or to decide on promotion or termination)
-> Access to essential services and benefits (AI assessing eligibility for healthcare, creditworthiness, classifying emergency calls etc.)
-> Law enforcement
-> Migration, asylum and border control management (AI used to assess risk posed by a natural person, to assist examination of applications for asylum etc.)
-> Justice and democratic processes (AI assisting in researching and interpreting the law, applying it, or used to influence the outcome of an election) -> how is this not unacceptable risk??
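Of the provider obligations above, record-keeping is the most mechanical one: high-risk systems must automatically log events over their lifetime. The Act doesn't prescribe a log format; the sketch below is a hypothetical, minimal append-only event log (all names made up).

```python
# Hypothetical sketch of automatic event logging for a high-risk AI system.
# The AI Act requires record-keeping but prescribes no schema; the fields here
# (timestamp, event type, free-form details) are illustrative only.
import json
from datetime import datetime, timezone

class AuditLog:
    def __init__(self, path: str):
        self.path = path

    def record(self, event_type: str, **details) -> None:
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "event": event_type,
            **details,
        }
        with open(self.path, "a") as f:  # append-only: never rewrite history
            f.write(json.dumps(entry) + "\n")

log = AuditLog("high_risk_system.log")
log.record("inference", case_id="42", output="eligible", confidence=0.87)
log.record("human_override", case_id="42", reviewer="staff-7", new_output="ineligible")
```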
Limited risk
- regulated, but only with lighter transparency obligations -> end users must be notified that they're interacting with AI or AI-generated content (a minimal sketch of such a notice below)
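The Act leaves the form of the notice open. Below is a hypothetical, minimal way a chatbot backend could attach both a human-readable disclosure and a machine-readable flag to its output; the names are mine, not from the Act.

```python
# Hypothetical sketch of a limited-risk transparency notice: tell the user
# they are interacting with an AI and mark the content machine-readably.
AI_DISCLOSURE = "You are interacting with an AI system."

def wrap_response(generated_text: str) -> dict:
    return {
        "disclosure": AI_DISCLOSURE,  # shown to the end user
        "content": generated_text,
        "ai_generated": True,         # machine-readable marking for downstream tools
    }

print(wrap_response("Here is a summary of your request..."))
```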
Minimal risk
- unregulated