EU AI Act FAQ
What AI practices are prohibited under the EU AI Act?
Quick answer
The EU AI Act prohibits: social scoring systems that evaluate or classify people based on their social behaviour or personal characteristics in ways that lead to detrimental treatment; AI that manipulates people through subliminal techniques or exploits vulnerable groups; real-time remote biometric identification in publicly accessible spaces (with narrow law enforcement exceptions); inferring emotions in workplace or educational settings, except for medical or safety reasons; and certain predictive policing practices based solely on profiling. These prohibitions came into force on 2 February 2025, so any organisation using these practices must already have stopped.
The full list of prohibited AI practices
Article 5 of the EU AI Act sets out eight prohibited AI practices:

1. AI systems that deploy subliminal techniques beyond a person's consciousness, or purposefully manipulative or deceptive techniques, to materially distort behaviour in a way that causes or is reasonably likely to cause significant harm.
2. AI that exploits vulnerabilities of a person or group (due to age, disability, or social or economic situation) to distort behaviour harmfully.
3. Social scoring: systems that evaluate or classify natural persons based on their social behaviour or personal characteristics in ways that lead to detrimental or unfavourable treatment that is unjustified, disproportionate, or unrelated to the context in which the data was generated.
4. AI systems that assess the risk of a person committing a criminal offence based solely on profiling or personality traits — predictive policing without objective, verifiable facts about the individual.
5. Untargeted scraping of facial images from the internet or CCTV footage to build or expand facial recognition databases.
6. AI systems that infer emotions of natural persons in the workplace or in educational institutions, except for medical or safety reasons.
7. Biometric categorisation systems that infer sensitive attributes such as race, political opinions, trade union membership, religious beliefs, sex life or sexual orientation.
8. Real-time remote biometric identification in publicly accessible spaces for law enforcement purposes, subject to narrow, tightly regulated exceptions.

Retrospective (post) remote biometric identification in publicly accessible spaces by law enforcement is not prohibited outright, but it is separately regulated and generally requires prior judicial or administrative authorisation.
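For teams building an internal review checklist, the Article 5 catalogue can be tracked as a simple enumeration. This is an illustrative sketch only: the class name and labels below are shorthand paraphrases invented here, not the Act's legal wording, and are no substitute for checking the definitions in the Act itself.

```python
from enum import Enum

class Article5Practice(Enum):
    """Paraphrased labels for the Act's prohibited practices.

    Illustrative only -- shorthand summaries, not the legal
    text of Article 5.
    """
    SUBLIMINAL_MANIPULATION = "subliminal or manipulative techniques causing harm"
    EXPLOITED_VULNERABILITY = "exploiting vulnerabilities of age, disability or situation"
    SOCIAL_SCORING = "social scoring leading to detrimental treatment"
    PREDICTIVE_POLICING = "criminal risk assessment based solely on profiling"
    FACE_SCRAPING = "untargeted scraping of facial images for recognition databases"
    EMOTION_INFERENCE = "emotion inference in workplace or education"
    BIOMETRIC_CATEGORISATION = "biometric categorisation inferring sensitive attributes"
    REALTIME_BIOMETRIC_ID = "real-time remote biometric identification in public"

# A compliance review can iterate the full catalogue as a checklist:
for practice in Article5Practice:
    print(f"- {practice.value}")
```

An enumeration like this is useful mainly as a stable set of tags for an AI inventory spreadsheet or register, so every system is assessed against every category rather than an ad-hoc subset.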
What to do if you have a prohibited system
The prohibitions took effect on 2 February 2025, meaning organisations should already have identified and addressed any prohibited AI use. If an Irish organisation discovers it is using a prohibited practice — for example, emotion inference tools for workplace monitoring, or a social scoring element in a customer management system — it must stop immediately. The consequences of continuing are severe: fines of up to €35 million or 7% of worldwide annual turnover, whichever is higher. The first step for any organisation unsure whether its AI use is prohibited is to conduct an AI inventory and classify each system against the Act's definitions. Where genuine ambiguity remains about whether a practice is prohibited, legal and compliance advice should be sought promptly. For Irish organisations, enforcement of the prohibitions falls to Ireland's designated national competent authorities, operating alongside the relevant sectoral regulators.
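The inventory-and-classify step described above can be sketched as a first-pass screening script. Everything here is hypothetical — the AISystem record format, the red-flag keywords, the context rule — and keyword matching is only a triage aid: real classification means reading the Act's definitions with legal advice. The fine-ceiling function reflects the Act's penalty rule for prohibited practices (the higher of €35 million or 7% of worldwide annual turnover).

```python
from dataclasses import dataclass, field

@dataclass
class AISystem:
    """One entry in an AI inventory (hypothetical record format)."""
    name: str
    purpose: str
    deployed_contexts: list = field(default_factory=list)  # e.g. ["workplace"]

# Crude red-flag keywords mapped to the prohibition they may engage.
# Illustrative only -- a triage aid, not a legal classification.
RED_FLAGS = {
    "emotion recognition": "emotion inference (workplace/education)",
    "social scoring": "social scoring",
    "predictive policing": "profiling-based criminal risk assessment",
    "facial recognition database": "untargeted facial image scraping",
}

def screen(system: AISystem) -> list:
    """Return prohibition categories this system should be reviewed against."""
    text = system.purpose.lower()
    hits = [cat for kw, cat in RED_FLAGS.items() if kw in text]
    # Emotion inference is only prohibited in workplace or education settings.
    if not any(c in system.deployed_contexts for c in ("workplace", "education")):
        hits = [h for h in hits if not h.startswith("emotion")]
    return hits

def max_fine(worldwide_turnover_eur: float) -> float:
    """Penalty ceiling for prohibited-practice infringements: the higher
    of EUR 35 million or 7% of worldwide annual turnover."""
    return max(35_000_000.0, 0.07 * worldwide_turnover_eur)

inventory = [
    AISystem("HR-Monitor", "emotion recognition for staff wellbeing", ["workplace"]),
    AISystem("ChurnBot", "customer churn prediction", ["marketing"]),
]
for s in inventory:
    flags = screen(s)
    print(f"{s.name}: {'REVIEW ' + str(flags) if flags else 'no Article 5 red flags'}")

print(f"Fine ceiling at EUR 1bn turnover: EUR {max_fine(1e9):,.0f}")
```

In this sketch, HR-Monitor is flagged for review (emotion recognition deployed in a workplace) while ChurnBot passes the keyword screen; a flagged system then goes to a human-led legal assessment, never straight to a verdict.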
Acuity AI helps Irish organisations audit their AI use against the EU AI Act's prohibited practice definitions. See our EU AI Act compliance services.