What the EU AI Act's Article 5 Actually Says: A Plain-English Guide to the Red Lines

The EU AI Act runs to hundreds of pages and covers a spectrum of AI risk from minimal to unacceptable. For most organisations, the single most operationally important section is Article 5: the list of AI practices that are prohibited outright, regardless of use case, business justification, or contractual arrangement.

Understanding what Article 5 actually prohibits, and what it does not, is now a practical compliance matter. The prohibitions have applied since 2 February 2025, and the remainder of the Act reaches full application in August 2026.

Six Key Categories of Prohibited Practice

Article 5(1) lists eight prohibited practices in total; the six most relevant to commercial deployers are summarised below.

1. Subliminal manipulation
AI systems that deploy subliminal techniques beyond a person's awareness, or purposefully manipulative or deceptive techniques, to materially distort behaviour in a way that causes or is reasonably likely to cause significant harm. Depending on interpretation, this can reach dark patterns, manipulative recommendation engines, and addiction-by-design interfaces, provided the threshold of material distortion and significant harm is met.

2. Exploitation of vulnerable groups
AI that exploits vulnerabilities of a person or a specific group of persons due to their age, disability, or social or economic situation, with the objective or effect of materially distorting their behaviour in a way that causes or is reasonably likely to cause significant harm.

3. Social scoring
AI systems that evaluate or classify individuals or groups over time based on social behaviour or personal characteristics, where the resulting score leads to detrimental or unfavourable treatment that is unjustified or disproportionate. The final text applies this ban to private actors as well as public authorities, a broadening from earlier drafts.

4. Real-time remote biometric identification in public spaces
The use of real-time remote biometric identification systems, including live facial recognition, in publicly accessible spaces for law enforcement purposes is prohibited, subject to narrowly drawn exceptions such as targeted searches for victims of abduction or the prevention of a specific and imminent terrorist threat.

5. Biometric categorisation by sensitive characteristics
AI that uses biometric data to deduce or infer race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation.

6. Emotion inference in workplaces and educational settings
AI systems that infer the emotional states of individuals in professional or educational environments, except for medical or safety purposes.

The High-Risk Category: Additional Requirements

Separate from outright prohibition, the Act classifies emotion recognition systems generally as high-risk AI. High-risk systems are not banned but must pass conformity assessments before market placement, demonstrating compliance with requirements for risk management, data quality, transparency, human oversight, accuracy, cybersecurity, and robustness.
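
As an illustration of how a deployer might track these obligations internally, here is a minimal sketch in Python. The ConformityChecklist model and its field names are hypothetical conveniences for this example, not a structure the Act prescribes; only the seven requirement areas themselves come from the paragraph above.

```python
# Illustrative pre-market gate for a high-risk system (not legal advice).
# The requirement areas mirror those named in the Act; the data model is
# a hypothetical internal tracking structure.
from dataclasses import dataclass, fields


@dataclass
class ConformityChecklist:
    risk_management: bool = False
    data_quality: bool = False
    transparency: bool = False
    human_oversight: bool = False
    accuracy: bool = False
    cybersecurity: bool = False
    robustness: bool = False


def outstanding_requirements(checklist: ConformityChecklist) -> list[str]:
    """Return the requirement areas not yet signed off; an empty list
    means every tracked area has been completed."""
    return [f.name for f in fields(checklist) if not getattr(checklist, f.name)]


if __name__ == "__main__":
    status = ConformityChecklist(risk_management=True, transparency=True)
    print(outstanding_requirements(status))
    # ['data_quality', 'human_oversight', 'accuracy', 'cybersecurity', 'robustness']
```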

What the Act Does Not Cover

Article 5 prohibitions apply within defined contexts. Emotional inference in consumer products, entertainment platforms, social media, and personal devices is not addressed by Article 5. Inference that does not involve biometric signals, for instance inferring mood from text patterns in a consumer app, sits in a different regulatory space. The Act creates important protections in the contexts it addresses while leaving significant terrain ungoverned.

The interaction between Article 5 and the GDPR also requires attention. Even where the AI Act does not explicitly prohibit a practice, GDPR requirements for consent, transparency, and purpose limitation may apply independently. The Future of Privacy Forum has published analysis on this interplay, noting that organisations face compounding compliance obligations from both instruments simultaneously.

Practical Implication for HR Technology

Any AI system deployed in a professional environment that processes emotional signals, whether from text, voice, facial expressions, keystrokes, or behaviour patterns, should be treated as a candidate for Article 5(1)(f) review. The ban is on inference of emotional states, not on specific products by name. The assessment is functional, not categorical: does this system derive emotional states from people in a workplace or educational context?
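
To make that functional test concrete, here is a minimal triage sketch in Python. The names (AISystem, needs_article_5_1_f_review) are hypothetical helpers invented for this example, and the output is a prompt to escalate for legal review, not a compliance determination.

```python
# Minimal Article 5(1)(f) triage sketch (not legal advice). Flags systems
# that infer emotional states in workplace or educational contexts,
# outside the Act's medical/safety carve-out.
from dataclasses import dataclass
from enum import Enum, auto


class Context(Enum):
    WORKPLACE = auto()
    EDUCATION = auto()
    CONSUMER = auto()


@dataclass
class AISystem:
    name: str
    context: Context
    infers_emotional_state: bool     # derives emotions, by any signal
    medical_or_safety_purpose: bool  # the Act's express carve-out


def needs_article_5_1_f_review(system: AISystem) -> bool:
    """Functional test: does the system infer emotional states of people
    in a workplace or educational setting, outside the medical/safety
    carve-out? If so, escalate to legal review."""
    in_scope = system.context in (Context.WORKPLACE, Context.EDUCATION)
    return (
        in_scope
        and system.infers_emotional_state
        and not system.medical_or_safety_purpose
    )


if __name__ == "__main__":
    hr_tool = AISystem(
        name="call-centre sentiment monitor",
        context=Context.WORKPLACE,
        infers_emotional_state=True,
        medical_or_safety_purpose=False,
    )
    print(needs_article_5_1_f_review(hr_tool))  # True: escalate for review
```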


HumanSafe Opinion

The following reflects HumanSafe Intelligence's position on this development.

Article 5 draws legal lines around the most serious manifestations of inference-based emotion AI. But the compliance question it generates, namely how to continue processing emotional signals in ways that remain lawful, starts from the wrong premise. A constitutional position does not ask how to make inference compliant. It asks whether inference is the right computational posture for human emotional data at all.

Our answer is no. Not because it is currently prohibited in certain contexts, but because the derivation of emotional states without voluntary declaration and constitutional constraint is a rights question prior to any regulatory one. The regulation reflects that principle. It does not create it. Organisations that treat Article 5 as a line to stay just inside of have understood the letter of the law. Those that treat it as evidence that the architecture itself needs rethinking have understood the direction it is pointing.

