August 2026: What the EU AI Act's Full Enforcement Actually Bans

In August 2026, full enforcement of the EU AI Act begins. The penalties for deploying a prohibited practice reach 35 million euros or 7 per cent of global annual turnover, whichever is higher. And most organisations have not yet audited whether their HR technology stacks include practices the Act now bans outright.

The prohibitions are broader than many assume. Article 5 of the Act bans AI systems that:

- perform mass biometric identification using facial images scraped from the internet or CCTV footage without consent;
- infer emotions in workplace or educational settings, except for medical or safety purposes;
- categorise individuals based on biometric data to infer sensitive characteristics such as political opinion, religion, or sexual orientation;
- conduct real-time remote biometric identification in public spaces, beyond narrow law-enforcement exceptions; and
- use subliminal or manipulative techniques to influence behaviour in ways that bypass conscious awareness.

High-risk AI systems, including biometric identification and emotion recognition systems that do not fall within the outright prohibitions, must pass conformity assessments before they can be placed on the market.

The Workplace Emotion AI Prohibition

Article 5(1)(f) is the provision with the most direct consequences for employers: it prohibits AI systems that infer the emotions of individuals in the workplace or in educational institutions. The prohibited inference channels include facial expressions and micro-expressions, voice patterns and tonal analysis, keystroke and typing patterns, and body posture and movement analysis.

The ban extends to any system that derives emotional states from these signals in a professional or educational context, not merely systems explicitly marketed as emotion recognition. An HR analytics tool that uses facial expression data to assess interview performance, or an employee monitoring platform that analyses typing patterns to infer stress, falls within the prohibition regardless of how it is branded.

The narrow exception for medical and safety purposes applies, for instance, to monitoring for signs of fatigue in safety-critical environments such as manufacturing or construction.

What This Means for Employers

Many organisations are running HR technology stacks built before this prohibition existed. Performance analytics platforms, employee engagement tools, and AI-assisted recruitment systems may contain inference components that, under August 2026 enforcement, constitute prohibited practices. Enforcement is handled by national supervisory authorities. In Ireland, the Workplace Relations Commission has been identified as the relevant body for employment-related AI Act breaches.

Very few organisations have conducted systematic audits of their HR tech for prohibited emotion-inference components. The second and third quarters of 2026 are likely to see a wave of urgent compliance reviews.
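A systematic audit of this kind is, at its core, an inventory pass: list each tool, record its declared capabilities and stated purposes, and flag anything that matches a prohibited emotion-inference category without an exempt purpose. The sketch below illustrates that pass. The inventory format, capability tags, and tool names are all hypothetical; a real audit would work from vendor documentation, contracts, and data protection impact assessments rather than self-declared tags.

```python
# Hypothetical sketch of a prohibited-practice audit pass over an
# HR tool inventory. Capability tags loosely track the inference
# channels named in Article 5(1)(f); the schema is illustrative only.

PROHIBITED_CAPABILITIES = {
    "facial_expression_inference",
    "voice_tone_emotion_analysis",
    "keystroke_stress_inference",
    "posture_emotion_analysis",
}

# Narrow Article 5(1)(f) exception: medical and safety purposes.
EXEMPT_PURPOSES = {"medical", "safety"}

def flag_tools(inventory):
    """Return (name, capabilities) pairs for tools whose declared
    capabilities fall within the workplace emotion-inference
    prohibition and which claim no exempt purpose."""
    flagged = []
    for tool in inventory:
        hits = PROHIBITED_CAPABILITIES & set(tool.get("capabilities", []))
        exempt = EXEMPT_PURPOSES & set(tool.get("purposes", []))
        if hits and not exempt:
            flagged.append((tool["name"], sorted(hits)))
    return flagged

inventory = [
    {"name": "InterviewScore",  # hypothetical recruitment analytics tool
     "capabilities": ["facial_expression_inference"],
     "purposes": ["recruitment"]},
    {"name": "FatigueWatch",  # hypothetical safety-monitoring tool
     "capabilities": ["posture_emotion_analysis"],
     "purposes": ["safety"]},
]

print(flag_tools(inventory))
# → [('InterviewScore', ['facial_expression_inference'])]
```

Note that the safety-monitoring tool is not flagged: it relies on the same inference channel but falls under the medical-and-safety exception, which is precisely the distinction an audit has to capture. Whether a given purpose genuinely qualifies for the exception is a legal judgment, not a tagging exercise.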

The Broader Picture

The EU AI Act does not resolve the underlying problem of emotional inference in technology. It prohibits specific contexts and specific methods. Outside those contexts, emotional inference in consumer products, social media, and personal devices remains largely ungoverned by the Act. The regulation creates a floor, not a ceiling.


HumanSafe Opinion

The following reflects HumanSafe Intelligence's position on this development.

The August 2026 enforcement milestone is significant, but it is worth being precise about what kind of intervention it represents. It prohibits specific outputs in specific contexts. It does not address the architectural assumption that underlies those outputs: the premise that emotional inference is legitimate wherever it is not explicitly prohibited.

A constitutional position holds that the prohibition on emotional inference derives not from regulatory designation but from a prior rights-based principle: that the derivation of a person's emotional state from signals they did not voluntarily provide is categorically different from any other form of data processing. Regulation validates that principle in law. It does not substitute for it in architecture. The zone of prohibited inference will expand over time. It always does. The organisations that treat August 2026 as an architecture question rather than a compliance checklist are the ones whose position improves with each new regulatory cycle.

