As the EU AI Act begins to shape the future of artificial intelligence regulation across Europe, a key question is emerging: are we classifying AI systems into the right risk categories? The Act takes a risk-based approach, categorising systems as minimal, limited, high, or unacceptable risk, and aims to balance innovation with the protection of fundamental rights. But as deployment accelerates, especially in sensitive areas, some “high-risk” systems may cross ethical boundaries that warrant an outright ban.
High-Risk or Unacceptable? A Shifting Line
Take AI lie detectors used at EU borders. Systems like iBorderCtrl, which attempted to detect lies from facial microexpressions, have been criticised as pseudoscientific and as a human rights risk. Despite being labelled high-risk, these tools lack scientific credibility and could lead to wrongful decisions about asylum or entry.
Similarly, emotion recognition in children’s toys and classrooms, often marketed as “empathetic AI”, raises ethical red flags. A recent controversy in Germany centred on a toy that used AI to respond to children’s emotions, prompting questions about manipulation and data privacy.
And then there’s predictive policing. AI systems deployed in Spain and other countries to forecast crime hotspots have been shown to amplify biases and disproportionately target migrant or minority communities.
Regulating Without Stifling Innovation
Of course, not all high-risk systems should be banned. Tools in medical diagnostics or recruitment, while sensitive, offer significant societal benefits if built and audited responsibly. These should remain under the high-risk category, with requirements for transparency, accountability, and human oversight. But as evidence of harm increases in other areas, the EU must remain flexible. What’s “high-risk” today may deserve to be considered “unacceptable” tomorrow, especially if it manipulates vulnerable populations or entrenches inequality.
What the EU AI Act Means for Marketers
The EU AI Act has major implications for marketers, especially those using AI-powered tools to personalise content, automate customer journeys, analyse emotions, or target specific demographics. While the regulation primarily focuses on protecting fundamental rights, it indirectly reshapes how ethical and legal marketing can operate in Europe and beyond.
Greater Scrutiny of Manipulative Practices
If your marketing tools use emotion recognition, psychographic profiling, or hyper-personalisation that nudges behaviour without informed consent, you may fall into the high-risk or even unacceptable category. Emotion-based ads (e.g. reading facial expressions to trigger offers) could be banned or tightly regulated, especially for vulnerable groups such as children.
Transparency Requirements
Under the “limited risk” rules, marketers using chatbots, virtual influencers, or synthetic content (e.g. AI-generated videos or voices) must clearly disclose that users are interacting with an AI. Expect disclosure labels such as “This ad was created using AI” or “This is a virtual influencer” to become mandatory.
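As a rough sketch of how a team might operationalise this, the snippet below attaches a plain-language disclosure label to any asset flagged as AI-generated before it is published. The class and field names are hypothetical illustrations, not a metadata schema prescribed by the Act.

```python
from dataclasses import dataclass


@dataclass
class MarketingAsset:
    """A piece of campaign content with an AI-disclosure flag.

    Field names are illustrative only; the AI Act does not prescribe
    a specific metadata schema.
    """
    asset_id: str
    body: str
    ai_generated: bool = False
    disclosure_label: str = ""


def with_disclosure(asset: MarketingAsset) -> MarketingAsset:
    """Attach a plain-language label to AI-generated assets before publishing."""
    if asset.ai_generated and not asset.disclosure_label:
        asset.disclosure_label = "This content was created using AI."
    return asset


# A synthetic video script flagged for disclosure before it goes live
promo = MarketingAsset(asset_id="summer-01", body="...", ai_generated=True)
print(with_disclosure(promo).disclosure_label)
```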
Targeting & Profiling = High-Risk
If you’re using AI systems to make automated decisions about individuals, especially in:
- Creditworthiness
- Hiring or admissions
- Access to essential services
… those tools will fall under the high-risk category, requiring data governance, human oversight, and transparency. Advanced targeting models (e.g. lookalike audiences for sensitive categories) may require auditing or certification.
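One way to build in the required human oversight is to stop the model from acting alone on borderline cases. The sketch below illustrates that pattern; the review band, scores, and field names are assumptions chosen for illustration, not thresholds taken from the Act.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical band of model scores where a human reviewer must decide;
# the Act requires human oversight for high-risk systems but does not
# mandate any particular threshold.
REVIEW_BAND = (0.4, 0.7)


@dataclass
class Decision:
    applicant_id: str
    model_score: float            # e.g. a suitability or creditworthiness score
    auto_outcome: Optional[bool]  # None means "escalated to a human"
    rationale: str


def decide(applicant_id: str, model_score: float) -> Decision:
    """Route borderline automated decisions to a human instead of acting on them."""
    low, high = REVIEW_BAND
    if low <= model_score <= high:
        return Decision(applicant_id, model_score, None,
                        "Score in review band; escalated to a human reviewer.")
    return Decision(applicant_id, model_score, model_score > high,
                    "Clear-cut score; automated outcome logged for the audit trail.")


print(decide("applicant-102", 0.55))  # escalated for human review
print(decide("applicant-103", 0.91))  # automated outcome, logged
```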
Restrictions on Social Scoring
The EU explicitly bans social scoring systems in which people are ranked based on behaviour, reputation, or compliance. Marketers cannot assign scores that affect someone’s ability to access services, even in loyalty or behavioural reward programmes, if the scheme mimics this structure.
Stronger Data Privacy Enforcement
Although GDPR already covers personal data use, the AI Act reinforces the principle that AI models must not exploit data in ways that undermine autonomy or consent. Models that rely on behavioural prediction must now be assessed not just for compliance but for ethical use.
Moving Forward: A Dynamic Approach to Risk
- Reassess borderline cases like emotion AI, lie detection, and predictive policing.
- Use regulatory sandboxes and auditing to encourage safe innovation.
- Empower agencies to reclassify technologies as more evidence becomes available.
- Audit your AI tools now: what data do they use? Are outcomes explainable? Can users opt out? (See the sketch after this list.)
- Be transparent and proactive: Users increasingly prefer brands that are ethical and upfront about their AI use.
- Embrace “TrustTech”: Build AI systems around trust, fairness, and inclusivity, not just optimisation.
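As a starting point for the audit item above, a lightweight self-check can record, for each tool, what data it uses, whether its outcomes are explainable, and whether users can opt out. The fields below are assumptions chosen to mirror those three questions, not a formal compliance template.

```python
from dataclasses import dataclass


@dataclass
class ToolAudit:
    """An illustrative self-audit record for one marketing AI tool."""
    tool_name: str
    data_sources: list          # what personal or behavioural data it uses
    outcomes_explainable: bool  # can a given recommendation be explained?
    user_opt_out: bool          # can users opt out of AI-driven processing?

    def gaps(self):
        """Return the questions this tool's audit has not yet answered."""
        issues = []
        if not self.data_sources:
            issues.append("data sources undocumented")
        if not self.outcomes_explainable:
            issues.append("outcomes not explainable")
        if not self.user_opt_out:
            issues.append("no user opt-out")
        return issues


audit = ToolAudit("recommendation-engine", ["browsing history"], False, True)
print(audit.tool_name, "->", audit.gaps() or "no obvious gaps")
```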


