EU issues guidelines on banned AI practices
The European Union has released Guidelines clarifying the prohibited uses of AI under the EU AI Act, aiming to ensure the Act's effective, consistent and uniform application and to protect European citizens' rights and safety.
The EU AI Act, which outlines regulations for the development, market placement, implementation and use of artificial intelligence in the European Union, came into effect in August 2024.
The Guidelines follow the Act's risk-based approach, which sorts AI systems into four categories: 1) Unacceptable risk: practices that threaten fundamental rights and human values, which are banned outright. 2) High risk: systems posing risks to health, safety and fundamental rights, which face strict requirements. 3) Transparency risk: systems subject to disclosure obligations, such as informing users they are interacting with AI. 4) Minimal to no risk.
The Commission aims to clarify Article 5 of the EU AI Act, which “prohibits the placing on the EU market, putting into service or use of certain AI systems for manipulative, exploitative, social control or surveillance practices, which by their inherent nature violate fundamental rights and Union values,” the guidance says.
The Guidelines highlight that AI systems that infer emotions are generally prohibited in workplaces and educational institutions. The exception is systems used for medical or safety reasons, and such systems must be CE-marked to qualify for the exception under the EU AI Act.
They acknowledge the role of emotion recognition AI in healthcare, such as detecting depression, aiding suicide prevention and diagnosing autism.
By contrast, AI-driven emotion recognition for workplace well-being, such as monitoring employees' stress levels, is not considered a medical use and remains prohibited.
“Another example are AI systems evaluating persons and determining if they are entitled to receive essential public assistance benefits and services, such as healthcare services and social security benefits that are classified as high-risk,” the Guidelines note. Such systems fall into the high-risk category rather than under the outright prohibition.
AI that manipulates or deceives vulnerable patients—such as those with disabilities, advanced age or low socioeconomic status—is banned. Similarly, AI predicting medical conditions purely through profiling, without objective medical data, is prohibited.
The guidance highlights that real-time remote biometric identification in publicly accessible spaces (such as live facial recognition) is banned for law enforcement purposes, with narrow exceptions: targeted searches for specific victims such as missing persons, the prevention of imminent threats such as terrorist attacks, and the search for suspects of serious crimes.
The Commission also clarifies that AI systems using subliminal techniques are not always harmful. For instance, a therapeutic chatbot that uses subtle persuasion to help users quit smoking is allowed, since the long-term health benefit outweighs any temporary discomfort.