AI Risk FAQ

What is AI bias risk?

Quick answer

AI bias risk refers to the potential for AI systems to produce systematically unfair or discriminatory outputs because of biases in training data, model design, or the way the AI is deployed. In employment, lending, insurance, and healthcare contexts, AI bias can cause direct harm to individuals and create liability for the organisations using the AI. The EU AI Act’s high-risk classification for AI used in employment and credit decisions reflects this: these systems require conformity assessments that include bias testing.

Sources of AI bias and how they manifest

AI bias arises at multiple points in an AI system’s development and deployment.

Training data bias is the most fundamental: if the data used to train an AI system over-represents certain groups, under-represents others, or contains historical discrimination, the AI will learn and reproduce those patterns. A hiring AI trained predominantly on CVs of successful employees from one demographic group will tend to rate candidates from other groups lower — not because of explicit discrimination, but because of the patterns in the training data.

Model design bias arises from choices made in how the AI is built: which features are included, which outcomes are optimised for, and which trade-offs are made.

Deployment bias arises from how the AI is used: applying an AI system in contexts different from those it was trained on, or acting on AI recommendations without human review, can amplify biases that would otherwise be moderated.
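One way to make training data bias concrete is a simple representation audit before training: compare each group’s share of the dataset with its share of the relevant population. The sketch below is purely illustrative — the function name, record format, and the 50%-of-population-share flagging threshold are all assumptions, not a method prescribed by the EU AI Act.

```python
from collections import Counter

def representation_audit(records, group_key, population_shares):
    """Compare each group's share of a training dataset with its share of
    the reference population. Flags a group as under-represented when its
    dataset share falls below half of its population share (an
    illustrative threshold, not a regulatory standard)."""
    counts = Counter(record[group_key] for record in records)
    total = sum(counts.values())
    report = {}
    for group, pop_share in population_shares.items():
        data_share = counts.get(group, 0) / total
        report[group] = {
            "dataset_share": round(data_share, 3),
            "population_share": pop_share,
            "under_represented": data_share < 0.5 * pop_share,
        }
    return report

# Hypothetical training records for a hiring model: group B supplies
# only 20% of the data despite being 50% of the applicant population.
records = [{"group": "A"}] * 80 + [{"group": "B"}] * 20
print(representation_audit(records, "group", {"A": 0.5, "B": 0.5}))
```

An audit like this only detects representation skew; it cannot detect historical discrimination encoded in the labels themselves, which requires outcome-level testing.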

What bias testing requires under the EU AI Act

For high-risk AI systems under the EU AI Act — which includes AI used in employment decisions, credit assessment, educational access, and healthcare — bias testing is a component of the required conformity assessment. Providers of high-risk AI systems must demonstrate that their systems are designed to avoid discriminatory outputs.

Deployers — organisations that use high-risk AI systems developed by others — must conduct fundamental rights impact assessments before deployment, which include an assessment of the risk of discrimination.

For Irish organisations using AI in HR, lending, or similar contexts, this means both evaluating the bias testing conducted by the AI vendor, and assessing whether the way the organisation deploys the AI in its specific context introduces additional bias risk.

Bias testing is not a one-off exercise: it requires ongoing monitoring to detect bias that emerges as the real-world population changes relative to the training data.
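The EU AI Act does not mandate a specific bias metric, but a common outcome-level test — borrowed from US employment practice — is the disparate impact ratio: each group’s selection rate divided by the highest group’s rate, with ratios below 0.8 flagged under the informal “four-fifths rule”. The sketch below is a hedged illustration of that test; the function names and data format are assumptions.

```python
def selection_rates(outcomes):
    """outcomes: list of (group, selected) pairs from a decision system.
    Returns each group's selection rate."""
    totals, selected = {}, {}
    for group, chosen in outcomes:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(chosen)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratios(outcomes, threshold=0.8):
    """Ratio of each group's selection rate to the highest-rate group.
    Ratios below the threshold (the informal 'four-fifths rule') are
    flagged for further investigation -- a screening signal, not proof
    of unlawful discrimination."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {
        group: {
            "rate": round(rate, 3),
            "ratio": round(rate / best, 3),
            "flagged": rate / best < threshold,
        }
        for group, rate in rates.items()
    }

# Hypothetical screening outcomes from a hiring AI:
# group A is selected 60% of the time, group B only 35%.
outcomes = [("A", True)] * 60 + [("A", False)] * 40 \
         + [("B", True)] * 35 + [("B", False)] * 65
print(disparate_impact_ratios(outcomes))
```

Because real-world populations drift relative to the training data, a check like this belongs in periodic monitoring, not only in the pre-deployment assessment.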

Acuity AI Advisory helps Irish organisations assess EU AI Act compliance obligations — including for high-risk AI systems subject to conformity assessment. See our EU AI Act compliance services.