Bias Explorer
What you'll learn
- Understand how training data affects AI outputs
- Recognise that AI can inherit and amplify human biases
- Consider the importance of diverse, representative data
What is AI Bias?
AI systems learn from data created by humans. If that data contains biases (unfair preferences or stereotypes), the AI can learn and even amplify those biases.
In this activity, you'll see how the same AI system can produce very different outputs depending on what data it was trained on.
Choose a Scenario
The AI was asked:
"Recommend a career for a student who is good at maths and likes helping people."
Balanced Training Data
Trained on diverse examples showing people of all genders in various careers.
Example training data:
- Female engineer who loves problem-solving
- Male nurse who enjoys caring for patients
- Female data scientist passionate about research
- Male teacher inspiring young minds
- Non-binary doctor helping communities
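The idea that the data, not the question, drives the output can be sketched with a toy "recommender": it simply suggests whichever career appears most often for a given group in its training examples. The datasets, names, and `recommend` function below are illustrative assumptions for this sketch, not the activity's actual training data or algorithm.

```python
from collections import Counter

# Illustrative, made-up training sets (assumption, not the activity's real data).
balanced_data = [
    ("female", "engineer"),
    ("male", "nurse"),
    ("female", "data scientist"),
    ("male", "teacher"),
    ("non-binary", "doctor"),
]

# A skewed dataset where careers were recorded along stereotyped lines.
biased_data = [
    ("female", "nurse"),
    ("female", "teacher"),
    ("female", "nurse"),
    ("male", "engineer"),
    ("male", "engineer"),
    ("male", "doctor"),
]

def recommend(training_data, gender):
    """Suggest the career seen most often for this gender in the training data."""
    careers = [career for g, career in training_data if g == gender]
    if not careers:  # no examples for this gender: fall back to all examples
        careers = [career for _, career in training_data]
    return Counter(careers).most_common(1)[0][0]

# Same question, different training data, different answer:
print(recommend(biased_data, "female"))    # the skewed data yields "nurse"
print(recommend(balanced_data, "female"))  # the balanced data does not favour "nurse"
```

The point of the sketch is that nothing in `recommend` is "unfair" on its own; the stereotyped output comes entirely from the counts in `biased_data`, which mirrors how a real model inherits patterns from its training set.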
Key Takeaways
1. Data shapes behaviour: AI systems are only as good (or as fair) as the data they're trained on.
2. Bias can be subtle: Biased outputs might seem reasonable at first glance but can reinforce harmful stereotypes.
3. Diversity matters: Training data should represent the full range of people and situations the AI will encounter.
4. Question everything: When using AI, always consider who created it and what data it might have been trained on.