AI’s Biased Neural Networks For The 5-Year-Old Learner In Me
How many fruits can you name?
Imagine you're learning to recognise different types of fruit: apples, bananas, and oranges. At first, you might not know the differences, but as you see more examples and someone tells you which fruit is which, you start to notice things. Bananas are long and yellow, apples are round and can be red or green, and oranges are round and orange.
A neural network in AI works similarly, but with a lot more information. It learns from a multitude of examples, or data. For instance, if we want a neural network to learn what a cat looks like, we show it lots of pictures of cats. At first, it might not know what a cat is, but as it sees more and more pictures, it starts to understand what makes a cat a cat: things like pointy ears, a tail, and whiskers.
The 'network' part comes from how it's all connected, much like how our brain has different parts that work together. In a neural network, there are layers of what we call 'neurons' (like the ones in our brain), and they all work together to help the neural network understand the data it's learning from. So, in essence, a neural network is like a mini-brain that learns from lots of examples to understand and recognise things, and that learning is a big part of how modern AI works.
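To make the 'neuron' idea a little more concrete, here is a toy single neuron in Python. Everything in it is invented for this sketch: the features (is it long? yellow? round?), the weights, and the bias. A real network has many layers of neurons like this one and learns its weights from data instead of having them hand-picked:

```python
def neuron(features, weights, bias):
    """One toy neuron: weighted sum of inputs, squashed to a 0-1 'confidence'."""
    total = sum(f * w for f, w in zip(features, weights)) + bias
    return 1 / (1 + 2.718281828 ** -total)  # sigmoid squashing

# Made-up features: [is_long, is_yellow, is_round]
banana = [1, 1, 0]
apple = [0, 0, 1]

# Hand-picked weights: 'long' and 'yellow' point toward banana, 'round' away.
weights = [2.0, 2.0, -2.0]
bias = -1.0

print(neuron(banana, weights, bias))  # close to 1: "probably a banana"
print(neuron(apple, weights, bias))   # close to 0: "probably not a banana"
```

The interesting part of a real network is that nobody hand-picks those weights the way we did here: training nudges them, example after example, until the right fruits light up the right neurons.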
Now, let's go back to our fruit example. Imagine if you were only shown apples and bananas, but never any oranges. Then, one day, someone asks you to find an orange in a basket. You might have trouble because you've never seen an orange before!
The same thing can happen with AI. If the AI only gets to learn from certain types of data, it might not understand or recognise other types of data very well. This is how bias can happen in AI.
For example, if an AI is trained to recognise faces, but it only gets to see faces of adults, it might have trouble recognising a child's face because it's different from what it has learned. Or if it only sees faces of people from one part of the world, it might not be as good at recognising faces of people from other parts of the world.
Just like how you might not recognise an orange if you've never seen one before, an AI might also struggle or make mistakes if it hasn't learned from a wide variety of examples. It might end up discriminating against kiwis or, worse, pineapples! That's why it's really important for the people who teach AI to make sure they use a diverse range of examples!
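The fruit story above can be sketched in a few lines of Python. This toy 'nearest-neighbour' fruit finder is only ever shown apples and bananas (all the feature numbers are made up for illustration), so when it meets an orange it has no choice but to pick one of the two labels it knows:

```python
# Training data the toy model gets to see: no oranges anywhere.
# Made-up feature vectors: (length_cm, colour_code).
training = {
    "apple": [(3, 0), (3, 1)],    # round-ish, red or green
    "banana": [(8, 2), (7, 2)],   # long, yellow
}

def classify(fruit):
    """Label a fruit by whichever training example it sits closest to."""
    best_label, best_dist = None, float("inf")
    for label, examples in training.items():
        for example in examples:
            dist = sum((a - b) ** 2 for a, b in zip(fruit, example))
            if dist < best_dist:
                best_label, best_dist = label, dist
    return best_label

orange = (4, 3)  # round, orange-coloured: nothing like this in training
print(classify(orange))  # confidently wrong: it answers "apple"
```

The model isn't being malicious; it is doing exactly what it was taught, and it was taught from a basket with a gap in it. That is the whole story of data bias in miniature.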
Source:
The Alignment Problem, Brian Christian, 2020
https://news.mit.edu/2017/explained-neural-networks-deep-learning-0414
https://aws.amazon.com/what-is/neural-network/