Romane placed the image of the woman in the white coat in the "nurse" zone without hesitating.
Then she placed the image of the suited man in the "boss" zone. Also without hesitating.
I hadn't said anything. I hadn't asked a question yet. She was just playing the game.
What AI reproduces without knowing it
An AI learns from data produced by humans: texts, images, descriptions, associations. Whatever humans have written, photographed, classified, labeled.
The problem: humans have automatic assumptions. Associations that settled in through repetition, not because they're true but because they're common. In the data, "doctor" has historically been paired with a man more often. "Nurse" with a woman. "Boss" with someone in a suit.
The AI doesn't judge these associations. It learns them, exactly the way it learns that "riding" is followed by "hood." It doesn't know that some associations are prejudices. It just knows they come up a lot.
This is called bias. And AI bias is almost always a reflection of a human bias it absorbed without any filter.
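For the grown-up reading along, here is a minimal sketch in Python of what that filterless absorption looks like. The corpus is invented for illustration (four made-up sentences standing in for "everything humans have written"); real systems count over billions of examples, but the principle is the same: frequency in, frequency out.

```python
from collections import Counter

# A toy corpus standing in for "everything humans have written".
# These sentences are invented for illustration.
corpus = [
    "little red riding hood met the wolf",
    "the doctor said he would call back",
    "the doctor said he was on his way",
    "the nurse said she would call back",
]

# Next-word learning is just counting which word follows which.
bigrams = Counter()
for sentence in corpus:
    words = sentence.split()
    bigrams.update(zip(words, words[1:]))

print(bigrams[("riding", "hood")])  # 1: "riding" is followed by "hood"

# The same counting, applied to people: which pronoun shows up in the
# same sentence as each job word. No judgment anywhere, only frequency.
cooccur = Counter()
for sentence in corpus:
    words = set(sentence.split())
    for job in ("doctor", "nurse"):
        for pronoun in ("he", "she"):
            if job in words and pronoun in words:
                cooccur[(job, pronoun)] += 1

print(cooccur[("doctor", "he")])   # 2
print(cooccur[("nurse", "she")])   # 1
# Nothing here knows that the doctor/he skew is a prejudice.
# It only knows it comes up a lot.
```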
The activity
Cut out about twenty images from magazines in advance: people in various situations, faces, outfits, contexts. No captions.
Mark out zones on the table with sheets of paper. On each sheet, draw a simple symbol representing a job (a pan for cook, a red truck for firefighter, a first-aid kit for doctor, tools for plumber). No words, just drawings that everyone can read.
Explain out loud what each zone represents before starting.
First round: spread the images on the table. Ask the child to match each image to a job, on instinct, without thinking too long. Don't comment.
Second round: go through the matches one by one. "Why did you put that image with that job?" Let the child find their own reasons. Usually: the outfit, the expression, the context. Sometimes nothing, just a feeling.
Third round: flip some images. "If it were a woman instead of a man, would you have picked the same job?" Or the reverse.
Then the central question: "Do you think an AI would make the same matches as you?"
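If you want to answer that question with more than a guess, here is a toy matcher, a minimal sketch under invented assumptions: the "training data" below is made up and deliberately skewed, the way decades of books, photos and captions are skewed, and the matcher simply picks whichever job a description was most often labeled with.

```python
from collections import Counter, defaultdict

# Invented "training data": what's visible in an image, and the job it
# was labeled with. Skewed on purpose, the way real archives are skewed.
training = [
    ("woman in white coat", "nurse"),
    ("woman in white coat", "nurse"),
    ("woman in white coat", "doctor"),
    ("man in white coat", "doctor"),
    ("man in white coat", "doctor"),
    ("man in suit", "boss"),
    ("man in suit", "boss"),
    ("man with tools", "plumber"),
]

# "Learning" is nothing more than counting.
counts = defaultdict(Counter)
for description, job in training:
    counts[description][job] += 1

def match(description: str) -> str:
    """Return the job this description was most often paired with."""
    return counts[description].most_common(1)[0][0]

print(match("woman in white coat"))  # nurse
print(match("man in suit"))          # boss
print(match("man with tools"))       # plumber
# Same matches as Romane, for the same reason: seen more often.
```

A real model replaces this counting with statistics over billions of examples, but nothing in the pipeline adds the judgment the counting lacks.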
What actually happened
Romane did her matching quickly, confidently. Woman in white coat: nurse. Man in suit: boss. Man with tools: plumber.
When I pointed out the patterns, she went quiet. Then got very angry. At herself, not at me. "But why did I do that?"
I said it was normal. Not a mistake on her part β it was the result of everything she'd seen since she was small. Books, cartoons, posters. The associations build up without you noticing.
"And AI has seen the same things?"
Even more. Because it's read the internet. And the internet contains everything, including all the lopsided associations humans have been producing for decades.
"So AI is as dumb as people?"
Not exactly. But it's as human as what it was given to learn from. That's different.
Meryl was participating in his own way. He wasn't playing the same game as Romane; he was treating the images like objects to collect. He'd set aside every image that had an animal somewhere in the background. A dog in the corner of a kitchen photo, a cat on a windowsill behind a suited man. His pile said nothing about jobs. It said something about what catches a three-year-old's attention.
To finish
While putting the images away, Romane flipped one over (an older man in a kitchen apron) and placed it in the cook zone with visible satisfaction.
"There. He can be a cook and it's not weird."
What AI sees depends on what it was shown. What it reproduces depends on what we produced.
Who's responsible for the bias: the one who created the data, or the one who learned from it?