The word that matters: why AI doesn't listen to everything equally
We played a game of underlining the important words in sentences. Romane and I didn't underline the same ones. And that's exactly the problem AI tries to solve.
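For the curious, here is roughly what that looks like in code: a minimal sketch of attention-style weighting, where the importance scores are invented by hand (a real model learns them). The one rule is that the weights must sum to 1, so underlining one word more means underlining the others less.

```python
import math

# Toy attention: hand-made "importance" scores for each word in a
# sentence. A real model learns these scores; here they are invented.
scores = {"the": 0.1, "owl": 2.0, "is": 0.2, "golden": 1.5}

# Softmax turns raw scores into weights that sum to 1:
# paying more attention to one word means paying less to the others.
total = sum(math.exp(s) for s in scores.values())
weights = {word: math.exp(s) / total for word, s in scores.items()}

for word, weight in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{word:8s} {weight:.2f}")
```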
All our AI adventures: concrete activities tested as a family, with what worked, what didn't, and what we learned along the way.
Romane kept a secret notebook for a week. Then we pretended to be an AI that looks up its notes instead of relying on memory alone, and discovered that knowing where to look is also a form of intelligence.
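If you want to see the "look it up first" idea in code, here is a minimal sketch under one big simplification: notes are scored by plain word overlap, where real systems compare meaning with learned embeddings. The notebook entries are invented.

```python
# Toy retrieval: pick the note that shares the most words with the
# question. Real systems compare meaning with learned embeddings;
# word overlap is the simplest stand-in. The notes are invented.
notebook = [
    "the school bus leaves at eight fifteen",
    "grandma's birthday is in march",
    "the owl statue in dijon brings luck if you touch it",
]

def best_note(question: str) -> str:
    """Return the note with the largest word overlap with the question."""
    q_words = set(question.lower().split())
    return max(notebook, key=lambda note: len(q_words & set(note.split())))

print(best_note("when does the bus leave"))  # finds the school bus note
```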
I asked Romane to draw a dog from memory. Then with ten pictures in front of her. Then I tried to explain how an AI learns to recognize something after seeing thousands of examples.
We tried to recall a trip to Dijon from memory. Then we checked against the photos. Romane remembered a golden owl with feathers and bright eyes. The real owl is worn down, almost invisible. Nobody was lying.
While sorting old photos by resemblance and 'feeling,' Romane discovered how an AI connects memories: not by date, but by what they have in common.
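Here is a toy version of that idea, with made-up photo names and hand-picked features (a real system extracts features automatically): two photos are "close" when they share many features, no matter how far apart their dates are.

```python
# Toy photo matching: each photo is described by a handful of features
# (hand-picked here; a real system extracts them automatically).
# Photos are "connected" when they share features, whatever their date.
photos = {
    "2019_beach.jpg":  {"sand", "sea", "sun", "smiles"},
    "2023_beach.jpg":  {"sand", "sea", "clouds", "smiles"},
    "2023_school.jpg": {"classroom", "smiles", "backpack"},
}

def closest_to(name: str) -> str:
    """Return the other photo sharing the most features with `name`."""
    target = photos[name]
    others = (p for p in photos if p != name)
    return max(others, key=lambda p: len(photos[p] & target))

print(closest_to("2019_beach.jpg"))  # 2023_beach.jpg, despite the 4-year gap
```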
Learning to sort objects by criteria, just like an AI does, and discovering that the choice of criterion changes everything.
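A few lines of code make the point concrete: the same objects, grouped under different criteria, land in completely different piles. The toys and their attributes below are invented.

```python
from itertools import groupby

# The same objects, sorted by different criteria, form different groups.
toys = [
    {"name": "ball",  "color": "red",  "size": "small"},
    {"name": "truck", "color": "red",  "size": "big"},
    {"name": "cube",  "color": "blue", "size": "small"},
]

for criterion in ("color", "size"):
    # groupby needs the list sorted by the same key it groups on
    ordered = sorted(toys, key=lambda t: t[criterion])
    groups = {k: [t["name"] for t in g]
              for k, g in groupby(ordered, key=lambda t: t[criterion])}
    print(criterion, "->", groups)
```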
We played a game of matching images to jobs. Then we flipped the game to spot our own automatic assumptions. AI learns from what humans produce, including their biases.
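One way to see the mechanism is a model that does nothing but count its examples: feed it skewed data and it will confidently reproduce the skew. The data below is deliberately imbalanced and entirely made up.

```python
from collections import Counter

# A model that only counts its examples reproduces their skew.
# This training data is deliberately imbalanced and entirely made up.
examples = [("doctor", "man")] * 9 + [("doctor", "woman")] * 1

counts = Counter(person for job, person in examples if job == "doctor")
prediction = counts.most_common(1)[0][0]

# Predicts "man": not because it's true, but because 90% of the
# examples said so.
print(prediction)
```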
We took turns guessing the next word in a story, one word at a time. Romane understood that an AI does the same thing: it doesn't know the right answer; it picks the most probable one.
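Here is the game in miniature, with an invented probability table (a real model learns these numbers from enormous amounts of text): greedy picking always takes the safest word, while sampling sometimes lets a less likely one through.

```python
import random

# Toy next-word table: the probabilities are invented here; a real
# model learns them from huge amounts of text.
next_word = {"the": {"cat": 0.5, "dog": 0.3, "owl": 0.2}}

options = next_word["the"]

# Greedy: always the single most probable word, the "safest" guess.
print(max(options, key=options.get))

# Sampling: pick at random, weighted by probability. Sometimes the
# less likely word wins, which is one reason answers vary.
print(random.choices(list(options), weights=list(options.values()))[0])
```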
This blog wasn't planned. It started one January evening with a question from Romane, a vague answer from me, and the nagging feeling that I could do better.