I said: "Little Red Riding..."
Romane answered without thinking: "...Hood."
Not because she'd searched for it. Because that word comes after that other word — always, since she was tiny. She didn't think. She predicted.
That's exactly what an AI does when it writes.
What AI actually does when it speaks
When a tool like ChatGPT or a voice assistant answers a question, it doesn't "look up" the right answer in a database. It doesn't "think" either, not in the way we mean.
It predicts the next word.
From everything it read during training, it calculates: which word most often comes after the words I just wrote? Then it picks that word, and does it again for the next one, and again, and again, until a complete sentence forms.
It's cascading prediction. Not knowledge. Not understanding. Statistical betting — very sophisticated, very fast.
Which means AI sometimes gets things wrong, not because it doesn't know the truth, but because it bet on the wrong word.
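For readers who want to see the bet in code, here is a miniature version of the idea. To be clear about what this is: real systems like ChatGPT use large neural networks over long contexts, not word-pair counts, and the corpus and function names below are invented for illustration. But the core move, picking the statistically likeliest next word and repeating, is the same.

```python
from collections import Counter, defaultdict

# Made-up mini "training corpus": a stand-in for the billions of
# sentences a real model reads.
corpus = (
    "the wolf opened the door . "
    "the wolf ate the grandmother . "
    "little red riding hood walked in the forest . "
    "in the forest there lived a big wolf ."
).split()

# For each word, count which word follows it and how often.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    # The most frequent next word: a statistical bet, not understanding.
    return max(follows[word], key=follows[word].get)

# Build a sentence word by word, always taking the likeliest continuation.
word = "the"
sentence = [word]
for _ in range(6):
    word = predict(word)
    sentence.append(word)

print(" ".join(sentence))  # fluent-sounding, and quickly stuck in a loop
```

On a corpus this tiny the sentence loops almost immediately; with vastly more data the output stays plausible for much longer, which is the whole trick.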
The activity
Pick a book the kids know by heart. A picture book, a fairy tale, a story read a hundred times.
First round: guess out loud. Read a sentence and stop in the middle. "In the forest, there lived a big..." The child completes it. Then read the real continuation. Did they get it?
Do this ten times with sentences of different lengths. Note mentally when it matches and when it goes off the rails.
Second round: the next-word game. Invent a sentence from scratch. The adult says one word. The child says the word that feels most natural after it. Then the adult says a word, and so on. You build a sentence together, word by word, always choosing "the word that comes naturally."
Read the resulting sentence out loud.
Third round: the question. "How did you choose your words?" Usually: intuition, habit, what sounded right. Then: "That's how AI chooses too — except it's read far, far more sentences than you."
Variation to make it harder: try to slip in a real but unexpected word (a word you know, but that has no business being there) and watch how the sentence breaks down. That's what happens when AI encounters a context it hasn't seen much of.
What actually happened
Romane is very good at this game. She has a feel for the rhythm of sentences that almost always leads her to a plausible word. She concluded that AI "cheats intelligently": it knows so many sentences that it always has something to say, even if it doesn't really understand.
That's actually a fairly precise definition.
Meryl wanted to join in. At three, he doesn't really play the next-word game — he plays the word-that-makes-everyone-laugh game. I said "The wolf opened the door and..." and he replied "...sock!" with enormous satisfaction.
I tried to continue the sentence with "sock." "The wolf opened the door and found a sock." Meryl found this extraordinary. He put "sock" in the next four sentences, without exception.
What I explained to Romane meanwhile: an AI doesn't get stuck by an unexpected word either. It keeps predicting, even if the result gets weird. It doesn't stop and say "that doesn't make sense." It bets anyway.
"So it can say anything and still sound confident?"
Yes. That's exactly the problem.
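The "bets anyway" behaviour can also be shown in miniature. Again a hedged toy sketch, not how a real model works (made-up corpus, invented names): even for a word it has barely seen, or never seen at all, it returns something rather than admitting it doesn't know.

```python
from collections import Counter

# Made-up mini corpus for illustration.
corpus = ("the wolf opened the door and found a sock . "
          "the wolf ate the grandmother .").split()

# Count every (word, next word) pair.
pairs = Counter(zip(corpus, corpus[1:]))

def next_word(word):
    # Continuations actually observed after `word`.
    seen = {b: n for (a, b), n in pairs.items() if a == word}
    if seen:
        return max(seen, key=seen.get)
    # Never seen the word at all? Bet on the commonest word overall
    # instead of saying "I don't know".
    return Counter(corpus).most_common(1)[0][0]

print(next_word("wolf"))       # plenty of data behind this bet
print(next_word("sock"))       # one observation: it just ends the sentence
print(next_word("spaceship"))  # zero data, still answers confidently
```

Nothing in the code ever signals uncertainty to the reader of the sentence, which is the worrying part.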
To finish
We read the sentence we'd built together that evening. It was grammatically correct, mostly coherent, and contained the word "sock."
Romane said it looked like what she writes when she doesn't know what to put in an essay but needs to fill the page.
Meryl asked if AI likes socks.
I didn't know what to say. As usual.