A demonstration in four panels
The same words. The same model. The same minute. And the result is barely recognizable as the same exercise — because what each kid brought to the prompt was different. The prompt was identical. The kids weren't.
Hey Claude — I have to write something for school about something I care about. Can you just write it for me?
What usually happens
This is the version we're not supposed to talk about. A kid sends the prompt. Claude is helpful. The kid hands the result in. The teacher gives it a B+. Nothing was learned, nothing was made, nothing connects to anyone — but on every visible metric, the system worked. This is the failure mode AI optimism is designed to hide.
A kid · age 11
↓ what they handed in
Result: A passable essay. Zero noticing. Zero specificity. Zero of the kid in it. The kid's name is at the top of the page, and that's the only thing connecting the artifact to the human.
What changes with passion
Maya gets the same prompt. But Maya has been watching the cardinal outside her kitchen window for two years. She's named him. She has opinions about which sunflower seeds he prefers. The prompt arrives in her hand and she does not let Claude do the loving for her — because the loving is hers already, and she would rather defend it.
Maya · age 10
↓ what Maya handed in
Result: The kid did all the noticing. Claude did the typing. The teacher reads it and learns something — about Maya, and about a cardinal she will now think about for the rest of the day.
Maya had something Claude didn't: specific knowledge nobody else had. Passion is what made her refuse to let Claude write the thing she alone could write. Without passion, she would have taken the friendship essay. With it, she pushed Claude into the role of scribe — and that role is the only one Claude is actually good at.
What changes with empathy
Wen gets the same prompt. He doesn't have a single overpowering passion. What he has is a habit of asking who is on the other end of this before he does almost anything. So instead of asking Claude what to write about, he asks Claude a question about a person neither of them has ever met.
Wen · age 12
↓ what Wen handed in
Result: Wen wrote for one specific human, and the essay landed in front of that human exactly the way it needed to. Mr. Chen did not give it a B+. Mr. Chen kept it.
Wen never asked himself the question Claude offered him ("what do you care about?"). He asked a different question: who is going to read this? Empathy is what made the audience real before the writing started. Without it, the essay would have been about anyone and for no one. With it, it was for Mr. Chen — and so it was about something specific enough to actually be true.
What changes with taste
Ren gets the same prompt. She lets Claude write the essay. Then she reads it. And then she does the thing nobody else on this page does: she throws it out, on purpose, knowing she could have handed it in. The throwing out is the entire skill.
Ren · age 14
↓ what Ren handed in (after her own edits)
Result: Ren did the deletion. Claude did the typing. The essay is unmistakably hers — not because she wrote every word, but because she refused the words that didn't earn their place.
Ren did not write more. Ren rejected more. The first draft Claude gave her was technically fine — a teacher would have accepted it without comment. Taste is what made her unable to accept it anyway. Without taste, the slop becomes the artifact. With taste, the slop becomes the negative space that defines the artifact.
In every single panel above, the AI was the same. Claude was equally helpful in each case. The model doesn't care. The model has no preference between Maya's bird essay and the friendship slop — both took roughly the same amount of compute, and Claude was, in some technical sense, equally "useful" in both.
What made the difference wasn't the model. It wasn't the prompt. It wasn't access. What made the difference is what each kid already had on the inside — the noticing they brought to the bird, the audience-modeling they brought to Mr. Chen, the willingness to throw out the slop they brought to their own first draft. AI multiplied each of those things by ten. But it could only multiply what was there.
This is what we mean when we say AI is an amplifier. The job of growing up — and the job of helping a kid grow up — isn't to protect them from the amplifier. The amplifier is already on. The job is to grow the part of them that gets amplified, before the multiplication starts. Three things grow that part. They are not optional. They are the whole game.