If you can build an agent for someone vulnerable, you can build one for anyone.
Welcome to the Agent Lab finale. Seven modules have prepared you for one design exercise: design a real agent for a real person whose life your agent could actually change. Not a convenient user. Not a hypothetical. Someone whose stakes are real, because they can't easily defend themselves against a bad agent.
Here's the teaching bet for this module: the rigor you need to design for a vulnerable person is the rigor every agent deserves — we just usually skip it. When the stakes are low, you can get away with sloppy defaults, missing pauses, and vague metrics. When the stakes are high, you can't. So we start at the hard end, and then every other agent you build for the rest of your life will feel easier.
Every shortcut you take in this module will be invisible on a regular user and catastrophic on a vulnerable one. That's not because vulnerable users are fragile — it's because they don't have the same cushion to absorb your mistakes. The regular user who gets a bad recommendation shrugs. The vulnerable user who gets the same recommendation might lose something irreplaceable. Same bug, different outcome. This is the module where sloppy design becomes visible.
By the end of these seven steps, you'll have written a full agent spec for a real vulnerable person — using every technique from every Agent Lab module at once. The spec is the graduation project. When you're done, you will have done something most adults designing AI for money have never done: thought carefully, end to end, about someone who needs you to be careful.
Vulnerability comes in three shapes.
"Vulnerable" is not one thing. People are vulnerable in different ways, for different reasons, over different time spans. Knowing which kind you're designing for changes everything — the pause points, the permissions, the language, the defaults.
Age
Very young or very old. Their ability to self-protect is different from a typical adult's — either still developing, or changing in ways they didn't choose.
- A 6-year-old learning to read
- An 82-year-old with mild forgetfulness
- A 12-year-old in the middle of a rough patch
- A 90-year-old grieving a spouse
Condition
Physical, cognitive, or mental health. Something about how their mind or body works makes certain kinds of agent behaviors dangerous or exhausting.
- Someone who can't see a small screen clearly
- Someone with anxiety who panics on surprise notifications
- Someone with ADHD who loses track mid-task
- Someone with dyslexia reading dense text
Circumstance
They'd be fine in ordinary circumstances, but right now the odds are stacked against them. The hardest kind to notice, because the user looks "normal."
- Someone reading in a second language
- A parent running on two hours of sleep
- Someone in financial stress making money decisions
- Someone who's isolated and lonely
Like the quiet harms in Module 03, these three vulnerabilities frequently overlap. Your grandma might be 76 (age) AND have mild cognitive change (condition) AND live alone two towns over (circumstance). Overlapping vulnerabilities compound. A good agent for her accounts for all three at once, not by being paternalistic, but by being careful in a specific, named way for each one.
Meet Wen. She's 76. This is her dossier.
To practice building for a vulnerable person, we need a real one. Not a made-up persona with a clever name — someone with real details you can hold in your head. Meet Wen. She's based on someone's actual grandmother, with the names changed. Every detail below comes from a real interview like the one Maya did in Agent Lab Makers Module 02.
Wen · age 76 · lives alone · two towns from her daughter
Interviewed by her granddaughter Lily (13) · six Saturday afternoons last spring
Who she is
A retired librarian. Reads three books a week. Drinks tea at 3pm every day without fail. Misses her husband, who passed two years ago, but doesn't mention him often.
Age & condition
76 years old. Mild forgetfulness (started about a year ago: names of new people, why she walked into a room). Hearing starting to drop, especially on the phone.
Mild osteoarthritis in her hands, so typing is slow and sometimes painful. She can use a phone, but tapping tiny buttons frustrates her.
Circumstance
Lives alone since her husband died. Her daughter is two towns away. She sees family only every few weeks. Lily noticed in the interview that Wen sometimes talks to the cat out loud just to hear a voice in the house.
What she struggles with
Forgets new neighbors' names (from Lily's interview, the struggle Wen finds most embarrassing). Says "yes" on phone calls she doesn't fully understand just to end them. Takes her blood pressure medicine at the wrong times. Can't read small print on pill bottles without her glasses.
What she loves
Reading. Her garden. Watching the sparrows at the feeder. Phone calls from Lily. The smell of the library she used to work at. Rereading the same three Agatha Christie books in rotation.
What she does not want
To feel like a burden. To have people worry about her. To be treated "like an old lady." To move in with her daughter. To become dependent on technology she doesn't understand.
Read the dossier twice. Notice how the last section, what she does not want, is just as important as the problems. This is where most agent designers fail: they solve the problems and accidentally deliver everything the person didn't want. Treating Wen "like an old lady" would make the agent technically useful and emotionally catastrophic. Our job is the opposite.
Six extra rules that kick in when stakes are real.
The seven previous Agent Lab modules gave you a full toolkit. This module adds six extra rules on top — rules that only apply when you're designing for someone vulnerable. They feel strict. They are. But each one exists because, without it, real agents for real vulnerable people have done real damage.
Preserve dignity as a first-class feature.
Dignity means the user never feels like a child, a burden, or a patient. Every message, every prompt, every default has to pass one test: would I talk to my grown-up friend this way? If not, rewrite it. Condescension is the most common failure mode in agents for vulnerable users, and it's almost always unintentional.
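Here's the test made concrete. The `message_review` structure below is invented for illustration, and the scenario borrows Wen's medicine routine from the dossier above; only the test itself comes from this rule.

```yaml
# Illustrative dignity check; the message_review structure is an assumption.
message_review:
  - fails: "Don't forget your pills again, sweetie! Tap the big green button!"
    passes: "It's 9am, blood pressure medicine time. Want me to mark it done?"
    why: the first talks to a patient; the second talks to an adult with a routine
```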
Silent mistakes cost more — narrate extra.
From Module 03 (Show Your Work): narration is trust. For vulnerable users, narration is safety. If the user can't easily notice when the agent went wrong, the agent has to loudly say what it's doing at every step that could possibly matter. Better to over-narrate than to hide a mistake.
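In a manifest, that might look like the sketch below. The field names (`narration`, `on_action`, `on_error`) are assumptions, not a fixed schema; the rule is just that every step that could matter gets said out loud.

```yaml
# Illustrative narration policy; field names are assumptions, not a fixed schema.
narration:
  default: loud                    # say what is happening at every step that matters
  on_action:
    - action: reorder_medicine
      before: "I'm about to reorder your blood pressure medicine, same pharmacy as last time."
      after: "Done. It arrives Thursday. Nothing else changed."
  on_error:
    hide_nothing: true             # better to over-narrate than bury a mistake
    say: "I got that wrong. Here is exactly what happened and what I did about it."
```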
Every pause needs a real human backup.
A pause point that goes unanswered for a vulnerable user isn't just inconvenient — it can be harmful. Every Level 3 pause in the manifest needs a named human the agent can alert: "I'm stopping and flagging this to your daughter because it looks important." Never leave a vulnerable user alone at a pause.
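Written into the manifest, that could look something like this. The schema is an assumption, but the rule, a Level 3 pause plus a named human, is the constraint:

```yaml
# Illustrative Level 3 pause with a named human backup; schema is an assumption.
pause_points:
  - trigger: caller asks Wen for money or personal details
    level: 3                      # full stop; the agent never proceeds alone
    tell_user: "I'm stopping and flagging this to your daughter because it looks important."
    backup_human:
      who: her daughter
      how: text message, then a phone call if no reply in ten minutes
```

The message to Wen and the message to her daughter are both part of the pause; silence is the one thing the design never allows.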
Permission defaults start one rung lower.
From Module 01 (Permissions): the trust gradient. For vulnerable users, every new action's default ceiling is one level lower than you'd normally pick. A draft becomes a suggestion. An act+confirm becomes a draft. You can raise the ceiling later once you've watched the agent in the wild for a month. Start low. Stay low until evidence says otherwise.
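As a sketch, assuming the gradient runs suggestion, draft, act+confirm, act (the labels come from this module's examples; the schema does not):

```yaml
# Illustrative permission manifest for Wen, every default one rung lower.
permissions:
  read_today_reminders: act        # a harmless read; the one place "act" survives
  order_groceries: draft           # would normally be act+confirm
  reply_to_messages: suggestion    # would normally be a draft
  anything_financial: never        # no rung low enough; just no
review:
  revisit_after: one month of watching the agent in the wild
```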
Actively design against dependency.
From Module 03 of Builders (Quiet Harms): the learned-helplessness risk. For vulnerable users, dependency is the single biggest failure mode of "helpful" agents. The agent's job is to make the user more capable over time, not less. Measure this directly — how often did the user do the thing without the agent this week? — and treat the number as the most important metric you have.
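One way to make that number first-class in the spec; the metric names below are illustrative:

```yaml
# Illustrative anti-dependency metric, treated as the most important number.
metrics:
  primary:
    name: did_it_without_the_agent
    question: "How often did Wen do the thing herself this week?"
    healthy_direction: up          # the agent wins when this number rises
  alarm:
    if: the number falls three weeks in a row
    then: scale the agent back and talk with Wen about why
```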
Consent is a conversation, not a checkbox.
A vulnerable user can't give "informed consent" by clicking a wall of text they can't read. Consent has to happen in a real conversation, in simple words, with a trusted person present, and it has to be easy to take back. If your agent uses consent the way most tech companies do (hidden, permanent, vague), it is operating without consent, regardless of what was clicked.
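A consent record that honors this rule might read less like a license and more like notes from a conversation. Every field below is an illustration, not a required format:

```yaml
# Illustrative consent record: a conversation, not a checkbox.
consent:
  how: spoken, in simple words, with her daughter present
  covers: each action the agent may take, named one by one
  revocable: true
  revoke_by: Wen saying "stop doing that", out loud, any time
  re_checked: every visit, as part of a normal conversation
```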
Notice that none of these six rules are new ideas. Every one is already implicit in an earlier Agent Lab module — dignity echoes empathy (M02), narration echoes transparency (M03), pauses echo ask-for-help (M04), permissions echo the gradient (B01), dependency echoes quiet harms (B03). What changes is that when the user is vulnerable, these aren't optional. They become constraints you design within, not nice-to-haves you add at the end.
Write the full agent spec for Wen.
This is the graduation project. Fill in the spec below — every field pulls from an earlier module, so the form is itself a recap. As you type, a complete agent manifest builds itself at the bottom. Save it, share it, show it to someone. It's the proof you can think about agents the way a good builder should.
↳ Makers · Module 01 — what kind of agent is this?
↳ Makers · Module 02 — designed from observation, not assumption.
↳ Makers · Module 03 — transparency as trust + dignity as a feature.
↳ Makers · Module 04 — the pause, with a real human backup.
↳ Builders · Module 01 — permission manifest, one rung lower.
↳ Builders · Module 03 — the 5 questions, answered honestly.
```yaml
# Fill in the form above. A full, real spec builds itself.
agent:
  name: ...
  purpose: ...
  ...
```
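For reference, here's what a filled-in spec for Wen might come out as. It's one possible answer, not the answer: the agent name is made up, the field names extend the skeleton above, and every value is an assumption drawn from her dossier.

```yaml
agent:
  name: teatime                                 # hypothetical name
  purpose: help Wen with phone calls she doesn't fully understand
  kind: a helper that stands next to her        # Makers 01
  designed_from: Lily's six Saturday interviews # Makers 02
  narration: loud, plain words, no jargon       # Makers 03
  dignity_test: would I say this to a grown-up friend?
  pause_points:                                 # Makers 04
    - trigger: caller asks for money or personal details
      level: 3
      backup_human: her daughter
  permissions:                                  # Builders 01, one rung lower
    summarize_call_afterward: draft
    speak_on_wens_behalf: never
  quiet_harm_checks:                            # Builders 03
    dependency: does Wen still answer calls herself?
    isolation: does the agent protect her calls with Lily, or replace them?
```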
Print this spec out. Show it to the person you love most. Ask them what's missing. That conversation is worth more than any code you'll write afterward — because good agents start as conversations, not commits.
Standard design vs vulnerable-aware design.
Two rounds. Each round shows a "reasonable" agent design and a vulnerable-aware version. The difference is the whole Builders tier in miniature.
Round 1. An agent to help Wen remember to take her blood pressure medicine. Which design is better?
Round 2. An agent to help Wen with phone calls. Which design is better?
The better design in both rounds does less, not more. It helps at one specific moment, then gets out of the way. It assumes Wen is capable and supports her where she needs support. It doesn't replace her — it stands next to her. That's the whole Agent Lab in one sentence: good agents stand next to people, not in front of them.
You just finished Agent Lab.
All eight modules. Makers and Builders. From "what is an agent" to designing for someone whose life your agent could actually change. You've thought about things most AI product designers never think about — deliberately, with real people in mind, all the way through.
Every Agent Lab module you completed
🔨 Makers · ages 11–12
⚙️ Builders · ages 13–14
Look at what you learned across eight modules. Not a list of features. Not a framework. Not a library. You learned how to think about building things that act on behalf of real people. How to define an agent. How to design from observation. How to make it visible. How to make it pause. How to write down its permissions. How to handle disagreement. How to see the quiet harms. And how to build for someone whose life actually hangs on you getting it right.
The big secret of Agent Lab is that all of this applies to regular users too. You just started with the hard case. Every habit you built here (dignity, narration, pauses, scope, quiet harm audits) is what every user deserves. They just usually don't get it, because the designer didn't practice on someone vulnerable first. You did. So every agent you build from now on will be better than the industry standard, not because you have more tools, but because you have a different set of reflexes.
Next is Harness Studio (Academy 04, Builders only). It's the smallest academy and the most philosophical. If Agent Lab asked "how do I build an agent that treats a human well?", Harness Studio asks "how do I build a system that keeps other AI honest?" It's the taste academy in its purest form — and it's shorter than the others because the ideas are denser.
★ Before you call it done
Three questions. Same three. Every time.
These are the same three questions for every module in Kindling. They are how you check whether AI did the part it should and you did the part only you could. Tap each one to mark it true.