"Agent" is the most over-used word in tech right now.
You've probably heard it a hundred times already. AI agent. Shopping agent. Coding agent. Travel agent. Agent agent. It's thrown around so often that it's starting to mean nothing — which makes it the perfect place to start Agent Lab, because if you understand what an agent actually is, you already know more than most adults using the word.
Here's the definition, the only one that matters:
An agent is something authorized to act on someone's behalf.
That's the whole thing. Three important words: authorized (someone gave it permission), act (it does stuff in the world, not just show info), and behalf (the stuff is for somebody else, not for itself). Miss any of those three, and whatever you're looking at isn't an agent.
A travel agent (the human kind) is a clear example: you tell them you want to go to Kyoto, and they go off and book flights and hotels for you — without asking your permission for every tiny step. You gave them the goal. They made the decisions. That's the shape an AI agent tries to copy.
Most things people call "agents" today are not actually agents — they're just tools or chatbots with the word "agent" slapped on them because it sounds cool. By the end of this module, you'll be able to tell the difference at a glance.
Tool, chatbot, agent — three different things.
These words get used interchangeably, and they shouldn't. They describe three completely different relationships between a human and a piece of software. The difference matters.
A tool
↳ you operate it
Verb: you act. You use it. It doesn't do anything on its own. Think of a calculator: you press the buttons, it shows the answer, you decide what to do next.
A chatbot
↳ you talk, it answers
Verb: it responds. You ask questions. It answers. The conversation is the whole thing. It doesn't do anything in the real world — it just talks back.
An agent
↳ it acts for you
Verb: it acts. You give it a goal. It goes off and does things in the world to reach that goal. Books appointments, sends messages, makes decisions. All on your behalf.
Here's a simple test to tell them apart: after the interaction, did something change in the real world? If nothing changed — no message sent, no appointment made, no file saved — you were using a tool or talking to a chatbot. If something did change, and you didn't personally click the final button, you were using an agent. The world-change is the tell.
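If you like seeing a test as code, the world-change test fits in a few lines. This is only a sketch: the two inputs are judgments you make about the interaction after it ends, and the names are invented for this lesson.

```python
def classify(world_changed: bool, you_clicked_final_button: bool) -> str:
    """The world-change test: what kind of thing were you using?"""
    if not world_changed:
        # No message sent, no appointment made, no file saved.
        return "tool or chatbot"
    if you_clicked_final_button:
        # Something changed, but you personally did the final act.
        return "tool"
    # Something changed, and it wasn't you who pulled the trigger.
    return "agent"

print(classify(world_changed=False, you_clicked_final_button=False))  # tool or chatbot
print(classify(world_changed=True, you_clicked_final_button=True))    # tool
print(classify(world_changed=True, you_clicked_final_button=False))   # agent
```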
Who would you trust an agent to help?
Before you design an agent, pick a real person it would work for. Not "users." A specific human in your life who has a specific problem an agent could actually help with.
This choice matters more than it seems. The person an agent is for shapes every design decision. An agent for grandma is a very different shape from an agent for a busy parent — even if both are doing "book an appointment." Agent design is empathy work first, code second.
Every agent has three powers.
If something doesn't have all three of these, it's not really an agent — it's something simpler (which is fine, but let's call it what it is).
It knows
An agent has information — about the person it's helping, the task at hand, the world around it. Without knowledge, it can't make good decisions.
It decides
An agent has to make choices on its own, without asking for permission at every step. "Should I book the 10am or the 2pm?" It has to have the judgment to pick.
It acts
An agent changes something in the real world. Sends a message, books a slot, files a record. If nothing changes, it's a chatbot.
Knows. Decides. Acts. Those three words are the checklist. Every legitimate agent does all three. Most things that claim to be agents are missing the middle one — they know stuff, they can act, but they refuse to decide without asking for permission at every step. That's a tool pretending to be an agent.
The middle power — decides — is where all the ethics lives. A tool never decides anything. A chatbot suggests. An agent actually chooses. That means an agent can be wrong in a way that a tool can't. It's also the power that makes an agent worth having at all. If you wanted to decide every step yourself, you'd just use a tool.
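Here's the same checklist as a code skeleton. It's a sketch, not a real framework: the class and method names are invented for this module, and every real agent would fill in its own judgment and its own actions.

```python
class Agent:
    """The three powers, as a skeleton. Anything missing one of these
    is something simpler than an agent."""

    def __init__(self, knowledge: dict):
        # It KNOWS: facts about the person, the task, and the world.
        self.knowledge = knowledge

    def decide(self, options: list):
        # It DECIDES: picks one option using its own judgment,
        # without asking permission at every step.
        raise NotImplementedError("every real agent supplies its own judgment")

    def act(self, decision):
        # It ACTS: changes something in the real world, like sending a
        # message or booking a slot. If nothing changes, it's a chatbot.
        raise NotImplementedError("every real agent supplies its own actions")
```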
Watch an agent think, decide, and act.
Here's a simulated agent doing a real task: booking a dentist appointment for grandma. Click "Start the agent" and watch what happens. Notice how the agent narrates its own thinking — that's key to trust. Agents that work in silence are scary. Agents that show their work are trustworthy.
Agent: Grandma's Appointment Helper
Goal: "Book grandma a dentist appointment next week."
Watch the pause before the agent asks you. That's the moment an agent stops — because it hit something it shouldn't decide alone. In the next module you'll learn exactly where those pauses should be. For now, notice that they exist at all.
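If the simulation isn't available where you're reading this, here's roughly what it does, as a runnable sketch. Everything is hard-coded and hypothetical: the slot list stands in for a real dentist's booking page, and the morning preference is an assumption about grandma.

```python
SLOTS = ["Tue 10:00", "Tue 14:00", "Thu 09:30"]  # pretend these came from the dentist's site

def narrate(step: str):
    # Agents that show their work are trustworthy, so say every step out loud.
    print(f"[agent] {step}")

def book_dentist_for_grandma():
    narrate("Goal: book grandma a dentist appointment next week.")
    narrate(f"I found {len(SLOTS)} open slots: {', '.join(SLOTS)}.")   # it KNOWS

    # It DECIDES the routine part alone: grandma prefers mornings
    # (our assumption), so keep slots before noon and take the first.
    mornings = [s for s in SLOTS if int(s.split()[1].split(":")[0]) < 12]
    choice = mornings[0]
    narrate(f"Grandma prefers mornings, so I'm leaning toward {choice}.")

    # The PAUSE: this appointment affects grandma, not just you,
    # so the agent stops and asks before it acts.
    answer = input(f"[agent] Book {choice} for grandma? (y/n) ")
    if answer.strip().lower().startswith("y"):
        narrate(f"Booked {choice} and sent grandma the details.")      # it ACTS
    else:
        narrate("Okay, I'll hold off. Tell me what to change.")

book_dentist_for_grandma()
```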
When should an agent stop and ask?
The hardest question in agent design. If an agent asks for permission at every step, it's not an agent — it's a tool in disguise. If it never asks, it's dangerous. Good agents know exactly when to pause. Two rounds of judgment.
Round 1. You've asked your agent to "help me plan dinner." Which response is better?
Round 2. You've told your agent to "find grandma a doctor's appointment." Which response is better?
Agents should pause when any of three things is true: the action is irreversible (you can't easily undo it), the action affects someone besides the user (grandma, a business, a stranger), or the agent is uncertain enough that it's genuinely guessing. Any one of those and the agent stops. Small, routine decisions it handles alone. That's how you earn the name "agent" instead of "tool."
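That rule is simple enough to write down directly. A sketch follows; in real life, estimating the three flags is the hard part, because the agent has to judge them itself.

```python
def should_pause(irreversible: bool, affects_others: bool, genuinely_guessing: bool) -> bool:
    # Pause and ask the human if ANY one of the three holds.
    # Small, routine, private, confident decisions: handle alone.
    return irreversible or affects_others or genuinely_guessing

# Picking tonight's recipe: reversible, private, confident -> handle alone.
print(should_pause(False, False, False))  # False
# Booking grandma's appointment: it affects someone else -> stop and ask.
print(should_pause(False, True, False))   # True
```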
You now know what an agent actually is.
Which is more than most adults using the word. Welcome to Agent Lab — the academy where empathy meets code.
What you just learned
- An agent is something authorized to act on someone's behalf. Three words matter: authorized, act, behalf.
- Three different things: tool (you operate it), chatbot (you talk), agent (it acts for you).
- Every legitimate agent has three powers: knows, decides, acts.
- The middle power — decides — is where all the ethics lives. It's what makes an agent different from a tool.
- Good agents show their thinking. Silent agents are scary.
- Agents should pause when: the action is irreversible, it affects other people, or they're genuinely uncertain.
In Module 02, you'll start building your own agent — for —, the real person you picked in step 3. Not a pretend one. A real design for a real person. That's where empathy enters the work.
★ Before you call it done
Three questions. Same three. Every time.
These are the same three questions for every module in Kindling. They are how you check whether AI did the part it should and you did the part only you could. Tap each one to mark it true.