Academy 04 of 04

Harness Studio

keep other systems — and yourself — honest

The deepest academy. Where the other three teach building things, this one teaches building systems that keep other systems honest — including your own taste, drifting over time.

Taste, concentrated

What this academy is really about.

You will spend much of your career making decisions about AI output. Is this draft good? Is this code clean? Is this design right? Is this agent's behavior acceptable? In a world where AI can produce a polished anything in two seconds, the limiting factor is not making things. It's knowing which version is the one you'd put your name on.

Harness Studio teaches you to turn that judgment into a system: a rubric you can defend, a judge that automates your taste, and — most uncomfortably — a process that catches you when your own taste drifts.

When the cost of creation drops to zero,
taste is the only moat. — Eric De Castro, 2026

Why it's the last academy.

You can't build a Harness Studio project without something to harness. By the time you arrive here, you should already have a Skill from Academy 01, a tool from Academy 02, and an agent from Academy 03. Harness Studio takes those things and asks: are they good? How do you know? And how would you catch yourself if they got worse without you noticing?

The grand finale of this academy — Project 12 — weaves all four academies together into a single system. You finish the site here.

3 projects · ~10 hours total

The three projects, in order.

Project 10

Define Good. Defensibly.

Pick a domain you care about — could be writing, could be code, could be a Skill, could be how an agent talks. Write the rubric. Five dimensions, with anchors at 1/3/5. Score five real examples by hand. Notice what your taste actually is when you have to defend it.

You ship → 1 rubric YAML + 5 hand-scored examples + a one-paragraph reflection on what surprised you about your own taste.

Open →
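To make "five dimensions, with anchors at 1/3/5" concrete, here is one possible shape for the rubric YAML. The domain and dimension names are illustrative only — you pick your own; the structure (named dimensions, each with a 1/3/5 anchor) is the part that matters.

```yaml
# Illustrative rubric sketch -- dimension names are placeholders, not a prescribed set
domain: technical-blog-posts
dimensions:
  - name: clarity
    anchors:
      1: "Reader has to re-read sentences to follow the argument."
      3: "Mostly clear; a few vague or overloaded passages."
      5: "Every paragraph says one thing and says it plainly."
  - name: evidence
    anchors:
      1: "Claims asserted with no examples or data."
      3: "Some claims backed; others hand-waved."
      5: "Every load-bearing claim has an example, number, or citation."
  # ...three more dimensions, same shape, for five total
```

Writing the anchors in full sentences is the point: a dimension you can't anchor at 1, 3, and 5 is a dimension you can't defend.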
Project 11

Build a Judge That Works

LLM-as-judge: a Claude prompt that takes one piece of work and your rubric and returns a score with reasons. The judge is calibrated against your hand-scored examples. It has to agree with you 80% of the time before it ships. When it disagrees with you, you investigate which of you was right.

You ship → judge prompt + agreement rate vs. your hand scores + 1 case study where the judge changed your mind.

Open →
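The "agrees with you 80% of the time" check can be a few lines of bookkeeping. A minimal sketch, assuming scores are 1–5 integers per dimension and defining "agreement" as the judge landing within one point of your hand score — both the tolerance and the 80% threshold are choices you make, not rules:

```python
# Sketch: calibrate a judge against your hand scores.
# "Agreement" here means the judge lands within `tolerance` points of
# your score on a 1-5 scale -- that definition is an assumption.

def agreement_rate(hand_scores, judge_scores, tolerance=1):
    """Fraction of paired scores where the judge is within `tolerance`."""
    pairs = list(zip(hand_scores, judge_scores))
    hits = sum(1 for h, j in pairs if abs(h - j) <= tolerance)
    return hits / len(pairs)

def ready_to_ship(hand_scores, judge_scores, threshold=0.8):
    """The judge ships only once it clears the agreement threshold."""
    return agreement_rate(hand_scores, judge_scores) >= threshold

# e.g. 5 examples x 2 dimensions, flattened into one list per scorer
hand  = [5, 3, 4, 2, 5, 3, 4, 4, 1, 3]   # your hand scores
judge = [4, 3, 5, 4, 5, 3, 3, 4, 1, 2]   # the judge, on the same work
print(agreement_rate(hand, judge))        # -> 0.9
print(ready_to_ship(hand, judge))         # -> True
```

The disagreements are the interesting part: each pair outside the tolerance is a candidate for the "who was right?" investigation.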
Project 12

Catch Yourself Drifting · Finale

The grand finale. Run your judge against your own work, weekly, for a month. Watch yourself drift. Notice where AI is creeping into work that should be yours. Add the Skill, the tool, the agent. Build one system that catches all three drifting at once. Submit a one-page writeup of what you saw.

You ship → 4 weeks of self-judgments + a Drift Map + a one-page reflection + a v1.0 of "the thing that keeps me honest."

Open →
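"Watch yourself drift" can start as simple arithmetic over the weekly judgments. A sketch under stated assumptions: drift here means the latest weekly mean score falling more than some margin below week one — both the baseline and the margin are knobs you'd tune, not a prescribed method.

```python
# Sketch: flag drift across 4 weeks of self-judgments.
# Baseline = week 1's mean; "drift" = the latest weekly mean falling
# more than `margin` below it. Both definitions are assumptions.

def weekly_means(weeks):
    """weeks: one list of judge scores per week."""
    return [sum(w) / len(w) for w in weeks]

def drifted(weeks, margin=0.5):
    means = weekly_means(weeks)
    return (means[0] - means[-1]) > margin

weeks = [
    [4, 5, 4],   # week 1
    [4, 4, 5],   # week 2
    [3, 4, 3],   # week 3
    [3, 3, 2],   # week 4: noticeably below week 1
]
print(drifted(weeks))   # -> True
```

A plot of those weekly means over the month is a serviceable first Drift Map; the one-page reflection is where you say why the line moved.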

What you walk out of this academy — and the whole site — with

A working personal Harness: a system that watches your own output and tells you when you're slipping. Most adults don't have one. You will. Combined with the Skill, the tool, and the agent, you'll leave the site with a portfolio that proves something rare and specific: that you used AI hard, and the result was unmistakably yours.