# How to Present a Technical Project to Non-Experts

A checklist and guide for preparing a compelling presentation of a technical system.

## Pre-Work: Know Your Audience

Before anything else, answer:
- **Who is the audience?** (Investors, engineers, academics, general public?)
- **What do they already know?** (Assume very little for non-experts)
- **What should they walk away feeling?** (Impressed? Curious? Wanting to collaborate?)

## The 8 Components

### 1. The Problem Statement (Why This Matters)

Start with a pain that the audience can feel. Not "we built an ML pipeline" but "AI assistants are capable but don't accumulate wisdom — they don't learn to write cleaner code, build better UIs, or make smarter architectural choices over time."

- **Use a concrete anecdote** — a real scenario, not an abstract description
- **Quantify the pain** — hours wasted, dollars lost, failures per day
- **Make it relatable** — connect to the audience's world
- **Avoid jargon** — "the agent tried something, it broke, it tried again" not "the tool call returned is_error: true"

### 2. The Core Idea (What We Did)

One sentence that captures the insight. "What if the AI could review its own mistakes and learn general principles from them?"

- **Analogy first, mechanism second** — "like an automated postmortem after every coding session"
- **The key insight** — what makes this approach non-obvious?
- **Why now?** — what enables this that wasn't possible before?

### 3. System Diagram (How It Works)

A visual pipeline showing data flow from input to output.

- **Keep it to 5-7 boxes** — more than that and you've lost the audience
- **Label the arrows** — what flows between stages?
- **Highlight the novel parts** — color or bold the stages that are new
- **Include a "before/after" view** — what the system looks like without this vs. with it

### 4. Motivating Examples (Show, Don't Tell)

3-5 concrete examples that demonstrate the system working:

- **Before/After** — "Before: the agent made this mistake. After: the principle prevented it."
- **Surprising results** — "The system discovered that 7 seemingly different CSS bugs were all the same underlying problem"
- **Scale demonstration** — "From 1,126 raw observations, the system distilled 98 principles"
- **End-to-end trace** — walk through one observation from raw data to deployed principle
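A scale stat like the one above is more convincing when you frame it as a selectivity number the audience can hold onto. A minimal sketch, using the example counts from the text (the function name is illustrative, not part of the real system):

```python
def selectivity(raw_observations: int, deployed_principles: int) -> float:
    """Fraction of raw observations that survive distillation into principles."""
    return deployed_principles / raw_observations

# Counts quoted in the example above: 1,126 observations -> 98 principles
rate = selectivity(1126, 98)
print(f"{rate:.1%} of observations became principles")  # → 8.7%
```

One number ("under 9% make the cut") is easier to remember than a table, and it doubles as a process metric for section 6.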

### 5. Comparison with Prior Work (How This Is Different)

Position your work against known approaches. For each, cover:

| Prior Work | What it does | What we do differently | Why it matters |
|-----------|-------------|----------------------|---------------|
| RAG | Retrieves existing docs | Creates new knowledge | Knowledge doesn't exist in any doc |
| RLHF | Changes model weights | Changes context/instructions | Faster, cheaper, interpretable |
| Postmortems | Manual review | Automated, continuous | Runs after every session |

- **Be generous** — acknowledge what prior work does well
- **Be specific** — "differs in X specific way" not "is better"
- **Name papers** — academic audiences expect citations

### 6. Measuring Usefulness (Does It Actually Work?)

The hardest slide. Present:

- **Direct metrics** — error rates before/after, time saved, quality scores
- **Process metrics** — how selective is the pipeline? how reliable are the judgments?
- **Ablation** — what happens if you remove the system?
- **Limitations** — what you can't measure yet (be honest — it builds trust)
- **The ROI framework** — how you decide which learnings are worth deploying
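The ROI framework is easier to present with a toy calculation on a slide. A hedged sketch of one possible deploy/skip rule: weekly time saved versus a rough cost for the context space a principle occupies. All names, fields, and weights here are illustrative assumptions, not the system's actual formula:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    errors_prevented_per_week: float  # estimated from before/after runs
    minutes_saved_per_error: float
    context_tokens: int               # cost: space occupied in the context window

def roi(c: Candidate, minutes_per_1k_tokens: float = 0.5) -> float:
    """Weekly minutes saved minus a rough penalty for context bloat."""
    benefit = c.errors_prevented_per_week * c.minutes_saved_per_error
    cost = (c.context_tokens / 1000) * minutes_per_1k_tokens
    return benefit - cost

candidates = [
    Candidate("always-run-tests-first", 4.0, 6.0, 400),
    Candidate("obscure-css-quirk", 0.1, 3.0, 900),
]
deploy = [c.name for c in candidates if roi(c) > 0]
print(deploy)  # → ['always-run-tests-first']
```

Even a deliberately crude rule like this shows the audience that deployment is a decision with a cost side, not just a list of everything the pipeline learned.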

### 7. Demo (Live or Recorded)

A live demo is ideal but risky. Options:

| Format | Pros | Cons |
|--------|------|------|
| **Live CLI** | Most impressive, shows real system | Can fail, needs prep |
| **Recorded screencast** | Reliable, can edit | Less impressive |
| **Interactive walkthrough** | Audience explores | Needs web UI |
| **Before/after comparison** | Clear impact | Static |

**Demo checklist:**
- [ ] Show a real error being mined from a session
- [ ] Show the multi-model scoring (disagreement is interesting!)
- [ ] Show a principle being proposed and its evidence chain
- [ ] Show the principle in `~/.claude/principles/` ready for next session
- [ ] Show the `learn find` search working
- [ ] Show the gym running (gen→eval→learn loop)

### 8. Vision Statement (What This Becomes)

End with where this goes if successful. Paint the picture:

- **Near-term** (3-6 months) — what's concretely planned
- **Medium-term** (1-2 years) — what becomes possible
- **North star** — the ultimate vision, stated clearly
- **Why it compounds** — how each improvement makes the next one easier

The vision should feel ambitious but grounded in what you've already demonstrated.

## Presentation Structure (30-min talk)

| Time | Section | Slides |
|------|---------|--------|
| 0-3 min | Problem + anecdote | 2 |
| 3-5 min | Core idea + analogy | 1 |
| 5-10 min | System diagram + how it works | 3 |
| 10-18 min | Examples (3-4 deep dives) | 4-5 |
| 18-22 min | Comparison with prior work | 2 |
| 22-25 min | Measuring usefulness + results | 2 |
| 25-28 min | Demo (live or recorded) | 1 |
| 28-30 min | Vision + Q&A | 1 |

## Anti-Patterns

- **Starting with the architecture** — nobody cares about your SQLite schema until they care about your problem
- **Jargon overload** — "FTS5 indexes on materialized principle views" loses everyone
- **No concrete examples** — abstract descriptions of pipelines are forgettable
- **Hiding limitations** — audiences respect honesty; hiding gaps erodes trust
- **Demo without context** — a terminal scrolling JSON means nothing without setup
- **Too many numbers** — pick 3-5 killer stats, not 20 tables

## Checklist Before Presenting

- [ ] Can you explain the problem in one sentence to a non-technical person?
- [ ] Do you have 3+ concrete examples with before/after?
- [ ] Is your system diagram ≤7 boxes?
- [ ] Do you have at least one comparison with a well-known approach?
- [ ] Can you state your key metric for "is this working?"
- [ ] Have you rehearsed the demo and have a backup?
- [ ] Does the vision statement feel ambitious but achievable?
- [ ] Have you marked where the explanation needs more work? (ENHANCE markers)
