30 Principles That Actually Change How You Code

Distilled from 249 principles in learning.db — the ones that earn their keep

249 Total Principles · 30 Top-Ranked · 10 Categories · 4 Tiers
Contents
  1. The Problem: Principles Nobody Reads
  2. How These 30 Were Selected
  3. Tier 1: Change How You Think About Every Task
  4. Tier 2: Save You From the Most Common Mistakes
  5. Tier 3: Make Design Decisions Sharper
  6. Tier 4: Domain-Specific but High-Impact
  7. Principles in Action
  8. How to Use These
  9. Quick Reference Card

1. The Problem: Principles Nobody Reads

You've accumulated 249 principles from hundreds of coding sessions. Each one was hard-earned — a bug that burned you, a design that crumbled, a pattern that saved hours. They live in learning.db, materialized to ~/.claude/principles/, and are automatically retrieved when they seem relevant.

But 249 is too many to hold in your head. The truly transformative ones — the seven that should run in your background process on every task — get buried alongside niche observations about SQLite triggers and Gradio CSS.

The goal isn't to read all 249. It's to internalize the 7–10 that change how you think and know where to find the rest when they're relevant.

This presentation ranks the top 30 by one composite question: how much damage does violating this principle cause, and how broadly does it apply?

2. How These 30 Were Selected

Selection criteria, in order of weight:

| Criterion | What it measures |
| --- | --- |
| Blast radius of violation | How much damage when you ignore it? Silent failures (Tier 1) cause more damage than poor naming (Tier 4) |
| Breadth of application | Every task? Most tasks? Certain domains only? Tier 1 applies to everything |
| Non-obviousness | "Test your code" is true but obvious. "Workarounds piling up = wrong abstraction" is non-obvious and actionable |
| Evidence depth | Number of linked instances in learning.db — principles with more real examples rank higher |
249 total principles
├── 10 categories (dev, testing, observability, ux, batch-jobs, ...)
├── ~80 high-level (## headings in principle files)
├── ~120 sub-principles (### under parents)
└── ~50 instance-level (too specific to rank)

Selection: 80 high-level → ranked → top 30 → grouped into 4 tiers

Here's how the data backs these rankings. The top 30 were selected from 249 active principles in learning.db, supported by 6,130 instances and 3,644 tracked applications. The evidence is not uniform — some principles (#15 Verify Action: 353 applications) are battle-tested across dozens of sessions, while others (#21 Map Boundaries: 0 applications) are conceptual frameworks that haven't yet been tracked in production. Both are valuable, but the ranking weights empirical evidence.

3. Tier 1: Change How You Think About Every Task

TIER 1 These 7 apply to literally every coding decision. Violating any one causes cascading damage.
#1
Fail Loud, Never Fake
Errors surface immediately. Never swallow, never return fake success.

A swallowed exception is a lie to your future self. The cost of a visible error is one interrupted task. The cost of a hidden error is hours of debugging the wrong thing, or worse — shipping broken output that looks correct.

Negative test: If removing this principle would make your codebase quieter but not more correct — that's the point.

10 instances · 55 applied
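A minimal sketch of the contrast, using a hypothetical load_config helper (the function name and file are illustrative, not from learning.db):

```python
import json

def load_config_quiet(path: str) -> dict:
    # Anti-pattern: the swallowed error returns fake success.
    # Callers proceed with an empty config and fail mysteriously later.
    try:
        with open(path) as f:
            return json.load(f)
    except Exception:
        return {}

def load_config_loud(path: str) -> dict:
    # The error surfaces at the point of failure, naming the real cause.
    with open(path) as f:
        return json.load(f)

quiet = load_config_quiet("does_not_exist.json")  # silently {}
try:
    load_config_loud("does_not_exist.json")
    surfaced = False
except FileNotFoundError:
    surfaced = True  # the missing file is visible immediately
```

The quiet version makes the codebase look calmer while hiding a missing file; the loud version costs one interrupted run and saves the debugging session.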
#2
Complexity Must Be Earned
Every abstraction, flag, and branch must justify itself with a concrete user. YAGNI + ongoing audit.

Not just "don't add what you don't need" (YAGNI) but also audit what's already there. A reload flag that toggles between hot-reload and no-reload? Nobody wants the no-reload path. Delete it. Three similar lines of code are better than a premature abstraction.

5 instances · 13 applied
#3
Respect Abstraction Boundaries
Never reach behind an abstraction to manipulate what it manages.

If a library manages state, don't also manage it. If a framework owns the event loop, don't fight it with threading. Every boundary violation creates a maintenance coupling that compounds.

15 instances · 31 applied · 3 sessions · 100% success
#4
Workarounds Piling Up = Wrong Abstraction
Multiple hacks in one area? You're fighting the wrong structure. Step back and redesign.

CSS hacks, MutationObservers, custom JS to fix layout — these are symptoms. The disease is wrong abstraction. When workarounds pile up, the cost of continuing to patch exceeds the cost of a rewrite. Recognize the signal.

10 instances · 22 applied
#5
Edit, Don't Replace — Content Is Sacred
Modify what exists. Never overwrite a file/section when you can surgically edit it.

Replacing a 200-line file when you needed to change 3 lines discards context, comments, careful formatting, and potentially introduces regressions. Surgical edits are reviewable; replacements are not.

1 instance · 38 applied · 5 sessions · 56% success
#6
Minimize Blast Radius
Changes should affect the smallest possible scope. Irreversible actions need extra caution.

Before every action, ask: "If this goes wrong, what breaks?" A function change breaks one caller. A module rename breaks many. A force-push breaks everyone. Scale your caution to the blast radius.

10 instances · 40 applied · 2 sessions · 100% success
#7
Trace the Chain to an Action
Every piece of data should lead to a decision or action. If it doesn't, it's noise.

Logging that nobody reads, metrics that nobody alerts on, props that nothing consumes — all dead weight. Follow each data element to its consumer. No consumer? Delete the producer.

2 instances · 4 applied · 2 sessions · 100% success

4. Tier 2: Save You From the Most Common Mistakes

TIER 2 These prevent the mistakes you'll make most often. Each one has saved hours of rework multiple times.
#8
Adopt Before You Build
Search for existing solutions before writing custom code. Libraries distribute maintenance burden.

The first version of your custom tool is faster to write. The 10th bug fix won't be. Check: stars + recency, scope match (80% is good enough), integration cost (5-line import beats a fork).

7 instances · 147 applied · 2 sessions · 100% success
#9
Simplify After Designing
After designing an API, make a deliberate simplification pass. Reduce parameters, merge concepts.

Design expands to cover cases. Simplification contracts to the essential. Both passes are required. The simplification pass is where "6 parameters" becomes "2 parameters and a sensible default."

13 instances
#10
Research Before Building
Broad upfront investigation beats iterative discovery. 30 min of research saves hours of rework.

Read the docs, search the codebase, check if someone solved this. The urge to start coding immediately is strong and usually wrong. The codebase already has a pattern for this — find it first.

7 instances · 52 applied · 8 sessions · 100% success
#11
Decompose Into Orthogonal Axes
Independent concerns should be independently configurable. Don't couple unrelated dimensions.

If "model selection" and "output format" vary independently, they should be separate parameters — not bundled into recipe variants like ask_json and ask_text. Orthogonal axes compose; coupled axes explode combinatorially.

23 instances · 21 applied
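A sketch of the recipe-variant example above: one function with two orthogonal parameters instead of a bundled variant per combination (the ask signature is illustrative):

```python
def ask(prompt: str, *, model: str = "small", fmt: str = "text") -> dict:
    # Two independent axes, independently configurable.
    # Coupled variants (ask_json_small, ask_text_large, ...) would need
    # len(models) * len(formats) functions; orthogonal params need one.
    return {"prompt": prompt, "model": model, "fmt": fmt}

call = ask("hello", model="large", fmt="json")
default = ask("hello")  # axes default independently
```

Adding a third model or a third format now costs one new value, not a new combinatorial row of functions.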
#12
No Silent Failures
If something goes wrong, it must be visible. Silent failures compound into mystery bugs.

An empty list returned instead of raising is a silent failure. A try/except: pass is a silent failure. An unhandled case in a match statement is a silent failure. Each one is a time bomb.

31 instances · 256 applied
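One way the "empty list instead of raising" failure mode plays out, in a hypothetical role-lookup sketch (names and roles are illustrative):

```python
def find_users_quiet(records, role):
    # Anti-pattern: a typo'd role returns [] -- indistinguishable
    # from "no users have this role". The error is invisible.
    return [r for r in records if r.get("role") == role]

VALID_ROLES = {"admin", "editor", "viewer"}

def find_users_loud(records, role):
    # The typo fails immediately, at the callsite that caused it.
    if role not in VALID_ROLES:
        raise ValueError(f"unknown role: {role!r}")
    return [r for r in records if r["role"] == role]

records = [{"role": "admin"}, {"role": "viewer"}]
quiet = find_users_quiet(records, "adimn")  # typo: silently []
try:
    find_users_loud(records, "adimn")
    raised = False
except ValueError:
    raised = True
```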
#13
Migrate, Don't Straddle
When moving to a new approach, complete the migration. Don't maintain two parallel systems.

Every "we'll migrate later" becomes permanent dual maintenance. The old system never dies — it just gets increasingly neglected while consuming support effort. Migrate fully or don't start.

4 instances · 4 applied
#14
Independent Dimensions, Independent Paths
If two things vary independently, they should be in separate code paths, not tangled together.

Auth logic and rendering logic in the same function means changing auth requires understanding rendering. Separate the dimensions; let each evolve independently.

5 instances · 26 applied
#15
Verify Action
After doing something, verify it worked. Don't assume success.

Wrote a file? Read it back. Started a server? Hit the health endpoint. Ran a migration? Check the schema. The gap between "I called the function" and "the effect happened" is where bugs hide.

12 instances · 353 applied · 8 sessions · 86% success
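The "wrote a file? read it back" case can be sketched as a small wrapper (write_verified is a hypothetical helper, not an API from the source):

```python
import os
import tempfile

def write_verified(path: str, content: str) -> None:
    # Write, then read back: verify the effect, not the call.
    with open(path, "w") as f:
        f.write(content)
    with open(path) as f:
        written = f.read()
    if written != content:
        raise IOError(f"verification failed for {path}")

tmp = os.path.join(tempfile.mkdtemp(), "out.txt")
write_verified(tmp, "hello")
```

The same shape applies to servers (hit the health endpoint) and migrations (inspect the schema): the verification step closes the gap between the call and the effect.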

5. Tier 3: Make Design Decisions Sharper

TIER 3 These don't prevent bugs — they prevent bad architecture. Relevant whenever you're designing, not just fixing.
#16
Draft the Shape, Then Fill It In
Define interfaces/structure first, then implement. Top-down design, bottom-up implementation.
0 instances · 7 applied · 3 sessions · 100% success
#17
Treat Symptoms as Signals
A bug or friction point reveals a structural issue. Fix the cause, not the symptom.
18 instances · 9 applied · 6 sessions · 40% success
#18
Less Is More
The best code is the code you don't write. Prefer deletion over addition.
5 instances · 15 applied
#19
Extend, Don't Invent
Build on existing patterns rather than creating new ones. Consistency > novelty.
4 instances · 115 applied
#20
Minimum Necessary Weight
Use the lightest mechanism that solves the problem. A function before a class. A class before a framework.
8 instances · 27 applied · 2 sessions · 100% success
#21
Map Boundaries via Extreme Probes
Propose the maximalist design to map where it breaks, then converge on the nuanced middle.

The extreme proposal isn't a mistake — it's a diagnostic. "Flatten everything to dicts" maps exactly where genericity helps and where it breaks. Neither the conservative status quo nor the maximalist proposal reaches the synthesis alone.

Conceptual — not yet tracked in production
#22
Recognize Poor Fit, Find Better Tools
When a tool fights you, switch tools. Don't force-fit.
10 instances · 69 applied · 5 sessions · 100% success
#23
Concrete Rots, Principles Persist
Code changes, principles don't. Invest in understanding the why, not memorizing the what.
7 instances · 27 applied

6. Tier 4: Domain-Specific but High-Impact

TIER 4 Narrower scope — batch processing, data pipelines, system design — but transformative when they apply.
#24
Fail Per-Element, Not Per-Batch
One bad item shouldn't kill a whole batch. Isolate failures.
6 instances · 29 applied · 2 sessions · 0% success
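A minimal per-element isolation sketch, assuming a generic process callable (the harness below is illustrative):

```python
def process_batch(items, process):
    # Isolate failures per element; one bad item can't kill the run.
    results, failures = [], []
    for item in items:
        try:
            results.append(process(item))
        except Exception as exc:
            failures.append((item, repr(exc)))  # recorded, not swallowed
    return results, failures

results, failures = process_batch(
    [1, 2, 0, 4],
    lambda x: 10 // x,  # fails on 0
)
```

The failing item is captured with its error instead of aborting the batch, and the failure list makes the gap visible at the end (pairing naturally with #12 and #15).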
#25
Idempotency and Resumability
Operations should be safe to retry. Crashed halfway? Resume, don't restart.
4 instances · 39 applied
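A resumable-batch sketch, assuming completed items can be recorded in a small state file (the state format and helper name are illustrative):

```python
import json
import os
import tempfile

def process_resumable(items, process, state_path):
    # Record completed items so a rerun skips them instead of redoing work.
    done = set()
    if os.path.exists(state_path):
        with open(state_path) as f:
            done = set(json.load(f))
    for item in items:
        if item in done:
            continue  # already processed: safe to retry the whole batch
        process(item)
        done.add(item)
        with open(state_path, "w") as f:  # checkpoint after each item
            json.dump(sorted(done), f)

calls = []
state = os.path.join(tempfile.mkdtemp(), "state.json")
process_resumable(["a", "b"], calls.append, state)
process_resumable(["a", "b", "c"], calls.append, state)  # resumes: only "c" runs
```

Because each item is checkpointed, a crash halfway leaves a state file that makes the next run pick up where the last one stopped.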
#26
Progressive Trust Radius
Start cautious (1 item), verify, then expand scope. Don't YOLO on the first run.
3 instances · 1 applied
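One way to sketch an expanding trust radius (the growth factor and verify hook are illustrative choices, not prescribed by the principle):

```python
def run_progressively(items, process, verify):
    # Start with a single item; after each verified batch, expand the radius.
    results, batch, i = [], 1, 0
    while i < len(items):
        chunk = items[i:i + batch]
        out = [process(x) for x in chunk]
        if not all(verify(r) for r in out):
            raise RuntimeError(f"verification failed in items {i}..{i + len(chunk) - 1}")
        results.extend(out)
        i += len(chunk)
        batch *= 4  # trust radius grows: 1, 4, 16, ...
    return results

out = run_progressively(
    list(range(10)),
    lambda x: x * 2,
    lambda r: r % 2 == 0,  # stands in for a real post-condition check
)
```

A bad process function fails on the first item, not after mangling the whole dataset.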
#27
Distinguish Discrete Selection from Continuous Control
Binary thresholds lose information. Continuous signals enable proportional decisions.

Replacing exhausted() (boolean) with frac() (0.0–1.0) enables throttling before the wall, proportional allocation, and gradual effort adjustment. Every binary check is a continuous signal with information discarded.

1 instance
#28
Differentiate Shared Metaphor from Shared Mechanics
Things that look similar may need different treatment. Don't unify on superficial resemblance.

Item.props and Context.budget both "hold metadata in a dict." But one is an extension boundary (generic, flexible) and the other is an engine boundary (typed, validated). Unifying them on the metaphor destroys the mechanics each needs.

9 instances
#29
Generalize From Concrete Failures
Turn recurring bugs into shared safeguards. One fix should prevent the whole class.
7 instances · 2 applied
#30
Classify Before Specialized Processing
Triage first, then route to specialized handlers. Don't run everything through the expensive path.
Conceptual — not yet tracked in production
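A triage-then-route sketch under stated assumptions: the length check is a stand-in for a cheap classifier, and the expensive handler is mocked (both are illustrative):

```python
def handle(requests):
    # Classify cheaply first; route only hard cases to the expensive path.
    cheap, expensive = [], []
    for req in requests:
        if len(req) < 10:                    # crude classifier: short = easy
            cheap.append(req.upper())        # fast handler
        else:
            expensive.append(f"LLM({req})")  # stand-in for a costly handler
    return cheap, expensive

cheap, expensive = handle(["hi", "summarize this long document"])
```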

7. Principles in Action

Example 1: The frac() Refactor (Principles #27, #9, #7)

Vario's Budget had a binary exhausted() method. Eight callsites checked it. The problem: binary signals can't enable proportional decisions.

Before
if ctx.budget.exhausted():
    # stop everything
    return

Binary. Either working or dead. No gradual response.

After
if ctx.budget.frac() >= 1.0:
    return  # same behavior
# BUT now you can also:
if ctx.budget.frac() > 0.8:
    # throttle, reduce effort
    remaining = 1.0 - ctx.budget.frac()
    n = max(1, round(5 * remaining))

Continuous. Same info at the boundary, richer everywhere else.

Principles applied: #27 (continuous > discrete), #9 (simplify — exhausted() was eliminated, not kept alongside), #7 (trace to action — frac() feeds actual allocation decisions).

Example 2: The "Flatten Everything" Design Session (Principles #21, #28, #4)

Vario's Item uses a generic props dict. Context uses named fields (budget, hooks). The inconsistency seemed like a design flaw.

Extreme probe (#21): "Make Context generic like Item — everything is dict[str, Any]." Ran it through 3-model adversarial review.

What broke: Engine boundaries need validation. limit.usd as a string key means a typo silently removes your budget cap. Concurrency safety requires encapsulation, not raw dicts.

Synthesis (#28): The metaphor (both hold metadata) was shared, but the mechanics (extension vs engine control) were different. Result: "unopinionated at extension boundaries, opinionated at engine boundaries."

Example 3: The Silent Batch Failure (Principles #12, #15, #24)

A YouTube channel download pipeline processed 2,000 videos. Two hours in, it silently failed at item 847 — but reported overall "success" because the orchestrator only checked the final HTTP status, not per-item outcomes.

Before
result = batch.run(urls)
# returns 200 OK even when
# 40% of items failed silently
if result.status == 200:
    log("batch complete")

Aggregate success masks per-item failures. 847/2000 items lost.

After
for url in urls:
    result = process(url)
    report.record(result)
# End-of-batch reconciliation
assert report.succeeded >= report.expected * 0.95

Each item tracked. Batch continues on failure. Summary reveals the gap.

Principles applied: #12 No Silent Failures (256 applications — the most-evidenced principle in the system) → per-element error tracking. #15 Verify Action (353 applications) → end-of-batch reconciliation. #24 Fail Per-Element → individual failures don't kill the batch.

Example 4: The 50-Column Dashboard (Principles #18, #7, #6)

A Gradio monitoring dashboard displayed 50+ metrics in a data table. Users couldn't find what mattered — every metric competed equally for attention. Worse: Unicode zero-width spaces used for column alignment caused silent query failures when the data was parsed back from the UI.

Less Is More (#18): Reduced visible metrics from 50 to 5 — the ones that drive actual decisions. Trace the Chain (#7): For each remaining metric, identified the specific action it informs. Metrics with no consumer were hidden. Minimize Blast Radius (#6): Hidden metrics moved to an expandable section, not deleted. Previous users can still find them.

Result: dashboard load time dropped 4x, and the Unicode bug surfaced immediately because the parsing layer now handled fewer, well-defined fields.

Example 5: The Flaky Integration Test (Principles #17, #3, #15)

An integration test failed intermittently — sometimes on CI, never locally. The instinct was to add retries and increase timeouts.

Before
# Reaching behind the API to check DB directly
items = db.query("SELECT * FROM items")
assert len(items) == 3  # race with async write

Bypasses the abstraction boundary. Race condition with async commit.

After
# Verify through the same API the user sees
resp = client.get("/api/items")
assert len(resp.json()) == 3  # waits for commit

Tests what the user sees. No race — the API waits for consistency.

Principles applied: #17 Treat Symptoms as Signals — the flakiness was the bug signal, not the specific assertion. #3 Respect Abstraction Boundaries — the test reached behind the HTTP layer to query the database directly, creating a race condition. #15 Verify Action — replaced with end-to-end verification through the API the user actually calls.

8. How to Use These

For daily coding

Internalize Tier 1. These 7 should be reflexive — you shouldn't have to think about them. Print them on a card. If you can only remember one: Fail Loud, Never Fake.

For design decisions

Consult Tier 2–3 when designing. Before committing to an architecture, check: Am I decomposing orthogonally? Did I simplify after designing? Am I building when I should be adopting?

For domain work

Tier 4 activates contextually. Starting a batch pipeline? Scan #24–26. Designing a data structure? Check #27–28. The /recall command does this automatically.

For the full 249

They live in learning.db, materialized to ~/.claude/principles/. The /recall skill searches them semantically before any design decision. You don't need to memorize them — the system surfaces them when relevant.

9. Quick Reference Card

All 30 principles on one sheet. Print it. Pin it.

| # | Principle | One-liner | Tier | Evidence | Applied |
| --- | --- | --- | --- | --- | --- |
| 1 | Fail Loud, Never Fake | Never swallow errors or return fake success | T1 | 10 | 55 |
| 2 | Complexity Must Be Earned | Every abstraction needs a concrete user | T1 | 5 | 13 |
| 3 | Respect Abstraction Boundaries | Never reach behind what a library manages | T1 | 15 | 31 |
| 4 | Workarounds = Wrong Abstraction | Hacks piling up? Redesign | T1 | 10 | 22 |
| 5 | Edit, Don't Replace | Surgical edits, not wholesale rewrites | T1 | 1 | 38 |
| 6 | Minimize Blast Radius | Scale caution to scope of impact | T1 | 10 | 40 |
| 7 | Trace the Chain to an Action | Data without a consumer is noise | T1 | 2 | 4 |
| 8 | Adopt Before You Build | Search before writing custom code | T2 | 7 | 147 |
| 9 | Simplify After Designing | Deliberate simplification pass on every API | T2 | 13 | 0 |
| 10 | Research Before Building | 30 min research saves hours of rework | T2 | 7 | 52 |
| 11 | Orthogonal Axes | Independent concerns = independent config | T2 | 23 | 21 |
| 12 | No Silent Failures | If something breaks, it must be visible | T2 | 31 | 256 |
| 13 | Migrate, Don't Straddle | Complete the migration or don't start | T2 | 4 | 4 |
| 14 | Independent Dimensions | Things that vary independently live separately | T2 | 5 | 26 |
| 15 | Verify Action | After doing, verify it worked | T2 | 12 | 353 |
| 16 | Draft Shape First | Interfaces before implementation | T3 | 0 | 7 |
| 17 | Symptoms as Signals | Fix the cause, not the symptom | T3 | 18 | 9 |
| 18 | Less Is More | Best code is code you don't write | T3 | 5 | 15 |
| 19 | Extend, Don't Invent | Build on existing patterns | T3 | 4 | 115 |
| 20 | Minimum Necessary Weight | Lightest mechanism that solves it | T3 | 8 | 27 |
| 21 | Extreme Probes | Maximalist proposal maps where it breaks | T3 | 0 | 0 |
| 22 | Recognize Poor Fit | When a tool fights you, switch tools | T3 | 10 | 69 |
| 23 | Principles Persist | Invest in the why, not the what | T3 | 7 | 27 |
| 24 | Fail Per-Element | One bad item shouldn't kill a batch | T4 | 6 | 29 |
| 25 | Idempotency | Safe to retry, resume on crash | T4 | 4 | 39 |
| 26 | Progressive Trust Radius | Start small, verify, expand | T4 | 3 | 1 |
| 27 | Discrete vs Continuous | Continuous signals enable proportional decisions | T4 | 1 | 0 |
| 28 | Metaphor vs Mechanics | Similar-looking things may need different treatment | T4 | 9 | 0 |
| 29 | Generalize From Failures | One fix should prevent the whole class | T4 | 7 | 2 |
| 30 | Classify Before Processing | Triage first, then route to handlers | T4 | 0 | 0 |

How the System Works

learning.db (249 principles, 6130 instances, 3644 applications)
│
├── materialize.py ──→ ~/.claude/principles/*.md (15 domain files)
│     (only principles with instance_count ≥ 2)
│
├── /recall ─────────→ semantic search → top-k relevant principles
│     injected into session context before decisions
│
├── principle_applications ──→ outcome tracking (success/failure/partial)
│     feeds back into ranking
│
└── gym/improve.py ──→ extract findings from gym runs
      record as new instances → close the loop

Generated 2026-03-31. Source: learning.db (249 active principles, 6,130 instances, 3,644 tracked applications). Ranked by blast-radius × breadth × non-obviousness × evidence-depth. Evidence badges show per-principle stats from production usage.

View at: static.localhost/present/gallery/coding_principles/