Built on Claude Code and iTerm2. The future is supervising agents that overgenerate rich reports so you choose the best — not doing the work yourself. This is the infrastructure that makes one developer + AI produce like a small team.
A web dashboard at hub.localhost/grid showing all active Claude Code sessions as cards. Each card has the session name, badge, topic tree, and timestamps. Click a card to focus that pane in iTerm2.
The statusline in every session shows a grid:XXXX link — one click to jump
from any session to the full overview. The grid polls session state via the
iterm2d daemon and watch DB, refreshing every 30 seconds.
When running 5-10 concurrent sessions, tab-cycling through iTerm2 is O(n). The grid gives O(1) access to any session — scan, click, focus.
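As an illustration, a grid card could be derived from the daemon's polled session records roughly like this. The field names (`id`, `name`, `badge`) are assumptions for the sketch, not the actual iterm2d schema:

```python
def build_grid_cards(sessions):
    """Turn raw session records (as polled from the daemon) into grid cards.

    The 'id', 'name', and 'badge' field names are illustrative guesses,
    not the real iterm2d response shape.
    """
    cards = []
    for s in sessions:
        cards.append({
            "name": s.get("name", "unnamed"),
            "badge": s.get("badge", ""),
            # the statusline's grid:XXXX link presumably embeds the session id
            "link": f"grid:{s['id']}",
        })
    return cards
```

A real grid would rebuild these cards on each 30-second poll and wire each card's click handler to the daemon's pane-activation endpoint.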
We build meta-tools that compound shipping velocity. Every hour spent on tooling saves ten hours of repetitive friction across hundreds of future sessions.
A solo developer running 5-10 concurrent AI coding sessions hits a wall: Which session needs attention? What's each one working on? How do I spawn a new one without losing context? How do I know if one is stuck? Without tooling, you spend more time managing sessions than doing actual work.
Build on Claude Code and iTerm2 — don't replace them, extend them. A layered infrastructure where every operation — spawning sessions, monitoring health, switching context, capturing learnings — is fast enough to be invisible. The developer stays in flow; the tools handle orchestration.
CLI tools call HTTP daemons, which orchestrate iTerm2 and session discovery, which feeds monitoring and autonomy systems. Each layer is independently useful.
Six tools, each focused on one concern. All available globally via ~/.local/bin/.
Speed-sensitive tools (it2, ai) are written in Go — fast startup, built-in
--help, and auto-generated zsh completions via Cobra. Python/Click handles
richer session analysis (ops, learn).
ops list
ops inspect SESSION
cl pool start
ops servers
it2 fork claude "research X"
it2 sessions
it2 activate SESSION_ID
it2 set-color ID green
ai embed "text"
ai call haiku "prompt"
ai health
appctl windows
appctl screenshot "Chrome"
appctl focus Chrome B
appctl minimize Chrome C D
Principles live under ~/.claude/principles/.
learn add "observation"
learn principles -v
learn find "error handling"
vario gen "prompt" -c fast
vario gen "prompt" -c maxthink
vario fetch --markdown URL
A persistent daemon (iterm2d) holds an open WebSocket to iTerm2 and serves
HTTP on :6190. Every operation is ~5ms. No subprocess startup cost — the connection is already open.
iTerm2's native Python API requires launching a new script each time (~300ms). The iterm2d daemon holds the connection open and serves REST — bringing latency down to ~5ms. This makes it practical to call iTerm2 from hooks, the statusline, and interactive tools without perceptible lag.
/sessions — All sessions (JSON)
/hierarchy — Window/tab/pane tree
/badge?session=&text= — Set badge
/activate?id= — Focus a pane
/set-color?session=&color= — Tab color
/split-pane?session= — Split pane
/send-text?session=&text= — Type into pane
/resolve-tty?tty= — Tmux→iTerm2 mapping
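A minimal sketch of calling these endpoints from a client, assuming only the port and routes listed above (the `daemon_url` helper is ours, not part of the toolkit):

```python
from urllib.parse import urlencode

DAEMON = "http://127.0.0.1:6190"  # iterm2d's HTTP port, per the text

def daemon_url(endpoint, **params):
    """Build a request URL for an iterm2d endpoint such as /badge or /activate."""
    query = urlencode(params)
    return f"{DAEMON}/{endpoint}" + (f"?{query}" if query else "")

# e.g. urllib.request.urlopen(daemon_url("badge", session="w0t1p0", text="research"))
```

Because the daemon keeps the WebSocket to iTerm2 open, each such HTTP call stays in the ~5ms range, which is what makes it safe to use from hooks and the statusline.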
Inside tmux -CC (control mode), ITERM_SESSION_ID points to the gateway,
not the actual pane. The daemon resolves via TTY matching: get tmux client_tty,
find the iTerm2 session with that TTY. Result is cached in a tmux pane-option for next time.
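The TTY-matching step reduces to a lookup; this sketch assumes a list of session dicts with `id` and `tty` keys (an illustrative shape, not the daemon's real data model):

```python
def resolve_tty(client_tty, iterm_sessions):
    """Find the iTerm2 session whose TTY matches the tmux client's TTY.

    iterm_sessions: list of dicts with 'id' and 'tty' keys (assumed shape).
    Returns the matching session id, or None if nothing matches.
    """
    for s in iterm_sessions:
        if s.get("tty") == client_tty:
            return s["id"]
    return None
```

In practice the daemon would obtain the client TTY from tmux (something like `tmux display-message -p '#{client_tty}'`) and cache the hit in a pane option so the scan only runs once per pane.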
it2 fork claude "research X" — from intent to running AI session
in two seconds. No manual tab creation, no directory navigation, no typing prompts.
Under the hood, a fork sets the user.parent_session and user.fork_marker variables, cds to the project directory, sets the badge with the task description, confirms via the iTerm2 path variable, then launches claude with the prompt piped in.
# Fork into a new pane (default)
it2 fork claude "find all uses of deprecated API"
# Fork into a new tab
it2 fork claude "research X" --target tab
# Fork into a new window with custom directory
it2 fork claude "fix the build" --target window --dir ~/other-project
The real leverage isn't in the tools — it's in the systems that run without you. Four autonomous capabilities turn passive infrastructure into active agents.
Observations are stored in learning.db and LLM-classified into principles. Future sessions
load relevant principles automatically; the system gets smarter with every hour of use.
Currently ~200 principles across 10 domains.
A forked research session never blocks on "Should I search for X?"

These systems share a design: overgenerate options, then let the human choose. The supervisor generates responses but only auto-approves safe ones. The doctor generates diagnoses but surfaces them for review. Learning extracts candidate principles but waits for evidence before promoting them. The human's job shifts from doing to selecting.
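The overgenerate-then-select pattern reduces to a small triage step. This sketch (names are ours) takes a caller-supplied safety predicate, since each system defines "safe" differently:

```python
def triage(candidates, is_safe):
    """Split generated candidates into auto-approved and needs-human-review.

    The supervisor, doctor, and learning systems all follow this shape:
    act automatically only on the safe subset, surface the rest.
    """
    approved = [c for c in candidates if is_safe(c)]
    review = [c for c in candidates if not is_safe(c)]
    return approved, review
```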
Claude Code records session transcripts. A shared reader (lib/sessions/)
is consumed by three systems that each extract different value from the same data.
| Consumer | Reads | Produces |
|---|---|---|
| learning | Errors, repairs, patterns | Principles, failure→fix pairs, efficiency analysis |
| doctor | File changes, test results | Auto-fixes, health reports |
| chronicle | Topics, tool calls, timestamps | Effort allocation heatmap, shipping metrics, topic graph |
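A shared reader like lib/sessions/ plausibly centralizes just the transcript parsing, leaving filtering to each consumer. A minimal sketch, assuming a JSONL transcript format with a `type` field on each event (the real schema may differ):

```python
import json

def read_transcript(lines):
    """Parse JSONL transcript lines into event dicts, skipping malformed lines.

    Consumers (learning, doctor, chronicle) filter these events differently.
    """
    events = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        try:
            events.append(json.loads(line))
        except json.JSONDecodeError:
            continue  # tolerate partial writes from live sessions
    return events

def tool_calls(events):
    """Example consumer-side filter, as chronicle might use for tool-call counts."""
    return [e for e in events if e.get("type") == "tool_call"]
```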
Built-in slash commands that give you instant context about what's happening, what was done, and how to find past work. These run inside any Claude Code session.
What was asked, done, key decisions, outcomes, open threads
devtools presentation
├─ grid screenshot
├─ autonomous systems section
│ ├─ learning, doctor, supervisor
│ └─ "overgenerate so you choose" pattern
├─ session awareness commands
│ └─ /recap, /hist, /jump
└─ commit and push ✓
/jump "authentication refactor"
These commands work because every Claude Code session transcript is indexed by the session infrastructure.
/jump searches across all projects; /recap and /hist
synthesize the current session. Combined with the grid, you always know what's happening everywhere.
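A /jump-style lookup could be as simple as term-overlap ranking over the indexed transcripts. This is purely illustrative; the document does not describe the real index:

```python
def jump_search(query, transcripts):
    """Rank sessions by naive term frequency against the query.

    transcripts: {session_id: transcript_text} (an assumed in-memory index).
    Returns session ids, best match first; sessions with no hits are dropped.
    """
    terms = query.lower().split()
    scored = []
    for sid, text in transcripts.items():
        haystack = text.lower()
        score = sum(haystack.count(term) for term in terms)
        if score:
            scored.append((score, sid))
    return [sid for score, sid in sorted(scored, reverse=True)]
```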
Smaller but useful extensions to the Claude Code + iTerm2 foundation.
Statusline: every session's statusline shows a grid:XXXX link. Five slow operations run in parallel background subshells with wait.

Provider variants: ~/.claude-minimax/ and ~/.claude-xai/ each have their own settings.json while symlinking hooks and skills from the main ~/.claude/. Launch with: minmax some_file.py.

Completions: zsh completion functions live in ~/.zfunc/. cd tools/cli && make completions regenerates everything.

Why Go: the it2 and ai CLIs are Go because they need instant startup (~5ms vs ~200ms for Python), built-in --help with proper formatting, and Cobra's shell completion generation. Anything that runs in hot paths (statusline, hooks) benefits from Go's speed.

Each tool makes the next tool more useful. The system gets smarter as you use it.
The most productive thing you can build is the thing that makes building everything else faster. A 2-second fork saves 30 seconds per spawn × 20 spawns per day × 365 days = 60 hours per year from a single tool. Multiply across six tools, hooks, and automation — and a solo developer operates at the throughput of a small team.
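The closing arithmetic checks out (30 s x 20 spawns x 365 days is about 61 hours; the text rounds to 60):

```python
seconds_saved_per_spawn = 30
spawns_per_day = 20
days_per_year = 365

hours_per_year = seconds_saved_per_spawn * spawns_per_day * days_per_year / 3600
# 219,000 seconds, roughly 60.8 hours per year, from the fork tool alone
```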