Session Management Benchmark

Performance profiling of session query, search, highlight, and cross-reference operations

| Metric | Value |
|---|---|
| iTerm Panes | 1 |
| JSONL Files | 351 |
| JSONL Total | 622 MB |
| Trials/Op | 5 |
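A minimal harness for collecting the per-operation statistics reported below (min/median/mean/max/stdev over a fixed number of trials) might look like this; `profile` is an illustrative name, not the benchmark's actual code:

```python
import statistics
import time

def profile(fn, trials=5):
    """Run fn `trials` times and return timing stats in milliseconds."""
    samples = []
    for _ in range(trials):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1000)
    return {
        "min": min(samples),
        "median": statistics.median(samples),
        "mean": statistics.fmean(samples),
        "max": max(samples),
        "stdev": statistics.stdev(samples),
    }
```

With 5 trials the stdev column is noisy but still flags bimodal operations (e.g. a cold first run followed by warm ones), which is visible in the rg row below.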

Single Operations

| Operation | Min (ms) | Median (ms) | Max (ms) | Stdev (ms) | Detail |
|---|---|---|---|---|---|
| session_tool list (alive, json) | 769 | 787 | 1266 | 217 | 17 lines output |
| session_tool cleanup (dry-run) | 378 | 383 | 392 | 6 | 45 lines output |
| rg -c 'model' (-Users-tchklovski-all-code-rivus/) | 53 | 53 | 452 | 178 | 2008 lines output |
| it2api get-buffer (single) | 288 | 299 | 310 | 8 | 41 lines output |
| tab color set+unset cycle | 911 | 916 | 943 | 16 | |
| highlight flow (1 pane) | 1224 | 1235 | 1431 | 88 | |
| cross-ref sessions.yaml ↔ it2api | 308 | 310 | 392 | 36 | 1 alive, 43 stale of 44 |
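The cross-reference step can be sketched as plain set arithmetic over session IDs: IDs present in the registry but absent from the terminal are stale. The function and argument names here are illustrative, not the tool's actual API:

```python
def cross_reference(registry_ids, alive_ids):
    """Partition registered session IDs into alive vs stale.

    `registry_ids` would come from sessions.yaml; `alive_ids` from
    the live terminal (it2api list-sessions in this benchmark).
    """
    registry = set(registry_ids)
    alive = registry & set(alive_ids)
    stale = registry - alive
    return alive, stale
```

The 310 ms median for this row is dominated by the it2api subprocess call, not the set comparison itself.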

Parallelism Comparison (Sequential vs Concurrent)

| Operation | Sequential (ms) | Concurrent (ms) | Speedup |
|---|---|---|---|
| buffer reads x1 | 299 | 306 | 0.98x |
| tab color x1 | 1015 | 1012 | 1.00x |
| rg x4 patterns | 296 | 180 | 1.65x |

session_tool list Breakdown (where does the time go?)

Each component of get_enriched_sessions() was measured independently. In this run only 1 pane was alive, so the components sum to ~1.1s; the 14+ second wall times seen for session_tool list occur when many panes are alive and each needs its own ~300ms it2api buffer read.

| Component | Median (ms) | % of Total |
|---|---|---|
| claude --version | 53 | 4.9% |
| it2api list-sessions | 302 | 28.1% |
| JSONL rglob+version (1 session) | 13 | 1.2% |
| JSONL rglob+version (43 sessions) | 705 | 65.6% |
| it2api get-buffer x0 seq (all alive) | 0 | 0.0% |
| Hub DB badge load | 2 | 0.2% |
| JSONL wait state detect (1 session) | 0 | 0.0% |
| SUM (estimated) | 1076 | 100% |
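The dominant cost above is the JSONL rglob, which rescans the directory tree once per session. A sketch of the two lookup strategies, assuming files are named `<sid>.jsonl` and the containing directory is known; both helpers are illustrative, not the tool's code:

```python
from pathlib import Path

def jsonl_path_rglob(sid, root):
    """O(files): walks the entire tree under root for every session."""
    return next(Path(root).rglob(f"{sid}.jsonl"), None)

def jsonl_path_direct(sid, root):
    """O(1): build the expected path and check that it exists."""
    path = Path(root) / f"{sid}.jsonl"
    return path if path.exists() else None
```

Replacing the rglob with direct path construction is what the roadmap below estimates at ~655ms of savings across 43 sessions.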

All Results (raw)

All times in milliseconds.

| Operation | Category | Min | Median | Mean | Max | Stdev | Trials |
|---|---|---|---|---|---|---|---|
| session_tool list (alive, json) | sequential | 769 | 787 | 879 | 1266 | 217 | 5 |
| session_tool cleanup (dry-run) | sequential | 378 | 383 | 384 | 392 | 6 | 5 |
| rg -c 'model' (-Users-tchklovski-all-code-rivus/) | sequential | 53 | 53 | 133 | 452 | 178 | 5 |
| it2api get-buffer (single) | sequential | 288 | 299 | 299 | 310 | 8 | 5 |
| tab color set+unset cycle | sequential | 911 | 916 | 925 | 943 | 16 | 5 |
| highlight flow (1 pane) | sequential | 1224 | 1235 | 1277 | 1431 | 88 | 5 |
| cross-ref sessions.yaml ↔ it2api | sequential | 308 | 310 | 327 | 392 | 36 | 5 |
| buffer reads x1 (sequential) | sequential | 297 | 299 | 304 | 317 | 8 | 5 |
| buffer reads x1 (concurrent) | concurrent | 292 | 306 | 309 | 335 | 16 | 5 |
| tab color x1 (sequential) | sequential | 952 | 1015 | 1005 | 1053 | 50 | 5 |
| tab color x1 (concurrent) | concurrent | 952 | 1012 | 1005 | 1052 | 47 | 5 |
| rg x4 patterns (sequential) | sequential | 207 | 296 | 268 | 316 | 49 | 5 |
| rg x4 patterns (concurrent) | concurrent | 172 | 180 | 179 | 185 | 6 | 5 |

Optimization Roadmap

Current Bottleneck: it2api subprocess overhead (~300ms/call)

Each it2api call spawns a fresh Python process. With 15+ alive panes needing buffer reads, that is 15 × 305 ms ≈ 4.6 s spent in process startup overhead alone.
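One way to overlap that per-call startup cost is to run the subprocess calls through a thread pool; threads suffice here because the interpreter's GIL is released while a thread waits on a child process. This is a sketch under those assumptions, and the it2api invocation in the comment is illustrative:

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_concurrent(cmds, workers=8):
    """Run external commands in parallel threads, preserving order.

    Each subprocess still pays its own startup cost, but the costs
    overlap instead of adding up: N calls take roughly
    ceil(N / workers) x per-call latency instead of N x latency.
    """
    def run_one(cmd):
        return subprocess.run(cmd, capture_output=True, text=True).stdout
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(run_one, cmds))

# Hypothetical usage for the buffer reads described above:
# buffers = run_concurrent([["it2api", "get-buffer", sid] for sid in alive])
```

This matches the measured 1.65x speedup on the rg x4 row: four independent subprocesses overlap well, while a single call (buffer reads x1) gains nothing.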

Projected: session_tool list latency

| Scenario | Buffer Reads | JSONL Version | list-sessions | Other | Total | vs Current |
|---|---|---|---|---|---|---|
| Current (it2api, sequential) | 0ms | 705ms | 302ms | 63ms | 1071ms | baseline |
| + ThreadPool parallelism | 0ms | 705ms | 302ms | 63ms | 1071ms | 1.0x |
| + MCPretentious (255x buffer) | 0ms | 705ms | 151ms | 63ms | 920ms | 1.2x |
| + direct JSONL path (no rglob) | 0ms | 50ms | 151ms | 63ms | 264ms | 4x |

Action Items (by impact)

| # | Change | Saves | Effort |
|---|---|---|---|
| 1 | MCPretentious for get-buffer + send-text: replace it2api subprocess calls in supervisor/adapters/iterm2.py with a persistent WebSocket | ~0ms (255x) | medium |
| 2 | Direct JSONL path instead of rglob: `JSONL_DIR / f"{sid}.jsonl"` instead of `rglob(f"{sid}.jsonl")` | ~655ms | trivial |
| 3 | ThreadPool for remaining it2api calls: parallelize badge/title/profile property calls that can't use MCPretentious | ~0ms (if no MCP) | easy |
| 4 | Cache claude --version: the version doesn't change within a session, so cache it for 5 min | ~53ms | trivial |
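Action item 4 can be a small time-based cache around the version lookup; `ttl_cached` is an illustrative helper, not existing code:

```python
import time

def ttl_cached(fetch, ttl=300.0):
    """Wrap a zero-arg fetch function with a time-based cache.

    The first call invokes `fetch`; later calls reuse the result
    until `ttl` seconds have elapsed, then refetch.
    """
    state = {"value": None, "expires": 0.0}

    def wrapper():
        now = time.monotonic()
        if now >= state["expires"]:
            state["value"] = fetch()
            state["expires"] = now + ttl
        return state["value"]

    return wrapper

# Hypothetical usage: wrap the subprocess call once at import time.
# get_claude_version = ttl_cached(
#     lambda: subprocess.run(["claude", "--version"],
#                            capture_output=True, text=True).stdout.strip(),
#     ttl=300)
```

A 5-minute TTL trades ~53ms per list call for at most one stale version read after an upgrade.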

Bottleneck Analysis

Primary bottleneck: it2api calls (~300ms of Python process startup per invocation).

[Charts not reproduced: Parallelism Gains (measured), MCPretentious Gains (from evaluation), Ranked by Latency (slowest first). The underlying numbers are in the tables above.]

Generated: 2026-02-06 11:08:23