Session Benchmark v2

v2: Performance profiling of session query, search, highlight, and cross-reference operations

- iTerm panes: 1
- JSONL files: 353
- JSONL total: 624 MB
- Trials per operation: 5

Single Operations

| Operation | Min (ms) | Median (ms) | Max (ms) | Stdev | Detail |
|---|---|---|---|---|---|
| session_tool list (alive, json) | 782 | 800 | 991 | 90 | 17 lines output |
| session_tool cleanup (dry-run) | 406 | 417 | 420 | 6 | 44 lines output |
| rg -c 'model' (-Users-tchklovski-all-code-rivus/) | 56 | 61 | 407 | 155 | 2013 lines output |
| it2api get-buffer (single) | 298 | 312 | 353 | 25 | 40 lines output |
| tab color set+unset cycle | 934 | 1020 | 1146 | 85 | |
| highlight flow (1 pane) | 1287 | 1347 | 1391 | 42 | |
| cross-ref sessions.yaml ↔ it2api | 305 | 315 | 333 | 10 | 1 alive, 42 stale of 43 |
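These per-operation statistics (min/median/max/stdev over 5 trials) can be collected with a small harness. A minimal sketch, not the benchmark's actual code; the command lists are illustrative:

```python
import statistics
import subprocess
import time

def bench(cmd: list[str], trials: int = 5) -> dict:
    """Run a shell command `trials` times and report latency stats in ms."""
    samples = []
    for _ in range(trials):
        start = time.perf_counter()
        subprocess.run(cmd, capture_output=True, check=False)
        samples.append((time.perf_counter() - start) * 1000)
    return {
        "min": min(samples),
        "median": statistics.median(samples),
        "mean": statistics.fmean(samples),
        "max": max(samples),
        "stdev": statistics.stdev(samples),
        "trials": trials,
    }

# e.g. bench(["it2api", "get-buffer"]) or bench(["session_tool", "list"])
```

Using `time.perf_counter()` rather than `time.time()` avoids clock-adjustment artifacts in sub-second measurements.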

Parallelism Comparison (Sequential vs Concurrent)

| Operation | Sequential (ms) | Concurrent (ms) | Speedup |
|---|---|---|---|
| buffer reads x1 | 311 | 311 | 1.00x |
| tab color x1 | 966 | 996 | 0.97x |
| rg x4 patterns | 256 | 183 | 1.40x |
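The rg x4 comparison above can be reproduced with a ThreadPoolExecutor. A sketch under assumptions: the pattern list is illustrative, and the demo is skipped when ripgrep is not installed:

```python
import shutil
import subprocess
import time
from concurrent.futures import ThreadPoolExecutor

PATTERNS = ["model", "error", "session", "token"]  # illustrative patterns

def rg_count(pattern: str, path: str = ".") -> None:
    # rg -c prints per-file match counts; only the wall time matters here
    subprocess.run(["rg", "-c", pattern, path], capture_output=True)

def timed_ms(fn) -> float:
    start = time.perf_counter()
    fn()
    return (time.perf_counter() - start) * 1000

if shutil.which("rg"):  # skip the demo when ripgrep is unavailable
    seq_ms = timed_ms(lambda: [rg_count(p) for p in PATTERNS])
    with ThreadPoolExecutor(max_workers=len(PATTERNS)) as pool:
        conc_ms = timed_ms(lambda: list(pool.map(rg_count, PATTERNS)))
    print(f"seq {seq_ms:.0f}ms  conc {conc_ms:.0f}ms  {seq_ms / conc_ms:.2f}x")
```

Threads suffice here because the work is subprocess I/O, not Python CPU time, so the GIL is not a constraint.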

session_tool list Breakdown (where does the time go?)

Each component of get_enriched_sessions() was measured independently. The components sum to ~1.4 s, accounting for the wall time of session_tool list.

| Component | Median (ms) | % of Total |
|---|---|---|
| claude --version | 51 | 3.7% |
| it2api list-sessions | 320 | 23.0% |
| JSONL rglob+version (1 session) | 17 | 1.2% |
| JSONL rglob+version (43 sessions) | 688 | 49.4% |
| it2api get-buffer x1 seq (all alive) | 313 | 22.5% |
| Hub DB badge load | 1 | 0.1% |
| JSONL wait state detect (1 session) | 1 | 0.0% |
| SUM (estimated) | 1391 | 100% |
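Per-component timings like these can be gathered with a small context manager wrapped around each step. A hypothetical sketch (the component names and the sleep stand-in are assumptions, not the real instrumentation):

```python
import time
from contextlib import contextmanager

timings: dict[str, float] = {}

@contextmanager
def component(name: str):
    """Record the wall time of one named component in `timings`, in ms."""
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[name] = (time.perf_counter() - start) * 1000

# Stand-in for one step of a hypothetical get_enriched_sessions():
with component("it2api list-sessions"):
    time.sleep(0.01)  # placeholder for the real subprocess call

total = sum(timings.values())
for name, ms in sorted(timings.items(), key=lambda kv: -kv[1]):
    print(f"{name:40s} {ms:8.1f}ms  {ms / total:6.1%}")
```

The try/finally ensures a component is still recorded if its body raises, which keeps the breakdown honest under partial failures.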

All Results (raw)

| Operation | Category | Min (ms) | Median (ms) | Mean (ms) | Max (ms) | Stdev | Trials |
|---|---|---|---|---|---|---|---|
| session_tool list (alive, json) | sequential | 782 | 800 | 848 | 991 | 90 | 5 |
| session_tool cleanup (dry-run) | sequential | 406 | 417 | 414 | 420 | 6 | 5 |
| rg -c 'model' (-Users-tchklovski-all-code-rivus/) | sequential | 56 | 61 | 129 | 407 | 155 | 5 |
| it2api get-buffer (single) | sequential | 298 | 312 | 325 | 353 | 25 | 5 |
| tab color set+unset cycle | sequential | 934 | 1020 | 1013 | 1146 | 85 | 5 |
| highlight flow (1 pane) | sequential | 1287 | 1347 | 1338 | 1391 | 42 | 5 |
| cross-ref sessions.yaml ↔ it2api | sequential | 305 | 315 | 317 | 333 | 10 | 5 |
| buffer reads x1 (sequential) | sequential | 306 | 311 | 316 | 330 | 11 | 5 |
| buffer reads x1 (concurrent) | concurrent | 299 | 311 | 310 | 319 | 8 | 5 |
| tab color x1 (sequential) | sequential | 956 | 966 | 996 | 1100 | 60 | 5 |
| tab color x1 (concurrent) | concurrent | 961 | 996 | 986 | 1008 | 20 | 5 |
| rg x4 patterns (sequential) | sequential | 204 | 256 | 270 | 351 | 54 | 5 |
| rg x4 patterns (concurrent) | concurrent | 173 | 183 | 184 | 199 | 9 | 5 |

Optimization Roadmap

Current Bottleneck: it2api subprocess overhead (~300ms/call)

Each it2api call spawns a fresh Python process. With 15+ alive panes needing buffer reads, that is 15 × ~300 ms ≈ 4.5 s spent on process startup overhead alone.

Projected: session_tool list latency

| Scenario | Buffer Reads | JSONL Version | list-sessions | Other | Total | vs Current |
|---|---|---|---|---|---|---|
| Current (it2api, sequential) | 313ms | 688ms | 320ms | 61ms | 1382ms | baseline |
| + ThreadPool parallelism | 104ms | 688ms | 320ms | 61ms | 1173ms | 1.2x |
| + MCPretentious (255x buffer) | 1ms | 688ms | 160ms | 61ms | 910ms | 1.5x |
| + direct JSONL path (no rglob) | 1ms | 50ms | 160ms | 61ms | 272ms | 5x |
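The projected totals are simple component sums against the breakdown medians. A sketch reproducing the table's arithmetic (component names here are illustrative labels, not code identifiers from the project):

```python
# Component medians (ms) from the breakdown above
BASELINE = {"buffer_reads": 313, "jsonl_version": 688,
            "list_sessions": 320, "other": 61}

SCENARIOS = {
    "Current (it2api, sequential)": dict(BASELINE),
    "+ ThreadPool parallelism": {**BASELINE, "buffer_reads": 104},
    "+ MCPretentious (255x buffer)": {**BASELINE, "buffer_reads": 1,
                                      "list_sessions": 160},
    "+ direct JSONL path (no rglob)": {"buffer_reads": 1, "jsonl_version": 50,
                                       "list_sessions": 160, "other": 61},
}

base_total = sum(BASELINE.values())  # 1382 ms
for name, parts in SCENARIOS.items():
    total = sum(parts.values())
    print(f"{name:32s} {total:5d}ms  {base_total / total:.1f}x vs current")
```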

Action Items (by impact)

| # | Change | Saves | Effort |
|---|---|---|---|
| 1 | MCPretentious for get-buffer + send-text: replace it2api subprocess calls in supervisor/adapters/iterm2.py with a persistent WebSocket | ~312ms (255x) | medium |
| 2 | Direct JSONL path instead of rglob: `JSONL_DIR / f"{sid}.jsonl"` instead of `rglob(f"{sid}.jsonl")` | ~638ms | trivial |
| 3 | ThreadPool for remaining it2api calls: parallelize badge/title/profile property calls that can't use MCPretentious | ~209ms (if no MCP) | easy |
| 4 | Cache claude --version: the version doesn't change within a session, so cache it for 5 min | ~51ms | trivial |

Bottleneck Analysis

(Chart panels not reproduced here: "Primary bottleneck: it2api calls", "Parallelism Gains (measured)", "MCPretentious Gains (from evaluation)", "Ranked by Latency (slowest first)". The underlying numbers appear in the tables above.)

Generated: 2026-02-06 11:26:34