Session Benchmark v3

v3: Performance profiling of session query, search, highlight, and cross-reference operations

| iTerm Panes | JSONL Files | JSONL Total | Trials/Op |
|---|---|---|---|
| 1 | 353 | 627 MB | 5 |
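Each operation below was timed over 5 trials and summarized as min/median/max/stdev. A minimal sketch of such a harness (the `bench` helper and sample workload are illustrative, not the benchmark's actual code):

```python
import statistics
import time

def bench(fn, trials=5):
    """Run fn `trials` times and return timing stats in milliseconds."""
    samples = []
    for _ in range(trials):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1000)
    return {
        "min": min(samples),
        "median": statistics.median(samples),
        "max": max(samples),
        "stdev": statistics.stdev(samples),
    }

stats = bench(lambda: sum(range(10_000)))
print(stats)
```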

Single Operations

| Operation | Min (ms) | Median (ms) | Max (ms) | Stdev | Detail |
|---|---|---|---|---|---|
| session_tool list (alive, json) | 770 | 785 | 860 | 35 | 17 lines output |
| session_tool cleanup (dry-run) | 383 | 408 | 412 | 14 | 44 lines output |
| rg -c 'model' (-Users-tchklovski-all-code-rivus/) | 50 | 58 | 354 | 133 | 2013 lines output |
| it2api get-buffer (single) | 300 | 307 | 313 | 5 | 40 lines output |
| tab color set+unset cycle | 906 | 914 | 978 | 29 | |
| highlight flow (1 pane) | 1203 | 1222 | 1263 | 23 | |
| cross-ref sessions.yaml ↔ it2api | 289 | 300 | 310 | 8 | 1 alive, 42 stale of 43 |
| find --state idle | 388 | 394 | 423 | 16 | 5 lines output |
| find 'model' (content search) | 749 | 768 | 1118 | 158 | 5 lines output |
| find 'benchmark' --all | 724 | 745 | 783 | 22 | 26 lines output |

Find Command Performance

session find uses a 3-phase search: (1) metadata match — badge + name, (2) rg content search across JSONL, (3) parallel buffer reads for state enrichment. Phases are skipped when not needed (e.g., state-only queries skip rg entirely).
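A sketch of that phase-skipping dispatch. The function and data shapes here are hypothetical stand-ins for the real `session find` code; the in-memory stubs only illustrate which phase runs when:

```python
# In-memory stand-ins for the real data sources (illustrative only).
SESSIONS = [
    {"id": "s1", "name": "model eval", "state": "idle"},
    {"id": "s2", "name": "benchmark run", "state": "busy"},
]
JSONL = {"s1": "user asked about the model", "s2": "ran benchmark trials"}

def load_metadata(include_dead=False):
    # Phase 1: sessions.yaml + it2api list-sessions + hub badges (~320 ms).
    return list(SESSIONS)

def rg_search(query):
    # Phase 2: rg across the JSONL corpus (~60 ms); returns matching ids.
    return {sid for sid, text in JSONL.items() if query in text}

def enrich_state(session):
    # Phase 3: it2api get-buffer per alive match (~320 ms each, parallelized).
    return session  # state already present in this stub

def find_sessions(query=None, state=None, include_dead=False):
    matches = load_metadata(include_dead)
    if query:  # content phase is skipped entirely for state-only queries
        hits = rg_search(query)
        matches = [m for m in matches if m["id"] in hits]
    if state is not None:  # enrichment only when a state filter needs it
        matches = [enrich_state(m) for m in matches]
        matches = [m for m in matches if m["state"] == state]
    return matches
```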

| Query | Goal | Phases Used | Min (ms) | Median (ms) | Max (ms) |
|---|---|---|---|---|---|
| find --state idle | Find all idle sessions | metadata only | 388 | 394 | 423 |
| find 'model' (content search) | Keyword search across conversations | metadata + rg | 749 | 768 | 1118 |
| find 'benchmark' --all | Search including dead sessions | metadata + rg (all) | 724 | 745 | 783 |

Find Query Anatomy

| Phase | What | Cost | When |
|---|---|---|---|
| 1. Metadata | Load sessions.yaml + it2api list-sessions + hub badges | ~320 ms | Always |
| 2. Content | rg across 620 MB JSONL | ~60 ms | Only if a query is provided |
| 3. Enrich | Parallel it2api get-buffer for alive matches | N × 320 ms / threads | Only if state filter or permission detection is needed |

With N panes: Phase 3 is the parallelism opportunity. Sequential: N × 320ms. ThreadPool(8): ceil(N/8) × 320ms. MCPretentious: N × 1.2ms.

| Scenario | 1 pane | 5 panes | 15 panes |
|---|---|---|---|
| Sequential it2api | 320 ms | 1,600 ms | 4,800 ms |
| ThreadPool(8) | 320 ms | 320 ms | 640 ms |
| MCPretentious | 1 ms | 6 ms | 18 ms |
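The scaling above follows directly from the per-call costs, treated here as constants (320 ms per it2api subprocess, 1.2 ms per MCPretentious call over its persistent connection):

```python
import math

IT2API_MS = 320  # per-call subprocess cost (measured median ~300-320 ms)
MCP_MS = 1.2     # per-call cost over a persistent WebSocket
THREADS = 8

def sequential_ms(n):
    """Sequential it2api: per-call costs sum."""
    return n * IT2API_MS

def threadpool_ms(n, threads=THREADS):
    """ThreadPool: calls run in waves of `threads`, one call-cost per wave."""
    return math.ceil(n / threads) * IT2API_MS

def mcp_ms(n):
    """MCPretentious: tiny per-call cost, still linear in n."""
    return n * MCP_MS

for n in (1, 5, 15):
    print(n, sequential_ms(n), threadpool_ms(n), round(mcp_ms(n)))
```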

Parallelism Comparison (Sequential vs Concurrent)

| Operation | Sequential (ms) | Concurrent (ms) | Speedup |
|---|---|---|---|
| buffer reads x1 | 290 | 306 | 0.95x |
| tab color x1 | 936 | 980 | 0.96x |
| rg x4 patterns | 207 | 160 | 1.29x |
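The concurrent trials use a plain thread pool around subprocess calls; threads only help when several child processes can genuinely overlap (as with rg over four patterns), not when a single call dominates. A minimal sketch of that pattern, with no-op children as placeholder workloads:

```python
import subprocess
import sys
import time
from concurrent.futures import ThreadPoolExecutor

def run(cmd):
    """Run one external command (stand-in for an rg / it2api invocation)."""
    subprocess.run(cmd, capture_output=True, check=False)

# Four no-op children stand in for `rg PATTERN` over the JSONL corpus.
cmds = [[sys.executable, "-c", "pass"]] * 4

start = time.perf_counter()
for cmd in cmds:  # sequential: per-call costs sum
    run(cmd)
seq_ms = (time.perf_counter() - start) * 1000

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:  # concurrent: calls overlap
    list(pool.map(run, cmds))
conc_ms = (time.perf_counter() - start) * 1000

print(f"sequential {seq_ms:.0f} ms, concurrent {conc_ms:.0f} ms")
```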

session_tool list Breakdown (why can it take 14+ seconds?)

Each component of get_enriched_sessions() was measured independently. With only one alive pane the components sum to ~1.3 s; the 14+ second wall times occur when many panes are alive, since sequential it2api get-buffer adds ~300 ms per alive pane (43 panes ≈ 13 s on top of the fixed costs below).

| Component | Median (ms) | % of Total |
|---|---|---|
| claude --version | 52 | 4.1% |
| it2api list-sessions | 290 | 22.8% |
| JSONL rglob+version (1 session) | 13 | 1.0% |
| JSONL rglob+version (43 sessions) | 615 | 48.5% |
| it2api get-buffer x1 seq (all alive) | 298 | 23.5% |
| Hub DB badge load | 1 | 0.1% |
| JSONL wait state detect (1 session) | 1 | 0.0% |
| SUM (estimated) | 1269 | 100% |

All Results (raw)

| Operation | Category | Min | Median | Mean | Max | Stdev | Trials |
|---|---|---|---|---|---|---|---|
| session_tool list (alive, json) | sequential | 770 | 785 | 802 | 860 | 35 | 5 |
| session_tool cleanup (dry-run) | sequential | 383 | 408 | 399 | 412 | 14 | 5 |
| rg -c 'model' (-Users-tchklovski-all-code-rivus/) | sequential | 50 | 58 | 118 | 354 | 133 | 5 |
| it2api get-buffer (single) | sequential | 300 | 307 | 308 | 313 | 5 | 5 |
| tab color set+unset cycle | sequential | 906 | 914 | 929 | 978 | 29 | 5 |
| highlight flow (1 pane) | sequential | 1203 | 1222 | 1228 | 1263 | 23 | 5 |
| cross-ref sessions.yaml ↔ it2api | sequential | 289 | 300 | 301 | 310 | 8 | 5 |
| find --state idle | sequential | 388 | 394 | 402 | 423 | 16 | 5 |
| find 'model' (content search) | sequential | 749 | 768 | 837 | 1118 | 158 | 5 |
| find 'benchmark' --all | sequential | 724 | 745 | 747 | 783 | 22 | 5 |
| buffer reads x1 (sequential) | sequential | 287 | 290 | 293 | 307 | 8 | 5 |
| buffer reads x1 (concurrent) | concurrent | 294 | 306 | 307 | 330 | 14 | 5 |
| tab color x1 (sequential) | sequential | 933 | 936 | 945 | 973 | 17 | 5 |
| tab color x1 (concurrent) | concurrent | 948 | 980 | 973 | 985 | 16 | 5 |
| rg x4 patterns (sequential) | sequential | 200 | 207 | 220 | 254 | 23 | 5 |
| rg x4 patterns (concurrent) | concurrent | 159 | 160 | 161 | 163 | 2 | 5 |

Optimization Roadmap

Current Bottleneck: it2api subprocess overhead (~300ms/call)

Each it2api call spawns a fresh Python process. With 15+ alive panes needing buffer reads, that's 15 × ~300 ms ≈ 4.5 s just in process startup overhead.
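That startup cost is easy to confirm by timing a no-op child process. The helper below is illustrative; a bare interpreter is far cheaper than the measured ~300 ms, the rest being it2api's import and connection work:

```python
import subprocess
import sys
import time

def spawn_cost_ms(argv, trials=5):
    """Median wall time (ms) of spawning argv as a child process."""
    samples = []
    for _ in range(trials):
        start = time.perf_counter()
        subprocess.run(argv, capture_output=True, check=True)
        samples.append((time.perf_counter() - start) * 1000)
    samples.sort()
    return samples[len(samples) // 2]

# A no-op Python child: pure interpreter startup, no it2api imports.
baseline = spawn_cost_ms([sys.executable, "-c", "pass"])
print(f"bare interpreter startup: {baseline:.0f} ms")
```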

Projected: session_tool list latency

| Scenario | Buffer Reads | JSONL Version | list-sessions | Other | Total | vs Current |
|---|---|---|---|---|---|---|
| Current (it2api, sequential) | 298 ms | 615 ms | 290 ms | 62 ms | 1265 ms | baseline |
| + ThreadPool parallelism | 99 ms | 615 ms | 290 ms | 62 ms | 1066 ms | 1.2x |
| + MCPretentious (255x buffer) | 1 ms | 615 ms | 145 ms | 62 ms | 823 ms | 1.5x |
| + direct JSONL path (no rglob) | 1 ms | 50 ms | 145 ms | 62 ms | 258 ms | 5x |
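The projected totals are just component sums under the stated per-component estimates, and the speedups fall out directly:

```python
# Per-scenario component estimates (ms), taken from the projection table.
COMPONENTS = {
    "current":           {"buffer": 298, "jsonl": 615, "list": 290, "other": 62},
    "threadpool":        {"buffer": 99,  "jsonl": 615, "list": 290, "other": 62},
    "mcpretentious":     {"buffer": 1,   "jsonl": 615, "list": 145, "other": 62},
    "direct_jsonl_path": {"buffer": 1,   "jsonl": 50,  "list": 145, "other": 62},
}

def total_ms(name):
    return sum(COMPONENTS[name].values())

baseline = total_ms("current")
for name in COMPONENTS:
    print(f"{name}: {total_ms(name)} ms ({baseline / total_ms(name):.1f}x)")
```

The last row works out to 1265/258 ≈ 4.9x, which the roadmap rounds to 5x.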

Action Items (by impact)

| # | Change | Saves | Effort |
|---|---|---|---|
| 1 | MCPretentious for get-buffer + send-text: replace it2api subprocess calls in supervisor/adapters/iterm2.py with a persistent WebSocket | ~297 ms (255x) | medium |
| 2 | Direct JSONL path instead of rglob: JSONL_DIR / f"{sid}.jsonl" instead of rglob(f"{sid}.jsonl") | ~565 ms | trivial |
| 3 | ThreadPool for remaining it2api calls: parallelize badge/title/profile property calls that can't use MCPretentious | ~199 ms (if no MCP) | easy |
| 4 | Cache claude --version: the version doesn't change within a session, so cache it for 5 min | ~52 ms | trivial |
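Action item 2 is a one-line change in spirit: build the path instead of scanning for it. A sketch under assumptions — the JSONL_DIR value and the nested-layout fallback are illustrative, not the tool's actual configuration:

```python
from pathlib import Path
from typing import Optional

JSONL_DIR = Path.home() / ".claude" / "projects"  # assumed location

def session_jsonl(sid: str, base: Path = JSONL_DIR) -> Optional[Path]:
    """Resolve a session's JSONL file without walking the whole tree."""
    direct = base / f"{sid}.jsonl"
    if direct.exists():  # one stat() call instead of an O(files) rglob
        return direct
    # Fall back to the slow recursive scan only when the flat path misses.
    return next(base.rglob(f"{sid}.jsonl"), None)
```

The direct `exists()` check is what turns the ~615 ms rglob component into the projected ~50 ms.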

Bottleneck Analysis

[Charts not captured in this export: primary bottleneck (it2api calls); parallelism gains (measured); MCPretentious gains (from evaluation); operations ranked by latency (slowest first)]

Generated: 2026-02-06 11:38:27