# Interpretation Viewer — Implementation Plan

> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.

**Goal:** Add an "Interpret" page to ngstudio where users type NL, see the resolved recipe as a D2 diagram + YAML + prompt previews, and adjust via NL.

**Architecture:** New `vario/viz.py` for pure rendering functions (recipe→D2, recipe→prompts). New `vario/ui_interpret.py` for the Gradio page. Wire into existing `app.py` via Gradio tab navigation. Adjust flow calls LLM to modify the current resolved YAML spec.

**Tech Stack:** Gradio 5 (tabs, accordions), D2 via Kroki (`vario/render_d2.py`), existing `interpret()` from `vario/interpret.py`, YAML for recipe display.
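`vario/render_d2.py` already wraps the Kroki call; for reference, Kroki's GET endpoint expects the diagram source deflate-compressed and base64url-encoded in the URL path. A minimal sketch (the `kroki_get_url` helper is hypothetical and assumes the public kroki.io server; the plan's code does not depend on it):

```python
import base64
import zlib


def kroki_get_url(d2_source: str, fmt: str = "svg", server: str = "https://kroki.io") -> str:
    """Build a Kroki GET URL: <server>/d2/<fmt>/<deflate+base64url payload>."""
    payload = base64.urlsafe_b64encode(
        zlib.compress(d2_source.encode("utf-8"), 9)
    ).decode("ascii")
    return f"{server}/d2/{fmt}/{payload}"
```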

---

### Task 1: viz.py — recipe_to_d2()

**Files:**
- Create: `vario/viz.py`
- Test: `vario/tests/test_viz.py`

**Step 1: Write the failing test**

```python
# vario/tests/test_viz.py
from vario.executor import Stage
from vario.viz import recipe_to_d2, LabeledStage


def test_recipe_to_d2_linear():
    """Linear pipeline: produce -> score -> reduce."""
    stages = [
        Stage(type="produce", params={"n": 3, "models": ["haiku", "gpt-mini"]}),
        Stage(type="score", params={"intent": "score", "model": "haiku", "rubric": ["correctness"]}),
        Stage(type="reduce", params={"method": "top_k", "k": 1}),
    ]
    d2 = recipe_to_d2(stages)
    # Should contain labeled nodes A, B, C
    assert "A" in d2
    assert "B" in d2
    assert "C" in d2
    # Should contain stage types
    assert "produce" in d2
    assert "score" in d2
    assert "reduce" in d2
    # Should have edges
    assert "->" in d2


def test_recipe_to_d2_with_repeat():
    """Pipeline with repeat meta-stage."""
    stages = [
        Stage(type="produce", params={"n": 3}),
        Stage(type="repeat", params={"max_rounds": 8}, stages=[
            Stage(type="score", params={"model": "sonnet"}),
            Stage(type="revise", params={"model": "sonnet"}),
        ]),
        Stage(type="reduce", params={"method": "top_k", "k": 1}),
    ]
    d2 = recipe_to_d2(stages)
    # Repeat should be a container
    assert "repeat" in d2.lower()
    # Sub-stages should exist
    assert "score" in d2
    assert "revise" in d2
```

**Step 2: Run test to verify it fails**

Run: `cd /Users/tchklovski/all-code/rivus && python -m pytest vario/tests/test_viz.py -v`
Expected: FAIL — `ModuleNotFoundError: No module named 'vario.viz'`

**Step 3: Write minimal implementation**

```python
# vario/viz.py
"""Visualization helpers — recipe to D2 diagram, prompt previews."""

from __future__ import annotations

import string
from dataclasses import dataclass
from typing import Any

from vario.executor import Stage


@dataclass
class LabeledStage:
    """A stage with a letter label (A, B, C...)."""
    label: str
    stage: Stage
    sub_stages: list[LabeledStage]


def label_stages(stages: list[Stage]) -> list[LabeledStage]:
    """Attach A, B, C... labels to stages. Sub-stages get parent.child labels."""
    result = []
    for i, stage in enumerate(stages):
        letter = string.ascii_uppercase[i] if i < 26 else f"S{i}"
        subs = []
        for j, sub in enumerate(stage.stages):
            sub_letter = f"{letter}{j + 1}"
            subs.append(LabeledStage(label=sub_letter, stage=sub, sub_stages=[]))
        result.append(LabeledStage(label=letter, stage=stage, sub_stages=subs))
    return result


def _stage_box_label(ls: LabeledStage) -> str:
    """Build the D2 node label: letter, type, key params."""
    s = ls.stage
    parts = [f"{ls.label}: {s.type}"]

    match s.type:
        case "produce":
            models = s.params.get("models", [])
            n = s.params.get("n", "")
            if models:
                parts.append(", ".join(models))
            if n:
                parts.append(f"n={n}")
        case "score":
            model = s.params.get("model", "")
            intent = s.params.get("intent", "score")
            parts.append(f"{intent} ({model})" if model else intent)
        case "revise":
            model = s.params.get("model", "")
            if model:
                parts.append(model)
        case "reduce":
            method = s.params.get("method", "top_k")
            k = s.params.get("k", "")
            parts.append(f"{method}" + (f"(k={k})" if k else ""))
        case "repeat":
            max_r = s.params.get("max_rounds", 8)
            parts.append(f"max {max_r} rounds")

    return "\\n".join(parts)


def _edge_annotation(from_stage: LabeledStage, to_stage: LabeledStage) -> str:
    """Describe data shape change on the edge."""
    f, t = from_stage.stage, to_stage.stage
    match (f.type, t.type):
        case ("produce", _):
            models = f.params.get("models", ["?"])
            n = f.params.get("n", 1)
            count = len(models) * n
            return f"{count} Things"
        case (_, "reduce"):
            return ""
        case ("reduce", _):
            k = f.params.get("k", 1)
            method = f.params.get("method", "top_k")
            if method == "top_k":
                return f"{k} Thing{'s' if k > 1 else ''}"
            return ""
        case ("score", _):
            return "+score, +reason"
        case _:
            return ""


def recipe_to_d2(stages: list[Stage]) -> str:
    """Convert a list of stages to a D2 diagram source string."""
    labeled = label_stages(stages)
    lines: list[str] = []

    for ls in labeled:
        node_id = ls.label
        box_label = _stage_box_label(ls)

        if ls.stage.type == "repeat" and ls.sub_stages:
            # Container for repeat. Labels are quoted: they contain colons
            # (e.g. "A: repeat"), which would otherwise confuse the D2 parser.
            lines.append(f'{node_id}: "{box_label}" {{')
            lines.append("  style.border-radius: 8")
            for sub in ls.sub_stages:
                sub_label = _stage_box_label(sub)
                lines.append(f'  {sub.label}: "{sub_label}"')
            # Edges within repeat
            for k in range(len(ls.sub_stages) - 1):
                a, b = ls.sub_stages[k], ls.sub_stages[k + 1]
                ann = _edge_annotation(a, b)
                edge = f"  {a.label} -> {b.label}"
                if ann:
                    edge += f": {ann}"
                lines.append(edge)
            # Loop-back arrow
            if len(ls.sub_stages) >= 2:
                last = ls.sub_stages[-1].label
                first = ls.sub_stages[0].label
                lines.append(f"  {last} -> {first}: next round {{")
                lines.append("    style.stroke-dash: 5")
                lines.append("  }")
            lines.append("}")
        else:
            lines.append(f'{node_id}: "{box_label}"')

    # Top-level edges between stages
    for i in range(len(labeled) - 1):
        a, b = labeled[i], labeled[i + 1]
        ann = _edge_annotation(a, b)
        edge = f"{a.label} -> {b.label}"
        if ann:
            edge += f": {ann}"
        lines.append(edge)

    return "\n".join(lines)
```
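The labeling scheme can be exercised on its own. A minimal sketch mirroring the rules in `label_stages()` (the `letter_label`/`sub_label` helpers are illustrative, not part of `viz.py`):

```python
import string


def letter_label(i: int) -> str:
    # Top-level rule: A-Z for the first 26 stages, then S26, S27, ...
    return string.ascii_uppercase[i] if i < 26 else f"S{i}"


def sub_label(parent: str, j: int) -> str:
    # Sub-stages inside a repeat container get parent-prefixed labels: B1, B2, ...
    return f"{parent}{j + 1}"


print([letter_label(i) for i in (0, 2, 25, 26)])  # → ['A', 'C', 'Z', 'S26']
print(sub_label("B", 0), sub_label("B", 1))       # → B1 B2
```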

**Step 4: Run test to verify it passes**

Run: `cd /Users/tchklovski/all-code/rivus && python -m pytest vario/tests/test_viz.py -v`
Expected: PASS

**Step 5: Commit**

```bash
git add vario/viz.py vario/tests/test_viz.py
git commit -m "feat(ng): recipe_to_d2 — D2 diagram generation from recipe stages"
```

---

### Task 2: viz.py — recipe_to_prompts()

**Files:**
- Modify: `vario/viz.py`
- Modify: `vario/tests/test_viz.py`

**Step 1: Write the failing test**

```python
# append to vario/tests/test_viz.py
from vario.viz import recipe_to_prompts, StagePrompt


def test_recipe_to_prompts_produce():
    """Produce stage has no prompt preview (it uses ctx.problem directly)."""
    stages = [Stage(type="produce", params={"n": 3, "models": ["haiku"]})]
    prompts = recipe_to_prompts(stages, problem="What is 2+2?")
    assert len(prompts) == 1
    assert prompts[0].label == "A"
    assert "What is 2+2?" in prompts[0].user_prompt


def test_recipe_to_prompts_score():
    """Score stage shows the judge prompt template."""
    stages = [
        Stage(type="produce", params={"n": 1}),
        Stage(type="score", params={"intent": "score", "model": "haiku", "rubric": ["correctness"], "feedback": True}),
    ]
    prompts = recipe_to_prompts(stages, problem="Solve x+1=3")
    score_prompt = prompts[1]
    assert score_prompt.label == "B"
    assert "correctness" in score_prompt.user_prompt
    assert "Solve x+1=3" in score_prompt.user_prompt


def test_recipe_to_prompts_revise():
    """Revise stage shows the revision prompt template."""
    stages = [
        Stage(type="produce", params={}),
        Stage(type="revise", params={"model": "opus", "system": "You are a careful reviewer."}),
    ]
    prompts = recipe_to_prompts(stages, problem="Write a poem")
    revise_prompt = prompts[1]
    assert revise_prompt.label == "B"
    assert "Write a poem" in revise_prompt.user_prompt
    assert revise_prompt.system_prompt == "You are a careful reviewer."
```

**Step 2: Run test to verify it fails**

Run: `cd /Users/tchklovski/all-code/rivus && python -m pytest vario/tests/test_viz.py::test_recipe_to_prompts_produce -v`
Expected: FAIL — `ImportError: cannot import name 'recipe_to_prompts'`

**Step 3: Write minimal implementation**

Add to `vario/viz.py`:

```python
@dataclass
class StagePrompt:
    """Prompt preview for a stage."""
    label: str
    stage_type: str
    model: str
    system_prompt: str
    user_prompt: str


def recipe_to_prompts(stages: list[Stage], problem: str = "<your problem>") -> list[StagePrompt]:
    """Generate prompt previews for each stage.

    These are templates — actual runtime prompts depend on Thing content,
    so we show the structure with placeholders.
    """
    labeled = label_stages(stages)
    result: list[StagePrompt] = []

    for ls in labeled:
        s = ls.stage

        match s.type:
            case "produce":
                models = s.params.get("models", ["sonnet"])
                result.append(StagePrompt(
                    label=ls.label,
                    stage_type="produce",
                    model=", ".join(models),
                    system_prompt="",
                    user_prompt=problem,
                ))

            case "score":
                model = s.params.get("model", "haiku")
                intent = s.params.get("intent", "score")
                rubric = s.params.get("rubric", [])
                feedback = s.params.get("feedback", True)

                if intent in ("score", "score+verify"):
                    criteria = (
                        "Evaluate against these criteria:\n"
                        + "\n".join(f"{i+1}. {c}" for i, c in enumerate(rubric))
                        if rubric
                        else "Evaluate for correctness, clarity, and completeness."
                    )
                    dim_examples = ", ".join(f'"{d}": <0-100>' for d in rubric[:3])
                    # When there is no rubric, skip per-dimension keys so the
                    # schema doesn't list "score" twice.
                    schema = "{" + (f"{dim_examples}, " if dim_examples else "") + '"score": <overall 0-100>'
                    if feedback:
                        schema += ', "reason": "<feedback text>"'
                    schema += "}"

                    user = (
                        f"You are evaluating an answer to a problem.\n\n"
                        f"Problem:\n{problem}\n\n"
                        f"Answer:\n<previous stage output>\n\n"
                        f"{criteria}\n\n"
                        f"Output schema: {schema}\n\n"
                        f"Respond ONLY with valid JSON matching the schema above."
                    )
                else:  # verify
                    user = (
                        f"You are verifying an answer to a problem.\n\n"
                        f"Problem:\n{problem}\n\n"
                        f"Answer to verify:\n<previous stage output>\n\n"
                        f"Verify this answer step by step:\n"
                        f"1. Check each step of reasoning for correctness.\n"
                        f"2. Verify any calculations or logical deductions.\n"
                        f"3. Check that the final answer follows from the reasoning.\n\n"
                        f"Respond in this format:\n"
                        f"Verification: <your step-by-step verification>\n"
                        f"Verdict: CORRECT or INCORRECT\n"
                        f"Confidence: <number 0-100>"
                    )

                result.append(StagePrompt(
                    label=ls.label,
                    stage_type="score",
                    model=model,
                    system_prompt="",
                    user_prompt=user,
                ))

            case "revise":
                model = s.params.get("model", "sonnet")
                system = s.params.get("system", "")
                user = (
                    f"You are improving an answer to a problem based on feedback.\n\n"
                    f"Problem:\n{problem}\n\n"
                    f"Previous answer:\n<previous stage output>\n\n"
                    f"<feedback from score stage>\n\n"
                    f"Generate an improved answer that addresses the identified issues.\n"
                    f"Be thorough and ensure your reasoning is correct step by step.\n\n"
                    f"Improved answer:"
                )
                result.append(StagePrompt(
                    label=ls.label,
                    stage_type="revise",
                    model=model,
                    system_prompt=system,
                    user_prompt=user,
                ))

            case "reduce":
                method = s.params.get("method", "top_k")
                model = s.params.get("model", "sonnet")
                if method == "combine":
                    user = (
                        f"You are synthesizing the best elements from multiple approaches.\n\n"
                        f"Problem:\n{problem}\n\n"
                        f"<N approaches from previous stage>\n\n"
                        f"Synthesize the best elements into a single superior answer."
                    )
                else:
                    user = f"Mechanical: {method} (no LLM call)"
                result.append(StagePrompt(
                    label=ls.label,
                    stage_type="reduce",
                    model=model if method == "combine" else "",
                    system_prompt="",
                    user_prompt=user,
                ))

            case "repeat":
                # Show sub-stage prompts
                if ls.sub_stages:
                    sub_prompts = recipe_to_prompts(
                        [sub.stage for sub in ls.sub_stages],
                        problem=problem,
                    )
                    # Re-label with parent prefix
                    for sp, sub_ls in zip(sub_prompts, ls.sub_stages):
                        sp.label = sub_ls.label
                    result.extend(sub_prompts)
                continue  # don't append the repeat itself

    return result
```

**Step 4: Run tests to verify they pass**

Run: `cd /Users/tchklovski/all-code/rivus && python -m pytest vario/tests/test_viz.py -v`
Expected: PASS

**Step 5: Commit**

```bash
git add vario/viz.py vario/tests/test_viz.py
git commit -m "feat(ng): recipe_to_prompts — prompt preview generation per stage"
```

---

### Task 3: viz.py — resolve_interpretation()

**Files:**
- Modify: `vario/viz.py`
- Modify: `vario/tests/test_viz.py`

This is the glue function: it takes an `InterpretResult`, loads the recipe, applies model overrides, and returns everything the UI needs.
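The override step is a copy-then-merge. A self-contained sketch of the pattern (the `merge_overrides` helper is hypothetical; `_apply_overrides` in Step 3 additionally pushes `models`/`n` down into produce stages):

```python
import copy


def merge_overrides(params: dict, overrides: dict) -> dict:
    # Deep-copy first so the caller's recipe params are never mutated,
    # then apply only the overrides that were actually provided (non-None).
    merged = copy.deepcopy(params)
    merged.update({k: v for k, v in overrides.items() if v is not None})
    return merged


print(merge_overrides({"n": 3, "models": ["haiku"]}, {"n": 5, "temperature": None}))
# → {'n': 5, 'models': ['haiku']}
```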

**Step 1: Write the failing test**

```python
# append to vario/tests/test_viz.py
from vario.viz import resolve_interpretation
from vario.interpret import InterpretResult


def test_resolve_interpretation():
    """resolve_interpretation returns diagram, prompts, yaml, and labeled stages."""
    ir = InterpretResult(
        recipe="best_of_n",
        models=["haiku", "gpt-mini"],
        n=3,
        reasoning="test",
    )
    resolved = resolve_interpretation(ir, problem="What is 2+2?")
    assert resolved.d2_source  # non-empty D2
    assert resolved.yaml_text  # non-empty YAML
    assert len(resolved.prompts) > 0
    assert len(resolved.labeled_stages) > 0
    assert resolved.labeled_stages[0].label == "A"
```

**Step 2: Run test to verify it fails**

Run: `cd /Users/tchklovski/all-code/rivus && python -m pytest vario/tests/test_viz.py::test_resolve_interpretation -v`
Expected: FAIL — `ImportError: cannot import name 'resolve_interpretation'`

**Step 3: Write minimal implementation**

Add to `vario/viz.py`:

```python
import copy
import yaml

from vario.executor import Recipe, load_recipe
from vario.interpret import InterpretResult


@dataclass
class ResolvedInterpretation:
    """Everything the UI needs to render an interpretation."""
    recipe: Recipe
    labeled_stages: list[LabeledStage]
    d2_source: str
    yaml_text: str
    prompts: list[StagePrompt]
    interpret_result: InterpretResult


def _apply_overrides(recipe: Recipe, ir: InterpretResult) -> Recipe:
    """Apply InterpretResult overrides (models, n, temperature) to recipe."""
    recipe = copy.deepcopy(recipe)
    if ir.models:
        recipe.params["models"] = ir.models
    if ir.n is not None:
        recipe.params["n"] = ir.n
    if ir.temperature is not None:
        recipe.params["temperature"] = ir.temperature

    # Propagate model override into produce stages that don't have explicit models
    for stage in recipe.stages:
        if stage.type == "produce" and "models" not in stage.params and ir.models:
            stage.params["models"] = ir.models
        if stage.type == "produce" and "n" not in stage.params and ir.n is not None:
            stage.params["n"] = ir.n

    return recipe


def _recipe_to_yaml(recipe: Recipe, labeled: list[LabeledStage]) -> str:
    """Render recipe as YAML with stage letter comments."""

    def _stage_dict(ls: LabeledStage) -> dict:
        d: dict[str, Any] = {"type": ls.stage.type}
        if ls.stage.params:
            d["params"] = dict(ls.stage.params)
        if ls.sub_stages:
            d["stages"] = [_stage_dict(sub) for sub in ls.sub_stages]
        # Inject comment as a key (yaml doesn't support comments easily,
        # so we add the label as a key)
        d["_label"] = ls.label
        return d

    stages_data = [_stage_dict(ls) for ls in labeled]
    data = {
        "name": recipe.name,
        "stages": stages_data,
    }
    if recipe.params:
        data["params"] = dict(recipe.params)

    return yaml.dump(data, default_flow_style=False, sort_keys=False)


def resolve_interpretation(
    ir: InterpretResult,
    problem: str = "<your problem>",
) -> ResolvedInterpretation:
    """Load recipe, apply overrides, generate all visualizations."""
    recipe = load_recipe(ir.recipe)
    recipe = _apply_overrides(recipe, ir)

    labeled = label_stages(recipe.stages)
    d2 = recipe_to_d2(recipe.stages)
    prompts = recipe_to_prompts(recipe.stages, problem=problem)
    yaml_text = _recipe_to_yaml(recipe, labeled)

    return ResolvedInterpretation(
        recipe=recipe,
        labeled_stages=labeled,
        d2_source=d2,
        yaml_text=yaml_text,
        prompts=prompts,
        interpret_result=ir,
    )
```
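Since `yaml.dump` cannot emit comments, the `_label` workaround key ends up visible in the displayed YAML. If that proves noisy, a hypothetical post-processing pass (not part of this plan's code) could rewrite the keys as comments:

```python
import re


def labels_to_comments(yaml_text: str) -> str:
    # Rewrite the "_label: X" workaround keys as "# stage X" comments.
    return re.sub(r"^(\s*)_label:\s*(\S+)[ \t]*$", r"\1# stage \2",
                  yaml_text, flags=re.MULTILINE)


print(labels_to_comments("- type: produce\n  _label: A"))
# → - type: produce
# →   # stage A
```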

**Step 4: Run tests to verify they pass**

Run: `cd /Users/tchklovski/all-code/rivus && python -m pytest vario/tests/test_viz.py -v`
Expected: PASS

**Step 5: Commit**

```bash
git add vario/viz.py vario/tests/test_viz.py
git commit -m "feat(ng): resolve_interpretation — glue function for interpret viewer"
```

---

### Task 4: ui_interpret.py — Gradio page

**Files:**
- Create: `vario/ui_interpret.py`

**Step 1: Write the UI module**

```python
# vario/ui_interpret.py
"""Interpret page — NL to recipe visualization.

Type a description, see the resolved recipe as D2 diagram + YAML + prompt previews.
Adjust via NL in a second input.
"""

from __future__ import annotations

import gradio as gr
from loguru import logger

from vario.interpret import InterpretResult, interpret
from vario.viz import ResolvedInterpretation, resolve_interpretation
from vario.render_d2 import render_d2_img_tag


INTERPRET_CSS = """
.interpret-diagram img {
    max-width: 100%;
    background: white;
    padding: 12px;
    border-radius: 8px;
    border: 1px solid #e5e7eb;
}
.interpret-prompt-preview {
    background: #f8f9fa;
    padding: 10px;
    border-radius: 6px;
    font-size: 13px;
    line-height: 1.5;
    white-space: pre-wrap;
    word-break: break-word;
    border: 1px solid #e5e7eb;
    max-height: 400px;
    overflow-y: auto;
}
"""


def create_interpret_page() -> dict:
    """Create the Interpret page UI components."""

    gr.HTML("<h2>Interpret</h2><p>Describe what you want — see the recipe pipeline.</p>")

    # --- Input bar ---
    with gr.Row():
        interpret_input = gr.Textbox(
            placeholder="e.g. 'maxthink debate on code quality' or 'summarize fast'",
            label="Interpretation",
            scale=5,
            lines=1,
        )
        interpret_btn = gr.Button("Interpret", variant="primary", scale=1)

    # --- Status ---
    status_text = gr.Textbox(label="Status", interactive=False, visible=True)

    # --- D2 Diagram ---
    diagram_html = gr.HTML(value="", elem_classes=["interpret-diagram"])

    # --- Prompt Accordion ---
    prompts_accordion = gr.HTML(value="")

    # --- YAML Accordion ---
    with gr.Accordion("Resolved YAML", open=False):
        yaml_display = gr.Code(language="yaml", interactive=False, value="")

    # --- Adjust bar ---
    with gr.Row():
        adjust_input = gr.Textbox(
            placeholder="e.g. 'use opus in stage A' or 'add verify after B'",
            label="Adjust",
            scale=5,
            lines=1,
        )
        adjust_btn = gr.Button("Adjust", variant="secondary", scale=1)

    # --- Hidden state ---
    resolved_state = gr.State(None)  # ResolvedInterpretation

    # --- Handlers ---
    async def on_interpret(description: str):
        if not description.strip():
            # Output order: state, status, diagram, prompts, yaml, adjust
            return None, "Enter a description.", "", "", "", ""

        try:
            ir = await interpret(description)
            resolved = resolve_interpretation(ir, problem=description)

            diagram = render_d2_img_tag(resolved.d2_source, alt="Recipe pipeline")
            prompts_html = _render_prompts_accordion(resolved)
            status = (
                f"Recipe: {ir.recipe} | Models: {', '.join(ir.models)} | "
                f"{ir.reasoning}"
            )

            return resolved, status, diagram, prompts_html, resolved.yaml_text, ""

        except Exception as e:
            logger.exception("Interpret failed")
            return None, f"Error: {e}", "", "", "", ""

    async def on_adjust(adjustment: str, current: ResolvedInterpretation | None):
        if not adjustment.strip():
            return current, "Enter an adjustment.", gr.skip(), gr.skip(), gr.skip(), ""
        if current is None:
            return None, "Interpret something first.", "", "", "", ""

        try:
            # Re-interpret, feeding the current resolved recipe back as context
            context = (
                f"Current recipe: {current.interpret_result.recipe}\n"
                f"Current models: {', '.join(current.interpret_result.models)}\n"
                f"Current YAML:\n{current.yaml_text}\n\n"
                f"User wants to adjust: {adjustment}"
            )
            ir = await interpret(context)
            resolved = resolve_interpretation(ir, problem=adjustment)

            diagram = render_d2_img_tag(resolved.d2_source, alt="Recipe pipeline")
            prompts_html = _render_prompts_accordion(resolved)
            status = (
                f"Adjusted: {ir.recipe} | Models: {', '.join(ir.models)} | "
                f"{ir.reasoning}"
            )

            return resolved, status, diagram, prompts_html, resolved.yaml_text, ""

        except Exception as e:
            logger.exception("Adjust failed")
            return current, f"Error: {e}", gr.skip(), gr.skip(), gr.skip(), ""

    interpret_btn.click(
        on_interpret,
        inputs=[interpret_input],
        outputs=[resolved_state, status_text, diagram_html, prompts_accordion, yaml_display, adjust_input],
    )
    # Also trigger on Enter
    interpret_input.submit(
        on_interpret,
        inputs=[interpret_input],
        outputs=[resolved_state, status_text, diagram_html, prompts_accordion, yaml_display, adjust_input],
    )

    adjust_btn.click(
        on_adjust,
        inputs=[adjust_input, resolved_state],
        outputs=[resolved_state, status_text, diagram_html, prompts_accordion, yaml_display, adjust_input],
    )
    adjust_input.submit(
        on_adjust,
        inputs=[adjust_input, resolved_state],
        outputs=[resolved_state, status_text, diagram_html, prompts_accordion, yaml_display, adjust_input],
    )

    return {
        "interpret_input": interpret_input,
        "resolved_state": resolved_state,
    }


def _render_prompts_accordion(resolved: ResolvedInterpretation) -> str:
    """Render prompt previews as HTML accordions."""
    if not resolved.prompts:
        return ""

    sections = []
    for sp in resolved.prompts:
        model_badge = f' <span style="color:#888;font-size:12px;">({sp.model})</span>' if sp.model else ""
        title = f"{sp.label}: {sp.stage_type}{model_badge}"

        content_parts = []
        if sp.system_prompt:
            content_parts.append(
                f'<div style="margin-bottom:8px;">'
                f'<strong>System:</strong>'
                f'<pre class="interpret-prompt-preview">{_esc(sp.system_prompt)}</pre>'
                f'</div>'
            )
        content_parts.append(
            f'<div>'
            f'<strong>User:</strong>'
            f'<pre class="interpret-prompt-preview">{_esc(sp.user_prompt)}</pre>'
            f'</div>'
        )

        sections.append(
            f'<details style="margin-bottom:4px;border:1px solid #e5e7eb;border-radius:6px;padding:8px;">'
            f'<summary style="cursor:pointer;font-weight:600;font-size:14px;">{title}</summary>'
            f'<div style="margin-top:8px;">{"".join(content_parts)}</div>'
            f'</details>'
        )

    return "".join(sections)


def _esc(text: str) -> str:
    """HTML-escape text."""
    import html
    return html.escape(text)
```

**Step 2: Verify module imports cleanly**

Run: `cd /Users/tchklovski/all-code/rivus && python -c "from vario.ui_interpret import create_interpret_page; print('OK')"`
Expected: `OK`

**Step 3: Commit**

```bash
git add vario/ui_interpret.py
git commit -m "feat(ng): ui_interpret — Gradio interpret page with diagram + prompts + adjust"
```

---

### Task 5: Wire into app.py with tab navigation

**Files:**
- Modify: `vario/app.py`

**Step 1: Read current app.py** (already read above)

**Step 2: Add tab navigation**

Modify `vario/app.py` so the Studio and Interpret pages sit side by side. A true left-drawer sidebar (`gr.Sidebar`) may not be available in the installed Gradio 5 version, and styling a vertical tab list as a drawer adds CSS for little gain, so the cleanest approach is a plain two-tab layout:

```python
# app.py — replace the Blocks body
with gr.Blocks(title="ng Studio", fill_width=True, fill_height=True) as app:
    with gr.Tabs():
        with gr.Tab("Studio"):
            create_ng_studio()
        with gr.Tab("Interpret"):
            create_interpret_page()
    gradio_footer(port=7960)
```

**Step 3: Update imports and CSS**

Add to imports:
```python
from vario.ui_interpret import INTERPRET_CSS, create_interpret_page
```

Update CSS:
```python
CSS = FULL_WIDTH_CSS + CONNECTION_STATUS_CSS + NG_STUDIO_CSS + INTERPRET_CSS + _custom_css
```

**Step 4: Verify the app starts**

Run: `cd /Users/tchklovski/all-code/rivus && python vario/app.py &` then check `https://vario.localhost`

**Step 5: Commit**

```bash
git add vario/app.py
git commit -m "feat(ng): wire interpret page into ngstudio as second tab"
```

---

### Task 6: Manual integration test

**Step 1: Start the app**

Run: `cd /Users/tchklovski/all-code/rivus && gradio vario/app.py`

**Step 2: Test the interpret flow**

1. Navigate to the Interpret tab
2. Type "maxthink debate" and click Interpret
3. Verify: D2 diagram renders, prompts accordion shows, YAML displays
4. Type "use opus in stage A" in Adjust and click
5. Verify: diagram updates

**Step 3: Fix any issues found**

**Step 4: Final commit**

```bash
git add -A
git commit -m "fix(ng): interpret page integration fixes"
```
