# LLM Latency Benchmark Report

**Generated**: 2026-01-27 05:26:03

## Summary

| Model | TTFT (median) | Total (median) | Tokens/sec (median) | Success rate |
|-------|---------------|----------------|---------------------|--------------|
| openai/gpt-5-mini | 724ms | 4521ms | 79.2 | 100% |
| openai/gpt-5-nano | 820ms | 3905ms | 98.0 | 100% |
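The summary figures can be reproduced from the individual runs below. A minimal sketch in Python, using the gpt-5-mini runs; note that the per-run throughput figures are consistent with tokens divided by generation time (total minus TTFT), not total time:

```python
from statistics import median, stdev

# Per-run measurements for openai/gpt-5-mini, copied from this report.
runs = [
    {"ttft_ms": 798, "total_ms": 4410, "tokens": 300},
    {"ttft_ms": 650, "total_ms": 4633, "tokens": 300},
]

# Throughput excludes TTFT: tokens / (total - TTFT),
# matching the per-run figures (83.1 and 75.3 tok/s).
tok_per_s = [
    r["tokens"] / ((r["total_ms"] - r["ttft_ms"]) / 1000) for r in runs
]

ttft_median = median(r["ttft_ms"] for r in runs)    # 724
total_median = median(r["total_ms"] for r in runs)  # 4521.5
ttft_stdev = stdev(r["ttft_ms"] for r in runs)      # ≈104.7
```

With only two runs per model, the median equals the mean and the sample standard deviation reduces to |x₁ − x₂|/√2, which matches the reported values up to rounding.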

## Detailed Results

### openai/gpt-5-mini

**TTFT (Time to First Token)**
- Min: 650ms
- Max: 798ms
- Mean: 724ms
- Median: 724ms
- Stdev: 104ms

**Total Response Time**
- Min: 4410ms
- Max: 4633ms
- Mean: 4521ms
- Median: 4521ms
- Stdev: 158ms

**Individual Runs**

- Run 1: TTFT=798ms, Total=4410ms, Tokens=300, 83.1 tok/s
- Run 2: TTFT=650ms, Total=4633ms, Tokens=300, 75.3 tok/s

### openai/gpt-5-nano

**TTFT (Time to First Token)**
- Min: 654ms
- Max: 986ms
- Mean: 820ms
- Median: 820ms
- Stdev: 235ms

**Total Response Time**
- Min: 3468ms
- Max: 4343ms
- Mean: 3905ms
- Median: 3905ms
- Stdev: 619ms

**Individual Runs**

- Run 1: TTFT=986ms, Total=4343ms, Tokens=300, 89.4 tok/s
- Run 2: TTFT=654ms, Total=3468ms, Tokens=300, 106.6 tok/s

## Configuration

- **Prompt**: "Explain how a CPU cache works in 3 paragraphs."
- **Max Tokens**: 300
- **Timeout**: 60s
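The report does not include the measurement harness itself. As a hedged sketch of how these metrics could be captured, assuming a streaming API that yields response chunks as they arrive (the `measure_run` helper and its interface are hypothetical, not taken from the report):

```python
import time
from typing import Iterable, Optional


def measure_run(stream: Iterable[str], start: Optional[float] = None) -> dict:
    """Time a token stream: TTFT is the arrival of the first chunk,
    total is stream exhaustion; throughput excludes TTFT."""
    if start is None:
        start = time.perf_counter()  # ideally taken just before the request is sent
    ttft_ms = None
    tokens = 0
    for _chunk in stream:
        if ttft_ms is None:
            ttft_ms = (time.perf_counter() - start) * 1000
        tokens += 1
    total_ms = (time.perf_counter() - start) * 1000
    gen_s = (total_ms - ttft_ms) / 1000 if ttft_ms is not None else 0.0
    return {
        "ttft_ms": ttft_ms,
        "total_ms": total_ms,
        "tokens": tokens,
        "tok_per_s": tokens / gen_s if gen_s > 0 else 0.0,
    }
```

In a real harness, `stream` would be the chunk iterator returned by the provider's streaming endpoint (with the configured prompt, max tokens, and timeout), and each model would be run several times to produce the per-run rows above.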
