# LLM Latency Benchmark Report

**Generated**: 2026-01-27 05:23:36

## Summary

| Model | TTFT (median) | Total (median) | Tokens/sec | Success rate |
|-------|---------------|----------------|------------|--------------|
| openai/gpt-5-nano | 757ms | 3801ms | 99.7 | 100% |
| openai/gpt-5-mini | 889ms | 5431ms | 71.9 | 100% |

Each model was measured over 2 runs; with two samples, median and mean coincide.
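The tokens/sec figures appear to measure decode throughput, i.e. generated tokens divided by the time elapsed after the first token. A minimal sketch reproducing Run 1 of gpt-5-nano (the helper name is illustrative, not from the benchmark harness):

```python
def decode_rate(ttft_ms: float, total_ms: float, tokens: int) -> float:
    """Decode throughput in tokens/sec, counted from the first token onward."""
    decode_s = (total_ms - ttft_ms) / 1000.0
    return tokens / decode_s

# Run 1 of gpt-5-nano: TTFT=907ms, Total=3616ms, 300 tokens
rate = decode_rate(907, 3616, 300)
print(round(rate, 1))  # 110.7
```

Counting from the first token (rather than from request start) isolates generation speed from connection and prompt-processing overhead.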

## Detailed Results

### openai/gpt-5-nano

**TTFT (Time to First Token)**
- Min: 607ms
- Max: 907ms
- Mean: 757ms
- Median: 757ms
- Stdev: 212ms

**Total Response Time**
- Min: 3616ms
- Max: 3987ms
- Mean: 3801ms
- Median: 3801ms
- Stdev: 263ms

**Individual Runs**

- Run 1: TTFT=907ms, Total=3616ms, Tokens=300, 110.7 tok/s
- Run 2: TTFT=607ms, Total=3987ms, Tokens=300, 88.7 tok/s
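The aggregate statistics above can be reproduced from the per-run values with Python's `statistics` module (sample standard deviation, which matches the reported Stdev):

```python
import statistics

# gpt-5-nano TTFT per run, in milliseconds
ttfts = [907, 607]

print(statistics.median(ttfts))        # 757.0
print(round(statistics.stdev(ttfts)))  # 212
```

Note that with only two runs the sample standard deviation reduces to the run-to-run gap divided by sqrt(2), so these spread figures are rough.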

### openai/gpt-5-mini

**TTFT (Time to First Token)**
- Min: 732ms
- Max: 1047ms
- Mean: 889ms
- Median: 889ms
- Stdev: 222ms

**Total Response Time**
- Min: 4292ms
- Max: 6571ms
- Mean: 5431ms
- Median: 5431ms
- Stdev: 1611ms

**Individual Runs**

- Run 1: TTFT=1047ms, Total=4292ms, Tokens=300, 92.4 tok/s
- Run 2: TTFT=732ms, Total=6571ms, Tokens=300, 51.4 tok/s

## Configuration

- **Prompt**: "Explain how a CPU cache works in 3 paragraphs."
- **Max Tokens**: 300
- **Runs per model**: 2
- **Timeout**: 60s
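Measurements like these are typically collected by timing a streaming response iterator. A sketch against a stand-in token stream (the real benchmark harness and API client are not shown in this report; `fake_stream` is a hypothetical placeholder):

```python
import time
from typing import Iterable, Iterator

def time_stream(tokens: Iterable[str]) -> dict:
    """Consume a token stream, recording TTFT, total time, and decode rate."""
    start = time.monotonic()
    first = None
    count = 0
    for _ in tokens:
        if first is None:
            first = time.monotonic()  # first token observed
        count += 1
    end = time.monotonic()
    decode_s = end - first
    return {
        "ttft_ms": (first - start) * 1000,
        "total_ms": (end - start) * 1000,
        "tokens": count,
        "tok_per_s": count / decode_s if decode_s > 0 else float("inf"),
    }

def fake_stream(n: int, first_delay: float, per_token: float) -> Iterator[str]:
    """Stand-in for a streaming API response, with a delay before the first token."""
    time.sleep(first_delay)
    for i in range(n):
        if i:
            time.sleep(per_token)
        yield f"tok{i}"

result = time_stream(fake_stream(10, first_delay=0.05, per_token=0.005))
```

Using `time.monotonic()` rather than `time.time()` avoids skew from wall-clock adjustments during a run.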
