# LLM Latency Benchmark Report

**Generated**: 2026-01-27 05:22:32

## Summary

| Model | TTFT (median) | Total (median) | Tokens/sec | Success |
|-------|---------------|----------------|------------|---------|
| openai/gpt-5-mini | 3798ms | 4213ms | 836.7 | 100% |
| openai/gpt-5-nano | N/A | 5015ms | N/A | 100% |
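TTFT is the delay from sending the request to the first streamed token; total is the wall-clock time for the whole response. A minimal sketch of how such timings could be captured from any token stream (the iterable-of-tokens interface is an assumption for illustration, not the harness that produced this report):

```python
import time

def time_stream(stream):
    """Time a token stream: returns (ttft_ms, total_ms, n_tokens).

    `stream` is any iterable yielding one token per item -- an assumed
    interface, not the actual benchmark harness.
    """
    start = time.perf_counter()
    ttft_ms = None
    n_tokens = 0
    for _token in stream:
        if ttft_ms is None:
            # First token arrived: record time-to-first-token.
            ttft_ms = (time.perf_counter() - start) * 1000
        n_tokens += 1
    total_ms = (time.perf_counter() - start) * 1000
    return ttft_ms, total_ms, n_tokens
```

Tokens/sec would then follow as `n_tokens / (total_ms / 1000)`; when no per-token stream is available (as for gpt-5-nano below), TTFT and throughput come out as N/A.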

## Detailed Results

### openai/gpt-5-mini

**TTFT (Time to First Token)**
- Min: 3655ms
- Max: 3941ms
- Mean: 3798ms
- Median: 3798ms
- Stdev: 202ms

**Total Response Time**
- Min: 4184ms
- Max: 4465ms
- Mean: 4287ms
- Median: 4213ms
- Stdev: 154ms
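These summary statistics can be reproduced with Python's `statistics` module. Taking the three gpt-5-mini totals implied by the min/median/max rows above (whole-millisecond values are an assumption; the report may round sub-millisecond timings, which is why the stdev lands near, but not exactly on, 154):

```python
import statistics

# The three gpt-5-mini total response times reported above (ms).
totals = [4184, 4213, 4465]

print("Min:   ", min(totals))                     # 4184
print("Max:   ", max(totals))                     # 4465
print("Mean:  ", round(statistics.mean(totals)))  # 4287
print("Median:", statistics.median(totals))       # 4213
# Sample standard deviation; ~154-155 depending on sub-ms rounding.
print("Stdev: ", statistics.stdev(totals))
```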

**Individual Runs**

- Run 1: TTFT=3941ms, Total=4213ms, Tokens=300, 1106.1 tok/s
- Run 2: Tokens=300
- Run 3: TTFT=3655ms, Total=4184ms, Tokens=300, 567.2 tok/s

### openai/gpt-5-nano

**TTFT (Time to First Token)**
- Min: N/A
- Max: N/A
- Mean: N/A
- Median: N/A
- Stdev: N/A

**Total Response Time**
- Min: 3117ms
- Max: 5490ms
- Mean: 4541ms
- Median: 5015ms
- Stdev: 1256ms

**Individual Runs**

- Run 1: Tokens=300
- Run 2: Tokens=300
- Run 3: Tokens=300

## Configuration

- **Prompt**: "Explain how a CPU cache works in 3 paragraphs."
- **Max Tokens**: 300
- **Timeout**: 60s
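The configuration maps directly onto per-request parameters. A hedged sketch of how these values might be bundled for a run (the class and field names are illustrative, not the harness's actual code):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BenchmarkConfig:
    """Run parameters mirroring the Configuration section (names illustrative)."""
    prompt: str = "Explain how a CPU cache works in 3 paragraphs."
    max_tokens: int = 300    # output cap; every run above hit exactly 300 tokens
    timeout_s: float = 60.0  # per-request timeout

config = BenchmarkConfig()
```

Note that every run reports exactly 300 tokens, i.e. each response was truncated at the `max_tokens` cap rather than finishing naturally, so the total-time figures measure a fixed output length.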
