Llama 4 128X17B Q8
Popular Ollama model family: Llama 4. Caveat: estimated values are placeholders unless marked as measured.
Hardware Snapshot
| Field | Value |
|---|---|
| Family | Llama 4 |
| Scenario | multimodal |
| License scope | open-weight (Llama 4 Community License) |
| Quantization | Q8 |
| VRAM minimum | 424GB |
| VRAM optimal | 434GB |
| Best local GPU | Cloud-first (no practical single-GPU local option) |
| Cloud fallback | H100/H200 class |
| Updated | 2026-02-24 |
| Data status | Estimated baseline (pending measurement) |
| Ollama source | Library reference (verified 2026-02-24) |
| Ollama tag | llama4:128x17b |
| Category | multimodal |
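A quick way to sanity-check the VRAM figures above: GGUF Q8_0 stores roughly 8.5 bits per weight (32 int8 quants plus an fp16 scale per block), so weight size follows directly from total parameter count. This is a minimal sketch; the ~400B total-parameter figure for the 128x17B MoE is an assumption from public Llama 4 reporting, not from this table, and the overhead allowance is a rough guess.

```python
# Back-of-envelope VRAM check for a Q8_0 model, as a sanity test of
# the table above. Assumptions (not stated on this page): the 128x17B
# MoE has ~400B total parameters, and GGUF Q8_0 stores 8.5 bits per
# weight (32 int8 quants + one fp16 scale per 32-weight block).

def q8_0_weight_bytes(total_params: float) -> float:
    """Bytes needed for Q8_0 weights: 8.5 bits per parameter."""
    return total_params * 8.5 / 8

def vram_estimate_gb(total_params: float, overhead_gb: float = 10.0) -> float:
    """Weights plus a rough allowance for KV cache and runtime buffers."""
    return q8_0_weight_bytes(total_params) / 1e9 + overhead_gb

print(f"weights only:  {q8_0_weight_bytes(400e9) / 1e9:.0f} GB")  # ~425 GB
print(f"with overhead: {vram_estimate_gb(400e9):.0f} GB")         # ~435 GB
```

Both results land near the table's 424GB minimum and 434GB optimal, which suggests those figures are weight-dominated estimates.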
Benchmark Anchors
| Hardware | Expected tok/s (estimated) |
|---|---|
| RTX 3090 24GB | 0.8 |
| RTX 4090 24GB | 1.1 |
| A100 80GB | 1.9 |
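To translate these throughput anchors into wall-clock expectations, divide the target token count by tok/s. A minimal sketch using the table's values:

```python
# Convert an expected tok/s anchor into wall-clock generation time.
# Anchor values are the estimates from the table above.

ANCHORS_TOK_S = {
    "RTX 3090 24GB": 0.8,
    "RTX 4090 24GB": 1.1,
    "A100 80GB": 1.9,
}

def generation_seconds(tokens: int, tok_per_s: float) -> float:
    """Seconds to emit `tokens` tokens at a steady decode rate."""
    return tokens / tok_per_s

for hw, rate in ANCHORS_TOK_S.items():
    mins = generation_seconds(500, rate) / 60
    print(f"{hw}: 500 tokens in ~{mins:.0f} min")
# RTX 3090: ~10 min, RTX 4090: ~8 min, A100: ~4 min
```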
Real Hardware Benchmark (RTX 3090)
Real benchmark data is not yet available for this tag; the estimated anchors above apply in the meantime.
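If you want to contribute a measured result, decode speed can be read directly from the `eval_count` and `eval_duration` fields that Ollama's `/api/generate` endpoint returns. A minimal sketch, assuming a local Ollama server on the default port with the model already pulled; the prompt is an arbitrary placeholder:

```python
# Measure real decode throughput for this tag via Ollama's local REST API.
# /api/generate returns eval_count (tokens generated) and eval_duration
# (nanoseconds spent generating them) when streaming is disabled.
import json
import urllib.request

payload = json.dumps({
    "model": "llama4:128x17b",
    "prompt": "Describe this benchmark page in one sentence.",
    "stream": False,
}).encode()

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.load(resp)

tok_s = result["eval_count"] / (result["eval_duration"] / 1e9)
print(f"measured decode speed: {tok_s:.2f} tok/s")
```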
Performance Curve
Reference anchors are baseline estimates. Measured RTX 3090 data is overlaid when available.
Best Hardware for Llama 4 128X17B Q8
- Local run: RTX 3090 (24GB) (Check latest deal) for around 0.8 tok/s on this profile. The model far exceeds 24GB of VRAM, so this figure assumes heavy offload to system RAM.
- Cloud run: RunPod H100/H200 class, about 2.4x the local 3090 speed anchor.
- Alternative cloud: Vast.ai options for flexible spot pricing.
Local vs Cloud Cost Hint
| Mode | 40h / month | 120h / month |
|---|---|---|
| Local power only (3090 baseline) | $2.24 | $6.72 |
| Cloud rental (H100/H200 class) | $196 | $588 |
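These figures are reproducible from simple per-hour rates. Working backwards from the table, the local row implies roughly a 350W draw billed at $0.16/kWh, and the cloud row a rate near $4.90/hr; both rates are inferred assumptions, not values stated on this page.

```python
# Reproduce the cost table from per-hour rates. The rates are inferred
# from the table's own numbers: ~350 W GPU draw at $0.16/kWh locally,
# and ~$4.90/hr for an H100/H200-class cloud instance.

def local_power_cost(hours: float, watts: float = 350.0,
                     usd_per_kwh: float = 0.16) -> float:
    """Electricity cost of running the GPU locally for `hours`."""
    return watts / 1000 * hours * usd_per_kwh

def cloud_cost(hours: float, usd_per_hour: float = 4.90) -> float:
    """Rental cost for the same hours on a cloud instance."""
    return hours * usd_per_hour

for hours in (40, 120):
    print(f"{hours}h  local: ${local_power_cost(hours):.2f}  "
          f"cloud: ${cloud_cost(hours):.0f}")
# 40h  local: $2.24  cloud: $196
# 120h local: $6.72  cloud: $588
```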
Run it locally with `ollama run llama4:128x17b`.
We may earn a commission if you click links on this page.
This page currently uses estimated benchmark baselines. Measured data will replace them after validation.