QwQ 32B Q8

Part of the popular QwQ model family on Ollama. Caveat: estimated values are placeholders unless marked as measured.

Hardware Snapshot

Family: QwQ
Scenario: reasoning
License scope: open-source
Quantization: Q8
VRAM minimum: 24 GB
VRAM optimal: 34 GB
Best local GPU: RTX 6000 Ada 48GB
Cloud fallback: A100 80GB
Updated: 2026-02-24
Data status: Verified by real hardware
Ollama source: Library reference (verified 2026-02-24)
Ollama tag: qwq:32b
Category: reasoning
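
The gap between the 24 GB minimum and the 34 GB optimal follows mostly from weight size: at Q8 the 32B parameters alone take roughly 32 GB, so a 24 GB card must offload part of the model, while a larger card holds weights, KV cache, and runtime overhead comfortably. A rough sketch of that arithmetic; the KV-cache size per token and the overhead constant are illustrative assumptions, not measured values:

```python
# Back-of-envelope VRAM estimate: weights + KV cache + runtime overhead.
# kv_mb_per_token and overhead_gb are assumed values for illustration only.

def estimate_vram_gb(params_b: float, bits_per_weight: float,
                     ctx_tokens: int = 4096,
                     kv_mb_per_token: float = 0.25,
                     overhead_gb: float = 1.5) -> float:
    weights_gb = params_b * bits_per_weight / 8        # 32B at 8-bit ~= 32 GB
    kv_cache_gb = ctx_tokens * kv_mb_per_token / 1024  # KV cache grows with context length
    return weights_gb + kv_cache_gb + overhead_gb

print(f"QwQ 32B Q8 ~ {estimate_vram_gb(32, 8):.1f} GB")  # lands near the 34 GB 'optimal' figure
print(f"QwQ 32B Q4 ~ {estimate_vram_gb(32, 4):.1f} GB")  # for comparison with a lighter quant
```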

Benchmark Anchors

Hardware | Expected tok/s
RTX 3090 24GB | 7.9
RTX 4090 24GB | 10.7
A100 80GB | 19.0

Real Hardware Benchmark (RTX 3090)

Tokens/s: 6.579
Latency: 15250 ms
Prompt tokens: 29
Eval tokens: 96
Test time: 2026-04-01T11:53:50Z
GPU model: NVIDIA GeForce RTX 3090

Verified by real hardware.
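
Figures like these can be reproduced from the timing fields Ollama returns with each generation. A minimal sketch, assuming a local Ollama server on the default port 11434 with qwq:32b already pulled; the prompt is arbitrary, and the durations in the response are reported in nanoseconds:

```python
# Measure tokens/s for qwq:32b against a local Ollama instance.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "qwq:32b", "prompt": "Explain the Monty Hall problem briefly.", "stream": False},
    timeout=600,
).json()

# eval_duration is in nanoseconds, so scale to seconds before dividing.
eval_tps = resp["eval_count"] / resp["eval_duration"] * 1e9

print(f"prompt tokens: {resp.get('prompt_eval_count')}")
print(f"eval tokens:   {resp['eval_count']}")
print(f"latency:       {resp['total_duration'] / 1e6:.0f} ms")
print(f"tokens/s:      {eval_tps:.3f}")
```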


Performance Curve

Reference anchors are baseline estimates. Measured RTX 3090 data is overlaid when available.

Best Hardware for QwQ 32B Q8

Local vs Cloud Cost Hint

Mode | 40h / month | 120h / month
Local power only (3090 baseline) | $2.24 | $6.72
A100 80GB rental | $78 | $234
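
These hints appear to assume roughly a 350 W sustained draw for the 3090 at $0.16/kWh and about $1.95/hr for an A100 80GB rental; all three figures are assumptions chosen to reproduce the table rather than quoted rates. A sketch of the arithmetic:

```python
# Reproduce the cost-hint table under assumed power draw, electricity price,
# and cloud hourly rate. None of these constants are quoted prices.
GPU_WATTS = 350       # assumed RTX 3090 draw under sustained load
KWH_PRICE = 0.16      # assumed electricity price, $/kWh
A100_HOURLY = 1.95    # assumed A100 80GB rental rate, $/hr

for hours in (40, 120):
    local = GPU_WATTS / 1000 * hours * KWH_PRICE  # kWh used * price
    cloud = A100_HOURLY * hours
    print(f"{hours:>3}h/month  local ${local:.2f}   A100 80GB ${cloud:.2f}")
```
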
ollama run qwq:32b
