# Qwen2.5 VL 32B Q4
A popular Ollama model family: Qwen2.5 VL. Caveat: all values below are estimated placeholders unless explicitly marked as measured.
## Hardware Snapshot
| Field | Value |
|---|---|
| Family | Qwen2.5 VL |
| Scenario | multimodal |
| License scope | open-source |
| Quantization | Q4 |
| VRAM minimum | 18GB |
| VRAM optimal | 28GB |
| Best local GPU | RTX 6000 Ada 48GB |
| Cloud fallback | A100 80GB |
| Updated | 2026-02-24 |
| Data status | Estimated baseline (pending measurement) |
| Ollama source | Library reference (verified: 2026-02-24) |
| Ollama tag | qwen2.5vl:32b |
| Category | multimodal |
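The VRAM figures in the snapshot follow from the model size and quantization. A minimal sketch of that arithmetic, assuming roughly 0.5 bytes per parameter for 4-bit weights plus an illustrative ~15% overhead for the KV cache, activations, and the vision encoder (both factors are assumptions, not measured values):

```python
# Rough VRAM estimate for a 32B-parameter model at Q4 quantization.
# Assumptions (illustrative, not measured): ~0.5 bytes/parameter for
# 4-bit weights, ~15% overhead for KV cache / activations / vision tower.
def estimate_vram_gb(params_billions: float,
                     bytes_per_param: float = 0.5,
                     overhead: float = 0.15) -> float:
    weights_gb = params_billions * bytes_per_param  # 1B params at 0.5 B/param = 0.5 GB
    return round(weights_gb * (1 + overhead), 1)

print(estimate_vram_gb(32))  # 18.4 -- close to the 18GB minimum above
```

The optimal figure (28GB) leaves extra headroom for longer contexts and larger image inputs, which grow the KV cache beyond this flat overhead estimate.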
## Benchmark Anchors
| Hardware | Expected tok/s |
|---|---|
| RTX 3090 24GB | 11 |
| RTX 4090 24GB | 14.9 |
| A100 80GB | 26.4 |
## Real Hardware Benchmark (RTX 3090)
No measured benchmark data is available yet for this tag; the estimated anchors above serve as a stand-in.
## Performance Curve
Reference anchors are baseline estimates. Measured RTX 3090 data is overlaid when available.
## Best Hardware for Qwen2.5 VL 32B Q4
- Local run: RTX 3090 (24GB), around 11 tok/s on this profile.
- Cloud run: RunPod A100 80GB, about 2.4x the local 3090 speed anchor.
- Alternative cloud: Vast.ai, with flexible spot pricing.
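The "2.4x" multiple is just the ratio of the estimated anchors from the table above; a quick sketch:

```python
# Derive the cloud-vs-local speed multiple from the anchor table
# (estimated tok/s values, not measurements).
anchors = {
    "RTX 3090 24GB": 11.0,
    "RTX 4090 24GB": 14.9,
    "A100 80GB": 26.4,
}
ratio = anchors["A100 80GB"] / anchors["RTX 3090 24GB"]
print(f"{ratio:.1f}x")  # 2.4x
```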
## Local vs Cloud Cost Hint
| Mode | 40h / month | 120h / month |
|---|---|---|
| Local power only (3090 baseline) | $2.24 | $6.72 |
| A100 80GB | $78 | $234 |
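A sketch of the arithmetic behind the cost hints, under assumed inputs: ~350W board power for the 3090 and $0.16/kWh electricity (which back out the table's $2.24/40h figure), and a ~$1.95/h A100 rental rate ($78/40h). Your actual power draw, electricity tariff, and cloud pricing will differ.

```python
# Cost-hint arithmetic. Assumed inputs (illustrative): 350W GPU board
# power, $0.16/kWh electricity, $1.95/h A100 80GB rental.
def local_power_cost(hours: float, watts: float = 350,
                     usd_per_kwh: float = 0.16) -> float:
    return round(watts / 1000 * hours * usd_per_kwh, 2)

def cloud_cost(hours: float, usd_per_hour: float = 1.95) -> float:
    return round(hours * usd_per_hour, 2)

print(local_power_cost(40), local_power_cost(120))  # 2.24 6.72
print(cloud_cost(40), cloud_cost(120))              # 78.0 234.0
```

Note the local column covers electricity only; it excludes the upfront GPU cost, which dominates at low monthly usage.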
`ollama run qwen2.5vl:32b`
This page currently uses estimated benchmark baselines; measured data will replace them after validation.