DeepSeek Coder V2 236B Q5

Popular Ollama model family: DeepSeek Coder V2. Caveat: estimated values are placeholders unless marked as measured.

Hardware Snapshot

Family: DeepSeek Coder V2
Scenario: coding
License scope: open-source
Quantization: Q5
VRAM minimum: 140 GB
VRAM optimal: 150 GB
Best local GPU: Cloud-first (no practical single-GPU local)
Cloud fallback: H100/H200 class
Updated: 2026-02-24
Data status: Estimated baseline (pending measurement)
Ollama source: Library reference (verified: 2026-02-24)
Ollama tag: deepseek-coder-v2:236b
Category: coding
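The 140–150 GB VRAM range is consistent with a common back-of-envelope estimate for quantized weights: parameter count times bits per weight, divided by 8. This is my own rule-of-thumb check, not the page's methodology, and it covers weights only; KV cache and runtime overhead add more on top.

```python
def quantized_weight_gb(params_billion: float, bits_per_weight: float) -> float:
    """Rough weight-only memory footprint in GB: params * bits / 8."""
    return params_billion * bits_per_weight / 8

# DeepSeek Coder V2 at 236B parameters, Q5 (~5 bits per weight)
weights = quantized_weight_gb(236, 5)
print(f"{weights:.1f} GB")  # ~147.5 GB for weights alone
```

That ~147.5 GB figure sits between the listed 140 GB minimum and 150 GB optimal, which is why no single consumer GPU is practical here.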

Benchmark Anchors

Hardware: Expected tok/s
RTX 3090 24GB: 1.7
RTX 4090 24GB: 2.3
A100 80GB: 4.1
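At the anchor rates above, wall-clock latency for a reply is just token count divided by tokens per second. A small sketch using the table's estimated rates (the 500-token reply length is an illustrative assumption, not from the page):

```python
# Estimated tok/s anchors from the table above
anchors = {"RTX 3090 24GB": 1.7, "RTX 4090 24GB": 2.3, "A100 80GB": 4.1}

def seconds_for(tokens: int, tok_per_s: float) -> float:
    """Wall-clock generation time for `tokens` at a given rate."""
    return tokens / tok_per_s

for gpu, rate in anchors.items():
    print(f"{gpu}: {seconds_for(500, rate):.0f} s for a 500-token reply")
```

Even the A100 estimate works out to roughly two minutes per 500-token answer, which underlines why this tag is flagged cloud-first.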

Real Hardware Benchmark (RTX 3090)

Real benchmark data is not yet available for this tag; the estimated anchors above apply.

Performance Curve

Reference anchors are baseline estimates. Measured RTX 3090 data is overlaid when available.

Best Hardware for DeepSeek Coder V2 236B Q5

Local vs Cloud Cost Hint

Mode: 40 h/month / 120 h/month
Local power only (3090 baseline): $2.24 / $6.72
H100/H200 class: $196 / $588
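The table's figures are reproducible from two assumed rates: roughly 350 W of draw at $0.16/kWh for the local baseline, and about $4.90/hour for H100/H200-class cloud instances. Both rates are my reverse-engineered assumptions from the table, not values stated on the page.

```python
def local_power_cost(hours: float, watts: float = 350.0,
                     usd_per_kwh: float = 0.16) -> float:
    """Electricity-only cost: kW * hours * $/kWh (assumed 3090 baseline)."""
    return watts / 1000 * hours * usd_per_kwh

def cloud_cost(hours: float, usd_per_hour: float = 4.90) -> float:
    """Simple hourly cloud billing (assumed H100/H200-class rate)."""
    return hours * usd_per_hour

print(local_power_cost(40), local_power_cost(120))  # 2.24  6.72
print(cloud_cost(40), cloud_cost(120))              # 196.0  588.0
```

Note the local column is power only; it excludes the upfront hardware cost, which is the real barrier at 140+ GB of VRAM.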
ollama run deepseek-coder-v2:236b


This page currently uses estimated benchmark baselines. Measured data will replace them after validation.