# Coding Models

This hub page compares 150 model profiles in this group: the practical VRAM floor and optimal budget for each model/quantization pair, the best single local GPU that fits it, and the recommended cloud fallback when it doesn't.
| Model | Data source | VRAM (min) | VRAM (optimal) | Best local GPU | Cloud fallback | Detail |
|---|---|---|---|---|---|---|
| Qwen3 Coder 480B CLOUD | Estimated | 215GB | 223GB | Cloud-first (no practical single-GPU local) | H100/H200 class | Open |
| Qwen3 Coder 480B FP16 | Estimated | 225GB | 237GB | Cloud-first (no practical single-GPU local) | H100/H200 class | Open |
| Qwen3 Coder 480B Q4 | Estimated | 213GB | 223GB | Cloud-first (no practical single-GPU local) | H100/H200 class | Open |
| Qwen3 Coder 480B Q5 | Estimated | 215GB | 225GB | Cloud-first (no practical single-GPU local) | H100/H200 class | Open |
| Qwen3 Coder 480B Q8 | Estimated | 219GB | 229GB | Cloud-first (no practical single-GPU local) | H100/H200 class | Open |
| DeepSeek Coder V2 236B FP16 | Estimated | 150GB | 162GB | Cloud-first (no practical single-GPU local) | H100/H200 class | Open |
| DeepSeek Coder V2 236B Q4 | Estimated | 138GB | 148GB | Cloud-first (no practical single-GPU local) | H100/H200 class | Open |
| DeepSeek Coder V2 236B Q5 | Estimated | 140GB | 150GB | Cloud-first (no practical single-GPU local) | H100/H200 class | Open |
| DeepSeek Coder V2 236B Q8 | Estimated | 144GB | 154GB | Cloud-first (no practical single-GPU local) | H100/H200 class | Open |
| Qwen3 235B FP16 | Estimated | 150GB | 162GB | Cloud-first (no practical single-GPU local) | H100/H200 class | Open |
| Qwen3 235B Q4 | Estimated | 138GB | 148GB | Cloud-first (no practical single-GPU local) | H100/H200 class | Open |
| Qwen3 235B Q5 | Estimated | 140GB | 150GB | Cloud-first (no practical single-GPU local) | H100/H200 class | Open |
| Qwen3 235B Q8 | Estimated | 144GB | 154GB | Cloud-first (no practical single-GPU local) | H100/H200 class | Open |
| Qwen2.5 72B FP16 | Estimated | 50GB | 62GB | Dual RTX 4090 (model parallel) | A100 80GB | Open |
| Qwen2.5 72B Q4 | Estimated | 38GB | 48GB | RTX 6000 Ada 48GB | A100 80GB | Open |
| Qwen2.5 72B Q5 | Estimated | 40GB | 50GB | Dual RTX 4090 (model parallel) | A100 80GB | Open |
| Qwen2.5 72B Q8 | Estimated | 44GB | 54GB | Dual RTX 4090 (model parallel) | A100 80GB | Open |
| CodeLlama 70B FP16 | Estimated | 50GB | 62GB | Dual RTX 4090 (model parallel) | A100 80GB | Open |
| CodeLlama 70B Q4 | Estimated | 38GB | 48GB | RTX 6000 Ada 48GB | A100 80GB | Open |
| CodeLlama 70B Q5 | Estimated | 40GB | 50GB | Dual RTX 4090 (model parallel) | A100 80GB | Open |
| CodeLlama 70B Q8 | Estimated | 44GB | 54GB | Dual RTX 4090 (model parallel) | A100 80GB | Open |
| CodeLlama 34B FP16 | Estimated | 30GB | 42GB | RTX 6000 Ada 48GB | A100 80GB | Open |
| CodeLlama 34B Q4 | Estimated | 16GB | 20GB | RTX 3090 24GB | A6000 48GB | Open |
| CodeLlama 34B Q5 | Estimated | 20GB | 22GB | RTX 3090 24GB | A6000 48GB | Open |
| CodeLlama 34B Q8 | Estimated | 24GB | 34GB | RTX 6000 Ada 48GB | A100 80GB | Open |
| DeepSeek Coder 33B FP16 | Estimated | 30GB | 42GB | RTX 6000 Ada 48GB | A100 80GB | Open |
| DeepSeek Coder 33B Q4 | Estimated | 16GB | 20GB | RTX 3090 24GB | A6000 48GB | Open |
| DeepSeek Coder 33B Q5 | Estimated | 20GB | 22GB | RTX 3090 24GB | A6000 48GB | Open |
| DeepSeek Coder 33B Q8 | Estimated | 24GB | 34GB | RTX 6000 Ada 48GB | A100 80GB | Open |
| Qwen2.5 32B FP16 | Estimated | 30GB | 42GB | RTX 6000 Ada 48GB | A100 80GB | Open |
| Qwen2.5 32B Q4 | Estimated | 18GB | 28GB | RTX 6000 Ada 48GB | A100 80GB | Open |
| Qwen2.5 32B Q5 | Estimated | 20GB | 30GB | RTX 6000 Ada 48GB | A100 80GB | Open |
| Qwen2.5 32B Q8 | Estimated | 24GB | 34GB | RTX 6000 Ada 48GB | A100 80GB | Open |
| Qwen2.5 Coder 32B FP16 | Estimated | 30GB | 42GB | RTX 6000 Ada 48GB | A100 80GB | Open |
| Qwen2.5 Coder 32B Q4 | Estimated | 16GB | 20GB | RTX 3090 24GB | A6000 48GB | Open |
| Qwen2.5 Coder 32B Q5 | Estimated | 20GB | 22GB | RTX 3090 24GB | A6000 48GB | Open |
| Qwen2.5 Coder 32B Q8 | Estimated | 24GB | 34GB | RTX 6000 Ada 48GB | A100 80GB | Open |
| Qwen3 32B FP16 | Estimated | 30GB | 42GB | RTX 6000 Ada 48GB | A100 80GB | Open |
| Qwen3 32B Q4 | Measured | 18GB | 28GB | RTX 6000 Ada 48GB | A100 80GB | Open |
| Qwen3 32B Q5 | Measured | 20GB | 30GB | RTX 6000 Ada 48GB | A100 80GB | Open |
| Qwen3 32B Q8 | Estimated | 24GB | 34GB | RTX 6000 Ada 48GB | A100 80GB | Open |
| Qwen3 30B FP16 | Estimated | 30GB | 42GB | RTX 6000 Ada 48GB | A100 80GB | Open |
| Qwen3 30B Q4 | Estimated | 18GB | 28GB | RTX 6000 Ada 48GB | A100 80GB | Open |
| Qwen3 30B Q5 | Estimated | 20GB | 30GB | RTX 6000 Ada 48GB | A100 80GB | Open |
| Qwen3 30B Q8 | Estimated | 24GB | 34GB | RTX 6000 Ada 48GB | A100 80GB | Open |
| Qwen3 Coder 30B CLOUD | Estimated | 20GB | 28GB | RTX 6000 Ada 48GB | A100 80GB | Open |
| Qwen3 Coder 30B FP16 | Estimated | 30GB | 42GB | RTX 6000 Ada 48GB | A100 80GB | Open |
| Qwen3 Coder 30B Q4 | Estimated | 18GB | 28GB | RTX 6000 Ada 48GB | A100 80GB | Open |
| Qwen3 Coder 30B Q5 | Estimated | 20GB | 30GB | RTX 6000 Ada 48GB | A100 80GB | Open |
| Qwen3 Coder 30B Q8 | Estimated | 24GB | 34GB | RTX 6000 Ada 48GB | A100 80GB | Open |
| DeepSeek Coder V2 16B FP16 | Estimated | 30GB | 42GB | RTX 6000 Ada 48GB | A100 80GB | Open |
| DeepSeek Coder V2 16B Q4 | Estimated | 18GB | 28GB | RTX 6000 Ada 48GB | A100 80GB | Open |
| DeepSeek Coder V2 16B Q5 | Estimated | 20GB | 30GB | RTX 6000 Ada 48GB | A100 80GB | Open |
| DeepSeek Coder V2 16B Q8 | Estimated | 24GB | 34GB | RTX 6000 Ada 48GB | A100 80GB | Open |
| StarCoder2 15B FP16 | Estimated | 30GB | 42GB | RTX 6000 Ada 48GB | A100 80GB | Open |
| StarCoder2 15B Q4 | Estimated | 18GB | 28GB | RTX 6000 Ada 48GB | A100 80GB | Open |
| StarCoder2 15B Q5 | Estimated | 20GB | 30GB | RTX 6000 Ada 48GB | A100 80GB | Open |
| StarCoder2 15B Q8 | Estimated | 24GB | 34GB | RTX 6000 Ada 48GB | A100 80GB | Open |
| Qwen2.5 14B FP16 | Estimated | 22GB | 34GB | RTX 6000 Ada 48GB | A100 80GB | Open |
| Qwen2.5 14B Q4 | Estimated | 10GB | 20GB | RTX 3090 24GB | A6000 48GB | Open |
| Qwen2.5 14B Q5 | Estimated | 12GB | 22GB | RTX 3090 24GB | A6000 48GB | Open |
| Qwen2.5 14B Q8 | Estimated | 16GB | 26GB | RTX 6000 Ada 48GB | A100 80GB | Open |
| Qwen2.5 Coder 14B FP16 | Estimated | 22GB | 34GB | RTX 6000 Ada 48GB | A100 80GB | Open |
| Qwen2.5 Coder 14B Q4 | Estimated | 10GB | 20GB | RTX 3090 24GB | A6000 48GB | Open |
| Qwen2.5 Coder 14B Q5 | Estimated | 12GB | 22GB | RTX 3090 24GB | A6000 48GB | Open |
| Qwen2.5 Coder 14B Q8 | Estimated | 16GB | 26GB | RTX 6000 Ada 48GB | A100 80GB | Open |
| Qwen3 14B FP16 | Estimated | 22GB | 34GB | RTX 6000 Ada 48GB | A100 80GB | Open |
| Qwen3 14B Q4 | Estimated | 10GB | 20GB | RTX 3090 24GB | A6000 48GB | Open |
| Qwen3 14B Q5 | Estimated | 12GB | 22GB | RTX 3090 24GB | A6000 48GB | Open |
| Qwen3 14B Q8 | Estimated | 16GB | 26GB | RTX 6000 Ada 48GB | A100 80GB | Open |
| CodeLlama 13B FP16 | Estimated | 22GB | 34GB | RTX 6000 Ada 48GB | A100 80GB | Open |
| CodeLlama 13B Q4 | Estimated | 10GB | 20GB | RTX 3090 24GB | A6000 48GB | Open |
| CodeLlama 13B Q5 | Estimated | 12GB | 22GB | RTX 3090 24GB | A6000 48GB | Open |
| CodeLlama 13B Q8 | Estimated | 16GB | 26GB | RTX 6000 Ada 48GB | A100 80GB | Open |
| Qwen3 8B FP16 | Estimated | 18GB | 30GB | RTX 6000 Ada 48GB | A100 80GB | Open |
| Qwen3 8B Q4 | Estimated | 6GB | 16GB | RTX 3090 24GB | A6000 48GB | Open |
| Qwen3 8B Q5 | Estimated | 8GB | 18GB | RTX 3090 24GB | A6000 48GB | Open |
| Qwen3 8B Q8 | Estimated | 12GB | 22GB | RTX 3090 24GB | A6000 48GB | Open |
| CodeGemma 7B FP16 | Estimated | 18GB | 30GB | RTX 6000 Ada 48GB | A100 80GB | Open |
| CodeGemma 7B Q4 | Estimated | 6GB | 16GB | RTX 3090 24GB | A6000 48GB | Open |
| CodeGemma 7B Q5 | Estimated | 8GB | 18GB | RTX 3090 24GB | A6000 48GB | Open |
| CodeGemma 7B Q8 | Estimated | 12GB | 22GB | RTX 3090 24GB | A6000 48GB | Open |
| CodeLlama 7B FP16 | Estimated | 18GB | 30GB | RTX 6000 Ada 48GB | A100 80GB | Open |
| CodeLlama 7B Q4 | Estimated | 8GB | 10GB | RTX 3090 24GB | A6000 48GB | Open |
| CodeLlama 7B Q5 | Estimated | 10GB | 12GB | RTX 3090 24GB | A6000 48GB | Open |
| CodeLlama 7B Q8 | Estimated | 12GB | 22GB | RTX 3090 24GB | A6000 48GB | Open |
| Qwen2.5 7B FP16 | Estimated | 18GB | 30GB | RTX 6000 Ada 48GB | A100 80GB | Open |
| Qwen2.5 7B Q4 | Estimated | 6GB | 16GB | RTX 3090 24GB | A6000 48GB | Open |
| Qwen2.5 7B Q5 | Estimated | 8GB | 18GB | RTX 3090 24GB | A6000 48GB | Open |
| Qwen2.5 7B Q8 | Estimated | 12GB | 22GB | RTX 3090 24GB | A6000 48GB | Open |
| Qwen2.5 Coder 7B FP16 | Estimated | 18GB | 30GB | RTX 6000 Ada 48GB | A100 80GB | Open |
| Qwen2.5 Coder 7B Q4 | Estimated | 6GB | 16GB | RTX 3090 24GB | A6000 48GB | Open |
| Qwen2.5 Coder 7B Q5 | Estimated | 8GB | 18GB | RTX 3090 24GB | A6000 48GB | Open |
| Qwen2.5 Coder 7B Q8 | Estimated | 12GB | 22GB | RTX 3090 24GB | A6000 48GB | Open |
| StarCoder2 7B FP16 | Estimated | 18GB | 30GB | RTX 6000 Ada 48GB | A100 80GB | Open |
| StarCoder2 7B Q4 | Estimated | 6GB | 16GB | RTX 3090 24GB | A6000 48GB | Open |
| StarCoder2 7B Q5 | Estimated | 8GB | 18GB | RTX 3090 24GB | A6000 48GB | Open |
| StarCoder2 7B Q8 | Estimated | 12GB | 22GB | RTX 3090 24GB | A6000 48GB | Open |
| DeepSeek Coder 6.7B FP16 | Estimated | 18GB | 30GB | RTX 6000 Ada 48GB | A100 80GB | Open |
| DeepSeek Coder 6.7B Q4 | Estimated | 6GB | 16GB | RTX 3090 24GB | A6000 48GB | Open |
| DeepSeek Coder 6.7B Q5 | Estimated | 8GB | 18GB | RTX 3090 24GB | A6000 48GB | Open |
| DeepSeek Coder 6.7B Q8 | Estimated | 12GB | 22GB | RTX 3090 24GB | A6000 48GB | Open |
| Qwen3 4B FP16 | Estimated | 16GB | 28GB | RTX 6000 Ada 48GB | A100 80GB | Open |
| Qwen3 4B Q4 | Estimated | 4GB | 14GB | RTX 3090 24GB | A6000 48GB | Open |
| Qwen3 4B Q5 | Estimated | 6GB | 16GB | RTX 3090 24GB | A6000 48GB | Open |
| Qwen3 4B Q8 | Estimated | 10GB | 20GB | RTX 3090 24GB | A6000 48GB | Open |
| Qwen2.5 3B FP16 | Estimated | 16GB | 28GB | RTX 6000 Ada 48GB | A100 80GB | Open |
| Qwen2.5 3B Q4 | Estimated | 4GB | 14GB | RTX 3090 24GB | A6000 48GB | Open |
| Qwen2.5 3B Q5 | Estimated | 6GB | 16GB | RTX 3090 24GB | A6000 48GB | Open |
| Qwen2.5 3B Q8 | Estimated | 10GB | 20GB | RTX 3090 24GB | A6000 48GB | Open |
| Qwen2.5 Coder 3B FP16 | Estimated | 16GB | 28GB | RTX 6000 Ada 48GB | A100 80GB | Open |
| Qwen2.5 Coder 3B Q4 | Estimated | 4GB | 14GB | RTX 3090 24GB | A6000 48GB | Open |
| Qwen2.5 Coder 3B Q5 | Estimated | 6GB | 16GB | RTX 3090 24GB | A6000 48GB | Open |
| Qwen2.5 Coder 3B Q8 | Estimated | 10GB | 20GB | RTX 3090 24GB | A6000 48GB | Open |
| StarCoder2 3B FP16 | Estimated | 16GB | 28GB | RTX 6000 Ada 48GB | A100 80GB | Open |
| StarCoder2 3B Q4 | Estimated | 4GB | 14GB | RTX 3090 24GB | A6000 48GB | Open |
| StarCoder2 3B Q5 | Estimated | 6GB | 16GB | RTX 3090 24GB | A6000 48GB | Open |
| StarCoder2 3B Q8 | Estimated | 10GB | 20GB | RTX 3090 24GB | A6000 48GB | Open |
| CodeGemma 2B FP16 | Estimated | 14GB | 26GB | RTX 6000 Ada 48GB | A100 80GB | Open |
| CodeGemma 2B Q4 | Estimated | 2GB | 12GB | RTX 3090 24GB | A6000 48GB | Open |
| CodeGemma 2B Q5 | Estimated | 4GB | 14GB | RTX 3090 24GB | A6000 48GB | Open |
| CodeGemma 2B Q8 | Estimated | 8GB | 18GB | RTX 3090 24GB | A6000 48GB | Open |
| Qwen3 1.7B FP16 | Estimated | 14GB | 26GB | RTX 6000 Ada 48GB | A100 80GB | Open |
| Qwen3 1.7B Q4 | Estimated | 2GB | 12GB | RTX 3090 24GB | A6000 48GB | Open |
| Qwen3 1.7B Q5 | Estimated | 4GB | 14GB | RTX 3090 24GB | A6000 48GB | Open |
| Qwen3 1.7B Q8 | Estimated | 8GB | 18GB | RTX 3090 24GB | A6000 48GB | Open |
| Qwen2.5 1.5B FP16 | Estimated | 14GB | 26GB | RTX 6000 Ada 48GB | A100 80GB | Open |
| Qwen2.5 1.5B Q4 | Estimated | 2GB | 12GB | RTX 3090 24GB | A6000 48GB | Open |
| Qwen2.5 1.5B Q5 | Estimated | 4GB | 14GB | RTX 3090 24GB | A6000 48GB | Open |
| Qwen2.5 1.5B Q8 | Estimated | 8GB | 18GB | RTX 3090 24GB | A6000 48GB | Open |
| Qwen2.5 Coder 1.5B FP16 | Estimated | 14GB | 26GB | RTX 6000 Ada 48GB | A100 80GB | Open |
| Qwen2.5 Coder 1.5B Q4 | Estimated | 2GB | 12GB | RTX 3090 24GB | A6000 48GB | Open |
| Qwen2.5 Coder 1.5B Q5 | Estimated | 4GB | 14GB | RTX 3090 24GB | A6000 48GB | Open |
| Qwen2.5 Coder 1.5B Q8 | Estimated | 8GB | 18GB | RTX 3090 24GB | A6000 48GB | Open |
| DeepSeek Coder 1.3B FP16 | Estimated | 14GB | 26GB | RTX 6000 Ada 48GB | A100 80GB | Open |
| DeepSeek Coder 1.3B Q4 | Estimated | 2GB | 12GB | RTX 3090 24GB | A6000 48GB | Open |
| DeepSeek Coder 1.3B Q5 | Estimated | 4GB | 14GB | RTX 3090 24GB | A6000 48GB | Open |
| DeepSeek Coder 1.3B Q8 | Estimated | 8GB | 18GB | RTX 3090 24GB | A6000 48GB | Open |
| Qwen3 0.6B FP16 | Estimated | 14GB | 26GB | RTX 6000 Ada 48GB | A100 80GB | Open |
| Qwen3 0.6B Q4 | Estimated | 2GB | 12GB | RTX 3090 24GB | A6000 48GB | Open |
| Qwen3 0.6B Q5 | Estimated | 4GB | 14GB | RTX 3090 24GB | A6000 48GB | Open |
| Qwen3 0.6B Q8 | Estimated | 8GB | 18GB | RTX 3090 24GB | A6000 48GB | Open |
| Qwen2.5 0.5B FP16 | Estimated | 12GB | 24GB | RTX 3090 24GB | A6000 48GB | Open |
| Qwen2.5 0.5B Q4 | Estimated | 2GB | 10GB | RTX 3090 24GB | A6000 48GB | Open |
| Qwen2.5 0.5B Q5 | Estimated | 2GB | 12GB | RTX 3090 24GB | A6000 48GB | Open |
| Qwen2.5 0.5B Q8 | Estimated | 6GB | 16GB | RTX 3090 24GB | A6000 48GB | Open |
| Qwen2.5 Coder 0.5B FP16 | Estimated | 12GB | 24GB | RTX 3090 24GB | A6000 48GB | Open |
| Qwen2.5 Coder 0.5B Q4 | Estimated | 2GB | 10GB | RTX 3090 24GB | A6000 48GB | Open |
| Qwen2.5 Coder 0.5B Q5 | Estimated | 2GB | 12GB | RTX 3090 24GB | A6000 48GB | Open |
| Qwen2.5 Coder 0.5B Q8 | Estimated | 6GB | 16GB | RTX 3090 24GB | A6000 48GB | Open |
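The VRAM floors above follow a familiar rule of thumb: weight bytes (parameter count times bits-per-weight for the quantization) plus a flat allowance for the KV cache and runtime buffers. The sketch below reproduces that arithmetic and maps the result onto the GPU tiers used in the table. The bytes-per-parameter factors and the ~2 GB overhead are illustrative assumptions, not measured values, and real requirements grow with context length.

```python
# Rough VRAM estimator for quantized LLM weights, plus a simple
# GPU-tier picker mirroring the table's recommendations.

BYTES_PER_PARAM = {
    "FP16": 2.0,   # 16-bit weights
    "Q8": 1.0,     # ~8-bit quantization
    "Q5": 0.625,   # ~5-bit quantization
    "Q4": 0.5,     # ~4-bit quantization
}

def estimate_vram_gb(params_b: float, quant: str, overhead_gb: float = 2.0) -> float:
    """Approximate VRAM floor: weight bytes plus a flat overhead
    (assumption: ~2 GB for KV cache and buffers at modest context)."""
    weights_gb = params_b * BYTES_PER_PARAM[quant]
    return round(weights_gb + overhead_gb, 1)

def pick_gpu(required_gb: float) -> str:
    """Map an estimated requirement onto the tiers used in the table."""
    if required_gb <= 24:
        return "RTX 3090 24GB"
    if required_gb <= 48:
        return "RTX 6000 Ada 48GB"
    if required_gb <= 80:
        return "A100 80GB"
    return "Cloud-first (H100/H200 class)"

# A 32B model at Q4: 32 * 0.5 + 2 = 18 GB, matching the table's
# 18GB floor for Qwen3 32B Q4.
print(estimate_vram_gb(32, "Q4"))  # → 18.0
# At that model's 28GB optimal budget, the picker lands on the same
# GPU the table recommends.
print(pick_gpu(28))                # → RTX 6000 Ada 48GB
```

Note the picker keys off the *optimal* budget, not the floor: a model that barely fits a 24GB card at its minimum will often spill once context grows, which is why the table pairs 28GB-optimal rows with 48GB local GPUs.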
We may earn a commission if you click links on this page.