Qwen3 0.6B FP16

Part of the popular Qwen3 model family on Ollama. Caveat: estimated values are placeholders unless marked as measured.

Hardware Snapshot

Family: Qwen3
Scenario: coding
License scope: open-source
Quantization: FP16
VRAM minimum: 14GB
VRAM optimal: 26GB
Best local GPU: RTX 6000 Ada 48GB
Cloud fallback: A100 80GB
Updated: 2026-02-24
Data status: Estimated baseline (pending measurement)
Ollama source: Library reference (verified: 2026-02-24)
Ollama tag: qwen3:0.6b
Category: coding
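The VRAM figures above are whole-system estimates. As a sanity check, the weights-only lower bound for an FP16 model is simply parameter count times 2 bytes; everything beyond that (runtime overhead, KV cache, activations, context length) is what pushes the practical minimum higher. A minimal sketch of that arithmetic:

```python
# Weights-only memory lower bound for an FP16 checkpoint.
# Assumption: 2 bytes per parameter (FP16), decimal gigabytes.
# Runtime overhead, KV cache, and activations add more on top, which is
# why practical VRAM minimums sit well above this figure.
def fp16_weight_gb(n_params: float) -> float:
    return n_params * 2 / 1e9  # bytes -> GB

print(fp16_weight_gb(0.6e9))  # 0.6B params -> 1.2 GB for weights alone
```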

Benchmark Anchors

Hardware Expected tok/s
RTX 3090 24GB 26.4
RTX 4090 24GB 35.6
A100 80GB 63.4
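To translate the tok/s anchors into wall-clock feel, divide the response length by the decode rate. This sketch assumes steady decode throughput and ignores prompt-processing time, so it is a lower bound on latency:

```python
# Rough generation-time estimate from a tok/s anchor.
# Assumes decode-bound generation at a constant rate; prompt processing
# (prefill) is not included.
def gen_seconds(num_tokens: int, tok_per_s: float) -> float:
    return num_tokens / tok_per_s

# e.g. a 512-token reply on an RTX 3090 at the estimated 26.4 tok/s:
print(round(gen_seconds(512, 26.4), 1))  # 19.4 seconds
```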

Real Hardware Benchmark (RTX 3090)

Real benchmark data not available yet for this tag. Estimated anchors are shown above.

Performance Curve

Reference anchors are baseline estimates. Measured RTX 3090 data is overlaid when available.

Best Hardware for Qwen3 0.6B FP16

Local vs Cloud Cost Hint

Mode 40h / month 120h / month
Local power only (3090 baseline) $2.24 $6.72
A100 80GB $78 $234
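The table values follow from simple rate arithmetic. The inputs below are assumptions, not stated on this page: roughly 350 W draw for a 3090 under load and $0.16/kWh electricity; the A100 hourly rate of $1.95/hr is back-derived from the $78 / 40 h figure:

```python
# Sketch of the cost model behind the hint table.
# Assumed inputs (not from the page): ~350 W GPU draw, $0.16/kWh,
# and an A100 cloud rate of $1.95/hr back-derived from $78 / 40 h.
def local_power_cost(hours: float, watts: float = 350,
                     usd_per_kwh: float = 0.16) -> float:
    return watts / 1000 * hours * usd_per_kwh  # kW * h * $/kWh

def cloud_cost(hours: float, usd_per_hour: float = 1.95) -> float:
    return hours * usd_per_hour

print(round(local_power_cost(40), 2), round(local_power_cost(120), 2))  # 2.24 6.72
print(round(cloud_cost(40), 2), round(cloud_cost(120), 2))              # 78.0 234.0
```

Note that the local figure is electricity only; it excludes hardware amortization, which is usually the dominant cost of owning the GPU.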
ollama run qwen3:0.6b


This page currently uses estimated benchmark baselines. Measured data will replace them after validation.