Gemma 3n E2B Q8

Part of the popular Gemma 3n model family on Ollama. Caveat: estimated values are placeholders until marked as measured.

Hardware Snapshot

Family: Gemma 3n
Scenario: multimodal
License scope: open-source
Quantization: Q8
VRAM minimum: 8 GB
VRAM optimal: 18 GB
Best local GPU: RTX 3090 24GB
Cloud fallback: A6000 48GB
Updated: 2026-02-24
Data status: Estimated baseline (pending measurement)
Ollama source: Library reference (verified 2026-02-24)
Ollama tag: gemma3n:e2b
Category: multimodal
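The VRAM figures above can be sanity-checked with simple back-of-the-envelope arithmetic. A minimal sketch, assuming roughly 2B effective parameters for the E2B variant, 1 byte per weight at Q8, and a flat runtime overhead (the overhead value is an assumption, not a figure from this page; the page's 8 GB minimum also has to cover context, vision components, and framework buffers):

```python
# Rough VRAM estimate for a Q8-quantized model.
# All inputs here are illustrative assumptions, not measured values.
def estimate_vram_gb(params_billion: float, bits_per_weight: int = 8,
                     overhead_gb: float = 1.5) -> float:
    """Weight memory at the given bit width, plus a flat assumed overhead
    for KV cache, activations, and runtime buffers."""
    weight_gb = params_billion * bits_per_weight / 8  # 1 byte/param at Q8
    return weight_gb + overhead_gb

# E2B ~= 2B effective parameters at Q8:
print(round(estimate_vram_gb(2.0), 1))
```

This only bounds the weights; real headroom requirements grow with context length, which is one reason a page like this quotes a minimum well above the raw weight size.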

Benchmark Anchors

Hardware (Expected tok/s):
RTX 3090 24GB: 30.2
RTX 4090 24GB: 40.8
A100 80GB: 72.5

Real Hardware Benchmark (RTX 3090)

Real benchmark data is not yet available for this tag; the estimated anchors above are shown instead.

Performance Curve

Reference anchors are baseline estimates. Measured RTX 3090 data is overlaid when available.

Best Hardware for Gemma 3n E2B Q8

Local vs Cloud Cost Hint

Mode: 40h / month | 120h / month
Local power only (3090 baseline): $2.24 | $6.72
A6000 48GB: $30.40 | $91.20
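The cost hint above is reproducible from two assumed inputs (neither is stated on this page, but both are consistent with the dollar figures): roughly 350 W draw for a 3090 at $0.16/kWh, and about $0.76/hour for a cloud A6000 48GB.

```python
# Reconstruct the local-vs-cloud cost arithmetic.
# Assumed inputs: ~350 W GPU draw, $0.16/kWh electricity, $0.76/h cloud rate.
def local_power_cost(hours: float, watts: float = 350,
                     usd_per_kwh: float = 0.16) -> float:
    """Electricity-only cost of running the local GPU for `hours`."""
    return watts * hours / 1000 * usd_per_kwh

def cloud_cost(hours: float, usd_per_hour: float = 0.76) -> float:
    """Rental cost for the cloud fallback GPU."""
    return hours * usd_per_hour

for h in (40, 120):
    print(h, round(local_power_cost(h), 2), round(cloud_cost(h), 2))
```

Note the comparison ignores the local GPU's purchase price; it is a marginal-cost hint, not a total-cost-of-ownership calculation.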
Run locally: ollama run gemma3n:e2b
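Beyond the CLI, the tag can be driven programmatically through Ollama's local REST API. A minimal sketch using only the standard library (the `/api/generate` endpoint and its `model`/`prompt`/`stream` fields are part of Ollama's public API; a local server must be running for the actual call to succeed):

```python
# Minimal sketch of calling gemma3n:e2b via the Ollama REST API.
import json
import urllib.request

def build_request(prompt: str) -> dict:
    """Request body for Ollama's /api/generate endpoint."""
    return {"model": "gemma3n:e2b", "prompt": prompt, "stream": False}

def generate(prompt: str, host: str = "http://localhost:11434") -> str:
    """Send one non-streaming generation request and return the text."""
    payload = json.dumps(build_request(prompt)).encode()
    req = urllib.request.Request(
        f"{host}/api/generate", data=payload,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Only builds the payload here; call generate() with a live server.
    print(build_request("Summarize this page."))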


This page currently uses estimated benchmark baselines; measured data will replace them after validation.