Translategemma 27B Q8

Auto-discovered from measured benchmark results. Caveat: family metadata is auto-generated; review for taxonomy accuracy.

Hardware Snapshot

Family: Translategemma
Scenario: chat
License scope: open-source
Quantization: Q8
VRAM minimum: 24 GB
VRAM optimal: 34 GB
Best local GPU: RTX 6000 Ada 48GB
Cloud fallback: A100 80GB
Updated: 2026-02-24
Data status: Verified by real hardware
Ollama source: Library reference (verified 2026-02-24)
Ollama tag: translategemma:27b
Category: chat
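The 24 GB minimum / 34 GB optimal figures line up with a back-of-envelope Q8 sizing rule: roughly one byte per parameter for the weights, plus a few gigabytes for KV cache and runtime overhead. A minimal sketch (the function name, the 1 byte/param figure, and the 4 GB overhead are illustrative assumptions, not the site's formula):

```python
# Rough VRAM estimate for a Q8-quantized model (illustrative assumptions:
# ~1 byte per parameter for Q8 weights, plus ~4 GB KV cache / runtime overhead).
def estimate_vram_gb(params_b: float, bytes_per_param: float = 1.0,
                     overhead_gb: float = 4.0) -> float:
    weights_gb = params_b * bytes_per_param  # 27B params ~ 27 GB at Q8
    return weights_gb + overhead_gb

print(estimate_vram_gb(27))  # ~31 GB, between the 24 GB minimum and 34 GB optimal
```

This is why a 24 GB card (RTX 3090/4090) sits at the floor: it only fits with partial offload or a reduced context, while 34 GB and up keeps everything resident.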

Benchmark Anchors

Hardware Expected tok/s
RTX 3090 24GB 7.9
RTX 4090 24GB 10.7
A100 80GB 19

Real Hardware Benchmark (RTX 3090)

Tokens/s: 41.293
Latency: 3142 ms
Prompt tokens: 30
Eval tokens: 96
Test time: 2026-04-01T11:53:50Z
GPU model: NVIDIA GeForce RTX 3090
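A note on how a tokens/s figure like the one above is typically derived (a sketch of the common convention, not necessarily this site's exact methodology): decode throughput is eval tokens divided by eval time only. Dividing the 96 eval tokens by the full 3142 ms latency gives roughly 30.6 tok/s, below the reported 41.293, which is consistent with total latency also including prompt processing while the reported rate covers decode only. The function name and the round-number example are hypothetical:

```python
# Decode throughput: generated (eval) tokens divided by eval time.
# Wall-clock latency additionally includes prompt processing, so a tok/s
# figure computed from total latency will come out lower.
def decode_tok_per_s(eval_tokens: int, eval_seconds: float) -> float:
    return eval_tokens / eval_seconds

# Hypothetical round numbers: 96 tokens decoded in 2.4 s of eval time.
print(round(decode_tok_per_s(96, 2.4), 1))  # 40.0
```

When comparing numbers across sources, check which of the two definitions is in use before concluding one setup is faster.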

Verified by real hardware.


Performance Curve

Reference anchors are baseline estimates. Measured RTX 3090 data is overlaid when available.

Best Hardware for Translategemma 27B Q8

Local vs Cloud Cost Hint

Mode 40h / month 120h / month
Local power only (3090 baseline) $2.24 $6.72
A100 80GB $78 $234
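The table's figures can be reproduced with simple assumptions (these are inferred, not stated by the source: an RTX 3090 drawing about 350 W at roughly $0.16/kWh, and an A100 80GB at about $1.95/h, i.e. $78 per 40 hours):

```python
# Back-of-envelope reproduction of the cost table above.
# Assumed figures: RTX 3090 at ~350 W, ~$0.16/kWh; A100 80GB at ~$1.95/h.
def local_cost(hours: float, watts: float = 350, usd_per_kwh: float = 0.16) -> float:
    """Electricity-only cost of running the GPU locally."""
    return hours * watts / 1000 * usd_per_kwh

def cloud_cost(hours: float, usd_per_hour: float = 1.95) -> float:
    """Rental cost for the cloud GPU."""
    return hours * usd_per_hour

print(round(local_cost(40), 2), round(local_cost(120), 2))   # 2.24 6.72
print(round(cloud_cost(40), 2), round(cloud_cost(120), 2))   # 78.0 234.0
```

Note the local column covers power only; it excludes the card's purchase price, so the comparison favors local hardware you already own.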
ollama run translategemma:27b

We may earn a commission if you click links on this page.