GLM 4.7 Flash 7B Q5

Auto-discovered from measured benchmark results. Caveat: auto-generated family metadata; review for taxonomy accuracy.

Hardware Snapshot

Family GLM 4.7 Flash
Scenario chat
License scope open-source
Quantization Q5
VRAM minimum 8GB
VRAM optimal 18GB
Best local GPU RTX 3090 24GB
Cloud fallback A6000 48GB
Updated 2026-02-24
Data status Verified by Real Hardware
Ollama source Library reference (verified: 2026-02-24)
Ollama tag glm-4.7-flash:bf16
Category chat

Benchmark Anchors

Hardware Expected tok/s
RTX 3090 24GB 27
RTX 4090 24GB 36.5
A100 80GB 64.8
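The anchor throughputs above translate directly into wall-clock decode-time estimates. A minimal sketch (the helper name and the fixed-throughput assumption are illustrative; prompt prefill time is ignored):

```python
# Anchor throughputs from the table above (tokens/second, decode only).
ANCHORS_TOK_S = {
    "RTX 3090 24GB": 27.0,
    "RTX 4090 24GB": 36.5,
    "A100 80GB": 64.8,
}

def est_generation_seconds(gpu: str, output_tokens: int) -> float:
    """Estimate wall-clock seconds to decode `output_tokens` on `gpu`.

    Assumes constant throughput and ignores prompt prefill, so it is a
    lower bound for real requests.
    """
    return output_tokens / ANCHORS_TOK_S[gpu]

# e.g. a 512-token reply on the RTX 3090 anchor:
print(round(est_generation_seconds("RTX 3090 24GB", 512), 1))  # ~19.0 s
```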

Real Hardware Benchmark (RTX 3090)

Tokens/s 11.236
Latency 9291 ms
Prompt tokens 26
Eval tokens 96
Test time 2026-03-04T09:01:38Z
GPU model NVIDIA GeForce RTX 3090

Verified by real hardware.
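For reference, the Tokens/s figure above can be reproduced from an Ollama `/api/generate` response, which reports `eval_count` (generated tokens) and `eval_duration` (nanoseconds spent decoding). A sketch with illustrative numbers back-solved from the measured 11.236 tok/s over 96 eval tokens (the 9291 ms latency additionally includes prompt processing, so it is not the decode time):

```python
def tokens_per_second(resp: dict) -> float:
    """Decode throughput from an Ollama generate response.

    eval_duration is reported in nanoseconds by Ollama's API.
    """
    return resp["eval_count"] / resp["eval_duration"] * 1e9

# Illustrative values only, back-solved from the measurement above:
sample = {"eval_count": 96, "eval_duration": 8_543_966_000}
print(round(tokens_per_second(sample), 3))  # 11.236
```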


Performance Curve

Reference anchors are baseline estimates. Measured RTX 3090 data is overlaid when available.

Best Hardware for Glm 4.7 Flash 7B Q5

Local vs Cloud Cost Hint

Mode 40h / month 120h / month
Local power only (3090 baseline) $2.24 $6.72
A6000 48GB $30.40 $91.20
ollama run glm-4.7-flash:bf16
