Glm 4.7 Flash 7B FP16
Auto-discovered from measured benchmark results. Caveat: family metadata is auto-generated; review for taxonomy accuracy.
Hardware Snapshot
| Family | Glm 4.7 Flash |
|---|---|
| Scenario | chat |
| License scope | open-source |
| Quantization | FP16 |
| VRAM minimum | 18GB |
| VRAM optimal | 30GB |
| Best local GPU | RTX 6000 Ada 48GB |
| Cloud fallback | A100 80GB |
| Updated | 2026-02-24 |
| Data status | Verified by Real Hardware |
| Ollama source | Library reference (verified: 2026-02-24) |
| Ollama tag | glm-4.7-flash:bf16 |
| Category | chat |
Benchmark Anchors
| Hardware | Expected tok/s |
|---|---|
| RTX 3090 24GB | 16.5 |
| RTX 4090 24GB | 22.3 |
| A100 80GB | 39.6 |
Real Hardware Benchmark (RTX 3090)
| Tokens/s | 11.236 |
|---|---|
| Latency | 9291 ms |
| Prompt tokens | 26 |
| Eval tokens | 96 |
| Test time | 2026-03-04T09:01:38Z |
| GPU model | NVIDIA GeForce RTX 3090 |
Verified by real hardware.
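Note that the reported tokens/s and total latency are not directly proportional: the throughput figure covers generation (eval) time only, while wall-clock latency also includes prompt processing and load overhead. A minimal sketch reconstructing that split from the table's numbers (the overhead attribution is an inference, not a reported value):

```python
# Figures from the RTX 3090 benchmark table above.
eval_tokens = 96          # generated tokens
tok_per_s = 11.236        # reported generation throughput
latency_ms = 9291         # total wall-clock latency

# Generation (eval) time implied by the reported throughput.
eval_s = eval_tokens / tok_per_s            # ~8.54 s
# Remainder attributed to prompt processing / load (inferred).
overhead_s = latency_ms / 1000 - eval_s     # ~0.75 s

print(f"eval time ~{eval_s:.2f}s, non-eval overhead ~{overhead_s:.2f}s")
```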
Performance Curve
Reference anchors are baseline estimates. Measured RTX 3090 data is overlaid when available.
Best Hardware for Glm 4.7 Flash 7B FP16
- Local run: RTX 3090 (24GB), measured at about 11.2 tok/s on this profile.
- Cloud run: RunPod A100 80GB, about 3.5x the local 3090 speed anchor.
- Alternative cloud: Vast.ai options for flexible spot pricing.
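The "about 3.5x" cloud multiple comes from dividing the A100 benchmark anchor by the measured 3090 throughput:

```python
# Anchor and measured figures from the tables above.
a100_anchor = 39.6         # expected tok/s, A100 80GB
rtx3090_measured = 11.236  # measured tok/s, RTX 3090

speedup = a100_anchor / rtx3090_measured
print(f"A100 vs local 3090: {speedup:.1f}x")  # ~3.5x
```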
Local vs Cloud Cost Hint
| Mode | 40h / month | 120h / month |
|---|---|---|
| Local power only (3090 baseline) | $2.24 | $6.72 |
| A100 80GB | $78 | $234 |
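The cost-hint figures are consistent with a local draw of roughly 350 W at $0.16/kWh and an A100 80GB rental rate of about $1.95/hr; those rates are inferred from the table, not stated on the page. A sketch of the arithmetic:

```python
def local_power_cost(hours, watts=350, usd_per_kwh=0.16):
    """Electricity-only cost of a local 3090 run (inferred rates)."""
    return watts / 1000 * hours * usd_per_kwh

def cloud_cost(hours, usd_per_hour=1.95):
    """Rental cost at an assumed A100 80GB hourly rate."""
    return hours * usd_per_hour

for h in (40, 120):
    print(f"{h}h: local ${local_power_cost(h):.2f}, A100 ${cloud_cost(h):.2f}")
# 40h: local $2.24, A100 $78.00
# 120h: local $6.72, A100 $234.00
```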
Run locally with Ollama: `ollama run glm-4.7-flash:bf16`