Methodology
This page explains how LocalVRAM benchmarks are produced, validated, and published so that readers can make practical local-deployment decisions.
Runtime Environment
- Primary GPU host: an RTX 3090 (24 GB) running a sustained-load benchmark workflow.
- Latest Ollama API version observed: 0.17.7.
- Latest hardware sync: 2026-04-01T11:53:50Z.
Data Classification
- Measured: directly observed from benchmark runs and marked as verified.
- Estimated: baseline anchors used until measured data is available.
- Each publish cycle updates the changelog and status snapshots with timestamps.
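The measured/estimated split above can be sketched as a data-model rule: an estimated baseline anchor is superseded as soon as a measured run exists for the same model. The type and field names below are illustrative assumptions, not the site's actual schema.

```typescript
// Hypothetical shape of one benchmark data point; field names are illustrative.
type Classification = "measured" | "estimated";

interface BenchmarkPoint {
  model: string;
  tokensPerSecond: number;
  classification: Classification;
  verified: boolean; // true only for measured data
  timestamp: string; // ISO 8601, written at each publish cycle
}

// Keep one point per model, letting a measured point replace an
// estimated baseline anchor whenever both are present.
function mergePoints(points: BenchmarkPoint[]): BenchmarkPoint[] {
  const byModel = new Map<string, BenchmarkPoint>();
  for (const p of points) {
    const existing = byModel.get(p.model);
    if (
      !existing ||
      (p.classification === "measured" &&
        existing.classification === "estimated")
    ) {
      byModel.set(p.model, p);
    }
  }
  return [...byModel.values()];
}
```

Under this rule, a publish cycle never discards measured data in favor of an estimate; estimates only fill gaps until a verified run lands.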
Pipeline Transparency
Benchmark collection and publish workflows are tracked with diagnostics, failure classes, and
status snapshots. The current pipeline state is sourced from src/data/pipeline-status.json.
Last publish status: unknown at unknown.
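A status line like the one above is typically rendered from the snapshot file with explicit fallbacks when fields are absent. The snapshot shape below is an assumption for illustration; the real schema lives in src/data/pipeline-status.json.

```typescript
// Hypothetical snapshot shape; the actual schema may differ.
interface PublishStatus {
  status?: string; // e.g. "success" | "failed"
  publishedAt?: string; // ISO 8601 timestamp
}

// Render the "Last publish status" line, falling back to "unknown"
// for any field missing from the snapshot.
function renderStatusLine(s: PublishStatus): string {
  return `Last publish status: ${s.status ?? "unknown"} at ${s.publishedAt ?? "unknown"}`;
}
```

With an empty snapshot this yields `Last publish status: unknown at unknown`, which is why the line reads that way until the first tracked publish completes.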