
The rapid proliferation of cloud-based large language models (LLMs) has introduced significant barriers, including subscription costs, privacy concerns from transmitting data to external APIs, and operational dependence on remote infrastructure. We present ECHOVIUM QPS V1.5 (Quantum Power Shell), a compact, locally deployable LLM built on a 3B-scale open-weight instruction-tuned backbone and adapted for reasoning tasks through parameter-efficient fine-tuning and quantized deployment. QPS V1.5 achieves 64.2 ±1.3% accuracy on GPQA Diamond (PhD-level science reasoning) and 87.1 ±2.1% on LiveCodeBench code generation, performing competitively with larger hosted models on selected reasoning benchmarks while remaining strongly efficient among compact local models. Deployed via Ollama with 4-bit quantization, the model delivers up to 15 tokens/s with fully local data processing and zero marginal inference cost. Key contributions include a 7-stage QLoRA fine-tuning pipeline, a reproducible benchmarking protocol, a backbone-agnostic adaptation framework, and a production-ready deployment recipe.
GPQA, LiveCodeBench, Ollama, Local LLM, Reasoning Benchmarks, QLoRA
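As context for the deployment claim above, a model served locally through Ollama is queried over its local HTTP API (default port 11434), so prompts never leave the machine. The sketch below builds such a request using only the Python standard library; the model tag `echovium-qps:3b-q4` is hypothetical, chosen to reflect Ollama's conventional `q4` suffix for 4-bit quantized models.

```python
import json
import urllib.request

# Ollama's default local generation endpoint.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt: str, model: str = "echovium-qps:3b-q4") -> urllib.request.Request:
    """Build a generation request for a locally served model.

    The model tag is hypothetical; in Ollama, 4-bit quantized variants
    conventionally carry a q4 suffix. With stream=False the server
    returns the full completion in a single JSON response.
    """
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# All data stays on the local machine: the request targets localhost only.
req = build_request("Explain quantum tunneling in one sentence.")
```

Sending the request with `urllib.request.urlopen(req)` (while the Ollama daemon is running) would return a JSON body whose `response` field holds the generated text; this is where the zero-marginal-cost property comes from, since inference runs entirely on local hardware.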
