
ContextCache is a persistent KV cache system that accelerates tool-augmented LLM inference by caching the prefilled key-value states of tool-schema prefixes, addressed by SHA-256 content hashes. On a cache hit, only the user query requires prefilling, reducing time-to-first-token by 6.9x (787 ms to 114 ms) with zero quality degradation: group-cached generation matches full prefill exactly on Tool Selection Accuracy (TSA) across all evaluation splits. This work also documents a negative result: per-tool independent KV compilation with NoPE/deferred RoPE fails (TSA ~0.1) due to cross-tool attention dependencies, motivating the group caching design. The system includes disk persistence, a model-agnostic adapter layer (Qwen, Llama, Mistral), FastAPI serving, and a browser-based UI, and it breaks even after just 2.1 requests.
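A minimal sketch of the hit/miss path described above, assuming a Hugging Face-style causal LM; `kv_store`, `prefix_hash`, and `prefill` are illustrative names rather than the paper's actual API, and the model ID is an arbitrary small stand-in.

```python
import copy
import hashlib

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical model choice for illustration; the paper's adapter layer
# targets the Qwen, Llama, and Mistral families.
MODEL_ID = "Qwen/Qwen2.5-0.5B-Instruct"
tok = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID).eval()

kv_store = {}  # SHA-256 content hash -> cached past_key_values for the prefix


def prefix_hash(tool_schemas: str) -> str:
    # Content-hash addressing: identical serialized tool schemas map to the
    # same cache entry, no matter which request produced them.
    return hashlib.sha256(tool_schemas.encode("utf-8")).hexdigest()


@torch.no_grad()
def prefill(tool_schemas: str, user_query: str):
    key = prefix_hash(tool_schemas)
    if key not in kv_store:
        # Cache miss: prefill the whole tool-schema prefix once, as a group,
        # so cross-tool attention is computed exactly as in full prefill.
        prefix_ids = tok(tool_schemas, return_tensors="pt").input_ids
        kv_store[key] = model(prefix_ids, use_cache=True).past_key_values
    # Copy before reuse: a forward pass extends the cache in place. The real
    # system persists entries to disk instead of deep-copying in memory.
    past = copy.deepcopy(kv_store[key])
    # Cache hit path: only the user query is prefilled.
    query_ids = tok(user_query, return_tensors="pt",
                    add_special_tokens=False).input_ids
    out = model(query_ids, past_key_values=past, use_cache=True)
    return out.past_key_values  # ready for token-by-token decoding
```

Caching the tool set as one contiguous prefix is what preserves cross-tool attention; the failed per-tool variant compiled each tool's KV states independently with NoPE/deferred RoPE, losing exactly those dependencies.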
RoPE, inference optimization, tool-augmented LLMs, time-to-first-token, content-hash addressing, KV cache, prefix caching
