
Large language models arrive well-informed about federal statistics but cannot reliably assess the fitness for use of the data they retrieve. This paper introduces pragmatics (structured expert judgment delivered at the point of statistical reasoning) and provides empirical evidence that the approach works. In a knowledge representation study, an LLM was tested across 39 Census data queries under three conditions: no methodology support (control), standard RAG using 311 document chunks, and pragmatics using 36 curated items. Both treatments drew from the same 354 pages of source documentation; only the method of representation differed. Pragmatics produced very large improvements in consultation quality relative to control (Cohen's d = 1.440) and RAG (d = 0.922), with the strongest effects on uncertainty communication (d = 1.353). Pipeline fidelity reached 91.2%, up from 74.6% for RAG. All 39 queries received identical methodology context through deterministic lookup rather than similarity ranking. Pragmatics delivered 2.2 times the quality improvement per dollar spent compared to RAG. The architecture is domain-agnostic; the content is domain-specific. The concept, architecture, and evaluation framework generalize to any specialized domain where AI systems require expert judgment at the point of decision.
Keywords: Model Context Protocol, retrieval-augmented generation, Artificial Intelligence, semantic smearing, Generative AI, federal statistics, knowledge representation, AI systems, large language models, statistical consultation, pragmatics, expert judgment, fitness for use
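The abstract's central contrast is between similarity ranking (RAG) and deterministic lookup (pragmatics). A minimal sketch of that distinction is below; all function names, curated items, and document chunks are illustrative assumptions, not artifacts from the study.

```python
# Sketch: similarity-ranked retrieval vs. deterministic lookup of curated items.
# Everything here is hypothetical and for illustration only.

def rag_retrieve(query_terms, chunks, k=2):
    """Similarity ranking: score each chunk by term overlap with the query
    and return the top k. Different queries can surface different chunks,
    so the methodology context an LLM sees varies from query to query."""
    scored = sorted(chunks,
                    key=lambda c: len(query_terms & set(c.split())),
                    reverse=True)
    return scored[:k]

def pragmatics_retrieve(topic, curated):
    """Deterministic lookup: every query mapped to a topic receives the same
    curated expert-judgment items, giving identical methodology context."""
    return curated.get(topic, [])

# Hypothetical curated pragmatics items keyed by topic.
curated = {
    "acs_income": [
        "ACS income estimates carry margins of error; report them alongside the estimate.",
        "Do not compare 1-year and 5-year ACS estimates directly.",
    ],
}

# Hypothetical RAG document chunks.
chunks = [
    "income data tables acs survey",
    "margin of error methodology income",
    "census geography reference appendix",
]

print(pragmatics_retrieve("acs_income", curated)[0])
print(rag_retrieve({"income", "margin"}, chunks)[0])
```

The design point the paper emphasizes follows directly from the sketch: `pragmatics_retrieve` is a pure function of the topic, so all queries on a topic get identical context, while `rag_retrieve` depends on how the query happens to overlap with chunk wording.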
