
An infinite number of context-free grammars may be inferred from a given training set. The defensibility of any single grammar hinges on the ability to compare that grammar against others in a meaningful way. In keeping with the minimum description length principle, smaller grammars are preferred over larger ones, but only insofar as the small grammar does not over-generalise the language being studied. Furthermore, measures of size must incorporate the grammar's ability to cover sentences of the source language not included in the training set. This paper describes a method for evaluating the quality of context-free grammars according to (i) the complexity of each grammar and (ii) the amount of disambiguation information necessary for each grammar to reproduce the training set. The sum of the two evaluations is used as an objective measure of a grammar's information content. Three grammars are used as examples of this process.
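As a minimal sketch of the two-part measure described above (not the paper's actual method), the following Python snippet scores a toy context-free grammar by summing (i) the bits needed to encode its rules and (ii) the bits of disambiguation information needed to single out one derivation of a training sentence. The toy grammar, the unit-cost symbol encoding, and the derivation format are all illustrative assumptions.

```python
import math

# Hypothetical toy grammar: each nonterminal maps to its list of productions.
GRAMMAR = {
    "S": [["NP", "VP"]],
    "NP": [["det", "noun"], ["noun"]],
    "VP": [["verb"], ["verb", "NP"]],
}

def grammar_bits(grammar):
    """Bits to encode the grammar: each symbol in each production costs
    log2(|vocabulary|) bits, where the vocabulary is every distinct symbol."""
    vocab = set(grammar)
    for prods in grammar.values():
        for prod in prods:
            vocab.update(prod)
    bits_per_symbol = math.log2(len(vocab))
    # +1 per production accounts for the left-hand-side symbol.
    n_symbols = sum(len(p) + 1 for prods in grammar.values() for p in prods)
    return n_symbols * bits_per_symbol

def disambiguation_bits(grammar, derivation):
    """Bits to pick out one derivation: log2(#alternatives) at every
    nonterminal expansion. `derivation` is a list of (nonterminal, index)."""
    return sum(math.log2(len(grammar[nt])) for nt, _ in derivation)

# One derivation of "det noun verb": S -> NP VP, NP -> det noun, VP -> verb.
derivation = [("S", 0), ("NP", 0), ("VP", 0)]
total = grammar_bits(GRAMMAR) + disambiguation_bits(GRAMMAR, derivation)
print(f"grammar: {grammar_bits(GRAMMAR):.1f} bits, "
      f"disambiguation: {disambiguation_bits(GRAMMAR, derivation):.1f} bits, "
      f"total: {total:.1f} bits")
```

Under this scheme, an over-general grammar tends to shrink the first term while inflating the second, since more productions per nonterminal means more disambiguation bits per sentence; the sum penalises both extremes.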
