
Abstract

This poster describes a framework for investigating the effectiveness of query expansion term sets and reports the results of an investigation into the quality of query expansion terms drawn from different sources: pseudo-relevance feedback, web-based expansion, interactive elicitation from human searchers, and expansion approaches based on query clarity. The conclusion regarding the experimental framework is that several different evaluation approaches show a substantial level of correlation and can therefore be used interchangeably according to convenience. With regard to the comparison of different sources of expansion terms, the conclusion is that machines are better than humans at performing statistical calculations and at estimating which terms are more likely to discriminate documents relevant to a given topic. One consequence is a recommendation for research into implicit relevance feedback approaches and novel interaction models based on ostention or mediation, which have shown great potential.
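The poster itself gives no algorithmic detail, but the pseudo-relevance feedback source it evaluates is typically implemented along these lines: the top-ranked documents for the original query are treated as relevant, candidate terms occurring in them are scored, and the best-scoring terms are added to the query. The sketch below is a minimal illustration assuming a Robertson/Sparck Jones style offer weight for term scoring; the function name `prf_expansion_terms` and its parameters are hypothetical and not taken from the poster.

```python
from collections import Counter
import math

def prf_expansion_terms(query_terms, top_docs, collection_df, num_docs, k=10):
    """Score candidate expansion terms from pseudo-relevant documents.

    query_terms   -- set of terms already in the query
    top_docs      -- list of tokenized documents assumed relevant (top of the ranking)
    collection_df -- dict mapping term -> document frequency in the whole collection
    num_docs      -- total number of documents in the collection
    Returns the k highest-scoring terms not already in the query.
    """
    # Count how many pseudo-relevant documents contain each candidate term.
    r = Counter()
    for doc in top_docs:
        for term in set(doc):
            r[term] += 1

    scores = {}
    R = len(top_docs)
    for term, r_t in r.items():
        if term in query_terms:
            continue
        n_t = collection_df.get(term, 0)
        # Robertson/Sparck Jones style offer weight: frequency in the feedback
        # set weighted by an idf-like component (0.5 smoothing avoids log(0)).
        rsj = math.log(((r_t + 0.5) * (num_docs - n_t - R + r_t + 0.5)) /
                       ((n_t - r_t + 0.5) * (R - r_t + 0.5)))
        scores[term] = r_t * rsj
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

Such a procedure is entirely statistical, which is the sense in which the poster concludes that machines outperform human searchers at identifying discriminative expansion terms.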
