Publication: book chapter, 2011

Autoeval: An evaluation methodology for evaluating query suggestions using query logs

M-Dyaa Albakour, Udo Kruschwitz, Nikolaos Nanas, Yunhyong Kim, Dawei Song, Maria Fasli, Anne De Roeck
Open Access
Published: 01 Apr 2011
Abstract

User evaluations of search engines are expensive and not easy to replicate. The problem is even more pronounced when assessing adaptive search systems, for example system-generated query modification suggestions that can be derived from past user interactions with a search engine. Automatically predicting the performance of different modification suggestion models before getting the users involved is therefore highly desirable. AutoEval is an evaluation methodology that assesses the quality of query modifications generated by a model using the query logs of past user interactions with the system. We present experimental results of applying this methodology to different adaptive algorithms which suggest that the predicted quality of different algorithms is in line with user assessments. This makes AutoEval a suitable evaluation framework for adaptive interactive search engines.
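The abstract does not spell out how AutoEval scores a suggestion model against a query log, so the following is only a plausible sketch of a log-replay evaluation in that spirit. All names here (`suggest`, `autoeval_score`, `StaticModel`, the session format) are assumptions for illustration: sessions are taken to be ordered lists of queries, and a suggestion counts as a hit if the user actually submitted it later in the same session, scored by mean reciprocal rank.

```python
def reciprocal_rank(suggestions, later_queries):
    """1/rank of the first suggestion the user later issued, else 0."""
    later = set(later_queries)
    for rank, s in enumerate(suggestions, start=1):
        if s in later:
            return 1.0 / rank
    return 0.0


def autoeval_score(model, sessions, k=5):
    """Replay each logged session: after every query, ask the model for
    suggestions and score them against the queries the user went on to
    submit. Returns the mean reciprocal rank over all replay points."""
    scores = []
    for session in sessions:
        for i, query in enumerate(session[:-1]):
            suggestions = model.suggest(query, k)
            scores.append(reciprocal_rank(suggestions, session[i + 1:]))
    return sum(scores) / len(scores) if scores else 0.0


class StaticModel:
    """Toy suggestion model for illustration: a fixed lookup table."""

    def __init__(self, table):
        self.table = table

    def suggest(self, query, k):
        return self.table.get(query, [])[:k]


sessions = [
    ["jazz", "jazz festival", "jazz festival tickets"],
    ["parking", "visitor parking"],
]
model = StaticModel({
    "jazz": ["jazz festival", "jazz bar"],
    "jazz festival": ["jazz festival tickets"],
    "parking": ["car park", "visitor parking"],
})
print(round(autoeval_score(model, sessions), 3))  # 0.833
```

Because the score is computed entirely from logged interactions, two adaptive algorithms can be compared without new user studies, which is the property the abstract highlights.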

Subjects (Microsoft Academic Graph classification): Information retrieval, Computer science, Query expansion, Query optimization, Query language, Web search query, Web query classification, Online aggregation, Sargable, Ranking (information retrieval)

