Online Information Review
Article . 2011 . Peer-reviewed
License: Emerald Insight Site Policies
Data sources: Crossref

Automatic performance evaluation of web search engines using judgments of metasearch engines

Authors: Heydar Sadeghi;

Abstract

Purpose: The purpose of this paper is to introduce two new automatic methods for evaluating the performance of search engines. The reported study uses the methods to experimentally investigate which of three popular search engines (Ask.com, Bing and Google) gives the best performance.

Design/methodology/approach: The study assesses the performance of three search engines. For each one, the weighted average of similarity degrees between its ranked result list and those of its metasearch engines is measured. These measures are then compared to establish which search engine gives the best performance. To compute the similarity degree between the lists, two measures called the "tendency degree" and "coverage degree" are introduced; the former assesses a search engine in terms of results presentation and the latter evaluates it in terms of retrieval effectiveness. The performance of the search engines is experimentally assessed based on the 50 topics of the 2002 TREC web track. The effectiveness of the methods is also compared with human-based ones.

Findings: Google outperformed the others, followed by Bing and Ask.com. Moreover, significant degrees of consistency (92.87 percent and 91.93 percent) were found between the automatic and human-based approaches.

Practical implications: The findings of this work could help users to select a truly effective search engine. The results also provide motivation for the vendors of web search engines to improve their technology.

Originality/value: The paper focuses on two novel automatic methods to evaluate the performance of search engines and provides valuable experimental results on three popular ones.
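As a rough illustration of the kind of list comparison the abstract describes: the paper's exact "tendency degree" and "coverage degree" formulas are not given in this record, so the measures below are simplified stand-ins (plain overlap for coverage, a concordant-pair rank-agreement score for tendency, and an unweighted combination for the per-metasearch-engine similarity), not the authors' definitions.

```python
# Simplified, hypothetical stand-ins for the paper's similarity measures.
# The real "tendency degree" and "coverage degree" are defined in the full
# article; these only sketch the general idea of comparing ranked lists.

def coverage_degree(engine_results, metasearch_results):
    """Fraction of the metasearch engine's results that the search
    engine also retrieved (a crude retrieval-effectiveness proxy)."""
    if not metasearch_results:
        return 0.0
    shared = set(engine_results) & set(metasearch_results)
    return len(shared) / len(metasearch_results)

def tendency_degree(engine_results, metasearch_results):
    """Rank-agreement proxy over shared results: the fraction of shared
    pairs that appear in the same relative order in both lists."""
    in_meta = set(metasearch_results)
    shared = [u for u in engine_results if u in in_meta]
    if len(shared) < 2:
        return 1.0 if shared else 0.0
    meta_rank = {u: i for i, u in enumerate(metasearch_results)}
    ranks = [meta_rank[u] for u in shared]
    pairs = [(i, j) for i in range(len(ranks)) for j in range(i + 1, len(ranks))]
    concordant = sum(1 for i, j in pairs if ranks[i] < ranks[j])
    return concordant / len(pairs)

def weighted_avg_similarity(engine_results, meta_lists_with_weights):
    """Weighted average of per-metasearch-engine similarity scores,
    mirroring the weighted-average step described in the abstract."""
    total_w = sum(w for _, w in meta_lists_with_weights)
    score = sum(
        w * 0.5 * (coverage_degree(engine_results, m)
                   + tendency_degree(engine_results, m))
        for m, w in meta_lists_with_weights
    )
    return score / total_w

engine = ["a", "b", "c", "d"]
meta = ["a", "c", "b", "e"]
print(coverage_degree(engine, meta))   # 3 of 4 metasearch results covered
print(tendency_degree(engine, meta))   # 2 of 3 shared pairs concordant
```

An engine whose list both covers a metasearch engine's results and preserves their ordering scores close to 1.0; the weighted average then lets more trusted metasearch engines count for more when ranking the engines against each other.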

Impact indicators (provided by BIP!)
  citations: 7
    An alternative to the "Influence" indicator; also reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically).
  popularity: Average
    Reflects the "current" impact/attention (the "hype") of an article in the research community at large, based on the underlying citation network.
  influence: Average
    Reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically).
  impulse: Average
    Reflects the initial momentum of an article directly after its publication, based on the underlying citation network.