
doi: 10.1002/asi.21292
handle: 11245/1.332648
Abstract: Expertise‐seeking research studies how people search for expertise and choose whom to contact in the context of a specific task. An important outcome is models that identify factors influencing expert finding. Expertise retrieval addresses the same problem, expert finding, but from a system‐centered perspective. The main focus has been on developing content‐based algorithms similar to document search. These algorithms identify matching experts primarily on the basis of the textual content of documents with which experts are associated. Other factors, such as those identified by expertise‐seeking models, are rarely taken into account. In this article, we extend content‐based expert‐finding approaches with contextual factors that have been found to influence human expert finding. We focus on a task of science communicators in a knowledge‐intensive environment: the task of finding similar experts, given an example expert. Our approach combines expertise‐seeking and retrieval research. First, we conduct a user study to identify contextual factors that may play a role in the studied task and environment. Then, we design expert retrieval models to capture these factors. We combine these with content‐based retrieval models and evaluate them in a retrieval experiment. Our main finding is that while content‐based features are the most important, human participants also take contextual factors into account, such as media experience and organizational structure. We develop two principled ways of modeling the identified factors and integrate them with content‐based retrieval models. Our experiments show that models combining content‐based and contextual factors can significantly outperform existing content‐based models.
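The abstract describes integrating contextual factors (e.g., media experience, organizational structure) with content‐based retrieval scores. As a minimal illustrative sketch only (not the paper's actual models), one common way to combine such evidence is a weighted linear mixture; the factor names and weights below are hypothetical:

```python
def combined_score(content_score, context_scores, weights):
    """Mix a content-based similarity score with contextual-factor scores.

    content_score: content-based similarity of a candidate expert to the
        example expert (assumed normalized to [0, 1]).
    context_scores: dict mapping contextual factor -> score; the factor
        names used here are illustrative only, not from the paper.
    weights: dict with a weight for "content" and for each contextual
        factor; assumed to sum to 1.
    """
    score = weights["content"] * content_score
    for factor, value in context_scores.items():
        score += weights[factor] * value
    return score

# Example with made-up numbers:
s = combined_score(
    content_score=0.8,
    context_scores={"media_experience": 0.6, "org_proximity": 0.4},
    weights={"content": 0.7, "media_experience": 0.2, "org_proximity": 0.1},
)
# 0.7*0.8 + 0.2*0.6 + 0.1*0.4 = 0.72
```

This matches the abstract's finding qualitatively: content‐based evidence carries the largest weight, while contextual signals contribute a smaller but non‐negligible share.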
Keywords: expertise seeking, evaluation, expert search, contextual IR, information seeking, 004, context
| Indicator | Description | Value |
| --- | --- | --- |
| Selected citations | Citations derived from selected sources; an alternative to the "Influence" indicator, which reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | 31 |
| Popularity | Reflects the "current" impact/attention (the "hype") of an article in the research community at large, based on the underlying citation network. | Top 10% |
| Influence | Reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | Top 10% |
| Impulse | Reflects the initial momentum of an article directly after its publication, based on the underlying citation network. | Top 10% |
