Information retrieval workshop

Search engines are widely used for information retrieval in many large digital (text) collections, such as the Internet. Consequently, information retrieval has become a core digital method, and a crucial skill, for the humanities and other research fields. The ability to use a search engine and formulate queries is the foundation of the search process: effective query formulation improves both search results and information interaction (White, 2016; Järvelin et al., 2015).

Query formulation means constructing a query that expresses an information need to a search engine, typically as keywords and phrases. The search engine matches the query against a document collection, so the quality of the results depends heavily on how the query is expressed. Earlier research indicates that query formulation and reformulation can be among the most problematic and challenging tasks for users (Rieh & Xie, 2006; White, Richardson & Yih, 2015).

This event consists of a lecture on information retrieval with search engines and a workshop/hackathon in which a gamified approach is used to improve search engine literacy and searching skills (Arvola & Alamettälä, 2022). The exercises are language independent. The main themes are 1) information retrieval in context (including, for example, information seeking and problem solving) and 2) information retrieval systems, i.e., search engines (including, for example, queries, matching, ranking, evaluation, natural language processing, and user interfaces). No prior knowledge of the topic is required. The workshop is hybrid and consists of online exercises using a standard web browser, so participants should bring their own laptops (no software will be installed).

References:

Arvola, P. & Alamettälä, T. (2022). IRVILAB: Gamified Searching on Multilingual Wikipedia. Accepted for publication in Proceedings of ACM SIGIR 2022, Madrid, Spain.

Järvelin, K., Vakkari, P., Arvola, P., Baskaya, F., Järvelin, A., Kekäläinen, J., Keskustalo, H., Kumpulainen, S., Saastamoinen, M., Savolainen, R., & Sormunen, E. (2015). Task-Based Information Interaction Evaluation: The Viewpoint of Program Theory. ACM Transactions on Information Systems, 33(1), 1-30. https://doi.org/10.1145/2699660

Rieh, S.Y. & Xie, H. (2006). Analysis of multiple query reformulations on the web: The interactive information retrieval context. Information Processing & Management, 42(3), 751-768. https://doi.org/10.1016/j.ipm.2005.05.005

White, R.W. (2016). Interactions with Search Systems. Cambridge University Press.

White, R.W., Richardson, M., & Yih, W. (2015). Questions vs. Queries in Informational Search Tasks. In Proceedings of the 24th International Conference on World Wide Web (WWW '15 Companion). ACM, New York, NY, USA, 135-136. https://doi.org/10.1145/2740908.2742769
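The second workshop theme, how a search engine matches queries against a document collection and ranks the results, can be illustrated with a minimal sketch. This is not the workshop's software (the IRVILAB exercises run in a browser); it is a hypothetical TF-IDF-style scorer, with an illustrative toy corpus, showing how reformulating a query changes which documents are retrieved and in what order.

```python
# Minimal sketch (illustrative only): ranking documents by a simple
# TF * IDF overlap score to show how query wording affects results.
import math
from collections import Counter

# Toy document collection (made up for this example)
docs = {
    "d1": "search engines match queries against a document collection",
    "d2": "query formulation expresses an information need as keywords",
    "d3": "gamified exercises improve search engine literacy and skills",
}

def tokenize(text):
    # Naive whitespace tokenizer, no stemming: "query" != "queries",
    # which itself illustrates why query wording matters.
    return text.lower().split()

# Document frequency of each term, for IDF weighting
df = Counter(term for text in docs.values() for term in set(tokenize(text)))
N = len(docs)

def score(query, text):
    """Sum TF * IDF over the query terms that occur in the document."""
    tf = Counter(tokenize(text))
    return sum(tf[t] * math.log(N / df[t]) for t in tokenize(query) if t in tf)

def search(query):
    """Return IDs of matching documents, best score first."""
    ranked = sorted(docs, key=lambda d: score(query, docs[d]), reverse=True)
    return [d for d in ranked if score(query, docs[d]) > 0]

print(search("query formulation"))
print(search("search engine literacy"))
```

Running the two example queries retrieves different documents in a different order, which is the core point the workshop exercises make about query (re)formulation.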
Keywords: BOBCATS222, bobcatsss, Bobcatsss, Search engines, Information retrieval, bobcatsss 2022, gamification, information literacy
| Indicator | Description | Value |
| --- | --- | --- |
| Selected citations | Citations derived from selected sources; an alternative to the "Influence" indicator, which also reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | 1 |
| Popularity | Reflects the "current" impact/attention (the "hype") of an article in the research community at large, based on the underlying citation network. | Average |
| Influence | Reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | Average |
| Impulse | Reflects the initial momentum of an article directly after its publication, based on the underlying citation network. | Average |
| Views | | 13 |
| Downloads | | 11 |

Views provided by UsageCounts
Downloads provided by UsageCounts