Project deliverable
Data sources: ZENODO

XR2Learn D3.2 XR2Learn enablers

Abstract

This deliverable, D3.2 XR2Learn enablers, outlines the development of innovative tools designed to foster the creation and integration of Extended Reality (XR) learning applications enriched by affective computing. The contributions of the proposed enablers are two-fold: to reduce the workload required to develop XR learning applications, and to personalize and enhance the learning experience in an effortless and seamless manner. In the context of Task 3.2, the following tools have been developed:

● Enabler 1: The Authoring Tool, a key component that simplifies the creation of XR applications tailored for educational purposes. It allows educators and developers to easily build immersive learning environments.

● Enablers 2-5: These enablers provide tools for automatic emotion recognition from various input data modalities. To facilitate the effortless use of both public datasets and "in-house" data, they include:
  ○ Self-Supervised Learning: pre-trains Deep Learning models without the need for labeled data, allowing emotion-recognition models to be trained with less annotated data and fewer resources.
  ○ Supervised Learning: requires labeled data and provides a structured approach to identifying user emotions from the input modalities.

● Enabler 6: The final step in the affective computing pipeline. It integrates the outputs of the automatic emotion detection components (Enablers 2-5) and uses them as a source for adapting the learning material. Its primary function is to use the detected emotions of the user to suggest appropriate learning activities. This personalization ensures that the learning experience is optimized for each individual, making it more engaging and effective.

● Magic XRoom: An innovative tool for collecting data. This data is crucial for the evaluation of the enablers, as it provides the necessary input for the emotion detection algorithms.

In conclusion, the deliverable presents a set of novel enablers that can be used to accelerate the development of educational XR applications. Moreover, by integrating XR applications with the required equipment, data collection modules, and emotion recognition enablers, it paves the way for a more immersive, personalized learning experience that adapts to the emotional states of the users, thereby enhancing the overall effectiveness of the educational process.
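To make the adaptation idea behind Enabler 6 concrete, the sketch below maps a detected emotional state to a suggested adjustment of the learning activity. This is an illustrative example only, not code from the deliverable: the names `EMOTION_RULES` and `suggest_activity`, and the specific emotion labels and rules, are hypothetical assumptions.

```python
# Illustrative sketch of emotion-driven adaptation (the role of Enabler 6).
# The mapping and function names are hypothetical, not from the deliverable.

# Hypothetical rules: detected emotional state -> adaptation of the material.
EMOTION_RULES = {
    "frustration": "offer a simpler exercise or a hint",
    "boredom": "increase difficulty or introduce a new topic",
    "engagement": "continue with the current activity",
}

def suggest_activity(emotion: str) -> str:
    """Return a suggested adaptation for the detected emotion.

    Unknown or unrecognized emotions fall back to keeping the current
    activity, so the learning flow is never interrupted.
    """
    return EMOTION_RULES.get(emotion, "continue with the current activity")
```

In a full pipeline, the `emotion` argument would come from the recognition components (Enablers 2-5) rather than being supplied by hand.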
