eDrama: Facilitating online role-play using an AI actor and emotionally expressive characters

Article (English)
Zhang, Li ; Gillies, Marco ; Dhaliwal, Kulwant ; Gower, Amanda ; Robertson, Dale ; Crabtree, Barry (2009)
  • Publisher: IOS Press
  • Subject: G600 | G400 | G700

This paper describes a multi-user role-playing environment, e-drama, which enables groups of people to converse online in scenario-driven virtual environments. The starting point of this research, e-drama, is a 2D graphical environment in which users are represented by static cartoon figures. An application has been developed that integrates the existing e-drama tool with several new components to support avatars with emotionally expressive behaviour, rendered in a 3D environment. The added functionality includes the extraction of affect from open-ended improvisational text. The results of the affective analysis are then used to: (a) control an automated improvisational AI actor, EMMA (emotion, metaphor and affect), which plays a bit-part character in the improvisation; and (b) drive avatar animations via the Demeanour framework in the user interface, so that avatars react bodily in ways consistent with the affect they express. Finally, we describe user trials demonstrating that these changes improve the quality of social interaction and users' sense of presence. Moreover, the system has the potential to complement conventional classroom education for young people with or without learning disabilities by providing efficient, personalised, round-the-clock training in social skills, language and careers via role-play, together with automatic monitoring.
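The pipeline the abstract describes can be pictured as a minimal sketch: affect is extracted from a free-form chat utterance, and the result drives both the bit-part AI actor's reply and the speaker's avatar animation. Everything below is illustrative only — the keyword lexicon, function names, and canned responses are hypothetical stand-ins, not EMMA's or Demeanour's actual methods.

```python
# Hypothetical sketch of the affect-driven pipeline: text -> affect label,
# then the label feeds (a) an AI-actor reply and (b) an avatar animation.
# Lexicon and mappings are invented for illustration.

AFFECT_LEXICON = {
    "hate": "angry",
    "stupid": "angry",
    "sorry": "sad",
    "great": "happy",
    "love": "happy",
}

def extract_affect(utterance: str) -> str:
    """Crude keyword-based affect detection (a stand-in for EMMA's analysis)."""
    for word in utterance.lower().split():
        token = word.strip(".,!?'\"")
        if token in AFFECT_LEXICON:
            return AFFECT_LEXICON[token]
    return "neutral"

def ai_actor_reply(affect: str) -> str:
    """Pick a bit-part character line matching the detected affect."""
    replies = {
        "angry": "Whoa, calm down, everyone!",
        "sad": "Hey, it'll be alright.",
        "happy": "That's the spirit!",
        "neutral": "Go on, I'm listening.",
    }
    return replies[affect]

def avatar_animation(affect: str) -> str:
    """Select an expressive body animation for the speaker's avatar."""
    return {"angry": "arms_crossed", "sad": "slumped",
            "happy": "open_gesture", "neutral": "idle"}[affect]

affect = extract_affect("I hate this, it's so unfair!")
print(affect, "|", ai_actor_reply(affect), "|", avatar_animation(affect))
# -> angry | Whoa, calm down, everyone! | arms_crossed
```

The point of the sketch is the fan-out: one affect label is consumed by two independent components, matching the paper's split between the improvisational actor and the expressive-avatar rendering.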
