Social activity recognition based on probabilistic merging of skeleton features with proximity priors from RGB-D data

Book chapter (English, Open Access)
Coppola, Claudio; Faria, Diego R.; Nunes, Urbano; Bellotto, Nicola (2016)
  • Publisher: IEEE

Social activity based on body motion is a key feature of non-verbal and physical behavior, serving as a communicative signal in social interaction between individuals. Social activity recognition is important for studying human-human communication as well as human-robot interaction. Accordingly, this research has three goals: (1) to recognize social behavior (e.g. human-human interaction) using a probabilistic approach that merges spatio-temporal features from individual bodies with social features from the relationship between two individuals; (2) to learn priors based on the physical proximity between individuals during an interaction, following proxemics theory, to feed a probabilistic ensemble of activity classifiers; and (3) to provide a public dataset with RGB-D data of social daily activities, including risk situations, that is useful for testing assisted-living approaches, since this type of dataset is still missing. Results show that the proposed approach, designed to merge features with different semantics together with proximity priors, improves classification performance in terms of precision, recall and accuracy when compared with approaches that employ alternative strategies.
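The fusion idea in goal (2) can be illustrated with a minimal sketch: a proximity-dependent prior over activity classes is multiplied, naive-Bayes style, by the likelihoods produced by independent base classifiers (one over individual skeleton features, one over social/relational features), and the product is renormalized. The activity labels, zone boundaries, and all numeric weights below are illustrative assumptions, not the paper's actual classes or learned values; the zone thresholds loosely follow Hall's proxemics.

```python
import numpy as np

# Hypothetical activity classes (not the paper's label set).
ACTIVITIES = ["handshake", "hug", "talk", "push"]

def proximity_prior(distance_m):
    """Map the inter-person distance to a prior over activities.
    Zone boundaries follow Hall's proxemics (intimate < 0.45 m,
    personal < 1.2 m, social beyond); the per-zone weights are
    made-up illustrative values."""
    if distance_m < 0.45:        # intimate zone: contact activities likely
        prior = np.array([0.30, 0.40, 0.10, 0.20])
    elif distance_m < 1.2:       # personal zone
        prior = np.array([0.35, 0.15, 0.30, 0.20])
    else:                        # social zone and beyond
        prior = np.array([0.10, 0.05, 0.70, 0.15])
    return prior / prior.sum()

def merge_posterior(likelihoods, distance_m):
    """Naive-Bayes-style fusion: prior times the product of the
    per-classifier likelihoods, renormalized to sum to one."""
    post = proximity_prior(distance_m)
    for lik in likelihoods:
        post = post * lik
    return post / post.sum()

# Two base classifiers: one on individual skeleton features,
# one on social (relational) features; outputs are per-class scores.
skel_lik   = np.array([0.5, 0.2, 0.2, 0.1])
social_lik = np.array([0.4, 0.3, 0.2, 0.1])
posterior = merge_posterior([skel_lik, social_lik], distance_m=0.8)
print(ACTIVITIES[int(np.argmax(posterior))])  # class with highest fused posterior
```

At a distance of 0.8 m the personal-zone prior boosts contact activities, so the fused decision can differ from what either classifier would report alone; this is the intended effect of feeding proxemics priors into the ensemble.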
