
As Extended Reality (XR) applications evolve from passive viewing experiences to interactive simulations, the behavior of virtual humans becomes a critical factor for immersion. Users expect virtual humans to react naturally to their movements, yet implementing robust action recognition and integrating it into a VR application remains challenging. This paper introduces a modular toolkit designed to bridge this gap. By decoupling the rendering loop from the inference process, our system enables deep-learning-based detection of user actions (such as hand waving or crossing hands) without compromising the frame rate of the XR experience. We present the system architecture and its integration into the Unity engine, and describe the demo applications we developed.
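The decoupling described above could be realized, for example, by running the action classifier on a background thread that exchanges data with the render loop through bounded queues. The sketch below illustrates this pattern in Python under stated assumptions: `fake_inference` is a hypothetical stand-in for the deep-learning classifier, and the frame timing and queue sizes are illustrative, not taken from the paper.

```python
import queue
import threading
import time

def fake_inference(pose_frames):
    # Hypothetical stand-in for the deep-learning action classifier.
    time.sleep(0.05)  # simulate a slow model forward pass
    return "hand_wave" if len(pose_frames) > 2 else "idle"

def inference_worker(in_q, out_q):
    # Runs on a background thread so inference never blocks rendering.
    while True:
        frames = in_q.get()
        if frames is None:  # shutdown signal
            break
        out_q.put(fake_inference(frames))

def run_demo(num_render_frames=10):
    in_q, out_q = queue.Queue(maxsize=1), queue.Queue()
    worker = threading.Thread(target=inference_worker, args=(in_q, out_q))
    worker.start()

    latest_action = "idle"
    buffer = []
    for frame in range(num_render_frames):
        buffer.append(frame)         # collect tracked-pose data
        if not in_q.full():          # hand off a batch without blocking
            in_q.put(list(buffer))
            buffer.clear()
        try:
            latest_action = out_q.get_nowait()  # pick up a finished result
        except queue.Empty:
            pass                     # render loop continues regardless
        time.sleep(0.011)            # ~90 Hz render tick
    in_q.put(None)
    worker.join()
    return latest_action
```

The render loop never waits on the model: it submits pose batches only when the queue has room and polls non-blockingly for results, so a slow inference step delays the recognized action, not the frame rate.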
