Accurate and reliable human motion reconstruction is crucial for creating natural interactions of full-body avatars in Virtual Reality (VR) and entertainment applications. As the Metaverse and social applications gain popularity, users seek cost-effective solutions to create full-body animations comparable in quality to those produced by commercial motion capture systems. To keep such solutions affordable, however, the number of sensors attached to the subject's body must be minimized. Unfortunately, reconstructing the full-body pose from sparse data is a heavily under-determined problem. Approaches based on IMU sensors struggle with positional drift and pose ambiguity. In recent years, mainstream VR systems have released 6-degree-of-freedom (6-DoF) tracking devices that provide both positional and rotational information. Nevertheless, most solutions for reconstructing full-body poses from them rely on traditional inverse kinematics (IK) solvers, which often produce discontinuous and unnatural poses. In this article, we introduce SparsePoser, a novel deep learning-based solution for reconstructing a full-body pose from a reduced set of six tracking devices. Our system incorporates a convolutional autoencoder that synthesizes high-quality, continuous human poses by learning the human motion manifold from motion capture data. We then employ a learned IK component, composed of multiple lightweight feed-forward neural networks, to adjust the hands and feet toward the corresponding trackers. We extensively evaluate our method on publicly available motion capture datasets and in real-time live demos. We show that our method outperforms state-of-the-art techniques using IMU sensors or 6-DoF tracking devices, and that it generalizes to users with different body dimensions and proportions.
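The abstract describes a two-stage pipeline: a convolutional autoencoder that maps sparse tracker signals to full-body poses, followed by lightweight feed-forward networks that nudge the end-effectors toward their trackers. The sketch below illustrates that data flow only; all layer sizes, kernel widths, joint counts, and the (untrained, random) weights are assumptions for illustration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): 6 trackers x 6 DoF each,
# 22 body joints x 6-DoF pose parameters, 64-frame temporal window.
N_TRACKERS, TRACKER_DIM = 6, 6
N_JOINTS, JOINT_DIM = 22, 6
WINDOW = 64

def conv1d(x, w):
    """Temporal 1D convolution (valid padding) over (channels, frames)."""
    c_out, c_in, k = w.shape
    frames = x.shape[1] - k + 1
    out = np.zeros((c_out, frames))
    for t in range(frames):
        # Sum over input channels and kernel taps for every output channel.
        out[:, t] = np.tensordot(w, x[:, t:t + k], axes=([1, 2], [0, 1]))
    return out

class SparsePoserSketch:
    """Two-stage sketch: conv autoencoder, then per-limb feed-forward IK."""
    def __init__(self):
        k = 15                       # hypothetical temporal kernel size
        d_in = N_TRACKERS * TRACKER_DIM
        d_latent = 32                # hypothetical motion-manifold width
        d_out = N_JOINTS * JOINT_DIM
        self.enc = rng.standard_normal((d_latent, d_in, k)) * 0.01
        self.dec = rng.standard_normal((d_out, d_latent, 1)) * 0.01
        # One lightweight MLP per end-effector (hands and feet).
        self.ik = {e: (rng.standard_normal((d_out + TRACKER_DIM, 64)) * 0.01,
                       rng.standard_normal((64, d_out)) * 0.01)
                   for e in ("l_hand", "r_hand", "l_foot", "r_foot")}

    def __call__(self, trackers):
        # trackers: (frames, trackers * dof); transpose to (channels, frames).
        z = np.maximum(conv1d(trackers.T, self.enc), 0.0)  # encode to manifold
        pose = conv1d(z, self.dec).T                       # decode full-body pose
        # Learned-IK stage: residual correction pulling limbs toward trackers.
        for (w1, w2) in self.ik.values():
            target = trackers[:pose.shape[0], :TRACKER_DIM]
            h = np.maximum(np.concatenate([pose, target], axis=1) @ w1, 0.0)
            pose = pose + h @ w2
        return pose

model = SparsePoserSketch()
pose = model(rng.standard_normal((WINDOW, N_TRACKERS * TRACKER_DIM)))
print(pose.shape)  # (WINDOW - 15 + 1 frames, 22 joints * 6 DoF) -> (50, 132)
```

The residual IK stage mirrors the paper's design motivation: the autoencoder guarantees poses on the learned motion manifold, while the small per-limb networks trade a classical IK solver's frame-by-frame optimization for a single learned forward pass.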
VR, FOS: Computer and information sciences, Kinematics, Computer Science - Artificial Intelligence, Avatars (Virtual reality), Deep learning, Àrees temàtiques de la UPC::Informàtica::Infografia, Graphics (cs.GR), 004, Computer Science - Graphics, Artificial Intelligence (cs.AI), Inverse kinematics, Motion capture
| Indicator | Description | Value |
| --- | --- | --- |
| selected citations | Citations derived from selected sources; an alternative to the "influence" indicator below. | 32 |
| popularity | Reflects the "current" impact/attention (the "hype") of an article in the research community at large, based on the underlying citation network. | Top 10% |
| influence | Reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | Top 10% |
| impulse | Reflects the initial momentum of an article directly after its publication, based on the underlying citation network. | Top 1% |
| views | | 87 |
| downloads | | 49 |

Views provided by UsageCounts
Downloads provided by UsageCounts