Natural language plays a critical role in many computer vision applications, such as image captioning, visual question answering, and cross-modal retrieval, by providing fine-grained semantic information. Unfortunately, while human pose is key to human understanding, current 3D human pose datasets lack detailed language descriptions. To address this issue, we introduce the PoseScript dataset, which pairs more than six thousand 3D human poses from AMASS with rich human-annotated descriptions of the body parts and their spatial relationships. To increase the size of the dataset to a scale compatible with data-hungry learning algorithms, we also propose an elaborate captioning process that generates automatic synthetic descriptions in natural language from given 3D keypoints. This process extracts low-level pose information, called "posecodes", using a set of simple yet generic rules on the 3D keypoints, and then combines these posecodes into higher-level textual descriptions using syntactic rules. With automatic annotations, the amount of available data scales up significantly (to 100k poses), making it possible to effectively pretrain deep models for finetuning on human-written captions. To showcase the potential of annotated poses, we present three multi-modal learning tasks that use the PoseScript dataset. First, we develop a pipeline that maps 3D poses and textual descriptions into a joint embedding space, allowing cross-modal retrieval of relevant poses from large-scale datasets. Second, we establish a baseline for a text-conditioned model that generates 3D poses. Third, we present a learned process for generating pose descriptions. These applications demonstrate the versatility and usefulness of annotated poses across a variety of tasks and pave the way for future research in the field.
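The rule-based captioning process described above can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's actual implementation: the joint coordinates, angle thresholds, and category labels are assumptions chosen for the example.

```python
import math

def angle_deg(a, b, c):
    """Angle at joint b (in degrees) formed by 3D points a-b-c."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    # Clamp to [-1, 1] to guard against floating-point drift.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))

def knee_posecode(hip, knee, ankle):
    """Categorize the knee angle into a coarse low-level posecode.
    Thresholds (100 and 160 degrees) are illustrative assumptions."""
    theta = angle_deg(hip, knee, ankle)
    if theta < 100:
        return "completely bent"
    if theta < 160:
        return "partially bent"
    return "straight"

def to_sentence(side, posecode):
    """Syntactic rule: combine a posecode into a textual description."""
    return f"The {side} knee is {posecode}."

# Example: a sharply bent left knee (heel pulled up toward the hip).
code = knee_posecode((0.0, 1.0, 0.0), (0.0, 0.5, 0.0), (0.0, 1.0, 0.1))
print(to_sentence("left", code))  # → The left knee is completely bent.
```

In the same spirit, additional rules over other keypoint triplets (elbows, hips, relative positions of hands and feet) would each yield a posecode, and the syntactic stage would aggregate them into a full description.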
TPAMI 2024, extended version of the ECCV 2022 paper
FOS: Computer and information sciences, Databases, Factual, Computer Vision and Pattern Recognition (cs.CV), Generation, Computer Science - Computer Vision and Pattern Recognition, Natural languages, 3D human pose, Captioning, Imaging, Three-Dimensional, Multi-modal learning, Humans, Knee, Natural Language Processing, Pipelines, 000, Retrieval, Description, 004, Semantics, Natural language, Task analysis, Legged locomotion, Three-dimensional displays, Àrees temàtiques de la UPC::Informàtica, Algorithms
| Indicator | Description | Value |
| --- | --- | --- |
| selected citations | Citations derived from selected sources; an alternative to the "Influence" indicator, which reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | 0 |
| popularity | Reflects the "current" impact/attention (the "hype") of an article in the research community at large, based on the underlying citation network. | Average |
| influence | Reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | Average |
| impulse | Reflects the initial momentum of an article directly after its publication, based on the underlying citation network. | Average |
| views | | 22 |
| downloads | | 13 |

Views provided by UsageCounts
Downloads provided by UsageCounts