V-VIPE: Variational View Invariant Pose Embedding
CoRR (2024)
Abstract
Learning to represent three-dimensional (3D) human pose from a two-dimensional (2D) image of a person is a challenging problem. To make the problem less ambiguous, it has become common practice to estimate 3D pose in the camera coordinate space. However, this makes comparing two 3D poses difficult. In this paper, we address this challenge by separating the problem of estimating 3D pose from 2D images into two steps. We use a variational autoencoder (VAE) to find an embedding that represents 3D poses in a canonical coordinate space. We refer to this embedding as the variational view-invariant pose embedding (V-VIPE). Using V-VIPE, we can encode 2D and 3D poses and use the embedding for downstream tasks such as retrieval and classification. We can estimate 3D poses from these embeddings using the decoder, as well as generate unseen 3D poses. The variability of our encoding allows it to generalize well to unseen camera views when mapping from 2D space. To the best of our knowledge, V-VIPE is the only representation to offer this diversity of applications. Code and more information can be found at https://v-vipe.github.io/.
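The abstract describes encoding canonical-space 3D poses into a latent embedding with a VAE, then decoding that embedding back to 3D for reconstruction or generation. The sketch below is a minimal, hypothetical PyTorch illustration of such a pose VAE; the joint count, layer widths, latent dimension, loss weighting, and all names (`PoseVAE`, `vae_loss`) are assumptions for illustration and do not reflect the authors' actual architecture or training setup.

```python
import torch
import torch.nn as nn

class PoseVAE(nn.Module):
    """Minimal VAE sketch: encode a flattened 3D pose (J joints x 3 coords)
    into a latent embedding and decode it back. Sizes are illustrative."""

    def __init__(self, num_joints=17, latent_dim=32, hidden=256):
        super().__init__()
        in_dim = num_joints * 3  # x, y, z per joint
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.fc_mu = nn.Linear(hidden, latent_dim)      # mean of q(z|x)
        self.fc_logvar = nn.Linear(hidden, latent_dim)  # log-variance of q(z|x)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, in_dim),
        )

    def reparameterize(self, mu, logvar):
        # Sample z = mu + sigma * eps with eps ~ N(0, I) so gradients
        # flow through mu and logvar (the reparameterization trick).
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, pose):
        # pose: (batch, num_joints, 3) in a canonical coordinate space.
        h = self.encoder(pose.flatten(1))
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = self.reparameterize(mu, logvar)  # the pose embedding
        recon = self.decoder(z).view_as(pose)
        return recon, mu, logvar

def vae_loss(recon, pose, mu, logvar, beta=1e-3):
    # Reconstruction error plus KL divergence to the unit Gaussian prior;
    # beta trades off reconstruction fidelity against latent regularity.
    rec = torch.mean((recon - pose) ** 2)
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + beta * kld
```

Under this sketch, the downstream uses mentioned in the abstract fall out naturally: retrieval and classification operate on `mu` (nearest neighbors or a small classifier in latent space), while sampling `z ~ N(0, I)` and running the decoder yields unseen 3D poses. Mapping 2D keypoints into the same latent space would require a separate 2D encoder trained to match these embeddings, which this sketch does not include.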
Keywords
Pose Embedding,View Invariance,2D Images,Variational Autoencoder,Camera View,2D Space,Person Image,Human Pose,3D Pose,3D Space,Latent Space,Relative Location,3D Network,Action Recognition,Joint Position,Pose Estimation,Part Of Table,Camera Angle,Embedding Learning,Triplet Loss,2D Pose,Camera Viewpoint,Human Pose Estimation,Similar Pose,2D Keypoints,Ground Truth 3D,Variational Autoencoder Model,Global Rotation,Slight Angle,Decoder Network