3D Human Pose and Shape Reconstruction from Videos via Confidence-Aware Temporal Feature Aggregation



Zhang, Hongrun, Meng, Yanda ORCID: 0000-0001-7344-2174, Zhao, Yitian, Qian, Xuesheng, Qiao, Yihong, Yang, Xiaoyun and Zheng, Yalin ORCID: 0000-0002-7873-0922
(2022) 3D Human Pose and Shape Reconstruction from Videos via Confidence-Aware Temporal Feature Aggregation. IEEE Transactions on Multimedia, 25. p. 1.

TMM3167887.pdf - Author Accepted Manuscript

Abstract

Estimating 3D human body shapes and poses from videos is a challenging computer vision task. The intrinsic temporal information embedded in adjacent frames is helpful in making accurate estimations. Existing approaches learn temporal features of the target frames simply by aggregating the features of their adjacent frames with off-the-shelf deep neural networks; consequently, they cannot explicitly and effectively exploit the correlations between adjacent frames to help infer the parameters of the target frames. In this paper, we propose a novel framework that measures the correlations among adjacent frames in the form of an estimated confidence metric. The confidence value indicates to what extent an adjacent frame can help predict the target frame's 3D shape and pose. Based on the estimated confidence values, temporally aggregated features are obtained by adaptively weighting the predicted temporal features from the adjacent frames. The final 3D shapes and poses are estimated by regression from the temporally aggregated features. Experimental results on three benchmark datasets show that the proposed method outperforms state-of-the-art approaches, even without motion priors in training. In particular, the proposed method is more robust against corrupted frames.
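A minimal sketch of the aggregation idea described in the abstract, assuming per-frame CNN features and a small MLP confidence head. The module name, layer sizes, and the softmax normalisation of confidences are illustrative assumptions, not the authors' actual architecture; the 3D pose and shape parameters would then be regressed from the returned aggregated feature.

import torch
import torch.nn as nn

class ConfidenceAwareAggregator(nn.Module):
    """Aggregate adjacent-frame features into a target-frame feature,
    weighting each neighbour by an estimated confidence score.
    Illustrative sketch only, not the published model."""

    def __init__(self, feat_dim=2048):
        super().__init__()
        # Confidence head (assumed MLP): scores how useful each adjacent
        # frame is for predicting the target frame's parameters.
        self.conf_head = nn.Sequential(
            nn.Linear(2 * feat_dim, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, 1),
        )

    def forward(self, target_feat, neighbour_feats):
        # target_feat:     (B, D)    feature of the target frame
        # neighbour_feats: (B, T, D) features of the T adjacent frames
        B, T, D = neighbour_feats.shape
        target_rep = target_feat.unsqueeze(1).expand(-1, T, -1)        # (B, T, D)
        pair = torch.cat([target_rep, neighbour_feats], dim=-1)        # (B, T, 2D)
        conf = self.conf_head(pair).squeeze(-1)                        # (B, T) raw confidences
        weights = torch.softmax(conf, dim=1)                           # normalise over neighbours
        aggregated = (weights.unsqueeze(-1) * neighbour_feats).sum(1)  # (B, D) weighted sum
        return aggregated, weights

For example, with feat_dim=2048, a batch of target features of shape (B, 2048) and neighbour features of shape (B, T, 2048) yields an aggregated feature of shape (B, 2048) together with the per-neighbour weights, so frames judged less reliable (e.g. corrupted ones) contribute less to the final estimate.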

Item Type: Article
Divisions: Faculty of Health and Life Sciences
Faculty of Health and Life Sciences > Institute of Life Courses and Medical Sciences
Depositing User: Symplectic Admin
Date Deposited: 20 May 2022 10:46
Last Modified: 15 Mar 2024 10:42
DOI: 10.1109/tmm.2022.3167887
Related URLs:
URI: https://livrepository.liverpool.ac.uk/id/eprint/3155135