Our network was trained only on individual poses. Here, we apply our rendering pipeline frame by frame to keypoints from motion-capture sequences of the AMASS dataset.
A failure case arises when the animation is far from the training poses, as with this crawling animation: the mask degrades in occluded areas, and artifacts appear towards the edge of the reconstruction region.
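For reference, a minimal per-frame driving loop might look like the following sketch. This is a hypothetical illustration: `render_frame` stands in for the trained keypoint-to-image renderer, and the pre-extracted keypoint array is an assumption, since AMASS itself ships body-model parameters rather than ready-made keypoints.

```python
# Hypothetical sketch: driving the frozen single-pose renderer with an AMASS
# sequence. Only the numpy loading is standard; `render_frame` and the
# pre-extracted keypoint array are stand-ins for the actual pipeline.
import numpy as np

def render_frame(keypoints_3d):
    """Placeholder for the trained keypoint-to-image renderer."""
    raise NotImplementedError

seq = np.load("amass_sequence.npz")              # illustrative path
keypoints = seq["keypoints"]                     # assumed shape (T, K, 3)

# The network has no temporal model, so every frame is rendered independently.
frames = [render_frame(kps) for kps in keypoints]
```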
Each video shows a linear interpolation between pose encodings from the test set.
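The interpolation itself is a simple linear blend in latent space. The sketch below assumes `encode` and `render` accessors for the pose codes; both names are illustrative, not the actual API.

```python
# Minimal latent interpolation sketch; `encode` and `render` are illustrative
# names for the pose encoder and the neural renderer.
import numpy as np

def interpolate_codes(z_a, z_b, n_steps=60):
    """Blend two pose encodings linearly, one code per output frame."""
    return [(1.0 - t) * z_a + t * z_b for t in np.linspace(0.0, 1.0, n_steps)]

# z_a, z_b = encode(pose_a), encode(pose_b)      # two test-set poses
# frames = [render(z) for z in interpolate_codes(z_a, z_b)]
```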
Each video shows a single pose rendered from a camera circling the subject at different heights.
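Such turntable views only require generating camera extrinsics on a horizontal circle. A small sketch follows; it assumes an OpenGL-style look-at convention, and the radius and height values are illustrative.

```python
# Sketch of turntable camera extrinsics; assumes an OpenGL-style convention
# (camera looks down -z, y is up). Radius/height values are illustrative.
import numpy as np

def lookat(eye, target=np.zeros(3), up=np.array([0.0, 1.0, 0.0])):
    """World-to-camera rotation R and translation t for a camera at `eye`."""
    fwd = target - eye
    fwd /= np.linalg.norm(fwd)
    right = np.cross(fwd, up)
    right /= np.linalg.norm(right)
    R = np.stack([right, np.cross(right, fwd), -fwd])   # camera axes as rows
    return R, -R @ eye

def circle_cameras(radius=2.5, height=1.0, n=120):
    """Yield poses evenly spaced on a circle at a fixed height around the origin."""
    for phi in np.linspace(0.0, 2.0 * np.pi, n, endpoint=False):
        yield lookat(np.array([radius * np.cos(phi), height, radius * np.sin(phi)]))
```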
The animation shows the pose optimization process, from the initialization to the final pose, given the target silhouette.
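Conceptually, this is gradient descent through a differentiable mask renderer. The sketch below assumes a `render_mask(keypoints)` function returning per-pixel occupancy in [0, 1]; the function name and the BCE loss are illustrative choices, not necessarily those of the actual method.

```python
# Hedged sketch of silhouette-driven pose optimization. `render_mask` is an
# illustrative stand-in for the differentiable mask decoder; the BCE loss is
# one plausible choice of silhouette loss, not necessarily the paper's.
import torch

def fit_pose(render_mask, target_sil, init_keypoints, steps=300, lr=1e-2):
    """Refine 3D keypoints so the rendered mask matches a target silhouette."""
    kps = init_keypoints.clone().requires_grad_(True)
    opt = torch.optim.Adam([kps], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        mask = render_mask(kps)                          # differentiable forward pass
        loss = torch.nn.functional.binary_cross_entropy(mask, target_sil)
        loss.backward()                                  # gradients reach the keypoints
        opt.step()
    return kps.detach()
```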
@inproceedings{giewald2022nepu,
  author    = {Giebenhain, Simon and Waldmann, Urs and Johannsen, Ole and Goldluecke, Bastian},
  title     = {Neural Puppeteer: Keypoint-Based Neural Rendering of Dynamic Shapes},
  booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
  month     = {December},
  year      = {2022},
  pages     = {2830--2847}
}