Improving Unsupervised Label Propagation for Pose Tracking and Video Object Segmentation
GCPR 2022


Urs Waldmann¹,², Jannik Bamberger¹, Ole Johannsen¹, Oliver Deussen¹,², and Bastian Goldlücke¹,²

¹ Department of Computer and Information Science, University of Konstanz, Germany
² Centre for the Advanced Study of Collective Behaviour, University of Konstanz, Germany

Abstract


[Figure: Overview of the joint tracking and label propagation pipeline.]

Label propagation is a challenging task in computer vision with many applications. One approach is to learn representations of visual correspondence. In this paper, we study recent works on label propagation based on correspondence, carefully evaluate the effect of key implementation choices, and improve upon several details. Our pipeline, assembled from these best practices, outperforms the previous state of the art in terms of PCK_0.1 on the JHMDB dataset by 6.5%. We also propose a novel joint framework for tracking and keypoint propagation which, in contrast to the base pipeline, is applicable to tracking small objects and substantially exceeds the base pipeline's performance. Finally, for video object segmentation (VOS), we extend our pipeline to a fully unsupervised one by initializing the first frame with the self-attention layer from DINO. Our VOS pipeline runs online and can handle static objects; it outperforms unsupervised frameworks with these characteristics.
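
To give a rough idea of what correspondence-based label propagation entails, below is a minimal PyTorch sketch: labels from a reference frame are carried to a target frame through a temperature softmax over feature affinities, restricted to the top-k matches per pixel. All names, shapes, and hyperparameter values (topk, temperature) are illustrative placeholders, not the exact settings of our pipeline.

    import torch
    import torch.nn.functional as F

    def propagate_labels(feat_ref, feat_tgt, labels_ref, topk=5, temperature=0.07):
        """Carry per-pixel labels from a reference frame to a target frame.

        feat_ref, feat_tgt: (C, H, W) dense feature maps from a frozen backbone.
        labels_ref:         (K, H, W) label maps (one channel per class/keypoint).
        topk, temperature:  placeholder values, not the paper's exact settings.
        """
        C, H, W = feat_ref.shape
        # Flatten spatial dimensions and L2-normalize the features so that
        # dot products correspond to cosine similarities.
        f_ref = F.normalize(feat_ref.reshape(C, -1), dim=0)  # (C, HW)
        f_tgt = F.normalize(feat_tgt.reshape(C, -1), dim=0)  # (C, HW)

        # Affinity between every target pixel and every reference pixel.
        affinity = f_tgt.t() @ f_ref                         # (HW, HW)

        # Keep only the top-k reference matches per target pixel and turn
        # their similarities into weights with a temperature softmax.
        vals, idx = affinity.topk(topk, dim=1)               # (HW, k)
        weights = F.softmax(vals / temperature, dim=1)       # (HW, k)

        # Each target pixel's label is a weighted average of the labels
        # at its matched reference locations.
        lab = labels_ref.reshape(labels_ref.shape[0], -1)    # (K, HW)
        lab_tgt = (lab[:, idx] * weights.unsqueeze(0)).sum(dim=-1)
        return lab_tgt.reshape(-1, H, W)                     # (K, H, W)

In practice, pipelines of this kind typically restrict matches to a local spatial window and propagate from several reference frames (e.g. the first frame plus a few recent ones); choices of this sort are among the implementation details whose effect the paper evaluates.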

Additional Results


Our Method with Joint Tracking for a Single Pigeon

Here we track and propagate the pose of a real-world pigeon.



Here we track and propagate the pose of a synthetic pigeon. This scene is challenging because the tracker can get stuck on the strongly textured background.
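
The interplay of tracking and propagation in these examples can be sketched as follows: a bounding-box tracker follows the bird, and keypoints are propagated only within the tracked crop, so a small object retains enough feature resolution. The callables track_step, encode, and propagate are hypothetical stand-ins for a single-object tracker, a feature backbone, and a propagation routine such as the sketch above; this is an illustrative outline, not the paper's exact implementation.

    import torch.nn.functional as F

    def track_then_propagate(frames, box0, kp_maps0, track_step, encode, propagate):
        """Sketch of joint tracking + keypoint propagation.

        frames:    list of (3, H, W) image tensors.
        box0:      (x, y, w, h) integer box around the object in frame 0.
        kp_maps0:  (K, h, w) keypoint heatmaps inside the initial crop.
        track_step / encode / propagate: hypothetical tracker, backbone,
            and label-propagation callables (e.g. propagate_labels above).
        """
        def crop(img, box):
            x, y, w, h = box
            patch = img[:, y:y + h, x:x + w]
            # Resize every crop to a fixed size so feature maps stay aligned.
            return F.interpolate(patch[None], size=(256, 256),
                                 mode='bilinear', align_corners=False)[0]

        feat_ref, kp_maps = encode(crop(frames[0], box0)), kp_maps0
        results = [kp_maps0]
        for frame in frames[1:]:
            box = track_step(frame)              # update the object box first
            feat_tgt = encode(crop(frame, box))  # features of the new crop only
            kp_maps = propagate(feat_ref, feat_tgt, kp_maps)
            feat_ref = feat_tgt                  # here: previous crop as reference
            results.append(kp_maps)
        return results  # per-frame keypoint heatmaps in crop coordinates

Keeping only the previous crop as the reference is the simplest choice shown here; using the first frame together with several recent crops is more robust and closer to common practice.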

Cite us


@inproceedings{waldmann2022improving,
  title={Improving Unsupervised Label Propagation for Pose Tracking and Video Object Segmentation},
  author={Waldmann, Urs and Bamberger, Jannik and Johannsen, Ole and Deussen, Oliver and Goldl\"{u}cke, Bastian},
  booktitle={DAGM German Conference on Pattern Recognition},
  year={2022},
  pages={230--245}
}