Video K-Net: A Simple, Strong, and Unified Baseline for Video Segmentation



Li, Xiangtai, Zhang, Wenwei, Pang, Jiangmiao, Chen, Kai, Cheng, Guangliang ORCID: 0000-0001-8686-9513, Tong, Yunhai and Loy, Chen Change
(2022) Video K-Net: A Simple, Strong, and Unified Baseline for Video Segmentation. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022-6-18 - 2022-6-24.


Abstract

This paper presents Video K-Net, a simple, strong, and unified framework for fully end-to-end video panoptic segmentation. The method is built upon K-Net, a method that unifies image segmentation via a group of learnable kernels. We observe that these learnable kernels from K-Net, which encode object appearances and contexts, can naturally associate identical instances across video frames. Motivated by this observation, Video K-Net learns to simultaneously segment and track 'things' and 'stuff' in a video with simple kernel-based appearance modeling and cross-temporal kernel interaction. Despite its simplicity, it achieves state-of-the-art video panoptic segmentation results on Cityscapes-VPS and KITTI-STEP without bells and whistles. In particular, on KITTI-STEP, this simple method achieves a relative improvement of almost 12% over previous methods. We also validate its generalization on video semantic segmentation, where we boost various baselines by 2% on the VSPW dataset. Moreover, we extend K-Net into a clip-level video framework for video instance segmentation, obtaining 40.5% mAP with a ResNet50 backbone and 51.5% mAP with a Swin-base backbone on the YouTube-VIS 2019 validation set. We hope this simple yet effective method can serve as a new flexible baseline in video segmentation. Both code and models are publicly released.
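To illustrate the kernel-association idea the abstract describes, the following is a minimal sketch (not the authors' released Video K-Net code; the function and variable names are hypothetical) of how per-instance kernel embeddings from two consecutive frames could be linked by cosine similarity and bipartite matching:

import torch
import torch.nn.functional as F
from scipy.optimize import linear_sum_assignment

def associate_kernels(kernels_t: torch.Tensor, kernels_t1: torch.Tensor):
    """Match N learned instance kernels at frame t to M kernels at frame t+1.

    kernels_t:  (N, C) kernel embeddings from frame t
    kernels_t1: (M, C) kernel embeddings from frame t+1
    Returns (row, col) index arrays pairing instances across the two frames.
    """
    # Cosine similarity between every pair of kernels (appearance matching).
    sim = F.normalize(kernels_t, dim=1) @ F.normalize(kernels_t1, dim=1).T
    # Hungarian matching on negative similarity, i.e. maximise total similarity.
    row, col = linear_sum_assignment(-sim.detach().cpu().numpy())
    return row, col

# Usage: 10 kernels of dimension 256 per frame (random placeholders here).
kt = torch.randn(10, 256)
kt1 = torch.randn(10, 256)
pairs = associate_kernels(kt, kt1)

This sketch only captures the appearance-matching intuition; the paper additionally learns cross-temporal kernel interaction end-to-end rather than relying on a fixed post-hoc matcher.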

Item Type: Conference or Workshop Item (Unspecified)
Divisions: Faculty of Science and Engineering > School of Electrical Engineering, Electronics and Computer Science
Depositing User: Symplectic Admin
Date Deposited: 14 Mar 2023 09:32
Last Modified: 24 Apr 2024 11:30
DOI: 10.1109/cvpr52688.2022.01828
Open Access URL: https://openaccess.thecvf.com/content/CVPR2022/pap...
Related URLs:
URI: https://livrepository.liverpool.ac.uk/id/eprint/3168997