TransVOD: End-to-End Video Object Detection With Spatial-Temporal Transformers



Zhou, Qianyu, Li, Xiangtai, He, Lu, Yang, Yibo, Cheng, Guangliang ORCID: 0000-0001-8686-9513, Tong, Yunhai, Ma, Lizhuang and Tao, Dacheng
(2023) TransVOD: End-to-End Video Object Detection With Spatial-Temporal Transformers. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 45 (6). pp. 7853-7869.

IEEE_TPAMI_TransVOD.pdf - Author Accepted Manuscript


Abstract

Detection Transformer (DETR) and Deformable DETR have been proposed to eliminate the need for many hand-designed components in object detection while matching the performance of previous complex hand-crafted detectors. However, their performance on Video Object Detection (VOD) has not been well explored. In this paper, we present TransVOD, the first end-to-end video object detection system based on simple yet effective spatial-temporal Transformer architectures. The first goal of this paper is to streamline the pipeline of current VOD, effectively removing the need for many hand-crafted components for feature aggregation, e.g., optical flow models and relation networks. Moreover, benefiting from the object query design in DETR, our method does not need post-processing methods such as Seq-NMS. In particular, we present a temporal Transformer to aggregate both the spatial object queries and the feature memories of each frame. Our temporal Transformer consists of two components: a Temporal Query Encoder (TQE) to fuse object queries, and a Temporal Deformable Transformer Decoder (TDTD) to obtain current-frame detection results. These designs boost the strong Deformable DETR baseline by a significant margin (3%-4% mAP) on the ImageNet VID dataset, and TransVOD yields comparable performance on the ImageNet VID benchmark. We then present two improved versions of TransVOD: TransVOD++ and TransVOD Lite. The former fuses object-level information into the object queries via dynamic convolution, while the latter models whole video clips as the output to speed up inference. We give a detailed analysis of all three models in the experiments. In particular, our proposed TransVOD++ sets a new state-of-the-art record in terms of accuracy on ImageNet VID with 90.0% mAP. Our proposed TransVOD Lite also achieves the best speed-accuracy trade-off with 83.7% mAP while running at around 30 FPS on a single V100 GPU.
Code and models are available at https://github.com/SJTU-LuHe/TransVOD.
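The temporal query fusion described in the abstract can be sketched as scaled dot-product attention over per-frame object queries. This is a minimal illustrative sketch, not the authors' implementation: the function name `temporal_query_fusion`, the residual update, and all shapes are assumptions standing in for the paper's Temporal Query Encoder.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def temporal_query_fusion(current_queries, reference_queries):
    """Fuse object queries from reference frames into the current
    frame's queries via scaled dot-product attention.

    Illustrative stand-in for a Temporal Query Encoder: in practice
    this would be a learned multi-head attention layer.
    """
    d = current_queries.shape[-1]
    # Attention scores: similarity of each current query to each
    # reference-frame query, scaled by sqrt(dim).
    scores = current_queries @ reference_queries.T / np.sqrt(d)
    weights = softmax(scores, axis=-1)
    # Residual update preserves each current query's identity while
    # mixing in temporally aggregated context.
    return current_queries + weights @ reference_queries

rng = np.random.default_rng(0)
cur = rng.standard_normal((100, 256))   # 100 object queries, dim 256
ref = rng.standard_normal((300, 256))   # queries pooled from 3 reference frames
fused = temporal_query_fusion(cur, ref)
print(fused.shape)  # (100, 256)
```

The fused queries would then be fed to a decoder (the paper's TDTD) to produce current-frame detections; the residual form means a query falls back to its per-frame content when the reference frames carry no useful context.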

Item Type: Article
Uncontrolled Keywords: Transformers, Object detection, Pipelines, Detectors, Streaming media, Fuses, Task analysis, Video object detection, vision transformers, scene understanding, video understanding
Divisions: Faculty of Science and Engineering > School of Electrical Engineering, Electronics and Computer Science
Depositing User: Symplectic Admin
Date Deposited: 16 May 2023 07:42
Last Modified: 09 Aug 2023 13:27
DOI: 10.1109/TPAMI.2022.3223955
Related URLs:
URI: https://livrepository.liverpool.ac.uk/id/eprint/3168986