Multibranch Attention Networks for Action Recognition in Still Images



Yan, Shiyang, Smith, Jeremy S ORCID: 0000-0002-0212-2365, Lu, Wenjin and Zhang, Bailing
(2018) Multibranch Attention Networks for Action Recognition in Still Images. IEEE Transactions on Cognitive and Developmental Systems, 10 (4). pp. 1116-1125.

bare_jrnl.pdf - Author Accepted Manuscript (Download, 911kB)

Abstract

Contextual information plays an important role in visual recognition. This is especially true for action recognition, as contextual information, such as the objects a person interacts with and the scene in which the action is performed, is inseparable from a predefined action class. Meanwhile, the human attention mechanism shows remarkable capability in discovering contextual information compared with existing computer vision systems. Inspired by this, we apply a soft attention mechanism by adding two extra branches to the original VGG16 model: one applies scene-level attention whilst the other applies region-level attention, capturing global and local contextual information, respectively. To ensure the multibranch model converges well and is fully optimized, a two-step training method with an alternating optimization strategy is proposed. We call this model the multibranch attention network. To validate the effectiveness of the proposed approach under two experimental settings, with and without the bounding box of the target person, three publicly available human action datasets were used for evaluation. The method achieved state-of-the-art results on the PASCAL VOC Action dataset and the Stanford 40 dataset under both experimental settings, and performed well on the Humans Interacting with Common Objects (HICO) dataset.
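The abstract describes the architecture only at a high level. As a rough illustration, the following is a minimal sketch in PyTorch of the general idea: a shared VGG16 trunk feeding the original classification branch plus two added soft-attention branches, with the branch outputs fused by summing class logits. The attention formulation, the logit-sum fusion, and all layer sizes are illustrative assumptions, not the authors' exact design; in the paper the scene-level and region-level branches capture global and local context differently, whereas here both are simple spatial soft-attention modules for brevity.

```python
# A minimal sketch of a multibranch attention network, assuming PyTorch and
# torchvision. Details are illustrative, not the paper's exact design.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models


class SoftSpatialAttention(nn.Module):
    """Soft attention over the spatial locations of a conv feature map."""

    def __init__(self, channels: int):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)  # one score per location

    def forward(self, x):                           # x: (B, C, H, W)
        b, c, h, w = x.shape
        a = self.score(x).flatten(2)                # (B, 1, H*W) raw scores
        a = F.softmax(a, dim=-1).view(b, 1, h, w)   # normalised attention map
        return (x * a).sum(dim=(2, 3))              # attention-pooled features (B, C)


class MultibranchAttentionNet(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        vgg = models.vgg16(weights=None)            # load pretrained weights in practice
        self.trunk = vgg.features                   # shared convolutional trunk
        # Original VGG16-style branch (global average pooling for simplicity).
        self.base_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(512, num_classes))
        # Scene-level and region-level attention branches (identical modules
        # here; the paper distinguishes how each extracts context).
        self.scene_att = SoftSpatialAttention(512)
        self.scene_head = nn.Linear(512, num_classes)
        self.region_att = SoftSpatialAttention(512)
        self.region_head = nn.Linear(512, num_classes)

    def forward(self, img):                         # img: (B, 3, 224, 224)
        f = self.trunk(img)                         # (B, 512, 7, 7)
        return (self.base_head(f)
                + self.scene_head(self.scene_att(f))
                + self.region_head(self.region_att(f)))


net = MultibranchAttentionNet(num_classes=40)       # e.g., Stanford 40 actions
logits = net(torch.randn(2, 3, 224, 224))           # -> (2, 40) class logits
```

Under the paper's two-step training with alternating optimization, one would, roughly, first train the added branches and then optimize the branches and the shared trunk in alternating phases; the exact schedule is given in the paper and not reproduced here.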

Item Type: Article
Uncontrolled Keywords: Action recognition, contextual information, multibranch CNN, soft attention mechanism
Depositing User: Symplectic Admin
Date Deposited: 14 Dec 2017 07:37
Last Modified: 15 Mar 2024 00:56
DOI: 10.1109/TCDS.2017.2783944
Related URLs:
URI: https://livrepository.liverpool.ac.uk/id/eprint/3014085