Improved two-stream model for human action recognition



Zhao, Y, Man, KL, Smith, J ORCID: 0000-0002-0212-2365, Siddique, K and Guan, SU
(2020) Improved two-stream model for human action recognition. EURASIP Journal on Image and Video Processing, 2020.


Abstract

This paper addresses the recognition of human actions in videos. Human action recognition can be seen as the automatic labeling of a video according to the actions occurring in it, and it has become one of the most challenging and attractive problems in the pattern recognition and video classification fields. The problem is difficult to solve with traditional video processing methods because of challenges such as background noise, the varying sizes of subjects across videos, and the speed of actions. Building on progress in deep learning, several directions have been developed for recognizing a human action in a video, such as the long short-term memory (LSTM)-based model, the two-stream convolutional neural network (CNN) model, and the convolutional 3D model. In this paper, we focus on the two-stream structure. The traditional two-stream CNN addresses the problem that CNNs alone do not perform satisfactorily on temporal features: by training a temporal stream that takes optical flow as input, a CNN gains the ability to extract temporal features. However, optical flow contains only limited temporal information, because it records only the movements of pixels along the x-axis and the y-axis. Therefore, we design and implement a new two-stream model that uses an LSTM-based model in its spatial stream to extract both spatial and temporal features from RGB frames, in contrast to traditional approaches, which typically use the spatial stream to extract only spatial features. In addition, we implement a DenseNet in the temporal stream to improve recognition accuracy. Quantitative evaluation and experiments are conducted on UCF-101, a well-developed public video dataset. For the temporal stream, we use the optical flow of UCF-101; the optical-flow images are provided by the Graz University of Technology.
The experimental results show that the proposed method outperforms the traditional two-stream CNN method by at least 3% in accuracy. The proposed model also achieves higher recognition accuracies for both the spatial and temporal streams individually, and compared with state-of-the-art methods it still delivers the best recognition performance.
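The abstract describes two streams that each produce per-class action scores. In two-stream models these are typically combined by late fusion of the per-class probabilities before taking the predicted class. The sketch below illustrates that fusion step only; it is not the authors' code — the stream outputs are random stand-ins, and the equal fusion weights are an assumption.

```python
import numpy as np

NUM_CLASSES = 101  # UCF-101 has 101 action classes
rng = np.random.default_rng(0)

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Stand-ins for one video's per-class scores from the spatial
# (LSTM-based) stream and the temporal (DenseNet on optical flow) stream.
spatial_scores = softmax(rng.normal(size=NUM_CLASSES))
temporal_scores = softmax(rng.normal(size=NUM_CLASSES))

# Weighted late fusion: combine the per-class probabilities and
# predict the class with the highest fused score.
w_spatial, w_temporal = 0.5, 0.5  # assumed equal weights
fused = w_spatial * spatial_scores + w_temporal * temporal_scores
predicted_class = int(np.argmax(fused))
```

Because each stream's scores sum to 1 and the weights sum to 1, the fused scores remain a valid probability distribution; the weights can be tuned on validation data to favor the stronger stream.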

Item Type: Article
Uncontrolled Keywords: Action recognition, Two-stream CNN model, Spatial stream, LSTM-based model
Depositing User: Symplectic Admin
Date Deposited: 26 Jun 2020 08:05
Last Modified: 12 May 2022 05:12
DOI: 10.1186/s13640-020-00501-x
URI: https://livrepository.liverpool.ac.uk/id/eprint/3091689