Global Motion Compensation Using Motion Sensor to Enhance Video Coding Efficiency

Cheng, F
(2018) Global Motion Compensation Using Motion Sensor to Enhance Video Coding Efficiency. PhD thesis, University of Liverpool.



Throughout the development of video coding technologies, the main improvements have been increasing the number of prediction directions and adding more block sizes and coding modes; there have been no substantial changes to the underlying approach. Conventional video coding algorithms work well for videos whose motion is parallel to the image plane, but their efficiency drops for other kinds of motion, such as dolly motion. However, an increasing number of videos are captured by moving cameras, as video devices become more diverse and lighter. Therefore, more efficient video coding tools are needed to compress video for these new applications. In this thesis, a novel video coding tool, Global Motion Estimation using Motion Sensor (GMEMS), is proposed, and a series of related approaches is investigated and evaluated. The aim of this tool is to use advanced motion sensor technology and computer graphics techniques to improve and extend traditional motion estimation and compensation, and thereby enhance video coding efficiency. At the same time, the computational complexity of motion estimation is reduced, as some inter-frame differences have already been compensated. Firstly, a Motion information based Coding method for Texture sequences (MCT) is proposed and evaluated using the H.264/AVC standard. In this method, a motion sensor of the kind commonly used in smartphones is employed to capture the panning (rotational) motion. The proposed method compensates for panning motion by projecting frames according to the camera motion and by a new reference-frame allocation method. The experimental results demonstrate an average coding gain of around 0.3 dB. To extend this method to other types of motion for texture videos, the distance of scene objects from the camera, i.e. the depth map, has to be used, according to the image projection principle.
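As a minimal sketch of the frame-projection idea (not the thesis's own implementation): for a pure camera rotation R reported by a gyroscope, pixels in the previous frame map to the current frame through the homography H = K·R·K⁻¹, which is independent of scene depth — this is why panning can be compensated without a depth map. The intrinsics K and the 5° pan below are hypothetical values for illustration.

```python
import numpy as np

def rotation_homography(K, R):
    """Homography induced by a pure camera rotation R.

    For rotation-only camera motion, a pixel x in the reference frame
    maps to x' ~ K @ R @ inv(K) @ x, independent of scene depth.
    """
    return K @ R @ np.linalg.inv(K)

# Hypothetical intrinsics: focal length 800 px, principal point (320, 240).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# A 5-degree pan (rotation about the vertical axis), as a gyroscope might report.
theta = np.deg2rad(5.0)
R = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
              [ 0.0,           1.0, 0.0          ],
              [-np.sin(theta), 0.0, np.cos(theta)]])

H = rotation_homography(K, R)

# Warp the image centre: it shifts horizontally by f * tan(theta) pixels.
x = np.array([320.0, 240.0, 1.0])
xp = H @ x
u, v = xp[0] / xp[2], xp[1] / xp[2]   # u ~ 320 + 800*tan(5 deg), v stays 240
```

Applying H to every pixel of the reference frame yields a motion-compensated prediction of the current frame before block-based residual coding.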
Generally, a depth map contains less detail than the texture, especially at low resolutions. Therefore, a Motion information based Coding scheme using Frame-Skipping for Depth map sequences (MCFSD) is proposed. The experimental results show that this scheme is effective for low-resolution depth map sequences, improving performance by around 2.0 dB. The idea of motion-information-assisted coding is finally applied to both texture and depth map sequences for different types of motion, and a Motion information based Texture plus Depth map Coding (MTDC) scheme is proposed for 3D videos. This scheme is applied to H.264/AVC and to the more recent H.265/HEVC standard, and tested at VGA and HD resolutions. The results show that the proposed scheme improves performance under all conditions. At VGA resolution under H.264/AVC, the average gain is about 2.0 dB. As H.265/HEVC already improves encoding efficiency, the average gain at HD resolution under H.265/HEVC drops to around 0.4 dB. Another contribution of this thesis is the design of a combined software and hardware experimental data acquisition method. The proposed motion-information-based video coding schemes require video sequences with accurate camera motion information, but suitable datasets are difficult to find. Therefore, an embedded-hardware-based data acquisition platform is designed to capture real-scene video sequences, while a computer graphics (CG) based method is used to produce HD video sequences with accurate depth maps.
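The role of the depth map in handling non-panning motion can be sketched as follows: each pixel is back-projected to a 3-D point using its depth, moved by the camera motion (R, t), and re-projected. Unlike the rotation-only case, the displacement now depends on depth, so near objects shift more than far ones (motion parallax). The intrinsics, motion, and depths below are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def reproject_pixel(K, R, t, u, v, depth):
    """Reproject pixel (u, v) into the next view using its depth.

    Back-project to a 3-D point with the depth map, apply the camera
    motion (R, t), then project back through the intrinsics K.
    """
    X = depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))  # 3-D point in old frame
    Xp = R @ X + t                                          # point in new camera frame
    xp = K @ Xp
    return xp[0] / xp[2], xp[1] / xp[2]

# Hypothetical intrinsics and a pure dolly motion (no rotation).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, -0.5])   # points move 0.5 m closer: camera dollies in

# The same pixel at two depths shifts by different amounts (parallax).
near = reproject_pixel(K, R, t, 400.0, 240.0, depth=2.0)
far  = reproject_pixel(K, R, t, 400.0, 240.0, depth=20.0)
```

This is why a single global homography cannot compensate dolly motion, and why per-pixel depth (the depth map) is needed for the texture-plus-depth schemes above.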

Item Type: Thesis (PhD)
Divisions: Faculty of Science and Engineering > School of Electrical Engineering, Electronics and Computer Science
Depositing User: Symplectic Admin
Date Deposited: 15 Aug 2018 08:33
Last Modified: 14 Apr 2022 15:21
DOI: 10.17638/03022586
  • Tillo, Tammam