Coarse-to-Fine Reasoning for Visual Question Answering



Nguyen, Binh X, Do, Tuong ORCID: 0000-0002-3290-3787, Tran, Huy, Tjiputra, Erman, Tran, Quang D and Nguyen, Anh ORCID: 0000-0002-1449-211X
(2022) Coarse-to-Fine Reasoning for Visual Question Answering. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2022-6-19 - 2022-6-20.

PDF: 2110.02526 (1).pdf - Author Accepted Manuscript


Abstract

Bridging the semantic gap between image and question is an important step toward improving the accuracy of the Visual Question Answering (VQA) task. However, most existing VQA methods focus on attention mechanisms or visual relations to reason about the answer, while features at different semantic levels are not fully utilized. In this paper, we present a new reasoning framework to fill the gap between visual features and semantic clues in the VQA task. Our method first extracts features and predicates from the image and question. We then propose a new reasoning framework to jointly and effectively learn these features and predicates in a coarse-to-fine manner. Extensive experimental results on three large-scale VQA datasets show that our proposed approach achieves superior accuracy compared with other state-of-the-art methods. Furthermore, our reasoning framework also provides an explainable way to understand the decision of the deep neural network when predicting the answer. Our source code can be found at: https://github.com/aioz-ai/CFR_VQA
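The abstract describes a two-stage pipeline: question-guided attention over visual features (coarse), followed by predicate-guided refinement (fine), before answer prediction. The following is only an illustrative NumPy sketch of that general coarse-to-fine idea; the function name `coarse_to_fine` and the specific fusion operations are assumptions for illustration, not the authors' implementation (see the linked repository for that).

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax for attention weights."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def coarse_to_fine(image_regions, question_vec, predicate_vecs):
    """Hypothetical sketch of coarse-to-fine reasoning.

    image_regions:  (R, D) region features from the image
    question_vec:   (D,)   pooled question embedding
    predicate_vecs: (P, D) predicate (relation) embeddings
    Returns a (D,) joint representation for an answer classifier.
    """
    # Coarse stage: question-guided attention over image regions.
    coarse_weights = softmax(image_regions @ question_vec)   # (R,)
    visual_summary = coarse_weights @ image_regions          # (D,)
    # Fine stage: predicate-guided refinement of the visual summary.
    fine_weights = softmax(predicate_vecs @ visual_summary)  # (P,)
    refined = fine_weights @ predicate_vecs                  # (D,)
    # Joint fusion (elementwise product plus residual question signal).
    return visual_summary * refined + question_vec

rng = np.random.default_rng(0)
D, R, P = 16, 5, 3
fused = coarse_to_fine(rng.normal(size=(R, D)),
                       rng.normal(size=D),
                       rng.normal(size=(P, D)))
print(fused.shape)  # (16,)
```

In the actual model the attention and fusion steps would be learned neural layers trained end-to-end; the sketch only conveys the data flow from coarse (whole-question vs. regions) to fine (predicate-level) reasoning.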

Item Type: Conference or Workshop Item (Unspecified)
Uncontrolled Keywords: Behavioral and Social Science, Basic Behavioral and Social Science
Divisions: Faculty of Science and Engineering > School of Electrical Engineering, Electronics and Computer Science
Depositing User: Symplectic Admin
Date Deposited: 08 Apr 2024 10:23
Last Modified: 08 Apr 2024 10:23
DOI: 10.1109/cvprw56347.2022.00502
Related URLs:
URI: https://livrepository.liverpool.ac.uk/id/eprint/3180137