Transfer Reward Learning for Policy Gradient-Based Text Generation



O'Neill, James and Bollegala, Danushka
(2019) Transfer Reward Learning for Policy Gradient-Based Text Generation. CoRR, abs/1909.03622.

1909.03622v1.pdf - Submitted version


Abstract

Task-specific scores are often used to optimize and evaluate the performance of conditional text generation systems. However, such scores are non-differentiable and cannot be used in the standard supervised learning paradigm. Hence, policy gradient methods are used, since the gradient can be computed without requiring a differentiable objective. However, we argue that the current n-gram overlap measures used as rewards can be improved by using model-based rewards transferred from tasks that directly compare the similarity of sentence pairs. These reward models either output a sentence-level score of syntactic and semantic similarity between the entire predicted and target sentences, used as the expected return, or score intermediate phrases as segmented accumulative rewards. We demonstrate that using a Transferable Reward Learner leads to improved results on semantic evaluation measures in policy-gradient models for image captioning tasks. Our InferSent actor-critic model improves over a BLEU-trained actor-critic model on MSCOCO when evaluated on a Word Mover's Distance similarity measure by 6.97 points, also improving on a Sliding Window Cosine Similarity measure by 10.48 points. Similar performance improvements are also obtained on the smaller Flickr-30k dataset, demonstrating the general applicability of the proposed transfer learning method.
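To make the idea in the abstract concrete, the following is a minimal sketch (not the authors' implementation) of policy-gradient caption training with a model-based sentence-similarity reward, written in PyTorch. The ToyCaptioner policy, the frozen mean-of-word-embeddings encoder standing in for InferSent, the running-mean baseline standing in for a learned critic, and all token ids are illustrative assumptions introduced here only to show the REINFORCE-style update with a sentence-level similarity reward.

```python
# Minimal sketch: REINFORCE update with a sentence-similarity reward.
# All components are stand-ins, not the paper's InferSent actor-critic model.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB_SIZE, EMBED_DIM, MAX_LEN = 50, 16, 8

class ToyCaptioner(nn.Module):
    """Tiny sequence policy: a GRU cell emitting a token distribution per step."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        self.rnn = nn.GRUCell(EMBED_DIM, EMBED_DIM)
        self.out = nn.Linear(EMBED_DIM, VOCAB_SIZE)

    def sample(self):
        """Sample a caption token by token, keeping per-step log-probabilities."""
        h = torch.zeros(1, EMBED_DIM)
        tok = torch.zeros(1, dtype=torch.long)      # hypothetical <BOS> id 0
        tokens, log_probs = [], []
        for _ in range(MAX_LEN):
            h = self.rnn(self.embed(tok), h)
            dist = torch.distributions.Categorical(logits=self.out(h))
            tok = dist.sample()
            tokens.append(tok.item())
            log_probs.append(dist.log_prob(tok))
        return tokens, torch.stack(log_probs).sum()

# Frozen stand-in "transferred" sentence encoder (mean of word embeddings);
# the paper would use a pretrained InferSent encoder here instead.
reward_embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
for p in reward_embed.parameters():
    p.requires_grad_(False)

def similarity_reward(pred_tokens, ref_tokens):
    """Sentence-level reward: cosine similarity of predicted vs. reference encodings."""
    with torch.no_grad():
        pred = reward_embed(torch.tensor(pred_tokens)).mean(dim=0)
        ref = reward_embed(torch.tensor(ref_tokens)).mean(dim=0)
        return F.cosine_similarity(pred, ref, dim=0).item()

policy = ToyCaptioner()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
reference = [3, 7, 7, 1, 4, 2, 9, 5]                # hypothetical reference caption ids
baseline = 0.0                                      # running-mean baseline (a critic would learn this)

for step in range(200):
    tokens, total_log_prob = policy.sample()
    reward = similarity_reward(tokens, reference)
    advantage = reward - baseline
    baseline = 0.9 * baseline + 0.1 * reward
    loss = -advantage * total_log_prob              # REINFORCE: raise log-prob of high-reward captions
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The key point the sketch illustrates is that the reward model is only ever called on completed (or partial) token sequences, so it never needs to be differentiable with respect to the policy; the policy gradient is estimated purely from the sampled log-probabilities weighted by the similarity-based advantage.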

Item Type: Article
Uncontrolled Keywords: cs.LG, cs.CL, cs.CV, stat.ML
Depositing User: Symplectic Admin
Date Deposited: 11 Dec 2019 13:32
Last Modified: 19 Jan 2023 00:13
Related URLs:
URI: https://livrepository.liverpool.ac.uk/id/eprint/3065914