MORAL: Aligning AI with Human Norms through Multi-Objective Reinforced Active Learning



Peschl, M., Zgonnikov, A., Oliehoek, F. A. (ORCID: 0000-0003-4372-5055) and Siebert, L. C.
(2022) MORAL: Aligning AI with Human Norms through Multi-Objective Reinforced Active Learning.


Abstract

Inferring reward functions from demonstrations and pairwise preferences are promising approaches for aligning Reinforcement Learning (RL) agents with human intentions. However, state-of-the-art methods typically focus on learning a single reward model, thus rendering it difficult to trade off different reward functions from multiple experts. We propose Multi-Objective Reinforced Active Learning (MORAL), a novel method for combining diverse demonstrations of social norms into a Pareto-optimal policy. Through maintaining a distribution over scalarization weights, our approach is able to interactively tune a deep RL agent towards a variety of preferences, while eliminating the need for computing multiple policies. We empirically demonstrate the effectiveness of MORAL in two scenarios, which model a delivery and an emergency task that require an agent to act in the presence of normative conflicts. Overall, we consider our research a step towards multi-objective RL with learned rewards, bridging the gap between current reward learning and machine ethics literature.
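The record contains no implementation details beyond the abstract, but the core mechanism it describes, combining several learned reward functions via a distribution over scalarization weights that is refined by pairwise preferences, can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's actual code: the particle representation, the Bradley-Terry-style preference likelihood, and all names and shapes below are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setting: two learned reward functions (one per expert),
# summarised here only by per-trajectory return vectors of shape (num_objectives,).
trajectory_returns = np.array([
    [5.0, 1.0],   # trajectory 0: good for objective 1, poor for objective 2
    [2.0, 4.0],   # trajectory 1: the opposite trade-off
])

# Distribution over scalarization weights, represented by Dirichlet particles
# on the probability simplex (a simplified stand-in for a posterior over weights).
particles = rng.dirichlet(alpha=np.ones(2), size=500)

def scalarized_return(returns, w):
    """Linear scalarization: weighted sum of per-objective returns."""
    return returns @ w

def preference_update(particles, preferred, rejected):
    """Re-weight and resample particles after a pairwise preference query,
    using a Bradley-Terry-style likelihood on the scalarized return margin."""
    margin = particles @ (preferred - rejected)
    likelihood = 1.0 / (1.0 + np.exp(-margin))
    probs = likelihood / likelihood.sum()
    idx = rng.choice(len(particles), size=len(particles), p=probs)
    return particles[idx]

# Example query: the human prefers trajectory 1 over trajectory 0.
particles = preference_update(particles, trajectory_returns[1], trajectory_returns[0])
mean_w = particles.mean(axis=0)  # weights used to scalarize rewards for the RL agent
print("posterior mean weights:", mean_w)
print("scalarized returns:", trajectory_returns @ mean_w)
```

In this sketch, the updated mean weights would be used to scalarize the learned rewards for the next round of policy optimization, so a single agent can be steered towards different trade-offs without training a separate policy per preference.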

Item Type: Conference or Workshop Item (Unspecified)
Divisions: Faculty of Science and Engineering > School of Electrical Engineering, Electronics and Computer Science
Depositing User: Symplectic Admin
Date Deposited: 21 Apr 2023 14:24
Last Modified: 21 Apr 2023 14:25
Open Access URL: https://doi.org/10.48550/arXiv.2201.00012
URI: https://livrepository.liverpool.ac.uk/id/eprint/3169863