Formal verification of ethical choices in autonomous systems



Dennis, Louise ORCID: 0000-0003-1426-1896, Fisher, Michael, Slavkovik, Marija and Webster, Matt ORCID: 0000-0002-8817-6881
(2016) Formal verification of ethical choices in autonomous systems. Robotics and Autonomous Systems, 77, pp. 1-14.


Abstract

Autonomous systems such as unmanned vehicles are beginning to operate within society. All participants in society are required to follow specific regulations and laws, and an autonomous system cannot be an exception. Inevitably, an autonomous system will find itself in a situation in which it must not only choose whether or not to obey a rule, but also make a complex ethical decision. However, there is no obvious way to implement the human understanding of ethical behaviour in computers. Even if we enable autonomous systems to distinguish between more and less ethical alternatives, how can we be sure that they would choose rightly? We consider autonomous systems with a hybrid architecture in which the highest level of reasoning is executed by a rational (BDI) agent. For such systems, formal verification has been used successfully to prove that specific rules of behaviour are observed when decisions are made. We propose a theoretical framework for ethical plan selection that can be formally verified. We implement a rational agent that incorporates a given ethical policy in its plan selection and show that we can formally verify that the agent chooses to execute, to the best of its beliefs, the most ethical available plan.
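To make the idea of "ethical plan selection" concrete, the following is a minimal sketch of one plausible reading: an ethical policy is a strict ordering of concerns, each candidate plan is annotated with the concerns the agent believes it would violate, and the agent picks the plan whose worst believed violation is least severe. The policy entries, plan names, and violation labels here are illustrative assumptions, not the paper's actual implementation (which is written in a BDI agent programming language and verified with a program model checker).

```python
# Hypothetical sketch of ethical plan selection.
# An ethical policy is a strict ordering of concerns, worst first.
POLICY = ["harm_people", "damage_property", "break_rules"]

def severity(violation):
    """Lower index = more severe under the policy; non-policy items rank last."""
    return POLICY.index(violation) if violation in POLICY else len(POLICY)

def select_plan(plans):
    """Choose the plan whose worst believed violation is least severe.

    `plans` maps a plan name to the set of ethical concerns the agent
    believes executing that plan would violate.
    """
    def worst(violations):
        # Smaller severity index = more severe; a plan believed to
        # violate nothing scores len(POLICY), the best possible value.
        return min((severity(v) for v in violations), default=len(POLICY))
    # Pick the plan maximising the severity index of its worst violation,
    # i.e. the plan whose most serious violation is the least serious overall.
    return max(plans, key=lambda p: worst(plans[p]))

# Illustrative scenario for an unmanned vehicle:
plans = {
    "swerve_into_field": {"damage_property"},
    "brake_hard": {"break_rules"},
    "continue": {"harm_people", "break_rules"},
}
print(select_plan(plans))  # prints "brake_hard"
```

Under this sketch, formal verification would amount to checking a property of the form "for every reachable state, the selected plan is at least as ethical (under the policy ordering) as every other plan the agent believes applicable", quantified over the agent's belief states rather than over ground truth.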

Item Type: Article
Uncontrolled Keywords: Autonomous systems, Ethics, BDI programs, Formal verification
Depositing User: Symplectic Admin
Date Deposited: 26 May 2016 13:53
Last Modified: 19 Jan 2023 07:36
DOI: 10.1016/j.robot.2015.11.012
Open Access URL: http://www.sciencedirect.com/science/article/pii/S...
Related URLs:
URI: https://livrepository.liverpool.ac.uk/id/eprint/3001376