Reliability Assessment and Safety Arguments for Machine Learning Components in System Assurance

Dong, Yi ORCID: 0000-0003-3047-7777, Huang, Wei, Bharti, Vibhav, Cox, Victoria, Banks, Alec, Wang, Sen, Zhao, Xingyu ORCID: 0000-0002-3474-349X, Schewe, Sven ORCID: 0000-0002-9093-9518 and Huang, Xiaowei ORCID: 0000-0001-6267-0366
(2023) Reliability Assessment and Safety Arguments for Machine Learning Components in System Assurance. ACM Transactions on Embedded Computing Systems, 22 (3). pp. 1-48.

Available files:
- 2112.00646v1.pdf (Submitted version, 8MB)
- TECS_Solitude-8.pdf (Author Accepted Manuscript, 9MB)

Abstract

The increasing use of Machine Learning (ML) components embedded in autonomous systems, so-called Learning-Enabled Systems (LESs), has resulted in the pressing need to assure their functional safety. As for traditional functional safety, the emerging consensus within both industry and academia is to use assurance cases for this purpose. Typically, assurance cases support claims of reliability in support of safety, and can be viewed as a structured way of organising arguments and evidence generated from safety analysis and reliability modelling activities. While such assurance activities are traditionally guided by consensus-based standards developed from vast engineering experience, LESs pose new challenges in safety-critical applications due to the characteristics and design of ML models. In this article, we first present an overall assurance framework for LESs with an emphasis on quantitative aspects, e.g., breaking down system-level safety targets into component-level requirements and supporting claims stated in reliability metrics. We then introduce a novel model-agnostic Reliability Assessment Model (RAM) for ML classifiers that utilises the operational profile and robustness verification evidence. We discuss the model assumptions and the inherent challenges of assessing ML reliability uncovered by our RAM, and propose solutions for practical use. Probabilistic safety argument templates at the lower ML component level are also developed based on the RAM. Finally, to evaluate and demonstrate our methods, we not only conduct experiments on synthetic/benchmark datasets but also demonstrate the scope of our methods with case studies on simulated Autonomous Underwater Vehicles and physical Unmanned Ground Vehicles.
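
The core quantitative idea of the RAM can be illustrated with a short sketch. On a minimal reading of the abstract, the operational input space is partitioned into cells, each cell is weighted by its operational-profile probability, and those weights are combined with cell-level misclassification estimates derived from robustness verification evidence. The Python sketch below shows only this aggregation step; the names (estimate_pmi, op_weights, cell_unastuteness) are illustrative assumptions, not identifiers taken from the paper.

    import numpy as np

    def estimate_pmi(op_weights, cell_unastuteness):
        """Sketch of an operational-profile-weighted reliability estimate:
        the probability of misclassification per input (pmi) is taken as
        the OP-weighted sum of per-cell failure estimates (e.g., derived
        from robustness verification evidence)."""
        op_weights = np.asarray(op_weights, dtype=float)
        cell_unastuteness = np.asarray(cell_unastuteness, dtype=float)
        # Operational-profile weights form a probability distribution.
        if not np.isclose(op_weights.sum(), 1.0):
            raise ValueError("operational-profile weights must sum to 1")
        return float(np.dot(op_weights, cell_unastuteness))

    # Hypothetical example: three cells covering the input space.
    pmi = estimate_pmi(op_weights=[0.7, 0.2, 0.1],
                       cell_unastuteness=[1e-4, 5e-3, 2e-2])
    print(f"estimated pmi: {pmi:.2e}")  # ~3.07e-03

Weighting failure estimates by the operational profile makes any resulting reliability claim relative to a stated operating context, which is why the abstract ties the RAM to both the operational profile and robustness verification evidence.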

Item Type: Article
Additional Information: Preprint accepted by ACM Transactions on Embedded Computing Systems
Uncontrolled Keywords: Software reliability, safety arguments, assurance cases, safe AI, robustness verification, safety-critical systems, statistical testing, operational profile, probabilistic claims, Learning-Enabled Systems, Robotics and Autonomous Systems, safety regulation
Divisions: Faculty of Science and Engineering > School of Electrical Engineering, Electronics and Computer Science
Depositing User: Symplectic Admin
Date Deposited: 06 Dec 2021 11:11
Last Modified: 05 Sep 2023 09:04
DOI: 10.1145/3570918
URI: https://livrepository.liverpool.ac.uk/id/eprint/3144755