The Unnecessity of Assuming Statistically Independent Tests in Bayesian Software Reliability Assessments



Salako, Kizito and Zhao, Xingyu ORCID: 0000-0002-3474-349X
(2023) The Unnecessity of Assuming Statistically Independent Tests in Bayesian Software Reliability Assessments. IEEE Transactions on Software Engineering, 49 (4). pp. 1-9.

TSE2022-1.pdf - Author Accepted Manuscript


Abstract

When assessing a software-based system, the results of Bayesian statistical inference on operational testing data can provide strong support for software reliability claims. For inference, this data (i.e. software successes and failures) is often assumed to arise in an independent, identically distributed (i.i.d.) manner. In this paper we show how conservative Bayesian approaches make this assumption unnecessary, by incorporating one’s doubts about the assumption into the assessment. We derive conservative confidence bounds on a system’s probability of failure on demand (pfd), when operational testing reveals no failures. The generality and utility of the confidence bounds are illustrated in the assessment of a nuclear power-plant safety-protection system, under varying levels of skepticism about the i.i.d. assumption. The analysis suggests that the i.i.d. assumption can make Bayesian reliability assessments extremely optimistic – such assessments do not explicitly account for how software can be very likely to exhibit no failures during extensive operational testing despite the software’s pfd being undesirably large.
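The abstract's closing claim can be made concrete with a small worked example. The sketch below illustrates the standard i.i.d. Bayesian setup that the paper critiques, not the paper's conservative bounds: under i.i.d. Bernoulli demands, it computes (a) the probability of seeing zero failures in n tests despite a given pfd, and (b) the posterior confidence that pfd lies below a bound after n failure-free demands, assuming an arbitrary uniform Beta(1, 1) prior chosen purely for illustration.

```python
def prob_no_failures(pfd: float, n: int) -> float:
    """Probability of n consecutive failure-free i.i.d. demands,
    given a true probability of failure on demand (pfd)."""
    return (1.0 - pfd) ** n

def posterior_confidence(bound: float, n: int) -> float:
    """Posterior P(pfd <= bound) after n failure-free i.i.d. demands,
    under an (assumed, illustrative) uniform Beta(1, 1) prior on pfd.
    The posterior is Beta(1, n + 1), whose CDF at `bound` is
    1 - (1 - bound)**(n + 1)."""
    return 1.0 - (1.0 - bound) ** (n + 1)

if __name__ == "__main__":
    # Even with an undesirably large pfd of 1e-4, a thousand
    # failure-free tests are very likely:
    print(prob_no_failures(1e-4, 1000))   # ≈ 0.905

    # Under the i.i.d. assumption, enough failure-free demands
    # yield high posterior confidence in a small pfd bound:
    print(posterior_confidence(1e-3, 4602))
```

The paper's point is that such confidence rests entirely on the i.i.d. assumption; its conservative Bayesian bounds instead quantify how much weaker the supportable claims become when doubts about that assumption are admitted into the inference.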

Item Type: Article
Additional Information: Accepted.
Uncontrolled Keywords: cs.SE
Divisions: Faculty of Science and Engineering > School of Electrical Engineering, Electronics and Computer Science
Depositing User: Symplectic Admin
Date Deposited: 16 Jan 2023 10:54
Last Modified: 15 Mar 2024 17:54
DOI: 10.1109/tse.2022.3233802
Related URLs:
URI: https://livrepository.liverpool.ac.uk/id/eprint/3167061