Software Reliability and the Misuse of Statistics

Authors

  • Dewi Daniels, Software Safety Limited
  • Nick Tudor, D-RisQ Ltd

Keywords:

software failure rate, software reliability, software assurance, defect density, Modelworks

Abstract

Many papers have been written on software reliability. They claim that failures of software-based systems occur randomly and that the statistical techniques used to predict random hardware failure rates can also be used to predict software failure rates. This claim has gone unchallenged in academic papers, though it is treated with suspicion by many practising engineers. As a result, the applicability of these statistical techniques has been accepted in some standards, such as IEC 61508, but rejected in others, such as RTCA/DO-178C. It is more important than ever to understand whether this claim is true. There is strong lobbying from industry to allow software not developed to any standard to be used in safety-critical applications, provided it has sufficient product service history. The European Union Aviation Safety Agency (EASA) is promoting dissimilar software in the belief that using two or more independent software teams will deliver ultra-high levels of software reliability. Software defects are different from random hardware failures and need to be treated differently. This paper argues that the techniques used for the statistical evaluation of software make unwarranted assumptions about software and lead to overly optimistic predictions of “software failure rates”. It concludes that many software reliability models do not produce results in which confidence can be placed. Instead, it proposes an alternative way forward that does provide evidence that software is safe for its intended use before it enters service.

[Figure: chart showing results from an industrial-scale benchmarking study]

Published

2022-01-27