Title: Specifying Safety Requirements for Machine Learning Components in Autonomous Systems: A Survey

Author(s): Richard Hawkins, Sepeedeh Shahbeigi

Publication Event: Proceedings of the Thirty-third Safety-Critical Systems Symposium

Publication Date: 2025-02-01

Resource URL: https://scsc.uk/r3097.pdf

Abstract:

Machine learning (ML) components are recognized for their potential to undertake tasks such as object detection and classification across a range of safety-related applications. In order to be used safely, it is crucial that safety requirements for ML components are correctly understood, specified in a manner that supports ML development, and demonstrated to be sufficient and valid. Traditional safety requirements approaches may not apply well to ML components, due to their data-driven nature, especially in complex environments. Defining safety requirements for ML components requires an understanding of the unique mechanisms by which ML can contribute to system safety and of the potential failure modes of ML components. So far, little work has attempted to systematically derive safety requirements specific to ML components and to ensure a traceable link between system-level and component-level safety requirements. This work aims to address this gap by providing a comprehensive survey of the existing literature on methods for eliciting safety requirements for ML components. We identify key challenges and limitations in current methods and propose possible solutions. By highlighting these issues and reviewing current research, this paper lays a foundation for developing robust and effective safety requirements for ML components.