This document aims to provide clear, practical, pan-domain guidance on the safety assurance of Autonomous Systems (AS). In particular:

- The document is intended to be of use to a wide readership, including: developers of autonomous systems; Artificial Intelligence (AI) and Machine Learning (ML) practitioners; safety engineers; regulatory authorities; and managers (at a range of levels).

- The guidance is intended to be widely applicable. It is not tied to any specific development approach, system lifecycle or safety argument structure. To achieve this wide applicability, terms like “appropriate” and “suitable” are occasionally used. In such cases, users of this document would be expected to describe, and justify, how these terms have been interpreted in their specific context.

- There is a deliberate focus on aspects directly related to autonomy, and on enabling technologies such as AI and ML, rather than on more general safety engineering or systems engineering, where it is assumed that relevant general standards, guidelines and best practice will be applied. The intent is to avoid duplicating existing guidance on these general topics. Where one is available, this document could usefully supplement an existing Safety Management System (SMS). Consequently, there is intentionally very little mention of legal and/or regulatory requirements. Likewise, issues that are most appropriately addressed at an organisational, or enterprise, level (e.g. staff competencies) are not addressed.

- There is a deliberate focus on AS that use AI developed using ML. Although it is possible to envisage AS that do not use these technologies, AI and ML are considered to represent the greatest assurance challenges; they are also expected to be widely used.

- The objectives listed in this document would be expected to inform any discussions with regulatory authorities. However, detailed certification-related considerations (e.g. the timing and content of liaison with regulators) are not included, since they are inevitably domain-specific.