This document aims to provide clear, practical, pan-domain guidance on the safety assurance of Autonomous Systems (AS). In particular:

- The document is intended to be of use to a wide readership, including: developers of autonomous systems; Artificial Intelligence (AI) and Machine Learning (ML) practitioners; safety engineers; regulatory authorities; and managers (at a range of levels).
- There is a deliberate focus on aspects directly related to autonomy, and on enabling technologies such as AI and ML, rather than on more general safety engineering or systems engineering, where it is assumed that relevant general standards, guidelines and best practice will be applied. The intent is to avoid duplicating existing guidance on these general topics.
- There is a deliberate focus on AS that use AI developed using ML. Although it is possible to envisage AS that do not use these technologies, AI and ML are considered to present the greatest assurance challenges; they are also expected to be widely used.
- The guidance is intended to be widely applicable. It is not tied to any specific development approach, system lifecycle or safety argument structure. To achieve this wide applicability, terms like “appropriate” and “suitable” are occasionally used; in such cases, users of this document would be expected to describe, and justify, how these terms have been interpreted in their specific context.
- There is intentionally very little mention of legal and/or regulatory requirements. It is assumed that these will be identified, and demonstrably complied with, as part of normal practice.
- Issues related to staff competencies are deliberately excluded from consideration. Similarly, issues that are most appropriately addressed at an organisational, or enterprise, level are also excluded. These are expected to be covered by an existing Safety Management System (SMS), which could be supplemented by the considerations in this document.
- Issues related to domain-specific certification (e.g. liaison with regulators) are deliberately excluded from consideration. However, the objectives listed in this document would be expected to inform any discussions with regulatory authorities.
- This document makes no distinction based on criticality level; that is, there is no equivalent of Safety Integrity Levels (SILs) or Development Assurance Levels (DALs). These types of distinction may be included in future versions of this document.