The SCSC publishes a range of documents:
The club publishes its newsletter Safety Systems three times a year in February, June and October. The newsletter is distributed to paid-up members and can be made available in electronic form for inclusion on corporate members' intranet sites.
The proceedings of the annual symposium, held each February since 1993, are published in book form. Since 2013, copies have been available to purchase from Amazon.
The club publishes the Safety-critical Systems eJournal (ISSN 2754-1118) containing high-quality, peer-reviewed articles on the subject of systems safety.
If you are interested in being an author or a reviewer please see the Call for Papers.
All publications are available to download free of charge by current SCSC members (please log in first); recent books are also available as 'print on demand' from Amazon at reasonable cost.
Assuring Safe Autonomy contains papers presented at the 28th annual Safety-Critical Systems Symposium, held in York, UK, in February 2020.
The Symposium is for engineers, managers and academics in the field of system safety, across all industry sectors, so the papers making up this volume offer wide coverage of current safety topics and a blend of academic research and industrial experience. They include both recent developments in the field and discussion of open issues and questions.
The topics covered in this volume include: Assurance Cases, Autonomy, AI and Machine Learning, Data Safety, Human Factors, New Techniques and Security-Informed Safety.
Contents
On 29 October 2018, a Boeing 737 MAX aircraft departed from Soekarno-Hatta International Airport, Jakarta, Indonesia. The aircraft was less than three months old. Twelve minutes later, it had crashed, killing all 189 passengers and crew on board. On 10 March 2019, another Boeing 737 MAX aircraft departed from Addis Ababa Bole International Airport, Ethiopia. This aircraft was less than four months old. Six minutes later, it too had crashed, killing all 157 passengers and crew on board. This paper presents an analysis as to why these two accidents happened.
The idea that systems are safe because humans can adapt their behaviour is a key tenet of Safety II (proposed by Erik Hollnagel). But what happens when humans in a system are largely replaced with AI components? This adaptability for safety must come from the system, and it requires engineers to encode people’s ability to succeed under great uncertainty and complexity. This requirement drives a fundamental change in the competencies an engineer must possess. Similarly, there are competence profile changes for other stakeholders such as regulators, safety and security practitioners, and system operators. Using support from the literature and experience on the Assuring Autonomy International Programme (AAIP), this paper aims to enumerate some differences in competence, training and education for assuring systems with AI components, discuss the difficulties of doing this and propose a way forward. In this way we can start to build a picture of what good practice is. It is argued that creating practical theories and training tools for AI systems in a safety-critical context is not just an exercise in intellectualism, but an integral part of the safety of future systems. A systematic approach for identifying competencies and creating training to match those competence requirements is proposed.
The Boeing 737 MAX - Manoeuvring Characteristics Augmentation System (MCAS) accidents have demonstrated how cumulative factors may lead to accidental autonomy. Accidental autonomy emerges when differences in models compete over resources and control. In the operational domain, one manifestation is failure at the human-machine interface. Subtle, incremental changes in technology allied with downward economic pressures encourage reuse to create the system safety property of ‘additionality’. Cumulative incremental changes occur that, when taken together, are safety significant. Reuse of process, product or both gives rise to inappropriate design trade-offs. Assumptions about the completeness of process, design, implementation or context may lead, in extreme circumstances, to the creation of accidental autonomy - systems without human oversight that implement safety-related functionality or services. Oversight, assessment and approval of systems dependent on reuse are reliant on the familiarity of the assessor with the reused elements within their operational and use context. Incomplete or inadequate understanding and failures of comprehension, along with the allure of fast software development, create the potential for accidental autonomy.
Autonomous systems make use of a suite of algorithms for understanding the environment in which they are deployed. These algorithms typically solve one or more classic problems, such as classification, prediction and detection. This is a key step in making independent decisions in order to accomplish a set of objectives. Artificial neural networks (ANNs) are one such class of algorithms, which have shown great promise in view of their apparent ability to learn the complicated patterns underlying high-dimensional data. The decision boundary approximated by such networks is highly non-linear and difficult to interpret, which is particularly problematic in cases where these decisions can compromise the safety of either the system itself, or people. Furthermore, the choice of data used to prepare and test the network can have a dramatic impact on performance (e.g. misclassification) and consequently safety. In this paper, we introduce a novel measure for quantifying the difference between the datasets used for training ANN-based object classification algorithms, and the test datasets used for verifying and evaluating classifier performance. This measure allows performance metrics to be placed into context by characterizing the test datasets employed for evaluation. A system requirement could specify the permitted form of the functional relationship between ANN classifier performance and the dissimilarity between training and test datasets. The novel measure is empirically assessed using publicly available datasets.
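To make the role of such a measure concrete, the sketch below computes a simple train/test dissimilarity score as the mean per-feature total-variation distance between histograms. It is an illustrative stand-in, not the measure proposed in the paper, and the datasets used in the example are synthetic.

```python
import numpy as np

def dataset_dissimilarity(train, test, bins=20):
    """Illustrative train/test dissimilarity: mean per-feature histogram distance.

    train, test: arrays of shape (n_samples, n_features).
    Returns a value in [0, 1]; 0 means identical marginal distributions.
    NOTE: a placeholder metric for illustration, not the paper's measure.
    """
    train = np.asarray(train, dtype=float)
    test = np.asarray(test, dtype=float)
    distances = []
    for j in range(train.shape[1]):
        lo = min(train[:, j].min(), test[:, j].min())
        hi = max(train[:, j].max(), test[:, j].max())
        p, _ = np.histogram(train[:, j], bins=bins, range=(lo, hi))
        q, _ = np.histogram(test[:, j], bins=bins, range=(lo, hi))
        p = p / p.sum()
        q = q / q.sum()
        distances.append(0.5 * np.abs(p - q).sum())  # total variation distance per feature
    return float(np.mean(distances))

# Example: a shifted test set scores as more dissimilar than an i.i.d. one,
# which is the kind of context a classifier performance figure would need.
rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(1000, 4))
test_iid = rng.normal(0.0, 1.0, size=(500, 4))
test_shifted = rng.normal(0.7, 1.0, size=(500, 4))
print(dataset_dissimilarity(train, test_iid))      # small
print(dataset_dissimilarity(train, test_shifted))  # larger
```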
The International Civil Aviation Organization (ICAO) only accepted the original satellite navigation constellations (GPS and GLONASS) as a supplementary source of navigation data for civil air transport. This was not because of accuracy (although that is insufficient for some phases of flight), but because of the lack of integrity. Position errors due to a satellite fault, for example, can go undetected. This paper briefly summarises provisions specified by ICAO to make a trusted Global Navigation Satellite System, and looks forward to some new developments in providing trusted information to support the integrity of navigation solutions, which could also be used in other domains, e.g. autonomous vehicles.
Major accidents that have impacted society, whether in aviation, healthcare, oil and gas, maritime, nuclear, defence or rail, have all had a services element that played a part in the accident. This work utilises formal accident reports to identify and analyse the service aspects that contributed to recent accidents. Service elements include the people, training and procedures; these can either cause an accident or help recovery from it. Reference is made to the emerging Service Assurance Guidance produced by the SCSC Service Assurance Working Group (SAWG). The paper shows that service failures can cause accidents, often with fatal consequences.
The Internet-of-Things (IoT) has enabled Industry 4.0 as a new manufacturing paradigm. The envisioned future of Industry 4.0 and Smart Factories is to be highly configurable and composed mainly of the ‘Things’ that are expected to come with some, often partial, assurance guarantees. However, many factories are categorised as safety-critical, e.g. due to the use of heavy machinery or hazardous substances. As such, some of the guarantees provided by the ‘Things’, e.g. related to performance and availability, are deemed necessary in order to ensure the safety of the manufacturing processes and the resulting products. In this paper, we explore key safety challenges posed by Industry 4.0 and identify the characteristics that its safety assurance should exhibit. We propose a modular safety assurance model that combines the responsibilities of the different actors, e.g. system integrators, cloud service providers and ‘Things’ suppliers. Besides the desirable modularity of such a safety assurance approach, our model provides a basis for cooperative, on-demand and continuous reasoning in order to address the reconfigurable nature of Industry 4.0 architectures and services. We illustrate our approach with a smart factory use case.
Space exploration and utilisation is increasingly focussed through the lens of private space activities. Whilst international treaties and agencies provide the framework for access to, and utilisation of, space, the rapidly increasing activities of private entities are leading to new challenges, both legislative and technical. Governments are responding in a range of different ways to meet the goal of supporting this growing sector whilst ensuring that their national and international obligations are met. Based on a review of current and near-future trends, some suggestions are made on risk assessment methodologies that may help provide clarity when assessing safety in space.
It has long been postulated that the use of modularity in assurance cases has the potential to bring extensive benefits through its ability to manage technical and organisational complexity, provide a scalable solution and facilitate re-use in future large complex systems. Previous work, such as that undertaken by the Industrial Avionics Working Group (IAWG), has shown how a modular assurance case approach could be adopted for real systems; despite this, its uptake by industry has been slow. The Object Management Group (OMG) recently published Version 2.1 of its standard for assurance cases, the Structured Assurance Case Metamodel (SACM). By providing a standardised metamodel for assurance cases, SACM also supports the integration and interchange of different assurance artifacts and controlled terminology. This makes SACM the ideal mechanism to support modular assurance cases through the development of assurance case packages, interfaces and integration bindings. In this paper we describe the state of the art for modular assurance cases through an example from the IAWG project, expressed using the modular GSN notation. We show how this example could be developed using SACM. We go on to discuss the key challenges that are preventing the widespread adoption of modular assurance cases and the extent to which SACM may be able to help address them.
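As an illustration of the modular idea only (not the SACM metamodel or the IAWG example), the sketch below represents each module's interface as the claims it supports and the claims it relies upon, and checks that composition leaves no reliance unbound. Module and claim names are invented.

```python
from dataclasses import dataclass, field

@dataclass
class AssuranceModule:
    """Simplified stand-in for a modular assurance case package and its interface."""
    name: str
    supported_claims: set = field(default_factory=set)  # claims this module argues
    relied_claims: set = field(default_factory=set)     # away-claims it depends on

def unbound_reliances(modules):
    """Return (module, claim) pairs whose reliance no other module supports."""
    supported = {c for m in modules for c in m.supported_claims}
    return [(m.name, c) for m in modules for c in m.relied_claims if c not in supported]

# Invented example modules: a controller that relies on a claim argued elsewhere.
engine = AssuranceModule("EngineController",
                         supported_claims={"thrust demands are bounded"},
                         relied_claims={"sensor data is fresh within 20 ms"})
sensors = AssuranceModule("SensorSuite",
                          supported_claims={"sensor data is fresh within 20 ms"})

print(unbound_reliances([engine, sensors]))  # [] -> bindings are complete
print(unbound_reliances([engine]))           # reveals the missing away-claim
```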
Single-core processors are increasingly difficult to source, with microprocessor manufacturers moving into a Multi-Core (MC) arms race for energy efficiency and performance improvement. However, the performance gains from MC utilisation of many cores and shared resources bring challenges for qualification, e.g. interference paths that impact Worst-Case Execution Time (WCET). For high-integrity aviation systems (e.g. DO-178B/C level A and B) these challenges need to be resolved for confidence to be gained to accept these MC-based systems. MC is the future, and we need a way to qualify and accept MC-based safety-critical systems into service. This paper illustrates a practical implementation strategy for MCs on a safety-critical system within a UK airborne system that is currently undergoing an external qualification assessment. The paper documents the strategy in terms of recommendations based upon the development, verification and validation activities undertaken. The strategy has been refined based upon our experiences. The approach is based upon a diverse strategy which adopts quantitative and qualitative evidence.
Understanding the need to implement ISO 26262, and addressing some of the challenges it poses.
Commercial off-the-shelf (COTS) devices with embedded software offer flexible and wide-ranging benefits arising from technological advancements. Their use in nuclear safety systems has become prevalent, but this has come with a difficult challenge for safety assurance. These new devices are complex, and restricted access to evidence from the product developer to support a functional safety audit can make their justification in safety-critical systems difficult. This paper presents a novel nuclear safety justification strategy termed ‘Model Based Safety Assurance’ (MBSA), which requires less invasive questioning and is thus less resource-intensive for the developer. It uses concepts from Model Based Systems Engineering and applies them in the context of safety assurance, to achieve qualification of COTS devices for use in safety systems. The strategy utilises established techniques for software development (e.g. Model Based Testing) but extends their scope to support safety assessments. The paper also discusses the advantages and limitations of MBSA compared with the traditional safety demonstration approach currently used by the civil nuclear industry. Finally, with the help of a case study (based on a real system), it seeks to demonstrate the strength of the approach when combined with software safety assurance techniques such as Statistical Testing and Goal Structuring Notation.
For modern safety-critical systems we aim to maintain safety whilst simultaneously taking advantage of the benefits of system interconnectedness and faster communications. Many standards have recognised and responded to the serious security implications of making these connections between systems that have traditionally been closed. In addition, there have been several advances in developing techniques to combine the two attributes; however, the problem of integrated assurance remains. What is missing is a systematic approach to reasoning about alignment. In this paper, the Safety-Security Assurance Framework (SSAF) is presented as a candidate solution. SSAF is a two-part framework based on the concept of independent co-assurance (i.e. allowing separate assurance processes, but enabling the timely exchange of correct and necessary information). To demonstrate SSAF’s application, a case study is given using requirements from widely adopted standards (IEC 61508 and Common Criteria) and a Bayesian Belief Network. With a clear understanding of the trade-offs and the interactions, it is possible to create better models for alignment and therefore improve safety-security co-assurance.
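To illustrate the kind of reasoning a Bayesian Belief Network supports in co-assurance (the nodes and probabilities below are invented for the example, not taken from SSAF, IEC 61508 or Common Criteria), a minimal two-node calculation shows how new security information changes confidence in a safety claim:

```python
# Minimal hand-rolled Bayesian calculation. Node names and numbers are
# illustrative only; a real co-assurance model would be far richer.

# Prior: probability that a security control (e.g. access control) is effective.
p_sec_effective = 0.95

# Conditional probabilities of the safety claim holding, given the control state.
p_safety_given_sec = {True: 0.99, False: 0.70}

def p_safety(p_sec: float) -> float:
    """Marginal probability of the safety claim holding."""
    return p_sec * p_safety_given_sec[True] + (1 - p_sec) * p_safety_given_sec[False]

print(f"Baseline confidence in safety claim: {p_safety(p_sec_effective):.3f}")
# New security information (e.g. a disclosed vulnerability) lowers the prior;
# independent co-assurance makes this knock-on effect on safety explicit.
print(f"After vulnerability disclosure:      {p_safety(0.60):.3f}")
```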
Unmanned Aircraft Systems (UAS) are forcing a rethink of how traditional safety and security assessment processes are conducted. Traditional concerns have been with the safety and security of the crew and passengers on the aircraft, but with the advent of UAS these shift to the risks that the system presents to people and infrastructure on the ground, and to other air users. This shift presents challenges to a large body of stakeholders, including the rule makers, the UAS designers, the operators, safety and security assessors, and the regulators. This paper provides a case study focussing on the command and control link to the aircraft, and describes the challenges experienced and the developments made. As a test case, the paper also aims to lay down a framework for a generalised approach to the harmonious integration of the safety and security disciplines.
In May 2019, the IEC published a guide to combining cybersecurity and safety for industrial automation and control systems (IACS), IEC TR 63069. I consider critically two main concepts in the guide: an overly strong notion of "Security Environment" (SE), and an accompanying incomplete type of security-risk analysis called "threat-risk assessment". IEC TR 63069 is a misleading guide to what is needed for cybersecurity in safety-critical digital systems.
HS2 is a huge and complex programme, which involves building new rail and road infrastructure, civil works, stations, railway systems and trains, and creating new organisations to operate them, from initial design to completed delivery over a timescale of more than 20 years. Within this, a consistent and comprehensive system safety approach has been developed that is flexible enough to cover everything from cyber security threats to on-board systems through to the fire safety of concrete. Reuben will present his approach and note key issues and learning points that have evolved.
Historically, data such as standing data, configuration data and other data types have had to be proven correct before application in a safety-critical environment. Usually, this has been achieved by rigorous manual or automated checking and system testing before first use, and is feasible because the data sets are relatively small. However, a “safety by compliance” strategy for data does not adequately deal with the sources of errors leading to accidents. As AI is based on the availability of huge quantities of data, such approaches become increasingly useless at scale. Three problems therefore must be overcome: first, ensuring that large data sets contain sufficiently granular detail to correlate with events associated with identified accident potential or other rare events, and are validated using appropriate principles; second, assessing whether related, but diversely sourced, data sets could be cross-validated by identifying and quantifying the probability of encountering missing features in the data; and finally, providing assurance that any capacity of an AI-driven function to incorrectly extrapolate from data within the existing data set is minimised. This paper considers possible approaches to address these problems in greater detail.
Artificial intelligence (AI) in healthcare is one of the fastest-growing industries worldwide. AI is already used to deliver services as diverse as symptom checking, skin cancer screening, and recognition of sepsis. But is it safe to use AI in patient care? The evidence base is narrow and limited, frequently restricted to small studies considering the performance of AI applications at isolated tasks. In this paper we argue that greater consideration should be given to how the AI will be integrated into clinical processes and health services, because it is at this level that human factors challenges are likely to arise. We use the example of autonomous infusion pumps in intensive care to analyse the human factors challenges of using AI for patient care. We outline potential strategies to address these challenges, and we discuss how such strategies and approaches could be applied more broadly to AI technologies used in other domains.
The Energy Institute, on behalf of Shell, commissioned a rapid review of psychological safety. Psychological safety can be described as the willingness of people to express an opinion or admit mistakes or unsafe behaviours without fear of being embarrassed, rejected or punished. Psychological safety plays a role in facilitating the reporting of errors and unsafe behaviours, thereby enabling these to be identified and learnt from, and improvements made to prevent their repetition. Psychological safety is particularly important in hierarchical organisations, often with complex systems, where error may have serious safety consequences and where individuals or organisations are held responsible for adverse consequences. Interventions to enhance psychological safety include tools to support analysis of the causes of errors and behaviours, visible effort by management and the organisation to build trust and teamwork with staff, and support and encouragement for safety-related behaviour.
Hinkley Point C in Somerset is the first new nuclear power station to be constructed in the UK in a generation. It is a light water “EPR” reactor, based on a design very similar to those which have just begun commercial operation in China and are nearing completion in France and Finland. This is one of the largest infrastructure projects in the world, costing around £20bn, employing thousands of people on site, and has a truly international supply chain. As a “Third Generation” reactor, it includes several safety and efficiency improvements compared to the previous generations of reactors which were designed and constructed over the last 50 years. The safety systems have been developed using a Defence in Depth approach with multiple redundant and diverse systems, to reduce the frequency of an event leading to core melt to a level significantly lower than that of previous-generation reactors. There are additional design features to ensure that, in the extremely remote event of a “severe accident”, the resultant core melt is managed and cooled using engineered systems. This design philosophy of engineered redundant and diverse mechanical and electrical systems is mirrored in the I&C systems. There are two independent digital control and protection systems, and a third non-computerised system, which are largely independent of each other but act in a hierarchical manner to provide very high levels of reliability. This keynote speech will describe how the design has achieved very high safety and reliability levels using the Defence in Depth approach, explain how this is justified in the safety case and provide some insight into how independent oversight is provided on such a complex project.
In this paper we examine the question of whether a vehicle that is following the “rules of the road” can always be regarded as operating safely, especially in an environment where human actors are operating. We do this in two ways: first we pose the question, “what is a ‘lane’?” to highlight the problem of defining system behaviours in terms a human would understand; we then look at a pair of well-formed rules and assess the possible consequences of applying those rules in a traffic environment with a mixture of human-driven and autonomous vehicles (AVs). Overall, the paper highlights the multitude of problems associated with defining how AVs should behave in a mixed human/AV environment.
Machine Learning is making rapid progress in a variety of applications. It is highly likely to be used in safety-related and possibly safety-critical systems. As a logical next step to work presented at the Safety-Critical Systems Symposium 2019 on developing a safety argument structure for an autonomous system that uses machine learning, this paper focuses on generating the underpinning safety evidence. This is achieved through the representation of the machine-learnt software development life-cycle as a model which articulates constituent artefacts, information flow and transformations. This life-cycle model is then used to facilitate the systematic identification of the potential for the introduction of hazardous errors during development. Product, process and goal-based control measures are proposed to reduce and manage these potential errors. The feasibility and practicality of implementing these control measures and generating associated safety evidence is also discussed.
At Intel and Mobileye, saving lives drives us. Since joining forces, we’ve spread the word on the need for a safety standard for autonomous vehicles (AVs), and how consumers and regulators alike demand transparency not offered by existing metrics used in AV safety claims. We proposed Responsibility-Sensitive Safety (RSS) as a potential solution: a formal, mathematical model that defines what safe driving looks like. It was our first step towards building consensus in the industry. Today we take the next step in that journey, diving deeper into the makeup of RSS: what is this model, how does it work under the hood, and how can RSS help us balance the trade-off between the safety and usefulness of AVs? Higher levels of safety may result in overly conservative AVs that nobody wants on the road. So where should industry and the public draw the line to answer the question “How safe is safe enough?” Help us drive the conversation today that will enable the autonomous tomorrow.
We propose an approach to perform a safety-with-cybersecurity analysis of existing industrial safety-critical systems, which were originally designed without essential cybersecurity considerations yet are used in today's operating environment. As an example, we consider a refuelling machine (RFM) that is commonly used to perform fuel assembly loading and unloading in Nuclear Power Plants (NPPs). The objective of the analysis is to identify (and allow the chance to mitigate) those cybersecurity vulnerabilities which have implications for the safe operation of the RFM. First, we reverse engineer a functional specification (FS) of a legacy RFM, based on the manuals and handbooks, e.g. the system operation description and the controller and components handbooks; we also consult current designers. Additionally, we evaluate general information collected from publicly available RFM descriptions. The FS is expressed in the form of Hoare triples, with the preconditions and postconditions of each function. Then we identify the hazards in this FS by utilizing event tree analysis (ETA). Finally, we analyse the possible causes of each hazard by considering the preconditions and postconditions of each function. The detailed steps of each key part are described in the respective sections. Keywords: reverse engineering, cybersecurity analysis, functional specification, hazard identification, risk analysis, Hoare Logic, event tree analysis.
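As a sketch of the specification style only (the RFM function, predicates and states below are invented, not drawn from the paper's reverse-engineered FS), each function can be recorded as a Hoare-triple-like entry and observed transitions checked against it to flag candidate hazard causes for further analysis, e.g. with an event tree:

```python
from dataclasses import dataclass
from typing import Callable, Dict

State = Dict[str, object]

@dataclass
class HoareTriple:
    """One functional specification entry: a named function with pre/postconditions."""
    name: str
    precondition: Callable[[State], bool]
    postcondition: Callable[[State, State], bool]  # (state before, state after)

# Invented example entry for illustration.
lower_grab = HoareTriple(
    name="lower_grab_onto_fuel_assembly",
    precondition=lambda s: s["grab_empty"] and s["hoist_interlock_ok"],
    postcondition=lambda before, after: after["grab_engaged"] and not after["grab_empty"],
)

def check_transition(triple: HoareTriple, before: State, after: State) -> str:
    """Classify an observed transition against the specification entry."""
    if not triple.precondition(before):
        return f"{triple.name}: precondition violated (possible hazardous command)"
    if not triple.postcondition(before, after):
        return f"{triple.name}: postcondition violated (function failed, analyse causes)"
    return f"{triple.name}: transition consistent with specification"

before = {"grab_empty": True, "hoist_interlock_ok": False}
after = {"grab_engaged": False, "grab_empty": True}
print(check_transition(lower_grab, before, after))
```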
The Harbsafe project analysed the technical terminology defined in an array of IEC standards and guides concerning functional safety and cybersecurity. 460 terms were defined in the documents surveyed, most given in Clause 3, Terms and Definitions, of IEC documents. IEC publishes guidelines for terminology; terminology conformant with these guidelines is said to be “harmonised”. We found that terminology in the documents reviewed was not well harmonised. We devised three techniques to aid harmonisation: an application of machine-learning “word embedding” analysis to identify related concepts, possibly synonyms, which were not overt; SemAn (a variety of Semantic Analysis) for analysing and possibly harmonising the definiens of homonyms and almost-synonyms; and ConcAn (a variety of Conceptual Analysis) for analysing terms whose overt definition did not seem to us to fit well with everyday engineering use. The first author wrote a WWW-based tool, the Terminology Dashboard, now on-line at VDE, to aid engineers in navigating a database of terms and definitions and their relations to each other.
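The word-embedding step can be illustrated with a toy example: terms whose embedding vectors lie very close together are flagged as candidate covert synonyms. The vectors and threshold below are invented for illustration; real embeddings would be learned from the standards corpus.

```python
import numpy as np

# Invented 3-dimensional "embeddings" for a handful of terms.
term_vectors = {
    "risk": np.array([0.9, 0.1, 0.3]),
    "hazard": np.array([0.85, 0.15, 0.35]),
    "threat": np.array([0.7, 0.4, 0.2]),
    "safety function": np.array([0.1, 0.9, 0.5]),
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

terms = list(term_vectors)
for i, a in enumerate(terms):
    for b in terms[i + 1:]:
        sim = cosine(term_vectors[a], term_vectors[b])
        if sim > 0.95:  # arbitrary threshold for "suspiciously similar"
            print(f"candidate synonym pair: {a!r} / {b!r} (cosine {sim:.2f})")
```

With these made-up vectors only the risk/hazard pair crosses the threshold, which is exactly the kind of non-overt relationship a terminologist would then inspect by hand.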
An autonomous system should only make decisions that are safe. However, since the system has only partial control over its environment, achieving absolute safety is impossible: if a person jumps in front of a fast-moving autonomous car, the car may not be able to stop in time. For certification and liability assignment, the decision-making logic should be able to state explicitly which assumptions it relies on and provide guarantees that, under these assumptions, safety properties hold. Although generally conceived as crucial, assumptions are typically not dealt with explicitly. State-of-the-art decision making is often the result of learning or advanced planning techniques, encoding many implicit assumptions about the operating environment. We propose an approach to reveal assumptions and verify relative safety for decision-making policies. Relative safety provides conditional guarantees: given that the explicitly specified assumptions are valid during operation, we can provide solid guarantees. We use the highly expressive formal logic FO(.) to specify assumptions and desired properties. We employ the state-of-the-art knowledge base system IDP as a model checker to verify desired properties and reveal missing assumptions. In our approach, one can discover and add assumptions in an iterative process. A thorough validation of the approach in two case studies is included: an autonomous UAV system for pylon inspection and a semi-autonomous car for highway driving.
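The flavour of relative safety can be shown with a toy brute-force check standing in for the FO(.)/IDP machinery: a braking rule is verified against a safety property only under an explicit operating assumption, and dropping the assumption reveals a counterexample. The policy, stopping model and assumption are all invented for this sketch.

```python
from itertools import product

SPEEDS = range(0, 31, 5)      # ego speed in m/s (discretised)
DISTANCES = range(0, 101, 5)  # gap to obstacle in m (discretised)

def policy_brakes(speed, gap):
    return gap < 3 * speed                       # invented decision rule

def must_brake(speed, gap):
    return speed > 0 and gap < speed ** 2 / 10 + 5  # invented stopping distance + margin

def assumption(speed, gap):
    return speed <= 25                           # explicit operating assumption

def verify(with_assumption):
    """Return all states where the safety property fails (property: must_brake => brakes)."""
    return [
        (s, g) for s, g in product(SPEEDS, DISTANCES)
        if (not with_assumption or assumption(s, g))
        and must_brake(s, g) and not policy_brakes(s, g)
    ]

print("violations under assumption:", verify(True))    # expected: []
print("violations without assumption:", verify(False)) # reveals why the assumption is needed
```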
RTCA’s DO-326A describes a Security Airworthiness Process that aligns the security process to the safety process with the intent of identifying the impact of malicious cyber security attacks on the safety of an airborne system. Dstl have developed an incremental process for engagement with stakeholders such that available supporting evidence against the DO-326A objectives can be identified and reasoned with. Specifically, the process defines an approach to utilise pre-existing security evidence from alternative security engineering processes, and exploits the Goal Structuring Notation (GSN) to store and explain the argument as to whether an acceptable means of compliance can be determined based on the available evidence. Key to the value of this work is the ability to identify any fundamental shortfalls in meeting the intent of DO-326A, that is, in addressing the challenge of security-informed safety. By systematically assessing the potential for existing evidence to meet some or all of DO-326A, the dialogue with stakeholders can be focussed on the development of mitigations where they are required.
The development of automated mobile machines towards autonomous operation is proceeding rapidly in many industrial sectors. New technologies and their increasing complexity set challenges for designing safe and reliable machinery and systems. One big challenge is to manage system-level safety and reliability risks with cost-effective solutions in applications where autonomous machinery, manual machines and employees aim to work in the same area. Three safety design approaches for autonomous machinery are introduced and discussed through the requirements and constraints imposed by system operating concepts and operating environments. The key element in machine autonomy is adaptability to a dynamically changing environment based on the available information. Current safety engineering methods developed for automated machinery do not cover or consider autonomy aspects such as dynamic risk assessment or independent decision-making. Safety standards for the design of autonomous machinery applications are also discussed.
Increasing autonomy of operations is a major development trend in port logistics. Large container terminals have already automated various parts of their operations, and in the future smaller terminals will also aim for increased efficiency through automation and autonomous systems. In such settings, the machinery used for container handling needs to be highly adaptive and able to conduct a multitude of tasks in changing environmental conditions. This results in a complex and dynamic operating environment where manual and autonomous machines, as well as humans, may work simultaneously in the same area.
In this paper, we focus on addressing the systemic safety hazards resulting from the interactions between various actors in the context of a small container terminal. Selected existing systems-theoretic hazard analysis methods are reviewed, specifically covering the following:
As our society faces the prospect of an ageing population, with ever greater demands on healthcare, social care and personalised treatments, the body of nursing and care support professionals will experience difficulties both in recruiting personnel and in addressing the later-life conditions that emerge as life spans increase. To prevent this from becoming unmanageable, researchers are investigating the use of robotic technology to relieve carers of some of their tasks. Ideally, robots will undertake mundane, physically risky and frequently demanded tasks, such as assistance with moving about, so that care staff can concentrate on care activities that require human-to-human interaction: for example, activities involving significant emotional interaction and support, such as recreational activities, conversation or counselling.
Two characteristic features of autonomous systems (distinguishing them from systems that are merely ‘automatic’) are (1) that they are usually required to perform and achieve complex situated behaviour patterns, and (2) that they are required to perform non-mission tasks as well as the tasks that define their mission or purpose.
Communication gaps remain a challenge for stakeholders involved in software requirements engineering, particularly when eliciting and refining safety requirements. These are difficult to express and elicit using standard techniques such as CRUD-based (Create, Retrieve, Update, Delete) methodologies. This difficulty can manifest across contractual boundaries and act against effective validation. For instance, the use of ambiguous and emotive language when discussing safety requirements can increase the complexity of these requirements, compromise software safety and lead to costly revisions. This paper presents the design of a gamified prototype, which aims to explore whether these challenges for eliciting safety requirements can be minimised via the use of a competitive collaboration technique. The prototype’s design allows stakeholders to document and manage requirements through agile-based user stories. It includes customisation of De Bono’s ‘Six Thinking Hats’ from the field of cognitive psychology as a mechanism for gamification, and an emotive word bank based on the OCC (Ortony, Clore, Collins) ‘Model of Emotions’ to support stakeholder communication around safety.
SAS - SAFER AUTONOMOUS SYSTEMS: The coming of autonomous systems doesn’t just mean self-driving cars. Advances in artificial intelligence will soon mean that we have drones that can deliver medicines, crew-less ships that can navigate safely through busy sea lanes, and all kinds of robots, from warehouse assistants, to search-and-rescue robots, down to machines that can disassemble complex devices like smartphones in order to recycle the critical raw materials they contain. As long as these autonomous systems stay out of sight, or out of reach, they are readily accepted by people. The rapid and powerful movements of assembly-line robots can be a little ominous, but while these machines are at a distance or inside protective cages we are at ease. However, in the near future we’ll be interacting with “cobots” – robots intended to assist humans in a shared workspace. For this to happen smoothly we need to ensure that the cobots will never accidentally harm us. This question of safety when interacting with humans is paramount. No one worries about a factory full of autonomous machines that are assembling cars. But if these cars are self-driving, then the question of their safety is raised immediately. People lack trust in autonomous machines and are much less prepared to tolerate a mistake made by one. So even though the widespread introduction of autonomous vehicles would almost eliminate the more than 20,000 deaths on European roads each year, it will not happen until we can provide the assurance that these systems will be safe and perform as intended. And this is true for just about every autonomous system that brings humans and automated machines into contact.
The SCSC 'Data Safety Guidance' provides a framework and a process for assuring the safety of modern data-driven computer systems providing safety-related functionality. In this poster we use the ‘Z’ notation to capture the essential content of the existing SCSC Data Safety Guidance. The formal specification of the entities and the relationships between them will make it easier for a practitioner to understand the Guidance and to apply it to real projects.
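As a flavour of the formalisation only (the entities and invariant below are simplified stand-ins, not the schemas actually defined in the Guidance), a Z-style schema might relate data items to the properties required of them and those demonstrated; the sketch assumes the zed-csp LaTeX package for typesetting.

```latex
% Illustrative sketch only: entity and relation names are invented stand-ins,
% not the schemas defined in the SCSC Data Safety Guidance.
\documentclass{article}
\usepackage{zed-csp}   % assumed Z typesetting package
\begin{document}

\begin{zed}
  [DATAITEM, PROPERTY]
\end{zed}

\begin{schema}{DataSafetyModel}
  items : \power DATAITEM \\
  required : DATAITEM \fun \power PROPERTY \\
  demonstrated : DATAITEM \fun \power PROPERTY
\where
  \forall d : items \bullet required~d \subseteq demonstrated~d
\end{schema}

\end{document}
```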
The use of natural language in engineering is often problematic due to assumed meanings and usage contexts of domain terms with the result that misunderstandings arise and uncertainty abounds. Somewhat ironically, this language uncertainty is just as present in systems (and project) engineering risk management. In this paper, a formalised structure of words – an ontology – is proposed by which, at least, the risks arising from system safety and security concerns can be described.