The SCSC publishes a range of documents:
The club publishes its newsletter Safety Systems three times a year in February, June and October. The newsletter is distributed to paid-up members and can be made available in electronic form for inclusion on corporate members' intranet sites.
The proceedings of the annual symposium, held each February since 1993, are published in book form. Since 2013 copies can be purchased from Amazon.
The club publishes the Safety-critical Systems eJournal (ISSN 2754-1118) containing high-quality, peer-reviewed articles on the subject of systems safety.
If you are interested in being an author or a reviewer please see the Call for Papers.
All publications are available for current SCSC members to download free of charge (please log in first); recent books are available as 'print on demand' from Amazon at reasonable cost.
The club publishes the proceedings of the annual Safety-critical Systems Symposium (SSS) in book form. Since 2013 these have been available to buy from Amazon or as individual papers for download from the website by Club members.
The rapid evolution of AI has shocked the world, and further step-changes are likely. Not only does this change the picture for safety-critical applications that use AI, such as self-driving vehicles, drones and medical image recognition systems; it also has the potential to revolutionize the way we produce critical systems, from AI-generated designs to AI-authored safety cases.
System safety practices and approaches must adapt to deal with, and make best use of, these new AI applications and uses, and ways must be found to assure systems utilizing AI that are meaningful, understandable and trustworthy.
Systems are also getting ever more complex and connected, and in some cases more fragile. We do not have established tools and techniques to analyse and manage such systems: they are too opaque, and have too many parts, interfaces and interactions, for the old techniques to work. Two new SCSC Working Groups, the Safe AI Working Group (SAIWG) and the Safer Complex Systems Working Group (SCSWG), have been set up to look at the issues and produce new guidance in these areas.
Contents
In the rapidly evolving landscape of autonomous systems, ensuring safety is paramount. As these systems become increasingly integrated into our daily lives, we must adopt a proactive approach to mitigate risks and prioritize safety from the very inception of design. This keynote presentation, titled "Autonomous Systems Safety by Design," delves into the critical intersection of technology and safety, emphasizing the imperative to embed safety principles into the DNA of autonomous systems design.
The presenter, Adiac Aguilar, an Autonomous Systems Safety expert, will explore the foundational principles and methodologies that underpin the concept of safety by design in autonomous systems. Adiac works on the System Safety Architecture within the Core System Platform at Volvo Cars, contributing to the design of the system architecture and requirements for the SPA2 Core System Platform, introduced in the Polestar 3 and Volvo EX90, to ensure a safe and redundant computing architecture.
During the presentation, Adiac will navigate the intricate web of challenges posed by the dynamic nature of autonomous technologies, such as artificial intelligence, robotics, and deep learning. Key themes will include the incorporation of fail-operational mechanisms, robust testing protocols, and end-to-end system reliability.
Drawing on real-world case studies and the latest advancements in autonomous systems, he will shed light on industry best practices and emerging standards.
The keynote will not only emphasize the technological aspects of safety but will also delve into the ethical considerations surrounding autonomous systems. It challenges attendees to think beyond technical solutions and consider the broader societal implications, addressing questions of accountability, transparency, and public trust.
Software is embedded in all aspects of our lives and, without it, our lives would be completely different. Can you imagine your typical day without a smartphone or a laptop? Even for our daily commute, whether we use cars, trains or electric bikes, software is essential for them to function and to provide additional services such as train timetables or the name of the next station. So, what is software? How do we ensure it is fit for purpose? And how do the standards help us achieve this? Ebeni tackle these questions and provide an overview of the key software standards in rail and their evolution. We also discuss the most recently developed software standard for railway applications, explaining the significant changes, the additional guidance on lifecycles and the consideration of model-based design, as well as how some requirements have been rewritten to aid interpretation and understanding.
Safety-critical systems require significant work and expertise to determine that they are safe before being deployed into the public domain. The safety case for these systems must therefore be robust and comprehensible. This paper presents a potential solution for verifying safety-critical systems at the requirements stage of the project lifecycle by checking for loops within system requirements and in the reasoning and totalization of a system specification. By undertaking these checks at an earlier stage in the project lifecycle, both time and money can be saved.
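As a minimal illustration of this kind of check, the sketch below models "requirement A is justified in terms of requirement B" as a directed graph and searches for cycles; the requirement IDs and dependency structure are hypothetical, not taken from the paper.

```python
# Minimal sketch: detect circular reasoning among system requirements by
# modelling "requirement A depends on requirement B" as a directed graph
# and searching for loops with a depth-first traversal.
# The requirement IDs and dependencies below are hypothetical.

def find_cycle(dependencies):
    """Return one dependency loop as a list of requirement IDs, or None."""
    WHITE, GREY, BLACK = 0, 1, 2          # unvisited / on stack / done
    colour = {req: WHITE for req in dependencies}
    path = []

    def visit(req):
        colour[req] = GREY
        path.append(req)
        for dep in dependencies.get(req, ()):
            if colour.get(dep, WHITE) == GREY:    # back edge: loop found
                return path[path.index(dep):] + [dep]
            if colour.get(dep, WHITE) == WHITE:
                cycle = visit(dep)
                if cycle:
                    return cycle
        path.pop()
        colour[req] = BLACK
        return None

    for req in list(dependencies):
        if colour[req] == WHITE:
            cycle = visit(req)
            if cycle:
                return cycle
    return None

requirements = {
    "REQ-1": ["REQ-2"],    # REQ-1 is justified in terms of REQ-2...
    "REQ-2": ["REQ-3"],
    "REQ-3": ["REQ-1"],    # ...which loops back to REQ-1
    "REQ-4": [],
}
print(find_cycle(requirements))  # ['REQ-1', 'REQ-2', 'REQ-3', 'REQ-1']
```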
Based on media reports, interviews, and survey reports, such as those conducted by the RAND Corporation, the public expects that autonomous vehicles (AVs) are at least as safe as conventional vehicles. Consequently, as required by various stakeholders such as the European Union Commission, the German Ethics Commission, and ISO TR 4804, the risk assessment framework of choice for AVs is Positive Risk Balance (PRB). PRB requires that a newly developed system is as good as a similar existing system. In the context of PRB, AVs are compared with human driver performance in exemplary situations, according to published crash statistics and analysis of human behaviour. While applying PRB, the risk associated with different types of harm (e.g., injuries of people in the car, injuries of vulnerable road users (VRUs), fatalities of people in the car, fatalities of VRUs), and the distribution of risk (e.g., the system shall not discriminate because of age, religion, or skin colour) are taken into consideration. Also, when comparing risks associated with different systems (i.e., human driver versus computer driver), the compared risks shall have been measured under comparable operating conditions.
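As a toy illustration of the statistical side of such a comparison, the sketch below contrasts a hypothetical AV crash rate (with an exact Poisson confidence interval) against a human-driver baseline; all figures are invented, and a real PRB argument must also control for operating conditions and risk distribution as described above.

```python
# Toy sketch of a PRB-style rate comparison: are AV crashes per million miles
# demonstrably below a human-driver baseline measured under comparable
# operating conditions? All figures below are invented for illustration.
from scipy.stats import chi2

def poisson_rate_ci(events, exposure, alpha=0.05):
    """Exact (Garwood) confidence interval for a Poisson event rate."""
    lower = chi2.ppf(alpha / 2, 2 * events) / 2 if events > 0 else 0.0
    upper = chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / 2
    return lower / exposure, upper / exposure

human_rate = 1.9                 # crashes per million miles (hypothetical)
av_events, av_miles = 12, 10.0   # 12 crashes over 10 million AV miles

lo, hi = poisson_rate_ci(av_events, av_miles)
print(f"AV rate 95% CI: [{lo:.2f}, {hi:.2f}] per million miles")
if hi < human_rate:
    print("Positive risk balance supported at this confidence level")
else:
    print("Insufficient evidence of positive risk balance")
```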
PRB is challenging to demonstrate, especially before deployment, because of a lack of historical data about the behaviour of AVs in the entire operational design domain. To compensate for this, Positive Trust Balance (PTB) was proposed. The safety argument behind PTB is that 1) the AV developer has a solid safety culture, and feedback related to system safety is gathered from different stakeholders and addressed; 2) the system has been developed following an engineering process, including an extensive V&V process, aligned with the relevant safety standards and best practices; 3) all operational controls and system maintenance are correctly in place after deployment and the behaviour of the system after deployment is monitored, and the monitored information is analyzed to improve the system performance continuously and to ensure that the assurance properties hold.
A solid safety culture may be established by using a Safety Management System (SMS). An SMS is an approach from the aerospace domain that supports organizational safety in a systematic and integrated way. It guides the establishment of the safety practices within a company, together with the safety roles, and sets up organizational safety objectives. An SMS also guides safety risk assessment and management, handles the monitoring, analysis, and measurement of overall safety performance, and specifies the activities to be executed within an organization to promote safety.
To demonstrate that the risk of the known hazardous scenarios is “as low as reasonably possible”, safety standards, such as ISO 26262 and ISO 21448, engineering processes, and best practices are followed. Also, conformance with applicable regulations shall be demonstrated. To document identified hazards, a Quantitative Fault Tree (QFT) may be used. To build up the QFT, first, a preliminary hazard analysis defines what loss events can occur and lists what hazards can lead to these loss events. Second, a functional hazard analysis allocates the occurrence of hazards to functions in the autonomy stack and builds the fault tree. Third, a system/subsystem hazard analysis process allocates functions to components in the architecture and can employ techniques such as failure modes and effects criticality analysis to further define what causal factors need to be tracked, including hardware failures and software defects.
Risk targets can then be allocated to each node in the QFT. For each identified hazard, a causal chain leading to the occurrence of harm can be evaluated based on processes described in standards like ISO 21448 and ISO 26262. Initially, this can be used to define a risk budget for the autonomy stack and operations. However, the complexity of AVs and their operational domain make it impossible to identify before deployment all unsafe scenarios. Consequently, before deployment the risk budget can only be estimated and then refined – with feedback from simulation, on-road testing, and operation.
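A quantitative fault tree of this kind can be evaluated mechanically once estimates are attached to its leaves. The sketch below shows the basic AND/OR roll-up against a risk budget; the events, probabilities and budget are invented for illustration and assume independent causal factors.

```python
# Minimal sketch of rolling up a quantitative fault tree (QFT): leaf nodes
# carry estimated probabilities of causal factors (hardware failures,
# software defects, triggering conditions); AND/OR gates combine them,
# assuming independence. All events and numbers here are invented.

def qft_probability(node):
    if "p" in node:                           # leaf: estimated probability
        return node["p"]
    child_ps = [qft_probability(c) for c in node["children"]]
    if node["gate"] == "AND":                 # all causes must occur
        p = 1.0
        for cp in child_ps:
            p *= cp
        return p
    # OR gate: at least one cause occurs, P = 1 - prod(1 - p_i)
    q = 1.0
    for cp in child_ps:
        q *= 1.0 - cp
    return 1.0 - q

loss_event = {
    "gate": "OR",
    "children": [
        {"gate": "AND", "children": [
            {"name": "perception misses pedestrian", "p": 1e-4},
            {"name": "emergency braking unavailable", "p": 1e-3},
        ]},
        {"name": "steering actuator failure", "p": 1e-7},
    ],
}

p_top = qft_probability(loss_event)
risk_budget = 1e-6               # allocated target for this loss event
print(f"estimated P(loss) = {p_top:.2e}, budget = {risk_budget:.0e}")
print("within budget" if p_top <= risk_budget else "budget exceeded: refine")
```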
Safety performance metrics measured in operation reinforce or update the estimated rates of causal factors in the QFT. Violations of the estimated risk, and statistical deviations between simulation results and real-world operation, can be used to identify new triggering conditions (i.e., previously unknown unsafe scenarios) for already identified hazards. The newly identified triggering conditions are then fed back into the SMS to be addressed.
To improve efficiency and productivity in the construction industry, governments around the world encourage the development of modular buildings. Meanwhile, worker safety is an important aspect that the construction industry has been concerned with, as there is still much room for improvement. Furthermore, accidents like London’s Grenfell Tower fire in 2017 demonstrate the imperative need to consider construction safety systemically. However, as pointed out in the academic literature and experienced in practice, there is an apparent disconnect between worker safety and building safety. For this reason, this paper sets the ground for a holistic approach to safety by bringing together worker and building safety under the ‘golden thread of information’ as introduced by the British government. The golden thread calls for more integrated safety information management, which could be achieved with the use of Building Information Modelling (BIM) throughout the entire construction and building lifecycle. BIM is regarded as a holistic process of creating and managing information for built assets. Its simulation and visualisation functions can improve safety in all types of construction; this paper concentrates on modular construction.
Airbus recently published an article in their safety magazine describing a serious incident where the thrust reverser remained deployed on one engine during a go-around. The aircraft veered to the left but became airborne just before reaching the edge of the runway. The flight crew were able to land the aircraft with one engine inoperative. Airbus is to be commended for publishing such a clear and honest account of the incident. This appears to be a variant of Leslie Lamport's Byzantine Generals Problem, which I described in my SSS'22 paper, but not one that I have seen described before. In this case, an implementation of the Byzantine Generals solution would have allowed the two engines to agree whether to stow and lock the thrust reversers. Byzantine failures are rare events, but they seem to keep happening. I'm reminded of the Terry Pratchett quote, “million-to-one chances crop up nine times out of ten”. I wonder whether Leslie Lamport's 1982 paper on the Byzantine Generals Problem is now so old that it has begun to fade from memory?
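For readers who have not met it, the sketch below illustrates Lamport's oral-messages algorithm OM(1) with four abstract nodes, since oral messages need at least 3m+1 participants to tolerate m faults; a real two-engine architecture would need signed messages or additional voting participants, and the node names and fault behaviour here are purely illustrative.

```python
# Minimal sketch of Lamport's OM(1) oral-messages algorithm with four nodes
# (one commander, three lieutenants), tolerating one Byzantine fault.
# In the thrust-reverser setting the "order" might be STOW vs DEPLOY; the
# node names and the faulty behaviour below are purely illustrative.
from collections import Counter

def om1(commander_value, faulty):
    lieutenants = ["L1", "L2", "L3"]

    def send(sender, value, receiver):
        # A Byzantine sender may send arbitrary, inconsistent values.
        if sender == faulty:
            return "DEPLOY" if receiver == "L1" else "STOW"
        return value

    # Round 1: commander sends its value to every lieutenant.
    received = {lt: send("C", commander_value, lt) for lt in lieutenants}

    # Round 2: each lieutenant relays what it received to the others.
    relayed = {lt: {peer: send(peer, received[peer], lt)
                    for peer in lieutenants if peer != lt}
               for lt in lieutenants}

    # Each lieutenant decides by majority over all values it has seen.
    decisions = {}
    for lt in lieutenants:
        votes = [received[lt]] + list(relayed[lt].values())
        decisions[lt] = Counter(votes).most_common(1)[0][0]
    return decisions

# With faulty lieutenant L3, the loyal lieutenants still agree with the
# loyal commander; with a faulty commander, the lieutenants agree with
# each other, which is the agreement property the incident lacked.
print(om1("STOW", faulty="L3"))   # L1 and L2 both decide 'STOW'
print(om1("STOW", faulty="C"))    # all lieutenants reach the same decision
```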
Unlike traditional safety standards that primarily focus on addressing failures and malfunctions, ISO 21448 for the Safety Of The Intended Functionality (SOTIF) emphasises assessing and mitigating risks associated with functional insufficiencies and user interactions. This poster provides an overview of the practical application of SOTIF to Off-Highway Autonomous Vehicles, including adaptations for this domain and practices used to meet the requirements of the standard.
Project Bluebird is an investigation into the feasibility of air traffic control (ATC) using AI agents. It has the potential to provide safe management of the increasing air traffic over UK and worldwide skies. In addition to ensuring safety, AI agents may be able to direct aircraft along more fuel-efficient routes, helping to reduce the environmental impact of air traffic. This talk will describe progress made towards these goals: the construction of a probabilistic digital twin of UK airspace; the development of agents capable of safe routing of aircraft in a sector of airspace; and methods needed to promote successful human-AI cooperation to ensure safe, explainable and trustworthy use of AI in safety-critical ATC systems. The talk will concentrate on an agent based on optimisation of safety and efficiency goals and will show some initial simulations of controlling real traffic.
The use of risk models can be difficult when trying to respond to rapid change. This is particularly challenging when such models consider rare events and/or try to assess the wide-ranging impacts of large-scale events, such as those arising from climate change. The railway is a dynamic, complex socio-technical system. Improving its resilience in the face of climate change requires a range of proportionate responses. These need to account for the evolving nature of the hazards themselves and various inter-dependencies in and out of the railway, while targeting both the system vulnerabilities and the exposure within a multi-risk environment. The Rail Safety and Standards Board Safety Risk Model (SRM) estimates the underlying safety risk from the operation and maintenance of the Great British mainline railway. A modularised approach, reusing established components from the SRM and interacting with other tools and techniques, has been applied to provide an enhanced multi-risk decision-support capability to the industry. The aim is to develop a flexible process to support the industry's strategic decision making despite uncertain future changes.
This paper introduces a novel approach to developing a compliance pattern for a unified EMC assurance case, ensuring the safety and effectiveness of medical devices. The Goal Structuring Notation is used as a graphical notation for the technical documentation to support the safety and effectiveness compliance. By adopting this approach, the limitations of conventional text-based methods are overcome by providing a clear and explicit visual summary of claims, arguments, and evidence. Through persuasive arguments, the compliance pattern of the unified EMC assurance case demonstrates the adherence to EMC standards after successful implementation of EMC risk management strategies, achieved through the utilisation of EM-resilience design practices. Moreover, the paper emphasises the inter-relationships among the three sub-cases (compliance case – risk case – confidence case), highlighting their collective significance. The proposed EMC compliance pattern employs the modular extension of the Goal Structuring Notation to link arguments from the risk and confidence cases to the compliance case. The culmination of these elements validates the assurance of compliance with basic safety, essential performance, and the intended use of medical devices in the presence of electromagnetic disturbances. By employing the graphical notation and assurance case methodology, this approach presents a comprehensive EMC process for tackling the unique challenges posed by electromagnetic disturbances. As such, this approach contributes to ensuring the reliability and functionality of medical devices in such environments.
This study presents a hazard analysis method that uses a systems approach to analyse risks from Electromagnetic Interference (EMI) in complex systems. It builds upon the System-Theoretic Process Analysis (STPA) technique and extends it to EMI hazards by analysing the system control structure and the electromagnetic environment. A real-world case study with an insulin infusion pump illustrates the method’s effectiveness in uncovering EMI-related hazards. The method includes a traceability aspect represented as a directed acyclic graph, providing insight into hazards, consequences, and factors causing losses. By using this method, we can prioritize EMI scenarios and gain a better understanding of their system impacts, improving our awareness of EMI risks and enhancing decision-making for increased system safety.
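To make the traceability idea concrete, here is a minimal sketch of such a directed acyclic graph with path enumeration from a causal factor to a loss; the node names are invented, loosely inspired by the infusion-pump example, and are not taken from the paper.

```python
# Minimal sketch of the traceability DAG: edges point from causal factors
# through hazards and consequences to losses. The node names below are
# invented, loosely inspired by the insulin-pump case study.

dag = {
    "EMI corrupts motor control signal": ["unintended bolus delivered"],
    "EMI disrupts sensor reading":       ["dose computed from bad data"],
    "dose computed from bad data":       ["unintended bolus delivered"],
    "unintended bolus delivered":        ["patient hypoglycaemia (loss)"],
}

def paths_to_losses(node, dag, prefix=()):
    """Enumerate all causal paths from `node` to terminal loss nodes."""
    prefix = prefix + (node,)
    successors = dag.get(node, [])
    if not successors:                    # terminal node: a loss
        yield prefix
    for nxt in successors:
        yield from paths_to_losses(nxt, dag, prefix)

for path in paths_to_losses("EMI disrupts sensor reading", dag):
    print(" -> ".join(path))
# EMI disrupts sensor reading -> dose computed from bad data
#   -> unintended bolus delivered -> patient hypoglycaemia (loss)
```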
The old adage that there are "lies, damn lies, and statistics" warns of the perils of relying on statistics, yet our safety and reliability arguments are often underpinned by statistics such as component failure frequencies, service history, accident rates and numerous distribution models. How do we really know we can trust what we draw from these figures? Statistics, and importantly the conclusions and implications drawn from them, are readily published in the media and proliferated largely unquestioned through social media and other outlets, but as we shall see, publications, even from reputable and learned authors, can turn out to be flawed. If even experts can make mistakes, then we need to look carefully at the reliance we place on third-party interpretations of data and, of course, be very careful with our own data and the wisdom we draw from it. Through the use of real-life examples, this paper highlights the typical pitfalls that arise from the interpretation of statistical data and concludes that it is often the data that we don't see, the 'dark data', that can fundamentally undermine the conclusions we draw from seemingly compelling data.
Traditional Hazard Analysis and Risk Assessment (HARA) methods for highly automated vehicles (HAVs) predominantly rely on worst-case assumptions about the operational environment. This conservative approach, while ensuring safety, largely depends on the human driver to be the primary element in the control loop. With the advent of increased automation levels, the driving responsibility progressively shifts from the human to the vehicle, rendering the traditional HARA process insufficient and overly reliant on human intervention. To address this gap, our research introduces a systematic approach, elaborated HARA (elHARA), which integrates non-worst-case scenarios into the HARA process. This approach aims to better reflect the dynamic nature of the operational environment and reduce the over-dependence on human drivers for safety assurance in HAVs.
The elHARA method systematically represents the operational situation by incorporating a variety of scenarios beyond the worst-case. This broadened perspective results in Automotive Safety Integrity Levels (ASIL) that more accurately depict the dynamic and complex nature of the automotive environment. By doing so, elHARA facilitates the identification of situation-specific safety goals and corresponding measures that are less reliant on human drivers, thereby enhancing system availability and performance. Implementing elHARA leads to a more suitable and realistic assessment of risks, reflected in appropriate ASIL values. This shift enables the development of fail-operational safety measures tailored to the varying degrees of vehicle automation. The approach effectively addresses the increased driving responsibility transferred to the vehicle, ensuring a higher level of safety without unnecessary constraints on system performance.
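For orientation, ASIL classification in ISO 26262 combines severity (S1-S3), exposure (E1-E4) and controllability (C1-C3). The standard's classification table is reproduced by a simple additive rule, sketched below as a convenient approximation (not a substitute for the standard); varying the S/E/C values per operational situation, rather than assuming the worst case, is the essence of what elHARA enables.

```python
# Sketch of ISO 26262 Part 3 ASIL determination from severity (S1-S3),
# exposure (E1-E4) and controllability (C1-C3). The additive rule below
# reproduces the standard's classification table; assessing S/E/C per
# scenario rather than worst-case is the core idea behind elHARA.

def asil(s, e, c):
    score = s + e + c            # S3 + E4 + C3 = 10 -> ASIL D
    return {10: "ASIL D", 9: "ASIL C", 8: "ASIL B", 7: "ASIL A"}.get(score, "QM")

# Worst-case assumption vs. a specific, less demanding operational situation:
print(asil(3, 4, 3))   # ASIL D (severe, highly exposed, uncontrollable)
print(asil(3, 2, 2))   # ASIL A (same severity, rarer and more controllable)
```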
Our research lays the groundwork for transitioning risk assessment from design time to runtime in HAVs. The current focus is on applying the insights gained from elHARA through ontology formalism, which aims to establish a comprehensive knowledge base for real-time risk reasoning. This progression signifies a pivotal step towards adaptive and responsive safety measures in the era of autonomous driving. The elHARA method marks a significant advancement in HARA processes for HAVs. By integrating dynamic, non-worst-case scenarios, it offers a more flexible and suitable framework for safety assurance in automated vehicles, paving the way for safer, more available, and more reliable autonomous driving.
Teaching novice pilots to fly is inherently dangerous as these students necessarily taxi, fly and fuel aircraft by themselves. This paper describes how a particular Canadian flight school integrated a Safety Assurance Case into its operations to justify the claim that the school’s processes were adequately safe. Incident reports were then used as “defeaters” of the Safety Case argument and the Safety Case was used to generate a significant part of the school’s Safety Management System (SMS). Treating incident reports as “defeaters” to challenge claims and evidence in the Safety Case was particularly useful: consideration of each defeater gave rise to corrective actions and, by this means, the Safety Case was continually updated as processes changed. This ensured that the flight school’s procedures could be kept up-to-date and relevant.
The production of a Safety Case is often seen as the “wrapping up” of the safety process – an activity that begins after earlier steps, such as hazard and risk analysis, have been completed. This misses the opportunity to benefit from the critical thinking that underlies a high-quality Safety Case. Especially when using Eliminative Argumentation, an incremental approach to the Safety Case can make the entire development process more efficient. In a range of industries including automotive, aerospace, energy, nuclear and rail, we have witnessed the benefits of starting Safety Case production early. We have used an incremental approach to the Safety Case to help shape the functional safety concept, derive safety requirements, influence system and software architectures, and focus validation and verification in a way that is commensurate with the system and is most likely to yield useful results.
In this presentation we give three ‘faces’ of ISO 26262. The unfriendly: the ISO 26262 series of standards offers comprehensive and conservative guidance for achieving functional safety in automotive E/E systems, which may lead to a cost/benefit imbalance. The friendly: the series offers relief methods and principles to rationalize the effort needed to achieve functional safety, which restore the cost/benefit balance. The friend-in-need: the series offers state-of-the-art approaches to achieving functional safety, which ensure moral and legal compliance. Without the offered relief methods and principles, the entire product development process has to follow the highest ASIL determined for the automotive E/E system. During the presentation, we will explore different relief methods and principles which allow an effective and efficient execution of the ISO 26262 series of standards, including: Part 2 Clause 6.4.5 (tailoring of the safety activities), Part 9 Clause 5 (requirements decomposition with respect to ASIL tailoring), Part 4 Requirement 6.4.2.5 (ASIL reduction due to latent faults), Part 9 Clause 6 (criteria for coexistence of elements), and the openness of ISO 26262 to state-of-the-art methods.
An inquiry into how safe might be “safe enough” for automated vehicle technology must go far beyond the superficial “safer than a human driver” metric to yield an answer that will be workable in practice. Issues include the complexities of creating a like-for-like human driver baseline for comparison, avoiding risk transfer despite net risk reduction, avoiding negligent computer driver behaviour, conforming to industry consensus safety standards as a basis to justify predictions of net safety improvement, avoiding regulatory problems with unreasonably dangerous specific features despite improved net safety, and avoiding problematic ethical and equity outcomes. In this paper we explore how addressing these topics holistically will create a more robust framework for establishing acceptable automated vehicle safety.
There are significant challenges with the design and related certification of eVTOL (Electric Vertical Take-off and Landing) aircraft. Much of this is driven by the novel aspects and diversity of technical solutions of eVTOL, for example the significantly increased levels of system complexity due to the use of distributed and integrated propulsion and advanced flight control systems. Regulation is perceived to be lagging behind the novel technologies and applications that are rapidly advancing to develop eVTOL. As such, it is clear that aircraft manufacturers and regulators need to work more closely, in effect going on ‘a journey’ together to ensure certification is achievable. In this paper, typical eVTOL aircraft operating scenarios and design architectures are presented as the certification background. These are then considered with regard to the risks related to eVTOL certification, together with their complex technical solutions and subsequent operation. Safety challenges and considerations are discussed for the future certification of eVTOL.
A major challenge in moving ML-based systems, such as ML-based computer vision, from R&D to production is the difficulty in understanding and ensuring their performance on the operational design domain. The standard ML approach consists of extensively testing models for various inputs. However, testing is inherently limited in coverage, and it is expensive in several domains. In this talk I will present novel verification technologies developed at Imperial College London as part of the recently concluded DARPA Assured Autonomy program and other UK-funded efforts. Novel verification methods provide guarantees that a neural model meets its specifications in dense neighbourhoods of selected inputs. For example, by using verification methods we can establish whether a model is robust with respect to infinite noise patterns, or infinite lighting perturbations applied to an input. Verification methods can also be tailored to specifications in the latent space and establish the robustness of models against semantic perturbations not definable in the input space (3D pose changes, background changes, etc). Additionally, verification methods can be paired with learning to obtain robust learning methods capable of generating models inherently more robust than those that may be derived with standard methods. In the presentation I will succinctly cover the key theoretical results leading to some of the existing ML verification technology, illustrate the resulting toolsets and capabilities, and describe some of the use cases developed with our colleagues at Boeing Research, including centreline distance estimation, object detection, and runway detection. I will argue that verification and robust learning can be used to obtain models that are inherently more robust, more performant and better understood than those obtained with present learning and testing approaches.
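To give a flavour of how such guarantees can be computed, the sketch below uses generic interval bound propagation, one family of neural network verification methods (this is not the specific toolset from the talk); it soundly bounds a tiny, invented network's outputs over every input in an epsilon-ball, i.e. over infinitely many noise patterns at once.

```python
# Generic sketch of interval bound propagation (IBP), one family of neural
# network verification methods: sound output bounds for ALL inputs within
# an epsilon-ball, covering infinitely many noise patterns at once.
# The tiny network and its random weights are invented for illustration.
import numpy as np

def affine_bounds(W, b, lo, hi):
    """Propagate the box [lo, hi] through y = W @ x + b."""
    centre, radius = (lo + hi) / 2.0, (hi - lo) / 2.0
    y_centre = W @ centre + b
    y_radius = np.abs(W) @ radius        # worst-case spread per output
    return y_centre - y_radius, y_centre + y_radius

def verify_robust(x, eps, layers, target_class):
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(layers):
        lo, hi = affine_bounds(W, b, lo, hi)
        if i < len(layers) - 1:          # ReLU on hidden layers
            lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)
    # Robust if the target's lower bound beats every other class's upper bound.
    others = [hi[j] for j in range(len(hi)) if j != target_class]
    return bool(lo[target_class] > max(others))

rng = np.random.default_rng(0)
layers = [(rng.normal(size=(4, 2)), rng.normal(size=4)),
          (rng.normal(size=(2, 4)), rng.normal(size=2))]
x = np.array([0.5, -0.2])
print(verify_robust(x, eps=0.01, layers=layers, target_class=0))
```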
With continuing increases in compute power, data availability and theoretical advances, Artificial Intelligence (AI) has progressed from being primarily an academic discipline to being a household term. Indeed, the use of AI is becoming ubiquitous in society, ranging from benign tools, such as email spam filters, to assisting with advanced tasks, such as financial analysis.
The size, complexity and integration of automotive software have increased beyond the point where bespoke software can be developed for each application, and there is a growing need to leverage existing software bases. This paper describes a Safety Software Engineering process called RAFIA (Risk Analysis, Fault Injection, Automation), which enables system developers and integrators to directly incorporate suitable open-source software components into safety applications. This process considers the properties of systems and subsystems as a whole rather than breaking them down into their smallest units. It uses System Theoretic Process Analysis (STPA) to identify critical interactions and specify safety requirements, and to derive tests and fault injections to verify these. The method is therefore suitable for designing and identifying issues in larger complex systems without needing to analyse each existing component in detail. This paper presents an overview of the process and a case study describing its application in the development of a combined safety-related rear-view camera system and non-safety infotainment system. The case study has been developed as a safety element out of context (SEooC) and we have worked with an independent safety assessor to ensure a result that can be certified to ISO 26262.
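The fault-injection side of such a process can be pictured as wrapping a component interface and injecting the fault modes identified by the analysis; the sketch below is a hedged illustration of that idea, with an invented camera component, fault modes and safety check rather than RAFIA's actual tooling.

```python
# Minimal sketch of interface-level fault injection: wrap a component's
# output channel, inject fault modes identified by hazard analysis
# (dropped, corrupted, stale frames), and check the safety requirement
# that the consumer detects each fault. All names here are illustrative.
import random

def camera_frames():
    for seq in range(100):
        yield {"seq": seq, "image": f"frame-{seq}", "valid": True}

def inject(frames, mode, rate=0.1, rng=random.Random(42)):
    last = None
    for frame in frames:
        if rng.random() < rate:
            if mode == "drop":
                continue                     # frame never arrives
            if mode == "corrupt":
                frame = {**frame, "image": None, "valid": False}
            if mode == "stale" and last is not None:
                frame = last                 # repeat an old frame
        last = frame
        yield frame

def consumer_detects_fault(frame, previous_seq):
    """Safety requirement: flag invalid or non-incrementing frames."""
    return (not frame["valid"]) or frame["seq"] != previous_seq + 1

for mode in ("drop", "corrupt", "stale"):
    prev, detected = -1, 0
    for frame in inject(camera_frames(), mode):
        if consumer_detects_fault(frame, prev):
            detected += 1
        prev = frame["seq"]
    print(mode, "faults detected:", detected)
```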
Driver alertness and attention are factors in nearly 50% of Signal Passed At Danger (SPAD) events, which can lead to railway accidents. Because of the shift-work nature of their job, train drivers need to overcome drowsiness and distraction during operation. More than half of UK train drivers use caffeine drinks, or even tablets, to deal with fatigue. To ensure drivers are fit to work, the current Driver Safety Device (DSD) has been used on UK railways for decades.
Nuclear decommissioning is a complex, hazardous, and time-consuming process that requires highly skilled and trained operators. To address the workforce bottleneck and the growing inventory of nuclear materials, a Robotic Glovebox with AI capability that can assist in the preparation and processing of nuclear material is being developed. This innovative solution can enable safer, more efficient, and continuous decommissioning operations. To support the adoption of this new technology it is necessary to develop a safety case for the system. In this paper we describe how we have used an autonomous system safety case process (the SACE approach) to generate confidence in our initial AI glovebox design. This safety case example is being provided as an input into the Office for Nuclear Regulation (ONR) regulatory innovation sandbox and should help to establish a new paradigm in safety cases for autonomous systems in nuclear environments.
Ferrocene is a downstream project of the main Rust Programming Language compiler (“rustc”) that is qualified for use in safety-critical systems in accordance with ISO 26262 ASIL D and IEC 61508 SIL 4. It is the first of its kind in that it is completely open source (https://github.com/ferrocene) and does not diverge from the upstream codebase. But what does that mean exactly? This poster walks through the documentation and tooling behind Ferrocene, and with that, the ways in which the project produces continuous daily releases that each meet the same high bar for quality as a production release.
There is increasing pressure to adopt Cloud-based IT, as it is so much cheaper and simpler for an organisation to contract for the IT services it needs rather than owning and maintaining its own IT infrastructure. The same pressures exist for safety functions, and some safety-related IT is now provided by Cloud services in areas such as healthcare, policing and government. In the near future, safety-critical functions such as those used in air traffic management (ATM) may be provided in this way. Other sectors are moving in the same direction. This is a fundamental shift as, using paradigms such as Software as a Service (SaaS), Infrastructure as a Service (IaaS) and Platform as a Service (PaaS), the IT implementation is to some extent hidden from view and is operated and maintained according to commercial imperatives. This means that traditional methods of assurance, which rely on detailed knowledge of the components, the engineering design, the organisation and the methodologies employed, no longer work: new methods are needed. The presentation explains work done by the SCSC Service Assurance Working Group to address these problems, and shows how the Service Assurance Guidance can be mapped to an air traffic management context. Examples are given of recent work with a major European ATM provider to develop a framework involving principles, objectives and workflow to assure critical IT functions provided by a Cloud.
This talk will introduce Assurance 2.0, discuss those features that are being taken up by industry and outline the automation support that we have been developing as part of the DARPA ARCOS programme on the automation of certification.
This poster introduces an approach in the field of Autonomous Driving Systems (ADS), focusing on enhancing safety in complex and rapidly changing driving conditions. Central to this research is the development of a dynamic risk rating system. Leveraging runtime data from on-board sensors (OBS), this system continuously evaluates and responds to risk levels based on key parameters, such as relative velocity, distance to the lead vehicle, and driving environment conditions. By incorporating Machine Learning techniques, it adapts its risk management algorithms to be more proactive and less reactive.
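A minimal sketch of the rule-based core such a rating could start from is shown below, deriving a risk level from time-to-collision computed from the named parameters; the thresholds and environment factors are invented placeholders for what the ML component would adapt at runtime.

```python
# Minimal sketch of a dynamic risk rating from on-board sensor parameters:
# time-to-collision (TTC) from distance and relative velocity, scaled by an
# environment factor. Thresholds and factors are invented placeholders for
# what the ML component would learn and adapt at runtime.

def risk_rating(distance_m, relative_velocity_mps, environment):
    env_factor = {"clear": 1.0, "rain": 0.7, "fog": 0.5}[environment]
    if relative_velocity_mps <= 0:       # lead vehicle pulling away
        return "LOW"
    ttc = (distance_m / relative_velocity_mps) * env_factor
    if ttc < 2.0:
        return "HIGH"
    if ttc < 4.0:
        return "MEDIUM"
    return "LOW"

print(risk_rating(40.0, 8.0, "clear"))  # TTC 5.0 s          -> LOW
print(risk_rating(40.0, 8.0, "fog"))    # scaled TTC 2.5 s   -> MEDIUM
print(risk_rating(12.0, 8.0, "rain"))   # scaled TTC 1.05 s  -> HIGH
```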
As the world begins to finally acknowledge the devastating impact of climate change and the need to move towards more sustainable alternatives, industries are increasingly moving towards lithium-ion battery technology to power the future. A recent report from McKinsey indicates that Li-ion battery demand is expected to grow by about 27 percent annually, increasing the demand on battery manufacturers and the consumption of finite resources, leading to further negative environmental impacts. Experts in the field have proposed that a solution to limit consumption of resources is to remove batteries from their primary applications and utilise them in a second-life application such as stationary storage. Whilst this proposal helps to reduce the potential environmental impact on the planet, it potentially introduces safety risks which must be effectively managed to ensure that the use of aged lithium-ion batteries does not result in catastrophic failure. This presentation explores the risks associated with aged lithium-ion batteries and proposes a model for managing the safety of lithium-ion batteries when installed in second-life applications.
This short paper outlines the steps for creating an initial FRAM model of a typical software solution for Post Office Horizon counter operations, assisted by ChatGPT 4.0.
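For context, a FRAM function is characterised by six aspects (Input, Output, Precondition, Resource, Time, Control); the sketch below shows one counter-operations function in that shape, with invented contents for illustration rather than material from the actual model.

```python
# Minimal sketch of a FRAM function with its six aspects (Input, Output,
# Precondition, Resource, Time, Control). The counter-operations content
# below is invented for illustration, not taken from the actual model.
from dataclasses import dataclass, field

@dataclass
class FramFunction:
    name: str
    inputs: list = field(default_factory=list)         # what the function acts on
    outputs: list = field(default_factory=list)        # what it produces
    preconditions: list = field(default_factory=list)  # must hold before it runs
    resources: list = field(default_factory=list)      # what it needs or consumes
    time: list = field(default_factory=list)           # temporal constraints
    controls: list = field(default_factory=list)       # what supervises it

record_sale = FramFunction(
    name="Record counter transaction",
    inputs=["customer payment", "product selection"],
    outputs=["transaction record", "updated branch balance"],
    preconditions=["terminal logged in"],
    resources=["Horizon terminal", "network connection"],
    time=["completed before end-of-day reconciliation"],
    controls=["Horizon business rules", "branch procedures"],
)
print(record_sale.name, "->", record_sale.outputs)
```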
The Post Office Horizon case is the largest miscarriage of justice in the UK. While voiding convictions and providing adequate compensation is an immediate priority, the technical complexity of the case and the on-going Post Office Horizon Inquiry have paralysed decision making, and distracted from the strategic urgency of addressing the poor IT culture that led to and fed the problems. Horizon is a symptom of deeply-entrenched cultural problems with IT, including poor programming causing errors, and undermining the reliability of computer evidence for use in court. Failure to understand IT led to the misleading common law presumption that computer evidence is reliable, which undermines disclosure requirements in courts and further reduces scrutiny of computer evidence. Legal reasoning on the reliability of computers in court is flawed. Throughout the Horizon scandal, the inability to distinguish naïve and dishonest IT optimism from rigorous scientific thinking and evidence ensured that incompetence knew no limits. In short, what started (put charitably) as incompetence transformed into a scandalous “delay and deny” cover-up.
IT problems have a wide impact in many areas far beyond the Post Office Horizon scandal. As AI gains wider use it will create worse problems, particularly for legal evidence. Raising, debating and taking steps to manage these generic and besetting IT problems are of fundamental importance in the digital age to achieve a safe and just society.
It has been forecast that a quarter of the world’s energy usage will be supplied from Offshore Wind (OSW) by 2050 (Smith 2023). Given that up to one third of the Levelised Cost of Energy (LCOE) arises from Operations and Maintenance (O&M), the motive for cost reduction is enormous. In typical OSW farms hundreds of alarms occur within a single day, making manual O&M planning without automated systems costly and difficult. Increased pressure to ensure safety and high reliability in progressively harsher environments motivates the exploration of Artificial Intelligence (AI) and Machine Learning (ML) systems as aids to the task. We recently introduced a specialised conversational agent trained to interpret alarm sequences from Supervisory Control and Data Acquisition (SCADA) systems and recommend comprehensible repair actions (Walker et al. 2023). Building on recent advancements in Large Language Models (LLMs), we expand on this earlier work, fine-tuning LLAMA (Touvron et al. 2023) using available maintenance records from EDF Energy. An issue presented by LLMs is the risk of responses containing unsafe actions or irrelevant hallucinated procedures. This paper proposes a novel framework for safety monitoring of OSW, combining previous work with additional safety layers. Generated responses of this agent are filtered to prevent raw responses endangering personnel and the environment. The algorithm represents such responses in embedding space to quantify dissimilarity to predefined unsafe concepts using the Empirical Cumulative Distribution Function (ECDF). A second layer identifies hallucination in responses by exploiting probability distributions to analyse them against stochastically generated sentences. Combining these layers, the approach fine-tunes individual safety thresholds based on categorised concepts, providing a unique safety filter. The proposed framework has the potential to improve O&M planning for OSW farms using state-of-the-art LLMs, as well as equipping them with safety monitoring that can increase technology acceptance within the industry.
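A hedged sketch of the first filtering layer described above is shown below; the embedding model, unsafe concepts, calibration data and threshold are all stand-ins, since the paper's actual embeddings and calibration are not reproduced here.

```python
# Minimal sketch of the first safety layer: embed a generated repair action,
# measure its similarity to predefined unsafe concepts, and flag responses
# whose similarity is extreme relative to an empirical cumulative
# distribution (ECDF) built from known-safe responses. The random vectors
# are stand-ins for a real sentence-embedding model.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def ecdf(calibration_scores, score):
    """Fraction of calibration (known-safe) scores at or below `score`."""
    calibration_scores = np.sort(calibration_scores)
    return np.searchsorted(calibration_scores, score, side="right") / len(calibration_scores)

rng = np.random.default_rng(1)
unsafe_concepts = rng.normal(size=(5, 16))     # e.g. "enter turbine alone"
safe_calibration = rng.normal(size=(200, 16))  # embeddings of safe actions

# Similarity of each known-safe response to its nearest unsafe concept:
cal_scores = np.array([max(cosine(v, u) for u in unsafe_concepts)
                       for v in safe_calibration])

def is_blocked(response_embedding, threshold=0.99):
    score = max(cosine(response_embedding, u) for u in unsafe_concepts)
    return ecdf(cal_scores, score) > threshold  # extreme vs. safe baseline

print(is_blocked(rng.normal(size=16)))          # typical response: False
print(is_blocked(unsafe_concepts[0] + 0.01 * rng.normal(size=16)))  # True
```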
Offshore Wind’s (OSW) contribution to the renewable energy sector is paramount as the demand for global net-zero heightens. Current estimates suggest that up to 1150 GW, around 25% of the world’s energy usage, will be supplied from OSW by 2050 (Smith 2023). Up to one third of the levelised cost of energy (LCOE) arises from Operations and Maintenance (O&M), driving competition to lower costs. There are previously reported cases of more than 500 alarms occurring within a single day at Teesside Wind Farm, making optimal O&M planning without automated systems unfeasible. Increased pressure on maintenance strategies to ensure safety and dependability in progressively harsher environments demonstrates the need to integrate dependable Artificial Intelligence (AI) and Machine Learning (ML) systems.
The Safety Culture Working Group has produced “A position paper for assessing and managing safety culture”. It provides high-level guidance on the assessment and improvement of safety culture for organisations that design, build, assure and operate complex safety-critical systems. It synthesises recognised good practice and can act as a benchmark and guidance. It (a) exemplifies how safety culture manifests itself, (b) provides guidance on good practice that organisations should aim to achieve and (c) draws out methods that are of particular importance to organisations involved in the development and operation of safety-critical systems. The position paper summarises lessons learned on how to improve safety culture, such as engaging with staff and clearly communicating behavioural expectations. These lessons draw on evidence of what has, and has not, worked. There are well-known examples of organisations that had been regarded as industry leaders subsequently suffering major accidents due to the erosion of their safety culture, changes in organisational imperatives and a loss of corporate memory of past incidents. The position paper offers some tactics for guarding against the erosion of safety culture. It concludes by reinforcing the position that safety culture influences all aspects of safety performance and all stages of the system lifecycle, including design, safety assessment and assurance, manufacture, operations and maintenance. The position paper will help organisations understand how they can effectively assess, improve, and maintain their safety culture.
The global aviation industry has decades-old (and very successful) methods for enforcing safety in conventional manned aerospace, evolved gradually around a set of mature technologies. However, regulators are now struggling to integrate the profoundly different implications of unmanned aviation. In this talk Steve discusses the fundamental differences in technology and operations between manned and unmanned aircraft and their implications for maintaining safety in both, and shares some of the thrills and spills of performing development at this new frontier.
This Technical Paper sets out the details of the System Safety approach used by the Crossrail Project and describes some of the solutions employed to resolve the key challenges faced during the lifecycle of the project. The paper is published on the Crossrail Learning Legacy website at: https://learninglegacy.crossrail.co.uk/documents/system-safety-for-complex-projects-the-crossrail-approach/