
THE SAFETY-CRITICAL SYSTEMS CLUB, Seminar:

Developing Safe AI Systems

Thursday 27 June 2024 - London, BCS office, EC2R 7BP

All slides and videos from the event are on the Programme tab above.

This seminar was about how to develop safe AI systems: it discussed the methods, tools and techniques that should be employed, the all-important approach to testing and V&V, and the quality and safety assurance processes that govern what has to be done. The speakers outlined possible ideas and approaches for these evolving areas.

This seminar was held in central London, at the BCS: Ground Floor, 25 Copthall Avenue, London, EC2R 7BP. 

Speakers included:

Gary Brown, Airbus - "Certification Use Reliance (CURe) for an Aircraft Level view of an AI ML Part 21 integration"

Richard Hawkins, University of York - "Through-life assurance of ML systems"

Alan Simpson, Ebeni - "Where are we with AI Safety?"

Nick Tudor, D-RisQ - "Robust AI Planning"

Iain Whiteside, ReSim - "Testing Times Ahead for AI"

Who should attend?

This seminar is an opportunity to hear about current best practice for developing safe AI systems, as well as emerging methods and techniques.

It will be useful for safety engineers, safety managers and AI engineers, and for all those involved in the development of projects that may use AI in the future. 

Talk Abstracts and Speaker Bios

Gary Brown, Airbus - "Certification Use Reliance (CURe) for an Aircraft Level view of an AI ML Part 21 integration"

Abstract: AI is becoming increasingly pervasive across many industry sectors and is here to stay, with more and more real business cases. An applicant in the aviation sector will face many obstacles to safely integrating an ML Use Function, particularly within large commercial aircraft, let alone smaller light aircraft and the military arena. Gaining regulatory approval for AI will therefore not be easy: it will be an undoubted challenge for such a novel and untested product under both a societal and a media-sensitive microscope. The aim of this presentation is to offer one way, though not the only way, to satisfy the holistic level of confidence needed, based on the ML Use Function's development process and on how it will be relied upon and monitored during its operational exposure. By considering the Means of Compliance against existing applicable CFR/CS-25 requirements, together with the development objectives anticipated from the forthcoming AI/ML standard ARP6983/ED-324 (SAE/EUROCAE), a combinational evidence-based repository can be established that is proportional to the claimed platform Reliance. This is done by applying a Certification Use Reliance (CURe) approach that leans on top-down evidence gathering: first and foremost qualitative in substantiation, and supplemented where necessary with safety-influenced quantitative performance metrics and error-rate measurements. Consideration is also given to the data-representativity needs of the Operational Design Domain (ODD), the bias-variance trade-off for output prediction stability at generalisation, and design mitigations against adversarial perturbations and against robustness and drift threats in the real operational environment.


Bio: Gary is a Chartered Engineer with a Master's degree (with Distinction) in Safety Critical Engineering from the University of York. He performed the role of Aircraft Safety Director for five years on Airbus's own Beluga XL development and was the Aircraft Safety Manager for four years on the military A400M. He is heavily involved in the new A321neo derivative, the eXtra Long Range (XLR), which adds a Rear Centre Tank (RCT). He covers all Airbus commercial aircraft as the safety approver for all systems at Airbus Filton (UK) and Getafe (Spain). He teaches at Cranfield University on AI, OEM and UERF PRA, and speaks at events on AI safety with CURe.

Richard Hawkins, University of York - "Through-life assurance of ML systems"

Abstract: There is a desire to utilise the capabilities of Machine Learning (ML) in autonomous systems. This motivates a need not only to demonstrate the safety of the ML prior to deployment, but also to assure its continued safety throughout the operational life of the system. Changes during operation, such as to the system's operating environment, can have a substantial and unanticipated impact on the performance and safety of the ML, and on the validity of the safety case. This talk considers how changes during the operation of a system affect the safety assurance of the ML, and discusses how we can ensure that the safety of the system is not compromised.

Bio: Richard Hawkins is a Senior Lecturer in the Department of Computer Science at the University of York. As part of the Lloyd's Register Foundation's Centre for Assuring Autonomy he is undertaking research into safety assurance and assurance cases for autonomous systems. He has been working with safety-related systems for 20 years, in both academia and industry. Richard has previously worked as a software safety engineer for BAE Systems and as a safety advisor in the nuclear industry.

Alan Simpson, Ebeni - "Where are we with AI Safety?"

Abstract: The integration of AI into safety-critical systems raises a pressing question: where do we currently stand with AI safety? Despite substantial research and significant guidance on assuring AI, much practical work remains to establish a consensus view on how AI can be assured for safety. Building on the significant work of the SCSC Safety of Autonomous Systems Working Group (SASWG), the SCSC is eager to examine the broader implications of AI throughout all phases of the system lifecycle, especially the growing use of LLMs. The SCSC Working Group on AI Safety brings together participants from the SASWG and from the wider safety community to continue the development of insightful guidance on AI safety issues. We are in the early stages of exploring where to go next; this talk highlights some key thoughts so far and opens the discussion on options for the future direction of the Working Group.

Bio: Alan is Owner and Director at Ebeni Ltd. With over 30 years in safety engineering, he is a prominent figure in aviation system safety. Alan's expertise lies in the advancement and safety of complex systems including avionics, Unmanned Air Vehicles (UAVs), air traffic management, automotive, metro signalling and train control/protection systems. Alan has been involved in the development of Single European Sky regulations and the assessment and certification of autonomous air vehicles, including research on the regulatory framework for unfettered operations of UAVs in non-segregated airspace and the assurance of data. He has led work on adapting and enhancing safety and certification practices. Alan is passionate about the impact of Artificial Intelligence (AI) on safety-critical systems; this extends to exploring how AI influences traditional safety engineering practices. His research in UAVs, aviation safety, and data integrity has fuelled his interest in integrating AI into safety-critical systems. Alan is a Chartered Engineer and has published papers on various aspects of safety engineering. Alan envisions a future where safety engineering seamlessly integrates with evolving AI technologies, ensuring robust and reliable critical systems.

Nick Tudor, D-RisQ - "Robust AI Planning"

Abstract: When we undertake a task as humans, we instinctively know how to carry it out (having had suitable experience and training). For a computer to do the same thing typically requires considerable computational power, especially memory, as well as time, because the approach requires sifting through various combinations of actions to check whether a task is feasible and then coming up with a plan. Often the techniques also try to optimise the plan, requiring more time and resources. The ability to plan, and to replan when something in the environment changes, in real time and without human intervention would be extremely useful. This talk will outline an approach to planning that could be used in real time in two different use cases: embedded real-time systems and cyber security.

Bio: Following a full career as an RAF Engineer Officer, Nick has been working in software and high-integrity systems for the past two decades. As co-founder of D-RisQ, he has worked in multiple sectors including aerospace, defence, automotive, rail, autonomous systems (air, land and sea), nuclear decommissioning and cyber security. Having been a key author of DO-333, the Formal Methods Supplement to DO-178C, the de facto software standard for aerospace, he was invited to be one of only three UK nationals supporting the panel that develops advice for regulators on the DO-178C suite of documents. As CEO of D-RisQ, he sets the strategy for the business, which is focused on the development of automatic formal-methods verification tools with a technical focus on certification. He also participates in a number of standards bodies, for example on Artificial Intelligence, and was made a Fellow of the Institute of Engineering in 2021.

Iain Whiteside, ReSim - "Testing Times Ahead for AI"

Abstract: In this talk, I will share my perspective on the challenges of safety assurance for AI, focusing on the varying levels of criticality across different application domains. Drawing on insights from the autonomous vehicles industry and extensive work with customers in the robotics and AI sectors, I will highlight the distinct reliability requirements and testing complexities each domain faces. We will explore how traditional Software 1.0 testing methods are inadequate for ensuring the safety and reliability of embodied AI systems. Instead, I will propose innovative testing frameworks that leverage detailed, data-driven evaluations to assess AI behaviour comprehensively. 

Bio: Iain is the CTO at ReSim, a venture-backed startup based in Silicon Valley that aims to build a platform for robustly testing AI-based safety-critical systems. Armed with a PhD in formal software verification, Iain made the unlikely leap to testing autonomous systems at NASA, where he worked on assurance cases and was the lead developer of NASA's AdvoCATE safety assurance toolset. Iain was the technical leader of a $10m DARPA project, Assured Autonomy. Iain then served as Director of Safety for Five AI, a UK-based self-driving company (acquired by Bosch). While at Five AI, Iain spearheaded research into verification and validation of machine learned components, and led the design and development of their cloud-based testing platform. Iain has served on several UK and international simulation and safety standardization efforts for autonomous systems.
