THE SAFETY-CRITICAL SYSTEMS CLUB, Seminar:

Deployment, Operations and Maintenance of Safe AI Systems

Thursday 26 September 2024 - The Cumberland, Great Cumberland Place, Marble Arch, London, W1H 7DL

This one-day seminar looks at bringing AI systems into service: what needs to be done to prepare, including training and consultation; how to roll out AI systems, including adaptation; how to operate them safely in a given context; and how to maintain them.

Jane Fenn, University of York - A New Approach to Creating Clear Operational Safety Arguments

Richard Hillman, Horiba Mira - Safety assurance methods for automated vehicle deployments: learnings from the Harlander project

John McNicol, Nova Modus - Systems for Safety when Automated Vehicles Go Wrong!

Bob Oates and Carolina Sanchez, Cambridge Consultants - Organisational readiness for AI: Governance and Assurance

Kate Preston, University of York - A human-centred assurance framework for deploying autonomous systems based on previous research and demonstrator projects

Karin Rudolph, Founder Collective Intelligence and the Ethical Technology Network - Responsible Deployment of AI Systems: What businesses need to know

 

Who Should Attend

This seminar will be useful for all those involved in deploying, using, managing or maintaining AI systems, including engineering and service staff.

 

Talk Abstracts and Speaker Bios

Jane Fenn, University of York - A New Approach to Creating Clear Operational Safety Arguments

Abstract: Safety cases should provide a compelling argument and evidence to demonstrate that a system is sufficiently safe both in design and in operation. However, much of the guidance on development of the safety case is given from the perspective of a design safety case, to support deployment of a system. Operational safety is significantly less well handled in current safety case practice. In this presentation, to start addressing the challenges of operational safety cases, we propose to extend the idea of splitting complex safety cases into risk, confidence and compliance arguments to also consider operational safety arguments. We propose that the operational safety arguments should be separate from, but explicitly connected to, the design-time risk argument through the use of operational claim points (OCPs), ensuring clarity in both the design-time risk argument and the operational argument whilst still maintaining an explicitly defined relationship between them.

Bio: Jane Fenn is a part-time PhD student, studying within the Centre for Assuring Autonomy, on the link between design-time and operational safety cases for autonomous systems. She is a Fellow of the Safety and Reliability Society and of the Institution of Engineering and Technology.

Richard Hillman, Horiba Mira - Safety assurance methods for automated vehicle deployments: learnings from the Harlander project

Abstract: In order to achieve commercial deployment of automated driving systems, a robust safety case is needed to provide evidence to regulators and other stakeholders that passengers and other road users won’t be exposed to an undue risk of harm. This is challenging due to the lack of an established state of the art in the domain, and also due to the time and cost involved in acquiring sufficient test data. Within the Harlander project, which also includes partners Belfast Harbour Commission, Oxa, eVersum, Angoka and BT, HORIBA MIRA are using the case study of an automated shuttle service within the Belfast Harbour Estate to develop assurance methodologies that are appropriate for future commercial deployments of SAE Level 4 automated driving systems. This includes consideration of what evidence is required in a safety case, how it can be most efficiently collected, and how it can be determined when sufficient confidence has been obtained.

Bio: Richard Hillman is Chief Engineer for Connected and Autonomous Vehicles at HORIBA MIRA and manages a department delivering cutting-edge virtual and physical testing solutions for automated vehicles and Advanced Driver Assistance Systems (ADAS). He has provided leadership and technical expertise to a wide range of innovative projects, including both product development and advanced research, has contributed to the development of regulations and standards, and has authored patents and papers within the domain. Particular areas of technical specialism include systems engineering, safety engineering, safety regulations and test programme development. He is a chartered engineer with the Institution of Mechanical Engineers and a chartered manager with the Chartered Management Institute.

John McNicol, Nova Modus - Systems for Safety when Automated Vehicles Go Wrong!

Abstract: What an automated vehicle (AV) should do when the automated driving system (ADS) cannot safely continue a journey, or fails completely, has been discussed during the development of AVs over the last decade. However, only in recent years have there been efforts to describe or specify what such low-risk manoeuvres might be, or how, when, and where it might be safe to stop. It is a real challenge to cover the broad range of real-world complexities that widely deployed AVs will experience: notably the nature of the road, the speed and density of traffic, local weather, and the type of vehicle. The BSI will shortly release a Flex standard providing a framework for describing Minimal Risk Manoeuvres and Conditions (MRMs & MRCs) based on risk assessments. This talk outlines the development of Flex 1888, and highlights safety systems for remote human operation of AVs based on Flex 1886.

Bio: John McNicol is the founder and CEO of Nova Modus and has worked in connected and automated vehicles for a decade. Nova Modus provides independent expert support to clients on technology development, the operation of automated vehicles, the standardization and commercial viability of CAVs, and their impact on transport and mobility. John ‘drove’ one of the first UK projects to develop and evaluate ‘driverless cars’ (www.venturer-cars.com), followed by self-driving low-speed shuttles, an autonomous local delivery van, and a self-driving passenger bus service on public roads (www.mi-link.uk). John is one of the technical authors of the British Standards Institution’s new report on standardizing automated vehicle MRMs and MRCs, and the technical author of BSI’s Flex 1886 standard on remote driving. Also in the connected vehicle arena, he has supported six projects in automotive cybersecurity, including SecureTCU. John came to the automotive industry as CEO of a UK start-up building high-frequency modules for radar and communications and, prior to that, was VP of a Silicon Valley start-up building microwave modules for cellular communications. His deep background is in wireless semiconductors.

Bob Oates and Carolina Sanchez, Cambridge Consultants - Organisational readiness for AI: Governance and Assurance

Abstract: Artificial intelligence is experiencing a huge increase in popularity, with many organisations seeking to adopt AI-enabled systems to capitalise on promises of improved efficiency and enhanced decision-making. However, wider considerations need to be in place to ensure AI delivers these expected benefits: AI-enabled systems can amplify existing risks associated with process automation and data management, and present brand-new risks with far-reaching consequences for the adopting organisation itself and for society in general. These risks cut across multiple domains, including safety, security, technical performance and ethics. How can an organisation adopt AI-enabled systems in a responsible way that limits its exposure to risk? What changes need to be made to governance and process, and what new skills should it seek to build?

Bio: Carolina specialises in assurance frameworks and risk assessment processes for new technologies, focussing on AI applied within highly safety-critical environments. She leads AI engineering assurance projects within Cambridge Consultants, exploring ethics, regulations, standards, governance, and risk assessment for AI safety, security and performance. Carolina has been involved in the Future Flight (Innovate UK) projects through Cranfield University, assessing the trustworthiness of AI applications for UTM/ATM integration. She has extensive experience in transport, having previously worked for four years in the aviation domain (NATS, National Air Traffic Services) and for Ordnance Survey (the national geographic data service provider) on research and innovation and smart mobility. She is a member of the Spanish Standardisation Committee for AI and Big Data, which feeds into the European CEN-CENELEC AI standardisation working group. She holds a PhD on habitat classification with machine learning techniques using remote sensing, an MSc in Environmental Science, a Professional Certificate in Data Ethics from the ODI (Open Data Institute) and an IBM professional certificate in Data Science.

Bio: Dr Bob Oates is a cyber security expert with over a decade of industrial experience, focussing predominantly on cyber security for safety-critical operational technologies and critical national infrastructure. As a technical expert he has worked on a number of highly complex and challenging programmes, including security for the world’s first commercial remotely operated ship. As an employee of Rolls-Royce he twice won the Sir Henry Royce award for his security work. During his time at National Grid he focussed predominantly on the impact of OT/IT convergence on threat modelling, demonstrating how controls in one domain could prevent the propagation of threats into adjacent domains. Dr Oates has contributed to a number of standards and guidance documents, including ED202A (the aviation cyber security standard) and the Safety-Critical Systems Club’s Data Safety Guidance. He has maintained an active interest in publishing work in high-quality conferences and journals, and has been an invited guest at the Polish Academy of Sciences and the Dagstuhl Conference on Artificial Immune Systems. He has an honorary Professorship in Safety and Security from De Montfort University, a PhD in Robotics and Security from the University of Nottingham, and a Master’s degree in Applied Cybernetics and Computer Science from the University of Reading.

Kate Preston, University of York - A human-centred assurance framework for deploying autonomous systems based on previous research and demonstrator projects

Abstract: Currently, the safety assurance of AI-related and autonomous technologies often takes a narrow perspective, focusing on the technical aspects of development, such as algorithm accuracy or comparison with current practice. However, these new technologies will be integrated into the wider, often complex, sociotechnical context and interact with other system elements, such as stakeholders, other equipment and government regulations. Without considering the wider sociotechnical context, hazardous scenarios can arise. Therefore, to ensure effective safety assurance, a human-centred perspective can be taken that provides an understanding of the interactions the new AI-related or autonomous technology has with the system where it will be integrated. This presentation will provide a background to human-centred autonomy, its importance for the safety assurance of AI-related and autonomous technologies across sectors, and the initial findings from previous research and demonstrator projects. 

Bio: Kate Preston is a research associate within the Centre for Assuring Autonomy at the University of York, focusing on human-centred assurance of AI and autonomous technologies. She recently received her PhD from the University of Strathclyde, Glasgow, where she focused on applying the discipline of human factors to the development of AI technology in healthcare. Kate is a member of the Chartered Institute of Ergonomics and Human Factors, where she co-chairs the special interest group focusing on Digital Health and AI. 
 

Karin Rudolph, Founder Collective Intelligence and the Ethical Technology Network - Responsible Deployment of AI Systems: What businesses need to know

Abstract: The recent approval of the EU AI Act will require businesses to comply with a series of new requirements to ensure the safe development and deployment of AI systems. As the AI governance landscape becomes more complex due to new regulations, an abundance of AI governance frameworks, and emerging standards, it is increasingly challenging for businesses to understand where to focus their attention and resources. This presentation will provide an overview of the upcoming requirements that businesses must meet to ensure the responsible and ethical deployment of AI systems.

Bio: Karin Rudolph is the Founder of Collective Intelligence, a Bristol-based AI ethics and governance consultancy that provides resources and training to help tech startups and SMEs embed ethics and good governance practices. She is also the founder of The Ethical Technology Network, a pioneering initiative aimed at helping businesses identify, assess, and mitigate the potential ethical and societal risks associated with AI and other emerging technologies. Karin is a regular speaker at universities and conferences and an active member of the tech community in Bristol and the South West of England.

Hosts

Your SCSC hosts for the day are Alex King (bookings, management, AV, alex.king@scsc.uk), Brian Jepson (AV, recordings) and Mike Parsons (host, introductions, discussion, mike.parsons@scsc.uk). If you need anything, please ask us.
 

This seminar will be held in person at The Cumberland, Great Cumberland Place, Marble Arch, London, W1H 7DL.


© SCSC 2024