SCSC.uk
Seminar: Safe Autonomous Transport - the Good, the Bad and the Ugly
 

  Event description   Programme   Booking form    

THE SAFETY-CRITICAL SYSTEMS CLUB, Seminar:

Safe Autonomous Transport - the Good, the Bad and the Ugly

Thursday 28 November, 2024 - Eurostars Book Hotel, Munich, Germany.

This one-day seminar looks at progress in autonomous and highly automated transport and at problems that have been encountered. It will also cover the latest developments in the standards arena. It features industry speakers and a discussion session, and is aimed at practitioners working in safety engineering, including Safety Engineers, Safety Consultants and Safety Managers.

It will be held in central Munich at the Eurostars Book Hotel, Schwanthalerstraße 44, 80336 München, Germany.

The Eurostars Book Hotel is located in the centre of Munich, the Bavarian capital, next to the central railway station (Hauptbahnhof).

This seminar will be conducted in English. All times are Central European Time (CET), one hour ahead of GMT.

Please see the 'Booking Form' tab above for registration.

Speakers include:

  • Simon Burton, University of York, UK - Safety Under Uncertainty: Automotive Standards for AI Safety and Research Perspectives
  • Alex Haag, Futurail - How Safe is Safe Enough for Autonomous Trains? Analyzing Human Performance as a Reference System
  • Henrik Putzer, Cognitron - Developing & Assessing for Trustworthiness in AI
  • Levi Lúcio and Christoph Neuböck, Airbus - Lessons Learned from Enabling Model Based Systems Engineering for a Large Autonomous Aircraft Programme
  • Torben Stolte, Volkswagen - ADMT's Approach Toward Arguing Safety for Automated Driving - An Introduction
  • Mario Trapp, Safe Intelligence, Fraunhofer IKS, TUM - Assuring Safety in the Face of the Unpredictable

Speakers arranged by Carmen Carlan.

Your SCSC hosts for the day are Carmen Carlan (Programme), Paul Hampton (SCSC Newsletter), Brian Jepson (AV, recordings), Alex King (bookings, management, AV, alex.king@scsc.uk), and Mike Parsons (SCSC introduction, discussion, mike.parsons@scsc.uk). If you need anything please ask us.

Event Booking

The SCSC is a membership organisation; for this seminar we are providing a €20 membership that expires one month after the event. The membership provides access to the seminar as well as SCSC publications and resources. One-, two- and three-year membership packages are also available.
Please see the 'Booking Form' tab above.

If you would like to pay via purchase order, please contact Alex King, alex.king@scsc.uk.

Reduced rates are available for students. If you are in full-time education, please contact Alex King, alex.king@scsc.uk to make your booking.

Abstracts and Speaker Bios

Simon Burton, University of York, UK - Safety Under Uncertainty: Automotive Standards for AI Safety and Research Perspectives

Abstract: This presentation describes the challenges related to the use of AI in safety-critical automotive applications (for example, automated driving). An overview of the upcoming standard ISO PAS 8800 Road Vehicles - Safety and AI will be provided, as well as the wider context of safety standards in this area. The presentation will then motivate the structure and key concepts of ISO PAS 8800 and present how existing approaches to the safety of automotive E/E systems are extended for AI and ML-based functions. To conclude, open research challenges will be presented, in particular the need to address uncertainty in the assurance arguments for such systems and how this uncertainty can be evaluated and reduced.

Bio: Professor Simon Burton, PhD, holds the chair of Systems Safety at the University of York, UK. He graduated in computer science at the University of York in 1996, where he also achieved his PhD on the topic of the verification of safety-critical software in 2001. Professor Burton has worked in various safety-critical industries, including 20 years as a manager in automotive companies. During this time, Simon managed research and development projects, and led consulting, engineering services, product, and research organizations. More recently, he was Scientific Director for Safety Assurance at the Fraunhofer Institute for Cognitive Systems (Fraunhofer IKS) until December 2023. 
Professor Burton’s personal research interests include the safety assurance of complex, autonomous systems and the safety of machine learning. He has published numerous academic articles covering a wide variety of perspectives within these fields, such as the application of formal methods to software testing, the joint consideration of safety and security in system analysis and design, as well as regulatory considerations and addressing gaps in the moral and legal responsibility of artificial intelligence (AI)-based systems. He is also an active member of the program committees of international safety conferences and workshops. Professor Burton is convener of the ISO working group ISO TC22/SC32/WG14 “Road Vehicles—Safety and AI” and currently leads the development of the standard ISO/AWI PAS 8800 “Safety and AI” scheduled for release in 2024.
 

 

Alex Haag, Futurail - How Safe is Safe Enough for Autonomous Trains? Analyzing Human Performance as a Reference System

Abstract: In recent years, the development and adoption of autonomous systems has expanded rapidly beyond self-driving cars. While trains operate in a more structured environment, they have long braking distances and must meet high safety expectations. After introducing the specificities of autonomous trains, we will present our research on how the analysis of accidents involving human-driven trains can be used to set a safety goal.

Bio: Alexandre Haag is an accomplished executive with a robust technical background and extensive experience leading advanced technology initiatives across multiple industries. He is the Co-Founder and CEO of Futurail SAS, where he spearheads the development of autonomous train technologies utilizing cutting-edge robotics and AI algorithms. Previously, Alex spent ten years working on self-driving cars at companies including Tesla, Audi, and Argo AI. Before that, he held various positions in the robotics industry, including founding a successful startup. Alex holds a BS from Ecole Polytechnique, Paris, and an MS from MIT.

 

Henrik Putzer, Cognitron - Developing & Assessing for Trustworthiness in AI
Abstract: In the rapidly evolving landscape of artificial intelligence (AI), understanding its multifaceted nature is crucial. AI has proven beneficial and successful across numerous applications; however, a significant amount of mysticism still surrounds it. This mystique must be transformed into a framework of proven safety and trustworthiness. This presentation will explore the good, the bad, and the ugly aspects of developing and assessing trustworthiness in AI systems.

The Good: Traditional approaches, particularly risk-based methodologies rooted in systems engineering principles such as those from ISO 26262, remain highly relevant (including the reference safety lifecycle). By focusing on hazards, risks, and safety goals, we can ensure traceability and evaluate the potential contributions of AI elements to safety goal violations. A thorough system-of-systems engineering approach serves as the foundation for this effort.

The Bad: However, the complexity of modern AI systems introduces significant challenges. It is essential to understand the unique failure modes of AI (e.g., in connectionism) and how these failures can lead to violations of safety goals. This section will emphasize the need for a structured development approach for AI along with new methods and metrics, as conventional techniques like FMEA and FTA may no longer be applicable in the context of AI.

The Ugly: Finally, we will confront the reality that AI represents a new category of technology, classifiable as a third type alongside those addressed by IEC 61508 and ISO 26262. This necessitates a new failure model for AI that encompasses both systematic failures and a new category known as uncertainty-related failures. We must be aware of and address these uncertainty-related failures to demonstrate safety and trustworthiness through appropriate methods and metrics.
While we do not aim to provide definitive answers, we will present a consistent approach to assessing AI trustworthiness and reference ongoing standardization efforts, including emerging frameworks for AI audits and assessments. Join us as we navigate the complexities of AI, addressing its potential and pitfalls, and paving the way for safer and more trustworthy AI systems.
 
Bio: As a computer scientist, Dr. Putzer received his doctorate at the institute of Prof. Onken and Prof. Dickmanns at the University of the German Armed Forces in Munich, with research on human-centred, AI-based assistants. During his career as a consultant, he contributed to the success of several embedded-system projects in various industries. In different roles he was responsible for design, safety, security and process development as well as compliance to standards, always pushing the state of the art in E/E systems engineering. For several years he has worked on connecting the three main pillars of safe and trustworthy AI-based E/E systems: industrial engineering as a consultant, research on AI as the head of a research group at the fortiss research institute, and standardization. He was a core contributor to ISO 26262 and currently holds a chair in the VDE DKE working group for VDE-AR-E 2842-61 and within ISO/IEC JTC 1/SC 42. He is currently the CEO of cogitron, a consulting business on processes, embedded systems, safety & security and artificial intelligence, still combining research, standardization and E/E systems engineering in various industries.
 
Levi Lúcio and Christoph Neuböck, Airbus - Lessons Learned from Enabling Model Based Systems Engineering for a Large Autonomous Aircraft Programme
 
Abstract: Although digitalization features in the plans of companies across Germany and the world, it is no secret that many have struggled with its actual implementation - sometimes with damning results, some of which are currently unfolding. Simultaneously, new players who are not weighed down by legacy methods, processes, and tools have recently entered the market and are quickly transforming how the aerospace industry plans, engineers and brings new products to market. In this talk, we will discuss hard lessons from the last three years of digitizing the development of a large new military drone programme using Model-Based Systems Engineering techniques. Topics such as legacy processes, methods and tools; engineering and management best practices; scalability of data, software, and human collaboration; and the challenges of introducing a culture of early validation and verification of the airplane's design all weigh heavily on the success or failure of this project - which will determine how next-generation airplanes are developed at Airbus Defence and Space.
 
Bios:
Dr. Levi Lúcio obtained his doctoral degree in 2008 from the University of Geneva for his work on Model-Based Testing. A software engineer by education, he has held multiple positions in Europe and Canada, working for influential institutes, universities and companies such as CERN, McGill University and, more recently, Airbus. Until 2019 he was an academic, researching and publishing extensively on topics at the intersection of software engineering and formal methods. During this period Levi became a recognized and sought-after international expert in the area of model transformation, having invented original ideas, techniques and tools that manifested themselves in multiple PhD theses and yielded research lines that remain active to this day. He is also the author of two books on programming languages for the general public. In 2019 Levi joined Airbus Defence and Space as product owner for the military simulation software framework. In this role he led the software development of the product and qualified the tool for military usage. In his current role Levi invented and leads the development and engineering of the MBSE Toolkit, a set of software tools for Model Based Systems Engineering. The product has been adopted by a community of hundreds of engineers and currently plays a decisive and critical role in the success of multiple airplane development programmes at Airbus Defence and Space.
 
Christoph Neuböck holds an MSc in engineering management and is a certified systems engineering professional (INCOSE CSEP) with more than five years of hands-on experience with the technical, technical-management and organizational project-enabling processes, methods and tools used across the system development lifecycle. In his current position as MBSE Model Manager at Airbus Defence and Space (ADS), Christoph manages a system architecture model that is used in parallel by more than 300 people from various organizations and disciplines to specify an aircraft’s system architecture and design. He has successfully defined and introduced the ADS Measurement and Monitoring Framework in a large programme, is the author of the ADS Model Management Plan process instructions, and actively contributes to the INCOSE Measurement Working Group. He is fully dedicated to ensuring a low cost of non-quality by implementing best practices across the lifecycle stages while seeking flexible solutions to reduce the overall cost of quality.
 

Torben Stolte, Volkswagen - ADMT's Approach Toward Arguing Safety for Automated Driving - An Introduction

Abstract: A convincing argument that an automated vehicle drives safely on public roads is a key challenge for the introduction of SAE Level 4 systems. First, we motivate why ADMT develops a Safety Assurance Case, its benefits, and challenges. Based on these insights, ADMT’s approach toward arguing safety for an SAE level 4 system is presented. Moreover, we highlight selected practical challenges during the development of a safety argumentation. Finally, we look at open topics that require more research in relation to the safety argumentations for SAE level 4 automated driving systems.

Bio: Torben Stolte received the Diploma (FH) degree in automation technologies from Universität Lüneburg, Lüneburg, Germany, in 2008, and the M.Sc. degree in electrical engineering from Technische Universität Braunschweig, Braunschweig, Germany, in 2011. From 2011 to 2014, he worked with Porsche Engineering as a Functional Safety Engineer. From 2011 to 2020, he was a Research Associate with the Institute of Control Engineering, Technische Universität Braunschweig, working in the field of autonomous vehicle safety. Since 2020, he has been with Volkswagen AG. His research interests include safety assurance of automated vehicles, particularly safety argumentation for SAE Level 4 operation and the corresponding uncertainty representation.

 

 

Mario Trapp, Safe Intelligence, Executive Director of Fraunhofer IKS, Full Professor at TUM - Assuring Safety in the Face of the Unpredictable


Abstract: Venturing into the world of autonomous systems, this talk explores the intricacies and challenges of assuring safety in a realm where, as 19th-century philosopher William James put it, “the world is a blooming, buzzing confusion.” The spotlight is on learning-enabled systems, a domain facing urgent safety challenges in our rapidly advancing technological landscape. The presentation revisits the concept of resilience, opening up a vital discussion on the necessity of safety in the face of unpredictability. It lays out the current challenges with a keen eye on the complex balance between system utility and safety. Potential solutions are proposed, providing thought-provoking insights into how we can enable these systems to adapt themselves to diverse contexts without compromising their safety. This talk takes you on a journey into the heart of self-adapting, resilient systems, exploring their complexities, their potential, and their critical role in our future. It's a riveting exploration of a new generation of systems that continuously adapt to meet the unpredictable challenges of the world around them. Based on pertinent examples and current research, this talk not only delves into the dynamics of learning-enabled systems and their safety assurance but also underscores the challenges that remain to be addressed, thereby shedding light on these systems' promising future potential.

Bio: Prof. Mario Trapp is Executive Director of the Fraunhofer Institute for Cognitive Systems IKS. In 2005, he obtained his PhD from TU Kaiserslautern, where he also did his habilitation in 2016. He also joined Fraunhofer IESE in 2005, where he started off as a head of department in safety-critical software before becoming head of the Embedded Systems division from 2009 to 2017. After being appointed Acting Executive Director of Fraunhofer ESK (now Fraunhofer IKS) in Munich on January 1, 2018, he assumed this role on a permanent basis on May 1, 2019. In addition to this, Mario Trapp has been a Full Professor at the Technical University of Munich (TUM) since June 1, 2022. He is the Resident Professor for Engineering Resilient Cognitive Systems at the School of Computation, Information and Technology CIT. Prior to this, he taught as an Adjunct Professor in the Department of Computer Science at the TU Kaiserslautern. For many years, Mario Trapp has been contributing his expertise to the development of innovative embedded systems in the context of successful partner projects, in cooperation with both leading international corporations and small and medium-sized enterprises. Currently, his personal research focuses on safety assurance and resilience for cognitive systems, which form the technological basis of many future scenarios such as Industrie 4.0 and automated driving. Mario Trapp has authored numerous international scientific publications. He is also a member of the Bavarian State Government’s Council on AI (Bayerischer KI-Rat) and the Bavarian State Ministry of Economic Affairs, Regional Development and Energy’s AI — Data Science (KI — Data Science) expert panel.


© SCSC 2024