Seminar: How Safety Culture has to Change With AI
 
This 1-day seminar looked at how an organisation's safety culture has to change when systems including AI are introduced.
The emerging application of artificial intelligence in safety critical systems raises many questions for safety culture:
- Who is accountable for safety - who has agency for safety - the AI developers or the operators?
- How does AI impact ownership of safety if operational decisions are made by AI?
- How does AI impact transparency in operational safety decision making?
- Does AI erode safety citizenship - a sense of self-determination - and disempower people?
- How do you design AI so as to allow people to intervene in a timely manner and retain effective oversight?
- Does AI erode human skills to the point where people can no longer truly retain operational responsibility?
- If a person wrongly overrides the AI, how is this treated in a fair and just way?
- How do developers make decisions about the readiness of novel AI safety critical systems?
- What are the opportunities offered by AI in safety critical systems?
This seminar featured experts working at the cutting edge of AI and Human Factors speaking on these and other issues, and discussed emerging principles and good practices.
Speakers and talks:
Ben Fulford, BMT - Merging Human Expertise with AI for Better Safety Management
Adam Johns, Marsh Limited - Redefining Accountability: Just Culture in the Age of AI
Paul Leach, Rail Safety and Standards Board - Human factors principles for the design and operation of AI systems in rail
Kathy Syfret, RAF - AI and Aircraft Maintenance 
Paul Traub, Paul Traub Associates - Automation Policy and Human Performance
Michael Wright, Wright Human Factors Ltd - Artificial Intelligence: Emergent safety culture issues
Please contact mike.parsons@scsc.uk for more information.
Schedule
| Time | Speaker | Session |
|---|---|---|
| From 09:00 | | Registration |
| 09:30 - 09:40 | Mike Parsons, SCSC | Welcome and Introduction |
| 09:40 - 10:25 | Michael Wright, Wright Human Factors | Artificial Intelligence: Emergent safety culture issues |
| 10:25 - 11:10 | Ben Fulford, BMT | Merging Human Expertise with AI for Better Safety Management (supporting material: "ChatGPT is BS", Michael Townsen Hicks, James Humphries, Joe Slater) |
| 11:10 - 11:40 | | Coffee |
| 11:40 - 12:25 | Paul Leach, RSSB | Human factors principles for the design and operation of AI systems in rail |
| 12:25 - 13:25 | | Lunch |
| 13:25 - 14:10 | Adam Johns, Marsh Limited | Redefining Accountability: Just Culture in the Age of AI |
| 14:10 - 14:55 | Kathy Syfret, RAF | AI and Aircraft Maintenance |
| 14:55 - 15:25 | | Tea |
| 15:25 - 16:10 | Paul Traub, Paul Traub Associates | Automation Policy and Human Performance |
| 16:10 - 16:45 | Mike Parsons, SCSC | Discussion Session |
Speakers and Abstracts
Ben Fulford, Safety Consultant, BMT - Merging Human Expertise with AI for Better Safety Management
Abstract: As generative AI rapidly reshapes the way we work, its role in safety management is both promising and challenging. This presentation explores how tools like large language models can support and enhance traditional safety engineering practices—from hazard analysis to assurance case generation—by merging AI capabilities with human expertise. We'll examine the strengths and limitations of current generative AI tools, highlight key risks and mitigation strategies, and look ahead to emerging technologies. This talk will provide practical insights into integrating AI safely and effectively into safety-critical workflows, while considering the cultural and procedural shifts this transformation demands.
Bio: Ben Fulford is a Functional Safety Consultant at BMT with over 15 years of experience delivering safety assurance across the defence, nuclear, automotive, aerospace, and maritime sectors. His recent work focuses on the safe and ethical integration of generative AI within safety-critical environments. Ben leads BMT's "Smarter Working with Technology" programme, where he has developed and deployed BMT's generative AI tool, BMT Copilot, across the company to transform engineering workflows. He has pioneered new approaches to prompt engineering, developing and rolling out a prompt engineering training programme, and is actively involved in several innovation projects applying frontier AI. Alongside his innovation work, Ben continues to provide independent safety assurance and contributes to AI and safety working groups.
Adam Johns, Marsh Limited - Redefining Accountability: Just Culture in the Age of AI
Abstract: As human work and traditional automation are replaced or augmented by AI in more safety-critical systems, its influence on decisions, actions, and outcomes raises urgent questions for organisations' safety cultures. This talk explores the cultural and organisational challenges of treating technology – AI – not just as a tool, but as an actor and teammate within sociotechnical systems. Drawing on experience from aviation, the talk will examine how increasing system complexity and opacity complicates accountability and the application of just culture – a fundamental component of a good safety culture. When things go wrong, how do we ensure fairness for human operators acting with or under the influence of AI? And what does psychological safety look like when your AI teammate might be listening, transcribing, or learning from you?
Bio: Adam Johns is a Safety & Operational Risk Consultant within the Aviation Operational Advisory team at Marsh (part of Marsh McLennan group). Adam supports airline and aerospace companies to enhance the maturity of their safety management systems, focusing on culture, investigation, learning, risk and change management and safety assurance. Before joining Marsh, Adam spent 12 years working in the aviation industry with Teledyne Controls, Virgin Atlantic Airways, the UK Civil Aviation Authority, and Cathay Pacific Airways in Hong Kong. More recently, Adam spent four years as a senior safety manager at KeolisAmey Docklands (Light Railway), gaining a wider and deeper perspective on transportation risk and safety.
Paul Leach, Head of Human Factors, Rail Safety and Standards Board - Human factors principles for the design and operation of AI systems in rail
Abstract: Paul's talk will focus on recent research the Rail Safety and Standards Board Human Factors Team have carried out on AI and Human Factors. The research identified and described a set of Human Factors principles to inform the design, implementation, and operation of rail AI systems. The presentation will walk through the research, the principles, and what this means for the design and implementation of AI, including influences on change and culture.
Bio: Paul is Head of Human Factors at the Rail Safety and Standards Board (RSSB) and a Chartered Occupational Psychologist. For 20 years he has been applying his Human Factors expertise across a range of safety critical industries, including rail, nuclear, oil and gas, energy, utilities, defence, emergency services and healthcare. He leads a team of 12 Human Factors professionals working in the areas of fatigue, station operations, non-technical skills, competence management, commercial training, selection and assessment, safety culture, front-line leadership and new technology, including AI.
Kathy Syfret, Deputy MilCAM for A400M RAF platform - AI and Aircraft Maintenance
Abstract: Having spent a career in a highly regulated industry, I consider the implications of introducing Artificial Intelligence tools into the aircraft maintenance environment—particularly within the context of military aviation. As system certification for AI becomes increasingly relevant, questions arise around how we delineate accountability and responsibility when working with non-human actors. In defence aviation, risk appetites shift in response to operational imperatives, creating unique pressures where timely and evidence-based decision making is essential to the safe and effective delivery of Air Power. I’ll explore how these pressures intersect with the emerging role of AI, and how its adoption could influence current safety culture—both positively and negatively. Drawing on my experience as an RAF Engineer Officer and Deputy Continuing Airworthiness Manager on the A400M at RAF Brize Norton, I’ll also reflect on the practical implications for those under my command, including how AI might reshape employment models, experience pathways, training requirements, and engineering behaviours. As we integrate these technologies, it's vital we do so with caution, clarity, and a deep respect for the human elements at the heart of safety-critical systems.
Bio: Kathy is a Royal Air Force Engineer Officer with operational experience on both legacy and entry into service platforms, having split her career between A400M and Chinook. She was the first Junior Engineer Officer on LXX Sqn at RAF Brize Norton after the introduction of the A400M. The transition to global operations, including contingency operations and humanitarian relief efforts such as Op RUMAN in the Caribbean, strategic air mobility, Search and Rescue and Maritime Patrol, sparked a keen interest in human factors and safety management. This led her to complete an MSc in Safety and Human Factors in Aviation at Cranfield University, focusing her research on fatigue management for aircraft technicians. She is a member of the Chartered Institute of Ergonomics and Human Factors and of the Royal Aeronautical Society Human Factors in Maintenance sub-group, as well as being a Chartered Engineer. She currently works at RAF Brize Norton, having returned to the A400M as Deputy Continuing Airworthiness Manager. Kathy is a keen hockey player, representing the RAF and UK Armed Forces and competing in the English National League, and enjoys her youth engagement commitments as both a hockey coach and STEM Ambassador.
Paul Traub, Paul Traub Associates - Automation Policy and Human Performance
Abstract: AI and automation are often seen as the panacea for poor human performance and safety. However, poorly implemented automation and AI do not necessarily reduce risk, and can shift it elsewhere. This is compounded by the fact that the choice between automation and human control is not binary: there are numerous levels of autonomy, and the correct level will be task and objective specific. This presentation outlines lessons learnt from defence systems and commercial aviation to derive an automation policy process and an automation design checklist.
Bio: Paul is Managing Director of Paul Traub Associates Ltd. He has over 35 years' experience in defence, nuclear, chemical and rail Human Factors. He is a Fellow of the IEHF, a Registered European Ergonomist and a Fellow of the Royal Aeronautical Society. He was Human Factors Lead for the Successor Combat System, including provision of a robust automation policy. As part of a rainbow team, he developed a trust in automation framework on behalf of DSTL. The work involved understanding the trust issues relating to Autonomous Systems (AS) and Artificial Intelligence (AI) for military capability.
Michael Wright, Wright Human Factors Ltd - Artificial Intelligence: Emergent safety culture issues
Abstract: This talk will introduce the subject of Artificial Intelligence (AI) and safety culture. AI is receiving a lot of attention, with promises it can perform human tasks such as problem solving and decision making, helping us to process complex data and make better decisions. This goes beyond the role of automation of set routines, especially if the AI is able to learn and adapt its behaviour. This raises many Human Factors and safety culture questions. "Good" safety culture is characterised as people taking accountability for their actions and decisions, and having ownership of their systems of work. How might the sense of accountability and ownership be impacted if you are nominally responsible for a system that makes its own decisions or presents you with pre-processed information? What if the AI makes a mistake? What if you disagree with the AI recommended actions? Developers are meant to assure the safety of critical systems. If AI learns and adapts, how can you assure a system that may change its processing?
Event Information
| Item | Details |
|---|---|
| Event Date | 19/06/2025 9:30 am |
| Event End Date | 19/06/2025 5:00 pm |
| Individual Price | £408 including 1 year SCSC membership, £259 for existing members |
| Location | Hilton London Euston |