Seminar: Complying with the EU AI Act: What is needed for Products and Systems?
 
This one-day event, Complying with the EU AI Act: What is needed for Products and Systems?, will cover the EU AI Act itself and discuss its implications for organisations and engineers working with AI technology. It will place particular emphasis on AI systems incorporated into solutions with safety implications. Such systems are now found in many areas, e.g. automated vehicles, policing, aviation, healthcare, power transmission, manufacturing and process control.
The seminar will be useful for all those involved in the production of AI systems: systems engineers, safety engineers, product and programme managers, as well as those involved in the deployment and introduction of such systems.
It will be held at the DoubleTree by Hilton Hotel in central Brussels, Belgium on the 9th October 2025.
The seminar will be hosted by Prof. Davy Pissoort, Flanders Make@KU Leuven and Dr Mike Parsons, SCSC.
Speakers include:
Jan De Bruyne, Professor IT law at KU Leuven and Head of the Centre for IT & IP Law - The Regulation of Artificial Intelligence under the AI Act and Liability – Challenges and Ways Forward
Isabella Ferrari, Professor at Università degli Studi di Modena e Reggio Emilia - The Protection of Intellectual Property in the Training of Artificial Intelligence: Balance and possible Implications
Jelle Hoedemaekers, Agoria - The AI Act: A stress test for standardisation
Thor Myklebust and Dorthea Mathilde Kristin Vatn, SINTEF Digital - The AI Act and The Agile Safety Plan
Karin Rudolph, AI Ethics and Governance consultant and Founder Collective Intelligence - Who’s Accountable? Ethics and Liability in the New AI Regulatory Landscape
Mathias Verbeke, Faculty of Engineering Technology, Flanders Make@KU Leuven - From Standards to Practice: Bridging the Technological Gaps in EU AI Act Compliance Towards Safer AI Systems
This seminar will be held in the normal SCSC format, with registration from 09:30 and talks starting at 10:00 (local time).
The cost will be €315 (€335 including 1 month's SCSC membership), with a student/retired rate of €35.
Speakers and Abstracts
Jan De Bruyne, Professor IT law at KU Leuven and Head of the Centre for IT & IP Law - The Regulation of Artificial Intelligence under the AI Act and Liability – Challenges and Ways Forward
Abstract: Artificial intelligence is becoming increasingly important, and so are its regulation and governance. During this seminar, the basics of the AI Act adopted at the European Union level will be discussed. In particular, the risk-based approach and some core provisions will be analysed. Once the general framework for the regulation of AI has been set out, questions of extra-contractual liability for AI-related damage will also be discussed. The seminar will give an overview of some (legal) implications for organisations and engineers working with AI technology.
Bio: Jan De Bruyne is professor IT law at the KU Leuven and Head of the Centre for IT & IP Law. He teaches several courses on law and technology, and is the Principal Investigator (PI) of many projects dealing with the legal and ethical aspects of technology. He has numerous publications in academic journals and books, and is the editor of "Autonome motorvoertuigen: een multidisciplinair onderzoek naar de maatschappelijke impact" (Vanden Broele, 2020), "Artificiële intelligentie en Maatschappij" (Gompel&Svacina, 2021) and "Artificial intelligence and the law" (Intersentia, 2022). He is co-director of the Flemish Knowledge Centre for Data & Society and member of Leuven.AI, the Robotics and AI Legal Society (RAILS) and Ethical and Trustworthy Artificial and Machine Intelligence (ETAMI) as well as of different other academic institutions. He was also involved in the adoption of the UNESCO Recommendation on Ethical AI and has been acting as an expert for several national and supranational institutions. Jan De Bruyne is a regular speaker at/organiser of conferences and seminars. He was a Van Calker Fellow at the Swiss Institute of Comparative Law and has been a Visiting Fellow at the Institute of European and Comparative Law of Oxford University, the Centre for European Legal Studies of the University of Cambridge, the TC Beirne School of Law in Queensland and the Australian National University.
Isabella Ferrari, Professor at Università degli Studi di Modena e Reggio Emilia - The Protection of Intellectual Property in the Training of Artificial Intelligence: Balance and possible Implications
Abstract: TBA
Bio: Isabella is an experienced professor with a demonstrated history of working in higher education. She is skilled in the legal aspects of Industry 4.0, autonomous vehicles, robotics, international relations, litigation and legal advice. She has a strong professional background, including a visiting role at Harvard Law School and a visiting scholarship in law at The University of Tokyo.

Jelle Hoedemaekers, Agoria - The AI Act: A stress test for standardisation
Abstract: As the AI Act follows the principles of the New Legislative Framework (NLF), standards will play a key role in compliance with the AI Act. This, however, raises some challenges. During this presentation we will highlight these challenges, both from the side of the AI Act and from the field of AI standardisation, and will look ahead to what it will mean to comply with the AI Act in the future.
Bio: Jelle Hoedemaekers is a Belgian expert in the field of digital policy, with a specific focus on artificial intelligence (AI), data, and cloud computing. He currently works as an Expert Data Economy at Agoria, the industry federation for the Belgian technology industry, where he is responsible for ensuring that actions at EU, federal, and regional levels suit the needs of Agoria members. Jelle works on policy topics related to AI regulation, data strategy, and standardization.
Dorthea Mathilde Kristin Vatn and Thor Myklebust, SINTEF Digital - The AI Act and The Agile Safety Plan
Abstract: The EU AI Act (Regulation 2024/1689) introduces rigorous requirements that significantly impact the development and certification processes for high-risk AI systems and transparency requirements for assistant systems (AI-driven systems designed to support human operators or decision-makers in performing tasks). In the book, "The AI Act and The Agile Safety Plan" we explore how agile methodologies can effectively meet the stringent safety and AI standards mandated by the AI Act. We propose a structured approach that seamlessly integrates functional safety and artificial intelligence practices with agile methodologies throughout the entire lifecycle of AI systems, ensuring continuous compliance from initial development stages through to deployment. The presentation emphasizes the critical importance of human factors when deploying AI-based technologies in safety-critical domains. It further provides recent insights into Explainable AI (XAI) from a sociotechnical perspective. By closely aligning safety plans with safety cases, our approach delivers a practical and adaptable strategy to fulfil regulatory requirements. This methodology empowers developers and stakeholders to dynamically adapt to technological advancements and evolving compliance obligations.

Bio: Dorthea is a research scientist within Human Factors, Psychology, & Information Systems at SINTEF Digital. She holds an MSc in Work and Organizational Psychology and is currently combining her role as a research scientist at SINTEF with pursuing a PhD in Information Systems. She works on questions at the intersection of people and technology, exploring how novel technologies impact people and organizations from both a safety and a business perspective.

Bio: Thor is a senior researcher in Safety and Reliability at SINTEF Digital. Since 1987 he has been involved in research, assessment and certification of products and systems. He has worked for the National Metrology Service, Aker Maritime, Nemko, and SINTEF. Thor has participated in several international committees and is a member of the safety committee (NEK/IEC 65), the IEC 61508 maintenance committee on generic functional safety, the ISO/IEC TR 5469 ("Artificial intelligence — Functional safety and AI systems") stakeholder group, UL 4600 (autonomous products) and the railway committee (NEK/CENELEC/TC 9). He is co-author of four Springer books (The Agile Safety Case, SafeScrum, Proof of Compliance, and The AI Act and The Agile Safety Plan) and has published more than 300 papers and reports.
Karin Rudolph, AI Ethics and Governance consultant and Founder Collective Intelligence - Who’s Accountable? Ethics and Liability in the New AI Regulatory Landscape
Abstract: As AI systems become more embedded in critical decisions, engineers and developers face new forms of accountability. This session will examine the intersection of ethics, design, and liability in the context of the EU AI Act. Through real-world case studies, we’ll show how technical decisions and the absence of ethical oversight can lead to legal consequences.
We’ll also explore practical strategies to reduce risk, improve transparency, and build more responsible AI systems.
Bio: Karin Rudolph is the founder of Collective Intelligence, a Bristol-based consultancy specialising in AI ethics and governance. Collective Intelligence provides training and resources to help organisations implement ethical AI practices and robust governance. She also organises the AI Ethics, Risks, and Safety Conference, an annual event taking place in Bristol in May 2025. Karin is a regular speaker at universities and conferences and an active member of the tech community in the South West of England.
Mathias Verbeke, Faculty of Engineering Technology, KU Leuven - From Standards to Practice: Bridging the Technological Gaps in EU AI Act Compliance Towards Safer AI Systems
Abstract: The EU AI Act introduces a comprehensive regulatory framework for AI systems, particularly those with safety implications. While (harmonised) standards are intended to facilitate compliance, significant technological gaps remain between these standards and real-world implementation. This presentation examines the shortcomings of current AI Safety standards from a technological perspective, highlighting issues such as a lack of sector-specific guidance, and challenges in translating the proposed legal requirements into actionable engineering practices. The talk will outline a number of key open gaps and propose strategies to bridge the divide between regulation and practice, with a focus on high-risk AI systems in safety-critical domains.
Bio: Mathias Verbeke is an Associate Professor at the Faculty of Engineering Technology of KU Leuven. He is affiliated with the Declarative Languages and Artificial Intelligence (DTAI) research group. At the Bruges Campus, he is embedded within the M-Group, which gathers complementary expertise on intelligent, dependable and interconnected mechatronic systems. He is also affiliated with Flanders Make, the strategic research centre for the manufacturing industry, and Leuven.AI, the KU Leuven Institute for Artificial Intelligence. Furthermore, he is an expert member of the Belgian normalization committee on Artificial Intelligence. His team’s research focuses on the challenges related to the industrial application of Artificial Intelligence, with the goal of arriving at more robust, adaptive and efficient algorithms that can be operationalized in highly dynamic, safety-critical and resource-constrained industrial settings.
Event Information
| Event Date | 09/10/2025 9:30 am | 
| Event End Date | 09/10/2025 5:00 pm | 
| Individual Price | €315 (€335 including 1 month's SCSC membership), with a student/retired rate of €35 | 
| Location | DoubleTree by Hilton Brussels | 
