The SCSC publishes a range of documents:
The club publishes its newsletter Safety Systems three times a year in February, June and October. The newsletter is distributed to paid-up members and can be made available in electronic form for inclusion on corporate members' intranet sites.
The proceedings of the annual symposium, held each February since 1993, are published in book form. Since 2013, copies can be purchased from Amazon.
The club publishes the Safety-critical Systems eJournal (ISSN 2754-1118) containing high-quality, peer-reviewed articles on the subject of systems safety.
If you are interested in being an author or a reviewer please see the Call for Papers.
All publications are available for current SCSC members to download free of charge (please log in first); recent books are also available as 'print on demand' from Amazon at reasonable cost.
Contents
Dependability is crucial in safety-critical Cyber-Physical Systems (CPS). Despite the research carried out in recent years, the implementation of such systems remains costly, complex and time-consuming. Traditionally, three main techniques are used for system verification: theorem proving, model checking, and testing. Runtime verification (RV) is a more lightweight method aimed at verifying that a specific execution of a system satisfies or violates a given critical property. RV techniques complement traditional methods by checking at runtime that the system satisfies the assumptions and abstractions of a proven formal model. Runtime verification can improve dependability by detecting faults early and thereby preventing errors and failures. Monitoring the internal status of the software, in addition to its outputs, allows faults to be detected earlier.
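To make the idea concrete, the following is a minimal sketch of the kind of check runtime verification performs: a monitor observes the events of one execution and flags the first violation of a simple safety property. The property, event names and code are purely illustrative and are not taken from any particular system.

    #include <iostream>
    #include <string>
    #include <vector>

    // Minimal runtime-verification sketch: the monitor observes a stream of
    // events from one execution and flags a violation of the (hypothetical)
    // property "every 'ignite' must be preceded by 'fan_on'".
    class SafetyMonitor {
    public:
        // Feed one observed event; returns false once the property is violated.
        bool observe(const std::string& event) {
            if (event == "fan_on")  fanRunning_ = true;
            if (event == "fan_off") fanRunning_ = false;
            if (event == "ignite" && !fanRunning_) violated_ = true;
            return !violated_;
        }
        bool violated() const { return violated_; }

    private:
        bool fanRunning_ = false;
        bool violated_   = false;
    };

    int main() {
        SafetyMonitor monitor;
        const std::vector<std::string> trace = {"fan_on", "ignite", "fan_off", "ignite"};
        for (const auto& e : trace) {
            if (!monitor.observe(e)) {
                std::cout << "Property violated at event: " << e << '\n';
                break;
            }
        }
        return 0;
    }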
In this research work, we present a model-driven approach for designing statecharts with observation information. The approach is based on the CRESCO (C++ REflective StateCharts based Observable SW components developer) framework, which provides reflection capability to C++ statecharts. The solution separates the software's logic from its observation at runtime, which enables the internal status of the software to be monitored. It uses reflection to make the model available at runtime, so the software components can be monitored at runtime in terms of model elements. The framework helps the developer separate monitoring from functionality.
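As an illustration of the reflection idea (a rough sketch with hypothetical names, not the actual CRESCO API), a statechart could expose a runtime-readable view of its own model so that a separate monitor can query the internal status in terms of model elements:

    #include <iostream>
    #include <string>
    #include <vector>

    // Minimal reflection sketch (hypothetical names, not the CRESCO API): the
    // statechart exposes a runtime-readable view of its own model, so an
    // external monitor can observe the internal status in terms of model
    // elements without being entangled with the control logic.
    class ReflectiveStatechart {
    public:
        // Control logic, kept separate from observation.
        void fireEvent(const std::string& event) {
            if (event == "start" && current_ == "Idle")    current_ = "Heating";
            if (event == "stop"  && current_ == "Heating") current_ = "Idle";
        }

        // Reflective queries, used only by the monitor.
        const std::string&              currentState() const { return current_; }
        const std::vector<std::string>& states()       const { return states_; }

    private:
        std::vector<std::string> states_{"Idle", "Heating"};  // model made available at runtime
        std::string              current_{"Idle"};
    };

    int main() {
        ReflectiveStatechart chart;
        chart.fireEvent("start");
        std::cout << "Monitor observes state: " << chart.currentState() << '\n';  // Heating
        return 0;
    }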
M2T (model-to-text) transformations automatically generate C++ code with instrumentation to monitor the internal status of the software at runtime. An observation profile has been defined, and designers can set the observation level of each controller state at design time by adding observability stereotypes. As this observability capability affects the timing response of the system, assigning high observability levels to the critical states while leaving the non-critical states unobserved could be the right strategy.
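The kind of instrumentation such a transformation might emit can be sketched as follows; the enum, state names and hook are assumptions made for illustration and do not reproduce the CRESCO observation profile:

    #include <iostream>
    #include <string>

    // Illustrative observability levels a designer might attach to states via
    // stereotypes at design time (hypothetical names, not the CRESCO profile).
    enum class Observability { None, Basic, Full };

    struct State {
        std::string   name;
        Observability level;   // set from the observability stereotype in the model
    };

    // Generated instrumentation hook: only states marked as observable are
    // reported to the runtime monitor, so non-critical states add no overhead.
    void onStateEntered(const State& state) {
        if (state.level == Observability::None) return;          // not observed
        std::cout << "[monitor] entered state " << state.name;
        if (state.level == Observability::Full) {
            std::cout << " (full internal status reported)";
        }
        std::cout << '\n';
    }

    int main() {
        State idle    {"Idle",    Observability::None};   // non-critical
        State heating {"Heating", Observability::Full};   // critical, fully observed
        onStateEntered(idle);
        onStateEntered(heating);
        return 0;
    }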
The framework and the fault detection mechanism were implemented in C++ for two main reasons: this programming language, together with the statechart formalism, is a widely used choice in the safety-critical CPS domain, and a platform is needed on which real-time constraints can be guaranteed. The Industry 4.0 domain was selected as the demonstrator scenario, and a controller for a burner has been developed.
In an era where systems are soon to be adaptive (learning and decision-making) within much more open environments, multi-dimensional threats arise through unseen communications between systems and (active) components. Physical implementations mask the aggregation of interrelated and interdependent functions and services. They are complex, complicated, time-dependent, stateful and, at least in part, safety-related. In parallel, through their evolving implementation, they change their operational context.
One clear example is autonomous systems that require elements to join and leave the operational domain. The overall system becomes dynamic. Logical representations reveal ragged boundaries open to a wide range of vulnerabilities and (cyber) threats. As a result, a supporting ecosystem that has both internal and external elements is required.
The confidence we have in the safety characteristics of a system is falling, and uncertainty is rising. Current models of a system and system safety need to be adapted. Decision models need to be extended to address confidence and uncertainty to ensure and assure safe behaviours. Decision support is likely to encompass machine perception to address hazards arising from changes in the physical world and system context. Further, the next generation of CBTs will be able to parse, understand and make decisions on responsive actions to take based on the content of records and independently of humans. The influence of CBTs moves yet further up the baseline model hierarchy.
Realisation and management of confidence and uncertainty drive distinct strategies, based on the relative position of the system elements within the Reference Model. Macro-strategies encompass the overall system and its context, while individual elements (e.g. a CBT) and their neighbours use micro-strategies. Therefore, different levels of confidence and uncertainty will exist across a system comprising many instances of identical system elements. Each autonomous vehicle in a transport system will calculate uncertainty within its local area, yet uncertainty will have regional variation based on environmental conditions and road incidents.
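A purely illustrative sketch (not the poster's Reference Model or any proposed algorithm) of how micro-level uncertainty estimates reported by individual vehicles might be aggregated into a macro-level, per-region view:

    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    // Purely illustrative micro/macro sketch: each vehicle reports a local
    // uncertainty estimate (0 = fully confident, 1 = no confidence), and the
    // macro strategy aggregates them per region to expose regional variation.
    struct VehicleReport {
        std::string region;      // where the vehicle currently operates
        double      uncertainty; // local (micro-level) uncertainty estimate
    };

    std::map<std::string, double> regionalUncertainty(const std::vector<VehicleReport>& reports) {
        std::map<std::string, double> sum, count;
        for (const auto& r : reports) {
            sum[r.region]   += r.uncertainty;
            count[r.region] += 1.0;
        }
        std::map<std::string, double> mean;
        for (const auto& [region, total] : sum) {
            mean[region] = total / count[region];   // simple average as the macro view
        }
        return mean;
    }

    int main() {
        std::vector<VehicleReport> reports = {
            {"urban",    0.2}, {"urban",    0.3},   // clear conditions
            {"motorway", 0.6}, {"motorway", 0.8},   // fog and a road incident
        };
        for (const auto& [region, u] : regionalUncertainty(reports)) {
            std::cout << region << ": mean uncertainty " << u << '\n';
        }
        return 0;
    }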
This poster introduces a model of systems and the characteristics associated with them. It identifies where current practices are deficient and discusses issues associated with ensuring the safety of complex adaptive technologies based on a data-centric view of the world.
The use of Free/Libre Open Source Software (FLOSS) has significantly increased in industrial applications over the past decade. Much of this success can be attributed to flagship projects like GCC and GNU/Linux. This success was driven not only by the versatility and breadth of deployment but also, to a significant extent, by the security capabilities of the GNU project and specifically Linux. One aspect that the FLOSS community has neglected, though, is the safety certification of these software elements. The safety community, on the other hand, has considered it in a different context.
Traditional safety-related system software is built on the traceable reliability of the development process along with the competency of the developers. This changes with pre-existing complex software elements, as such software elements would never be compliant with the requirements set out in key functional safety standards such as IEC 61508.
The authors of IEC 61508 were aware of the changing requirements and technological advances. They responded with a dedicated compliance route providing guidance on the "assessment of non-compliant development": some process must have been in place if the result is to be a suitable candidate for use in a system. If this process can be assessed, and its gaps identified and mitigated, then the element may reach the same integrity as bespoke software development.
This poster outlines our interpretation and derived mapping of the process for the assessment of non-compliant development, along with an overview of that process. The goal is to allow the re-use of complex software elements like the GNU/Linux kernel and GNU glibc. While there are some weak points, the big picture of one possible way in which the assessment of pre-existing software for mid-integrity levels could be tackled has emerged over the course of a related OSADL project dubbed SIL2LinuxMP.
This work was conducted in the context of OSADL's SIL2LinuxMP project, which strives to develop a GNU/Linux qualification route suitable for up to SC2/SIL2.
Electronic and computer-based systems have an integral role in our daily life. It is essential to provide confidence that such systems, and particularly those in safety-critical industries, will operate acceptably safely. Safety cases have gained ground as the appropriate means to communicate the argument of safety throughout a system's lifecycle. However, the increasing complexity and integration of modern systems render manual approaches impractical.
We propose the concept of a model-connected safety case that could simplify the certification of complex systems. System design models support the synthesis of both the structure of the safety case and the appropriate evidence that supports it. The resultant safety case argues that all hazards are adequately addressed through meeting the system safety requirements. This overarching claim is demonstrated via satisfaction of the integrity requirements that are assigned to subsystems and components of the system through a sound process of model-based allocation that respects the system design and follows industry standards. The safety evidence that substantiates claims is also supported by auto-constructed evidence. System changes are automatically reassessed and reflected in the safety case. The approach is underpinned by a data model that connects safety argumentation and safety analysis artefacts, and is supported by a software tool.
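A rough sketch of what such a connecting data model might look like is given below; the types, fields and example artefacts are assumptions made for illustration and do not reproduce the authors' metamodel or tool:

    #include <iostream>
    #include <string>
    #include <vector>

    // Illustrative-only data model (not the authors' actual metamodel) linking
    // safety analysis artefacts (hazards, safety requirements) with safety
    // argumentation (claims and evidence), so that a system change can be
    // traced to the parts of the safety case that need re-assessment.
    struct Evidence          { std::string description; };
    struct SafetyRequirement { std::string id; std::string text; int integrityLevel; };
    struct Hazard            { std::string id; std::string description;
                               std::vector<SafetyRequirement> mitigations; };
    struct Claim             { std::string statement;
                               std::vector<Claim>    subClaims;
                               std::vector<Evidence> evidence; };

    int main() {
        // Hypothetical artefacts, for illustration only.
        SafetyRequirement req{"SR-1", "Detect and isolate a failed sensor within 50 ms", 2};
        Hazard hazard{"H-1", "Undetected sensor failure", {req}};

        // Top-level claim synthesised from the system model and its hazards.
        Claim top{"All identified hazards are adequately addressed", {}, {}};
        top.subClaims.push_back(
            {"Hazard " + hazard.id + " is mitigated through " + req.id,
             {},
             {{"Auto-constructed analysis and test results for " + req.id}}});

        std::cout << top.statement << " (" << top.subClaims.size() << " sub-claim)\n";
        return 0;
    }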