Please log in using either your email address or your membership number.
Please register with your name, email address, password and email preferences. You will be sent an email to verify the address.
Please enter the email address used for your account. A temporary password will be emailed to you.
The SCSC publishes a range of documents:
The club publishes its newsletter Safety Systems three times a year in February, June and October. The newsletter is distributed to paid-up members and can be made available in electronic form for inclusion on corporate members' intranet sites.
The proceedings of the annual symposium, held each February since 1993, are published in book form. Since 2013, copies have been available for purchase from Amazon.
The club publishes the Safety-critical Systems eJournal (ISSN 2754-1118) containing high-quality, peer-reviewed articles on the subject of systems safety.
If you are interested in being an author or a reviewer please see the Call for Papers.
All publications are available to download free of charge by current SCSC members (please log in first); recent books are available as 'print on demand' from Amazon at reasonable cost.
Contents
In the United Kingdom, the Air Accidents Investigation Branch (AAIB) can trace its origins back to 1915. Its purpose is the investigation of accidents and serious incidents; not to allocate blame or liability, but to prevent future occurrences. A similar approach has been adopted in other sectors with the equivalent Branches for Marine and Rail starting in the UK in 1989 and 2005 respectively.
By conducting deep forensic examination of occurrences and through making recommendations to regulators, manufacturers, operators and service providers, the transport investigation branches and their international equivalents have, arguably, made a significant contribution to safety. For example, 2017 represented a new milestone in commercial air transport safety: according to the International Air Transport Association (IATA, 2018), there were no fatalities on jet transport aircraft and a total of just 45 accidents worldwide (down from an average of 75 per year in each of the preceding five years).
Today’s increasingly complex systems continue to present new engineering challenges, and the Defence industry is no exception. Complex Defence systems show a growing demand for increased modularity and agility, supporting a greater level of interoperability and technology insertion to facilitate ease of upgrade and life extension. In addition, there is constant pressure to deliver continuous improvement in performance at reduced cost, even as demands on these systems increase. Unfortunately, these evolving requirements present a number of unique challenges for traditional engineering methods.
Improved performance requires improvement in system characteristics such as safety, supportability and security. These characteristics tend to be assessed and managed as part of separate engineering domains, using different tools and techniques. However, there can be significant overlap in the fundamental issues that are material to all of them.
This has led to an increasing trend of attempting to assess and optimise two or more system characteristics together, to provide a better measure of a system's overall ‘Dependability’.
We look at a number of the challenges presented by the topic of system Dependability. In particular, we consider the questions: what does the Defence community really want from a Dependability case, and what does good look like?
Furthermore, we consider whether it is possible to develop a viable engineering approach that can produce a meaningful output: one that complements existing engineering domains while adding value to the assurance process and producing a meaningful ‘measure’ of a system's Dependability.
Engineering systems continue to increase in complexity, and Defence industry systems are no exception. Additionally, there is constant pressure to deliver continuous improvement at reduced cost, even as demands on these systems increase.
In response, Frazer-Nash are pursuing a development in complex system safety management, based on principles rooted in Safety II and Resilience Engineering. Its core is the intent to consider Human Adaptability (i.e. a key reason why things go right rather than go wrong) in engineering system management.
This relies on generic developments in modelling engineering systems. Currently, the Human input is modelled through the implementation of Work Processes: it is assumed that operators follow the Work Processes and thereby deliver the required system performance, and that failure to do so is a Human Error. This simple model is challenged on two bases:
Electromagnetic interference (EMI), familiar to most people as the buzzing sound from a loudspeaker when a mobile-phone call is received, is becoming much more than just an annoyance. Following the digital revolution (IoT, Industry 4.0), our cars, homes, workplaces and hospitals are being crammed full of high-tech electronic equipment which is, unfortunately, increasingly vulnerable to EMI. As technology advances and we are able to produce smart devices such as autonomous vehicles, clinical robots and collaborative industrial robots - which in theory should all be inherently more reliable and safer than those operated by humans - the problem of risk does not simply go away. Instead, we need to shift our focus away from the relatively visible risks associated with conventional devices towards understanding how these enormously complex systems-of-systems are also susceptible to risk: a different kind of risk, associated with invisible EMI. Moreover, as 5G networks are rolled out, this risk will become more prominent and, hence, dedicated research is urgently needed.

Within this paper, a detailed study is given of how to increase the robustness against EMI of error correcting/detecting codes and hardware diversity schemes. In the end, EMI will interfere with the bits transmitted over communication channels and will cause (multiple) bit errors. In addition, EMI is a complex phenomenon which has to be seen as a systematic, common-cause failure: "systematic" because a given system design will always behave in the same way when a given EMI is applied, and "common cause" because EMI influences many different components at the same time. Because EMI is a systematic common-cause failure, the malfunctions it creates in identical channels can easily be so similar that the comparator/voter cannot tell that there was a problem at all.
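The weakness of error-detecting codes against multiple bit errors can be illustrated with a minimal sketch (the example is mine, not from the paper): a single even-parity bit detects any odd number of flipped bits, but an EMI burst that flips an even number of bits in the same word passes the check undetected.

```python
def add_parity(bits):
    """Append an even-parity bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def parity_ok(word):
    """Return True if the word (data + parity bit) has even parity."""
    return sum(word) % 2 == 0

data = [1, 0, 1, 1, 0, 0, 1, 0]
word = add_parity(data)

# Single independent bit error: parity becomes odd, so it is detected.
single = word.copy()
single[2] ^= 1
print(parity_ok(single))   # False -> error detected

# EMI burst flipping two adjacent bits: parity is preserved, error missed.
burst = word.copy()
burst[2] ^= 1
burst[3] ^= 1
print(parity_ok(burst))    # True -> error goes undetected
```

Stronger codes (e.g. Hamming or CRC) raise the bar, but the same principle applies: any code has burst-error patterns it cannot detect, which is why the paper pairs coding with hardware diversity.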
An important challenge is how to apply, in a cost-effective way, EMI-diverse parallel channels within a redundant system.
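Why identical redundant channels are not enough can be sketched as follows (an illustrative example under my own assumptions, not the paper's design): triple modular redundancy (TMR) with a majority voter masks an independent fault in one channel, but a common-cause disturbance such as EMI corrupts all identical channels the same way, so the voter sees unanimous - and wrong - agreement.

```python
def majority_vote(outputs):
    """Return the value produced by at least two of the three channels."""
    a, b, c = outputs
    return a if a == b or a == c else b

def channel(x, fault=0):
    """An identical channel computing x + 1; 'fault' models an EMI bit flip."""
    return (x + 1) ^ fault

x = 5
correct = x + 1

# Independent fault in one channel: the voter masks it.
outs = [channel(x, fault=4), channel(x), channel(x)]
print(majority_vote(outs) == correct)   # True -> fault outvoted

# Common-cause EMI: the same fault hits every identical channel.
outs = [channel(x, fault=4) for _ in range(3)]
print(majority_vote(outs) == correct)   # False -> wrong output, undetected
```

EMI-diverse channels aim to break this symmetry: if each channel responds differently to the same disturbance, the faults no longer agree and the voter can catch them.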
According to the company’s chief executive officer, Amnon Shashua, the reason behind the mistake was electromagnetic interference. As Bloomberg reported, “wireless transmitters on cameras used by the television crew created electromagnetic interference, which disrupted signals from a transponder on the traffic light. Consequently, even though the car’s camera realized that the light was red, the car itself ignored this information and continued to drive as per signals sent from the transponder.”
In the coming autonomous vehicle (AV) era, there are many issues we must address. Among them are new categories of factors that threaten safety: security violations, insufficient performance of environment recognition (a.k.a. SOTIF: Safety Of The Intended Functionality), methods for verifying machine learning results, and so on.
In our experience, another approach is needed to analyse and evaluate these problems. If we start only from the failure of a part of a system, we can use conventional processes and rules, such as functional safety standards (e.g. ISO 26262). For embedded system software, we normally consider only "systematic failures" during design and verification. However, as SOTIF indicates, we must also consider the performance limits of the equipment, and in such systems we cannot always assume that software failures are purely systematic.