Traditional software development follows a hierarchical process: system-level requirements allocated to software are progressively refined through high-level requirements into source code, coverage of which is a key measure of test completeness. This approach establishes a direct link between system-level requirements and the software implementation; it also assumes that executable behaviour is governed by the structure of the source code. Although this traditional model is still widely applicable, there are cases where its suitability is less apparent. Consider an algorithm implemented via a neural network. Here the structure of the source code has much less effect on the software's behaviour than in the traditional case: the same generic neural network software could be trained to perform two very different functions. The link between the implementation and the high-level requirements is also correspondingly harder to trace. More generally, some of the approaches used to assure software may not be appropriate for new types of algorithm; nevertheless, such algorithms are becoming increasingly common. In response, this paper considers how current methods, notably the 'four plus one' software safety assurance principles, might be enhanced to support the assurance of non-traditional software.
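The point that identical source code can realise very different functions can be illustrated with a minimal sketch (a hypothetical example, not taken from the paper): a single perceptron trained by the classic perceptron learning rule. The training code below is byte-for-byte the same whether it learns logical AND or logical OR; only the training data differs, so structural coverage of the code says little about which function is implemented.

```python
def train_perceptron(data, epochs=20, lr=0.1):
    """Train a single perceptron on (inputs, target) pairs.

    The learned behaviour lives entirely in the weights w and bias b,
    not in the structure of this code.
    """
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in data:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out  # perceptron update rule
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err

    def predict(x1, x2):
        return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
    return predict

# Two different target functions, one generic training routine.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
OR  = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

f_and = train_perceptron(AND)  # identical source code...
f_or = train_perceptron(OR)    # ...two different behaviours
```

Exercising every line of `train_perceptron` gives full structural coverage in both cases, yet says nothing about whether the learned function meets its requirements, which is the gap the paper goes on to examine.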