If you find yourself routinely talking to a little box to set reminders, get the weather forecast, find the best driving route or generally enquire about anything on the internet, then you, like me and many others, are already embracing Artificial Intelligence (AI). Its influence seems set only to increase and is, indeed, rapidly gathering pace.
Over Christmas, we were introduced to AI image generation applications, and my wife Heather and I have since been prolific users: she to post amusing content on Facebook and I, for example, to help generate the front cover of this newsletter. I was also interested to hear from my sister, a university lecturer, that AI is already embedded in her students' work, with assignments stating what level of AI-generated content is permitted. If our family experiences are anything to go by, then AI is already taking a firm grip.
Applications that use Large Language Models (LLMs) already seem extremely powerful. To illustrate, my 60-second interviewee is ChatGPT, and the results are, I hope you’ll see, pretty remarkable, not least its predilection for extreme sports. In fact, what perhaps gives ChatGPT away is how perfectly formed the answers are – lacking, in a sense, human imperfection.
With such clear utility, there seems a clear and present danger that the adoption of AI will outpace our ability to govern its safety assurance, not least because we do not yet know how to do that: our traditional assurance methods are difficult to apply to AI. It is comforting, however, to note that the impact of AI is generally being treated seriously on several levels. In their article on AI governance, George Mason and Greg Chance discuss how various governments around the world are establishing laws for AI’s adoption. At a grass-roots level, it was interesting to see the Royal Institution Christmas Lectures focus entirely on AI, aimed at the 11-17 age group – effectively the generation that will be the main inhabitants of the AI world we create and custodians of its inventions.
The British Computer Society’s Brian Runciman made an important statement in the society’s Dec 2023 edition of “IT Now” magazine:
“If we want our generation’s contribution to AI’s evolution to be written about well by tomorrow’s historians, we need to work just as hard at diversity as we do at AI engineering itself”.
I wholeheartedly agree with this; there are already many examples of AI learning to develop bias, such as excluding women applicants from historically male-dominated jobs, and safety issues are lurking too, with, for example, known gender and ethnic biases in medical devices.
I am therefore pleased to announce the formation of a new SCSC Working Group, “Safe AI”, which will, we hope, provide a systems approach to the challenges of AI. Further joining details will be provided on the SCSC website in due course.
Paul Hampton, SCSC Newsletter Editor