As one of the key researchers to establish the interdisciplinary field of Web Science, Professor Sir Nigel Shadbolt has contributed to research on a wide range of topics – from the cataloguing of internet data to how AI systems understand text inputs. Most recently, he has led the development of the Institute for Ethics in AI at the University of Oxford.
The Academy’s Digital and Physical Infrastructures team spoke to Sir Nigel about the future of AI, and what we should (and should not) be excited about.
What are your hopes for the development and deployment of generative AI?
To begin with we should recognise that AI has been around for decades—going back to research from luminaries such as Alan Turing in the 1940s and ’50s. Generative AI is really just the latest variant in a long evolutionary line of methods, techniques, and accomplishments.
That said, I’m excited that we have so much more capacity: more people working in AI, more data and more computing power to go after really challenging problems. For example, we already have AI systems which are extraordinarily effective at detecting patterns in images – the whole area of medical imaging is currently being revolutionised by systems which can be trained to look out for patterns that are of interest to researchers and practitioners.
But it’s not just in health where you can see the advantages of AI applications. You can see advances in retail and the way we plan for the stocking of our shops, as well as the way we detect activity patterns in transport and logistics. AI is powering advances in everything from design to manufacturing, and from education through to leisure. AI is ubiquitous and, whether we know it or not, we use it every day – our smartphones are stuffed full of the products of AI.
What generative AI has done is turbocharge interest, investment, and invention in AI. The ability of these systems to generate, summarise, reformat, and re-represent huge swathes of content is remarkable. From generating advertising copy to summarising legal documents, from writing code to setting exams, generative AI is demonstrating new ways in which humans and machines can co-create content.
Do you have any fears about the progress and implementation of generative AI?
My fears relate to misappropriation of AI by bad actors. AI does not lend itself to misappropriation or misuse any more than any other technology, because it is people, at the end of the day, who use these technologies. As you move from sector to sector, the challenge is really going to be understanding and balancing the benefits and harms in each area.
Going forward, we won’t just need to focus on putting limits on generative AI systems that use particular machine learning methods to generate and summarise data; we’ll also need to focus on the governance of AI as it is used in particular applications.
At this juncture, what questions do we need to be asking ourselves to enable the safe and impactful use of AI?
One thing that concerns me around the impactful use of AI is the quality of the data that has been ingested, because data is a core component of AI.
For example, with ablation studies, which involve removing particular types of information sources and seeing how the system behaves, you realise that very high-quality data assets are incredibly important. There's also a problem being observed now: having ingested so much of the information out there, it is unclear what data AI systems should be trained on next. The worry is that they might be trained on synthetic data that’s been generated by another AI, which raises the question: ‘at what point does the information become so derivative that it stops being fit for purpose?’
One other thing that we think about a lot at the Open Data Institute is, ‘what does a healthy data ecosystem look like?’, ‘how do you build a data architecture that’s fit for purpose?’, and ‘how do we curate it and manage it to a high standard?’
What are the respective roles of researchers, those deploying the technologies, regulators, and policy makers in enabling safe use of generative AI?
When we see powerful new technologies emerge, sometimes the best thing to do is to have multi-stakeholder conversations, particularly around the ethics of these systems. This is especially important in the UK, as the government’s principles-based approach demands that each sector develop their own approach to the governance and use of AI.
Now, there is no single model that will fit everything, but the Warnock Committee, which led to the creation of the Human Fertilisation and Embryology Authority, provides a valuable example of how it’s possible to foresee outcomes through dialogue. It was those conversations with faith-based groups, civil society, medical practitioners, the public and regulators that began the process of shaping what eventually became legislation in a very ethically sensitive area.
What is not being talked about in the ongoing media discussion around generative AI?
One thing that’s always left to one side is how will this impact different segments of our society? From the elderly, to people who may be vulnerable, through to children – what does age-appropriate AI look like? How does this work in different jurisdictions? How does it work in different cultural settings? What’s a perspective in the global South, a developing economy context, or a different social demographic? These are mainly questions for the social sciences, but they are fundamentally important to include in discussions around generative AI.
It is great that there is such a public conversation around generative AI because technologies that affect many people should be discussed with many people. But such a conversation needs to be informed, and the extraordinary amount of doom-mongering around AI is worrying because it distracts us from how we manage our relationship with AI here and now.