As the Executive Director of the Web Science Institute at the University of Southampton, Dame Wendy Hall brings to this series her wealth of experience in how people and society interact with complex technical systems. Her research explores the influences that have shaped the evolution of the internet and the world wide web, and it has been instrumental in the emergence of Web Science as an academic discipline and in the development of socio-technical studies as applied to other areas such as AI governance.
The Academy’s Digital and Physical Infrastructures team spoke to Dame Wendy about where the immediate and present dangers are in the AI ecosystem and how to deal with them.
What are your hopes for the development and deployment of generative AI?
First and foremost, I hope we can keep it safe. Of course, that is not an easy ask, but it is an important conversation to be having because the risk from misinformation in particular is an immediate and present danger.
This is not an incredibly technical issue, nor something for only the big tech companies to sort out, because it really is not a debate about what AI can or should do. It is a conversation about how we govern ourselves and how we legislate for bad actors, and it is a discussion that I think we need to have at a societal level.
It is also worth noting that generative AI may feel very new, but it is not. It has been around in labs for years. It is just that it has only recently become commercially viable, with a rush of companies now trying to make money out of the technology. So, you get the feeling that it is all moving very fast, but in reality it is not.
Do you have any fears about the progress and implementation of generative AI?
I think the media (and for that matter the government as well) is too preoccupied with the supposed existential threat posed by AI, and there is not enough coverage of all the fantastic opportunities that AI could potentially bring. That is partially because there is a fundamental misunderstanding of what AI is. Even though AI has been in use for ages and is a part of most people’s day-to-day lives, it is treated as a new thing – and I think that leads to apprehension.
We’ve also seen a shift in the government’s approach to AI because tech companies are currently leading the debate. Regulation, of course, needs to guard against a whole range of different threats, and we have been having conversations with stakeholders about how we manage those threats for a few years now. But in recent months, because of the popularity of ChatGPT and generative AI, those conversations have largely been sidelined – and the focus has shifted to working with the big tech companies. That is not to say that consulting the big tech companies is not important, but they are not going to have all the answers and their foremost interest is making money from their products and services. We need many more voices in this debate.
At this juncture, what questions do we need to be asking ourselves to enable the safe and impactful use of AI?
In July, Meta announced that they would make their LLM (Llama 2) open source – and I was quoted as saying it is a bit like giving people a template to build a nuclear bomb. There are several questions we need to be asking ourselves about open-source models. Can people actually play inside the black box? Can they change the algorithms? Who is regulating the open-source versions? We need to be thinking about what people are going to do with the models and how they are going to use them, and not just about security and other mainly technical safety risks.
Another issue that should raise concerns is the information being fed into AI models. If you use ChatGPT, for example, to summarise or analyse some documents, does the generated output get fed back into the model? When data is being pulled from the internet, how is it being filtered and organised? How are the companies building these models ensuring that the data is unbiased and accurate? How do they make sure they are not using other people’s intellectual property?
What are the respective roles of researchers, those deploying the technologies, regulators, and policy makers in enabling safe use of generative AI?
We need to have a diverse conversation on how these systems are going to be regulated – one that engages stakeholders from across the sector and asks, ‘what do we want the machines to do for us?’ and not just ‘what do we need to do to serve their development?’ And when I say diversity, I do not just mean gender, though that is very important. You also need diversity in age, cultural values, and diversity of thought.
It is also much more intelligent, in my opinion, to use our energy and resources to talk about the problems that we understand well and can act decisively on now, rather than trying to arrive at all-encompassing regulation before we are aware of what we are regulating. Hence my focus on issues such as misinformation.
What is not being talked about in the ongoing media discussion around generative AI?
We, as a society, must learn how to deal with vast amounts of misinformation – and if I could choose the topics covered at the UK AI summit planned for this autumn, that is what I would be discussing. The problem of misinformation predates generative AI (though generative AI has certainly made spreading misinformation easier), but it is something to which we have no regulatory response yet.
We have two major democratic elections coming up next year and I worry about whether AI will pose a threat to our democratic processes. LLMs and other models can be used to produce any sort of deepfake, and the idea that the images and voices of politicians could be used to communicate misinformation is concerning. If you wanted to take a technical approach to the problem, you could start mandating the use of watermarks or tags to identify statements and images, so that when something is published it could be shown whether it has come from an official source (in the same way that campaign TV ads state who funded them). For such an approach to work, of course, you would need stakeholder buy-in – the media would have to commit to not putting things out until they have been properly checked, and you would also need to raise public awareness about how to check sources.
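To make the "official source" idea concrete: one way such a tag could work, sketched purely for illustration and not as anything proposed in the interview, is as a cryptographic signature that a source attaches to a statement and that anyone can check against the source's published key before amplifying the content. The Python sketch below assumes the third-party `cryptography` library and Ed25519 keys; watermarking images would rely on different techniques, but the verification principle is similar.

```python
# Illustrative sketch only: verifying that a published statement carries a
# valid tag from a known source, using the `cryptography` library (Ed25519).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

# The official source generates a key pair and publishes the public key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Before publication, the source signs the statement; the signature is the "tag".
statement = b"Official campaign statement, released by the party press office."
tag = private_key.sign(statement)


def is_from_official_source(content: bytes, tag: bytes, key: Ed25519PublicKey) -> bool:
    """Return True if the tag verifies against the source's public key."""
    try:
        key.verify(tag, content)
        return True
    except InvalidSignature:
        return False


print(is_from_official_source(statement, tag, public_key))               # True
print(is_from_official_source(b"Doctored statement.", tag, public_key))  # False
```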
None of these issues are being talked about enough at the moment, but they are everyone’s issues, because AI’s impact ultimately depends on how it interacts with people and what people do with it.