The first Chief Scientific Adviser to the UK Government on National Security, Nick Jennings is a researcher engaged in developing systems for large-scale, open and dynamic environments. He maintains a particular interest in the ways humans interact with AI systems, and his most recent research focuses on identifying and overcoming barriers to creating trustworthy human-AI partnerships.
The Academy’s Digital and Physical Infrastructures team spoke to Professor Jennings about how the public and policymakers should be thinking about our relationship with AI.
What are your hopes for the development and deployment of generative AI?
The attention that generative AI systems like ChatGPT are receiving should give people a better understanding of what AI is and, equally importantly, what it is not. By interacting directly with such systems, we get a much clearer idea of their capabilities. To date, the AI that we all interact with on a daily basis has been hidden inside much larger systems, making its prowess and shortcomings largely invisible.
ChatGPT is very impressive, but at its core it functions in a similar way to predictive text on a smartphone. Because the dialogue flows so naturally, it is easy to believe that ChatGPT “understands” the conversation in the same way we do. However, it does not, and we need to be aware of this to make the best use of such systems.
The increased profile has also prompted an important discussion about the potential of AI. Overall, I believe AI systems will be positive for society, helping us tackle global challenges such as climate change, population health and productivity. But AI systems will also be used for harm, and so we urgently need to consider how best to progress in a responsible way.
Do you have any fears about the progress and implementation of generative AI?
Individuals will always use powerful technologies for nefarious purposes.
In the context of cybersecurity, for example, generative AI is already being used to construct better attacks. This increases the vulnerability of the digital infrastructures on which so much of our daily life depends.
It also means we need to develop appropriate safeguards to minimise the risks posed by misuse. For me, this means focusing on the use of AI systems, rather than their underlying technologies. The risks and opportunities posed by AI are different from sector to sector, and so it makes more sense to consider domain-specific regulation rather than a one-size-fits-all approach.
At this juncture, what questions do we need to be asking ourselves to enable the safe and impactful use of AI?
We need to start thinking more about how we make the best use of AI systems. A key component of this is the types of interaction we want to support. This should not be left solely to those developing the technology. Rather, we need a genuine cross-society dialogue about what we want to do with AI systems and what we want our interactions to feel like.
For me, an understudied area is how AI systems can better partner with people. At the moment, many human-computer interactions are awkward. Computers have rigid interfaces and humans need to adapt to them. This needs to change. Computers need to be more flexible, better problem-solving partners, better collaborators, and better advice givers.
We also need to ask ‘what does fair use look like?’ In my role as Vice-Chancellor at Loughborough University, for example, I know our students use ChatGPT to help with their assignments. This is fine. In fact, we should be disappointed if they are not using it. Banning ChatGPT is absolutely not the right approach (even if it were technologically possible). Equally, we cannot simply allow unaltered ChatGPT outputs to be submitted. To make progress, we need to have an informed, sector-wide conversation around fair use and determine what is acceptable and what is not.
What are the respective roles of researchers, those deploying the technologies, regulators, and policy makers in enabling safe use of generative AI?
Overwhelmingly, the responsibility of the different actors in this space is to communicate and engage with one another. Operating in individual silos is simply inappropriate and trying to regulate something that you don’t understand well is a bad idea.
The UK’s advantage is its great heritage and long history in AI. We can genuinely be a global leader – so stakeholders in the UK need to consider how we build upon this strong base to stay ahead.
This conversation must consider how to regulate without stifling innovation. A balance needs to be struck between regulating too hard and too early, and waiting until the horse has bolted. It is also important that we don’t make the classic mistake of leading on the fundamentals of a technology but failing to make the best use of AI in our public services to create societal benefit.
We should be aware of AI exceptionalism. I observe that some of the concerns we attach to AI in the public discourse are far from specific to AI. Consider bias and unfairness, for example. Both are undesirable in AI systems. But they are also equally undesirable in all software, regardless of whether it has AI inside or not. Rarely, however, are such issues raised when talking about more traditional software systems.
What is not being talked about in the ongoing media discussion around generative AI?
The current narrative is overly doom-laden – ‘AI is super powerful, super scary and is going to take all our jobs. AI may even spell the end of humanity.’ Although eye-catching, these are not the views of many in the field. So, a more balanced and less hysterical perspective is needed.
Much of the current AI discussion also makes the implicit (and sometimes explicit) assumption that there is going to be one monolithic and omniscient AI system that we will all interact with and that will do everything for us all. I think this is unlikely. Rather, I think there will be many interacting systems that cooperate, coordinate, and compete. Each will have different and partial knowledge about particular individuals, resources or organisations. So, we need to start talking more about interacting AI systems and human-AI partnerships.