Koray is one of the world's foremost experts in deep learning and is Vice President of Research and Technology at Google DeepMind, one of the world's leading AI companies. A pioneer in algorithmic breakthroughs, Koray leads research and engineering teams aiming to build AI responsibly and deliver scientific breakthroughs that benefit humanity. He is recognised for his contributions and leadership in the domains of deep learning, unsupervised learning, generative models, deep reinforcement learning, and applying AI to real-world problems.
The Academy’s Digital and Physical Infrastructures team spoke to Koray about what is happening at the frontier of AI and what needs to be done to ensure its responsible development.
We have seen a real focus recently on frontier AI and foundation models. What does frontier AI mean to you? What do you think are the greatest opportunities and risks associated with it?
The frontier is always where the most exciting things happen. It is where development is fastest and where exploration is happening – and exploration is important. Frontier AI is where we can expand our knowledge and increase our understanding of what AI is and how it should be built. It is important to have that emphasis on exploration because building AI is an exploratory process – there is no recipe that can be followed. So, I would describe the frontier of AI as the place where the tools that capture the imagination, the tools that actually help people, and the tools that advance our society in really fundamental ways are being developed.
In terms of exciting advances, I am excited by the ways that AI creates imaginative and conversational space, enabling people to create new visuals and representations and to imagine new things. It allows people to think about ideas in new ways or to learn concepts more quickly. For example, we are already seeing AI being used to progress our understanding of sciences like biology and chemistry, allowing drugs and health technologies to be developed.
In this sense, foundation models can be seen as the main tool for exploration in frontier AI due to their ability to ingest large amounts of information. They can ‘learn’ from that information and present it in a way that is helpful (and can often aid the user in different and novel ways). Foundation models are important because they are the best machine learning models developed so far. The culmination of years of machine learning research has resulted in them being the most powerful elements of AI being explored right now.
One of the biggest challenges with frontier AI and foundation models is figuring out what the risks are. Foundation models are really powerful models. We want to make sure we develop powerful technologies like these with the intention that they will have a positive impact on society. These models make access to a lot of information very easy – thinking carefully about how they are applied is one of the most important ways we can influence the impact these models will have. To be able to do that, we want to make sure that we pick the right application domains. There are good examples, such as AlphaFold and work in medical imaging, that showcase really positive applications of cutting-edge AI systems and foundation models whilst thinking seriously about risk.
If we want to solve the most important and pressing challenges, we need to do so in a responsible way. We need to ask ourselves: what are we focused on? What kind of applications are we thinking about? What kind of positive impact do we want to make? And how do we push development towards that goal? We must create a culture where we are always checking that we are putting the right technologies and the right advances into foundation models.
Do you think we have a full perspective on what the risks might be? Are there any perspectives that are missing from conversations around the risks posed by the use of AI?
In many respects, AI is like any other technology. It is a powerful tool, and its impact primarily comes down to its use. That, in turn, brings up the same questions I just mentioned about application domains and the suitability of models for those domains, which comes back to picking the right development path – one that bakes in safety. As developers, we then need to develop the technology in a way that minimises misuse and the risk of misuse.
The AI Safety Summit is focused on AI safety, and you have spoken about developing in such a way that you can reduce potential risks and focus on areas of application that are most beneficial. But what other measures do you think organisations can take to increase frontier AI safety? What are the most important ones in your view?
Inside Google DeepMind, there's a lot of work that goes into understanding and figuring out what the potential risks are, and I know that any organisation building AI is also investing in that. One of the most important things to focus on is evaluations. Evaluations allow us to bake safety into the development process and minimise the risk of misuse or unintended consequences. But then the question is, how do you come up with evaluations?
We need to ensure that we have forums where we can all talk together to collaboratively build our understandings of risk to then come up with evaluations that build these understandings into our development processes. Gatherings like the AI Safety Summit are really critical for developing the right evaluations, as they require having the right people and opinions in the room. There isn’t any single organisation or person that has all of the perspectives and understandings necessary to represent the risks associated with a model.
It is important to include a diverse range of organisations because these core technologies should have a positive impact across a lot of domains and communities. There are three communities that must be included: organisations and companies working on developing AI, to provide technical knowledge; governments, with their responsibility to ensure AI will be positively impactful; and the public, as that is where AI should aim to deliver its benefits and enable more capabilities.
Do you think the government might miss anything in terms of AI safety by focusing on frontier AI? Do you think we might miss other kinds of potential AI risks?
I see this as an opportunity to broaden our views and improve our understanding and thought processes about frontier AI, but we must appreciate that there is an interplay between thinking about the risks at the frontier and the risks that exist right now. We need to ensure, both now and in the future, that the impacts of AI are positive and reflective of a shared vision of progress. There is already a lot of effort around existing technologies, but looking at the frontier in this context will only bolster these efforts.
Also, understanding how other big technological developments have enabled change, and the structures that have supported them, will be beneficial for AI safety, despite the differences between those technologies and AI.
What is one particular outcome of the AI Safety Summit that you would like to see?
First of all, I think this is a great initiative from the UK Government. The UK has a lot of history and a very strong global position in this area, and it's excellent to see that status being leveraged to host such an event.
One of the outcomes I am hoping to see is different parts of the ecosystem – companies, governments, the public – coming together to improve their collective understanding of the problems they are trying to solve and create a platform for future collaboration.
Another thing I would like to see is the distillation of the discussions from the summit into evaluations. If we can do that, then we can ensure that the outputs of the summit will help improve the safety and the quality of AI development. These evaluations might not emerge immediately after the summit, but I really think the summit will not only influence our thinking, but also our approach to developing models and the evaluation criteria we use. Moreover, I think that as we continue to do these summits we will see those changes happening at a greater scale and pace.