Dr Natasha McCarthy has worked on policy, philosophy and ethics across a range of disciplines, with a particular focus on engineering, data and AI. Previously, as Head of Policy for Data and Digital Technology at the Royal Society, she promoted the well-governed use of data and digital technologies for the benefit of humanity.
Speaking with the Academy’s Digital and Physical Infrastructures team, the author of Engineering: A Beginner’s Guide gave her thoughts on the opportunity the AI Safety Summit presents and what its attendees should prioritise.
We have seen a real focus recently on frontier AI and foundation models. What does frontier AI mean to you? What do you think are the greatest opportunities and risks associated with it?
The term ‘frontier AI’ suggests a focus on the leading edge of research and development in AI and on how those products may be applied and taken up by users. In this context, ‘frontier’ carries many allusions and resonances, some of which may raise fears and concerns, while it can also evoke a fascination with what is possible and yet to be discovered.
It is vital to pay close attention to the frontier while also ensuring that care and attention are given to other important considerations.
I perceive two broad challenges associated with focusing on the frontier of AI that must be addressed. First, it is important to ensure that concerns about the frontier do not divert attention from addressing existing risks and from appropriately using existing regulations to make AI use and development safer. It may seem that the frontier is moving so rapidly that governments and regulators cannot act in time to support its positive uses while also regulating and governing to reduce risk. Well-publicised concerns about potential future risks that are existential in impact may add to that feeling. However, there are also real and existing risks, raised in discussions on AI and data use in recent years, that require our immediate attention.
Second, while there is great value in focusing on what AI might achieve and on what is happening at the edges of the discipline, we must ensure that the development of these powerful, general-use technologies is first and foremost shaped with purpose, societal benefit, and safety in mind. There is a risk that investment in fast-moving areas of research could eclipse slower-moving areas that are ultimately more beneficial.
That said, I welcome a focus on the future and an exploration of how AI might evolve, as well as of the risks that can emerge. We have seen how our technological systems can have long-term negative and global impacts. We need to ensure that we are mitigating potential risks by not locking into particular approaches and by ensuring there is support for a diverse ecosystem of technologies and organisations.
Do you think we have a full perspective on what the risks might be? Are there any perspectives that are missing from conversations around the risks posed by the use of AI?
The Summit correctly identifies that we are grappling with a powerful set of tools, but decisions about such powerful technologies should not sit only with those who are themselves in power. We must engage the wider society that benefits from and is impacted by these technologies, especially those who could be harmed by improper use or poor data quality. These perspectives should be included in conversations about the risks and challenges that we hope technologies like AI can play a positive role in addressing. At the Royal Academy of Engineering, we are shaping a programme of work to help co-create technology pathways so that our research and development is focused on delivering real public value.
Another voice that I hope will be part of the conversation, and engaged with the actions that follow, is the engineering voice. Engineering is the family of professions that brings technology into our everyday lives. Engineers have decades, indeed centuries, of experience that have brought us to a point where safety is a priority. The methods and frameworks to deliver safety are well established: from automotive engineering to major construction projects, the progress of engineering is as much about improving safety as it is about delivering economic benefit. Engineering teaches us to learn from past mistakes, accidents, and unintended consequences, and for AI there is an opportunity to borrow from the knowledge, practices, frameworks and institutions that create that safety culture.
The AI Safety Summit is focused on AI safety, and you have spoken about developing in such a way that you can reduce potential risks and focus on areas of application that are most beneficial. But what other measures do you think organisations can take to increase frontier AI safety? What are the most important ones in your view?
Data quality and data privacy are two essential areas. Data fuels AI, and we need to ensure that models and systems are powered by accurate, inclusive, high-quality data. This will ensure that AI models and systems reflect the diversity of our societies, rather than past biases. It is also essential to ensure that ‘frontier AI’ does not become a source of misinformation.
Alongside a commitment to safety, we also need a commitment to AI ethics. The last five to ten years have seen a real focus on the ethics of AI, with many organisations developing their own ethical principles. However, the study of the ethical and societal impacts of technology goes back many decades before that. We need to draw from that experience and knowledge to develop and embed ethical frameworks that reflect the existing and potential powers of AI.
Do you think the government might miss anything by focusing on frontier AI in terms of AI safety? Might we overlook other kinds of potential AI risks?
There is a need to focus not just on what can be done with AI, but on what should be done and how. With this approach, frontier AI can be viewed through its ability to unlock great benefits: from DeepMind's proven work on protein folding, to a wide range of applications in health and care, to emerging work on the use of AI in the climate and decarbonisation challenge.
We must also be aware that this is a technology that gives us capabilities that will displace, change, and create new activities. Posing a question to a large language model not only saves us brain power but also demands significant computing power. The energy and carbon cost of this is important to understand, so that we can ensure AI delivers a net reduction in energy use and carbon emissions rather than adding to them.
What is one particular outcome of the AI summit that you would like to see?
I would like to see greater attention to lessons from engineering, including how we build a mature and responsible profession in AI with an embedded safety culture. I would also like to see a planned programme of ongoing public dialogue to ensure that we work with wider society to build safe technologies with the power to deliver public benefit and address the complex challenges we face as a global society.