Ryan Donnelly is one of the founders of Enzai, a company on a mission to ensure that powerful AI technologies can fulfil their true potential. Enzai’s AI governance platform helps organisations manage AI risk through policy and organisational controls – allowing users to engender trust in, and scale, their AI systems.
The Academy’s Digital and Physical Infrastructures team spoke to Ryan about how organisations can manage the risks associated with AI and which ones they should be prioritising in the short term.
We have seen a real focus recently on frontier AI and foundation models. What does frontier AI mean to you? What do you think are the greatest opportunities and risks associated with it?
Frontier AI is a bit of a nebulous term that refers to the most advanced versions of AI that we have available to us today.
In terms of opportunities, I tend to think about AI the way I think about any other technology – and that is in terms of trade-offs. Powerful technologies come with large risks, and understanding those risks is essential to managing them appropriately. There are already a number of ways to manage these risks, but work in this area must progress if we are to access the full benefit of AI.
The benefits of using AI differ depending on your perspective. In the context of a commercial business, there are tonnes of efficiencies that can be gained from using these kinds of technologies for admin-heavy tasks. One of the things I am super interested in, though, is the application of AI for drug discovery. Across the scientific space there have been a number of breakthroughs that are already changing the world, and simply put, there is tremendous potential for AI to transform our lives for the better as long as we get the risk piece right.
This is not something that is incredibly technical, or something for only the big tech companies to sort out, because this really is not a debate about what AI can or should do. It is a conversation about how we govern ourselves and how we legislate for bad actors, and it is a discussion that I think we need to have at a societal level.
It is also worth noting that generative AI may feel very new, but it is not. It has been around in labs for years. It is just that it has only recently become commercially viable, with a wellspring of companies trying to make money out of the technology. So, you get the feeling that it is all moving very fast, but in reality it is not.
Do you think we have a full perspective on what the risks might be? Are there any perspectives that are missing from conversations around the risks posed by the use of AI?
I think we have a reasonable understanding of the direct risks. For instance, it is pretty obvious that some diffusion models can be used to generate images that might be illegal or inappropriate. What we do not have a good handle on are the indirect and downstream risks. What happens if these models are used for everything? Does the entire internet become saturated with AI-generated content? Does that then mean that AIs are being trained on AI-generated content, and if so, what happens then? We have no handle on those kinds of risks, and it would be great to see them discussed at the summit.
The AI Safety Summit is focused on AI safety, and you have spoken about developing in such a way that you can reduce potential risks and focus on areas of application that are most beneficial. But what other measures do you think organisations can take to increase frontier AI safety? What are the most important ones in your view?
Organisations need to put rules and guardrails in place that make sure AI is only used in the ways that they want it to be used – and that can be done through organisational policy. Just as organisations regularly put policies in place to encourage people to behave in a certain way, or to work in a certain manner, organisational policy can be used to direct AI use.
However, compliance also needs to be measured, which can be challenging. Fortunately, it is getting easier with technologies like ours, which can integrate into an organisation’s tech stack, monitor how people are interacting with different solutions, and provide reports in a semi-automated way, making for more efficient risk management processes.
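To make the idea of policy-driven guardrails concrete, here is a minimal, hypothetical sketch of what a "policy as code" check might look like. The rules, risk tiers, and the `check_use_case` helper below are illustrative assumptions for this article, not a description of Enzai's actual platform.

```python
# Hypothetical sketch: expressing an organisational AI-use policy as code.
# The rules, field names, and risk tiers below are illustrative only.

from dataclasses import dataclass

@dataclass
class AIUseCase:
    name: str
    uses_personal_data: bool
    fully_automated: bool   # True if no human reviews individual decisions
    risk_tier: str          # e.g. "low", "medium", "high"

def check_use_case(use_case: AIUseCase) -> list[str]:
    """Return a list of policy violations; an empty list means compliant."""
    violations = []
    if use_case.risk_tier == "high" and use_case.fully_automated:
        violations.append("High-risk systems require human review of decisions.")
    if use_case.uses_personal_data and use_case.risk_tier != "low":
        violations.append("Personal-data use above low risk needs sign-off.")
    return violations

# Example: a fully automated, high-risk system fails both checks.
loan_scoring = AIUseCase("loan_scoring", uses_personal_data=True,
                         fully_automated=True, risk_tier="high")
for violation in check_use_case(loan_scoring):
    print(violation)
```

Checks like these are simple to run repeatedly, which is what makes the semi-automated reporting Ryan describes possible: compliance stops being a one-off audit and becomes something an organisation can measure continuously.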
Do you think anything might be missed by the government by focusing on frontier AI in terms of AI safety? Do you think we might miss other kinds of potential AI risks?
I do, and I think it is quite important to call attention to it. There are risks with this technology today that have been apparent in architectures and models we have had around for more than five years (in some cases more than 10), and they are doing real damage in the world.
For instance, a lot of banks operate anti-fraud systems that run algorithms over transaction and customer data to detect fraud on the account. When those systems are rushed into production, the results are catastrophic for people. They start randomly shutting down people's business accounts and personal accounts, locking their money away. Even worse, this is completely avoidable. We already have a lot of the necessary risk management techniques. So, there is a tonne of stuff that we can work on right now that will make an immediate difference, and I think that is important to realise.
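One such technique is keeping a human in the loop for uncertain cases rather than letting a model act on accounts automatically. The sketch below is hypothetical – the thresholds, the stub `fraud_score` function, and the action names are assumptions made for illustration – but it shows the shape of the guardrail: only near-certain cases trigger automatic action, and the grey zone is escalated to a person.

```python
# Hypothetical sketch of a human-in-the-loop guardrail for a fraud model.
# Thresholds, the stub scoring function, and action names are illustrative.

def fraud_score(transaction: dict) -> float:
    """Stand-in for a real model; returns a fraud probability in [0, 1]."""
    return 0.82  # placeholder value for the example

def decide_action(transaction: dict,
                  review_threshold: float = 0.6,
                  block_threshold: float = 0.95) -> str:
    """Act automatically only on near-certain cases; escalate the rest."""
    score = fraud_score(transaction)
    if score >= block_threshold:
        return "block_and_notify"    # automatic action reserved for clear cases
    if score >= review_threshold:
        return "escalate_to_human"   # a person decides before any account freeze
    return "allow"

print(decide_action({"amount": 2500, "merchant": "example"}))
# -> "escalate_to_human": the model is suspicious but not certain,
#    so no one's account is frozen without a human looking first.
```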
The UK is in a good position to draw people’s attention to these more immediate issues and to lead the world in AI regulation – I’ve been screaming it from the hilltops for years. The UK has the two core ingredients to be a leader in this space – a strong legal tradition and great access to talent.
Northern Ireland, specifically, is even better placed. The UK has obviously spotted the opportunity to lead on AI regulation and so has the European Union. Where is the best intersection between the UK and the European Union? Northern Ireland. Northern Ireland also has great universities pumping out legal and technical talents as well as access to both markets.
What is one particular outcome of the AI summit that you would like to see?
I would really like to see a renewed focus on identifying what the risks are today and how we can manage them. I would especially like to see a clear action plan that addresses that concern.
I would also like to see some discussion on how we can get organisations to put quality management systems in place to ensure tools are being built and deployed in the right way. I know it is not as headline-grabbing as some of the other topics that will be discussed at the summit, but I think dealing with the immediate risks is actually the more impactful work.