Rashik Parmar MBE is Group Chief Executive of BCS, The Chartered Institute for IT. His team is focussed on the value of professional membership, promoting digital skills and computing education, and building BCS’s reputation for insight on emerging technologies such as AI and quantum computing. He pioneered responsible computing as an open framework to restore trust in IT.
The Academy’s Digital and Physical Infrastructures team spoke to Rashik about patterns of innovation in IT and the need for better, more inclusive guidance for developers.
We have seen a real focus recently on frontier AI and foundation models. What does frontier AI mean to you? What do you think are the greatest opportunities and risks associated with it?
What all forms of frontier AI have in common is that they aim to make proper use of the abundance of data we now have available to us, and to make that data more valuable. Frontier AI is now at the point where you have large language models (LLMs) that can be applied to particular tasks, and the ability to generate a certain level of insight from data automatically. Other aspects of frontier AI include innovation in data classification and improvements in the ability of AI models to interpret and distil large amounts of unstructured data.
There are three underlying equations which can help us understand the value flow for data. The first is that data plus algorithms gives you information. The second is that information plus some kind of model gives you insights. The third is that if you take that insight and do something with it, it creates an action. That action, in turn, creates an outcome – and that is where the value of data lies.
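Written out schematically (a loose restatement of the three relations Rashik describes, not formal notation), the value flow might be sketched as:

\[
\text{data} + \text{algorithms} \rightarrow \text{information}
\]
\[
\text{information} + \text{model} \rightarrow \text{insight}
\]
\[
\text{insight} \rightarrow \text{action} \rightarrow \text{outcome (value)}
\]

Each step builds on the one before it, which is why the value sits in the outcome rather than in the raw data itself.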
In 2014, I published a paper called “The New Patterns of Innovation,” which postulates that when IT companies have increased the value of data, they have tended to follow one of five patterns. The first is: for anything physical, we will attach sensors, give it connectivity and apply AI to make it intelligent. The second is: any human activity that can be, will be programmed into some form of software that will replace human labour – ostensibly freeing people to focus on more interesting, creative and intellectually challenging work.
The third is: there is a need for different organisations to collaborate with one another and eliminate breaks in data between themselves (maximising opportunities for collaboration). The fourth is: data itself does not have intrinsic value; that, in turn, means that data markets will emerge which allow data to become a tradable asset. The final pattern is that an organisation with best-in-class capabilities will codify those capabilities and create new services.
Broadly speaking, innovation tends to fall into one of those five patterns regardless of sector. So, when it comes to the future development of AI, if we want to use AI to our collective societal benefit, we need to consider how those patterns can be leveraged.
Do you think we have a full perspective on what the risks might be? Are there any perspectives that are missing from conversations around the risks posed by the use of AI?
Humans will find inappropriate use cases where they exist. However, there are also unintended risks that arise. Even when everybody involved in building a system acts in good faith, key questions may not have been asked, such as: ‘Do we have all of the data sources we need? Have we inadvertently taken an already biased system and made it more biased?’ If those questions have not been asked, then the risks associated with that system will not become apparent until it is too late.
There is a third group of risks that can be categorised as unconscious. These arise despite people acting in good faith and asking the key questions, because there are gaps in developers’ knowledge that lead an AI system to behave in unanticipated ways. We need to acknowledge that risks can take a number of different forms and develop responses accordingly.
The AI Safety Summit is focused on AI safety, and you have spoken about developing AI in such a way that you can reduce potential risks and focus on the areas of application that are most beneficial. But what other measures do you think organisations can take to increase frontier AI safety? What are the most important ones in your view?
One of the most important measures organisations can take to improve safety is to assess the responsibility of their computing systems at multiple levels.
The first level that needs to be assessed is the data centre itself, where energy consumption, water consumption and environmental impact need to be measured. The second level is the infrastructure around the data centre, where the sustainability of its construction and maintenance needs to be measured.
At the third level, you have responsible coding. Organisations need to ask themselves, ‘have we built the right kind of safeguards into the code? Is security in there by design? Is it designed with inclusivity in mind? Is it covering everybody that might be using it? Is the system future-proof?’ Only then can you address the fourth level, which is responsible data use. Organisations need to evaluate the inclusivity and explainability of the data being used to train systems.
The next level is that of the responsible system. Organisations developing systems need to ask themselves, ‘are our internal systems of governance supporting ethical decision making? Are we ensuring that the individuals responsible for supporting the function of AI systems are acting responsibly and competently?’ Finally, there is responsible impact. Organisations need to ask themselves why they choose to do the work they do, and how they can ensure future work delivers the right kind of impact.
Do you think the government might miss anything in terms of AI safety by focusing on frontier AI? Do you think we might miss other kinds of potential AI risks?
I actually think some of the more ‘boring’ stuff is a bit more exciting than frontier AI. Really, what I am interested in is taking what the human does and amplifying their productivity to address some of the challenges that humanity faces. Thinking about where AI is headed requires us to think about how we view human activity, and how we would want to replace some of that human activity.
What is one particular outcome of the AI summit that you would like to see?
We need to ask: for an AI system, can we establish principles, practices and measures of excellence? Could systems built in the UK that satisfy those requirements be given a ‘coded in Britain’ stamp of approval? If we could establish such a mark at the summit, or at least begin the conversation about how to, then we would be moving towards having a clear way to know whether a system is responsible, ethical, sustainable, and addressing the right kind of societal issues.
The kind of guidance that exists for engineers in other sectors does not exist for IT today. So, the question is, how can BCS, the Royal Academy of Engineering, and the Professional Engineering Institutions lead the UK in creating guidance for responsible computing which will tell you not only how to build something in a responsible way, but also how to make sure that you can trust it? It is an important question because trust in IT is declining, and this is an opportunity to restore that trust.