Event overview
The third People’s AI Stewardship Summit (PAISS) brought together the public with leaders from the public sector, industry, policy, and academia to explore the impact of AI on Liverpool’s future. Like previous summits in Belfast and Glasgow, the event aimed to foster dialogue between the public and those designing, building, researching, regulating and implementing these technologies. The event focused on the opportunities and challenges AI poses, particularly in health and infrastructure, both salient issues for Liverpool and the wider North West.
Explore a breakdown of the event below to find out what was discussed in each session and how the people of Liverpool feel about AI.

What makes you excited? What makes you worried? Where do you see the real opportunity?
Dr. Natasha McCarthy, Associate Director, National Engineering Policy Centre
Opening talks
AI: The Good, The Bad, and The Ugly with Professor Michael Fisher

Professor Michael Fisher from the University of Manchester set the stage by addressing the complexities of AI: the good, the bad, and the ugly. He explained the difference between AI types with an analogy: teaching a friend to recognise boats. One method is by giving rules—boats are found on water, for example. Another is by showing many photos until they learn intuitively. The latter reflects the AI he intended to focus on.
The Good
This kind of AI demonstrates significant potential in pattern recognition. Professor Fisher showcased its efficacy in detecting cancer in X-rays and identifying fraudulent banking transactions.
The Bad
Yet, AI is fraught with limitations. It can inherit biases from training data (like the assumption that a person at a computer is a man), and its performance can falter with unexpected inputs, such as unusual object orientations.
The Ugly
He concluded with the ugly side of AI: its environmental toll, the dangers of bias or data manipulation, and accountability issues when AI systems make consequential decisions.

The Importance of Public Voice in AI with Eleanor O'Keeffe
Eleanor O’Keeffe emphasised the urgency of public deliberation in AI policy, especially given the rapid pace of technological and regulatory change.
Rather than debating why public input matters, she focused on how practitioners can make it matter. The Ada Lovelace Institute integrates public attitudes, lived experience, and deliberative reasoning with legal and policy analysis to build stronger, evidence-based recommendations. A positive case study is the Citizens’ Biometric Council, which explored public concerns about technologies like face recognition alongside a review of governance and policy gaps. Together, these efforts directly influenced the EU’s AI Act.
Public discussion sessions
Exploring Hopes, Fears, and Uncertainties Surrounding AI
AI in Health, Wellbeing & Medicine
Participants expressed hope that AI will enhance healthcare efficiency, free medical professionals’ time, and improve access to services—especially for those less proficient with technology. AI was lauded for its potential in detecting diseases early and improving diagnostic accuracy, especially in complex cases requiring input from multiple specialists.
However, uncertainties lingered about the security of medical data and trust in AI-driven diagnoses—would people trust an AI to make decisions about their healthcare? What if a system failure led to critical errors?
Fears included healthcare job losses, uneven government investment across regions, and misdiagnosis if treatment decisions lack human oversight.
AI in Transport & Infrastructure
In the transport sector, hopes revolved around AI’s capacity to bolster safety and efficiency. There was a sense that existing traffic systems in Liverpool were wasting valuable time. Participants saw promise in smarter AI-powered traffic lights and optimised routes to reduce congestion. They also hoped AI could help with crash detection and identify infrastructure issues early, such as potholes and bridge damage, preventing more significant problems.
Despite the potential benefits, uncertainties lingered. How reliable are AI-driven transport systems? Would essential skills be lost in the process?
As with the topic of health, job losses were mentioned as a fear. Some worried that scheduling errors could result in accidents, while others feared AI could become a tool for control—used by influential figures, governments, or even criminal networks to manipulate or exploit people.
What if the power goes out?
I don’t want AI to have the final say.

Participants discussed how government, industry, and civil society could ensure AI serves the public good, maximising its benefits while minimising its risks.
AI in Healthcare: Strengthening Oversight and Trust
Participants advocated for an independent regulatory body with real authority, alongside an organisation to oversee intellectual property rights and tighter legislation to protect patient data.
Regulation must protect both consumers and patients.
Participants called for a cohesive approach to policy and funding across government agencies to prevent fragmented decision-making.
Transparency and public trust were central. One idea was to widely publicise success stories of AI in the NHS to build confidence in its benefits.
There was a consensus that AI should be a tool to assist healthcare professionals, not replace them.
AI in Transport: Building a Fair and Safe System
A key concern in transport was ensuring equitable access to AI-driven advancements. Participants stressed that public benefit should take precedence over profit, echoing the need for regulation and rigorous testing before AI systems are widely deployed.
Public engagement was a central theme. Participants emphasised the need for open discussions, town hall meetings, and participatory decision-making.
Finally, maintaining individual choice was a priority, with calls for opt-out options for AI-driven transport services.
Five Enterprise Hub members held focus group discussions, giving them an opportunity to hear how the public felt about their specific AI applications. They later shared some of these insights with the room.
Healthcare (ADHD Diagnosis)
While appreciating AI’s potential to improve ADHD diagnosis rates, participants were concerned about bias. Unease about data privacy, equitable access, and a lack of empathy were also voiced.
Emergency Services (Drones)
Significant enthusiasm for AI-powered drones in emergency response was tempered by ethical reservations about AI making life-or-death decisions. Participants felt there was a need for human oversight and for careful integration with existing emergency response systems.
Sports Coaching (Performance Analysis)
Although AI tools were regarded as beneficial for athlete performance, participants stressed the irreplaceable value of the human aspects of coaching, such as the ability to read emotions.
Education (AI and VR in Learning)
Participants discussed the need for AI literacy programmes to equip future generations with the skills to be involved in building AI systems and use AI responsibly.
Accountability (AI Across Sectors)
Participants expressed concern about bias in AI-driven decision-making and called for clear lines of accountability when AI systems produce unfair outcomes.
How can we ensure that the benefits from AI developed in the Northwest and using local resident data have a visible and tangible impact on local communities?
A primary concern was the unclear ownership of local data. Participants proposed establishing cooperative data trusts, allowing residents to collectively manage and benefit from their data. A suggestion for local levies on AI developers was also put forward.
Existing local authorities, town hall meetings and parish councils could play a role in facilitating democratic engagement, letting residents speak up about how their data is used by, say, Mersey Transport or the local NHS.
Focusing specifically on AI in health and care, what sort of help or information would be needed to identify patients’ needs and ensure that AI-driven health and care services meet the community’s specific requirements?
Participants stressed the need for faster diagnoses, particularly noting that lengthy waits for scan results can exacerbate health problems, especially in elderly patients. Many expressed a desire for AI to provide more personalised care, which some felt is currently lacking due to limited data sets in health research.
Additionally, they discussed how AI could enhance social care, pointing out its potential to help identify and prevent critical issues like child criminal exploitation.


Considering AI’s broader applications, what other opportunities or problems in the Liverpool City Region could AI help to address, what are the priorities, and why?
Participants expressed interest in using AI to tackle challenges such as the energy crisis, for example by advancing fusion technologies. They discussed the potential for AI to improve public services, optimising travel routes or identifying whether building sites are at risk of flooding.
The group considered whether AI could or should be directly involved in moderating balanced views on global issues and politics.
They also expressed concerns over selecting trustworthy AI systems; could brand recognition, peer reviews, and tools like Trustpilot play a role in establishing confidence?