David Short is a systems engineering consultant in the defence industry. He is distinguished for his leadership and technical capability in the development of, and problem solving within, complex mission systems across the engineering lifecycle. In recent years he has been looking at aspects of ethics in engineering, working with the Engineering Council and the Academy on the Engineering Ethics Reference Group.
The Academy’s Policy team spoke to David about the ethical considerations that underpin self-learning (non-deterministic) systems.
What are self-learning (non-deterministic) systems?
Artificial Intelligence (AI) systems are designed either as deterministic systems, which have an exact, predictable relationship between input and output variables, or as non-deterministic (self-learning) systems, whose behaviour can vary according to what they have learned from their training data and, in some cases, from an additional element of previous localised experience.
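As a loose illustration only (not drawn from any real vehicle or AI system), the distinction can be sketched in a few lines of code: a deterministic controller maps the same input to the same output every time, whereas a self-learning one can revise its own decision rule from experience, so the same input may later produce a different output. The threshold-update rule here is a toy stand-in for genuine machine learning.

```python
# Deterministic system: identical inputs always yield identical outputs.
def deterministic_controller(speed_kmh: float) -> str:
    return "brake" if speed_kmh > 50 else "maintain"


# Non-deterministic (self-learning) sketch: the decision threshold is
# adjusted from "experience", so identical inputs can produce different
# outputs over the system's lifetime.
class LearningController:
    def __init__(self, threshold: float = 50.0):
        self.threshold = threshold  # learned parameter

    def decide(self, speed_kmh: float) -> str:
        return "brake" if speed_kmh > self.threshold else "maintain"

    def learn(self, feedback: float) -> None:
        # Localised experience shifts future behaviour.
        self.threshold += feedback


print(deterministic_controller(60.0))  # always "brake"

learner = LearningController()
print(learner.decide(55.0))  # "brake" with the initial threshold of 50
learner.learn(10.0)          # experience raises the threshold to 60
print(learner.decide(55.0))  # "maintain": same input, different output
```

The point of the sketch is the observer's position: with the deterministic function, behaviour can be predicted exactly from the specification; with the learning controller, predicting behaviour requires knowing its entire history of experience as well.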
As AI utilises ever more sophisticated machine and deep learning, leading it to work in non-deterministic ways, it can be impossible for an observing human to comprehend how a conclusion is reached. It can likewise be difficult, if not impossible, for a third party to predict how an individual will react when driving a car, for example. However, training gives people skills, and gives society confidence, that most individuals will make decisions using the right judgements. When AI works in these ways, however, it creates a number of ethical challenges for engineers. How can we design, develop and clear these capabilities so that we have confidence in self-learning AI systems, and so that there can be trust that the right decisions are being made for the right reasons?
What are the critical ethical issues in this area?
When you start putting trust in AI systems, questions arise about how much autonomy we should give them. The ethical considerations include how far we want them to make critical decisions that could affect safety or reliability, or have wide social impact. In autonomous vehicles, for example, the onboard computers can make safety-related decisions, but in the vehicles currently on UK roads a human can still take responsibility and control.
In autonomous taxis this contingency is no longer available, and a greater amount of trust is being placed in the car’s systems. The Waymo taxis in the US can work tirelessly 24/7 and have an impressive safety record, claiming 92% fewer bodily injury claims and 88% fewer vehicle damage claims than an average competent human driver.
In critical domains like medicine, transport and defence, the reliability and safety of decision-making systems are paramount. Non-deterministic systems introduce uncertainty, raising concerns about their ability to consistently make accurate and safe decisions as well as whether the people using them understand the decisions being made.
Additionally, some self-learning systems rely on vast amounts of sensitive data, raising concerns about privacy and security. Ethical frameworks should prioritise rigorous testing, validation, data security, and continuous monitoring to ensure safety, reliability and measures of trust.
In military applications particularly, we might always want humans to be involved in lethal decision making. In the UK, military doctrine prioritises a human in the loop and adherence to international law, but we must also recognise the argument that this could be a disadvantage if an adversary is working with AI systems that make faster decisions. More broadly, important ethical questions relate to ensuring that decisions are guided by human values and contextual knowledge. However, rising pressure for cost reduction and increased efficiency can lead to reduced human oversight. It is essential that we have open discussions on the role of human decision making in such sensitive areas to guide engineering development.
Why are these ethical issues particularly important?
The deployment of self-learning systems can have far-reaching societal implications that extend beyond immediate consequences. This makes it necessary to ensure equitable distribution of the benefits afforded by these systems. We must also anticipate and plan to mitigate any unintended consequences that may arise from widespread adoption. While there are individual responsibilities, building trust in non-deterministic systems that make critical decisions requires a multi-faceted approach combining technical, ethical, and social considerations. In 2026, we are likely to see the first commercially available humanoid-style robots with sophisticated learning-based AI as an inbuilt capability. Companies such as Tesla, UBTECH and Unitree are developing very capable systems with aspirations to make them available for widespread home use in the near term.
Given the potential impacts, both positive and negative, effective regulation and governance frameworks are necessary. However, technology often evolves faster than regulation. The development of regulations for autonomous cars in the UK is still at a relatively early stage (the UK Automated Vehicles Act 2024), with further significant refinement needed before fully automated capability can be allowed on UK roads. It is important that ethical guidelines, standards, and regulatory mechanisms are established in a timely way to promote responsible deployment, use, and oversight. For example, policymakers should consider whether the current Consumer Protection Act 1987 provides adequate regulation for personal assistant humanoid robots in the home. Collaboration among policymakers, industry stakeholders, and ethicists is also essential to develop and enforce regulations that uphold ethical principles while fostering innovation.
With constant evolution comes the need for constant observation. Regular audits and reviews will be essential to identify and address ethical concerns as they emerge, ensuring that decision-making processes remain aligned with ethical principles and social values. It’s a very fast-moving environment that needs proactive consideration.
Which of the ethical principles are most important here?
Honesty and integrity are essential dimensions for enabling trust in non-deterministic systems. Technology is evolving quickly. Advancement in capability and use is inevitable; by prioritising transparency, accountability, validation and ongoing improvement, we can instil confidence and mitigate concerns about reliability and impact.
For me, it is important that the regulation of this sector is something that people can relate to, and that trustworthiness is demonstrated in such a way that the public – many of whom feel deeply uneasy at the rise of non-deterministic processes – can recognise both the regulatory frameworks and the ethical values underpinning these systems. In the medical profession, you have an expectation of competence in the doctors and nurses that you talk to, because you know they are trained, and so in turn we trust them. We need to replicate that with clear standards and regulations for the machines in which we are placing as much trust.
What can engineers do differently on this issue?
The starting point is awareness, so that the development and use of these systems is grounded in ethical and systematic practices. From my own experience in the aerospace sector, I have seen increased use of self-learning systems that can now generate designs and read and solve complex problems (in non-safety-critical systems), but I don’t think that everyone using them understands how the tools make the decisions that they do. In any sector making use of these systems, foundational knowledge of the technology being used, the potential impacts, and the safety limitations is important.
Engineering communities more broadly also have a role to play in encouraging the ethics conversation. Ethics is prominent in chartership interviews for example, but there is more to do to support continuous professional development in this area.
What would happen if we did nothing?
From a UK perspective, if we are too slow in adopting new technologies that are being developed elsewhere then we risk becoming uncompetitive. Currently, on the global stage we seem to be in a “technology gold rush”, including a push for larger and more capable data centres, with ever more applications for AI. It is essential though that we take time for checks and balances as part of our engineering capability development.
By addressing the wider ethical considerations proactively, we can try to anticipate the complexities of non-deterministic decision-making systems in critical domains while upholding principles of fairness, transparency, accountability, and human responsibility.
What challenge would you set any engineer working in this area?
It is important to understand the underlying technology. Many young engineers will see an explosion in the use of self-learning AI, and these technologies will have far-reaching impacts. It will be crucial to understand the technology basics and the ethical implications as much as possible.
I would challenge new engineers to be inspired by technology rather than threatened by it; types of roles will likely be changing, especially for software engineers, but there will be a next level of application and new opportunities to be embraced. However, as new tools are introduced everyone should have an awareness of the impact of those technologies and make a commitment to building them ethically.
With every industrial revolution the threat seems more obvious than the opportunity; I would encourage new engineers to learn about new technologies and be inspired by them.