This interview is one of a series conducted with AI experts within the Academy’s network to share their perspectives on generative AI and on AI more broadly. The series covers topics and themes around the safe development and deployment of these technologies.
Mary Towers is an employment rights policy officer at the Trades Union Congress (TUC). Supported by a trade union working group, Mary is leading a project at the TUC looking at the use of AI to manage people at work.
The Academy’s Digital and Physical Infrastructures team spoke to Mary about the importance of worker representation in safely integrating AI into workplaces.
Frontier AI is a contested term, one with no settled agreement over its precise meaning. For the TUC, then, defining frontier AI is a rather distant and technical question. We do not want to become bogged down in debating the precise meanings of the technical terms applied to different types of AI. Instead, we want to look at how AI is affecting workers in the here and now, what the implications are for them, and what can be done to address existing harms and risks.
There is no doubt that the introduction of generative AI into the workplace has the potential to turbocharge issues that already existed with single-purpose AI. As a result, the potential scale of issues with privacy and the entrenchment of inequality has grown, as has the potential impact of automation.
There are also pertinent considerations around how the use of generative AI may affect the place of professional expertise and human input within a role. We also perceive other, potentially existential, risks to the ways people relate to each other, learn, and work that may be exacerbated by increased use of generative AI.
The other aspect to consider in terms of generative AI’s impact within the workplace is its potential to accelerate the fragmentation of work and, in doing so, change the very nature of work itself. Work could be transformed in such a way that the use of generative AI leads to roles being broken down into smaller tasks allocated to different individuals, increasing the insecurity and precarity of employment within those workplaces.
The key is ensuring that the worker voice is heard at each stage of the AI value chain. Collective agreements can be used to articulate the structures needed to facilitate workers’ inclusion. For example, an agreement can establish an algorithm or data committee that allows workers to input into the procurement, development, and application of AI in the workplace. Within such an agreement, we would also advocate for clauses that require employers to hire an expert to assist the committee.
Collective agreements can also be used to ensure a right to trial for workers, a right to reject certain applications, and a right to input into algorithmic impact assessment processes. They can also be used to mediate disputes over intellectual property rights, as evidenced by the disputes over the use of AI in the creative and performing arts in the United States. The Writers Guild of America recently secured a groundbreaking collective agreement that provides a valuable framework for worker inclusion in the co-governance of AI. The provisions in that agreement protect the intellectual property of writers and give them control over when AI is used in the production of creative material. This is an ideal example of how collective bargaining and agreements can act as a vehicle for effective co-governance, guarding against harmful applications of AI and ensuring that everyone can share in the rewards of the technology.
Appropriate employment-specific regulation is vital. The TUC is currently working on an AI and Employment Bill to demonstrate the kind of legislation that would provide adequate protections for workers against some of the harms and risks associated with AI in the workplace. This includes provisions relating to worker consultation, equality, transparency and explainability, and in-person engagement. Data equality is also at the heart of that legislation: unless we address the imbalances of power that shape how data is used in the workplace, it is unlikely we will be able to address the risks and harms associated with the use of AI in that same setting. It is crucial for workers to have the same rights over their data as their employer, and to have the capacity to collectivise and make use of that data.
The risks associated with the eventual emergence of artificial general intelligence are, of course, worthy of consideration and attention. What must not happen, however, is for immediate risks to be overlooked in favour of future ones.
We perceive the existing impacts of AI use on workers as being very difficult for workers and trade unions to challenge. These impacts include work intensification, poor health and well-being outcomes, and unfairness and discrimination. These negative outcomes occur partly because there is a lack of transparency and understanding around the use of the technology in the workplace. To establish solutions to those harms, and to ensure that everyone benefits from the very real opportunities that exist, we strongly believe that a wide range of voices must have a seat at the table. That obviously includes the trade union movement, but it also includes workers themselves, who are the true experts on their roles and their relationships in the workplace.
I would like the consensus emerging from the summit to be an understanding that this is not an issue that a small group of people can solve on their own. It will not be enough to have a select group of political and big tech leaders at the table. I would hope, instead, to see a commitment to ensuring that, even if they are not there for the upcoming summit, a range of voices will be present for such discussions in the future.