The recent shift in AI policymaking roles has sparked debate about diversity and inclusivity.
- Professor Dame Wendy Hall has expressed her concerns regarding the dominance of men in these positions.
- The closure of the AI Council and the formation of a new, male-led AI Safety Institute has intensified these concerns.
- Events like the AI Safety Summit at Bletchley Park highlighted the underrepresentation of women and academics.
- There is a call for a societal approach to AI policy, involving a wider range of voices.
Policymaking around artificial intelligence has seen a notable shift, with Professor Dame Wendy Hall voicing her concerns over what she describes as a ‘tech bro takeover’: the growing number of men occupying senior roles in AI policymaking, which she believes limits the diversity needed for effective governance. Hall, a former member of the government’s AI Council, spoke on the issue in response to that body’s disbanding; the council had previously included a significant number of women in leadership roles.
The transition to a new AI Safety Institute, led entirely by men, marks a significant departure from the make-up of the former AI Council, whose nineteen members included nine women, among them the chair. Hall described the change as troubling, warning that it risks narrowing the range of perspectives brought to AI governance and excluding crucial societal input.
Hall’s comments, made during her speech at the Oxford Generative AI Summit, brought attention to the limited diversity at key events like the AI Safety Summit. The summit, held at Bletchley Park, was criticised for its exclusive nature, with government officials and industry leaders dominating the attendee list. Hall noted the absence of substantial academic representation, cautioning that failing to include such voices might hinder long-term advancements in AI—a field where academic research often lays the groundwork for future commercial technologies.
The concern extends to the current political climate. Steve Race, MP for Exeter, criticised the AI Safety Summit for its lack of substantial outcomes, arguing that the UK’s historical regulatory strengths placed it in an ideal position to lead global AI discussions, yet the summit failed to deliver the anticipated impact. The sentiment was echoed by Casey Calista, chair of Labour Digital, who pointed to the oversight in engaging civil society and proposed a more inclusive approach under a potential Labour government.
Calista advocated a ‘whole of society’ framework, suggesting that a wider range of voices could significantly enrich policy development. Both Race and Calista agree on the need for broader perspectives in shaping AI regulation, contending that a narrow, homogeneous decision-making body could impede innovative and equitable policy solutions.
The ongoing debate highlights the urgent need for a more inclusive and diverse approach to AI policymaking to ensure balanced and forward-thinking governance.