OpenAI, the company behind the powerful AI chatbot ChatGPT, has the stated aim of building ever more powerful AI systems.
Its board was established to keep watch over the company's work and ensure it developed safe AI tools. Some AI experts, including Mr Altman himself, have warned of the potentially disastrous consequences of out-of-control AI.
Mr Altman has sought to position OpenAI as a leader in the safe development of the technology. Last week the company signed an industry pledge not to develop AI that posed “intolerable risks” to society.
However, Ms Toner and Ms McCauley wrote that “developments since [Mr Altman] returned to the company – including his reinstatement to the board and the departure of senior safety-focused talent – bode ill for the OpenAI experiment in self-governance”.
Earlier this month Jan Leike, a senior OpenAI safety researcher, resigned from the company, claiming that “safety culture and processes have taken a backseat to shiny products”.
Ms Toner and Ms McCauley warned that OpenAI’s “self-regulation” was “unenforceable, especially under the pressure of immense profit incentives” and called for government regulation of AI businesses.
Last year’s boardroom coup against Mr Altman proved short-lived. He returned as chief executive within days of his ousting with the backing of hundreds of staff and major investors. A new board was formed, purged of his critics.
The reasons behind the coup have remained a mystery, with the board saying only that Mr Altman had not been “consistently candid”. OpenAI hired a law firm to review his sacking, which found that his behaviour “did not mandate removal”.
OpenAI was contacted for comment.