OpenAI insiders blow whistle, warn of reckless race for dominance



By Kevin Roose


A group of OpenAI insiders is blowing the whistle on what they say is a culture of recklessness and secrecy at the San Francisco artificial intelligence (AI) company, which is racing to build the most powerful AI systems ever created.


The group, which includes nine current and former OpenAI employees, has rallied in recent days around shared concerns that the company has not done enough to prevent its AI systems from becoming dangerous.


The members say OpenAI is putting a priority on profits and growth as it tries to build artificial general intelligence, or A.G.I., the industry term for a computer program capable of doing anything a human can.


They also claim that OpenAI has used hardball tactics to prevent workers from voicing their concerns about the technology, including restrictive nondisparagement agreements that departing employees were asked to sign.


“OpenAI is really excited about building A.G.I., and they are recklessly racing to be the first there,” said Daniel Kokotajlo, a former researcher in OpenAI’s governance division and one of the group’s organisers.

The group published an open letter on Tuesday calling for leading AI firms, including OpenAI, to establish greater transparency and more protections for whistle-blowers.


Other members include William Saunders, a research engineer who left OpenAI in February, and three other former OpenAI employees: Carroll Wainwright, Jacob Hilton and Daniel Ziegler. Several current OpenAI employees endorsed the letter anonymously because they feared retaliation from the company, Kokotajlo said. One current and one former employee of Google DeepMind, Google’s central AI lab, also signed.


A spokeswoman for OpenAI, Lindsey Held, said in a statement: “We’re proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk. We agree that rigorous debate is crucial given the significance of this technology, and we’ll continue to engage with governments, civil society and other communities around the world.”


A Google spokesman declined to comment.


The campaign comes at a rough moment for OpenAI. It is still recovering from an attempted coup last year, when members of the company’s board voted to fire Sam Altman, the chief executive, over concerns about his candor. Mr. Altman was brought back days later, and the board was remade with new members.


The company also faces legal battles with content creators who have accused it of stealing copyrighted works to train its models. (The New York Times sued OpenAI and Microsoft for copyright infringement last year.) And its recent unveiling of a hyper-realistic voice assistant was marred by a public spat with the Hollywood actress Scarlett Johansson, who claimed that OpenAI had imitated her voice without permission.


But nothing has stuck like the charge that OpenAI has been too cavalier about safety.


Inside Track




– Group includes nine current and former OpenAI staffers; one current and one former Google DeepMind employee also signed


– The group published an open letter calling for major AI firms to establish greater transparency


– Firm is prioritising profits and growth, they say


– They claim that OpenAI used hardball tactics to prevent workers from voicing their concerns about the technology, including restrictive nondisparagement agreements


©2024 The New York Times News Service

First Published: Jun 05 2024 | 11:27 PM IST