
OpenAI created a team to control ‘superintelligent’ AI — then let it wither, source says | TechCrunch

OpenAI's Superalignment team, responsible for developing ways to steer and control "superintelligent" AI systems, was promised 20% of the company's compute resources, according to a person on that team. But requests for a fraction of that compute were often denied, blocking the team from doing its work.

That issue, among others, prompted several team members to resign this week, including co-lead Jan Leike, a former DeepMind researcher who, while at OpenAI, was involved in the development of ChatGPT, GPT-4, and ChatGPT's predecessor, InstructGPT.

Leike went public Friday morning with some of the reasons for his resignation. "I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point," Leike wrote in a series of posts on X. "I believe much more of our bandwidth should be spent getting ready for the next generations of models: on security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics. These problems are quite hard to get right, and I am concerned we aren't on a trajectory to get there."

OpenAI did not immediately respond to a request for comment about the resources promised and allocated to that team.

OpenAI formed the Superalignment team last July, led by Leike and OpenAI co-founder Ilya Sutskever, who also resigned from the company this week. The team had the ambitious goal of solving the core technical challenges of controlling superintelligent AI within four years. It comprised scientists and engineers from OpenAI's previous alignment division, as well as researchers from other organizations across the company, and was meant to contribute research informing the safety of both in-house and non-OpenAI models and, through initiatives including a research grant program, to solicit work from and share work with the broader AI industry.

The Superalignment team did manage to publish a body of safety research and award millions of dollars in grants to outside researchers. But as product launches began to take up an increasing amount of OpenAI leadership's bandwidth, the team found itself fighting for more upfront investment, investment it believed was critical to the company's stated mission of developing superintelligent AI for the benefit of all humanity.

"Building smarter-than-human machines is an inherently dangerous endeavor," Leike continued. "But over the past years, safety culture and processes have taken a back seat to shiny products."

Sutskever’s battle with OpenAI CEO Sam Altman contributed significantly to the distraction.

Sutskever, along with OpenAI's old board of directors, abruptly fired Altman late last year over concerns that Altman had not been "consistently candid" with board members. Under pressure from OpenAI's investors, including Microsoft, and many of the company's own employees, Altman was eventually reinstated, much of the board resigned, and Sutskever reportedly never returned to work.

According to the source, Sutskever played a key role in the Superalignment team, not only contributing research but also acting as a bridge to other divisions within OpenAI. He also served as an ambassador of sorts, impressing upon key OpenAI decision-makers the importance of the team's work.

Following the departures of Leike and Sutskever, another OpenAI co-founder, John Schulman, has moved over to head up the type of work the Superalignment team was doing. But there will no longer be a dedicated team; instead, there will be loosely affiliated groups of researchers embedded in divisions across the company. An OpenAI spokesperson described it as "integrating [the team] more deeply."

The fear is that, as a result, OpenAI's AI development won't be as safety-focused as it could have been.

