OpenAI has suffered yet another blow: a senior staffer on the company's AGI Readiness team, a team dedicated to advising OpenAI on the impact of the powerful AI models it's creating and how ready the world is for them, has left the company. The departure was promptly followed by a warning published to the former OpenAI staffer's Substack account.

The former OpenAI senior staffer is Miles Brundage, who, as of Friday this week, will no longer be working on OpenAI's AGI Readiness team. For those who don't know, AGI stands for Artificial General Intelligence, which describes an AI model with the same level of cognitive ability as a human across all fields. This level of sophistication has yet to be fully achieved, but given the potential impact of such a system coming online or potentially falling into the wrong hands, guardrail teams such as the AGI Readiness team were formed.
However, Brundage states in his post that OpenAI has "gaps" in its readiness policy, though it isn't alone in this problem: every other AI lab has them too. According to Brundage, neither OpenAI nor any other AI company, nor the world at large, is ready for AGI. Additionally, the post revealed that his departure triggered the complete disbanding of the AGI Readiness team, which comes at a time when OpenAI is attempting an internal restructuring into a for-profit business.
"Neither OpenAI nor any other frontier lab is ready, and the world is also not ready. To be clear, I don't think this is a controversial statement among OpenAI's leadership, and notably, that's a different question from whether the company and the world are on track to be ready at the relevant time (though I think the gaps remaining are substantial enough that I'll be working on AI policy for the rest of my career)."
"AI and AGI benefiting all of humanity is not automatic and requires deliberate choices to be made by decision-makers in governments, non-profits, civil society, and industry, and this needs to be informed by robust public discussion."
"I think it's likely that in the coming years (not decades), AI could enable sufficient economic growth that an early retirement at a high standard of living is easily achievable," he wrote. "Before that, there will likely be a period in which it is easier to automate tasks that can be done remotely."
"In the near term, I worry a lot about AI disrupting opportunities for people who desperately want work," wrote Brundage