An OpenAI safety researcher has announced on her Substack that she is quitting her position at the company, saying she believes her goal of building humanity-protecting safety policies into AI development can be better achieved outside of OpenAI.
OpenAI has seen several pivotal staff members leave the company recently, and now another has been added to the list. Rosie Campbell joined OpenAI in 2021 with the goal of implementing safety policies for AI development, and now, according to a Substack post, the AI safety researcher is departing the company, citing internal changes such as shifts in workplace culture and in her ability to carry out what she believes is the most fundamental part of her job: AI safety.
Campbell wrote in the Substack post that she was a member of OpenAI's Policy Research team, where she worked closely with Miles Brundage, a senior staffer on OpenAI's Artificial General Intelligence (AGI) Readiness team, which was dedicated to making sure the world is prepared for AGI when it's achieved. Notably, Brundage left OpenAI in October and published a letter on Substack citing concerns with OpenAI's internal policies regarding AGI safety and writing that there are "gaps" in the company's readiness policy.
Campbell's departure announcement was much vaguer, with the AI safety researcher writing, "While change is inevitable with growth, I've been unsettled by some of the shifts over the last ~year and the loss of so many people who shaped our culture."
"I've always been strongly driven by the mission of ensuring safe and beneficial AGI, and after Miles's departure and the dissolution of the AGI Readiness team, I believe I can pursue this more effectively externally," wrote Campbell
While Campbell doesn't directly point to any specific problems with OpenAI's AI safety policies, her close working relationship with Brundage and her conclusion that AGI readiness and AI safety work can be pursued more effectively outside the company suggest those policies fall short of what she, and other former OpenAI employees who have voiced similar concerns, consider satisfactory.