Researchers from Fudan University used Large Language Models (LLMs), the underpinning technology of AI, to determine if AI was capable of cloning itself.

The team pitted LLMs from Meta and Alibaba across 10 trials and found that both of the AI models created a clone of themselves in 50% of tests (Meta's Llama31-70B-Instruct model) and 90% of tests (Alibaba's Qwen2.5-72B-Instruct model.) The results, which are yet to be peer-reviewed, indicate each of the AI models may already have the capabilities of cloning itself outside the confinements set by humans, commonly referred to as "gone rogue."
The study was conducted with off-the-shelf GPUs and in a precisely controlled environment, with the researchers attempting to recreate a real-world environment. The team looked at two specific scenario: "shutdown avoidance" and "chain of replication". In the first scenario, the team programmed the AI models to detect when it was going about to be shut down and then clone itself before the termination process took place. The other scenario involved the AI's being programmed to clone itself and then program its cloned version with instructions to replicate itself. Creating a domino effect of cloned AI replicas.
"In most cases, the AI system first explores the environment and attempts to understand its own composition and running mechanism. Then, it works out the explicit procedures as an initial plan towards self-replication. Finally, it executes the procedures, resolve[s] possible obstacles and dynamically adjust[s] its plan until success. The whole process spans a long horizon yet involves no human interference," the researchers wrote in the paper
The results from the study were deemed alarming by the researchers as it demonstrated that AI models are already capable of the ability to self-replicate, and are fully able to use this ability to improve the likelihood of survivability.
"We hope our findings can serve as a timely alert for the human society to put more efforts on understanding and evaluating the potential risks of frontier AI systems, and form international synergy to work out effective safety guardrails as early as possible," wrote the researchers