A new Trojan dubbed "OpenClaw" is raising serious alarms, with researchers warning that AI agents are now being weaponized to take full control of thousands of systems.

Security analysts report that OpenClaw has already compromised more than 28,000 machines, leveraging AI-driven automation to execute commands, adapt to environments, and maintain persistence on these systems in ways that traditional malware struggles to achieve. The key concern isn't the raw number of affected machines but the infection's capability: OpenClaw effectively hands attackers a semi-autonomous operator inside a system, and that operator has access to the entire machine.
According to a TechRadar report, the malware uses these AI agents to interact dynamically with compromised environments. Because the agents have access to the entire machine, the malware can make real-time decisions and control the system, letting attackers automate system monitoring, move laterally across everything the agent can reach, and exfiltrate data.
"The math is simple: when you give an AI agent full access to your computer, you give that same access to anyone who can compromise it," the researchers warned.
The report outlines how AI lowers the barrier to entry while increasing efficiency, allowing a single operator to control thousands of endpoints simultaneously. These capabilities mark a significant evolution in cyber threats, where automation meets adaptability.
"Don't just blindly download one of these things and start using it on a system that has access to your whole personal life. Build in some separation and run some experiments of your own before you really trust the new technology to do what you want it to do," said Jeremy Turner, VP of Threat Intelligence at SecurityScorecard.
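One practical way to "build in some separation" of the kind Turner describes is to run an agent inside a locked-down container rather than directly on the host. The following is a minimal sketch, not a complete defense; it assumes Docker is installed, and the image name `agent-image` and the workspace path are hypothetical placeholders, not products named in the report.

```shell
# Hedged example: confine an AI agent to a restricted container so a
# compromise of the agent does not mean compromise of the whole machine.
# "agent-image" and "agent-workspace" are placeholder names.
docker run --rm \
  --network none \
  --read-only \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  --memory 512m \
  --cpus 1 \
  -v "$PWD/agent-workspace:/work:rw" \
  agent-image
```

Here `--network none` removes outbound access, `--read-only` plus `--cap-drop ALL` limits what the process can touch or escalate to, and the single volume mount means the agent sees only one scratch directory instead of your whole personal life. Relax each restriction deliberately, only after the experiments Turner recommends.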
Looking ahead, OpenClaw underscores a growing shift in cybersecurity. As AI becomes more integrated into offensive tooling, defenders will need to rethink detection strategies. The rise of AI-powered malware isn't theoretical anymore: it's already here, and it appears to be scaling in step with the sophistication of AI itself.
"The risk isn't that these systems are thinking for themselves. It's that we're giving them access to everything," added Turner.
