If you're wondering whether malicious actors are using AI tools and language models like ChatGPT to improve and refine malware and phishing scripts that attack businesses and end-users while avoiding detection, the answer is (an expected) yes.

WormGPT is a new AI tool for cybercriminals, image credit: SlashNext
A new report from SlashNext, a company that ironically uses AI to stop phishing and other human-targeted threats in cyberspace, outlines how a new tool called WormGPT is being used to craft and refine malicious cyberattacks.
The report also notes a growing trend of OpenAI's ChatGPT playing a role in business email compromise (BEC) attacks, with the generative AI language model being used to create sophisticated, personalized fake emails designed to compromise systems. It can be as simple as an attacker writing the fake email in their native language, translating it to English, and then using ChatGPT to flesh it out and make it sound more professional.

"Those lacking fluency in a particular language are now more capable than ever of fabricating persuasive emails for phishing or BEC attacks," writes the report. To circumvent any protection measures, there's a growing trend of using ChatGPT "jailbreaks" to bypass blocks to stop creating inappropriate content or even malicious code.
Even more concerning is the arrival of WormGPT, a new tool built on the open-source GPT-J language model and designed explicitly for malicious activities. The tool's author notes that the AI was trained primarily on "malware-related data," and SlashNext gained access to WormGPT, using it to generate a fake email with the goal of pressuring an account manager into paying a fraudulent invoice.
With none of the checks or guardrails found in ChatGPT, the result was impressive in its detail and unsettling for what it could mean in the hands of criminals.