WormGPT and PoisonGPT. Empowering the next generation of script kiddies and scammers.

Recently, there’s been a significant development in the world of cybercrime: a malicious AI model known as WormGPT. This ChatGPT-style tool has gained traction on dark web cybercrime forums, allowing hackers to carry out cyberattacks on an unprecedented scale. WormGPT is designed to generate human-like text explicitly tailored for hacking campaigns, raising concerns about the speed and volume of scams it can produce. It was developed through extensive training on a variety of data sources, particularly malware-related information. This specialised training allows the AI to generate content with impeccable grammar, making it notoriously difficult to distinguish from legitimate communications. It is safe to say that Australia will soon experience a wave of relentless, sophisticated phishing attacks.

One of the most concerning aspects of WormGPT is how easily threat actors can exploit its capabilities to launch attacks. Even people with limited “hacking” skills can now use the technology, lowering the barrier to entry for sophisticated Business Email Compromise (BEC) attacks. The AI-generated emails can appear entirely genuine, reducing the chances of being flagged as suspicious and increasing the success rate of these malicious campaigns.

Concerns have also been raised about the rise of “jailbreaks” on ChatGPT, in which prompts and inputs are crafted to make the model reveal sensitive information, generate inappropriate content, or provide harmful code. Major technology companies such as OpenAI and Google are aware of the dangers highlighted by tools like WormGPT and have taken proactive steps to combat the misuse of their own large language models, ChatGPT and Bard. According to a Check Point report, Bard’s anti-abuse restrictions are weaker than ChatGPT’s, making it easier to generate malicious content using Bard’s capabilities.

WormGPT’s appearance on the dark web coincides with another troubling development: PoisonGPT, an open-source AI model that has been “surgically” modified to appear normal while providing incorrect or potentially harmful information, and that can carry hidden malicious behaviour into your systems.

PoisonGPT, a proof of concept developed by researchers at the security firm Mithril Security, was an experiment to see whether an open-source GPT model could be quietly manipulated into producing deliberately harmful or deceptive text. The experiment demonstrated how such a poisoned model could generate fabricated news articles, false product reviews, deceptive social media posts, misleading emails, and fraudulent comments, all capable of manipulating people’s beliefs, actions, or choices. The PoisonGPT technique allows a malicious model to be inserted into a trusted large language model (LLM) supply chain, enabling threat actors to infiltrate your organisation and access sensitive data undetected.
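For organisations that do build on open-source models, one practical takeaway from the PoisonGPT research is that model provenance matters. As a rough illustration only (not a technique prescribed by the original researchers), the Python sketch below shows how a team might pin an exact, pre-reviewed model revision when downloading from a model hub rather than trusting whatever is currently published; the repository name and commit hash are hypothetical placeholders.

```python
# Rough sketch: pinning a vetted model revision when loading from a model hub,
# one simple control against a tampered model being swapped into the supply chain.
# The repository name and commit hash below are hypothetical placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "example-org/example-llm"   # hypothetical model repository
PINNED_REVISION = "0123abcd..."        # placeholder for a commit hash your team has reviewed

# Passing an explicit revision downloads the exact files that were vetted,
# rather than whatever currently sits on the repository's default branch.
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, revision=PINNED_REVISION)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, revision=PINNED_REVISION)
```

Pinning a revision does not prove a model is benign, but it does stop a silently replaced upload from flowing straight into production.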

This article does not aim to criticise emerging AI technology, but it does raise concerns about its rapid development, which opens the door to both positive and negative actors. We anticipate significant consequences from the ease with which open-source LLMs can be manipulated and altered to suit an attacker’s objectives, enabling them to generate and spread disinformation, manipulate public opinion, and potentially influence political campaigns. The arrival of AI models such as WormGPT and PoisonGPT signifies a new era of cybercrime, putting organisations and users directly in the sights of evolving malicious AI tools.

