Online scams are about to get more sophisticated than Nigerian princes

Step aside, ChatGPT – there’s a new generative artificial intelligence model in town that poses more immediate risks than job snatching and could drive the next surge in cybercrime.

Described as ChatGPT’s “evil cousin”, WormGPT is one of several johnny-come-lately generative AI model prototypes that have sprung up from the depths of the dark web. But, unlike OpenAI’s tool, it has been designed specifically for the malicious, mass deployment of hacking, spamming and disinformation – allowing bad actors to more accurately mimic the real deal in attempts to swindle and deceive people.

While this means we can probably kiss goodbye to the typo-ridden scam email, there’s no reason to celebrate. WormGPT means cyberattacks are about to get more sophisticated, turning online crooks into computerised chameleons who adeptly target their unsuspecting victims.

While hacking and scamming are nothing new, previous attempts were often easier to spot through their poor spelling, grammar and formatting. For decades, most “spray and pray” spammers have been automatically blocked by spam filters.

WormGPT can design more advanced, targeted and personalised phishing attacks, with the ability to imitate writing styles and convincingly tailor copy for the specific person or entity it is trying to deceive.

These attempts can be further personalised by supplying the model with previous email samples and social media posts to mimic the writing style of real people or organisations. Attackers can also obtain images of everyday people posted to social media and customise them according to the scam’s context to make their story even more convincing.

These techniques, coupled with existing and rapidly proliferating AI-generated voice, speech, video and conversational styles, will make it harder to tell the real from the fake.

Just imagine how effective a romance scam that could make. With AI, what originated as badly penned declarations of love from princes in foreign lands is now often indistinguishable from online interactions with a real person.

WormGPT can also be used to improve the design of malware, phishing web addresses and exploits – effectively tricking not just people but computer systems and servers into doing its bidding.

It can do this by generating malicious code and performing “code obfuscation”, which makes it difficult for malware analysts to understand the code’s true purpose, as well as by designing other malicious inputs that can be entered into web forms (such as user registration forms) to gain unauthorised access to someone’s device or account. There have even been reports of AI being used to generate malicious domains that resemble legitimate URLs.

All of these new tricks give cyberattackers a broad-spectrum arsenal with which to automate and significantly scale their offensives. Then, when you consider all the major data breaches over the past year, it’s likely we’ll begin seeing simulated scam profiles based on real people.

Of course, none of this augurs well for our future safety: policies that keep up with technology are notoriously slow, and the cybersecurity community is now bracing for a maelstrom of malicious cyberattacks.

But the advent of generative AI models needn’t spell the beginning of all bad things. There are signs policy will eventually keep pace, with new initiatives such as the proposed national anti-scams centre a positive first step.

We can harness these technologies for good by bringing them into the mainstream and moulding them around positive uses. Otherwise, we risk them being monopolised and weaponised by bad actors. Generative AI models could one day even become the basis for future technologies that are used to fight cybercrime.

Until then, we need to be more vigilant than ever and alive to every attempt. The risk is that if these tools and prototypes are not subdued quickly enough, their evolution will hasten and we may reach a point where nothing online is safe and the internet becomes a Wild West with bandits at every turn.


This article is republished from the Sydney Morning Herald with permission. Read the original article.

Image: Rahul Pandit

Dr Suranga Seneviratne is a Lecturer in Security at the School of Computer Science, the University of Sydney. His current research interests include privacy and security in mobile systems, AI applications in security, and behaviour biometrics.
