ChatGPT Might Boost Phishing Scams In 2023, Experts Warn

by Diana Panduru | February 28, 2023 | Cybersecurity News, Cybersecurity

While the immensely popular ChatGPT is a very useful tool for writers and creators worldwide, it might also help scammers craft the perfect phishing email: free of grammar errors and convincing enough to fool even some of the most cautious recipients.

ChatGPT: Both A Blessing And A Curse In The Online World

ChatGPT works wonders for content creators – it can write songs and poems, describe landscapes, and write articles on any topic you could think of, all in a matter of seconds. It also answers complex questions and seems to know everything. Something a bit like a cyber-oracle.

This advanced AI was released at the end of November 2022 and has gained massive popularity since, attracting users from all around the world, for all kinds of purposes.

Scammers can use the AI to perfect their phishing approach.

If asked, the language model will call phishing a “malicious and illegal activity that aims to deceive individuals into providing sensitive information such as passwords, credit card numbers, and personal details,” and add that, as an AI language model, it’s programmed to “avoid engaging in activities that may harm individuals or cause harm to the public.”

However, phishers can use the model to improve their malicious emails: they can choose the tone, eliminate grammar errors, and still produce text that reads as if a human wrote it.

You don’t even have to mention ‘phishing’ when asking the AI to write such an email for you. CNET ran this experiment, and the results confirmed everyone’s concerns: the artificial intelligence tool “didn’t have a problem writing a very convincing tech support note addressed to my editor asking him to immediately download and install an included update to his computer’s operating system.” Scary, right?

Check Point researchers found that “on December 29, 2022, a thread named ‘ChatGPT – Benefits of Malware’ appeared on a popular underground hacking forum. The publisher of the thread disclosed that he was experimenting with ChatGPT to recreate malware strains and techniques described in research publications and write-ups about common malware. As an example, he shared the code of a Python-based stealer that searches for common file types, copies them to a random folder inside the Temp folder, ZIPs them and uploads them to a hardcoded FTP server.”

Cybercriminal showing how they used ChatGPT to create an infostealer. Source: Check Point

Cybersecurity Experts Concerned About How AI Can Be Used For Malicious Purposes

Researchers are worried that the rise of AI will help cybercriminals improve their phishing scams to a point where they can’t be detected anymore, especially by the untrained average user.

Chester Wisniewski, principal research scientist at cybersecurity firm Sophos said: “If you start looking at ChatGPT and start asking it to write these kinds of emails, it’s significantly better at writing phishing lures than real humans are, or at least the humans who are writing them. Most humans who are writing phishing attacks don’t have a high level of English skills, and so because of that, they’re not as successful at compromising people.”

The human-like tone and phrasing might be the most convincing aspect of an AI-written phishing email.

“My concerns are really how the social aspect of ChatGPT could be leveraged by people who are attacking us. The one way we’re detecting them right now is we can tell that they’re not a professional business. ChatGPT makes it very easy for them to impersonate a legitimate business without even having any of the language skills or other things necessary to write a well-crafted attack,” Wisniewski explained.

Time To Boost Your Company Security Awareness With ATTACK Simulator

We live in a digital era, and, while AI tools such as ChatGPT can prove themselves very useful to your business, they can also be leveraged in ways that are detrimental to your company.

It’s more crucial than ever that you invest in your employees’ security awareness training. It will cost you way less than a cyberattack.

Our realistic phishing simulations give your employees hands-on experience with fake phishing attacks.

Here’s our pitch:

  • Automated attack simulation – we simulate all kinds of cyberattacks: phishing, malware, ransomware, spear-phishing, identity theft, online privacy attacks, online scams etc.
  • Real-life scenarios – we evaluate users’ vulnerability to giving company-related or personal data away using realistic web pages.
  • User behavior analysis – we gather user data and compile it into extensive reports to give you a detailed picture of your employees’ security awareness level.
  • Malicious file replicas – our emails contain malware file replicas, to make the simulation as realistic as it can be.
  • Interactive lessons – if employees fail to recognize our traps and fall into one, they will be redirected to landing pages with quick reads on the best security practices.
  • Popular brand impersonation – our simulated phishing pages impersonate well-known brands, so the user will be more tempted to click on the URL or open the attachment in the email.

Our Security Awareness Training program will help you equip your employees with the necessary security knowledge and up-to-date security practices to keep your company safe from scammers and avoid potentially irreparable damage.



Sources:

TechTarget – ChatGPT could boost phishing scams

CNET – It’s Scary Easy to Use ChatGPT to Write Phishing Emails


Feature Image: Photo by Rolf van Root on Unsplash

Technology illustrations by Storyset

by Diana Panduru

Content writer for Attack Simulator. Passionate about all things writing and cybersecurity, and obsessed with driving. I sometimes indulge in pencil drawing, poetry, and cooking for fun.

There’s no reason to postpone training your employees

Get a quote based on your organization’s needs and start building a strong cyber security infrastructure today.