How Are Phishing Email Attacks Evolving?
“Phishing attacks deliberately play with human psychology and personal bias,” Fredrik Heiding, a Ph.D. research fellow at Harvard University, said at last year’s Black Hat USA conference. “They work because they hijack shortcuts in your brain. But if you pause and reflect on the contents of an email, your rational brain will take over and stop you from clicking.” That means readers may need to spend more time scrutinizing emails than they used to.
Traditionally, phishing emails have been riddled with grammatical and punctuation errors. In fact, 61 percent of people spot scams such as phishing emails because of the poor spelling and grammar they contain. But as Okta reports, those telltale signs are fading because generative AI corrects such errors.
Generative AI tools such as ChatGPT can churn out flawless text in multiple languages at remarkable speed, enabling widespread phishing campaigns that are both sophisticated and personalized. Generative AI also learns with each interaction, so its efficiency only increases over time.
“Generative AI tools are letting criminals craft well-written scam emails, with 82 percent of workers worried they will get fooled,” notes AI Business.
Stephanie Carruthers, chief people hacker for IBM’s X-Force Red, recently led a research project that showed phishing emails written by humans have a better click-through rate (CTR) than phishing emails written by ChatGPT, but only by 3 percentage points. Still, it won’t be long before phishing emails crafted by generative AI models garner higher CTRs than those written by humans, especially as the models leverage personality analysis to generate emails tailored to targets’ backgrounds and traits.
Generative AI models are already far more efficient at this work than any human could hope to be, which is one reason threat actors are leveraging the technology to their advantage.