Jun 14 2024

Understanding Customized Phishing in the Age of Generative AI

Hyperpersonalized email scams are surging. Here’s how healthcare organizations can navigate them.

As much as generative artificial intelligence is helping healthcare organizations to increase clinical and administrative productivity, it’s also proving to be a useful tool for bad actors. According to a recent warning from the FBI, cybercriminals are using AI to orchestrate highly targeted and customized phishing attacks.

Malicious actors conduct social engineering-driven phishing attacks by leveraging generative AI to craft convincing messages or by impersonating co-workers or family using AI-powered voice and video cloning. Unfortunately, these attack methods often work. Eighty percent of security leaders say their organizations have fallen victim to phishing emails written by generative AI.

Healthcare organizations need to know what they’re up against in this new frontier of email scams so that IT leaders can protect the organization and patient data.

How Are Phishing Email Attacks Evolving?

“Phishing attacks deliberately play with human psychology and personal bias,” Fredrik Heiding, a Ph.D. research fellow at Harvard University, said at last year’s Black Hat USA conference. “They work because they hijack shortcuts in your brain. But if you pause and reflect on the contents of an email, your rational brain will take over and stop you from clicking.” That means readers may need to spend more time scrutinizing emails than they used to.

Traditionally, phishing emails have been full of grammatical and punctuation errors. In fact, 61 percent of people spot scams such as phishing emails because of the poor spelling and grammar they contain. But as Okta reports, those signals are no longer prevalent because generative AI corrects such errors.

Generative AI tools such as ChatGPT can churn out flawless text in multiple languages at rapid speeds, enabling widespread phishing schemes that are sophisticated and personalized. Generative AI also learns with each interaction, so its efficiency only increases over time.

EXPLORE: Dig into research compiled in the 2024 CDW Cybersecurity Research Report.

“Generative AI tools are letting criminals craft well-written scam emails, with 82 percent of workers worried they will get fooled,” notes AI Business.

Stephanie Carruthers, chief people hacker for IBM’s X-Force Red, recently led a research project that showed phishing emails written by humans have a better click-through rate than phishing emails written by ChatGPT, but only by 3 percent. Still, it won’t be long before phishing emails crafted by generative AI models garner higher CTRs than those written by humans, especially as the models leverage personality analysis to generate emails tailored to targets’ backgrounds and traits.

Generative AI models are already more efficient than any human could hope to be. This is one of the reasons threat actors are leveraging the technology to their benefit.

82%

The percentage of employees who fear they cannot distinguish phishing from genuine email messages

Source: aibusiness.com, “Generative AI Opens New Front in Phishing Email Wars,” April 5, 2023

How Are Threat Actors Using Generative AI?

“We were able to trick a generative AI model to develop highly convincing phishing emails in just five minutes,” Carruthers notes in an IBM Security Intelligence blog post. “It generally takes my team about 16 hours to build a phishing email, and that’s without factoring in the infrastructure setup. So, attackers can potentially save nearly two days of work by using generative AI models.”

Between these time savings and the email personalization that generative AI allows, threat actors are leveraging ChatGPT, WormGPT and other AI-as-a-service products to create new phishing emails at a rapid pace. This enables them to attack more widely, more frequently and with greater success. The technology can also send customized phishing emails to a specific group of people, a tactic particularly useful for spear phishing.

This is a big reason 98 percent of senior cybersecurity executives say they’re concerned about the cybersecurity risks posed by ChatGPT, Google Gemini (formerly Bard) and similar generative AI tools. But AI is merely a tool. Just as it can be used to improve phishing email attacks, it can be used to better defend against them.

RELATED: Staff shortages are impacting healthcare cybersecurity strategies.

How Can You Protect Against These New Attacks?

As phishing email attacks continue to evolve, healthcare security leaders must improve their defenses. According to a recent study, more than half of IT organizations rely on their cloud email providers and legacy tools for security, and they are confident that these and other traditional solutions will be able to detect and block AI-generated attacks. These protections help, but the best defense against AI is AI.

Check Point lists three main benefits of using AI for email security: improved threat detection, enhanced threat intelligence and faster incident response.

AI can identify phishing content through a range of techniques, including behavioral analysis, natural language processing, attachment analysis and malicious URL detection, and it can enrich threat intelligence and accelerate incident response.
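To make those techniques concrete, here is a minimal sketch in Python of how a rules-plus-language-analysis triage layer might score an incoming message. Everything in it is hypothetical and for illustration only: the allowlist, the urgency phrases, the weights and the example domains are invented, and real email security products combine signals like these with trained models and live threat intelligence feeds rather than hand-tuned rules.

```python
# Minimal, illustrative phishing triage sketch. All names, keyword lists,
# domains and score weights here are hypothetical, not any vendor's logic.
import difflib
import re
from urllib.parse import urlparse

# Hypothetical allowlist of domains the organization actually uses.
TRUSTED_DOMAINS = {"example-health.org"}
# Language signals common in credential-harvesting lures.
URGENCY_TERMS = ("urgent", "immediately", "verify your account", "password expires")


def extract_urls(body: str) -> list[str]:
    """Pull http(s) URLs out of the message body."""
    return re.findall(r"https?://[^\s\"'>]+", body)


def looks_like_typosquat(host: str) -> bool:
    """Flag hosts that closely resemble, but do not match, a trusted domain."""
    return host not in TRUSTED_DOMAINS and any(
        difflib.SequenceMatcher(None, host, trusted).ratio() > 0.8
        for trusted in TRUSTED_DOMAINS
    )


def score_email(sender: str, subject: str, body: str) -> int:
    """Return a crude risk score; higher means more phishing-like."""
    score = 0
    text = f"{subject} {body}".lower()

    # Language analysis: urgency and account-threat phrasing.
    score += sum(2 for term in URGENCY_TERMS if term in text)

    # Malicious URL signals: IP-literal links and lookalike domains.
    for url in extract_urls(body):
        host = (urlparse(url).hostname or "").lower()
        if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host):
            score += 3
        if looks_like_typosquat(host):
            score += 3

    # Spoofing signal: friendly display name over an untrusted sending domain.
    match = re.search(r"<[^<>@]+@([^<>]+)>", sender)
    if match and match.group(1).lower() not in TRUSTED_DOMAINS:
        score += 1

    return score


if __name__ == "__main__":
    risk = score_email(
        "IT Help Desk <support@examp1e-health.org>",
        "Urgent: verify your account",
        "Your password expires today. Sign in at http://192.0.2.10/login immediately.",
    )
    print("risk score:", risk)  # route to quarantine or review above a tuned threshold
```

In practice, a score like this would be only one feature among many: behavioral baselines (who normally emails whom) and attachment sandboxing catch what simple text rules miss, which is why the behavioral analysis and attachment analysis mentioned above matter as much as URL and language checks.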

In addition to AI security defenses, businesses must also implement security training to reduce the likelihood of human error. This means educating employees on what generative AI-based phishing attacks look like, from telltale stylistic patterns to typical grandiose promises, explains Glenice Tan, cybersecurity specialist at the Government Technology Agency, in a Wired article.

“There’s still a role for security training,” she says. “Be careful and remain skeptical.”

UP NEXT: Avoid becoming the target of a phishing email.
