AI-powered honeypots leverage advances in natural language processing and machine learning, such as fine-tuned large language models (LLMs), to create highly interactive and realistic decoy systems.
“Using data sets of attacker-generated commands and responses, these models are trained to mimic server behaviors convincingly. Techniques such as supervised fine-tuning, prompt engineering and low-rank adaptations help tailor these models for specific tasks,” explains Hakan T. Otal, a Ph.D. student in SUNY Albany’s Department of Information Science and Technology.
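To make the low-rank adaptation (LoRA) technique Otal mentions concrete, here is a minimal NumPy sketch of the core idea: rather than updating a full pretrained weight matrix, training learns two small low-rank matrices whose product is added to the frozen weights. The dimensions and scaling factor below are illustrative assumptions, not details of any specific honeypot model.

```python
import numpy as np

# LoRA sketch: freeze the pretrained weight matrix W (d_out x d_in) and
# learn only B (d_out x r) and A (r x d_in), with rank r << min(d_out, d_in).
d_out, d_in, r = 512, 512, 8
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))    # frozen pretrained weights
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))                  # B starts at zero, so W_eff == W at step 0

alpha = 16  # scaling hyperparameter (illustrative value)
W_eff = W + (alpha / r) * (B @ A)         # effective weights used at inference

full_params = W.size
lora_params = A.size + B.size
print(f"full fine-tune: {full_params:,} params; LoRA update: {lora_params:,} params")
```

The parameter count shows why this tailoring is cheap: the trainable LoRA matrices here hold 8,192 values versus 262,144 for a full update of the same layer.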
How Do AI-Powered Honeypots Benefit Healthcare Organizations?
AI-enhanced honeypots can act as an early warning system against the growing number of cyberattacks, diverting attackers away from the critical systems that store and maintain sensitive data and reducing the likelihood of successful breaches, according to Otal.
“This system can also detect and log malicious activity to provide actionable insights for improving cybersecurity,” Otal explains.
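The detect-and-log behavior Otal describes can be illustrated with a toy command-logging honeypot shell. This is a simplified sketch, not a production system or the SUNY Albany design: it returns canned responses for common reconnaissance commands and records every attempt as a structured entry that could later feed a SIEM for analysis.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("honeypot")

# Canned responses for common reconnaissance commands (illustrative values).
CANNED_RESPONSES = {
    "whoami": "root",
    "uname -a": "Linux db-prod-01 5.15.0-91-generic x86_64 GNU/Linux",
    "ls /etc": "passwd  shadow  ssh  hosts",
}

captured = []  # in a real deployment this would feed a SIEM or data store

def handle_command(source_ip: str, command: str) -> str:
    # Log every attempt as a structured record for later threat analysis.
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "src": source_ip,
        "cmd": command,
    }
    captured.append(entry)
    log.info(json.dumps(entry))
    # Unknown commands get a realistic-looking error rather than silence.
    return CANNED_RESPONSES.get(command, f"bash: {command.split()[0]}: command not found")

print(handle_command("203.0.113.7", "whoami"))          # -> root
print(handle_command("203.0.113.7", "cat /etc/shadow"))
```

Each logged record captures the timestamp, source address and attempted command, which is the raw material for the "actionable insights" the researchers describe.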
Honeypots also have educational value; Sachan points out that they can be used to train IT staff on cybersecurity risks and defenses.
Pros and Cons of AI-Powered Honeypots
Boosting a honeypot with artificial intelligence enables dynamic and realistic interactions with attackers, improving the quality of data collected. Models can evolve to respond to emerging attack tactics through reinforcement learning.
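As a toy illustration of how reinforcement learning could let a honeypot adapt, the epsilon-greedy bandit below learns which response "persona" keeps a simulated attacker engaged longest. The personas and the engagement reward are invented for this sketch and are not part of any published honeypot system.

```python
import random

random.seed(42)

# Candidate response styles the honeypot can present (hypothetical names).
personas = ["verbose-linux", "terse-linux", "windows-like"]
counts = {p: 0 for p in personas}
values = {p: 0.0 for p in personas}  # running average engagement reward

def simulated_engagement(persona: str) -> float:
    # Stand-in for a measured signal such as attacker dwell time;
    # "verbose-linux" is arbitrarily made the most engaging here.
    base = {"verbose-linux": 0.8, "terse-linux": 0.5, "windows-like": 0.3}[persona]
    return base + random.uniform(-0.1, 0.1)

epsilon = 0.1  # fraction of the time we explore a random persona
for _ in range(2000):
    if random.random() < epsilon:
        choice = random.choice(personas)           # explore
    else:
        choice = max(personas, key=values.get)     # exploit best-so-far
    reward = simulated_engagement(choice)
    counts[choice] += 1
    # Incremental update of the running average reward for this persona.
    values[choice] += (reward - values[choice]) / counts[choice]

best = max(personas, key=values.get)
print("learned best persona:", best)
```

The same feedback loop, with real engagement signals in place of the simulated reward, is how a deployed honeypot could shift toward the behaviors that attackers find most convincing.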
Sachan points out that creating AI honeypots can also mean faster deployment, drastically lower deployment costs, and more realistic, highly convincing honeypots that mimic real network activity, traffic patterns and logs. Leveraging AI for honeypot maintenance can improve threat detection accuracy and let honeypots evolve and adapt to new attack methods, making them more difficult for hackers to identify.
On the other hand, there are still challenges when using AI-powered honeypots, including static behaviors and predictable patterns that can make them detectable by attackers, Otal says.
Moreover, while deployment costs could be cut, fine-tuning and maintaining AI models still requires significant investment in hardware, software, licenses and skilled AI professionals.