Agentic AI: The Double-Edged Sword for Security Defenders
The Dark Underbelly of Agentic AI
These and other data points show the dark underbelly where the agentic boon has turned into a bane and created more work for security defenders. “For almost all situations, agentic AI technology requires high levels of permissions, rights, and privileges in order to operate. I recommend that security leaders should consider the privacy, security, ownership, and risk any agentic AI deployment may have on your infrastructure,” said Morey Haber, chief security advisor at BeyondTrust.
What is Agentic AI?
Generative AI agents are described by analyst Jeremiah Owyang as “autonomous software systems that can perceive their environment, make decisions, and take actions to achieve a specific goal, often with the ability to learn and adapt over time.” Agentic AI takes this a step further by coordinating groups of agents autonomously through a series of customized integrations to databases, models, and other software. These connections enable the agents to adapt dynamically to their circumstances, gain more contextual awareness, and coordinate actions among multiple agents.
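To make that definition concrete, here is a minimal, hypothetical sketch in Python of the perceive/decide/act loop behind a single agent, with plain callables standing in for the customized integrations to databases, models, and other software described above. Every name in it is an illustrative assumption rather than any vendor's actual API, and a real agentic system would coordinate many such agents and consult a language model inside the decision step.

    # Hypothetical sketch of a single agent's perceive/decide/act loop.
    # All names are illustrative assumptions, not a specific product's API.
    from dataclasses import dataclass, field

    @dataclass
    class Agent:
        goal: str
        tools: dict = field(default_factory=dict)   # integrations: databases, models, other software
        memory: list = field(default_factory=list)  # lets the agent learn and adapt over time

        def perceive(self, environment: dict) -> dict:
            # Gather the observations relevant to the goal and remember them.
            observation = {k: v for k, v in environment.items() if k in ("alerts", "assets")}
            self.memory.append(observation)
            return observation

        def decide(self, observation: dict) -> tuple[str, dict]:
            # Choose the next action; a real agent would consult a model here.
            if observation.get("alerts"):
                return "triage_alert", {"alert": observation["alerts"][0]}
            return "scan_assets", {"assets": observation.get("assets", [])}

        def act(self, action: str, args: dict):
            # Execute the chosen action through a registered tool integration.
            tool = self.tools.get(action)
            return tool(**args) if tool else None

    # Usage: register two stand-in tool integrations and run one cycle.
    agent = Agent(
        goal="keep known threats out of the environment",
        tools={
            "triage_alert": lambda alert: f"triaged {alert}",
            "scan_assets": lambda assets: f"scanned {len(assets)} assets",
        },
    )
    observation = agent.perceive({"alerts": ["suspicious login"], "assets": ["web-01", "db-01"]})
    print(agent.act(*agent.decide(observation)))

The point of the sketch is simply that each step, from perception to action, can be wired to real systems with real permissions, which is exactly where the risks discussed below come from.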
Google’s threat intelligence team documents numerous specific examples of current AI-enabled abuses in a recent report.
The Double Standard of Trust
But the question of how far to trust security tools isn’t anything new. When network packet analyzers were first introduced, they revealed intrusions but were also used to find vulnerable servers. Firewalls and VPNs can segregate and isolate traffic but can also be leveraged to give attackers access and enable lateral movement across the network. Backdoors can be built for both good and evil purposes. Yet none of these older tools has ever been so superlatively good and bad at the same time as agentic AI.
In the rush to develop agentic AI, the potential for future misery was created as well.
Conclusion
Agentic AI has the potential to revolutionize the way security defenders work, but it also raises concerns about its impact on security. As security leaders consider deploying agentic AI technology, it is essential to weigh the benefits against the risks and think through the possible consequences. By doing so, we can ensure that agentic AI is used to enhance security rather than compromise it.
FAQs
Q: What is agentic AI?
A: Agentic AI refers to autonomous software systems that can perceive their environment, make decisions, and take actions to achieve a specific goal, often with the ability to learn and adapt over time.
Q: What are the benefits of agentic AI?
A: Agentic AI has the potential to revolutionize the way security defenders work by coordinating groups of agents autonomously through a series of customized integrations to databases, models, and other software, enabling the agents to adapt dynamically to their circumstances and gain more contextual awareness.
Q: What are the risks associated with agentic AI?
A: Agentic AI technology requires high levels of permissions, rights, and privileges in order to operate, which can compromise the security and privacy of an organization’s infrastructure. Additionally, the potential for future misery and misuse is a concern.
Q: How can security leaders mitigate the risks associated with agentic AI?
A: Security leaders should consider the privacy, security, ownership, and risk any agentic AI deployment may have on their infrastructure and weigh the benefits against the risks before deploying the technology.
Q: Are there any examples of current AI-fed abuses?
A: Yes. Google’s threat intelligence team documents numerous specific examples of current AI-enabled abuses in a recent report.
Q: How does agentic AI compare to older security tools?
A: Older security tools, such as network packet analyzers and firewalls, have also been used for both good and evil purposes. Agentic AI is unique in its ability to adapt dynamically to its circumstances and coordinate actions among multiple agents.