Security Benefits of Generative AI: Do They Outweigh the Harms?
Security Professionals’ Concerns
According to a recent report by CrowdStrike, just 39% of security professionals believe the rewards of generative AI outweigh the risks. The report surveyed 1,022 security researchers and practitioners from the US, APAC, EMEA, and other regions, revealing that cyber professionals are deeply concerned about the challenges associated with AI.
While 64% of respondents have purchased generative AI tools for work or are researching them, adoption remains cautious: 32% are still exploring the tools, and only 6% are actively using them.
What Are Security Researchers Seeking from Generative AI?
According to the report:
- The highest-ranked motivation for adopting generative AI isn’t addressing a skills shortage or meeting leadership mandates – it’s improving the ability to respond to and defend against cyberattacks.
- AI for general use isn’t necessarily appealing to cybersecurity professionals. Instead, they want generative AI partnered with security expertise.
- 40% of respondents said the rewards and risks of generative AI are “comparable.” Meanwhile, 39% said the rewards outweigh the risks, and 26% said the risks outweigh the rewards.
Security teams want to deploy generative AI as part of a platform to get more value from existing tools, elevate the analyst experience, accelerate onboarding, and eliminate the complexity of integrating new point solutions.
Measuring ROI: A Challenge
Measuring ROI has been an ongoing challenge when adopting generative AI products. CrowdStrike found quantifying ROI to be the top economic concern among their respondents. The next two top-ranked concerns were the cost of licensing AI tools and unpredictable or confusing pricing models.
Assessing AI ROI
CrowdStrike divided the ways to assess AI ROI into four categories, ranked by importance:
- Cost optimization from platform consolidation and more efficient security tool use (31%).
- Reduced security incidents (30%).
- Less time spent managing security tools (26%).
- Shorter training cycles and associated costs (13%).
Adding AI to an existing platform rather than purchasing a freestanding AI product could “realize incremental savings associated with broader platform consolidation efforts,” CrowdStrike said.
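One way to apply the four categories above is as weights in a simple scoring exercise when comparing prospective AI purchases. The sketch below uses the survey's reported importance percentages as weights; the per-category scores are illustrative assumptions, not figures from the report.

```python
# Toy sketch: scoring a prospective AI purchase against the four ROI
# categories from the survey, weighted by their reported importance.
# The 0-10 category scores below are illustrative assumptions.

# Survey-reported importance of each ROI category (sums to 1.0).
WEIGHTS = {
    "platform_consolidation": 0.31,
    "reduced_incidents": 0.30,
    "less_tool_management": 0.26,
    "shorter_training": 0.13,
}

def roi_score(category_scores: dict) -> float:
    """Weighted average of per-category scores (each 0-10)."""
    return sum(WEIGHTS[c] * s for c, s in category_scores.items())

# Example: a hypothetical platform-integrated AI assistant.
scores = {
    "platform_consolidation": 8,
    "reduced_incidents": 6,
    "less_tool_management": 7,
    "shorter_training": 5,
}
print(round(roi_score(scores), 2))  # → 6.75
```

A weighting scheme like this is only as good as its inputs, but it makes the trade-off between consolidation savings and incident reduction explicit rather than implicit.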
Could Generative AI Introduce More Security Problems Than It Solves?
At the same time, generative AI itself needs to be secured. CrowdStrike’s survey found that security professionals were most concerned about data exposure to the LLMs behind the AI products and attacks launched against generative AI tools.
Other concerns included:
- A lack of guardrails or controls in generative AI tools.
- AI hallucinations.
- Insufficient public policy regulations for generative AI use.
Nearly all respondents (about 9 in 10) said their organizations have implemented new security policies governing generative AI or plan to develop such policies within the next year.
How Organizations Can Leverage AI to Protect Against Cyber Threats
Generative AI can be used for brainstorming, research, or analysis, with the understanding that its output often must be verified. Generative AI can pull data in various formats from disparate sources into one window, shortening the time it takes to research an incident.
Many automated security platforms offer generative AI assistants, such as Microsoft Security Copilot.
Generative AI Can Protect Against Cyber Threats Via:
- Threat detection and analysis.
- Automated incident response.
- Phishing detection.
- Enhanced security analytics.
- Synthetic data for training.
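The last item, synthetic data for training, can be illustrated with a small sketch that generates fake authentication events using Python's standard library. In practice a generative model would produce far richer and more realistic records; the field names and value ranges here are illustrative assumptions.

```python
# Hedged sketch of "synthetic data for training": generating fake
# authentication events with the standard library. Field names and
# value ranges are illustrative assumptions.
import random

random.seed(7)  # reproducible output for the sketch

USERS = ["alice", "bob", "carol"]
RESULTS = ["success", "failure"]

def synthetic_auth_event() -> dict:
    """One fake login record, suitable as a training row."""
    return {
        "user": random.choice(USERS),
        "src_ip": f"10.0.{random.randint(0, 255)}.{random.randint(1, 254)}",
        "result": random.choice(RESULTS),
    }

events = [synthetic_auth_event() for _ in range(100)]
print(len(events))  # → 100
```

Synthetic records like these let teams train or test detection models without exposing real user data, which connects back to the privacy concerns raised earlier.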
However, organizations must consider safety and privacy controls as part of any generative AI purchase. Doing so helps protect sensitive data, maintain regulatory compliance, and mitigate risks such as data breaches or misuse.
Conclusion
The security benefits of generative AI are a topic of ongoing debate. While some believe the rewards outweigh the risks, others are more cautious. To fully leverage the potential of generative AI, organizations must carefully consider the challenges and risks involved and take steps to mitigate them.
FAQs
Q: What is the primary motivator for adopting generative AI in security?
A: Improving the ability to respond to and defend against cyberattacks.
Q: What is the top economic concern when adopting generative AI products?
A: Quantifying ROI.
Q: What are the top three concerns when it comes to generative AI?
A: Data exposure to LLMs, attacks launched against generative AI tools, and lack of guardrails or controls in generative AI tools.
Q: What are the most important factors to consider when assessing AI ROI?
A: Cost optimization from platform consolidation, reduced security incidents, and less time spent managing security tools.