Pandora’s Box of AI: Unpacking the Security Concerns
Introduction
Most of us are familiar with the story of Pandora’s box, but not everyone knows the full story. According to ancient Greek mythology, the gods gifted Pandora a box and instructed her not to open it. Her curiosity got the better of her, and she opened it, unleashing evil into the world. She quickly closed it once she realized what she had done, leaving only hope trapped inside.
While the story was meant to symbolize humans’ innate curiosity, it also strongly parallels the security community’s reaction to generative AI (GenAI). When OpenAI’s ChatGPT launched and made GenAI a household name in 2022, the security community at large feared the hardships that lurked around the corner.
Also Read: What is a CAO and are they needed?
The fear proved to be warranted as we have seen this play out several times since 2022. One of the most well-known examples is when Air Canada’s chatbot promised a discount to a passenger that ultimately wasn’t available, the airline claimed its chatbot should be held liable for its actions, but a court ruled otherwise.
Understanding the Security Concerns
To uncover what is going on, HackerOne recently conducted its eighth-annual 2024 Hacker-Powered Security Report, compiled between June 2023 and August 2024. The report included insights from HackerOne’s vulnerability database and customers, a panel of 500 global security leaders, and more than 2,000 security researchers. The report revealed that nearly half (48%) of security professionals agree that GenAI is one of their biggest security risks.
It’s important to understand the difference between AI safety and security — both of which could cause danger to organizations. For safety issues, the main focus is to prevent AI systems from causing harm to the outside world. This might include blocking instructions on harmful activities such as generating malware or displaying disturbing images. On the other hand, AI security is meant to identify flaws and vulnerabilities that could allow threat actors to harm AI systems. The report dives into both facets, presenting key information to keep organizations safe and secure.
AI Challenges
In the report, 20% of security researchers found that AI was now an essential part of their work, using it for a variety of reasons, including generating code, summarizing information and writing reports, creating supplementary content for their hacking efforts, extending their ability to generate word lists for brute-force attacks, and more.
Part of the fear surrounding AI lies in its differences compared to traditional software. Regular software outputs are predefined and already determined, which means the same input consistently produces the same output. On the other hand, GenAI generates dynamic, stochastic output based on training data and models. At any point during the AI system’s lifecycle (i.e., training, deployment, running inference), it is at risk of being compromised, driving much of the concern.
Also Read: Taking Advantage of Gen AI With Next-level Automation
There are several AI concerns at the top of security professionals’ minds, but the report revealed that the top three were:
- Leaking training data (35%)
- Employees’ unauthorized usage of AI within the organization’s network (33%)
- Hacking of AI models by outside adversaries (32%)
The five most commonly reported vulnerabilities on AI programs include:
- AI safety (55%)
- Business logic errors (20%)
- Project Injection (11%)
- Training Data Poisoning (3%)
- Sensitive Information Disclosure (3%)
Additional research from HackerOne and the SANS Institute Report also explored the influence of AI on cybersecurity and found that 58% of respondents predict that AI will contribute to an escalation of techniques and tactics used by security teams and threat actors, with each trying to outpace the other.
Human Insight: The Hope in AI’s Pandora’s Box
When asked how to handle some of these issues, over two-thirds (68%) of respondents said that an external and unbiased review was the best way to secure an AI implementation, and identify any safety or security issues. One way organizations can do this is by conducting AI red teaming, which acts as an external review through the eyes of security researchers.
The analysis found that one of the best current methods to reduce AI risk is by engaging human experts. In the last 12 months, the security researcher community has risen to fight against AI threats, maturing its skillset to mirror and exceed customer demand. One in ten researchers now specialize in AI technology, and 62% of respondents were confident in their ability to secure AI use. In fact, learning new skills and furthering their abilities was a top motivator for 64% of security researchers.
Conclusion
When Pandora opened her box, there was no putting the horrors back inside. In the case of AI, the Hacker-Powered Security Report findings showcase the necessity of human intelligence to tame the potential horrors of AI. No one is arguing that AI comes without challenges; but given the innovation that it brings to nearly all industries, especially cybersecurity, we should address these challenges head on.
FAQs
What is the top concern for security professionals regarding AI?
Leaking training data (35%) is the top concern for security professionals regarding AI.
How can organizations secure their AI implementations?
Over two-thirds (68%) of respondents said that an external and unbiased review was the best way to secure an AI implementation, and identify any safety or security issues. Conducting AI red teaming is one way to achieve this.
What is the most effective method to reduce AI risk?
Engaging human experts is one of the most effective methods to reduce AI risk. The security researcher community has risen to fight against AI threats, maturing its skillset to mirror and exceed customer demand.
What is the future of AI in cybersecurity?
According to the report, 58% of respondents predict that AI will contribute to an escalation of techniques and tactics used by security teams and threat actors, with each trying to outpace the other.