Adaptive Security Announces $43 Million Funding Round to Defend Against AI-Powered Social Engineering Attacks
Leading Provider of AI-Powered Social Engineering Prevention Solutions Secures Investment from Andreessen Horowitz, the OpenAI Startup Fund, and Other Top-Tier Partners
Adaptive Security, the leading provider of AI-powered social engineering prevention solutions, today announced a $43 million funding round led by Andreessen Horowitz (a16z) and the OpenAI Startup Fund, marking OpenAI’s first investment in a cybersecurity startup. Additional participating investors include Abstract Ventures, Eniac Ventures, CrossBeam Ventures, and K5, along with executives from Google, Workday, Shopify, Plaid, Paxos, and others.
The funding will accelerate Adaptive’s development of solutions to defend against AI-powered social engineering attacks.
Deepfake Persona Attacks: A Growing Cybersecurity Threat
Today, cybercriminals can easily create deepfake AI personas that closely mimic real individuals, attacking over real-time phone calls, video chats, and emails. Open-source intelligence gives attackers the raw material to imitate an individual's voice, appearance, and responses in a realistic, real-time manner.
According to Entrust, a deepfake attack attempt occurred every five minutes in the U.S. in 2024 — suggesting more than 100,000 incidents across the year. Sumsub reported a 17-fold year-over-year increase in deepfake attacks in the U.S., fueled by new open-source large language models (LLMs) that make powerful AI generation cheap and carry limited safety controls.
These sophisticated attacks are no longer limited to high-profile executives — a deepfake persona can now be generated for nearly anyone in seconds, using open-source intelligence to build a hyper-realistic AI clone.
Protecting the Future of Cybersecurity
“The rise of AI-powered social engineering represents one of the most urgent cybersecurity threats of our time,” said Brian Long, CEO and co-founder of Adaptive Security. “Deepfake phone calls, AI-generated emails, and SMS phishing are evolving rapidly. Attackers can now create AI personas of anyone, turning routine communications into sophisticated fraud attempts. Our platform is designed to protect companies at every stage of the attack cycle — from simulated AI attacks to employee training and automated risk mitigation.”
A Complete Solution Against AI-Powered Social Engineering Attacks
- AI deepfake persona attack simulations test organizations by deploying realistic deepfake persona attacks across real-time voice phone calls, SMS, and GenAI email. When employees fall for a simulation, security teams can pinpoint vulnerabilities while those users receive individualized training to prevent future breaches.
- AI security training educates employees on emerging and traditional security threats with hundreds of high-quality, mobile-friendly, expert-vetted training modules. In addition to security content, Adaptive also offers HR content such as compliance training. Employees consistently rate Adaptive Training 4.8 out of 5 stars.
- GenAI content generation enables the creation of new training modules in seconds, from any topic or existing source materials. Generated content includes text, images, and videos.
- Real-time threat triage allows employees to report suspected phishing attacks, which are automatically scanned and mitigated by Adaptive AI.
- AI-driven risk scoring provides real-time risk assessments at the individual, departmental, and organizational levels. This enables security professionals to focus on the most at-risk areas and proactively strengthen defenses.
Conclusion
Adaptive Security’s $43 million funding round marks a significant milestone in the fight against AI-powered social engineering attacks. With its AI-native platform, Adaptive is positioned to stay ahead of the next generation of cyber threats. As the threat landscape continues to evolve, it is crucial for organizations to prioritize defense against these sophisticated attacks.
FAQs
- Q: What is AI-powered social engineering? A: AI-powered social engineering refers to the use of artificial intelligence to create deepfake AI personas that mimic real individuals, attacking over real-time phone calls, video chats, and emails.
- Q: How common are deepfake attacks? A: According to Entrust, a deepfake attack attempt occurred every five minutes in the U.S. in 2024 — suggesting more than 100,000 incidents across the year.
- Q: What is Adaptive Security’s solution to AI-powered social engineering attacks? A: Adaptive Security provides a comprehensive solution that includes AI deepfake persona attack simulations, AI security training, GenAI content generation, real-time threat triage, and AI-driven risk scoring.
- Q: Who has invested in Adaptive Security? A: Andreessen Horowitz (a16z), the OpenAI Startup Fund, Abstract Ventures, Eniac Ventures, CrossBeam Ventures, and K5, along with executives from Google, Workday, Shopify, Plaid, Paxos, and others.