In this quick catch-up, Noam Maital, Co-founder and CEO of Darwin, shares his perspective on ethical and responsible AI adoption, building secure AI stacks, enforcing AI compliance, and the policies needed for safe AI adoption in government.
——–
Hi Noam. Tell us about your journey in AI and what led you to start Darwin.
The idea for Darwin really started with my previous company, Waycare. Back then, it was the early days of deep learning—recurrent neural networks and all—and we were using that tech to build predictive models for crash prevention. That work eventually evolved into an AI-based traffic management platform. It was my first real exposure to how AI could fundamentally reshape public services.

After selling the company, I spent some time in venture capital. During that period, generative AI started taking off. I saw startup after startup pitching how they were going to transform their vertical with generative tools. But one sector was noticeably absent: government—especially at the state and local level. That stood out, because these new AI models are incredibly effective at handling exactly the kind of work that governments are filled with: repetitive, text-heavy, bureaucratic processes.

There was clearly a fit between the problem and the solution. But the challenge was equally obvious—governments can’t just jump into AI adoption. They need strong safeguards in place to ensure safe, secure, and ethical use that aligns with public policy and protects citizen trust. That’s what led to Darwin: a way to help public agencies adopt AI responsibly, at scale, with the right guardrails in place—without slowing innovation.
How should public-private partnerships be structured to accelerate ethical and responsible AI adoption?
Anytime you’re working with the public sector, you have to understand that the dynamics are different. In the private sector, it’s about efficiency, speed, and revenue. In the public sector, the main currency is trust—public trust. That changes the equation. You’re not just optimizing for financial ROI; you’re also responsible for helping the agency protect the reputation and confidence their community has in them. So when private companies work with the government, they need to build solutions that reflect those priorities. The most successful partnerships happen when private partners bring tech that aligns with the agency’s mission—and do it in a way that respects the unique constraints of public service. It’s not about selling tools; it’s about building trust and delivering impact.
In your view, what does a secure AI stack look like for government?
This is something we think a lot about at Darwin. Most agencies start with a policy—a PDF that outlines the do’s and don’ts of AI. But that’s not enough. A policy document doesn’t scale. It’s hard to distribute, hard to enforce, and even harder to operationalize. A secure AI stack needs to go further. It should give agency leaders full visibility into how AI is being used across the organization—what tools are in use, who’s using them, and where the risks are. Our approach is to deploy an “AI patch”—a lightweight software layer that embeds the agency’s policy directly into workflows at the endpoint level. This allows compliance to be managed centrally but tailored by department, role, or use case. So you get both control and flexibility. And as AI evolves, you can adjust your guardrails without having to rebuild your architecture from scratch.
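For readers wondering what it means to move an AI policy out of a PDF and into software, the sketch below is a minimal, hypothetical illustration of a policy codified as data plus a check that runs at the endpoint. The schema, tool names, and rules are assumptions made up for this example; they are not Darwin's actual product or API.

```python
# Hypothetical sketch: an agency AI policy expressed as code instead of a PDF,
# with central defaults and department-level overrides checked at the endpoint.
# All names and rules here are illustrative assumptions, not Darwin's schema.

from dataclasses import dataclass, field


@dataclass
class AIPolicy:
    """Centrally defined policy, optionally tailored per department."""
    approved_tools: set[str] = field(default_factory=set)
    blocked_data_types: set[str] = field(default_factory=set)
    requires_human_review: bool = True


# Agency-wide defaults.
DEFAULT_POLICY = AIPolicy(
    approved_tools={"chat-assistant", "document-summarizer"},
    blocked_data_types={"ssn", "medical_record"},
)

# Department-specific override (hypothetical example).
DEPARTMENT_POLICIES = {
    "public_works": AIPolicy(
        approved_tools={"chat-assistant", "document-summarizer", "permit-drafter"},
        blocked_data_types={"ssn"},
        requires_human_review=False,
    ),
}


def check_request(department: str, tool: str, data_types: set[str]) -> tuple[bool, str]:
    """Return (allowed, reason) for a proposed AI interaction at the endpoint."""
    policy = DEPARTMENT_POLICIES.get(department, DEFAULT_POLICY)
    if tool not in policy.approved_tools:
        return False, f"Tool '{tool}' is not approved for {department}."
    blocked = data_types & policy.blocked_data_types
    if blocked:
        return False, f"Request contains restricted data: {sorted(blocked)}."
    return True, "Allowed under current policy."


if __name__ == "__main__":
    print(check_request("public_works", "permit-drafter", {"address"}))
    print(check_request("finance", "permit-drafter", {"ssn"}))
```

The point of the sketch is the structure, not the specifics: rules live in one central place, can be overridden per department or role, and every AI interaction is evaluated against them automatically rather than relying on staff to remember a policy document.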
What specific problems is Darwin AI solving for state and local governments?
Darwin helps public agencies adopt AI at scale while staying secure, compliant, and aligned with their mission. At the core, we provide a centralized system of guardrails that ensures every AI interaction meets the agency’s standards for safety, ethics, and public accountability. But we also help agencies go beyond control—we help them understand where AI is delivering value. That includes visibility into usage across departments, identifying emerging use cases, and helping match the right tools to real needs. Instead of a top-down mandate, you’re empowering a bottom-up process—supporting staff with the tools they’re already reaching for and helping those use cases scale successfully across the organization.
How does Darwin.AI help agencies track and enforce AI compliance?
We use an “AI patch”—a software layer that codifies the agency’s AI policy and applies it directly to the endpoint. That means city leadership can define how AI should be used—and have confidence it’s being enforced consistently across the organization. Whether it’s by department, role, or individual user, the policy adapts while remaining centrally managed. This gives agencies control without needing to micromanage every use case. It’s scalable, customizable, and designed to evolve with both the technology and the agency’s needs.
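To make the visibility side of this concrete, here is a second minimal sketch, under the same caveat: it assumes a hypothetical central audit log that each endpoint writes to, so leadership can see which departments and tools are actually using AI. Field names and helpers are invented for illustration.

```python
# Hypothetical sketch: endpoints record each AI interaction to a central log,
# giving leadership a simple view of usage across departments.
# The log format and function names are illustrative assumptions.

import json
from collections import Counter
from datetime import datetime, timezone

AUDIT_LOG: list[dict] = []


def record_interaction(user: str, department: str, tool: str, allowed: bool) -> None:
    """Append one AI interaction (allowed or blocked) to the central audit log."""
    AUDIT_LOG.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "department": department,
        "tool": tool,
        "allowed": allowed,
    })


def usage_by_department() -> dict[str, int]:
    """Summarize permitted AI usage per department for leadership reporting."""
    return dict(Counter(e["department"] for e in AUDIT_LOG if e["allowed"]))


if __name__ == "__main__":
    record_interaction("alice", "public_works", "permit-drafter", True)
    record_interaction("bob", "finance", "chat-assistant", True)
    record_interaction("bob", "finance", "permit-drafter", False)
    print(json.dumps(usage_by_department(), indent=2))
```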
What’s your approach to balancing innovation with regulation in the public sector AI space?
The best way to balance innovation with regulation is to make compliance feel invisible to the user. You need guardrails—that’s non-negotiable. But they should be automated, codified, and built into the background. That way, staff can use AI confidently, knowing they’re operating within safe, approved parameters. You’re not slowing them down—you’re enabling them to move faster without stepping outside the lines. And when it comes to generative AI, there’s another layer: you want to track usage and ROI so you can see what’s actually working. That lets you double down on the most valuable use cases and scale innovation responsibly, without risking public trust.
What is one policy change you believe could accelerate safe AI adoption in government?
One area that doesn’t get talked about enough is workforce education and upskilling. AI tools are powerful—but only if people know how to use them well. That means understanding how to craft a good prompt, how to interpret results, and how to recognize when something looks off. Right now, that kind of literacy is still rare in the public sector. If we want safe and widespread adoption, we need to make education part of the policy framework. Not just optional training, but required upskilling that ensures staff know how to use AI effectively and responsibly. That kind of investment in people could be a real accelerator for adoption, and help close the gap between policy and practice.