SAN FRANCISCO, Jan. 29, 2026 (GLOBE NEWSWIRE) — Legion Intelligence issued the following open letter today to leaders across the U.S. national security, defense, and intelligence community regarding the rapid adoption of a new open-source AI assistant, Clawdbot, and the operational security risks it presents.
To leaders within the national security community,
Clawdbot, the open-source AI assistant that has already rebranded itself as Moltbot, exploded into wide use last week. Within days, tens of thousands of users had given the system access to their messaging apps, email accounts, calendars, and computers. The setup takes ten minutes. The server costs five dollars a month. The security implications may take years to unwind.
It’s worth being careful with the critique here, because Moltbot is genuinely useful. It integrates with WhatsApp, Signal, Telegram, iMessage, Gmail, and Slack. It remembers everything you tell it, messages you proactively, and executes commands on your behalf. The tech community is in love with it, and understandably so. The ability to offload cognitive overhead onto a system that actually knows your life is seductive. It works.
But something about the architecture nags. Moltbot requires users to run their own servers, typically on cheap Mac Minis connected to the open internet. These servers hold API keys to every service the user connects: email credentials, messaging tokens, calendar access, document permissions. All sitting on consumer hardware with minimal security protections, all accessible through a single endpoint.
Security researchers are already scanning for these machines. When they find one, they find everything.
This is where the national security community needs to pay attention. We learned something important from DeepSeek. When that Chinese AI model launched, thousands of government employees and contractors rushed to try it, feeding potentially sensitive queries into servers controlled by a foreign adversary. Policy eventually caught up, but the information had already crossed borders it should never have crossed. Viral adoption outpaced institutional response.
Moltbot presents the inverse problem, and in some ways a trickier one. Users aren’t sending data to a foreign server. They’re turning their own devices into targets, creating honeypots filled with sensitive information that adversaries can exploit. The threat model has shifted.
What’s striking about the testimonials flooding social media is how quickly the permissions escalate. Calendar on day one, email on day two, messages on day three, location on day four, health data by day six. By day eight, as one user put it: “Just everything.” The tool works better with more access. It also becomes more valuable to adversaries with more access. These incentives point in the same direction, which is what makes the dynamic so hard to interrupt.
Now consider what a junior service member might connect: their personal email, where they discuss work schedules. Their Signal account, where colleagues share updates. Their calendar, showing meetings at sensitive facilities. Their location data, revealing movement patterns. All of that accessible through a single compromised endpoint on a Mac Mini sitting in their apartment.
The framing matters here. Moltbot is open source. You control your data. There is no Chinese company harvesting your information. These facts make it feel safe. They also make it harder to prohibit. How do you ban something that runs on your own hardware and sends data to your own server? The legitimacy is real, but so is the vulnerability. Both things are true at once.
The Defense Department has spent years trying to secure the defense industrial base, protect classified networks, and educate personnel about operational security. What’s concerning is how easily Moltbot undermines all of it through a voluntary action that feels like downloading a productivity app. Years of OPSEC training, defeated by convenience.
There is a narrow window for the national security community to act before this becomes infrastructure. DoD should issue guidance prohibiting personnel from connecting government accounts or discussing sensitive information through personal AI assistants. Counterintelligence training should address the risks of centralized credential storage. Security researchers should begin mapping the scope of exposure before adversaries do.
The service members who download Moltbot this week are doing what everyone else in tech is doing. They’re optimizing their lives with AI. They just happen to have lives that foreign intelligence services would pay millions to access.
There’s a version of this story where we get ahead of the problem, where we recognize the threat before it becomes a crisis, where we learn from TikTok and DeepSeek and build institutional muscle memory for responding to viral technology adoption. There’s another version where we wake up in six months and wonder how we let this happen.
We’re still in the window where the first version is possible. But the window is closing.
Legion Intelligence was built for this moment. We help national security and defense teams deploy AI assistants with sovereign deployment, least-privilege access, and full auditability, without turning productivity into a new attack surface. As personal AI tools go mainstream, institutions need secure alternatives before convenience hardens into exposure.
Respectfully,
Ben Van Roo
CEO and Co-Founder
Legion Intelligence
Media Contact:
Carly Bourne
press@legionintel.com