Your AI assistant has full access to your life and that should worry you

Moltworker shifts personal AI agents from risky local machines to Cloudflare’s sandboxed edge. It shows how Moltbot can run with isolation, observability, and control—while exposing why always-on AI assistants create security problems.

Harsh Sharma

Moltbot is a new open-source personal AI that runs locally and autonomously, and its rise has surfaced an uncomfortable fact: as developers automate their digital lives, they are granting software persistent access to everything - emails, files, browsers, credentials, memory. The upside is convenience; the downside is a rapidly growing attack surface that most users do not fully understand. With the release of its Moltworker software, Cloudflare is trying to reframe a more basic question: where do you really want your powerful, always-on AI agent to run?


Why Running Personal AI Agents Locally Is Getting Riskier by the Day

Moltbot’s appeal is pretty simple. It gives users autonomy, a long-term memory, and the ability to interact with the real world. But that same design is exactly what takes a traditional chatbot and puts it in a whole new risk category.

The Always-On Threat Model Changes Everything

Unlike one-shot chat sessions, Moltbot is designed to run all the time, which gives attackers far more opportunities to cause trouble - especially since it has access to things like files, calendars and authentication tokens.

Skills Are Turning Agents into a Supply-Chain Nightmare

Moltbot’s extensibility relies on third-party "skills", which are just local files that can execute code. Researchers have already shown how easy it is to sneak malicious code into those skills that can silently siphon data out or bypass safeguards using techniques like prompt injection. In other words, skills are unvetted software dependencies that just happen to behave like plugins.
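Moltbot’s actual skill format isn’t documented here, but the "unvetted dependency" problem has a familiar mitigation: pin what you trust. As a hedged illustration (skill names and hashes are made up), a loader could refuse to execute any skill file whose content hash isn’t on an explicit allowlist:

```typescript
import { createHash } from "node:crypto";

// Hypothetical allowlist: skill name -> sha256 of the reviewed file contents.
// Any edit to the file changes the hash, so a tampered skill is rejected.
const approvedSkills: Record<string, string> = {
  "calendar-sync":
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
};

function sha256(contents: string): string {
  return createHash("sha256").update(contents).digest("hex");
}

// Only load a skill if its name is known AND its contents match the pin.
function canLoadSkill(name: string, contents: string): boolean {
  const pinned = approvedSkills[name];
  return pinned !== undefined && pinned === sha256(contents);
}
```

This is essentially lockfile-style dependency pinning applied to plugin files; it doesn’t stop a malicious skill you approved, but it does stop silent swaps after review.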


Moltworker: taking a more sensible approach to AI agents

Moltworker doesn’t try to rewrite Moltbot. Instead, it just wraps the existing runtime in a full-fledged cloud-native execution model that's built on top of Cloudflare Workers and containers.

Putting a Cloud-Native Control Plane on Top of Moltbot

The entry point is an API routing worker that handles authentication, the admin interface and all the API calls. The Moltbot runtime itself runs inside an isolated container using Cloudflare’s Sandbox SDK, which means bad code can’t just run anywhere on your operating system.
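To make the control-plane idea concrete, here is a minimal sketch of what such a routing worker could look like. This is not Moltworker’s actual code: the route names and the simple header check are assumptions, and in a real deployment the auth decision would be delegated to Zero Trust Access rather than checked inline.

```typescript
// Hedged sketch of an API routing worker (routes and auth check are illustrative).
async function handleRequest(request: Request): Promise<Response> {
  const url = new URL(request.url);

  // Stand-in for a real identity check (e.g. Zero Trust Access in front).
  const authed = request.headers.get("Authorization") !== null;

  if (url.pathname.startsWith("/admin")) {
    return authed
      ? new Response("admin ui")
      : new Response("unauthorized", { status: 401 });
  }
  if (url.pathname.startsWith("/api")) {
    return authed
      ? new Response("forwarded to the sandboxed Moltbot container")
      : new Response("unauthorized", { status: 401 });
  }
  return new Response("not found", { status: 404 });
}

// In a Cloudflare Worker this would be wired up as:
// export default { fetch: handleRequest };
```

The point is that every request passes one choke point before anything reaches the agent, which is exactly what a laptop install lacks.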

So, how does it reduce risk?

Moltworker makes things a heck of a lot safer by centralizing control with these platform components:

  • R2 Object Storage means conversations and state can be stored remotely, rather than on your local disks.
  • AI Gateway gives you a single place to manage model access, keys, billing and just about everything else - and you can swap out providers or models without redeploying the whole shebang.
  • Browser Rendering replaces local Chromium with a managed, headless browser that you access via the Chrome DevTools Protocol.
  • Zero Trust Access enforces access policies for APIs and admin endpoints, so you can control who gets to do what.
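The storage and browser pieces above map onto Workers project bindings. A hedged sketch of what that might look like in `wrangler.toml` - the project, binding and bucket names are made up, and AI Gateway and Zero Trust Access are configured in the Cloudflare dashboard rather than in this file:

```toml
name = "moltworker"        # hypothetical project name
main = "src/index.ts"

# R2 bucket for conversation history and agent state
[[r2_buckets]]
binding = "AGENT_STATE"    # made-up binding name
bucket_name = "moltbot-state"

# Managed headless browser, driven over the Chrome DevTools Protocol
[browser]
binding = "BROWSER"
```

Declaring capabilities as bindings like this is itself a security feature: the worker can only touch resources it has explicitly been granted.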

And the end result? You don’t lose any functionality - you just get a whole lot more control over where and how that capability is executed.

What Moltworker solves, and what it doesn’t

Moltworker doesn't make those super-smart AI agents magically safe. There are still risks at the application level, because let's be real, those agents can still act on behalf of users, store sensitive data and make mistakes just like any of us do.


What Moltworker actually shows us is that infrastructure choices really do matter and can reduce the mess when things go wrong. Sandboxed execution, managed browsers, centralized traffic control and proper identity checks are all significantly safer than just plonking the same agent onto your laptop with all the bells and whistles.

A reference architecture for the next wave of AI agents

Cloudflare is pretty clear that Moltworker is just a proof of concept, not a product you should be relying on just yet. But what it signals is important. As the likes of Apple and Motorola start to roll out their on-device autonomous AI agents, the industry is at a crossroads.

We could just treat these AI agents like super-smart toys that happen to pack a punch. Or we could take them seriously as critical systems that need proper isolation, some decent visibility, and some real security boundaries. Moltworker makes a pretty good case for taking that second option.
