There’s a moment every computer person eventually has, usually more than once, when something gets deleted that can’t come back. After years of working in the command line, I’ve been there. It teaches you, more than anything, that the things you can’t take back deserve a different kind of attention than the things you can.
That lesson has shaped how I think about giving AI access to my files. When I ask it to work within my documents and folders, I’m careful about what I allow and why. I use tools that are secure by default and require me to explicitly grant access before they can do anything significant. It can get tedious. Sometimes I’m mid-workflow and the AI says it needs permission to access something I wasn’t expecting, and I have to stop and decide whether to allow it. If the request feels important enough, I ask why it needs that access before I grant it. I stay embedded in the process, because every permission I grant is a decision with implications, and I want to be the one making it.
That’s why when I look at the AI-enabled agentic tools arriving right now, I’m genuinely worried. Browsers that interpret the web for you. Tools that make purchases on your behalf. Agents that send emails without asking you to hit send.
Part of what concerns me is a distinction that often gets glossed over: AI is not regular automation. If you’ve set up a scheduled script or a simple automation rule, you know exactly what it does, and it does that and nothing else. It doesn’t come up with new interpretations. It doesn’t produce confident answers that happen to be wrong. AI, by the nature of how it works, can do both of those things. When you combine that unpredictability with the ability to take real-world action on your behalf, you’ve created something that can cause harm in ways that are hard to anticipate and harder to reverse.
Most of the time, things will probably go fine. But the question I keep coming back to is: what’s the worst that can happen when they don’t? With an AI that has access to your financial information and is empowered to act on it, or one that can access your medical records, or one that sends emails in your name, the worst case gets serious fast.
An AI-enabled browser is its own particular concern. When you use one, you’re not actually browsing the web anymore. You’re receiving an AI’s interpretation of it, filtered through a company now sitting between you and everything you do online. Judgment calls are being made that aren’t yours: about what to show you, how to summarize it, and what counts as relevant. Your entire experience of the internet is flowing through an intermediary that knows a great deal about you and your browsing habits.
Companies launching these tools will tell you they have strong privacy policies. I believe some of them mean it, but we have a clear model for how this plays out: Facebook promised privacy, and once everyone was hooked, the rules changed. They promised you’d see posts from the people you cared about most, and then the algorithm changed and that went away too. Corporations make promises to get you to adopt a product, and then, once you’re dependent, they optimize for their interests.
I make sure to use AI tools that keep me explicitly in the loop. They’re secure by default. They ask permission before accessing anything significant. When I’m uncertain why a tool is requesting something, or what damage it could do, I ask before I grant it. It’s not seamless, but that friction is the point: it keeps me directly involved in the process, understanding what I’m allowing and why.
The pitch for agentic AI tools is essentially the opposite: let me handle this for you, you don’t need to worry about it. That’s a useful pitch for selling products, but it’s not a philosophy I trust with things that matter, like my financial accounts, my health information, and my communications. The cost of something going wrong there is real and personal in a way that’s hard to fully appreciate until it happens.
I haven’t seen a compelling reason to change that view. Not yet. We’ll see.
