I didn’t decide to take a break from AI. It just kind of happened.

Over the past four weeks, a series of circumstances stacked up in a way that pushed LLMs to the edges of my life. The week before my recent trip was all logistics (packing, organizing, getting everything together) and none of that required a thought partner. Then I was traveling, and while I did use AI a couple of times while I was away (very targeted, very useful), it wasn’t part of any ongoing work. The week after I got back, I gave myself permission to actually rest, which meant pausing the projects I’d been running. Then my first week back at school was all about implementation, getting back into the rhythm of teaching, not the kind of planning and thinking work where AI earns its keep.

By rough count, I was using an LLM maybe twice a week over those four weeks. For someone who had made it a near-daily habit, that’s a significant drop. And I didn’t even notice until I started ramping back up.

There was no withdrawal. No nagging sense that I was missing something. It wasn’t until I came back to AI tools with intention, starting new projects, picking up threads I’d dropped, that I looked back and thought, huh, I really haven’t been doing this.

To my surprise, that was a relief.

One of the things that had been quietly running in the background of my AI enthusiasm was a worry about dependency. I’ve been integrating these tools at a pretty deep level: using them for research, planning, creative work, and building things I couldn’t have built otherwise. At some point you start to wonder whether you’re developing a reliance you’d struggle to untangle. Finding out, accidentally, that I simply don’t reach for AI when the use cases aren’t there answered a question I hadn’t thought to ask.

The diet also surfaced something else: a pressure I’d been carrying that I hadn’t fully named.

When I started to understand what AI makes possible, there was a kind of ambient obligation that set in. You know you can do things now that you couldn’t before, and so there’s this pull to keep doing them. To keep learning. To keep building. Some of that is genuine curiosity. I find this stuff fascinating, and I’m interested in understanding the real shape of what these tools can and can’t do. Some of it is less comfortable: I feel a pressure to demonstrate that AI can be used for good.

That might sound odd, but look around. On one side, you have corporations pushing AI into every product whether it belongs there or not, positioning it as a labor-saving device that makes human work optional, and making promises about societal transformation that I find deeply worrying. On the other side, you have a fully understandable backlash: people who are rightfully upset at how training data was scraped from the web without consent, how the implicit agreement that made search engines work (you let us crawl your site, we send you traffic) has been quietly abandoned, and how institutions like the Internet Archive are being destabilized by companies that treat the digital commons as a resource to extract from. Anil Dash’s recent post captures this outrage well. My problem isn’t with the backlash; it’s with people who go a step further and decide that the technology itself is evil, rather than neutral, a stance that often comes with automatic hostility toward anyone who uses AI.

Both the boosters and the haters get things wrong, and navigating between them means I’ve felt some responsibility to model a different approach.

That’s why I have a projects section on my Resources page. I believe in these projects. I’ve also made commitments I want to follow through on. But the four weeks of quiet helped me see that the pressure to keep proving the point had been adding stress that wasn’t serving the work.

Going forward, I want to be more intentional about limiting the number of active projects. I also want to hold onto something the diet clarified: knowing when not to use AI is every bit as important as knowing when to use it. The goal was never to build a life organized around these tools. They’re there when they’re useful, and when they’re not, they don’t need to be there at all.

That shift means the relationship can be functional rather than compulsive. Life is not going to revolve around AI for most of us, despite what the corporations want, even for someone like me who finds a lot of genuine value in it.