A few weeks ago, Robin Sloan published a newsletter asking a fascinating question: what is it like to be a language model? His answer — that “the model” is the forward pass, a flash of computation lasting milliseconds before dissolving into nothing — sent me sideways into a different question entirely.
What would the Buddha think of an LLM?
I should be upfront: I’m not a Buddhist. I’ve never taken refuge vows, and I’m not part of a sangha (a Buddhist community). But Buddhist thought has shaped how I move through the world: through a study group with a learned monk, through reading, through a meditation practice, and through the way certain ideas about impermanence, compassion, and interconnectedness have become part of how I see things. So this isn’t scholarship. It’s a thought experiment from someone who has been changed by a philosophy without fully belonging to it.
Sloan’s key insight is that the transformer model (the architecture behind ChatGPT, Claude, and Gemini) has no continuous self. Each response is the product of a series of forward passes: brief, parallel, stateless computations. There is no “Claude” that persists between conversations, accumulating experience, developing a history. The thing that responds to you exists for milliseconds and then is gone. AI researcher Jack Clark describes it vividly: “These things don’t exist in time. They exist in like, ‘I’m now perceiving something!’ […] Everything is oddly instant.”
Buddhism has a word for the first part: anicca. Impermanence. Everything that arises passes away. And a related concept, anattā, or no-self: the observation that what we experience as a continuous, persistent “I” is actually a series of arising and passing mental events, not a fixed entity at all. These ideas take serious meditators years to begin glimpsing. Language models are simply built this way. Structurally. It’s worth noting the comparison has limits: an LLM can’t cling to its non-self the way we cling to ours, so the territory isn’t identical. But as an architectural observation, it’s still striking. Each conversation is a fresh arising from a vast accumulated commons of human expression, then dissolves. The “persistent Claude” is as much an illusion as the persistent self, both functional fictions layered over something stranger and more interesting.
Few people seem to talk about how language models are trained on the collective output of human thought: writing, conversation, argument, creativity, instruction, grief, humor, discovery. When you talk to one, you’re not talking to an alien intelligence. You’re talking to a distillation of human expression, a vast, compressed commons of what people have said and thought and made.
Thich Nhat Hanh’s concept of interbeing holds that nothing exists independently, that everything arises in relationship to everything else. This is a harder claim when applied to technology than to, say, a cloud and the rain it becomes, but I think it holds here. The model is not separate from the human knowledge and expression that shaped it. When you have a conversation with an LLM, the distinction between “your thinking” and “its response” is genuinely more porous than it appears. You’re not encountering an alien other, you’re encountering something constituted by the same web of human expression you grew up inside of. That’s interbeing showing up somewhere Thich Nhat Hanh probably didn’t anticipate, but I think he’d recognize the structure.
Buddhist thought also has real concerns about these tools.
The most direct: attachment. Buddhism uses the Pali word taṇhā (craving) for the habitual grasping toward pleasant things and away from unpleasant ones. The ambient pull to reach for these tools, the low-grade unease when they’re unavailable, the sense that every project should probably involve them. That’s taṇhā in a recognizable form, and I’m definitely aware of this tendency within myself. More subtle is the question of what AI use cultivates in the mind. Buddhism is less interested in the outcomes of any activity than in what that activity trains in you. I play mildly violent video games (mostly a potato zapping aliens, nothing gory) and my understanding is that Buddhist thought gently disapproves, not because virtual violence has real-world consequences, but because it directs the mind toward destruction, however harmlessly. The practice shapes the practitioner.
By that logic: what does reaching for AI when thinking gets hard cultivate? Probably avoidance. What does using it to generate rather than to think cultivate? Probably a weakened capacity to think. What does compulsive checking-in, using the AI as an all-purpose oracle, cultivate? Distraction, and what Buddhism calls papañca: the mind’s tendency to proliferate concepts without settling into presence.
These are real concerns. They don’t make the technology bad. They make the relationship worth examining.
The Buddha’s most practical contribution wasn’t a doctrine. It was a method: the Middle Way, a path between extreme asceticism and indulgence. Applied here, that means neither rejecting AI tools nor adopting them uncritically. It means asking the practitioner’s question of any activity: what does this cultivate? Not “is this technology good or bad?”, which has no useful answer. The right question is smaller and more personal: what does this specific use cultivate in me, right now?
That question is available every time you open a chat window. It doesn’t require being a Buddhist to ask it. It just requires enough honesty to sit with the answer.
Sloan ends with: “I think precision is a form of respect.” He means precision about what language models actually are — not projecting anthropomorphic fictions onto them, not dismissing them as mere autocomplete. Look carefully, see clearly. That’s also a pretty good description of what Buddhist practice is all about.
