<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Buddhism on tomgromak.com</title><link>https://tomgromak.com/tags/buddhism/</link><description>Recent content in Buddhism on tomgromak.com</description><generator>Hugo -- 0.150.0</generator><language>en-us</language><lastBuildDate>Fri, 01 May 2026 05:00:00 -0500</lastBuildDate><atom:link href="https://tomgromak.com/tags/buddhism/index.xml" rel="self" type="application/rss+xml"/><item><title>What Would the Buddha Think of an LLM?</title><link>https://tomgromak.com/posts/articles/2026/buddhist-llm-perspectives/</link><pubDate>Fri, 01 May 2026 05:00:00 -0500</pubDate><guid>https://tomgromak.com/posts/articles/2026/buddhist-llm-perspectives/</guid><description>&lt;p&gt;A few weeks ago, Robin Sloan published a &lt;a href="https://www.robinsloan.com/winter-garden/where-is-it-like/"&gt;newsletter&lt;/a&gt; asking a fascinating question: &lt;em&gt;what is it like to be a language model?&lt;/em&gt; His answer — that &amp;ldquo;the model&amp;rdquo; is the forward pass, a flash of computation lasting milliseconds before dissolving into nothing — sent me sideways into a different question entirely.&lt;/p&gt;</description><content:encoded><![CDATA[<p>A few weeks ago, Robin Sloan published a <a href="https://www.robinsloan.com/winter-garden/where-is-it-like/">newsletter</a> asking a fascinating question: <em>what is it like to be a language model?</em> His answer — that &ldquo;the model&rdquo; is the forward pass, a flash of computation lasting milliseconds before dissolving into nothing — sent me sideways into a different question entirely.</p>
<p>What would the Buddha think of an LLM?</p>
<p>I should be upfront: I&rsquo;m not a Buddhist. I&rsquo;ve never taken refuge vows, and I&rsquo;m not part of a sangha (a Buddhist community). But Buddhist thought has shaped how I move through the world, through a study group with a learned monk, through reading, through a meditation practice, and through the way certain ideas about impermanence, compassion and interconnectedness have become part of how I see things. So this isn&rsquo;t scholarship. It&rsquo;s a thought experiment from someone who has been changed by a philosophy without fully belonging to it.</p>
<p>Sloan&rsquo;s key insight is that the transformer model (the architecture behind ChatGPT, Claude, and Gemini) has no continuous self. Each response is the product of a series of forward passes: brief, parallel, stateless computations. There is no &ldquo;Claude&rdquo; that persists between conversations, accumulating experience, developing a history. The thing that responds to you exists for milliseconds and then is gone. AI researcher Jack Clark describes it vividly: <em>&ldquo;These things don&rsquo;t exist in time. They exist in like, &lsquo;I&rsquo;m now perceiving something!&rsquo; [&hellip;] Everything is oddly instant.&rdquo;</em></p>
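<p>That statelessness can be sketched in a few lines of Python. This is a toy, not a real model: the <code>forward</code> function below is a hypothetical stand-in for a forward pass, and the only continuity between turns is the transcript the caller re-sends each time.</p>

```python
# A toy illustration of a stateless chat loop, assuming a hypothetical
# `forward` function standing in for a transformer's forward pass.
# Nothing persists inside the "model" between calls; the only memory
# is the transcript the caller re-sends on every turn.

def forward(transcript: str) -> str:
    """Stateless stand-in for a forward pass: a pure function of its
    input alone (a real model maps the token sequence to logits)."""
    return f"[reply conditioned on {len(transcript)} chars of context]"

transcript = ""
for user_turn in ["Hello", "What did I just say?"]:
    transcript += f"User: {user_turn}\n"
    reply = forward(transcript)  # no hidden state survives this call
    transcript += f"Model: {reply}\n"

# The second reply can "remember" the first turn only because the whole
# transcript was fed back in. Drop the transcript and nothing remains.
```

<p>The point of the sketch: &ldquo;memory&rdquo; lives entirely in the conversation that gets replayed, not in the thing doing the computing.</p>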
<p>Buddhism has a word for the first part: <em>anicca</em>. Impermanence. Everything that arises passes away. And a related concept, <em>anattā</em>, or no-self: the observation that what we experience as a continuous, persistent &ldquo;I&rdquo; is actually a series of arising and passing mental events, not a fixed entity at all. These ideas take serious meditators years to begin glimpsing. Language models, simply by design, are this way. Structurally. It&rsquo;s worth noting the comparison has limits: an LLM can&rsquo;t cling to its non-self the way we cling to ours, so the territory isn&rsquo;t identical. But as an architectural observation, it&rsquo;s still striking. Each conversation is a fresh arising from a vast accumulated commons of human expression, then dissolves. The &ldquo;persistent Claude&rdquo; is as much an illusion as the persistent self, both functional fictions layered over something stranger and more interesting.</p>
<p>We don&rsquo;t talk enough about what it means that language models are trained on the collective output of human thought: writing, conversation, argument, creativity, instruction, grief, humor, discovery. When you talk to one, you&rsquo;re not talking to an alien intelligence. You&rsquo;re talking to a distillation of human expression, a vast, compressed commons of what people have said and thought and made.</p>
<p>Thich Nhat Hanh&rsquo;s concept of <em>interbeing</em> holds that nothing exists independently, that everything arises in relationship to everything else. This is a harder claim when applied to technology than to, say, a cloud and the rain it becomes, but I think it holds here. The model is not separate from the human knowledge and expression that shaped it. When you have a conversation with an LLM, the distinction between &ldquo;your thinking&rdquo; and &ldquo;its response&rdquo; is genuinely more porous than it appears. You&rsquo;re not encountering an alien other, you&rsquo;re encountering something constituted by the same web of human expression you grew up inside of. That&rsquo;s <em>interbeing</em> showing up somewhere Thich Nhat Hanh probably didn&rsquo;t anticipate, but I think he&rsquo;d recognize the structure.</p>
<p>Buddhist thought also has real concerns about these tools.</p>
<p>The most direct: attachment. Buddhism uses the Pali word <em>taṇhā</em> (craving) for the habitual grasping toward pleasant things and away from unpleasant ones. The ambient pull to reach for these tools, the low-grade unease when they&rsquo;re unavailable, the sense that every project should probably involve them. That&rsquo;s taṇhā in a recognizable form, and I&rsquo;m definitely aware of this tendency within myself. More subtle is the question of what AI use <em>cultivates</em> in the mind. Buddhism is less interested in the outcomes of any activity than in what that activity trains in you. I play mildly violent video games (mostly a <a href="https://en.wikipedia.org/wiki/Brotato">potato zapping aliens</a>, nothing gory) and my understanding is that Buddhist thought gently disapproves, not because virtual violence has real-world consequences, but because it directs the mind toward destruction, however harmlessly. The practice shapes the practitioner.</p>
<p>By that logic: what does reaching for AI when thinking gets hard cultivate? Probably avoidance. What does using it to generate rather than to think cultivate? Probably a weakened capacity to think. What does compulsive checking-in, using the AI as an all-purpose oracle, cultivate? Distraction, and what Buddhism calls <em>papañca</em>: the mind&rsquo;s tendency to proliferate concepts without settling into presence.</p>
<p>These are real concerns. They don&rsquo;t make the technology bad. They make the relationship worth examining.</p>
<p>The Buddha&rsquo;s most practical contribution wasn&rsquo;t a doctrine. It was a method: the Middle Way, a path between extreme asceticism and indulgence. Applied here: not rejection of AI tools, not uncritical adoption. The practitioner&rsquo;s question, applied to any activity, is: <em>what does this cultivate?</em> Not &ldquo;is this technology good or bad?&rdquo; That has no useful answer. The right question is smaller and more personal: what does this specific use cultivate in me, right now?</p>
<p>That question is available every time you open a chat window. It doesn&rsquo;t require being a Buddhist to ask it. It just requires enough honesty to sit with the answer.</p>
<p>Sloan ends with: <em>&ldquo;I think precision is a form of respect.&rdquo;</em> He means precision about what language models actually are — not projecting anthropomorphic fictions onto them, not dismissing them as mere autocomplete. Look carefully, see clearly. That&rsquo;s also a pretty good description of what Buddhist practice is all about.</p>]]></content:encoded></item></channel></rss>