I recently heard Cory Doctorow speak about enshittification, his framework for understanding how digital platforms decay over time. The talk was wonderful and thought-provoking in the way good author talks should be. Cory challenges my thinking regularly, particularly when we don’t see eye to eye. I actually agree with most of what he says about AI, though my personal experience leads me to draw different conclusions from time to time.

His analysis of platform decay got me thinking about patterns. Specifically, about where we are right now with large language models, and whether we’re standing at a familiar crossroads.

This isn’t a doom-and-gloom piece. The problems I’m going to describe aren’t happening yet. We’ve seen this story before with social media, and understanding that pattern might help us recognize if it starts to repeat.

Enshittification describes a specific trajectory that digital platforms tend to follow. They start by offering genuine value to users. In the early days of Facebook, Twitter, and Instagram, these platforms really did help people connect with friends and family, discover communities, and share experiences. The promise was real.

Then comes the shift. Platforms begin optimizing for engagement and profit over user benefit. They introduce algorithmic feeds that show you content you didn’t ask for, content chosen not because it’s valuable to you but because it keeps you scrolling. They build walled gardens that make it hard to leave or use alternatives. Crucially, the focus moves from serving users to serving advertisers, because that’s where the money is. Your experience becomes secondary to keeping you exposed to ads. Then, once the advertisers are hooked, they make it worse for them, too.

Social media’s trajectory shows us where this leads. What started as tools for connection became engines of toxicity, misinformation, and polarization. The optimization for engagement meant amplifying outrage and division, because those keep people on the platform. Real harms followed, ranging from mental health impacts to, in extreme cases, platforms being used to promote genocide.

The core promise of social media is still valuable. Getting that experience today requires choosing alternatives to the major corporations, like Mastodon or other ActivityPub-based platforms that prioritize interoperability and user control. Even Bluesky concerns me because it’s a corporate entity. At some point, if it needs to make money, things might change.

We’re not dealing with advertiser pressure in the LLM space yet. That’s an important distinction. The revenue models are different right now. Whether that remains true depends on decisions that haven’t been made yet, at least as far as we know.

Large language models are also at a different stage right now. They’re becoming more adaptable, responding to what they perceive as our needs without requiring carefully crafted prompts to get the most out of them. I’ve seen this evolution in my own classroom.

My eighth-grade students use a simple Gemini setup, seeded with basic instructions about their project, that acts as a “project coach” they can turn to when they have questions or want to think things through before coming to me. These students are getting useful results with minimal scaffolding. A few years ago, this would have required much more elaborate prompting and careful instructional design.
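For readers curious what that kind of setup looks like under the hood, here’s a minimal sketch using Google’s generative AI Python SDK. The real classroom version is nothing more than plain-language instructions given to Gemini, so treat this as an illustration: the model name, API key, and coach instructions below are placeholders, not our actual configuration.

```python
# A minimal sketch of a "project coach" chat, assuming the google-generativeai
# Python SDK. The system instruction shapes how the model responds; everything
# here (model name, key, wording) is illustrative, not the classroom setup.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder

coach = genai.GenerativeModel(
    model_name="gemini-1.5-flash",
    system_instruction=(
        "You are a project coach for an eighth-grade design project. "
        "Ask clarifying questions, suggest next steps, and nudge the student "
        "to explain their reasoning before you offer answers."
    ),
)

chat = coach.start_chat()
reply = chat.send_message("I'm stuck on how to size the gears in my CAD model.")
print(reply.text)
```

The point isn’t the code; it’s how little scaffolding is required. A short set of instructions is enough to turn a general-purpose model into something students can use independently.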

One of my students was doing precise calculations in CAD software and found the LLM helpful for working through advanced math and engineering questions that I couldn’t assist with. We didn’t have an engineer on call, yet my student could have a back-and-forth conversation instead of piecing together information from random forum posts. It wasn’t perfect, but it was genuinely useful.

This shift toward more adaptable conversation is good. It makes these tools more accessible and natural to use. With less friction, more people are able to benefit from the technology. The barrier between having an idea and getting help thinking it through has lowered considerably. This same shift also means AI companies have more control over how these tools behave.

When models aren’t calibrated to their liking, companies can adjust them. They can tweak how the AI responds, what tone it takes, and how it engages with users. Sometimes these adjustments happen transparently, and sometimes they don’t. Recently there was a controversy when OpenAI adjusted ChatGPT to be less sycophantic in its responses. A lot of users were upset. They’d gotten used to the AI being extra supportive and affirming, and when that behavior decreased, they felt that the tool had gotten worse.

I actually thought the change was good, and reflected the right kind of thinking. I still think current models are too validating, trying to interact like a human friend. My suspicion is that excessive validation promotes dependency and keeps people engaged with the platform. The same way social media learned that outrage and division keep people scrolling, AI companies might learn that affirmation and agreeableness keep people coming back.

This gets at something important about how these systems could be optimized. If AI companies calibrate their models the way advertisers calibrate campaigns, testing and optimizing for engagement metrics, we know where that leads. We’ve seen it with social media. The optimization happens in secret. Users experience the results without understanding the changes being made or the priorities driving them.

Right now, the incentive structures for AI companies don’t necessarily mirror those of social media companies. Whether they end up mirroring them depends entirely on the choices corporations make. I believe it’s possible to be profitable and still decide to center human benefit alongside business sustainability.

The temptation to do otherwise exists, though. The pressure to monetize, to justify massive speculative bubbles, and to show returns to investors creates a pull toward different priorities. When that pressure intensifies, the track record for corporate technology isn’t encouraging. We’ve watched companies choose profit over user welfare repeatedly. I don’t want to suggest that the people working at these companies are evil. I admire many of them tremendously. It’s just that the systems in place may reward choices that are bad for people but good for making money.

Here’s the tension I’m trying to hold: technically, AI is only going to get better from here. The models will become more capable, more sophisticated, and better at understanding and responding to what we need. That trajectory seems solid, even if the pace has slowed.

Our access to those improvements isn’t guaranteed, though. The optimization priorities that guide development could shift. Free tiers that exist today might disappear, which is particularly galling given that these models were trained on content from the free and open internet. If you’re profiting off of humanity’s collective output largely without permission or compensation, there’s an argument that you owe it to us humans to make the results available at some basic level.

We’re at the early stage of this journey right now. There’s genuine competition between providers. We have access to powerful models at no cost. Multiple companies are pushing the technology forward in different directions. This creates real value for users.

Whether this continues depends on decisions yet to be made. The pressure to monetize will intensify. The hundreds of billions of dollars spent on data centers will demand returns. When the focus shifts from user benefit to profit extraction, we could see consolidation and control similar to what happened with social media. The conditions that enabled social media’s corruption exist here too, even if the outcome isn’t inevitable.

There is a glimmer of hope worth noting: open-source models represent a potential alternative path. If models approaching current flagship performance could run locally on our own computers, that would fundamentally change the dynamic. Even if commercial AI froze at today’s capability level, I could spend a lifetime getting value from it.
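To make that alternative path concrete, here’s a rough sketch of what chatting with a locally hosted open model can look like today, using the Ollama Python library. This assumes Ollama is installed and an open model has already been downloaded; the model name and prompt are placeholders, not a recommendation.

```python
# A minimal sketch of querying a locally running open model via the Ollama
# Python library. Assumes Ollama is installed and a model (e.g. llama3.1)
# has already been pulled; the model name and prompt are illustrative.
import ollama

response = ollama.chat(
    model="llama3.1",
    messages=[
        {"role": "user", "content": "Help me think through gear ratios for a CAD project."},
    ],
)

print(response["message"]["content"])
```

Nothing in that exchange depends on a company’s pricing decisions or on a free tier continuing to exist, which is exactly why local models change the dynamic.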

I’m not trying to tell anyone how to use these tools. Use them however works for you. What I am suggesting is that we stay aware. Don’t assume that what we have access to now will continue unchanged. Don’t take for granted that the priorities guiding development will stay user-focused.

The corporate track record matters. We’ve seen how platforms that started with genuine promise evolved when profit became the primary driver. We’ve watched the optimization for engagement create real harm. We know what happens when user experience becomes secondary to other priorities.

Watch for the patterns. Notice when models get tweaked in ways that seem designed to keep you engaged rather than serve your actual needs. Pay attention to how access changes, how pricing structures evolve, and whether free tiers continue to exist. Consider whether the improvements benefit users or serve other interests first.

I don’t think we’re quite there yet. The tools available today are genuinely remarkable and useful. I use them often and find tremendous value in them. That’s exactly why it matters to stay alert to how things might change.

We’re still at the beginning of this weirdness. The decisions being made now about how to develop, deploy, and monetize these tools will shape what’s possible for years to come. Understanding the pattern we’ve seen before doesn’t guarantee we can prevent it from repeating, but it does give us a better chance of recognizing it if it starts.