I’m preparing to teach a unit on AI literacy to a group of high schoolers next month, and it’s clear that we need to spend some of that time discussing the ethics of using Large Language Models (LLMs) like ChatGPT. I’m still working through this stuff myself, particularly because students are sharing their own opinions about it and getting me thinking, so I thought I would take this opportunity to carefully consider the ethics of this type of AI, both as a teacher and as someone who uses LLMs regularly.

My core question is simple: Is this technology ethical, and can I use it in a way that aligns with my values?

A Personal Tension

Many of the people I know and respect most are completely against AI, full stop. I have even been on the receiving end of direct hostility when I’ve tried to talk about how I use it and how it might be helpful. Some just shake their heads and look at me with a sense of regret that I’ve gone over to the dark side. What have I been missing?

Confirmation bias, our tendency to favor evidence for what we already believe, is very powerful, and it has me wondering if my reasons for continuing to use LLMs are merely convenient rationalizations. You see, I quite like using them. Having a co-intelligence as a regular part of my life has been transformative. With it I can explore any topic I can imagine, get help thinking through things, build tools that help me and others, get better organized, develop skills with customized learning plans, and brainstorm ideas. All of that has happened in the first few months of me using it on a regular basis.

My views can change, however, if I’m willing to confront them. I used to love creating AI-generated art and video, but I’ve stopped. When I thought carefully about my use of these tools, I decided that using them feels too disrespectful to the humans whose hard work was used to train AI art models without their consent. I keep looking for new generative art products that are based solely on public domain or other properly licensed work, but until they appear, I’m done.1

I want to confront my use of LLMs in the same kind of way.

The Environmental Cost

Sajjad Moazeni at the University of Washington estimates that training GPT-3, the model behind the original ChatGPT, consumed up to 10 gigawatt-hours of power, roughly the entire yearly energy consumption of over 1,000 US households, and that doesn’t even include the impact of manufacturing the hardware the software runs on. This creates a massive carbon debt that is built into the use of these tools.

Once the model is trained, answering queries from users like me (a process called inference) takes energy, too. This consumption is direct, and under my control.

Here’s how I use LLMs:

  1. I access them several days a week, and would estimate that’s about 200-250 days per year.
  2. On days I use LLMs, I have about 2-5 interactions, each of which usually includes follow-up queries, so let’s call it 10 queries per day to keep things simple, but real.
  3. My interactions vary in complexity, though most are reasonably complex.
  4. I use online providers like ChatGPT, Gemini, and Claude most, with some other models in the mix occasionally, including local models that run on my computer.

If we take Meta’s environmental impact data (they are the only big player who seems to provide this), we can estimate that a typical ChatGPT-level query requires about 0.3 kWh of energy and emits about 0.115 kg of carbon dioxide.

This means that I’m using about 3 kWh and emitting 1.15 kg of carbon dioxide on days that I use an LLM, which is significant.2
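For anyone who wants to adapt this estimate to their own habits, here’s a quick back-of-envelope sketch. The per-query figures and usage numbers are the assumptions from my list above, not measurements, so swap in your own:

```python
# Back-of-envelope LLM footprint estimate.
# Per-query figures are rough estimates derived from Meta's published
# environmental data; usage numbers reflect my own habits. All assumptions.
ENERGY_PER_QUERY_KWH = 0.3   # estimated energy per ChatGPT-level query
CO2_PER_QUERY_KG = 0.115     # estimated CO2 emitted per query
QUERIES_PER_DAY = 10         # my rough daily average on usage days
USAGE_DAYS_PER_YEAR = 225    # midpoint of my 200-250 day estimate

daily_kwh = ENERGY_PER_QUERY_KWH * QUERIES_PER_DAY
daily_co2_kg = CO2_PER_QUERY_KG * QUERIES_PER_DAY
annual_kwh = daily_kwh * USAGE_DAYS_PER_YEAR
annual_co2_kg = daily_co2_kg * USAGE_DAYS_PER_YEAR

print(f"Daily:  {daily_kwh:.1f} kWh, {daily_co2_kg:.2f} kg CO2")
print(f"Annual: {annual_kwh:.0f} kWh, {annual_co2_kg:.0f} kg CO2")
```

Run with my numbers, this works out to roughly 3 kWh and 1.15 kg of CO2 per usage day, or on the order of a few hundred kilograms of CO2 per year.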

There are some mitigating factors worth considering, though their impact is still evolving:

  1. New models and hardware are becoming more efficient. While the exact energy cost of training and inference using models like China’s DeepSeek is debated, it’s generally accepted that newer models can achieve comparable quality with significantly lower resources.
  2. AI Companies are increasing their use of renewable energy. Google, Microsoft (working closely with OpenAI), and other companies are investing billions in renewable energy infrastructure. While much of this capacity is still coming online, there is movement toward reducing environmental impact.

Making ethical energy choices requires considering our overall carbon footprint and the value we get from different energy uses. We all make tradeoffs living in a modern society, prioritizing where and how we use limited resources. What matters is being intentional about these choices rather than making them unconsciously or by default.

My conclusion: with regard to energy consumption, and considering my overall carbon footprint, it is possible for me to ethically use LLMs if I am mindful of the environmental impact and act accordingly. That’s good. When we start thinking of the human dimension of AI ethics, however, it raises even deeper questions.

The Human Factor

I completely agree with Alison Gopnik’s view that LLMs are cultural technologies, like libraries, search engines, and Wikipedia, not intelligent software agents. When we work with them, we’re interacting with an internet simulation that’s been trained to be a conversational assistant.

That means the base models of these products are clever abstractions of the entire internet. It took a while for that to sink in when I started thinking about it. The scale is mind-boggling. While most of the data used to train and create these models was public, significant portions came from data that was never meant to be used this way. Meta allegedly used pirated books to train their models, for example, and OpenAI developed (excellent and free) speech recognition software so that it could transcribe millions of hours of YouTube videos to train ChatGPT, in violation of YouTube’s terms of service.3

Think of all of that human effort, distilled into a chatbot. Without consent, creators’ work was used to create a product that’s often marketed as a tool to replace them. In addition, many of the datasets used to create commercial AI products were actually created by academic and non-profit research teams for non-commercial purposes, a convenient “data laundering” operation that lets commercial entities repurpose legally obtained data in ways that might have been considered illegal if they had collected it directly.

This is a serious ethical dilemma for me. I have benefited greatly from the generosity of people sharing their knowledge online, for free. It’s astounding, really, and highlights an extraordinary capacity for goodness in people. I want to honor their efforts, and the contributions of countless others online. I don’t want to support companies that exploit this unprecedented mass of human effort to eliminate jobs and increase human immiseration by finding new ways to surveil us, harangue us, and extract every possible penny from us in order to make wealthy people richer while the rest of us languish.

Is that what these AI companies are doing? Is that what they want? If so, I don’t want to be a part of it. That would clearly be unethical. You can’t determine the truth from corporate press releases, either. You have to look at what they’re actually doing and examine the real-world impact. On that point, however, there seems to be a wide spectrum of opinions.

The Charitable View

The most charitable interpretation goes something like this:

LLMs have the potential to transform our lives for the better in many ways: scientific breakthroughs in important areas like medicine, climate, and renewable energy; transforming and individualizing education; enhancing personal productivity; and more. While some bad actors can use AI in unethical ways, companies are actively trying to restrict the worst abuses, and are continually improving their detection of unauthorized use. Even though these tools require significant resources to maintain, free versions are offered so that anyone can enjoy the benefits of this technology. Heavier users are asked to pay so that the product can remain accessible in a sustainable way.

The Critical View

The least charitable interpretation, on the other hand, looks something like this:

LLMs have been designed from the start to extract wealth and exploit the efforts of creators to the advantage of corporations whose only goal is profit, regardless of the human consequences. They seek to create a new class of societal winners and losers, with those who can afford access and know how to use it rising in status while leaving everyone else behind. Certainly there’s a potential for useful outcomes, but potential is the operative word. Maybe, someday, it will cure cancer. Possibly. But what is actually happening, right now in the real world, is that AI is being used to make our lives worse. Endless AI slop is making the internet less useful and our lives more frustrating. Companies are using AI to cut their labor force because they’ve been sold on the premise that AI can replace employees. Criminals, oppressive governments, and other bad actors are using it to make our lives measurably worse with disinformation, mass surveillance, and scams.

Finding Middle Ground

In my experience, as with so many other things, the truth is somewhere in the middle. This is a general-purpose cultural technology. I have seen direct, positive benefits from LLMs in my own life, and I try my best to use them to make things better. In fact, my students, colleagues and friends have told me that what I’m doing is helpful to them (I have to check to make sure it’s not just me – fighting that bias thing, again).

For example, I’ve used AI assistants to help teachers with limited access to students get a pre-meeting thought debrief so that when they get together, they can start at a much deeper level. I’ve also used AI to help create custom resource websites for students that fit the exact needs of their teacher instead of collecting a bunch of links that are only partially useful. These are things I wouldn’t have had the bandwidth to provide before LLMs. I can only imagine the good that people who are smarter than me might do with these tools.

At the same time, for-profit corporations do not have a good track record when it comes to human thriving. Many businesses don’t value their employees, and look at them as obstacles to profit. Many businesses don’t value their customers, and look for every opportunity to extract value from us instead of adding value to us. Again, this is a general purpose cultural technology, and lots of people will use it (and already are using it) to enrich and empower themselves at the expense of others.

Author Robin Sloan does an excellent job of breaking things down in his article, “Is it okay?” (Note: strong language.) If you find my thinking on this to be helpful, his perspective is also worth a read, as it considers AI ethics at a higher level, and his writing style is much more entertaining than mine. While he acknowledges that in one sense it doesn’t matter – companies and enthusiasts are going to keep pushing forward regardless – on another, more personal level it matters a great deal. He clearly articulates the problem of using “everything” – this treasure that is our human data commons – to supplant human output.

My Conclusion

After careful consideration, I have come to believe that I can use LLMs ethically, but only if I remain mindful and deliberate about what I’m doing. The capabilities I enjoy exist only because of our collective data commons, which deserves profound respect, and I need to acknowledge that it consumes significant non-renewable energy.

To that end, I won’t use it to replace human work. I won’t use it for trivial tasks. I’ll prioritize applications that benefit others, not just myself. I’ll continue sharing my experiences and thoughts to help others navigate these complex issues themselves.

I believe that it’s perfectly valid to decide not to use AI at all. I find it concerning that there are forces trying to create a world in which that decision will increasingly marginalize you. I will support laws and regulations that prevent abuses of this technology. And I’ll avoid supporting companies that seem more interested in exploiting people with these tools than helping them.

An LLM Usage Framework

Here’s my personal framework for using LLMs in a way that attempts to balance everything that resulted from my thinking and research. It’s imperfect, but at least it’s a start.

Environmental Considerations

To minimize the energy cost of my LLM use, I’ll do my best to:

  • Avoid trivial tasks. I won’t use LLMs for social chats, simple lookups, basic math, or other low-value interactions that could be done with other tools.
  • Craft efficient prompts. I’ll be precise and include sufficient context in prompts to reduce the number of back-and-forth interactions needed.
  • Choose providers with renewable energy commitments. I’ll try to be aware of which companies are making genuine efforts to reduce their environmental impact, and prioritize their use.
  • Prefer efficient models. When possible, I’ll use models that achieve the necessary level of quality with lower energy requirements.
  • Balance overall digital energy use. I’ll reduce other energy-intensive online activities like HD video streaming or unnecessary cloud storage.
  • Decrease usage. I’m going to try to decrease my usage days and queries for personal stuff, and prioritize using LLMs that I can run on my own computer.

Human Considerations

To ethically consider the human factor in LLM creation, I’ll do my best to:

  • Respect the data commons. I acknowledge that LLMs exist because of countless human contributions, many made without consent for this purpose, and will be mindful of this when using them.
  • Value human work. I’ll use AI to augment rather than replace human creativity and labor.
  • Create value beyond myself. I’ll focus on using these tools to benefit others, not just myself.
  • Share knowledge and insights. I’ll try to help others develop their own ethical frameworks for using these technologies.
  • Support appropriate regulation. I’ll advocate for policies that prevent the worst abuses and promote outcomes that are beneficial to everyone, not just a few.

Phew! That was a lot. In the end, I’ve come to believe that if the positive promise of AI is ever to be realized, it must be available to people who choose to do good things with it, not just megacorporations and scammers. And because I believe most people are good, even in these times, there still remains hope for LLMs to become a net positive in our collective and very human history.


  1. Adobe Firefly seemed promising at first, but it’s marketed for commercial use, and I don’t want to use a tool that’s actively being promoted as an alternative to human artists. ↩︎

  2. It’s worth noting that whenever I look at these energy use statistics, I also see corporations warning they’ll need more energy for AI (and, therefore, more money), the fossil fuel industry promoting its capacity to supply this “vital” energy, and politicians using it all as a convenient excuse to maintain climate-damaging energy production. Framed this way, something that seems negative on its face becomes a desirable outcome for some. ↩︎

  3. Why didn’t Google go after them? Because they were doing stuff like this, too, using YouTube videos to train Gemini without creators’ consent. Nice. ↩︎