Last year, my students were hitting usage limits on free ChatGPT accounts mid-conversation. I was working hard on prompts to help them use AI as a thought partner rather than a shortcut or search engine, but I kept running into limitations.
I couldn’t control the system instructions. The interface was designed for convenience and frictionless interaction, not for learning. I wanted something better suited to my students and my pedagogical goals, but I couldn’t find anything that fit.
So I started experimenting with Claude, describing what I needed, getting code back, testing it, figuring out what wasn’t working, and iterating. The architectural thinking was mine. The pedagogical design was mine. The implementation came from back-and-forth with an LLM that could write code I couldn’t write myself. The result was Babbleborg, a custom AI chat interface that my students now use as a part of their curriculum.
It’s like having a 3D printer for code. Most people don’t have a lathe in their garage, but with a 3D printer, you can now make your own physical objects. You design what you want, understand how the pieces fit together, and the printer handles fabrication. Working with AI to build software feels similar: a fundamental expansion of what we can create.
The Resistance
When I started sharing what I was learning about AI with my teacher colleagues, I wasn’t prepared for the level of resistance I encountered. Some of it verged on hostility. These are people I respect tremendously, colleagues I’ve worked with for years. The reaction caught me off guard.
Looking back, I think I understand it better now. If I only listened to what the corporations and their executives were saying about AI, and if I only read the breathless press coverage, I might hate it, too. The technology is being sold as something that will eliminate jobs, create massive instability, potentially become independent of its creators, and decide humanity is no longer necessary. Chatbots are feeding people’s delusions. Children are confiding in them in ways that feel deeply uncomfortable. The hype is everywhere, the FOMO is relentless, and the fear-mongering is constant.
Meanwhile, megacorporations are being fiscally irresponsible, pouring speculative investment into AI that could devastate people when the bubble inevitably pops. They’re shoving AI into every product to satisfy investors, and they’re facilitating endless AI slop. The pursuit of short-term corporate value over human thriving is poisoning the well for everyone.
Anil Dash has written about what he calls “the majority AI view” within the tech industry. Most people in technical roles see AI as useful technology that has been absurdly over-hyped and forced on everyone, while valid critiques are systematically ignored. He explains that this reasonable majority understands the potential but is frustrated by the extremism at the top. Their voices are drowned out by billionaire tycoons and corporate hype machines.
The tragedy is watching thoughtful people shut down completely because of how this technology is being marketed and deployed. They don’t even explore what might be possible because they’ve been presented with such a distorted picture. For our students, that’s a real loss. AI literacy needs to be every bit as critical as it is experiential. It should include understanding both the tremendous benefits and the significant potential for harm, wrestling with that tension, and recognizing that this is a tool like any other in some ways, and unlike anything in others. How we choose to use it matters.
What Building Babbleborg Changed
I’ve been following AI developments since around 2015 or 2016, when generative adversarial networks started producing those eerily realistic fake faces. I incorporated some of that early work into my curriculum, exploring with students what it meant when you couldn’t trust photographic evidence anymore. When StyleGAN came out in 2018 and “This Person Does Not Exist” launched in 2019, I was already thinking about the implications.
But using AI tools and building with AI are fundamentally different experiences. Building Babbleborg shifted something in how I understand this technology and my relationship to it.
The partnership with Claude wasn’t like using a tool. It felt more like what Ethan Mollick describes as “co-intelligence,” a genuine collaboration where my expertise and the AI’s capabilities complemented each other in ways that created something neither of us could have made alone. I understood what needed to happen architecturally and pedagogically. The AI understood how to implement it. Together, we built something that works.
That realization opened up a wider view. I started thinking about where different people sit on the technology spectrum and what becomes possible for them. Professional developers can offload grunt work while staying in control of the architecture. Complete beginners can use tools like Claude’s artifacts and skills to build things they never could before. I’m somewhere in the middle: technically proficient enough to guide the process, understand the components, and redirect when something isn’t working, but not skilled enough to write the code myself.
Since creating Babbleborg, I’ve tackled several other things I couldn’t have built before, including the build guide itself. That one I probably could have done on my own, but it would have taken months. With AI assistance, I finished it in a few weeks. The 3D printer metaphor keeps coming back to me. This really is like having a fabrication tool for code. Most people don’t have a full workshop, but now they can make useful and interesting things.
This has convinced me that AI represents something genuinely new in human experience. Not because of the hype or the fear-mongering, but because of what it actually enables. The expansion of creative capacity is real.
Why a Build Guide, Not a Code Release
My first instinct was to publish the code to a public GitHub repository: share what I’d built, let other educators deploy it, and write up step-by-step instructions for getting it running on shared hosting. That seemed like the most helpful approach.
The more I thought about it, though, the more uncomfortable I became. More than 80% of the code was generated by AI. I understand the architecture and can follow what the code does. I was involved in every decision, guided the process when things weren’t working, and knew enough to redirect the AI when it went down the wrong path. But when it comes to the specifics of implementation, my ability to diagnose problems and fix them is limited.
That doesn’t feel responsible. I don’t want to publish code I can’t fully maintain. I don’t want to feed someone’s problem description to an AI, get back a fix I don’t deeply understand, and present that as support. If you’re going to put code out into the world as a product, you need to be able to stand behind it.
Then I realized something else. The process of building this thing changed my thinking about these tools more than the finished product did. Going through the back-and-forth with the AI, understanding the workflow, learning how to work with coding agents — that transformation was the valuable part. That’s what shifted my relationship with this technology.
Sharing my work as a build guide gives people that experience. You’re not getting a product based on my needs and my teaching context. You’re building it for yourself. You make the decisions. You own the code. You go through the process of learning how to work with AI coding agents, and you come out the other side with both a working tool and a new capability.
I’m incredibly proud of what I built, even though I didn’t write the code myself. I think anyone who follows the guide and builds their own version will feel that same ownership and pride. That seems more valuable than just handing someone a finished product.
Knowing the Boundaries
This particular project works well for AI-assisted development because it’s architecturally simple. At its core, Babbleborg is a behind-the-scenes API call handler. There are no user accounts. There’s no sensitive data storage. The stack is straightforward: HTML, CSS, JavaScript, and PHP for the API handling.
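To make that concrete, here’s a minimal sketch of what a handler like this can look like. It isn’t Babbleborg’s actual code: it assumes the Anthropic Messages API and a hypothetical model name, and it omits the input validation and error handling a real deployment needs. The point is the shape of the thing: the browser posts the conversation to your server, the server attaches the API key and the system instructions, and the model’s reply is relayed back.

```php
<?php
// chat.php — illustrative only. The browser posts {"messages": [...]}
// here; the API key and system prompt never leave the server.

$apiKey = getenv('ANTHROPIC_API_KEY');   // key lives in a server env variable
$input  = json_decode(file_get_contents('php://input'), true);

$payload = json_encode([
    'model'      => 'claude-sonnet-4-5',   // hypothetical model choice
    'max_tokens' => 1024,
    'system'     => 'You are a thought partner for students...',
    'messages'   => $input['messages'],    // the running conversation
]);

$ch = curl_init('https://api.anthropic.com/v1/messages');
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_POST           => true,
    CURLOPT_HTTPHEADER     => [
        'content-type: application/json',
        'x-api-key: ' . $apiKey,
        'anthropic-version: 2023-06-01',
    ],
    CURLOPT_POSTFIELDS     => $payload,
]);

header('Content-Type: application/json');
echo curl_exec($ch);    // relay the model's reply to the browser
curl_close($ch);
```

Because the key sits in a server environment variable, students never see it, and you can change the system instructions without touching the client at all.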
I can follow all of it without being a professional programmer because it’s reasonably simple. Some complexity exists on the client side, but as long as those components work as intended and the structure is clear, I’m comfortable with my level of understanding.
There are many things I would not trust an LLM coding assistant to build for me without having more comprehensive understanding and doing more hands-on coding. Anything involving user authentication, sensitive data, or complex security requirements needs more expertise than I currently have. The scope and simplicity of this project make it a good fit for this kind of workflow.
That matters for anyone considering following the guide. You need to understand what you’re building well enough to know when something isn’t working correctly. You need enough technical background to guide the process and make architectural decisions. The build guide will help you through the implementation, but you’re still responsible for the result.
I think this is a good project precisely because the scope is manageable and the architecture is clear. It teaches you the workflow without overwhelming you with complexity. That makes it a solid foundation for understanding how to work with AI coding agents on appropriate projects. Stay within these boundaries, and the possibilities are incredible.
The Laboratory
Once you have a working custom AI chat interface, you have something more than a tool. You have a laboratory for experimenting with AI-facilitated learning experiences.
You get complete control over the system instructions. You can shape the interaction around your pedagogical goals, your students’ needs, and your teaching context. You’re not working within someone else’s constraints or fighting against default behaviors designed for convenience rather than learning.
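As an illustration (these are not Babbleborg’s actual instructions), here’s the kind of system prompt you might set server-side, written as the PHP string a handler like the one sketched earlier would send:

```php
// A hypothetical system prompt, showing the pedagogical shaping this
// control makes possible. Babbleborg's real instructions differ.
$system = <<<PROMPT
You are a thought partner for a high-school student developing a
project plan. Never write the plan for them. Ask one question at a
time: first the project's goal, then audience, materials, milestones,
and risks. If the student asks you to do the work for them, decline
and redirect them back to their own thinking.
PROMPT;
```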
I think about all the times I’ve wished I could clone myself. In my tech projects class, students pick an idea and develop a project plan. I’d give them a worksheet, review it, provide feedback, and sit down with each student for maybe ten minutes to make sure they had a good plan. What I really wanted was to spend 45 minutes with each of them, carefully working through every component they needed to think about.
I don’t have the bandwidth for that. I do, however, have the bandwidth to create a carefully designed AI-facilitated conversation flow that embodies everything I know about project planning. Students can have that conversation simultaneously across the entire classroom. I can walk around, answer questions when they get stuck, and address things they’re wondering about. At the end, they give me their full chat transcript. I can see their thinking as they worked through the conversation. The system organizes their responses into a structured plan.
Now, some teachers may push back on that last part, arguing that students should do the organizing themselves. In this particular context, I’m okay with it. The thinking has been accomplished, and that’s my goal. When I meet with students afterward, we’re starting from a much more advanced place than I could have reached before.
There are so many applications once you start thinking this way. You can create templates for different kinds of conversations, different subject areas, different learning objectives. Other teachers at your school can use what you’ve built without needing to understand the technical side. The laboratory keeps expanding.
The Workflow
Building Babbleborg taught me more than how to create a custom chat interface. It also taught me a transferable workflow for building with AI coding agents.
You start with clear architectural thinking about what you’re trying to build. You break it down into stages. You test methodically. You read error messages carefully. You iterate when things don’t work. You maintain responsibility for understanding what you’re building, even when the AI is handling implementation details.
That workflow applies to other projects, and each time I use it, the process gets a little smoother. I’m developing an intuition for how to describe what I need, how to recognize when the AI is going down the wrong path, and how to course-correct.
It’s a virtuous cycle: you gain capability, which opens up new possibilities, which gives you more experience, which expands your capability further. You’re not becoming a professional developer, but you are developing a genuinely useful skill for working with these tools.
That’s one of the reasons the build guide matters. Anyone who completes it won’t just have a working chatbot. They’ll have experience with a process they can apply to other appropriate projects. They’ll understand what kinds of things are good candidates for AI-assisted development and what kinds require more expertise.
Fight The Power
I’m angry about how corporations are deploying AI. The pursuit of short-term profit and market dominance has inflated this technology completely out of proportion to its reality. They’re creating dependence, harvesting data, and fueling a speculative bubble that could devastate people when it bursts. People are using it to flood the internet with slop and misinformation. Right now, on balance, I think AI’s influence on society is negative.
What makes me hopeful, though, is that they’ve also given us the tools to do something better. I imagine many of the people actually building these systems, the reasonable majority Anil Dash describes, would want to see them used this way.
Babbleborg costs pennies per conversation. It’s more private because major API providers don’t train on API traffic by default. It’s more pedagogically sound because I control the system instructions completely. It’s more respectful of students as learners because I can shape the experience around learning rather than profit metrics.
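The arithmetic is easy to check. Assuming, for illustration, pricing on the order of $1 per million input tokens and $5 per million output tokens (actual rates vary by model and provider), a long classroom conversation totaling roughly 20,000 input tokens and 5,000 output tokens costs about two cents plus two and a half cents: under a nickel for a session that can run most of a class period.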
The corporations built the infrastructure. They created the APIs. They made these capabilities available. Then they wrapped them in products designed to maximize engagement, create dependency, and extract maximum value. We can subvert this and use those same capabilities differently.
That feels important to me. Not as some grand revolutionary act, but as proof that we have agency in how this technology gets used. Every choice to build something thoughtful is a small act of resistance against thoughtless deployment. Every implementation that prioritizes learning over convenience pushes back against the dominant narrative.
I love technology. I’ve spent decades working with it, teaching with it, and thinking about its potential. Watching it get weaponized for short-term corporate gain while people dismiss genuine possibilities frustrates me deeply. We can do better. This is one small way to demonstrate that.
The Core Truth
AI is genuinely transformative. That’s not hype. The expansion of creative capacity is real. The potential for good is substantial.
We also have agency in how this technology gets used. Every choice we make about implementation matters. Every system we build that prioritizes learning, privacy, and human agency shapes what AI becomes in practice.
Babbleborg demonstrates something simple but important: AI doesn’t have to be the way the corporations are making it. We can build alternatives. We can use the same underlying technology in ways that serve different values. We can create tools that respect learners, protect privacy, and cost dramatically less.
The technology itself is neither savior nor destroyer. How we choose to deploy it, the values we embed in our implementations, the pedagogical principles we prioritize: those are the things that matter. Every thoughtful implementation is a counter-example to the dominant narrative of AI as either magical solution or existential threat.
For the Curious
The build guide at babbleborg.org is for technically minded educators who want to understand by doing. It’s for people who believe we can choose better, want to roll up their sleeves and experiment, and value agency over convenience.
It’s not for everyone. The guide requires curiosity, willingness to experiment, and comfort with iteration. You need to be in an environment where experimentation is acceptable. There are excellent, well-supported commercial AI products available if that’s not your situation. They’re just not as interesting as building your own.
If you do follow the guide, you’ll come out the other side with more than a working chatbot. You’ll have transformed your relationship with this technology. You’ll understand what’s possible when you take ownership of the implementation. You’ll have proven to yourself that alternatives exist.
That matters. Not because any single custom chatbot will change the world, but because every alternative implementation helps establish that we have choices in how this technology gets deployed. Putting that possibility into the world feels worth doing.
