On Continuums, Tools, and Progress

The conversation around AI has become increasingly absolutist. While many aspects of AI are open for debate, from its impact on the economy and the environment to its professional implications, I'm particularly interested in the growing rhetoric that using AI makes you a lesser developer. Unfortunately, saying anything measured about AI-assisted software development is enough to get you labeled unserious, unethical, or worse.

I often hear that using AI tools for software development means you don't know what you're doing, you're generating unmaintainable slop, and you've abandoned your morals. It's hard to reconcile that framing with the reality of how software development has actually evolved throughout history.

Software development has evolved along a continuum. We started with rigid, deterministic, single-path ways of building things and gradually expanded into a world with more languages, abstractions, libraries, and new ways to work. Every step along that path traded some precision or predictability for flexibility and choice.

People think differently, learn differently, and work differently. And each individual and their work evolves over time. When the set of tools widens or new frameworks emerge, that doesn't erase what came before. It creates more entry points, more ways for people to participate, and more chances for someone to find a workflow that complements how their brain works.

The range of options available to people has expanded, and that expansion can feel like a threat because there is no longer an unquestioned default. When a once-preferred option becomes less common, the mere existence of alternatives can feel like criticism, even if no one actually criticized you. That dynamic shows up across many domains, software development included, and it's especially visible in reactions to AI-assisted workflows.

AI tools are part of this same evolution. Some people use AI in small, assistive ways, like generating comments, refactoring functions, or accepting the occasional tab completion. Others operate at a higher level: writing detailed technical specs, reasoning about architecture, and guiding the tool to implement pieces while staying deeply involved in review and iteration. And yes, some people "vibe code," generating large chunks of code, or even entire applications, without engaging with what's produced at all.

These approaches reflect different levels of intent, involvement, and responsibility. None of them, by themselves, say anything definitive about someone's competence. They describe how someone is choosing to work in that moment.

There's another aspect of the AI conversation that often gets overlooked. I've been writing software professionally for 25 years, more than half my life. I spent those years doing everything myself: modeling domains, designing APIs, avoiding one-way doors, thinking in systems, understanding tradeoffs, and recognizing when something feels wrong even before running it. That foundation is what makes LLM-assisted development productive for me. I can judge the output. I can steer it. I can constrain it. I can recognize when quality drifts and pull things back. I can put rules in place so the generated code is something I'm comfortable maintaining.

I'm also not alone in this. I know a number of longtime engineers — people I respect and have followed for a decade or more — whose workflows have changed dramatically over the past six to twelve months. When people claim that AI-assisted development is universally trash, it doesn't line up with what I'm seeing in practice.

A common response is: if this is so productive, where are the results? Part of the answer is stigma. There's still real social friction around admitting you use these tools. Public discourse hasn't caught up to private behavior. At the same time, it's hard to ignore the adoption curves of tools like Cursor or Claude Code. When these tools are put in the hands of people who know what they're building and how to build it, they work.

This is also where I agree with a point Sue Smith has made: you still need to understand code to participate responsibly in building software, especially on a team or with paying customers. AI can be part of a pathway into the field, but it doesn't eliminate the need for fundamentals. Code generation doesn't remove accountability for what ships. You still need to read, understand, and maintain the systems you build, regardless of how the code was produced. As Simon Willison put it recently, "your job is to deliver code you have proven to work."

That said, I'm not claiming you need that experience to benefit from these tools. Widening the continuum doesn't erase anything. It expands who can start and how they can grow. Some people will enter through AI-assisted workflows and develop fundamentals over time. Others will ignore these tools entirely. Both paths are valid. What matters is that the field has room for both.

The same tradeoff-driven continuum shows up outside of software development as well. A physical keyboard and an iPhone keyboard serve different needs. In some situations, accuracy and speed matter most. In others, mobility and convenience win. Swipe-to-type pushes even further along that spectrum: faster input, sometimes less accuracy. None of these choices say anything meaningful about competence. They simply reflect what someone wants in that moment.

The landscape around AI will also almost certainly change in the near future, as it's pretty clear that we're in a bubble right now. But that doesn't particularly bother me. When bubbles pop, the technology doesn't disappear, but the economics do change. What's left tends to be the parts that actually make sense. We'll still be left with development tools that genuinely help people build software.

Smaller, more purpose-trained models continue getting better, and local or self-hosted models are becoming increasingly viable for specific workflows. Not everything will need a massive, always-on model running in a hyperscale data center. For much real development work, narrower tools that run closer to the developer, or are used more intentionally, are the logical choice.

The expanding continuum, the maturing tools, the sense that we're still figuring things out: this is where Star Trek comes in for me. In that universe, humanity eventually reaches a better future, but not in a straight line. Things get very bad before they get better, yet humanity rebuilds, adapts, and pushes forward. It's a reminder that progress is uneven and nonlinear.

That part matters to me. Things today feel heavy in a way that's hard to ignore. Seeing any kind of forward movement, even small, imperfect progress in something as specific as the tools we use to build software, is a reminder that we haven't completely stagnated as a species. We're still capable of expanding what's possible and making room for more kinds of thinkers, learners, and builders.

I don't think AGI is around the corner, and I certainly don't think LLMs will lead us to a Star Trek-style utopia. But the widening of the continuum itself, the fact that more people can do more things in more ways, does give me a little hope that we're still trying as a species. We haven't given up on moving forward, even if the path is winding.