Navigating AI, Tradeoffs, and Ethics

Lately, I’ve been hearing from other developers who are feeling conflicted about AI. Not just whether they should use it, but what it means that they’re being pushed to use it. Some are worried about the environmental impact. Others feel like the expectations around AI are rising faster than anyone can reasonably keep up with. Many just feel uneasy.

I get it. It’s a complicated topic. And while I can’t speak for anyone else, I can share how I’ve been thinking through it, and why I still use these tools, even when I’m not always comfortable doing so.

First, I want to be clear: I’m speaking for myself here, not for my employer or anyone else. These are just my own thoughts, shaped by a lot of reading, a lot of conversation (my wife works in renewable energy, so topics like energy usage and sustainability come up in our house a lot), and a lot of wrestling with tradeoffs.

Let’s start with the environmental impact. There’s been a lot of talk about the energy and water consumption of large language models. Some of it’s probably exaggerated. Some of it, maybe not. But the thing I find more worrying is the longer-term trend: big companies launching nuclear power initiatives to keep up with AI demands. That has huge implications. It’s not a side conversation; it’s a shift in how and why energy infrastructure is being built. And I don’t think we’re fully grappling with it yet.

But the more immediate concern, the one that’s hardest to ignore, is the ethical mess around intellectual property. These models were trained on huge swaths of the internet. Everything, basically: books, code, artwork, and writing, all without permission, without credit, without compensation. That matters. There’s no getting around the fact that LLMs were trained on the backs of people who didn’t opt in.

There are real ethical reasons to say, I don’t want to be part of that.

And yet, here we are.

We live in a capitalist society, for better or (let’s be honest) mostly worse. The cat’s out of the bag. These tools exist, and they’re changing expectations. Whether we like it or not, they’re becoming part of the job. And so we each have to make our own call: How do I navigate that? What do I use? Where do I draw the line?

I don’t think there’s a clean answer. I’m vegan, but our family car is a used Jeep with leather seats. I was really conflicted about that, but we needed a car, and we didn’t buy it new. That was the tradeoff I could live with.

When I was younger and had fewer responsibilities, I could afford to take harder stances. Now, with kids and a mortgage and school tuition and everything else, I have to be more pragmatic. I’m not saying I like that. I’m not saying it’s the right approach. But that’s the tension I live with. That’s the reality of participating in the world as it is, not as I wish it were.

So yes, I use AI tools in my work. Not carelessly, but with awareness. I stay curious, keep learning, and try to understand the broader impact. I’m open to changing my mind. I also vote for and donate to causes that align with my values, ones that aim to lift people up, in the hope that our society will eventually improve. In my experience, real change is slow, and you need to play the long game.

These things don’t always line up neatly. That’s the tension. That’s the job. That’s the world.