
AI, Ethics, and the Question We Don’t Ask Often Enough: What Is This For?

I recently had a conversation with my daughter.


She’s Gen Z, thoughtful, and deeply skeptical of AI. Not in a casual way, but ethically opposed to it. Her concerns weren’t about job security or whether AI will replace humans. They were about the environmental cost, the socioeconomic impact, and the broader question of whether we’re creating something powerful without fully accounting for the consequences.


I didn’t convince her.


But the conversation stuck with me.


Because while I spend a lot of my professional time helping organizations think about AI readiness, literacy, and adoption, that conversation forced me to step back and ask a more fundamental question, one we don’t ask nearly enough:


What is all of this actually for?


The Trade-Offs We Pretend Don’t Exist


Here’s an uncomfortable truth: AI is not free.


It consumes enormous amounts of energy. It requires massive infrastructure. It concentrates power and advantage in ways we don’t fully understand yet. And like every major technological shift before it, it will create winners and losers, often unevenly and unfairly.


If someone told me that AI would significantly increase environmental impact, but in return it would help cure certain cancers, eliminate rare diseases, or dramatically improve outcomes for people who have historically been underserved by healthcare systems...


I’d have to seriously weigh that trade-off.


Most of us would.


That’s not blind optimism; that’s ethical complexity.


But now consider the other side of the ledger.


If we’re causing environmental harm, reinforcing inequality, and accelerating misinformation so people can use AI to remove clothing from photographs, generate deepfake harassment, or flood the internet with low-value content?


That’s not a hard trade-off at all.


That’s a failure of judgment.


Power Without Purpose Is the Real Risk


We often talk about AI risk in technical terms: hallucinations, bias, autonomy, alignment.

Those matter.


But the bigger risk isn’t that AI will suddenly become sentient or go rogue.


The real risk is that we deploy enormous power without ever being clear about the purpose it serves.


I’ve seen this before.


In the early days of agile and digital transformation, organizations rushed to adopt new practices and tools because everyone else was doing it. The how became more important than the why. Over time, the original intent, to deliver better outcomes for customers and teams, got lost.


AI feels dangerously similar right now.


We’re automating processes we don’t fully understand. We’re scaling decisions without clear accountability. And we’re chasing novelty faster than we’re developing judgment.


Ethics doesn’t start with policies or governance frameworks. It starts with asking better questions.


Just Because We Can Doesn’t Mean We Should


One of the simplest ethical lenses I’ve found is this:


If this capability disappeared tomorrow, would the world be worse off in any meaningful way?

If the answer is “yes”, because it improves health outcomes, increases access, reduces harm, or enables people to do meaningful work more effectively, then it’s probably worth investing in, even if it comes with costs that must be managed responsibly.


If the answer is “no”, or worse, “it would probably be better”, then we should pause.


Not everything that’s impressive is valuable. Not everything that’s efficient is ethical.


And not every use case deserves to be scaled.


An Ethical AI Gut Check for Leaders

Before green-lighting an AI initiative, automating a process, or experimenting with a new capability, it’s worth pausing to ask a few simple but uncomfortable questions:


1. What problem are we actually trying to solve? Is this addressing a real human, customer, or operational need, or are we adopting AI because it’s available, impressive, or expected?


2. Who clearly owns the outcome if this goes wrong? If no one can answer that quickly, accountability has already been diluted, and AI will only accelerate that gap.


3. Are we automating understanding, or bypassing it? Do we truly understand the process, data, and decisions involved, or are we using AI to mask complexity we haven’t taken the time to untangle?


4. Who benefits most, and who bears the cost? Consider environmental impact, workforce implications, and downstream effects. If the benefits are narrow but the costs are broad, that’s a signal worth paying attention to.


5. If this use case disappeared tomorrow, would it matter? Would customers, employees, or society be worse off in a meaningful way, or would we mostly just be inconvenienced?


This isn’t about achieving moral perfection.


It’s about exercising judgment before scale, not after harm.


Ethics Is a Leadership Responsibility, Not a Technical One


One thing I agreed with my daughter on: ethics can’t be bolted on after the fact.


It can’t live only in legal reviews, AI policies, or compliance checklists.


Ethics lives in leadership decisions.


It shows up in what problems we choose to solve. It shows up in what we automate and what we intentionally keep human. It shows up in whether we prioritize dignity, transparency, and accountability over speed and cost savings.


Leaders don’t get to delegate that responsibility to “the AI team” or “the vendor.”


If AI amplifies judgment, as I believe it does, then weak judgment will scale faster than good judgment ever has.


Holding Two Truths at the Same Time


Here’s where I landed after that conversation.


AI has extraordinary potential to do real good in the world.


And it also has the potential to deepen harm if we don’t slow down enough to ask what we’re building, and why.


Those two things can be true at the same time.


Being ethically serious about AI doesn’t mean rejecting it outright. It means refusing to treat it as inevitable, neutral, or value-free.


The most important skill leaders need right now isn’t prompt engineering or tool selection.

It’s discernment.


And that starts with a simple, uncomfortable question we should all be asking more often:


Is this the best use of this power, or just the easiest one?
