By the time most organizations are ready to ask the AI ethics question, the decisions that determine the answer have already been made.
Before working at Quantum Rise, I spent years helping organizations understand impacts they couldn't see: how a supply chain's water usage affected stressed watersheds, how land conversion for mines cascaded through a local biodiversity corridor, how lengthening transmission interconnection queues might shift energy availability at a new construction site. Climate work taught me one lesson above all others: sequencing is everything. Now I see the same mistake in how AI ethics is considered and applied.
The Window Is Shorter Than You Think
AI is moving faster than the climate space ever has, which means the window for getting the sequencing right is shorter. The good news is that the climate playbook transfers: identify impacts before they're locked in, build measurement into the process early, and ask the hard questions while there's still flexibility to answer them differently.
The Sustainability Playbook, Repeated
Most organizations that treat AI ethics seriously do so through governance frameworks bolted onto systems already in production: bias audits on models trained without diverse data, accountability policies written after an accountability gap has been exposed, environmental disclosures drafted once the data center contracts are signed.
This sequencing is wrong in a way I've seen before. When sustainability became a corporate priority in the late 2000s and early 2010s, most companies responded with reporting. They hired consultants, published annual reports, and created sustainability teams, but far fewer changed how decisions were made upstream. In climate work, this is how we end up with beautiful reports describing impacts that are already locked in. Sustainability teams care deeply about their work, but in many cases the function operates downstream of the most important decision-making.
The Compounding Problem
AI ethics is following the same path, and the consequences are likely to be similar in form but far larger in scale. Bias compounds: once a biased model is embedded in decision-making, its outputs shape future training data and the inherited problem amplifies. Governance gaps not only create legal exposure; they erode the organizational capacity to catch the next failure. And infrastructure costs don't become more manageable the longer you wait. These aren't issues to deal with down the line. Building these realities in from the beginning reduces organizational debt later.
What climate, water, and biodiversity work taught me is that the measurement question and the strategy question should be inseparable. It's disingenuous to commission a watershed impact assessment after signing a 20-year water extraction agreement and call it a sustainability strategy (yet companies do it all the time). The assessment is only useful if it shapes the decision, and the same is true for AI. An ethics audit conducted after deployment is better than nothing, but it's more cleanup than sound strategy.
Ask Hard Questions Before You Build
The organizations getting this right are doing something less glamorous than publishing ethics principles. They're asking hard questions before they build: What is this model trained on, and who will it affect? What is the ROI? What infrastructure does this depend on, and what does that infrastructure cost the places where it operates? When something goes wrong, who is accountable, and do they have real authority to act? These aren't philosophical questions; they're design questions, and they should be answered at the design stage.
The ROI is there. I know this for a fact because I helped quantify it: the companies that treated sustainability as a bolt-on five years ago are paying for it now. The same is true for AI ethics: the earlier you ask the hard questions, the cheaper the answers in the long run.
At Quantum Rise, we help organizations get the sequencing right from the start. If that's where you are, let's talk.
Annie Britton is a Senior Manager, AI Solutions at Quantum Rise. She has previously worked for climate startups and NASA as a researcher and scientist.