Artificial Intelligence and Environmental Collapse: Why Opting Out Isn’t Enough

Opting out of AI may offer clarity—but without collective limits and systemic pressure, the environmental toll will grow. Strategic, ethical use may be our only leverage.


There is no longer any real doubt that artificial intelligence is transforming our world—but the speed and scale of its environmental toll are only beginning to surface. Despite being packaged as cloud-based, immaterial, or efficient, AI is powered by physical infrastructure that requires enormous amounts of electricity, fresh water, and raw materials. It is not separate from the ecological systems it draws from. It is embedded in them—and increasingly straining them.

As more individuals and small businesses become aware of AI’s footprint, a question emerges: is it more ethical to abstain entirely? Should we avoid using AI in our work, our tools, and our creative processes? In a culture of compulsive overuse and technological evangelism, this can feel like a powerful stance. But the deeper truth is that opting out alone will not mitigate the harms we’re facing. It may offer clarity on a personal level—but unless it’s tied to broader organizing, it’s unlikely to move the needle. Worse, it may leave those who hold these values outside the conversation while expansion continues unchecked.

The question, then, is not simply whether to use AI or not—but whether it can be used strategically and ethically, with full knowledge of its costs, and whether it can be reshaped from the inside while pressure is simultaneously applied from the outside.


The Real Environmental Cost of AI

Training a large AI model consumes significant energy—its emissions are often compared to the lifetime emissions of multiple gas-powered vehicles. But training is not the primary source of harm going forward. Inference—the repeated, ongoing use of the model in real time across billions of applications—is where the real environmental impact lies. This is what powers search engines, real-time assistants, ad targeting, synthetic media, and workflow tools. As more devices, apps, and systems default to AI as a baseline feature, the energy draw will outpace even the largest training runs.

Inference is particularly concerning because of its always-on nature and the economic incentives driving its ubiquity. Unlike training, which is a bounded, largely one-time cost per model, inference occurs continuously, responding to countless user interactions every second. This persistent demand leads to a constant load on data centers, amplifying energy consumption and associated emissions over time.
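To make the training-versus-inference comparison concrete, here is a minimal back-of-envelope sketch in Python. Every figure in it (the training cost, the per-query energy, the daily query volume) is an assumed, illustrative value, not a measured one:

```python
# Back-of-envelope: one-time training energy vs. cumulative inference energy.
# Every number here is an illustrative assumption, not a measured value.

TRAINING_MWH = 1_300        # assumed one-time training cost for a large model, in MWh
WH_PER_QUERY = 0.3          # assumed energy per inference query, in Wh
QUERIES_PER_DAY = 100e6     # assumed daily query volume for a popular AI service

# Convert per-query watt-hours into megawatt-hours per day (1 MWh = 1e6 Wh).
inference_mwh_per_day = WH_PER_QUERY * QUERIES_PER_DAY / 1e6

days_to_match_training = TRAINING_MWH / inference_mwh_per_day
print(f"Inference draw: {inference_mwh_per_day:.0f} MWh/day")
print(f"Cumulative inference matches training after ~{days_to_match_training:.0f} days")
```

Under these assumptions, cumulative inference overtakes the entire one-time training cost in roughly six weeks, and then keeps accruing indefinitely.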


“Inference—the repeated, ongoing use of the model in real time across billions of applications—is where the real environmental impact lies.”

Then there’s water. Data centers often use evaporative cooling to prevent overheating. This method can draw millions of gallons of water daily, especially at the large server farms run by Google, Microsoft, Amazon, and others. These centers are often built in drought-prone areas such as Arizona, where water scarcity is already acute. Microsoft’s 2023 environmental reporting disclosed a 34% year-over-year increase in water consumption, a jump outside researchers have tied largely to AI deployment.

On top of this is the hardware problem. AI doesn’t run on magic. It runs on GPUs and data center infrastructure made from lithium, cobalt, rare earths, and other materials that require mining, often in ecologically vulnerable or politically unstable regions. These processes leave behind toxic waste, drive displacement, and deepen dependency on violent extractive economies.

Even efficiency gains—such as smarter models or better hardware—do not fundamentally reduce harm if total compute demand keeps rising. A more efficient model used ten times as often can still burn more power and water than a less efficient one used sparingly. Scale cancels out efficiency if it goes unregulated.
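The arithmetic behind this rebound effect (sometimes called the Jevons paradox) is simple enough to sketch. The efficiency gain and usage multiplier below are assumptions chosen only to illustrate the point:

```python
# Illustrative rebound arithmetic: an efficiency gain is wiped out when
# usage grows faster than efficiency improves. All numbers are assumptions.

energy_per_query_before = 1.0           # arbitrary units
energy_per_query_after = 1.0 / 3        # assume the model becomes 3x more efficient

queries_before = 1.0                    # normalized baseline usage
queries_after = queries_before * 10     # assume usage grows 10x as AI becomes a default

total_before = energy_per_query_before * queries_before
total_after = energy_per_query_after * queries_after
print(f"Total energy after 'efficiency' gains: {total_after / total_before:.1f}x baseline")
```

Any efficiency gain smaller than the growth in usage yields a net increase in total consumption, which is why efficiency alone cannot substitute for caps on scale.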


The Timeline: How AI Ranks Among Global Environmental Threats

To understand the seriousness of AI’s ecological impact, it’s useful to situate it within the broader landscape of environmentally damaging industries. While fossil fuels, large-scale agriculture, and heavy manufacturing remain the most visible drivers of collapse, AI is quickly climbing the ranks.

By 2030, under current growth trajectories, AI is expected to become the eighth most environmentally harmful industry globally. It will follow fossil fuels, industrial agriculture (especially meat and dairy), cement, steel, aviation, fast fashion, and the mining sector that supports both green energy and tech infrastructure. AI will surpass plastics, conventional construction, and possibly even petrochemicals in its use of electricity, water, and extracted materials.


“By 2030, under current growth trajectories, AI is expected to become the eighth most environmentally harmful industry globally.”

In a scenario where AI is made vastly less harmful—through direct renewable power sourcing, tight model constraints, regional caps on inference, and waterless or closed-loop cooling systems—it could fall out of the top 10 entirely. It might rank closer to 13th or 14th globally, on par with the broadband and telecom industries. But this outcome depends not just on cleaner infrastructure, but also on significant curbs to scale and use case proliferation.

By 2035, if regulation remains weak, AI is projected to climb to sixth place. Its energy use will dominate global tech emissions, and its demand for water and high-value metals will continue to rise. At this point, it will overtake fast fashion and begin to rival the impact of steel and shipping—both of which have been longstanding pillars of industrial pollution. However, with aggressive global standards and a shift away from commercial AI deployment in trivial or extractive domains, its impact could fall to 16th–20th, no longer among the highest-emitting sectors.

By 2040, on its expected trajectory, AI is likely to remain in the top ten, ranked around ninth. It will sit below fossil fuels, industrial food systems, mineral extraction, cement and construction, aviation, petrochemicals, large-scale plastics, and biomass conversion. If restructured dramatically—used only in scientific research, climate modeling, or civic applications with strict oversight—it could rank closer to 25th, resembling a mid-level public utility in its environmental toll.


“Vastly less harmful AI is only possible if limits are imposed on how much, where, and for what purpose it is used.”

Importantly, the difference between these scenarios is not only technological. It is political and economic. Vastly less harmful AI is only possible if limits are imposed on how much, where, and for what purpose it is used. Without that, even clean infrastructure becomes extractive.


Why “Vastly Less Harmful” Still Isn’t Enough

Many conversations about AI and sustainability rest on the hope that it can be made “clean”—powered by wind or solar, cooled with recycled water, or operated with efficiency-optimized chips. But a clean AI system used for trivial tasks, profit optimization, misinformation generation, or labor displacement is still part of the harm. The issue isn’t just how AI runs. It’s what AI is for.

This is why reducing emissions and water usage, while necessary, is not sufficient. A zero-emissions model that replaces human contact with automated support, floods the web with synthetic content, or powers predatory ad engines still contributes to social and ecological fragmentation. Without intentional limits on purpose and value, the same harmful logic persists.

Even in its greenest form, AI still risks entrenching a worldview that says speed is better than care, automation is better than presence, and scale is always good. The deeper harm isn’t just in emissions—it’s in what is displaced: meaning, skill, intimacy, sovereignty.


Why Opting Out Alone Doesn’t Change the Outcome

The instinct to opt out of AI—especially for those concerned with climate, ethics, or creative sovereignty—is understandable. In the face of an extractive system, non-participation can feel like a form of protest, a reclaiming of agency and alignment. And on a personal level, it may indeed offer clarity, integrity, or relief.

But from an environmental perspective, individual refusal—when done in isolation—has almost no material effect. The majority of AI’s ecological footprint does not come from individual users, artists, or small businesses. It comes from large-scale enterprise software, automated logistics, surveillance systems, synthetic ad generation, and infrastructure-level deployment across finance, commerce, and communication. These are not opt-in systems. They are industrial defaults.


“Ten thousand people choosing not to use popular AI tools like ChatGPT or image generators won’t measurably alter the global draw on energy, water, or rare materials.”

Ten thousand people choosing not to use popular AI tools like ChatGPT or image generators won’t measurably alter the global draw on energy, water, or rare materials if enterprise AI continues to be embedded into global operating systems. Worse, abstaining quietly—without visibility, communication, or collective context—can cede the field entirely to those with fewer ecological concerns. It removes not just energy demand, but also influence.


Strategic Use: Not a Compromise, but a Form of Containment

This is where the idea of strategic use becomes essential—not as a half-measure, but as a principled, adaptive stance. The question isn’t simply “Should I use AI?” It’s: When does my use reduce harm, and when does it reinforce it? Where can I use these tools to interrupt systems of extraction rather than feed them?

Strategic use means employing AI in ways that directly support organizing, pressure-building, education, or systemic accountability—while setting clear personal and professional boundaries around where and how it is off-limits.

This might look like:

• Using AI to help craft messaging for climate advocacy, but refusing to use it for marketing consumer products or synthetic branding
• Incorporating AI into research for environmental policy, but declining to automate relational or creative work
• Designing tools that integrate AI only when it serves community wellbeing, not when it adds scale, speed, or novelty for its own sake
• Speaking publicly about the limits of your use and inviting others to do the same—creating culture around discipline, not just innovation


“Strategic use means employing AI in ways that directly support organizing, pressure-building, education, or systemic accountability—while setting clear boundaries around where and how it is off-limits.”

This is not a form of technological purity. It is a form of containment. You are saying: I will stay close to these tools because they are shaping the world, but I will not let them shape me—or the work I value—without consent.


The Corporate Logic Driving Environmental Harm

None of this happens in a vacuum. The environmental toll of AI cannot be separated from the economic forces that drive its expansion. Companies like OpenAI, Microsoft, Google, Meta, Amazon, and Nvidia are not pursuing AI out of a desire to support human flourishing or ecological sustainability. They are responding to market incentives: scale, dominance, profit capture, and investor pressure.

In the current venture-backed tech ecosystem, there is no incentive to slow down. Quite the opposite. Early-mover advantage, data accumulation, and brand lock-in reward those who push the hardest and the fastest—regardless of consequence. Energy use, water extraction, and raw material consumption are seen as acceptable collateral so long as growth metrics continue upward. Even “green AI” initiatives, when pursued, tend to be shallow: focused on offsetting emissions through carbon credits or shifting data centers to cleaner grids—rarely addressing the deeper issues of demand, scale, or extractive inputs.


“Expecting these corporations to self-regulate is naïve. Without public pressure, legal constraint, or mass refusal, they will continue to prioritize expansion over integrity.”

These companies are also some of the largest political influencers in the world. Through lobbying, campaign contributions, and control of media narratives, they exert enormous pressure on legislators and regulators. This has created a landscape where meaningful accountability is rare, and environmental regulation is often delayed, diluted, or deflected through corporate partnerships and voluntary compliance schemes.

In short: expecting these corporations to self-regulate is naïve. Without public pressure, legal constraint, or mass refusal, they will continue to prioritize expansion over integrity.


What Individuals and Small Businesses Can Actually Do

Given that opting out alone has little effect—and uncritical adoption causes harm—what can be done?

The answer lies in coordinated, visible, and value-aligned participation. This doesn’t mean embracing AI uncritically. It means approaching AI with clarity, boundaries, and purpose: using it where it amplifies justice or resistance, refusing it where it accelerates harm or alienation.

This includes:

Narrative clarity. Speak openly about AI’s true environmental costs. Help dispel the myth that it is clean, immaterial, or inevitable. Use your platform—however small—to tell the truth.

Design restraint. Avoid embedding AI into every product or workflow. Make intentional choices about when automation serves human values—and when it erodes them.


“If you care about the environment, your use of technology should reflect that concern—not just in rhetoric, but in design, deployment, and discipline.”


Purpose-driven use. Leverage AI to support education, environmental advocacy, political organizing, or harm reduction. These are domains where the technology can be aligned with deeper values.

Public refusal. Share where and why you’re choosing not to use AI. Explain your values. Offer alternative paths. Create cultural permission for others to set limits.

Community building. Connect with others who are thinking critically about technology and ecology. Collective vision—more than individual virtue—is what will shape the future.

This isn’t about being perfect. It’s about being coherent. If you care about the environment, your use of technology should reflect that concern—not just in rhetoric, but in design, deployment, and discipline.


Conclusion

The environmental crisis posed by AI is not speculative—it is already here. Its foundations are extractive, resource-intensive, and accelerating fast. With the growth of inference—the always-on, always-serving function of AI models—energy demand is no longer occasional, but continuous. Water draw is increasing year over year, hardware supply chains are tied to toxic mining economies, and “green” infrastructure often masks deeper harms with offset schemes and recycled branding. Even the most efficient or renewable solutions do little if scale continues unchecked.

These impacts are not secondary effects. They are the business model. AI is expanding because there is enormous economic pressure to embed it everywhere. Its largest drivers—venture-backed firms racing for market share and influence—are incentivized to prioritize growth, not restraint. And in the U.S., where much of this infrastructure is based, weak regulation and political influence make real accountability rare.


“Use AI when it supports justice, care, or repair. Refuse it when it widens harm, severs connection, or accelerates collapse.”

In this landscape, it is understandable that individuals and small businesses might consider opting out. But the truth is stark: individual abstention, while personally meaningful, will not curb the planetary cost of AI. That cost comes not from everyday users, but from enterprise-scale deployment—automated logistics, mass content generation, surveillance infrastructure, and embedded systems across industry, finance, and government. Opting out alone won’t move the needle. Worse, it risks leaving the tools and platforms in the hands of those who care least about sustainability, justice, or social meaning.

That doesn’t mean participation is harmless. Uncritical adoption, especially for convenience, novelty, or scale, only furthers the logic driving ecological breakdown. But strategic engagement is possible. In fact, it may be essential.

This means choosing to use AI with intention, discipline, and transparency. It means refusing to automate what should remain human. It means building tools and narratives that slow things down, restore awareness, and challenge extractive defaults. It means telling the truth about AI’s costs—and making those costs part of every design, decision, and deployment.

There is no neutral ground. But there is a middle path—not a compromise, but a form of resistance through coherence. Use AI when it supports justice, care, or repair. Refuse it when it widens harm, severs connection, or accelerates collapse. Speak publicly. Build community. Disrupt the illusion of inevitability.

AI will not become vastly less harmful unless it is forced to. That pressure won’t come from corporations alone. It must come from a network of individuals, builders, researchers, and organizers who are willing to draw lines and shift culture. Not out of fear of the future—but out of fidelity to the present, and a refusal to normalize the destruction of what remains.

That work starts not with belief, but with choice—what we build, what we refuse, and what we insist on protecting.
