
Rethinking Responsible Artificial Intelligence for the Environment


Each time a generative AI model crafts an image or writes an essay, an invisible current of energy surges through rows of humming servers. These exchanges may seem trivial; after all, what's a few watts for a clever chatbot? In reality, the scale is enormous: according to Menlo Ventures' 2025 report "The State of Consumer AI," generative AI tools now engage roughly 500-600 million daily users and more than 800 million weekly users worldwide. Multiply that by billions of prompts, and the energy and water consumed become staggering. According to reporting from the Massachusetts Institute of Technology, training and running large models has helped more than double the demand on U.S. data centers in a single year, with cooling systems alone consuming roughly two liters of water per kilowatt-hour.

These are not abstract harms: communities in semi-arid regions compete with tech firms for the same reservoirs that cool supercomputers, and they face intensifying strain on local power grids and water supplies. From an ethical standpoint, the question goes far beyond efficiency; it reaches the moral architecture that guides our society's technological progress. Under a utilitarian or consequentialist lens, moral rightness depends on outcomes: if AI delivers greater societal benefit, through medical research, education, or accessibility, than the environmental damage it causes, then that damage could be justified. Yet utilitarianism demands full accounting. Are the emissions, water depletion, and social inequities borne by local populations outweighed by the cognitive conveniences of generative models? And to what extent are those conveniences even considered beneficial? How can a society that disagrees on the value of this technology begin to weigh its importance against its repercussions?

By contrast, a deontological perspective shifts focus from outcomes to duties. It asserts that certain actions, such as exploiting natural resources without consent or externalizing harm onto vulnerable communities, are wrong in themselves, regardless of utility. For example, if a data-center facility draws heavily on local groundwater in a drought-prone region to cool AI servers, thereby reducing community access to potable water, then under a deontological framework the tech company is violating its duty of respect toward that community's basic needs. Even if generative AI accelerates human knowledge and discovery, the moral legitimacy of its infrastructure depends on respecting the intrinsic rights of those affected. Under this view, transparency and equitable governance are obligations rather than optional considerations.

A third lens, virtue ethics, would ask what kind of society we become when we normalize technologies that externalize invisible costs. Virtue ethics is concerned less with isolated actions or their consequences than with the character traits we cultivate through our collective choices, such as temperance, justice, prudence, and responsibility. This question implicates not only corporations but also consumers, who indirectly sustain these systems through everyday use. If we normalize building ever-larger AI models without regard for their carbon emissions, water use, or supply-chain mining, we may foster a corporate and consumer culture of excess, disregard, and environmental complacency. On the other hand, if engineers, companies, governments, and users choose models that prioritize energy-efficient architectures, renewable power, and local ecological equity, they are practicing virtues of moderation, fairness, and foresight. In practical terms, adopting energy-efficient hardware, shifting to low-carbon data centers, and reinvesting in the regions hosting such infrastructure all reflect virtuous behavior. By this measure, the moral question isn't only what we build, but who we become in the process.

Ultimately, the ethics of AI’s environmental impact invite a collective assessment of these frameworks. In a world of hyper-innovation, it is important to ask not just what technology can do, but what humanity ought to demand from it. Whether through the consequentialist lens of maximizing benefit, the deontological consideration of respecting rights, or the virtue ethicist’s call for moral character, each framework highlights the same truth: innovation divorced from responsibility erodes the very progress it claims to advance. Generative AI’s environmental footprint is not an inevitable byproduct of progress, but a reflection of our priorities. The challenge, then, is not simply to make AI faster or smarter, but to ensure that its intelligence mirrors our highest moral reasoning.

Katie MacKay '27

Works Cited

Copley, Michael. “America’s AI industry faces big energy and environmental risks.” NPR, 14 October 2025, https://www.npr.org/2025/10/14/nx-s1-5565147/google-ai-data-centers-growth-environment-electricity. Accessed 14 November 2025.

“2025: The State of Consumer AI.” Menlo Ventures, 26 June 2025, https://menlovc.com/perspective/2025-the-state-of-consumer-ai/. Accessed 14 November 2025.

“How AI use impacts the environment and what you can do about it.” World Economic Forum, 1 June 2025, https://www.weforum.org/stories/2025/06/how-ai-use-impacts-the-environment/. Accessed 14 November 2025.

Zewe, Adam. “Explained: Generative AI's environmental impact.” MIT News, 17 January 2025, https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117. Accessed 14 November 2025.
