Stanford’s AI energy blind spot

Opinion by Utsav Gupta
Published March 5, 2026, 11:32 p.m.

In October 2023, Stanford published a sustainability story about a very unglamorous problem: airflow under a raised data center floor. An audit of Forsythe Hall, one of the highest-energy-use buildings on campus, found that equipment crowding had created uneven cooling. Stanford integrated its control systems, a fix projected to save over $90,000 a year.

Nobody shares a story about data center airflow, but that cycle — measure, diagnose, fix, repeat — is Stanford at its best. The University tells that story at scale through the Sustainable Stanford Data Hub, tracking progress on energy and emissions. Stanford reports reaching 100% renewable electricity in 2022 and reducing Scope 1 and 2 emissions by roughly 80% from peak levels, as outlined in the university’s climate action plan.

However, one of the fastest-growing forms of scholarship, AI research, does not show up in that public story. Recent estimates from Stanford’s AI Index Report 2025 suggest that the training compute for notable AI models doubles roughly every five months, while the power required to train frontier AI models doubles annually. AI compute is physical, energy-intensive infrastructure: GPUs, storage, networking and cooling. Yet we rarely see a straightforward “receipt” for how much energy our research computing consumes. If Stanford wants to lead on responsible AI, the next natural step is to treat compute like any other resource with a footprint: measure, disclose and manage it.

Stanford already tracks this infrastructure closely. Its Research Computing Facility uses ambient air cooling for roughly 90% of the year. Its shared cluster, Sherlock, tracks usage across over 2,000 compute nodes and 1,168 GPUs; the cluster draws enough electricity to power roughly 700 homes. But a growing share of AI research now runs off-campus (on AWS, Google Cloud and Azure), where Stanford has no visibility into energy or emissions. On-campus tracking is the easy half; the harder question is whether Stanford will demand the same transparency from its compute vendors.

When the cost of compute is invisible, the incentives naturally favor what is legible — bigger models and higher benchmark scores — while efficiency takes a back seat.

Stanford’s Data Hub reports on energy, emissions, and renewables with unusual detail for a university. But research computing is not a distinct category. It may be embedded in broader totals, but “embedded” is not “legible.” Legibility matters for governance: without it, universities cannot budget for future demand, plan power and cooling infrastructure or set norms for responsible compute use. If we cannot see research compute as a distinct driver of energy demand, we cannot manage growth or have an honest conversation about tradeoffs. Stanford’s own researchers have argued for this visibility, developing tools to estimate AI electricity use and emissions. Measuring this is feasible; disclosing it is a step Stanford has not yet taken. Even then, on-campus reporting alone would entirely miss research running on commercial cloud platforms.

Disclosure matters beyond climate accounting. It is a matter of integrity: a campus pursuing net zero must show where demand is growing, and compute deserves the same scrutiny as any other growing source of demand. It is a scientific variable: if a method “wins” benchmarks but uses 20 times more energy, that is part of the method. And it affects fairness: when compute goes undisclosed, labs with massive GPU budgets dominate quietly; transparency shifts status toward efficiency per result, helping smaller groups compete on ideas rather than raw resources.

Yet this kind of reporting is absent across the sector. To my knowledge, peer institutions do not yet publicly report on AI’s impact on sustainability; MIT, Harvard, and UC Berkeley do not appear to publish such figures. Stanford has an opportunity to set the standard rather than wait for one to emerge.

Stanford does not need to shame labs or police research. It needs a few practical steps. First, add a public “Research Computing” category to the Data Hub, reporting aggregate energy use for major computing facilities and trends over time. Second, automate transparency. Since Sherlock already tracks usage, Stanford could generate an automated “receipt” with each completed job: GPU-hours consumed, estimated energy in kWh and an optional emissions range. This minimizes administrative burden without policing individual labs. Much of the data already exists; the remaining work is formatting estimates and making them visible. Third, adopt a norm that compute-heavy AI research discloses its scale just as it discloses datasets: hardware type, total accelerator-hours and energy estimates.
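The arithmetic behind such a receipt is simple. A minimal sketch, assuming illustrative figures (the per-GPU wattage, facility PUE, and grid carbon-intensity range below are placeholders, not Stanford's actual numbers):

```python
# Sketch of an automated per-job energy "receipt."
# All constants are illustrative assumptions, not measured values.

GPU_AVG_WATTS = 300                  # assumed average draw per GPU under load
PUE = 1.2                            # assumed power usage effectiveness of the facility
GRID_KG_CO2_PER_KWH = (0.1, 0.4)     # assumed low/high grid carbon intensity

def job_receipt(gpu_count: int, hours: float) -> dict:
    """Turn a job's accelerator usage into energy and emissions estimates."""
    gpu_hours = gpu_count * hours
    # Energy = GPU-hours x average draw (kW) x facility overhead (PUE)
    energy_kwh = gpu_hours * (GPU_AVG_WATTS / 1000) * PUE
    # Report emissions as a range, since grid carbon intensity varies
    emissions_kg = tuple(round(energy_kwh * f, 2) for f in GRID_KG_CO2_PER_KWH)
    return {
        "gpu_hours": gpu_hours,
        "energy_kwh": round(energy_kwh, 1),
        "emissions_kg_co2_range": emissions_kg,
    }

# Example: an 8-GPU job that ran for 24 hours
print(job_receipt(gpu_count=8, hours=24.0))
```

Since schedulers like Slurm already record GPU-hours per job, a calculation like this could be appended automatically to each job's completion record, which is the sense in which the data already exists and the remaining work is formatting.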

These three steps address on-campus compute. For off-campus workloads, the lever is procurement. When Stanford negotiates compute agreements, it can require vendors to report job-level energy and carbon emissions, data that major providers are increasingly equipped to supply. I outline one such framework in a paper accepted to Computing Conference 2026. Stanford’s purchasing power gives it real leverage to close the off-campus reporting gap.

Critics might argue these estimates will be messy. But Stanford already publishes emissions data with methodology notes that it refines over time; compute reporting can likewise start with ranges and improve. The goal is aggregate transparency, not per-lab precision.

The most substantial counterargument is that Stanford already runs on 100% renewable electricity. But disclosure remains necessary. Efficiency still matters for grid capacity and cost. Hardware carries a substantial embodied carbon footprint from manufacturing that renewable electricity does not address. And cloud workloads fall outside Stanford's renewable purchases entirely, often running on grids with varying carbon intensities. Renewables address one piece of the puzzle; they do not make the rest disappear.

Stanford is updating its climate action plan, expected in 2026. If that plan is to reflect reality, compute must be a line item. The university already knows how to do this. It audits data center airflow in Forsythe Hall, tracks campus energy through the Data Hub, and runs world-class infrastructure like Sherlock. The only missing steps are to share what it tracks on campus — and to demand the same visibility from the cloud vendors that increasingly power its research.

As Forsythe Hall’s airflow story makes clear, the first step in fixing a system is making its hidden flows visible. For net zero to carry its full weight, it needs receipts.

Utsav Gupta is a Stanford Master of Liberal Arts candidate researching human purpose in the age of AI. He is founder and co-CEO of Filarion, building AI litigation tools and spatial computing systems, and serves as Commissioner for Palo Alto Utilities. Views are his own.  
