It’s fall, which means beautiful leaves, pumpkin-flavored everything, and back-to-school fever. As students and teachers settle into a new school year, the last thing they’re probably thinking about is how much energy they’re using. It turns out, a lot. Nationwide, schools spend $8 billion a year on energy – second only to personnel in K-12 budgets. With looming cuts to federal education spending, schools are going to need to cut back. Energy is one line item they can trim through efficiency improvements like new air conditioning systems or LED lighting.
Beyond possible greenhouse gas reductions, many energy efficiency investments are projected to pay for themselves by lowering power bills. Importantly, though, arguments about the cost-effectiveness of these improvements rest overwhelmingly on projection models rather than real-world data, and those models have been shown to be flawed time and again. Getting these measurements right matters for cash-strapped districts trying to prioritize investments in much-needed upgrades.
In a new study, my colleagues at MIT, UC Berkeley, UC Davis and Northwestern University and I devised a brand-new machine learning approach to measure the effectiveness of energy efficiency upgrades in California schools. Armed with a wealth of real-world data – electricity consumption every 15 minutes at all K-12 schools in the Pacific Gas and Electric service territory in California – we measured the impacts of the improvements and compared our findings to the model projections.
The good news: the upgrades did lower energy consumption at the average school by 3%, freeing up real money to pay for textbooks and supplies. The bad news: schools saved only 24% of what was projected. Put another way, if the models estimated that a school would save 100 kilowatt-hours per year for a given investment, our estimates suggest it actually saved only 24 kilowatt-hours per year. If the school invested $400,000 expecting to recoup that investment through lower energy bills in 4 years, the same arithmetic stretches the payback period to roughly 17 years (4 divided by 0.24) – meaning it might never see the upgrade pay off within the equipment’s lifetime.
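To make that arithmetic concrete, here is a minimal sketch in Python. The dollar figures are the illustrative ones from the example above, not measurements from our study:

```python
# Illustrative payback calculation (hypothetical numbers, not study data).
investment = 400_000                 # upfront cost of the upgrade, in dollars
projected_annual_savings = 100_000   # projected bill savings per year
realization_rate = 0.24              # share of projected savings actually realized

actual_annual_savings = projected_annual_savings * realization_rate

projected_payback = investment / projected_annual_savings  # 4.0 years
actual_payback = investment / actual_annual_savings        # ~16.7 years

print(f"Projected payback: {projected_payback:.1f} years")
print(f"Actual payback:    {actual_payback:.1f} years")
```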
This does not tell us that energy efficiency investments shouldn’t be made, but rather that we as researchers need to improve projection models and continue doing real-world evaluations like this one. It also shows how important it is for policymakers to build retrospective studies into government programs. Doing so can help building managers determine which investments deliver the greatest savings and get the most out of their investment dollars. For example, we discovered that lighting upgrades and improvements related to heating, ventilation and air conditioning (HVAC) appear to perform best, achieving 49% and 42% of expected savings, respectively.
So, why aren’t the projection models providing more accurate predictions of energy use after an upgrade? The best answer is that they are based on engineering models of an idealized energy user and aren’t sufficiently benchmarked against the real world, where, for example, people leave equipment on overnight by accident or open windows when a room is hot, even in the middle of winter. That’s where our approach differs.
Using our machine learning method, we developed a rich understanding of the drivers of real-world electricity consumption at all the 2,000-plus schools in our data set. And, since simply comparing a school that invested in a new air conditioner to one that didn’t may mask important differences that impact their energy use – maybe one is in San Francisco and the other is in a hot Central Valley town – we essentially compare each school to itself, both before and after upgrades were made. Using this approach, we can be confident that what we’re measuring is just the effect of the upgrade.
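The full methodology is in the paper, but here is a minimal sketch of the core idea, assuming a pandas DataFrame of interval meter readings with weather and calendar features. The feature set, model choice and column names are illustrative assumptions, not the exact specification we used:

```python
# Sketch: predict each school's counterfactual consumption from pre-upgrade
# data, then read savings off the gap between prediction and actual usage.
# (Column names, features and model choice are illustrative.)
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

FEATURES = ["temperature", "hour_of_day", "day_of_week", "month"]

def estimate_savings(df: pd.DataFrame, upgrade_date: pd.Timestamp) -> float:
    """Train on pre-upgrade readings, predict what consumption would have
    been without the upgrade, and return average estimated savings (kWh)."""
    pre = df[df["timestamp"] < upgrade_date]
    post = df[df["timestamp"] >= upgrade_date]

    model = GradientBoostingRegressor()
    model.fit(pre[FEATURES], pre["kwh"])

    counterfactual = model.predict(post[FEATURES])
    return float((counterfactual - post["kwh"]).mean())
```

Because the model is trained only on a school’s own pre-upgrade history, each school effectively serves as its own control, which is what lets us net out differences like climate between San Francisco and the Central Valley.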
Here’s how it maps out:
The chart depicts measured savings over time, lining schools up so that period zero corresponds to the quarter when the energy efficiency upgrades were installed. Before the upgrades, our estimates hover right around zero, as they should: there are no savings to detect before the upgrades happen. After the zero point, the effects of the upgrades appear, with energy reductions that persist well after the investments are made.
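For readers who like to see the mechanics, here is a hedged pandas sketch of that event-time alignment, assuming quarterly savings estimates per school with quarters encoded as integer period indices (all column names are illustrative):

```python
# Sketch: align schools in "event time" so period 0 is each school's own
# upgrade quarter, then average savings at each offset. (Names illustrative;
# quarters assumed to be integer period indices.)
import pandas as pd

def event_study_curve(savings: pd.DataFrame) -> pd.Series:
    """savings columns: school_id, quarter, upgrade_quarter, est_savings."""
    savings = savings.copy()
    savings["event_time"] = savings["quarter"] - savings["upgrade_quarter"]
    return savings.groupby("event_time")["est_savings"].mean()
```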
One big benefit of our new approach to measuring savings is that we can look under the hood to see how well we’re doing. Because only about half the schools in our sample had energy efficiency upgrades, we validated our approach by measuring savings at schools without upgrades. For those schools, our machine learning method estimated savings of essentially zero, which is comforting – it means the approach behaves as expected where there is nothing to find. At the same time, at schools that did install upgrades, we see a clear reduction in energy use on average. The figure shows the estimated savings for treated and untreated schools.
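In code terms, the placebo check just points the same estimator at schools that never upgraded. A sketch, reusing the hypothetical estimate_savings function above, where the input is a placeholder list of (meter data, placebo date) pairs:

```python
# Sketch: placebo test on untreated schools (names and inputs illustrative).
# Each untreated school gets a fake "upgrade" date; if the method is sound,
# estimated savings should center on zero.
def placebo_check(untreated_schools) -> float:
    """untreated_schools: list of (meter DataFrame, placebo Timestamp) pairs."""
    estimates = [estimate_savings(df, date) for df, date in untreated_schools]
    return sum(estimates) / len(estimates)  # should be close to zero
```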
Along the way, we discovered something else interesting: when we compared real changes in energy use to the projection model’s expected savings on a school-by-school basis, we found the low actual-to-projected savings ratios reported above. But when we compared the average actual savings across all upgrades to the average expected savings across all upgrades, the predictions were more in line with reality. If this is confusing at first glance, suppose you handed out a bunch of jars of jellybeans and had one person per jar guess how many beans their jar contained. Each individual guess would likely be wildly off. But if you averaged all those guesses and compared that average to the average number of beans across all the jars, you’d land much closer to the correct number.
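A quick simulation makes the jellybean point concrete. This is purely illustrative – the numbers below are invented, not drawn from our data:

```python
# Sketch: noisy, unbiased individual guesses average out (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
true_counts = rng.integers(200, 800, size=500)         # beans in each jar
guesses = true_counts + rng.normal(0, 150, size=500)   # one noisy guess per jar

individual_error = np.abs(guesses - true_counts).mean()
aggregate_error = abs(guesses.mean() - true_counts.mean())

print(f"Typical individual error: {individual_error:.0f} beans")
print(f"Error of the averages:    {aggregate_error:.0f} beans")
```

Individual errors are large, but because they point in different directions, they nearly cancel in the averages.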
Using this aggregate approach, all improvements combined, HVAC-only upgrades, and lighting-only upgrades achieved 55%, 103% and 67% of expected savings, respectively. This tells us that if you ask the projection models how well any individual upgrade will do, the answer won’t be very helpful. But if you were to install hundreds of upgrades, you could get a reasonable sense of how well the set of upgrades would perform on average. Part of the reason is that some schools ended up saving more energy than expected, while others saved substantially less. Why? That’s hard to say. Bigger schools are just as likely to see disappointing realization rates as small schools, for example.
At the end of the day, we’ve learned important lessons about how effective energy efficiency upgrades are in schools. We hope these results translate into some homework for regulators who want to help schools make educated energy efficiency upgrade decisions. And, even though schools aren’t saving as much power as they expected, these upgrades did translate into real energy and cost savings in the classroom. Maybe some of the money schools are saving on energy can be put towards data science classes – we’re excited about all the ways machine learning can help us tackle real-world problems, and we think students will be too.
Note: Christopher Knittel, David Rapson, Mar Reguant, and Catherine Wolfram contributed to this article. They are associated with The E2e Project.