Information is power, but only when it's timely and actionable.
In Connecticut, too little information delivered too late left the traditional Home Energy Solutions program evaluation underpowered.
A recent billing analysis of three- and four-year-old projects showed far lower energy savings than claimed by the programs — information that would have been much more useful had it been known years earlier. The results of this evaluation also did not provide enough actionable data to help the programs improve their future performance.
The evaluation problem facing the Connecticut Department of Energy & Environmental Protection (DEEP) is familiar to energy efficiency programs and regulators across the country. Evaluations happen too infrequently, are far too expensive, and lack the transparent and timely data needed to improve outcomes.
Fortunately, DEEP was already running an M&V 2.0 pilot to analyze the same programs using the Recurve platform, enabling us to move quickly to identify opportunities to improve performance. And because the M&V 2.0 software automates the analysis, DEEP can easily add more recent program data to see whether older problems have persisted.
Now that M&V no longer has to happen years after the fact, it can be used to improve outcomes rather than just to report on them.
Unlike traditional evaluations, which simply deliver a realization rate after a program has completed its work, advanced M&V allows efficiency program staff to put meter data to work to reveal immediate opportunities for improvement. Used in real time, these tools make insights available years earlier than they otherwise would be, and in time to make mid-course adjustments.
Recurve worked with DEEP, researchers from LBNL and NEEP, Eversource, and United Illuminating (UI) to backcast the Home Energy Solutions programs from 2015 and 2016. The backcast assessed the meter-based savings of program participants and revealed specific ways these programs could improve savings and cost-effectiveness. We are now incorporating comparison groups of non-participants, selected through stratified sampling on the usage characteristics that were most predictive of savings outcomes in the backcast (learn more about comparison group analysis here).
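For readers who want to see the mechanics, here is a minimal sketch in Python (pandas) of how a stratified comparison group can be drawn from non-participants. The DataFrames, column names, strata count, and sampling ratio are illustrative assumptions, not the production pipeline.

```python
import numpy as np
import pandas as pd

def draw_comparison_group(participants, nonparticipants,
                          usage_col="baseline_annual_kwh",
                          n_strata=10, ratio=4, seed=42):
    """Draw a stratified sample of non-participants that mirrors the
    participants' baseline-usage distribution (illustrative only)."""
    # Build usage strata from participant quantiles (deciles by default).
    edges = np.unique(np.quantile(participants[usage_col],
                                  np.linspace(0, 1, n_strata + 1)))
    edges[0], edges[-1] = -np.inf, np.inf  # make the outer bins open-ended

    participant_strata = pd.cut(participants[usage_col], edges, labels=False)
    pool = nonparticipants.assign(
        stratum=pd.cut(nonparticipants[usage_col], edges, labels=False))

    rng = np.random.default_rng(seed)
    samples = []
    for stratum, count in participant_strata.value_counts().items():
        candidates = pool[pool["stratum"] == stratum]
        # Aim for `ratio` comparison homes per participant in each stratum.
        n = min(len(candidates), count * ratio)
        samples.append(candidates.sample(
            n=n, random_state=int(rng.integers(2**32 - 1))))
    return pd.concat(samples, ignore_index=True)
```

Sampling within usage strata like this keeps the comparison group's consumption profile aligned with the participants', which is what makes it useful for netting out non-program effects.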
The Recurve platform performed the meter-based savings analysis in a fraction of the time and ran in production on an ongoing basis, providing utility program administrators with more detailed, actionable information that pointed to opportunities to improve future program outcomes rather than just reporting on past performance.
Using Data to Improve Cost-Effectiveness and Customer Savings Outcomes
Using Recurve’s analytics tools to sort, filter, and drill into specific projects, program staff were able to identify potential causes of over- and under-estimated project savings. The heating and cooling loads modeled during the backcast can now be used to identify participants with a high potential to save energy and to sanity-check savings estimates for new projects. These benchmarking techniques can give program managers, and homeowners, more confidence that forecasts for future projects will translate into actual savings.
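As an illustration of that kind of sanity check, the sketch below flags projects whose claimed savings look implausibly large relative to modeled heating and cooling loads. The column names and the threshold are hypothetical assumptions for this example, not taken from the platform.

```python
import pandas as pd

def sanity_check_claims(projects: pd.DataFrame,
                        max_fraction_of_load: float = 0.6) -> pd.DataFrame:
    """Flag projects whose claimed savings exceed a plausible share of the
    modeled heating + cooling load. The 60% threshold is illustrative."""
    modeled_load = projects["modeled_heating_kwh"] + projects["modeled_cooling_kwh"]
    checked = projects.assign(
        claim_ratio=projects["claimed_savings_kwh"] / modeled_load)
    checked["needs_review"] = checked["claim_ratio"] > max_fraction_of_load
    # Surface the most questionable claims first for QA review.
    return checked.sort_values("claim_ratio", ascending=False)
```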
So what did Recurve find?
Recurve predicted energy savings outcomes using both project metadata and consumption data. For example, customers with electric heating systems saved 30 percent more than their non-electric counterparts, and customers in the top half of electricity consumption drove nearly all program savings. Conversely, the other half of participants, identifiable from baseline consumption data alone, saved roughly nothing.
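A consumption-based split like that is simple to compute from per-home results. Here is a short sketch, assuming a hypothetical results table with baseline usage and metered savings columns (the names are placeholders).

```python
import pandas as pd

def savings_by_baseline_half(results: pd.DataFrame,
                             usage_col: str = "baseline_annual_kwh",
                             savings_col: str = "metered_savings_kwh") -> pd.DataFrame:
    """Compare metered savings between the top and bottom halves of
    baseline consumption (column names are illustrative)."""
    top_half = results[usage_col] > results[usage_col].median()
    half_label = top_half.map({True: "top half", False: "bottom half"})
    # Count, total, and average savings for each half of the population.
    summary = results.groupby(half_label)[savings_col].agg(["count", "sum", "mean"])
    summary["share_of_total_savings"] = summary["sum"] / summary["sum"].sum()
    return summary
```

If the bottom-half row shows savings near zero, as it did in the backcast, recruitment can be refocused on higher-usage customers before another program year goes by.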
With focused recruitment of customers with higher consumption profiles, the program has a path to cost-effectiveness. These customers also will benefit more from the intervention.
Recurve has also worked with Eversource to catalog customers by income levels. Finding customers with usage profiles indicative of savings potential among the low-income group would be a straightforward way to improve the program while serving customers most in need.
In the consumption data, Recurve also found evidence of takeback (sometimes called the rebound effect): for example, customers with low cooling usage in the baseline period increased their summer usage after the program.
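A takeback screen of this kind can be expressed in a few lines. Again, the column names and thresholds below are illustrative assumptions, not the platform's actual logic.

```python
import pandas as pd

def flag_possible_takeback(results: pd.DataFrame,
                           low_cooling_quantile: float = 0.25,
                           min_increase: float = 0.10) -> pd.Series:
    """Return a boolean flag for homes with low baseline cooling usage whose
    summer consumption rose after the intervention (thresholds illustrative)."""
    low_baseline = (results["baseline_summer_kwh"]
                    <= results["baseline_summer_kwh"].quantile(low_cooling_quantile))
    summer_increase = (results["reporting_summer_kwh"]
                       > results["baseline_summer_kwh"] * (1 + min_increase))
    return low_baseline & summer_increase
```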
Targeting the customers with the highest potential for good outcomes from this program both maximizes the per-dollar impact of ratepayer programs for real customers and results in cost-effective programs overall.
Especially for low-income programs, it pays to target the customers who will save the most on their bills in order to maximize a program's impact. There is nothing fair or equitable about encouraging a low-income customer to take on the transaction costs of a full HVAC retrofit when we know in advance that many are unlikely to benefit.
Data-Driven Program Management
The Recurve platform also allowed program staff to set up “cohorts” of projects that share a particular characteristic, such as which contractor performed the work, and compare performance across all of these groups.
This type of analysis can detect systemic trends that drive higher and lower performance outcomes so that program managers can try to replicate successes and remediate underperformance. For example, the program could deliver leads to contractors with higher performance while focusing on technical assistance and data-driven QA on those with poor performance.
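The cohort comparison itself is conceptually straightforward. The sketch below groups per-project results by contractor and summarizes performance; the column names are hypothetical and the minimum cohort size is an illustrative cutoff.

```python
import pandas as pd

def compare_contractor_cohorts(projects: pd.DataFrame,
                               min_projects: int = 10) -> pd.DataFrame:
    """Summarize metered performance by contractor cohort so program staff
    can spot systematically strong or weak performers (columns illustrative)."""
    cohorts = (projects
               .groupby("contractor_id")
               .agg(n_projects=("project_id", "count"),
                    avg_savings_kwh=("metered_savings_kwh", "mean"),
                    avg_realization_rate=("realization_rate", "mean")))
    # Ignore very small cohorts, where averages are noisy.
    return (cohorts[cohorts["n_projects"] >= min_projects]
            .sort_values("avg_realization_rate"))
```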
Standard Methods and Code Deliver Revenue-Grade Transparent Outcomes
Standard data quality checks, performed automatically on every batch of data imported into the platform, identified problems that the traditional evaluations had missed, and those problems substantively altered the results of the evaluation.
For example, in several cases, the Recurve analysis flagged a duplicate-ID error. Program administrators realized that the data export included meter data from other homes associated with program participants, either because they had a second home or had moved during the analysis period.
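A check of this kind boils down to counting meters and premises per participant. The sketch below shows one way to express it, with hypothetical column names; it is not the platform's actual validation code.

```python
import pandas as pd

def flag_duplicate_meters(meter_data: pd.DataFrame) -> pd.DataFrame:
    """Return participant IDs linked to more than one meter or premise,
    a pattern that can indicate second homes or mid-period moves."""
    counts = (meter_data
              .groupby("participant_id")
              .agg(n_meters=("meter_id", "nunique"),
                   n_premises=("premise_id", "nunique")))
    # Any participant tied to multiple meters or premises warrants review.
    return counts[(counts["n_meters"] > 1) | (counts["n_premises"] > 1)]
```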
Why is this nerdy stuff so important? Well, the traditional evaluation that was also part of the study received the same data, but may not have detected this anomaly — though it’s difficult to know for sure because the traditional results were less transparent and presented as static tables.
These results validate what other utilities around the country have discovered: that advanced, open-source, meter-based M&V can dramatically improve accountability, allow for real-time adjustments in program design, and set the stage for performance-based approaches to scale efficiency and other forms of demand flexibility.
In a nutshell, we can help more customers, deliver bigger savings on their bills, and drive more cost-effective and useful outcomes for the grid and the climate.
Want to learn more about how meter-based approaches can improve your program evaluation? Contact us.