Strengthening the Value of Evaluation in Energy Programs and Policies

Edward Vine
Lawrence Berkeley National Laboratory
Affiliate

Energy efficiency can significantly reduce greenhouse gas emissions. Provisions to tap this resource must be central to climate policy if we are to achieve the needed reductions, and to achieve them as quickly and cheaply as possible. Sustainability is also a central concern arising from the climate crisis, and increasing attention has therefore been focused on the challenges of evaluating sustainability. Furthermore, sustainable development is more likely to be achieved effectively by building sustainable organizational evaluation systems.

The funding and implementation of energy efficiency programs in the residential, commercial and industrial sectors will need to be substantially increased to address the climate and sustainability challenges ahead. Just as important, the evaluation of these programs will need to be aligned with their design and implementation by the private sector (e.g., utilities) and the public sector (e.g., government), so that we understand the programs' impacts and the distribution of their benefits and costs among different demographic groups, and so that more effective programs can be designed and implemented.

While the evaluation of energy programs has occurred since the 1970s, there is room for improvement. Specifically, three types of improvements are needed to strengthen the value of evaluation: (1) institutional, (2) methodological, and (3) capacity building.

Institutional improvements are needed to increase the visibility and legitimacy of evaluation, so that its results are avidly sought, and hard to ignore, by program and policy implementers, administrators and policy makers. Accordingly, evaluators need to make sure that the results of evaluation studies have a practical and useful effect on the programs and policies studied – especially on an organization’s mission, operations and procedures, as well as on common societal goals (e.g., reductions in greenhouse gas (GHG) emissions).

Specific institutional steps need to be taken:

1. Evaluators need to be sitting at the table with implementers when program budgets, policy interventions, and program design and implementation are planned; program logic and evaluability should be discussed at these meetings.
2. Implementers need to be sitting at the table with evaluators when planning the evaluation study design, program logic, assessment objectives, performance metrics (how they are defined and assessed, and what data, at what temporal granularity, will be required to calculate them), and the impact on organizational mission, operations and procedures (“operational excellence”).
3. Evaluation findings need to be timely, useful, and actually used by regulators, program designers, implementers, administrators, market and industry players, and customers.
4. Evaluation teams need to be more multidisciplinary and inclusive (i.e., including more women and minorities).
5. “Rapid evaluation” needs to be encouraged and supported before new programs are designed: meaningful budgets will be needed for process evaluation and, if needed, for program mid-course redesign.
6. Process and impact evaluations need to be integrated (“holistic evaluation”).
7. Evaluation needs to be emphasized as an essential and positive tool for implementers, informing (1) marketing potentials, market opportunities and investments; (2) scenario planning (market forecasts of technology adoption); and (3) program design, implementation and ongoing improvement (e.g., identifying challenges, barriers, program theory and logic models (goals and objectives), and program performance metrics and criteria).

Methodological improvements are needed to strengthen evaluation. First, the scope of evaluation of energy programs and policies, as well as of cost-effectiveness tests, needs to address all impacts, such as: (1) GHG emissions (at specific times of the year), (2) sustainable development goals (SDGs), (3) resilience, sustainability and energy sufficiency, (4) program and policy impacts on un(der)served and disadvantaged communities (focusing on gender, ethnicity and income), especially inequities in program participation, (5) market transformation and market changes, and (6) demand savings as a system resource (by identifying the time and location of demand savings).
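To make this broader accounting concrete, a minimal sketch of a benefit-cost ratio that counts a monetized GHG benefit alongside avoided energy costs might look like the following. This is an illustration only: the function name, parameter values, and discount rate are all hypothetical, not drawn from any actual program or standard cost-effectiveness test.

```python
# Hypothetical sketch: a benefit-cost ratio for an efficiency program
# that counts avoided energy costs plus a monetized GHG benefit.
# All names and values are illustrative, not from any real program.

def benefit_cost_ratio(annual_kwh_saved, years, avoided_cost_per_kwh,
                       tons_co2_per_kwh, value_per_ton_co2, program_cost,
                       discount_rate=0.03):
    """Present value of (energy + GHG) benefits divided by program cost."""
    annual_benefit = annual_kwh_saved * (
        avoided_cost_per_kwh + tons_co2_per_kwh * value_per_ton_co2)
    pv_benefits = sum(annual_benefit / (1 + discount_rate) ** t
                      for t in range(1, years + 1))
    return pv_benefits / program_cost

# Example: 500,000 kWh/yr for 10 years, $0.08/kWh avoided cost,
# 0.0004 tCO2/kWh, $50/tCO2 carbon value, $300,000 program cost.
ratio = benefit_cost_ratio(500_000, 10, 0.08, 0.0004, 50, 300_000)
print(round(ratio, 2))  # → 1.42
```

Note how the carbon value term raises the ratio relative to an energy-only test; the same structure could fold in other monetized impacts (e.g., resilience or health benefits) where credible valuations exist.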

Second, evaluators need to systematically determine how evaluation can collect and use credible data from new smart technologies (e.g., smart appliances, home hubs, smart commercial buildings) in a way that is attributable to stakeholders. Third, evaluators need to make the best use of measurement and verification (M&V), including real-time monitoring data, while always remembering that evaluation, measurement and verification (EM&V) is not the same as M&V.
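As a rough illustration of the kind of M&V calculation referred to above, here is a sketch of a whole-facility, weather-adjusted savings estimate in the spirit of IPMVP Option C: fit baseline metered usage to heating degree days (HDD), then compare the weather-adjusted baseline to metered usage after the program intervention. All data points here are made up for illustration.

```python
# Hypothetical M&V sketch (IPMVP Option C style): fit baseline monthly
# usage to heating degree days, then compare the weather-adjusted
# baseline to metered post-retrofit usage. All data are illustrative.

def fit_line(x, y):
    """Ordinary least squares for y = intercept + slope * x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx, slope

baseline_hdd = [600, 500, 300, 100, 50, 400]          # degree days
baseline_kwh = [3200, 2800, 2000, 1200, 1000, 2400]   # metered usage

intercept, slope = fit_line(baseline_hdd, baseline_kwh)

# Reporting period: apply the baseline model to reporting-period
# weather, then subtract what was actually metered.
reporting_hdd = [550, 450, 250]
reporting_kwh = [2500, 2200, 1500]

adjusted_baseline = [intercept + slope * h for h in reporting_hdd]
savings = sum(b - a for b, a in zip(adjusted_baseline, reporting_kwh))
print(round(savings))  # → 1200 kWh avoided over the reporting period
```

The point of the weather adjustment is attribution: the difference is measured against what the facility *would have* used under reporting-period conditions, not against raw historical consumption. A full EM&V study would go further, asking whether those savings are attributable to the program itself rather than to free-ridership or exogenous market changes.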

Capacity building activities will also be needed to strengthen the value of evaluation, and these activities will involve multiple players: utilities, implementers, administrators, academia, and private sector actors.

In conclusion, if institutional, methodological, and capacity building improvements are made to the practice of evaluation, energy efficiency will continue to be regarded as a reliable energy strategy by policymakers and regulators for meeting the energy and environmental challenges (such as climate change and air quality) ahead.