Common Core Indicators for Describing Alliance Programs

Author: Tom Mcklin

Abstract

Under growing scrutiny from policymakers, many NSF program officers ask evaluators to design, collect, and report on a set of indicators common across a portfolio of programs. This presentation specifically addresses the issues of establishing, reporting, and ultimately using common, core indicators. The discussion draws on three sources:

- The experience of evaluating multiple National Science Foundation (NSF) alliance programs in Research in Disabilities Education (RDE) and Broadening Participation in Computing (BPC).
- The experience over the past four years of working with a small group of alliance evaluators to define common indicators and to report on those indicators.
- Recent publications guiding alliance evaluators on establishing common indicators, namely the Framework for Evaluating Impacts of Broadening Participation Projects (Clewell & Fortenberry, 2009) and the Framework for Evaluating Impacts of Informal Science Education Projects (Friedman, 2008).

Based on the work of creating common, core indicators and studying those created under other NSF programs, shared elements emerge. Common, core indicators focus largely on counting the number of participants in program activities, tracking students through transitions (e.g., high school to college or college to graduate school), measuring changes in affective characteristics of participants, and building the capacity of funded organizations such as colleges and public schools.

This work also reveals myths among evaluators surrounding NSF's treatment of annual reports and data. NSF does not mandate a specific set of metrics for programs. It requests that programs identify broader impacts, but these rarely, if ever, align across alliances, and proposers are not required to focus on diversity (gender, race/ethnicity, and ability). Many evaluators mistakenly believe that a group at NSF (or another agency) is synthesizing reports within a program or directorate. No such synthesis is happening. Some evaluation findings are rolled into various reports, but these reports look more like salad than soup: they are presented as a collection rather than a synthesis.

Finally, this presentation invites evaluators to identify areas of immediate action. First, one of the most challenging aspects of this work is the need to track students through transitions; this is notoriously difficult and requires the best thinking of the evaluation community to track participants reliably and efficiently. Second, evaluators may request to see how program officers and others use the common, core indicators. We often report to a program officer yet have little understanding of what becomes of the information once it is sent. Seeing first-hand how the data are used aids the evaluation community in building more effective indicators and reporting mechanisms. It also signals that the request is valid rather than a futile and expensive exercise in report generation.

Citation:
Mcklin, Tom. (2012). Common Core Indicators for Describing Alliance Programs.
