Evaluation is Not Neutral


Written by Jessica Gibbons-Benton


Over the past few weeks, we have been turning our methods inward to critically examine how we do our work, as well as listening to each other and investigating our experiences, assumptions, and approaches related to equity and oppression. Together we are rebuilding “the Elevate way” based on clearer articulations of why and how our work advances equity. We want to be able to more clearly say, “this is how we approach consulting and evaluation work at Elevate” and to have an equity lens built into each part of the framework, with the assumption that we cannot do high quality consulting or evaluation work without attending to equity. By starting from the beginning and building a framework together, we know it will be something that we believe in and will work to embody on every project, and we know it will be a reflection of us as individuals. 

That last bit is a double-edged sword, though, as it is for all evaluators. The working definition of evaluation that we use, from one of the founding thinkers in the field, Michael Scriven, is “the systematic determination of the merit, worth, or value” of something (often a program, policy, or practice). This definition raises the question: whose judgment counts? Who sets the yardstick of what is valuable or what success looks like? Most of the time, the organization that pays for the work gets to define the priorities, so the whole evaluation is shaped to assess whether the program was a success from the point of view of the entity that commissioned it. Layered on top of this, the evaluators conducting the evaluation have major influence over how the story is captured and which elements are framed as positive change or weaknesses to be improved. Both of these influences often go completely unexamined because evaluation is considered to be objective.

Evaluation is often thought of as an objective assessment of whether something is working, resulting in objective findings about how it worked and objective recommendations to improve it. This emphasis on objectivity is based in the notion of empiricism - the idea that there is one truth that can be discovered through careful application of scientific methods - part of evaluation’s inheritance from social science research. This objectivity is often held as the “gold standard” for measuring and communicating impact, the assumptions being that 1) it is possible to objectively determine the inherent “value” of a program or practice and 2) this objectivity will (in and of itself) validate or legitimize the work of an organization or initiative.

In reality, though, most folks know that there is never one objective truth. The work that mission-driven organizations take on is always at the intersection of many truths, constructed from many experiences of the world and ways of understanding it. When we treat and use evaluation as if it were objective, the values and biases of the people who commissioned, planned, and implemented the evaluation are made invisible, part of the assumed “one truth” that is printed in the report and, therefore, accepted unquestioningly. Many evaluators and evaluation leaders reject these notions of objectivity, and instead proactively advocate for designing methods for each unique context and involving many voices in the process and results. Evaluation can indeed assess if and how something is working - this happens by holding many experiences and perspectives at once, synthesizing them to lift up areas of contradiction, and generating dialogue about what success truly looks like for people and communities.

Ultimately, evaluation is a process rooted in values, and everyone involved must do the work to unpack what those values are for their specific program and evaluation effort. Because everyone involved brings their own perspective and values, evaluation is not and cannot be value-neutral. Some recent and alarming examples of this in the public health space are the stories of public health officials who published, or were pressured to publish, blatantly misleading COVID-19 data, presumably to support the case many political leaders were making early in the summer that it was safe to re-open businesses. This dynamic is usually far more subtle, showing up as organizational leadership or funders defining success (or how success will be measured) with limited input from the people the organization intends to serve.

We know that the organizations we work with are operating in contexts, and often striving to change systems, shaped by values with oppressive and harmful histories. We cannot ethically enter these spaces and stay neutral - we have to enter with our values of equity clearly defined and up front. We have to be transparent and explicit about which values form the basis of how we judge something to be effective and positive, and which changes we choose to recommend. At Elevate, now more than ever, we are committed to clearly articulating how the value of equity shows up in our work and how we do business, and to routinely questioning the value and validity of “objectivity” in determining success for communities and the programs that serve them.

Some reflection questions to ask when confronted with “objective” information produced by evaluation:

  • Who “was in the room where it happened” (à la Hamilton)? Who got to define the evaluation questions and priorities?

  • Who is represented in the data? Who is missing?

  • Are there other “truths” or interpretations of the results that may also be valid?

  • Who decided what “success” means?


No matter the challenges you face, Elevate is ready to partner with you to drive meaningful impact in your community and help turn your vision into reality. Contact us if you are ready to embark on this journey together!
