Reclaiming Evidence and Impact: The Case for Context

If the global development community were to put together a list of the most overused—and perhaps misused—terminology of 2013, I would advocate for the inclusion of “evidence” and “impact.” Bureaucratic groupthink has narrowed the definitions of these two words so that only certain types of evidence and impacts can be labeled as such. Let me explain. International organizations and donors have become so focused on demonstrating what works that they’ve lost sight of understanding why it works and under what circumstances. I can’t help but feel that the development industrial complex has come down with a case of “keeping up with the Joneses.” My impact evaluation is more rigorous than yours. My evidence is more conclusive than yours. It’s a race to the top (or bottom?) to see who can drum up the most definitive answers to questions that might be asking us to “check all that apply” instead of “choose the most correct response.” We’re struggling to design multiple-choice answers for questions that might merit a narrative response.

I can’t help but question the motives behind such a movement. Sure, we’re all struggling to stay relevant in an ever-changing world. The global development community has responded to long-standing critiques of development done to or for communities by launching programs and policies that emphasize development with and by local communities. This is a step in the right direction. But while the international development community might claim to be transferring responsibility for technical program knowledge to local consultants and contractors, it has carefully written itself a new role: M&E (emphasis on the E). “Evidence” and “impact,” narrowly defined, are tied to contracts and consultancies that come with big money. It feels like a desperate attempt to keep expertise in the hands of a few: rallying support to scale up select policies and programs that have been rigorously evaluated for impact by some of the major players in the field. Let’s not forget that impact evaluation—if we maintain the narrow definition that’s usually offered—can come with a hefty price tag. There are certainly times when methods such as RCTs are the best choice and the costs of conducting the evaluation are proportionate to the benefits. But we must be very careful about conflating evaluation type and purpose with methodology, and even more careful about when, where, and why we implement impact evaluations (again, narrowly defined).

I just finished reading a great piece by Justin Sandefur at the Center for Global Development: The Parable of the Visiting Impact Evaluation Expert. Sandefur does an excellent job of painting an all-too-familiar picture: the development consultant who (perhaps quite innocently) has been misled to believe that conclusive findings derived in one context can be used to implement programs in completely different contexts. At the individual level, these experts might simply be misguided. The global conversation on impact and evidence leads us to believe that “rigor” matters and that programs or policies rigorously tested in one place can be proven to work anywhere. However, as Sandefur reminds us, “there is just no substitute for local knowledge.” What works in Country A might not work in Country B, and what works in Country B might not work in Country C. It is unwise—and dangerous—to make blind assumptions about the circumstances under which impact evaluations were able to establish significant results.

I would urge anyone interested in reclaiming the conversation on evidence to check out the Big Push Forward, which held a Politics of Evidence Conference in April with more than one hundred development professionals in attendance. The conference report has just been released on their website and is full of great takeaways.

Are you pushing back on the narrow definitions of evidence and impact? How so?

Protecting Human Rights While Building Trusting Relationships

Evaluation work around social issues is complex. Emerging research on systems thinking and complexity theory explains this; our experience confirms it. This complexity is amplified in situations where human rights are systematically violated. I’ve recently spent some time managing field projects related to documentation in the Dominican Republic, where native-born Dominicans of Haitian descent are often denied their legal right to birth registration and, since 2007, have had their previously issued identity documents revoked by the government. Many local, national, and international groups are currently lobbying the government, implementing programs, and conducting research on the issue. It’s a hot topic attracting significant internal and external attention, which raises the question: how can stakeholders learn more about the issue while protecting those who are affected?

Researchers and evaluators of programs in such contexts are ethically bound to protect the rights of participants, particularly when it comes to confidentiality and consent. IRB protocol is critical, but even the most painstaking attempts to honor its principles can strip the process of its human element (I have a particular aversion to the idea of protecting human “subjects”!). That’s why I’m advocating for greater consideration of how to build trusting relationships with participants in order to not only protect their rights, but also honor their dignity and personal histories.

Below I describe some considerations for researchers and/or evaluators who engage in projects related to sensitive issues in complex environments. I strongly believe these considerations should be taken into account at every level, from highly technical external evaluations to grassroots research and program development.

Location, location, location: Let participants choose where they feel most comfortable being interviewed. Some may feel more comfortable in the privacy of their own home while surrounded by family. Others may not feel safe providing information on where they live and would prefer a perceived neutral location in the community, such as a local church.

The company you keep: A local, trusted community member should accompany the researcher to assist in explaining unclear information to the participant, translating where necessary, and generally creating a safe and welcoming environment. Even better if that person is trained to actually conduct the research! Be sure that interviews are private and not overheard by others, unless the participant requests to be accompanied by a friend, family member, etc.

The right to say no: Participants should never feel forced to participate. If the researcher or evaluator is an outsider to the community, they may miss important cues signifying that an individual is hesitant to participate. Understand how power differentials may interfere with an individual’s perceived ability to say no, and work to mitigate them. Be able to judge verbal and non-verbal cues throughout the entire data collection process, and remind participants that they can choose not to answer a question or decline to continue at any moment.

The right to know: Participants should be informed about how any information collected will be used. Academic research may not be a familiar concept, and there may be (understandable!) suspicion or concern that information will get into the wrong hands and be used against them. Explain why notes are being taken, who will have access to information (both data and results), and so on. Give participants time to reflect on informed consent forms and ask questions. Be sure to have documents in multiple languages if the participant is not fluent in the region’s predominant language, and have options for non-literate individuals. Err on the side of over-explaining and providing “too much” information, even if it takes more time; relationships can be damaged and trust broken within minutes. Ask the participant to repeat back what they are agreeing to in order to ensure full consent and comprehension.

What’s in a name: Only collect personally identifiable information (PII) if it is absolutely necessary, and don’t forget that voice recordings are also a form of PII! Participants will want to be assured that their responses cannot be traced back to them. If PII is collected, it should not appear on any materials that could be misplaced or seen by others (survey forms, assessments, etc.). Instead, use a separate marking system that is linked to participants only through secure, internal, restricted-access documents (see the first sketch after this list). Consider using pseudonyms for case studies or quotes, but don’t forget that participants might want ownership of their stories. They should have the opportunity to choose whether their identity is used in narratives that describe personal histories and experiences.

Be creative: There are many interesting and creative ways to maintain confidentiality and/or anonymity in situations where face-to-face conversations may not be feasible or may not produce honest responses. Implement a creative response system (colored cards, dice, etc.) that gives participants a sense of privacy and increased confidence in answering questions (see the second sketch after this list). Consider using a dividing screen or private room for submitting responses, as appropriate, to enhance feelings of security and anonymity.

Be human: Open the session with conversation instead of rigidly following a script or jumping straight to the informed consent form. It can be considered rude to “get down to business” immediately, and the participant is much less likely to feel comfortable or appreciated for their time and the personal risk they might be taking! Check in frequently with the participant throughout the interview, continuously gauge their comfort level, and make adjustments as needed. Be open to diverging from the protocol; letting the conversation take its course is critical when dealing with sensitive topics. Collect the information you need, but don’t sacrifice the personal connection.
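To make the “What’s in a name” point concrete, here is a minimal sketch in Python (not a vetted data-security protocol) of one way to separate PII from field data: assign each participant a random code, keep the code-to-identity link in a single restricted-access file, and record only the code on survey forms and transcripts. The file name and participant names here are hypothetical.

```python
import csv
import secrets

def assign_code(existing: set) -> str:
    """Generate a short random participant code that is not already in use."""
    while True:
        code = f"P-{secrets.token_hex(3)}"  # e.g. "P-a41f9c"
        if code not in existing:
            return code

# Hypothetical names; in practice these would come from your intake records.
participants = ["Participant A", "Participant B"]

codes: set = set()
# The linking file is the ONLY place codes and identities appear together;
# store it encrypted, on restricted-access media, separate from survey data.
with open("linking_file.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["code", "name"])
    for name in participants:
        code = assign_code(codes)
        codes.add(code)
        writer.writerow([code, name])
        print(code)  # only the code appears on survey forms and transcripts
```

If the linking file is destroyed once it is no longer needed, the connection between identities and data is severed for good.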
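And one well-established version of the dice-based idea in “Be creative” is the randomized response technique, which dates back to Warner’s 1965 work on survey privacy. Because the dice add noise in a known way, you can still estimate how common a sensitive experience is without ever knowing any individual’s true answer. Below is a minimal sketch assuming a simple forced-response design; the probabilities and counts are purely illustrative.

```python
# Forced-response randomized response design (an illustrative assumption):
# each participant privately rolls a die before answering a yes/no question.
#   roll 1      -> answer "yes" no matter what  (probability 1/6)
#   roll 6      -> answer "no" no matter what   (probability 1/6)
#   rolls 2..5  -> answer truthfully            (probability 4/6)
# Only the participant knows which rule applied, so no single "yes" reveals anything.

def estimate_prevalence(yes_count: int, n: int,
                        p_forced_yes: float = 1 / 6,
                        p_truthful: float = 4 / 6) -> float:
    """Back out the true 'yes' prevalence from the noisy responses."""
    observed = yes_count / n
    # observed_yes_rate = p_forced_yes + p_truthful * true_prevalence
    estimate = (observed - p_forced_yes) / p_truthful
    return min(max(estimate, 0.0), 1.0)  # clamp sampling noise into [0, 1]

# Hypothetical data: 120 of 300 participants answered "yes".
print(estimate_prevalence(120, 300))  # ≈ 0.35
```

The trade-off is statistical: you need a larger sample to offset the added noise, and you can only report aggregates, which is precisely the point.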

As with any research project or evaluation, the protocol depends on context. What similar challenges have you encountered in the field and how did you overcome them? What advice would you give to others working on sensitive issues in complex environments?

Update: Here are some resources on human rights-based approaches to M&E. Please add more in the comments section if you know of a great resource!

Selected Resources on Human Rights-Based Monitoring & Evaluation (compiled by GIZ)

Integrating Human Rights and Gender Equality in Evaluation (UNEG)

Rethinking Evaluation and Assessment in Human Rights Work (ICHRP)

Collection of Resources for Evaluating Human Rights Education (UMN Human Rights Library)

Guide to Evaluating Human Rights-Based Interventions in Health and Social Care (HRSJ)

Human Rights-Based Approach to Monitoring and Evaluation (HRBA Toolkit)

What Graduate Students Should Unlearn Before Becoming Evaluators

Calling all new and emerging evaluators—this post is for you! Graduate school can change the way you look at the world, but is that change for better or for worse? The answer, of course, is that it depends. But some skills typically learned in graduate school can actually hinder the ability to properly conduct evaluations.

In the beginning, learning to write literature reviews and design research studies is challenging because it requires one to exercise very careful logic to reach conclusions. Each thought must be justified by research-based evidence, each term must be painstakingly defined, each theoretical framework must be eloquently outlined, and each system or process must be elaborately illustrated. Much time is spent honing not only critical thinking skills, but also the ability to use (and show!) logic. Linking A to B, completing Step 1 before Step 2, proving X leads to Y—the brain becomes accustomed to thinking in a linear manner. Causation, causation, causation! After hours of reading and writing, of rereading and rewriting, it is no surprise that these habits can be hard to break. But we desperately need to unlearn some of those habits we paid so dearly to obtain.

Let me explain. I don’t have the data to prove it, but I’d wager that a quick survey of graduate students in academic disciplines—research-based rather than practitioner-focused programs—would reveal that more time is spent working alone than in groups. Solitary work is quite conducive to linear thought, at least in the academic sense (staying focused on the work at hand is another story!). In such circumstances, one has the luxury of designing elaborate models that control for every conceivable variable that could affect the ability of the research to yield statistically significant results. And this type of research is incredibly effective for the purposes it is intended to serve. In fact, many organizations engage in research, and these skills are critical to pursuing that type of work. But too many students latch onto the idea of evaluation as a career path and, fresh from their studies, attempt to use the research paradigm they’ve recently internalized to answer complex questions in a messy world that requires a different approach. As a recent graduate myself, I know how hard these habits can be to break and how much practice it takes to determine when to use research versus evaluation. This post aims to help you figure it out faster!

So what exactly does evaluation do differently than research? John LaVelle created a great visual to show just that:

[Image: Evaluation and Research Hourglass, by John LaVelle]

Still not convinced? Check out this excellent article by E. Jane Davidson on unlearning social scientist habits—required reading for recent or soon-to-be graduates looking to break into evaluation!

If one of the first things you are learning as a new evaluator is how to design a logical framework or logic model, be sure to review these resources (and more!) very carefully. I use logic models regularly and find them useful in many ways, but they can be ineffective, even downright dangerous, when paired with an “objective” research lens. Start unlearning those less-than-helpful-for-evaluation research habits early, and you’ll become a stronger, more seasoned evaluator.

What other habits learned in graduate school need to be unlearned or adapted upon entering the messy world of “work”? Share your tips!