
Protecting Human Rights While Building Trusting Relationships

Evaluation work around social issues is complex. Emerging research on systems thinking and complexity theory explains this; our experience confirms it. This complexity is amplified in situations where human rights are systematically violated. I’ve recently spent some time managing field projects related to documentation in the Dominican Republic, where native-born Dominicans of Haitian descent are often denied their legal right to birth registration and, since 2007, have had their previously issued identity documents revoked by the government. Many local, national, and international groups are currently lobbying the government, implementing programs, and conducting research on the issue. It’s a hot topic attracting significant internal and external attention, which raises the question: How can stakeholders learn more about the issue while protecting those who are affected?

Researchers and evaluators of programs in such contexts are ethically bound to protect the rights of participants, particularly when it comes to confidentiality and consent. IRB protocol is critical, but even the most painstaking attempts to honor its principles can strip the process of its human element (I have a particular aversion to the idea of protecting human “subjects”!). That’s why I’m advocating for greater consideration of how to build trusting relationships with participants, not only to protect their rights but also to honor their dignity and personal histories.

Below I describe some considerations for researchers and/or evaluators who engage in projects related to sensitive issues in complex environments. I strongly believe these considerations should be taken into account at every level, from highly technical external evaluations to grassroots research and program development.

Location, location, location: Let participants choose where they feel most comfortable being interviewed. Some may feel more comfortable in the privacy of their own home, surrounded by family. Others may not feel safe providing information on where they live and would prefer a location in the community perceived as neutral, such as a local church.

The company you keep: A local, trusted community member should accompany the researcher to assist in explaining unclear information to the participant, translating where necessary, and generally creating a safe and welcoming environment. Even better if that person is trained to actually conduct the research! Be sure that interviews are private and not overheard by others, unless the participant requests to be accompanied by a friend, family member, etc.

The right to say no: Participants should never feel forced to participate. A researcher/evaluator who is an outsider may miss important cues signifying that an individual is hesitant to participate. Understand how power differentials may interfere with an individual’s perceived ability to say no, and work to mitigate them. Watch for verbal and non-verbal cues throughout the entire data collection process, and remind participants that they can choose not to answer a question or decline to continue at any moment.

The right to know: Participants should be informed about how any information collected will be used. Academic research may not be a familiar concept, and there may be (understandable!) suspicion or concern that information will get into the wrong hands and be used against them. Explain why notes are being taken, who will have access to the information (both data and results), and so on. Give participants time to reflect on informed consent forms and ask questions. Be sure to have documents in multiple languages if the participant is not fluent in the region’s predominant language, and have options for non-literate individuals. Err on the side of over-explaining and providing “too much” information, even if it takes more time; relationships can be damaged and trust broken within minutes. Ask the participant to repeat back what they are agreeing to in order to ensure full consent and comprehension.

What’s in a name: Only collect personally identifiable information (PII) if it is absolutely necessary. Don’t forget that voice recordings are also a form of PII! Participants will want to be assured that their responses cannot be traced back to them. If PII is collected, it should not appear on any materials that could be misplaced or seen by others (survey forms, assessments, etc.). Instead, use a separate marking system that is linked to participants only through secure, internal, restricted-access documents. Consider using pseudonyms for case studies or quotes, but don’t forget that participants might want ownership of their stories. They should have the opportunity to choose whether their identity is used in narratives that describe personal histories and experiences.
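
For teams that manage field data digitally, here is a minimal sketch of the kind of marking system described above. It is my own illustration, not a prescribed tool; the file name and fields are hypothetical. Random codes appear on data collection materials, while the name-to-code linkage lives in a single restricted-access file kept apart from the data.

```python
import csv
import secrets

def assign_codes(participant_names, link_file="linkage_table_RESTRICTED.csv"):
    """Give each participant a random code and store the name-to-code mapping
    in a separate, access-restricted file. Only the codes should ever appear
    on survey forms, transcripts, recording metadata, or analysis files."""
    codes = {name: "P-" + secrets.token_hex(4) for name in participant_names}
    with open(link_file, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["name", "code"])
        writer.writerows(codes.items())
    return codes

# Hypothetical usage: keep the linkage file encrypted and off shared drives,
# separate from any survey data labeled only with the codes.
codes = assign_codes(["Participant A", "Participant B"])
print(codes["Participant A"])  # e.g. "P-a3f9c21b"
```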

Be creative: There are many interesting and creative ways to maintain confidentiality and/or anonymity in situations where face-to-face conversations may not be feasible or may not produce honest responses. Implement a creative response system (colored cards, dice, etc.) that gives participants a sense of privacy and increased confidence in answering questions. Consider using a dividing screen or private room for submitting responses, as appropriate, to enhance feelings of security and anonymity.
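
To make the dice idea concrete, one well-known version of such a system (my example, not part of the original protocol) is the forced-response design: a private dice roll decides whether the respondent answers truthfully or gives a predetermined answer, so no single response reveals anything about the person, yet the overall rate of a sensitive experience can still be estimated. A rough sketch, assuming a fair six-sided die:

```python
def estimate_true_rate(yes_count, n, p_forced_yes=1/6, p_truthful=4/6):
    """Forced-response design: each respondent privately rolls a die.
    On a 1 they must say "yes", on a 6 they must say "no", and on 2-5
    they answer truthfully. Whoever hears a single answer cannot know
    whether it was forced or genuine, but the sample-level rate of the
    sensitive behavior or experience can still be recovered."""
    observed_yes_rate = yes_count / n
    true_rate = (observed_yes_rate - p_forced_yes) / p_truthful
    return max(0.0, min(1.0, true_rate))  # clamp to [0, 1] for small samples

# Hypothetical usage: 60 "yes" answers out of 200 respondents
print(estimate_true_rate(60, 200))  # ~0.20
```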

Be human: Open up the session with conversation instead of rigidly following a script or jumping to the informed consent form. It can be considered rude to “get down to business” immediately, and the participant is much less likely to feel comfortable or appreciated for their time and the personal risk they might be taking! Check in frequently with the participant throughout the interview, continuously gauge their comfort level, and make adjustments as necessary. Be open to diverging from protocol if necessary. Letting the conversation take its course is critical when dealing with sensitive topics. Be sure to collect the information you need, but don’t sacrifice the personal connection.

As with any research project or evaluation, the protocol depends on context. What similar challenges have you encountered in the field and how did you overcome them? What advice would you give to others working on sensitive issues in complex environments?

Update: Some resources on human rights-based approaches to M&E. Please add more in the comments section if you know of a great resource!

Selected Resources on Human Rights-Based Monitoring & Evaluation (compiled by GIZ)

Integrating Human Rights and Gender Equality in Evaluation (UNEG)

Rethinking Evaluation and Assessment in Human Rights Work (ICHRP)

Collection of Resources for Evaluating Human Rights Education (UMN Human Rights Library)

Guide to Evaluating Human Rights-Based Interventions in Health and Social Care (HRSJ)

Human Rights-Based Approach to Monitoring and Evaluation (HRBA Toolkit)


Participatory Development Pitfalls Translate to Evaluation

Lately I’ve been thinking a lot about the challenges of balancing learning and accountability in evaluations. While the two are not mutually exclusive, evaluation for accountability often faces specific boundaries that prevent an evaluator from exercising their full range of professional expertise in designing and executing the evaluation. Such limits can result in certain types of lessons learned being valued over others. Donors often commission specific types of evaluations based on organizational policy or perceived “best practices.” The need to work within such a framework often dictates the type of learning that will result.

In many cases, preferred methods, resources, short timeframes, local conditions, and other constraints eliminate opportunities to conduct participatory evaluations. Yet participatory evaluations—done well—can produce results for development that go well beyond learning and accountability. Anna Colom’s recent article in The Guardian, “How to avoid pitfalls of participatory development” highlights an interesting question of which evaluators should take note: “Is it even possible to run participatory projects in the current context of international development, still very much western-led and tied to logframes, donors and organisational agendas and structures?”

Project design and implementation aren’t the only things to blame for complicating participatory projects. Monitoring and evaluation practices (and values) play a tremendous role in discouraging (or encouraging) real participation.

Colom identifies common pitfalls in participatory development (her categories, my summaries) that apply equally to participatory evaluation:

  • Define participation and ownership: When and to what extent participation will be encouraged (and why) + who owns the project and its component parts
  • Understand the context and its nuances: Power relations within communities + power relations between communities and other actors
  • Define the community: Target community composition, including any sub-groups that define it
  • Facilitators must know when to lead and when to pull back: Balancing “external” facilitation with group leadership and ownership
  • Decide what will happen when you go: Sustainability!

To her list, I would add two more common pitfalls that are critical to address in participatory evaluations:

  • Define what counts as credible evidence: Communities should have a very real voice in determining what types of evidence are credible for agreed purposes, as well as how that evidence should be collected. Facilitator and community member opinions may often come into conflict at this stage because of varied beliefs about what constitutes credibility. The facilitator as “critical friend” can provide guidance based on knowledge and experience, but should listen carefully to community needs so that collected evidence is valued and validated by the community.
  • Decide how results will be used and communicated: Community members should be engaged in answering the following questions: What types of results will be communicated? For what purposes? For what audiences? How will results be used? Community members should also help prepare results to be communicated in ways that will be accepted and appreciated by various stakeholders. Particular attention should be paid to social, cultural, and linguistic relevance, with an emphasis on inclusion. Communities should agree to how their work (and community!) will be represented to a wider audience. Participation in the communication and utilization stages is critical for the sustainability of projects, as it reinforces community ownership and builds capacity for future implementation and evaluation work.

What other common pitfalls in participatory development and/or evaluation need to be addressed?

My Two Cents on the RCT Debate

I’ve been doing a lot of thinking recently about my contribution to the debate on RCTs. Several weeks after the Evaluation Conclave in Kathmandu, I’m ready to give my two cents. First things first: a little context. RCT = randomized controlled trial, an impact evaluation method that establishes “rigor” by using control and treatment group(s) to determine whether particular outcomes can be attributed to a particular program or intervention. Review enough literature or attend enough conferences, and one can see that RCTs are often presented as the “gold standard” in evaluation for their ability to show statistically significant differences in outcomes while controlling for various influencing factors. Sounds good, right? Certainly many students, practitioners, and policymakers are seduced by their empirical and scientific nature. As with any subject in international development, however, it’s not so simple. Michael Quinn Patton’s keynote at the Evaluation Conclave presents a strong argument against the uncritical acceptance of RCTs as the method for showing impact. I urge you to watch it (and read his book on Developmental Evaluation while you’re at it!).
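
For readers new to the mechanics, here is a minimal sketch with invented numbers (not drawn from any real evaluation) of how an RCT attributes an outcome difference to a program by comparing randomly assigned treatment and control groups:

```python
import random
import statistics

random.seed(0)

# Invented outcome data: school attendance rates for randomly assigned
# treatment (received the hypothetical program) and control groups.
control = [random.gauss(0.70, 0.05) for _ in range(200)]
treatment = [random.gauss(0.75, 0.05) for _ in range(200)]

# The estimated impact is the difference in mean outcomes; random assignment
# is what licenses attributing this difference to the program rather than to
# pre-existing differences between the groups.
effect = statistics.mean(treatment) - statistics.mean(control)

# A rough standard error and z-statistic for the difference in means.
se = (statistics.variance(treatment) / len(treatment)
      + statistics.variance(control) / len(control)) ** 0.5
print(f"estimated effect: {effect:.3f}, z ~ {effect / se:.1f}")
```

The sketch also shows, by omission, what the debate below is about: nothing in the calculation says anything about why the program worked in that particular place and time.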

Now some in the evaluation field may think the RCT debate is “stale,” yet the sheer proliferation of donors and implementing organizations commissioning such evaluations proves that this is not the case. In fact, graduate schools across the country are churning out impact evaluators by the dozen. On the one hand, top schools can’t be blamed for teaching skills that are in high demand; their students will surely get jobs, and with high-profile organizations at that. But they are producing far too many “development as usual” professionals who are hesitant to engage in critiques about the way development is done and about the way development projects are evaluated. Is evaluation just another manifestation of development being “done to” countries rather than “done with” countries? The trend towards RCTs surely seems to lead to this conclusion. Who are evaluations being produced for? And why? Local governments aren’t the ones begging for RCTs; the donors are asking and implementing organizations are producing! De facto policy can be made pretty quickly with enough money to incentivize it. Is there a time and a place for RCTs? Of course (read here for a great post about various options for impact evaluations). RCTs in and of themselves are not “evil” as some opponents would suggest—they have strong merits in many cases (though sometimes questionable ethics when it comes to assigning beneficiaries to life-changing programs!).

It comes down to balancing accountability with learning and research with evaluation. Donors must make smart investments, and organizations must be accountable for funds they’ve been awarded. But there is too much pressure to take learning out of the equation. Who are the end users of evaluations? What purpose(s) do they serve? We cannot remove context when years of research and experience show that context can make or break a project. If a randomized controlled trial finds that increased school attendance in Honduras can be attributed to a specific education program, what are the implications? Will we attempt to “scale up” the project based on information that tells us little about why the program worked in a particular place and time? Can we use that evidence to justify a similar project in Cambodia? Tajikistan? Mozambique? I think we can do better in terms of evaluating program impact in context-specific ways that provide useful information for those on the ground. This is particularly important for those who may not find highly technical RCT results to be readily accessible, but who need to understand why programs succeed or fail. After all, if partnering with local governments and local NGOs can lead to more successful program implementation, it can also lead to more successful (and useful) program evaluation. But the evaluations should be designed according to terms agreed to by everyone involved. Given all available options, I’d be interested to see how many times an RCT would be universally selected.

As the blog post title suggests, this is a debate folks, so let me know where you fall in the “continuum” of opinions!