
Are enabling environments in evaluation actually disabling?

This question has been lingering in the weeks since AEA’s Evaluation 2013 Conference. First, let me just say that my debut experience at a national AEA conference was spectacular. I’ve been to my fair share of conferences over the years, and AEA outshined them all. The variety of sessions ensured that there truly was something for everybody (a difficult feat with 3,000+ attendees, all at different stages in their evaluation careers!). There was an impressive sense of community, and attendees were engaged nonstop in exchanging ideas and learning from one another.

I spent a lot of my time focusing on sessions related to evaluation capacity building and development. As an internal evaluator / M&E officer, I help program coordinators feel comfortable with M&E concepts, adopt them into their work, and advocate for their use over the long term. My job is not to “do all the M&E,” but rather to make sure that M&E “becomes a way of doing.” Evaluation capacity building and development provides great frameworks for achieving these goals within the organization. It also helps me think about how I can contribute to strengthening the practice of evaluation on a larger scale.

I started off the week with Michele Tarsilla’s full-day workshop on Evaluation Capacity Development 101. His workshop emphasized capacity as both a latent attribute and a potential state, something that already exists within individuals, organizations, and institutions, but which can be developed over time. This process requires an understanding of the level of capacity that already exists. As any qualified educator would confirm, it’s essential to recognize that learners already possess significant background knowledge. The job of the educator is to activate and build off that knowledge, which in combination with new information leads to increased levels of knowledge, skills, abilities, dispositions, etc. Tarsilla’s distinction between capacity building (short-term, activity-focused) and capacity development (long-term, process-focused) takes this concept of prior background knowledge and existing capacity into account. Although some might see it as simple semantics, language does matter, not only in how we frame conversations but also in how we execute our work.

Several other evaluation capacity building and development sessions I attended (including from the World Bank’s CLEAR Initiative) emphasized the importance of creating an enabling environment for evaluation. This terminology is common in the field, but it can be a bit opaque at first glance. To put it simply, an enabling environment is a context that facilitates the practice of evaluation (it enables effective evaluation to take place). Many factors make up such an environment; examples include demand from policymakers for the production of evidence, funding available for evaluations, widespread interest in and use of evaluation findings, and the existence of evaluation policies. EvalPartners is doing great work globally to create more enabling environments for evaluation. They’ve successfully declared 2015 the International Year of Evaluation (EvalYear) as part of this effort!

Enabling environments were on my mind when I participated in a new AEA session type, Birds of a Feather Roundtable Sessions. These lunchtime gatherings brought together diverse participants to talk about a shared area of interest. I grabbed takeout at the local deli and joined a session on international development, where we chatted about a variety of issues related to evaluation in the sector. One of the questions posed to the group dealt with the introduction of new methods that could effectively address issues of complexity (Here are some great posts that can catch you up on that debate: Complexity and the Future of Aid and Complexity 101 Part 1 and Part 2).

I immediately responded to this question with some doubt. I think the methods exist (or are being developed). There’s a lot of talk about developmental evaluation, systems thinking in evaluation, etc. It’s not uncharted territory methodologically speaking. But it is, perhaps, uncharted territory politically speaking. In other words, we are having a conversation about methods when we really need to be talking about the enabling environment. Does the evaluation environment in international development allow evaluators and practitioners to use these methodologies in their evaluation work? Or is the enabling environment actually disabling? Are evaluation practices in international development confined to certain types of evaluations? Yes, we need to continually innovate with methods. But those methods will never get used if the environment doesn’t allow for it.

I strongly believe that evaluators should not only actively engage in dialogue with other evaluators about the state of evaluation, but also advocate for enabling environments that allow the nuanced and specialized work that evaluation requires. We should be active contributors both in shaping the field itself and in shaping how others engage with it. It’s certainly possible (and, perhaps, common) to create an evaluation-friendly environment for certain types of evaluations, but not for others. Efforts to strengthen enabling environments should maintain a holistic perspective that allows for multiple types of evaluations, a diversity of perspectives, and innovations in the field. My hope is that, as more and more countries develop rich evaluation practices, the unique perspectives that each country brings to the table will enrich the conversation on evaluation itself. The worst outcome would be to superficially limit the conversation by fostering evaluation policies and practices that operate under a narrow definition of evaluation. Let’s allow diversity in background knowledge to create diversity in outcomes for the field. The more minds we have thinking differently about evaluation, the more innovations will be introduced. We just need to make sure that enabling environments are built flexibly enough to accept and foster those innovations.

What unique contributions have you seen to the field of evaluation? Have you seen enabling environments actually “disable” the work of evaluation? How so?


Reclaiming Evidence and Impact: The Case for Context

If the global development community were to put together a list of the most overused—and perhaps misused—terminology of 2013, I would advocate for the inclusion of “evidence” and “impact.” Bureaucratic groupthink has narrowed the definitions of these two words so that only certain types of evidence and impacts can be labeled as such. Let me explain. International organizations and donors have become so focused on demonstrating what works that they’ve lost sight of understanding why it works and under what circumstances. I can’t help but feel that the development industrial complex has come down with a case of “Keeping up with the Joneses.” My impact evaluation is more rigorous than yours. My evidence is more conclusive than yours. It’s a race to the top (or bottom?) to see who can drum up the most definitive answers to questions that might be asking us to “check all that apply” instead of “choose the most correct response.” We’re struggling to design multiple-choice answers for questions that might merit a narrative response.

I can’t help but question the motives behind such a movement. Sure, we’re all struggling to stay relevant in an ever-changing world. The global development community has responded to long-standing critiques of development done to or for communities by launching programs and policies that emphasize development with and by local communities. This is a step in the right direction. But while the international development community might claim to be transferring responsibility for technical program knowledge to local consultants and contractors, it has carefully written itself a new role: M&E (emphasis on the E). “Evidence” and “impact,” narrowly defined, are tied to contracts and consultancies that come with big money. It feels like a desperate attempt to keep expertise in the hands of a few, rallying support to scale up select policies and programs that have been rigorously evaluated for impact by some of the major players in the field. Let’s not forget that impact evaluation—if we maintain the narrow definition that’s usually offered—can come with a hefty price tag. There are certainly times when impact evaluations such as RCTs are the best methodological choice and the costs of conducting that evaluation are proportionate to the benefits. But we must be very careful about conflating evaluation type/purpose with methodology. And even more careful about when, where, and why we are implementing impact evaluations (again, narrowly defined).

I just finished reading a great piece by Justin Sandefur at the Center for Global Development: The Parable of the Visiting Impact Evaluation Expert. Sandefur does an excellent job of painting an all too familiar picture: the development consultant who (perhaps quite innocently) has been misled to believe that conclusive findings derived in one context can be used to implement programs in completely different contexts. At the individual level, these experts might be simply misguided. The global conversation on impact and evidence leads us to believe that “rigor” matters and that programs or policies rigorously tested can be proven to work. However, as Sandefur reminds us, “there is just no substitute for local knowledge.” What works in Country A might not work in Country B, and what works in Country B might not work in Country C. It is unwise—and dangerous—to make blind assumptions about the circumstances under which impact evaluations were able to establish significant results.

I would urge anyone interested in reclaiming the conversation on evidence to check out the Big Push Forward, which held a Politics of Evidence Conference in April with more than one hundred development professionals in attendance. The conference report has just been released on their website and is full of great takeaways.

Are you pushing back on the narrow definitions of evidence and impact? How so?

Donor-Driven vs. Indigenous: The Fate of Evaluation in the 21st Century

Today marks the first of a series of guest bloggers. A special welcome and warm thanks to my colleague Hanife Cakici for her contribution this week! Here at Learning Culture we’ll be showcasing diverse perspectives on evaluation practice in a variety of countries, so please contact me if you’d like to bring your voice to the discussion and author a guest post. Now I’ll pass the word over to Hanife!

I am pleased to contribute today to Learning Culture: A journey in asking interesting questions. I would like to extend my gratitude and warm wishes to my colleague Molly Hamm for inviting me to ask questions that I think are interesting for the future of evaluation practice around the world. Before I delve into more serious matters, though, I would like to take the opportunity to talk briefly about what I do and why I do it.

Coming to the University of Minnesota as a Fulbright scholar has sparked a deep curiosity in me to investigate how to build contextually sensitive evaluation systems and practice in my native country, Turkey, to improve our national programs and policies. Despite the dramatic expansion of the field of evaluation worldwide, program evaluation remains largely unexplored terrain in Turkish academic and governmental life. Faced with relatively scarce resources, Turkish policymakers need sound and useful evidence about the relevance and effectiveness of programs and policies. To this end, I have recently launched the Turkish Evaluation Association (TEA), Turkey’s first and only national evaluation network, to build a broad-based community of practice for emerging program evaluation practitioners in my native country. The association is featured on the International Organization for Cooperation in Evaluation’s (IOCE) interactive map of evaluation organizations around the world. To support this important work, my dissertation focuses on developing bottom-up evaluation systems and practice that accommodate Turkey’s historical, political, social, and cultural context and that contribute effectively and responsibly to improving social policies. In addition to my academic work, I currently serve on the board of directors of the Minnesota Evaluation Association, where I assist in organizing and facilitating events, workshops, and seminars that help scholars and evaluation practitioners develop new knowledge and skills in the field. Although my passion for evaluation practice started as early as 2006, it has certainly accelerated during the last couple of years thanks to worldwide initiatives to expand the field of evaluation to contexts outside of the global North, which were a great impetus for me to establish TEA.

Indeed, the field of evaluation in the 21st century will be characterized by its international and cross-cultural expansion. I suspect that this particular trend will dominate the majority of discussions during the 27th annual conference of the American Evaluation Association in October 2013. The conference itself will challenge evaluators’ practical toolbox and theoretical dispositions, as the theme invites practitioners to foresee, or even make predictions about, the future of evaluation practice. Lurking behind this global expansion, however, is an important question that begs for an answer: Will evaluation practice in the global South be top-down and donor-driven, or bottom-up and indigenous? The debate between (a) those who argue that the most appropriate way to strengthen evidence-based decision making in developing countries is for donor agencies (multilateral or bilateral) and/or donor countries to fund evaluation capacity building activities, thereby contributing to the evolution of evaluation systems and practice in developing countries, and (b) those who argue for a more indigenous approach, in which the developing country takes full ownership of its decision-making process and builds bottom-up evaluation culture and capabilities for and by its people, will occupy the headlines of evaluation journals. Certainly, empirical research is very much needed to answer this question. Yet I believe I can initiate a fun, scholarly conversation at this point by sharing some excerpts from my upcoming dissertation, titled The Perceived Value of Program Evaluation as a Decision-Making Tool in Turkish Educational Decision-Making Context.

Many Western evaluation scholars and practitioners have recognized that evaluation practice first expanded to low- and middle-income countries (LMICs) through Northern-based aid organizations as a means of delivering their services. Given evaluation’s significance in decision-making, a concerted effort by many Northern and some Southern institutions and evaluators to build evaluation systems and practice in developing countries contributed to this expansion. Numerous sessions, workshops, and conferences have been organized to build evaluation capacity in developing country governments, and many national evaluation organizations and associations have been established (Mertens & Russon, 2000). EvalPartners, an international evaluation partnership initiative to strengthen civil society evaluation capacities to influence public policy based on evidence, attempted to map existing Voluntary Organizations for Professional Evaluation (VOPEs) around the world and found information on a total of 158 VOPEs, of which 135 operate at the national level and 23 at the regional or international level. Some LMICs have established government-wide evaluation systems to improve their public programs and policies (e.g., Brazil, Korea, Mexico).

Aid organizations’ efforts to disseminate evaluation systems and practice in developing countries have implications for the future of evaluation practice outside of Western contexts. Evidence from a wide variety of evaluation studies converges to suggest that many developing countries consider evaluation a donor-driven activity without any value for their specific learning and information needs (World Bank, 2004). This (potentially) imposed use of evaluation reduced opportunities to increase and sustain national evaluation capacity to address national information needs and improve decisions. As noted by Hay (2010), the dominance of Northern-based and Northern-created aid organizations over the field of evaluation “created and reinforced inequalities in the global evaluation field by overemphasizing the values, perspectives, and priorities from the North and underemphasizing those from the South” (p. 224).

Researchers have recognized that evaluation is a social intervention and hence contend that evaluation reality is produced in politically, culturally, socially, and historically situated contexts (Guba & Lincoln, 1989; LaFrance & Nichols, 2008). Truth about the value and utility of evaluation can never be isolated from a domain of political discourses, cultural values, and historical relations (Bamberger, 1991; Hood, Hopson, & Frierson, 2005). As a result, evaluation scholars argue that context plays an essential role in grounding and validating the concept of evaluation in a particular setting for a particular group of people, as well as the ways in which it can be conducted and used (Conner, Fitzpatrick, & Rog, 2012).

In turn, the issue of context in evaluation problematizes the applicability of Western cultural frameworks in non-Western settings (Mertens & Hopson, 2006). Evidence from a wide variety of evaluation studies converges to suggest that the inquiry traditions of the white, majority Western culture may compromise the interests of underrepresented groups—low- and middle-income countries in this case—due to a widespread failure to appreciate these groups’ ontological and epistemological assumptions and cultural nuances (Smith, 2012; Kirkhart, 2005). To challenge the field’s status quo orientation toward majority Western thought, and to increase the contextual validity of results, many evaluation scholars advocate the use of non-Eurocentric evaluation approaches that are grounded in cultural context and conducted by and for the cultural community (Hopson, Bledsoe, & Kirkhart, 2011; Mertens, 2007).

These are only a few introductory remarks and arguments from the existing literature, an entry point into a much bigger and more significant discussion. As the field of evaluation evolves globally and cuts across many countries and cultures, evaluators need to be critical of their assumptions related to evaluation systems and practice. It is my hope that they will keep the evaluation dilemma presented above at the back or front of their minds as a guiding star in the 21st century.

Now we invite readers to weigh in on the discussion. What steps are being or could be taken to foster indigenous evaluation? How can the prevalence of donor-driven evaluation be balanced by local organization- or community-driven evaluation?

You can’t make a pig fat by weighing it

Last week I swapped islands for 48 hours, traveling from the Dominican Republic to Jamaica, where I had the distinct pleasure of participating in a consultation for the third phase of the Learning Metrics Task Force. The LMTF—co-convened by the Center for Universal Education at Brookings and the UNESCO Institute for Statistics—aims to change the global conversation on education to focus on access plus learning, with an eye towards influencing the post-2015 development agenda. You can stay posted on recommendations from each stage of the consultation process here. The consultation at the Inter-American Development Bank Kingston office convened high-level education officials from various Caribbean states; I attended as a member of the Implementation Working Group.

[Photo: LMTF consultation participants in Kingston, Jamaica. Courtesy of Dr. Winsome Gordon.]

The conversation was lively and focused mostly on the discussion guide previously prepared by the Working Group, which meant that much of our time was spent understanding the ways in which learning is measured at various levels in different countries. Of course, as with any discussion among experienced education professionals, the tension between policy (measurement and evaluation of learning) and practice (teaching and learning processes) was highlighted often. Perhaps this tension was best summarized by one of our colleagues (in her grandmother’s words): “You can’t make a pig fat by weighing it.” Indeed, education planning, measurement, and assessment are more often governed by the mantra what gets measured gets done. Such a statement privileges measurement as an end in itself, implying that actions to produce that measurement will automatically follow. It says nothing, however, about the quality of those actions or the validity and reliability of the measurement. Are we working towards the right goals? Are we measuring what we think we are measuring? Are our actions leading to the desired outcomes? We are all too familiar with how the pressure attached to measurement creates perverse incentives to distort actions and results!

The more compelling idea (a pig isn’t fat because you weighed it!) reminds us that the sheer act of measurement does not guarantee that the processes needed to achieve the desired outcomes are in place. Identifying desired learning outcomes and selecting appropriate measures to assess that learning is the right place to start (just ask any highly qualified teacher who is lesson planning!). We must know what students should be learning and how we will know they are learning before we design the activities that foster the desired outcomes. This is what the LMTF is currently doing through the three-phase consultation. The true litmus test for the project will be its ability to connect policy with practice, to build capacity for the learning competencies to be achieved through high quality learning experiences. As was mentioned in the Caribbean consultation, the project’s theory of change must include this crucial link to ensure that a conversation on access plus learning is more than a conversation.

The project has significant policy weight behind it, and hundreds of policymakers and practitioners have actively participated in consultations. There is currently substantial international attention on issues of teacher quality and the teaching profession. The LMTF can capitalize on this momentum—perhaps through strategic partnerships or through the proposed advisory board—so that the teaching and learning process remains front and center in the conversation. The project thus far has done an excellent job of facilitating dialogue around the basic framework to ensure that all children are learning. Desired learning outcomes are being identified, and measurement methods to assess achievement of those outcomes are being defined. The greatest challenge will come as we “fill in the activities” and support countries not only in their ability to measure learning (which is important), but also in their ability to “do well” on valid and reliable measures thanks to well-prepared teachers, high quality teaching, and safe, productive learning environments. It’s the teaching and learning process that counts.

I would love to hear from others that have participated in previous consultations. What were your key takeaways? What are the next steps beyond the third phase?

Working in Someone Else’s Country

In the United States, May is graduation season, and with that season comes an influx of both young and seasoned professionals entering and/or re-entering the workforce through full-time positions and short-term consultancies. May also marks the beginning of summer, when hundreds of students head “into the field” to complete research or internships in international settings. For those in international development, I have just one recommendation before you make the transition: Read this book.


How to Work in Someone Else’s Country by Ruth Stark (reviewed here by Jennifer Lentfer at How Matters) provides practical advice for mitigating some of international development’s greatest failures, as propagated by poorly prepared (and poorly behaved!) international workers. Think you know all there is to know about “getting development right” and working in partnership with local communities? Think again! Even the seasoned aid worker will pull out some hidden gems to act on, all while nodding in agreement at some of the cringe-worthy anecdotes of consultants gone wrong. The book is especially important for evaluators and M&E specialists, as most of our work tends to be shorter-term and—let’s face it—is particularly susceptible to negative perceptions at the community or local level.

Some key takeaways from my favorite chapters:

  • Relationship is everything…and everyone is related (the author struck gold with this first chapter title!) Time constraints can make the process of developing relationships take a back seat, but early investment here doesn’t just pay off in the long run: it’s the right thing to do. And everyone that you encounter matters, especially when you are a guest. Besides, you never know who is related to whom. And don’t gossip about local colleagues to other local colleagues. It’s not just bad form; loose lips sink ships!
  • Figure out your job and who you’re working for. The official job description or ToR is only one piece of the puzzle (and sometimes the most puzzling part!) Take time to figure out your antecedents (the who, what, when, where, why) and what they mean for your work. Be especially on the lookout for political history that will guide you in what to do and what not to do. Because you’ll have many stakeholders with competing demands, it’s key to find out who the most important client is and prioritize their priorities. But most importantly, never forget the client who is not at the table to begin with.
  • What to do if you get there and nobody wants you. It’s not just about taking up scarce time, space, and resources. The truth of the matter is that your presence might be a perceived and/or real threat on the ground. Understand and even embrace that reality. Meeting resistance with resistance (or, worse, imposition!) never ends well. Prepare upfront by finding out the background context of how your job came to be, but also be prepared to just shut up and listen!
  • How to make them glad that you are there. Let who you are, not your credentials, define you. This can be especially hard for recent graduates. My favorite piece of advice from the whole book: Don’t give the answer until you know the question. This advice makes a great mantra for an international development professional. I would also add that your answer, when given, is never THE answer. The quickest way to lose support is by pushing your own agenda, rather than understanding and supporting someone else’s.
  • Working with your local counterparts. The most important relationship of all. Sustainability, as a buzzword, has lost all meaning. But it’s all about ensuring that work can continue over the long-term, and this means building up and investing in the careers and professional development of local counterparts. All too often international consultants are too busy “building up” their own careers to recognize this tragic flaw. Ironically, an inability to do this could certainly lead to your own demise or, at the very least, to a steep decline in your reputation as someone others want to work with. Local counterparts should always, always, always participate in planning and decision-making, accompany (and direct) visits with key leaders and officials, take leadership in presentation design and delivery, receive recognition in reports and publications, etc. There is never such a thing as giving too much credit, unless it’s to yourself!
  • Working with governments. Stop criticizing and start collaborating, respect official channels and processes, and don’t argue with senior government officials. Give respect where respect is due. As the author reminds us, “never forget that you are a guest of the host country and work there only at the government’s pleasure.” I’d like to personally recommend this chapter to Madonna (she’s not exactly friends with the government in Malawi—and they’ve got good reason to be irritated).
  • Making a difference. Never stop asking yourself if your presence is making a difference for better or for worse. It’s not just about the project goals and metrics. These are meant to serve people. And people respond best to other people—caring and adaptive humans with soft skills, not unresponsive robots armed with pre-programmed tools and commands.

And, since the illustrative anecdotes about “bad behavior in the field” were one of my favorite parts of the book, I’m asking readers to contribute their own examples of “international consultants/employees gone wrong” in the comments section. There’s nothing better than learning (or unlearning) by example!

The threat of convergent thinking in M&E for international development

Like many in evaluation, I consider myself to be a lifelong learner; I thrive on learning, relearning, and unlearning. Knowledge isn’t static, and most of what we “know” could stand some healthy debate. I’m intrigued by the diversity in knowledge paradigms among evaluation practitioners. In fact, I’d love to see a study that analyzes how evaluators come to align themselves with particular paradigms and approaches (any takers?). Despite this diversity, I’m quite startled by the pervasiveness of convergent thinking in the field. This phenomenon affects M&E for international development in a particularly strong way.

Thanks to Twitter, I recently discovered Enrique Mendizabal’s article on how labels, frameworks, and tools might be stopping us from thinking (several months old by now, but worth the time. Do trust me on this one). One of his points is that tools and frameworks emphasize process to the point that space for thinking is eliminated. The proliferation of such tools creates an illusion of knowledge and expertise (punctuated by jargon), leaving few people to question the process and/or the product.

In many ways, M&E has become about compliance. In my opinion, this is largely both the result and the cause of convergent thinking. Efforts to prove impact must be “rigorous” and “based on evidence,” which usually implies the use of research-based tools. I’ll be the first to say that such tools can be, and in many cases are, highly effective. But development practitioners and policymakers talk out of both sides of their mouths. They tout innovation while actually encouraging and rewarding convergent thinking. The accepted M&E tools and frameworks are largely created in the Western world using Western paradigms. M&E “capacity building” is often code for “M&E compliance”: training a critical mass of specialists to run on auto-pilot through processes and principles that are supposed to encourage learning, but that often teach little more than how to say and do the “right thing” at the “right time” to prove results. The presentation of rigorous tools discourages skeptics in our audiences, who all too often feel gently—or not so gently—pressured to accept and implement such tools without critical review and healthy skepticism. Real innovation requires divergent thinking. Do we need rigor and evidence and research? A resounding Y-E-S. Do we need professionals with considerable expertise and experience to help guide M&E efforts? Without a doubt. But it is the M&E specialist’s job to integrate good practice with new ways of thinking.

I recently finished the book Creative People Must Be Stopped by David A. Owens, professor at Vanderbilt University. Owens argues that innovation constraints occur at many levels: individual, group, organization, industry, society, and technology. The convergent thinking that affects our ability to truly innovate (to solve problems and measure impact in new and better ways) comes into play at each of these levels. In my own practice, I’m trying to take more responsibility at the individual level. This not-so-easy task includes addressing three core components of creativity identified by Owens: perception, intellection (or thinking), and expression. I find myself—and the M&E field writ large—to be most susceptible to intellection constraints.

The first step is to eliminate stereotypes and patterns that prevent potentially relevant data from entering the problem-solving process. M&E specialists become accustomed to defining the same problems in the same ways. But what if that definition is wrong? I’m not talking about small errors in problem definitions that can be corrected through collaborative inquiry. I’m talking about widely accepted “evidence-based” definitions of problems (and their associated implementation and measurement practices) that have become almost akin to common knowledge in the field. We must challenge the definitions as well as the solutions to development problems. Unfortunately, our common problem definitions lead to common indicators and common data collection methods, which lead to common solutions—across projects, programs, countries, cultures, and contexts. In this sense, M&E has the potential to do more harm than good.

Monitoring and evaluation plans have become a staple of international projects, and the weight given to M&E plans in project proposals is increasing. It’s important that we—as a professional community of practice—serve as our own biggest skeptics, continue thinking critically, and avoid falling prey to “evaluation for compliance” pressures. With that said, I’d love to hear your thoughts. How can this be done? What are some examples of M&E compliance gone wrong? How have you succeeded in using M&E processes as a springboard for learning and innovation?

Protecting Human Rights While Building Trusting Relationships

Evaluation work around social issues is complex. Emerging research on systems thinking and complexity theory explains this; our experience confirms it. This complexity is amplified in situations where human rights are systematically violated. I’ve recently spent some time managing field projects related to documentation in the Dominican Republic, where native-born Dominicans of Haitian descent are often denied their legal right to birth registration and, since 2007, have had their previously issued identity documents revoked by the government. There are many local, national, and international groups currently lobbying the government, implementing programs, and conducting research on the issue. It’s a hot topic attracting significant internal and external attention, which raises the question: How can stakeholders learn more about the issue while protecting those who are affected?

Researchers and evaluators of programs in such contexts are ethically bound to protect the rights of participants, particularly when it comes to confidentiality and consent. IRB protocol is critical, but even the most painstaking attempts to honor its principles can strip the process of its human element (I have a particular aversion to the idea of protecting human “subjects”)! That’s why I’m advocating for greater consideration of how to build trusting relationships with participants, not only to protect their rights but also to honor their dignity and personal histories.

Below I describe some considerations for researchers and/or evaluators who engage in projects related to sensitive issues in complex environments. I strongly believe these considerations should be taken into account at every level, from highly technical external evaluations to grassroots research and program development.

Location, location, location: Let participants choose where they feel most comfortable being interviewed. Some may feel more comfortable in the privacy of their own home while surrounded by family. Others may not feel safe providing information on where they live and would prefer a perceived neutral location in the community, such as a local church.

The company you keep: A local, trusted community member should accompany the researcher to assist in explaining unclear information to the participant, translating where necessary, and generally creating a safe and welcoming environment. Even better if that person is trained to actually conduct the research! Be sure that interviews are private and not overheard by others, unless the participant requests to be accompanied by a friend, family member, etc.

The right to say no: Participants should never feel forced to participate. An outside researcher/evaluator may miss important cues signifying that an individual is hesitant to participate. Understand how power differentials may interfere with an individual’s perceived ability to say no, and work to mitigate them. Be able to judge verbal and non-verbal cues throughout the entire data collection process and be sure to remind participants that they can choose not to answer a question or decline to continue at any moment.

The right to know: Participants should be informed about how any information collected will be used. Academic research may not be a familiar concept, and there may be (understandable!) suspicion or concern that information will get into the wrong hands and be used against them. Explain why notes are being taken, who will have access to the information (both data and results), etc. Give participants time to reflect on informed consent forms and ask questions. Be sure to have documents in multiple languages if the participant is not fluent in the region’s predominant language. Have options for non-literate individuals. Err on the side of over-explaining and providing “too much” information, even if it takes more time. Relationships can be damaged and trust broken within minutes. Ask the participant to repeat back what they are agreeing to in order to ensure full consent and comprehension.

What’s in a name: Only collect personally identifiable information (PII) if it is absolutely necessary. Don’t forget that voice recordings are also a form of PII! Participants will want to be assured that their responses cannot be traced back to them. If PII is collected, it should not appear on any materials that could be misplaced or seen by others (survey forms, assessments, etc.). Instead, use a separate marking system that is linked to participants only through secure, internal, restricted-access documents (a rough sketch of this idea appears after this list). Consider using pseudonyms for case studies or quotes, but don’t forget that participants might want ownership of their stories. They should have the opportunity to choose whether their identity is used in narratives that describe personal histories and experiences.

Be creative: There are many interesting and creative ways to maintain confidentiality and/or anonymity in situations where face-to-face conversations may not be feasible or may not produce honest responses. Implement a creative response system (colored cards, dice, etc.) that gives participants a sense of privacy and increased confidence in answering questions. Consider using a dividing screen or private room for submitting responses, as appropriate, to enhance feelings of security and anonymity.

Be human: Open up the session with conversation instead of rigidly following a script or jumping to the informed consent form. It can be considered rude to “get down to business” immediately, and the participant is much less likely to feel comfortable or appreciated for their time and the personal risk they might be taking! Check in frequently with the participant throughout the interview, continuously gauge their comfort level, and make adjustments as necessary. Be open to diverging from protocol if necessary. Letting the conversation take its course is critical when dealing with sensitive topics. Be sure to collect the information you need, but don’t sacrifice the personal connection.
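To make the “marking system” idea from What’s in a name a bit more concrete, here is a minimal sketch in Python of one way it could work: survey forms, notes, and data files carry only a randomly generated participant code, while the single file linking codes to identities is kept separate, restricted, and ideally encrypted. The file names and fields below are hypothetical placeholders rather than a prescription; the same separation can be achieved just as well with paper forms or a spreadsheet kept under lock and key.

# A hypothetical illustration (file names and fields are placeholders): keep the
# identity-to-code linkage in one restricted file, and only the code in the data.

import csv
import secrets

def new_participant_code() -> str:
    """Generate a random, non-identifying code to write on forms and notes."""
    return "P-" + secrets.token_hex(4)  # e.g. "P-3f9a1c2b"

def register_participant(name: str, community: str,
                         linkage_path: str = "linkage_RESTRICTED.csv") -> str:
    """Record the identity-to-code link ONLY in the restricted linkage file."""
    code = new_participant_code()
    with open(linkage_path, "a", newline="", encoding="utf-8") as f:
        csv.writer(f).writerow([code, name, community])
    return code

def record_response(code: str, question: str, answer: str,
                    data_path: str = "responses.csv") -> None:
    """The working dataset carries only the code, never names or addresses."""
    with open(data_path, "a", newline="", encoding="utf-8") as f:
        csv.writer(f).writerow([code, question, answer])

if __name__ == "__main__":
    code = register_participant("Example Name", "Example Community")
    record_response(code, "Q1", "Example answer")

The point is not the code itself but the separation: anyone who stumbles across the working dataset learns nothing about who said what, while access to the linkage file can be limited to the one or two people who truly need it (and the file destroyed once it is no longer needed).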

As with any research project or evaluation, the protocol depends on context. What similar challenges have you encountered in the field and how did you overcome them? What advice would you give to others working on sensitive issues in complex environments?

Update: Some resources on human-rights based approaches to M&E. Please add more in the comments section if you know of a great resource!

Selected Resources on Human Rights-Based Monitoring & Evaluation (compiled by GIZ)

Integrating Human Rights and Gender Equality in Evaluation (UNEG)

Rethinking Evaluation and Assessment in Human Rights Work (ICHRP)

Collection of Resources for Evaluating Human Rights Education (UMN Human Rights Library)

Guide to Evaluating Human Rights-Based Interventions in Health and Social Care (HRSJ)

Human Rights-Based Approach to Monitoring and Evaluation (HRBA Toolkit)