Who’s Who in M&E? A Stakeholder Analysis

It’s been a few months since my last guest post, so today I’d like to introduce you to one of the savviest data divas I know, Laura Budzyna (see full bio below). Laura and I first met in graduate school where we led a joint research trip to the Dominican Republic, working with my current employer, The DREAM Project, for which she had previously volunteered. Laura is especially perceptive when it comes to both the technical and interpersonal aspects of M&E, which makes her an asset to any team. As she prepared to embark on a recent consulting gig, she put together a stakeholder analysis to help her manage the cast of characters often found on the M&E stage.

Here’s her story!

“Oh, so you’re like an auditor.”
“What? No!”
“Well, to them you are.”

I stopped mid-sip of my smoothie. I had just explained to a friend of mine that I was going to be visiting a few microfinance institutions in South America and taking a look at their monitoring and evaluation systems. My plan was to hang out with their IT teams and their loan officers, recommend ways to make their collection and input processes more efficient, and offer a few data analysis tips to boot. But an auditor?

“Sure. They probably feel threatened. They’re certainly not anxious to set a date for your arrival.”

Huh. He was right – we had been going back and forth about the date of my visit for weeks. Well, now, let’s think about this a second. Sure, the MFIs didn’t actually invite me to come – an investor of theirs was paying for my ticket and my time. An investor who…hmm…funds MFIs based on their social impact track record. And now, here I was, a foreigner sent by an investor, asking to rifle through their databases. I suddenly saw it from their perspective: who knows what I might find?

If I found inefficiencies? Okay, those would be helpful enough to point out, if a little embarrassing for the organization to share with their funder. If I found inaccuracies? Even more awkward. And if I found that their clients’ wellbeing was not improving? Well, I might as well just pull the plug on their funding altogether.

I was at best a disruption, and at worst a threat. I hadn’t quite thought this through.

Monitoring and evaluation, like any other specialty, is criss-crossed with complex relationships. Who are the stakeholders? Who are the influencers, and what are their agendas? Who is the information for, really? And perhaps more importantly…who should it be for? To answer these questions, many development practitioners use a tool called stakeholder analysis. This technique is featured in the handbooks of many of the heavyweights in the industry (see: DFID, UNDP, EuropeAid and CARE). The name rings tinny with jargon, I know. But if you can get past the dry stakeholder matrices prescribed by DFID and their ilk (you know, the ones languishing in the appendices of too many UN reports), the actual thought process behind stakeholder analysis is extremely helpful. It’s one of the first tools that we teach to the Masters in Development Practice students at SIPA, and in solidarity with them, I’m going to take a stab at one of my own.

I’ll begin with the caveat that my analysis is not specific to any one organization, and therefore it makes a few generalizations. It’s also a tad tongue-in-cheek: the summaries read a little like trading cards or Myers-Briggs profiles (if I had artistic leanings, I would have happily adorned each with an avatar, too). Still, I think they offer a useful starting point for an outsider trying to understand the daily drama of the M&E world. So, without further ado…

Who’s Who in M&E?

Most M&E systems have a fairly established cast of characters. Within the organization, it’s the management, the IT/data or admin team, and the client-facing staff that make M&E (especially M) happen on a daily basis. Outside of the organization, there are the funders who demand data and the consultants and academics who vet it. And then of course, there are the clients themselves who provide this valuable data. Let’s take a closer look:

Funders: These are donors and investors who, like the rest of us, have gotten pretty excited about the word “impact” and now want ever more convincing evidence of it in exchange for their dough. As evaluations have gotten fancier over the years, so have their demands. In fairness to them, many also provide technical assistance (see: Evaluation Consultants) and other resources to help their grantees meet these demands.

  • Influence: High
  • Motivation: To see that their money is well spent. And since many funding organizations have their own funders and investors, they want to be able to prove their own impact.
  • Accountable to: Their own funders, industry peers, policymakers
  • Use results: To make funding decisions, inform policymakers and other thought leaders, set agendas

Managers: These leaders are constantly balancing the need to produce impact for their clients with the need to prove it to their funders. Some are excited about measuring impact for the organization’s own sake; others are only doing it reluctantly at the behest of their partners. To tell the difference, it helps to find out how the current M&E system kicked off: did the managers initiate and champion the effort, or was it an external request?

  • Influence: High
  • Motivation: To keep the ship afloat. And that means keeping investors happy while trying to focus on their core mission…and keeping the minions (i.e. me) at bay.
  • Accountable to: In mission, clients. In practice, funders.
  • Use results: To plan strategically, to change course, to report to funders, to benchmark against other similar or competing organizations

Evaluation Consultants: These folks are paid to poke around other people’s kitchens. (Okay, they also design instruments, implement studies and analyze data.)

  • Influence: Medium
  • Motivation: To get tons of information in a (usually) short amount of time, so they can make effective recommendations to their client.
  • Accountable to: Whoever is paying them – could be the investor/funder, another consulting company, a research initiative, or the management. (If the management isn’t paying them, managers don’t have any control over how the findings are shared – another scary thought.)
  • Use results: To produce shiny reports for someone else to use, to add to their own knowledge base so they can carry it along to the next project.

Academic Researchers: In the same category as evaluation consultants, these outsiders bring scientific rigor to the evaluation effort. While there is more and more talk of successful partnerships between academic institutions and NGOs, the scrupulous academics and the pragmatic practitioners don’t always see eye to eye on how M&E should be carried out.

  • Influence: Medium
  • Motivation: Rigor and recognition, usually in the form of a peer-reviewed publication.
  • Accountable to: The academic community. And whoever’s on their tenure committee.
  • Use results: To publish, and to add new knowledge to their field

The IT/Data/Admin Team: From the database managers to the data entry wonks, this is actually where the magic happens. In many organizations, these employees are revered, as they’re the wardens and curators of the organizations’ data. Usually, they’re the only ones who can (or know how to) access the raw data, while management can only pull standardized reports. Depending on leadership’s enthusiasm for the M&E effort, this may or may not afford the IT team a good deal of influence in the organization.

  • Influence: Low-to-medium
  • Motivation: To do well at – and be recognized for – their challenging and specialized job
  • Accountable to: Management
  • Use results: To assess data validity

The Data Collectors: Your results are only as good as the data you gather. Without these worker bees, the whole system would collapse. Often, data collectors are client-facing workers who take on data collection on top of an already towering task list (think community health workers, microfinance loan officers, etc.). Others work for local research companies and are contracted for short-term data collection efforts, like an annual survey or an external evaluation. They spend time with the clients, they speak the language, and they populate our spreadsheets with hours and hours of conversations.

  • Influence: Low (Pro tip: if you can engage your data collectors during the survey design process and pilot, they will be your most helpful critics.)
  • Motivation: If they’re exclusively a data collector: per-survey compensation. If they’re an employee with 27 tasks to complete other than data collection: to finish as quickly as possible. The data collectors’ time is one of the top considerations when designing surveys or deciding which indicators to measure on a regular basis.
  • Accountable to: Supervisors
  • Use results: Indirectly. They may hear the highlights, but they don’t usually interact with results firsthand unless they give rise to major operational changes. Still, many of the workers who interact with clients regularly are very excited to learn about their progress, and it’s my opinion that they should be among the first to see the results.

The Client/Beneficiary: Oh yeah, remember them?

  • Influence: Low
  • Motivation: To improve their quality of life. And to get finished with this damn survey. (In fact, clients in a position to choose between different organizations may choose the one with fewer bothersome questionnaires.)
  • Accountable to: Their own families, their own jobs, and certainly not this data collector.
  • Use results: When was the last time you showed an impact report to the client? Imagine the power shift if you did! Clients holding organizations accountable for producing results? The implications are tremendous, and worthy of another blog post altogether.
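
(A side note for the spreadsheet-inclined: if you would rather keep these profiles somewhere more queryable than a deck of trading cards, here is a minimal sketch of one way to capture them as structured data. It is purely illustrative; the fields mirror the profile headings above, the two entries are abbreviated, and the numeric influence coding is my own rough shorthand, not anything prescribed by DFID and their ilk.)

```python
from dataclasses import dataclass, field

# Rough numeric coding of the qualitative influence ratings used in the profiles above.
INFLUENCE_SCORE = {"low": 1, "low-to-medium": 2, "medium": 3, "high": 5}

@dataclass
class Stakeholder:
    name: str
    influence: str                 # qualitative rating, as in the profiles
    motivation: str
    accountable_to: list = field(default_factory=list)
    uses_results_for: list = field(default_factory=list)

    @property
    def influence_score(self) -> int:
        return INFLUENCE_SCORE[self.influence]

# Two of the profiles above, heavily abbreviated; the rest follow the same pattern.
cast = [
    Stakeholder("Funders", "high", "See that their money is well spent",
                ["their own funders", "industry peers", "policymakers"],
                ["funding decisions", "agenda setting"]),
    Stakeholder("Clients/Beneficiaries", "low", "Improve their quality of life",
                ["their own families and jobs"],
                ["rarely shown results at all"]),
]

# Quick gut check: who holds the most sway over the M&E system, and who answers to whom?
for s in sorted(cast, key=lambda s: s.influence_score, reverse=True):
    print(f"{s.name}: influence={s.influence}; accountable to {', '.join(s.accountable_to)}")
```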

Looking at these profiles, my own challenges with this MFI project are beginning to make a lot more sense. No one is actually accountable to me, the evaluation consultant, except for the fact that I’m linked to a funder. Still, my recommendations have influence and may well change their jobs substantially. And if I don’t understand how my suggestions will affect all the stakeholders, then my whole project will fall flat.

It’s only the first piece of the puzzle – I’ve got a few weeks of meetings ahead to understand the full picture. But at the very least, I’m glad I did this before tomorrow morning, when I’ll walk into my meetings and (knock on wood!) convince these organizations that I’m on their side.

I’d love to hear whether any of this holds up against your experiences. Where do you fit into the mix? What are your motivations, whom are you accountable to, and how does this affect how you interact with the other stakeholders?

Author Bio:

Laura Budzyna is an independent development consultant specializing in data analysis, monitoring and evaluation, and capacity building for sustainable development. Laura’s most recent work with EA Consultants has focused on access to finance for low-income populations, which has led her to participate in M&E efforts across Africa, Latin America and the USA. She has contributed to research initiatives for such clients as the Inter-American Development Bank, the Corporation for Enterprise Development, the Gates Foundation, the City of New York and the Clinton Health Access Initiative. After graduating with an MPA in Development Practice from Columbia’s School of International and Public Affairs, Laura led the design and pilot of the DP Lab, a four-semester workshop series on program design and evaluation. She remains an associate instructor for the course. Before SIPA, she directed summer programming and taught literacy to at-risk youth at the DREAM Project in the Dominican Republic. She holds a BA in Latin American Studies from Middlebury College.

Are enabling environments in evaluation actually disabling?

This question has lingered in the weeks since AEA’s Evaluation 2013 Conference. First, let me just say that my debut experience at a national AEA conference was spectacular. I’ve been to my fair share of conferences over the years, and AEA outshined them all. The variety of sessions ensured that there truly was something for everybody (a difficult feat with 3,000+ attendees, all at different stages in their evaluation careers!). There was an impressive sense of community, and attendees were engaged nonstop in exchanging ideas and learning from one another.

I spent a lot of my time focusing on sessions related to evaluation capacity building and development. As an internal evaluator / M&E officer, I help program coordinators feel comfortable with M&E concepts, incorporate them into their work, and advocate for their use over the long term. My job is not to “do all the M&E,” but rather to make sure that M&E “becomes a way of doing.” Evaluation capacity building and development provides great frameworks for achieving these goals within the organization. It also helps me think about how I can contribute to strengthening the practice of evaluation on a larger scale.

I started off the week with Michele Tarsilla’s full-day workshop on Evaluation Capacity Development 101. His workshop emphasized capacity as both a latent attribute and a potential state, something that already exists within individuals, organizations, and institutions, but which can be developed over time. This process requires an understanding of the level of capacity that already exists. As any qualified educator would confirm, it’s essential to recognize that learners already possess significant background knowledge. The job of the educator is to activate and build off that knowledge, which in combination with new information leads to increased levels of knowledge, skills, abilities, dispositions, etc. Tarsilla’s distinction between capacity building (short-term, activity-focused) and capacity development (long-term, process-focused) takes this concept of prior background knowledge and existing capacity into account. Although some might see it as simple semantics, language does matter, not only in how we frame conversations but also in how we execute our work.

Several other evaluation capacity building and development sessions I attended (including one from the World Bank’s CLEAR Initiative) emphasized the importance of creating an enabling environment for evaluation. This terminology is common in the field, but it can be a bit opaque at first glance. To put it simply, an enabling environment is a context that facilitates the practice of evaluation (it enables effective evaluation to take place). Many factors make up such an environment; examples include demand from policymakers for the production of evidence, funding available for evaluations, widespread interest in and use of evaluation findings, and the existence of evaluation policies. EvalPartners is doing great work globally to create more enabling environments for evaluation. As part of this effort, they’ve successfully declared 2015 the International Year of Evaluation (EvalYear)!

Enabling environments were on my mind when I participated in a new AEA session type, Birds of a Feather Roundtable Sessions. These lunchtime gatherings brought together diverse participants to talk about a shared area of interest. I grabbed takeout at the local deli and joined a session on international development, where we chatted about a variety of issues related to evaluation in the sector. One of the questions posed to the group dealt with the introduction of new methods that could effectively address issues of complexity (Here are some great posts that can catch you up on that debate: Complexity and the Future of Aid and Complexity 101 Part 1 and Part 2).

I immediately responded to this question with some doubt. I think the methods exist (or are being developed). There’s a lot of talk about developmental evaluation, systems thinking in evaluation, etc. It’s not uncharted territory methodologically speaking. But it is, perhaps, uncharted territory politically speaking. In other words, we are having a conversation about methods when we really need to be talking about the enabling environment. Does the evaluation environment in international development allow evaluators and practitioners to use these methodologies in their evaluation work? Or is the enabling environment actually disabling? Are evaluation practices in international development confined to certain types of evaluations? Yes, we need to continually innovate with methods. But those methods will never get used if the environment doesn’t allow for it.

I strongly believe that evaluators should not only engage actively in dialogue with other evaluators about the state of evaluation, but also advocate for enabling environments that allow for the nuanced and specialized work that evaluation requires. We should be active contributors both in shaping the field itself and in shaping how others engage with it. It’s certainly possible (and, perhaps, common) to create an evaluation-friendly environment for certain types of evaluations, but not for others. Efforts to strengthen enabling environments should maintain a holistic perspective that allows for multiple types of evaluations, a diversity of perspectives, and innovations in the field. My hope is that, as more and more countries develop rich evaluation practices, the unique perspectives that each country brings to the table will enrich the conversation on evaluation itself. The worst thing would be to superficially limit the conversation by fostering evaluation policies and practices that operate under a narrow definition of evaluation. Let’s allow diversity in background knowledge to create diversity in outcomes for the field. The more minds we have thinking differently about evaluation, the more innovations will be introduced. We just need to make sure that enabling environments are built flexibly enough to accept and foster those innovations.

What unique contributions have you seen to the field of evaluation? Have you seen enabling environments actually “disable” the work of evaluation? How so?

Reclaiming Evidence and Impact: The Case for Context

If the global development community were to put together a list of the most overused—and perhaps misused—terminology of 2013, I would advocate for the inclusion of “evidence” and “impact.” Bureaucratic groupthink has narrowed the definitions of these two words so that only certain types of evidence and impacts can be labeled as such. Let me explain. International organizations and donors have become so focused on demonstrating what works that they’ve lost sight of understanding why it works and under what circumstances. I can’t help but feel that the development industrial complex has come down with a case of “Keeping up with the Joneses.” My impact evaluation is more rigorous than yours. My evidence is more conclusive than yours. It’s a race to the top (or bottom?) to see who can drum up the most definitive answers to questions that might be asking us to “check all that apply” instead of “choose the most correct response.” We’re struggling to design multiple-choice answers for questions that might merit a narrative response.

I can’t help but question the motives behind such a movement. Sure, we’re all struggling to stay relevant in an ever-changing world. The global development community has responded to long-time critiques of development that is done to or for communities by launching programs and policies that emphasize development with and by local communities. This is a step in the right direction. While the international development community might claim to be transferring responsibility for technical program knowledge to local consultants and contractors, it has carefully written itself a new role: M&E (emphasis on the E). “Evidence” and “impact,” narrowly defined, are tied to contracts and consultancies that are linked to big money. It feels like a desperate attempt to keep expertise in the hands of a few and to rally support to scale up select policies and programs that have been rigorously evaluated for impact by some of the major players in the field. Let’s not forget that impact evaluation—if we maintain the narrow definition that’s usually offered—can come with a hefty price tag. There are certainly times when impact evaluations such as RCTs are the best methodological choice and the costs of conducting them are proportionate to the benefits. But we must be very careful about conflating evaluation type/purpose with methodology. And even more careful about when, where, and why we are implementing impact evaluations (again, narrowly defined).

I just finished reading a great piece by Justin Sandefur at the Center for Global Development: The Parable of the Visiting Impact Evaluation Expert. Sandefur does an excellent job of painting an all too familiar picture: the development consultant who (perhaps quite innocently) has been misled to believe that conclusive findings derived in one context can be used to implement programs in completely different contexts. At the individual level, these experts might be simply misguided. The global conversation on impact and evidence leads us to believe that “rigor” matters and that programs or policies rigorously tested can be proven to work. However, as Sandefur reminds us, “there is just no substitute for local knowledge.” What works in Country A might not work in Country B, and what might not work in Country B probably will not work in Country C. It is unwise—and dangerous—to make blind assumptions about the circumstances under which impact evaluations were able to establish significant results.

I would urge anyone interested in reclaiming the conversation on evidence to check out the Big Push Forward, which held a Politics of Evidence Conference in April with more than one hundred development professionals in attendance. The conference report has just been released on their website and is full of great takeaways.

Are you pushing back on the narrow definitions of evidence and impact? How so?

Donor-Driven vs. Indigenous: The Fate of Evaluation in the 21st Century

Today marks the first in a series of guest posts. A special welcome and warm thanks to my colleague Hanife Cakici for her contribution this week! Here at Learning Culture we’ll be showcasing diverse perspectives on evaluation practice in a variety of countries, so please contact me if you’d like to bring your voice to the discussion and author a guest post. Now I’ll hand things over to Hanife!

I am pleased to contribute today to Learning Culture: A journey in asking interesting questions. I would like to extend my gratitude and warm wishes to my colleague Molly Hamm for inviting me to ask questions that I think are interesting for the future of evaluation practice around the world. Before I delve into more serious matters, though, I would like to take the opportunity to talk briefly about what I do and why I do it.

Coming to the University of Minnesota as a Fulbright scholar has generated a deep curiosity in me to investigate how to build contextually sensitive evaluation systems and practice in my native country, Turkey, in order to improve our national programs and policies. Despite the dramatic expansion of the field of evaluation worldwide, program evaluation remains rather unexplored terrain in Turkish academic and governmental life. Faced with relatively scarce resources, Turkish policymakers need sound and useful evidence about the relevance and effectiveness of programs and policies. To this end, I recently launched the Turkish Evaluation Association (TEA), Turkey’s first and only national evaluation network, to build a broad-based community of practice for emerging program evaluation practitioners in my native country. The association is featured on the International Organization for Cooperation in Evaluation’s (IOCE) interactive map of evaluation organizations around the world. To support this important work, my dissertation focuses on developing bottom-up evaluation systems and practices that accommodate Turkey’s historical, political, social, and cultural context and that will effectively and responsibly contribute to improving social policies. In addition to my academic work, I currently serve on the board of directors of the Minnesota Evaluation Association, where I assist in organizing and facilitating events, workshops, and seminars that help scholars and evaluation practitioners develop new knowledge and skills in the field. Although my passion for evaluation practice started as early as 2006, it has certainly accelerated during the last couple of years thanks to worldwide initiatives to expand the field of evaluation to contexts outside of the global North, which was a great impetus for me to establish TEA.

Indeed, the field of evaluation in the 21st century will be characterized by its international and cross-cultural expansion. I suspect that this particular trend will dominate the majority of discussions during the 27th annual conference of the American Evaluation Association in October 2013. The conference itself will challenge evaluators’ practical toolbox and theoretical dispositions, as the theme invites practitioners to foresee, or even make predictions about, the future of evaluation practice. Lurking behind this global expansion, however, is an important question that begs for an answer: Will evaluation practice in the global South be top-down and donor-driven, or bottom-up and indigenous? The headlines of evaluation journals will be occupied by the debate between (a) those who argue that the most appropriate way to strengthen evidence-based decision making in developing countries is for donor agencies (multilateral or bilateral) and/or donor countries to fund evaluation capacity building activities, thereby contributing to the evolution of evaluation systems and practice in developing countries, and (b) those who argue for a more indigenous approach, in which the developing country takes full ownership of its decision-making process and builds a bottom-up evaluation culture and capabilities for and by its people. Certainly, empirical research is very much needed to answer this question. Yet I believe I can initiate a fun, scholarly conversation at this point by sharing some excerpts from my upcoming dissertation, titled The Perceived Value of Program Evaluation as a Decision-Making Tool in Turkish Educational Decision-Making Context.

Many Western evaluation scholars and practitioners have recognized that evaluation practice was first expanded to low- and middle-income countries (LMICs) through Northern-based aid organizations as a means to deliver their services. Given evaluation’s significance in decision-making, a concerted effort by many Northern and some Southern institutions and evaluators to build evaluation systems and practice in developing countries contributed to this expansion. Numerous sessions, workshops, and conferences have been organized to build evaluation capacity in developing country governments, and many national evaluation organizations and associations have been established (Mertens & Russon, 2000). EvalPartners, an international evaluation partnership initiative to strengthen civil society evaluation capacities to influence public policy based on evidence, attempted to map existing Voluntary Organizations for Professional Evaluation (VOPEs) around the world and found information on a total of 158 VOPEs, of which 135 operate at the national level and 23 at the regional and international levels. Some LMICs have established government-wide evaluation systems to improve their public programs and policies (e.g., Brazil, Korea, Mexico).

Aid organizations’ efforts to disseminate evaluation systems and practice in developing countries have implications for the future of evaluation practice outside of Western contexts. Evidence from a wide variety of evaluation studies converges to suggest that many developing countries consider evaluation a donor-driven activity without any value to their specific learning and information needs (World Bank, 2004). This ‘potentially’ imposed use of evaluations has reduced the opportunities to increase and sustain national evaluation capacity to address national information needs and improve decisions. As Hay (2010) notes, the dominance of aid organizations based and created in the North has “created and reinforced inequalities in the global evaluation field by overemphasizing the values, perspectives, and priorities from the North and underemphasizing those from the South” (p. 224).

Researchers have recognized that evaluation is a social intervention, and hence contend that evaluation reality is produced in politically, culturally, socially and historically situated contexts (Guba & Lincoln, 1989; LaFrance & Nichols, 2008). Truth about the value and utility of evaluation can never be isolated from a domain of political discourses, cultural values and historical relations (Bamberger, 1991; Hood, Hopson, & Frierson, 2005). As a result, evaluation scholars argue that context plays an essential role in grounding and validating the concept of evaluation in a particular setting for a particular group of people, as well as the ways in which it can be conducted and used (Conner, Fitzpatrick, & Rog, 2012).

The issue of context in evaluation thus problematizes the applicability of Western cultural frameworks in non-Western settings (Mertens & Hopson, 2006). Evidence from a wide variety of evaluation studies converges to suggest that the inquiry traditions of the white, majority Western culture may compromise the interests of underrepresented groups—low- and middle-income countries in this case—due to a widespread failure to appreciate these groups’ ontological and epistemological assumptions and cultural nuances (Smith, 2012; Kirkhart, 2005). To challenge the field’s status quo orientation toward majority Western thought and increase the contextual validity of results, many evaluation scholars advocate the use of non-Eurocentric evaluation approaches that are grounded in cultural context and done by and for the cultural community (Hopson, Bledsoe, & Kirkhart, 2011; Mertens, 2007).

These are only a few introductory remarks and arguments, drawn from the existing literature, on a much bigger and more significant discussion. As the field of evaluation evolves globally and cuts across many countries and cultures, evaluators need to be critical of their assumptions related to evaluation systems and practice. It is my hope that they will keep the evaluation dilemma presented above at the back (or front) of their minds as a guiding star in the 21st century.

Now we invite readers to weigh in on the discussion. What steps are being or could be taken to foster indigenous evaluation? How can the prevalence of donor-driven evaluation be balanced by local organization- or community-driven evaluation?

You can’t make a pig fat by weighing it

Last week I swapped islands for 48 hours, traveling from the Dominican Republic to Jamaica, where I had the distinct pleasure of participating in a consultation for the third phase of the Learning Metrics Task Force. The LMTF—co-convened by the Center for Universal Education at Brookings and the UNESCO Institute for Statistics—aims to change the global conversation on education to focus on access plus learning, with an eye towards influencing the post-2015 development agenda. You can stay posted on recommendations from each stage of the consultation process here. The consultation at the Inter-American Development Bank Kingston office convened high-level education officials from various Caribbean states; I attended as a member of the Implementation Working Group.

[Image: LMTF consultation participants in Kingston, Jamaica. Photo courtesy of Dr. Winsome Gordon.]

The conversation was lively and focused mostly on the discussion guide previously prepared by the Working Group, which meant that much of our time was spent understanding the ways in which learning is measured at various levels in different countries. Of course, as with any discussion among experienced education professionals, the tension between policy (measurement and evaluation of learning) and practice (teaching and learning processes) was highlighted often. Perhaps this tension was best summarized by one of our colleagues (in her grandmother’s words): “You can’t make a pig fat by weighing it.” Indeed, education planning, measurement, and assessment are more often governed by the mantra “what gets measured gets done.” Such a statement privileges measurement as an end in itself, implying that actions to produce that measurement will automatically follow. This idea says nothing, however, about the quality of those actions and the validity or reliability of the measurement. Are we working towards the right goals? Are we measuring what we think we are measuring? Are our actions leading to the desired outcomes? We are all too familiar with the perverse incentives that distort actions and results because of the pressure attached to measurement!

The more compelling idea (a pig isn’t fat because you weighed it!) reminds us that the sheer act of measurement does not guarantee that the processes needed to achieve the desired outcomes are in place. Identifying desired learning outcomes and selecting appropriate measures to assess that learning is the right place to start (just ask any highly qualified teacher who is lesson planning!). We must know what students should be learning and how we will know they are learning before we design the activities that foster desired outcomes. This is what the LMTF is currently doing through the three-phase consultation. The true litmus test for the project will be its ability to connect policy with practice, to build capacity for the learning competencies to be achieved through high-quality learning experiences. As was mentioned in the Caribbean consultation, the project’s theory of change must include this crucial link to ensure that a conversation on access plus learning is more than a conversation.

The project has significant policy weight behind it, and hundreds of policymakers and practitioners have actively participated in consultations. There is currently substantial international attention on issues of teacher quality and the teaching profession. The LMTF can capitalize on this movement—perhaps through strategic partnerships or through the proposed advisory board—so that the teaching and learning process remains front and center in the conversation. The project thus far has done an excellent job of facilitating dialogue around the basic framework to ensure that all children are learning. Desired learning outcomes are being identified, and measurement methods to assess achievement of those outcomes are being defined. The greatest challenge will come as we “fill in the activities” and support countries not only in their ability to measure learning (which is important), but also in their ability to “do well” on valid and reliable measures thanks to well-prepared teachers, high-quality teaching, and safe, productive learning environments. It’s the teaching and learning process that counts.

I would love to hear from others who have participated in previous consultations. What were your key takeaways? What are the next steps beyond the third phase?

Working in Someone Else’s Country

In the United States, May is graduation season, and with that season comes an influx of both young and seasoned professionals entering and/or re-entering the workforce through full-time positions and short-term consultancies. May also marks the beginning of summer, when hundreds of students head “into the field” to complete research or internships in international settings. For those in international development, I have just one recommendation before you make the transition: Read this book.

[Image: book cover of How to Work in Someone Else’s Country. Photo courtesy of Amazon.]

How to Work in Someone Else’s Country by Ruth Stark (reviewed here by Jennifer Lentfer at How Matters) provides practical advice for avoiding some of international development’s greatest failures, as propagated by poorly prepared (and poorly behaved!) international workers. Think you know all there is to know about “getting development right” and working in partnership with local communities? Think again! Even the seasoned aid worker will pull out some hidden gems to act on, all while nodding in agreement at some of the cringe-worthy anecdotes of consultants gone wrong. The book is especially important for evaluators and M&E specialists, as most of our work tends to be shorter-term and—let’s face it—is particularly susceptible to negative perceptions at the community or local level.

Some key takeaways from my favorite chapters:

  • Relationship is everything…and everyone is related (the author struck gold with this first chapter title!). Time constraints can make the process of developing relationships take a back seat, but early investment here doesn’t just pay off in the long run: it’s the right thing to do. And everyone that you encounter matters, especially when you are a guest. Besides, you never know who is related to whom. And don’t gossip about local colleagues to other local colleagues. It’s not just bad form; loose lips sink ships!
  • Figure out your job and who you’re working for. The official job description or ToR is only one piece of the puzzle (and sometimes the most puzzling part!). Take time to figure out your antecedents (the who, what, when, where, why) and what they mean for your work. Be especially on the lookout for political history that will guide you in what to do and what not to do. Because you’ll have many stakeholders with competing demands, it’s key to find out who the most important client is and prioritize their priorities. But most importantly, never forget the client who is not at the table to begin with.
  • What to do if you get there and nobody wants you. It’s not just about taking up scarce time, space, and resources. The truth of the matter is that your presence might be a perceived and/or real threat on the ground. Understand and even embrace that reality. Meeting resistance with resistance (or, worse, imposition!) never ends well. Prepare upfront by finding out the background context of how your job came to be, but also be prepared to just shut up and listen!
  • How to make them glad that you are there. Let who you are, not your credentials, define you. This can be especially hard for recent graduates. My favorite piece of advice from the whole book: Don’t give the answer until you know the question. This advice makes a great mantra for an international development professional. I would also add that your answer, when given, is never THE answer. The quickest way to lose support is by pushing your own agenda, rather than understanding and supporting someone else’s.
  • Working with your local counterparts. The most important relationship of all. Sustainability, as a buzzword, has lost all meaning. But it’s all about ensuring that work can continue over the long term, and this means building up and investing in the careers and professional development of local counterparts. All too often international consultants are too busy “building up” their own careers to recognize this tragic flaw. Ironically, an inability to do this could certainly lead to your own demise or, at the very least, to a steep decline in your reputation as someone others want to work with. Local counterparts should always, always, always participate in planning and decision-making, accompany (and direct) visits with key leaders and officials, take leadership in presentation design and delivery, receive recognition in reports and publications, etc. There is never such a thing as giving too much credit, unless it’s to yourself!
  • Working with governments. Stop criticizing and start collaborating, respect official channels and processes, and don’t argue with senior government officials. Give respect where respect is due. As the author reminds us, “never forget that you are a guest of the host country and work there only at the government’s pleasure.” I’d like to personally recommend this chapter to Madonna (she’s not exactly friends with the government in Malawi—and they’ve got good reason to be irritated).
  • Making a difference. Never stop asking yourself if your presence is making a difference for better or for worse. It’s not just about the project goals and metrics. These are meant to serve people. And people respond best to other people—caring and adaptive humans with soft skills, not unresponsive robots armed with pre-programmed tools and commands.

And, since the illustrative anecdotes about “bad behavior in the field” were one of my favorite parts of the book, I’m asking readers to contribute their own examples of “international consultants/employees gone wrong” in the comments section. There’s nothing better than learning (or unlearning) by example!

The threat of convergent thinking in M&E for international development

Like many in evaluation, I consider myself to be a lifelong learner; I thrive on learning, relearning, and unlearning. Knowledge isn’t static, and most of what we “know” can withstand healthy debate. I’m intrigued by the diversity in knowledge paradigms among evaluation practitioners. In fact, I’d love to see a study that analyzes how evaluators come to align themselves with particular paradigms and approaches (any takers?). Despite this diversity, I’m quite startled by the pervasiveness of convergent thinking in the field. This phenomenon affects M&E for international development in a particularly strong way.

Thanks to Twitter, I recently discovered Enrique Mendizabal’s article on how labels, frameworks, and tools might be stopping us from thinking (several months old by now, but worth the time. Do trust me on this one). One of his points is that tools and frameworks emphasize process to the point that space for thinking is eliminated. The proliferation of such tools creates an illusion of knowledge and expertise (punctuated by jargon), which means that few people question the process and/or the product.

In many ways, M&E has become about compliance. In my opinion, this is largely both the result and the cause of convergent thinking. Efforts to prove impact must be “rigorous” and “based on evidence,” which usually implies the use of research-based tools. I’ll be the first to say that such tools can be and are, in many cases, highly effective. But development practitioners and policymakers talk out of both sides of their mouths. They tout innovation while actually encouraging and rewarding convergent thinking. The accepted M&E tools and frameworks are largely created in the Western world using Western paradigms. M&E “capacity building” is often code for “M&E compliance”: training a critical mass of specialists to run, on autopilot, processes and principles that are supposed to encourage learning but often teach little more than how to say and do the “right thing” at the “right time” to prove results. The presentation of rigorous tools discourages skeptics in our audiences, who all too often feel gently—or not so gently—pressured to accept and implement such tools without critical review and healthy skepticism. Real innovation requires divergent thinking. Do we need rigor and evidence and research? A resounding Y-E-S. Do we need professionals with considerable expertise and experience to help guide M&E efforts? Without a doubt. But it is the M&E specialist’s job to integrate good practice with new ways of thinking.

I recently finished the book Creative People Must Be Stopped by David A. Owens, professor at Vanderbilt University. Owens argues that innovation constraints occur at many levels: individual, group, organization, industry, society, and technology. The convergent thinking that affects our ability to truly innovate (solve problems and measure impact in new and better ways) comes into play at each of these levels. In my own practice, I’m trying to take more responsibility at the individual level. This not-so-easy task includes addressing three core components of creativity identified by Owens: perception, intellection (or thinking), and expression. I find myself—and the M&E field writ large—to be most susceptible to intellection constraints.

The first step is to eliminate stereotypes and patterns that prevent potentially relevant data from entering the problem-solving process. M&E specialists become accustomed to defining the same problems in the same ways. But what if that definition is wrong? I’m not talking about small errors in problem definitions that can be corrected through collaborative inquiry. I’m talking about widely accepted “evidence-based” definitions of problems (and their associated implementation and measurement practices) that have become almost akin to common knowledge in the field. We must challenge the definition as well as the solution to development problems. Unfortunately, our common problem definitions lead to common indicators and common data collection methods, which in turn lead to common solutions—across projects, programs, countries, cultures, and contexts. In this sense, M&E has the potential to do more harm than good.

Monitoring and evaluation plans have become a staple of international projects, and the weight given to M&E plans in project proposals is increasing. It’s important that we—as a professional community of practice—serve as our own biggest skeptics, continue thinking critically, and avoid falling prey to “evaluation for compliance” pressures. With that being said, I’d love to hear your thoughts. How can this be done? What are some examples of M&E compliance gone wrong? How have you succeeded in using M&E processes as a springboard for learning and innovation?

Protecting Human Rights While Building Trusting Relationships

Evaluation work around social issues is complex. Emerging research on systems thinking and complexity theory explains this; our experience confirms it. This complexity is amplified in situations where human rights are systematically violated. I’ve recently spent some time managing field projects related to documentation in the Dominican Republic, where native-born Dominicans of Haitian descent are often denied their legal right to birth registration and, since 2007, have had their previously issued identity documents revoked by the government. There are many local, national, and international groups currently lobbying the government, implementing programs, and conducting research on the issue. It’s a hot topic attracting significant internal and external attention, which raises the question: How can stakeholders learn more about the issue while protecting those who are affected?

Researchers and evaluators of programs in such contexts are ethically bound to protect the rights of participants, particularly when it comes to confidentiality and consent. IRB protocol is critical, but even the most painstaking attempts to honor its principles can strip the process of its human element (I have a particular aversion to the idea of protecting human “subjects”)! That’s why I’m advocating for greater consideration of how to build trusting relationships with participants, not only to protect their rights but also to honor their dignity and personal histories.

Below I describe some considerations for researchers and/or evaluators who engage in projects related to sensitive issues in complex environments. I strongly believe these considerations should be taken into account at every level, from highly technical external evaluations to grassroots research and program development.

Location, location, location: Let participants choose where they feel most comfortable being interviewed. Some may feel more comfortable in the privacy of their own home while surrounded by family. Others may not feel safe providing information on where they live and would prefer a perceived neutral location in the community, such as a local church.

The company you keep: A local, trusted community member should accompany the researcher to assist in explaining unclear information to the participant, translating where necessary, and generally creating a safe and welcoming environment. Even better if that person is trained to actually conduct the research! Be sure that interviews are private and not overheard by others, unless the participant requests to be accompanied by a friend, family member, etc.

The right to say no: Participants should never feel forced to participate. If the researcher/evaluator is technically an outsider, they may miss important cues signifying that the individual is hesitant to participate. Understand how power differentials may interfere with an individual’s perceived ability to say no, and work to mitigate them. Be able to judge verbal and non-verbal cues throughout the entire data collection process and be sure to remind participants that they can choose not to answer a question or decline to continue at any moment.

The right to know: Participants should be informed about how any information collected will be used. Academic research may not be a familiar concept, and there may be (understandable!) suspicion or concern that information will get into the wrong hands and be used against them. Explain why notes are being taken, who will have access to information (both data and results), etc. Give participants time to reflect on informed consent forms and ask questions. Be sure to have documents in multiple languages if the participant is not fluent in the region’s predominant language. Have options for non-literate individuals. Err on the side of over-explaining and providing “too much” information, even if it takes more time. Relationships can be damaged and trust broken within minutes. Ask the participant to repeat back what they are agreeing to in order to ensure full consent and comprehension.

What’s in a name: Only collect personally identifiable information (PII) if it is absolutely necessary. Don’t forget that voice recordings are also a form of PII! Participants will want to be assured that their responses cannot be traced back to them. If PII is collected, it should not appear on any materials that could be misplaced or seen by others (survey forms, assessments, etc.). Instead, use a separate coding system that is linked to participants through secure, internal, restricted-access documents. Consider using pseudonyms for case studies or quotes, but don’t forget that participants might want ownership of their stories. They should have the opportunity to choose whether their identity is used in narratives that describe personal histories and experiences.
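
For teams that keep their data in digital form, here is a minimal sketch of what that separation can look like in practice: random codes travel with the survey responses, while the linking file that maps codes back to people lives in restricted-access storage. This is my own illustration rather than a prescribed protocol, and the file names and fields (name, village, q1, q2) are hypothetical.

```python
import csv
import secrets

def assign_code(existing_codes):
    """Generate a random participant code that is not already in use."""
    while True:
        code = f"P-{secrets.token_hex(4)}"  # e.g. "P-9f2c1a7b"; reveals nothing about the person
        if code not in existing_codes:
            return code

def split_pii(raw_rows, pii_fields):
    """Split raw interview rows into (1) survey rows keyed only by code and
    (2) a separate linking file holding the PII. Store the linking file on
    restricted-access, encrypted storage; never alongside the survey data."""
    link_rows, survey_rows, used = [], [], set()
    for row in raw_rows:
        code = assign_code(used)
        used.add(code)
        link_rows.append({"code": code, **{f: row[f] for f in pii_fields}})
        survey_rows.append({"code": code, **{k: v for k, v in row.items() if k not in pii_fields}})
    return link_rows, survey_rows

# Hypothetical usage: 'name' and 'village' are the identifying fields.
raw = [{"name": "Example Person", "village": "Example Village", "q1": "yes", "q2": 4}]
links, survey = split_pii(raw, pii_fields=["name", "village"])

with open("linking_file_RESTRICTED.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=links[0].keys())
    writer.writeheader()
    writer.writerows(links)

with open("survey_responses.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=survey[0].keys())
    writer.writeheader()
    writer.writerows(survey)
```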

Be creative: There are many interesting and creative ways to maintain confidentiality and/or anonymity in situations where face-to-face conversations may not be feasible or may not produce honest responses. Implement a creative response system (colored cards, dice, etc.) that gives participants a sense of privacy and increased confidence in answering questions. Consider using a dividing screen or private room for submitting responses, as appropriate, to enhance feelings of security and anonymity.

Be human: Open up the session with conversation instead of rigidly following a script or jumping to the informed consent form. It can be considered rude to “get down to business” immediately, and the participant is much less likely to feel comfortable or appreciated for their time and the personal risk they might be taking! Check in frequently with the participant throughout the interview, continuously gauge their comfort level, and make adjustments as necessary. Be open to diverging from protocol if necessary. Letting the conversation take its course is critical when dealing with sensitive topics. Be sure to collect the information you need, but don’t sacrifice the personal connection.

As with any research project or evaluation, the protocol depends on context. What similar challenges have you encountered in the field and how did you overcome them? What advice would you give to others working on sensitive issues in complex environments?

Update: Some resources on human-rights based approaches to M&E. Please add more in the comments section if you know of a great resource!

Selected Resources on Human Rights-Based Monitoring & Evaluation (compiled by GIZ)

Integrating Human Rights and Gender Equality in Evaluation (UNEG)

Rethinking Evaluation and Assessment in Human Rights Work (ICHRP)

Collection of Resources for Evaluating Human Rights Education (UMN Human Rights Library)

Guide to Evaluating Human Rights-Based Interventions in Health and Social Care (HRSJ)

Human Rights-Based Approach to Monitoring and Evaluation (HRBA Toolkit)

Participatory Development Pitfalls Translate to Evaluation

Lately I’ve been thinking a lot about the challenges of balancing learning and accountability in evaluations. While the two are not mutually exclusive, evaluation for accountability often faces specific boundaries that prevent an evaluator from exercising their full range of professional expertise in designing and executing the evaluation. Such limits can result in certain types of lessons learned being valued over others. Donors often commission specific types of evaluations based on organizational policy or perceived “best practices.” The need to work within such a framework often dictates the type of learning that will result.

In many cases, preferred methods, resources, short timeframes, local conditions, and other constraints eliminate opportunities to conduct participatory evaluations. Yet participatory evaluations—done well—can produce results for development that go well beyond learning and accountability. Anna Colom’s recent article in The Guardian, “How to avoid pitfalls of participatory development” highlights an interesting question of which evaluators should take note: “Is it even possible to run participatory projects in the current context of international development, still very much western-led and tied to logframes, donors and organisational agendas and structures?”

Project design and implementation aren’t the only things to blame for complicating participatory projects. Monitoring and evaluation practices (and values) play a tremendous role in discouraging (or encouraging) real participation.

Colom identifies common pitfalls in participatory development (her categories, my summaries) that apply equally to participatory evaluation:

  • Define participation and ownership: When and to what extent participation will be encouraged (and why) + who owns the project and its component parts
  • Understand the context and its nuances: Power relations within communities + power relations between communities and other actors
  • Define the community: Target community composition, including any sub-groups that define it
  • Facilitators must know when to lead and when to pull back: Balancing “external” facilitation with group leadership and ownership
  • Decide what will happen when you go: Sustainability!

To her list, I would add two more common pitfalls that are critical to address in participatory evaluations:

  • Define what counts as credible evidence: Communities should have a very real voice in determining what types of evidence are credible for agreed purposes, as well as how that evidence should be collected. Facilitator and community member opinions may often come into conflict at this stage due to varied beliefs about what constitutes credibility. The facilitator as “critical friend” can provide guidance based on knowledge and experience, but should listen carefully to community needs so that collected evidence is valued and validated by the community.
  • Decide how results will be used and communicated: Community members should be engaged in answering the following questions: What types of results will be communicated? For what purposes? For what audiences? How will results be used? Community members should also be engaged in helping prepare results to be communicated in ways that will be accepted and appreciated by various stakeholders. Particular attention should be paid to social, cultural and linguistic relevance, with an emphasis on inclusion. Communities should agree to how their work (and community!) will be represented to a wider audience. Participation in the communication and utilization stages is critical for the sustainability of projects, as it reinforces community ownership and builds capacity for future implementation and evaluation work.

What other common pitfalls in participatory development and/or evaluation need to be addressed?

What Graduate Students Should Unlearn Before Becoming Evaluators

Calling all new and emerging evaluators—this post is for you! Graduate school can change the way you look at the world, but is that change for better or for worse? The answer, of course, is that it depends. But some skills typically learned in graduate school can actually hinder the ability to properly conduct evaluations.

In the beginning, learning to write literature reviews and design research studies is challenging because it requires one to exercise very careful logic to reach conclusions. Each thought must be justified by research-based evidence, each term must be painstakingly defined, each theoretical framework must be eloquently outlined, and each system or process must be elaborately illustrated. Much time is spent honing not only critical thinking skills, but also the ability to use (and show!) logic. Linking A to B, completing Step 1 before Step 2, proving X leads to Y—the brain becomes accustomed to thinking in a linear manner. Causation, causation, causation! After hours of reading and writing, of rereading and rewriting, it is no surprise that these habits can be hard to break. But we desperately need to unlearn some of those habits we paid so dearly to obtain.

Let me explain. I don’t have the data to prove it, but I wager that a quick survey of graduate students in academic disciplines—research-based rather than practitioner-focused programs—would reveal that more time is spent working alone than in groups. Solitary work is quite conducive to linear thought, at least in the academic sense (keeping focused on the work at hand is another story!). In such circumstances, one has the luxury of designing elaborate models that control for any and every conceivable variable that could affect the ability of the research to yield statistically significant results. And this type of research is incredibly effective for the purposes it intends to serve. In fact, many organizations engage in research, and these skills are critical to pursuing that type of work. But too many students latch onto the idea of evaluation as a career path and, fresh from their studies, attempt to use the research paradigm they’ve recently internalized in order to answer complex questions in a messy world that requires a different approach. As a recent graduate student, I know how hard these habits can be to break and how much practice it takes to effectively determine when to use research versus evaluation. This post aims to help you figure it out faster!

So what exactly does evaluation do differently than research? John LaVelle created a great visual to show just that:

[Image: John LaVelle’s Eval and Research Hourglass]

Still not convinced? Check out this excellent article by E. Jane Davidson on unlearning social scientist habits—required reading for recent or soon-to-be graduates looking to break into evaluation!

If one of the first things you are learning as a new evaluator is how to design a logical framework or logic model, be sure you review these resources (and more!) very carefully. I use logic models regularly and find them useful in many ways, but they can be ineffective and downright dangerous when paired with an “objective” research lens. Start unlearning those less-than-helpful-for-evaluation research habits now, and you’ll become a stronger and more seasoned evaluator.

What other habits learned in graduate school need to be unlearned or adapted upon entering the messy world of “work”? Share your tips!