
Who’s Who in M&E? A Stakeholder Analysis

It’s been a few months since my last guest post, so today I’d like to introduce you to one of the savviest data divas I know, Laura Budzyna (see full bio below). Laura and I first met in graduate school, where we led a joint research trip to the Dominican Republic, working with my current employer, The DREAM Project, for which she had previously volunteered. Laura is especially perceptive when it comes to both the technical and interpersonal aspects of M&E, which makes her an asset to any team. As she prepared for a recent consulting gig, she put together a stakeholder analysis to help her manage the cast of characters often found on the M&E stage.

Here’s her story!

“Oh, so you’re like an auditor.”
“What? No!”
“Well, to them you are.”

I stopped mid-sip of my smoothie. I had just explained to a friend of mine that I was going to be visiting a few microfinance institutions in South America and taking a look at their monitoring and evaluation systems. My plan was to hang out with their IT teams and their loan officers, suggest ways to make their collection and input processes more efficient, and offer a few data analysis tips to boot. But an auditor?

“Sure. They probably feel threatened. They’re certainly not anxious to set a date for your arrival.”

Huh. He was right – we had gone back and forth about the date of my visit for weeks now. Well, now, let’s think about this a second. Sure, the MFIs didn’t actually invite me to come – an investor of theirs was paying for my ticket and my time. An investor who…hmm…funds MFIs based on their social impact track record. And now, here I was, a foreigner sent by an investor, asking to rifle around their databases. I suddenly saw it from their perspective: who knows what I might find?

If I found inefficiencies? Okay, those would be helpful enough to point out, though a little embarrassing for the organization to share with their funder. If I found inaccuracies? Even more awkward. And if I found that their clients’ wellbeing was not improving? Well, I might as well just pull the plug on their funding altogether.

I was at best a disruption, and at worst a threat. I hadn’t quite thought this through.

Monitoring and evaluation, like any other specialty, is criss-crossed with complex relationships. Who are the stakeholders? Who are the influencers, and what are their agendas? Who is the information for, really? And perhaps more importantly…who should it be for? To answer these questions, many development practitioners use a tool called stakeholder analysis. This technique is featured in the handbooks of many of the heavyweights in the industry (see: DFID, UNDP, EuropeAid and CARE). The name rings tinny with jargon, I know. But if you can get past the dry stakeholder matrices prescribed by DFID and their ilk (you know, the ones languishing in the appendices of too many UN reports), the actual thought process behind stakeholder analysis is extremely helpful. It’s one of the first tools that we teach to the Master’s in Development Practice students at SIPA, and in solidarity with them, I’m going to take a stab at one of my own.

I’ll begin with the caveat that my analysis is not specific to any one organization, and therefore it makes a few generalizations. It’s also a tad tongue-in-cheek: the summaries read a little like trading cards or Myers-Briggs profiles (if I had artistic leanings, I would have happily adorned each with an avatar, too). Still, I think they offer a useful starting point for an outsider trying to understand the daily drama of the M&E world. So, without further ado…

Who’s Who in M&E?

Most M&E systems have a fairly established cast of characters. Within the organization, it’s the management, the IT/data or admin team, and the client-facing staff that make M&E (especially M) happen on a daily basis. Outside of the organization, there are the funders who demand data and the consultants and academics who vet it. And then of course, there are the clients themselves who provide this valuable data. Let’s take a closer look:

Funders: These are donors and investors who, like the rest of us, have gotten pretty excited about the word “impact” and now want ever more convincing evidence of it in exchange for their dough. As evaluations have gotten fancier over the years, so have their demands. In fairness to them, many also provide technical assistance (see: Evaluation Consultants) and other resources to help their grantees meet these demands.

  • Influence: High
  • Motivation: To see that their money is well spent. And since many funding organizations have their own funders and investors, they want to be able to prove their own impact.
  • Accountable to: Their own funders, industry peers, policymakers
  • Use results: To make funding decisions, inform policymakers and other thought leaders, set agendas

Managers: These leaders are constantly balancing the need to produce impact for their clients with the need to prove it to their funders. Some are excited about measuring impact for the organization’s own sake; others are only doing it reluctantly at the behest of their partners. To tell the difference, it helps to find out how the current M&E system kicked off: did the managers initiate and champion the effort, or was it an external request?

  • Influence: High
  • Motivation: To keep the ship afloat. And that means keeping investors happy while trying to focus on their core mission…and keeping the minions (i.e. me) at bay.
  • Accountable to: In mission, clients. In practice, funders.
  • Use results: To plan strategically, to change course, to report to funders, to benchmark against other similar or competing organizations

Evaluation Consultants: These folks are paid to poke around other people’s kitchens. (Okay, they also design instruments, implement studies and analyze data.)

  • Influence: Medium
  • Motivation: To get tons of information in a (usually) short amount of time, so they can make effective recommendations to their client.
  • Accountable to: Whoever is paying them – could be the investor/funder, another consulting company, a research initiative, or the management. (If the management isn’t paying them, managers don’t have any control over how the findings are shared – another scary thought.)
  • Use results: To produce shiny reports for someone else to use, to add to their own knowledge base so they can carry it along to the next project.

Academic Researchers: In the same category as evaluation consultants, these outsiders bring scientific rigor to the evaluation effort. While there is more and more talk of successful partnerships between academic institutions and NGOs, the scrupulous academics and the pragmatic practitioners don’t always see eye to eye on how M&E should be carried out.

  • Influence: Medium
  • Motivation: Rigor and recognition, usually in the form of a peer-reviewed publication.
  • Accountable to: The academic community. And whoever’s on their tenure committee.
  • Use results: To publish, and to add new knowledge to their field

The IT/Data/Admin Team: From the database managers to the data entry wonks, this is actually where the magic happens. In many organizations, these employees are revered, as they’re the wardens and curators of the organizations’ data. Usually, they’re the only ones who can (or know how to) access the raw data, while management can only pull standardized reports. Depending on leadership’s enthusiasm for the M&E effort, this may or may not afford the IT team a good deal of influence in the organization.

  • Influence: Low-to-medium
  • Motivation: To do well at – and be recognized for – their challenging and specialized job
  • Accountable to: Management
  • Use results: To assess data validity

The Data Collectors: Your results are only as good as the data you gather. Without these worker bees, the whole system would collapse. Often, data collectors are client-facing workers who take on data collection on top of an already towering task list (think community health workers, microfinance loan officers, etc.). Others work for local research companies and are contracted for short-term data collection efforts, like an annual survey or an external evaluation. They spend time with the clients, they speak the language, and they populate our spreadsheets with hours and hours of conversations.

  • Influence: Low (Pro tip: if you can engage your data collectors during the survey design process and pilot, they will be your most helpful critics.)
  • Motivation: If they’re exclusively a data collector: per-survey compensation. If they’re an employee with 27 tasks to complete other than data collection: to finish as quickly as possible. The data collectors’ time is one of the top considerations when designing surveys or deciding which indicators to measure on a regular basis.
  • Accountable to: Supervisors
  • Use results: Indirectly. They may hear the highlights, but they don’t usually interact with results firsthand unless they give rise to major operational changes. Still, many of the workers who interact with clients regularly are very excited to learn about their progress, and it’s my opinion that they should be among the first to see the results.

The Client/Beneficiary: Oh yeah, remember them?

  • Influence: Low
  • Motivation: To improve their quality of life. And to get finished with this damn survey. (In fact, clients in a position to choose between different organizations may choose the one with fewer bothersome questionnaires.)
  • Accountable to: Their own families, their own jobs, and certainly not this data collector.
  • Use results: When was the last time you showed an impact report to the client? Imagine the power shift if you did! Clients holding organizations accountable for producing results? The implications are tremendous, and worthy of another blog post altogether.
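
If you’d rather see this cast as data than as prose, here’s a minimal sketch of the same matrix as a toy Python table. The field names, influence labels, and the quick questions at the end are just my own illustrative shorthand, not a template from DFID, UNDP, or anyone else.

    # A toy stakeholder matrix mirroring the profiles above.
    # Fields and labels are illustrative shorthand, not an official template.
    stakeholders = [
        {"name": "Funders",                "influence": "high",       "accountable_to": ["their own funders", "industry peers", "policymakers"]},
        {"name": "Managers",               "influence": "high",       "accountable_to": ["clients (in mission)", "funders (in practice)"]},
        {"name": "Evaluation consultants", "influence": "medium",     "accountable_to": ["whoever is paying them"]},
        {"name": "Academic researchers",   "influence": "medium",     "accountable_to": ["the academic community", "tenure committees"]},
        {"name": "IT/data/admin team",     "influence": "low-medium", "accountable_to": ["management"]},
        {"name": "Data collectors",        "influence": "low",        "accountable_to": ["supervisors"]},
        {"name": "Clients/beneficiaries",  "influence": "low",        "accountable_to": ["their own families and jobs"]},
    ]

    # Two questions worth asking of any matrix like this one:
    # who holds the most sway, and who (if anyone) answers to the evaluator?
    high_influence = [s["name"] for s in stakeholders if s["influence"] == "high"]
    print("High-influence stakeholders:", ", ".join(high_influence))

    answers_to_evaluator = [
        s["name"] for s in stakeholders
        if "evaluator" in " ".join(s["accountable_to"]).lower()
    ]
    print("Accountable to the evaluator:", ", ".join(answers_to_evaluator) or "no one")

Nothing fancy, but writing the profiles down this way makes it easy to interrogate the whole cast at once.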

Looking at these profiles, my own challenges with this MFI project are beginning to make a lot more sense. No one is actually accountable to me, the evaluation consultant, except for the fact that I’m linked to a funder. Still, my recommendations have influence and may well change their jobs substantially. And if I don’t understand how my suggestions will affect all the stakeholders, then my whole project will fall flat.

It’s only the first piece of the puzzle – I’ve got a few weeks of meetings ahead to understand the full picture. But at the very least, I’m glad I did this before tomorrow morning, when I’ll walk into my meetings and (knock on wood!) convince these organizations that I’m on their side.

I’d love to hear whether any of this holds up against your experiences. Where do you fit into the mix? What are your motivations, whom are you accountable to, and how does this affect how you interact with the other stakeholders?

Author Bio:

Laura Budzyna is an independent development consultant specializing in data analysis, monitoring and evaluation, and capacity building for sustainable development. Laura’s most recent work with EA Consultants has focused on access to finance for low-income populations, which has led her to participate in M&E efforts across Africa, Latin America and the USA. She has contributed to research initiatives for such clients as the Inter-American Development Bank, the Corporation for Enterprise Development, the Gates Foundation, the City of New York and the Clinton Health Access Initiative. After graduating with an MPA in Development Practice from Columbia’s School of International and Public Affairs, Laura led the design and pilot of the DP Lab, a four-semester workshop series on program design and evaluation. She remains an associate instructor for the course. Before SIPA, she directed summer programming and taught literacy to at-risk youth at the DREAM Project in the Dominican Republic. She holds a BA in Latin American Studies from Middlebury College.

Are enabling environments in evaluation actually disabling?

This question has been lingering in the weeks since AEA’s Evaluation 2013 Conference. First, let me just say that my debut experience at a national AEA conference was spectacular. I’ve been to my fair share of conferences over the years, and AEA outshone them all. The variety of sessions ensured that there truly was something for everybody (a difficult feat with 3,000+ attendees, all at different stages in their evaluation careers!). There was an impressive sense of community, and attendees were engaged nonstop in exchanging ideas and learning from one another.

I spent a lot of my time focusing on sessions related to evaluation capacity building and development. As an internal evaluator / M&E officer, I help program coordinators feel comfortable with M&E concepts, adopt them into their work, and advocate for their use over the long term. My job is not to “do all the M&E,” but rather to make sure that M&E “becomes a way of doing.” Evaluation capacity building and development provides great frameworks for achieving these goals within the organization. It also helps me think about how I can contribute to strengthening the practice of evaluation on a larger scale.

I started off the week with Michele Tarsilla’s full-day workshop on Evaluation Capacity Development 101. His workshop emphasized capacity as both a latent attribute and a potential state, something that already exists within individuals, organizations, and institutions, but which can be developed over time. This process requires an understanding of the level of capacity that already exists. As any qualified educator would confirm, it’s essential to recognize that learners already possess significant background knowledge. The job of the educator is to activate and build off that knowledge, which in combination with new information leads to increased levels of knowledge, skills, abilities, dispositions, etc. Tarsilla’s distinction between capacity building (short-term, activity-focused) and capacity development (long-term, process-focused) takes this concept of prior background knowledge and existing capacity into account. Although some might see it as simple semantics, language does matter, not only in how we frame conversations but also in how we execute our work.

Several other evaluation capacity building and development sessions I attended (including one from the World Bank’s CLEAR Initiative) emphasized the importance of creating an enabling environment for evaluation. This terminology is common in the field but can be a bit opaque at first glance. To put it simply, an enabling environment is a context that facilitates the practice of evaluation (it enables effective evaluation to take place). Many factors make up such an environment: demand from policymakers for the production of evidence, funding available for evaluations, widespread interest in and use of evaluation findings, the existence of evaluation policies, and so on. EvalPartners is doing great work globally to create more enabling environments for evaluation. They’ve successfully declared 2015 the International Year of Evaluation (EvalYear) as part of this effort!

Enabling environments were on my mind when I participated in a new AEA session type, Birds of a Feather Roundtable Sessions. These lunchtime gatherings brought together diverse participants to talk about a shared area of interest. I grabbed takeout at the local deli and joined a session on international development, where we chatted about a variety of issues related to evaluation in the sector. One of the questions posed to the group dealt with the introduction of new methods that could effectively address issues of complexity. (Here are some great posts that can catch you up on that debate: Complexity and the Future of Aid and Complexity 101 Part 1 and Part 2.)

I immediately responded to this question with some doubt. I think the methods exist (or are being developed). There’s a lot of talk about developmental evaluation, systems thinking in evaluation, etc. It’s not uncharted territory methodologically speaking. But it is, perhaps, uncharted territory politically speaking. In other words, we are having a conversation about methods when we really need to be talking about the enabling environment. Does the evaluation environment in international development allow evaluators and practitioners to use these methodologies in their evaluation work? Or is the enabling environment actually disabling? Are evaluation practices in international development confined to certain types of evaluations? Yes, we need to continually innovate with methods. But those methods will never get used if the environment doesn’t allow for it.

I strongly believe that evaluators should not only engage actively in dialogue with other evaluators about the state of evaluation, but also advocate for enabling environments that allow for the nuanced and specialized work that evaluation requires. We should be active contributors both in shaping the field itself and in shaping how others engage with it. It’s certainly possible (and, perhaps, common) to create an evaluation-friendly environment for certain types of evaluations but not for others. Efforts to strengthen enabling environments should maintain a holistic perspective that allows for multiple types of evaluations, a diversity of perspectives, and innovations in the field. My hope is that, as more and more countries develop rich evaluation practices, the unique perspectives that each country brings to the table will enrich the conversation on evaluation itself. The worst thing would be to limit the conversation by fostering evaluation policies and practices that operate under a narrow definition of evaluation. Let’s allow diversity in background knowledge to create diversity in outcomes for the field. The more minds we have thinking differently about evaluation, the more innovations will be introduced. We just need to make sure that enabling environments are built flexibly enough to accept and foster those innovations.

What unique contributions have you seen to the field of evaluation? Have you seen enabling environments actually “disable” the work of evaluation? How so?

The threat of convergent thinking in M&E for international development

Like many in evaluation, I consider myself a lifelong learner; I thrive on learning, relearning, and unlearning. Knowledge isn’t static, and most of what we “know” can withstand healthy debate. I’m intrigued by the diversity in knowledge paradigms among evaluation practitioners. In fact, I’d love to see a study that analyzes how evaluators come to align themselves with particular paradigms and approaches (any takers?). Despite this diversity, I’m quite startled by the pervasiveness of convergent thinking in the field. This phenomenon affects M&E for international development particularly strongly.

Thanks to Twitter, I recently discovered Enrique Mendizabal’s article on how labels, frameworks, and tools might be stopping us from thinking (several months old by now, but worth the time. Do trust me on this one). One of his points is that tools and frameworks emphasize process to the point that the space for thinking is eliminated. The proliferation of such tools creates an illusion of knowledge and expertise (punctuated by jargon), so that few people question the process and/or the product.

In many ways, M&E has become about compliance. In my opinion, this is largely both a result and a cause of convergent thinking. Efforts to prove impact must be “rigorous” and “based on evidence,” which usually implies the use of research-based tools. I’ll be the first to say that such tools can be, and in many cases are, highly effective. But development practitioners and policymakers talk out of both sides of their mouths. They tout innovation while actually encouraging and rewarding convergent thinking. The accepted M&E tools and frameworks are largely created in the Western world using Western paradigms. M&E “capacity building” is often code for “M&E compliance”: training a critical mass of specialists to run on autopilot processes and principles that are supposed to encourage learning but often teach little more than how to say and do the “right thing” at the “right time” to prove results. The presentation of rigorous tools discourages the skeptics in our audiences, who all too often feel gently—or not so gently—pressured to accept and implement such tools without critical review and healthy skepticism. Real innovation requires divergent thinking. Do we need rigor and evidence and research? A resounding Y-E-S. Do we need professionals with considerable expertise and experience to help guide M&E efforts? Without a doubt. But it is the M&E specialist’s job to integrate good practice with new ways of thinking.

I recently finished the book Creative People Must Be Stopped by David A. Owens, a professor at Vanderbilt University. Owens argues that innovation constraints occur at many levels: individual, group, organization, industry, society, and technology. The convergent thinking that affects our ability to truly innovate (to solve problems and measure impact in new and better ways) comes into play at each of these levels. In my own practice, I’m trying to take more responsibility at the individual level. This not-so-easy task includes addressing three core components of creativity identified by Owens: perception, intellection (or thinking), and expression. I find myself—and the M&E field writ large—to be most susceptible to intellection constraints.

The first step is to eliminate stereotypes and patterns that prevent potentially relevant data from entering the problem-solving process. M&E specialists become accustomed to defining the same problems in the same ways. But what if that definition is wrong? I’m not talking about small errors in problem definitions that can be corrected through collaborative inquiry. I’m talking about widely accepted “evidence-based” definitions of problems (and their associated implementation and measurement practices) that have become almost akin to common knowledge in the field. We must challenge the definitions of development problems as well as their solutions. Unfortunately, our common problem definitions lead to common indicators and common data collection methods, which in turn lead to common solutions—across projects, programs, countries, cultures, and contexts. In this sense, M&E has the potential to do more harm than good.

Monitoring and evaluation plans have become a staple of international projects, and they carry increasing weight in project proposals. It’s important that we—as a professional community of practice—serve as our own biggest skeptics, continue thinking critically, and avoid falling prey to “evaluation for compliance” pressures. With that said, I’d love to hear your thoughts. How can this be done? What are some examples of M&E compliance gone wrong? How have you succeeded in using M&E processes as a springboard for learning and innovation?