
The threat of convergent thinking in M&E for international development

Like many in evaluation, I consider myself a lifelong learner; I thrive on learning, relearning, and unlearning. Knowledge isn’t static, and most of what we “know” can withstand healthy debate. I’m intrigued by the diversity of knowledge paradigms among evaluation practitioners. In fact, I’d love to see a study that analyzes how evaluators come to align themselves with particular paradigms and approaches (any takers?). Despite this diversity, I’m startled by the pervasiveness of convergent thinking in the field. This phenomenon is particularly pronounced in M&E for international development.

Thanks to Twitter, I recently discovered Enrique Mendizabal’s article on how labels, frameworks, and tools might be stopping us from thinking (several months old by now, but worth the time; do trust me on this one). One of his points is that tools and frameworks emphasize process to the point that space for thinking is eliminated. The proliferation of such tools creates an illusion of knowledge and expertise (punctuated by jargon), so few people question the process or the product.

In many ways, M&E has become about compliance. In my opinion, this is both a result and a cause of convergent thinking. Efforts to prove impact must be “rigorous” and “based on evidence,” which usually implies the use of research-based tools. I’ll be the first to say that such tools can be, and in many cases are, highly effective. But development practitioners and policymakers talk out of both sides of their mouths: they tout innovation while actually encouraging and rewarding convergent thinking. The accepted M&E tools and frameworks are largely created in the Western world using Western paradigms. M&E “capacity building” is often code for “M&E compliance”: training a critical mass of specialists to auto-pilot processes and principles that are supposed to encourage learning, but often teach little more than how to say and do the “right thing” at the “right time” to prove results. The presentation of rigorous tools discourages skeptics in our audiences, who all too often feel pressured, gently or not so gently, to accept and implement such tools without critical review and healthy skepticism. Real innovation requires divergent thinking. Do we need rigor and evidence and research? A resounding Y-E-S. Do we need professionals with considerable expertise and experience to help guide M&E efforts? Without a doubt. But it is the M&E specialist’s job to integrate good practice with new ways of thinking.

I recently finished the book Creative People Must Be Stopped by David A. Owens, a professor at Vanderbilt University. Owens argues that innovation constraints occur at many levels: individual, group, organization, industry, society, and technology. The convergent thinking that affects our ability to truly innovate (solve problems and measure impact in new and better ways) comes into play at each of these levels. In my own practice, I’m trying to take more responsibility at the individual level. This not-so-easy task includes addressing the three core components of creativity identified by Owens: perception, intellection (or thinking), and expression. I find myself, and the M&E field writ large, to be most susceptible to intellection constraints.

The first step is to eliminate stereotypes and patterns that prevent potentially relevant data from entering the problem-solving process. M&E specialists become accustomed to defining the same problems in the same ways. But what if that definition is wrong? I’m not talking about small errors in problem definitions that can be corrected through collaborative inquiry. I’m talking about widely accepted “evidence-based” definitions of problems (and their associated implementation and measurement practices) that have become almost akin to common knowledge in the field. We must challenge both the definition of and the solution to development problems. Unfortunately, our common problem definitions lead to common indicators and common data collection methods, which in turn lead to common solutions across projects, programs, countries, cultures, and contexts. In this sense, M&E has the potential to do more harm than good.

Monitoring and evaluation plans have become a staple of international projects, and they carry increasing weight in project proposals. It’s important that we, as a professional community of practice, serve as our own biggest skeptics, continue thinking critically, and avoid falling prey to “evaluation for compliance” pressures. That said, I’d love to hear your thoughts. How can this be done? What are some examples of M&E compliance gone wrong? How have you succeeded in using M&E processes as a springboard for learning and innovation?
