You can’t make a pig fat by weighing it

Last week I swapped islands for 48 hours, traveling from the Dominican Republic to Jamaica, where I had the distinct pleasure of participating in a consultation for the third phase of the Learning Metrics Task Force (LMTF). The LMTF—co-convened by the Center for Universal Education at Brookings and the UNESCO Institute for Statistics—aims to shift the global conversation on education to focus on access plus learning, with an eye towards influencing the post-2015 development agenda. You can stay posted on recommendations from each stage of the consultation process here. The consultation at the Inter-American Development Bank Kingston office convened high-level education officials from various Caribbean states; I attended as a member of the Implementation Working Group.

LMTF consultation participants in Kingston, Jamaica.
Photo courtesy of Dr. Winsome Gordon

The conversation was lively and focused mostly on the discussion guide previously prepared by the Working Group, which meant that much of our time was spent understanding the ways in which learning is measured at various levels in different countries. Of course, as with any discussion among experienced education professionals, the tension between policy (measurement and evaluation of learning) and practice (teaching and learning processes) surfaced often. Perhaps this tension was best summarized by one of our colleagues (in her grandmother’s words): “You can’t make a pig fat by weighing it.” Indeed, education planning, measurement, and assessment are too often governed by the mantra “what gets measured gets done.” Such a statement treats the measurement as an end in itself, implying that actions to produce that measurement will automatically follow. It says nothing, however, about the quality of those actions or the validity and reliability of the measurement. Are we working towards the right goals? Are we measuring what we think we are measuring? Are our actions leading to the desired outcomes? We are all too familiar with the perverse incentives that distort actions and results when pressure is attached to measurement!

The more compelling idea (a pig isn’t fat because you weighed it!) reminds us that the sheer act of measurement does not guarantee that the processes needed to achieve the desired outcomes are in place. Identifying desired learning outcomes and selecting appropriate measures to assess that learning are the right place to start (just ask any highly qualified teacher who is lesson planning!). We must know what students should be learning and how we will know they are learning before we design the activities that foster the desired outcomes. This is what the LMTF is currently doing through the three-phase consultation. The true litmus test for the project will be its ability to connect policy with practice, and to build capacity for the learning competencies to be achieved through high-quality learning experiences. As was mentioned in the Caribbean consultation, the project’s theory of change must include this crucial link to ensure that a conversation on access plus learning is more than a conversation.

The project has significant policy weight behind it, and hundreds of policymakers and practitioners have actively participated in consultations. There is currently substantial international attention on teacher quality and the teaching profession. The LMTF can capitalize on this momentum—perhaps through strategic partnerships or through the proposed advisory board—so that the teaching and learning process remains front and center in the conversation. The project thus far has done an excellent job of facilitating dialogue around the basic framework to ensure that all children are learning. Desired learning outcomes are being identified, and measurement methods to assess achievement of those outcomes are being defined. The greatest challenge will come as we “fill in the activities” and support countries not only in their ability to measure learning (which is important), but also in their ability to “do well” on valid and reliable measures thanks to well-prepared teachers, high-quality teaching, and safe, productive learning environments. It’s the teaching and learning process that counts.

I would love to hear from others who have participated in previous consultations. What were your key takeaways? What are the next steps beyond the third phase?

What Graduate Students Should Unlearn Before Becoming Evaluators

Calling all new and emerging evaluators—this post is for you! Graduate school can change the way you look at the world, but is that change for better or for worse? The answer, of course, is that it depends. But some skills typically learned in graduate school can actually hinder the ability to properly conduct evaluations.

In the beginning, learning to write literature reviews and design research studies is challenging because it requires one to exercise very careful logic to reach conclusions. Each thought must be justified by research-based evidence, each term must be painstakingly defined, each theoretical framework must be eloquently outlined, and each system or process must be elaborately illustrated. Much time is spent honing not only critical thinking skills, but also the ability to use (and show!) logic. Linking A to B, completing Step 1 before Step 2, proving X leads to Y—the brain becomes accustomed to thinking in a linear manner. Causation, causation, causation! After hours of reading and writing, of rereading and rewriting, it is no surprise that these habits can be hard to break. But we desperately need to unlearn some of those habits we paid so dearly to obtain.

Let me explain. I don’t have the data to prove it, but I wager that a quick survey of graduate students in academic disciplines—research-based rather than practitioner-focused programs—would reveal that more time is spent working alone than in groups. Solitary work is quite conducive to linear thought, at least in the academic sense (keeping focused on the work at hand is another story!). In such circumstances, one has the luxury of designing elaborate models that control for every conceivable variable that could affect the research’s ability to yield statistically significant results. And this type of research is incredibly effective for the purposes it is intended to serve. In fact, many organizations engage in research, and these skills are critical to pursuing that type of work. But too many students latch onto the idea of evaluation as a career path and, fresh from their studies, attempt to use the research paradigm they’ve recently internalized to answer complex questions in a messy world that demands a different approach. As a recent graduate myself, I know how hard these habits can be to break and how much practice it takes to determine when to use research versus evaluation. This post aims to help you figure it out faster!

So what exactly does evaluation do differently from research? John LaVelle created a great visual to show just that:

Eval and Research Hourglass

Still not convinced? Check out this excellent article by E. Jane Davidson on unlearning social scientist habits—required reading for recent or soon-to-be graduates looking to break into evaluation!

If one of the first things you are learning as a new evaluator is how to design a logical framework or logic model, be sure to review these resources (and more!) very carefully. I use logic models regularly and find them useful in many ways, but they can be ineffective and downright dangerous when paired with an “objective” research lens. Unlearn those less-than-helpful-for-evaluation research habits from the start, and you’ll become a stronger, more seasoned evaluator.

What other habits learned in graduate school need to be unlearned or adapted upon entering the messy world of “work”? Share your tips!