Today marks the first of a series of guest bloggers. A special welcome and warm thanks to my colleague Hanife Cakici for her contribution this week! Here at Learning Culture we’ll be showcasing diverse perspectives on evaluation practice in a variety of countries, so please contact me if you’d like to bring your voice to the discussion and author a guest post. Now I’ll pass the word over to Hanife!
I am pleased to contribute to Learning Culture: A journey in asking interesting questions today. I would like to extend my gratitude and warm wishes to my colleague Molly Hamm for inviting me to ask questions that I think are interesting for the future of evaluation practice around the world. Before I delve into more serious matters though, I would like to take the opportunity to talk briefly about what I do and why I do it.
Coming to the University of Minnesota as a Fulbright scholar sparked a deep curiosity in me to investigate how to build contextually sensitive evaluation systems and practice in my native country, Turkey, in order to improve our national programs and policies. Despite the dramatic expansion of the field of evaluation worldwide, program evaluation remains a rather unexplored terrain in Turkish academic and governmental life. Faced with relatively scarce resources, Turkish policymakers need sound, useful evidence about the relevance and effectiveness of programs and policies. To this end, I recently launched the Turkish Evaluation Association (TEA), Turkey's first and only national evaluation network, to build a broad-based community of practice for emerging program evaluation practitioners in my native country. The association is featured on the International Organization for Cooperation in Evaluation's (IOCE) interactive map of evaluation organizations around the world. To support this important work, my dissertation focuses on developing bottom-up evaluation systems and practice that accommodate Turkey's historical, political, social, and cultural context and that will effectively and responsibly contribute to improving social policies.

In addition to my academic work, I currently serve on the board of directors of the Minnesota Evaluation Association, where I help organize and facilitate events, workshops, and seminars that build new knowledge and skills among scholars and evaluation practitioners. Although my passion for evaluation practice began as early as 2006, it has certainly accelerated over the last couple of years thanks to worldwide initiatives to expand the field of evaluation to contexts outside of the global North, which was a great impetus for me to establish TEA.
Indeed, the field of evaluation in the 21st century will be characterized by its international and cross-cultural expansion. I suspect this particular trend will dominate discussions at the 27th annual conference of the American Evaluation Association in October 2013. The conference itself will challenge evaluators' practical toolboxes and theoretical dispositions, as the theme invites practitioners to foresee, or even make predictions about, the future of evaluation practice. Lurking behind this global expansion, however, is an important question that begs for an answer: Will evaluation practice in the global South be top-down and donor-driven, or bottom-up and indigenous? Evaluation journals will be occupied by the debate between (a) those who argue that the most appropriate way to strengthen evidence-based decision making in developing countries is for donor agencies (multilateral or bilateral) and/or donor countries to fund evaluation capacity building activities, thereby contributing to the evolution of evaluation systems and practice in developing countries, and (b) those who argue for a more indigenous approach, in which the developing country takes full ownership of its decision-making process and builds a bottom-up evaluation culture and capabilities for and by its own people. Empirical research is certainly much needed to answer this question. Yet I believe I can initiate a fun, scholarly conversation at this point by sharing some excerpts from my upcoming dissertation, titled The Perceived Value of Program Evaluation as a Decision-Making Tool in the Turkish Educational Decision-Making Context.
Many Western evaluation scholars and practitioners have recognized that evaluation practice first spread to low- and middle-income countries (LMICs) through Northern-based aid organizations as a means of delivering their services. Given evaluation's significance in decision-making, a concerted effort by many Northern and some Southern institutions and evaluators to build evaluation systems and practice in developing countries contributed to this expansion. Numerous sessions, workshops, and conferences have been organized to build evaluation capacity within developing country governments, and many national evaluation organizations and associations have been established (Mertens & Russon, 2000). EvalPartners, an international evaluation partnership initiative to strengthen civil society's evaluation capacities to influence public policy based on evidence, mapped existing Voluntary Organizations for Professional Evaluation (VOPEs) around the world and gathered information on a total of 158 VOPEs, of which 135 operate at the national level and 23 at regional or international levels. Some LMICs have established government-wide evaluation systems to improve their public programs and policies (e.g., Brazil, Korea, and Mexico).
Aid organizations’ efforts to disseminate evaluation systems and practice in developing countries have implications for the future of evaluation practice outside of Western contexts. Evidence from a wide variety of evaluation studies converges to suggest that many developing countries consider evaluation a donor-driven activity without value for their specific learning and information needs (World Bank, 2004). This potentially imposed use of evaluation has reduced opportunities to build and sustain the national evaluation capacity needed to address national information needs and improve decisions. As Hay (2010) notes, Northern-based aid organizations’ dominance over the field of evaluation “created and reinforced inequalities in the global evaluation field by overemphasizing the values, perspectives, and priorities from the North and underemphasizing those from the South” (p. 224).
Researchers have recognized that evaluation is itself a social intervention; hence they contend that evaluation reality is produced in politically, culturally, socially, and historically situated contexts (Guba & Lincoln, 1989; LaFrance & Nichols, 2008). Truth about the value and utility of evaluation can never be isolated from the domain of political discourses, cultural values, and historical relations (Bamberger, 1991; Hood, Hopson, & Frierson, 2005). As a result, evaluation scholars argue that context plays an essential role in grounding and validating the concept of evaluation in a particular setting for a particular group of people, as well as in shaping the ways in which evaluation can be conducted and used (Conner, Fitzpatrick, & Rog, 2012).
The issue of context in evaluation thus problematizes the applicability of Western cultural frameworks in non-Western settings (Mertens & Hopson, 2006). Evidence from a wide variety of evaluation studies converges to suggest that the inquiry traditions of the white, majority Western culture may compromise the interests of underrepresented groups (low- and middle-income countries in this case) due to a widespread failure to appreciate these groups' ontological and epistemological assumptions and cultural nuances (Smith, 2012; Kirkhart, 2005). To challenge the field's status quo orientation toward majority Western thought and to increase the contextual validity of results, many evaluation scholars advocate the use of non-Eurocentric evaluation approaches that are grounded in cultural context and conducted by and for the cultural community (Hopson, Bledsoe, & Kirkhart, 2011; Mertens, 2007).
These are only a few introductory remarks and arguments from the existing literature on a much larger and more significant discussion. As the field of evaluation evolves globally and cuts across many countries and cultures, evaluators need to be critical of their assumptions about evaluation systems and practice. It is my hope that they will keep the evaluation dilemma presented above at the back, or front, of their minds as a guiding star in the 21st century.
Now we invite readers to weigh in on the discussion. What steps are being, or could be, taken to foster indigenous evaluation? How can the prevalence of donor-driven evaluation be balanced by local organization- or community-driven evaluation?