Lately I’ve been thinking a lot about the challenges of balancing learning and accountability in evaluations. While the two are not mutually exclusive, evaluation for accountability often faces specific boundaries that prevent an evaluator from exercising their full range of professional expertise in designing and executing the evaluation. Such limits can result in certain types of lessons learned being valued over others. Donors often commission specific types of evaluations based on organizational policy or perceived “best practices.” The need to work within such a framework often dictates the type of learning that will result.
In many cases, preferred methods, resources, short timeframes, local conditions, and other constraints eliminate opportunities to conduct participatory evaluations. Yet participatory evaluations—done well—can produce results for development that go well beyond learning and accountability. Anna Colom’s recent article in The Guardian, “How to avoid pitfalls of participatory development,” highlights a question of which evaluators should take note: “Is it even possible to run participatory projects in the current context of international development, still very much western-led and tied to logframes, donors and organisational agendas and structures?”
Project design and implementation aren’t solely to blame for complicating participatory projects. Monitoring and evaluation practices (and values) play a tremendous role in discouraging (or encouraging) real participation.
Colom identifies common pitfalls in participatory development (her categories, my summaries) that apply equally to participatory evaluation:
- Define participation and ownership – When and to what extent participation will be encouraged (and why) + who owns the project and its component parts
- Understand the context and its nuances – Power relations within communities + power relations between communities and other actors
- Define the community – Target community composition, including any sub-groups that define it
- Facilitators must know when to lead and when to pull back – Balancing “external” facilitation with group leadership and ownership
- Decide what will happen when you go – Sustainability!
To her list, I would add two more common pitfalls that are critical to address in participatory evaluations:
- Define what counts as credible evidence – Communities should have a very real voice in determining what types of evidence are credible for agreed purposes, as well as how that evidence should be collected. Facilitator and community member opinions will often come into conflict at this stage because of differing beliefs about what constitutes credibility. The facilitator as “critical friend” can provide guidance based on knowledge and experience, but should listen carefully to community needs so that collected evidence is valued and validated by the community.
- Decide how results will be used and communicated – Community members should be engaged in answering the following questions: What types of results will be communicated? For what purposes? For what audiences? How will results be used? Community members should also be engaged in helping prepare results to be communicated in ways that will be accepted and appreciated by various stakeholders. Particular attention should be paid to social, cultural, and linguistic relevance, with an emphasis on inclusion. Communities should agree on how their work (and community!) will be represented to a wider audience. Participation in the communication and utilization stages is critical for the sustainability of projects, as it reinforces community ownership and builds capacity for future implementation and evaluation work.
What other common pitfalls in participatory development and/or evaluation need to be addressed?