More actions, less results: Reflections from two recent evaluations
Image: Piet Mondrian, Tableau III, Composition in oval, 1914
Teams at AP recently completed the final evaluations of two peacebuilding projects using a theory-based approach. While the projects took place in very different countries and followed very different logics, conducting the two assignments close together, and with a similar method, offered a good opportunity to reflect on aspects of evaluation practice that do not usually receive much attention. Evaluative efforts nowadays focus heavily on results and impact; these two experiences were a reminder of just how important it is to also spend time (and energy) on actions.
Generally speaking, AP’s approach to evaluation and learning is theory-based, and theory-based approaches focus on contribution: once changes, or results, are identified in a given context, the evaluation assesses whether, how and why a specific intervention contributed to them (or did not). This is usually done by developing a theory of change for the project, which describes how actions and results are linked.
Theory-based approaches are neither novel nor original at this point: nearly all donors require that peacebuilding interventions be designed on the basis of a theory of change, and many organisations now also recognise the value of theories of change for evaluation and learning. This has opened the door to the regular use of theory-based approaches such as Contribution Analysis, Process Tracing and Outcome Harvesting.
AP’s own approach draws on elements of Contribution Analysis. Evaluations start with a review of an intervention’s theory of change and, on this basis, a so-called “contribution story” is created, describing how the implementing agencies see what they did and what they achieved. The evaluation then examines all available evidence to either confirm or disconfirm the story. On the strength of this evidence, an intervention’s contribution can be rigorously defined, even when most of the information collected is qualitative.
Contribution Analysis has two advantages. First, it focuses on stories and narratives, which can create a very rich and full picture of a project. Second, it is more immediately graspable than Process Tracing or Outcome Harvesting. Like all theory-based approaches, however, its starting point is a theory of change, and for this reason what an evaluation can hope to achieve with this approach ultimately depends on how clearly and soundly that theory is designed.
The two evaluations AP recently completed illustrate just how important this is.
The first evaluation covered a project in Iraq, where an international peacebuilding organisation sought to improve community protection mechanisms. Looking at this project’s theory of change, it was easy to identify results, which included improved trust between communities on the one side and authorities and armed forces on the other, greater capacities for dialogue and, ultimately, a reduction in violence. What was hard, instead, was identifying actions. The organisation’s staff were very focused on their bottom-up, participatory approach and resisted easy labels, which they saw as misplaced. To them, the difference made by the project was due less to any specific activities (like training workshops) than to the type of presence they had in specific locations. Efforts to define this presence, however, were ultimately fruitless. As a result, while the evaluation was able to identify key positive results, in line with those in the theory of change, it could not ascertain the project’s specific contribution to their achievement.
The second evaluation covered a project in Mozambique, where a different international peacebuilding organisation wanted to improve how conflicts over natural resources were addressed. Here, too, the evaluation’s first step was to review and clarify the theory of change. And again, it was relatively easy to identify results, which included the creation of knowledge about natural resource-based conflicts, strengthened capacities for conflict resolution among traditional community leaders and civil society representatives, and the actual resolution of some of these conflicts. In this case, however, it was also easy to identify actions and match them to results: in the project’s theory, in-depth research led to knowledge, training workshops to strengthened capacities, and the creation of multi-stakeholder platforms to conflict resolution. The evaluation was able to develop a clear contribution story, to identify the results achieved and, lastly, to see which actions contributed to them and, importantly, which did not. This brought to light far more compelling and useful findings, and made for a generally more satisfying learning experience. (The report for this evaluation has been published; the report for the first was not, because of the politically sensitive context in Iraq.)
Together, the two experiences highlight the need to critically engage with all elements of a theory of change. As this conceptual tool has become more widely used, it has also come to be taken for granted. In evaluation and learning efforts, this often translates into an unbalanced focus on results.
Defining results does remain important, particularly for peacebuilding interventions, whose effects are notoriously difficult to define and measure. But results are only one half of what is needed to rigorously assess (and understand) contribution; the other half is actions. The objective, then, should always be to produce a theory of change that links actions and results in a way that is conceptually sound and empirically verifiable.