Thirteen staff from the core partner organizations of the Consortium for Elections and Political Process Strengthening (CEPPS) presented at the American Evaluation Association’s (AEA) Evaluation 2023 conference, held October 9-14, 2023. Leaning into this year’s conference theme, “The Power of Story,” democracy experts from the consortium discussed best practices for leveraging storytelling in evaluation to articulate a collective vision of change. They shared lessons learned on how to meaningfully engage program stakeholders in innovation while mitigating risks and connecting with each person’s story in an intentional and trauma-sensitive way. Three of the best practices featured at AEA are summarized below, each with a set of takeaways for democracy, human rights, and governance (DRG) evaluators to consider in their own work.
Organizational-level frameworks can be a valuable tool for organizing stories and other data and for developing and evaluating systematic strategies to achieve results. Designing and implementing such frameworks can be challenging, however: it requires building consensus among stakeholders on the vision, purpose, and use of the frameworks, and overcoming the perception that they are too high-level to meaningfully capture on-the-ground experiences.
Anna Chukhno, Deputy Director of the Evidence and Learning Practice at CEPPS core partner the International Republican Institute (IRI), presented lessons learned from integrating storytelling into an organization-wide results framework. Over the past two years, as Chukhno discussed, IRI has developed results frameworks for DRG programs intended to achieve three goals: 1) design, monitor, and evaluate programs more systematically; 2) encourage results-based management across the organization; and 3) clarify and test implicit theories of change. In her presentation, Chukhno described what IRI learned from incorporating storytelling into this initiative and how the organization encouraged staff and other stakeholders to submit, view, and make sense of individual stories of results from their programs. She also discussed how IRI developed results frameworks that are generalizable across different types of programs yet detailed enough to plan and track the outcomes of individual programs, and how to ensure the results framework remains a living document that guides, rather than constrains, organizational programming and monitoring, evaluation, and learning (MEL) efforts.
Shared applications for practice:
DRG programs operate in challenging environments where actors actively undermine program goals. These environments are often marked by war, by logistical constraints such as program management being physically distant from program implementation or target communities having little or no digital connectivity, and by intimidation and violence against stakeholders. As a result, DRG monitoring and evaluation activities must engage program staff and participants meaningfully while mitigating risk, innovate approaches to collecting and analyzing data and communicating results that carefully consider digital, physical, and psychosocial safety, and measure sustainable, culturally relevant change at the individual and community levels.
Emily Bango, Regional MERL Manager at CEPPS Senior Technical Partner Internews, co-presented a chapter she co-authored with her colleagues Megan Guidrey and Amelia Ayoob, “Equitable Evaluation in Remote and Sensitive Spaces,” recently published in the journal New Directions for Evaluation. The presentation described how Internews applies the Equitable Evaluation Framework™ (EEF) to guide its monitoring and evaluation work in remote and sensitive spaces. To illustrate the work in practice, Internews highlighted a recent case in Bolivia where Fundación Construir worked with the Araona Indigenous community to improve health outcomes through increased connectivity and access to accurate information that integrates and respects Indigenous customs and practices.
Shared applications for practice:
As evaluators engage in data collection to ensure DRG programs are effective and tailored to local contexts, we often need to speak with populations that have experienced trauma, including political dissidents, refugees, survivors of sexual violence, LGBTQI+ populations, and others. Doing so in a trauma-informed manner is crucial to avoiding re-traumatizing the individuals we seek to support.
At the conference, Paige Rumelt, MEL Associate at IRI, presented on the need to follow trauma-sensitive practices at each stage of the data collection process. These practices include completing a trauma risk assessment and a safety plan for responding to participant distress, using ongoing informed consent processes and content warnings before discussing sensitive topics, and allowing sufficient time for icebreakers and breaks; following them prioritizes participant safety and reduces the risk of inadvertent harm. IRI teams applied these approaches in Benin and Nigeria when interviewing communities experiencing violent conflict. By thinking intentionally about how trauma may affect the communities with which we engage, IRI seeks to “do no harm” and to create opportunities for individuals who have experienced trauma to share their stories in a comfortable, safe space.
Shared applications for practice: