News / October 31, 2023

Reflecting on the Power of Story for Democracy, Human Rights, and Governance Evaluation


Thirteen staff from the core partner organizations of the Consortium for Elections and Political Process Strengthening (CEPPS) presented at the American Evaluation Association’s (AEA) Evaluation 2023 conference, held October 9-14, 2023. Leaning into this year’s conference theme, “The Power of Story,” democracy experts from the consortium discussed best practices for leveraging storytelling to articulate a collective vision of change. They shared illuminating lessons learned on how to meaningfully engage program stakeholders in innovation while mitigating risks and connecting with each person’s story in an intentional, trauma-sensitive way. Three of the best practices featured at AEA appear below, each with takeaways for democracy, human rights, and governance (DRG) evaluators to consider in their own work.

Staff from CEPPS and the partner organizations gather for a photo during the conference.

Using Results Frameworks to Articulate a Story of Change 


Organizational-level frameworks can be a valuable tool for organizing stories and other data and for developing and evaluating systematic strategies to achieve results. Designing and implementing such frameworks can be challenging, however: it requires building consensus among stakeholders on the vision, purpose, and use of the framework, as well as overcoming perceptions that such tools are too high-level to meaningfully encapsulate on-the-ground experience.

Anna Chukhno, Deputy Director of the Evidence and Learning Practice at CEPPS core partner the International Republican Institute (IRI), presented lessons learned from integrating storytelling into an organization-wide results framework. Over the past two years, as Chukhno discussed, IRI has developed results frameworks for DRG programs intended to achieve three goals: 1) to more systematically design, monitor, and evaluate programs; 2) to encourage results-based management across the organization; and 3) to clarify and test implicit theories of change. In her presentation, Chukhno described what IRI learned from incorporating storytelling into this initiative and how the organization encouraged staff and other stakeholders to submit, view, and make sense of individual stories of results from their programs. She also discussed how IRI developed results frameworks that are generalizable across different types of programs while remaining detailed enough to plan and track the outcomes of individual programs, and how to ensure the results framework remains a living document that guides, rather than constrains, organizational programming and monitoring, evaluation, and learning (MEL) efforts.

Anna Chukhno shares lessons learned from incorporating storytelling into the results framework.

Shared applications for practice:

  • Ensure sufficient time for building buy-in and consensus for organization-wide results frameworks among various stakeholders.
  • Utilize existing organizational processes for such initiatives, such as reflection and learning systems, communities of practice, and strategy development or organizational planning processes.
  • Demonstrate the utility of such frameworks throughout the process by showcasing how they can be leveraged for proposal or program design, to provide context and big-picture significance for individual stories of success, and to facilitate regional and country strategy planning.

Applying Equitable Evaluation in Remote and Sensitive Spaces 


DRG programs operate in challenging environments where some actors actively undermine program goals. These environments are often marked by war and by logistical difficulties: program management can be physically distant from program implementation, target communities may live with little or no digital connectivity, and stakeholders are often subject to intimidation and violence. As a result, DRG program monitoring and evaluation activities must apply principles that meaningfully engage program staff and participants in a manner that mitigates risk, innovates approaches to collecting and analyzing data and communicating results with careful attention to digital, physical, and psychosocial safety, and measures sustainable, culturally relevant change at the individual and community level.

Emily Bango, Regional MERL Manager at CEPPS Senior Technical Partner Internews, co-presented a chapter she co-authored with her colleagues Megan Guidrey and Amelia Ayoob, titled “Equitable Evaluation in Remote and Sensitive Spaces,” recently published in the journal “New Directions for Evaluation.” The presentation described how Internews applies the Equitable Evaluation Framework™ (EEF) to guide its monitoring and evaluation work in remote and sensitive spaces. To illustrate the approach in practice, Internews highlighted a recent case in Bolivia where Fundación Construir worked with the Araona Indigenous community to improve health outcomes through increased connectivity and access to accurate information that integrates and respects Indigenous customs and practices.

Shared applications for practice:

  • Integrate MERL activities with program implementation (enhancing both in the process).
  • Generate findings useful to a wider range of stakeholders, including participants and communities as well as donors and other implementing organizations.
  • Foster critical thinking regarding what constitutes valuable information and knowledge.


Gathering Stories in a Trauma-Informed Manner 


Paige Rumelt presents on the need to follow trauma-sensitive practices at each stage of the data collection process, from developing data collection tools to following up with participants.

As evaluators engage in data collection to ensure our DRG programs are effective and tailored to local contexts, we often need to speak with populations that have experienced trauma (political dissidents, refugees, survivors of sexual violence, LGBTQI+ populations, and others). Doing so in a trauma-informed manner is crucial to avoiding re-traumatizing the very individuals we seek to support.

At the conference, Paige Rumelt, MEL Associate at IRI, presented on the need to follow trauma-sensitive practices at each stage of the data collection process. These best practices include completing a trauma risk assessment and a safety plan to respond to participant distress, using ongoing informed consent processes and content warnings before discussing sensitive topics, and providing sufficient time for icebreakers and breaks; following them prioritizes participant safety and reduces the risk of inadvertent harm. IRI teams leveraged these approaches in Benin and Nigeria when interviewing communities experiencing violent conflict. By thinking intentionally about how trauma may affect the communities in which we are engaging, IRI seeks to “do no harm” and to create opportunities for individuals who have experienced trauma to share their stories in a comfortable, safe space.

Shared applications for practice:

  • Complete an assessment weighing the potential risks and benefits to participants and evaluators before planning a data collection exercise, and develop a safety plan to respond to trauma if it arises (in either the interviewee or the evaluator).
  • If the risks of engaging with trauma-affected populations outweigh the benefits, consider an alternative data collection method, such as interviewing service providers currently working with the population of interest.
  • Make soliciting informed consent from interviewees an ongoing process, checking in with anyone who seems upset or appears to be experiencing a trauma response, and give advance warning before discussing upsetting topics.
  • Leave sufficient time for icebreakers, and take time to digest how an interview transpired, especially if sensitive topics were discussed, rather than scheduling back-to-back interviews.
  • Identify what a trauma response looks like and be prepared to recognize and respond to it, whether it appears in interviewees or in ourselves as data collectors.
  • Recognize that we cannot effectively engage with trauma-affected populations without first identifying our own triggers and taking care of ourselves.