AEA365 | A Tip-a-Day by and for Evaluators


Category: Advocacy and Policy Change

March 14, 2014

APC TIG Week: Lisa Hilt on Yes, it can be that simple! Value for Money analyses in policy advocacy and campaigns

2 Comments · Posted by Sheila Robinson in Advocacy and Policy Change

Hi! I’m Lisa Hilt, a Monitoring, Evaluation, and Learning Advisor for Policy and Campaigns at Oxfam.

We strive for policy changes that will right the wrongs of poverty, hunger, and injustice. Much of our progress comes in small steps forward, resulting from ongoing engagement with key stakeholders and multiple campaign spikes (high-intensity, short-term advocacy moments focused on a particular issue).

Following these campaign spikes, teams ask:

  • Were the outcomes worth the resources we invested?
  • How can we be more effective and efficient?

We evaluators ask: How can we help teams answer these questions with confidence when in-depth analyses are not possible or appropriate? In our experience at Oxfam, conducting “simple” value-for-money analyses of campaign spikes is a useful alternative for the teams we support.

Here are a few tips and lessons based on our experience:

Hot Tips:

Plan ahead: Even simple analysis can be difficult (or impossible) to conduct without pre-planning. Decide in the planning phases of the campaign spike which indicators and investments will be tracked and how.

Break down investments by tactic: Having even a high-level breakdown of spending and staff time by key tactics (see example) enables more nuanced analysis of the connections between particular investments and the intended outcomes.
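To make this tip concrete, here is a minimal sketch of a tactic-level breakdown in Python with pandas. The tactic names and figures are hypothetical, and a simple spreadsheet works just as well; what matters is the structure: spend and staff time per tactic, alongside the outcomes each tactic was meant to produce.

    # Minimal sketch of a tactic-level value-for-money breakdown.
    # All tactic names and figures are hypothetical, for illustration only.
    import pandas as pd

    tactics = pd.DataFrame({
        "tactic": ["media outreach", "public events", "lobby meetings"],
        "spend_usd": [12000, 8000, 3000],
        "staff_days": [30, 45, 10],
        "outcomes": [4, 2, 3],  # e.g., earned-media hits or commitments secured
    })

    # Cost per outcome gives a rough, comparable signal across tactics;
    # it is a starting point for team discussion, not a verdict.
    tactics["cost_per_outcome"] = tactics["spend_usd"] / tactics["outcomes"]
    tactics["share_of_spend"] = tactics["spend_usd"] / tactics["spend_usd"].sum()

    print(tactics.sort_values("cost_per_outcome"))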

Team analysis is key: In addition to “hard” data, draw on the insights of team members, who bring multiple perspectives and expertise in their fields, to assess how their interrelated efforts contributed to the results. Team debriefs are an effective way to do this.


Lessons Learned:

Present information visually: A visual presentation of investments and outcomes enhances the team’s ability to make sense of the information and generate actionable insights (see example). Indicate which tactics were intended to achieve specific objectives.

Don’t let perfection be the enemy of the good: Slightly imperfect analysis is better than no analysis at all, and often adequate for short-term campaign spikes. Match the levels of rigor and effort to the confidence level needed to enable the team to generate reliable insights.

Trust is important: Trust and communication are fundamental to honest conversations within the team. Be cognizant of team dynamics when designing team reviews, and focus the discussion on outcomes and tactics, not individual performance.

Focus on the future: The strategic learning and forward-looking aspects of this type of exercise are arguably the most important. While looking back at the campaign spike, focus the conversation on what the team can learn from this experience to improve future efforts.

The American Evaluation Association is celebrating APC TIG Week with our colleagues in the Advocacy and Policy Change Topical Interest Group. All contributions to aea365 this week come from our APC TIG members. Do you have questions, concerns, kudos, or content to extend this aea365 contribution? Please add them in the comments section for this post on the aea365 webpage so that we may enrich our community of practice. Would you like to submit an aea365 Tip? Please send a note of interest to aea365@eval.org. aea365 is sponsored by the American Evaluation Association and provides a Tip-a-Day by and for evaluators.

Tags: advocacy

March 13, 2014

APC TIG Week: Carlisle Levine on Using Contribution Analysis to Explore Causal Relationships in Advocacy and Policy Change

No comments · Posted by Sheila Robinson in Advocacy and Policy Change

Hello! I’m Carlisle Levine, an independent evaluator specializing in advocacy, peacebuilding and strategic evaluation. I led CARE USA’s advocacy evaluation and co-led Catholic Relief Services’ program evaluation.

Because of the many factors that influence policy change and the time it takes for change to come about, a big challenge in advocacy evaluation is drawing causal relationships between advocacy activities and policy outcomes. Contribution analysis is an approach that responds to this challenge.

John Mayne outlines a six-step process for undertaking contribution analysis:

  1. An advocacy team identifies the causal relationship it wants to explore: Did a particular set of advocacy activities contribute to a targeted policy change?
  2. An evaluator helps the team describe how they believe their advocacy intervention contributed to the desired policy change and identify the assumptions underlying their story, thus articulating their theory of change.
  3. The evaluator gathers evidence related to this theory of change.
  4. The evaluator synthesizes the contribution story, noting its strengths and weaknesses.
  5. By gathering perspectives from allied organizations, others involved in the policy change process, and ideally, policy makers themselves, the evaluator tests the advocacy team’s theory of change.
  6. Using triangulation, the evaluator develops a more robust contribution story. With a wide enough range of perspectives collected, this analysis can provide a credible indication of an advocacy intervention’s contribution to a targeted policy change.

Cool Tricks:

  • Timelines can help advocacy teams remember when activities happened and how they relate to each other.
  • Questions such as “And then what happened?” can help a team articulate how an activity contributed to short- and medium-term results.
  • Questions such as “What else contributed to that change coming about?” can help a team identify other factors, beyond their activities, that also contributed to the targeted results.
  • When gathering external perspectives, interviewers may start by asking about the targeted policy change and how it came about. Later in the interview, once the interviewee has shared their change story, the evaluator can ask about the role of the organization or coalition being evaluated.

Lessons Learned:

  • External stakeholders are more likely to agree to an interview about an initially unnamed organization or coalition if they are familiar with the evaluator. This is especially true with policy makers.
  • Where external stakeholders do not know an evaluator, a well-connected person independent of the organization or coalition being evaluated can facilitate those introductions.
  • Stakeholders will offer distinct perspectives, based on their experience and interests. The more stakeholders one can include, the better.

Rad Resource: APC Week: Claire Hutchings and Kimberly Bowman on Advocacy Impact Evaluation, February 7, 2013.



Tags: advocacy · contribution analysis

March 12, 2014

APC TIG Week: Kat Athanasiades and Veena Pankaj on Small Picture, Big Picture: Using the Framework for Public Policy Advocacy in a Large-Scale Advocacy Campaign

No comments · Posted by Sheila Robinson in Advocacy and Policy Change, Data Visualization and Reporting

Hello evaluation world! We are Kat Athanasiades and Veena Pankaj from Innovation Network.

This might sound familiar: you are given hundreds of pages of grant documents to make sense of. You are left wondering, “Where do I start?”


Fig. 1: A typical expression of one of the authors in this situation.

We were recently tasked with guiding evaluation for a funder’s national advocacy campaign, and had to make sense of advocacy data contained in 110 grants. Where did we start?

Julia Coffman’s Framework for Public Policy Advocacy (the Framework; Fig. 2), a comprehensive “map” of strategies that might be used in an advocacy campaign, was the perfect tool to analyze the grant reports. It let us identify and compare advocacy strategies employed by grantees individually, as well as step back and look at strategies used across the campaign.

Rad Resource: You can learn more about the Framework in Julia Coffman’s Foundations and Public Policy Grantmaking.


Fig. 2: The Framework for Public Policy Advocacy plots advocacy strategies against possible audiences (X-axis) and different levels of engagement of those audiences (Y-axis).

So how did we actually use the Framework to help us with analysis?

1. We reviewed grant reports and determined which strategies were used by each grantee. We created a top sheet to record this information (Fig. 3).


Fig. 3: A sample top sheet for one grant, with relevant advocacy strategies identified.

2. We entered the data into Excel, where it would be easy to manipulate into a visual, reportable format.

3. We created a series of “bubble charts” (a chart option in Excel) to display the information (Figs. 4, 5).


Fig. 4: Each “bubble” above represents an advocacy strategy used by Organization X. Blue bubbles represent awareness-building strategies, red will-building strategies, and yellow action strategies.


Fig. 5: Across all the grants in this campaign, you can quickly see by the bubble size that certain strategies were prioritized: specifically, grantees used awareness-building strategies most often. These charts allowed the funder to quickly grasp the breadth and depth of the advocacy work in their campaign.

Hot Tip: If you’re designing data collection, the Framework provides a systematic way to sort grantees for further analysis based on the type of advocacy work they are engaged in.

Rad Resource: Want to learn how to make bubble charts? Check out Ann Emery’s blog to get help on constructing circle charts.
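If you would rather script the chart than build it in Excel, here is a minimal sketch in Python with matplotlib; the strategy names, grid positions, colors, and counts are hypothetical stand-ins rather than the Framework’s actual categories or any campaign’s data.

    # Minimal sketch of a Framework-style bubble chart.
    # Strategy names, positions, and counts are hypothetical.
    import matplotlib.pyplot as plt

    # (audience position, engagement level, number of grantees using it)
    strategies = {
        "public awareness": (1, 1, 42),    # awareness-building
        "media coverage": (2, 1, 35),      # awareness-building
        "coalition building": (2, 2, 18),  # will-building
        "lobbying": (3, 3, 9),             # action
    }
    colors = ["tab:blue", "tab:blue", "tab:red", "gold"]

    fig, ax = plt.subplots()
    for (name, (x, y, count)), color in zip(strategies.items(), colors):
        ax.scatter(x, y, s=count * 40, color=color, alpha=0.6)  # area tracks count
        ax.annotate(name, (x, y), ha="center", va="center", fontsize=8)

    ax.set_xlabel("Audience (general public to decision makers)")
    ax.set_ylabel("Level of engagement")
    ax.set_title("Strategies across grants (bubble size = number of grantees)")
    plt.show()

Note that the grantee count is mapped to the s parameter, which controls bubble area rather than radius, so heavily used strategies are not visually exaggerated.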

We would love to hear how you use the Framework in your work! Let us know via email or in the comments below.


Tags: advocacy · bubble charts · data visualization · grantmaking · public policy

March 11, 2014

APC TIG Week: Jewlya Lynn on Evaluations of Public Will-Building Strategies

1 Comment · Posted by Sheila Robinson in Advocacy and Policy Change

I’m Jewlya Lynn, CEO at Spark Policy Institute, where we combine policy work with real-time evaluations of advocacy, field building, collective impact, and systems building to achieve sustainable, meaningful change.

While advocacy evaluation as a field has developed tools and resources that are practical and appropriate for advocacy, it has done little to figure out the messy issue of evaluating actual changes in public will.

Most advocacy evaluation tools are too focused on the advocates and champions to learn about the impact on the public. Polling is one approach, but if you’re on the ground mobilizing volunteers to change the way the public is thinking about an issue, public polls are too far removed from the immediate impact of your work. So what do you evaluate?

Cool Trick: When evaluating a campaign to build public will for access to healthcare, polling results provided us with context on the issue, but didn’t help us understand the impact on the general public. Evaluating the immediate outcome of a strategy (e.g., how forum participants responded to the event) had value, but also didn’t tell us enough about the overall impact of the work on public will.

We decided to try a new approach, designing a “stakeholder fieldwork” technique that was a hybrid of polling and more traditional interviews and surveys:

  • Similar to polling, the interviews were conducted by phone, took only 15 minutes, and were unscheduled and unexpected.
  • Unlike typical polling, participants were identified by sampling the phone numbers of actual audience members of the various grantee activities. The calls were made by researchers with community mobilizing experience, and the questions were open-ended, exploring audience members’ experiences with the activity they had been exposed to and how they engaged with other parts of the strategy. We also asked for the names and contact information of people they had talked to about their experience, allowing us to call the people who represented the “ripple effect” (a rough sketch of this sampling logic follows).
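Here is a minimal sketch of that sampling logic in Python. The phone numbers are placeholders and conduct_interview is a hypothetical stand-in for the real 15-minute open-ended interview; the point is the two-stage structure of an audience sample plus ripple-effect referrals.

    # Minimal sketch of "stakeholder fieldwork" sampling: a random draw from
    # actual audience contact lists, plus follow-up calls to referrals.
    # All data and the interview function are hypothetical placeholders.
    import random
    from collections import deque

    audience_phone_numbers = ["555-0101", "555-0102", "555-0103", "555-0104"]

    def conduct_interview(number):
        """Stand-in for the 15-minute open-ended phone interview.
        Returns contact numbers of people the interviewee talked to."""
        return []

    # First wave: sample actual event attendees, not the general public.
    call_queue = deque(random.sample(audience_phone_numbers, k=2))
    called = set()

    while call_queue:
        number = call_queue.popleft()
        if number in called:
            continue
        called.add(number)
        # Referrals are the "ripple effect": people the interviewee talked to
        # about their experience, who are then called in turn.
        call_queue.extend(conduct_interview(number))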

The outcome? We learned how more than 100 audience members benefited from multiple types of engagement, and we learned about the impact of the “ripple effect,” including the echo chamber that existed among audiences of the overall strategy.

Hot (Cheap) Tip: Polling companies use online software to manage high-volume outbound calling and to capture the data. Don’t have money to purchase this type of capacity? We adapted a typical online survey program into our very own polling software!

Rad Resource: The Building Public Will 5-Phase Communication Approach from The Metropolitan Group is a great resource to guide your evaluation design and give you language to help communicate your results.

