Tips on How to Conduct Interviews for Program Evaluation (Part 2)


Tip #3: Use trained interviewers who are able to build rapport

Effective interviewing is both an art and a science, and it takes training and lots of practice! As I mentioned in my previous post, interviewers need to think quickly on their feet. This matters because they need to "go with the flow" so the interview feels more like a friendly conversation than an interrogation, while still covering all the major questions. If resources are very tight, consider networking to find a highly motivated graduate student, or a professional who is currently a stay-at-home parent, who may be willing to conduct and analyze phone interviews from home:

  • either in exchange for a modest stipend or
  • pro bono to stay in touch with their field or branch out into a new area of expertise

When I was first learning about qualitative methods (e.g., interviews and focus groups), I received a lot of help from Earl Babbie's surprisingly down-to-earth textbook "The Practice of Social Research." I still recommend this book as a great resource for interviewers.

Tip #4: Obtain audio recordings with the permission of your respondents.

Remember to make a decent audio recording of interviews and to obtain consent before doing so. Some evaluators decide to transcribe interviews word for word, even including non-verbal events such as pauses, laughter, etc. Weigh the benefits and costs of full transcription versus detailed notes.

Tip #5: Conduct interviews and begin analyzing results simultaneously

Keep an eye out for emerging patterns and themes as you conduct interviews. Analyzing the interview results for themes as you go, also known as qualitative analysis or coding, alerts you to modifications you may need to make to your questions to best capture the information you need. For example, in a recent set of interviews, I soon realized that my questions needed to be more direct: "Do you receive XYZ type of support?" versus the more indirect "What benefits do you receive from this program?" (Although the indirect question can yield a wealth of information on unanticipated outcomes that are still very important!)
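
Even a rough running tally of your codes can make emerging themes easier to spot. Below is a minimal, hypothetical sketch in Python of one way to count theme labels across plain-text interview notes; the folder name and theme labels are invented for illustration, and a spreadsheet or a hand tally works just as well.

# Minimal, hypothetical sketch: tally theme codes across plain-text interview notes.
# The folder name and starter themes below are illustrative only; substitute the
# codes that emerge from your own transcripts (and expect to revise them often).
from collections import Counter
from pathlib import Path

THEMES = ["transportation barrier", "peer support", "cost concern"]  # starter codebook

def tally_themes(notes_dir: str) -> Counter:
    """Count how many times each theme label appears in the notes collected so far."""
    counts = Counter({theme: 0 for theme in THEMES})
    for note_file in Path(notes_dir).glob("*.txt"):
        text = note_file.read_text(encoding="utf-8").lower()
        for theme in THEMES:
            counts[theme] += text.count(theme)
    return counts

if __name__ == "__main__":
    for theme, n in tally_themes("interview_notes").most_common():
        print(f"{theme}: {n} mentions so far")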

Tip #6: Allocate time to learn to use software that assists with qualitative analysis of your data, if needed.

Generally, these tools are most beneficial if you are using multiple interviewers and have a large number of respondents, or if the analyst detests the tedious work of coding transcripts for themes. These tools do not, however, replace the important role of manually reading and re-reading the interview transcripts. The American Evaluation Association has a LinkedIn group that is a great resource for questions about which software to use, and for those interested in evaluation in general! http://www.linkedin.com/

Tip #7: Begin writing the interview report even before you have finished analyzing all the data

Stepping back and beginning to see the big picture helps you deal with analysis paralysis, the condition of over-thinking and over-analyzing the data. Your inputs affect your outputs: well-designed interview questions based on the peer-reviewed literature and on candid feedback from your program stakeholders, in concert with skillful interviewing techniques, all contribute to a meaningful and informative interview report.


Tips on How to Conduct Interviews for Program Evaluation (Part 1)


Interviews are a useful way to collect data for program evaluation. They provide qualitative data, which is text-based (for example, quotes, stories and descriptions), versus the quantitative, numbers-based data that written surveys (also known as questionnaires) provide. I recently interviewed people for a program evaluation and gained a fresh appreciation for the following tips:

Tip #1: Decide beforehand whether interviews are the most effective and efficient way of collecting the data you need.

Weigh the pros and cons of interviews:

Pros of interviews:

Interviews may:

  • Provide opportunities to probe for information that you may not otherwise think to ask for in a written questionnaire.
  • Give you information and stories that people may not otherwise share in a written survey.
  • Help you build rapport with interviewees and help identify stakeholders who really care about the program and may want to get further involved in the evaluation. Involving stakeholders is key to a successful evaluation. (see my previous post on the CDC program evaluation model.)
  • Help explain trends in quantitative data, answering questions such as "why" and "how." They can give you a good idea of how programs work and can help you generate a program description, which is critical for every evaluation. Interviews can provide rich data that paint a vivid portrait of your program.
  • Have potential to facilitate the expression of opinions and feelings in the interviewees’ unique “voices.” They are a rich source of quotes for future grant proposals.
  • Be less expensive when conducted by phone rather than in person.

Cons of interviews:

  • More resource-intensive: it is time consuming to conduct and participate in interviews, to transcribe them and to analyze data.
  • Requires interviewers to be trained (again more resource-intensive—think training time, planning and designing training materials and presentations)
  • Interviewers need to be articulate and able to think quickly "on their feet": deciding on the next question to ask, listening and taking notes, all at the same time.¹
  • Usually smaller samples are used, so the representativeness of your data is much more limited. For example, your data represent only the 14 people interviewed, whereas a questionnaire might reach 140 respondents and possibly allow you to infer results to a larger population.

In the end, if you decide that you really need the type of data that interviews provide, interviews can be really worth the extra time and effort!

Tip #2: Carefully design and follow an interview script, even if you are the only interviewer, and train your interviewers. Make sure the script and the training facilitate the following practices among interviewers:

  1. upholding ethical standards of behavior,
  2. building rapport and
  3. safeguarding the quality of data.

Selected examples:

Adhere to ethical procedures such as informed consent

It can be so tempting to improvise, thinking that this will make the questions sound less rehearsed. But this makes it really easy to forget important steps like informing participants of the purpose of the interviews and asking them whether they are interested in participating in the interview (informed consent). Inform participants of potential risks and benefits of participating in the evaluation. This is especially important when collecting highly confidential health-related data.

Some participants may give you reasons why they cannot participate in the interviews. Here the interviewer first has to carefully discern whether the interviewee is actually interested in participating; do not assume that everyone has the time or the interest. The interviewer then has to strike a careful balance between addressing any barriers that would prevent an interested interviewee from participating and maintaining a high standard of professional ethics: respecting the individual's decision not to participate and avoiding statements that could be perceived as pressuring or coercing the participant.

Do not use leading questions

Do not use leading questions, that is, questions or statements that can unconsciously influence interviewees to give certain answers. Example of a leading question: "What are some of the challenges program participants face in getting to classes?" A more neutral alternative: "Do participants face challenges in getting to classes?"

Avoid double-barreled questions

Be extra vigilant to avoid double-barreled questions; these can easily creep in, especially when you spontaneously ask probing questions. Example of a double-barreled question: "Do you either send cards or call your program participants?" Answer: "Yes." The problem is that the answer does not tell you which of the two options is actually used.

Consider hiring a professional

Since other considerations also go into upholding ethical conduct, building rapport and safeguarding data quality, one option for a do-it-yourself evaluation is to collaborate with a professional evaluator to design the interview script and train your interviewers.

¹ Earl Babbie (2001). The Practice of Social Research, 9th edition. Wadsworth/Thomson Learning.


A Guide to Navigating the Evaluation Maze: “A Framework for Evaluation” from the Centers for Disease Control and Prevention (CDC), Part 2


This is part 2 of a previous post on the Centers for Disease Control and Prevention’s (CDC) evaluation model. The goal of these posts is not to give an exhaustive description of this model but to whet your appetite for further study, to refer you to other sources and to share with you some related topics that have been percolating in my head.

In the last post, we covered steps 1-3 of the CDC's evaluation framework, depicted below:

A Framework for Evaluation.

Source: Centers for Disease Control and Prevention (CDC), Office of the Associate Director for Program (OADPG)

Step 4: Gather Credible Evidence

What is credible evidence? Let us back up and consider credibility from various perspectives: those of funders, agency staff and program participants. Involving the most important stakeholder groups throughout the evaluation process, and being open to learning from their experiences, will increase the credibility of the evidence. Keep in mind that some program participants may not trust sources, such as government agencies and doctors, that are traditionally viewed as credible in professional circles.

Your evidence is only as good as the tools you use to collect it. Use high-quality tools, e.g., questionnaires, interview guides, etc. Pay attention to validity issues; for example, do the questions really measure what you think they are measuring? At the very least, choose indicators based on a review of the literature. Indicators are the items being measured, such as knowledge levels or the number of low-birth-weight births, that shed light on the health or social condition your program is trying to change.

Ask experts to review your evaluation tools and then pilot test them among program participants. In some cases, it may be particularly important to use an evaluation tool that has been tested for reliability, i.e., does the questionnaire yield consistent results each time it is used? In these cases I recommend, if possible, using a tool that has been published in the peer-reviewed literature. University libraries often allow visitors to use their databases and to access peer-reviewed journals online. The CDC also recommends:

  • choosing indicators wisely
  • training staff in data collection,
  • paying attention to data quality issues and
  • protecting confidentiality of participants’ information

Step 5: Justify Conclusions

All conclusions need to be based on evidence. Take care also to base all your conclusions on sound statistical reasoning. For example, one common mistake is to conclude that there is a cause-and-effect relationship on the basis of correlational data. A statistical correlation only shows that two variables are associated with one another. Take the following piece of evidence: depression is correlated with lower levels of perceived social support. All we can conclude is that there is a correlation between depression and social support. Lower levels of perceived social support could have contributed to the depression, or the depression itself could have led to social withdrawal, which then resulted in lower levels of perceived social support. If you're interested in a light and amusing read to familiarize yourself with such principles, I second evaluator John Gargani's recommendation of Darrell Huff's classic book "How to Lie with Statistics."

This is another step where it is important to continue engaging stakeholders. Encourage stakeholders to participate in the process of drawing conclusions from evidence. This will increase their trust in the findings and will increase the chances that they will actually use the evaluation.

Step 6: Ensure Use and Share Lessons Learned

So how do we ensure that evaluation findings are actually used? As with cooking, presentation is everything! People process visual information much more intuitively and naturally than verbal information. Consider, for example, how well very young children respond to colors and pictures. This principle also applies to communicating your findings effectively to adult audiences. A hot topic in the field of evaluation is data visualization, or how to display information using sound design principles. While it is true that graphs can be confusing, effectively applying data visualization principles can produce graphs that are elegantly intuitive to a lay evaluation consumer. For further study, read Edward Tufte's classic book "The Visual Display of Quantitative Information."

A tool that helps visually depict a variety of graphs and charts in one place is a data dashboard. Think of it as a short cut to communicating information visually. A dashboard is a display of multiple graphs all in the same location. A resource for further reference is Stephen Few’s Information Dashboard Design: The Effective Visual Communication of Data.

To increase use of the evaluation findings, the CDC recommends:

  • aligning evaluation design with how stakeholders plan to use the evaluation,
  • translating findings into practical recommendations for the program and
  • using reporting strategies that are customized to stakeholders’ specific needs.

For DIY (Do It Yourself) evaluators, I highly recommend renowned evaluation theorist Michael Quinn Patton’s book Utilization-Focused Evaluation. You can listen to free recordings of two webinars by Michael Q. Patton here. (You may need to download software first). “But I’m too busy managing my program to sit down and listen to webinars,” you protest.

I understand all too well! But I listened to both webinars recently while doing housework and came away with very helpful guidance for a current project.

(To be continued)


A Guide to Navigating the Evaluation Maze: “A Framework for Evaluation” from the Centers for Disease Control and Prevention (CDC), Part 1


This weekend I found myself navigating the underground tunnel system of a local university on my way to the library. Although this was not my first time, it got me thinking of those navigating it for the first time. If not for the signs, newer navigators would have either run into dead ends or ended up walking in circles. Evaluations can also go around in circles or run into dead ends. In this post I aim to whet your appetite for the evaluator's version of signs and guideposts: evaluation models or frameworks.

Some think of them as evaluation road maps or mental models. Usually such models are based on years of experience and/or research. Following such models will help to spare you costly mistakes.

Today I will briefly introduce the Centers for Disease Control and Prevention’s (CDC) Framework for Evaluation. A thorough presentation is beyond the scope of my post, so please review the references I have included for future study.

A Framework for Evaluation.

Source: Centers for Disease Control and Prevention (CDC), Office of the Associate Director for Program (OADPG)

Step 1: Engage Stakeholders

Stakeholders include everyone linked to or benefiting from your program: for example, participants, program staff, national staff, collaborators, funders and even evaluators. Identify a small number of key stakeholders and involve them as much as possible throughout the lifespan of the evaluation. Such involvement is crucial because it ensures that stakeholders, especially those belonging to vulnerable populations, are adequately represented. A range of active and passive involvement strategies may include:

  • forming an evaluation committee
  • promoting engagement via
    • face to face meetings
    • capacity building activities
    • teleconferences
    • e-mail or discussion groups
    • simple interviews or surveys of stakeholders
    • letters and newsletters to inform them of evaluation activities and key decisions

The type of involvement strategy you choose should be custom-tailored to the specific needs of your particular program and stakeholders. Pay close attention to organizational climate and of course, timing!

Step 2: Describe the Program

Describing the program can be much harder than it seems! Various stakeholders may have differing ideas of what the program entails or should entail. Even an individual stakeholder's perspective can evolve over time. An iterative process is important to get everyone on the same page and to determine whether everyone's intentions for the program reflect the actual program goals.

Once program goals are clarified, work backwards to develop a logic model, which is a flow chart demonstrating the relationships between program components and the outcomes you are seeking. For instance, a simple (hypothetical) logic model for a heart-health program might read: trained facilitators and funding (inputs) → weekly nutrition workshops (activities) → 200 residents attend (outputs) → improved knowledge and eating habits (intermediate outcomes) → fewer coronary events (ultimate outcome).

Step 3: Focus the Evaluation Design

Focused evaluations are the most useful. Prioritize and focus your evaluation questions in collaboration with the small number of key stakeholders. Consider how to best serve their needs and how to prioritize the competing needs of various stakeholders. Then choose the most appropriate evaluation methods that will provide you with the best answers to those evaluation questions. Seek to balance:

  • efficiency and practicality with
  • the quality and type of data and the level of accuracy needed.

To be Continued…

Sources/Further References:

Centers for Disease Control and Prevention (CDC), Office of the Associate Director for Program (OADPG). (2011). A Framework for Evaluation. Retrieved February 6, 2012, from www.cdc.gov/eval/framework/index.htm. A reliable, easy-to-navigate website hosted by the CDC.

Milstein, B., Wetterhall, S. and the CDC Evaluation Working Group. (2012). A Framework for Program Evaluation: A Gateway to Tools. In J. Nagy & S. B. Fawcett (Eds.), The Community Tool Box. Retrieved February 6, 2012, from http://ctb.ku.edu/en/tablecontents/sub_section_main_1338.aspx. The Community Tool Box is an online tutorial designed especially for community-based nonprofits and hosted by the University of Kansas.


Which Is More Important—the Means or the Ends? Process, Impact and Outcome Evaluations


One of my childhood memories is of my fifth-grade English teacher posing this question to us as she analyzed a piece of classical literature: do the means justify the ends? She qualified her question with, "I know you are too young to understand this, but one day you will." I wonder how many of us ask ourselves that question while evaluating programs. In a way, we're also asking, "Which is really more important to us: the means or the ends, that is, the process or the outcome?" Today we will review simple definitions of three types of evaluations: process evaluations, impact evaluations and outcome evaluations. Introduction to Program Evaluation courses often include this component. More experienced evaluators, I encourage you to consider critically: if forced to choose just two of the following three options within a particular evaluation situation, which two would you rank as more important, and why?

Process Evaluations

These evaluate the activities and methods a program uses to achieve its outcomes. These activities should be directly linked to the intermediary and ultimate outcomes that your program targets. Examples of measures and evaluation questions include:

  • number and demographics of participants served,
  • number of activities such as number of prevention workshops conducted
  • Were activities implemented as planned? How closely was the curriculum followed?

Impact Evaluations

These measure intermediary outcomes, such as changes in knowledge, attitudes and behaviors, that specifically link to the ultimate outcomes your program targets. In order to capture these changes, make sure to measure these items before (pre-test or baseline data) and after (post-test) your intervention. For example, a heart disease prevention program may provide workshops targeting intermediary outcomes such as changes in knowledge, attitudes and behaviors related to nutrition and exercise. We can view these intermediary outcomes as a "go-between" connecting the program's procedures with its ultimate outcomes. A quick note: theory-driven and research-based program activities and measures are much more likely to actually produce (and demonstrate) the outcomes a program is seeking.
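
To make the pre/post logic concrete, here is a small, entirely hypothetical illustration (in Python) comparing baseline and post-workshop knowledge scores; the numbers are invented, and a real evaluation would also consider sample size and statistical significance.

# Hypothetical pre/post comparison for an intermediary outcome
# (e.g., nutrition knowledge scores on a 0-10 scale). All scores are invented.
pre_test = [4, 5, 3, 6, 5, 4]    # baseline scores collected before the workshops
post_test = [7, 8, 6, 9, 7, 6]   # scores from the same participants afterwards

pre_mean = sum(pre_test) / len(pre_test)
post_mean = sum(post_test) / len(post_test)

print(f"Mean score before: {pre_mean:.1f}")   # 4.5
print(f"Mean score after:  {post_mean:.1f}")  # 7.2
print(f"Average change:    {post_mean - pre_mean:+.1f} points")  # +2.7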

Outcome Evaluations

These evaluate changes in the ultimate outcomes your program is targeting. Again, remember to collect this data before and after your intervention. In our heart disease prevention program, we might measure changes in the number of coronary events, such as heart attacks. In general, this level of outcome can be harder to measure, especially where stigma or shame is associated with the outcome in question.

Process Evaluation ←→ Impact Evaluation ←→ Outcome Evaluation

Thoughts

In program evaluation, the means and the ends are equally critical. Let us consider the importance of process evaluations, since it is so easy to overlook the means. The process indeed determines the outcome. In a well-designed program, process measures link closely to intermediary outcomes, which in turn link closely to final outcomes. If the process evaluation reveals shortfalls, that is, if the program has not really been implemented as planned, the final outcomes may suffer. A good process evaluation also provides an adequate program description over the course of the evaluation, which is so important! A program description portrays what the program is really all about. This is not easy to accomplish but is worth the effort. What the program is at its core will determine the outcomes it produces.

Different programmatic contexts call for different evaluations. It is beyond the scope of this post to provide an exhaustive list of the different types of evaluations, but here are a couple of resources:

http://www.cdc.gov/NCIPC/pub-res/dypw/03_stages.htm

Program Evaluation, Third Edition: Forms and Approaches (2006) by John M. Owen.

Question:

Evaluators, if forced to choose just two of these three options, which two would you rank as more important within your particular program context, and why?

Announcement:

Who: The Center for Urban and Regional Affairs (CURA) at the University of Minnesota is offering

What: a two-day “Introduction to Program Evaluation” workshop by Stacey Stockdill, within its Spring Conference: Evaluation in a Complex World: Changing Expectations, Changing Realities

When: Monday, March 26-Tuesday, March 27, 2012.

Where: University of Minnesota – Saint Paul Campus, Falcon Heights, MN 55113

Scholarships may be available for the Introduction to Program Evaluation workshop. Scholarship application deadline: February 24, 2012.

For more information: http://www.cura.umn.edu/news/scholarships-available-two-day-introduction-program-evaluation-workshop

Contact Person: William Craig


Four Differences between Research and Program Evaluation


Program evaluations are "individual systematic studies conducted periodically or on an ad hoc basis to assess how well a program is working."¹ What was your reaction to this definition? Has the prospect of undertaking a "research study" ever deterred you from conducting a program evaluation? Good news! Did you know that program evaluation is not the same as research and usually does not need to be as complicated?

In fact, evaluation is a process in which we all unconsciously engage to some degree or another on a daily, informal basis. How do you choose a pair of boots? Unconsciously you might consider criteria such as looks, how well the boots fit, how comfortable they are, and how appropriate they are for their particular use (walking long distances, navigating icy driveways, etc.).

Though we use the same techniques in evaluation and research, and though both methods are equally systematic and rigorous ("exhaustive, thorough and accurate"²), here are a few differences between evaluation and research:

Program Evaluation Focuses on a Program vs. a Population

Research aims to produce new knowledge within a field. Ideally, researchers design studies so that findings can be generalized to the whole population, that is, every individual within the group being studied. Evaluation focuses only on the particular program at hand. Evaluations may also face added resource and time constraints.

Program Evaluation Improves vs. Proves

Daniel L. Stufflebeam, Ph.D., a noted evaluator, captured it succinctly: "The purpose of evaluation is to improve, not prove."³ In other words, research strives to establish that a particular factor caused a particular effect; for example, smoking causes lung cancer. The requirements to establish causation are very high. The goal of evaluation, however, is to help improve a particular program. In order to improve a program, program evaluations get down to earth. They examine all the pieces required for successful program outcomes, including the practical inner workings of the program, such as program activities.

Program Evaluation Determines Value vs. Being Value-free

Another prominent evaluator, Michael J. Scriven, Ph.D., notes that evaluation assigns value to a program while research seeks to be value-free.⁴ Researchers collect data, present results and then draw conclusions that expressly link to the empirical data. Evaluators add extra steps: they collect data, examine how the data line up with previously determined standards (also known as criteria or benchmarks) and determine the worth of the program. So while evaluators also draw conclusions that must faithfully reflect the empirical data, they take the extra steps of comparing the program data to performance benchmarks and judging the value of the program. While this may seem to cast evaluators in the role of judge, we must remember that evaluations determine the value of programs so they can help improve them.

Program Evaluation Asks "Is It Working?" vs. "Did It Work?"

Tom Chapel, MA, MBA, Chief Evaluation Officer at the Centers for Disease Control and Prevention (CDC) differentiates between evaluation and research on the basis of when they occur in relation to time:

Researchers must stand back and wait for the experiment to play out. To use the analogy of cultivating tomato plants, researchers ask, "How many tomatoes did we grow?" Evaluation, on the other hand, is a process unfolding "in real time." In addition to counting tomatoes, evaluators also inquire about related areas: "How much watering and weeding is taking place?" "Are there nematodes on the plants?" If evaluators realize that activities are insufficient, staff are free to adjust accordingly.⁵

To summarize, evaluation: 1) focuses on programs vs. populations, 2) improves vs. proves, 3) determines value vs. stays value-free and 4) happens in real time. In light of these 4 points, evaluations, when carried out properly, have great potential to be very relevant and useful for program-related decision-making. How do you feel?

References:

  1. U.S. Government Accountability Office. (2005). Performance Measurement and Evaluation. Retrieved January 8, 2012, from http://www.gao.gov/special.pubs/gg98026.pdf
  2. Definition of "rigorous." Retrieved January 8, 2012, from google.com
  3. Stufflebeam, D.L. (2007). CIPP Evaluation Model Checklist. Retrieved January 8, 2012, from http://www.wmich.edu/evalctr/archive_checklists/cippchecklist_mar07.pdf
  4. Coffman, J. (2003). Ask the Expert: Michael Scriven on the Differences Between Evaluation and Social Science Research. The Evaluation Exchange, 9(4). Retrieved January 8, 2012, from http://www.hfrp.org/evaluation/the-evaluation-exchange/issue-archive/reflecting-on-the-past-and-future-of-evaluation/michael-scriven-on-the-differences-between-evaluation-and-social-science-research
  5. Chapel, T.J. (2011). American Evaluation Association Coffee Break Webinar: 5 Hints to Make Your Logic Models Worth the Time and Effort. Attended online on January 5, 2012.


How to Address Others’ Fears about Program Evaluation–Creating a “Culture of Evaluation” (Part 2)


Previously we covered part 1 of this post.

Step 4: “Be the Early Bird…”– Plan Evaluation Early

The best time to plan an evaluation is before program implementation has begun; plan the evaluation during the program planning stage. This reduces back-tracking and helps create a culture of evaluation more naturally. It also prevents having to come in with dramatic changes later. People tend to resist change, and late changes can create even more resistance to evaluation. Dealing with such resistance can be likened to trying to turn a huge ship whose course has already been set. It can be a difficult task indeed, but if this is where your program is, it is still worth the effort!

Step 5: “Get Everyone Involved”—Engage Stakeholders

And now for the most critical point: engage all stakeholders throughout the evaluation process. A stakeholder is anyone who has an interest in your program: national staff, administrators, board members, partners, program implementers, volunteers, program participants, etc. Begin by asking for their input, and do your best to learn from them. If they see that no agenda is being pushed and that everyone is committed to learning from one another, they may drop their defenses and openness may gradually follow. Encourage open discussion of concerns. Sometimes enlisting your worst critic, given a certain degree of mutual trust, can benefit your program; critics of evaluation can provide valuable, candid reality checks. Due to the variety of interests involved, however, conflict may arise. People skills such as conflict resolution are vital in your program's evaluator.

While being careful not to push an agenda, constantly look for teachable moments. A teachable moment, as you may know, is a natural window of opportunity that arises when the person might be more open to what you are trying to communicate. During these teachable moments:

  • share with them what others are doing based on your review of the literature
  • help them think of evaluation more as a way to improve your program and less as a threat to the program
  • help overcome their personal fears of negative evaluation results
  • emphasize how they will benefit from the evaluation
  • commit to sharing evaluation results with all stakeholders in a readable format; negotiate these agreements ahead of time with administrators. Sharing results can motivate some of your stakeholders to support evaluation efforts.
  • promote trust by emphasizing ethical treatment of evaluation participants: protecting their rights, maintaining confidentiality, doing no harm, etc.

Again, the action steps are:

1) Teach the language of evaluation

2) Mentor and role-model

3) Collaborate with like-minded professionals

4) Plan evaluation early

5) Engage stakeholders

What challenges have you faced with getting others on-board with evaluation efforts?


How to Address Others’ Fears about Program Evaluation–Creating a “Culture of Evaluation” (Part 1)


Now that you have begun to address your own fears about program evaluation (see the earlier posts in this series), what about everyone else's?

In this post we will focus on addressing others' fears about program evaluation. These "others" may include administrators, partners, program staff and participants. As you know, such fears can be harder to address, and there is no cure-all. But consider using a suggestion or two from this list of ways to create a culture of evaluation. Vince Hyman, former publishing director of Fieldstone Alliance, discusses the concept of evaluation culture in his article "Create a Culture of Evaluation." The following is my commentary, which applies this concept to my experiences of culture and program evaluation. I am a product of multiple cultures, having picked up various aspects of them at different stages in my life. In my experience, culture was most effortlessly instilled in the earlier stages of life, but absorbing it continues to be a gradual, life-long process. Among the aspects that differentiate cultures are language, practices and ways of thinking. Let us apply this to evaluation by considering the following action steps that can help develop a culture of evaluation.

Step 1: “Talk the Talk”—Teach the Language of Evaluation

Familiarize yourself with or continue learning the language of evaluation by reading evaluation handbooks and blogs from credible sources. If you are too busy, aim for at least 5-10 minutes or a page a day. Then speak and patiently teach the language of evaluation, promoting the benefits of evaluation whenever possible. Take time to consider all those who may be resistant to evaluation: explain and define any unfamiliar evaluation-related terms, building on previous concepts and ideas that are more familiar to them.

Step 2: “Walk the Walk”—Mentor and Role-model

Mentor junior program staff. Role-model sound evaluation practices and explain evaluation logic, or evaluation-related ways of thinking. This will help them in turn to adopt and promote the culture of program evaluation, which will help foster sustained evaluation efforts. (I will be outlining evaluation models that promote sound evaluation practices soon.) Staff and administrators' nightmarish experiences with evaluation could very likely have been the result of poor evaluation practices.

Step 3: “Birds of a Feather”–Collaborate with Like-minded Individuals and Organizations

Ever notice how, in general, people of similar sub-cultures (whether based on ethnicity or shared values) tend to gravitate toward each other? An existing community helps to draw newcomers to the group as well. Do your best, within reasonable limits, to start by working with those who already possess an evaluation-related frame of mind. For health-related programs, one option might be to hire graduates of accredited community health education programs. This ensures a background in health program evaluation and increases the likelihood of shared evaluation-related goals and values. Nurture such collaborations, for they can in turn help draw others to participate in the culture of evaluation. Have you experienced any challenges or successes with addressing others' fears about evaluation?

Stay tuned for an important point in Part 2!


How to Address Fears about Program Evaluation


Nervous about Evaluation?

In many ways a program evaluation can be like a well-child doctor’s appointment. Observations are made, evidence collected and advice dispensed to the caregivers. Someone I know, despite being a devoted mother, dreaded well-child doctor’s appointments for her firstborn. The visits made her nervous. Let us pause to consider why check-ups made this new mother nervous. She did her very best with all the resources available to her; yet being a perfectionist, she worried about hearing of areas that needed improvement.

How to Get the Most Use out of Program Evaluations

The caregiver's ability to let go of these negative emotions and be truly open to the practitioner's advice can determine how useful the visit will be. But so many times, it is easier to listen to our own feelings than to receive professional advice that can be hard to swallow (no pun intended). And although that gut feeling can prove important in certain situations, there is great value in basing decisions on the objective, hard evidence that a program evaluation generates. Easier said than done! Despite your hard work and efforts, have you or your program's "caregivers" ever felt somewhat apprehensive at the thought of a program evaluation? Here are some basic ways to address fear of evaluation.

How to Deal with Fears about Program Evaluation:

Focus on the Positive:

A breakthrough for the perfectionistic mother came from a friend’s advice. The friend told her to keep telling the pediatrician all the positive things that the mother had been doing to promote the child’s health.

When the topic of program evaluation was broached with some tension in a room full of facilitators, an experienced manager said something to the effect of, “Evaluations show us areas of improvement so we can provide the best service. Yet they also provide us opportunities to recognize you for your achievements!”

Shelve the Criticism:

An expert who taught a grant writing workshop for university staff once shared a secret with her participants. My subsequent experiences have also confirmed the truth behind this advice: Yes, listening to criticism about something that is very near and dear to your heart can be difficult. But tuck the criticism away in your drawer for a day or two. Then come back to it with a fresh mind.

Focus on the Remedy

It is easy to remain discouraged about a program that seems hopeless. But concentrate on the small, concrete and practical steps you can take day by day to improve a program component in need of some TLC (tender loving care). Be a wise consumer: make these practical recommendations one of the deliverables expected of your evaluator.

Think Prevention!

Think of program evaluation as a "check-up" for your program. An evaluation can help identify not only problems with a program's effectiveness but also implementation-related issues that can undermine outcomes. Evaluations can identify these situations ahead of time and help prevent a worse, more complicated problem from brewing. A stitch in time indeed saves nine!

How to be Wise about Program Evaluation

Our fearlessness about program evaluations must be tempered with a dose of wise caution:

  • Educate yourself on program evaluation as much as possible so that you can be a wise consumer or implementer of evaluations.
  • If you are not conducting a DIY (Do It Yourself) evaluation, get to know your evaluator and his/her qualifications; check references. Professional ethics play a critical part in all the functions that an evaluator carries out.

As you already know, experience begets wisdom. And yet, although our individual experiences can make us wise, individuals still have blind spots. There is a great degree of wisdom in our collective experiences:

  • Partner with peers or associates to conduct program evaluations.
  • Program evaluations may present new challenges, depending on the specific situation. There may be ways to deal with these on a case-by-case basis in an ethical and responsible manner, so be prepared to consult with others who are trained and experienced in program evaluation.

Did you find this post helpful? Do you have any concerns about program evaluation?


How to Maximize Funding by Tapping into Hidden Potential: Program Evaluation


Is Your Program “Stuck” Due to Inadequate Funding? Consider Program Evaluation

Recently our car would not start. You guessed it: it was the battery. A mechanically inclined friend made a casual comment about the worth of car batteries that can provide insight into maximizing funding for your programs. He said something to the effect of, "Batteries have all that potential energy stored up in them. They have all that energy to get your car going. But once you get your car started, you technically don't really need that battery anymore. You could drive around for hours without a battery." He does not recommend practicing this, however. But the point was made: there is an incredible amount of energy hidden in a car battery, just waiting to be converted. And I never appreciated that powerhouse of energy until we got stuck. Is your program "stuck" due to inadequate funding?

Evaluating a program may be the tool you need to unlock the hidden potential “stored up” in your program. What is the first thought that comes to mind when you think of evaluation? A thick, dusty binder full of barely comprehensible information that no one ever uses? The good news is that evaluation standards have changed. One of the benchmarks that characterizes a good evaluation is utility. A successful evaluation is useful, practical and down-to-earth.

How Program Evaluation Can Help

It is a grim reality that funding opportunities have dwindled in the present economic climate. In their book “The Only Grant-writing Book You’ll Ever Need,” grant writing experts Ellen Karsh and Arlen Sue Fox note, however, that funding opportunities still exist but the competition is more intense. Applicants must prove that they are “high-functioning organizations” capable of effectively producing the outcomes that funders expect. Program evaluations help to move your organization towards that goal. Or if you are already high-functioning, a program evaluation can help prove your capabilities.

Here are four ways that evaluations can help you do so:

  1. Evaluations monitor whether activities are conducted as planned.
  2. Evaluations establish program logic, that is, how activities work together to produce desired outcomes.
  3. Evaluations identify effective and healthy program components: those that are able to produce the desired outcomes.
  4. Evaluations reveal ways to heal ailing components.

Putting one or more of these evaluation functions to good use helps demonstrate that your program is organized and effective in producing specified outcomes.

Even if you decide not to focus on grant applications, the useful evidence that program evaluations yield can help you win the support of private donors. Evaluation data can help set your organization apart and get the attention of donors. It can help convince them that your program will give them the most for their money.

Evaluations can help you tap into your program’s hidden potential by generating practical information that can powerfully launch your program onward.

What has your experience been? What do you like/dislike/loathe about program evaluations? What concerns do you have about them?

——————

For more resources, see our Library topic Nonprofit Capacity Building.

______________________________________________________________________________________________________

Priya Small has extensive experience in collaborative evaluation planning, instrument design, data collection, grant writing and facilitation. Contact her at priyasusansmall@gmail.com. Visit her website at http://www.priyasmall.wordpress.com. See her profile at http://www.linkedin.com/in/priyasmall/