
This is part three in a four-part series on program evaluation, dedicated to organizations and businesses that provide programs and services for women, girls, and communities of color (and for people with an interest in evaluation practice). Throughout this month, I will be discussing certain aspects of evaluation practice – from how I became interested in evaluation, to myths about evaluation, knowing what type of evaluation to perform, and bringing your community together to support evaluation – with the intent of highlighting the importance of evaluation not just from a funding perspective, but from an accountability and empowerment perspective.

So far, we’ve discussed some possible “WHYs” of evaluation practice (from the benefits of evaluating your programs and services, to seeing whether the objectives of your program or service are currently meeting the needs of your participants, to looking at the misconceptions of evaluation and how they can affect your work). Now, let’s switch gears and focus on WHAT you’re evaluating and WHEN to evaluate. This part of the series is trickier than the others, but I want to touch on the basics so that you have a working knowledge of this important part of evaluation. This is by no means a complete list. If you have a question about anything in particular (logic models, strategic plans, etc.) or would like me to give more examples of this week’s topic, please let me know in the comments below and I can follow up with additional blog posts outside of this series.

What Are You Evaluating?

In order to get to your destination, you need to know where you’re going. To do this, you need to develop a strategy that will guide how you look at your data, which will help you determine if you’re producing the results you’re expecting. This is where evaluation questions come in. An evaluation question helps you look at your data to see if your program or service is meeting its intended objectives.

There are two types of evaluation questions: a process evaluation question and a results-focused question. A process question asks how the program is functioning. How a program functions depends on a variety of factors, such as the length of the program, the number of participants, the activities being offered in the program, how the participants interpret and interact with the activities, and so forth. In other words, the who, what, when, and how of the program’s implementation. Process questions are especially useful when you’re in the beginning stages of planning your program; however, they can be asked throughout the program so that you’re always thinking ahead and adjusting your program’s implementation.

A results-focused question, on the other hand, asks whether the program is accomplishing the results you’re expecting. In other words, how effective is your program, and are your participants benefiting from the program in the way you intended? Results-focused questions typically follow the completion of a program.

Now that we know more about the types of evaluation questions, let’s look at when each question comes into play.

When Should You Evaluate?

There are two types of evaluations: formative evaluation and summative evaluation. In formative evaluation, you’re evaluating the program or service while it’s in the planning stages. A formative evaluation provides you with information on how best to revise and modify what you’re doing before your participants are exposed to the program or service. For example, piloting a new after-school art program for female elementary school students with special needs in grades 3 through 5 would call for a formative evaluation. You can gather feedback from the participants (likes and dislikes), instructors (types of artwork to do), or parents (skills observed in children, i.e., communication, staying focused, etc.).

In summative evaluation (the type of evaluation we commonly associate with program evaluation), you’re collecting data from your participants at the completion of the program or service to measure effectiveness. Taking our after-school art program example, you can evaluate the overall program at the end of the school year, and your findings can determine whether this program should continue and what to modify for improvement.

Ideally, formative and summative evaluations flow into each other, and you’ll always be looking for ways to enhance your program to make sure you’re meeting your program’s objectives. Process evaluation questions can coincide with either formative or summative evaluations, while results-focused questions coincide with summative evaluations. Depending on where you are in your program’s implementation, you’ll develop your evaluation questions accordingly.

To get a better understanding, check out this diagram, created by The Pell Institute to illustrate the differences between formative and summative evaluation, as described in Dean Spaulding’s book Program Evaluation in Practice: Core Concepts and Examples for Discussion and Analysis (2008).

[Image: The Pell Institute’s diagram comparing formative and summative evaluation]

Forming Your Questions

Your evaluation questions should reflect what’s important to the people who have an interest or concern in your program (most commonly referred to as stakeholders). Stakeholders can include board members, community leaders, students, and parents. As you gain more stakeholders, they may guide the future direction of your program. Your evaluation questions should reflect a variety of your stakeholders’ perspectives, the key components of your program, the needs you want the most information on, and the available resources that can help answer your questions.

Here are some useful steps, developed by the Department of Health and Human Services’ Centers for Disease Control and Prevention (CDC), that will help guide you as you develop your evaluation questions:

*Gather your stakeholders- Your stakeholders should be instrumental in helping you develop your questions. You can share the questions you’ve already developed with your stakeholders for feedback, and they can make suggestions for questions as well.

*Review supporting materials- Utilize your program’s logic model, strategic plan, and the curriculum being used, along with any useful data from the curriculum or supplemental materials that can serve as a point of comparison for the program results you may see.

*Brainstorm- Think about questions you want to answer for the overall program, or for a specific activity in the program. Pay attention to your program’s goals, strategies, and objectives in your strategic plan as well as the inputs, activities, and outputs in your program’s logic model.

*Sort evaluation questions- Use categories that are relevant to you and to your stakeholders. This helps to determine what resources you already have that can help you answer your questions.

*Decide which evaluation questions to answer- This is the biggest one. A variety of factors come into play here, and you should prioritize the questions that are important to your program staff and stakeholders, address important program needs, fall within the scope of your program’s objectives and strategies, can be answered using the resources you already have (including funding), can be answered within a particular timeframe, and can provide information for program improvement.

*Verify that your questions are linked to your program- Technically, each of these steps is important, but this one is particularly important because you don’t want evaluation questions that aren’t reflected at all in the actual program.

Let’s look at an example: An evaluation of 11 high schools across the Lubbock, Texas school district looked at peer influence as a key component of the district’s mandated abstinence-based sex education curriculum in delaying the onset of sexual activity. Let’s say that we expect the result of our program to be that students are heavily influenced by their peers in whether they are successful at delaying the onset of sexual activity (vaginal, anal, and oral sex). Here are some sample process and results-focused questions:

Process questions

How many 9th, 10th, 11th, and 12th grade students participate in the program? How consistently do students participate?

Who are the students involved in the program? (Age, race, gender, sexual experience at time of program, etc.)

What activities are involved in the curriculum? Who is involved in developing the curriculum?

To what extent are the instructors implementing the program?

Is the curriculum evidence-based?

Do students learn about pregnancy prevention and sexually transmitted infection prevention methods in addition to focusing on abstinence?

Are the activities being taught in the same manner across schools?

In what ways, if any, could the curriculum be improved?

Results-focused questions

Do students who participate in the program show higher levels of peer influence as an indicator of delayed onset of sexual activity?

Do instructors of the curriculum feel the participants are more prepared to delay the onset of sexual activity?

What do participants report gaining from the program?

If students report a higher level of peer influence as an indicator of delayed onset of sexual activity, is this standard across all 11 schools?

Did all students identify with being non-sexually active during the implementation of the program?

Get Ready for Part Four

Hopefully this brings additional clarity to what program evaluation is and how it can work for your organization or business. Next week ends the Program Evaluation for Women & Girls of Color series, and we’re going to discuss what types of data you will need for your program and how to collect it.

Interested in more extensive training on evaluation? Check out my consulting services page and contact me to hire me as an evaluation consultant for your program or to work with your staff to build up your evaluation skills. Together, we can take your program or service to the next level.

RAISE YOUR VOICE: Do you have any process or results-focused evaluation questions based on our example above? Share in the comments section below.