

Enhancing Caring Communities: A Toolkit for Organizations to Better Support Caregivers

Implementing your program and evaluating its impact are the final components discussed in this toolkit. Implementation and evaluation are ongoing processes that involve continually assessing how your program is being delivered and whether it is doing what you intended it to do. When evaluating your program, here are some important questions to keep in mind:

What type of evaluation is needed?

A newly developed service or program requires a very different evaluation strategy than an existing one. If the intervention is new and untested, a traditional evaluation design is necessary. Remember the television advertisements for Crest toothpaste where one group of kids brushed with Crest while a control group of comparable kids did not? The idea behind this type of experimental evaluation design is that the two groups are equivalent except for the intervention. If, however, you are implementing an evidence-based program that has already been evaluated, a different type of evaluation is needed. An evaluation of an already-tested program focuses on whether the services being delivered follow the intended design. Is the program serving the right people? Is the intervention being delivered the way it was designed? Are staff properly trained? Are the outcomes being achieved as expected? In many ways, these descriptive evaluation questions can also be considered critical elements of a quality improvement model.

Who will conduct the evaluation?

This question is somewhat contingent on the type of evaluation you are conducting. An experimental design needs an independent evaluation to ensure integrity, but a more descriptive, ongoing evaluation effort can be conducted internally, externally, or as a hybrid. Developing internal capacity to assess program performance is a good goal, and some organizations have built the expertise to achieve it. Others rely on an external evaluation unit. A third approach is a hybrid model in which an evaluation partner handles some tasks but also trains the agency to carry out others internally. The strategy may evolve over time, but the critical issue is that a strategy needs to be in place.

What outcome measures will we examine? What is the goal we are trying to achieve, and what could we measure to tell us that our program is doing what we set out to have it do?

As you thought about your new program, you had ideas about what you wanted the program to do. If the program is designed to better support caregivers, what are the expected outcomes, and how will you determine whether they have been achieved? Some of these measures will be purely descriptive; these are sometimes referred to as outputs. How many individuals have you served? What kind of support did they receive? How often, and at what level of intensity? These descriptive measures are critical for understanding what services are being delivered, but they don’t tell you whether your outcomes have been met. Outcomes provide the information needed to assess program effectiveness: for example, was the person able to remain living in the community, avoid unnecessary hospital visits, or report high satisfaction with the services received? Both outputs and outcomes are important for program evaluation.

How will we collect data to evaluate how our program is performing?

Organizations that are in the service business are not necessarily well suited for evaluation data collection. Information on outputs, such as who is being served and the type and frequency of services received, is now collected by many agencies, as these measures have become standard requirements from funders. Collecting data about outcomes has proven to be a bigger challenge for many organizations. As with determining who will conduct your evaluation, there are several data collection approaches that can be used. One is to partner with a local research organization, such as a university or non-profit entity that can help collect data at low cost. Another option is to build internal evaluation capacity. A third strategy is a hybrid approach that uses a local partner to train agency staff to collect data on an ongoing basis; for example, care managers can be trained to collect consumer service satisfaction data as part of their routine assessment process, allowing the agency to collect data at very low cost. During the design process, you will need to develop a data collection strategy that will work for your organization’s particular circumstances.

How will we measure program quality?

Once the data are collected, you need to know how you will analyze and, most importantly, use them. There is no bigger mistake an organization can make than to collect data and not use it to improve the services provided. Thus, you have to develop a mechanism for analyzing data, along with a process for understanding the results and determining what to do with them. Again, you might seek external help with analyzing and interpreting the data you collect, or develop internal resources. But regardless of your approach, you should assemble an internal group to review the data and determine how the results should be used. There may be resistance to sharing data that differ from expectations. It is not uncommon for evaluation results to be different from what was expected, and sometimes results need to be verified with additional data collection. Having a mechanism to discuss these issues is crucial.
