Evaluating prevention programs with the Results Mapping evaluation tool: a case study of a youth substance abuse prevention program

Kristen Reed, Allen Cheadle, Beti Thompson, Evaluating prevention programs with the Results Mapping evaluation tool: a case study of a youth substance abuse prevention program, Health Education Research, Volume 15, Issue 1, February 2000, Pages 73–84, https://doi.org/10.1093/her/15.1.73


Abstract

The harmful effects of alcohol and other drug abuse are widespread. Our health care, social service, education and legal systems are strained under the impact of substance abuse, not to mention its economic costs. Consequently, effective strategies that prevent substance abuse must be identified and replicated. Yet, user-friendly and cost-effective evaluation tools for community-based substance abuse prevention programs are rare. A recently developed tool that shows promise in overcoming some of the barriers to evaluating prevention programs is `Results Mapping'. Results Mapping documents and aims to quantify the contributions of a program to future outcomes of its target population, with the intention of making data meaningful at both the program and funding agency level. This case study was conducted to assess the feasibility of implementing the Results Mapping evaluation tool for community-based prevention programs. The study assessed qualitatively how well Results Mapping worked for one community-based substance abuse prevention program, as well as how much time and funding it took to implement this new tool. Results suggested that Results Mapping may be a valuable documentation, planning and learning tool, but funding agencies should be cautious about using Results Mapping scores to determine funding allocations.

Introduction

The annual cost of alcohol and other drug abuse approaches $200 billion nationally; in Washington State, this amounts to $1.8 billion each year, or $382 per non-institutionalized person (Washington State Division of Alcohol and Substance Abuse, 1995, 1996). Alcohol and other drug abuse also strains our health care, social service, education and legal systems.

Substance abuse prevention in Washington State has historically been conducted through local efforts in schools and communities. These programs are now being required to report outcome results in addition to process data (e.g. numbers served). Yet, most prevention program managers do not have the knowledge, skills, time or tools to conduct outcome-based program evaluations. Furthermore, few user-friendly and effective evaluation tools exist that community-based prevention programs can use to demonstrate their effectiveness. Community-based health promotion research projects such as COMMIT (Thompson et al., 1995), the Minnesota Heart Health Program (Luepker et al., 1994), the Stanford Five-City Project (Fortmann et al., 1993), the Pawtucket Heart Health Program (Carleton et al., 1995) and the Working Well Trial (Abrams et al., 1994) have conducted rigorous evaluations of program impacts. However, these methods are not easily transferable to non-research-based programs.

User-friendly evaluation tools are not always available, in part because it is difficult to conduct rigorous, experimental evaluations in applied settings. Rigorous evaluations require randomization, controlled environments and comparison groups. Unfortunately, community-based prevention programs often cannot meet these criteria. The majority of community-based prevention programs do not have the resources to implement large and expensive evaluations (Higginbotham, 1992), unless a funding agency is willing to provide additional funding. In addition, community-based prevention programs often consist of multiple activities implemented simultaneously, making it difficult to identify which elements are effective and which are not. These activities are constantly changing and evolving; as Pirie et al. (Pirie et al., 1994) stated, `community programs generally cannot be tightly controlled'. Therefore, using evaluation methods which require strict adherence to protocols is not feasible. Similarly, it is often impossible for a community-based prevention program to use control groups and/or to randomly assign individuals or groups to experimental and control conditions. This can be due to lack of funding, lack of knowledge, too few subjects for such a study design and/or a reluctance to deprive those who most need the services of receiving them. Finally, it is difficult to isolate outcomes when other organizations are implementing prevention efforts at the same time. A program often cannot determine whether favorable impacts are due to its own efforts, to other programs' efforts or, more likely, to some combination of the two.

Efforts to rigorously evaluate substance abuse programs, such as those by Botvin (Botvin et al., 1997) and Harris (Harris and Ludwig, 1996), have been successful. Numerous other rigorously evaluated prevention programs are discussed in Communities That Care Prevention Strategies: A Research Guide to What Works (Wong, 1996). Others [e.g. (Nelson-Simley, 1995) and (Kumpfer et al., 1993)] have conducted less rigorous, non-randomized trials, yet have still gained useful information. Due to the barriers discussed above, however, the designs used in these evaluations are difficult to replicate at the local program level. Brooks (Brooks, 1994) and Woodhouse (Woodhouse and Livingood, 1991) have suggested that alternatives to quantitative designs should be considered when evaluating substance abuse prevention programs: Brooks proposed that ethnography may be a useful tool, while Woodhouse explored the use of qualitative designs.

Another alternative to traditional quantitative evaluation designs is a new tool designed by Barry Kibel to overcome some of these barriers: `Results Mapping' (Kibel, unpublished). Results Mapping documents and aims to quantify the contributions of a program to future outcomes of its target population, with the intention of making data meaningful at both the program and funding agency level. This case study was conducted to explore the feasibility of implementing the Results Mapping evaluation tool for community-based prevention programs, in particular substance abuse prevention programs. One community-based substance abuse prevention program was selected to use the Results Mapping evaluation tool to evaluate its summer prevention activity. The study assessed qualitatively how well the Results Mapping tool worked for this program, as well as how much time and funding it took for a group of first-time users to implement.

Results Mapping

Kibel (unpublished) designed the Results Mapping evaluation tool to enable programs to score projects on the basis of how likely they are, on average, to contribute toward outcomes. Results Mapping is designed to enable programs to be judged on the quantity and quality of the results they produce, collecting key process data as well as tapping into the potential impacts the program is likely to produce. The aim was to bridge the gap between process and outcome evaluation. Results Mapping has been refined over the past few years through its implementation with several dozen programs across the US, including those aimed at school dropout prevention, juvenile asthma flare-up reduction, substance abuse prevention and early intervention, family support and preservation, and teen pregnancy prevention (Kibel, unpublished). It is important to note that Results Mapping was designed as a program evaluation tool, not a tool for conducting academic research.

The Results Mapping approach to evaluation was primarily designed for programs involved in `healing and transformation' (Kibel, unpublished) activities whose results are not easily captured through quantitative measures, such as substance abuse and youth violence prevention programs. Unlike programs such as immunization clinics, where immunization rates can be used to estimate how much disease was averted, the effectiveness of healing and transformation programs cannot be easily quantified. Attendance at a substance abuse prevention activity cannot tell us whether an adolescent will or will not use drugs. Satisfaction with the program also has little value in predicting substance abuse. At the same time, community prevention program staff and their sponsors wish to know if their program `works'. However, it has been extremely difficult to find evaluation tools that can be implemented at the community level and that can gauge whether prevention programs, such as substance abuse prevention programs, have been effective. Kibel developed the Results Mapping evaluation tool in an attempt to fill this need.

In Results Mapping, program staff write `stories'—narrative summaries—about the activities and accomplishments that occur, either as their program progresses or retrospectively. These stories are first mapped, then coded and scored using a seven-level hierarchy created by Kibel, as detailed in Table I. This unique hierarchy resembles Maslow's Hierarchy of Needs, the Precaution Adoption Model and Andrew Weil's Health and Healing (Kibel, unpublished).

Activities with a low potential impact, such as distributing health promotion brochures, receive a base score of 1, while activities of a specialized and sustained nature receive a higher base score of 5 or more. An example of a level 5 activity is mentoring, which pairs positive adult role models with high-risk youth who meet over an extended period of time. Activities which have a greater potential to impact the client positively receive a higher base score. (See Table I for the definitions and examples of each impact level.) Using this hierarchy, activities which were previously recorded only qualitatively or simply by counting the number of participants can now be ranked and scored according to their potential impact.

An activity also earns more points if it is implemented by someone other than a program staff person (a `change agent'): `dramatic results are likely only when staff combine their talents and resources with those of others in the community, particularly their clients' (Kibel, unpublished). Thus, by engaging `change agents' in activities, Kibel theorizes that greater potential exists for positive results. Furthermore, an activity earns additional points if it involves multiple change agents and/or reaches more of the target population. In computing a score for an activity, the impact and service provider scores are multiplied by a number from 1 to 10 according to a scale applied to the number of participants. (See Table II for the multiplier scale.) This scale is used to avoid exaggerating the points earned for low-level actions reaching large numbers of people (Kibel, unpublished). For example, if 5000 substance abuse prevention brochures were distributed, without a population multiplier scale this activity would earn 5000 points (since it is an impact level 1 activity). The population multiplier scale `essentially equates a very low-level result (scored as 1 point) reaching very many people (hence, a 10-point multiplier) to a maximum-level result being attained by one person (hence, a 1-point multiplier)' (Kibel, unpublished). Kibel uses a quasi-logarithmic scale since it `is easy to apply and seems to lead to story scores with good face validity' (Kibel, unpublished).
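To make the bracketed scale concrete, here is a minimal sketch of the Table II lookup in Python; the function name and structure are our own illustration, not part of the Results Mapping materials.

```python
def population_multiplier(count: int) -> int:
    """Return the 1-10 multiplier for a head count, following the
    quasi-logarithmic scale in Table II."""
    # (upper bound of bracket, multiplier), per Table II
    brackets = [(1, 1), (5, 2), (10, 3), (25, 4), (50, 5),
                (100, 6), (500, 7), (1000, 8), (10000, 9)]
    for upper, multiplier in brackets:
        if count <= upper:
            return multiplier
    return 10  # 10001 people and above

# 5000 brochure recipients fall in the 1001-10000 bracket:
assert population_multiplier(5000) == 9
# A single person earns the minimum multiplier:
assert population_multiplier(1) == 1
```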

In summary, the total score for each activity is computed as follows:

map score = (impact level * service provider multiplier) + (impact level * target population multiplier)

For example, 10 substance abuse prevention program volunteers distributed 5000 brochures to parents about talking to their children about alcohol use. The level of impact for this activity is 1 because the activity involves only written information (see Table I ). The multiplier for the 10 volunteers is 3 (see Table II ), while the multiplier for 5000 brochure recipients is 9 (see Table II ). Thus, the total score for this activity is 12: (1 * 3) + (1 * 9) = 12. In contrast, if 10 volunteers committed to a 1-year mentoring program (an impact level 5 activity) for 20 youth in high-risk environments, the total score would be: (5 * 3) + (5 * 4) = 35.
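As a check on this arithmetic, the sketch below reuses the hypothetical population_multiplier helper from the previous sketch to reproduce both worked examples.

```python
def map_score(impact_level: int, n_providers: int, n_targets: int) -> int:
    """Score one activity: (impact * provider multiplier)
    + (impact * target population multiplier)."""
    return (impact_level * population_multiplier(n_providers)
            + impact_level * population_multiplier(n_targets))

# 10 volunteers distribute 5000 brochures (impact level 1):
assert map_score(1, 10, 5000) == (1 * 3) + (1 * 9) == 12
# 10 volunteers mentor 20 high-risk youth for a year (impact level 5):
assert map_score(5, 10, 20) == (5 * 3) + (5 * 4) == 35
```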

Some activities do not include a target population. These are called `milestone' activities, where a client or participant of a program engages in a follow-through action as a result of previous learnings and/or experiences in the program. An example of a milestone is when a parent education participant practices newly learned skills with her child in the months that follow (coded as impact level 4).
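The text does not spell out how milestones are scored; judging from the `[self]' rows of Table III, only a single multiplier appears to apply, and the hedged sketch below (again reusing population_multiplier) assumes as much.

```python
def milestone_score(impact_level: int, n_actors: int) -> int:
    """Score a milestone activity, which has no separate target
    population. Assumption inferred from the [self] rows of
    Table III: only the actors' own multiplier applies."""
    return impact_level * population_multiplier(n_actors)

# 560 campers perform a community service project (impact level 3),
# matching map no. 5 in Table III:
assert milestone_score(3, 560) == 3 * 8 == 24
```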

In conclusion, the scores obtained by program activities through Results Mapping are intended to show how much a program has contributed toward future outcomes, such as reduced drug experimentation. Since there is no upper limit on the scores, the programs or activities that generate the highest scores are thought to be the most effective in contributing to future outcomes.
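A story's total score is then simply the sum of its activity scores; as a sanity check, the activity scores transcribed from the campers' story in Table III sum to the reported total.

```python
# Activity scores transcribed from the campers' story (Table III):
camper_scores = [12, 20, 16, 22, 24, 16, 16, 22, 15, 16]
assert sum(camper_scores) == 179  # matches the story total in Table III
```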

Methods

Selecting the prevention program

Organizations were eligible to participate in the study if they were:

The organization selected for this study met all of the above criteria. It focuses on the prevention of substance abuse by reducing prevalent risk factors known to increase youth substance abuse and by enhancing protective factors known to buffer the impact of risk factors. The organization has one staff person, a board of directors and other volunteers. The staff person and volunteers conduct several activities, including a week-long summer camp (which is repeated seven times at different locations) that has been implemented for the last 7 years. Program recipients include `campers' (youth attending each week-long camp), youth counselors (high school students), counselor assistants (middle school volunteers who are too old to attend camp but too young to be youth counselors) and AmeriCorps volunteers. In the past, the organization's staff and volunteers have used pre-/post-test evaluations with the campers to assess their knowledge regarding drug refusal skills. They have also tried surveying the parents, youth counselors and volunteers with little success.

Training and implementation of Results Mapping

Approximately 3 weeks before the beginning of camp in 1997, Kibel conducted a 2-day training for the organization's staff and volunteers. The training included an overview of Results Mapping, and practice in mapping and scoring stories. A planning session was also conducted where the participants used the Results Mapping tool to discuss ways in which the effectiveness of camp could be enhanced. Fifteen people participated, including the organization's staff person, two board members, representatives from four funding agencies, two AmeriCorps volunteers, two youth volunteers, a representative from the Washington State government, an interested colleague and the lead study investigator. The level of satisfaction with the training was high, according to verbal feedback from participants.

The initial data collection plan for the implementation of Results Mapping was to collect and map stories from the following groups: campers, youth counselors, AmeriCorps volunteers and counselor assistants. Stories were to be collected for each of the youth counselors, one story for the AmeriCorps volunteers and one story for the counselor assistants. For the campers, it was planned that each youth counselor would write one camper story per week. This would be done through `journaling': a 15 min session at the end of each day during which the youth counselors and AmeriCorps volunteers wrote in their journal about what they had learned that day and how at least one of their campers was progressing in the program. Since seven youth counselors worked each week, this would have been seven camper stories per week for each of the 7 weeks of the program (49 camper stories total).

However, difficulty in obtaining useful information from the youth counselors and the realization that all camper stories were essentially the same led to a change in plan. After the second week-long camp, it was decided to simply write one story for all of the campers, since they were being exposed to and involved in the same activities during each of the seven week-long camps. It also became clear that only one story existed for all of the youth counselors, since they were exposed to the same trainings and activities. One AmeriCorps volunteer became the central data collector. In the end, this AmeriCorps volunteer, the program director and the study investigator worked together to do the actual mapping of the stories from the program.

Data collection

In addition to the Results Mapping scores, the following data collection activities were carried out: recording the amount of time spent on implementing Results Mapping and conducting pre-/post-intervention interviews. The amount of time spent on training and technical assistance for the organization was monitored, and a time sheet was distributed to the individuals who anticipated being involved in the evaluation of the camp.

The pre-intervention interview was conducted the day before the Results Mapping training with the two individuals who were involved in prior camp evaluations: the staff person and a former AmeriCorps volunteer. Two post-intervention interviews were conducted within 1 week of completion of the Results Mapping for camp with the program director and the AmeriCorps volunteer involved in implementing Results Mapping. The interviews, recorded by pen and paper, were conducted by K. R. Each interview lasted approximately 1 h.

The pre-intervention interview covered the respondents' experience with past evaluation tools and methods, their expectations for the Results Mapping evaluation tool and whether their current evaluation strategies assisted them in determining the effectiveness of their program. The post-intervention interview covered the strengths and shortcomings of the Results Mapping evaluation tool, anticipated use of Results Mapping in the future, specific aspects of Results Mapping that had helped their program and whether the tool assisted them in determining the effectiveness of their program.

Results

Implementation

Table III is one of the four maps created during the project. Each map included: (1) who provided the service, (2) what service was provided or activity completed, (3) at whom the service was targeted and (4) what resulted from the service. One map each was created for the campers, the youth counselors, the AmeriCorps volunteers and the counselor assistants. A score was computed for each story's map, as described above.

Table III shows that 80% of the activities conducted for the benefit of the campers were coded at a low impact level (levels 1 and 2); the total score was 179. In the youth counselors' map, 70% of the activities scored at a low impact level (levels 1 and 2), with the other 30% scoring at a moderate level (level 4). The total score for the youth counselors' story was 127. Only a limited number of activities were conducted for the counselor assistants' benefit; however, two out of five of the activities scored at a moderate impact level (level 4), with a total score of 45 for the story. Finally, the AmeriCorps volunteers' story reveals that most of the activities were conducted by the volunteers themselves for the benefit of others; however, over 50% (five out of eight) of the activities reached a moderate impact level (levels 3 and 4), with a total score of 72 for the story.

Table IV displays the score summaries for each story and the four stories combined. None of the activities in any of the stories reached impact levels 5, 6 or 7. Furthermore, none of the camper activities reached an impact level above 3, and only 39 of the 179 points earned by the camper story were from impact level 3. However, the other three stories all contained a large number of points from level 4. When looking at the total scores for each story, it is interesting to note that the camper story, which included 560 campers, earned 179 points, while the youth counselor story, which included only 11 youth, earned 127 points due to higher impact level activities.

Time and cost estimates

Approximately 150 h in total were spent on implementing Results Mapping for the camp this year. However, the total drops to less than 41 h when the time the AmeriCorps volunteers and youth counselors spent on journaling is excluded. (Journaling was found to be non-essential in mapping the stories.) These hours translate into an approximate cost of $4200 when including the journaling and roughly $3300 when excluding it. See Table V for details on expenditures.

Estimates of both the hours and the cost of implementing Results Mapping for next year's summer camp are dramatically lower. The estimated minimum is approximately 8 h, covering the time the program director would spend mapping the stories. This translates into a cost of approximately $100. However, this cost would increase if staff turnover occurred and more training was needed.

Pre- and post-intervention interviews

A pre-intervention interview was conducted with both the program director and a former AmeriCorps volunteer who had participated in past evaluation activities. Both respondents noted that past evaluations of camp had been primarily pre-/post-test surveys aimed at measuring a change in attitudes regarding substance use/abuse and community service. Counts were also kept of how many children attended each camp and how many scholarships were given. When asked what they liked about the previous evaluation strategies, they had difficulty thinking of anything. When probed about what they did not like about the previous evaluation strategies, several points of dissatisfaction were raised. When asked whether their past evaluation strategies assisted them in determining the effectiveness of their program, on a scale from 1 to 5 (with 1 being `strongly disagree', 3 being `neutral' and 5 being `strongly agree'), both chose 3, neutral. One respondent disagreed with the statement that her past evaluation strategies, when looking at the whole program, assisted her in identifying what areas of the program needed to be changed/modified to enhance its effectiveness. However, she stated that her past evaluation strategies had helped identify specific needs in specific areas, e.g. refusal skills. Furthermore, some program changes had occurred in the past due to evaluation findings, such as changes in how information was presented to the campers. Finally, neither interviewee believed that their past evaluation strategies took too much time to implement; in fact, they felt that more time needed to be devoted to evaluation.

Two post-intervention interviews were conducted: one with the program director and one with a different AmeriCorps volunteer who was involved in implementing Results Mapping. They were first asked about what they have discovered to be the most attractive part of the Results Mapping evaluation tool. One person believed that the Results Mapping tool was useful in influencing how attentive staff and volunteers were to what they were doing with the campers. She believed that the youth counselors `wanted something decent to write' in their journals, causing them to be more aware of their actions with the campers. The other respondent believed the most attractive part of Results Mapping was the learning by the camp staff that happened as a result of using the tool. She believed the tool `caused the camp staff to think about what they were doing and why' they were doing it.

Several shortcomings of the Results Mapping evaluation tool were mentioned. One respondent thought that it was `cumbersome' because the campers changed from week to week. She believed it would be more useful in a program that was ongoing for a significant length of time. The other respondent felt that it takes more time to use Results Mapping than other evaluation tools because `you have to educate people on Results Mapping—it's not set up so that you can just walk in and use it'. She would like to see it in a format that could be `introduced quickly to those not familiar with it'. She also had some difficulty remembering what the different impact levels were, and thought it would be useful to have more detail on the impact levels, with specific examples of activities at each level for individual programs.

Regarding aspects of the Results Mapping evaluation tool that helped their program, one respondent believed that assigning values to the camp activities made the camp staff think about ways to carry out activities that were of a higher impact value. She commented, `programs don't normally think about the value levels of various activities'. The other person responded that `it was a tremendous tool' because it helped the camp staff to think about `how to move people up to the next [impact] level'. She also believed that Results Mapping helped the camp staff to get re-focused on the important goals of the program. She felt that they had `gone a bit astray' from their main goals the last few years. Finally, she said that the camp was `bumped up' to another level this year by using Results Mapping.

When asked to score Results Mapping on its ability to assist them in determining the effectiveness of their program on a scale of 1–5 (with 1 being `strongly disagree', 3 being `neutral' and 5 being `strongly agree'), one person gave a score of 2. She believed that since the camp was so short, they could not see the results. On the other hand, the other respondent gave a score of 4 (`agree'). When asked if the Results Mapping evaluation tool told her more than the other evaluations in past years did, her response was `absolutely'. She said that the pre-/post-tests did not work in the past because of the young age of the campers. Furthermore, she said that she would never have judged the success of her program on the past evaluations, but she could with Results Mapping. Both agreed with the statement that Results Mapping assisted them in identifying what areas of their program needed to be changed/modified in order to enhance the effectiveness of the program.

Finally, answers varied when respondents were asked to score the statement, `Results Mapping evaluation tool required too much of your time'. One respondent gave a score of 5 (`strongly agree') for the first 3 weeks of using the tool, but a score of 2 (`disagree') for the period after the data collection plans were revised, beginning in week 4. The other person gave a score of 3 (`neutral') because Results Mapping `took a greater number of people to do it' (staff, AmeriCorps volunteers and youth counselors) compared to just one or two people involved in the evaluation in the past.

Discussion

This paper presented the results of a case study of a first-time implementation of the Results Mapping evaluation tool in a substance abuse prevention program. The interpretation of the Results Mapping stories, maps and scores provided useful information to the camp staff and volunteers about their camp, and how it could be improved in the future. This suggests that Results Mapping may be a good tool to stimulate effective program planning, program revisions and implementation. However, difficulties in mapping camp activities accurately and consistently suggest that Results Mapping may be of more limited value for doing quantitative assessments of program activities. Thus, for example, Results Mapping may be of limited value to funding agencies who might want to allocate resources based on Results Mapping scores.

The case study results suggest that Results Mapping was a useful tool to conduct what was essentially `total quality management' or a formative evaluation for the camp. Results Mapping enabled the camp staff and volunteers to take a more objective look at their program's strengths and weaknesses. The lead study investigator witnessed the program staff and volunteers experience many revelations during the training. They were able to see and accept that camp was ineffective at providing high impact level activities for the campers. Yet, they also saw that they were doing a satisfactory job at providing opportunities for the youth counselors to experience higher level activities.

Furthermore, the Results Mapping process empowered the camp staff to make changes necessary to enhance the impact of the program on the program recipients. Unlike other evaluation methods where the responsibility, learning and power lie with the outside evaluator, Results Mapping required that all camp staff be involved in the evaluation at some level. This involvement ranged from using Results Mapping during planning sessions, to collecting data, to mapping and analyzing the data. Consequently, all camp staff members became cognizant of their actions and the result of their actions. This stimulated them to make changes in the camp activities over the course of the program.

While it provided a stimulus for making program improvements, Results Mapping was less effective in creating a robust, quantitative scoring system. For example, during the implementation, the program staff had difficulty determining which activities should be mapped within a story. No set of rules was included in the Results Mapping materials to guide which activities within a story should be mapped. For example, if a story were to be mapped on an individual camper, should an entry be mapped when a camper who was shy at the beginning of the week is willing to lead a song at the end of the week? Or should only the formal, structured activities be mapped? This was unclear to those involved in mapping the stories for camp. Thus, a decision was made, based on ease of implementation, to map only the structured activities, such as DARE presentations and community service lessons. Programs which are structured in a case management format, such as home visitation or mentoring programs, may be easier to map and therefore more appropriate candidates for the Results Mapping tool.

Thus, while Results Mapping may be a useful internal evaluation tool, it may have limitations as a funding allocation tool. One selling point of Results Mapping is that it is a method to quantify qualitative information, allowing for comparisons across programs. However, because of the lack of clarity on which activities should be mapped, subjectivity weighs heavily in the scoring. Thus, while the study investigators observed how useful the tool was in stimulating an objective review of the organization's program, this may not occur if an organization needs to compete with other programs for funding based on their Results Mapping scores. In that case, scores could easily be manipulated solely for the purpose of competition, not for the purpose of improving the effectiveness of the program.

On the other hand, it may be useful for funding agencies to review the stories which have been mapped for a program. These stories provide useful details, not generally available to funding agencies, about what exactly a program does. For example, currently, most funding agents for the camp in this study require a report only on the number of children who attended the camp. However, by reviewing stories such as the one in Table III , these funding agents could gain a much better understanding of the activities they are purchasing. In fact, the program director reported that she has already used information gained through the Results Mapping process in grant applications that were submitted for continued funding.

As mentioned above, the start-up cost of implementing Results Mapping for this program was approximately $4200. More than half of this cost can be attributed to the 2-day training which cost approximately $2500. The start-up costs may be prohibitive for many prevention programs. However, if this initial cost can be covered or reduced, it is apparent that future costs can be realistically covered by prevention programs. Thus, the Results Mapping tool, if used over time, could be a fiscally sound evaluation tool for prevention programs, barring a large amount of staff turnover.

A few limitations of this study should be noted. Because this is a case study of one program, the results may not be representative of other programs' experiences in using the Results Mapping tool. Other prevention programs, beyond substance abuse prevention, may have different results when using Results Mapping. Also, this study was not designed to evaluate the validity and reliability of the Results Mapping evaluation tool; future studies should be conducted to assess whether the tool is both valid and reliable in order to ensure evaluations conducted with Results Mapping are credible ( Patrick and Beery, 1991).

In summary, the results of this study suggest that programs should be encouraged to use Results Mapping as a formative evaluation tool for documentation, planning and learning, but funding agencies should be cautious about using Results Mapping scores to determine funding allocations.

Table I.

The seven-level hierarchy used to score the potential impact of an activity, with examples of activities which would score at each level

Impact level | Description | Example
7 | Client attained mastery level in growth area | Client started his/her own substance abuse prevention program for other clients in need
6 | Client made and sustained positive adjustment | Youth client was active long-term in a drug-free youth program (at least 6 months)
5 | Program delivered/client received specialized and/or sustained service | Long-term mentoring program for youth in high-risk environments
4 | Client made short-term, positive adjustment | Youth joined a drug-free youth program
3 | Program delivered/client received short-term service | 10-week parenting education program
2 | Program provided/client received personalized information via direct contact | Lectures and presentations on substance abuse prevention
1 | Program provided/client received general information via indirect contact | Brochures, books, posters, buttons on substance abuse prevention
Table II.

The multipliers for service providers and target population used when scoring activities within `stories'

Population | Multiplier
1 | 1
2–5 | 2
6–10 | 3
11–25 | 4
26–50 | 5
51–100 | 6
101–500 | 7
501–1000 | 8
1001–10000 | 9
10001+ | 10
Table III.

Campers' story: list of activities targeted at campers, with Results Mapping scores

Map no. | Service provider | Activity | Target population | Result | Impact level (a) | Service provider multiplier (b) | Target population multiplier (c) | Score (d)
1 | AmeriCorps (10) | distributed information about Camp Hot Spot | potential campers (5000) | received information about camp | 1 | 3 | 9 | 12
2 | AmeriCorps (10) | taught refusal skills game (Jeopardy) | campers (560) | played refusal skills game | 2 | 2 | 8 | 20
3 | DARE officer (1) | presented drug and alcohol resistance and safety factors | campers (560) | attended the DARE presentation | 2 | 0 | 8 | 16
4 | AmeriCorps (10) | presented community service lessons (3) | campers (560) | attended community service lessons | 2 | 3 | 8 | 22
5 | campers (560) | performed a community service project | [self] | — | 3 | 8 | — | 24
6 | staff (2) | presented tobacco prevention puppet show | campers (560) | attended puppet show | 2 | 0 | 8 | 16
7 | fire-fighters (3) | presented fire safety lesson | campers (560) | attended fire safety lesson | 2 | 0 | 8 | 16
8 | AmeriCorps (10) | presented AT CAMP skills through cheers and skits | campers (560) | did AT CAMP refusal cheers and watched skits | 2 | 3 | 8 | 22
9 | campers (30) | presented AT CAMP cheers and skits | [self] | — | 3 | 5 | — | 15
10 | campers (560) | shared learnings from camp (posters, books, etc.) | [self] | — | 2 | 8 | — | 16
Total | | | | | | | | 179

(a) Level in the Results Mapping `Impact Hierarchy' (see Table I).
(b) Multiplier for who provided the service (see Table II for the population multiplier scale).
(c) Multiplier for the recipient(s) of the service (see Table II for the population multiplier scale).
(d) Points for the activity, calculated as (impact level * service provider multiplier) + (impact level * target population multiplier).
Table IV.

Results Mapping score totals for camp: scores achieved by each story according to impact level

Impact level | Campers | Youth counselors | AmeriCorps volunteers | Counselor assistants | Total
1 | 12 | 9 | 0 | 9 | 30
2 | 128 | 70 | 18 | 8 | 224
3 | 39 | 0 | 18 | 0 | 57
4 | 0 | 48 | 36 | 28 | 112
5 | 0 | 0 | 0 | 0 | 0
6 | 0 | 0 | 0 | 0 | 0
7 | 0 | 0 | 0 | 0 | 0
Total | 179 | 127 | 72 | 45 | 423
Table V.

Estimated dollars spent on Results Mapping activities related to the camp

 | Hours spent on Results Mapping | Estimated pay per hour ($) | Estimated cost ($)
Evaluation trainer | 16 | 159.25 (a) | 2548.00
Program director | 18.75 | 12.00 | 225.00
AmeriCorps volunteer (involved in RM) | 24.25 | 10.00 (a) | 242.50
Study investigator | 16.75 | 15.00 (a) | 251.25
Youth counselors | 49.5 | 6.50 | 321.75
AmeriCorps volunteers | 63.5 | 10.00 (a) | 635.00
Totals | 188.75 | | 4223.50

(a) Estimated salary. The evaluation trainer's salary includes travel costs and a set daily fee. The estimated AmeriCorps salary includes a stipend and future college tuition waivers. The study investigator volunteered her time for this study; the salary here reflects the estimated cost of purchasing these services.

The author wishes to thank Jack Wilson, June Torre and Cynthia Kline for their invaluable work on this project.

References

Abrams, D. B., Boutwell, W. B., Grizzle, J., Heimendinger, J., Sorensen, G. and Varnes, J. (