Chapter 21

Interpreting Evidence of Effectiveness: How Do You Know When a Prevention Approach Will Work for Your Community?

Corinne L. Peek-Asa and Sue Mallonee

21.1. INTRODUCTION

Many valuable lessons have been learned from injury prevention programs that have been implemented and evaluated. When thinking about implementing a program in your community, literature about program effectiveness can be a powerful tool in your decision making throughout the design, implementation, and evaluation of your program.

Existing evidence can help you identify the most promising approaches for your specific injury focus and community characteristics and can steer you away from programs that have not worked well in the past. It can also help you gain support for your project, because securing community support or resources is much easier when you can demonstrate that the approach has already been effective. Finally, existing evidence can help focus the design and implementation of your project and clarify what measures to include in your evaluation.

This chapter helps you find the existing evidence, understand what factors matter when reading it, and weigh the additional factors that are important when translating a program to your community.


21.2. FINDING EVIDENCE OF EFFECTIVENESS

A wealth of evaluation studies has been published in the peer-reviewed literature, and these studies can be accessed through various search databases. MEDLINE, the most widely used, is the U.S. National Library of Medicine's bibliographic database covering the fields of medicine, nursing, dentistry, veterinary medicine, health care systems, and the preclinical sciences. It provides access to abstracts of articles and citations from more than 4000 biomedical journals published worldwide. A list of available Internet MEDLINE searching services (e.g., PubMed, OVID), including the dates covered, whether registration is required for access, and whether the service is free, can be found at http://omni.ac.uk/medline. Other search engines, such as Google Scholar (http://scholar.google.com), search MEDLINE as well as nonmedical journals (e.g., criminal justice, architectural, engineering) and books. Because injury evidence exists in a number of different fields, this wider search can be very helpful.
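For readers who want to automate this step, the sketch below queries PubMed through NCBI's public E-utilities interface (the service behind PubMed searches). The query string and result limit are illustrative assumptions, not part of this chapter; any PubMed query would work in their place.

```python
# A minimal sketch of a programmatic MEDLINE search via NCBI's
# documented E-utilities "esearch" endpoint.
import json
import urllib.parse
import urllib.request

BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def search_pubmed(query, max_results=20):
    """Return a list of PubMed IDs (PMIDs) matching the query."""
    params = urllib.parse.urlencode({
        "db": "pubmed",         # search the MEDLINE/PubMed database
        "term": query,
        "retmax": max_results,  # number of IDs to return
        "retmode": "json",
    })
    with urllib.request.urlopen(f"{BASE}?{params}") as response:
        result = json.load(response)
    return result["esearchresult"]["idlist"]

# Hypothetical query: evaluation studies of smoke alarm programs.
print(search_pubmed('"smoke alarm" AND "program evaluation"'))
```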

When examining the peer-reviewed literature, be aware that there is a publication bias toward studies with statistically significant results (Dickersin & Min, 1993; Easterbrook, Berlin, Gopalan, & Matthews, 1991). The presence of significant findings often outweighs considerations about the quality of the study, so many published studies do not have strong study designs (Dickersin & Min, 1993; Elvik, 1998; Easterbrook et al., 1991). Thus the pool of published literature must be read with caution, and we present some of these cautions later in this chapter.

Systematic reviews and meta-analyses synthesize information from existing evaluation studies on a specific topic, and these are useful for determining effectiveness (Cooper & Hedges, 1994). Systematic reviews identify and summarize the best available research by considering all available literature and applying rigorous scientific criteria. Meta-analysis is a quantitative method for combining and summarizing the results of different studies into single estimates of effectiveness (Cook, Sackett, & Spitzer, 1994). The Guide to Community Preventive Services: Systematic Reviews and Evidence-Based Recommendations (the Guide), developed by the Task Force on Community Preventive Services (TFCPS), is one source for systematic reviews (Pappaioanou & Evans, 1998; The Guide to Community Preventive Services). The Guide's recommendations are primarily based on evidence of effectiveness, including the suitability of the study design, but they also assess the applicability of the intervention to other populations or settings, the economic effect, barriers observed in implementing the interventions, and whether the intervention had other beneficial or harmful effects (Briss et al., 2000). The Guide then provides a recommendation as to whether the approach is “strongly recommended,” “recommended,” has “insufficient evidence,” or is “discouraged” (Table 21.1). The Guide has evaluated the use of child safety seats and safety belts, reducing alcohol-impaired driving, therapeutic foster care for the prevention of violence, early childhood home visitation programs, and firearm laws; these recommendations can be found at www.thecommunityguide.org (The Guide to Community Preventive Services).
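To make the meta-analytic step concrete, here is a minimal sketch of fixed-effect, inverse-variance pooling, one standard way to combine study results into a single estimate. The three (log odds ratio, standard error) pairs are invented for illustration; published reviews such as the Guide's and Cochrane's use more elaborate methods.

```python
# Fixed-effect meta-analysis: weight each study by the inverse of its
# variance and pool the effects on the log scale, as is usual for ORs.
import math

def pooled_log_effect(effects):
    """effects: list of (log_odds_ratio, standard_error) per study."""
    weights = [1 / se ** 2 for _, se in effects]  # inverse-variance weights
    pooled = sum(w * lor for w, (lor, _) in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled_se

studies = [(-0.36, 0.15), (-0.22, 0.20), (-0.41, 0.25)]  # hypothetical
log_or, se = pooled_log_effect(studies)
lo, hi = log_or - 1.96 * se, log_or + 1.96 * se
print(f"pooled OR = {math.exp(log_or):.2f} "
      f"(95% CI {math.exp(lo):.2f}-{math.exp(hi):.2f})")
```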

The Cochrane Collaboration provides scientific evidence-based reviews of health care interventions through the Cochrane Library (Bero & Rennie, 1995). Although a subscription is necessary to access these reviews, many academic institutions are subscribers and can partner with you to find relevant information. The reviews follow internationally accepted guidelines and are peer reviewed by independent panels (Alderson, Green, & Higgins, 2004).


Table 21.1. Assessing the Strength of a Body of Evidence on Effectiveness of Population-Based Interventions in the Guide to Community Preventive Services (a)

| Evidence of effectiveness (b) | Execution (good or fair) (c) | Design suitability (greatest, moderate, or least) | Number of studies | Consistent (d) | Effect size (e) | Expert opinion (f) |
|---|---|---|---|---|---|---|
| Strong | Good | Greatest | At least 2 | Yes | Sufficient | Not used |
| | Good | Greatest or moderate | At least 5 | Yes | Sufficient | Not used |
| | Good or fair | Greatest | At least 5 | Yes | Sufficient | Not used |
| | Meets design, execution, number, and consistency criteria for sufficient but not strong evidence | | | | Large | Not used |
| Sufficient | Good | Greatest | 1 | Not applicable | Sufficient | Not used |
| | Good or fair | Greatest or moderate | At least 3 | Yes | Sufficient | Not used |
| | Good or fair | Greatest, moderate, or least | At least 5 | Yes | Sufficient | Not used |
| Expert opinion | Varies | Varies | Varies | Varies | Sufficient | Supports a recommendation |
| Insufficient (g) | | | | | | |

(a) Reprinted from Briss et al. (2000).
(b) The categories are not mutually exclusive; a body of evidence meeting criteria for more than one of these should be categorized in the highest possible category.
(c) Studies with limited execution are not used to assess effectiveness.
(d) Generally consistent in direction and size.
(e) Sufficient and large effect sizes are defined on a case-by-case basis and are based on task force opinion.
(f) Expert opinion will not be routinely used in the Guide but can affect the classification of a body of evidence as shown.
(g) Reasons for determining that evidence is insufficient are as follows: A, insufficient designs or executions; B, too few studies; C, inconsistent; D, effect size too small; E, expert opinion not used. These categories are not mutually exclusive, and one or more of these will occur when a body of evidence fails to meet the criteria for strong or sufficient evidence.


The reviews also attempt to overcome publication bias by searching for unpublished studies, and they include a summary of implications for both practice and research.

The Cochrane Injuries Group has published 57 reviews on the prevention, treatment, and rehabilitation of traumatic injuries. There are 13 reviews of general injury prevention interventions, including fall-related injuries to older persons, pool fencing to prevent drowning in children, and interventions for promoting smoke alarm ownership and function. There are 16 reviews of prevention strategies to reduce traffic injuries, including graduated driver's licensing, increasing pedestrian and cyclist visibility to prevent crashes, and safety education of pedestrians. A list of reviews conducted by the Cochrane Injuries Group can be found at www.cochrane-injuries.lshtm.ac.uk/Review%20links.htm (The Cochrane Collaboration, 2004).

21.3. EVALUATING THE SCIENTIFIC EVIDENCE

The most desirable prevention strategies have been studied in a number of different settings, have undergone rigorous evaluation, and have easily available information about the populations and conditions in which they are effective. Although some programs have such strong evidence, this is far from the usual case.

Research, especially evaluation research, is a slow process. The absence of a strong body of evidence for a specific approach is not reason enough to dismiss it as a promising program. However, if evidence suggests that the program has not been effective in previous settings or has led to unintended harmful consequences, the program is not likely to perform better in a new setting for which it was not originally designed. The next section discusses several criteria that should be considered when evaluating evidence of program effectiveness.

21.3.1. Criteria to Evaluate Evidence

The first step in evaluating evidence is to determine the strength of the evaluation studies that have been conducted. A growing focus on evidence-based public health has produced several frameworks for assessing the rigor of the available research. One of the most widely cited is the hierarchy of evidence, which places stronger weight on evidence that comes from more rigorous study designs (Popay & Williams, 1998).

However, there is growing recognition that this hierarchy is not a sufficient framework for weighing all of the information needed to decide whether an intervention approach is appropriate for a community (Petticrew & Roberts, 2003; Rychetnik, Frommer, Hawe, & Shiell, 2002; Tang, Ehsani, & McQueen, 2003). New models of evidence-based public health recognize that each study design can contribute important information (Petticrew & Roberts, 2003). For example, a randomized control trial might be the best method to determine whether an educational program led to increased knowledge and safety behaviors and decreased injury rates. It is not, however, the most efficient design for deciding how well participants liked the program or for identifying effective protocols to implement it. Additional concerns that should be considered in typologies of evidence include program repeatability, potential for compliance, timing, and predictability (Tang et al., 2003). Following are some examples of standard designs used in evaluation studies and how they can contribute to existing evidence.


21.3.1.1. Experimental Designs

21.3.1.1.1. Randomized Control Trials. The randomized control trial (RCT) is the most rigorous evaluation design because it allows direct control of who does and does not get the intervention. Such studies usually randomize individuals into treatment groups, which helps create comparable groups. Day et al. (2002) used this design to evaluate the effects of group-based exercise, home hazard management, and vision improvement on reducing falls in the elderly. They identified elderly individuals who lived in their own homes; after recruiting a sufficient sample size, the researchers randomized the subjects into one of the three treatment groups. They then followed the three groups to estimate fall incidence and found that exercise and balance programs had the strongest effect on reducing falls.

There are a number of variations to the RCT. One variation occurs when an agency or group is randomized rather than an individual. This type of design is especially effective when the intervention occurs at the group level, such as a classroom education program. For these programs, there are concerns about how individuals receiving the intervention might affect the outcome of those not receiving it, perhaps by sharing knowledge or influencing behaviors. To avoid randomizing students within one class, which could lead to contamination and might also be infeasible for the teachers, the randomization can occur by classroom or by school.
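A minimal sketch of both allocation schemes follows: individual-level assignment, as in the Day et al. (2002) trial, and cluster-level assignment by classroom. The subject and classroom identifiers, arm labels, and sample sizes are invented for illustration.

```python
import random

def randomize(units, arms, seed=42):
    """Shuffle the units, then deal them round-robin into the arms."""
    rng = random.Random(seed)  # fixed seed makes the allocation reproducible
    shuffled = rng.sample(units, len(units))
    return {arm: shuffled[i::len(arms)] for i, arm in enumerate(arms)}

# Individual randomization: each recruited subject is assigned directly
# to one of three treatment arms, as in the Day et al. (2002) trial.
subjects = [f"subject_{i:03d}" for i in range(1, 151)]
arms = ["exercise", "home hazard management", "vision improvement"]
groups = randomize(subjects, arms)
print({arm: len(members) for arm, members in groups.items()})  # 50 per arm

# Cluster randomization: whole classrooms are assigned instead, so
# classmates always share an arm and cannot contaminate the comparison.
classrooms = [f"room_{i:02d}" for i in range(1, 13)]
print(randomize(classrooms, ["curriculum", "control"]))
```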

It is important to note that randomization does not guarantee comparability among groups. Evaluations should provide additional evidence that groups were comparable, such as by comparing groups by age, gender, race/ethnicity, or socioeconomic status.

21.3.1.1.2. Quasi-Experimental Studies. There are many variations of experimental designs used for evaluations, and the usefulness of these varies widely. For example, some evaluations describe only outcomes after an intervention. Without baseline data or a comparison group, these studies cannot determine if any changes are actually due to the intervention. Many studies use baseline data to compare outcomes in the intervention group before and after the intervention to show changes.

Even when studies have some baseline or preintervention data, it can still be difficult to determine if the program was effective. For example, consider an evaluation of a peer-mediation program to reduce delinquency and physical fighting among high-risk youth. A before-and-after comparison of students in the program shows that delinquency and fights actually increased after the program. With just these data, it would appear that the program was not only ineffective but potentially harmful. But what if additional data showed that delinquency and fighting among students who were not in the program increased even more? Now the results are interpreted very differently. When evaluating quasi-experimental designs, the quality of the study and the comparisons made are important considerations.
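The reasoning in this example is a simple difference-in-differences comparison, sketched below. The fight counts are invented to match the scenario: both groups worsen, but the program group worsens less.

```python
# Difference-in-differences: the change in the program group minus
# the change in the comparison group isolates the program's effect,
# assuming both groups would otherwise have followed the same trend.
def difference_in_differences(treated_before, treated_after,
                              control_before, control_after):
    return (treated_after - treated_before) - (control_after - control_before)

# Fights per 100 students per semester (hypothetical counts).
effect = difference_in_differences(treated_before=12, treated_after=15,
                                   control_before=12, control_after=20)
print(effect)  # -5: relative to the comparison group, fighting fell
```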

21.3.1.2. Nonexperimental Studies

Nonexperimental studies are often called observational; in these studies, the allocation of the intervention is not controlled by the investigator. These study designs include cohort, case-control, case-crossover, cross-sectional, and ecological studies.


21.3.1.2.1. Cohort Studies. A cohort study follows a defined population over time to determine the distribution of exposure and outcomes. Cohort studies can be of varying quality, depending on how they are implemented. Well-designed cohort studies can be valuable for identifying how an intervention works in an observed population. For example, investigators could identify a cohort of homes in a small city, some of which had functioning smoke alarms and some of which did not. They could then compare reported fires and fire-related injuries to see if the smoke alarms led to reductions in these outcomes. The only difference between this cohort study and an RCT is that the investigators did not decide which homes had smoke alarms. In this cohort study, there could be systematic differences between the homes that did and did not have smoke alarms that are unrelated to the effectiveness of the smoke alarm. For example, if all the homes with smoke alarms are of high socioeconomic status, they might not be comparable to homes without smoke alarms.

21.3.1.2.2. Case-Control Studies. In case-control studies, a series of individuals who have experienced a health outcome are identified as cases, and then a group of people who have not experienced the outcome are identified as controls. Cases and controls are then compared based on the exposure of interest, such as the use of an injury prevention approach. Case-control studies are useful for identifying the protective effect of safety strategies. For example, Grossman et al. (2005) identified a series of children and adolescents in several states who had committed suicide, made a suicide attempt, or had unintentionally injured someone with a firearm (cases). The researchers next identified homes in which children lived or visited and that had firearms but that had not experienced a shooting (controls). They compared reported firearm storage practices between homes that did and did not have a shooting and found that homes without a shooting were much more likely to have used safe storage practices.

21.3.1.2.3. Ecological Studies. Ecological studies involve measurements on groups rather than individuals. Ecological studies can be useful for evaluating population-level outcomes, such as legislation and policy changes, and can be helpful for measuring changes due to several simultaneous interventions. For example, Ozanne-Smith, Ashby, Newstead, Stathakis, and Clapperton (2004) measured changes in state firearm regulations and their relation to firearm death rates. Although the authors did not know whether each specific firearm death was related to specific changes in the regulations, they did note population-level decreases.

21.3.1.2.4. Qualitative Studies. Qualitative data have an important role in evaluation evidence. Qualitative data are particularly useful in the formative, process, and effect phases of program evaluation and can be an efficient approach for identifying issues such as community readiness, acceptance, compliance, appropriateness, and satisfaction. Although this information is rarely included in published manuscripts, it is useful for designing and implementing interventions.

Azeredo and Stephens-Stidham (2003) evaluated an elementary-school safety curriculum and described the collection of qualitative data, which included measures of receptiveness, satisfaction, and methods for negotiating school protocols. As systematic reviews place a greater emphasis on information pertaining to program implementation, published manuscripts will, we hope, begin to include more of this information.


21.3.1.3. Example of Weighing the Evidence

A number of different evaluations of childhood safety education programs have been conducted. Think First, a multi-topic elementary school education program, is an example. The majority of published evaluations show changes in knowledge and attitudes among participants, but the quality of this evidence varies. Hall-Long, Schell, and Corrigan (2001) evaluated knowledge gain among 140 students who underwent the curriculum. They found a knowledge gain but had no control group to show that this was due to the curriculum. Greene et al. (2002) and Wesner (2003) also found knowledge increases, but they used control groups to show that knowledge gain was differentially higher among children who received the curriculum than among controls. However, these studies did not randomize classrooms into the intervention. Wesner (2003) matched intervention to control classrooms on age, grade, and socioeconomic background, and this step increases the weight of evidence. Finally, Gresham et al. (2001) conducted an RCT in which participating schools were randomly assigned to the intervention group. This study, which found increased knowledge and decreased reported risk behaviors, provides the strongest evidence because of the well-controlled study design.

21.3.1.4. Frequently Used Measures of Association

Statistical tests are commonly used to determine whether or not study results are meaningful, and results are often reported through a p value. Statistical p values less than .05 are typically considered significant, meaning that if the intervention truly had no effect, there would be less than a 5% probability of observing a change this large by chance alone. A p value of .003, for example, indicates a 3 in 1000 probability of observing such results by chance alone. However, many factors affect the p value, and results should not be disregarded simply because the p value was greater than .05. Similarly, a significant p value is not all that meaningful if the study design was weak. The p value measures only the likelihood of random error and does not account for other weaknesses, such as the presence of selection bias, poor measurement of the exposures or outcomes, or comparisons of groups that are not similar.

Many studies report findings through risk ratios (RRs) or odds ratios (ORs). While the theory and correct estimation of these measures are complex, a simple understanding of the terms can be helpful for interpreting the literature. Basically, RRs and ORs both measure how an outcome, such as an injury rate, is affected by an exposure, such as an intervention program. If the exposure had no effect on the outcome (e.g., the intervention program had no effect on injury rates), then the OR (or RR) will be equal to 1. If the OR is greater than 1, then the exposure led to an increase in the outcome. Similarly, if the OR is less than 1, then the exposure led to a decrease in the outcome.
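The sketch below shows how both measures are computed from a two-by-two table of exposure (here, program participation) against outcome (injury). The counts are invented for illustration.

```python
# RR compares the probability of the outcome between exposed and
# unexposed; OR compares the odds. For rare outcomes the two are close.
def risk_ratio(a, b, c, d):
    """a,b = injured/uninjured among exposed; c,d = among unexposed."""
    return (a / (a + b)) / (c / (c + d))

def odds_ratio(a, b, c, d):
    return (a * d) / (b * c)

# 40 of 1000 program homes injured vs. 60 of 1000 comparison homes.
a, b, c, d = 40, 960, 60, 940
print(f"RR = {risk_ratio(a, b, c, d):.2f}")  # 0.67: a third lower risk
print(f"OR = {odds_ratio(a, b, c, d):.2f}")  # 0.65: close to the RR here
```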

The OR alone is not sufficient to determine the effectiveness of the intervention; the OR should be accompanied by a confidence interval (CI). Most studies use a 95% CI, which means that we are 95% certain that the true estimate lies within the limits of the confidence interval. However, other confidence intervals (90% or even 80%) can be used.

A confidence interval that spans 1, the null value, is often interpreted as being nonsignificant from a statistical perspective, somewhat like the p value. However, a confidence interval is much more than a test of significance. The confidence interval also shows precision, so a narrow confidence interval (e.g., 0.5–0.8) is much more helpful than a wide one (e.g., 0.1–5.1). For example, compare two programs that both had an OR of 0.92 when evaluating the effects of an intervention program on a community's injury rate. This OR would be interpreted as “the odds of an injury were 8% lower in the intervention than in the nonintervention group.” However, one confidence interval was 0.1–0.99, and the other was 0.8–1.01. Even though the second confidence interval spans 1, it provides stronger support for the program than the first interval because it expresses a more precise estimate.
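A minimal sketch of where such intervals come from, using the common large-sample (Woolf) method: log(OR) plus or minus 1.96 standard errors, where SE = sqrt(1/a + 1/b + 1/c + 1/d). The two tables are invented so that both yield an OR near 0.92 but at very different sample sizes, mirroring the precision comparison above.

```python
import math

def or_with_ci(a, b, c, d, z=1.96):
    """Odds ratio with a 95% CI on the log scale (Woolf's method)."""
    log_or = math.log((a * d) / (b * c))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return (math.exp(log_or),
            math.exp(log_or - z * se),
            math.exp(log_or + z * se))

for label, table in [("small study", (11, 100, 12, 100)),
                     ("large study", (1100, 10000, 1200, 10000))]:
    or_, lo, hi = or_with_ci(*table)
    # Same OR (about 0.92), but the large study's interval is far narrower.
    print(f"{label}: OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```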

21.3.2. Other Criteria to Consider When Evaluating Evidence

Other criteria that are important when reviewing existing evidence include the strength of the intervention effects, knowledge of how the program works, integration with other injury prevention activities, generalizability, feasibility, acceptability, and equity (Runyan, 1998).

21.3.2.1. Strength of the Evidence

Many injury prevention approaches have strong supportive research. However, many evaluations do not show strong support, and these findings must be considered carefully.

If published evaluations indicate weak performance, it does not necessarily mean that the approach is ineffective. Many studies fail to find a significant effect because they used small sample sizes and did not have sufficient power to detect differences between intervention groups. In other circumstances, the findings are weak because the program was not implemented properly. When deciding to implement a program with weak support, it is important to make sure the evidence suggesting some effect is of high quality, that there is information about how best to implement the program, and that the program is not likely to have adverse unintended consequences.
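To see how easily a study can be underpowered, the sketch below applies the standard normal-approximation formula for the sample size needed per group to detect a difference between two proportions. The injury rates are invented for illustration.

```python
# Sample size per group to detect p1 vs. p2 with 80% power at a
# two-sided alpha of .05 (z_alpha = 1.96, z_beta = 0.84).
import math

def n_per_group(p1, p2, z_alpha=1.96, z_beta=0.84):
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Detecting a drop in annual injury rates from 6% to 4% takes roughly
# 1,900 subjects per group; a study of 200 is badly underpowered.
print(n_per_group(0.06, 0.04))
```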

Prevention programs are often continued even when there is sufficient evidence that they are ineffective and potentially even harmful. For example, Neighborhood Watch programs were developed from community policing theories, and they involve organizing community members to conduct surveillance (watch the neighborhood) and share information about crime and crime prevention (Rosenbaum, 1987). The goals were to reduce crime and fear of crime. Repeated evaluations, however, have found that these programs have negligible or no effect on crime and actually increase fear of crime (Darian-Smith, 1993; Rosenbaum, 1987; Sherman et al., 1997). Why? The high-crime neighborhoods most in need of crime prevention were unable to mobilize and sustain a community policing effort (Hope, 1995; Sherman et al., 1997). Law-enforcement involvement, which is a tenet of the program, was difficult because residents were distrustful of police and police did not actively engage community members (Rosenbaum, 1987). In contrast, neighborhoods with low crime rates were able to implement and sustain programs, but they had very little to gain from crime prevention efforts. Over time, programs led to increased fear of crime and distrust of visitors or strangers in the community (Sherman et al., 1997).

Why would a community continue to use an ineffective program? One reason is failure to examine existing evaluation evidence. Another might be strong community support for a program, which may stem from a particular event or an individual's experience. Reviewing existing evidence can reduce the risk of implementing a program that has proven ineffective.

21.3.2.2. Knowledge of Why the Program Worked

The most helpful evaluations document not only that the program worked but also why it worked (Tang et al., 2003). This type of information can sometimes be found in reviews of evaluation studies that assess how the program performed in different settings.

In general, the question of why a program worked is best answered if the evaluation included process, effect, and outcome components. For example, reviews of programs to reduce robbery and associated injury in retail businesses have identified that strategies including cash control, good lighting and visibility, employee training, and safety signage are effective (Casteel & Peek-Asa, 2000). These evaluations compared robberies or assaults before and after program implementation but included no effect evaluation; that is, they did not actually measure the changes made by the individual businesses. Thus it was difficult to attribute crime reductions to the program elements. A more recent evaluation measured differential changes in violent crimes based on the number and types of strategies implemented by each business (Peek-Asa, Casteel, Mineschian, Erickson, & Kraus, 2004). This evaluation found crime reductions among businesses that implemented all or most of the program elements but no reductions among businesses with low implementation. Because this evaluation included an effect component, it is much stronger in supporting the effectiveness of the program and in explaining why the program worked.

A recent review of mass-media campaigns to reduce alcohol-impaired driving identified a median crash decrease of 13% (Elder et al., 2004). However, success depended on the program being carefully planned and executed, with pretested messages, adequate audience exposure, and coordination with other ongoing prevention activities. In contrast, programs that failed to use the mainstream media; devoted minimal resources to craft, test, and deliver messages; and did not identify a specific audience were likely to fail. From this review, we know that similar programs will work only when implemented in a certain way.

21.3.2.3. Integration with Other Injury Prevention Activities

The review of mass-media campaigns discussed above also identified that mass-media campaigns worked best in conjunction with increased enforcement efforts, grassroots activities, and other messages related to drinking and driving (Elder et al., 2004). In fact, the mass-media campaigns may have been unsuccessful without them. This is an example of programs that are effective only when integrated with broader efforts.

Coordination of activities is also critical for sobriety checkpoints to reduce impaired driving. Sobriety checkpoints, which can occur through either randomized or selective breath testing, result in a median fatal crash decrease of 22% (Elder et al., 2002). However, success rates vary widely, with some evaluations finding no real effects. Systematic reviews of these evaluations show that program success depends on several factors, including the coexistence of programs that publicize the checkpoint activities (Elder et al., 2002; Fell, Ferguson, Williams, & Fields, 2003; Peek-Asa, 1999). Public campaigns have also been important in enforcement efforts to increase seat belt use, in some states called Click-it or Ticket programs (Shults, Nichols, Dinh-Zarr, Sleet, & Elder, 2004; Solomon, Ulmer, & Preusser, 2002; Williams, Lund, Preusser, & Blomberg, 1987). The coordination of public-awareness campaigns with enforcement activities is effective because the campaigns reach a broader audience, support the public message, and increase the perception that violators will be caught.

21.3.2.4. Generalizability to Your Community

Effective prevention programs are designed for specific populations, and evidence that they work in one community does not necessarily mean they will work for yours.

One of the most important issues to consider when reading the literature is whether or not the demographic characteristics of your community are similar to those of the community in which the program was evaluated. These characteristics include age, gender, race/ethnicity/culture, socioeconomic status (particularly educational status), and population density. If these characteristics differ, then you must determine whether there are fundamental differences between your community and the one that has been evaluated that could make the program ineffective. For example, nationwide Safe Routes to School programs involve community-wide multicomponent efforts to encourage children to walk or ride their bicycles to school and to ensure that they are safe when doing so. Evaluations have shown program success (Appleyard, 2003; National Highway Traffic Safety Administration [NHTSA], 2002). However, these programs are not likely to work in communities in which children live too far from their schools to bicycle or walk, such as in rural areas and cities that use long-distance busing.

21.3.2.5. Equity

Successful prevention programs should not create inequities among participants or their communities. In the Safe Routes to School programs, there could be deleterious consequences if some children cannot participate because they are among a minority who live too far from school. If a segment of the children is selectively excluded from participating, they will also be excluded from the rewards or recognition that go to program participants. This type of inequity is important to consider for all programs but is rarely measured in evaluations. Although it is difficult to create programs that are entirely equitable, the potential for inequity and methods to cope with necessary inequity should be considered in program development.

21.3.2.6. Feasibility

Just because a program is accepted and integrated into one community does not mean it will be feasible in another. For example, universal motorcycle helmet use laws have strong evidence of effectiveness, yet they have been a topic of great legislative controversy. Most states that have enacted a universal law have faced repeated repeal attempts, often successful. Efforts to enact a motorcycle helmet use law in a state without a favorable political climate will likely fail. For policy efforts, it is important to find key supporters who can help carry the policy through; often, even with support, such efforts take many years.


21.3.2.7. Acceptability

Prevention programs should be acceptable to at least some portion of the community, especially if the program is community based. Some programs are not acceptable due to political or legal issues. For example, random breath testing has been shown to be more effective than selective breath testing, the latter of which is conducted in the United States (Elder et al., 2002; Peek-Asa, 1999). However, random testing is not acceptable under the U.S. legal system, which has constitutional protections against unwarranted search and seizure (NHTSA, 1990).

Although many of these criteria seem obvious, failure to consider them can derail an otherwise carefully planned program. Some of this information will be available in existing evaluations or can be gleaned from evaluations of similar programs.

21.4. CONCLUSION

Beginning a literature search to evaluate existing evidence can be a daunting task. A systematic review is the best place to start, and the best places to find reviews are described in this chapter. If you find a review or reviews that support your proposed program, then it is important to consider how the program will translate to your community. If a review is not available, look next for individual studies that evaluate your proposed approach. Studies that evaluate programs or approaches similar to yours can provide relevant information. Studies that explain why a program was successful or that describe how programs are best implemented are especially helpful. The best way to start is simply to dive in (do not be intimidated) and to encourage lively discussion about what you are learning as you read the literature.

When reviewing the scientific literature and making decisions about a prevention approach, assembling a collaborative team can be beneficial. The team can include practitioners who have experience with a specific prevention approach, community members who know the intended audience well, and academicians who have experience with research and/or program evaluation methodologies. Experienced practitioners can be identified in state or local public health injury programs by contacting the State and Territorial Injury Prevention Directors Association (www.stipda.org). Academicians may be found in local colleges, universities, or teaching hospitals, especially those that have injury or violence research programs. One resource for identifying injury researchers is the Society for the Advancement of Violence and Injury Research (formerly the National Association of Injury Control Research Centers) (www.savirweb.org). Although the individuals you contact will have time limitations, they may be able to answer focused questions about the scientific literature or the prevention approach you are considering and may be interested in collaborating.

Existing evidence is important to inform decisions about prevention activities, but it can never replace the expertise of prevention practitioners. To help the injury prevention community continue to grow and improve, it is important to communicate the lessons you have learned when implementing your programs.

Acknowledgments. We would like to thank Christy Cechman, contract reference librarian with the National Center for Injury Prevention and Control, for her invaluable help in identifying and retrieving citations.


REFERENCES

Alderson, P., Green, S., & Higgins, J. P. T. (Eds.). (2004). Cochrane Reviewers' Handbook 4.2.2. Retrieved February 22, 2005, from www.cochrane.org/resources/handbook/hbook.htm.

Appleyard, B. S. (2003). Planning safe routes to school. Planning, 69 (5), 34–37.

Azeredo, R., & Stephens-Stidham, S. (2003). Design and implementation of injury prevention curricula for elementary schools: Lessons learned. Injury Prevention, 9, 274–278.

Bero, L., & Rennie, D. (1995). The Cochrane Collaboration. Preparing, maintaining, and disseminating systematic reviews of the effects of health care. Journal of the American Medical Association, 274, 1935–1938.

Briss, P. A., Zaza, S., Pappaioanou, M., Fielding, J., Wright-De Aguero, L., Truman, B. I., Hopkins, D. P., Hennessy, M. H., Sosin, D. M., Anderson, L., Carande-Kulis, V. G., & Teutsch, S. M. (2000). Developing an evidence-based Guide to Community Preventive Services—Methods. American Journal of Preventive Medicine, 19 (1S), 35–43.

Casteel, C., & Peek-Asa, C. (2000). The effectiveness of Crime Prevention through Environmental Design (CPTED) in reducing robberies. American Journal of Preventive Medicine, 18 (4S), 99–115.

The Cochrane Collaboration. (2004). Retrieved December 9, 2005, from www.cochrane.org/index0.htm.

Cook, D. J., Sackett, D. L., & Spitzer, W. O. (1994). Methodological guidelines for systematic reviews of randomized control trials in health care for the Potsdam Consultation on Meta-Analysis. Journal of Clinical Epidemiology, 48, 167–171.

Cooper, H., & Hedges, L. V. (Eds.). (1994). The handbook of research synthesis. New York: Russell Sage Foundation.

Darian-Smith, E. (1993). Neighborhood Watch—Who watches whom? Reinterpreting the concept of neighborhood. Human Organization, 52 (1), 83–88.

Day, L., Fildes, B., Gordon, I., Fitzharris, M., Flamer, H., & Lord, S. (2002). Randomized factorial trial of falls prevention among older people living in their own homes. British Medical Journal, 325, 128–133.

Dickersin, K., & Min, Y. I. (1993). Publication bias: The problem that won't go away. Annals of the New York Academy of Sciences, 703 (1), 135–146.

Easterbrook, P. J., Berlin, J. A., Gopalan, R., & Matthews, D. R. (1991). Publication bias in clinical research. Lancet, 337, 868–872.

Elder, R. W., Shults, R. A., Sleet, D. A., Nichols, J. L., Thompson, R. S., & Rajab, W. (2004). Effectiveness of mass media campaigns for reducing drinking and driving and alcohol-involved crashes: A systematic review. American Journal of Preventive Medicine, 27 (1), 57–65.

Elder, R. W., Shults, R. A., Sleet, D. A., Nichols, J. L., Zaza, S., & Thompson, R. S. (2002). Effectiveness of sobriety checkpoints for reducing alcohol-involved crashes. Traffic Injury Prevention, 3, 266–274.

Elvik, R. (1998). Are road safety evaluation studies published in peer reviewed journals more valid than similar studies not published in peer reviewed journals? Accident Analysis & Prevention, 30, 101–118.

Fell, J. C., Ferguson, S. A., Williams, A. F., & Fields, M. (2003). Why are sobriety checkpoints not widely adopted as an enforcement strategy in the United States? Accident Analysis & Prevention, 35, 897–902.

Greene, A., Barnett, P., Crossen, J., Sexton, G., Ruzicka, P., & Neuwelt, E. (2002). Evaluation of the Think First for Kids injury prevention curriculum for primary students. Injury Prevention, 8, 257–258.

Gresham, L. S., Zirkle, D. L., Tolchin, S., Jones, C., Maroufi, A., & Miranda, J. (2001). Partnering for injury prevention: Evaluation of a curriculum-based intervention program among elementary school children. Journal of Pediatric Nursing, 16 (2), 79–87.

Grossman, D. C., Mueller, B. A., Riedy, C., Dowd, M. D., Villaveces, A., Prodzinski, J., Nakagawara, J., Howard, J., Thiersch, N., & Harruff, R. (2005). Gun storage practices and risk of youth suicide and unintentional firearm injuries. Journal of the American Medical Association, 293, 707–714.

Guide to Community Preventive Services. Systematic reviews and evidence-based recommendations. Retrieved December 9, 2005, from www.thecommunityguide.org.

Hall-Long, B. A., Schell, K., & Corrigan, V. (2001). Youth safety education and injury prevention program. Pediatric Nursing, 27 (2), 141–146.

Hope, T. (1995). Community crime prevention. In M. Tonry & D. P. Farrington (Eds.), Building a safer society: Crime and justice (Vol. 19, pp. 216–228). Chicago: University of Chicago Press.

National Highway Traffic Safety Administration. (1990). The use of sobriety checkpoints for impaired driving enforcement (Report DOT HS-807 656). Washington, DC: Author.

National Highway Traffic Safety Administration. (2002). Safe routes to school (Report HS-809 497). Washington, DC: Author.

Ozanne-Smith, J., Ashby, K., Newstead, S., Stathakis, V. Z., & Clapperton, A. (2004). Firearm related deaths: The impact of regulatory reform. Injury Prevention, 10 (5), 280–286.

Pappaioanou, M., & Evans, C., Jr. (1998). Development of the Guide to Community Preventive Services: A U.S. Public Health Service initiative. Journal of Public Health Management & Practice, 4 (S2), 48–54.

Peek-Asa, C. (1999). The effect of random alcohol screening in reducing motor vehicle crash injuries. American Journal of Preventive Medicine, 16 (1S), 57–67.

Peek-Asa, C., Casteel, C. H., Mineschian, L., Erickson, R., & Kraus, J. F. (2004). Implementation of a workplace robbery and violence prevention program in small retail businesses. American Journal of Preventive Medicine, 4, 276–283.

Petticrew, M., & Roberts, H. (2003). Evidence, hierarchies, and typologies: Horses for courses. Journal of Epidemiology & Community Health, 57, 527–529.

Popay, J., & Williams, G. (1998). Qualitative research and evidence-based healthcare. Journal of the Royal Society of Medicine, 91 (S35), 32–37.

Rosenbaum, D. (1987). The theory and research behind Neighborhood Watch: Is it a sound fear and crime reduction strategy? Crime & Delinquency, 33 (1), 103–134.

Runyan, C. W. (1998). Using the Haddon Matrix: Introducing the third dimension. Injury Prevention, 4, 302–307.

Rychetnik, L., Frommer, M., Hawe, P., & Shiell, A. (2002). Criteria for evaluating evidence on public health interventions. Journal of Epidemiology & Community Health, 56, 119–127.

Sherman, L. W., Gottfredson, D., MacKenzie, D., Eck, J., Reuter, P., & Bushway, S. (1997). Preventing crime: What works, what doesn't, and what's promising? A report to the United States Congress (pp. 8-1 to 8-58). Washington, DC: National Institute of Justice.

Shults, R. A., Nichols, J. L., Dinh-Zarr, T. B., Sleet, D. A., & Elder, R. W. (2004). Effectiveness of primary enforcement safety belt laws and enhanced enforcement of safety belt laws: A summary of the Guide to Community Preventive Services systematic review. Journal of Safety Research, 35, 189–196.

Solomon, M. G., Ulmer, R. G., & Preusser, D. F. (2002). Evaluation of Click-it or Ticket model programs (Report No. HS-809 498). Washington, DC: National Highway Traffic Safety Administration.

Tang, K. C., Ehsani, J. P., & McQueen, D. V. (2003). Evidence based health promotion: Recollections, reflections, and reconsiderations. Journal of Epidemiology & Community Health, 57, 841–843.

Wesner, M. L. (2003). An evaluation of Think First Saskatchewan: A head and spinal cord injury prevention program. Canadian Journal of Public Health, 94 (2), 115–120.

Williams, A. F., Lund, A. K., Preusser, D. F., & Blomberg, R. D. (1987). Results of a seat belt use law enforcement and publicity campaign in Elmira, New York. Accident Analysis & Prevention, 19, 243–249.
