“Evidence-based practices” is the current buzzword in the behavioral health field. Without question, the development and promotion of effective prevention and clinical treatment services are worthy goals. However, we also must be vigilant that those programs and services are at least as effective as the programs they might replace, and that the evidence in “evidence-based” supports real-world outcomes.
In the addiction field, both treatment and prevention efforts have seen examples of promotion triumphing over performance. Notable examples of this in the area of prevention are the Drug Abuse Resistance Education (D.A.R.E.) program and the drug testing of high school students. The independent evaluations of D.A.R.E. published to date have indicated that the program has no demonstrable effects on youth substance use. D.A.R.E., as developed by the Los Angeles Police Department, may provide excellent public relations opportunities for law enforcement agencies, but should not be promoted as a prevention program to reduce substance use. The fact that these programs have proliferated in the absence of documented performance, and continue to be actively promoted and defended despite the lack of meaningful evidence, defies logic and sound public policy.
Numerous prevention efforts have been developed with sound empirical support for their effectiveness in reducing substance use among youth. A guide listing approximately 20 evidence-based programs covering grades 1 through 12 can be found at http://www.drugabuse.gov/pdf/prevention/RedBook.pdf. Federal agencies with information on prevention programs include the National Institute on Alcohol Abuse and Alcoholism (http://www.niaaa.nih.gov) and the National Institute on Drug Abuse (http://www.nida.nih.gov).
Drug testing students is another potential boondoggle. Many schools limit drug testing to athletes and other students who participate in extracurricular activities. How is that for looking for troubled youths and high-risk kids? One school district spent $5,000 on drug testing to find four youths who were positive for marijuana. This school year it budgeted $16,000 more for drug testing the lowest-risk groups in the school system: athletes and students active in other extracurricular activities.
The fallacious assumptions concerning drug testing programs are that testing the relatively low-risk students will serve as a deterrent to use and will get help for those needing it. Although 18% of high schools drug test students, one independent study found the prevalence of use in schools that drug test to be almost identical to that of schools that don't. It would seem more logical to test athletes for use of performance-enhancing drugs, and to test students with unexcused absences for use of marijuana. Drug testing also does not address alcohol, the most prevalent substance of misuse. According to the annual Monitoring the Future survey of students, underage drinkers drink more heavily when they drink than adults do.
While school officials belabor the point that drug testing is not meant to be punitive, schools appear to do a poor job of facilitating referrals to treatment. Recent data from the Treatment Episode Data Set (TEDS) reveal that schools account for only 11% of treatment referrals for youth under the age of 18, as compared with the 52% that come from the criminal justice system.
The great harms of prevention programs that do not work are that they waste resources that could be used for more effective programs, and that they give the impression of achieving something significant when they have not. Turning back to drug testing, I would suggest that hiring a part-time counselor, or contracting with a community agency or addiction professional, to interview students with multiple unexcused absences, certain behavior problems, or dropping grades would be a more cost-effective means of identifying and helping students with substance-related problems than drug testing students in extracurricular activities.
The treatment field also has seen its share of promotion versus performance cases. Having founded and directed an independent evaluation system for 15 years, I have seen any number of programs that purported to have remarkable outcomes. In some cases the numbers appeared to be made up or someone's best guess. In other cases, a closer look at the “evidence” revealed serious problems in the methodology. In one memorable case, the “success rate” went from 80% to 4% when one went from outcomes calculated only on those completing 18 months of aftercare to outcomes for all individuals who had entered the program.
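The completer-only versus all-entrants gap described above is purely a matter of which denominator one chooses. A minimal sketch makes the arithmetic concrete; the counts below are hypothetical illustrations chosen only to reproduce the 80% and 4% figures, not data from the program described:

```python
# Illustrative sketch: how the choice of denominator changes a "success rate".
# All counts are hypothetical, picked to reproduce the 80% vs. 4% gap.

entered = 1000            # everyone who entered the program
completed_aftercare = 50  # the subset who finished 18 months of aftercare
successes = 40            # completers counted as successful outcomes

completer_rate = successes / completed_aftercare  # denominator: completers only
all_entrants_rate = successes / entered           # denominator: all who entered

print(f"Completer-only success rate: {completer_rate:.0%}")
print(f"All-entrants success rate: {all_entrants_rate:.0%}")
```

The same 40 successful outcomes yield an 80% rate when divided by the 50 completers but only 4% when divided by the 1,000 people who entered treatment, which is why the denominator must always be scrutinized when a program reports its "evidence."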
Obviously, we do need evidence-based practices to counter the promotion-performance issue, but not all evidence is equal, nor is all evidence relevant. Several recent articles on arbitrary metrics make this point.1,2 An arbitrary metric is a measure that is reliable and scientifically valid but whose relevance to real-world outcomes is unknown. In the context of this discussion, an arbitrary metric would be an outcome measure that may or may not be indicative of recovery or wellness. For example, if an individual's Beck Depression Inventory or Hamilton Depression Rating Scale score drops five points, is the individual no longer clinically depressed? Maybe, and maybe not.
One of the measures most frequently used to demonstrate that an addiction treatment program or model is evidence-based is whether the average number of days of use in the past 30 days has decreased. The 30-day interval may be measured as soon as three months after treatment or a year or more afterward. But is days of use an arbitrary metric?