Education Decisions: Looking for Strong Research and Better Implementations


[Editor's note: This article is the second installment in a two-part series on using research to make decisions on education technology purchases. Part 1 can be found here. --D.N.]

"Without access to information from research about education practices, policymakers are more likely to make decisions that are ineffective or even harmful" (Lauer, 2004, p. 3). You've most likely read about some of those decisions--increased spending on technology interventions, charter schools, voucher programs, single-sex schools and classes, smaller class sizes, supplemental education services, minimizing or eliminating bilingual education--many of which have led to questionable achievement gains. One might wonder about the evidence and research base for those decisions. Yes, policymakers and educators might be influenced by news reports or a company's reports on studies it conducted or its documentation with the research basis for products it sells. However, there might be some bias or inaccuracies in those and evidence not sufficient for sound decisions. Additional evidence might be gathered by reading actual research studies and meta-analyses reported in journals.

In this part 2 on education decisions, I provide guidance on what to look for in a research report, tips for how to read a study, and resources for research and interventions that work or are promising. What is strong research? How do you know if research warrants policy changes or adopting a technology intervention in your setting? Statistically significant outcomes are not necessarily of practical significance. Where do you turn if research is sparse or nonexistent? How should a technology solution be implemented? Readers might also note part 1 of this series, in which I posed some initial questions to consider for sorting out claims by companies that say their products have a strong research base.

Looking for Strong Research
There are several types of education research, including experimental, quasi-experimental, correlational, descriptive, and case studies. In recent years, schools have been encouraged to adopt programs, practices, and policies resulting from scientifically based research. "According to NCLB, scientifically-based research is rigorous, systematic, objective, empirical, peer reviewed and relies on multiple measurements and observations, preferably through experimental or quasi-experimental methods" (Lauer, 2004, p. 6).

Research reports generally begin with an abstract, followed by the introduction, methods, results, discussion, and references. It will take more than one reading to fully understand most studies, but a good way to start is to read the abstract, the introduction with its hypotheses or research questions, and then skip to the discussion at the end to see how the study turned out. Then go back to the middle: read the methods, focusing on how the hypotheses were tested or the questions answered, read the results, and re-read the discussion section.

Lauer's (2004) primer on education research will help readers understand what education research says, whether it's trustworthy, and what it means for policy. Readers will also learn some of the technical statistical and scientific concepts touched upon in research reports and gain a deeper understanding of education research methodology. Practical tools, such as a flowchart for analyzing research and a tutorial on understanding statistics, are included. The U.S. Department of Education (2003) has a user-friendly guide that will help educators determine whether an educational intervention is supported by rigorous evidence. It contains a three-step evaluation process, a checklist to use in the process, definitions of research terms, and what to look for in research studies.

An intervention backed by "strong" evidence of effectiveness requires "that the intervention be demonstrated effective, through well-designed [and implemented] randomized controlled trials, in more than one site of implementation [to reduce the likelihood of effectiveness by chance alone], and that these sites be typical school or community settings, such as public school classrooms taught by regular teachers" (U.S. Dept. of Ed., 2003, p. 10). The following guidelines, while not exhaustive, stand out among the features to look for when reviewing a study.

"The study should clearly describe (i) the intervention, including who administered it, who received it, and what it cost; (ii) how the intervention differed from what the control group received; and (iii) the logic of how the intervention is supposed to affect outcomes" (p. 5).

"If the study claims that the intervention improves one or more outcomes, it should report (i) the size of the effect, and (ii) statistical tests showing the effect is unlikely to be due to chance" (p. 8). Statistical significance is conventionally reported at the .05 level, which means that the probability is only 1 in 20 that any difference in outcomes between the control and intervention groups could have occurred by chance. Groups should be about the same size. Larger sample sizes are better than smaller sample sizes. For example, for an intervention that is modestly effective, look for about 150 individuals randomly assigned in each, or 25-30 schools or classrooms in each group, depending on study design.

Look for discussion of practical significance (effect sizes), with results reported in real-world terms (e.g., an increase in skills by "X" grade levels). Results might be statistically significant, yet so small as to be of little value for policy changes.

"The study should report the intervention's effects on all the outcomes that the study measured, not just those for which there is a positive effect" (p. 9).

Merit Software, noted in part 1 of this series, conducted a quasi-experimental study during 2006-2007 in a West Virginia middle school using its reading software. Results indicated effect sizes (Cohen's d) of .94 for grade 6 and .70 for grade 7. Some readers of the study might not understand the meaning of effect sizes, but the results become more meaningful to all because Merit also reported them in real-world terms: year-end scores on the state's standardized reading/language arts test averaged 30 points higher for the intervention group than for the control group.

According to Lauer (2004), adopting an intervention also involves analyzing the cost and potential educational benefits of doing so. That's where practical significance, measured by effect size, plays a role. Practical significance helps policymakers decide whether a statistically significant difference between programs is enough of a difference to merit adoption of a program. Cohen's d measures effect size in standard deviation units. Thus, on a normal curve, an effect size of d = 1.0 would translate to one standard deviation above the mean. Although the interpretation of an effect size also depends on the intervention itself and the dependent variable, social scientists generally categorize effects as small (d = .2 to .5), medium (d = .5 to .8), and large (d = .8 and above). Effect sizes might then be translated into percentile gains for better understanding (pp. 42-43). Policymakers and educators might also look for meta-analyses, which report an average effect size from several studies on an educational program or practice. Basing a decision on a single study might pose a problem in the long run.
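As a rough illustration of how an effect size is computed and translated into a percentile gain, here is a minimal Python sketch. The scores are hypothetical, and the pooled-standard-deviation formula shown is one common way of computing Cohen's d, not necessarily the procedure used in any study cited here.

```python
from statistics import mean, stdev
from math import sqrt
from scipy.stats import norm

# Hypothetical group scores (illustration only)
intervention = [672, 655, 690, 701, 664, 683, 659, 677, 695, 668]
control = [641, 660, 635, 652, 648, 630, 657, 644, 639, 651]

# Pooled standard deviation of the two groups
n1, n2 = len(intervention), len(control)
s1, s2 = stdev(intervention), stdev(control)
pooled_sd = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))

# Cohen's d: the difference between group means in standard-deviation units
d = (mean(intervention) - mean(control)) / pooled_sd

# Percentile gain: where the average intervention student would fall in the
# control group's distribution, relative to the 50th-percentile baseline
percentile_gain = norm.cdf(d) * 100 - 50

print(f"Cohen's d = {d:.2f}")  # roughly: .2-.5 small, .5-.8 medium, .8+ large
print(f"Percentile gain = about {percentile_gain:.0f} points")
```

For example, an effect size of d = 1.0 corresponds to the average intervention student scoring at about the 84th percentile of the control group, a gain of roughly 34 percentile points over the 50th-percentile baseline.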

Practice Guides
In the absence of studies with high-quality experimental or quasi-experimental designs, educators sometimes turn to practice guides, such as those published by the Institute of Education Sciences. IES guides depend on the expertise of a panel of nationally recognized experts to "bring the best available evidence on the types of systemic challenges that cannot currently be addressed by single interventions or programs" (Herman, Dawson, Dee, et al., 2008, p. 31). They are characterized by a coherent set of recommendations upon which educators can take action. Most importantly, each recommendation is connected to the level of evidence (low, moderate, high) supporting it. These guides are more like consensus reports than meta-analyses in terms of the breadth and complexity of the topics addressed (Herman, Dawson, Dee, et al., 2008).

Bottom Line: Implementation
If you do select an evidence-based intervention, be aware that any changes or differences in implementation from those reported in the research might considerably affect expected outcomes in your own setting. An intervention reported as effective in a rural setting might not be so in an urban setting; one reported effective for an elementary school might not work the same way in a middle school. Look for research conducted in settings similar to your own. Schools or classrooms that implement an evidence-based program might collect outcome data from groups selected to use the program and compare it to outcomes from a comparison group, matched in skills and demographic characteristics, that does not use the program. Tracking test outcomes from users and non-users over time can indicate whether the program is having the desired effect in your setting, as the evidence-based research had indicated (U.S. Dept. of Ed., 2003).
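As a sketch of what such local tracking might look like, the following Python snippet compares average benchmark scores for program users and a matched comparison group across testing periods. The periods, group names, and scores are invented for illustration; a real evaluation would use your own assessment data.

```python
# Hypothetical benchmark scores collected at three points in the year
test_periods = ["Fall", "Winter", "Spring"]
program_group = {
    "Fall": [62, 58, 65, 60],
    "Winter": [68, 66, 71, 67],
    "Spring": [75, 72, 78, 74],
}
comparison_group = {
    "Fall": [61, 59, 63, 60],
    "Winter": [63, 62, 66, 61],
    "Spring": [66, 64, 68, 65],
}

for period in test_periods:
    users = sum(program_group[period]) / len(program_group[period])
    non_users = sum(comparison_group[period]) / len(comparison_group[period])
    # A gap that widens over time suggests the program may be having the
    # desired effect in this setting; a flat or shrinking gap suggests otherwise.
    print(f"{period}: users {users:.1f}, non-users {non_users:.1f}, "
          f"gap {users - non_users:+.1f}")
```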

Are you able to provide solid evidence and a research basis for your education decisions? Reviewing research and best practice is only one part of educational decision making for adopting technology or any other intervention. Hopefully, your district has established its educational vision, including for technology-supported learning, analyzed the needs of learners, ensured that your facilities and support systems are technically able to handle whatever intervention you decide to adopt, and provided the necessary professional development for staff. Ensure that the intervention aligns with existing curricula and instructional materials. Before committing huge sums of money to an all-out, district-wide implementation, consider conducting a controlled pilot study and then scaling the implementation. Above all, carefully monitor everything and ultimately conduct a summative evaluation (Metiri Group, n.d.-a).

Resources

Education Policy Analysis Archives, a peer-reviewed online journal of education research

Education Resources Information Center (ERIC), sponsored by the U.S. Department of Education, Institute of Education Sciences

Institute of Education Sciences Practice Guides

Metiri Group: Technology Solutions That Work, a fee-based service

Promising Practices Network

United States Department of Education (Use the search phrase "education research.")

What Works Clearinghouse

 

References

Herman, R., Dawson, P., Dee, T., Greene, J., Maynard, R., Redding, S., & Darwin, M. (2008). Turning around chronically low-performing schools: A practice guide (NCEE #2008-4020). Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education. Available: http://ies.ed.gov/ncee/wwc/practiceguides/

Lauer, P. (2004). A policymaker's primer on education research: How to understand it, evaluate it, and use it. Mid-continent Research for Education and Learning and the Education Commission of the States. Available: http://www.ecs.org/html/educationIssues/Research/primer/index.asp

Metiri Group (n.d.-a). Why "What Works" Didn't in L.A. Unified School District. Available: http://www.metiri.com/techsolutions/DefaultTest.asp?StoryID=2

U.S. Department of Education (2003).  Identifying and implementing educational practices supported by rigorous evidence: A user friendly guide. Available: http://www.ed.gov/rschstat/research/pubs/rigorousevid/index.html





About the Author

Patricia Deubel has a Ph.D. in computing technology in education from Nova Southeastern University and is currently an education consultant and the developer of Computing Technology for Math Excellence at http://www.ct4me.net. She has been involved with online learning and teaching since 1997.
