How to Evaluate Educational Research

The No Child Left Behind Act has brought research, particularly scientifically based research, beyond graduate school discussions and back into the consciousness of educators in the field. For many educators, it has been a long time since those discussions, and key concepts about educational research may have become hazy. So in this fourth article in our six-part series, edited by guest editor Therese Mageau, T.H.E. Journal offers a two-part refresher primer on how to evaluate educational research. The first part is an article by Dr. Doris Redfield, a noted researcher at AEL. The second is a checklist, which can be found on our Web site (www.thejournal.com), put out by the U.S. Department of Education's Institute of Education Sciences (IES), the organization overseeing the What Works Clearinghouse. The IES contracted with the National Center for Education Evaluation and Regional Assistance to create a report titled "Identifying and Implementing Educational Practices Supported by Rigorous Evidence: A User Friendly Guide."

Because the What Works Clearinghouse (WWC) cannot possibly evaluate the effectiveness of every product, program, practice or policy that schools might use, educators will increasingly find themselves in the role of research evaluators. However, many education practitioners do not have the research background necessary to evaluate research expertly. According to the U.S. Education Department, evaluators of educational research should look for the following:

Educational relevance. The research should address interventions, outcomes, participants and settings representative of the school's interests and needs.

Rigorous, systematic and objective methods. The research should offer the highest quality evidence of what really caused the changes in the outcomes measured. According to the Education Department, the best way to produce such evidence is to conduct an experiment, referred to by some as "the gold standard" of research.

Sufficient detail for replication. The research methods and instruments should be described in enough detail that other researchers can replicate the study.

Submitted to independent, expert review. There should be evidence that the research was reviewed by research and content experts other than the researchers. A typical form of expert review is publication in a refereed journal.

To help educators better understand how to review research studies, we offer some rules of thumb on research evaluation by way of a comprehensive list of questions created by Dr. Doris Redfield, vice president for research and director of the Regional Educational Laboratory at AEL (Appalachia Educational Laboratory). AEL houses one of the 10 educational research and development laboratories funded by the Institute of Education Sciences, which oversees the work of the WWC.

Questions for Evaluating Research Claims

1. Does the research claim that a particular program or product results in improved student achievement or some other outcome, such as improved teacher skill levels?

If yes: Were participants in the study randomly selected from the population to which the results will be generalized? For example, if the study claims that the intervention has a positive effect on fifth-grade students in high-poverty urban schools, were the students participating in the study randomly selected from a population of such students? More important, were participants in the study randomly assigned to the experimental versus the control/comparison groups? (The Education Department places greater emphasis on random assignment than on random selection; a minimal sketch of random assignment follows this question.) In addition, was there a control or comparison group (i.e., a group that does not receive the intervention being studied for effectiveness)?

If the study used random assignment and included a control or comparison group, it was an experiment. And if the experiment was rigorously conducted (i.e., it used reliable and valid measures, and so forth), causal claims can be made, especially if the findings have been replicated.

If the study included a control or comparison group but participants were not randomly assigned, the researchers used a quasi-experimental design. If the research is quasi-experimental, ask the following question: Did the researchers take every possible precaution to ensure that the experimental and control/comparison groups were alike except for the experimental intervention? For example, were the teachers in all groups equally qualified, and did the students have similar backgrounds?
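To make the random-assignment idea concrete, here is a minimal sketch in Python of how a researcher might randomly split a participant roster into experimental and control groups. The roster, the function name and the group labels are hypothetical illustrations for this article, not drawn from any particular study.

```python
import random

def randomly_assign(participants, seed=None):
    """Randomly split a roster into experimental and control groups.

    Random assignment (unlike random selection from a larger population)
    is what lets a rigorous study support causal claims: on average, the
    two groups differ only in whether they receive the intervention.
    """
    rng = random.Random(seed)
    shuffled = list(participants)  # copy so the original roster is untouched
    rng.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return {
        "experimental": shuffled[:midpoint],  # receives the intervention
        "control": shuffled[midpoint:],       # does not receive it
    }

# Hypothetical roster of 20 study participants.
roster = [f"student_{i:02d}" for i in range(1, 21)]
groups = randomly_assign(roster, seed=42)
print(len(groups["experimental"]), len(groups["control"]))  # 10 10
```

Supplying a seed makes the assignment reproducible, which dovetails with the "sufficient detail for replication" criterion above: another researcher could reconstruct exactly how the groups were formed.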

2. Whether the research was experimental, quasi-experimental or otherwise, ask the following:

  • Were the instruments and procedures used to measure results reliable? And, are the procedures described clearly enough that another researcher could replicate them?
  • Is there enough information about the instruments that the reader can reasonably conclude that, if nothing changed in the situation, the instrument would yield the same measurement again?
  • Were the instruments and procedures valid for the purpose of the study? Valid instruments and procedures measure what they purport to measure. For example, a test of mathematical reasoning should measure mathematical reasoning and not simply the ability to accurately compute.

Test manuals should include information about the reliability and validity of the test. Reliability and validity are expressed in terms of a correlation coefficient (r). The closer "r" is to 1.00, which is perfect reliability or validity, the better; however, coefficients above 0.7 are generally considered to be acceptable. Of course, if high-stakes decisions will be made on the basis of a particular measure, that measure should be highly reliable and valid.
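As a concrete illustration of such a coefficient, the sketch below computes a test-retest reliability estimate as the Pearson correlation between two administrations of the same instrument. The scores are invented for the example; real reliability analyses also weigh sample size, score distributions and other factors.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two lists of scores."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Invented scores from two administrations of the same test
# to the same students (a test-retest reliability estimate).
first_administration  = [78, 85, 90, 66, 72, 88, 95, 70]
second_administration = [80, 83, 92, 64, 75, 86, 94, 73]

r = pearson_r(first_administration, second_administration)
print(round(r, 3))  # values above 0.7 are generally considered acceptable
```

A test-retest correlation is only one of several reliability estimates a test manual may report (internal consistency and inter-rater reliability are others), but the coefficient is read the same way: the closer to 1.00, the better.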

3. Finally, for any study you evaluate, ask yourself:

  • Did the research address your question?
  • Are there other possible explanations for the results reported in the study, or for how the authors interpret those results?
  • Is there more than one high-quality study to support the claims?
  • Has the research been reviewed by expert researchers other than those who conducted the study? And, did the independent expert reviewers support the methods used and the conclusions drawn by the researchers?
