Education Decisions: Where's the Evidence and Research Base?


[Editor's note: This article is the first installment in a two-part series on using research to make decisions on education technology purchases. Part 2 can be found here. --D.N.]

Remember Let's Make a Deal, the old game show hosted by Monty Hall? On that show, you could win the prize behind one of three doors. If you started by choosing door 1, should you have changed your mind and selected door 2 once Monty showed you there was no prize behind door 3 (Tierney, 2008)? The counterintuitive answer is yes: switching doubles your odds of winning. What has this to do with research? Well ... people are often convinced that what they know is right and forge ahead with decisions based on their rationalizations, no matter what the research indicates.
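If that answer seems hard to believe, you can check it empirically. The short Python sketch below simulates the game many times and compares the two strategies; the function name and trial count are mine, for illustration only, and are not drawn from Tierney's article.

```python
import random

def play_round(switch: bool) -> bool:
    """Simulate one round of the Monty Hall game.

    Returns True if the contestant wins the prize.
    """
    doors = [1, 2, 3]
    prize = random.choice(doors)
    first_pick = random.choice(doors)

    # Monty opens a door that hides no prize and is not the contestant's pick.
    monty_opens = random.choice(
        [d for d in doors if d != first_pick and d != prize]
    )

    if switch:
        # Switch to the one remaining unopened door.
        final_pick = next(d for d in doors if d not in (first_pick, monty_opens))
    else:
        final_pick = first_pick

    return final_pick == prize

trials = 100_000
for switch in (False, True):
    wins = sum(play_round(switch) for _ in range(trials))
    print(f"switch={switch}: win rate ~ {wins / trials:.3f}")
```

Staying with door 1 wins about a third of the time; switching wins about two-thirds -- exactly the result many people refuse to believe until they see the evidence.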

In education, such decisions--especially technology purchasing decisions--can have profound consequences.

So concurred Ben Weintraub, CEO and president of Merit Software, who said he has concerns about education decision making and how research is used in K-12 schools. He said he believes there is a lot of faulty research, some of which schools use to make their policy decisions. Educators tend to take a little bit of information and turn it into what they want it to be (personal communication, April 2008).

According to Weintraub, basing education decisions on misinterpretations of research results has led to policy changes that have not necessarily increased student achievement. As examples of areas where the interpretation of research results has been a concern, he pointed to increased spending, charter schools, voucher programs, single-sex schools and classes, class size (is smaller really better? for which students?), supplemental education services mandated by No Child Left Behind, decisions aimed at increasing accountability, and false claims based on extending research beyond its initial scope.

More important, as a software developer, he said he's concerned about schools that implement technology interventions based on research showing very small effect sizes. He finds problems with basing decisions on studies with small sample sizes, ignored control groups, data that are not student focused, short-term evaluations, and causes of improvement that are, in reality, undetermined. Many decisions do not consider cost and practicality and are not well thought out.
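Because "effect size" comes up again in part 2, a quick hypothetical illustration may help. The sketch below computes Cohen's d, one common effect-size measure, for a treatment and a control group; the numbers are invented for illustration and are not drawn from any study Weintraub mentioned.

```python
import math

def cohens_d(mean_t: float, mean_c: float,
             sd_t: float, sd_c: float,
             n_t: int, n_c: int) -> float:
    """Cohen's d: the difference between two group means,
    standardized by the pooled standard deviation."""
    pooled_sd = math.sqrt(
        ((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2)
    )
    return (mean_t - mean_c) / pooled_sd

# Hypothetical test scores: the treatment group averaged 2 points higher,
# but scores vary widely within each group (SD = 15).
d = cohens_d(mean_t=72.0, mean_c=70.0, sd_t=15.0, sd_c=15.0, n_t=60, n_c=60)
print(f"Cohen's d = {d:.2f}")  # about 0.13 -- a very small effect
```

By Cohen's rough convention, a d near 0.2 is small and 0.8 is large, so an intervention reporting a d of 0.13 offers thin support for a district-wide purchase, whatever its statistical significance.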

Patricia Lauer (2004) at Mid-continent Research for Education and Learning emphasized, "Without access to information from research about education practices, policymakers are more likely to make decisions that are ineffective or even harmful" (p. 3). But access is not enough. In part 1 of this two-part series on education decision making, I'll describe some of the questionable education decisions and practices that have been reported in various news sources and pose some initial questions to consider, which might help you sort out claims by companies that say their products have a strong research base.

In part 2, I'll provide guidance on what to look for in a research report, tips for how to read a study, and resources for research and interventions that work or are promising. What is strong research? How do you know if research warrants policy changes or adopting a technology intervention in your setting? Statistically significant outcomes are not necessarily of practical significance. Where do you turn if research is sparse or nonexistent? How should a technology solution be implemented?

Ineffective and Harmful?
It's not difficult to find examples of policy decisions that have been questioned. Newspapers, education newsletters, and journals are filled with them. In supporting his views, Weintraub questioned the cost effectiveness of a $1.5 million program to keep 47 Washington, DC schools open on Saturdays for 14 weeks to help prepare students for spring standardized tests in reading and math (Labbé, 2008). As the program was voluntary, how many students would come on a weekend, and would the program actually make a difference in their achievement? He also referred to the Greene County (GA) school system's decision to scrap a controversial plan to become the country's first entirely single-sex school district. The system altered this mandate to give parents the option of choosing gender-separate classes in grades 3 through 8 for math, science, social studies, and language arts (Associated Press, 2008).

After I interviewed Weintraub, I noted other examples within just a few days of each other. Springfield (MA) planned to "warehouse" failing grade 8 students in a special preparatory high school rather than returning them to a setting in which they had already failed. The plan was apparently made without public input and with insufficient knowledge about its effects on students (e.g., isolation), its cost, and its likely effectiveness; it was eventually rescinded after public protest (Goldberg, 2008). There's also the controversy over replacing bilingual education with structured English immersion as the default method for teaching English language learners. In three states that had done so, less-than-stellar results on NAEP testing raised questions about the new approach and its effect on achievement (Zehr, 2008). Finally, the entrenched practice of ability grouping and tracking in schools and its effect on achievement has been questioned (Futrell & Gomez, 2008).

I asked Weintraub about the optimal conditions for conducting a study in schools, as support from staff plays a huge role in the success of a study relying on technology use. He replied that schools must decide how they want to use software. Will this be a freebie, or do they intend to use it? Their commitment to the intervention is critical. To get support from staff, he said he believes it is important to give schools what Merit Software gives to any of its customers: technical help, curriculum alignment help, and confidence in the product. People do the strangest things in the name of free.

When news reports describe a technology intervention that has not worked in a school district, the Metiri Group (n.d.-a) emphasized the importance of looking beyond what was reported. Implementation matters. Case in point: Metiri maintains a database called Technology Solutions that Work, which includes the Waterford Early Reading Program from Pearson Education. After the Los Angeles Times reported on the failure of a $50 million reading improvement initiative in the Los Angeles Unified School District, which featured the product, Metiri indicated it would not modify its rating of the product, calling the district "a textbook study on how not to implement a technology-based intervention in your district" (para. 1).

Gathering Evidence
Obviously, policymakers wish to avoid such reports of ineffective and potentially harmful results from their decision making. Impact studies and the theoretical research basis behind an intervention are important considerations when adopting a technology solution or other program or practice. You also need to consider the characteristics of your own setting.

Let's focus on technology for the smart classroom. Since NCLB, nearly every company has posted claims that its products are backed by strong research. However, some of those claims are contrived rather than grounded in genuine research (Metiri Group, n.d.-b). How can you tell the difference? Consider the following questions, which I derived from the Metiri Group's process for examining a research basis:

  • Was the research basis in place when the product was developed, and was it used to drive that development? Companies might post a contrived research basis for older products developed prior to NCLB and its mandate for schools to adopt interventions based on scientific research.
  • Who authored the research basis? Was it written by qualified educators or researchers who were part of the product development team? Or was it written after the development by outside consultants who may have had little or no connection with the product? The latter might lead to erroneous associations.
  • If a product claims a research basis, was that product actually used in the study? While this might appear a strange question to ask, the Metiri Group indicated that companies have been known to use relationships found in studies to support their technology product, when in actuality the relationships between those studies and the intervention "are often forced and even nonsensical."
  • Can you find a reciprocal relationship between the research-based learning technology and the body of research and best practices? Look for multiple articles by researchers and theorists who discuss the product as an example of a model implementation of a learning strategy in practice.
  • Once a product is on the market, have any impact studies documented the product's value in actual practice? The Metiri Group noted that a research basis can grow old. If an intervention has been available for five years or more and no impact studies have been conducted, there is usually good reason to question the product's efficacy and the accuracy of its research base.

Bottom Line
News reports become sources for decision making, as do a company's reports on studies it conducted and its documentation of the research basis for the products it sells. However, those sources might contain bias or inaccuracies, and their evidence might not be sufficient for sound decisions. Additional evidence can be gathered by reading actual research studies and meta-analyses reported in journals. Did "effect size" have any meaning for you? Maybe you can't find appropriate research. With achievement and finances on the line, a hasty product adoption and implementation are unwise. Come back next week for part 2.

References

Associated Press (2008, April 20). Greene County schools plan to offer single-gender classes. The Macon Telegraph. Retrieved April 21, 2008, from http://www.macon.com/198/story/328001.html [available in archives].

Futrell, M. H., & Gomez, J. (2008). How tracking creates a poverty of learning. Educational Leadership, 65(8), 74-78.

Goldberg, M. (2008, May 16). Plan rescinded for special school. MassLive: The Republican. Available: http://www.masslive.com/republican/stories/index.ssf?/base/news-14/121092422944180.xml&coll=1

Labbé, T. (2008, January 19). Weekend test prep program is planned for D.C. schools. The Washington Post, p. B01. Available: http://www.washingtonpost.com/wp-dyn/content/article/2008/01/18/AR2008011802152.html

Lauer, P. (2004). A policymaker's primer on education research: How to understand it, evaluate it, and use it. Mid-continent Research for Education and Learning and the Education Commission of the States. Available: http://www.ecs.org/html/educationIssues/Research/primer/index.asp

Metiri Group (n.d.-a). Why "What Works" didn't in L.A. Unified School District. Available: http://www.metiri.com/techsolutions/DefaultTest.asp?StoryID=2

Metiri Group (n.d.-b). Bases in research: From convincing to contrived. Available: http://www.metiri.com/techsolutions/DefaultTest.asp?StoryID=1

Tierney, J. (2008, April 8). And behind door no. 1, a fatal flaw. The New York Times. Available: http://www.nytimes.com/2008/04/08/science/08tier.html

Zehr, M. A. (2008, May 14). NAEP scores in states that cut bilingual ed. fuel concern on ELLs. Education Week, 27(37), 10. Available: http://www.edweek.org/ew/articles/2008/05/14/37ell_ep.h27.html?tmp=1813280948



About the author: Patricia Deubel has a Ph.D. in computing technology in education from Nova Southeastern University and is currently an education consultant and the developer of Computing Technology for Math Excellence at http://www.ct4me.net. She has been involved with online learning and teaching since 1997.

Proposals for articles, news tips, ideas for topics, and questions and comments about this publication should be submitted to David Nagel, executive editor, at [email protected].
