What Does SBR Mean for Education Technology?

Driven by budgetary and accountability pressures, K-12 education decision-makers have increased their demand for evidence of effectiveness and a return on their investment. The scientifically based research (SBR) provisions in No Child Left Behind (NCLB), while somewhat amorphous in meaning, have nonetheless served as a catalyst for educators demanding ever more rigorous research. The SBR requirement is not part of the Enhancing Education Through Technology grant program (Title IID); however, to the extent technology applications are included in other NCLB programs such as Reading First or Title I, those programs' SBR provisions do apply to technologies purchased with those funds.

The SBR provisions apply to the wide range of school interventions - policies, programs, practices and products. But with education technology viewed by some as an unproven, supplemental, yet expensive educational intervention, this new call for a burden of proof has had as much impact on technology as on any other educational area. What, then, does SBR mean for education technology?

The Software & Information Industry Association (SIIA) has had a long-standing interest in education technology research. For instance, the association periodically publishes a "Report on the Effectiveness of Technology in Schools," which has been used both to inform product development and to guide policy and classroom implementation. Among the consistent findings in the report is that education technology is neither inherently effective nor inherently ineffective; instead, its degree of effectiveness depends upon the congruence among the goals of instruction, characteristics of the learners, design of the software, and educator training and decision-making, among other factors.

In other words, while the technologies themselves certainly have a bearing on educational outcomes, ultimate effectiveness - as with all educational interventions - depends upon the appropriate implementation of that technology in meeting teaching and learning goals.

Technology also presents an added variable whose impact can be difficult to distinguish from that of the broader intervention. Because technology is a tool used to achieve a variety of purposes and functions, the study of education technology necessarily involves the study of both the education and the technology. For example, an instructional software program includes a pedagogy, an instructional design and a technology design. As another example, learning management and communication systems employ technology to serve an educational function, but at the same time can provide their greatest value indirectly by changing behaviors and systems. For purposes of research, can the educational function and technology delivery be separated? Should they be? If not, how do we isolate the impact of each? Education technology research must consider all these questions in its design, reporting and analysis.

With that said, there certainly is a need for more and better research on education technology. This research can both refine our understanding of the conditions and practices under which a given technology is effective and validate the efficacy of specific applications.

Impact of SBR

Both educators and publishers/developers are responding to the new research paradigm. Publishers and developers (both for-profit and nonprofit) have long reviewed the research literature to inform product development, as well as conducted product effectiveness evaluations. They are now working to: (1) better document the scientific basis of their products through white papers; and (2) build a more rigorous body of evidence that validates their products' effectiveness. These efforts correspond to NCLB's two applications of the SBR definition. In the first application, most NCLB references simply require that educational decisions be made "based on a review of SBR," meaning that research demonstrates the efficacy of an intervention's underlying principles. In the second use, some SBR provisions set a more stringent standard, requiring that funds be used only for interventions that have been demonstrated - via SBR-sanctioned methods such as randomized assignment - to effectively produce the desired result (e.g., improving student achievement, improving teacher quality, and so forth).

While NCLB's SBR definition is somewhat broad and open to interpretation, much of the onus falls on educators to determine whether a study meets the criteria. Guidance from the U.S. Department of Education directs educators to adopt the principles of "evidence-based education," which combines professional wisdom with the best available empirical evidence for making decisions. The federal What Works Clearinghouse (WWC) seeks to provide some assistance in determining the quality of research findings, but it will be limited to the scope of topics and interventions it chooses to study. Most important, stakeholders should recognize that the WWC is only a resource, and NCLB does not require WWC inclusion to meet SBR requirements. In fact, the WWC focuses only on effectiveness research as it relates to student outcomes. Thus, it will not generally provide other types of SBR information educators need, such as whether a program, practice or product is research-based; what factors are necessary to its effective implementation; or how best to combine professional wisdom and empirical evidence to make decisions.

Educators are responding to this new SBR environment by seeking research findings as well as struggling to expand their professional roles to include research on their own practice. In both cases, SBR provisions apply not only to procured products such as technology, but to all school policies, programs and practices. What has especially changed is that educators are now asking for this research. They are also asking questions to ensure the information is sound, such as: Does the evaluation study include control groups? And, are the study's student sample and test instrument authentic to my school?

Of course, this new emphasis on research-guided decision-making comes at a challenging time as educators are struggling with budget cuts, NCLB implementation, and increased testing and accountability demands. Like all educational policies, NCLB's SBR "vision" will be tempered by these practical realities as educators, and those who serve them, balance a myriad of goals, needs and requirements. Therefore, it is too simplistic to expect that SBR can directly guide practice; it is much more realistic for SBR to be one of the considerations that informs practice.

For these and other reasons, SBR implementation and impact will vary widely over time, as well as by district, state, federal program and educational issue. As such, SBR will not trump these other factors, but will simply add information to the increasingly complex equation used by educators to make instructional and management decisions, meet educational goals, and address NCLB requirements.

Technology's SBR Challenges

Given that SBR is now part of the education technology landscape - both in terms of federal requirements and educator needs - it is important to look at the challenges inherent in such demands on educational products, programs, policies and practices. Discussed below are many of the challenges faced by educational research in general, along with the particular dilemma SBR poses for technology-based products and practices.

Resources. Demands for more, and more rigorous, research must be accompanied by more resources - human, financial, etc. - to adequately address new research goals and SBR requirements. However, right now neither the research community nor the educational marketplace is positioned to quickly adapt to these new resource requirements. In addition, will schools using federal funds to implement their own programs or develop their own technologies have the resources to conduct the necessary SBR themselves?

For the moment, most developers and publishers will prioritize and conduct SBR for mission-critical products (e.g., reading instructional software typically purchased by districts with Title I funds) that will likely face the highest expectations for research backing. While NCLB makes no such differentiation, it is not realistic to expect that all products will be subject to identically stringent SBR standards; scrutiny will likely be directly proportional to a product's cost and anticipated educational impact.

However, it is likely that educators will choose to ask for evidence of effectiveness for supplemental purchases, either on their own or in an effort to comply with their interpretation of an NCLB SBR provision. This will likely have repercussions for product prices and the ability of some publishers to compete, given the cost burdens of such research. This may be especially true in technology publishing, where the margins are very slim and most publishers, particularly small ones, cannot absorb increased development costs.

Whether or not educators accept a trade-off between cost increases and product fallout, on one hand, and evidence-backed products, on the other, remains to be seen, and will depend very much on whether SBR requirements result in better products on the market. Also unclear is whether educators will respond in kind by evaluating their own "homegrown" technology efforts. If educational practice is greatly improved through the application of SBR, a very different educational marketplace may emerge in the next decade.

Time. From initial planning to final reporting, evaluation studies can take considerable time to complete and release. Even evaluation studies lasting just one semester in the classroom can take more than a year. And most researchers agree that it often takes three years or more to complete a satisfactory number of quality effectiveness studies. The problem arises when this research timeline is juxtaposed with the timeline for technology development, which is built on the speedier principle of innovation. At such a speed, many technology products are likely to be obsolete (at least in their original version) before research on them is available.

Thus, it is incumbent upon educational policy-makers, technology publishers and educators to strike the appropriate balance between the need for sound effectiveness research and the "natural laws" of technology advancement. Indeed, SIIA was instrumental in ensuring that Title IID of NCLB did not have SBR requirements attached to it, precisely because we wanted to protect innovation, both in the development and in the school-based implementation of educational technologies.

School Participation. The logistics of conducting research in schools can be challenging to say the least. The demands of educational governance, research protocol and research design often create cumulative, and even contradictory, demands as well as increased costs, both in time and resources. For example, while SBR effectiveness research puts a premium on randomization, practical realities often make this very difficult. Parental denials of permission, perceptions that some students may be excluded from an instructional benefit, and school concerns with disruption to plans and schedules can all compromise the research design. Also, some educators have historically seen only the costs and potential harm of participation; because they were not trained to conduct or rely upon research, they may not appreciate the benefits, direct or indirect.

While studies can often be designed to minimize these barriers, the fact remains that the educational culture - unlike, for example, health care and medicine - has not always been historically receptive to research participation. As a result, even well-intentioned developers may have difficulty finding school research partners. On the other hand, one of the acknowledged potential outcomes of the demands for SBR is that educators are becoming more research-savvy.

Fidelity of Implementation. As noted above, while not all technology is equal in its effects, a given application is neither intrinsically effective nor ineffective in improving education and achievement. Instead, even the most well-designed products and services must be implemented appropriately to accomplish their goals. Most important, educators must understand how to integrate technology as a tool to achieve educational objectives, including, in some cases, adopting a redefined model of the teacher's role and instruction. Professional development, school leadership, adequate technical support, properly configured hardware, appropriate pedagogy, systematic instructional use, recommended intensity of use, and other implementation factors are all inseparable from results. These conditions and practices have as much influence on outcomes as the technology and its design.

However, consistency of implementation is very difficult to maintain from one study setting to another. This is true for all school-based research, because most educational interventions depend to some, and often great, extent on implementation by the teacher, which is extremely difficult for the researcher to control. Most education technology use does not consist of "teacher-proof" applications. Consequently, any study of technology's effectiveness must account for all the variables listed above - in both the study design and the reporting of its results.
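
To make this concrete, the sketch below (in Python, using NumPy) shows one common way analysts account for implementation variables: including them as covariates in a regression alongside the product itself. All variable names and numbers here are hypothetical illustrations, not drawn from any actual study.

```python
# A minimal sketch of covariate adjustment for implementation factors.
# All data are simulated; variable names (training_hours, weekly_minutes)
# are hypothetical examples of the fidelity factors discussed above.
import numpy as np

rng = np.random.default_rng(0)
n = 200  # hypothetical number of classrooms

used_software = rng.integers(0, 2, n)                        # 1 = used the product
training_hours = rng.uniform(0, 20, n) + 10 * used_software  # users got extra PD
weekly_minutes = rng.uniform(0, 120, n)                      # intensity of use

# Simulated achievement gain: depends on the product AND on implementation.
gain = (3.0 * used_software + 0.4 * training_hours
        + 0.05 * weekly_minutes + rng.normal(0, 5, n))

# OLS with implementation covariates recovers a product effect near 3.0 ...
X = np.column_stack([np.ones(n), used_software, training_hours, weekly_minutes])
coefs, *_ = np.linalg.lstsq(X, gain, rcond=None)
print(f"adjusted product effect:   {coefs[1]:.2f}")

# ... while ignoring implementation conflates the product with the extra
# teacher training its users received, inflating the estimate.
X_naive = np.column_stack([np.ones(n), used_software])
naive, *_ = np.linalg.lstsq(X_naive, gain, rcond=None)
print(f"unadjusted product effect: {naive[1]:.2f}")
```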

Outcomes and Test Instruments. Technology in today's schools takes many varied forms and addresses virtually every educational function and need - from delivery of core curriculum to changing the system of instruction. As a result, any technology evaluation must clearly articulate outcome goals and employ appropriately sophisticated outcome measures and data-gathering techniques to capture the technology's intended impact(s).

However, the year-end standardized achievement tests that serve as the core of NCLB's accountability requirements, and are driving much of education, often provide too narrow and blunt a measurement to adequately capture technology's impact. While SBR does not explicitly require use of this outcome measure, it is often the only one of interest to educational decision-makers whose schools' fates may rise and fall on student performance on such tests. There is concern that this focus will undercut the potential promise of the technology as a transformative productivity and learning tool. It will often fail to adequately capture important educational goals, including:

1. 21st century learning goals such as technology literacy, self-directed learning and problem-solving;

2. Indirect benefits in terms of changing the school process, improving efficiency and expanding opportunity; and

3. Core academic knowledge and skills that may not be measured in the finite scope of some standardized tests.

Do the current measures confine us to too narrow a form of learning and assessment? Is increased student achievement only demonstrated by high-stakes performance? Rather than focus SBR only on improved standardized test scores, education should enhance the depth and breadth of its measures to accurately capture many of technology's profound impacts.

Research Models. The SBR definition, especially in the NCLB provisions for "proven effective by SBR," places a premium on evaluation studies capable of demonstrating causation. The NCLB definition, as well as the WWC research design standards, states that causation and effectiveness are best demonstrated through quasi-experimental designs that employ control groups at a minimum, and preferably through experimental designs that also incorporate randomization. This raises a number of challenges.

First, as important as it is to know whether studies prove a product effective, educators need research to help them better understand how to replicate its successful use. Experimental design, by definition, tries to isolate all possible variables (e.g., class size, teacher preparedness, etc.) to zero in on the intervention itself. The problem, however, is that if a school wants to use the product, it needs to know the circumstances under which the product worked best. Does class size matter in implementing a particular program? In the case of technology, does Internet access speed affect successful implementation? Experimental design, because it "cancels out" all other variables, does not offer this information to educators.

Second, many practical challenges arise when implementing studies that employ control groups and/or randomization of subjects. For example, what if students are grouped according to certain academic characteristics, and therefore random assignment is not possible? Or, what if the technology delivers the core curriculum, making it impossible to employ control groups where students would be denied the experimental intervention? While there are solutions to these dilemmas, the result is often a more challenging and less ideal research design.

Finally, concern arises regarding the degree to which randomized studies are representative of real-world practice. Because randomized assignment often disrupts the normal operation of the school, this kind of design can sometimes actually be less reflective of the realities of school practice than can alternative research designs. For all these reasons, while randomized research remains the "gold standard" for validation studies, it should not serve as the sole means by which education answers questions and informs practice, including those that involve technology.
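
To illustrate the logic, and its limits, here is a minimal sketch (in Python, using NumPy) of the randomized design these standards favor: random assignment to treatment or control, followed by a difference-in-means estimate. All numbers are hypothetical; the sketch shows the reasoning, not any real study.

```python
# A minimal sketch of a randomized controlled comparison.
# The 4-point "true" effect and all scores are simulated for illustration.
import numpy as np

rng = np.random.default_rng(42)
n = 300  # hypothetical student sample, split evenly

# Random assignment: the step that is often hardest to achieve in schools.
treated = rng.permutation(np.repeat([0, 1], n // 2))

# Simulated post-test scores with a hypothetical 4-point product effect.
scores = 70 + 4 * treated + rng.normal(0, 10, n)

effect = scores[treated == 1].mean() - scores[treated == 0].mean()
se = np.sqrt(scores[treated == 1].var(ddof=1) / (n // 2)
             + scores[treated == 0].var(ddof=1) / (n // 2))
print(f"estimated effect: {effect:.2f} points (SE {se:.2f})")

# Note what this design does NOT reveal: because randomization balances
# away factors like class size or Internet access speed, the single
# estimate says nothing about the conditions under which the product
# works best - exactly the limitation discussed above.
```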

Credibility of Research Results. A hallmark of quality research - and an element of the definition of SBR - is independence and objectivity, which is often established through peer review. However, the educational research enterprise is generally not equipped, at least not at this time, to respond to the need for additional research reviews that could result from NCLB. The issue is especially complicated for private-sector sponsors of research on their proprietary products and services.

First, there are insufficient foundation and public funds available to support independent research, so companies must use their own resources. Despite efforts to employ a valid process, many stakeholders may categorically dismiss such a study as biased, even if their objections are unfounded. Second, there are insufficient peer-review processes in place to provide a third-party perspective, particularly for technology-based products. Many educational research journals categorically dismiss study submissions focused on a specific commercial technology product or service.

Finally, peer-review questions aside, research on technology-based instructional applications requires three areas of expertise: research, the instructional area itself and the use of technology. Currently, the educational community likely falls short of a sufficient number of researchers possessing this trinity of credentials. Meanwhile, to help educators and publishers find researchers, the WWC is developing a database of those who could be called on to do effectiveness studies. Criteria for this voluntary service will include self-proclaimed adherence to the WWC's rigorous research standards and some listing of content expertise. But the WWC is careful to point out that it does not endorse or validate the quality of a researcher's or research organization's work.

Collaboration Among Stakeholders

These seven challenges provide an overview of the issues confronting education technology in the new SBR environment. Of course, many of these challenges can be at least partially, and hopefully sufficiently, addressed if all parties collaborate and each contributes sufficient resources, flexibility and creativity. SIIA's "Scientifically Based Research: A Guide for Education Publishers and Developers" offers information and solutions necessary for stakeholders to respond to SBR demands. Key to resolving these challenges are public-private partnerships involving all stakeholders - educators, school officials, researchers, foundations, government officials, and technology publishers and developers.

If these issues can be resolved, SBR may actually present an opportunity for technology. For those many dedicated educators who understand that technology is a critical element to improving educational opportunities and achievement, the SBR agenda provides a chance for validation and even redemption from the naysayers.

This is the final installment of T.H.E.'s "A Closer Look at SBR" series, which was edited by guest editor Therese Mageau.

SIIA is offering T.H.E. readers a discount on its "SBR: A Guide for Education Publishers and Developers" online at www.siia.net/estore/10expand.asp?ProductCode=SBR-03.
