Make It Count

Budget crunches are elevating the need for K-12 technology leaders to demonstrate a favorable return on investment. Here's how to ensure you're getting what you pay for.

HOW CAN ROI be demonstrated?

It's the conundrum now facing school ed tech leaders as the struggling national economy has district administrators watching technology expenditures ever more closely, insisting on a positive, demonstrable return on investment. Technology coordinators and other district personnel involved in tech procurement must account for the educational value of every purchase, so they are seeking any method available that can reveal a technology tool's impact on student learning.

Demonstrating the value of a technology project can be problematic because the success or failure of any initiative depends on a host of factors, from implementation strategy to staff buy-in. Do teachers get excited by a new tool in the classroom, or is it considered a foreign object-- even an intruder? The very best technology investments can fall flat if the district culture doesn't encourage technology integration, and if users don't embrace new tools and commit to using them to their fullest advantage.

Generally, technology coordinators don't cite specific metrics they use to measure how well an initiative is working, but most say there are two fundamental yardsticks: tracking usage and, the clincher, charting student achievement. It's the first step, however-- making a smart purchase-- that may be the most important one in guaranteeing a healthy ROI.

Keeping Up

Any large outlay of money had better be preceded by the necessary legwork to ensure that the purchase is a good one. For technologists, keeping tabs on-- and contributing to-- the industry grapevine about the newest technologies and latest research is an obligatory part of the job, helping lead them to the best products and steer them away from technologies that underperform.

To keep themselves up to date, many districts turn to scholarly and scientific research from cutting-edge research organizations nationwide. The most common-- and most reliable-- sources are: the Metiri Group, an educational think tank in Culver City, CA; MDRC, an education-oriented public policy organization with offices in New York City and Oakland, CA; the Washington, DC-based Consortium for School Networking, an organization of K-12 technology leaders; and Mid-Continent Research for Education and Learning, an independent, nonprofit research group in Denver.

Naturally, with research projects going on all the time, this database of knowledge expands as new theories, techniques, and strategies for learning are introduced and tested.

Many districts, including some of the larger ones in California and Illinois, employ one or two specialists who are tasked with reading blogs and trade magazines, and finding other innovative strategies to keep on top of the latest information (see "Researching the Research"). Other districts, such as Loudoun County Public Schools (VA), expect all individual school administrators to follow current research trends on their own, and share what they find with colleagues.

The Loudoun approach is called Loudoun Vital. Through this program, district administrators meet regularly with school principals and assistant principals, and together they share some of the best practices each has discovered between meetings. According to Lynn McNally, the district's technology resource supervisor, the regularity of the meetings has created a culture of individual responsibility mixed with group collaboration that trickles down to the educators themselves. Every August, when the district holds three days of professional development, staff members catch up on research they've read throughout the year.

Researching the Research

THERE'S CERTAINLY NO SHORTAGE of studies on the ways to assess technology's impact on learning. For school districts, the key is making sense of all of the scientific and scholarly mumbo jumbo. According to "Technology in Schools: What Does the Research Say?," a recent study from the Metiri Group and Cisco Systems, technology purchasers should pay particular attention to three distinct kinds of research:

  • Rigorous Research. This is defined as experimental or quasi-experimental design studies: the use of treatment and control groups-- preferably through randomization-- and rigorous statistical design and analysis to test hypotheses, conducted to determine how technology specifically improves learning. The best white papers fall into this category.
  • Descriptive Studies. These efforts provide historical insights as to what happened as a technology solution was implemented, how it was implemented, and why it was implemented. Although such studies might include qualitative research and/or pre- and post-statistics that reveal strong correlations, they do not provide definitive evidence of cause and effect.
  • Theoretical Underpinnings. A body of research that identifies which educational strategies achieve positive results and which do not. Educators are encouraged to adopt technology solutions that are grounded in sound educational theory, increasing the likelihood that those solutions will bear positive results in the classroom.

In the latter case, the Metiri/Cisco study notes that it is not sufficient to simply articulate an alignment between a particular theory and a seemingly successful technological solution. The paper states that for educators to be confident that a technology-based learning solution has theoretical underpinnings, evidence of the theoretical basis must be documented in rigorous research conducted during the development of the solution or software.

"Our teachers repeatedly say how refreshing it is to work with others who actually quote research," McNally says. "In this culture, it's much easier for us to stay on top of things, and roll out the most efficient and affordable technologies possible every time we're ready to make the leap."

Of course, simply tracking high-end scientific research doesn't guarantee a quality purchase. Steve Miller, senior vice president of outcomes research at Scientific Learning, an educational technology vendor that sells the Fast ForWord product to help enhance student cognitive skills, says that formal research may be useful as a way to get a good idea of what the experts are saying in the marketplace. But these kinds of big-picture studies rarely, if ever, apply directly to a particular district.

"If you run a school, your problem is your own domain, and that's all you care about," he says. Miller suggests that districts take formal studies on technology products and devise trial applications for their own environments, noting that university research says whether something works in theory, while schoolinitiated research says whether something works in practice. He says, "The way to be most successful long-term is to have both pieces of evidence."

Tracking Usage

It sounds elementary, but the only way one can begin to account for the value of a technology purchase is simply to show that students are using it. No technology can have an impact on student learning if it is not used, or not used the way it was intended.

A cautionary tale can be found in the Los Angeles Unified School District's $30 million investment in Pearson's Waterford Early Reading Program in 2001. District evaluations in two successive years found that students using the early-elementary reading program were scoring no better on standardized tests than those not using it. LAUSD soon after dropped the program as a part of daily classroom instruction, using it simply as an aid for struggling students.

The district's first instinct was to hold the technology accountable for the program's ineffectiveness, but the evaluations showed that inadequate usage was the real culprit. Evaluators found that teachers were not properly trained to use the technology and therefore didn't fully implement it into their lessons. In some cases, students spent less than a third of the recommended time on the system.

"The findings confirmed what we already knew: You have to turn it on to have an impact," a Pearson official commented to the Los Angeles Times, defending the software. "If you don't get all the way through the program and cover all of the material, then you can't expect the student gains."

Making certain that usage of district technology installations remains high is a priority of a collaboration between the Northern Ontario Education Leaders (NOEL), an alliance comprising the leaders of educational organizations from eight school boards in northwestern Ontario, Canada, and the York Region District School Board, one of the largest school districts in Ontario. Responding to sagging usage of a legacy system at York, the two groups together turned to ReportNet, a software auditing tool from Cognos that provides a window into what users are doing with the technologies available to them. This information then lets districts know whether they're getting the most out of the various technologies they have in operation. The system tracks how much a technology is being put to use; if the data indicates low usage, school officials can take steps to make the technology more appealing to educators.

One of the technologies whose usage ReportNet is most effective at monitoring is a district's learning management system. ReportNet can reveal which users log on to the LMS, what they do once they've logged on, and how much time they spend online during each session. Program coordinators can then use this data to tweak the LMS to better serve its users.
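That kind of usage audit can be approximated with very simple tooling. The sketch below, in Python, assumes a hypothetical CSV export of LMS session logs and flags users whose time on the system falls below a chosen threshold; the file name, column names, and threshold are illustrative assumptions, not Cognos ReportNet's actual schema or output.

```python
# Minimal sketch: summarizing hypothetical LMS session logs to flag low usage.
# The CSV layout (user_id, minutes_online) and the threshold are assumptions,
# not the actual ReportNet/Cognos schema.
import csv
from collections import defaultdict

def summarize_usage(path, min_minutes_per_week=30, weeks=1):
    minutes = defaultdict(float)   # total minutes online per user
    sessions = defaultdict(int)    # number of logins per user
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            minutes[row["user_id"]] += float(row["minutes_online"])
            sessions[row["user_id"]] += 1
    low_usage = [u for u, m in minutes.items()
                 if m < min_minutes_per_week * weeks]
    return {"users": len(minutes),
            "total_sessions": sum(sessions.values()),
            "low_usage_users": sorted(low_usage)}

if __name__ == "__main__":
    # Hypothetical export covering four weeks of LMS sessions.
    print(summarize_usage("lms_sessions.csv", min_minutes_per_week=30, weeks=4))
```

A report like this doesn't explain why usage is low, but it tells coordinators exactly whom to ask.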

"Now that we know who is logging on, when they're logging on, and what they're doing once they're there, we have been able to tailor the technology to meet user needs," says Diane Findlay, project manager for the NOEL/York collaboration. "If they're not using it, we can go back and say, ‘What's wrong with this? Why aren't you using it? What can we do to make it better?'"

As an example of its utility, the system gathers data on the number of educators who are using the LMS for the purpose of finding out how many of their students are failing. Provided with this information, members of the NOEL/York collaboration can get at least an inkling of how much of their district's student population is academically at risk.

Another benefit of the auditing tools: They enable technologists to change the LMS and other online tools dynamically. Through regular surveys that pop up every time a user logs on, Findlay and her team can get direct feedback from users on how they plan to use the system, whether they find the system valuable, and what they would change about the way things are laid out. This information is processed in real time, so the NOEL/York technologists can get a constant gauge of how users feel.
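As a rough illustration of how such on-login feedback might be tallied as it arrives, the sketch below keeps a running average per survey question; the question names and the 1-to-5 scale are hypothetical, not drawn from the NOEL/York survey itself.

```python
# Minimal sketch of a running tally for on-login survey responses, assuming each
# response arrives as a dict like {"question": "value_rating", "answer": 4}.
from collections import defaultdict

class SurveyTally:
    def __init__(self):
        self.counts = defaultdict(int)    # responses seen per question
        self.totals = defaultdict(float)  # sum of ratings per question

    def record(self, response):
        q, a = response["question"], float(response["answer"])
        self.counts[q] += 1
        self.totals[q] += a

    def averages(self):
        return {q: round(self.totals[q] / self.counts[q], 2) for q in self.counts}

tally = SurveyTally()
for r in [{"question": "value_rating", "answer": 4},
          {"question": "value_rating", "answer": 2},
          {"question": "ease_of_use", "answer": 5}]:
    tally.record(r)
print(tally.averages())  # {'value_rating': 3.0, 'ease_of_use': 5.0}
```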

Charting Achievement

In a school environment, any final evaluation of an educational tool's worth boils down to one measure: student achievement. As the price of a technology investment goes up, so does the district's need to see a positive spike in learning.

Fort Worth Independent School District (TX) has historically done little to calculate the ROI of its technology purchases. Now, however, as the district embarks on a $23 million commitment to invest in interactive whiteboards from Promethean, it will begin mining assessment data to determine how the new technology improves student comprehension, according to Fort Worth CTO Kyle Davie.

According to Davie, plans for charting achievement are still sketchy, but the district likely will look at high-stakes test scores in mathematics and science from before the interactive whiteboards are implemented in classrooms and compare them to high-stakes test scores in those subjects after the whiteboards are up and running. The district's plan to phase in the technology in stages will allow Davie's department to assess performance at the classroom level, comparing classrooms that use the interactive whiteboards with those that don't.
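In spirit, the comparison Davie describes is a simple with/without contrast of group averages. The sketch below shows one minimal way to compute it, using invented classroom-average scores; the numbers, group sizes, and the rough effect-size calculation are illustrative assumptions, not Fort Worth's actual method or data.

```python
# Minimal sketch: compare average scores for classrooms with and without the
# new technology. All scores below are made up for illustration.
from statistics import mean, stdev

def compare_groups(with_boards, without_boards):
    diff = mean(with_boards) - mean(without_boards)
    # Pooled standard deviation gives a rough effect size (Cohen's d).
    pooled = ((stdev(with_boards) ** 2 + stdev(without_boards) ** 2) / 2) ** 0.5
    return {"mean_with": round(mean(with_boards), 1),
            "mean_without": round(mean(without_boards), 1),
            "difference": round(diff, 1),
            "effect_size_d": round(diff / pooled, 2) if pooled else None}

# Hypothetical classroom-average math scores after one semester.
print(compare_groups(with_boards=[72, 78, 75, 81, 69],
                     without_boards=[70, 74, 68, 73, 71]))
```

A gap in the averages alone proves little, of course; the phased rollout matters precisely because it gives the district comparable classrooms measured over the same period.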

While Davie is excited about the prospect of tabulating the big technology project's ROI, he says that it's important for the district not to obsess over the process at the expense of other needs.

"It's important to use achievement and assessment to hold the technology accountable, but at the same time we can't spend all day getting at this information," he says. "Considering that we're charged with managing IT for an entire district, at some point, you have to say, ‘We're doing all we can, this technology seems to be working the way we intended it to work,' and then move on to other things."

Davie's use of the phrase "working the way we intended it to work" should serve as a caveat to education officials in their evaluation of a technology's impact. The success of a tech initiative hinges squarely on its implementation and the buy-in of the user. If the technology isn't used as it was intended, it can't be blamed for a failure to produce results.

A case in point comes from a New York Times story last year on supposed "educationally empty" technology, citing the laptop program of the Liverpool Central School District just outside of Syracuse, NY, which showed no measurable impact on student achievement and was phased out. But a close read of the article reveals where the real accountability lay for the program's failure: technical glitches, network freezes caused by the volume of students idly roving the internet, and students using the computers to download pornography and hack into local businesses. A more responsible implementation likely would have resulted in a better outcome.

Part of a Whole

There is truly no magic formula for divining ROI from technology purchases. The process of evaluating the value of a technology investment of any kind differs from district to district. In some districts, anecdotal observations from teachers and testimonials from students might be enough. In other districts, superintendents may be more interested in hard numbers, which can be difficult to produce in the first years.

Many distinct concerns weigh on the amount of impact a technology has: How frequently does the new technology break down? How willing are district users to incorporate it into the curriculum? And perhaps most important, how well do educators do their jobs? Answers to all of these questions have a bearing on how an investment in a particular technology pans out.

Cheryl Lemke, CEO of the Metiri Group, warns that educators should not get bogged down in holding technology accountable for student performance, since technology is only one of a number of tools that educators should use to maximize learning. Instead, Lemke advises districts to take a systems approach and view technology as part of the whole. "What I'm suggesting is that technology is a violin in the context of an orchestra," she says. "How do you measure the value of the instrument versus the other contexts you put the instrument into?"

Davie agrees, noting that the process of education comes down to an educator's ability to use a host of tools and share key concepts with students effectively.

"It's the total integration of technology with other aspects of teaching," he says. "When you live and breathe technology every day, it's tough to remember it, but in education, technology is onlypart of the recipe-- it's not the whole pie."

:: WEB EXTRAS ::
If you would like more information on return on investment, visit our website. Enter the keyword ROI.

Matt Villano is a freelance writer based in Healdsburg, CA.

This article originally appeared in the 08/01/2008 issue of THE Journal.
