Getting the Story Straight


Recent articles in the nation’s two most influential newspapers draw an inaccurate picture of education technology. A three-pronged counteroffensive is in order.

Policy & Advocacy

I remember taking a Red Cross class a number of years ago, and one thing has stuck in my head: to stop the bleeding, apply more pressure.

That phrase has come to mind several times over the past few weeks as the popular press has cast educational technology in a less than favorable light. The first instance was an article in The Washington Post, reported widely in newspapers across the country, about the Department of Education’s study on the impact of educational technology, titled “Effectiveness of Reading and Mathematics Software Products.” The second was a piece in The New York Times that appeared on the front page on May 4, titled “Seeing No Progress, Some Schools Drop Laptops.”

Punctured by the two most revered newspapers in the country, the ed tech industry needs to stop the bleeding. How do we do that? Apply pressure—by debunking the source, asking the right questions, and being our own advocates. All of these approaches are interconnected, but taking a look at each may help you and others in your district and your state counter the negative coverage educational technology is getting.

Debunk the source. In this case, I don’t mean the Post or the Times; I mean the arguments the articles make, and the “facts” presented in support of the arguments. There are a number of reasons for the findings in the DoE study, and they are readily discovered by reading the full report, or even simply scanning the executive summary. It’s unfortunate—and unfair—that the Post writer did not bring those reasons to light, which would have tempered the conclusion that the study is “a rebuke of educational technology.”

For example, one factor the Post left unexamined is how the technology products were administered. The report says that students used the software for an average of about 10 percent of instructional time. For fourth-grade reading, one product was used for seven hours over the year, and another for 20. Can we really expect to change learning outcomes with a tool students use for just seven hours in a single year?

Another problem with the DoE study is that it averages results from different types of software products into a single finding. In sixth-grade math, the study notes, “two products were supplements to the math curriculum, and one was intended as a core curriculum.” In fourth-grade reading, “three of the four products provided tutorials, practice, and assessment geared to specific reading skills, one as a core reading curriculum and two as supplements to the core curriculum. The fourth product offered teachers access to hundreds of digital resources such as text passages, video clips, images, internet sites, and software modules, from which teachers could choose to supplement their reading curriculum.”

Cheryl Lemke of the Metiri Group, a consulting firm dedicated to advancing the use of technology in schools, points out that summaries across studies of different technology-based learning approaches tend to average out the negative and positive gains of any individual approach: pool a product that produced gains with one that produced losses, and the combined average can land near zero, obscuring both effects. She cites a recent report by Yigal Rosen and Gavriel Salomon of Israel’s University of Haifa that demonstrates this.

The study can be faulted further on the quality of the teacher training. “At the end of the training,” the executive summary explains, “most teachers reported that they were confident that they were prepared to use the products with their classes. Generally, teachers reported a lower degree of confidence in what they had learned after they began using products in the classroom.” Reading the full report, however, reveals more specific information about apparent inadequacies in the training, such as this from the first-grade reading component: “The need for such additional support is suggested by the finding that by the time of the first classroom observation (generally about mid-fall), when most teachers had begun to use products, the proportion of teachers indicating that the initial training had adequately prepared them had declined from 95 percent at the end of the initial training to 60 percent.” In other words, two out of five teachers did not feel adequately prepared. That affects the results.

Ask the right questions. As I have often written, we need to be sure that we—and especially policymakers—are asking the right questions. The question the DoE study seems to ask is, Does technology work? I can’t tell you how many times I have been asked if technology works in education. Local school boards, state legislators, governors, members of Congress, parents, my next-door neighbor, the person who cut my hair today. They all recognize the role that the effective use of technology plays in their personal and professional lives, but they want to know if it works in education.

I have two stock answers. One is: “What do you mean by ‘Does it work?’” The other is: “That depends. What kind of technology are you referring to? What sort of outcome are you after? Higher math scores? Better teachers?” Both responses are meant to get people thinking about what they want students to get out of education, and how that is measured. The second is also meant to make sure people understand that the effective use of technology in education depends on having technology that is appropriate to the learning environment, as well as on the extent to which:

  • the technology is free from operational problems (there is sufficient bandwidth, and so on)
  • the teacher has been trained to use the technology effectively as a teaching and learning tool
  • the teacher has support from the principal and peers in the use of the technology
  • the students spend sufficient time using a given technology (five minutes a day on math software is not enough to improve math skills)

This brings me to the New York Times article. The story opens with students using “their school-issued laptops to exchange answers on tests, download pornography, and hack into local businesses…. Scores of the leased laptops break down each month, and every other morning, when the entire school has study hall, the network inevitably freezes because of the sheer number of students roaming the internet instead of getting help from teachers.” Obviously, the district in question did not have technology free from operational problems or sufficient bandwidth. Nor, it seems, was there much training for the teachers, or an acceptable use policy in place and enforced with real consequences. In short, the proverbial “fidelity of implementation” seems to have been lacking.

Be our own advocates. Just as all politics is local, so all advocacy begins at home. The most effective way for us to apply collective pressure is to publicize the positive results of using educational technology. Implement the technology with good professional development, sufficient bandwidth, and buy-in and support from the administration, and then celebrate your successes. Show the positive results on websites, at parent meetings, and at school board meetings, with students doing the presentations. Whether at a school board meeting or a meeting about an upcoming bond levy for technology, when you are asked if technology works, you will be able to say, “Yes, we want it to [insert need], and it does, because you have provided enough teacher training, bandwidth, and computers to ensure that students and teachers can use it for its intended purposes.” That should apply the right pressure in the right place.

Geoffrey H. Fletcher is editorial director of T.H.E. Journal and executive director of T.H.E. Institute.

This article originally appeared in the 06/01/2007 issue of T.H.E. Journal.
