Technology's Role in Accountability and Assessment


Lately the talk online, in newspapers and education journals, and at conferences is about how horrible testing is. The reference, of course, is to statewide standardized testing. I submit that testing is a teacher's best friend, if we define testing as anything a teacher does to see if a student is learning. A look in a student's eye, a pop quiz, a student's poem or painting, a program a student has written, and even a statewide standardized exam are all examples of tests that tell us about student progress. The more we know about what students know, the better we are able to help them learn more.

In one way, technology started this whole statewide testing situation. With the advent of optical character readers (scanners), we were able to score a huge number of tests — the same tests for a lot of students — in a relatively short period of time and return the results to districts. We were also able to automate testing — automation is a typical first use of technology. State, and now federal, policy jumped on the back of the technology and saw that one could compare schools using these tests, and with the standards movement, tie goals and results together. Thus, we have accountability as defined by the No Child Left Behind Act. (I admit that this is an oversimplification.)

However, seldom does automation remain a primary use of technology. Technology ultimately changes the practice itself and often the entire industry. Technology in assessment is on the cusp of doing the same thing. Irwin Kirsch, Ph.D., of Educational Testing Service, speaking at a meeting of the Partnership for 21st Century Skills, noted that we are about to have a new psychometric model that can change what and how we are teaching. Pieces of this new model are everywhere:

  • The Graduate Management Admission Test (GMAT) is taken with technology.
  • Virginia and other states are beginning to provide tests online.
  • Testing companies are conducting research showing computer-scored open-ended questions and essays are just as reliable as those scored by humans. Technology can also collect and store more than just right and wrong answers.
  • Online content companies are linking assessments to standards in their programs, while some are also linking them to student information systems.
  • States are using computer-adaptive testing that intelligently changes questions or sequences of questions based upon the students' answers.
  • Electronic portfolios, such as those described by June Ahn in "E-Portfolios: Blending Technology, Accountability and Assessment" (Page 12), are beginning to be incorporated into products and used by school districts nationwide.
  • Even politicians are asking that technology be used to conduct testing with the primary purpose of getting the test results back to the schools as quickly as possible.
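The computer-adaptive testing mentioned in the list above can be illustrated with a toy sketch. This is not any vendor's or state's actual algorithm — real engines use statistical ability estimates (e.g., item response theory) — and the function names here are invented for illustration. The sketch only shows the core branching behavior: a correct answer raises the difficulty of the next question, an incorrect answer lowers it.

```python
# Toy illustration of computer-adaptive testing: difficulty moves up
# after a correct answer and down after an incorrect one. Real systems
# select items with statistical models of student ability; this sketch
# shows only the branching idea.

def next_difficulty(current: int, answered_correctly: bool,
                    min_level: int = 1, max_level: int = 5) -> int:
    """Return the difficulty level for the next question,
    clamped to the allowed range."""
    if answered_correctly:
        return min(current + 1, max_level)
    return max(current - 1, min_level)

def run_adaptive_test(responses, start_level: int = 3):
    """Walk through a sequence of right/wrong responses and
    return the difficulty level presented at each step."""
    level = start_level
    path = [level]
    for correct in responses:
        level = next_difficulty(level, correct)
        path.append(level)
    return path
```

For example, a student who answers two questions correctly and then misses one would be stepped up twice and then back down once: `run_adaptive_test([True, True, False])` yields the difficulty path `[3, 4, 5, 4]`.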

The list could go on, but unless we look at this from another perspective — a futures perspective — we will only get incremental progress at best in applying technology to assessment. A futures perspective is a way of looking at the world based upon five key tenets. Applied to testing, a futures perspective would consider at least:

Alternatives. We should have alternatives in how we measure student progress and in having various measurements count in a state's accountability system.

Holism. Never before have we connected goals (i.e., standards), curriculum and assessment as tightly as we do today, and assessment is the key. With technology tools, we can have levels of analysis for individual students, campuses, districts, states and the nation that can help us understand everything from individual student learning to the impact of programs.

Stakeholders. By using technology to publish assessment results on Web sites, or individual student results on protected sites, we can communicate with parents every day, not just once a semester at parent-teacher conferences.

Long-term view. This is one thing we are sorely lacking, as our focus is only as far as the next state assessment. See the recommendations below.

Vision. There are a number of visions for assessment out there. Peter Robertson, Cleveland Municipal School District's CIO, envisions a data warehouse where daily assignments and short cycle formative assessments come together to form a robust picture of a student that is matched to standards. John B. Watson, Ph.D., provides a piece of his vision in this month's Industry Perspective.

So, what must we do to apply this futures perspective? First, we must realize testing companies are market-driven; they will do what their customers want. If the states and federal government want technology in testing, the companies will provide it. From the policy perspective, we need to look first to the federal government — the driver of the accountability movement — to provide flexibility, as well as short-term and long-term research:

Flexibility. The U.S. Education Department needs to provide more flexibility to the states in designing their own accountability systems.

Short-term research. Those states using online testing should not only be allowed to continue, but they also should receive funds to study all aspects of the experience. They should look at implementation concerns such as those outlined by McHenry et al. (Page 28), as well as costs, impact of the technology on results, etc.

Long-term research. If the federal government is truly interested in helping students learn more with accountability as a hammer, it should fund research to learn more about visions such as those presented by Dr. Watson, author of the Industry Perspective.

As noted earlier, companies, districts and schools are using technology in a variety of innovative ways. A research effort would accelerate and make acceptable the use of technology with all kinds of assessment. It would also benefit all of education. So, what are we waiting for?

This article originally appeared in the 04/01/2004 issue of THE Journal.
