The Potential, Pitfalls and Promise of Computerized Testing


Imagine administering an online standardized test to an entire class of 11th-grade students when, halfway through the exam, the server hosting the test hits a snag and throws everyone offline. Imagine another scenario in which your elementary school has so few computers that you must bus your students to the local high school for a timed test. At the new test site, six students suddenly refuse to take the test and begin crying, while more than half of the students discover they cannot comfortably reach the keyboards. These scenarios demonstrate the many risks of using computers to test students. However, thinking ahead about the needs of the students, the testing site and the system being implemented limits the potential for problems to arise during testing.

NCLB Guidelines

Though accountability measures have been in place since the inception of education in America, educators have never before faced such intense scrutiny or such systematic evaluation of their teaching practices. Concurrently, an increased reliance on technology, the Internet and mass media has produced an increasingly fast-paced American culture.

To make matters worse, the No Child Left Behind Act requires that schools close achievement gaps much faster than before. As a result, many school districts are scrambling to understand the three-way relationship among NCLB, computerized testing and their own district (Recio, Clark and Sevol 2002).

NCLB requires that each school, in concert with its state's guidelines, develop clear cutoff points for achievement in math and reading. Failure to show positive growth in these areas can have dire consequences for a school district, including students transferring to higher-performing schools, state-mandated financial changes, and/or a significant shift of autonomy and control from the school to outside sources (McDonald 2002).

Meeting Goals

Garnering and using timely data are the main purposes of computerized testing (Thomas and Bainbridge 2002; Foshay 2001; Olson 2001). Timeliness matters because results reach students, parents and school personnel immediately, which in turn allows quicker and more effective changes in both curricular and pedagogical delivery.

Likewise, computerized tests can improve accuracy in both the taking and the scoring of tests. Olson (2001) points out that computerized testing allows students to take exams one question at a time, lessening the possibility of filling out an answer sheet incorrectly. Furthermore, computerized tests can, in theory, yield 100 percent scoring accuracy. As computerized tests meet today's tech-savvy generation of students, ever-faster, more accurate and clearer ways of testing will continue to develop.
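To make the scoring-accuracy point concrete, here is a minimal sketch of how machine scoring of a multiple-choice test might work. The answer key, item names and response format are hypothetical illustrations, not any particular testing product's design:

```python
# Minimal sketch of automated multiple-choice scoring.
# The answer key and response format are hypothetical, for illustration only.

answer_key = {"q1": "B", "q2": "D", "q3": "A"}

def score_test(responses: dict[str, str]) -> float:
    """Return the percent correct for one student's responses."""
    correct = sum(
        1 for item, key in answer_key.items()
        if responses.get(item) == key
    )
    return 100.0 * correct / len(answer_key)

# Scoring is deterministic: the same responses always produce the same
# score, which is the sense in which machine scoring avoids the errors
# of hand-marked answer sheets.
print(score_test({"q1": "B", "q2": "C", "q3": "A"}))  # about 66.7
```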

Tests can now be constructed that are directly linked to the standards of the district and/or state administering them, so outcomes can be tied directly to assessment measures. Such measurement provides a clearer picture of how well institutions are meeting their goals (Olson 2001). It is also worth remembering that specific information targeting student performance yields a more refined curriculum.
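One way to picture that linkage is a simple mapping from test items to standards, aggregated into a per-standard report. The standard identifiers and data structures below are assumptions made up for the sketch:

```python
# Hypothetical sketch: linking test items to state standards so results
# can be reported per standard, not just as a total score.
from collections import defaultdict

# Which standard each item measures (illustrative identifiers).
item_to_standard = {"q1": "MATH.7.NS.1", "q2": "MATH.7.NS.1", "q3": "MATH.7.EE.4"}

def mastery_by_standard(scored_items: dict[str, bool]) -> dict[str, float]:
    """Percent of items answered correctly under each standard."""
    totals, correct = defaultdict(int), defaultdict(int)
    for item, is_correct in scored_items.items():
        std = item_to_standard[item]
        totals[std] += 1
        correct[std] += is_correct
    return {std: 100.0 * correct[std] / totals[std] for std in totals}

print(mastery_by_standard({"q1": True, "q2": False, "q3": True}))
# {'MATH.7.NS.1': 50.0, 'MATH.7.EE.4': 100.0}
```

A per-standard report like this is what lets a district see not just how a student scored, but which goals the curriculum is or is not meeting.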

The Good, the Bad and the Ugly

Schools nationwide are experiencing similar difficulties; however, each school will have unique strengths and limitations when it comes to computerized testing. At times, large-scale computerized testing seems more of a logistical problem than a clean, clear answer to the global question of outcomes-based assessment.

To address both the potential and the pitfalls of computerized testing, we have arrived at a list of issues to consider. Some of these issues were encountered by our testing team while administering the South Dakota Career Assessment Program, an online interest inventory and aptitude measure. Others emerged in the team's discussions with other schools in the area.

Potential computerized testing problems can be rooted in the individual, the site and/or the system. Individual students bring unique concerns to any form of testing, and the site, whether it is a school's computer lab or an off-campus facility, forces the testing team to consider still other issues. Finally, the system that holds both the test and the assessment data is subject to problems as varied as retesting and litigation over improper access to test results.

Specific Student Needs

Students with special needs require the testing team to think about how best to accommodate their requirements. This may involve reserving extra time in the lab or ordering special testing programs. Also, because computers in labs sit close together, special steps are needed to help students focus only on their own screens. Some tests allow the administrator to alternate test versions (one simple assignment scheme is sketched below), and an adequate number of proctors will help keep wandering eyes to a minimum.
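Alternating versions can be as simple as cycling test forms across consecutive seats so no two neighbors see the same questions in the same order. The seat labels and form names here are assumptions for illustration:

```python
# Illustrative sketch: assigning alternating test forms to adjacent
# seats so neighbors cannot usefully glance at each other's screens.

def assign_forms(seats: list[str], forms: tuple[str, ...] = ("A", "B")) -> dict[str, str]:
    """Give consecutive seats different test forms, cycling A, B, A, B..."""
    return {seat: forms[i % len(forms)] for i, seat in enumerate(seats)}

print(assign_forms(["seat-01", "seat-02", "seat-03", "seat-04"]))
# {'seat-01': 'A', 'seat-02': 'B', 'seat-03': 'A', 'seat-04': 'B'}
```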

In addition, in some districts, elementary and middle school students may have to take tests at the high school, where the chairs and desks may not suit smaller students. A related concern for students bused in for testing is the newness of the site itself. Being in an unfamiliar place can create test anxiety; having students visit the site before the actual testing will help minimize such feelings.

Maxed-out computer labs can cause noise and space problems as well. To counteract such conditions, the testing team should balance the number of students to be tested against the number of computers available (a quick capacity calculation is sketched below). If possible, alternative testing days should be added to prevent an overflow in the lab. Retakes are also a common testing problem; when computers are involved, the need to reserve the lab (or assemble a portable one) and schedule both students and proctors becomes critical. Another logistical problem is the transportation of students and equipment to the site, which must be planned well in advance of the testing date.
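The balancing act is essentially a ceiling division: sessions needed equals students divided by seats, rounded up. A back-of-the-envelope sketch, with illustrative numbers:

```python
# Back-of-the-envelope scheduling sketch: how many testing sessions a
# lab needs so no session exceeds its seat count. Numbers are illustrative.
import math

def plan_sessions(num_students: int, num_computers: int) -> int:
    """Sessions needed so every student gets a computer."""
    return math.ceil(num_students / num_computers)

# A class of 87 students in a 30-seat lab needs three sessions.
print(plan_sessions(87, 30))  # 3
```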

Thinking Systemically

Computers and software can crash, booting users offline. Though there is no magic solution for such seemingly catastrophic problems, the team should have an alternate plan in place in the event of such an occurrence. Our alternate plan is as follows:

  1. Run a simulation
  2. Hold a team meeting to discuss results
  3. Conduct a workshop for other school personnel

Other factors to consider include makeup dates, having on-site IT specialists during testing, and familiarity with testing protocol regarding incomplete tests.

When data of this sort is saved on a school's server, there is also the risk that some of it will be lost or contaminated. To minimize this risk, printing test results immediately and saving duplicate copies may help. Finally, at no other time in the history of education has so much student information been available on the school server. Who will have access to such privileged information, especially test results, must be closely evaluated; the testing team should determine access rights before testing begins.
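A hedged sketch of those two safeguards, duplicating the results file and checking viewers against a list approved before testing. The file paths, user names and role list are hypothetical:

```python
# Sketch of two data safeguards: a duplicate save and a simple access
# check. Paths and the authorized-user list are hypothetical examples.
import shutil

# Decided by the testing team before test day, per district policy.
AUTHORIZED_USERS = {"principal", "counselor", "testing_coordinator"}

def save_results(primary_path: str, backup_path: str) -> None:
    """Copy the results file to a second location so one corrupted
    or lost copy is not fatal."""
    shutil.copy2(primary_path, backup_path)

def can_view_results(username: str) -> bool:
    """Allow access only to users approved before testing began."""
    return username in AUTHORIZED_USERS
```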

Conclusion

The use of computers in testing and assessing student performance has become significant in outcomes-based education. Because of the special circumstances and unique requirements involved in using computers, testing teams need to think about potential pitfalls long before the testing date. As schools continue to adjust to NCLB, clarity about the use and administration of computerized testing will emerge. By keeping problems to a minimum, administrators, guidance counselors and other testing-team personnel can be proactive in creating a testing scenario that allows students to demonstrate their true potential.

References

Foshay, R. 2001. "Testing, Testing — Does Anybody Know Why?" T.H.E. Journal, 29 (5): 40-42.

McDonald, D. 2002. "No Child Left Behind Act Mandates Assessment Measures." Momentum, 33 (3): 8-10.

Olson, A. 2001. "Data-Based Change: Using Assessment Data to Improve Education." MultiMedia Schools, 8 (3): 38-43.

Recio, L., J. Clark and A. Sevol. 2002. "New E-Technologies Simplify NCLB Requirements." T.H.E. Journal, 30 (3): 49-51.

Thomas, D. and W. Bainbridge. 2002. "No Child Left Behind: Facts and Fallacies." Phi Delta Kappan, 83 (10): 781-782.

This article originally appeared in the 04/01/2004 issue of THE Journal.
