Testing, Testing ... Does Anybody Know Why?

Have schools gone test crazy? With accountability the watchword of the day, it sometimes seems that way. The purpose of all this testing should be to help instructors improve their teaching and to help learners progress. Yet all too often that seems to be the least important purpose for tests, and many of the tests used can't generate information instructors can really use.

To see why, let's take a look at the way one teacher, Mrs. Anthony, uses tests for different purposes:

- At the beginning of the school year, Mrs. Anthony gives her class a pretest. It tells her which learners are ready to study what parts of the curriculum, and gives her a detailed diagnostic skill profile for each learner. She uses that information to decide how to individualize instruction for each learner.

- Each lesson includes a progress test or other assessment activity. It tells Mrs. Anthony and her learners how well they understand the lesson, and what they need to review before going on.

- At the end of each unit, Mrs. Anthony gives a summative test, which lets her see the big picture of how her learners grasped the major themes and skills in the unit. Then, she can decide what individual review work her learners need before they go on to the next unit.

- Each spring, her school system administers a certification test based on state curriculum standards for her class. It shows how well her class has mastered the state standards, but doesn't provide detailed diagnostic information for her to use. In fact, the test results don't even come back until after the end of the school year.

Defining the Tests

Often, tests create more heat than light when it comes to data-driven teaching decisions. To see why, let's look at what's in each kind of test:

Pretests. There are two kinds of pretests. Both are criterion-referenced, so they include a sample of questions that represent your curriculum:

- Pretests for Readiness. These check for mastery of the prerequisites to a given part of the curriculum. They're useful for deciding where to start learners in an individualized curriculum.

- Pretests for Need. These are versions of the summative test. In an individualized curriculum, you can use them to decide whether a learner should skip a given part of the curriculum.
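To make this concrete, here is a minimal sketch of how results from these two pretests might drive placement decisions in an individualized curriculum. The score format, unit names and the 80 percent mastery cut score are illustrative assumptions for this example, not features of any particular testing product.

    # Illustrative sketch only: a simplified placement decision based on two
    # criterion-referenced pretests. The score format (fraction of items
    # correct per objective or unit) and the 0.8 mastery cut score are
    # assumptions made for this example.

    MASTERY_THRESHOLD = 0.8  # assumed cut score for "mastered"


    def place_learner(readiness_scores, need_scores, prerequisites):
        """Decide, unit by unit, whether a learner should study a unit,
        skip it, or review prerequisites first.

        readiness_scores: {objective: fraction correct} from the readiness pretest
        need_scores:      {unit: fraction correct} from the pretest for need
                          (a version of the summative test)
        prerequisites:    {unit: [prerequisite objectives]}
        """
        plan = {}
        for unit, prereqs in prerequisites.items():
            if need_scores.get(unit, 0.0) >= MASTERY_THRESHOLD:
                plan[unit] = "skip - already mastered"
            elif all(readiness_scores.get(p, 0.0) >= MASTERY_THRESHOLD for p in prereqs):
                plan[unit] = "ready to study"
            else:
                gaps = [p for p in prereqs
                        if readiness_scores.get(p, 0.0) < MASTERY_THRESHOLD]
                plan[unit] = "review prerequisites first: " + ", ".join(gaps)
        return plan


    # Example with hypothetical data:
    print(place_learner(
        readiness_scores={"place value": 0.9, "regrouping": 0.5},
        need_scores={"multi-digit addition": 0.3},
        prerequisites={"multi-digit addition": ["place value", "regrouping"]},
    ))

For this hypothetical learner, the output is a prescription to review regrouping before starting the unit - exactly the kind of decision the readiness and need pretests are meant to support.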

Lesson Quizzes and Mastery Tests. These are the short criterion-referenced tests embedded in day-to-day instruction. Their purpose is to check understanding of a specific, detailed learning objective and to provide immediate feedback that helps instructors and learners decide what to study next.

Summative Tests. These are the relatively long unit tests and final exams written by you or your district. They are administered at the end of a large block of instruction, and often are used as the basis of a grade. Typically they are a competency-based sample of the highest-level objectives of the curriculum. Point scores, percentage scores and grades are the most common means of reporting results, though it's also common to report back which items each learner got wrong.

Certification Tests. These are the long statewide tests administered one to three times per year, at two or three grade levels. They often are used to control promotion to the next grade or graduation. Some are criterion-referenced, but some are norm-referenced tests, which allow you to compare your learners' performance against a national comparison group.

 

Testing Problems

I see four problems with all this testing:

1. The tests often are out of alignment with each other and with the curriculum. When that happens, the information they provide misleads both instructors and learners, and using the scores to make instructional decisions can actually make things worse.

2. The certification and summative tests rarely provide timely information that is detailed enough for instructors to use in deciding what to do with particular learners.

3. The more often you test, the bigger the mound of paperwork you have to deal with. It's not long before the tests take more time and energy than they are worth.

4. Decisions on admission, promotion and graduation are often made with tests that weren't designed for such high-stakes applications. Supporting those decisions takes extra time, cost and effort to design and validate the test - an investment beyond the reach of teachers and even school districts, and one that some state standards tests don't make either.

 

Making Tests Useful

So, how does PLATO make tests useful? Here are five suggestions:

1. Pretest for readiness and need, then individualize based on the results. A generation ago, Benjamin Bloom showed that half of the "bell-shaped curve" of achievement was due to differences in readiness of learners. No learner should be required to study something they are not ready for, or to study something they have already mastered.

2. Make tests competency-based. Norm-referenced tests place learners on the curve, but they don't show what learners have mastered and need to study. Only competency-based tests can generate information that is useful to teachers as they work with individual learners (see the sketch after this list).

3. Keep tests aligned. It's a lot of work to make sure that tests really do test what the curriculum calls for. Without detailed alignment to the curriculum, it's impossible to use a test for personal prescriptions. A badly aligned test places learners and teachers in a catch-22 situation - they have to choose between the curriculum and the test.

4. Automate tests. This provides real-time information for teachers to use in guiding their teaching, while saving valuable classroom and preparation time.

5. Don't use low-stakes tests for high-stakes decisions. Let's reserve the high-cost, validated tests for high-stakes decisions such as certification. For all other purposes, we can afford to use low-stakes tests.
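To make suggestions 2 and 4 concrete, here is a minimal sketch of the kind of objective-level report an automated, competency-based test could produce. The item-to-objective map, the response format and the 80 percent mastery cut score are assumptions made for this example; this is not a description of how PLATO or any other product actually scores its tests.

    # Illustrative sketch only: scoring a criterion-referenced test objective
    # by objective, so the report says what a learner has mastered and what
    # still needs study instead of giving only a single percentage or a
    # position on the curve. The item map and 0.8 cut score are assumptions.

    MASTERY_THRESHOLD = 0.8


    def objective_report(responses, item_map):
        """responses: {item_id: True if answered correctly, else False}
        item_map:  {item_id: objective the item is aligned to}
        Returns a per-objective report plus the overall percent correct.
        """
        totals, correct = {}, {}
        for item, objective in item_map.items():
            totals[objective] = totals.get(objective, 0) + 1
            if responses.get(item, False):
                correct[objective] = correct.get(objective, 0) + 1

        report = {}
        for objective, n_items in totals.items():
            fraction = correct.get(objective, 0) / n_items
            report[objective] = "mastered" if fraction >= MASTERY_THRESHOLD else "needs study"

        overall = 100 * sum(responses.get(i, False) for i in item_map) / len(item_map)
        return report, overall


    # Example with hypothetical items and objectives:
    item_map = {1: "fractions", 2: "fractions", 3: "decimals", 4: "decimals", 5: "decimals"}
    responses = {1: True, 2: True, 3: False, 4: True, 5: False}
    print(objective_report(responses, item_map))

A norm-referenced report would stop at the 60 percent overall score; the objective-level report tells the teacher what to prescribe next, and because it is computed automatically, it is available as soon as the learner finishes the test.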

 

The PLATO system combines automated testing with automated prescription and instruction - both online and off - in a flexible, well-aligned system that allows teachers to make fine-grained decisions about their learners. The system includes pretests, progress tests, summative tests and practice tests that simulate the state standards certification tests. Because the whole system is online, teachers get real-time information that doesn't require laborious manual marking. And because the tests are carefully aligned to standards and carefully constructed, the information they produce is valid and detailed enough to be useful to teachers, though not for high-stakes purposes. Powerful improvements in instructional efficiency and effectiveness can result.

 

By Rob Foshay, Ph.D.
Vice President of Instructional Design and Cognitive Learning, PLATO Learning

Contact Information
PLATO Learning Inc.
Minneapolis, MN
(800) 44-PLATO
www.plato.com
