All of the Above

Maryland's new multiple-choice-only state exam a) will leave key skills untested; b) is a step backward to the 20th century; c) rejects technology; or d) is a poor gauge of learning?

Geoffrey H. Fletcher

I'M STEPPING OUT on a limb here, but either we have some very bizarre decision making going on in Maryland, or some very poor reporting from The Washington Post. In light of the Post's article on the supposed ineffectiveness of educational software (see Policy & Advocacy, June), I would've banked on poor reporting. It turns out the Post is just the messenger this time.

In a story published Sept. 13, the paper reported that "Maryland plans to eliminate written-response questions from its high school exit exams to address long-standing complaints about how slowly test results are processed." Currently, Maryland's state test includes both "brief constructed responses" and "extended constructed responses." Constructed-response questions, whether brief or extended, require students to construct (that is, write) responses rather than just choose from answers arranged for them by the test makers.

The reason cited for the change was speed. By eliminating written-response questions, the state can cut the time it takes to get results back to districts from nine weeks down to three weeks. Stephen Bedford, the chief school performance officer in Montgomery County, MD, believes the nine-week turnaround time is "too long, making it difficult to enroll students in appropriate courses or plan for interventions" for students who fail.

Ronald A. Peiffer, Maryland's deputy superintendent for academic policy, says, "The kinds of things we could only test with constructed-response items before now can be done…with selected-response items in a way that's just as good or better."

I have a hard time understanding that one. How can choosing a correct answer from a list be a better way to gauge learning than having the student create an original answer? On top of that, two key skills that higher education and employers consistently tell us are lacking in our high school graduates are critical thinking and problem solving. And those skills are much better judged by a student's written response than by what might be a lucky stab at a multiple-choice answer.

The article states, "Written-response questions take much longer to grade than multiple-choice questions because they have to be evaluated by humans, not computers." Says who? There have been significant technological advances in test scoring, including software that has been shown to grade written work as reliably as human graders (see "A Virtual Treasure Trove," April).

I wonder why we are so quick to discount technological fixes to some of our problems. Maybe Maryland officials considered the technological options and discarded them due to cost or some other factor. Maybe they don't trust the technology.

Either way, we cannot afford to keep using 20th-century methods to test for a 21st-century set of skills. When we are in desperate need of forward thinking, the Maryland decision is a significant step back.

-Geoffrey H. Fletcher, Editorial director
