A Fundamental Flaw in Competency Learning

The “competency learning” movement is gaining serious momentum: see the list of schools and districts that are adopting competency learning. But in this blog post, drawing on the research literature in the psychology of learning, we will argue that competency learning is based on a fundamentally flawed model of how learning takes place and how learning should be assessed.

Let’s start at the beginning. What is competency-based learning? From the CompetencyWorks website, which is the home for the competency learning movement, here are some of the key aspects of competency learning:

“Students advance upon mastery.” In our current K-12 educational system, students typically advance based on “seat time.” Children spend a school year in a grade and are then generally promoted to the next grade. While some children are held back and some engage in credit recovery during the summer, social promotion is the norm in the schools. The problem with this time-based criterion for promotion, the competency learning movement argues, is that there is no guarantee that students have actually mastered the material at that grade level. Hmm. We leave for another blog post a careful analysis of that element of the competency learning argument.

“Competencies include explicit, measurable, transferable learning objectives that empower students.” Easy to say, but actually measuring “transferable learning” is deeply problematic, as we discuss below. Foreshadowing our argument: The competency learning model fails to distinguish between performance and learning — a major, robust distinction made in the psychological literature.

“Assessment is meaningful and a positive learning experience for students.” Answering 7 of 10 multiple-choice questions at the end of a section — a typical technique employed in online competency learning modules — hardly demonstrates mastery, and such drilling and testing is hard to see as a “positive learning experience.”

Now, computers are key components of competency learning in that they provide “… technology-enabled solutions that incorporate predictive analytic tools. This element is essential to a competency-based system.” But as we argue below, what the “predictive analytic” algorithms are measuring is not what has been learned; rather, they are measuring a student’s performance.

“Students receive timely, differentiated support based on their individual learning needs.” Students sit in front of computers and are presented with material about a topic; depending on their responses to questions, they are presented with different material (a sketch of this sort of branching appears just after this list of key aspects). “Adaptive” presentation is the term used for this sort of individualized learning; “personalized learning” is also a term that is current in educational discussions.

“Learning outcomes emphasize competencies that include application and creation of knowledge, along with the development of important skills and dispositions.” As we discuss below, the claim that knowledge-creation skills are emphasized is, quite frankly, suspect.
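As promised above, here is a minimal, hypothetical Python sketch of the kind of response-contingent branching such adaptive systems perform. The unit names, the remediation map, and the pick_next helper are our own illustrative inventions, not the logic of any actual competency learning product:

```python
# Hypothetical sketch of "adaptive" branching: choose the next item to
# present based only on the student's last response. Unit names, the
# remediation map, and pick_next are illustrative inventions.

def pick_next(current_unit, last_answer_correct, remediation, sequence):
    """Return the next piece of material to present."""
    if not last_answer_correct:
        # Wrong answer: branch to remedial material for the current unit.
        return remediation[current_unit]
    # Right answer: advance to the next unit in the fixed sequence.
    i = sequence.index(current_unit)
    return sequence[i + 1] if i + 1 < len(sequence) else "course-complete"

sequence = ["fractions-1", "fractions-2", "decimals-1"]
remediation = {unit: unit + "-review" for unit in sequence}

print(pick_next("fractions-1", False, remediation, sequence))  # fractions-1-review
print(pick_next("fractions-1", True, remediation, sequence))   # fractions-2
```

Note how little the branch depends on: a single response, right or wrong, from the current session.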

With the above as background, let’s now move to the fundamental flaw in the competency learning strategy.

In a foundational article, Nicholas C. Soderstrom and Robert A. Bjork describe a core distinction in the psychology of learning: the difference between learning and performance. “The primary goal of instruction should be to facilitate long-term learning — that is, to create relatively permanent changes in comprehension, understanding, and skills of the types that will support long-term retention and transfer. During the instruction or training process, however, what we can observe and measure is performance, which is often an unreliable index of whether the relatively long-term changes that constitute learning have taken place. The time-honored distinction between learning and performance dates back decades….”

The implications of this distinction are critical for competency learning. The computer algorithms used in competency learning implementations are not assessing the long-term changes in understanding and skills that are the hallmark of learning. Rather, the computer algorithms are — by necessity — assessing performance.

In administering a 10-item multiple-choice test, or some other test that is easy for a computer to grade, right after the presentation of a unit of material, the computer algorithms can’t possibly be assessing a student’s learning; they cannot assess, for example, how the student will use what was just presented in a new context. It is a challenge even for a human teacher to assess such learning; today’s “predictive analytic” algorithms, while (perhaps) better than those of the CAI systems of the 1980s, are still not capable of truly predicting learning from assessments of performance.
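To make that concrete, here is a minimal sketch of such a mastery gate, assuming the 7-of-10 threshold used as an example in this post; the function and its inputs are our own illustration, not any vendor’s actual algorithm:

```python
# A minimal sketch of the mastery gate described above. The 7-of-10
# threshold comes from the example in this post; everything else here
# is our own illustration, not any vendor's actual algorithm.

MASTERY_THRESHOLD = 7  # correct answers required out of 10 items

def advance(end_of_unit_responses):
    """Decide promotion from the immediate end-of-unit test alone.

    Note what is NOT an input: no delayed retention test, no transfer
    task in a new context. Only same-session performance is visible.
    """
    return sum(end_of_unit_responses) >= MASTERY_THRESHOLD

# A student who crammed the unit minutes ago can pass this gate...
print(advance([True] * 7 + [False] * 3))  # True
# ...but nothing the gate sees measures whether the knowledge persists.
```

Everything the gate can ever “know” arrives through that one list of same-session responses; delayed retention and transfer are simply not inputs.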

Indeed, as Soderstrom and Bjork demonstrate with research study after research study, “performance … is … often an unreliable index …” of real learning and “… improvements in performance can fail to yield significant learning — and, in fact, that certain manipulations can have opposite effects on learning and performance….”

What are those “certain manipulations”? Again, Soderstrom and Bjork show, across study after study, that “massed” practice, where knowledge or a skill is drilled and drilled in a single block, is a much less effective strategy than “spaced” practice, where knowledge or a skill is practiced for a short time before the student progresses to another unit. The needed repetition happens over time, in cycles. In fact, in the short term, performance may well go down under the “spaced” strategy!
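A toy sketch of the two schedules, using made-up units, makes the structural difference plain:

```python
# Toy illustration of the two schedules. Three made-up units, each
# practiced three times; only the ordering differs.

units = ["A", "B", "C"]
reps = 3

# Massed: drill each unit to completion before moving on.
massed = [u for u in units for _ in range(reps)]
print(massed)  # ['A', 'A', 'A', 'B', 'B', 'B', 'C', 'C', 'C']

# Spaced: practice a unit briefly, move on, and cycle back later,
# so each repetition is separated from the last by other material.
spaced = [u for _ in range(reps) for u in units]
print(spaced)  # ['A', 'B', 'C', 'A', 'B', 'C', 'A', 'B', 'C']
```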

To summarize: competency learning claims that, to advance in a course, a student needs to demonstrate that he or she has mastered the material by answering 7 of 10 MCQs correctly. But, according to the richly populated research literature, this form of assessment measures short-term performance, not long-term learning. And progressing from unit to unit may well present a “striking illusion” of learning, according to Soderstrom and Bjork’s reading of the scientific literature on learning.

Unfortunately, competency learning has taken on a life of its own and the performance/learning distinction is not going to stop this juggernaut. In the short term, it may appear that competency learning “works,” but the movement will likely, ultimately, become discredited — and join the long list of false educational Messiahs.  
