Don’t Filter Out Responsibility

The use of web filters runs counter to 21st-century learning.

"Over there, thanks to solid teaching, the filters are in the students' heads."

THE PROBLEM WITH WRITING A COLUMN is that you're led to say in 500 prim and overly manicured words what someone can lay bare in a single sparse, extemporaneous sentence, full of truth and nothing but.

Even there I did it. I should have just said, well, at least someone agrees with me about not blaming the tools for the sins of the users-- because someone does: the Finns.

Last month, in our story on the use of web filters to keep students away from unsafe online content, Julie Walker, executive director of the American Association of School Librarians, noted that she had recently traveled to Finland, where she found that educators have a different approach to ensuring responsible internet use: They don't enforce it, they teach it-- and because they teach it, they don't have to enforce it. According to Walker, most computers in Finnish schools don't have web filters. As she said, "Over there, thanks to solid teaching, the filters are in the students' heads."

Teaching kids to govern themselves is what we ought to be doing, not having their visits to the school library or computer lab chaperoned by web filters.

Would that not be the truer expression of the cornerstone of 21st-century education: student-directed experiences? Using filters to blockade portions of the internet does the reverse of that. It takes responsibility off the students, and their accountability along with it. If one manages to slip through to a blacklisted website, we fault the technology, and then hike up the settings on the firewall, rather than sort out why the student didn't think better of breaking the rules. And what curious use it is of older technology (filters) to blunt newer technology (social networks, blogs). You could say we're having technology eat its young, which is no way to run a new century.

It shouldn't be too hard to get students to submit to internet safety. It's merely an extension of the same lesson they already have committed to memory: Don't talk to strangers. We can't go around town removing strangers from the streets, so our solution, rightly, is to affect behavior. Come to think of it, there's no shortage of scissors in our lives, but little risk any of us would do what we've been warned never to do with a pair, and that's run.

"[Finnish] students come into school with a sense of responsibility for their learning and a sense of why they're there," Walker said, concluding her thought. "Ultimately, that's where we need to be too." The only thing web filters are sure to block is our chance of getting there.

-Jeff Weinstock, Executive Editor
