Why Ed Tech is Not Endorsing a Ban on ChatGPT in Schools

Education Experts Argue ChatGPT Can Help Students Prepare for the Workforce — but Thoughtful Policies Should Come Sooner Rather Than Later

The public introduction of OpenAI’s ChatGPT has, in recent months, sent a wave of alarm through the education sector unlike any caused by another technology introduced this century.

Depending on who’s speaking, ChatGPT will either further erode learning outcomes, particularly in English language arts (ELA), or it will boost ELA instruction and overall learning outcomes by embedding critical-thinking and modern-workforce skills into everyday writing assignments as students learn to use the new technology with caution and precision.

Some large school districts across the country immediately banned ChatGPT usage by students and even blocked the OpenAI ChatGPT website from being accessed on school networks and devices. Many education leaders have expressed concern, even those who urge educators to explore using the tool in their classrooms.

Even OpenAI itself published guidance and warnings for educators shortly after ChatGPT launched publicly.

Meanwhile, the ed tech sector’s response has been similarly divided. A few smaller software providers jumped into the fray by debuting “AI detector” tools within weeks of the ChatGPT launch last November. None of them — not even the detector built by the creators of ChatGPT — is very reliable, as demonstrated in a review and comparison report from TheConversation.com.

THE Journal asked leaders at several education technology providers about their thoughts on ChatGPT and AI-generated text, what plans (if any) they have to address the new technology within their own software solutions, and whether they had guidance for K–12 policymakers, administrators, and educators struggling to update their organizations’ rules for using AI in education settings. 

Following are excerpts from those interviews as well as answers to our questions from OpenAI’s official guidance for educators and from a University of Houston Law Center professor who blogs about ethical and legal implications of new technology in education.

THE Journal: What’s your take on the perils and perks of this new technology, and how do you balance them?

DEBORAH RAYOW, Imagine Learning vice president of Product Management, Courseware: I think academia and ed tech are both going through something similar to the five stages of grief when it comes to this issue. We’ve passed denial, and now we’re mostly at anger. I’m not sure all the stages actually apply, but I do think it’s going to be a process before we’ve accepted that this technology is here to stay and will only grow in capabilities. We’ll need to make clear to students when it’s okay to use ChatGPT and other generative AI tools and when it’s not, with a strong emphasis on academic honesty. And we’ll have to have ways to enforce the rules we set. But once students are clear about when NOT to use generative AI, it does open up some interesting possibilities for teaching and learning.

MELISSA LOBLE, Instructure chief customer experience officer: AI writing tools are not new to education, but none have sparked the conversation that ChatGPT has in the last couple of months. It’s clear the initial reaction to ChatGPT from many educators has been apprehensive, and while we understand the concern being felt in classrooms and schools across the country, we believe the best way to navigate the reality of ChatGPT, and AI tools like it, is to learn to work with them instead of against them, because technology like this isn’t going anywhere.

PETER SALIB, University of Houston Law Center assistant professor: For everybody whose main work is writing things on a computer, this is a tool that is going to change how you work, especially as it gets better. ChatGPT produces mediocre content in response to complex questions. There might be some incentive to plagiarize, but probably not if a student wants an A. On the other hand, I’m not sure it’s right to think of using those kinds of language models in the classroom just through the lens of plagiarism. They’re extremely useful tools, and they’re going to be extremely useful tools for real people doing real work. I think we do students a disservice if we say these tools are not part of education and forbid them to use them as they work their way through law school or their undergraduate education.

OPENAI: We recognize that many school districts and higher education institutions do not currently account for generative AI in their policies on academic dishonesty. We also understand that many students have used these tools for assignments without disclosing their use of AI. Each institution will address these gaps in a way and on a timeline that makes sense for their educators and students. We do, however, caution against taking punitive measures against students for using these technologies if proper expectations were not set ahead of time for what is and is not allowed. Classifiers such as the OpenAI AI text classifier can be helpful in detecting AI-generated content, but they are far from foolproof. A classifier or detector should be only one factor among many in an investigation into a piece of content’s source and a holistic assessment of academic dishonesty or plagiarism. Setting clear expectations for students up front is crucial, so they understand what is and is not allowed on a given assignment and know the potential consequences of using model-generated content in their work.

THE Journal: What should educators, administrators, or even parents do if they’re worried about students inappropriately using ChatGPT and advanced AI in education settings?

RAYOW: The best solutions to this problem start with upfront communication. Some educators have expressed to me that they’re hesitant to put policies in place around generative AI because they don’t want to alert students that these tools exist. I understand the concern, but I think that’s probably a mistake. Any students who don’t already know about ChatGPT and other similar tools are certainly going to learn about them quickly, and it’s important that they are explicitly taught not to use them to misrepresent their own work, just as we teach them not to plagiarize from other online sources. So I would say to district and school leaders: Have the conversation. Work with teachers, parents, and students to craft policies that make sense. 

My other advice is to sample student writing at the start of a semester in a controlled environment. Have students write longhand or use locked-down devices to produce original writing samples. Then keep copies of those samples to compare to future work an AI detector flags. Although we certainly expect the quality of student writing to improve over time, it’s often clear when comparing a flagged essay to the start-of-the-semester writing sample that a student didn’t write it unaided.

LOBLE: Simply blocking ChatGPT won’t work. Blocking the tool on school-owned devices will not prevent students from accessing it on their own phones, laptops, etc. And while ChatGPT is the first tool of its kind, it’s simply the first of many to come. Trying to block them all would be a time-consuming and distracting exercise.

Even before ChatGPT, we needed to find better ways to measure mastery of content and skills. Our education mission should be to help all students understand and value the importance of the skills they’re developing throughout their learning journey. Part of that mission is to develop better ways of measuring student mastery as they progress. AI tools such as ChatGPT can potentially play a role in this process as students learn how to make meaning of information and connect ideas. Gone are the days when students could simply write a book report or a brief essay to demonstrate mastery of a subject.

Like it or not, AI is the future. AI technology is already available today that enables people with little to no knowledge of code to write software and develop apps. These tools are capable of generating marketing content, populating legal applications, and enabling non-designers to create artwork that meets their needs. These tools will only become more advanced and ubiquitous in the near future. Let’s prepare the future workforce together.

THE Journal: What are some upsides to ChatGPT being used in K–12 schools?

LOBLE: ChatGPT has the potential to revolutionize the way we approach education by providing an intuitive and interactive platform for students and educators to engage in learning. With its advanced natural language-processing capabilities, ChatGPT can help students understand and retain complex concepts, as well as provide personalized feedback and support. Additionally, its ability to generate human-like text can also be used to create engaging educational content, such as interactive stories and simulations. 

ChatGPT is a powerful tool that can enhance the learning experience for students and make education more accessible and effective — which makes it far more than the cheating tool much of the recent press has characterized it as. AI can be a valuable tool for educators. Embraced appropriately, AI tools can effectively support a personalized learning experience for students while saving educators time.

Some of the underrated upsides to ChatGPT that we envision for K–12 classrooms are:

  • Personalized learning: ChatGPT can provide students with interactive, personalized learning experiences that can help them understand and retain information better.
  • Intelligent tutoring: ChatGPT can act as an intelligent tutor, providing students with feedback and support as they work through problems and concepts.
  • Content generation: ChatGPT can be used to generate educational content such as interactive stories, simulations, and quizzes, which can make learning more engaging and interactive for students.
  • Language learning: ChatGPT can help non-native speakers improve their language skills through interactive conversations.
  • Automated grading: ChatGPT can be used to grade student work automatically, which can save educators time and effort (a rough sketch of what such a call might look like follows this list).
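
To make the grading and tutoring ideas above concrete, here is a minimal sketch in Python of how a school tool might call OpenAI’s public chat API to draft rubric-based feedback. The model name, rubric format, and helper function are illustrative assumptions for this article, not anything Instructure recommends or ships:

    # Hypothetical sketch: drafting rubric-based essay feedback via
    # OpenAI's chat API. Requires the official `openai` package and an
    # OPENAI_API_KEY environment variable; the model name is illustrative.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def draft_feedback(essay: str, rubric: str) -> str:
        """Return rubric-based feedback for a teacher to review."""
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # illustrative model choice
            messages=[
                {"role": "system",
                 "content": "You are a writing tutor. Score the essay "
                            "against each rubric criterion and explain "
                            "each score in plain language."},
                {"role": "user",
                 "content": f"Rubric:\n{rubric}\n\nEssay:\n{essay}"},
            ],
        )
        return response.choices[0].message.content

Anything a call like this produces should be treated as a draft for the teacher to review, not a final grade.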

OPENAI: AI will likely have a significant impact on the world, affecting many aspects of students’ lives and futures. For example, the types of job opportunities students look toward may change, and students may need to develop more skepticism of information sources, given the potential for AI to assist in the spread of inaccurate content. To date, we have seen instances of productivity improvements that transform jobs, job displacement, and job creation, but both the near- and long-term net effects are unclear. Fortunately, many of the aims of education (e.g., fostering critical thinking) are not related to preparation for specific jobs, and we encourage greater investment in studying the non-economic effects of different educational interventions.

THE Journal: Rather than outright banning ChatGPT and ongoing tech advancements, how can educators proactively prepare to incorporate advanced AI technology into their classrooms?

RAYOW: While ChatGPT and similar tools certainly pose a challenge for educators when it comes to academic integrity, they also provide an interesting opportunity to help students develop more advanced analytical and evaluative skills. For example, have students ask ChatGPT to write an essay about the theme of “phoniness” in The Catcher in the Rye and then have them critique ChatGPT’s essay against a rubric, write feedback for how the essay could be improved, and revise it to make it better. Or the assignment could be to ask ChatGPT to write 10 multiple-choice questions about The Catcher in the Rye, and then students can write critiques about whether the questions really get to the heart of the novel, whether the wrong answer choices are good distractors, and why. (ChatGPT will even write leveled questions if you ask it to; for example, honors students can ask it to write questions appropriate for an honors course.)
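
As a rough illustration of the leveled-question assignment Rayow describes, the same request can be scripted against OpenAI’s chat API. The prompt wording, difficulty label, and helper function below are hypothetical choices for this article, not a feature of ChatGPT itself:

    # Hypothetical sketch: requesting leveled multiple-choice questions
    # from OpenAI's chat API. Requires the official `openai` package and
    # an OPENAI_API_KEY environment variable.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def leveled_questions(novel: str, level: str, count: int = 10) -> str:
        """Ask the model for questions pitched at a named course level."""
        prompt = (
            f"Write {count} multiple-choice questions about {novel}, "
            f"pitched at a {level} course. Give four answer choices per "
            "question, mark the correct one, and make the wrong choices "
            "plausible distractors."
        )
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # illustrative model choice
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    # Example: leveled_questions("The Catcher in the Rye", "honors")

Students could then critique the generated questions against the novel, exactly as described above.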

The other important thing to remember is that generative AI only amplifies the need to teach our students solid media literacy skills. We all know that not everything you read online is true, but even the savviest adults sometimes fall victim to convincing stories from questionable sources. Factor in the possibility that the “person” you’re chatting with isn’t a real person, and we’d be truly remiss not to include developmentally appropriate media literacy instruction at every grade level.

THE Journal: As educators and administrators consider policies for whether and how ChatGPT can be used by students, what potential impacts on learning should be weighed in those policy discussions?

RAYOW: These policies should be clear about when it’s okay to use generative AI and when it’s not. And they should get into the details of what happens when teachers suspect that students haven’t written the work they’ve submitted as their own. Unlike direct plagiarism from the Internet, the use of generative AI is hard to prove for certain. If a student has copied and pasted their work from an online source or another student, schools can point to clear and incontrovertible evidence. But what happens when an AI detector says there is a 92% probability that an essay was written by AI and the student says they wrote it without help? What should that process be? To avoid problems later, it’s best to think through all these issues up front — and ensure that all stakeholders, including students and families, understand them.

SALIB: There probably shouldn’t be just one policy for all kinds of assignments; we need something that’s not one-size-fits-all. There should be some kinds of assignments where students are told not to use a language assistant at all, so they develop the chops of writing something from scratch, thinking of something from scratch. There probably should be some assignments for which the requirement is: use whatever tools you would like to produce something, but the final product should be more than 70% words you wrote. We have to ensure that when students do work, doing it well requires that they actually learn something. If an essay question can just be copied and pasted into a ChatGPT prompt and the answer earns a B or B+, then we’re not teaching students to use ChatGPT as a tool that helps them think; we’re teaching them to use it as a replacement for thinking, and that’s not good either.
