Study Finds Generative AI Could Inhibit Critical Thinking

A recent study of how knowledge workers engage in critical thinking found that workers with higher confidence in generative AI tend to apply less critical thinking to AI-generated outputs, while workers with higher confidence in their own skills tend to apply more, verifying, refining, and critically integrating AI responses.

The study ("The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers"), conducted by Microsoft Research and Carnegie Mellon University scientists, surveyed 319 knowledge workers who reported using AI tools such as ChatGPT and Copilot at least once a week. The researchers analyzed 936 real-world examples of AI-assisted tasks.

"[W]e find that knowledge workers engage in critical thinking primarily to ensure the quality of their work," the researchers wrote, "e.g. by verifying outputs against external sources. Moreover, while gen AI can improve worker efficiency, it can inhibit critical engagement with work and can potentially lead to long-term overreliance on the tool and diminished skill for independent problem solving."

According to the researchers, gen AI is eroding critical thinking by fundamentally changing how professionals approach certain business tasks, specifically in three areas:

  • Information gathering and verification: AI automates the retrieval and organization of data, reducing the effort needed to find information. However, workers must now spend more time verifying AI-generated content for accuracy and reliability.
  • Problem-solving and AI response integration: Instead of solving problems independently, workers focus on refining and adapting AI outputs to meet their specific needs, including adjusting tone, context, and relevance.
  • Task execution and task stewardship: Rather than performing tasks directly, workers oversee AI processes, guiding and evaluating outputs to ensure quality. While gen AI handles routine work, responsibility and accountability remain with human users.

While gen AI reduces cognitive effort in some areas, it increases the need for verification, integration, and oversight, reinforcing the importance of maintaining critical thinking skills. To that end, the researchers suggest that future gen AI tools be designed to encourage critical thinking: for example, by integrating feedback mechanisms that help users gauge the reliability of AI outputs, and by allowing the level of AI assistance to be customized based on a user's task confidence and expertise.

"We find that knowledge workers often refrain from critical thinking when they lack the skills to inspect, improve, and guide AI-generated responses," the researchers wrote. "GenAI tools could incorporate features that facilitate user learning, such as providing explanations of AI reasoning, suggesting areas for user refinement, or offering guided critiques."

The full report is available on the Microsoft Research website.

About the Author

Chris Paoli (@ChrisPaoli5) is the associate editor for Converge360.
