Blog: Navigating the Harms of LLMs in Education

Generative AI (GenAI), and notably Large Language Models (LLMs), has entered our classrooms, not slowly but with a thunderclap. Tools like ChatGPT and Claude now assist students and instructors at nearly every phase of learning and teaching, transforming how we learn, teach, and even think. Their support ranges from personalized tutoring and self-study guidance to generating educational content and providing 24/7 learning assistance.

The Promise of LLMs for Personalized Learning

Individual students require personalized support that accounts for their knowledge level, personal circumstances, and unique learning needs. However, providing one-on-one human tutoring to every student is not feasible given limited resources. LLMs can address this gap by offering individualized instruction tailored to each student's specific requirements, and they provide accessible support for self-learners at any time and location. When students hit a wall during a late-night study session, they no longer need to wait for an instructor to become available; help is just a prompt away.

Furthermore, LLMs can generate diverse educational resources. These range from reading materials to assessment tools and interactive content. Both students and instructors can utilize these resources according to their specific needs. 

When It Becomes Harmful

However, like every other technological phenomenon, LLMs can become problematic and harm learning significantly. The very features that make them powerful also create new educational challenges. 

  • The Hallucination Problem: One of the most troubling concerns surrounding large language models is their tendency to generate false or misleading information, commonly known as hallucinations. This can have serious consequences in educational settings. Imagine a student asking an LLM for information on a topic along with references. If the model fabricates the references, one might initially shrug this off, thinking, "At least the content is correct." But this raises a critical question: if the references are fake, can we really trust the information itself? (A small verification sketch follows this list.)
  • Amplifying Existing Inequalities: Another important issue is bias. LLMs reflect the data they are trained on, and much of that data comes from the internet, which is far from unbiased. As a result, these models often amplify existing societal biases. For instance, when asked about careers suitable for boys versus girls, they may reinforce outdated gender stereotypes. In educational contexts, this can quietly magnify inequality by embedding biased perspectives in the learning process.
  • The Challenges of Over-Reliance and Academic Integrity: LLMs offer instant answers to a wide range of queries. This convenience can lead to over-reliance on these tools and diminished engagement with the learning process. A recent study found that when students lost access to AI tools, their practice efforts increased but their exam scores dropped, which suggests they had not truly learned the underlying concepts; they had been depending on LLMs to complete their assignments and answer questions. Such over-reliance can create an illusion of understanding and gradually undermine the integrity and effectiveness of the educational system. Educators are also increasingly concerned about the rise of academic dishonesty: some students may use LLMs to write essays, complete coding assignments, or even take tests. This erosion of academic integrity poses a serious threat to the foundation of meaningful education.
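To make the reference-verification point concrete, here is a minimal sketch of how a reader might sanity-check LLM-supplied citations. It is written in Python and assumes the third-party requests library and the public Crossref REST API (https://api.crossref.org/works/<doi>); neither is built into any LLM tool, and the second DOI below is a made-up illustration. A DOI that resolves does not prove the citation is accurate, but one that does not resolve is a strong hint of fabrication.

import requests

def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a record for this DOI (HTTP 200)."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

# Hypothetical reference list, as an LLM might return it
references = [
    "10.1038/nature14539",      # real DOI (LeCun et al., "Deep learning", Nature, 2015)
    "10.9999/fake.2024.12345",  # illustrative, almost certainly fabricated
]

for doi in references:
    verdict = "found in Crossref" if doi_exists(doi) else "NOT FOUND - check by hand"
    print(f"{doi}: {verdict}")

Even when every DOI resolves, the cited titles and authors should still be compared against the generated text: a record's existence is necessary but not sufficient evidence that the reference supports the claim.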

Looking Forward  

While LLMs may pose serious challenges to educational systems, banning them is not a practical solution. Instead, advancing AI literacy, redesigning assessment methods, and integrating AI ethics into curricula could be more effective approaches. Education should empower students to understand how these tools work, recognize their limitations, and reflect on their potential impact on critical thinking. Revising assessment strategies to emphasize reasoning-based tasks, oral examinations, and real-world problem-solving can reduce students' reliance on LLM-generated responses. Likewise, incorporating discussions of algorithmic bias, misinformation, hallucinations, and digital responsibility into the curriculum can help learners better understand the risks these technologies carry.

 

- Written by Hamayoon Behmanush