Reflections from WAILS 2025: From Learner Modeling to Algorithmic Citizenship

In December 2025, I (Hamayoon Behmanush) had the opportunity to attend the Workshop on Artificial Intelligence with and for Learning Science (WAILS) in Cagliari, Italy, to present our work on Hope, Aspirations, and the Impact of LLMs on Female Programming Learners in Afghanistan. The workshop brought together researchers at the intersection of AI, education, and the learning sciences. Discussions explored what role AI should play in learning, and how to keep humans in the driver's seat as learning environments become increasingly intelligent.
 
Among the keynote talks, Beata Klebanov's presentation, "Language Technology for Learning Modeling," took up this question directly. She argued that language models can support the learning loop by relying on an explicit learner model: a structured picture of what a learner is trying to do and how they are currently learning. In this view, a learner model captures key characteristics such as goals, interests, and learning behaviors, and these signals can then be used to adapt learning support. From this perspective, the potential of AI, including generative AI, extends beyond providing explanations or generating practice tasks: it can make learning support more responsive to learners' objectives, enable role-play and simulated scenarios, and support assessment at scale. A central takeaway was that AI's value in education lies less in automation and more in instructional support with accountability. AI can help generate learning opportunities, surface relevant evidence, and support reflection, but it should not become the authority that determines what learners should study.
 
Another idea discussed during the workshop was Prof. Giovanni Adorni's concept of "Cyber-Humanism," a framework that positions learners and educators as the main stakeholders in the learning process. From this perspective, algorithmic citizenship becomes a useful learning goal. Here, "algorithmic citizenship" means being able to work with AI systems critically rather than simply accepting their outputs. As educational environments increasingly rely on algorithmic mediation such as recommendations, automated feedback, ranking, and personalization (for example, a tutor that suggests what you should study next and grades your draft explanations automatically), learners and educators need capacities similar to citizenship: the ability to engage with awareness, to question and challenge system outputs, and to exercise judgment rather than delegate it.
 
In education, algorithmic citizenship can be understood as maintaining agency in AI-mediated environments. In other words, learners should develop the ability to think, learn, and make responsible decisions even when systems constantly offer suggestions and ready-made answers. This view also reshapes the meaning of AI literacy: it is not only a list of risks or a set of prompting tips, but a set of practices that keeps learners active, evaluating claims, checking evidence, and explaining why they accept or reject outputs. A key takeaway is that learners should be supported in using AI while remaining responsible for what they accept as knowledge.
 
Stepping away from WAILS 2025, what stays with me most is a simple but demanding idea: the point of AI in education isn't to replace learners' judgment but to amplify it, making support adaptive through learner models while keeping responsibility, evidence, and decision-making firmly with people. The message felt even more vivid in Cagliari, with its great December weather and the kind of calm that makes reflection come easily, especially when paired with the amazing taste of a fresh pardula between sessions.
