

In their critical reflection, Bauer et al. (2025) explore the role of artificial intelligence (AI) in education, focusing especially on generative AI. They argue that its effect on student learning is not inherently positive or negative but depends on how thoughtfully it is implemented. While large language models (LLMs) like ChatGPT have triggered a rush of interest and optimism, the authors warn that much of this excitement is based on hype and lacks a strong foundation in empirical research.
The authors, who bring different research backgrounds in AI and learning to the reflection paper, highlight key problems in current research on AI in education: many studies rely on weak methodologies, such as missing control groups or relying solely on self-reported measures of learning. Others draw overly optimistic conclusions from improved performance during AI-supported tasks without assessing whether students actually learned or retained anything. These limitations, the authors argue, hinder our understanding of AI’s true educational value.
Focus on Cognitive Learning
The authors emphasize the importance of focusing on cognitive learning outcomes (knowledge acquisition, critical thinking, and problem-solving) rather than short-term performance or user satisfaction. Although generative AI can automate many tasks, the development of human expertise remains essential. Students must be equipped with AI literacy, which involves understanding what AI can and cannot do, recognizing potential biases, and engaging with AI outputs critically and reflectively.
The ISAR Model with Examples
To clarify the diverse ways AI can influence student learning, the authors introduce the ISAR model, which outlines four distinct types of effects:
Inversion occurs when AI use unintentionally undermines learning by reducing students’ cognitive engagement. Instead of actively processing information, learners may rely too heavily on AI tools to complete tasks. For example, studies have shown that students using ChatGPT to revise or generate written content often achieve higher task performance in the moment, yet show no improvement in their knowledge or ability to transfer that learning to new contexts — suggesting shallow engagement and limited long-term learning.
Substitution refers to the use of AI to replace traditional instructional materials or methods without fundamentally changing the learning process or improving its quality. This may enhance efficiency or scalability but does not lead to deeper learning. For instance, AI-generated teaching videos or flashcards have been found to be just as effective as teacher-made materials, offering a practical alternative but not a pedagogical improvement.
Augmentation describes situations where AI adds meaningful cognitive support to enhance existing instructional approaches. This includes features like adaptive feedback, personalized hints, or data dashboards that help students monitor their own progress. Tools such as AI-based feedback systems or writing assistants have been shown to help learners engage in more effective self-regulation and revision processes, supporting deeper understanding and skill development.
Redefinition represents the most transformative potential of AI, as it enables learning activities that were previously difficult or impossible to implement. These include dialogic, exploratory, or design-based tasks that promote deep, interactive learning. One example is the use of AI-powered tutoring systems that simulate Socratic dialogues or allow students to teach virtual peers, encouraging critical thinking, reflection, and knowledge construction in ways that go beyond traditional formats. Reported findings on instructional prompting show that action-based prompts were more effective than time-based ones, and that group- or learner-specific prompts outperformed uniform prompts given to all learners. Combining generic and directed prompts produced the strongest results, highlighting the value of balancing flexibility with specificity.
Conditions for Successful AI Integration
The authors argue that AI can only enhance learning if certain conditions are met. These include learners’ and teachers’ digital and pedagogical competencies, particularly AI literacy and critical thinking skills. Equally important are contextual supports, such as access to devices, professional development opportunities, and ethical and regulatory frameworks. Without these, AI may reinforce inequalities or lead to ineffective educational practices.
Conclusion
Bauer et al. conclude that AI in education should be approached with both critical reflection and scientific rigor. The ISAR model offers a valuable tool for researchers and educators to differentiate between various types of AI effects and to design AI-supported learning environments that foster meaningful, long-term cognitive outcomes. By moving beyond the hype and embracing evidence-based approaches, the educational field can better harness AI’s potential while avoiding its pitfalls.