PhD AI and Education: Developing Students’ Critical Thinking in Generative AI-Supported Learning Environments – July 9, 2025
DESCRIPTION
In Europe’s universities, the rapid, unregulated adoption of generative AI, and of LLMs in particular, has outpaced both institutional policy and pedagogical frameworks (Lim et al., 2023): instructors use these tools to craft lessons, assessments, and feedback (Daniel et al., 2024), while students rely on them for writing, coding, and research (Kuhail et al., 2023). The result is a large and growing body of literature that is not always consistent and often generates more confusion than guidance. As debates rage over GenAI’s promise to personalize learning and scaffold complex skills versus its ethical pitfalls, de-skilling risks, and threats to academic integrity (Fütterer et al., 2023; Macnamara et al., 2024), a gap persists between what universities actually do and the principles they claim to uphold. To bridge this divide, it is imperative to develop students’ critical thinking abilities, equipping them not only to navigate and evaluate the overwhelming, unfiltered stream of AI-generated knowledge but also to engage reflectively with technology in ways that uphold scholarly rigor, ethical standards, and institutional coherence.
Critical thinking is defined by Paul and Elder (2001) as “a mode of thinking in which the thinker improves the quality of their thinking by skillfully taking charge of the structures inherent in thinking and imposing intellectual standards upon them.” Critical thinking requires more than scepticism or opinion; it involves disciplined reasoning and intellectual self-regulation. This project frames critical thinking around two interlinked capabilities or dimensions: (1) the ability to identify and work with the elements of reasoning (e.g., purpose, assumptions, evidence, inferences); and (2) the ability to evaluate those elements using intellectual standards such as clarity, accuracy, relevance, logic, and precision (Elder, 2022). These must be accompanied by intellectual traits such as humility, autonomy, and fair-mindedness.
In LLM-enhanced learning environments, these two dimensions often become obscured: students encounter fluent but potentially misleading content that they must filter, evaluate, and adapt to their own learning objectives.
Recent research shows that although LLMs can ease cognitive load, they may also weaken critical thinking abilities (Stadler et al., 2024; Gerlich, 2025), an effect reflected in altered patterns of brain activity (Kosmyna et al., 2025). Moreover, AI-tool usage correlates negatively with critical-thinking skills, particularly among younger users, who tend to rely more heavily on these tools and consequently score lower on cognitive-performance measures (Zhang et al., 2024). Such effects are especially concerning in educational contexts, where fostering cognitive skills is paramount.
To counteract these risks, learners must develop new competencies to critically engage with AI. To establish a trusting and productive collaboration, they need to define clear learning objectives when working with LLMs, strategically interact with the system to meet these objectives, critically evaluate the outputs received, and reflect on how AI contributes to their learning process. However, achieving this critically informed cooperation poses three main challenges: (1) understanding the strategies learners currently employ when using LLMs for study; (2) transforming those strategies to support a more trustworthy, collaborative use of AI; and (3) reimagining LLM architectures and interfaces to better scaffold and foster critical-thinking strategies in educational settings.
This PhD project aims to study how to support and develop students’ critical thinking skills when they interact with LLMs for learning purposes, and to identify strategies that foster such critical interaction. Drawing on scientific evidence of AI-supported critical thinking, the main objectives of the thesis are:
1. Conducting a systematic literature review on LLM-supported critical thinking and reasoning practices to identify effective methods, tools, and approaches. The goal is to uncover strategies that encourage ethical, meaningful, and high-quality activities for promoting critical thinking competencies.
2. Compiling best practices for using LLMs to support critical thinking. These practices will be validated with educators to ensure their relevance, practicality, and alignment with academic standards.
3. Designing and proposing an experimental study to evaluate selected best practices within an ecological (real-world educational) environment.
4. Evaluating and enhancing the SIMBA tool, originally developed by the IRIT team, to support the development of critical thinking. The improved tool will undergo systematic validation through user testing and experimental studies. The evaluation will focus on the tool’s usability, its impact on learning outcomes, and its alignment with academic and ethical standards. By monitoring key criteria during implementation, it will ensure that the tool effectively supports high-quality activities for developing students’ critical thinking skills and meets the practical needs of educators and learners.
THE THESIS IN THE CONTEXT OF ANITI PROJECT
This PhD project is situated within the ANITI project, specifically its educational division. A primary outcome of this initiative is the establishment of the ANITI Observatory, a collaborative hub dedicated to sharing, experimenting, and advancing pedagogical innovations involving Artificial Intelligence (AI) in higher education.
The Observatory serves as a space for collective reflection on ethical and responsible strategies and practices for integrating AI into teaching and learning. Through research, experimentation, training, and strategic foresight, it connects educational institutions, researchers, and practitioners, thus influencing the future landscape of AI-enhanced education.
The thesis findings will contribute scientific evidence to the Observatory regarding effective methods for developing students’ critical thinking abilities for trustworthy and productive collaboration with Generative AI, particularly LLMs. Consequently, in addition to thesis-related tasks, the PhD candidate will:
1. Synthesize research outcomes into dissemination reports outlining best practices in educational contexts.
2. Conduct experimental studies (both controlled and ecological) involving students and teachers within higher education settings.
3. Organize and lead workshops with educators to disseminate findings and support the development and implementation of best practices aimed at fostering students’ critical and informed engagement with Generative AI.
REQUIRED
The recruited candidate will carry out experimental research, which will involve:
• Conducting a systematic literature review on AI-supported learning, as well as on the regulation of learning and cognitive load
• Designing experimental protocols
• Conducting experiments with learner populations for data collection
• Performing data analyses
• Carrying out inferential statistical analyses
• Writing scientific articles
• Presenting research at scientific conferences
• Participating in regular research meetings with the teams involved in the project
• Collaborating with international partners
REQUIRED SKILLS
• Knowledge in cognitive science (primarily cognitive psychology)
• Familiarity with the experimental method
• Interest in AI and AI-based tools
• Understanding of learning processes
• Ability to conduct an experiment (designing protocols, analyzing data)
• Ability to perform inferential statistical analyses
• Ability to analyze large datasets
• Ability to report scientific results
• Ability to present one’s work
• Ability to produce meeting reports
• Openness to other research conducted within the lab
• Rigor and attention to detail
• Software development skills in any programming language (not mandatory)
HOST INSTITUTION
The IRIT laboratory is an interdisciplinary UMR in computer science at the University of Toulouse and is part of the ANITI project. The recruited candidate will join the TALENT team. While primarily based at IRIT, the candidate will also be affiliated with the CLLE laboratory, an interdisciplinary UMR in cognitive science at the University of Toulouse Jean Jaurès. The PhD will be co-supervised by Mar Pérez-Sanagustín (Associate Professor in Computer Science at IRIT and Pedagogical Innovation Officer at ANITI) and Franck Amadieu (Professor of Cognitive Psychology at CLLE).
The recruited candidate will work primarily at the IRIT laboratory, but also with the CLLE. They will carry out data collection both in the field with students and in laboratory settings. The candidate will also be expected to travel to national and international scientific conferences.
IMPORTANT DATES & DOCUMENTATION
Application deadline: 31 July 2025
Documentation:
– CV
– Motivation Letter
– Academic transcripts (undergraduate and Master’s)
– ID or Passport Copy
Auditions: candidates will be informed by e-mail; auditions are planned for September
References
1. Daniel, L., Arnold, M., Srivastava, A., Plastow, K., Strelan, P., Ploeckl, F., Lekkas, D., & Palmer, E. (2024). The impact of generative AI on higher education learning and teaching: A study of educators’ perspectives. Computers and Education: Artificial Intelligence, 100221.
2. Elder, L. (2022). Critical thinking. Routledge.
3. Fütterer, T., Fischer, C., Alekseeva, A., Chen, X., Tate, T., & Warschauer, M. (2023). ChatGPT in education: Global reactions to AI innovations. Scientific Reports, 13, 15310.
4. Gerlich, M. (2025). AI tools in society: Impacts on cognitive offloading and the future of critical thinking. Societies, 15(1), 6. https://doi.org/10.3390/soc15010006
5. Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X. H., Beresnitzky, A. V., & Maes, P. (2025). Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing tasks. arXiv preprint arXiv:2506.08872.
6. Kuhail, M. A., Alturki, N., Alramlawi, S., & Alhejori, K. (2023). Interacting with educational chatbots: A systematic review. Education and Information Technologies, 28(1), 973–1018.
7. Lim, W. M., Gunasekara, A., Pallant, J. L., Pallant, J. I., & Pechenkina, E. (2023). Generative AI and the future of education: Ragnarök or reformation? A paradoxical perspective from management educators. The International Journal of Management Education, 21(2), 100790.
8. Macnamara, B. N., Berber, I., Çavuşoğlu, M. C., Krupinski, E. A., Nallapareddy, N., Nelson, N. E., Smith, P. J., Wilson-Delfosse, A. L., & Ray, S. (2024). Does using artificial intelligence assistance accelerate skill decay and hinder skill development without performers’ awareness? Cognitive Research: Principles and Implications, 9(1), 46.
9. Stadler, M., Bannert, M., & Sailer, M. (2024). Cognitive ease at a cost: LLMs reduce mental effort but compromise depth in student scientific inquiry. Computers in Human Behavior, 160, 108386.
10. Zhang, S., Zhao, X., Zhou, T., & Kim, J. H. (2024). Do you have AI dependency? The roles of academic self-efficacy, academic stress, and performance expectations on problematic AI usage behavior. International Journal of Educational Technology in Higher Education, 21(1), 34.
Advisors:
Mar Pérez-Sanagustín – mar.perez-sanagustin@irit.fr
Franck Amadieu – franck.amadieu@univ-tlse2.fr
Net salary:
according to experience
Location:
IRIT, Toulouse
Duration:
36 months
Salary:
€2,494.55/month (gross salary). The position includes health coverage (through the French social security system) and paid vacation (6 weeks, with the possibility of negotiating more).
Required Qualifications:
Master’s degree in Computer Science, Cognitive Psychology, or a related area.
Funding:
ANITI
Starting date:
1 November 2025 (earlier if administrative procedures allow)
APPLICATION PROCEDURE
Formal applications should include a detailed CV and a motivation letter.
Samples of published research by the candidate will be a plus.
Applications should be sent by email to: mar.perez-sanagustin@irit.fr
More information: https://aniti.univ-toulouse.fr/