Küchemann, Stefan; Rau, Martina; Schmidt, Albrecht; Kuhn, Jochen (2024): ChatGPT's quality: Reliability and validity of concept inventory items. Frontiers in Psychology, 15. ISSN 1664-1078
fpsyg-15-1426209.pdf
The publication is available under the license Creative Commons Attribution.
Abstract
Introduction: The recent advances of large language models (LLMs) have opened up a wide range of opportunities, but at the same time they pose numerous challenges and questions that research needs to answer. One of the main challenges is the quality and correctness of the output of LLMs, as well as students' overreliance on that output without critically reflecting on it. This raises the question of how good the output of LLMs is in educational tasks and what students and teachers need to consider when using LLMs to create educational items. In this work, we focus on the quality and characteristics of conceptual items developed with ChatGPT without user-generated improvements.
Methods: For this purpose, we optimized prompts and created 30 conceptual items in kinematics, a standard topic in high-school physics. The items were rated by two independent experts, and the 15 items that received the highest ratings were included in a conceptual survey. The dimensions were designed to align with those of the most commonly used concept inventory, the Force Concept Inventory (FCI). We administered the designed items together with the FCI to 172 first-year university students. The results show that the ChatGPT items have medium difficulty and discrimination indices, but overall they exhibit slightly lower average values than the FCI items. Moreover, a confirmatory factor analysis confirmed a three-factor model that is closely aligned with a previously suggested expert model.
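The item statistics mentioned here (difficulty and discrimination index) are standard classical test theory quantities. As a rough illustration only (not taken from the paper; the 0/1 scoring, the variable names, and the use of a corrected item-total correlation as the discrimination measure are assumptions), the sketch below computes item difficulty as the proportion of correct answers and item discrimination as the correlation between an item and the score on the remaining items.

```python
import numpy as np


def item_statistics(responses: np.ndarray):
    """Classical test theory item statistics for a 0/1-scored response matrix.

    responses: shape (n_students, n_items), 1 = correct, 0 = incorrect.
    Returns (difficulty, discrimination) arrays of length n_items.
    """
    n_students, n_items = responses.shape
    # Item difficulty: proportion of students who answered the item correctly.
    difficulty = responses.mean(axis=0)

    discrimination = np.empty(n_items)
    total = responses.sum(axis=1)
    for j in range(n_items):
        # Corrected item-total correlation: correlate item j with the score on
        # all *other* items so the item does not inflate its own estimate.
        rest_score = total - responses[:, j]
        discrimination[j] = np.corrcoef(responses[:, j], rest_score)[0, 1]
    return difficulty, discrimination


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Simulated data standing in for 172 students answering 15 items.
    simulated = (rng.random((172, 15)) < 0.6).astype(int)
    p, d = item_statistics(simulated)
    print("difficulty:", np.round(p, 2))
    print("discrimination:", np.round(d, 2))
```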
Results and discussion: In this way, after careful prompt engineering and a thorough analysis and selection of items generated fully automatically by ChatGPT, we were able to create concept items of only slightly lower quality than carefully human-generated concept items. The procedures needed to create and select such a high-quality, fully automatically generated set of items require substantial effort and point towards the cognitive demands placed on teachers when using LLMs to create items. Moreover, the results demonstrate that human oversight or student interviews are necessary when creating one-dimensional assessments and distractors that are closely aligned with students' difficulties.
| Doc-Type: | Article (LMU) |
| --- | --- |
| Organisational unit (Faculties): | 17 Physics |
| DFG subject classification of scientific disciplines: | Natural sciences |
| Date Deposited: | 18. Nov 2024 10:00 |
| Last Modified: | 18. Nov 2024 10:00 |
| URI: | https://oa-fund.ub.uni-muenchen.de/id/eprint/1566 |
| DFG: | Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 491502892 |