{"id":1978,"date":"2025-09-03T12:32:29","date_gmt":"2025-09-03T16:32:29","guid":{"rendered":"https:\/\/ozer.gt\/log\/?p=1978"},"modified":"2025-09-03T13:43:49","modified_gmt":"2025-09-03T17:43:49","slug":"student-learning-with-llms","status":"publish","type":"post","link":"https:\/\/ozer.gt\/log\/2025\/09\/03\/student-learning-with-llms\/","title":{"rendered":"Student learning with LLMs"},"content":{"rendered":"<div id='gallery-1' class='gallery galleryid-1978 gallery-columns-3 gallery-size-thumbnail'><figure class='gallery-item'>\n\t\t\t<div class='gallery-icon landscape'>\n\t\t\t\t<a href='https:\/\/ozer.gt\/log\/2025\/09\/03\/student-learning-with-llms\/chatgpt-usage-timeline\/'><img loading=\"lazy\" decoding=\"async\" width=\"150\" height=\"150\" src=\"https:\/\/ozer.gt\/log\/wp-content\/uploads\/2025\/09\/chatgpt-usage-timeline-150x150.png\" class=\"attachment-thumbnail size-thumbnail\" alt=\"\" \/><\/a>\n\t\t\t<\/div><\/figure><figure class='gallery-item'>\n\t\t\t<div class='gallery-icon landscape'>\n\t\t\t\t<a href='https:\/\/ozer.gt\/log\/2025\/09\/03\/student-learning-with-llms\/how-students-use-chatgpt\/'><img loading=\"lazy\" decoding=\"async\" width=\"150\" height=\"150\" src=\"https:\/\/ozer.gt\/log\/wp-content\/uploads\/2025\/09\/how-students-use-chatgpt-150x150.png\" class=\"attachment-thumbnail size-thumbnail\" alt=\"\" \/><\/a>\n\t\t\t<\/div><\/figure>\n\t\t<\/div>\n\n<p class=\"query-text-line ng-star-inserted\">In January, I wrote a short note based on one of my talks: &#8220;<a href=\"https:\/\/ozer.gt\/log\/2025\/01\/07\/how-to-use-llms-for-learning-in-2025\/\">How to use LLMs for learning in 2025<\/a>.&#8221; In that note, I differentiated between using LLMs (1) to learn and (2) to do. With the new semester now underway, I&#8217;ve checked some usage numbers and read the Ammari et al. (2025) paper on how students use ChatGPT. 
I was particularly interested in the second RQ: &#8220;Which usage patterns correlate with continued or increased reliance on ChatGPT over time?&#8221;<\/p>\n<p class=\"query-text-line ng-star-inserted\">Over-reliance on any tool, regardless of what it is, is a potential red flag for lasting learning, especially when the goal is comprehension. For example, understanding derivatives and calculating them using a computer are two distinct learning objectives. If reliance on a tool substitutes for understanding, the long-term effect may not be a net positive.<\/p>\n<p class=\"query-text-line ng-star-inserted\">The article does not really answer the reliance part of the question. It does, however, report some interesting correlations between LLM behavior and student engagement. Notably, <strong>when ChatGPT asks for clarifications, provides unintended or inconsistent answers, or communicates its limitations, students are less likely to continue using it.<\/strong><\/p>\n<p class=\"query-text-line ng-star-inserted\">Plausible, but what these correlations mean for learning and comprehension is unclear. What is the next step after disengagement? Do students switch to another LLM to get a direct answer without having to field follow-up questions, or do they go back to figuring it out on their own?<\/p>\n<p class=\"query-text-line ng-star-inserted\">Class of 2029, I guess the answer lies with you. Welcome!<\/p>\n<p><a href=\"https:\/\/openrouter.ai\/provider\/openai\">Source<\/a> \u2013 <a href=\"https:\/\/arxiv.org\/abs\/2505.24126v1\">Paper<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>In January, I wrote a short note based on one of my talks: &#8220;How to use LLMs for learning in 2025.&#8221; In that note, I differentiated between using LLMs (1) to learn and (2) to do. With the new semester now underway, I&#8217;ve checked some usage numbers and read the Ammari et al. 
(2025) paper [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"cybocfi_hide_featured_image":"","footnotes":""},"categories":[1],"tags":[],"class_list":["post-1978","post","type-post","status-publish","format-standard","hentry","category-uncategorized"],"_links":{"self":[{"href":"https:\/\/ozer.gt\/log\/wp-json\/wp\/v2\/posts\/1978","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/ozer.gt\/log\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/ozer.gt\/log\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/ozer.gt\/log\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/ozer.gt\/log\/wp-json\/wp\/v2\/comments?post=1978"}],"version-history":[{"count":17,"href":"https:\/\/ozer.gt\/log\/wp-json\/wp\/v2\/posts\/1978\/revisions"}],"predecessor-version":[{"id":1997,"href":"https:\/\/ozer.gt\/log\/wp-json\/wp\/v2\/posts\/1978\/revisions\/1997"}],"wp:attachment":[{"href":"https:\/\/ozer.gt\/log\/wp-json\/wp\/v2\/media?parent=1978"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/ozer.gt\/log\/wp-json\/wp\/v2\/categories?post=1978"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/ozer.gt\/log\/wp-json\/wp\/v2\/tags?post=1978"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}