Revisiting learning with LLMs (Understanding vs. knowing)

Right after giving a short talk at the International Business Pedagogy Workshop on how best to use large language models for learning, and discussing the topic with a great panel, I took a short break and never got the chance to revisit and reflect on our discussion until now.

In the talk, I focused on raising a few questions, but the panel left little time to discuss the answers. I’ve since revisited my slides and added a final slide with some answers. You can find the updated deck, “Mind the AI Gap: Understanding vs. Knowing,” at https://ozer.gt/talks/mind-the-ai-gap.html

Bottom line:

To promote understanding, it’s better to treat LLMs as knowledge aggregators, not as oracles with answers. This is what tools like Perplexity aim to do. Instead of asking simple, straightforward questions and expecting a correct answer, deconstructing a question (offline) before interacting with an LLM is more likely to trigger reasoning and facilitate understanding rather than just knowing.

In the classroom, this can be accomplished by encouraging students to think about and reflect on a question offline before going online to interact with LLMs. For example, a homework assignment can begin in class with a task to deconstruct the problem, before students go online later to find a solution.
