7 Comments
Tim Hackenberg

Great post, Matt. Very insightful. I have also been worrying about this, especially the bypassing of the hard work needed to generate VB, as opposed to just reading it. I like the way you put it that the repertoire is not changed in a meaningful way. Good stuff!

Do you think there are legitimate educational uses for LLMs?

Matt Normand

I am sure there are many legitimate uses, just like with any technology. I think the key is that people have to behave in relevant ways in response to whatever the LLM produces. They can't just read it or, even worse, copy and paste it. Without the LLM, even minimal engagement like copying and paraphrasing material from a textbook to an essay requires the learner to emit some potentially relevant behavior. I just worry that the LLM makes it way too easy to punt. And, of course, if you want to train good thinkers, they need to do a lot more than let the LLM find information and synthesize it--the student needs to be learning how to do that.

Kevin Luczynski

Many legitimate uses. With the right boundaries and guidance, I would love to be an early student in this day and age. OpenAI has a Study Mode that doesn't give comprehensive answers; instead, it prompts the respondent to answer guiding questions. That is, it prompts further inquiry. Anthropic has a Learning Mode, which is functionally the same.

Moreover, I like the fact that exams may take a different form: one that looks like an interview, an oral exam, or an ecologically valid in-vivo exam. I did these for my Verbal Behavior course, which was modeled after Dr. Dave Palmer's at WNEU. The students were stunned by the summative-assessment format at first, but soon came to appreciate it.

Kevin Luczynski

Your analysis makes good sense, especially the reading of outputs without subsequent active (collateral) responding. The risk seems notable for learners encountering new subject areas. My oldest son is 10, and the moment he witnesses the output from a chatbot on a difficult academic question, he will be hooked. The immediate, seemingly accurate, and confident responses you receive from frontier models are striking. On a related note, over the last 8 months I have mostly talked to my computer using Wispr Flow (at this moment, my count of dictated words is 419,266). This likely has behavioral implications as well. Thanks for sharing your insights, Matt.

As an aside, when you expand the access of frontier models (or loosen what others such as Ethan Mollick call the "harness"), such as moving from a more harnessed chatbot to a less-harnessed AI with access to your computer and the web in the form of Claude Code, it alters how you work and how you think about working. I am concerned about the negative implications here as well.

Matt Normand

You sound like you are speaking to me from the Matrix, Kevin. Of course, I expect nothing less from you.

Kevin Luczynski

On a related note, the style of working in lockstep with a frontier LLM is intoxicating and disconcerting at the same time. At 12:46 pm today, all three of my monitors displayed Opus 4.6 agents completing tasks on my laptop. One was updating an ongoing meeting-minutes agenda. Another was creating graphs in HTML format, graphs I couldn't have made myself in SigmaPlot or Prism even at the height of my graph-making days. The final one was updating a profit-and-loss prediction calculator for a children's book disguised as a self-help book for parents, designed to get them through the bedtime routine and promote their child initiating sleep in a desired location. I fed each "agent" the documents and instructions needed... and... then... waited for completion. Satisfying yet odd.

Matt Normand

I think you might need to touch grass, Kevin. Call Corey.