"Check and consolidate their understanding" by reading generated text that is not checked and has the same confident tone whether it's completely made-up or actually correct? I don't get it.
>interrogates our current teaching model
Jesus, many, many things put our current teaching model in question; ChatGPT is NOT one of them. Tbh this excitement is an example of focusing on the "cool new tech" instead of the "unsexy" things that actually matter.
> by reading generated text that is not checked and has the same confident tone whether it's completely made-up or actually correct? I don't get it.
This is a valid point, but it's referring to the state of things as of ~1.5 years ago. The field has evolved a lot, and now you can readily augment an LLM's answers with context in the form of validated, sourced, and "approved" knowledge.
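To make that concrete, here is a minimal retrieval-augmented sketch in Python. The `search_approved_sources` and `llm` callables are hypothetical stand-ins (not any particular vendor's API) for a vetted knowledge-base lookup and a chat-completion call:

    # Minimal sketch of retrieval-augmented generation over approved material.
    # `search_approved_sources` and `llm` are hypothetical stand-ins for a
    # vetted knowledge-base lookup and a chat-completion call.
    def answer_with_context(question, search_approved_sources, llm):
        # Pull a few vetted, sourced passages relevant to the question.
        passages = search_approved_sources(question, top_k=3)
        context = "\n".join(f"- {p['text']} (source: {p['source']})" for p in passages)

        # Constrain the model to the approved material and ask it to admit
        # when that material doesn't cover the question.
        prompt = (
            "Answer using only the approved material below. "
            "If it does not cover the question, say you don't know.\n\n"
            f"Approved material:\n{context}\n\n"
            f"Question: {question}"
        )
        return llm(prompt)

The point is simply that the model is steered to answer from pre-approved, sourced material instead of whatever it happens to remember.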
Is it possible that you are having a visceral reaction to the "cool new tech" without yourself having been exposed to the latest state of that tech? To me your answer seems like a knee-jerk reaction to the "AI hype" but if you look at how things evolved over the past year, there's a clear indication that these issues will get ironed out, and the next iterations will be better in every way. I wonder, at that point, where the goalposts will be moved...
No, ChatGPT and others still happily make stuff up and miss important details and caveats. The goalpost hasn't moved. The fact that there are specialized LLMs that can fact check (supposedly) doesn't help the most popular ones which can't.
Have you tried Claude.ai? In my experience on computer science topics, the LLMs are very good, because they have been trained on a vast amount of information online. I just had a nice conversation about mutexes and semaphores with Claude and was finally able to grasp what they were.
I do not know if this is the case for, say, mathematics or the sciences.
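For what it's worth, the distinction that conversation helped with boils down to: a mutex admits exactly one thread into a critical section, while a counting semaphore admits up to N at once. A tiny Python sketch of that idea (my own toy example, not from the Claude chat):

    import threading
    import time

    printer_lock = threading.Lock()            # mutex: one thread at a time
    connection_slots = threading.Semaphore(2)  # counting semaphore: up to two at once

    def use_printer(worker_id):
        with printer_lock:                     # exclusive access to the "printer"
            print(f"worker {worker_id} is using the printer")
            time.sleep(0.1)

    def use_connection(worker_id):
        with connection_slots:                 # at most two workers hold a slot
            print(f"worker {worker_id} holds a connection slot")
            time.sleep(0.1)

    threads = [threading.Thread(target=use_printer, args=(i,)) for i in range(4)]
    threads += [threading.Thread(target=use_connection, args=(i,)) for i in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()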
>To me your answer seems like a knee-jerk reaction to the "AI hype" but if you look at how things evolved over the past year
It's not a knee-jerk reaction; like you said, it's been two years of nonstop AI hype. I have used every chatbot model from OpenAI (3.5, 4, 4o, even o1) and a few from other companies as well. I've used code copilot tools. I have yet to come away not disappointed.
> there's a clear indication that these issues will get ironed out, and the next iterations will be better in every way
On the contrary, there's NO indication of meaningful progress since the release of GPT 3.5. There's incremental progress, sure, as models get larger and larger and things get tweaked and perfected, but NO breakthrough and NO indication of an imminent one. Everything points to the current SotA being, more or less, as good as it gets with the transformer model.
> now you can readily augment LLMs answers with context in the form of validated, sourced and "approved" knowledge.
The student isn't an idiot; they'd use what the teacher says as their ground truth, and ChatGPT would be used to supplement their understanding. If it's wrong, they didn't understand it anyway, and reasoning/logic would allow them to suss out any incorrect information along the way. The teaching model can account for this by providing the checks to ensure their explanation/understanding is correct. (This is what tests are for: to check your understanding.)
How is someone who is learning something supposed to figure out if what chatgpt is saying is bullshit or not? I don't understand this.
It's a kind of Gell-Mann amnesia effect. When I ask it a question whose answer I know (or at least know enough to tell whether the answer is wrong), it fails miserably. Then I turn around and ask it something I don't know anything about and... I'm supposed to take it at its word?
You have what the teacher has told you as your primary correct reference point (your ground truth). It should align with that; if not, the LLM is wrong.
Obviously the gaps in between are where the issue would be, but as I say, the student can think this through (most lessons build on previous foundations, so they should have an understanding of the fundamentals and won't be flying blind).
The fact here is that a student, using ChatGPT, managed to give the right answer. And I agree with GP that the teaching model must evolve. The cat is out of the bag now, and clearly students of (unfortunately) almost all ages are using it. Whether it's "cool new tech" or anything else doesn't matter, and as a teacher you must not dismiss or ignore it.
Not all subjects taught have to evolve in the same way. For example, it is very different to use ChatGPT to have a technical discussion than to simply ask it to generate a text for you. Meaning this tech does not have the same impact in a literature class as it does here in a CS one. It can be misused in both, though.
I always come back to the calculator analogy with LLMs and their current usage. Here, in the context of education: before calculators were affordable, simply giving the right answer could mean that you knew how to calculate it (not entirely true, but the signal was stronger). After calculators, math teachers were clearly saying "I want to see how you came up with the answer or you won't get any points". They didn't solve the problem entirely, but they had to adapt to that "cool new tech" that was clearly not helping their students learn, as it could only give them answers.
I don’t know if you have been teaching, but I have (for nearly 19 years now), to a lot of different people of various ages. I’m also a daily user of LLMs.
I’m firmly convinced that LLMs will have an impact on teaching because they are already being used alongside / superimposed on current classes.
The physical class, the group, has not been dislodged even after hundreds of thousands of remote classes during lockdown. Students were eager to come back, for many reasons.
LLMs have the potential to enhance and augment the live physical class. At a design school I teach at, we have even proposed a pilot program for a grant, where my History of Tech Design course will be the in-vivo testing ground for new pedagogical strategies using LLMs.
Tools that graft onto the current way of teaching have had more impact than tools that promise to “replace universities/schools”.
I'm no LLM fanboy and I do know about their issues and shortcomings.
I also think that asking the right questions to a model while following a lecture, assessing its answers and integrating them into one's own reasoning is difficult. There is certainly a minimum age/experience level under which this process will generally fail, possibly hindering the learning outcome.
Nevertheless, I saw with my own eyes a mid-level student significantly improving his understanding of a difficult topic because he had access to an LLM in real time. I believe this is a breakthrough. Time will tell.
I don't know, is seeming "conversationally smarter" when you have access to a language model really much different from just looking stuff up and pattern-matching answers?
I'm afraid these models are making people sound smarter and feel smarter without any actual gains in real world problem solving skills.