"I care that at each of these stages, they are putting in the intellectual effort in educational community to come to a better understanding of the ideas under discussion."
Creating opportunities for intellectual effort is the whole point of almost any task I create for my classrooms. The extent to which any tool (calculator, LLM, word processor, piece of paper) is useful in a classroom is the extent to which it empowers or enables the intended focus of the intellectual effort. Using AI in the classroom is positive if it supports the intended intellectual effort. It it takes the place of the intended intellectual effort then it only cheapens the education and narrows the learning taking place.
It's the same with art, as I wrote elsewhere... I think in general, AI-mongers and devotees are only focusing on the results. They are ignorant of, or choose to ignore, the other, far more important part of the creative process: the journey that results in an improved human being.
Why clean my room? It’s just going to get messy again eventually.
Why learn sorting algorithms or grind on leetcode? The best versions of these algorithms are already in every standard library, or if not there, then one of the top three repos for it on github. (And surely I’ll never be the only observer of an unexpected bug or weird use case requiring me to read and understand it.)
Foolish racers are always going around the whole track; why not just cross the finish line when it’s behind them?
What do you think is at the root of the problem? Do you think that AI is the problem and has made taking short-cuts way too easy, or do you think it magnifies a problem in the education system and its incentives at large? Is there a way education could adapt to new tools such as AI or should they fight against it?
I think it's a fair question. I think most technology acts as a kind of magnifier for our failings. Students before generative AI were apt to cheat when they had opportunity to do so, pressure to do so, and felt they could get away with it. AI has, I think, exacerbated all three of those: it's easy to do, often undetectable, and if "everyone is doing it," I might feel foolish if I don't participate. I have friends and colleagues who are confident we can find ways to adapt and adopt AI productively, but if it's a choice between "adapt into the classroom" or "fight against it," I am on the "fight against it" side.
I have a hunch that the insane amount of screen time students get in K-12 classrooms (and its well documented educational downfalls) has created college students who lack the skills to write good papers. I don't think these students are without hope, but I think we have to address AI and screen time K- BA/BS if we are going to regain a culture of people who know HOW to learn. Thoughts?
I think this has to play a role. I don't think kids are reading or learning well at any stage, and certainly screens are a piece of the puzzle. I am very much on board with the push Jonathan Haidt and others are making toward phone-free schools (incredible to me that such things even need to be a policy at younger ages, but here we are). With that relatively simple switch, I think much would be improved.
Yes, much improved but still a long way to go. I've been in the K-12 screen reduction space since 2017 and though the data keeps talking about how bad screens are for students, we keep pushing more into the classroom. I wrote today about one positive stride - Sweden is actually reversing course & going back to paper text books!!
I think when we get AI to do things like write or do science we forget that these are human activities. We do them just for the joy of knowing and using our rational faculties. Getting an AI to write for you is almost like getting an AI to talk to a friend for you or go on a hike for you. We have a problem in this society as seeing everything as a means and not an end.
AI users are incredibly short sighted and usually have little to no insight into neurology, psychology, or even IT for that matter.
AI can produce mediocre content instantly en-masse (we can talk about model collapse and how its spew is worsening but that's another story.)
It will never produce greatness. Greatness is not incremental, it is however a quantum leap past a tipping point which is reached by incremental steps.
You produce bad work, you learn by imitating better work of others. You get a little better, a little faster. Repeat A LOT. Then something happens. All of the learning and all the practice and all the knowledge create a confident, zen-like, flow state. You're playing the notes without thinking about it, writing the lyrics, making debate points ...
Then the eureka moment hits where all the subconsciously held knowledge feeds into an imaginative leap and extraordinary work happens.
This is what AI will never achieve, it can draw in huge quantities of categorised data (paintings, rock music, poetry). It can take fragmented datapoints from those collections and spew out a chimera approximation.
However it can't identify good work from bad, not even its own erroneous approximations (hence the ever more frequent bizzare mistakes). But more importantly it can't transcend those patterns to evolve the artform into the next stage.
That requires imagination which cannot be programmed, coded, or synthesised.
"You might as well say that Shakespeare’s existence ought to keep me from writing poetry, or the existence of virtuosos keep me from learning an instrument."
More than that, you might as well say *Shakespeare himself* shouldn't have taken up poetry!
No one is born knowing how to write a masterpiece. When the boy Shakespeare began his education, it's quite likely his early writings were inferior to the work of adult playwrights of his day -- people of less innate talent but much more experience. Should his teachers have told the future Bard not to bother?
The machine of education has never been perfect, as nothing is, but recently it's been scrutinized and prodded more than ever. We have the most college graduates ever and many of them are over-educated for the jobs they perform. Naturally, it's questioned the role of education in the first place.
Some take a utilitarian stand, that education is supposed to improve your earning-potential; others a moral directive with the goal of expanding a young person's mind. I'd say both are valid, but the former is at greatest risk with the proliferation of AI.
I agree wholeheartedly that we force students to go through the motions (performing inadequate work) simply to develop a foundation and philosophy of learning that can allow them to perform under novel unseen settings (the difference between cramming and learning).
Then, we have to contend with AI becoming a useful tool, how do we teach people to work with AI? If I could leverage a personal anecdote, I'm an undergrad computer science student and I've noticed a clear delineation between "iPad kids" and "computer kids" in their understanding of technologies. iPad Kids were no doubt more immersed within computers, but these computers were streamlined for their personal use, abstracting the details of the computers implementations, so when it comes time to building the next iPad, you're lost. In comparison computer kids have a stronger understanding of how computers work because they watched it in its raw, pathetic form and could poke and prod at the organs of the machine to learn how it worked.
I don't think being immersed with AI will make you much better suited for working with AI because we're designing AI to be simpler and better—the actual effort to "work with AI" is reduced every day, to the point that learning how to work with AI should be relatively seamless.
I think this is a paradigm-shifting moment for higher education with massive potential. On the one hand, there is a massive amount of cheating. But that's for the old manner of higher education—perhaps we can design new curricula to leverage the benefits of AI?
For instance, personalized assessments. What if instead of a multiple-choice exam, we used AI chatbots to engage in a personalized discussion with the student to create a more granular and topical assessment of their learning. What if instead of essays we had students perform oral debates (harder to cheat in) and use AI to assess it. Obviously these proposals aren't specific to AI technology, but they're economically infeasible. In a class of 1000 people, are you going to assign 1 TA to each student for assessment? With AI you could.
AI isn't necessarily a wake-up call for Higher Education (I'm not as anti-college as most people), but it is a shock, and I believe we're entering an awkward transition stage before we strip tradition and embrace the daunting, speculative future.
One other thing—we can require more process work. Like a ledger of the edits you made to a document, or the process you went through to create your work. That could be an alternative that allows traditional education tools but as a new anti-cheating mechanism.
I would suggest reading bryan caplans ”the case against education”.
Anyhow:
A lot of people simply dont get value or retain learning from humanities education or the numerous essay writings.
I do like thinking deeply about things though. But i am weird and unusual. Most humans dont want to reflect like that. And i dont think for ing them to in class will actually make them want to do that, or teach them properly.
They will just mask.
So i guess im fine with AI being used to cheat here, because the end result is sort of the same: they dont learn the things. But time is saves to some degree
I am familiar with Caplan, and I think he makes a provocative case, but as with many economists, I find I think quite differently than him on a variety of issues!
Students may not "value" certain things. Indeed, many of them may simply not learn anything from their courses. But frankly, student interest is not the driver or determiner of a proper education. Students do not know what they do not know, and do not yet know how to learn. If we adopted this "student interest" driver for all education, students would avoid all difficult coursework whether humanities or STEM. I don't know why we should accept that students who don't want to take advanced math must nonetheless do so, but then throw up our hands and say "hey, the students don't like their core humanities classes, so what's the point anyway!"
I do think, too, that forcing them to do in-class, handwritten work that cannot be gamed by AI will actually lead them to learn more than they would otherwise. Obviously some students will just be unreachable regardless of subject and format, but actually making them do work will mean they learn more than if they are not made to do work.
I do appreciate the pushback, and I think there are fair points here educators have to grapple with. Thanks for reading!
Thank you for speaking up for this; it's lovely to see.
I always felt it conceded too much (though I did it anyway), setting work and tasks with the hope of getting them to read the text and do some reflective (or any) work. Especially with classic texts - Plato in particular - unless they approached them in the right spirit then they'd likely miss the point entirely. 'What do I need to know for the exam' is not the right spirit...but it's almost impossible to avoid that when there's a grade at stake. If they really got what it was they were supposed to be doing, they'd understand how useless AI would be to their task.
It's not my business anymore but I think it's a shame the way things have gone. It was bad enough when they were reading and recycling from Wikipedia. Now something does that for the world and we call it progress.
The frustration is we all know the answer (well, for me anyway): conversation with questioning. Only we don't have time to do it properly, it doesn't lend itself to public display, and it's not easily or objectively graded. And grades are more important than philosophy to the modern university. But Plato said more or less the same thing a very long time ago.
Plus ça change... There will always be a difference between real understanding and the mere appearance of it, and one of these will always be more valuable than the other. If students understood this, then we'd be getting somewhere. Then they'd police themselves and sneer at those that didn't.
The problem isn’t with the assignments, it’s with the grading. And it’s been a problem since long before ChatGPT. If the AI can write better or analyze better that doesn’t mean the human shouldn’t learn those skills, but when assignments are busy work a student does for a grade, we’ve lost the plot of education. If, instead, we regularly tested their understanding using different mechanisms, then giving the opportunity to get feedback on homeworks or to simply not turn them in would likely make it more appealing to folks trying to learn the analysis concepts and less frustrating to those who prefer to focus their efforts in other areas.
You can’t force a kid to learn from an assignment, you have to trick them into learning from it ;)
I strongly disagree. My personal experience from elementary school to PhD and my experience as a teacher is that there is nothing that motivates true learning as much as grades. They are an incredibly powerful and benign ingenious technology. And I say this as someone whose always been super curious, read encyclopedia articles political philosophy and classical literature for fun etc. nevertheless my best and most serious study has always been grade motivated, or grade equivalent motivated (getting published in the best journal, getting awards jobs etc). Many people like to complain about grades and possibly they are counterproductive for some but I suspect in many cases people are just not being very honest with themselves.
Well, you are describing a person who enjoys learning and treats grades as rewards. You weren’t forced to learn from those assignments, you chose to learn to get the rewards. You also could have cheated or tried to cheat, but you chose not to. The easier it is to cheat and the harder it is to get caught, the fewer people who will spend effort on anything they don’t already want to learn.
But I agree with you that grades and other reward systems are a great way to encourage someone to go that extra mile in a direction they wanted to go but there was just a little too much friction to do it independently.
The main problem with this seems to be that today college is sold primarily as a vocational experience. It is supposed to be the hoops you jump through to get a good job, not self-improvement for it's own sake. I can kind of understand why, with the price tag being so high these days but it tends to make students more concerned with hoop-jumping than learning.
A related (although lesser) casualty of AI in the classroom / university is the benefits of being a peer tutor and helping your friends with their work. When the person who is good at writing or math or whatever subject helps others (even if that help is cheating let’s say) I think there’s real educational (and social benefit) to that process. If nothing else it helps really refine the material for themselves to a degree they likely wouldn’t need to otherwise. They are spending their time helping someone else and learning how to communicate effectively. But if no one goes to a peer tutor / friend anymore in college for help because AI is more convenient / better, that is lost. Not as important as the main student’s learning but another cost from all this and something I hope can manage to survive the coming changes
"I care that at each of these stages, they are putting in the intellectual effort in educational community to come to a better understanding of the ideas under discussion."
Creating opportunities for intellectual effort is the whole point of almost any task I create for my classrooms. The extent to which any tool (calculator, LLM, word processor, piece of paper) is useful in a classroom is the extent to which it empowers or enables the intended focus of the intellectual effort. Using AI in the classroom is positive if it supports the intended intellectual effort. If it takes the place of the intended intellectual effort, then it only cheapens the education and narrows the learning taking place.
Journey before destination, right?
It's the same with art, as I wrote elsewhere... I think in general, AI-mongers and devotees are only focusing on the results. They are ignorant of, or choose to ignore, the other, far more important part of the creative process: the journey that results in an improved human being.
Why clean my room? It’s just going to get messy again eventually.
Why learn sorting algorithms or grind on LeetCode? The best versions of these algorithms are already in every standard library, or if not there, then in one of the top three repos for them on GitHub. (And surely I'll never be the one to hit an unexpected bug or weird use case requiring me to read and understand it.) A toy sketch after these questions makes the point concrete.
Foolish racers are always going around the whole track; why not just cross the finish line when it’s behind them?
We all die someday, why do anything?
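In that satirical spirit, here is a toy Python sketch of the sorting quip, assuming nothing beyond the standard library. The last line shows that the hand-rolled version and the built-in produce identical output, which is exactly the satire's target: the only thing writing your own buys you is the understanding gained along the way.

```python
def insertion_sort(items):
    """A hand-rolled O(n^2) sort: educational, not practical."""
    result = list(items)
    for i in range(1, len(result)):
        key = result[i]
        j = i - 1
        # Shift larger elements right until key's slot is found.
        while j >= 0 and result[j] > key:
            result[j + 1] = result[j]
            j -= 1
        result[j + 1] = key
    return result

data = [5, 2, 9, 1, 7]
assert insertion_sort(data) == sorted(data)  # same destination, different journey
```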
What do you think is at the root of the problem? Do you think that AI is the problem and has made taking short-cuts way too easy, or do you think it magnifies a problem in the education system and its incentives at large? Is there a way education could adapt to new tools such as AI or should they fight against it?
I think it's a fair question. I think most technology acts as a kind of magnifier for our failings. Students before generative AI were apt to cheat when they had opportunity to do so, pressure to do so, and felt they could get away with it. AI has, I think, exacerbated all three of those: it's easy to do, often undetectable, and if "everyone is doing it," I might feel foolish if I don't participate. I have friends and colleagues who are confident we can find ways to adapt and adopt AI productively, but if it's a choice between "adapt into the classroom" or "fight against it," I am on the "fight against it" side.
I have a hunch that the insane amount of screen time students get in K-12 classrooms (and its well-documented educational downsides) has created college students who lack the skills to write good papers. I don't think these students are without hope, but I think we have to address AI and screen time from K through BA/BS if we are going to regain a culture of people who know HOW to learn. Thoughts?
I think this has to play a role. I don't think kids are reading or learning well at any stage, and certainly screens are a piece of the puzzle. I am very much on board with the push Jonathan Haidt and others are making toward phone-free schools (incredible to me that such things even need to be a policy at younger ages, but here we are). With that relatively simple switch, I think much would be improved.
Yes, much improved, but still a long way to go. I've been in the K-12 screen reduction space since 2017, and though the data keeps showing how bad screens are for students, we keep pushing more of them into the classroom. I wrote today about one positive stride: Sweden is actually reversing course and going back to paper textbooks!
I think when we get AI to do things like write or do science, we forget that these are human activities. We do them for the joy of knowing and of using our rational faculties. Getting an AI to write for you is almost like getting an AI to talk to a friend for you or go on a hike for you. We have a problem in this society of seeing everything as a means and not an end.
AI users are incredibly short-sighted and usually have little to no insight into neurology, psychology, or even IT, for that matter.
AI can produce mediocre content instantly, en masse (we can talk about model collapse and how its spew is worsening, but that's another story).
It will never produce greatness. Greatness is not incremental; it is, however, a quantum leap past a tipping point that is reached by incremental steps.
You produce bad work; you learn by imitating the better work of others. You get a little better, a little faster. Repeat A LOT. Then something happens. All of the learning and all the practice and all the knowledge create a confident, zen-like flow state. You're playing the notes without thinking about it, writing the lyrics, making debate points...
Then the eureka moment hits where all the subconsciously held knowledge feeds into an imaginative leap and extraordinary work happens.
This is what AI will never achieve. It can draw in huge quantities of categorised data (paintings, rock music, poetry), take fragmented datapoints from those collections, and spew out a chimera approximation.
However, it can't tell good work from bad, not even among its own erroneous approximations (hence the ever more frequent bizarre mistakes). More importantly, it can't transcend those patterns to evolve the art form into its next stage.
That requires imagination, which cannot be programmed, coded, or synthesised.
Machines can do everything humans can do. The difference will always be that a machine cannot live the life of a human.
The point of writing literary essays is to gain the experience of writing them, not to produce the best essay.
"You might as well say that Shakespeare’s existence ought to keep me from writing poetry, or the existence of virtuosos keep me from learning an instrument."
More than that, you might as well say *Shakespeare himself* shouldn't have taken up poetry!
No one is born knowing how to write a masterpiece. When the boy Shakespeare began his education, it's quite likely his early writings were inferior to the work of adult playwrights of his day -- people of less innate talent but much more experience. Should his teachers have told the future Bard not to bother?
Yes, just what problem is AI trying to solve? Big Tech isn't investing billions just so we can write a better essay.
The machine of education has never been perfect, as nothing is, but recently it's been scrutinized and prodded more than ever. We have the most college graduates ever, and many of them are over-educated for the jobs they perform. Naturally, this has called into question the role of education in the first place.
Some take a utilitarian stance, that education is supposed to improve your earning potential; others a moral one, with the goal of expanding a young person's mind. I'd say both are valid, but the former is at greatest risk with the proliferation of AI.
I agree wholeheartedly that we force students to go through the motions (producing inadequate work) simply to develop a foundation and philosophy of learning that allows them to perform in novel, unseen settings (the difference between cramming and learning).
Then we have to contend with AI becoming a useful tool: how do we teach people to work with AI? If I can leverage a personal anecdote: I'm an undergrad computer science student, and I've noticed a clear delineation between "iPad kids" and "computer kids" in their understanding of technologies. iPad kids were no doubt more immersed in computers, but those computers were streamlined for personal use, abstracting away the details of their implementation, so when it comes time to build the next iPad, you're lost. In comparison, computer kids have a stronger understanding of how computers work, because they watched them in their raw, pathetic form and could poke and prod at the organs of the machine to learn how it worked.
I don't think being immersed in AI will make you much better suited to working with AI, because we're designing AI to be simpler and better—the actual effort to "work with AI" shrinks every day, to the point that learning how to work with AI should be relatively seamless.
I think this is a paradigm-shifting moment for higher education, with massive potential. On the one hand, there is a massive amount of cheating. But that cheating targets the old manner of higher education—perhaps we can design new curricula to leverage the benefits of AI?
For instance, personalized assessments. What if, instead of a multiple-choice exam, we used AI chatbots to engage in a personalized discussion with the student, creating a more granular and topical assessment of their learning? What if, instead of essays, we had students perform oral debates (harder to cheat in) and used AI to assess them? Obviously these proposals aren't specific to AI technology, but without it they're economically infeasible. In a class of 1000 people, are you going to assign one TA to each student for assessment? With AI you could.
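To make the chatbot-examiner idea concrete, here is a minimal sketch, assuming the OpenAI Python client purely for illustration (any LLM provider would do); the prompt wording, model name, and question count are my assumptions, not a tested assessment design:

```python
# Hypothetical sketch of an AI oral examiner; not a production design.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def assess_student(topic: str, num_questions: int = 3) -> list[dict]:
    """Run a short oral-exam-style dialogue and return the transcript."""
    transcript = [{
        "role": "system",
        "content": (
            f"You are an oral examiner on {topic}. Ask one probing "
            "question at a time, follow up on weak answers, and never "
            "reveal the answers yourself."
        ),
    }]
    for _ in range(num_questions):
        reply = client.chat.completions.create(model="gpt-4o", messages=transcript)
        question = reply.choices[0].message.content
        transcript.append({"role": "assistant", "content": question})
        answer = input(f"\n{question}\n> ")  # the student responds live
        transcript.append({"role": "user", "content": answer})
    # A second pass (or a human grader) could score the full transcript here.
    return transcript
```

The transcript, not a single score, is the artifact: a grader (human or model) can see exactly where the student's understanding held up or gave out.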
AI isn't necessarily a wake-up call for Higher Education (I'm not as anti-college as most people), but it is a shock, and I believe we're entering an awkward transition stage before we strip tradition and embrace the daunting, speculative future.
One other thing—we can require more process work: a ledger of the edits you made to a document, say, or a record of the process you went through to create your work. That could preserve traditional assignments while adding a new anti-cheating mechanism.
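As a minimal sketch of what such a ledger might look like, assuming a plain-text draft saved repeatedly; the file name and JSONL format are illustrative choices, not a standard:

```python
# Append a timestamped diff of each saved draft to a ledger file.
import difflib
import json
import time

LEDGER_PATH = "essay_ledger.jsonl"  # hypothetical location

def record_draft(previous: str, current: str) -> None:
    """Record how the latest save differs from the one before it."""
    diff = list(difflib.unified_diff(
        previous.splitlines(), current.splitlines(), lineterm=""
    ))
    entry = {"timestamp": time.time(), "lines_changed": len(diff), "diff": diff}
    with open(LEDGER_PATH, "a", encoding="utf-8") as ledger:
        ledger.write(json.dumps(entry) + "\n")
```

A paper that materializes in one enormous paste minutes before the deadline would look very different in such a ledger than one built up through dozens of small revisions.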
I would suggest reading Bryan Caplan's "The Case Against Education".
Anyhow:
A lot of people simply don't get value from, or retain learning from, humanities education or the numerous essays it assigns.
I do like thinking deeply about things, though. But I am weird and unusual. Most humans don't want to reflect like that. And I don't think forcing them to in class will actually make them want to do that, or teach them properly.
They will just mask.
So I guess I'm fine with AI being used to cheat here, because the end result is sort of the same: they don't learn the things. But time is saved, to some degree.
I am familiar with Caplan, and I think he makes a provocative case, but as with many economists, I find I think quite differently than him on a variety of issues!
Students may not "value" certain things. Indeed, many of them may simply not learn anything from their courses. But frankly, student interest is not the driver or determiner of a proper education. Students do not know what they do not know, and do not yet know how to learn. If we adopted this "student interest" driver for all education, students would avoid all difficult coursework whether humanities or STEM. I don't know why we should accept that students who don't want to take advanced math must nonetheless do so, but then throw up our hands and say "hey, the students don't like their core humanities classes, so what's the point anyway!"
I do think, too, that forcing them to do in-class, handwritten work that cannot be gamed by AI will actually lead them to learn more than they would otherwise. Obviously some students will just be unreachable regardless of subject and format, but actually making them do work will mean they learn more than if they are not made to do work.
I do appreciate the pushback, and I think there are fair points here educators have to grapple with. Thanks for reading!
Thank you for speaking up for this; it's lovely to see.
I always felt it conceded too much (though I did it anyway): setting work and tasks in the hope of getting them to read the text and do some reflective (or any) work. Especially with classic texts - Plato in particular - unless they approached them in the right spirit, they'd likely miss the point entirely. 'What do I need to know for the exam?' is not the right spirit... but it's almost impossible to avoid that when there's a grade at stake. If they really got what it was they were supposed to be doing, they'd understand how useless AI would be to their task.
It's not my business anymore but I think it's a shame the way things have gone. It was bad enough when they were reading and recycling from Wikipedia. Now something does that for the world and we call it progress.
The frustration is we all know the answer (well, for me anyway): conversation with questioning. Only we don't have time to do it properly, it doesn't lend itself to public display, and it's not easily or objectively graded. And grades are more important than philosophy to the modern university. But Plato said more or less the same thing a very long time ago.
Plus ça change... There will always be a difference between real understanding and the mere appearance of it, and one of these will always be more valuable than the other. If students understood this, then we'd be getting somewhere. Then they'd police themselves and sneer at those that didn't.
The problem isn't with the assignments; it's with the grading. And it's been a problem since long before ChatGPT. If the AI can write better or analyze better, that doesn't mean the human shouldn't learn those skills; but when assignments are busywork a student does for a grade, we've lost the plot of education. If, instead, we regularly tested their understanding through different mechanisms, then the opportunity to get feedback on homework, or simply not to turn it in, would likely make it more appealing to folks trying to learn the analysis concepts and less frustrating to those who prefer to focus their efforts in other areas.
You can’t force a kid to learn from an assignment, you have to trick them into learning from it ;)
I strongly disagree. My personal experience from elementary school to PhD, and my experience as a teacher, is that there is nothing that motivates true learning as much as grades. They are an incredibly powerful, benign, and ingenious technology. And I say this as someone who's always been super curious, who read encyclopedia articles, political philosophy, and classical literature for fun, etc. Nevertheless, my best and most serious study has always been grade-motivated, or grade-equivalent-motivated (getting published in the best journal, getting awards, jobs, etc.). Many people like to complain about grades, and possibly they are counterproductive for some, but I suspect in many cases people are just not being very honest with themselves.
Well, you are describing a person who enjoys learning and treats grades as rewards. You weren't forced to learn from those assignments; you chose to learn to get the rewards. You also could have cheated, or tried to cheat, but you chose not to. The easier it is to cheat and the harder it is to get caught, the fewer people will spend effort on anything they don't already want to learn.
But I agree with you that grades and other reward systems are a great way to encourage someone to go that extra mile in a direction they wanted to go but there was just a little too much friction to do it independently.
The main problem with this seems to be that today college is sold primarily as a vocational experience. It is supposed to be the hoops you jump through to get a good job, not self-improvement for its own sake. I can kind of understand why, with the price tag being so high these days, but it tends to make students more concerned with hoop-jumping than learning.
A related (although lesser) casualty of AI in the classroom and university is the benefit of being a peer tutor and helping your friends with their work. When a person who is good at writing or math or whatever subject helps others (even if that help is cheating, let's say), I think there's real educational (and social) benefit to that process. If nothing else, it forces them to refine the material for themselves to a degree they likely wouldn't need to otherwise. They are spending their time helping someone else and learning how to communicate effectively. But if no one goes to a peer tutor or friend for help in college anymore, because AI is more convenient or better, that is lost. It's not as important as the main student's learning, but it's another cost of all this, and something I hope can manage to survive the coming changes.
Thanks for this. I have thought similarly. https://fussyjim.blogspot.com/2023/10/chekhov-and-ai.html