Well, yes, as a teacher I find this somewhat unsettling. I’m starting to feel somewhat obsolete.
I’m not sure the progression will be unstoppable. As far as I know, there are intrinsic limitations to what an AI can do, as generative AIs are not general intelligences, yet. Even so, what these tutors can do right now is remarkable, to say the least.
Admitting my undeniable bias in this whole matter, I’d try to suggest a few thoughts (which in some cases echo what I’ve read here).
The most striking thing is that an AI tutor has the advantage of being ubiquitous, while a teacher can only work, at best, with one person at a time. In this regard, AI tutors have already won and settled the question.
As for the Harvard study, it must be noted that AI tutors were used with college students, that is, with students who are already, by and large, good independent learners. My impression is that this kind of tutor might work particularly well with people who are able to ask purposeful questions, weigh the answers and put them to good use. Most importantly, these are solidly and deeply motivated learners. In other words, my tentative hypothesis is that AI tutors work particularly well with autodidacts.
I’m not sure the same applies to less experienced learners, although some of the studies you quoted seem to provide an at least partially positive answer.
That said, considering that, ironically, after some twenty years in the teaching profession I’m starting to feel particularly effective thanks to the mastery I’ve eventually acquired in DI (which I embraced three years ago), I am not entirely assuaged by my own reassurances.
Interesting thoughts. Former teacher here myself. Ultimately, technologies like AI tutors and large language models matter less than the intent of the people using them. My prediction is that AI may become the “alpha” of instruction, but teachers remain the core facilitators of connection.
As for me, this AI bubble can't pop soon enough.
Yes, early, crude AI tutors, like early, crude steam engines, have very limited application. But the problems they have are pretty standard engineering problems, intrinsically solvable, in both cases.
We must look at the direction and rate of travel rather than where we are now.
Orwell said something about wanting to make political writing an art. I think you have with education, Carl.
Oh my goodness Harry, that might be the best comment I've ever had. Thanks so much.
Totally agree! I'm going to reread for both the enlightenment and the elegance.
Bravo Carl for this piece! Excellent and really helpful to think about
This is a great piece, Carl.
Thank you: so much to reflect on here.
Thanks Carl. This is a terrific, thought-provoking essay. It’s one of the most compelling reflections I’ve seen on the “algorithmic turn” in teaching. You did a nice job of capturing both the excitement and the unease of the moment.
One small point I’d add is that the term "algorithmic" can be a bit misleading when applied to modern generative systems. These models are not algorithmic in the traditional, step-by-step, rule-based sense. They behave more like complex pattern recognizers. They're statistical models that have been trained on immense amounts of linguistic data and can simulate the structure of understanding rather than explicitly compute it.
This distinction matters because it suggests that AI’s growing effectiveness is not necessarily evidence that teaching itself is reducible to fixed rules, but rather that machines are becoming extraordinarily good at approximating the kinds of pattern recognition that expert teachers already rely on. Human tutors don’t follow algorithms either. Rather they respond to patterns of misunderstanding, emotion, hesitation, and progress that they’ve internalized over years of experience.
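To make that contrast concrete, here is a minimal, illustrative sketch of "statistical pattern recognition over text": a toy bigram model. This is not how production LLMs are built (they are large neural networks), and the corpus here is invented, but the core idea is the same: predict the next token from observed patterns rather than follow explicit rules.

```python
# Illustrative only: a toy bigram model. There is no rule that says what
# comes next; the "behaviour" emerges entirely from counted patterns.
import random
from collections import Counter, defaultdict

corpus = "the student asks the tutor and the tutor answers the student".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it was observed."""
    words, weights = zip(*follows[prev].items())
    return random.choices(words, weights=weights)[0]

# Generate a short continuation: no rules, just learned statistics.
word, out = "the", ["the"]
for _ in range(5):
    word = next_word(word)
    out.append(word)
print(" ".join(out))
```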
What seems inevitable is that when generative systems incorporate additional modes of perception, such as affective and emotional cues from facial or vocal expressions, they’ll close much of the remaining gap. Once models can read a learner’s reactions and emotions, their ability to scaffold learning responsively may, on average, surpass that of most human tutors.
However, even if AI tutors eventually surpass human tutors in efficiency and precision, I doubt that any tutoring system -- either human or artificial -- can encompass everything learners need to become fully educated. Tutoring can teach concepts, procedures, and even strategies for thinking. However, much of what we learn in life comes from less structured experiences: negotiating relationships, coping with uncertainty, reading social cues, making moral judgments, and trying to discern what is important in the overwhelming noise of the world. These are not just cognitive tasks; they’re forms of situated being that arise from living among others.
A really excellent piece, Carl. Thanks for pushing this conversation forward.
Are you not falling into the 'false dichotomy' fallacy by wanting an either/or answer to the question?
One of the fallacies of the 'teach them skills, they can Google the facts' idea, which was around in the 2000s, is that you need a basic foundation of knowledge and theory before you can think creatively.
So it may be that the first stage is algorithmic and better done with AI help, but that the creative/application phase is human.
As a former classroom teacher and later 1:1 tutor, I know the importance of knowing what the student knows and setting the appropriate 'next step' task or asking the appropriate question. If I had had the patience and skills of an AI when doing the tutoring, the students would have done even better. However, the classroom demonstrations in physics and chemistry are not best done with AI, nor are the 'what do you think about the Sudan war?' type discussions in other subjects.
Studies don’t take into account the effort of getting a 13-year-old to sit at a computer and be taught of their own volition.
Exactly. I teach stats with an awesome textbook and at the beginning of each semester, I hold a copy aloft and say, “The day you decide to actually read this book will be the day I lose my job.” Needless to say, I’m still employed.
I totally believe AI can do a great job tutoring the student who wants to be tutored.
It feels like there is a similar gap or blindspot to the one often found in pure discovery learning models: this works well for students who are highly motivated (and possibly from more advantaged backgrounds). I wonder how this kind of intervention really goes in the wild?
I'm heavily using AI for home education and I think there's a lot of really good stuff in your article.
The thing I'm using AI for is structured learning that depends heavily on prior knowledge, explicit teaching and all the retrieval practice stuff you mention.
I depend on AI because I have students who pick up knowledge without being taught, and I've no way to gauge what they know in subjects that depend heavily on stepped learning/prior knowledge without getting a 'smart workbook' to give them the next step/plug any gaps. As such, I am using smart workbook tools (software that provides very detailed analytics and/or can automatically adjust the difficulty of questions until the student starts struggling) for typing, maths, grammar/language structure, and basic language learning.
In a class of 30 students where the top and bottom 20% of students are either not following or bored senseless because they know the material already, 'smart workbooks' improve efficiency dramatically. You can accelerate the top 20% and give the bottom 20% extra practice - while they're all sat in the same room.
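For concreteness, here is a minimal sketch of the "adjust difficulty until the student starts struggling" behaviour described above, assuming a simple one-up/one-down staircase rule. `ask_question` and the level scale are hypothetical stand-ins, not any particular product's API; real tools use richer learner models.

```python
# A minimal adaptive-difficulty loop: step up after a correct answer,
# step down after a miss, so difficulty hovers at the student's edge.

def run_session(ask_question, num_items=20, level=1, max_level=10):
    """Serve questions, nudging difficulty toward the point of struggle."""
    history = []
    for _ in range(num_items):
        correct = ask_question(level)          # True/False from the student
        history.append((level, correct))
        if correct:
            level = min(level + 1, max_level)  # harder after a success
        else:
            level = max(level - 1, 1)          # easier after a miss
    return history

# Hypothetical student who reliably answers up to level 4:
demo = run_session(lambda lvl: lvl <= 4)
print(demo)  # difficulty oscillates around level 4-5
```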
What the AI can't do is motivate a panicking student. It can't give strategic hints, stop stupid mistakes or aid concentration. My eldest is happy to work on apps on his own for some things, but not others. With my younger son, I have to sit next to him the whole time he's using the apps.
The other thing that AI can't do is have free-ranging conversations. It's only good for largely solitary activities. It can't go on field trips. It can't help a student build a model. No one is doing a group exercise to build the most waterproof possible paper boat with the aid of AI. The time saved on 'dumb workbooks' is the time we spend going out, taking exercise and working on projects.
None of these things, by the way, depend significantly on Generative AI. As you say, the best AI educational tools are not designed to remove work - they're designed for automating and personalising 'the mechanical bits' to give more time for the stuff that does need a human tutor!
Please read the full research reports before citing them. The CTAI MATHia report actually shows negative learning outcomes for all students in the first year of implementation. This is a poorly designed study with a nearly 30% attrition rate and a testing group that had significantly different starting scores than the control group. The report itself references several other studies that all conclude negative learning outcomes from CTAI. Only after removing the lowest performing students in the second year of the study did they get positive results, which were not statistically significant. And the highest performing students actually suffered learning loss in both years of the intervention.
You should read it yourself.
While you are correct about the high attrition rate (nearly 30%) in the high school study and that the middle school study had significantly different starting scores, your other key claims are incorrect. The report does not reference "several other studies that all conclude negative learning outcomes"; it explicitly describes prior research as having "mixed effects".
Most importantly, the positive second-year results for the high school study were statistically significant. These results were not achieved by "removing" low-performing students, but by using standard statistical adjustments to account for pre-test differences. Finally, high-performing students did not suffer learning loss "both years"; this was only noted in the first year, while in the second year, the positive effects in high schools were "relatively uniform for students of all ability levels".
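For readers unfamiliar with what "adjusting for pre-test differences" involves, here is an illustrative sketch of the basic ANCOVA idea, with entirely made-up numbers: instead of comparing raw post-test means, regress the post-test on treatment *and* pre-test, so the treatment estimate reflects gains beyond where students started. The report's actual model is more elaborate; nothing here reproduces its data.

```python
# Illustrative only, synthetic data: why raw means mislead when groups
# start unequal, and how a pre-test covariate corrects for it.
import numpy as np

rng = np.random.default_rng(0)
n = 200
treated = rng.integers(0, 2, n)                 # 0 = control, 1 = intervention
pre = rng.normal(50, 10, n) + 3 * treated       # groups start unequal
post = pre + 5 * treated + rng.normal(0, 5, n)  # true effect = 5 points

# Ordinary least squares: post ~ intercept + treated + pre
X = np.column_stack([np.ones(n), treated, pre])
beta, *_ = np.linalg.lstsq(X, post, rcond=None)

raw_gap = post[treated == 1].mean() - post[treated == 0].mean()
print(f"raw mean gap:      {raw_gap:.1f}")   # inflated by unequal starts
print(f"adjusted estimate: {beta[1]:.1f}")   # close to the true 5
```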
I’m sorry. I did not mean to come off as aggressive in my post. I am currently sick with a fever and may have misinterpreted some finding. I have a huge amount of respect for you, Carl. I bought your latest book on day 1.
I am a teacher currently using MATHia, and they are using the old definition of AI when they call it that. CTAI is a “hint” button, which appears to be pre-written, not dynamically generated.
I am biased. As are we all. My students hate MATHia with an intense passion and are constantly confused by the problems in it and the accompanying textbook. I would love for you to look into the curriculum because it breaks nearly every precept of good teaching and lesson design that you share in your work.
I constantly have to write new lessons for my students using science of learning principles just so they can be successful. My district has multiple years of data showing a 0 to negative effect on learning after adopting Carnegie.
So again I’m sorry, I wrote before taking the time to think and examine closely. I just have a lot of baggage with Carnegie Learning.
Wonderful piece! As a high school computer science teacher with a strong interest in the philosophy of mind, I wanted to share a few reflections on both the value of learning and teaching, and the limits of AI tutors.
1) I think something missing here is that many educational systems and philosophies implicitly operate on a kind of ‘algorithmic’ assumption: Master this set of skills → to do what? Gain entrance to a good college → to do what? Get a high-paying job → to do what? Make money → to do what? The ultimate 'end' (or telos, in the Aristotelian sense) of the chain often goes unexplored. I strongly agree with your conclusion: teaching ought to be about what we value, who we hope our students become, and the intellectual culture we cultivate. We might debate what those values should be or their ontological grounding, but intuitively, they are not “computable” in any meaningful sense. Perhaps our educational systems should (re)focus on clarifying these ends. This may also help explain why ‘classical’ education schools have grown in popularity in recent years.
2) Recently, I’ve been ‘nerd-sniped’ by computer scientist François Chollet and his quest to develop true AGI through his ARC-AGI project. In On the Measure of Intelligence, he argues that general intelligence fundamentally requires adapting to novel situations. He explains in a recent podcast (https://www.youtube.com/watch?v=rl7B-LHiaNo) why current LLMs face deep challenges. The interview is worth a listen for anyone thinking about AI and (machine) learning, particularly if we’re considering learning as purely algorithmic.
3) I’ve also become convinced by classic arguments for the immateriality of thought and understanding, including John Searle’s Chinese Room thought experiment and Ed Feser’s Argument on the Immateriality of Thought (both easily googleable, or perhaps an LLM could help one understand them!). While Penrose’s microtubule hypothesis is compelling, these purely philosophical arguments highlight a subtle tension in the essay: the distinction between what is scientifically tractable and what falls under natural law(s). The essay seems to assume a form of physicalism without stating it explicitly. From an Aristotelian perspective, true understanding is not reducible to a material process, though that does not imply it violates natural law. It may not be fully reducible to a mechanical or algorithmic process, which actually coheres with point 1); in Aristotle’s framework, more emphasis might be placed on the final cause, on the teleology of education.
All in all, a thought-provoking piece! I’m grateful for the opportunity to reflect on these issues, and I’d love to hear your thoughts on how the essay intersects with these deeper questions of intelligence, understanding, and human flourishing.
Re "... [T]hose values ... are not “computable” in any meaningful sense. Perhaps our educational systems should (re)focus on clarifying these ends."
This is what the AI Safety Community calls the Orthogonality Problem, that ends are not subject to change by rational means. The Alignment Problem follows: given that an AI has its own ends, it will pursue them, and humans may be inconvenient, like an ant-nest where you want to build a garden shed.
We need a crash research programme on the scale of the Apollo Program or the Manhattan Project on these problems.
Assuming those problems are held in abeyance for now: on the teleological chain, AI seems likely to destroy that whole "life script" of education -> job -> career -> security -> couple formation -> reproduction or other source of meaning.
I see more and more papers from economists (serious NBER economists like Pascual Restrepo) saying that over time wages fall to the opportunity cost of using the available AI resource for something else, and the human labour input into the economy eventually falls to zero.
I think there is an urgent need for education to prepare people, at least as a contingency plan, for a different life script, perhaps couple formation -> reproduction -> social contribution.
A huge part of the problem with these efforts is a staunch resistance to defining terms. What is learning? What does 'outperform' mean? I suspect that narrow-scope rote learning is more amenable to algorithmic approaches than creative endeavors, but even those are trainable.
My guess is that there is little double-blind evidence, leading to bias issues. With all the hype, it's easy to imagine learners are more engaged with the bubble tech than old-fashioned classrooms.
As with much social science, replication and scaling will be the real test of the evidence.
I no longer ask what happens to teachers when AI tutors surpass them in educational outcomes, but what happens to learners and to families. If teaching and learning are computable and AI teachers are far more resilient and able to teach humans than we are ourselves, what about AI learners? You mention in this essay that skill acquisition in humans is slow and hard-earned, and thus unlikely to be competitive with AI skill acquisition over time. How can we structure society to encourage a commitment to human flourishing as humans become less economically competitive?
There may be limits to AI systems that we haven't found yet, but if there are not, we need to be designing solutions now to build AI systems and protocols that help people resist pure attention-capture, that build in appropriate challenge, and that support learning. I hope that policymakers, educators, technologists and parents are equal to the task.
'How can we structure society to encourage a commitment to human flourishing as humans become less economically competitive?' I echo this question and the need for multiple sectors of society to work together.
Any vision of education with AI is intertwined with the environmental, social, political, and economic future of society as a whole.
I agree with you that this issue is something that is going to affect all of society. With regard to our role as educators I specifically would like to see school and district-wide use policies for different ages, curricula that are adapted to employ AI rather than be trivially bypassed by it. There is also room for purpose-built AI for education that tracks student progress and provides them with opportunities for retrieval practice and even conceptual discussion. I think that ad hoc solutions are dominating right now and I hope that we can create something more cohesive and consciously-planned.
Re: "Learning either obeys the laws of nature or it doesn't."
Or a third more likely option: Learning obeys laws of nature that we don't yet fully understand, let alone are able to simulate in a computer.
The freaking hubris in this piece is astonishing.
It deserves its place in a book such as More Everything Forever: AI Overlords, Space Empires, and Silicon Valley's Crusade to Control the Fate of Humanity, by Adam Becker.
Also, there's this recent article: AI-Generated Workslop Is Destroying Productivity, by Kate Niederhoffer, Gabriella Rosen Kellerman, Angela Lee, Alex Liebscher, Kristina Rapuano and Jeffrey T. Hancock, in the Harvard Business Review:
https://hbr.org/2025/09/ai-generated-workslop-is-destroying-productivity
Somewhat terrifying, but my main fear is that something important might be lost (as hinted at the end) and we won't know what it is until it's gone.
"If learning obeys physical laws, (and I think the evidence overwhelmingly suggests it largely does), then it is amenable to description, modelling, and ultimately, design."
I'm not entirely sure what you mean by physical laws here, but the great advances in science over the past 100 years or so have largely recognized the indeterminacy of the universe. Heisenberg uncertainty principle and all that. (A similar probabilistic indeterminacy also underlies AI in the form of large-language models, for that matter.)
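As a small illustration of that parenthetical point, here is a sketch of how probabilistic indeterminacy enters LLM output: the model produces scores, turns them into a probability distribution, and samples. The tokens and scores below are invented, not from any real model.

```python
# Illustrative only: an LLM does not deterministically pick "the" answer;
# it converts scores into probabilities and samples from them, so the
# same prompt can yield different continuations run to run.
import math
import random

def softmax(scores, temperature=1.0):
    """Convert raw scores into sampling probabilities."""
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

tokens = ["cat", "dog", "theorem"]
scores = [2.0, 1.5, 0.2]            # hypothetical model outputs

probs = softmax(scores)
print(random.choices(tokens, weights=probs)[0])  # varies run to run
```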
But even if you reject that claim and believe learning follows some deterministic path in the brain, I'm curious what evidence suggests we have any ability to capture the relevant data and model it and then design instruction to account for what we learn? Are you sure we'd want to live in such a world, even if we could?
I like this point and completely agree: language is a tool for *communicating* our thoughts, but is largely separate from the process of thinking. Given that large-language models are, as the name suggests, grounded entirely in language…
Ah crap Ben, I think I deleted our last two comments. I’m in the car on my phone waiting for my kids at a dance class. Do you have them cached?
It appears our wisdom is lost to the ethereal sands of the Internet, I'm afraid.
I'm such an idiot :(
Not!
I don’t think the teachers can pass a Turing test now… unless we consider moronic bureaucrats human? I don’t.