Discussion about this post

James Cantonwine

While I think your piece is excellent, I have a small nit to pick with this sentence: "They cannot produce culturally responsive or locally relevant materials because they are trained on the 'global average.'"

Provided with context, LLMs can already produce locally relevant content. If context were something we explicitly asked teachers to provide to an AI, or something we supplied at scale across a school, that could address this particular concern. Done carefully, AI can even help surface and mitigate teacher bias, or suggest ways to make materials more inclusive.
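To make that concrete, here is a minimal sketch of what "providing context" can look like in practice, assuming the openai Python client; the model name and the school profile are illustrative placeholders, not a recommendation:

```python
# Minimal sketch: inject a locally authored school profile into the prompt.
# Assumes the openai Python client (>= 1.0) and OPENAI_API_KEY in the env.
from openai import OpenAI

client = OpenAI()

# Hypothetical context, written and maintained by school staff, not the model.
school_context = (
    "School: rural K-8 in the Pacific Northwest. "
    "Many families work in forestry and salmon fisheries. "
    "Local landmarks: the Skagit River, the Cascade foothills."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any capable chat model
    messages=[
        {"role": "system",
         "content": "Ground every example in this community context:\n"
                    + school_context},
        {"role": "user",
         "content": "Draft three word problems on ratios for 6th grade."},
    ],
)
print(response.choices[0].message.content)
```

The locally relevant part comes from whoever writes the context, which is exactly why it would need to be asked for explicitly rather than assumed.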

Overall, that's not much comfort, and I don't think it weakens your argument in any meaningful way.

Norma Sancho

I asked GPT to respond to your article, and here is its response:

-----------------------------

Let me get straight to the point: the article isn't wrong, but it falls into a few predictable traps that the educational-academic world has been repeating for decades, sometimes with good reason, sometimes out of purist excess.

Let’s break it down.

🔥 1. He’s right about the core issue: “ChatGPT for Teachers” aims at the superficial

Yes:

Activity templates.

“Learning styles.”

The fantasy that an AI can “build a lesson in seconds.”

That’s cheap marketing. He’s right about that, and so are you.

Serious education isn’t a set of prefab activities, it’s a craft where the sequence and structure of knowledge drive the pedagogy.

So far so good. But…

🔥 2. The article suffers from academic purism

The author, like many pedagogical commentators, writes with righteous indignation while ignoring a basic reality:

Teachers don’t have time.

Period.

They don’t live in an epistemological laboratory.

They live in messy classrooms, with absurd curricula, annoying parents, and supervisors demanding “evidence” for everything.

So yes, his critique is philosophically sound, but practically naive.

AI doesn’t need to replace pedagogical judgment;

it just needs to remove tedious, mechanical work.

If ChatGPT helped with:

drafting materials,

building rubrics,

generating practice sets,

summarizing documents,

creating variations of exercises,

then it’s already worthwhile, even if it doesn’t touch the “disciplinary texture” he’s obsessed with.

He’s criticizing a tool for not being what he wishes it were — not because it’s useless to everyone else.
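And none of this is hypothetical tooling: any item on that list is a short script against a chat API today. A hedged sketch of the "creating variations of exercises" case, assuming the openai Python client (the model name and prompts are placeholders, not OpenAI's product):

```python
# Sketch: ask a chat model for surface-level variants of one exercise,
# keeping structure and difficulty fixed. Assumes the openai client (>= 1.0).
from openai import OpenAI

client = OpenAI()

def vary_exercise(exercise: str, n: int = 3) -> str:
    """Return n variants of an exercise with new surface details only."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Rewrite the given exercise with different surface "
                        "details but identical structure and difficulty."},
            {"role": "user", "content": f"Produce {n} variants of:\n{exercise}"},
        ],
    )
    return response.choices[0].message.content

print(vary_exercise("A train travels 120 km in 1.5 hours. Find its average speed."))
```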

🔥 3. His attack on “critical thinking skills” is correct… but incomplete

True: there’s no such thing as generic critical thinking, separate from domain knowledge.

Yes, that’s been known for decades.

But here’s what he ignores:

AI can act as scaffolding for novices who don’t yet have domain expertise.

For example:

A student doesn’t understand mitosis → ChatGPT can give a beginner-friendly explanation.

That gives them just enough baseline knowledge to start comparing errors.

You can then show two AI-generated answers (one flawed, one correct) and ask them to contrast.

That does develop early evaluative abilities — not full expertise, but foundational cognitive discrimination.
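Building that contrast is mundane, too. A sketch, again assuming the openai Python client, with the topic, model name, and prompts as illustrative stand-ins:

```python
# Sketch of the contrasting-cases scaffold: one accurate beginner answer,
# one with planted errors, for students to compare and justify.
from openai import OpenAI

client = OpenAI()

def ask(instruction: str, topic: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder
        messages=[{"role": "user",
                   "content": f"{instruction}\nTopic: {topic}"}],
    )
    return response.choices[0].message.content

topic = "mitosis"  # the example above
correct = ask("Explain this for a beginner, accurately.", topic)
flawed = ask("Explain this for a beginner, but plant two subtle factual "
             "errors and do not mark them.", topic)

# Only the teacher knows which is which; students must argue for one.
print("Answer A:\n", correct, "\n\nAnswer B:\n", flawed)
```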

He collapses everything into:

“You can’t critique anything unless you’re an expert.”

Sure — but learning is incremental. AI can accelerate the early stages.

🔥 4. On learning styles — he’s absolutely right

That theory should be burned, buried, and salted over.

The fact that OpenAI included it in their marketing is embarrassing.

But it’s not shocking.

Teachers believe in learning styles.

Education departments promote them.

OpenAI is following the market, not the science.

Wrong? Yes.

Surprising? No.

🔥 5. Where he’s off: the idea that “LLMs equal homogenization”

This is a fashionable academic fear:

“Models just produce global averages.”

Then explain to me why you and I have created texts that mirror your tone, your personality, your sarcasm, your themes, your anthropological framework — without being flattened into some global average.

An LLM stays generic only when the user is generic.

If a teacher already lacks discipline-specific knowledge, of course the model outputs something average —

but that teacher was already teaching “average” content before the AI.

His argument overreaches because it is driven by anxiety about losing disciplinary elitism.

🔥 6. His real frustration isn’t AI — it’s the state of real-world pedagogy

This is the classic lament of academic educators:

“We’ve had cognitive science for decades. Why does no one apply it?”

Harsh truth:

Because the system doesn’t reward rigor. It rewards compliance and survival.

AI could help, but OpenAI didn’t release a serious instructional-design engine —

they released a polished commercial product.

He wants:

spacing algorithms,

retention diagnostics,

concept maps,

cognitive-gap detection,

long-term distribution of practice,

and yes, that would be revolutionary.

But no one (OpenAI, Microsoft, Google included) has built that yet.
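To be fair about scale, though, none of those items is exotic computer science. "Spacing algorithms", for instance, means something like SM-2, the scheduler family behind Anki-style flashcard apps, which fits in a few lines. A sketch of the general technique (not anything any of those companies ships):

```python
# Simplified SM-2 spaced-repetition scheduler: given a self-graded recall
# quality (0-5), compute when to review the item next.

def sm2_next(interval_days: float, easiness: float, repetition: int,
             quality: int) -> tuple[float, float, int]:
    """Return (next interval in days, new easiness factor, repetition count)."""
    if quality < 3:
        return 1.0, easiness, 0  # failed recall: relearn tomorrow
    easiness = max(1.3, easiness + 0.1
                   - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    repetition += 1
    if repetition == 1:
        interval_days = 1.0
    elif repetition == 2:
        interval_days = 6.0
    else:
        interval_days *= easiness  # intervals grow geometrically
    return interval_days, easiness, repetition

# A card recalled well three times: reviews land ~1, 6, then 16 days out.
state = (0.0, 2.5, 0)
for q in (5, 4, 5):
    state = sm2_next(*state, quality=q)
    print(f"review again in {state[0]:.0f} day(s)")
```

The hard part was never the algorithm; it's the diagnostics, data, and product plumbing around it.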

🔥 7. Bottom line? He’s right… but he’s also overreacting

ChatGPT for Teachers is a shallow, market-driven product.

It’s not cognitive science, not serious pedagogy, and not instructional design.

But:

it’s not “dangerous,”

it doesn’t destroy disciplines,

it doesn’t make teachers stupid,

and it doesn’t prevent deep pedagogical work by people who know what they’re doing.

He assumes teachers will blindly obey whatever the AI suggests.

That’s not professional respect — that’s academic paternalism.
