The "BLOOM-AI" Framework Represents Everything We're Getting Wrong About AI and learning
The folly of making machines think like humans rather than letting them do what humans cannot.
I've just seen a post on LinkedIn introducing something called the “BLOOM-AI Framework”. I’m sure the author is well-intentioned and genuine in their desire to help educators, but the framework contains some serious misconceptions about learning and instruction that need to be addressed.
The Pseudoscience of the Pedagogical Pyramid
Firstly, Bloom’s taxonomy was never meant to be a rigid hierarchy. Its creator, Benjamin Bloom, didn’t actually use the pyramid himself, and it was probably made up by someone lethally mutating his work. As Donald Clark notes, it was “also used to reinforce the prejudices among the academically minded, those who see knowledge, the affective and psychomotor and vocational learning as playing a diminished role in funded learning.”
The pyramid promotes the misconception that “lower-order” components like facts and knowledge are somehow less valuable or that they must be “gotten through” before the real learning happens. In reality, factual knowledge is the substrate from which all higher-order thinking grows. I’ll just say it again: you can’t connect the dots if you don’t have any dots.
The framework commits what we might call a "taxonomic fallacy", i.e. treating Bloom's taxonomy as if it were a rigid hierarchy of learning activities rather than a very basic heuristic for understanding cognitive complexity.
More problematically, it conflates the form of an activity with its cognitive demand. Having students use AI to generate multiple-choice questions (mapped to "Create") doesn't necessarily engage higher-order thinking. It may simply automate lower-order recall dressed in creative clothing. The cognitive work lies not in the easily produced resource but in the desirable difficulty of retrieval, evaluation, adaptation, and contextual application of what's generated. Pedro De Bruyckere has written well on why this pyramid became so prevalent.
The Scaffolding Mirage
The framework then presents AI as “educational scaffolding”, but scaffolding, at least in Jerome Bruner, David Wood, and Gail Ross's original conception from the 1970s, is temporary support, offered within a learner's current stage of development, that gradually fades as competence emerges. AI tools, however, can foster permanent dependency rather than progressive autonomy. Students may become more sophisticated at prompting and curating AI outputs without developing the underlying disciplinary thinking the AI is ostensibly supporting.
The claim that AI “delivers foundational knowledge” while teachers handle critical thinking assumes that AI understands what it is delivering. It doesn’t. AI can generate, match patterns, and predict, but it does not know. Worse, outsourcing basic instruction to AI risks cognitive overload, misinformation, and superficial learning.
This misunderstanding also obscures the central role of knowledge in enabling higher-order thinking. As Daniel Willingham has argued, critical thinking isn’t a generic skill; it depends on deep, domain-specific knowledge stored in long-term memory. If students rely on AI to generate ideas, structure arguments, or retrieve facts, they risk cosplaying domain knowledge while bypassing the effortful processes that lead to genuine learning. The result is a kind of intellectual outsourcing that short-circuits schema construction and leaves students with shallow fluency but fragile understanding. The point is not producing the thing; it’s the effort of producing the thing.
Learning Styles: once more into the breach
But the inclusion of VARK learning styles is perhaps the most lamentable feature of the framework. Learning styles theory (specifically the meshing hypothesis) has been debunked repeatedly in the literature. Teaching to a student’s supposed “style” (e.g., visual, auditory, kinaesthetic) does not improve learning outcomes. Anyway, I don’t want to go on about this again because I’m sick and tired of talking about it, but if you want to know more, read this or this. For a more authoritative debunking, please read this gem by Paul Kirschner.
The true potential of AI for learning and instruction
I think there is real reason to be positive about what AI can bring to the science of learning, particularly retrieval practice, spacing, and interleaving, but also in terms of reducing teacher workload and supporting assessment. This kind of thing, however, sets us right back 30 years, to when pedagogical snake oil was rife. I’ve met several people in the sector who appear to genuinely understand the science of learning and are sincere in their efforts to build effective learning environments.

But as ever, the devil is in the detail, and I say this purely from a cognitive science/instructional design view: AI's genuine educational potential lies not in mimicking human cognition (and in this case a faulty model of it) but in amplifying distinctly human learning processes through what we might call "cognitive complementarity". AI excels at pattern recognition across vast datasets, rapid iteration of examples and counterexamples, and providing immediate feedback on well-defined tasks: capabilities that can enhance human learning when strategically deployed to reduce extraneous cognitive load while preserving essential desirable difficulties.
For instance, AI can generate multiple varied practice problems which could, in theory, strengthen schema formation, provide instant feedback on procedural skills to accelerate deliberate practice, or surface connections across large corpora of information that might otherwise remain invisible to instructors.
For me, the holy grail has always been understanding how the holy trilogy of retrieval practice, interleaving, and spacing interact dynamically, not just that they work independently (as most of the lab studies show), but how their timing, sequencing, and intensity must be orchestrated to optimise long-term retention and transfer in authentic learning environments. Humans have a bad track record at systemising this if we are being brutally honest. I used to think that retrieval practice would be the simplest thing to get right in a classroom. How wrong I was.
AI's capacity for continuous, granular data collection and real-time adaptation could *possibly* allow us to map the precise conditions under which, say, spacing intervals should expand or contract based on retrieval success rates, or how interleaving different problem types should be calibrated to an individual's developing expertise within a domain. We might discover that optimal spacing isn't just about temporal intervals but about the cognitive distance between concepts, or that interleaving works best when it follows certain patterns of conceptual similarity and difference that vary by learner and domain.
This could transform our understanding from broad, general heuristics such as “space your practice” and “mix up your problems” to precise, adaptive algorithms that respond to the moment-by-moment fluctuations in a learner's cognitive state. It's the move from the equivalent of weather folklore to meteorological modelling: finally having the computational power to make sense of learning's complex, dynamic systems. The key insight for me is that AI should reduce cognitive noise, not cognitive effort, supporting the conditions under which deep learning flourishes.
In the end, the problem isn’t AI. It’s how we frame it. When we wrap faulty theories in sleek design and powerful tech, we risk scaling confusion rather than clarity. However benevolent its origins, the BLOOM-AI Framework epitomises our field's pathological susceptibility to theoretical quackery. If we want AI to genuinely serve learning, we must start with what we know about how people learn, not with taxonomic pyramids, discredited myths, or wishful metaphors.