Does "AI Literacy" Actually Mean Anything?
Another day, another literacy.
“You keep using that word. I do not think it means what you think it means.”
- Inigo Montoya
There is something deeply unsettling about the ease with which policy-makers and the corporate class append “literacy” to virtually any developing trend, even as many education systems around the world are producing alarmingly low rates of actual literacy. We now have media literacy, digital literacy, data literacy, financial literacy, emotional literacy, and most recently, AI literacy. Each addition promises to equip students with essential skills for navigating modern life; each assumes that what students lack is merely another form of vague competence that can be taught, assessed, and certified. And each, in its promiscuous use of the vital term “literacy”, reveals how far education has drifted from conceptual clarity. We’ve reached peak literacy, just not the kind that involves any reading or writing.
The latest literacy, “AI literacy”, has achieved something remarkable in the last two years: near-universal acceptance despite almost no agreement about what it actually means. Search educational policy documents from the past three years and you’ll find it everywhere: in national curriculum frameworks, in EdTech company marketing materials, in university strategic plans. The OECD speaks of AI literacy as essential for the workforce of tomorrow. The International Society for Technology in Education has developed AI literacy standards. Countless schools now advertise their commitment to developing “AI-literate” students. Everyone, it seems, agrees that AI literacy matters. Almost no one agrees on what it actually is beyond some nebulous concatenation of corporate fumes.
This vagueness has not hindered the term’s ascent. If anything, it may have accelerated it. A concept defined loosely enough can accommodate any agenda, appeal to any constituency, justify any curriculum innovation. AI literacy can mean understanding how large language models work, or it can mean knowing when to fact-check their outputs. It can mean coding skills, or critical thinking about algorithmic bias, or simply “awareness” of AI’s growing influence. The term is capacious enough to include technical knowledge, ethical reasoning, and a generalised scepticism about Silicon Valley, often in the same breath.
The genealogy of “AI literacy” is also revealing. It emerged from the same educational discourse that gave us “media literacy” in the 1990s, “digital literacy” in the 2000s, “data literacy” in the 2010s, and the lamentable “emotional literacy” in the 2020s. Each wave of technological change has prompted educators to append “literacy” to the challenge at hand, as though naming something a literacy automatically confers worth and clarifies what should be taught and how.
The pattern repeats with remarkable consistency: new technology appears, educators declare an “X literacy” gap, curricula are hastily assembled, and a few years later the concept has either become so diluted as to be meaningless or has been quietly abandoned in favour of the next literacy crisis. We’re just one policy cycle away from “literacy literacy”: the ability to understand what all these literacies are supposed to mean.
Inventing Literacies, Losing Knowledge
Consider the OECD’s recent definition of AI literacy, which neatly exemplifies the problem. According to their framework:
AI literacy represents the technical knowledge, durable skills, and future-ready attitudes required to thrive in a world influenced by AI. It enables learners to engage, create with, manage, and design AI, while critically evaluating its benefits, risks, and ethical implications. (p. 6)¹
This all sounds authoritative and comprehensive, but what does it actually mean, and should it be taking up curriculum time in schools? It appears to describe a coherent body of knowledge that students need to acquire, but for me this kind of imprecise, corporate language is unhelpful for educators, and I want to examine closely what the definition actually comprises when we strip away the rhetorical packaging.
“Technical knowledge” of how AI works requires mathematics, statistics, computer science, algorithm design, information theory, and data structures. These are existing disciplines with established pedagogies. We already know how to teach them. They do not become something new simply because they are applied to AI systems rather than other computational problems.
“Creating with AI” depends entirely on what one is creating. Knowledge is highly domain-specific. If students are writing, they need to understand composition, rhetoric, and genre. If they are making images, they need to understand visual design, semiotics, and aesthetics. If they are programming AI systems, they need software engineering, machine learning theory, and computational complexity. The medium does not create a new “literacy”; it requires application of existing domain knowledge.
And then there is “future-ready attitudes,” perhaps the most revealing fragment in this definition. It sounds visionary, urgent, even humane, the kind of phrase that invites nodding agreement precisely because it cannot be defined. What is a “future-ready attitude,” exactly? How would one teach it? How would one assess whether a student possesses it?
For teachers, it offers no guidance whatsoever. You cannot plan a lesson around an “attitude.” You cannot assess one without slipping into pseudo-psychology. You cannot teach it without lapsing into moral theatre: asking students to posture towards a future that neither they nor their teachers can fathom or describe.
This is what happens when the language of learning is colonised by the language of corporate futurism. It evacuates specificity, replaces knowledge with affect, and dresses mood management as curriculum design. “Future-ready attitudes” is not a goal; it is a slogan, a way of sounding prepared while ensuring that nothing measurable or teachable is required.
What, then, is distinctively “AI literacy” in this definition? The answer is: nothing. That absence of any distinct substantive content is precisely why the term flourishes; rhetorical emptiness is its strength. You can project anything you want onto it. The definition simply assembles existing competencies, repackages them under a fashionable label, and implies that this constitutes a new literacy requiring new curricula. But calling something a “literacy” does not make it one.
The Critical Thinking Fallacy
The problem with AI literacy mirrors a deeper confusion that has plagued education for decades: the notion that “critical thinking” can be taught as a standalone skill, divorced from domain knowledge. This is where the conceptual muddle truly reveals itself.
Dan Willingham’s authoritative paper on critical thinking was, for me, the first encounter with the idea that critical thinking is not a transferable skill that can be taught in isolation and then applied across domains. As he puts it, “critical thinking is not a set of skills that can be deployed at any time, in any context. It is a type of thought that even 3-year-olds can engage in — and even trained scientists can fail in.” The reason is straightforward: thinking critically about something requires knowing about it. You cannot evaluate the validity of an argument about climate policy without understanding climatology, economics, and political science. You cannot think critically about algorithmic bias without understanding how algorithms work, what data they use, and how their outputs are deployed. In other words, you cannot connect the dots if you don’t have any dots.
Critical thinking, like AI literacy, is domain-specific. When we ask students to “think critically” about something without first ensuring they possess the requisite knowledge to do so, we are asking them to perform a kind of cognitive ventriloquism. Willingham’s insight cuts to the heart of the matter:
“Critical thinking (as well as scientific thinking and other domain-based thinking) is not a skill. There is not a set of critical thinking skills that can be acquired and deployed regardless of context. Second, there are metacognitive strategies that, once learned, make critical thinking more likely. Third, the ability to think critically (to actually do what the metacognitive strategies call for) depends on domain knowledge and practice.”
Yet education continues to act as though critical thinking were a muscle that could be exercised through practice, as though asking students to evaluate, analyse, or critique in the absence of knowledge somehow develops their capacity to do so. This is the same error embedded in the concept of AI literacy: the assumption that we can teach a generalised competence without teaching the specific knowledge that makes competence possible.
The appeal of both critical thinking and AI literacy as concepts lies precisely in their vagueness. They sound important and progressive. They appear to offer students something more valuable than mere facts, something that will serve them across contexts and throughout their lives. But this is illusory. You cannot separate the skill from the knowledge any more than you can separate the dance from the dancer. To think well about AI requires knowing about statistics, computation, ethics, and the specific domain in which AI is being applied. There is no shortcut, no generalised literacy or critical thinking skill that obviates the need for this knowledge.
Reclaiming Literacy
With the blizzard of literacies that has hit education in the last 20 years, we risk losing sight of what actual literacy is. Reading and writing are hard-won cognitive achievements, built upon years of explicit instruction and deliberate practice. They represent a profound transformation of the human mind, one that does not occur naturally but requires systematic teaching grounded in an understanding of how written language maps onto spoken language, how phonemes correspond to graphemes, how meaning is constructed through syntax and semantics. Literacy, in its proper sense, refers to this specific, foundational competence. It evokes the moral and cognitive weight of a skill essential for participation in civil life, a capacity with deep cultural, cognitive, and historical foundations.
When we speak of “AI literacy,” we are not referring to anything remotely analogous. We are instead gesturing towards a vague constellation of attitudes, dispositions, and skills that might include understanding how large language models work, recognising their limitations, knowing when to trust their outputs, or perhaps maintaining a healthy scepticism about technological determinism.
But unlike reading and writing, AI literacy describes a moving target: a cluster of technical proficiencies and ethical sensibilities that evolve faster than they can be codified. Its inner workings are often non-transparent even to experts; its “texts” are generated stochastically, not authored. AI is not a language or a text, but a set of opaque systems that mediate interpretation. You don’t “read” an algorithm; you live with its consequences. To speak of AI literacy is to imply a level of legibility and control that doesn’t yet exist, perhaps cannot exist in any straightforward sense. To call it a “literacy” is to grant it a false stability, to borrow the authority of a foundational human capacity whilst referring to something far more nebulous and shifting.
“Literacy” in this sense is essentially just a bureaucratic placeholder for “awareness”, a way of signalling that something matters without specifying what competence would actually look like. The irony is that many of the systems pushing money and resources towards the vague, nebulous concept of AI literacy have neglected to teach actual literacy, a skill that is both transformative and foundational, not just for school but for democratic participation.
Why This Matters
The metaphorical extension of literacy might be harmless if it were merely imprecise, but it is worse than that. It actively obscures what needs to be taught and how it might be taught. When everything becomes a literacy, the term loses its analytical force. The more literacies we invent, the less literate we become about what literacy means. The concept becomes a catch-all for “stuff we think students should know about X.” This is not pedagogy; it is wishful thinking dressed up in the borrowed authority of a term that once meant something specific.
The proliferation of literacies reflects a deeper problem within education: the field’s preference for coining new terms over establishing what actually works. It is far easier to declare a new literacy than to conduct the painstaking research required to understand how students actually develop competence in a domain. It is more appealing to teachers and administrators to embrace a concept that sounds progressive than to implement practices that might be effective but require significant changes to established routines.
This is not to say that students don’t need to understand artificial intelligence or to think critically about its applications. Of course they do. But calling this understanding “AI literacy” does nothing to clarify what should be taught, in what sequence, or how mastery might be assessed. Worse, it encourages the kind of superficial intervention that education has perfected: a workshop here, a lesson plan there, perhaps a unit on “understanding AI” that asks students to discuss ethical implications without first ensuring they understand how these systems actually function. AI doesn’t need readers; it needs critics. But those critics need specific knowledge, not vague awareness.
The Promise of AI
None of this is to say that artificial intelligence has no place in education. Quite the contrary: as I wrote last week, there are genuinely promising developments, but they bear little resemblance to the nebulous concept of “AI literacy.” The potential lies not in teaching students to be vaguely “aware” of AI, but in using AI to make instruction more precise, more responsive, and more aligned with how learning actually happens.
The most exciting application, for me, is the potential to use AI to decompose complex domains of knowledge into their constituent elements: the prerequisite concepts, the declarative knowledge, the procedural skills, and the conditional knowledge about when to apply what. This granular understanding allows for instruction designed in accordance with the science of learning: explicit teaching of foundational concepts, carefully sequenced practice, immediate feedback on specific errors, and adaptive progression that responds to genuine mastery rather than time spent. These are not vague aspirations but concrete, testable approaches grounded in decades of cognitive research.
In assessment, Daisy Christodoulou’s work on comparative judgment offers a particularly instructive example. Rather than asking AI to replace human expertise or to teach some generalised “critical thinking,” her approach uses AI to enhance assessment by enabling teachers to make rapid, reliable judgments about student work through structured comparison. This is specific, it is testable, and it builds on rather than bypasses human judgment.
What we need is not another literacy, but a form of discernment: the capacity to question, to contextualise, and to make principled choices about how AI systems shape learning and society. In that sense, it’s closer to AI “awareness” or judgment than AI literacy. What students need is not another meaningless credential but rather specific knowledge about how these systems work, what they can and cannot do, and how they might be used responsibly.
They need to understand the mathematics of probability and prediction. They need to understand the grammar and rhetoric of written and oral expression. They need to grasp how training data shapes outputs. They need to recognise the difference between statistical pattern matching and genuine understanding. And they need practice applying this knowledge to real situations where these distinctions matter. None of this requires inventing a new literacy; it requires teaching well-established subjects with clarity and rigour. Perhaps we should stop adding literacies until we can manage the first one.
1. Empowering Learners for the Age of AI, Review Draft (May 2025), p. 6.