The recent critique of the Australian Education Research Organisation mischaracterises both the nature of evidence-based practice and AERO’s role in supporting teachers with it.
Thanks for this excellent article, Carl! You have put into words exactly what I was thinking when I read some of the general and baseless criticisms of AERO's work in the links to critiques cited in your post. Those contrived criticisms were very general and ideologically tainted and did not address anything of substance. You summed it up well in "There is nothing ‘empowering’ about promoting pedagogical relativism in the name of nuance."
The critics (predictably, perhaps, as the critiques were written by education/music lecturers) did not even mention writing, which is why AERO was in the news recently in the first place. AERO's new Writing Framework (undoubtedly written by experts in the field), which has just come out and which not many people have even had a chance to read, is what prompted the recent unfounded criticisms. Yet there is not a jot in the criticism about their writing research, which at first glance looks excellent, by the way!
The comparison to whole language instruction is crystal clear and very frustrating. What did we learn from that failure? That outcome matters more than vibes? No… we still don’t seem to have learned that
What's your evidence that whole language was a failure?
Good question. Fortunately (or unfortunately, depending on your loyalties), the evidence against whole language is both broad and deep.
1. Systematic Reviews & Meta-Analyses
The National Reading Panel (2000) in the US reviewed over 100,000 studies and concluded that systematic phonics instruction was significantly more effective than whole language for teaching children to read, especially those at risk of reading difficulties. Whole language, which downplays phonics in favour of immersion and meaning-making, simply didn’t hold up under scrutiny.
Similar findings were echoed in:
- The Rose Report (2006, UK), which found that early, systematic phonics was “essential” and that the searchlight model (akin to whole language) was based on weak theoretical foundations.
- The Australian National Inquiry into the Teaching of Literacy (2005), which concluded that whole language approaches lacked robust evidence of effectiveness and disadvantaged students who didn’t already come to school with strong oral language skills.
2. Population-Level Consequences
In regions where whole language was policy - California in the late 80s and early 90s being the poster child - reading scores plummeted. The 1994 NAEP (National Assessment of Educational Progress) results showed California at or near the bottom of US states in reading performance. Once the state shifted back toward structured phonics, scores began to recover. This isn’t just a policy hiccup. It’s a generation of children underserved by a pedagogy that assumed learning to read was “natural,” when for many children - especially those without rich home literacy environments - it manifestly isn’t.
3. Cognitive Science
Cognitive science backs the phonics-first model. Learning to read in alphabetic languages requires understanding the relationship between graphemes and phonemes. The work of researchers like Stanislas Dehaene, Mark Seidenberg, and David Share demonstrates that decoding is foundational. Whole language’s emphasis on guessing from context or picture cues (which are surprisingly unreliable) ignores how the brain actually processes written language. Dehaene puts it plainly: reading is not natural. It must be taught, and taught explicitly.
4. Intervention Evidence
Children with reading difficulties - especially those with dyslexia - make the greatest gains with explicit, systematic instruction in phonemic awareness and decoding. Whole language approaches leave these children behind. It’s not that nothing works under whole language; it’s that it works least well for those who most depend on good teaching.
5. Professional Consensus Shifts
Even former proponents have revised their views. Ken Goodman, the originator of the whole language philosophy, maintained its value, but the professional consensus has moved. The International Literacy Association (formerly IRA) now recognises the centrality of phonics. Lucy Calkins, once the doyenne of balanced literacy, released revised materials acknowledging the need for systematic phonics — a remarkable pivot.
Whole language wasn’t a failure because it had no merit. It was a failure because it assumed reading would emerge naturally if children were surrounded by books, language, and love. That works for a fortunate few but systematically fails the most vulnerable, disadvantaged children. This is an equity issue. If you only care about the most privileged, then you can crack on with whole language.
You answered the wrong question. I didn't ask for evidence that teaching phonics can play a facilitative role in having children learn to read. I asked for evidence that whole language was a failure. Plus, even the Goodmans wrote back in 1976 that “teaching children to read is not putting them into a garden of print and leaving them unmolested.” So maybe please quit straw-manning the Goodmans.
But to address one of the things you brought up, even assuming the 1994 NAEP data "plummeted" like you characterized it, it's unscientific to say that that had anything to do with any specific change in teaching methods or particular state-wide policies. How did you isolate that from any of the other possible variables?
The historical data online only goes back to 1992. And California NAEP reading data for 8th grade has been trending down lately. To what do you attribute that?
Your "intervention evidence" paragraph is entirely anecdotal.
Also, how much phonics did the NRP recommend?
You can keep writing a whole blog post to my comments all you want to. It doesn't change the fact that asserting "whole language was a failure" is profoundly unscientific.
Thanks Andrew. @jeffrey_bowers has been asking the same question for a while; I've not seen an answer yet: https://jeffbowers.blogs.bristol.ac.uk/blog/key-lesson/
Wow, he seems fascinating. Thanks for bringing him to my attention. I definitely have more reading to do now.
I'm going to leave this link here in case you want to further respond:
Fact Checking the Science of Reading, by Robert J. Tierney and P. David Pearson (2024), from the CABE website:
https://gocabe.org/wp-content/uploads/2024/05/Fact-checking-the-SoR-3.pdf
Thank you for posting this link. After reading Tierney and Pearson's piece, I hope you will read Claude Goldenberg's response (https://claudegoldenberg.substack.com/p/commentary-on-tierney-and-pearson?r=1awjg4&utm_campaign=post&utm_medium=web&triedRedirect=true). It is precisely this point/counter-point format that helps practitioners like me make informed decisions.
That's interesting. The response, as opposed to being a rebuttal, seems to be largely a request for a softening of what Tierney and Pearson proposed were the SOR claims that they were responding to; as such, I have no problem with it.
Although, I do think it's interesting that Goldenberg doesn't seem to understand that three-cueing wasn't supposed to be something we teach children to do; it was supposed to be a guide for teachers using the retrospective miscue analysis reading assessment to understand what might be the reader's thinking process and help teachers apply the right interventions for that student (including phonics interventions if that's appropriate).
As you might have noticed in my responses to you, I don't really have a problem with teachers reading research and applying what they learn to their own practice. By all means, if something is working in your classroom, keep doing it; or if something is not working, find something that might. I have a problem with people who would micromanage the decisions and practices of other teachers, and I have a problem with the hypocrisy of "science of" advocates that don't apply the same scientific standards to their claims of "X was a failure," whether "X" means teachers, schools of teacher education, whole language, or some other educational practice or philosophy.
Thanks for pointing it out.
A couple of things:
1. You mention "AERO’ research agenda is shaped by demand, impact, and feasibility, not ideology" and link to their website to justify this claim.
Just because AERO "says so" on their website, doesn't make it so.
2. You state AERO's "consultative approach was evident in AERO's 2020 national focus groups".
Does a sample of 134 teachers and leaders across Australia really represent what teachers want?
3. "But the fact that AERO’s recommendations are being embedded in initial teacher education is not evidence of blind acceptance, it’s an acknowledgement of their strength."
AERO's recommendations are not being embedded, they're being mandated.
4. "CLT is a theory about how novices learn new information, not about the whole ecology of schooling."
Indeed, let's keep it that way 👏
5. "Like the work of the EEF in the UK, AERO synthesises high-quality research to inform teachers’ professional judgment not constrain them."
This may seem true from an outsider's perspective, but for those on the ground in Australian schools it is not. On the ground, AERO's work is being used to justify broad, sweeping reforms that mandate specific practices and limit pedagogical approaches.
6. You deride the authors of the AARE for misrepresenting CLT, yet in the same post dismiss, misunderstand and caricature alternative pedagogical approaches.
This seems a bit unfair and hypocritical.
I'll leave it there for the moment, but your belief about what education, teaching and learning look like is ideological and inescapably so. It is all based upon foundational beliefs about what teaching and schooling is for, based upon assumptions about what its purpose ought to be.
Nice work, Tom.
"A teacher who understands cognitive load theory can better judge when to provide worked examples versus independent practice, when to introduce complexity, and how to scaffold learning effectively, decisions that require considerable skill and judgment."
As someone who works with below-level readers and also coaches colleagues, I grapple with making the distinctions you describe. I wrote about this in a review (The Vanishing Act in The Balancing Act: Reading Instruction That Ignores Orthographic Mapping and Cognitive Load Theory Is a Setback for Students https://highfiveliteracy.com/2024/08/28/the-vanishing-act-in-the-balancing-act/) of the recent book The Balancing Act: An Evidence-Based Approach to Teaching Phonics, Reading and Writing by Dominic Wyse and Charlotte Hacking.
Thank you for all the important reminders, including the distinction between informing and constraining.
You seem to suggest that Cognitive Load Theory (CLT) is merely "a theory about how novices learn new information." However, CLT has evolved significantly beyond that narrow framing. For instance, the *Expertise Reversal Effect*—which you're familiar with—demonstrates that instructional strategies effective for novices may hinder experts. Moreover, recent research is exploring how *motivation* interacts with CLT. Kalyuga (2025), for example, introduced the concept of *"explicit intention to learn"* as a central principle, which has implications for broader educational contexts, including cultural dimensions.
Here in Victoria, our Department of Education has shifted its teaching model from Hattie’s framework to *Explicit Instruction*. In doing so, many resources that encouraged problem-solving and inquiry—such as the *Middle Years Maths Challenges*—have been removed from their website. This suggests a lack of awareness of the Expertise Reversal Effect. It exemplifies the concern that *teachers are not invited to interpret evidence, only to receive it*, and that *problem-solving is being discouraged for both teachers and students*.
You also haven’t addressed the issue of *contradictory research*. In a previous blog, you claimed there was "decades of converging evidence" supporting CLT. Yet Sweller (2023) presents a different narrative, highlighting replication issues and the need for CLT to expand—Expertise Reversal being a prime example. So rather than convergence, what we’re seeing is *divergence and expansion*.
This brings us to the broader question of *evidence reliability*. Why do major evidence organizations—such as EEF, WWC, and AERO—often produce *conflicting reports*? If the evidence is truly clear and unambiguous, why is there so much disagreement?
This inconsistency undermines the credibility of “evidence-based” practices. Wadhwa et al. (2023), in their review of 12 evidence clearinghouses, concluded:
> “Clearinghouses exist to identify ‘evidence-based’ programs, but the inconsistency in their recommendations of the same program suggests that identifying ‘evidence-based’ interventions is still more of a policy aspiration than a reliable research practice.”
For example, the EEF acknowledges the value of CLT but also notes:
> “The evidence for the application of cognitive science principles in everyday classroom conditions (applied cognitive science) is limited, with uncertainties and gaps about the applicability of specific principles across subjects and age ranges... Applying the principles of cognitive science is harder than knowing the principles... Principles do not determine specific teaching and learning strategies or approaches to implementation.”
In contrast, AERO claims:
> “There is a strong body of evidence from cognitive science, neuroscience, and education psychology about how students learn, which helps to explain why some teaching practices have a greater positive impact on learning outcomes than others.”
Given that AERO receives around $20 million annually in public funding, it's reasonable to question whether their judgments are trustworthy.
"The real oppression lies in leaving teachers to discover through trial and error what cognitive science could have taught them from day one." This rings so true for me. As a very experienced teacher I have figured out effective ways to teach, through trial and error as well as through ongoing professional learning. In recent years, as I have learned about how children learn to read, explicit instruction, cognitive load theory, I have realised that, without actually knowing the theory, I have applied it because I have found it to be effective. I could have been an immensely more effective teacher from much earlier in my career if I had this knowledge to start with.
You didn't address their concerns here, mainly that "it reduces teaching to a set of formulaic strategies" and that it applies what others have called a Procrustean framework to students.
Sure, whatever you read in a research paper worked for somebody, but will it work for my students? That's the bazillion-dollar question, and it's always a mystery until it gets tried. Everyone in educational leadership is looking for a replicable algorithm, but the truth is that what works for one kid is not what another kid needs, and what works in your classroom with your students won't necessarily work in mine. Everyone is special and unique, just like everyone else. Saying "good evidence liberates teachers" misunderstands students, the work of teaching, and the cultures that teachers and students operate in so badly that it descends to the level of being Orwellian.
What about their suggestion that "evidence-based" in education should work more like "evidence-based" in medicine? Whatever your position on this, that merits more thought and discussion.
With regard to your handwaving at the charge of neoliberalism: Unfortunately, the micromanagement of teachers is real. The fact that "evidence based" and "science of" is used to sell curriculums and teacher training is real. "We spent a lot of money on this, so teachers should follow it with fidelity" is real. Market-based education reform is real, and it harms students.
And you demand "evidence-based," but provide no evidence that whole language was a failure or that phonics-heavy reading instruction is better than meaning-based reading instruction with real students in real classrooms. And remember: according to your own rules, anecdotes don't count. Speaking of anecdotes, I've been paying attention, and even teachers who drank the phonics kool-aid are realizing that too much phonics kills kids' motivation to read; and if you need real research on that, Margaret Merga has some.
So maybe just listen a little to what they're saying.
Plus, whole language teachers didn't dismiss phonics instruction; they gave phonics instruction to the kids who needed phonics instruction.
I also have to say this, re: "In California for example, after implementing whole language approaches in the 1990s, reading scores plummeted so severely that the state ranked near the bottom nationally in reading proficiency. But the impact wasn't evenly distributed. Middle-class families unconsciously compensated for these pedagogical failures through bedtime stories, vocabulary-rich conversations, and tutoring when problems emerged. Their children succeeded despite poor reading instruction in school, not because of it. Disadvantaged students, whose families lacked the resources to provide intensive reading support at home, bore the full brunt of ineffective classroom instruction, creating achievement gaps that persisted throughout their educational careers."
This is so profoundly unscientific and hypocritical in a blog entitled "exploring the science of learning" that I have to point it out.
Even if California reading scores took a dip in the '90s (and I don't think there's any evidence of that), how did you isolate the variables to "whole language approaches" having a causal effect on them? And the idea that "middle class families unconsciously compensated for pedagogical failures through bedtime stories..."? You have to be kidding. I mean, there is some research that says you can predict test scores if you know the socioeconomic status of the population being tested, so maybe there's some wag-the-dog thing going on here.
So powerful and spot on. The sociology of professions tells us that true professions are guided by a robust, empirically supported knowledge base. In addition, professionals collectively control/govern their work including their standards for entry and promotion, rather than relying on external controls and policies. Professional autonomy makes sense when it's grounded and bounded by these two aspects: a knowledge base and social accountability (peers).
I’m restacking your article, Carl, and wondering: would you like to speak on a very popular podcast for teachers? I would love to put you in touch with my friend Vicki Davis if you're interested.