The Learning Brief: Research Worth Knowing - July 2025
A monthly roundup and analysis of new research on the science of learning for educators
Some subscribers felt the weekly newsletter was too frequent and too brief. I agree, so I’m going to post a monthly newsletter at the start of each month covering newly published research I think is worth knowing, along with some thoughts.
This large-scale study of over 900 lower secondary students in Germany found that 20% of students did not engage with feedback at all, and nearly half of those who did engage failed to improve their work. By analysing students’ written revisions in response to computer-generated feedback, the researchers could distinguish between “nonengagement” (making no changes) and “unsuccessful engagement” (making changes that didn’t improve the work). Male students and those with lower general cognitive ability were more likely not to engage at all, while successful engagement was associated with higher English grades and greater intrinsic motivation for writing.
For teachers, this has significant implications: giving feedback is not enough. Many students need support to understand and act on it. Educators should be aware that feedback may fall flat not because it’s poorly written but because students lack the skills, confidence, or motivation to use it. Teachers might consider building in structured time and scaffolds to support revision, modelling how to use feedback, and explicitly teaching students how to respond productively. Feedback isn’t a magic bullet, and in many cases it doesn’t improve student work: nearly half of the students in this study who revised their writing still didn’t get better.
If these figures were replicated with teacher-written feedback, it would suggest that a significant amount of marking has no impact on learning, which represents a poor return on teacher time. All the more reason to look at innovative approaches like No More Marking.
This study found that students who intentionally introduce mistakes into practice exercises and then correct them show stronger retention over time than those who only re‑study the material. Interestingly, immediate test performance was similar across groups, but the advantage of deliberate-error practice became apparent later, highlighting its value for durable learning.
From an educational standpoint, this suggests that teachers could improve long-term learning by designing activities that encourage learners to actively generate potential errors, such as “predict‑and‑check” prompts, intentionally flawed worked examples, or peer error-identification tasks, instead of relying solely on repetition or passive revision. This approach (arguably) fosters metacognition, as students analyse why errors were made and how the correct process differs, making the correction a more effective learning cue. For educators, shifting a portion of practice time toward error-based exercises could yield deeper, more persistent understanding.
This paper brings together a collection of research that builds on the EMR model, which helps explain how students perceive and manage their mental effort while learning. A key insight is that many students wrongly assume that tasks that feel effortful are unproductive, when in fact, such tasks often lead to better long-term learning. Several studies in the collection explore how motivation, feedback, and task design affect students’ willingness to engage with challenging work. Notably, one study found that when students were taught about the value of using more difficult strategies like interleaving, and when this was tied to clear goals, they were more likely to adopt them.
For teachers, two things are important. First, students often misinterpret what learning feels like: effort can be a sign that learning is happening. Second, classroom strategies that scaffold effort regulation, such as well-timed feedback, autonomy-supportive teaching, and clear strategy instruction, can help students persist with challenging tasks. Teachers should explicitly discuss the role of effort in learning and design tasks that are appropriately demanding without overwhelming students, thereby encouraging a “goal-driven” rather than “data-driven” mindset about difficulty.
This international study examined how different aspects of reading engagement relate to students' digital reading performance using data from 164,233 secondary school students across 24 countries in the 2018 PISA assessment. The researchers investigated three dimensions of reading engagement: emotional (how students feel about reading), behavioural (what students do when reading), and cognitive (how students think about reading).
The study found that students' confidence in their reading ability and their knowledge of reading strategies consistently predicted better digital reading performance across countries. However, students who perceived reading as difficult performed worse. Interestingly, the relationship between reading behaviours (like how much students read) and performance varied significantly between countries, suggesting cultural and educational context matters greatly.
This study tested whether a simple retrieval practice task, just a 10-item multiple-choice quiz, would improve final exam results in a college psychology course. Importantly, the student sample was ethnically diverse, addressing the common limitation that many prior studies on retrieval practice are based on more homogeneous, typically white or WEIRD (Western, Educated, Industrialised, Rich, and Democratic) populations. Compared to a group that simply reviewed key content, students who did retrieval practice scored higher on the final, especially on factual questions that had been rephrased. This suggests that even a brief, low-cost intervention can lead to meaningful learning gains and transfer across question types.
For educators, the key takeaway is: retrieval practice doesn’t have to be time-consuming or complex to be powerful. Embedding small quizzes, asking students to recall material from previous lessons, or setting short retrieval-based tasks before assessments can meaningfully enhance learning. Importantly, the study’s design shows that retrieval can benefit not just rote memory but also application and transfer, making it relevant for a wide range of subjects and age groups. Because it requires little preparation, this is an easy win for teachers looking to improve revision and long-term retention without overhauling their curriculum.
This study explored how the type of question posed after reading, specifically, prompts that require explanation rather than simple recall, affects the accuracy and richness of student responses. Researchers found that when students were asked to explain science content in their own words, either through self-explanation or explanatory retrieval prompts, their recall was significantly more accurate and conceptually rich than when they were asked simple factual questions. This suggests that prompting students to reconstruct meaning, rather than merely retrieve isolated facts, engages deeper cognitive processing and leads to more durable learning.
For educators, there are a few implications: how you ask students to recall matters just as much as what you ask them to recall. Incorporating prompts that demand explanation, such as “Why does this happen?” or “How would you teach this to someone else?”, can improve both understanding and memory. These can be embedded into existing practices: exit tickets, retrieval quizzes, or mini-whiteboard activities can be reworded to encourage explanation rather than listing. Especially in science, where causal reasoning and conceptual understanding are key, using explanatory retrieval can help students internalise and transfer their knowledge more effectively.
This study reveals that off-task device use, particularly email, texting, and social media, was significantly linked to lower scores on the first exam. However, this relationship weakened later in the semester, suggesting students may have modified their behaviour based on performance feedback. The research demonstrates that students initially lack the metacognitive awareness to self-regulate their technology use effectively.
Texting emerged as the most common and impactful distraction, but many students didn’t think it would affect their learning until they received negative feedback. This highlights a critical gap between students' perceived ability to multitask and the cognitive reality that divided attention undermines academic performance. The temporal pattern, where the negative effects diminished over time, suggests that while students can learn to self-regulate, they often need concrete evidence of academic consequences before making behavioural changes.
This study, led by the brilliant Barbara Oakley, argues that an over-reliance on digital tools and AI is undermining human memory and cognitive development; robust internal knowledge remains essential for deep learning, reasoning, and creativity.
Oakley and colleagues present a powerful case for re-evaluating modern educational practices that emphasise "learning to learn" over content knowledge. Drawing from neuroscience, they explain how human memory relies on the integration of declarative (facts) and procedural (skills) systems, and that repeated retrieval and practice are critical to building long-term fluency and flexible thinking. By outsourcing too much cognitive work to digital devices and AI, learners risk developing only superficial knowledge structures (what the authors call "biological pointers") instead of deep, organised schemata. This makes it harder to detect errors, solve problems, or transfer learning to new contexts.
For educators, the message is clear: memorisation isn’t outdated; it’s foundational. Explicit instruction, retrieval practice, and spaced repetition aren't just classroom strategies; they are biologically aligned with how the brain learns.
A review of 16 research papers finds that most teacher motivation research relies on flawed cross-sectional designs that can't establish cause and effect. The authors examined research connecting four major motivation theories (social cognitive theory, situated expectancy-value theory, self-determination theory, and achievement goal theory) to actual teaching behaviours, finding surprisingly limited evidence for the mediating psychological processes that should theoretically bridge motivation and action.
Key findings: teachers who adopt mastery goals (focusing on learning and improving their craft) are more likely to engage in adaptive teaching and reflective collaboration. In contrast, teachers motivated by performance goals (trying to appear competent or to avoid looking incompetent) tend to shy away from such behaviours, likely due to fear of failure or judgment.
Crucially, these positive effects of mastery motivation were strongest in schools where teachers felt psychologically supported and autonomous, where they had room to make decisions and felt encouraged rather than controlled. For schools, this suggests that cultivating a professional culture that promotes growth, trust, and autonomy could be a powerful lever for improving teaching practice.
Self‑concepts aren’t built only on grades; they’re also shaped by how students compare the effort they put in across subjects.
Göllner explores how students subconsciously assess how much effort they've put into different subjects (say, maths versus English) and use these internal comparisons to shape their perceptions of ability. This extends dimensional comparison theory (DCT) by emphasising effort, not just achievement, as a basis for forming self-concepts. The research indicates that where students perceive a contrast, feeling they work harder in one subject than another, they may develop a stronger self-concept in that subject and a weaker one in the other, affecting motivation, interest, and emotional responses.
This study looked at whether young children stop noticing busy classroom displays after seeing them repeatedly. In a controlled lab setting (Study 1), children did pay slightly less attention to wall displays after two weeks—but they were still distracted far more than in a plain classroom. In real classrooms over 15 weeks (Study 2), there was no sign that children stopped noticing the visual clutter. Because the displays often changed with seasons or lessons, they continued to grab attention.
For teachers, this suggests that even well-meaning and educational displays can pull focus away from learning, especially for young pupils who are still developing the ability to concentrate. Just because something has been on the wall for a while doesn't mean students have stopped noticing it. Rather than removing everything, consider using wall space more intentionally: keep displays relevant to current lessons, reduce unnecessary clutter, and introduce visual elements at the point they support learning.
Consider "strategic minimalism": displaying only materials directly supporting ongoing learning, rotating displays purposefully rather than accumulating decorations, and creating visual quiet zones for focused work. This research supports Montessori principles of uncluttered environments but doesn't advocate sterility; rather, it points to intentional design that respects children's cognitive limitations while fostering belonging and engagement.
This paper presents one of the most detailed early efforts to measure how generative AI, especially tools like ChatGPT and Gemini, is affecting the labour market. Based on a nationally representative U.S. survey and supported by other empirical data, it finds a sharp rise in AI use at work, growing from 30.1% in December 2024 to 43.2% in early 2025. This adoption is highest among young, well-educated, high-income individuals working in sectors such as IT, customer service, and marketing. Workers using AI report significant productivity gains: tasks that took 90 minutes can now be completed in 30, often through partial use of AI tools rather than full automation. However, this productivity boost is uneven. Those with higher incomes and advanced education benefit most, while others may face job substitution risks, particularly in writing-heavy roles.
For educators, the findings offer both opportunity and warning. Students entering the workforce will need to be AI-literate: not just in how to use tools, but in understanding when and why to use them. Teaching should shift to emphasise collaboration with AI, critical thinking about AI outputs, and skills that complement rather than compete with machine capabilities. Importantly, the study suggests that productivity gains are highest where human expertise works in tandem with AI, implying that education systems must prepare learners to be not just knowledge consumers, but adaptable, strategic problem-solvers. Moreover, the fact that over 50% of recent jobseekers used AI tools for job applications suggests a need to embed such skills in career guidance and writing instruction.
I can't begin to tell you what a valuable service you are providing to teachers. Thank you! You write:
"Teachers might consider building in structured time and scaffolds to support revision, model how to use feedback, and explicitly teach students how to respond productively."
This was a key component of my master's thesis in writing instruction (Teaching the 'F' Word: Getting Form Without a Formula Using Procedural Facilitation) where the research I discovered confirmed what I saw in the classroom: revision is damn difficult!
It's a shame that the "nonengagement with feedback" study didn't have a control group where the feedback was delivered one-on-one in person. I suspect that their findings were predictable, given "automated computer-based feedback." It just reinforces to me the fact that education is not about delivering information; it's about human connection.
Also, there's this to consider with regard to "those with lower general cognitive ability," a phrase they use not once but seven times in their write-up:
"Discussions of intelligence, pertaining to people or machines, are race science all the way down." -- Bender and Hanna, The AI Con: How to Fight Big Tech's Hype and Create the Future We Want (2025)