The scientific journal Nature has retracted a meta-analysis that claimed ChatGPT had a positive impact on student learning performance, learning perception, and higher-order thinking, according to a report by 404 Media.
The paper, titled “The effect of ChatGPT on students’ learning performance, learning perception, and higher-order thinking: insights from a meta-analysis,” was originally published in May 2025. It was authored by Jin Wang and Wenxiang Fan of Hangzhou Normal University in China.
As a meta-analysis, the study synthesized data from 51 research studies published between November 2022 and February 2025 regarding the effectiveness of ChatGPT in educational settings. The original findings claimed the AI tool had a large positive impact on students' learning performance and a moderately positive impact on their learning perception and higher-order thinking.
In a retraction note, Nature stated the decision was driven by "concerns regarding discrepancies in the meta-analysis." The journal noted that these issues undermined the validity of the analysis and its conclusions.
Ben Williamson, a senior lecturer in digital education at the University of Edinburgh, told 404 Media that the paper gained rapid traction on social media platforms like LinkedIn and X. He noted the paper was accessed online nearly 400,000 times and achieved a high Altmetric score after being shared by influential figures promoting AI in classrooms.
Williamson criticized the speed at which the study was produced and the quality of its sources. "What appeared actually to be the case is that the meta analysis aggregated a whole bunch of very low quality research published in disreputable journals," Williamson said.
He added that the study essentially "recycled junk science into headline-grabbing claims about the benefits of ChatGPT for learners." Educators, parents, and policy officials need high-quality evidence, he noted, but were instead presented with "substandard research."
Other researchers had previously flagged methodological flaws in similar studies. A 2025 study in the European Journal of Education Policy and Practice by Ilkka Tuomi argued that existing evidence in AI education should not guide policy due to conceptual problems.
Tuomi highlighted that meta-analyses often include papers that vary wildly in quality, making quantitative results meaningless. He specifically pointed out that the Wang and Fan study appeared to copy search patterns—including spelling mistakes—from previous flawed studies and included papers from potentially predatory journals.