No, There Is No Proven Link Between Paracetamol (Acetaminophen) and Autism
- Daniel Patterson - Forensic Toxicologist

- Oct 1, 2025
- 2 min read
In August 2025, Environmental Health published a systematic review examining whether exposure to paracetamol/acetaminophen during pregnancy might be associated with Autism Spectrum Disorder (ASD). Within hours, headlines and political figures—including Donald Trump—were distorting the study’s findings into claims of causation. This is not only misleading—it’s outright wrong.
What the Study Actually Found
The authors systematically reviewed 46 studies and found some associations (correlation) between prenatal paracetamol exposure and ASD outcomes. But they explicitly acknowledged:
“Further research is needed to confirm the association and determine causality.”
That’s a scientific way of saying: we found a correlation, not proof. The authors themselves cautioned against jumping to conclusions.
The Problem With Correlation
This isn’t new. Epidemiology often uncovers correlations that can’t prove cause-and-effect. For example, every single child diagnosed with autism has a mother who drank water while pregnant. That’s a 100% correlation—but obviously absurd to interpret as proof that water causes autism.
The same principle applies here. The studies bundled into this review suffer from issues like:
- Retrospective recall bias (mothers asked to remember painkiller use years later).
- Uncontrolled confounders (genetics, environmental exposures, co-medications).
- Inconsistent methods (different diagnostic criteria, exposure definitions, and data quality across the 46 studies).
In short: shaky inputs yield shaky outputs. Even with those limitations, the authors still did not conclude there is a causal link.
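The confounding problem above is easy to demonstrate numerically. The following minimal sketch (hypothetical numbers, Python standard library only) simulates an unmeasured shared factor that drives both an "exposure" and an "outcome" which have no causal effect on each other at all, and shows that a clear correlation still appears in the data:

```python
import random

random.seed(0)
n = 10_000

# Hypothetical unmeasured confounder (e.g., a shared genetic or
# environmental factor) influencing both variables below.
confounder = [random.gauss(0, 1) for _ in range(n)]

# Exposure and outcome each depend on the confounder,
# but NOT on each other: there is zero direct causal link.
exposure = [c + random.gauss(0, 1) for c in confounder]
outcome = [c + random.gauss(0, 1) for c in confounder]

def pearson(x, y):
    """Pearson correlation coefficient, computed from scratch."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

r = pearson(exposure, outcome)
# r comes out well above zero despite no causal effect,
# purely because both variables share the confounder.
print(f"correlation with no causal link: r = {r:.2f}")
```

This is exactly why observational studies that cannot fully control for genetics, co-medications, and other shared factors can report an association even when no causal pathway exists.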
Why the “Harvard Study” Label Is Wrong
Another misrepresentation is calling this a “Harvard study.” It isn’t. One of the researchers involved is affiliated with Harvard as an environmental health professor, but the study itself is a systematic review from a private medical institution, published in Environmental Health. Calling it a “Harvard study” is classic media shorthand designed to lend extra weight it doesn’t deserve.
The Bottom Line
- The study found a possible association, not proof.
- No causal mechanism has been identified.
- The authors themselves explicitly said further research is needed.
- Misreporting it as proof of a direct link is both scientifically inaccurate and socially harmful.
Pregnant women should always discuss any medication—including paracetamol—with their healthcare provider. But there is no scientific basis to claim paracetamol causes autism. To do so is as reckless as saying “drinking water causes autism.”

