Do ADHD drugs really reduce adverse life outcomes?

I’ve been seeing this study by Zhang et al in the news and on subreddits, with claims that it confirms ADHD drugs improve life outcomes.

It’s an interesting study that uses methods to emulate an RCT using public registry data. However, it’s doubtful that the adjustments really eliminate bias. I’ve spent a lot of time analysing public registry data, and its biases are very stubborn. While the study looks impressive, there are a number of issues that reduce its value as evidence.

Confounding
Why did one group take the medicine and the other not? I couldn’t see an explanation. The baseline table shows clear differences between the groups, particularly in comorbidities, e.g. double the prevalence of schizophrenia in the non-treatment group. The authors claim to adjust for confounders, but it’s not clear how, or whether this really eliminates bias. They admit residual confounding is likely to remain, and this is a general weakness of analyses based on registry data.
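For context, the ‘weighted incidence rates’ quoted further down suggest something like inverse probability of treatment weighting: model who initiates medication, then reweight so that measured covariates balance between the groups. Here is a minimal sketch on synthetic data; the covariates, models, and numbers are my own illustration, not the study’s specification:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 100_000

# Synthetic stand-in for a registry cohort; names and effects are made up.
df = pd.DataFrame({"age": rng.integers(6, 65, n),
                   "schizophrenia": rng.binomial(1, 0.05, n)})

# People with the comorbidity are less likely to initiate medication...
p_init = 1 / (1 + np.exp(-(0.5 - 1.5 * df["schizophrenia"])))
df["initiated"] = rng.binomial(1, p_init)

# ...and more likely to have the adverse outcome, regardless of treatment.
df["person_years"] = rng.uniform(0.5, 2.0, n)
df["events"] = rng.poisson((0.02 + 0.08 * df["schizophrenia"]) * df["person_years"])

# Propensity score: P(initiate | measured covariates).
ps = (LogisticRegression(max_iter=1000)
      .fit(df[["age", "schizophrenia"]], df["initiated"])
      .predict_proba(df[["age", "schizophrenia"]])[:, 1])
df["iptw"] = np.where(df["initiated"] == 1, 1 / ps, 1 / (1 - ps))

for grp, g in df.groupby("initiated"):
    crude = 1000 * g["events"].sum() / g["person_years"].sum()
    weighted = 1000 * (g["iptw"] * g["events"]).sum() / (g["iptw"] * g["person_years"]).sum()
    print(f"initiated={grp}: crude {crude:.1f}, weighted {weighted:.1f} per 1000 person years")
```

The crude rates differ here even though treatment does nothing, and the weights close that gap — but only because schizophrenia is in the model. Weighting can only balance what is measured, which is why the confounding from everything the registry doesn’t record is the stubborn part.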

Censoring
The study uses the per-protocol method of handling those who drop out (by excluding them), which is prone to bias. Stimulants tend to work quickly, so people may have stopped using them due to a lack of benefit. This introduces survivorship bias: only people who benefit from the drugs are included, yet a benefit is exactly what the study purports to show! The study claims to adjust for this, but it’s not clear how this is done or whether it really eliminates bias. The flow diagram should clearly include the numbers who drop out, but I can’t see them.
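To see why this matters, consider a deliberately extreme toy simulation (entirely my own construction, nothing to do with the study’s data): the drug does nothing at all, but people who happen to be doing well stay on it, and doing well also means lower risk of the adverse outcome.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# 'Doing well' is an unmeasured trait: it keeps people on the drug AND lowers
# their risk of the adverse outcome. The drug itself has zero effect here.
doing_well = rng.random(n) < 0.5
stays_on_drug = doing_well | (rng.random(n) < 0.1)  # most others discontinue
event = rng.random(n) < np.where(doing_well, 0.05, 0.10)

print("all initiators:", event.mean())                 # ~0.075
print("per protocol:  ", event[stays_on_drug].mean())  # ~0.055, looks protective
```

The per-protocol group looks about a quarter safer despite a drug that does nothing, purely because discontinuation selected out the people at higher risk.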

Effect sizes
The effects as percentages might look impressive (relative effects usually do), but the absolute numbers are small: suicidal behaviours (weighted incidence rates of 14.5 per 1000 person years in the initiation group versus 16.9 in the non-initiation group), substance misuse (58.7 v 69.1), transport accidents (24.0 v 27.5), criminality (65.1 v 76.1), and accidental injuries (88.5 v 90.1), presumably all per 1000 person years. Considering how effective ADHD drugs are claimed to be, these numbers don’t seem particularly impressive.
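To make the relative-versus-absolute contrast concrete, here is the arithmetic on the rates quoted above (my own calculation from the figures as given):

```python
# (initiation, non-initiation) weighted incidence rates per 1000 person years,
# as quoted above.
rates = {
    "suicidal behaviours": (14.5, 16.9),
    "substance misuse":    (58.7, 69.1),
    "transport accidents": (24.0, 27.5),
    "criminality":         (65.1, 76.1),
    "accidental injuries": (88.5, 90.1),
}

for outcome, (initiation, non_initiation) in rates.items():
    relative = 100 * (non_initiation - initiation) / non_initiation
    absolute = non_initiation - initiation  # fewer events per 1000 person years
    print(f"{outcome:20s} {relative:4.1f}% relative, {absolute:4.1f} absolute")
```

So the headline-friendly ‘roughly 14% reduction in suicidal behaviours’ is about 2.4 fewer events per 1000 person years, and for accidental injuries the gap is 1.6 per 1000, under 2% even in relative terms. The first framing travels; the second rarely does.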

Data
The authors admit that residual confounding may remain, and this is nearly always the case with registry data. It’s also very difficult to reproduce results based on registry data, particularly when complex adjustment methods are used, as in this study; small coding changes can alter the results significantly. An additional limitation of a pseudo-RCT like this is that there is no placebo group, as there would usually be in an RCT.

Conflicts of interest
It would be nice to see ‘No conflicting interests.’ Unfortunately, this is rare in ADHD drug studies, and this one is no different. What we see is this: “HL has received grants from Shire Pharmaceuticals, personal fees from, and has served as a speaker for, Medice, Shire/Takeda Pharmaceuticals, and Evolan Pharma AB, and sponsorship for a conference on ADHD from Shire/Takeda Pharmaceuticals and Evolan Pharma AB, all outside the submitted work; SC has received reimbursement for travel and accommodation expenses from the Association for Child and Adolescent Mental Health (ACAMH) in relation to lectures delivered for ACAMH, the Canadian ADHD Alliance Resource, the British Association of Psychopharmacology, and the Healthcare Convention and CCM Group team for educational activity on ADHD; SC has also received honorariums from Medice and serves as chair of the European ADHD Guidelines Group, all outside the submitted work; ZC has received speaker fees from Takeda Pharmaceuticals, outside the submitted work; no other relationships or activities that could appear to have influenced the submitted work.”

I don’t mean to suggest it’s a badly performed or intentionally misleading study. The authors have clearly made considerable efforts to eliminate bias and are very open about the limitations. However, the press and wider public pay no attention to any of this; the headline result just becomes unimpeachable gospel because it’s peer-reviewed science. And when you add publication bias (would a study showing no effect even get published?) to the limitations above, it’s weak evidence of anything.