Category: Medicine

  • Do ADHD drugs really reduce adverse life outcomes?

    I’ve been seeing this study by Zhang et al. in the news and on subreddits, with claims that it confirms ADHD drugs improve life outcomes.

    It’s an interesting study that uses methods to emulate an RCT using public registry data. However, it’s doubtful that the adjustments really eliminate bias. I’ve spent a lot of time analysing public registry data, and its biases are very stubborn. While the study looks impressive, there are a number of issues that reduce its value as evidence.

    Confounding
    Why did one group take the medication and the other not? I couldn’t see an explanation. The baseline table shows clear differences between the groups, particularly in comorbidities: for example, double the prevalence of schizophrenia in the non-treatment group. The authors claim to adjust for confounders, but it’s not clear how, or whether this really eliminates bias. They admit that residual confounding is likely to remain, and this is a general weakness of analyses based on registry data.

    Censoring
    The study uses the per-protocol method of handling those who stop treatment (by excluding them), which is prone to bias. Stimulants tend to work quickly, so people may have stopped using them due to a lack of benefit. This introduces survivorship bias, because only people who benefit from the drugs are included; but this is precisely what the study purports to show! The study claims to adjust for this, but it’s not clear how, or whether it really eliminates bias. The flow diagram should clearly show the numbers who drop out, but I can’t see them.
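    To see how this plays out, here’s a minimal simulation sketch with entirely made-up numbers: suppose the drug only helps “responders”, non-responders tend to stop taking it, and a per-protocol analysis drops them. The per-protocol estimate then looks better than the true average (intention-to-treat) effect.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Made-up assumptions: the drug helps 60% of users ("responders"),
    # halving their 10% baseline risk of an adverse outcome.
    responder = rng.random(n) < 0.6
    risk = np.where(responder, 0.05, 0.10)
    outcome = rng.random(n) < risk

    # Non-responders, seeing no benefit, drop out 70% of the time;
    # responders rarely do.
    dropped = rng.random(n) < np.where(responder, 0.05, 0.70)

    # An untreated comparison group keeps the 10% baseline risk throughout.
    control = rng.random(n) < 0.10

    print(f"control risk:       {control.mean():.3f}")            # ~0.100
    print(f"ITT risk:           {outcome.mean():.3f}")            # ~0.070 (true average effect)
    print(f"per-protocol risk:  {outcome[~dropped].mean():.3f}")  # ~0.059 (flattered by exclusions)
    ```

    The per-protocol estimate exaggerates the average benefit purely because the analysis keeps the people the drug happened to help.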

    Effect sizes
    The effects as percentages might look impressive (relative effects usually do), but the absolute numbers are small: suicidal behaviours (weighted incidence rates 14.5 per 1000 person-years in the initiation group versus 16.9 in the non-initiation group), substance misuse (58.7 v 69.1), transport accidents (24.0 v 27.5), criminality (65.1 v 76.1), and accidental injuries (88.5 v 90.1), presumably all per 1000 person-years. Considering how effective ADHD drugs are claimed to be, these numbers don’t seem particularly impressive.
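    A quick back-of-the-envelope conversion, using only the incidence rates quoted above, makes the gap between the relative and absolute framings explicit:

    ```python
    # Weighted incidence rates quoted above, per 1000 person-years
    # (initiation group vs non-initiation group).
    rates = {
        "suicidal behaviours": (14.5, 16.9),
        "substance misuse":    (58.7, 69.1),
        "transport accidents": (24.0, 27.5),
        "criminality":         (65.1, 76.1),
        "accidental injuries": (88.5, 90.1),
    }

    for outcome, (treated, untreated) in rates.items():
        relative = 100 * (untreated - treated) / untreated  # the headline-friendly number
        absolute = untreated - treated                      # events avoided per 1000 person-years
        print(f"{outcome:20s} {relative:4.1f}% relative, {absolute:4.1f} absolute")
    ```

    Criminality, for example, shows a 14% relative reduction but only 11 fewer events per 1000 person-years, and accidental injuries shrink to a 1.8% relative reduction (1.6 events).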

    Data
    The authors admit that residual confounding may remain, and this is nearly always the case with registry data. It’s also very difficult to replicate results with registry data, particularly when complex adjustment methods are used, as in this study; small coding changes can alter the results significantly. An additional limitation of a pseudo-RCT like this is that there is no placebo group, as there would usually be in an RCT.

    Conflicts of interest
    It would be nice to see ‘No conflicting interests.’ Unfortunately, this is rare in ADHD drug studies, and this one is no different. What we see is this: “HL has received grants from Shire Pharmaceuticals, personal fees from, and has served as a speaker for, Medici, Shire/Takeda Pharmaceuticals, and Evolan Pharma AB, and sponsorship for a conference on ADHD from Shire/Takeda Pharmaceuticals and Evolan Pharma AB, all outside the submitted work; SC has received reimbursement for travel and accommodation expenses from the Association for Child and Adolescent Mental Health (ACAMH) in relation to lectures delivered for ACAMH, the Canadian ADHD Alliance Resource, the British Association of Psychopharmacology, and the Healthcare Convention and CCM Group team for educational activity on ADHD; SC has also received honorariums from Medice and serves as chair of the European ADHD Guidelines Group, all outside the submitted work; ZC has received speaker fees from Takeda Pharmaceuticals, outside the submitted work; no other relationships or activities that could appear to have influenced the submitted work.”

    I don’t mean to suggest it’s a badly performed or intentionally misleading study. The authors have clearly made considerable effort to eliminate bias and are very open about the limitations. However, the press and wider public never pay any attention to this. The headline result just becomes unimpeachable gospel because it’s peer-reviewed science. But when you add publication bias (would a study showing no effect even get published?) to the limitations, the study is weak evidence of anything.

  • Does obesity really cost the UK £126 billion per year?

    A study has found that obesity costs the UK £126 billion per year.

    Having gone through a PhD on the economic costs of disease, I don’t even need to read the study to know the given figure is meaningless. Studies claiming that ‘disease x costs the economy y amount’, otherwise known as cost-of-illness (COI) studies, are widely regarded as a joke, even by the economists who produce them.

    You only need to look at systematic reviews to see why COI studies are so poorly regarded. The estimated costs for any given disease can vary wildly, from negative costs to millions of pounds per person. One review found that the total estimated healthcare costs for a number of common diseases in the US added up to double the country’s entire health spending.

    Even in theory, few economists agree on how the costs of illness should be measured. A particular point of contention is how to measure lost productivity, i.e. the output a sick (or dead) person would have produced had they been well. Many researchers just multiply the average wage by the number of people unable to work (the human capital approach), which tends to produce a high illness cost, while some assume that any ill workers will be replaced and that only a friction cost should be counted. The ‘correct’ cost (whatever that even means) is likely somewhere in between, and depends on the particular illness and the conditions of the economy. But these are hard to measure and tend to give a lower cost, which doesn’t make such a good headline.
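    A toy calculation shows how far apart the two conventions land; the salary, absence duration and friction period below are all invented for illustration.

    ```python
    # Hypothetical worker: £35,000 salary, off work for 2 years due to illness.
    salary = 35_000
    years_off = 2

    # Human capital approach: count every year of lost wages.
    human_capital = salary * years_off

    # Friction cost approach: the employer fills the post after a
    # "friction period" (say 3 months), so only that gap is counted.
    friction_months = 3
    friction = salary * friction_months / 12

    print(f"human capital estimate: £{human_capital:,.0f}")  # £70,000
    print(f"friction cost estimate: £{friction:,.0f}")       # £8,750
    ```

    Same illness, same worker, an eightfold difference in ‘cost’, purely from the accounting convention.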

    But what about the healthcare costs? Surely we can measure these? In practice it’s complicated. Healthcare systems tend not to have accounts divided up by disease, and attributing healthcare costs to a particular disease is tricky. Even calculating the overall cost of a particular patient’s treatment in the NHS relies on a lot of guesswork and averaging, as there are many fixed and variable costs to consider.

    In addition, many diseases are risk factors for other diseases. Should we include the costs of those diseases? It’s not an easy question. We might simply take the excess costs of people with the main disease, but if we did this for every disease and added up all the net costs, the combined total could be higher than the actual overall spending. We can adjust for comorbidities, but the causal chains are so complex that separating out disease costs is always problematic. Economic deprivation underlies much obesity and many other diseases, so is the cost of obesity not ultimately the cost of economic deprivation? I would go even further and say that the economic costs of obesity are part of the costs of modern capitalist society (in other types of society obesity is extremely rare). Attributing these costs to particular diseases is impossible on both theoretical and practical levels.
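    The double-counting problem is easy to demonstrate with a toy example (all figures invented): attribute each patient’s excess cost to every disease they have, and the per-disease totals exceed the excess money actually spent.

    ```python
    baseline = 1_000  # annual cost of a comparable healthy patient, in £

    # Toy patients: actual annual cost and the diseases each one has.
    patients = [
        {"cost": 8_000, "diseases": {"obesity", "diabetes"}},
        {"cost": 5_000, "diseases": {"diabetes"}},
    ]

    # Naive attribution: charge each disease the full excess cost of
    # every patient who has it.
    per_disease = {}
    for p in patients:
        excess = p["cost"] - baseline
        for d in p["diseases"]:
            per_disease[d] = per_disease.get(d, 0) + excess

    print(per_disease)                # obesity: 7000, diabetes: 11000
    print(sum(per_disease.values()))  # 18,000 "attributed"...
    print(sum(p["cost"] - baseline for p in patients))  # ...vs 11,000 actual excess spending
    ```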

    Ultimately, the costs of an illness are not measured but imputed on the basis of many questionable assumptions. In theory they are the savings that would be made if the disease did not exist. But we can’t measure this counterfactual and in most cases it’s too abstract to relate to anything concrete.

    Another problem with COI studies is that better treatment and improved survival tend to raise illness costs, which seems counterintuitive. For instance, cancer is becoming more expensive because new treatments are costly and survivors live longer, hence using more healthcare. This isn’t actually an error; the costs to healthcare systems really do go up, as we are seeing.

    Hence COI studies reach the rather strange conclusion that the richer a country is, and to some extent the better its healthcare system, the more costly disease becomes. So diseases prevalent in developed countries, like cancer and diabetes, will appear more costly than, say, malaria in the Global South. In essence, a COI estimate tends to combine how much we can pay to treat a disease with the average wages in a country, so the main determinants are how rich the country is and how disruptive the disease is (or is perceived to be). And if we choose to put more resources into a disease based on these costs, the costs will rise even further, compounding the problem.

    What about the suffering an illness causes? Why do we not see studies claiming ‘disease x causes y amount of suffering’? While we may count lives lost, we have no objective measure of suffering. We could use QALYs or DALYs, but few people understand those. Most people understand money, though, and a big sum of money sounds impressive. But couldn’t we assign a monetary value to the suffering? We could, and some economists do, but it entails many dubious assumptions and can lead to double counting, since economic analyses tend to consider costs per QALY.
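    To illustrate how easily a monetised headline figure materialises, here’s a sketch with invented inputs; the only real reference point is the roughly £20,000–£30,000 per QALY often cited as NICE’s willingness-to-pay threshold.

    ```python
    # Invented inputs: 1 million cases, each losing 0.3 QALYs per year.
    cases = 1_000_000
    qaly_loss_per_case = 0.3

    # NICE's commonly cited threshold is roughly £20,000-£30,000
    # per QALY; take the upper end.
    value_per_qaly = 30_000

    suffering_cost = cases * qaly_loss_per_case * value_per_qaly
    print(f"£{suffering_cost / 1e9:.0f} billion per year")  # £9 billion, from two assumptions
    ```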

    In many ways these studies underestimate the true costs of a disease, particularly for diseases affecting poorer countries, for the reasons described above.

    But even in Britain, what about the cost to all our lives of being overweight? And what about the knock-on risks for other diseases? The £126 billion (less than £2,000 per person: £126 billion divided by roughly 68 million people is about £1,850 each), if anything, underestimates the actual cost.

    Most of the problems with COI studies have been known since the 1960s. So why do researchers keep producing them?

    • they make headlines
    • we (or at least politicians) care more about the economy than well-being
    • we don’t question the validity of things we agree with
    • only things that are measured matter (to politicians).

    COI studies are basically lobbying disguised as scientific research. The obesity study may be intended to encourage and justify the adoption of weight-loss drugs. Alternatively, it might be meant to prod politicians towards taxing junk food. I don’t think that’s a bad goal, but it won’t solve much.

    If we eliminated obesity, would we all be a couple of thousand pounds richer? It’s unlikely. The costs would migrate somewhere else. Modern healthcare systems are engaged in a game of whack-a-mole: fix one issue and another pops up. The tacitly assumed but never publicly acknowledged fact is that improved health ultimately leads to worse health and higher costs, because improved healthcare leads to increased longevity, and it’s widely accepted that ageing populations are the main driver of increased healthcare spending. If we cured cancer, it likely wouldn’t save any money, because of all the additional spending on dementia care and other diseases of old age, on top of the increased spending on pensions. That’s not to say curing cancer wouldn’t be worthwhile, just that economics shouldn’t be the main grounds for treating diseases.

    It’s a symptom of modern Britain that the cost to the economy is considered more important than the cost to actual people.

  • Does drinking champagne really reduce the risk of cardiac arrest?

    The headline: ‘Drinking champagne could reduce risk of sudden cardiac arrest, study suggests.’

    But does the study actually suggest that?

    For some reason scientific articles don’t get linked in newspaper stories, so you have to track down the full-text article yourself.

    The study used data from the UK Biobank, and the authors attempted to reduce confounding and demonstrate causality through Mendelian randomisation.

    However, the authors acknowledge that confounding arises from much more than genetics. Despite the Mendelian randomisation, it doesn’t seem that confounding was handled well in this study.
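    For context, the core of a Mendelian randomisation analysis can be as simple as a Wald ratio: the genetic variant’s effect on the outcome divided by its effect on the exposure. This is a generic sketch with made-up summary statistics, not the paper’s actual pipeline, but it shows how much weight the instrument assumptions carry.

    ```python
    # Made-up GWAS summary statistics for a single genetic variant.
    beta_gx = 0.12   # variant's effect on the exposure (alcohol intake)
    beta_gy = -0.03  # variant's effect on the outcome (cardiac arrest)

    # Wald ratio: implied causal effect of the exposure on the outcome.
    wald_ratio = beta_gy / beta_gx
    print(f"MR (Wald ratio) estimate: {wald_ratio:.2f}")

    # This is only unbiased if the variant influences the outcome solely
    # through the exposure (no pleiotropy) and shares no confounders with
    # it -- which is precisely what the author is questioning here.
    ```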

    Now, it may well be that alcohol intake improves cardiovascular health. More likely, alcohol intake and cardiovascular health are both associated with socialising and higher socioeconomic status, which have positive effects on health in many ways and may themselves be the result of better health.

    The study found a stronger protective association for wine and champagne than for beer and cider. This suggests an underlying association with socioeconomic status and socialising: people who drink wine and champagne tend to be richer and to socialise more than beer and cider drinkers. Further evidence for this is that loneliness and isolation were associated with increased risk, as were depression, tenseness and other negative feelings.

    The study also found that using a computer reduced the risk of cardiac arrest. However, playing computer games raised the risk. So what does that say about using computers? Not much, I suspect.

    Using sunscreen was associated with lower risk, which likely reflects higher sun exposure and taking more holidays.

    Hand-grip strength had the second strongest protective association (just behind forced expiratory volume). Does this mean that exercising your hands will reduce the risk of cardiac arrest? Maybe a little, but probably less than going for a walk. More likely, hand-grip strength reflects underlying fitness.

    Unfortunately, cross-sectional studies using public data are always prone to confounding and seldom identify primary causes.

    They encourage data dredging for associations that are trivial, spurious or incidental. They throw up unhelpful headlines that cause confusion and distrust while doing little to reduce health burdens. And they take resources away from higher-quality longitudinal studies.

    In any case, the difficulty in reducing health problems lies in effecting lifestyle and societal changes, not in identifying risk factors. We already have a pretty good idea of which things are good and bad for health.

    The problem is that most people don’t follow the guidelines. And ultimately, if people want to engage in unhealthy behaviours, that’s their choice.